Capclave 2019 — Talks & Panels

I’m appearing at Capclave this year (October 18th thru 20th), doing my talk on Time Dispersion in Quantum Mechanics (3pm on Saturday the 19th) and five panels, all great topics: Technospeed, Coming Civil War, Failure of SF Prediction, Secrets of the Dinosaurs, & Exoplanets. Prep for these will be a lot of fun. And the other panelists include a number of old friends and I’m sure some new ones.

Capclave — always one of the best organized cons — did a great job on the schedules, sliced & diced by time, track, & trouble-maker. I can’t improve on theirs for me:

Friday 9:00 pm: Technospeed (Ends at: 9:55 pm) Truman
Panelists: John Ashmead, Martin Berman-Gorvine, Bud Sparhawk (M), Christopher Weuve
Is technology moving too far? Too fast? What is coming up in the future? What happens to those left behind? Can people who never learned how to set the time on their VCRs handle what brain-implants and whatever else is coming next? Is this increasing the generation gap?
Saturday 10:00 am: Coming Civil War (Ends at: 10:55 am) Washington Theater
Panelists: John Ashmead, Tom Doyle (M), Carolyn Ives Gilman, Sarena Ulibarri, Christopher Weuve
Is the U.S. dividing again? Or are current difficulties just an historical burp? Why didn’t the US divide in the 1960s? What can be done to keep the Union together? Or would splitting be a good thing? Will the South rise again or will it be cities versus countryside?
Saturday 2:00 pm: Failure of SF Prediction (Ends at: 2:55 pm) Truman
Panelists: John Ashmead, Tom Doyle (M), Natalie Luhrs, Sarah Pinsker, K.M. Szpara
SF is not really supposed to predict the future but presents possibilities. Still, comparisons are inevitable. What did past SF writers get right and wrong about today? How can writers do a better job (or shouldn’t they even bother trying?)
Saturday 3:00 pm: Time Dispersion in Quantum Mechanics (Ends at: 3:55 pm) Truman
Panelists: John Ashmead (M)
John Ashmead gives a science talk on time dispersion. Is time fuzzy? In quantum mechanics space is fuzzy. And in special relativity time and space are interchangeable. But if time and space are interchangeable, shouldn’t time be fuzzy as well? Shouldn’t quantum mechanics apply — to time? Thanks to recent technical advances we can put this to the test. We ask: How do you get a clock in a box? How do you interfere with time? When is one slit better than two? And what happens at the intersection of time and quantum mechanics?
Sunday 10:00 am: Secrets of the Dinosaurs (Ends at: 10:55 am) Monroe
Panelists: Robert J. Sawyer, John Ashmead, Michael Brett-Surman, Thomas Holtz (M)
Did dinosaurs really have feathers? Why did people get it wrong for so long? What else did people believe about dinosaurs 50 years ago that is no longer true? Why did people think that then? What of our present knowledge about dinosaurs is most likely to also be incorrect?
Sunday 12:00 pm: Exoplanets (Ends at: 12:55 pm) Truman
Panelists: John Ashmead, Inge Heyer, Edward M. Lerner (M)
What do we know about planets outside our solar system? How do we discover them? What are the implications for aliens and exobiology?

Debugging with PostgreSQL – Sample code

My talk last week at FOSSCon, “Debugging with PostgreSQL: A Strategic Approach” went well. Lots of energy in the room. Good audience.

Bruce Momjian, one of the founders of PostgreSQL, was in the audience & said afterwards (roughly):  “that’s what I’ve been thinking for years; good to hear it spelled out in words”. I heard the same from a number of other programmers in the audience as well. Much pleased.

Bruce went on to ask that I propose the talk for the 2020 World PostgreSQL Conference, which I shall.

I thought it might be helpful to write some of the code examples up in a complete script, so anyone who wishes can run and/or hack it. I found a few problems and infelicities myself while doing this. Further suggestions very welcome!

Warning: here there be code.

To run the code (assuming you have PostgreSQL 11 installed and call the sample “sample_all.sql”):

psql -U postgres -d postgres -f sample_all.sql > sample_all.out 2>&1

Since it can be tricky to cut-and-paste from a web page, I have uploaded the raw code as “sample_all.txt” (you can’t upload files with an SQL extension for security reasons). For completeness, here are the slides themselves as PDF.

The code is careful to create a sample database, build & test stuff, and then remove the whole thing as if nothing had happened. If you don’t like doing this sort of thing from the postgres user (don’t blame you) create a user with createdb privileges & use that to run this.
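For example, the setup could look like this (the role name “sample_runner” is just an illustration, not from the talk); as the script’s own comments note, you would also point the cleanup section at that role:

```
createuser -U postgres --createdb sample_runner
psql -U sample_runner -d postgres -f sample_all.sql > sample_all.out 2>&1
```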

Sample Code

/*
	John Ashmead 
		sample_all.sql:  samples as used in my talk "Debugging with PostgreSQL"
		FOSSCon 8/17/2019

	Sample_all.sql is a complete code sample:

		it builds a sample database called sample with a user sample

		then creates a few types, 
		a timestamp trigger function, 
		a table people, 
		and then a small function to set the social security number

	The goal was to provide illustrations for the talk of what I call "self-debugging code"
		1) many problems are trapped, as by type checking, before they can do any harm
		2) in other cases, you will get an exception
		3) and in the worst case, at least you will see what went in and what came out
		
	You can run this as user postgres database postgres.  You could run as any user with createdb,
	if you fix the clean section to go from "postgres" to that user.

	I normally run scripts using psql with "-v ON_ERROR_STOP=1" set on the command line, 
	which will cause psql to exit on the first error.
	
	But in this case you need to allow for errors in the test section. 

	Therefore an appropriate command line is: 
		"psql -U postgres -d postgres -f sample_all.sql > sample_all.out 2>&1"

	The comments are taken from points made in the talk,
	hence their perhaps slightly pedantic character.

	Any comments, my email is "john.ashmead@ashmeadsoftware.com".
*/

\qecho Build user and database
create user sample with password 'HighlySecret';

create database sample with owner = sample;

\c sample sample

set search_path to public;

/*
	Create generic timestamp function: timestamp_trg

	Provided the tables use the fields "updated_at" and "created_at" as timestamps,
		you do not need to rewrite this function on a per table basis.

	It is very useful to have timestamp fields on most tables, even if they are not specifically needed:
		1) knowing "when" something went wrong often takes you much of the way to figuring out "what" went wrong
		2) and using triggers takes the load off the development programmer
		
	I've been working a lot with Ruby-on-Rails which will create & update these fields for you.
	But if you rely on Ruby-on-Rails then you create a lot of traffic on the wire,
	and you can miss cases where the updates were done behind ruby's back,
	as by other scripts & tools.
*/
	
\qecho Create timestamp function
create or replace function public.timestamp_trg() returns trigger
    language plpgsql
    AS $$
  begin
	/* assume we have updated_at and created_at fields on the table */
    if new.created_at is null
    then
      new.created_at = now();
    end if;
    new.updated_at = now();
    return new;
  end;
$$;

/*
	My own experience has been that it is much better to use logical types, even for simple fields:
		1) it makes changing types much easier:  if three tables are using a social security number, 
		then you only have to change it in one spot
		2) it makes the field names almost self-documenting
		3) and you can include bits of validation, as here, when the field is used

	Obviously this, like any principle, can be carried to extremes.  
	This is, as Captain Barbossa might put it, a guideline rather than a rule.
*/
\qecho Create some types & then the people table

begin;

/*
	Every so often you run into someone with a single character last name, as Kafka's "K",
	so we allow for that.  

	I prefer text to varchar or character.  Performance about the same (in some cases better) and 
	if you put a fixed length in, what happens when you have to add the last name of a king or queen
	where the name is basically the history of the monarchy?
*/
create domain public.lastname_t text check(length(value) > 0);
comment on domain public.lastname_t is 'holds last name.  Has to be at least one character long.';

create domain public.firstname_t text;
comment on domain public.firstname_t is 'holds first name.  Can be missing';

create domain public.middlename_t text;
comment on domain public.middlename_t is 'holds middle name or initial.  Can be missing';

create domain public.ssn_t text check(value similar to '\d{9}');
comment on domain public.ssn_t is 'holds social security number.  If present, must be 9 digits long.';

/*
	ok_t is self-documenting in the sense that true is good and false is bad.
	This seems obvious enough, but I have seen the reverse convention used.

	As an aside, it is better for maintenance to use positive tests, i.e. "if we_are_ok" 
	rather than negative ones "if not we_are_failed".  Slightly easier to read.
	Which is important when it is 2am and the code has to be working by 9am.

	Further, better to use "not null" whenever possible:  three valued logic is a great source of bugs.
*/
create domain public.ok_t boolean not null;
comment on domain public.ok_t is 'true for success; false for successness challenged';

-- PostgreSQL sequences are a joy!
create sequence public.people_id_seq start 1;

/*
	we are using the ruby convention that we should get the plurals right:  person/people rather than person/persons.
	The only place you see persons is in a police report:
		three persons of a suspicious character were espied leaving the premises in a rushed and furtive manner.
*/
create table public.people (
	id int primary key default nextval('people_id_seq'),
	lastname lastname_t not null,
	firstname firstname_t,
	middlename middlename_t,
	ssn ssn_t,
	updated_at timestamp with time zone default now(),
	created_at timestamp with time zone default now()
);

/*
	In this simple case the comments are, in all candor, redundant.

	But, if you comment everything, then tools like SchemaSpy can give you a nice report of everything in your database.

	And, it is a good habit to get into.
*/
comment on table public.people is 'list of people';
comment on column public.people.id is 'primary key of people table';
comment on column public.people.lastname is 'lastname of person -- mandatory';
comment on column public.people.firstname is 'firstname of person -- optional';
comment on column public.people.middlename is 'middlename of person -- optional';
comment on column public.people.ssn is 'social security number of person -- optional';
comment on column public.people.updated_at is 'last time this row was updated';
comment on column public.people.created_at is 'time this row was created';

-- A unique index on id will be created automagically, so don't bother. 

create index people_name_ix on public.people using btree(lastname, firstname, middlename);

create unique index people_ssn_uix on public.people using btree(ssn);
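/*
	One infelicity worth noting: the script defines timestamp_trg() but never attaches it
	to the people table, so only the column defaults fire on insert and nothing maintains
	updated_at.  A trigger along these lines (a suggested addition, not part of the
	original talk) would wire it up:
*/
create trigger people_timestamp_trg
	before insert or update on public.people
	for each row execute procedure public.timestamp_trg();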

insert into public.people(lastname, firstname, middlename) values ('Programmer', 'J', 'Random');

select * from public.people order by id;	-- make sure we look OK

/*
	One useful trick is to put a begin at the top of a script & a rollback at the end,
		until you are confident that the script works OK.
	This can be done even for DDL -- i.e. create table -- an incredibly strong feature of PostgreSQL.
*/
	
-- rollback
commit;

-- create ssn_set
\qecho Create the social security function which served as the main example of self-documenting code

-- begin/commit not strictly needed, the create function is an atomic unit, but still a good habit
begin;

create or replace function public.ssn_set( 
	person_id0 public.people.id%type, 	-- makes certain the function & table types are lined up
	ssn0 public.people.ssn%type, 		-- lets us get in a bit of validation (against the ssn type) before we get started
	debug_flag0 boolean default false	-- this lets you turn on debugging at will, if there is a production problem
) 
returns ok_t as $$
declare
	person_id1 people.id%type; -- more specific than int
	ssn1 people.ssn%type;	   -- could use ssn_t, but this is still more specific than a generic type
	row_count1 bigint;		   -- more check-y stuff
begin
	if debug_flag0 then
		/*
	   		notice the use of the function name in the message:
	   		always identify the source in an error message! this could be part of a thousand messages
		*/
		raise notice 'ssn_set(%, %)', person_id0, ssn0;	
	end if;

	select id into person_id1 from people where id = person_id0 limit 1; -- limit 1 is overkill
	if person_id1 is null then
		/*
			be as specific as possible in an error message
		*/
		raise exception 'ssn_set:  person_id0 % is not in people table', person_id0;
	end if;

	/*
		We have a unique index on the ssn, but we can issue a more precise error message if we check first.

		This also serves as a double-check if we set the table up incorrectly, unlikely for social security numbers,
		but can happen in general.
	*/
	select id into person_id1 from people 
		where ssn = ssn0 and id != person_id0;
	if person_id1 is not null then
		raise exception 'ssn_set:  ssn % is already in use by id %', ssn0, person_id1;
	end if;

	-- this whole function is really just an elaborate wrapper for this one line
	update people set ssn = ssn0 where id = person_id0;
	/*
		and now make absolutely sure that it worked
	*/
	get diagnostics row_count1 = row_count;
	if row_count1 != 1 then
		raise exception 'ssn_set:  unable to set ssn to % for person# %, rows affected = %', ssn0, person_id0, row_count1;
	end if;

	/*
		giving the exit values as well as entry values of key variables lets us trace
		the flow of gozintas and gozoutas without doing anything more than setting a debug flag
	*/
	if debug_flag0 then
		raise notice 'ssn_set: person %: ssn changed to %', person_id0, ssn0;
	end if;

	/*
		All previous returns were by "raise", this is our first "normal" return.
	*/
	return true;
end; $$ language plpgsql;

commit;

/*
	and of course the obligatory red/green tests
	-- bracket the allowed value with three red tests, then verify it works
	-- then check for dups:  one red, one green
*/
\qecho Test the social security function: three red tests then one green

\qecho Expect fail -- nonsense
/*
	We use the "(select...)" in the argument list to avoid hard-coding IDs,
	this will make it easier to extend the tests further, if necessary.

	I didn't bother to assign the "red" values into variables in this section, 
	since we are only using each value once.
*/
select public.ssn_set((select id from public.people where lastname = 'Programmer'), 'unmitigated nonsense'::ssn_t, true);
select * from public.people where lastname = 'Programmer';

\qecho Expect fail -- too short
select public.ssn_set((select id from public.people where lastname = 'Programmer'), '01234567'::ssn_t, true);
select * from public.people where lastname = 'Programmer';

\qecho Expect fail -- too long
select public.ssn_set((select id from public.people where lastname = 'Programmer'), '0123456789'::ssn_t, true);
select * from public.people where lastname = 'Programmer';

-- using variables with psql makes it easier to change up the tests later
\set test_ssn 012345678
\set test_ssn2 987654321
\qecho Expect success -- just right
select public.ssn_set((select id from public.people where lastname = 'Programmer'), :'test_ssn'::ssn_t, true);
select * from public.people where lastname = 'Programmer';

\qecho Second round of testing on the social security function: one red and one green
\qecho Expect fail:  we have already used this SSN
insert into people(lastname) values ('Programmer Junior');
select public.ssn_set((select id from public.people where lastname = 'Programmer Junior'), :'test_ssn'::ssn_t, true);
select * from public.people where lastname = 'Programmer Junior';

\qecho Expect success: give Junior his/her own SSN
select public.ssn_set((select id from public.people where lastname = 'Programmer Junior'), :'test_ssn2'::ssn_t, true);
select * from public.people where lastname = 'Programmer Junior';

-- cleanup:  you have to back out of the sample database and then drop first the database, then the role
\qecho A clean database is a happy database

\c postgres postgres
drop database sample;

drop role sample;

Now with more bugs: Debugging with PostgreSQL at FOSSCon 2019 – 8/17/2019

I am giving my Debugging With PostgreSQL talk tomorrow at FOSSCon, the annual Free & Open Source Software Convention held in Philadelphia.

This version is lightly revised from last month’s: I added back a few slides that I had to skip last time (I had 40 minutes last month, but will have 50 tomorrow). And I folded a bit of the audience feedback into the talk: more of what worked, less of the other stuff.

FOSSCon is fun, with a lot of great talks scheduled on Open Source & related topics. And it is free (donations are requested but not required). Be seeing you.

Debugging with PostgreSQL – A Strategic (& Streamlined) Approach

Most popular slide at the talk: and the audience got all of them! (not counting the bit about the official name of Bangkok)

As planned, I gave a talk on Debugging with PostgreSQL at the Philly PostgreSQL conference at Wharton this last Friday (7/19/2019).

Went well: debugging is a great subject & I definitely struck a nerve with the audience; after the talk people were saying they knew about some of the points — which gave them some confidence — and others were new — which gave them some tools. Good.

My most popular slide was a quiz: only 10 lines of code — and from the PostgreSQL man page on foreign keys — but still three bugs. For the record, they are:

  • All of the data types should be domains, not physical types, so the city type should be something like “city_t”, defined as varchar(80). And the temperature should be, say, “fahrenheit_t” (or “celsius_t”), so you know what the units are.
  • The use of key words, like “date”, for field names is not great technique. It is ambiguous at best; breaks stuff at worst.
  • And the width for the city is way too small. Consider the name of Bangkok in Thai, the language of Bangkok: Krungthepmahanakhon Amonrattanakosin Mahintharayutthaya Mahadilokphop Noppharatratchathaniburirom Udomratchaniwetmahasathan Amonphimanawatansathit Sakkathattiyawitsanukamprasit. 177 characters! If you make the city’s type a domain, then you can revise the domain to be, say, “text” — and automagically get the type fixed everywhere you have a city reference.
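A sketch of the domain fix (the names here are illustrative, not from the slide):

```sql
-- illustrative: logical types (domains) instead of physical ones
create domain city_t as varchar(80);
create domain fahrenheit_t as int;

create table weather (
	city      city_t,
	temp_lo   fahrenheit_t,  -- the units now live in the type name
	temp_hi   fahrenheit_t,
	wx_date   date           -- not "date": avoid key words as field names
);
```

One wrinkle: PostgreSQL will not let you change a domain’s underlying type in place, so widening city_t to text later means dropping and recreating the domain — or just defining it as text from the start, as argued above.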

I was scheduled to go late morning but went first because the opening speaker was still at his hotel. As a result I had the pleasant experience of hearing several later speakers refer to points made in my talk. The most popular was the phrase “lie consistently“.

I had built a form to collect Social Security numbers when I was at Bellcore (now Telcordia). It blew up when one fellow put in a variety of SSNs. I asked him what was going on. He said “I don’t want Bellcore to have my SSN. They have no legal right to it!”. “Fine by me, but just do me a favor & lie consistently“. We both left happy.

I did a run thru of the talk Sunday with my OTC (Official Talk Consultant); she pointed out, with her usual correctness, that I had tried to fit an entire software engineering course into 50 minutes. As a result, the early mornings & late evenings Monday thru Wednesday were spent reorganizing & rewriting. A 2nd run thru Wednesday evening went much better. OTC approved.

But when I did a final final talk & schedule check Friday morning I found the time blocks were now down to 40 minutes. Snip, snip, cut, cut, squeeze, squeeze. I cut out everything that wasn’t on message, useful, & fun. Definitely improved the talk. That which does not destroy us makes us strong. Or at least succinct.

Final version of the talk (PDF): Debugging with PostgreSQL — A Strategic (& Streamlined!) Approach.

Debugging with PostgreSQL – A Strategic Approach

The PostgreSQL Elephant attacks a bug

The Philly PostgreSQL Meetup is holding an all day conference at the Wharton School in Philadelphia, July 19th, 2019. I will be giving my talk Debugging with PostgreSQL – A Strategic Approach at 11am.

Description:

Depending on the project, debugging can take 50 to 90% of development time. But it usually gets less than 10% of the press. PostgreSQL has great tools for debugging, but they are most effective when deployed as part of an overall strategy.

We will look at strategies for debugging PostgreSQL: how to find bugs, how to fix them, and how to keep them from happening in the first place.

We’ll look at root causes, technical tricks, and scientific strategies, and why — even if you can’t always write perfect code — it is usually a good idea to try.

We’ll hear from Bjarne Stroustrup, Sherlock Holmes, Kernighan and Ritchie, Pogo, & the experts of the PostgreSQL community.

Goal: less time debugging, more time building great tools and apps that stay up & get the job done.

Comments:

I’ll be doing this talk at FOSSCON 2019 as well. That will be Saturday August 17th, 2019.

While I’ve definitely built this for PostgreSQL, it turns out that most of the debugging advice is applicable not just to PostgreSQL but to databases in general, and not just to databases, but to most programming languages.

Time & QM at Balticon 2019

I did my “Time dispersion in quantum mechanics” paper as a popular talk at Balticon 2019 this last Saturday. Very energetic audience; talk went well. The audience had fun riffing on the time & quantum mechanics themes. And gave a round of applause to “quantum mechanics”. That doesn’t happen often. Post talk, I spent the next hour and a half in the hallway responding to questions & comments from attendees. And afterwards I ran into a woman who couldn’t get in because there was no standing room left. I think the audience liked the subject, liked the idea of being at the scientific edge, & was prepared to meet the speaker half way. So talk went well!

Thanks to Balticon for taking a chance on a very technical subject! and to all the attendees who made the talk a success.

So I’m hoping to do the talk for Capclave (DC science fiction convention) & Philcon (Philadelphia science fiction convention) in the fall.

My Balticon talk was basically a translation from Physics to English of my long paper of the same title, keeping the key ideas but doing everything in words & pictures, rather than equations.

Balticon will be publishing the video of the Balticon talk at some point. I developed the talk in Apple’s Keynote. I have exported to Microsoft Powerpoint and to Adobe’s PDF format. The advantage of the two slide presentation formats is that you can see the builds.

The long paper the talk was taken from was just published last week, by the Institute of Physics as part of their Conference Proceedings series. And the week before, I did a fairly technical version of the paper as a virtual (Skype) talk for the Time & Time Flow virtual conference. This is online on Youtube, part of the Physics Debates series.

Is time fuzzy?

Alice’s Past is Bob’s Future. And vice versa. Both are a bit fuzzy about time.

“Time dispersion and quantum mechanics”, my long paper — long in page count & long in time taken to come to completion — has just been accepted for publication in the peer-reviewed Proceedings of the IARD 2018. This will be published as part of the IOP Science’s Journal of Physics Conference Series.

I had earlier presented this as a talk at the IARD 2018 conference in June 2018 in Yucatan. The IARD (International Association for Relativistic Dynamics) asked the conference participants if they would submit papers (based on the talks) for the conference proceedings. No problem; the talk was itself based on a paper I had just finished. Of course the paper had more math. Much much more math (well north of 500 equations if you insist).

Close review of the talk revealed one or two soft spots; fixing them consumed more time than I had hoped. But I submitted — on the last possible day, November 30th, 2018. After a month and a bit, the two reviewers got back to me: liked the ideas, deplored the lack of sufficient connection to the literature, and in the case of Reviewer #1, felt that there were various points of ambiguity and omission which needed attention.

And right they were! I spent a few rather pleasant weeks diving into the literature; some I had read before, some frankly I had not given the attention that must be paid. I clarified, literated, disambiguated, and simplified over the next six or seven weeks, submitting a much revised version on March 11th this year. Nearly ten per cent shorter. No soft spots. Still a lot of equations (but just south of 500 this time). Every single one checked, rechecked, & cross-checked. And a few fun bits, just to keep things not too dry. Submitted feeling sure that I had done my best but not sure if that was best enough.

And I have just this morning received the very welcome news it will be joining the flock of accepted submissions headed for inclusion in the conference proceedings. I am best pleased.

As to the title of this blog post, my very long paper argues that if we apply quantum mechanics along the time dimension — and Einstein & even Bohr say we should! — then everything should be just a little bit fuzzy in time. But if you title a paper “Is time fuzzy?”, you can say farewell to any chance of acceptance by a serious publication.

But the point is not that time might be fuzzy — we have all suspected something of the kind — it is that this idea can be worked out in detail, in a self-consistent way, in a way that is consistent with all experimental evidence to date, in a way that can be tested itself, and in a way that is definitive: if the experiments proposed don’t show that time is fuzzy, then time is not fuzzy. (As Yoda likes to say: fuzz or no fuzz, there is no “just a little-bit-fuzzy if you please”!)

In any case, if you are going to be down Baltimore way come Memorial Day weekend, I will be doing a popular version of the paper at the 2019 Baltimore Science Fiction convention: no equations (well, almost no equations), some animations, and I hope a bit of fun with time!

The link at the start of this post points to a version formatted for US Letter, with table of contents & page numbers. The version accepted is the same, but formatted for A4 and without the TOC and page numbers (that being how the IOP likes its papers formatted). For those who prefer A4:


Mars or Bust at Philly Linux

I gave my “Mars or Bust” talk at PLUG North (Philly Linux Users Group/North) on January 8, 2019. Great audience; lots of good questions. They captured video of the event & have posted it to Google Photos. Presented, as the late great Rod Serling would put it, for your consideration: Mars or Bust.

Recent Time Travel References

At the Worldcon 2018 I had a very interesting conversation with Rafaela Yilun Fan, who is pursuing a Ph.D. in Time Travel! She asked me for a list of my own favorite references on time travel. I have my list from 2011 of course. But time travel waits for no traveler, and the list of interesting works has only gotten longer. Herewith a few of my favorites from the last few years:

Alan Burdick. Why Time Flies: A Mostly Scientific Investigation. 2017. The physics, psychology, et cetera of time.

Craig Callender. What makes time special? 2017. Unusually deep examination of what we mean by time.

Allen Everett and Thomas Roman. Time travel and warp drives: A scientific guide to shortcuts through time and space. 2012. Title says it all.

Matthew Jones and Joan Ormrod. Time Travel in Popular Media: Essays on Film, Television, Literature, and Video Games. 2015. Interesting collection of essays. Has a section on Asian Time Travel Films & Television Series.

Paul J. Nahin. Time Travel Tales: The Science Fiction Adventures and Philosophical Puzzles of Time Travel. 2017. Usual first rate work by Nahin.

Fraser A. Sherman. Now and Then We Time Travel. 2017. Very good coverage of film & television. Recommends Aetherco, Epguides, and Wikipedia. All good sources as well.

Ryan Wasserman. Paradoxes of Time Travel. 2018. Good review of various paradoxes from a philosophical point of view.

David Wittenberg. Time travel: the popular philosophy of narrative. 2013. My favorite as an explanation of what the function of time travel is, from a narrative point of view. Why do authors use time travel?

Is time an observable? or is it a mere parameter?

I’ve just put my long paper “Time dispersion and quantum mechanics” up on the physics archive.   If you are here, it is very possibly because you have at one point or another talked with me about some of the ideas in this paper and asked to see the paper when it was done.  But if you just googled in, welcome!

The central question in the paper is “is time fuzzy? or is it flat?” Or in more technical language, “is time an observable? or is it a mere parameter?”

To recap, in relativity, time and space enter on a basis of formal equivalence. In special relativity, the time and space coordinates rotate into each other under Lorentz transformations. In general relativity, if you fall into a black hole time and the radial coordinate appear to change places on the way in. And in wormholes and other exotic solutions to general relativity, time can even curve back on itself.

For all its temporal shenanigans, in relativity everything has a definite position in time and in space.  But in quantum mechanics, the three space dimensions are fuzzy.  You can never tell where you are exactly along the x or y or z positions.  And as you try to narrow the uncertainty in say the x dimension, you inevitably (“Heisenberg uncertainty principle”) find the corresponding momentum increasing in direct proportion. The more finely you confine the fly, the fiercer it buzzes to escape. But if it were not for this effect, the atoms that make us — and therefore we ourselves in turn — could not exist (more in the paper on this).
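In symbols, for the x dimension this is the familiar:

```latex
\Delta x \, \Delta p_x \;\geq\; \frac{\hbar}{2}
```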

So in quantum mechanics space is complex, but time is boring. It is well-defined, crisp, moves forward at the traditional second per second rate. It is like the butler Jeeves at a party at Bertie Wooster’s Drones Club: imperturbable, stately, observing all, participating in nothing.

Given that quantum mechanics and relativity are the two best theories of physics we have, this curious difference about time is at a minimum, how would Jeeves put it to Bertie?, “most disconcerting sir”.

Till recently this has been a mere cocktail party problem: you may argue on one side, you may argue on the other, but it is more an issue for the philosophers in the philosophy department than for the experimenters in the physics department.

But about two years ago, a team led by Ossiander managed to make some experimental measurements of times less than a single attosecond.    As one attosecond is to a second as a second is to the age of the universe, this is a number small beyond small.

But more critically for this discussion, this is roughly about how fuzzy time would be if time were fuzzy.  A reasonable first estimate of the width of an atom in time is the time it would take light to cross the atom — about an attosecond.
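The estimate in numbers, taking an atom to be about an angstrom across:

```latex
\Delta t \;\sim\; \frac{d_{\mathrm{atom}}}{c}
\;\approx\; \frac{10^{-10}\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}}
\;\approx\; 3\times 10^{-19}\,\mathrm{s}
\;\approx\; 0.3\ \mathrm{attoseconds}
```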

And this means that we can — for the first time — put to experimental test the question:  is time fuzzy or flat? is time an observable or a parameter?

To give the experimenters well-defined predictions is a non-trivial problem. But it’s doable. If we have a circle we can make some shrewd estimates about the height of the corresponding sphere.  If we have an atomic wave function with well-defined extensions in the three space dimensions, we can make some very reasonable estimates about its extent in time as well.

The two chief effects are non-locality in time as an essential aspect of every wave function and the complete equivalence of the Heisenberg uncertainty principle for time/energy to the Heisenberg uncertainty principle for space/momentum.

In particular, if we send a particle through a very very fast camera shutter, the uncertainty in time is given by the time the camera shutter is open. 

In standard quantum mechanics, the particle will be clipped in time.  Time-of-arrival measurements at a detector will show correspondingly less dispersion. 

But if time is fuzzy, then the uncertainty principle kicks in.  The wave function will be diffracted by the camera shutter. If the uncertainty in time is small, the uncertainty in energy will be large, the particle will spread out in time, and time-of-arrival measurements will show much greater dispersion. 

Time a parameter — beam narrower in time.  Time an observable — beam much wider in time.
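In symbols: if the shutter is open for a time Δt, the fuzzy-time hypothesis says

```latex
\Delta t \, \Delta E \;\gtrsim\; \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta E \;\gtrsim\; \frac{\hbar}{2\,\Delta t}
```

so the shorter the shutter time, the larger the spread in energy, and therefore the wider the time-of-arrival distribution at the detector.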

And if we are careful we can get estimates of the size of the effect in a way which is not just testable but falsifiable.  If the experiments do not show the predicted effects at the predicted scale, then time is flat.

Of course, all this takes a bit of working out.  Hence the long paper.

There was a lot to cover:  how to do calculations in time on the same basis as in space, how to define the rules for detection, how to extend the work from single particles to field theory, and so on. 

The requirements were:

  • Manifest covariance between time and space at every step,
  • Complete consistency with established experimental and observational results,
  • And — for the extension to field theory — equivalence of the free propagator for both Schrödinger equation and Feynman diagrams.

I’ve been helped by many people along the way, especially at the Feynman Festivals in Baltimore & Olomouc/2009; at some conferences hosted by QUIST and DARPA; at The Clock and the Quantum/2008 conference at the Perimeter Institute; at the Quantum Time/2014 conference in Pittsburgh; at Time and Quantum Gravity/2015 in San Diego; and most recently at the International Association for Relativistic Dynamics (IARD) conference this year in Yucatan. An earlier version of this paper was presented as a talk at this last conference & feedback from the participants was critical in helping to bring the ideas to final form.

Many thanks! 

The paper has been submitted to the IOP Conference Proceedings series.  The copy on the archive is formatted per the IOP requirements so is formatted for A4 paper, and with no running heads or feet.  I have it formatted for US Letter here.


