Philcon 2019 — Precap

Lagrange's tightrope, balancing kinetic & potential energy
Working out the effects of quantum mechanics on time requires a delicate balance between kinetic & potential energy; Lagrange showed the way

The Philcon 2019 schedule is up. I’m doing my Time Dispersion in Quantum Mechanics talk — the tightrope walker is one of the slides; it gives you a sense of the style of the whole, balancing ideas against math, time against space, classical against quantum, … — and four panels, all interesting. The con runs from Friday 11/8/2019 through Sunday 11/10. Details:

LOOKING FOR LIFE IN OUR SOLAR SYSTEM

Fri 8:00 pm. John Ashmead (mod), Earl Bennett, Dr. H. Paul Shuch, John Skylar. What’s the latest evidence that we’ve found? Where are the best places to look?

TIME DISPERSION IN QUANTUM MECHANICS

Sat. 4:00 PM. John Ashmead. We know from quantum mechanics that space is fuzzy — that particles don’t have a well-defined position in space — and we know from special relativity that time and space are interchangeable. So shouldn’t time be fuzzy as well? Thanks to recent technical advances in measurements at “short times” we can now put this to the test. Discuss!

THE BLURRY LINE BETWEEN CUTTING EDGE AND PSEUDOSCIENCE

Sat 5:00 PM. John Ashmead (mod), Charlie Robertson, Rebecca Robare, Dr. H. Paul Shuch, Carl Fink, Lawrence Kramer. Niels Bohr famously said, “Your theory is crazy but it’s not crazy enough to be true”. How do we keep an open mind but not one so open that our brains fall out? A look at how to tell strange-yet-true science from weapons-grade balonium.

THE EVOLUTION OF MARS

Sat 7:00 PM. Darrell Schweitzer (mod), John Ashmead, Tom Purdom, James L. Cambias, Earl Bennett. How have depictions of Mars changed in SF from the imaginings of Burroughs and Bradbury to the Mars we know now from studying its surface?

DYSTOPIA NOW

Sat 9:00 PM. Hildy Silverman (mod), John Ashmead, Karen Heuler, B. Lana Guggenheim. No one should be surprised that climate change, technological over-reach, and political anxieties have translated themselves into a bumper crop of contemporary dystopian fiction. How coherent are their messages — and how good are the stories? Is there a way to make such a work more than a cautionary tale about the present era’s problems?

Capclave 2019 — Recap

Alice & her dog examine the mysteries of time and quantum mechanics, slide from my talk at Capclave 2019.

Had a great time at Capclave. It’s one of the smaller cons — slightly north of 300 people — and doesn’t have some of the usual con stuff like an art show or cosplay. But for precisely those reasons, you tend to have more of those repeated one-on-one conversations that, for me, are the real life of a con.

Had a good time at the five panels I was on. All were energetic & held the audience.

Technospeed — is technology moving too far too fast? — was the first (Friday evening), with the smallest audience. It was hard to know what to do with the subject, a tad too broad I suspect. Much of the discussion focused on AI, a better subject. (I may take AI as the topic for my big talk next year.) Not a bad panel, that said: we had a lot of fun with Kurzweil’s Singularity and related topics.

My next two panels (both Saturday), The Coming Civil War & Failed SF Predictions, both had Tom Doyle as moderator. He did a great job, particularly with The Coming Civil War, where he asked the assembled panelists how they would present various scenarios from a fictional point of view. How would you tell the story of cities at war with the countryside? and so on. Kept the conversation from degenerating into what they thought of the [insert-derogatory-noun]-in-chief.

I had a bit of fun with Failed SF Predictions, bringing in some books of pulp age cover art: jet packs, menacing octopi, orbiting cities, threatening robots, giant computers, and attacking space fleets, … The role of women in SF in the days of the pulps was nothing like what it is in the real world today; a lot of the Failed SF Predictions chosen were about gender issues. Not even the first wave of feminist SF writers — LeGuin, Joan Vinge, Joanna Russ, … — fully anticipated how much the field would evolve.

Sunday my first panel was on Secrets of the Dinosaurs. The other three panelists were the GOH Robert Sawyer (author of the Far-Seer trilogy of dinosaur novels), Michael Brett-Surman (Collections Manager of the National Dinosaur Collection at the Smithsonian and co-author/editor of several dinosaur books with Dr. Thomas R. Holtz) and Dr. Thomas R. Holtz (who is the T. Rex of T. Rex scholarship). Being on a dino panel with these three was like being a small mammal in the Jurassic: the primary objective is not to get underfoot and squashed. All three are immensely polite & courteous individuals, who would never think to squash a small mammal who wandered onto the panel. I took advantage of my position as the designated amateur to ask about dino parental care, how hadrosaurs defended themselves against a T. Rex (rather easily — those tails are not just ornamental!), and my final question: if dinosaurs lived in groups & relied on visual & auditory display, did they have barn-dances?

My final panel was Exoplanets. My fellow panelists (Inge Heyer & Edward Lerner) were both expert & I had done a fair amount of swotting, so we had a good time going over rogue planets between the stars, planets made of diamond, life within the hidden seas, and various methods of finding new exoplanets — the total of confirmed exoplanets is 4000 & counting!

And my Time Dispersion in Quantum Mechanics talk went well (Saturday afternoon). I had a couple of practice run-thrus with a “volunteer” audience, which left it leaner, shorter, and easier to follow. Same content, but no math (except E=mc², which is so familiar it doesn’t count). Good audience and great questions: some I answered there, some I dealt with in the hall discussions, and one or two I had to admit “that’s one for the experimentalists!”

And my thanks to Brent Warner of NASA, who corrected — with great politeness — a couple of soft spots in the presentation. I will incorporate the fixes into the next iteration, which as it happens is in two weeks at Philcon.

And the next morning I got what I think is the best compliment I have ever received: the father of a 10th grader said his daughter was so inspired by my talk she is thinking of going into physics & quantum mechanics. “Here’s my email; tell her to feel free to follow up!” Yes!

Capclave 2019 — Talks & Panels

I’m appearing at Capclave this year (October 18th thru 20th), doing my talk on Time Dispersion in Quantum Mechanics (3pm on Saturday the 19th) and five panels, all great topics: Technospeed, Coming Civil War, Failure of SF Prediction, Secrets of the Dinosaurs, & Exoplanets. Prep for these will be a lot of fun. And the other panelists include a number of old friends and I’m sure some new ones.

Capclave — always one of the best organized cons — did a great job on the schedules, sliced & diced by time, track, & trouble-maker. I can’t improve on theirs for me:

Friday 9:00 pm: Technospeed (Ends at: 9:55 pm) Truman
Panelists: John Ashmead, Martin Berman-Gorvine, Bud Sparhawk (M), Christopher Weuve
Is technology moving too far? Too fast? What is coming up in the future? What happens to those left behind? Can people who never learned how to set the time on their VCRs handle what brain-implants and whatever else is coming next? Is this increasing the generation gap?
Saturday 10:00 am: Coming Civil War (Ends at: 10:55 am) Washington Theater
Panelists: John Ashmead, Tom Doyle (M), Carolyn Ives Gilman, Sarena Ulibarri, Christopher Weuve
Is the U.S. dividing again? Or are current difficulties just an historical burp? Why didn’t the US divide in the 1960s? What can be done to keep the Union together? Or would splitting be a good thing? Will the South rise again or will it be cities versus countryside?
Saturday 2:00 pm: Failure of SF Prediction (Ends at: 2:55 pm) Truman
Panelists: John Ashmead, Tom Doyle (M), Natalie Luhrs, Sarah Pinsker, K.M. Szpara
SF is not really supposed to predict the future but presents possibilities. Still, comparisons are inevitable. What did past SF writers get right and wrong about today? How can writers do a better job (or shouldn’t they even bother trying?)
Saturday 3:00 pm: Time Dispersion in Quantum Mechanics (Ends at: 3:55 pm) Truman
Panelists: John Ashmead (M)
John Ashmead gives a science talk on time dispersion. Is time fuzzy? In quantum mechanics space is fuzzy. And in special relativity time and space are interchangeable. But if time and space are interchangeable, shouldn’t time be fuzzy as well? Shouldn’t quantum mechanics apply — to time? Thanks to recent technical advances we can put this to the test. We ask: How do you get a clock in a box? How do you interfere with time? When is one slit better than two? And what happens at the intersection of time and quantum mechanics?
Sunday 10:00 am: Secrets of the Dinosaurs (Ends at: 10:55 am) Monroe
Panelists: Robert J. Sawyer, John Ashmead, Michael Brett-Surman, Thomas Holtz (M)
Did dinosaurs really have feathers? Why did people get it wrong for so long? What else did people believe about dinosaurs 50 years ago that is no longer true? Why did people think that then? What of our present knowledge about dinosaurs is most likely to also be incorrect?
Sunday 12:00 pm: Exoplanets (Ends at: 12:55 pm) Truman
Panelists: John Ashmead, Inge Heyer, Edward M. Lerner (M)
What do we know about planets outside our solar system? How do we discover them? What are the implications for aliens and exobiology?

Debugging with PostgreSQL – Sample code

My talk last week at FOSSCon, “Debugging with PostgreSQL: A Strategic Approach” went well. Lots of energy in the room. Good audience.

Bruce Momjian, one of the founders of PostgreSQL, was in the audience & said afterwards (roughly): “that’s what I’ve been thinking for years; good to hear it spelled out in words”. I heard much the same from a number of other programmers in the audience as well. Much pleased.

Bruce went on to ask that I propose the talk for the 2020 World PostgreSQL Conference, which I shall.

I thought it might be helpful to write some of the code examples up in a complete script, so anyone who wishes can run and/or hack it. I found a few problems and infelicities myself while doing this. Further suggestions very welcome!

Warning: here there be code.

To run the code (assuming you have PostgreSQL 11 installed and call the sample “sample_all.sql”):

psql -U postgres -d postgres -f sample_all.sql > sample_all.out 2>&1

Since it can be tricky to cut-and-paste from a web page, I have uploaded the raw code as “sample_all.txt” (you can’t upload files with an SQL extension for security reasons). For completeness, here are the slides themselves as PDF.

The code is careful to create a sample database, build & test stuff, and then remove the whole thing as if nothing had happened. If you don’t like doing this sort of thing as the postgres user (don’t blame you), create a user with createdb privileges & use that to run it.
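
For example, something along these lines works. This is a sketch only: the role name sample_runner and its password are placeholders, and I’ve added createrole since the script also creates and drops the sample role.

-- run once as the postgres superuser
create role sample_runner with login createdb createrole password 'ChangeMe';

Then run the script as that role (and change the clean-up line near the bottom of the script from "\c postgres postgres" to "\c postgres sample_runner"):

psql -U sample_runner -d postgres -f sample_all.sql > sample_all.out 2>&1

(Depending on your PostgreSQL version, the "create database sample with owner = sample" step may also want sample_runner to be a member of the sample role.)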

Sample Code

/*
	John Ashmead 
		sample_all.sql:  samples as used in my talk "Debugging with PostgreSQL"
		FOSSCon 8/17/2019

	Sample_all.sql is a complete code sample:

		it builds a sample database called sample with a user sample

		then creates a few types, 
		a timestamp trigger function, 
		a table people, 
		and then a small function to set the social security number

	The goal was to provide illustrations for the talk of what I call "self-debugging code"
		1) many problems are trapped, as by type checking, before they can do any harm
		2) in other cases, you will get an exception
		3) and in the worst case, at least you will see what went in and what came out
		
	You can run this as user postgres, database postgres.  You could run it as any user with createdb,
	if you fix the clean-up section at the bottom to reconnect as that user rather than "postgres".

	I normally run scripts using psql with "-v ON_ERROR_STOP=1" set on the command line, 
	which will cause psql to exit on the first error.
	
	But in this case you need to allow for errors in the test section. 

	Therefore an appropriate command line is: 
		"psql -U postgres -d postgres -f sample_all.sql > sample_all.out 2>&1"

	The comments are taken from points made in the talk,
	hence their perhaps slightly pedantic character.

	Any comments, my email is "john.ashmead@ashmeadsoftware.com".
*/

\qecho Build user and database
create user sample with password 'HighlySecret';

create database sample with owner = sample;

\c sample sample

set search_path to public;

/*
	Create generic timestamp function: timestamp_trg

	Provided the tables use the fields "updated_at" and "created_at" as timestamps,
		you do not need to rewrite this function on a per table basis.

	It is very useful to have timestamp fields on most tables, even if they are not specifically needed:
		1) knowing "when" something went wrong often takes you much of the way to figuring out "what" went wrong
		2) and using triggers takes the load off the development programmer
		
	I've been working a lot with Ruby-on-Rails which will create & update these fields for you.
	But if you rely on Ruby-on-Rails then you create a lot of traffic on the wire,
	and you can miss cases where the updates were done behind ruby's back,
	as by other scripts & tools.
*/
	
\qecho Create timestamp function
create or replace function public.timestamp_trg() returns trigger
    language plpgsql
    AS $$
  begin
	/* assume we have updated_at and created_at fields on the table */
    if new.created_at is null
    then
      new.created_at = now();
    end if;
    new.updated_at = now();
    return new;
  end;
$$;

/*
	My own experience has been that it is much better to use logical types, even for simple fields:
		1) it makes changing types much easier:  if three tables are using a social security number, 
		then you only have to change it in one spot
		2) it makes the field names almost self-documenting
		3) and you can include bits of validation, as here, when the field is used

	Obviously this, like any principle, can be carried to extremes.  
	This is, as Captain Barbossa might put it, a guideline rather than a rule.
*/
\qecho Create some types & then the people table

begin;

/*
	Every so often you run into someone with a single character last name, as Kafka's "K",
	so we allow for that.  

	I prefer text to varchar or character.  Performance is about the same (in some cases better), and 
	if you put a fixed length in, what happens when you have to add the last name of a king or queen
	where the name is basically the history of the monarchy?
*/
create domain public.lastname_t text check(length(value) > 0);
comment on domain public.lastname_t is 'holds last name.  Has to be at least one character long.';

create domain public.firstname_t text;
comment on domain public.firstname_t is 'holds first name.  Can be missing';

create domain public.middlename_t text;
comment on domain public.middlename_t is 'holds middle name or initial.  Can be missing';

create domain public.ssn_t text check(value similar to '\d{9}');
comment on domain public.ssn_t is 'holds social security number.  If present, must be 9 digits long.';

/*
	ok_t is self-documenting in the sense that true is good and false is bad.
	This seems obvious enough, but I have seen the reverse convention used.

	As an aside, it is better for maintenance to use positive tests, i.e. "if we_are_ok" 
	rather than negative ones "if not we_are_failed".  Slightly easier to read.
	Which is important when it is 2am and the code has to be working by 9am.

	Further, better to use "not null" whenever possible:  three valued logic is a great source of bugs.
*/
create domain public.ok_t boolean not null;
comment on domain public.ok_t is 'true for success; false for successness challenged';

-- PostgreSQL sequences are a joy!
create sequence public.people_id_seq start 1;

/*
	we are using the ruby convention that we should get the plurals right:  person/people rather than person/persons.
	The only place you see persons is in a police report:
		three persons of a suspicious character were espied leaving the premises in a rushed and furtive manner.
*/
create table public.people (
	id int primary key default nextval('people_id_seq'),
	lastname lastname_t not null,
	firstname firstname_t,
	middlename middlename_t,
	ssn ssn_t,
	updated_at timestamp with time zone default now(),
	created_at timestamp with time zone default now()
);

/*
	In this simple case the comments are, in all candor, redundant.

	But, if you comment everything, then tools like SchemaSpy can give you a nice report of everything in your database.

	And, it is a good habit to get into.
*/
comment on table public.people is 'list of people';
comment on column public.people.id is 'primary key of people table';
comment on column public.people.lastname is 'lastname of person -- mandatory';
comment on column public.people.firstname is 'firstname of person -- optional';
comment on column public.people.middlename is 'middlename of person -- optional';
comment on column public.people.ssn is 'social security number of person -- optional';
comment on column public.people.updated_at is 'last time this row was updated';
comment on column public.people.created_at is 'time this row was created';

-- A unique index on id will be created automagically, so don't bother. 

create index people_name_ix on public.people using btree(lastname, firstname, middlename);

create unique index people_ssn_uix on public.people using btree(ssn);
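
/*
	Note: as posted, the script never actually attaches timestamp_trg to the people table,
	so updated_at is only ever set by its column default.  If you want the trigger to
	maintain the timestamps automatically, something along these lines would do it
	(the trigger name "people_timestamp_trg" is just a placeholder):
*/
create trigger people_timestamp_trg
	before insert or update on public.people
	for each row execute procedure public.timestamp_trg();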

insert into public.people(lastname, firstname, middlename) values ('Programmer', 'J', 'Random');

select * from public.people order by id;	-- make sure we look OK

/*
	One useful trick is to put a begin at the top of a script & a rollback at the end,
		until you are confident that the script works OK.
	This can be done even for DDL -- i.e. create table -- an incredibly strong feature of PostgreSQL.
*/
	
-- rollback
commit;

-- create ssn_set
\qecho Create the social security function which served as the main example of self-documenting code

-- begin/commit not strictly needed, the create function is an atomic unit, but still a good habit
begin;

create or replace function public.ssn_set( 
	person_id0 public.people.id%type, 	-- makes certain the function & table types are lined up
	ssn0 public.people.ssn%type, 		-- lets us get in a bit of validation (against the ssn type) before we get started
	debug_flag0 boolean default false	-- this lets you turn on debugging at will, if there is a production problem
) 
returns ok_t as $$
declare
	person_id1 people.id%type; -- more specific than int
	ssn1 people.ssn%type;	   -- could use ssn_t, but this is still more specific than a generic type
	row_count1 bigint;		   -- more check-y stuff
begin
	if debug_flag0 then
		/*
	   		notice the use of the function name in the message:
	   		always identify the source in an error message! this could be part of a thousand messages
		*/
		raise notice 'ssn_set(%, %)', person_id0, ssn0;	
	end if;

	select id into person_id1 from people where id = person_id0 limit 1; -- limit 1 is overkill
	if person_id1 is null then
		/*
			be as specific as possible in an error message
		*/
		raise exception 'ssn_set:  person_id0 % is not in people table', person_id0;
	end if;

	/*
		We have a unique index on the ssn, but we can issue a more precise error message if we check first.

		This also serves as a double-check if we set the table up incorrectly, unlikely for social security numbers,
		but can happen in general.
	*/
	select id into person_id1 from people 
		where ssn = ssn0 and id != person_id0;
	if person_id1 is not null then
		raise exception 'ssn_set:  ssn % is already in use by id %', ssn0, person_id1;
	end if;

	-- this whole function is really just an elaborate wrapper for this one line
	update people set ssn = ssn0 where id = person_id0;
	/*
		and now make absolutely sure that it worked
	*/
	get diagnostics row_count1 = row_count;
	if row_count1 != 1 then
		raise exception 'ssn_set:  unable to set ssn to % for person# %, rows affected = %', ssn0, person_id0, row_count1;
	end if;

	/*
		giving the exit values as well as entry values of key variables lets us trace
		the flow of gozintas and gozoutas without doing anything more than setting a debug flag
	*/
	if debug_flag0 then
		raise notice 'ssn_set: person %: ssn changed to %', person_id0, ssn0;
	end if;

	/*
		All previous returns were by "raise", this is our first "normal" return.
	*/
	return true;
end; $$ language plpgsql;

commit;

/*
	and of course the obligatory red/green tests
	-- bracket the allowed value with three red tests, then verify it works
	-- then check for dups:  one red, one green
*/
\qecho Test the social security function: three red tests then one green

\qecho Expect fail -- nonsense
/*
	We use the "(select...)" in the argument list to avoid hard-coding IDs;
	this will make it easier to extend the tests further, if necessary.

	I didn't bother to assign the "red" values into variables in this section, 
	since we are only using each value once.
*/
select public.ssn_set((select id from public.people where lastname = 'Programmer'), 'unmitigated nonsense'::ssn_t, true);
select * from public.people where lastname = 'Programmer';

\qecho Expect fail -- too short
select public.ssn_set((select id from public.people where lastname = 'Programmer'), '01234567'::ssn_t, true);
select * from public.people where lastname = 'Programmer';

\qecho Expect fail -- too long
select public.ssn_set((select id from public.people where lastname = 'Programmer'), '0123456789'::ssn_t, true);
select * from public.people where lastname = 'Programmer';

-- using variables with psql makes it easier to change up the tests later
\set test_ssn 012345678
\set test_ssn2 987654321
\qecho Expect success -- just right
select public.ssn_set((select id from public.people where lastname = 'Programmer'), :'test_ssn'::ssn_t, true);
select * from public.people where lastname = 'Programmer';

\qecho Second round of testing on the social security function: one red and one green
\qecho Expect fail:  we have already used this SSN
insert into people(lastname) values ('Programmer Junior');
select public.ssn_set((select id from public.people where lastname = 'Programmer Junior'), :'test_ssn'::ssn_t, true);
select * from public.people where lastname = 'Programmer Junior';

\qecho Expect success: give Junior his/her own SSN
select public.ssn_set((select id from public.people where lastname = 'Programmer Junior'), :'test_ssn2'::ssn_t, true);
select * from public.people where lastname = 'Programmer Junior';

-- cleanup:  you have to back out of the sample database, then drop the database and then the role
\qecho A clean database is a happy database

\c postgres postgres
drop database sample;

drop role sample;

Now with more bugs: Debugging with PostgreSQL at FOSSCon 2019 – 8/17/2019

I am giving my Debugging With PostgreSQL talk tomorrow at FOSSCon, the annual Free & Open Source Software Convention held in Philadelphia.

This version is lightly revised from last month’s: I added back a few slides that I had to skip last time (I had 40 minutes last month, but 50 minutes tomorrow), and I folded a bit of the audience feedback into the talk: more of what worked, less of the other stuff.

FOSSCon is fun, with a lot of great talks scheduled on Open Source & related. And it is free (donations are requested but not required.) Be seeing you.

Debugging with PostgreSQL – A Strategic (& Streamlined) Approach

Most popular slide at the talk: the audience got all three bugs! (not counting the bit about the official name of Bangkok)

As planned, I gave a talk on Debugging with PostgreSQL at the Philly PostgreSQL conference at Wharton this last Friday (7/19/2019).

Went well: debugging is a great subject & I definitely struck a nerve with the audience; after the talk people were saying they knew about some of the points — which gave them some confidence — and others were new — which gave them some tools. Good.

My most popular slide was a quiz: only 10 lines of code — and from the PostgreSQL man page on foreign keys — but still three bugs. For the record, they are:

  • All of the data types should be domains, not physical types, so the city type should be something like “city_t”, defined as varchar(80). And the temperature should be, say, “fahrenheit_t” (or “celsius_t”), so you know what the units are. (There’s a sketch of the fixed-up table after this list.)
  • The use of keywords, like “date”, for field names is not great technique. It is ambiguous at best and breaks stuff at worst.
  • And the width for the city is way too small. Consider the name of Bangkok in Thai, the language of Bangkok: Krungthepmahanakhon Amonrattanakosin Mahintharayutthaya Mahadilokphop Noppharatratchathaniburirom Udomratchaniwetmahasathan Amonphimanawatansathit Sakkathattiyawitsanukamprasit. 177 characters! If you make the city’s type a domain, then you can revise the domain to be, say, “text” — and automagically get the type fixed everywhere you have a city reference.
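
For the curious, here is roughly what the fixed-up table looks like with those changes. This is my sketch, not the slide’s actual code: the table name weather_report, the column names, and the numeric precision are placeholders.

create domain city_t varchar(80);	-- one definition to change (say, to text) if Bangkok shows up
create domain fahrenheit_t numeric(5,1);	-- the domain name documents the units

create table weather_report (
	city		city_t,
	temp_lo		fahrenheit_t,
	temp_hi		fahrenheit_t,
	report_date	date		-- a real name, not the bare keyword "date"
);

Because city_t is defined in exactly one spot, widening it later touches one definition rather than every table that has a city column.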

I was scheduled to go late morning but went first because the opening speaker was still at his hotel. As a result I had the pleasant experience of hearing several later speakers refer to points made in my talk. The most popular was the phrase “lie consistently”.

I had built a form to collect Social Security numbers when I was at Bellcore (now Telcordia). It blew up when one fellow put in a variety of SSNs. I asked him what was going on. He said “I don’t want Bellcore to have my SSN. They have no legal right to it!” “Fine by me, but just do me a favor & lie consistently.” We both left happy.

I did a run thru of the talk Sunday with my OTC (Official Talk Consultant); she pointed out, with her usual correctness, that I had tried to fit an entire software engineering course into 50 minutes. As a result, the early mornings & late evenings Monday thru Wednesday were spent reorganizing & rewriting. A 2nd run thru Wednesday evening went much better. OTC approved.

But when I did a final final talk & schedule check Friday morning I found the time blocks were now down to 40 minutes. Snip, snip, cut, cut, squeeze, squeeze. I cut out everything that wasn’t on message, useful, & fun. Definitely improved the talk. That which does not destroy us makes us strong. Or at least succinct.

Final version of the talk (PDF): Debugging with PostgreSQL — A Strategic (& Streamlined!) Approach.

Debugging with PostgreSQL – A Strategic Approach

The PostgreSQL Elephant attacks a bug

The Philly PostgreSQL Meetup is holding an all day conference at the Wharton School in Philadelphia, July 19th, 2019. I will be giving my talk Debugging with PostgreSQL – A Strategic Approach at 11am.

Description:

Depending on the project, debugging can take 50 to 90% of development time. But it usually gets less than 10% of the press. PostgreSQL has great tools for debugging, but they are most effective when deployed as part of an overall strategy.

We will look at strategies for debugging PostgreSQL: how to find bugs, how to fix them, and how to keep them from happening in the first place.

We’ll look at root causes, technical tricks, and scientific strategies, and why — even if you can’t always write perfect code — it is usually a good idea to try.

We’ll hear from Bjarne Stroustrup, Sherlock Holmes, Kernighan and Ritchie, Pogo, & the experts of the PostgreSQL community.

Goal: less time debugging, more time building great tools and apps that stay up & get the job done.

Comments:

I’ll be doing this talk at FOSSCON 2019 as well. That will be Saturday August 17th, 2019.

While I’ve definitely built this for PostgreSQL, it turns out that most of the debugging advice is applicable not just to PostgreSQL but to databases in general, and not just to databases, but to most programming languages.

Time & QM at Balticon 2019

I did my “Time dispersion in quantum mechanics” paper as a popular talk at Balticon 2019 this last Saturday. Very energetic audience; talk went well. The audience had fun riffing on the time & quantum mechanics themes. And gave a round of applause to “quantum mechanics”. That doesn’t happen often. Post talk, I spent the next hour and a half in the hallway responding to questions & comments from attendees. And afterwards I ran into a woman who couldn’t get in because there was no standing room left. I think the audience liked the subject, liked the idea of being at the scientific edge, & was prepared to meet the speaker halfway. So the talk went well!

Thanks to Balticon for taking a chance on a very technical subject, and to all the attendees who made the talk a success!

So I’m hoping to do the talk for Capclave (DC science fiction convention) & Philcon (Philadelphia science fiction convention) in the fall.

My Balticon talk was basically a translation from Physics to English of my long paper of the same title, keeping the key ideas but doing everything in words & pictures, rather than equations.

Balticon will be publishing the video of the talk at some point. I developed the talk in Apple’s Keynote. I have exported it to Microsoft Powerpoint and to Adobe’s PDF format. The advantage of the two slide presentation formats is that you can see the builds.

The long paper the talk was taken from was just published last week by the Institute of Physics as part of their Conference Proceedings series. And the week before, I did a fairly technical version of the paper as a virtual (Skype) talk for the Time & Time Flow virtual conference. It is online on YouTube, as part of the Physics Debates series.

Is time fuzzy?

Alice’s Past is Bob’s Future. And vice versa. Both are a bit fuzzy about time.

“Time dispersion and quantum mechanics”, my long paper — long in page count & long in time taken to come to completion — has just been accepted for publication in the peer-reviewed Proceedings of the IARD 2018. This has since been published as part of IOP Science’s Journal of Physics Conference Series.

I had earlier presented this as a talk at the IARD 2018 conference in June 2018 in Yucatan. The IARD (International Association for Relativistic Dynamics) asked the conference participants if they would submit papers (based on the talks) for the conference proceedings. No problem; the talk was itself based on a paper I had just finished. Of course the paper had more math. Much much more math (well north of 500 equations if you insist).

Close review of the talk revealed one or two soft spots; fixing them consumed more time than I had hoped. But I submitted — on the last possible day, November 30th, 2018. After a month and a bit, the two reviewers got back to me: liked the ideas, deplored the lack of sufficient connection to the literature, and in the case of Reviewer #1, felt that there were various points of ambiguity and omission which needed attention.

And right they were! I spent a few rather pleasant weeks diving into the literature; some I had read before, some frankly I had not given the attention that must be paid. I clarified, literated, disambiguated, and simplified over the next six or seven weeks, submitting a much revised version on Mar 11th this year. Nearly ten per cent shorter. No soft spots. Still a lot of equations (but just south of 500 this time). Every single one checked, rechecked, & cross-checked. And a few fun bits, just to keep things not too dry. Submitted feeling sure that I had done my best but not sure if that was best enough.

And I have just this morning received the very welcome news it will be joining the flock of accepted submissions headed for inclusion in the conference proceedings. I am best pleased.

As to the title of this blog post, my very long paper argues that if we apply quantum mechanics along the time dimension — and Einstein & even Bohr say we should! — then everything should be just a little bit fuzzy in time. But if you title a paper “Is time fuzzy?”, you can say farewell to any chance of acceptance by a serious publication.

But the point is not that time might be fuzzy — we have all suspected something of the kind — it is that this idea can be worked out in detail, in a self-consistent way, in a way that is consistent with all experimental evidence to date, in a way that can be tested itself, and in a way that is definitive: if the experiments proposed don’t show that time is fuzzy, then time is not fuzzy. (As Yoda likes to say: fuzz or no fuzz, there is no “just a little-bit-fuzzy if you please”!)
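
One way to gloss that symmetry argument (my shorthand here, not the paper’s own formulation): the familiar position-momentum uncertainty relation is what makes position fuzzy, and if time and space really are interchangeable, its time-energy counterpart should make time fuzzy in the same sense:

$$\Delta x \, \Delta p \ge \frac{\hbar}{2} \qquad\qquad \Delta t \, \Delta E \ge \frac{\hbar}{2}$$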

In any case, if you are going to be down Baltimore way come Memorial Day weekend, I will be doing a popular version of the paper at the 2019 Baltimore Science Fiction convention: no equations (well, almost no equations), some animations, and I hope a bit of fun with time!

The link at the start of this post points to a version formatted for US Letter, with table of contents & page numbers. The version accepted is the same, but formatted for A4 and without the TOC and page numbers (that being how the IOP likes its papers formatted). For those who prefer A4:


Mars or Bust at Philly Linux

I gave my “Mars or Bust” talk at PLUG North (Philly Linux Users Group/North) on January 8, 2019. Great audience; lots of good questions. They captured video of the event & have posted it to Google Photos. Presented, as the late great Rod Serling would put it, for your consideration: Mars or Bust.
