# Category Archives: Quantum Quandaries

## Quantum Foundations at the APS March Meeting

The March Meeting of the American Physical Society is taking place March 5-9, 2018 in Los Angeles. There will be sessions on Quantum Foundations, Quantum Resource Theories, and Quantum Thermodynamics. You can submit an abstract for a contributed talk at http://www.aps.org/meetings/march/. The deadline is November 3 at 11:59pm EST.

The APS March Meeting is a great opportunity to advertise recent work in quantum foundations to the wider physics community. I hope you will consider contributing a talk so that we can showcase our research in the strongest possible way.

## On Outreach and Education for the Foundations of Physics

On Saturday 25th April, I took part in a discussion panel sponsored by FQXi at the “New Directions in the Foundations of Physics” conference in Washington DC.  My co-panellists were Sabine Hossenfelder and Dagomir Kaszlikowski.  Sabine has already posted her comments on her blog, and I largely agree with what she had to say.  However, since I wrote out my comments before the discussion, I might as well post them here, as it is just a cut-and-paste job for me.

The discussion was very interesting, and it evoked more passion from the audience than I thought it would.  Given the limited time, I did not give the best responses to all of the comments from the audience, so I have added a few thoughts on the discussion, particularly the points raised by Mile Gu and David Wallace.

Without further ado, here is my intro.

How do we convey foundational physics concepts to non-physics audiences?  It is obviously hard to do so in a way that is both accurate and accessible.  As this conference shows, it is difficult to do so even amongst ourselves.  However, this is not the main problem we should worry about.  Explaining why requires a diversion on the broader aims of outreach.

I think outreach has three main goals: INSPIRATION, EDUCATION, and ACTIVATION.

Inspiration is making physics seem cool and interesting, so that, for example, a high school student might decide to study physics at university.

Education is the obvious one: we want people to understand more physics after the outreach than they did before.

Activation is perhaps less obvious, but it means that we want people to actually DO something after the outreach.  This might be voting for a politician who supports evidence-based policy and science funding, or it might mean persuading people not to employ the services of a new age “quantum healer” who claims to resolve health issues holistically using quantum entanglement.

To my knowledge, there is very little research into the effectiveness of outreach for these various goals.  That’s worrying, because it feels good to win an FQXi essay contest (as well as being good for the bank account), to get immediate feedback on a blog post, and to give a public lecture to a large audience, and, although I have no personal experience of this, I imagine it feels good to have a bestselling popular science book or to appear in a flashy TV documentary.  In the absence of hard data on how best to spend our limited time and resources, we will continue to do the things that feel good, regardless of whether they are the most effective.

Nonetheless, I think it is fair to say that the likes of Neil deGrasse Tyson and Brian Cox are doing a pretty good job on the inspiration front, so the rest of us should not devote too much time to that.  Regarding education, there is now voluminous evidence from physics education research on the best methods of teaching physics to high school students and lower-level undergraduates.  This research is still ignored by the vast majority of institutions, and given that we know that these methods work, I think we would be better off putting our efforts into implementing research-validated curricula in schools and universities rather than trying to do it with outreach, the effectiveness of which is largely unknown.  Incidentally, one of the things we do know about good physics pedagogy is that its effectiveness is largely uncorrelated with the personality of the instructor, which leads me to be suspicious of the personality-driven nature of much scientific outreach.

That leaves activation, and I think we could be doing a much better job here.  Not everyone is going to take action in the name of science, but that does not matter, so long as those who do act do so loudly.  We are past the age of mass media, so we need no longer cater only to the GENERAL public; we can instead go for smaller niche audiences who are currently underserved.  In particular, I am thinking of the science fanboys and fangirls, such as the community of skeptics who like to debunk pseudo-science.  They may be a relatively small community, but they are also the ones most likely to act in the name of science.  Most of them can give you a coherent account of evolution and why it is true, but ask them about quantum theory and you’ll likely get some vague mumblings about waves, particles and the uncertainty principle.  They’d like to understand things more deeply, but we haven’t given them the tools to do so.  I think this is because we have been far too focussed on making our popular accounts accessible to everyone, e.g. publishers always advise against including any equations in pop physics books.  This advice is appropriate for the mass audience, but not if we are targeting niche audiences, who are probably bored of hearing the same vague and inaccurate descriptions in fifty different popsci books.

So, turning back to the question of how we should convey foundational physics concepts to non-physics audiences: it is almost impossible to do so accurately for the mass audience, and it is probably best to go for inspiration in that case.  However, we can, and should, target more accurate explanations, with more math and more subtle details, at those smaller communities who are already passionate about physics, and who are more likely to act on the knowledge when they have it.

Following these remarks, there are two points from the discussion that I want to address.  Firstly, Mile Gu raised the point that we want to direct outreach to the broadest audience possible, as we need popular support to change government policies on science funding, or at least to keep it at a reasonable level.  To this, I responded that only a tiny minority of people are going to change their vote based on science policy, compared to the big issues like the economy, education, and healthcare, so we are better off focussing on that minority.  I now think that this is wrong.  If there is a general consensus within society, then this can influence the policy of all major political parties, regardless of whether it is a vote-changing issue.  An example of this is the issue of gay marriage.  Very few people in the UK would have changed their vote based on this issue alone, but because there was a general feeling in the population that allowing people to marry whoever they choose is a good thing, there was a political consensus that pushed the issue forward.  Similarly, if there were known to be a general consensus in the population that funding basic research without ties to immediate applications is a good idea, then there would be political consensus on that too.

For this, I think we need to go beyond inspiring the general public into thinking that science is cool, by also emphasizing that the process of science and technology development as a whole does not work without the freedom to explore ideas, without knowing in advance what, if any, applications there may be.  We also need to emphasize that science is not just a machine for generating economic growth, but also a key part of human culture, comparable to the arts and humanities, all of which we should fund for their own sake because they enrich the human experience.

Secondly, David Wallace cautioned against my advice to verify the effectiveness of outreach via empirical research, suggesting that to emphasize research too much might make us too bogged down to actually do much outreach.  Instead, he suggested a “let a thousand flowers bloom” approach.  Let people go ahead and do the outreach they want to do, and presumably there will be enough different approaches that we’ll eventually have the desired effect.

I think I answered this badly on the day, effectively conceding David’s point.  However, David’s approach is only valid if we think there is no such thing as bad outreach, i.e. activities that actually harm the goals we are trying to achieve.  The danger is especially acute if there are no selection mechanisms in place that automatically weed out the bad outreach in favour of the good.

There is a compelling analogy here with physics education.  Professors have been left to their own devices to teach in whatever way they want for decades, and they almost universally choose methods that are pedagogically sub-optimal, such as just lecturing from the front for an hour.  These methods can actually harm people’s perception of physics, reinforcing the idea that the subject is too hard for them.  Personally, I think it would be better if all the future medical doctors undergoing their required physics courses came out with a positive impression of the subject, and a good understanding of it, rather than regarding it as an alien subject that is irrelevant to their careers.  It is only through rigorous research that we have developed better pedagogy, which is gradually being accepted in physics departments, although we still have a long way to go.

My position on outreach is that, although we shouldn’t encumber every attempt at outreach with a rigorous research investigation, if we think there are widely employed methodologies that are actually harmful to the aims of outreach then we should verify this empirically, try to figure out what works better, and encourage change.

If there are harmful aspects in current outreach, I suspect they are mostly in things like TV documentaries and popular science books, which are driven by popularity and sales.  A literary agent giving advice on how to write a popular science book is not giving advice on how best to convey the science, but rather on how best to sell it to a publisher, who is in turn concerned with how many people will buy the book.  So the usual advice, to avoid any equations and to emphasize personal stories over the science, might not be good advice for communicating the science, even if it increases popularity and sales.

I think the focus on popularity leads to many popsci tropes, which might turn out to be actively harmful.  For example, there is the focus on stories of “great men struggling with grand ideas”, which may accidentally reinforce the impression that science is too hard for most people and so they should not engage with it, and discourage under-represented minorities from entering the subject.  Similarly, there is an excessive focus on speculative wild-sounding ideas, as opposed to the basics, which may inadvertently give the impression that “anything goes” in physics, and make people question why they should believe scientists over and above politicians and/or their local preacher.

One experiment I would suggest to address this would be to give a bunch of people a popular science book containing a lot of speculative ideas, and a couple of weeks after finishing the book ask them to classify how speculative the various ideas presented in the book are.  A good choice would be Max Tegmark’s “Mathematical Universe” because he goes to great pains at the beginning to classify how speculative his various multiverses are, even including a table.  My hypothesis is that most readers won’t remember how speculative the ideas are, and that ideas from standard model cosmology would be conflated with those of various multiverses in terms of the degree to which they are established.  I expect people will mostly recall the ideas that sound cool, rather than those that are most supported by evidence.  I also suspect that it won’t matter how careful the author is to distinguish speculation from established science, which could be checked by comparing results from Tegmark’s book with any randomly chosen Michio Kaku book.

If my hypothesis is confirmed, then perhaps we could persuade authors to hold back on the speculation a bit, in favour of established science, particularly in a society where the general level of science literacy is quite low.  If they do include speculation, perhaps it would be better to do so with a more skeptical treatment, including a detailed criticism of the ideas.  Perhaps a book written by a small group of experts with conflicting opinions on the speculative ideas is a better way to do this than the traditional single-author popsci books.  Whatever you think about this, these are ideas that we could clearly benefit from investigating empirically.

## Lubos Motl is right

Armchair physicist and anti-quantum zealot Matt Leifer

In recent years noted string theorist and blogger Lubos Motl has increasingly turned his attention to the foundations of quantum theory.  Those of us who study quantum foundations for a living have tended to find his commentary mildly annoying, as he consistently calls those of us who disagree with his views “anti-quantum zealots”, crackpots, and worse.

I have recently come to the realization that Lubos’ views on this subject are completely correct.  Specifically, I now believe the following:

• The Copenhagen founders of quantum theory—Bohr, Heisenberg, Pauli, Born et al.—had things essentially right.  They were only missing the details of decoherence theory in order to properly understand the classical limit.
• The decoherent histories formalism, as proposed by Gell-Mann and Hartle, gives a completely consistent account of these minor details and is the correct way to understand physical properties and probabilities in quantum theory.
• People who work on high energy physics, and especially string theory, are the ultimate arbiters of truth about the nature of quantum theory.  Only they have the background needed to make meaningful statements on the subject.  This is especially true of theorists who are or ever have been employed at Harvard.  Any idea that has not been discussed by these physicists is probably wrong.  No insight is to be gained by actually studying the foundations of quantum theory for several years, rather than working on proper fundamental physics.
• And finally, in the face of any other views on quantum theory, the correct response is always, “It’s quantum, stupid!”

Having adopted this new credo, I now realize that my previous view that quantum theory should be founded on a realist ontology that gives a clear picture of what is going on in reality independently of the observer was wrong-headed.  Lubos’ blog posts on the subject make a compelling argument that my view was guided more by religious zealotry and communist ideology than by empirical data and rational argument.  It therefore seems appropriate that I should enter into a period of repentance by adopting a garb of sackcloth and ashes for a while before emerging cleansed of my previous religious views.

Given the impracticality of wearing sackcloth and ashes in modern life, I have instead decided to wear a t-shirt that identifies me as the anti-quantum zealot that I am.  You can see a picture of me wearing this t-shirt at the top of this post.  Before embarking on a career in string theory, or more likely quitting academia to become an accountant because I do not have the intelligence to understand real physics, I still have several engagements where I shall have to speak about my previous bigoted research.  I therefore promise that I will wear my anti-quantum zealot t-shirt at all such speaking engagements for the next year.

At this point, I would like to urge my colleagues who have also been denounced by Lubos, and those who hold similar views but have so far flown under Lubos’ radar, to reconsider their views and join me in repentance.  If each of us wears an anti-quantum zealot t-shirt publicly, then we may be able to prevent others from following us down the path of ideologically motivated nonsense.

Fortunately, I have made it easy for you to purchase your own anti-quantum zealot apparel and merchandise, from the Spreadshirt shop at this link.  It is available in any colour, so long as it is communist red.  I receive a commission of 2 CAD for every purchase from this shop (the rest goes to Spreadshirt, so complain to them about their overpriced t-shirts rather than me).  I would dearly love to keep that commission money, because I will be short of income for a while as I retrain as a string theorist or accountant.  However, that would greatly complicate my tax situation, so I have decided to donate it to a charity that will protect future generations of physicists from adopting anti-quantum ideas.  For this purpose, my commission will be donated to the Next Einstein Initiative of the African Institute for Mathematical Sciences (AIMS), which seeks to establish centres of excellence in mathematical science across Africa.  AIMS does cover fundamental physics, but I note with approval that they do not currently have programmes in quantum foundations, so they will not be teaching wrong-headed ideas to the next generation of African physicists.

You might be tempted to consider your purchase of anti-quantum zealot merchandise as a charitable contribution, but if you really want to support AIMS you should forget about the t-shirt and just donate all of the money you would have spent to them directly.  Anti-quantum zealot merchandise is only intended for those who want to seriously repent for their anti-quantum beliefs.

In order to encourage donations, either through merchandise purchases or direct contributions to AIMS, I will be offering a special prize to whoever makes the largest donation in response to this post by the end of this month (April).  You simply have to let me know how much you have donated, either by email, or by leaving a comment if you want to boast about how generous you are (I will be asking the winner to verify their donation by sending copies of their receipts).

What is this special prize you ask?  Well, it is a collection of schwag that I stole from my absolute favourite academic publisher—Elsevier—at the recent APS March meeting.

Elsevier schwag

As you can see, it consists of two pens advertising the exciting new journal “Reviews in Physics”, which I assume will soon surpass Reviews of Modern Physics as the premier physics review journal.  I believe this because of the extremely rigorous editorial oversight that Elsevier applies to all of its journals.

In addition to this, you get a luggage tag advertising Elsevier’s offerings in Optics, which is filled with some mysterious blue liquid, because everything is better with blue stuff in it.  If your luggage accidentally ends up at the Elsevier offices because the baggage handlers read the side of the label displayed in the photo rather than the address written on the back, I am assured that Elsevier will apply their open access policy to your bags and charge you $80 for their return.

## Quantum Times Book Reviews

Following Tuesday’s post, here is the second piece I wrote for the latest issue of the Quantum Times.  It is a review of two recent popular science books on quantum computing by John Gribbin and Jonathan Dowling.  Jonathan Dowling has the now obligatory book author’s blog, which you should also check out.

### Book Review

• Title: Computing With Quantum Cats: From Colossus To Qubits
• Author: John Gribbin
• Publisher: Bantam, 2013

• Title: Schrödinger’s Killer App: Race To Build The World’s First Quantum Computer
• Author: Jonathan Dowling
• Publisher: CRC Press, 2013

The task of writing a popular book on quantum computing is a daunting one.  In order to get it right, you need to explain the subtleties of theoretical computer science, at least to the point of understanding what makes some problems hard and some easy to tackle on a classical computer.  You then need to explain the subtle distinctions between classical and quantum physics.  Both of these topics could, and indeed have, filled entire popular books on their own.

Gribbin’s strategy is to divide his book into three sections of roughly equal length: one on the history of classical computing, one on quantum theory, and one on quantum computing.  The advantage of this is that it makes the book well paced, as the reader is not introduced to too many new ideas at the same time.  The disadvantage is that there is relatively little space dedicated to the main topic of the book.
In order to weave the book together into a narrative, Gribbin dedicates each chapter except the last to an individual prominent scientist, specifically: Turing, von Neumann, Feynman, Bell and Deutsch.  This works well, as it allows him to interleave the science with biography, making the book more accessible.  The first two sections, on classical computing and quantum theory, display Gribbin’s usual adeptness at popular writing.  In the quantum section, my usual pet peeves about things being described as “in two states at the same time” and undue prominence being given to the many-worlds interpretation apply, but no more than to any other popular treatment of quantum theory.  The explanations are otherwise very good.

I would, however, quibble with some of the choice of material for the classical computing section.  It seems to me that the story of how we got from abstract Turing machines to modern day classical computers, which is the main topic of the von Neumann chapter, is tangential to the main topic of the book, and Gribbin fails to discuss more relevant topics such as the circuit model and computational complexity in this section.  Instead, these topics are squeezed very briefly into the quantum computing section, and Gribbin flubs the description of computational complexity.  For example, see if you can spot the problems with the following three quotes:

“…problems that can be solved by efficient algorithms belong to a category that mathematicians call `complexity class P’…”

“Another class of problem, known as NP, are very difficult to solve…”

“All problems in P are, of course, also in NP.”

The last chapter of Gribbin’s book is a tour of the proposed experimental implementations of quantum computing and the success achieved so far.  This chapter tries to cover too much material too quickly and is rather credulous about the prospects of each technology.  Gribbin also persists with the device of including potted biographies of the main scientists involved.
The total effect is like running at high speed through an unfamiliar wood, while someone slaps you in the face rapidly with CVs and scientific papers.  I think the inclusion of such a detailed chapter was a mistake, especially since it will seem badly out of date in just a year or two.

Finally, Gribbin includes an epilogue about the controversial issue of discord in non-universal models of quantum computing.  This is a bold inclusion, which will either seem prescient or silly after the debate has died down.  My own preference would have been to focus on well-established theory.

In summary, Gribbin has written a good popular book on quantum computing, perhaps the best so far, but it is not yet a great one.  It is not quite the book you should give to your grandmother to explain what you do.  I fear she will unjustly come out of it thinking she is not smart enough to understand, whereas in fact the failure is one of unclear explanation in a few areas on the author’s part.

Dowling’s book is a different kettle of fish from Gribbin’s.  He claims to be aiming for the same audience of scientifically curious lay readers, but I am afraid they will struggle.  Dowling covers more or less everything he is interested in, and I think the rapid-fire topic changes would leave the lay reader confused.  However, we all know that popular science books written by physicists are really meant to be read by other physicists rather than by the lay reader.  From this perspective, there is much valuable material in Dowling’s book.

Dowling is really on form when he is discussing his personal experience.  This mainly occurs in chapters 4 and 5, which are about the experimental implementation of quantum computing and other quantum technologies.  There is also a lot of material about the internal machinations of military and intelligence funding agencies, which Dowling has copious experience of on both sides of the fence.
Much of this material is amusing and will be of value to those interested in applying for such funding.  As you might expect, Dowling’s assessment of the prospects of the various proposed technologies is much more accurate and conservative than Gribbin’s.  In particular, his treatment of the cautionary tale of NMR quantum computing is masterful, and his assessment of non-fully-universal quantum computers, such as the D-Wave One, is insightful.  Dowling also gives an excellent account of quantum technologies beyond quantum computing and cryptography, such as quantum metrology, which are often neglected in popular treatments.

Chapter 6 is also interesting, although it is a bit of a hodge-podge of different topics.  It starts with a debunking of David Kaiser’s thesis that the “hippies” of the Fundamental Fysiks group in Berkeley were instrumental in the development of quantum information via their involvement in the no-cloning theorem.  Dowling rightly points out that the origins of quantum cryptography are independent of this, going back to Wiesner in the 1970s, and that the no-cloning theorem would probably have been discovered as a result of this work anyway.  This section is only missing a discussion of the role of Wheeler, since he was really the person who made it OK for mainstream physicists to think about the foundations of quantum theory again, and who encouraged his students and postdocs to do so in information-theoretic terms.

Later in the chapter, Dowling moves into extremely speculative territory, arguing for “the reality of Hilbert space” and discussing what quantum artificial intelligence might be like.  I disagree with about as much as I agree with in this section, but it is stimulating and entertaining nonetheless.

You may notice that I have avoided talking about the first few chapters of the book so far.  Unfortunately, I do not have many positive things to say about them.  The first couple of chapters cover the EPR experiment, Bell’s theorem, and entanglement.
Here, Dowling employs the all too common device of psychoanalysing Einstein.  As usual in such treatments, there is a thin caricature of Einstein’s actual views, followed by a lot of comments along the lines of “Einstein wouldn’t have liked this” and “tough luck Einstein”.  I personally hate this sort of narrative with a passion, particularly since Einstein’s response to quantum theory was perfectly rational at the time he made it, and who knows what he would have made of Bell’s theorem?

Worse than this, Dowling’s treatment perpetuates the common myth that determinism is one of the assumptions of both the EPR argument and Bell’s theorem.  Of course, CHSH does not assume this, but even EPR and Bell’s original argument only use it when it can be derived from the quantum predictions.  Thus, there is not the option of “uncertainty” for evading the consequences of these theorems, as Dowling maintains throughout the book.

However, the worst feature of these chapters is the poor choice of analogy.  Dowling insists on using a single analogy to cover everything: that of an analog clock or wristwatch.  This analogy is quite good for explaining classical common-cause correlations, e.g. Alice and Bob’s watches will always be anti-correlated if they are located in timezones with a six-hour time difference, and for explaining the use of modular arithmetic in Shor’s algorithm.  However, since Dowling has earlier placed such great emphasis on the interpretation of the watch readings in terms of actual time, it falls flat when describing entanglement, in which we have to imagine that the hour hand randomly points to an hour that has nothing to do with time.  I think this is confusing, and that a more abstract analogy, e.g. colored balls in boxes, would have been better.

There are also a few places where Dowling makes flatly incorrect statements.  For example, he says that the OR gate does mod 2 addition, and he says that the state |00> + |01> + |10> + |11> is entangled.
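For the record, here is a quick numerical sketch (my own illustration, not from the book) of why both statements are wrong: mod 2 addition is the XOR gate rather than OR, and the uniform superposition state factorizes as |+> ⊗ |+>, so it is a product state, not an entangled one.

```python
import numpy as np

# Mod 2 addition is XOR, not OR: OR(1,1) = 1, but 1 + 1 = 0 (mod 2).
for a in (0, 1):
    for b in (0, 1):
        assert (a ^ b) == (a + b) % 2   # XOR agrees with addition mod 2
assert (1 | 1) != (1 + 1) % 2           # OR fails on the input (1,1)

# The state (|00> + |01> + |10> + |11>)/2 equals |+> tensor |+>.
# A two-qubit pure state is a product state iff its 2x2 coefficient
# matrix has rank 1 (i.e. a single Schmidt coefficient).
psi = np.ones(4) / 2.0                          # uniform amplitudes
plus = np.array([1.0, 1.0]) / np.sqrt(2)        # |+> = (|0> + |1>)/sqrt(2)
assert np.allclose(psi, np.kron(plus, plus))    # factorizes exactly
assert np.linalg.matrix_rank(psi.reshape(2, 2)) == 1  # Schmidt rank 1

# By contrast, the Bell state (|00> + |11>)/sqrt(2) has Schmidt rank 2,
# which is what genuine entanglement of a pure state looks like.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
assert np.linalg.matrix_rank(bell.reshape(2, 2)) == 2
```

The Schmidt-rank test used here is standard for pure states: reshape the amplitude vector into a matrix and check its rank, with rank 1 meaning the state is a product state.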
I also found Dowling’s criterion for when something should be called an ENT gate (his terminology for the CNOT gate) confusing.  He says that something is not an ENT gate unless it outputs an entangled state, but of course this depends on what the input state is.  For example, he says that NMR quantum computers have no ENT gates, whereas I think they do have them; they just cannot produce the pure input states needed to generate entanglement from them.

The most annoying thing about this book is that it is in dire need of a good editor.  There are many typos and basic fact-checking errors.  For example, John Bell is apparently Scottish, and at one point a D-Wave computer costs a mere $10,000.  There is also far too much repetition.
For example, the tale of how funding for classical optical computing
dried up after Conway and Mead instigated VLSI design for silicon
chips, but then the optical technology was reused to build the
internet, is told in reasonable detail at least three different times.
The first time it is an insightful comment, but by the third it is
like listening to an older relative with a limited stock of stories.
There are also whole sections that are so tangentially related to the
main topic that they should have been omitted, such as the long
anti-string-theory rant in chapter six.

Dowling has a cute and geeky sense of humor, which comes through well
most of the time, but on occasion the humor gets in the way of clear
exposition. For example, in a rather silly analogy between Shor’s
algorithm and a fruitcake, the following occurs:

“We dive into the molassified rum extract of the classical core of the
Shor algorithm fruitcake and emerge (all sticky) with a theorem proved
in the 1760s…”

If he were a writing student, Dowling would surely get kicked out of
class for that. Finally, unless your name is David Foster Wallace, it
is not a good idea to put things that are essential to following the
plot in the footnotes.  If you are not a quantum scientist then it is
unlikely that you know who Charlie Bennett and Dave Wineland are or
what NIST is, in which case the quirky names chosen in the first few
chapters will be utterly confusing.  They are explained in the main
text, but only much later. Otherwise, you have to hope that the
reader is not the sort of person who ignores footnotes. Overall,
having a sense of humor is a good thing, but there is such a thing as
being too cute.

Despite these criticisms, I would still recommend Dowling’s book to
physicists and other academics with a professional interest in quantum
technology. I think it is a valuable resource on the history of the
subject. I would steer the genuine lay reader more in the direction
of Gribbin’s book, at least until a better option becomes available.

## Quantum Times Article about Surveys on the Foundations of Quantum Theory

A new edition of The Quantum Times (newsletter of the APS topical group on Quantum Information) is out and I have two articles in it. I am posting the first one here today and the second, a book review of two recent books on quantum computing by John Gribbin and Jonathan Dowling, will be posted later in the week. As always, I encourage you to download the newsletter itself because it contains interesting articles and announcements other than my own. In particular, I would like to draw your attention to the fact that Ian Durham, current editor of The Quantum Times, is stepping down as editor at some point before the March meeting. If you are interested in getting more involved in the topical group, I would encourage you to put yourself forward. Details can be found at the end of the newsletter.

Upon reformatting my articles for the blog, I realized that I have reached almost Miguel Navascues levels of crankiness. I guess this might be because I had a stomach bug when I was writing them. Today’s article is a criticism of the recent “Snapshots of Foundational Attitudes Toward Quantum Mechanics” surveys that appeared on the arXiv and generated a lot of attention. The article is part of a point-counterpoint, with Nathan Harshman defending the surveys. Here, I am only posting my part in its original version. The newsletter version is slightly edited from this, most significantly in the removal of my carefully constructed title.

### Lies, Damned Lies, and Snapshots of Foundational Attitudes Toward Quantum Mechanics

Q1. Which of the following questions is best resolved by taking a straw
poll of physicists attending a conference?

A. How long ago did the big bang happen?

B. What is the correct approach to quantum gravity?

C. Is nature supersymmetric?

D. What is the correct way to understand quantum theory?

E. None of the above.

By definition, a scientific question is one that is best resolved by
rational argument and appeal to empirical evidence.  It does not
matter if definitive evidence is lacking, so long as it is conceivable
that evidence may become available in the future, possibly via
experiments that we have not conceived of yet.  A poll is not a valid
method of resolving a scientific question.  If you answered anything
other than E to the above question then you must think that at least
one of A-D is not a scientific question, and the most likely culprit
is D.  If so, I disagree with you.

It is possible to legitimately disagree on whether a question is
scientific.  Our imaginations cannot conceive of all possible ways,
however indirect, that a question might get resolved.  The lesson from
history is that we are often wrong in declaring questions beyond the
reach of science.  For example, when big bang cosmology was first
introduced, many viewed it as unscientific because it was difficult to
conceive of how its predictions might be verified from our lowly
position here on Earth.  We have since gone from a situation in which
many people thought that the steady state model could not be
definitively refuted, to a big bang consensus with wildly fluctuating
estimates of the age of the universe, and finally to a precision value
of 13.772 +/- 0.059 billion years from the WMAP data.

Traditionally, many physicists separated quantum theory into its
“practical part” and its “interpretation”, with the latter viewed as
more a matter of philosophy than physics.  John Bell refuted this by
showing that conceptual issues have experimental consequences.  The
more recent development of quantum information and computation also
shows the practical value of foundational thinking.  Despite these
developments, the view that “interpretation” is a separate
unscientific subject persists.  Partly this is because we have a
tendency to redraw the boundaries.  “Interpretation” is then a
catch-all term for the issues we cannot resolve, such as whether
Copenhagen, Bohmian mechanics, many-worlds, or something else is the
best way of looking at quantum theory.  However, the lesson of big
bang cosmology cautions against labelling these issues unscientific.
Although interpretations of quantum theory are constructed to yield
the same or similar enough predictions to standard quantum theory,
this need not be the case when we move beyond the experimental regime
that is now accessible.  Each interpretation is based on a different
explanatory framework, and each suggests different ways of modifying
or generalizing the theory.  If we think that quantum theory is not
our final theory then interpretations are relevant in constructing its
successor.  This may happen in quantum gravity, but it may equally
happen at lower energies, since we do not yet have an experimentally
confirmed theory that unifies the other three forces.  The need to
change quantum theory may happen sooner than you expect, and whichever
explanatory framework yields the next theory will then be proven
correct.  It is for this reason that I think question D is scientific.

Regardless of the status of question D, straw polls, such as the three
that recently appeared on the arXiv [1-3], cannot help us to resolve
it, and I find it puzzling that we choose to conduct them for this
question, but not for other controversial issues in physics.  Even
during the decades in which the status of big bang cosmology was
controversial, I know of no attempts to poll cosmologists’ views on
it.  Such a poll would have been viewed as meaningless by those who
thought cosmology was unscientific, and as the wrong way to resolve
the question by those who did think it was scientific.  The same is
true of question D, and the fact that we do nevertheless conduct polls
suggests that the question is not being treated with the same respect
as the others on the list.

Polls on foundational questions are nevertheless relevant to the
sociology of science, and they might be useful to the beginning
graduate student who is more concerned with their career prospects
than with following their own rational instincts.  From this
perspective, it would be just as interesting to know what percentage
of physicists think that supersymmetry is on the right track as it is
to know about their views on quantum theory.  However, to answer such
questions, polls need careful design and statistical analysis.  None
of the three polls claims to be scientific and none of them contain
any error analysis.  What then is the point of them?

The three recent polls are based on a set of questions designed by
Schlosshauer, Kofler and Zeilinger, who conducted the first poll at a
conference organized by Zeilinger [1].  The questions go beyond just
asking for a preferred interpretation of quantum theory, but in the
interests of brevity I will focus on this aspect alone.  In the
Schlosshauer et al.  poll, Copenhagen comes out top, closely followed
by “information-based/information-theoretical” interpretations.  The
second comes from a conference called “The Philosophy of Quantum
Mechanics” [2].  There was a larger proportion of self-identified
philosophers amongst those surveyed and “I have no preferred
interpretation” came out as the clear winner, not so closely followed
by de Broglie-Bohm theory, which had obtained zero votes in the poll
of Schlosshauer et al.  Copenhagen is in joint third place along with
objective collapse theories.  The third poll comes from “Quantum
theory without observers III” [3], at which de Broglie-Bohm got a
whopping 63% of the votes, not so closely followed by objective
collapse.

What we can conclude from this is that people who went to a meeting
organized by Zeilinger are likely to have views similar to Zeilinger.
People who went to a philosophy conference are less likely to be
committed, but are much more likely to pick a realist interpretation
than those who hang out with Zeilinger.  Finally, people who went to a
meeting that is mainly about de Broglie-Bohm theory, organized by the
world’s most prominent Bohmians, are likely to be Bohmians.  What have
we learned from this that we did not know already?

One thing I find especially amusing about these polls is how easy it
would have been to obtain a more representative sample of physicists’
views.  It is straightforward to post a survey on the internet for
free.  Then all you have to do is write a letter to Physics Today
asking people to complete the survey and send the URL to a bunch of
mailing lists.  The sample so obtained would still be self-selecting
to some degree, but much less so than at a conference dedicated to
some particular approach to quantum theory.  The sample would also be
larger by at least an order of magnitude.  The ease with which this
could be done only illustrates the extent to which these surveys
should not even be taken semi-seriously.

I could go on about how unrepresentative the samples are, and about
how the error bars would be huge if you actually bothered to calculate
them.  It is amusing how willing scientists are to abandon the
scientific method when they address questions outside their own field.
However, I think I have taken up enough of your time already.  It is
time we recognized these surveys for the nonsense that they are.

#### References

[1] M. Schlosshauer, J. Kofler and A. Zeilinger, A Snapshot of
Foundational Attitudes Toward Quantum Mechanics, arXiv:1301.1069
(2013).

[2] C. Sommer, Another Survey of Foundational Attitudes Towards
Quantum Mechanics, arXiv:1303.2719 (2013).

[3] T. Norsen and S. Nelson, Yet Another Snapshot of Foundational
Attitudes Toward Quantum Mechanics, arXiv:1306.4646 (2013).

## FQXi Essay Contest

I wrote an essay for the FQXi essay contest.  This year’s theme is “It from bit or bit from it?” and I decided to write about the extent to which Wheeler’s “it from bit” helps us to understand the origin of quantum probabilities from a subjective Bayesian point of view.   You can go here to read and rate the essay and it would be especially great if any fellow FQXi members would do that.

## Quantum Times Article on the PBR Theorem

I recently wrote an article (pdf) for The Quantum Times (Newsletter of the APS Topical Group on Quantum Information) about the PBR theorem. There is some overlap with my previous blog post, but the newsletter article focuses more on the implications of the PBR result, rather than the result itself. Therefore, I thought it would be worth reproducing it here. Quantum types should still download the original newsletter, as it contains many other interesting things, including an article by Charlie Bennett on logical depth (which he has also reproduced over at The Quantum Pontiff). APS members should also join the TGQI, and if you are at the March meeting this week, you should check out some of the interesting sessions they have organized.

Note: Due to the appearance of this paper, I would weaken some of the statements in this article if I were writing it again. The results of the paper imply that the factorization assumption is essential to obtain the PBR result, so this is an additional assumption that needs to be made if you want to prove things like Bell’s theorem directly from psi-ontology rather than using the traditional approach. When I wrote the article, I was optimistic that a proof of the PBR theorem that does not require factorization could be found, in which case teaching PBR first and then deriving other results like Bell as a consequence would have been an attractive pedagogical option. However, due to the necessity for stronger assumptions, I no longer think this.

OK, without further ado, here is the article.

## PBR, EPR, and all that jazz

In the past couple of months, the quantum foundations world has been abuzz about a new preprint entitled “The Quantum State Cannot be Interpreted Statistically” by Matt Pusey, Jon Barrett and Terry Rudolph (henceforth known as PBR). Since I wrote a blog post explaining the result, I have been inundated with more correspondence from scientists and more requests for comment from science journalists than at any other point in my career. Reaction to the result amongst quantum researchers has been mixed, with many people reacting negatively to the title, which can be misinterpreted as an attack on the Born rule. Others have managed to read past the title, but are still unsure whether to credit the result with any fundamental significance. In this article, I would like to explain why I think that the PBR result is the most significant constraint on hidden variable theories that has been proved to date. It provides a simple proof of many other known theorems, and it supercharges the EPR argument, converting it into a rigorous proof of nonlocality that has the same status as Bell’s theorem. Before getting to this though, we need to understand the PBR result itself.

### What are Quantum States?

One of the most debated issues in the foundations of quantum theory is the status of the quantum state. On the ontic view, quantum states represent a real property of quantum systems, somewhat akin to a physical field, albeit one with extremely bizarre properties like entanglement. The alternative to this is the epistemic view, which sees quantum states as states of knowledge, more akin to the probability distributions of statistical mechanics. A psi-ontologist
(as supporters of the ontic view have been dubbed by Chris Granade) might point to the phenomenon of interference in support of their view, and also to the fact that pretty much all viable realist interpretations of quantum theory, such as many-worlds or Bohmian mechanics, include an ontic state. The key argument in favor of the epistemic view is that it dissolves the measurement problem, since the fact that states undergo a discontinuous change in the light of measurement results does not then imply the existence of any real physical process. Instead, the collapse of the wavefunction is more akin to the way that classical probability distributions get updated by Bayesian conditioning in the light of new data.

Many people who advocate a psi-epistemic view also adopt an anti-realist or neo-Copenhagen point of view on quantum theory in which the quantum state does not represent knowledge about some underlying reality, but rather it only represents knowledge about the consequences of measurements that we might make on the system. However, there remained the nagging question of whether it is possible in principle to construct a realist interpretation of quantum theory that is also psi-epistemic, or whether the realist is compelled to think that quantum states are real. PBR have answered this question in the negative, at least within the standard framework for hidden variable theories that we use for other no go results such as Bell’s theorem. As with Bell’s theorem, there are loopholes, so it is better to say that PBR have placed a strong constraint on realist psi-epistemic interpretations, rather than ruling them out entirely.

### The PBR Result

To properly formulate the result, we need to know a bit about how quantum states are represented in a hidden variable theory. In such a theory, quantum systems are assumed to have real pre-existing properties that are responsible for determining what happens when we make a measurement. A full specification of these properties is what we mean by an ontic state of the system. In general, we don’t have precise control over the ontic state so a quantum state corresponds to a probability distribution over the ontic states. This framework is illustrated below.

In an ontic model, a quantum state (indicated heuristically on the left as a vector in the Bloch sphere) is represented by a probability distribution over ontic states, as indicated on the right.

A hidden variable theory is psi-ontic if knowing the ontic state of the system allows you to determine the (pure) quantum state that was prepared uniquely. Equivalently, the probability distributions corresponding to two distinct pure states do not overlap. This is illustrated below.

Representation of a pair of quantum states in a psi-ontic model

A hidden variable theory is psi-epistemic if it is not psi-ontic, i.e. there must exist an ontic state that is possible for more than one pure state, or, in other words, there must exist two nonorthogonal pure states with corresponding distributions that overlap. This is illustrated below.

Representation of nonorthogonal states in a psi-epistemic model

These definitions of psi-ontology and psi-epistemicism may seem a little abstract, so a classical analogy may be helpful. In Newtonian mechanics the ontic state of a particle is a point in phase space, i.e. a specification of its position and momentum. Other ontic properties of the particle, such as its energy, are given by functions of the phase space point, i.e. they are uniquely determined by the ontic state. Likewise, in a hidden variable theory, anything that is a unique function of the ontic state should be regarded as an ontic property of the system, and this applies to the quantum state in a psi-ontic model. The definition of a psi-epistemic model as the negation of this is very weak, e.g. it could still be the case that most ontic states are only possible in one quantum state and just a few are compatible with more than one. Nonetheless, even this very weak notion is ruled out by PBR.
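The psi-ontic/psi-epistemic distinction is easy to make concrete in a toy setting. The following sketch (my own illustration, not a model of quantum theory; the function and dictionary names are invented for this example) represents each "quantum state" as a probability distribution over a small finite set of ontic states and checks whether any ontic state is possible for more than one of them:

```python
# Toy illustration of the psi-ontic vs psi-epistemic distinction.
# Each "quantum state" is a probability distribution (a list of
# probabilities) over a finite set of ontic states.

def is_psi_epistemic(distributions, tol=1e-12):
    """True if some ontic state is possible for more than one quantum
    state, i.e. if the supports of at least two distributions overlap."""
    names = list(distributions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if any(pa > tol and pb > tol
                   for pa, pb in zip(distributions[a], distributions[b])):
                return True
    return False

# Disjoint supports: each ontic state fixes the quantum state -> psi-ontic.
ontic_model = {"psi0": [0.5, 0.5, 0.0, 0.0],
               "psi1": [0.0, 0.0, 0.5, 0.5]}
print(is_psi_epistemic(ontic_model))      # False: psi-ontic

# Ontic state 1 is possible for both quantum states -> psi-epistemic.
epistemic_model = {"psi0": [0.5, 0.5, 0.0],
                   "psi1": [0.0, 0.5, 0.5]}
print(is_psi_epistemic(epistemic_model))  # True
```

The PBR theorem says that, within the standard framework and given the factorization assumption, any ontic model reproducing quantum statistics must look like the first case for every pair of distinct pure states.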

The proof of the PBR result is quite simple, but I will not review it here because it is summarized in my blog post and the original paper is also very readable. Instead, I want to focus on its implications.

### Size of the Ontic State Space

A trivial consequence of the PBR result is that the cardinality of the ontic state space of any hidden variable theory, even for just a qubit, must be infinite; in fact, it must be at least that of the continuum. This is because there must be at least one ontic state for each quantum state, and there is a continuum of the latter. The fact that there must be infinitely many ontic states was previously proved by Lucien Hardy under the name “Ontological Excess Baggage theorem”, but we can now
view it as a corollary of PBR. If you think about it, this property is quite surprising because we can only extract one or two bits from a qubit (depending on whether we count superdense coding) so it would be natural to assume that a hidden variable state could be specified by a finite amount of information.

Hidden variable theories provide one possible method of simulating a quantum computer on a classical computer by simply tracking the value of the ontic state at each stage in the computation. This enables us to sample from the probability distribution of any quantum measurement at any point during the computation. Another method is to simply store a representation of the quantum state at each point in time. This second method is clearly inefficient, as the number of parameters required to specify a quantum state grows exponentially with the number of qubits. The PBR theorem tells us that the hidden variable method cannot be any better, as it requires an ontic state space that is at least as big as the set of quantum states. This conclusion was previously drawn by Alberto Montina using different methods, but again it now becomes a corollary of PBR. This result falls short of saying that any classical simulation of a quantum computer must have exponential space complexity, since we usually only have to simulate the outcome of one fixed measurement at the end of the computation and our simulation does not have to track the slice-by-slice causal evolution of the quantum circuit. Indeed, pretty much the first nontrivial result in quantum computational complexity theory, proved by Bernstein and Vazirani, showed that quantum circuits can be simulated with polynomial memory resources. Nevertheless, this result does reaffirm that we need to go beyond slice-by-slice simulations of quantum circuits in looking for efficient classical algorithms.
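The exponential blow-up of the "store the quantum state" method comes from simple parameter counting: an n-qubit pure state is a vector of 2^n complex amplitudes. A minimal sketch:

```python
# Parameter count for the naive state-vector simulation method:
# an n-qubit pure state requires 2**n complex amplitudes.

def amplitudes(n_qubits):
    return 2 ** n_qubits

for n in (1, 10, 50):
    print(n, amplitudes(n))
# 50 qubits already require 2**50 (about 10**15) amplitudes, which is
# why slice-by-slice state-vector simulation is hopeless at scale.
```

By the PBR theorem, the hidden-variable alternative of tracking an ontic state fares no better, since the ontic state space must be at least as large as the set of quantum states.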

### Supercharged EPR Argument

As emphasized by Harrigan and Spekkens, a variant of the EPR argument favoured by Einstein shows that any psi-ontic hidden variable theory must be nonlocal. Thus, prior to Bell’s theorem, the only open possibility for a local hidden variable theory was a psi-epistemic theory. Of course, Bell’s theorem rules out all local hidden variable theories, regardless of the status of the quantum state within them. Nevertheless, the PBR result now gives an arguably simpler route to the same conclusion by ruling out psi-epistemic theories, allowing us to infer nonlocality directly from EPR.

A sketch of the argument runs as follows. Consider a pair of qubits in the singlet state. When one of the qubits is measured in an orthonormal basis, the other qubit collapses to one of two orthogonal pure states. By varying the basis that the first qubit is measured in, the second qubit can be made to collapse in any basis we like (a phenomenon that Schroedinger called “steering”). If we restrict attention to two possible choices of measurement basis, then there are
four possible pure states that the second qubit might end up in. The PBR result implies that the sets of possible ontic states for the second system for each of these pure states must be disjoint. Consequently, the sets of possible ontic states corresponding to the two distinct choices of basis are also disjoint. Thus, the ontic state of the second system must depend on the choice of measurement made on the first system and this implies nonlocality because I can decide which measurement to perform on the first system at spacelike separation from the second.
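The steering step in this argument can be checked numerically. The sketch below (my own illustration; `steered_state` is an invented helper) computes the post-measurement state of the second qubit of a singlet for each outcome of a Z or X measurement on the first qubit, yielding the four distinct pure states used in the argument:

```python
import numpy as np

# Singlet state |psi-> = (|01> - |10>)/sqrt(2) of two qubits,
# written in the product basis |00>, |01>, |10>, |11>.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def steered_state(outcome):
    """State of qubit 2 after qubit 1 is measured and found in `outcome`:
    contract <outcome| against qubit 1's index and renormalize."""
    psi2 = np.tensordot(outcome.conj(), singlet.reshape(2, 2), axes=(0, 0))
    return psi2 / np.linalg.norm(psi2)

z0, z1 = np.array([1, 0]), np.array([0, 1])                 # Z basis
xp = np.array([1, 1]) / np.sqrt(2)                          # X basis |+>
xm = np.array([1, -1]) / np.sqrt(2)                         # X basis |->

# Z outcomes steer qubit 2 to |1> or |0> (up to a phase);
# X outcomes steer it to |-> or |+>: four distinct pure states.
for outcome in (z0, z1, xp, xm):
    print(np.round(steered_state(outcome), 3))
```

Since the four steered states are pairwise nonorthogonal across the two bases, the PBR disjointness of their ontic supports is exactly what the argument needs.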

### PBR as a proto-theorem

We have seen that the PBR result can be used to establish some known constraints on hidden variable theories in a very straightforward way. There is more to this story than I can possibly fit into this article, and I suspect that every major no-go result for hidden variable theories may fall under the rubric of PBR. Thus, even if you don’t care a fig about fancy distinctions between ontic and epistemic states, it is still worth devoting a few braincells to the PBR result. I predict that it will become viewed as the basic result about hidden variable theories, and that we will end up teaching it to our students even before such stalwarts as Bell’s theorem and Kochen-Specker.