
Quantum Mechanics and Nonlocality

A Popular Physics Discussion

Travis Norsen in conversation with Matt Leifer

Wednesday October 21, 5pm PDT (California Time)

The Institute for Quantum Studies at Chapman University presents an online discussion between Dr. Travis Norsen (Smith College) and Dr. Matthew Leifer (co-Director of the Institute for Quantum Studies at Chapman) on quantum mechanics and nonlocality. After studying physics and philosophy as an undergraduate at Harvey Mudd College and then earning a PhD in theoretical nuclear astrophysics at the University of Washington, Travis Norsen returned to his two great passions: teaching physics to undergraduates and working independently on the foundations of quantum mechanics. He is currently a lecturer in the physics department at Smith College in Northampton, Massachusetts. In addition to authoring the first systematic textbook on quantum foundations, Travis has written extensively on the EPR argument and Bell’s Theorem and has also worked on the de Broglie-Bohm pilot-wave theory. One idiosyncratic theme of his thinking about foundational questions is a stress on the important role played by what Bell called “local beables” in making candidate theories empirically viable. In addition to physics and philosophy, Travis (like Einstein) enjoys productive physical activities such as chopping wood; he loves gardening and cooking; and he plays, coaches, and has recently written a book about soccer. The conversation will be broadcast live on YouTube. There will be an opportunity for audience Q&A after the event.

The Many Worlds of Quantum Mechanics

This post exists because many people have complained that the link pointed to a dummy website rather than a page with details of the event. There are no more details of the event other than what you have already seen on Twitter, Facebook, etc. or the email you received. The link will point directly to the YouTube livestream on the day of the event rather than here. I will also post the livestream link here once it has been set up in case anyone bookmarks this page by mistake.

Sean Carroll

The Many Worlds of Quantum Mechanics
A Popular Physics Discussion
Sean Carroll in conversation with Matt Leifer
Wednesday September 16, 5pm PDT (California Time)

The Institute for Quantum Studies at Chapman University presents an online discussion between Dr. Sean Carroll (Caltech) and Dr. Matthew Leifer (co-Director of the Institute for Quantum Studies at Chapman) on the many-worlds interpretation of quantum mechanics. Dr. Carroll is a theoretical physicist, specializing in quantum mechanics, gravitation, cosmology, statistical mechanics, and foundations of physics. He is also a prolific author of popular science books, and his latest – Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime – argues that quantum mechanics is best explained in terms of multiple universes that are constantly splitting from one another, and explains how this point of view may help us to understand quantum gravity. This will be the topic of conversation with Dr. Leifer, which will be accessible to a general audience. The conversation will be broadcast live on YouTube. There will be an opportunity for audience Q&A and a book giveaway during the event.

Quantum Times Article about Surveys on the Foundations of Quantum Theory

A new edition of The Quantum Times (newsletter of the APS topical group on Quantum Information) is out and I have two articles in it. I am posting the first one here today and the second, a book review of two recent books on quantum computing by John Gribbin and Jonathan Dowling, will be posted later in the week. As always, I encourage you to download the newsletter itself because it contains other interesting articles and announcements besides my own. In particular, I would like to draw your attention to the fact that Ian Durham, current editor of The Quantum Times, is stepping down as editor at some point before the March meeting. If you are interested in getting more involved in the topical group, I would encourage you to put yourself forward. Details can be found at the end of the newsletter.

Upon reformatting my articles for the blog, I realized that I have reached almost Miguel Navascues levels of crankiness. I guess this might be because I had a stomach bug when I was writing them. Today’s article is a criticism of the recent “Snapshots of Foundational Attitudes Toward Quantum Mechanics” surveys that appeared on the arXiv and generated a lot of attention. The article is part of a point-counterpoint, with Nathan Harshman defending the surveys. Here, I am only posting my part in its original version. The newsletter version is slightly edited from this, most significantly in the removal of my carefully constructed title.

Lies, Damned Lies, and Snapshots of Foundational Attitudes Toward Quantum Mechanics

Q1. Which of the following questions is best resolved by taking a straw poll of physicists attending a conference?

A. How long ago did the big bang happen?

B. What is the correct approach to quantum gravity?

C. Is nature supersymmetric?

D. What is the correct way to understand quantum theory?

E. None of the above.

By definition, a scientific question is one that is best resolved by rational argument and appeal to empirical evidence. It does not matter if definitive evidence is lacking, so long as it is conceivable that evidence may become available in the future, possibly via experiments that we have not conceived of yet. A poll is not a valid method of resolving a scientific question. If you answered anything other than E to the above question then you must think that at least one of A-D is not a scientific question, and the most likely culprit is D. If so, I disagree with you.

It is possible to legitimately disagree on whether a question is scientific. Our imaginations cannot conceive of all possible ways, however indirect, that a question might get resolved. The lesson from history is that we are often wrong in declaring questions beyond the reach of science. For example, when big bang cosmology was first introduced, many viewed it as unscientific because it was difficult to conceive of how its predictions might be verified from our lowly position here on Earth. We have since gone from a situation in which many people thought that the steady state model could not be definitively refuted, to a big bang consensus with wildly fluctuating estimates of the age of the universe, and finally to a precision value of 13.77 +/- 0.059 billion years from the WMAP data.

Traditionally, many physicists separated quantum theory into its “practical part” and its “interpretation”, with the latter viewed as more a matter of philosophy than physics. John Bell refuted this by showing that conceptual issues have experimental consequences. The more recent development of quantum information and computation also shows the practical value of foundational thinking. Despite these developments, the view that “interpretation” is a separate unscientific subject persists. Partly this is because we have a tendency to redraw the boundaries. “Interpretation” is then a catch-all term for the issues we cannot resolve, such as whether Copenhagen, Bohmian mechanics, many-worlds, or something else is the best way of looking at quantum theory. However, the lesson of big bang cosmology cautions against labelling these issues unscientific. Although interpretations of quantum theory are constructed to yield the same or similar enough predictions to standard quantum theory, this need not be the case when we move beyond the experimental regime that is now accessible. Each interpretation is based on a different explanatory framework, and each suggests different ways of modifying or generalizing the theory. If we think that quantum theory is not our final theory then interpretations are relevant in constructing its successor. This may happen in quantum gravity, but it may equally happen at lower energies, since we do not yet have an experimentally confirmed theory that unifies the other three forces. The need to change quantum theory may come sooner than you expect, and whichever explanatory framework yields the next theory will then be proven correct. It is for this reason that I think question D is scientific.

Regardless of the status of question D, straw polls, such as the three that recently appeared on the arXiv [1-3], cannot help us to resolve it, and I find it puzzling that we choose to conduct them for this question, but not for other controversial issues in physics. Even during the decades in which the status of big bang cosmology was controversial, I know of no attempts to poll cosmologists’ views on it. Such a poll would have been viewed as meaningless by those who thought cosmology was unscientific, and as the wrong way to resolve the question by those who did think it was scientific. The same is true of question D, and the fact that we do nevertheless conduct polls suggests that the question is not being treated with the same respect as the others on the list.

Admittedly, polls about controversial scientific questions are relevant to the sociology of science, and they might be useful to the beginning graduate student who is more concerned with their career prospects than following their own rational instincts. From this perspective, it would be just as interesting to know what percentage of physicists think that supersymmetry is on the right track as it is to know about their views on quantum theory. However, to answer such questions, polls need careful design and statistical analysis. None of the three polls claims to be scientific and none of them contains any error analysis. What then is the point of them?

The three recent polls are based on a set of questions designed by Schlosshauer, Kofler and Zeilinger, who conducted the first poll at a conference organized by Zeilinger [1]. The questions go beyond just asking for a preferred interpretation of quantum theory, but in the interests of brevity I will focus on this aspect alone. In the Schlosshauer et al. poll, Copenhagen comes out top, closely followed by “information-based/information-theoretical” interpretations. The second poll comes from a conference called “The Philosophy of Quantum Mechanics” [2]. There was a larger proportion of self-identified philosophers amongst those surveyed and “I have no preferred interpretation” came out as the clear winner, not so closely followed by de Broglie-Bohm theory, which had obtained zero votes in the poll of Schlosshauer et al. Copenhagen is in joint third place along with objective collapse theories. The third poll comes from “Quantum theory without observers III” [3], at which de Broglie-Bohm got a whopping 63% of the votes, not so closely followed by objective collapse.

What we can conclude from this is that people who went to a meeting organized by Zeilinger are likely to have views similar to Zeilinger. People who went to a philosophy conference are less likely to be committed, but are much more likely to pick a realist interpretation than those who hang out with Zeilinger. Finally, people who went to a meeting that is mainly about de Broglie-Bohm theory, organized by the world’s most prominent Bohmians, are likely to be Bohmians. What have we learned from this that we did not know already?

One thing I find especially amusing about these polls is how easy it would have been to obtain a more representative sample of physicists’ views. It is straightforward to post a survey on the internet for free. Then all you have to do is write a letter to Physics Today asking people to complete the survey and send the URL to a bunch of mailing lists. The sample so obtained would still be self-selecting to some degree, but much less so than at a conference dedicated to some particular approach to quantum theory. The sample would also be larger by at least an order of magnitude. The ease with which this could be done only illustrates the extent to which these surveys should not even be taken semi-seriously.

I could go on about the bad design of the survey questions and about how the error bars would be huge if you actually bothered to calculate them. It is amusing how willing scientists are to abandon the scientific method when they address questions outside their own field. However, I think I have taken up enough of your time already. It is time we recognized these surveys for the nonsense that they are.
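For the record, here is roughly how large those error bars are. This is only a back-of-the-envelope sketch: the sample size of 33 and the 42% proportion are hypothetical numbers of about the right magnitude for a conference straw poll, not figures taken from the papers, and the independence assumption behind the formula is itself violated by a self-selecting sample, so the true uncertainty is even larger.

```python
import math

# Margin of error for a poll proportion p from a sample of size n, treating
# respondents as independent draws. Self-selection violates this assumption,
# so this is a lower bound on the real uncertainty.
def poll_margin(p, n, z=1.96):
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return z * se                    # half-width of a 95% confidence interval

# Hypothetical example: 42% support for one interpretation among 33 respondents.
print(f"42% +/- {100 * poll_margin(0.42, 33):.0f} percentage points")  # ~ +/- 17
```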

References

[1] M. Schlosshauer, J. Kofler and A. Zeilinger, A Snapshot of Foundational Attitudes Toward Quantum Mechanics, arXiv:1301.1069 (2013).

[2] C. Sommer, Another Survey of Foundational Attitudes Towards Quantum Mechanics, arXiv:1303.2719 (2013).

[3] T. Norsen and S. Nelson, Yet Another Snapshot of Foundational Attitudes Toward Quantum Mechanics, arXiv:1306.4646 (2013).

Can the quantum state be interpreted statistically?

A new preprint entitled The Quantum State Cannot be Interpreted Statistically by Pusey, Barrett and Rudolph (henceforth known as PBR) has been generating a significant amount of buzz in the last couple of days. Nature posted an article about it on their website, Scott Aaronson and Lubos Motl blogged about it, and I have been seeing a lot of commentary about it on Twitter and Google+. In this post, I am going to explain the background to this theorem and outline exactly what it entails for the interpretation of the quantum state. I am not going to explain the technicalities in great detail, since these are explained very clearly in the paper itself. The main aim is to clear up misconceptions.

First up, I would like to say that I find the use of the word “Statistically” in the title to be a rather unfortunate choice. It is liable to make people think that the authors are arguing against the Born rule (Lubos Motl has fallen into this trap in particular), whereas in fact the opposite is true. The result is all about reproducing the Born rule within a realist theory. The question is whether a scientific realist can interpret the quantum state as an epistemic state (state of knowledge) or whether it must be an ontic state (state of reality). The theorem seems to show that only the ontic interpretation is viable, but, in my view, this is a bit too quick. On careful analysis, it does not really rule out any of the positions that are advocated by contemporary researchers in quantum foundations. However, it does answer an important question that was previously open, and confirms an intuition that many of us already held. Before going into more detail, I also want to say that I regard this as the most important result in quantum foundations in the past couple of years, well deserving of a good amount of hype if anything is. I am not sure I would go as far as Antony Valentini, who is quoted in the Nature article saying that it is the most important result since Bell’s theorem, or David Wallace, who says that it is the most significant result he has seen in his career. Of course, these two are likely to be very happy about the result, since they already subscribe to interpretations of quantum theory in which the quantum state is ontic (de Broglie-Bohm theory and many-worlds respectively) and perhaps they believe that it poses more of a dilemma for epistemicists like myself than it actually does.

Classical Ontic States

Before explaining the result itself, it is important to be clear on what all this epistemic/ontic state business is all about and why it matters. It is easiest to introduce the distinction via a classical example, for which the interpretation of states is clear. Therefore, consider the Newtonian dynamics of a single point particle in one dimension. The trajectory of the particle can be determined by specifying initial conditions, which in this case consist of a position \(x(t_0)\) and momentum \(p(t_0)\) at some initial time \(t_0\). These specify a point in the particle’s phase space, which consists of all possible pairs \((x,p)\) of positions and momenta.

Classical Ontic State

The ontic state space for a single classical particle, with the initial ontic state marked.

Then, assuming we know all the relevant forces, we can compute the position and momentum \((x(t),p(t))\) at some other time \(t\) using Newton’s laws or, equivalently, Hamilton’s equations. At any time \(t\), the phase space point \((x(t),p(t))\) can be thought of as the instantaneous state of the particle. It is clearly an ontic state (state of reality), since the particle either does or does not possess that particular position and momentum, independently of whether we know that it possesses those values ((There are actually subtleties about whether we should think of phase space points as instantaneous ontic states. For one thing, the momentum depends on the first derivative of position, so maybe we should really think of the state being defined on an infinitesimal time interval. Secondly, the fact that momentum appears is because Newtonian mechanics is defined by second order differential equations. If it were higher order then we would have to include variables depending on higher derivatives in our definition of phase space. This is bad if you believe in a clean separation between basic ontology and physical laws. To avoid this, one could define the ontic state to be the position only, i.e. a point in configuration space, and have the boundary conditions specified by the position of the particle at two different times. Alternatively, one might regard the entire spacetime trajectory of the particle as the ontic state, and regard the Newtonian laws themselves as a mere pattern in the space of possible trajectories. Of course, all these descriptions are mathematically equivalent, but they are conceptually quite different and they lead to different intuitions as to how we should understand the concept of state in quantum theory. For present purposes, I will ignore these subtleties and follow the usual practice of regarding phase space points as the unambiguous ontic states of classical mechanics.)). The same goes for more complicated systems, such as multiparticle systems and fields. In all cases, I can derive a phase space consisting of configurations and generalized momenta. This is the space of ontic states for any classical system.
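To make the notion of an ontic state concrete, here is a minimal sketch of the kind of evolution described above, for a one-dimensional harmonic oscillator; the mass, spring constant, and step size are arbitrary illustrative choices.

```python
# Minimal sketch: evolve the ontic state (x, p) of a 1D harmonic oscillator
# by integrating Hamilton's equations,
#     dx/dt = p/m,   dp/dt = -k*x,
# with a symplectic Euler step. All parameter values are arbitrary.
m, k, dt = 1.0, 1.0, 0.001

def evolve(x, p, steps):
    for _ in range(steps):
        p -= k * x * dt      # dp/dt = -dH/dx
        x += (p / m) * dt    # dx/dt = +dH/dp
    return x, p

x0, p0 = 1.0, 0.0            # the initial ontic state: one phase space point
print(evolve(x0, p0, 1000))  # the ontic state at t = 1; close to (cos 1, -sin 1)
```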

Classical Epistemic States

Although the description of classical mechanics in terms of ontic phase space trajectories is clear and unambiguous, we are often, indeed usually, more interested in tracking what we know about a system. For example, in statistical mechanics, we may only know some macroscopic properties of a large collection of systems, such as pressure or temperature. We are interested in how these quantities change over time, and there are many different possible microscopic trajectories that are compatible with this. Generally speaking, our knowledge about a classical system is determined by assigning a probability distribution over phase space, which represents our uncertainty about the actual point occupied by the system.

A classical epistemic state

An epistemic state of a single classical particle. The ellipses represent contour lines of constant probability.

We can track how this probability distribution changes using Liouville’s equation, which is derived by applying Hamilton’s equations weighted with the probability assigned to each phase space point. The probability distribution is pretty clearly an epistemic state. The actual system only occupies one phase space point and does not care what probability we have assigned to it. Crucially, the ontic state occupied by the system would be regarded as possible by us in more than one probability distribution; in fact, it is compatible with infinitely many.

Overlapping epistemic states

Epistemic states can overlap, so each ontic state is possible in more than one epistemic state. In this diagram, the two phase space axes have been schematically compressed into one, so that we can sketch the probability density graphs of epistemic states. The ontic state marked with a cross is possible in both epistemic states sketched on the graph.

Quantum States

We have seen that there are two clear notions of state in classical mechanics: ontic states (phase space points) and epistemic states (probability distributions over the ontic states). In quantum theory, we have a different notion of state — the wavefunction — and the question is: should we think of it as an ontic state (more like a phase space point), an epistemic state (more like a probability distribution), or something else entirely?

Here are three possible answers to this question:

  1. Wavefunctions are epistemic and there is some underlying ontic state. Quantum mechanics is the statistical theory of these ontic states in analogy with Liouville mechanics.
  2. Wavefunctions are epistemic, but there is no deeper underlying reality.
  3. Wavefunctions are ontic (there may also be additional ontic degrees of freedom, which is an important distinction but not relevant to the present discussion).

I will call options 1 and 2 psi-epistemic and option 3 psi-ontic. Advocates of option 3 are called psi-ontologists, in an intentional pun coined by Chris Granade. Options 1 and 3 share a conviction of scientific realism, which is the idea that there must be some description of what is going on in reality that is independent of our knowledge of it. Option 2 is broadly anti-realist, although there can be some subtleties here ((The subtlety is basically a person called Chris Fuchs. He is clearly in the option 2 camp, but claims to be a scientific realist. Whether he is successful at maintaining realism is a matter of debate.)).

The theorem in the paper attempts to rule out option 1, which would mean that scientific realists should become psi-ontologists. I am pretty sure that no theorem on Earth could rule out option 2, so that is always a refuge for psi-epistemicists, at least if their psi-epistemic conviction is stronger than their realist one.

I would classify the Copenhagen interpretation, as represented by Niels Bohr ((Note, this is distinct from the orthodox interpretation as represented by the textbooks of Dirac and von Neumann, which is also sometimes called the Copenhagen interpretation. Orthodoxy accepts the eigenvalue-eigenstate link. Observables can sometimes have definite values, in which case they are objective properties of the system. A system has such a property when it is in an eigenstate of the corresponding observable. Since every wavefunction is an eigenstate of some observable, it follows that this is a psi-ontic view, albeit one in which there are no additional ontic degrees of freedom beyond the quantum state.)), under option 2. One of his famous quotes is:

There is no quantum world. There is only an abstract physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature… ((Sourced from Wikiquote.))

and “what we can say” certainly seems to imply that we are talking about our knowledge of reality rather than reality itself. Various contemporary neo-Copenhagen approaches also fall under this option, e.g. the Quantum Bayesianism of Carlton Caves, Chris Fuchs and Ruediger Schack; Anton Zeilinger’s idea that quantum physics is only about information; and the view presently advocated by the philosopher Jeff Bub. These views are safe from refutation by the PBR theorem, although one may debate whether they are desirable on other grounds, e.g. the accusation of instrumentalism.

Pretty much all of the well-developed interpretations that take a realist stance fall under option 3, so they are in the psi-ontic camp. This includes the Everett/many-worlds interpretation, de Broglie-Bohm theory, and spontaneous collapse models. Advocates of these approaches are likely to rejoice at the PBR result, as it apparently rules out their only realist competition, and they are unlikely to regard anti-realist approaches as viable.

Perhaps the best known contemporary advocate of option 1 is Rob Spekkens, but I also include myself and Terry Rudolph (one of the authors of the paper) in this camp. Rob gives a fairly convincing argument that option 1 characterizes Einstein’s views in this paper, which also gives a lot of technical background on the distinction between options 1 and 2.

Why be a psi-epistemicist?

Why should the epistemic view of the quantum state be taken seriously in the first place, at least seriously enough to prove a theorem about it? The most naive argument is that, generically, quantum states only predict probabilities for observables rather than definite values. In this sense, they are unlike classical phase space points, which determine the values of all observables uniquely. However, this argument is not compelling because determinism is not the real issue here. We can allow there to be some genuine stochasticity in nature whilst still maintaining realism.

An argument that I personally find motivating is that quantum theory can be viewed as a noncommutative generalization of classical probability theory, as was first pointed out by von Neumann. My own exposition of this idea is contained in this paper. Even if we don’t always realize it, we are always using this idea whenever we generalize a result from classical to quantum information theory. The idea is so useful, i.e. it has such great explanatory power, that it would be very puzzling if it were a mere accident, but it does appear to be just an accident in most psi-ontic interpretations of quantum theory.  For example, try to think about why quantum theory should be formally a generalization of probability theory from a many-worlds point of view.  Nevertheless, this argument may not be compelling to everyone, since it mainly entails that mixed states have to be epistemic. Classically, the pure states are the extremal probability distributions, i.e. they are just delta functions on a single ontic state. Thus, they are in one-to-one correspondence with the ontic states. The same could be true of pure quantum states without ruining the analogy ((but note that the resulting theory would essentially be the orthodox interpretation, which has a measurement problem.)).
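To make the analogy concrete, here is a minimal numerical illustration, which assumes nothing beyond standard linear algebra: a diagonal density matrix together with diagonal observables just is a classical probability distribution together with random variables, and the generalization enters through the off-diagonal terms.

```python
import numpy as np

# A diagonal density matrix with a commuting (diagonal) observable reproduces
# ordinary probability theory: tr(rho F) = sum_i p_i f_i. Numbers arbitrary.
p = np.array([0.2, 0.8])                  # a classical probability distribution
rho = np.diag(p)                          # ... viewed as a density matrix
F = np.diag([1.0, -1.0])                  # a classical random variable
print(np.trace(rho @ F), p @ np.diag(F))  # same expectation value both ways

# A pure state like |+> has off-diagonal terms in this basis, which is where
# the "noncommutative" part of the generalization lives.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(np.round(np.outer(plus, plus), 3))
```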

A more convincing argument concerns the instantaneous change that occurs after a measurement — the collapse of the wavefunction. When we acquire new information about a classical system, say by measuring the position of a particle, its epistemic state (probability distribution) also undergoes an instantaneous change. All the weight we assigned to phase space points that have positions that differ from the measured value is rescaled to zero and the rest of the probability distribution is renormalized. This is just Bayesian conditioning. It represents a change in our knowledge about the system, but no change to the system itself. It is still occupying the same phase space point as it was before, so there is no change to the ontic state of the system. If the quantum state is epistemic, then instantaneous changes upon measurement are unproblematic, having a similar status to Bayesian conditioning. Therefore, the measurement problem is completely dissolved within this approach.
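Here is a toy version of that conditioning step on a discretized phase space; the grid, the Gaussian prior, and the measured value are all illustrative choices.

```python
import numpy as np

# Toy Bayesian conditioning on a discretized phase space.
xs = np.linspace(-1, 1, 50)                # position grid
ps = np.linspace(-1, 1, 50)                # momentum grid
X, P = np.meshgrid(xs, ps, indexing="ij")
prior = np.exp(-(X**2 + P**2) / 0.1)       # an arbitrary Gaussian epistemic state
prior /= prior.sum()

# A sharp position measurement returns x = 0.5 (up to grid resolution): zero
# out all phase space points with a different position, then renormalize.
posterior = np.where(np.abs(X - 0.5) < 0.05, prior, 0.0)
posterior /= posterior.sum()

# The epistemic state changed instantaneously, but the ontic state (whichever
# single grid point the system actually occupies) did not change at all.
```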

Finally, if we allow a more sophisticated analogy between quantum states and probabilities, in particular by allowing constraints on how much may be known and allowing measurements to locally disturb the ontic state, then we can qualitatively explain a large number of phenomena that are puzzling for a psi-ontologist very simply within a psi-epistemic approach. These include: teleportation, superdense coding, and much of the rest of quantum information theory. Crucially, it also includes interference, which is often held as a convincing reason for psi-ontology. This was demonstrated in a very convincing way by Rob Spekkens via a toy theory, which is recommended reading for all those interested in quantum foundations. In fact, since this paper contains the most compelling reasons for being a psi-epistemicist, you should definitely make sure you read it so that you can be more shocked by the PBR result.

Ontic models

If we accept that the psi-epistemic position is reasonable, then it would be superficially reasonable to pick option 1 and try to maintain scientific realism. This leads us into the realm of ontic models for quantum theory, otherwise known as hidden variable theories ((The terminology “ontic model” is preferred to “hidden variable theory” for two reasons. Firstly, we do not want to exclude the case where the wavefunction is ontic, but there are no extra degrees of freedom (as in the orthodox interpretation). Secondly, it is often the case that the “hidden” variables are the ones that we actually observe rather than the wavefunction, e.g. in Bohmian mechanics the particle positions are not “hidden”.)). A pretty standard framework for discussing such models has existed since John Bell’s work in the 1960s, and almost everyone adopts the same definitions that were laid down then. The basic idea is that systems have properties. There is some space \(\Lambda\) of ontic states, analogous to the phase space of a classical theory, and the system has a value \(\lambda \in \Lambda\) that specifies all its properties, analogous to the phase space points. When we prepare a system in some quantum state \(\Ket{\psi}\) in the lab, what is really happening is that an ontic state \(\lambda\) is sampled from a probability distribution \(\mu(\lambda)\) over \(\Lambda\) that depends on \(\Ket{\psi}\).

Representation of a quantum state in an ontic model

In an ontic model, a quantum state (indicated heuristically on the left as a vector in the Bloch sphere) is represented by a probability distribution over ontic states, as indicated on the right.

We also need to know how to represent measurements in the model ((Generally, we would need to represent dynamics as well, but the PBR theorem does not depend on this.)). For each possible measurement that we could make on the system, the model must specify the outcome probabilities for each possible ontic state. Note that we are not assuming determinism here. The measurement is allowed to be stochastic even given a full specification of the ontic state. Thus, for each measurement \(M\), we need a set of functions \(\xi^M_k(\lambda)\), where \(k\) labels the outcome. \(\xi^M_k(\lambda)\) is the probability of obtaining outcome \(k\) in a measurement of \(M\) when the ontic state is \(\lambda\). In order for these probabilities to be well defined, the functions \(\xi^M_k\) must be positive and they must satisfy \(\sum_k \xi^M_k(\lambda) = 1\) for all \(\lambda \in \Lambda\). This normalization condition is very important in the proof of the PBR theorem, so please memorize it now.

Overall, the probability of obtaining outcome \(k\) in a measurement of \(M\) when the system is prepared in state \(\Ket{\psi}\) is given by

\[\mbox{Prob}(k|M,\Ket{\psi}) = \int_{\Lambda} \xi^M_k(\lambda) \mu(\lambda) d\lambda, \]
which is just the average of the outcome probabilities over the ontic state space.
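For a finite ontic state space, the integral becomes a sum, and the whole structure fits in a few lines; the numbers below are made up for illustration and are not derived from any particular quantum state or measurement.

```python
import numpy as np

# A toy ontic model with four ontic states and a two outcome measurement.
mu = np.array([0.1, 0.2, 0.3, 0.4])   # mu(lambda): distribution for some state
xi = np.array([[0.9, 0.5, 0.2, 0.0],  # xi[k, lam]: prob of outcome k given lam
               [0.1, 0.5, 0.8, 1.0]])

# The normalization condition: sum_k xi^M_k(lambda) = 1 for every lambda.
assert np.allclose(xi.sum(axis=0), 1.0)

prob = xi @ mu   # Prob(k|M,psi) = sum_lambda xi^M_k(lambda) mu(lambda)
print(prob)      # outcome probabilities; they automatically sum to 1
```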

If the model is going to reproduce the predictions of quantum theory, then these probabilities must match the Born rule.  Suppose that the \(k\)th outcome of \(M\) corresponds to the projector \(P_k\).  Then, this condition boils down to

\[\Bra{\psi} P_k \Ket{\psi} = \int_{\Lambda} \xi^M_k(\lambda) \mu(\lambda) d\lambda,\]

and this must hold for all quantum states, and all outcomes of all possible measurements.

Constraints on Ontic Models

Even disregarding the PBR paper, we already know that ontic models expressible in this framework have to have a number of undesirable properties. Bell’s theorem implies that they have to be nonlocal, which is not great if we want to maintain Lorentz invariance, and the Kochen-Specker theorem implies that they have to be contextual. Further, Lucien Hardy’s ontological excess baggage theorem shows that the ontic state space for even a qubit would have to have infinite cardinality. Following this, Montina proved a series of results, which culminated in the claim that there would have to be an object satisfying the Schrödinger equation present within the ontic state (see this paper). This latter result is close to the implication of the PBR theorem itself.

Given these constraints, it is perhaps not surprising that most psi-epistemicists have already opted for option 2, renouncing scientific realism entirely. Those of us who cling to realism have mostly decided that the ontic state must be a different type of object than it is in the framework described above. We could discard the idea that individual systems have well-defined properties, or the idea that the probabilities that we assign to those properties should depend only on the quantum state. Spekkens advocates the first possibility, arguing that only relational properties are ontic. On the other hand, I, following Huw Price, am partial to the idea of epistemic hidden variable theories with retrocausal influences, in which case the probability distributions over ontic states would depend on measurement choices as well as which quantum state is prepared. Neither of these possibilities is ruled out by the previous results, and they are not ruled out by PBR either. This is why I say that their result does not rule out any position that is seriously held by any researchers in quantum foundations. Nevertheless, until the PBR paper, there remained the question of whether a conventional psi-epistemic model was possible even in principle. Such a theory could at least have been a competitor to Bohmian mechanics. This possibility has now been ruled out fairly convincingly, and so we now turn to the basic idea of their result.

The Result

Recall from our classical example that each ontic state (phase space point) occurs in the support of more than one epistemic state (Liouville distribution), in fact infinitely many. This is just because probability distributions can have overlapping support. Now, consider what would happen if we restricted the theory to only allow epistemic states with disjoint support. For example, we could partition phase space into a number of disjoint cells and only consider probability distributions that are uniform over one cell and zero everywhere else.

Restricted classical theory

A restricted classical theory in which only the distributions indicated are allowed as epistemic states. In this case, each ontic state is only possible in one epistemic state, so it is more accurate to say that the epistemic states represent a property of the ontic state.

Given this restriction, the ontic state determines the epistemic state uniquely. If someone tells you the ontic state, then you know which cell it is in, so you know what the epistemic state must be. Therefore, in this restricted theory, the epistemic state is not really epistemic. It is uniquely determined by the ontic state, and it would be better to say that we are talking about a property of the ontic state, rather than something that represents knowledge. According to the PBR result, this is exactly what must happen in any ontic model of quantum theory within the Bell framework.
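A minimal sketch of such a restricted theory, with the unit interval standing in for phase space and an arbitrary number of cells:

```python
import numpy as np

# Restricted classical theory: partition the ontic state space [0, 1) into
# disjoint cells and allow only epistemic states that are uniform over a
# single cell. The number of cells is arbitrary.
n_cells = 4

def epistemic_state_of(lam):
    """Each ontic state lies in exactly one cell, so the ontic state
    determines the allowed epistemic state uniquely."""
    return int(lam * n_cells)

# The "epistemic state" is really just a (coarse-grained) property of lambda.
for lam in np.random.rand(5):
    print(f"lambda = {lam:.3f} -> cell {epistemic_state_of(lam)}")
```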

Here is the analog of this in ontic models of quantum theory.  Suppose that two nonorthogonal quantum states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) are represented as follows in an ontic model:

Psi-epistemic model

Representation of nonorthogonal states in a psi-epistemic model

Because the distributions overlap, there are ontic states that are compatible with more than one quantum state, so this is a psi-epistemic model.

In contrast, if, for every pair of quantum states \(\Ket{\psi_1},\Ket{\psi_2}\), the probability distributions do not overlap, i.e. the representation of each pair looks like this

Psi-ontic model

Representation of a pair of quantum states in a psi-ontic model

then the quantum state is uniquely determined by the ontic state, and it is therefore better regarded as a property of \(\lambda\) rather than a representation of knowledge.  Such a model is psi-ontic.  The PBR theorem states that all ontic models that reproduce the Born rule must be psi-ontic.

Sketch of the proof

In order to establish the result, PBR make use of the following idea. In an ontic model, the ontic state \(\lambda\) determines the probabilities for the outcomes of any possible measurement via the functions \(\xi^M_k\). The Born rule probabilities must be obtained by averaging these conditional probabilities with respect to the probability distribution \(\mu(\lambda)\) representing the quantum state. Suppose there is some measurement \(M\) that has an outcome \(k\) to which the quantum state \(\Ket{\psi}\) assigns probability zero according to the Born rule. Then, it must be the case that \(\xi^M_k(\lambda) = 0\) for every \(\lambda\) in the support of \(\mu(\lambda)\). Now consider two quantum states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) and suppose that we can find a two outcome measurement such that the first state gives zero Born rule probability to the first outcome and the second state gives zero Born rule probability to the second outcome. Suppose also that there is some \(\lambda\) that is in the support of both the distributions, \(\mu_1\) and \(\mu_2\), that represent \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) in the ontic model. Then, we must have \(\xi^M_1(\lambda) = \xi^M_2(\lambda) = 0\), which contradicts the normalization assumption \(\xi^M_1(\lambda) + \xi^M_2(\lambda) = 1\).

Now, it is fairly easy to see that there is no such measurement for a pair of nonorthogonal states, because this would mean that they could be distinguished with certainty, so we do not have a result quite yet. The trick to get around this is to consider multiple copies. Consider, then, the four states \(\Ket{\psi_1}\otimes\Ket{\psi_1}, \Ket{\psi_1}\otimes\Ket{\psi_2}, \Ket{\psi_2}\otimes\Ket{\psi_1}\) and \(\Ket{\psi_2}\otimes\Ket{\psi_2}\) and suppose that there is a four outcome measurement such that \(\Ket{\psi_1}\otimes\Ket{\psi_1}\) gives zero probability to the first outcome, \(\Ket{\psi_1}\otimes\Ket{\psi_2}\) gives zero probability to the second outcome, and so on. In addition to this, we make an independence assumption that the probability distributions representing these four states must satisfy. Let \(\lambda\) be the ontic state of the first system and let \(\lambda'\) be the ontic state of the second. The independence assumption states that the probability densities representing the four quantum states in the ontic model are \(\mu_1(\lambda)\mu_1(\lambda'), \mu_1(\lambda)\mu_2(\lambda'), \mu_2(\lambda)\mu_1(\lambda')\) and \(\mu_2(\lambda)\mu_2(\lambda')\). This is a reasonable assumption because there is no entanglement between the two systems and we could do completely independent experiments on each of them. Assuming there is an ontic state \(\lambda\) in the support of both \(\mu_1\) and \(\mu_2\), there will be some nonzero probability that both systems occupy this ontic state whenever any of the four states are prepared. But, in this case, all four functions \(\xi^M_1,\xi^M_2,\xi^M_3\) and \(\xi^M_4\) must have value zero when both systems are in this state, which contradicts the normalization \(\sum_k \xi^M_k = 1\).

This argument works for the pair of states \(\Ket{\psi_1} = \Ket{0}\) and \(\Ket{\psi_2} = \Ket{+} = \frac{1}{\sqrt{2}} \left ( \Ket{0} + \Ket{1}\right )\). In this case, the four outcome measurement is a measurement in the basis:

\[\Ket{\phi_1} = \frac{1}{\sqrt{2}} \left ( \Ket{0}\otimes\Ket{1} + \Ket{1} \otimes \Ket{0} \right )\]
\[\Ket{\phi_2} = \frac{1}{\sqrt{2}} \left ( \Ket{0}\otimes\Ket{-} + \Ket{1} \otimes \Ket{+} \right )\]
\[\Ket{\phi_3} = \frac{1}{\sqrt{2}} \left ( \Ket{+}\otimes\Ket{1} + \Ket{-} \otimes \Ket{0} \right )\]
\[\Ket{\phi_4} = \frac{1}{\sqrt{2}} \left ( \Ket{+}\otimes\Ket{-} + \Ket{-} \otimes \Ket{+} \right ),\]

where \(\Ket{-} = \frac{1}{\sqrt{2}} \left ( \Ket{0} - \Ket{1}\right )\). It is easy to check that \(\Ket{\phi_1}\) is orthogonal to \(\Ket{0}\otimes\Ket{0}\), \(\Ket{\phi_2}\) is orthogonal to \(\Ket{0}\otimes\Ket{+}\), \(\Ket{\phi_3}\) is orthogonal to \(\Ket{+}\otimes\Ket{0}\), and \(\Ket{\phi_4}\) is orthogonal to \(\Ket{+}\otimes\Ket{+}\). Therefore, the argument applies and there can be no overlap in the probability distributions representing \(\Ket{0}\) and \(\Ket{+}\) in the model.
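These orthogonality relations are easy to check by hand, but here is a short numerical verification as well, using nothing beyond standard numpy: it confirms that the four vectors form an orthonormal basis and that each of the four product states assigns zero Born probability to its designated outcome.

```python
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (zero + one) / np.sqrt(2), (zero - one) / np.sqrt(2)
kron = np.kron

phi1 = (kron(zero, one) + kron(one, zero)) / np.sqrt(2)
phi2 = (kron(zero, minus) + kron(one, plus)) / np.sqrt(2)
phi3 = (kron(plus, one) + kron(minus, zero)) / np.sqrt(2)
phi4 = (kron(plus, minus) + kron(minus, plus)) / np.sqrt(2)
basis = np.array([phi1, phi2, phi3, phi4])

assert np.allclose(basis @ basis.T, np.eye(4))   # an orthonormal basis

products = [kron(zero, zero), kron(zero, plus),  # |0>|0>, |0>|+>,
            kron(plus, zero), kron(plus, plus)]  # |+>|0>, |+>|+>
for phi, prod in zip(basis, products):
    assert np.isclose(phi @ prod, 0.0)           # zero Born rule probability
print("all orthogonality relations check out")
```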

To establish psi-ontology, we need a similar argument for every pair of states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\). PBR establish that such an argument can always be made, but the general case is more complicated and requires more than two copies of the system. I refer you to the paper for details where it is explained very clearly.

Conclusions

The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework. One of the things that a good interpretation of a physical theory should have is explanatory power. For me, the epistemic view of quantum states is so explanatory that it is worth trying to preserve it. Realism too is something that we should not abandon too hastily. Therefore, it seems to me that we should be questioning the assumptions of the Bell framework by allowing more general ontologies, perhaps involving relational or retrocausal degrees of freedom. At the very least, this option is the path less travelled, so we might learn something by exploring it more thoroughly.

Why is many-worlds winning the foundations debate?

Almost every time the foundations of quantum theory are mentioned in another science blog, the comments contain a lot of debate about many-worlds. I find it kind of depressing the extent to which many people are happy to jump on board with this interpretation without asking too many questions. In fact, it is almost as depressing as the fact that Copenhagen has been the dominant interpretation for so long, despite the fact that most of Bohr’s writings on the subject are pretty much incoherent.

Well, this year is the 50th anniversary of Everett’s paper, so perhaps it is appropriate to lay out exactly why I find the claims of many-worlds so unbelievable.

WARNING: The following rant contains Philosophy!

Traditionally, philosophers have made a distinction between analytic and synthetic truths. Analytic truths are those things that you can prove to be true by deduction alone. They are necessary truths and essentially they are just the tautologies of classical logic, e.g. either this is a blog post or this is not a blog post. On the other hand, synthetic truths are things we could imagine to have been another way, or things that we need to make some observation of the world in order to confirm or deny, e.g. Matt Leifer has never written a blog post about his pet cat.

Perhaps the central problem of the philosophy of science is whether the correctness of the scientific method is an analytic or a synthetic truth. Of course this depends a lot on how exactly you decide to define the scientific method, which is a topic of considerable controversy in itself. However, it’s pretty clear that the principle of induction is not an analytic truth, and even if you are a falsificationist you have to admit that it has some role in science, i.e. if a single experiment contradicts the predictions of a dominant theory then you call it an anomaly rather than a falsification. Of the other alternatives, if you’re a radical Kuhnian then you’re probably not reading this blog, since you are busy writing incoherent postmodern junk for a sociology journal. If you are a follower of Feyerabend then you are a conflicted soul and I sympathize. Anyway, back to the plot for people who do believe that induction has some role to play in science.

Kant’s resolution to this dilemma was to divide the synthetic truths into two categories, the a priori truths and the rest (I don’t know a good name for non-a priori synthetic truths). The a priori synthetic truths are things that cannot be directly deduced, but are nevertheless so basic to our functioning as beings living in this world that we must assume them to be true, i.e. it would be impossible to make any sense of the world without them. For example, we might decide that the fact that the universe is regular enough to perform approximately repeatable scientific experiments and to draw reliable inferences from them should be in the category of a priori truths. This seems reasonable because it is pretty hard to imagine that any kind of intelligent life could exist in a universe where the laws of physics were in continual flux.

One problem with this notion is that we can’t know a priori exactly what the a priori truths are. We can write down a list of what we currently believe to be a priori truths – our assumed a priori truths – but this is open to revision if we find that we can in fact still make sense of the world when we discard some of these assumed truths. The most famous example of this comes from Kant himself, who assumed that the way our senses are hooked up meant that we must describe the world in terms of events happening in space and time, implicitly assuming a Euclidean geometry. As we now know, the world still makes sense if we drop the Euclidean assumption, unifying space and time and working with much more general geometries. Still, even in relativity we have the concept of events occurring at spacetime locations as a fundamental primitive. If you like, you can modify Kant’s position to take this as the fundamental a priori truth, and explain that he was simply misled by the synthetic fact that our spacetime is approximately flat on ordinary length scales.

At this point, it is useful to introduce Quine’s pudding-bowl analogy for the structure of knowledge (I can’t remember what kind of bowl Quine actually used, but he’s making pudding as far as we are concerned). If you make a small chip at the top of a pudding bowl, then you won’t have any problem making pudding with it and the chip can easily be fixed up. On the other hand, if you make a hole near the bottom then you will have a sticky mess in the oven. It will take some considerable surgery to fix up the bowl and you are likely to consider just throwing out the bowl and sitting down at the pottery wheel to build a new one. The moral of the story is that we should be more skeptical of changes in the structure of our knowledge that seem to undermine assumptions that we think are fundamental. We need to have very good reasons to make such changes, because it is clear that there is a lot of work to be done in order to reconstruct all the dependent knowledge further up the bowl that we rely on every day. The point is not that we should never make such changes – just that we should be careful to ensure that there isn’t an equally good explanation that doesn’t require making such a drastic change.

Aside: Although Quine has in mind a hierarchical structure for knowledge – the parts of the pudding bowl near the bottom are the foundation that supports the rest of the bowl – I don’t think this is strictly necessary. We just need to believe that some areas of knowledge have higher connectivity than others, i.e. more other important things that depend on them. It would work equally well if you think knowledge is structured like a power-law graph for example.

The Quinian argument is often levelled against proposed interpretations of quantum theory, e.g. the idea that quantum theory should be understood as requiring a fundamental revision of logic or probability theory rather than these being convenient mathematical formalisms that can coexist happily with their classical counterparts. The point here is that it is bordering on the paradoxical for a scientific theory to entail changes to things on which the scientific method itself seems to depend, and we did use logical and statistical arguments to confirm quantum theory in the first place. Thus, if we revise logic or probability then the theory seems to be “eating its own tail”. This is not to say that this is an actual paradox, because it could be the case that when we reconstruct the entire edifice of knowledge according to the new logic or probability theory we will still find that we were right to believe quantum theory, but just mistaken about the reasons why we should believe it. However, the whole exercise is question begging because if we allow changes to such basic things then why not make a more radical change and consider the whole space of possible logics or probability theories. There are clearly some possible alternatives under which all hell breaks loose and we are seriously deluded about the validity of all our knowledge. In other words, we’ve taken a sledgehammer to our pudding bowl and we can’t even make jelly (jello for North American readers) any more.

At this point, you might be wondering whether a Quinian argument can be levelled against the revision of geometry implied by general relativity as well. The difference is that we do have a good handle on what the space of possible alternative geometries looks like. We can imagine writing down countless alternative theories in the language of differential geometry and figuring out what the world looks like according to them. We can adopt the assumed a priori truth that the world is describable in terms of events in some spacetime geometry and then we find the synthetic fact that general relativity is in accordance with our observations, while most of the other theories are not. We did some significant damage close to the bottom of the bowl, but it turned out that we could fix it relatively easily. There are still some fancy puddings – like the theory of quantum gravity (baked Alaska) – that we haven’t figured out how to make in the repaired bowl, but we can live without them most of the time.

Now, is there a Quinian argument to be made against the many-worlds interpretation? I think so. The idea is that when we apply the scientific method we assume we can do experiments which have actual definite outcomes. These are the basic data from which we build a confirmation or refutation of our theories. Many-worlds says that this assumption is wrong: there are no fundamental definite outcomes – it just appears that way to us because we are all entangled up in the wavefunction of the universe. This is a pretty dramatic assertion and it does seem to be bordering on the “theory eating its own tail” type of assertion. We need to be pretty sure that there isn’t an equally good alternative explanation in which experiments really do have definite outcomes before accepting it. Also, as with the case of revising logic or probability, we don’t have a good understanding of the space of possible theories in which experiments do not have definite outcomes. I can think of one other theory of this type, namely a bizarre interpretation of classical probability theory in which all outcomes that are assigned nonzero probability occur in different universes, but two possible theories does not amount to much in the grand scheme of things. The problem is that on dropping the assumption of definite outcomes, we have not replaced it with an adequate new assumed a priori truth. That the world is describable by vectors in Hilbert space that evolve unitarily seems much too specific to be considered as a potential candidate. Until we do come up with such an assumption, I can’t see why many-worlds is any less radical than proposing a revision of logic or probability theory. Until then, I won’t be making any custard tarts in that particular pudding bowl myself.

Tao on Many-Worlds and Tomb Raider

Terence Tao has an interesting post on why many-worlds quantum theory is like Tomb Raider.  I think it’s de Broglie-Bohm theory that is more like Tomb Raider though, as you can see from the comments.

Steane Roller

Earlier, I promised some discussion of Andrew Steane’s new paper: Context, spacetime loops, and the interpretation of quantum mechanics. Whilst it is impossible to summarize everything in the paper, I can give a short description of what I think are the most important points.

  • Firstly, he does believe that the whole universe obeys the laws of quantum mechanics, which are not required to be generalized.
  • Secondly, he does not think that Everett/Many-Worlds is a good way to go because it doesn’t give a well-defined rule for when we see one particular outcome of a measurement in one particular basis.
  • He believes that collapse is a real phenomenon and so the problem is to come up with a rule for assigning a basis in which the wavefunction collapses, as well as, roughly speaking, a spacetime location at which it occurs.
  • For now, he describes collapse as an unanalysed fundamentally stochastic process that achieves this, but he recognizes that it might be useful to come up with a more detailed mechanism by which this occurs.

Steane’s problem therefore reduces to picking a basis and a spacetime location. For the former, he uses the standard ideas from decoherence theory, i.e. the basis in which collapse occurs is the basis in which the reduced state of the system is diagonal. However, the location of collapse is what is really novel about the proposal, and makes it more interesting and more bizarre than most of the proposals I have seen so far.

Firstly, note that the process of collapse destroys the phase information between the system and the environment. Therefore, if the environmental degrees of freedom could ever be gathered together and re-interacted with the system, then QM would predict interference effects that would not be present if a genuine collapse had occurred. Since Steane believes in the universal validity of QM, he has to come up with a way of having a genuine collapse without getting into a contradiction with this possibility.

His first innovation is to assert that the collapse need not be associated to an exactly precise location in spacetime. Instead, it can be a function of what is going on in a larger region of spacetime. Presumably, for events that we would normally regard as “classical” this region is supposed to be rather small, but for coherent evolutions it could be quite large.

The rule is easiest to state for special cases, so for now we will assume that we are talking about particles with a discrete quantum degree of freedom, e.g. spin, but that the position and momentum can be treated classically. Now, suppose we have three qubits and that they are in the state \(\frac{1}{\sqrt{2}} \left ( \Ket{000} + e^{i\phi} \Ket{111} \right )\). The state of the first two qubits is a density operator, diagonal in the basis \(\{\Ket{00}, \Ket{11}\}\), with a probability 1/2 for each of the two states. The phase \(e^{i\phi}\) will only ever be detectable if the third qubit re-interacts with the first two. Whether or not this can happen is determined by the relative locations of the qubits, since the interaction Hamiltonians in nature are local. Since we are treating position and momentum classically at the moment, there is a matter of fact about whether this will occur, and Steane’s rule is simple: if the qubits re-interact in the future then there is no collapse, but if they don’t then the first two qubits have collapsed into the state \(\Ket{00}\) or the state \(\Ket{11}\), with probability 1/2 for each one.
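Here is a quick numerical check of the claim about the first two qubits; this is just a textbook partial trace, not anything specific to Steane’s proposal, and the value of the phase is arbitrary.

```python
import numpy as np

phi = 0.7  # arbitrary phase
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
kron = np.kron

# The three qubit state (|000> + e^{i phi}|111>)/sqrt(2).
state = (kron(kron(zero, zero), zero)
         + np.exp(1j * phi) * kron(kron(one, one), one)) / np.sqrt(2)

rho = np.outer(state, state.conj())        # 8x8 density matrix
rho = rho.reshape(4, 2, 4, 2)              # split off the third qubit
reduced = np.trace(rho, axis1=1, axis2=3)  # partial trace over qubit 3

# diag(1/2, 0, 0, 1/2): diagonal in {|00>, |11>}, no trace of phi left.
print(np.round(reduced, 3))
```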

Things are going to get more complicated if we quantize the position and momentum, or indeed if we move to quantum field theory, since then we don’t have definite particle trajectories to work with. It is not entirely clear to me whether Steane’s proposal can be made to work in the general case, and he does admit that further technical work is needed. However, he still asserts that whether or not a system has collapsed at a given point in spacetime is in principle a function of its entire future, i.e. whether or not it will eventually re-interact with the environment it has decohered with respect to.

At this point, I want to highlight a bizarre physical prediction that can be made if you believe Steane’s point of view. Really, it is metaphysics, since the experiment is not at all practical. For starters, the fact that I experience myself being in a definite state rather than a superposition means that there are environmental degrees of freedom that I have interacted with in the past that have decohered me into a particular basis. We can in principle imagine an omnipotent “Maxwell’s demon” type character, who can collect up every degree of freedom I have ever interacted with, bring it all together and reverse the evolution, eliminating me in the process. Whilst this is impractical, there is nothing in principle to stop it happening if we believe that QM applies to the entire universe. However, according to Steane, the very fact that I have a definite experience means that we can predict with certainty that no such interaction happens in the future. If it did, there would be no basis for my definite experience at the moment.

Contrast this with a many-worlds account à la David Wallace. There, the entire global wavefunction still exists, and the fact that I experience the world in a particular basis is due to the fact that only certain special bases, the ones in which decoherence occurs, are capable of supporting systems complex enough to achieve consciousness. There is nothing in this view to rule out the Maxwell’s demon conclusively, although we may note that such a demon is very unlikely to arise from any natural process, thanks to the second law of thermodynamics.

Therefore, there is something comforting about Steane’s proposal. If true, my very existence can be used to infer that I will never be wiped out by a Maxwell’s demon. All we need to do to test the theory is to try and wipe out a conscious being by constructing such a demon, which is obviously impractical and also unethical. Needless to say, there is something troubling about drawing such a strong metaphysical conclusion from quantum theory, which is why I still prefer the many-worlds account over Steane’s proposal at the moment. (That’s not to say that I agree with the former either though.)

Against Interpretation

It appears that I haven’t had a good rant on this blog for some time, but I have been stimulated into doing so by some of the discussion following the Quantum Pontiff’s recent post about Bohmian Mechanics. I don’t want to talk about Bohm theory in particular, but to answer the following general question:

  • Just what is the goal of studying the foundations of quantum mechanics?

Before answering this question, note that its answer depends on whether you are approaching it as a physicist, mathematician, philosopher, or religious crank trying to seek justification for your outlandish worldview. I’m approaching the question as a physicist and to a lesser extent as a mathematician, but philosophers may have legitimate alternative answers. Since the current increase of interest in foundations is primarily amongst physicists and mathematicians, this seems like a natural viewpoint to take.

Let me begin by stating some common answers to the question:

1. To provide an interpretation of quantum theory, consistent with all its possible predictions, but free of the conceptual problems associated with orthodox and Copenhagen interpretations.

2. To discover a successor to quantum theory, consistent with the empirical facts known to date, but making new predictions in untested regimes as well as resolving the conceptual difficulties.

Now, let me give my proposed answer:

  • To provide a clear path for the future development of physics, and possibly to take a few steps along that path.

To me, this statement applies to the study of the foundations of any physical theory, not just quantum mechanics, and the success of the strategy has been borne out in practice. For example, consider thermodynamics. The earliest complete statements of the principles of thermodynamics were in terms of heat engines. If you wanted to apply the theory to some physical system, you first had to work out how to think of it as a kind of heat engine. This was often possible, but a rather unnatural thing to do in many cases. The introduction of the concept of entropy eliminated the need to talk about heat engines and allowed the theory to be applied to virtually any macroscopic system. Further, it facilitated the discovery of statistical mechanics. The formulation in terms of entropy is mathematically equivalent to the earlier formulations, and thus it might be thought superfluous to requirements, but in hindsight it is abundantly clear that it was the best way of looking at things for the progress of physics.
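
To recall the step that makes the two formulations equivalent: Carnot’s analysis of heat engines yields Clausius’ theorem, which states that for any reversible cycle

    ∮ δQ/T = 0,

so the quantity

    S(B) − S(A) = ∫ δQ_rev/T   (integrated along any reversible path from A to B)

is path-independent and therefore defines a state function S. Once entropy is in hand, the Kelvin and Clausius engine statements can be repackaged as the engine-free statement that the entropy of an isolated system never decreases.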

Let’s accept my answer to the foundational question for now and examine what becomes of the earlier answers. I think it is clear that answer 2 is consistent with my proposal, and is a legitimate task for a physicist to undertake. For those who wish to take that road, I wish you the best of luck. On the other hand, answer 1 is problematic.

Earlier, I wrote a post about criteria that a good interpretation should satisfy. Now I would like to take a step back from that and urge the banishment of the word interpretation entirely. The problem with 1 is that it ring-fences the experimental predictions of quantum theory, so that the foundational debate has no impact on them at all. This is the antithesis of the approach I advocate, since on my view foundational studies are supposed to feed back into improved practice of the theory. I think that the separation of foundations and practice did serve a useful role in the historical development of quantum theory, since rapid progress required focussing attention on practical matters, and the time was not ripe for detailed foundational investigations. For one thing, experiments that probe the weirder aspects of quantum theory were not possible until the last couple of decades. It can also serve a useful role for a subsection of the philosophy community, who may wish to focus on interpretation without having to keep track of modern developments in the physics. However, the view is simply a hangover from an earlier age, and should be abandoned as quickly as possible. It is a debate that can never be resolved, since how can physicists be convinced to adopt one interpretation over another if it makes no difference at all to how they understand the phenomenology of the theory?

On the other hand, if one looks closely it is evident that many “interpretations” that are supposedly of this type are not mere interpretations at all. For example, although Bohmian Mechanics is equivalent to standard quantum theory in its predictions, it immediately suggests a generalization to a “nonequilibrium” hidden variable theory, which would make new predictions not possible within the standard theory. Similar remarks can be made about other interpretations. For example, many-worlds, despite not being a favorite of mine, does suggest that it is perfectly fine to apply standard quantum theory to the entire universe. In Copenhagen this is not possible in any straightforward way, since there is always supposed to be a “classical” world out there at some level, to which the state of the quantum system is referred. In short, the distinction between “the physics” and “the interpretation” often disappears on close inspection, so we are better off abandoning the word “interpretation” and instead viewing the project as providing alternative frameworks for the future progress of physics.

Finally, the more observant amongst you will have noticed that I did not include “solving the measurement problem” as a possible major goal of quantum foundations, despite its frequent appearance in this context. Deconstructing the measurement problem requires its own special rant, so I’m saving it for a future occasion.

More on criteria for interpretations

Well, my “big list” has proved to be my most popular blog post to date, thanks in no small part to a mention over at Uncertain Principles and a number of other blogs. I know when I’m on to a good thing, so let’s stick with the topic for one more post.

The big news is that we have the first response to the criteria from an interpreter of quantum theory over at koantum matters. I would love to see responses from advocates of other interpretations, not because I expect many surprises, but more because it would help me to improve the criteria. I’d like to know if interpreters interpret the criteria in the way I intended.

One of the reasons for engaging in a project like this is that I personally don’t find any of the contemporary interpretations all that compelling. Advocates are often fairly good at arguing their case, so it can be hard to express exactly why a given interpretation makes me uneasy. It is fairly clear that, rightly or wrongly, most of the physics community agrees with me on this, since otherwise there would not be such an emphasis on Copenhagen and Orthodox Dirac-von Neumann ideas in undergraduate quantum mechanics courses. Other interpretations are usually dealt with in one or two lectures at the end of a course, if they are mentioned at all.

In my opinion, the most likely way that the debate on interpretations will be closed is if one interpretation makes itself indispensable for understanding quantum theory. This could be because it leads to new physics, but alternatively it could just provide a far better way of explaining the phenomena of quantum theory to both students and the general public.

A useful comparison here is to Einstein’s approach to special relativity. In fact, the postulates of quantum theory have been compared to Einstein’s postulates by a variety of authors (e.g. see here and here). Despite Einstein’s insights, the plain fact of the matter is that almost all of the predictive content of special relativity is contained in the Lorentz transformations, and their extension to the Lorentz and Poincare groups. Especially when doing quantum field theory, special relativity is almost always reduced to just this in modern applications. We could then contemplate starting with a mathematical axiomatization of the Lorentz group and never bothering to teach students about Einstein’s postulates at all. This is supposed to be analogous to the current situation in quantum theory, where we cannot derive the whole theory from postulates that are explicitly physical in nature, but are ultimately forced to think in terms of abstract Hilbert spaces and the like.

In my view, the main advantage of Einstein’s approach is that it leads directly to the main phenomena of the theory without having to posit the Lorentz transformations to begin with. For example, by considering Einstein’s train thought experiments, we can understand why there is length contraction, time dilation and relativity of simultaneity directly from the postulates themselves. We would consider a student ill-equipped to study relativity if these arguments were not understood before diving into the derivation of the Lorentz transformations. In my opinion, it is this that makes relativity more easily understandable than quantum theory.
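
To recall how directly the phenomena fall out, consider the standard light-clock version of this argument, which uses nothing beyond the two postulates. A clock “ticks” each time a light pulse crosses a cavity of length L and returns. In the clock’s rest frame a tick takes

    t_0 = 2L/c,

while in a frame where the clock moves at speed v the pulse traverses two hypotenuses, so c(t/2) = sqrt(L^2 + (vt/2)^2), giving

    t = t_0/sqrt(1 − v^2/c^2).

Time dilation thus appears before the Lorentz transformations have even been written down.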

Therefore, I would argue that to replace orthodoxy in the classroom, an interpretation will have to provide a direct route to some of the main phenomena of quantum theory, as well as facilitating an elegant route to the full mathematical formalism. If not, the interpretation is likely to remain interesting to only a small band of specialists. Part of the aim of the criteria is to make interpreters think about these sorts of issues, and that was in particular the point of the “principles” criterion.

Another aim, and perhaps the main one, is to move the debate about interpretations forward a little. Currently, interpretations are usually presented as counterpoints to Copenhagen/orthodoxy: we first explain those ideas, then poke holes in them by discussing the measurement problem, and finally introduce a new interpretation that is supposed to fix the problem. However, we now know that Copenhagen/orthodoxy occupies just a small corner of a large space of possibilities, and not necessarily the most convincing corner at that, so it seems silly to focus exclusively on it as the starting point.

Once we recognise this, though, it becomes difficult to formulate the conceptual problems of quantum theory in an interpretationally neutral way, since the measurement problem cannot even be stated precisely until we have taken some stand on the meaning of the wavefunction. Nevertheless, unease about interpretations persists, so the criteria are partly designed to give interpreters a hard time by identifying the weaknesses in their proposals in as neutral a way as possible. The difficulty is that a number of known issues seem to apply only to particular interpretations: for example, it would be nice if the criteria forced many-worlds advocates to discuss the basis problem and the meaning of probability, which may have no analogs in other interpretations. One way of achieving this would be to introduce a series of if … then … clauses into the criteria, e.g. if you take an ontological view of the wavefunction, then explain the Born rule. That is obviously inelegant, though, and it would be nicer to capture the problems with all interpretations in a short, simple set of criteria that applies to every interpretation equally.

With this in mind, it should be clear that the current list is far from final, and I would welcome any ideas on how to improve it.

Professional Jealousy

As some of you know, my alter ego works on quantum information and computation (I’ll leave you to decide which of us is Clark Kent and which is Superman). My foundations personality sometimes feels a twinge of professional jealousy and I’ll tell you why.

In quantum computation we have a set of criteria for evaluating proposed experimental implementations, known as the DiVincenzo criteria. These tell you what is required to implement the circuit model of quantum computation, and include things like the ability to prepare pure input states and the ability to perform a universal set of gates. Of course, you might choose to implement an alternative model of computation, such as one of the measurement-based models, and then a different set of criteria is applicable. Nevertheless, talks about proposed implementations often proceed by explaining how each of the criteria is to be met in turn. This makes it very clear what the weak and strong points of the implementation are, since there are usually one or two criteria that present a significant experimental challenge.
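
Just to illustrate the style of evaluation (a toy sketch, not any official tool): the criteria below paraphrase DiVincenzo’s list, and the verdicts describe a hypothetical ion-trap proposal purely for illustration:

    # A toy, criteria-driven evaluation of a proposed implementation.
    CRITERIA = [
        "Scalable physical system with well-characterized qubits",
        "Initialization of qubits to a simple fiducial state",
        "Decoherence times much longer than gate operation times",
        "A universal set of quantum gates",
        "Qubit-specific measurement capability",
    ]

    # Hypothetical verdicts for an ion-trap proposal (illustrative only).
    verdicts = [
        "challenging: scaling the trap to many ions is the hard part",
        "met: optical pumping",
        "met: long-lived hyperfine levels",
        "met: laser-driven single- and two-qubit gates",
        "met: state-dependent fluorescence readout",
    ]

    for criterion, verdict in zip(CRITERIA, verdicts):
        print(f"- {criterion}: {verdict}")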

In contrast, there is no universally accepted set of criteria that an interpretation of quantum mechanics is supposed to meet. They are usually envisioned as attempts to solve the nefarious “measurement problem”, which is actually a catch-all term for a bunch of related difficulties to which different researchers attach different degrees of significance. The question of exactly what an interpretation is supposed to do also varies according to where one is planning to apply it. Is it supposed to explain the emergence of classical mechanics, help us understand why quantum computation works, give us some clues as to how to construct quantum gravity, or simply stand as a work of philosophical elegance?

It seems to me that the foundations community should have, by now, knocked their heads together and come up with a definitive list of issues on which an interpretation has to take a stand before we are prepared to accept it as a viable contender. Then, instead of reading lots of lengthy papers and spending a lot of time trying to work out exactly where the wool has been pulled over our eyes, we could simply send each new interpreter a form to fill in and be done with it. Of course, this is bound to be somewhat more subjective than the DiVincenzo criteria, but hopefully not by all that much. For what it’s worth, here is my attempt at the big list.

The first six criteria would probably be agreed upon by most people who think seriously about foundations.

  • An interpretation should have a well-defined ontology.
    • To begin with, you need to tell me which things are supposed to correspond to the stuff that actually exists in reality. This can be some element of the quantum formalism, e.g. the state vector, something you have added to it, e.g. hidden variables, or something much more exotic, e.g. relations between things without any definite state for the things that are related, correlations without correlata etc. This is all fine at this stage, but of course the more exotic possibilities are going to get into trouble with the later criteria.
    • At this stage, I am even prepared to allow you to say that only detector clicks exist in reality, so long as you are clear about this and are prepared to face the later challenges.
    • As a side note, some people might want to add that the interpretation should explicitly state whether the quantum state vector is ontological, i.e. corresponds to something in reality, or epistemic, i.e. something more like a probability distribution. I am inclined to believe that if you have a clear ontology then it should also be clear what the answer to this question is without any need for further comment. I am also inclined to believe that this fixation on the role of the state vector is an artifact of taking the Schroedinger picture deadly seriously, and ignoring other formalisms in which it plays a lesser role. For instance, why don’t we ask whether operators or Wigner functions are ontological or epistemic instead?
  • An interpretation should not conflict with my direct everyday experience.
    • In everyday life, objects appear to be in one definite place and I have one unique conscious experience. If you have adopted a bizarre ontology, wherein this is not the case at the quantum level, you have to explain why it appears that it is the case to me. This is a particularly relevant question for relationalists, Everettistas and correlationalists of course. It is also not the same thing as…
  • An interpretation should explain how classical mechanics emerges from quantum theory.
    • Why do systems exist that appear to have states represented by points in phase space, evolving according to the classical evolution equations?
    • Note that it is not enough to give some phase space description. It must correspond to the description that we actually use to describe classical systems.
    • Some people might want to phrase this as “Why don’t we see macroscopic superpositions?”. I’m not quite sure what it would mean to “see” a macroscopic superposition, and I think that this is the more general issue in any case.
    • Similarly, you may be bothered by the fact that I haven’t mentioned the “collapse of the wavefunction” or the “reduction of the wavevector”. Your solution to that ought to be immediately apparent from combining your ontology with the answer to the present issue.
    • Some physicists seem to think that the whole question of interpretation can be boiled down to this one point, or that it is identical with the measurement problem. I hope you are convinced that this is not the case by now.
  • An interpretation should not conflict with any empirically established facts.
    • For example, I don’t mind if you believe that wavefunction collapse is a real physical process, but your theory should be compatible with all the systems that have been observed in superposition to date.
  • An interpretation should provide a clear explanation of how it applies to the “no-go” theorems of Bell and Kochen-Specker.
    • A simple answer would be to explain in what sense your interpretation is nonlocal and contextual. If you claim locality or noncontextuality for your interpretation then you need to give a clear explanation of which other premises of the theorems are violated by your interpretation. They are theorems, so some premise must be violated. (For a concrete target, see the CHSH inequality stated just after this list.)
  • An interpretation should be applicable to multiparticle systems in nonrelativistic quantum theory.
    • Some interpretations take the idea that the wavefunction is like a wave in real 3d space very seriously (the transactional interpretation comes to mind here). Often such ideas can only be worked out in detail for a single particle. However, the move to wavefunctions on multiparticle configuration space is unavoidable, and it needs to be convincingly accomplished.
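
To give the Bell part of the no-go criterion a concrete target, the CHSH form of the theorem is the cleanest version to address. For measurement settings a, a′ on one wing of the experiment and b, b′ on the other, any locally causal hidden variable theory predicts correlations satisfying

    |E(a,b) + E(a,b′) + E(a′,b) − E(a′,b′)| ≤ 2,

whereas quantum mechanics attains 2√2 for suitable measurements on a singlet state. An interpretation that claims to be local owes us an account of precisely which assumption in the derivation of this inequality it rejects.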

The next four criteria are things that I regard as important, but probably some people would not give them such great importance.

  • An interpretation should provide a clear explanation of the principles it stands upon.
    • For example, if you claim that your interpretation is minimal in some sense (as many-worlds and modal advocates often do) then you need to make clear what the minimality assumption is and derive the interpretation from it if possible.
    • If you claim that “quantum theory is about X” then a full derivation of quantum theory from axioms about the properties that X should satisfy would be nice. Examples of X might be nonstandard logics, complementarity, or information.
  • No factitious sample spaces.
    • OK, this is a bit of a personal bugbear of mine. Some interpretations introduce classical sample spaces (over hidden variable states, for instance) or generalizations of the notion of a sample space (as in consistent histories). Quantum theory is then thought of as a sort of probability theory over these spaces. Often, however, the “quantum states” on these sample spaces are a strict subset of the allowed measures on the sample space, and the question is: why? (A toy example of this mismatch is sketched just after this list.)
    • I allow the explanation to be dynamical, in analogy to statistical mechanics. There we tend to see equilibrium distributions even though many other distributions are possible. The dynamics ensures that “most” distributions tend to equilibrium ones. Of course, this gets into the thorny issues of the foundations of statistical mechanics, but provided you can do at least as good a job as is done there I am OK with it.
    • I also allow a principle explanation, e.g. some sort of fundamental uncertainty principle. However, unlike the standard uncertainty relations, you should actually be able to derive the set of allowed measures from the principle.
  • An interpretation should not be ambiguous about whether it is consistent with the scientific method.
    • Some interpretations seem to undermine the very method that was used to discover quantum theory in the first place. For example, we assumed that experiments really had outcomes and that it was OK to reason about the world using ordinary deductive logic. If you deny any of these things then you need to explain why it was valid to use the scientific method to arrive at the theory in the first place. How do you know that an even more radical revision of these concepts isn’t in order, perhaps one that could never be arrived at by empirical means?
  • An interpretation should take the great probability debate into account.
    • Quantum theory involves probabilities and some interpretations take a stand on the fundamental significance of these. Is the interpretation consistent with all the major schools of thought on the foundations of probability (propensities, frequentism and subjectivism), at least as far as these are themselves consistent? If not, you need to be clear on what notion of probability is actually needed and address the main arguments in the great probability debate. Good luck, because you could spend a whole career just doing this.
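
As promised above, here is a toy illustration of the “strict subset” point for a single qubit, written as a short Python sketch. Take the sample space to be the ±1 outcomes of spin measurements along each unit vector n. Quantum states assign probabilities via the Born rule, which for a state with Bloch vector r gives p(+1|n) = (1 + r·n)/2, an affine function of n. A classical measure on this space is free to assign “spin-up along every axis”, but no quantum state can:

    import numpy as np

    rng = np.random.default_rng(0)

    def born_up(r, n):
        """Born-rule probability of the +1 outcome for spin along the
        unit vector n, for a qubit with Bloch vector r (|r| <= 1)."""
        return 0.5 * (1.0 + np.dot(r, n))

    r = np.array([0.0, 0.0, 1.0])  # pure state: spin-up along z
    for _ in range(3):
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)
        # Matching the assignment p(+1|n) = 1 for every n would require
        # r.n = 1 for all unit vectors n, which no Bloch vector achieves.
        print(round(born_up(r, n), 3), "vs the classical assignment 1.0")

The gap between the two sets of measures is exactly what this criterion asks an interpretation to explain.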

The final three criteria are not strictly required for me to take your interpretation seriously, but addressing them would score you extra bonus points.

  • An interpretation should be consistent with relativistic quantum field theory and the standard model.
    • Obviously, you need to be consistent with the most fundamental theories of physics that we have at the moment. However, the conceptual leap from nonrelativistic to relativistic physics is nontrivial and it has implications for ontology even if we forget about quantum theory. Therefore, it is OK to just focus on the nonrelativistic case when developing an interpretation. QFT might require significant changes to the ontology of your interpretation, and this is something that should be addressed eventually.
  • An interpretation should suggest experiments that might exhibit departures from quantum theory.
    • It’s good to have something which can be tested in the lab. Interpretations such as spontaneous collapse theories make predictions that depart from quantum theory and these should be investigated and tested.
    • However, even if your interpretation is entirely consistent with quantum theory, it might suggest novel ways in which the theory can be modified. We should be constantly on the lookout for such things and test them wherever possible.
  • An interpretation should address the phenomenology of quantum information theory.
    • This reflects my personal interests quite a bit, but I think it is a worthwhile thing to mention. Several quantum protocols, such as teleportation, suggest a strong analogy between quantum states (even pure ones) and probability distributions. If your interpretation makes light of this analogy, e.g. the state is treated ontologically, then it would be nice to have an explanation of why the analogy is so effective in deriving new results. (A quick simulation of teleportation follows below.)
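
Finally, here is a minimal numpy sketch of the teleportation protocol itself, as one place where the analogy bites. Whatever Bell outcome (a, b) Alice obtains, Bob’s corrected qubit matches the input state exactly, even though only two classical bits and a pre-shared pair were used, which is what invites the comparison with updating a classical probability distribution on receipt of data:

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # An arbitrary (randomly chosen) state for Alice to teleport.
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)

    bell00 = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell00)  # qubit 1 = input; qubits 2,3 = shared pair

    def bell(a, b):
        """The four Bell states, indexed by the bits (a, b)."""
        Za = np.linalg.matrix_power(Z, a)
        Xb = np.linalg.matrix_power(X, b)
        return np.kron(Za, Xb) @ bell00

    for a in (0, 1):
        for b in (0, 1):
            # Project qubits 1,2 onto the Bell outcome (a, b)...
            bob = bell(a, b).conj() @ state.reshape(4, 2)
            # ...then apply Bob's correction Z^a X^b.
            bob = np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b) @ bob
            bob /= np.linalg.norm(bob)
            print(a, b, abs(np.vdot(psi, bob)))  # 1.0 for every outcome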