A new preprint entitled *The Quantum State Cannot be Interpreted Statistically* by Pusey, Barrett and Rudolph (henceforth known as PBR) has been generating a significant amount of buzz in the last couple of days. Nature posted an article about it on their website, Scott Aaronson and Lubos Motl blogged about it, and I have been seeing a lot of commentary about it on Twitter and Google+. In this post, I am going to explain the background to this theorem and outline exactly what it entails for the interpretation of the quantum state. I am not going to explain the technicalities in great detail, since these are explained very clearly in the paper itself. The main aim is to clear up misconceptions.

First up, I would like to say that I find the use of the word “Statistically” in the title to be a rather unfortunate choice. It is liable to make people think that the authors are arguing against the Born rule (Lubos Motl has fallen into this trap in particular), whereas in fact the opposite is true. The result is all about reproducing the Born rule within a realist theory. The question is whether a scientific realist can interpret the quantum state as an *epistemic* state (state of knowledge) or whether it must be an *ontic* state (state of reality). The theorem seems to show that only the ontic interpretation is viable, but, in my view, this is a bit too quick. On careful analysis, it does not really rule out any of the positions that are advocated by contemporary researchers in quantum foundations. However, it does answer an important question that was previously open, and confirms an intuition that many of us already held. Before going into more detail, I also want to say that I regard this as the most important result in quantum foundations in the past couple of years, well deserving of a good amount of hype if anything is. I am not sure I would go as far as Antony Valentini, who is quoted in the Nature article saying that it is the most important result since Bell’s theorem, or David Wallace, who says that it is the most significant result he has seen in his career. Of course, these two are likely to be very happy about the result, since they already subscribe to interpretations of quantum theory in which the quantum state is ontic (de Broglie-Bohm theory and many-worlds respectively) and perhaps they believe that it poses more of a dilemma for epistemicists like myself than it actually does.

## Classical Ontic States

Before explaining the result itself, it is important to be clear on what all this epistemic/ontic state business is all about and why it matters. It is easiest to introduce the distinction via a classical example, for which the interpretation of states is clear. Therefore, consider the Newtonian dynamics of a single point particle in one dimension. The trajectory of the particle can be determined by specifying initial conditions, which in this case consists of a position \(x(t_0)\) and momentum \(p(t_0)\) at some initial time \(t_0\). These specify a point in the particle’s phase space, which consists of all possible pairs \((x,p)\) of positions and momenta.

Then, assuming we know all the relevant forces, we can compute the position and momentum \((x(t),p(t))\) at some other time \(t\) using Newton’s laws or, equivalently, Hamilton’s equations. At any time \(t\), the phase space point \((x(t),p(t))\) can be thought of as the instantaneous *state* of the particle. It is clearly an *ontic* state (state of reality), since the particle either does or does not possess that particular position and momentum, independently of whether we know that it possesses those values^{[1]}. The same goes for more complicated systems, such as multiparticle systems and fields. In all cases, I can derive a phase space consisting of configurations and generalized momenta. This is the space of ontic states for any classical system.
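As a concrete illustration, here is a minimal sketch of tracking an ontic state \((x(t), p(t))\) by integrating Hamilton’s equations. The example system (a harmonic oscillator with \(m = k = 1\)) and the symplectic Euler integrator are illustrative choices, not anything specific from the discussion above:

```python
# Sketch: evolving the ontic state (x, p) of a classical particle.
# Example system: harmonic oscillator with m = k = 1 (illustrative),
# so H = (p^2 + x^2)/2, integrated with the symplectic Euler method.
def evolve(x, p, dt=1e-3, steps=10_000):
    for _ in range(steps):
        p -= x * dt   # dp/dt = -dH/dx = -x
        x += p * dt   # dx/dt = +dH/dp = p
    return x, p

# Starting from the phase space point (1, 0), the state after time t = 10
# is close to the exact solution (cos 10, -sin 10).
x, p = evolve(1.0, 0.0)
```

The energy \((x^2 + p^2)/2\) is approximately conserved along the trajectory, which is why the oscillator’s phase space trajectories are circles.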

## Classical Epistemic States

Although the description of classical mechanics in terms of ontic phase space trajectories is clear and unambiguous, we are often, indeed usually, more interested in tracking what we *know* about a system. For example, in statistical mechanics, we may only know some macroscopic properties of a large collection of systems, such as pressure or temperature. We are interested in how these quantities change over time, and there are many different possible microscopic trajectories that are compatible with this. Generally speaking, our knowledge about a classical system is determined by assigning a probability distribution over phase space, which represents our uncertainty about the actual point occupied by the system.

We can track how this probability distribution changes using Liouville’s equation, which is derived by applying Hamilton’s equations weighted with the probability assigned to each phase space point. The probability distribution is pretty clearly an *epistemic* state. The actual system only occupies one phase space point and does not care what probability we have assigned to it. Crucially, the ontic state occupied by the system would be regarded as possible by us in more than one probability distribution, in fact it is compatible with infinitely many.
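To make the last point concrete, here is a minimal sketch showing that a single ontic state lies in the support of many distinct epistemic states. The observer names and distribution parameters are invented purely for illustration:

```python
import math

# Sketch: one ontic state compatible with many epistemic states.
# Two hypothetical observers assign different Gaussian distributions
# over the particle's position; both give the true position nonzero weight.
def gaussian(x, mean, sigma):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

x_true = 0.3                        # the actual (ontic) position
rho_alice = gaussian(x_true, 0.0, 1.0)
rho_bob = gaussian(x_true, 1.0, 0.5)
# Both densities are strictly positive at x_true, so the ontic state is in
# the support of both epistemic states.
```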

## Quantum States

We have seen that there are two clear notions of state in classical mechanics: ontic states (phase space points) and epistemic states (probability distributions over the ontic states). In quantum theory, we have a different notion of state — the wavefunction — and the question is: should we think of it as an ontic state (more like a phase space point), an epistemic state (more like a probability distribution), or something else entirely?

Here are three possible answers to this question:

- Wavefunctions are epistemic and there is some underlying ontic state. Quantum mechanics is the statistical theory of these ontic states in analogy with Liouville mechanics.
- Wavefunctions are epistemic, but there is no deeper underlying reality.
- Wavefunctions are ontic (there may also be additional ontic degrees of freedom, which is an important distinction but not relevant to the present discussion).

I will call options 1 and 2 psi-epistemic and option 3 psi-ontic. Advocates of option 3 are called psi-ontologists, in an intentional pun coined by Chris Granade. Options 1 and 3 share a conviction of *scientific realism*, which is the idea that there must be some description of what is going on in reality that is independent of our knowledge of it. Option 2 is broadly anti-realist, although there can be some subtleties here^{[2]}.

The theorem in the paper attempts to rule out option 1, which would mean that scientific realists should become psi-ontologists. I am pretty sure that no theorem on Earth could rule out option 2, so that is always a refuge for psi-epistemicists, at least if their psi-epistemic conviction is stronger than their realist one.

I would classify the Copenhagen interpretation, as represented by Niels Bohr^{[3]}, under option 2. One of his famous quotes is:

> There is no quantum world. There is only an abstract physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature…^{[4]}

and “what we can say” certainly seems to imply that we are talking about our knowledge of reality rather than reality itself. Various contemporary neo-Copenhagen approaches also fall under this option, e.g. the Quantum Bayesianism of Carlton Caves, Chris Fuchs and Ruediger Schack; Anton Zeilinger’s idea that quantum physics is only about information; and the view presently advocated by the philosopher Jeff Bub. These views are safe from refutation by the PBR theorem, although one may debate whether they are desirable on other grounds, e.g. the accusation of instrumentalism.

Pretty much all of the well-developed interpretations that take a realist stance fall under option 3, so they are in the psi-ontic camp. This includes the Everett/many-worlds interpretation, de Broglie-Bohm theory, and spontaneous collapse models. Advocates of these approaches are likely to rejoice at the PBR result, as it apparently rules out their only realist competition, and they are unlikely to regard anti-realist approaches as viable.

Perhaps the best known contemporary advocate of option 1 is Rob Spekkens, but I also include myself and Terry Rudolph (one of the authors of the paper) in this camp. Rob gives a fairly convincing argument that option 1 characterizes Einstein’s views in this paper, which also gives a lot of technical background on the distinction between options 1 and 2.

## Why be a psi-epistemicist?

Why should the epistemic view of the quantum state be taken seriously in the first place, at least seriously enough to prove a theorem about it? The most naive argument is that, generically, quantum states only predict probabilities for observables rather than definite values. In this sense, they are unlike classical phase space points, which determine the values of all observables uniquely. However, this argument is not compelling because determinism is not the real issue here. We can allow there to be some genuine stochasticity in nature whilst still maintaining realism.

An argument that I personally find motivating is that quantum theory can be viewed as a noncommutative generalization of classical probability theory, as was first pointed out by von Neumann. My own exposition of this idea is contained in this paper. Even if we don’t always realize it, we are always using this idea whenever we generalize a result from classical to quantum information theory. The idea is so useful, i.e. it has such great explanatory power, that it would be very puzzling if it were a mere accident, but it does appear to be just an accident in most psi-ontic interpretations of quantum theory. For example, try to think about why quantum theory should be formally a generalization of probability theory from a many-worlds point of view. Nevertheless, this argument may not be compelling to everyone, since it mainly entails that mixed states have to be epistemic. Classically, the pure states are the extremal probability distributions, i.e. they are just delta functions on a single ontic state. Thus, they are in one-to-one correspondence with the ontic states. The same could be true of pure quantum states without ruining the analogy^{[5]}.

A more convincing argument concerns the instantaneous change that occurs after a measurement — the collapse of the wavefunction. When we acquire new information about a classical epistemic state (probability distribution) say by measuring the position of a particle, it also undergoes an instantaneous change. All the weight we assigned to phase space points that have positions that differ from the measured value is rescaled to zero and the rest of the probability distribution is renormalized. This is just Bayesian conditioning. It represents a change in our knowledge about the system, but no change to the system itself. It is still occupying the same phase space point as it was before, so there is no change to the ontic state of the system. If the quantum state is epistemic, then instantaneous changes upon measurement are unproblematic, having a similar status to Bayesian conditioning. Therefore, the measurement problem is completely dissolved within this approach.
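The conditioning step described above is simple enough to sketch in a few lines. The four-cell toy phase space is an invented illustration:

```python
# Sketch: Bayesian conditioning as "collapse" of a classical epistemic state.
# A toy distribution over four phase-space cells, labelled (position, momentum).
prior = {("L", -1): 0.25, ("L", +1): 0.25, ("R", -1): 0.25, ("R", +1): 0.25}

def condition_on_position(dist, observed):
    # Zero out the weight on cells whose position differs from the measured
    # value...
    post = {cell: w for cell, w in dist.items() if cell[0] == observed}
    norm = sum(post.values())
    # ...and renormalize. The system's ontic state is untouched throughout.
    return {cell: w / norm for cell, w in post.items()}

posterior = condition_on_position(prior, "L")
```

The "collapse" here is instantaneous but entirely unmysterious: only the description changes, not the system.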

Finally, if we allow a more sophisticated analogy between quantum states and probabilities, in particular by allowing constraints on how much may be known and allowing measurements to locally disturb the ontic state, then we can qualitatively explain a large number of phenomena that are puzzling for a psi-ontologist very simply within a psi-epistemic approach. These include: teleportation, superdense coding, and much of the rest of quantum information theory. Crucially, it also includes interference, which is often held as a convincing reason for psi-ontology. This was demonstrated in a very convincing way by Rob Spekkens via a toy theory, which is recommended reading for all those interested in quantum foundations. In fact, since this paper contains the most compelling reasons for being a psi-epistemicist, you should definitely make sure you read it so that you can be more shocked by the PBR result.

## Ontic models

If we accept that the psi-epistemic position is reasonable, then it is tempting to pick option 1 and try to maintain scientific realism. This leads us into the realm of ontic models for quantum theory, otherwise known as hidden variable theories^{[6]}. A pretty standard framework for discussing such models has existed since John Bell’s work in the 1960s, and almost everyone adopts the same definitions that were laid down then. The basic idea is that systems have properties. There is some space \(\Lambda\) of ontic states, analogous to the phase space of a classical theory, and the system has a value \(\lambda \in \Lambda\) that specifies all its properties, analogous to the phase space points. When we prepare a system in some quantum state \(\Ket{\psi}\) in the lab, what is really happening is that an ontic state \(\lambda\) is sampled from a probability distribution \(\mu(\lambda)\) over \(\Lambda\) that depends on \(\Ket{\psi}\).

We also need to know how to represent measurements in the model^{[7]}. For each possible measurement that we could make on the system, the model must specify the outcome probabilities for each possible ontic state. Note that we are not assuming determinism here. The measurement is allowed to be stochastic even given a full specification of the ontic state. Thus, for each measurement \(M\), we need a set of functions \(\xi^M_k(\lambda)\) , where \(k\) labels the outcome. \(\xi^M_k(\lambda)\) is the probability of obtaining outcome \(k\) in a measurement of \(M\) when the ontic state is \(\lambda\). In order for these probabilities to be well defined the functions \(\xi^M_k\) must be positive and they must satisfy \(\sum_k \xi^M_k(\lambda) = 1\) for all \(\lambda \in \Lambda\). This normalization condition is very important in the proof of the PBR theorem, so please memorize it now.

Overall, the probability of obtaining outcome \(k\) in a measurement of \(M\) when the system is prepared in state \(\Ket{\psi}\) is given by

\[\mbox{Prob}(k|M,\Ket{\psi}) = \int_{\Lambda} \xi^M_k(\lambda) \mu(\lambda) d\lambda, \]

which is just the average of the outcome probabilities over the ontic state space.

If the model is going to reproduce the predictions of quantum theory, then these probabilities must match the Born rule. Suppose that the \(k\)th outcome of \(M\) corresponds to the projector \(P_k\). Then, this condition boils down to

\[\Bra{\psi} P_k \Ket{\psi} = \int_{\Lambda} \xi^M_k(\lambda) \mu(\lambda) d\lambda,\]

and this must hold for all quantum states, and all outcomes of all possible measurements.
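As a sanity check on this framework, here is a toy ontic model, valid for a single fixed two-outcome measurement only (it is not a full model of quantum theory), in which averaging the response functions over \(\mu(\lambda)\) reproduces a Born rule probability. All the specifics are invented for illustration:

```python
# Sketch of the ontic model framework for one fixed measurement.
# Λ = [0, 1) (discretized), μ uniform, and response functions
#   ξ_0(λ) = 1 if λ < |⟨0|ψ⟩|², else 0;   ξ_1(λ) = 1 − ξ_0(λ).
N = 100_000
p0_born = 0.5  # |⟨0|ψ⟩|² for ψ = |+⟩, say

def xi(k, lam):
    in_first_cell = lam < p0_born
    return 1.0 if (k == 0) == in_first_cell else 0.0

lambdas = [(i + 0.5) / N for i in range(N)]  # midpoint grid on Λ
# The Born rule probability is recovered as the average over μ(λ):
prob0 = sum(xi(0, lam) for lam in lambdas) / N
```

Note that \(\xi_0(\lambda) + \xi_1(\lambda) = 1\) holds for every \(\lambda\); this is the normalization condition that does the work in the PBR argument.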

## Constraints on Ontic Models

Even disregarding the PBR paper, we already know that ontic models expressible in this framework have to have a number of undesirable properties. Bell’s theorem implies that they have to be nonlocal, which is not great if we want to maintain Lorentz invariance, and the Kochen-Specker theorem implies that they have to be contextual. Further, Lucien Hardy’s ontological excess baggage theorem shows that the ontic state space for even a qubit would have to have infinite cardinality. Following this, Montina proved a series of results, which culminated in the claim that there would have to be an object satisfying the Schrödinger equation present within the ontic state (see this paper). This latter result is close to the implication of the PBR theorem itself.

Given these constraints, it is perhaps not surprising that most psi-epistemicists have already opted for option 2, renouncing scientific realism entirely. Those of us who cling to realism have mostly decided that the ontic state must be a different type of object than it is in the framework described above. We could discard the idea that individual systems have well-defined properties, or the idea that the probabilities that we assign to those properties should depend only on the quantum state. Spekkens advocates the first possibility, arguing that only relational properties are ontic. On the other hand, I, following Huw Price, am partial to the idea of epistemic hidden variable theories with retrocausal influences, in which case the probability distributions over ontic states would depend on measurement choices as well as which quantum state is prepared. Neither of these possibilities is ruled out by the previous results, and they are not ruled out by PBR either. This is why I say that their result does not rule out any position that is seriously held by any researchers in quantum foundations. Nevertheless, until the PBR paper, there remained the question of whether a conventional psi-epistemic model was possible even in principle. Such a theory could at least have been a competitor to Bohmian mechanics. This possibility has now been ruled out fairly convincingly, and so we now turn to the basic idea of their result.

## The Result

Recall from our classical example that each ontic state (phase space point) occurs in the support of more than one epistemic state (Liouville distribution), in fact infinitely many. This is just because probability distributions can have overlapping support. Now, consider what would happen if we restricted the theory to only allow epistemic states with disjoint support. For example, we could partition phase space into a number of disjoint cells and only consider probability distributions that are uniform over one cell and zero everywhere else.

Given this restriction, the ontic state determines the epistemic state uniquely. If someone tells you the ontic state, then you know which cell it is in, so you know what the epistemic state must be. Therefore, in this restricted theory, the epistemic state is not really epistemic. It is uniquely fixed by the ontic state, and it would be better to say that we are talking about a *property* of the ontic state, rather than something that represents knowledge. According to the PBR result, this is exactly what must happen in any ontic model of quantum theory within the Bell framework.
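The disjoint-cell restriction can be sketched in a few lines. The unit-interval phase space and the cell count are illustrative choices, not anything from the PBR paper:

```python
# Sketch: a restricted classical theory where epistemic states have disjoint
# supports. Partition a toy phase space [0, 1) into cells; the only allowed
# epistemic states are uniform distributions over a single cell.
CELLS = 4

def cell_of(lam):
    # Which cell contains the ontic state lam in [0, 1)?
    return int(lam * CELLS)

def epistemic_state(lam):
    # The ontic state fixes the epistemic state uniquely: we return the
    # support interval of the uniform distribution over lam's cell.
    c = cell_of(lam)
    return (c / CELLS, (c + 1) / CELLS)
```

Two ontic states in the same cell yield the same epistemic state, and knowing \(\lambda\) always tells you the epistemic state exactly, which is the sense in which it has become a property of \(\lambda\).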

Here is the analog of this in ontic models of quantum theory. Suppose that two nonorthogonal quantum states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) are represented in an ontic model by probability distributions \(\mu_1(\lambda)\) and \(\mu_2(\lambda)\) whose supports overlap.

Because the distributions overlap, there are ontic states that are compatible with more than one quantum state, so this is a psi-epistemic model.

In contrast, if, for *every* pair of quantum states \(\Ket{\psi_1},\Ket{\psi_2}\), the corresponding probability distributions do not overlap,

then the quantum state is uniquely determined by the ontic state, and it is therefore better regarded as a property of \(\lambda\) rather than a representation of knowledge. Such a model is psi-ontic. The PBR theorem states that all ontic models that reproduce the Born rule must be psi-ontic.

### Sketch of the proof

In order to establish the result, PBR make use of the following idea. In an ontic model, the ontic state \(\lambda\) determines the probabilities for the outcomes of any possible measurement via the functions \(\xi^M_k\). The Born rule probabilities must be obtained by averaging these conditional probabilities with respect to the probability distribution \(\mu(\lambda)\) representing the quantum state. Suppose there is some measurement \(M\) that has an outcome \(k\) to which the quantum state \(\Ket{\psi}\) assigns probability zero according to the Born rule. Then, it must be the case that \(\xi^M_k(\lambda) = 0\) for every \(\lambda\) in the support of \(\mu(\lambda)\). Now consider two quantum states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) and suppose that we can find a two outcome measurement such that the first state gives zero Born rule probability to the first outcome and the second state gives zero Born rule probability to the second outcome. Suppose also that there is some \(\lambda\) that is in the support of both the distributions, \(\mu_1\) and \(\mu_2\), that represent \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) in the ontic model. Then, we must have \(\xi^M_1(\lambda) = \xi^M_2(\lambda) = 0\), which contradicts the normalization assumption \(\xi^M_1(\lambda) + \xi^M_2(\lambda) = 1\).

Now, it is fairly easy to see that there is no such measurement for a pair of nonorthogonal states, because this would mean that they could be distinguished with certainty, so we do not have a result quite yet. The trick to get around this is to consider multiple copies. Consider then, the four states \(\Ket{\psi_1}\otimes\Ket{\psi_1}, \Ket{\psi_1}\otimes\Ket{\psi_2}, \Ket{\psi_2}\otimes\Ket{\psi_1}\) and \(\Ket{\psi_2}\otimes\Ket{\psi_2}\) and suppose that there is a four outcome measurement such that \(\Ket{\psi_1}\otimes\Ket{\psi_1}\) gives zero probability to the first outcome, \(\Ket{\psi_1}\otimes\Ket{\psi_2}\) gives zero probability to the second outcome, and so on. In addition to this, we make an independence assumption that the probability distributions representing these four states must satisfy. Let \(\lambda\) be the ontic state of the first system and let \(\lambda’\) be the ontic state of the second. The independence assumption states that the probability densities representing the four quantum states in the ontic model are \(\mu_1(\lambda)\mu_1(\lambda’), \mu_1(\lambda)\mu_2(\lambda’), \mu_2(\lambda)\mu_1(\lambda’)\) and \(\mu_2(\lambda)\mu_2(\lambda’)\). This is a reasonable assumption because there is no entanglement between the two systems and we could do completely independent experiments on each of them. Assuming there is an ontic state \(\lambda\) in the support of both \(\mu_1\) and \(\mu_2\), there will be some nonzero probability that both systems occupy this ontic state whenever any of the four states are prepared. But, in this case, all four functions \(\xi^M_1,\xi^M_2,\xi^M_3\) and \(\xi^M_4\) must have value zero when both systems are in this state, which contradicts the normalization \(\sum_k \xi^M_k = 1\).

This argument works for the pair of states \(\Ket{\psi_1} = \Ket{0}\) and \(\Ket{\psi_2} = \Ket{+} = \frac{1}{\sqrt{2}} \left ( \Ket{0} + \Ket{1}\right )\). In this case, the four outcome measurement is a measurement in the basis:

\[\Ket{\phi_1} = \frac{1}{\sqrt{2}} \left ( \Ket{0}\otimes\Ket{1} + \Ket{1} \otimes \Ket{0} \right )\]

\[\Ket{\phi_2} = \frac{1}{\sqrt{2}} \left ( \Ket{0}\otimes\Ket{-} + \Ket{1} \otimes \Ket{+} \right )\]

\[\Ket{\phi_3} = \frac{1}{\sqrt{2}} \left ( \Ket{+}\otimes\Ket{1} + \Ket{-} \otimes \Ket{0} \right )\]

\[\Ket{\phi_4} = \frac{1}{\sqrt{2}} \left ( \Ket{+}\otimes\Ket{-} + \Ket{-} \otimes \Ket{+} \right ),\]

where \(\Ket{-} = \frac{1}{\sqrt{2}} \left ( \Ket{0} – \Ket{1}\right )\). It is easy to check that \(\Ket{\phi_1}\) is orthogonal to \(\Ket{0}\otimes\Ket{0}\), \(\Ket{\phi_2}\) is orthogonal to \(\Ket{0}\otimes\Ket{+}\), \(\Ket{\phi_3}\) is orthogonal to \(\Ket{+}\otimes\Ket{0}\), and \(\Ket{\phi_4}\) is orthogonal to \(\Ket{+}\otimes\Ket{+}\). Therefore, the argument applies and there can be no overlap in the probability distributions representing \(\Ket{0}\) and \(\Ket{+}\) in the model.
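These orthogonality claims are easy to verify numerically. The following sketch builds the four basis vectors from the definitions above, representing states as real coefficient lists in the \(\Ket{00}, \Ket{01}, \Ket{10}, \Ket{11}\) basis (the helper names are my own):

```python
import math

# Sketch: checking the PBR four-outcome measurement basis.
s = 1 / math.sqrt(2)
ket0, ket1 = [1.0, 0.0], [0.0, 1.0]
ketp = [s, s]    # |+⟩
ketm = [s, -s]   # |−⟩

def kron(a, b):               # tensor product of two state vectors
    return [x * y for x in a for y in b]

def superpose(a, b):          # (a + b)/√2
    return [s * (x + y) for x, y in zip(a, b)]

def inner(a, b):              # inner product (all coefficients are real here)
    return sum(x * y for x, y in zip(a, b))

phi1 = superpose(kron(ket0, ket1), kron(ket1, ket0))
phi2 = superpose(kron(ket0, ketm), kron(ket1, ketp))
phi3 = superpose(kron(ketp, ket1), kron(ketm, ket0))
phi4 = superpose(kron(ketp, ketm), kron(ketm, ketp))

# Each |φ_k⟩ should be orthogonal to the corresponding product state:
overlaps = [
    inner(phi1, kron(ket0, ket0)),
    inner(phi2, kron(ket0, ketp)),
    inner(phi3, kron(ketp, ket0)),
    inner(phi4, kron(ketp, ketp)),
]
```

All four overlaps vanish, so each product state assigns Born probability zero to one outcome, exactly as the argument requires.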

To establish psi-ontology, we need a similar argument for every pair of states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\). PBR establish that such an argument can always be made, but the general case is more complicated and requires more than two copies of the system. I refer you to the paper, where the details are explained very clearly.

## Conclusions

The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework. One of the things that a good interpretation of a physical theory should have is explanatory power. For me, the epistemic view of quantum states is so explanatory that it is worth trying to preserve it. Realism too is something that we should not abandon too hastily. Therefore, it seems to me that we should be questioning the assumptions of the Bell framework by allowing more general ontologies, perhaps involving relational or retrocausal degrees of freedom. At the very least, this option is the path less travelled, so we might learn something by exploring it more thoroughly.

- There are actually subtleties about whether we should think of phase space points as instantaneous ontic states. For one thing, the momentum depends on the first derivative of position, so maybe we should really think of the state being defined on an infinitesimal time interval. Secondly, the fact that momentum appears is because Newtonian mechanics is defined by second order differential equations. If it were higher order then we would have to include variables depending on higher derivatives in our definition of phase space. This is bad if you believe in a clean separation between basic ontology and physical laws. To avoid this, one could define the ontic state to be the position only, i.e. a point in configuration space, and have the boundary conditions specified by the position of the particle at two different times. Alternatively, one might regard the entire spacetime trajectory of the particle as the ontic state, and regard the Newtonian laws themselves as a mere pattern in the space of possible trajectories. Of course, all these descriptions are mathematically equivalent, but they are conceptually quite different and they lead to different intuitions as to how we should understand the concept of state in quantum theory. For present purposes, I will ignore these subtleties and follow the usual practice of regarding phase space points as the unambiguous ontic states of classical mechanics. [↩]
- The subtlety is basically a person called Chris Fuchs. He is clearly in the option 2 camp, but claims to be a scientific realist. Whether he is successful at maintaining realism is a matter of debate. [↩]
- Note, this is distinct from the *orthodox* interpretation as represented by the textbooks of Dirac and von Neumann, which is also sometimes called the Copenhagen interpretation. Orthodoxy accepts the eigenvalue-eigenstate link. Observables can sometimes have definite values, in which case they are objective properties of the system. A system has such a property when it is in an eigenstate of the corresponding observable. Since every wavefunction is an eigenstate of some observable, it follows that this is a psi-ontic view, albeit one in which there are no additional ontic degrees of freedom beyond the quantum state. [↩]
- Sourced from Wikiquote. [↩]
- but note that the resulting theory would essentially be the orthodox interpretation, which has a measurement problem. [↩]
- The terminology “ontic model” is preferred to “hidden variable theory” for two reasons. Firstly, we do not want to exclude the case where the wavefunction is ontic, but there are no extra degrees of freedom (as in the orthodox interpretation). Secondly, it is often the case that the “hidden” variables are the ones that we actually observe rather than the wavefunction, e.g. in Bohmian mechanics the particle positions are not “hidden”. [↩]
- Generally, we would need to represent dynamics as well, but the PBR theorem does not depend on this. [↩]

## Comments

The PBR theorem only applies to systems prepared in a pure state. If a system of two particles is prepared in an entangled pure state then we may apply the theorem to the pair and conclude that the entangled state is ontic (assuming we accept factorizability). However, there is no requirement that the individual subsystems must then have a pure state in their ontology.

However, I think there are the beginnings of an idea in your argument. What you want to do is consider counterfactual situations, i.e. if I were to make a z-measurement on the positron then the electron would collapse to a definite state in the z-basis, at which point it would have |0> or |1> as part of its ontic state, and similarly for the x-basis. Then, if you invoke locality you can argue that this must be the case for both bases even if the measurement is not made, and hence there must be a definite state in both the x and z-bases for the electron, but this is impossible because they correspond to nonoverlapping distributions. Of course, the resolution to this is that the locality assumption has to be dropped in an ontic model, which we already knew from Bell’s theorem.

In fact, you can make this argument into a nonlocality proof that is as rigorous as Bell’s theorem. Essentially it goes: PBR => ontic quantum states, and then ontic quantum states + steering => nonlocality. The latter part was shown by Harrigan and Spekkens in their paper http://arxiv.org/abs/0706.2661. I review this argument in an article I wrote for “The Quantum Times”, which is the newsletter of the APS topical group on Quantum Information. The newsletter has not been posted yet, but as soon as it is I will cross post it on this blog.

I see, two electrons in pure states |0>|0> is not the same as two pairs of entangled electron-positrons with the electrons in states |0>|0>.

I agree that non-locality is a consequence of this kind of argument, in fact instantaneous collapse everywhere implies non-locality. That does not bother me as long as the collapse is non-informative (it can’t be used to transmit information). But it makes me wonder what exactly a “state” is. An “instantaneous” collapse of a wave function may look different for two observers in relative motion due to their different definition of simultaneity, so their space-time representation of the wave function may be different, but still represent the same “state”.

Even disregarding the issue of whether the quantum state has to be ontic, Bell’s theorem already implies the same issue with simultaneity. It shows that the ontic state at B must depend on the choice of measurement at A and vice versa, and there are frames of reference in which the measurements occur in either order. There are only two possible responses to this:

1. Reject relativity at the fundamental level. Assume that there is a preferred frame of reference and have the nonlocal influences operate instantaneously in this frame. The frame will be hidden at the statistical level due to the averaging over ontic states, so you will still have Lorentz invariance at the operational level, but it means that you cannot use relativistic arguments to reason about what is happening at the ontic level, so the paradoxes do not arise. This is the solution adopted in Bohmian mechanics for example.

2. Reject one or more of the assumptions of Bell’s theorem (also assumed by PBR). For example, one could adopt a no-collapse interpretation like Everett/many-worlds, which denies the existence of ontic properties localized in spacetime, an assumption that Einstein called “separability” and that is crucial to the derivation of nonlocality. Alternatively, one could adopt one of the “neo-Copenhagen” approaches to quantum theory in which the need for an ontic state is denied. Finally, one could retain realism and single-valuedness of measurement outcomes by adopting ontologies that are not considered in the derivation of Bell’s theorem, e.g. retrocausality.

I already stated in the blog post that I like the retrocausal solution, or at least that I consider it worth investigating in more detail. This is because I prefer to retain realism, fundamental Lorentz invariance and psi-epistemicism, and it is one of the few options on the table that still has a chance of doing that. If the retrocausal program fails then I would have to drop one or more of these requirements, and I fluctuate between preferring neo-Copenhagen approaches or Everett depending on whether my psi-epistemic or realist convictions are stronger on any given day. To be convinced to drop fundamental Lorentz invariance, I would have to see violations of it on the statistical level. Valentini argues that this is to be expected in the Bohmian approach for example, since the statistical washing out of nonlocal influences is analogous to being in a state of thermal equilibrium in statistical mechanics, so we should expect to see systems out of this state of equilibrium somewhere in the universe. I consider this to be a firm prediction of all such theories, and so I would need to see empirical violations of Lorentz invariance to be convinced of them.

An interesting, more philosophically-oriented paper discussing PBR got posted today:

Statistical-Realism versus Wave-Realism in the Foundations of Quantum Mechanics

http://philsci-archive.pitt.edu/9021/1/Statistical_Realism_Versus_Wave_Realism_in_the_Foundations_of_Quantum_Mechanics.pdf

Hmm, if by “philosophically-oriented” you mean confusing and full of mistakes then I agree.

That’s comforting because I had trouble understanding it and felt really dumb.


Hi Matt,

I am writing a review of PBR for the “Journal of Scientific Exploration.” I am having trouble visualizing a single classical particle example where we have overlapping probability densities as you illustrate in your 3rd figure. I fail to find how one would find such overlaps. Let me first use the simple example of the difference between O and E reality taken from classical physics and described by PBR. I’ll change it a little to make it even simpler. Consider a ball with mass, m=2, attached to a spring with spring constant, k=2. Such a system is a simple harmonic oscillator (SHO)—stretch or compress the spring and the SHO “springs” into motion with the ball having momentum, p, and a position, x, relative to its unstretched or uncompressed 0 position, and constant energy, E=p²+x². If you think of a two dimensional space with orthogonal coordinate axes, p and x, the above energy equation describes a circle of radius √E centered about the coordinate origin. Such a space is a simple example of what is called a phase space, which in general has n dimensions of ps and xs. Given E, each point on that circle provides a momentum and position of the ball which, even if not observed, hence hidden, are ontic variables. After all, at every instant the ball has a specific momentum and position even though we may not know them. If we increase or decrease E, we simply change the radius of the circle in phase space, and at no time do the circles of different E values have common points of crossing.

We can think of the circles as disjoint epistemic distributions of positions and momenta—disjoint because given constant energies (circles) E1 and E2, we never have any ps and xs in common—the circles are concentrically nested. Given that we know E but not p or x, we say we have an epistemic probability distribution (over ontic HVs, p and x) on the circle of radius √E. In essence, the HVs, i.e., points (p, x) satisfying the SHO energy equation, would be uniformly distributed (over time) around the circle. The probability density dP/dx is a function of x at each point on the circle [dP/dx = (1/2π)(E − x²)^(−1/2)]. However, taking into consideration the time spent at each point on the circle (the period is 4π), we have a uniform constant probability distribution [dP/dt = (dP/dx)·(p/m) = 1/(4π)]. However, these distributions, characterized by different energy values, would be disjoint.
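With the corrected period π (the commenter notes the correction further down the thread), the time-averaged position density on a fixed-energy circle normalizes to dP/dx = 1/(π√(E − x²)). Here is a quick numerical sketch of that claim, using the commenter's conventions (m = 1/2, k = 2, so E = p² + x²); the specific values of E and the test window are illustrative:

```python
import math

# SHO in the commenter's conventions: H = p^2 + x^2 (m = 1/2, k = 2), so
# Hamilton's equations give dx/dt = 2p, dp/dt = -2x, and the period is pi.
E = 4.0            # fixed energy -> phase-space circle of radius sqrt(E)
T = math.pi

# Sample the trajectory x(t) = sqrt(E) * sin(2t) uniformly in time.
n = 200_000
xs = [math.sqrt(E) * math.sin(2 * T * i / n) for i in range(n)]

# Fraction of the period spent in a small x-window, compared with the
# arcsine density 1/(pi * sqrt(E - x^2)) induced by uniform-in-time sampling.
lo, hi = 0.5, 0.7
frac = sum(lo <= x <= hi for x in xs) / n
x_mid = 0.5 * (lo + hi)
analytic = (hi - lo) / (math.pi * math.sqrt(E - x_mid**2))
print(frac, analytic)   # the two numbers agree to within a couple of percent
```

Different energies E give non-overlapping circles, which is exactly the disjointness the commenter describes.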

Do you have a simple example of one particle probability densities with overlaps?

Well, a simple example would be two Gibbs states with different temperatures. This is a pretty degenerate case though because the supports of the distributions would typically be the entire phase space for both. They would just have different shapes.
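A minimal numerical illustration of the Gibbs-state point, assuming a free particle in one dimension with units m = k_B = 1 (so the thermal momentum distribution is a Gaussian with variance T):

```python
import math

# Two Gibbs (thermal) states for a particle's momentum at temperatures
# T1 < T2. Both densities are strictly positive at every p, so the supports
# overlap on the whole line; the distributions just have different shapes.
def gibbs(p, T):
    return math.exp(-p * p / (2 * T)) / math.sqrt(2 * math.pi * T)

for p in (0.0, 1.0, 3.0):
    print(p, gibbs(p, 1.0), gibbs(p, 4.0))
```

The colder state is more sharply peaked at p = 0 and the hotter one has fatter tails, but neither support excludes any momentum value.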

I think, from your example, that you are looking for cases where the probability distribution is a time average, and hence a stationary state. My thermal example is also of this form. However, this is a bad place to look for analogies, as quantum states do evolve in time, so generically we would want to consider probability distributions with nontrivial Liouville dynamics. How about this for an example? Sticking with the spring set-up, suppose your friend decides to pull the spring to a certain displacement and then let go. In one case, she tells you that she is going to pull the spring somewhere between 1cm and 2cm, but you never learn the exact displacement. In the second case, she tells you she is going to pull it somewhere between 1.5cm and 2.5cm. The two cases correspond to distributions that start with zero momentum and are extended along the x-axis with an overlap of 0.5cm. As the system evolves, this overlap remains constant, but the supports of the distributions evolve along circular trajectories.
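The constancy of the overlap can be checked directly. A sketch in the conventions used earlier in the thread (H = p² + x², for which the Liouville flow is a rigid rotation of the phase plane; the particular times and points are illustrative):

```python
import math

# Case A prepares x0 uniform on [1.0, 2.0] cm, case B on [1.5, 2.5] cm,
# both with p0 = 0. With H = p^2 + x^2 the Hamiltonian flow is
#   x(t) = x0*cos(2t) + p0*sin(2t),   p(t) = p0*cos(2t) - x0*sin(2t),
# i.e. a rigid rotation of phase space.
def evolve(x0, p0, t):
    c, s = math.cos(2 * t), math.sin(2 * t)
    return x0 * c + p0 * s, p0 * c - x0 * s

# Any point in the shared initial segment [1.5, 2.0] lies in the support of
# BOTH epistemic states, and both supports are carried by the same flow,
# so the point remains in both supports at all later times.
for t in (0.0, 0.3, 1.0):
    print(t, evolve(1.7, 0.0, t))

# The rotation preserves lengths, so the overlap segment keeps length 0.5.
a = evolve(1.5, 0.0, 0.7)
b = evolve(2.0, 0.0, 0.7)
print(math.hypot(b[0] - a[0], b[1] - a[1]))   # -> 0.5
```

This is the sense in which the overlapping line segments, rather than the time-averaged thickened circles, are the classical analogue of quantum states here.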

I fail to see the overlap here. I also meant m=1/2, not 2. Each possible preparation (initial stretch) results in a probability density distribution (over time) of points on definite concentric circles in phase space; each preparation produces a definite radius (√E) and each is centered about p=x=0. I see the preparations are certainly different, but each has a unique energy, doesn’t it? Where’s the overlap?

Your knowledge of the preparation, and hence your epistemic state, is going to be a distribution over the possible displacements of the mass that your friend pulled the spring to. Each possible displacement has a definite energy, but you only know that this energy is between kx_1^2 and kx_2^2, where either x_1 = 1cm and x_2 = 2cm or x_1 = 1.5cm and x_2 = 2.5cm.

Ok. (I also erred in the calculation of the period. It is pi, not 4pi.) So in this example it is the preparation that is important, not the energy, hence the fixed-E distributions for the first preparation (1&lt;x&lt;2) would join together, making a circle with a somewhat thickened outline, as if drawn with a blunt pencil. The 2nd preparation (1.5&lt;x&lt;2.5) would make another thick concentric circle, larger than the first but overlapping the smaller thickened circle in their common energy possibilities (radial positions). So we not only do not know p and x, we do not know E as well. Do I have this right? You may wish to use this example in your blog.

You are right, but you should not get so hung up on the time averaged distributions, which are indeed the thickened concentric circles that you suggest. These would be analogous to time-averaged quantum states, which are not what the PBR theorem is about. Instead, you should think of the original distributions, which are overlapping line-segments on the x-axis, evolving in time under Liouville dynamics. Sure, they sweep out these thickened circles over time, but it is the line-segments themselves rather than the circles that are analogous to quantum states in this setting.

Ok. It is the initial value distributions (overlapping x-axis line segments) of the SHO that overlap, resulting in blurred circle overlaps as time goes on (time-averaged). Thanks, I get it. How would you represent this in their quantum physical example using |0> and |+>? Would a superposition such as |psi> = a|0> + b|+> work? (Assuming that a and b depend on lambda.) Most likely not, according to an email communication I got from Pusey, since psi is still a quantum physical state function. But since the pair state possibilities |psi1>|psi2> occur q^2 of the time, with both psi1 and psi2 mixed(?), resulting in never finding any orthogonality with any of the four entangled states they use, I wonder what they use to prove that? I get their orthogonality result when psi1 and psi2 are objectively determined as either 0 or +, but I fail to get what they mean, by example, when the q^2 overlap occurs. It seems a little like hand waving to me.

Do you understand this? If so, what do we use for psi1 and psi2, q^2 of the time, to get the results that do not agree with the quantum physical predictions?

I am not sure exactly what you are asking here. The ontic state space for a quantum system will be different from phase space in general and so there does not have to be a direct mapping of quantum states onto the sort of examples we are discussing. You might find it helpful to look at Rob Spekkens’ toy theory paper http://arxiv.org/abs/quant-ph/0401052 because that provides the motivating example of an epistemic theory and it does support a limited kind of superposition/interference. Note that it is not quantum theory because it is discrete and does not violate Bell inequalities, but it should give you an intuition for how we might expect ontic and epistemic states to look in a psi-epistemic theory.

I am also not sure what your query about the “q^2 overlap” is. There are no mixed states involved in the proof. Everything is pure but we have two systems. There is a little bit of handwaving involving sets of measure zero, but this is fixed in the appendix that deals with errors. Alternatively one can just consider the modal properties of the quantum and ontic states (i.e. what they deem possible/impossible) rather than formulating things in terms of probabilities, which also avoids this problem. In any case, I am not sure whether this is what you are referring to, so perhaps you can clarify the question.

Calosi et al.’s paper seems clear to me, so perhaps I’ll follow it in the final paper I am preparing for the JSE.

Ok. Sorry for the confusion. Let me repeat. From your point of view, it is the initial value distributions (the ranges of starting initial positions of the SHO) that overlap, resulting in blurred circle overlaps as time goes on (what you call time-averaged?). PBR use the same phase space analogy in their paper (which is where I got the idea in the first place), so I fail to see just what you mean by not using a time-averaged distribution here.

What would correspond to a simple HV distribution in the quantum physical case? How would you represent lambda in their quantum physical example using their |0> and |+>? I proposed to Pusey that a simple product of superpositions such as |psi> = (a|0> + b|+>)(c|0> + d|+>) would give nonzero projections onto any of the entangled |xi> states they use. (Assuming that a, b, c, and d depend on lambda.)

To clarify their argument, I will use their example of a HV model, modified and augmented to make it more explicit: if 0 ≤ lambda < ½, the state is |0>; if ½ ≤ lambda ≤ 1, the state is |+>. On the other hand, suppose lambda does not always yield a specific quantum wave function |+> or |0>, i.e., some values of lambda are compatible with either |0> or |+>: if 0 ≤ lambda < ½(1−q), where 0 < q < 1, the state is |0>; if ½(1+q) < lambda ≤ 1, the state is |+>; but if ½(1−q) ≤ lambda ≤ ½(1+q), the state is either |0> or |+>. E.g., we could have a mixed quantum wave function, |psi> = a|0> + b|+>, where a and b are phase factors dependent on lambda and q. Given a uniform distribution of lambda, the mixed quantum wave function would occur q of the time. Consequently the mixed |psi> is epistemic and not real (ontic); it can only describe our incomplete (statistical) knowledge of the state.
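For concreteness, here is a minimal numerical sketch of the model the commenter describes; the band structure below (lambda uniform on [0, 1], with a middle band of width q compatible with either preparation) is one reading of the comment, not taken from PBR:

```python
import random

# Hidden-variable model with overlap: lambda uniform on [0, 1].
# Bands (an assumption based on the comment above):
#   lambda <  (1-q)/2  -> compatible only with |0>
#   lambda >  (1+q)/2  -> compatible only with |+>
#   otherwise          -> compatible with EITHER preparation
def compatible(lmbda, q):
    if lmbda < 0.5 * (1 - q):
        return {"0"}
    if lmbda > 0.5 * (1 + q):
        return {"+"}
    return {"0", "+"}        # overlap region of width q

q = 0.2
n = 100_000
rng = random.Random(1)

# A single system lands in the overlap band ~q of the time; two independent
# systems both land in it ~q^2 of the time (the case PBR's argument targets).
both = sum(len(compatible(rng.random(), q)) == 2 for _ in range(n)) / n
pairs = sum(len(compatible(rng.random(), q)) == 2 and
            len(compatible(rng.random(), q)) == 2 for _ in range(n)) / n
print(both, pairs)           # approximately q and q^2, i.e. ~0.2 and ~0.04
```

This only sets up the overlap; PBR's theorem is the statement that no assignment of measurement probabilities to the overlap band can reproduce the quantum predictions for their entangled measurement.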

Most likely my argument is wrong, according to an email communication I got from Pusey, since psi is still a quantum physical state function. I still don’t see what’s wrong with my argument since |psi> is not pure (but maybe I err here). But since the pair state possibilities |psi1>|psi2> occur q^2 of the time with both psi1 and psi2 mixed(?) resulting in never finding any orthogonal projections with any of the four entangled states |xi> they use, I wonder what they use to prove that? I get their orthogonal result when psi1 and psi2 are objectively determined as either 0 or +, but I fail to get what they mean, by a simple example such as mine, when the q^2 overlap occurs. It seems a little like hand waving to me.

Do you understand this? If so, following my q distributions, what do we use for psi1 and psi2, q^2 of the time, to get the results that do not agree with the quantum physical predictions?

I hope I made this clearer and I appreciate your willingness to correspond with me.



I am not too sympathetic with this kind of analysis. It would seem odd to analyse the measurement problem without an actual analysis of what geiger counters, photo-emulsions, and cloud chambers actually do when they measure. Yet any such analysis will leave open the possibility that Nature only approximately obeys the three QM axioms about observables, so that proof by contradiction is just an inadequate tool. We already knew from Darwin and Fowler that a stochastic model, Brownian motion, could be an approximation to a deterministic reality. No amount of abstract, axiomatic analysis can lead to progress unless it takes into account the physics of amplification that underlies all measurements, as per stray comments by Schwinger (Quantum Brownian Motion) and Feynman (Path Integrals and Quantum Mechanics).

But we are not trying to analyse the measurement problem. We are asking a different question about whether or not the wavefunction has to be thought of as real. One would not normally criticize Bell’s theorem on the grounds that it does not contain a complete theory of measurement, and one should think of this result as more along the lines of something like that than as an assault on the measurement problem.

I am sympathetic to the idea that we need to understand what is going on in the measurement process in order to fully understand quantum theory. However, that does not mean that we should have a myopic focus on solving the measurement problem as the only interesting project in quantum foundations. We can also ask and answer other questions about the internal structure of the theory that are interesting in their own right.

“retrocausal influences”

Well, we already know that all times, past and future, are real: http://en.wikipedia.org/wiki/Rietdijk%E2%80%93Putnam_argument

The Rietdijk-Putnam argument seems to imply superdeterminism (e.g. the fact that the measurement result is real prior to the experimenter’s “choosing” of the conditions of the experiment). The lack of counterfactual definiteness is apparently yet another means to defuse Bell’s theorem. And it is hard not to imagine retrocausal influences when an in-progress experiment is already bound to yield fixed results.

Well, some people dispute the Rietdijk-Putnam argument, but as it happens I am a believer in the block universe.

I have heard the “lack of counterfactual definiteness” loophole from several people, but I have a hard time understanding what it is supposed to mean. Bell’s theorem does not assume that unperformed measurements have definite outcomes. In Bell’s original proof, counterfactual definiteness is not assumed, but rather derived from the fact that the singlet state has perfect anti-correlations. In later proofs, such as CHSH, even this is not assumed. The measurement results need not come into existence until the measurement is actually made. They just have to have well-defined probabilities that satisfy local causality. Because of this, I have a hard time understanding where the loophole is supposed to be here.
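The point can be made concrete with a short calculation: the CHSH quantity uses only the correlations that quantum theory predicts for the singlet state, E(a, b) = −cos(a − b), with no assumption that unperformed measurements have outcomes (the angle choices below are the standard ones for maximal violation):

```python
import math

# CHSH needs only well-defined probabilities, not pre-existing outcomes.
# Any locally causal model must satisfy |S| <= 2; the singlet state's
# correlations E(a, b) = -cos(a - b) violate that bound.
def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2             # Alice's two measurement settings
b1, b2 = math.pi / 4, 3 * math.pi / 4 # Bob's two measurement settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))                          # 2*sqrt(2) ≈ 2.83 > 2
```

Nothing in this computation assigns a value to a measurement that was not performed, which is why the "counterfactual definiteness" loophole is hard to locate.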

This is slightly off-topic but the statement ” Bell’s theorem implies that they have to be nonlocal, which is not great if we want to maintain Lorentz invariance” above does not really pose an impediment anymore for the GRWf theory based on Prof Tumulka’s work, correct? http://arxiv.org/pdf/quant-ph/0602208.pdf

Yes. I have to say I don’t find the flash ontology particularly plausible, but this does show that Lorentz invariance is in some sense possible.


If this is true, what are the implications for technology? Are there any clear applications?

My first reaction to this question is, “who cares?” We have found a genuinely new fact about reality as described by quantum theory. Isn’t that exciting enough for you? It is a piece of basic research, pure and simple, and basic research ought not to have to constantly justify itself in terms of practical applications. People obviously should think about practical applications, but it should not be a prerequisite for finding something interesting. Just as we should not judge the worth of an orchestra primarily in terms of how many tourist dollars it brings to the city, we should not judge basic research primarily in terms of the future economic and social benefits it might bring. Knowledge is worthwhile for its own sake.

Another reason why we should not fixate on technological applications is that it may be a really long time until it becomes clear whether or not there are any really useful ones. To use a relevant example, Bell’s theorem was developed in 1964, and viewed as an esoteric result at the time. It was only in the 1990s that it was realized that it has applications to secure cryptography and to the generation of random numbers, and only in the past few years that the security of those protocols has been established rigorously. Thus, even if you don’t accept the knowledge-for-its-own-sake argument, you will throw the baby out with the bathwater if you insist that technological applications should be forthcoming immediately. You have to let researchers play their seemingly abstract and useless games and only judge technological worth in the very long term.

That said, there are connections between results about the reality of the quantum state and quantum communication complexity. This is particularly evident in Montina’s work (see http://arxiv.org/abs/1412.1723), which is along different lines from the PBR theorem, but there has also been recent work on communication complexity that is more closely related to PBR: http://arxiv.org/abs/1407.8217. However, although these results are in some sense “more applied”, it would be madness to view them as the primary motivation for studying the reality of the wavefunction. Simulating quantum channels using classical information and engaging in contrived quantum games is hardly likely to be something we want to do on a daily basis. They may, by a long chain of reasoning and much future work, eventually form the basis of something interesting that we might want to do with quantum technology, but I would caution against viewing this as the main motivation.

I might take it even further and urge that one of the primary implications/applications of quantum technology is to make comprehension, such as the PBR or Hardy’s theorems, *more* essential and urgent. Comprehension of quantum reality (to the extent possible) is the end, not the means.

Thank you for the reply. A better way to ask my question might be: with this understanding of physics:

“The Born rule … is a law of quantum mechanics which gives the probability that a measurement on a quantum system will yield a given result. … The Born rule is one of the key principles of quantum mechanics.”

How do I change my perception to accommodate this result?

“reproducing the Born rule within a realist theory. The question is whether a scientific realist can interpret the quantum state as an epistemic state (state of knowledge) or whether it must be an ontic state (state of reality).”

I don’t get it. The Born rule is a principle of quantum theory. It remains so both before and after learning about theorems on the reality of the wavefunction. After all, that the model should reproduce the quantum mechanical predictions is one of the assumptions of all such theorems.

Didn’t Bell’s inequality already rule out Option 1? Did I misunderstand what was implied by Bell’s inequality?

YWeij: No, and yes. Bell’s theorem only rules out local theories. Theories in which there are superluminal influences are still allowed.