I recently wrote an article (pdf) for The Quantum Times (Newsletter of the APS Topical Group on Quantum Information) about the PBR theorem. There is some overlap with my previous blog post, but the newsletter article focuses more on the implications of the PBR result, rather than the result itself. Therefore, I thought it would be worth reproducing it here. Quantum types should still download the original newsletter, as it contains many other interesting things, including an article by Charlie Bennett on logical depth (which he has also reproduced over at The Quantum Pontiff). APS members should also join the TGQI, and if you are at the March meeting this week, you should check out some of the interesting sessions they have organized.

Note: Due to the appearance of this paper, I would weaken some of the statements in this article if I were writing it again. The results of the paper imply that the factorization assumption is essential to obtain the PBR result, so this is an additional assumption that needs to be made if you want to prove things like Bell’s theorem directly from psi-ontology rather than using the traditional approach. When I wrote the article, I was optimistic that a proof of the PBR theorem that does not require factorization could be found, in which case teaching PBR first and then deriving other results like Bell as a consequence would have been an attractive pedagogical option. However, due to the necessity for stronger assumptions, I no longer think this.

OK, without further ado, here is the article.

## PBR, EPR, and all that jazz

In the past couple of months, the quantum foundations world has been abuzz about a new preprint entitled “The Quantum State Cannot be Interpreted Statistically” by Matt Pusey, Jon Barrett and Terry Rudolph (henceforth known as PBR). Since I wrote a blog post explaining the result, I have been inundated with more correspondence from scientists and more requests for comment from science journalists than at any other point in my career. Reaction to the result amongst quantum researchers has been mixed, with many people reacting negatively to the title, which can be misinterpreted as an attack on the Born rule. Others have managed to read past the title, but are still unsure whether to credit the result with any fundamental significance. In this article, I would like to explain why I think that the PBR result is the most significant constraint on hidden variable theories that has been proved to date. It provides a simple proof of many other known theorems, and it supercharges the EPR argument, converting it into a rigorous proof of nonlocality that has the same status as Bell’s theorem. Before getting to this though, we need to understand the PBR result itself.

### What are Quantum States?

One of the most debated issues in the foundations of quantum theory is the status of the quantum state. On the ontic view, quantum states represent a real property of quantum systems, somewhat akin to a physical field, albeit one with extremely bizarre properties like entanglement. The alternative to this is the epistemic view, which sees quantum states as states of knowledge, more akin to the probability distributions of statistical mechanics. A psi-ontologist (as supporters of the ontic view have been dubbed by Chris Granade) might point to the phenomenon of interference in support of their view, and also to the fact that pretty much all viable realist interpretations of quantum theory, such as many-worlds or Bohmian mechanics, include an ontic state. The key argument in favor of the epistemic view is that it dissolves the measurement problem, since the fact that states undergo a discontinuous change in the light of measurement results does not then imply the existence of any real physical process. Instead, the collapse of the wavefunction is more akin to the way that classical probability distributions get updated by Bayesian conditioning in the light of new data.

Many people who advocate a psi-epistemic view also adopt an anti-realist or neo-Copenhagen point of view on quantum theory in which the quantum state does not represent knowledge about some underlying reality, but rather it only represents knowledge about the consequences of measurements that we might make on the system. However, there remained the nagging question of whether it is possible in principle to construct a realist interpretation of quantum theory that is also psi-epistemic, or whether the realist is compelled to think that quantum states are real. PBR have answered this question in the negative, at least within the standard framework for hidden variable theories that we use for other no go results such as Bell’s theorem. As with Bell’s theorem, there are loopholes, so it is better to say that PBR have placed a strong constraint on realist psi-epistemic interpretations, rather than ruling them out entirely.

### The PBR Result

To properly formulate the result, we need to know a bit about how quantum states are represented in a hidden variable theory. In such a theory, quantum systems are assumed to have real pre-existing properties that are responsible for determining what happens when we make a measurement. A full specification of these properties is what we mean by an ontic state of the system. In general, we don’t have precise control over the ontic state so a quantum state corresponds to a probability distribution over the ontic states. This framework is illustrated below.

A hidden variable theory is psi-ontic if knowing the ontic state of the system allows you to determine the (pure) quantum state that was prepared uniquely. Equivalently, the probability distributions corresponding to two distinct pure states do not overlap. This is illustrated below.

A hidden variable theory is psi-epistemic if it is not psi-ontic, i.e. there must exist an ontic state that is possible for more than one pure state, or, in other words, there must exist two nonorthogonal pure states with corresponding distributions that overlap. This is illustrated below.

These definitions of psi-ontology and psi-epistemicism may seem a little abstract, so a classical analogy may be helpful. In Newtonian mechanics the ontic state of a particle is a point in phase space, i.e. a specification of its position and momentum. Other ontic properties of the particle, such as its energy, are given by functions of the phase space point, i.e. they are uniquely determined by the ontic state. Likewise, in a hidden variable theory, anything that is a unique function of the ontic state should be regarded as an ontic property of the system, and this applies to the quantum state in a psi-ontic model. The definition of a psi-epistemic model as the negation of this is very weak, e.g. it could still be the case that most ontic states are only possible in one quantum state and just a few are compatible with more than one. Nonetheless, even this very weak notion is ruled out by PBR.
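As a toy illustration (the three-element ontic space and the numbers here are my own invention, not from the PBR paper), the distinction can be phrased as a check on whether the distributions corresponding to two preparations share any support:

```python
# Hypothetical finite ontic space {0, 1, 2}; each preparation assigns a
# probability distribution over the ontic states.

def supports_overlap(p, q, tol=1e-12):
    """True if some ontic state has positive probability under both
    distributions, i.e. the model is psi-epistemic for this pair."""
    return any(pi > tol and qi > tol for pi, qi in zip(p, q))

# Psi-ontic style pair: disjoint supports, so the ontic state uniquely
# determines which quantum state was prepared.
print(supports_overlap([0.5, 0.5, 0.0], [0.0, 0.0, 1.0]))  # False

# Psi-epistemic style pair: ontic state 1 is possible for both
# preparations, so it does not reveal the quantum state.
print(supports_overlap([0.6, 0.4, 0.0], [0.0, 0.3, 0.7]))  # True
```

PBR's claim is that no model reproducing quantum statistics can ever make this check come out `True` for two distinct pure states.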

The proof of the PBR result is quite simple, but I will not review it here because it is summarized in my blog post and the original paper is also very readable. Instead, I want to focus on its implications.

### Size of the Ontic State Space

A trivial consequence of the PBR result is that the cardinality of the ontic state space of any hidden variable theory, even for just a qubit, must be infinite, in fact continuously so. This is because there must be at least one ontic state for each quantum state, and there is a continuous infinity of the latter. The fact that there must be infinitely many ontic states was previously proved by Lucien Hardy under the name “Ontological Excess Baggage theorem”, but we can now view it as a corollary of PBR. If you think about it, this property is quite surprising because we can only extract one or two bits from a qubit (depending on whether we count superdense coding) so it would be natural to assume that a hidden variable state could be specified by a finite amount of information.

Hidden variable theories provide one possible method of simulating a quantum computer on a classical computer by simply tracking the value of the ontic state at each stage in the computation. This enables us to sample from the probability distribution of any quantum measurement at any point during the computation. Another method is to simply store a representation of the quantum state at each point in time. This second method is clearly inefficient, as the number of parameters required to specify a quantum state grows exponentially with the number of qubits. The PBR theorem tells us that the hidden variable method cannot be any better, as it requires an ontic state space that is at least as big as the set of quantum states. This conclusion was previously drawn by Alberto Montina using different methods, but again it now becomes a corollary of PBR. This result falls short of saying that any classical simulation of a quantum computer must have exponential space complexity, since we usually only have to simulate the outcome of one fixed measurement at the end of the computation and our simulation does not have to track the slice-by-slice causal evolution of the quantum circuit. Indeed, pretty much the first nontrivial result in quantum computational complexity theory, proved by Bernstein and Vazirani, showed that quantum circuits can be simulated with polynomial memory resources. Nevertheless, this result does reaffirm that we need to go beyond slice-by-slice simulations of quantum circuits in looking for efficient classical algorithms.
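As a rough back-of-the-envelope sketch (the 16-bytes-per-amplitude figure is my assumption, corresponding to double-precision complex numbers), the exponential memory cost of the store-the-quantum-state method looks like this:

```python
# A dense n-qubit pure state holds 2**n complex amplitudes.

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store a dense pure state of n_qubits qubits."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (1, 10, 30, 50):
    print(f"{n:2d} qubits: {state_vector_bytes(n):.3e} bytes")
```

At 30 qubits the state vector already occupies 16 GiB. PBR says the ontic state space of the hidden variable method must be at least this large, though, as noted above, this does not settle the space complexity of simulating a single fixed measurement.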

### Supercharged EPR Argument

As emphasized by Harrigan and Spekkens, a variant of the EPR argument favoured by Einstein shows that any psi-ontic hidden variable theory must be nonlocal. Thus, prior to Bell’s theorem, the only open possibility for a local hidden variable theory was a psi-epistemic theory. Of course, Bell’s theorem rules out all local hidden variable theories, regardless of the status of the quantum state within them. Nevertheless, the PBR result now gives an arguably simpler route to the same conclusion by ruling out psi-epistemic theories, allowing us to infer nonlocality directly from EPR.

A sketch of the argument runs as follows. Consider a pair of qubits in the singlet state. When one of the qubits is measured in an orthonormal basis, the other qubit collapses to one of two orthogonal pure states. By varying the basis that the first qubit is measured in, the second qubit can be made to collapse in any basis we like (a phenomenon that Schroedinger called “steering”). If we restrict attention to two possible choices of measurement basis, then there are four possible pure states that the second qubit might end up in. The PBR result implies that the sets of possible ontic states for the second system for each of these pure states must be disjoint. Consequently, the sets of possible ontic states corresponding to the two distinct choices of basis are also disjoint. Thus, the ontic state of the second system must depend on the choice of measurement made on the first system and this implies nonlocality because I can decide which measurement to perform on the first system at spacelike separation from the second.
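For concreteness, the steering step rests on a standard identity not spelled out in the article: the singlet takes the same form in any basis, e.g.

\[ |\Psi^{-}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle|1\rangle - |1\rangle|0\rangle\bigr) = \tfrac{1}{\sqrt{2}}\bigl(|-\rangle|+\rangle - |+\rangle|-\rangle\bigr), \]

where \(|\pm\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2}\). Measuring the first qubit in the \(\{|0\rangle, |1\rangle\}\) basis collapses the second to \(|1\rangle\) or \(|0\rangle\), while measuring in \(\{|+\rangle, |-\rangle\}\) collapses it to \(|+\rangle\) or \(|-\rangle\): four possible pure states in total.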

### PBR as a proto-theorem

We have seen that the PBR result can be used to establish some known constraints on hidden variable theories in a very straightforward way. There is more to this story than I can possibly fit into this article, and I suspect that every major no-go result for hidden variable theories may fall under the rubric of PBR. Thus, even if you don’t care a fig about fancy distinctions between ontic and epistemic states, it is still worth devoting a few braincells to the PBR result. I predict that it will come to be viewed as the basic result about hidden variable theories, and that we will end up teaching it to our students even before such stalwarts as Bell’s theorem and Kochen-Specker.

### Further Reading

For further details of the PBR theorem see:

- My blog post
- The PBR paper: M. Pusey, J. Barrett, T. Rudolph (2011). http://arxiv.org/abs/1111.3328

For constraints on the size of the ontic state space see:

- L. Hardy, Stud. Hist. Phil. Mod. Phys. 35:267-276 (2004).
- A. Montina, Phys. Rev. A 77, 022104 (2008). http://arxiv.org/abs/0711.4770

For the early quantum computational complexity results see:

- E. Bernstein and U. Vazirani, SIAM J. Comput. 26:1411-1473 (1997). http://arxiv.org/abs/quant-ph/9701001

For a fully rigorous version of the PBR+EPR nonlocality argument see:

- N. Harrigan and R. W. Spekkens, Found. Phys. 40:125 (2010). http://arxiv.org/abs/0706.2661

Copyright © 2012 Matthew Leifer. All Rights Reserved.

How can a poor person access your review of Max Schlosshauer’s new book?

I just took another look at the transfer of copyright agreement and noticed that it gives me the right to post the published version on my website, so I will do this as soon as I can manage to reset my forgotten password for the electronic journals site.

“To properly formulate the result, we need to know a bit about how quantum states are represented in a hidden variable theory. In such a theory, quantum systems are assumed to have real pre-existing properties that are responsible for determining what happens when we make a measurement.”

But isn’t it true that something like an electron possesses mass as a pre-existing property before measurement?

It is true that we can assign some definite properties, often misleadingly called “quantum numbers”, to quantum systems whenever there is a (fundamental or effective) superselection rule forbidding superpositions of different values of that property. In fact, such properties are often used to identify the system. However, the sum total of all such properties is not enough to determine what will happen in any possible measurement, e.g. a measurement of spin. The claim here is that properties exist that do influence these results, if only statistically.

“the ontic state space of any hidden variable theory, even for just a qubit, must be infinite, in fact continuously so”. Can I say here that a qubit is only a very useful engineering approximation to a more complete description of the quantum state? QFT, which is required for accurate physical description, has a state space that is (at least) countably infinite, so can we take a classical stochastic theory that has a countably infinite dimensional state space (i.e., a suitably constrained random field theory) not to be ruled out, at least on the grounds that models must be of finite dimensionality? That is, I presume you don’t rule out QFT for this reason. The PBR paper, like Bell-EPR, makes the assumption that there is a finite set of localized identifiable systems, which is not satisfied for random field theories in general.

You can say this here if you want to, but I would counter that I don’t really buy the objection. For me, quantum theory should be viewed as a generalization of classical probability theory. It is not a physical theory per se, but is a framework within which various physical theories and nonphysical theories (e.g. information theory, computation theory, etc.) can be formulated. You could equally well point out that there are really no classical mechanical systems that have discrete finite state spaces, so we should always do everything with continuous variable models and just view finite sample spaces as an “engineering approximation”. However, I would say that this is putting the cart before the horse. Probability theory is a logic of how to reason in the face of uncertainty and it comes before any consideration of the details of physics. If you want to, you can say that consideration of a finite sample space means that we are interested in some coarse-grained description of a physical system, but this makes little difference and is framed at the wrong level of abstraction. If, tomorrow, we were to find some physical system that genuinely has a finite classical state space then that would be a big deal for physics, but probability theory would be completely unchanged. Similarly, in quantum theory, you could say that whenever you introduce a finite dimensional Hilbert space it means that you are restricting attention to a finite dimensional algebra of observables, which is a coarse-graining of the “real physical” algebra of QFT. From the point of view of a generalized probability theory, it makes no difference whether this is the case or whether there are “real” finite-dimensional systems. The expectation that one should be able to construct an ontological model with a finite state space for a finite dimensional Hilbert space still stands, and PBR still implies that this cannot be so.

I would personally only take continuous and/or continuum models to be effective models for nontrivial systems. Ultimately, I think of quantum (field) theory as a sophisticated approach to signal processing, time-series analysis, where I take the signal to be as real as an experimenter takes their record of the experimental results over time to be. For example, the raw experimental data from Gregor Weihs’ experiment is two series of times when there were leading edge transitions of the signal from Alice’s (and Bob’s) CCDs, from which we can extract statistical summaries that we can compare with quantum models. We could take the view that Weihs’ record is a finite summary of an infinitely detailed signal, or not. It’s important, I think, that Weihs’ hardware implements the initial part of the signal analysis. Anyway, I’m happy to consider the cause of the signal, the experimental context, either in an instrumental way or in a moderately realistic way.

Insofar as we discuss models in which there are finite numbers of DoFs, I would take contextual models to introduce very large increases in the number of DoFs considered, from binary electron or photon states to states modeled by 10^25 (quantum) DoFs, say, by including the apparatus in the description. I take the DoFs in a model, without commitment to what the DoFs represent, arguably to be what are real for the purposes of Physics, which might be analogously and as remarkably coarse as representing a real pendulum by a damped SHO. That’s all rather cryptic, sorry, but I guess, perhaps, that we’re working in different model spaces.

You are of course free to use whatever number of degrees of freedom that you deem appropriate in your approach. The point is simply that we now have a theorem that shows that continuous degrees of freedom are necessary in any realist approach (barring unconventional ontologies like retrocausality) and theorems are better than intuitions, particularly when people disagree about the latter. If you don’t find this implication of the PBR result surprising then fair enough, but, given that there are people like me who do find it surprising, you ought to be pleased as it partially vindicates your position.

Dear Matt, I wrote a ‘short’ document about the interpretation of the PBR theorem. I have made it freely accessible on the arXiv (http://arxiv.org/abs/1203.2475). I hope this can help us to remove some misinterpretations concerning this result.

Best regards, Aurélien

I don’t really agree with your take on the PBR theorem. According to Harrigan and Spekkens, and also PBR, \(\lambda\) is supposed to be the full ontic state of the system. If the wavefunction is ontic in the model under consideration, then it is considered to be specified by \(\lambda\) and is not considered a separate variable. This is the reason why the term “ontic state” is used instead of “hidden variable state”, because the latter is often interpreted to mean variables in addition to the wavefunction. Most people would consider the wavefunction to be ontic in de Broglie-Bohm theory. I know there is some discussion in the literature of whether it should instead be regarded as nomological (lawlike), but this is not really relevant here. The fact is, even if we know the exact values of the position variables in Bohm’s theory, we will still need the wavefunction in addition to the position variables to compute the outcome probabilities for any experiment, because it is needed to find the trajectories. Anything you need to compute the final outcome probabilities, over and above the primitive ontology (beables), is considered part of the ontic state by PBR by definition. You might not like that definition, but by using it we see that one feature of Bohmian theory is actually necessary for any hidden variable theory, namely that the wavefunction is ontic (in the sense of being required to compute the probabilities of any possible experiment). Therefore, Bohmians should be pretty happy about the PBR result as it vindicates one of their assumptions.

Also, I just wanted to note that I do not understand your discussion around eq. (10). Why do you think we can always replace a qubit state with one that has equal amplitudes up to a relative phase?

So, if I am to remain true to my realist “soul” I have to accept MWI?

I just don’t see Bohm in its current form being a serious contender.

I did not mean to imply that. One could also contemplate ontologies that are not included in the Bell framework, e.g. retrocausality (broken record time).

Isn’t retrocausality a bit of a stretch?

I guess superdeterminism is also an option, but then again, I am not sure if it’s easier to believe than MWI.

I’ve never investigated retrocausality enough to be sure why it fails to capture the imagination of physicists more widely; however, a somewhat realist retrocausal understanding of particles and antiparticles in Feynman diagrams in QFT is quite common. Haag takes a retrocausal view in chapter VII of the second edition of his classic book on axiomatic QFT, “Local Quantum Physics”, for example. However, AFAIK retrocausality doesn’t have the sort of quantum computation endorsement that is quite often accorded to MWI.

Huw Price has argued eloquently for retrocausality for a number of years. You can see some of his recent papers on the topic on the arXiv. I would not say that it is a stretch, more like it is a possibility that has not attracted a lot of serious attention. Whether it can solve all of the conceptual problems with quantum theory is another matter. I am not sure about that, but I think it is worth investigating to the same extent as other alternatives have been.

I would not take the popularity of an interpretation amongst quantum computing theorists as a whole as an indication of the likelihood of success. In any case, most people in this field are in the “shut up and calculate” camp just like any other area of physics. It is just that we have a fairly vocal minority who support MWI and another vocal minority of roughly equal size who support neo-Copenhagen approaches. In any case, I think retrocausality has good potential for explaining the power of quantum computation, since whether a set of experimental outcomes can occur or not would then depend on whether there is a solution to a global constraint satisfaction problem, and it is easy to come up with instances that are NP-complete. Of course, quantum computers are thought not to be able to solve NP-complete problems, so the constraint problems that a realist, retrocausal account of quantum theory depends on must be a special subclass.

Dear Matt,

First, I agree completely with your comments concerning my Eq. 10. My ‘derivation’ works only for |alpha| = |beta|. I will correct it (and thank you for this somewhere in the manuscript). Still, this is a point of detail. My main argument is that the PBR theorem is somehow circular if we don’t realize that it applies to a very narrow class of hidden variable models: I called these models XIXth-century-like, in contrast with Bohm’s theory, which is neoclassical. The crucial error of PBR is summarized in my Eq. 12, where the transition probability xi is independent of the wave function. Due to this hypothesis PBR cannot compare states 1 and 2 as they claim and deduce that the densities must have non-intersecting support: the theorem generally collapses. If they used Eq. 16 they could not obtain their result: this demonstrates, I think, where the error of PBR is coming from. Still, the PBR theorem is, like von Neumann’s, an interesting result which shows that XIXth-century-like ontological models are different from Bohm’s model. (By the way, I think that Harrigan and Spekkens didn’t see this point, otherwise they would not have proposed this very ‘clumsy’ definition of ontic model.)

Aurelien

I don’t think you are correct. The transition probability depends on \(\lambda\) and, as I explained in my last comment, the \(\lambda\) used by PBR is the entire ontic state which can include the wavefunction if necessary. In particular, in a Bohm type model, \(\lambda\) does include the wavefunction and most people would agree that Bohmian mechanics has ontic wavefunctions (at least in the sense of “ontic” needed for PBR). Therefore, there is no need to mark the dependence on the wavefunction explicitly in the transition probability. It is there implicitly already.

Is there any good reason to dismiss the MWI in favour of a retrocausal model?

Dear Matt, the point is not whether we should or should not use the implicit notation for lambda and psi. Of course we can regroup everything under the same label. However, it is only with the explicit separation that you realize the problem: you cannot in general compare the transition probabilities when you jump from one wave function \(\psi_1\) to a second \(\psi_2\). You can only do that if these transition probabilities are independent of \(\psi\). PBR suppose this point; if we reject it (e.g., Bohm) then the PBR theorem collapses.

Ah, my apologies. I think I misunderstood the point you were trying to make. Are you saying that the transition probabilities should depend on the state that the system ends up in in addition to its initial state? If so, I think I can argue that PBR still stands, but before I get into that, is this an accurate statement of your point?

Realist,

Yes. It would potentially allow us to retain an epistemic view of quantum states without giving up on realism. I think state epistemicity has a great deal of explanatory power (see Rob Spekkens’ toy model for instance) so I am reluctant to give it up.

Also, your choice of the word “dismiss” is a little strong. Personally, I have never found the argumentation for MWI that convincing. It assumes that the Schroedinger equation is primary and that the measurement axioms are an evil that need to be eliminated. However, whether you take the Schroedinger equation or the structure of observables to be the fundamental starting point for constructing quantum theory is a matter of debate. If you take the latter view then quantum theory can be viewed as an elegant generalization of probability theory and it is the structure of states and evolutions that are derived from the structure of measurements, via Gleason’s theorem and Wigner’s theorem for example, rather than the other way round. This makes it look like the MWI advocates have entirely the wrong starting point.

This is not to say that MWI does not work. Their arguments are persuasive if you accept their starting point. It is just that I don’t think they have the unique claim to be taking the equations of quantum theory seriously at face value. For example, if you take the “observables are primary” point of view then you could make the same argument for quantum logical realism. I am not claiming that quantum logical realism is a great interpretation either, but just pointing out that taking quantum theory at face value is not as simple a matter as you might think.

Dear Matt,

No, what I said is the exact opposite of your statement: the transition probabilities depend on the initial wavefunction and the hidden variables, \(P(\text{outcome}|\psi, \lambda)\). Therefore you cannot equate these probabilities for two different wave functions \(\psi_1\) and \(\psi_2\): \(P(\text{outcome}|\psi_1, \lambda)\) is not \(P(\text{outcome}|\psi_2, \lambda)\). Since this is the key axiom of PBR, I conclude that they cannot deduce that the densities \(\rho(\lambda, \psi_1)\) and \(\rho(\lambda, \psi_2)\) have no intersecting support. Consequently, the PBR claim generally fails.

Now I am confused. Why doesn’t the inclusion of the wavefunction in \(\lambda\) avoid this issue?

OK, let’s focus on the orthogonal case since it is easy to explain in a few words. So we go back to my Eq. 4. As I said before, I use notation in which \(\lambda\) is the hidden variable; this is a convention, but I have the right to do that (it is the convention of Bell, by the way). Now, the transition probabilities used in Eq. 4 are independent of \(\psi_1\) and \(\psi_2\): this is the choice of PBR. With this choice it is easy, as you showed, to see that \(\rho_1\) and \(\rho_2\) cannot have intersecting support. However, with a different model where the transition probabilities depend on the wavefunction (my Eq. 16) you cannot do the reasoning of PBR anymore. That means that you can no longer compare the two lines of my equation 4, since the transition probabilities have no reason to obey the rule

\(P(+|\lambda, \psi_1) + P(-|\lambda, \psi_2) = 1\)!

Actually, as I said, the proof of PBR relies on the hypothesis that there is no \(\psi\) in the transition probabilities. If you accept that axiom, then I agree that from \(P(+|\lambda) + P(-|\lambda) = 1\) you will find the contradiction which is at the very heart of the PBR theorem. But if you don’t want to accept the axiom, then the PBR theorem cannot be proven.

More generally, if you use my explicit notation you will always see, even for non-orthogonal states, that the PBR theorem is not general enough to eliminate all epistemic models.

I am sorry if I am not clear enough; I hope that you will see my points.

Your most humble servant to command, Aurelien Drezet

(I took this sentence from Newton… )

If you allow the transition probabilities to depend on the wavefunction then the wavefunction is already ontic so it is game over already and there would be no point in proving the PBR theorem. If it is not ontic then how does the measuring device know what the state is if the only information it receives from the source is the ontic state? Let me say this again; the wavefunction is ontic in Bohmian mechanics, at least in the sense relevant for PBR.

You have to be prudent: clearly the wave function is an ontological structure which defines the statistics, otherwise wave-particle duality would not be possible. Still, it is also epistemic for me, since we deal with statistics in quantum mechanics; I don’t want to discuss the words or the definitions too much. I don’t think that we can learn anything constructive from that. What interests me are the models of reality which fall within the realm of the PBR theorem. I showed with certainty that only a very narrow class of hidden variable models obey PBR, and this, I think, is irreversible. I used clean Bell-like notation and introduced \(\psi\) and \(\lambda\) explicitly. This is why I found the mistake in the PBR derivation so easily. To conclude, like Bell himself, I advise anyone to study Bohm’s approach well before publishing any ‘no-go’ theorem, if you don’t want to reinvent von Neumann’s ‘no hidden variables’ proof.

This is not obviously true. Interference is possible in theories that don’t have an ontological wavefunction-like object. See the discussion of superposition in http://arxiv.org/abs/quant-ph/0401052 for example. Replacing vague intuitions about “wave particle duality” with rigorous theorems is precisely what PBR is about.

I don’t think we are going to end up in agreement on this, so there is probably not much point in continuing the argument. PBR is not a no-go result. Like Bell’s theorem itself, it simply provides a constraint on the sort of hidden variable models that can reproduce quantum theory. They must have ontic wavefunctions. Bohm’s theory is an example of such a model — it is not a counterexample. If you like Bohm’s theory then you should be happy about that.

Thanks Leifer, you make complicated issues very easy to understand.

However, are there any other good options besides retrocausality that still let you keep realism and determinism?

I sort of thought Spekkens’ toy model was ruled out by this?

Spekkens’ toy model is ruled out by a lot of things. That is why it is a “toy”.

Anything that breaks you out of the Bell framework for hidden variable theories could potentially be used to get around PBR. Spekkens is partial to the idea of “relational” theories, but I have no idea what that means concretely. Retrocausality seems like the simplest option to me, but it is really a matter of having the imagination to formulate better options.

Obviously his toy model is not a real model.

However, I thought nearly all such options were now “gone”, and the only way to retain realism would be MWI or some very nonlocal and “out of nowhere” postulates.

Dear Matt,

I think that we are at least in agreement on one point: we will not find agreement. Anyway, I would like to thank you for the long discussions. It has helped me to clarify my argument a bit (you will probably disagree with it). There are several other interesting things to discuss, like retrocausality, which is also my favorite interpretation of quantum paradoxes (Bohm’s theory is too crazy for my ‘realist’ blood). However, I think that the PBR theorem will not help us to clarify this point (since I am convinced that the result is wrong). With best regards.

Wouldn’t it be hard to explain the direction and other qualities of time if the world is retrocausal?

A lot of people think there is mind-stuff correlated closely with physical stuff. I’m curious about the implications of PBR for this dualism. Does it imply, e.g., that there are an infinite number of subjective states for even 1 qubit’s worth of information?

Is it not a problem for retrocausality that to make a prediction about tomorrow you have to wait until tomorrow?

In a retrocausal theory, what happens today depends on what happened yesterday and what will happen tomorrow. If you want to make a deterministic prediction then that could be a problem, but you can still assign probabilities that only depend on what you know about yesterday, factoring out your ignorance about tomorrow. Since QM is probabilistic, this is not necessarily a problem.
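The “factoring out your ignorance about tomorrow” step can be sketched as an ordinary marginalization over future events. This is just the law of total probability, not anything specific to the PBR paper; the labels are purely illustrative:

```latex
% Probability of today's events given only knowledge of yesterday,
% in a retrocausal model where dynamics depend on both boundary conditions:
P(\text{today} \mid \text{yesterday})
  = \sum_{\text{tomorrow}}
    P(\text{today} \mid \text{yesterday}, \text{tomorrow})\,
    P(\text{tomorrow} \mid \text{yesterday}).
```

The retrocausal dependence lives in the conditional on the right, but the prediction on the left only requires knowledge of the past, so probabilistic predictions remain possible.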

PBR does not say anything about this. It says there are an infinite number of ontic states, i.e. states that exist in reality independent of any minds, for even a qubit. It is a theorem about observer independent realist theories, i.e. exactly the sort of theories that avoid talking about minds, so it can’t really say anything about them.

Re: the direction of time — There can still be a thermodynamic arrow of time based on an asymmetry between past and future boundary conditions.

If I understand Maybe Your Baby, if X is going to retrocausally affect the past, then X must exist. But X doesn’t exist until the future. Therefore, if you are going to make a prediction about the future, you must already be in the future. But then it’s not a falsifiable prediction.

Paul,

WordPress is smart enough to know that all these people are you.

I think that retrocausality only makes sense in a “block universe” approach, in which X exists regardless of whether it is in the past or the future. See http://en.wikipedia.org/wiki/Eternalism_(philosophy_of_time)

“Skincares parity”‘s comment prompts me to suggest that retrocausality is a metaphysical idea insofar as it is used as part of a model of individual cases, whereas the experimental support for physics, since 1900, say, is only for statistical models. Experimental support for non-statistical models seems unlikely to me, though that may just be a failure of imagination (deBB falls at the same hurdle).

On the argument in “Skincares parity”‘s comment, however, future and past don’t necessarily correlate with empirical verification of models (to remove the word “prediction”, which suggests such a correlation is necessary). Taking prediction to be more valuable than verification is just experimentalists’ way of preferring some level of blindness.

I don’t have access to the experimental evidence that Everyone is Paul! Paul needs to get several computers, e-mail addresses, and/or establish a few proxies if he (wolf? grandma?) wants to deceive, although Matt’s responses are laudably engaged with the ideas in the comments, crazy or not, IMO. I had just been thinking that the names in this comment thread were approaching meltdown.

The block universe is more extravagant than contextual realism.

How do you receive messages from the future? Do you have to wear special hats?

Hi Matt,

Do you still believe that PBR directly implies non-locality, without Bell’s theorem, as I think you argued in a section of the Quantum Times article?

“It (PBR) provides a simple proof of many other known theorems, and it supercharges the EPR argument, converting it into a rigorous proof of nonlocality that has the same status as Bell’s theorem. ”

Thanks.

Yes, but this requires the factorization assumption used by PBR. At the time of writing, I was hopeful that we could prove the PBR theorem without factorization, but now I know that this is not possible. Therefore, the standard Bell-inequality arguments are still preferable as they involve one less assumption.

BTW, this is not something I “believe”, but rather something that Spekkens and Harrigan have proved.
