Posting has been light of late. I would like to say this is due to the same sort of absorption that JoAnne has described over at Cosmic Variance, but in fact my attention span is currently too short for that; it has more to do with my attempts to work on three projects simultaneously. In any case, a report of an experiment on quantum foundations in Nature cannot possibly go ignored for too long on this blog. See here for the arXiv eprint.

What Gröblacher et al. report on is an experiment showing violations of an inequality proposed by Leggett, aimed at ruling out a class of nonlocal hidden-variable theories, whilst simultaneously violating the CHSH inequality, so that local hidden-variable theories are also ruled out in the same experiment. This is of course subject to the usual caveats that apply to Bell experiments, but let’s grant the correctness of the analysis for now and take a look at the class of nonlocal hidden-variable theories that are ruled out.
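As a quick reminder of the local bound being violated, here is a minimal numerical sketch (not from the paper; the settings and the singlet correlation function E(a,b) = -cos 2(a-b) for photon polarization are standard textbook choices) showing that the quantum prediction for CHSH reaches 2√2, beyond the local hidden-variable bound of 2:

```python
import math

def E(theta_a, theta_b):
    # Quantum correlation for polarization measurements on the
    # photon singlet state: E = -cos(2(theta_a - theta_b)).
    return -math.cos(2 * (theta_a - theta_b))

# Standard CHSH settings (in radians) that maximize the violation.
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ≈ 2.828, i.e. 2*sqrt(2) > 2, the CHSH bound
```

Any local hidden-variable model is constrained to |S| ≤ 2, so this gap is what the CHSH part of the experiment tests.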

It is well-known that Bell’s assumption of locality can be factored out into two conditions.

- Outcome independence: the outcome of the experiment at site A does not depend on the outcome of the experiment at site B.
- Parameter independence: the outcome of the experiment at site A does not depend on the choice of detector setting at site B.
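In the standard notation (a sketch of the usual textbook formalization, writing λ for the hidden variable, a, b for the settings at the two sites and A, B for the outcomes), the two conditions read:

```latex
\begin{align}
\text{Outcome independence:} \quad & p(A \mid a, b, B, \lambda) = p(A \mid a, b, \lambda), \\
\text{Parameter independence:} \quad & p(A \mid a, b, \lambda) = p(A \mid a, \lambda),
\end{align}
```

and Bell locality is their conjunction, i.e. the factorization $p(A, B \mid a, b, \lambda) = p(A \mid a, \lambda)\, p(B \mid b, \lambda)$.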

Leggett has proposed to consider theories that maintain the assumption of outcome independence, but drop the assumption of parameter independence. It is worth remarking at this point that the attribution of fundamental importance to this factorization of the locality assumption can easily be criticized. Whilst it is usual to describe the outcome at each site by ±1 this is an oversimplification. For example, if we are doing Stern-Gerlach measurements on electron spins then the actual outcome is a deflection of the path of the electron either up or down with respect to the orientation of the magnet. Thus, the outcome cannot be so easily separated from the orientation of the detector, as its full description depends on the orientation.

Nevertheless, whatever one makes of the factorization, it is the case that one can construct toy models that reproduce the quantum predictions in Bell experiments by dropping parameter independence. Therefore, it is worth considering what other reasonable constraints we can impose on theories when this assumption is dropped. Leggett’s assumption amounts to assuming that the hidden variable states in the theory can be divided into subensembles, in each of which the two photons have a definite polarization (which may however depend on the distant detector setting). The total ensemble corresponding to a quantum state is then a statistical average over such states. This is the class of theories that has been ruled out by the experiment.

This is all well and good, and I am certainly in favor of any experiment that places constraints on the space of possible interpretations of quantum theory. However, the experiment has been sold in some quarters as a “refutation of nonlocal realism”, so we should consider the extent to which this is true. The first point to make is that there are perfectly good nonlocal realistic models, in the sense of reproducing the predictions of quantum theory, that do not satisfy Leggett’s assumptions – the prime example being Bohmian mechanics. In the Bohm theory photons do not have a well-defined value of polarization, but instead it is determined nonlocally via the quantum potential. Therefore, if we regard this as a reasonable theory then no experiment that simply confirms the predictions of quantum theory can be said to rule out nonlocal realism.

Apropos of Bohmian mechanics: Why would Nature use variables and then make sure that they cannot be measured? One might as well posit strictly unobservable elastic strings to explain Newton’s law of gravity. The only sensible reason why so-called “hidden variables” are hidden is that they do not exist. To be is to be measured. Variables have values only if, only when, and only to the extent that they are actually measured (that is, their values are indicated by some actual event or state of affairs). If the experiment by Simon Gröblacher, Tomasz Paterek, Rainer Kaltenbaek, Časlav Brukner, Marek Żukowski, Markus Aspelmeyer, and Anton Zeilinger isn’t proof enough… If nonlocal realism cannot be ruled out, then it belongs in the same category as those elastic strings.

“Leggett has proposed to consider theories that maintain the assumption of outcome independence, but drop the assumption of parameter independence. It is worth remarking at this point that the attribution of fundamental importance to this factorization of the locality assumption can easily be criticized. Whilst it is usual to describe the outcome at each site by ±1 this is an oversimplification. For example, if we are doing Stern-Gerlach measurements on electron spins then the actual outcome is a deflection of the path of the electron either up or down with respect to the orientation of the magnet. Thus, the outcome cannot be so easily separated from the orientation of the detector, as its full description depends on the orientation.”

I agree, but if one thinks about the question of “where” the information is located in a nonlocal hidden variable theory, then I think this separation is meaningful, right?

“Why would Nature use variables and then make sure that they cannot be measured?”

I disagree. My computer stores all sorts of information in forms I never see. When I’m programming in, say, Java, the Java universes I create seem as viable and well-running as any other universe, even though the creatures in those universes never have access to this hidden information.

But of course this is my own personal bias!

It’s a good question. IF I believed in Bohmian mechanics (and the emphasis is on IF) then I would probably believe in Valentini’s version, which doesn’t have this problem, i.e. the universe started out in a state where these variables could be measured, but has relaxed to “quantum equilibrium”. This is analogous to thermal equilibrium, wherein you also can’t measure every quantity associated with a system.

For me the Bell experiments are enough to rule out “reasonable” hidden variable theories, i.e. I would like fundamental Lorentz invariance (or diffeomorphism invariance at the GR level) in my theory. This means I’m going to have to adopt a very bizarre ontology, and the whole game is to figure out what this should be. On the other hand, if I were to advocate nonlocal hidden variables, then I see no reason to single out Leggett’s class of theories as the most compelling. Of course, it is interesting that one can rule out some class of nonlocal hidden-variable theories by experiment, and so I think the work is interesting, but I don’t think it significantly changes the interpretational landscape.

I’m not entirely sure what this means because most of the toy models are pretty unrealistic, i.e. you somehow imagine the photons are carrying around a list of variables with them. “Where” the information is located does not make much sense to me unless we are talking about a realistic physical model. On the other hand, I agree that the separation is meaningful in some contexts, and particularly when we are talking about information processing. For example, it was useful in the Barrett-Hardy-Kent proof of the resilience of quantum key distribution to superquantum eavesdroppers.

What exactly is the uncertainty relation for your Java universe?

Hello, I just stumbled upon this article and I found it interesting. I’m a senior in high school, with a very limited knowledge of quantum physics, but I have a few random questions about this quote.

“To be is to be measured.”

The ability to be measured is the ability to be, in other words?

Does “thermal equilibrium” theorize that something can exist without the ability to be measured?

Thank you in advance.

Where does that quote come from? In any case, it is not justified by quantum theory. It is true that quantum theory only predicts probabilities rather than telling you the exact outcome that will occur in an experiment. Some people jump to hasty conclusions when they learn about this and start spouting a lot of pseudo-philosophical/spiritual nonsense, including a good many physicists who should know better. The fact of the matter is that there are consistent interpretations of quantum theory which prove that many of the weird things you’ll hear people saying about quantum theory are not inevitably true.

Thermal equilibrium does not have much to do with measurement, although there are certain similarities between the process of decoherence, which is relevant for understanding measurement, and the approach to thermal equilibrium. Thermal equilibrium is a particular distribution of physical variables (e.g. positions and velocities) appropriate when you have a large number of degrees of freedom, e.g. a large number of particles, all interacting with each other. It explains why thermodynamics can be used to relate the macroscopic variables, e.g. pressure, volume and temperature, of such a system. The canonical example is a container full of gas, which can usually be thought of as a large number of molecules distributed according to the Maxwell-Boltzmann law.
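To make the last point concrete, here is a small sketch (all units, constants and the sample size are illustrative assumptions, not from the comment) that samples molecular velocities from the Maxwell-Boltzmann distribution and checks the equipartition result that the mean kinetic energy per molecule is (3/2) k_B T:

```python
import math
import random

random.seed(0)
k_B = 1.0   # Boltzmann constant in natural units (illustrative choice)
T = 2.0     # temperature
m = 1.0     # molecular mass
N = 200_000

# In the Maxwell-Boltzmann distribution, each velocity component is
# Gaussian with variance k_B*T/m.
sigma = math.sqrt(k_B * T / m)
mean_ke = sum(
    0.5 * m * (random.gauss(0, sigma)**2
               + random.gauss(0, sigma)**2
               + random.gauss(0, sigma)**2)
    for _ in range(N)
) / N

print(mean_ke)  # ≈ 1.5 * k_B * T, i.e. (3/2) k_B T per molecule
```

The macroscopic temperature is thus just a parameter of the distribution over the microscopic variables, which is the sense in which thermodynamic quantities are related to the underlying degrees of freedom.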

“To be is to be measured.”

This means that no property or value is possessed unless its possession, by a system or by an observable, is indicated by (or can be inferred from) an actual event or state of affairs — a “measurement” in the broadest sense of the term. It means that a system or observable has a property or value only if, only when, and only to the extent that it is actually “measured” (in the broadest sense of the term). This is not the only possible way of making sense (or nonsense, depending on your ideology) of the mathematical formalism of quantum mechanics, but it’s certainly better than stories about what happens or is the case between measurements, since such stories can neither be confirmed nor ruled out empirically, however ludicrous they may be. It is also in excellent agreement with all existing experiments and analyses of quantum-mechanical probability assignments.

Thanks for your analysis of Leggett’s paper and the experimental paper, Matt. We also discussed these in my group recently, and I thought I’d share my thoughts.

To better understand Leggett’s restriction, I had to generalise it from photon polarization to arbitrary systems and observables. Using (as in Bell’s convention) a,b for observables (i.e. settings) and A,B for outcomes, and c for the preparation procedure, Leggett’s condition in general should be that

p(AB|abc) = \int d\psi \int d\phi\, p(\psi,\phi|c) \int d\lambda\, p(\lambda|\psi,\phi)\, p(A|\lambda,a,b)\, p(B|\lambda,a,b)

where the ps on the RHS are arbitrary probability distributions, restricted by the following two conditions:

\int d\lambda p(\lambda|\psi,\phi) p(A|\lambda,a,b) = P(A|a\psi)

\int d\lambda p(\lambda|\psi,\phi) p(B|\lambda,a,b) = P(B|b\phi)

Here P(A|a\psi) is the *Quantum Mechanical* probability for the outcome A given a measurement of a on some pure state parametrized by \psi. P(B|b\phi) is defined similarly. In the photon case, these are all polarization vectors, and P(A|a\psi) equals the mod-square of the dot product of the vectors a and \psi.

As Matt said, the functional dependence of p(A|\lambda,a,b) and p(B|\lambda,a,b) shows that Leggett is considering a class of theories that are outcome-independent, but not parameter independent. The two conditions Leggett places are necessary to get something different from QM, since without them the first equation just describes an arbitrary (outcome-independent) nonlocal hidden variable theory, which can always replicate any QM prediction.

If you stare at Leggett’s two conditions for long enough you might convince yourself that they are reasonable. However, it must be the case that Leggett does NOT require that

\int d\lambda\, p(\lambda|\psi,\phi)\, p(A|\lambda,a,b)\, p(B|\lambda,a,b) = P(A|a\psi) P(B|b\phi)

otherwise Leggett’s first equation just reduces to the assumption of local realism (i.e. Bell-locality). I think that once one realises that this last equation must NOT hold, the reasonableness of Leggett’s two conditions diminishes considerably.

My conclusion: Leggett is clever to have found reasonable-looking equations that give a restriction weaker than local realism but stronger than QM. However, on closer examination there is nothing compelling about his assumptions. The class of nonlocal hidden variables the experiment rules out doesn’t really tell us much about the world as far as I can see. All of the nonlocal hidden variable theories which people take seriously (e.g. Bohmian mechanics) are *interpretations* of QM, so they can never be ruled out by an experiment that agrees with QM.

p.s. Sorry for not formatting the above correctly. Believe it or not, it is the first time I have submitted a blog comment. Also, I think I meant modulus, not mod-squared.