A couple of months ago, there was an interesting debate in the Quantum Foundations group here at PI, with the above title. Unfortunately, I missed it, but it is an interesting question given that QF is becoming increasingly popular amongst young physicists, whilst remaining a relatively obscure and controversial subject in most of the mainstream physics community. Here are four possible answers to the question:

1. The goal of QF is to correctly predict the result of an experiment for which the standard approach to QM gives the wrong result. That is, we are in the business of providing alternative theories that will eventually supersede QM. Work on things like spontaneous collapse models or nonlinear modifications to the Schroedinger equation falls into this category.

2. The goal of QF is not to contradict QM within its domain of applicability, but it should suggest possible alternative approaches in cases where we are currently uncertain how to go about applying quantum theory. The archetypal example of this is quantum gravity, although to be fair it is more common to hear foundations people give this response than to find them actually working on it. Notable exceptions are the work of Gell-Mann, Hartle, Isham and collaborators, which draws on the Consistent Histories formalism, and the recent work of Lucien Hardy.

3. The goal of QF is not to contradict QM at all, but it should suggest a variety of different ways to conceptualize the subject, suggesting new possible experiments and theories that would have been difficult to imagine without considerable insight from QF. The main example we have of this is the field of quantum information. David Deutsch arrived at quantum computing by thinking about the many-worlds interpretation, and Schumacher compression bears some similarity to the frequentist justifications of the quantum probability rule that began in Everett’s thesis. More recently, the Bayesian viewpoint of Caves, Fuchs, Schack, et al. leads to new ways of doing quantum tomography and new variants of the quantum de Finetti theorem, which have applications in quantum cryptography.

4. The goal of QF is not to bother mainstream physics at all, but to come up with the most consistent and reasonable interpretation of QM possible, involving minimal unverifiable assumptions about the nature of reality.

In my view, all four points of view can be justified. However, I think it is very useful to spell out to the rest of the world exactly what we are up to. A large portion of the physics community is skeptical about QF and, in my opinion, this is probably because they think we are all doing 1 or 4. If this were the case, I think I would agree with them, since QM has withstood a vast array of experimental tests and most of the alternatives suggested under category 1 seem contrived at best to me. It is also difficult to see what 4 could ever contribute to the rest of physics, and it is a problem that is better left to philosophers, since they are better qualified to tackle it.

To me, 2 and 3 seem like the most promising avenues of research for physicists who are interested in the field.

Hi Matt,

You say “The goal of QF is to correctly predict the result of an experiment for which the standard approach to QM gives the wrong result.” Can you please give specific examples of experiments where the standard approach to QM gives the wrong results?

Thanks,

Michael

I can’t be 100% sure what I meant when I wrote this seven years ago, but I’ll give it a shot. Firstly, you are slightly misquoting me, since I offered that as only one of four possibilities for the goal of QF and one that I considered the least valuable at that. Obviously, QM is enormously successful and there are no experiments that have been performed so far that contradict it. That is not what I meant. Instead, the idea is that QM will fail in experiments that are only slightly different from what we have done so far. For example, spontaneous collapse theories predict that there is a limit to the extent that we can maintain coherent superpositions of a large number of particles in two spatially separated locations. This limit is supposed to be fundamental and not due to environmental decoherence. Existing experiments with macroscopic superpositions, such as SQUID rings and BECs, don’t contradict this because they do not involve a significant difference in position of the terms in the superposition, and the collapse mechanism is supposed to depend on this. However, future experiments with mechanical oscillators designed to test Penrose’s ideas would test this.

There is also a slightly bizarre suggestion due to Adrian Kent that Bell experiments might fail to violate a Bell inequality if the outcomes of the measurement are coupled to a difference in position of very massive objects and this is done quickly enough that a signal could not travel to the other wing of the experiment before the mass has been moved. This is based on a loophole in Bell’s theorem to do with the idea that collapse might not occur until the results of the measurements are brought together and compared. A related suggestion due to Scarani and Suarez is that Bell violation will fail if the experiment is done with moving detectors such that the measurement of the other particle happens first according to the frames in which both of the measurement devices are moving, i.e. neither Alice nor Bob believe they are making the first measurement according to their own frames. This is based on the rather naive way of talking about collapse that we often use, in which we say that Alice’s measurement causes the collapse at Bob’s side, or vice versa. Even if this is the case, I find the idea rather implausible because there is no reason why collapse should occur in the frame of the measuring device as opposed to some other natural frame, like the frame of the particle itself.

One can find several similar types of suggestion in the literature. As far as I am concerned, they are all highly implausible, although they will lead to testing of quantum predictions in situations in which they have not been tested so far, which is a good thing. Such experiments may turn out to be technologically useful. However, if this is what most physicists think we are doing, then I am unsurprised that Lubos Motl calls everyone who works on quantum foundations an “anti-quantum zealot”. As I said in the post, I find goals 2 and 3 more promising.

Thank you for the detailed answer. You say “Obviously, QM is enormously successful and there are no experiments that have been performed so far that contradict it.” But Peres gives an example where the conventional interpretation of QM gives a wrong retrodiction [Peres, Asher. “Time asymmetry in quantum mechanics: a retrodiction paradox.” Physics Letters A 194, no. 1 (1994): 21-25]. Penrose gives another example [Penrose, The Road to Reality, pp. 819-823]. Dyson gives several more examples [Dyson, Freeman J. “Thought-experiments in honor of John Archibald Wheeler.” Science and Ultimate Reality (2004): 72-89.] Do you have rebuttals?

I don’t have access to the book in which Dyson’s paper appears, so I can’t address his arguments specifically. The Peres and Penrose arguments are examples of ambiguities to do with how to use the quantum formalism retrodictively. They are not examples of experiments that contradict quantum theory because the conventional formalism of quantum theory is designed to only be used predictively. You are supposed to evolve quantum states forward in time and apply the Born rule, projection postulate etc. to obtain classical probabilities. Once you have those classical probabilities, you can use the rules of classical probabilistic inference, such as Bayes’ theorem, to obtain retrodictive probabilities or any other kind of conditional inferences that you like. The results of all such inferences are in agreement with current experiments. I don’t think that Peres or Penrose are disputing this.

The question they are addressing is whether there is an appropriate way to use the quantum formalism itself retrodictively, rather than first computing the classical probabilities and then inverting them. Peres’ argument seems designed as an argument against a realist reading of the two-state vector formalism of Aharonov et al. On this point I agree with him. I think you quickly get into problems if you think that the results obtained in a pre- and post-selected experiment are somehow already “real” in between the pre- and post-selection. However, he is not saying that the probabilities obtained from quantum theory in those experiments are wrong.

Penrose is trying to make an argument about the lack of time symmetry in the measurement process by arguing that if you apply the same reasoning backwards in time that we usually use in the forwards direction, then you get an incorrect result. Again, the conventional formalism only mandates inferences forwards in time, so this is not actually a contradiction between quantum theory and experiment. Nevertheless, I believe that Penrose’s argument is wrong because he has failed to correctly describe the time-reverse of the experiment under consideration. If you run the experiment back in time, then there has to be a possibility for the photon to come from two places in superposition: the ceiling or the detector. These two components then interfere at the beamsplitter, resulting in a single beam going back to the laser, so you get that the photon came from the laser with probability 1, as you should. It is a question of asymmetry between the events that you choose to condition on in the forward and reverse versions of the experiment. Even in classical physics, the issue of how to correctly time-reverse an experiment is subtle. You need to carefully ensure that you impose the correct time-reversed boundary conditions in addition to the time-reversed dynamics. It is easy to introduce apparent asymmetries by hand without noticing that you are doing it, and even the greatest minds of physics have fallen into this trap on occasion. Huw Price essentially wrote a whole book about this, which I recommend.

Although retrodictive formalisms go beyond the conventional understanding of quantum theory, I believe they are useful and, when done properly, do not contradict quantum theory. For my take on how to do this, see this paper. However, I would put this type of work definitively in categories 2 and 3. It is not an attempt to refute quantum theory, but an attempt to reformulate it in such a way that certain aspects of the theory, including the time-symmetry between prediction and retrodiction, become more clear.