Tag Archives: quantum

Q+ Hangout: Francesco Buscemi

Here are the details for the next Q+ hangout.

Date: 28th August 2012

Time: 2pm British Summer Time

Speaker: Francesco Buscemi (Nagoya University)

Title: All entangled quantum states are nonlocal: equivalence between locality and separability in quantum theory

Abstract:

In this talk I will show how, by slightly modifying the rules of nonlocal games, one can prove that all entangled states violate local realism.

As is well known, Bell inequalities, which are used to test the violation of local realism, can be equivalently reformulated in terms of nonlocal games (namely, cooperative games with incomplete information) played between a referee and two (or more) players, the players being separated so as to make any form of communication between them impossible during the game. Quantum nonlocality is the property of quantum states that allows players sharing them to win nonlocal games more frequently than the assumption of local realism would allow.

However, as Werner proved in 1989, not all quantum states enable such a violation of local realism. In particular, Werner showed the existence of quantum states that cannot be created locally (the so-called “entangled” states) and yet do not allow any violation of local realism in nonlocal games. This fact has since been considered an unsatisfactory gap in the theory, attracting a considerable amount of attention in the literature.

In this talk I will present a simple proof of the fact that all entangled states indeed violate local realism. This will be done by considering a new, larger class of nonlocal games, which I call “semiquantum,” differing from the old ones merely in that the referee can now communicate with the players through quantum channels, rather than being restricted to classical ones, as was tacitly assumed before. I will then prove that, in semiquantum nonlocal games, one quantum state always provides better payoffs than another if and only if the latter can be obtained from the former by local operations and shared randomness (LOSR). The main claim will then follow as a corollary.

The new approach not only provides a clear theoretical picture of the relation between locality and separability, but also suggests, thanks to its simplicity, new experimental tests able in principle to verify the violation of local realism in situations where previous experiments would fail.

Based on http://arxiv.org/abs/1106.6095
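The nonlocal games described in the abstract have the CHSH game as their canonical example. As an illustration of the local-realist bound (my own sketch, not part of the talk), the optimum over local deterministic strategies can be found by brute force; shared randomness cannot improve on the best deterministic strategy, by convexity:

```python
from itertools import product

def win_prob(alice, bob):
    """Average winning probability of the CHSH game for deterministic
    strategies alice, bob: tuples giving each player's output per input bit."""
    wins = 0
    for x, y in product((0, 1), repeat=2):
        # The players win iff a XOR b equals x AND y.
        wins += (alice[x] ^ bob[y]) == (x & y)
    return wins / 4

# Maximise over all 4 x 4 deterministic local strategies.
best = max(win_prob(a, b)
           for a in product((0, 1), repeat=2)
           for b in product((0, 1), repeat=2))
print(best)  # 0.75, the local-realist bound
```

Quantum players sharing a singlet can instead win with probability cos²(π/8) ≈ 0.85, which is what a Bell inequality violation amounts to in game form.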

To view the seminar live, go to http://gplus.to/qplus at the appointed hour.

To stay up to date on future Q+ hangouts, follow us on:

Google+: http://gplus.to/qplus

Twitter: @qplushangouts

Facebook: http://www.facebook.com/qplushangouts

or visit our website http://qplus.burgarth.de

Q+ Hangout: Caslav Brukner

Here are the details for the next Q+ hangout.

Date: 24th July 2012

Time: 2pm British Summer Time

Speaker: Caslav Brukner (University of Vienna)

Title: Quantum correlations with indefinite causal order

Abstract:

In quantum physics it is standardly assumed that a background time or definite causal structure exists, such that every operation is either in the future of, in the past of, or space-like separated from any other operation. Consequently, the correlations between operations respect a definite causal order: they are either signalling correlations for time-like separated operations or no-signalling correlations for space-like separated ones. We develop a framework that assumes only that operations in local laboratories are described by quantum mechanics (i.e. are completely positive maps), but relaxes the assumption that they are causally connected. Remarkably, we find situations where two operations are neither causally ordered nor in a probabilistic mixture of definite causal orders, i.e. one cannot say that one operation is before or after the other. The correlations between the operations are shown to enable a communication task (a “causal game”) that is impossible if the operations are ordered according to a fixed background time.

To view the seminar live, go to http://gplus.to/qplus at the appointed hour.

To stay up to date on future Q+ hangouts, follow us on:

Google+: http://gplus.to/qplus

Twitter: @qplushangouts

Facebook: http://www.facebook.com/qplushangouts

or visit our website http://qplus.burgarth.de

Q+ Hangout: Scott Aaronson

Here are the details for the next Q+ hangout.

Date: 19th June 2012

Time: 2pm British Summer Time

Speaker: Scott Aaronson (MIT)

Title: Quantum Money from Hidden Subspaces

Abstract:

Forty years ago, Wiesner pointed out that quantum mechanics raises the striking possibility of money that cannot be counterfeited according to the laws of physics. We propose the first quantum money scheme that is (1) public-key—meaning that anyone can verify a banknote as genuine, not only the bank that printed it, and (2) cryptographically secure, under a “classical” hardness assumption that has nothing to do with quantum money. Our scheme is based on hidden subspaces, encoded as the zero-sets of random multivariate polynomials. A main technical advance is to show that the “black-box” version of our scheme, where the polynomials are replaced by classical oracles, is unconditionally secure. Previously, such a result had only been known relative to a quantum oracle (and even there, the proof was never published). Even in Wiesner’s original setting—quantum money that can only be verified by the bank—we are able to use our techniques to patch a major security hole in Wiesner’s scheme. We give the first private-key quantum money scheme that allows unlimited verifications and that remains unconditionally secure, even if the counterfeiter can interact adaptively with the bank. Our money scheme is simpler than previous public-key quantum money schemes, including a knot-based scheme of Farhi et al. The verifier needs to perform only two tests, one in the standard basis and one in the Hadamard basis—matching the original intuition for quantum money, based on the existence of complementary observables. Our security proofs use a new variant of Ambainis’s quantum adversary method, and several other tools that might be of independent interest.

Based on http://arxiv.org/abs/1203.4740

Joint work with Paul Christiano
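Wiesner's original private-key scheme, mentioned at the end of the abstract, is simple enough to caricature classically. In the following toy simulation (my own sketch; the note length and counterfeiting strategy are illustrative choices, not from the paper), each BB84-style qubit is modelled by its preparation basis and bit:

```python
import random

def make_note(n):
    """Bank's secret: a random (basis, bit) pair per qubit.
    Basis 0 = standard {|0>,|1>}, basis 1 = Hadamard {|+>,|->}."""
    return [(random.randint(0, 1), random.randint(0, 1)) for _ in range(n)]

def measure(state, basis):
    """Toy measurement: measuring in the preparation basis returns the
    encoded bit; measuring in the conjugate basis gives a random outcome."""
    prep_basis, bit = state
    return bit if basis == prep_basis else random.randint(0, 1)

def verify(secret, qubits):
    """The bank measures each qubit in the basis it recorded, checking the bit."""
    return all(measure(q, basis) == bit
               for q, (basis, bit) in zip(qubits, secret))

random.seed(0)
secret = make_note(50)

# An honest note (the original states) always verifies.
print(verify(secret, list(secret)))  # True

# A counterfeiter measures each qubit in a guessed basis and re-prepares it.
fake = []
for q in secret:
    guess = random.randint(0, 1)
    fake.append((guess, measure(q, guess)))
print(verify(secret, fake))  # False with overwhelming probability
```

Each forged qubit passes the bank's test with probability 3/4 (certainly if the basis guess was right, half the time otherwise), so a forged n-qubit note survives with probability (3/4)^n. The security hole that Aaronson and Christiano patch concerns adaptive interaction with the bank, which this toy model does not capture.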

To view the seminar live, go to http://gplus.to/qplus at the appointed hour.

To stay up to date on future Q+ hangouts, follow us on:

Google+: http://gplus.to/qplus

Twitter: @qplushangouts

Facebook: http://www.facebook.com/qplushangouts

or visit our website http://qplus.burgarth.de

Q+ Hangout: Joe Fitzsimons

Speaker: Joe Fitzsimons, Centre for Quantum Technologies, Singapore

Date: Tuesday 29th May 2012

Time: 14:00 British Summer Time

Title: Universal blind quantum computation

Abstract:

Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client’s inputs, outputs and computation remain private. In this talk I will present a protocol for universal unconditionally secure BQC, based on the conceptual framework of the measurement-based quantum computing model. In this protocol the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. This scheme has recently been implemented in a quantum optics setting. I will finish with a discussion of variants of the scheme allowing the client to detect deviations from the protocol by a malicious server.

To watch the seminars live, go to http://gplus.to/qplus at the appointed hour. You do not need a Google account to watch, but you do need one if you would like to be able to participate in the question and answer session at the end of the talk.

To stay up to date on the scheduled seminars you can visit our website or follow us on various social networks:

Our website: http://qplus.burgarth.de

Google+: http://gplus.to/qplus

Twitter: @qplushangouts

Facebook: http://facebook.com/qplushangouts

We also encourage you to suggest speakers for future talks. You can do so by adding them to the spreadsheet at .

Q+ Hangout: Vlatko Vedral

Date: Tuesday 8th May 2012

Time: 14:00 British Summer Time

Speaker: Vlatko Vedral (University of Oxford/National University of Singapore)

Title: Using Temporal Entanglement to Perform Thermodynamical Work

Abstract: Here we investigate the impact of temporal entanglement on a system’s ability to perform thermodynamical work. We show that while the quantum version of the Jarzynski equality remains satisfied even in the presence of temporal entanglement, the individual thermodynamical work moments in the expansion of the free energy are, in fact, sensitive to the genuine quantum correlations. Therefore, while individual moments of the amount of thermodynamical work can be larger (or smaller) quantumly than classically, when they are all combined together into the (exponential of) free energy, the total effect vanishes to leave the Jarzynski equality intact. Whether this is a fortuitous coincidence remains to be seen, but it certainly goes towards explaining why the laws of thermodynamics happen to be so robust as to be independent of the underlying micro-physics. We discuss the relationship between this result and thermodynamical witnesses of spatial entanglement as well as explore the subtle connection with the “quantum arrow of time”.

Based on http://arxiv.org/abs/1204.5559 and possibly also http://arxiv.org/abs/1204.6168 if time permits
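The classical Jarzynski equality underlying the abstract can be checked exactly in a minimal example (my own sketch, with made-up energy levels, not taken from the papers): for a sudden quench of a two-level system, the work is just the change in the occupied level's energy, and the average of exp(-βW) collapses to a ratio of partition functions:

```python
import math

beta = 1.0
E_before = [0.0, 1.0]   # energy levels before a sudden quench
E_after  = [0.0, 2.0]   # energy levels after (illustrative numbers)

Z_before = sum(math.exp(-beta * e) for e in E_before)
Z_after  = sum(math.exp(-beta * e) for e in E_after)

# For a sudden quench, the work when the system starts in level i is
# W_i = E_after[i] - E_before[i], with Boltzmann probability p_i.
avg_exp_work = sum(math.exp(-beta * e1) / Z_before
                   * math.exp(-beta * (e2 - e1))
                   for e1, e2 in zip(E_before, E_after))

# Jarzynski: <exp(-beta W)> = exp(-beta dF) = Z_after / Z_before.
delta_F = -(math.log(Z_after) - math.log(Z_before)) / beta
print(abs(avg_exp_work - math.exp(-beta * delta_F)) < 1e-12)  # True
```

The cancellation is exact here because the Boltzmann weights in the average telescope against the work exponentials; the point of the talk is that an analogous cancellation survives the quantum corrections to the individual work moments.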

To watch the seminars live, go to http://gplus.to/qplus at the appointed hour. You do not need a Google account to watch, but you do need one if you would like to be able to participate in the question and answer session at the end of the talk.

To stay up to date on the scheduled seminars you can visit our website or follow us on various social networks:

Our website: http://qplus.burgarth.de

Google+: http://gplus.to/qplus

Twitter: @qplushangouts

Facebook: http://facebook.com/qplushangouts

We also encourage you to suggest speakers for future talks. You can do so by adding them to the spreadsheet at .

Q+ Hangout: Jonathan Oppenheim

Some of you may be aware that, in collaboration with Daniel Burgarth, I have been organizing a series of online seminars on quantum information and foundations using Google+ hangouts. I have avoided advertising them on this blog so far because there used to be a limit on the number of people who could attend and I didn’t want them to get too oversubscribed. However, recently we have gained the ability to stream the seminars to a large number of people, so I will crosspost the announcements here from now on.

You can watch the seminars live on any computer with an internet connection, or after the fact on YouTube. The details of the next seminar are:

Date: 24th April 2012

Time: 14:00 British Summer Time

Title: Fundamental limitations for quantum and nano thermodynamics

Speaker: Jonathan Oppenheim (University College London)

Abstract:

The relationship between thermodynamics and statistical physics is valid in the thermodynamic limit — when the number of particles involved becomes very large. Here we study thermodynamics in the opposite regime — at both the nano scale, and when quantum effects become important. Applying results from quantum information theory we construct a theory of thermodynamics in these extreme limits. In the quantum regime, we find that the standard free energy no longer determines the amount of work which can be extracted from a resource, nor which state transitions can occur spontaneously. We derive a criteria for thermodynamical state transitions, and find two free energies: one which determines the amount of work which can be extracted from a small system in contact with a heat bath, and the other which quantifies the reverse process. They imply that generically, there are additional constraints which govern spontaneous thermodynamical processes. We find that there are fundamental limitations on work extraction from nonequilibrium states, due to both finite size effects which are present at the nano scale, as well as quantum coherences. This implies that thermodynamical transitions are generically irreversible at this scale, and we quantify the degree to which this is so, and the condition for reversibility to hold. There are particular equilibrium processes which approach the ideal efficiency, provided that certain special conditions are met.

Based on http://arxiv.org/abs/1111.3834.

Biography:

Jonathan Oppenheim has recently been appointed professor at University College London. He is an expert in quantum information theory and quantum gravity. His Ph.D. under Bill Unruh at the University of British Columbia was on quantum time. In 2004 he was a postdoctoral researcher under Jacob Bekenstein and then held a University Research Fellowship at Cambridge University. Together with Michał Horodecki and Andreas Winter, he discovered quantum state merging and used this primitive to show that quantum information could be negative.

To watch the seminars live, go to http://gplus.to/qplus at the appointed hour. You do not need a Google account to watch, but you do need one if you would like to be able to participate in the question and answer session at the end of the talk.

To stay up to date on the scheduled seminars you can visit our website or follow us on various social networks:

Our website: http://qplus.burgarth.de

Google+: http://gplus.to/qplus

Twitter: @qplushangouts

Facebook: http://facebook.com/qplushangouts

We also encourage you to suggest speakers for future talks. You can do so by adding them to the spreadsheet at .

Quantum Times Article on the PBR Theorem

I recently wrote an article (pdf) for The Quantum Times (Newsletter of the APS Topical Group on Quantum Information) about the PBR theorem. There is some overlap with my previous blog post, but the newsletter article focuses more on the implications of the PBR result, rather than the result itself. Therefore, I thought it would be worth reproducing it here. Quantum types should still download the original newsletter, as it contains many other interesting things, including an article by Charlie Bennett on logical depth (which he has also reproduced over at The Quantum Pontiff). APS members should also join the TGQI, and if you are at the March meeting this week, you should check out some of the interesting sessions they have organized.

Note: Due to the appearance of this paper, I would weaken some of the statements in this article if I were writing it again. The results of the paper imply that the factorization assumption is essential to obtain the PBR result, so this is an additional assumption that needs to be made if you want to prove things like Bell’s theorem directly from psi-ontology rather than using the traditional approach. When I wrote the article, I was optimistic that a proof of the PBR theorem that does not require factorization could be found, in which case teaching PBR first and then deriving other results like Bell as a consequence would have been an attractive pedagogical option. However, due to the necessity for stronger assumptions, I no longer think this.

OK, without further ado, here is the article.

PBR, EPR, and all that jazz

In the past couple of months, the quantum foundations world has been abuzz about a new preprint entitled “The Quantum State Cannot be Interpreted Statistically” by Matt Pusey, Jon Barrett and Terry Rudolph (henceforth known as PBR). Since I wrote a blog post explaining the result, I have been inundated with more correspondence from scientists and more requests for comment from science journalists than at any other point in my career. Reaction to the result amongst quantum researchers has been mixed, with many people reacting negatively to the title, which can be misinterpreted as an attack on the Born rule. Others have managed to read past the title, but are still unsure whether to credit the result with any fundamental significance. In this article, I would like to explain why I think that the PBR result is the most significant constraint on hidden variable theories that has been proved to date. It provides a simple proof of many other known theorems, and it supercharges the EPR argument, converting it into a rigorous proof of nonlocality that has the same status as Bell’s theorem. Before getting to this though, we need to understand the PBR result itself.

What are Quantum States?

One of the most debated issues in the foundations of quantum theory is the status of the quantum state. On the ontic view, quantum states represent a real property of quantum systems, somewhat akin to a physical field, albeit one with extremely bizarre properties like entanglement. The alternative to this is the epistemic view, which sees quantum states as states of knowledge, more akin to the probability distributions of statistical mechanics. A psi-ontologist
(as supporters of the ontic view have been dubbed by Chris Granade) might point to the phenomenon of interference in support of their view, and also to the fact that pretty much all viable realist interpretations of quantum theory, such as many-worlds or Bohmian mechanics, include an ontic state. The key argument in favor of the epistemic view is that it dissolves the measurement problem, since the fact that states undergo a discontinuous change in the light of measurement results does not then imply the existence of any real physical process. Instead, the collapse of the wavefunction is more akin to the way that classical probability distributions get updated by Bayesian conditioning in the light of new data.

Many people who advocate a psi-epistemic view also adopt an anti-realist or neo-Copenhagen point of view on quantum theory in which the quantum state does not represent knowledge about some underlying reality, but rather it only represents knowledge about the consequences of measurements that we might make on the system. However, there remained the nagging question of whether it is possible in principle to construct a realist interpretation of quantum theory that is also psi-epistemic, or whether the realist is compelled to think that quantum states are real. PBR have answered this question in the negative, at least within the standard framework for hidden variable theories that we use for other no go results such as Bell’s theorem. As with Bell’s theorem, there are loopholes, so it is better to say that PBR have placed a strong constraint on realist psi-epistemic interpretations, rather than ruling them out entirely.

The PBR Result

To properly formulate the result, we need to know a bit about how quantum states are represented in a hidden variable theory. In such a theory, quantum systems are assumed to have real pre-existing properties that are responsible for determining what happens when we make a measurement. A full specification of these properties is what we mean by an ontic state of the system. In general, we don’t have precise control over the ontic state so a quantum state corresponds to a probability distribution over the ontic states. This framework is illustrated below.

Representation of a quantum state in an ontic model

In an ontic model, a quantum state (indicated heuristically on the left as a vector in the Bloch sphere) is represented by a probability distribution over ontic states, as indicated on the right.

A hidden variable theory is psi-ontic if knowing the ontic state of the system allows you to determine the (pure) quantum state that was prepared uniquely. Equivalently, the probability distributions corresponding to two distinct pure states do not overlap. This is illustrated below.

Psi-ontic model

Representation of a pair of quantum states in a psi-ontic model

A hidden variable theory is psi-epistemic if it is not psi-ontic, i.e. there must exist an ontic state that is possible for more than one pure state, or, in other words, there must exist two nonorthogonal pure states with corresponding distributions that overlap. This is illustrated below.

Psi-epistemic model

Representation of nonorthogonal states in a psi-epistemic model

These definitions of psi-ontology and psi-epistemicism may seem a little abstract, so a classical analogy may be helpful. In Newtonian mechanics the ontic state of a particle is a point in phase space, i.e. a specification of its position and momentum. Other ontic properties of the particle, such as its energy, are given by functions of the phase space point, i.e. they are uniquely determined by the ontic state. Likewise, in a hidden variable theory, anything that is a unique function of the ontic state should be regarded as an ontic property of the system, and this applies to the quantum state in a psi-ontic model. The definition of a psi-epistemic model as the negation of this is very weak, e.g. it could still be the case that most ontic states are only possible in one quantum state and just a few are compatible with more than one. Nonetheless, even this very weak notion is ruled out by PBR.
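The disjointness condition is easy to state computationally. For a finite ontic state space (a simplification for illustration; the distributions below are hypothetical, and realistic models typically need continuous spaces), a pair of pure states is represented psi-ontically iff their distributions share no ontic state:

```python
def supports_overlap(p, q, tol=1e-12):
    """Two distributions over a finite ontic state space overlap iff some
    ontic state is assigned nonzero probability by both."""
    return any(pi > tol and qi > tol for pi, qi in zip(p, q))

# Hypothetical distributions over four ontic states.
psi_ontic_pair     = ([0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 0.7, 0.3])
psi_epistemic_pair = ([0.5, 0.5, 0.0, 0.0], [0.0, 0.6, 0.4, 0.0])

print(supports_overlap(*psi_ontic_pair))      # False: disjoint supports
print(supports_overlap(*psi_epistemic_pair))  # True: ontic state 1 is shared
```

In this language, PBR rules out any model in which `supports_overlap` returns True for even a single pair of distinct pure states.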

The proof of the PBR result is quite simple, but I will not review it here because it is summarized in my blog post and the original paper is also very readable. Instead, I want to focus on its implications.

Size of the Ontic State Space

A trivial consequence of the PBR result is that the cardinality of the ontic state space of any hidden variable theory, even for just a qubit, must be infinite, in fact continuously so. This is because there must be at least one ontic state for each quantum state, and there is a continuous infinity of the latter. The fact that there must be infinitely many ontic states was previously proved by Lucien Hardy under the name “Ontological Excess Baggage theorem”, but we can now view it as a corollary of PBR. If you think about it, this property is quite surprising because we can only extract one or two bits from a qubit (depending on whether we count superdense coding), so it would be natural to assume that a hidden variable state could be specified by a finite amount of information.

Hidden variable theories provide one possible method of simulating a quantum computer on a classical computer by simply tracking the value of the ontic state at each stage in the computation. This enables us to sample from the probability distribution of any quantum measurement at any point during the computation. Another method is to simply store a representation of the quantum state at each point in time. This second method is clearly inefficient, as the number of parameters required to specify a quantum state grows exponentially with the number of qubits. The PBR theorem tells us that the hidden variable method cannot be any better, as it requires an ontic state space that is at least as big as the set of quantum states. This conclusion was previously drawn by Alberto Montina using different methods, but again it now becomes a corollary of PBR. This result falls short of saying that any classical simulation of a quantum computer must have exponential space complexity, since we usually only have to simulate the outcome of one fixed measurement at the end of the computation and our simulation does not have to track the slice-by-slice causal evolution of the quantum circuit. Indeed, pretty much the first nontrivial result in quantum computational complexity theory, proved by Bernstein and Vazirani, showed that quantum circuits can be simulated with polynomial memory resources. Nevertheless, this result does reaffirm that we need to go beyond slice-by-slice simulations of quantum circuits in looking for efficient classical algorithms.
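The exponential growth of the state-vector method is easy to make concrete. A direct simulation of n qubits stores 2^n complex amplitudes; the sketch below (a standard textbook construction, not tied to any of the cited papers) applies a Hadamard gate to one qubit of such a vector:

```python
import numpy as np

def apply_hadamard(state, qubit, n):
    """Apply a Hadamard gate to one qubit of an n-qubit state vector."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    # Reshape so the target qubit is its own axis, apply H, flatten back.
    state = state.reshape([2] * n)
    state = np.tensordot(H, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

for n in (2, 10, 20):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                      # the |00...0> state
    state = apply_hadamard(state, 0, n)
    print(n, state.size)                # the vector holds 2**n amplitudes
```

Each additional qubit doubles the memory, which is why the polynomial-space result of Bernstein and Vazirani has to avoid tracking the full state slice by slice.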

Supercharged EPR Argument

As emphasized by Harrigan and Spekkens, a variant of the EPR argument favoured by Einstein shows that any psi-ontic hidden variable theory must be nonlocal. Thus, prior to Bell’s theorem, the only open possibility for a local hidden variable theory was a psi-epistemic theory. Of course, Bell’s theorem rules out all local hidden variable theories, regardless of the status of the quantum state within them. Nevertheless, the PBR result now gives an arguably simpler route to the same conclusion by ruling out psi-epistemic theories, allowing us to infer nonlocality directly from EPR.

A sketch of the argument runs as follows. Consider a pair of qubits in the singlet state. When one of the qubits is measured in an orthonormal basis, the other qubit collapses to one of two orthogonal pure states. By varying the basis that the first qubit is measured in, the second qubit can be made to collapse in any basis we like (a phenomenon that Schroedinger called “steering”). If we restrict attention to two possible choices of measurement basis, then there are
four possible pure states that the second qubit might end up in. The PBR result implies that the sets of possible ontic states for the second system for each of these pure states must be disjoint. Consequently, the sets of possible ontic states corresponding to the two distinct choices of basis are also disjoint. Thus, the ontic state of the second system must depend on the choice of measurement made on the first system and this implies nonlocality because I can decide which measurement to perform on the first system at spacelike separation from the second.
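The steering step of this argument can be verified with a few lines of linear algebra (my own illustration of the standard calculation): projecting qubit 1 of a singlet onto a Z-basis or X-basis state collapses qubit 2 into the corresponding basis:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

# The singlet (|01> - |10>)/sqrt(2), as a 2x2 array indexed by (qubit 1, qubit 2).
singlet = (np.outer(ket0, ket1) - np.outer(ket1, ket0)) / np.sqrt(2)

def steered_state(outcome):
    """Normalised state of qubit 2 after qubit 1 is projected onto `outcome`."""
    cond = outcome @ singlet          # contract away qubit 1's index
    return cond / np.linalg.norm(cond)

# Measuring qubit 1 in the Z basis steers qubit 2 into the Z basis...
print(np.abs(steered_state(ket0)))          # amplitudes of |1>
# ...while measuring in the X basis steers it into the X basis.
print(np.abs(steered_state(plus) @ minus))  # overlap 1: the state is |->
```

The two choices of basis thus give the four post-measurement states {|0>, |1>, |+>, |->} for the second qubit, which is exactly the set used in the argument above.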

PBR as a proto-theorem

We have seen that the PBR result can be used to establish some known constraints on hidden variable theories in a very straightforward way. There is more to this story than I can possibly fit into this article, and I suspect that every major no-go result for hidden variable theories may fall under the rubric of PBR. Thus, even if you don’t care a fig about fancy distinctions between ontic and epistemic states, it is still worth devoting a few braincells to the PBR result. I predict that it will become viewed as the basic result about hidden variable theories, and that we will end up teaching it to our students even before such stalwarts as Bell’s theorem and Kochen-Specker.

Further Reading

For further details of the PBR theorem see:

For constraints on the size of the ontic state space see:

For the early quantum computational complexity results see:

For a fully rigorous version of the PBR+EPR nonlocality argument see:

Can the quantum state be interpreted statistically?

A new preprint entitled The Quantum State Cannot be Interpreted Statistically by Pusey, Barrett and Rudolph (henceforth known as PBR) has been generating a significant amount of buzz in the last couple of days. Nature posted an article about it on their website, Scott Aaronson and Lubos Motl blogged about it, and I have been seeing a lot of commentary about it on Twitter and Google+. In this post, I am going to explain the background to this theorem and outline exactly what it entails for the interpretation of the quantum state. I am not going to explain the technicalities in great detail, since these are explained very clearly in the paper itself. The main aim is to clear up misconceptions.

First up, I would like to say that I find the use of the word “Statistically” in the title to be a rather unfortunate choice. It is liable to make people think that the authors are arguing against the Born rule (Lubos Motl has fallen into this trap in particular), whereas in fact the opposite is true. The result is all about reproducing the Born rule within a realist theory. The question is whether a scientific realist can interpret the quantum state as an epistemic state (state of knowledge) or whether it must be an ontic state (state of reality). It seems to show that only the ontic interpretation is viable, but, in my view, this is a bit too quick. On careful analysis, it does not really rule out any of the positions that are advocated by contemporary researchers in quantum foundations. However, it does answer an important question that was previously open, and confirms an intuition that many of us already held. Before going into more detail, I also want to say that I regard this as the most important result in quantum foundations in the past couple of years, well deserving of a good amount of hype if anything is. I am not sure I would go as far as Antony Valentini, who is quoted in the Nature article saying that it is the most important result since Bell’s theorem, or David Wallace, who says that it is the most significant result he has seen in his career. Of course, these two are likely to be very happy about the result, since they already subscribe to interpretations of quantum theory in which the quantum state is ontic (de Broglie-Bohm theory and many-worlds respectively) and perhaps they believe that it poses more of a dilemma for epistemicists like myself than it actually does.

Classical Ontic States

Before explaining the result itself, it is important to be clear on what all this epistemic/ontic state business is all about and why it matters. It is easiest to introduce the distinction via a classical example, for which the interpretation of states is clear. Therefore, consider the Newtonian dynamics of a single point particle in one dimension. The trajectory of the particle can be determined by specifying initial conditions, which in this case consists of a position \(x(t_0)\) and momentum \(p(t_0)\) at some initial time \(t_0\). These specify a point in the particle’s phase space, which consists of all possible pairs \((x,p)\) of positions and momenta.

Classical Ontic State

The ontic state space for a single classical particle, with the initial ontic state marked.

Then, assuming we know all the relevant forces, we can compute the position and momentum \((x(t),p(t))\) at some other time \(t\) using Newton’s laws or, equivalently, Hamilton’s equations. At any time \(t\), the phase space point \((x(t),p(t))\) can be thought of as the instantaneous state of the particle. It is clearly an ontic state (state of reality), since the particle either does or does not possess that particular position and momentum, independently of whether we know that it possesses those values ((There are actually subtleties about whether we should think of phase space points as instantaneous ontic states. For one thing, the momentum depends on the first derivative of position, so maybe we should really think of the state being defined on an infinitesimal time interval. Secondly, the fact that momentum appears is because Newtonian mechanics is defined by second order differential equations. If it were higher order then we would have to include variables depending on higher derivatives in our definition of phase space. This is bad if you believe in a clean separation between basic ontology and physical laws. To avoid this, one could define the ontic state to be the position only, i.e. a point in configuration space, and have the boundary conditions specified by the position of the particle at two different times. Alternatively, one might regard the entire spacetime trajectory of the particle as the ontic state, and regard the Newtonian laws themselves as a mere pattern in the space of possible trajectories. Of course, all these descriptions are mathematically equivalent, but they are conceptually quite different and they lead to different intuitions as to how we should understand the concept of state in quantum theory. For present purposes, I will ignore these subtleties and follow the usual practice of regarding phase space points as the unambiguous ontic states of classical mechanics.)). 
The same goes for more complicated systems, such as multiparticle systems and fields. In all cases, I can derive a phase space consisting of configurations and generalized momenta. This is the space of ontic states for any classical system.
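For a concrete instance of an ontic trajectory, Hamilton's equations for a harmonic oscillator (unit mass and spring constant, my illustrative choice) can be integrated directly; given the initial phase space point, the whole trajectory is fixed, and (x, p) returns to its starting value after one period:

```python
import numpy as np

def evolve(x, p, dt, steps, m=1.0, k=1.0):
    """Integrate Hamilton's equations dx/dt = p/m, dp/dt = -k*x for a
    harmonic oscillator with the symplectic Euler scheme."""
    for _ in range(steps):
        p -= k * x * dt        # kick:  dp/dt = -dV/dx
        x += (p / m) * dt      # drift: dx/dt = p/m
    return x, p

# The ontic state is the phase space point (x, p); evolve it for one
# period T = 2*pi (with m = k = 1).
x, p = evolve(1.0, 0.0, dt=1e-4, steps=int(2 * np.pi * 1e4))
print(x, p)  # back near (1, 0) after one period
```

The deterministic update rule is the classical analogue of what a hidden variable theory must supply for the ontic states of a quantum system.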

Classical Epistemic States

Although the description of classical mechanics in terms of ontic phase space trajectories is clear and unambiguous, we are often, indeed usually, more interested in tracking what we know about a system. For example, in statistical mechanics, we may only know some macroscopic properties of a large collection of systems, such as pressure or temperature. We are interested in how these quantities change over time, and there are many different possible microscopic trajectories that are compatible with this. Generally speaking, our knowledge about a classical system is determined by assigning a probability distribution over phase space, which represents our uncertainty about the actual point occupied by the system.

A classical epistemic state

An epistemic state of a single classical particle. The ellipses represent contour lines of constant probability.

We can track how this probability distribution changes using Liouville’s equation, which is derived by applying Hamilton’s equations weighted with the probability assigned to each phase space point. The probability distribution is pretty clearly an epistemic state. The actual system only occupies one phase space point and does not care what probability we have assigned to it. Crucially, the ontic state occupied by the system would be regarded as possible by us in more than one probability distribution; in fact, it is compatible with infinitely many.
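The evolution of an epistemic state can be illustrated numerically by evolving an ensemble of phase space samples under Hamilton’s equations. Here is a minimal sketch, assuming (purely for illustration, not anything from the discussion above) a one-dimensional harmonic oscillator with unit mass and frequency:

```python
import numpy as np

# Hedged sketch: track a classical epistemic state by Monte Carlo.
# Assumed setup: 1D harmonic oscillator, H = (p^2 + x^2)/2,
# integrated with the symplectic leapfrog (kick-drift-kick) method.

rng = np.random.default_rng(0)

# Epistemic state: a Gaussian cloud of phase space samples.
n = 10000
x = rng.normal(loc=1.0, scale=0.1, size=n)
p = rng.normal(loc=0.0, scale=0.1, size=n)

dt, steps = 0.01, 1000
for _ in range(steps):
    # Hamilton's equations: dx/dt = p, dp/dt = -x.
    p -= 0.5 * dt * x
    x += dt * p
    p -= 0.5 * dt * x

# Each sample follows its own ontic trajectory; the cloud as a whole
# is the evolved epistemic state. Energy is conserved per trajectory,
# so the ensemble-average energy stays fixed.
energy = 0.5 * (x**2 + p**2)
print(energy.mean())
```

Each sample point plays the role of a possible ontic state, and the cloud of samples is a discrete stand-in for the Liouville probability density.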

Overlapping epistemic states

Epistemic states can overlap, so each ontic state is possible in more than one epistemic state. In this diagram, the two phase space axes have been schematically compressed into one, so that we can sketch the probability density graphs of epistemic states. The ontic state marked with a cross is possible in both epistemic states sketched on the graph.

Quantum States

We have seen that there are two clear notions of state in classical mechanics: ontic states (phase space points) and epistemic states (probability distributions over the ontic states). In quantum theory, we have a different notion of state — the wavefunction — and the question is: should we think of it as an ontic state (more like a phase space point), an epistemic state (more like a probability distribution), or something else entirely?

Here are three possible answers to this question:

  1. Wavefunctions are epistemic and there is some underlying ontic state. Quantum mechanics is the statistical theory of these ontic states in analogy with Liouville mechanics.
  2. Wavefunctions are epistemic, but there is no deeper underlying reality.
  3. Wavefunctions are ontic (there may also be additional ontic degrees of freedom, which is an important distinction but not relevant to the present discussion).

I will call options 1 and 2 psi-epistemic and option 3 psi-ontic. Advocates of option 3 are called psi-ontologists, in an intentional pun coined by Chris Granade. Options 1 and 3 share a conviction of scientific realism, which is the idea that there must be some description of what is going on in reality that is independent of our knowledge of it. Option 2 is broadly anti-realist, although there can be some subtleties here ((The subtlety is basically a person called Chris Fuchs. He is clearly in the option 2 camp, but claims to be a scientific realist. Whether he is successful at maintaining realism is a matter of debate.)).

The theorem in the paper attempts to rule out option 1, which would mean that scientific realists should become psi-ontologists. I am pretty sure that no theorem on Earth could rule out option 2, so that is always a refuge for psi-epistemicists, at least if their psi-epistemic conviction is stronger than their realist one.

I would classify the Copenhagen interpretation, as represented by Niels Bohr ((Note, this is distinct from the orthodox interpretation as represented by the textbooks of Dirac and von Neumann, which is also sometimes called the Copenhagen interpretation. Orthodoxy accepts the eigenvalue-eigenstate link.  Observables can sometimes have definite values, in which case they are objective properties of the system. A system has such a property when it is in an eigenstate of the corresponding observable. Since every wavefunction is an eigenstate of some observable, it follows that this is a psi-ontic view, albeit one in which there are no additional ontic degrees of freedom beyond the quantum state.)), under option 2. One of his famous quotes is:

There is no quantum world. There is only an abstract physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature… ((Sourced from Wikiquote.))

and “what we can say” certainly seems to imply that we are talking about our knowledge of reality rather than reality itself. Various contemporary neo-Copenhagen approaches also fall under this option, e.g. the Quantum Bayesianism of Carlton Caves, Chris Fuchs and Ruediger Schack; Anton Zeilinger’s idea that quantum physics is only about information; and the view presently advocated by the philosopher Jeff Bub. These views are safe from refutation by the PBR theorem, although one may debate whether they are desirable on other grounds, e.g. the accusation of instrumentalism.

Pretty much all of the well-developed interpretations that take a realist stance fall under option 3, so they are in the psi-ontic camp. This includes the Everett/many-worlds interpretation, de Broglie-Bohm theory, and spontaneous collapse models. Advocates of these approaches are likely to rejoice at the PBR result, as it apparently rules out their only realist competition, and they are unlikely to regard anti-realist approaches as viable.

Perhaps the best known contemporary advocate of option 1 is Rob Spekkens, but I also include myself and Terry Rudolph (one of the authors of the paper) in this camp. Rob gives a fairly convincing argument that option 1 characterizes Einstein’s views in this paper, which also gives a lot of technical background on the distinction between options 1 and 2.

Why be a psi-epistemicist?

Why should the epistemic view of the quantum state be taken seriously in the first place, at least seriously enough to prove a theorem about it? The most naive argument is that, generically, quantum states only predict probabilities for observables rather than definite values. In this sense, they are unlike classical phase space points, which determine the values of all observables uniquely. However, this argument is not compelling because determinism is not the real issue here. We can allow there to be some genuine stochasticity in nature whilst still maintaining realism.

An argument that I personally find motivating is that quantum theory can be viewed as a noncommutative generalization of classical probability theory, as was first pointed out by von Neumann. My own exposition of this idea is contained in this paper. Even if we don’t always realize it, we are always using this idea whenever we generalize a result from classical to quantum information theory. The idea is so useful, i.e. it has such great explanatory power, that it would be very puzzling if it were a mere accident, but it does appear to be just an accident in most psi-ontic interpretations of quantum theory.  For example, try to think about why quantum theory should be formally a generalization of probability theory from a many-worlds point of view.  Nevertheless, this argument may not be compelling to everyone, since it mainly entails that mixed states have to be epistemic. Classically, the pure states are the extremal probability distributions, i.e. they are just delta functions on a single ontic state. Thus, they are in one-to-one correspondence with the ontic states. The same could be true of pure quantum states without ruining the analogy ((but note that the resulting theory would essentially be the orthodox interpretation, which has a measurement problem.)).

A more convincing argument concerns the instantaneous change that occurs after a measurement — the collapse of the wavefunction. When we acquire new information about a classical epistemic state (probability distribution), say by measuring the position of a particle, it also undergoes an instantaneous change. All the weight we assigned to phase space points that have positions that differ from the measured value is rescaled to zero and the rest of the probability distribution is renormalized. This is just Bayesian conditioning. It represents a change in our knowledge about the system, but no change to the system itself. It is still occupying the same phase space point as it was before, so there is no change to the ontic state of the system. If the quantum state is epistemic, then instantaneous changes upon measurement are unproblematic, having a similar status to Bayesian conditioning. Therefore, the measurement problem is completely dissolved within this approach.
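Classical collapse-as-conditioning is easy to make concrete. In the following sketch (the grid size, uniform prior, and measured cell range are my own hypothetical choices for illustration), phase space is discretized into cells and we condition on a coarse-grained position measurement:

```python
import numpy as np

# Hedged sketch: Bayesian conditioning of a classical epistemic state
# on a position measurement. Phase space is a grid of cells indexed
# by (position, momentum).

nx, np_ = 50, 50                      # position cells x momentum cells
prior = np.ones((nx, np_))
prior /= prior.sum()                  # uniform epistemic state

# Measurement result: the position lies in cells 10..19.
# Conditioning = zero out incompatible cells, then renormalize.
likelihood = np.zeros((nx, np_))
likelihood[10:20, :] = 1.0

posterior = prior * likelihood
posterior /= posterior.sum()

print(posterior[10:20].sum())         # all the weight is now in the measured range
```

The momentum marginal is untouched by the update, which reflects the point in the text: conditioning changes our knowledge, not the system itself.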

Finally, if we allow a more sophisticated analogy between quantum states and probabilities, in particular by allowing constraints on how much may be known and allowing measurements to locally disturb the ontic state, then we can qualitatively explain a large number of phenomena that are puzzling for a psi-ontologist very simply within a psi-epistemic approach. These include: teleportation, superdense coding, and much of the rest of quantum information theory. Crucially, it also includes interference, which is often held as a convincing reason for psi-ontology. This was demonstrated in a very convincing way by Rob Spekkens via a toy theory, which is recommended reading for all those interested in quantum foundations. In fact, since this paper contains the most compelling reasons for being a psi-epistemicist, you should definitely make sure you read it so that you can be more shocked by the PBR result.

Ontic models

If we accept that the psi-epistemic position is reasonable, then it seems reasonable to pick option 1 and try to maintain scientific realism. This leads us into the realm of ontic models for quantum theory, otherwise known as hidden variable theories ((The terminology “ontic model” is preferred to “hidden variable theory” for two reasons. Firstly, we do not want to exclude the case where the wavefunction is ontic, but there are no extra degrees of freedom (as in the orthodox interpretation). Secondly, it is often the case that the “hidden” variables are the ones that we actually observe rather than the wavefunction, e.g. in Bohmian mechanics the particle positions are not “hidden”.)). A pretty standard framework for discussing such models has existed since John Bell’s work in the 1960s, and almost everyone adopts the same definitions that were laid down then. The basic idea is that systems have properties. There is some space \(\Lambda\) of ontic states, analogous to the phase space of a classical theory, and the system has a value \(\lambda \in \Lambda\) that specifies all its properties, analogous to the phase space points. When we prepare a system in some quantum state \(\Ket{\psi}\) in the lab, what is really happening is that an ontic state \(\lambda\) is sampled from a probability distribution \(\mu(\lambda)\) that depends on \(\Ket{\psi}\).

Representation of a quantum state in an ontic model

In an ontic model, a quantum state (indicated heuristically on the left as a vector in the Bloch sphere) is represented by a probability distribution over ontic states, as indicated on the right.

We also need to know how to represent measurements in the model ((Generally, we would need to represent dynamics as well, but the PBR theorem does not depend on this.)).  For each possible measurement that we could make on the system, the model must specify the outcome probabilities for each possible ontic state.  Note that we are not assuming determinism here.  The measurement is allowed to be stochastic even given a full specification of the ontic state.  Thus, for each measurement \(M\), we need a set of functions \(\xi^M_k(\lambda)\), where \(k\) labels the outcome.  \(\xi^M_k(\lambda)\) is the probability of obtaining outcome \(k\) in a measurement of \(M\) when the ontic state is \(\lambda\).  In order for these probabilities to be well defined, the functions \(\xi^M_k\) must be nonnegative and they must satisfy \(\sum_k \xi^M_k(\lambda) = 1\) for all \(\lambda \in \Lambda\). This normalization condition is very important in the proof of the PBR theorem, so please memorize it now.

Overall, the probability of obtaining outcome \(k\) in a measurement of \(M\) when the system is prepared in state \(\Ket{\psi}\) is given by

\[\mbox{Prob}(k|M,\Ket{\psi}) = \int_{\Lambda} \xi^M_k(\lambda) \mu(\lambda) d\lambda, \]
which is just the average of the outcome probabilities over the ontic state space.

If the model is going to reproduce the predictions of quantum theory, then these probabilities must match the Born rule.  Suppose that the \(k\)th outcome of \(M\) corresponds to the projector \(P_k\).  Then, this condition boils down to

\[\Bra{\psi} P_k \Ket{\psi} = \int_{\Lambda} \xi^M_k(\lambda) \mu(\lambda) d\lambda,\]

and this must hold for all quantum states, and all outcomes of all possible measurements.
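As a concrete check that this framework really can reproduce the Born rule, here is a Monte Carlo sketch of the well-known Kochen-Specker model for a single qubit. In that model, the ontic state \(\lambda\) is a unit Bloch vector; preparing \(\Ket{\psi}\) samples \(\lambda\) from a density proportional to \(\hat{\psi}\cdot\lambda\) on the hemisphere around the Bloch vector \(\hat{\psi}\), and a measurement along \(\hat{k}\) deterministically returns the outcome whose hemisphere contains \(\lambda\). The sampling scheme and parameters below are my own choices for the sketch:

```python
import numpy as np

# Hedged sketch: the Kochen-Specker qubit model, an ontic model that
# reproduces the Born rule for projective measurements on one qubit.
rng = np.random.default_rng(42)
n = 200000

# Prepare |psi> with Bloch vector along z: sample lambda from
# mu(lambda) = (psi_hat . lambda)/pi on the upper hemisphere.
u = np.sqrt(rng.random(n))            # u = cos(theta) has density 2u
phi = 2 * np.pi * rng.random(n)
s = np.sqrt(1 - u**2)
lam = np.stack([s * np.cos(phi), s * np.sin(phi), u], axis=1)

# Measure along a direction at angle alpha from z.
alpha = 1.0
k_hat = np.array([np.sin(alpha), 0.0, np.cos(alpha)])

# Deterministic response function: outcome + iff lambda is in k_hat's
# hemisphere. Averaging over mu(lambda) should give the Born rule.
p_model = np.mean(lam @ k_hat > 0)
p_born = 0.5 * (1 + np.cos(alpha))    # Born rule: cos^2(alpha/2)
print(p_model, p_born)
```

Note that the distributions for nonorthogonal states overlap in this model, so it is psi-epistemic; it only covers a single qubit, and the PBR argument below shows that this kind of overlap cannot survive once multiple systems and preparation independence are taken into account.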

Constraints on Ontic Models

Even disregarding the PBR paper, we already know that ontic models expressible in this framework have to have a number of undesirable properties. Bell’s theorem implies that they have to be nonlocal, which is not great if we want to maintain Lorentz invariance, and the Kochen-Specker theorem implies that they have to be contextual. Further, Lucien Hardy’s ontological excess baggage theorem shows that the ontic state space for even a qubit would have to have infinite cardinality. Following this, Montina proved a series of results, which culminated in the claim that there would have to be an object satisfying the Schrödinger equation present within the ontic state (see this paper). This latter result is close to the implication of the PBR theorem itself.

Given these constraints, it is perhaps not surprising that most psi-epistemicists have already opted for option 2, renouncing scientific realism entirely. Those of us who cling to realism have mostly decided that the ontic state must be a different type of object than it is in the framework described above.  We could discard the idea that individual systems have well-defined properties, or the idea that the probabilities that we assign to those properties should depend only on the quantum state. Spekkens advocates the first possibility, arguing that only relational properties are ontic. On the other hand, I, following Huw Price, am partial to the idea of epistemic hidden variable theories with retrocausal influences, in which case the probability distributions over ontic states would depend on measurement choices as well as which quantum state is prepared. Neither of these possibilities is ruled out by the previous results, and they are not ruled out by PBR either. This is why I say that their result does not rule out any position that is seriously held by any researchers in quantum foundations. Nevertheless, until the PBR paper, there remained the question of whether a conventional psi-epistemic model was possible even in principle. Such a theory could at least have been a competitor to Bohmian mechanics. This possibility has now been ruled out fairly convincingly, and so we now turn to the basic idea of their result.

The Result

Recall from our classical example that each ontic state (phase space point) occurs in the support of more than one epistemic state (Liouville distribution), in fact infinitely many. This is just because probability distributions can have overlapping support. Now, consider what would happen if we restricted the theory to only allow epistemic states with disjoint support. For example, we could partition phase space into a number of disjoint cells and only consider probability distributions that are uniform over one cell and zero everywhere else.

Restricted classical theory

A restricted classical theory in which only the distributions indicated are allowed as epistemic states. In this case, each ontic state is only possible in one epistemic state, so it is more accurate to say that the epistemic states represent a property of the ontic state.

Given this restriction, the ontic state determines the epistemic state uniquely. If someone tells you the ontic state, then you know which cell it is in, so you know what the epistemic state must be. Therefore, in this restricted theory, the epistemic state is not really epistemic. It is uniquely determined by the ontic state, so it would be better to say that it represents a property of the ontic state, rather than knowledge about it. According to the PBR result, this is exactly what must happen in any ontic model of quantum theory within the Bell framework.
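The restricted theory can be sketched in a few lines of code. Here the one-dimensional “phase space” \([0,1)\) and the four-cell partition are hypothetical choices of mine for illustration:

```python
# Hedged sketch: a restricted classical theory in which phase space
# is partitioned into disjoint cells and the only allowed epistemic
# states are uniform over a single cell. The map below recovers the
# epistemic state from the ontic state, showing it is really a
# property of the ontic state.

n_cells = 4                           # partition [0, 1) into 4 cells

def epistemic_state(ontic):
    """Return the unique allowed epistemic state containing `ontic`:
    its cell index and the uniform density on that cell."""
    cell = int(ontic * n_cells)       # which cell the ontic state is in
    density = n_cells                 # uniform density 1/(cell width)
    return cell, density

# Two ontic states in the same cell give the same epistemic state;
# ontic states in different cells never share one.
print(epistemic_state(0.10), epistemic_state(0.20), epistemic_state(0.30))
```

The function is many-to-one but never one-to-many: no ontic state is compatible with more than one allowed epistemic state, which is exactly what fails for overlapping distributions.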

Here is the analog of this in ontic models of quantum theory.  Suppose that two nonorthogonal quantum states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) are represented as follows in an ontic model:

Psi-epistemic model

Representation of nonorthogonal states in a psi-epistemic model

Because the distributions overlap, there are ontic states that are compatible with more than one quantum state, so this is a psi-epistemic model.

In contrast, if, for every pair of quantum states \(\Ket{\psi_1},\Ket{\psi_2}\), the probability distributions do not overlap, i.e. the representation of each pair looks like this

Psi-ontic model

Representation of a pair of quantum states in a psi-ontic model

then the quantum state is uniquely determined by the ontic state, and it is therefore better regarded as a property of \(\lambda\) rather than a representation of knowledge.  Such a model is psi-ontic.  The PBR theorem states that all ontic models that reproduce the Born rule must be psi-ontic.

Sketch of the proof

In order to establish the result, PBR make use of the following idea. In an ontic model, the ontic state \(\lambda\) determines the probabilities for the outcomes of any possible measurement via the functions \(\xi^M_k\). The Born rule probabilities must be obtained by averaging these conditional probabilities with respect to the probability distribution \(\mu(\lambda)\) representing the quantum state. Suppose there is some measurement \(M\) that has an outcome \(k\) to which the quantum state \(\Ket{\psi}\) assigns probability zero according to the Born rule. Then, it must be the case that \(\xi^M_k(\lambda) = 0\) for every \(\lambda\) in the support of \(\mu(\lambda)\). Now consider two quantum states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) and suppose that we can find a two outcome measurement such that the first state gives zero Born rule probability to the first outcome and the second state gives zero Born rule probability to the second outcome. Suppose also that there is some \(\lambda\) that is in the support of both the distributions, \(\mu_1\) and \(\mu_2\), that represent \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\) in the ontic model. Then, we must have \(\xi^M_1(\lambda) = \xi^M_2(\lambda) = 0\), which contradicts the normalization assumption \(\xi^M_1(\lambda) + \xi^M_2(\lambda) = 1\).

Now, it is fairly easy to see that there is no such measurement for a pair of nonorthogonal states, because this would mean that they could be distinguished with certainty, so we do not have a result quite yet. The trick to get around this is to consider multiple copies. Consider, then, the four states \(\Ket{\psi_1}\otimes\Ket{\psi_1}, \Ket{\psi_1}\otimes\Ket{\psi_2}, \Ket{\psi_2}\otimes\Ket{\psi_1}\) and \(\Ket{\psi_2}\otimes\Ket{\psi_2}\) and suppose that there is a four outcome measurement such that \(\Ket{\psi_1}\otimes\Ket{\psi_1}\) gives zero probability to the first outcome, \(\Ket{\psi_1}\otimes\Ket{\psi_2}\) gives zero probability to the second outcome, and so on. In addition to this, we make an independence assumption that the probability distributions representing these four states must satisfy. Let \(\lambda\) be the ontic state of the first system and let \(\lambda'\) be the ontic state of the second. The independence assumption states that the probability densities representing the four quantum states in the ontic model are \(\mu_1(\lambda)\mu_1(\lambda'), \mu_1(\lambda)\mu_2(\lambda'), \mu_2(\lambda)\mu_1(\lambda')\) and \(\mu_2(\lambda)\mu_2(\lambda')\). This is a reasonable assumption because there is no entanglement between the two systems and we could do completely independent experiments on each of them. Assuming there is an ontic state \(\lambda\) in the support of both \(\mu_1\) and \(\mu_2\), there will be some nonzero probability that both systems occupy this ontic state whenever any of the four states are prepared. But, in this case, all four functions \(\xi^M_1,\xi^M_2,\xi^M_3\) and \(\xi^M_4\) must have value zero when both systems are in this state, which contradicts the normalization \(\sum_k \xi^M_k = 1\).

This argument works for the pair of states \(\Ket{\psi_1} = \Ket{0}\) and \(\Ket{\psi_2} = \Ket{+} = \frac{1}{\sqrt{2}} \left ( \Ket{0} + \Ket{1}\right )\). In this case, the four outcome measurement is a measurement in the basis:

\[\Ket{\phi_1} = \frac{1}{\sqrt{2}} \left ( \Ket{0}\otimes\Ket{1} + \Ket{1} \otimes \Ket{0} \right )\]
\[\Ket{\phi_2} = \frac{1}{\sqrt{2}} \left ( \Ket{0}\otimes\Ket{-} + \Ket{1} \otimes \Ket{+} \right )\]
\[\Ket{\phi_3} = \frac{1}{\sqrt{2}} \left ( \Ket{+}\otimes\Ket{1} + \Ket{-} \otimes \Ket{0} \right )\]
\[\Ket{\phi_4} = \frac{1}{\sqrt{2}} \left ( \Ket{+}\otimes\Ket{-} + \Ket{-} \otimes \Ket{+} \right ),\]

where \(\Ket{-} = \frac{1}{\sqrt{2}} \left ( \Ket{0} - \Ket{1}\right )\). It is easy to check that \(\Ket{\phi_1}\) is orthogonal to \(\Ket{0}\otimes\Ket{0}\), \(\Ket{\phi_2}\) is orthogonal to \(\Ket{0}\otimes\Ket{+}\), \(\Ket{\phi_3}\) is orthogonal to \(\Ket{+}\otimes\Ket{0}\), and \(\Ket{\phi_4}\) is orthogonal to \(\Ket{+}\otimes\Ket{+}\). Therefore, the argument applies and there can be no overlap in the probability distributions representing \(\Ket{0}\) and \(\Ket{+}\) in the model.
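These orthogonality claims are easy to verify numerically. The following sketch writes the vectors in the computational basis and checks both that \(\Ket{\phi_1},\ldots,\Ket{\phi_4}\) form an orthonormal basis and that each of the four product states assigns zero Born rule probability to the corresponding outcome:

```python
import numpy as np

# Numerical check of the PBR measurement basis given in the text.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2)     # |+>
ketm = (ket0 - ket1) / np.sqrt(2)     # |->

kron = np.kron
phi = [
    (kron(ket0, ket1) + kron(ket1, ket0)) / np.sqrt(2),
    (kron(ket0, ketm) + kron(ket1, ketp)) / np.sqrt(2),
    (kron(ketp, ket1) + kron(ketm, ket0)) / np.sqrt(2),
    (kron(ketp, ketm) + kron(ketm, ketp)) / np.sqrt(2),
]

# The four product states, each orthogonal to one measurement vector.
prods = [kron(ket0, ket0), kron(ket0, ketp),
         kron(ketp, ket0), kron(ketp, ketp)]

# Gram matrix of the measurement vectors: identity iff orthonormal.
G = np.array([[a @ b for b in phi] for a in phi])
print(np.allclose(G, np.eye(4)))      # True: orthonormal basis

# Born rule probability of outcome k for the k-th product state.
zeros = [abs(phi[k] @ prods[k])**2 for k in range(4)]
print(zeros)                          # all four are zero
```

Since each product state gives zero probability to one outcome, while the response functions must sum to one on any shared ontic state, the contradiction described above goes through.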

To establish psi-ontology, we need a similar argument for every pair of states \(\Ket{\psi_1}\) and \(\Ket{\psi_2}\). PBR establish that such an argument can always be made, but the general case is more complicated and requires more than two copies of the system. I refer you to the paper, where the details are explained very clearly.

Conclusions

The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework. One of the things that a good interpretation of a physical theory should have is explanatory power. For me, the epistemic view of quantum states is so explanatory that it is worth trying to preserve it. Realism too is something that we should not abandon too hastily. Therefore, it seems to me that we should be questioning the assumptions of the Bell framework by allowing more general ontologies, perhaps involving relational or retrocausal degrees of freedom. At the very least, this option is the path less travelled, so we might learn something by exploring it more thoroughly.

Quantum Foundations Meetings

Prompted in part by the Quantum Pontiff’s post about the APS March meeting, I thought it would be a good idea to post one of my extremely irregular lists of interesting conferences about the foundations of quantum theory that are coming up. A lot of my usual sources for this sort of information have become defunct in the couple of years I was away from work, so if anyone knows of any other interesting meetings then please post them in the comments.

  • March 21st-25th 2011: APS March Meeting (Dallas, Texas) – Includes a special session on Quantum Information For Quantum Foundations. Abstract submission deadline Nov. 19th.
  • April 29th-May 1st 2011: New Directions in the Foundations of Physics (Washington DC) – Always one of the highlights of the foundations calendar, but invite only.
  • May 2nd-6th 2011: 5th Feynman Festival (Brazil) – Includes foundations of quantum theory as one of its topics, but likely there will be more quantum information/computation talks. Registration deadline Feb. 1st, Abstract submission deadline Feb. 15th.
  • July 25th-30th 2011: Frontiers of Quantum and Mesoscopic Thermodynamics (Prague, Czech Republic) – Not strictly a quantum foundations conference, but there are a few foundations speakers and foundations of thermodynamics is interesting to many quantum foundations people.

Time Travel and Information Processing

Lately, the quant-ph section of the arXiv has been aflurry with papers investigating what would happen to quantum information processing if time travel were possible (see the more recent papers here). I am not sure exactly why this topic has become fashionable, but it may well be an example of the Bennett effect in quantum information research. That is, a research topic can meander along slowly at its own pace for a few years until Charlie Bennett publishes an (often important) paper ((Bennett, C. H. et. al. (2009). “Can closed timelike curves or nonlinear quantum mechanics improve quantum state discrimination or help solve hard problems”. Phys. Rev. Lett. 103:170502. eprint arXiv:0908.3023.)) on the subject and then everyone is suddenly talking and writing about it for a couple of years. In any case, there have been a number of counter-intuitive claims that time travel enables quantum information processing to be souped up. Specifically, it supposedly enables super-hard computational problems that are in complexity classes larger than NP to be solved efficiently ((Brun, T. A. and Wilde, Mark M. (2010). “Perfect state distinguishability and computational speedups with postselected closed timelike curves”. eprint arXiv:1008.0433.)) ((Aaronson, S. and Watrous, J. (2009). Closed timelike curves make quantum and classical computing equivalent. Proc. R. Soc. A 465:631-647. eprint arXiv:0808.2669.)) ((Bacon, D. (2004). Quantum Computational Complexity in the Presence of Closed Timelike Curves. Phys. Rev. A 70:032309. eprint arXiv:quant-ph/0309189.)) ((Brun, T. A. (2003). Computers with closed timelike curves can solve hard problems. Found. Phys. Lett. 16:245-253. eprint arXiv:gr-qc/0209061.)) and it supposedly allows nonorthogonal quantum states to be perfectly distinguished ((ref:2)) ((Brun, Todd A., Harrington, J. and Wilde, M. M. (2009). “Localized closed timelike curves can perfectly distinguish quantum states”. Phys. Rev. Lett. 102:210402. eprint arXiv:0811.1209.)). 
These claims are based on two different models for quantum time-travel, one due to David Deutsch ((Deutsch, D. (1991). “Quantum mechanics near closed timelike lines”. Phys. Rev. D 44:3197—3217.)) and one due to a multitude of independent authors based on post-selected teleportation (this paper ((Lloyd, S. et. al. (2010). “The quantum mechanics of time travel through post-selected teleportation”. eprint arXiv:1007.2615)) does a good job of the history in the introduction).

In this post, I am going to give a basic introduction to the physics of time-travel. In later posts, I will explain the Deutsch and teleportation-based models and evaluate the information processing claims that have been made about them. What is most interesting to me about this whole topic is that the correct model for time travelling quantum systems, and hence their information processing power, seems to depend sensitively on both the formalism and the interpretation of quantum theory that is adopted ((I should mention that Joseph Fitzsimons (@jfitzsimons) disagreed with this statement in our Twitter conversations on this subject, and no doubt many physicists would too, but I hope to convince you that it is correct by the end of this series of posts.)). For this reason, it is a useful test-bed for ideas in quantum foundations.

Basic Concepts of Time-Travel

Everyone is familiar with the sensation of time-travel into the future. We all do it at a rate of one second per second every day of our lives. If you would like to speed up your rate of future time travel, relative to Earth, then all you have to do is take a space trip at a speed close to the speed of light. When you get back, a lot more time will have elapsed on Earth than you will have experienced on your journey. This is the time-dilation effect of special relativity. Therefore, the problem of time-travel into the future is completely solved in theory, although in practice you would need a vast source of energy in order to accelerate yourself fast enough to make the effect significant. It also causes no conceptual problems for physics, since we have a perfectly good framework for quantum theories that are compatible with special relativity, known as quantum field theory.

On the other hand, time travel into the past is a much more tricky and conceptually interesting proposition. For one thing, it seems to entail time-travel paradoxes, such as the grandfather paradox where you go back in time and kill your grandfather before your parents were born, so that you are never born, so that you cannot go back in time and kill your grandfather, so that you are born, so that you can go back in time and kill your grandfather etc. (see this article for a philosophical and physics-based discussion of time travel paradoxes). For this reason, many physicists are highly sceptical of the idea that time travel into the past is possible. However, General Relativity (GR) provides a reason to temper our skepticism.

Closed Timelike Curves in GR

It has been well-known for a long time that GR admits solutions that include closed timelike curves (CTCs), i.e. world-lines that return to their starting point and loop around. If you happened to be travelling along a CTC then you would eventually end up in the past of where you started from. Actually, it is a bit more complicated than that because the usual notions of past and future do not really make sense on a CTC. However, imagine what it would look like to an observer in a part of the universe that respects causality in the usual sense. First of all, she would see you appear out of nowhere, claiming to have knowledge of events that she regards as being in the future. Some time later she would see you disappear out of existence. From her perspective it certainly looks like time-travel into the past. What things would feel like from your point of view is more of a mystery, as the notion of a CTC makes a mockery of our usual notion of “now”, i.e. it is a fundamentally block-universe construct.

The possibility of CTCs in GR was first noticed by Willem van Stockum in 1937 ((Stockum, W. J. van (1937). “The gravitational field of a distribution of particles rotating around an axis of symmetry”. Proc. Roy. Soc. Edinburgh A 57: 135.)) and later by Kurt Gödel in 1949 ((Kurt Gödel (1949). “An Example of a New Type of Cosmological Solution of Einstein’s Field Equations of Gravitation”. Rev. Mod. Phys. 21: 447.)). Perhaps the most important solution that incorporates CTCs is the Kerr vacuum, which is the solution that describes an uncharged rotating black hole. Since most black holes in the universe are likely to be rotating, there is a sense in which one can say that CTCs are generic. The caveat is that the CTCs in the Kerr vacuum only occur in the interior of the black hole so that the physics outside the event horizon respects causality in the usual sense. Many physicists believe that the CTCs in the Kerr vacuum are mathematical artifacts, which will perhaps not occur in a full theory of quantum gravity. Nevertheless, the conceptual possibility of CTCs in General Relativity is a good reason to look at their physics more closely.

There have been a few attempts to look for solutions of GR that incorporate CTCs that a human being would actually be able to travel along without getting torn to pieces. This is a bit beyond my current knowledge, but, as far as I am aware, all such solutions involve large quantities of negative energy, so they are unlikely to exist in nature and it is unlikely that we can construct them artificially. For this reason, CTCs are currently more of a curiosity for foundationally inclined physicists like myself than they are a practical method of time-travel.

Other Retrocausal Effects in Physics

Apart from GR, other forms of backwards-in-time, or retrocausal, effect have been proposed in physics from time to time. For example, there is the Wheeler-Feynman absorber theory of electrodynamics, which postulates a backwards-in-time propagating field in addition to the usual forwards-in-time propagating field, and Feynman also postulated that positrons might be electrons travelling backwards in time. There is also Cramer’s transactional interpretation of quantum theory ((Cramer, J. G. (1986). “The transactional interpretation of quantum mechanics”. Rev. Mod. Phys. 58:647-687.)), which does a similar thing with quantum wavefunctions, and the distinct, but conceptually similar, two-state vector formalism of Aharonov and collaborators ((Aharonov, Y. and Vaidman, L. (2001). “The Two-State Vector Formalism of Quantum Mechanics: An Updated Review”. in “Time in Quantum Mechanics”, Muga, J. G., Sala Mayato, R. and Egusquiza, I. L. eprint arXiv:quant-ph/0105101.)). Finally, retrocausal influences have been suggested as a mechanism to reproduce the violations of Bell-inequalities in quantum theory without the need for Lorentz-invariance violating nonlocal influences ((For example, see Price, H. (1997). “Time’s Arrow and Archimedes’ Point”. OUP.)).

However, none of these proposals is as compelling an argument for taking the physics of time-travel into the past seriously as the existence of CTCs in General Relativity. This is because none of these theories provides a method for exploiting the retrocausal effect to actually travel back in time. Also, in each case, there is an alternative approach to the same phenomena that does not involve retrocausal influences. Nevertheless, it is possible that the models to be discussed have applications to these alternative approaches to physics.

Consistency Constraints and The Interpretation of Quantum Theory

Any viable theory of time travel into the past has to rule out things like the grandfather paradox. Consistency conditions have to be imposed on any physical model so that time-travel cannot be used to alter the past. This raises interesting questions about free will, e.g. what exactly stops someone from freely deciding to pull the trigger on their grandfather? Whilst these questions are philosophically interesting, physicists are more inclined to just lay out the mathematics of consistency and see what it leads to. The different models of quantum time travel are essentially just different methods of imposing this sort of consistency constraint on quantum systems.
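To give a flavour of what such a mathematical consistency constraint can look like, here is a hedged toy sketch of one well-known proposal (Deutsch's fixed-point model, which is just one of the models alluded to above, not a commitment of this post): the state of a qubit entering a CTC must equal the state leaving it, \(\rho = \mathcal{N}(\rho)\), where \(\mathcal{N}\) is the evolution around the loop. For a "grandfather paradox" loop in which the older self flips the younger self, the unique consistent state turns out to be the maximally mixed one. The averaging trick used to locate the fixed point numerically is my own illustrative choice.

```python
import numpy as np

# Toy consistency condition for a qubit on a CTC (Deutsch-style):
# the state must be a fixed point of the trip around the loop,
#     rho = N(rho).
# "Grandfather paradox" loop: the older self flips the younger self,
# so N(rho) = X rho X, and no pure state can be consistent.

X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli X (bit flip)

def loop_map(rho):
    # One trip around the CTC: the qubit gets flipped.
    return X @ rho @ X.conj().T

def consistent_state(channel, iters=200):
    """Locate a fixed point rho = channel(rho) by averaging iterates
    (a Cesaro mean; a simple numerical trick, not part of the model)."""
    rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start from |0><0|
    avg = rho.copy()
    for _ in range(iters):
        rho = channel(rho)
        avg += rho
    return avg / (iters + 1)

rho_star = consistent_state(loop_map)
print(np.round(rho_star.real, 2))  # approximately I/2: maximally mixed
```

The point of the toy is that the consistency constraint has solutions even for paradoxical dynamics, but only at the price of forcing the time-travelling system into a mixed state.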

That is pretty much it for the basic introduction, but I want to leave you with a quick thought experiment to illustrate the sort of quantum foundational issues that come up when considering time-travel into the past. Suppose you prepare a spin-\(\frac{1}{2}\) particle in a spin up state in the z direction and then measure it in the x direction, so that it has a 50-50 chance of giving the spin up or spin down outcome. After observing the outcome you jump onto a CTC, travel back into the past and watch yourself perform the experiment again. The question is, would you see the experiment have the same outcome the second time around?

A consistency condition for time travel has to say something like "the full ontic state (state of things that exist in reality) of the universe must be the same the second time round as it was the first time round", albeit that your subjective position within it has changed. If you believe, as many-worlds supporters do, that the quantum wavefunction is the complete description of reality then it, and only it, must be the same the second time around. Therefore, it must be the case that the probabilities are still 50-50 and you could see either outcome. This is not inconsistent because the many-worlds supporters believe that both outcomes happened the first time round in any case. If you are a Bohmian then the ontic state includes the positions of all particles in addition to the wavefunction and these, taken together, can be used to determine the outcome of the experiment uniquely. Therefore, a Bohmian must believe that the measurement outcome has to be the same the second time around. Finally, if you are some sort of anti-realist neo-Copenhagen type then it is not clear exactly what you believe, but, then again, it is not clear exactly what you believe even when there is no time-travel.
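The contrast between the two realist answers can be caricatured in a few lines of code. This is only a cartoon, assuming a made-up deterministic hidden-variable rule for illustration: in a psi-complete ontology the replay is a fresh 50-50 draw, whereas in a hidden-variable ontology the full ontic state fixes the outcome, so the replay must agree with the first run.

```python
import random

def outcome_state_complete(rng):
    # Wavefunction-only ontology: each run is an independent 50-50 draw.
    return rng.choice(["up", "down"])

def outcome_hidden_variable(lam):
    # Toy hidden-variable rule (purely illustrative, not Bohmian
    # dynamics): the ontic variable lam fixes the outcome.
    return "up" if lam < 0.5 else "down"

rng = random.Random(42)

# Psi-complete: the second time around may or may not match.
first_mw = outcome_state_complete(rng)
replay_mw = outcome_state_complete(rng)

# Hidden variable: the ontic state is the same on the replay,
# so the outcome is forced to be the same.
lam = rng.random()  # ontic state, fixed before the loop
first_hv = outcome_hidden_variable(lam)
replay_hv = outcome_hidden_variable(lam)
assert first_hv == replay_hv
```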

There are some subtleties in these arguments. For example, it is not clear what happens to the correlations between you and the observed system when you go around the causal loop. If they still exist then this may restrict the ability of the earlier version of you to prepare a pure state. On the other hand, perhaps they get wiped out or perhaps your memory of the outcome gets wiped. The different models for the quantum physics of CTCs differ on how they handle this sort of issue, and this is what I will be looking at in future posts. If you have travelled along a CTC and happen to have brought a copy of these future posts with you then I would be very grateful if you could email them to me because that would be much easier for me than actually writing them.

‘Till next time!

References