What can decoherence do for us?

OK, so it’s time for the promised post about decoherence, but where to begin? Decoherence theory is now a vast subject with an enormous literature covering a wide variety of physical systems and scenarios. I will not deal with everything here, but just make some comments on how the theory looks from my point of view about the foundations of quantum theory. Alexei Grinbaum pointed me to a review article by Maximilian Schlosshauer on the role of decoherence in solving the measurement problem and in interpretations of quantum theory. That’s a good entry into the literature for people who want to know more.

Let me start by defining two problems that I take to be at the heart of understanding quantum theory:

1) The emergence of classicality: Our most fundamental theories of the world are quantum mechanical, but the world appears classical to us at the everyday level. Explain why we do not find ourselves making mistakes in using classical theories to make predictions about the everyday world of experience. By this I mean not only classical dynamics, but also classical probability theory, information theory, computer science, etc.

2) The ontology problem: The mathematical formalism of quantum theory provides an algorithm for computing the probabilities of outcomes of measurements made in experiments. Explain what things exist in reality and what laws they obey in such a way as to account for the correctness of the predictions of the theory.

I take these to be the fundamental challenges of understanding quantum mechanics. You will note that I did not mention the measurement problem, Schroedinger’s cat, or the other conventional ways of expressing the foundational challenges of quantum theory. This is because, as I have argued before, these problems are not interpretation-neutral. Instead, one begins with something like the orthodox interpretation and shows that unitary evolution and the measurement postulates come into apparent conflict within that interpretation, depending on whether we choose to view the measuring apparatus as a physical system obeying quantum theory or to leave it unanalysed. The problems with this are twofold:

i) It is not the case that we cannot solve the measurement problem. Several solutions exist, such as the account given by Bohmian mechanics, that of Everett/many-worlds, etc. The fact that there is more than one solution, and that none of them have been found to be universally compelling, indicates that it is not solving the measurement problem per se that is the issue. You could say that it is solving the measurement problem in a compelling way that is the issue, but I would say it is better to formulate the problem in such a way that it is obvious how it applies to each of the different interpretations.

ii) The standard way of describing the problems essentially assumes that the quantum state-vector corresponds more or less directly to whatever exists in reality, and that it is in fact all that exists in reality. This is an assumption of the orthodox interpretation, so we are talking about a problem with the standard interpretation and not with quantum theory itself. Assuming the reality of the state-vector simply begs the question. What if it does not correspond to an element of reality, but is just an epistemic object with a status akin to a probability distribution in classical theories? This is an idea that I favor, but now is not the time to go into detailed arguments for it. The mere fact that it is a possibility, and is taken seriously by a significant section of the foundations community, means that we should try to formulate the problems in a language that is independent of the ontological status of the state-vector.

Given this background viewpoint, we can now ask to what extent decoherence can help us with 1) and 2), i.e. the emergence and ontology problems. Let me begin with a very short description of what decoherence is in this context. The first point is that it takes seriously the idea that quantum systems, particularly the sort that we usually describe as “classical”, are open, i.e. interact strongly with a large environment. Correlations between system and environment are typically established very quickly in some particular basis, determined by the form of the system-environment interaction Hamiltonian, so that the density matrix of the system quickly becomes diagonal in that basis. Furthermore, the basis in which the correlations exist is stable over a very long period of time, which can typically be much longer than the lifetime of the universe. Finally, for many realistic Hamiltonians and a wide variety of systems, the decoherence basis corresponds very well to the kind of states we actually observe.
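
To make the mechanism a little more concrete, here is the standard schematic (my own gloss, not tied to any particular model). An interaction that singles out the system basis $\{|s_i\rangle\}$ builds up system-environment correlations of the form

$\left(\sum_i c_i |s_i\rangle\right)|E_0\rangle \;\longrightarrow\; \sum_i c_i |s_i\rangle |E_i(t)\rangle,$

so that the reduced density matrix of the system is

$\rho_S(t) = \sum_{i,j} c_i c_j^* \,\langle E_j(t)|E_i(t)\rangle\, |s_i\rangle\langle s_j|.$

Because the conditional environment states rapidly become nearly orthogonal, $\langle E_j(t)|E_i(t)\rangle \approx \delta_{ij}$, the off-diagonal terms are suppressed and $\rho_S$ ends up approximately diagonal in the basis that the interaction picks out.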

From my point of view, the short answer to the question of what role decoherence plays in foundations is that it provides a good framework for addressing emergence, but has almost nothing to say about ontology. The reason for saying that should be clear: we have a good correspondence with our observations, but at no point in my description of decoherence did I find it necessary to mention a reality underlying quantum mechanics. Having said that, a couple of caveats are in order. Firstly, decoherence can do much more if it is placed within a framework with a well-defined ontology. For example, in Everett/many-worlds, the ontology is the state-vector, which always evolves unitarily and never collapses. The trouble with this is that the ontology doesn’t correspond to our subjective experience, so we need to supplement it with some account of why we see collapses, definite measurement outcomes, etc. Decoherence theory does a pretty good job of this by providing us with rules to describe that subjective experience, i.e. we will experience the world relative to the basis that decoherence theory picks out. However, the point here is that the work is not being done by decoherence alone, as some physicists claim, but also by a nontrivial ontological assumption about the state-vector. As I remarked earlier, the latter is itself a point of contention, so it is clear that decoherence alone does not provide a complete solution.

The second caveat is that some people, including Max Schlosshauer in his review, would argue that the need to answer the ontology question can plausibly be denied altogether. So long as we can account for our subjective experience in a compelling manner, why should we demand any more of our theories? The idea is then that decoherence can solve the emergence problem, and then we are done because the ontology problem need not be solved at all. One could argue for this position, but to do so is thoroughly wrongheaded in my opinion, and this is so independently of my conviction that physics is about trying to describe a reality that goes beyond subjective experience. The simple point is that someone who takes this view seriously really has no need for decoherence theory at all. Firstly, if we are not assigning ontological status to anything, let alone the state-vector, then we are free to collapse it, uncollapse it, evolve it, swing it around our heads, or do anything else we like with it. After all, if it is not supposed to represent anything existing in reality, then there need not be any physical consequences for reality of any mathematical manipulation, such as a projection, that we might care to perform. The second point is that if we are prepared to give a privileged status to observers in our physical theories, by saying that physics needs to describe their experience and nothing more, then we can simply say that the collapse is a subjective property of the observer’s experience and leave it at that. We already have privileged systems in our theory on this view, so what extra harm could that do?

Of course, I don’t subscribe to this viewpoint myself, but on both views described so far, decoherence theory either needs to be supplemented with an ontology, or is not needed at all for addressing foundational issues.

Finally, I want to make a couple of comments about how odd the decoherence solution looks from my particular point of view as a believer in the epistemic nature of wavefunctions. The first is that, from this point of view, the decoherence solution appears to have things backwards. When constructing a classical probabilistic theory, we first identify the ontological entities, e.g. particles that have definite trajectories, and describe their dynamics, e.g. Hamilton’s equations. Only then do we introduce probabilities and derive the corresponding probabilistic theory, e.g. Liouville mechanics. Decoherence theory does things in the other direction, starting from Schroedinger mechanics and then seeking to define the states of reality in terms of the probabilistic object, i.e. the state-vector. Whilst this is not obviously incorrect, since we don’t necessarily have to do things the same way in classical and quantum theories, it does seem a little perverse from my point of view. I’d rather start with an ontology and derive the fact that the state-vector is a good mathematical object for making probabilistic predictions, instead of the other way round.
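
Just to put the two orders of construction side by side (nothing here beyond the standard equations): classically, the ontology is a phase-space point $(q,p)$ obeying Hamilton’s equations, and probability is layered on top afterwards via the Liouville equation

$\frac{\partial \rho(q,p,t)}{\partial t} = \{H, \rho\},$

whereas the decoherence programme starts from the quantum analogue of the probabilistic level, the Schroedinger/von Neumann equation

$i\hbar \frac{\partial \rho}{\partial t} = [H, \rho],$

and then tries to read the states of reality off the decohered $\rho$, rather than positing them underneath it.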

The second comment concerns an analogy between the emergence of classicality in QM and the emergence of the second law of thermodynamics from statistical mechanics. For the latter, we have a multitude of conceptually different approaches, which all arrive at somewhat similar results from a practical point of view. For a state-vector epistemist like myself, the interventionist approach to statistical mechanics seems very similar to the decoherence approach to the emergence problem in QM. Both say that the respective problems cannot be solved by looking at a closed Hamiltonian system, but only by considering interaction with a somewhat uncontrollable environment. In the case of stat-mech, this is used to explain the statistical fluctuations observed in what would be an otherwise deterministic system. The introduction of correlations between system and environment is the mechanism behind both processes. Somewhat bizarrely, most physicists currently prefer closed-system approaches to the derivation of the second law, based on coarse-graining, but prefer the decoherence approach when it comes to the emergence of classicality from quantum theory. Closed-system approaches have the advantage of being applicable to the universe as a whole, where there is no external environment to rely on. However, apart from special cases like this, one can broadly say that the two types of approach are complementary for stat mech, and neither has a monopoly on explaining the second law. It is then natural to ask whether closed-system approaches to emergence in QM are available making use of coarse-graining, and whether they ought to be given equal weight to the decoherence explanation. Indeed, such arguments have been given – here is a recent example, which has many precursors too numerous to go through in detail. I myself am thinking about a similar kind of approach at the moment. Right now, such arguments have a disadvantage compared to decoherence in that the “measurement basis” has to be put in by hand, rather than emerging from the physics as in decoherence. However, further work is needed to determine whether this is an insurmountable obstacle.

In conclusion, decoherence theory has done a lot for our understanding of the emergence of classicality from quantum theory. However, it does not solve all the foundational questions about quantum theory, at least not on its own. Further, its importance may have been overemphasized by the physics community, because other, less-developed approaches to emergence could turn out to be of equal importance.

What can decoherence do for us? by Matthew Leifer, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-Noncommercial 3.0 Unported License.

11 Responses to What can decoherence do for us?

  1. Naive questions, exposing my ignorance…are there more or less realistic models demonstrating in detail how the environment chooses a basis via decoherence? apologies if this is in the review you refer to.

    Re: ontology, I would feel more comfortable if I knew any operational way of deciding whether any specific ontology is “correct”, or even a reason to expect that a fully satisfactory solution exists…

    thanks,

    Moshe

  2. Moshe said, “are there more or less realistic models demonstrating in detail how the environment chooses a basis via decoherence? apologies if this is in the review you refer to.”

    Yes there are. That’s part of the technical programme and is probably the most substantial contribution of decoherence theory to foundational studies. The review article focusses mainly on conceptual/interpretational issues, but it does discuss one model by Zurek as an example, which fits into the “less realistic” category. However, you can find references to the extensive technical literature in the review.
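
    To give a flavour of how basis selection works in the simplest case, here is a small toy calculation of my own (a generic pure-dephasing spin-environment model in the spirit of Zurek’s example, not the specific model treated in the review). A qubit coupled to $N$ environment spins through $\sigma_z \otimes \sigma_z$ interactions has the off-diagonal element of its reduced density matrix multiplied by a decoherence factor $r(t) = \prod_k \cos(2 g_k t)$, which becomes negligible after a very short time once $N$ is large, while the diagonal elements in the $\sigma_z$ basis are untouched.

```python
import numpy as np

# Toy pure-dephasing model: one qubit coupled to N environment spins through
# H_int = sigma_z (x) sum_k g_k sigma_z^(k), with every environment spin
# prepared in |+>.  The qubit's off-diagonal element gets multiplied by the
# exactly solvable decoherence factor r(t) = prod_k cos(2 g_k t).
# (Illustrative sketch only; the couplings g_k are drawn at random.)

rng = np.random.default_rng(seed=0)

def decoherence_factor(t, couplings):
    """Magnitude of r(t) for the pure-dephasing spin-environment toy model."""
    return np.abs(np.prod(np.cos(2.0 * couplings * t)))

N = 50                              # number of environment spins
g = rng.uniform(0.5, 1.5, size=N)   # random coupling strengths

for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(f"t = {t:4.1f}   |r(t)| = {decoherence_factor(t, g):.3e}")

# The coherence between the two sigma_z eigenstates is suppressed almost
# immediately, while the populations in the sigma_z basis never change:
# sigma_z is the pointer basis picked out by this interaction Hamiltonian.
```

    Of course, this toy model has the pointer basis built into the form of the coupling; the more realistic models in the literature derive it from the competition between the interaction and the internal dynamics, as described below.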

    One of the most impressive achievements is that decoherence can select different types of “basis” depending on the relative strengths of the system-environment and internal system Hamiltonians (I use quotation marks because the states selected can be overcomplete). If the internal Hamiltonian is much stronger, then the energy basis tends to be selected, and if it is much weaker, then something approximating the position basis is often found. In intermediate regimes, things like coherent states can be selected. This corresponds well to what we see in the lab, and has significant explanatory power. The fact that other approaches to emergence cannot yet reproduce this is a significant point in favor of the decoherence explanation.
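
    To state the rule of thumb slightly more precisely (this is just the standard way the criterion is usually put, not a new result): writing the total Hamiltonian as $H = H_S + H_E + H_{int}$, in the quantum-measurement limit where $H_{int}$ dominates, the pointer observables are the ones that approximately commute with the interaction, $[H_{int}, \hat{O}_{pointer}] \approx 0$, and since realistic interactions tend to couple to position, this typically gives approximately localized states. In the opposite limit where $H_S$ dominates, the environment effectively monitors constants of the motion and the energy eigenbasis of $H_S$ is selected. In the intermediate regime, e.g. a harmonic oscillator undergoing quantum Brownian motion, the most robust states turn out to be coherent states.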

    “Re: ontology, I would feel more comfortable if I knew any operational way of deciding whether any specific ontology is “correct”, or even a reason to expect that a fully satisfactory solution exists…”

    I more or less agree with you. A good choice of ontology should eventually lead to an experimental prediction that it would have been inconceivable to make without it. This might not happen in current QM itself, but perhaps in future theories like quantum gravity. However, in this post, I left the question of what constitutes a “good” ontology open, because different people have different requirements. For example, one could weaken your operational requirement and just say that the ontology should have explanatory power, i.e. be a good way of thinking about the theory that aids intuition. I also don’t have a big problem with having more than one ontology available, so long as you can account for the terms of one ontology consistently in terms of the other. Arguably, we have this situation in classical mechanics because the different formulations, e.g. Newton’s laws, Hamiltonian mechanics, Lagrangian mechanics, etc., suggest taking different quantities as the fundamental entities of the theory. On the other hand, we shouldn’t end up with competing ontologies that give radically different pictures of what is going on in reality and appear to be genuinely irreconcilable. Arguably, this is the case with current interpretations of QM.

  3. The emergence of the second law can be (partially) tested and understood through the fluctuation theorems regarding probabilities of second law violations. This is a naive question, but is there a set of such statements regarding the role of decoherence in quantum systems as they are made larger? And have people studied the transition from quantum to classical as the system size/structure is varied?

  4. The short answer is yes, although I’m no expert on this particular topic at the moment. I think one can view the remnants of interference effects on timescales shorter than the decoherence time as an effect somewhat analogous to fluctuations from the 2nd law. Having said that, I don’t think the analogies between emergence of classicality and emergence of the second law have received sufficient attention from theorists, so there are probably more precise statements that can be made. It’s something I’m interested in pursuing in the future.

  5. Pingback: Anonymous

  6. Pingback: Foundations at APS, take 2 « Quantum Quandaries

  7. I’m an Italian student.
    Thank you for this post. Since I started studying quantum mechanics (2 years ago), I have thought that something must be hidden beneath its probabilistic interpretation. It is exactly the “ontology problem” that has always made QM unsatisfactory for me. I’m grateful to you because finally I am not the only one to think that (in fact, in my university it seemed so), and now I know the name of the field that studies it (so I can look for reviews). I apologize for my English.
    Thanks,

    Dimitri
    random3f

  8. Hi Matt, thanks for the very interesting read!

    Two questions pop into mind.

    Firstly, I’ve never met someone who takes an epistemic view of the wavefunction before and am interested in understanding what the motivation for that view is. How does the epistemicist account for interference effects, for example, in the double slit experiment (which I would have thought empirically disconfirms the epistemic view)?

    Secondly, given that you’re interested in the ontological issues, I wonder if you’re able to describe how decoherence solves the emergence problem in a more ontological manner.
    I’ve never seen anyone do that before – every attempt to describe the physical process of decoherence that I’ve seen looks more like a description of a mathematical process being undertaken by mathematical entities up in Plato’s heaven!
    To be honest, I don’t understand how decoherence solves the emergence problem because I don’t understand what connection the mathematical process has to the purported physical process.
    For example, when one speaks of bases, one is speaking of forms of representation (formalism), not things represented (ontology). The position basis, for example, constitutes a way of writing down the physical state of a system. But I could equally write that state down in the momentum basis. Bases are therefore linguistic/mathematical entities, and which basis I choose to use to write down the physical state of a system is of no physical significance.
    Now, when someone says something like…

    “One of the most impressive achievements is that decoherence can select different types of “basis” depending on the relative strengths of the system-environment and internal system Hamiltonians”

    …I don’t entirely know what to make of it. What does it mean to say that a physical process (decoherence) can select different sorts of mathematical/linguistic entities (bases)? At the risk of being uncharitable, it almost looks like some sort of “use-mention confusion”.

    I don’t see how the emergence problem is solved until one can describe, in physical vocabulary, how physical systems undergoing decoherence manifest classical properties. I would be interested in your thoughts on the matter.

  9. The motivation for the epistemic view is based on a few things, which are emphasized to different degrees by different authors, including:

    • The formalism of quantum theory can be viewed as a noncommutative generalization of probability theory. Therefore, it is natural to interpret the states in the same way as probability distributions, which are their commutative special case.
    • The measurement problem disappears in the epistemic view because there is no problem with discontinuous changes in an epistemic state, i.e. measurement-update is just like Bayes’ rule in classical probability (the two update rules are set side by side just after this list). Also, there is no problem with having two different states depending on whether you describe the measurement unitarily or use the measurement postulates. The two results just represent the epistemic states of agents with different perspectives, i.e. one who has not seen the outcome and one who has.
    • Many phenomena of quantum theory make more sense in the epistemic view. Search the arXiv for Rob Spekkens’ papers on his “toy theory” for lots of examples of this.
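
    To spell out the analogy in the second bullet, here are the two update rules side by side (just the standard formulas, nothing specific to any particular epistemic model). Classically, on learning outcome $k$ an agent updates via Bayes’ rule,

    $p(\lambda) \;\mapsto\; p(\lambda|k) = \frac{p(k|\lambda)\,p(\lambda)}{\sum_{\lambda'} p(k|\lambda')\,p(\lambda')},$

    while in quantum theory the state-update rule for measurement operators $\{M_k\}$, with $\sum_k M_k^\dagger M_k = I$, is

    $\rho \;\mapsto\; \frac{M_k \rho M_k^\dagger}{\mathrm{Tr}[M_k \rho M_k^\dagger]}.$

    On the epistemic view, the discontinuity in the second map is no more alarming than the discontinuity in the first: both represent a change in an agent’s information rather than a physical collapse.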

    Although many people think that interference kills the epistemic view, this is not correct because there are epistemic toy theories, such as the one by Spekkens, that do exhibit interference. The mistake here is to have a too limited view of what the possible ontology underlying quantum theory can be. If you think it has to be particles having definite positions that only go through a single slit then you’ll end up with something like Bohmian mechanics which has ontological wavefunctions. However, there is no reason to believe that there can’t be some sort of wave-like influence that goes through both slits, with the proviso that that influence must contain far less information than the full wavefunction so that it can still be viewed as epistemic and we retain the ability to solve the measurement problem. Spekkens has a version of his theory for photons in an interferometer that demonstrates this, but I don’t think it has been published yet. Still, if you read the main “toy theory” paper you can probably work it out.

    Regarding emergence, I would probably phrase things a bit differently if I were writing this post again. You can certainly end up with equations that look classical if you apply decoherence in an appropriate way and look in the Wigner function picture, for example. However, as you correctly point out, those are just formal derivations and you need to give them a physical interpretation to understand whether you really have an emergence of classicality. To do this, you need to add an ontology, but it turns out that most of the ontologies that have been considered end up relying on precisely these formal derivations to get emergence. Perhaps the best worked out example is in the Everett interpretation, where you can look at the long papers by David Wallace to find out how decoherence leads to emergence in that case. There is no new maths in these papers, but they provide the necessary philosophical support that you are looking for. Bohmian mechanics is somewhat similar in that it needs decoherence in order to make the trajectories follow their classical counterparts in a stable manner, and again there is no new maths involved in understanding this. Therefore, I guess what I was trying to say is that we seem to understand the broad outline of how classicality emerges, with the proviso that the meaning attached to that understanding is ontology dependent. I do not know if and how this works out in the epistemic picture, since we don’t have a full ontological model of quantum theory that satisfies the epistemic criteria yet. It is more of a work in progress.

  10. Pingback: I Hope I’m Being Coherent And A Little Less Incoherent About Decoherence, Playa… | Yeah, Man…
