
What can decoherence do for us?

OK, so it’s time for the promised post about decoherence, but where to begin? Decoherence theory is now a vast subject with an enormous literature covering a wide variety of physical systems and scenarios. I will not deal with everything here, but just make some comments on how the theory looks from my point of view about the foundations of quantum theory. Alexei Grinbaum pointed me to a review article by Maximilian Schlosshauer on the role of decoherence in solving the measurement problem and in interpretations of quantum theory. That’s a good entry into the literature for people who want to know more.

OK, let me start by defining two problems that I take to be at the heart of understanding quantum theory:

1) The Emergence of Classicality: Our most fundamental theories of the world are quantum mechanical, but the world appears classical to us at the everyday level. Explain why we do not find ourselves making mistakes in using classical theories to make predictions about the everyday world of experience. By this I mean not only classical dynamics, but also classical probability theory, information theory, computer science, etc.

2) The Ontology Problem: The mathematical formalism of quantum theory provides an algorithm for computing the probabilities of outcomes of measurements made in experiments. Explain what things exist in reality and what laws they obey in such a way as to account for the correctness of the predictions of the theory.

I take these to be the fundamental challenges of understanding quantum mechanics. You will note that I did not mention the measurement problem, Schroedinger’s cat, or the other conventional ways of expressing the foundational challenges of quantum theory. This is because, as I have argued before, these problems are not interpretation neutral. Instead, one begins with something like the orthodox interpretation and shows that unitary evolution and the measurement postulates are in apparent conflict within that interpretation depending on whether we choose to view the measuring apparatus as a physical system obeying quantum theory or to leave it unanalysed. The problems with this are twofold:

i) It is not the case that we cannot solve the measurement problem. Several solutions exist, such as the account given by Bohmian mechanics, that of Everett/many-worlds, etc. The fact that there is more than one solution, and that none of them have been found to be universally compelling, indicates that it is not solving the measurement problem per se that is the issue. You could say that it is solving the measurement problem in a compelling way that is the issue, but I would say it is better to formulate the problem in such a way that it is obvious how it applies to each of the different interpretations.

ii) The standard way of describing the problems essentially assumes that the quantum state-vector corresponds more or less directly to whatever exists in reality, and that it is in fact all that exists in reality. This is an assumption of the orthodox interpretation, so we are talking about a problem with the standard interpretation and not with quantum theory itself. Assuming the reality of the state-vector simply begs the question. What if it does not correspond to an element of reality, but is just an epistemic object with a status akin to a probability distribution in classical theories? This is an idea that I favor, but now is not the time to go into detailed arguments for it. The mere fact that it is a possibility, and is taken seriously by a significant section of the foundations community, means that we should try to formulate the problems in a language that is independent of the ontological status of the state-vector.

Given this background viewpoint, we can now ask to what extent decoherence can help us with 1) and 2), i.e. the emergence and ontology problems. Let me begin with a very short description of what decoherence is in this context. The first point is that it takes seriously the idea that quantum systems, particularly the sort that we usually describe as “classical”, are open, i.e. they interact strongly with a large environment. Correlations between system and environment are typically established very quickly in some particular basis, determined by the form of the system-environment interaction Hamiltonian, so that the density matrix of the system quickly becomes diagonal in that basis. Furthermore, the basis in which the correlations exist is stable over a very long period of time, which can typically be much longer than the lifetime of the universe. Finally, for many realistic Hamiltonians and a wide variety of systems, the decoherence basis corresponds very well to the kind of states we actually observe.
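To make the mechanism concrete, here is a minimal toy sketch in Python (my own illustration; the coupling angles and environment states are arbitrary choices, not taken from any particular model in the literature). Each environment qubit keeps a partial record of which branch the system is in, and the product of the environment overlaps suppresses the off-diagonal terms of the system’s density matrix in the record-keeping basis:

```python
# Toy decoherence model: a system qubit entangles with N environment qubits,
# and tracing out the environment leaves a reduced density matrix whose
# off-diagonal (coherence) terms decay as N grows.
import numpy as np

rng = np.random.default_rng(1)

alpha = beta = 1 / np.sqrt(2)   # system starts in (|0> + |1>)/sqrt(2)

def reduced_state(n_env):
    # Branch |0> leaves each environment qubit in |e0>, branch |1> in |e1>.
    # The surviving coherence is alpha*beta*<E0|E1>, a product of single-qubit
    # overlaps, so it shrinks exponentially with the size of the environment.
    overlap = 1.0
    for _ in range(n_env):
        theta = rng.uniform(0.0, np.pi)              # arbitrary coupling angle
        e0 = np.array([1.0, 0.0])
        e1 = np.array([np.cos(theta), np.sin(theta)])
        overlap *= e0 @ e1                           # <e0|e1> = cos(theta)
    return np.array([[alpha**2,               alpha * beta * overlap],
                     [alpha * beta * overlap, beta**2]])

for n in (0, 1, 5, 20):
    print(n, abs(reduced_state(n)[0, 1]))  # coherence: 0.5 at n=0, then -> 0
```

With a few tens of environment qubits the coherence is already negligible, which is the toy version of the statement that the diagonal basis is established very quickly and is stable thereafter.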

From my point of view, the short answer to the role of decoherence in foundations is that it provides a good framework for addressing emergence, but has almost nothing to say about ontology.  The reason for saying that should be clear:  we have a good correspondence with our observations, but at no point in my description of decoherence did I find it necessary to mention a reality underlying quantum mechanics.  Having said that, a couple of caveats are in order. Firstly, decoherence can do much more if it is placed within a framework with a well defined ontology. For example, in Everett/many-worlds, the ontology is the state-vector, which always evolves unitarily and never collapses. The trouble with this is that the ontology doesn’t correspond to our subjective experience, so we need to supplement it with some account of why we see collapses, definite measurement outcomes, etc. Decoherence theory does a pretty good job of this by providing us with rules to describe this subjective experience, i.e. we will experience the world relative to the basis that decoherence theory picks out. However, the point here is that the work is not being done by decoherence alone, as claimed by some physicists, but also by a nontrivial ontological assumption about the state-vector. As I remarked earlier, the latter is itself a point of contention, so it is clear that decoherence alone is not providing a complete solution.

The second caveat is that some people, including Max Schlosshauer in his review, would plausibly deny the need to answer the ontology question at all. So long as we can account for our subjective experience in a compelling manner, why should we demand any more of our theories? The idea is that decoherence solves the emergence problem, and then we are done, because the ontology problem need not be solved at all. One could argue for this position, but in my opinion it is thoroughly wrongheaded, and this is so independently of my conviction that physics is about trying to describe a reality that goes beyond subjective experience. The simple point is that someone who takes this view seriously really has no need for decoherence theory at all. Firstly, if we are not assigning ontological status to anything, let alone the state-vector, then you are free to collapse it, uncollapse it, evolve it, swing it around your head or do anything else you like with it. After all, if it is not supposed to represent anything existing in reality, then there need not be any physical consequences for reality of any mathematical manipulation, such as a projection, that you might care to do. The second point is that if we are prepared to give a privileged status to observers in our physical theories, by saying that physics needs to describe their experience and nothing more, then we can simply say that the collapse is a subjective property of the observer’s experience and leave it at that. We already have privileged systems in our theory on this view, so what extra harm could that do?

Of course, I don’t subscribe to this viewpoint myself, but on both views described so far, decoherence theory either needs to be supplemented with an ontology, or is not needed at all for addressing foundational issues.

Finally, I want to make a couple of comments about how odd the decoherence solution looks from my particular point of view as a believer in the epistemic nature of wavefunctions. The first is that, from this point of view, the decoherence solution appears to have things backwards. When constructing a classical probabilistic theory, we first identify the ontological entities, e.g. particles that have definite trajectories, and describe their dynamics, e.g. Hamilton’s equations. Only then do we introduce probabilities and derive the corresponding probabilistic theory, e.g. Liouville mechanics. Decoherence theory does things in the other direction, starting from Schroedinger mechanics and then seeking to define the states of reality in terms of the probabilistic object, i.e. the state-vector. Whilst this is not obviously incorrect, since we don’t necessarily have to do things the same way in classical and quantum theories, it does seem a little perverse from my point of view. I’d rather start with an ontology and derive the fact that the state-vector is a good mathematical object for making probabilistic predictions, instead of the other way round.
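For contrast, the classical pattern can be written in two lines (standard textbook material, included only to fix the direction of the construction). The ontology is a phase-space trajectory obeying Hamilton’s equations, and the probabilistic theory is obtained afterwards by lifting those equations to a distribution rho(q, p, t):

```latex
% Ontology first: deterministic trajectories in phase space
\dot{q} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial q}
% Probabilities second: the same dynamics lifted to a distribution rho(q,p,t)
\frac{\partial \rho}{\partial t} = \{H, \rho\}
  = \frac{\partial H}{\partial q}\frac{\partial \rho}{\partial p}
  - \frac{\partial H}{\partial p}\frac{\partial \rho}{\partial q}
```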

The second comment concerns an analogy between the emergence of classicality in QM and the emergence of the second law of thermodynamics from statistical mechanics. For the latter, we have a multitude of conceptually different approaches, which all arrive at somewhat similar results from a practical point of view. For a state-vector epistemist like myself, the interventionist approach to statistical mechanics seems very similar to the decoherence approach to the emergence problem in QM. Both say that the respective problems cannot be solved by looking at a closed Hamiltonian system, but only by considering interaction with a somewhat uncontrollable environment. In the case of stat-mech, this is used to explain the statistical fluctuations observed in what would otherwise be a deterministic system. The introduction of correlations between system and environment is the mechanism behind both processes. Somewhat bizarrely, most physicists currently prefer closed-system approaches to the derivation of the second law, based on coarse-graining, but prefer the decoherence approach when it comes to the emergence of classicality from quantum theory. Closed-system approaches have the advantage of being applicable to the universe as a whole, where there is no external environment to rely on. However, apart from special cases like this, one can broadly say that the two types of approach are complementary for stat mech, and neither has a monopoly on explaining the second law. It is then natural to ask whether closed-system approaches to emergence in QM are available making use of coarse-graining, and whether they ought to be given equal weight to the decoherence explanation. Indeed, such arguments have been given – here is a recent example, which has many precursors too numerous to go through in detail. I myself am thinking about a similar kind of approach at the moment. Right now, such arguments have a disadvantage compared with decoherence in that the “measurement basis” has to be put in by hand, rather than emerging from the physics as it does in decoherence. However, further work is needed to determine whether this is an insurmountable obstacle.

In conclusion, decoherence theory has done a lot for our understanding of the emergence of classicality from quantum theory. However, it does not settle all the foundational questions about quantum theory, at least not on its own. Further, its importance may have been overemphasized by the physics community, because other, less-developed approaches to emergence could turn out to be of equal importance.

Universitas Magistrorum et Scholarium

I have arrived back in Waterloo to start my new hybrid University/Perimeter Institute position.  It’s been quite a long break from posting, because – strangely enough – having two affiliations means I had to do twice the amount of paperwork to get myself set-up this time.  As much as I loved being at PI, it is nice to be back in a university and to have some small role in educating the next generation of quantum mechanics.

Over the break, Andrew Thomas has left a few comments about the role of decoherence in interpretations of quantum theory in my Professional Jealousy post.  There are some who think that understanding decoherence alone is enough to “solve” the conceptual difficulties with quantum theory.  This is quite a popular opinion in some quarters of the physics community, where one often finds people mumbling something about decoherence when asked about the measurement problem.  However, there are also many deep thinkers on foundations who have denied that decoherence completely solves the problems, and I tend to agree with them, so we’ll have a post on “What can decoherence do for us?” later on this week.

To clarify, I’m not going to argue that decoherence isn’t an important and real physical effect, nor am I going to say that it has no role at all in foundational studies, so please hold your fire until after the next post if you were thinking of commenting to that effect.

Happy Holidays!

As I don’t expect to be able to blog again before the Xmas break, I’d like to wish all readers of QQ a happy whatever-you’re-celebrating.

The holidays are one of those times of year when relatives get the opportunity to ask you, “So, what exactly is it that you do research on?”. This dreaded question will come with certainty, regardless of how many times you have previously explained it to them. It’s not their fault because the average person does not have physics on their mind for any significant amount of time, so it’s easy to forget what it’s all about.

The question is especially bad if you spend any time thinking about the foundations of quantum theory, because it’s difficult to describe quantum theory accurately in a few words. Here’s my best shot at an answer at the moment.

Miscellaneous Relative: So, what is this quantum theory thing all about then?

Me: Well, it’s not exactly about the fact that particles sometimes behave like waves and waves like particles.

MR: Go on.

Me: There is this thing called the Heisenberg uncertainty relation, but strictly speaking it doesn’t say that a measurement of position necessarily disturbs the momentum and vice-versa.

MR: OK.

Me: And it’s definitely not that there are multiple universes.

MR: That’s a shame. I enjoy science fiction, so that was the bit I liked the most.

Me: There are these things called wavefunctions, which can be in superpositions, but it’s not entirely clear what the true significance of that is.

MR: I’m not getting much insight into what you actually do from this by the way.

Me: It seems that John Bell proved that locality and realism are incompatible, but people are still debating the significance of that, so it’s definitely not the whole story either.

MR: Now I really have no clue what you are talking about.

Me: It’s not just about “finding the right language” with which to talk about physics. In particular, I don’t think that revising logic is really the right thing to do.

MR: That sounds sensible enough.

Me: Some people think the whole thing is just about doing something called “solving the measurement problem”, but I don’t think that’s an entirely helpful way of looking at things.

MR: So just what IS the whole thing about then?

Me: That’s the whole question. Welcome to my research programme.

Steane Roller

Earlier, I promised some discussion of Andrew Steane’s new paper: Context, spacetime loops, and the interpretation of quantum mechanics. Whilst it is impossible to summarize everything in the paper, I can give a short description of what I think are the most important points.

  • Firstly, he does believe that the whole universe obeys the laws of quantum mechanics, which are not required to be generalized.
  • Secondly, he does not think that Everett/Many-Worlds is a good way to go because it doesn’t give a well-defined rule for when we see one particular outcome of a measurement in one particular basis.
  • He believes that collapse is a real phenomenon and so the problem is to come up with a rule for assigning a basis in which the wavefunction collapses, as well as, roughly speaking, a spacetime location at which it occurs.
  • For now, he describes collapse as an unanalysed, fundamentally stochastic process that achieves this, but he recognizes that it might be useful to come up with a more detailed mechanism by which it occurs.

Steane’s problem therefore reduces to picking a basis and a spacetime location. For the former, he uses the standard ideas from decoherence theory, i.e. the basis in which collapse occurs is the basis in which the reduced state of the system is diagonal. However, the location of collapse is the really novel part of the proposal, and it makes the proposal more interesting and more bizarre than most of the others I have seen so far.

Firstly, note that the process of collapse destroys the phase information between the system and the environment. Therefore, if the environmental degrees of freedom could ever be gathered together and re-interacted with the system, then QM would predict interference effects that would not be present if a genuine collapse had occurred. Since Steane believes in the universal validity of QM, he has to come up with a way of having a genuine collapse without getting into a contradiction with this possibility.

His first innovation is to assert that the collapse need not be associated to an exactly precise location in spacetime. Instead, it can be a function of what is going on in a larger region of spacetime. Presumably, for events that we would normally regard as “classical” this region is supposed to be rather small, but for coherent evolutions it could be quite large.

The rule is easiest to state for special cases, so for now we will assume that we are talking about particles with a discrete quantum degree of freedom, e.g. spin, but that the position and momentum can be treated classically. Now, suppose we have 3 qubits in the state (|000> + e^i phi |111>)/sqrt(2). The reduced state of the first two qubits is a density operator, diagonal in the basis {|00>, |11>}, with probability 1/2 for each of the two states. The phase e^i phi will only ever be detectable if the third qubit re-interacts with the first two. Whether or not this can happen is determined by the relative locations of the qubits, since the interaction Hamiltonians in nature are local. Since we are treating position and momentum classically at the moment, there is a matter of fact about whether this will occur, and Steane’s rule is simple: if the qubits re-interact in the future then there is no collapse, but if they don’t then the first two qubits have collapsed into the state |00> or the state |11>, with probability 1/2 for each one.
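For readers who want to check the arithmetic, here is a small numerical sketch (my own illustration; the phase value and the choice of re-interaction gate are arbitrary). The reduced state of the first two qubits carries no trace of phi, but a single re-interaction with the third qubit brings the phase back:

```python
# Check of the three-qubit example: reduced state of qubits 1 and 2, before
# and after the third qubit re-interacts.
import numpy as np

phi = 0.7                                    # arbitrary illustrative phase
psi = np.zeros(8, dtype=complex)             # basis ordering |q1 q2 q3>
psi[0b000] = 1 / np.sqrt(2)
psi[0b111] = np.exp(1j * phi) / np.sqrt(2)   # (|000> + e^{i phi}|111>)/sqrt(2)

def rho_first_two(state):
    # Partial trace over the third qubit: reshape into (qubits 1-2, qubit 3).
    m = state.reshape(4, 2)
    return m @ m.conj().T

print(np.round(rho_first_two(psi), 3))
# diag(1/2, 0, 0, 1/2): diagonal in {|00>, |11>} with no dependence on phi, so
# nothing done to the first two qubits alone can reveal the phase.

# Now let the third qubit re-interact: a CNOT with qubit 2 as control and
# qubit 3 as target disentangles the third qubit.
cnot = np.eye(8)[[0, 1, 3, 2, 4, 5, 7, 6]]
print(np.round(rho_first_two(cnot @ psi), 3))
# Off-diagonal terms e^{+/- i phi}/2 reappear: the phase is observable again,
# which is why Steane's rule has to condition on whether re-interaction occurs.
```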

Things are going to get more complicated if we quantize the position and momentum, or indeed if we move to quantum field theory, since then we don’t have definite particle trajectories to work with. It is not entirely clear to me whether Steane’s proposal can be made to work in the general case, and he does admit that further technical work is needed. However, he still asserts that whether or not a system has collapsed at a given point in spacetime is in principle a function of its entire future, i.e. whether or not it will eventually re-interact with the environment it has decohered with respect to.

At this point, I want to highlight a bizarre physical prediction that can be made if you believe Steane’s point of view. Really, it is metaphysics, since the experiment is not at all practical. For starters, the fact that I experience myself being in a definite state rather than a superposition means that there are environmental degrees of freedom that I have interacted with in the past that have decohered me into a particular basis. We can in principle imagine an omnipotent “Maxwell’s demon” type character, who can collect up every degree of freedom I have ever interacted with, bring it all together and reverse the evolution, eliminating me in the process. Whilst this is impractical, there is nothing in principle to stop it happening if we believe that QM applies to the entire universe. However, according to Steane, the very fact that I have a definite experience means that we can predict with certainty that no such interaction happens in the future. If it did, there would be no basis for my definite experience at the moment.

Contrast this with a many-worlds account a la David Wallace. There, the entire global wavefunction still exists, and the fact that I experience the world in a particular basis is due to the fact that only certain special bases, the ones in which decoherence occurs, are capable of supporting systems complex enough to achieve consciousness. There is nothing in this view to rule out the Maxwell’s demon conclusively, although we may note that such a demon is very unlikely to be generated by a natural process, due to the second law of thermodynamics.

Therefore, there is something comforting about Steane’s proposal. If true, my very existence can be used to infer that I will never be wiped out by a Maxwell’s demon. All we need to do to test the theory is to try and wipe out a conscious being by constructing such a demon, which is obviously impractical and also unethical. Needless to say, there is something troubling about drawing such a strong metaphysical conclusion from quantum theory, which is why I still prefer the many-worlds account over Steane’s proposal at the moment. (That’s not to say that I agree with the former either though.)

Real Estate

These days, having a good up-to-date personal website can be as important as having a good CV for an academic. If you are applying for a job, you can be sure that someone on the committee has googled you. Also, if you meet someone at a conference and got them interested in your work, your website is the first place they will look for further details.

Most postdocs know how annoying it can be to have to change jobs constantly, with the associated need to change the location of your website each time. This means that people have to update their links every time you move. Also, the URLs of personal homepages at academic institutions are often long and not very easy to remember, so it would be better to have a permanent, catchy URL for your site. The solution is to buy your own domain name, as I did recently with mattleifer.info. rwspekkens.com and scottaaronson.com are further examples from my colleagues. This has the added advantage of providing me with the email address matt{at}mattleifer.info, which also won’t change when I move. You can buy your own domain name from many companies. I used GoDaddy, one of the largest companies, with a reputation for the cheapest prices (5.99USD per year in my case). Here are some points to bear in mind when buying your domain:

  • A lot of people want a .com domain because these are the most common and easiest to remember. Strictly speaking, .name and .info are more appropriate for personal websites, even though your website may be partly about “selling yourself”. Although these are less common at the moment, their usage should increase in the next few years, so they are worth bearing in mind.
  • After purchasing your domain name you have two options: either you can have the domain name forwarded to your existing website at your institution, or you can have your site hosted on a server elsewhere. If you do the former, you have to comply with any restrictions your institution places on what you can put on your site, and you have to remember to update the forwarding when you change institution. On the plus side, this option is usually free, and it is what I did. External hosting is usually only free if you are prepared to have obtrusive ads on your site, and it can be quite costly, but you do get a choice of different companies with different regulations, so you can find one that will let you put up whatever you want so long as it’s legal. This could be relevant if you want to write applications to run on your site, since your institution may not support the tools you need installed on the server side. If you don’t know what that last sentence is about then it probably doesn’t apply to you and you should just use forwarding.
  • Companies like GoDaddy are cheap, but they will try to extract money from you by upselling. This means they will try to convince you to buy hosting, security features, etc. when you buy your domain name. Work out if you need any of this stuff before you go to the site and investigate how much it costs from other companies. If in doubt, just buying the domain name is probably the best option.

Against Interpretation

It appears that I haven’t had a good rant on this blog for some time, but I have been stimulated into doing so by some of the discussion following the Quantum Pontiff‘s recent post about Bohmian Mechanics. I don’t want to talk about Bohm theory in particular, but to answer the following general question:

  • Just what is the goal of studying the foundations of quantum mechanics?

Before answering this question, note that its answer depends on whether you are approaching it as a physicist, mathematician, philosopher, or religious crank trying to seek justification for your outlandish worldview. I’m approaching the question as a physicist and to a lesser extent as a mathematician, but philosophers may have legitimate alternative answers. Since the current increase of interest in foundations is primarily amongst physicists and mathematicians, this seems like a natural viewpoint to take.

Let me begin by stating some common answers to the question:

1. To provide an interpretation of quantum theory, consistent with all its possible predictions, but free of the conceptual problems associated with orthodox and Copenhagen interpretations.

2. To discover a successor to quantum theory, consistent with the empirical facts known to date, but making new predictions in untested regimes as well as resolving the conceptual difficulties.

Now, let me give my proposed answer:

  • To provide a clear path for the future development of physics, and possibly to take a few steps along that path.

To me, this statement applies to the study of the foundations of any physical theory, not just quantum mechanics, and the success of the strategy has been borne out in practice. For example, consider thermodynamics. The earliest complete statements of the principles of thermodynamics were in terms of heat engines. If you wanted to apply the theory to some physical system, you first had to work out how to think of it as a kind of heat engine before you started. This was often possible, but a rather unnatural thing to do in many cases. The introduction of the concept of entropy eliminated the need to talk about heat engines and allowed the theory to be applied to virtually any macroscopic system. Further, it facilitated the discovery of statistical mechanics. The formulation in terms of entropy is formally mathematically equivalent to the earlier formulations, and thus it might be thought superfluous to requirements, but in hindsight it is abundantly clear that it was the best way of looking at things for the progress of physics.

Let’s accept my answer to the foundational question for now and examine what becomes of the earlier answers. I think it is clear that answer 2 is consistent with my proposal, and is a legitimate task for a physicist to undertake. For those who wish to take that road, I wish you the best of luck. On the other hand, answer 1 is problematic.

Earlier, I wrote a post about criteria that a good interpretation should satisfy. Now I would like to take a step back from that and urge the banishment of the word interpretation entirely. The problem with 1 is that it ring-fences the experimental predictions of quantum theory, so that the foundational debate has no impact on them at all. This is the antithesis of the approach I advocate, since on my view foundational studies are supposed to feed back into improved practice of the theory. I think that the separation of foundations and practice did serve a useful role in the historical development of quantum theory, since rapid progress required focussing attention on practical matters, and the time was not ripe for detailed foundational investigations. For one thing, experiments that probe the weirder aspects of quantum theory were not possible until the last couple of decades. It can also serve a useful role for a subsection of the philosophy community, who may wish to focus on interpretation without having to keep track of modern developments in the physics. However, the view is simply a hangover from an earlier age, and should be abandoned as quickly as possible. It is a debate that can never be resolved, since how can physicists be convinced to adopt one interpretation over another if it makes no difference at all to how they understand the phenomenology of the theory?

On the other hand, if one looks closely it is evident that many “interpretations” that are supposedly of this type are not mere interpretations at all. For example, although Bohmian Mechanics is equivalent to standard quantum theory in its predictions, it immediately suggests a generalization to a “nonequilibrium” hidden variable theory, which would make new predictions not possible within the standard theory. Similar remarks can be made about other interpretations. For example, many-worlds, despite not being a favorite of mine, does suggest that it is perfectly fine to apply standard quantum theory to the entire universe. In Copenhagen this is not possible in any straightforward way, since there is always supposed to be a “classical” world out there at some level, to which the state of the quantum system is referred. In short, the distinction between “the physics” and “the interpretation” often disappears on close inspection, so we are better off abandoning the word “interpretation” and instead viewing the project as providing alternative frameworks for the future progress of physics.

Finally, the more observant amongst you will have noticed that I did not include “solving the measurement problem” as a possible major goal of quantum foundations, despite its frequent appearance in this context. Deconstructing the measurement problem requires its own special rant, so I’m saving it for a future occasion.

New Blog

Welcome to my new blog.  It exists for me to occasionally air a whole lot of rants I have stored up about technology in academia, and will be posted to less frequently than my other blog Quantum Quandaries.   Here’s what the about section says:

This blog is about the uses of computers and technology in academia. As well as recommendations of useful websites and software, there is advice on how to make use of the internet in teaching and research, and speculation on how we could make the net a better place for academics. The focus is on things that are useful to people in the mathematical and physical sciences, and I have an unashamed bias towards Apple Macs and open source solutions.

Swanky New Website

You will have noticed that I have given my website a complete facelift. You can now access it at http://www.mattleifer.info as well as the old address.

New preprints

I recently posted two new articles on the arXiv.

Enjoy!

Visiting CQC Cambridge

I am currently visiting the Centre for Quantum Computation at the University of Cambridge. I’ll be back in Waterloo on 6th January 2007.