Here are the details of the next Q+ hangout. This is our “Nobel Prize” lecture. Dietrich is a long-time colleague of David Wineland at NIST and will tell us about the latest research from the Ion Storage Group. Please note the unusual start time of 5pm BST (UTC+1).
To join the hangout or watch the livestream go to http://gplus.to/qplus at the appointed hour.
Date: 23rd April 2013, 5pm BST (UTC+1)
Speaker: Dietrich Leibfried (NIST)
Title: Towards scalable quantum information processing and quantum simulation with trapped ions
Quantum information processing (QIP) and quantum simulation (QS) can potentially provide an exponential speedup for certain problems over the corresponding (known) algorithms on conventional computers. QIP makes use of the counter-intuitive properties of quantum mechanics, such as entanglement and the superposition principle (being in more than one state simultaneously). On the way towards a useful QIP device these properties, mostly the subject of thought experiments so far, will have to become a practical reality. I will discuss experiments towards QIP and QS with trapped ions. Most requirements for QIP and QS have been demonstrated in this system, with two big challenges remaining: improving operation fidelity and scaling up to larger numbers of qubits.
The architecture pursued by the Ion Storage Group at NIST is based on quantum information stored in long-lived internal (hyperfine) states of the ions. We investigate the use of laser beams and microwave fields to induce both single-qubit rotations and multi-qubit gates mediated by the Coulomb interaction between ions. Moving ions through a multi-zone trap architecture allows the number of ions per zone to be kept small, while sympathetic cooling with a second ion species can remove energy and entropy from the system.
After a brief introduction to these elements, I will present the current status of experiments and some future perspectives for QIP and QS.
This work has been supported by IARPA, DARPA, ARO, ONR, and the NIST Quantum Information Program.
To keep up to date with the latest news and announcements about Q+ hangouts you can follow us on our social media channels or visit our website http://qplus.burgarth.de
Posting has been light of late. I would like to say this is due to the same sort of absorption that JoAnne has described over at Cosmic Variance, but in fact my attention span is currently too short for that, and it has more to do with my attempts to work on three projects simultaneously. In any case, a report of an experiment on quantum foundations in Nature cannot possibly go ignored for too long on this blog. See here for the arXiv eprint.
What Gröblacher et al. report on is an experiment showing violations of an inequality proposed by Leggett, aimed at ruling out a class of nonlocal hidden-variable theories, whilst simultaneously violating the CHSH inequality, so that local hidden-variable theories are also ruled out in the same experiment. This is of course subject to the usual caveats that apply to Bell experiments, but let’s grant the correctness of the analysis for now and take a look at the class of nonlocal hidden-variable theories that are ruled out.
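The CHSH violation is easy to check numerically. As a minimal sketch (using the standard spin-singlet formulation rather than the photon-polarization setup of the actual experiment), the quantum correlation E(a,b) = −cos(a−b) yields |S| = 2√2 > 2 at the usual optimal settings:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    # Spin measurement along angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi> = (|01> - |10>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    # Quantum correlation <psi| A(a) (x) B(b) |psi> = -cos(a - b) for the singlet
    op = np.kron(spin_op(a), spin_op(b))
    return np.real(psi.conj() @ op @ psi)

# Optimal CHSH settings
a1, a2 = 0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.83 > 2, violating the CHSH bound
```

Any local hidden-variable theory obeys |S| ≤ 2, so confirming the quantum value in the lab is what rules that class out (modulo the usual loopholes).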
It is well-known that Bell’s locality assumption can be factorized into two conditions.
- Outcome independence: the outcome of the experiment at site A does not depend on the outcome of the experiment at site B.
- Parameter independence: the outcome of the experiment at site A does not depend on the choice of detector setting at site B.
Leggett has proposed to consider theories that maintain the assumption of outcome independence, but drop the assumption of parameter independence. It is worth remarking at this point that the attribution of fundamental importance to this factorization of the locality assumption can easily be criticized. Whilst it is usual to describe the outcome at each site by ±1, this is an oversimplification. For example, if we are doing Stern-Gerlach measurements on electron spins then the actual outcome is a deflection of the path of the electron either up or down with respect to the orientation of the magnet. Thus, the outcome cannot be so easily separated from the orientation of the detector, as its full description depends on the orientation.
Nevertheless, whatever one makes of the factorization, it is the case that one can construct toy models that reproduce the quantum predictions in Bell experiments by dropping parameter independence. Therefore, it is worth considering what other reasonable constraints we can impose on theories when this assumption is dropped. Leggett’s assumption amounts to assuming that the hidden variable states in the theory can be divided into subensembles, in each of which the two photons have a definite polarization (which may however depend on the distant detector setting). The total ensemble corresponding to a quantum state is then a statistical average over such states. This is the class of theories that has been ruled out by the experiment.
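To see how dropping parameter independence does the trick, here is a deliberately artificial toy model (my own illustration, not one from the paper and not satisfying Leggett’s definite-polarization assumption): Alice’s outcome is a fair coin fixed by the shared hidden variable, while Bob’s outcome is allowed to depend on Alice’s *setting* in just the way needed to reproduce the singlet correlation E(a,b) = −cos(a−b):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(a, b, n=200_000):
    # Hidden variable: Alice's outcome is a fair coin, independent of both settings.
    A = rng.choice([-1, 1], size=n)
    # Parameter dependence: Bob's outcome depends on Alice's SETTING a
    # (A here stands in for the shared hidden variable that determines it),
    # anti-correlating with probability cos^2((a - b)/2).
    flip = rng.random(n) < np.cos((a - b) / 2) ** 2
    B = np.where(flip, -A, A)
    return np.mean(A * B)  # estimates E(a, b) = -cos(a - b)

a, b = 0.0, np.pi / 3
print(toy_model(a, b), -np.cos(a - b))  # both close to -0.5
```

Note that Bob’s marginal statistics remain a fair coin, so the parameter dependence is hidden at the level of local observations; only the correlations reveal it.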
This is all well and good, and I am certainly in favor of any experiment that places constraints on the space of possible interpretations of quantum theory. However, the experiment has been sold in some quarters as a “refutation of nonlocal realism”, so we should consider the extent to which this is true. The first point to make is that there are perfectly good nonlocal realistic models, in the sense of reproducing the predictions of quantum theory, that do not satisfy Leggett’s assumptions – the prime example being Bohmian mechanics. In the Bohm theory photons do not have a well-defined value of polarization, but instead it is determined nonlocally via the quantum potential. Therefore, if we regard this as a reasonable theory then no experiment that simply confirms the predictions of quantum theory can be said to rule out nonlocal realism.