One of the first things I did after arriving at PI on Wednesday (and having lunch) was to attend the colloquium talk, which was given by Robert Spekkens. It was called “Why the Quantum?”, but as he described it, the real point of the talk was to take a close look at the features of quantum physics that are commonly considered “weird” or “mysterious” and see what’s really innovative in the departure from classical physics. For the most part, “physics” here means “mechanics”, but he also touched on optics and the theory of computation, and more speculatively on electromagnetism and gravity.
The main message of his talk is that very few of the things about quantum physics which seem strange are really all that innovative. He showed this by describing a kind of classical theory that has many of them – interference, noncommuting observables, entanglement, “wavefunction collapse”, wave-particle duality, teleportation and a no-cloning theorem, superposition of states, and so forth. All of these, he told us, will show up in a model based on a classical mechanical system, where the “quantum” theory is a theory of probability distributions (or, equivalently, of the knowledge of observers about a classical system) subject to a restriction about what distributions are allowed.
The point is to start with some classical system: let’s say it’s a mechanical system of some moving particles. Then there’s a configuration space of all the possible (classical) configurations of the system – one point in this space for each configuration. Classical mechanics is then about defining a “flow” on this space, which tells you where a point will move over time (how the system will go from one configuration to another). Then Liouville mechanics is about probability distributions on this space: you might not know exactly which configuration the system is in, but you have a way of estimating the probabilities. Then you impose the restriction that the only allowed probability distributions are ones for which the product of the uncertainties (standard deviations) in conjugate variables is at least of the order of Planck’s constant – in the familiar form, Δq·Δp ≥ ħ/2. (Actually, I think Spekkens formulated this differently, but that’s about what it amounts to, as I understand it.) The result is equivalent to “Gaussian quantum mechanics” – one where probability distributions are all Gaussians.
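This restriction is easy to play with numerically. Here’s a minimal sketch – my own toy version, not Spekkens’ exact formulation – with ħ set to 1 and the symplectically invariant condition det(C) ≥ (ħ/2)² on the covariance matrix standing in for the uncertainty-product condition:

```python
import numpy as np

# Toy illustration of an "epistemic restriction": a Gaussian state of
# knowledge over the phase space (q, p) of one classical particle is
# allowed only if it is not too sharp.  Units with hbar = 1.
HBAR = 1.0

def is_valid_epistemic_state(cov):
    """Check the restriction for a 2x2 covariance matrix over (q, p).
    A symplectically invariant form of the condition is
    det(cov) >= (hbar/2)^2, which is what we test here."""
    return np.linalg.det(cov) >= (HBAR / 2) ** 2 - 1e-12

# A state that just saturates the bound: std(q) = std(p) = sqrt(hbar/2).
cov_min = np.diag([HBAR / 2, HBAR / 2])
print(is_valid_epistemic_state(cov_min))    # True

# A sharper-than-allowed state: too much knowledge of both q and p.
cov_sharp = np.diag([HBAR / 10, HBAR / 10])
print(is_valid_epistemic_state(cov_sharp))  # False
```

The determinant form is convenient because it doesn’t change when you rotate or shear phase space in an area-preserving way, which matters for the evolution question below.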
This also puts limits on what the rule for evolving states can be: any rule for how individual states evolve over time also gives a result for how probability distributions evolve over time. (Picture a cloud of ink, with varying density, flowing along in moving water – knowing the flow lines tells you where the cloud goes.) If there are restrictions on what kind of probability distributions can be set up, these have to be preserved over time – otherwise, you could set up an allowed distribution, and then wait until it evolves into a disallowed one. In particular, for Gaussian quantum mechanics, he told us that systems with a quadratic Hamiltonian will satisfy this condition: a quadratic Hamiltonian generates a linear (symplectic) flow on phase space, and linear flows carry Gaussian distributions to Gaussian distributions.
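The preservation claim can be sketched numerically for the simplest quadratic Hamiltonian, a unit-mass, unit-frequency harmonic oscillator, whose classical flow is just a rotation of phase space (this example and the determinant form of the restriction are my own illustration, with ħ = 1):

```python
import numpy as np

HBAR = 1.0

# For H = (p^2 + q^2)/2, the classical flow for time t rotates phase space:
#   (q, p) -> (q cos t + p sin t, -q sin t + p cos t).
# A Gaussian distribution with covariance C evolves to S C S^T.
def flow_matrix(t):
    return np.array([[np.cos(t),  np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

def evolve(cov, t):
    S = flow_matrix(t)
    return S @ cov @ S.T

# Start from an allowed Gaussian state and evolve it.
cov0 = np.diag([HBAR, HBAR / 2])   # det = hbar^2/2 >= (hbar/2)^2
cov1 = evolve(cov0, t=0.7)

# The flow is symplectic (det S = 1), so det(cov) -- and with it the
# restriction det(cov) >= (hbar/2)^2 -- is preserved exactly.
print(np.isclose(np.linalg.det(cov0), np.linalg.det(cov1)))  # True
```

A nonlinear (non-quadratic) Hamiltonian would bend the Gaussian into a non-Gaussian shape, which is roughly why the quadratic case is the one that stays inside the restricted theory.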
The important fact here is that this is a “realist” interpretation. It says that quantum mechanical uncertainty reflects the fact that QM is a theory about our knowledge of the state of a system – a system which nevertheless really exists. Often in quantum mechanics, one defines a “wave function” as a function living on configuration space (complex-valued, not real-valued like a probability density, but a function nonetheless). It’s now pretty standard to think of this wave function as the “real” state of the system; the view that it represents a state of knowledge was popular for a while, but ran into various problems in the form of experiments that are hard to account for, such as Bell inequality violations. The point of the talk was to see just how many of the “strange” features of quantum mechanics are genuine problems for this view, and to show the answer is “not many”.
The features he claimed are really mysterious from this point of view are fairly few: Bell inequality violations, some no-go theorems for hidden-variable models of physics – such as the Kochen-Specker Theorem, which rules out noncontextual hidden variables – and a few others. So Spekkens’ suggestion was that this concept of quantum mechanics as a theory of probability with an “epistemic” restriction (i.e. limits on what’s knowable) might be salvaged if the underlying classical theory were non-local – and perhaps had some other odd features yet to be precisely delineated – to begin with. However, it might not have to be terribly strange apart from that, since quantum mechanical features like interference and superposition of states all show up in the restricted statistical picture.
The gist of his argument then seemed to be that to really straighten out some foundational issues in quantum physics, one approach would be: (a) come up with a well-founded justification for the assumption about restrictions on possible probability distributions, and (b) come up with at least one other principle (and as few as possible) to account for the remaining mysterious things – he also suggested they all seem to have something to do with “contextuality”. As I understand it, this last is the idea that an observable might have definite, but multiple, values – and that which values are seen depends on which groups of observables are measured together. I don’t know what, if anything, to make of that oddball-sounding idea.
However, he did argue that in some cases at least, the restriction can be justified by the observer effect: you have to look at a system using some apparatus whose state you don’t know completely, and which interferes with the system in order to observe it (for instance, measuring the position of a particle by scattering it off another one whose state is partly unknown, and which imparts an unknown momentum).
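Here is a toy Monte Carlo version of that scattering story – my own illustration, not anything from the talk, with ħ = 1 and the momentum kick’s spread put in by hand as ħ/(2·resolution):

```python
import numpy as np

rng = np.random.default_rng(0)
HBAR = 1.0

# Toy observer effect: the apparatus reads the position with resolution
# s_q, and in doing so imparts a random, unrecorded momentum kick whose
# spread is hbar / (2 * s_q).  The observer's post-measurement spreads
# in (q, p) then obey std(q) * std(p) ~ hbar / 2 automatically.
def measure_position(q_true, p_true, s_q, n=100_000):
    readings = q_true + rng.normal(0.0, s_q, size=n)   # noisy record of q
    kicks = rng.normal(0.0, HBAR / (2 * s_q), size=n)  # unknown kick to p
    p_after = p_true + kicks
    return readings.std(), p_after.std()

dq, dp = measure_position(q_true=0.0, p_true=0.0, s_q=0.1)
print(dq * dp >= HBAR / 2 * 0.95)   # True: product is ~ hbar/2
```

Tightening the resolution s_q only makes the kick spread larger, so the product stays bounded below no matter how the apparatus is tuned – which is the flavor of justification Spekkens was pointing at.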
My overall reaction to the talk is that it’s interesting to know that realist interpretations of quantum physics (where the “reality” is more or less classical, and quantum effects are some kind of afterthought, or epistemic effect) aren’t as dead as they might have seemed. However, the view that says classical physics emerges as some kind of limiting case of quantum effects seems better developed, at least mathematically, than the reverse. As for his claim that we “understand” the classical picture “physically”, whereas it’s not so for the quantum picture – I can only agree that it’s true for me, but I don’t entirely see what you can conclude from that.
The bottom line seems to be that there are still problems in epistemology. I suspected as much already – though I’m not sure if I “knew” it, whatever that means.