Continuing from the previous post…
I realized I accidentally omitted Klaas Landsman’s talk on the Kochen-Specker theorem, in light of topos theory. This overlaps a lot with the talk by Andreas Döring, although there are some significant differences. (Having heard only what Andreas had to say about the differences, I won’t attempt to summarize them.) Again, the point of the Kochen-Specker theorem is that there isn’t a “state space” model for a quantum system – in this talk, we heard the version saying that there are no “locally sigma-Boolean” maps out of the algebra of operators on a Hilbert space. (This is referring to sigma-algebras of measurable sets on a space, and Boolean algebras of subsets – if there were such a map, it would represent the system in terms of a lattice equivalent to some space.) As with the Isham/Döring approach, they then try to construct something like a state space – internal to some topos. The main difference is that the toposes are both categories of functors into sets from some locale – but here the functors are covariant, rather than contravariant.
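For reference, one standard way of stating the theorem in projection-lattice form (my wording, not Landsman’s) goes like this:

```latex
\textbf{Theorem (Kochen--Specker).} Let $H$ be a Hilbert space with
$3 \le \dim H < \infty$, and let $P(H)$ be its lattice of projections.
Then there is no map $v : P(H) \to \{0,1\}$ such that for every family
$\{P_i\}$ of mutually orthogonal projections with $\sum_i P_i = 1$,
\[
  \sum_i v(P_i) = 1 .
\]
```

That is, there is no way of assigning definite yes/no answers to all projections that is Boolean on each commuting family at once – which is exactly the obstruction to the naive state-space picture.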
Now, roughly speaking, the remaining talks could be grouped into two kinds:
Many people came to this conference from a physics-oriented point of view. So for instance Rafael Sorkin gave a talk asking “what is a quantum reality?”. He was speaking from a “histories” interpretation of quantum systems. So, by contrast, a “classical reality” would mean one worldline: out of some space of histories, one of them happens. In quantum theory, you typically use the same space of histories, but have some kind of “path integral” or “sum over histories” when you go to compute the probabilities of given events happening. In this context, “event” means “a subset of all histories” (e.g. the subset specified by a statement like “it rained today”). So his answer to the question is: a reality should be a way of answering all questions about all events. This is called a “coevent”. Sorkin’s answer to “what is a quantum reality?” is: “a primitive, preclusive coevent”.
In particular, the answer is based on a measure on the space of events. For a classical system, “answering” questions means yes/no, whether the one history is in a named event – for a quantum system, it means specifying a path integral over histories – i.e. a measure on the space of events. This measure needs some nice properties, but it’s not, for instance, a probability measure (it’s complex-valued, so there can be interference effects). Preclusion has to do with the fact that an event with measure zero doesn’t happen – so one can make logical inferences about which events can happen.
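To make the “measure with interference” idea concrete, here is a toy sketch (my own illustration, not Sorkin’s formalism): give each history a complex amplitude, let the measure of an event be the squared magnitude of the summed amplitudes, and note that an event can be precluded even though each history in it has nonzero amplitude.

```python
import cmath

# Toy "quantal measure" in the spirit of the histories approach:
# each history gets a complex amplitude, and the measure of an event
# (a set of histories) is |sum of amplitudes|^2.
amplitudes = {
    "path A": 1 / cmath.sqrt(2),
    "path B": -1 / cmath.sqrt(2),   # opposite phase to path A
}

def measure(event):
    """Quantal measure of an event, given as a set of history names."""
    return abs(sum(amplitudes[h] for h in event)) ** 2

# Each single history has measure 1/2 ...
print(measure({"path A"}))
# ... but the event containing both is precluded: the amplitudes
# cancel, so a "preclusive coevent" must answer that it doesn't happen.
print(measure({"path A", "path B"}))
```

Note the failure of additivity – the measure of the union is smaller than the sum of the parts – which is exactly why this is not a probability measure.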
Other talks addressing foundational problems in physics included Lucien Hardy’s: he talked about how to base predictive theories on operational structures – and put to the audience the question of whether the structures he was talking about can be represented categorically or not. The basic idea is that an “operational structure” is some collection of operations representing a physical experiment whose outcome we might want to predict. They have some parameters (“knob settings”), outcomes (classical “readouts”), and inputs and outputs for the things they study and affect (e.g. a machine takes in and spits out an electron, doing something in the middle). This sort of thing can be set up as a monoidal category – but the next idea, “object-oriented operationalism”, involved components having “connections” (given relations between their inputs) and “coincidences” (predictable correlations in output). The result was a different kind of diagram language for describing experiments, which can be put together using a “causaloid product” (he referred us to this paper, or a similar one, on the subject).
Robert Spekkens gave a talk about quantum theory as a probability theory – there are many parallels, though the complex amplitudes give QM phenomena like interference. Instead of a “random variable” X, one has a Hilbert space H; instead of a (positive) function of X, one has a positive operator on H; standard things in probability have analogs in the quantum world. What Robert Spekkens’ talk dealt with was how to think about conditional probabilities and Bayesian inference in QM. One of the basic points is that when calculating conditional probabilities, you generally have to divide by some probability, and this division encounters difficulties translating into QM. He described how to construct a “conditional density operator” along similar lines – replacing “division” by a “distortion” operation with an analogous meaning. The whole thing deeply uses the Choi-Jamiolkowski isomorphism, a duality between “states and channels”. In terms of the string diagrams Bob Coecke et al. are keen on, this isomorphism can be seen as taking a special cup which creates entangled states into an ordinary cup, with an operator on one side. (I.e. it allows the operation to be “slid off” the cup.) The talk carried this through, and ended up defining a quantum version of the probabilistic concept of “conditional independence” (i.e. events A and C are independent, given that B occurred).
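The states/channels duality is easy to see in a few lines of linear algebra. The sketch below (my own single-qubit example, not from the talk) turns a channel given by Kraus operators into its Choi state by feeding half of a maximally entangled pair through it – the “cup” of the string-diagram picture.

```python
import numpy as np

# Choi-Jamiolkowski: a channel E corresponds to the state obtained by
# applying (I tensor E) to a maximally entangled pair -- diagrammatically,
# the channel slides onto one leg of a "cup".
def choi_state(kraus_ops):
    """Choi state of the channel with the given Kraus operators."""
    d = kraus_ops[0].shape[0]
    omega = np.zeros((d * d, 1))       # unnormalized sum_i |i>|i>
    for i in range(d):
        omega[i * d + i] = 1.0
    rho = omega @ omega.T / d          # normalized maximally entangled state
    total = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        A = np.kron(np.eye(d), K)      # the channel acts on one half only
        total += A @ rho @ A.conj().T
    return total

# Example: the identity channel on a qubit gives back the entangled
# state itself, a valid (pure) density operator of trace one.
chi = choi_state([np.eye(2)])
print(np.trace(chi).real)   # 1.0
```

Going the other way – recovering the channel’s action from the state – is the inverse half of the duality that the “conditional density operator” construction relies on.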
A more categorical look at foundational questions was given by Rick Blute’s talk on “Categorical Structures in AQFT”, i.e. Algebraic Quantum Field Theory. This is a formalism for QFT which takes into account the causal structure it lives on – for example, on Minkowski space, one has a causal order for points, with x ≤ y if there is a future-directed null or timelike curve from x to y. Then there’s an “interval” (more literally, a double cone) [x, y] = {z : x ≤ z ≤ y}, and these cones form a poset under inclusion (so this is a version of the poset of subspaces of a space which keeps track of the causal structure). Then an AQFT is a functor from this poset into C*-algebras (taking inclusions to inclusions): the idea is that each local region of spacetime has its own algebra of observables relevant to what’s found there. Of course, these algebras can all be pieced together (i.e. one can take a colimit of the diagram of inclusions coming from all regions of spacetime); the result is a single “quasi-local” algebra. Then, one finds a category of certain representations of it on a Hilbert space (namely, “DHR” representations). It turns out that this category is always equivalent to the category of representations of some compact group G, the gauge group of the AQFT. Rick talked about these results, and suggested various ways to improve them – for example, by improving how one represents spacetime.
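The causal order underlying this poset is concrete enough to compute with. Here is a minimal sketch (a standard fact about Minkowski space, not specific to Rick’s talk): with c = 1, x ≤ y exactly when the time difference to y is nonnegative and dominates the spatial distance.

```python
# Causal order on 2d Minkowski space (units with c = 1):
# x <= y iff y lies in the closed future light cone of x.
def causally_precedes(x, y):
    """x, y are (t, position) pairs; True if x <= y in the causal order."""
    dt = y[0] - x[0]
    dx = abs(y[1] - x[1])
    return dt >= 0 and dt >= dx

# A "double cone" [x, y] = { z : x <= z <= y }, the basic region of AQFT.
def in_double_cone(z, x, y):
    return causally_precedes(x, z) and causally_precedes(z, y)

origin, later = (0.0, 0.0), (2.0, 0.0)
print(in_double_cone((1.0, 0.5), origin, later))   # True: inside the cone
print(in_double_cone((1.0, 1.5), origin, later))   # False: spacelike to origin
```

Ordering the double cones by inclusion gives exactly the index poset on which the local algebras of observables live.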
The last talk I’d attempt to shoehorn into this category was by Daniel Lehmann. He was making an analysis of the operation “tensor product”, that is, the monoidal operation in the category of Hilbert spaces. For such a fundamental operation – physically, it represents taking two systems and looking at the combined system containing both – it doesn’t have a very clear abstract definition. Lehmann presented a way of characterizing it by a universal property analogous to the universal definitions for products and coproducts. This definition makes sense whenever there is an idea of a “bimorphism” – a thing which abstracts the properties of a “bilinear map” of vector spaces. This seems to be closely related to the link between multicategories and monoidal categories (discussed in, for example, Tom Leinster’s book).
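For comparison, the familiar universal property in the vector-space case reads as follows (the standard textbook statement, which the bimorphism definition abstracts):

```latex
\textbf{Universal property of the tensor product.} For vector spaces $V, W$
there is a bilinear map $\otimes : V \times W \to V \otimes W$ such that for
every bilinear map $f : V \times W \to U$ there exists a unique linear map
$\tilde{f} : V \otimes W \to U$ with
\[
  f(v, w) = \tilde{f}(v \otimes w) \quad \text{for all } v \in V,\ w \in W .
\]
```

Replacing “bilinear map” by an abstract notion of bimorphism is what lets the same definition make sense in categories other than vector spaces.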
Categories and Logic
Some less physics-oriented and more categorical talks rounded out the part of the program that I saw. One I might note was Mike Stay’s talk about the Rosetta Stone paper he wrote with John Baez. The Rosetta Stone, of course, was a major archaeological find from the Ptolemaic period in Egypt – by that point, Egypt had been conquered by Alexander of Macedon and had a Greek-speaking elite, but the language wasn’t widespread. So the stone is an official pronouncement with a message in Greek, and in two written forms of Egyptian (hieroglyphic and demotic), neither of which had been readable to moderns until the stone was uncovered and correspondences could be deduced between the same message in a known language and two unknown ones. The idea of their paper, and Mike’s talk, is to collect together analogs between four subjects: physics, topology, computation, and logic. The idea is that each can be represented in terms of monoidal categories. In physics, there is the category of Hilbert spaces; in topology one can look at the category of manifolds and cobordisms; in computation, there’s a monoidal category whose objects are data types, and whose morphisms are (equivalence classes of) programs taking data of one type in and returning data of another type; in logic, one has objects being propositions and morphisms being (classes of) proofs of one proposition from another. The paper has a pretty extensive list of analogs between these domains, so go ahead and look in there for more!
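A toy version of the “computation” column (my illustration, not the paper’s notation): objects are data types, morphisms are programs, composition is running one program after another, and the monoidal product runs programs side by side on pairs.

```python
# Programs as morphisms in a monoidal category of data types.
def compose(g, f):
    """Sequential composition g . f: run f, then g."""
    return lambda x: g(f(x))

def tensor(f, g):
    """Monoidal (parallel) composition: act on the two halves of a pair."""
    return lambda pair: (f(pair[0]), g(pair[1]))

double = lambda n: 2 * n          # a program of type Int -> Int
shout = lambda s: s.upper()       # a program of type String -> String

# (double . double) tensor shout : (Int, String) -> (Int, String)
program = tensor(compose(double, double), shout)
print(program((3, "hello")))      # (12, 'HELLO')
```

The physics, topology, and logic columns of the paper organize their own subjects around exactly these two ways of combining morphisms.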
Peter Selinger gave a talk about “Higher-Order Quantum Computation”. This had to do with interesting phenomena that show up when dealing with “higher-order types” in quantum computers. These are “data types”, as I just described – the “higher-order” types can be interpreted by blurring the distinction between a “system” and a “process”. A data type describing a system we might act on might be, say, the type of qubits. A higher-order type like A → B describes a process which takes something of type A and returns something of type B. One could interpret this as a black box – and performing processes on such a type is like studying that black box as a system itself. This type is like an “internal hom” – and so one might like to say, “well, it’s dual to tensor – so it amounts to taking A* ⊗ B, since we’re in the category of Hilbert spaces”. The trouble is, for physical computation, we’re not quite in the category where that works. Because not all operators are significant: only some class of completely positive maps are physical. So we don’t have the hom-tensor duality to use (equivalently, don’t have a well-behaved dual), and these types have to be considered in their own right. And, because computations might not halt, operations studying a black box might not halt. So in particular, a “co-co-qubit” isn’t the same as a qubit. A co-qubit is a black box which eats a qubit and terminates with some halting probability. A co-co-qubit eats a co-qubit and does the same. If not for the halting probability, one could equally well see a qubit “eating” a co-co-qubit as the reverse. But in fact they’re different. A key fact in Peter’s talk is that quantum computation has new logical phenomena happening with types of every higher order. Quantifying this (an open problem, apparently) would involve finding some equivalent of Bell inequalities that apply to every higher order of type. It’s interesting to see how different quantum computing is, in not-so-obvious ways, from the classical kind.
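A minimal sketch of the “co-qubit” idea in density-matrix language (my own toy model, not Peter’s formalism): represent a co-qubit by an effect operator E with 0 ≤ E ≤ I, so that feeding it a qubit state ρ yields halting probability tr(Eρ).

```python
import numpy as np

# Toy model: a qubit is a density matrix rho; a "co-qubit" is a black box
# that eats a qubit and halts with probability tr(E rho), where E is an
# effect operator satisfying 0 <= E <= I.
def co_qubit(E):
    """Return the black box determined by the effect E."""
    return lambda rho: float(np.trace(E @ rho).real)

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # the state |0><0|
E = np.array([[0.75, 0.0], [0.0, 0.25]])    # an effect: halts more often on |0>

halt = co_qubit(E)
print(halt(rho0))    # 0.75
```

A co-co-qubit would then be a box eating such functions and producing its own halting probability – and it is at this second-order level and beyond that the new phenomena Peter described start to appear.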
Mehrnoosh Sadrzadeh gave a talk describing how “string diagrams” from monoidal categories, and representations of them, have been used in linguistics. The idea is that the grammatical structure of a sentence can be built by “composing” structures associated to words – for example, a verb can be composed on left and right with subject and object to build a phrase. She described some of the syntactic analysis that went into coming up with such a formalism. But the interesting bit was to compare putting semantics on that syntax to taking a representation. In particular, she described the notion of a semantic space in linguistics: this is a large-dimensional vector space that compares the meanings of words. A rough but surprisingly effective way to clump words together by meaning just uses the statistics on a big sample of text, measuring how often they co-occur in the same context. Then there is a functor that “adds semantics” by mapping a category of string diagrams representing the syntax of sentences into one of vector spaces like this. Applying the kind of categorical analysis usually used in logic to natural language seemed like a pretty neat idea – though it’s clear one has to make many more simplifying assumptions.
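The “semantic space” idea is easy to demo in miniature (a toy example of distributional semantics, not from the talk): count how often each word co-occurs with its neighbours, and compare the resulting vectors by cosine similarity.

```python
from collections import Counter
from math import sqrt

corpus = "cats chase mice . dogs chase cats . cars have wheels . cars have engines".split()

def context_vector(word, window=1):
    """Co-occurrence counts of `word` with its neighbours in the corpus."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda c: sqrt(sum(x * x for x in c.values()))
    return dot / (norm(u) * norm(v))

# Words occurring in similar contexts get similar vectors:
print(cosine(context_vector("cats"), context_vector("dogs")))   # high, ~0.95
print(cosine(context_vector("cats"), context_vector("cars")))   # low, ~0.32
```

The functor she described sends each word’s syntactic type to a space of vectors like these, so that composing the diagram for a sentence composes the meanings of its words.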
On the whole, it was a great conference with a great many interesting people to talk to – as you might guess from the fact that it took me three posts to comment on everything I wanted.