Well, a couple of weeks ago I was up in Waterloo at the Perimeter Institute with Dan Christensen and his grad student Wade Cherrington for a couple of days for the “Young Loops and Foams” conference. It actually ran all week, but we only took the time out to go for the first couple of days. The talks that we were there for dealt mainly with the loop-quantum-gravity and spin-foam approaches to quantum gravity.

These are not really what I’m working on, though I certainly have thought about these approaches, and Dan and his grad students have done significant work on them. Wade Cherrington has been applying spin-foam methods to lattice gauge theory, and Igor Khavkine has been working on the “new” spin foam models. Both of these guys are in the Applied Mathematics department here at UWO (though Igor is graduating this year), and a lot of their work has been about getting efficient algorithms for doing computations with these models. This seems like great stuff to me – certainly it’s a step in the direction of getting predictions and comparing them to experiments (i.e. “real physics”, though as a “mathematician” who’s only motivated by physics, I certainly don’t say this to be snobby).

Many of the talks were a bit over my head – for one thing, a lot of the significant new stuff involves fairly substantial calculation, which is by nature rather technical. There were some more introductory talks about Group Field Theory: Etera Livine and Daniele Oriti both gave talks describing the main concepts of the subject. Livine’s talk was fairly introductory – explaining how GFT describes a field theory on a background which consists of a product of a few copies of a Lie group, for instance of several copies of $SU(2)$. In that example, states of the theory can be interpreted as spin networks.

Oriti’s talk dealt more with issues about GFT, but also emphasized that it can be seen as a kind of “second quantization” of spin networks. That is, one can think of a spin network geometry in terms of a graph which is labelled with spins (in practice, half-integers). Given such a graph, there is a Hilbert space for such states on the graph, whereas in GFT, the graph itself emerges from the states. The total Hilbert space for the fields in GFT then includes many different graphs, with many different numbers of vertices. The analogy is to second quantization, in which, for example, one takes the quantum mechanical theory of an oscillator with a given energy, and turns it into a field theory whose states can contain any number of quanta.
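The analogy can be written schematically (this is my own gloss, not notation from the talk): ordinary second quantization passes from a one-particle Hilbert space to a Fock space containing any number of quanta, while GFT passes from the Hilbert space of states on a fixed graph to a sum over all graphs.

```latex
% Second quantization: one-particle space H_1 to the Fock space
\mathcal{F}(H_1) \;=\; \bigoplus_{n \geq 0} \mathrm{Sym}^n(H_1)

% GFT analogue (schematic): states on a fixed graph \Gamma
% to a total space summing over all graphs
\mathcal{H}_{\mathrm{GFT}} \;=\; \bigoplus_{\Gamma} \mathcal{H}_{\Gamma}
```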

Oriti also made references to this paper, in which he proposes a way to get a continuum limit out of GFT (using methods, which I can hardly comment on, analogous to those used to describe condensates in solid-state physics). However, he didn’t have time to describe this in detail. I’ve only looked briefly at that paper, and it seems sort of impressionistic, but the impressions are interesting, anyway.

…

I managed to have a few conversations with Robert Oeckl about Extended TQFT’s on the one hand, and his general boundary formulation of QFT’s on the other (more here, and slides giving an overview here). These two points of view take the usual formalism of TQFT and run with it in two somewhat different directions. Since I’ve talked a lot here about Extended TQFT’s and categorification, I’ll just say a bit about what Oeckl calls the general boundary formulation. This doesn’t use categorical language, and it remains a theory at “codimension 1” (that is, it tells you about top-dimension “volumes” which connect codimension-1 “surfaces”, and that’s all). It does get outside what the functorial axiomatization of TQFT’s seems to ask, though. In particular, it doesn’t require you to be talking about a cobordism (“spacetime”) going from an input hypersurface (“space-slice”) to an output. Instead, it lets you talk about a general region with boundary, treating the whole boundary at once. Any part of it can be thought of as input or output.

One point of this way of describing a QFT is to help deal with the “problem of time”. His talk at the conference was a sort of “back to basics” discussion about the two basic approaches to quantum gravity – what he named the “covariant” (or perturbative) approach and the “canonical” (or “no-spacetime”) approach. One way to put the “problem” of time has to do with the apparently incompatible roles it plays in, respectively, general relativity and quantum mechanics, and these two approaches respect different portions of these roles.

The point is that in (non-quantum) relativity, a “state” is a whole world-history, part of which is the background geometry, which determines a causal order – a sort of minimal summary of time in that state. But in particular, that causal order is part of the information contained in a state, which describes everything real. In QM, on the other hand, a “state” contains some information about the world in a maximal way (though *if* you assume it represents all of reality, *then* you have to accept that reality isn’t *local*). But moreover, time plays a special role in QM outside any particular “world”.

In particular, the state vector in the Hilbert space encodes information about a system between measurements (chronologically!), an operator on that Hilbert space changes a state into a new state (also chronologically), and composition of operators implies a temporal sequence (which gives the meaning of noncommuting operators – the result depends on the order in which you perform them). This all depends on a notion of temporal order which, in relativity, depends on the background metric – which putatively depends on the state itself! So the two approaches to quantization try to either (a) keep the temporal order by fixing a background, and treat perturbations of it as the field (which can only be approximate), or (b) keep the idea that the metric is part of the state and hopefully recover the usual picture in some special cases (which is hard).
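The order-dependence of composed operators is easy to see in a two-state toy example – nothing to do with gravity, just standard finite-dimensional QM, sketched here with numpy:

```python
import numpy as np

# Two non-commuting spin-1/2 measurement projectors: onto "spin up"
# along z, and onto "spin right" along x. A toy illustration only.
P_z = np.array([[1, 0], [0, 0]], dtype=complex)       # |0><0|
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
P_x = np.outer(plus, plus.conj())                     # |+><+|

# The composite operation depends on which measurement comes first:
assert not np.allclose(P_z @ P_x, P_x @ P_z)

psi = np.array([0, 1], dtype=complex)                 # start in "spin down"
p_z_then_x = np.linalg.norm(P_x @ P_z @ psi) ** 2     # z-filter, then x-filter
p_x_then_z = np.linalg.norm(P_z @ P_x @ psi) ** 2     # x-filter, then z-filter
print(p_z_then_x, p_x_then_z)                         # 0.0 vs 0.25
```

The two composites give different probabilities, which is exactly why composition of operators carries an implicit temporal order.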

So as I understand it, the general boundary approach is meant to help get around this. It works by assigning data both to regions $M$ and to their boundaries $\partial M$, subject to a few rules which are reminiscent of those which make a TQFT in the usual formulation into a monoidal functor. In particular, the theory assigns a Hilbert space $H_{\partial M}$ to a boundary, and a linear functional $\rho_M : H_{\partial M} \rightarrow \mathbb{C}$ to a region. This satisfies some rules, such as that a disjoint union of boundaries gets the tensor product $H_{\Sigma \sqcup \Sigma'} \cong H_{\Sigma} \otimes H_{\Sigma'}$, that reversing the orientation of a boundary amounts to taking the dual of the Hilbert space, some gluing rules, and so on.

Then there is a way to recover a generalization of the probability interpretation for quantum mechanics. But it’s not a matter of *first* setting up a system in a state, and *then* making a measurement. Instead, it’s a way of asking a *question*, given some *knowledge* about the “system” at the boundary. Both knowledge and question take the form of subspaces (denoted $\mathcal{S}$ and $\mathcal{A}$) of $H_{\partial M}$, and the formula for probability involves both $\rho_M$ and the projection operators $P_{\mathcal{S}}$ and $P_{\mathcal{A}}$ onto these subspaces. The “probability of $\mathcal{A}$ given $\mathcal{S}$” is:

$P(\mathcal{A}|\mathcal{S}) = \frac{|\rho_M \circ P_{\mathcal{S}} \circ P_{\mathcal{A}}|^2}{|\rho_M \circ P_{\mathcal{S}}|^2}$

Then one of the rules defining how $\rho_M$ behaves when $M$ is deformed gives a sort of “conservation of probability” – the equivalent of unitarity of time evolution. If $\partial M$ decomposes as the union of an input and an output, and the subspaces $\mathcal{S}$ and $\mathcal{A}$ correspond to states on the input and the output surfaces, this gives exactly unitarity of time evolution.
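Here is a finite-dimensional sanity check of that claim, as I understand it: a toy “region” whose amplitude functional $\rho_M$ is built from a unitary $U$ (the time evolution across the region), for which the general-boundary probability rule reduces to the ordinary Born rule. The encoding of $\rho_M$ as a vector, and all the names below, are my own sketch rather than Oeckl’s notation:

```python
import numpy as np

# Toy general-boundary probability computation, reducing to the Born
# rule when the boundary splits into an "input" and an "output" piece.
rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """A random unitary, standing in for time evolution across the region."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d = 2
U = haar_unitary(d, rng)

# Boundary Hilbert space: H_in (x) conj(H_out). The region's amplitude
# functional is rho_M(psi (x) conj(phi)) = <phi|U|psi>, stored as a vector.
rho_vec = U.T.reshape(-1)

def restrict(P):
    """Vector representing the composed functional rho_M o P : v -> rho_M(P v)."""
    return P.T @ rho_vec

psi = np.array([1, 0], dtype=complex)                 # knowledge: the input state
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)    # question: the output state

I = np.eye(d)
P_S = np.kron(np.outer(psi, psi.conj()), I)           # S: "the input is psi"
P_A = np.kron(np.outer(psi, psi.conj()),
              np.outer(phi.conj(), phi))              # A: "... and the output is phi"

prob = (np.linalg.norm(restrict(P_S @ P_A)) ** 2
        / np.linalg.norm(restrict(P_S)) ** 2)
born = abs(phi.conj() @ U @ psi) ** 2                 # ordinary Born rule
assert np.isclose(prob, born)
print(prob)
```

The normalization by $|\rho_M \circ P_{\mathcal{S}}|^2$ is where unitarity enters: since $U$ is unitary, that denominator equals 1 for a normalized input state, which is the “conservation of probability” in this special case.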

Now, this seems like an interesting idea, assuming that it does indeed get over the shortcomings of both canonical and covariant approaches to quantum gravity. My main questions have to do with how to interpret it in category-theoretic terms, since it would be nice to see whether an extended TQFT – with 2-algebraic data for surfaces of codimension 2, and so on – could be described in the same way. The way Oeckl presents his TQFT’s is quite minimal, which is good for some purposes and avoids some complexity, but loses the organizing structure of TQFT-as-functor.

One thing that would be needed is a way of talking about some sort of n-category which has composition for morphisms with fairly arbitrary shapes – not just taking a source to a target. Instead of composition of arrows tip-to-tail, one has to glue randomly shaped regions together. Offhand, I don’t know the right way to do this.

August 17, 2008 at 9:28 am

Hi,

thanks for the report!

Two quick comments:

Regarding Oriti’s work:

This was my impression, too. He should try to substantiate some of his statements.

Regarding Oeckl’s boundary QFT:

That’s quite compatible with the functorial/categorical/cobordismic way of thinking about it, since there the labels “in” and “out” are, while technically necessary, not of intrinsic meaning: every outgoing/ingoing thing may be turned into an ingoing/outgoing one by attaching a suitable cap or cup. As you know.

August 17, 2008 at 8:25 pm

Urs: Of course it’s true that one can reassign “in” and “out” labellings for boundary components in the usual picture. The fact that manifolds with boundary can be construed as cobordisms is a bit of an artifice. What’s different with the general boundary formulation is that one can glue regions (or “compose morphisms” in the cobordism description) along not just components, but arbitrary portions of boundaries.

So, for instance, you can represent a solid sphere as a cobordism from its boundary to the empty set – so far so good. But while that boundary has only one connected component, it seems as if the general boundary formulation allows you to glue two spheres together along, for example, two disks, giving a torus (topologically – I’ll ignore smoothness questions). In general, you can glue regions along, not just components of their boundary surfaces $\Sigma$, but portions of the $\Sigma$, which in turn have boundaries inside the surfaces.

It seems there should be a 2-categorical way to describe this: the boundaries $\partial\Sigma$ would be objects, the surfaces (or portions of them) would be morphisms, and the regions would be 2-morphisms. However, this would have to be more general than the way I’ve thought about doing this, treating “double cobordisms” as double cospans of manifolds with corners, and then condensing that whole structure to a bicategory. But that approach is a bit restricted in which manifolds you can represent: not all regions can be thought of as cobordisms between cobordisms. I don’t know what structure, algebraically, describes this kind of gluing.

Then of course there’s the fact that, if you do represent these gluing rules in terms of composition in some type of 2-category, then the QFT ought to be a 2-functor, so there should be some data like 2-Hilbert spaces attached to the boundaries $\partial\Sigma$. (The theory as given would be a special case, where these all get the trivial 2-Hilbert space, so the $H_\Sigma$ appear as morphisms from it to itself of the form $- \otimes H_\Sigma$.) Though of course what such a 2-functor even is depends on what kind of 2-category we’re talking about…

August 18, 2008 at 3:40 pm

Oh, I see. Yes, agreed, this looks like extended QFT.

It may be a bit easier to visualize this one dimension down: suppose we have disks regarded as 2-morphisms with one semi-circle part of their boundary regarded as the source 1-morphism and the other as the target 1-morphism.

Gluing two such disks along only part of their boundary can be realized as a pasting operation where we first “whisker” both disks by a thin disk such that the resulting 2-cells do have coinciding source and target. And then compose these.

August 19, 2008 at 2:28 am

I guess the other thing to do would be to hope that the category of spaces you’re working in is equivalent to PL complexes, or simplicial complexes or something similar. Then it should be possible to define a 2-category with opetopic or simplicial 2-morphisms. Since only finitely many codimensions are actually involved, maybe it should be a special kind of n-category in dimension n, having only three nontrivial levels.

Even there, directions aren’t canonical. Is there some higher-dimensional version of “duals”, where you can change an out-face of a morphism into an in-face in those settings? Then you could just assume that morphisms with all possible labellings are present.