### TQFT

Well, it’s been a while, but it’s now a new semester here in Hamburg, and I wanted to go back and look at some of what we talked about in last semester’s research seminar. This semester, Susama Agarwala and I are sharing the teaching in a topics class on “Category Theory for Geometry”, in which I’ll be talking about categories of sheaves, and building up the technology for Susama to talk about Voevodsky’s theory of motives (enough to give a starting point to read something like this).

As for last semester’s seminar, one of the two main threads, the one which Alessandro Valentino and I helped to organize, was a look at some of the material needed to approach Jacob Lurie’s paper on the classification of topological quantum field theories. The idea was for the research seminar to present the basic tools that are used in that paper to a larger audience, mostly of graduate students – enough to give a fairly precise statement, and develop the tools needed to follow the proof. (By the way, for a nice and lengthier discussion by Chris Schommer-Pries about this subject, which includes more details on much of what’s in this post, check out this video.)

So: the key result is a slightly generalized form of the Cobordism Hypothesis.

### Cobordism Hypothesis

The theories which the paper classifies are those which “extend down to a point”. So what does this mean? A topological field theory can be seen as a sort of “quantum field theory up to homotopy”, which abstracts away any geometric information about the underlying space where the fields live – their local degrees of freedom.  We do this by looking only at the classes of fields up to the diffeomorphism symmetries of the space.  The local, geometric information gets thrown away by taking this quotient of the space of solutions.

In spite of reducing the space of fields this way, we want to capture the intuition that the theory is still somehow “local”, in that we can cut up spaces into parts and make sense of the theory on those parts separately, and determine what it does on a larger space by gluing pieces together, rather than somehow having to take account of the entire space at once, indissolubly. This reasoning should apply to the highest-dimensional space, but also to boundaries, and to any figures we draw on boundaries when cutting them up in turn.

Carrying this on to the logical end point, this means that a topological quantum field theory in the fully extended sense should assign some sort of data to every geometric entity from a zero-dimensional point up to an $n$-dimensional cobordism.  This is all expressed by saying it’s an $n$-functor:

$Z : Bord^{fr}_n(n) \rightarrow nAlg$.

Well, once we know what this means, we’ll know (in principle) what a TQFT is.  It’s less important, for the purposes of Lurie’s paper, what $nAlg$ is than what $Bord^{fr}_n(n)$ is.  The reason is that we want to classify these field theories (i.e. functors).  It will turn out that $Bord^{fr}_n(n)$ has the sort of structure that makes it easy to classify the functors out of it into any target $n$-category $\mathcal{C}$.  A guess about what kind of structure is actually there was expressed by Baez and Dolan as the Cobordism Hypothesis.  It’s been slightly rephrased from the original form to get a form which has a proof.  The version Lurie proves says:

The $(\infty,n)$-category $Bord^{fr}_n(n)$ is equivalent to the free symmetric monoidal $(\infty,n)$-category generated by one fully-dualizable object.

The basic point is that, since $Bord^{fr}_n(n)$ is a free structure, the classification means that the extended TQFT’s amount precisely to the choice of a fully-dualizable object of $\mathcal{C}$ (which includes a choice of a bunch of morphisms exhibiting the “dualizability”). However, to make sense of this, we need to have a suitable idea of an $(\infty,n)$-category, and know what a fully dualizable object is. Let’s begin with the first.

### $(\infty,n)$-Categories

In one sense, the Cobordism Hypothesis, which was originally made about $n$-categories at a time when these were only beginning to be defined, could be taken as a criterion for an acceptable definition. That is, it expressed an intuition which was important enough that any definition which wouldn’t allow one to prove the Cobordism Hypothesis in some form ought to be rejected. To really make it work, one had to bring in the “infinity” part of $(\infty,n)$-categories. The point here is that we are talking about category-like structures which have morphisms between objects, 2-morphisms between morphisms, and so on, with $j$-morphisms between $(j-1)$-morphisms for every possible degree. The inspiration for this comes from homotopy theory, where one has maps, homotopies of maps, homotopies of homotopies, etc.

Nowadays, there are several possible concrete models for $(\infty,n)$-categories (see this survey article by Julie Bergner for a summary of four of them). They are all equivalent definitions, in a suitable up-to-homotopy way, but for purposes of the proof, Lurie is taking the definition that an $(\infty,n)$-category is an $n$-fold complete Segal space. One theme that shows up in all the definitions is that of simplicial methods. (In our seminar, we started with a series of two talks introducing the notions of simplicial sets, simplicial objects in a category, and Kan complexes. If you don’t already know this, essentially everything we need is nicely explained here.)

One of the underlying ideas is that a category $C$ can be associated with a simplicial set, its nerve $N(C)_{\bullet}$, where the set $N(C)_k$ of $k$-dimensional simplexes is just the set of composable $k$-tuples of morphisms in $C$. If $C$ is a groupoid (everything is invertible), then the simplicial set is a Kan complex – it satisfies some filling conditions, which ensure that any morphism has an inverse. Not every Kan complex is the nerve of a groupoid, but one can think of them as weak versions of groupoids – $\infty$-groupoids, or $(\infty,0)$-categories – where the higher morphisms may not be completely trivial (as with a groupoid), but where at least they’re all invertible. This leads to another desirable feature in any definition of $(\infty,n)$-category, which is the Homotopy Hypothesis: that the $(\infty,1)$-category of $(\infty,0)$-categories, also called $\infty$-groupoids, should be equivalent (in the same weak sense) to a category of Hausdorff spaces with some other nice properties, which we call $\mathbf{Top}$ for short. This is true of Kan complexes.
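To make the nerve concrete, here is a small Python sketch (the category and all names are invented purely for illustration) which computes the sets $N(C)_k$ of composable $k$-tuples for the category freely generated by $0 \rightarrow 1 \rightarrow 2$:

```python
from itertools import product

# Toy category freely generated by 0 --f--> 1 --g--> 2:
# six morphisms in all, once we include identities and the composite gf.
objects = [0, 1, 2]
morphisms = {  # name -> (source, target)
    "id0": (0, 0), "id1": (1, 1), "id2": (2, 2),
    "f": (0, 1), "g": (1, 2), "gf": (0, 2),
}

def nerve(k):
    """N(C)_k: the set of composable k-tuples of morphisms."""
    if k == 0:
        return [(o,) for o in objects]  # 0-simplexes are the objects
    return [combo for combo in product(morphisms, repeat=k)
            if all(morphisms[combo[i]][1] == morphisms[combo[i + 1]][0]
                   for i in range(k - 1))]

print(len(nerve(0)), len(nerve(1)), len(nerve(2)))  # 3 6 10
```

The inner face maps $d_i$ would then compose adjacent morphisms in a tuple; when the category is a groupoid, the resulting simplicial set satisfies the Kan filling conditions.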

Thus, up to homotopy, specifying an $\infty$-groupoid is the same as specifying a space.

The data which defines a Segal space (a notion first explicitly defined by Charles Rezk) is a simplicial space $X_{\bullet}$: for each $n$, there is a space $X_n$, thought of as the space of composable $n$-tuples of morphisms. To keep things tame, we suppose that $X_0$, the space of objects, is discrete – that is, we have only a set of objects. Being a simplicial space means that the $X_n$ come equipped with a collection of face maps $d_i : X_n \rightarrow X_{n-1}$, which we should think of as compositions: to get from an $n$-tuple to an $(n-1)$-tuple of morphisms, one can compose two adjacent morphisms together at any of the $(n-1)$ positions in the tuple.

The “weakening” which makes a Segal space a weaker notion than just a category lies in the fact that the $X_n$ cannot be arbitrary, but must be homotopy equivalent to the “actual” space of $n$-tuples, which is the strict pullback $X_1 \times_{X_0} \dots \times_{X_0} X_1$. That is, in a Segal space, the pullback which defines these tuples for a category is weakened to a homotopy pullback. Combining this with the various face maps, we therefore get a weakened notion of composition: $X_1 \times_{X_0} \dots \times_{X_0} X_1 \cong X_n \rightarrow X_1$. Because we start by replacing the space of $n$-tuples with the homotopy-equivalent $X_n$, the composition rule will only satisfy the relations which define composition (associativity, for instance) up to homotopy.
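In symbols (modulo details which vary between models), the Segal condition says that for each $n \geq 2$ the map induced by the $n$ inclusions of the “spine” edges $\{i-1,i\} \hookrightarrow \{0,\dots,n\}$,

$\varphi_n : X_n \rightarrow X_1 \times^{h}_{X_0} \dots \times^{h}_{X_0} X_1$,

is a homotopy equivalence, where $\times^h$ denotes the homotopy pullback.  Composition then comes from choosing a homotopy inverse to $\varphi_2$ and following it with the remaining face map: $X_1 \times^{h}_{X_0} X_1 \rightarrow X_2 \stackrel{d_1}{\rightarrow} X_1$.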

To be complete, the Segal space must have a notion of equivalence for $X_{\bullet}$ which agrees with that for Kan complexes seen as $\infty$-groupoids. In particular, there is a sub-simplicial object $Core(X_{\bullet})$, which we understand to consist of the spaces of invertible $k$-morphisms. Since there should be nothing interesting happening above the top dimension, we ask that, for these spaces, the face and degeneracy maps are all homotopy equivalences: up to homotopy, the space of invertible higher morphisms has no new information.

Then, an $n$-fold complete Segal space is defined recursively, just as one might define $n$-categories (without the infinitely many layers of invertible morphisms “at the top”). In that case, we might say that a double category is just a category internal to $\mathbf{Cat}$: it has a category of objects, and a category of morphisms, and the various maps and operations, such as composition, which make up the definition of a category are all defined as functors. That turns out to be the same as a structure with objects, horizontal and vertical morphisms, and square-shaped 2-cells. If we insist that the category of objects is discrete (i.e. really just a set, with no interesting morphisms), then the result amounts to a 2-category. Then we can define a 3-category to be a category internal to $\mathbf{2Cat}$ (whose 2-category of objects is discrete), and so on. This approach really defines an $n$-fold category (see e.g. Chapter 5 of Cheng and Lauda to see a variation of this approach, due to Tamsamani and Simpson), but imposing the condition that the objects really amount to a set at each step gives exactly the usual intuition of a (strict!) $n$-category.

This is exactly the approach we take with $n$-fold complete Segal spaces, except that some degree of weakness is automatic. Since a C.S.S. is a simplicial object with some properties (we separately define objects of $k$-tuples of morphisms for every $k$, and all the various composition operations), the same recursive approach leads to a definition of an “$n$-fold complete Segal space” as simply a simplicial object in $(n-1)$-fold C.S.S.’s (with the same properties), such that the objects form a set. In principle, this gives a big class of “spaces of morphisms” one needs to define – one for every $n$-fold product of simplexes of any dimension – but all those requirements that any space of objects “is just a set” (i.e. is homotopy-equivalent to a discrete set of points) simplifies things a bit.

### Cobordism Category as $(\infty,n)$-Category

So how should we think of cobordisms as forming an $(\infty,n)$-category? There are a few stages in making a precise definition, but the basic idea is simple enough. One starts with manifolds and cobordisms embedded in some fixed finite-dimensional vector space $V \times \mathbb{R}^n$, and then takes a limit over all $V$. In each $V \times \mathbb{R}^n$, the coordinates of the $\mathbb{R}^n$ factor give $n$ ways of cutting the cobordism into pieces, and gluing them back together defines composition in a different direction. Now, this won’t actually produce a complete Segal space: one has to take a certain kind of completion. But the idea is intuitive enough.

We want to define an $n$-fold C.S.S. of cobordisms (and cobordisms between cobordisms, and so on, up to $n$-morphisms). To start with, think of the case $n=1$: then the space of objects of $Bord^{fr}_1(1)$ consists of all embeddings of a $(d-1)$-dimensional manifold into $V$ (for $d=1$, these are collections of points). The space of $k$-simplexes (of $k$-tuples of morphisms) consists of all ways of cutting up a $d$-dimensional cobordism embedded in $V \times \mathbb{R}$ by choosing $t_0, \dots , t_{k-2}$: we think of the cobordism as having been glued from pieces, where at each slice $V \times \{ t_i \}$ we have the object at which two adjacent pieces were composed. (One has to be careful to specify that the Morse function on the cobordisms, given by projection onto $\mathbb{R}$, has its critical points away from the $t_i$ – the generic case – to make sure that the objects where gluing happens are actual manifolds.)

Now, what about the higher morphisms of the $(\infty,1)$-category? The point is that one needs to have an $\infty$-groupoid – that is, a space! – of morphisms between two cobordisms $M$ and $N$. To make sense of this, we just take the space $Diff(M,N)$ of diffeomorphisms – not just as a set of morphisms, but including its topology as well. The higher morphisms, therefore, can be thought of precisely as paths, homotopies, homotopies between homotopies, and so on, in these spaces. So the essential difference between the 1-category of cobordisms and the $(\infty,1)$-category is that in the first case, morphisms are diffeomorphism classes of cobordisms, whereas in the latter, the higher morphisms are made precisely of the space of diffeomorphisms which we quotient out by in the first case.

Now, $(\infty,n)$-categories can have non-invertible morphisms between morphisms all the way up to dimension $n$, after which everything is invertible. An $n$-fold C.S.S. does this by taking the definition of a complete Segal space and copying it inside $(n-1)$-fold C.S.S.’s: that is, one has an $(n-1)$-fold Complete Segal Space of $k$-tuples of morphisms for each $k$, these form a simplicial object, and so forth.

Now, if we want to build an $(\infty,n)$-category $Bord^{fr}_n(n)$ of cobordisms, the idea is the same, except that we have a simplicial object, in a category of simplicial objects, and so on. However, the way to define this is essentially similar. To specify an $n$-fold C.S.S., we have to specify a whole collection of spaces associated to cobordisms equipped with embeddings into $V \times \mathbb{R}^n$. In particular, for each tuple $(k_1,\dots,k_n)$, we have the space of such embeddings, such that for each $i = 1 \dots n$ one has $k_i$ special points $t_{i,j}$ along the $i^{th}$ coordinate axis. These are the ways of breaking down a given cobordism into a composite of $k_i +1$ pieces. Again, one has to make sure that the critical points of the Morse functions defined by the projections onto these coordinate axes avoid the special $t_{i,j}$ which define the manifolds where gluing takes place. The composition maps which make these into a simplicial object are quite natural – they just come from deleting special points.

Finally, we take a limit over all $V$ (to get around limits to embeddings due to the dimension of $V$). So we know (at least abstractly) what the $(\infty,n)$-category of cobordisms should be. The cobordism hypothesis claims it is equivalent to one defined in a free, algebraically-flavoured way, namely as the free symmetric monoidal $(\infty,n)$-category on a fully-dualizable object. (That object is “the point” – which, up to the kind of homotopically-flavoured equivalence that matters here, is the only object when our highest-dimensional cobordisms have dimension $n$).

### Dualizability

So what does that mean, a “fully dualizable object”?

First, to get the idea, let’s think of the 1-dimensional example.  Instead of “$(\infty,n)$-category”, we would like to just think of this as a statement about a category.  Then $Bord^{fr}_1(1)$ is the 1-category of framed bordisms. For a manifold (or cobordism, which is a manifold with boundary), a framing is a trivialization of the tangent bundle.  That is, it amounts to a choice of isomorphism at each point between the tangent space there and the corresponding $\mathbb{R}^n$.  So the objects of $Bord^{fr}_1(1)$ are collections of (signed) points, and the morphisms are equivalence classes of framed 1-dimensional cobordisms.  These amount to oriented 1-manifolds with boundary, where the points (objects) on the boundary are the source and target of the cobordism.

Now we want to classify what TQFT’s live on this category.  These are functors $Z : Bord^{fr}_1(1) \rightarrow Vect$.  We have two generating objects, $+$ and $-$, the two signed points.  A TQFT must assign these objects vector spaces, which we’ll call $V$ and $W$.  Collections of points get assigned tensor products of all the corresponding vector spaces, since the functor is monoidal, so knowing these two vector spaces determines what $Z$ does to all objects.

What does $Z$ do to morphisms?  Well, some generating morphisms of interest are cups and caps: these are lines which connect a positive to a negative point, but thought of as cobordisms taking two points to the empty set, and vice versa.  That is, we have an evaluation:

$ev: W \otimes V \rightarrow \mathbb{C}$

and a coevaluation:

$coev: \mathbb{C} \rightarrow V \otimes W$

Now, since cobordisms are taken up to equivalence, which in particular includes topological deformations, we get a bunch of relations which these have to satisfy.  The essential one is the “zig-zag” identity, reflecting the fact that a bent line can be straightened out, giving the same 1-morphism in $Bord^{fr}_1(1)$.  This implies that:

$(ev \otimes id) \circ (id \otimes coev) : W \rightarrow W \otimes V \otimes W \rightarrow W$

is the same as the identity.  This in turn means that the evaluation and coevaluation maps define a nondegenerate pairing between $V$ and $W$.  The fact that this exists means two things.  First, $W$ is the dual of $V$: $W \cong V^*$.  Second, this only makes sense if both $V$ and its dual are finite dimensional (since composing the coevaluation with the evaluation computes the trace of the identity on $V$, which is not even defined if $V$ is infinite dimensional).

On the other hand, once we know $V$, this determines $W \cong V^*$ up to isomorphism, as well as the evaluation and coevaluation maps.  In fact, this turns out to be enough to specify $Z$ entirely.  The classification then is: 1-D TQFT’s are classified by finite-dimensional vector spaces $V$.  Crucially, what made finiteness important is the existence of the dual $V^*$ and the (co)evaluation maps which express the duality.  This is the statement which gets generalized in saying that $n$-dimensional TQFT’s are classified by “fully” dualizable objects.
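For $\mathcal{C} = Vect$, the zig-zag identity can even be checked numerically. Here is a toy NumPy sketch (my own illustration, not anything from the paper): take $V = \mathbb{R}^3$ and $W = V^*$, represent $ev$ by the pairing matrix and $coev$ by the “vectorized” identity, and verify that the composite $(ev \otimes id) \circ (id \otimes coev)$ is the identity on $W$:

```python
import numpy as np

n = 3
I = np.eye(n)

# ev : W (x) V -> R, as a (1, n*n) matrix: ev(e^a (x) e_b) = delta_ab
ev = I.reshape(1, n * n)
# coev : R -> V (x) W, as an (n*n, 1) matrix: 1 |-> sum_i e_i (x) e^i
coev = I.reshape(n * n, 1)

# zig-zag composite (ev (x) id_W) o (id_W (x) coev) : W -> W
zigzag = np.kron(ev, I) @ np.kron(I, coev)

print(np.allclose(zigzag, I))  # True: the bent line straightens to id_W
```

Since $ev \circ swap \circ coev$ multiplies by the trace of the identity, $\dim V$, one sees directly why infinite-dimensional $V$ is excluded.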

In an $(\infty,n)$-category, to say that an object is “fully dualizable” means more that the object has a dual (which, itself, implies the existence of the morphisms $ev$ and $coev$). It also means that $ev$ and $coev$ have duals themselves – or rather, since we’re talking about morphisms, “adjoints”. This in turn implies the existence of 2-morphisms which are the unit and counit of the adjunctions (the defining properties are essentially the same as those for morphisms which define a dual). In fact, every time we get a morphism of degree less than $n$ in this process, “fully dualizable” means that it too must have a dual (i.e. an adjoint).

This does run out eventually, though, since we only require this goes up to dimension $(n-1)$: the $n$-morphisms which this forces to exist (quite a few) aren’t required to have duals. This is good, because if they were, since all the higher morphisms available are invertible, this would mean that the dual $n$-morphisms would actually be weak inverses (that is, their composite is isomorphic to the identity)… But that would mean that the dual $(n-1)$-morphisms which forced them to exist would also be weak inverses (their composite would be weakly isomorphic to the identity)… and so on! In fact, if the property of “having duals” didn’t stop, then everything would be weakly invertible: we’d actually have a (weak) $\infty$-groupoid!

### Classifying TQFT’s

So finally, the point of the Cobordism Hypothesis is that a (fully extended) TQFT is a functor $Z$ out of this $Bord^{fr}_n(n)$ into some target $(\infty,n)$-category $\mathcal{C}$. There are various options, but whatever we pick, the functor must assign something in $\mathcal{C}$ to the point, say $Z(pt)$, and something to each of $ev$ and $coev$, as well as all the higher morphisms which must exist. Then functoriality means that all these images have to again satisfy the properties which make $Z(pt)$ a fully dualizable object. Furthermore, since $Bord^{fr}_n(n)$ is the free gadget with all these properties on the single object $pt$, this is exactly what it means for $Z$ to be a functor. Saying that $Z(pt)$ is fully dualizable, by implication, includes all the choices of morphisms like $Z(ev)$ etc. which exhibit it as fully dualizable. (Conceivably one could make the same object fully dualizable in more than one way – these would be different functors.)

So an extended $n$-dimensional TQFT is exactly the choice of a fully dualizable object $Z(pt) \in \mathcal{C}$, for some $(\infty,n)$-category $\mathcal{C}$. This object is “what the TQFT assigns to a point”, but if we understand the structure of the object as a fully dualizable object, then we know what the TQFT assigns to any other manifold of any dimension up to $n$, the highest dimension in the theory. This is how this algebraic characterization of cobordisms helps to classify such theories.

Well, as promised in the previous post, I’d like to give a summary of some of what was discussed at the conference I attended (quite a while ago now, late last year) in Erlangen, Germany.  I was there also to visit Derek Wise, talking about a project we’ve been working on for some time.

(I’ve also significantly revised this paper about Extended TQFT since then, and it now includes some stuff which was the basis of my talk at Erlangen on cohomological twisting of the category $Span(Gpd)$.  I’ll get to that in the next post.  Also coming up, I’ll be describing some new things I’ve given some talks about recently which relate the Baez-Dolan groupoidification program to Khovanov-Lauda categorification of algebras – at least in one example, hopefully in a way which will generalize nicely.)

In the meantime, there were a few themes at the conference which bear on the Extended TQFT project in various ways, so in this post I’ll describe some of them.  (This isn’t an exhaustive description of all the talks: just of a selection of illustrative ones.)

### Categories with Structures

A few talks were mainly about facts regarding the sorts of categories which get used in field theory contexts.  One important type, for instance, is that of fusion categories: a fusion category is a monoidal category which is enriched in vector spaces, generated by simple objects, and has some other properties: essentially, monoidal 2-vector spaces.  The basic examples would be categories of representations (of groups, quantum groups, algebras, etc.), but fusion categories are an abstraction of (some of) their properties.  Many of the standard properties are described and proved in this paper by Etingof, Nikshych, and Ostrik, which also poses one of the basic conjectures, the “ENO Conjecture”, which was referred to repeatedly in various talks.  This is the guess that every fusion category can be given a “pivotal” structure: an isomorphism from the identity functor to the double-dual functor $(-)^{**}$.  It generalizes the theorem that there’s always such an isomorphism to $(-)^{****}$.  More on this below.

Hendryk Pfeiffer talked about a combinatorial way to classify fusion categories in terms of certain graphs (see this paper here).  One way I understand this idea is to ask how much this sort of category really does generalize categories of representations, or actually comodules.  One starting point for this is the theorem that there’s a pair of functors between certain monoidal categories and weak Hopf algebras.  Specifically, the monoidal categories form $(Cat \downarrow Vect)^{\otimes}$, which consists of monoidal categories equipped with a forgetful functor into $Vect$.  Then from this one can get (via a coend) a weak Hopf algebra over the base field $k$ (in the category $WHA_k$).  From a weak Hopf algebra $H$, one can get back such a category by taking all the modules of $H$.  These two processes form an adjunction: they’re not inverses, but we have maps between the two composites and the identity functors.

The new result Hendryk gave is that if we restrict our categories over $Vect$ to be abelian, and the functors between them to be linear, faithful, and exact (that is, roughly, that we’re talking about concrete monoidal 2-vector spaces), then this adjunction is actually an equivalence: so essentially, all such categories $C$ may as well be module categories for weak Hopf algebras.  Then he gave a characterization of these in terms of the “dimension graph” (in fact a quiver) for $(C,M)$, where $M$ is one of the monoidal generators of $C$.  The vertices of $\mathcal{G} = \mathcal{G}_{(C,M)}$ are labelled by the irreducible representations $v_i$ (i.e. the set of generators of the category), and there’s a set of edges $j \rightarrow l$ labelled by a basis of $Hom(v_j, v_l \otimes M)$.  Then one can carry on and build a big graded algebra $H[\mathcal{G}]$ whose $m$-graded part consists of length-$m$ paths in $\mathcal{G}$.  Then the point is that the weak Hopf algebra of which $C$ is (up to isomorphism) the module category will be a certain quotient of $H[\mathcal{G}]$ (after imposing some natural relations in a systematic way).
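As a toy illustration of how the graded algebra $H[\mathcal{G}]$ is built (the quiver here is invented, purely to show the bookkeeping), the $m$-graded part is spanned by the length-$m$ paths, which can be enumerated directly:

```python
# Hypothetical dimension graph: vertices are the simple objects v_i,
# and each edge j -> l stands for a basis element of Hom(v_j, v_l (x) M).
edges = [("v0", "v1"), ("v1", "v0"), ("v1", "v1")]

def paths(m):
    """Length-m paths in the quiver: a basis of the m-graded part of H[G]."""
    current = [(e,) for e in edges]
    for _ in range(m - 1):
        current = [p + (e,) for p in current for e in edges
                   if p[-1][1] == e[0]]  # end of path meets start of edge
    return current

print(len(paths(1)), len(paths(2)))  # 3 5
```

Multiplication in $H[\mathcal{G}]$ concatenates paths when the endpoints match (and is zero otherwise); the weak Hopf algebra is then a quotient of this, after imposing the natural relations mentioned above.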

The point, then, is that the sort of categories mostly used in this area can be taken to be representation categories, but in general only of these weak Hopf algebras: groups and ordinary algebras are special cases, but they show up naturally for certain kinds of field theory.

### Tensor Categories and Field Theories

There were several talks about the relationship between tensor categories of various sorts and particular field theories.  The idea is that local field theories can be broken down in terms of some kind of $n$-category: $n$-dimensional regions get labelled by categories, $(n-1)$-D boundaries between regions, or “defects”, are labelled by functors between the categories (with the idea that this shows how two different kinds of field can couple together at the defect), and so on (I think the highest dimension that was discussed explicitly involved 3-categories, so one has junctions between defects, and junctions between junctions, which get assigned some higher-morphism data).  Alternatively, there’s the dual picture where categories are assigned to points, functors to 1-manifolds, and so on.  (This is just Poincaré duality in the case where the manifolds come with a decomposition into cells, which they often do, if only for convenience.)

Victor Ostrik gave a pair of talks giving an overview of the role tensor categories play in conformal field theory.  There’s too much material here to easily summarize, but the basics go like this: CFTs are field theories defined on cobordisms that have some conformal structure (i.e. notion of angles, but not distance), and on the algebraic side they are associated with vertex algebras (some useful discussion appears on mathoverflow, but in this context they can be understood as vector spaces equipped with exactly the algebraic operations needed to model cobordisms with some local holomorphic structure).

In particular, the irreducible representations of these VOA’s determine the “conformal blocks” of the theory, which tell us about possible correlations between observables (self-adjoint operators).  A VOA $V$ is “rational” if the category $Rep(V)$ is semisimple (i.e. generated as finite direct sums of these conformal blocks).  For good VOA’s, $Rep(V)$ will be a modular tensor category (MTC), which is a fusion category with a duality, braiding, and some other structure (see this for more).   So describing these gives us a lot of information about what CFT’s are possible.

The full data of a rational CFT are given by a vertex algebra, and a module category $M$: that is, a fusion category is a sort of categorified ring, so it can act on $M$ as a ring acts on a module.  It turns out that choosing an $M$ is equivalent to finding a certain algebra (i.e. algebra object) $\mathcal{L}$, a “Lagrangian algebra” inside the centre of $Rep(V)$.  The Drinfel’d centre $Z(C)$ of a monoidal category $C$ is a sort of free way to turn a monoidal category into a braided one: but concretely in this case it just looks like $Rep(V) \otimes Rep(V)^{\ast}$.  Knowing the isomorphism class $\mathcal{L}$ determines a “modular invariant”.  It gets “physics” meaning from how it’s equipped with an algebra structure (which can happen in more than one way), but in any case $\mathcal{L}$ has an underlying vector space, which becomes the Hilbert space of states for the conformal field theory, which the VOA acts on in the natural way.

Now, that was all conformal field theory.  Christopher Douglas described some work with Chris Schommer-Pries and Noah Snyder about fusion categories and structured topological field theories.  These are functors out of cobordism categories, which here are $n$-categories where the objects are points, morphisms are 1D cobordisms, and so on up to $n$-morphisms, which are $n$-dimensional cobordisms.  To keep things under control, Chris Douglas talked about the case $Bord_0^3$, which is where $n=3$, and a “local” field theory is a 3-functor $Bord_0^3 \rightarrow \mathcal{C}$ for some 3-category $\mathcal{C}$.  Now, the (Baez-Dolan) Cobordism Hypothesis, which was proved by Jacob Lurie, says that $Bord_0^3$ is, in a suitable sense, the free symmetric monoidal 3-category with duals.  What this amounts to is that a local field theory whose target 3-category is $\mathcal{C}$ is “just” a dualizable object of $\mathcal{C}$.

The handy example which links this up to the above is when $\mathcal{C}$ has objects which are tensor categories, morphisms which are bimodule categories (i.e. categories acted on by a tensor category on each side), 2-morphisms which are functors, and 3-morphisms which are natural transformations.  Then the issue is to classify what kind of tensor categories these objects can be.

The story is trickier if we’re talking about, not just topological cobordisms, but ones equipped with some kind of structure regulated by a structure group $G$ (for instance, orientation by $G=SO(n)$, spin structure by its universal cover $G= Spin(n)$, and so on).  This means the cobordisms come equipped with a map into $BG$.  They take $O(n)$ as the starting point, and then consider groups $G$ with a map to $O(n)$, and require that the map into $BG$ is a lift of the map to $BO(n)$.  Then one gets that a structured local field theory amounts to a dualizable object of $\mathcal{C}$ with a homotopy-fixed point for some $G$-action – and this describes what gets assigned to the point by such a field theory.  What they then show is a correspondence between $G$ and classes of categories.  For instance, fusion categories are what one gets by imposing that the cobordisms be oriented.

Liang Kong talked about “Topological Orders and Tensor Categories”, which used the Levin-Wen models, from condensed matter physics.  (Benjamin Balsam also gave a nice talk describing these models and showing how they’re equivalent to the Turaev-Viro and Kitaev models in appropriate cases.  Ingo Runkel gave a related talk about topological field theories with “domain walls”.)  Here, the idea of a “defect” (and topological order) can be understood very graphically: we imagine a 2-dimensional crystal lattice (of atoms, say), and the defect is a 1-dimensional place where the two lattices join together, with the internal symmetry of each breaking down at the boundary.  (For example, a square lattice glued where the edges on one side are offset and meet the squares on the other side in the middle of a face, as you typically see in a row of bricks – the slides linked above have some pictures.)  The Levin-Wen models are built using a hexagonal lattice, starting with a tensor category with several properties: spherical (there are dualities satisfying some relations), fusion, and unitary: in fact, historically, these defining properties were rediscovered independently here as the requirement for there to be excitations on the boundary which satisfy physically-inspired consistency conditions.

These abstract the properties of a category of representations.  A generalization of this to “topological orders” in 3D or higher is an extended TFT in the sense mentioned just above: they have a target 3-category of tensor categories, bimodule categories, functors and natural transformations.  The tensor categories (say, $\mathcal{C}$, $\mathcal{D}$, etc.) get assigned to the bulk regions; to “domain walls” between different regions, namely defects between lattices, we assign bimodule categories (but, for instance, to a line within a region, we get $\mathcal{C}$ understood as a $\mathcal{C}-\mathcal{C}$-bimodule); then to codimension 2 and 3 defects we attach functors and natural transformations.  The algebra for how these combine expresses the ways these topological defects can go together.  On a lattice, this is an abstraction of a spin network model, where typically we have just one tensor category $\mathcal{C}$ applied to the whole bulk, namely the representations of a Lie group (say, a unitary group).  Then we do calculations by breaking down into bases: on codimension-1 faces, these are simple objects of $\mathcal{C}$; to vertices we assign a Hom space (and label by a basis for intertwiners in the special case); and so on.

Thomas Nikolaus spoke about the same kind of $G$-equivariant Dijkgraaf-Witten models as at our workshop in Lisbon, so I’ll refer you back to my earlier post on that.  However, speaking of equivariance and group actions:

Michael Müger spoke about “Orbifolds of Rational CFT’s and Braided Crossed $G$-Categories” (see this paper for details).  This starts with the correspondence between rational CFT’s (strictly, rational chiral CFT’s) and modular categories $Rep(F)$.  (He takes $F$ to be the name of the CFT.)  Then we consider what happens if some finite group $G$ acts on $F$ (if we understand $F$ as a functor, this is an action by natural transformations; if as an algebra, then by automorphisms).  This produces an “orbifold theory” $F^G$ (just like a finite group action on a manifold produces an orbifold), which is the “$G$-fixed subtheory” of $F$, obtained by taking $G$-fixed points for every object, and is also a rational CFT.  But that means it corresponds to some other modular category $Rep(F^G)$, so one would like to know what category this is.

A natural guess might be that it’s $Rep(F)^G$, where $C^G$ is a “weak fixed-point” category that comes from a weak group action on a category $C$.  Objects of $C^G$ are pairs $(c,f_g)$ where $c \in C$ and $f_g : g(c) \rightarrow c$ is a specified isomorphism.  (This is a weak analog of $S^G$, the set of fixed points for a group acting on a set).  But this guess is wrong – indeed, it turns out these categories have the wrong dimension (which is defined because the modular category has a trace, which we can sum over generating objects).  Instead, the right answer, denoted by $Rep(F^G) = G-Rep(F)^G$, is the $G$-fixed part of some other category.  It’s a braided crossed $G$-category: one with a grading by $G$, and a $G$-action that gets along with it.  The identity-graded part of $Rep(F^G)$ is just the original $Rep(F)$.

## State Sum Models

This ties in somewhat with at least some of the models in the previous section.  Some of these talks were somewhat introductory, since many of the people at the conference were coming from a different background.  So, for instance, to begin the workshop, John Barrett gave a talk about categories and quantum gravity, which started by outlining the historical background, and the development of state-sum models.  He gave a second talk where he began to relate this to diagrams in Gray-categories (something he also talked about here in Lisbon in February, which I wrote about then).  He finished up with some discussion of spherical categories (and in particular the fact that there is a Gray-category of spherical categories, with a bunch of duals in the suitable sense).  This relates back to the kind of structures Chris Douglas spoke about (described above, but chronologically right after John).  Likewise, Winston Fairbairn gave a talk about state sum models in 3D quantum gravity – the Ponzano-Regge model and Turaev-Viro model being the focal point, describing how these work and how they’re constructed.  Part of the point is that one would like to see that these fit into the sort of framework described in the section above, which for PR and TV models makes sense, but for the fancier state-sum models in higher dimensions, this becomes more complicated.

## Higher Gauge Theory

There wasn’t as much on this topic as at our own workshop in Lisbon (though I have more remarks on higher gauge theory in one post about it), but there were a few entries.  Roger Picken talked about some work with Joao Martins about a cubical formalism for parallel transport based on crossed modules, which consist of a group $G$ and a group $H$, with a map $\partial : H \rightarrow G$ and an action of $G$ on $H$ satisfying some axioms.  They can represent categorical groups, namely group objects in $Cat$ (equivalently, categories internal to $Grp$), and are “higher” analogs of groups with a set of elements.  Roger’s talk was about how to understand holonomies and parallel transports in this context.  So, a “connection” lets one transport things with $G$-symmetries along paths, and with $H$-symmetries along surfaces.  It’s natural to describe this with squares whose edges are labelled by $G$-elements, and faces labelled by $H$-elements (which are the holonomies).  Then the “cubical approach” means that we can describe gauge transformations, and higher gauge transformations (which in one sense are the point of higher gauge theory) in just the same way: a gauge transformation which assigns $H$-values to edges and $G$-values to vertices can be drawn via the holonomies of a connection on a cube which extends the original square into 3D (so the edges become squares, and so get $H$-values, and so on).  The higher gauge transformations work in a similar way.  This cubical picture gives a good way to understand the algebra of how gauge transformations etc. work: so for instance, gauge transformations look like “conjugation” of a square by four other squares – namely, relating the front and back faces of a cube by means of the remaining faces.  Higher gauge transformations can be described by means of a 4D hypercube in an analogous way, and their algebraic properties have to do with the 2D faces of the hypercube.
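
Since crossed modules carry all the structure here, it may help to see the axioms checked on the standard example (my own toy illustration, not something from Roger’s talk): any normal subgroup $H \triangleleft G$, with $\partial$ the inclusion and $G$ acting by conjugation, forms a crossed module.

```python
from itertools import permutations

# A minimal sketch: a crossed module (G, H, ∂, ▷) consists of groups G, H,
# a homomorphism ∂: H → G, and an action ▷ of G on H by automorphisms with
#   (1) ∂(g ▷ h) = g ∂(h) g⁻¹        (equivariance)
#   (2) ∂(h) ▷ h' = h h' h⁻¹         (Peiffer identity)
# Standard example: H ◁ G normal, ∂ = inclusion, ▷ = conjugation.
# Here G = S3 (permutations as tuples), H = A3.

def compose(p, q):              # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                    # parity via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))        # S3
H = [p for p in G if sign(p) == 1]      # A3, normal in S3

def delta(h):                   # ∂ = inclusion
    return h

def act(g, h):                  # g ▷ h = g h g⁻¹
    return compose(g, compose(h, inverse(g)))

# verify both crossed-module axioms by brute force
for g in G:
    for h in H:
        assert delta(act(g, h)) == compose(g, compose(delta(h), inverse(g)))
for h in H:
    for h2 in H:
        assert act(delta(h), h2) == compose(h, compose(h2, inverse(h)))
print("crossed module axioms hold")
```

The same two axioms are what make the square-and-cube labellings above compose consistently.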

Derek Wise gave a short talk outlining his recent paper with John Baez in which they show that it’s possible to construct a higher gauge theory based on the Poincaré 2-group which turns out to have fields, and dynamics, which are equivalent to teleparallel gravity, a slightly unusual theory which nevertheless looks in practice just like General Relativity.  I discussed this in a previous post.

So next time I’ll talk about the new additions to my paper on ETQFT which were the basis of my talk, which illustrates a few of the themes above.

Now for a more sketchy bunch of summaries of some talks presented at the HGTQGR workshop.  I’ll organize this into a few themes which appeared repeatedly and which roughly line up with the topics in the title: in this post, variations on TQFT, plus 2-group and higher forms of gauge theory; in the next post, gerbes and cohomology, plus talks on discrete models of quantum gravity and suchlike physics.

## TQFT and Variations

I start here for no better reason than the personal one that it lets me put my talk first, so I’m on familiar ground to start with, for which reason also I’ll probably give more details here than later on.  So: a TQFT is a linear representation of the category of cobordisms – that is, a (symmetric monoidal) functor $nCob \rightarrow Vect$, in the notation I mentioned in the first school post.  An Extended TQFT is a higher functor $nCob_k \rightarrow k-Vect$, representing a category of cobordisms with corners into a higher category of k-Vector spaces (for some definition of same).  The essential point of my talk is that there’s a universal construction that can be used to build one of these at $k=2$, which relies on some way of representing $nCob_2$ into $Span(Gpd)$, whose objects are groupoids, and whose morphisms in $Hom(A,B)$ are pairs of groupoid homomorphisms $A \leftarrow X \rightarrow B$.  The 2-morphisms have an analogous structure.  The point is that there’s a 2-functor $\Lambda : Span(Gpd) \rightarrow 2Vect$ which takes representations of groupoids at the level of objects; for morphisms, there is a “pull-push” operation that just uses the restricted and induced representation functors to move a representation across a span; the non-trivial (but still universal) bit is the 2-morphism map, which uses the fact that the restriction and induction functors are biadjoint, so there are units and counits to use.  A construction using gauge theory gives groupoids of connections and gauge transformations for each manifold or cobordism.  This recovers a form of the Dijkgraaf-Witten model.  In principle, though, any way of getting a groupoid (really, a stack) associated to a space functorially will give an ETQFT this way.  I finished up by suggesting what would need to be done to extend this up to higher codimension.
To go to codimension 3, one would assign to an object (a codimension-3 manifold) a 3-vector space, which is a representation 2-category of 2-groupoids of connections valued in 2-groups, and so on.  There are some theorems about representations of n-groupoids which would need to be proved to make this work.

The fact that different constructions can give groupoids for spaces was used by the next speaker, Thomas Nikolaus, whose talk described another construction that uses the $\Lambda$ I mentioned above.  This one produces “Equivariant Dijkgraaf-Witten Theory”.  The point is that one gets groupoids for spaces in a new way.  Before, we had, for a space $M$ a groupoid $\mathcal{A}_G(M)$ whose objects are $G$-connections (or, put another way, bundles-with-connection) and whose morphisms are gauge transformations.  Now we suppose that there’s some group $J$ which acts weakly (i.e. an action defined up to isomorphism) on $\mathcal{A}_G(M)$.  We think of this as describing “twisted bundles” over $M$.  This is described by a quotient stack $\mathcal{A}_G // J$ (which, as a groupoid, gets some extra isomorphisms showing where two objects are related by the $J$-action).  So this gives a new map $nCob \rightarrow Span(Gpd)$, and applying $\Lambda$ gives a TQFT.  The generating objects for the resulting 2-vector space are “twisted sectors” of the equivariant DW model.  There was some more to the talk, including a description of how the DW model can be further mutated using a cocycle in the group cohomology of $G$, but I’ll let you look at the slides for that.

Next up was Jamie Vicary, who was talking about “(1,2,3)-TQFT”, which is another term for what I called “Extended” TQFT above, but specifying that the objects are 1-manifolds, the morphisms 2-manifolds, and the 2-morphisms are 3-manifolds.  He was talking about a theorem that identifies oriented TQFT’s of this sort with “anomaly-free modular tensor categories” – a correspondence which is widely believed, but in fact harder to prove than commonly thought.  It’s easy enough to see that such a TQFT $Z$ gives an MTC – it’s the category $Z(S^1)$ assigned to the circle.  What’s harder is showing that the TQFT’s are equivalent functors iff the categories are equivalent.  This boils down, historically, to the difficulty of showing the category is rigid.  Jamie was talking about a project with Bruce Bartlett and Chris Schommer-Pries, whose presentation of the cobordism category (described in the school post) was the basis of their proof.

Part of it amounts to giving a description of the TQFT in terms of certain string diagrams.  Jamie kindly credited me with describing this point of view to him: that the codimension-2 manifolds in a TQFT can be thought of as “boundaries in space” – codimension-1 manifolds are either time-evolving boundaries, or else slices of space in which the boundaries live; top-dimension cobordisms are then time-evolving slices of space-with-boundary.  (This should be only a heuristic way of thinking – certainly a generic TQFT has no literal notion of “time-evolution”, though since (2+1)-dimensional quantum gravity can be seen as a TQFT, there’s at least one case where this picture could be taken literally.)  Then part of their proof involves showing that the cobordisms can be characterized by taking vector spaces on the source and target manifolds spanned by the generating objects, and finding the functors assigned to cobordisms in terms of sums over all “string diagrams” (particle worldlines, if you like) bounded by the evolving boundaries.  Jamie described this as a “topological path integral”.  Then one has to describe the string diagram calculus – rigidity follows from the “yanking” rule, for instance, and this follows from Morse theory as in Chris’ presentation of the cobordism category.

There was a little more discussion about what the various properties (proved in a similar way) imply.  One is “cloaking” – the fact that a 2-morphism which “creates a handle” is invisible to the string diagrams in the sense that it introduces a sum over all diagrams with a string “looped” around the new handle, but this sum gives a result that’s equal to the original map (in any “pivotal” tensor category, as here).

Chronologically before all these, one of the first talks on such a topic was by Rafael Diaz, on Homological Quantum Field Theory, or HLQFT for short, which is a rather different sort of construction.  Remember that Homotopy QFT, as described in my summary of Tim Porter’s school sessions, is about linear representations of what I’ll for now call $Cob(d,B)$, whose morphisms are $d$-dimensional cobordisms equipped with maps into a space $B$ up to homotopy.  HLQFT instead considers cobordisms equipped with maps taken up to homology.

Specifically, there’s some space $M$, say a manifold, with some distinguished submanifolds (possibly boundary components; possibly just embedded submanifolds; possibly even all of $M$ for a degenerate case).  Then we define $Cob_d^M$ to have objects which are $(d-1)$-manifolds equipped with maps into $M$ which land on the distinguished submanifolds (to make composition work nicely, we in fact assume they map to a single point).  Morphisms in $Cob_d^M$ are trickier, and look like $(N,\alpha, \xi)$: a cobordism $N$ in this category is likewise equipped with a map $\alpha$ from its boundary into $M$ which recovers the maps on its objects.  That $\xi$ is a homology class of maps from $N$ to $M$, which agrees with $\alpha$.  This forms a monoidal category as with standard cobordisms.  Then HLQFT is about representations of this category.  One simple case Rafael described is the dimension-1 case, where objects are (ordered sets of) points equipped with maps that pick out chosen submanifolds of $M$, and morphisms are just braids equipped with homology classes of “paths” joining up the source and target submanifolds.  Then a representation might, e.g., describe how to evolve a homology class on the starting manifold to one on the target by transporting along such a path-up-to-homology.  In higher dimensions, the evolution is naturally more complicated.

A slightly looser fit to this section is the talk by Thomas Krajewski, “Quasi-Quantum Groups from Strings” (see this) – he was talking about how certain algebraic structures arise from “string worldsheets”, which are another way to describe cobordisms.  This does somewhat resemble the way an algebraic structure (Frobenius algebra) is related to a 2D TQFT, but here the string worldsheets are interacting with a 3-form field $H$ (the curvature of the 2-form field $B$ of string theory) and things needn’t be topological, so the result is somewhat different.

Part of the point is that quantizing such a thing gives a higher version of what happens for quantizing a moving particle in a gauge field.  In the particle case, one comes up with a line bundle (of which sections form the Hilbert space) and in the string case one comes up with a gerbe; for the particle, this involves an associated 2-cocycle, and for the string a 3-cocycle; for the particle, one ends up producing a twisted group algebra, and for the string, this is where one gets a “quasi-quantum group”.  The algebraic structures, as in the TQFT situation, come from, for instance, the “pants” cobordism which gives a multiplication and a comultiplication (by giving maps $H \otimes H \rightarrow H$ or the reverse, where $H$ is the object assigned to a circle).
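
To make the particle-level story concrete, here is a small sketch (my own, with an assumed cocycle, not one from the talk) of a twisted group algebra: a 2-cocycle $\omega$ twists the multiplication to $u_a u_b = \omega(a,b)\, u_{a+b}$, and the 2-cocycle condition is exactly what makes the twisted product associative.

```python
from itertools import product

# Twisted group algebra C_ω[A] on basis {u_a}: u_a u_b = ω(a,b) u_{a+b}.
# Associativity is equivalent to the 2-cocycle condition
#   ω(a,b) ω(a+b,c) = ω(b,c) ω(a,b+c).
# Example (assumed, for illustration): A = Z2 × Z2 with
# ω(a,b) = (-1)^(a₂·b₁), which twists the commutative algebra C[Z2×Z2]
# into a noncommutative one (it is spanned by the Pauli matrices).

A = list(product([0, 1], repeat=2))

def add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def omega(a, b):
    return (-1) ** (a[1] * b[0])

def mult(a, b):                 # u_a u_b = ω(a,b) u_{a+b} → (sign, element)
    return omega(a, b), add(a, b)

# 2-cocycle condition <=> associativity of the twisted product
for a, b, c in product(A, repeat=3):
    assert omega(a, b) * omega(add(a, b), c) == omega(b, c) * omega(a, add(b, c))

# the twist makes the algebra noncommutative: u_x u_y = -u_y u_x
x, y = (1, 0), (0, 1)
assert mult(x, y)[0] == -mult(y, x)[0] and mult(x, y)[1] == mult(y, x)[1]
print("twisted group algebra: associative but noncommutative")
```

The string-level “quasi-quantum group” repeats this pattern one level up, with a 3-cocycle relaxing associativity itself up to isomorphism.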

There is some machinery along the way which I won’t describe in detail, except that it involves a tricomplex of forms – the gradings being form degree, the degree of a cocycle for group cohomology, and the number of overlaps.  As observed before, gerbes and their higher versions have transition functions on higher numbers of overlapping local neighborhoods than mere bundles.  (See the paper above for more)

## Higher Gauge Theory

The talks I’ll summarize here touch on various aspects of higher-categorical connections or 2-groups (though at least one I’ll put off until later).  The division between this and the section on gerbes is a little arbitrary, since of course they’re deeply connected, but I’m making some judgements about emphasis or P.O.V. here.

Apart from giving lectures in the school sessions, John Huerta also spoke on “Higher Supergroups for String Theory”, which brings “super” (i.e. $\mathbb{Z}_2$-graded) objects into higher gauge theory.  There are “super” versions of vector spaces and manifolds, which decompose into “even” and “odd” graded parts (a.k.a. “bosonic” and “fermionic” parts).  Thus there are “super” variants of Lie algebras and Lie groups, which are like the usual versions, except commutation properties have to take signs into account (e.g. a Lie superalgebra’s bracket is commutative if the product of the grades of two vectors is odd, anticommutative if it’s even).  Then there are Lie 2-algebras and 2-groups as well – categories internal to this setting.  The initial question has to do with whether one can integrate some Lie 2-algebra structures to Lie 2-group structures on a spacetime, which depends on the existence of some globally smooth cocycles.  The point is that when spacetime is of certain special dimensions, this can work, namely dimensions 3, 4, 6, and 10.  These are all 2 more than the real dimensions of the four real division algebras, $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$ and $\mathbb{O}$.  It’s in these dimensions that Lie 2-superalgebras can be integrated to Lie 2-supergroups.  The essential reason is that a certain cocycle condition will hold because of the properties of a form on the Clifford algebras that are associated to the division algebras.  (John has some related material here and here, though not about the 2-group case.)
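
As a reminder of what is special about these algebras (my own aside, not part of John’s Clifford-algebra cocycle argument): the four division algebras are normed, i.e. $|ab| = |a||b|$, which one can spot-check numerically in the quaternion case.

```python
import math
import random

# Quaternions p = a + bi + cj + dk stored as 4-tuples (a, b, c, d).
# H is a normed division algebra: |pq| = |p||q| for all p, q.

def qmult(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qnorm(q):
    return math.sqrt(sum(x * x for x in q))

# multiplicativity of the norm, spot-checked on random quaternions
random.seed(0)
for _ in range(100):
    p = tuple(random.uniform(-1, 1) for _ in range(4))
    q = tuple(random.uniform(-1, 1) for _ in range(4))
    assert abs(qnorm(qmult(p, q)) - qnorm(p) * qnorm(q)) < 1e-9

# noncommutativity: ij = k but ji = -k
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
assert qmult(i, j) == (0, 0, 0, 1)
print("|pq| = |p||q| holds for quaternions")
```

For the octonions the same identity holds but associativity fails, which is ultimately why only these special dimensions admit the relevant cocycles.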

Since we’re talking about higher versions of Lie groups/algebras, an important bunch of concepts to categorify are those in representation theory.  Derek Wise spoke on “2-Group Representations and Geometry”, based on work with Baez, Baratin and Freidel, most fully developed here, but summarized here.  The point is to describe the representation theory of Lie 2-groups, in particular geometrically.  They’re to be represented on (in general, infinite-dimensional) 2-vector spaces of some sort, which is chosen to be a category of measurable fields of Hilbert spaces on some measure space, which is called $H^X$ (intended to resemble, but not exactly be the same as, $Hilb^X$, the space of “functors into $Hilb$ from the space $X$”, the way Kapranov-Voevodsky 2-vector spaces can be described as $Vect^k$).  The first work on this was by Crane and Sheppeard, and also Yetter.  One point is that for 2-groups, we have not only representations and intertwiners between them, but 2-intertwiners between these.  One can describe these geometrically – part of which is a choice of that measure space $(X,\mu)$.

This done, we can say that a representation of a 2-group is a 2-functor $\mathcal{G} \rightarrow H^X$, where $\mathcal{G}$ is seen as a one-object 2-category.  Thinking about this geometrically, if we concretely describe $\mathcal{G}$ by the crossed module $(G,H,\rhd,\partial)$, this data defines an action of $G$ on $X$, and a map $X \rightarrow H^*$ into the character group, which thereby becomes a $G$-equivariant bundle.  One consequence of this description is that it becomes possible to distinguish not only irreducible representations (bundles over a single orbit) and indecomposable ones (where the fibres are particularly simple homogeneous spaces), but an intermediate notion called “irretractible” (though it’s not clear how much this provides).  An intertwining operator between reps over $X$ and $Y$ can be described in terms of a bundle of Hilbert spaces – which is itself defined over the pullback of $X$ and $Y$ seen as $G$-bundles over $H^*$.  A 2-intertwiner is a fibre-wise map between two such things.  This geometric picture specializes in various ways for particular examples of 2-groups.  A physically interesting one, introduced by Crane and Sheppeard and expanded on in that paper of [BBFW] up above, deals with the Poincaré 2-group, and where irreducible representations live over mass-shells in Minkowski space (or rather, the dual of $H \cong \mathbb{R}^{3,1}$).

Moving on from 2-group stuff, there were a few talks related to 3-groups and 3-groupoids.  There are some new complexities that enter here, because while (weak) 2-categories are all (bi)equivalent to strict 2-categories (where things like associativity and the interchange law for composing 2-cells hold exactly), this isn’t true for 3-categories.  The best strictification result is that any 3-category is (tri)equivalent to a Gray category – where all those properties hold exactly, except for the interchange law $(\alpha \circ \beta) \cdot (\alpha ' \circ \beta ') = (\alpha \cdot \alpha ') \circ (\beta \cdot \beta ')$ for horizontal and vertical compositions of 2-cells, which is replaced by an “interchanger” isomorphism with some coherence properties.  John Barrett gave an introduction to this idea and spoke about “Diagrams for Gray Categories”, describing how to represent morphisms, 2-morphisms, and 3-morphisms in terms of higher versions of “string” diagrams involving (piecewise linear) surfaces satisfying some properties.  He also carefully explained how to reduce the dimensions in order to make them both clearer and easier to draw.  Bjorn Gohla spoke on “Mapping Spaces for Gray Categories”, but since it was essentially a shorter version of a talk I’ve already posted about, I’ll leave that for now, except to point out that it linked to the talk by Joao Faria Martins, “3D Holonomy” (though see also this paper with Roger Picken).
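
For contrast with the weakened Gray-category situation, it may help to see the interchange law holding strictly in a familiar one-object case (a toy check of mine, not from the talks): in $Vect$, with $\otimes$ as horizontal composition and matrix multiplication as vertical composition, interchange is just the mixed-product property of the Kronecker product.

```python
import numpy as np

# In the one-object 2-category Vect (objects: the point; 1-cells: vector
# spaces under ⊗; 2-cells: linear maps), the interchange law
#   (α ⊗ β) · (α' ⊗ β') = (α · α') ⊗ (β · β')
# holds on the nose -- it is the Kronecker product's mixed-product property.
# In a Gray category this equality is weakened to an "interchanger" iso.

rng = np.random.default_rng(0)
a, a2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
b, b2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

lhs = np.kron(a, b) @ np.kron(a2, b2)   # compose horizontally, then vertically
rhs = np.kron(a @ a2, b @ b2)           # compose vertically, then horizontally
assert np.allclose(lhs, rhs)
print("interchange holds strictly in Vect")
```

It is exactly this equation that a general tricategory refuses to satisfy strictly, forcing the interchanger.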

The point in Joao’s talk starts with the fact that we can describe holonomies for 3-connections on 3-bundles valued in Gray-groups (i.e. the maximally strict form of a general 3-group) in terms of Gray-functors $hol: \Pi_3(M) \rightarrow \mathcal{G}$.  Here, $\Pi_3(M)$ is the fundamental 3-groupoid of $M$, which turns points, paths, homotopies of paths, and homotopies of homotopies into a Gray groupoid (modulo some technicalities about “thin” or “laminated”  homotopies) and $\mathcal{G}$ is a gauge Gray-group.  Just as a 2-group can be represented by a crossed module, a Gray (3-)group can be represented by a “2-crossed module” (yes, the level shift in the terminology is occasionally confusing).  This is a chain of groups $L \stackrel{\delta}{\rightarrow} E \stackrel{\partial}{\rightarrow} G$, where $G$ acts on the other groups, together with some structure maps (for instance, the Peiffer commutator for a crossed module becomes a lifting $\{ ,\} : E \times E \rightarrow L$) which all fit together nicely.  Then a tri-connection can be given locally by forms valued in the Lie algebras of these groups: $(\omega , m ,\theta)$ in  $\Omega^1 (M,\mathfrak{g} ) \times \Omega^2 (M,\mathfrak{e}) \times \Omega^3(M,\mathfrak{l})$.  Relating the global description in terms of $hol$ and local description in terms of $(\omega, m, \theta)$ is a matter of integrating forms over paths, surfaces, or 3-volumes that give the various $j$-morphisms of $\Pi_3(M)$.  This sort of construction of parallel transport as functor has been developed in detail by Waldorf and Schreiber (viz. these slides, or the full paper), some time ago, which is why, thematically, they’re the next two speakers I’ll summarize.

Konrad Waldorf spoke about “Abelian Gauge Theories on Loop Spaces and their Regression”.  (For more, see two papers by Konrad on this)  The point here is that there is a relation between two kinds of theories – string theory (with $B$-field) on a manifold $M$, and ordinary $U(1)$ gauge theory on its loop space $LM$.  The relation between them goes by the name “regression” (passing from gauge theory on $LM$ to string theory on $M$), or “transgression”, going the other way.  This amounts to showing an equivalence of categories between [principal $U(1)$-bundles with connection on $LM$] and [$U(1)$-gerbes with connection on $M$].  This nicely gives a way of seeing how gerbes “categorify” bundles, since passing to the loop space – whose points are maps $S^1 \rightarrow M$ – means a holonomy functor is now looking at objects (points in $LM$) which would be morphisms in the fundamental groupoid of $M$, and morphisms which are paths of loops (surfaces in $M$ which trace out homotopies).  So things are shifted by one level.  Anyway, Konrad explained how this works in more detail, and how it should be interpreted as relating connections on loop space to the $B$-field in string theory.

Urs Schreiber kicked the whole categorification program up a notch by talking about $\infty$-Connections and their Chern-Simons Functionals .  So now we’re getting up into $\infty$-categories, and particularly $\infty$-toposes (see Jacob Lurie’s paper, or even book if so inclined to find out what these are), and in particular a “cohesive topos”, where derived geometry can be developed (Urs suggested people look here, where a bunch of background is collected). The point is that $\infty$-topoi are good for talking about homotopy theory.  We want a setting which allows all that structure, but also allows us to do differential geometry and derived geometry.  So there’s a “cohesive” $\infty$-topos called $Smooth\infty Gpds$, of “sheaves” (in the $\infty$-topos sense) of $\infty$-groupoids on smooth manifolds.  This setting is the minimal common generalization of homotopy theory and differential geometry.

This is a higher analog of a familiar setup: since there’s a smooth classifying space (in fact, a Lie groupoid) for $G$-bundles, $BG$, there’s an equivalence between the category $G-Bund$ of $G$-principal bundles over a space $X$, and $SmoothGpd(X,BG)$ (of functors into $BG$).  Moreover, there’s a similar setup with $BG_{conn}$ for bundles with connection.  This can be described topologically, or there’s also a “differential refinement” to talk about the smooth situation.  This equivalence lives within a category of (smooth) sheaves of groupoids.  For higher gauge theory, we want a higher version as in $Smooth \infty Gpds$ described above.  Then we should get an equivalence – in this cohesive topos – of $hom(X,B^n U(1))$ and a category of $U(1)$-$(n-1)$-gerbes.

Then the part about the  “Chern-Simons functionals” refers to the fact that CS theory for a manifold (which is a kind of TQFT) is built using an action functional that is found as an integral of the forms that describe some $U(1)$-connection over the manifold.  (Then one does a path-integral of this functional over all connections to find partition functions etc.)  So the idea is that for these higher $U(1)$-gerbes, whose classifying spaces we’ve just described, there should be corresponding functionals.  This is why, as Urs remarked in wrapping up, this whole picture has an explicit presentation in terms of forms.  Actually, in terms of Cech-cocycles (due to the fact we’re talking about gerbes), whose coefficients are taken in sheaves of complexes (this is the derived geometry part) of differential forms whose coefficients are in $L_\infty$-algebroids (the $\infty$-groupoid version of Lie algebras, since in general we’re talking about a theory with gauge $\infty$-groupoids now).

Whew!  Okay, that’s enough for this post.  Next time, wrapping up blogging the workshop, finally.

I’d like to continue describing the talks that made up the HGTQGR workshop, in particular the ones that took place during the school portion of the event.  I’ll save one “school” session, by Laurent Freidel, to discuss with the talks because it seems to more nearly belong there. This leaves five people who gave between two and four lectures each over a period of a few days, all intermingled. Here’s a very rough summary in the order of first appearance:

## 2D Extended TQFT

Chris Schommer-Pries gave the longest series of talks, about the classification of 2D extended TQFT’s.  A TQFT is a kind of topological invariant for manifolds, which has a sort of “locality” property, in that you can decompose the manifold, compute the invariant on the parts, and find the whole by gluing the pieces back together.  This is expressed by saying it’s a monoidal functor $Z : (Cob_d, \sqcup) \rightarrow (Vect, \otimes)$, where the “locality” property is now the functoriality property that composition is preserved.  The key thing here is the cobordism category $Cob_d$, which has objects (d-1)-dimensional manifolds, and morphisms d-dimensional cobordisms (manifolds with boundary, where the objects are components of the boundary).  Then a closed d-manifold is just a cobordism from $\emptyset$ to itself.
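
For the non-extended case in dimension 2 this definition becomes completely concrete, since a 2D TQFT is determined by the commutative Frobenius algebra $Z(S^1)$.  Here is a small sketch (my own example, not from Chris’ talks) computing the invariant of a closed genus-$g$ surface from the group algebra $\mathbb{C}[\mathbb{Z}/2]$, by decomposing the surface into a cap, $g$ handles, and a cup.

```python
# A 2D TQFT Z : Cob_2 → Vect amounts to a commutative Frobenius algebra
# A = Z(S¹).  Take A = C[Z/2] with basis e0, e1, product e_i e_j = e_{i+j mod 2},
# unit e0, and counit ε(x·e0 + y·e1) = x.  A closed genus-g surface is the
# cobordism ∅ → ∅ built from the cap (unit), g handles (comultiply then
# multiply), and the cup (counit), so Z(Σ_g) = ε(hᵍ(1)) with h = m ∘ Δ.

def mult(v, w):                 # product in C[Z/2], coordinates [x, y]
    return [v[0]*w[0] + v[1]*w[1], v[0]*w[1] + v[1]*w[0]]

def counit(v):
    return v[0]

def comult(v):
    # Δ is dual to m via the pairing ⟨a,b⟩ = ε(ab); this basis is self-dual,
    # so Δ(v) = Σ_{i,j} ε(v e_i e_j) e_i ⊗ e_j, returned as a 2×2 matrix.
    basis = [[1, 0], [0, 1]]
    return [[counit(mult(v, mult(basis[i], basis[j])))
             for j in range(2)] for i in range(2)]

def handle(v):                  # handle operator h = m ∘ Δ
    t = comult(v)
    basis = [[1, 0], [0, 1]]
    out = [0, 0]
    for i in range(2):
        for j in range(2):
            prod = mult(basis[i], basis[j])
            out = [out[0] + t[i][j] * prod[0], out[1] + t[i][j] * prod[1]]
    return out

def Z(genus):                   # invariant of the closed genus-g surface
    v = [1, 0]                  # cap ∅ → S¹ picks out the unit
    for _ in range(genus):
        v = handle(v)
    return counit(v)            # cup S¹ → ∅ applies the counit

print([Z(g) for g in range(4)])   # → [1, 2, 4, 8]: Z(Σ_g) = 2^g for C[Z/2]
```

Note the sanity check $Z(T^2) = 2 = \dim A$, as the torus invariant of any Frobenius algebra is its dimension; “locality” is visible in that the answer is assembled handle by handle.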

Making this into a category is actually a bit nontrivial: gluing bits of smooth manifolds, for instance, won’t necessarily give something smooth.  There are various ways of handling this, such as giving the boundaries “collars”, but Chris’ preferred method is to give boundaries (and, ultimately, corners, etc.) a “halation”.  This word originally means the halo of light around bright things you sometimes see in photos, but in this context, a halation for $X$ is an equivalence class of embeddings into neighborhoods $U \subset \mathbb{R}^d$.  The equivalence class says two such embeddings into $U$ and $V$ are equivalent if there’s a compatible refinement into some common $W$ that embeds into both $U$ and $V$.  The idea is that a halation is a kind of d-dimensional “halo”, or the “germ of a d-manifold” around $X$.  Then gluing compatibly along (d-1)-boundaries with halations ensures that we get smooth d-manifolds.  (One can also extend this setup so that everything in sight is oriented, or has some other such structure on it.)

In any case, an extended TQFT will then mean an n-functor $Z : (Bord_d,\sqcup) \rightarrow (\mathcal{C},\otimes)$, where $(\mathcal{C},\otimes)$ is some symmetric monoidal n-category (which is supposed to be similar to $Vect$).  Its exact nature is less important than that of $Bord_d$, which has:

• 0-Morphisms (i.e. Objects): 0-manifolds (collections of points)
• 1-Morphisms: 1-dimensional cobordisms between 0-manifolds (curves)
• 2-Morphisms: 2-dim cobordisms with corners between 1-Morphisms (surfaces with boundary)
• d-Morphisms: d-dimensional cobordisms between (d-1)-Morphisms (d-manifolds with corners), up to isomorphism

(Note: the distinction between “Bord” and “Cobord” is basically a matter of when a given terminology came in.  “Cobordism” and “Bordism”, unfortunately, mean the same thing, except that “bordism” has become popular more recently, since the “co” makes it sound like it’s the opposite category of something else.  This is kind of regrettable, but that’s what happened.  Sorry.)

The crucial point is that Chris wanted to classify all such things, and his approach to this is to give a presentation of $Bord_d$.  This is based on stuff in his thesis.  The basic idea is to use Morse theory, and its higher-dimensional generalization, Cerf theory.  The idea is that one can put a Morse function on a cobordism (essentially, a well-behaved “time order” on points) and look at its critical points.  Classifying these tells us what the generators for the category of cobordisms must be: there need to be enough to capture all the most general sorts of critical points.
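To make the Morse-theoretic idea concrete, here is a minimal sketch (my own illustration, not from the lectures): for a hand-picked function $f(x,y) = x^3 - 3x + y^2$, we find the critical points and classify each by its index, the number of negative eigenvalues of the Hessian. The gradient and Hessian are hand-computed for this particular $f$.

```python
# Sketch: classifying critical points of a Morse function by index.
# f(x, y) = x^3 - 3x + y^2 is an illustrative choice; its gradient and
# Hessian below are computed by hand for this specific function.

def grad(x, y):
    return (3 * x**2 - 3, 2 * y)

def hessian(x, y):
    # Diagonal for this f, so the eigenvalues are just the diagonal entries.
    return [[6 * x, 0], [0, 2]]

# Solving grad = 0 by hand gives x = ±1, y = 0:
critical = [(1.0, 0.0), (-1.0, 0.0)]

indices = []
for p in critical:
    h = hessian(*p)
    indices.append(sum(1 for ev in (h[0][0], h[1][1]) if ev < 0))

print(indices)  # index 0: local minimum; index 1: saddle
```

The index-0 point is where a new piece of the level set is "born", and the index-1 saddle is where pieces merge or split: exactly the kinds of elementary events that become generators of the cobordism category.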

Cerf theory does something similar, but one dimension up: now we’re talking about “stratified” families of Morse functions.  Again one studies critical points, but, for instance, on a 2-dim surface, there can be 1- and 0-dimensional parts of the set of critical points.  In general, this gets into the theory of higher-dimensional singularities, catastrophe theory, and so on.  Each extra dimension one adds means looking at how the sets of critical points in the previous dimension can change over “time” (i.e. within some stratified family of Cerf functions).  Where these changes themselves go through critical points, one needs new generators for the various j-morphisms of the cobordism category.  (See some examples of such “catastrophes”, such as folds, cusps, swallowtails, etc. linked from here, say.)  Showing what such singularities can be like in the “generic” situation, and indeed, even defining “generic” in a way that makes sense in any dimension, required some discussion of jet bundles.  These are generalizations of tangent bundles that capture higher derivatives the way tangent bundles capture first derivatives.  The essential point is that one can find a way to decompose these into a direct sum of parts of various dimensions (capturing where various higher derivatives are zero, say), and these will eventually tell us the dimension of a set of critical points for a Cerf function.

Now, this gives a characterization of what cobordisms can be like – part of the work in the theorem is to show that this is sufficient: that is, given a diagram showing the critical points for some Morse/Cerf function, one needs to be able to find the appropriate generators and piece together the cobordism (possibly a closed manifold) that it came from.  Chris showed how this works – a slightly finicky process involving cutting a diagram of the singular points (with some extra labelling information) into parts, and using a graphical calculus to work out how pasting works – and gave an example reconstruction of a surface this way.  This amounts to a construction of an equivalence between an “abstract” cobordism category given in terms of generators (and relations) which come from Cerf theory, and the concrete one.  The theorem then says that there’s a correspondence between equivalence classes of 2D cobordisms and certain planar diagrams, up to some local moves.  To show this properly required a digression through some theory of symmetric monoidal bicategories, and what the right notion of equivalence for them is.

This all done, the point is that $Bord_d$ has a characterization in terms of a universal property, and so any ETQFT $Z : Bord_d \rightarrow \mathcal{C}$ amounts to a certain kind of object in $\mathcal{C}$ (corresponding to the image of the point – the generating object in $Bord_d$).  For instance, in the oriented situation this object needs to be “fully dualizable”: it should have a dual (the point with opposite orientation), and a whole bunch of maps that specify the duality: a cobordism from $(+,-)$ to nothing (just the “U”-shaped curve), which has a dual – and some 2-D cobordisms which specify that duality, and so on.  Specifying all this dualizability structure amounts to giving the image of all the generators of cobordisms, and determines the functor $Z$, and vice versa.

This is a rapid summary of six hours of lectures, of course, so for more precise versions of these statements, you may want to look into Chris’ thesis as linked above.

## Homotopy QFT and the Crossed Menagerie

The next series of lectures in the school was Tim Porter’s, about relations between Homotopy Quantum Field Theory (HQFT) and various sorts of crossed gizmos.  HQFT is an idea introduced by Vladimir Turaev (see his paper with Tim here, for an intro, though Turaev also now has a book on the subject).  It’s intended to deal with similar sorts of structures to TQFT, but with various sorts of extra structure.  This structure is related to the “Crossed Menagerie”, on which Tim has written an almost unbelievably extensive bunch of lecture notes, of which a special short version was made for this lecture series that’s a mere 350 pages long.

Anyway, the cobordism category $Bord_d$ described above is replaced by one Tim called $HCobord(d,B)$ (see above comment about “bord” and “cobord”, which mean the same thing).  Again, this has d-dimensional cobordisms as its morphisms and (d-1)-dimensional manifolds as its objects, but now everything in sight is equipped with a map into a space $B$ – almost.  So an object is $X \rightarrow B$, and a morphism is a cobordism with a homotopy class of maps $M \rightarrow B$ which are compatible with the ones at the boundaries.  Then just as a d-TQFT is a representation (i.e. a functor) of $Bord_d$ into $Vect$, a $(d,B)$-HQFT is a representation of $HCobord(d,B)$.

The motivating example here is when $B = B(G)$, the classifying space of a group.  These spaces are fairly complicated when you describe them as built from gluing cells (in homotopy theory, one typically thinks of spaces as something like CW-complexes: a bunch of cells in various dimensions glued together with face maps etc.), but $B(G)$ has the property that its fundamental group is $G$, and all other homotopy groups are trivial (ensuring this last part is what makes the cellular description tricky).

The upshot is that there’s a correspondence between (homotopy classes of) maps $Map(X ,B(G)) \simeq Hom(\pi_1(X),G)$ (this makes a good alternative definition of the classifying space, though one needs some care to make it precise).  Since a homomorphism from the fundamental group into $G$ amounts to a flat principal $G$-bundle, we can say that $HCobord(d,B(G))$ is a category of manifolds and cobordisms carrying such a bundle.  This gets us into gauge theory.
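For a finite group this correspondence becomes something one can count. A small sanity check (my own toy example): on the circle, $\pi_1(S^1) = \mathbb{Z}$, so a homomorphism to $G$ is just a choice of holonomy element, and flat bundles up to gauge equivalence are conjugacy classes. Taking $G = S_3$ as permutations of $\{0,1,2\}$:

```python
from itertools import permutations

# Flat G-bundles on the circle for the finite group G = S_3.
# Hom(pi_1(S^1), G) = Hom(Z, G) = G itself (pick the holonomy);
# gauge equivalence is conjugation, so classes = conjugacy classes.

G = list(permutations(range(3)))          # the six elements of S_3

def compose(a, b):                        # (a ∘ b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0, 0, 0]
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

n_homs = len(G)                           # 6 homomorphisms Z -> S_3

classes = {frozenset(compose(compose(g, h), inverse(g)) for g in G)
           for h in G}
print(n_homs, len(classes))               # 6 holonomies, 3 bundles up to gauge
```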

But we can go beyond and into higher gauge theory (and other sorts of structures) by picking other sorts of $B$.  To begin with, notice that the correspondence above implies that mapping into $B(G)$ means that when we take maps up to homotopy, we can only detect the fundamental group of $X$, and not any higher homotopy groups.  We say we can only detect the “homotopy 1-type” of the space.  The “homotopy n-type” of a given space $X$ is just the first $n$ homotopy groups $(\pi_1(X), \dots, \pi_n(X))$.  Alternatively, an “n-type” is an equivalence class of spaces which all have the same such groups.  Or, again, an “n-type” is a particular representative of one of these classes where these are the only nonzero homotopy groups.

The point being that if we’re considering maps $X \rightarrow B$ up to homotopy, we may only be detecting the n-type of $X$ (and therefore may as well assume $X$ is an n-type in the last sense when it’s convenient).  More precisely, there are “Postnikov functors” $P_n(-)$ which take a space $X$ and return the corresponding n-type.  This can be done by gluing in “patches” of higher dimensions to “fill in the holes” which are measured by the higher homotopy groups (in general, the result is infinite dimensional as a cell complex).  Thus, there are embeddings $X \hookrightarrow P_n(X)$, which get along with the obvious chain

$\dots \rightarrow P_{n+1}(X) \rightarrow P_n(X) \rightarrow P_{n-1}(X) \rightarrow \dots$

There was a fairly nifty digression here explaining how this is a “coskeleton” of $X$, in that $P_n$ is a right adjoint to the “n-skeleton” functor (which throws away cells above dimension n, not homotopy groups), so that $S(Sk_n(M),X) \cong S(M,P_n(X))$.  To really explain it properly, though, I would have to explain what that $S$ is (it refers to maps in the category of simplicial sets, which are another nice model of spaces up to homotopy).  This digression would carry us away from higher gauge theory, which is where I’m going.

One thing to say is that if $X$ is d-dimensional, then any HQFT is determined entirely by the d-type of $B$.  Any extra jazz going on in $B$‘s higher homotopy groups won’t be detected when we’re only mapping a d-dimensional space $X$ into it.  So one might as well assume that $B$ is just a d-type.

We want to say we can detect a homotopy n-type of a space if, for example, $B = B(\mathcal{G})$ where $\mathcal{G}$ is an “n-group”.  A handy way to account for this is in terms of a “crossed complex”.  The first nontrivial example of this would be a crossed module, which consists of

• Two groups, $G$ and $H$ with
• A map $\partial : H \rightarrow G$ and
• An action of $G$ on $H$ by automorphisms, $G \rhd H$
• all such that the action looks as much like conjugation as possible:
• $\partial(g \rhd h) = g (\partial h) g^{-1}$ (so that $\partial$ is $G$-equivariant)
• $\partial h \rhd h' = h h' h^{-1}$ (the “Peiffer identity”)

This definition looks a little funny, but it does characterize “2-groups” in the sense of categories internal to $\mathbf{Groups}$ (the definition used elsewhere), by taking $G$ to be the group of objects, and $H$ the group of morphisms whose source is the identity object.  In the description of John Huerta’s lectures, I’ll get back to how that works.

The immediate point is that there are a bunch of natural examples of crossed modules.  For instance: from normal subgroups, where $\partial: H \subset G$ is the inclusion and the action really is conjugation; from fibrations, using fundamental groups of base and fibre; from the canonical case where $G = Aut(H)$ and $\partial : H \rightarrow Aut(H)$ sends each element to conjugation by it; from modules, taking $H$ to be a $G$-module as an abelian group and $\partial = 1$, the map sending everything to the identity.  The first and last give the classical intuition of these guys: crossed modules are simultaneous generalizations of (a) normal subgroups of $G$, and (b) $G$-modules.
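The normal-subgroup example can be checked by brute force for a small case. Here is a sketch (my own sanity check, not from the lectures) verifying both crossed-module axioms for $H = A_3$ inside $G = S_3$, with $\partial$ the inclusion and $G$ acting by conjugation:

```python
from itertools import permutations

# Verify the crossed-module axioms for H = A_3 normal in G = S_3,
# with ∂ = inclusion and the action g ⊳ h = g h g^{-1}.

def compose(a, b):
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0, 0, 0]
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def is_even(a):  # even permutation <=> even number of inversions
    return sum(1 for i in range(3) for j in range(i) if a[j] > a[i]) % 2 == 0

G = list(permutations(range(3)))
H = [h for h in G if is_even(h)]          # A_3: identity and the two 3-cycles

def act(g, h):                            # g ⊳ h = g h g^{-1}
    return compose(compose(g, h), inverse(g))

boundary = lambda h: h                    # ∂ is the inclusion A_3 ⊂ S_3

# ∂(g ⊳ h) = g (∂h) g^{-1}  (G-equivariance)
ok_equivariance = all(
    boundary(act(g, h)) == compose(compose(g, boundary(h)), inverse(g))
    for g in G for h in H)

# ∂h ⊳ h' = h h' h^{-1}  (Peiffer identity)
ok_peiffer = all(
    act(boundary(h), h2) == compose(compose(h, h2), inverse(h))
    for h in H for h2 in H)

print(ok_equivariance, ok_peiffer)
```

Here both axioms hold essentially by construction, since the action literally is conjugation; the point of the general definition is that it axiomatizes exactly this behaviour.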

There are various other examples, but the relevant thing here is a theorem of Mac Lane and Whitehead, that crossed modules model all connected homotopy 2-types.  That is, there’s a correspondence between crossed modules up to isomorphism and 2-types.  Of course, groups model 1-types: any group is the fundamental group for a 1-type, and any 1-type is the classifying space for some group.  Likewise, any crossed module determines a 2-type, and vice versa.  So this theorem suggests why crossed modules might deserve to be called “2-groups” even if they didn’t naturally line up with the alternative definition.

To go up to 3-types and 4-types, the intuitive idea is: “do for crossed modules what we did for groups”.  That is, instead of a map of groups $\partial : H \rightarrow G$, we consider a map of crossed modules (which is given by a pair of maps between the groups in each) and so forth.  The resulting structure is a square diagram in $\mathbf{Groups}$ with a bunch of actions, in which each of the maps is the $\partial$ map for a crossed module.  (Think of the normal subgroup situation: there are two normal subgroups $H,K$ of $G$, and in each of them, the intersection $H \cap K$ is normal, so it determines a crossed module.)  This is a “crossed square”, and things like this correspond exactly to homotopy 3-types.  This works roughly as before, since there is a notion of a classifying space $B(\mathcal{G})$ where $\mathcal{G} = (G,H,\partial,\rhd)$.  We can carry on in this way to define a “crossed n-cube”, and these correspond to homotopy (n+1)-types.  The correspondence is a little more fiddly than it was for groups, but it still exists: any (n+1)-type is the classifying space for a crossed n-cube, and any such crossed n-cube has an (n+1)-type for its classifying space.

This correspondence is the point here.  As we said, when looking at HQFT’s from $HCobord(d,B)$, we may as well assume that $B$ is a d-type.  But then, it’s a classifying space for some crossed (d-1)-cube.  This is a sensible sort of $B$ to use in an HQFT, and it ends up giving us a theory which is related to higher gauge theory: a map $X \rightarrow B(\mathcal{G})$ up to homotopy, where $\mathcal{G}$ is a crossed n-cube will correspond to the structure of a flat $(n+1)$-bundle on $X$, and similarly for cobordisms.  HQFT’s let us look at the structure of this structured cobordism category by means of its linear representations.  Now, it may be that this crossed-cube point of view isn’t the best way to look at $B$, but it is there, and available.

To say more about this, I’ll have to talk more directly about higher gauge theory in its own terms – which I’ll do in part IIb, since this is already pretty long.

A more substantial post is upcoming, but I wanted to get out this announcement for a conference I’m helping to organise, along with Roger Picken, João Faria Martins, and Aleksandr Mikovic.  Its website: https://sites.google.com/site/hgtqgr/home has more details, and will have more as we finalise them, but here are some of them:

## Workshop and School on Higher Gauge Theory, TQFT and Quantum Gravity

Lisbon, 10-13 February, 2011 (Workshop), 7-13 February, 2011 (School)

Description from the website:

Higher gauge theory is a fascinating generalization of ordinary abelian and non-abelian gauge theory, involving (at the first level) connection 2-forms, curvature 3-forms and parallel transport along surfaces. This ladder can be continued to connection forms of higher degree and transport along extended objects of the corresponding dimension. On the mathematical side, higher gauge theory is closely tied to higher algebraic structures, such as 2-categories, 2-groups etc., and higher geometrical structures, known as gerbes or n-gerbes with connection. Thus higher gauge theory is an example of the categorification phenomenon which has been very influential in mathematics recently.

There have been a number of suggestions that higher gauge theory could be related to (4D) quantum gravity, e.g. by Baez-Huerta (in the QG^2 Corfu school lectures), and Baez-Baratin-Freidel-Wise in the context of state-sums. A pivotal role is played by TQFTs in these approaches, in particular BF theories and variants thereof, as well as extended TQFTs, constructed from suitable geometric or algebraic data. Another route between higher gauge theory and quantum gravity is via string theory, where higher gauge theory provides a setting for n-form fields, worldsheets for strings and branes, and higher spin structures (i.e. string structures and generalizations, as studied e.g. by Sati-Schreiber-Stasheff). Moving away from point particles to higher-dimensional extended objects is a feature both of loop quantum gravity and string theory, so higher gauge theory should play an important role in both approaches, and may allow us to probe a deeper level of symmetry, going beyond normal gauge symmetry.

Thus the moment seems ripe to bring together a group of researchers who could shed some light on these issues. Apart from the courses and lectures given by the invited speakers, we plan to incorporate discussion sessions in the afternoon throughout the week, for students to ask questions and to stimulate dialogue between participants from different backgrounds.

Provisional list of speakers:

• Paolo Aschieri (Alessandria)
• Benjamin Bahr (Cambridge)
• Aristide Baratin (Paris-Orsay)
• John Barrett (Nottingham)
• Rafael Diaz (Bogotá)
• Bianca Dittrich (Potsdam)
• Laurent Freidel (Perimeter)
• John Huerta (California)
• Branislav Jurco (Prague)
• Thomas Krajewski (Marseille)
• Tim Porter (Bangor)
• Hisham Sati (Maryland)
• Christopher Schommer-Pries (MIT)
• Urs Schreiber (Utrecht)
• Jamie Vicary (Oxford)
• Konrad Waldorf (Regensburg)
• Derek Wise (Erlangen)
• Christoph Wockel (Hamburg)

The workshop portion will have talks by the speakers above (those who can make it), and any contributed talks.  The “school” portion is, roughly, aimed at graduate students in a field related to the topics, but not necessarily directly in them.  You don’t need to be a student to attend the school, of course, but they are the target audience.  The only course that has been officially announced so far will be given by Christopher Schommer-Pries, on TQFT.  We hope/expect to also have minicourses on Higher Gauge Theory, and Quantum Gravity as well, but details aren’t settled yet.

If you’re interested, the deadline to register is Jan 8 (hence the rush to announce).  Some funding is available for those who need it.

On a tangential note, let me point out John Baez’ most recent “This Week’s Finds”, which has an accessible but fairly in-depth discussion of climate modelling.  There have been many years of very loud public discussion of this which, for reasons of politics, seems to involve putting the “Mathematical models are inherently elitist gibberish” and “Science knows everything so shut up, moron” positions on display and letting the viewer decide.  This is known in the journalism trade as “balance”.  Obviously, within the research community working on them, there’s a mountain of literature on what the models model, how detailed they are, how they work, etc., but it mostly goes over my head, so John’s post strikes a nice balance for me.

Like most computer simulation models, they’re basically discrete approximations to big systems of differential equations – but exactly which systems, how they’re developed, how accurately they model the real thing, and the relative merits of simple vs. complex models is the main point.  The use of Monte Carlo methods and Bayesian analysis to tune the various free parameters is a key part of the matter of how accurate they should be.  Anyway – check it out.
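The “discrete approximations to differential equations” point can be illustrated with the simplest possible toy (my own illustration, with round illustrative constants, not anything from an actual climate model): a zero-dimensional energy-balance equation $dT/dt = \big(S(1-\alpha) - \epsilon\sigma T^4\big)/C$, stepped forward with the Euler method:

```python
# A toy zero-dimensional energy-balance model, Euler-stepped.
# All constants are illustrative round numbers, not from a real model.

S = 342.0        # mean incoming solar flux, W/m^2
albedo = 0.3     # fraction of sunlight reflected
eps = 0.61       # effective emissivity of the atmosphere
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
C = 4.0e8        # heat capacity per unit area, J/(m^2 K)
dt = 30 * 86400  # one-month time step, in seconds

T = 250.0        # start well away from equilibrium
for _ in range(20000):
    T += dt * (S * (1 - albedo) - eps * sigma * T**4) / C

print(round(T, 1))  # settles near the ~288 K equilibrium
```

Real models do this with enormous coupled systems and far subtler numerics, but the basic shape – continuous equations, discretized in time and space, with tunable parameters like `eps` above – is the same.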

Meanwhile, the TQFT club at IST recently started up its series of seminars.  The first few speakers were Rui Carpentier, Anne-Laure Thiel, and Marco Mackaay.  Rui is faculty here at IST, and a former student of Roger Picken (his thesis was on a topic closely related to what he was talking about).  Anne-Laure is a post-doc here at IST, mainly working with Marco, who, however, is actually at the University of the Algarve in Faro, Portugal, and had to come up to Lisbon specially for the seminar.  Anne-Laure and Marco were both speaking mainly about some of the Soergel bimodule stuff which came up at the Oporto meeting on categorification, which I posted about previously, so I’ll go over that in a bit more detail here.

First, though, Rui Carpentier’s talk:

## 3-colourings of Cubic Graphs and Operators

All these talks involve algebraic representations of categories that can be represented by some graphical calculus, but in this case, one starts with a category whose morphisms are precisely graphs with loose ends.  (The objects are non-negative integers, or, if you like, finite sets of dots which serve as the endpoints of the loose ends.)  The graphs are trivalent (except at the input and output vertices, which are 1-valent), hence “cubic graphs”.  This category is therefore called $\mathbf{CG}$, and it has a small number of generators, which happen to be quite similar to those which generate the category of 2D-cobordisms (one of the connections to TQFT), though the relations are slightly different.

Roughly, and without drawing the pictures: the generators are cup and cap (the shapes $\cup$ and $\cap$), two different trivalent vertices (a $Y$, and the same upside-down), the swap (an $X$ where the strands cross without a vertex), and the identity (just a vertical line).  There are a number of relations, including Reidemeister moves, on these generating pictures, which ensure that they’re enough to identify graphs up to isotopy of the pictures.

Then the point is to describe graphs using operators – that is, construct a representation $F :\mathbf{CG} \rightarrow \mathbf{Vect}$.   Given any such representation, these generators provide all the structure maps of a bialgebra – chiefly, unit, counit, multiplication and co-multiplication – and the relations imposed by isotopy make this work (though unlike some other situations, it’s neither commutative nor cocommutative).  The representation $F$ he constructs is based on 3-colourings of the edges of the graphs.  At the object level, it assigns to a dot the 3-dimensional vector space $V= span(e_1,e_2,e_3)$.  Being monoidal, $F$ takes the object $n$ to $V^{\otimes n}$ – the tensor product of the spaces at each vertex.

The idea is that choosing a basis vector in this space amounts to picking a colouring of the incoming and outgoing edges.  For morphisms, we should note that the rule that says when a colouring is admissible is that all the edges incident to a given vertex must have different colours.  Then, given a morphism (graph) $G : m \rightarrow n$, we can describe the linear map $F(G)$ most easily by saying that each matrix component, given an incoming and an outgoing basis vector, just counts the number of admissible colourings of $G$ that agree with the chosen colourings on the in-edges and out-edges.
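This counting rule is easy to implement on the trivalent generators. A minimal encoding (mine, not Rui’s): a matrix entry of $F$ at a vertex is 1 exactly when the three incident edges get three different colours, and composing the splitting vertex with the merging vertex (a “bubble” on a strand) then counts, for each colour on the strand, the admissible colourings of the two middle edges – of which there are exactly two:

```python
from itertools import product

# The colour-counting functor F on the generators mult : V⊗V -> V and
# comult : V -> V⊗V, where V has basis e_0, e_1, e_2 (the three colours).

COL = range(3)

def distinct(*cs):
    return len(set(cs)) == len(cs)

# mult[(k, (i, j))]: coefficient of e_k in F(mult)(e_i ⊗ e_j)
mult   = {(k, (i, j)): int(distinct(i, j, k))
          for k in COL for i, j in product(COL, COL)}
# comult[((i, j), k)]: coefficient of e_i ⊗ e_j in F(comult)(e_k)
comult = {((i, j), k): int(distinct(i, j, k))
          for k in COL for i, j in product(COL, COL)}

# "Bubble" on a strand: split (comult), then re-merge (mult).
bubble = {(l, k): sum(comult[((i, j), k)] * mult[(l, (i, j))]
                      for i, j in product(COL, COL))
          for k in COL for l in COL}

# For each colour k there are exactly 2 admissible middle colourings,
# so F(bubble) = 2 * identity.
print(all(bubble[(l, k)] == (2 if l == k else 0) for k in COL for l in COL))
```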

There’s another functor, $\hat{F}$, which counts these graphs with a sign, which marks whether the graph contains an odd or an even number of crossings of differently-coloured edges – negative for odd, positive for even.  This is the “Penrose evaluation” of the graph.

So these maps give the “operators” of the title, and the rest of the point is to use them to study graphs and their colourings.  One can, in this setup, rewrite some graphs as linear combinations of others – so-called “Skein relations” hold, for example, so that, after applying $F$, the composite of multiplication and comultiplication (taking two points to two points, through one cut-edge) is the same as the identity minus the swap.  This sort of thing appears in formal knot theory all the time, and is a key tool for recoupling in spin networks, and so on…

Given this “recoupling” idea, there are some important facts: first, any graph can be rewritten as a linear combination of planar graphs, and any planar graph with cycles can be reduced to a sum of planar graphs without cycles.  (Rui gave the example of decomposing a pentagonal cycle as a linear combination of four other graphs, three of which are disconnected).  So in fact any graph decomposes as a linear combination of forests (cycle-free graphs, the connected components of which are called “trees”, hence the name).  Another essential fact is that, due to the Euler characteristic of the plane, any planar graph can be split into two parts with at most five edges between them (the basis of the solution to the three utilities puzzle).  Then it so happens that the space of graphs connecting zero in-edges to five out-edges is a 6-dimensional space, $\mathcal{V}^o_5$, generated by just six forests (including one lonesome tree).

So one theorem which Rui told us about, which can be shown using the so-called Penrose relations (provable using the representations $F$ and $\hat{F}$), is that there’s just one such graph (which he described in the particular basis above) that evaluates to zero when composed with some other graph.  The proof of this uses the Four Colour Theorem (3-colouring of graph edges being related to 4-colouring of planar regions); in fact, the two theorems are equivalent so if anyone can find an alternative proof of this one, the bonus is another proof of the FCT.

Finally, he gave a conjecture that, if true, would help recognize planar graphs just by the operators produced by the representation $\hat{F}$ (at least it proposes a necessary condition).  This conjecture says that if a planar graph with five output edges (the maximum, remember) is written in the basis mentioned above, then the sum of the coefficients of the five disconnected trees is nonnegative.  (Thus, the connected tree doesn’t contribute to this measure).  This is still just a conjecture – Rui said that to date neither proof nor counterexample has been found.

## Soergel Bimodules, Singular and Virtual Braids

As I mentioned up top, I previously posted a bit about work on Soergel bimodules when describing Catharina Stroppel’s talk at the meeting in Faro in July.  To recap: they are associated with categories of modules over rings – specifically, rings of certain classes of symmetric functions.  Even more specifically, given a partition $\lambda$ of an integer $n$, there is a subgroup of the symmetric group $S_{\lambda} \subset S_n$ which fixes the partition.  All such groups act on the ring of $n$-variable polynomial functions $R =\mathbb{Q}[x_1, \dots, x_n]$, and the ones fixed by $S_{\lambda}$ form the ring $R^{\lambda}$.
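For a tiny concrete picture of these rings (my own example, checked numerically rather than symbolically): take $n = 3$ and $\lambda = (2,1)$, so $S_{\lambda}$ is generated by the transposition swapping $x_1$ and $x_2$, and $R^{\lambda}$ consists of the polynomials fixed by that swap:

```python
# R = Q[x1, x2, x3]; for λ = (2,1), S_λ swaps x1 and x2, and R^λ is the
# subring of polynomials fixed by the swap.  We test membership numerically
# at a few sample points (polynomials represented as plain functions).

def fixed_by_swap12(f, samples):
    return all(abs(f(a, b, c) - f(b, a, c)) < 1e-12 for a, b, c in samples)

samples = [(0.3, 1.7, -2.1), (2.0, -0.5, 0.25), (1.0, 1.0, 3.0)]

in_R_lambda     = lambda x1, x2, x3: x1 * x2 + (x1 + x2) * x3**2  # symmetric in x1, x2
not_in_R_lambda = lambda x1, x2, x3: x1 * x3                      # moved by the swap

print(fixed_by_swap12(in_R_lambda, samples))       # True: lies in R^λ
print(fixed_by_swap12(not_in_R_lambda, samples))   # False: does not
```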

Now, these groups are all related to each other in a web of containments, hence so are the rings.  So the module categories over the various $R^{\lambda}$ are connected by various functors.  Given a containment $R^{\lambda '} \subset R^{\lambda}$, modules over $R^{\lambda}$ can be restricted to ones over $R^{\lambda '}$, and modules over $R^{\lambda '}$ can be induced up to ones over $R^{\lambda}$.  The restriction and induction functors can be represented as “tensor with a bimodule” (this is much the same classification as that for 2-linear maps which I’ve said a bunch about here, except that those must be free).  Applying induction functors repeatedly gives arbitrarily large bimodules, but they are built as direct sums of simple parts.  Those simple parts, and any direct sums of them, are Soergel bimodules.  The point is that such bimodules describe morphisms.

So in the TQFT club, Marco Mackaay gave the first of a series of survey talks on this topic, and Anne-Laure Thiel gave a talk about the “Categorification of Singular Braid Monoids and Virtual Braid Groups”.  Since Marco’s talk was the first in a series of surveys, and a lot of what it surveyed was work described in my post on the Faro meeting, I’ll just mention that it dealt with the original motivation of a lot of this work in categorifying representation theory of Lie algebras (c.f. the discussion of the Khovanov-Lauda categorification of quantum groups in the previous post), and also got a bit into some of the different diagrammatic calculi created for that purpose, along the lines of the talks by Ben Webster and Geordie Williamson at that meeting.  Maybe when Marco has given more of these talks, I’ll return to this one here as well.

Now, the starting point of Anne-Laure’s talk was that the setup above lets one define a category with a presentation like that of the Hecke algebra (a quotient of the group algebra of the braid group), where exact relations become isomorphisms.  That is, we go from a category where morphisms are braids (up to isotopy and Reidemeister moves and so forth as usual) to a 2-category where the morphisms are bimodules, which happen to satisfy the same relations.  (The 2-morphisms, bimodule maps, are what allow relations to hold weakly…)

Specifically, the generators of the braid group are $\sigma_i$, the braids taking the $i^{th}$ strand over the $(i+1)^{st}$.  The parallel thing is $B_i = R \otimes_{R^{\sigma_i}} R$, where here we’re talking about the subgroup generated by the transposition of $i$ and $i+1$.  In the language of partitions, this corresponds to a $\lambda$ with one part of size two, $(i,i+1)$, and the rest of size one.  Now, since this bimodule is actually built from polynomials in $R$, it naturally has a grading – this corresponds to the degree of $q$, since the Hecke algebra involves a quotient giving q-deformed relations – so there is a degree-shift operation that categorifies multiplication by $q$.  This much is due to Soergel.

Anne-Laure’s talk was about extending this to talk about a categorification, first of the braid group in terms of complexes of these bimodules (due actually to Rouquier), then virtual and singular braids.  These, again, are basically creatures of formal knot theory (see link above).  They can be described by a presentation similar to that for braids – just as the braid group has a generators-and-relations presentation in terms of over-crossings of adjacent strands, these incorporate other kinds of crossings.  Singular braids allow a sort of “through” crossing, where the $i^{th}$ strand goes neither over nor under the $(i+1)^{st}$.  Virtual braids (the braid variant on virtual knots) have a special type of marked crossing called the “virtual crossing”, drawn with a little circle around it.  These are included as new generators in describing the virtual braid group, and of course some new relations are added to show how they relate to the original generators – variations on the Reidemeister moves, for example.

To categorify this, Anne-Laure explained that these new generators can also be represented by bimodules, but these ones need to be twisted.  In particular, twisting the bimodule $R$ by the action of a permutation $\omega \in S_n$ gives $R_{\omega}$, which is the same as $R$ as a left $R$-module, but where an element $a \in R$ acts on the right through multiplication by $\omega(a)$, so that $b \cdot p \cdot a = b\,p\,\omega(a)$.  Then the new generators, beyond the $B_i = R \otimes_{R^{\sigma_i}} R$, are of the form $R_{\omega} \otimes_{R^{\omega '}} R$.  These then satisfy the right relations for this to categorify the virtual braid group.
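One small consistency check one can do by hand (my own, numerical rather than symbolic): the twisted rule $p \cdot a = p\,\omega(a)$ really is a right action, because $\omega$ is a ring automorphism, so $(p \cdot a) \cdot a' = p \cdot (a a')$. Taking $\omega$ to be the swap of $x_1$ and $x_2$:

```python
# Check that the twisted right action on R_ω is a right action:
# (p · a) · a' = p · (a a'), since ω is a ring automorphism.
# Polynomials are represented as plain functions of (x1, x2).

omega = lambda x1, x2: (x2, x1)                        # the twist ω

def r_act(p, a):                                       # p · a = p * ω(a)
    return lambda x1, x2: p(x1, x2) * a(*omega(x1, x2))

p  = lambda x1, x2: x1**2 + 3 * x2
a  = lambda x1, x2: x1 * x2 + 1
a2 = lambda x1, x2: x1 - 2 * x2

lhs = r_act(r_act(p, a), a2)                           # (p · a) · a'
rhs = r_act(p, lambda x1, x2: a(x1, x2) * a2(x1, x2))  # p · (a a')

samples = [(0.5, 2.0), (-1.0, 3.0), (2.5, -0.5)]
agree = all(abs(lhs(x, y) - rhs(x, y)) < 1e-9 for x, y in samples)
print(agree)
```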

So I recently received word that this paper had been accepted for publication by Applied Categorical Structures. Since I’ll shortly be putting out another which uses its main construction to build Extended Topological Quantum Field Theories, it’s nice and appropriate to say something about that. But actually, just at the moment, I want to take a slightly different approach.

Toward the end of February, I went up to Waterloo to the Perimeter Institute, where my friend Derek Wise was visiting with Andy Randono – apparently they’re working on a project together that has something to do with Cartan Geometry, which is a subject that plays a big role in Derek’s thesis.

However, Derek was speaking in their seminar about Extended TQFT (his slides are now up on his website, and there’s also a video of the talk available). Actually, a lot of what he was talking about was work of mine, since we’re working on a project together to construct ETQFTs from Lie groups (most likely compact ones at first, since all the usual analytical problems with noncompact groups turn up here). However, I really enjoyed seeing Derek talk about it, because he has a sharper grasp than I do of how this subject appears to physicists, and the way he presented this stuff is very different from the way I usually talk about it (you can see me in the video trying to help deal with a question at the end from Rafael Sorkin and Laurent Freidel, and taking a while to correctly understand what it was, partly because of this jargon gap – I hope to get better).

So, for example, describing a TQFT in the Atiyah/Segal axiomatic formulation is fairly natural to someone who works with category theory, but Derek motivated it as a way of taking a “deeper look at the partition function” for a certain field theory. The idea is that a partition function $Z$ for a quantum field theory associates a number to a space $M$, satisfying certain rules. It is usually described by some kind of integral. Typically in QFT, these are rather tricky integrals – a topological QFT has the nice feature that, since it has no local degrees of freedom, these integrals are much more tractable. Of course, this is a mathematically nice feature that comes at the expense of physical relevance, but such is life.

Anyway, the idea is that the partition function $Z$ for an $n$-dimensional TQFT can be thought of as assigning, not just numbers to $n$-dimensional manifolds $M$, but something more which reduces to this in a special case. Specifically, $Z$ assigns a Hilbert space to any codimension-1 submanifold of $M$, in a particular way which Derek passed over by saying it “satisfies some compatibility conditions”. For an audience of mathematicians, you can gloss over this just as quickly by saying the assignments are “functorial”, or even with more detail saying the conditions make $Z$ a symmetric monoidal functor.

Part of the point is that these conditions are about as obvious on physical grounds as they are if you’re a category theorist. For example, the fact that composition is preserved by the functor $Z$ can be interpreted physically as saying that the number $Z(M)$ given by the partition function isn’t affected by how we chop up the manifold $M$ to analyse it. The fact that $Z$ is a monoidal functor ends up meaning that the “unit” for manifolds under unions (namely, the empty manifold with no points, which you can add to things without affecting them) gets assigned the Hilbert space $\mathbb{C}$, which is the unit for Hilbert spaces with respect to the tensor product $\otimes$. The fact that this is so means we can treat a manifold with no boundary as going from one (empty) boundary to another (empty) boundary – it therefore gets assigned a linear map from $\mathbb{C}$ to $\mathbb{C}$ – a number. Seeing how this linear map comes from composing pieces of the manifold is what “a deeper look at the partition function” means.

ETQFT does essentially the same thing, at one level deeper. The point is that a TQFT breaks apart a manifold by treating it as a series of pieces – manifolds with boundary, glued together at their boundaries. An ETQFT does the same to these pieces, treating them as composed of pieces – manifolds with corners – which are glued orthogonally to the gluing just mentioned. That is, there are two kinds of composition, so we’re in some sort of 2-category (bi-, or double- depending on how you formulate things). The essential point is that now, to manifolds without boundary, which are of codimension 1, we assign Hilbert spaces – and to top-dimensional manifolds WITH boundary, we assign maps of Hilbert spaces.

An ETQFT attempts to give a “deeper-still look at the partition function” by seeing how the Hilbert space arises from composition of pieces in this new direction, along boundaries of codimension 2. The way Derek describes this for physicists is to say that the ETQFT describes how that Hilbert space is “built from local data”, which he described in the usual physics language of path integrals. First of all, the conventional thing in physics is to take $Z(\Sigma)$ for a (codimension-1) manifold $\Sigma$ to be $L^2(\mathcal{A}_0(\Sigma)/\mathcal{G}(\Sigma))$ – the space of square-integrable functions on the quotient of the space $\mathcal{A}_0(\Sigma)$ of flat $G$-connections on $\Sigma$ by the action of the group of gauge transformations $\mathcal{G}(\Sigma)$.

Given a manifold $M$ with boundary components $\Sigma$ and $\Sigma '$, the standard quantum field theory formalism describes the map $Z(M) : Z(\Sigma) \rightarrow Z(\Sigma ')$ given by a TQFT in terms of how it pairs with particular state-vectors in the Hilbert spaces for the source and target boundary components of $M$. So then:

$\langle \psi | Z(M) | \phi \rangle = \int_{\mathcal{A}_0(M)/\mathcal{G}} \mathcal{D}A \overline{\psi(A|_{\Sigma '})} e^{i S([A])} \phi(A|_{\Sigma})$

The point being, a flat connection $A$ has some action on it, which depends only on its gauge equivalence class $[A]$ (“the Lagrangian has gauge symmetry”), and it restricts to give flat connections on $\Sigma$ and $\Sigma '$, on which we can evaluate the $L^2$-functions $\phi$ and $\psi$, to give something we can integrate. The measure $\mathcal{D}A$ is a crucial entity here, and in general can be a real puzzle, but at least for discrete groups, it’s just a weighted counting measure which effectively gives us the groupoid cardinality of the quotient space. As for the action $S$, the simplest possible case just says the action of any flat connection is zero – hence this expression is just finding the (groupoid) cardinality, or more generally measuring the (stacky) volume, of the configuration space for flat connections. There are other possible actions, though.
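Just to make the discrete case concrete, here’s a quick computational sketch (my own illustration, not from Derek’s talk): for a finite gauge group and the zero action, the partition function of the torus is the groupoid cardinality of the space of flat connections, i.e. the number of commuting pairs in $G \times G$ divided by $|G|$.

```python
from itertools import permutations, product

# Sketch (zero-action case, finite gauge group): for the torus, flat
# G-connections modulo gauge are commuting pairs (a, b) in G x G modulo
# simultaneous conjugation, and the path integral reduces to the groupoid
# cardinality  #{(a, b) : ab = ba} / |G|.

def compose(p, q):
    """Compose permutations written as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))  # G = S_3, all permutations of {0, 1, 2}

commuting = sum(1 for a, b in product(G, G) if compose(a, b) == compose(b, a))
Z_torus = commuting / len(G)  # the groupoid cardinality
print(Z_torus)  # 3.0
```

By Burnside’s lemma this equals the number of conjugacy classes of $G$ – three, for $S_3$ – which is exactly the kind of “stacky point count” the zero-action path integral is computing.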

Derek gives an explanation of how to interpret this in terms of the “pull-push” construction, which I’ve talked about elsewhere here, including in the above paper, so right now, I’ll just pass to the next layer of the ETQFT layer cake – codimension-2. Here, there is a similar formula, which also has an interpretation in terms of a “pull-push” construction, but which can be written as a categorified path integral.

So now the $\Sigma$ has boundary, and connects “inner” codimension-2 boundary component $B_1$ to “outer” boundary component $B_2$. Then, say, $B_1$ gets assigned the category of all gauge-equivariant “bundles” of Hilbert spaces on $\mathcal{A}_0(B_1)$, rather than the space of gauge-invariant functions. (Derek carefully avoided using the term “category”, to stay physically motivated – and the term “bundle” is accurate in the case of a discrete gauge group $G$, but in general one has to appeal to the theory of measurable fields of Hilbert spaces, since they needn’t be locally trivial). Then given particular Hilbert bundles $\mathcal{H}$ and $\mathcal{K}$ on the spaces $\mathcal{A}_0(B_1)$ and $\mathcal{A}_0(B_2)$ respectively, we can define what $Z(\Sigma)$ is by:

$\langle \mathcal{K} | Z(\Sigma) | \mathcal{H} \rangle = \int_{\mathcal{A}_0(\Sigma)/\mathcal{G}} \mathcal{D}A\, \mathcal{K}(A|_{B_2}) \otimes T_A \otimes \mathcal{H}(A|_{B_1})$

The interpretation is much like the previous formula: now we’re direct-integrating Hilbert spaces, instead of integrating complex functions – and we get a Hilbert space instead of a complex number, but this is in some sense superficial. Something any physicist would notice right away (or anyone comparing this to the previous formula) is that the exponential of the action $S([A])$ seems to have gone missing, replaced by some Hilbert space $T_A$. If we’re using the trivial action $S \equiv 0$, this is fine, but otherwise, how exactly $S$ affects the direct integral takes some explaining. For now, let’s just say that $S([A])$ gets folded into either the inner product on $T_A$ or the measure $\mathcal{D}A$: either way, it shows up through its effect on the inner product of the Hilbert space that the direct integral produces.

Let me jump to the end of Derek’s talk here, to get at some conceptual aspect of what’s happening here. The axiomatic way of talking about ETQFT, namely Ruth Lawrence’s way, is to say we assign a 2-Hilbert space to the codimension-2 manifolds. But “2-Hilbert space” is an off-putting bit of jargon, so instead the suggestion is to replace it with “von Neumann algebra”.

The point is that 2-Hilbert spaces are thought (according to a paper by Baez, Baratin, Freidel and Wise) to be just categories of representations of vN algebras. Being a 2-Hilbert space means, for instance, that they’re additive (by direct sum), $\mathbb{C}$-linear (there is a vector space of intertwiners between any two representations), have duals, and so on. Moreover, they’re monoidal 2-Hilbert spaces, since there is a tensor product. The claim is that the two notions correspond exactly. In any case, the way the ETQFT construction in question works actually passes through a von Neumann algebra. This comes from the groupoid algebra associated to a certain group action – namely, the action of the gauge group on the space of flat $G$-connections on the manifold $M$.

Then the way we can look more closely at the “structure of the partition function” is by seeing the Hilbert space associated to a codimension-1 manifold as actually being a kind of morphism of von Neumann algebras. In particular, it’s a Hilbert bimodule, which is acted on by the source algebra (say $A$) on the left, and the target algebra ($B$) on the right. This is intimately connected with the stuff I was writing about recently about Morita equivalence, and so to the 2-Hilbert space view. In particular, a Hilbert bimodule $H$ gives an adjoint pair of linear functors (or “2-linear maps”) between the representation categories of algebras.

So shortly I’ll make a post about some papers coming out, and get back to this point…

First off, a nice recent XKCD comic about height.

I’ve been busy of late starting up classes, working on a paper which should appear on the arXiv in a week or so on the groupoid/2-vector space stuff I wrote about last year.  I resolved the issue I mentioned in a previous post on the subject, which isn’t fundamentally that complicated, but I had to disentangle some notation and learn some representation theory to get it figured out.  I’ll maybe say something about that later, but right now I felt like making a little update.  In the last few days I’ve also put together a little talk to give at Octoberfest in Montreal, where I’ll be this weekend.  Montreal is a lovely city to visit, so that should be enjoyable.

A little while ago I had a talk with Dan’s new grad student – something for a class, I think – about classical and modern differential geometry, and the different ideas of curvature in the two settings.  So the Gaussian curvature of a surface embedded in $\mathbb{R}^3$ has a very multivariable-calculus feel to it: you think of curves passing through a point, parametrized by arclength.  They have a moving orthogonal frame attached: unit tangent vector, its derivative, and their cross-product.  The derivative of the unit tangent is always orthogonal to it (the tangent isn’t changing length), so you can imagine it pointing to the centre of a circle of radius $r$, the radius of curvature.  Then the curvature along that path is $\kappa = \frac{1}{r}$.  At any given point on a surface, you get two degrees of freedom – locally, the surface looks like an elliptic or hyperbolic paraboloid, or whatever – so there’s actually a curvature form, the second fundamental form.  Its determinant gives the Gaussian curvature $K$.  So it’s a “second derivative” of the surface itself (if you think of the surface as the graph of a function over its tangent plane).  The Gaussian curvature, unlike the curvature in particular directions, is intrinsic – preserved by isometry of the surface, so it’s not really dependent on the embedding.  But this fact takes a little thinking to get to.  Then there’s the trace – the mean curvature.

In a Riemannian manifold, you need to have a connection to see what the curvature is about.  Given a metric, there’s the associated Levi-Civita connection, and of course you’d get a metric on a surface embedded in $\mathbb{R}^3$, inherited from the ambient space.  But the modern point of view is that the connection is the important object: the ambient space goes away entirely.  Then you have to think of what the curvature represents differently, since there’s no normal vector to the surface any more.  So now we’re assuming we want an intrinsic version of the “second derivative of the surface” (or n-manifold) from the get-go.  Here you look at the first derivatives of the connection in any given coordinate system (a second derivative of the metric).  You’re finding the infinitesimal noncommutativity of parallel transport with respect to two coordinate directions: take a given vector, transport it two ways around an infinitesimal square, and take the difference to get a new vector.  This is all written as a (3,1)-tensor, the Riemann tensor.  Then you can contract it down and get a matrix again (the Ricci tensor), and then contract on the remaining two indices (a trace!) and you get back the scalar curvature – but this is all in terms of the connection (the coordinate dependence all disappears once you take the trace).
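Just to see that this machinery really lands back at the classical answer, here’s a little symbolic check (my own sketch, assuming sympy is available) for the round sphere of radius $r$: compute the Christoffel symbols from the metric, then the Riemann tensor, Ricci tensor, and scalar curvature.

```python
import sympy as sp

# Round sphere of radius r: metric ds^2 = r^2 dtheta^2 + r^2 sin^2(theta) dphi^2
theta, phi, r = sp.symbols('theta phi r', positive=True)
coords = [theta, phi]
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                             + sp.diff(g[d, c], coords[b])
                             - sp.diff(g[b, c], coords[d])) / 2
               for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}: derivative terms plus the quadratic terms
def Riem(a, b, c, d):
    term = sp.diff(Gamma[a][b][d], coords[c]) - sp.diff(Gamma[a][b][c], coords[d])
    term += sum(Gamma[a][c][e] * Gamma[e][b][d]
                - Gamma[a][d][e] * Gamma[e][b][c] for e in range(n))
    return sp.simplify(term)

# Contract to the Ricci tensor R_{bd} = R^a_{bad}, then trace with g^{-1}
Ricci = sp.Matrix(n, n, lambda b, d: sum(Riem(a, b, a, d) for a in range(n)))
R_scalar = sp.simplify(sum(ginv[b, d] * Ricci[b, d]
                           for b in range(n) for d in range(n)))
print(R_scalar)  # simplifies to 2/r**2
```

The scalar curvature comes out to $2/r^2$, i.e. twice the Gaussian curvature $K = 1/r^2$, as it should for a surface.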

I hadn’t thought about this stuff in coordinates for a while, so it was interesting to go back and work through it again.

In the noncommutative geometry seminar, we’ve been talking about classical mechanics – the Lagrangian and Hamiltonian formulation.  So it reminded me of the intuition that curvature – a kind of second derivative – often shows up in Lagrangians for field theories using connections because it’s analogous to kinetic energy.  A typical mechanics Lagrangian is something like (kinetic energy) – (potential energy), but this doesn’t appear much in the topological field theories I’ve been thinking about because their curvature is, by definition, zero.  Topological field theory is kind of like statics, as opposed to mechanics, that way.  But that’s a handy simplification for the program of trying to categorify everything.  Since the whole space of connections is infinite dimensional, worrying about categorified action principles opens up a can of worms anyway.

So it’s also been interesting to remember some of that stuff and discuss it in the seminar – and it was initially surprising that it’s the introduction to “noncommutative geometry”.  It does make sense, though, since that’s related to the formalism of quantum mechanics: operator algebras on Hilbert spaces.

Finally, I was looking for something on 2-monads for various reasons, and found a paper by Steve Lack which I wanted to link to here so I don’t forget it.

The reason I was looking was partly that Enxin Wu, after talking about deformation theory of algebras, was asking about monads and the bar construction, which we talked about at the UCR “quantum gravity” seminar, so at some point we’ll take a look at that stuff.  But it also reminded me that I was interested in the higher-categorical version of monads for a different reason. Namely, I’d been talking to Jamie Vicary about his categorical description of the harmonic oscillator, which is based on having a monad in a nice kind of monoidal category.  Since my own category-theoretic look at the harmonic oscillator fits better with this groupoid/2-vector space program I’ll be talking about at Octoberfest (and posting about a little later), it seemed reasonable to look at a categorified version of the same picture.

But first things first: figuring out what the heck a 2-monad is supposed to be.  So I’ll eventually read up on that, and maybe post a little blurb here, at some point.

Anyway, that update turned out to be longer than I thought it would be.

Well, a couple of weeks ago I was up in Waterloo at the Perimeter Institute with Dan Christensen and his grad student Wade Cherrington for a couple of days for the “Young Loops and Foams” conference. It actually ran all week, but we only took the time out to go for the first couple of days. The talks that we were there for dealt mainly with the loop-quantum-gravity and spin-foam approaches to quantum gravity.

These are not really what I’m working on, though I certainly have thought about these approaches, and Dan and his grad students have done significant work on them. Wade Cherrington has been applying spin-foam methods to lattice gauge theory, and Igor Khavkine has been working on the “new” spin foam models. Both of these guys are in the Applied Mathematics department here at UWO (though Igor is graduating this year), and a lot of their work has been about getting efficient algorithms for doing computations with these models. This seems like great stuff to me – certainly it’s a step in the direction of getting predictions and comparing them to experiments (i.e. “real physics”, though as a “mathematician” who’s only motivated by physics, I clearly don’t say this to be snobby).

Many of the talks were a bit over my head – for one thing, a lot of the significant new stuff involves fairly substantial calculation, which is by nature rather technical. There were some more introductory talks about Group Field Theory – Etera Livine and Daniele Oriti gave talks about Group Field Theory which described the main concepts of this subject. Livine’s talk was fairly introductory – explaining how GFT describes a field theory on a background which consists of a product of a few copies of a Lie group, for instance on $G^4$. In that example, the quanta of the theory correspond to 4-valent spin network vertices – dually, quantum tetrahedra.

Oriti’s talk dealt more with issues about GFT, but also emphasized that it can be seen as a kind of “second quantization” of spin networks. That is, one can think of a spin network geometry in terms of a graph which is labelled with spins (in practice, half-integers). Given such a graph, there is a Hilbert space for such states on the graph, whereas in GFT, the graph itself emerges from the states. The total Hilbert space for the fields in GFT then includes many different graphs, with many different numbers of vertices. This is analogous to second quantization, in which, for example, one takes the quantum mechanical theory of an oscillator with a given energy, and turns it into a field theory whose states can contain any number of quanta.

Oriti also made references to this paper, in which he proposes a way to get a continuum limit out of GFT (using methods, which I can hardly comment on, analogous to those used to describe condensates in solid-state physics). However, he didn’t have time to describe this in detail. I’ve only looked briefly at that paper, and it seems sort of impressionistic, but the impressions are interesting, anyway.

I managed to have a few conversations with Robert Oeckl about Extended TQFT’s on the one hand, and his general boundary formulation of QFT’s on the other (more here, and slides giving an overview here). These two points of view take the usual formalism of TQFT and run with it in two somewhat different directions. Since I’ve talked a lot here about Extended TQFT’s and categorification, I’ll just say a bit about what Oeckl calls the general boundary formulation. This doesn’t use categorical language, and it remains a theory at “codimension 1” (that is, it tells you about top-dimension “volumes” which connect codimension-1 “surfaces”, and that’s all). It does get outside what the functorial axiomatization of TQFT’s seems to ask, though. In particular, it doesn’t require you to be talking about a cobordism (“spacetime”) going from an input hypersurface (“space-slice”) to an output. Instead, it lets you talk about a general region with boundary, treating the whole boundary at once. Any part of it can be thought of as input or output.

One point of this way of describing a QFT is to help deal with the “problem of time”. His talk at the conference was a sort of “back to basics” discussion about the two basic approaches to quantum gravity – what he named the “covariant” (or perturbative) approach and the “canonical” (or “no-spacetime”) approach. One way to put the “problem” of time has to do with the apparently incompatible roles it plays in, respectively, general relativity and quantum mechanics, and these two approaches respect different portions of these roles.

The point is that in (non-quantum) relativity, a “state” is a whole world-history, part of which is the background geometry, which determines a causal order – a sort of minimal summary of time in that state. But in particular, it is part of the information contained in a state, which describes everything real. In QM, on the other hand, a “state” contains some information about the world in a maximal way (though IF you assume it represents all of reality, THEN you have to accept that reality isn’t local). But moreover, time plays a special role in QM outside any particular “world”.

In particular, the state vector in the Hilbert space $\mathcal{H}$ encodes information about a system between measurements (chronologically!), an operator on $\mathcal{H}$ changes a state $\psi_1$ into a new state $\psi_2$ (also chronologically), and composition of operators implies a temporal sequence (which gives the meaning of noncommuting operators – the result depends on the order in which you perform them). This all depends on a notion of temporal order which, in relativity, depends on the background metric, which putatively depends on the state itself! So the two approaches to quantization try to either (a) keep the temporal order using a fixed background, and treat perturbations around it as the field (which can only be approximate), or (b) keep the idea that the metric is part of the state and hopefully recover the usual picture in some special cases (which is hard).

So as I understand it, the general boundary approach is meant to help get around this. It works by assigning data to both regions $M$, and their boundaries $\Sigma = \partial M$, subject to a few rules which are reminiscent of those which make a TQFT in the usual formulation into a monoidal functor. In particular, the theory assigns a Hilbert space $\mathcal{H}_{\Sigma}$ to a boundary, and a linear functional $\rho_M : \mathcal{H}_{\Sigma} \rightarrow \mathbb{C}$ to a region. This satisfies some rules such as that $\mathcal{H}_{\Sigma_1 \cup \Sigma_2} = \mathcal{H}_{\Sigma_1} \otimes \mathcal{H}_{\Sigma_2}$, that reversing the orientation of a boundary amounts to taking the dual of the Hilbert space, some gluing rules, and so on.

Then there is a way to recover a generalization of the probability interpretation for quantum mechanics. But it’s not a matter of first setting up a system in a state, and then making a measurement. Instead, it’s a way of asking a question, given some knowledge about the “system” at the boundary. Both knowledge and question take the form of subspaces (denoted $\mathcal{A}$ and $\mathcal{S}$) of $\mathcal{H}_{\Sigma}$, and the formula for probability involves both $\rho_M$ and the projection operators onto these subspaces. The “probability of $\mathcal{A}$ given $\mathcal{S}$” is:

$P(\mathcal{A}|\mathcal{S}) = \frac{|\rho_M \circ P_{\mathcal{S}} \circ P_{\mathcal{A}}|^2}{|\rho_M \circ P_{\mathcal{S}}|^2}$

Then one of the rules defining how $\rho_M$ behaves when $M$ is deformed gives a sort of “conservation of probability” – the equivalent of unitarity of time evolution. If $\Sigma$ decomposes as the union of an input and an output, and the subspaces $\mathcal{A}$ and $\mathcal{S}$ correspond to states on the input and the output surfaces, it gives exactly unitarity of time evolution.
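As a toy illustration of the probability rule (entirely my own made-up data – nothing here beyond the formula itself comes from Oeckl), take a small finite-dimensional boundary Hilbert space, a linear functional $\rho_M$ represented by a vector, and orthogonal projections onto subspaces playing the roles of $\mathcal{S}$ and $\mathcal{A}$:

```python
import numpy as np

# rho_M is a linear functional, represented by a vector:
# rho_M(psi) = rho.conj() @ psi.  In finite dimensions, the norm of the
# composed functional rho_M o P is the vector norm of P @ rho, since an
# orthogonal projector P is self-adjoint.
rho = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def proj(vectors):
    """Orthogonal projector onto the span of the given orthonormal columns."""
    B = np.column_stack(vectors)
    return B @ B.conj().T

e = np.eye(4)
P_S = proj([e[0], e[3]])  # "knowledge" subspace S
P_A = proj([e[0]])        # "question" subspace A, here a subspace of S

# P(A|S) = |rho_M o P_S o P_A|^2 / |rho_M o P_S|^2
prob = np.linalg.norm(P_A @ P_S @ rho)**2 / np.linalg.norm(P_S @ rho)**2
print(prob)  # approximately 0.5
```

Here $\mathcal{A} \subset \mathcal{S}$ and everything is finite-dimensional, so the answer is just the familiar Born-rule conditional probability.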

Now, this seems like an interesting idea, assuming that it does indeed get over the shortcomings of both canonical and covariant approaches to quantum gravity. My main questions have to do with how to interpret it in category-theoretic terms, since it would be nice to see whether an extended TQFT – with 2-algebraic data for surfaces of codimension 2, and so on – could be described in the same way. The way Oeckl presents his TQFT’s is quite minimal, which is good for some purposes and avoids some complexity, but loses the organizing structure of TQFT-as-functor.

One thing that would be needed is a way of talking about some sort of n-category which has composition for morphisms with fairly arbitrary shapes – not just taking a source to a target. Instead of composition of arrows tip-to-tail, one has to glue randomly shaped regions together. Offhand, I don’t know the right way to do this.

Well, a week ago I got back from England, where I spent a week at the University of Nottingham at the conference “Quantum Gravity and Quantum Geometry 2008”, and a weekend visiting friends in London. London was enjoyable, though surprisingly expensive. It’s strange, when so many things are traded globally, that prices differ so much from place to place – the standard rule being to imagine that all prices in Pounds are actually in dollars, and they seem quite familiar. Clearly not everything is affected by trade, with restaurant meals among them. In any case, it was quite interesting to come from London, Ontario to London, England, and walk around all the places whose names show up attached to completely dissimilar landmarks in the Canadian version.

As for the conference, it was a great experience. This was an outgrowth of the “LOOPS” series of conferences. The only one of those I’d been to previously was LOOPS ’05 at the Albert Einstein Institute, in Germany. At that time the conference was a little more focused on some particular approaches to quantum gravity (though there was still a whole range of talks). This year, there seemed to have been some attempt to broaden the conference a little – one result being that there must have been about 200 people attending, with something on the order of 90 talks, most of them half-hour talks in the parallel sessions. As a result, I saw less than half of what was going on. However, there were some broad subject areas, such as loop quantum gravity, spin foam and combinatorial quantization, noncommutative geometry, quantum groups, as well as some less readily classifiable talks.

In one talk on the first day, Carlo Rovelli discussed the relation between the Loop Quantum Gravity and spin-foam approaches to a theory of 4D quantum gravity. In particular, he was talking about the fact that the two approaches agree with each other in 3D, but it’s not so clear they do in 4D – or at least, it’s not clear what the spin foam model is that does this in 4D. This is part of what’s behind the program to improve the Barrett-Crane spin foam model for 4D gravity. It has various technical problems as well, which various more technical talks got into in more detail later in the conference. Rovelli was describing work on the new models which agree with LQG. Various other people have done work on this, including (among others) Freidel (who talked about that in his own talk later) and Krasnov, and Engle, Pereira and Rovelli. Florian Conrady also talked about these new models later on. I know Igor Khavkine, just graduating here at Western, has also done some work on these.

Another talk based off the successes of these models was by Abhay Ashtekar, about Loop Quantum Cosmology – that is, applying loop QG methods to the universe as a whole – a quantum version of the Friedmann-Robertson-Walker universe. What’s interesting about this is that they’re doing numerical and analytic simulations, and predicting something that otherwise has usually been added as a “what-if” afterthought. Namely, such a universe behaves a lot like classical FRW, except near the “big bang”, classically a singularity, where quantum geometric effects prevent that from happening. Continuing through the other side, one sees a collapsing universe – an overall “bounce” effect. An interesting prediction, if hard to check.

In any case, I was bombarded by a whole range of other talks on other points of view. Starting from the very first talk, by Vincent Rivasseau, there were several talks presenting noncommutative geometry, Alain Connes-style, as a setting for a quantum theory of gravity. There’s certainly an appeal to the idea of replacing measure-theoretic and topological information about spacetime with a quantum algebra of observables – just write the theory in quantum terms from the start, giving up the usual differential geometry for its noncommutative version. Rivasseau presented, among other things, the idea of QFT as weighted species, in the sense of Joyal’s combinatorial species. I thought this was great, since I looked at just that idea for the simplest QFT of all, the quantum harmonic oscillator.

(Speaking of which, I had some interesting conversations with Jamie Vicary in which I finally “got” part of what he did with his own paper about the oscillator – which is to show how “taking Fock space” for a quantum system is a monad, namely the monad associated with the “free commutative monoid” functor, and its adjoint.)
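The Set-level shadow of that monad is easy to write down (a sketch of my own, not Vicary’s actual construction, which lives in a nice monoidal category of Hilbert spaces): the free commutative monoid monad, with finite multisets playing the role that symmetric powers play in Fock space.

```python
# Free commutative monoid monad M on Set (a decategorified sketch):
# M(X) = finite multisets of X, encoded here as sorted tuples.  Fock space
# is the analogous "free commutative monoid" construction on Hilbert spaces.

def unit(x):
    """eta : X -> M(X), the singleton multiset."""
    return (x,)

def fmap(f, m):
    """Apply M to a function f : X -> Y."""
    return tuple(sorted(map(f, m)))

def mult(mm):
    """mu : M(M(X)) -> M(X), flatten a multiset of multisets."""
    return tuple(sorted(x for inner in mm for x in inner))

m = ('a', 'a', 'b')
print(mult(unit(m)) == m and mult(fmap(unit, m)) == m)  # True: the unit laws
```

The two checks at the end are the monad unit laws; the multiplication is strictly associative here, which is part of what makes this a monad rather than just an endofunctor.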

Shahn Majid, whom I knew as the author of some well-known books on quantum groups, also spoke about this C*-algebra approach to geometry and quantum gravity. The starting point: begin with a space, like a manifold, or better yet a fibre bundle, which is where a lot of physics gets done, and look at the algebra of forms on it. It has nice properties (it’s a differential graded algebra, etc.), including being commutative. One can deform these to noncommutative algebras that are quite nice – “q-deformation” makes the commutation relations between elements depend on some parameter q, so the old picture at q=1 is simply a special case.

So then one thing is to develop a deformed version of classical things from geometry and analysis – for example, the Fourier transform. Even in the big purple book on quantum groups, he outlined what this approach consists of: a criterion for a quantum theory of gravity, that it should be algebraically “self-dual”, under exchange of “position” and “momentum” variables. (That is, under a Fourier transform – $\mathbb{R}^n$ being its own Fourier dual).

Well, speaking of quantum groups, I should mention Aaron Lauda’s talk on categorifying them – specifically, on categorifying “deformed classical Lie groups”, like $U_q({sl}(2))$ (a q-deformed version of the universal enveloping algebra $U({sl}(2))$, which at $q=1$ is the usual algebra, where the Lie bracket of ${sl}(2)$ is a genuine commutator). He described a graphical calculus – a particular kind of string diagram, with some relations on them – which is a categorification of the quantum group. In fact, as sometimes happens, it categorifies a specific presentation of the algebra in terms of some generators and relations.

An appealing thing about these string diagram methods and so forth is that it suggests why these algebraic gadgets – quantum groups, in this case – are good at encoding topological information about tangles, braids, knots, and so on. If diagrams that involve those shapes categorify (read “model the underlying structure of”) quantum groups, then it makes sense for quantum groups to give invariants for them.

Along similar lines, Joao Faria Martins talked about invariants for “welded virtual knots”, and for knotted surfaces from crossed modules (read “2-groups”, if you’re so inclined – they are equivalent). Martins also published a paper with Tim Porter about related work, which in turn builds on David Yetter’s, on a class of manifold invariants. Their paper talks about “extending the Dijkgraaf-Witten model to categorical groups” (Urs Schreiber, possibly among others, rephrased that to call it a “categorification of the Dijkgraaf-Witten model”). The DW model is the TQFT foundation for my own look at extending (read, “categorifying”) TQFT’s based on gauge theory using a group $G$ (finite, for the DW model). These are categorifications in two different directions, though: one, from a gauge group to a gauge 2-group, the other from a TQFT – a functor – to a 2-functor. Probably for 4 dimensions and higher, the 2-group version or higher is the most interesting to study.

In fact, there was a fair bevy of talks relating to categorical methods in quantum geometry. For example, Jamie Vicary gave a talk introducing a “categorical framework for quantum algebra”, by means of non-threatening string diagrams. These can be used to express the axioms for a “$\dagger$-monoidal category”. Not incidentally to all this, he also shows that in finite dimensions, at least, a $C^*$-algebra is “the same thing as” a $\dagger$-Frobenius algebra.

Benjamin Bahr gave another talk dealing with categorical issues – namely, how to get measures on certain groupoids, such as, indeed, the groupoid of connections on a manifold. In fact, he treated various cases under the same framework: flat and non-flat connections, on manifolds and on graphs – and others.

In all, I was pleasantly surprised by the mix of the physically and mathematically inclined points of view, and the trip itself was a lot of fun.
