To continue from the previous post…

Twisted Differential Cohomology

Ulrich Bunke gave a talk introducing differential cohomology theories, and Thomas Nikolaus gave one about a twisted version of such theories (unfortunately, perhaps in the wrong order). The idea here is that cohomology can give a classification of field theories, and if we don’t want the theories to be purely topological, we would need to refine this. A cohomology theory is a (contravariant) functorial way of assigning to any space X, which we take to be a manifold, a \mathbb{Z}-graded group: that is, a tower of groups of “cocycles”, one group for each n, with some coboundary maps linking them. (In some cases, the groups are also rings.) For example, the group of differential forms, graded by degree.
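As a toy illustration of this “tower of groups with coboundary maps” (a sketch of my own, not from the talks), here is the bottom of the de Rham complex on \mathbb{R}^2 with polynomial coefficients, coded up just to check the defining property d \circ d = 0:

```python
# A minimal sketch (my own illustration): polynomial differential forms on
# R^2, with the coboundary d, checking d∘d = 0.  A polynomial is a dict
# {(i, j): coeff} meaning the sum of coeff * x^i * y^j.

def diff(p, var):
    """Partial derivative of a polynomial dict; var is 0 for x, 1 for y."""
    out = {}
    for (i, j), c in p.items():
        exp = (i, j)[var]
        if exp > 0:
            key = (i - 1, j) if var == 0 else (i, j - 1)
            out[key] = out.get(key, 0) + c * exp
    return {k: v for k, v in out.items() if v != 0}

def d0(f):
    """d on 0-forms: f ↦ (∂f/∂x, ∂f/∂y), the coefficients of dx and dy."""
    return (diff(f, 0), diff(f, 1))

def d1(omega):
    """d on 1-forms: (P, Q) ↦ ∂Q/∂x − ∂P/∂y, the coefficient of dx∧dy."""
    P, Q = omega
    out = dict(diff(Q, 0))
    for k, v in diff(P, 1).items():
        out[k] = out.get(k, 0) - v
    return {k: v for k, v in out.items() if v != 0}

f = {(3, 1): 1, (1, 2): 5}     # x^3 y + 5 x y^2
print(d1(d0(f)))               # → {} : the zero form, i.e. d(df) = 0
```

The point is just that equality of mixed partials makes the composite of successive coboundary maps vanish, which is what makes “cocycles” and cohomology groups possible in the first place.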

Cohomology theories satisfy some axioms – for example, the Mayer-Vietoris sequence has to apply whenever you cut a manifold into parts. Differential cohomology relaxes one axiom, the requirement that cohomology be a homotopy invariant of X. Given a differential cohomology theory, one can impose equivalence relations on the differential cocycles to get a theory that does satisfy this axiom – so we say the finer theory is a “differential refinement” of the coarser. So, in particular, ordinary cohomology theories are classified by spectra (this is related to the Brown representability theorem), whereas the differential ones are represented by sheaves of spectra – where the constant sheaves represent the cohomology theories which happen to be homotopy invariants.

The “twisting” part of this story can be applied to either an ordinary cohomology theory, or a differential refinement of one (though this needs similarly refined “twisting” data). The idea is that, if R is a cohomology theory, it can be “twisted” over X by a map \tau: X \rightarrow Pic_R into the “Picard group” of R. This is the group of invertible R-modules (where an R-module means a module for the cohomology ring assigned to X) – essentially, tensoring with these modules is what defines the “twisting” of a cohomology element.

An example of all this is twisted differential K-theory. Here the groups consist of isomorphism classes of certain vector bundles over X, and the twisting is particularly simple (the Picard group in the topological case is just \mathbb{Z}_2). The main result is that, while topological twists are classified by appropriate gerbes on X (for K-theory, U(1)-gerbes), the differential ones are classified by gerbes with connection.

Fusion Categories

Scott Morrison gave a talk about Classifying Fusion Categories, the point of which was just to collect together a bunch of results constructing particular examples. The talk opens with a quote by Rutherford: “All science is either physics or stamp collecting” – that is, either about systematizing data and finding simple principles which explain it, or about collecting lots of data. This talk was unabashed stamp-collecting, on the grounds that we just don’t have a lot of data to systematically understand yet – and for that very reason I won’t try to summarize all the results, but the slides are well worth a look-over. The point is that fusion categories are very useful in constructing TQFT’s, and there are several different constructions that begin “given a fusion category \mathcal{C}”… and yet there aren’t all that many examples, and very few large ones, known.

Scott also makes the analogy that fusion categories are “noncommutative finite groups” – which is a little confusing, since not all finite groups are commutative anyway – but the idea is that the symmetric fusion categories are exactly the representation categories of finite groups. So general fusion categories are a non-symmetric generalization of such groups. Since classifying finite groups turned out to be difficult, and involve a laundry-list of sporadic groups, it shouldn’t be too surprising that understanding fusion categories (which, for the symmetric case, include the representation categories of all these examples) should be correspondingly tricky. Since, as he points out, we don’t have very many non-symmetric examples beyond rank 12 (analogous to knowing only finite groups with at most 12 elements), it’s likely that we don’t have a very good understanding of these categories in general yet.
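To make the statement “symmetric fusion categories are exactly the representation categories of finite groups” a bit more concrete, here is a small computation of my own (not from the talk), recovering the fusion rules of Rep(S_3) from the standard character table of S_3 by decomposing tensor products of characters:

```python
# An illustrative computation (my own sketch): the fusion rules of Rep(S_3)
# recovered from its character table.  Conjugacy classes of S_3: the
# identity (size 1), transpositions (size 3), 3-cycles (size 2).
class_sizes = [1, 3, 2]
order = sum(class_sizes)   # |S_3| = 6

chars = {
    'triv': [1, 1, 1],
    'sign': [1, -1, 1],
    'std':  [2, 0, -1],
}

def mult(chi, psi):
    """Multiplicity <chi, psi> = (1/|G|) * sum over classes of |class|*chi*psi."""
    return sum(s * a * b for s, a, b in zip(class_sizes, chi, psi)) // order

def fuse(a, b):
    """Decompose the tensor product a ⊗ b into irreducibles."""
    prod = [x * y for x, y in zip(chars[a], chars[b])]
    return {name: mult(prod, chi) for name, chi in chars.items() if mult(prod, chi)}

print(fuse('std', 'std'))   # → {'triv': 1, 'sign': 1, 'std': 1}
print(fuse('sign', 'std'))  # → {'std': 1}
```

So the 2-dimensional representation fuses with itself into all three irreducibles – these multiplicities are exactly the fusion rules (the monoidal structure on isomorphism classes of simple objects) in this symmetric example.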

There were a couple of talks – one during the workshop by Sonia Natale, and one the previous week by Sebastian Burciu, whom I also had the chance to talk with that week – about “Equivariantization” of fusion categories, and some fairly detailed descriptions of what results. The two of them have a paper on this which gives more details, which I won’t summarize – but I will say a bit about the construction.

An “equivariantization” of a category C acted on by a group G is supposed to be a generalization of the notion of the set of fixed points for a group acting on a set. The category C^G has objects which consist of an object x \in C which is fixed by the action of G, together with an isomorphism \mu_g : x \rightarrow x for each g \in G, satisfying a bunch of unsurprising conditions like being compatible with the group operation. The morphisms are maps in C between the objects, which form commuting squares for each g \in G. Their paper, and the talks, described how this works when C is a fusion category – namely, C^G is also a fusion category, and one can work out its fusion rules (i.e. monoidal structure). In some cases, it’s a “group theoretical” fusion category (it looks like Rep(H) for some group H) – or a weakened version of such a thing (it’s Morita equivalent to such a category).

A nice special case of this is if the group action happens to be trivial, so that every object of C is a fixed point. In this case, C^G is just the category of objects of C equipped with a G-action, and the intertwining maps between these. For example, if C = Vect, then C^G = Rep(G) (in particular, a “group-theoretical fusion category”). What’s more, this construction is functorial in G itself: given a subgroup H \subset G, we get an adjoint pair of functors between C^G and C^H, which in our special case are just the induced-representation and restricted-representation functors for that subgroup inclusion. That is, we have a Mackey functor here. These generalize, however, to any fusion category C, and to nontrivial actions of G on C. The point of their paper, then, is to give a good characterization of the categories that come out of these constructions.
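Here is a minimal sketch of my own of that special case, with C the category of finite sets and the action trivial: an object of C^G is then just a set equipped with maps \mu_g compatible with the group operation – in other words, a G-set.

```python
# A toy sketch (my own illustration) of equivariantization with a trivial
# action on C = finite sets: an object of C^G is a set X together with a
# bijection mu_g for each g, compatible with the group operation, i.e.
# mu_{g*h} = mu_g ∘ mu_h.  Here G = Z/3 = {0, 1, 2} under addition mod 3.
G = [0, 1, 2]
op = lambda g, h: (g + h) % 3

X = [0, 1, 2]                                        # underlying set
mu = {g: {x: (x + g) % 3 for x in X} for g in G}     # cyclic rotation

# check the compatibility condition for every pair of group elements
for g in G:
    for h in G:
        for x in X:
            assert mu[op(g, h)][x] == mu[g][mu[h][x]]

print("mu is a G-action, i.e. an object of C^G: a G-set")
```

The compatibility conditions mentioned above are exactly the assertions checked in the loop; for a nontrivial action of G on C, the maps \mu_g would instead go from the image of x under the action back to x.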

Quantizing with Higher Categories

The last talk I’d like to describe was by Urs Schreiber, called Linear Homotopy Type Theory for Quantization. Urs has been giving evolving talks on this topic for some time, and it’s quite a big subject (see the long version of the notes above if there’s any doubt). However, I always try to get a handle on these talks, because it seems to be describing the most general framework that fits the general approach I use in my own work. This particular one borrows a lot from the language of logic (the “linear” in the title alludes to linear logic).

Basically, Urs’ motivation is to describe a good mathematical setting in which to construct field theories using ingredients familiar to the physics approach to “field theory”, namely… fields. (See the description of Kevin Walker’s talk.) Also, Lagrangian functionals – that is, the notion of a physical action. Constructing TQFT from modular tensor categories, for instance, is great, but the fields and the action seem to be hiding in this picture. There are many conceptual problems with field theories – like the mathematical meaning of path integrals, for instance. Part of the approach here is to find a good setting in which to locate the moduli spaces of fields (and the spaces in which path integrals are done). Then, one has to come up with a notion of quantization that makes sense in that context.

The first claim is that the category of such spaces should form a differentially cohesive infinity-topos which we’ll call \mathbb{H}. The “infinity” part means we allow morphisms between field configurations of all orders (2-morphisms, 3-morphisms, etc.). The “topos” part means that all sorts of reasonable constructions can be done – for example, pullbacks. The “differentially cohesive” part captures the sort of structure that ensures we can really treat these as spaces of the suitable kind: “cohesive” means that we have a notion of connected components around (it’s implemented by having a bunch of adjoint functors between spaces and points). The “differential” part is meant to allow for the sort of structures discussed above under “differential cohomology” – really, that we can capture geometric structure, as in gauge theories, and not just topological structure.

In this case, we take \mathbb{H} to have objects which are spectral-valued infinity-stacks on manifolds. This may be unfamiliar, but the main point is that it’s a kind of generalization of a space. Now, the sort of situation where quantization makes sense is: we have a space (i.e. \mathbb{H}-object) of field configurations to start, then a space of paths (this is WHERE “path-integrals” are defined), and a space of field configurations in the final system where we observe the result. There are maps from the space of paths to identify starting and ending points. That is, we have a span:

A \leftarrow X \rightarrow B

Now, in fact, these may all lie over some manifold, such as B^n(U(1)), the classifying space for U(1) (n-1)-gerbes. That is, we don’t just have these “spaces”, but these spaces equipped with one of those pieces of cohomological twisting data discussed up above. That enters the quantization like an action (it’s WHAT you integrate in a path integral).

Aside: To continue the parallel, quantization is playing the role of a cohomology theory, and the action is the twist. I really need to come back and complete an old post about motives, because there’s a close analogy here. If quantization is a cohomology theory, it should come by factoring through a universal one. In the world of motives, where “space” now means something like “scheme”, the target of this universal cohomology theory is a mild variation on the category of spans I just alluded to. Then all others come from some functor out of it.

Then the issue is what quantization looks like on this sort of scenario. The Atiyah-Singer viewpoint on TQFT isn’t completely lost here: quantization should be a functor into some monoidal category. This target needs properties which allow it to capture the basic “quantum” phenomena of superposition (i.e. some additivity property), and interference (some actual linearity over \mathbb{C}). The target category Urs talked about was the category of E_{\infty}-rings. The point is that these are just algebras that live in the world of spectra, which is where our spaces already lived. The appropriate target will depend on exactly what \mathbb{H} is.

But what Urs did do was give a characterization of what the target category should be LIKE for a certain construction to work. It’s a “pull-push” construction: see the link way above on Mackey functors – restriction and induction of representations are an example. It’s what he calls a “(2-monoidal, Beck-Chevalley) Linear Homotopy-Type Theory”. Essentially, this is a list of conditions which ensure that, for the two morphisms in the span above, we have a “pull” operation for each, and left and right adjoints to it (which need to be related in a nice way – the jargon here is that we must be in a Wirthmüller context), satisfying some nice relations, and that everything is functorial.

The intuition is that if we have some way of getting a “linear gadget” out of one of our configuration spaces of fields (analogous to constructing a space of functions when we do canonical quantization over, let’s say, a symplectic manifold), then we should be able to lift it (the “pull” operation) to the space of paths. Then the “push” part of the operation is where the “path integral” part comes in: many paths might contribute to the value of a function (or functor, or whatever it may be) at the end-point of those paths, because there are many ways to get from A to B, and all of them contribute in a linear way.
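A drastically simplified finite model of this pull-push idea (a sketch of my own – nothing like the actual spectral machinery) replaces the spaces in the span with finite sets: a span A \leftarrow X \rightarrow B then induces a linear map on functions by pulling back along one leg and summing over fibers of the other, so each “path” in X contributes linearly.

```python
# A finite toy model of pull-push quantization (my own sketch): a span of
# finite sets A <-s- X -t-> B induces a linear map from functions on A to
# functions on B: pull back along s, then push forward (sum over fibers)
# along t.  Each "path" x in X contributes linearly to the result.
def pull(f, s, X):
    """Pull back f : A -> numbers along s : X -> A."""
    return {x: f[s[x]] for x in X}

def push(g, t, B):
    """Push g : X -> numbers forward along t : X -> B, summing over fibers."""
    return {b: sum(v for x, v in g.items() if t[x] == b) for b in B}

# A = {a}, B = {b}, and X has three "paths" from a to b
A, B, X = ['a'], ['b'], [0, 1, 2]
s = {x: 'a' for x in X}
t = {x: 'b' for x in X}

f = {'a': 1.0}                     # the constant function 1 on A
print(push(pull(f, s, X), t, B))   # → {'b': 3.0}: three paths, each weight 1
```

The “action” from the twisting data would enter as a weight multiplying each term in the sum; here every path gets weight 1, which is the baseline case.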

So, if this all seems rather abstract, that’s because the point of it is to characterize very generally what has to be available for the ideas that appear in physics notions of path-integral quantization to make sense. Many of the particulars – spectra, E_{\infty}-rings, infinity-stacks, and so on – which showed up in the example are in a sense just placeholders for anything with the right formal properties. So at the same time as it moves into seemingly very abstract terrain, this approach is also supposed to get out of the toy-model realm of TQFT, and really address the trouble in rigorously defining what’s meant by some of the standard practice of physics in field theory by analyzing the logical structure of what this practice is really saying. If it turns out to involve some unexpected math – well, given the underlying issues, it would have been more surprising if it didn’t.

It’s not clear to me how far along this road this program gets us, as far as dealing with questions an actual physicist would like to ask (for the most part, if the standard practice works as an algorithm to produce results, physicists seldom need to ask what it means in rigorous math language), but it does seem like an interesting question.

So it’s been a while since I last posted – the end of 2013 ended up being busy with a couple of visits to Jamie Vicary in Oxford, and Roger Picken in Lisbon. In the aftermath of the two trips, I did manage to get a major revision of this paper submitted to a journal, and put this one out in public. A couple of others will be coming down the pipeline this year as well.

I’m hoping to get back to a post about motives which I planned earlier, but for the moment, I’d like to write a little about the second paper, with Roger Picken.

Global and Local Symmetry

The upshot is that it’s about categorifying the concept of symmetry. More specifically, it’s about finding the analog in the world of categories for the interplay between global and local symmetry which occurs in the world of set-based structures (sets, topological spaces, vector spaces, etc.). This distinction is discussed in a nice way by Alan Weinstein in this article from the Notices of the AMS.

The global symmetry of an object X in some category \mathbf{C} can be described in terms of its group of automorphisms: all the ways the object can be transformed which leave it “the same”. This fits our understanding of “symmetry” when the morphisms can really be interpreted as transformations of some sort. So let’s suppose the object is a set with some structure, and the morphisms are set-maps that preserve the structure: for example, the objects could be sets of vertices and edges of a graph, so that morphisms are maps of the underlying data that preserve incidence relations. So a symmetry of an object is a way of transforming it into itself – and an invertible one at that – and these automorphisms naturally form a group Aut(X). More generally, we can talk about an action of a group G on an object X, which is a map \phi : G \rightarrow Aut(X).
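For a concrete (if brute-force) illustration of Aut(X) in the graph example – my own sketch – one can simply enumerate the permutations of the vertices of the path graph 0 – 1 – 2 which preserve the edge relation:

```python
# A brute-force illustration (my own) of Aut(X) for a tiny graph: the path
# 0 - 1 - 2.  An automorphism is a permutation of the vertices preserving
# the incidence relation; the only nontrivial one swaps the two endpoints.
from itertools import permutations

vertices = [0, 1, 2]
edges = {frozenset({0, 1}), frozenset({1, 2})}

def is_automorphism(p):
    """Check that the vertex map p (a dict) sends edges onto edges."""
    return {frozenset({p[u], p[v]}) for u, v in edges} == edges

auts = []
for q in permutations(vertices):
    p = dict(zip(vertices, q))
    if is_automorphism(p):
        auts.append(p)

print(len(auts))   # → 2: the identity, and the flip exchanging 0 and 2
```

The two automorphisms found here form the group \mathbb{Z}_2, which is Aut(X) for this graph – the “global symmetry” in the sense above.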

“Local symmetry” is different, and it makes most sense in a context where the object X is a set – or at least, where it makes sense to talk about elements of X, so that X has an underlying set of some sort.

Actually, being a set-with-structure, in a lingo I associate with Jim Dolan, means that the forgetful functor U : \mathbf{C} \rightarrow \mathbf{Sets} is faithful: you can tell morphisms in \mathbf{C} (in particular, automorphisms of X) apart by looking at what they do to the underlying set. The intuition is that the morphisms of \mathbf{C} are exactly set maps which preserve the structure which U forgets about – or, conversely, that the structure on objects of \mathbf{C} is exactly that which is forgotten by U. Certainly, knowing only this information determines \mathbf{C} up to equivalence. In any case, suppose we have an object like this: then knowing about the symmetries of X amounts to knowing about a certain group action, namely the action of Aut(X), on the underlying set U(X).

From this point of view, symmetry is about group actions on sets. The way we represent local symmetry (following Weinstein’s discussion, above) is to encode it as a groupoid – a category whose morphisms are all invertible. There is a level-slip happening here, since X is now no longer seen as an object inside a category: it is the collection of all the objects of a groupoid. What makes this a representation of “local” symmetry is that each morphism now represents, not just a transformation of the whole object X, but a relationship under some specific symmetry between one element of X and another. If there is an isomorphism between x \in X and y \in X, then x and y are “symmetric” points under some transformation. As Weinstein’s article illustrates nicely, though, there is no assumption that the given transformation actually extends to the entire object X: it may be that only part of X has, for example, a reflection symmetry, but the symmetry doesn’t extend globally.

Transformation Groupoid

The “interplay” I alluded to above, between the global and local pictures of symmetry, is to build a “transformation groupoid” (or “action groupoid”) associated to a group G acting on a set X. The result is called X // G for short. Its morphisms consist of pairs (g,x), where (g,x) : x \rightarrow (g \rhd x) is a morphism taking x to its image under the action of g \in G. The “local” symmetry view of X // G treats each of these symmetry relations between points as a distinct bit of data, but coming from a global symmetry – that is, a group action – means that the set of morphisms comes from the product G \times X.

Indeed, the “target” map in X // G from morphisms to objects is exactly a map G \times X \rightarrow X. It is not hard to show that this map is an action in another standard sense. Namely, if we have a real action \phi : G \rightarrow Hom(X,X), then this map is just \hat{\phi} : G \times X \rightarrow X, which moves one of the arguments to the left side. If \phi is a functor, then \hat{\phi} satisfies the “action” condition, namely that the following square commutes:

[Diagram: the action square, which says \hat{\phi} \circ (m \times 1_X) = \hat{\phi} \circ (1_G \times \hat{\phi}) as maps G \times G \times X \rightarrow X]

(Here, m is the multiplication in G, and this is the familiar associativity-type axiom for a group action: acting by a product of two elements in G is the same as acting by each one successively.)
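To make all of this concrete, here is a small sketch of my own: G = \mathbb{Z}_2 acting on a three-element set, with the morphisms (g,x) of X // G, their source and target maps, and a check of the action square.

```python
# A concrete X // G (my own sketch): G = Z/2 = {0, 1} acting on the set
# X = {0, 1, 2} by swapping 0 and 1 and fixing 2.  Morphisms of X // G are
# pairs (g, x) : x -> g ▷ x; composing morphisms multiplies in G.
G = [0, 1]
m = lambda g, h: (g + h) % 2                 # multiplication in G
act = {(0, 0): 0, (0, 1): 1, (0, 2): 2,      # the action map G × X -> X
       (1, 0): 1, (1, 1): 0, (1, 2): 2}

X = [0, 1, 2]
morphisms = [(g, x) for g in G for x in X]   # the set of morphisms, G × X
source = lambda g, x: x
target = lambda g, x: act[(g, x)]            # target = the action map

# the action square: acting by m(g, h) equals acting by h, then by g
for g in G:
    for h in G:
        for x in X:
            assert act[(m(g, h), x)] == act[(g, act[(h, x)])]

print(target(1, 0))   # → 1: the morphism (1, 0) takes 0 to 1
```

Notice how the “local” data (six individual morphisms) and the “global” data (the swap in G acting on all of X at once) are both visible: the morphisms are exactly the product G \times X, as the text says.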

So the starting point for the paper with Roger Picken was to categorify this. It’s useful, before doing that, to stop and think for a moment about what makes this possible.

First, as stated, this assumed that X either is a set, or has an underlying set by way of some faithful forgetful functor: that is, every morphism in Aut(X) corresponds to a unique set map from the elements of X to itself. We needed this to describe the groupoid X // G, whose objects are exactly the elements of X. The diagram above suggests a different way to think about this. The action diagram lives in the category \mathbf{Set}: we are thinking of G as a set together with some structure maps. X and the morphism \hat{\phi} must be in the same category, \mathbf{Set}, for this characterization to make sense.

So in fact, what matters is that the category X lives in is closed: that is, it is enriched in itself, so that for any objects X,Y, there is an object Hom(X,Y), the internal hom. In this case, it’s G = Hom(X,X) which appears in the diagram. Such an internal hom is supposed to be dual to \mathbf{Set}’s monoidal product (which happens to be the Cartesian product \times): this is exactly what lets us talk about \hat{\phi}.
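In \mathbf{Set}, this duality between the monoidal product and the internal hom is just currying, which is easy to see in code (a sketch of my own):

```python
# The closed structure at work here is just currying (my own minimal
# sketch): a map G × X -> X corresponds to a map G -> Hom(X, X).
def curry(phi_hat):
    """Hom(G × X, X) -> Hom(G, Hom(X, X))."""
    return lambda g: (lambda x: phi_hat(g, x))

def uncurry(phi):
    """Hom(G, Hom(X, X)) -> Hom(G × X, X)."""
    return lambda g, x: phi(g)(x)

phi_hat = lambda g, x: (g + x) % 5   # Z/5 acting on itself by translation
phi = curry(phi_hat)

print(phi(2)(4))                     # → 1
print(uncurry(phi)(2, 4))            # → 1: the round trip agrees
```

The adjunction Hom(G \times X, X) \cong Hom(G, Hom(X,X)) implemented by these two functions is exactly what lets us pass between \phi and \hat{\phi}.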

So really, this construction of a transformation groupoid will work for any closed monoidal category \mathbf{C}, producing a groupoid in \mathbf{C}. It may be easier to understand in cases like \mathbf{C}=\mathbf{Top}, the category of topological spaces, where there is indeed a faithful underlying set functor. But although talking explicitly about elements of X was useful for intuitively seeing how X//G relates global and local symmetries, it played no particular role in the construction.

Categorify Everything

In the circles I run in, a popular hobby is to “categorify everything”: there are different versions, but what we mean here is to turn ideas expressed in the world of sets into ideas in the world of categories. (Technical aside: all the categories here are assumed to be small). In principle, this is harder than just reproducing all of the above in any old closed monoidal category: the “world” of categories is \mathbf{Cat}, which is a closed monoidal 2-category, which is a more complicated notion. This means that doing all the above “strictly” is a special case: all the equalities (like the commutativity of the action square) might in principle be replaced by (natural) isomorphisms, and a good categorification involves picking these to have good properties.

(In our paper, we left this to an appendix, because the strict special case is already interesting, and in any case there are “strictification” results, such as the fact that weak 2-groups are all equivalent to strict 2-groups, which mean that the weak case isn’t as much more general as it looks. For higher n-categories, this will fail – which is why we include the appendix to suggest how the pattern might continue).

Why is this interesting to us? Bumping up the “categorical level” appeals for different reasons, but the ones that matter most to me have to do with taking low-dimensional (or -codimensional) structures, and finding analogous ones at higher (co)dimension. In our case, the starting point had to do with looking at the symmetries of “higher gauge theories” – which can be used to describe the transport of higher-dimensional surfaces in a background geometry, the way gauge theories can describe the transport of point particles. But I won’t ask you to understand that example right now, as long as you can accept that “what are the global/local symmetries of a category like?” is a possibly interesting question.

So let’s categorify the discussion about symmetry above… To begin with, we can just take our (closed monoidal) category to be \mathbf{Cat}, and follow the same construction above. So our first ingredient is a 2-group \mathcal{G}. As with groups, we can think of a 2-group either as a 2-category with just one object \star, or as a 1-category with some structure – a group object in \mathbf{Cat}, which we’ll call C(\mathcal{G}) if it comes from a given 2-group. (In our paper, we keep these distinct by using the term “categorical group” for the second. The group axioms amount to saying that we have a monoidal category (C(\mathcal{G}), \otimes, I). Its objects are the morphisms of the 2-group, and the composition becomes the monoidal product \otimes.)

(In fact, we often use a third equivalent definition, that of crossed modules of groups, but to avoid getting into that machinery here, I’ll be changing our notation a little.)
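For the curious, here is a small sketch of my own (not an example from the paper) of the crossed-module data: groups G and H, a homomorphism \partial : H \rightarrow G, and an action of G on H satisfying equivariance of \partial and the Peiffer identity – checked here on an abelian example where both conditions hold for easy reasons.

```python
# A sketch of the crossed-module presentation of a 2-group (my own toy
# example): groups G and H, a homomorphism delta : H -> G, and an action
# of G on H satisfying
#   (1) delta(g ▷ h) = g * delta(h) * g^-1   (equivariance), and
#   (2) delta(h) ▷ h' = h * h' * h^-1        (the Peiffer identity).
# Example: G = H = Z/4 (written additively), delta the identity map,
# trivial action -- both conditions hold because everything is abelian.
n = 4
G = H = list(range(n))
mul = lambda a, b: (a + b) % n     # the group law
inv = lambda a: (-a) % n
delta = lambda h: h                 # identity homomorphism H -> G
act = lambda g, h: h                # trivial action of G on H

for g in G:
    for h in H:
        # equivariance of delta
        assert delta(act(g, h)) == mul(mul(g, delta(h)), inv(g))
for h in H:
    for h2 in H:
        # the Peiffer identity
        assert act(delta(h), h2) == mul(mul(h, h2), inv(h))

print("(Z/4 --id--> Z/4, trivial action) is a crossed module")
```

In the 2-group picture, G is the group of objects (morphisms of the one-object 2-category) and H the group of 2-morphisms out of the identity; the two axioms are what make horizontal and vertical composition compatible.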

2-Group Actions

So, again, there are two ways to talk about an action of a 2-group on some category \mathbf{C}. One is to define an action as a 2-functor \Phi : \mathcal{G} \rightarrow \mathbf{Cat}. The object being acted on, \mathbf{C} \in \mathbf{Cat}, is the unique object \Phi(\star) – so that the 2-functor amounts to a monoidal functor from the categorical group C(\mathcal{G}) into Aut(\mathbf{C}). Notice that here we’re taking advantage of the fact that \mathbf{Cat} is closed, so that the hom-“sets” are actually categories, and the automorphisms of \mathbf{C} – invertible functors from \mathbf{C} to itself – form the objects of a monoidal category, and in fact a categorical group. What’s new, though, is that there are also 2-morphisms – natural transformations between these functors.

To begin with, then, we show that there is a map \hat{\Phi} : \mathcal{G} \times \mathbf{C} \rightarrow \mathbf{C}, which corresponds to the 2-functor \Phi, and satisfies an action axiom like the square above, with \otimes playing the role of group multiplication. (Again, remember that we’re only talking about the version where this square commutes strictly here – in an appendix of the paper, we talk about the weak version of all this.) This is an intuitive generalization of the situation for groups, but it is slightly more complicated.

The action \Phi directly gives three maps. First, functors \Phi(\gamma) : \mathbf{C} \rightarrow \mathbf{C} for each 2-group morphism \gamma – each of which consists of a function between objects of \mathbf{C}, together with a function between morphisms of \mathbf{C}. Second, natural transformations \Phi(\eta) : \Phi(\gamma) \rightarrow \Phi(\gamma ') for 2-morphisms \eta : \gamma \rightarrow \gamma' in the 2-group – each of which consists of a function from objects to morphisms of \mathbf{C}.

On the other hand, \hat{\Phi} : \mathcal{G} \times \mathbf{C} \rightarrow \mathbf{C} is just a functor: it gives two maps, one taking pairs of objects to objects, the other doing the same for morphisms. Clearly, the map (\gamma,x) \mapsto x' is just given by x' = \Phi(\gamma)(x). The map taking pairs of morphisms (\eta,f) : (\gamma,x) \rightarrow (\gamma ', y) to morphisms of \mathbf{C} is less intuitively obvious. Since I already claimed \Phi and \hat{\Phi} are equivalent, it should be no surprise that we ought to be able to reconstruct the other two parts of \Phi from it as special cases. These are morphism-maps for the functors (which give \Phi(\gamma)(f) or \Phi(\gamma ')(f)), and the natural transformation maps (which give \Phi(\eta)(x) or \Phi(\eta)(y)). In fact, there are only two sensible ways to combine these four bits of information, and the fact that \Phi(\eta) is natural means precisely that they’re the same, so:

\hat{\Phi}(\eta,f) = \Phi(\eta)(y) \circ \Phi(\gamma)(f) = \Phi(\gamma ')(f) \circ \Phi(\eta)(x)

Given the above, though, it’s not so hard to see that a 2-group action really involves two group actions: of the objects of \mathcal{G} on the objects of \mathbf{C}, and of the morphisms of \mathcal{G} on the morphisms of \mathbf{C}. They fit together nicely because objects can be identified with their identity morphisms: furthermore, \Phi being a functor gives an action of \mathcal{G}-objects on \mathbf{C}-morphisms which fits in between them nicely.

But what of the transformation groupoid? What is the analog of the transformation groupoid, if we repeat its construction in \mathbf{Cat}?

The Transformation Double Category of a 2-Group Action

The answer is that a category (such as a groupoid) internal to \mathbf{Cat} is a double category. The compact way to describe it is as a “category in \mathbf{Cat}“, with a category of objects and a category of morphisms, each of which of course has objects and morphisms of its own. For the transformation double category, following the same construction as for sets, the object-category is just \mathbf{C}, and the morphism-category is \mathcal{G} \times \mathbf{C}, and the target functor is just the action map \hat{\Phi}. (The other structure maps that make this into a category in \mathbf{Cat} can similarly be worked out by following your nose).

This is fine, but the internal description tends to obscure an underlying symmetry in the idea of double categories, in which morphisms in the object-category and objects in the morphism-category can switch roles, giving a different description of “the same” double category, called its “transpose”.

A different approach considers these as two different types of morphism, “horizontal” and “vertical”: they are the morphisms of horizontal and vertical categories, built on the same set of objects (the objects of the object-category). The morphisms of the morphism-category are then called “squares”. This makes a convenient way to draw diagrams in the double category. Here’s a version of a diagram from our paper with the notation I’ve used here, showing what a square corresponding to a morphism (\chi,f) \in \mathcal{G} \times \mathbf{C} looks like:

[Diagram: the square in \mathbf{C} // \mathcal{G} corresponding to a morphism (\chi,f) \in \mathcal{G} \times \mathbf{C}]

The square (with the boxed label) has the dashed arrows at the top and bottom for its source and target horizontal morphisms (its images under the source and target functors: the argument above about naturality means they’re well-defined). The vertical arrows connecting them are the source and target vertical morphisms (its images under the source and target maps in the morphism-category).

Horizontal and Vertical Slices of \mathbf{C} // \mathcal{G}

So by construction, the horizontal category of these squares is just the object-category \mathbf{C}. For the same reason, the squares and vertical morphisms make up the category \mathcal{G} \times \mathbf{C}.

On the other hand, the vertical category has the same objects as \mathbf{C}, but different morphisms: it’s not hard to see that the vertical category is just the transformation groupoid for the action of the group of \mathcal{G}-objects on the set of \mathbf{C}-objects, Ob(\mathbf{C}) // Ob(\mathcal{G}). Meanwhile, the horizontal morphisms and squares make up the transformation groupoid Mor(\mathbf{C}) // Mor(\mathcal{G}). These are the object-category and morphism-category of the transpose of the double-category we started with.

We can take this further: if squares aren’t hip enough for you – or if you’re someone who’s happy with 2-categories but finds double categories unfamiliar – the horizontal and vertical categories can be extended to make horizontal and vertical bicategories. They have the same objects and morphisms, but we add new 2-cells which correspond to squares where the boundaries have identity morphisms in the direction we’re not interested in. These two turn out to feel quite different in style.

First, the horizontal bicategory extends \mathbf{C} by adding 2-morphisms to it, corresponding to morphisms of \mathcal{G}: roughly, it makes the morphisms of \mathbf{C} into the objects of a new transformation groupoid, based on the action of the group of automorphisms of the identity in \mathcal{G} (which ensures the square has identity edges on the sides). This last point is the only constraint, and it’s not a very strong one since Aut(1_G) and G essentially determine the entire 2-group: the constraint only relates to the structure of \mathcal{G}.

The constraint for the vertical bicategory is different in flavour because it depends more on the action \Phi. Here we are extending a transformation groupoid, Ob(\mathbf{C}) // Ob(\mathcal{G}). But, for some actions, many morphisms in \mathcal{G} might just not show up at all. For 1-morphisms (\gamma, x), the only 2-morphisms which can appear are those taking \gamma to some \gamma ' which has the same effect on x as \gamma. So, for example, this will look very different if \Phi is free (so only automorphisms show up), or a trivial action (so that all morphisms appear).

In the paper, we look at these in the special case of an adjoint action of a 2-group, so you can look there if you’d like a more concrete example of this difference.

Speculative Remarks

The starting point for this was a project (which I talked about a year ago) to do with higher gauge theory – see the last part of the linked post for more detail. The point is that, in gauge theory, one deals with connections on bundles, and morphisms between them called gauge transformations. If one builds a groupoid out of these in a natural way, it turns out to result from the action of a big symmetry group of all gauge transformations on the moduli space of connections.

In higher gauge theory, one deals with connections on gerbes (or higher gerbes – a bundle is essentially a “0-gerbe”). There are now also (2-)morphisms between gauge transformations (and, in higher cases, this continues further), which Roger Picken and I have been calling “gauge modifications”. If we try to repeat the situation for gauge theory, we can construct a 2-groupoid out of these, which expresses this local symmetry. The thing which is different for gerbes (and will continue to get even more different if we move to n-gerbes and the corresponding (n+1)-groupoids) is that this is not the same type of object as a transformation double category.

Now, in our next paper (which this one was written to make possible) we show that the 2-groupoid is actually very intimately related to the transformation double category: that is, the local picture of symmetry for a higher gauge theory is, just as in the lower-dimensional situation, closely connected to a global symmetry of an entire moduli 2-space, i.e. a category. The reason this wasn’t obvious at first is that the moduli space which includes only connections is just the space of objects of this category: the point is that there are really two special kinds of gauge transformations. One should be thought of as the morphisms in the moduli 2-space, and the other as part of the symmetries of that 2-space. The intuition that comes from ordinary gauge theory overlooks this, because the phenomenon doesn’t occur there.

Physically-motivated theories are starting to use these higher-categorical concepts more and more, and symmetry is a crucial idea in physics. What I’ve sketched here is presumably only the start of a pattern in which “symmetry” extends to higher-categorical entities. When we get to 3-groups, the simplifying assumptions that rely on “strictification” results won’t even be available any more, so we should expect still further new phenomena to show up. It seems plausible, though, that the tight relation between global and local symmetry will persist, in a more subtle way that refines the standard understanding of symmetry we have today.

Well, it’s been a while, but it’s now a new semester here in Hamburg, and I wanted to go back and look at some of what we talked about in last semester’s research seminar. This semester, Susama Agarwala and I are sharing the teaching in a topics class on “Category Theory for Geometry”, in which I’ll be talking about categories of sheaves, and building up the technology for Susama to talk about Voevodsky’s theory of motives (enough to give a starting point to read something like this).

As for last semester’s seminar, one of the two main threads, the one which Alessandro Valentino and I helped to organize, was a look at some of the material needed to approach Jacob Lurie’s paper on the classification of topological quantum field theories. The idea was for the research seminar to present the basic tools that are used in that paper to a larger audience, mostly of graduate students – enough to give a fairly precise statement, and develop the tools needed to follow the proof. (By the way, for a nice and lengthier discussion by Chris Schommer-Pries about this subject, which includes more details on much of what’s in this post, check out this video.)

So: the key result is a slightly generalized form of the Cobordism Hypothesis.

Cobordism Hypothesis

The sort of theories which the paper classifies are those which “extend down to a point”. So what does this mean? A topological field theory can be seen as a sort of “quantum field theory up to homotopy”, which abstracts away any geometric information about the underlying space where the fields live – their local degrees of freedom.  We do this by looking only at the classes of fields up to the diffeomorphism symmetries of the space.  The local, geometric information gets thrown away by taking this quotient of the space of solutions.

In spite of reducing the space of fields this way, we want to capture the intuition that the theory is still somehow “local”, in that we can cut up spaces into parts and make sense of the theory on those parts separately, and determine what it does on a larger space by gluing pieces together, rather than somehow having to take account of the entire space at once, indissolubly. This reasoning should apply to the highest-dimensional space, but also to boundaries, and to any figures we draw on boundaries when cutting them up in turn.

Carrying this on to the logical end point, this means that a topological quantum field theory in the fully extended sense should assign some sort of data to every geometric entity from a zero-dimensional point up to an n-dimensional cobordism.  This is all expressed by saying it’s an n-functor:

Z : Bord^{fr}_n(n) \rightarrow nAlg.

Well, once we know what this means, we’ll know (in principle) what a TQFT is.  It’s less important, for the purposes of Lurie’s paper, what nAlg is than what Bord^{fr}_n(n) is.  The reason is that we want to classify these field theories (i.e. functors).  It will turn out that Bord^{fr}_n(n) has the sort of structure that makes it easy to classify the functors out of it into any target n-category \mathcal{C}.  A guess about what kind of structure is actually there was expressed by Baez and Dolan as the Cobordism Hypothesis.  It has been slightly rephrased from the original form to get a version which has a proof.  The version Lurie proves says:

The (\infty,n)-category Bord^{fr}_n(n) is equivalent to the free symmetric monoidal (\infty,n)-category generated by one fully-dualizable object.

The basic point is that, since Bord^{fr}_n(n) is a free structure, the classification means that the extended TQFT’s amount precisely to the choice of a fully-dualizable object of \mathcal{C} (which includes a choice of a bunch of morphisms exhibiting the “dualizability”). However, to make sense of this, we need to have a suitable idea of an (\infty,n)-category, and know what a fully dualizable object is. Let’s begin with the first.

(\infty,n)-Categories

In one sense, the Cobordism Hypothesis, which was originally made about n-categories at a time when these were only beginning to be defined, could be taken as a criterion for an acceptable definition. That is, it expressed an intuition which was important enough that any definition which wouldn’t allow one to prove the Cobordism Hypothesis in some form ought to be rejected. To really make it work, one had to bring in the “infinity” part of (\infty,n)-categories. The point here is that we are talking about category-like structures which have morphisms between objects, 2-morphisms between morphisms, and so on, with j-morphisms between (j-1)-morphisms for every possible degree. The inspiration for this comes from homotopy theory, where one has maps, homotopies of maps, homotopies of homotopies, etc.

Nowadays, there are several possible concrete models for (\infty,n)-categories (see this survey article by Julie Bergner for a summary of four of them). They are all equivalent definitions, in a suitable up-to-homotopy way, but for purposes of the proof, Lurie is taking the definition that an (\infty,n)-category is an n-fold complete Segal space. One theme that shows up in all the definitions is that of simplicial methods. (In our seminar, we started with a series of two talks introducing the notions of simplicial sets, simplicial objects in a category, and Kan complexes. If you don’t already know this, essentially everything we need is nicely explained here.)

One of the underlying ideas is that a category C can be associated with a simplicial set, its nerve N(C)_{\bullet}, where the set N(C)_k of k-dimensional simplexes is just the set of composable k-tuples of morphisms in C. If C is a groupoid (everything is invertible), then the simplicial set is a Kan complex – it satisfies some filling conditions, which ensure that any morphism has an inverse. Not every Kan complex is the nerve of a groupoid, but one can think of them as weak versions of groupoids – \infty-groupoids, or (\infty,0)-categories – where the higher morphisms may not be completely trivial (as with a groupoid), but where at least they’re all invertible. This leads to another desirable feature in any definition of (\infty,n)-category, which is the Homotopy Hypothesis: that the (\infty,1)-category of (\infty,0)-categories, also called \infty-groupoids, should be equivalent (in the same weak sense) to a category of Hausdorff spaces with some other nice properties, which we call \mathbf{Top} for short. This is true of Kan complexes.

Thus, up to homotopy, specifying an \infty-groupoid is the same as specifying a space.
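
The nerve construction mentioned above is simple enough to spell out on a toy example. The following sketch (my own hypothetical illustration, not from the seminar) enumerates the composable k-tuples which make up N(C)_k for a small category:

```python
# Toy illustration (hypothetical, not from the post): the set of k-simplices
# N(C)_k of the nerve of a category C is the set of composable k-tuples of
# morphisms.  Here C is the category generated by a --f--> b --g--> c,
# with morphisms stored as (name, source, target) and identities included.

morphisms = [
    ("1_a", "a", "a"), ("1_b", "b", "b"), ("1_c", "c", "c"),
    ("f", "a", "b"), ("g", "b", "c"), ("gf", "a", "c"),
]

def composable_tuples(morphisms, k):
    """N(C)_k: tuples (m_1, ..., m_k) with target(m_i) = source(m_{i+1})."""
    tuples = [(m,) for m in morphisms]
    for _ in range(k - 1):
        tuples = [t + (m,) for t in tuples
                  for m in morphisms if t[-1][2] == m[1]]
    return tuples

# (f, g) is a 2-simplex; the "composition" face map would send it to gf.
assert (("f", "a", "b"), ("g", "b", "c")) in composable_tuples(morphisms, 2)
```

For a groupoid, where every morphism is invertible, the resulting simplicial set would additionally satisfy the Kan filling conditions mentioned above.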

The data which defines a Segal space (a notion first explicitly defined by Charles Rezk) is a simplicial space X_{\bullet}: for each n, there are spaces X_n, thought of as the space of composable n-tuples of morphisms. To keep things tame, we suppose that X_0, the space of objects, is discrete – that is, we have only a set of objects. Being a simplicial space means that the X_n come equipped with a collection of face maps d_i : X_n \rightarrow X_{n-1}, which we should think of as compositions: to get from an n-tuple to an (n-1)-tuple of morphisms, one can compose two morphisms together at any of (n-1) positions in the tuple.

The condition which a simplicial space has to satisfy to be a Segal space – the “weakening” which makes a Segal space a weaker notion than just a category – is that the X_n cannot be arbitrary, but must be homotopy equivalent to the “actual” space of n-tuples, which is a strict pullback X_1 \times_{X_0} \dots \times_{X_0} X_1. That is, in a Segal space, the pullback which defines these tuples for a category is weakened to be a homotopy pullback. Combining this with the various face maps, we therefore get a weakened notion of composition: X_1 \times_{X_0} \dots \times_{X_0} X_1 \simeq X_n \rightarrow X_1. Because we start by replacing the space of n-tuples with the homotopy-equivalent X_n, the composition rule will only satisfy the relations which define composition (associativity, for instance) up to homotopy.

To be complete, the Segal space must have a notion of equivalence for X_{\bullet} which agrees with that for Kan complexes seen as \infty-groupoids. In particular, there is a sub-simplicial object Core(X_{\bullet}), which we understand to consist of the spaces of invertible k-morphisms. Since there should be nothing interesting happening above the top dimension, we ask that, for these spaces, the face and degeneracy maps are all homotopy equivalences: up to homotopy, the space of invertible higher morphisms has no new information.

Then, an n-fold complete Segal space is defined recursively, just as one might define n-categories (without the infinitely many layers of invertible morphisms “at the top”). In that case, we might say that a double category is just a category internal to \mathbf{Cat}: it has a category of objects, and a category of morphisms, and the various maps and operations, such as composition, which make up the definition of a category are all defined as functors. That turns out to be the same as a structure with objects, horizontal and vertical morphisms, and square-shaped 2-cells. If we insist that the category of objects is discrete (i.e. really just a set, with no interesting morphisms), then the result amounts to a 2-category. Then we can define a 3-category to be a category internal to \mathbf{2Cat} (whose 2-category of objects is discrete), and so on. This approach really defines an n-fold category (see e.g. Chapter 5 of Cheng and Lauda to see a variation of this approach, due to Tamsamani and Simpson), but imposing the condition that the objects really amount to a set at each step gives exactly the usual intuition of a (strict!) n-category.

This is exactly the approach we take with n-fold complete Segal spaces, except that some degree of weakness is automatic. Since a C.S.S. is a simplicial object with some properties (we separately define objects of k-tuples of morphisms for every k, and all the various composition operations), the same recursive approach leads to a definition of an “n-fold complete Segal space” as simply a simplicial object in (n-1)-fold C.S.S.’s (with the same properties), such that the objects form a set. In principle, this gives a big class of “spaces of morphisms” one needs to define – one for every n-fold product of simplexes of any dimension – but all those requirements that any space of objects “is just a set” (i.e. is homotopy-equivalent to a discrete set of points) simplify things a bit.

Cobordism Category as (\infty,n)-Category

So how should we think of cobordisms as forming an (\infty,n)-category? There are a few stages in making a precise definition, but the basic idea is simple enough. One starts with manifolds and cobordisms embedded in some fixed finite-dimensional vector space V \times \mathbb{R}^n, and then takes a limit over all V. In each V \times \mathbb{R}^n, the coordinates of the \mathbb{R}^n factor give n ways of cutting the cobordism into pieces, and gluing them back together defines composition in a different direction. Now, this won’t actually produce a complete Segal space: one has to take a certain kind of completion. But the idea is intuitive enough.

We want to define an n-fold C.S.S. of cobordisms (and cobordisms between cobordisms, and so on, up to n-morphisms). To start with, think of the case n=1: then the space of objects of Bord^{fr}_1(1) consists of all embeddings of a (d-1)-dimensional manifold into V. The space of k-simplexes (of k-tuples of morphisms) consists of all ways of cutting up a d-dimensional cobordism embedded in V \times \mathbb{R} by choosing t_0, \dots , t_{k-2}, where we think of the cobordism as having been glued from pieces, so that at each slice V \times \{ t_i \} we have the object where two pieces were composed. (One has to be careful to specify that the Morse function on the cobordisms, got by projection onto \mathbb{R}, has its critical points away from the t_i – the generic case – to make sure that the objects where gluing happens are actual manifolds.)

Now, what about the higher morphisms of the (\infty,1)-category? The point is that one needs to have an \infty-groupoid – that is, a space! – of morphisms between two cobordisms M and N. To make sense of this, we just take the space Diff(M,N) of diffeomorphisms – not just as a set of morphisms, but including its topology as well. The higher morphisms, therefore, can be thought of precisely as paths, homotopies, homotopies between homotopies, and so on, in these spaces. So the essential difference between the 1-category of cobordisms and the (\infty,1)-category is that in the first case, morphisms are diffeomorphism classes of cobordisms, whereas in the latter, the higher morphisms are made precisely of the space of diffeomorphisms which we quotient out by in the first case.

Now, (\infty,n)-categories can have non-invertible morphisms between morphisms all the way up to dimension n, after which everything is invertible. An n-fold C.S.S. does this by taking the definition of a complete Segal space and copying it inside (n-1)-fold C.S.S.’s: that is, one has an (n-1)-fold Complete Segal Space of k-tuples of morphisms for each k; these form a simplicial object, and so forth.

Now, if we want to build an (\infty,n)-category Bord^{fr}_n(n) of cobordisms, the idea is the same, except that we have a simplicial object in a category of simplicial objects, and so on. However, the way to define this is essentially similar. To specify an n-fold C.S.S., we have to specify a whole collection of spaces associated to cobordisms equipped with embeddings into V \times \mathbb{R}^n. In particular, for each tuple (k_1,\dots,k_n), we have the space of such embeddings, such that for each i = 1 \dots n one has k_i special points t_{i,j} along the i^{th} coordinate axis. These are the ways of breaking down a given cobordism into a composite of k_i + 1 pieces. Again, one has to make sure that the critical points of the Morse functions defined by the projections onto these coordinate axes avoid the special t_{i,j} which define the manifolds where gluing takes place. The composition maps which make these into a simplicial object are quite natural – they just come by deleting special points.

Finally, we take a limit over all V (to get around limits to embeddings due to the dimension of V). So we know (at least abstractly) what the (\infty,n)-category of cobordisms should be. The cobordism hypothesis claims it is equivalent to one defined in a free, algebraically-flavoured way, namely as the free symmetric monoidal (\infty,n)-category on a fully-dualizable object. (That object is “the point” – which, up to the kind of homotopically-flavoured equivalence that matters here, is the only object when our highest-dimensional cobordisms have dimension n).

Dualizability

So what does that mean, a “fully dualizable object”?

First, to get the idea, let’s think of the 1-dimensional example.  Instead of “(\infty,n)-category”, we would like to just think of this as a statement about a category.  Then Bord^{fr}_1(1) is the 1-category of framed bordisms. For a manifold (or cobordism, which is a manifold with boundary), a framing is a trivialization of the tangent bundle.  That is, it amounts to a choice of isomorphism at each point between the tangent space there and the corresponding \mathbb{R}^n.  So the objects of Bord^{fr}_1(1) are collections of (signed) points, and the morphisms are equivalence classes of framed 1-dimensional cobordisms.  These amount to oriented 1-manifolds with boundary, where the points (objects) on the boundary are the source and target of the cobordism.

Now we want to classify what TQFT’s live on this category.  These are functors Z : Bord^{fr}_1(1) \rightarrow Vect.  We have two generating objects, + and -, the two signed points.  A TQFT must assign these objects vector spaces, which we’ll call V and W.  Collections of points get assigned tensor products of all the corresponding vector spaces, since the functor is monoidal, so knowing these two vector spaces determines what Z does to all objects.

What does Z do to morphisms?  Well, some generating morphisms of interest are cups and caps: these are lines which connect a positive to a negative point, but thought of as cobordisms taking two points to the empty set, and vice versa.  That is, we have an evaluation:

ev: W \otimes V \rightarrow \mathbb{C}

and a coevaluation:

coev: \mathbb{C} \rightarrow V \otimes W

Now, since cobordisms are taken up to equivalence, which in particular includes topological deformations, we get a bunch of relations which these have to satisfy.  The essential one is the “zig-zag” identity, reflecting the fact that a bent line can be straightened out, and we have the same 1-morphism in Bord^{fr}_1(1).  This implies that:

(ev \otimes id) \circ (id \otimes coev) : W \rightarrow W \otimes V \otimes W \rightarrow W

is the same as the identity.  This in turn means that the evaluation and coevaluation maps define a nondegenerate pairing between V and W.  The fact that this exists means two things.  First, W is the dual of V: W \cong V*.  Second, this only makes sense if both V and its dual are finite dimensional (since the evaluation will just be the trace map, which is not even defined on the identity if V is infinite dimensional).

On the other hand, once we know V, this determines W \cong V* up to isomorphism, as well as the evaluation and coevaluation maps.  In fact, this turns out to be enough to specify Z entirely.  The classification, then, is: 1-D TQFT’s are classified by finite-dimensional vector spaces V.  Crucially, what makes finiteness important is the existence of the dual V* and the (co)evaluation maps which express the duality.
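
As a sanity check, the zig-zag identity can be verified numerically in a fixed basis. This is my own toy illustration (not from Lurie’s paper): we write ev and coev as matrices, check that the zig-zag composite is the identity, and that ev \circ coev computes the trace of the identity, i.e. the dimension:

```python
# Concrete finite-dimensional check of the zig-zag ("snake") identity,
# done with plain Python lists (hypothetical illustration, not from the post).
# V = R^d with basis e_i, and W = V* with the dual basis f_i.

d = 2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker product: the matrix of a tensor product of linear maps."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

I = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

# ev: W (x) V -> R is the 1 x d^2 matrix pairing f_i with e_j.
ev = [[1.0 if i == j else 0.0 for i in range(d) for j in range(d)]]
# coev: R -> V (x) W is its transpose, a d^2 x 1 column.
coev = [[entry] for entry in ev[0]]

# (ev (x) id_W) o (id_W (x) coev) : W -> W should be the identity on W.
snake = matmul(kron(ev, I), kron(I, coev))
assert snake == I

# ev o coev is the trace of the identity, i.e. dim(V) -- the reason
# the classification needs V to be finite dimensional.
assert matmul(ev, coev) == [[float(d)]]
```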

In an (\infty,n)-category, to say that an object is “fully dualizable” means more than that the object has a dual (which, itself, implies the existence of the morphisms ev and coev). It also means that ev and coev have duals themselves – or rather, since we’re talking about morphisms, “adjoints”. This in turn implies the existence of 2-morphisms which are the unit and counit of the adjunctions (the defining properties are essentially the same as those for morphisms which define a dual). In fact, every time we get a morphism of degree less than n in this process, “fully dualizable” means that it too must have a dual (i.e. an adjoint).

This does run out eventually, though, since we only require that this goes up to dimension (n-1): the n-morphisms which this forces to exist (quite a few) aren’t required to have duals. This is good, because if they were, since all the higher morphisms available are invertible, this would mean that the dual n-morphisms would actually be weak inverses (that is, their composite is isomorphic to the identity)… But that would mean that the dual (n-1)-morphisms which forced them to exist would also be weak inverses (their composite would be weakly isomorphic to the identity)… and so on! In fact, if the property of “having duals” didn’t stop, then everything would be weakly invertible: we’d actually have a (weak) \infty-groupoid!

Classifying TQFTs

So finally, the point of the Cobordism Hypothesis is that a (fully extended) TQFT is a functor Z out of this Bord^{fr}_n(n) into some target (\infty,n)-category \mathcal{C}. There are various options, but whatever we pick, the functor must assign something in \mathcal{C} to the point, say Z(pt), and something to each of ev and coev, as well as all the higher morphisms which must exist. Then functoriality means that all these images have to again satisfy the properties which make Z(pt) a fully dualizable object. Furthermore, since Bord^{fr}_n(n) is the free gadget with all these properties on the single object pt, this is exactly what it means that Z is a functor. Saying that Z(pt) is fully dualizable, by implication, includes all the choices of morphisms like Z(ev) etc. which exhibit it as fully dualizable. (Conceivably one could make the same object fully dualizable in more than one way – these would be different functors.)

So an extended n-dimensional TQFT is exactly the choice of a fully dualizable object Z(pt) \in \mathcal{C}, for some (\infty,n)-category \mathcal{C}. This object is “what the TQFT assigns to a point”, but if we understand the structure of the object as a fully dualizable object, then we know what the TQFT assigns to any other manifold of any dimension up to n, the highest dimension in the theory. This is how this algebraic characterization of cobordisms helps to classify such theories.

Since the last post, I’ve been busily attending some conferences, as well as moving to my new job at the University of Hamburg, in the Graduiertenkolleg 1670, “Mathematics Inspired by String Theory and Quantum Field Theory”.  The week before I started, I was already here in Hamburg, at the conference they were organizing, “New Perspectives in Topological Quantum Field Theory”.  But since I last posted, I was also at the 20th Oporto Meeting on Geometry, Topology, and Physics, as well as the third Higher Structures in China workshop, at Jilin University in Changchun.  Right now, I’d like to say a few things about some of the highlights of that workshop.

Higher Structures in China III

So last year I had a bunch of discussions with Chenchang Zhu and Weiwei Pan, who at the time were both in Göttingen, about my work with Jamie Vicary, which I wrote about last time when the paper was posted to the arXiv.  In that, we showed how the Baez-Dolan groupoidification of the Heisenberg algebra can be seen as a representation of Khovanov’s categorification.  Chenchang and Weiwei and I had been talking about how these ideas might extend to other examples, in particular to give nice groupoidifications of categorified Lie algebras and quantum groups.

That is still under development, but I was invited to give a couple of talks on the subject at the workshop.  It was a long trip: from Lisbon, the farthest west of the main cities of (continental) Eurasia, all the way to one of the farthest east.  (Not quite the farthest, but Changchun is in the northeast of China, just a few hours north of Korea, and it took just about exactly 24 hours including stopovers to get there.)  It was a long way to go for a three-day workshop, but there were also three days of a big excursion to Changbai Mountain, just on the border with North Korea, for hiking and general touring around.  So that was a sort of holiday, with 11 other mathematicians.  Here is me with Dany Majard, in a national park along the way to the mountains:

Here’s me with Alex Hoffnung, on Changbai Mountain (in the background is China):

And finally, here’s me a little to the left of the previous picture, where you can see into the volcanic crater.  The lake at the bottom is cut out of the picture, but you can see the crater rim, of which this particular part is in North Korea, as seen from China:

Well, that was fun!

Anyway, the format of the workshop involved some talks from foreigners and some from locals, with a fairly big local audience including a good many graduate students from Jilin University.  So they got a chance to see some new work being done elsewhere – mostly in categorification of one kind or another.  We got a chance to see a little of what’s being done in China, although not as much as we might have liked. I gather that not much is being done yet that fits the theme of the workshop, which was part of the reason to organize it, and especially for having a session aimed specially at the graduate students.

Categorified Algebra

This is a sort of broad term, but certainly would include my own talk.  The essential point is to show how the groupoidification of the Heisenberg algebra is a representation of Khovanov’s categorification of the same algebra, in a particular 2-category.  The emphasis here is on the fact that it’s a representation in a 2-category whose objects are groupoids, but whose morphisms aren’t just functors, but spans of functors – that is, composites of functors and co-functors.  This is a pretty conservative weakening of “representations on categories” – but it lets one build really simple combinatorial examples.  I’ve discussed this general subject in recent posts, so I won’t elaborate too much.  The lecture notes are here, if you like, though – they have more detail than my previous post, but are less technical than the paper with Jamie Vicary.

Aaron Lauda gave a nice introduction to the program of categorifying quantum groups, mainly through the example of the special case U_q(sl_2), somewhat along the same lines as in his introductory paper on the subject.  The story which gives the motivation is nice: one has knot invariants such as the Jones polynomial, based on representations of groups and quantum groups.  The Jones polynomial can be categorified to give Khovanov homology (which assigns a complex to a knot, whose graded Euler characteristic is the Jones polynomial) – but also assigns maps of complexes to cobordisms of knots.  One then wants to categorify the representation theory behind it – to describe actions of, for instance, quantum sl_2 on categories.  This starting point is nice, because it can work by just mimicking the construction of sl_2 and U_q(sl_2) representations in terms of weight spaces: one gets categories V_{-N}, \dots, V_N which correspond to the “weight spaces” (usually just vector spaces), and the E and F operators give functors between them, and so forth.
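
For orientation, here is the decategorified picture being mimicked, written out (a standard fact, sketched here for reference):

```latex
V = \bigoplus_n V_n, \qquad
E \colon V_n \to V_{n+2}, \qquad
F \colon V_n \to V_{n-2}, \qquad
[E,F]\big|_{V_n} = n \cdot \mathrm{id}_{V_n}.
```

Categorification replaces each weight space V_n by a category, E and F by functors, and the last relation by a natural isomorphism.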

Finding examples of categories and functors with this structure, and satisfying the right relations, gives “categorified representations” of the algebra – the monoidal categories of diagrams which are the “categorifications of the algebra” then are seen as the abstraction of exactly which relations these are supposed to satisfy.  One such example involves flag varieties.  A flag, as one might eventually guess from the name, is a nested collection of subspaces in some n-dimensional space.  A simple example is the Grassmannian Gr(1,V), which is the space of all 1-dimensional subspaces of V (i.e. the projective space P(V)), which is of course an algebraic variety.  Likewise, Gr(k,V), the space of all k-dimensional subspaces of V is a variety.  The flag variety Fl(k,k+1,V) consists of all pairs W_k \subset W_{k+1}, of a k-dimensional subspace of V, inside a (k+1)-dimensional subspace (the case k=1 calls to mind the reason for the name: a plane containing a given line resembles a flag stuck to a flagpole).  This collection is again a variety.  One can go all the way up to the variety of “complete flags”, Fl(1,2,\dots,n,V) (where V is n-dimensional), any point of which picks out a subspace of each dimension, each inside the next.

The way this relates to representations is by way of geometric representation theory. One can see those flag varieties of the form Fl(k,k+1,V) as relating the Grassmannians: there are projections Fl(k,k+1,V) \rightarrow Gr(k,V) and Fl(k,k+1,V) \rightarrow Gr(k+1,V), which act by just ignoring one or the other of the two subspaces of a flag.  This pair of maps, by way of pulling-back and pushing-forward functions, gives maps between the cohomology rings of these spaces.  So one gets a sequence H_0, H_1, \dots, H_n, and maps between the adjacent ones.  This becomes a representation of the Lie algebra.  Categorifying this, one replaces the cohomology rings with derived categories of sheaves on the flag varieties – then the same sort of “pull-push” operation through (derived categories of sheaves on) the flag varieties defines functors between those categories.  So one gets a categorified representation.
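
Schematically, writing p_k and p_{k+1} for the two projections, the raising and lowering operators are the pull-push composites (a sketch of the standard construction; normalizations and twists are suppressed):

```latex
E = (p_{k+1})_* \circ p_k^* \colon H^\bullet(Gr(k,V)) \to H^\bullet(Gr(k+1,V)),
\qquad
F = (p_k)_* \circ p_{k+1}^* \colon H^\bullet(Gr(k+1,V)) \to H^\bullet(Gr(k,V)).
```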

Heather Russell‘s talk, based on this paper with Aaron Lauda, built on the idea that categorified algebras were motivated by Khovanov homology.  The point is that there are really two different kinds of Khovanov homology – the usual kind, and an Odd Khovanov Homology, which is mainly different in that the role played in Khovanov homology by a symmetric algebra is instead played by an exterior (antisymmetric) algebra.  The two look the same over a field of characteristic 2, but otherwise different.  The idea is then that there should be “odd” versions of various structures that show up in the categorifications of U_q(sl_2) (and other algebras) mentioned above.

One example is the fact that, in the “even” form of those categorifications, there is a natural action of the Nil Hecke algebra on composites of the generators.  This is an algebra which can be seen to act on the space of polynomials in n commuting variables, \mathbb{C}[x_1,\dots,x_n], generated by the multiplication operators x_i, and the “divided difference operators” based on the swapping of two adjacent variables.  The Hecke algebra is defined in terms of “swap” generators, which satisfy some q-deformed variation of the relations that define the symmetric group (and hence its group algebra).   The Nil Hecke algebra is so called since the “swap” (i.e. the divided difference) is nilpotent: the square of the swap is zero.  The way this acts on the objects of the diagrammatic category is reflected by morphisms drawn as crossings of strands, which are then formally forced to satisfy the relations of the Nil Hecke algebra.
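
Concretely, the divided difference operators and their nilpotence can be written down directly (a standard formula, included here as a sketch):

```latex
\partial_i f = \frac{f - s_i f}{x_i - x_{i+1}},
\qquad
\partial_i^2 f = \frac{\partial_i f - s_i(\partial_i f)}{x_i - x_{i+1}} = 0,
```

where s_i swaps x_i and x_{i+1}. The square vanishes because \partial_i f is already symmetric in x_i and x_{i+1}, so s_i fixes it and the numerator is zero.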

The odd Nil Hecke algebra, on the other hand, is an analogue of this, but the x_i are anti-commuting, and one has different relations satisfied by the generators (they differ by a sign, because of the anti-commutation).  This sort of “oddification” is then supposed to happen all over.  The main point of the talk was to describe the “odd” version of the categorified representation defined using flag varieties.  Then the odd Nil Hecke algebra acts on that, analogously to the even case above.

Marco Mackaay gave a couple of talks about the sl_3 web algebra, describing the results of this paper with Weiwei Pan and Daniel Tubbenhauer.  This is the analog of the above, for U_q(sl_3), describing a diagram calculus which accounts for representations of the quantum group.  The “web algebra” was introduced by Greg Kuperberg – it’s an algebra built from diagrams which can now include some trivalent vertices, along with rules imposing relations on these.  When categorifying, one gets a calculus of “foams” between such diagrams.  Since this is obviously fairly diagram-heavy, I won’t try here to reproduce what’s in the paper – but an important part of it is the correspondence between webs and Young Tableaux, since these are labels in the representation theory of the quantum group – so there is some interesting combinatorics here as well.

Algebraic Structures

Some of the talks were about structures in algebra in a more conventional sense.

Jiang-Hua Lu: On a class of iterated Poisson polynomial algebras.  The starting point of this talk was to look at Poisson brackets on certain spaces and see that they can be found in terms of “semiclassical limits” of some associative product.  That is, the associative product of two elements gives a power series in some parameter h (which one should think of as something like Planck’s constant in a quantum setting).  The “classical” limit is the constant term of the power series, and the “semiclassical” limit is the first-order term.  This gives a Poisson bracket (or rather, the commutator of the associative product does).  In the examples, the spaces where these things are defined are all spaces of polynomials (which makes a lot of explicit computer-driven calculations more convenient). The talk gives a way of constructing a big class of Poisson brackets (having some nice properties: they are “iterated Poisson brackets”) coming from quantum groups as semiclassical limits.  The construction uses words in the generating reflections for the Weyl group of a Lie group G.
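As a toy model of "semiclassical limit" (my own illustration – the talk's examples are quantum-group algebras, not this Moyal-type product): truncate a star product on polynomials in (x, p) at first order in h, and check that the first-order term of the commutator is the classical Poisson bracket.

```python
import sympy as sp

x, p, h = sp.symbols('x p h')

def star(f, g):
    """Moyal-type product truncated at first order in h (a sketch)."""
    correction = sp.Rational(1, 2) * (sp.diff(f, x) * sp.diff(g, p)
                                      - sp.diff(f, p) * sp.diff(g, x))
    return sp.expand(f * g + h * correction)

def poisson(f, g):
    return sp.expand(sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x))

f, g = x**2 * p, x + p**3

# classical limit: the h -> 0 term is the ordinary (commutative) product
assert star(f, g).subs(h, 0) == sp.expand(f * g)

# semiclassical limit: commutator / h  ->  Poisson bracket
commutator = sp.expand(star(f, g) - star(g, f))
assert sp.simplify(commutator / h - poisson(f, g)) == 0
```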

Li Guo: Successors and Duplicators of Operads – first described a whole range of different algebra-like structures which have come up in various settings, from physics and dynamical systems, through quantum field theory, to Hopf algebras, combinatorics, and so on.  Each of them is some sort of set (or vector space, etc.) with some number of operations satisfying some conditions – in some cases, lots of operations, and even more conditions.  In the slides you can find several examples – pre-Lie and post-Lie algebras, dendriform algebras, quadri- and octo-algebras, etc. etc.  Taken as a big pile of definitions of complicated structures, this seems like a terrible mess.  The point of the talk is that it’s less messy than it appears: first, each definition of an algebra-like structure comes from an operad, which is a formal way of summing up a collection of operations with various “arities” (number of inputs), and relations that have to hold.  The second point is that there are some operations, “successor” and “duplicator”, which take one operad and give another, and that many of these complicated structures can be generated from simple structures by just these two operations.  The “successor” operation for an operad introduces a new product related to old ones – for example, the way one can get a Lie bracket from an associative product by taking the commutator.  The “duplicator” operation takes existing products and introduces two new products, whose sum is the previous one, and which satisfy various nice relations.  Applying these two operations in various combinations to various starting points yields a plethora of apparently complicated structures.
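The commutator example of a "successor"-type passage from an associative product to a Lie bracket can be checked numerically – this is just the standard fact that the commutator is antisymmetric and satisfies the Jacobi identity, not the operad machinery itself:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

def bracket(X, Y):
    """Lie bracket obtained from the associative matrix product."""
    return X @ Y - Y @ X

# antisymmetry of the new product
assert np.allclose(bracket(A, B), -bracket(B, A))

# Jacobi identity: [[A,B],C] + [[B,C],A] + [[C,A],B] = 0
jacobi = (bracket(bracket(A, B), C) + bracket(bracket(B, C), A)
          + bracket(bracket(C, A), B))
assert np.allclose(jacobi, np.zeros((3, 3)))
```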

Dany Majard gave a talk about algebraic structures which are related to double groupoids, namely double categories where all the morphisms are invertible.  The first part just defined double categories: graphically, one has horizontal and vertical 1-morphisms, and square 2-morphisms, which compose in both directions.  Then there are several special degenerate cases, in the same way that categories have as degenerate cases (a) sets, seen as categories with only identity morphisms, and (b) monoids, seen as one-object categories.  Double categories have ordinary categories (and hence monoids and sets) as degenerate cases.  Other degenerate cases are 2-categories (horizontal and vertical morphisms are the same thing), and therefore their own special cases, monoidal categories and symmetric monoids.  There is also the special degenerate case of a double monoid (and the extra-special case of a double group).  (The slides have nice pictures showing how they’re all degenerate cases).  Dany then talked about some structure of double group(oids) – and gave a list of properties for double groupoids (such as being “slim” – having at most one 2-cell per boundary configuration – as well as two others) which ensure that they’re equivalent to the semidirect product of an abelian group with the “bicrossed product”  H \bowtie K of two groups H and K (each of which has to act on the other for this to make sense).  He gave the example of the Poincaré double group, which breaks down as a triple bicrossed product by the Iwasawa decomposition:

Poinc = (SO(3) \bowtie (SO(1,1) \bowtie N)) \ltimes \mathbb{R}^4

(N is a certain group of matrices).  So there’s a unique double group which corresponds to it – it has squares labelled by \mathbb{R}^4, and the horizontal and vertical morphisms by elements of SO(3) and N respectively.  Dany finished by explaining that there are higher-dimensional analogs of all this – n-tuple categories can be defined recursively by internalization (“internal categories in (n-1)-tuple-Cat”).  There are somewhat more sophisticated versions of the same kind of structure, leading up finally to a special class of n-tuple groups.  The analogous theorem says that a special class of them is just the same as the semidirect product of an abelian group with an n-fold iterated bicrossed product of groups.

Also in this category, Alex Hoffnung talked about deformation of formal group laws (based on this paper with various collaborators).  FGL’s are structures with an algebraic operation which satisfies axioms similar to those of a group, but which can be expressed in terms of power series.  (So, in particular, they have an underlying ring, for this to make sense).  In particular, the talk was about formal group algebras – essentially, parametrized deformations of group algebras – and in particular for Hecke Algebras.  Unfortunately, my notes on this talk are mangled, so I’ll just refer to the paper.

Physics

I’m using the subject-header “physics” to refer to those talks which are most directly inspired by physical ideas, though in fact the talks themselves were mathematical in nature.

Fei Han gave a series of overview talks introducing “Equivariant Cohomology via Gauged Supersymmetric Field Theory”, explaining the Stolz-Teichner program.  There is more, using tools from differential geometry and cohomology to dig into these theories, but for now a summary will do.  Essentially, the point is that one can look at “fields” as sections of various bundles on manifolds, and these fields are related to cohomology theories.  For instance, the usual cohomology of a space X is a quotient of the space of closed forms (so the k^{th} cohomology H^{k}(X) is a quotient of the space of closed k-forms in \Omega^{k}(X) – the quotient being that forms differing by a coboundary are considered the same).  There’s a similar construction for the K-theory K(X), which can be modelled as a quotient of the space of vector bundles over X.  Fei Han mentioned topological modular forms, modelled by a quotient of the space of “Fredholm bundles” – bundles of Banach spaces with a Fredholm operator around.

The first two of these examples are known to be related to certain supersymmetric topological quantum field theories.  Now, a TFT is a functor into some kind of vector spaces from a category of (d-1)-dimensional manifolds and d-dimensional cobordisms

Z : d-Bord \rightarrow Vect

Intuitively, it gives a vector space of possible fields on the given space and a linear map on a given spacetime.  A supersymmetric field theory is likewise a functor, but one changes the category of “spacetimes” to have both bosonic and fermionic dimension.  A normal smooth manifold is a ringed space (M,\mathcal{O}), since it comes equipped with a sheaf of rings (each open set has an associated ring of smooth functions, and these glue together nicely).  Supersymmetric theories work with manifolds which change this sheaf – so a d|\delta-dimensional space has the sheaf of rings where one introduces some new anticommuting coordinate functions \theta_i, the “fermionic dimensions”:

\mathcal{O}(U) = C^{\infty}(U) \otimes \bigwedge^{\ast}[\theta_1,\dots,\theta_{\delta}]
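The anticommuting part of this sheaf can be modelled quite concretely.  Here's a minimal sketch of mine (the `Grassmann` class is hypothetical, just for illustration) of the exterior algebra on the \theta_i, with multiplication keeping track of the anticommutation signs:

```python
class Grassmann:
    """Element of the exterior algebra on anticommuting generators
    theta_i, stored as {frozenset of generator indices: coefficient}."""
    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}

    @staticmethod
    def theta(i):
        return Grassmann({frozenset([i]): 1})

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0) + v
        return Grassmann(out)

    def __neg__(self):
        return Grassmann({k: -v for k, v in self.terms.items()})

    def __mul__(self, other):
        out = {}
        for k1, v1 in self.terms.items():
            for k2, v2 in other.terms.items():
                if k1 & k2:
                    continue  # repeated generator: theta_i theta_i = 0
                # sign from moving each generator of k2 past those of k1
                sign = 1
                for j in k2:
                    sign *= (-1) ** sum(1 for i in k1 if i > j)
                key = k1 | k2
                out[key] = out.get(key, 0) + sign * v1 * v2
        return Grassmann(out)

    def __eq__(self, other):
        return self.terms == other.terms

t1, t2 = Grassmann.theta(1), Grassmann.theta(2)

assert t1 * t1 == Grassmann({})      # each theta_i squares to zero
assert t1 * t2 == -(t2 * t1)         # theta_1 theta_2 = -theta_2 theta_1
```

Tensoring C^{\infty}(U) with an algebra like this is exactly what the formula above describes.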

Then a supersymmetric TFT is a functor:

E : (d|\delta)-Bord \rightarrow STV

(where STV is the category of supersymmetric topological vector spaces – defined similarly).  The connection to cohomology theories is that the classes of such field theories, up to a notion of equivalence called “concordance”, are classified by various cohomology theories.  Ordinary cohomology corresponds then to 0|1-dimensional extended TFT (that is, with 0 bosonic and 1 fermionic dimension), and K-theory to a 1|1-dimensional extended TFT.  The Stolz-Teichner conjecture is that the third example (topological modular forms) is related in the same way to a 2|1-dimensional extended TFT – so these are the start of a series of cohomology theories related to TFT’s of various dimensions.

Last but not least, Chris Rogers spoke about his ideas on “Higher Geometric Quantization”, on which he’s written a number of papers.  This is intended as a sort of categorification of the usual ways of quantizing symplectic manifolds.  (I am still trying to catch up on some of the geometry.)  This is rooted in some ideas that have been discussed by Brylinski, for example.  Roughly, the message here is that “categorification” of a space can be thought of in terms of its loop space.  The point is that, if points in a space are objects and paths are morphisms, then a loop space L(X) shifts things by one categorical level: its points are loops in X, and its paths are therefore certain 2-morphisms of X.  In particular, there is a parallel to the fact that a bundle with connection on a loop space can be thought of as a gerbe on the base space.  Intuitively, one can “parallel transport” things along a path in the loop space, which is a surface given by a path of loops in the original space.  The local description of this situation says that a 1-form (which can give transport along a curve, by integration) on the loop space is associated with a 2-form (giving transport along a surface) on the original space.

Then the idea is that geometric quantization of loop spaces is a sort of higher version of quantization of the original space. This “higher” version is associated with a form of higher degree than the symplectic (2-)form used in geometric quantization of X.   The general notion of n-plectic geometry, where the usual symplectic geometry is the case n=1, involves an (n+1)-form analogous to the usual symplectic form.  Now, there’s a lot more to say here than I properly understand, much less can summarize in a couple of paragraphs.  But the main theorem of the talk gives a relation between n-plectic manifolds (i.e. ones endowed with the right kind of form) and Lie n-algebras built from the complex of forms on the manifold.  An important example (a theorem of Chris and John Baez) is that one has a natural example of a 2-plectic manifold in any compact simple Lie group G together with a 3-form naturally constructed from its Maurer-Cartan form.

At any rate, this workshop had a great proportion of interesting talks, and overall, including the chance to see a little more of China, was a great experience!

I’ve written here before about building topological quantum field theories using groupoidification, but I haven’t yet gotten around to discussing a refinement of this idea, which is in the most recent version of my paper on the subject.  I also gave a talk about this last year in Erlangen. The main point of the paper is to pull apart some constructions which are already fairly well known into two stages, as part of setting up a category which is nice for supporting models of fairly general physical systems, using an extension of the concept of groupoidification. So here’s a somewhat lengthy post which tries to unpack this stuff a bit.

Factoring TQFT

The older version of this paper talked about the untwisted version of the Dijkgraaf-Witten (DW for short) model, which is a certain kind of TQFT based on a gauge theory with a finite gauge group.  (Freed and Quinn put it as: “Chern-Simons theory with finite gauge group”).  The new version gets the general – that is, the twisted – form in the same way: factoring the theory into two parts. So, the DW model, which was originally described by Dijkgraaf and Witten in terms of a state-sum, is a functor

Z : 3Cob \rightarrow Vect

The “twisting” is the point of their paper, “Topological Gauge Theories and Group Cohomology”.  The twisting has to do with the action for some physical theory. Now, for a gauge theory involving flat connections, the kind of gauge-theory actions which involve the curvature of a connection make no sense: the curvature is zero.  So one wants an action which reflects purely global features of connections.  The cohomology of the gauge group is where this comes from.

Now, the machinery I describe is based on a point of view which has been described in a famous paper by Freed, Hopkins, Lurie and Teleman (FHLT for short – see further discussion here), in which the two stages are called the “classical field theory” (which has values in groupoids), and the “quantization functor”, which takes one into Hilbert spaces.

Actually, we really want to have an “extended” TQFT: a TQFT gives a Hilbert space for each 2D manifold (“space”), and a linear map for a 3D cobordism (“spacetime”) between them. An extended TQFT will assign (higher) algebraic data to boundaries of still lower dimension.  My paper talks only about the case where we’ve extended down to codimension 2, whereas FHLT talk about extending “down to a point”. The point of this first stopping place is to unpack explicitly and computationally what the factorization into two parts looks like at the first level beyond the usual TQFT.

In the terminology I use, the classical field theory is:

A^{\omega} : nCob_2 \rightarrow Span_2(Gpd)^{U(1)}

This depends on a cohomology class [\omega] \in H^3(G,U(1)). The “quantization functor” (which in this case I call “2-linearization”):

\Lambda^{U(1)} : Span_2(Gpd)^{U(1)} \rightarrow 2Vect

The middle stage involves the monoidal 2-category I call Span_2(Gpd)^{U(1)}.  (In FHLT, they use different terminology, for instance “families” rather than “spans”, but the principle is the same.)

Freed and Quinn looked at the quantization of the “extended” DW model, and got a nice geometric picture. In it, the action is understood as a section of some particular line-bundle over a moduli space. This geometric picture is very elegant once you see how it works, which I found was a little easier in light of a factorization through Span_2(Gpd).

This factorization isolates the geometry of this particular situation in the “classical field theory” – and reveals which of the features of their setup (the line bundle over a moduli space) are really part of some more universal construction.

In particular, this means laying out an explicit definition of both Span_2(Gpd)^{U(1)} and \Lambda^{U(1)}.

2-Linearization Recalled

While I’ve talked about it before, it’s worth a brief recap of how 2-linearization works with a view to what happens when you twist it via groupoid cohomology. Here we have a 2-category Span(Gpd), whose objects are groupoids (A, B, etc.), whose morphisms are spans of groupoids:

A \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} B

and whose 2-morphisms are spans of span-maps (taken up to isomorphism), which look like so:

span of span maps

(And, by the by: how annoying that WordPress doesn’t appear to support xypic figures…)

These form a (symmetric monoidal) 2-category, where composition of spans works by taking weak pullbacks.  Physically, the idea is that a groupoid has objects which are configurations (in the case of gauge theory, connections on a manifold), and morphisms which are symmetries (gauge transformations, in this case).  Then a span is a groupoid of histories (connections on a cobordism, thought of as spacetime), and the maps s,t pick out its starting and ending configuration.  That is, A = A_G(S) is the groupoid of flat G-connections on a manifold S, and X = A_G(\Sigma) is the groupoid of flat G-connections on some cobordism \Sigma, of which S is part of the boundary.  So any such connection can be restricted to the boundary, and this restriction is s.

Now 2-linearization is a 2-functor:

\Lambda : Span_2(Gpd) \rightarrow 2Vect

It gives a 2-vector space (a nice kind of category) for each groupoid G.  Specifically, the category of its representations, Rep(G).  Then a span turns into a functor which comes from “pulling” back along s (the restricted representation where X acts by first applying s then the representation), then “pushing” forward along t (to the induced representation).

What happens to the 2-morphisms is conceptually more complicated, but it depends on the fact that “pulling” and “pushing” are two-sided adjoints. Concretely, it ends up being described as a kind of “sum over histories” (where “histories” are the objects of Y), which turns out to be exactly the path integral that occurs in the TQFT.

Or at least, it’s the path integral when the action is trivial! That is, if S=0, so that what’s integrated over paths (“histories”) is just e^{iS}=1. So one question is: is there a way to factor things in this way if there’s a nontrivial action?

Cohomological Twisting

The answer is by twisting via cohomology. First, let’s remember what that means…

We’re talking about groupoid cohomology for some groupoid G (which you can take to be a group, if you like).  “Cochains” will measure how much some nice algebraic fact, such as being a homomorphism, or being associative, “fails to occur”.  “Twisting by a cocycle” is a controlled way to force some such failure to happen.

So, an n-cocycle is some function of n composable morphisms of G (or, if there’s only one object, “group elements”, which amounts to the same thing).  It takes values in some group of coefficients, which for us is always U(1).

The trivial case where n=0 is actually slightly subtle: a 0-cocycle is an invariant function on the objects of a groupoid – that is, it takes the same value on any two objects related by an (iso)morphism.  (Think of an object as a sequence of zero composable morphisms: it tells you where to start, but nothing else.)

The case n=1 is maybe a little more obvious. A 1-cochain f is a U(1)-valued function of morphisms (or, if you like, group elements); one natural source of such functions is measuring how a function h on objects fails to be invariant.  The natural condition to ask of a 1-cochain is that it be a homomorphism:

f(g_1 \circ g_2) = f(g_1) f(g_2)

This condition means that a cochain f is a cocycle. Cocycles form an abelian group, because functions satisfying the cocycle condition are closed under pointwise multiplication in U(1). The condition will automatically be satisfied by a coboundary (i.e. if f comes from a function h on objects as f(g) = \delta h (g) = h(t(g)) h(s(g))^{-1}). But not every cocycle is a coboundary: the first cohomology H^1(G,U(1)) is the quotient of cocycles by coboundaries. This pattern repeats.
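Here's a tiny finite check of the claim that coboundaries automatically satisfy the cocycle condition – my own toy example, using the pair groupoid on a four-element set, where the (multiplicative) coboundary of a U(1)-valued function h on objects is f(a \to b) = h(b)/h(a):

```python
import cmath
import itertools

objects = range(4)
# an arbitrary U(1)-valued function on objects (phases)
h = {a: cmath.exp(2j * cmath.pi * a / 7) for a in objects}

def f(a, b):
    """Coboundary of h on the morphism a -> b of the pair groupoid."""
    return h[b] / h[a]

# every composable pair (a -> b -> c) satisfies the cocycle condition:
# f(a -> c) = f(a -> b) * f(b -> c)
for a, b, c in itertools.product(objects, repeat=3):
    assert abs(f(a, c) - f(a, b) * f(b, c)) < 1e-12
```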

It’s handy to think of this condition in terms of a triangle with edges g_1, g_2, and g_1 \circ g_2.  It says that if we go from the source to the target of the sequence (g_1, g_2) with or without composing, and accumulate f-values, our f gives the same result.  Generally, a cocycle is a cochain whose coboundary vanishes – a condition which can be described in terms of an n-simplex, like this triangle. What about a 2-cocycle? This describes how composition might fail to be respected.

So, for instance, a twisted representation of a group is not a representation in the strict sense. That would be a map \rho into End(V), such that \rho(g_1) \circ \rho(g_2) = \rho(g_1 \circ g_2).  That is, the group composition rule gets taken directly to the corresponding rule for composition of endomorphisms of the vector space V.  A twisted representation \rho only satisfies this up to a phase:

\rho(g_1) \circ \rho(g_2) = \theta(g_1,g_2) \rho(g_1 \circ g_2)

where \theta : G^2 \rightarrow U(1) is a function that captures the way this “representation” fails to respect composition.  Still, we want some nice properties: since composition in End(V) is automatically associative, expanding the triple product \rho(g_1) \circ \rho(g_2) \circ \rho(g_3) in the two possible orders must give the same answer:

\theta(g_1, g_2) \rho(g_1 \circ g_2) \circ \rho(g_3) = \theta(g_2, g_3) \rho(g_1) \circ \rho(g_2 \circ g_3)

Working out what this says in terms of \theta, the cocycle condition says that for any composable triple (g_1, g_2, g_3) we have:

\theta( g_1, g_2 \circ g_3) \theta (g_2,g_3) = \theta(g_1,g_2) \theta(g_1 \circ g_2, g_3)

So the 2-cocycles on G are exactly those \theta which satisfy this condition, which ensures we have associativity; the second group-cohomology group H^2_{grp}(G,U(1)) is the quotient of these cocycles by coboundaries.

Given one of these \theta maps, we get a category Rep^{\theta}(G) of all the \theta-twisted representations of G. It behaves just like an ordinary representation category… because in fact it is one! It’s the category of representations of a twisted version of the group algebra of G, called C^{\theta}(G). The point is, we can use \theta to twist the convolution product for functions on G, and this is still an associative algebra just because \theta satisfies the cocycle condition.
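The standard small example of all this (my illustration, not from the talks) is the projective action of \mathbb{Z}/2 \times \mathbb{Z}/2 on \mathbb{C}^2 by Pauli matrices.  One can verify numerically both the twisted-representation equation and the 2-cocycle condition for the resulting \theta:

```python
import itertools
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def rho(g):
    """Projective 'representation' of Z/2 x Z/2: (a, b) -> X^a Z^b."""
    a, b = g
    return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)

def theta(g1, g2):
    """2-cocycle: the sign from moving Z^{b1} past X^{a2} (ZX = -XZ)."""
    return (-1) ** (g1[1] * g2[0])

def mult(g1, g2):
    return ((g1[0] + g2[0]) % 2, (g1[1] + g2[1]) % 2)

G = list(itertools.product([0, 1], repeat=2))

# rho respects composition only up to the phase theta...
for g1, g2 in itertools.product(G, repeat=2):
    assert np.allclose(rho(g1) @ rho(g2), theta(g1, g2) * rho(mult(g1, g2)))

# ...and theta satisfies the 2-cocycle condition, so the twisted
# group algebra C^theta(G) is still associative
for g1, g2, g3 in itertools.product(G, repeat=3):
    assert (theta(g1, mult(g2, g3)) * theta(g2, g3)
            == theta(g1, g2) * theta(mult(g1, g2), g3))
```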

The pattern continues: a 3-cocycle captures how some function of 2 variables may fail to be associative: it specifies an associator map (a function of three variables), which has to satisfy some conditions for any four composable morphisms. A 4-cocycle captures how a map might fail to satisfy this condition, and so on. At each stage, the cocycle condition is automatically satisfied by coboundaries. Cohomology classes are elements of the quotient of cocycles by coboundaries.

So the idea of “twisted 2-linearization” is that we use this sort of data to change 2-linearization.

Twisted 2-Linearization

The idea behind the 2-category Span(Gpd)^{U(1)} is that it contains Span(Gpd), but that objects and morphisms also carry information about how to “twist” when applying the 2-linearization \Lambda.  So in particular, what we have is a (symmetric monoidal) 2-category where:

  • Objects consist of (A, \theta), where A is a groupoid and \theta \in Z^2(A,U(1))
  • Morphisms from A to B consist of a span (X,s,t) from A to B, together with \alpha \in Z^1(X,U(1))
  • 2-Morphisms from X_1 to X_2 consist of a span (Y,\sigma,\tau) from X_1 to X_2, together with \beta \in Z^0(Y,U(1))

The cocycles have to satisfy some compatibility conditions (essentially, pullbacks of the cocycles from the source and target of a span should land in the same cohomology class).  One way to see the point of this requirement is to make twisted 2-linearization well-defined.

One can extend the monoidal structure and composition rules to objects with cocycles without too much trouble so that Span(Gpd) is a subcategory of Span(Gpd)^{U(1)}. The 2-linearization functor extends to \Lambda^{U(1)} : Span(Gpd)^{U(1)} \rightarrow 2Vect:

  • On Objects: \Lambda^{U(1)} (A, \theta) = Rep^{\theta}(A), the category of \theta-twisted representations of A
  • On Morphisms: \Lambda^{U(1)} ( (X,s,t) , \alpha ) comes by pulling back a twisted representation in Rep^{\theta_A}(A) to one in Rep^{s^{\ast}\theta_A}(X), pulling it through the algebra map “multiplication by \alpha“, and pushing forward to Rep^{\theta_B}(B)
  • On 2-Morphisms: For a span of span maps, one uses the usual formula (see the paper for details), but a sum over the objects y \in Y picks up a weight of \beta(y) at each object

When the cocycles are trivial (evaluate to 1 always), we get back the 2-linearization we had before. Now the main point here is that the “sum over histories” that appears in the 2-morphisms now carries a weight.

So the twisted form of 2-linearization uses the same “pull-push” ideas as 2-linearization, but applied now to twisted representations. This twisting (at the object level) uses a 2-cocycle. At the morphism level, we have a “twist” between “pull” and “push” in constructing the functor. What the “twist” actually means depends on which cohomology degree we’re in – in other words, whether it’s applied to objects, morphisms, or 2-morphisms.

The “twisting” by a 0-cocycle just means having a weight for each object – in other words, for each “history”, or connection on spacetime, in a big sum over histories. Physically, the 0-cocycle is playing the role of the Lagrangian functional for the DW model. Part of the point in the FHLT program can be expressed by saying that what Freed and Quinn are doing is showing how the other cocycles are also the Lagrangian – as it’s seen at higher codimension in the more “local” theory.

For a TQFT, the 1-cocycles associated to morphisms describe how to glue together values for the Lagrangian that are associated to histories that live on different parts of spacetime: the action isn’t just a number. It is a number only “locally”, and when we compose 2-morphisms, the 0-cocycle on the composite picks up a factor from the 1-morphism (or 0-morphism, for a horizontal composite) where they’re composed.

This has to do with the fact that connections on bits of spacetime can be glued by particular gauge transformations – that is, morphisms of the groupoid of connections. Just as the gauge transformations tell how to glue connections, the cocycles associated to them tell how to glue the actions. This is how the cohomological twisting captures the geometric insight that the action is a section of a line bundle – not just a function, which is a section of a trivial bundle – over the moduli space of histories.

So this explains how these cocycles can all be seen as parts of the Lagrangian when we quantize: they explain how to glue actions together before using them in a sum over histories. Gluing them this way is essential to make sure that \Lambda^{U(1)} is actually a functor. But if we’re really going to see all the cocycles as aspects of “the action”, then what is the action really? Where do they come from, that they’re all slices of this bigger thing?

Twisting as Lagrangian

Now the DW model is a 3D theory, whose action is specified by a group-cohomology class [\omega] \in H^3_{grp}(G,U(1)). But this is the same thing as a class in the cohomology of the classifying space: [\omega] \in H^3(BG,U(1)). This takes a little unpacking, but certainly it’s helpful to understand that what cohomology classes actually classify are… gerbes. So another way to put a key idea of the FHLT paper, as Urs Schreiber put it to me a while ago, is that “the action is a gerbe on the classifying space for fields“.

What does this mean?

The linear map which the TQFT assigns to a cobordism is given as a path integral over all connections on the space(-time) S, which is actually just a sum, since the gauge group is finite and so all the connections are flat.  The point is that flat connections are described by assigning group elements to loops in S:

A : \pi_1(S) \rightarrow G

But this amounts to the same thing as a map into the classifying space of G:

f_A : S \rightarrow BG

This is essentially the definition of BG, and it implies various things, such as the fact that BG is a space whose fundamental group is G, and has all other homotopy groups trivial. That is, BG is the Eilenberg-MacLane space K(G,1). But the point is that the groupoid of connections and gauge transformations on S just corresponds to the mapping space Maps(S,BG). So the groupoid cohomology classes we get amount to the same thing as cohomology classes on this space. If we’re given [\omega] \in H^3(BG,U(1)), then we can get at these by “transgression” – which is very nicely explained in a paper by Simon Willerton.
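To make the "just a sum" above concrete: for the torus, \pi_1 = \mathbb{Z}^2, so flat G-connections are commuting pairs of elements of G, with gauge transformations acting by conjugation.  A quick count (my own example, with G = S_3) of the resulting groupoid cardinality:

```python
from itertools import product

# the symmetric group S3 as permutation tuples, with composition
S3 = [p for p in product(range(3), repeat=3) if len(set(p)) == 3]
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

# flat connections on the torus = homs Z^2 -> G = commuting pairs in G
commuting_pairs = [(g, h) for g, h in product(S3, S3)
                   if compose(g, h) == compose(h, g)]
assert len(commuting_pairs) == 18

# groupoid cardinality #Hom / |G| -- for the torus this equals the
# number of conjugacy classes of S3, namely 3
assert len(commuting_pairs) // len(S3) == 3
```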

The essential idea is that a 3-cocycle \omega (representing the class [\omega]) amounts to a nice 3-form on BG which we can integrate over a 3-dimensional submanifold to get a number. For a d-dimensional S, we get such a 3-manifold from a (3-d)-dimensional submanifold of Maps(S,BG): each point gives a copy of S in BG. Then we get a (3-d)-cocycle on Maps(S,BG) whose values come from integrating \omega over this image. Here’s a picture I used to illustrate this in my talk:

Now, it turns out that this gives 2-cocycles for 1-manifolds (the objects of 3Cob_2), 1-cocycles on 2D cobordisms between them, and 0-cocycles on 3D cobordisms between these cobordisms. The cocycles are for the groupoid of connections and gauge transformations in each case. In fact, because of Stokes’ theorem in BG, these have to satisfy all the conditions that make them into objects, morphisms, and 2-morphisms of Span_2(Gpd)^{U(1)}. This is the geometric content of the Lagrangian: all the cocycles are really “reflections” of \omega as seen by transgression: pulling back along the evaluation map ev from the picture. Then the way you use it in the quantization is described exactly by \Lambda^{U(1)}.

What I like about this is that \Lambda^{U(1)} is a fairly universal sort of thing – so while this example gets its cocycles from the nice geometry of BG which Freed and Quinn talk about, the insight that an action is a section of a (twisted) line bundle, that actions can be glued together in particular ways, and so on… These presumably can be moved to other contexts.

In the most recent TQFT Club seminar, we had a couple of talks – one was the second in a series of three by Marco Mackaay, which as promised previously I’ll write up together after the third one.

The other was by Björn Gohla, a student of João Faria Martins, giving an overview on the subject of “Tricategories and Trifunctors”, a mostly expository talk explaining some definitions.  Actually, this was a bit more specific than a general introduction – the point of it was to describe a certain kind of mapping space.  I’ve talked here before about representing the “configuration space” of a gauge theory as a groupoid: the objects are (optionally, flat) connections on a manifold M, and the morphisms are gauge transformations taking one connection to another.  The reason for the things Björn was talking about is analogous, except that in this case, the goal is to describe the configuration space of a higher gauge theory.

There are at least two ways I know of to talk about higher gauge theory.  One is in terms of categorical (or n-categorical) groups – which makes it a “categorification” of gauge theory in the sense of reproducing in \mathbf{Cat} (or \mathbf{nCat}) an analog of a structure, gauge theory, originally formulated in \mathbf{Set}.  Among other outlines, you might look at this one by John Baez and John Huerta for an introduction.  Another uses the lingo of crossed modules or crossed complexes.  In either case, the essential point is the same: there is some collection of groups (or groupoids, but let’s say groups to keep everything clear) which play the role of the single gauge group in ordinary gauge theory.

In the first language, we can speak of a “2-group”, or “categorical group” – a group internal to \mathbf{Cat}, or what is equivalent, a category internal to \mathbf{Grp}, which would have a group of objects and a group of morphisms (and, in higher settings still, groups of 2-morphisms, 3-morphisms, and so on).  The structure maps of the category (source, target, composition, etc.) have to live in the category of groups.

A crossed complex of groups (again, we could generalize to groupoids, but I won’t) is a nonabelian variation on a chain complex: a sequence of groups with maps from one to the next.  There are also a bunch more structures, which ultimately serve to reproduce all the kind of composition, source, and target maps in the n-categorical groups: some groups act on others, there are “bracket” operations on one group valued in another, and so forth.  This paper by Brown and Higgins explains how the two concepts are related when most of the groups are abelian, and there’s a lot more about crossed complexes and related stuff in Tim Porter’s “Crossed Menagerie“.

The point of all this right now is that these things play the role of the gauge group in higher gauge theory.  The idea is that in gauge theory, you have a connection.  Typically this is described in terms of a form valued in the Lie algebra of the gauge group.  Then a (thin) homotopy class of curves gets a holonomy valued in the group by integrating that form.  Alternatively, you can just think of the path groupoid of a manifold \mathcal{P}_1(M), where those classes of curves form the morphisms between the objects, which are just points of M.  Then a connection defines a functor \Gamma : \mathcal{P}_1(M) \rightarrow G, where G is the gauge group thought of as a category (groupoid in fact) with one object.  Or, you can just define a connection that way in the first place.  In higher gauge theory, a similar principle exists: begin with the n-path groupoid \mathcal{P}_n(M) where the morphisms are (thin homotopy classes of) paths, the 2-morphisms are surfaces (really homotopy classes of homotopies of paths), and so on, so the k-morphisms are k-dimensional bits of M.  Then you could define an n-connection as an n-functor into an n-group as defined above.  OR, you could define it in terms of a tower of differential k-forms valued in the crossed complex of Lie algebras associated to the crossed complex of Lie groups that replaces the gauge group.  You can then use an integral to get an element of the group at level k of the complex for any given k-morphism in \mathcal{P}_n(M), which (via the equivalence I mentioned) amounts to the same thing as the other definition of connection.
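A discrete sketch of the functorial picture (my own, in the spirit of lattice gauge theory, with made-up edge labels): assign a group element to each generating edge of a graph, and extend to paths by composition.  Functoriality is then just the statement that holonomy respects concatenation of paths.

```python
import numpy as np

# gauge group: 2x2 rotation matrices
def rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# the "connection": a group element for each generating edge
edge_holonomy = {'a': rot(0.3), 'b': rot(1.1), 'c': rot(-0.7)}

def hol(path):
    """Holonomy of a path given as a string of edge labels, traversed
    left to right; the empty path gets the identity."""
    out = np.eye(2)
    for e in path:
        out = edge_holonomy[e] @ out
    return out

# functoriality: composing paths first, or holonomies first, agrees
assert np.allclose(hol('abc'), hol('bc') @ hol('a'))
assert np.allclose(hol('abc'), hol('c') @ hol('ab'))
```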

João Martins has done some work on this sort of thing in dimension 2 (with Tim Porter) and 3 (with Roger Picken), which I guess is how Björn came to work on this question.  The question is, roughly, how to describe the moduli space of these connections.  The gist of the answer is that it’s a functor n-category [\mathcal{P}_n(M),\mathcal{G}], where \mathcal{G} is the n-group.  A little more generally, the question is how to describe mapping spaces for higher categories.  In particular, he was talking about the case n=3, which is where certain tricky issues start to show up.  Every bicategory (the weakest form of 2-category) is (bi)equivalent to a strict 2-category, so at n=2 there’s no real need to weaken things like associativity so that they hold only up to isomorphism – they can be taken to be equalities.  With 3-categories, this fails: the weakest kind of 3-category is a tricategory (introduced by Gordon, Power and Street, though also see the references beyond that link).  These are always triequivalent to something stricter than the general case, but not completely strict: Gray-categories.  The only equation from 2-categories which has to be weakened to an isomorphism here is the interchange law: given a square of four morphisms, we can either compose vertically first and then horizontally, or vice versa.  In a Gray-category, there’s an “interchanger” isomorphism

I_{\alpha,\alpha ',\beta,\beta'} : (\alpha \circ \beta) \cdot (\alpha ' \circ \beta ') \Rightarrow (\alpha \cdot \alpha ') \circ (\beta \cdot \beta ')

where \cdot is vertical composition of 2-cells, and \circ is horizontal (i.e. the same direction as 1-cells).  This is supposed to satisfy a compatibility condition.  It’s essentially the only one you can come up with starting with (\alpha \cdot \alpha ') \circ \beta (and composing it in different orders by throwing in identities in various places).

There’s another way to look at things, as Björn explained, in terms of enriched category theory.  If you have a monoidal category (\mathcal{V},\otimes), then a (\mathcal{V},\otimes)-enriched category \mathbb{G} is one in which, for any two objects x,y, there is an object \mathbb{G}(x,y) \in \mathcal{V} of morphisms, and composition gives morphisms \circ_{x,y,z} : \mathbb{G}(y,z) \otimes \mathbb{G}(x,y) \rightarrow \mathbb{G}(x,z).  A strict 3-category is enriched in \mathbf{2Cat}, with its usual tensor product, dual to its internal hom [-,-] (which gives the mapping 2-category of functors, natural transformations, and modifications, between any two 2-categories).  A Gray category is similar, except that it is enriched in \mathbf{Gray}, a version of \mathbf{2Cat} with a different tensor product, dual to the hom functor [-,-]' which gives the mapping 2-category with pseudonatural transformations (the weak version of the concept, where the naturality square only has to commute up to a specified 2-cell) as morphisms.  These are not the same, which is where the unavoidability of weakening 3-categories “really” comes from.   The upshot of this is as above: it matters which order we compose things in.

Having defined Gray-categories, let’s say A and B (which, in the applications I mentioned above, tend to actually be Gray-groupoids, though this doesn’t change the theory substantially), the point is to talk about “mapping spaces” – that is, Gray-categories of Gray-functors (etc.) from A to B.

Since they’ve been defined in terms of enriched category theory, one wants to use the general theory of enriched functors, transformations, and so forth – which is a lot easier than trying to work out the correct definitions from scratch using a low-level description.  So then a Gray-functor F : A \rightarrow B has an object map F_0 : A_0 \rightarrow B_0, mapping objects of A to objects of B, and then for each x,y \in A_0, a morphism in \mathbf{Gray} (which is our \mathcal{V}), namely F_{x,y} : A(x,y) \rightarrow B(F(x),F(y)).  There are a bunch of compatibility conditions, which can be expressed for any monoidal category \mathcal{V} (since they involve diagrams with the map \circ_{x,y,z} for any triple, and the like).  Similar comments apply to defining \mathcal{V}-natural transformations.

There is a slight problem here, which is that in this case, \mathcal{V} = \mathbf{Gray} is a 2-category, so we really need to use a form of weakly enriched categories…  All the compatibility diagrams should have 2-cells in them, and so forth.  This, too, gets complicated.  So Björn explained a shortcut that avoids drawing n-dimensional diagrams for these mapping n-categories, in terms of the arrow category \vec{B}. This is the category whose objects are the morphisms of B, and whose morphisms are commuting squares – or, when B is a 2-category, squares with a 2-cell – so a morphism in \vec{B} from f: x \rightarrow y to f' : x' \rightarrow y' is a triple g = (g_x,g_y,g_f) like so:

Morphism in arrow category

The 2-morphisms in \vec{B} are commuting “pillows”, where the front and back faces are morphisms like the above. So \beta : g \Rightarrow g' is \beta = (\beta_x,\beta_y), where \beta_x : g_x \Rightarrow g'_x is a 2-cell, and the whole “pillow” commutes.  When B is a tricategory, we need to go further – these 2-morphisms should be triples including a 3-cell \beta_f filling the “pillow”, and then 3-morphisms are commuting structures between these. These diagrams get hard to draw pretty quickly. This is the point of having an ordinary 2D diagram with at most 1-dimensional cells: pushing all the nasty diagrams into these arrow categories, we can replace a 2-cell representing a natural transformation with a diagram involving the arrow category.

This uses the fact that there are source and target maps (which are Gray-functors, of course) which we’ll call d_0, d_1: \vec{B} \rightarrow B. So then here (in one diagram) we have two ways of depicting a natural transformation \alpha :  F \rightarrow G between functors F,G : A \rightarrow B:

One is the 2-cell, and the other is the functor into \vec{B}, such that d_0 \circ \alpha = F and d_1 \circ \alpha = G.
To depict a modification between natural transformations (a 3-cell between 2-cells) just involves building the arrow category of \vec{B}, say \vec{\vec{B}}, and drawing an arrow from A into it. And so on: in principle, there is a tower above B built by iterating the arrow category construction, and all the different levels of “functor”, “natural transformation”, “modification”, and all the higher equivalents are just functors into different levels of this tower.  (The generic term for the k^{th} level of maps-between-maps-etc between n-categories is “(n,k)-transfor“, a handy term coined here.)
The advantage here is that at least the general idea can be extended pretty readily to higher values of n than 3.  Naturally, no matter which way one decides to do it, things will get complicated – either there’s a combinatorial explosion of things to consider, or one has to draw higher-dimensional diagrams, or whatever.  This exploding complexity of n-categories (in this case, globular ones) is one of the reasons why simplicial approaches – quasicategories or \infty-categories – are good.  They allow you to avoid talking about those problems, or at least fold them into fairly well-understood aspects of simplicial sets.  A lot of things – limits, colimits, mapping spaces, etc. – are pretty well understood in that case (see, for instance, the first chapter of Joshua Nicholls-Barrer’s thesis for the basics, or Jacob Lurie’s humongous book for something more comprehensive).  But sometimes, as in this case, they just don’t happen to be the things you want for your application.  So here we have some tools for talking about mapping spaces in the world of globular n-categories – and as the work by Martins/Porter/Picken shows, it’s motivated by some fairly specific work about invariants of manifolds, differential geometry, and so on.

Whatever ultimately becomes of some aspects of the Standard Model – the Higgs boson, for example – here is a report (based on an experiment described here) that some of the fundamentals hold up well to experimental test. Specifically, the Spin-Statistics Theorem – the relationship between quantum numbers of elementary particles and the representation theory of the Poincare group. It would have been very surprising if things had been otherwise, but as usual, the more you rely on an idea, the more important it is to be sure it fits the facts. The association between physics and representation theory is one of those things.

So the fact that it all seems to work correctly is a bit of a relief for me. See below.


Since the paperwork is now well on its way, I may as well now mention here that I’ve taken a job as a postdoctoral researcher at CAMGSD, a centre at IST in Lisbon, starting in September. In a week or so I will be heading off to visit there – there are quite a few people there doing things I find quite interesting, so it should be an interesting trip. After that, I’ll be heading down to the south of the country for the Oporto meeting on Geometry, Topology and Physics, which is held this year in Faro. This year the subject is “categorification”, so my talk will be mainly about my paper on ETQFT. There are a bunch of interesting speakers – two I happen to know personally are Aaron Lauda and Joel Kamnitzer, but many others look quite promising.

In particular, one of the main invited speakers is Mikhail Khovanov, whose name is famously (for some values of “famous”) attached to Khovanov Homology, which is a categorification of the Jones Polynomial. Instead of a polynomial, it associates a graded complex of vector spaces to a knot. (Dror Bar-Natan wrote an intro, with many pictures and computations). Khovanov’s more recent work, with Aaron Lauda, has been on categorifying quantum groups (starting with this).

Now, as for me, since my talk in Faro will only be about 20 minutes, I’m glad of the opportunity to give some more background during the visit at IST. In particular, a bunch of the background to the ETQFT paper really depends on this paper on 2-linearization. I’ve given some previous talks on the subject, but this time I’m going to try to get a little further into how this fits into a more general picture. To repeat a bit of what’s in this post, 2-linearization describes a (weak) 2-functor:

\Lambda : Span(Gpd) \rightarrow 2Vect

where Span(Gpd) has groupoids as its objects, spans of groupoid homomorphisms as its arrows, and spans-of-span-maps as 2-morphisms. 2Vect is the 2-category of 2-vector spaces, which I’ve explained before. This 2-functor is supposed to be a sort of “linearization”, which is a very simple functor

L : Span(FinSet) \rightarrow Vect

It takes a set X to the free vector space L(X) = \mathbb{C}^X, and a span X \stackrel{s}{\leftarrow} S \stackrel{t}{\rightarrow} Y to a linear map L(S) : L(X) \rightarrow L(Y). This can be described in two stages, starting with a vector in L(X) – namely, a function \psi : X \rightarrow \mathbb{C}. The two stages are:

  • First, “pull” \psi up along s to \mathbb{C}^S (note: I’m conflating the set S with the span (S,s,t)), to get the function s^*\psi = \psi \circ s : S \rightarrow \mathbb{C}.
  • Then “push” this along t to get t_*(s^*\psi). The “push” operation f_* along any map f : X \rightarrow Y is determined by the fact that it takes the basis vector \delta_x \in \mathbb{C}^X to the basis vector \delta_{f(x)} \in \mathbb{C}^Y (these are the delta functions which are 1 on the given element and 0 elsewhere)

It’s helpful to note that, for a given map f : X \rightarrow Y, the operations f^* and f_* are linear adjoints (using the standard inner product where the delta functions are orthonormal). Combining them together – it’s easy to see – gives a linear map which can be described in the basis of delta functions by a matrix. The (x,y)-entry of the matrix counts the elements of S which map to (x,y) under (s,t) : S \rightarrow X \times Y. We interpret this by saying the matrix “counts histories” connecting x to y.
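This pull-then-push description is concrete enough to compute directly. A minimal sketch (the sets, spans, and values here are all made up for illustration), checking that the pull–push composite agrees with the history-counting matrix:

```python
from collections import Counter
import numpy as np

# The span X <-s- S -t-> Y becomes a matrix whose (y, x) entry counts the
# "histories" e in S with s(e) = x and t(e) = y.
def span_to_matrix(X, Y, S, s, t):
    counts = Counter((t[e], s[e]) for e in S)
    return np.array([[counts[(y, x)] for x in X] for y in Y])

X, Y, S = ['a', 'b'], ['u', 'v'], [0, 1, 2]
s = {0: 'a', 1: 'a', 2: 'b'}      # source leg of the span
t = {0: 'u', 1: 'v', 2: 'v'}      # target leg of the span
M = span_to_matrix(X, Y, S, s, t)

# The same linear map done in two stages on psi : X -> C:
psi = np.array([1.0, 2.0])                                 # psi(a)=1, psi(b)=2
pulled = np.array([psi[X.index(s[e])] for e in S])         # s^* psi = psi o s
pushed = np.zeros(len(Y))
for e, val in zip(S, pulled):
    pushed[Y.index(t[e])] += val                           # t_* sums over fibres
print(np.allclose(pushed, M @ psi))   # → True
```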

In groupoidification, à la Baez and Dolan (see the various references beyond the link), one replaces FinSet with FinGpd, the 2-category of (essentially) finite groupoids, but we still have a functor into Vect. In fact, into FinHilb: the vector space D(G) is the free one on isomorphism classes in G, but the linear maps (and the inner product) are tweaked using the groupoid cardinality, which can be any positive rational number. Then we say the matrix does a “sum over histories” of certain weights. In this paper, I extend this to “U(1)-groupoids”, which are labelled by phases – which represent the exponentiated action in quantum mechanics – and end up with complex matrices. So far so good.

The 2-linearization process is really “just” a categorification of what happens for sets, where we treat “groupoid” as the right categorification of “set”, and “Kapranov-Voevodsky 2-vector space” as the right categorification of “vector space”. (To treat “category” as the right categorification of “set”, one would have to use Elgueta’s “generalized 2-vector space“, which is probably morally the right thing to do, but here I won’t.) To a groupoid X, we assign the category of functors into Vect – that is, Rep(X) (in smooth cases, we might want to restrict what kind of representations we mean – see below).

To pull such a functor along a groupoid homomorphism f : X \rightarrow Y is again done by precomposition: f^*F = F \circ f. The push map in 2-linearization is the Kan extension of the functor \Psi along f. This is the universal way to push a functor forward, and is the (categorical!) adjoint to the pull map. (Kan extensions are supposed to come equipped with some natural transformations: these are the ones associated to the adjunction). Then composing “pull” and “push”, one categorifies “sum over histories”.

So here’s one thing this process is related to: in the case where our groupoids have just one object (i.e. are groups), and the homomorphism f : X \rightarrow Y is an inclusion (conventionally written H < G), this goes by a familiar name in representation theory: restriction and induction. So, given a representation \rho of G (that is, a functor from Y into Vect), there is a restricted representation res_H^G \rho = f^*\rho, which is just the same representation space, acted on only by elements of H (that is, X). This is the easy one. The harder one is the induced representation of G from a representation \tau of H (i.e. \tau : X \rightarrow Vect), which is to say ind^G_H \tau = f_* \tau : Y \rightarrow Vect. The fact that these operations are adjoints goes in representation theory by the name “Frobenius reciprocity”.
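At the level of characters, Frobenius reciprocity is easy to check numerically: \langle ind^G_H \tau, \rho \rangle_G = \langle \tau, res^G_H \rho \rangle_H. Here is a sketch for the (arbitrarily chosen) test case G = S_3, H = A_3, using the standard induced-character formula; the character tables are hard-coded:

```python
import itertools, cmath

# Check <ind_H^G tau, rho>_G = <tau, res_H^G rho>_H for G = S3, H = A3.
G = list(itertools.permutations(range(3)))
mult = lambda p, q: tuple(p[q[i]] for i in range(3))        # (p*q)(i) = p(q(i))
def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)
def sign(p):                                                # parity of p
    return 1 if sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2 == 0 else -1

H = [p for p in G if sign(p) == 1]                          # A3, cyclic of order 3
w = cmath.exp(2j * cmath.pi / 3)

# Irreducible characters: trivial, sign, 2-dim standard for G;
# powers of a primitive cube root of unity for H = <c>.
chars_G = [lambda p: 1, sign, lambda p: sum(p[i] == i for i in range(3)) - 1]
c = (1, 2, 0)
def log_c(p):                                               # p = c^k; return k
    q = (0, 1, 2)
    for k in range(3):
        if q == p:
            return k
        q = mult(c, q)
chars_H = [lambda p, j=j: w ** (j * log_c(p)) for j in range(3)]

Hset = set(H)
def ind(tau, g):                                            # induced character
    conj = [mult(mult(inv(x), g), x) for x in G]
    return sum(tau(y) for y in conj if y in Hset) / len(H)
def inner(f1, f2, K):
    return sum(f1(k) * complex(f2(k)).conjugate() for k in K) / len(K)

for rho in chars_G:
    for tau in chars_H:
        lhs = inner(lambda g: ind(tau, g), rho, G)
        rhs = inner(tau, rho, H)                            # res just restricts rho
        assert abs(lhs - rhs) < 1e-9
print("Frobenius reciprocity verified for S3 and A3")
```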

These two operations were studied by George Mackey (in particular, though I’ve been implicitly talking about discrete groups, Mackey’s better known for looking at the case of unitary representations of compact Lie groups). The notion of a Mackey functor is supposed to abstract the formal properties of these operations. (A Mackey functor is really a pair of functors, one covariant and one contravariant – giving restriction and “transfer”/induction maps – which have formal properties similar to the functor taking groups to their representation rings – which it’s helpful to think of as the categories of representations, decategorified. In nice cases, a Mackey functor on a category C is the same as a functor out of Span(C).)

Anyway, by way of returning to groupoids: the induced representation for groups is found by \mathbb{C}[G] \otimes_{\mathbb{C}[H]} V, where V is the representation space of \tau. (For compact Lie groups, replace the group algebra \mathbb{C}[G] with L^2(G), and likewise for H). A similar formula shows up in the groupoid case, but with a contribution from each object (see the paper on 2-linearization for more details). This is also the formula for the Kan extension.

“Now wait a minute”, the categorically aware may ask, “do you mean the left Kan extension, or the right Kan extension?” That’s a good question! For one thing, they have different formulas: one involving limits, and the other involving colimits. Instead of answering it, I’ll talk about something not entirely unrelated – and a little more context for 2-linearization.

The setup here is actually a rather special case of Grothendieck’s six-operation framework, in the algebro-geometric context, for sheaves on (algebraic) spaces (there’s an overview in this talk by Joseph Lipman, the best I’ve been able to find online). These operations are nowadays extended to derived categories of sheaves (see this intro by R.P. Thomas). The derived category D(X) is described concretely in terms of chain complexes of sheaves in Sh(X), taken “up to homotopy” – it is a sort of categorification of cohomology. But of course, this contains Sh(X) as trivial complexes (i.e. concentrated at level zero). The fact that our sheaves come from functors into Vect, which form a 2-vector space, so that functors between these are exact, means that there’s no nontrivial homology – so in our special case, the machinery of derived categories is more than we need.

This framework has been extended to groupoids – so the sheaves are on the space of objects, and are equivariant – as described in a paper by Moerdijk called “Etale Groupoids, Derived Categories, and Operations” (the situation of sheaves that are equivariant under a group action is described in more detail by Bernstein and Lunts in the Springer lecture notes “Equivariant Sheaves and Functors”). Sheaves on groupoids are essentially just equivariant sheaves on the space of objects. Now, given a morphism f : X \rightarrow Y, there are four induced operations:

  • f^* , f^! : D(Y) \rightarrow D(X)
  • f_*, f_! : D(X) \rightarrow D(Y) (f_* is right adjoint to f^*, and f_! is left adjoint to f^!)

(The other operations of the “six” are hom and \otimes). The basic point here is that we can “pull” and “push” sheaves along the map f in various ways. For our purposes, it’s enough to consider f^* and f_*. The sheaves we want come from functors into Vect (we actually have a vector space at each point in the space of objects). These are equivariant “bundles”, albeit not necessarily locally trivial. The fact that we can think of these as sheaves – of sections – tends to stay in the background most of the time, but in particular, being functors automatically makes the resulting sheaves equivariant. In the discrete case, we can just think of these as sheaves of vector spaces: just take F(U) to be the direct sum of all the vector spaces at each object in any subset U – all subsets are open in the discrete topology… For the smooth situation, it’s better not to do this, and think of the space of sections as a module over the ring of suitable functions.

Now to return to your very good question about “left or right Kan extension”… the answer is: both, since for Vect-valued functors (where Vect is the category of finite dimensional vector spaces), we have natural isomorphisms f^* \cong f^! and f_* \cong f_!: these functors are ambiadjoint (i.e. both left and right adjoint). We use this to define the effect of \Lambda on 2-morphisms in Span_2(Gpd).

This isomorphism is closely related to the fact that finite-dimensional vector spaces are canonically isomorphic to their double-duals: V \cong V^{**}. That’s because the functors f^* and f_* are 2-linear maps. These are naturally isomorphic to maps represented as matrices of vector spaces. Taking an adjoint – aside from transposing the matrix – naturally replaces the matrix entries with their duals. Doing this twice, we get the isomorphisms above. So the functors are both left and right adjoint to each other, and thus in particular we have what is both the left and right Kan extension. (This is also connected with the fact that, in Vect, the direct sum is both product and coproduct – i.e. limit and colimit.)
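A decategorified shadow of this: for ordinary complex matrices, taking the adjoint (conjugate transpose) twice returns the original map, just as V \cong V^{**} in finite dimensions. A trivial sketch – the matrix is random, nothing here is from the post:

```python
import numpy as np

# Taking the adjoint twice is the identity on finite-dimensional maps,
# mirroring the canonical isomorphism V = V** used above.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
adj = lambda M: M.conj().T            # conjugate transpose
print(np.allclose(adj(adj(A)), A))    # → True
```

For infinite-dimensional spaces this double-adjoint step is exactly what fails, which is the point made two paragraphs below.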

It’s worth pointing out, then, that we wouldn’t generally expect this to happen for infinite-dimensional vector spaces, since these are generally not canonically isomorphic to their double-duals. Instead, for this case we would need to be looking at functors valued in Hilb, since Hilbert spaces do have that property. That’s why, in the case of smooth groupoids (say, Lie groupoids), we end up talking about “(measurable) equivariant Hilbert bundles”. (In particular, the ring of functions over which our sheaves are modules is: the measurable ones. Why this is the right choice would be a bit of a digression, but roughly it’s analogous to the fact that L^2(X) is a space of measurable functions. This is the limitation on which representations we want that I alluded to above.).

Now, \Lambda is supposed to be a 2-functor. In general, given a category C with all pullbacks, Span_2(C) is the universal 2-category faithfully containing C such that every morphism has an ambiadjoint. So the fact that the “pull” and “push” operations are ambiadjoint lets this 2-functor respect that property. It’s the unit and counits of the adjunctions which produce the effect of \Lambda on 2-morphisms: given a span of span-maps, we take the two maps in the middle, consider the adjoint pairs of functors that come from them, and get a natural transformation which is just the composite of the counit of one adjunction and the unit of the other.

Here’s where we understand how this fits into the groupoidification program – because the effect of \Lambda on 2-morphisms exactly reproduces the “degroupoidification” functor of Baez and Dolan, from spans of groupoids into Vect, when we think of such a span as a 2-morphism in Hom(1,1) – that is, a span of maps of spans from the terminal groupoid to itself. In other words, degroupoidification is an example of something we can do between ANY pair of groupoids – but in the special case where the representation theory all becomes trivial. (This by no means makes it uninteresting: in fact, it’s a perfect setting to understand almost everything else about the subject.)

Now, to actually get all the coefficients to work out to give the groupoid cardinality, one has to be a bit delicate – the exact isomorphism between the construction of the left and right adjoint has some flexibility when we’re working over the field of complex numbers. But there’s a general choice – the Nakayama isomorphism – which works even when we replace Vect by R-modules for some ring R. To make sure, for general R, that we have a true isomorphism, the map needs some constants. In our case, these happen to be exactly the groupoid cardinalities needed to make the above statement true!

To me, this last part is a rather magical aspect of the whole thing, since the motivation I learned for groupoid cardinalities is quite remote from this – it’s just a valuation on groupoids which gets along with products and coproducts, and also with group actions (so that |X/G| = |X|/|G|, even when the action isn’t free). So one thing I’d like to know, but currently don’t, is: how is it that this is “secretly” the same thing as the Nakayama isomorphism?
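That valuation is easy to compute: the cardinality of an action groupoid X//G is the sum over orbits of 1/|\mathrm{stabilizer}|, and the identity |X//G| = |X|/|G| holds even for non-free actions. A quick sketch with a made-up example (S_3 acting on a three-element set):

```python
from fractions import Fraction
from itertools import permutations

# Groupoid cardinality of X//G: one term 1/|Stab(x)| per orbit.
def groupoid_cardinality(X, G, act):
    total, seen = Fraction(0), set()
    for x in X:
        if x in seen:
            continue
        seen |= {act(g, x) for g in G}                 # the whole orbit of x
        total += Fraction(1, sum(act(g, x) == x for g in G))
    return total

G = list(permutations(range(3)))                       # S3
X = [0, 1, 2]
act = lambda g, x: g[x]                                # this action is not free

card = groupoid_cardinality(X, G, act)
print(card == Fraction(len(X), len(G)))   # → True (both are 1/2)
```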

Among the talks given in our seminar on stacks and groupoids, there have been a few which I haven’t posted about yet – two by Tom Prince about stacks and homotopy theory, and one by José Malagon-Lopez comparing different characterizations of stacks. Tom is a grad student, and José is a postdoc, and they both work with Rick Jardine, who has done a lot of important work in homotopy theory, notably from the simplicial point of view. There was some overlap, since José was comparing the different characterizations for stacks that had been used by different people through the seminar, including Tom, but there’s still quite a lot to say here. I’ll try to cover the main points as I understand them, focusing on what I personally find relevant.

A major theme for both of them is the use of descent, which in general is a way to talk about the objects of a category in terms of another category. A standard example of descent would be the case of sheaves. First, though, what is it that’s being described in terms of descent?

Well, there are two opposite points of view on stacks – as categories fibred in groupoids (CFG’s), and as sheaves of groupoids. (I’ve found this book by Behrend et al. on algebraic stacks handy in parsing through some of the definitions here, and José recommended Vistoli’s notes on sites, fibred categories, and descent.) One of the things José summarized in his talk was how these are related (which was a key bit of Aji’s earlier talk, blogged here). A CFG over \mathcal{S} is a functor p: \mathcal{X} \rightarrow \mathcal{S} where the preimage over (x,1_x) is a groupoid (that is, all the morphisms mapping to an identity are invertible).

Now, given such a p : \mathcal{X} \rightarrow \mathcal{S}, one gets a (weak) functor from \mathcal{S} into groupoids (the “fibre-selecting” functor), which, among other things, gives the groupoid p^{-1}(x,1_x) for each object x. Specifying this and showing it is a weak functor takes a little work. A stack, in particular, is such a functor into Gpd with the extra property that descent data are effective. This is a weak version of the condition for a sheaf.

Stacks and Descent

The classical setting for descent questions is sheaf theory. To begin with, we have some category \mathcal{S} of spaces – this might be Top (topological spaces), or Sch (affine schemes), or something else – the classical version has \mathcal{S} = \mathcal{O}(X), the category of open sets on a topological space. The main thing is that \mathcal{S} must be a Grothendieck site; in particular, there is a notion of covering for an object X \in \mathcal{S}. This is a collection \underline{U} = \{ f_{\alpha} : U_{\alpha} \rightarrow X \} of arrows satisfying some conditions that capture the intuitive idea of “open cover”.

So, just to recall: the idea of describing a space as a sheaf on a site involves a little shift of perspective, but it’s the idea behind diffeological spaces (as I described in my post on Enxin Wu’s talk in our seminar, and which, for me, is a good example to help understand this viewpoint). A diffeological space is determined by giving the set of all “smooth” maps into it from each object in a certain site. Now, any space S \in \mathcal{S} can also be represented in Hom(\mathcal{S}^{op},Set) (by the Yoneda embedding) as the sheaf Hom(-,S) which gives, for each space X, the set of maps in \mathcal{S} (topological, algebraic, or whatever) into S – but one can get objects in a bigger category, namely that of sheaves, which is a way of describing them in terms of the objects in the site \mathcal{S}. In the case of diffeological spaces, the site in question is just the one consisting of neighborhoods in \mathbb{R}^n for any n, with smooth maps, and the obvious idea of a cover. So representable ones are just Euclidean neighborhoods, and general ones are defined by smooth maps out of these: the sheaf condition is just a way to state the natural compatibility condition for these maps. Similar thinking applies to any site \mathcal{S}.

The point of this condition is to ask when we can take a cover of an object S, and describe global objects (functions on S) in terms of local objects (functions on elements in the cover), which are compatible. Descent is the gluing condition for a sheaf F: given a cover – a bunch of maps f_i : U_i \rightarrow S which satisfy some conditions that capture the intuitive idea of covering S – a descent datum is a collection of x_i \in F(U_i), and isomorphisms between the restrictions (by F(\leq)) to overlaps U_i \cap U_j, where the isomorphisms satisfy some cocycle condition ensuring that the restrictions to U_i \cap U_j \cap U_k agree. The datum is effective if there is a “global” object x \in F(S) of which each x_i is the restriction. (I find this easiest to see when \mathcal{S}=\mathcal{O}(X), where it says we can glue functions on local patches that agree on overlaps, and find that they must have come by restricting a global function on X.)
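That simplest case – the sheaf of functions, with a cover by patches – can be sketched in a few lines. The space, cover, and values below are all invented for illustration; the assertion is the descent-datum condition, and the return value is the effective global object:

```python
# Gluing sections of the sheaf of functions on a finite "space" S with a
# two-element cover.  Locally defined functions agreeing on the overlap
# glue to a unique global function.
S = {0, 1, 2, 3}
U1, U2 = {0, 1, 2}, {2, 3}

f1 = {0: 10, 1: 11, 2: 12}    # a section over U1
f2 = {2: 12, 3: 13}           # a section over U2

def glue(f1, U1, f2, U2):
    overlap = U1 & U2
    assert all(f1[x] == f2[x] for x in overlap), "not a descent datum"
    return {**f1, **f2}       # the unique global section restricting to each

f = glue(f1, U1, f2, U2)
print(f == {0: 10, 1: 11, 2: 12, 3: 13})   # → True
```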

This all makes sense if F has values in Set (or some other 1-category), but the point for stacks is that we have a weak functor G : \mathcal{S}^{op} \rightarrow Gpd. That is, the values are in groupoids, which naturally form a 2-category. So the descent condition can be weakened – instead of an equality in the cocycle condition, we get an isomorphism, which has to be coherent. Part of the point of describing stacks as “sheaves of groupoids” is as a weakening of this way of describing a space, to an “up to equivalence” kind of condition.

One point which José made, and which Tom made use of, is that this description of a Grothendieck topology really gives too much information – that is, the category of sheaves on a site (taken up to equivalence) doesn’t uniquely determine the site. Instead of coverings, one should talk about sieves – these are, one might say, one-sided ideals of maps into S. In particular, subfunctors R \subset Hom(-,S) – that is, for each space V, a subset of all maps V \rightarrow S, in a way that gets along with composition of maps (which is how they resemble ideals). Any covering defines a sieve – as the subfunctor of maps which factor through the covering maps – but more than one covering might define the same sieve (rather the same way an ideal can be presented in terms of different generators).
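The "one-sided ideal" property can be checked directly in a toy site. Everything below – finite sets as objects, all functions as maps, a one-point patch as the cover – is invented for illustration; the point is that a map in the generated sieve stays in the sieve after precomposition with anything:

```python
from itertools import product

# A toy site: objects are finite sets, morphisms are all functions.
# The sieve on S generated by a cover = all maps factoring through a
# covering map; we check it is closed under precomposition (an "ideal").
S, U = (0, 1), (0,)
cover = [{0: 0}, {0: 1}]          # two maps U -> S (jointly surjective)

def all_maps(V, W):
    return [dict(zip(V, vals)) for vals in product(W, repeat=len(V))]

def in_sieve(f, V):               # does f : V -> S factor as u o h ?
    return any(all(f[v] == u[h[v]] for v in V)
               for u in cover for h in all_maps(V, U))

V, W = (0, 1, 2), (0, 1)
for f in all_maps(V, S):
    if in_sieve(f, V):
        for g in all_maps(W, V):                     # any g : W -> V
            fg = {w: f[g[w]] for w in W}             # precomposition f o g
            assert in_sieve(fg, W)
print("the generated sieve is closed under precomposition")
```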

So the view of stacks as sheaves G (of groupoids) satisfying descent is then rephrased by saying that, for any covering sieve R of an object S \in \mathcal{S}, there is an equivalence of functors between Hom(E_S, G) and Hom(E_R,G), where E_S and E_R are some sheaves on \mathcal{S} constructed in a fairly natural way from the object S itself, and from the sieve R. The point is that Hom(E_S,G) = G(S) is a groupoid. The functor E_R ends up such that Hom(E_R,G) can be described in terms of covers \{ U_i \rightarrow S \} as having objects which are compatible collections of objects from U_i and isomorphisms between their restrictions – that is, descent data – and morphisms being compatible maps. So equivalence of these (2-)functors ends up being the stack condition.

One of Tom’s objectives was to look at all this from the point of view of simplicial sheaves – and here we need to think about homotopy-theoretic ideas of “equivalence”, instead of just the equivalence of categories we just used.

Model Structure

One of the major tools in homotopical algebra is the notion of a model structure (these slides by Peter May give the basic concepts). These show up throughout higher category theory because homotopies-between-homotopies-…-between-maps give a natural model of higher morphisms.

Model categories axiomatize three special kinds of maps one is interested in when talking about maps between spaces, up to homotopy. “Weak equivalence” generalizes a “homotopy equivalence” f : X \rightarrow Y – a map which induces isomorphisms between homotopy groups of X and Y (as far as homotopy theory can detect, X and Y are “the same”). “Fibration” and “cofibration” are defined in homotopy theory by a lifting property (and its dual) – essentially, that if a map can be lifted along f, so can a homotopy of the map.  Fibrations generalize (“nice”) surjections, and cofibrations generalize (“nice”) inclusions.

In particular, Tom was making use of a notion of descent where the equations that define the descent conditions are just required to be weak equivalences. The point is that we can talk about sheaves of various kinds of things – sets, groupoids, or simplicial sets were the examples he gave. The relevant notion of equivalence for sets is isomorphism (the usual way of stating descent), but for groupoids it’s equivalence, and for simplicial sets, it’s another notion of weak equivalence (from the Joyal-Tierney model structure). When talking about stacks, we’re dealing with groupoids.

On the other hand, groupoids can be described in terms of simplicial sets, using the construction known as the simplicial nerve. In particular, the classifying spaces of groupoids have no interesting homotopy groups above the first – so this ends up giving another way to state the weakened form of descent mentioned above. This type of construction – using the fact that simplicial sets are very versatile (they can describe categories, or reasonable spaces, or \infty-categories, for instance) – is what motivates the study of simplicial presheaves, which is the basis of a lot of work by Rick Jardine (see the book Simplicial Homotopy Theory for a whole lot more than I can touch on here).

This gives another characterization of stacks: a sheaf of groupoids G is a stack if and only if BG (the sheaf of classifying spaces) satisfies descent, in the sense that it is “pointwise” (that is, section-wise) weakly equivalent to a certain kind of “globally fibrant replacement”. This is like the description of descent in terms of an equivalence of categories, as above – but in general it is weaker. In fact, when the simplicial sets we’re talking about are classifying spaces of groupoids, then by construction the two are just the same. This kind of replacement accomplishes for stacks roughly what “sheafification” does for sheaves – i.e. it turns “prestacks” into “stacks”. It is done by taking a limit over all sieves – the universal property of the limit, then, is what ensures the existence of all the global objects that descent requires must exist. This replacement is always a “local” weak equivalence, but only if we started with a stack is it one “pointwise” (i.e. in terms of sections).

Cocycles

As an aside: one thing which Tom talked about as a preliminary, but which I found particularly helpful from where I was coming from, had to do with “cocycle categories”. This is a somewhat unusual use of the term “cocycle”: here, a cocycle from X to Y is a certain kind of span – namely, a pair of maps from Z:

X \stackrel{f}{\leftarrow} Z \stackrel{g}{\rightarrow} Y

where f is a “weak equivalence”. A morphism between cocycles is just a map Z \rightarrow Z' which commutes with the maps in each cocycle. These form a category H(X,Y). The point of introducing this is that there is a correspondence between the set of connected components of this category – that is, \pi_0(H(X,Y)) – and the set of homotopy classes of maps from X to Y (denoted [X,Y] in homotopy theory).

One way to think about this is that cocycles stand in relation to functions roughly as spans stand to relations. If we are in Sets, where weak equivalence is isomorphism, then Z can be thought of as the graph of a function from X to Y – since f is bijective, Z can stand as a substitute for X. Moving to spaces, we weaken the requirement so that Z is only a replacement for X “up to homotopy” – thus, cocycles are adequate replacements for homotopy classes of functions. This business of replacing objects with other, nicer objects (say, “fibrant replacement”) is a recurring theme in homotopy theory, and this digression on cocycles helped me understand why. Part of the point is that the equivalence classes of these “cocycles” are easier to calculate directly than, but equivalent to, homotopy classes of maps.
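To make the Sets case of this analogy concrete, here is a small sketch (my own illustration, with all names hypothetical, not anything from the talk): a span X \leftarrow Z \rightarrow Y in which the left leg is a bijection determines an ordinary function X \rightarrow Y, namely g \circ f^{-1}.

```python
# A cocycle in Sets: a span X <-f- Z -g-> Y where f is a bijection.
# Such a span determines an ordinary function X -> Y, namely g . f^{-1}.

def span_to_function(f, g):
    """Given f: Z -> X (a bijection) and g: Z -> Y, both as dicts keyed
    by elements of Z, return the induced function X -> Y as a dict."""
    f_inv = {x: z for z, x in f.items()}
    assert len(f_inv) == len(f), "f must be injective (here: bijective onto X)"
    return {x: g[z] for x, z in f_inv.items()}

# Example: Z = {0, 1}, X = {'a', 'b'}, Y = {'p', 'q'}
f = {0: 'a', 1: 'b'}   # a bijection Z -> X
g = {0: 'q', 1: 'p'}   # any map Z -> Y
induced = span_to_function(f, g)
print(induced)         # {'a': 'q', 'b': 'p'}
```

Weakening “f is a bijection” to “f is a weak equivalence” is exactly the step that takes us from this picture to cocycles between spaces.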


In any case, there’s more I could say about these talks, but I’ll leave off for now.

Over the next week, I’ll be visiting Derek Wise at UC Davis to talk about some stuff having to do with ETQFTs, but soon enough I’ll also do a writeup of Emre Coskun’s talks in the seminar about gerbes, which started today and continue tomorrow.

It’s been a while since I posted here, partly because I was working on getting this paper ready for submission. Since I wrote about its subject in my previous post, about Derek Wise’s talk at Perimeter Institute, I’ll let that stand for now. In the meantime, we’ve had a few talks in the seminar on stacks and groupoids. Tom Prince gave a couple of interesting talks about stacks from the point of view of simplicial sheaves, explaining how they can be seen as certain categories of objects satisfying descent. Since I only have handwritten notes on this talk, and I still haven’t entirely digested it, I think I’ll talk about that at the same time as discussing the upcoming talk about descent and related stuff by José Malagon-Lopez. For right now, I’ll write about Enxin Wu’s talk on diffeological bundles and the irrational torus. (DVI notes here)  Some of the theory of diffeological spaces has been worked out by Souriau (originally) and then Patrick Iglesias-Zemmour.  Some of the categorical properties he discussed are explained by Baez and Hoffnung (Enxin’s notes give some references).  Enxin and Dan Christensen have looked a bit at diffeological spaces in the context of homotopy theory and model categories.

Part of the motivation for this seminar was to look at how groupoids and some related entities, namely stacks, and algebra in the form of noncommutative geometry (although we didn’t get as much on this as I’d hoped), can be treated as ways to expand the notion of “space”. One reason for doing this is to handle certain kinds of moduli problems, but another – more directly related to the motivation for noncommutative geometry (NCG) – is to deal with certain quotients. The irrational torus is one of these, and under the name “noncommutative torus” is a standard example in NCG. A brief introduction to it by John Baez can be found here, and more detailed discussion is in, for example, Ch3, section 2.β of Connes’ “Noncommutative Geometry“, which describes how to find its cyclic cohomology (a noncommutative analog of the cohomology of a space), which turns out to be 2-dimensional.

The point here should be to think of it as the quotient of a space by a group action (which gives a transformation groupoid, and from there a – noncommutative – groupoid C^{*}-algebra). The space is a torus, and the group acting on it is \mathbb{R} acting by translation parallel to a line with irrational slope. In particular, we can treat T^2 as a group \{ (e^{ix},e^{iy}) | x,y \in \mathbb{R} \} with componentwise multiplication, and think of the irrational torus, given an irrational \theta, as the quotient T^2/\mathbb{R}_{\theta} by the subgroup \mathbb{R}_{\theta} = \{ (e^{ix},e^{i \theta x}) \}.

Now, this is quite well-defined as a set, but as a space it’s quite horrible, even though both groups are quite nice Lie groups. In particular, the subgroup \mathbb{R}_{\theta} is dense in T^2 – or, thought of in terms of a group acting on the torus, the orbit of any given point is dense. So the quotient is not a manifold – in fact, it’s quite hard to visualize. This illustrates the point that smooth manifolds are badly behaved with respect to quotients. In his talk, Enxin told us about another way to approach this problem by moving to the category of diffeological spaces. As I mentioned in a previous post, this is one of a number of attempts to expand the category of smooth manifolds \mathbf{Mfld}, to get a category which has nice properties \mathbf{Mfld} does not have, such as having quotient objects, mapping objects, and so on. Now, the category \mathbf{Top} is such an example, but this loses all the information about which maps are smooth. The point is to find some intermediate generalization, which still carries information about geometry, not just topology.
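The density of the orbits is easy to see numerically (this is my own illustration, not part of the talk): projecting onto one circle factor, the subgroup’s intersections with a fixed fibre form the orbit of the irrational rotation x \mapsto x + \theta \pmod 1, and the first few thousand points of that orbit come as close as you like to any target.

```python
import math

# Numerical illustration: the orbit of the irrational rotation
# x -> x + theta (mod 1) comes arbitrarily close to any point of the
# circle.  We check how close the first N points get to a target.

theta = math.sqrt(2)        # any irrational "slope" works
target, N = 0.5, 10_000
closest = min(abs((n * theta) % 1.0 - target) for n in range(N))
print(closest)              # tiny: the orbit fills the circle densely
```

This is precisely why the quotient topology on T^2/\mathbb{R}_{\theta} is indiscrete: every orbit meets every open set.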

A diffeological space can be defined as a concrete (i.e. \mathbf{Set}-valued) sheaf on the site whose objects are open neighborhoods of \mathbb{R}^n (for all n) and whose morphisms are smooth maps, though this is sort of an abstract way to define a space. The point of it, however, is that this site gives a model of all the maps we want to call “smooth”. Defining the category \mathbf{Diff} of diffeological spaces in terms of sheaves on sites helps to ensure it has nice categorical properties, but more intuitively, a smooth space X is described by giving a set, and defining all the smooth maps into the space from neighborhoods of \mathbb{R}^n (these are called plots, and the collection is a diffeology). This differs from a manifold, which is defined in terms of (ahem) an atlas of charts – which unlike plots are required to be local homeomorphisms into a topological space, which fit together in smooth ways. The smooth maps into X also have to be compatible – which is what the condition of being a sheaf guarantees – but the point is that we no longer suppose X locally looks just like \mathbb{R}^n, so it can include strange quotients like the irrational torus.

Now \mathbf{Diff} has lots of good properties, some of which are listed in Enxin’s notes. For instance, it has all limits and colimits, and is cartesian closed. What’s more, there’s a pair of adjoint functors between \mathbf{Top} and \mathbf{Diff} – so there’s an “underlying topological space” for any diffeological space (a topology making all the plots continuous), and a free diffeology on any topological space (where any continuous map from a neighborhood in \mathbb{R}^n is smooth). There’s also a natural diffeology on any manifold (the one generated by taking all the charts to be plots).

The real point, though, is that a lot of standard geometric constructions that are made for manifolds also make sense for diffeological spaces, so they “support geometry”. Some things which can be defined in the context of \mathbf{Diff} include: dimension; tangent spaces; differential forms; cohomology; smooth homotopy groups.

Naturally, one can define a diffeological groupoid: this is just an internal groupoid in \mathbf{Diff} – there are diffeological spaces Ob and Mor of objects and morphisms (and of course composable pairs, Mor \times_{Ob} Mor, which, being a limit, is also in \mathbf{Diff}), and the structure maps are all smooth. These are related to diffeological bundles (defined below) in that certain groupoids can be built from bundles. The resulting groupoids all have the property of being perfect, which means that (s,t) : Mor \rightarrow Ob \times Ob is a subduction – i.e. it is onto, and the natural product diffeology on Ob \times Ob is the minimal one making this map smooth.

In fact, we need this to even define diffeological bundles, which are particular kinds of surjective maps f : X \rightarrow Y in \mathbf{Diff}. Specifically, one gets a groupoid K_f whose objects are points of Y, and where the morphisms hom(y,y') are just smooth maps from the fibre f^{-1}(y) to the fibre f^{-1}(y') (which, of course, are diffeological spaces because they are subsets of X). It’s when this groupoid is perfect that one has a bundle.

The point here is that, unlike for manifolds, we don’t have local charts, so we can’t use the definition that a bundle is “locally trivializable”, but we do have this analogous condition. In both cases, the condition implies that all the fibres are diffeomorphic to each other (in the relevant sense). Enxin also gave a few equivalent conditions, which amount to saying one gets locally trivial bundles over neighborhoods in \mathbb{R}^n when pulling back f along any plot.

So now we can at least point out that the irrational torus can be construed as a diffeological bundle – thinking of it as a quotient of a group by a subgroup, we can think of this as a bundle where X = T^2 is the total space, the base Y is the space of orbits, and the fibres are all diffeomorphic to F = \mathbb{R}_{\theta}.

The punchline of the talk is to use this as an example which illustrates the theorem that there is a diffeological version of the long exact sequence of homotopy groups:

\dots \rightarrow \pi_n^D(F) \rightarrow \pi_n^D(X) \rightarrow \pi_n^D(Y) \rightarrow \pi_{n-1}^D(F) \rightarrow \dots

Using this long exact sequence, and the fact that the (diffeological) homotopy groups for manifolds (in this case, X = T^2 and F = \mathbb{R}_{\theta}) are the same as the usual ones, one can work out the homotopy groups for the base Y, which is the quotient T^2/\mathbb{R}_{\theta}. Whereas for topological spaces, since \mathbb{R}_{\theta} is dense in T^2, the usual homotopy groups are all zero, for diffeological spaces we get a different answer. In particular, \pi_1^D(Y) = \mathbb{Z} \oplus \mathbb{Z}, a two-dimensional lattice.
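Spelling out the computation (my own sketch, assuming, as above, that \pi_n^D agrees with \pi_n on manifolds and that \mathbb{R}_{\theta} \cong \mathbb{R} is smoothly contractible), the relevant stretch of the sequence reads:

```latex
% F = R_theta is contractible, so pi_n^D(F) = 0 for all n >= 0,
% while pi_1^D(T^2) = Z + Z and pi_n^D(T^2) = 0 for n >= 2.
\cdots \to \underbrace{\pi_1^D(\mathbb{R}_{\theta})}_{0}
      \to \underbrace{\pi_1^D(T^2)}_{\mathbb{Z} \oplus \mathbb{Z}}
      \to \pi_1^D(Y)
      \to \underbrace{\pi_0^D(\mathbb{R}_{\theta})}_{0}
\quad \Longrightarrow \quad
\pi_1^D(Y) \cong \mathbb{Z} \oplus \mathbb{Z}
```

The same vanishing of the fibre’s groups forces \pi_n^D(Y) \cong \pi_n^D(T^2) = 0 for n \geq 2 as well.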

It’s interesting that this essentially agrees with what noncommutative geometry tells us about the quotient, while keeping some of our plain intuitions about “space” intact – that is, without moving whole-hog into (the opposite of) a category of noncommutative algebras. It would be interesting to know how far one can push this correspondence.

So I recently received word that this paper had been accepted for publication by Applied Categorical Structures. Since I’ll shortly be putting out another which uses its main construction to build Extended Topological Quantum Field Theories, it’s nice and appropriate to say something about that. But actually, just at the moment, I want to take a slightly different approach.

Toward the end of February, I went up to Waterloo to the Perimeter Institute, where my friend Derek Wise was visiting with Andy Randono – apparently they’re working on a project together that has something to do with Cartan Geometry, which is a subject that plays a big role in Derek’s thesis.

However, Derek was speaking in their seminar about Extended TQFT (his slides are now up on his website, and there’s also a video of the talk available). Actually, a lot of what he was talking about was work of mine, since we’re working on a project together to construct ETQFTs from Lie groups (most likely compact ones at first, since all the usual analytical problems with noncompact groups turn up here). However, I really enjoyed seeing Derek talk about it, because he has a sharper grasp than I do of how this subject appears to physicists, and the way he presented this stuff is very different from the way I usually talk about it (you can see me in the video trying to help deal with a question at the end from Rafael Sorkin and Laurent Freidel, and taking a while to correctly understand what it was, partly because of this jargon gap – I hope to get better).

So, for example, describing a TQFT in the Atiyah/Segal axiomatic formulation is fairly natural to someone who works with category theory, but Derek motivated it as a way of taking a “deeper look at the partition function” for a certain field theory. The idea is that a partition function Z for a quantum field theory associates a number to a space M, satisfying certain rules. It is usually described by some kind of integral. Typically in QFT, these are rather tricky integrals – a topological QFT has the nice feature that, since it has no local degrees of freedom, these integrals are much more tractable. Of course, this is a mathematically nice feature that comes at the expense of physical relevance, but such is life.

Anyway, the idea is that the partition function Z for an n-dimensional TQFT can be thought of as assigning, not just numbers to n-dimensional manifolds M, but something more which reduces to this in a special case. Specifically, Z assigns a Hilbert space to any codimension-1 submanifold of M, in a particular way which Derek passed over by saying it “satisfies some compatibility conditions”. For an audience of mathematicians, you can gloss over this just as quickly by saying the assignments are “functorial”, or even with more detail saying the conditions make Z a symmetric monoidal functor.

Part of the point is that these conditions are about as obvious on physical grounds as they are if you’re a category theorist. For example, the fact that composition is preserved by the functor Z can be interpreted physically as saying that the number Z(M) given by the partition function isn’t affected by how we chop up the manifold M to analyse it. The fact that Z is a monoidal functor ends up meaning that the “unit” for manifolds under unions (namely, the empty manifold with no points, which you can add to things without affecting them) gets assigned the Hilbert space \mathbb{C}, which is the unit for Hilbert spaces with respect to the tensor product \otimes. The fact that this is so means we can treat a manifold with no boundary as going from one (empty) boundary to another (empty) boundary – it therefore gets assigned a linear map from \mathbb{C} to \mathbb{C} – a number. Seeing how this linear map comes from composing pieces of the manifold is what “a deeper look at the partition function” means.

ETQFT does essentially the same thing, at one level deeper. The point is that a TQFT breaks apart a manifold by treating it as a series of pieces – manifolds with boundary, glued together at their boundaries. An ETQFT does the same to these pieces, treating them as composed of pieces – manifolds with corners – which are glued orthogonally to the gluing just mentioned. That is, there are two kinds of composition, so we’re in some sort of 2-category (bi-, or double- depending on how you formulate things). The essential point is that now, to manifolds without boundary, which are of codimension 1, we assign Hilbert spaces – and to top-dimensional manifolds WITH boundary, we assign maps of Hilbert spaces.

An ETQFT attempts to give a “deeper-still look at the partition function” by seeing how the Hilbert space arises from composition of pieces in this new direction, along boundaries of codimension 2. The way Derek describes this for physicists is to say that the ETQFT describes how that Hilbert space is “built from local data”, which he described in the usual physics language of path integrals. First of all, the conventional thing in physics is to take Z(\Sigma) for a (codimension-1) manifold \Sigma to be L^2(\mathcal{A}_0(\Sigma)/\mathcal{G}(\Sigma)) – the space of square-integrable functions on the quotient of the space \mathcal{A}_0(\Sigma) of flat G-connections on \Sigma by the action of the group of gauge transformations \mathcal{G}(\Sigma).

Given a manifold M with boundary components \Sigma and \Sigma ', the standard quantum field theory formalism to describe the map Z(M) : Z(\Sigma) \rightarrow Z(\Sigma ') given by a TQFT is to describe how it interacts with particular state-vectors in the Hilbert spaces for the source and target boundary components of M. So then:

\langle \psi | Z(M) | \phi \rangle = \int_{\mathcal{A}_0(M)/\mathcal{G}} \mathcal{D}A \overline{\psi(A|_{\Sigma '})} e^{i S([A])} \phi(A|_{\Sigma})

The point being, a flat connection A has some action on it, which depends only on its gauge equivalence class [A] (“the Lagrangian has gauge symmetry”), and it restricts to give flat connections on \Sigma and \Sigma ', to which the L^2-functions \psi and \phi are applied, to give something we can integrate. The measure \mathcal{D}A is a crucial entity here, and in general can be a real puzzle, but at least for discrete groups, it’s just a weighted counting measure which effectively gives us the groupoid cardinality of the quotient space. As for the action S, the simplest possible case just says the action of any flat connection is zero – hence this expression is just finding the (groupoid) cardinality, or more generally measuring the (stacky) volume, of the configuration space for flat connections. There are other possible actions, though.
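For a finite gauge group, this groupoid cardinality is directly computable. As an illustration (my own sketch, not from the talk): flat G-connections on the torus correspond to commuting pairs in G (the images of the two generators of \pi_1(T^2)), gauge transformations act by simultaneous conjugation, and the groupoid cardinality of the quotient comes out to #\{commuting pairs\}/|G|.

```python
from itertools import permutations
from fractions import Fraction

# Groupoid cardinality of the stack of flat G-bundles on T^2, for finite G:
# flat connections = commuting pairs (a, b) in G, the gauge group acts by
# simultaneous conjugation, and the cardinality is #{commuting pairs}/|G|.
# Illustrative sketch with G = S_3, realized as permutations of (0, 1, 2).

def compose(p, q):
    """Composite permutation p . q, with permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))    # S_3, so |G| = 6
commuting = sum(1 for a in G for b in G if compose(a, b) == compose(b, a))
Z = Fraction(commuting, len(G))     # the "volume" of the configuration space
print(Z)                            # 3
```

The answer, 3, is the number of conjugacy classes of S_3, as Burnside’s lemma predicts – a small check that the weighted count really is measuring the quotient groupoid rather than the naive quotient set.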

Derek gives an explanation of how to interpret this in terms of the “pull-push” construction, which I’ve talked about elsewhere here, including in the above paper, so right now, I’ll just pass to the next layer of the ETQFT layer cake – codimension-2. Here, there is a similar formula, which also has an interpretation in terms of a “pull-push” construction, but which can be written as a categorified path integral.

So now the \Sigma has boundary, and connects “inner” codimension-2 boundary component B_1 to “outer” boundary component B_2. Then, say, B_1 gets assigned the category of all gauge-equivariant “bundles” of Hilbert spaces on \mathcal{A}_0(B_1), rather than the space of gauge-invariant functions. (Derek carefully avoided using the term “category”, to stay physically motivated – and the term “bundle” is accurate in the case of a discrete gauge group G, but in general one has to appeal to the theory of measurable fields of Hilbert spaces, since they needn’t be locally trivial). Then given particular Hilbert bundles \mathcal{H} and \mathcal{K} on the spaces \mathcal{A}_0(B_1) and \mathcal{A}_0(B_2) respectively, we can define what Z(\Sigma) is by:

\langle \mathcal{K} | Z(\Sigma) | \mathcal{H} \rangle = \int_{\mathcal{A}_0(\Sigma)/\mathcal{G}} \mathcal{D}A \; \mathcal{K}(A|_{B_2}) \otimes T_A \otimes \mathcal{H}(A|_{B_1})

The interpretation is much like the previous formula: now we’re direct-integrating Hilbert spaces, instead of integrating complex functions – and we get a Hilbert space instead of a complex number, but this is in some sense superficial. Something any physicist would notice right away (or anyone comparing this to the previous formula) is that the exponential of the action S([A]) seems to have gone missing, to be replaced by some Hilbert space T_A. If we’re using the trivial action S \equiv 0, this is fine, but otherwise, how exactly S affects the direct integral would take some explaining. For now, let’s just say that we should think of S([A]) as being folded into either the inner product on T_A, or into the measure \mathcal{D}A: it shows up in its effect on the inner product on the Hilbert space that this direct integral produces.

Let me jump to the end of Derek’s talk here, to get at some conceptual aspect of what’s happening here. The axiomatic way of talking about ETQFT, namely Ruth Lawrence’s way, is to say we assign a 2-Hilbert space to the codimension-2 manifolds. But “2-Hilbert space” is an off-putting bit of jargon, so instead the suggestion is to replace it with “von Neumann algebra”.

The point is that 2-Hilbert spaces are thought (according to a paper by Baez, Baratin, Freidel and Wise) to be just categories of representations of vN algebras. Being a 2-Hilbert space means, for instance, that they’re additive (by direct sum), \mathbb{C}-linear (there is a vector space of intertwiners between any two representations), have duals, and so on. Moreover, they’re monoidal 2-Hilbert spaces, since there is a tensor product. Their idea is that the two notions correspond exactly. In any case, the ETQFT construction in question actually passes through a von Neumann algebra. This comes from the groupoid algebra that’s associated to a certain group action: namely, the action of the gauge group on the space of flat G-connections on the manifold M.

Then the way we can look more closely at the “structure of the partition function” is by seeing the Hilbert space associated to a codimension-1 manifold as actually being a kind of morphism of von Neumann algebras. In particular, it’s a Hilbert bimodule, which is acted on by the source algebra (say A) on the left, and the target algebra (B) on the right. This is intimately connected with the stuff I was writing about recently about Morita equivalence, and so to the 2-Hilbert space view. In particular, a Hilbert bimodule H gives an adjoint pair of linear functors (or “2-linear maps”) between the representation categories of algebras.

So shortly I’ll make a post about some papers coming out, and get back to this point…
