### category theory

Now for a more sketchy bunch of summaries of some talks presented at the HGTQGR workshop.  I’ll organize this into a few themes which appeared repeatedly and which roughly line up with the topics in the title: in this post, variations on TQFT, plus 2-group and higher forms of gauge theory; in the next post, gerbes and cohomology, plus talks on discrete models of quantum gravity and suchlike physics.

## TQFT and Variations

I start here for no better reason than the personal one that it lets me put my talk first, so I’m on familiar ground to start with, for which reason also I’ll probably give more details here than later on.  So: a TQFT is a linear representation of the category of cobordisms – that is, a (symmetric monoidal) functor $nCob \rightarrow Vect$, in the notation I mentioned in the first school post.  An Extended TQFT is a higher functor $nCob_k \rightarrow k-Vect$, representing a category of cobordisms with corners into a higher category of k-Vector spaces (for some definition of same).  The essential point of my talk is that there’s a universal construction that can be used to build one of these at $k=2$, which relies on some way of representing $nCob_2$ into $Span(Gpd)$, whose objects are groupoids, and whose morphisms in $Hom(A,B)$ are pairs of groupoid homomorphisms $A \leftarrow X \rightarrow B$.  The 2-morphisms have an analogous structure.  The point is that there’s a 2-functor $\Lambda : Span(Gpd) \rightarrow 2Vect$ which takes representations of groupoids at the level of objects; for morphisms, there is a “pull-push” operation that just uses the restricted and induced representation functors to move a representation across a span; the non-trivial (but still universal) bit is the 2-morphism map, which uses the fact that the restriction and induction functors are biadjoint, so there are units and counits to use.  A construction using gauge theory gives groupoids of connections and gauge transformations for each manifold or cobordism.  This recovers a form of the Dijkgraaf-Witten model.  In principle, though, any way of getting a groupoid (really, a stack) associated to a space functorially will give an ETQFT this way.  I finished up by suggesting what would need to be done to extend this up to higher codimension.
To go to codimension 3, one would assign an object (codimension-3 manifold) a 3-vector space which is a representation 2-category of 2-groupoids of connections valued in 2-groups, and so on.  There are some theorems about representations of n-groupoids which would need to be proved to make this work.
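Since for a finite group $G$ the groupoid of connections on a closed manifold is just $Hom(\pi_1(M), G) // G$, the untwisted Dijkgraaf-Witten invariant of a closed surface of genus $g$ is directly computable as $|\{(a_i,b_i) : \prod_i [a_i,b_i] = e\}| / |G|$.  Here is a minimal Python sketch of that count; the choices $G = S_3$ and $g = 1$ are mine, purely for illustration:

```python
from itertools import permutations

# Untwisted Dijkgraaf-Witten for a closed surface of genus g and finite
# gauge group G: Z(Sigma_g) = #{(a_1,b_1,...,a_g,b_g) : prod [a_i,b_i] = e} / |G|.
# For g = 1 the condition is just that a and b commute.

G = list(permutations(range(3)))   # elements of S_3, as permutations of {0,1,2}

def mul(p, q):
    """Compose permutations: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def dw_torus(group):
    """Z(T^2) = (number of commuting pairs) / |G|."""
    commuting = sum(1 for a in group for b in group if mul(a, b) == mul(b, a))
    return commuting / len(group)

Z = dw_torus(G)
# For the torus this equals the number of conjugacy classes of G
# (3 for S_3), matching the dimension of the DW vector space for T^2.
```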

The fact that different constructions can give groupoids for spaces was used by the next speaker, Thomas Nikolaus, whose talk described another construction that uses the $\Lambda$ I mentioned above.  This one produces “Equivariant Dijkgraaf-Witten Theory”.  The point is that one gets groupoids for spaces in a new way.  Before, we had, for a space $M$ a groupoid $\mathcal{A}_G(M)$ whose objects are $G$-connections (or, put another way, bundles-with-connection) and whose morphisms are gauge transformations.  Now we suppose that there’s some group $J$ which acts weakly (i.e. an action defined up to isomorphism) on $\mathcal{A}_G(M)$.  We think of this as describing “twisted bundles” over $M$.  This is described by a quotient stack $\mathcal{A}_G // J$ (which, as a groupoid, gets some extra isomorphisms showing where two objects are related by the $J$-action).  So this gives a new map $nCob \rightarrow Span(Gpd)$, and applying $\Lambda$ gives a TQFT.  The generating objects for the resulting 2-vector space are “twisted sectors” of the equivariant DW model.  There was some more to the talk, including a description of how the DW model can be further mutated using a cocycle in the group cohomology of $G$, but I’ll let you look at the slides for that.

Next up was Jamie Vicary, who was talking about “(1,2,3)-TQFT”, which is another term for what I called “Extended” TQFT above, but specifying that the objects are 1-manifolds, the morphisms 2-manifolds, and the 2-morphisms are 3-manifolds.  He was talking about a theorem that identifies oriented TQFTs of this sort with “anomaly-free modular tensor categories” – which is widely believed, but in fact harder to prove than commonly thought.  It’s easy enough to see that such a TQFT $Z$ corresponds to an MTC – it’s the category $Z(S^1)$ assigned to the circle.  What’s harder is showing that two such TQFTs are equivalent as functors iff the corresponding categories are equivalent.  This boils down, historically, to the difficulty of showing the category is rigid.  Jamie was talking about a project with Bruce Bartlett and Chris Schommer-Pries, whose presentation of the cobordism category (described in the school post) was the basis of their proof.

Part of it amounts to giving a description of the TQFT in terms of certain string diagrams.  Jamie kindly credited me with describing this point of view to him: that the codimension-2 manifolds in a TQFT can be thought of as “boundaries in space” – codimension-1 manifolds are either time-evolving boundaries, or else slices of space in which the boundaries live; top-dimension cobordisms are then time-evolving slices of space-with-boundary.  (This should be only a heuristic way of thinking – certainly a generic TQFT has no literal notion of “time-evolution”, though since (2+1)-dimensional quantum gravity can be seen as a TQFT, there’s at least one case where this picture could be taken literally.)  Then part of their proof involves showing that the cobordisms can be characterized by taking vector spaces on the source and target manifolds spanned by the generating objects, and finding the functors assigned to cobordisms in terms of sums over all “string diagrams” (particle worldlines, if you like) bounded by the evolving boundaries.  Jamie described this as a “topological path integral”.  Then one has to describe the string diagram calculus – rigidity follows from the “yanking” rule, for instance, and this follows from Morse theory as in Chris’ presentation of the cobordism category.

There was a little more discussion about what the various properties (proved in a similar way) imply.  One is “cloaking” – the fact that a 2-morphism which “creates a handle” is invisible to the string diagrams in the sense that it introduces a sum over all diagrams with a string “looped” around the new handle, but this sum gives a result that’s equal to the original map (in any “pivotal” tensor category, as here).

Chronologically before all these, one of the first talks on such a topic was by Rafael Diaz, on Homological Quantum Field Theory, or HLQFT for short, which is a rather different sort of construction.  Remember that Homotopy QFT, as described in my summary of Tim Porter’s school sessions, is about linear representations of what I’ll for now call $Cob(d,B)$, whose morphisms are $d$-dimensional cobordisms equipped with maps into a space $B$ up to homotopy.  HLQFT instead considers cobordisms equipped with maps taken up to homology.

Specifically, there’s some space $M$, say a manifold, with some distinguished submanifolds (possibly boundary components; possibly just embedded submanifolds; possibly even all of $M$ for a degenerate case).  Then we define $Cob_d^M$ to have objects which are $(d-1)$-manifolds equipped with maps into $M$ which land on the distinguished submanifolds (to make composition work nicely, we in fact assume they map to a single point).  Morphisms in $Cob_d^M$ are trickier, and look like $(N,\alpha, \xi)$: a cobordism $N$ in this category is likewise equipped with a map $\alpha$ from its boundary into $M$ which recovers the maps on its objects.  That $\xi$ is a homology class of maps from $N$ to $M$, which agrees with $\alpha$.  This forms a monoidal category as with standard cobordisms.  Then HLQFT is about representations of this category.  One simple case Rafael described is the dimension-1 case, where objects are (ordered sets of) points equipped with maps that pick out chosen submanifolds of $M$, and morphisms are just braids equipped with homology classes of “paths” joining up the source and target submanifolds.  Then a representation might, e.g., describe how to evolve a homology class on the starting manifold to one on the target by transporting along such a path-up-to-homology.  In higher dimensions, the evolution is naturally more complicated.

A slightly looser fit to this section is the talk by Thomas Krajewski, “Quasi-Quantum Groups from Strings” (see this) – he was talking about how certain algebraic structures arise from “string worldsheets”, which are another way to describe cobordisms.  This does somewhat resemble the way an algebraic structure (a Frobenius algebra) is related to a 2D TQFT, but here the string worldsheets are interacting with a 3-form field $H$ (the curvature of the 2-form $B$-field of string theory) and things needn’t be topological, so the result is somewhat different.

Part of the point is that quantizing such a thing gives a higher version of what happens for quantizing a moving particle in a gauge field.  In the particle case, one comes up with a line bundle (whose sections form the Hilbert space) and in the string case one comes up with a gerbe; for the particle, this involves an associated 2-cocycle, and for the string a 3-cocycle; for the particle, one ends up producing a twisted group algebra, and for the string, this is where one gets a “quasi-quantum group”.  The algebraic structures, as in the TQFT situation, come from, for instance, the “pants” cobordism, which gives a multiplication and a comultiplication (by giving maps $H \otimes H \rightarrow H$ or the reverse, where $H$ here is the object assigned to a circle, not the 3-form above).
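To see the twisted group algebra mechanism concretely: the 2-cocycle condition is exactly what makes the twisted multiplication associative, and the twist can make the algebra noncommutative even for an abelian group.  A toy Python sketch – the group $\mathbb{Z}_2 \times \mathbb{Z}_2$ and the particular cocycle are my illustrative choices, not data from the talk:

```python
from itertools import product

# Twisted group algebra for G = Z_2 x Z_2 with the 2-cocycle
# omega(g, h) = (-1)^(g2 * h1).  Basis elements multiply as
# e_g e_h = omega(g, h) e_{g+h}; the cocycle condition
# omega(g,h) omega(g+h,k) = omega(h,k) omega(g,h+k) is associativity.

Zsq = list(product(range(2), repeat=2))

def add(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def omega(g, h):
    return (-1) ** (g[1] * h[0])

def basis_mul(g, h):
    """e_g * e_h = omega(g,h) e_{g+h}: return (scalar, group element)."""
    return omega(g, h), add(g, h)

cocycle_ok = all(
    omega(g, h) * omega(add(g, h), k) == omega(h, k) * omega(g, add(h, k))
    for g, h, k in product(Zsq, repeat=3))

# The twist makes the algebra noncommutative although Z_2 x Z_2 is abelian.
noncomm = any(omega(g, h) != omega(h, g) for g, h in product(Zsq, repeat=2))
```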

There is some machinery along the way which I won’t describe in detail, except that it involves a tricomplex of forms – the gradings being form degree, the degree of a cocycle for group cohomology, and the number of overlaps.  As observed before, gerbes and their higher versions have transition functions on higher numbers of overlapping local neighborhoods than mere bundles.  (See the paper above for more)

## Higher Gauge Theory

The talks I’ll summarize here touch on various aspects of higher-categorical connections or 2-groups (though at least one I’ll put off until later).  The division between this and the section on gerbes is a little arbitrary, since of course they’re deeply connected, but I’m making some judgements about emphasis or P.O.V. here.

Apart from giving lectures in the school sessions, John Huerta also spoke on “Higher Supergroups for String Theory”, which brings “super” (i.e. $\mathbb{Z}_2$-graded) objects into higher gauge theory.  There are “super” versions of vector spaces and manifolds, which decompose into “even” and “odd” graded parts (a.k.a. “bosonic” and “fermionic” parts).  Thus there are “super” variants of Lie algebras and Lie groups, which are like the usual versions, except commutation properties have to take signs into account (e.g. a Lie superalgebra’s bracket is commutative if the product of the grades of two vectors is odd, anticommutative if it’s even).  Then there are Lie 2-algebras and 2-groups as well – categories internal to this setting.  The initial question has to do with whether one can integrate some Lie 2-algebra structures to Lie 2-group structures on a spacetime, which depends on the existence of some globally smooth cocycles.  The point is that when spacetime is of certain special dimensions, this can work, namely dimensions 3, 4, 6, and 10.  These are all 2 more than the real dimensions of the four real division algebras, $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$ and $\mathbb{O}$.  It’s in these dimensions that Lie 2-superalgebras can be integrated to Lie 2-supergroups.  The essential reason is that a certain cocycle condition will hold because of the properties of a form on the Clifford algebras that are associated to the division algebras.  (John has some related material here and here, though not about the 2-group case.)

Since we’re talking about higher versions of Lie groups/algebras, an important bunch of concepts to categorify are those of representation theory.  Derek Wise spoke on “2-Group Representations and Geometry”, based on work with Baez, Baratin and Freidel, most fully developed here, but summarized here.  The point is to describe the representation theory of Lie 2-groups, in particular geometrically.  They’re to be represented on (in general, infinite-dimensional) 2-vector spaces of some sort; here the choice is a category of measurable fields of Hilbert spaces on some measure space, called $H^X$ (intended to resemble, but not be exactly the same as, $Hilb^X$, the category of functors from the space $X$ into $Hilb$, the way Kapranov-Voevodsky 2-vector spaces can be described as $Vect^k$).  The first work on this was by Crane and Sheppeard, and also Yetter.  One point is that for 2-groups, we have not only representations and intertwiners between them, but 2-intertwiners between these.  One can describe all of these geometrically – part of which is a choice of that measure space $(X,\mu)$.

This done, we can say that a representation of a 2-group is a 2-functor $\mathcal{G} \rightarrow H^X$, where $\mathcal{G}$ is seen as a one-object 2-category.  Thinking about this geometrically, if we concretely describe $\mathcal{G}$ by the crossed module $(G,H,\rhd,\partial)$, such a representation defines an action of $G$ on $X$, and a map $X \rightarrow H^*$ into the character group, so that $X$ becomes a $G$-equivariant bundle over $H^*$.  One consequence of this description is that it becomes possible to distinguish not only irreducible representations (bundles over a single orbit) and indecomposable ones (where the fibres are particularly simple homogeneous spaces), but an intermediate notion called “irretractible” (though it’s not clear how much this provides).  An intertwining operator between reps over $X$ and $Y$ can be described in terms of a bundle of Hilbert spaces – which is itself defined over the pullback of $X$ and $Y$ seen as $G$-bundles over $H^*$.  A 2-intertwiner is a fibre-wise map between two such things.  This geometric picture specializes in various ways for particular examples of 2-groups.  A physically interesting one, which Crane and Sheppeard considered, and which is expanded on in that paper of [BBFW] up above, deals with the Poincaré 2-group, where irreducible representations live over mass-shells in Minkowski space (or rather, the dual of $H \cong \mathbb{R}^{3,1}$).

Moving on from 2-group stuff, there were a few talks related to 3-groups and 3-groupoids.  There are some new complexities that enter here, because while (weak) 2-categories are all (bi)equivalent to strict 2-categories (where things like associativity and the interchange law for composing 2-cells hold exactly), this isn’t true for 3-categories.  The best strictification result is that any 3-category is (tri)equivalent to a Gray category – where all those properties hold exactly, except for the interchange law $(\alpha \circ \beta) \cdot (\alpha ' \circ \beta ') = (\alpha \cdot \alpha ') \circ (\beta \cdot \beta ')$ for horizontal and vertical compositions of 2-cells, which is replaced by an “interchanger” isomorphism with some coherence properties.  John Barrett gave an introduction to this idea and spoke about “Diagrams for Gray Categories”, describing how to represent morphisms, 2-morphisms, and 3-morphisms in terms of higher versions of “string” diagrams involving (piecewise linear) surfaces satisfying some properties.  He also carefully explained how to reduce the dimensions in order to make them both clearer and easier to draw.  Bjorn Gohla spoke on “Mapping Spaces for Gray Categories”, but since it was essentially a shorter version of a talk I’ve already posted about, I’ll leave that for now, except to point out that it linked to the talk by Joao Faria Martins, “3D Holonomy” (though see also this paper with Roger Picken).

The point in Joao’s talk starts with the fact that we can describe holonomies for 3-connections on 3-bundles valued in Gray-groups (i.e. the maximally strict form of a general 3-group) in terms of Gray-functors $hol: \Pi_3(M) \rightarrow \mathcal{G}$.  Here, $\Pi_3(M)$ is the fundamental 3-groupoid of $M$, which turns points, paths, homotopies of paths, and homotopies of homotopies into a Gray groupoid (modulo some technicalities about “thin” or “laminated” homotopies) and $\mathcal{G}$ is a gauge Gray-group.  Just as a 2-group can be represented by a crossed module, a Gray (3-)group can be represented by a “2-crossed module” (yes, the level shift in the terminology is occasionally confusing).  This is a chain of groups $L \stackrel{\delta}{\rightarrow} E \stackrel{\partial}{\rightarrow} G$, where $G$ acts on the other groups, together with some structure maps (for instance, the Peiffer commutator for a crossed module becomes a lifting $\{ ,\} : E \times E \rightarrow L$) which all fit together nicely.  Then a tri-connection can be given locally by forms valued in the Lie algebras of these groups: $(\omega , m ,\theta)$ in  $\Omega^1 (M,\mathfrak{g} ) \times \Omega^2 (M,\mathfrak{e}) \times \Omega^3(M,\mathfrak{l})$.  Relating the global description in terms of $hol$ and the local description in terms of $(\omega, m, \theta)$ is a matter of integrating forms over the paths, surfaces, or 3-volumes that give the various $j$-morphisms of $\Pi_3(M)$.  This sort of construction of parallel transport as a functor has been developed in detail by Waldorf and Schreiber (viz. these slides, or the full paper), some time ago, which is why, thematically, they’re the next two speakers I’ll summarize.

Konrad Waldorf spoke about “Abelian Gauge Theories on Loop Spaces and their Regression”.  (For more, see two papers by Konrad on this)  The point here is that there is a relation between two kinds of theories – string theory (with $B$-field) on a manifold $M$, and ordinary $U(1)$ gauge theory on its loop space $LM$.  The relation between them goes by the name “regression” (passing from gauge theory on $LM$ to string theory on $M$), or “transgression”, going the other way.  This amounts to showing an equivalence of categories between [principal $U(1)$-bundles with connection on $LM$] and [$U(1)$-gerbes with connection on $M$].  This nicely gives a way of seeing how gerbes “categorify” bundles, since passing to the loop space – whose points are maps $S^1 \rightarrow M$ – means a holonomy functor is now looking at objects (points in $LM$) which would be morphisms in the fundamental groupoid of $M$, and morphisms which are paths of loops (surfaces in $M$ which trace out homotopies).  So things are shifted by one level.  Anyway, Konrad explained how this works in more detail, and how it should be interpreted as relating connections on loop space to the $B$-field in string theory.

Urs Schreiber kicked the whole categorification program up a notch by talking about “$\infty$-Connections and their Chern-Simons Functionals”.  So now we’re getting up into $\infty$-categories, and particularly $\infty$-toposes (see Jacob Lurie’s paper, or even his book, if so inclined, to find out what these are), and in particular a “cohesive topos”, where derived geometry can be developed (Urs suggested people look here, where a bunch of background is collected). The point is that $\infty$-toposes are good for talking about homotopy theory.  We want a setting which allows all that structure, but also allows us to do differential geometry and derived geometry.  So there’s a “cohesive” $\infty$-topos called $Smooth\infty Gpds$, of “sheaves” (in the $\infty$-topos sense) of $\infty$-groupoids on smooth manifolds.  This setting is the minimal common generalization of homotopy theory and differential geometry.

This is a higher analog of a more familiar setup: since there’s a smooth classifying space (in fact, a Lie groupoid) for $G$-bundles, $BG$, there’s an equivalence between the category $G-Bund$ of $G$-principal bundles, and $SmoothGpd(X,BG)$ (of functors into $BG$).  Moreover, there’s a similar setup with $BG_{conn}$ for bundles with connection.  This can be described topologically, or there’s also a “differential refinement” to talk about the smooth situation.  This equivalence lives within a category of (smooth) sheaves of groupoids.  For higher gauge theory, we want a higher version, as in the $Smooth \infty Gpds$ described above.  Then we should get an equivalence – in this cohesive topos – of $hom(X,B^n U(1))$ and a category of $U(1)$-$(n-1)$-gerbes.

Then the part about the  “Chern-Simons functionals” refers to the fact that CS theory for a manifold (which is a kind of TQFT) is built using an action functional that is found as an integral of the forms that describe some $U(1)$-connection over the manifold.  (Then one does a path-integral of this functional over all connections to find partition functions etc.)  So the idea is that for these higher $U(1)$-gerbes, whose classifying spaces we’ve just described, there should be corresponding functionals.  This is why, as Urs remarked in wrapping up, this whole picture has an explicit presentation in terms of forms.  Actually, in terms of Cech-cocycles (due to the fact we’re talking about gerbes), whose coefficients are taken in sheaves of complexes (this is the derived geometry part) of differential forms whose coefficients are in $L_\infty$-algebroids (the $\infty$-groupoid version of Lie algebras, since in general we’re talking about a theory with gauge $\infty$-groupoids now).

Whew!  Okay, that’s enough for this post.  Next time, wrapping up blogging the workshop, finally.

Continuing from the previous post, there are a few more lecture series from the school to talk about.

## Higher Gauge Theory

The next was John Huerta’s series on Higher Gauge Theory from the point of view of 2-groups.  John set this in the context of “categorification”, a slightly vague program of replacing set-based mathematical ideas with category-based mathematical ideas.  The general reason for this is to get an extra layer of “maps between things”, or “relations between relations”, etc. which tend to be expressed by natural transformations.  There are various ways to go about this, but one is internalization: given some sort of structure, the relevant example in this case being “groups”, one has a category ${Groups}$, and can define a 2-group as a “category internal to ${Groups}$“.  So a 2-group has a group of objects, a group of morphisms, and all the usual maps (source and target for morphisms, composition, etc.) which all have to be group homomorphisms.  It should be said that this all produces a “strict 2-group”, since the objects $G$ necessarily form a group here.  In particular, $m : G \times G \rightarrow G$ satisfies group axioms “on the nose” – which is the only way to satisfy them for a group made of the elements of a set, but for a group made of the elements of a category, one might require only that it commute up to isomorphism.  A weak 2-group might then be described as a “weak model” of the theory of groups in $Cat$, but this whole approach is much less well-understood than the strict version as one goes to general n-groups.

Now, as mentioned in the previous post, there is a 1-1 correspondence between 2-groups and crossed modules (up to equivalence): given a crossed module $(G,H,\partial,\rhd)$, there’s a 2-group $\mathcal{G}$ whose objects are $G$ and whose morphisms are $G \ltimes H$; given a 2-group $\mathcal{G}$ with objects $G$, there’s a crossed module $(G, Aut(1_G),1,m)$.  (The action $m$ acts on a morphism in such a way as to act by multiplication on its source and target).  Then, for instance, the Peiffer identity for crossed modules (see previous post) is a consequence of the fact that composition of morphisms is supposed to be a group homomorphism.
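The crossed-module axioms (equivariance of $\partial$ and the Peiffer identity) can be checked mechanically on the simplest example, the “conjugation” crossed module $(G, G, \mathrm{id}, \rhd)$ where $\rhd$ is conjugation.  Here is a Python sketch for $G = S_3$ – the choice of group is mine, purely for illustration:

```python
from itertools import permutations

# Conjugation crossed module (G, H=G, partial=id, rhd=conjugation), G = S_3.
# Axioms:
#   equivariance:  partial(g |> h)  = g partial(h) g^{-1}
#   Peiffer:       partial(h) |> h' = h h' h^{-1}

G = list(permutations(range(3)))

def mul(p, q):
    """Compose permutations: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    """Inverse permutation."""
    q = [0, 0, 0]
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

partial = lambda h: h                        # boundary map: the identity here
act = lambda g, h: mul(mul(g, h), inv(g))    # g |> h = g h g^{-1}

equivariance = all(partial(act(g, h)) == mul(mul(g, partial(h)), inv(g))
                   for g in G for h in G)
peiffer = all(act(partial(h), h2) == mul(mul(h, h2), inv(h))
              for h in G for h2 in G)
```

For this degenerate example both axioms reduce to conjugation equalling conjugation, but the same two loops check any finite crossed module presented by multiplication tables.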

Looking at internal categories in [your favourite setting here] isn’t the only way to do categorification, but it does produce some interesting examples.  Baez-Crans 2-vector spaces are defined this way (in $Vect$), and built using these are Lie 2-algebras.  Looking for a way to integrate Lie 2-algebras up to Lie 2-groups (which are internal categories in Lie groups) brings us back to the current main point.  This is the use of 2-groups to do higher gauge theory.  This requires the use of “2-bundles”.  To explain these, we can say first of all that a “2-space” is an internal category in $Spaces$ (whether that be manifolds, or topological spaces, or what-have-you), and that a (locally trivial) 2-bundle should have a total 2-space $E$, a base 2-space $M$, and a (functorial) projection map $p : E \rightarrow M$, such that there’s some open cover of $M$ by neighborhoods $U_i$ where locally the bundle “looks like” $\pi_i : U_i \times F \rightarrow U_i$, where $F$ is the fibre of the bundle.  In the bundle setting, “looks like” means “is isomorphic to” by means of isomorphisms $f_i : E_{U_i} \rightarrow U_i \times F$.  With 2-bundles, it’s interpreted as “is equivalent to” in the categorical sense, likewise by maps $f_i$.

Actually making this precise is a lot of work when $M$ is a general 2-space – even defining open covers and setting up all the machinery properly is quite hard.  This has been done by Toby Bartels in his thesis, but to keep things simple, John restricted his talk to the case where $M$ is just an ordinary manifold (thought of as a 2-space which has only identity morphisms).   Then a key point is that there’s an analog to how (principal) $G$-bundles (where $F \cong G$ as a $G$-set) are classified up to isomorphism by the first Cech cohomology of the manifold, $\check{H}^1(M,G)$.  This works because one can define transition functions on double overlaps $U_{ij} := U_i \cap U_j$, by $g_{ij} = f_i f_j^{-1}$.  Then these $g_{ij}$ will automatically satisfy the 1-cocycle condition ($g_{ij} g_{jk} = g_{ik}$ on the triple overlap $U_{ijk}$) which means they represent a cohomology class $[g] \in \check{H}^1(M,G)$.
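The “automatically” here is just the cancellation $f_j^{-1} f_j = 1$, which is worth seeing once explicitly.  This toy Python sketch fakes the local trivializations $f_i$ by constant invertible matrices (the whole setup is an illustrative assumption, not a real bundle computation):

```python
import numpy as np

# Transition functions g_ij = f_i f_j^{-1} built from fake "trivializations"
# f_i, here constant invertible 2x2 matrices.  The 1-cocycle condition
# g_ij g_jk = g_ik then holds identically, since the f_j's cancel.

rng = np.random.default_rng(0)
n_patches = 4
# random matrices shifted by 2*I, invertible with probability 1
f = [rng.standard_normal((2, 2)) + 2 * np.eye(2) for _ in range(n_patches)]

def g(i, j):
    return f[i] @ np.linalg.inv(f[j])

cocycle_ok = all(
    np.allclose(g(i, j) @ g(j, k), g(i, k))
    for i in range(n_patches)
    for j in range(n_patches)
    for k in range(n_patches)
)
```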

A comparable thing can be said for the “transition functors” for a 2-bundle – they’re defined superficially just as above, except that being functors, we can now say there’s a natural isomorphism $h_{ijk} : g_{ij}g_{jk} \rightarrow g_{ik}$, and it’s these $h_{ijk}$, defined on triple overlaps, which satisfy a 2-cocycle condition on 4-fold intersections (essentially, the two ways to compose them to collapse $g_{ij} g_{jk} g_{kl}$ into $g_{il}$ agree).  That is, we have $g_{ij} : U_{ij} \rightarrow Ob(\mathcal{G})$ and $h_{ijk} : U_{ijk} \rightarrow Mor(\mathcal{G})$ which fit together nicely.  In particular, we have an element $[h] \in \check{H}^2(M,G)$ of the second Cech cohomology of $M$: “principal $\mathcal{G}$-bundles are classified by second Cech cohomology of $M$“.  This sort of thing ties in to an ongoing theme of the later talks, the relationship between gerbes and higher cohomology – a 2-bundle corresponds to a “gerbe”, or rather a “1-gerbe”.  (The consistent terminology would have called a bundle a “0-gerbe”, but as usual, terminology got settled before the general pattern was understood).
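Written out with the crossed module $(G,H,\partial,\rhd)$ explicit, the 2-bundle transition data can be displayed as follows.  This is one common convention from the nonabelian-cocycle literature – the ordering of the $h$’s and the placement of the action vary by author, so take the signs and orderings with a grain of salt:

```latex
% Transition data for a principal \mathcal{G}-2-bundle,
% \mathcal{G} presented by the crossed module (G, H, \partial, \rhd):
g_{ij} : U_{ij} \to G, \qquad h_{ijk} : U_{ijk} \to H,
\qquad \partial(h_{ijk})\, g_{ik} = g_{ij}\, g_{jk} \quad \text{on } U_{ijk},
% and the 2-cocycle condition on quadruple overlaps
% (both ways of collapsing g_{ij} g_{jk} g_{kl} to g_{il} agree):
h_{ijk}\, h_{ikl} = \left( g_{ij} \rhd h_{jkl} \right) h_{ijl}
\quad \text{on } U_{ijkl}.
```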

Finally, having defined bundles, one usually defines connections, and so we do the same with 2-bundles.  A connection on a bundle gives a parallel transport operation for paths $\gamma$ in $M$, telling how to identify the fibres at points along $\gamma$ by means of a functor $hol : P_1(M) \rightarrow G$, thinking of $G$ as a category with one object, and where $P_1(M)$ is the path groupoid whose objects are points in $M$ and whose morphisms are paths (up to “thin” homotopy). At least, it does so once we trivialize the bundle around $\gamma$, anyway, to think of it as $M \times G$ locally – in general we need to get the transition functions involved when we pass into some other local neighborhood.  A connection on a 2-bundle is similar, but tells how to parallel transport fibres not only along paths, but along homotopies of paths, by means of $hol : P_2(M) \rightarrow \mathcal{G}$, where $\mathcal{G}$ is seen as a 2-category with one object, and $P_2(M)$ now has 2-morphisms which are (essentially) homotopies of paths.

Just as connections can be described by 1-forms $A$ valued in $Lie(G)$, which give $hol$ by integrating, a similar story exists for 2-connections: now we need a 1-form $A$ valued in $Lie(G)$ and a 2-form $B$ valued in $Lie(H)$.  These need to satisfy some relations, essentially that the curvature of $A$ has to be controlled by $B$.   Moreover, that $B$ is related to the $B$-field of string theory, as I mentioned in the post on the pre-school… But really, this is telling us about the Lie 2-algebra associated to $\mathcal{G}$, and how to integrate it up to the group!
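The relation “the curvature of $A$ has to be controlled by $B$” is usually stated as the vanishing of the so-called fake curvature, which is what makes surface holonomy well defined.  In one common sign convention (conventions differ between authors):

```latex
% Local 2-connection data for the crossed module (G, H, \partial, \rhd):
A \in \Omega^1(U, \mathfrak{g}), \qquad B \in \Omega^2(U, \mathfrak{h}),
% fake flatness, required for a well-defined surface transport
% (some authors write F_A + \partial(B) = 0 instead):
F_A = \partial(B), \qquad \text{where } F_A = dA + \tfrac{1}{2}[A, A].
```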

## Infinite Dimensional Lie Theory and Higher Gauge Theory

This series of talks by Christoph Wockel returns us to the question of “integrating up” to a Lie group $G$ from a Lie algebra $\mathfrak{g} = Lie(G)$, which is seen as the tangent space of $G$ at the identity.  This is a well-understood, well-behaved phenomenon when the Lie algebras happen to be finite dimensional.  Indeed the classification theorem for the classical Lie groups can be got at in just this way: a combinatorial way to characterize Lie algebras using Dynkin diagrams (which describe the structure of some weight lattice), followed by a correspondence between Lie algebras and Lie groups.  But when the Lie algebras are infinite dimensional, this just doesn’t have to work.  It may be impossible to integrate a Lie algebra up to a full Lie group: instead, one can only get a little neighborhood of the identity.  The point of such infinite-dimensional groups, and ultimately their representation theory, is to deal with string groups that have to do with motions of extended objects.  Christoph Wockel was describing a result which says that, going to 2-groups, this problem can be overcome.  (See the relevant paper here.)

The first lecture in the series presented some background on a setting for infinite dimensional manifolds.  There are various approaches, a popular one being Frechet manifolds, but in this context, the somewhat weaker notion of locally convex spaces is sufficient.  These are “locally modelled” by (infinite dimensional) locally convex vector spaces, the way finite dimensional manifolds are locally modelled by Euclidean space.  Being locally convex is enough to allow them to support a lot of differential calculus: one can find straight-line paths, locally, to define a notion of directional derivative in the direction of a general vector.  Using this, one can build up definitions of differentiable and smooth functions, derivatives, and integrals, just by looking at the restrictions to all such directions.  Then there’s a fundamental theorem of calculus, a chain rule, and so on.

At this point, one has plenty of differential calculus, and it becomes interesting to bring in Lie theory.  A Lie group is defined as a group object in the category of manifolds and smooth maps, just as in the finite-dimensional case.  Some infinite-dimensional Lie groups of interest would include: $G = Diff(M)$, the group of diffeomorphisms of some compact manifold $M$; and the group of smooth functions $G = C^{\infty}(M,K)$ from $M$ into some (finite-dimensional) Lie group $K$ (perhaps just $\mathbb{R}$), with the usual pointwise multiplication.  These are certainly groups, and one handy fact about such groups is that, if they have a manifold structure near the identity, on some subset that generates $G$ as a group in a nice way, you can extend the manifold structure to the whole group.  And indeed, that happens in these examples.

Well, next we’d like to know if we can, given an infinite dimensional Lie algebra $X$, “integrate up” to a Lie group – that is, find a Lie group $G$ for which $X \cong T_eG$ is the “infinitesimal” version of $G$.  One way this arises is from central extensions.  A central extension of a Lie group $G$ by $Z$ is an exact sequence $Z \hookrightarrow \hat{G} \twoheadrightarrow G$ where (the image of) $Z$ is in the centre of $\hat{G}$.  The point here is that $\hat{G}$ extends $G$.  This setup makes $\hat{G}$ a principal $Z$-bundle over $G$.

Now, finding central extensions of Lie algebras is comparatively easy, and given a central extension of Lie groups, a central extension of Lie algebras always falls out via the induced maps.  There will be an exact sequence of Lie algebras, and now the special condition is that there must exist a continuous section of the second map.  The question is to go the other way: given one of these, get back to an extension of Lie groups.  The problem of finding extensions of $G$ by $Z$ can be seen, in particular, as a problem of finding a bundle with connection having specified curvature, which brings us back to gauge theory.  One type of extension is the universal cover of $G$, which appears as $\pi_1(G) \hookrightarrow \hat{G} \twoheadrightarrow G$, so that the fibre is $\pi_1(G)$.

In general, whether an extension can exist comes down to a question about a cocycle: that is, whether there’s a function $f : G \times G \rightarrow Z$ which is locally smooth (i.e. smooth in some neighborhood in $G$), and is a cocycle (so that $f(g,h) + f(gh,k) = f(g,hk) + f(h,k)$) – by the same sorts of arguments we’ve already seen a bit of.  For this reason, central extensions are classified by the cohomology group $H^2(G,Z)$.  The cocycle enables a “twisting” of the multiplication associated to a nontrivial loop in $G$, and is used to construct $\hat{G}$ (by specifying how multiplication on $G$ lifts to $\hat{G}$).  Given a 2-cocycle $\omega$ at the Lie algebra level (easier to find), one would like to lift it up to the Lie group.  It turns out this is possible if the period homomorphism $per_{\omega} : \pi_2(G) \rightarrow Z$ – which takes a chain $[\sigma]$ (with $\sigma : S^2 \rightarrow G$) to the integral of the original cocycle over it, $\int_{\sigma} \omega$ – lands in a discrete subgroup of $Z$.  A popular example of this is when $Z$ is just $\mathbb{R}$, and the discrete subgroup is $\mathbb{Z}$ (or, similarly, $U(1)$ and $1$ respectively).  This business of requiring a cocycle to be integral in this way is sometimes called a “prequantization” problem.
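The cocycle condition can be checked directly in the simplest nontrivial case (my choice of example, not from the talk): the Heisenberg-type cocycle $f((a,b),(c,d)) = ad$ on $G = \mathbb{Z}^2$ with values in $Z = \mathbb{Z}$, which is exactly what makes the twisted multiplication on $\hat{G} = Z \times G$ associative, giving the discrete Heisenberg group.

```python
import itertools, random

# A sketch (my example, not from the talk): the Heisenberg-type 2-cocycle
# f((a,b),(c,d)) = a*d on the abelian group G = Z^2, valued in Z = Z.
def f(g, h):
    (a, b), (c, d) = g, h
    return a * d

def mult_G(g, h):
    return (g[0] + h[0], g[1] + h[1])

random.seed(0)
pts = [(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(5)]

# The cocycle condition f(g,h) + f(gh,k) = f(g,hk) + f(h,k):
for g, h, k in itertools.product(pts, repeat=3):
    assert f(g, h) + f(mult_G(g, h), k) == f(g, mult_G(h, k)) + f(h, k)

# It is exactly this condition that makes the twisted multiplication on
# \hat{G} = Z x G, i.e. (z1,g1)(z2,g2) = (z1 + z2 + f(g1,g2), g1 g2),
# associative -- so \hat{G} really is a group extending G:
def mult_hat(x, y):
    (z1, g1), (z2, g2) = x, y
    return (z1 + z2 + f(g1, g2), mult_G(g1, g2))

elts = [(z, p) for z, p in zip(range(5), pts)]
for x, y, z in itertools.product(elts, repeat=3):
    assert mult_hat(mult_hat(x, y), z) == mult_hat(x, mult_hat(y, z))
```

Dropping the cocycle condition (say, replacing $f$ by $f((a,b),(c,d)) = a^2 d$) breaks the associativity check, which is a quick way to see that the two conditions are really the same thing.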

So suppose we wanted to make the “2-connected cover” $\pi_2(G) \hookrightarrow \pi_2(G) \times_{\gamma} G \twoheadrightarrow G$ as a central extension: since $\pi_2(G)$ will be abelian, this is conceivable.  If the dimension of $G$ is finite, this is trivial (since $\pi_2(G) = 0$ in finite dimensions), which is why we need some theory of infinite-dimensional manifolds.  Moreover, though, this may not work in the context of groups: the $\gamma$ in the extension $\pi_2(G) \times_{\gamma} G$ above needs to be a “twisting” of associativity, not multiplication, being lifted from $G$.  Such twistings come from the THIRD cohomology of $G$ (see here, e.g.), and describe the structure of 2-groups (or crossed modules, whichever you like).  In fact, the solution (go read the paper for more if you like) is to define a notion of central extension for 2-groups (essentially the same as the usual definition, but with maps of 2-groups, or crossed modules, everywhere).  Since a group is a trivial kind of 2-group (with only trivial automorphisms of any element), the usual notion of central extension turns out to be a special case.  Then by thinking of $\pi_2(G)$ and $G$ as crossed modules, one can find a central extension which is like the 2-connected cover we wanted – though it doesn’t work as an extension of groups because we think of $G$ as the base group of the crossed module, and $\pi_2(G)$ as the second group in the tower.

The pattern of moving to higher group-like structures, higher cohomology, and obstructions to various constructions ran all through the workshop, and carried on in the next school session…

## Higher Spin Structures in String Theory

Hisham Sati gave just one school-lecture in addition to his workshop talk, but it was packed with a lot of material.  This is essentially about cohomology, and about the structures on manifolds whose obstructions cohomology groups describe.  The background part of the lecture referenced this book by Friedrich, and the newer parts described some of Sati’s own work, in particular a couple of papers with Schreiber and Stasheff (also see this one).

The basic point here is that, for physical reasons, we’re often interested in putting some sort of structure on a manifold, which is really best described in terms of a bundle.  For instance, a connection or spin connection on spacetime lets us transport vectors or spinors, respectively, along paths, which in turn lets us define derivatives.  These two structures really belong on vector bundles or spinor bundles.  Now, if these bundles are trivial, then one can make the connections on them trivial as well by gauge transformation.  So having nontrivial bundles really makes this all more interesting.  However, this isn’t always possible, and so one wants to know the obstruction to being able to do it.  This is typically a class in one of the cohomology groups of the manifold – a characteristic class.  There are various examples: Chern classes, Pontrjagin classes, Stiefel-Whitney classes, and so on, each of which comes in various degrees $i$.  Each one corresponds to a different coefficient group for the cohomology groups – in these examples, the groups $U$ and $O$ which are the limits of the unitary and orthogonal groups (such as $O := O(\infty) \supset \dots \supset O(2) \supset O(1)$).

The point is that these classes are obstructions to building certain structures on the manifold $X$ – which amounts to finding sections of a bundle.  So for instance, the first Stiefel-Whitney class $w_1(E)$ of a bundle $E$ is related to orientations, coming from cohomology with coefficients in $O(n)$.  Orientations for the manifold $X$ can be described in terms of its tangent bundle, which is an $O(n)$-bundle (tangent spaces carry an action of the rotation group).  Consider $X = S^1$, where in fact $O(1) \simeq \mathbb{Z}_2$.  The group $H^1(S^1, \mathbb{Z}_2)$ has two elements, and there are two types of line bundle on the circle $S^1$: ones with a nowhere-zero section, like the trivial bundle; and ones without, like the Moebius strip.  The circle is orientable, because its tangent bundle is of the first sort.
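That $H^1(S^1, \mathbb{Z}_2)$ has exactly two elements can even be counted by brute force, in a tiny Čech model (my own sketch, not from the lecture): cover the circle by arcs, assign a $\mathbb{Z}_2$ transition sign to each overlap, and quotient cocycles by coboundaries.

```python
import itertools

# Brute-force Cech computation (my own toy model): cover S^1 by n arcs
# U_0,...,U_{n-1}, with one connected overlap between consecutive arcs.
# A Z_2-valued 1-cochain assigns a transition sign t_i to the overlap
# U_i n U_{i+1}; with no triple overlaps, every 1-cochain is a cocycle.
# Coboundaries come from 0-cochains g on the arcs: (dg)_i = g_i + g_{i+1}.
n = 4
cochains = list(itertools.product([0, 1], repeat=n))
coboundaries = {
    tuple((g[i] + g[(i + 1) % n]) % 2 for i in range(n))
    for g in itertools.product([0, 1], repeat=n)
}

# Two cocycles are cohomologous iff they differ by a coboundary; pick a
# canonical representative of each coset and count the classes.
classes = set()
for c in cochains:
    rep = min(tuple((c[i] + b[i]) % 2 for i in range(n)) for b in coboundaries)
    classes.add(rep)

# Exactly two classes: trivial bundle vs. Moebius bundle, distinguished
# by the total parity (product of signs) around the circle.
assert len(classes) == 2
```

The invariant separating the two classes is the product of the transition signs around the loop, which is exactly the "has a nowhere-zero section or not" dichotomy above.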

Generally, an orientation can be put on $X$ if the tangent bundle, as a map $f : X \rightarrow B(O(n))$, can be lifted to a map $\tilde{f} : X \rightarrow B(SO(n))$ – that is, it’s “secretly” an $SO(n)$-bundle – the special orthogonal group respects orientation, which is what the determinant measures.  Its two values, $\pm 1$, are what’s behind the two classes of bundles.  (In short, this story relates to the exact sequence $1 \rightarrow SO(n) \rightarrow O(n) \stackrel{det}{\rightarrow} O(1) = \mathbb{Z}_2 \rightarrow 1$; in just the same way we have big groups $SO$, $Spin$, and so forth.)

So spin structures have a story much like the above, but where the exact sequence $1 \rightarrow \mathbb{Z}_2 \rightarrow Spin(n) \rightarrow SO(n) \rightarrow 1$ plays a role – the spin groups are the universal covers (which are all double-sheeted covers) of the special rotation groups.  A spin structure on some $SO(n)$ bundle $E$, let’s say represented by $f : X \rightarrow B(SO(n))$ is thus, again, a lifting to $\tilde{f} : X \rightarrow B(Spin(n))$.  The obstruction to doing this (the thing which must be zero for the lifting to exist) is the second Stiefel-Whitney class, $w_2(E)$.  Hisham Sati also explained the example of “generalized” spin structures in these terms.  But the main theme is an analogous, but much more general, story for other cohomology groups as obstructions to liftings of some sort of structures on manifolds.  These may be bundles, for the lower-degree cohomology, or they may be gerbes or n-bundles, for higher-degree, but the setup is roughly the same.

The title’s term “higher spin structures” comes from the fact that we’ve so far had a tower of classifying spaces (or groups), $B(O) \leftarrow B(SO) \leftarrow B(Spin)$, and so on.  Then the problem of putting various sorts of structures on $X$ has been turned into the problem of lifting a map $f : X \rightarrow B(O)$ up this tower.  At each point, the obstruction to lifting is some cohomology class with coefficients in the groups ($O$, $SO$, etc.).  So when are these structures interesting?

This turns out to bring up another theme, which is that of special dimensions – it’s just not true that the same phenomena happen in every dimension.  In this case, this has to do with the homotopy groups of $O$ and its cousins.  So it turns out that the homotopy group $\pi_k(O)$ (which is the same as $\pi_k(O_n)$ as long as $n$ is bigger than $k$) follows a pattern, where $\pi_k(O) = \mathbb{Z}_2$ if $k \equiv 0, 1 \pmod 8$, $\pi_k(O) = \mathbb{Z}$ if $k \equiv 3, 7 \pmod 8$, and $\pi_k(O) = 0$ otherwise.  The fact that this pattern repeats mod-8 is one form of the (real) Bott Periodicity theorem.  These homotopy groups reflect that, wherever there’s nontrivial homotopy in some dimension, there’s an obstruction to contracting maps into $O$ from such a sphere.

All of this plays into the question of what kinds of nontrivial structures can be put on orthogonal bundles on manifolds of various dimensions.  In the dimensions where these homotopy groups are non-trivial, there’s an obstruction to the lifting, and therefore some interesting structure one can put on $X$ which may or may not exist.  Hisham Sati spoke of “killing” various homotopy groups – meaning, as far as I can tell, imposing conditions which get past these obstructions.  In string theory, his application of interest, one talks of “anomaly cancellation” – an anomaly being the obstruction to making these structures.  The first part of the punchline is that, since these are related to nontrivial cohomology groups, we can think of them in terms of defining structures on n-bundles or gerbes.  These structures are, essentially, connections – they tell us how to parallel-transport objects of various dimensions.  It turns out that the $\pi_k$ homotopy group is related to parallel transport along $(k-1)$-dimensional surfaces in $X$, which can be thought of as the world-sheets of $(k-2)$-dimensional “particles” (or rather, “branes”).

So, for instance, the fact that $\pi_1(O)$ is nontrivial means there’s an obstruction to a lifting in the form of a class in $H^2(X,\mathbb{Z}_2)$, which has to do with spin structure – as above.  “Cancelling” this “anomaly” means that for a theory involving such a spin structure to be well-defined, this characteristic class for $X$ must be zero.  The fact that $\pi_3(O) = \mathbb{Z}$ is nontrivial means there’s an obstruction to a lifting in the form of a class in $H^4(X, \mathbb{Z})$.  This has to do with “string bundles”, where the string group is a higher analog of $Spin$ in exactly the sense we’ve just described.  If such a lifting exists, then there’s a “string-structure” on $X$ which is compatible with the spin structure we lifted (and with the orientation a level below that).  Similarly, $\pi_7(O) = \mathbb{Z}$ being nontrivial, by way of an obstruction in $H^8$, means there’s an interesting notion of “five-brane” structure, and a $Fivebrane$ group, and so on.  Personally, I think of these as giving a geometric interpretation for what the higher cohomology groups actually mean.

A slight refinement of the above, and actually more directly related to “cancellation” of the anomalies, is that these structures can be defined in a “twisted” way: given a cocycle in the appropriate cohomology group, we can ask that a lifting exist, not on the nose, but as a diagram commuting only up to a higher cell, which is exactly given by the cocycle.  I mentioned, in the previous section, a situation where the cocycle gives an associator, so that instead of being exactly associative, a structure has a “twisted” associativity.  This is similar, except we’re twisting the condition that makes a spin structure (or higher spin structure) well-defined.  So if $X$ has the wrong characteristic class, we can only define one of these twisted structures at that level.

This theme of higher cohomology and gerbes, and their geometric interpretation, was another one that turned up throughout the talks in the workshop…

And speaking of that: coming up soon, some descriptions of the actual workshop.

So there’s a lot of preparations going on for the workshop HGTQGR coming up next week at IST, and the program(me) is much more developed – many of the talks are now listed, though the schedule has yet to be finalized.  This week we’ll be having a “pre-school school” to introduce the local mathematicians to some of the physics viewpoints that will be discussed at the workshop – Aleksandar Mikovic will be introducing Quantum Gravity (from the point of view of the loop/spin-foam approach), and Sebastian Guttenberg will be giving a mathematician’s introduction to String theory.

These are by no means the only approaches physicists have taken to the problem of finding a theory that incorporates both General Relativity and Quantum Field Theory.  They are, however, two approaches where lots of work has been done, and which appear to be amenable to using the mathematical tools of (higher) category theory which we’re going to be talking about at the workshop.  These are “higher gauge theory”, which very roughly is the analog of gauge theory (which includes both GR and QFT) using categorical groups, and TQFT, which is a very simple type of quantum field theory that has a natural description in terms of categories, which can be generalized to higher categories.

I’ll probably take a few posts after the workshop to write these up, along with the many other talks and mini-courses we’ll be having, but right now, I’d like to say a little bit about another talk we had here recently.  Actually, the talk was in Porto, but several of us at IST in Lisbon attended by a videoconference.  This was the first time I’ve seen this done for a colloquium-style talk, though I did once take a course in General Relativity from Eric Poisson that was split between U of Waterloo and U of Guelph.  I thought it was a great idea then, and it worked quite well this time, too.  This is the way of the future – and unfortunately it probably will be for some time to come…

Anyway, the talk in question was by Tomasz Brzezinski, about “Synthetic Non-Commutative Geometry” (link points to the slides).  The point here is to take two different approaches to extending differential geometry (DG) and combine the two insights.  The “Synthetic” part refers to synthetic differential geometry (SDG), which is a program for doing DG in a general topos.  One aspect of this is that in a topos where the Law of the Excluded Middle doesn’t apply, it’s possible for the real-numbers object to have infinitesimals: that is, elements which are smaller than any positive element, but bigger than zero.  This lets one take things which have to be treated in a roundabout way in ordinary DG, like $dx$, and take them at face value – as an infinitesimal change in $x$.  It also means doing geometry in a completely constructive way.

However, these aspects aren’t so important here.  The important fact about it here is that it takes a theory originally defined in terms of sets, or topological spaces – that is, in the toposes $Sets$ or $Top$ – and transplants it to another category.  This is because Brzezinski’s goal was to do something similar for a different extension of DG, namely non-commutative geometry (NCG).  This is a generalisation of DG which is based on the equivalence $CommAlg^{op} \simeq lCptHaus$ between the categories of commutative $C^{\star}$-algebras (and algebra maps, read “backward” as morphisms in $CommAlg^{op}$), and that of locally compact Hausdorff spaces (which, for objects, equates a space $X$ with the algebra $C(X)$ of continuous functions on it, and an algebra $A$ with its spectrum $Spec(A)$, the space of maximal ideals).  The generalization to NCG is to take structures defined for $lCptHaus$ that create DG, and make similar definitions in the category $Alg^{op}$, of not-necessarily-commutative $C^{\star}$-algebras.
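A finite shadow of this space/algebra duality can be checked directly (my own sketch, not from the talk): for a finite set $X$, the commutative algebra $A = C(X)$ of functions on $X$ has exactly $|X|$ characters (unital algebra maps $A \rightarrow \mathbb{C}$), and they are precisely the evaluations at points – so the "spectrum" of the algebra recovers the space.

```python
import itertools

# Finite Gelfand-style duality (my own sketch): a character of C(X) is
# determined by where it sends the indicator idempotents delta_x.  Each
# must go to an idempotent of C (so 0 or 1), and unitality forces the
# images to sum to 1 -- hence exactly |X| characters.
X = [0, 1, 2]
characters = [
    assignment
    for assignment in itertools.product([0, 1], repeat=len(X))
    if sum(assignment) == 1
]
assert len(characters) == len(X)

def phi(assignment, f):
    # f : X -> numbers, represented as a tuple of its values.
    return sum(a * fx for a, fx in zip(assignment, f))

f = (2, 5, 7)
g = (1, 3, 4)
fg = tuple(a * b for a, b in zip(f, g))     # pointwise product in C(X)
for assignment in characters:
    x = assignment.index(1)
    assert phi(assignment, f) == f[x]       # character = evaluation at x
    assert phi(assignment, fg) == phi(assignment, f) * phi(assignment, g)
```

NCG starts exactly where this picture breaks: for a noncommutative algebra there is no honest set of points to recover, but one keeps the algebraic definitions anyway.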

This category is the one which plays the role of the topos $Top$.  It isn’t a topos, though: it’s some sort of monoidal category.  And this is what “synthetic NCG” is about: taking the definitions used in NCG and reproducing them in a generic monoidal category (to be clear, a braided monoidal category).

The way he illustrated this is by explaining what a principal bundle would be in such a generic category.

To begin with, we can start by giving a slightly nonstandard definition of the concept in ordinary DG: a principal $G$-bundle $P$ is a manifold with a free action of a (compact Lie) group $G$ on it.  The point is that this always looks like a “base space” manifold $B$, with a projection $\pi : P \rightarrow B$ so that the fibre at each point of $B$ looks like $G$.  This amounts to saying that $\pi$ is a coequalizer:

$P \times G \rightrightarrows P \stackrel{\pi}{\rightarrow} B$

where the two maps from $P \times G$ to $P$ are (a) the action, and (b) the projection onto $P$.  (This universal property says that $\pi$ has the same composite with both maps, and that any other map $\phi$ with the same property factors uniquely through $\pi$.)  Another equivalent way to say this is that since $P \times G$ has two maps into $P$, it has a map into the pullback $P \times_B P$ (the pullback of two copies of $P \stackrel{\pi}{\rightarrow} B$), and the claim is that this map is actually an isomorphism.
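The pullback formulation can be checked in a toy finite example (my own, not from the talk): take $P = \mathbb{Z}_4$ with $G = \mathbb{Z}_2$ acting freely by $p \mapsto p + 2g$, and $B = \mathbb{Z}_2$ with $\pi(p) = p \bmod 2$; then $(p,g) \mapsto (p, p \cdot g)$ really is a bijection $P \times G \rightarrow P \times_B P$.

```python
import itertools

# Toy finite "principal bundle" (my example): P = Z_4, G = Z_2 acting
# freely by p |-> p + 2g, base B = Z_2 with projection pi(p) = p mod 2.
P = range(4)
G = range(2)

def act(p, g):
    return (p + 2 * g) % 4

def pi(p):
    return p % 2

# The comparison map P x G -> P x_B P, (p, g) |-> (p, p.g):
image = {(p, act(p, g)) for p, g in itertools.product(P, G)}
fibre_product = {(p, q) for p, q in itertools.product(P, P) if pi(p) == pi(q)}

assert image == fibre_product            # surjective onto P x_B P
assert len(image) == len(P) * len(G)     # injective: the action is free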

The main points here are that (a) we take this definition in terms of diagrams and abstract it out of the category $Top$, and (b) when we do so, in general the products will be tensor products.

In particular, this means we need to have a general definition of a group object $G$ in any braided monoidal category (to know what $G$ is supposed to be like).  We reproduce the usual definition of a group object, so that $G$ must come equipped with a “multiplication” map $m : G \otimes G \rightarrow G$, an “inverse” map $\iota : G \rightarrow G$ and a “unit” map $u : I \rightarrow G$, where $I$ is the monoidal unit (which takes the role of the terminal object in a topos like $Top$, the unit for $\times$).  These need to satisfy the usual properties, such as the monoid property for multiplication:

$m \circ (m \otimes id_G) = m \circ (id_G \otimes m) : G \otimes G \otimes G \rightarrow G$

(usually given as a diagram, but I’m being lazy).

The big “however” is this: in $Sets$ or $Top$, any object $X$ is always a comonoid in a canonical way, and we use this implicitly in defining some of the properties we need.  In particular, there’s always the diagonal map $\Delta : X \rightarrow X \times X$ which satisfies the dual of the monoid property:

$(id_X \times \Delta) \circ \Delta = (\Delta \times id_X) \circ \Delta$

There’s also a unique counit $\epsilon : X \rightarrow \star$, the map into the terminal object, which makes $(X,\Delta,\epsilon)$ a counital comonoid automatically.  But in a general braided monoidal category, we have to impose as a condition that our group object also be equipped with $\Delta : G \rightarrow G \otimes G$ and $\epsilon : G \rightarrow I$ making it a counital comonoid.  We need this property to even be able to make sense of the inverse axiom (which this time I’ll do as a diagram):

This diagram uses not only $\Delta$ but also the braiding map $\sigma_{G,G} : G \otimes G \rightarrow G \otimes G$ (part of the structure of the braided monoidal category which, in $Top$ or $Sets$, is just the “switch” map).  Now, in fact, since any object in $Sets$ or $Top$ is automatically a comonoid, we’ll require that this structure be given for anything we look at: the analog of spaces (like $P$ above), or our group object $G$.  For the group object, we also must, in general, require something which comes for free in the topos world and therefore generally isn’t mentioned in the definition of a group.  Namely, the comonoid and monoid aspects of $G$ must get along.  (This comes for free in a topos essentially because the comonoid structure is given canonically for all objects.)  This means:

For a group in $Sets$ or $Top$, this essentially just says that the two ways we can go from $(x,y)$ to $(xy,xy)$ (duplicate, swap, then multiply, or on the other hand multiply then duplicate) are the same.

All these considerations about how honest-to-goodness groups are secretly also comonoids do explain why corresponding structures in noncommutative geometry seem to have more elaborate definitions: they have to explicitly say things that come for free in a topos.  So, for instance, a group object in the above sense in the braided monoidal category $Vect = (Vect_{\mathbb{F}}, \otimes_{\mathbb{F}}, \mathbb{F}, flip)$ is a Hopf algebra.  This is a nice canonical choice of category.  Another is the opposite category $Vect^{op}$ – this is a standard choice in NCG, since spaces are supposed to be algebras, which would then be given the comonoid structure we demanded.
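In the simplest case this is easy to check by hand (my own standard example, not from the talk): the group algebra $k[\mathbb{Z}_3]$, described on its group-like basis elements $g^i$, where the multiplication, comultiplication, counit, and "inverse" map (the antipode) satisfy all the axioms above.

```python
# Sketch of "group object in Vect = Hopf algebra" on the group-like basis
# of the group algebra k[Z_n] (my own example).  Structure maps on basis
# indices: m(g^i (x) g^j) = g^{i+j}, Delta(g^i) = g^i (x) g^i,
# eps(g^i) = 1, and the antipode ("inverse") S(g^i) = g^{-i}.
n = 3
def m(i, j):  return (i + j) % n          # multiplication
def delta(i): return (i, i)               # comultiplication: group-likes
def eps(i):   return 1                    # counit
def S(i):     return (-i) % n             # antipode

e = 0                                     # identity element (image of u)
for i in range(n):
    # Inverse (antipode) axiom  m o (S (x) id) o Delta = u o eps:
    l, r = delta(i)
    assert m(S(l), r) == e
    for j in range(n):
        # Monoid/comonoid compatibility: "multiply then duplicate" equals
        # "duplicate, swap the middle factors, then multiply":
        li, ri = delta(i)
        lj, rj = delta(j)
        assert delta(m(i, j)) == (m(li, lj), m(ri, rj))
```

Of course this only tests the axioms on group-like basis elements, which span the algebra linearly; the general Hopf algebra axioms are the linear extensions of exactly these identities.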

So now once we know all this, we can reproduce the diagrammatic definition of a principal $G$-bundle above: just replace the product $\times$ with the monoidal operation $\otimes$, the terminal object by $I$, and so forth.  The diagrams are understood to be diagrams of comonoids in our braided monoidal category.  In particular, we have an action $\rho : P \otimes G \rightarrow P$, which is compatible with the $\Delta$ maps – so in $Vect$ we would say that a noncommutative principal $G$-bundle $P$ is a right-module coalgebra over the Hopf algebra $G$.  We can likewise take this (in a suitably abstract sense of “algebra” or “module”) to be the definition in any braided monoidal category.

To have the “base space” $B$, there needs to be a coequalizer of:

$\rho, (id_P \otimes \epsilon) : P \otimes G \rightrightarrows P \stackrel{\pi}{\rightarrow} B$

The “freeness” condition for the action is likewise defined using a monoidal-category version of the pullback (fibre product) $P \times_B P$.

This was as far as Brzezinski took the idea of synthetic NCG in this particular talk, but the basic idea seems quite nice.  In SDG, one can define all sorts of differential geometric structures synthetically, that is, for a general topos: for example, Gonzalo Reyes has gone and defined the Einstein field equations synthetically.  Presumably, a lot of what’s done in NCG could also be done in this synthetic framework, and transplanted to other categories than the usual choices.

Brzezinski said he was mainly interested in the “usual” choices of category, $Vect$ and $Vect^{op}$ – so for instance in $Vect^{op}$, a “principal $G$-bundle” is what’s called a Hopf-Galois extension.  Roger Picken did, however, ask an interesting question about other possible candidates for the category to work in.  Given that one wants a braided monoidal category, a natural one to look at is the category whose morphisms are braids.  This one, as a matter of fact, isn’t quite enough (there’s no braid $m : n \otimes n \rightarrow n$, because this would be a braid with $2n$ strands in and $n$ strands out – which is impossible).  But some sort of category of tangles might make an interestingly abstract setting in which to see what NCG looks like.  So far, this doesn’t seem to have been done as far as I can see.

Marco Mackaay recently pointed me at a paper by Mikhail Khovanov, which describes a categorification of the Heisenberg algebra $H$ (or anyway its integral form $H_{\mathbb{Z}}$) in terms of a diagrammatic calculus.  This is very much in the spirit of the Khovanov-Lauda program of categorifying Lie algebras, quantum groups, and the like.  (There’s also another one by Sabin Cautis and Anthony Licata, following up on it, which I fully intend to read but haven’t done so yet. I may post about it later.)

Now, as alluded to in some of the slides from my recent talks, Jamie Vicary and I have been looking at a slightly different way to answer this question, so before I talk about the Khovanov paper, I’ll say a tiny bit about why I was interested.

## Groupoidification

The Weyl algebra (or the Heisenberg algebra – the difference being whether the commutation relations that define it give real or imaginary values) is interesting for physics-related reasons, being the algebra of operators associated to the quantum harmonic oscillator.  The particular approach to categorifying it that I’ve worked with goes back to something that I wrote up here, and as far as I know, originally was suggested by Baez and Dolan here.  This categorification is based on “stuff types” (Jim Dolan’s term, based on “structure types”, a.k.a. Joyal’s “species”).  It’s an example of the groupoidification program, the point of which is to categorify parts of linear algebra using the category $Span(Gpd)$.  This has objects which are groupoids, and morphisms which are spans of groupoids: pairs of maps $G_1 \leftarrow X \rightarrow G_2$.  Since I’ve already discussed the background here before (e.g. here and to a lesser extent here), and the papers I just mentioned give plenty more detail (as does “Groupoidification Made Easy“, by Baez, Hoffnung and Walker), I’ll just mention that this is actually more naturally a 2-category (maps between spans are maps $X \rightarrow X'$ making everything commute).  It’s got a monoidal structure, is additive in a fairly natural way, has duals for morphisms (by reversing the orientation of spans), and more.  Jamie Vicary and I are both interested in the quantum harmonic oscillator – he did this paper a while ago describing how to construct one in a general symmetric dagger-monoidal category.  We’ve been interested in how the stuff type picture fits into that framework, and also in trying to examine it in more detail using 2-linearization (which I explain here).

Anyway, stuff types provide a possible categorification of the Weyl/Heisenberg algebra in terms of spans and groupoids.  They aren’t the only way to approach the question, though – Khovanov’s paper gives a different (though, unsurprisingly, related) point of view.  There are some nice aspects to the groupoidification approach: for one thing, it gives a nice set of pictures for the morphisms in its categorified algebra (they look like groupoids whose objects are Feynman diagrams).  Two great features of this Khovanov-Lauda program: the diagrammatic calculus gives a great visual representation of the 2-morphisms; and by dealing with generators and relations directly, it describes, in some sense1, the universal answer to the question “What is a categorification of the algebra with these generators and relations”.  Here’s how it works…

## Heisenberg Algebra

One way to represent the Weyl/Heisenberg algebra (the two terms refer to different presentations of isomorphic algebras) uses a polynomial algebra $P_n = \mathbb{C}[x_1,\dots,x_n]$.  In fact, there’s a version of this algebra for each natural number $n$ (the stuff-type references above only treat $n=1$, though extending it to “$n$-sorted stuff types” isn’t particularly hard).  In particular, it’s the algebra of operators on $P_n$ generated by the “raising” operators $a_k(p) = x_k \cdot p$ and the “lowering” operators $b_k(p) = \frac{\partial p}{\partial x_k}$.  The point is that this is characterized by some commutation relations.  For $j \neq k$, we have:

$[a_j,a_k] = [b_j,b_k] = [a_j,b_k] = 0$

but on the other hand

$[a_k,b_k] = 1$

So the algebra could be seen as just a free thing generated by symbols $\{a_j,b_k\}$ with these relations.  These can be understood to be the “raising and lowering” operators for an $n$-dimensional harmonic oscillator.  This isn’t the only presentation of this algebra.  There’s another one where $[p_k,q_k] = i$ (as in $i = \sqrt{-1}$) has a slightly different interpretation, where the $p$ and $q$ operators are the position and momentum operators for the same system.  Finally, a third one – which is the one that Khovanov actually categorifies – is skewed a bit, in that it replaces the $a_j$ with a different set of $\hat{a}_j$ so that the commutation relation actually looks like

$[\hat{a}_j,b_k] = b_{k-1}\hat{a}_{j-1}$

It’s not instantly obvious that this produces the same result – but the $\hat{a}_j$ can be rewritten in terms of the $a_j$, and they generate the same algebra.  (Note that for the one-dimensional version, these are in any case the same, taking $a_0 = b_0 = 1$.)
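Since the algebra is characterized by these relations, they're easy to sanity-check concretely (my own illustration, not from Khovanov's paper): represent $a_k$ and $b_k$ as matrices acting on a monomial basis truncated at some degree $N$, and compute commutators.  Note that with $a$ the multiplication operator and $b$ the derivative, the nontrivial commutator comes out as $[b_k, a_k] = 1$ (the relation above, up to ordering convention), and the truncation only spoils it in the top degree.

```python
import numpy as np

N = 6  # truncate one-variable polynomials at degree N
# One-variable raising (multiply by x) and lowering (d/dx) on basis
# 1, x, ..., x^N:
A = np.diag(np.ones(N), -1)                 # x^k -> x^{k+1} (top truncated)
B = np.diag(np.arange(1.0, N + 1), 1)       # x^k -> k x^{k-1}
I = np.eye(N + 1)

# Two variables via tensor (Kronecker) products of the one-variable ops:
A1, B1 = np.kron(A, I), np.kron(B, I)
A2, B2 = np.kron(I, A), np.kron(I, B)

def comm(X, Y):
    return X @ Y - Y @ X

# The relations for j != k all vanish identically:
assert np.allclose(comm(A1, A2), 0)
assert np.allclose(comm(B1, B2), 0)
assert np.allclose(comm(A1, B2), 0)

# [b_k, a_k] = 1, away from the truncation degree:
C = comm(B, A)
assert np.allclose(C[:N, :N], np.eye(N))
```

The truncation artifact in the last row and column is the usual price of modelling an infinite-dimensional representation with finite matrices; on any polynomial of degree below $N$ the commutator really is the identity.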

## Diagrammatic Calculus

To categorify this, in Khovanov’s sense (though see note below1), means to find a category $\mathcal{H}$ whose isomorphism classes of objects correspond to (integer-) linear combinations of products of the generators of $H$.  Now, in the $Span(Gpd)$ setup, we can say that the groupoid $FinSet_0$, or equivalently $\mathcal{S} = \coprod_n \mathcal{S}_n$, represents Fock space.  Groupoidification turns this into the free vector space on the set of isomorphism classes of objects.  This has some extra structure which we don’t need right now, so it makes the most sense to describe it as $\mathbb{C}[[t]]$, the space of power series (where $t^n$ corresponds to the object $[n]$).  The algebra itself is an algebra of endomorphisms of this space.  It’s this algebra Khovanov is looking at, so the monoidal category in question could really be considered a bicategory with one object, where the monoidal product comes from composition, and the object stands in formally for the space it acts on.  But this space doesn’t enter into the description, so we’ll just think of $\mathcal{H}$ as a monoidal category.  We’ll build it in two steps: the first is to define a category $\mathcal{H}'$.

The objects of $\mathcal{H}'$ are defined by two generators, called $Q_+$ and $Q_-$, and the fact that it’s monoidal (these objects will be the categorifications of $a$ and $b$).  Thus, there are objects $Q_+ \otimes Q_- \otimes Q_+$ and so forth.  In general, if $\epsilon$ is some word on the alphabet $\{+,-\}$, there’s an object $Q_{\epsilon} = Q_{\epsilon_1} \otimes \dots \otimes Q_{\epsilon_m}$.

As in other categorifications in the Khovanov-Lauda vein, we define the morphisms of $\mathcal{H}'$ to be linear combinations of certain planar diagrams, modulo some local relations.  (This type of formalism comes out of knot theory – see e.g. this intro by Louis Kauffman).  In particular, we draw the objects as sequences of dots labelled $+$ or $-$, and connect two such sequences by a bunch of oriented strands (embeddings of the interval, or circle, in the plane).  Each $+$ dot is the endpoint of a strand oriented up, and each $-$ dot is the endpoint of a strand oriented down.  The local relations mean that we can take these diagrams up to isotopy (moving the strands around), as well as various other relations that define changes you can make to a diagram and still represent the same morphism.  These relations include things like:

which seems visually obvious (imagine tugging hard on the ends on the left hand side to straighten the strands), and the less-obvious:

and a bunch of others.  The main ingredients are cups, caps, and crossings, with various orientations.  Other diagrams can be made by pasting these together.  The point, then, is that any morphism is some $\mathbf{k}$-linear combination of these.  (I prefer to assume $\mathbf{k} = \mathbb{C}$ most of the time, since I’m interested in quantum mechanics, but this isn’t strictly necessary.)

The second diagram, by the way, is an important part of categorifying the commutation relations.  This would say that $Q_- \otimes Q_+ \cong Q_+ \otimes Q_- \oplus 1$ (the commutation relation has become a decomposition of a certain tensor product).  The point is that the left hand sides show the composition of two crossings $Q_- \otimes Q_+ \rightarrow Q_+ \otimes Q_-$ and $Q_+ \otimes Q_- \rightarrow Q_- \otimes Q_+$ in two different orders.  One can use this, plus isotopy, to show the decomposition.

That diagrams are invariant under isotopy means, among other things, that the yanking rule holds:

(and similar rules for up-oriented strands, and zig zags on the other side).  These conditions amount to saying that the functors $- \otimes Q_+$ and $- \otimes Q_-$ are two-sided adjoints.  The two cups and caps (with each possible orientation) give the units and counits for the two adjunctions.  So, for instance, in the zig-zag diagram above, there’s a cup which gives a unit map $\mathbf{k} \rightarrow Q_- \otimes Q_+$ (reading upward), all tensored on the right by $Q_-$.  This is followed by a cap giving a counit map $Q_+ \otimes Q_- \rightarrow \mathbf{k}$ (all tensored on the left by $Q_-$).  So the yanking rule essentially just gives one of the identities required for an adjunction.  There are four of them, so in fact there are two adjunctions: one where $Q_+$ is the left adjoint, and one where it’s the right adjoint.
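
In $Vect$, with a basis chosen to identify $V$ with its dual, the cup is the vector $\sum_i e_i \otimes e_i$, the cap is the dual pairing, and the yanking rule becomes a concrete matrix identity.  A small numerical check (numpy; the flattening conventions are my own choices):

```python
import numpy as np

d = 3
I = np.eye(d)
# Cup (unit): k -> V ⊗ V, the flattened identity matrix, i.e. sum_i e_i ⊗ e_i.
cup = I.reshape(d * d, 1)
# Cap (counit): V ⊗ V -> k, the dual pairing, as a row vector.
cap = I.reshape(1, d * d)

# One zig-zag composite V -> V⊗V⊗V -> V: (I ⊗ cap) ∘ (cup ⊗ I).
snake = np.kron(I, cap) @ np.kron(cup, I)
assert np.allclose(snake, I)          # yanking straightens it to the identity

# The other zig-zag: (cap ⊗ I) ∘ (I ⊗ cup).
snake2 = np.kron(cap, I) @ np.kron(I, cup)
assert np.allclose(snake2, I)
```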

## Karoubi Envelope

Now, so far this has explained where a category $\mathcal{H}'$ comes from – the one with the objects $Q_{\epsilon}$ described above.  This isn’t quite enough to get a categorification of $H_{\mathbb{Z}}$: it would be enough to get the version with just one $a$ and one $b$ element, and their powers, but not all the $a_j$ and $b_k$.  To get all the elements of (the integral form of) the Heisenberg algebra, and in particular to get generators that satisfy the right commutation relations, we need to introduce some new objects.  There’s a convenient way to do this, though, which is to take the Karoubi envelope of $\mathcal{H}'$.

The Karoubi envelope of any category $\mathcal{C}$ is a universal way to find a category $Kar(\mathcal{C})$ that contains $\mathcal{C}$ and in which all idempotents split (i.e. have corresponding subobjects).  Think of vector spaces, for example: a map $p \in End(V)$ such that $p^2 = p$ is a projection.  That projection corresponds to a subspace $W \subset V$, and $W$ is actually another object in $Vect$, so that $p$ splits (factors) as $V \rightarrow W \subset V$.  This might not happen in a general $\mathcal{C}$, but it will in $Kar(\mathcal{C})$.  This has, for objects, all the pairs $(C,p)$ where $p : C \rightarrow C$ is idempotent (so $\mathcal{C}$ is contained in $Kar(\mathcal{C})$ as the cases where $p=1$).  The morphisms $f : (C,p) \rightarrow (C',p')$ are just maps $f : C \rightarrow C'$ with the compatibility condition that $p' f = f p = f$ (essentially, maps between the new subobjects).
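
Here’s a toy version of this with matrices (a sketch, with hypothetical helper names: think of the ambient category as matrices over $\mathbf{k}$, and the Karoubi envelope as adding the pairs $(n,p)$):

```python
import numpy as np

# Objects: pairs (n, p) with p an idempotent n x n matrix; morphisms
# f : (n, p) -> (m, q) are m x n matrices with q f = f p = f.
def is_object(p):
    return np.allclose(p @ p, p)

def is_morphism(f, p, q):
    return np.allclose(q @ f, f) and np.allclose(f @ p, f)

# An idempotent: projection onto span{(1,0)} along (1,-1).
p = np.array([[1., 1.],
              [0., 0.]])
idty = np.eye(2)
assert is_object(p)

# In the Karoubi envelope, p splits: it is a morphism e : (2, id) -> (2, p)
# and a morphism m : (2, p) -> (2, id), and m ∘ e is the identity of the new
# object (2, p) -- the identity of (n, p) is p itself.
assert is_morphism(p, idty, p)     # e
assert is_morphism(p, p, idty)     # m
assert np.allclose(p @ p, p)       # m ∘ e = identity of (2, p)
```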

So which new subobjects are the relevant ones?  They’ll be subobjects of tensor powers of our $Q_{\pm}$.  First, consider $Q_{+^n} = Q_+^{\otimes n}$.  Obviously, there’s an action of the symmetric group $\mathcal{S}_n$ on this, so in fact (since we want a $\mathbf{k}$-linear category), its endomorphisms contain a copy of $\mathbf{k}[\mathcal{S}_n]$, the corresponding group algebra.  This has a number of different projections, but the relevant ones here are the symmetrizer:

$e_n = \frac{1}{n!} \sum_{\sigma \in \mathcal{S}_n} \sigma$

which wants to be a “projection onto the symmetric subspace” and the antisymmetrizer:

$e'_n = \frac{1}{n!} \sum_{\sigma \in \mathcal{S}_n} sign(\sigma) \sigma$

which wants to be a “projection onto the antisymmetric subspace” (if it were in a category with the right sub-objects). The diagrammatic way to depict this is with horizontal bars: so the new object $S^n_+ = (Q_{+^n}, e_n)$ (the symmetrized subobject of $Q_+^{\otimes n}$) is a hollow rectangle, labelled by $n$.  The projection from $Q_+^{\otimes n}$ is drawn with $n$ arrows heading into that box:

The antisymmetrized subobject $\Lambda^n_+ = (Q_{+^n},e'_n)$ is drawn with a black box instead.  There are also $S^n_-$ and $\Lambda^n_-$ defined in the same way (and drawn with downward-pointing arrows).
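
For $n = 2$ these projections can be written down concretely on $\mathbb{C}^d \otimes \mathbb{C}^d$ using the swap operator, and one can check they are complementary idempotents of the expected ranks (a numpy sketch, with my own conventions):

```python
import numpy as np

d = 3
dim = d * d

# Swap operator P on C^d ⊗ C^d: P(e_i ⊗ e_j) = e_j ⊗ e_i.
P = np.zeros((dim, dim))
for i in range(d):
    for j in range(d):
        P[j * d + i, i * d + j] = 1.0

e_sym = (np.eye(dim) + P) / 2      # symmetrizer: (1/2!) (id + swap)
e_alt = (np.eye(dim) - P) / 2      # antisymmetrizer: (1/2!) (id - swap)

assert np.allclose(e_sym @ e_sym, e_sym)   # idempotent
assert np.allclose(e_alt @ e_alt, e_alt)
assert np.allclose(e_sym @ e_alt, 0)       # orthogonal to each other
# Ranks (= traces) match dim Sym^2 = d(d+1)/2 and dim Λ^2 = d(d-1)/2:
assert round(np.trace(e_sym)) == d * (d + 1) // 2
assert round(np.trace(e_alt)) == d * (d - 1) // 2
```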

The basic fact, which can be shown by various diagram manipulations, is that $S^n_- \otimes \Lambda^m_+ \cong (\Lambda^m_+ \otimes S^n_-) \oplus (\Lambda_+^{m-1} \otimes S^{n-1}_-)$.  The key thing is that there are maps from the left hand side into each of the terms on the right, and the sum can be shown to be an isomorphism using all the previous relations.  The map into the second term involves a cap that uses up one of the strands from each factor on the left.

There are other idempotents as well – for every partition $\lambda$ of $n$, there’s a notion of $\lambda$-symmetric things – but ultimately these boil down to symmetrizing the various parts of the partition.  The main point is that we now have objects in $\mathcal{H} = Kar(\mathcal{H}')$ corresponding to all the elements of $H_{\mathbb{Z}}$.  The right choice is that the $a_j$ (the new generators in this presentation that came from the lowering operators) correspond to the $S^j_-$ (symmetrized products of “lowering” strands), and the $b_k$ correspond to the $\Lambda^k_+$ (antisymmetrized products of “raising” strands).  We also have isomorphisms (i.e. diagrams that are invertible, using the local moves we’re allowed) for all the relations.  This is a categorification of $H_{\mathbb{Z}}$.

## Some Generalities

This diagrammatic calculus is universal enough to be applied to all sorts of settings where there are functors which are two-sided adjoints of one another (by labelling strands with functors, and the regions of the plane with categories they go between).  I like this a lot, since biadjointness of certain functors is essential to the 2-linearization functor $\Lambda$ (see my link above).  In particular, $\Lambda$ uses biadjointness of restriction and induction functors between representation categories of groupoids associated to a groupoid homomorphism (and uses these unit and counit maps to deal with 2-morphisms).  That example comes from the fact that a (finite-dimensional) representation of a finite group(oid) is a functor into $Vect$, and a group(oid) homomorphism is also just a functor $F : H \rightarrow G$.  Given such an $F$, there’s an easy “restriction” $F^* : Fun(G,Vect) \rightarrow Fun(H,Vect)$, that just works by composing with $F$.  Then in principle there might be two different adjoints $Fun(H,Vect) \rightarrow Fun(G,Vect)$, given by the left and right Kan extension along $F$.  But these are defined by colimits and limits, which are the same for (finite-dimensional) vector spaces.  So in fact the adjoint is two-sided.

Khovanov’s paper describes and uses exactly this example of biadjointness in a very nice way, albeit in the classical case where we’re just talking about inclusions of finite groups.  That is, given a subgroup $H < G$, we get a functor $Res^G_H : Rep(G) \rightarrow Rep(H)$, which just restricts to the obvious action of $H$ on any representation space of $G$.  It has a biadjoint $Ind^G_H : Rep(H) \rightarrow Rep(G)$, which takes a representation $V$ of $H$ to $\mathbf{k}[G] \otimes_{\mathbf{k}[H]} V$, which is a special case of the formula for a Kan extension.  (This formula suggests why it’s also natural to see these as functors between module categories $\mathbf{k}[G]-mod$ and $\mathbf{k}[H]-mod$.)  To talk about the Heisenberg algebra in particular, Khovanov considers these functors for all the symmetric group inclusions $\mathcal{S}_n < \mathcal{S}_{n+1}$.
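
At the decategorified level, this adjointness is just Frobenius reciprocity, $\langle Ind_H^G \chi, \psi \rangle_G = \langle \chi, Res^G_H \psi \rangle_H$, which can be verified by brute force for the smallest interesting inclusion $\mathcal{S}_2 < \mathcal{S}_3$.  A sketch in plain Python (the characters here are real, so no complex conjugation is needed; all function names are mine):

```python
from itertools import permutations
from fractions import Fraction

# S_3 as tuples (images of 0,1,2); S_2 embedded as the permutations fixing 2.
G = list(permutations(range(3)))
H = [g for g in G if g[2] == 2]

def mul(a, b):                    # (a*b)(i) = a(b(i))
    return tuple(a[i] for i in b)

def inv(a):
    r = [0] * len(a)
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

def cycle_type(g):
    seen, lens = set(), []
    for i in range(len(g)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j); j = g[j]; c += 1
            lens.append(c)
    return tuple(sorted(lens, reverse=True))

chi_triv = lambda h: 1            # trivial character of S_2
psi_std = lambda g: {(1, 1, 1): 2, (2, 1): 0, (3,): -1}[cycle_type(g)]  # standard rep of S_3

def induced(chi, g):              # Ind_H^G chi(g) = (1/|H|) sum over x with x^-1 g x in H
    return Fraction(sum(chi(mul(inv(x), mul(g, x)))
                        for x in G if mul(inv(x), mul(g, x)) in H), len(H))

def inner(f1, f2, grp):           # inner product of (real-valued) characters
    return Fraction(sum(f1(g) * f2(g) for g in grp), len(grp))

# Frobenius reciprocity: <Ind chi, psi>_G = <chi, Res psi>_H.
lhs = inner(lambda g: induced(chi_triv, g), psi_std, G)
rhs = inner(chi_triv, psi_std, H)   # restricting is just evaluating on H
assert lhs == rhs == 1
```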

Except for having to break apart the symmetric groupoid as $S = \coprod_n \mathcal{S}_n$, this is all you need to categorify the Heisenberg algebra.  In the $Span(Gpd)$ categorification, we pick out the interesting operators as those generated by the $- \sqcup \{\star\}$ map from $FinSet_0$ to itself, but “really” (i.e. up to equivalence) this is just all the inclusions $\mathcal{S}_n < \mathcal{S}_{n+1}$ taken at once.  However, Khovanov’s approach is nice, because it separates out a lot of what’s going on abstractly and uses a general diagrammatic way to depict all these 2-morphisms (this is explained in the first few pages of Aaron Lauda’s paper on ambidextrous adjoints, too).  The case of restriction and induction is just one example where this calculus applies.

There’s a fair bit more in the paper, but this is probably sufficient to say here.

1 There are two distinct but related senses of “categorification” of an algebra $A$ here, by the way.  To simplify the point, say we’re talking about a ring $R$.  The first sense of a categorification of $R$ is a (monoidal, additive) category $C$ with a “valuation” in $R$ that takes $\otimes$ to $\times$ and $\oplus$ to $+$.  This is described, with plenty of examples, in this paper by Rafael Diaz and Eddy Pariguan.  The other, typical of the Khovanov program, says it is a (monoidal, additive) category $C$ whose Grothendieck ring is $K_0(C) = R$.  Of course, the second definition implies the first, but not conversely.  The elements of the Grothendieck ring are isomorphism classes of objects in $C$.  A valuation may identify objects which aren’t isomorphic (or, as in groupoidification, morphisms which aren’t 2-isomorphic).

So a categorification of the first sort could be factored into two steps: first take the Grothendieck ring, then take a quotient to further identify things with the same valuation.  If we’re lucky, there’s a commutative square here: we could first take the category $C$, find some surjection $C \rightarrow C'$, and then find that $K_0(C') = R$.  This seems to be the relation between Khovanov’s categorification of $H_{\mathbb{Z}}$ and the one in $Span(Gpd)$. This is the sense in which it seems to be the “universal” answer to the problem.

A more substantial post is upcoming, but I wanted to get out this announcement for a conference I’m helping to organise, along with Roger Picken, João Faria Martins, and Aleksandr Mikovic.  Its website: https://sites.google.com/site/hgtqgr/home has more details, and will have more as we finalise them, but here are some of them:

## Workshop and School on Higher Gauge Theory, TQFT and Quantum Gravity

Lisbon, 10-13 February, 2011 (Workshop), 7-13 February, 2011 (School)

Description from the website:

Higher gauge theory is a fascinating generalization of ordinary abelian and non-abelian gauge theory, involving (at the first level) connection 2-forms, curvature 3-forms and parallel transport along surfaces. This ladder can be continued to connection forms of higher degree and transport along extended objects of the corresponding dimension. On the mathematical side, higher gauge theory is closely tied to higher algebraic structures, such as 2-categories, 2-groups etc., and higher geometrical structures, known as gerbes or n-gerbes with connection. Thus higher gauge theory is an example of the categorification phenomenon which has been very influential in mathematics recently.

There have been a number of suggestions that higher gauge theory could be related to (4D) quantum gravity, e.g. by Baez-Huerta (in the QG^2 Corfu school lectures), and Baez-Baratin-Freidel-Wise in the context of state-sums. A pivotal role is played by TQFTs in these approaches, in particular BF theories and variants thereof, as well as extended TQFTs, constructed from suitable geometric or algebraic data. Another route between higher gauge theory and quantum gravity is via string theory, where higher gauge theory provides a setting for n-form fields, worldsheets for strings and branes, and higher spin structures (i.e. string structures and generalizations, as studied e.g. by Sati-Schreiber-Stasheff). Moving away from point particles to higher-dimensional extended objects is a feature both of loop quantum gravity and string theory, so higher gauge theory should play an important role in both approaches, and may allow us to probe a deeper level of symmetry, going beyond normal gauge symmetry.

Thus the moment seems ripe to bring together a group of researchers who could shed some light on these issues. Apart from the courses and lectures given by the invited speakers, we plan to incorporate discussion sessions in the afternoon throughout the week, for students to ask questions and to stimulate dialogue between participants from different backgrounds.

Provisional list of speakers:

• Paolo Aschieri (Alessandria)
• Benjamin Bahr (Cambridge)
• Aristide Baratin (Paris-Orsay)
• John Barrett (Nottingham)
• Rafael Diaz (Bogotá)
• Bianca Dittrich (Potsdam)
• Laurent Freidel (Perimeter)
• John Huerta (California)
• Branislav Jurco (Prague)
• Thomas Krajewski (Marseille)
• Tim Porter (Bangor)
• Hisham Sati (Maryland)
• Christopher Schommer-Pries (MIT)
• Urs Schreiber (Utrecht)
• Jamie Vicary (Oxford)
• Derek Wise (Erlangen)
• Christoph Wockel (Hamburg)

The workshop portion will have talks by the speakers above (those who can make it), and any contributed talks.  The “school” portion is, roughly, aimed at graduate students in a field related to the topics, but not necessarily directly in them.  You don’t need to be a student to attend the school, of course, but they are the target audience.  The only course that has been officially announced so far will be given by Christopher Schommer-Pries, on TQFT.  We hope/expect to have minicourses on Higher Gauge Theory and Quantum Gravity as well, but the details aren’t settled yet.

If you’re interested, the deadline to register is Jan 8 (hence the rush to announce).  Some funding is available for those who need it.

In the most recent TQFT Club seminar, we had a couple of talks – one was the second in a series of three by Marco Mackaay, which as promised previously I’ll write up together after the third one.

The other was by Björn Gohla, a student of João Faria Martins, giving an overview on the subject of “Tricategories and Trifunctors”, a mostly expository talk explaining some definitions.  Actually, this was a bit more specific than a general introduction – the point of it was to describe a certain kind of mapping space.  I’ve talked here before about representing the “configuration space” of a gauge theory as a groupoid: the objects are (optionally, flat) connections on a manifold $M$, and the morphisms are gauge transformations taking one connection to another.  The reason for the things Björn was talking about is analogous, except that in this case, the goal is to describe the configuration space of a higher gauge theory.

There are at least two ways I know of to talk about higher gauge theory.  One is in terms of categorical (or n-categorical) groups – which makes it a “categorification” of gauge theory in the sense of reproducing in $\mathbf{Cat}$ (or $\mathbf{nCat}$) an analog of a structure, gauge theory, originally formulated in $\mathbf{Set}$.  Among other outlines, you might look at this one by John Baez and John Huerta for an introduction.  Another uses the lingo of crossed modules or crossed complexes.  In either case, the essential point is the same: there is some collection of groups (or groupoids, but let’s say groups to keep everything clear) which play the role of the single gauge group in ordinary gauge theory.

In the first language, we can speak of a “2-group”, or “categorical group” – a group internal to $\mathbf{Cat}$, or what is equivalent, a category internal to $\mathbf{Grp}$, which would have a group of objects and a group of morphisms (and, in higher settings still, groups of 2-morphisms, 3-morphisms, and so on).  The structure maps of the category (source, target, composition, etc.) have to live in the category of groups.

A crossed complex of groups (again, we could generalize to groupoids, but I won’t) is a nonabelian variation on a chain complex: a sequence of groups with maps from one to the next.  There are also a bunch more structures, which ultimately serve to reproduce all the kinds of composition, source, and target maps in the $n$-categorical groups: some groups act on others, there are “bracket” operations on one group valued in another, and so forth.  This paper by Brown and Higgins explains how the two concepts are related when most of the groups are abelian, and there’s a lot more about crossed complexes and related stuff in Tim Porter’s “Crossed Menagerie”.
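
In the length-two case (a crossed module $\partial : H \rightarrow G$), the extra structure is a $G$-action on $H$ satisfying equivariance of $\partial$ and the Peiffer identity.  A minimal sanity check in Python, using the canonical example $H = G$ with $\partial = id$ and the conjugation action (the setup is my own, just for illustration):

```python
from itertools import permutations

# A crossed module (boundary : H -> G, with G acting on H) must satisfy:
#  (1) equivariance: boundary(g·h) = g boundary(h) g^-1
#  (2) Peiffer identity: (boundary h)·h2 = h h2 h^-1
# Canonical example: H = G = S_3, boundary = id, action = conjugation.

G = list(permutations(range(3)))

def mul(a, b):
    return tuple(a[i] for i in b)

def inv(a):
    r = [0] * len(a)
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

def act(g, h):               # the G-action on H: conjugation
    return mul(g, mul(h, inv(g)))

def boundary(h):             # the boundary map: here, the identity homomorphism
    return h

for g in G:
    for h in G:
        # (1) equivariance of the boundary map
        assert boundary(act(g, h)) == mul(g, mul(boundary(h), inv(g)))
        for h2 in G:
            # (2) Peiffer identity
            assert act(boundary(h), h2) == mul(h, mul(h2, inv(h)))
```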

The point of all this right now is that these things play the role of the gauge group in higher gauge theory.  The idea is that in gauge theory, you have a connection.  Typically this is described in terms of a form valued in the Lie algebra of the gauge group.  Then a (thin) homotopy class of curves gets a holonomy valued in the group by integrating that form.  Alternatively, you can just think of the path groupoid of a manifold $\mathcal{P}_1(M)$, where those classes of curves form the morphisms between the objects, which are just points of $M$.  Then a connection defines a functor $\Gamma : \mathcal{P}_1(M) \rightarrow G$, where $G$ is the gauge group thought of as a category (groupoid in fact) with one object.  Or, you can just define a connection that way in the first place.  In higher gauge theory, a similar principle exists: begin with the $n$-path groupoid $\mathcal{P}_n(M)$ where the morphisms are (thin homotopy classes of) paths, the 2-morphisms are surfaces (really homotopy classes of homotopies of paths), and so on, so the $k$-morphisms are $k$-dimensional bits of $M$.  Then you could define an $n$-connection as an $n$-functor into an $n$-group as defined above.  Or, you could define it in terms of a tower of differential $k$-forms valued in the crossed complex of Lie algebras associated to the crossed complex of Lie groups that replaces the gauge group.  You can then use an integral to get an element of the group at level $k$ of the complex for any given $k$-morphism in $\mathcal{P}_n(M)$, which (via the equivalence I mentioned) amounts to the same thing as the other definition of connection.
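
A discrete toy model makes the functoriality concrete: put a group element on each edge of a graph, let the holonomy of a path be the ordered product, and let gauge transformations act at vertices.  A sketch (numpy; the graph, the matrices, and names like `hol` are all invented for illustration):

```python
import numpy as np

# A "connection" assigns an invertible matrix to each directed edge.
edges = {('a', 'b'): np.array([[1, 1], [0, 1]]),
         ('b', 'c'): np.array([[1, 0], [1, 1]]),
         ('c', 'a'): np.array([[0, -1], [1, 0]])}

def hol(conn, path):                  # path = list of vertices
    g = np.eye(2, dtype=int)
    for u, v in zip(path, path[1:]):
        g = conn[(u, v)] @ g          # later edges multiply on the left
    return g

# Functoriality: the holonomy of a composite path is the product of holonomies.
assert np.array_equal(hol(edges, ['a', 'b', 'c']),
                      hol(edges, ['b', 'c']) @ hol(edges, ['a', 'b']))

def inv2(m):                          # inverse of an integer 2x2 matrix with det 1
    (a, b), (c, d) = m
    return np.array([[d, -b], [-c, a]])

# A gauge transformation assigns a group element to each vertex; it acts on
# edges by g' = gauge[v] g gauge[u]^-1, and conjugates the loop holonomy.
gauge = {'a': np.array([[1, 2], [0, 1]]),
         'b': np.array([[1, 0], [3, 1]]),
         'c': np.array([[2, 1], [1, 1]])}
edges2 = {(u, v): gauge[v] @ g @ inv2(gauge[u]) for (u, v), g in edges.items()}

loop = ['a', 'b', 'c', 'a']
assert np.array_equal(hol(edges2, loop),
                      gauge['a'] @ hol(edges, loop) @ inv2(gauge['a']))
```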

João Martins has done some work on this sort of thing in dimension 2 (with Tim Porter) and dimension 3 (with Roger Picken), which I guess is how Björn came to work on this question.  The question is, roughly, how to describe the moduli space of these connections.  The gist of the answer is that it’s a functor $n$-category $[\mathcal{P}_n(M),\mathcal{G}]$, where $\mathcal{G}$ is the $n$-group.  A little more generally, the question is how to describe mapping spaces for higher categories.  In particular, he was talking about the case $n=3$, which is where certain tricky issues start to show up.  For 2-categories there’s no such trouble: every bicategory (the weakest form of 2-category) is (bi)equivalent to a strict 2-category, so there’s no real need to worry about weakening things like associativity so that they only hold up to isomorphism – these can all be taken to be equalities.  With 3-categories, this fails: the weakest kind of 3-category is a tricategory (introduced by Gordon, Power and Street, though also see the references beyond that link).  These are always triequivalent to something stricter than the general case, but not completely strict: Gray-categories.  The only equation from 2-categories which has to be weakened to an isomorphism here is the interchange law: given a square of four 2-morphisms, we can either compose vertically first, and then horizontally, or vice versa.  In a Gray-category, there’s an “interchanger” isomorphism

$I_{\alpha,\alpha ',\beta,\beta'} : (\alpha \circ \beta) \cdot (\alpha ' \circ \beta ') \Rightarrow (\alpha \cdot \alpha ') \circ (\beta \cdot \beta ')$

where $\cdot$ is vertical composition of 2-cells, and $\circ$ is horizontal (i.e. the same direction as 1-cells).  This is supposed to satisfy a compatibility condition.  It’s essentially the only one you can come up with starting with $(\alpha \cdot \alpha ') \circ \beta$ (and composing it in different orders by throwing in identities in various places).
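
In a strict setting, by contrast, interchange holds on the nose.  For instance, treating matrices as 2-cells with vertical composition given by matrix product and horizontal composition by Kronecker product (my choice of strict example, just to show what the interchanger weakens), interchange is exactly the mixed-product property:

```python
import numpy as np

# 2-cells: matrices.  Vertical composition: matrix product.
# Horizontal composition: Kronecker product.
rng = np.random.default_rng(0)
A, A2, B, B2 = (rng.integers(-3, 4, (3, 3)) for _ in range(4))

horizontal_then_vertical = np.kron(A, B) @ np.kron(A2, B2)
vertical_then_horizontal = np.kron(A @ A2, B @ B2)
# Strict interchange: (A ∘ B) · (A2 ∘ B2) = (A · A2) ∘ (B · B2).
assert np.array_equal(horizontal_then_vertical, vertical_then_horizontal)
```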

There’s another way to look at things, as Björn explained, in terms of enriched category theory.  If you have a monoidal category $(\mathcal{V},\otimes)$, then a $(\mathcal{V},\otimes)$-enriched category $\mathbb{G}$ is one in which, for any two objects $x,y$, there is an object $\mathbb{G}(x,y) \in \mathcal{V}$ of morphisms, and composition gives morphisms $\circ_{x,y,z} : \mathbb{G}(y,z) \otimes \mathbb{G}(x,y) \rightarrow \mathbb{G}(x,z)$.  A strict 3-category is enriched in $\mathbf{2Cat}$, with its usual tensor product, dual to its internal hom $[-,-]$ (which gives the mapping 2-category of functors, natural transformations, and modifications, between any two 2-categories).  A Gray category is similar, except that it is enriched in $\mathbf{Gray}$, a version of $\mathbf{2Cat}$ with a different tensor product, dual to the hom functor $[-,-]'$ which gives the mapping 2-category with pseudonatural transformations (the weak version of the concept, where the naturality square only has to commute up to a specified 2-cell) as morphisms.  These are not the same, which is where the unavoidability of weakening 3-categories “really” comes from.   The upshot of this is as above: it matters which order we compose things in.

Having defined Gray-categories, let’s say $A$ and $B$ (which, in the applications I mentioned above, tend to actually be Gray-groupoids, though this doesn’t change the theory substantially), the point is to talk about “mapping spaces” – that is, Gray-categories of Gray-functors (etc.) from $A$ to $B$.

Since they’ve been defined in terms of enriched category theory, one wants to use the general theory of enriched functors, transformations, and so forth – which is a lot easier than trying to work out the correct definitions from scratch using a low-level description.  So then a Gray-functor $F : A \rightarrow B$ has an object map $F_0 : A_0 \rightarrow B_0$, mapping objects of $A$ to objects of $B$, and then for each $x,y \in A_0$, a morphism in $\mathbf{Gray}$ (which is our $\mathcal{V}$), namely $F_{x,y} : A(x,y) \rightarrow B(F(x),F(y))$.  There are a bunch of compatibility conditions, which can be expressed for any monoidal category $\mathcal{V}$ (since they involve diagrams with the map $\circ_{x,y,z}$ for any triple, and the like).  Similar comments apply to defining $\mathcal{V}$-natural transformations.

There is a slight problem here, which is that in this case, $\mathcal{V} = \mathbf{Gray}$ is a 2-category, so we really need to use a form of weakly enriched categories…  All the compatibility diagrams should have 2-cells in them, and so forth.  This, too, gets complicated.  So Björn explained a shortcut that avoids drawing $n$-dimensional diagrams for these mapping $n$-categories, using the arrow category $\vec{B}$.  This is the category whose objects are the morphisms of $B$, and whose morphisms are commuting squares (or, when $B$ is a 2-category, squares commuting up to a 2-cell), so a morphism in $\vec{B}$ from $f: x \rightarrow y$ to $f' : x' \rightarrow y'$ is a triple $g = (g_x,g_y,g_f)$ like so:

Morphism in arrow category

The 2-morphisms in $\vec{B}$ are commuting “pillows”, where the front and back faces are morphisms like the above.  So $\beta : g \Rightarrow g'$ is $\beta = (\beta_x,\beta_y)$, where $\beta_x : g_x \Rightarrow g'_x$ is a 2-cell, and the whole “pillow” commutes.  When $B$ is a tricategory, then we need to go further – these 2-morphisms should be triples including a 3-cell $\beta_f$ filling the “pillow”, and then 3-morphisms are commuting structures between these.  These diagrams get hard to draw pretty quickly.  This is the point of having an ordinary 2D diagram with at most 1-dimensional cells: pushing all the nasty diagrams into these arrow categories, we can replace a 2-cell representing a natural transformation with a diagram involving the arrow category.
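
The bottom of this tower can be made concrete for the (1-)category of finite sets: objects of $\vec{B}$ are functions, morphisms are commuting squares, and the source and target functors just read off the two sides.  A Python sketch (the sets, functions, and names are all invented for illustration):

```python
from itertools import product

# Arrow category of finite sets: an object is a function f : X -> Y; a
# morphism from f to f2 is a pair (gx, gy) with f2 ∘ gx = gy ∘ f.

def functions(dom, cod):            # all functions dom -> cod, as dicts
    return [dict(zip(dom, values)) for values in product(cod, repeat=len(dom))]

def commutes(f, f2, gx, gy):        # does the square built from (gx, gy) commute?
    return all(f2[gx[x]] == gy[f[x]] for x in f)

X, Y = [0, 1], ['a', 'b']
X2, Y2 = [0, 1, 2], ['a', 'b']
f = {0: 'a', 1: 'b'}                # an object of the arrow category
f2 = {0: 'a', 1: 'a', 2: 'b'}       # another one

squares = [(gx, gy)
           for gx in functions(X, X2)
           for gy in functions(Y, Y2)
           if commutes(f, f2, gx, gy)]

# The source and target functors d0, d1 read off the two sides of a square:
gx, gy = squares[0]
d0, d1 = gx, gy                     # d0(square) : X -> X2,  d1(square) : Y -> Y2
print(len(squares), "commuting squares from f to f2")
```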

This uses that there are source and target maps (which are Gray-functors, of course) which we’ll call $d_0, d_1: \vec{B} \rightarrow B$.  So then here (in one diagram) we have two ways of depicting a natural transformation $\alpha : F \Rightarrow G$ between functors $F,G : A \rightarrow B$:

One is the 2-cell, and the other is the functor into $\vec{B}$, such that $d_0 \circ \alpha = F$ and $d_1 \circ \alpha = G$.

To depict a modification between natural transformations (a 3-cell between 2-cells) just involves building the arrow category of $\vec{B}$, say $\vec{\vec{B}}$, and drawing an arrow from $A$ into it.  And so on: in principle, there is a tower above $B$ built by iterating the arrow category construction, and all the different levels of “functor”, “natural transformation”, “modification”, and all the higher equivalents are just functors into different levels of this tower.  (The generic term for the $k^{th}$ level of maps-between-maps-etc between $n$-categories is “$(n,k)$-transfor“, a handy term coined here.)

The advantage here is that at least the general idea can be extended pretty readily to higher values of $n$ than 3.  Naturally, no matter which way one decides to do it, things will get complicated – either there’s a combinatorial explosion of things to consider, or one has to draw higher-dimensional diagrams, or whatever.  This exploding complexity of $n$-categories (in this case, globular ones) is one of the reasons why simplicial approaches – quasicategories or $\infty$-categories – are good.  They allow you to avoid talking about those problems, or at least fold them into fairly well-understood aspects of simplicial sets.  A lot of things – limits, colimits, mapping spaces, etc. – are pretty well understood in that case (see, for instance, the first chapter of Joshua Nichols-Barrer’s thesis for the basics, or Jacob Lurie’s humongous book for something more comprehensive).  But sometimes, as in this case, they just don’t happen to be the things you want for your application.  So here we have some tools for talking about mapping spaces in the world of globular $n$-categories – and as the work by Martins, Porter and Picken shows, it’s motivated by some fairly specific work about invariants of manifolds, differential geometry, and so on.

In the first week of November, I was in Montreal for the biennial meeting of the Philosophy of Science Association, at the invitation of Hans Halvorson and Steve Awodey.  This was for a special session called “Category Theoretical Reflections on the Foundations of Physics”, which also had talks by Bob Coecke (from Oxford), Klaas Landsman (from Radboud University in Nijmegen), and Gonzalo Reyes (from the University of Montreal).  Slides from the talks in this session have been collected here by Steve Awodey.  The meeting was pretty big, and there were a lot of talks on a lot of different topics, some more technical, and some less.  There were enough sessions relating to physics that I had a full schedule just attending those, although for example there were sessions on biology and cognition which I might otherwise have been interested in sitting in on, with titles like “Biology: Evolution, Genomes and Biochemistry”, “Exploring the Complementarity between Economics and Recent Evolutionary Theory”, “Cognitive Sciences and Neuroscience”, and “Methodological Issues in Cognitive Neuroscience”.  And, of course, more fundamental philosophy of science topics like “Fictions and Scientific Realism” and “Kinds: Chemical, Biological and Social”, as well as socially-oriented ones such as “Philosophy of Commercialized Science” and “Improving Peer Review in the Sciences”.  However, interesting as these are, one can’t do everything.

In some ways, this was a really great confluence of interests for me – physics and category theory, as seen through a philosophical lens.  I don’t know exactly how this session came about, but Hans Halvorson is a philosopher of science who started out in physics (and has now, for example, learned enough category theory to teach the course in it offered at Princeton), and Steve Awodey is a philosopher of mathematics who is interested in category theory in its own right.  They managed to get this session brought in to present some of the various ideas about the overlap between category theory and physics to an audience mostly consisting of philosophers, which seems like a good idea.  It was also interesting for me to get a view into how philosophers approach these subjects – what kind of questions they ask, how they argue, and so on.  As with any well-developed subject, there’s a certain amount of jargon and received ideas that people can refer to – for example, I learned the word and current usage (though not the basic concept) of supervenience, which came up, oh, maybe 5-10 times each day.

There are now a reasonable number of people bringing categorical tools to bear on physics – especially quantum physics.  What people who think about the philosophy of science can bring to this research is the usual: careful, clear thinking about the fundamental concepts involved, in a way that tries not to get distracted by the technicalities and keeps the focus on what is important to the question at hand in a deep way.  In this case, the question at hand is physics.  Philosophy doesn’t always accomplish this, of course, and sometimes gets sidetracked by what some might call “pseudoquestions” – the kind of questions that tend to arise when you use some folk-theory or simple intuitive understanding of some subtler concept that is much better expressed in mathematics.  This is why anyone who’s really interested in the philosophy of science needs to learn a lot about science in its own terms.  On the whole, this is what they actually do.

And, of course, both mathematicians and physicists try to do this kind of thinking themselves, but in those fields it’s easy – and important! – to spend a lot of time thinking about some technical question, or doing extensive computations, or working out the fiddly details of a proof, and so forth.  This is the real substance of the work in those fields – but sometimes the bigger “why” questions, that address what it means or how to interpret the results, get glossed over, or answered on the basis of some superficial analogy.  Mind you – one often can’t really assess how a line of research is working out until you’ve been doing the technical stuff for a while.  Then the problem is that people who do such thinking professionally – philosophers – are at a loss to understand the material because it’s recent and technical.  This is maybe why technical proficiency in science has tended to run ahead of real understanding – people still debate what quantum mechanics “means”, even though we can use it competently enough to build computers, nuclear reactors, interferometers, and so forth.

Anyway – as for the substance of the talks…  In our session, since every speaker was a mathematician in some form, they tended to be more technical.  You can check out the slides linked to above for more details, but basically, four views of how to draw on category theory to talk about physics were represented.  I’ve actually discussed each of them in previous posts, but in summary:

• Bob Coecke, on “Quantum Picturalism”, was addressing the monoidal dagger-category point of view, which looks at describing quantum mechanical operations (generally understood to be happening in a category of Hilbert spaces) purely in terms of the structure of that category, which one can see as a language for handling a particular kind of logic.  Monoidal categories, as Peter Selinger has painstakingly documented, can be described using various graphical calculi (essentially, certain categories whose morphisms are variously-decorated “strands”, considered invariant under various kinds of topological moves, are the free monoidal categories with various structures – so anything you can prove using these diagrams is automatically true for any example of such categories).  Selinger has also shown that, for the physically interesting case of dagger-compact closed monoidal categories, a theorem is true in general if and only if it’s true for (finite dimensional) Hilbert spaces, which may account for why Hilbert spaces play such a big role in quantum mechanics.  This program is based on describing as much of quantum mechanics as possible in terms of this kind of diagrammatic language.  This stuff has, in some ways, been explored more through the lens of computer science than physics per se – certainly Selinger is coming from that background.  There’s also more on this connection in the “Rosetta Stone” paper by John Baez and Mike Stay.
• My talk (actually third, but I put it here for logical flow) fits this framework, more or less.  I was in some sense there representing a viewpoint whose current form is due to Baez and Dolan, namely “groupoidification”.  The point is to treat the category $Span(Gpd)$ as a “categorification” of (finite dimensional) Hilbert spaces, in the sense that there is a representation map $D : Span(Gpd) \rightarrow Hilb$ so that phenomena living in $Hilb$ can be explained as the image of phenomena in $Span(Gpd)$.  Having done that, there is also a representation of $Span(Gpd)$ into 2-Hilbert spaces, which reveals more detail (much more, at the object level, since Tannaka-Krein reconstruction means that the monoidal 2-Hilbert space of representations of a groupoid is, at least in nice cases, enough to completely reconstruct it).  This gives structures in $2Hilb$ which “conceptually” categorify the structures in $Hilb$, and are also directly connected to specific Hilbert spaces and maps, even though taking equivalence classes in $2Hilb$ definitely doesn’t produce these.  A “state” in a 2-Hilbert space is an irreducible representation, though – so there’s a conceptual difference between what “state” means in categorified and standard settings.  (There’s a bit more discussion in my notes for the talk than in the slides above.)
• Klaas Landsman was talking about what he calls “Bohrification”, which, on the technical side, makes use of topos theory.  The philosophical point comes from Niels Bohr’s “doctrine of classical concepts” – that one should understand quantum systems using concepts from the classical world.  In practice, this means taking a (noncommutative) von Neumann algebra $A$ which describes the observables of a quantum system and looking at it via its commutative subalgebras.  These are organized into a lattice – in fact, a site.  The idea is that the spectrum of $A$ lives in the topos associated to this site: it’s a presheaf that, over each commutative subalgebra $C \subset A$, just gives the spectrum of $C$.  This is philosophically nice in that the “Bohrified” propositions actually behave in a logically sensible way.  The topos approach comes from Chris Isham, developed further with Andreas Doring.  (Note the series of four papers by the two of them from 2007.  Their approach is in some sense dual to that of Landsman, Heunen and Spitters, in the sense that they look at the same site, but at dual toposes – one of sheaves, the other of cosheaves.  The key bit of jargon in Isham and Doring’s approach is “daseinization”, which is a reference to Heidegger’s “Being and Time”.  For some reason this makes me imagine Bohr and Heidegger in a room, one standing on the ceiling, one on the floor, disputing which is which.)
• Gonzalo Reyes talked about synthetic differential geometry (SDG) as a setting for building general relativity.  SDG is a way of doing differential geometry in a category where infinitesimals are actually available, that is, there is a nontrivial set $D = \{ x \in \mathbb{R} | x^2 = 0 \}$.  This simplifies discussions of vector fields (tangent vectors will just be infinitesimal vectors in spacetime).  A vector field is really a first order DE (and an integral curve tangent to it is a solution), so it’s useful to have, in SDG, the fact that any differentiable curve is, literally, infinitesimally a line.  Then the point is that while the gravitational “field” is a second-order DE, so not a field in this sense, the arguments for GR can be reproduced nicely in SDG by talking about infinitesimally-close families of curves following geodesics.  Gonzalo’s slides are brief by necessity, but happily, more details of this are in his paper on the subject.
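The mechanism that makes “any differentiable curve is, literally, infinitesimally a line” precise is the Kock–Lawvere axiom, which characterizes the infinitesimals $D$.  (This is the standard formulation from the general SDG literature, quoted from memory rather than from Gonzalo’s talk.)

```latex
% Kock--Lawvere axiom: every function on the infinitesimals
% D = { x \in \mathbb{R} \mid x^2 = 0 } is affine, with a unique slope.
\forall f : D \to \mathbb{R},\ \exists!\, b \in \mathbb{R}
\ \text{such that}\ f(d) = f(0) + b \cdot d \quad \text{for all } d \in D.
```

The unique $b$ plays the role of $f'(0)$, so differentiation becomes an algebraic operation rather than a limit.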

The other sessions I went to were mostly given by philosophers, rather than physicists or mathematicians, though with exceptions.  I’ll briefly present my own biased and personal highlights of what I attended.  They included sessions titled:

“Quantum Physics”: Edward Slowik talked about the “prehistory of quantum gravity”, basically revisiting the debate between Newton and Leibniz on absolute versus relational space, suggesting that Leibniz’s view of space as a classification of the relations among his “monads” is more in line with relational theories such as spin foams etc.  M. Silberstein and W. Stuckey gave a talk about their “relational blockworld” (described here), which treats QFT as an approximation to a certain discrete theory, built on a graph, where the nodes of the graph are spacetime events, and using an action functional on the graph.

Meinard Kuhlmann gave an interesting talk about “trope bundles” and AQFT.  Trope ontology is an approach to “entities” that doesn’t assume there’s a split between “substrates” (which have no properties themselves) and “properties” which they carry around.  (A view of ontology that goes back at least to Aristotle’s “substance” and “accident” distinction, and maybe further for all I know.)  Instead, this is a “one-category” ontology – the basic things in this ontology are “tropes”, which he defined as “individual property instances” (i.e. as opposed to abstract properties that happen to have instances).  “Things”, then, are just collections of tropes.  To talk about the “identity” of a thing means to pick out certain of the tropes as the core ones that define that thing, and others as peripheral.  This struck me initially as a sort of misleading distinction we impose (say, “a sphere” has a core trope of its radial symmetry, and incidental tropes like its colour – but surely the way of picking the object out of the world is human-imposed), until he gave the example from AQFT.  To make a long story short, in this setup, the key entities are something like elementary particles, and the core tropes are those properties that define an irreducible representation of a $C^{\star}$-algebra (things like mass, spin, charge, etc.), whereas the non-core tropes are those that identify a state vector within such a representation: the attributes of the particle that change over time.

I’m not totally convinced by the “trope” part of this (surely there are lots of choices of properties which determine a representation, and I don’t see the need to give those particular properties the burden of being the only ontological primaries), but I also happen to like the conclusions, because in the $2Hilb$ picture, irreducible representations are states in a 2-Hilbert space, which are best thought of as morphisms, and the state vectors in their components are best thought of in terms of 2-morphisms.  An interpretation of that setup says that the 1-morphism states define which system one’s talking about, and the 2-morphism states describe what it’s doing.

“New Directions Concerning Quantum Indistinguishability”: I only caught a couple of the talks in this session, notably missing Nick Huggett’s “Expanding the Horizons of Quantum Statistical Mechanics”.  There were talks by John Earman (“The Concept of Indistinguishable Particles in Quantum Mechanics”), and by Adam Caulton (based on work with Jeremy Butterfield) on “On the Physical Content of the Indistinguishability Postulate”.  These are all about the idea of indistinguishable particles, and the statistics thereof.  Conventionally, in QM you only talk about bosons and fermions – one way to say what this means is that the permutation group $S_n$ naturally acts on a system of $n$ particles, and it acts either trivially (not altering the state vector at all), or by sign (each swap of two particles multiplies the state vector by a minus sign).  This amounts to saying that only one-dimensional representations of $S_n$ occur.  It is usually justified by the “spin-statistics theorem”, relating it to the fact that particles have either integer or half-integer spins (classifying representations of the rotation group).  But there are other representations of $S_n$, labelled by Young diagrams, which are more than one-dimensional.  These give rise to “paraparticle” statistics.  On the other hand, permuting particles in two dimensions is not homotopically trivial, so one ought to use the braid group $B_n$, rather than $S_n$, and this gives rise to yet different statistics, called “anyonic” statistics.
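As an aside on the representation theory involved: the dimensions of the $S_n$ irreps labelled by Young diagrams are given by the hook length formula.  A small self-contained sketch (standard combinatorics; the code is my own illustration, not from any of the talks):

```python
from math import factorial


def hook_dim(shape):
    """Dimension of the S_n irrep labelled by the partition `shape`,
    via the hook length formula: n! divided by the product of hooks."""
    n = sum(shape)
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1                          # cells to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)  # cells below
            hooks *= arm + leg + 1
    return factorial(n) // hooks


# For n = 4: only the two 1-dimensional irreps, (4) (trivial) and
# (1,1,1,1) (sign), give boson/fermion statistics; the higher-dimensional
# ones would give "paraparticle" statistics.
dims = {p: hook_dim(p) for p in [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]}
```

As a sanity check, the squares of these dimensions sum to $|S_4| = 24$.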

One recurring idea is that, to deal with paraparticle statistics, one needs to change the formalism of QM a bit, and expand the idea of a “state vector” (or rather, ray) to a “generalized ray” which has more dimensions – corresponding to the dimension of the representation of $S_n$ one wants the particles to have.  Anyons can be dealt with a little more conventionally, since a 2D system may already have them.  Adam Caulton’s talk described how this can be seen as a topological phenomenon or a dynamical one – making an analogy with the Aharonov-Bohm effect, where the holonomy of an EM field around a solenoid can be described either dynamically, with an interacting Lagrangian on flat space, or topologically, with a free Lagrangian on a space where the solenoid has been removed.

“Quantum Mechanics”: A talk by Elias Okon and Craig Callender about QM and the Equivalence Principle, based on this.  There has been some discussion recently as to whether quantum mechanics is compatible with the principle that relates gravitational and inertial mass.  They point out that there are several versions of this principle, and that although QM is incompatible with some versions, these aren’t the versions that actually produce general relativity.  (For example, objects with large and small masses fall differently in quantum physics, because though the mean travel time is the same, the variance is different.  But this is not a problem for GR, which only demands that all matter responds dynamically to the same metric.)  Also, talks by Peter Lewis on problems with the so-called “transactional interpretation” of QM, and Bryan Roberts on time-reversal.

“Why I Care About What I Don’t Yet Know”:  A funny name for a session about time-asymmetry – the essentially philosophical problem of why, if the laws of physics are time-symmetric (which they approximately are for most purposes), what we actually experience isn’t.  Personally, the best philosophical account of this I’ve read is Huw Price’s “Time’s Arrow”, though Reichenbach’s “The Direction of Time” has good stuff in it also, and there’s also Zeh’s more technical “The Physical Basis of the Direction of Time”.  In the session, Chris Suhler and Craig Callender gave an account of how, given causal asymmetry, our subjective asymmetry of values for the future and the past can arise (the intuitively obvious point being that if we can influence the future and not the past, we tend to value the future more).  Mathias Frisch talked about radiation asymmetry (the fact that it’s just as possible in EM to have waves converging on a source as spreading out from it, yet we never see the former).  Owen Maroney argued that “There’s No Route from Thermodynamics to the Information Asymmetry” by describing, in principle, how to construct a time-reversed (probabilistic) computer.  David Wallace spoke on “The Logic of the Past Hypothesis”, the idea, inspired by Boltzmann, that we see time-asymmetry because there is a point in what we call the “past” where entropy was very low, so we perceive the direction away from that state as “forward” in time, because the world tends to move toward equilibrium (though he pointed out that for dynamical reasons, the world can easily stay far from equilibrium for a long time).  He went on to discuss the logic of this argument, and the idea of a “simple” (i.e. easy-to-describe) distribution, and the conjecture that the evolution of these will generally be describable in terms of an evolution that uses “coarse graining” (i.e. that repeatedly throws away microscopic information).

“The Emergence of Spacetime in Quantum Theories of Gravity”:  This session addressed the idea that spacetime (or in some cases, just space) might not be fundamental, but could emerge from a more basic theory.  Christian Wüthrich spoke about “A-Priori versus A-Posteriori” versions of this idea, mostly focusing on ideas such as LQG and causal sets, which start with discrete structures, and get manifolds as approximations to them.  Nick Huggett gave an overview of noncommutative geometry for the philosophically minded audience, explaining how an algebra of observables can be treated like space by means of all the concepts from geometry which can be imported into the theory of $C^{\star}$-algebras, where space would be an approximate description of the algebra, obtained by letting the noncommutativity drop out of sight in some limit (which would be described as a “large scale” limit).  Sean Carroll discussed the possibility that “Space is Not Fundamental – But Time Might Be”, pointing out that even in classical mechanics, space is not a fundamental notion (since it’s possible to reformulate even Hamiltonian classical mechanics without making essential distinctions between position and momentum coordinates), and suggesting that space arises from the dynamics of an actual physical system – a Hamiltonian, in this example – by the principle “Position Is The Thing In Which Interactions Are Local”.  Finally, Tim Maudlin gave an argument for the fundamentality of time by showing how to reconstruct topology in space from a “linear structure” on points, saying what a (directed!) path among the points is.

Last week I spoke in Montreal at a session of the Philosophy of Science Association meeting.  Here are some notes for it.  Later on I’ll do a post about the other talks at the meeting.

Right now, though, the meeting slowed me down from describing a recent talk in the seminar here at IST.  This was Gonçalo Rodrigues’ talk on categorifying measure theory.  It was based on this paper here, which is pretty long and goes into some (but not all) of the details.  Apparently an updated version that fills in some of what’s not there is in the works.

In any case, Gonçalo takes as the starting point for categorifying ideas in analysis the paper “Measurable Categories” by David Yetter, which is the same point where I started on this topic, although he then concludes that there are problems with that approach.  Part of the reason for saying this has to do with the fact that the category of Hilbert spaces has many bad properties – or rather, fails to have many of the good ones that it should to play the role one might expect in categorifying ideas from analysis.

Yetter’s idea can be described, very roughly, as follows: we would like to categorify the concept of a function-space on a measure space $(X,\mu)$.  That is, spaces like $L^2(X,\mu)$ or $L^{\infty}(X,\mu)$.  The reason for this is that the 2-vector-spaces of Kapranov and Voevodsky are very elegant, but intrinsically finite-dimensional, categorifications of “vector space”.  An infinite-dimensional version would be important for representation theory, particularly of noncompact Lie groups or 2-groups, but even just infinite ones, since there are relatively few endomorphisms of KV 2-vector spaces.  Yetter’s paper constructs analogs to the space of measurable functions $\mathcal{M}(X)$, where “functions” take values in Hilbert spaces.

A measurable field of Hilbert spaces is, roughly, a family of Hilbert spaces indexed by points of $X$, together with a nice space of “measurable sections”.  This is supposed to be an infinite-dimensional, measure-theoretic counterpart to an object in a KV 2-vector space, which always looks like $\mathbf{Vect}^k$ for some natural number $k$, which is now being replaced by $(X,\mu)$.  One of the key tools in Yetter’s paper is the direct integral of a field of Hilbert spaces, which is similarly the counterpart to the direct sum $\bigoplus$ in the discrete world.  It just gives the space of measurable sections (taken up to almost-everywhere equivalence, as usual).  This was the main focus of Gonçalo’s talk.

The direct integral has one major problem, compared to the (finite) direct sum it is supposed to generalize – namely, the direct sum is a categorical coproduct, in $\mathbf{Vect}$ or any other KV 2-vector space.  Actually, it is both a product and a coproduct ($\mathbf{Vect}$ is abelian), so it is defined by a nice universal property.  The direct integral, on the other hand, is not.  It doesn’t have any similarly nice universal property.  (In the infinite-dimensional case, colimits and limits would be expected to become different in any case, but the direct integral is neither).  This means that many proofs in analysis will be hard to reproduce in the categorified setting – universal properties mean one doesn’t have to do nearly as much work to do this, among their other good qualities.  This is related to the issue that the category $\mathbf{Hilb}$ does not have all limits and colimits.

Gonçalo’s paper and talk outline a program where one can categorify a lot of the proofs in analysis, by using a slightly different framework which uses a bigger category than $\mathbf{Hilb}$, namely $Ban_C$, whose objects are Banach spaces and whose maps are (linear) contractions.  A Banach Category is a category enriched in $Ban_C$.  Now, Banach spaces have a norm, but not necessarily an inner product, and this small weakening makes them much worse than Hilbert spaces as objects.  Many intuitions from Hilbert spaces, like the one that says any subspace has a complement, just fail: the corresponding notion for Banach spaces is the quasicomplement ($X$ and $Y$ are quasicomplements if they intersect only at zero, and their sum is dense in the whole space), and it’s quite possible to have subspaces which don’t have one.  Other unpleasant properties abound.

Yet $Ban_C$ is a much nicer category than $Hilb$.  (So we follow the general dictum that it’s better to have a nice category with bad objects than a bad category with nice objects – the same motivation behind “smooth spaces” instead of manifolds, and the like.)  It’s complete and cocomplete (i.e. has all limits and colimits), as well as monoidal closed – for Banach spaces $A$ and $B$, the space $Hom(A,B)$ is also in $Ban_C$.  None of these facts holds for $Hilb$.  On the other hand, the space of bounded maps between Hilbert spaces is a Banach space (with the operator norm), but not necessarily a Hilbert space.  So even $Hilb$ is already a Banach category.

It also turns out that in $Ban_C$, unlike in $Hilb$, limits and colimits (where the latter exist in $Hilb$) are not necessarily isomorphic.  In particular, in $Ban_C$, the coproduct $A + B$ and the product $A \times B$ both have the same underlying vector space $A \oplus B$, but the norms are different.  For Hilbert spaces, the norm comes from the Pythagorean formula in either case, but for Banach spaces, the coproduct gets the sum of the two norms, and the product gets the supremum.  It turns out that coproducts are the more important concept, and this is where the direct integral comes in.
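A toy numerical illustration of the three norms on the same underlying space $A \oplus B$ (my own sketch, with made-up component norms $|a| = 3$, $|b| = 4$):

```python
import math

# A pair (a, b) in A (+) B, tracked only by the norms of its components.
na, nb = 3.0, 4.0

hilbert = math.hypot(na, nb)  # Pythagorean norm on a Hilbert direct sum
coprod = na + nb              # coproduct norm in Ban_C: sum of the norms
prod = max(na, nb)            # product norm in Ban_C: supremum of the norms

# Same vector space, three genuinely different norms -- so in Ban_C the
# product and coproduct are distinct objects, unlike in Hilb.
print(hilbert, coprod, prod)  # → 5.0 7.0 4.0
```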

First, we can talk about Banach 2-spaces (the analogs of 2-vector spaces): these are just Banach categories which are cocomplete (have all weighted colimits).  Maps between them are cocontinuous functors – that is, colimit-preserving ones.  (Though properly, functors between Banach categories ought to be contractions on Hom-spaces).  Then there are categorified analogs of all sorts of Banach space structure in a familiar way – the direct sum (coproduct) is the analog of vector addition, the category $Ban_C$ is the analog of the base field (say, $\mathbb{R}$), and so on.

This all gives the setting for categorified measure theory.  Part of the point of choosing $Ban_C$ is that you can now reason out at least some of how it works by analogy.  To start with, one needs to fix a Boolean algebra $\Omega$ – this is to be the $\sigma$-algebra of measurable sets for some measure space, though it’s important that it needn’t have any actual points (this is a notion of measure space akin to the notion of locale in the topological world).  This part of the theory isn’t categorified (arguably a limitation of this approach, but not one that’s any different from Yetter’s).  Instead, we categorify the definition of measure itself.

A measure is a function $\mu : \Omega \rightarrow \mathbb{R}$ – it assigns a number to each measurable set.  The pair $(\Omega,\mu)$ is a measure algebra, and relates to a measure space the way a locale relates to a topological space.  So a categorified measure $\nu$ should be a functor from $\Omega$ (seen now as a category) into $Ban_C$.  (We can generalize this: the measure could be valued in some vector space over $\mathbb{R}$, and a categorified measure could be a functor into some other Banach 2-space.)  Since we’re thinking of $\Omega$ as a lattice of subsets, it makes some sense to call $\nu$ a presheaf, or rather a co-presheaf.  What’s more, just as a measure is additive ($\mu(A + B) = \mu(A) + \mu(B)$ for disjoint sets, where $+$ is the union), so also the categorical measure $\nu$ should be (finitely) additive up to isomorphism.  So we’re assigning Banach spaces to all the measurable sets.  Being a “co”-presheaf – which is to say, a covariant functor – means the spaces “nest”: when we have $A \subset B$ for measurable sets, we also get an inclusion $\nu(A) \leq \nu(B)$.

An intuition for how this works comes from a special case (not at all exhaustive), where we start with an actual, uncategorified, measure space $(X,\mu)$.  Then one categorified measure will arise by taking $\nu(E) = L_1(E,\mu)$: the Banach space associated to a measurable set $E$ is the space of integrable functions.  We can take any “scalar” multiple of this, too: given a fixed Banach space $B$, let $\nu(E) = L_1(E,\mu) \otimes B$.  But there are lots of examples that aren’t like this.
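Here is a finite toy model of that intuition (entirely my own illustration: $X$ a four-element set with counting measure, $\Omega$ all subsets, and $\nu(E)$ the space of functions on $E$, tracked just by its dimension):

```python
from itertools import chain, combinations

X = {0, 1, 2, 3}


def nu_dim(E):
    # dim of the space of functions E -> R; stands in for the Banach
    # space nu(E) = L_1(E, counting measure) in this finite toy model
    return len(E)


def subsets(S):
    s = list(S)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))


# Finite additivity up to isomorphism: for disjoint A and B,
# nu(A u B) ~ nu(A) (+) nu(B), i.e. the dimensions add.
for A in map(set, subsets(X)):
    for B in map(set, subsets(X - A)):
        assert nu_dim(A | B) == nu_dim(A) + nu_dim(B)
```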

All this is fine, but the point here is to define integration.  The usual way to go about this when you learn analysis is to start with characteristic functions of measurable sets, then define a sequence through simple functions, measurable functions, and so forth.  Eventually one can define $L^p$ spaces based on the convergence of various integrals.  Something similar happens here.

The analog of a function here is a sheaf: a (compatible) assignment of Banach spaces to measurable sets.  (Technically, to get to sheaves, we need an idea of “cover” by measurable sets, but it’s pretty much the obvious one, modulo the subtlety that we should only allow countable covers.) The idea will be to start with characteristic sheaves for measurable sets, then take some kind of completion of the category of all of these as a definition of “measurable sheaf”.  Then the point will be that we can extend the measure from characteristic sheaves to all measurable sheaves using a limit (actually, a colimit), analogous to the way we define a Lebesgue integral as a limit of simple functions approximating a measurable one.

A characteristic sheaf $\chi(E)$ for a measurable set $E \in \Omega$ might be easiest to visualize in terms of a characteristic bundle, which just puts a copy of the base field (we’ve been assuming it’s $\mathbb{R}$) at each point of $E$, and the zero vector space everywhere else.  (This is a bundle in the measurable sense, not the topological one – assuming $X$ has a topology other than $\Omega$ itself.)  Very intuitively, to turn this into a sheaf, one can just use brute force and assign to a set $A$ the product of all the spaces lying over $A$.  A little less crudely, one should take a space of sections with decent properties – so that $\chi(E)$ assigns to $A$ a space of functions on $E \cap A$.  In particular, the functor $\chi : \Omega \rightarrow L_{\infty}(\Omega)$ which picks out all the (measurable) bounded sections is a universal way to do this.

Now the point is that the algebra of measurable sets, $\Omega$, thought of as a category, embeds into the category of presheaves on it by $\chi : \Omega \rightarrow \mathbf{PShv}(\Omega)$, taking a set to its characteristic sheaf.  Given a measure valued in some Banach category, $\nu : \Omega \rightarrow \mathcal{B}$, we can find the left Kan extension $\int_X d\nu : \mathbf{PShv}(\Omega) \rightarrow \mathcal{B}$, such that $\nu = \int_X d\nu \circ \chi$.  The Kan extension is a universal way to extend $\nu$ to all of $\mathbf{PShv}(\Omega)$ so that this is true, and it can be calculated as a colimit.
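Concretely, the left Kan extension can be computed pointwise by the standard coend formula (a weighted colimit; the notation is mine, and the copower $\cdot$ of $\mathcal{B}$ over $Ban_C$ is assumed to exist):

```latex
% Pointwise formula for the left Kan extension of nu along chi:
\left( \int_X d\nu \right)(F)
\;=\; (\mathrm{Lan}_{\chi}\,\nu)(F)
\;\cong\; \int^{E \in \Omega} \mathbf{PShv}(\Omega)\big(\chi(E),\, F\big) \cdot \nu(E)
```

This makes the analogy with integration visible: the “integral” of a presheaf $F$ is built by gluing copies of the values $\nu(E)$, weighted by how $F$ compares to the characteristic sheaves.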

The essential fact here is that the characteristic sheaves are dense in $\mathbf{PShv}(\Omega)$: any presheaf can be found as a colimit of the characteristic ones.  This is analogous to how any function can be approximated by linear combinations of characteristic functions.  This means that the integral defined above will actually give interesting results for all the sheaves one might expect.

I’m glossing over some points here, of course – for example, the distinction between sheaves and presheaves, the role of sheafification, etc.  If you want to get a more accurate picture, check out the paper I linked to up above.

All of this granted, however, many of the classical theorems of measure theory have analogs that are proved in essentially the same way as the standard versions.  One can see the presheaf category as a categorified analog of $L_1(X,\nu)$, and get the Fubini theorem, for instance: there is a canonical equivalence (no longer isomorphism) between (a suitable) tensor product of $\mathbf{PShv}(X)$ and $\mathbf{PShv}(Y)$ on one hand, and on the other $\mathbf{PShv}(X \times Y)$.  Doing integration, one can then do all the usual things – exchange order of integration between $X$ and $Y$, say – in analogous conditions.  The use of universal properties to define integrals etc. means that one doesn’t need to fuss about too much with coherence laws, and so the proofs of the categorified facts are much the same as the original proofs.

On a tangential note, let me point out John Baez’ most recent “This Week’s Finds”, which has an accessible but fairly in-depth discussion of climate modelling.  There have been many years of very loud public discussion of this which, for reasons of politics, seems to involve putting the “Mathematical models are inherently elitist gibberish” and “Science knows everything so shut up, moron” positions on display and letting the viewer decide.  This is known in the journalism trade as “balance”.  Obviously, within the research community working on them, there’s a mountain of literature on what the models model, how detailed they are, how they work, etc., but it mostly goes over my head, so John’s post strikes a nice balance for me.

Like most computer simulation models, they’re basically discrete approximations to big systems of differential equations – but exactly which systems, how they’re developed, how accurately they model the real thing, and the relative merits of simple vs. complex models is the main point.  The use of Monte Carlo methods and Bayesian analysis to tune the various free parameters is a key part of the matter of how accurate they should be.  Anyway – check it out.

Meanwhile, the TQFT club at IST recently started up its series of seminars.  The first few speakers were Rui Carpentier, Anne-Laure Thiel, and Marco Mackaay.  Rui is faculty here at IST, and a former student of Roger Picken (his thesis was on a topic closely related to what he was talking about).  Anne-Laure is a post-doc here at IST, mainly working with Marco, who, however, is actually at the University of the Algarve in Faro, Portugal, and had to come up to Lisbon specially for the seminar.  Anne-Laure and Marco were both speaking mainly about some of the Soergel bimodule stuff which came up at the Oporto meeting on categorification, which I posted about previously, so I’ll go over that in a bit more detail here.

First, though, Rui Carpentier’s talk:

## 3-colourings of Cubic Graphs and Operators

All these talks involve algebraic representations of categories that can be represented by some graphical calculus, but in this case, one starts with a category whose morphisms are precisely graphs with loose ends.  (The objects are non-negative integers, or, if you like, finite sets of dots which act as the endpoints of the loose ends.)  The graphs are trivalent (except at the input and output vertices, which are 1-valent), hence “cubic graphs”.  This category is therefore called $\mathbf{CG}$, and it has a small number of generators, which happen to be quite similar to those which generate the category of 2D cobordisms (one of the connections to TQFT), though the relations are slightly different.

Roughly, and without drawing the pictures: the generators are cup and cap (the shapes $\cup$ and $\cap$), two different trivalent vertices (a $Y$, and the same upside-down), the swap (an $X$ where the strands cross without a vertex), and the identity (just a vertical line).  There are a number of relations, including Reidemeister moves, on these generating pictures, which ensure that they’re enough to identify graphs up to isotopy of the pictures.

Then the point is to describe graphs using operators – that is, construct a representation $F :\mathbf{CG} \rightarrow \mathbf{Vect}$.   Given any such representation, these generators provide all the structure maps of a bialgebra – chiefly, unit, counit, multiplication and co-multiplication – and the relations imposed by isotopy make this work (though unlike some other situations, it’s neither commutative nor cocommutative).  The representation $F$ he constructs is based on 3-colourings of the edges of the graphs.  At the object level, it assigns to a dot the 3-dimensional vector space $V= span(e_1,e_2,e_3)$.  Being monoidal, $F$ takes the object $n$ to $V^{\otimes n}$ – the tensor product of the spaces at each vertex.

The idea is that choosing a basis vector in this space amounts to picking a colouring of the incoming and outgoing edges.  For morphisms, we should note that the rule that says when a colouring is admissible is that all the edges incident to a given vertex must have different colours.  Then, given a morphism (graph) $G : m \rightarrow n$, we can describe the linear map $F(G)$ most easily by saying that the component in the matrix, given an incoming and outgoing basis vector, just counts the number of admissible graphs that agree with the chosen colourings on the in-edges and out-edges.
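For a closed graph (no in- or out-edges), the “matrix” is a single number: the total count of admissible colourings.  A brute-force sketch (my own code, not from the talk) for the complete graph $K_4$, which is cubic:

```python
from itertools import product

# Edges of the complete graph K4: a closed cubic graph, so F(K4) is just
# the number of admissible 3-colourings of its edges.
vertices = range(4)
edges = [(a, b) for a in vertices for b in vertices if a < b]


def admissible(colouring):
    # all edges incident to a given vertex must get different colours
    for v in vertices:
        incident = [c for e, c in zip(edges, colouring) if v in e]
        if len(set(incident)) != len(incident):
            return False
    return True


count = sum(admissible(c) for c in product(range(3), repeat=len(edges)))
print(count)  # → 6: each colour class is a perfect matching, and the
              # three matchings of K4 can be coloured in 3! ways
```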

There’s another functor, $\hat{F}$, which counts these colourings with a sign, marking whether the graph contains an odd or an even number of crossings of differently-coloured edges – negative for odd, positive for even.  This is the “Penrose evaluation” of the graph.

So these maps give the “operators” of the title, and the rest of the point is to use them to study graphs and their colourings.  One can, in this setup, rewrite some graphs as linear combinations of others – so-called “Skein relations” hold, for example, so that, after applying $F$, the composite of multiplication and comultiplication (taking two points to two points, through one cut-edge) is the same as the identity minus the swap.  This sort of thing appears in formal knot theory all the time, and is a key tool for recoupling in spin networks, and so on…
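One can sanity-check that skein relation numerically under one concrete choice of conventions (all assumptions mine: colours $\{0,1,2\}$, $m(e_i \otimes e_j) = e_k$ when $\{i,j,k\} = \{0,1,2\}$ and zero otherwise, comultiplication as the transpose, and the signed swap of the Penrose evaluation $\hat{F}$, which negates crossings of distinct colours):

```python
import numpy as np

# Basis of V: e_0, e_1, e_2; basis of V (x) V indexed by (i, j) -> 3*i + j.
m = np.zeros((3, 9))  # multiplication: the trivalent Y vertex
for i in range(3):
    for j in range(3):
        if i != j:
            k = 3 - i - j  # the third colour, since 0 + 1 + 2 = 3
            m[k, 3 * i + j] = 1

delta = m.T  # comultiplication: the upside-down Y

swap_hat = np.zeros((9, 9))  # signed swap: -1 on crossings of distinct colours
for i in range(3):
    for j in range(3):
        sign = 1 if i == j else -1
        swap_hat[3 * j + i, 3 * i + j] = sign

# Skein relation: comultiplication after multiplication (two points to two
# points through one cut-edge) equals identity minus the (signed) swap.
assert np.array_equal(delta @ m, np.eye(9) - swap_hat)
```

With the unsigned functor $F$ the signs don’t come out right, which is one way to see why the Penrose evaluation is the natural one here.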

Given this “recoupling” idea, there are some important facts: first, any graph can be rewritten as a linear combination of planar graphs, and any planar graph with cycles can be reduced to a sum of planar graphs without cycles.  (Rui gave the example of decomposing a pentagonal cycle as a linear combination of four other graphs, three of which are disconnected).  So in fact any graph decomposes as a linear combination of forests (cycle-free graphs, the connected components of which are called “trees”, hence the name).  Another essential fact is that, due to the Euler characteristic of the plane, any planar graph can be split into two parts with at most five edges between them (the basis of the solution to the three utilities puzzle).  Then it so happens that the space of graphs connecting zero in-edges to five out-edges is a 6-dimensional space, $\mathcal{V}^o_5$, generated by just six forests (including one lonesome tree).

So one theorem which Rui told us about, which can be shown using the so-called Penrose relations (provable using the representations $F$ and $\hat{F}$), is that there’s just one such graph (which he described in the particular basis above) that evaluates to zero when composed with some other graph.  The proof of this uses the Four Colour Theorem (3-colouring of graph edges being related to 4-colouring of planar regions); in fact, the two theorems are equivalent, so if anyone can find an alternative proof of this one, the bonus is another proof of the FCT.

Finally, he gave a conjecture that, if true, would help recognize planar graphs just by the operators produced by the representation $\hat{F}$ (at least it proposes a necessary condition).  This conjecture says that if a planar graph with five output edges (the maximum, remember) is written in the basis mentioned above, then the sum of the coefficients of the five disconnected forests is nonnegative.  (Thus, the connected tree doesn’t contribute to this measure.)  This is still just a conjecture – Rui said that to date neither proof nor counterexample has been found.

## Soergel Bimodules, Singular and Virtual Braids

As I mentioned up top, I previously posted a bit about work on Soergel bimodules when describing Catharina Stroppel’s talk at the meeting in Faro in July.  To recap: they are associated with categories of modules over rings – specifically, rings of certain classes of symmetric functions.  Even more specifically, given a partition $\lambda$ of an integer $n$, there is a subgroup of the symmetric group $S_{\lambda} \subset S_n$ which fixes the partition.  All such groups act on the ring of $n$-variable polynomial functions $R =\mathbb{Q}[x_1, \dots, x_n]$, and the ones fixed by $S_{\lambda}$ form the ring $R^{\lambda}$.
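To make the invariant rings $R^{\lambda}$ concrete, here is a small check of my own (not from the talk), using sympy.  For $n = 3$ and the partition $\lambda = (2,1)$, the subgroup $S_{\lambda}$ is generated by the transposition swapping $x_1$ and $x_2$, and $R^{\lambda}$ consists of the polynomials fixed by that swap:

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")

def swap12(p):
    # action of the transposition (1 2), the generator of S_lambda for lambda = (2,1)
    return p.subs({x1: x2, x2: x1}, simultaneous=True)

p = x1 * x2 + x3**2   # symmetric in x1, x2: lies in R^lambda
q = x1 - x3           # sent to x2 - x3 by the swap: not in R^lambda
```

Any polynomial symmetric in $x_1, x_2$ (but with $x_3$ free) passes the test, which matches the description of $R^{\lambda}$ as the fixed ring of $S_{\lambda} \subset S_3$.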

Now, these groups are all related to each other in a web of containments, hence so are the rings.  So the categories of modules over the various rings $R^{\lambda}$ are connected by various functors.  Given a containment $R^{\lambda '} \subset R^{\lambda}$, modules over $R^{\lambda}$ can be restricted to ones over $R^{\lambda '}$, and modules over $R^{\lambda '}$ can be induced up to ones over $R^{\lambda}$.  The restriction and induction functors can be represented as “tensor with a bimodule” (this is much the same classification as that for 2-linear maps which I’ve said a bunch about here, except that those must be free).  Applying induction functors repeatedly gives arbitrarily large bimodules, but they are built as direct sums of simple parts.  Those simple parts, and any direct sums of them, are Soergel bimodules.  The point is that such bimodules describe morphisms.

So in the TQFT club, Marco Mackaay gave the first of a series of survey talks on this topic, and Anne-Laure Thiel gave a talk about the “Categorification of Singular Braid Monoids and Virtual Braid Groups”.  Since Marco’s talk was the first in a series of surveys, and a lot of what it surveyed was work described in my post on the Faro meeting, I’ll just mention that it dealt with the original motivation of a lot of this work in categorifying representation theory of Lie algebras (c.f. the discussion of the Khovanov-Lauda categorification of quantum groups in the previous post), and also got a bit into some of the different diagrammatic calculi created for that purpose, along the lines of the talks by Ben Webster and Geordie Williamson at that meeting.  Maybe when Marco has given more of these talks, I’ll return to this one here as well.

Now, the starting point of Anne-Laure’s talk was that the setup above lets one define a category with a presentation like that of the Hecke algebra (a quotient of the group algebra of the braid group), where exact relations become isomorphisms.  That is, we go from a category where morphisms are braids (up to isotopy and Reidemeister moves and so forth as usual) to a 2-category where the morphisms are bimodules, which happen to satisfy the same relations.  (The 2-morphisms, bimodule maps, are what allow relations to hold weakly…)

Specifically, the generators of the braid group are $\sigma_i$, the braids taking the $i^{th}$ strand over the $(i+1)^{st}$.  The parallel thing is $B_i = R \otimes_{R^{\sigma_i}} R$, where here we’re talking about the subgroup generated by the transposition of $i$ and $i+1$.  In the language of partitions, this corresponds to a $\lambda$ with one part of size two, $(i,i+1)$, and the rest of size one.  Now, since this bimodule is actually built from polynomials in $R$, it naturally has a grading – this corresponds to the degree of $q$, since the Hecke algebra involves a quotient giving q-deformed relations – so there is a degree-shift operation that categorifies multiplication by $q$.  This much is due to Soergel.

Anne-Laure’s talk was about extending this to talk about a categorification, first of the braid group in terms of complexes of these bimodules (due actually to Rouquier), then virtual and singular braids.  These, again, are basically creatures of formal knot theory (see link above).  They can be described by a presentation similar to that for braids – just as the braid group has a generators-and-relations presentation in terms of over-crossings of adjacent strands, these incorporate other kinds of crossings.  Singular braids allow a sort of “through” crossing, where the $i^{th}$ strand goes neither over nor under the $(i+1)^{st}$.  Virtual braids (the braid variant on virtual knots) have a special type of marked crossing called the “virtual crossing”, drawn with a little circle around it.  These are included as new generators in describing the virtual braid group, and of course some new relations are added to show how they relate to the original generators – variations on the Reidemeister moves, for example.

To categorify this, Anne-Laure explained that these new generators can also be represented by bimodules, but these ones need to be twisted.  In particular, twisting the bimodule $R$ by the action of a permutation $\omega \in S_n$ gives $R_{\omega}$, which is the same as $R$ as a left $R$-module, but is acted on by an element $a \in R$ on the right through multiplication by $\omega(a)$, so that $b \cdot p \cdot a = b\, p\, \omega(a)$.  Then the new generators, beyond the $B_i = R \otimes_{R^{\sigma_i}} R$, are of the form $R_{\omega} \otimes_{R^{\omega '}} R$.  These then satisfy the right relations for this to categorify the virtual braid group.
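The twisted action is easy to exhibit symbolically – here is a minimal sketch of my own (function names are made up for the example), taking $\omega$ to be the transposition swapping $x_1$ and $x_2$ in $R = \mathbb{Q}[x_1, x_2, x_3]$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")

def omega(a):
    # the permutation (1 2) acting on R by permuting variables
    return a.subs({x1: x2, x2: x1}, simultaneous=True)

def act(b, p, a):
    # two-sided action on the twisted bimodule R_omega: b . p . a = b * p * omega(a)
    return sp.expand(b * p * omega(a))

# right multiplication by x1 on the element 1 gives x2, exhibiting the twist
twisted = act(sp.Integer(1), sp.Integer(1), x1)
```

The left action is untouched (multiplication by $b$ as usual), while the right action routes through $\omega$; the two still commute, which is the bimodule axiom.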

So this is a couple of weeks backdated.  I’ve had a pretty serious cold for a while – either it was bad in its own right, or this was just a case of the difference in native viruses between two different continents that my immune system wasn’t prepared for.  Then, too, last week was Republic Day – the 100th anniversary of the middle of three revolutions (the Liberal, the Republican, and the Carnation revolution that ousted the dictatorship regime in 1974 – and let me say that it’s refreshing for a North American to be reminded that Republicanism is a refinement of Liberalism, though how the flowers fit into it is less straightforward).  So my family and I went to attend some of the celebrations downtown, which were impressive.

Anyway, with the TQFT club seminars starting up very shortly, I wanted to finish this post on the first talks I got to see here at IST, which were on pretty widely different topics.  The first was by Ivan Smith, entitled “Quadrics, 3-Manifolds and Floer Cohomology”.  The second was a recorded video talk arranged by the string theory group.  This was a recording of a talk given by Kostas Skenderis a couple of years ago, entitled “The Fuzzball Proposal for Black Holes”.

## Ivan Smith – Quadrics, 3-Manifolds and Floer Cohomology

Ivan Smith’s talk began with some motivating questions from topology, symplectic geometry, and the study of moduli spaces.  The topological question concerns a 3-manifold $Y$ and the space of representations $Hom(\pi_1(Y),G)$ of its fundamental group into a compact Lie group $G$, generally $SO(3)$ or $SU(2)$.  Specifically, the question is how this space is affected by operations on $Y$ such as surgery, taking covering spaces, etc.  The symplectic geometry question asks, for a symplectic manifold $(X,\omega)$, what the “mapping class group” of symplectic transformations is – that is, the group $\pi_0(Symp(X))$ of connected components of symplectomorphisms from $X$ to itself – in a sense, this is asking how much of the geometry is seen by the symplectic situation.  The question about moduli spaces asks to characterize the action of the (again, mapping class group of) diffeomorphisms of a Riemann surface on the moduli space of bundles on it.  (This space, for $\Sigma$ with genus $g \geq 2$, looks like $M_g \simeq Hom(\pi_1(\Sigma),SU(2))$ modulo conjugation.  It is the complex-manifold version of the space of flat connections which I’ve been quite interested in for purposes of TQFT, though this is a coarse quotient, not a stack-like quotient.  Lots of people are interested in this space in its various hats.)

The point of the talk was to elucidate how these all fit together.  The first part of the title, “Quadrics”, referred to the fact that, when $\Sigma$ has genus 2, the moduli space we’ll be looking at can be described as an intersection of some varieties (defined by quadric equations) in the projective space $\mathbb{CP}^5$.  Knowing this, one can describe some of its properties just by looking at intersections of curves.

In general we’re talking about complex manifolds, here.  To start with, for Riemann surfaces (one-dimensional complex manifolds), he pointed out that there is an isomorphism between the mapping class groups of symplectomorphisms and diffeomorphisms: $\pi_0(Symp(\Sigma)) \simeq \pi_0(Diff(\Sigma))$.  But in general, for example, for 3-dimensional manifolds, there is structure in the symplectic maps which is forgotten by the smooth ones – there’s still a map $\pi_0(Symp(\Sigma)) \rightarrow \pi_0(Diff(\Sigma))$, but it has a kernel – there are distinct symplectic maps that all look like the identity up to smooth deformation.

Now, our original question was what the action of the diffeomorphisms of $\Sigma$ on the moduli space $M_g$ of bundles over it looks like.  An element $h$ of $\pi_0(Diff(\Sigma))$ acts (by a symplectic map) on it.  The discrepancy we mentioned is that the map corresponding to $h$ will always have fixed points, but be smoothly equivalent to one that doesn’t.  So the smooth mapping class group can’t detect the property of having fixed points.  What it CAN detect, however, is information about intersections.  In particular, as mentioned above, the moduli space of bundles over a genus 2 surface is an intersection; in this situation, there is an injective map back from the smooth mapping class group into the group of classes of symplectic maps.  So looking symplectically loses nothing from the smooth case.

Now, these symplectic maps tie into the third part of the title, “Floer Cohomology”, as follows.  Given a symplectic map $\phi : (X,\omega) \rightarrow (X,\omega)$, one can define a graded vector space $HF(\phi)$, which is the usual cohomology of a chain complex generated by fixed points of the map $\phi$, with a differential $\partial$ defined by counting certain curves.  The way this is set up, if $\phi$ is the identity, so that all points are fixed points, one gets the usual cohomology of the space $X$ – except that it’s defined so as to be the quantum cohomology of $X$ (for more, check out this tutorial by Givental).  This has the same complex as the usual cohomology, but with the cup product replaced by a deformed product.  It’s an older theorem (due to Donaldson) that, at least for genus 2, the quantum cohomology of the moduli space of bundles over $\Sigma$ splits into a direct sum of rings:

$QH^*(M_2) \cong \mathbb{C} \oplus QH^*(\Sigma_2) \oplus \mathbb{C}$

So one of the key facts is that this works also with Floer cohomology for maps other than the identity (so the above becomes a special case).  That is, replacing $QH^*$ in the above with $HF^*(\phi)$ for any $\phi$ (acting either on the surface $\Sigma$ or, via the induced action, on the moduli space) still gives a true statement.  Note that this actually implies the theorem that such maps have fixed points on the space of bundles, since the right hand side is always nontrivial.

So at this point we have some idea of how Floer cohomology is part of what ties the original three questions together.  To take a further look at these we can start to build a category combining much of the same information.  This is the (derived) Fukaya category.  The objects are Lagrangian submanifolds of a symplectic manifold $(X,\omega)$ – half-dimensional submanifolds on which the symplectic form restricts to zero.  To start building the category, consider what we can build from pairs of such objects $(L_1,L_2)$.  This is rather like the above – we define a graded vector space, which is the cohomology of another complex.  Instead of being the complex freely generated by fixed points, though, it’s generated by intersection points of $L_1$ and $L_2$.  This automatically becomes a module over $QH^*(X)$, so the category we’re building is enriched over these modules.

Defining the structure of this category is apparently a little bit complicated – in particular, there is a composition product $HF(L_1,L_2) \otimes HF(L_2,L_3) \rightarrow HF(L_1,L_3)$ in the form of a cohomology operation.  Furthermore – though Ivan Smith didn’t have time to describe them in detail – there are other “higher” products.  These are Massey-type products, which is to say higher-order cohomology operations, which involve more than two inputs.  These give the whole structure (where one takes the direct sum of all those hom-modules $HF(L_i,L_j)$ to get one big module) the structure of an $A_{\infty}$-algebra (so the Fukaya category is an $A_{\infty}$-category, I suppose).  This is one way of talking about weak higher categories (the higher products give the associator for composition, and its higher analogs), so in fact this is a pretty complex structure, which the talk didn’t dwell on in detail.  But in any case, the point is that the operations in the category correspond to cohomology operations.

Then one deals with the “derived” Fukaya category $\mathcal{DF}(X)$.  I understand derived categories to be (at least among other examples) a way of taking categories of complexes “up to homotopy”, perhaps as a way of getting rid of some of this complication.  Again, the talk didn’t elaborate too much on this.  However, the fundamental theorem about this category is a generalization of the theorem above about quantum cohomology:

$\mathcal{DF}(M_2) \cong \mathcal{DF}(pt) \oplus \mathcal{DF}(\Sigma_2) \oplus \mathcal{DF}(pt)$

That is, the derived Fukaya category for the moduli space of bundles over $\Sigma_2$ is the category for the Riemann surface itself, summed with two copies of the category for a single point (which is replacing the two copies of $\mathbb{C}$).  This reduces to the previous theorem when we’re looking at the map $\phi = id$, just as before.

The last point Ivan Smith addressed is that these sorts of categories are often hard to calculate explicitly, but they can be described in terms of some easily-described data.  He gave the analogy of periodic functions – which may be quite complicated, but which, by means of Fourier decomposition, can be easily described in terms of sines and cosines, which are easy to analyze.  In the same way, although the Fukaya categories for particular spaces might be complicated, they can be described in terms of the (derived) category of modules over $A_{\infty}$-algebras.  In particular, every category $\mathcal{DF}(X)$ embeds in a generic example $\mathcal{D}(mod-A_{\infty}-alg)$.  So by understanding categories like this, one can understand a lot about the categories that come from spaces, which generalize quantum cohomology as described above.

I like this punchline of the analogy with Fourier analysis, as imprecise as it might be, because it suggests a nice way to approach complex entities by finding out the parts that can generate them, or simple but large things you might discover them inside.

## Fuzzballs

The Skenderis talk about black holes was interesting, in that it was a recorded version of a talk given somewhere else – I haven’t seen this done before, but apparently the String Theory group does it pretty regularly.  This has some obvious advantages – they can get a wider range of talks by many different speakers.  There was some technical problem – I suppose due to the way the video was encoded – that meant the slides were sometimes unreadably blurry, but that’s still better than not getting the speaker at all.  I don’t have the background in string theory to be able to really get at the meat of the talk, though it did involve the AdS/CFT correspondence.  However, I can at least say a few concrete things about the motivation.  First, the “fuzzball” proposal is a more-or-less specific proposal to deal with the problem of black hole entropy.

The problem, basically, is that it’s known that the thermodynamic entropy associated to a black hole – which can be computed in completely macroscopic terms – is proportional to the area of its horizon.  On the other hand, in essentially every other setting, entropy has an interpretation in terms of counting microstates, so that the entropy of a “macrostate” is proportional to the logarithm of the number of microstates.  (Or, in a thermal state, which is a statistical distribution, this is weighted by the probability of the microstate.)  So, for example, with a gas in a box, there are many microstates that correspond to a relatively even distribution of position and momentum among the molecules, and relatively few in which the molecules are all in one small corner of the box.
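The gas-in-a-box picture is easy to quantify in a standard toy calculation (my own sketch, not from the talk): take $N$ molecules, let the “macrostate” be the number $k$ sitting in the left half of the box, so its microstate count is the binomial coefficient $\binom{N}{k}$, and the entropy is its logarithm (in units of $k_B$):

```python
from math import comb, log

N = 100  # molecules in a box split into two halves

# number of microstates realizing the macrostate "k molecules in the left half"
microstates = {k: comb(N, k) for k in (0, 25, 50)}

# Boltzmann entropy S = log W (in units of k_B)
entropy = {k: log(W) for k, W in microstates.items()}
```

The even split $k = 50$ has vastly more microstates (hence higher entropy) than the lopsided ones, and the extreme macrostate $k = 0$ has exactly one microstate and zero entropy – the analogue of the pure-state/no-horizon point made below.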

The reason this is a problem is that, classically, the state of a black hole is characterized by very few numbers: the mass, angular momentum, and electric charge.  There doesn’t seem to be room for “microstates” in a classical black hole.  So the overall point of the proposal is to describe what those microstates would be.  The specific way this is done with “fuzzballs” is somewhat mysterious to me, but the overall idea makes sense.  One interesting consequence of this approach is that event horizons would be strictly a property of thermal states, in whatever underlying theory one takes to be the quantum theory behind classical gravity (here assumed to be some specific form of string theory – the example he was using is the so-called D1-D5 black hole, which I know nothing about).  That’s because a pure state would have a single microstate, hence have zero entropy, hence no horizon.

Now, what little I do understand about the particular model relies on the fact that near a (classical) event horizon, the background metric has a component that looks like anti-de Sitter space – a vacuum solution to the Einstein equations with a negative cosmological constant.  (This part isn’t so hard to see – AdS space has that “saddle-shaped” appearance of a hyperbolic surface, and so does the area around a horizon, even when you draw it like this.)  But then, there is the AdS/CFT correspondence that says states for a gravitational field in (asymptotically) anti-de Sitter space correspond to states for a conformal field theory (CFT) at the boundary.  So the way to get microstates, in the “fuzzball” proposal, is to look at this CFT, and find geometries that correspond to its states.  Some would be well-approximated by the classical, horizon-ridden geometry, but others would be different.  The fact that this CFT is defined at the boundary explains why entropy would be proportional to area, not volume, of the black hole – this being a manifestation of the so-called “holographic principle”.  The “fuzziness” that one throws away by reducing a thermal state that combines these many geometries to the classical “no-hair” black hole determined by just three numbers is exactly the information described by the entropy.

I couldn’t follow some parts of it, not having much string-theory background – I don’t feel qualified to judge whether string theory makes sense as physics, but it isn’t an approach I’ve studied much.  Still, this talk did reinforce my feeling that the AdS/CFT correspondence, at the very least, is something well-worth learning about and important in its own right.

Coming soon: descriptions of the TQFT club seminars which are starting up at IST.