Marco Mackaay recently pointed me at a paper by Mikhail Khovanov, which describes a categorification of the Heisenberg algebra (or anyway its integral form) in terms of a diagrammatic calculus. This is very much in the spirit of the Khovanov-Lauda program of categorifying Lie algebras, quantum groups, and the like. (There’s also another one by Sabin Cautis and Anthony Licata, following up on it, which I fully intend to read but haven’t done so yet. I may post about it later.)
Now, as alluded to in some of the slides I’ve posted from recent talks, Jamie Vicary and I have been looking at a slightly different way to approach the same problem, so before I talk about the Khovanov paper, I’ll say a tiny bit about why I was interested.
The Weyl algebra (or the Heisenberg algebra – the difference being whether the commutation relations that define it give real or imaginary values) is interesting for physics-related reasons, being the algebra of operators associated to the quantum harmonic oscillator. The particular approach to categorifying it that I’ve worked with goes back to something that I wrote up here, and as far as I know, was originally suggested by Baez and Dolan here. This categorification is based on “stuff types” (Jim Dolan’s term, based on “structure types”, a.k.a. Joyal’s “species”). It’s an example of the groupoidification program, the point of which is to categorify parts of linear algebra using the category $\mathbf{Span}(\mathbf{Gpd})$. This has objects which are groupoids, and morphisms which are spans of groupoids: pairs of maps $A \leftarrow X \rightarrow B$. Since I’ve already discussed the background here before (e.g. here and to a lesser extent here), and the papers I just mentioned give plenty more detail (as does “Groupoidification Made Easy“, by Baez, Hoffnung and Walker), I’ll just mention that this is actually more naturally a 2-category (maps between spans are maps $X \to X'$ making everything commute). It’s got a monoidal structure, is additive in a fairly natural way, has duals for morphisms (by reversing the orientation of spans), and more. Jamie Vicary and I are both interested in the quantum harmonic oscillator – he did this paper a while ago describing how to construct one in a general symmetric dagger-monoidal category. We’ve been interested in how the stuff type picture fits into that framework, and also in trying to examine it in more detail using 2-linearization (which I explain here).
Anyway, stuff types provide a possible categorification of the Weyl/Heisenberg algebra in terms of spans and groupoids. They aren’t the only way to approach the question, though – Khovanov’s paper gives a different (though, unsurprisingly, related) point of view. There are some nice aspects to the groupoidification approach: for one thing, it gives a nice set of pictures for the morphisms in its categorified algebra (they look like groupoids whose objects are Feynman diagrams). The Khovanov-Lauda program has two great features: the diagrammatic calculus gives a great visual representation of the 2-morphisms; and by dealing with generators and relations directly, it describes, in some sense1, the universal answer to the question “What is a categorification of the algebra with these generators and relations?” Here’s how it works…
One way to represent the Weyl/Heisenberg algebra (the two terms refer to different presentations of isomorphic algebras) uses a polynomial algebra $\mathbb{C}[x_1, \dots, x_n]$. In fact, there’s a version of this algebra for each natural number $n$ (the stuff-type references above only treat $n = 1$, though extending it to “$n$-sorted stuff types” isn’t particularly hard). In particular, it’s the algebra of operators on $\mathbb{C}[x_1, \dots, x_n]$ generated by the “raising” operators $a_j^\dagger = x_j \cdot (-)$ and the “lowering” operators $a_j = \partial_{x_j}$. The point is that this is characterized by some commutation relations. For $i \neq j$, we have:

$a_i a_j^\dagger = a_j^\dagger a_i$
but on the other hand:

$a_i a_i^\dagger - a_i^\dagger a_i = 1$
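Just to make the relation concrete, here’s a quick numerical check (mine, not from any of the papers): truncate $\mathbb{C}[x]$ to polynomials of degree less than $N$, so raising and lowering become $N \times N$ matrices, and verify that $[a, a^\dagger] = 1$ holds away from the truncation boundary.

```python
import numpy as np

N = 8  # truncate C[x] at degree N: basis {1, x, ..., x^{N-1}}
a = np.diag(np.arange(1.0, N), k=1)        # lowering a = d/dx:  x^k -> k x^{k-1}
a_dag = np.diag(np.ones(N - 1), k=-1)      # raising a† = x·(-):  x^k -> x^{k+1}

comm = a @ a_dag - a_dag @ a
# [a, a†] = 1, except in the last slot: an artifact of cutting off at degree N
assert np.allclose(comm[:N-1, :N-1], np.eye(N - 1))
assert comm[N-1, N-1] == -(N - 1)
```

The defect in the corner is unavoidable: on a finite-dimensional space, no two operators can have commutator exactly equal to the identity, since the trace of a commutator vanishes.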
So the algebra could be seen as just a free thing generated by symbols $a_i, a_i^\dagger$ with these relations. These can be understood to be the “raising and lowering” operators for an $n$-dimensional harmonic oscillator. This isn’t the only presentation of this algebra. There’s another one where the generators have a slightly different interpretation, as the position and momentum operators for the same system. Finally, a third one – which is the one that Khovanov actually categorifies – is skewed a bit, in that it replaces the raising and lowering operators with two families of generators $a_n$ and $b_n$ (one for each $n \in \mathbb{N}$, with $a_0 = b_0 = 1$), so that the commutation relation actually looks like

$b_n a_m = a_m b_n + a_{m-1} b_{n-1}$
It’s not instantly obvious that this produces the same result – but the $b_n$ can be rewritten in terms of the $a_n$, and they generate the same algebra. (Note that for the one-dimensional version, these are in any case the same, taking $a_1 = a^\dagger$ and $b_1 = a$, since then the relation reads $b_1 a_1 = a_1 b_1 + 1$.)
To categorify this, in Khovanov’s sense (though see note below1), means to find a category $\mathcal{H}$ whose isomorphism classes of objects correspond to (integer-) linear combinations of products of the generators. Now, in the $\mathbf{Span}(\mathbf{Gpd})$ setup, we can say that the groupoid of finite sets and bijections, or equivalently $\coprod_n \mathbf{B}S_n$, represents Fock space. Groupoidification turns this into the free vector space on the set of isomorphism classes of objects. This has some extra structure which we don’t need right now, so it makes the most sense to describe it as $\mathbb{C}[[x]]$, the space of power series (where $x^n$ corresponds to the isomorphism class of an $n$-element set). The algebra itself is an algebra of endomorphisms of this space. It’s this algebra Khovanov is looking at, so the monoidal category in question could really be considered a bicategory with one object, where the monoidal product comes from composition, and the object stands in formally for the space it acts on. But this space doesn’t enter into the description, so we’ll just think of $\mathcal{H}$ as a monoidal category. We’ll build it in two steps: the first is to define a category $\mathcal{H}'$.
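As an aside, the groupoidified Fock space has a cute numerical shadow: the groupoid cardinality of the groupoid of finite sets and bijections, where each isomorphism class contributes $1/|\mathrm{Aut}|$. A two-line check (mine, not from the paper) that this sum converges to $e$, the value the harmonic-oscillator picture suggests:

```python
import math

# Groupoid cardinality of the groupoid of finite sets and bijections:
# the class of n-element sets contributes 1/|Aut| = 1/|S_n| = 1/n!,
# so the total cardinality is sum_n 1/n! = e.
card = sum(1 / math.factorial(n) for n in range(30))
assert abs(card - math.e) < 1e-9
```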
The objects of $\mathcal{H}'$ are defined by two generators, called $Q_+$ and $Q_-$, and the fact that it’s monoidal (these objects will be the categorifications of the raising and lowering generators). Thus, there are objects $Q_{++} = Q_+ \otimes Q_+$, $Q_{+-} = Q_+ \otimes Q_-$, and so forth. In general, if $\epsilon$ is some word on the alphabet $\{+, -\}$, there’s an object $Q_\epsilon$.
As in other categorifications in the Khovanov-Lauda vein, we define the morphisms of $\mathcal{H}'$ to be linear combinations of certain planar diagrams, modulo some local relations. (This type of formalism comes out of knot theory – see e.g. this intro by Louis Kauffman). In particular, we draw the objects as sequences of dots labelled $+$ or $-$, and connect two such sequences by a bunch of oriented strands (embeddings of the interval, or circle, in the plane). Each $+$ dot is the endpoint of a strand oriented up, and each $-$ dot is the endpoint of a strand oriented down. The local relations mean that we can take these diagrams up to isotopy (moving the strands around), as well as various other relations that define changes you can make to a diagram and still represent the same morphism. These relations include things like:
which seems visually obvious (imagine tugging hard on the ends on the left hand side to straighten the strands), and the less-obvious:
and a bunch of others. The main ingredients are cups, caps, and crossings, with various orientations. Other diagrams can be made by pasting these together. The point, then, is that any morphism is some $k$-linear combination of these. (I prefer to assume $k = \mathbb{C}$ most of the time, since I’m interested in quantum mechanics, but this isn’t strictly necessary.)
The second diagram, by the way, is an important part of categorifying the commutation relations. It would say that $Q_{-+} \cong Q_{+-} \oplus 1$ (the commutation relation has become a decomposition of a certain tensor product). The point is that the left hand sides show the compositions of the two crossings $Q_{-+} \to Q_{+-}$ and $Q_{+-} \to Q_{-+}$ in the two different orders. One can use this, plus isotopy, to show the decomposition.
That diagrams are invariant under isotopy means, among other things, that the yanking rule holds:
(and similar rules for up-oriented strands, and zig-zags on the other side). These conditions amount to saying that the functors given by tensoring with $Q_+$ and with $Q_-$ are two-sided adjoints of each other. The two cups and caps (with each possible orientation) give the units and counits for the two adjunctions. So, for instance, in the zig-zag diagram above, there’s a cup which gives a unit map (reading upward), all tensored on the right by an identity strand. This is followed by a cap giving a counit map (all tensored on the left by an identity strand). So the yanking rule essentially just gives one of the identities required for an adjunction. There are four of them, so in fact there are two adjunctions: one where $Q_+$ is the left adjoint, and one where it’s the right adjoint.
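Here’s a tiny linear-algebra illustration of why the yanking rule pins down the unit once the counit is chosen (my own sketch, with $\mathbf{Vect}$ standing in for the diagrammatic category): if the cap $V^* \otimes V \to \mathbb{C}$ is given by an invertible matrix $g$, the zig-zag identity forces the cup $\mathbb{C} \to V \otimes V^*$ to have coefficient matrix $g^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
g = rng.standard_normal((d, d)) + d * np.eye(d)  # cap: pairing ev(e_i* ⊗ e_j) = g_{ij}
cup = np.linalg.inv(g)                           # cup: coev(1) = sum_{ij} cup_{ij} e_i ⊗ e_j*

# zig-zag composite V -> V ⊗ V* ⊗ V -> V:  v_k ↦ sum_{i,j} cup_{ij} g_{jk} v_k e_i
snake = cup @ g
assert np.allclose(snake, np.eye(d))  # the bent strand straightens to the identity
```

With any other choice of cup, the “snake” composite would be some other invertible map rather than the identity, which is exactly the failure of the yanking rule.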
Now, so far this has explained where a category $\mathcal{H}'$ comes from – the one with the objects $Q_\epsilon$ described above. This isn’t quite enough to get a categorification of the Heisenberg algebra: it would be enough to get the version with just one raising and one lowering generator, and their powers, but not all the $a_n$ and $b_n$. To get all the elements of the (integral form of) the Heisenberg algebras, and in particular to get generators that satisfy the right commutation relations, we need to introduce some new objects. There’s a convenient way to do this, though, which is to take the Karoubi envelope of $\mathcal{H}'$.
The Karoubi envelope of any category $\mathcal{C}$ is a universal way to find a category $\mathbf{Kar}(\mathcal{C})$ that contains $\mathcal{C}$ and for which all idempotents split (i.e. have corresponding subobjects). Think of vector spaces, for example: a map $e : V \to V$ such that $e^2 = e$ is a projection. That projection corresponds to a subspace $W = \mathrm{im}(e)$, and $W$ is actually another object in $\mathbf{Vect}$, so that $e$ splits (factors) as $V \to W \to V$. This might not happen in any general $\mathcal{C}$, but it will in $\mathbf{Kar}(\mathcal{C})$. This has, for objects, all the pairs $(A, e)$ where $e : A \to A$ is idempotent (so $\mathcal{C}$ is contained in $\mathbf{Kar}(\mathcal{C})$ as the cases where $e = 1_A$). The morphisms $f : (A, e) \to (A', e')$ are just maps $f : A \to A'$ with the compatibility condition that $f = e' \circ f \circ e$ (essentially, maps between the new subobjects).
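Since everything here is linear, the vector-space case can actually be computed. Here’s a small sketch (my own; the helper name `split_idempotent` is made up for illustration) that splits an idempotent matrix as $e = i \circ p$ with $p \circ i = \mathrm{id}$, which is exactly the factorization the Karoubi envelope supplies formally.

```python
import numpy as np

def split_idempotent(e, tol=1e-10):
    """Factor an idempotent e: V -> V as e = i ∘ p with p ∘ i = id_W,
    where W = im(e) (a rank factorization)."""
    assert np.allclose(e @ e, e), "input must be idempotent"
    u, s, _ = np.linalg.svd(e)
    r = int(np.sum(s > tol))
    i = u[:, :r]                # inclusion W -> V: orthonormal basis of im(e)
    p = np.linalg.pinv(i) @ e   # projection V -> W
    return i, p

# A non-orthogonal projection: onto span{(1,1)} along span{(1,0)}.
e = np.array([[0.0, 1.0],
              [0.0, 1.0]])
i, p = split_idempotent(e)
assert np.allclose(i @ p, e)          # e factors through its image W
assert np.allclose(p @ i, np.eye(1))  # p ∘ i is the identity on W
```

The two asserts are the two halves of the splitting: the idempotent factors through the subobject, and the subobject is a retract.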
So which new subobjects are the relevant ones? They’ll be subobjects of tensor powers of our generating objects $Q_{\pm}$. First, consider $Q_+^{\otimes n}$. Obviously, there’s an action of the symmetric group $S_n$ on this, so in fact (since we want a $\mathbb{C}$-linear category), its endomorphisms contain a copy of $\mathbb{C}[S_n]$, the corresponding group algebra. This has a number of different projections, but the relevant ones here are the symmetrizer:

$e(n) = \frac{1}{n!} \sum_{\sigma \in S_n} \sigma$
which wants to be a “projection onto the symmetric subspace”, and the antisymmetrizer:

$e'(n) = \frac{1}{n!} \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \, \sigma$
which wants to be a “projection onto the antisymmetric subspace” (if it were in a category with the right sub-objects). The diagrammatic way to depict this is with horizontal bars: so the new object $S^n_+$ (the symmetrized subobject of $Q_+^{\otimes n}$) is drawn as a hollow rectangle, labelled by $n$. The projection from $Q_+^{\otimes n}$ is drawn with $n$ arrows heading into that box:
The antisymmetrized subobject $\Lambda^n_+$ is drawn with a black box instead. There are also $S^n_-$ and $\Lambda^n_-$ defined in the same way (and drawn with downward-pointing arrows).
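These idempotents are easy to realize concretely on honest tensor powers of a vector space, which is worth doing once to see the ranks come out right. A quick check (mine, not from the paper) with $V = \mathbb{C}^3$ and $n = 3$: the symmetrizer and antisymmetrizer are idempotent, kill each other, and project onto subspaces of dimensions $\binom{5}{3} = 10$ and $\binom{3}{3} = 1$.

```python
import itertools
import numpy as np

d, n = 3, 3  # V = C^3, working inside V ⊗ V ⊗ V, whose endomorphisms contain C[S_3]

def perm_action(sigma):
    # matrix of sigma permuting the tensor factors of (C^d)^{⊗n}
    P = np.zeros((d**n, d**n))
    for idx in itertools.product(range(d), repeat=n):
        out = tuple(idx[sigma[k]] for k in range(n))
        src = int(np.ravel_multi_index(idx, (d,) * n))
        tgt = int(np.ravel_multi_index(out, (d,) * n))
        P[tgt, src] = 1.0
    return P

def sign(sigma):
    # parity via inversion count
    return (-1) ** sum(1 for a, b in itertools.combinations(range(n), 2)
                       if sigma[a] > sigma[b])

perms = list(itertools.permutations(range(n)))
sym  = sum(perm_action(s) for s in perms) / len(perms)            # symmetrizer e(n)
asym = sum(sign(s) * perm_action(s) for s in perms) / len(perms)  # antisymmetrizer e'(n)

assert np.allclose(sym @ sym, sym) and np.allclose(asym @ asym, asym)  # idempotent
assert np.allclose(sym @ asym, 0)                                      # orthogonal
assert np.linalg.matrix_rank(sym) == 10   # dim Sym^3(C^3) = C(5,3)
assert np.linalg.matrix_rank(asym) == 1   # dim Λ^3(C^3)  = C(3,3)
```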
The basic fact – which can be shown by various diagram manipulations – is that $S^n_- \otimes \Lambda^m_+ \cong (\Lambda^m_+ \otimes S^n_-) \oplus (\Lambda^{m-1}_+ \otimes S^{n-1}_-)$. The key thing is that there are maps from the left hand side into each of the terms on the right, and the sum can be shown to be an isomorphism using all the previous relations. The map into the second term involves a cap that uses up one of the strands from each factor on the left.
There are other idempotents as well – for every partition $\lambda$ of $n$, there’s a notion of $\lambda$-symmetric things – but ultimately these boil down to symmetrizing the various parts of the partition. The main point is that we now have objects in $\mathcal{H} = \mathbf{Kar}(\mathcal{H}')$ corresponding to all the generators of the Heisenberg algebra. The right choice is that the $b_n$ (the new generators in this presentation that came from the lowering operators) correspond to the $S^n_-$ (symmetrized products of “lowering” strands), and the $a_n$ correspond to the $\Lambda^n_+$ (antisymmetrized products of “raising” strands). We also have isomorphisms (i.e. diagrams that are invertible, using the local moves we’re allowed) for all the relations. This is a categorification of the Heisenberg algebra.
This diagrammatic calculus is universal enough to be applied to all sorts of settings where there are functors which are two-sided adjoints of one another (by labelling strands with functors, and the regions of the plane with categories they go between). I like this a lot, since biadjointness of certain functors is essential to the 2-linearization functor $\Lambda$ (see my link above). In particular, $\Lambda$ uses biadjointness of restriction and induction functors between representation categories of groupoids associated to a groupoid homomorphism (and uses these unit and counit maps to deal with 2-morphisms). That example comes from the fact that a (finite-dimensional) representation of a finite group(oid) is a functor into $\mathbf{Vect}$, and a group(oid) homomorphism $f : H \to G$ is also just a functor. Given such an $f$, there’s an easy “restriction” $f^* : \mathbf{Rep}(G) \to \mathbf{Rep}(H)$, that just works by composing with $f$. Then in principle there might be two different adjoints $f_* : \mathbf{Rep}(H) \to \mathbf{Rep}(G)$, given by the left and right Kan extension along $f$. But these are defined by colimits and limits, which are the same for (finite-dimensional) vector spaces. So in fact the adjoint is two-sided.
Khovanov’s paper describes and uses exactly this example of biadjointness in a very nice way, albeit in the classical case where we’re just talking about inclusions of finite groups. That is, given a subgroup $H \subset G$, we get a functor $\mathrm{Res}^G_H : \mathbf{Rep}(G) \to \mathbf{Rep}(H)$, which just considers the obvious action of $H$ on any representation space of $G$. It has a biadjoint $\mathrm{Ind}^G_H : \mathbf{Rep}(H) \to \mathbf{Rep}(G)$, which takes a representation $V$ of $H$ to $\mathbb{C}[G] \otimes_{\mathbb{C}[H]} V$, which is a special case of the formula for a Kan extension. (This formula suggests why it’s also natural to see these as functors between the module categories $\mathbb{C}[G]$-mod and $\mathbb{C}[H]$-mod.) To talk about the Heisenberg algebra in particular, Khovanov considers these functors for all the symmetric group inclusions $S_n \subset S_{n+1}$.
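The decategorified shadow of the commutation relation here is the Mackey-type isomorphism $\mathrm{Res}\,\mathrm{Ind}\, V \cong (\mathrm{Ind}\,\mathrm{Res}\, V) \oplus V$ for the chain $S_1 \subset S_2 \subset S_3$. Here’s a small character-level verification (my own code, using only the Frobenius formula for induced characters), checking it for both irreducibles of $S_2$:

```python
import itertools
from fractions import Fraction

# Permutations as tuples: p sends i to p[i].
def mult(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def embed(p, n):
    # include a permutation of {0,...,k-1} into S_n, fixing the remaining points
    return tuple(p) + tuple(range(len(p), n))

def induced_char(chi, H, G):
    # Frobenius formula: chi_Ind(g) = (1/|H|) * sum over x in G with
    # x^{-1} g x in H of chi(x^{-1} g x)
    def val(g):
        total = Fraction(0)
        for x in G:
            c = mult(mult(inv(x), g), x)
            if c in chi:
                total += chi[c]
        return total / len(H)
    return {g: val(g) for g in G}

S3 = [tuple(p) for p in itertools.permutations(range(3))]
S2 = [embed(p, 3) for p in itertools.permutations(range(2))]  # S_2 inside S_3
S1 = [embed(p, 3) for p in itertools.permutations(range(1))]  # trivial subgroup

e, t = (0, 1, 2), (1, 0, 2)
for chi_V in [{e: 1, t: 1}, {e: 1, t: -1}]:        # trivial and sign reps of S_2
    ind_V = induced_char(chi_V, S2, S3)             # Ind up to S_3 (then look at S_2)
    ind_res = induced_char({e: chi_V[e]}, S1, S2)   # Res down to S_1, Ind back up
    for g in S2:
        # Res∘Ind = (Ind∘Res) ⊕ Id, at the level of characters
        assert ind_V[g] == ind_res[g] + chi_V[g]
```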
Except for having to break apart the symmetric groupoid as $\coprod_n \mathbf{B}S_n$, this is all you need to categorify the Heisenberg algebra. In the $\mathbf{Span}(\mathbf{Gpd})$ categorification, we pick out the interesting operators as those generated by spans from this groupoid to itself, but “really” (i.e. up to equivalence) this is just all the inclusions $S_n \subset S_{n+1}$ taken at once. However, Khovanov’s approach is nice, because it separates out a lot of what’s going on abstractly and uses a general diagrammatic way to depict all these 2-morphisms (this is explained in the first few pages of Aaron Lauda’s paper on ambidextrous adjoints, too). The case of restriction and induction is just one example where this calculus applies.
There’s a fair bit more in the paper, but this is probably sufficient to say here.
1 There are two distinct but related senses of “categorification” of an algebra here, by the way. To simplify the point, say we’re talking about a ring $R$. The first sense of a categorification of $R$ is a (monoidal, additive) category $\mathcal{C}$ with a “valuation” taking objects of $\mathcal{C}$ to elements of $R$, which takes $\otimes$ to $\times$ and $\oplus$ to $+$. This is described, with plenty of examples, in this paper by Rafael Diaz and Eddy Pariguan. The other, typical of the Khovanov program, says it is a (monoidal, additive) category $\mathcal{C}$ whose Grothendieck ring is $K(\mathcal{C}) \cong R$. Of course, the second definition implies the first, but not conversely. The elements of the Grothendieck ring are isomorphism classes of objects in $\mathcal{C}$. A valuation may identify objects which aren’t isomorphic (or, as in groupoidification, morphisms which aren’t 2-isomorphic).
So a categorification of the first sort could be factored into two steps: first take the Grothendieck ring, then take a quotient to further identify things with the same valuation. If we’re lucky, there’s a commutative square here: we could first take the category $\mathcal{C}$, find some surjection $K(\mathcal{C}) \to R$, and then find that the valuation factors through it. This seems to be the relation between Khovanov’s categorification of the Heisenberg algebra and the one in $\mathbf{Span}(\mathbf{Gpd})$. This is the sense in which it seems to be the “universal” answer to the problem.