

Marco Mackaay recently pointed me at a paper by Mikhail Khovanov, which describes a categorification of the Heisenberg algebra H (or anyway its integral form H_{\mathbb{Z}}) in terms of a diagrammatic calculus.  This is very much in the spirit of the Khovanov-Lauda program of categorifying Lie algebras, quantum groups, and the like.  (There’s also another one by Sabin Cautis and Anthony Licata, following up on it, which I fully intend to read but haven’t done so yet. I may post about it later.)

Now, as alluded to in some of the slides I’ve posted from recent talks, Jamie Vicary and I have been looking at a slightly different way to answer this question, so before I talk about the Khovanov paper, I’ll say a tiny bit about why I was interested.

Groupoidification

The Weyl algebra (or the Heisenberg algebra – the difference being whether the commutation relations that define it give real or imaginary values) is interesting for physics-related reasons, being the algebra of operators associated to the quantum harmonic oscillator.  The particular approach to categorifying it that I’ve worked with goes back to something that I wrote up here, and as far as I know, originally was suggested by Baez and Dolan here.  This categorification is based on “stuff types” (Jim Dolan’s term, based on “structure types”, a.k.a. Joyal’s “species”).  It’s an example of the groupoidification program, the point of which is to categorify parts of linear algebra using the category Span(Gpd).  This has objects which are groupoids, and morphisms which are spans of groupoids: pairs of maps G_1 \leftarrow X \rightarrow G_2.  Since I’ve already discussed the background here before (e.g. here and to a lesser extent here), and the papers I just mentioned give plenty more detail (as does “Groupoidification Made Easy”, by Baez, Hoffnung and Walker), I’ll just mention that this is actually more naturally a 2-category (maps between spans are maps X \rightarrow X' making everything commute).  It’s got a monoidal structure, is additive in a fairly natural way, has duals for morphisms (by reversing the orientation of spans), and more.  Jamie Vicary and I are both interested in the quantum harmonic oscillator – he did this paper a while ago describing how to construct one in a general symmetric dagger-monoidal category.  We’ve been interested in how the stuff type picture fits into that framework, and also in trying to examine it in more detail using 2-linearization (which I explain here).

Anyway, stuff types provide a possible categorification of the Weyl/Heisenberg algebra in terms of spans and groupoids.  They aren’t the only way to approach the question, though – Khovanov’s paper gives a different (though, unsurprisingly, related) point of view.  There are some nice aspects to the groupoidification approach: for one thing, it gives a nice set of pictures for the morphisms in its categorified algebra (they look like groupoids whose objects are Feynman diagrams).  The Khovanov-Lauda program has two great features: the diagrammatic calculus gives a great visual representation of the 2-morphisms; and by dealing with generators and relations directly, it describes, in some sense1, the universal answer to the question “What is a categorification of the algebra with these generators and relations?”  Here’s how it works…

Heisenberg Algebra

One way to represent the Weyl/Heisenberg algebra (the two terms refer to different presentations of isomorphic algebras) uses a polynomial algebra P_n = \mathbb{C}[x_1,\dots,x_n].  In fact, there’s a version of this algebra for each natural number n (the stuff-type references above only treat n=1, though extending it to “n-sorted stuff types” isn’t particularly hard).  In particular, it’s the algebra of operators on P_n generated by the “raising” operators a_k(p) = x_k \cdot p and the “lowering” operators b_k(p) = \frac{\partial p}{\partial x_k}.  The point is that this is characterized by some commutation relations.  For j \neq k, we have:

[a_j,a_k] = [b_j,b_k] = [a_j,b_k] = 0

but on the other hand

[b_k,a_k] = 1

So the algebra could be seen as just a free thing generated by symbols \{a_j,b_k\} with these relations.  These can be understood to be the “raising and lowering” operators for an n-dimensional harmonic oscillator.  This isn’t the only presentation of this algebra.  There’s another one where [p_k,q_k] = i (as in i = \sqrt{-1}) has a slightly different interpretation, where the p and q operators are the position and momentum operators for the same system.  Finally, a third one – which is the one that Khovanov actually categorifies – is skewed a bit, in that it replaces the a_j with a different set of \hat{a}_j so that the commutation relation actually looks like

[\hat{a}_j,b_k] = b_{k-1}\hat{a}_{j-1}

It’s not instantly obvious that this produces the same result – but the \hat{a}_j can be rewritten in terms of the a_j, and they generate the same algebra.  (Note that for the one-dimensional version, these are in any case the same, taking a_0 = b_0 = 1.)
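These relations are easy to check directly on polynomials.  Here’s a quick sanity check in sympy (just a sketch, with a and b as Python functions standing in for the one-variable multiplication and differentiation operators – note the ordering, ba - ab = 1):

```python
import sympy as sp

x = sp.symbols('x')

a = lambda p: x * p          # raising operator: multiply by x
b = lambda p: sp.diff(p, x)  # lowering operator: differentiate

# (b.a - a.b) acts as the identity on polynomials: check on monomials
for n in range(5):
    p = x**n
    assert sp.expand(b(a(p)) - a(b(p)) - p) == 0
```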

Diagrammatic Calculus

To categorify this, in Khovanov’s sense (though see note below1), means to find a category \mathcal{H} whose isomorphism classes of objects correspond to (integer-) linear combinations of products of the generators of H.  Now, in the Span(Gpd) setup, we can say that the groupoid FinSet_0, or equivalently \mathcal{S} = \coprod_n \mathcal{S}_n, represents Fock space.  Groupoidification turns this into the free vector space on the set of isomorphism classes of objects.  This has some extra structure which we don’t need right now, so it makes the most sense to describe it as \mathbb{C}[[t]], the space of power series (where t^n corresponds to the object [n]).  The algebra itself is an algebra of endomorphisms of this space.  It’s this algebra Khovanov is looking at, so the monoidal category in question could really be considered a bicategory with one object, where the monoidal product comes from composition, and the object stands in formally for the space it acts on.  But this space doesn’t enter into the description, so we’ll just think of \mathcal{H} as a monoidal category.  We’ll build it in two steps: the first is to define a category \mathcal{H}'.
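Concretely, on the \mathbb{C}[[t]] picture of Fock space, the one-variable raising and lowering operators act by multiplication by t and by d/dt, so on the coefficient basis they become a shift and a weighted shift.  A small numerical sketch (truncating to polynomials of degree less than N, which is why the commutation relation fails in the last corner):

```python
import numpy as np

N = 6  # truncate C[[t]] to the basis 1, t, ..., t**(N-1)
a = np.diag(np.ones(N - 1), k=-1)    # raising: t^n -> t^(n+1)
b = np.diag(np.arange(1.0, N), k=1)  # lowering: t^n -> n t^(n-1)

comm = b @ a - a @ b
# the identity matrix, except where truncation cuts off the top degree
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))
```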

The objects of \mathcal{H}' are defined by two generators, called Q_+ and Q_-, and the fact that it’s monoidal (these objects will be the categorifications of a and b).  Thus, there are objects Q_+ \otimes Q_- \otimes Q_+ and so forth.  In general, if \epsilon is some word on the alphabet \{+,-\}, there’s an object Q_{\epsilon} = Q_{\epsilon_1} \otimes \dots \otimes Q_{\epsilon_m}.

As in other categorifications in the Khovanov-Lauda vein, we define the morphisms of \mathcal{H}' to be linear combinations of certain planar diagrams, modulo some local relations.  (This type of formalism comes out of knot theory – see e.g. this intro by Louis Kauffman).  In particular, we draw the objects as sequences of dots labelled + or -, and connect two such sequences by a bunch of oriented strands (embeddings of the interval, or circle, in the plane).  Each + dot is the endpoint of a strand oriented up, and each - dot is the endpoint of a strand oriented down.  The local relations mean that we can take these diagrams up to isotopy (moving the strands around), as well as various other relations that define changes you can make to a diagram and still represent the same morphism.  These relations include things like:

which seems visually obvious (imagine tugging hard on the ends on the left hand side to straighten the strands), and the less-obvious:

and a bunch of others.  The main ingredients are cups, caps, and crossings, with various orientations.  Other diagrams can be made by pasting these together.  The point, then, is that any morphism is some \mathbf{k}-linear combination of these.  (I prefer to assume \mathbf{k} = \mathbb{C} most of the time, since I’m interested in quantum mechanics, but this isn’t strictly necessary.)

The second diagram, by the way, is an important part of categorifying the commutation relations.  This would say that Q_- \otimes Q_+ \cong Q_+ \otimes Q_- \oplus 1 (the commutation relation has become a decomposition of a certain tensor product).  The point is that the left hand sides show the composition of two crossings Q_- \otimes Q_+ \rightarrow Q_+ \otimes Q_- and Q_+ \otimes Q_- \rightarrow Q_- \otimes Q_+ in two different orders.  One can use this, plus isotopy, to show the decomposition.

That diagrams are invariant under isotopy means, among other things, that the yanking rule holds:

(and similar rules for up-oriented strands, and zig zags on the other side).  These conditions amount to saying that the functors - \otimes Q_+ and - \otimes Q_- are two-sided adjoints.  The two cups and caps (with each possible orientation) give the units and counits for the two adjunctions.  So, for instance, in the zig-zag diagram above, there’s a cup which gives a unit map \mathbf{k} \rightarrow Q_- \otimes Q_+ (reading upward), all tensored on the right by Q_-.  This is followed by a cap giving a counit map Q_+ \otimes Q_- \rightarrow \mathbf{k} (all tensored on the left by Q_-).  So the yanking rule essentially just gives one of the identities required for an adjunction.  There are four of them, so in fact there are two adjunctions: one where Q_+ is the left adjoint, and one where it’s the right adjoint.

Karoubi Envelope

Now, so far this has explained where a category \mathcal{H}' comes from – the one with the objects Q_{\epsilon} described above.  This isn’t quite enough to get a categorification of H_{\mathbb{Z}}: it would be enough to get the version with just one a and one b element, and their powers, but not all the a_j and b_k.  To get all the elements of the (integral form of the) Heisenberg algebra, and in particular to get generators that satisfy the right commutation relations, we need to introduce some new objects.  There’s a convenient way to do this, though, which is to take the Karoubi envelope of \mathcal{H}'.

The Karoubi envelope of any category \mathcal{C} is a universal way to find a category Kar(\mathcal{C}) that contains \mathcal{C} and for which all idempotents split (i.e. have corresponding subobjects).  Think of vector spaces, for example: a map p \in End(V) such that p^2 = p is a projection.  That projection corresponds to a subspace W \subset V, and W is actually another object in Vect, so that p splits (factors) as V \rightarrow W \rightarrow V.  This might not happen in any general \mathcal{C}, but it will in Kar(\mathcal{C}).  This has, for objects, all the pairs (C,p) where p : C \rightarrow C is idempotent (so \mathcal{C} is contained in Kar(\mathcal{C}) as the cases where p=1).  The morphisms f : (C,p) \rightarrow (C',p') are just maps f : C \rightarrow C' with the compatibility condition that p' f = f p = f (essentially, maps between the new subobjects).
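In Vect this splitting is completely concrete: any idempotent matrix factors through its image, with the two factors composing to the identity on the subspace W.  A minimal numpy sketch (the particular matrices are just an illustrative choice):

```python
import numpy as np

# an idempotent on V = k^2 (p @ p == p), projecting onto span{(1,0)}
p = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(p @ p, p)

# split p through W = k^1:  p = u v,  with  v u = id_W
v = np.array([[1.0, 1.0]])  # the projection part, V -> W
u = np.array([[1.0],
              [0.0]])       # the inclusion,       W -> V
assert np.allclose(u @ v, p)
assert np.allclose(v @ u, np.eye(1))
```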

So which new subobjects are the relevant ones?  They’ll be subobjects of tensor powers of our Q_{\pm}.  First, consider Q_{+^n} = Q_+^{\otimes n}.  Obviously, there’s an action of the symmetric group \mathcal{S}_n on this, so in fact (since we want a \mathbf{k}-linear category), its endomorphisms contain a copy of \mathbf{k}[\mathcal{S}_n], the corresponding group algebra.  This has a number of different projections, but the relevant ones here are the symmetrizer:

e_n = \frac{1}{n!} \sum_{\sigma \in \mathcal{S}_n} \sigma

which wants to be a “projection onto the symmetric subspace” and the antisymmetrizer:

e'_n = \frac{1}{n!} \sum_{\sigma \in \mathcal{S}_n} sign(\sigma) \sigma

which wants to be a “projection onto the antisymmetric subspace” (if it were in a category with the right sub-objects). The diagrammatic way to depict this is with horizontal bars: so the new object S^n_+ = (Q_{+^n}, e) (the symmetrized subobject of Q_+^{\otimes n}) is a hollow rectangle, labelled by n.  The projection from Q_+^{\otimes n} is drawn with n arrows heading into that box:

The antisymmetrized subobject \Lambda^n_+ = (Q_{+^n},e') is drawn with a black box instead.  There are also S^n_- and \Lambda^n_- defined in the same way (and drawn with downward-pointing arrows).
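It’s easy to verify by brute force that these really are idempotents in the group algebra, and that they’re orthogonal to each other.  A small sketch, modelling \mathbf{k}[\mathcal{S}_3] as a dictionary from permutations to rational coefficients:

```python
from itertools import permutations
from fractions import Fraction

n = 3
Sn = list(permutations(range(n)))  # S_3, as tuples

def compose(s, t):
    # (s . t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(n))

def sign(p):
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def convolve(x, y):
    # multiplication in the group algebra k[S_n]
    out = {}
    for s, cs in x.items():
        for t, ct in y.items():
            st = compose(s, t)
            out[st] = out.get(st, Fraction(0)) + cs * ct
    return {g: c for g, c in out.items() if c != 0}

coeff = Fraction(1, len(Sn))
e = {p: coeff for p in Sn}             # symmetrizer e_n
ep = {p: coeff * sign(p) for p in Sn}  # antisymmetrizer e'_n

assert convolve(e, e) == e     # idempotent
assert convolve(ep, ep) == ep  # idempotent
assert convolve(e, ep) == {}   # orthogonal (for n >= 2)
```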

The basic fact – which can be shown by various diagram manipulations – is that S^n_- \otimes \Lambda^m_+ \cong (\Lambda^m_+ \otimes S^n_-) \oplus (\Lambda_+^{m-1} \otimes S^{n-1}_-).  The key thing is that there are maps from the left hand side into each of the terms on the right, and the sum can be shown to be an isomorphism using all the previous relations.  The map into the second term involves a cap that uses up one of the strands from each term on the left.

There are other idempotents as well – for every partition \lambda of n, there’s a notion of \lambda-symmetric things – but ultimately these boil down to symmetrizing the various parts of the partition.  The main point is that we now have objects in \mathcal{H} = Kar(\mathcal{H}') corresponding to all the elements of H_{\mathbb{Z}}.  The right choice is that the \hat{a}_j  (the new generators in this presentation that came from the lowering operators) correspond to the S^j_- (symmetrized products of “lowering” strands), and the b_k correspond to the \Lambda^k_+ (antisymmetrized products of “raising” strands).  We also have isomorphisms (i.e. diagrams that are invertible, using the local moves we’re allowed) for all the relations.  This is a categorification of H_{\mathbb{Z}}.

Some Generalities

This diagrammatic calculus is universal enough to be applied to all sorts of settings where there are functors which are two-sided adjoints of one another (by labelling strands with functors, and the regions of the plane with categories they go between).  I like this a lot, since biadjointness of certain functors is essential to the 2-linearization functor \Lambda (see my link above).  In particular, \Lambda uses biadjointness of restriction and induction functors between representation categories of groupoids associated to a groupoid homomorphism (and uses these unit and counit maps to deal with 2-morphisms).  That example comes from the fact that a (finite-dimensional) representation of a finite group(oid) is a functor into Vect, and a group(oid) homomorphism is also just a functor F : H \rightarrow G.  Given such an F, there’s an easy “restriction” F^* : Fun(G,Vect) \rightarrow Fun(H,Vect), that just works by composing with F.  Then in principle there might be two different adjoints Fun(H,Vect) \rightarrow Fun(G,Vect), given by the left and right Kan extension along F.  But these are defined by colimits and limits, which are the same for (finite-dimensional) vector spaces.  So in fact the adjoint is two-sided.

Khovanov’s paper describes and uses exactly this example of biadjointness in a very nice way, albeit in the classical case where we’re just talking about inclusions of finite groups.  That is, given a subgroup H < G, we get a functor Res_G^H : Rep(G) \rightarrow Rep(H), which just considers the obvious action of H on any representation space of G.  It has a biadjoint Ind^G_H : Rep(H) \rightarrow Rep(G), which takes a representation V of H to \mathbf{k}[G] \otimes_{\mathbf{k}[H]} V, which is a special case of the formula for a Kan extension.  (This formula suggests why it’s also natural to see these as functors between module categories \mathbf{k}[G]-mod and \mathbf{k}[H]-mod).  To talk about the Heisenberg algebra in particular, Khovanov considers these functors for all the symmetric group inclusions \mathcal{S}_n < \mathcal{S}_{n+1}.
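At the level of characters, this adjunction is just Frobenius reciprocity, which is easy to verify numerically.  A sketch for the inclusion \mathcal{S}_2 < \mathcal{S}_3, encoding permutations as tuples (the encoding, and the particular characters checked, are just illustrative choices):

```python
from itertools import permutations

n = 3
G = list(permutations(range(n)))  # S_3, as tuples
H = [g for g in G if g[2] == 2]   # the copy of S_2 fixing the last point

def compose(s, t):
    return tuple(s[t[i]] for i in range(n))

def inverse(s):
    out = [0] * n
    for i, v in enumerate(s):
        out[v] = i
    return tuple(out)

def sgn(p):
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def induce(chi, g):
    # induced character: (1/|H|) * sum over x in G with x^{-1} g x in H
    conj = [compose(inverse(x), compose(g, x)) for x in G]
    return sum(chi(c) for c in conj if c in H) / len(H)

def inner(f1, f2, group):
    return sum(f1(g) * f2(g) for g in group) / len(group)

triv = lambda g: 1

# Frobenius reciprocity: <Ind triv, chi>_G = <triv, Res chi>_H
for chi in (triv, sgn):
    lhs = inner(lambda g: induce(triv, g), chi, G)
    rhs = inner(triv, chi, H)  # restricting chi just evaluates it on H
    assert lhs == rhs
```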

Except for having to break apart the symmetric groupoid as S = \coprod_n \mathcal{S}_n, this is all you need to categorify the Heisenberg algebra.  In the Span(Gpd) categorification, we pick out the interesting operators as those generated by the - \sqcup \{\star\} map from FinSet_0 to itself, but “really” (i.e. up to equivalence) this is just all the inclusions \mathcal{S}_n < \mathcal{S}_{n+1} taken at once.  However, Khovanov’s approach is nice, because it separates out a lot of what’s going on abstractly and uses a general diagrammatic way to depict all these 2-morphisms (this is explained in the first few pages of Aaron Lauda’s paper on ambidextrous adjoints, too).  The case of restriction and induction is just one example where this calculus applies.

There’s a fair bit more in the paper, but this is probably sufficient to say here.


1 There are two distinct but related senses of “categorification” of an algebra A here, by the way.  To simplify the point, say we’re talking about a ring R.  The first sense of a categorification of R is a (monoidal, additive) category C with a “valuation” in R that takes \otimes to \times and \oplus to +.  This is described, with plenty of examples, in this paper by Rafael Diaz and Eddy Pariguan.  The other, typical of the Khovanov program, says it is a (monoidal, additive) category C whose Grothendieck ring is K_0(C) = R.  Of course, the second definition implies the first, but not conversely.  The generators of the Grothendieck ring are the isomorphism classes of objects in C.  A valuation may identify objects which aren’t isomorphic (or, as in groupoidification, morphisms which aren’t 2-isomorphic).

So a categorification of the first sort could be factored into two steps: first take the Grothendieck ring, then take a quotient to further identify things with the same valuation.  If we’re lucky, there’s a commutative square here: we could first take the category C, find some surjection C \rightarrow C', and then find that K_0(C') = R.  This seems to be the relation between Khovanov’s categorification of H_{\mathbb{Z}} and the one in Span(Gpd). This is the sense in which it seems to be the “universal” answer to the problem.

I just posted the slides for “Groupoidification and 2-Linearization”, the colloquium talk I gave at Dalhousie when I was up in Halifax last week. I also gave a seminar talk in which I described the quantum harmonic oscillator and extended TQFT as examples of these processes, which covered similar stuff to the examples in a talk I gave at Ottawa, as well as some more categorical details.

Now, in the previous post, I was talking about different notions of the “state” of a system – all of which are in some sense “dual to observables”, although exactly what sense depends on which notion you’re looking at. Each concept has its own particular “type” of thing which represents a state: an element-of-a-set, a function-on-a-set, a vector-in-(projective)-Hilbert-space, and a functional-on-operators. In light of the above slides, I wanted to continue with this little bestiary of ontologies for “states” and mention the versions suggested by groupoidification.

State as Generalized Stuff Type

This is what groupoidification introduces: the idea of a state in Span(Gpd). As I said in the previous post, the key concepts behind this program are state, symmetry, and history. “State” is in some sense a logical primitive here – given a bunch of “pure” states for a system (in the harmonic oscillator, you use the nonnegative integers, representing n-photon energy states of the oscillator), and their local symmetries (the n-particle state is acted on by the permutation group on n elements), one defines a groupoid.

So at a first approximation, this is like the “element of a set” picture of state, except that I’m now taking a groupoid instead of a set. In a more general language, we might prefer to say we’re talking about a stack, which we can think of as a groupoid up to some kind of equivalence, specifically Morita equivalence. But in any case, the image is still that a state is an object in the groupoid, or point in the stack which is just generalizing an element of a set or point in configuration space.

However, what is an “element” of a set S? It’s a map into S from the terminal element in \mathbf{Sets}, which is “the” one-element set – or, likewise, in \mathbf{Gpd}, from the terminal groupoid, which has only one object and its identity morphism. However, this is a category where the arrows are set maps. When we introduce the idea of a “history”, we’re moving into a category where the arrows are spans, A \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} B (which by abuse of notation sometimes gets called X but more formally (X,s,t)). A span represents a set/groupoid/stack of histories, with source and target maps into the sets/groupoids/stacks of states of the system at the beginning and end of the process represented by X.

Then we don’t have a terminal object anymore, but the same object 1 is still around – only the morphisms in and out are different. Its new special property is that it’s a monoidal unit. So now a map from the monoidal unit is a span 1 \stackrel{!}{\leftarrow} X \stackrel{\Phi}{\rightarrow} B. Since the map on the left is unique, by definition of “terminal”, this is really just given by the functor \Phi, the target map. This is a fibration over B, called \Phi here for “phi”-bration – which is appropriate, since it corresponds to what’s usually thought of as a wavefunction \phi.

This correspondence is what groupoidification is all about – it has to do with taking the groupoid cardinality of fibres, where a “phi”bre of \Phi is the essential preimage of an object b \in B – everything whose image is isomorphic to b. This gives an equivariant function on B – really a function of isomorphism classes. (If we were being crude about the symmetries, it would be a function on the quotient space – which is often what you see in real mechanics, when configuration spaces are given by quotients by the action of some symmetry group).

In the case where B is the groupoid of finite sets and bijections (sometimes called \mathbf{FinSet_0}), these fibrations are the “stuff types” of Baez and Dolan. This is a groupoid with something of a notion of “underlying set” – although a forgetful functor U: C \rightarrow \mathbf{FinSet_0} (giving “underlying sets” for objects in a category C) is really supposed to be faithful (so that C-morphisms are determined by their underlying set map). In a fibration, we don’t necessarily have this. The special case corresponds to “structure types” (or combinatorial species), where X is a groupoid of “structured sets”, with an underlying set functor (actually, species are usually described in terms of the reverse, fibre-selecting functor \mathbf{FinSet_0} \rightarrow \mathbf{Sets}, where the image of a finite set S is the set of all structured sets on S, such as “graphs on S“ or “trees on S“). The fibres of a stuff type are sets equipped with “stuff”, which may have its own nontrivial morphisms (for example, we could have the groupoid of pairs of sets, and the “underlying” functor \Phi selects the first one).
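For the pairs-of-sets example, the groupoid cardinality of these fibres is easy to compute: over an n-element set, the isomorphism classes in the fibre are indexed by the size m of the second set, each with automorphism group \mathcal{S}_n \times \mathcal{S}_m.  A quick numerical sketch (truncating the sum over m):

```python
from math import e, factorial

# stuff type "a pair of finite sets", with Phi selecting the first set:
# the essential fibre over [n] has one iso class for each size m of the
# second set, with automorphisms S_n x S_m, so its groupoid cardinality
# is sum_m 1/(n! m!) = e/n!
def fibre_cardinality(n, M=40):
    return sum(1 / (factorial(n) * factorial(m)) for m in range(M))

for n in range(5):
    assert abs(fibre_cardinality(n) - e / factorial(n)) < 1e-12
```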

Over a general groupoid, we have a similar picture, but instead of having an underlying finite set, we just have an “underlying B-object”. These generalized stuff types are “states” for a system with a configuration groupoid, in Span(\mathbf{Gpd}). Notice that the notion of “state” here really depends on what the arrows in the category of states are – histories (i.e. spans), or just plain maps.

Intuitively, such a state is some kind of “ensemble”, in statistical or quantum jargon. It says the state of affairs is some jumble of many configurations (which we apparently should see as histories starting from the vacuous unit 1), each of which has some “underlying” pure state (such as energy level, or what-have-you). The cardinality operation turns this into a linear combination of pure states by defining weights for each configuration in the ensemble collected in X.

2-State as Representation

A linear combination of pure states is, as I said, an equivariant function on the objects of B. It’s one way to “categorify” the view of a state as a vector in a Hilbert space, or map from \mathbb{C} (i.e. a point in the projective Hilbert space of lines in the Hilbert space H = \mathbb{C}[\underline{B}]), which is really what’s defined by one of these ensembles.

The idea of 2-linearization is to categorify, not a specific state \phi \in H, but the concept of state. So it should be a 2-vector in a 2-Hilbert space associated to B. The Hilbert space H was some space of functions into \mathbb{C}, which we categorify by taking, instead of a base field, a base category, namely \mathbf{Vect}_{\mathbb{C}}. A 2-Hilbert space will be a category of functors into \mathbf{Vect}_{\mathbb{C}} – that is, the representation category of the groupoid B.

(This is all fine for finite groupoids. In the infinite case, there are some issues: it seems we really should be thinking of the 2-Hilbert space as a category of representations of an algebra. In the finite case, the groupoid algebra is a finite dimensional C*-algebra – that is, just a direct sum (over iso. classes of objects) of matrix algebras, which are the group algebras for the automorphism groups at each object. In the infinite dimensional world, you probably should be looking at the representations of the von Neumann algebra completion of the C*-algebra you get from the groupoid. There are all sorts of analysis issues about measurability that lurk in this area, but they don’t really affect how you interpret “state” in this picture, so I’ll skip them.)

A “2-state”, or 2-vector in this Hilbert space, is a representation of the groupoid(-algebra) associated to the system. The “pure” states are irreducible representations – these generate all the others under the operations of the 2-Hilbert space (“sum”, “scalar product”, etc. in their 2-vector space forms). Now, an irreducible representation of a von Neumann algebra is called a “superselection sector” for a quantum system. It’s playing the role of a pure state here.

There’s an interesting connection here to the concept of state as a functional on a von Neumann algebra. As I described in the last post, the GNS representation associates a representation of the algebra to a state. In fact, the GNS representation is irreducible just when the state is a pure state. But this notion of a superselection sector makes it seem that the concept of 2-state has a place in its own right, not just by this correspondence.

So: if a quantum system is represented by an algebra \mathcal{A} of operators on a Hilbert space H, that representation is a direct sum (or direct integral, as the case may be) of irreducible ones, which are “sectors” of the theory, in that any operator in \mathcal{A} can’t take a vector out of one of these “sectors”. Physicists often associate them with conserved quantities – though “superselection” sectors are a bit more thorough: a mere “selection sector” is a subspace where the projection onto it commutes with some subalgebra of observables which represent conserved quantities. A superselection sector can equivalently be defined as a subspace whose corresponding projection operator commutes with EVERYTHING in \mathcal{A}. In this case, it’s because we shouldn’t have thought of the representation as a single Hilbert space, but as a 2-vector in \mathbb{Rep}(\mathcal{A}) – a direct integral of some Hilbert bundle that lives on the space of irreps. Those projections are just part of the definition of such a bundle. The fact that \mathcal{A} acts on this bundle fibre-wise is just a consequence of the fact that the total H is a space of sections of the “2-state”. These correspond to “states” in the usual sense in the physical interpretation.
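The “commutes with everything” characterization is easy to see in the finite-dimensional toy case, where the algebra is block diagonal and the sector projections are the blocks’ identities.  A numpy sketch (the 2+1 block sizes are an arbitrary illustrative choice):

```python
import numpy as np

# toy finite-dimensional "algebra of observables": block-diagonal
# matrices M_2 + M_1, acting on a Hilbert space with two sectors
def algebra_element(block, scalar):
    out = np.zeros((3, 3))
    out[:2, :2] = block
    out[2, 2] = scalar
    return out

P = np.diag([1.0, 1.0, 0.0])  # projection onto the first sector

rng = np.random.default_rng(0)
for _ in range(5):
    X = algebra_element(rng.normal(size=(2, 2)), rng.normal())
    assert np.allclose(P @ X, X @ P)  # P commutes with the whole algebra
```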

Now, there are 2-linear maps that intermix these superselection sectors: the ETQFT picture gives nice examples. Such a map, for example, comes up when you think of two particles colliding (drawn in that world as the collision of two circles to form one circle). The superselection sectors for the particles are labelled by (in one special case) mass and spin – anyway, some conserved quantities. But these are, so to say, “rest mass” – so there are many possible outcomes of a collision, depending on the relative motion of the particles. So these 2-maps describe changes in the system (such as two particles becoming one) – but in a particular 2-Hilbert space, say \mathbb{Rep}(X) for some groupoid X describing the current system (or its algebra), a 2-state \Phi is a particular representation of that system. The algebra \mathcal{A} can naturally be seen as a subalgebra of the automorphisms of \Phi.

So anyway, without trying to package up the whole picture – here are two categorified takes on the notion of state, from two different points of view.

I haven’t, here, got to the business about Tomita flows coming from states in the von Neumann algebra sense: maybe that’s to come.

Well, I was out of town for a bit, but classes are now underway here at UWO. This term I’m teaching an introductory Linear Algebra course, which is, I believe, the largest class I’ve taught so far, with on the order of a couple of hundred students. That should be a change of pace: last year, both courses I taught had just seven students each.

Meanwhile, I’ll carry on from the last post. I described structure types (a.k.a. species) as functors T : \mathbf{FinSet_0} \rightarrow \mathbf{Sets}, which take a finite set S, and give the set of all “T-structures on the underlying set S“. A lot of combinatorial enumeration uses the generating functions for such structure types, which are power series whose coefficients count the number of structures for a set of size n (the fact that structure types are functorial is what allows us to ignore everything but the isomorphism class of the underlying set S). Now, there is a notion of generalized species, described in this paper by Fiore, Gambino, Hyland and Winskel, which I’ll link here because I think it’s a great point of view on the setup I discussed before. But right now, I’ll go in a somewhat different direction. (Whether there’s a connection between the two is left as an exercise.)
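As a tiny worked example of the generating-function bookkeeping: the structure type “a linear order” puts n! structures on an n-element set, so every coefficient of its generating function is 1, giving 1/(1-z).  A sketch, counting structures by brute force:

```python
from itertools import permutations
from math import factorial

# structure type "a linear order": the coefficient of z^n in the
# generating function is |T([n])| / n!
def coefficient(n):
    structures = list(permutations(range(n)))  # linear orders on [n]
    return len(structures) / factorial(n)

# the generating function is 1/(1 - z): all coefficients equal 1
assert all(coefficient(n) == 1.0 for n in range(6))
```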

Stuff Types

To start with, there is a dual way to look at structure types (a.k.a. species). The “structures” identified by a structure type T form a category X. It’s a concrete category in fact: each object has an underlying set. The morphisms of X are “structure-preserving” maps (the meaning of which depends, obviously, on T) of T-structured sets. These correspond exactly (by fiat) to the isomorphisms of underlying sets (i.e. relabellings). These are all invertible, so this is a groupoid.

So is the category FinSet_0 of “underlying sets”, so the forgetful functor F from T-structured sets into \mathbf{FinSet_0} is a functor between groupoids. This functor F is a sort of “dual” way to look at the structure type – the original functor T. In fact, for any structure type T, this dual F will always be a faithful functor. That is, the morphism map is one-to-one, so morphisms in X are uniquely determined by the corresponding map of underlying sets. In other words, there are no additional symmetries in X but those determined by set bijections.

But this is an artifact! I declared the union of all the sets T(S) to be the objects of a category X and then added morphisms by hand. That makes sense if you think of the “T-structured sets built on underlying set S” as derived entirely from T and S. But the dual view, focusing on F, tends to make us think of X as given, and F as observing some property – underlying sets and maps for objects and morphisms. This may throw away information about both, in principle. Faithfulness of F suggests that the objects of X just consist of sets S with some inflexible extra decorations with no local symmetries of their own to complicate the morphisms.

So let’s treat X as real and F as some kind of synopsis or measurement of X. If F doesn’t need to be faithful, it may not correspond to a structure type, but it will be what Baez and Dolan call a stuff type, which is actually just any groupoid equipped with underlying set functor F: X \rightarrow \mathbf{FinSet_0}. Maybe it’s surprising that these can still be treated like power series, by taking the coefficient at n to be the (real-valued) groupoid cardinality of the preimage of n. (The groupoid cardinality, described here, is related to the “Leinster measure” for categories. Regular readers of the n-Category Cafe will know that there has been some discussion over there about this – some links from here, and discussion about applying it to “configuration categories” of physical systems here.)
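For instance, for the stuff type E^Z (“a set”, seen as a groupoid over \mathbf{FinSet_0} via the identity functor), the preimage of n is a one-object groupoid with automorphism group \mathcal{S}_n, so the coefficient at n is 1/n! and the power series is e^z.  A quick numerical check:

```python
from math import e, factorial

# coefficient at n = groupoid cardinality of the preimage of n = 1/n!;
# summing the series at z = 1 should recover e (up to truncation)
def series_at_one(N):
    return sum(1 / factorial(n) for n in range(N + 1))

assert abs(series_at_one(20) - e) < 1e-12
```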

Stuff types can be used to deal with seemingly straightforward “structures” which structure types have a hard time with. For instance, let E^Z be the structure type “a set”; then E^{E^Z} should be the type “a set of sets” (where the underlying set is the union of the member sets). This can be represented by a stuff type, but not by a structure type.
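One way to see why no structure type can do the job: the generating function of “a set of sets” is e^{e^z}, and since e^{e^z} = e \cdot e^{e^z - 1}, its n-th coefficient (times n!) is e \cdot B_n, where B_n is the n-th Bell number. That is an irrational “count”, which no functor into \mathbf{Sets} could produce, but which a groupoid cardinality can. A small sketch checking this (helper names are mine):

```python
from math import comb, e

# Bell numbers via the standard recurrence B_{n+1} = sum_k C(n, k) B_k.
def bell_numbers(n_max):
    B = [1]
    for n in range(n_max):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

B = bell_numbers(6)
# e^{e^z} = e * e^{e^z - 1}, so the "count" of sets-of-sets on an
# n-element underlying set is e * B_n -- not an integer, so no
# structure type realizes it, but a stuff type does.
coeffs = [e * b for b in B]
print(B)  # [1, 1, 2, 5, 15, 52, 203]
```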

Groupoidification

Stuff types fit into a more general pattern, which has to do with the 2-category of spans of groupoids. I really cleared up just how this works in conversation with Jamie Vicary.

Groupoidification is the program of looking for analogs of linear algebra (whose native habitat is the monoidal category \mathbf{Vect}) in a different monoidal category (in fact, bicategory) \mathbf{Span(Gpd)} of spans of groupoids, which I’ve talked about quite a bit before. Very briefly, we have a bicategory where the objects are groupoids, and the morphisms are spans like: A \leftarrow X \rightarrow B, composed by (weak) pullback. Given spans X, X', a 2-morphism is a map \alpha: X \rightarrow X' which makes the resulting diagram commute.

So the key thing now is the fact that a stuff type F : X \rightarrow \mathbf{FinSet_0} can be regarded as a span of groupoids in two ways:

\mathbf{1} \leftarrow X \rightarrow \mathbf{FinSet_0}

and

\mathbf{FinSet_0} \leftarrow X \rightarrow \mathbf{1}

Here, \mathbf{1} is the trivial groupoid consisting of just one object and its identity morphism. Any groupoid X has just one functor into \mathbf{1}, so a stuff type automatically has these two incarnations as a span. One is a morphism (in \mathbf{Span(Gpd)}) from \mathbf{1} to \mathbf{FinSet_0}, and the other is its dual, going the other way. One can call these a “state” and a “costate”. Why these terms?

One important fact is that \mathbf{Span(Gpd)} is not just a bicategory, it’s a monoidal bicategory, whose monoidal operation on objects A \otimes B is the (cartesian) product of groupoids. (Which also tells you what it is for morphisms, by the way.) It should be clear, then, that \mathbf{1} is the monoidal unit, since X \times \mathbf{1} \cong X.

So another way of describing a stuff type is that it’s a morphism from (or to) the monoidal unit in a certain monoidal (bi)category with duals. In the category of Hilbert spaces, if \mathcal{H} is the space associated to a quantum system, a map \mathbb{C} \rightarrow \mathcal{H} would be called a “state” (and the dual would be a “costate”). Stuff types provide a 2-categorical version of the same thing, where the object taking the place of \mathcal{H} is \mathbf{FinSet_0}.

There is, as I’ve discussed here previously, a 2-vector space (indeed, a 2-Hilbert space) associated with this groupoid. But the point of view I’m adopting right now is based on discussion I had with Jamie Vicary about this paper. In it, he gives an abstract (i.e. categorical) description of what’s going on in the quantum mechanics of the harmonic oscillator in terms of an adjunction of categories. This can then be transplanted into various monoidal categories with duals. Here, Jamie gives a more general discussion of quantum algebras, with the same sort of flavour.

So as to the question of how species relate to QFT, this suggests one way to look at it. The harmonic oscillator is the physical system of interest when we look at “states” as spans into \mathbf{FinSet_0}. Up to isomorphism, the important features of the groupoid \mathbf{FinSet_0} are: its objects correspond to nonnegative integers, which label the energy levels for the oscillator (they “count photons”); each object n has automorphisms corresponding to permutations of those n photons (they’re indistinguishable – in particular, “bosons”). This is fairly simple, but for a more elaborate QFT picture, look at “states” for other groupoids in \mathbf{Span(Gpd)}.  One complication is that typically these groupoids are going to have some smooth structure…

Perhaps more on this later.

A couple of posts ago, I mentioned some work by Rivasseau that touched on combinatorial species and QFT. Since then there have been a few mentions: at Arcadian Functor, where Kea further pointed out a post at U Duality which in turn pointed to the arXiv where there is a new paper by Rivasseau, Gurau and Magnen called “Tree Quantum Field Theory”. It claims to present a formalism for quantum field theory which is non-perturbative and based on species of marked trees.

(And here, incidentally, is a recent arXiv posting by Joachim Kock about decorated trees and polynomial functors, quite possibly related as we’ll see, but from a different point of view. I’d have to look at this more closely to see whether it’s related to species, but “analytic functors” certainly are. Analytic functors have the structure of power series as described below. As for “polynomial functors”, Dan tells me that the term also appears in homotopy theory in relation to the Goodwillie calculus, which talks about functors in categories of spaces. From a little examination of Kock’s paper, it seems like what he’s talking about deals with the case where the spaces you’re talking about happen to be discrete sets. I only mention this because this is a blog, so I don’t have to feel obliged to understand just how, or even whether, this is relevant – it just looks interesting, and I found it while searching around about the rest of this stuff.)

Since I have been thinking about species a bit recently anyway in the wake of a conversation I had with Jamie Vicary in Nottingham, I thought I’d try to lay out one way of looking at the link between species and QFT, which is by way of groupoidification (described by John Baez in this draft paper) – more on that in part 2… For now, I’ll recap some basics about species.

Species

First off, what is the original definition of combinatorial species? This comes from Andre Joyal, I think originally in two main places: “Une theorie combinatoire des series formelles” (1981) and “Foncteurs analytiques et especes de structures” (1986). That word “especes” is the source of the term “combinatorial species”, but note that the more accurate translation of “especes de structures” is “type of structure” (or “sort of…”, or “species of…”). This is why some people, including John Baez (from whom I got the habit), call them “structure types”. In any case, what are they?

The simple version is to say that they are functors T from \mathbf{FinSet_0} (the category of finite sets and bijections, rather than arbitrary set maps) to \mathbf{Sets}, the category of sets. Every finite set S gets assigned a set T(S), interpreted as the set of T-structures on S. Every bijection of sets f : S \rightarrow S' induces a bijection of the T-structured sets T(f) : T(S) \rightarrow T(S'), which comes from relabelling each element of the underlying set of a given structured set along f.

A specific example should help: suppose T is the type “combinatorial graphs”, so that T(S) is the set of all graphs whose set of vertices is S. That is, each element of T(S) amounts to a choice of edges, each of which is just a pair of vertices, so T(S) \cong \mathcal{P}(S \times S). (I’m assuming a specific definition of “graph” here, and there are variants, but hopefully this is clear enough.) Then a bijection f : S \rightarrow S' takes a graph and gives a new one with underlying set S', where each vertex s is replaced with f(s).
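As a sanity check on this example, here is a minimal Python sketch (the function names are mine, not standard) that enumerates T(S) \cong \mathcal{P}(S \times S) and transports a graph along a bijection, just as T(f) does:

```python
from itertools import combinations

def graphs_on(S):
    """All graphs on vertex set S: every subset of S x S is an edge set."""
    pairs = [(a, b) for a in S for b in S]
    return [frozenset(es)
            for r in range(len(pairs) + 1)
            for es in combinations(pairs, r)]

def transport(f, edges):
    """Relabel a graph along a bijection f, as the functor T(f) does."""
    return frozenset((f[a], f[b]) for (a, b) in edges)

S = ['a', 'b']
print(len(graphs_on(S)))  # 2^(2*2) = 16 graphs on a 2-element set
print(transport({'a': 1, 'b': 2}, frozenset({('a', 'b')})))
```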

Part of the point of this was to give a more sophisticated account of something combinatorialists had been doing for some time, which is using power series to represent “types of structure”, using a variable, say z, to “mark” elements of the underlying set (fancier versions use multiple variables, marking different things one might count). The “generating function” for T has a coefficient for z^n which is \frac{\#T(S)}{n!} for any n-element set S (the n! counts the self-bijections of an n-element set). So for instance, with the T in our example, \#T(S) = 2^{(\#S)^2} = 2^{n^2}, so the generating function for the structure type “graphs” is

t(z) = \sum_{n=0}^{\infty} \frac{2^{n^2}}{n!} z^n
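With this convention, the first few coefficients can be computed exactly (a trivial sketch, using exact fractions; the function name is mine):

```python
from fractions import Fraction
from math import factorial

# Coefficient of z^n in the generating function for "graphs", with the
# P(S x S) convention above: #T(n) = 2^(n^2), divided by n!.
def graph_coeff(n):
    return Fraction(2 ** (n * n), factorial(n))

print([graph_coeff(n) for n in range(4)])  # values: 1, 2, 8, 256/3
```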

Then there are combinatorial operations that correspond to sums and products of power series, composition of power series, and so on. You can read plenty more about this in various places apart from Joyal’s papers, such as Wilf’s “generatingfunctionology“, books on combinatorial enumeration by Richard Stanley, by Ian Goulden and David Jackson (whom I learned this stuff from long ago, so I’ll plug the book!), or even – why not? – in this paper of mine, which describes a relation between this classical stuff and a very simple (the simplest) QFT, namely the Hilbert space and algebra of observables for a single harmonic oscillator. A look at species as entities on their own can be found in Bergeron, Labelle and Leroux’s “Combinatorial Species and Tree-Like Structures”.

One example of the sort of thing one can do relies on the fact that the power series representation for e^z represents the type “a set” (there is exactly one such structure on each finite set, so the coefficient for z^n is \frac{1}{n!}). So if the generating function for a type F is f(z), then the type T, “sets of structures of type F”, has generating function t(z) = e^{f(z)}. Conversely, since you can invert this to get f(z) = \ln(t(z)), one can use the coefficients of the power series for this f(z) to count the number of connected graphs on a set S (since a graph amounts to the same thing as a set of connected graphs). In particular, you can count them without having to find them all.
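Here is that trick in code, as a hedged sketch: I use the common “simple undirected graph” convention a_n = 2^{\binom{n}{2}} rather than the \mathcal{P}(S \times S) convention above, since its connected counts (1, 1, 4, 38, 728, …) are well known. The relation t(z) = e^{f(z)} on counts becomes the exponential-formula recurrence a_n = \sum_{k=1}^{n} \binom{n-1}{k-1} f_k a_{n-k}, which we invert to extract the connected counts f_n:

```python
from math import comb

def connected_counts(n_max):
    # a_n = number of labeled simple graphs on n vertices.
    a = [2 ** comb(n, 2) for n in range(n_max + 1)]
    # Invert a_n = sum_{k=1}^{n} C(n-1, k-1) f_k a_{n-k} for the
    # connected counts f_n (the coefficients of f(z) = ln t(z)).
    f = [0] * (n_max + 1)
    for n in range(1, n_max + 1):
        f[n] = a[n] - sum(comb(n - 1, k - 1) * f[k] * a[n - k]
                          for k in range(1, n))
    return f

print(connected_counts(5)[1:])  # [1, 1, 4, 38, 728]
```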

Well, so much for the classical theory of enumeration. One point of species is that one can deal with these functors from sets to sets directly, using various operations that correspond to the operations on power series. (A functor that can be represented in this “power series” sort of way is, naturally, an “analytic” functor.) One important such operation would be the derivative.

The natural way to define the combinatorial “derivative” is not too hard to predict: at the level of generating functions, it has to take z^n to nz^{n-1}. The structure type whose generating function is z^n is (naturally isomorphic to) the type “ordered n-element set”, since there are n! ways to order an n-element set S (presuming \#S=n), giving the coefficient \frac{n!}{n!} = 1. Its derivative puts a structure on an (n-1)-element set in n ways. What the derivative does is: given a set S, it formally adjoins a new element, and puts the structure “ordered n-element set” on the result, S \cup \{\star\}. This can only be done if S (the “underlying set” of the new structure) has n-1 elements, and since there are n! ways to order S \cup \{\star\}, the coefficient in the power series is \frac{n!}{(n-1)!} = n. In general, the combinatorial derivative takes a type T, and returns the type which gives T-structures on S \cup \{\star\}.
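On the level of counts the derivative is just a shift: if t_n is the number of T-structures on an n-element set, the derivative type has t_{n+1} structures on an n-element set (a T-structure on S \cup \{\star\}). A one-line sketch (with “ordered sets” as the test case; names are mine):

```python
from math import factorial

# The derivative type puts a T-structure on S with one extra element
# adjoined, so its count on an n-set is t_{n+1}: a left shift.
def derivative(counts):
    return counts[1:]

orders = [factorial(n) for n in range(6)]  # "ordered sets": t_n = n!
print(derivative(orders))  # [1, 2, 6, 24, 120], i.e. (n+1)! on an n-set
```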

What does this have to do with QFT? Well, having a derivative operator sets us off in the direction of operator algebras, which is a key part of the answer. I’ll address that some more after I’ve explained how this relates to groupoidification in Part 2, upcoming…
