In the last post, I was describing how you can represent spans of sets using vector spaces and linear maps, which turn out to be fairly special, in that they’re given by integer matrices in the obvious basis. Next I’d like to say a little about what happens if you step up one categorical level. This is something I gave a little talk on to our group at UWO on Wednesday, and will continue with next Wednesday. Here I’ll give a record of part of it.

Once again, part of the point here is that categories of spans are symmetric monoidal categories with duals – like categories of cobordisms (which can be interpreted as “pieces of spacetime” in a sufficiently loose sense), and also like categories of quantum processes (that is, categories whose objects are Hilbert spaces of states, and whose morphisms are linear maps – processes taking states to states).

So first, what do I mean by “move up a categorical level”?

We were talking about spans of, say, sets, like this: S \leftarrow X \rightarrow T. To go up a categorical level, we can talk about spans of categories. The objects S and T now carry some extra information – they’re not just collections of elements, but they also tell us about how elements are related to each other. So then remember that spans of sets really want to form a bicategory, which we can cut down to a category by only thinking of them up to isomorphism. Well, likewise, spans of categories probably want to form a tricategory, which we can cut down to a bicategory in the same way. (Several people have studied them, but the only person I know who really seems to grok tricategories is Nick Gurski, though in this talk he tried to convince us that we all could have invented them ourselves.) Before rushing off into realms involving the word “terrifying”, we should start by looking at what happens at the level of objects.
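(As a parenthetical reminder of the level we’re coming from, here’s a purely illustrative Python sketch – the sets and names are made up – of composing two spans of finite sets by pullback. Counting elements of the apexes is what reproduces the integer-matrix multiplication from the last post.)

```python
from itertools import product

def compose_spans(span1, span2):
    """Compose spans S <- X -> T and T <- Y -> U by pullback:
    the new apex is the set of pairs (x, y) whose images in T agree."""
    (s1, t1), (s2, t2) = span1["legs"], span2["legs"]
    apex = [(x, y) for x, y in product(span1["apex"], span2["apex"])
            if t1(x) == s2(y)]
    return {"apex": apex,
            "legs": (lambda p: s1(p[0]), lambda p: t2(p[1]))}

# A span {a, b} <- {1, 2, 3} -> {c}, and a span {c} <- {p, q} -> {d, e}.
span1 = {"apex": [1, 2, 3],
         "legs": (lambda x: "a" if x == 1 else "b", lambda x: "c")}
span2 = {"apex": ["p", "q"],
         "legs": (lambda y: "c", lambda y: "d" if y == "p" else "e")}

composite = compose_spans(span1, span2)
print(len(composite["apex"]))  # 6: every pair matches over the single element "c"
```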

But first, why should we bother? Well, there’s a physical motivation: building vector spaces from sets plays a role in quantizing physical theories, where the sets are sets of classical states for some system. That is, when you quantize the system, you allow it to have states which are linear combinations – superpositions – of classical states. But saying you have just a set of states is limiting even in the classical situation. Sometimes – for instance, in gauge theory – there are actually lots of “configurations” of a system that are physically indistinguishable (because of some symmetry, which in that example goes by the name of “gauge equivalence”), and so what’s usually done is to just look at the set of equivalence classes of configurations. But that throws away information we may want: it’s better to take a category whose objects are the states themselves, and whose morphisms are the symmetries between them.
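Here’s a toy illustration (my own, not from anything above) of the information the quotient throws away: two \mathbb{Z}/2-symmetric systems whose sets of equivalence classes are identical single points, but whose categories of states-and-symmetries are not.

```python
def action_groupoid(group, X, act):
    """Objects are the configurations; one morphism x -> act(g, x) for each g.
    This is the 'weak quotient' X//G, as opposed to the set of orbits X/G."""
    objects = list(X)
    morphisms = [(x, g, act(g, x)) for x in X for g in group]
    return objects, morphisms

Z2 = [0, 1]  # the group Z/2, written additively

# Two configurations swapped by the symmetry: one orbit, trivial stabilizers.
swap = action_groupoid(Z2, ["up", "down"],
                       lambda g, x: x if g == 0 else ("down" if x == "up" else "up"))

# One configuration fixed by everything: one orbit, stabilizer Z/2.
fixed = action_groupoid(Z2, ["*"], lambda g, x: x)

# Both quotient sets are a single point, but the groupoids differ:
# 4 morphisms on 2 objects versus 2 loops on 1 object.
print(len(swap[1]), len(fixed[1]))  # 4 2
```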

For these to really be symmetries, they should be invertible, so we’re looking at a groupoid of states, S. But then to quantize things, we can’t just take a vector space of all functions – as we did when S was a set. Now we need to have something collecting together all the functors out of S. These certainly form a category, so we want some kind of category which is “like” a vector space. By default it’s called a 2-vector space, since it now has an extra level of structure.

As I said before, this stuff isn’t so hard if you’re willing to ignore details until needed – so for now, I’ll just say that (Kapranov-Voevodsky) 2-vector spaces are categories which resemble \mathbf{Vect}^n, just as (finite-dimensional) vector spaces are sets resembling \mathbb{C}^n, for some n. And just as the set of functions f : S \rightarrow \mathbb{C} becomes a vector space, so does the category of functors F : X \rightarrow \mathbf{Vect} become a 2-vector space when X is a groupoid. (Josep Elgueta discusses in some depth what happens for a general category in this paper.)
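To make the analogy a little more concrete, here is my own small sketch (using dimensions as stand-ins for the vector spaces themselves): an object of \mathbf{Vect}^n is an n-tuple of vector spaces, a morphism is an n-tuple of linear maps, and everything happens componentwise, just as it does for n-tuples of numbers in \mathbb{C}^n.

```python
import numpy as np

# An object of Vect^n is an n-tuple of finite-dimensional vector spaces;
# up to isomorphism, that's just an n-tuple of dimensions.
V = (2, 1, 3)   # stands for (C^2, C^1, C^3), an object of Vect^3
W = (2, 2, 3)

# A morphism V -> W is an n-tuple of linear maps, one in each component.
f = tuple(np.zeros((w, v)) for v, w in zip(V, W))

def compose(g, f):
    """Composition in Vect^n is componentwise composition of linear maps."""
    return tuple(gi @ fi for gi, fi in zip(g, f))

def direct_sum(V, W):
    """The 'addition' that makes Vect^n behave like C^n: componentwise direct sum."""
    return tuple(v + w for v, w in zip(V, W))

id_V = tuple(np.eye(v) for v in V)
print(all(np.array_equal(a, b) for a, b in zip(compose(f, id_V), f)))  # True
print(direct_sum(V, W))  # (4, 3, 6)
```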

What makes a groupoid X special is that the two layers – objects and morphisms – both get along nicely with the operation of taking functors into \mathbf{Vect}. That is, it’s easy to describe such functors. It’s a little easier to talk about it for a skeletal groupoid: one with just one object in each isomorphism class. Fortunately, every groupoid is equivalent to one like this. So since I’ve figured out how to do pictures here, let’s see one of a functor R : X \rightarrow \mathbf{Vect}:

[Figure: Vect-valued presheaf]

This is one particular 2-vector in the 2-vector space I’m building. The picture is showing the following: the objects x_i \in X have groups of automorphisms, G_i, indicated by the curved arrows. A functor R : X \rightarrow \mathbf{Vect} assigns, to each object x_i, a vector space R(x_i) = V_i (sketched roughly as squares), and for each automorphism of that object g \in G_i, a linear map R(g) : V_i \rightarrow V_i. Since R is a functor, these linear maps are chosen so that R(gg') = R(g)R(g') – so this is a G_i-action on V_i. In other words, for each x_i, we have a representation R_i of its automorphism group G_i on the vector space V_i.
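Here’s the same kind of data in a toy example (mine, not the one in the picture): a skeletal groupoid with two objects, one with automorphism group \mathbb{Z}/2 and one with trivial automorphisms, and a 2-vector R recorded as a dimension plus a matrix for each automorphism.

```python
import numpy as np

# A 2-vector R in hom(X, Vect): for each object x_i of the skeletal groupoid X,
# a vector space V_i (recorded here just by its dimension) and a matrix R_i(g)
# for each g in the automorphism group G_i, with R_i(g g') = R_i(g) R_i(g').

swap = np.array([[0.0, 1.0], [1.0, 0.0]])  # the nontrivial element of Z/2 acting on C^2

R = {
    "x1": {"dim": 2, "rep": {0: np.eye(2), 1: swap}},  # G_1 = Z/2
    "x2": {"dim": 1, "rep": {0: np.eye(1)}},           # G_2 = trivial group
}

def is_functorial(rep, multiply):
    """Check R(g g') = R(g) R(g') for all pairs of group elements."""
    return all(np.allclose(rep[multiply(g, h)], rep[g] @ rep[h])
               for g in rep for h in rep)

print(is_functorial(R["x1"]["rep"], lambda g, h: (g + h) % 2))  # True
print(is_functorial(R["x2"]["rep"], lambda g, h: 0))            # True
```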

A morphism \alpha : R \rightarrow R' between two such 2-vectors is a natural transformation of functors – for each x_i \in X, a linear map \alpha_i : V_i \rightarrow V'_i satisfying the usual naturality condition. As you might expect, this condition means that \alpha gives, for each x_i, an intertwining operator between the two representations R_i and R'_i. So it turns out that the 2-vector space hom(X,\mathbf{Vect}) is a product, taken over the objects x_i \in X, of the categories Rep(G_i).
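Continuing that toy example, the naturality condition on \alpha is exactly the intertwining equation \alpha_i R_i(g) = R'_i(g) \alpha_i at each object. Here’s a small, purely illustrative check of it:

```python
import numpy as np

swap = np.array([[0.0, 1.0], [1.0, 0.0]])

# Two representations of G_1 = Z/2 at the object x1:
# R_1 on C^2 by swapping coordinates, and R'_1 on C^1 by a sign.
rep_R      = {0: np.eye(2), 1: swap}
rep_Rprime = {0: np.eye(1), 1: -np.eye(1)}

def is_intertwiner(alpha, rep_from, rep_to):
    """alpha intertwines if alpha R(g) = R'(g) alpha for every g in the group."""
    return all(np.allclose(alpha @ rep_from[g], rep_to[g] @ alpha) for g in rep_from)

# The map (v1, v2) |-> v1 - v2 onto the sign representation is an intertwiner...
alpha = np.array([[1.0, -1.0]])
print(is_intertwiner(alpha, rep_R, rep_Rprime))                   # True

# ...but a generic linear map between the same spaces is not.
print(is_intertwiner(np.array([[1.0, 0.0]]), rep_R, rep_Rprime))  # False
```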

In particular, if X is just a set, thought of as a groupoid with only identity morphisms, then hom(X, \mathbf{Vect}) is just \mathbf{Vect}^n (where n is the number of elements), since any vector space is automatically a representation of the trivial group, and any linear map is an intertwining operator between such trivial representations.

Now, proving that this is a 2-vector space would involve giving a lot more detail about what that actually means – and would use some facts from representation theory, such as Schur’s Lemma – but at least we have some idea what the 2-vector space on a groupoid looks like.
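For the record, the standard fact being alluded to is Schur’s Lemma: for a finite group G (the relevant case if the groupoid is finite) and irreducible representations R and R' over \mathbb{C}, hom(R, R') \cong \mathbb{C} if R \cong R', and 0 otherwise. Since every representation of a finite group decomposes as a direct sum of irreducibles, Rep(G) is equivalent to \mathbf{Vect}^k, where k is the number of irreducible representations of G, and that is what makes the comparison with \mathbf{Vect}^n work.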

Next up (pt 3): what about spans? What happened to spans, anyway? There was supposed to be an earth-shattering fact about spans! Then, that done, hopefully I’ll get back to looking at the physical interpretation of an extended TQFT.
