In the last post, I was describing how you can represent spans of sets using vector spaces and linear maps, which turn out to be fairly special, in that they’re given by integer matrices in the obvious basis. Next I’d like to say a little about what happens if you step up one categorical level. This is something I gave a little talk on to our group at UWO on Wednesday, and will continue with next Wednesday. Here I’ll give a record of part of it.
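To make that last claim concrete – a span of finite sets determining an integer matrix in the obvious basis – here is a minimal sketch. The encoding (a list for the apex, two leg functions) and the helper name `span_to_matrix` are illustrative choices, not notation from the post:

```python
# A span of finite sets X <-s- S -t-> Y, encoded as an apex S with two
# leg functions.  "Linearizing" it gives the integer matrix whose (y, x)
# entry counts the apex elements sitting over the pair (x, y).

def span_to_matrix(X, Y, S, s, t):
    """Return M with M[i][j] = #{a in S : s(a) = X[j], t(a) = Y[i]}."""
    return [[sum(1 for a in S if s(a) == x and t(a) == y) for x in X]
            for y in Y]

# Example: the "mod 2" relation on {0,1,2,3}, seen as a span via projections.
X = [0, 1, 2, 3]
Y = [0, 1]
S = [(x, x % 2) for x in X]
M = span_to_matrix(X, Y, S, lambda a: a[0], lambda a: a[1])
print(M)  # [[1, 0, 1, 0], [0, 1, 0, 1]]
```

Each column has a single 1 because the relation here happens to be a function; a general span can put any non-negative integer in each entry.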
Once again, part of the point here is that categories of spans are symmetric monoidal categories with duals – like categories of cobordisms (which can be interpreted as “pieces of spacetime” in a sufficiently loose sense), and also like categories of quantum processes (that is, categories whose objects are Hilbert spaces of states, and whose morphisms are linear maps – processes taking states to states).
So first, what do I mean by “move up a categorical level”?
We were talking about spans of, say, sets, like this: $X \stackrel{s}{\leftarrow} S \stackrel{t}{\rightarrow} Y$. To go up a categorical level, we can talk about spans of categories. The objects $X$ and $Y$ now carry some extra information – they’re not just collections of elements, but they also tell us how those elements are related to each other. So then remember that spans of sets really want to form a bicategory, which we can cut down to a category by only thinking of spans up to isomorphism. Likewise, spans of categories probably want to form a tricategory, which we can cut down to a bicategory in the same way. (Several people have studied them, but the only person I know who really seems to grok tricategories is Nick Gurski, though in his talk he tried to convince us that we all could have invented them ourselves.) Before rushing off into realms involving the word “terrifying”, we should start by looking at what happens at the level of objects.
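Composition of spans, one level down, is worth having in hand before climbing: two spans compose by pullback, and on matrices this is just matrix multiplication. A small sketch under those assumptions (the names `compose`, `fst`, `snd` are mine, and the two relations are arbitrary toy data):

```python
def compose(S, sS, tS, T, sT, tT):
    """Composite of spans X <-sS- S -tS-> Y and Y <-sT- T -tT-> Z,
    taken by pullback: keep the pairs whose middle legs agree on Y."""
    apex = [(a, b) for a in S for b in T if tS(a) == sT(b)]
    return apex, (lambda p: sS(p[0])), (lambda p: tT(p[1]))

fst = lambda p: p[0]
snd = lambda p: p[1]

# Two relations on {0, 1}, viewed as spans via their projections.
S = [(0, 0), (0, 1), (1, 1)]   # X <- S -> Y
T = [(0, 0), (1, 0), (1, 1)]   # Y <- T -> Z
apex, s, t = compose(S, fst, snd, T, fst, snd)

# The (z, x) entry of the composite's matrix counts apex points over (x, z);
# it agrees with multiplying the two spans' integer matrices.
count = lambda x, z: sum(1 for p in apex if s(p) == x and t(p) == z)
print([[count(x, z) for x in (0, 1)] for z in (0, 1)])  # [[2, 1], [1, 1]]
```

Note the entry 2: composition by pullback counts the *ways* an element of X reaches an element of Z, which is exactly why spans are matrices rather than mere relations.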
But first, why should we bother? Well, there’s a physical motivation: building vector spaces from sets plays a role in quantizing physical theories, where the sets are sets of classical states for some system. That is, when you quantize the system, you allow it to have states which are linear combinations – superpositions – of classical states. But saying you have just a set of states is limiting even in the classical situation. Sometimes – for instance, in gauge theory – there are actually lots of “configurations” of a system that are physically indistinguishable (because of some symmetry, which in that example is achieved by “gauge equivalence”), and so what’s usually done is to just look at the set of equivalence classes of configurations. But that throws away information we may want: it’s better to just take a category whose objects are states, and whose morphisms are the symmetries of the states.
For these to really be symmetries, they should be invertible, so we’re looking at a groupoid of states, $X$. But then to quantize things, we can’t just take a vector space of all functions $f : X \to \mathbb{C}$ – as we did when $X$ was a set. Now we need to have something collecting together all the functors out of $X$ into $\mathbf{Vect}$. These certainly form a category, so we want some kind of category which is “like” a vector space. By default it’s called a 2-vector space, since it now has an extra level of structure.
As I said before, this stuff isn’t so hard if you’re willing to ignore details until needed – so for now, I’ll just say that (Kapranov-Voevodsky) 2-vector spaces are categories which resemble $\mathbf{Vect}^n$, just as (finite-dimensional) vector spaces are sets resembling $\mathbb{C}^n$, for some $n$. And just as the set of functions $f : X \to \mathbb{C}$ becomes a vector space, so does the category of functors $[X, \mathbf{Vect}]$ become a 2-vector space when $X$ is a groupoid. (Josep Elgueta discusses in some depth what happens for a general category $X$ in this paper.)
What makes a groupoid special is that the two layers – objects and morphisms – both get along nicely with the operation of taking functors into $\mathbf{Vect}$. That is, it’s easy to describe such functors. It’s a little easier to talk about it for a skeletal groupoid: one with just one object in each isomorphism class. Fortunately, every groupoid is equivalent to one like this. So since I’ve figured out how to do pictures here, let’s see one of a functor $F : X \to \mathbf{Vect}$:
This is one particular 2-vector in the 2-vector space I’m building. The picture shows the following: the objects $x \in X$ have groups of automorphisms, $\mathrm{Aut}(x)$, indicated by the curved arrows. A functor $F$ assigns, to each object $x$, a vector space $F(x)$ (sketched roughly as squares), and to each automorphism $g$ of that object, a linear map $F(g) : F(x) \to F(x)$. Since $F$ is a functor, these linear maps are chosen so that $F(g \circ h) = F(g) \circ F(h)$ – so this is an $\mathrm{Aut}(x)$-action on $F(x)$. In other words, for each $x$, we have a representation of its automorphism group on the vector space $F(x)$.
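For a one-object skeletal groupoid with $\mathrm{Aut} = \mathbb{Z}/2$, such a functor is just a $\mathbb{Z}/2$-representation, and the functoriality equation can be checked by brute force. A hypothetical toy (plain lists as matrices, nothing from the post itself):

```python
# A skeletal groupoid with one object and Aut = Z/2 = {e, g}.  A functor
# into Vect sends the object to a vector space (here C^2) and each
# automorphism to a linear map, with F(g . h) = F(g) F(h).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

mult = {('e', 'e'): 'e', ('e', 'g'): 'g', ('g', 'e'): 'g', ('g', 'g'): 'e'}
F = {'e': [[1, 0], [0, 1]],   # the identity must go to the identity map
     'g': [[0, 1], [1, 0]]}   # the flip: a genuine Z/2 action on C^2

# Functoriality check: F(g . h) == F(g) F(h) for all pairs.
ok = all(F[mult[g, h]] == matmul(F[g], F[h]) for g in F for h in F)
print(ok)  # True
```

The only real constraint here is that the flip squares to the identity – replace it by any matrix whose square is not the identity and the check fails.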
A morphism between two such 2-vectors $F$ and $G$ is a natural transformation of functors $n : F \Rightarrow G$ – for each object $x$, a linear map $n_x : F(x) \to G(x)$ satisfying the usual naturality condition. As you might expect, this condition means that $n$ gives, for each $x$, an intertwining operator between the two representations on $F(x)$ and $G(x)$. So it turns out that the 2-vector space $[X, \mathbf{Vect}]$ is a product, taken over the objects $x$, of the categories $\mathrm{Rep}(\mathrm{Aut}(x))$.
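The intertwining condition is easy to test concretely: at each object, naturality says $n_x \circ F(g) = G(g) \circ n_x$ for every automorphism $g$. A toy check for $\mathbb{Z}/2$, with maps and representations chosen purely for illustration:

```python
# Naturality for a transformation between two functors X -> Vect: the
# component at each object intertwines the two Aut(x)-actions,
# n . F(g) = G(g) . n for every automorphism g.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

F = {'e': [[1, 0], [0, 1]], 'g': [[0, 1], [1, 0]]}   # the flip action on C^2
G = {'e': [[1]], 'g': [[1]]}                         # the trivial action on C^1

n = [[1, 1]]   # the map (a, b) |-> a + b, from C^2 down to C^1

is_intertwiner = all(matmul(n, F[g]) == matmul(G[g], n) for g in F)
print(is_intertwiner)  # True
```

Summing the coordinates is flip-invariant, which is why this particular $n$ intertwines; the difference map `[[1, -1]]` would fail the check.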
In particular, if $X$ is just a set, thought of as a groupoid with only identity morphisms, then this is just $\mathbf{Vect}^X$, since any vector space is automatically a representation of the trivial group, and any linear map is an intertwining operator between such trivial representations.
Now, proving that this is a 2-vector space would require giving a lot more detail about what that actually means – and would involve some facts from representation theory, such as Schur’s Lemma – but at least we have some idea what the 2-vector space on a groupoid looks like.
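For reference, the version of Schur’s Lemma that does the work here (over $\mathbb{C}$, or any algebraically closed field) says that irreducible representations admit essentially no nontrivial intertwiners:

```latex
% Schur's Lemma, over an algebraically closed field such as \mathbb{C}:
% for irreducible representations V, W of a group G,
\mathrm{Hom}_G(V, W) \;\cong\;
\begin{cases}
  \mathbb{C} & \text{if } V \cong W\\
  0          & \text{if } V \not\cong W
\end{cases}
```

This is what lets morphisms in $\mathrm{Rep}(\mathrm{Aut}(x))$ be described by matrices of scalars indexed by irreducibles – the feature that makes the comparison with $\mathbf{Vect}^n$ go through.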
Next up (pt 3): what about spans? What happened to spans, anyway? There was supposed to be an earth-shattering fact about spans! Then, that done, hopefully I’ll get back to looking at the physical interpretation of an extended TQFT.