Well, I was out of town for a weekend, and then had a miserable cold that went away but only after sleeping about 4 extra hours per day for a few days. So it’s been a while since I continued the story here.

To recap: I first explained how to turn a span of sets into a linear operator between the free vector spaces on those sets. Then I described the “free” 2-vector space on a groupoid – namely, the category of functors from the groupoid into $\mathbf{Vect}$. So now the problem is to describe how to turn a span of groupoids into a 2-linear map. Here’s a span of groupoids:

Here we have a span $A \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} B$ of groupoids. In fact, they’re skeletal groupoids: there’s only one object in each isomorphism class, so they’re completely described, up to isomorphism, by the automorphism groups of their objects – each object $x$ comes with its automorphism group $\mathrm{Aut}(x)$. This diagram shows the object maps of the “source” and “target” functors $s$ and $t$ explicitly, but note that with each arrow indicated in the diagram, there is a group homomorphism. So, since the object map for $t$ sends $x$ to $t(x)$, that strand must be labelled with a group homomorphism $t_x : \mathrm{Aut}(x) \to \mathrm{Aut}(t(x))$. (We’re leaving these out of the diagram for clarity.)

So, we want to know how to transport a $\mathbf{Vect}$-valued functor $F : A \to \mathbf{Vect}$ along this span. We know that such a functor attaches to each object $a \in A$ a representation of $\mathrm{Aut}(a)$ on some vector space $F(a)$. As with spans of sets, the first stage is easy: we have the composable pair of functors $X \stackrel{s}{\to} A \stackrel{F}{\to} \mathbf{Vect}$, so “pulling back” $F$ to $X$ gives $F \circ s$.

What about the other leg of the span? Remember back in Part 1 what happened when we pushed down a function (not a functor) along the second leg of a span. To find the value of the pushed-forward function on an element $b \in B$, we took a *sum* of the complex values on every element of the preimage $t^{-1}(b)$. For vector-space-valued functors, we expect a direct sum of some such terms instead. Since we’re dealing with functors, things are a little more complicated than before, but there should still be a contribution from each object in the preimage (or, if we’re not talking about skeletal groupoids, the *essential preimage*) of the object we look at.
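The set-level recipe from Part 1 can be sketched in a few lines of Python. This is a toy example – the sets, maps, and names are all mine, chosen just to illustrate the pull-back/push-forward pattern:

```python
# Toy span of sets A <-s- X -t-> B, with the maps given as dicts.
def pull_back(f, s):
    """Pull f : A -> C back along s : X -> A, giving the composite f . s."""
    return {x: f[a] for x, a in s.items()}

def push_forward(g, t, B):
    """Push g : X -> C forward along t : X -> B: sum over each preimage t^{-1}(b)."""
    return {b: sum(c for x, c in g.items() if t[x] == b) for b in B}

s = {'p': 0, 'q': 0, 'r': 1}          # s : X -> A
t = {'p': 'u', 'q': 'v', 'r': 'v'}    # t : X -> B
f = {0: 2.0, 1: 3.0}                  # a vector in the free vector space on A
g = push_forward(pull_back(f, s), t, ['u', 'v'])
# g['u'] == 2.0 and g['v'] == 5.0: each value is a sum over a preimage of t
```

The groupoid version below follows the same shape, with sums replaced by direct sums.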

However, we have to deal with the fact that there are morphisms. Instead of adding scalars, we have to combine vector spaces using the fact that they are given as representation spaces for some particular groups.

To see what needs to be done, consider groupoids with just one object, so the only important information is a homomorphism of groups $f : H \to G$. The groups $H$ and $G$ can be seen as one-object groupoids, which we can just call $H$ and $G$; a functor between them is given by the single group homomorphism $f$.

Now suppose we have a representation $\rho$ of the group $H$ on $V$ (so that $\rho(h) : V \to V$ for each $h \in H$, and $\rho(h_1 h_2) = \rho(h_1)\rho(h_2)$). Then somehow we need to get a representation of $G$ which is “induced” by the homomorphism $f$, namely:

$\mathrm{Ind}_f(V) = \mathbb{C}[G] \otimes_{\mathbb{C}[H]} V$

That’s “the answer” – but how does it work? Essentially, we use the fact that there’s a nice, convenient representation of any group $G$, namely the *regular representation* of $G$ on the group algebra $\mathbb{C}[G]$. Elements of $\mathbb{C}[G]$ are just complex linear combinations of elements of $G$, which are acted on by $G$ by left multiplication. The group $H$ also has its regular representation, on $\mathbb{C}[H]$. These are the most easily available building blocks with which to build the “push-forward” of $\rho$ onto $G$.
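As a quick sketch (my own toy encoding, not anything from the post): an element of $\mathbb{C}[G]$ can be stored as a dictionary from group elements to coefficients, with $G$ acting by left multiplication, extended linearly:

```python
# C[G] as {group element: complex coefficient}; here G = Z/3 under addition mod 3.
def left_act(g, vec, op):
    """Left regular action: send each basis element x to g*x, keeping coefficients.
    Left multiplication by a fixed g is a bijection on G, so no keys collide."""
    return {op(g, x): c for x, c in vec.items()}

op = lambda a, b: (a + b) % 3   # the group operation of Z/3
v = {0: 1.0, 1: 2.0}            # the element 1*e + 2*g of C[Z/3]
w = left_act(1, v, op)          # act on the left by the generator g
# w == {1: 1.0, 2: 2.0}
```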

To see how, we use the fact that $\mathbb{C}[G]$ has a right-action of $G$, and hence of $H$, by way of $f$. An element $h \in H$ acts on $g \in G$ by right-multiplication by $f(h)$ – and this extends linearly to $\mathbb{C}[G]$. So we can combine this with the left action of $H$ on $V$ (also extended linearly, from $H$ to $\mathbb{C}[H]$) by taking the tensor product of $\mathbb{C}[G]$ with $V$ over $\mathbb{C}[H]$. This lets us “mod out” by the action of $H$: it identifies $a f(h) \otimes v$ with $a \otimes \rho(h)v$. The result, $\mathbb{C}[G] \otimes_{\mathbb{C}[H]} V$, called the induced representation, in turn gives us a left-action of $G$, by left multiplication on the $\mathbb{C}[G]$ factor. I’ll call this $\mathrm{Ind}_f(V)$.

(Note that usually this name refers to the situation where $H$ is a subgroup of $G$ and $f$ is the inclusion, but in fact the construction works for any homomorphism.)
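Here is a numerical sanity check of the construction – a sketch under my own naming conventions, which imposes the relations $g f(h) \otimes v = g \otimes \rho(h)v$ and takes a numerical rank. For the sign representation of $C_2 \subset S_3$, the induced representation should have dimension $[S_3 : C_2] \cdot \dim V = 3$:

```python
import numpy as np
from itertools import permutations

def induced_dim(G, H, f, rho, dimV, mul):
    """dim of C[G] (x)_{C[H]} V: start from |G|*dimV and subtract the rank of
    the relation vectors (g*f(h)) (x) v_j  -  g (x) rho(h)v_j."""
    G = list(G)
    idx = {g: k for k, g in enumerate(G)}
    n = len(G) * dimV
    rows = []
    for g in G:
        for h in H:
            gh = mul(g, f(h))
            for j in range(dimV):
                r = np.zeros(n)
                r[idx[gh] * dimV + j] += 1.0
                for i in range(dimV):
                    r[idx[g] * dimV + i] -= rho(h)[i, j]
                rows.append(r)
    return n - np.linalg.matrix_rank(np.array(rows))

# G = S3 as permutations of {0,1,2}; H = C2 = {e, (01)}, f the inclusion.
mul = lambda p, q: tuple(p[q[i]] for i in range(3))   # composition of permutations
S3 = list(permutations(range(3)))
e, tr = (0, 1, 2), (1, 0, 2)
sign = lambda h: np.array([[1.0]]) if h == e else np.array([[-1.0]])
dim = induced_dim(S3, [e, tr], lambda h: h, sign, 1, mul)
# dim == 3
```

The same function handles non-injective $f$, where the relations also collapse the part of $V$ not seen by the image of $H$.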

This tells us what to do for single-object groupoids. As we remarked earlier, if more than one object of $X$ is sent to the same object of $B$, we should get a direct sum of all their contributions. So I want to describe the 2-linear map, which I’ll now call $\Lambda(X) : \Lambda(A) \to \Lambda(B)$, which we get from the span above, thought of as a morphism in $\mathbf{2Vect}$. Here $\Lambda(A) = [A, \mathbf{Vect}]$ and $\Lambda(B) = [B, \mathbf{Vect}]$ (where I’m now being more explicit that this whole process $\Lambda$ is a functor in some reasonable sense).

I have to say what this 2-linear map does to a given 2-vector (what it does to morphisms between 2-vectors is straightforward to work out, since every operation we do is a tensor product or direct sum). Suppose $F : A \to \mathbf{Vect}$ is one. Then the result should be a functor from $B$ to $\mathbf{Vect}$, and we can say what it works out to. At an object $b \in B$, we get (still assuming everything is skeletal for simplicity):

$\bigoplus_{x \in t^{-1}(b)} \mathbb{C}[\mathrm{Aut}(b)] \otimes_{\mathbb{C}[\mathrm{Aut}(x)]} F(s(x))$

where $\mathrm{Aut}(x)$ acts on $\mathbb{C}[\mathrm{Aut}(b)]$ on the right via $t_x$, and on $F(s(x))$ on the left via $s_x$.
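For dimension bookkeeping, here’s a small sketch with toy data and names of my own choosing, covering only the easy case where each homomorphism $t_x : \mathrm{Aut}(x) \to \mathrm{Aut}(b)$ is injective (so $\mathbb{C}[\mathrm{Aut}(b)]$ is a free $\mathbb{C}[\mathrm{Aut}(x)]$-module of rank $|\mathrm{Aut}(b)|/|\mathrm{Aut}(x)|$):

```python
# Hypothetical skeletal span: 'aut' records |Aut(-)| for each object; s, t are
# the object maps of the span; dimF gives the dimension of F at each object of A.
def pushed_dim(b, X_objects, aut, s, t, dimF):
    """Dimension of the direct sum, over x in t^{-1}(b), of
    C[Aut(b)] (x)_{C[Aut(x)]} F(s(x)), assuming each t_x is injective."""
    return sum((aut[b] // aut[x]) * dimF[s[x]]
               for x in X_objects if t[x] == b)

aut  = {'x1': 2, 'x2': 3, 'b': 6, 'a1': 2, 'a2': 3}
s    = {'x1': 'a1', 'x2': 'a2'}
t    = {'x1': 'b', 'x2': 'b'}
dimF = {'a1': 1, 'a2': 2}
d = pushed_dim('b', ['x1', 'x2'], aut, s, t, dimF)
# d == (6//2)*1 + (6//3)*2 == 7
```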

And this is a direct sum of a bunch of such expressions when $F$ is a basis 2-vector – i.e. one which assigns an irreducible representation to some one object, and the trivial rep on the zero vector space to every other object. That allows this 2-linear map to be written as a matrix with vector-space components, just like any 2-linear map.

So the 2-linear map has a matrix representation. The indices of the matrix are the simple objects on the two sides, each of which consists of a choice of (a) an object – say $a$ in $A$ or $b$ in $B$ (which we assume are skeletal – otherwise it’s a choice of isomorphism class) – and (b) an irreducible representation of the automorphism group of that object. Given a choice of index on each side, the corresponding coefficient in the matrix is a vector space: namely, the direct sum, over all the objects $x$ in the middle which restrict down to our chosen pair, of terms like $\mathbb{C}[\mathrm{Aut}(b)] \otimes_{\mathbb{C}[\mathrm{Aut}(x)]} V$, where $V$ carries the chosen irreducible representation. This is just a quotient space of the one group algebra by the action of the image of the other.

Next up: a quick finisher about what happens at the 2-morphism level, then back to TQFT and gravity!

October 30, 2007 at 3:59 am

Good stuff! Hope you get healthy!

typo: “how to transport a \mathbf{Vect}-valued functor”

November 1, 2007 at 10:23 am

You said in part 1 “To get a general matrix, you’d have to give labels to X” when converting a span of sets into a map between vector spaces. Is something similar necessary in this new situation when converting a span of groupoids into a 2-linear map?

And, is there any sign that there’s scope for some form of row/column normalization in these 2-linear map matrices?

November 1, 2007 at 11:15 pm

David: interesting question.

I don’t think extra labels are needed here. Any 2-linear map can be realized as a matrix of vector spaces, and I believe that any components you want can be realized as coming from some span. The component in the position labelled by some choice of objects in source and target, and by irreps, say $U$ and $W$, of their automorphism groups, will be of the form $\bigoplus_x U \otimes_{\mathbb{C}[\mathrm{Aut}(x)]} W$, where the sum is over all the objects $x$ in the middle which restrict to the specified ones, and the tensor product is reducing modulo the action of the automorphism group of $x$. I think you can choose the groupoid in the middle to get any vector space you like (up to isomorphism!) in this way.

The reason you need something extra when you have spans of sets is the fact that the components of a linear map need to be arbitrary complex numbers, and cardinalities of sets are too coarse. With KV 2-vector spaces, there is a kind of discreteness already built into the morphisms. The set of isomorphism classes of vector spaces is $\mathbb{N}$, rather than $\mathbb{C}$ – which is one way that (KV) $\mathbf{2Vect}$ is less than satisfying as a categorification of $\mathbf{Vect}$. Baez-Crans 2-vector spaces avoid this problem, but that’s not what we get here. There are also Elgueta’s “generalized 2-vector spaces”: I don’t know if these would have any room for the kind of generalization we’re talking about here.

There are similar considerations for normalization: you can reduce vector spaces modulo some group action, but “normalizing” them is not so straightforward.

At the object level, 2-vector spaces resemble modules over the rig $\mathbb{N}$ more than complex vector spaces.

November 2, 2007 at 9:40 am

Urs has frequently encouraged us to think about richer 2-vector spaces, e.g., here.

Ultimately, should we be looking for non-integer dimensional ‘vector spaces’ for these matrix entries?