representation theory


Recently I finished up my series of talks on 2-Hilbert spaces with a description of the basics of 2-group representation theory, and a little about the special case of the Poincaré 2-group. The main sources were a paper by Crane and Yetter describing 2-group representations in general, and another by Crane and Sheppeard. The Poincaré 2-group, so far as I know, was first explicitly mentioned by John Baez in the context of higher gauge theory. It’s an example of a kind of 2-group which can be cooked up from any group G acting on an abelian group H, and which is related to the semidirect product G \ltimes H.

One reason people are starting to take an interest in the representation theory of the Poincaré 2-group is that representations of the Poincaré group (among others), and intertwiners between them, play a role in spin foam models for field theories such as BF theory, various models of quantum gravity, and so on. Some of these turn up naturally when looking at TQFTs and their generalizations, which is how I got here. Extending this to 2-groups gives a richer structure to work with. (Whether the extra richness is useful is another matter.)

Before getting into more detail, I first would like to take a look at representation theory for groups from a categorical point of view, and then see what happens when we move to n-groups – that is, when we categorify.

To begin with, we can think of a representation (V, \rho) of a group G as a functor. The group G can be thought of as a category with one object and all morphisms invertible – so that the group elements are morphisms, and the group operation is composition. In this case, a representation of the group is just any functor:

\rho : G \rightarrow Vect

since this assigns some one vector space (the representation space, \rho(\star) = V) to the one object of G, and a linear map \rho(g):  V \rightarrow V to each morphism of G (i.e. to each group element) in a way consistent with composition. The nice thing about this point of view is that knowing a little category theory is enough to suggest one of the fundamental ideas of representation theory, namely intertwining operators (“intertwiners”). These are natural transformations between functors. This is the idea to categorify.
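To make the functor picture concrete, here is a minimal sketch in Python – the group, the vector space, and the particular matrices are made up purely for illustration: a representation of \mathbb{Z}/3 stored as a dictionary from group elements to matrices, together with a check that it respects composition and identities, which is exactly the statement that it is a functor from the one-object category \mathbb{Z}/3 to Vect.

```python
import numpy as np

# Made-up example: a representation of Z/3 = {0, 1, 2} (addition mod 3) on C^2.
# The generator 1 acts by the diagonal matrix diag(w, conj(w)), with w = exp(2*pi*i/3).
w = np.exp(2j * np.pi / 3)
rho = {
    0: np.eye(2, dtype=complex),
    1: np.diag([w, np.conj(w)]),
    2: np.diag([w**2, np.conj(w)**2]),
}

# Functoriality: rho turns composition (addition mod 3) into matrix multiplication,
# and sends the identity element to the identity map.
for g in range(3):
    for gp in range(3):
        assert np.allclose(rho[(g + gp) % 3], rho[g] @ rho[gp])
assert np.allclose(rho[0], np.eye(2))
```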

The point is that functors F : G \rightarrow Vect can be organized into a structure hom(G,Vect), and this is most naturally seen as a category, not just a set. The category of representations of G is usually called Rep(G), but seen as a category of functors, it is a special case of a category hom(C,D) of functors from a category C to a category D. Let’s look at how this is structured, then consider what happens with higher-dimensional categories. There seems to be a general pattern which one can just begin to see with 1-categories:

  • a functor F : C \rightarrow D is a map between categories, assigning
    • to each C-object a corresponding D-object
    • to each C-morphism a corresponding D-morphism

    in a way compatible with composition and identities

  • a natural transformation n between functors F,F' : C \rightarrow D assigns
    • to each C-object a D-morphism

    making a naturality square commute for any morphism g : x \rightarrow y in C:

Naturality Square

(In the case where the functors are representations of a group, this is an intertwiner – a linear map which commutes with the action of the group on V.)
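To spell out what the naturality square says in this one-object case: a natural transformation n between representations \rho, \rho' : G \rightarrow Vect has a single component n_{\star} : V \rightarrow V' at the one object \star, and the square for a group element g commutes exactly when

n_{\star} \circ \rho(g) = \rho'(g) \circ n_{\star} \qquad \textrm{for all } g \in G

which is the usual definition of an intertwining operator.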

The pattern is a little more obvious for 2-categories:

  • a 2-functor F : C \rightarrow D is a map between 2-categories, assigning
    • to each C-object a corresponding D-object
    • to each C-morphism a corresponding D-morphism
    • to each C-2-morphism a corresponding D-2-morphism

    in a way compatible with composition and identities

  • a natural transformation n between 2-functors F,F' : C \rightarrow D assigns
    • to each C-object a D-morphism
    • to each C-morphism a D-2-morphism

    making a generalized naturality square commute for any 2-morphism h : f \rightarrow g in C (where f,g : x \rightarrow y):

2-Naturality Diagram
  • a modification (what I might have named a “2-natural transformation” or similar) m between natural transformations n,n' : F \rightarrow F' assigns
    • to each C-object a D-2-morphism

    making a similar diagram commute (OK, well, it appears on p11 of John Baez’ Introduction to n-Categories, but I don’t have a web-ified version of it – I haven’t learned how to turn LaTeX diagrams into handy web format).

In the case where C = G is a 2-group – a 2-category with one object and all j-morphisms invertible – and D = 2Vect, we have here the (quite abstract!) definition of a representation, a 1-intertwiner between representations, and a 2-intertwiner between 1-intertwiners.

It’s not too hard to see the pattern suggested here – a “k-natural transformation” assigns a k-morphism in D to each object in the n-category C, and a (k+j)-morphism in D to each j-morphism in C. This new morphism fills in the commutative diagram which was the coherence law for the top-dimensional transformation for (n-1)-categories. (I might point out that if I were to come up with terminology for these things from scratch, I’d try to build in some flexibility from the start. Instead of “functor”, “natural transformation”, and “modification”, I’d have used terms more analogous to the terminology for morphisms – probably, respectively, “1-functor”, “2-functor”, “3-functor”, and so on. This is already a problem, since those terms are in use with a different meaning! Instead, I’ve used “natural k-transformation”.) It’s less easy to say explicitly what the various coherence laws should be at each stage, except that there should be an equation between the composites of (a) the n-morphisms appearing in a natural n-transformation and (b) the two possible images of any chosen lower-dimensional morphism.

There is a lot of useful information out there about various forms of n-categories, such as the Illustrated Guidebook by Cheng and Lauda, and Tom Leinster’s “Higher Operads, Higher Categories” (also in print). They’re a little less packed with information on functors, natural transformations, and their higher generalizations. I don’t know a reference that explains the generalization thoroughly, though. If anyone does know a good source on this, I’d like to hear about it. Probably this is somewhere in the work of Street, Kelly, maybe Batanin (whose definition of n-category is the one implicitly used here) or others, but I’m not familiar enough with the literature to know where this is done.

These generalizations of functors and natural transformations to higher n-categories describe what functor n-categories are like. When written down and decoded, these definitions can be turned into a concrete definition of representations and the various k-intertwiners involved in the representation theory of n-groups.

However, next time I’ll take a look at some of what is known in the slightly more down-to-earth world where n = 2.

Well, I was out of town for a weekend, and then had a miserable cold that went away but only after sleeping about 4 extra hours per day for a few days. So it’s been a while since I continued the story here.

To recap: I first explained how to turn a span of sets into a linear operator between the free vector spaces on those sets. Then I described the “free” 2-vector space on a groupoid X – namely, the category of functors from X to \mathbf{Vect}. So now the problem is to describe how to turn a span of groupoids into a 2-linear map. Here’s a span of groupoids:

A span of groupoids

Here we have a span Y \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} Z of groupoids. In fact, they’re skeletal groupoids: there’s only one object in each isomorphism class, so they’re completely described, up to isomorphism, by the automorphism groups of each object. The object y_2 \in Y, for instance, has automorphism group H_2, and the object x_1 \in X has automorphism group G_1. This diagram shows the object maps of the “source” and “target” functors s and t explicitly, but note that each arrow indicated in the diagram also comes with a group homomorphism. So, since the object map for s sends x_1 to y_2, that strand must be labelled with a group homomorphism s_1 : G_1 \rightarrow H_2. (We’re leaving these out of the diagram for clarity.)

So, we want to know how to transport a \mathbf{Vect}-valued functor F : Y \rightarrow \mathbf{Vect} along this span. We know that such a functor attaches to each y_i \in Y a representation of H_i on some vector space F(y_i). As with spans of sets, the first stage is easy: we have the composable pair of functors X \stackrel{s}{\longrightarrow} Y \stackrel{F}{\longrightarrow} \mathbf{Vect}, so “pulling back” F to X gives s^{\ast}F = F \circ s : X \rightarrow \mathbf{Vect}.

What about the other leg of the span? Remember back in Part 1 what happened when we pushed down a function (not a functor) along the second leg of a span. To find the value of the pushed-forward function on an element z, we took a sum of the complex values on every element of the preimage t^{-1}(z). For vector-space-valued functors, we expect to use a direct sum of some terms. Since we’re dealing with functors, things are a little more complex than before, but there should still be a contribution from each object in the preimage (or, if we’re not talking about skeletal groupoids, the essential preimage) of the object z we look at.
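Just to have it in front of us, here is that decategorified recipe again, written in the notation of the current span (with a complex-valued function f : Y \rightarrow \mathbb{C} in place of a functor):

(t_{\ast} s^{\ast} f)(z) = \sum_{x \in t^{-1}(z)} f(s(x))

and it’s this sum over the preimage that we now want to promote to a direct sum.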

However, we have to deal with the fact that there are morphisms. Instead of adding scalars, we have to combine vector spaces using the fact that they are given as representation spaces for some particular groups.

To see what needs to be done, consider the situation where the groupoids have just one object each, so the only important information is a homomorphism of groups. We can just call these one-object groupoids G and H, and a functor between them is given by a single group homomorphism h : G \rightarrow H.

Now suppose we have a representation R of the group G on V (so that R(g) \in GL(V) and R(gg') = R(g)R(g')). Then somehow we need to get a representation of H which is “induced” by the homomorphism h, Ind(R):

Induced Representation

This diagram shows “the answer” – but how does it work? Essentially, we use the fact that there’s a nice, convenient representation of any group G, namely the regular representation of G on the group algebra \mathbb{C}[G]. Elements of \mathbb{C}[G] are just complex linear combinations of elements of G, which are acted on by G by left multiplication. The group H likewise has a regular representation, on \mathbb{C}[H]. These are the most easily available building blocks with which to build the “push-forward” of R onto H.

To see how, we use the fact that \mathbb{C}[H] has a right action of G – and hence of \mathbb{C}[G] – by way of h. An element g \in G acts on \mathbb{C}[H] by right multiplication by h(g), and this extends linearly to \mathbb{C}[G]. So we can combine this with the left action of \mathbb{C}[G] on V (also extended linearly from G) by taking a tensor product of \mathbb{C}[H] with V over \mathbb{C}[G]. This lets us “mod out” by the actions of G which are not detected in \mathbb{C}[H]. The result, \mathbb{C}[H] \otimes_{\mathbb{C}[G]} V, carries a left action of H, and this is the induced representation Ind(R) of H. I’ll call this h_{\ast} R.

(Note that usually this name refers to the situation where G is a subgroup of H, but in fact this can be defined for any homomorphism.)
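For instance, if G is the trivial group and V = \mathbb{C}, then \mathbb{C}[G] = \mathbb{C} and \mathbb{C}[H] \otimes_{\mathbb{C}} \mathbb{C} \cong \mathbb{C}[H], so h_{\ast}R is just the regular representation of H. And when h is the inclusion of a subgroup G \leq H, the group algebra \mathbb{C}[H] is free as a right \mathbb{C}[G]-module on any set of coset representatives, so

\mathbb{C}[H] \otimes_{\mathbb{C}[G]} V \cong \bigoplus_{kG \in H/G} k \otimes V

which gives the familiar dimension count \dim(h_{\ast}R) = [H:G] \dim V.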

This tells us what to do for single-object groupoids. As we remarked earlier, if more than one object is sent to the same z \in Z, we should get a direct sum of all their contributions. So I want to describe the 2-linear map, which I’ll now call V(X) : V(Y) \rightarrow V(Z), which we get from the span above, thought of as X : Y \rightarrow Z in Span(\mathbf{Grpd}). Here V(Y) = hom(Y,\mathbf{Vect}) and V(Z) = hom(Z,\mathbf{Vect}) (where I’m now being more explicit that this whole process is a functor in some reasonable sense).

I have to say what V(X) does to a given 2-vector (what it does to morphisms between 2-vectors is straightforward to work out, since every operation we do is a tensor product or direct sum). So suppose F : Y \rightarrow \mathbf{Vect} is one. Then V(X)(F) = t_{\ast} s^{\ast} F = t_{\ast} (F \circ s) : Z \rightarrow \mathbf{Vect}. We can now say what this works out to. At some object z \in Z, we get (still assuming everything is skeletal for simplicity):

V(X)(F)(z) = \bigoplus_{t(x)=z} \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} F(s(x))

A general 2-vector F is a direct sum of basis 2-vectors – those which assign an irreducible representation to some one object, and the zero vector space to every other – so V(X)(F) is a direct sum of the corresponding expressions. That allows V(X) to be written as a matrix with vector-space components, just like any 2-linear map.

So the 2-linear map V(X) has a matrix representation. The indices of the matrix are the simple objects in hom(Y,\mathbf{Vect}) and hom(Z,\mathbf{Vect}), each of which consists of a choice of (a) an object of Y or Z (which we assume are skeletal – otherwise it’s a choice of isomorphism class), and (b) an irreducible representation of the automorphism group of that object. Given a choice of index on each side, the corresponding coefficient in the matrix is a vector space: namely, the direct sum, over all the objects x \in X that map down to our chosen pair, of terms like \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} \mathbb{C}. Each such term is just a quotient space of the one group algebra by the image of the other.
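As a sanity check, this reduces to the span-of-sets story: if all the automorphism groups involved are trivial, each term \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} \mathbb{C} is just \mathbb{C}, so the matrix coefficient is \mathbb{C}^N, where N is the number of objects x lying over the chosen pair – recovering the integer matrices we got from spans of sets. And when the \mathbb{C} on the right carries the trivial action of Aut(x), the term \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} \mathbb{C} is the free vector space on the coset space Aut(z)/\mathrm{im}(Aut(x)), so its dimension is the index of the image of Aut(x) in Aut(z).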

Next up: a quick finisher about what happens at the 2-morphism level, then back to TQFT and gravity!

In the last post, I was describing how you can represent spans of sets using vector spaces and linear maps, which turn out to be fairly special, in that they’re given by integer matrices in the obvious basis. Next I’d like to say a little about what happens if you step up one categorical level. This is something I gave a little talk on to our group at UWO on Wednesday, and will continue with next Wednesday. Here I’ll give a record of part of it.
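Before stepping up, here is a quick reminder of what those integer matrices look like – a minimal sketch in Python, where the span and its two legs are made up purely for illustration: the matrix of a span S \leftarrow X \rightarrow T has a row for each element of T, a column for each element of S, and its entry at a given pair counts the elements of X lying over that pair.

```python
# A span of finite sets  S <-s- X -t-> T, encoded by its two legs.
S = ['a', 'b']
T = ['u', 'v', 'w']
X = [0, 1, 2, 3]
s = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}   # source leg s : X -> S
t = {0: 'u', 1: 'v', 2: 'v', 3: 'w'}   # target leg t : X -> T

# The linear map C[S] -> C[T] induced by the span: the entry at (row z, column y)
# counts the elements of X mapping to y under s and to z under t.
matrix = [[sum(1 for x in X if s[x] == y and t[x] == z) for y in S] for z in T]

for z, row in zip(T, matrix):
    print(z, row)
# u [1, 0]
# v [1, 1]
# w [0, 1]
```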

Once again, part of the point here is that categories of spans are symmetric monoidal categories with duals – like categories of cobordisms (which can be interpreted as “pieces of spacetime” in a sufficiently loose sense), and also like categories of quantum processes (that is, categories whose objects are Hilbert spaces of states, and whose morphisms are linear maps – processes taking states to states).

So first, what do I mean by “move up a categorical level”?

We were talking about spans of, say, sets, like this: S \leftarrow X \rightarrow T. To go up a categorical level, we can talk about spans of categories. The objects S and T now carry some extra information – they’re not just collections of elements, but they also tell us how those elements are related to each other. So then remember that spans of sets really want to form a bicategory, which we can cut down to a category by only thinking of them up to isomorphism. Well, likewise, spans of categories probably want to form a tricategory, which we can cut down to a bicategory in the same way. (Several people have studied them, but the only person I know who really seems to grok tricategories is Nick Gurski, though in this talk he tried to convince us that we all could have invented them ourselves.) Before rushing off into realms involving the word “terrifying”, we should start by looking at what happens at the level of objects.

But first, why should we bother? Well, there’s a physical motivation: building vector spaces from sets plays a role in quantizing physical theories, where the sets are sets of classical states for some system. That is, when you quantize the system, you allow it to have states which are linear combinations – superpositions – of classical states. But saying you have just a set of states is limiting even in the classical situation. Sometimes – for instance, in gauge theory – there are actually lots of “configurations” of a system that are physically indistinguishable (because of some symmetry, which in that example is achieved by “gauge equivalence”), and so what’s usually done is to just look at the set of equivalence classes of configurations. But that throws away information we may want: it’s better to just take a category whose objects are states, and whose morphisms are the symmetries of the states.

For these to really be symmetries, they should be invertible, so we’re looking at a groupoid of states, S. But then to quantize things, we can’t just take a vector space of all functions – as we did when S was a set. Now we need to have something collecting together all the functors out of S. These certainly form a category, so we want some kind of category which is “like” a vector space. By default it’s called a 2-vector space, since it now has an extra level of structure.

As I said before, this stuff isn’t so hard if you’re willing to ignore details until needed – so for now, I’ll just say that (Kapranov-Voevodsky) 2-vector spaces are categories which resemble \mathbf{Vect}^n, just as (finite-dimensional) vector spaces are sets resembling \mathbb{C}^n, for some n. And just as the set of functions f : S \rightarrow \mathbb{C} becomes a vector space, so does the category of functors F : X \rightarrow \mathbf{Vect} become a 2-vector space when X is a groupoid. (Josep Elgueta discusses in some depth what happens for a general category in this paper.)

What makes a groupoid X special is that the two layers – objects and morphisms – both get along nicely with the operation of taking functors into Vect. That is, it’s easy to describe such functors. It’s a little easier to talk about it for a skeletal groupoid: one with just one object in each isomorphism class. Fortunately, every groupoid is equivalent to one like this. So since I’ve figured out how to do pictures here, let’s see one of a functor R : X \rightarrow \mathbf{Vect}:

Vect-valued Presheaf

This is one particular 2-vector in the 2-vector space I’m building. The picture is showing the following: the objects x_i \in X have groups of automorphisms, G_i, indicated by the curved arrows. A functor R : X \rightarrow \mathbf{Vect} assigns, to each object x_i, a vector space R(x_i) = V_i (sketched roughly as squares), and for each automorphism of that object g \in G_i, a linear map R(g) : V_i \rightarrow V_i. Since R is a functor, these linear maps are chosen so that R(gg') = R(g)R(g') – so this is a G_i-action on V_i. In other words, for each x_i, we have a representation R_i of its automorphism group G_i on the vector space V_i.

A morphism \alpha : R \rightarrow R' between two such 2-vectors is a natural transformation of functors – for each x_i \in X, a linear map \alpha_i : V_i \rightarrow V'_i satisfying the usual naturality condition. As you might expect, this condition means that \alpha gives, for each x_i, an intertwining operator between the two representations R_i and R'_i. So it turns out that the 2-vector space hom(X,\mathbf{Vect}) is a product, taken over the objects x_i \in X, of the categories Rep(G_i).
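For example, if X is a skeletal groupoid with two objects whose automorphism groups are \mathbb{Z}/2 and \mathbb{Z}/3, then hom(X,\mathbf{Vect}) \simeq Rep(\mathbb{Z}/2) \times Rep(\mathbb{Z}/3). Since these groups have 2 and 3 irreducible representations respectively, this 2-vector space is equivalent to \mathbf{Vect}^5 – the irreducibles play the role of a basis.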

In particular, if X is just a set, thought of as a groupoid with only identity morphisms, then this is just \mathbf{Vect}^n, since any vector space is automatically a representation of the trivial group, and any linear map is an intertwining operator between such trivial representations.

Now, proving that this is a 2-vector space would involve giving a lot more details about what that actually means – and would involve some facts about representation theory, such as Schur’s Lemma – but at least we have some idea what the 2-vector space on a groupoid looks like.

Next up (pt 3): what about spans? What happened to spans, anyway? There was supposed to be an earth-shattering fact about spans! Then, that done, hopefully I’ll get back to looking at the physical interpretation of an extended TQFT.
