representation theory


I just posted the slides for “Groupoidification and 2-Linearization”, the colloquium talk I gave at Dalhousie when I was up in Halifax last week. I also gave a seminar talk in which I described the quantum harmonic oscillator and extended TQFT as examples of these processes, which covered similar stuff to the examples in a talk I gave at Ottawa, as well as some more categorical details.

Now, in the previous post, I was talking about different notions of the “state” of a system – all of which are in some sense “dual to observables”, although exactly what sense depends on which notion you’re looking at. Each concept has its own particular “type” of thing which represents a state: an element-of-a-set, a function-on-a-set, a vector-in-(projective)-Hilbert-space, and a functional-on-operators. In light of the above slides, I wanted to continue with this little bestiary of ontologies for “states” and mention the versions suggested by groupoidification.

State as Generalized Stuff Type

This is what groupoidification introduces: the idea of a state in Span(Gpd). As I said in the previous post, the key concepts behind this program are state, symmetry, and history. “State” is in some sense a logical primitive here – given a bunch of “pure” states for a system (in the harmonic oscillator, you use the nonnegative integers, representing n-photon energy states of the oscillator), and their local symmetries (the n-particle state is acted on by the permutation group on n elements), one defines a groupoid.

So at a first approximation, this is like the “element of a set” picture of state, except that I’m now taking a groupoid instead of a set. In a more general language, we might prefer to say we’re talking about a stack, which we can think of as a groupoid up to a suitable notion of equivalence, specifically Morita equivalence. But in any case, the image is still that a state is an object in the groupoid, or a point in the stack – which just generalizes an element of a set or a point in configuration space.

However, what is an “element” of a set S? It’s a map into S from the terminal object in \mathbf{Sets}, which is “the” one-element set – or, likewise, in \mathbf{Gpd}, from the terminal groupoid, which has only one object and its identity morphism. But these are categories where the arrows are ordinary maps. When we introduce the idea of a “history”, we’re moving into a category where the arrows are spans, A \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} B (which by abuse of notation sometimes gets called X, but more formally (X,s,t)). A span represents a set/groupoid/stack of histories, with source and target maps into the sets/groupoids/stacks of states of the system at the beginning and end of the process represented by X.

Then we don’t have a terminal object anymore, but the same object 1 is still around – only the morphisms in and out are different. Its new special property is that it’s a monoidal unit. So now a map from the monoidal unit is a span 1 \stackrel{!}{\leftarrow} X \stackrel{\Phi}{\rightarrow} B. Since the map on the left is unique (1 being terminal in \mathbf{Gpd}), this is really just given by the functor \Phi, the target map. This is a fibration over B, called \Phi here – a “phi”-bration, appropriately enough, since it corresponds to what’s usually thought of as a wavefunction \phi.

This correspondence is what groupoidification is all about – it has to do with taking the groupoid cardinality of fibres, where a “phi”bre of \Phi is the essential preimage of an object b \in B – everything whose image is isomorphic to b. This gives an equivariant function on B – really a function of isomorphism classes. (If we were being crude about the symmetries, it would be a function on the quotient space – which is often what you see in real mechanics, when configuration spaces are given by quotients by the action of some symmetry group).
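To make this concrete (a sketch, using the standard groupoid-cardinality formula; normalization conventions vary somewhat in the literature):

```latex
% Groupoid cardinality: a sum over isomorphism classes,
% weighted down by the size of each object's symmetry group
|X| \;=\; \sum_{[x] \in \underline{X}} \frac{1}{|\mathrm{Aut}(x)|}

% The state 1 \leftarrow X \xrightarrow{\Phi} B then gives
% the equivariant function on (iso. classes of) B
\phi([b]) \;=\; \left| \Phi^{-1}(b) \right|
% where \Phi^{-1}(b) is the essential preimage - itself a groupoid.
```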

In the case where B is the groupoid of finite sets and bijections (sometimes called \mathbf{FinSet_0}), these fibrations are the “stuff types” of Baez and Dolan. Such a fibration gives a groupoid with something like a notion of “underlying set” – although a genuine forgetful functor U: C \rightarrow \mathbf{FinSet_0} (giving “underlying sets” for objects in a category C) is supposed to be faithful, so that C-morphisms are determined by their underlying set maps. In a fibration, we don’t necessarily have this. That special case corresponds to “structure types” (or combinatorial species), where X is a groupoid of “structured sets” with an honest underlying-set functor. (Actually, species are usually described in terms of the reverse, fibre-selecting functor \mathbf{FinSet_0} \rightarrow \mathbf{Sets}, where the image of a finite set S is the set of all structures on it – “graphs on S“, “trees on S“, and so on.) The fibres of a general stuff type are sets equipped with “stuff”, which may have its own nontrivial morphisms: for example, X could be the groupoid of pairs of sets, with the “underlying” functor \Phi selecting the first one.

Over a general groupoid, we have a similar picture, but instead of having an underlying finite set, we just have an “underlying B-object”. These generalized stuff types are “states” for a system with a configuration groupoid, in Span(\mathbf{Gpd}). Notice that the notion of “state” here really depends on what the arrows in the category of states are – histories (i.e. spans), or just plain maps.

Intuitively, such a state is some kind of “ensemble”, in statistical or quantum jargon. It says the state of affairs is some jumble of many configurations (which we apparently should see as histories starting from the vacuous unit 1), each of which has some “underlying” pure state (such as energy level, or what-have-you). The cardinality operation turns this into a linear combination of pure states by defining weights for each configuration in the ensemble collected in X.

2-State as Representation

A linear combination of pure states is, as I said, an equivariant function on the objects of B. It’s one way to “categorify” the view of a state as a vector in a Hilbert space, or map from \mathbb{C} (i.e. a point in the projective Hilbert space of lines in the Hilbert space H = \mathbb{C}[\underline{B}]), which is really what’s defined by one of these ensembles.

The idea of 2-linearization is to categorify, not a specific state \phi \in H, but the concept of state. So it should be a 2-vector in a 2-Hilbert space associated to B. The Hilbert space H was a space of functions into \mathbb{C}; we categorify by replacing the base field with a base category, namely \mathbf{Vect}_{\mathbb{C}}. A 2-Hilbert space will then be a category of functors into \mathbf{Vect}_{\mathbb{C}} – that is, the representation category of the groupoid B.

(This is all fine for finite groupoids. In the infinite case, there are some issues: it seems we really should be thinking of the 2-Hilbert space as the category of representations of an algebra. In the finite case, the groupoid algebra is a finite dimensional C*-algebra – that is, just a direct sum (over iso. classes of objects) of matrix algebras, which are the group algebras for the automorphism groups at each object. In the infinite dimensional world, you probably should be looking at the representations of the von Neumann algebra completion of the C*-algebra you get from the groupoid. There are all sorts of analysis issues about measurability lurking in this area, but they don’t really affect how you interpret “state” in this picture, so I’ll skip them.)

A “2-state”, or 2-vector in this Hilbert space, is a representation of the groupoid(-algebra) associated to the system. The “pure” states are irreducible representations – these generate all the others under the operations of the 2-Hilbert space (“sum”, “scalar product”, etc. in their 2-vector space forms). Now, an irreducible representation of a von Neumann algebra is called a “superselection sector” for a quantum system. It’s playing the role of a pure state here.

There’s an interesting connection here to the concept of state as a functional on a von Neumann algebra. As I described in the last post, the GNS construction associates a representation of the algebra to each state. In fact, the GNS representation is irreducible just when the state is a pure state. But the notion of a superselection sector suggests that the concept of 2-state has a place in its own right, not just via this correspondence.

So: if a quantum system is represented by an algebra \mathcal{A} of operators on a Hilbert space H, that representation is a direct sum (or direct integral, as the case may be) of irreducible ones, which are “sectors” of the theory, in that no operator in \mathcal{A} can take a vector out of one of these “sectors”. Physicists often associate them with conserved quantities – though “superselection” sectors are a bit more thorough: a mere “selection sector” is a subspace whose projection commutes with some subalgebra of observables representing conserved quantities, while a superselection sector is a subspace whose projection commutes with EVERYTHING in \mathcal{A}. From the present point of view, this is because we shouldn’t have thought of the representation as a single Hilbert space in the first place: it’s a 2-vector in \mathbb{Rep}(\mathcal{A}) – a direct integral of some Hilbert bundle that lives on the space of irreps. Those projections are just part of the definition of such a bundle, and the fact that \mathcal{A} acts on this bundle fibre-wise is just a consequence of the fact that the total H is a space of sections of the “2-state”. Sections correspond to “states” in the usual sense of the physical interpretation.

Now, there are 2-linear maps that intermix these superselection sectors: the ETQFT picture gives nice examples. Such a map comes up, for example, when you think of two particles colliding (drawn in that world as two circles merging into one). The superselection sectors for the particles are labelled (in one special case) by mass and spin – at any rate, by some conserved quantities. But these are, so to say, “rest mass” quantities, so there are many possible outcomes of a collision, depending on the relative motion of the particles. These 2-maps describe changes in the system, such as two particles becoming one. In a particular 2-Hilbert space, say \mathbb{Rep}(X) for some groupoid X describing the current system (or its algebra), a 2-state \Phi is a particular representation, and the algebra \mathcal{A} can naturally be seen as a subalgebra of the automorphisms of \Phi.

So anyway, without trying to package up the whole picture – here are two categorified takes on the notion of state, from two different points of view.

I haven’t, here, got to the business about Tomita flows coming from states in the von Neumann algebra sense: maybe that’s to come.

I’m going to be giving a talk on extended TQFT stuff and quantum gravity at Perimeter Institute next Thursday, and then in mid-March I’ll be heading to UC Davis to give the same/similar talk for the String Theory and Quantum Gravity seminar being run by Derek Wise. So I have a bunch of things on my mind right now. However, before heading to Davis, I wanted to go back and look at some of the stuff Derek has done having to do with Cartan geometry, which I was following somewhat at the time, and blog about it a bit here. Before that, I’d like to wrap up this presentation of the talks I gave here about representation theory of the Poincaré 2-group, \mathbf{Poinc}.

As a side note, thanks to Dan for pointing out these notes on representations of the (normal, uncategorified) Poincaré group, including some general comments on representations of semidirect products. It’s interesting to consider how this relates to the more general picture of 2-group representations – but I won’t do so here and now.

In Part 1 I talked about what representation 2-categories of 2-groups are like in general, and in Part 2 I gave a fairly concrete description of \mathbf{Poinc}. Here I’ll wrap up by summarizing the results of Crane and Sheppeard about what Rep(\mathbf{Poinc}) looks like concretely.

It has three parts: the objects are representations (also known as functors from \mathbf{Poinc} as a 2-category with one object, into \mathbf{Meas}); the morphisms are 1-intertwiners (a.k.a. natural transformations) between reps; and the 2-morphisms are 2-intertwiners (a.k.a. modifications) between 1-intertwiners.

1) Representations: A functor

\mathbf{Poinc} \rightarrow \mathbf{Meas}

will pick out some measurable space X = F(\star) for the lone object of the 2-group – or rather, Meas(X), the 2-vector space of all measurable fields of Hilbert spaces on X. (This is a matter of taste since to know the one is to know the other.) Then for the morphisms and 2-morphisms of \mathbf{Poinc} we get, respectively, 2-linear maps from Meas(X) to itself, and natural transformations between them.

The morphisms of \mathbf{Poinc} are just the group G in the crossed-module picture I described in Part 2. For the usual Poincaré 2-group, this is SO(p,q). For each such element, we’re supposed to get an invertible 2-linear map from Meas(X) to itself – that is, a measurable field of Hilbert spaces on X \times X (together with measures used to do “matrix multiplication” by direct integrals). This can only be invertible if the only Hilbert spaces which appear are 1-dimensional, since these maps compose by a “matrix multiplication” involving direct sums of tensor products of the components – and the discreteness of dimensions means that if any dimension is higher than 1, you’ll never get back the identity.

So any representation turns out to give what amounts to an action of SO(p,q) on X – the component F(g)(x_1,x_2) is \mathbb{C} if x_2 = g \triangleright x_1 and 0 otherwise. An irreducible representation gives an X with a transitive action (otherwise, you can decompose it into orbits, each of which corresponds to a subrepresentation). Crane and Sheppeard classify several kinds of these, associated to various subgroups of SO(p,q), but an easy example would be a mass shell in Minkowski space – a sphere or hyperboloid (depending on (p,q)) that is the full orbit of some point under rotations and boosts (a “mass shell” because it gives all the possible momenta for a particle of a given mass, as seen by an observer in some inertial frame).
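As a quick numerical sanity check (my own toy example, in signature (1,1), not from Crane and Sheppeard): boosts preserve the Minkowski norm, so the orbit of a single point sweeps out a mass shell, just as described.

```python
import math

def boost(rapidity):
    """A Lorentz boost in 1+1 dimensions (signature (1,1)), as a 2x2 matrix."""
    ch, sh = math.cosh(rapidity), math.sinh(rapidity)
    return [[ch, sh], [sh, ch]]

def apply(m, v):
    return [m[0][0]*v[0] + m[0][1]*v[1],
            m[1][0]*v[0] + m[1][1]*v[1]]

def minkowski_norm(v):
    """t^2 - x^2 for v = (t, x)."""
    return v[0]**2 - v[1]**2

# Start from a point of "mass" 2 on the shell t^2 - x^2 = 4
p = [2.0, 0.0]
orbit = [apply(boost(r), p) for r in (-1.0, -0.5, 0.5, 1.0)]

# Every point of the orbit stays on the same hyperboloid (mass shell)
for q in orbit:
    assert abs(minkowski_norm(q) - 4.0) < 1e-9
```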

The 2-morphism part of \mathbf{Poinc} gives a homomorphism \mathbb{R}^{p+q} \rightarrow Mat_1(\mathbb{C}) at each of these points. Now, one-by-one matrices of complex numbers are just complex numbers, so what we have here is a character of \mathbb{R}^{p+q} at each point of X. To be functorial, this has to be done in an equivariant way (so that acting on the point x \in X by g \in SO(p,q) affects the character by acting on \mathbb{R}^{p+q} by the same g).
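Spelled out in formulas (my notation, not necessarily Crane and Sheppeard’s): the character data amounts to assigning a wave vector k(x) to each point, with the equivariance condition tying the two actions together:

```latex
% a character of \mathbb{R}^{p+q} at each point x \in X
\chi_x(v) \;=\; e^{\,i\, k(x) \cdot v} \;\in\; Mat_1(\mathbb{C}),
\qquad v \in \mathbb{R}^{p+q}

% equivariance: moving the point by g moves the character by the same g
\chi_{g \triangleright x}(v) \;=\; \chi_x(g^{-1} \triangleright v),
\qquad \text{i.e.} \qquad k(g \triangleright x) \;=\; g \triangleright k(x)
% (with g acting dually on the covectors k)
```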

2) 1-Intertwiners:

If representations F and F' correspond to actions of SO(p,q) on spaces X and X' respectively, with characters h, h', then what is a 1-intertwiner \phi : F \rightarrow F'? Remember from Part 1 that it’s a natural transformation: to the object \star of \mathbf{Poinc} it assigns a specific 2-linear map

\phi(\star) : F(\star) \rightarrow F'(\star)

To each g \in SO(p,q) (morphism of \mathbf{Poinc}) it gives a transformation

\phi(g) : \phi(\star) \circ F(g) \rightarrow F'(g) \circ \phi(\star)

This is a specified map which replaces the naturality square in the old definition of an intertwiner. It has to make a certain “pillow” diagram commute (Part 1).

Now, back in the posts on 2-Hilbert spaces, I explained that a 2-linear map \phi(\star) is given by some field of Hilbert spaces \mathcal{K} on X \times X' (a “matrix” of Hilbert spaces, though of course X, X' needn’t be finite), along with a family of measures on X indexed by X' (which allow us to do integration when doing the sum in “matrix multiplication”). The transformations \phi(g) also can be written in components, so that

\phi(g)_{(x,y)} : \mathcal{K}_{(F(g)^{-1}(x),y)}\rightarrow \mathcal{K}_{(x,F'(g)(y))}

(Note this uses the two actions given by F,F' on X,X' – one forward, and one backward. This is the current form of what, in uncategorified representation theory, would be a naturality condition.)

What does this all amount to? One way to think of it is as a representation of SO(p,q) \ltimes \mathbb{R}^{p+q} itself! In particular, it’s a representation on the direct sum of all the Hilbert spaces which appear as components of \phi(\star). This is because the maps given by the \phi(g) have to satisfy a condition saying that composition is preserved (as long as you’re careful about indexing things):

\phi(gg')_{(x,y)} = \phi(g)_{(x, F'(g')y)} \circ \phi(g')_{(F(g)^{-1}x, y)}

To get a representation of the group, we can say that elements (g,h) \in G \ltimes H shuffle vector spaces over points in X by the action of g and then act within vector spaces by h. So then \phi has both intertwiner-like and representation-like properties.

The “intertwiner-ness” of \phi has to do with how it interpolates between two actions on X,X' by turning them into an action on the product X \times X' – but it also has some “representation-ness”, by giving this action of a (semidirect product) group on a big vector space.

3) 2-intertwiners

If a 1-intertwiner can be thought of as a representation of G \ltimes H, it shouldn’t be too surprising that a 2-intertwiner between 1-intertwiners \phi, \phi' ends up being an intertwiner between the associated representations. If 1-intertwiners have some qualities of both reps and intertwiners, the 2-intertwiners are more single-minded.

In particular, a 2-intertwiner m : \phi \rightarrow \phi' assigns to the only object of \mathbf{Poinc} a 2-morphism in \mathbf{2Vect} (that is, a field of linear maps between the vector spaces which are the components of \phi, \phi'), which satisfies some “pillow” diagram. When we form the big rep. by taking a direct integral of all those spaces, the field of linear maps turns into one big linear map, and the diagram it satisfies just collapses into the condition that it be an intertwiner.

So the representation theory of this interesting 2-group looks a lot like the representation theory of the group of 2-morphisms. The extra structure involving actions on measurable spaces by G = SO(p,q) would be mostly invisible if you just thought about irreducible reps of the group, since the space would be just a single point.

This phenomenon where a lower-order structure turns up in some form at the top level of morphisms of its categorified version has cropped up before in this blog – namely, when extended TQFT’s turn out to contain normal TQFT’s in individual components. In these examples, categorification is less a matter of building more floors “on top” of structures we already know, as “higher morphisms” suggests, than of excavating additional floors of sub-basement – interpreting what were objects as morphisms.

It’s been a while since I wrote the last entry, on representation theory of n-groups, partly because I’ve been polishing up a draft of a paper on a different subject. Now that I have it at a plateau where other people are looking at it, I’ll carry on with a more or less concrete description of the situation of a 2-group. For higher values of n, describing things concretely would get very elaborate quite quickly, but interesting things already happen for n=2. In particular, the case that I gave the talk about, a while back, was mostly the Poincaré 2-group, since this is the one Crane, Sheppeard, and Yetter talk about, and probably the one most interesting to physicists.  It was first described by John Baez.

So what’s the Poincaré 2-group? To begin with, what’s a 2-group again?

I already said that a 2-group \mathbb{G} is a 2-category with only one object, and all morphisms and 2-morphisms invertible. That’s all very good for summing up the representation theory of \mathbb{G} as I described last time, but it’s sometimes more informative to describe the structure of \mathbb{G} concretely. A good tool for doing this is a crossed module. (A lot more on 2-groups can be found in Baez and Lauda’s HDA V, and there are some more references and information in this page by Ronald Brown, who’s done a lot to popularize crossed modules).

A crossed module has two layers, which correspond to the morphisms and 2-morphisms of \mathbb{G}. These can be represented as (G,H,\triangleright, \partial), where G is the group of morphisms in \mathbb{G}, H consists of the 2-morphisms ending at the identity of G (a group under horizontal composition).

There has to be an action \triangleright : G \rightarrow End(H) of G on H (morphisms can be composed “horizontally” with 2-morphisms), and a map \partial : H \rightarrow G (which picks out the source of the 2-morphism). The data (G,H,\triangleright,\partial) have to fit together a certain way, which amounts to giving the axioms for a 2-category.

A handy way to remember the conditions is to realize that the action \triangleright : G \rightarrow End(H) and the map \partial : H \rightarrow G can be composed in either order: applying \partial after \triangleright gives a way for G to act on itself, and applying \triangleright after \partial gives a way for H to act on itself. In both cases, the result must amount to conjugation. That is:

\partial(g \triangleright h) = g (\partial h) g^{-1}

and

(\partial h_1) \triangleright h_2 = h_1 h_2 h_1^{-1}

Both of these are simplified in the case where \partial maps everything in H to the identity of G – in this case, H can be interpreted as the group of 2-automorphisms of the identity 1-morphism of the sole object of \mathbb{G}. In this case, by the Eckmann-Hilton argument (the clearest explanation of which I know is the one in TWF Week 100), it turns out that H has to be commutative, so the first condition is trivial since \partial h = 1, and the second follows from commutativity. This simpler situation is known as an automorphic 2-group.
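As a sanity check on these two conditions, here is a toy verification (my example, not from the text) for the standard crossed module in which H = G, the action is conjugation, and \partial is the identity, taking G = S_3 as permutations of three elements:

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples: p[i] is the image of i.
S3 = list(permutations(range(3)))

def mul(p, q):
    """Compose permutations: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

# Standard crossed module (G, H=G, action = conjugation, boundary = identity)
act = lambda g, h: mul(mul(g, h), inv(g))   # g acts on h by g h g^{-1}
d   = lambda h: h                           # the boundary map is the identity

for g in S3:
    for h1 in S3:
        # First axiom: d(g . h) = g d(h) g^{-1}
        assert d(act(g, h1)) == mul(mul(g, d(h1)), inv(g))
        for h2 in S3:
            # Second (Peiffer) axiom: d(h1) . h2 = h1 h2 h1^{-1}
            assert act(d(h1), h2) == mul(mul(h1, h2), inv(h1))
```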

In any case, given a 2-group represented as a crossed module, automorphic or not, the collection of all morphisms can be seen as a group in itself – namely the semidirect product G \ltimes H, which is to say G \times H with the multiplication (g_1,h_1) \cdot (g_2,h_2) = (g_1 g_2 , (g_2 \triangleright h_1)\, h_2). “What?” you may ask, or maybe “Why?”

Maybe a concrete example would help, since we’d like one anyway: the Poincaré 2-group, which is an automorphic 2-group. There are versions of various signatures (p,q), in which case G = SO(p,q), and H = \mathbb{R}^{p+q}.

The group G, then, consists of metric-preserving transformations of Minkowski space R^{p+q} with the metric of signature (p,q) – rotations and boosts (if any). The (abelian) group H consists of translations of this space – in fact, being a vector space, it’s just a copy of it. Between them, they cover the basic types of transformation. Thinking of the translations as having a “projection” down to the identity rotation/boost may seem a bit artificial, except insofar as translations “don’t rotate” anything. More obvious is that rotations or boosts act on translations: the same translation can look different in rotated/boosted coordinate systems – that is, to different observers.

So where does the Poincaré group SO(p,q) \ltimes \mathbb{R}^{p+q} come in? It’s the group of all metric-preserving transformations of Minkowski space, and is built from these two types: but how?

Well, the vector space H = \mathbb{R}^{p+q} is the group of 2-morphisms at the identity Lorentz transformation 1 \in G = SO(p,q), since the map \partial : H \rightarrow G is trivial. But suppose that there is another copy of H over each point in G. Then we have the set of points G \times H, but notice that to talk about this as a group, we’d want a way for an element h_2 over g_2 \in G to act on an element h_1 of the copy of H over g_1. The obvious way is to just treat the whole set as a product of groups, but this misses the fundamental relation between G and H, which is that G can act on H, just as morphisms can act on 2-morphisms by “whiskering with the identity”. (Via Google books, here is the description of this in Mac Lane’s Categories for the Working Mathematician).

Concretely, this is the fact that there is a sensible way for both parts of (g_1,h_1) to affect h_2, so we can say (g_2,h_2) \cdot (g_1,h_1) = (g_2 g_1, g_1 \triangleright h_2 + h_1) (using additive notation for the translations, since they’re abelian). The point is that the first rotation we do, g_1, changes coordinates, and therefore the definition of the translation h_2.
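Here is a small numerical check (my sketch, for the Euclidean analogue SO(2) \ltimes \mathbb{R}^2, using the convention where (g,h) acts by x \mapsto gx + h – which puts the coordinate-change twist on h_1 rather than h_2; conventions differ, but the principle is the same): the semidirect multiplication is exactly what makes composition of the affine actions work out.

```python
import math

def rot(theta):
    """A rotation in SO(2) as a 2x2 matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mat_vec(g, v):
    return [g[0][0]*v[0] + g[0][1]*v[1], g[1][0]*v[0] + g[1][1]*v[1]]

def mat_mul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def compose(e2, e1):
    """Semidirect multiplication (g2,h2)(g1,h1) = (g2 g1, g2.h1 + h2),
    matching composition of the affine maps x -> g x + h."""
    (g2, h2), (g1, h1) = e2, e1
    g = mat_mul(g2, g1)
    twisted = mat_vec(g2, h1)              # the rotation g2 twists h1
    return (g, [twisted[0] + h2[0], twisted[1] + h2[1]])

def act_on(e, x):
    g, h = e
    gx = mat_vec(g, x)
    return [gx[0] + h[0], gx[1] + h[1]]

a = (rot(math.pi / 3), [1.0, 2.0])
b = (rot(-math.pi / 7), [0.5, -1.0])
x = [3.0, 4.0]

# Multiplying in SO(2) |x R^2 agrees with composing the affine actions
lhs = act_on(compose(a, b), x)
rhs = act_on(a, act_on(b, x))
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```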

So that’s the construction of the Poincaré group from the Poincaré 2-group. What would be nice would be to have some clear description of some higher analog of Minkowski space where it makes sense to say the Poincaré 2-group acts as a 2-group. I don’t quite know how to set this up, but if anyone has thoughts, it would be interesting to hear them.

One reason is that, when describing representations of the 2-group, there’s an important role for spaces (or at least sets) with an action of the group G – which raises questions like whether there’s a role for 2-spaces with 2-group actions in representation theory of higher n-groups. Again – I don’t really know the answer to this. However, in Part 3 I’ll describe concretely how this works for 2-groups, and particularly the Poincaré 2-group.

Recently I finished up my series of talks on 2-Hilbert spaces with a description of the basics of 2-group representation theory, and a little about the special case of the Poincaré 2-group. The main sources were a paper by Crane and Yetter describing 2-group representations in general, and another by Crane and Sheppeard. The Poincaré 2-group, so far as I know, was first explicitly mentioned by John Baez in the context of higher gauge theory. It’s an example of a kind of 2-group which can be cooked up from any group G and abelian group H, and which is related to the semidirect product G \ltimes H.

One reason people are starting to take an interest in the representation theory of the Poincaré 2-group is that representations of the Poincaré group (among others) and intertwiners between them play a role in spin foam models for field theories such as BF theory, various models of quantum gravity, and so on. Some of these turn up naturally when looking at TQFT’s and their generalizations, which is how I got here. Extending this to 2-groups gives a richer structure to work with. (Whether the extra richness is useful is another matter.)

Before getting into more detail, I first would like to take a look at representation theory for groups from a categorical point of view, and then see what happens when we move to n-groups – that is, when we categorify.

To begin with, we can think of a representation (V, \rho) of a group G as a functor. The group G can be thought of as a category with one object and all morphisms invertible – so that the group elements are morphisms, and the group operation is composition. In this case, a representation of the group is just any functor:

\rho : G \rightarrow Vect

since this assigns some one vector space (the representation space, \rho(\star) = V) to the one object of G, and a linear map \rho(g):  V \rightarrow V to each morphism of G (i.e. to each group element) in a way consistent with composition. The nice thing about this point of view is that knowing a little category theory is enough to suggest one of the fundamental ideas of representation theory, namely intertwining operators (“intertwiners”). These are natural transformations between functors. This is the idea to categorify.
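A toy instance of this functorial picture (my example): \mathbb{Z}/3, viewed as a one-object category, represented on \mathbb{R}^2 by rotations. Functoriality is precisely the homomorphism property \rho(m+n) = \rho(m)\rho(n).

```python
import math

def rho(n):
    """Assign to n in Z/3 the rotation by 2*pi*n/3: a functor from Z/3
    (as a one-object category) into Vect, landing in GL(R^2)."""
    theta = 2 * math.pi * (n % 3) / 3
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mat_mul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, eps=1e-9):
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

# Functoriality: rho(m + n) = rho(m) o rho(n), and rho(0) is the identity
for m in range(3):
    for n in range(3):
        assert close(rho((m + n) % 3), mat_mul(rho(m), rho(n)))
assert close(rho(0), [[1, 0], [0, 1]])
```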

The point is that functors F : G \rightarrow Vect can be organized into a structure hom(G,Vect), and this is most naturally seen as a category, not just a set. The category of representations of G is usually called Rep(G), but seen as a category of functors, it is a special case of a category hom(C,D) of functors from category C to category D. Let’s look at how this is structured, then consider what happens with higher dimensional categories. There seems to be a general pattern which one can just begin to see with 1-categories:

  • a functor F : C \rightarrow D is a map between categories, assigning
    • to each C-object a corresponding D-object
    • to each C-morphism a corresponding D-morphism

    in a way compatible with composition and identities

  • a natural transformation n between functors F,F' : C \rightarrow D assigns
    • to each C-object a D-morphism

    making a naturality square commute for any morphism g : x \rightarrow y in C:

Naturality Square

(In the case where the functors are representations of a group, this is an intertwiner – a linear map which commutes with the action of the group on V.)

The pattern is a little more obvious for 2-categories:

  • a 2-functor F : C \rightarrow D is a map between 2-categories, assigning
    • to each C-object a corresponding D-object
    • to each C-morphism a corresponding D-morphism
    • to each C-2-morphism a corresponding D-2-morphism

    in a way compatible with composition and identities

  • a natural transformation n between 2-functors F,F' : C \rightarrow D assigns
    • to each C-object a D-morphism
    • to each C-morphism a D-2-morphism

    making a generalized naturality square commute for any 2-morphism h : f \rightarrow g in C (where f,g : x \rightarrow y):

2-Naturality Diagram
  • a modification (what I might have named a “2-natural transformation” or similar) m between natural transformations n,n' : F \rightarrow F' assigns
    • to each C-object a D-2-morphism

    making a similar diagram commute (OK, well, it appears on p11 of John Baez’ Introduction to n-Categories, but I don’t have a web-ified version of it – I haven’t learned how to turn LaTeX diagrams into handy web format).

In the case where C = G is a 2-group – a 2-category with one object and all j-morphisms invertible – and D = 2Vect, we have here the (quite abstract!) definition of a representation, a 1-intertwiner between representations, and a 2-intertwiner between 1-intertwiners.

It’s not too hard to see the pattern suggested here – a “k-natural transformation” assigns a k-morphism in D to an object in the n-category C, and a (k+j)-morphism in D to each j-morphism in C. This morphism fits into a diagram filling a commutative diagram which was the coherence law for the top dimensional transformation for (n-1)-categories. (I might point out that if I were to come up with terminology for these things from scratch, I’d try to build in some flexibility from the start. Instead of “functor”, “natural transformation”, and “modification”, I’d have used terms more analogous to the terminology for morphisms. Probably I’d have used, respectively, “1-functor”, “2-functor”, “3-functor”, and so on. This is already a problem, since these terms are in use with a different meaning! Instead, I’ve used “natural k-transformation”.) It’s less easy to say what, explicitly, the various coherence laws should be at each stage, except that there should be an equation between the composites of (a) the n-morphisms in an n-natural transformation with (b) the two possible images of any chosen lower dimensional morphisms.

There is a lot of useful information out there about various forms of n-categories, such as the Illustrated Guidebook by Cheng and Lauda, and Tom Leinster’s “Higher Operads, Higher Categories” (also in print). They’re a little less packed with information on functors, natural transformations, and their higher generalizations. I don’t know a reference that explains the generalization thoroughly, though. If anyone does know a good source on this, I’d like to hear about it. Probably this is somewhere in the work of Street, Kelly, maybe Batanin (whose definition of n-category is the one implicitly used here) or others, but I’m not familiar enough with the literature to know where this is done.

These generalizations of functors and natural transformations to higher n-categories describe what functor n-categories are like. When written down and decoded, these definitions can be turned into a concrete definition of representations and the various k-intertwiners involved in the representation theory of n-groups.

Next time, however, I’ll take a look at some of what is known in the slightly more down-to-earth world where n=2.

Well, I was out of town for a weekend, and then had a miserable cold that went away but only after sleeping about 4 extra hours per day for a few days. So it’s been a while since I continued the story here.

To recap: I first explained how to turn a span of sets into a linear operator between the free vector spaces on those sets. Then I described the “free” 2-vector space on a groupoid X – namely, the category of functors from X to \mathbf{Vect}. So now the problem is to describe how to turn a span of groupoids into a 2-linear map. Here’s a span of groupoids:

A span of groupoids

Here we have a span Y \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} Z of groupoids. In fact, they’re skeletal groupoids: there’s only one object in each isomorphism class, so they’re completely described, up to isomorphism, by the automorphism groups of their objects. The object y_2 \in Y, for instance, has automorphism group H_2, and the object x_1 \in X has automorphism group G_1. This diagram shows the object maps of the “source” and “target” functors s and t explicitly, but note that each arrow indicated in the diagram also carries a group homomorphism. So, since the object map for s sends x_1 to y_2, that strand must be labelled with a group homomorphism s_1 : G_1 \rightarrow H_2. (We’re leaving these out of the diagram for clarity.)

So, we want to know how to transport a \mathbf{Vect}-valued functor F : Y \rightarrow \mathbf{Vect} along this span. We know that such a functor attaches to each y_i \in Y a representation of H_i on some vector space F(y_i). As with spans of sets, the first stage is easy: we have the composable pair of functors X \stackrel{s}{\longrightarrow} Y \stackrel{F}{\longrightarrow} \mathbf{Vect}, so “pulling back” F to X gives s^{\ast}F = F \circ s : X \rightarrow \mathbf{Vect}.

What about the other leg of the span? Remember back in Part 1 what happened when we pushed down a function (not a functor) along the second leg of a span. To find the value of the pushed-forward function on an element z, we took a sum of the complex values on every element of the preimage t^{-1}(z). For vector-space-valued functors, we expect to use a direct sum of some terms. Since we’re dealing with functors, things are a little more complex than before, but there should still be a contribution from each object in the preimage (or, if we’re not talking about skeletal groupoids, the essential preimage) of the object z we look at.

However, we have to deal with the fact that there are morphisms. Instead of adding scalars, we have to combine vector spaces using the fact that they are given as representation spaces for some particular groups.

To see what needs to be done, consider the case of groupoids with just one object – that is, groups G and H regarded as one-object groupoids – so the only important information is a homomorphism of groups. A functor between them is given by a single group homomorphism h : G \rightarrow H.

Now suppose we have a representation R of the group G on V (so that R(g) \in GL(V) and R(gg') = R(g)R(g')). Then somehow we need to get a representation of H which is “induced” by the homomorphism h, Ind(R):

Induced Representation

This diagram shows “the answer” – but how does it work? Essentially, we use the fact that there’s a nice, convenient representation of any group G, namely the regular representation of G on the group algebra \mathbb{C}[G]. Elements of \mathbb{C}[G] are just complex linear combinations of elements of G, which are acted on by G by left multiplication. The group H also has a regular representation, on \mathbb{C}[H]. These are the most easily available building blocks with which to build the “push-forward” of R onto H.

To see how, we use the fact that \mathbb{C}[H] has a right action of G – and hence of \mathbb{C}[G] – by way of h: an element g \in G acts on \mathbb{C}[H] by right multiplication by h(g), and this extends linearly to \mathbb{C}[G]. So we can combine this with the left action of \mathbb{C}[G] on V (also extended linearly from the action of G) by taking the tensor product of \mathbb{C}[H] with V over \mathbb{C}[G]. This lets us “mod out” by the actions of G which are not detected in \mathbb{C}[H]. The result, \mathbb{C}[H] \otimes_{\mathbb{C}[G]} V, carries a left action of H, and this is the induced representation Ind(R) of H. I’ll call this h_{\ast} R.

(Note that usually this name refers to the situation where G is a subgroup of H, but in fact this can be defined for any homomorphism.)
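As a sanity check on this construction, here’s a small numerical sketch (my own, not from the post) computing the dimension of \mathbb{C}[H] \otimes_{\mathbb{C}[G]} V as the quotient of \mathbb{C}[H] \otimes V by the relations \eta h(g) \otimes v - \eta \otimes R(g)v. The specific groups and homomorphism are assumptions chosen for illustration: G = \mathbb{Z}/2 included in H = \mathbb{Z}/4 via h(g) = 2g, with V the trivial one-dimensional representation.

```python
# Sketch: dim of C[H] (x)_{C[G]} V, computed as dim(C[H] (x) V) minus the
# rank of the span of the relations  eta*h(g) (x) v  -  eta (x) R(g)v.
# Illustrative choices (not from the post): G = Z/2 -> H = Z/4, h(g) = 2g,
# V = C with the trivial representation R of G.
import numpy as np

H = list(range(4))                 # Z/4, written additively
G = list(range(2))                 # Z/2
h = lambda g: (2 * g) % 4          # the inclusion homomorphism
R = {0: np.eye(1), 1: np.eye(1)}   # trivial rep of G on V = C
dimV = 1

n = len(H) * dimV                  # dim of C[H] (x) V before quotienting
relations = []
for i, eta in enumerate(H):
    for g in G:
        j = H.index((eta + h(g)) % 4)   # index of eta * h(g) in H
        for k in range(dimV):
            rel = np.zeros(n)
            rel[j * dimV + k] += 1                        # eta*h(g) (x) e_k
            rel[i * dimV:(i + 1) * dimV] -= R[g][:, k]    # eta (x) R(g)e_k
            relations.append(rel)

rank = np.linalg.matrix_rank(np.array(relations))
print(n - rank)   # 2
```

The answer agrees with the classical count for an inclusion of groups: the induced representation has dimension [H : h(G)] \cdot dim(V) = 2 \cdot 1.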

This tells us what to do for single-object groupoids. As we remarked earlier, if more than one object is sent to the same z \in Z, we should get a direct sum of all their contributions. So I want to describe the 2-linear map, which I’ll now call V(X) : V(Y) \rightarrow V(Z), which we get from the span above, thought of as X : Y \rightarrow Z in Span(\mathbf{Grpd}). Here V(Y) = hom(Y,\mathbf{Vect}) and V(Z) = hom(Z,\mathbf{Vect}) (where I’m now being more explicit that this whole process is a functor in some reasonable sense).

I have to say what V(X) does to a given 2-vector (what it does to morphisms between 2-vectors is straightforward to work out, since every operation we do is a tensor product or direct sum). Suppose F : Y \rightarrow \mathbf{Vect} is one. Then V(X)(F) = t_{\ast} s^{\ast} F = t_{\ast} (F \circ s) : Z \rightarrow \mathbf{Vect}. We can now say what this works out to. At some object z \in Z, we get (still assuming everything is skeletal for simplicity):

V(X)(F)(z) = \bigoplus_{t(x)=z} \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} F(s(x))

In general, this is a direct sum of a bunch of such expressions where F is a basis 2-vector – i.e. one that assigns an irreducible representation to a single object, and the trivial rep on the zero vector space to every other. This allows V(X) to be written as a matrix with vector-space components, just like any 2-linear map.

So the 2-linear map V(X) has a matrix representation. The indices of the matrix are the simple objects in hom(Y,\mathbf{Vect}) and hom(Z,\mathbf{Vect}), which consist of a choice of (a) an object in Y or Z (which we assume are skeletal – otherwise it’s a choice of isomorphism class), and (b) an irreducible representation of the automorphism group of that object. Given a choice of index on each side, the corresponding coefficient in the matrix is a vector space: namely, the direct sum, over all the objects x \in X that restrict down to our chosen pair, of terms like \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} \mathbb{C}. This is just a quotient space of the one group algebra by the image of the other.
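A quick numerical check of that last claim (my own example, not from the post): for trivial irreducibles, the coefficient \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} \mathbb{C} has dimension equal to the number of cosets of the image of Aut(x) \rightarrow Aut(z). The groups and homomorphism below are hypothetical choices for illustration: Aut(z) = \mathbb{Z}/6, with Aut(x) = \mathbb{Z}/3 mapping in by g \mapsto 2g.

```python
# Sketch: dim of C[Aut(z)] (x)_{C[Aut(x)]} C (trivial reps on both sides)
# equals the number of cosets of the image of Aut(x) -> Aut(z).
# Illustrative choice: Aut(z) = Z/6, homomorphism from Z/3 sending g to 2g.
Aut_z = range(6)                             # Z/6, written additively
image = {(2 * g) % 6 for g in range(3)}      # image of Z/3: {0, 2, 4}

# Each coset is a translate a + image; collect the distinct ones.
cosets = {frozenset((a + m) % 6 for m in image) for a in Aut_z}
print(len(cosets))  # 2: the coefficient vector space is 2-dimensional
```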

Next up: a quick finisher about what happens at the 2-morphism level, then back to TQFT and gravity!

In the last post, I was describing how you can represent spans of sets using vector spaces and linear maps, which turn out to be fairly special, in that they’re given by integer matrices in the obvious basis. Next I’d like to say a little about what happens if you step up one categorical level. This is something I gave a little talk on to our group at UWO on Wednesday, and will continue with next Wednesday. Here I’ll give a record of part of it.
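In code, the recipe from last time is just a counting procedure. Here’s a minimal sketch, assuming the span of finite sets is given as Python lists and dictionaries (all names here are mine, chosen for illustration): the (t_0, s_0) entry of the matrix counts the elements of X lying over that pair.

```python
# Sketch: a span of finite sets  S <-s- X -t-> T  becomes an integer
# matrix whose (i, j) entry counts elements x with s(x) = S[j], t(x) = T[i].
def span_to_matrix(S, T, X, s, t):
    """Matrix M with M[i][j] = #{x in X : s(x) = S[j], t(x) = T[i]}."""
    M = [[0] * len(S) for _ in range(len(T))]
    for x in X:
        M[T.index(t[x])][S.index(s[x])] += 1
    return M

# A tiny example: three elements of X lying over S = {a, b}, T = {u, v}.
S, T = ['a', 'b'], ['u', 'v']
X = ['x1', 'x2', 'x3']
s = {'x1': 'a', 'x2': 'a', 'x3': 'b'}
t = {'x1': 'u', 'x2': 'v', 'x3': 'v'}
print(span_to_matrix(S, T, X, s, t))  # [[1, 0], [1, 1]]
```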

Once again, part of the point here is that categories of spans are symmetric monoidal categories with duals – like categories of cobordisms (which can be interpreted as “pieces of spacetime” in a sufficiently loose sense), and also like categories of quantum processes (that is, whose objects are Hilbert spaces of states, and whose morphisms are linear maps – processes taking states to states).

So first, what do I mean by “move up a categorical level”?

We were talking about spans of, say, sets, like this: S \leftarrow X \rightarrow T. To go up a categorical level, we can talk about spans of categories. The objects S and T now carry some extra information – they’re not just collections of elements, but they also tell us about how elements are related to each other. So then remember that spans of sets really want to form a bicategory, which we can cut down to a category by only thinking of them up to isomorphism. Well, likewise, spans of categories probably want to form a tricategory, which we can cut down to a bicategory in the same way. (Several people have studied them, but the only person I know who really seems to grok tricategories is Nick Gurski, though in this talk he tried to convince us that we all could have invented them ourselves.) Before rushing off into realms involving the word “terrifying”, we should start by looking at what happens at the level of objects.

But first, why should we bother? Well, there’s a physical motivation: building vector spaces from sets plays a role in quantizing physical theories, where the sets are sets of classical states for some system. That is, when you quantize the system, you allow it to have states which are linear combinations – superpositions – of classical states. But saying you have just a set of states is limiting even in the classical situation. Sometimes – for instance, in gauge theory – there are actually lots of “configurations” of a system that are physically indistinguishable (because of some symmetry, which in that example is gauge equivalence), and so what’s usually done is to just look at the set of equivalence classes of configurations. But that throws away information we may want: it’s better to take a category whose objects are states, and whose morphisms are the symmetries of the states.

For these to really be symmetries, they should be invertible, so we’re looking at a groupoid of states, S. But then to quantize things, we can’t just take a vector space of all functions – as we did when S was a set. Now we need to have something collecting together all the functors out of S. These certainly form a category, so we want some kind of category which is “like” a vector space. By default it’s called a 2-vector space, since it now has an extra level of structure.

As I said before, this stuff isn’t so hard if you’re willing to ignore details until needed – so for now, I’ll just say that (Kapranov-Voevodsky) 2-vector spaces are categories which resemble \mathbf{Vect}^n, just as (finite-dimensional) vector spaces are sets resembling \mathbb{C}^n, for some n. And just as the set of functions f : S \rightarrow \mathbb{C} becomes a vector space, so does the category of functors F : X \rightarrow \mathbf{Vect} become a 2-vector space when X is a groupoid. (Josep Elgueta discusses in some depth what happens for a general category in this paper.)

What makes a groupoid X special is that the two layers – objects and morphisms – both get along nicely with the operation of taking functors into Vect. That is, it’s easy to describe such functors. It’s a little easier to talk about it for a skeletal groupoid: one with just one object in each isomorphism class. Fortunately, every groupoid is equivalent to one like this. So since I’ve figured out how to do pictures here, let’s see one of a functor R : X \rightarrow \mathbf{Vect}:

Vect-valued Presheaf

This is one particular 2-vector in the 2-vector space I’m building. The picture is showing the following: the objects x_i \in X have groups of automorphisms, G_i, indicated by the curved arrows. A functor R : X \rightarrow \mathbf{Vect} assigns, to each object x_i, a vector space R(x_i) = V_i (sketched roughly as squares), and for each automorphism of that object g \in G_i, a linear map R(g) : V_i \rightarrow V_i. Since R is a functor, these linear maps are chosen so that R(gg') = R(g)R(g') – so this is a G_i-action on V_i. In other words, for each x_i, we have a representation R_i of its automorphism group G_i on the vector space V_i.
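The functoriality condition R(gg') = R(g)R(g') can be checked mechanically. Here’s a tiny sketch (my own illustration, with an assumed example) for a single object with automorphism group G = \mathbb{Z}/2, acting on V = \mathbb{C}^2 by the direct sum of the trivial and sign representations:

```python
# Sketch: a Vect-valued functor on a one-object groupoid is exactly a
# group representation.  Example choice: G = Z/2 acting on C^2 as
# trivial (+) sign; we verify R(g g') = R(g) R(g') for all g, g'.
import numpy as np

G = [0, 1]                                   # Z/2, written additively
mult = lambda g, gp: (g + gp) % 2            # the group operation
R = {0: np.eye(2), 1: np.diag([1.0, -1.0])}  # the representation matrices

for g in G:
    for gp in G:
        assert np.allclose(R[mult(g, gp)], R[g] @ R[gp])
```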

A morphism \alpha : R \rightarrow R' between two such 2-vectors is a natural transformation of functors – for each x_i \in X, a linear map \alpha_i : V_i \rightarrow V'_i satisfying the usual naturality condition. As you might expect, this condition means that \alpha gives, for each x_i, an intertwining operator between the two representations R_i and R'_i. So it turns out that the 2-vector space hom(X,\mathbf{Vect}) is a product, taken over the objects x_i \in X, of the categories Rep(G_i).

In particular, if X is just a set, thought of as a groupoid with only identity morphisms, then this is just \mathbf{Vect}^n, since any vector space is automatically a representation of the trivial group, and any linear map is an intertwining operator between such trivial representations.

Now, proving that this is a 2-vector space would involve giving a lot more details about what that actually means – and would involve some facts about representation theory, such as Schur’s Lemma – but at least we have some idea what the 2-vector space on a groupoid looks like.

Next up (pt 3): what about spans? What happened to spans, anyway? There was supposed to be an earth-shattering fact about spans! Then, that done, hopefully I’ll get back to looking at the physical interpretation of an extended TQFT.
