

About a week ago (the weekend of November 22-23) I was in Riverside, California for Groupoidfest ’08. (Slides for the talk I gave here, and also in pdf.) It’s taken me a while to write it up, because I’ve been, among other things, applying for jobs.

This would be the second time I’ve been to Groupoidfest, and the first time I’ve been back in Riverside since I graduated from UCR last summer. While I was there, I also had the chance to talk to John Baez and some of his other students, past and present, and also to attend Alan Weinstein’s colloquium talk on Friday. On top of that, I managed to see a couple of my other friends in town, so all in all, it was a good trip.

There were quite a few talks, several of which were fairly short, so I’ll comment on a few examples which I found particularly relevant to me. So for instance Alan Paterson’s talk on Equivariant K-Theory for Proper Groupoids: here’s a case where I’m seeing familiar issues from a different direction. K-theory studies objects by looking at categories of vector bundles. Equivariant K-theory can be taken to mean that these bundles come with isomorphisms between fibres which come from a group action, or more generally the morphisms of a groupoid. It’s a kind of categorification of equivariant cohomology. Alan Paterson’s talk was quite extensive, but there’s a whole vocabulary here I’m still learning. The culmination of the talk dealt with Hilbert bundles (a little more structured than vector bundles), and the Hilbert bundle L^2(\mathcal{G})^{\infty} (where \mathcal{G} is the space of morphisms of a groupoid – so this can be treated as a bundle over the space of objects induced by, say, the map taking a morphism to its target object). This bundle has the nice “stabilization” property that taking a direct sum with any other bundle leaves it unchanged.

John Quigg also spoke about Hilbert bundles, and “Fell bundles” (he spoke about these last year, too), but since John Baez described this in more detail in his report on GFest, I’ll just remark on another aspect of this talk, where he was using a “disintegration theorem”, which was more familiar to me. This says that every representation of the convolution algebra of a groupoid comes from direct-integrating some Hilbert bundle. This is reminiscent of the decomposition of any von Neumann algebra as a direct integral of “factors” (which are each subalgebras of the algebra of operators on fibres of some Hilbert bundle). There seem to be a lot of these “disintegration theorems” involving direct integrals.  I have some ideas about this, but I’ll hold off on them until they’re a little more developed.

There were a number of other talks with interesting elements, but many were a bit too short for me to get much more than an awareness that there’s interesting work being done that I’d like to learn more about: Xiang Tang’s talk on “Group Extensions and Duality of Gerbes” seemed to be perhaps related to what I would describe as 2-vector spaces generated by U(1)-groupoids, but using (blush) a more standard language; Joris Vankerschaver talked about classical mechanics on Lie groupoids – in particular, discrete field theories valued in groupoids.

Now, the colloquium talk by Alan Weinstein was titled “Groupoid Symmetry for Einstein’s Equation?”, including the interrogative punctuation, since some of it was speculative. The basic idea behind this talk was to apply groupoids to General Relativity, thought of as an evolution equation. The Hamiltonian formulation of GR describes a spacelike hypersurface evolving in time – this was described by Arnowitt, Deser and Misner, or ADM, from whom we likewise get the “ADM mass”, which can be thought of as the energy of the spacetime, as it’s seen by an observer at spacelike infinity. This formulation doesn’t describe all solutions of Einstein’s equations – in particular, nothing with closed timelike curves, and unless I misremember, really only makes sense for asymptotically flat spacetimes – certainly that’s true for the ADM mass. But it does fit with our usual intuitions about systems evolving in time, and makes some initial-value problems – including local ones – more or less tractable, which is good for practical purposes. (There are still further technical provisos to ensure the result actually satisfies Einstein’s equations.)

(Note: on looking the asymptotic flatness issue up in Wald’s book, it seems that even for compact space slices, although it naively appears the Hamiltonian vanishes, this can possibly be resolved by some tricky “deparametrization” Wald doesn’t entirely explain. The restriction against closed timelike curves alone probably won’t dissuade anyone who isn’t dead set on building a time machine.)

Anyway, the groupoid symmetry Weinstein was suggesting involves taking space slices (or some slight variation thereon, such as thickened slices, or slices equipped with a metric) to be objects, and considering diffeomorphisms between slices as morphisms. This would make a Lie groupoid, and the corresponding Lie algebroid would reproduce an algebroid structure which it’s natural to associate with the phase space for the system (specifically, one associated with the Poisson structure). There are some links on this correspondence over on the n-Category Cafe – basically, a Poisson structure on a manifold gives a Lie algebroid structure on its cotangent bundle.

Where one can go with this idea, I’m not sure. Clearer to me, since I’ve thought about it more, was the content of Weinstein’s other talk – about the volume of a differentiable stack – which he gave at the conference proper. This is a smooth/differentiable version of groupoid cardinality, and therefore has all sorts of applications to groupoidification in both the vector-space and 2-vector-space flavours. The basic point is that groupoid cardinality – for finite groupoids – involves measures in two ways. One way to find it is as a sum over all the objects; the sum is of the quantities \frac{1}{|t^{-1}(x)|}, the reciprocals of the numbers of morphisms ending at object x. There are other, equivalent, ways, but they all amount to sums of reciprocals of numbers found by counting. Both these numbers, and the sums themselves, can be seen as integrals – using the counting measure on a discrete, finite set. The first point of the talk was that counting measure should be replaced by two kinds of measure – one on the object spaces, and one for morphisms – when one passes to a differentiable stack.
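As a sanity check on the finite case (my own toy computation, not from the talk, with a made-up three-object groupoid), one can compute the cardinality both ways – summing 1/|t^{-1}(x)| over all objects, and summing 1/|\mathrm{Aut}(x)| over isomorphism classes – and see that they agree:

```python
from fractions import Fraction

# A toy groupoid, given as a list of morphisms (source, target, label).
# Component {0, 1}: two isomorphic objects with trivial automorphism groups.
# Component {2}: one object with automorphism group Z/2.
morphisms = [
    (0, 0, "id0"), (1, 1, "id1"), (0, 1, "f"), (1, 0, "f_inv"),
    (2, 2, "id2"), (2, 2, "swap"),
]
objects = {0, 1, 2}

def cardinality_by_targets(objects, morphisms):
    # Sum over all objects of 1/|t^{-1}(x)|, the reciprocal of the
    # number of morphisms ending at x.
    total = Fraction(0)
    for x in objects:
        incoming = [m for m in morphisms if m[1] == x]
        total += Fraction(1, len(incoming))
    return total

def cardinality_by_isoclasses(objects, morphisms):
    # Equivalent formula: sum over isomorphism classes of 1/|Aut(x)|.
    seen, total = set(), Fraction(0)
    for x in objects:
        if x in seen:
            continue
        orbit = {m[1] for m in morphisms if m[0] == x}
        seen |= orbit
        aut = [m for m in morphisms if m[0] == x and m[1] == x]
        total += Fraction(1, len(aut))
    return total

print(cardinality_by_targets(objects, morphisms))     # 3/2
print(cardinality_by_isoclasses(objects, morphisms))  # 3/2
```

Here the component with two isomorphic rigid objects contributes 1, and the single object with a Z/2 of automorphisms contributes 1/2.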

(There are a variety of ways to think about stacks – one is that a stack is a groupoid, thought of up to equivalence. In which case, the real information in a stack consists of the set of isomorphism classes of objects, or orbits, and also the automorphism, or isotropy, groups for objects in each orbit. One good thing about stacks is that they keep track of information which is lost when taking quotients – if a point is fixed under a group action, for instance, it still has nontrivial isotropy when taking the “stacky” quotient of a set by a group action. A nice representative groupoid for this quotient keeps all points, but adds morphisms corresponding to motions under the action.)

There were also a bunch of talks by John Baez and his current students about groupoidification of linear algebra. Since I’ve written about this a lot here anyway, I’ll just remark that Christopher Walker introduced the concept, Alex Hoffnung talked about applications to Hecke algebras and incidence geometries (also discussed in their seminar starting here), while John spoke about Jim Dolan’s ideas for groupoidifying the harmonic oscillator, which have also been written up and slightly expanded by me. My own talk is also sort of about groupoidification, albeit a higher-dimensional version thereof.

At any rate, that’s about all I have time to say about GFest ’08, although there were many other talks which reinforced my desire to keep learning more about all the wonderful stuff known to people who study groupoids, and especially Lie groupoids.

There haven’t been many colloquium talks here this term, but there was one a week ago (Thursday) by Joel Kamnitzer from the University of Toronto (and a contributor to the Secret Blogging Seminar), who gave a talk called “Categorical sl_2 Actions and Equivalence of Categories”.

As it turns out, I have at least two things in common with Joel Kamnitzer. First, we were both President of the University of Waterloo Pure Math Club (which became the Pure Math, Applied Math, and Combinatorics and Optimization club ’round about my time, when we noticed the other two math faculties at Waterloo no longer had their own undergraduate clubs). Second, we both did math Ph.D.s in California.  And while that’s probably a coincidence, there were several themes in the talk that overlap things I’ve talked about here.

The basic idea behind the talk was roughly this: when there’s an action of the Lie algebra sl_2(\mathbb{C}) (i.e. trace-zero 2-by-2 matrices) on a space, that space can be decomposed into some eigenspaces, and one can get isomorphisms between certain pairs of them. So the question is whether this can be categorified: if there’s an action of a categorical sl_2 on a category, can it be decomposed into subcategories which generate it, such that certain pairs can be shown to be equivalent?

So first he reminded/informed us of some of the non-categorified examples. The main thing is to show an equivalent way of describing an sl_2 action. This uses the fact that sl_2 is generated by three matrices:

e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} and f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} and h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}

These satisfy some commutation relations: [e,f] = h, [h,e] = 2e and [h,f] = -2f. These relations specify sl_2 up to isomorphism, so one can describe an action on a vector space by specifying what e, f, and h do (satisfying the commutation relations, of course).  It’s a classical fact from Lie theory that representations of sl_2 all look similar: they’re direct sums \bigoplus_r V(r) of eigenspaces of the generator h (for integer eigenvalues r), and the generators e and f act as “raising” and “lowering” operators, e: V(r) \rightarrow V(r+2) and f : V(r) \rightarrow V(r-2).  (All of which is key to describing spins of fundamental particles, due to SL_2(\mathbb{C}) being the double cover of the Lorentz group SO(3,1; \mathbb{R}), though that’s beside the point just at the moment.)
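The relations are easy to verify directly from the matrices – here is a throwaway check in plain Python (just the arithmetic, no claim of depth):

```python
def matmul(a, b):
    # 2x2 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def bracket(a, b):
    # The commutator [a, b] = ab - ba.
    return sub(matmul(a, b), matmul(b, a))

e = [[0, 1], [0, 0]]
f = [[0, 0], [1, 0]]
h = [[1, 0], [0, -1]]

print(bracket(e, f) == h)                  # True
print(bracket(h, e) == [[0, 2], [0, 0]])   # True: [h,e] = 2e
print(bracket(h, f) == [[0, 0], [-2, 0]])  # True: [h,f] = -2f
```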

We heard three examples, of which for me the most intuitively nice involves an action on the vector space V_X = \mathbb{C}^{P(X)} generated by the power set of a fixed finite set X of size n.  Then h is a (modified) counting operator – its eigenspaces are the subspaces V(r) generated by subsets of size k (where r = 2k - n).  The operator e takes a set A \subset X of size k and maps it to the sum \sum B over all B \supset A with B of size (k+1) (all ways to “add one element” to A);  f takes A to the sum of all subsets of size (k-1) contained in A (all ways to “remove one element” from A).  (This all seems very familiar to me from the combinatorial interpretation of the Weyl algebra, which I talk about here.)  These satisfy the commutation relation ef - fe = h.
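Since this example is completely finite, one can check the commutation relation by brute force. Here’s a sketch (my own toy code, not from the talk), representing vectors in \mathbb{C}^{P(X)} as dictionaries from subsets to coefficients:

```python
from itertools import combinations

n = 4
X = frozenset(range(n))

def e_op(v):
    # e: add one element to each subset, in all possible ways.
    out = {}
    for A, c in v.items():
        for x in X - A:
            B = A | {x}
            out[B] = out.get(B, 0) + c
    return out

def f_op(v):
    # f: remove one element from each subset, in all possible ways.
    out = {}
    for A, c in v.items():
        for x in A:
            B = A - {x}
            out[B] = out.get(B, 0) + c
    return out

def h_op(v):
    # h: multiply a size-k subset by its eigenvalue 2k - n.
    return {A: (2 * len(A) - n) * c for A, c in v.items()}

def sub(u, v):
    keys = set(u) | set(v)
    return {k: u.get(k, 0) - v.get(k, 0) for k in keys}

def clean(v):
    return {k: c for k, c in v.items() if c != 0}

# Check ef - fe = h on every basis vector (every subset A of X).
for k in range(n + 1):
    for A in combinations(range(n), k):
        basis = {frozenset(A): 1}
        lhs = clean(sub(e_op(f_op(basis)), f_op(e_op(basis))))
        assert lhs == clean(h_op(basis))
print("ef - fe = h verified on all subsets of a 4-element set")
```

The cancellation works just as in the combinatorial Weyl algebra: removing-then-adding an element fixes A in k ways, adding-then-removing fixes it in n-k ways, and everything off-diagonal cancels.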

Now, the “equivalences” in the talk will be categorified versions of some obvious isomorphisms here, namely V(r) \cong V(-r) (that is, k-subsets are in bijection with (n-k)-subsets).  These turn out to be imposed by the fact that we have a representation of sl_2, which lifts to a representation of SL_2(\mathbb{C}) in GL(V).  The isomorphism is given by restricting the action of \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} to V(r).

There is a more algebraic-geometry version of this example which replaces the power set of a set with the union of the Grassmann varieties of subspaces of \mathbb{C}^n.  Instead of the vector space generated by subsets of size k, one builds V out of the cohomology of the cotangent bundle to the variety, with V(r) = H^{\bullet} ( T^{\star}Gr(k,\mathbb{C}^n)).

Now, the thing I find interesting about this picture is that, as with the Weyl algebra setup I mention above, it represents the raising and lowering operators in terms of transfer through a span.  Since this seems to pop up everywhere, it’s important enough to think on for a moment.  The span in question goes from T^{\star}Gr(k,\mathbb{C}^n) to T^{\star}Gr(k+1,\mathbb{C}^n).

To say what goes in the middle, we use the fact that an element of the cotangent bundle T^{\star}Gr(k,\mathbb{C}^n) amounts to a pair (W,X), where W < \mathbb{C}^n is a k-dimensional subspace (a point on Gr(k,\mathbb{C}^n)) and X is a cotangent vector at W.  As it turns out, X amounts to a map X : \mathbb{C}^n \rightarrow W which annihilates W itself.  So then we have the variety I = \{ (X,W_k,W_{k+1}) \} where W_k < W_{k+1}, and (X,W_k) and (X,W_{k+1}) are cotangent vectors.  This has projection maps to the two cotangent bundles: T^{\star}(Gr(k,\mathbb{C}^n)) \stackrel{\pi_k}{\leftarrow} I \stackrel{\pi_{k+1}}{\rightarrow} T^{\star}(Gr(k+1,\mathbb{C}^n)).

Then the point is that the cohomology spaces H^{\bullet}(T^{\star}(Gr(k,\mathbb{C}^n))) are built from maps into \mathbb{C}, so we can “pull-push” them through the span by e = (\pi_{k+1})_{\star} \circ \pi_k^{\star}.  This defines e, and f is similar, going the other way.
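The geometric version is beyond toy code, but the finite-set shadow of this pull-push is easy to exhibit: take the span between k-subsets and (k+1)-subsets whose middle object is the set of inclusions A \subset B, pull a function back along one leg, and push (sum over fibres) along the other. This recovers the raising operator from the subsets example. A sketch (my own illustration, not from the talk):

```python
from itertools import combinations

n, k = 4, 1
k_subsets = [frozenset(c) for c in combinations(range(n), k)]
k1_subsets = [frozenset(c) for c in combinations(range(n), k + 1)]

# The span: I = {(A, B) : A < B, |A| = k, |B| = k+1},
# with projections pi_k(A, B) = A and pi_{k+1}(A, B) = B.
I = [(A, B) for A in k_subsets for B in k1_subsets if A < B]

def pull_push(g):
    """Pull a function g on k-subsets back to I along pi_k,
    then push forward (sum over fibres) along pi_{k+1}."""
    out = {B: 0 for B in k1_subsets}
    for A, B in I:
        out[B] += g[A]
    return out

# Applied to a delta function at A0, this is exactly e(A0): the sum
# of all (k+1)-subsets B containing A0.
A0 = frozenset({0})
delta = {A: (1 if A == A0 else 0) for A in k_subsets}
result = pull_push(delta)
print(sorted(tuple(sorted(B)) for B, c in result.items() if c == 1))
# [(0, 1), (0, 2), (0, 3)]
```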

So much for actions of “old-school” sl_2: what about “categorical” sl_2?  To begin with, what does that even mean?  Well, Aaron Lauda has described a “categorified” version of sl_2 (actually, of Lusztig’s presentation of the enveloping algebra U_q(sl_2) – a quantum version, though that won’t enter into this).  This is a categorification of the generators E, F, and H, and of their commutation relations (which now become isomorphisms, which may have to satisfy some coherence laws – the details here being incredibly important, but not very enlightening at first).  These E, F and H are now functors, rather than maps.

As a side note, this is not precisely a categorification of the Lie algebra sl_2, but actually a categorification of a particular presentation of sl_2.  Though, since I’m mentioning this, I’ll remark it’s much more like the categorification of the Weyl algebra which is involved in the groupoidification of the quantum harmonic oscillator.

In any case, Joel went on to describe categorical actions of sl_2.  Actually, he distinguished “weak” and “strong” versions, which is apparently a common usage, though not the one I’m used to.  “Weak” means things are specified up to unspecified isomorphisms required to exist, and “strong” means things are defined up to specified (presumably coherent) isomorphisms (which is what I usually understand “weak” to mean).  The strong ones are the ones which give the equivalences we’re looking for, though.

It turns out that an action of the categorical sl_2 on an additive category D gives: (1) a way to split up D = \bigoplus_r D(r) for integers -n \leq r \leq n, and (2) the action of the generators E and F with E : D(r) \rightarrow D(r+2) and F : D(r) \rightarrow D(r-2), such that (3) there are commutation isomorphisms analogous to the commutator identities for regular sl_2.  I note that algebraic geometers prefer to use additive categories – where the hom-sets are abelian groups, rather than vector spaces, which is what they would be in a 2-vector space.  In fact, later in the talk we heard about generalizations to triangulated categories – an even weaker condition.  In the special case where the additive category happened to be a 2-vector space, we’d have a “2-linear representation of a 2-algebra”.

Now, the main example was similar to the one above involving Grassmann varieties.  The difference is that instead of cooking up a vector space from the cohomology of the cotangent bundle T^{\star}(Gr(k,\mathbb{C}^n)), one cooks up an abelian category.  This is D(r) = D Coh(T^{\star}(Gr(k,\mathbb{C}^n))) where, again, r = 2k - n, for r = -n ... n.  This is the derived category of coherent sheaves on the cotangent bundle.  There seems to be some analogy between the two: cohomology involves maps into \mathbb{C} (and the exterior algebra of forms), while coherent sheaves might be thought of as (algebraic) vector-space-valued functions, a categorified version of functions.  Also, while the cohomology is a chain complex, the objects of the derived category are themselves chain complexes.  Exactly how the analogy works is something I can’t explain just now.

Anyway, the key result, due to Chuang and Rouquier, says that given a “strong” categorical sl_2 action (in the sense above) in which E and F are exact functors (in 2-vector spaces, they’d be “2-linear maps”), there is an equivalence (given in terms of E and F) between the categories of complexes on D(-r) and D(r).  This isn’t quite what was wanted (we wanted an equivalence D(-r) \cong D(r)), so for the remainder of the talk we heard about work directed at this question: cases where it works, counterexamples when it doesn’t, some generalizations, and so on.

Since coming back from Montreal, I’ve given an exam for a very large linear algebra class, but before I forget, I’d like to make a few notes about some of the talks.

The first day, Saturday, October 4, was a long day of mostly half-hour talks, and some 20-min talks, including my late-registering contribution. It was about the 2-linearization of spans of groupoids which I’ve talked about before, but with a problem fixed. I’ll say more about that soon.

It was interesting to see the range of talks – category theory spans a few areas of mathematics, after all. To start off the day, there was a session in which Michael Makkai and Victor Harnik both gave talks about higher-dimensional categories in one form or another.

Makkai’s was about “revisiting coherence in bicategories and tricategories”. Coherence is an issue that comes up once you get into higher categories – that is, looking at things bearing more complicated relationships than “equal” and “not-equal”, such as “isomorphic”, or “equivalent”. Or “biequivalent”, I suppose – Makkai covered some work of Nick Gurski and Steve Lack about how bicategories and tricategories are (or are not) equivalent to strict versions of themselves. More precisely, that there’s a biequivalence between \mathbf{2-Cat} (the strict form) and \mathbf{Bicat} (the weak form). Whereas there is no triequivalence between (strict) \mathbf{3-Cat} and (weak) \mathbf{Tricat}. There is a triequivalence between \mathbf{Tricat} and \mathbf{Gray} – something intermediate between strict and weak. He also explained how these equivalences pass through a relationship with the category of graphs. (An equivalence is a pair of adjoint functors – the equivalence between \mathbf{Bicat} and \mathbf{2-Cat} factors through pairs of adjoint functors between each of these and \mathbf{Graph}). There was more to the talk, but it was somewhat over my head.

Harnik’s talk, “Placed composition in higher dimensional categories”, was about a recursive way of defining partial composition operations in higher dimensions. Here, the point is that it’s easy and obvious how to compose one-dimensional arrows: you stick them tip-to-tail. Higher-dimensional morphisms need more complicated rules telling how to stick them together along various numbers of shared faces. (A line-segment arrow has only two faces, both points with no sub-faces). Harnik described how to generate an \omega-category recursively: generate faces of dimension n by freely adjoining some indeterminate cells, which need all these operations telling how they can be stuck together. Then you have to impose some algebraic relations – certain composites are the same. This is like a problem of presenting groups in terms of generators and relations: it can be hard to tell whether two elements are equal or not – two elements being declared equal if they can be proved so in some algebraic system (not an easy question to test, usually).

In fact, questions about computability came up a lot, since there is a lot of interaction between category theory and computer science. We saw several talks that touched on that in the afternoon: B. Redmond gave a talk, “Safe Recursion Revisited”, about a categorical point of view on defining recursion “safely” (i.e. keeping algorithms in polynomial time); G. Lukacs described “A cartesian closed category that might be useful for higher-type computation” – higher types being apparently the type-theory correlate of higher categories. We had heard about this earlier – M. Warren talked on “types and groupoids”, showing how to use \omega-groupoids to look at types, variables of those types (objects), and terms or “elements of proofs” (as morphisms), and so on for “higher types”. A different take on the intersection between computing and categories was N. Yanofsky’s talk “On the algorithmic informational content of categories”, which applied Kolmogorov complexity (the size of a Turing machine required to produce a given output) to productions describing categories. Productions like the one that takes a simpler description – of the category of topological spaces, say – and turns it into a more complex one, like the category of pointed topological spaces. Or from vector spaces to Banach spaces, or what-have-you. He described a little language that can be used to specify (some, not all) categories by such operations, starting with a few building blocks – which then allows you to ask about the Kolmogorov complexity of the category itself.

In a different vein, there was also a reasonable cross-section of topological ideas going around. Certainly any time \omega-groupoids come up, it also comes up that they classify homotopy types of spaces. But much more detailed geometric pictures also come up. Walter Tholen talked about the Gromov–Hausdorff metric on the category of metric spaces: the distance between two metric spaces is defined as a minimum over all possible isometric embeddings into a common space, of a certain maximum separation between the spaces. One can then talk about Cauchy sequences of metric spaces, and the fact that (for example), the category of complete metric spaces is itself complete.

Dorette Pronk also brought in some geometry when she talked about “Transformation groupoids and orbifold homotopy theory”. I’m quite interested in transformation groupoids, which show up when a set is acted on by a group. The example I’ve talked about is from gauge theory, where there is a group of gauge transformations acting on the moduli space of configurations (i.e. connections). This was one of the examples she gave for where these sorts of things come from. Then she got into the connections between these sorts of groupoids and the homotopy theory of orbifolds. Orbifolds are like manifolds, except that their neighborhoods look like U/G, where U is an open set in \mathbb{R}^n, and G is a finite group (a nontrivial group action distinguishes orbifolds from mere manifolds). Most can be said in the case where the orbifold is just X/L where X is a manifold and L is a Lie group, acting globally. Orbifolds like this are called representable.

Now, orbifolds have groupoids associated to them (in various ways), and Dorette Pronk’s talk dealt with the fact that the orbifolds being representable (i.e. arising from a global group action) is equivalent to the associated groupoid being Morita equivalent to a transformation groupoid (i.e. one arising from a global group action). Morita equivalence for groupoids G and H turns out to be the same as having a nice enough SPAN of groupoids

G \leftarrow K \rightarrow H

So in fact here are spans of groupoids again – just the sort of thing I was there to talk about, and should have more to say on here shortly. So that was interesting. This situation of having a span of groupoids seems to show up in several different guises.

There were some other talks I’ve missed, but it’s taken me a while to get to this, and some of them have faded a bit, so I’ll cut this short there.

Well, I was out of town for a bit, but classes are now underway here at UWO. This term I’m teaching an introductory Linear Algebra course, which is, I believe, the largest class I’ve taught so far, with on the order of a couple of hundred students. That should be a change of pace: last year, both courses I taught had just seven students each.

Meanwhile, I’ll carry on from the last post. I described structure types (a.k.a. species) as functors T : \mathbf{FinSet_0} \rightarrow \mathbf{Sets}, which take a finite set S, and give the set of all “T-structures on the underlying set S”. A lot of combinatorial enumeration uses the generating functions for such structure types, which are power series whose coefficients count the number of structures for a set of size n (the fact that structure types are functorial is what allows us to ignore everything but the isomorphism class of the underlying set S). Now, there is a notion of generalized species, described in this paper by Fiore, Gambino, Hyland and Winskel, which I’ll link here because I think it’s a great point of view on the setup I discussed before. But right now, I’ll go in a somewhat different direction. (Whether there’s a connection between the two is left as an exercise.)
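To make the counting concrete, here is a small Python check (my own example, not from the paper): the structure type “a partition” – a set of disjoint nonempty sets covering the underlying set – has exponential generating function e^{e^x - 1}, whose coefficients are the Bell numbers, and brute-force enumeration of structures agrees:

```python
from math import comb

def bell(n):
    # Bell numbers via the standard recurrence B(m+1) = sum_k C(m,k) B(k);
    # these are the coefficients n! [x^n] of e^{e^x - 1}.
    B = [1]
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

def count_partitions(n):
    # Brute-force count of set partitions of {0, ..., n-1}: place each
    # element either into an existing block or into a new block.
    if n == 0:
        return 1
    count = 0
    def extend(elem, blocks):
        nonlocal count
        if elem == n:
            count += 1
            return
        for b in blocks:
            b.append(elem)
            extend(elem + 1, blocks)
            b.pop()
        blocks.append([elem])
        extend(elem + 1, blocks)
        blocks.pop()
    extend(0, [])
    return count

for n in range(7):
    assert count_partitions(n) == bell(n)
print("partition counts match Bell numbers (EGF e^{e^x - 1}) up to n = 6")
```

Functoriality is doing the work here: only the size of the underlying set matters, so a single number per n captures the whole structure type.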

Stuff Types

To start with, there is a dual way to look at structure types (a.k.a. species). The “structures” identified by a structure type T form a category X. It’s a concrete category in fact: each object has an underlying set. The morphisms of X are “structure-preserving” maps (the meaning of which depends, obviously, on T) of T-structured sets. These correspond exactly (by fiat) to the isomorphisms of underlying sets (i.e. relabellings). These are all invertible, so this is a groupoid.

So is the category \mathbf{FinSet_0} of “underlying sets”, and so the forgetful functor F from T-structured sets into \mathbf{FinSet_0} is a functor between groupoids. This functor F is a sort of “dual” way to look at the structure type – the original functor T. In fact, for any structure type T, this dual F will always be a faithful functor. That is, the morphism map is one-to-one, so morphisms in X are uniquely determined by the corresponding map of underlying sets. In other words, there are no additional symmetries in X but those determined by set bijections.

But this is an artifact! I declared the union of all the sets T(S) to be the objects of a category X and then added morphisms by hand. That makes sense if you think of the “T-structured sets built on underlying set S” as derived entirely from T and S. But the dual view, focusing on F, tends to make us think of X as given, and F as observing some property – underlying sets and maps for objects and morphisms. This may throw away information about both, in principle. Faithfulness of F suggests that the objects of X just consist of sets S with some inflexible extra decorations with no local symmetries of their own to complicate the morphisms.

So let’s treat X as real and F as some kind of synopsis or measurement of X. If F doesn’t need to be faithful, it may not correspond to a structure type, but it will be what Baez and Dolan call a stuff type, which is actually just any groupoid equipped with an underlying-set functor F: X \rightarrow \mathbf{FinSet_0}. Maybe it’s surprising that these can still be treated like power series, by taking the coefficient at n to be the (real-valued) groupoid cardinality of the preimage of n. (The groupoid cardinality, described here, is related to the “Leinster measure” for categories. Regular readers of the n-Category Cafe will know that there has been some discussion over there about this – some links from here, and discussion about applying it to “configuration categories” of physical systems here.)
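One nice concrete instance: the groupoid \mathbf{FinSet_0} itself has one isomorphism class of objects for each n, with automorphism group S_n of size n!, so its groupoid cardinality is \sum_n 1/n! = e. A quick numerical check of that sum:

```python
from math import factorial, e, isclose

# Groupoid cardinality of FinSet_0: one isomorphism class per n >= 0,
# each contributing 1/|S_n| = 1/n! to the sum.
partial = sum(1 / factorial(n) for n in range(20))
print(partial)  # approaches e = 2.71828...
assert isclose(partial, e, rel_tol=1e-12)
```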

Stuff types can be used to deal with seemingly straightforward “structures” which structure types have a hard time with. For instance, let E^Z be the structure type “a set”; then E^{E^Z} should be the type “a set of sets” (where the underlying set operation is the union of elements). This can be represented by a stuff type, but not a structure type.

Groupoidification

Stuff types fit into a more general pattern, which has to do with the 2-category of spans of groupoids. I really cleared up just how this works in conversation with Jamie Vicary.

Groupoidification is the program of looking for analogs of linear algebra (whose native habitat is the monoidal category \mathbf{Vect}) in a different monoidal category (in fact, bicategory) \mathbf{Span(Gpd)} of spans of groupoids, which I’ve talked about quite a bit before. Very briefly, we have a bicategory where the objects are groupoids, and the morphisms are spans like: A \leftarrow X \rightarrow B, composed by (weak) pullback. Given spans X, X', a 2-morphism is a map \alpha: X \rightarrow X' which makes the resulting diagram commute.

So the key thing now is the fact that a stuff type F : X \rightarrow \mathbf{FinSet_0} can be regarded as a span of groupoids in two ways:

\mathbf{1} \leftarrow X \rightarrow \mathbf{FinSet_0}

and

\mathbf{FinSet_0} \leftarrow X \rightarrow \mathbf{1}

Here, \mathbf{1} is the trivial groupoid consisting of just one object and its identity morphism. Any groupoid X has just one functor into \mathbf{1}, so a stuff type automatically has these two incarnations as a span. One is a morphism (in \mathbf{Span(Gpd)}) from \mathbf{1} to \mathbf{FinSet_0}, and the other is its dual, going the other way. One can call these a “state” and a “costate”. Why these terms?

One important fact is that \mathbf{Span(Gpd)} is not just a bicategory, it’s a monoidal bicategory, whose monoidal operation on objects A \otimes B is the (cartesian) product of groupoids. (Which also tells you what it is for morphisms, by the way). It should be clear, then, that \mathbf{1} is the monoidal unit, since X \times \mathbf{1} \cong X.

So another way of describing a stuff type is that it’s a morphism from (or to) the monoidal unit in a certain monoidal (bi)category with duals. In the category of Hilbert spaces, if \mathcal{H} is the space associated to a quantum system, a map \mathbb{C} \rightarrow \mathcal{H} would be called a “state” (and the dual would be a “costate”). Stuff types provide a 2-categorical version of the same thing, where the object taking the place of \mathcal{H} is \mathbf{FinSet_0}.

There is, as I’ve discussed here previously, a 2-vector space (indeed, a 2-Hilbert space) associated with this groupoid. But the point of view I’m adopting right now is based on discussion I had with Jamie Vicary about this paper. In it, he gives an abstract (i.e. categorical) description of what’s going on in the quantum mechanics of the harmonic oscillator in terms of an adjunction of categories. This can then be transplanted into various monoidal categories with duals. Here, Jamie gives a more general discussion of quantum algebras, with the same sort of flavour.

So as to the question of how species relate to QFT, this suggests one way to look at how. The harmonic oscillator is the physical system of interest when we look at “states” as spans into \mathbf{FinSet_0}. Up to isomorphism, the important features of the groupoid \mathbf{FinSet_0} are: its objects correspond to nonnegative integers, which label the energy levels for the oscillator (they “count photons”); each object n has automorphisms corresponding to permutations of those n photons (they’re indistinguishable – in particular, “bosons”). This is fairly simple, but for a more elaborate QFT picture, look at “states” for other groupoids in \mathbf{Span(Gpd)}.  One complication is that typically these groupoids are going to have some smooth structure…

Perhaps more on this later.

In the past couple of weeks, Masoud Khalkhali and I have been reading and discussing this paper by Marcolli and Al-Yasry. Along the way, I’ve been explaining some things I know about bicategories, spans, cospans and cobordisms, and so on, while Masoud has been explaining to me some of the basic ideas of noncommutative geometry, and (today) K-theory and cyclic cohomology. I find the paper pretty interesting, especially with a bit of that background help to identify and understand the main points. Noncommutative geometry is fairly new to me, but a lot of the material that goes into it turns out to be familiar stuff bearing unfamiliar names, or looked at in a somewhat different way than the one I’m accustomed to. For example, as I mentioned when I went to the Groupoidfest conference, there’s a theme in NCG involving groupoids, and algebras of \mathbb{C}-linear combinations of “elements” in a groupoid. But these “elements” are actually morphisms, and this picture is commonly drawn without objects at all. I’ve mentioned before some ideas for how to deal with this (roughly: \mathbb{C} is easy to confuse with the algebra of 1 \times 1 matrices over \mathbb{C}), but anything special I have to say about that is something I’ll hide under my hat for the moment.

I must say that, though some aspects of how people talk about it, like the one I just mentioned, seem a bit off, to my mind, I like NCG in many respects. One is the way it ties in to ideas I know a bit about from the physics end of things, such as algebras of operators on Hilbert spaces. People talk about Hamiltonians, concepts of time-evolution, creation and annihilation operators, and so on in the algebras that are supposed to represent spaces. I don’t yet understand how this all fits together, but it’s definitely appealing.

Another good thing about NCG is the clever elegance of Connes’ original idea of yet another way to generalize the concept “space”. Namely, there was already a duality between spaces (in the usual sense) and commutative algebras (of functions on spaces), so generalizing to noncommutative algebras should give corresponding concepts of “spaces” which are different from all the usual ones in fairly profound ways. I’m assured, though I don’t really know how it all works, that one can do all sorts of things with these “spaces”, such as finding their volumes, defining derivatives of functions on them, and so on. They do lack some qualities traditionally associated with space – for instance, many of them don’t have many, or in some cases any, points. But then, “point” is a dubious concept to begin with, if you want a framework for physics – nobody’s ever seen one, physically, and it’s not clear to me what seeing one would consist of…

(As an aside – this is different from other versions of “pointless” topology, such as the passage from ordinary topologies to sites in the sense of Grothendieck. The notion of “space” went through some fairly serious mutations during the 20th century: from Einstein’s two theories of relativity, to these and other mathematicians’ generalizations, the concept of “space” has turned out to be either very problematic, or wonderfully flexible. A neat book is Max Jammer’s “Concepts of Space”: though it focuses on physics and stops in the 1930’s, you get to appreciate how this concept gradually came together out of folk concepts, went through several very different stages, and in the 20th century started to be warped out of all recognition. It’s as if – to adapt Dan Dennett – “their word for milk became our word for health”. I would like to see a comparable history covering more of the 20th century, and mathematicians’ more various concepts of space. Plus, one could probably write a less Eurocentric genealogy nowadays than Jammer did in 1954.)

Anyway, what I’d like to say about the Marcolli and Al-Yasry paper at the moment has to do with the setup, rather than the later parts, which are also interesting. This has to do with the idea of a correspondence between noncommutative spaces. Masoud explained to me that, related to the matter of not having many points, such “spaces” also tend to be short on honest-to-goodness maps between them. Instead, it seems that people often use correspondences. Using that duality to replace spaces with algebras, a recurring idea is to think of a category where a morphism from algebra A to algebra B is not a map, but a left-right (A,B)-bimodule, _AM_B. This is similar to the business of making categories of spans.

Let me describe briefly what Marcolli and Al-Yasry describe in the paper. They actually have a 2-category. It has:

Objects: An object is a copy of the 3-sphere S^3 with an embedded graph G.

Morphisms: A morphism is a span of branched covers of 3-manifolds over S^3:

G_1 \subset S^3 \stackrel{\pi_1}{\longleftarrow} M \stackrel{\pi_2}{\longrightarrow} S^3 \supset G_2

such that each of the maps \pi_i is branched over a graph containing G_i (perhaps strictly). In fact, as they point out, there’s a theorem (due to Alexander) proving that ANY 3-manifold M can be realized as a branched cover over the 3-sphere, branched at some graph (though perhaps not including a given G, and certainly not uniquely).

2-Morphisms: A 2-morphism between morphisms M_1 and M_2 (together with their \pi maps) is a cobordism M_1 \rightarrow W \leftarrow M_2, in a way that’s compatible with the structure of the M_i as branched covers of the 3-sphere. The M_i are being included as components of the boundary \partial W – I’m writing it this way to emphasize that a cobordism is a kind of cospan. Here, it’s a cospan between spans.

This is somewhat familiar to me, though I’d been thinking mostly about examples of cospans between cospans – in fact, thinking of both as cobordisms. From a categorical point of view, this is very similar, except that with spans you compose not by gluing along a shared boundary, but taking a fibred product over one of the objects (in this case, one of the spheres). Abstractly, these are dual – one is a pushout, and the other is a pullback – but in practice, they look quite different.

However, this higher-categorical stuff can be put aside temporarily – they get back to it later, but to start with, they just collapse all the hom-categories into hom-sets by taking morphisms to be connected components of the categories. That is, they think about taking morphisms to be cobordism classes of manifolds (in a setting where both manifolds and cobordisms have some branched-covering information hanging around that needs to be respected – they’re supposed to be morphisms, after all).

So the result is a category. Because they’re writing for noncommutative geometry people, who are happy with the word “groupoid” but not “category”, they actually call it a “semigroupoid” – but as they point out, “semigroupoid” is essentially a synonym for (small) “category”.

Apparently it’s quite common in NCG to do certain things with groupoids \mathcal{G} – like taking the groupoid algebra \mathbb{C}[\mathcal{G}] of \mathbb{C}-linear combinations of morphisms, with a product that comes from multiplying coefficients and composing morphisms whenever possible. The corresponding general thing is a categorical algebra. There are several quantum-mechanical-flavoured things that can be done with it. One is to let it act as an algebra of operators on a Hilbert space.

This is, again, a fairly standard business. The way it works is to define a Hilbert space \mathcal{H}(G) at each object G of the category, which has a basis consisting of all morphisms whose source is G. Then the algebra acts on this, since any morphism M' which can be post-composed with one M starting at G acts (by composition) to give a new morphism M' \circ M starting at G – that is, it acts on basis elements of \mathcal{H}(G) to give new ones. Extending linearly, algebra elements (combinations of morphisms) also act on \mathcal{H}(G).
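To make this action concrete, here’s a minimal sketch in Python. The category, its objects, and the morphism names are all my own toy example (two objects A and B, with a single non-identity morphism f : A \rightarrow B, nothing from the paper): the algebra acts on the Hilbert space \mathcal{H}(A), whose basis is the set of morphisms with source A, by post-composition wherever composition is defined.

```python
# A toy category: objects A, B; morphisms id_A, id_B, and f : A -> B.
# Each morphism name maps to its (source, target) pair.
morphisms = {
    "id_A": ("A", "A"),
    "id_B": ("B", "B"),
    "f":    ("A", "B"),
}

def compose(m2, m1):
    """Return m2 o m1 if composable (target of m1 == source of m2), else None."""
    s1, t1 = morphisms[m1]
    s2, t2 = morphisms[m2]
    if t1 != s2:
        return None
    # In this tiny category, the only compositions involve an identity.
    if m1.startswith("id"):
        return m2
    if m2.startswith("id"):
        return m1
    return None

# Basis of H(A): all morphisms whose source is A.
basis = [m for m, (src, tgt) in morphisms.items() if src == "A"]  # ["id_A", "f"]

def act(algebra_elt, vec):
    """Act by an algebra element (dict: morphism -> coefficient) on a
    vector in H(A) (dict: basis morphism -> coefficient), by post-composition."""
    out = {m: 0.0 for m in basis}
    for m2, a in algebra_elt.items():
        for m1, c in vec.items():
            comp = compose(m2, m1)
            if comp is not None:
                out[comp] += a * c
    return out

v = {"id_A": 1.0, "f": 0.0}   # the basis vector at id_A
print(act({"f": 1.0}, v))     # -> {'id_A': 0.0, 'f': 1.0}, since f o id_A = f
```

Note that the basis vector at f is killed by acting with f again (f \circ f isn’t defined), which is just the statement that the action only moves you along composable strings of morphisms.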

So this gives, at each object G, an algebra of operators acting on a Hilbert space \mathcal{H}(G) – the main components of a noncommutative space (actually, these need to be defined by a spectral triple: the missing ingredient in this description is a special Dirac operator). Furthermore, the morphisms (which in this case are, remember, given by those spans of branched covers) give correspondences between these.

Anyway, I don’t really grasp the big picture this fits into, but reading this paper with Masoud is interesting. It ties into a number of things I’ve already thought about, but also suggests all sorts of connections with other topics and opportunities to learn some new ideas. That’s nice, because although I still have plenty of work to do getting papers written up on work already done, I was starting to feel a little bit narrowly focused.

In “The Fabric of Reality”, David Deutsch gives a refutation of solipsism. I’m not entirely sure it works – all he really tries to do is to show that the difference between solipsism and realism is more nearly a mere semantic distinction than is generally assumed. But in any case, along the way, there’s an anecdote about a solipsist professor lecturing his (imaginary?) class merely to help him clarify his ideas. The idea being that, even if the imaginary students don’t really exist, it helps to clarify the professor’s own ideas by lecturing to them, answering questions, and so forth. In this view, you don’t really understand your own opinions – let alone justifiably believe in them – unless you’ve argued for them against a variety of possible criticisms. (J.S. Mill gave a defense of full-fledged freedom of speech, even for grossly offensive and even “dangerous” opinion, on this ground.)

I mention this because, when I told Dan about the blog, he seemed dubious about blogging as a way of communicating math. It’s certainly more solipsistic than a usenet newsgroup, or a mailing list. Those are channels devoted to a particular subject, with many participants. A blog, comments notwithstanding, is mainly a channel devoted to one voice, on many particular subjects. It’s true that half the point of communicating ideas is to get feedback on them from other people. You make your thinking part of one of those great processes like cathedral-building – ad-hoc, gradual, and (significantly) collective. Even so, relatively solipsistic channels are not entirely pointless.

To wit: by working through my theorems about transporting 2-vectors through spans – both for this blog and for my talk at Groupoidfest – I discovered some problems. Nobody pointed them out, but discovering them was a consequence of approaching the material again from a new angle, with an audience in mind.

The problem is a conceptually important one – mistaking an n-dimensional space for a 1-dimensional space. I’m fairly sure, for various reasons, that the theorem that there is a 2-functor V : Span(\mathbf{Gpd}) \rightarrow \mathbf{Vect} is still true, but the proof I have in my thesis (in the special case where the groupoids are flat connection groupoids on spaces) has a problem. Since that affects Part 4 of “Spans and Vector Spaces”, which I was going to post, I’ll put that off for a while as I get the proof straightened out.

Here is the issue in a nutshell, however:

The proof I have involves a construction of a functor by a particular method, which I’ve been describing in the last three posts. The final step I was going to describe involved what the construction does for 2-morphisms – spans between spans. (There is more to the proof, but the remainder is technical enough to be fairly unenlightening – basically, to be a 2-functor, there need to be specified natural isomorphisms replacing the equations for preserving identities and composition in the definition of a functor, and these have to obey some equations which need to be checked.)

The construction given in my thesis is supposed to give a way to take a span of spans of groupoids, and give a natural transformation between a pair of 2-linear maps. But a 2-linear map can be written as a matrix of vector spaces, and a natural transformation is then written as a matrix of linear operators which act componentwise. So one way to look at the problem is to construct a linear map between vector spaces from a span of groupoids.

That is, we have spans A \leftarrow X_1 \rightarrow B and A \leftarrow X_2 \rightarrow B. Picking basis objects for V(A) and V(B) (namely, objects a \in A and b \in B, plus representations U, W of their automorphism groups) gives a subgroupoid of X_1, consisting of those objects x \in X_1 which are sent to a and b under the maps in the span. It also gives a vector space which is built as a colimit of some vector spaces associated to these objects. Assuming X_1 is skeletal, this works out (as I described before) to W^{\ast} \otimes_{\mathbb{C}[Aut(x)]} U for each of the x \in X_1 in question. The same holds for X_2.

Now suppose we have a span-of-spans X_1 \leftarrow Y \rightarrow X_2 making the obvious diagram commute. Then because of that commutation, we also have a span of groupoids over each of the choices (a,b) of objects, and so then the question becomes, partly, how to get a linear map between the vector spaces we just constructed. If you have bases for all the vector spaces here, it’s not too bad: vectors can be seen as complex-valued functions on the basis. We can push these through the span just as we’ve been talking about in the last few posts here: first pull back a function along one leg by composition, then push forward along the other leg. The push-forward will involve a sum over some objects, and some normalizing factors having to do with the groupoid cardinalities of the groupoids in the span.
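The “groupoid cardinality” showing up in those normalizing factors is easy to compute for a skeletal groupoid: it’s the sum over objects of 1/|Aut(x)|. Here’s a small sketch (my own illustration, not tied to the spans above); a nice sanity check is that the groupoid of finite sets and bijections – the \mathbf{FinSet_0} from earlier – has cardinality \sum_n 1/n! = e.

```python
from math import factorial, e

def groupoid_cardinality(aut_sizes):
    """Cardinality of a skeletal groupoid: sum of 1/|Aut(x)| over its objects."""
    return sum(1.0 / n for n in aut_sizes)

# The groupoid of finite sets and bijections: one object per n, with
# automorphism group S_n of size n!.  Truncating the sum at n = 19 is
# already accurate to many decimal places.
approx = groupoid_cardinality(factorial(n) for n in range(20))
print(approx)  # approximately 2.718281828... = e
```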

However, I won’t go too far into detail about this, because the construction I actually outlined doesn’t adequately specify the basis to use. In fact, it will really only work if all the vector spaces W^{\ast} \otimes_{\mathbb{C}[Aut(x)]} U are one-dimensional. Then there is a basis for the combined space which just consists of all the objects x. I’d hoped that Schur’s lemma (that intertwiners from W to itself, or from U to itself, have to be multiples of the identity) would get me out of this problem, but I’m not sure it does. So there is a problem with the construction I was trying to use.

As I say, I’m fairly sure the theorem remains true – it’s just the proof needs fixing, which I don’t expect to be too hard. However, I’ll refrain from getting sidetracked until I know I have it worked out.

Instead, next time I’ll describe some of the things I learned at Groupoidfest 07 when I presented a talk on this stuff. (At first I was nervous, having discovered this flaw while preparing the talk – but then, a lot of people were talking about work-in-progress, so I don’t feel too bad now. Plus, the meeting was a lot of fun.)

Well, I was out of town for a weekend, and then had a miserable cold that went away but only after sleeping about 4 extra hours per day for a few days. So it’s been a while since I continued the story here.

To recap: I first explained how to turn a span of sets into a linear operator between the free vector spaces on those sets. Then I described the “free” 2-vector space on a groupoid X – namely, the category of functors from X to \mathbf{Vect}. So now the problem is to describe how to turn a span of groupoids into a 2-linear map. Here’s a span of groupoids:

A span of groupoids

Here we have a span Y \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} Z, of groupoids. In fact, they’re skeletal groupoids: there’s only one object in each isomorphism class, so they’re completely described, up to isomorphism, by the automorphism groups of each object. The object y_2 \in Y, for instance, has automorphism group H_2, and the object x_1 \in X has automorphism group G_1. This diagram shows the object maps of the “source” and “target” functors s and t explicitly, but note that with each arrow indicated in the diagram, there is a group homomorphism. So, since the object map for s sends x_1 to y_2, that strand must be labelled with a group homomorphism s_1 : G_1 \rightarrow H_2. (We’re leaving these out of the diagram for clarity).

So, we want to know how to transport a \mathbf{Vect}-valued functor F : Y \rightarrow \mathbf{Vect} along this span. We know that such a functor attaches to each y_i \in Y a representation of H_i on some vector space F(y_i). As with spans of sets, the first stage is easy: we have the composable pair of functors X \stackrel{s}{\longrightarrow} Y \stackrel{F}{\longrightarrow} \mathbf{Vect}, so “pulling back” F to X gives s^{\ast}F = F \circ s : X \rightarrow \mathbf{Vect}.

What about the other leg of the span? Remember back in Part 1 what happened when we pushed down a function (not a functor) along the second leg of a span. To find the value of the pushed-forward function on an element z, we took a sum of the complex values on every element of the preimage t^{-1}(z). For vector-space-valued functors, we expect to use a direct sum of some terms. Since we’re dealing with functors, things are a little more complex than before, but there should still be a contribution from each object in the preimage (or, if we’re not talking about skeletal groupoids, the essential preimage) of the object z we look at.

However, we have to deal with the fact that there are morphisms. Instead of adding scalars, we have to combine vector spaces using the fact that they are given as representation spaces for some particular groups.

To see what needs to be done, consider the situation of groupoids with just one object, so the only important information is the homomorphism of groups. These can be seen as one-object groupoids, which we can just call G and H. A functor between them is given by the single group homomorphism h : G \rightarrow H.

Now suppose we have a representation R of the group G on V (so that R(g) \in GL(V) and R(gg') = R(g)R(g')). Then somehow we need to get a representation of H which is “induced” by the homomorphism h, Ind(R):

Induced Representation

This diagram shows “the answer” – but how does it work? Essentially, we use the fact that there’s a nice, convenient representation of any group G, namely the regular representation of G on the group algebra \mathbb{C}[G]. Elements of \mathbb{C}[G] are just complex linear combinations of elements of G, which are acted on by G by left multiplication. The group H also has a regular representation, on \mathbb{C}[H]. These are the most easily available building blocks with which to build the “push-forward” of R onto H.

To see how, we use the fact that \mathbb{C}[H] has a right-action of G, and hence \mathbb{C}[G], by way of h. An element g \in G acts on \mathbb{C}[H] by right-multiplication by h(g) – and this extends linearly to \mathbb{C}[G]. So we can combine this with the left action of \mathbb{C}[G] on V (also extended linearly from G) by taking a tensor product of \mathbb{C}[H] with V over \mathbb{C}[G]. This lets us “mod out” by the actions of G which are not detected in \mathbb{C}[H]. The result, called the induced representation Ind(R) of H, in turn gives us back a left-action of H on \mathbb{C}[H] \otimes_{\mathbb{C}[G]} V. I’ll call this h_{\ast} R.

(Note that usually this name refers to the situation where G is a subgroup of H, but in fact this can be defined for any homomorphism.)
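As a sanity check on the tensor-product description, one can compute the dimension of \mathbb{C}[H] \otimes_{\mathbb{C}[G]} V numerically: impose the relations e_{x \cdot h(g)} \otimes v = e_x \otimes (g \cdot v) on \mathbb{C}[H] \otimes V and count the dimension of the quotient. The specific groups below are my own toy choice (not from the post): G = \mathbb{Z}/2, H = \mathbb{Z}/4, with h(g) = 2g and V the trivial representation, where the answer should be [H : \mathrm{im}(h)] \cdot \dim V = 2.

```python
import numpy as np

# Toy data (my choice): G = Z/2, H = Z/4 written additively, h(g) = 2g,
# V = the 1-dimensional trivial representation of G.
H_size, G_size = 4, 2
h = lambda g: (2 * g) % H_size    # the homomorphism Z/2 -> Z/4
dim_V = 1

# Basis of C[H] (x) V: pairs (x, i), flattened to index x*dim_V + i.
dim_total = H_size * dim_V

# Relations  e_{x + h(g)} (x) v  -  e_x (x) (g.v) = 0.  Since V is trivial,
# g.v = v, so each relation is just  e_{x + h(g)} - e_x.
relations = []
for g in range(1, G_size):        # g = 0 gives only trivial relations
    for x in range(H_size):
        row = np.zeros(dim_total)
        row[((x + h(g)) % H_size) * dim_V] += 1.0
        row[x * dim_V] -= 1.0
        relations.append(row)

rank = np.linalg.matrix_rank(np.array(relations))
induced_dim = dim_total - rank
print(induced_dim)   # 2, matching [H : im(h)] * dim V
```

For an inclusion of a subgroup this recovers the classical count \dim \mathrm{Ind}(R) = [H : G] \cdot \dim V; for a non-injective h, the relations also mod out by the kernel’s action, as the paragraph above describes.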

This tells us what to do for single-object groupoids. As we remarked earlier, if more than one object is sent to the same z \in Z, we should get a direct sum of all their contributions. So I want to describe the 2-linear map, which I’ll now call V(X) : V(Y) \rightarrow V(Z) which we get from the span above, thought of as X : Y \rightarrow Z in Span(\mathbf{Grpd}). Here V(X) = hom(X,\mathbf{Vect}) and V(Y) = hom(Y,\mathbf{Vect}) (where I’m now being more explicit that this whole process is a functor in some reasonable sense).

I have to say what V(X) does to a given 2-vector (what it does to morphisms between 2-vectors is straightforward to work out, since every operation we do is a tensor product or direct sum). Suppose F : Y \rightarrow \mathbf{Vect} is one. Then V(X)(F) = t_{\ast} s^{\ast} F = t_{\ast} (F \circ s) : Z \rightarrow \mathbf{Vect}. We can now say what this works out to. At some object z \in Z, we get (still assuming everything is skeletal for simplicity):

V(X)(F)(z) = \bigoplus_{t(x)=z} \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} F(s(x))

And this is a direct sum of a bunch of such expressions where F is a basis 2-vector – i.e. assigns an irreducible representation to some one object, and the trivial rep on the zero vector space to every other. That allows this to be written as a matrix with vector-space components, just like any 2-linear map.

So the 2-linear map V(X) has a matrix representation. The indices of the matrix are the simple objects in hom(Y,\mathbf{Vect}) and hom(Z,\mathbf{Vect}), which consist of a choice of (a) object in Y or Z (which we assume are skeletal – otherwise it’s a choice of isomorphism class), and (b) irreducible representation of the automorphism group of that object. Given a choice of index on each side, the corresponding coefficient in the matrix is a vector space. Namely the direct sum, over all the objects x \in X that restrict down to our chosen pair, of a bunch of terms like \mathbb{C}[Aut(z)] \otimes_{\mathbb{C}[Aut(x)]} \mathbb{C}. This is just a quotient space of the one group algebra by the image of the other.

Next up: a quick finisher about what happens at the 2-morphism level, then back to TQFT and gravity!

In the last couple of posts, I described how an extended TQFT gives a 2-vector space, with generators corresponding to particular states of matter, for each boundary of space (mostly talking about 1D boundaries of 2D space in 3D spacetime). I was starting to build up to talking about how cobordisms give rise to “spin network states” on space with given boundary conditions. Before I can do that, it’s probably helpful to talk about something a little more general. Since the general thing in question is something I’m developing a talk on to give in Iowa, this is helpful for me anyway.

A slightly more general thing has to do with spans of groupoids, and how to get 2-linear maps from them. A span in a category \mathbf{C} is a diagram like this:

B_1 \leftarrow S \rightarrow B_2

Now, as for spans, let me first give a couple of link-outs (the blathyspherian version of a shout-out) to a couple of guys named John… Given a category \mathbf{C} with pullbacks, there is a (bi)category \mathbf{Span(C)}, where spans are composed using pullbacks. John Armstrong recently posted about spans, describing \mathbf{Span(C)}, which has the same objects as \mathbf{C}, and morphisms which are spans in \mathbf{C}.

In fact, it also has 2-morphisms, which are span maps – given two spans with central objects X and Y, a span map is a map from X to Y which makes the resulting diagram commute. It turns out these make \mathbf{Span(C)} into a bicategory – one of the classic examples, in fact, which goes back to Jean Benabou’s “Introduction to Bicategories” (1967) in which the concept was introduced. However, one can ignore these, and just think of it as a category, by taking spans only up to isomorphism.

John Baez recently posted some slides for a talk about spans in quantum mechanics, which gives a nice overview of the context that makes this stuff relevant to this discussion of TQFT. A key concept is summarized in the abstract:

Many features of quantum theory — quantum teleportation, violations of Bell’s inequality, the no-cloning theorem and so on — become less puzzling when we realize that quantum processes more closely resemble pieces of spacetime than functions between sets.

And the point both of them make is that cobordisms can be seen as spans (actually, cospans, although a cospan in \mathbf{C} is by definition a span in \mathbf{C^{op}}). This is an important idea when thinking of TQFTs as functors, since \mathbf{nCob} and \mathbf{Vect} (or \mathbf{Hilb}) are symmetric monoidal categories with duals. A TQFT is a functor Z : \mathbf{nCob} \rightarrow \mathbf{Vect}, which respects exactly this structure. So it’s important that quantum processes are “like” these “pieces of spacetime”. And the reason “pieces of spacetime” (cobordisms) have these properties is that, any time you start off with a cartesian category with pullbacks, like \mathbf{Sets}, taking spans in it gives you a symmetric monoidal category with duals.

What we’re really talking about are properties of (a) spans, and (b) certain free functors. In particular, free functors taking sets to vector spaces, groupoids to 2-vector spaces, and (potentially) so on. Both of these have something to do with how to go from a cartesian category like \mathbf{Sets}, or \mathbf{Gpd} (really a 2-category), to a monoidal category with duals (“dagger compact”), like \mathbf{Vect}, or \mathbf{2Vect} (also a 2-category) – but also like Span(Set) or Span(Gpd)… I’ll describe what happens for sets, to keep things simple for this installment.

One example of going from a cartesian category to a dagger compact one is by the “free vector space” functor F, taking a set S to F(S), the free vector space on S, and set maps to linear maps that just permute basis elements. Another is the process of taking \mathbf{C} and building \mathbf{Span(C)}. The point is that these two can be related in a rather interesting way. In particular, there’s a functor

F : \mathbf{Span(Sets)} \rightarrow \mathbf{Vect}

which acts on the objects of \mathbf{Span(Sets)} (which are sets) just like the free-vector-space functor. That is, given a set S, it gives \mathbb{C}^S, the space of functions from S into \mathbb{C}. (For simplicity, I’ll assume all my sets are finite).

But it does something rather special on morphisms in \mathbf{Span(Sets)}. These are spans of sets, and therefore they have two maps in them. If we think of the span S \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} T as a morphism X : S \rightarrow T in \mathbf{Span(Sets)}, then the two arrows in the span are distinguished as first a “backwards” arrow, then a “forwards” arrow. The point is to take a vector in F(S) – a complex-valued function on S – through the span.

So the question is, if I have a complex-valued function f : S \rightarrow \mathbb{C}, how do I get a complex-valued function on T? Well, first, of course, I have to get one on X. Since I have a function s : X \rightarrow S, the obvious candidate is s^{\ast}f := f \circ s : X \rightarrow \mathbb{C}. Each element of X just gets the same complex number as its image down in S. That’s easy: we’ve “pulled back” the function f along s.

Now we have to transport this function down to T, which is a little less obvious. A given object in T may have several different objects in X which map down to it, and no reason why they should all have the same function value under s^{\ast}f. What can we do with a bunch of complex numbers? The two things which are most obvious are: add them up, or multiply them. The one we pick is to add them up (it may help to remember that the preimage of some object in T is the union, or coproduct, of a bunch of elements – and coproducts are like sums, just as products are like… well… products). The result is that we’ve “pushed forward” the function s^{\ast}f along t, and the result is called t_{\ast}s^{\ast}f.

How do I know the process of taking a function f – that is, a vector in F(S) – and finding the vector t_{\ast}s^{\ast}f in F(T) is a linear map? Well, it’s not too hard to check that it’s represented by a matrix, and the summation over the preimage of an object in T is the sum in the matrix multiplication. (Go ahead!) This works out very nicely because \mathbf{Set} is cartesian, so any span between S and T factors through the product S \times T. In fact, X corresponds to an integer matrix, whose (i,j) component is the number of elements of X that project down to both i \in S and j \in T. (To get a general matrix, you’d have to give labels to the elements of X, which is something I talk about in this paper – the thing I like about which is that it gives lots of pictures which make “matrix mechanics” seem pretty natural – to me, anyway.)
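The pull-back-then-push-forward recipe, and the fact that it agrees with multiplication by the integer matrix, can be checked directly on a small example. The specific sets and legs below are my own toy data – a span with S of size 2, T of size 3, and a 5-element X:

```python
import numpy as np

# A span of finite sets S <- X -> T.  Element k of X maps to s[k] in S
# and t[k] in T.  (Sets are {0, ..., n-1}; the legs are just lists.)
S_size, T_size = 2, 3
s = [0, 0, 1, 1, 1]   # leg X -> S
t = [0, 2, 2, 2, 1]   # leg X -> T

def transport(f):
    """Pull back f along s, then push forward along t by summing over preimages."""
    g = np.zeros(T_size)
    for k in range(len(s)):
        g[t[k]] += f[s[k]]
    return g

# The same map as an integer matrix: entry (j, i) counts elements of X
# sitting over the pair (i, j) in S x T.
M = np.zeros((T_size, S_size))
for k in range(len(s)):
    M[t[k], s[k]] += 1

f = np.array([1.0, 10.0])
print(transport(f))   # [ 1. 10. 21.]
print(M @ f)          # the same vector, via the matrix
```

Composing two such spans by fibred product then corresponds to multiplying the matrices, which is the real content of the claim that this assignment is functorial.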

It turns out this gives you a functor which represents \mathbf{Span(Sets)} inside \mathbf{Vect}. In fact, to really get the bigger picture, you should substitute \mathbf{Gpd} for \mathbf{Sets} in everything I’ve said here, and \mathbf{Vect} for \mathbb{C}. I’ll say something about that in the next installment – but “morally speaking” it’s much the same as what I’ve talked about here.
