### C*-algebras

So there’s a lot of preparation going on for the workshop HGTQGR coming up next week at IST, and the program(me) is much more developed – many of the talks are now listed, though the schedule has yet to be finalized.  This week we’ll be having a “pre-school school” to introduce the local mathematicians to some of the physics viewpoints that will be discussed at the workshop – Aleksandar Mikovic will be introducing Quantum Gravity (from the point of view of the loop/spin-foam approach), and Sebastian Guttenberg will be giving a mathematician’s introduction to String theory.

These are by no means the only approaches physicists have taken to the problem of finding a theory that incorporates both General Relativity and Quantum Field Theory.  They are, however, two approaches where lots of work has been done, and which appear to be amenable to the mathematical tools of (higher) category theory which we’re going to be talking about at the workshop.  Chief among these tools are “higher gauge theory”, which very roughly is the analog of gauge theory (the framework that encompasses both GR and QFT) using categorical groups, and TQFT, a very simple type of quantum field theory which has a natural description in terms of categories, and which can be generalized to higher categories.

I’ll probably take a few posts after the workshop to write up these, and the many other talks and mini-courses we’ll be having, but right now, I’d like to say a little bit about another talk we had here recently.  Actually, the talk was in Porto, but several of us at IST in Lisbon attended by videoconference.  This was the first time I’ve seen this done for a colloquium-style talk, though I did once take a course in General Relativity from Eric Poisson that was split between U of Waterloo and U of Guelph.  I thought it was a great idea then, and it worked quite well this time, too.  This is the way of the future – and unfortunately it probably will be for some time to come…

Anyway, the talk in question was by Tomasz Brzezinski, about “Synthetic Non-Commutative Geometry” (link points to the slides).  The point here is to take two different approaches to extending differential geometry (DG) and combine their insights.  The “Synthetic” part refers to synthetic differential geometry (SDG), which is a program for doing DG in a general topos.  One aspect of this is that in a topos where the Law of the Excluded Middle doesn’t apply, it’s possible for the real-numbers object to have infinitesimals: that is, elements which are smaller than any positive element, but bigger than zero.  This lets one take things which have to be treated in a roundabout way in ordinary DG, like $dx$, and take them at face value – as an infinitesimal change in $x$.  It also means doing geometry in a completely constructive way.

However, these aspects aren’t so important here.  The important fact is that SDG takes a theory originally defined in terms of sets, or topological spaces – that is, in the categories $Sets$ or $Top$ – and transplants it to another category.  This matters because Brzezinski’s goal was to do something similar for a different extension of DG, namely non-commutative geometry (NCG).  This is a generalisation of DG based on the equivalence $CommAlg^{op} \simeq lCptHaus$ between the category of commutative $C^{\star}$-algebras (and algebra maps, read “backward” as morphisms in $CommAlg^{op}$) and that of locally compact Hausdorff spaces.  On objects, the equivalence matches a space $X$ with the algebra $C_0(X)$ of continuous functions vanishing at infinity, and an algebra $A$ with its spectrum $Spec(A)$, the space of maximal ideals.  The generalization made in NCG is to take the structures on $lCptHaus$ used to build DG, and make similar definitions in the category $Alg^{op}$, of not-necessarily-commutative $C^{\star}$-algebras.

This category is the one which plays the role that the category $Top$ played above.  It isn’t a topos, though: it’s some sort of monoidal category.  And this is what “synthetic NCG” is about: taking the definitions used in NCG and reproducing them in a generic monoidal category (to be precise, a braided monoidal category).

The way he illustrated this is by explaining what a principal bundle would be in such a generic category.

To begin with, we can start by giving a slightly nonstandard definition of the concept in ordinary DG: a principal $G$-bundle $P$ is a manifold with a free action of a (compact Lie) group $G$ on it.  The point is that this always looks like a “base space” manifold $B$, with a projection $\pi : P \rightarrow B$ so that the fibre at each point of $B$ looks like $G$.  This amounts to saying that $\pi$ is a coequalizer:

$P \times G \rightrightarrows P \stackrel{\pi}{\rightarrow} B$

where the maps from $P \times G$ to $P$ are (a) the action, and (b) the projection onto $P$.  (Being a coequalizer means that $\pi$ makes this diagram commute – has the same composite with both maps – and any other map $\phi$ that does the same factors uniquely through $\pi$.)  Another equivalent way to say this is that since $P \times G$ has two maps into $P$, it has a map into the pullback $P \times_B P$ (the pullback of two copies of $P \stackrel{\pi}{\rightarrow} B$), and the claim is that this map is actually an isomorphism.

The main points here are that (a) we take this definition in terms of diagrams and abstract it out of the category $Top$, and (b) when we do so, in general the products will be tensor products.

In particular, this means we need a general definition of a group object $G$ in any braided monoidal category (to know what $G$ is supposed to be like).  We reproduce the usual definition of a group object: $G$ must come equipped with a “multiplication” map $m : G \otimes G \rightarrow G$, an “inverse” map $\iota : G \rightarrow G$ and a “unit” map $u : I \rightarrow G$, where $I$ is the monoidal unit (which takes the role played by the terminal object in a category like $Top$, namely the unit for $\times$).  These need to satisfy the usual properties, such as the monoid property (associativity) for multiplication:

$m \circ (m \otimes id_G) = m \circ (id_G \otimes m) : G \otimes G \otimes G \rightarrow G$

(usually given as a diagram, but I’m being lazy).

The big “however” is this: in $Sets$ or $Top$, any object $X$ is always a comonoid in a canonical way, and we use this implicitly in defining some of the properties we need.  In particular, there’s always the diagonal map $\Delta : X \rightarrow X \times X$ which satisfies the dual of the monoid property:

$(id_X \times \Delta) \circ \Delta = (\Delta \times id_X) \circ \Delta$

There’s also a unique counit $\epsilon : X \rightarrow \star$, the map into the terminal object, which makes $(X,\Delta,\epsilon)$ a counital comonoid automatically.  But in a general braided monoidal category, we have to impose as a condition that our group object also be equipped with maps $\Delta : G \rightarrow G \otimes G$ and $\epsilon : G \rightarrow I$ making it a counital comonoid.  We need this property to even be able to make sense of the inverse axiom (which this time I’ll do as a diagram):
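In equational form, one standard way of writing the inverse axiom (a reconstruction on my part – the diagram itself may be drawn somewhat differently) is the antipode condition familiar from Hopf algebras:

$m \circ (\iota \otimes id_G) \circ \Delta = u \circ \epsilon = m \circ (id_G \otimes \iota) \circ \Delta : G \rightarrow G$

In $Sets$ this says: duplicate an element, invert one copy, and multiply – the result is the unit, no matter what you started with.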

This diagram uses not only $\Delta$ but also the braiding map $\sigma_{G,G} : G \otimes G \rightarrow G \otimes G$ (part of the structure of the braided monoidal category, which in $Top$ or $Sets$ is just the “switch” map).  Now, since any object in $Sets$ or $Top$ is automatically a comonoid, we’ll require that this structure be given for anything we look at: the analogs of spaces (like $P$ above), and our group object $G$.  For the group object, we must also, in general, require something which comes for free in the topos world and therefore generally isn’t mentioned in the definition of a group: namely, the comonoid and monoid aspects of $G$ must get along.  (This comes for free in $Sets$ or $Top$ essentially because the comonoid structure is given canonically for all objects.)  This means:
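In equational form, this compatibility condition is the bialgebra axiom:

$\Delta \circ m = (m \otimes m) \circ (id_G \otimes \sigma_{G,G} \otimes id_G) \circ (\Delta \otimes \Delta) : G \otimes G \rightarrow G \otimes G$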

For a group in $Sets$ or $Top$, this essentially just says that the two ways we can go from $(x,y)$ to $(xy,xy)$ (duplicate, swap, then multiply, or on the other hand multiply then duplicate) are the same.

All these considerations about how honest-to-goodness groups are secretly also comonoids do explain why corresponding structures in noncommutative geometry seem to have more elaborate definitions: they have to say explicitly things that come for free in a topos.  So, for instance, a group object in the above sense in the braided monoidal category $Vect = (Vect_{\mathbb{F}}, \otimes_{\mathbb{F}}, \mathbb{F}, flip)$ is a Hopf algebra.  This is a nice canonical choice of category.  Another is the opposite category $Vect^{op}$ – a standard choice in NCG, since spaces are supposed to be algebras – in which a comonoid in our sense is just an ordinary algebra, so the objects come equipped with the comonoid structure we demanded.

So now, once we know all this, we can reproduce the diagrammatic definition of a principal $G$-bundle above: just replace the product $\times$ with the monoidal operation $\otimes$, the terminal object by $I$, and so forth.  The diagrams are understood to be diagrams of comonoids in our braided monoidal category.  In particular, we have an action $\rho : P \otimes G \rightarrow P$, which is compatible with the $\Delta$ maps – so in $Vect$ we would say that a noncommutative principal $G$-bundle $P$ is a right-module coalgebra over the Hopf algebra $G$.  We can likewise take this (in a suitably abstract sense of “algebra” or “module”) to be the definition in any braided monoidal category.

To get the analogue of the “base space” $B$, there needs to be a coequalizer of:

$\rho, (id_P \otimes \epsilon) : P \otimes G \rightrightarrows P \stackrel{\pi}{\rightarrow} B$

The “freeness” condition for the action is then, just as before, the statement that the canonical map from $P \otimes G$ to the monoidal-category version of the pullback (fibre product) $P \times_B P$ is an isomorphism.

This was as far as Brzezinski took the idea of synthetic NCG in this particular talk, but the basic idea seems quite nice.  In SDG, one can define all sorts of differential geometric structures synthetically, that is, for a general topos: for example, Gonzalo Reyes has gone and defined the Einstein field equations synthetically.  Presumably, a lot of what’s done in NCG could also be done in this synthetic framework, and transplanted to other categories than the usual choices.

Brzezinski said he was mainly interested in the “usual” choices of category, $Vect$ and $Vect^{op}$ – so for instance in $Vect^{op}$, a “principal $G$-bundle” is what’s called a Hopf-Galois extension.  Roger Picken did, however, ask an interesting question about other possible candidates for the category to work in.  Given that one wants a braided monoidal category, a natural one to look at is the category whose morphisms are braids.  This one, as a matter of fact, isn’t quite enough (there’s no braid $m : n \otimes n \rightarrow n$, since this would be a braid with $2n$ strands coming in and only $n$ going out – which is impossible).  But some sort of category of tangles might make an interestingly abstract setting in which to see what NCG looks like.  So far, this doesn’t seem to have been done, as far as I can see.

As I mentioned in my previous post, I’ve recently started a new postdoc at IST – the Instituto Superior Tecnico in Lisbon, Portugal.  Making the move from North America to Europe with my family was a lot of work – both before and after the move – involving lots of paperwork and shifting of heavy objects.  But Lisbon is a good city, with lots of interesting things to do, and the maths department at IST is very large, with about a hundred faculty.  Among those are quite a few people doing things that interest me.

The group that I am actually part of is coordinated by Roger Picken, and has a focus on things related to Topological Quantum Field Theory.  There are a couple of postdocs and some graduate students here associated to some degree with the group, as well as Aleksandar Mikovic and Joao Faria Martins at institutions other than IST.  In the coming months there should be some activity going on in this group which I will get to talk about here, including a workshop which is still in development, so I’ll hold off on that until there’s an official announcement.

## Quantales

I’ve also had a chance to talk a bit with Pedro Resende, mostly on the subject of quantales.  This is something that I got interested in while at UWO, where there is a large contingent of people interested in category theory (mostly from the point of view of homotopy theory) as well as a good group in noncommutative geometry.  Quantales were originally introduced by Chris Mulvey – I’ve been looking recently at a few papers in which he gives a nice account of the subject – here, here, and here.
The idea emerged, in part, as a way of combining two different approaches to generalising the idea of a space.  One is the approach from topos theory, and more specifically, the generalisation of topological spaces to locales.  This direction also has connections to logic – a topos is a good setting for logic which is intuitionistic, but nevertheless still “classical” rather than quantum, whereas quantales give an approach to quantum logics in a similar spirit.

The other direction in which they generalize space is the $C^{\star}$-algebra approach used in noncommutative geometry.  One motivation of quantales is to say that they simultaneously incorporate the generalizations made in both of these directions – so that both locales and $C^{\star}$-algebras will give examples.  In particular, a quantale is a kind of lattice, intended to have the same sort of relation to a noncommutative space as a locale has to an ordinary topological space.  So to begin, I’ll look at locales.

A locale is a lattice which formally resembles the lattice of open sets of a topological space.  A lattice is a partial order with operations $\bigwedge$ (“meet”) and $\bigvee$ (“join”).  These operations take the role of the intersection and union of open sets.  So to say it formally resembles a lattice of open sets means that the lattice is closed under finite meets and arbitrary joins, and satisfies the distributive law:

$U \bigwedge (\bigvee_i V_i) =\bigvee_i (U \bigwedge V_i)$
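Just to make this concrete, here’s a toy check of my own (not from Mulvey’s papers): the powerset of a finite set is the lattice of open sets for the discrete topology, and one can verify the distributive law for it by brute force.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets: the open sets of the discrete topology."""
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

opens = powerset({1, 2, 3})   # the "lattice of open sets": meet = ∩, join = ∪

def join(family):
    """Join of a family of opens: their union."""
    out = frozenset()
    for V in family:
        out = out | V
    return out

# Check U ∧ (∨_i V_i) = ∨_i (U ∧ V_i) for every U and every pair of opens.
for U in opens:
    for V1 in opens:
        for V2 in opens:
            assert U & join([V1, V2]) == join([U & V1, U & V2])
```

Of course, checking only pairs doesn’t literally exhaust the “arbitrary joins” in the axiom, but in a finite lattice every join reduces to finitely many binary ones.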

Lattices like this can be called either “frames” or “locales” – the only difference between these two categories is the direction of the arrows.  A map of frames is a function that preserves the structure – the order, finite meets, and arbitrary joins.  Such a frame morphism is also a morphism of locales, in the opposite direction.  That is, $\mathbf{Frm} = \mathbf{Loc}^{op}$.

Such an object is also a (complete) “Heyting algebra”.  One of the great things about topos theory (of which this is a tiny starting point) is that it unifies topology and logic.  So, the “internal logic” of a topos has a Heyting algebra (i.e. a locale) of truth values, where the meet and join take the place of the logical operators “and” and “or”.  The usual two-valued logic corresponds to the terminal object in $\mathbf{Loc}$ (equivalently, the initial frame), so while it is special, it isn’t unique.  One vital fact here is that any topological space (via its lattice of open sets) produces a locale, and the locale is enough to identify the space – so $\mathbf{Top} \rightarrow \mathbf{Loc}$ is an embedding.  (For convenience, I’m eliding the fact that the spaces have to be “sober” – any Hausdorff space, for example, is sober.)  In terms of logic, we could imagine that the space is a “state space”, and the truth values in the logic identify for which states a given proposition is true.  There’s nothing particularly exotic about this: “it is raining” is a statement whose truth is local, in that it depends on where and when you happen to look.

To see locales as a generalisation of spaces, it helps to note that the embedding above is full – if $A$ and $B$ are locales that come from topological spaces, there are no extra morphisms in $\mathbf{Loc}(A,B)$ that don’t come from continuous maps in $\mathbf{Top}(A,B)$.  So the category of locales makes the category of topological spaces bigger only by adding more objects – not inventing new morphisms.  The analogous noncommutative statement turns out not to be true for quantales, which is a little red-flag warning which Pedro Resende pointed out to me.

What would this statement be?  Well, the noncommutative analogue of the idea of a topological space comes from another embedding of categories.  To start with, there is an equivalence $\mathbf{LCptHaus}^{op} \simeq \mathbf{CommC}^{\star}\mathbf{Alg}$: the category of locally compact, Hausdorff, topological spaces is (up to equivalence) the opposite of the category of commutative $C^{\star}$-algebras.  So one simply takes the larger category of all $C^{\star}$-algebras (or rather, its opposite) as the category of “noncommutative spaces”, which includes the commutative ones – the original locally compact Hausdorff spaces.  The correspondence between an algebra and a space is given by taking the algebra of functions on the space.

So what is a quantale?  It’s a lattice which is formally similar to the lattice of subspaces in some $C^{\star}$-algebra.  Special elements – “right”, “left,” or “two-sided” elements – then resemble those subspaces that happen to be ideals.  Some intuition comes from thinking about where the two generalizations coincide – a (locally compact) topological space.  There is a lattice of open sets, of course.  In the algebra of continuous functions, each open set $O$ determines an ideal – namely, the subspace of functions which vanish outside $O$.  The norm-closed ideals are exactly the ones which arise from open sets in this way (a uniform limit of functions vanishing outside $O$ again vanishes outside $O$; for sets which aren’t open, this correspondence breaks down).

So the definition of a quantale looks much like that of a locale, except that the meet operation $\bigwedge$ is replaced by an associative product, usually called $\&$, which is required to distribute over arbitrary joins.  Note that unlike the meet, this product isn’t assumed to be commutative – this is the point where the generalization happens.  So in particular, any locale gives a quantale with $\& = \bigwedge$.  So does any $C^{\star}$-algebra, in the form of its lattice of closed subspaces (with $\&$ the closed span of the product of subspaces).  But there are others which don’t show up in either of these two ways, so one might hope to say this is a nice all-encompassing generalisation of the idea of space.
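A standard example of a genuinely noncommutative product (my own illustration here, not from the conversation): binary relations on a set form a quantale, with union as the join and relational composition as $\&$.

```python
def compose(R, S):
    """Relational composition R & S: x (R&S) z iff x R y and y S z for some y."""
    return frozenset((x, z) for (x, y1) in R for (y2, z) in S if y1 == y2)

# Two relations on the set {0, 1}:
R = frozenset({(0, 1)})   # "0 is related to 1"
S = frozenset({(1, 0)})   # "1 is related to 0"
T = frozenset({(1, 1)})   # "1 is related to 1"

# The product & (composition) is noncommutative...
assert compose(R, S) == frozenset({(0, 0)})
assert compose(S, R) == frozenset({(1, 1)})
assert compose(R, S) != compose(S, R)

# ...but it distributes over the join (union), as a quantale product must:
assert compose(R, S | T) == compose(R, S) | compose(R, T)
```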

Now, as I said, there was a bit of a warning that comes attached to this hope.  This is that, although there is an embedding of the category of $C^{\star}$-algebras into the category of quantales, it isn’t full.  That is, not only does one get new objects, one gets new morphisms between old objects.  So, given algebras $A$ and $B$, which we think of as noncommutative spaces, and a map of algebras between them, we get a morphism between the associated quantales – lattice maps that preserve the operations.  However, unlike what happened with locales, there are quantale morphisms that don’t correspond to algebra maps.  Even worse, this is still true even in the case where the algebras are commutative, and just come from locally compact Hausdorff spaces: the associated quantales still may have extra morphisms that don’t come from continuous functions.

There seem to be three possible attitudes to this situation.  First, maybe this is just the wrong approach to generalising spaces altogether, and the hints in its favour are simply misleading.  Second, maybe quantales are absolutely the right generalisation of space, and these new morphisms are telling us something profound and interesting.  The third attitude, which Pedro mentioned when pointing out this problem to me, seems most likely, and goes as follows.  There is something special that happens with $C^{\star}$-algebras, where the analytic structure of the norm makes the algebras more rigid than one might expect.  In algebraic geometry, one can take a space (algebraic variety or scheme) and consider its algebra of global functions.  To make sure that an algebra map corresponds to a map of schemes, though, one really needs to make sure that it actually respects the whole structure sheaf for the space – which describes local functions.  When passing from a topological space to a $C^{\star}$-algebra, there is a norm structure that comes into play, which is rigid enough that all algebra morphisms automatically do this – as I said above, the ideal structure of the algebra tells you all about the open sets.  So the third option is to say that a quantale by itself doesn’t quite have enough information, and one needs some extra data, something like the structure sheaf of a scheme.  This would then pick out which are the “good” morphisms between two quantales – namely, the ones that preserve this extra data.  What, precisely, this data ought to be isn’t so clear, though, at least to me.

So there are some complications to treating a quantale as a space.  One further point, which may or may not go anywhere, is that this type of lattice doesn’t get along with quantum logic in quite the same way that locales get along with (intuitionistic) classical logic (though it does have connections to linear logic).

In particular, a quantale is a distributive lattice (though taking the product, rather than $\bigwedge$, as the thing which distributes over $\bigvee$), whereas the “propositional lattice” in quantum logic need not be distributive.  One can understand the failure of distributivity in terms of the uncertainty principle.  Take a statement such as “particle $X$ has momentum $p$, and is either on the left or right of this barrier”.  Since position and momentum are conjugate variables, and the momentum has been determined completely, the position is completely uncertain, so we can’t truthfully say either “particle $X$ has momentum $p$ and is on the left” or “particle $X$ has momentum $p$ and is on the right”.  Thus the disjunction of these two statements isn’t true, even though the distributive law would make it equivalent to the original statement: “P and (Q or S) = (P and Q) or (P and S)”.

The lack of distributivity shows up in a standard example of a quantum logic, where the (truth values of) propositions denote subspaces of a vector space $V$.  “And” (the meet operation $\bigwedge$) denotes the intersection of subspaces, while “or” (the join operation $\bigvee$) is the direct sum $\oplus$.  Consider two distinct lines through the origin of $V$ – any other line in the plane they span has trivial intersection with each of them, but lies entirely in the direct sum.  So the lattice of subspaces is non-distributive.  The lattice for a quantum logic should moreover be orthocomplemented, which happens when $V$ has an inner product – so for any subspace $W$, there is an orthogonal complement $W^{\bot}$.
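The same failure already appears in the smallest possible example – the subspaces of a two-dimensional vector space over the two-element field (a finite stand-in for the picture above; the code is my own sketch, not anything from the conversation):

```python
from itertools import product

def span(gens):
    """The subspace of F_2^2 spanned by gens: all F_2-linear combinations."""
    out = {(0, 0)}
    for coeffs in product((0, 1), repeat=len(gens)):
        v = (0, 0)
        for c, g in zip(coeffs, gens):
            if c:
                v = ((v[0] + g[0]) % 2, (v[1] + g[1]) % 2)
        out.add(v)
    return frozenset(out)

def meet(A, B):
    return A & B                     # intersection of subspaces

def join(A, B):
    return span(list(A) + list(B))   # smallest subspace containing both

L1 = span([(1, 0)])   # the "x-axis"
L2 = span([(0, 1)])   # the "y-axis"
L3 = span([(1, 1)])   # the diagonal line

# L3 ∧ (L1 ∨ L2) is all of L3, but (L3 ∧ L1) ∨ (L3 ∧ L2) is only {0}:
assert meet(L3, join(L1, L2)) == L3
assert join(meet(L3, L1), meet(L3, L2)) == span([])
assert meet(L3, join(L1, L2)) != join(meet(L3, L1), meet(L3, L2))
```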

Quantum logics are not very good from a logician’s point of view, though – lacking distributivity, they also lack a sensible notion of implication, and hence there’s no good idea of a proof system.  Non-distributive lattices are fine (I just gave an example), and very much in keeping with the quantum-theoretic strategy of replacing configuration spaces with Hilbert spaces, and subsets with subspaces… but viewing them as logics is troublesome, so maybe that’s the source of the problem.

Now, in a quantale, there may be a “meet” operation, separate from the product, which is non-distributive, but if the product is taken to be the analog of “and”, then the corresponding logic is something different.  In fact, the natural form of logic related to quantales is linear logic. This is also considered relevant to quantum mechanics and quantum computation, and as a logic is much more tractable.  The internal semantics of certain monoidal categories – namely, star-autonomous ones (which have a nice notion of dual) – can be described in terms of linear logic (a fairly extensive explanation is found in this paper by Paul-André Melliès).

Part of the point in the connection seems to be resource-limitedness: in linear logic, one can only use a “resource” (which, in standard logic, might be a truth value, but in computation could be the state of some memory register) a limited number of times – often just once.  This seems to be related to the noncommutativity of $\&$ in a quantale.  The way Pedro Resende described this to me is in terms of observations of a system.  In the ordinary (commutative) logic of a locale, you can form statements such as “A is true, AND B is true, AND C is true” – whose truth value is locally defined.  In a quantale, the product operation allows you to say something like “I observed A, AND THEN observed B, AND THEN observed C”.  Even leaving aside quantum physics, it’s not hard to imagine that in a system which you observe by interacting with it, statements like this will be order-dependent.  I still don’t quite see exactly how these two frameworks are related, though.

On the other hand, the kind of orthocomplemented lattice that is formed by the subspaces of a Hilbert space CAN be recovered in (at least some) quantale settings.  Pedro gave me a nice example: take a Hilbert space $H$, and the collection of all projection operators on it, $P(H)$.  This is one of those orthocomplemented lattices again, since projections and subspaces are closely related.  There’s a quantale that can be formed out of its endomorphisms, $End(P(H))$, where the product is composition.  In any quantale, one can talk about the “right” elements (and the “left” elements, and the “two-sided” elements), by analogy with right/left/two-sided ideals – these are elements for which taking the product with the maximal element $1$ gives something less than or equal to what you started with: $a \& 1 \leq a$ means $a$ is a right element.  The right elements of the quantale I just mentioned happen to form a lattice which is isomorphic to $P(H)$.

So in this case, the quantale, with its connections to linear logic, also has a sublattice which can be described in terms of quantum logic.  This is a more complicated situation than the relation between locales and intuitionistic logic, but maybe this is the best sort of connection one can expect here.

In short, both in terms of logic and spaces, hoping quantales will be “just” a noncommutative variation on locales seems to set one up to be disappointed as things turn out to be more complex.  On the other hand, this complexity may be revealing something interesting.

Coming soon: summaries of some talks I’ve attended here recently, including Ivan Smith on 3-manifolds, symplectic geometry, and Floer cohomology.

It’s the last week of classes here at UWO, and things have been wrapping up. There have also been a whole series of interesting talks, as both Doug Ravenel and Paul Baum have been visiting members of the department. Doug Ravenel gave a colloquium explaining work by himself and his collaborators Mike Hopkins and Mike Hill, solving the “Kervaire Invariant One” problem – basically, showing that certain kinds of framed manifolds – and, closely related, certain kinds of maps between spectra – don’t exist (namely, those where the Kervaire invariant is nonzero). This was an interesting and very engaging talk, but as a colloquium it necessarily had to skip past some of the subtleties of stable homotopy theory involved, and since my understanding of this subject is limited, I don’t really know if I could do it justice.

In any case, I have my work cut out for me with what I am going to try to do (taking the blame, BTW, for any mistakes or imprecisions I introduce here, since I may not be able to do this justice either). This is to discuss the first two of four talks which Paul Baum gave here last week, starting with an introduction to K-theory, and ending up with some discussion of the Baum-Connes Conjecture. This is a famous conjecture in noncommutative geometry which Baum and Alain Connes proposed in 1982 (and which Baum now seems to be fairly convinced is probably not true, though nobody knows a counterexample at the moment).

It’s a statement about (locally compact, Hausdorff, topological) groups $G$; it relates the K-theory of a $C^{\star}$-algebra associated to $G$ to the equivariant K-homology of a space associated to $G$ (in fact, it asserts that a certain map $\mu$, which always exists, is furthermore always an isomorphism). It implies a great many things about any case where it IS true, which includes a good many cases, such as when $G$ is commutative, or a compact Lie group. But to backtrack, we need to define those terms:

## K-Theory

The basic point of K-theory, which like a great many things began with Alexandre Grothendieck, is that it defines some invariants – which happen to be abelian groups – for various entities. There is a topological and an algebraic version, so the “entities” in question are, in the first case, topological spaces, and in the second, algebras (and more classically, algebraic varieties). Part of Paul Baum’s point in his talk was to describe the underlying unity of these two – essentially, both correspond to particular kinds of algebras. Taking this point of view has the added advantage that it lets you generalize K-theory to “noncommutative spaces” quite easily. That is: the category of locally compact Hausdorff spaces is equivalent to the opposite of the category of commutative $C^{\star}$-algebras – so taking the opposite of the category of ALL $C^{\star}$-algebras gives a noncommutative generalization of “space”. Defining K-theory in terms of $C^{\star}$-algebras extends the invariant to this new sort of space, and also somewhat unifies topological and algebraic K-theory.

Classically, anyway, the definition (due in this form to Grothendieck, and adapted to the topological case by Atiyah and Hirzebruch) gives an abelian group from a (topological or algebraic) space $X$, using the category of (respectively, topological or algebraic) vector bundles over $X$. The point is, from this category one naturally gets a set of isomorphism classes of bundles, with a commutative addition (namely, direct sum) – this is an abelian semigroup. One can turn any abelian semigroup $J$ (with or without zero) into an abelian group by taking pairs – $J \oplus J$ – and taking the quotient by the relation $(x,y) \sim (x',y')$ which holds when there is some $z \in J$ with $x + y' + z = x' + y + z$. This is like taking “formal differences” (and any $(x,x)$ becomes zero, even if there was no zero originally). In fact, it does a little more: if $x$ and $x'$ are not equal, but become equal upon adding some $z$, their images in the group are forced to be equal (so an equivalence relation is being imposed on bundles, as well as allowing formal inverses).
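This group-completion, including the collapsing just mentioned, is easy to play with for finite semigroups. Here’s a sketch of my own (the `grothendieck_classes` function is hypothetical bookkeeping, not standard library code):

```python
def grothendieck_classes(elements, op):
    """Equivalence classes of pairs (x, y) under
    (x, y) ~ (x', y')  iff  there is z with x + y' + z == x' + y + z,
    i.e. the elements of the Grothendieck group of the semigroup (elements, op)."""
    def equiv(p, q):
        (x, y), (xp, yp) = p, q
        return any(op(op(x, yp), z) == op(op(xp, y), z) for z in elements)
    pairs = [(x, y) for x in elements for y in elements]
    classes = []
    for p in pairs:
        for cls in classes:
            if equiv(p, cls[0]):   # ~ is an equivalence relation, so one test suffices
                cls.append(p)
                break
        else:
            classes.append([p])
    return classes

# (Z/4, +) is cancellative (in fact a group), so completion changes nothing: 4 classes.
assert len(grothendieck_classes(range(4), lambda a, b: (a + b) % 4)) == 4

# ({0,...,3}, max) is a semigroup where z = 3 equalizes every pair,
# so everything collapses and the completion is the trivial group: 1 class.
assert len(grothendieck_classes(range(4), max)) == 1
```

The second example shows the “little more” in action: no two elements of $(\{0,\dots,3\}, \max)$ are cancellable, so every formal difference gets identified.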

In fact, a definition equivalent to Atiyah and Hirzebruch’s (in terms of bundles) can be given in terms of the coordinate ring of a variety $X$, or ring of continuous complex-valued functions on a (compact, Hausdorff) topological space $X$. Given a ring $\Lambda$, one defines $J(\Lambda)$ to be the abelian semigroup of all idempotents (i.e. projections) in the rings of matrices $M_n(\Lambda)$ up to STABLE similarity. Two idempotent matrices $\alpha$ and $\beta$ are equivalent if they become similar – that is, conjugate matrices – possibly after adjoining some zeros by the direct sum $\oplus$. (In particular, this means we needn’t assume $\alpha$ and $\beta$ were the same size). Then $K_0^{alg}(\Lambda)$ comes from this $J(\Lambda)$ by the completion to a group as just described.
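Over a field, an idempotent matrix is similar to a diagonal matrix of 0s and 1s, so its class under stable similarity is determined by its rank alone. A quick check of this bookkeeping (my own sketch, not from the talk):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows), by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def pad(M, n):
    """Adjoin zeros: M ⊕ 0, as an n-by-n matrix."""
    k = len(M)
    return [[M[i][j] if i < k and j < k else 0 for j in range(n)] for i in range(n)]

p = [[1]]                  # a 1x1 idempotent
q = [[1, 1], [0, 0]]       # a 2x2 idempotent: q·q == q
assert matmul(q, q) == q

# After padding p to size 2, both are rank-1 idempotents of the same size, hence
# similar -- so p and q are "stably similar" and give the same class in J.
assert rank(pad(p, 2)) == rank(q) == 1
```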

A class of idempotents (projections) in a matrix algebra over $\mathbb{C}$ is characterized by the image, up to similarity (so, really, the dimension). Since these are matrices over a ring of functions on a space, we’re then secretly talking about vector bundles over that space. However, defining things in terms of the ring $\Lambda$ is what allows the generalization to noncommutative spaces (where there is no literal space, and the “coordinate ring” is no longer commutative, but this construction still makes sense).

Now, there’s quite a bit more to say about this – it was originally used to prove the Hirzebruch-Riemann-Roch theorem, which for nice projective varieties $M$ defines an invariant from the alternating sum of dimensions of some sheaf-cohomology groups – roughly, cohomology where we look at sections of the aforementioned vector bundles over $M$ rather than functions on $M$. The point is that the actual cohomology dimensions depend sensitively on how you turn an underlying topological space into an algebraic variety, but the HRR invariant doesn’t. Paul Baum also talked a bit about some work by J.F. Adams using K-theory to prove some results about vector fields on spheres.

For the Baum-Connes conjecture, we’re looking at the K-theory of a certain $C^{\star}$-algebra. In general, given such an algebra $A$, the (level-$j$) K-theory $K_j(A)$ can be defined to be the $(j-1)$-st homotopy group of $GL(A)$ – the direct limit of all the finite matrix groups $GL_n(A)$, which form a chain of inclusions $GL_n(A) \rightarrow GL_{n+1}(A)$ given by direct sum with the 1-by-1 identity. This looks a little different from the algebraic case above, but they are closely connected – in particular, under this definition $K_0(A)$ is just the same as $K^{alg}_0(A)$ as defined above (so the norm and involution on $A$ can be ignored for the level-0 K-theory of a $C^{\star}$-algebra, though not for level-1).

You might also notice this appears to define $K_0(A)$ in terms of a negative-one-dimensional homotopy group. One point of framing the definition this way is that it reveals that there are only two levels which matter – the even and the odd – so $K_0(A) = K_2(A) = K_4(A) = \dots$, and $K_1(A) = K_3(A) = \dots$, and the apparent problem at level 0 goes away. This is a result of Bott periodicity. Changing the level of homotopy groups amounts to the same thing as taking loop spaces. Specifically, the functor $\Omega$ that takes a space $X$ to its space of loops $\Omega(X)$ is right-adjoint to the suspension functor $S$ – and since $S(S^n) = S^{n+1}$, this means that $\pi_{n+1}(X) = [S(S^n),X] \cong [S^n,\Omega(X)] = \pi_n(\Omega(X))$. (Note that $[S^n,X]$ is the group of homotopy classes of maps from the $n$-sphere into $X$). On the other hand, Bott periodicity says that $\Omega^2(GL(A)) \sim GL(A)$ – taking the loop-space twice gives something homotopy equivalent to the original $GL(A)$. So the tower of homotopy groups repeats every two dimensions. (So, in particular, one may as well replace that $j-1$ by $j+1$, and compute $K_0$ as a fundamental group – which is also $K_2$).
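Spelled out, the degree-shift and the periodicity in the last paragraph combine as follows (writing $\simeq$ for homotopy equivalence):

```latex
% The loop-suspension adjunction, applied to spheres, shifts homotopy degree:
\pi_{n+1}(X) = [S^{n+1}, X] = [S(S^n), X] \cong [S^n, \Omega(X)] = \pi_n(\Omega(X)).
% Bott periodicity, \Omega^2(GL(A)) \simeq GL(A), then gives the two-periodic tower:
K_j(A) = \pi_{j-1}(GL(A)) \cong \pi_{j+1}(GL(A)) = K_{j+2}(A).
```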

Now, to get the other side of the map in the Baum-Connes conjecture, we need a different part of K-theory.

K-Homology

Now, as with homology and cohomology, there are two related functors in the world of K-theory from spaces (of whatever kind) into abelian groups. The one described above is contravariant (for “spaces”, not algebras – don’t forget this duality!). Thus, maps $f : X \rightarrow Y$ give maps $K^0(f) : K^0(Y) \rightarrow K^0(X)$, which is like cohomology. There is also a covariant functor $K_0$ (so $f$ gives $K_0(f) : K_0(X) \rightarrow K_0(Y)$), appropriately called K-homology. If the K-theory is described in terms of vector bundles on $X$, K-homology – in the case of algebraic varieties, anyway – is about coherent sheaves of vector spaces on $X$. Concretely, you can think of these as resembling vector bundles without a local triviality condition. (One thinks, for instance, of the “skyscraper sheaf”, which assigns a fixed vector space $V$ to any open set containing a given point $x \in X$, and $0$ to any other – like a “bundle” having fibre $V$ at $x$ and $0$ everywhere else. There are generalizations putting a given fibre on a fixed subvariety, and of course one can add such examples.) This image explains why any vector bundle can be interpreted as a coherent sheaf – so there is a map $K^0 \rightarrow K_0$. When the variety $X$ is not singular, this turns out to be an isomorphism (the groups one ends up constructing after all the identifications involved turn out the same, even though sheaves in general form a bigger category to begin with).

But to take $K_0$ into the topological setting, this description doesn’t work anymore. There are different ways to describe $K_0$, but the one Baum chose – because it extends nicely to the NCG world where our “space” is a (not necessarily commutative) $C^{\star}$-algebra $A$ – is in terms of generalized elliptic operators. That is to say, triples $(H, \psi, T)$, where $H$ is a (separable) Hilbert space, $\psi$ is a representation of $A$ in terms of bounded operators on $H$, and $T$ is some bounded operator on $H$ with some nice properties. Namely, $T$ is self-adjoint, and for any $a \in A$, both its commutator with $\psi(a)$ and $\psi(a)(I - T^2)$ land in $\mathcal{K}(H)$, the ideal of compact operators. (This is the only norm-closed ideal in $\mathcal{L}(H)$, the bounded operators – the idea being that for this purpose, operators in this ideal are “almost” zero.)

These are “abstract” elliptic operators – but many interesting examples are concrete ones – that is, $H = L^2(S)$ for some space $S$, and $T$ is describing some actual elliptic operator on functions on $S$. (He gave the case where $S$ is the circle, and $T$ is a version of the Dirac operator $-i \partial/\partial \theta$ – normalized so all its nonzero eigenvalues are $\pm 1$ – then we’d be doing K-homology for the circle.)
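Here is a small numerical sketch of that circle example (my own finite truncation in the Fourier basis, not code from the talk): the normalized Dirac operator is diagonal with eigenvalues $\mathrm{sign}(k)$ on the modes $e^{ik\theta}$, multiplication by $e^{i\theta}$ shifts Fourier modes, and their commutator is finite-rank – hence compact, as the abstract definition requires.

```python
import numpy as np

K = 50
k = np.arange(-K, K + 1)                 # Fourier modes e^{ik theta} on the circle
T = np.diag(np.sign(k)).astype(float)    # normalized Dirac operator: eigenvalues -1, 0, +1

# Multiplication by e^{i theta} shifts Fourier coefficients: a_k -> a_{k+1}.
S = np.eye(2 * K + 1, k=-1)

# T is self-adjoint, and T^2 = I away from the kernel (the single k = 0 mode):
assert np.allclose(T, T.T)
assert np.linalg.matrix_rank(np.eye(2 * K + 1) - T @ T) == 1

# The commutator [T, psi(a)] is finite-rank (hence compact): it only "sees"
# the modes where sign(k) jumps, near k = 0.
comm = T @ S - S @ T
assert np.linalg.matrix_rank(comm) == 2
```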

Then there’s a notion of homotopy between these operators (which I’ll elide), and the collection of these things up to homotopy forms an abelian group, which is called $K^1(A)$. This is the ODD case – that is, there’s a tower of groups $K^j(A)$, but due to Bott periodicity they repeat with period 2, so we only need to give $K^0(A)$ and $K^1(A)$. The definition for $K^0(A)$ is similar to the one for $K^1(A)$, except that we drop the “self-adjoint” condition on $T$, which necessitates expanding the other two conditions – there’s a commutator for both $T$ and $T^*$, and the condition for $T^2$ becomes two conditions, for $TT^*$ and $T^* T$.  Now, all these $K^j(A)$ should be seen as the K-homology groups $K_j(X)$ of spaces $X$ (the sub/super script is denoting co/contra-variance).

Now, for the Baum-Connes conjecture, which is about groups, one actually needs an equivariant version of all this – that is, we want to deal with categories of $G$-spaces (i.e. spaces with a $G$-action, and maps compatible with the $G$-action). This generalizes to noncommutative spaces perfectly well – there are $G$-$C^{\star}$-algebras with suitable abstract elliptic operators (one needs a unitary representation of $G$ on the Hilbert space $H$ in the triple to define the compatibility – given by a conjugation action), $G$-homotopies, and so forth, and then there’s an equivariant K-homology group, $K^G_j(X)$, for a $G$-space $X$. (Actually, for these purposes, one cares about proper $G$-actions – ones where $X$ and the quotient space are suitably nice.)

Baum-Connes Conjecture

Now, suppose we have a (locally compact, Hausdorff) group $G$. The Baum-Connes conjecture asserts that a map $\mu$, which always exists, between two particular abelian groups found from K-theory, is always an isomorphism. In fact, this is supposed to be true for the whole tower of groups, but by Bott periodicity, we only need the even and the odd case. For simplicity, let’s just think about one of $j=0,1$ at a time.

So then the first abelian group associated to $G$ comes from the equivariant K-homology for $G$-spaces.  In particular, there is a classifying space $\underline{E}G$ – this is the terminal object in a category of (“proper”) $G$-spaces (that is, any other $G$-space has a $G$-map into $\underline{E}G$). The group we want is the equivariant K-homology of this space: $K_j^G(\underline{E}G)$.  Since $\underline{E}G$ is a terminal object among $G$-spaces, and $K_j$ is covariant, it makes sense that this group is a limit over $G$-spaces (with some caveats), so another way to define it is $K^G_j(\underline{E}G) = \varinjlim K^G_j(X)$, where the direct limit is over all ($G$-compact) $G$-spaces $X$.  Now, being defined in this abstract way makes this a tricky thing to deal with computationally (which is presumably one reason the conjecture has resisted proof).  Not so for the second group:

The second group is $K_j(C^{\star}_r(G))$, the K-theory of the reduced $C^{\star}$-algebra of a (locally compact, Hausdorff topological) group $G$. To get this algebra, you take the compactly supported continuous functions on $G$, with the convolution product, and then, thinking of these as acting on $L^2(G)$ by convolution, take the norm completion inside the algebra of all bounded operators on $L^2(G)$ – the completion is still an algebra, with product extending convolution. Then one takes the K-theory for this algebra at level $j$.
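For a finite group the analytic completion step is vacuous, and $C^{\star}_r(G)$ is just the group algebra. A toy sketch with $G = \mathbb{Z}/3$ (my own illustration, not from the talk) shows the key point: the action on $\ell^2(G)$ turns convolution of functions into composition of operators.

```python
import numpy as np

n = 3  # the cyclic group Z/3, a finite toy model of the construction

def convolve(f, g):
    """Convolution product on functions Z/n -> C."""
    return np.array([sum(f[k] * g[(j - k) % n] for k in range(n)) for j in range(n)])

def left_regular(f):
    """f acting on l^2(Z/n) by convolution, written as an n x n matrix."""
    return np.array([[f[(j - k) % n] for k in range(n)] for j in range(n)])

f = np.array([1.0, 2.0, 0.0])
g = np.array([0.0, 1.0, -1.0])

# The representation turns convolution into operator composition:
assert np.allclose(left_regular(f) @ left_regular(g), left_regular(convolve(f, g)))
```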

So then there is always a particular map $\mu : K^G_j(\underline{E}G) \rightarrow K_j(C^{\star}_r(G))$, which is defined in terms of index theory.  The conjecture is that this is always an isomorphism (which, if true, would make the equivariant K-homology much more tractable).  There aren’t any known counterexamples, and in fact it is known to be true for all finite groups and for compact Lie groups – but for infinite discrete groups in general, there’s no proof known.  Indeed, it’s not even known whether it’s true for some specific, not very complicated groups, notably $SL(3,\mathbb{Z})$ – the 3-by-3 integer matrices of determinant 1.

In fact, Paul Baum seemed to be pretty confident that the conjecture is wrong (that there is a counterexample $G$) – essentially because it implies so many things (the Kadison-Kaplansky conjecture, that groups with no torsion have group rings with no idempotents; the Novikov conjecture, that certain manifold invariants coming from $G$ are homotopy invariants; and many more) that it would be too good to be true.  However, it does imply all these things about each particular group it holds for.

Now, I’ve not learned much about K-theory in the past, but Paul Baum’s talks clarified a lot of things about it for me.  One thing I realized is that some invariants I’ve thought more about, in the context of Extended TQFT – which do have to do with equivariant coherent sheaves of vector spaces – are nevertheless not the same invariants as in K-theory (at least in general).  I’ve been asked this question several times, and on my limited understanding, I thought it was true – for finite groups, they’re closely related (the 2-vector spaces that appear in ETQFT are abelian categories, but you can easily get abelian groups out of them, and it looks to me like they’re the K-homology groups).  But in the topological case, K-theory can’t readily be described in these terms, and furthermore the ETQFT invariants don’t seem to have all the identifications you find in K-theory – so it seems in general they’re not the same, though there are some concepts in common. But it does inspire me to learn more about K-theory.

Coming up: more reporting on talks from our seminar on Stacks and Groupoids, by Tom Prince and Jose Malagon-Lopez, who were talking about stacks in terms of homotopical algebra and category theory.

I say this is about a “recent” talk, though of course it was last year… But to catch up: Ivan Dynov was visiting from York and gave a series of talks, mainly to the noncommutative geometry group here at UWO, about the problem of classifying von Neumann algebras. (Strictly speaking, since there is not yet a complete set of invariants for von Neumann algebras known, one could dispute the following is a “classification”, but here it is anyway).

The first point is that any von Neumann algebra $\mathcal{M}$ is a direct integral of factors, which are highly noncommutative in that the centre of a factor consists of just the multiples of the identity. The factors are the irreducible building blocks of the noncommutative features of $\mathcal{M}$.

There are two basic tools that provide what classification we have for von Neumann algebras: first, the order theory for projections; second, the Tomita-Takesaki theory. I’ve mentioned the Tomita flow previously, but as for the first part:

A projection (self-adjoint idempotent) is just what it sounds like, if you represent $\mathcal{M}$ as an algebra of bounded operators on a Hilbert space. An extremal but informative case is $\mathcal{M} = \mathcal{B}(H)$, but in general not every bounded operator appears in $\mathcal{M}$.

In the case where $\mathcal{M} = \mathcal{B}(H)$, a projection in $\mathcal{M}$ is the same thing as a subspace of $H$. There is an (orthomodular) lattice of them (in general, the lattice of projections is $\mathcal{P(M)}$). For subspaces, the dimension characterizes a subspace up to isomorphism – any two subspaces of the same dimension are isomorphic by some operator in $\mathcal{B}(H)$ (but not necessarily in a general $\mathcal{M}$).

The idea is to generalize this to projections in a general $\mathcal{M}$, and get some characterization of $\mathcal{M}$. The kind of isomorphism that matters for subspaces is a partial isometry – a map $u$ which preserves the metric on some subspace, and is zero on its orthogonal complement. In fact, the corresponding projections are then conjugate by $u$. So we define, for a general $\mathcal{M}$, an equivalence relation on projections, which amounts to saying that $e \sim f$ if there’s a partial isometry $u \in \mathcal{M}$ with $e = u^{\star}u$ and $f = uu^{\star}$ (i.e. the projections are conjugate by $u$).
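A minimal numerical illustration of this equivalence (mine, not from the talks): a partial isometry $u$ in $M_3(\mathbb{C})$ moving one coordinate axis onto another gives two distinct but equivalent projections $e = u^{\star}u$ and $f = uu^{\star}$.

```python
import numpy as np

# A partial isometry in M_3: it maps the first coordinate axis isometrically
# onto the second, and kills the orthogonal complement.
u = np.zeros((3, 3))
u[1, 0] = 1.0

e = u.conj().T @ u   # projection onto the source subspace (axis 0)
f = u @ u.conj().T   # projection onto the target subspace (axis 1)

assert np.allclose(e, np.diag([1.0, 0.0, 0.0]))
assert np.allclose(f, np.diag([0.0, 1.0, 0.0]))
# Both are self-adjoint projections, and e ~ f via u even though e != f:
for p in (e, f):
    assert np.allclose(p @ p, p) and np.allclose(p, p.conj().T)
```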

Then there’s an order relation on the equivalence classes of projections – which, as suggested above, we should think of as generalizing “dimension” from the case $\mathcal{M} = \mathcal{B}(H)$. The order relation says that $e \leq f$ if $e \sim e_0$ where $e_0 \leq f$ as a projection (i.e. inclusion, thinking of a projection as its image subspace of $H$). But the fact that $\mathcal{M}$ may not be all of $\mathcal{B}(H)$ has some counterintuitive consequences. For example, we can define a projection $e \in \mathcal{M}$ to be finite if the only time $e \sim e_0 \leq e$ is when $e_0 = e$ (which is just the usual definition of finite, relativized to use only maps in $\mathcal{M}$). We can call $e \in \mathcal{M}$ a minimal projection if it is nonzero and $f \leq e$ implies $f = e$ or $f = 0$.

Then the first pass at a classification of factors (i.e. “irreducible” von Neumann algebras) says a factor $\mathcal{M}$ is:

• Type $I$: If $\mathcal{M}$ contains a minimal projection
• Type $II$: If $\mathcal{M}$ contains no minimal projection, but contains a (nontrivial) finite projection
• Type $III$: If $\mathcal{M}$ contains no minimal or nontrivial finite projection

We can further subdivide them by following the “dimension-function” analogy, which captures the ordering of projections for $\mathcal{M} = \mathcal{B}(H)$: it’s a theorem that there is a function $d : \mathcal{P(M)} \rightarrow [0,\infty]$ which has the properties of “dimension”, in that it gets along with the equivalence relation $\sim$, respects finiteness, and adds up over direct sums. Letting $D$ be the range of this function (which is determined up to rescaling), every factor falls into one of the following types:

• Type $I_n$: When $D = \{0,1,\dots,n\}$ (That is, there is a maximal, finite projection)
• Type $I_\infty$: When $D = \{ 0, 1, \dots, \infty \}$ (if there is an infinite projection in $\mathcal{M}$)
• Type $II_1$: When $D = [ 0 , 1 ]$ (The maximal projection is finite – such a case can always be rescaled so the maximum $d$ is $1$)
• Type $II_\infty$: When $D = [ 0 , \infty ]$ (The maximal projection is infinite – notice that this has the same order type as type $II_1$)
• Type $III$: When $D = \{0,\infty\}$ (every nonzero projection is infinite – these are called properly infinite)

The type $I$ factors are all just (isomorphic to) matrix algebras – all bounded operators on some finite- or countably-infinite-dimensional Hilbert space, which we can think of as a function space like $l_2(X)$ for some set $X$. Types $II$ and $III$ are more interesting. Type $II$ algebras are related to what von Neumann called “continuous geometries” – analogs of projective geometry (i.e. geometry of subspaces), with a continuous dimension function.

(If we think of these algebras $\mathcal{M}$ as represented on a Hilbert space $H$, then in fact, thought of as subspaces of $H$, all the nonzero projections give infinite-dimensional subspaces. But the definition of “finite” is relative to $\mathcal{M}$: a partial isometry from a subspace $H' \leq H$ to a proper subspace $H'' < H'$ of itself may exist in $\mathcal{B}(H)$ without being in $\mathcal{M}$.)

In any case, this doesn’t exhaust what we know about factors. In his presentation, Ivan Dynov described some examples constructed from crossed products of algebras, which is important later, but for the moment, I’ll finish describing another invariant which helps pick apart the type $III$ factors. This is related to Tomita-Takesaki theory, which I’ve mentioned in here before.

You’ll recall that the Tomita flow (associated to a given state $\phi$) is given by $\sigma^{\phi}_t(A) = \Delta^{-it} A \Delta^{it}$, where $\Delta$ is the positive part in the polar decomposition of the conjugation operator $S$ (which depends on the state $\phi$, because it refers to the GNS representation of $\mathcal{M}$ on a Hilbert space $H$). This flow is uninteresting for Type $I$ or $II$ factors, but for type $III$ factors, it’s the basis of Connes’ classification.

In particular, we can understand the Tomita flow in terms of eigenvalues of $\Delta$, since it comes from exponentials of $\Delta$. Moreover, as I commented last time, the really interesting part of the flow is independent of which state we pick. So we are interested in the common eigenvalues of the $\Delta$ associated to different states $\phi$, and define

$S(\mathcal{M}) = \bigcap_{\phi \in W} \mathrm{Spec}(\Delta_{\phi})$

(where $W$ is the set of all states on $\mathcal{M}$, or actually “weights”)

Then $S(\mathcal{M}) - \{ 0 \}$, it turns out, is always a multiplicative subgroup of the positive real line, and the possible cases refine to these:

• $S(\mathcal{M}) = \{ 1 \}$ : This is when $\mathcal{M}$ is type $I$ or $II$
• $S(\mathcal{M}) = [0, \infty )$ : Type $III_1$
• $S(\mathcal{M}) = \{ 0 \} \cup \{ \lambda^n : n \in \mathbb{Z} \}$ for some fixed $0 < \lambda < 1$ : Type $III_{\lambda}$ (one type for each $\lambda$ in the range $(0,1)$), and
• $S(\mathcal{M}) = \{ 0 , 1 \}$ : Type $III_0$

(Taking logarithms, $S(\mathcal{M}) - \{ 0 \}$ gives an additive subgroup of $\mathbb{R}$, $\Gamma(\mathcal{M})$ which gives the same information). So roughly, the three types are: $I$ finite and countable matrix algebras, where the dimension function tells everything; $II$ where the dimension function behaves surprisingly (thought of as analogous to projective geometry); and $III$, where dimensions become infinite but a “time flow” dimension comes into play.  The spectra of $\Delta$ above tell us about how observables change in time by the Tomita flow:  high eigenvalues cause the observable’s value to change faster with time, while low ones change slower.  Thus the spectra describe the possible arrangements of these eigenvalues: apart from the two finite cases, the types are thus a continuous positive spectrum, and a discrete one with a single generator.  (I think of free and bound energy spectra, for an analogy – I’m not familiar enough with this stuff to be sure it’s the right one).

This role for time flow is interesting because of the procedures for constructing examples of type $III$ factors, which Ivan Dynov also described to us. These are examples associated with dynamical systems, and they show up as crossed products. See the link for details, but roughly this is a “product” of an algebra by a group action – a kind of von Neumann algebra analog of the semidirect product $H \rtimes K$ of groups, incorporating an action of $K$ on $H$. Indeed, if a (locally compact) group $K$ acts on a group $H$, then the crossed product of the corresponding algebras is just the von Neumann algebra of the semidirect product group.

In general, a $W^{\star}$-dynamical system is $(\mathcal{M},G,\alpha)$, where $G$ is a locally compact group acting by automorphisms on the von Neumann algebra $\mathcal{M}$, by the map $\alpha : G \rightarrow Aut(\mathcal{M})$. Then the crossed product $\mathcal{M} \rtimes_{\alpha} G$ is the algebra for the dynamical system.

A significant part of the talks (which I won’t cover here in detail) described how to use some examples of these to construct particular type $III$ factors. In particular, a theorem of Murray and von Neumann says $\mathcal{M} = L^{\infty}(X,\mu) \rtimes_{\alpha} G$ is a factor if the action of the discrete group $G$ on a finite measure space $X$ is ergodic (i.e. has no nontrivial proper invariant sets – roughly, each orbit is dense). Another says this factor is type $III$ unless there’s a $G$-invariant measure equivalent to (i.e. mutually absolutely continuous with) $\mu$. Some clever examples I won’t reconstruct gave some factors like this explicitly.
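Ergodicity itself is easy to see numerically. The standard toy example is the irrational rotation $x \mapsto x + \alpha \pmod 1$ (note this one preserves Lebesgue measure, so it actually yields a type $II_1$ factor rather than type $III$ – it only illustrates the ergodicity condition, that orbits spread over the whole space):

```python
import numpy as np

alpha = np.sqrt(2) - 1                  # an irrational rotation number
N = 2000
orbit = (alpha * np.arange(N)) % 1.0    # orbit of 0 under x -> x + alpha (mod 1)

# Ergodicity shows up here as equidistribution: every interval gets visited.
counts, _ = np.histogram(orbit, bins=20, range=(0.0, 1.0))
assert counts.min() > 0
```

For a rational $\alpha$ the orbit would be finite and most bins would stay empty – the action would not be ergodic.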

He concluded by talking about some efforts to improve the classification: the above is not a complete set of invariants, so a lot of work in this area goes into improving the completeness of the set. One set of results he told us about does this somewhat for the case of hyperfinite factors (i.e. ones which are limits of finite-dimensional ones), namely that the type $III$ ones can be built as crossed products involving a discrete group.

At any rate, these constructions are interesting, but it would take more time than I have here to look in detail – perhaps another time.

When I made my previous two posts about ideas of “state”, one thing I was aiming at was to say something about the relationships between states and dynamics. The point here is that, although the idea of “state” is that it is intrinsically something like a snapshot capturing how things are at one instant in “time” (whatever that is), extrinsically, there’s more to the story. The “kinematics” of a physical theory consists of its collection of possible states. The “dynamics” consists of the regularities in how states change with time. Part of the point here is that these aren’t totally separate.

Just for one thing, in classical mechanics, the “state” includes time-derivatives of the quantities you know, and the dynamical laws tell you something about the second derivatives. This is true in both the Hamiltonian and Lagrangian formalisms of dynamics. The Hamiltonian, which represents the concept of “energy” in the context of a system, is a function $H(q,p)$, where $q$ is a vector representing the values of some collection of variables describing the system (generalized position variables, in some configuration space $X$), and the $p = m \dot{q}$ are corresponding “momentum” variables, which are the other coordinates in a phase space which in simple cases is just the cotangent bundle $T^{\star}X$. Here, $m$ refers to mass, or some equivalent. The familiar case of a moving point particle has “energy = kinetic + potential”, or $H = p^2 / 2m + V(q)$ for some potential function $V$. The symplectic form on $T^{\star}X$ can then be used to define a path through any point, which describes the evolution of the system in time – notably, it conserves the energy $H$. Then there’s the Lagrangian, which defines the “action” associated to a path, which comes from integrating some function $L(q, \dot{q})$ living on the tangent bundle $TX$, over the path. The physically realized paths (classically) are critical points of the action, with respect to variations of the path.
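A quick numerical sketch of the Hamiltonian picture (a generic leapfrog integrator, my own illustration rather than anything specific to the text): for $H = p^2/2m + V(q)$, integrating Hamilton's equations with a symplectic method visibly conserves the energy.

```python
import numpy as np

m, dt, steps = 1.0, 0.01, 5000
V = lambda q: 0.5 * q**2    # harmonic potential
dV = lambda q: q            # its gradient

def hamiltonian(q, p):
    return p**2 / (2 * m) + V(q)

# Leapfrog (symplectic) integration of Hamilton's equations:
q, p = 1.0, 0.0
E0 = hamiltonian(q, p)
for _ in range(steps):
    p -= 0.5 * dt * dV(q)   # half kick
    q += dt * p / m         # drift
    p -= 0.5 * dt * dV(q)   # half kick

# A symplectic integrator tracks the conserved energy closely:
assert abs(hamiltonian(q, p) - E0) < 1e-4
```

A non-symplectic method (plain Euler, say) would show the energy drifting steadily instead.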

This is all based on the view of a “state” as an element of a set (which happens to be a symplectic manifold like $T^{\star}X$ or just a manifold if it’s $TX$), and both the “energy” and the “action” are some kind of function on this set. A little extra structure (symplectic form, or measure on path space) turns these functions into a notion of dynamics. Now a function on the space of states is what an observable is: energy certainly is easy to envision this way, and action (though harder to define intuitively) counts as well.

But another view of states which I mentioned in that first post is the one that pertains to statistical mechanics, in which a state is actually a statistical distribution on the set of “pure” states. This is rather like a function – it’s slightly more general, since a distribution can have point-masses, but any function gives a distribution if there’s a fixed measure $d\mu$ around to integrate against – then a function like $H$ becomes the measure $H d\mu$. And this is where the notion of a Gibbs state comes from, though it’s slightly trickier. The idea is that the Gibbs state (in some circumstances called the Boltzmann distribution) is the state a system will end up in if it’s allowed to “thermalize” – it’s the maximum-entropy distribution for a given amount of energy in the specified system, at a given temperature $T$. So, for instance, for a gas in a box, this describes how, at a given temperature, the kinetic energies of the particles are (probably) distributed. Up to a bunch of constants of proportionality, one expects that the weight given to a state (or region in state space) is just $\exp(-H/T)$, where $H$ is the Hamiltonian (energy) for that state. That is, the likelihood of being in a state is inversely proportional to the exponential of its energy – and higher temperature makes higher energy states more likely.
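The Gibbs weights are easy to play with numerically (a toy discrete system of my own choosing, with the constants of proportionality suppressed), and the experiment also shows the logarithm trick mentioned next – the state determines the Hamiltonian up to an additive constant:

```python
import numpy as np

H = np.array([0.0, 1.0, 2.0, 5.0])   # energies of a few discrete states

def gibbs(H, T):
    """Boltzmann weights exp(-H/T), normalized to a probability distribution."""
    w = np.exp(-H / T)
    return w / w.sum()

p_cold, p_hot = gibbs(H, 0.5), gibbs(H, 10.0)

# Lower energy is always more likely, and raising the temperature flattens
# the distribution toward uniform:
assert np.all(np.diff(p_cold) < 0)
assert p_hot.max() - p_hot.min() < p_cold.max() - p_cold.min()

# Recovering the Hamiltonian from the state: -T log p = H + const.
recovered = -10.0 * np.log(p_hot)
assert np.allclose(recovered - recovered[0], H - H[0])
```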

Now part of the point here is that, if you know the Gibbs state at temperature $T$, you can work out the Hamiltonian just by taking a logarithm – so specifying a Hamiltonian and specifying the corresponding Gibbs state are completely equivalent. But specifying a Hamiltonian (given some other structure) completely determines the dynamics of the system.

This is the classical version of the idea Carlo Rovelli calls “Thermal Time”, which I first encountered in his book “Quantum Gravity”, but which is also summarized in Rovelli’s FQXi essay “Forget Time”, and described in more detail in this paper by Rovelli and Alain Connes. Mathematically, this involves the Tomita flow on von Neumann algebras (which Connes used to great effect in his work on the classification of same). It was reading “Forget Time” which originally got me thinking about making the series of posts about different notions of state.

Physically, remember, these are von Neumann algebras of operators on a quantum system, the self-adjoint ones being observables; states are linear functionals on such algebras. The equivalent of a Gibbs state – a thermal equilibrium state – is called a KMS (Kubo-Martin-Schwinger) state (for a particular Hamiltonian). It’s important that the KMS state depends on the Hamiltonian, which is to say the dynamics and the notion of time with respect to which the system will evolve. Given a notion of time flow, there is a notion of KMS state.

One interesting place where KMS states come up is in (general) relativistic thermodynamics. In particular, the effect called the Unruh Effect is an example (here I’m referencing Robert Wald’s book, “Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics”). Physically, the Unruh effect says the following. Suppose you’re in flat spacetime (described by Minkowski space), and an inertial (unaccelerated) observer sees it in a vacuum. Then an accelerated observer will see space as full of a bath of particles at some temperature related to the acceleration. Mathematically, a change of coordinates (acceleration) implies there’s a one-parameter family of automorphisms of the von Neumann algebra which describes the quantum field for particles. There’s also a (trivial) family for the unaccelerated observer, since the coordinate system is not changing. The Unruh effect in this language is the fact that a vacuum state relative to the time-flow for an unaccelerated observer is a KMS state relative to the time-flow for the accelerated observer (at some temperature related to the acceleration).

The KMS state for a von Neumann algebra with a given Hamiltonian operator has a density matrix $\omega$, which is again, up to normalization, just $e^{-H/T}$ – the exponential of (minus) the Hamiltonian operator over the temperature. (For pure states, $\omega = |\Psi \rangle \langle \Psi |$, and in general a matrix becomes a state by $\omega(A) = Tr(A \omega)$, which for pure states is just the usual expectation value for $A$, $\langle \Psi | A | \Psi \rangle$.)
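In finite dimensions this is easy to write down concretely (a toy example of mine, not from the Connes-Rovelli paper): the density matrix is $e^{-H/T}$ normalized by its trace, and $Tr(A\omega)$ reduces to the usual expectation value on pure states.

```python
import numpy as np

# Toy finite-dimensional "KMS state": omega = exp(-H/T) / Z for a diagonal H.
T = 2.0
H = np.diag([0.0, 1.0, 3.0])
omega = np.diag(np.exp(-np.diag(H) / T))
omega /= np.trace(omega)            # normalize by the partition function Z

def expect(A, omega):
    """The state applied to an observable: omega(A) = Tr(A omega)."""
    return np.trace(A @ omega)

assert np.isclose(np.trace(omega), 1.0)

# For a pure state omega = |psi><psi|, Tr(A omega) is the usual expectation:
psi = np.array([1.0, 0.0, 0.0])
pure = np.outer(psi, psi.conj())
A = np.diag([5.0, 7.0, 9.0])
assert np.isclose(expect(A, pure), psi.conj() @ A @ psi)
```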

Now, things are a bit more complicated in the von Neumann algebra picture than the classical picture, but Tomita-Takesaki theory tells us that as in the classical world, the correspondence between dynamics and KMS states goes both ways: there is a flow – the Tomita flow – associated to any given state, with respect to which the state is a KMS state. By “flow” here, I mean a one-parameter family of automorphisms of the von Neumann algebra. In the Heisenberg formalism for quantum mechanics, this is just what time is (i.e. states remain the same, but the algebra of observables is deformed with time). The way you find it is as follows (and why this is right involves some operator algebra I find a bit mysterious):

First, get the algebra $\mathcal{A}$ acting on a Hilbert space $H$, with a cyclic vector $\Psi$ (i.e. such that $\mathcal{A} \Psi$ is dense in $H$ – one way to get this is by the GNS representation, so that the state $\omega$ just acts on an operator $A$ by the expectation value at $\Psi$, as above, so that the vector $\Psi$ is standing in, in the Hilbert space picture, for the state $\omega$). Then one can define an operator $S$ by the fact that, for any $A \in \mathcal{A}$, one has

$S(A\Psi) = A^{\star}\Psi$

That is, $S$ acts like the conjugation operation on operators at $\Psi$, which is enough to define $S$ since $\Psi$ is cyclic. This $S$ has a polar decomposition (analogous for operators to the polar form for complex numbers) $S = J \Delta^{1/2}$, where $J$ is antiunitary (this is conjugation, after all) and $\Delta$ is positive and self-adjoint. We need this positive part $\Delta$, because the Tomita flow is a one-parameter family of automorphisms given by:

$\alpha_t(A) = \Delta^{-it} A \Delta^{it}$

An important fact for Connes’ classification of von Neumann algebras is that the Tomita flow is basically unique – that is, it’s unique up to an inner automorphism (i.e. a conjugation by some unitary operator – so in particular, if we’re talking about a relativistic physical theory, a change of coordinates giving a different $t$ parameter would be an example). So while there are different flows, they’re all “essentially” the same. There’s a unique notion of time flow if we pass from the automorphism group of $\mathcal{A}$ to its quotient by the inner automorphisms. Now, in some cases, the Tomita flow consists entirely of inner automorphisms, and this reduction makes it disappear entirely (this happens in the finite-dimensional case, for instance). But in the general case this doesn’t happen, and the Connes-Rovelli paper summarizes this by saying that von Neumann algebras are “intrinsically dynamic objects”. So this is one interesting thing about the quantum view of states: there is a somewhat canonical notion of dynamics present just by virtue of the way states are described. In the classical world, this isn’t the case.
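The finite-dimensional case just mentioned can be made completely concrete (my own toy computation; sign conventions for the flow vary between authors): for a state given by an invertible density matrix $\rho$, the modular flow is conjugation by the unitary $\rho^{it}$ – manifestly inner – it preserves the state, and its analytic continuation to $t = i$ verifies the KMS condition.

```python
import numpy as np

# Finite-dimensional sketch: state omega(A) = Tr(rho A) with invertible rho.
rho = np.diag([0.5, 0.3, 0.2])
omega = lambda A: np.trace(rho @ A)

def flow(A, t):
    """Modular flow (up to sign convention): conjugation by the unitary rho^{it}."""
    u = np.diag(np.diag(rho).astype(complex) ** (1j * t))
    return u @ A @ u.conj().T

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))

# The flow is an inner automorphism and preserves the state:
assert np.isclose(omega(flow(A, 1.7)), omega(A))

# KMS condition via continuation to t = i: sigma_i(A) = rho^{-1} A rho, and
# omega(sigma_i(A) B) = omega(B A).
sigma_i_A = np.linalg.inv(rho) @ A @ rho
assert np.isclose(omega(sigma_i_A @ B), omega(B @ A))
```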

Now, Rovelli’s “Thermal Time” hypothesis is, basically, that the notion of time is a state-dependent one: instead of an independent variable, with respect to which other variables change, quantum mechanics (per Rovelli) makes predictions about correlations between different observed variables. More precisely, the hypothesis is that, given that we observe the world in some state, the right notion of time should just be the Tomita flow for that state. They claim that checking this for certain cosmological models, like the Friedmann model, they get the usual notion of time flow. I have to admit, I have trouble grokking this idea as fundamental physics, because it seems to imply that the universe (or any system in it we look at) is always, a priori, in thermal equilibrium, which seems wrong to me since it evidently isn’t. The Friedmann model does assume an expanding universe in thermal equilibrium, but clearly we’re not in exactly that world. On the other hand, the Tomita flow is definitely there in the von Neumann algebra view of quantum mechanics and states, so possibly I’m misinterpreting the nature of the claim. Also, as applied to quantum gravity, a “state” perhaps should be read as a state for the whole spacetime geometry of the universe – which is presumably static – and then the apparent “time change” would be a result of the Tomita flow on operators describing actual physical observables. But on this view, I’m not sure how to understand “thermal equilibrium”.  So in the end, I don’t really know how to take the “Thermal Time Hypothesis” as physics.

In any case, the idea that the right notion of time should be state-dependent does make some intuitive sense. The only physically, empirically accessible referent for time is “what a clock measures”: in other words, there is some chosen system which we refer to whenever we say we’re “measuring time”. Different choices of system (that is, different clocks) will give different readings even if they happen to be moving together in an inertial frame – atomic clocks sitting side by side will still gradually drift out of sync. Even if “the system” means the whole universe, or just the gravitational field, clearly the notion of time even in General Relativity depends on the state of this system. If there is a non-state-dependent “god’s-eye view” of which variable is time, we don’t have empirical access to it. So while I can’t really assess this idea confidently, it does seem to be getting at something important.

I just posted the slides for “Groupoidification and 2-Linearization”, the colloquium talk I gave at Dalhousie when I was up in Halifax last week. I also gave a seminar talk in which I described the quantum harmonic oscillator and extended TQFT as examples of these processes, which covered similar stuff to the examples in a talk I gave at Ottawa, as well as some more categorical details.

Now, in the previous post, I was talking about different notions of the “state” of a system – all of which are in some sense “dual to observables”, although exactly what sense depends on which notion you’re looking at. Each concept has its own particular “type” of thing which represents a state: an element-of-a-set, a function-on-a-set, a vector-in-(projective)-Hilbert-space, and a functional-on-operators. In light of the above slides, I wanted to continue with this little bestiary of ontologies for “states” and mention the versions suggested by groupoidification.

State as Generalized Stuff Type

This is what groupoidification introduces: the idea of a state in $Span(Gpd)$. As I said in the previous post, the key concepts behind this program are state, symmetry, and history. “State” is in some sense a logical primitive here – given a bunch of “pure” states for a system (in the harmonic oscillator, you use the nonnegative integers, representing the $n$-photon energy states of the oscillator), and their local symmetries (the $n$-particle state is acted on by the permutation group on $n$ elements), one defines a groupoid.

So at a first approximation, this is like the “element of a set” picture of state, except that I’m now taking a groupoid instead of a set. In a more general language, we might prefer to say we’re talking about a stack, which we can think of as a groupoid up to some kind of equivalence, specifically Morita equivalence. But in any case, the image is still that a state is an object in the groupoid, or a point in the stack – which just generalizes an element of a set, or a point in configuration space.

However, what is an “element” of a set $S$? It’s a map into $S$ from the terminal object in $\mathbf{Sets}$, which is “the” one-element set – or, likewise, in $\mathbf{Gpd}$, from the terminal groupoid, which has only one object and its identity morphism. But this is a category where the arrows are set maps. When we introduce the idea of a “history”, we’re moving into a category where the arrows are spans, $A \stackrel{s}{\leftarrow} X \stackrel{t}{\rightarrow} B$ (which by abuse of notation sometimes gets called $X$, but more formally $(X,s,t)$). A span represents a set/groupoid/stack of histories, with source and target maps into the sets/groupoids/stacks of states of the system at the beginning and end of the process represented by $X$.

Then we don’t have a terminal object anymore, but the same object $1$ is still around – only the morphisms in and out are different. Its new special property is that it’s a monoidal unit. So now a map from the monoidal unit is a span $1 \stackrel{!}{\leftarrow} X \stackrel{\Phi}{\rightarrow} B$. Since the map on the left is unique, by definition of “terminal”, this is really just given by the functor $\Phi$, the target map. This is a fibration over $B$, and calling it $\Phi$ makes for a convenient “phi”-bration pun – which is appropriate, since it corresponds to what’s usually thought of as a wavefunction $\phi$.

This correspondence is what groupoidification is all about – it has to do with taking the groupoid cardinality of fibres, where a fibre (a “phi”-bre, if you like) of $\Phi$ is the essential preimage of an object $b \in B$ – everything whose image is isomorphic to $b$. This gives an equivariant function on $B$ – really a function of isomorphism classes. (If we were being crude about the symmetries, it would be a function on the quotient space – which is often what you see in real mechanics, when configuration spaces are given by quotients by the action of some symmetry group.)
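Groupoid cardinality itself is easy to compute in examples: it’s the sum, over isomorphism classes of objects, of $1/|\mathrm{Aut}(x)|$. Here’s a quick Python check (my own toy, not from the talk), using the standard fact that the groupoid of finite sets and bijections has cardinality $e$, since the class of $n$-element sets contributes $1/n!$:

```python
from fractions import Fraction
from math import factorial

# Groupoid cardinality: sum over isomorphism classes of 1/|Aut(x)|.
# For FinSet_0 (finite sets and bijections), the n-element sets form one
# class with Aut = S_n, so the cardinality is sum_n 1/n! = e.

def groupoid_cardinality(aut_sizes):
    """aut_sizes: one automorphism-group order per isomorphism class."""
    return sum(Fraction(1, a) for a in aut_sizes)

# Truncating |FinSet_0| at 10-element sets already approximates e well:
approx_e = float(groupoid_cardinality([factorial(n) for n in range(11)]))
```

The same recipe, applied fibre-by-fibre to $\Phi$, is what turns a span into an equivariant function on (isomorphism classes of) $B$.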

In the case where $B$ is the groupoid of finite sets and bijections (sometimes called $\mathbf{FinSet_0}$), these fibrations are the “stuff types” of Baez and Dolan. Such a fibration gives a groupoid with something like a notion of “underlying set” – although a forgetful functor $U: C \rightarrow \mathbf{FinSet_0}$ (giving “underlying sets” for objects in a category $C$) is really supposed to be faithful (so that $C$-morphisms are determined by their underlying set maps), and in a fibration we don’t necessarily have this. The faithful special case corresponds to “structure types” (or combinatorial species), where $X$ is a groupoid of “structured sets” with an underlying-set functor. (Actually, species are usually described in terms of the reverse, fibre-selecting functor $\mathbf{FinSet_0} \rightarrow \mathbf{Sets}$, where the image of a finite set $S$ consists of the set of all “$\Phi$-structured” sets – such as “graphs on $S$”, or “trees on $S$”, etc.) The fibres of a general stuff type are sets equipped with “stuff”, which may have its own nontrivial morphisms (for example, we could take the groupoid of pairs of sets, with the “underlying” functor $\Phi$ selecting the first one).

Over a general groupoid, we have a similar picture, but instead of having an underlying finite set, we just have an “underlying $B$-object”. These generalized stuff types are “states” for a system with a configuration groupoid, in $Span(\mathbf{Gpd})$. Notice that the notion of “state” here really depends on what the arrows in the category of states are – histories (i.e. spans), or just plain maps.

Intuitively, such a state is some kind of “ensemble”, in statistical or quantum jargon. It says the state of affairs is some jumble of many configurations (which we apparently should see as histories starting from the vacuous unit $1$), each of which has some “underlying” pure state (such as energy level, or what-have-you). The cardinality operation turns this into a linear combination of pure states by defining weights for each configuration in the ensemble collected in $X$.

2-State as Representation

A linear combination of pure states is, as I said, an equivariant function on the objects of $B$. It’s one way to “categorify” the view of a state as a vector in a Hilbert space, or map from $\mathbb{C}$ (i.e. a point in the projective Hilbert space of lines in the Hilbert space $H = \mathbb{C}[\underline{B}]$), which is really what’s defined by one of these ensembles.

The idea of 2-linearization is to categorify, not a specific state $\phi \in H$, but the concept of state. So it should be a 2-vector in a 2-Hilbert space associated to $B$. The Hilbert space $H$ was a space of functions into $\mathbb{C}$, which we categorify by taking, instead of a base field, a base category, namely $\mathbf{Vect}_{\mathbb{C}}$. A 2-Hilbert space will be a category of functors into $\mathbf{Vect}_{\mathbb{C}}$ – that is, the representation category of the groupoid $B$.

(This is all fine for finite groupoids. In the infinite case, there are some issues: it seems we really should be thinking of the 2-Hilbert space as the category of representations of an algebra. In the finite case, the groupoid algebra is a finite-dimensional C*-algebra – that is, just a direct sum (over iso. classes of objects) of matrix algebras, which are the group algebras for the automorphism groups at each object. In the infinite-dimensional world, you probably should be looking at the representations of the von Neumann algebra completion of the C*-algebra you get from the groupoid. There are all sorts of analysis issues about measurability that lurk in this area, but they don’t really affect how you interpret “state” in this picture, so I’ll skip them.)

A “2-state”, or 2-vector in this Hilbert space, is a representation of the groupoid(-algebra) associated to the system. The “pure” states are irreducible representations – these generate all the others under the operations of the 2-Hilbert space (“sum”, “scalar product”, etc. in their 2-vector space forms). Now, an irreducible representation of a von Neumann algebra is called a “superselection sector” for a quantum system. It’s playing the role of a pure state here.

There’s an interesting connection here to the concept of state as a functional on a von Neumann algebra. As I described in the last post, the GNS representation associates a representation of the algebra to a state. In fact, the GNS representation is irreducible just when the state is a pure state. But this notion of a superselection sector makes it seem that the concept of 2-state has a place in its own right, not just by this correspondence.

So: if a quantum system is represented by an algebra $\mathcal{A}$ of operators on a Hilbert space $H$, that representation is a direct sum (or direct integral, as the case may be) of irreducible ones, which are “sectors” of the theory, in that no operator in $\mathcal{A}$ can take a vector out of one of these “sectors”. Physicists often associate them with conserved quantities – though “superselection” sectors are a bit more thorough: a mere “selection sector” is a subspace whose projection commutes with some subalgebra of observables representing conserved quantities, while a superselection sector can equivalently be defined as a subspace whose corresponding projection operator commutes with EVERYTHING in $\mathcal{A}$. This is because we shouldn’t have thought of the representation as a single Hilbert space at all: it’s a 2-vector in $\mathbb{Rep}(\mathcal{A})$ – a direct integral of some Hilbert bundle living on the space of irreps. The projections are just part of the definition of such a bundle, and the fact that $\mathcal{A}$ acts on the bundle fibre-wise is just a consequence of the fact that the total space $H$ is the space of sections of the “2-state”. These sections correspond to “states” in the usual sense of the physical interpretation.
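A finite-dimensional toy (my own, using plain nested lists so it stays self-contained) makes the “commutes with everything” condition vivid: take the algebra of block-diagonal matrices $\mathrm{diag}(B_1, B_2)$, a direct sum of two irreducible sectors. The projection onto the first block commutes with every element of the algebra, while an operator mixing the blocks does not commute with it:

```python
# Toy superselection sector: the algebra of block-diagonal matrices
# diag(B1, B2). The projection P onto the first block commutes with
# every element of the algebra (the defining property of a
# superselection sector), but not with operators that mix the blocks.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def block_diag(B1, B2):
    n1, n2 = len(B1), len(B2)
    n = n1 + n2
    M = [[0] * n for _ in range(n)]
    for i in range(n1):
        for j in range(n1):
            M[i][j] = B1[i][j]
    for i in range(n2):
        for j in range(n2):
            M[n1 + i][n1 + j] = B2[i][j]
    return M

P = block_diag([[1, 0], [0, 1]], [[0]])   # projection onto the 2x2 sector
A = block_diag([[1, 2], [3, 4]], [[5]])   # a generic algebra element
assert matmul(P, A) == matmul(A, P)       # P commutes with all of the algebra
```

By contrast, a matrix with a nonzero entry joining the two blocks fails to commute with $P$, which is exactly why such operators are excluded from $\mathcal{A}$.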

Now, there are 2-linear maps that intermix these superselection sectors: the ETQFT picture gives nice examples. Such a map comes up, for example, when you think of two particles colliding (drawn in that world as the collision of two circles to form one circle). The superselection sectors for the particles are labelled by (in one special case) mass and spin – anyway, some conserved quantities. But these are, so to say, “rest masses” – so there are many possible outcomes of a collision, depending on the relative motion of the particles. So these 2-maps describe changes in the system (such as two particles becoming one) – but in a particular 2-Hilbert space, say $\mathbb{Rep}(X)$ for some groupoid $X$ describing the current system (or its algebra), a 2-state $\Phi$ is a representation of the resulting system, and a 2-state-vector is a particular such representation. The algebra $\mathcal{A}$ can naturally be seen as a subalgebra of the automorphisms of $\Phi$.

So anyway, without trying to package up the whole picture – here are two categorified takes on the notion of state, from two different points of view.

I haven’t, here, got to the business about Tomita flows coming from states in the von Neumann algebra sense: maybe that’s to come.

Continuing from the previous post…

I realized I accidentally omitted Klaas Landsman’s talk on the Kochen-Specker theorem, in light of topos theory.  This overlaps a lot with the talk by Andreas Doring, although there are some significant differences.  (Having heard only what Andreas had to say about the differences, I won’t attempt to summarize them).  Again, the point of the Kochen-Specker theorem is that there isn’t a “state space” model for a quantum system – in this talk, we heard the version saying that there are no “locally sigma-Boolean” maps from operators on a Hilbert space to $\{ 0, 1 \}$.  (This is referring to sigma-algebras (of measurable sets on a space) and Boolean algebras of subsets – if there were such a map, it would represent the system in terms of a lattice equivalent to some space).  As with the Isham/Doring approach, they then try to construct something like a state space – internal to some topos.  The main difference is that the toposes are both categories of functors into sets from some locale – but here the functors are covariant, rather than contravariant.

Now, roughly speaking, the remaining talks could be grouped into two kinds:

Quantum Foundations

Many people came to this conference from a physics-oriented point of view.  So for instance Rafael Sorkin gave a talk asking “what is a quantum reality?”. He was speaking from a “histories” interpretation of quantum systems. So, by contrast, a “classical reality” would mean one worldline: out of some space of histories, one of them happens. In quantum theory, you typically use the same space of histories, but have some kind of “path integral” or “sum over histories” when you go to compute the probabilities of given events happening. In this context, “event” means “a subset of all histories” (e.g. the subset specified by a statement like “it rained today”). So his answer to the question is: a reality should be a way of answering all questions about all events.  This is called a “coevent”.  Sorkin’s answer to “what is a quantum reality?” is: “a primitive, preclusive coevent”.

In particular, it’s a measure $\mu$.  For a classical system, “answering” questions means yes/no, whether the one history is in a named event – for a quantum system, it means specifying a path integral over all events – i.e. a measure on the space of events.  This measure needs some nice properties, but it’s not, for instance, a probability measure (it’s complex valued, so there can be interference effects).  Preclusion has to do with the fact that the measure of an event being zero means that it doesn’t happen – so one can make logical inferences about which events can happen.
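The interference point is easy to see in miniature. Here is a toy sketch of my own (a stand-in for the real quantal measure, which involves a decoherence functional): assign each history a complex amplitude, and give an event (a set of histories) the squared magnitude of the summed amplitudes. Two histories of equal weight but opposite phase then form a precluded (measure-zero) event, even though each history separately has weight one:

```python
import cmath

# Toy "sum over histories": each history gets a complex amplitude, and
# an event (a set of histories) gets |sum of amplitudes|^2. Opposite
# phases interfere destructively, giving a precluded event.

def event_weight(amplitudes):
    """|sum of amplitudes|^2 for the set of histories in the event."""
    return abs(sum(amplitudes)) ** 2

a1 = cmath.exp(1j * 0.0)        # history 1, phase 0
a2 = cmath.exp(1j * cmath.pi)   # history 2, phase pi
# Each history alone has weight 1; the two-history event has weight ~0.
```

This is the sense in which the measure, unlike a probability measure, supports inference by preclusion: the combined event simply does not happen.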

Other talks addressing foundational problems in physics included Lucien Hardy’s: he talked about how to base predictive theories on operational structures – and put to the audience the question of whether the structures he was talking about can be represented categorically or not.  The basic idea is that an “operational structure” is some collection of operations that represents a physical experiment whose outcome we might want to predict.  They have some parameters (“knob settings”), outcomes (classical “readouts”), and inputs and outputs for the things they study and affect (e.g. a machine takes in and spits out an electron, doing something in the middle).  This sort of thing can be set up as a monoidal category – but the next idea, “object-oriented operationalism”, involved components having “connections” (given by relations between their inputs) and “coincidences” (predictable correlations in output).  The result was a different kind of diagram language for describing experiments, which can be put together using a “causaloid product” (he referred us to this paper, or a similar one, on this).

Robert Spekkens gave a talk about quantum theory as a probability theory – there are many parallels, though the complex amplitudes give QM phenomena like interference.  Instead of a “random variable” $A$, one has a Hilbert space $H_A$; instead of a (positive) function of $A$, one has a positive operator on $H_A$; standard things in probability have analogs in the quantum world.  What Robert Spekkens’ talk dealt with was how to think about conditional probabilities and Bayesian inference in QM.  One of the basic points is that when calculating conditional probabilities, you generally have to divide by some probability, which encounters difficulties translating into QM.  He described how to construct a “conditional density operator” along similar lines – replacing “division” by a “distortion” operation with an analogous meaning.  The whole thing deeply uses the Choi-Jamiolkowski isomorphism, a duality between “states and channels”.  In terms of the string diagrams Bob Coecke et al. are keen on, this isomorphism can be seen as taking a special cup which creates entangled states into an ordinary cup, with an operator on one side.  (I.e. it allows the operation to be “slid off” the cup.)  The talk carried this through, and ended up defining a quantum version of the probabilistic concept of “conditional independence” (i.e. events $A$ and $C$ are independent, given that $B$ occurred).
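The Choi-Jamiolkowski isomorphism itself is quite concrete in finite dimensions, and it is worth seeing once in coordinates. Here is a minimal sketch of my own (not from the talk): the “state” dual to a channel $\Phi$ is built by applying $\Phi$ to one half of an unnormalized maximally entangled pair, $J(\Phi) = \sum_{ij} E_{ij} \otimes \Phi(E_{ij})$, where the $E_{ij}$ are matrix units. For the identity channel this recovers $|\Omega\rangle\langle\Omega|$ with $\Omega = \sum_i |ii\rangle$, which is exactly the entangled “cup” of the string-diagram picture:

```python
# Minimal Choi-Jamiolkowski sketch: the Choi matrix of a channel phi
# acting on d x d matrices is J(phi) = sum_{ij} E_ij (x) phi(E_ij),
# a (d^2 x d^2) matrix. Matrices are plain nested lists.

def choi(phi, d):
    """Choi matrix of a channel phi, given as a function on d x d matrices."""
    C = [[0] * (d * d) for _ in range(d * d)]
    for i in range(d):
        for j in range(d):
            # E_ij: the matrix unit with a 1 in position (i, j)
            E = [[1 if (r, c) == (i, j) else 0 for c in range(d)]
                 for r in range(d)]
            out = phi(E)
            for r in range(d):
                for c in range(d):
                    C[i * d + r][j * d + c] = out[r][c]
    return C

# For the identity channel on a qubit, the Choi matrix is the projector
# onto the unnormalized Bell vector |00> + |11>.
identity_choi = choi(lambda M: M, 2)
```

The duality the talk leaned on is that this assignment is invertible: the channel can be recovered from its Choi matrix, so statements about channels can be traded for statements about (entangled) states.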

A more categorical look at foundational questions was given by Rick Blute’s talk on “Categorical Structures in AQFT”, i.e. Algebraic Quantum Field Theory.  This is a formalism for QFT which takes into account the causal structure it lives on – for example, on Minkowski space, one has a causal order for points, with $x \leq y$ if there is a future-directed null or timelike curve from $x$ to $y$.  Then there’s an “interval” (more literally, a double cone) $[x,y] = \{ z | x \leq z \leq y\}$, and these cones form a poset under inclusion (so this is a version of the poset of subspaces of a space which keeps track of the causal structure).  Then an AQFT is a functor $\mathbb{A}$ from this poset into C*-algebras (taking inclusions to inclusions): the idea is that each local region of space has its own algebra of observables relevant to what’s found there.  Of course, these algebras can all be pieced together (i.e. one can take a colimit of the diagram of inclusions coming from all regions of spacetime).  The result is $\hat{\mathbb{A}}$.  Then, one finds a category of certain representations of it on a Hilbert space $H$ (namely, “DHR” representations).  It turns out that this category is always equivalent to the representations of some group $G$, the gauge group of the AQFT.  Rick talked about these results, and suggested various ways to improve them – for example, by improving how one represents spacetime.
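The causal order underlying all this is simple enough to write down explicitly. Here is a sketch of my own for 1+1-dimensional Minkowski space, with points $(t, s)$: the order $x \leq y$ holds when $y - x$ is future-directed and causal, and double cones $[x,y]$ are ordered by inclusion, which reduces to comparisons of their corner points:

```python
# Causal order in 1+1 Minkowski space, points written (t, s).
# x <= y iff y - x is future-directed (dt >= 0) and causal
# (dt^2 - ds^2 >= 0). Double cones [x, y] ordered by inclusion.

def causally_precedes(x, y):
    """x <= y in the causal order."""
    dt, ds = y[0] - x[0], y[1] - x[1]
    return dt >= 0 and dt * dt - ds * ds >= 0

def cone_contains(cone, other):
    """Does the double cone [x, y] contain the double cone [u, v]?"""
    (x, y), (u, v) = cone, other
    return causally_precedes(x, u) and causally_precedes(v, y)
```

The poset of double cones under `cone_contains` is the indexing category on which the AQFT functor $\mathbb{A}$ is defined, with inclusions of cones going to inclusions of algebras.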

The last talk I’d attempt to shoehorn into this category was by Daniel Lehmann.  He was making an analysis of the operation “tensor product”, that is, the monoidal operation in $Hilb$.  For such a fundamental operation – physically, it represents taking two systems and looking at the combined system containing both – it doesn’t have a very clear abstract definition.  Lehmann presented a way of characterizing it by a universal property analogous to the universal definitions for products and coproducts.  This definition makes sense whenever there is an idea of a “bimorphism” – a thing which abstracts the properties of a “bilinear map” for vector spaces.  This seems to be closely related to the link between multicategories and monoidal categories (discussed in, for example, Tom Leinster’s book).
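The universal property Lehmann was after is easy to state in coordinates, and a toy calculation (my own, not from the talk) shows the mechanism: a bilinear map $f$ on $\mathbb{R}^m \times \mathbb{R}^n$ is determined by its values $f(e_i, e_j)$ on basis pairs, so it factors uniquely through the tensor product as a linear map on the $mn$ coordinates of $v \otimes w$:

```python
# The universal property of the tensor product in coordinates: every
# bilinear map f factors as a linear map applied to v (x) w.

def tensor(v, w):
    """Coordinates of v (x) w: all products v_i * w_j."""
    return [vi * wj for vi in v for wj in w]

def linearization(f, m, n):
    """The induced linear map on R^{m*n}, via f's values on basis tensors."""
    basis = lambda k, dim: [1 if i == k else 0 for i in range(dim)]
    coeffs = [f(basis(i, m), basis(j, n)) for i in range(m) for j in range(n)]
    return lambda t: sum(c * tk for c, tk in zip(coeffs, t))

# Sample bilinear map: f(v, w) = v . (w reversed)
f = lambda v, w: sum(vi * wj for vi, wj in zip(v, reversed(w)))
f_lin = linearization(f, 2, 2)
v, w = [1, 2], [3, 4]
assert f_lin(tensor(v, w)) == f(v, w)
```

The abstract characterization replaces “basis pairs” with the notion of a bimorphism, which is what makes the definition portable beyond vector spaces, and is where the connection to multicategories comes in.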

Categories and Logic

Some less physics-oriented and more categorical talks rounded out the part of the program that I saw.  One I might note was Mike Stay‘s talk about the Rosetta Stone paper he wrote with John Baez.  The Rosetta Stone, of course, was a major archaeological find from the Ptolemaic period in Egypt – by that point, Egypt had been conquered by Alexander of Macedon and had a Greek-speaking elite, but the language wasn’t widespread.  So the stone is an official pronouncement with a message in Greek, and in two written forms of Egyptian (hieroglyphic and demotic), neither of which had been readable to moderns until the stone was uncovered and correspondences could be deduced between the same message in a known language and two unknown ones.  The idea of their paper, and Mike’s talk, is to collect together analogs between four subjects: physics, topology, computation, and logic.  The idea is that each can be represented in terms of monoidal categories.  In physics, there is the category of Hilbert spaces; in topology one can look at the category of manifolds and cobordisms; in computation, there’s a monoidal category whose objects are data types, and whose morphisms are (equivalence classes of) programs taking data of one type in and returning data of another type; in logic, one has objects being propositions and morphisms being (classes of) proofs of one proposition from another.  The paper has a pretty extensive list of analogs between these domains, so go ahead and look in there for more!

Peter Selinger gave a talk about “Higher-Order Quantum Computation”.  This had to do with interesting phenomena that show up when dealing with “higher-order types” in quantum computers.  These are “data types”, as I just described – the “higher-order” types can be interpreted by blurring the distinction between a “system” and a “process”.  A data type describing a system we might act on might be $A$ or $B$.  A higher-order type like $A \multimap B$ describes a process which takes something of type $A$ and returns something of type $B$.  One could interpret this as a black box – and performing processes on a type $A \multimap B$ is like studying that black box as a system itself.  This type is like an “internal hom” – and so one might like to say, “well, it’s dual to tensor – so it amounts to taking $A^* \otimes B$, since we’re in the category of Hilbert spaces”.  The trouble is, for physical computation, we’re not quite in the category where that works.  Because not all operators are significant: only some class of completely positive maps are physical.  So we don’t have the hom-tensor duality to use (equivalently, we don’t have a well-behaved dual), and these types have to be considered in their own right.  And, because computations might not halt, operations studying a black box might not halt.  So in particular, a “co-co-qubit” isn’t the same as a qubit.  A co-qubit is a black box which eats a qubit and terminates with some halting probability.  A co-co-qubit eats a co-qubit and does the same.  If not for the halting probability, one could equally well see a qubit “eating” a co-co-qubit as the reverse.  But in fact they’re different.  A key fact in Peter’s talk is that quantum computation has new logical phenomena happening with types of every higher order.  Quantifying this (an open problem, apparently) would involve finding some equivalent of Bell inequalities that apply to every higher order of type.
It’s interesting to see how different quantum computing is, in not-so-obvious ways, from the classical kind.

Mehrnoosh Sadrzadeh gave a talk describing how “string diagrams” from monoidal categories, and representations of them, have been used in linguistics.  The idea is that the grammatical structure of a sentence can be built by “composing” structures associated to words – for example, a verb can be composed on left and right with subject and object to build a phrase.  She described some of the syntactic analysis that went into coming up with such a formalism.  But the interesting bit was to compare putting semantics on that syntax to taking a representation.  In particular, she described the notion of a semantic space in linguistics: this is a high-dimensional vector space that compares the meanings of words.  A rough but surprisingly effective way to clump words together by meaning just uses the statistics of a big sample of text, measuring how often they co-occur in the same context.  Then there is a functor that “adds semantics” by mapping a category of string diagrams representing the syntax of sentences into one of vector spaces like this.  Applying the kind of categorical analysis usually used in logic to natural language seemed like a pretty neat idea – though it’s clear one has to make many more simplifying assumptions.
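The “semantic space” idea is simple enough to sketch directly. Here is a toy version of my own (a drastic simplification of what any real system does): represent each word by its co-occurrence counts with nearby words in a corpus, and compare meanings by the cosine of the angle between those count vectors:

```python
import math
from collections import Counter

# Toy distributional semantics: each word is represented by a vector of
# co-occurrence counts with the words around it, and words are compared
# by cosine similarity of these vectors.

def cooccurrence_vectors(sentences, window=2):
    vecs = {}
    for words in sentences:
        for i, w in enumerate(words):
            ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv)
```

On even a three-sentence “corpus”, words that occur in similar contexts (say, two animal names used as subjects of the same verb) end up closer together than words that merely appear near each other, which is the clumping-by-meaning effect described above.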

On the whole, it was a great conference with a great many interesting people to talk to – as you might guess from the fact that it took me three posts to comment on everything I wanted.
