

I say this is about a “recent” talk, though of course it was last year… But to catch up: Ivan Dynov was visiting from York and gave a series of talks, mainly to the noncommutative geometry group here at UWO, about the problem of classifying von Neumann algebras. (Strictly speaking, since no complete set of invariants for von Neumann algebras is yet known, one could dispute that the following is a “classification”, but here it is anyway.)

The first point is that any von Neumann algebra \mathcal{A} is a direct integral of factors, which are highly noncommutative in that the centre of a factor consists of just the multiples of the identity. The factors are the irreducible building blocks of the noncommutative features of \mathcal{A}.

There are two basic tools that provide what classification we have for von Neumann algebras: first, the order theory for projections; second, the Tomita-Takesaki theory. I’ve mentioned the Tomita flow previously, but as for the first part:

A projection (self-adjoint idempotent) is just what it sounds like, if you represent \mathcal{M} as an algebra of bounded operators on a Hilbert space. An extremal but informative case is \mathcal{M} = \mathcal{B}(H), but in general not every bounded operator appears in \mathcal{M}.

In the case where \mathcal{M} = \mathcal{B}(H), a projection in \mathcal{M} is the same thing as a closed subspace of H. There is an (orthomodular) lattice of them (in general, the lattice of projections is \mathcal{P(M)}). For subspaces, the dimension characterizes a subspace up to isomorphism – any two subspaces of the same dimension are related by some operator in \mathcal{B}(H) (but not necessarily by one in a general \mathcal{M}).

The idea is to generalize this to projections in a general \mathcal{M}, and get some characterization of \mathcal{M}. The kind of isomorphism that matters for subspaces is a partial isometry – a map u which restricts to an isometry on the orthogonal complement of its kernel, and to zero on the kernel itself. The corresponding projections are then conjugate by u. So we define, for a general \mathcal{M}, an equivalence relation on projections, which amounts to saying that e \sim f if there’s a partial isometry u \in \mathcal{M} with e = u^*u and f = uu^* (i.e. the projections are conjugate by u).
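As a tiny concrete check (my own toy example, in \mathcal{M} = \mathcal{B}(\mathbb{C}^2), with plain Python lists standing in for 2×2 matrices): the shift operator u below is a partial isometry, and u^*u and uu^* are the projections onto the two coordinate axes – distinct subspaces, but equivalent projections:

```python
# Murray-von Neumann equivalence in B(C^2), with 2x2 real matrices.
# u is a partial isometry: isometric on span(e2), zero on span(e1).

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adjoint(a):  # transpose = adjoint, since all entries are real
    return [[a[j][i] for j in range(2)] for i in range(2)]

u = [[0, 1],
     [0, 0]]                  # sends e2 -> e1, kills e1

e = matmul(adjoint(u), u)     # u* u = projection onto span(e2)
f = matmul(u, adjoint(u))     # u u* = projection onto span(e1)

print(e)  # [[0, 0], [0, 1]]
print(f)  # [[1, 0], [0, 0]]
```

In \mathcal{B}(H) itself this is unremarkable; the interesting constraint, as above, is that u must lie in \mathcal{M}.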

Then there’s an order relation on the equivalence classes of projections – which, as suggested above, we should think of as generalizing “dimension” from the case \mathcal{M} = \mathcal{B}(H). The order relation says that e \leq f if e \sim e_0 where e_0 \leq f as a projection (i.e. inclusion, thinking of a projection as its image subspace of H). But the fact that \mathcal{M} may not be all of \mathcal{B}(H) has some counterintuitive consequences. For example, we can define a projection e \in \mathcal{M} to be finite if the only time e \sim e_0 \leq e is when e_0 = e (which is just the usual definition of finite, relativized to use only maps in \mathcal{M}). We can call e \in \mathcal{M} a minimal projection if it is nonzero and f \leq e implies f = e or f = 0.

Then the first pass at a classification of factors (i.e. “irreducible” von Neumann algebras) says a factor \mathcal{M} is:

  • Type I: If \mathcal{M} contains a minimal projection
  • Type II: If \mathcal{M} contains no minimal projection, but contains a (nontrivial) finite projection
  • Type III: If \mathcal{M} contains no minimal or nontrivial finite projection

We can further subdivide them by following the “dimension-function” analogy, which captures the ordering of projections for \mathcal{M} = \mathcal{B}(H). It’s a theorem that for a factor there is a function d : \mathcal{P(M)} \rightarrow [0,\infty] with the properties of “dimension”: it is constant on \sim-equivalence classes, respects finiteness, and is additive on direct sums. Such a d is not unique (it can be rescaled), but letting D be its range, every factor falls into one of the following types:

  • Type I_n: When D = \{0,1,\dots,n\} (that is, there is a maximal projection, which is finite)
  • Type I_\infty: When D = \{ 0, 1, \dots, \infty \} (there is an infinite projection in \mathcal{M}, but d still takes discrete values)
  • Type II_1: When D = [ 0 , 1 ] (the maximal projection is finite – such a case can always be rescaled so the maximum d is 1)
  • Type II_\infty: When D = [ 0 , \infty ] (the maximal projection is infinite – notice that this has the same order type as type II_1)
  • Type III: When D = \{ 0, \infty \} (every nonzero projection is infinite – these factors are called purely infinite)

The type I factors are all just (isomorphic to) the algebras \mathcal{B}(H) for some finite or countably infinite dimensional Hilbert space – which we can think of as a function space like l_2(X) for some set X. Types II and III are more interesting. Type II algebras are related to what von Neumann called “continuous geometries” – analogs of projective geometry (i.e. geometry of subspaces), with a continuous dimension function.

(If we think of these algebras \mathcal{M} as represented on a Hilbert space H, then in fact, thought of as subspaces of H, all the nonzero projections give infinite dimensional subspaces. But the definition of “finite” is relative to \mathcal{M}: a partial isometry from a subspace H' \leq H onto a proper subspace H'' < H' of itself may exist in \mathcal{B}(H) without lying in \mathcal{M}.)

In any case, this doesn’t exhaust what we know about factors. In his presentation, Ivan Dynov described some examples constructed from crossed products of algebras, which is important later, but for the moment, I’ll finish describing another invariant which helps pick apart the type III factors. This is related to Tomita-Takesaki theory, which I’ve mentioned in here before.

You’ll recall that the Tomita flow (associated to a given state \phi) is given by \sigma^{\phi}_t(A) = \Delta^{it} A \Delta^{-it}, where \Delta is the positive operator appearing in the polar decomposition S = J \Delta^{1/2} of the conjugation operator S (which depends on the state \phi because it refers to the GNS representation of \mathcal{M} on a Hilbert space H). This flow is uninteresting for type I or II factors, but for type III factors, it’s the basis of Connes’ classification.
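For reference, the standard modular-theory bookkeeping (this is textbook Tomita–Takesaki, nothing specific to the talks; \Omega is the GNS cyclic vector for \phi):

```latex
% S is the closure of the conjugation map A\Omega \mapsto A^*\Omega
% on the GNS Hilbert space of (\mathcal{M}, \phi).
\begin{align*}
  S &= J \Delta^{1/2}
    && \text{(polar decomposition; } J \text{ antiunitary)} \\
  \Delta &= S^* S
    && \text{(modular operator: positive, self-adjoint)} \\
  \sigma^{\phi}_t(A) &= \Delta^{it} A \Delta^{-it}
    && \text{(modular / Tomita flow)} \\
  J \mathcal{M} J &= \mathcal{M}'
    && \text{(Tomita's theorem)}
\end{align*}
```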

In particular, we can understand the Tomita flow in terms of the spectrum of \Delta, since the flow comes from imaginary powers of \Delta. Moreover, as I commented last time, the really interesting part of the flow is independent of which state we pick. So we are interested in the part of the spectrum common to the \Delta associated to different states \phi, and define

S(\mathcal{M}) = \cap_{\phi \in W} Spec(\Delta_{\phi})

(where W is the set of all states on \mathcal{M} – or more precisely, of faithful normal semifinite “weights”, which generalize states)

Then S(\mathcal{M}) - \{ 0 \}, it turns out, is always a multiplicative subgroup of the positive real line, and the possible cases refine to these:

  • S(\mathcal{M}) = \{ 1 \} : This is when \mathcal{M} is type I or II
  • S(\mathcal{M}) = [0, \infty ) : Type III_1
  • S(\mathcal{M}) = \{ 0 \} \cup \{ \lambda^n : n \in \mathbb{Z} \} : Type III_{\lambda} (one type for each \lambda in the range (0,1)), and
  • S(\mathcal{M}) = \{ 0 , 1 \} : Type III_0

(Taking logarithms, S(\mathcal{M}) - \{ 0 \} gives an additive subgroup \Gamma(\mathcal{M}) of \mathbb{R}, which carries the same information.) So roughly, the three types are: type I, finite and countable matrix algebras, where the dimension function tells everything; type II, where the dimension function behaves surprisingly (thought of as analogous to projective geometry); and type III, where dimensions become infinite but a “time flow” comes into play. The spectra of \Delta above tell us how observables change in time under the Tomita flow: high eigenvalues make an observable’s value change faster with time, low ones slower. The spectra thus describe the possible arrangements of these eigenvalues: apart from the two degenerate cases, the types are a continuous positive spectrum, and a discrete one with a single generator. (I think of free and bound energy spectra for an analogy – I’m not familiar enough with this stuff to be sure it’s the right one.)

This role for time flow is interesting because of the procedures for constructing examples of type III factors, which Ivan Dynov also described to us. These are examples associated with dynamical systems, and they show up as crossed products. See the link for details, but roughly this is a “product” of an algebra by a group action – a von Neumann algebra analog of the semidirect product H \rtimes K of groups, incorporating an action of K on H. Indeed, if a (locally compact) group K acts on a group H, then the crossed product of the corresponding algebras is just the von Neumann algebra of the semidirect product group.

In general, a (W*)-dynamical system is (\mathcal{M},G,\alpha), where G is a locally compact group acting by automorphisms on the von Neumann algebra \mathcal{M}, by the map \alpha : G \rightarrow Aut(\mathcal{M}). Then the crossed product \mathcal{M} \rtimes_{\alpha} G is the algebra for the dynamical system.
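In more concrete terms (a standard presentation, sketched here for discrete G, with \mathcal{M} acting on a Hilbert space H): the crossed product is generated on \ell^2(G, H) by a copy of \mathcal{M} and unitaries implementing the action,

```latex
% Generators of M \rtimes_\alpha G on \ell^2(G, H), for G discrete:
\begin{align*}
  (\pi(a)\xi)(g)     &= \alpha_{g^{-1}}(a)\,\xi(g), & a &\in \mathcal{M} \\
  (\lambda_h \xi)(g) &= \xi(h^{-1}g),               & h &\in G \\
  \lambda_h\, \pi(a)\, \lambda_h^{*} &= \pi(\alpha_h(a))
    && \text{(covariance)}
\end{align*}
```

The covariance relation is exactly the algebra analog of the conjugation relation defining a semidirect product of groups.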

A significant part of the talks (which I won’t cover here in detail) described how to use some examples of these to construct particular type III factors. In particular, a theorem going back to Murray and von Neumann says \mathcal{M} = L^{\infty}(X,\mu) \rtimes_{\alpha} G is a factor if the action of the discrete group G on a finite measure space X is (essentially free and) ergodic (i.e. has no invariant sets of intermediate measure – roughly, almost every orbit is spread across the whole space). Another says this factor is type III unless there’s a G-invariant measure equivalent to (i.e. mutually absolutely continuous with) \mu. Some clever examples I won’t reconstruct gave some factors like this explicitly.

He concluded by talking about some efforts to improve the classification: the above is not a complete set of invariants, so a lot of work in this area goes into making the set more complete. One set of results he told us about does this somewhat for the case of hyperfinite factors (i.e. ones which are limits of finite-dimensional ones), namely that if they are type III, they can be realized as crossed products of an algebra with a discrete group.

At any rate, these constructions are interesting, but it would take more time than I have here to look in detail – perhaps another time.

Last week there was an interesting series of talks by Ivan Dynov about the classification of von Neumann algebras, and I’d like to comment on that, but first, since it’s been a while since I posted, I’ll catch up on some end-of-term backlog and post about some points I brought up a couple of weeks ago in a talk I gave in the Geometry seminar at Western. This was about getting Extended TQFT’s from groups, which I’ve posted about plenty previously. Mostly I talked about the construction that arises from “2-linearization” of spans of groupoids (see e.g. the sequence of posts starting here).

The first intuition comes from linearizing spans of (say finite) sets. Given a map of sets f : A \rightarrow B, you get a pair of maps f^* : \mathbb{C}^B \rightarrow \mathbb{C}^A and f_* : \mathbb{C}^A \rightarrow \mathbb{C}^B between the vector spaces on A and B. (Moving from the set to the vector space stands in for moving to quantum mechanics, where a state is a linear combination of the “pure” ones – elements of the set.) The first map is just “precompose with f“, and the other involves summing over the preimage (it takes the basis vector a \in A to the basis vector f(a) \in B). These two maps are (linear) adjoints, if you use the canonical inner products where A and B are orthonormal bases. So then a span X \stackrel{s}{\leftarrow} S \stackrel{t}{\rightarrow} Y gives rise to a linear map t_* \circ s^* : \mathbb{C}^X \rightarrow \mathbb{C}^Y (and an adjoint linear map going the other way).
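In coordinates this is elementary matrix arithmetic. A quick sketch (the sets and maps are made up for illustration) of f^*, f_*, and the composite t_* \circ s^* for a span:

```python
# Linearizing a span of finite sets: f^* precomposes with f, and f_*
# sums over preimages. Vectors in C^A are dicts keyed by elements of A.

def pullback(f, vec_on_B):
    """f^* : C^B -> C^A, defined by (f^* v)(a) = v(f(a))."""
    return {a: vec_on_B[b] for a, b in f.items()}

def pushforward(f, B, vec_on_A):
    """f_* : C^A -> C^B, defined by (f_* v)(b) = sum of v over f^{-1}(b)."""
    out = {b: 0 for b in B}
    for a, value in vec_on_A.items():
        out[f[a]] += value
    return out

# A span X <-s- S -t-> Y, chosen arbitrarily:
X, Y = ['x1', 'x2'], ['y1', 'y2']
s = {'s1': 'x1', 's2': 'x1', 's3': 'x2'}   # S -> X
t = {'s1': 'y1', 's2': 'y2', 's3': 'y2'}   # S -> Y

v = {'x1': 1.0, 'x2': 2.0}                 # a "state" on X
result = pushforward(t, Y, pullback(s, v)) # t_* (s^* v)
print(result)  # {'y1': 1.0, 'y2': 3.0}
```

The composite is the matrix whose (y, x) entry counts the elements of S over (x, y) – which is why isomorphic spans give the same linear map.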

There’s more motivation for passing to 2-Hilbert spaces when your “pure states” live in an interesting stack (which can be thought of, up to equivalence, as a groupoid hence a category) rather than an ordinary space, but it isn’t hard to do. Replacing \mathbb{C} with the category \mathbf{FinHilb}_\mathbb{C}, and the sum with the direct sum of (finite dimensional) Hilbert spaces gives an analogous story for (finite dimensional) 2-Hilbert spaces, and 2-linear maps.

I was hoping to get further into the issues that are involved in making the 2-linearization process work with Lie groups, rather than finite groups. Among other things, this generalization ends up requiring us to work with infinite dimensional 2-Hilbert spaces (in particular, replacing \mathbf{FinHilb} with \mathbf{Hilb}). Other issues are basically measure-theoretic, since in various parts of the construction one uses direct sums. For Lie groups, these need to be direct integrals. There are also places where counting measure is used in the case of a discrete group G. So part of the point is to describe how to replace these with integrals. The analysis involved with 2-Hilbert spaces isn’t so different from that required for (1-)Hilbert spaces.

Category theory and measure theory (analysis in general, really) have not historically got along well, though there are exceptions. When I was giving a similar talk at Dalhousie, I was referred to some papers by Mike Wendt, “The Category of Disintegration“ and “Measurable Hilbert Sheaves“, which deal category-theoretically with ideas of von Neumann and Dixmier (a similar remark applies to Yetter’s paper “Measurable Categories“), so I’ve been reading these recently. What, in the measurable category, is described in terms of measurable bundles of Hilbert spaces can be turned into a description in terms of Hilbert sheaves when the category knows about measures. But categories of measure spaces are generally not as nice, categorically, as the category of sets which gives the structure in the discrete case. Just for example, the product measure space X \times Y isn’t a categorical product – just a monoidal one, in a category Wendt calls \mathbf{Disint}.

This category has (finite) measure spaces as objects, and as morphisms has disintegrations. A disintegration from (X,\mathcal{A},\mu) to (Y,\mathcal{B},\nu) consists of:

  • a measurable function f : X \rightarrow Y
  • for each y \in Y, the preimage f^{-1}(y) = X_y becomes a measure space (with the obvious subspace sigma-algebra \mathcal{A}_y), with measure \mu_y

such that \mu can be recovered by integrating against \nu: that is, for any measurable A \subset X (i.e. A \in \mathcal{A}), we have

\int_Y \int_{A_y} d\mu_y(x) d\nu(y) = \int_A d\mu(x) = \mu (A)

where A_y = A \cap X_y.
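For finite measure spaces this condition is easy to verify directly. A toy sketch (all weights invented): take \nu to be the pushforward f_*\mu and \mu_y the normalized restriction of \mu to the fibre X_y; then integrating the fibrewise measures of the A_y against \nu recovers \mu(A):

```python
# A discrete disintegration: X = {0,...,5}, f(x) = x % 2, mu given by weights.
# mu_y is mu restricted to the fibre X_y and normalized; nu = f_* mu.

mu = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.1, 4: 0.2, 5: 0.1}

def f(x):
    return x % 2

Y = {f(x) for x in mu}
nu = {y: sum(m for x, m in mu.items() if f(x) == y) for y in Y}
mu_fib = {y: {x: m / nu[y] for x, m in mu.items() if f(x) == y} for y in Y}

A = {0, 1, 2}  # a measurable subset of X

# Outer integral against nu of the fibrewise measure of A_y = A n X_y:
lhs = sum(nu[y] * sum(mu_fib[y].get(x, 0) for x in A) for y in Y)
rhs = sum(mu[x] for x in A)  # mu(A) directly
print(abs(lhs - rhs) < 1e-12)  # True: the double integral recovers mu(A)
```

The same bookkeeping works for any finite (or countable) measure space; the content of the disintegration theorem is that it also works fibrewise for nice uncountable spaces.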

So the point is that such a morphism gives, not only a measurable function f : X \rightarrow Y, but a way of “disintegrating” X relative to Y. In particular, there is a forgetful functor U : \mathbf{Disint} \rightarrow \mathbf{Msble}, where \mathbf{Msble} is the category of measurable spaces, taking the disintegration (f, \{ (X_y,\mathcal{A}_y,\mu_y) \}_{y \in Y} ) to f.

Now, \mathbf{Msble} is Cartesian; in particular, the product of measurable spaces, X \times Y, is a categorical product. Not true for the product measure space in \mathbf{Disint}, where it is just a monoidal product1. Now, in principle, I would like to describe what to do with groupoids in (i.e. internal to) \mathbf{Disint}, but that would involve side treks into things like volumes of measured groupoids, and for now I’ll just look at plain spaces.

The point is that we want to reproduce the operations of “direct image” and “inverse image” for fields of Hilbert spaces. The first thing is to understand what’s meant by a “measurable field of Hilbert spaces” (MFHS) on a measurable space X. The basic idea was already introduced by von Neumann not long after formalizing Hilbert spaces. An MFHS on (X,\mathcal{A}) consists of:

  • a family \mathcal{H}_x of (separable) Hilbert spaces, for x \in X
  • a space \mathcal{M} \subset \bigoplus_{x \in X}\mathcal{H}_x of “measurable sections” \phi (i.e. assignments x \mapsto \phi_x \in \mathcal{H}_x, splitting the projection maps \pi_x : \mathcal{M} \rightarrow \mathcal{H}_x) with three properties:
  1. measurability: the function x \mapsto ||\phi_x|| is measurable for all \phi \in \mathcal{M}
  2. completeness: if \psi \in \bigoplus_{x \in X} \mathcal{H}_x makes the function x \mapsto \langle \phi_x , \psi_x \rangle measurable for every \phi \in \mathcal{M}, then \psi \in \mathcal{M}
  3. separability: there is a countable set of sections \{ \phi^{(n)} \}_{n \in \mathbb{N}} \subset \mathcal{M} such that for all x, the \phi^{(n)}_x are dense in \mathcal{H}_x

This is a categorified analog of a measurable function: a measurable way to assign Hilbert spaces to points. Yetter describes a 2-category \mathbf{Meas(X)} of MFHS’s on X, which is an (infinite dimensional) 2-vector space – i.e. an abelian category, enriched in vector spaces. \mathbf{Meas(X)} is analogous to the space of measurable complex-valued functions on X. It is also similar to a measurable-space-indexed version of \mathbf{Vect^k}, the prototypical 2-vector space – except that here we have \mathbf{Hilb^X}. Yetter describes how to get 2-linear maps (linear functors) between such 2-vector spaces \mathbf{Meas(X)} and \mathbf{Meas(Y)}.

This describes a 2-vector space – that is, a \mathbf{Vect}-enriched abelian category – whose objects are MFHS’s, and whose morphisms are the obvious ones (that is, fields of bounded operators, whose norms give a measurable function). One thing Wendt does is to show that an MFHS \mathcal{H} on X gives rise to a measurable Hilbert sheaf – that is, a sheaf of Hilbert spaces on the site whose “open sets” are the measurable sets in \mathcal{A}, and where inclusions and “open covers” are oblivious to any sets of measure zero. (This induces a sheaf of Hilbert spaces H on the open sets, if X is a topological space and \mathcal{A} is the usual Borel \sigma-algebra.) If this terminology doesn’t spell it out for you, the point is that for any measurable set A, there is a Hilbert space:

H(A) = \int^{\oplus}_A \mathcal{H}_x d\mu(x)

The descent (gluing) condition that makes this assignment a sheaf follows easily from the way the direct integral works, so that H(A) is the space of sections of \coprod_{x \in A} \mathcal{H}_x with finite norm, where the inner product of two sections \phi and \psi is the integral of \langle \phi_x, \psi_x \rangle over A.
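When the measure is discrete, the direct integral is just a weighted direct sum, and the gluing condition is visible by hand. A small sketch (finite toy data, not Wendt’s notation):

```python
# Direct integral over a finite measure space: H(A) consists of sections
# x -> phi_x in H_x, with <phi, psi> = sum over A of mu({x}) <phi_x, psi_x>.

mu = {'a': 0.5, 'b': 1.0, 'c': 2.0}          # measure on X = {a, b, c}
# A field of Hilbert spaces: H_x = R^{dim(x)}, sections stored as tuples.
phi = {'a': (1.0,), 'b': (0.0, 1.0), 'c': (1.0, 1.0)}
psi = {'a': (2.0,), 'b': (1.0, 1.0), 'c': (0.0, 3.0)}

def inner(phi, psi, A, mu):
    """Inner product on H(A), the direct integral of the H_x over A."""
    return sum(mu[x] * sum(p * q for p, q in zip(phi[x], psi[x]))
               for x in A)

total = inner(phi, psi, {'a', 'b', 'c'}, mu)
print(total)  # 0.5*2 + 1.0*1 + 2.0*3 = 8.0
# Gluing: the inner product over a disjoint union is the sum over the pieces.
print(total == inner(phi, psi, {'a'}, mu) + inner(phi, psi, {'b', 'c'}, mu))
```

The descent condition for the sheaf is exactly this additivity over (almost-)disjoint measurable pieces, with a countable-limit argument replacing the finite sum in general.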

The category of all such sheaves on X is called \mathbf{Hilb^X}, and it is equivalent to the category of MFHS up to equivalence a.e. Then the point is that a disintegration (f, \mu_y) : (X,\mathcal{A},\mu) \rightarrow (Y,\mathcal{B},\nu) gives rise to two operations between the categories of sheaves (though it’s convenient here to describe them in terms of MFHS: the sheaves are recovered by integrating as above):

f^* : \mathbf{Hilb^Y} \rightarrow \mathbf{Hilb^X}

which comes from pulling back along f – easiest to see for the MFHS, so that (f^*\mathcal{H})_x = \mathcal{H}_{f(x)}, and

\int_f^{\oplus} : \mathbf{Hilb^X} \rightarrow \mathbf{Hilb^Y}

the “direct image” operation, where in terms of MFHS, we have (\int_f^{\oplus}\mathcal{H})_y = \int_{f^{-1}(y)}^{\oplus}\mathcal{H}_x d\mu_y(x). That is, one direct-integrates over the preimage.
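On discrete toy data, both operations can be written down at the level of dimensions alone (counting measure on fibres; the maps and dimensions here are invented): the pullback copies the fibre over f(x), and the direct image adds dimensions over each preimage:

```python
# f^* and the fibrewise direct integral, acting on fields of Hilbert spaces
# over finite sets, recorded only by their dimensions.

f = {'x1': 'y1', 'x2': 'y1', 'x3': 'y2'}   # a map X -> Y
K_dim = {'y1': 2, 'y2': 3}                 # a field on Y (dimensions)
H_dim = {'x1': 1, 'x2': 4, 'x3': 2}        # a field on X (dimensions)

# Pullback: (f^* K)_x = K_{f(x)}
pull = {x: K_dim[y] for x, y in f.items()}

# Direct image: the direct integral over f^{-1}(y), so dimensions add
push = {}
for x, y in f.items():
    push[y] = push.get(y, 0) + H_dim[x]

print(pull)  # {'x1': 2, 'x2': 2, 'x3': 3}
print(push)  # {'y1': 5, 'y2': 2}
```

With a genuine measure \mu_y on each fibre, the sum becomes the direct integral \int^{\oplus}_{f^{-1}(y)} \mathcal{H}_x d\mu_y(x), but the shape of the two operations is the same.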

Now, these are measure-theoretic equivalents of two of the Grothendieck operations on sheaves (here is the text of Lipman’s Springer Lecture Notes book which includes an intro to them in Ch3 – a bit long for a first look, but the best I could find online). These are often discussed in the context of derived categories. The operation \int_f^{\oplus} is the analog of what is usually called f_*.

Part of what makes this different from the usual setting is that \mathbf{Disint} is not as nice as \mathbf{Top}, the more usual underlying category. What’s more, typically one talks about sheaves of sets, or abelian groups, or rings (which give the case of operations on schemes – i.e. topological spaces equipped with well-behaved sheaves of rings) – all of which are nicer categories than the category of Hilbert spaces. In particular, while in the usual picture the inverse image f^* fits into an adjunction with a direct image functor, this fails here because of the requirement that morphisms in \mathbf{Hilb} are bounded linear maps – instead, there’s a unique extension property.

Similarly, while f^* is always defined by pulling back along a function f, in the usual setting a direct image functor left adjoint to f^* can be found by taking a left Kan extension along f. This involves taking a colimit (specifically, imagine replacing the direct integral with a coproduct indexed over the same set). However, in this setting, the direct integral is not a coproduct (as the direct sum would be for vector spaces, or even finite-dimensional Hilbert spaces).

So in other words, something like the Grothendieck operations can be done with 2-Hilbert spaces, but the categorical properties (adjunction, Kan extension) are not as nice.

Finally, I’ll again remark that my motivation is to apply this to groupoids (or stacks), rather than just spaces X, and thus build Extended TQFT’s from (compact) Lie groups – but that’s another story, as we said when I was young.


1 Products: The fact that we want to look at spans in categories that aren’t Cartesian is the reason it’s more general to think about spans, rather than (as you can in some settings such as algebraic geometry) in terms of “bundles over the product”, which is otherwise equivalent. For sets or set-groupoids, this isn’t an issue.