I say this is about a “recent” talk, though of course it was last year… But to catch up: Ivan Dynov was visiting from York and gave a series of talks, mainly to the noncommutative geometry group here at UWO, about the problem of classifying von Neumann algebras. (Strictly speaking, since no complete set of invariants for von Neumann algebras is yet known, one could dispute whether the following is a “classification”, but here it is anyway.)

The first point is that any von Neumann algebra is a direct integral of *factors*, which are highly noncommutative in that the centre of a factor consists of just the scalar multiples of the identity. The factors are the irreducible building blocks out of which the noncommutative features of a general von Neumann algebra are assembled.

There are two basic tools that provide what classification we have for von Neumann algebras: first, the order theory for projections; second, the Tomita-Takesaki theory. I’ve mentioned the Tomita flow previously, but as for the first part:

A projection (a self-adjoint idempotent, $p = p^* = p^2$) is just what it sounds like, once we represent $\mathcal{M}$ as an algebra of bounded operators on a Hilbert space $H$. An extremal but informative case is $\mathcal{M} = \mathcal{B}(H)$, but in general not every bounded operator appears in $\mathcal{M}$.

In the case where $\mathcal{M} = \mathcal{B}(H)$, a projection in $\mathcal{M}$ is the same thing as a (closed) subspace of $H$. The projections form an orthomodular lattice, as they do in any von Neumann algebra. For subspaces of $H$, the dimension characterizes them up to isomorphism: any two subspaces of the same dimension are carried onto each other by some operator in $\mathcal{B}(H)$ (but not necessarily by one in a general $\mathcal{M}$).

The idea is to generalize this to projections in a general $\mathcal{M}$, and get some characterization of $\mathcal{M}$. The kind of isomorphism that matters for subspaces is a partial isometry – a map $u$ which preserves the metric on some subspace, and is zero on its orthogonal complement. So we define, for a general $\mathcal{M}$, an equivalence relation on projections, which amounts to saying that $p \sim q$ if there’s a partial isometry $u \in \mathcal{M}$ with $u^* u = p$ and $u u^* = q$ (i.e. the projections are conjugate by $u$).
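As a concrete sanity check, here is a tiny numerical sketch (my own illustration, not from the talks) of this equivalence in $\mathcal{B}(\mathbb{C}^2)$: the rank-one projections onto the two coordinate axes are equivalent via an explicit partial isometry.

```python
# Toy illustration: in B(C^2), the projections p and q onto the two
# coordinate axes are Murray-von Neumann equivalent via the partial
# isometry u that sends e1 to e2 and kills e2.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adjoint(A):
    # entries are real here, so the adjoint is just the transpose
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

p = [[1, 0], [0, 0]]   # projection onto span(e1)
q = [[0, 0], [0, 1]]   # projection onto span(e2)
u = [[0, 0], [1, 0]]   # partial isometry: e1 -> e2, e2 -> 0

assert matmul(adjoint(u), u) == p   # u*u = p
assert matmul(u, adjoint(u)) == q   # uu* = q
```

The same check works for any two subspaces of equal dimension; the point of the general theory is that the partial isometry must be found inside $\mathcal{M}$ itself.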

Then there’s an order relation on the equivalence classes of projections – which, as suggested above, we should think of as generalizing “dimension” from the case $\mathcal{M} = \mathcal{B}(H)$. The order relation says that $p \preceq q$ if $p \sim p'$ where $p' \leq q$ as a projection (i.e. inclusion, thinking of a projection as its image subspace of $H$). But the fact that $\mathcal{M}$ may not be all of $\mathcal{B}(H)$ has some counterintuitive consequences. For example, we can define a projection $p$ to be **finite** if the only time $p \sim p' \leq p$ is when $p' = p$ (which is just the usual definition of finiteness, relativized to use only maps in $\mathcal{M}$). We can call $p$ a **minimal** projection if it is nonzero and $q \leq p$ implies $q = p$ or $q = 0$.

Then the first pass at a classification of factors (i.e. “irreducible” von Neumann algebras) says a factor $\mathcal{M}$ is:

- Type $I$: if $\mathcal{M}$ contains a minimal projection
- Type $II$: if $\mathcal{M}$ contains no minimal projection, but does contain a (nontrivial) finite projection
- Type $III$: if $\mathcal{M}$ contains no minimal or nontrivial finite projection

We can further subdivide these by following the “dimension-function” analogy, which captures the ordering of projections for $\mathcal{B}(H)$: it’s a theorem that there will be a function $d$ from projections to $[0, \infty]$ which has the properties of “dimension”, in that it gets along with the equivalence relation ($d(p) = d(q)$ iff $p \sim q$), respects finiteness ($d(p) < \infty$ iff $p$ is finite), and adds up on direct sums ($d(p + q) = d(p) + d(q)$ for orthogonal $p$ and $q$). There may be more than one such function $d$, but letting $D$ be its range, every case has one of the types:

- Type $I_n$: when $D = \{0, 1, \dots, n\}$ (that is, there is a maximal, finite projection)
- Type $I_\infty$: when $D = \{0, 1, 2, \dots, \infty\}$ (there is an infinite projection in $\mathcal{M}$)
- Type $II_1$: when $D = [0, 1]$ (the maximal projection is finite – such a case can always be rescaled so the maximum is $1$)
- Type $II_\infty$: when $D = [0, \infty]$ (the maximal projection is infinite – notice that this has the same order type as type $II_1$)
- Type $III$: when $D = \{0, \infty\}$ (an infinite maximal projection)
- Types $I_\infty$, $II_\infty$ and $III$ all have $d(1) = \infty$ (these are called properly infinite)
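To make the dimension-function picture concrete, here is a toy sketch (my own, with illustrative names) for the type $I_3$ factor $\mathcal{B}(\mathbb{C}^3)$, tracking only projections onto coordinate subspaces, where $d$ is just the cardinality of the index set:

```python
from itertools import combinations

# Toy model: in the type I_3 factor B(C^3), identify a projection onto a
# coordinate subspace with the frozenset of basis indices it contains.
# The dimension function d is then just cardinality.

n = 3
projections = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(n), r)]

def d(p):
    """Dimension function on coordinate-subspace projections."""
    return len(p)

def equivalent(p, q):
    # p ~ q: a partial isometry (e.g. one permuting basis vectors)
    # matches any two coordinate subspaces of equal dimension.
    return d(p) == d(q)

def below(p, q):
    # p is below q in the order: p is equivalent to a subprojection of q.
    return d(p) <= d(q)

# The range D of the dimension function is {0, 1, 2, 3}: type I_3.
D = sorted({d(p) for p in projections})
assert D == [0, 1, 2, 3]
```

In type $II$ the analogous range is a continuum, which is exactly what makes the “continuous geometry” picture below surprising.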

The type $I$ cases are all just (equivalent to) matrix algebras on some countable- or finite-dimensional vector space – which we can think of as a function space like $\ell^2(X)$ for some set $X$. Types $II$ and $III$ are more interesting. Type $II$ algebras are related to what von Neumann called “continuous geometries” – analogs of projective geometry (i.e. the geometry of subspaces), with a continuous dimension function.

(If we think of these algebras as represented on a Hilbert space $H$, then in fact, thought of as subspaces of $H$, all the nonzero projections give infinite-dimensional subspaces. This is consistent because the definition of “finite” is relative to $\mathcal{M}$: a partial isometry taking a subspace to a proper subspace of itself may exist in $\mathcal{B}(H)$ but not in $\mathcal{M}$.)

In any case, this doesn’t exhaust what we know about factors. In his presentation, Ivan Dynov described some examples constructed from crossed products of algebras, which will be important later, but for the moment, I’ll finish by describing another invariant which helps pick apart the type $III$ factors. This is related to Tomita-Takesaki theory, which I’ve mentioned here before.

You’ll recall that the Tomita flow (associated to a given state $\phi$) is given by $\sigma^\phi_t(x) = \Delta^{it} x \Delta^{-it}$, where $\Delta$ is the positive self-adjoint part, in the polar decomposition, of the conjugation operator $S : x\Omega \mapsto x^*\Omega$ (which depends on the state because it refers to the GNS representation of $\mathcal{M}$ on a Hilbert space $H_\phi$). This flow is uninteresting for type $I$ or $II$ factors, but for type $III$ factors, it’s the basis of Connes’ classification.

In particular, we can understand the Tomita flow in terms of the eigenvalues of $\Delta$, since the flow comes from exponentials of $\Delta$. Moreover, as I commented last time, the really interesting part of the flow is independent of which state we pick. So we are interested in the common spectrum of the operators $\Delta_\phi$ associated to different states $\phi$, and define

$S(\mathcal{M}) = \bigcap_{\phi \in W} Sp(\Delta_\phi)$

(where $W$ is the set of all states on $\mathcal{M}$, or actually “weights”)

Then $S(\mathcal{M}) \cap (0, \infty)$, it turns out, is always a multiplicative subgroup of the positive real line, and the possible cases refine to these:

- $S(\mathcal{M}) = \{1\}$: this is when $\mathcal{M}$ is type $I$ or $II$
- $S(\mathcal{M}) = \{0, 1\}$: type $III_0$
- $S(\mathcal{M}) = \{0\} \cup \{\lambda^n : n \in \mathbb{Z}\}$: type $III_\lambda$ (for each $\lambda$ in the range $0 < \lambda < 1$)
- $S(\mathcal{M}) = [0, \infty)$: type $III_1$

(Taking logarithms, $\log(S(\mathcal{M}) \cap (0, \infty))$ gives an additive subgroup of $\mathbb{R}$, which carries the same information.) So roughly, the three types are: type $I$, the finite and countable matrix algebras, where the dimension function tells everything; type $II$, where the dimension function behaves surprisingly (thought of as analogous to projective geometry); and type $III$, where dimensions become infinite but a “time flow” dimension comes into play. The spectra of the $\Delta_\phi$ above tell us how observables change in time under the Tomita flow: high eigenvalues cause the observable’s value to change faster with time, while low ones change it slower. Thus the spectra describe the possible arrangements of these eigenvalues: apart from the two finite cases, the types are a continuous positive spectrum ($III_1$) and a discrete one with a single generator ($III_\lambda$). (I think of free and bound energy spectra as an analogy – I’m not familiar enough with this stuff to be sure it’s the right one.)
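For the smallest nontrivial case, these spectra can be computed by hand. The following sketch (my own illustration) uses the fact that for a faithful state $\phi = \mathrm{Tr}(\rho\,\cdot)$ on $M_n(\mathbb{C})$ with density matrix $\rho$, the modular operator acts by $\Delta_\phi(x) = \rho x \rho^{-1}$, so its eigenvalues are the ratios $\rho_i / \rho_j$ of eigenvalues of $\rho$. Intersecting the spectra over different states collapses everything to $\{1\}$, matching the first case above ($M_n(\mathbb{C})$ is type $I$).

```python
from fractions import Fraction
from itertools import product

def modular_spectrum(probs):
    """Eigenvalues of the modular operator for the state Tr(rho . ) on
    M_n(C) with rho = diag(probs): all ratios rho_i / rho_j."""
    return {a / b for a, b in product(probs, repeat=2)}

# Two different faithful states on M_2(C):
phi = modular_spectrum([Fraction(1, 3), Fraction(2, 3)])   # {1/2, 1, 2}
psi = modular_spectrum([Fraction(1, 4), Fraction(3, 4)])   # {1/3, 1, 3}

assert phi == {Fraction(1, 2), Fraction(1), Fraction(2)}
# The common part of the two spectra is just {1}: M_2(C) is type I.
assert phi & psi == {Fraction(1)}
```

A single matrix algebra can only ever see finitely many ratios; the discrete spectrum $\{\lambda^n\}$ of a type $III_\lambda$ factor arises when an infinite tensor product of such states keeps a common ratio $\lambda$ in every factor.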

This role for time flow is interesting because of the procedures for constructing examples of type $III$ factors, which Ivan Dynov also described to us. These are examples associated with dynamical systems, and they show up as crossed products. See the link for details, but roughly this is a “product” of an algebra by a group action – a kind of von Neumann algebra equivalent of the semidirect product $G \rtimes H$ of groups incorporating an action of $H$ on $G$. Indeed, if a (locally compact) group $H$ acts on a group $G$, then the crossed product of the group von Neumann algebra of $G$ by $H$ is just the von Neumann algebra of the semidirect product group $G \rtimes H$.

In general, a ($W^*$-)**dynamical system** is a triple $(\mathcal{M}, G, \alpha)$, where $G$ is a locally compact group acting by automorphisms on the von Neumann algebra $\mathcal{M}$, via the map $\alpha : G \rightarrow \mathrm{Aut}(\mathcal{M})$. Then the crossed product $\mathcal{M} \rtimes_\alpha G$ is the algebra for the dynamical system.
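As a finite toy model (my own sketch, not from the talks), take $X = \{0, 1\}$ with $G = \mathbb{Z}/2$ acting by the swap $s$, and represent both $\ell^\infty(X)$ and $G$ on $\ell^2(X \times G) \cong \mathbb{C}^4$. The crossed product is generated by these operators, and the defining covariance relation $u\,\pi(f)\,u^* = \pi(f \circ s)$ can be checked directly (for $\mathbb{Z}/2$ the swap is its own inverse, and $u^* = u$):

```python
# Finite toy model of a crossed product: X = {0,1}, G = Z/2 acting by
# the swap s; all operators act on l^2(X x G) = C^4.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

s = lambda x: 1 - x              # the swap action on X
idx = lambda x, g: 2 * x + g     # basis vector |x, g> of l^2(X x G)

def pi(f):
    """Represent f in l^inf(X) diagonally: pi(f)|x,g> = f(s^g(x))|x,g>."""
    A = [[0] * 4 for _ in range(4)]
    for x in (0, 1):
        for g in (0, 1):
            A[idx(x, g)][idx(x, g)] = f(s(x)) if g else f(x)
    return A

# u shifts the group coordinate: u|x,g> = |x, g+1 mod 2>.  It is a
# self-adjoint permutation matrix, so u* = u.
u = [[0] * 4 for _ in range(4)]
for x in (0, 1):
    for g in (0, 1):
        u[idx(x, (g + 1) % 2)][idx(x, g)] = 1

f = lambda x: 10 if x == 0 else 20            # a sample function on X
lhs = matmul(matmul(u, pi(f)), u)             # u pi(f) u*  (here u* = u)
assert lhs == pi(lambda x: f(s(x)))           # = pi(f o s): covariance
```

The von Neumann algebra generated by all the $\pi(f)$ and $u$ is the crossed product; the interesting factors arise when $X$ is a genuine measure space and the action is ergodic, as below.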

A significant part of the talks (which I won’t cover here in detail) described how to use some examples of these to construct particular type $III$ factors. In particular, a theorem of Murray and von Neumann says that the crossed product $L^\infty(X, \mu) \rtimes_\alpha \Gamma$ is a factor if the action of the discrete group $\Gamma$ on the finite measure space $(X, \mu)$ is (essentially) free and ergodic (i.e. the only invariant measurable sets are null or co-null – roughly, almost every orbit is dense). Another says this factor is type $III$ unless there’s a measure equivalent to (i.e. mutually absolutely continuous with) $\mu$ which is invariant under the action. Some clever examples, which I won’t reconstruct here, gave some factors like this explicitly.
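The ergodicity hypothesis can at least be visualized in a finite toy case (my own sketch): let $\mathbb{Z}$ act on $X = \mathbb{Z}/5$ with counting measure through the generator $x \mapsto x + 2 \pmod 5$. Since $\gcd(2, 5) = 1$ there is a single orbit, so the only invariant subsets are the trivial ones:

```python
from itertools import combinations

# Toy ergodicity check: Z acts on X = Z/5 through x -> x+2 (mod 5).
X = range(5)
act = lambda x: (x + 2) % 5

def invariant_subsets():
    """Enumerate all subsets of X fixed (as sets) by the action."""
    found = []
    for r in range(len(X) + 1):
        for c in combinations(X, r):
            subset = set(c)
            if {act(x) for x in subset} == subset:
                found.append(subset)
    return found

# Only the empty set and all of X are invariant: the action is ergodic.
assert invariant_subsets() == [set(), set(X)]
```

Of course the measure-theoretic examples that produce type $III$ factors need an action that is ergodic while admitting no equivalent invariant measure, which cannot happen in a finite toy like this one.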

He concluded by talking about some efforts to improve the classification: the above is not a complete set of invariants, so a lot of work in this area goes into improving the completeness of the set. One set of results he told us about does this somewhat for the case of hyperfinite factors (i.e. ones which are limits of finite-dimensional ones), namely that if they are type $III_\lambda$ with $0 < \lambda < 1$, they are crossed products of a hyperfinite type $II_\infty$ algebra with a discrete group.

At any rate, these constructions are interesting, but it would take more time than I have here to look at them in detail – perhaps another time.