Spin Foams


Well, as promised in the previous post, I’d like to give a summary of some of what was discussed at the conference I attended (quite a while ago now, late last year) in Erlangen, Germany.  I was there also to visit Derek Wise, talking about a project we’ve been working on for some time.

(I’ve also significantly revised this paper about Extended TQFT since then, and it now includes some stuff which was the basis of my talk at Erlangen on cohomological twisting of the category Span(Gpd).  I’ll get to that in the next post.  Also coming up, I’ll be describing some new things I’ve given some talks about recently which relate the Baez-Dolan groupoidification program to Khovanov-Lauda categorification of algebras – at least in one example, hopefully in a way which will generalize nicely.)

In the meantime, there were a few themes at the conference which bear on the Extended TQFT project in various ways, so in this post I’ll describe some of them.  (This isn’t an exhaustive description of all the talks: just of a selection of illustrative ones.)


Categories with Structures

A few talks were mainly about facts regarding the sorts of categories which get used in field theory contexts.  One important type, for instance, is that of fusion categories: a fusion category is a monoidal category which is enriched in vector spaces, generated by simple objects, and satisfies some other properties: essentially, it is a monoidal 2-vector space.  The basic examples are categories of representations (of groups, quantum groups, algebras, etc.), and fusion categories are an abstraction of (some of) their properties.  Many of the standard properties are described and proved in this paper by Etingof, Nikshych, and Ostrik, which also poses one of the basic conjectures, the “ENO Conjecture”, which was referred to repeatedly in various talks.  This is the guess that every fusion category can be given a “pivotal” structure: a monoidal natural isomorphism from Id to the double-dual functor (-)^{\ast\ast}.  It generalizes the theorem that there is always such an isomorphism from Id to the quadruple-dual functor (-)^{\ast\ast\ast\ast}.  More on this below.

Hendryk Pfeiffer talked about a combinatorial way to classify fusion categories in terms of certain graphs (see this paper here).  One way I understand this idea is to ask how much this sort of category really does generalize categories of representations, or actually comodules.  One starting point for this is the theorem that there’s a pair of functors between certain monoidal categories and weak Hopf algebras.  Specifically, the monoidal categories form (Cat \downarrow Vect)^{\otimes}, which consists of monoidal categories equipped with a forgetful functor into Vect.  From such a category one can get (via a coend) a weak Hopf algebra over the base field k (in the category WHA_k).  From a weak Hopf algebra H, one can get back such a category by taking all the modules of H.  These two processes form an adjunction: they’re not inverses, but there are natural maps between the two composites and the identity functors.
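(As an aside, my rough gloss on the reconstruction step – eliding the weak Hopf algebra subtleties, which involve the forgetful functor F landing in bimodules rather than plain vector spaces: the underlying space of H comes from the usual Tannakian coend

H = \int^{c \in C} F(c) \otimes F(c)^{\ast}

with multiplication and comultiplication induced by the monoidal structure of C and the evaluation pairing between F(c) and its dual.)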

The new result Hendryk gave is that if we restrict our categories over Vect to be abelian, and the functors between them to be linear, faithful, and exact (that is, roughly, that we’re talking about concrete monoidal 2-vector spaces), then this adjunction is actually an equivalence: so essentially, all such categories C may as well be module categories for weak Hopf algebras.  Then he gave a characterization of these in terms of the “dimension graph” (in fact a quiver) for (C,M), where M is one of the monoidal generators of C.  The vertices of \mathcal{G} = \mathcal{G}_{(C,M)} are labelled by the irreducible representations v_i (i.e. a set of generators of the category), and there’s a set of edges v_j \rightarrow v_l labelled by a basis of Hom(v_j, v_l \otimes M).  Then one can carry on and build a big graded algebra H[\mathcal{G}] whose m-graded part consists of length-m paths in \mathcal{G}.  The point is that the weak Hopf algebra of which C is (up to isomorphism) the module category will be a certain quotient of H[\mathcal{G}] (after imposing some natural relations in a systematic way).
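The combinatorics of that grading is easy to play with: if A is the adjacency matrix of \mathcal{G}, with A_{jl} = \dim Hom(v_j, v_l \otimes M), then the number of length-m paths – the dimension of the m-graded part of H[\mathcal{G}], before imposing relations – is just the sum of the entries of A^m.  A minimal sketch, using a made-up graph rather than one of Hendryk’s examples:

import numpy as np

# Hypothetical dimension graph on 3 simple objects: entry A[j][l] is the
# number of edges v_j -> v_l, i.e. dim Hom(v_j, v_l (x) M).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])

def graded_dimension(A, m):
    """Number of length-m paths in the quiver, which is the dimension of
    the m-graded part of the path algebra H[G] before relations."""
    return int(np.linalg.matrix_power(A, m).sum())

print([graded_dimension(A, m) for m in range(5)])
# m = 0 counts the vertices themselves (length-zero paths).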

The point, then, is that the sort of categories mostly used in this area can be taken to be representation categories, but in general only of these weak Hopf algebras: groups and ordinary algebras are special cases, but they show up naturally for certain kinds of field theory.

Tensor Categories and Field Theories

There were several talks about the relationship between tensor categories of various sorts and particular field theories.  The idea is that local field theories can be broken down in terms of some kind of n-category: n-dimensional regions get labelled by categories, (n-1)-dimensional boundaries between regions, or “defects”, are labelled by functors between the categories (with the idea that this shows how two different kinds of field can couple together at the defect), and so on.  (I think the highest dimension that was discussed explicitly involved 3-categories, so one has junctions between defects, and junctions between junctions, which get assigned some higher-morphism data.)  Alternatively, there’s the dual picture where categories are assigned to points, functors to 1-manifolds, and so on.  (This is just Poincaré duality in the case where the manifolds come with a decomposition into cells, which they often do, if only for convenience.)

Victor Ostrik gave a pair of talks giving an overview of the role tensor categories play in conformal field theory.  There’s too much material here to easily summarize, but the basics go like this: CFTs are field theories defined on cobordisms that have some conformal structure (i.e. a notion of angles, but not distance), and on the algebraic side they are associated with vertex algebras (some useful discussion appears on mathoverflow, but in this context they can be understood as vector spaces equipped with exactly the algebraic operations needed to model cobordisms with some local holomorphic structure).

In particular, the irreducible representations of these VOA’s determine the “conformal blocks” of the theory, which tell us about possible correlations between observables (self-adjoint operators).  A VOA V is “rational” if the category Rep(V) is semisimple (i.e. every object is a finite direct sum of these conformal blocks).  For good VOA’s, Rep(V) will be a modular tensor category (MTC), which is a fusion category with a duality, braiding, and some other structure (see this for more).   So describing these gives us a lot of information about what CFT’s are possible.

The full data of a rational CFT are given by a vertex algebra, and a module category M: that is, a fusion category is a sort of categorified ring, so it can act on M as a ring acts on a module.  It turns out that choosing an M is equivalent to finding a certain algebra (i.e. algebra object) \mathcal{L}, a “Lagrangian algebra”, inside the centre of Rep(V).  The Drinfel’d centre Z(C) of a monoidal category C is a sort of free way to turn a monoidal category into a braided one: but concretely in this case it just looks like Rep(V) \otimes Rep(V)^{\ast}.  Knowing the isomorphism class of \mathcal{L} determines a “modular invariant”.  It gets its “physics” meaning from how it’s equipped with an algebra structure (which can happen in more than one way), but in any case \mathcal{L} has an underlying vector space, which becomes the Hilbert space of states for the conformal field theory, which the VOA acts on in the natural way.
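A standard example, hedging on conventions: the simplest Lagrangian algebra in Z(Rep(V)) \simeq Rep(V) \otimes Rep(V)^{\ast} is the “diagonal” one,

\mathcal{L} = \bigoplus_i v_i \otimes v_i^{\ast}

with the sum over the simple objects of Rep(V); this corresponds to the diagonal modular invariant Z = \sum_i \chi_i \bar{\chi}_i, and its underlying vector space is the familiar diagonal Hilbert space of states.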

Now, that was all conformal field theory.  Christopher Douglas described some work with Chris Schommer-Pries and Noah Snyder about fusion categories and structured topological field theories.  These are functors out of cobordism categories, the most important of which are n-categories, where the objects are points, morphisms are 1D cobordisms, and so on up to n-morphisms which are n-dimensional cobordisms.  To keep things under control, Chris Douglas talked about the case Bord_0^3, which is where n=3, and a “local” field theory is a 3-functor Bord_0^3 \rightarrow \mathcal{C} for some 3-category \mathcal{C}.  Now, the (Baez-Dolan) Cobordism Hypothesis, which was proved by Jacob Lurie, says that Bord_0^3 is, in a suitable sense, the free symmetric monoidal 3-category with duals.  What this amounts to is that a local field theory whose target 3-category is \mathcal{C} is “just” a dualizable object of \mathcal{C}.

The handy example which links this up to the above is when \mathcal{C} has objects which are tensor categories, morphisms which are bimodule categories (i.e. categories acted on by a tensor category on each side), 2-morphisms which are functors, and 3-morphisms which are natural transformations.  Then the issue is to classify what kind of tensor categories these objects can be.

The story is trickier if we’re talking about, not just topological cobordisms, but ones equipped with some kind of structure regulated by a structure group G (for instance, orientation by G = SO(n), spin structure by its universal cover G = Spin(n), and so on).  This means the cobordisms come equipped with a map into BG.  They take O(n) as the starting point, and then consider groups G with a map to O(n), and require that the map into BG is a lift of the map to BO(n).  Then one gets that a structured local field theory amounts to a dualizable object of \mathcal{C} with a homotopy-fixed point for some G-action – and this describes what gets assigned to the point by such a field theory.  What they then show is a correspondence between G and classes of categories.  For instance, fusion categories are what one gets by imposing that the cobordisms be oriented.

Liang Kong talked about “Topological Orders and Tensor Categories”, which used the Levin-Wen models from condensed matter physics.  (Benjamin Balsam also gave a nice talk describing these models and showing how they’re equivalent to the Turaev-Viro and Kitaev models in appropriate cases.  Ingo Runkel gave a related talk about topological field theories with “domain walls”.)  Here, the idea of a “defect” (and topological order) can be understood very graphically: we imagine a 2-dimensional crystal lattice (of atoms, say), and the defect is a 1-dimensional place where two lattices join together, with the internal symmetry of each breaking down at the boundary.  (For example, a square lattice glued where the edges on one side are offset and meet the squares on the other side in the middle of a face, as you typically see in a row of bricks – the slides linked above have some pictures.)  The Levin-Wen models are built using a hexagonal lattice, starting with a tensor category with several properties: spherical (there are dualities satisfying some relations), fusion, and unitary: in fact, historically, these defining properties were rediscovered independently here as the requirement for there to be excitations on the boundary which satisfy physically-inspired consistency conditions.

These abstract the properties of a category of representations.  A generalization of this to “topological orders” in 3D or higher is an extended TFT in the sense mentioned just above: they have a target 3-category of tensor categories, bimodule categories, functors and natural transformations.  The tensor categories (say, \mathcal{C}, \mathcal{D}, etc.) get assigned to the bulk regions; to “domain walls” between different regions, namely defects between lattices, we assign bimodule categories (but, for instance, to a line within a region, we get \mathcal{C} understood as a \mathcal{C}-\mathcal{C}-bimodule); then to codimension 2 and 3 defects we attach functors and natural transformations.  The algebra for how these combine expresses the ways these topological defects can go together.  On a lattice, this is an abstraction of a spin network model, where typically we have just one tensor category \mathcal{C} applied to the whole bulk, namely the representations of a Lie group (say, a unitary group).  Then we do calculations by breaking down into bases: on codimension-1 faces, these are simple objects of \mathcal{C}; to vertices we assign a Hom space (and label by a basis for intertwiners in the special case); and so on.

Thomas Nikolaus spoke about the same kind of G-equivariant Dijkgraaf-Witten models as at our workshop in Lisbon, so I’ll refer you back to my earlier post on that.  However, speaking of equivariance and group actions:

Michael Müger spoke about “Orbifolds of Rational CFT’s and Braided Crossed G-Categories” (see this paper for details).  This starts with the correspondence between rational CFT’s (strictly, rational chiral CFT’s) and modular categories Rep(F).  (He takes F to be the name of the CFT.)  Then we consider what happens if some finite group G acts on F (if we understand F as a functor, this is an action by natural transformations; if as an algebra, then by algebra automorphisms).  This produces an “orbifold theory” F^G (just like a finite group action on a manifold produces an orbifold), which is the “G-fixed subtheory” of F, obtained by taking G-fixed points of every object, and which is also a rational CFT.  But that means it corresponds to some other modular category Rep(F^G), so one would like to know what category this is.

A natural guess might be that it’s Rep(F)^G, where C^G is a “weak fixed-point” category that comes from a weak group action on a category C.  Objects of C^G are pairs (c, f_g) where c \in C and f_g : g(c) \rightarrow c is a specified family of isomorphisms, one for each g \in G.  (This is a weak analog of S^G, the set of fixed points for a group acting on a set.)  But this guess is wrong – indeed, it turns out these categories have the wrong dimension (which is defined because the modular category has a trace, which we can sum over the generating objects).  Instead, the right answer, denoted Rep(F^G) = (G-Rep(F))^G, is the G-fixed part of some other category, G-Rep(F).  This is a braided crossed G-category: one with a grading by G, and a G-action that gets along with it.  The identity-graded part of G-Rep(F) is just the original Rep(F).
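For reference, the dimension in question is the global dimension of the modular category, computed from the trace just mentioned:

\dim(\mathcal{C}) = \sum_i (\dim v_i)^2

where \dim v_i = tr(id_{v_i}) and the sum runs over the simple objects v_i.  Comparing \dim(Rep(F)^G) with what \dim(Rep(F^G)) has to be is what rules out the naive guess.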

State Sum Models

This ties in somewhat with at least some of the models in the previous section.  Some of these talks were fairly introductory, since many of the people at the conference were coming from a different background.  So, for instance, to begin the workshop, John Barrett gave a talk about categories and quantum gravity, which started by outlining the historical background, and the development of state-sum models.  He gave a second talk where he began to relate this to diagrams in Gray-categories (something he also talked about here in Lisbon in February, which I wrote about then).  He finished up with some discussion of spherical categories (and in particular the fact that there is a Gray-category of spherical categories, with a bunch of duals in the suitable sense).  This relates back to the kind of structures Chris Douglas spoke about (described above, but chronologically right after John).  Likewise, Winston Fairbairn gave a talk about state sum models in 3D quantum gravity – the Ponzano-Regge model and Turaev-Viro model being the focal points – describing how these work and how they’re constructed.  Part of the point is that one would like to see that these fit into the sort of framework described in the section above, which makes sense for the PR and TV models, but becomes more complicated for the fancier state-sum models in higher dimensions.

Higher Gauge Theory

There wasn’t as much on this topic as at our own workshop in Lisbon (though I have more remarks on higher gauge theory in one post about it), but there were a few entries.  Roger Picken talked about some work with Joao Martins about a cubical formalism for parallel transport based on crossed modules, which consist of a group G and a group H, with a map \partial : H \rightarrow G and an action of G on H satisfying some axioms.  They can represent categorical groups, namely group objects in Cat (equivalently, categories internal to Grp), and are “higher” analogs of groups with a set of elements.  Roger’s talk was about how to understand holonomies and parallel transports in this context.  So, a “connection” lets one transport things with G-symmetries along paths, and with H-symmetries along surfaces.  It’s natural to describe this with squares whose edges are labelled by G-elements, and faces labelled by H-elements (which are the holonomies).  Then the “cubical approach” means that we can describe gauge transformations, and higher gauge transformations (which in one sense are the point of higher gauge theory) in just the same way: a gauge transformation which assigns H-values to edges and G-values to vertices can be drawn via the holonomies of a connection on a cube which extends the original square into 3D (so the edges become squares, and so get H-values, and so on).  The higher gauge transformations work in a similar way.  This cubical picture gives a good way to understand the algebra of how gauge transformations etc. work: so for instance, gauge transformations look like “conjugation” of a square by four other squares – namely, relating the front and back faces of a cube by means of the remaining faces.  Higher gauge transformations can be described by means of a 4D hypercube in an analogous way, and their algebraic properties have to do with the 2D faces of the hypercube.
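To spell out the axioms mentioned above (these are the standard ones, writing the action of g \in G on h \in H as g \triangleright h): the map \partial should be G-equivariant, and the action should satisfy the Peiffer identity:

\partial(g \triangleright h) = g \partial(h) g^{-1} \qquad and \qquad \partial(h) \triangleright h' = h h' h^{-1}

These two conditions are exactly what’s needed for the square-labelling scheme just described to compose consistently in both directions.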

Derek Wise gave a short talk outlining his recent paper with John Baez in which they show that it’s possible to construct a higher gauge theory based on the Poincaré 2-group which turns out to have fields, and dynamics, which are equivalent to teleparallel gravity, a slightly unusual theory which nevertheless looks in practice just like General Relativity.  I discussed this in a previous post.

So next time I’ll talk about the new additions to my paper on ETQFT which were the basis of my talk, which illustrates a few of the themes above.

So I had a busy week from Feb 7-13, which was when the workshop Higher Gauge Theory, TQFT, and Quantum Gravity (or HGTQGR) was held here in Lisbon.  It ended up being a full day from 0930h to 1900h pretty much every day, except the last.  We’d tried to arrange it so that there were coffee breaks and discussion periods, but there was also a plethora of talks.  Most of the people there seemed to feel that it went pretty well.  Since then I’ve been occupied with other things – family visiting the country, for one – so it’s taken a while to get around to writing it up.  Since there were several parts to the event, I’ll do this in several parts as well, of which this is the first one.

Part of the point of the workshop was to bring together a few related subjects in which category theoretic ideas come into areas of mathematics which play a role in physics, and hopefully to build some bridges toward applications.  While it leaned pretty strongly on the mathematical side of this bridge, I think we did manage to get some interaction at the overlap.  Roger Picken drew a nifty picture on the whiteboard at the end of the workshop summarizing how a lot of the themes of the talks clustered around the three areas mentioned in the title, and suggesting how TQFT really does form something of a bridge between the other two – one reason it’s become a topic of some interest recently.  I’ll try to build this up to a similar punchline.

Pre-School

Before the actual event began, though, we had a bunch of talks at IST for a local audience, to try to explain to mathematicians what the physics part of the workshop was about.  Aleksandr Mikovic gave a two-talk introduction to Quantum Gravity, and Sebastian Guttenberg gave a two-part intro to String Theory.  These are two areas where higher gauge theory (in the form of n-connections and n-bundles, or of n-gerbes) has made an appearance, and were the main physics content of the workshop talks.  They set up the basics to help put those talks in context.

Quantum Gravity

Aleksandr’s first talk set out the basic problem of quantizing the gravitational field (this isn’t the only attitude to what the problem of quantum gravity is, but it’s a good starting point), starting with the basic ingredients.  He summarized how general relativity describes gravity in terms of a metric g_{\mu \nu} which is supposed to satisfy the Einstein equation, relating the curvature of the metric to a source field T_{\mu \nu} which comes from matter.  Quantization then means that, starting from a classical picture involving trajectories of particles (or sections of fibre bundles to describe fields), one passes to a picture where states are vectors in a Hilbert space, and there’s an algebra of operators including observables (self-adjoint operators) and time-evolution (unitary ones).   An initial try at quantum gravity was to do this using the metric as the field, using the methods of perturbative QFT: treating the metric in terms of “small” fluctuations from some background metric like the flat Minkowski metric.  This uses the Einstein-Hilbert action S = \frac{1}{G} \int \sqrt{-\det(g)} R, where G is the gravitational constant and R is the Ricci scalar that summarizes the curvature of g.  This runs into problems: things diverge in various calculations, and since the coupling constant G has units, one can’t “renormalize” the divergences away.  So one needs a non-perturbative approach, one of which is “canonical quantization“.

After some choice of coordinates (so-called “lapse” and “shift” functions), this involves describing the action in terms of the (space part of the) metric g_{kl} and some canonically conjugate “momentum” variables \pi^{kl} which describe its extrinsic curvature.  The Euler-Lagrange equations (found as usual by variational calculus methods) then turn out to give the “Hamiltonian constraint”: certain functions of g and \pi are always zero.  Then the program is to get a Poisson algebra giving brackets of the \pi and g variables, and then turn it into an algebra of operators in a standard way.  This also runs into problems, because the space of metrics isn’t a Hilbert space.  One solution is to not use the metric, but instead a connection and a “frame field” – the so-called Ashtekar variables for GR.  This works better, and gives the “Loop Quantum Gravity” setup, since observables tend to be expressed as holonomies around loops.
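Schematically, and glossing over normalizations and index symmetrization, the canonical structure one is trying to quantize is

\{ g_{kl}(x), \pi^{mn}(y) \} = \delta^m_{(k} \delta^n_{l)} \delta^{(3)}(x - y)

with the brackets to be replaced by commutators of operators; the problem described above is in finding a Hilbert space on which to represent the result.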

Finally, Aleksandr outlined the spin foam approach to quantizing gravity.  This is based on the idea of a quantum geometry as a network (graph) with edges labelled by spins, i.e. representations of SU(2) (which are labelled by half-integers), and vertices labelled by intertwining operators (which, as it happens, imposes triangle inequalities on the spins).  The spin foam approach takes a Hilbert space with a basis given by these spin networks.  These are supposed to be an alternative way of describing geometries given by SU(2)-connections.  The representations arise because, as the Peter-Weyl theorem shows, they form a nice basis for L^2(SU(2)).  Then one gets operators associated to “foams” that interpolate the spacetime between two such geometries (i.e. between two linear combinations of spin networks).  These foams are 2-complexes where faces are labelled with spins, and edges with intertwiners for the spins on the faces incident to them.  The operators arise from a discrete variant of the Feynman path-integral, where time-evolution comes from integrating an action over a space of (classical) trajectories, which in this case are foams.  This needs an action to integrate – in the discrete world, this corresponds to ways of choosing weights A_e for edges and A_f for faces in a generic partition function:

Z = \sum_{J,I} \prod_{faces} A_f(j_f) \prod_{edges} A_e(i_e)

which is a sum over the labels for representations and intertwiners.  Some of the talks that came later in the conference (e.g. by Benjamin Bahr and Bianca Dittrich) came back to discuss principles behind how these A functions could be chosen.  (Aristide Baratin’s talk described a similar but more general kind of model based on 2-groups.)
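For a concrete example of such a choice (stated up to sign and normalization conventions): in the Ponzano-Regge model for 3D gravity, the faces of the foam carry the weight A_f(j_f) = 2j_f + 1, the dimension of the spin-j_f representation, and each vertex – dual to a tetrahedron in a triangulated 3-manifold – contributes a 6j-symbol:

Z_{PR} = \sum_{j} \prod_{f} (2j_f + 1) \prod_{v} \{6j\}_v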

String Theory

In parallel with these, Sebastian Guttenberg gave us a two-lecture introduction to string theory.  His starting point is the intuition that a lot of classical physics studies particles living on a background of some field.  The field can be understood as an approximate way of talking about a large number of quantum-mechanical particles, rather as the dynamics of a large number of classical particles can be approximated by the equations of state for a fluid or gas (depending on how much they interact with one another, among other things).  In string theory and “string field theory”, we have a similar setup, except we replace the particles with small strings – either open strings (which look like intervals) or closed ones (which look like circles).

To begin with, he introduced the basic tools of “classical” string theory – the analog of classical mechanics of point particles.  This is the string analog of the following: one can describe a moving particle by its worldline – a path x : \mathbb{R} \rightarrow M^{(D)} from a “generic” worldline into a (D-dimensional) manifold M^{(D)}.  This M^{(D)} is generally taken to be like physical spacetime, which in this context means that it has a metric g with signature (-1,1,\dots,1) (that is, locally there’s a basis for tangent spaces with one timelike vector and D-1 spacelike ones).  Then one can define an action for a moving particle which is just determined by the length of the line’s image.  The nicest way to say this is S[x] = m \int d\tau \sqrt{x^{\ast}g}, where x^{\ast}g means the pullback of the metric along the map x, \tau is some parameter along the generic worldline, and m, the particle’s mass, is a coupling constant which doesn’t happen to affect the result in this simple case, but eventually becomes important.  One can do the usual variational-calculus of the Lagrangian approach here, finding that a critical point of the action occurs when the particle is travelling in a geodesic – a straight line, in flat space, or the closest available approximation.  In particular, the Euler-Lagrange equations say that the covariant derivative of the path should be zero.
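Written out in local coordinates, that condition is the geodesic equation (with \tau an affine parameter):

\frac{d^2 x^k}{d\tau^2} + \Gamma^k_{mn} \frac{dx^m}{d\tau} \frac{dx^n}{d\tau} = 0

whose two-dimensional analog shows up again just below.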

There’s an analogous action for a string, the Nambu-Goto action.  Instead of a single-parameter x, we now have an embedding of a “generic string worldsheet” – let’s say \Sigma^{(2)} \cong S^1 \times \mathbb{R} – into spacetime: x : \Sigma^{(2)} \rightarrow M^{(D)}.  Then the analogous action is just S[x] = \int_{\Sigma^{(2)}} \star_{x^{\ast}g} 1.  This is pretty much the same as before: we pull back the metric to get x^{\ast}g, and integrate over the generic worldsheet.  A slight subtlety comes because we’re taking the Hodge dual \star.  This is conceptually clean, but expands out to a fairly big integral when you express it in coordinates, where the leading term involves \sqrt{\det(\partial_{\mu} x^m \partial_{\nu} x^n g_{mn})} (the determinant is taken over the indices (\mu,\nu)).  Varying this to get the equations of motion produces:

0 = \partial_{\mu} \partial^{\mu} x^k + \partial_{\mu} x^m \partial^{\mu} x^n \Gamma_{mn}^k

which is the two-dimensional analog of the geodesic equation for a point particle (the \Gamma are the Christoffel symbols associated to the connection that goes with the metric).  The two-dimensional analog says we have a critical point for the area of the surface which is the image of \Sigma^{(2)} – in fact, a “maximum”, given the sign of the metric.  For solutions like this, the pullback metric on the worldsheet, x^{\ast}g, looks flat.  (Naturally, the metric looks flat along a geodesic, too, but this is stronger in 2 dimensions, where there can be intrinsic curvature.)

A souped-up version of the Nambu-Goto action is the Polyakov action, a natural variation that comes up when \Sigma^{(2)} has a metric of its own, h.  You can check out the details behind that link, but part of what makes this action nice is that the corresponding Euler-Lagrange equation from varying h says that x^{\ast}g \sim h.  That is, the worldsheet \Sigma^{(2)} will have an image with a shape such that its own metric agrees with the one induced from the spacetime M^{(D)}.   This action is called the Polyakov action (even though it was introduced by Deser and Zumino, among others) because Polyakov used it for quantizing the string.
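For reference, in one standard normalization (with T the string tension), the action is

S_P[x,h] = \frac{T}{2} \int_{\Sigma^{(2)}} d^2\sigma \sqrt{-\det(h)} \, h^{\mu\nu} \, \partial_{\mu} x^m \partial_{\nu} x^n g_{mn}

and varying h forces h to agree with x^{\ast}g up to a conformal factor, which is how one recovers the Nambu-Goto description.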

Other variations on this action add additional terms which represent fields which the string might be affected by: a scalar \phi(x), and a 2-form field B_{mn}(x) (here we’re using the physics convention where x represents both the function, and its values at particular points, in this case, values of parameters (\sigma_0,\sigma_1) on \Sigma^{(2)}).

That 2-form, the “B-field”, is an important field in string theory, and eventually links up with higher gauge theory, which we’ll get to as we go on: one can interpret the B-field as part of a higher connection, to which the string is coupled (as in Baez and Perez, say).  The scalar field \phi essentially determines how strongly the shape of the string itself affects the action – it’s a “string coupling” term, or string coupling “constant” if it’s chosen to be just a number \phi_0.  (In such a case, the action includes a term that looks like \phi_0 times the Euler characteristic of the surface \Sigma^{(2)}.)
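Putting these couplings together – again in a common normalization, with \alpha' the string scale and R^{(2)} the worldsheet curvature scalar – the action looks something like

S[x,h] = \frac{1}{4\pi\alpha'} \int d^2\sigma \left( \sqrt{-\det(h)} \, h^{\mu\nu} g_{mn} + \epsilon^{\mu\nu} B_{mn} \right) \partial_{\mu} x^m \partial_{\nu} x^n + \frac{1}{4\pi} \int d^2\sigma \sqrt{-\det(h)} \, \phi(x) R^{(2)}

and when \phi = \phi_0 is constant, the Gauss-Bonnet theorem turns the last term into \phi_0 \chi(\Sigma^{(2)}), the Euler characteristic term mentioned above.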

Sebastian briefly explained some of the physical intuition for why these are the kinds of couplings which it makes sense to introduce.  Essentially, any coupling one writes in coordinates has to get along with gauge symmetries, changes of coordinates, etc.  That is, there should be no physical difference between the class of solutions one finds in a given set of coordinates, and the ones one gets by doing some diffeomorphism on the spacetime M^{(D)}, or by changing the metric on \Sigma^{(2)} by some conformal transformation h_{\mu \nu} \mapsto \exp(2 \omega(\sigma^0,\sigma^1)) h_{\mu \nu} (that is, scaling by some function of position on the worldsheet – underlying string theory is Conformal Field Theory, in that the scale of the generic worldsheet is irrelevant; only the light-cone structure matters).  Anything a string couples to should be a field that transforms in a way that respects this.  One important upshot for the quantum theory is that when one quantizes a string coupled to such a field, this makes sure that time evolution is unitary.

How this is done is a bit more complicated than Sebastian wanted to go into in detail (and I got a little lost in the summary) so I won’t attempt to do it justice here.  The end results include a partition function:

Z = \sum_{topologies} \int dx \, dh \, \frac{\exp(-S[x,h])}{V_{diff} V_{Weyl}}

Remember: if one is finding amplitudes for various observables, the partition function is a normalizing factor, and finding the value of any observable means squeezing it into a similar-looking integral (and normalizing by this factor).  So this says that they’re found by summing over all the string topologies which go from the input to the output, and integrating over all embeddings x : \Sigma^{(2)} \rightarrow M^{(D)} and metrics on \Sigma^{(2)}.  (The denominator in that fraction is dividing out by the volumes of the symmetry groups, as usual in quantum field theory, since these symmetries mean one is “overcounting” physically identical situations.)

This is just the beginning of string field theory, of course: just as the dynamics of a free moving particle, or even a particle coupled to a background field, are only the beginning of quantum field theory.  But many later additions can be understood as adding various terms to the action S in some such formalism.  These would be analogs of giving a point-particle attributes like charge, spin, “colour” and so forth in the Standard Model: these define how it couples to, hence is affected by, various kinds of fields.  Such fields can be understood in terms of connections (or, in general, higher connections, as we’ll get to later), which define how structures are “parallel-transported” along a path (or higher-dimensional surface).


Coming up in Part II… I’ll summarize the School portion of the HGTQGR workshop, including lecture series by: Christopher Schommer-Pries on Classifying 2D Extended TQFT, which among other things explained Chris’ proof of the Cobordism Hypothesis in dimension 2 using Cerf theory; Tim Porter on Homotopy QFT and the “Crossed Menagerie”, which described a general framework for talking about quantum theories on cobordisms with structure; John Huerta on Higher Gauge Theory, which gave an introductory account of 2-groups and 2-bundles with 2-connections; Christoph Wockel on connections between Higher Gauge Theory and Infinite Dimensional Lie Theory, which described how some infinite-dimensional Lie algebras can’t be integrated to Lie groups, but only to 2-groups; and one by Hisham Sati on Higher Spin Structures in String Theory, which among other things described how cohomological obstructions to putting certain kinds of structure on manifolds motivate the use of particular higher dimensions.

Last week was Wade Cherrington‘s Ph.D. defense – he is (or, rather, WAS) a student of Dan Christensen. The title was “Dual Computational Methods for Lattice Gauge Theory”, the point of which is to describe some methods for doing numerical computations for various physical systems governed by gauge theories. These would include electromagnetism, Yang-Mills theory (which covers the Standard Model and other quantum field theories), as well as gravity. In any gauge theory, the fundamental objects being studied are fields described by G-connections, for some (Lie) group G. To some degree of approximation, a connection gives a group element for any path in space: \Gamma : Path(M) \rightarrow G. Then the dynamics of these fields are described by a Lagrangian, where the action for a field is the integral of the curvature over the whole space M: \int tr(F \wedge \star F) (plus possibly some other terms to couple the field to sources).

Now, the point here is to get non-perturbative ways to study these theories: rather than, say, getting differential equations for the fields and finding solutions by expanding a power series. The approach in question is to take a discrete version of this continuum theory, which is finite and can be dealt with exactly, and then take a limit.

So in lattice gauge theory, continuous space is replaced by a – well, a lattice L, say L = \mathbb{Z}^3, for definiteness (one eventually takes the continuum limit as the spacing of the lattice goes to zero). The lattice also includes edges joining adjacent points – say the set of edges is E. Paths in the lattice are built from these edges. (Furthermore, since an infinite lattice can’t be represented in the computer, the actual computations use a quotient of this – a lattice in a 3-torus, or equivalently, one considers only periodic fields.) Then it’s enough to say that a connection assigns a group element to each edge of the lattice, \Gamma : E \rightarrow G.

Of course, to back up, describing connections as functions \Gamma : E \rightarrow G, or \Gamma : Path(M) \rightarrow G, often provokes various objections from people used to differential geometry. One is that the group elements assigned don’t have any direct physical meaning – since a physical state is only defined up to gauge equivalence. So if an edge e joins lattice points a and b, a gauge transformation g : L \rightarrow G acts on \Gamma to give \Gamma' : E \rightarrow G with \Gamma'(e) = g(a)\Gamma(e)g(b)^{-1}. Clearly, for any given edge, there are gauge-equivalent connections assigning any group element you want. As Wade pointed out, one benefit of the dual models he was describing is that their states can be given a definite physical meaning – there are no gauge choices. Another, helpfully, is that they’re (sometimes) easier to calculate with. A more physical motivation Wade suggested is that these methods can deal with spin-foam models of quantum gravity, and also matter fields: a realistic look at a theory of gravity should have some matter to gravitate, so this gives a way to simulate them together.

So what are these dual methods? This is described in some detail in this paper by Wade, Dan, and Igor Khavkine. The first step is to find a discrete version of curvature: instead of the action \int tr(F \wedge \star F), we want a sum of face amplitudes. Curvature is described by the holonomy around a contractible loop, so the basic element is a face in the lattice (say F is the set of faces). Given a square face f \in F with edges whose holonomies are g_1 through g_4 (assuming all faces are oriented in a consistent direction), the holonomy around the face is g_1 g_2 g_3^{-1} g_4^{-1} = g(f). From this, one defines an amplitude for the face, given by some function S(g(f)) (there are various possibilities – Wade’s example used the heat kernel action mentioned in the paper above), and then the total action S = \sum_{f \in F} S(g(f)) is the sum over all faces. Then instead of integrating over an infinite-dimensional, and generally intractable, space of smooth connections, one integrates over G^E, the space of discrete connections.
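Here’s a minimal sketch of this setup in the simplest case, G = U(1), where group elements are just phases and the plaquette holonomy g_1 g_2 g_3^{-1} g_4^{-1} becomes a sum and difference of edge angles. The lattice is 2D for brevity, the connection is random, and the face action is a common stand-in rather than the heat kernel action from the talk:

import numpy as np

rng = np.random.default_rng(0)
N = 8  # periodic N x N lattice (2D for brevity; the talk's examples were 3D)

# A U(1) connection: an angle for each edge; mu = 0 edges point in the
# x-direction, mu = 1 edges in the y-direction.
theta = rng.uniform(-np.pi, np.pi, size=(2, N, N))

def plaquette_angle(x, y):
    """Holonomy angle around the face at (x, y): the U(1) version of
    g1 g2 g3^-1 g4^-1, traversing the four edges in order."""
    return (theta[0, x, y] + theta[1, (x + 1) % N, y]
            - theta[0, x, (y + 1) % N] - theta[1, x, y])

# Total action: sum over faces of S(g(f)); here S(g) = 1 - cos(angle),
# the Wilson action, standing in for the heat kernel weight.
S = sum(1.0 - np.cos(plaquette_angle(x, y))
        for x in range(N) for y in range(N))
print("total action:", S)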

The duality here is the expansion of these face weights in terms of group characters: a class function S(g) can be written as a combination of irreducible characters, S(g) = \sum_i c_i \chi_i(g). Then one can pull this sum over characters outside the integral over G^E (so that local quantities are inside).
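The simplest case to keep in mind: for G = U(1) the irreducible characters are \chi_n(e^{i\theta}) = e^{in\theta} with n \in \mathbb{Z}, so expanding a class function in characters is literally taking its Fourier series,

f(e^{i\theta}) = \sum_{n \in \mathbb{Z}} c_n e^{in\theta} \qquad c_n = \frac{1}{2\pi} \int_0^{2\pi} f(e^{i\theta}) e^{-in\theta} d\theta

and the dual variables are then integer labels on faces.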

There are many nice images on Wade’s homepage (above) showing visualizations of the resulting calculations – one finds sums over certain labellings of the lattice, namely those which can be described by having certain surfaces. In particular, closed (boundaryless), branched (possibly self-intersecting) surfaces with face and edge labels given by representations of G and intertwining operators between them… that is, spin foams. These dual spin foam configurations have the advantage of having a physical interpretation (though I confess I don’t have a good intuition about it) which doesn’t depend on gauge choices.

A variant on this comes about when the action is changed to include a term coupling the Yang-Mills field to fermions (one thinks of quarks and gluons, for example). In this case, the fermion part is described by “polymers” (closed, possibly self-intersecting paths, rather than surfaces), and the coupled system allows the surfaces used in the YM calculations to have boundaries – but only on these polymers. (Again, Wade has some nice images of this on his site. Personally, I find a lot of the details here remain obscure, though I’ve seen a few versions of this talk and related ones, but the pictures give a framework to hang the rest of it on.)

Wade identified two “key” ingredients for doing calculations with these dual spin foams:

  1. Recoupling moves for the graphs (as described, for instance, by Carter, Flath, and Saito) which simplify the calculation of amplitudes, and
  2. A set of local moves (changes of configuration) which are ergodic – that is, between them they can take any configuration to any other. (The point here is to allow a reasonably random sampling – the algorithm is stochastic – of the configuration space, while making only local changes, requiring a minimum of recomputation, at each step; a generic sketch of this kind of sampling follows below.)
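To illustrate what stochastic sampling by local ergodic moves means in practice, here is a generic Metropolis-style sketch – not Wade’s actual algorithm or data structures; the configurations, moves, and action are all placeholders:

import math, random

random.seed(0)

def metropolis(config, local_moves, action, beta, steps):
    """Propose a random local move; accept it with probability
    min(1, exp(-beta * (S_new - S_old)))."""
    S = action(config)
    for _ in range(steps):
        new_config = random.choice(local_moves)(config)
        S_new = action(new_config)
        if S_new <= S or random.random() < math.exp(-beta * (S_new - S)):
            config, S = new_config, S_new
    return config

# Toy usage: a configuration is a tuple of 0/1 face labels; a local move
# flips one label; the action counts mismatched neighbours.
def flip(i):
    return lambda c: c[:i] + (1 - c[i],) + c[i + 1:]

def toy_action(c):
    return sum(c[i] != c[i + 1] for i in range(len(c) - 1))

moves = [flip(i) for i in range(10)]
start = tuple(random.randint(0, 1) for _ in range(10))
print(metropolis(start, moves, toy_action, beta=2.0, steps=1000))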

Finally, Wade summed up by pointing out that the results obtained so far agree with the usual methods, and in some cases are faster. Then he told us about some future projects. Some involve optimizing code and adapting it to run on clusters. Others were more theoretical matters: doing for SU(3) what has been done for SU(2) (which will involve developing much of the recoupling theory for 3j- and 6j-symbols); finding and computing observables for these configurations (such as Wilson loops); and modelling supersymmetry and other notions about particle physics.
