

One talk at the workshop was nominally a school talk by Laurent Freidel, but it’s interesting and distinctive enough in its own right that I wanted to consider it by itself.  It was based on this paper on the “Principle of Relative Locality”.  This isn’t so much a new theory as an exposition of what ought to happen when one looks at a particular limit of any putative theory that has both quantum field theory and gravity as (different) limits of it.  This leads through some ideas, such as curved momentum space, which have been kicking around for a while.  The end result is a way of accounting for apparently non-local interactions of particles, by saying that while the particles themselves “see” the interactions as local, distant observers might not.

Einstein’s gravity describes a regime where Newton’s gravitational constant G_N is important but Planck’s constant \hbar is negligible, whereas (special-relativistic) quantum field theory assumes \hbar is significant but G_N is not.  Both of these assume there is a special velocity scale, given by the speed of light c, whereas classical mechanics assumes that all three can be neglected (i.e. G_N and \hbar are zero, and c is infinite).   The guiding assumption is that these are all approximations to some more fundamental theory, called “quantum gravity” just because it accepts that both G_N and \hbar (as well as c) are significant in calculating physical effects.  So GR and QFT each incorporate two of the three constants, and classical mechanics incorporates none of them.  The “principle of relative locality” arises when we consider a slightly different approximation to this underlying theory.

This approximation works with a regime where G_N and \hbar are each negligible, but their ratio is not – this being related to the Planck mass m_p \sim  \sqrt{\frac{\hbar}{G_N}} (in units where c = 1).  The point is that this is an approximation with no special length scale (“Planck length”), but instead a special energy scale (“Planck mass”) which has to be preserved.   Since energy and momentum are different parts of a single 4-vector, this is also a momentum scale; we expect to see some kind of deformation of momentum space, at least for momenta comparable to this scale or bigger.  The existence of this scale turns out to mean that momenta don’t add linearly – at least, not unless they’re very small compared to the Planck scale.
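To make the dimensional bookkeeping explicit (this is my own restatement, with the factors of c restored), the regime in question can be written as a formal limit:

m_p = \sqrt{\frac{\hbar c}{G_N}} \approx 2.18 \times 10^{-8}\ kg, \qquad \hbar \rightarrow 0, \quad G_N \rightarrow 0, \quad \frac{\hbar}{G_N} = \frac{m_p^2}{c}\ \text{held fixed}

So both quantum effects (governed by \hbar) and dynamical gravity (governed by G_N) are switched off, but their ratio survives as a fixed momentum scale.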

So what is “Relative Locality”?  In the paper linked above, it’s stated like so:

Physics takes place in phase space and there is no invariant global projection that gives a description of processes in spacetime.  From their measurements local observers can construct descriptions of particles moving and interacting in a spacetime, but different observers construct different spacetimes, which are observer-dependent slices of phase space.

Motivation

This arises from taking the basic insight of general relativity – the requirement that physical principles should be invariant under coordinate transformations (i.e. diffeomorphisms) – and extending it so that instead of applying just to spacetime, it applies to the whole of phase space.  Phase space (which, in this limit where \hbar = 0, replaces the Hilbert space of a truly quantum theory) is the space of position-momentum configurations (of things small enough to treat as point-like, in a given fixed approximation).  Having no G_N means we don’t need to worry about any dynamical curvature of “spacetime” (which doesn’t exist), and having no Planck length means we can blithely treat phase space as a manifold with coordinates valued in the real line (which has no special scale).  Yet having a special mass/momentum scale says we should see some purely combined “quantum gravity” effects show up.

The physical idea is that phase space is an accurate description of what we can see and measure locally.  Observers (whom we assume small enough to be considered point-like) can measure their own proper time (they “have a clock”) and can detect momenta (by letting things collide with them and measuring the energy transferred locally and its direction).  That is, we “see colors and angles” (i.e. photon energies and differences of direction).  Beyond this, one shouldn’t impose any particular theory of what momenta do: we can observe the momenta of separate objects, see what results when they interact, and deduce rules from that.  As an extension of standard physics, this model is pretty conservative.  Now, conventionally, phase space would be the cotangent bundle of spacetime T^*M.  That picture is based on the assumption that objects can be at any point, and wherever they are, their space of possible momenta is a vector space.  Being a bundle, with a global projection onto M (taking (x,p) to x), is exactly what this principle says doesn’t necessarily obtain.  We still assume that phase space will be some symplectic manifold.   But we don’t assume a priori that momentum coordinates give a projection whose fibres happen to be vector spaces, as in a cotangent bundle.

Now, a symplectic manifold  still looks locally like a cotangent bundle (Darboux’s theorem). So even if there is no universal “spacetime”, each observer can still locally construct a version of “spacetime”  by slicing up phase space into position and momentum coordinates.  One can, by brute force, extend the spacetime coordinates quite far, to distant points in phase space.  This is roughly analogous to how, in special relativity, each observer can put their own coordinates on spacetime and arrive at different notions of simultaneity.  In general relativity, there are issues with trying to extend this concept globally, but it can be done under some conditions, giving the idea of “space-like slices” of spacetime.  In the same way, we can construct “spacetime-like slices” of phase space.
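Concretely, Darboux’s theorem says that near any point of a 2n-dimensional symplectic manifold one can find local coordinates (x^1, \dots, x^n, p_1, \dots, p_n) in which the symplectic form takes the standard cotangent-bundle shape:

\omega = \sum_{a=1}^{n} dp_a \wedge dx^a

Nothing in this local normal form singles out which coordinates are “positions” and which are “momenta” – that split is extra structure, which is exactly what each observer has to supply.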

Geometrizing Algebra

Now, if phase space is a cotangent bundle, momenta can be added (the fibres of the bundle are vector spaces).  Some more recent ideas about “quasi-Hamiltonian spaces” (initially introduced by Alekseev, Malkin and Meinrenken) conceive of momenta as “group-valued” – rather than taking values in the dual of some Lie algebra (the way, classically, momenta are dual to velocities, which live in the Lie algebra of infinitesimal translations).  For small momenta, these are hard to distinguish, so even group-valued momenta might look linear, but the premise is that we ought to discover this by experiment, not assumption.  We certainly can detect “zero momentum”, and for physical reasons can say that given two things with momenta p and q, there’s a way of combining them into a combined momentum p \oplus q.  Think of doing this physically – transfer all momentum from one particle to another, as seen by a given observer.  Since the same momentum at the observer’s position can be either coming in or going out, this operation has a “negative” with (\ominus p) \oplus p = 0.

We do have a space of momenta at any given observer’s location – the collection of all momenta that can be observed there – and this space now has some algebraic structure.  But we have no reason to assume up front that \oplus is either commutative or associative (let alone that it makes momentum space at a given observer’s location into a vector space).  One can interpret this algebraic structure as giving some geometry.  There is a metric on momentum space: a bilinear form which is implicitly defined by the “norm” that assigns a kinetic energy to a particle with a given momentum.  The associator, p \oplus (q \oplus r) - (p \oplus q) \oplus r, infinitesimally near 0 where this makes sense, gives a connection.  This defines a “parallel transport” of a finite momentum p in the direction of a momentum q by saying infinitesimally what happens when adding dq to p.
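Spelling out the infinitesimal version (this is my paraphrase of the construction in the paper – sign and index conventions may differ there): the connection at the origin of momentum space comes from the second derivative of the combination rule,

\Gamma^{ab}_c = - \left. \frac{\partial}{\partial p_a} \frac{\partial}{\partial q_b} (p \oplus q)_c \right|_{p = q = 0}

so that the torsion of this connection measures the failure of \oplus to commute, and its curvature measures the associator – i.e. the failure of \oplus to associate.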

Various additional physical assumptions – like the momentum-space “duals” of the equivalence principle (that the combination of momenta works the same way for all kinds of matter regardless of charge), or the strong equivalence principle (that inertial mass and rest mass energy per the relation E = mc^2 are the same) – can narrow down the geometry of this metric and connection.  Typically we’ll find that it needs to be Lorentzian.  With strong enough symmetry assumptions, it must be flat, so that momentum space is a vector space after all – but even with fairly strong assumptions, as with general relativity, there’s still room for this “empty space” to have some intrinsic curvature, in the form of a momentum-space “dual cosmological constant”, which can be positive (so momentum space is closed like a sphere), zero (the vector space case we usually assume) or negative (so momentum space is hyperbolic).

This geometrization of what had been algebraic is somewhat analogous to what happened with velocities (i.e. vectors in spacetime) when the theory of special relativity came along.  Insisting that the “invariant” scale c be the same in every reference system meant that the addition of velocities ceased to be linear.  At least, it did if you assume that adding velocities has an interpretation along the lines of: “first, from rest, add velocity v to your motion; then, from that reference frame, add velocity w”.  While adding spacetime vectors still worked the same way, one had to rephrase this rule if we think of adding velocities as observed within a given reference frame – this became v \oplus w = \frac{v + w}{1 + vw} (scaling so c = 1 and assuming the velocities are in the same direction).  When velocities are small relative to c, this looks roughly like linear addition.  Geometrizing the algebra of momentum space is thought of a little differently, but similar things can be said: we think operationally in terms of combining momenta by some process.  First transfer (group-valued) momentum p to a particle, then momentum q – the connection on momentum space tells us how to translate these momenta into the “reference frame” of a new observer with momentum shifted relative to the starting point.  Here again, the special momentum scale m_p (which is also a mass scale, since a momentum has a corresponding kinetic energy) is a “deformation” parameter – for momenta that are small compared to this scale, things seem to work linearly as usual.
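As a quick sanity check on that velocity formula (with c = 1 and parallel velocities): composing two boosts of half the speed of light gives

\frac{1}{2} \oplus \frac{1}{2} = \frac{\frac{1}{2} + \frac{1}{2}}{1 + \frac{1}{2} \cdot \frac{1}{2}} = \frac{4}{5}

that is, 0.8c rather than c – while for v, w \ll 1 the denominator is nearly 1 and v \oplus w \approx v + w, recovering the linear rule.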

There’s some discussion in the paper relating this to DSR (either “doubly” or “deformed” special relativity), which is another postulated limit of quantum gravity: a variation of SR with both a special velocity and a special mass/momentum scale, meant to capture “what SR looks like near the Planck scale”.  DSR treats spacetime as a noncommutative space, and generalizes the Lorentz group to a Hopf algebra which is a deformation of it.  In DSR, the noncommutativity of “position space” is directly related to curvature of momentum space.  In the “relative locality” view, we accept a classical phase space, but not a classical spacetime within it.

Physical Implications

We should understand this scale as telling us where “quantum gravity effects” should start to become visible in particle interactions.  The Planck mass as usually given is about 21 micrograms: small for normal purposes, about the size of a small grain of sand, but very large for subatomic particles.  Converting to momentum units with c, this is about 6 kg m/s: on the order of the momentum of a kicked soccer ball, which is an enormous amount for a subatomic particle.
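The arithmetic behind those figures, in SI units:

m_p = \sqrt{\frac{\hbar c}{G_N}} \approx \sqrt{\frac{(1.05 \times 10^{-34})(3.0 \times 10^8)}{6.67 \times 10^{-11}}}\ kg \approx 2.2 \times 10^{-8}\ kg, \qquad m_p c \approx 6.5\ kg \cdot m/s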

This scale does raise an obvious question, though, for many people who first hear the claim that quantum gravity effects should become apparent around the Planck mass/momentum scale: macro-objects like the aforementioned soccer ball carry momenta well above this scale, and yet still seem to have linearly-additive momenta.  Laurent explained the problem with this intuition.  For interactions of big, extended, but composite objects like soccer balls, one has to calculate not just one interaction, but all the various interactions of their parts, so the “effective” mass scale where the deformation would be seen becomes N m_p, where N is the number of particles in the soccer ball.  Roughly, the point is that a soccer ball is not a large “thing” for these purposes, but a large conglomeration of small “things”, whose interactions are “fundamental”.  The “effective” mass scale tells us how we would have to alter the physical constants to be able to treat it as a “thing”.  (This is somewhat related to the question of “effective actions” and renormalization, though these are a bit more complicated.)

There are a number of possible experiments suggested in the paper, which Laurent mentioned in the talk.  One involves a kind of “twin paradox” taking place in momentum space.  In “spacetime”, a spaceship travelling a large loop at high velocity will arrive where it started having experienced less time than an observer who remained there (because of the Lorentzian metric) – and a dual phenomenon in momentum space says that particles whose momenta traverse a loop in momentum space should arrive displaced in space, because of the relativity of localization.  This could be observed in particle accelerators where particles make several transits of a loop, since the effect is cumulative.  Another effect could be seen in astronomical observations: if an observer observes some distant object via photons of different wavelengths (hence different momenta), she might “localize” the object differently – that is, the two photons travel at “the same speed” the whole way, but arrive at different times because the observer will interpret the object as being at two different distances for the two photons.

This last one is rather weird, and I had to ask how one would distinguish this effect from a variable speed of light (predicted by certain other ideas about quantum gravity).  How to distinguish such effects seems to be not quite worked out yet, but at least this is an indication that there are new, experimentally detectable, effects predicted by this “relative locality” principle.  As Laurent emphasized, once we’ve noticed that not accepting this principle means making an a priori assumption about the geometry of momentum space (even if only in some particular approximation, or limit, of a true theory of quantum gravity), we’re pretty much obliged to stop making that assumption and do the experiments.  Finding our assumptions were right would simply be revealing which momentum space geometry actually obtains in the approximation we’re studying.

A final note about the physical interpretation: this “relative locality” principle can be discovered by looking (in the relevant limit) at a Lagrangian for free particles, with interactions described in terms of momenta.  It so happens that one can describe this without referencing a “real” spacetime: the part of the action that allows particles to interact when “close” only needs coordinate functions, which can certainly exist here, but are an observer-dependent construct.  The conservation of (non-linear) momenta is specified via a Lagrange multiplier.  The whole Lagrangian formalism for the mechanics of colliding particles works without reference to spacetime.  All the interactions (specified by the conservation-of-momentum terms) do happen “at one location”, in the sense that there will be an observer who sees them happening in the momentum space at her own location.  But an observer at a different point may disagree about whether the interaction was local – i.e. happened at a single point in spacetime.  Thus “relativity of localization”.
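Schematically, and going from memory of the paper’s notation (so treat the details as a sketch rather than a quotation), the action has a free term for each particle J and an interaction term:

S^{free}_J = \int ds \left( x^a_J \dot{p}^J_a + N_J \, C^J(p^J) \right), \qquad S^{int} = z^a \, \mathcal{K}_a(p^1, \dots, p^n)

Here C^J is the mass-shell constraint, \mathcal{K}_a = 0 expresses the (nonlinear) conservation of momenta built from \oplus, and z^a is the Lagrange multiplier.  Varying the action puts each particle’s endpoint at x^a_J = z^b \, \partial \mathcal{K}_b / \partial p^J_a, and when \mathcal{K} is nonlinear these endpoints generically differ from one another, except for the observer at z = 0 – which is the “relative locality” phenomenon in miniature.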

Again, this is no more bizarre (mathematically) than the fact that distant, relatively moving, observers in special relativity might disagree about simultaneity – whether two events happened at the same time.  They have their own coordinates on spacetime, and transforming between them mixes space coordinates and time coordinates, so they’ll disagree about whether the time-coordinate values of two events are the same.  Similarly, in this phase-space picture, two different observers each have a coordinate system for splitting phase space into “spacetime” and “energy-momentum” coordinates, but switching between them may mix these two pieces.  Thus, the two observers will disagree about whether the spacetime-coordinate values for the different interacting particles are the same.  And so, one observer says the interaction is “local in spacetime”, and the other says it’s not.  The point is that it’s local for the particles themselves (thinking of them as observers).  All that’s going on here is the not-very-astonishing fact that in the conventional picture, we have no problem with interactions being nonlocal in momentum space (particles with very different momenta can interact, as long as they collide with each other)… combined with the inability to globally and invariantly distinguish position and momentum coordinates.

What this means, philosophically, can be debated, but it does offer some plausibility to the claim that space and time are auxiliary, conceptual additions to what we actually experience, which just account for the relations between bits of matter.  These concepts can be dispensed with even where we have a classical-looking phase space rather than Hilbert space (where, presumably, this is even more true).

Edit: On a totally unrelated note, I just noticed this post by Alex Hoffnung over at the n-Category Cafe which gives a lot of detail on issues relating to spans in bicategories that I had begun to think more about recently in relation to developing a higher-gauge-theoretic version of the construction I described for ETQFT. In particular, I’d been thinking about how the 2-group analog of restriction and induction for representations realizes the various kinds of duality properties, where we have adjunctions, biadjunctions, and so forth, in which units and counits of the various adjunctions have further duality. This observation seems to be due to Jim Dolan, as far as I can see from a brief note in HDA II. In that case, it’s really talking about the star-structure of the span (tri)category, but looking at the discussion Alex gives suggests to me that this theme shows up throughout this subject. I’ll have to take a closer look at the draft paper he linked to and see if there’s more to say…

As usual, this write-up process has been taking a while, since life does intrude into blogging for some reason.  In this case, because for a little less than a week, my wife and I have been on our honeymoon, which was delayed by our moving to Lisbon.  We went to the Azores – or rather to São Miguel, the largest of the nine islands – and had a good time.

Now that we’re back, I’ll attempt to wrap up the summaries of things discussed at the workshop on Higher Gauge Theory, TQFT, and Quantum Gravity.  In the previous post I described talks which I roughly gathered under TQFT and Higher Gauge Theory, but the latter really ramifies out in a few different ways.  As began to become clear before, higher bundles are classified by higher cohomology of manifolds, and so are gerbes – so in fact these are two slightly different ways of talking about the same thing.  I also remarked, in the summary of Konrad Waldorf’s talk, on the idea that the theory of gerbes on a manifold is equivalent to ordinary gauge theory on its loop space – which is one way to make explicit the idea that categorification “raises dimension”, in this case from parallel transport of points to that of 1-dimensional loops.  Next we’ll expand on that theme, and then finally reach the “Quantum Gravity” part, and draw the connection between this and higher gauge theory toward the end.

Gerbes and Cohomology

The very first workshop speaker, in fact, was Paolo Aschieri, who has done a lot of work relating noncommutative geometry and gravity.  In this case, though, he was talking about noncommutative gerbes, and specifically referred to this work with some of the other speakers.  To be clear, this isn’t about gerbes with a noncommutative group G, but about gerbes on noncommutative spaces.  To begin with, it’s useful to express gerbes in the usual sense in the right language.  In particular, he explained what a gerbe on a manifold X is in concrete terms, giving Hitchin’s definition (viz).  A U(1) gerbe can be described as “a cohomology class”, but it’s more concrete to present it as:

  • a collection of line bundles L_{\alpha \beta} associated with double overlaps U_{\alpha \beta} = U_{\alpha} \cap U_{\beta}.  Note this gets an algebraic structure (multiplication \star of bundles is pointwise \otimes, with an inverse given by the dual, L^{-1} = L^*), so we can require…
  • L_{\alpha \beta}^{-1} \cong L_{\beta \alpha}, which helps define…
  • transition functions \lambda _{\alpha \beta \gamma} on triple overlaps U_{\alpha \beta \gamma}, which are sections of L_{\alpha \beta \gamma} = L_{\alpha \beta} \star L_{\beta \gamma} \star L_{\gamma \alpha}.  If this product is trivial, there’d be a 1-cocycle condition here, but we only insist on the 2-cocycle condition…
  • \lambda_{\beta \gamma \delta} \lambda_{\alpha \gamma \delta}^{-1} \lambda_{\alpha \beta \delta} \lambda_{\alpha \beta \gamma}^{-1} = 1
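To see why this is the right condition, it helps to check the trivial case (a standard Čech-cohomology exercise, in the simplest situation where the L_{\alpha \beta} are trivialized so the \lambda become U(1)-valued functions): if the data comes from an honest line bundle with transition functions g_{\alpha \beta}, so that \lambda_{\alpha \beta \gamma} = g_{\alpha \beta} g_{\beta \gamma} g_{\gamma \alpha}, then on a quadruple overlap

\lambda_{\beta \gamma \delta} \lambda_{\alpha \gamma \delta}^{-1} \lambda_{\alpha \beta \delta} \lambda_{\alpha \beta \gamma}^{-1} = (g_{\beta \gamma} g_{\gamma \delta} g_{\delta \beta})(g_{\alpha \gamma} g_{\gamma \delta} g_{\delta \alpha})^{-1}(g_{\alpha \beta} g_{\beta \delta} g_{\delta \alpha})(g_{\alpha \beta} g_{\beta \gamma} g_{\gamma \alpha})^{-1} = 1

since every factor cancels against an inverse (using g_{\beta \alpha} = g_{\alpha \beta}^{-1}).  A gerbe is interesting precisely when \lambda satisfies the 2-cocycle condition without being such a product.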

This is a U(1)-gerbe on a commutative space.  The point is that one can make a similar definition for a noncommutative space.  If the space X is associated with the algebra A=C^{\infty}(X) of smooth functions, then a line bundle is a module for A, so if A is noncommutative (thought of as a “space” X), a “bundle” over X is just defined to be an A-module.  One also has to define an appropriate “covariant derivative” operator D on this module, and the \star-product must be defined as well, and will be noncommutative (we can think of it as a deformation of the \star above).  The transition functions are sections: that is, elements of the modules in question.  This means we can describe a gerbe in terms of a big stack of modules, with a chosen algebraic structure, together with some elements.  The idea then is that gerbes can give an interpretation of cohomology of noncommutative spaces as well as commutative ones.

Mauro Spera spoke about a point of view on gerbes based on “transgressions”.  The essential point is that an n-gerbe on a space X can be seen as the obstruction to patching together a family of (n-1)-gerbes.  Thus, for instance, a U(1) 0-gerbe is a U(1)-bundle, which is to say a complex line bundle.  As described above, a 1-gerbe can be understood as describing the obstruction to patching together a bunch of line bundles, and that obstruction is measured by the class of the cocycle \lambda in the cohomology of the space.  The same pattern continues in higher degrees: each n-gerbe describes the obstruction to patching together the (n-1)-gerbes one level down.  He went on to discuss how this manifests in terms of obstructions to string structures on manifolds (already discussed at some length in the post on Hisham Sati’s school talk, so I won’t duplicate that here).

A talk by Igor Bakovic, “Stacks, Gerbes and Etale Groupoids”, gave a way of looking at gerbes via stacks (see this for instance).  The organizing principle is the classification of bundles by maps into a classifying space – or, to get the category of principal G-bundles on X, the category Top(Sh(X),BG), where Sh(X) is the category of sheaves on X and BG is the classifying topos of G-sets.  (So we have geometric morphisms between the toposes as the objects.)  Now, to get further into this, we use that Sh(X) is equivalent to the category of étale spaces over X – this is a refinement of the equivalence between bundles and presheaves.  Taking stalks of a presheaf gives a bundle, and taking sections of a bundle gives a presheaf – and these operations are adjoint.

The issue at hand is how to categorify this framework to talk about 2-bundles, and the answer is there’s a 2-adjunction between the 2-category 2-Bun(X) of such things, and Fib(X) = [\mathcal{O}(X)^{op},Cat], the 2-category of fibred categories over X.  (That is, instead of looking at “sheaves of sets”, we look at “sheaves of categories” here.)  The adjunction, again, involves taking stalks one way, and taking sections the other way.  One hard part of this is getting a nice definition of “stalk” for stacks (i.e. for the “sheaves of categories”), and a good part of the talk focused on explaining how to get a nice tractable definition which is (fibre-wise) equivalent to the more natural one.

Bakovic did a bunch of this work with Branislav Jurco, who was also there, and spoke about “Nonabelian Bundle 2-Gerbes“.  The paper behind that link has more details, which I’ve yet to entirely absorb, but the essential point appears to be to extend the description of “bundle gerbes” associated to crossed modules up to 2-crossed modules.  Bundles with a structure group G are classified by the cohomology H^1(X,G) with coefficients in G; “bundle gerbes” with a structure crossed module H \rightarrow G can likewise be described by cohomology H^1(X,H \rightarrow G).  Notice this is a bit different from the description in terms of higher cohomology H^2(X,G) for a G-gerbe, which can be understood as a bundle gerbe using the shifted crossed module G \rightarrow 1 (when G is abelian).  The goal here is to generalize this part to nonabelian groups, and also pass up to “bundle 2-gerbes” based on a 2-crossed module, or crossed complex of length 2, L \rightarrow H \rightarrow G, as I described previously for Joao Martins’ talk.  This would be classified in terms of cohomology valued in the 2-crossed module.  The point is that one can describe such a thing as a bundle over a fibre product, which (I think – I’m not so clear on this part) deals with the same structure of overlaps as the higher cohomology in the other way of describing things.

Finally, a talk that’s a little harder to classify than most, but which I’ve put here with things somewhat related to string theory, was Alexander Kahle‘s on “T-Duality and Differential K-Theory”, based on work with Alessandro Valentino.  This uses the idea of the differential refinement of cohomology theories – in this case, K-theory, which is a generalized cohomology theory, meaning it satisfies the Eilenberg-Steenrod axioms (with the dimension axiom relaxed, hence “generalized”).  Cohomology theories, including generalized ones, can have differential refinements, which pass from giving topological to geometrical information about a space.  So, while K-theory assigns to a space the Grothendieck ring of the category of vector bundles over it, the differential refinement of K-theory does the same with the category of vector bundles with connection.  This captures both local and global structures, which turns out to be necessary to describe fields in string theory – specifically, Ramond-Ramond fields.  The point of this talk was to describe what happens to these fields under T-duality.  This is a kind of duality in string theory between theories compactified on circles of radius r and 1/r.  The talk describes how this works, where we have two bundles over a manifold M: M \times S^1_r, with circle fibres of radius r, and M \times S^1_{1/r}, with fibres of radius 1/r.  There’s a correspondence space M \times S^1_r \times S^1_{1/r}, which has projection maps down into the two situations.  Fields, being forms on such a fibration, can be “transferred” through this correspondence space by a “pull-back and push-forward” (with, in the middle, a wedge with a form that mixes the two directions, exp(d \theta_r \wedge d \theta_{1/r})).  But to be physically the right kind of field, these “forms” actually need to be representing cohomology classes in the differential refinement of K-theory.

Quantum Gravity etc.

Now, part of the point of this workshop was to try to build, or anyway maintain, some bridges between the kind of work in geometry and topology which I’ve been describing and the world of physics.  There are some particular versions of physical theories where these ideas have come up.  I’ve already touched on string theory along the way (there weren’t many talks about it from a physicist’s point of view), so this will mostly be about a different sort of approach.

Benjamin Bahr outlined this approach for our mathematician-heavy audience in his talk on “Spin Foam Operators” (see also for instance this paper).  The point is that one approach to quantum gravity has a theory whose “kinematics” (the description of the state of a system at a given time) is described by “spin networks” (based on SU(2) gauge theory), as described back in the pre-school post.  These span a Hilbert space, so the “dynamical” issue of such models is how to get operators between Hilbert spaces from “foams” that interpolate between such networks – that is, what kind of extra data they might need, and how to assign amplitudes to faces and edges etc. to define an operator, which (assuming a “local” theory where distant parts of the foam affect the result independently) will be of the form:

Z(K,\rho,P) = (\prod_f A_f) \prod_v Tr_v(\bigotimes_e P_e)

where K is a particular complex (foam), \rho is a way of assigning irreps to faces of the foam, and P is the assignment of intertwiners to edges.  Later on, one can take a discrete version of a path integral by summing over all these (K, \rho, P).  Here we have a product over faces and one over vertices, with an amplitude A_f assigned (somehow – this is the issue) to faces.  The trace is over all the representation spaces assigned to the edges that are incident to a vertex (this is essentially the only consistent way to assign an amplitude to a vertex).  If we also consider spacetimes with boundary, we need some amplitudes B_e at the boundary edges, as well.  A big part of the work with such models is finding such amplitudes that meet some nice conditions.

Some of these conditions are inherently necessary – to ensure the theory is invariant under gauge transformations, or (formally) changing orientations of faces.  Others are considered optional, though to me “functoriality” (that the way of deriving operators respects the gluing-together of foams) seems unavoidable – it imposes that the boundary amplitudes have to be found from the A_f in one specific way.  Some other nice conditions might be: that Z(K, \rho, P) depends only on the topology of K (which demands that the P operators be projections); that Z is invariant under subdivision of the foam (which implies the amplitudes have to be A_f = dim(\rho_f)).
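For SU(2) labels, this last condition pins the face amplitudes down completely: the irrep with spin j_f has dimension 2 j_f + 1, so subdivision invariance forces

A_f = dim(\rho_{j_f}) = 2 j_f + 1

which (if I have the correspondence right) is exactly the face weight appearing in the Ponzano-Regge model mentioned below, where edges of the triangulation are dual to faces of the foam.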

Assuming all these conditions, the only choice left is exactly which sub-projection P_e to use of the projection onto the gauge-invariant part of the representation space for the faces attached to the edge e.  The rest of the talk discussed this, including some examples (models for BF-theory, the Barrett-Crane model and the more recent EPRL/FK model), and finished up by discussing issues about getting a nice continuum limit by way of “coarse graining”.

On a related subject, Bianca Dittrich spoke about “Dynamics and Diffeomorphism Symmetry in Discrete Quantum Gravity”, which explained the nature of some of the hard problems with this sort of discrete model of quantum gravity.  She began by asking which choices of amplitudes in such discrete models would actually produce a nice continuum theory – since gravity, classically, is described in terms of spacetimes which are continua, and the quantum theory must look like this in some approximation.  The point is to think of these as “coarse-grainings” of a very fine (perfect, in the limit) approximation to the continuum by a triangulation with a very short length-scale for the edges.  Coarse graining means discarding some of the edges to get a coarser approximation (perhaps repeatedly).  If the Z happens to be triangulation-independent, then coarse graining makes no difference to the result, nor does the converse process of refining the triangulation.  So one question is:  if we expect the continuum limit to be diffeomorphism invariant (as is General Relativity), what does this say at the discrete level?  The relation between diffeomorphism invariance and triangulation invariance has been described by Hendryk Pfeiffer, and in the reverse direction by Dittrich et al.

Actually constructing the dynamics for a system like this in a nice way (“canonical dynamics with anomaly-free constraints”) is still a big problem, which Bianca suggested might be approached by this coarse-graining idea.  Now, if a theory is topological (here we get the link to TQFT), such as electromagnetism in 2D, or (linearized) gravity in 3D, coarse graining doesn’t change much.  But otherwise, changing the length scale means changing the action for the continuum limit of the theory.  This is related to renormalization: one starts with a “naive” guess at a theory, then refines it (in this case, by the coarse-graining process), which changes the action for the theory, until arriving at (or approximating to) a fixed point.  Bianca showed an example, which produces a really huge, horrible action full of very complicated terms, which seems rather dissatisfying.  What’s more, she pointed out that, unless the theory is topological, this always produces an action which is non-local – unlike the “naive” discrete theory.  That is, the action can’t be described in terms of a bunch of non-interacting contributions from the field at individual points – instead, it’s some function which couples the field values at distant points (albeit in a way that falls off exponentially as the points get further apart).

In a more specific talk, Aleksandr Mikovic discussed “Finiteness and Semiclassical Limit of EPRL-FK Spin Foam Models”, looking at a particular example of such models which is the (relatively) new-and-improved candidate for quantum gravity mentioned above.  This was a somewhat technical talk, which I didn’t entirely follow, but  roughly, the way he went at this was through the techniques of perturbative QFT.  That is, by looking at the theory in terms of an “effective action”, instead of some path integral over histories \phi with action S(\phi) – which looks like \int d\phi  e^{iS(\phi)}.  Starting with some classical history \bar{\phi} – a stationary point of the action S – the effective action \Gamma(\bar{\phi}) is an integral over small fluctuations \phi around it of e^{iS(\bar{\phi} + \phi)}.
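In symbols, and very schematically (suppressing normalizations and the usual source-term/Legendre-transform subtleties of the textbook definition), the idea is:

e^{i \Gamma(\bar{\phi})} \sim \int d\phi \; e^{iS(\bar{\phi} + \phi)}

so \Gamma agrees with S at leading order, with corrections recording the effect of the fluctuations.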

He commented more on the distinction between the question of triangulation independence (which is crucial for using spin foams to give invariants of manifolds) and the question of whether the theory gives a good quantum theory of gravity – that’s the “semiclassical limit” part.  (In light of the above, this seems to amount to asking if “diffeomorphism invariance” really extends through to the full theory, or is only approximately true, in the limiting case.)  Then the “finiteness” part has to do with the question of getting decent asymptotic behaviour for some of those weights mentioned above so as to give a nice effective action (if not necessarily triangulation independence).  So, for instance, in the Ponzano-Regge model (which gives a nice invariant for manifolds), the vertex amplitudes A_v are found by the 6j-symbols of representations.  The asymptotics of the 6j symbols then becomes an issue – Aleksandr noted that to get a theory with a nice effective action, those 6j-symbols need to be scaled by a certain factor.  This breaks triangulation independence (hence means we don’t have a good manifold invariant), but gives a physically nicer theory.  In the case of 3D gravity, this is not what we want, but as he said, there isn’t a good a-priori reason to think it can’t give a good theory of 4D gravity.

Now, making a connection between these sorts of models and higher gauge theory, Aristide Baratin spoke about “2-Group Representations for State Sum Models”.  This is a project of Baez, Freidel, and Wise, building on work by Crane and Sheppard (see my previous post, where Derek described the geometry of the representation theory for some 2-groups).  The idea is to construct state-sum models where, at the kinematical level, edges are labelled by 2-group representations, faces by intertwiners, and tetrahedra by 2-intertwiners.  (This assumes the foam is a triangulation – there’s a certain amount of back-and-forth in this area between this and the Poincaré dual picture, where we have 4-valent vertices.)  He discussed this in a couple of related cases – the Euclidean and Poincaré 2-groups, which are described by crossed modules with base groups SO(4) or SO(3,1) respectively, acting on the abelian group (of automorphisms of the identity) R^4 in the obvious way.  Then the analog of the 6j symbols above, which are assigned to tetrahedra (or dually, vertices in a foam interpolating two kinematical states), are now 10j symbols assigned to 4-simplexes (or dually, vertices in the foam).

One nice thing about this setup is that there’s a good geometric interpretation of the kinematics – irreducible representations of these 2-groups pick out orbits of the action of the relevant SO on R^4.  These are “mass shells” – radii of spheres in the Euclidean case, or proper length/time values that pick out hyperboloids in the Lorentzian case of SO(3,1).  Assigning these to edges has an obvious geometric meaning (as a proper length of the edge), which thus has a continuous spectrum.  The areas and volumes interpreting the intertwiners and 2-intertwiners start to exhibit more of the discreteness you see in the usual formulation with representations of the SO groups themselves.  Finally, Aristide pointed out that this model originally arose not from an attempt to make a quantum gravity model, but from looking at Feynman diagrams in flat space (a sort of “quantum flat space” model), which is suggestively interesting, if not really conclusively proving anything.

Finally, Laurent Freidel gave a talk, “Classical Geometry of Spin Network States”, which was a way of challenging the idea that these states are exclusively about “quantum geometries”, and tried to give an account of how to interpret them as discrete, but classical.  That is, the quantization of the classical phase space T^*(A/G) (the cotangent bundle of connections-mod-gauge) involves first a discretization to a spin-network phase space \mathcal{P}_{\Gamma}, and then a quantization to get a Hilbert space H_{\Gamma}, and the hard part is the first step.  The point is to see what the classical phase space is, and he describes it as a (symplectic) quotient T^*(SU(2)^E)//SU(2)^V, which starts by assigning T^*(SU(2)) to each edge, then reducing by gauge transformations.  The puzzle is to interpret the states as geometries with some discrete aspect.

The answer is that one thinks of edges as describing (dual) faces, and vertices as describing some polytopes.  For each p, there’s a 2(p-3)-dimensional “shape space” of convex polytopes with p faces and a given fixed area j.  This has a canonical symplectic structure, where lengths and interior angles at an edge are the canonically conjugate variables.  Then the whole phase space describes ways of building geometries by gluing these things (associated to vertices) together at the corresponding faces whenever the two vertices are joined by an edge.  Notice this is a bit strange, since there’s no particular reason the faces being glued will have the same shape: just the same area.  An area-1 pentagon and an area-1 square associated to the same edge could be glued just fine.  Then the classical geometry for one of these configurations is built out of a bunch of flat polyhedra (i.e. with a flat metric and connection on them).  Measuring distance across a face in this geometry is a little strange.  Given two points inside adjacent cells, you measure the orthogonal distance to the matched faces, and add in the distance between the points you arrive at (orthogonally) – assuming you glued the faces at the centre.  This is a rather ugly-seeming geometry, but it’s symplectically isomorphic to the phase space of spin network states – so it’s these classical geometries that spin-foam QG is a quantization of.  Maybe the ugliness should count against this model of quantum gravity – or maybe my aesthetic sense just needs work.
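Incidentally, a quick check on that dimension count: the smallest case is p = 4, a tetrahedron with four faces of fixed areas, where

2(p - 3) = 2(4 - 3) = 2

matching the familiar 2-dimensional phase space of the “quantum tetrahedron” in the loop quantum gravity literature.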

(Laurent also gave another talk, which was originally scheduled as one of the school talks, but ended up being a very interesting exposition of the principle of “Relativity of Localization”, which is hard to shoehorn into the themes I’ve used here, and was anyway interesting enough that I’ll devote a separate post to it.)

So I had a busy week from Feb 7-13, which was when the workshop Higher Gauge Theory, TQFT, and Quantum Gravity (or HGTQGR) was held here in Lisbon.  It ended up being a full day from 0930h to 1900h pretty much every day, except the last.  We’d tried to arrange it so that there were coffee breaks and discussion periods, but there was also a plethora of talks.  Most of the people there seemed to feel that it ended up pretty well.  Since then I’ve been occupied with other things – family visiting the country, for one, so it’s taken a while to get around to writing it up.  Since there were several parts to the event, I’ll do this in several parts as well, of which this is the first one.

Part of the point of the workshop was to bring together a few related subjects in which category theoretic ideas come into areas of mathematics which play a role in physics, and hopefully to build some bridges toward applications.  While it leaned pretty strongly on the mathematical side of this bridge, I think we did manage to get some interaction at the overlap.  Roger Picken drew a nifty picture on the whiteboard at the end of the workshop summarizing how a lot of the themes of the talks clustered around the three areas mentioned in the title, and suggesting how TQFT really does form something of a bridge between the other two – one reason it’s become a topic of some interest recently.  I’ll try to build this up to a similar punchline.

Pre-School

Before the actual event began, though, we had a bunch of talks at IST for a local audience, to try to explain to mathematicians what the physics part of the workshop was about.  Aleksandr Mikovic gave a two-talk introduction to Quantum Gravity, and Sebastian Guttenberg gave a two-part intro to String Theory.  These are two areas where higher gauge theory (in the form of n-connections and n-bundles, or of n-gerbes) has made an appearance, and were the main physics content of the workshop talks.  They set up the basics to help put those talks in context.

Quantum Gravity

Aleksandr’s first talk set out the basic problem of quantizing the gravitational field (this isn’t the only attitude to what the problem of quantum gravity is, but it’s a good starting point), starting with the basic ingredients.  He summarized how general relativity describes gravity in terms of a metric g_{\mu \nu} which is supposed to satisfy the Einstein equation, relating the curvature of the metric to a source field T_{\mu \nu} which comes from matter.  Quantization then starts from a classical picture involving trajectories of particles (or sections of fibre bundles to describe fields), and yields a picture where states are vectors in a Hilbert space, and there’s an algebra of operators including observables (self-adjoint operators) and time-evolution (unitary ones).   An initial try at quantum gravity was to do this using the metric as the field, using the methods of perturbative QFT: treating the metric in terms of “small” fluctuations from some background metric like the flat Minkowski metric.  This uses the Einstein-Hilbert action S=\frac{1}{G} \int \sqrt{det(g)}R, where G is the gravitational constant and R is the Ricci scalar that summarizes the curvature of g.  This runs into problems: things diverge in various calculations, and since the coupling constant G has units, one can’t “renormalize” the divergences away.  So one needs a non-perturbative approach,  one of which is “canonical quantization“.
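For reference, the Einstein equation in question (with c = 1, and up to conventions about constants) is:

R_{\mu \nu} - \frac{1}{2} R g_{\mu \nu} = 8 \pi G T_{\mu \nu}

with the left side built from the curvature of g, and the right side the stress-energy of matter.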

After some choice of coordinates (so-called “lapse” and “shift” functions), canonical quantization involves describing the action in terms of the (space part of) the metric g_{kl} and some canonically conjugate “momentum” variables \pi_{kl} which describe its extrinsic curvature.  The Euler-Lagrange equations (found as usual by variational calculus methods) then turn out to give the “Hamiltonian constraint” that certain functions of g are always zero.  Then the program is to get a Poisson algebra giving brackets of the \pi and g variables, and turn it into an algebra of operators in a standard way.  This also runs into problems, because the space of metrics isn’t a Hilbert space.  One solution is to not use the metric, but instead a connection and a “frame field” – the so-called Ashtekar variables for GR.  This works better, and gives the “Loop Quantum Gravity” setup, since observables tend to be expressed as holonomies around loops.

Finally, Aleksandr outlined the spin foam approach to quantizing gravity.  This is based on the idea of a quantum geometry as a network (graph) with edges labelled by spins, i.e. representations of SU(2) (which are labelled by half-integers), and vertices labelled by intertwining operators (which imposes triangle inequalities, as it happens).  The spin foam approach takes a Hilbert space with a basis given by these spin networks.  These are supposed to be an alternative way of describing geometries given by SU(2)-connections.  The representations arise because, as the Peter-Weyl theorem shows, they form a nice basis for L^2(SU(2)).  Then one wants to get operators associated to “foams” that interpolate the spacetime between two such geometries (i.e. between linear combinations of spin networks).  These are 2-complexes where faces are labelled with spins, and edges with intertwiners for the spins on the faces incident to them.  The operators arise from a discrete variant of the Feynman path-integral, where time-evolution comes from integrating an action over a space of (classical) trajectories, which in this case are foams.  This needs an action to integrate – in the discrete world, this corresponds to ways of choosing weights A_e for edges and A_f for faces in a generic partition function:

Z = \sum_{J,I} \prod_{faces} A_f(j_f) \prod_{edges} A_e(i_e)

which is a sum over the labels for representations and intertwiners.  Some of the talks that came later in the conference (e.g. by Benjamin Bahr and Bianca Dittrich) came back to discuss principles behind how these A functions could be chosen.  (Aristide Baratin’s talk described a similar but more general kind of model based on 2-groups.)

String Theory

In parallel with these, Sebastian Guttenberg gave us a two-lecture introduction to string theory.  His starting point is the intuition that a lot of classical physics studies particles living on a background of some field.  The field can be understood as an approximate way of talking about a large number of quantum-mechanical particles, rather as the dynamics of a large number of classical particles can be approximated by the equations of state for a fluid or gas (depending on how much they interact with one another, among other things).  In string theory and “string field theory”, we have a similar setup, except we replace the particles with small strings – either open strings (which look like intervals) or closed ones (which look like circles).

To begin with, he introduced the basic tools of “classical” string theory – the analog of classical mechanics of point particles.  This is the string analog of the following: one can describe a moving particle by its worldline – a path x : \mathbb{R} \rightarrow M^{(D)} from a “generic” worldline into a (D-dimensional) manifold M^{(D)}.  This M^{(D)} is generally taken to be like physical spacetime, which in this context means that it has a metric g with signature (-1,1,\dots,1) (that is, locally there’s a basis for tangent spaces with one timelike vector and D-1 spacelike ones).  Then one can define an action for a moving particle which is just determined by the length of the line’s image.  The nicest way to say this is S[x] = m \int d\tau \sqrt{x^*g}, where x^*g means the pullback of the metric along the map x, \tau is some parameter along the generic worldline, and m, the particle’s mass, is a coupling constant which doesn’t happen to affect the result in this simple case, but eventually becomes important.  One can do the usual variational calculus of the Lagrangian approach here, finding that a critical point of the action occurs when the particle is travelling in a geodesic – a straight line, in flat space, or the closest available approximation.  In particular, the Euler-Lagrange equations say that the covariant derivative of the path should be zero.
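Written out in coordinates (in an affine parametrization), that condition is the geodesic equation:

0 = \ddot{x}^k + \Gamma^k_{mn} \dot{x}^m \dot{x}^n

where dots are \tau-derivatives and the \Gamma^k_{mn} are the Christoffel symbols of the metric g – this is the equation whose two-dimensional analog appears below.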

There’s an analogous action for a string, the Nambu-Goto action.  Instead of a single-parameter x, we now have an embedding of a “generic string worldsheet” – let’s say \Sigma^{(2)} \cong S^1 \times \mathbb{R} – into spacetime: x : \Sigma^{(2)} \rightarrow M^{(D)}.  Then the analogous action is just S[x] = \int_{\Sigma^{(2)}} \star_{x^*g} 1.  This is pretty much the same as before: we pull back the metric to get x^*g, and integrate over the generic worldsheet.  A slight subtlety comes because we’re taking the Hodge dual \star.  This is conceptually clean, but expands out to a fairly big integral when you express it in coordinates, where the leading term involves \sqrt{det(\partial_{\mu} x^m \partial_{\nu} x^n g_{mn})} (the determinant is taken over the (\mu,\nu) indices).  Varying this to get the equations of motion produces:

0 = \partial_{\mu} \partial^{\mu} x^k + \partial_{\mu} x^m \partial^{\mu} x^n \Gamma_{mn}^k

which is the two-dimensional analog of the geodesic equation for a point particle (the \Gamma are the Christoffel symbols associated to the connection that goes with the metric).  The two-dimensional analog says we have a critical point for the area of the surface which is the image of \Sigma^{(2)} – in fact, a “maximum”, given the sign of the metric.  For solutions like this, the pullback metric on the worldsheet, x*g, looks flat.  (Naturally, the metric looks flat along a geodesic, too, but this is stronger in 2 dimensions, where there can be intrinsic curvature.)

A souped-up version of the Nambu-Goto action is the Polyakov action, which is a natural variation that comes up when \Sigma^{(2)} has a metric of its own, h.  You can check out the details behind that link, but part of what makes this action nice is that the corresponding Euler-Lagrange equation from varying h says that x^*g \sim h.  That is, the worldsheet \Sigma^{(2)} will have an image with a shape such that its own metric agrees with the one induced from the spacetime M^{(D)}.   It’s called the Polyakov action (even though it was introduced by Deser and Zumino, among others) because Polyakov used it for quantizing the string.
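For reference, the Polyakov action is (up to conventions about the overall constant) the standard:

S_P[x,h] = -\frac{T}{2} \int_{\Sigma^{(2)}} d^2\sigma \; \sqrt{-det(h)} \; h^{\mu \nu} \partial_{\mu} x^m \partial_{\nu} x^n g_{mn}

where T is the string tension; varying h gives the statement that h must agree with the pullback metric x^*g up to a conformal rescaling, which is the x^*g \sim h condition above.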

Other variations on this action add additional terms which represent fields which the string might be affected by: a scalar \phi(x), and a 2-form field B_{mn}(x) (here we’re using the physics convention where x represents both the function, and its values at particular points, in this case, values of parameters (\sigma_0,\sigma_1) on \Sigma^{(2)}).

That 2-form, the “B-field”, is an important field in string theory, and eventually links up with higher gauge theory, which we’ll get to as we go on: one can interpret the B-field as part of a higher connection, to which the string is coupled (as in Baez and Perez, say).  The scalar field \phi essentially determines how strongly the shape of the string itself affects the action – it’s a “string coupling” term, or string coupling “constant” if it’s chosen to be just a number \phi_0.  (In such a case, the action includes a term that looks like \phi_0 times the Euler characteristic of the surface \Sigma^{(2)}.)

Sebastian briefly explained some of the physical intuition for why these are the kinds of couplings which it makes sense to introduce.  Essentially, any coupling one writes in coordinates has to get along with gauge symmetries, changes of coordinates, etc.  That is, there should be no physical difference between the class of solutions one finds in a given set of coordinates, and the class one finds after doing some diffeomorphism on the spacetime M^{(D)}, or after changing the metric on \Sigma^{(2)} by some conformal transformation h_{\mu \nu} \mapsto exp(2 \omega(\sigma^0,\sigma^1)) h_{\mu \nu} (that is, scaling by some function of position on the worldsheet – underlying string theory is Conformal Field Theory, in that the overall scale of the generic worldsheet is irrelevant; only the light-cone structure matters).  Anything a string couples to should be a field that transforms in a way that respects this.  One important upshot for the quantum theory is that when one quantizes a string coupled to such a field, this makes sure that time evolution is unitary.

How this is done is a bit more complicated than Sebastian wanted to go into in detail (and I got a little lost in the summary) so I won’t attempt to do it justice here.  The end results include a partition function:

Z = \sum_{topologies} \int dx \, dh \, \frac{exp(-S[x,h])}{V_{diff} V_{weyl}}

Remember: if one is finding amplitudes for various observables, the partition function is a normalizing factor, and finding the value of any observable means squeezing it into a similar-looking integral (and normalizing by this factor).  So this says that they’re found by summing over all the string topologies which go from the input to the output, and integrating over all embeddings x : \Sigma^{(2)} \rightarrow M^{(D)} and metrics on \Sigma^{(2)}.  (The denominator in that fraction is dividing out by the volumes of the symmetry groups, as usual in quantum field theory, since these symmetries mean one is “overcounting” physically identical situations.)

This is just the beginning of string field theory, of course: just as the dynamics of a free moving particle, or even a particle coupled to a background field, are only the beginning of quantum field theory.  But many later additions can be understood as adding various terms to the action S in some such formalism.  These would be analogs of giving a point-particle attributes like charge, spin, “colour” and so forth in the Standard Model: these define how it couples to, hence is affected by, various kinds of fields.  Such fields can be understood in terms of connections (or, in general, higher connections, as we’ll get to later), which define how structures are “parallel-transported” along a path (or higher-dimensional surface).


Coming up in Part II… I’ll summarize the School portion of the HGTQGR workshop, including lecture series by: Christopher Schommer-Pries on Classifying 2D Extended TQFT, which among other things explained Chris’ proof of the Cobordism Hypothesis using Cerf theory; Tim Porter on Homotopy QFT and the “Crossed Menagerie”, which described a general framework for talking about quantum theories on cobordisms with structure; John Huerta on Higher Gauge Theory, which gave an introductory account of 2-groups and 2-bundles with 2-connections; Christoph Wockel on connections between Higher Gauge Theory and Infinite Dimensional Lie Theory, which described how some infinite-dimensional Lie algebras can’t be integrated to Lie groups, but only to 2-groups; and one by Hisham Sati on Higher Spin Structures in String Theory, which among other things described how cohomological obstructions to putting certain kinds of structure on manifolds motivate the use of particular higher dimensions.

A more substantial post is upcoming, but I wanted to get out this announcement for a conference I’m helping to organise, along with Roger Picken, João Faria Martins, and Aleksandr Mikovic.  Its website: https://sites.google.com/site/hgtqgr/home has more details, and will have more as we finalise them, but here are some of them:

Workshop and School on Higher Gauge Theory, TQFT and Quantum Gravity

Lisbon, 10-13 February, 2011 (Workshop), 7-13 February, 2011 (School)

Description from the website:

Higher gauge theory is a fascinating generalization of ordinary abelian and non-abelian gauge theory, involving (at the first level) connection 2-forms, curvature 3-forms and parallel transport along surfaces. This ladder can be continued to connection forms of higher degree and transport along extended objects of the corresponding dimension. On the mathematical side, higher gauge theory is closely tied to higher algebraic structures, such as 2-categories, 2-groups etc., and higher geometrical structures, known as gerbes or n-gerbes with connection. Thus higher gauge theory is an example of the categorification phenomenon which has been very influential in mathematics recently.

There have been a number of suggestions that higher gauge theory could be related to (4D) quantum gravity, e.g. by Baez-Huerta (in the QG^2 Corfu school lectures), and Baez-Baratin-Freidel-Wise in the context of state-sums. A pivotal role is played by TQFTs in these approaches, in particular BF theories and variants thereof, as well as extended TQFTs, constructed from suitable geometric or algebraic data. Another route between higher gauge theory and quantum gravity is via string theory, where higher gauge theory provides a setting for n-form fields, worldsheets for strings and branes, and higher spin structures (i.e. string structures and generalizations, as studied e.g. by Sati-Schreiber-Stasheff). Moving away from point particles to higher-dimensional extended objects is a feature both of loop quantum gravity and string theory, so higher gauge theory should play an important role in both approaches, and may allow us to probe a deeper level of symmetry, going beyond normal gauge symmetry.

Thus the moment seems ripe to bring together a group of researchers who could shed some light on these issues. Apart from the courses and lectures given by the invited speakers, we plan to incorporate discussion sessions in the afternoon throughout the week, for students to ask questions and to stimulate dialogue between participants from different backgrounds.

Provisional list of speakers:

  • Paolo Aschieri (Alessandria)
  • Benjamin Bahr (Cambridge)
  • Aristide Baratin (Paris-Orsay)
  • John Barrett (Nottingham)
  • Rafael Diaz (Bogotá)
  • Bianca Dittrich (Potsdam)
  • Laurent Freidel (Perimeter)
  • John Huerta (California)
  • Branislav Jurco (Prague)
  • Thomas Krajewski (Marseille)
  • Tim Porter (Bangor)
  • Hisham Sati (Maryland)
  • Christopher Schommer-Pries (MIT)
  • Urs Schreiber (Utrecht)
  • Jamie Vicary (Oxford)
  • Konrad Waldorf (Regensburg)
  • Derek Wise (Erlangen)
  • Christoph Wockel (Hamburg)

The workshop portion will have talks by the speakers above (those who can make it), and any contributed talks.  The “school” portion is, roughly, aimed at graduate students in a field related to the topics, but not necessarily directly in them.  You don’t need to be a student to attend the school, of course, but they are the target audience.  The only course that has been officially announced so far will be given by Christopher Schommer-Pries, on TQFT.  We hope/expect to also have minicourses on Higher Gauge Theory, and Quantum Gravity as well, but details aren’t settled yet.

If you’re interested, the deadline to register is Jan 8 (hence the rush to announce).  Some funding is available for those who need it.