In my post about my short talk at CQC, I mentioned that the groupoidification program in physics is based on a few simple concepts (most research programs are, I suppose). The ones I singled out are: state, symmetry, and history. But since concepts tend to seem simpler if you leave them undefined, there are bound to be subtleties here. Recently I’ve been thinking about the first one, state. What is a state? What is this supposedly simple concept?

Etymology isn’t an especially reliable indicator of what a word means, or even of the history of a concept (words change meanings, and concepts shift over time), but it’s sometimes interesting to trace. The English word “state” comes from the Latin verb *stare*, meaning “to stand”, whose past participle is *status*, which is also borrowed directly into English. Behind the Latin is the Proto-Indo-European root *sta-*, which also means “stand”; the English word “stand” in turn comes from this root, but this time via Germanic (along with “standard”). However, most of the words with this root come via various Latin intermediaries: *state, stable, status, statue, stationary, station*, and also *substance, understand* and others. The *state of affairs* is sometimes referred to as being “how things stand”, how they are, the current condition. Most of the words based on the *sta-* root imply non-motion (i.e. “stasis”). If anything, “state” (like “status”) carries this connotation less strongly than most, since the state of affairs can change – but it emphasizes how things stand *now* and not how they’re changing. From this sense, we also get the political meaning of “a state”, a reified version of a term originally meaning the political condition of a country (by analogy with Latin expressions like *status rei publicae*, the “condition of public affairs”).

So, narrowing focus now, the “state” of a physical system is the condition it’s in. In different models of physics, this is described in different ways, but in each case, by the “condition” we mean something like a complete description of all the facts about the system we can get. But this means different things in different settings. So I just want to take a look at some of them.

Think of these different settings for physics as being literally “settings” (but please excuse the pun) of the switches on a machine. Three of the switches are labelled Thermal, Quantum, and Relativistic. The “Thermal” switch selects whether we’re talking about thermodynamics or ordinary mechanics; the “Quantum” switch selects whether we’re talking about a quantum or a classical system.

The “Relativistic” switch, which I’ll ignore for this post, specifies what kind of invariance we have: Galilean for Newton’s physics; Lorentzian for Special Relativity; general covariance for General Relativity. But this gets into dynamics, and “state” implies things are, well, static – that is, it’s about kinematics. At the very least, in Relativity, there’s no canonical meaning of “now”, and so the definition of a state must include choosing a reference frame (in SR), or a Cauchy hypersurface (in GR). So let’s gloss over that for now.

When all these switches are in the “off” position, we have classical mechanics. Here, we think of a state – at a first level of approximation – as an *element of a set*. Now, for serious classical mechanics, this set will be a symplectic manifold, like the cotangent bundle $X = T^*M$ of some manifold $M$. This is actually a bit subtle already, since a point in $X$ represents a collection of positions and momenta (or some generalization thereof): that is, we can start with a space of “static” configurations, parametrized by the values of some observable quantities, but a *state* (contrary to what etymology suggests) also includes momenta describing how those quantities are changing with time (which, in classical mechanics, is a fairly unproblematic notion).

The Hamiltonian picture of the *dynamics* of the system then tells us: given its state, what will be the acceleration, which we can then use to calculate states at future times. This requires a Hamiltonian, $H$, which we think of as the energy, and which can be calculated from the state. So, for example, kinetic plus potential energy: in the case of a particle moving in a potential on a line, $H = \frac{p^2}{2m} + V(q)$. The space of states can be described without much reference to the Hamiltonian, but once we have $H$, we get a flow on that space, transforming old states into new states with time.
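To make the flow concrete, here is a minimal numerical sketch (my own toy example, not from the post): a particle on a line with the made-up potential $V(q) = kq^2/2$, evolved by a symplectic-Euler step. All parameter values are arbitrary.

```python
import numpy as np

def hamiltonian(q, p, m=1.0, k=1.0):
    """Energy H(q, p) = p^2/2m + V(q), with the toy potential V(q) = k q^2 / 2."""
    return p**2 / (2 * m) + 0.5 * k * q**2

def flow_step(q, p, dt, m=1.0, k=1.0):
    """One symplectic-Euler step of the Hamiltonian flow:
    dq/dt = dH/dp = p/m, dp/dt = -dH/dq = -k q."""
    p = p - dt * k * q
    q = q + dt * p / m
    return q, p

# Evolve an initial state (a point in phase space) forward in time.
q, p = 1.0, 0.0
E0 = hamiltonian(q, p)
for _ in range(10000):
    q, p = flow_step(q, p, dt=0.001)
E1 = hamiltonian(q, p)
# A symplectic integrator nearly conserves H along the flow:
assert abs(E1 - E0) < 1e-3
```

The point is just that once $H$ is fixed, old states flow deterministically into new ones.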

Now if we turn on the “Thermal” switch, we have a different notion of state. The standard image for the classical mechanical system is that we may be talking about a particle, or a few particles, or perhaps a rigid object, moving in space, maybe subject to some constraints. In thermodynamics, we are thinking of a statistical ensemble of objects – in the simplest case, identical objects – and want to ask how energy is distributed among them. The standard image is of a box full of gas at some temperature: it’s full of molecules, each with its own trajectory, and they interact through collisions and exchange energy and momentum. Rather than tracking the exact positions of the molecules, in thermodynamics a “state” is a *distribution* – more precisely, a probability measure – on the space of microstates. We don’t assume we know the detailed *microstate* of the system – the positions and momenta of all the particles in the gas – but only something about how these are distributed among them. This reflects the fact that we can only measure things like pressure, temperature, etc. The measure tells us the proportion of particles with positions and momenta in a given range.

This is a big difference for something described by the same word “state”. Even assuming our underlying space of “microstates” is still the same $X$, the state is no longer a point. One way to interpret the difference is that here the state is something epistemic. It describes what we know about the system, rather than everything about it. The measure answers the question: “given what we know, what is the likelihood the system is in microstate $x$?” for each $x$. Now, of course, we could take a space of all such measures: given our previous classical system, it’s a space of functionals on the functions on $X$. Then the state can again be seen as an element of a set. But it’s more natural to keep in view its nature as a measure, or, if it’s nice enough, as a positive function on the space of states. (It’s interesting that this is an object of the same type as the Hamiltonian – this is, intuitively, the basis of what Carlo Rovelli calls the “Thermal Time Hypothesis”, summarized here, which is secretly why I wanted to write on this topic. But more on that in a later post. For one thing, before I can talk about it, I have to talk about what comes next.)
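To see the thermal notion of state in miniature, here is a toy sketch of my own (not from the post): a Boltzmann probability measure on a finite set of microstates, acting on an observable (the energy) as an expectation functional. The energy levels and temperature are arbitrary illustrative choices.

```python
import numpy as np

# Toy ensemble: microstates are discrete energy levels E_i; the thermal
# "state" is not one microstate but the Boltzmann measure over all of them.
energies = np.array([0.0, 1.0, 2.0, 3.0])   # E_i, in units of kT at T = 1
beta = 1.0                                   # inverse temperature 1/(kT)

weights = np.exp(-beta * energies)
Z = weights.sum()                            # partition function
p = weights / Z                              # the state: a probability measure

# A probability measure has total mass 1:
assert np.isclose(p.sum(), 1.0)

# The state acts on observables as an expectation functional:
mean_energy = (p * energies).sum()
print(mean_energy)
```

The state here is the vector `p`, not any single microstate – exactly the epistemic shift described above.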

Now turn off the “Thermal” switch, and think about the “Quantum” switch. Here there are a couple of points of view.

To begin with, we describe a system in terms of a Hilbert space, and a state is a *vector in a Hilbert space*. Again, this could be described as an element of a set, but the complex linear structure is important, so we keep thinking of it as fundamental to the type of a state. In geometric quantization, one often starts with a classical system with a state space like $X$, and then takes the Hilbert space $\mathcal{H} = L^2(X)$, so that a state is (modulo analysis issues) basically a complex-valued function on $X$. This is something like the (positive real-valued) measure which gives a thermodynamic state, but the interpretation is trickier. Of course, if $\mathcal{H}$ is an $L^2$-space, we can recover a probability measure, since the square modulus of $\phi$ has finite total measure (so we can normalize it). But this isn’t enough to describe $\phi$, and the extra information of phases goes missing. In any case, the probability measure no longer has the obvious interpretation of describing the statistics of a whole ensemble of identical systems – only the likelihood of measuring particular values for one system in the state $\phi$. (In fact, there are various no-go theorems getting in the way of a probability interpretation of $\phi$, though this again involves dynamics – a recurring theme is that it’s hard to reason sensibly about states without dynamics.) So despite some similarity, this concept of “state” is very different, and *phase* is a key part of how it’s different. I’ll be jiggered if I can say *why*, though: most of the “huh?” factor in quantum mechanics lives right about here.
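A tiny numerical illustration of the missing phase information (a made-up two-point example, not from the post): two states with the same square modulus, hence the same probability measure, are nonetheless distinguished by an observable sensitive to their relative phase.

```python
import numpy as np

# Two discretized "wavefunctions" with identical square modulus but
# opposite relative phase.
psi1 = np.array([1.0, 1.0]) / np.sqrt(2)
psi2 = np.array([1.0, -1.0]) / np.sqrt(2)

# Both give the same probability measure on the two "positions"...
assert np.allclose(np.abs(psi1)**2, np.abs(psi2)**2)

# ...but they are distinct states: an observable with off-diagonal
# terms (sensitive to phase) tells them apart.
A = np.array([[0.0, 1.0], [1.0, 0.0]])   # self-adjoint, mixes the two positions
exp1 = psi1 @ A @ psi1                    # +1
exp2 = psi2 @ A @ psi2                    # -1
assert not np.isclose(exp1, exp2)
```

So squaring the modulus throws away exactly the data that distinguishes these two states.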

Another way to describe the state of a quantum system is related to this probability, though. The inner product on $\mathcal{H}$ (whether we found it as an $L^2$-space or not) gives a way to talk about statistics of the system under repeated observations. Observables, which in the classical picture are described by functions on the state space $X$, are now self-adjoint operators on $\mathcal{H}$. The expectation value for an observable $A$ in the state $\phi$ is $\langle \phi | A | \phi \rangle$ (note that the Dirac notation implicitly uses the self-adjointness of $A$). So the state has another, intuitively easier, interpretation: it’s a real-valued functional on observables, namely the one I just described.

The observables live in the algebra $\mathcal{B}(\mathcal{H})$ of bounded operators on $\mathcal{H}$. Setting both Thermal and Quantum switches of our notion of “state” gives quantum statistical mechanics. Here, the “C*-algebra” (or von Neumann algebra) picture of quantum mechanics says that really it’s the algebra that’s fundamental – it corresponds to actual operations we can perform on the system. Some of them (the self-adjoint ones) represent really very intuitive things, namely observables, which are tangible, measurable quantities. In this picture, $\mathcal{H}$ isn’t assumed to start with at all – but when it is, the kind of object we’re dealing with is a density matrix. This is (roughly) a positive operator on $\mathcal{H}$ with unit trace. In general, a state on a von Neumann algebra is a positive linear functional taking the value 1 on the identity.
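As a concrete sketch (my own toy qubit example, not from the post), a density matrix is a positive unit-trace operator, and as a state it acts on observables by $A \mapsto \mathrm{Tr}(\rho A)$:

```python
import numpy as np

# A density matrix: a mixed state blending two pure states of a qubit.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
rho = 0.75 * np.outer(up, up) + 0.25 * np.outer(down, down)

assert np.isclose(np.trace(rho), 1.0)        # unit trace
assert np.all(np.linalg.eigvalsh(rho) >= 0)  # positivity

# As a state it is a functional on observables: A -> Tr(rho A).
Sz = np.array([[1.0, 0.0], [0.0, -1.0]])     # a self-adjoint observable
expectation = np.trace(rho @ Sz)
print(expectation)   # 0.75 - 0.25 = 0.5
```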

This is analogous to the view of a state as a probability measure (positive function with unit total integral) in the classical realm: if an observable is a function on states (giving the value of that observable in each state), then a measure is indeed a functional on the space of observables. A probability measure, in fact, is the functional giving the expectation value of the observable. (And, since variance and all the higher moments of the probability distribution for that observable are themselves defined as expectation values, it also tells us all of those.)

On the other hand, the Gelfand-Naimark-Segal theorem says that, given a state $\phi$ on an algebra $A$, there’s a representation of $A$ as an algebra of operators on some Hilbert space, and a vector $\Omega$ for which $\phi(a)$ is just $\langle \Omega | a \Omega \rangle$. This is the GNS representation (and in fact it’s built by taking the regular representation of $A$ on itself by multiplication, with $A$ made into a Hilbert space by defining the inner product $\langle a, b \rangle = \phi(a^* b)$ to make this property work, and with $\Omega = 1$). So the view here is that a state is some kind of *operation on observables* – a much more epistemic view of things. So although the GNS theorem relates this to the vector-in-Hilbert-space view of “state”, they are quite different conceptually. (For one thing, the GNS representation gives a different Hilbert space for each state, which undermines the sense that the space of ALL states is fundamentally “there” – but in the Hilbert-space picture, $\mathcal{H}$ is the same for all states.)
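Here is a finite-dimensional numerical sketch of the GNS construction (my own toy example: the algebra is all 2×2 complex matrices, and an arbitrary faithful density matrix stands in for the state, so no null vectors need to be quotiented out):

```python
import numpy as np

# State phi(a) = Tr(rho a) on the algebra of 2x2 matrices.
rho = np.diag([0.75, 0.25])   # faithful, so the GNS inner product is nondegenerate

def phi(a):
    return np.trace(rho @ a)

def inner(a, b):
    """GNS inner product <a, b> = phi(a* b) on the algebra itself."""
    return phi(a.conj().T @ b)

# The algebra acts on itself by left multiplication; the cyclic vector
# Omega is the identity matrix.
omega = np.eye(2)

# Check the defining property phi(a) = <Omega, a . Omega> for a sample a:
a = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(phi(a), inner(omega, a @ omega))
assert np.isclose(inner(omega, omega), 1.0)  # Omega is a unit vector since phi(1) = 1
```

The Hilbert space built here depends on `rho` – a different state would give a different inner product, which is exactly the point made above.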

(This von Neumann-algebra point of view, by the way, gets along nicely with the 2-Hilbert space lens for looking at quantum mechanics, which may partly bridge the gap between it and the Hilbert-space view. The category of representations of a von Neumann algebra is a 2-Hilbert space. A “2-vector” (or “2-state”, if you like) in this category is a representation of the algebra. So the GNS representation itself is a “2-state”. This raises the question of 2-algebras of 2-operators, and John Baez’s question: “What is the categorified GNS theorem?” But let’s leave 2-states for later along with the rest.)

So where does this leave us regarding the meaning of “state”? The classical view is that a state is an element of some (structured) set. The usual quantum picture is that a state is, depending on how precise you want to be, either a vector in a Hilbert space, or a 1-dimensional subspace of that Hilbert space – that is, a point in the projective Hilbert space. What these two views have in common is that there is some space of all “possible worlds”, i.e. of all ways things can be in the system being studied. A state is then a way of selecting one of these. The difference is in what this space of possible worlds is like – that is, which category it lives in – how exactly one “selects” a state, and whether one can take combinations of states. As for selecting states, $\mathbf{Set}$ is a Cartesian category, with a terminal object $1$: an element of a set is a map from $1$ into it. $\mathbf{Hilb}$ is a monoidal category, but not Cartesian: selecting a single vector has no obvious categorical equivalent, though selecting a 1-dimensional subspace amounts to a map from $\mathbb{C}$ (up to isomorphism). So the model of an “element” isn’t a singleton, it’s the complex line – and it relates to other possible spaces differently: not as a terminal object, but as a monoidal unit. This is a categorical way of saying how the idea of “state” is structurally different.

The thermal point of view is a little more epistemically subtle: for both classical and quantum pictures, it’s best thought of, not as a possible world, but as a functional acting on observables (that is, on conditions of knowledge). In the classical picture, this is directly related to a space of possible worlds – it’s a measure on it, which we can think of as saying how a large ensemble of systems is distributed in that space. In the quantum picture, the most (epistemically) natural view, in terms of von Neumann algebras, in some ways breaks the connection to this notion of “possible worlds” altogether, since the algebra has representations on many different Hilbert spaces.

So a philosophical question is: what do these different concepts have in common that lets us use them all to represent the “same” root idea? Without actually answering this, I’ll just mention that at some point I’d like to talk a bit about “2-states” as 2-vectors, and in general how to categorify everything above.

August 11, 2009 at 10:57 pm

If I gaze upon the etched iridium control panel of your Model-o-tron, I see the Thermal, Quantum, and Relativistic switches. Your metamodel of Model-o-tron has the first two each as binary (single-pole-single-throw, to use old electrical terms). You show 3 settings for the “Relativistic” switch (though it looks blurry so far). Do you assert that all 2 x 2 x 3 = 12 Models are known and equally valid, in an abstract sense? More subtly, the Quantum switch is, in the deformation meta-model, often shown as allowing one to set Planck’s Constant, with the zero setting yielding classical physics. Likewise, the “Relativistic” switch is sometimes portrayed as a variable c (or 1/c) with 1/c = 0 being Newtonian dynamics. Even more subtly, are we SURE that Planck’s constant is a real number? The two best measurements at NIST differ with statistical significance. Could h-bar be a complex number, with small nonzero imaginary component? Or quaternionic? Or octonionic? What can we really say about the topology or metrics of the manifold (?) of settings of the Model-o-tron? Is it a complex manifold? Does it have singularities? What is its Betti number? Is it a fibration of something we know?

August 12, 2009 at 6:27 pm

Hi Jonathan:

The last few questions there sound a bit fanciful, but I take the point to be: the “Model-o-tron” I described is rather primitive. This is true. Planck’s constant is a parameter for the “quantum” switch, and indeed temperature would be a parameter for the “thermal” switch. Setting both to zero leaves us with classical physics. I was content to be primitive this way because what I’m classifying are mental constructs. It was more a historical breakdown of ways people have thought about the idea of “state” than some kind of sophisticated way of classifying “kinds of physics”.

I *would* be interested in knowing the answers (or even plausible methods of looking for answers) to similar questions about the moduli space for the parameters that show up in physical laws. Planck’s constant would be a good example, though I can’t imagine what it would mean for it to be non-real: it has units of action (Joule-seconds, for example). What would an imaginary Joule-second be? But, yes: plenty of people have suggested these parameters could have had different values, and even supposing them all to be real-valued, I can imagine that not all possible combinations of values are admissible – maybe there’s some submanifold (or subvariety, etc.) of admissible values which we could ask such questions about. Then again, I don’t know any candidates for a law they might have to satisfy.

For the purposes of what I was doing here, any (meta-)theory where you could ask such questions would probably have just one notion of “state”…

August 13, 2009 at 11:56 pm

Thank you, Dr. Morton. That’s a very encouraging reply, which motivates me to continue rewriting the (too-long) draft paper I’m writing on the subject. I do agree with you about the centrality of “state” – and am more careful with it now, given what I understand of the Kochen-Specker theorem. Yes also, the space of all possible THOUGHTS (Zwicky’s IDEOCOSM) is quite different from the space of all possible PHYSICAL THEORIES. What people believe, historically, follows some kind of nonlinear evolution in that space, according to hard-to-express meta-rules. The mappings between the two spaces are hard to express. There is a hard question about what hypersurface (if any) separates Physical Theories from Nonphysical Theories in the space of all possible Theories of Mathematical Physics.

The Temperature switch on THERMAL by the way has settings that go from Absolute Zero, up through 1st and 2nd quantization phase transition temperatures for theories of specific systems, up to infinity, then up to negative infinity, and up again until it wraps around to Absolute Zero. Right?

August 14, 2009 at 5:46 pm

[…] Some different ideas of “state” […]

August 15, 2009 at 7:59 pm

It’s fun to try to combine the ‘thermal’ and ‘quantum’ ideas using the Wick rotation idea, which says

exp(-H/kT)

deserves to be in the same family of operators as

exp(-itH/hbar).

So, the reciprocal of temperature is like imaginary time, and Planck’s constant is like Boltzmann’s constant.

It’s fun to try, but it seems quite challenging to make this idea fully precise, or fully understand what it really means!
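A quick numerical check of the family resemblance (a toy two-level Hamiltonian of my own choosing, in units with hbar = k = 1): substituting imaginary time t = -i/kT into the evolution operator yields the thermal (Gibbs) operator.

```python
import numpy as np

# A toy two-level Hamiltonian.
H = np.array([[0.0, 0.5], [0.5, 1.0]])
beta = 2.0                    # inverse temperature 1/(kT)

# Exponentiate H by diagonalizing it (H is self-adjoint).
w, V = np.linalg.eigh(H)
def op_exp(z):
    """Compute exp(z H) for a scalar z via the eigendecomposition of H."""
    return (V * np.exp(z * w)) @ V.conj().T

thermal = op_exp(-beta)                  # exp(-H/kT)
evolution = op_exp(-1j * (-1j * beta))   # exp(-itH) evaluated at imaginary time t = -i*beta

# The Wick rotation turns time evolution into the thermal operator:
assert np.allclose(thermal, evolution)
```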

On a more digressive note: computer scientists have a different yet related notion of state that I’m finally beginning to understand. For example, it’s nontrivial for functional programming languages to incorporate ‘state’, and Haskell does this using a monad.

It took me ages to figure out what that last sentence means! But now I do, and I’m going to have some fun lording it over people who don’t know yet, before I explain it.

August 19, 2009 at 7:11 pm

Or anyway 1/kT is like it/hbar. I’m not sure the direct analogy between the constants is especially convincing. The fact that the units are different isn’t totally conclusive (one can “geometrize” constants like the speed of light, and change one set of units into another, for example). But making a direct analogy between Boltzmann’s and Planck’s constants means relating something with units of “Joules per Kelvin” to something with units of “Joule-seconds”. There’s something more going on there than just the Wick rotation. But either way, I agree it’s interesting to try to understand this relation.
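For the record, a quick dimensional check of the point about units: each exponent is separately dimensionless, but the two constants themselves carry different units.

```latex
\frac{H}{k_B T}:\quad \frac{\mathrm{J}}{(\mathrm{J/K})\cdot\mathrm{K}} = 1,
\qquad
\frac{tH}{\hbar}:\quad \frac{\mathrm{s}\cdot\mathrm{J}}{\mathrm{J\,s}} = 1,
\qquad\text{but}\qquad
[k_B] = \mathrm{J/K} \;\neq\; \mathrm{J\,s} = [\hbar].
```

So only the combinations $1/k_B T$ and $it/\hbar$ are directly comparable, not $k_B$ and $\hbar$ themselves.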

As for monads, a few people, including Mike, have explained this concept to me but the details haven’t stuck yet. Functional programming? Great. Monads? Not so great. Maybe if I tried to actually use them for something it would help.

August 21, 2009 at 6:17 am

People who program in Haskell need to learn category theory and need to learn about monads.

My problem was that while I love monads and vaguely understand functional programming, I had immense problems understanding what monads have to do with state.

Remarks like this seemed utterly cryptic to me:

A pure functional language cannot update values in place because it violates referential transparency. A common idiom to simulate such stateful computations is to “thread” a state parameter through a sequence of functions. This approach works, but such code can be error-prone, messy and difficult to maintain. The State monad hides the threading of the state parameter inside the binding operation, simultaneously making the code easier to write, easier to read and easier to modify.
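Since that quote is about Haskell, here is the same idiom sketched in Python instead (a made-up stack example of my own): each stateful step is a function from a state to a (result, new state) pair, and a small `bind` hides the threading of the state parameter, which is essentially what the State monad does.

```python
# Each stateful action has type: state -> (result, new_state).

def push(x):
    return lambda stack: (None, stack + [x])

def pop():
    return lambda stack: (stack[-1], stack[:-1])

def bind(action, next_action):
    """Run `action`, feed its result to `next_action`, threading the state
    through behind the scenes."""
    def combined(state):
        result, state2 = action(state)
        return next_action(result)(state2)
    return combined

# Pop two values and push their sum, never mentioning the stack explicitly:
program = bind(pop(), lambda a:
          bind(pop(), lambda b:
          push(a + b)))

result, final_stack = program([1, 2, 3])
print(final_stack)   # [1, 5]
```

The explicit `stack` parameter never appears in `program` – the binding operation threads it, exactly as the quote describes.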

Now they just seem fairly cryptic.

August 16, 2009 at 5:13 pm

As has been said, temperature lives on a Riemann sphere.

August 16, 2009 at 5:41 pm

“So, the reciprocal of temperature is like imaginary time…”

Just as in the Riofrio cosmology, which has no Dark Energy, but rather a varying speed of light (with cosmic epoch) and varying hbar (CMB temperature).

September 10, 2009 at 7:18 pm

[…] in the previous post, I was talking about different notions of the “state” of a system – all of which […]


October 5, 2009 at 10:49 pm

Seriously, we cited this blog thread in our entry to this year’s FQXi Essay Contest.

http://fqxi.org/community/forum/topic/550

“The Fundamental Importance Of Discourse In Theoretical Physics”

Philip V. Fellman, Jonathan Vos Post and Christine M. Carmichael

[PDF can be downloaded from above URL]

============

On the lighter side (where you started)

http://strangemaps.wordpress.com/2009/09/21/412-federal-feathers/

The German language describes the difference between two main types of federal states aptly and concisely as being between a Bundesstaat (1) and a Staatenbund (2). The European Union, in which the 27 constituent nations retain sovereignty over such key issues as defence and foreign policy, clearly is an example of the latter. The United States, where federal sovereignty clearly trumps states’ rights, is of the former type.

This does not mean, however, that the 50 constituent states are completely homogenised; in fact, they exhibit a marked tendency to stress their uniqueness and individuality, among other means by choosing a raft of state insignia – even if often as trivial as a State Toy (Kansas: Etch-A-Sketch), State Instrument (Kentucky: Appalachian dulcimer), or State Beverage (Massachusetts: cranberry juice).

Only a handful of states have adopted such idiosyncratic symbols. A much more popular one, adopted by all states and DC in fact, is the State Bird. Funny thing, though: instead of choosing birds unique to each state, or at least not shared with other states, these insignia show an intriguing degree of overlap, and geographic contiguity – as shown by this map….