Wednesday, January 10, 2007

The Limit of the Bayesian Interpretation

I read the paper “Subjective probability and quantum certainty” by Carleton Caves, Christopher Fuchs, and Rüdiger Schack; this is the latest in a series of papers that make a compelling case for the superiority of a Bayesian or subjective interpretation of quantum probabilities. (For a discussion of another paper in the same spirit, see this prior post.) To bring the point home, this particular paper discusses quantum experiments whose outcome is certain (probability 1): the authors show that even this certainty is to be interpreted as a certainty of epistemic belief on the part of the observer, not an objective certainty.

While I am persuaded that the Bayesian interpretation does win out over attempts to see quantum probabilities (or quantum states) as fully objective, there is a limit to how satisfying the analysis can be as an overall ontological interpretation of QM.


Here's a very brief summary. The authors note the fundamental category distinction in Bayesian probability theory between probabilities and facts. Probabilities cannot be reduced to facts; facts (the truth values of propositions) are used to update prior probability assignments.
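As a reminder of how that updating works formally (this is just the standard statement of Bayes’ rule, not notation from the paper): given a prior degree of belief P(H) in a hypothesis H, learning the fact D moves the agent to the posterior

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}.$$

The fact D enters only through the likelihood and the normalization; without a prior P(H) there is nothing for the facts to update, which is exactly the sense in which probabilities cannot be reduced to facts alone.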

In the context of a QM experimental setup, the quantum states are “catalogues of probabilities for measurement outcomes...” which “…summarize an agent’s degrees of belief about the potential outcomes of quantum measurements.” What might be less obvious is how to correctly interpret the preparation procedure. According to a common Copenhagen reading of the situation, the preparation procedure is described classically and the facts about the procedure determine the quantum state. (To reiterate, in the Bayesian interpretation probabilities can never be wholly determined by facts).
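To make “catalogue of probabilities” concrete, here is the Born rule in its general (POVM) form, the formula behind such a catalogue (standard textbook material, not a quotation from the paper): if the agent assigns the density operator ρ to the system, and a measurement is described by positive operators {E_k} summing to the identity, the agent’s probability for outcome k is

$$p(k) = \mathrm{Tr}(\rho\, E_k).$$

Knowing ρ is equivalent to knowing p(k) for every conceivable measurement, which is precisely the sense in which the state summarizes the agent’s degrees of belief.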

The authors set out to show that “the posterior state always depends on the agent’s prior beliefs, even in the case of quantum state preparation” (emphasis in original). The preparation device must be considered quantum mechanical as well. The contribution of this paper is to use the example of a certain (probability 1) experiment to present the following argument: if you posit that pre-existing properties fully determine the experimental outcome, you violate the principle of locality (the argument, presented in section 5, makes use of the Kochen-Specker theorem). Therefore we cannot interpret a measurement as revealing an objective pre-assigned value. Certainty means certainty of the agent’s belief.
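A minimal illustration of a probability-1 experiment (my own toy example, not the one in the paper): prepare a qubit in the state |0⟩ and measure in the {|0⟩, |1⟩} basis. The Born rule gives

$$p(0) = |\langle 0 | 0 \rangle|^2 = 1.$$

On the authors’ reading, even this 1 expresses the agent’s complete confidence, grounded in their prior beliefs about the preparation device, rather than a pre-assigned value sitting in the system waiting to be revealed.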

So, what’s the problem? No problem, just an observation of the limit to how helpful this is in forming an ontological interpretation of QM (and, to be fair, the authors finish the paper by making clear they don’t claim to have answered all foundational questions). It seems clear precisely where the limit lies. In the interpretation: “A probability is an agent’s degree of belief…” The agent and the agent’s degrees of belief are primitives in the implied ontology. There is no explanation or description of the agent, quantum mechanical or otherwise.

If quantum mechanics describes our world, we would like a generalized ontology which makes no distinction between an “agent” and any other quantum system. I see two ways to approach this: first, we could try to come to grips with a theory in which all quantum systems, even simple ones, are somehow agents which have beliefs. Offhand I don’t see how this can work. The second option (and my preference) is to take the probabilities out of the subjective realm and put them (the quantum states) back into reality. The trick is that we have to do this in a way which respects the findings of the Bayesian analysis, meaning the quantum state can’t be considered fully objective. The solution would seem to be to consider the quantum state (between preparation and measurement) to be a propensity of the system under consideration which exists relative to the measuring system. You still have a dichotomy between facts (measurements/interactions) and probabilities (propensities), but each has its own form of existence. (Note that my use of the term “propensities” here is distinct from the usual Popperian sense, which I gather does treat them as objective.)

Just as at the end of my previous post on this topic, I’ll note that the idea of generalizing the Bayesian approach to cover all quantum systems attracts me to a relational interpretation of QM. Matt Leifer had a good discussion of challenges facing the relational interpretation at Quantum Quandaries (here and here). Many of his points centered on the problem of how simple quantum systems could “choose” a consistent measurement basis. I’m interested to watch and see whether these issues can be worked out. Last year Paul Merriam posted a paper which takes an interesting approach to addressing some of them. I'll see if I can summarize it in a future post.
