Tuesday, September 20, 2011

Aaronson on QM and Free Will

One thing that has frustrated me in the past is that folks tend to think “indeterministic” means “just random”, where by “random” they mean some stochastic process (like a dice roll) in which one can’t predict which outcome will be drawn from a given probability distribution. Quantum indeterminism doesn’t work this way, but it’s a difficult subject, and experts don’t agree on exactly how to characterize it. It seems clear one cannot simply use a “frequency” interpretation, the way one can with a classical stochastic system. There seems to be something more involved, something spontaneous which resists reduction, but I have a hard time being more precise about this.

Computer scientist Scott Aaronson (home page, blog) recently gave a thought-provoking presentation on free will at an FQXi conference (see this Scientific American piece with helpful links), and he had an interesting take on this issue.

He says (on a slide):

“Conventional wisdom: ‘Free will is a hopelessly muddled concept. If something isn’t deterministic, then logically, it must be random—but a radioactive nucleus obviously doesn’t have free will!’ But the leap from “indeterminism” to “randomness” here is total nonsense! In computer science, we deal all the time with processes that are neither deterministic nor random…”

As examples, Aaronson cites nondeterministic finite automata and, more generally, algorithms designed to work for any input.
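To make the automaton example concrete, here is a minimal sketch in Python (my own illustration, not from the talk). An NFA’s transition relation names several permitted successor states at once, with no probabilities attached to any of them:

    # A toy nondeterministic finite automaton (NFA). From a given state, on a
    # given input symbol, *several* successor states may be allowed at once.
    # Nothing here is deterministic, yet nothing is a probability distribution
    # either: we just track the set of states the machine could be in.
    from typing import Dict, FrozenSet, Tuple

    # Transition relation for a standard textbook NFA accepting binary strings
    # that end in "01": (state, symbol) -> set of possible next states.
    DELTA: Dict[Tuple[str, str], FrozenSet[str]] = {
        ("q0", "0"): frozenset({"q0", "q1"}),  # "guess" this 0 starts the final "01"
        ("q0", "1"): frozenset({"q0"}),
        ("q1", "1"): frozenset({"q2"}),        # q2 is the accepting state
    }

    def accepts(word: str) -> bool:
        states = {"q0"}  # every state the NFA could currently occupy
        for symbol in word:
            states = set().union(
                *(DELTA.get((s, symbol), frozenset()) for s in states)
            )
        return "q2" in states  # accept if *some* possible run reaches q2

    print(accepts("1101"))  # True
    print(accepts("110"))   # False

The machine’s next move is not determined, but asking for the “probability” that it goes to q1 rather than q0 is a category error: the formalism simply doesn’t supply one.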

I take his general point to be that there’s a difference between randomness, where the distribution of outcomes is known (or at least can be discovered in some way), and a situation where this is impossible.
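As a loose illustration of how discovering a distribution can fail (my own toy example, and only suggestive, since any program remains a lawful process rather than a genuinely Knightian one), consider a source whose running frequency never converges, so that no amount of sampling reveals a stable distribution:

    import math

    def source(n: int) -> int:
        # Emit 1s and 0s in blocks of exponentially growing length (1, 2, 4,
        # 8, ...). The running frequency of 1s then oscillates forever between
        # roughly 1/3 and 2/3, so there is no limiting frequency for sampling
        # to discover.
        return 1 if int(math.log2(n + 1)) % 2 == 0 else 0

    ones = 0
    for n in range(2**20):
        ones += source(n)
        if (n + 2) & (n + 1) == 0:  # report at each block boundary
            print(f"after {n + 1} draws: frequency of 1s = {ones / (n + 1):.3f}")

Of course this toy source is perfectly deterministic; it only shows that frequency estimates need not settle down. Knightian uncertainty is stronger still: not merely an undiscovered distribution, but no distribution at all.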

If something is indeterministic, but there’s no way to know the probability distribution, then it seems worthy of being called free.

Free will is then defined (for the purposes of his talk) as unpredictability, even at the level of a probability distribution, by any actual or conceivable technology. Aaronson describes a “prediction game” in which a future computer analyzes your entire brain, body, and immediate environment, and predicts your answers to questions (or rather, the probability distribution of your answers).

Now, in assessing whether this will be possible, there is a key question science needs to answer: in a human brain, do quantum-level states influence macroscopic (say, neuronal) behavior? We don’t yet know the answer for certain, although I would guess it’s extremely likely. This doesn’t require any fancy quantum coherence in the brain; it just means that quantum states at the molecular level are sometimes amplified enough to influence macroscopic processes.

The next key point is that if this is true, then the quantum no-cloning theorem would prevent prediction of human behavior by any future technology (assuming quantum mechanics is correct): we cannot replicate all the relevant physical states.
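For reference, here is the standard argument behind no-cloning (my own sketch, not part of Aaronson’s talk). Suppose some unitary operation U could copy an arbitrary, unknown quantum state:

    \[ U\left( |\psi\rangle \otimes |0\rangle \right) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all } |\psi\rangle. \]

Applying U to |0⟩ and |1⟩ separately and then to a superposition, linearity forces

    \[ U\left( (a|0\rangle + b|1\rangle) \otimes |0\rangle \right) = a|00\rangle + b|11\rangle, \]

whereas a true copy would be the product state

    \[ (a|0\rangle + b|1\rangle) \otimes (a|0\rangle + b|1\rangle) = a^2|00\rangle + ab|01\rangle + ab|10\rangle + b^2|11\rangle. \]

These agree only when ab = 0, so no such U exists. The upshot for the prediction game: the machine cannot copy the relevant quantum states of your brain in order to simulate them, and measuring them instead would disturb them.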

Our behavior is then described by “Knightian uncertainty”: uncertainty that cannot even be accurately quantified using probabilities. The prediction game is unwinnable.

Even if the prediction game is unwinnable in this way, what does that have to do with free will? If the universe were deterministically evolving from an initial quantum state (the Everettian view), the world would still be (stochastically) determined in spite of this result; it would just be that the computer couldn’t know the initial condition of the universe.

But here’s something weird. He says: “If the Prediction Game was unwinnable, then it would seem just as logically coherent to speak about our decisions determining the initial state, as about the initial state determining our decisions!” The situation could be something like this: “…there are qubits all over the world today which have been in states of Knightian uncertainty since the Big Bang. Maybe we should call them ‘willbits’. By making a decision, you can retroactively determine the quantum state of one of these willbits. But then once you determine it, that’s it! There’s no going back.”

A sort of backwards-in-time causation seems implied (though not one that could lead to grandfather paradoxes). In general, the picture is of spacetime history determining in retrospect what its own initial state was, as quantum particle states get amplified to a macroscopic scale and decohere.

(Aaronson then finishes with a speculative discussion of why this situation might fit well with black-hole complementarity, but I’ll leave that aside for now.)

Now, personally, I have a different opinion about the measurement problem. Whereas in the Everettian view all of the uncertainty could be seen as embedded in an initial state of the universe, I believe measurement collapses are happening naturally all the time. So spontaneity is introduced continually, not just all at once. But it’s not clear this matters for the present discussion (except that perhaps we wouldn’t need to invoke retrocausation). Either way, there is freedom, if one accepts the way it is defined here.
