Recently Published Book Spotlight: Philosophy of Quantum Physics

As an expert in physics, metaphysics, and logic, Tim Maudlin is well qualified to talk about how discoveries in quantum physics are influencing established philosophical fields like ontology. His books include Quantum Non-Locality and Relativity, Truth and Paradox, The Metaphysics Within Physics, New Foundations for Physical Geometry: The Theory of Linear Structures, and Philosophy of Physics: Space and Time. He is the Founder and Director of the John Bell Institute for the Foundations of Physics. In the following interview, Maudlin and I discuss the value of quantum research for philosophical inquiry.

The field of Quantum Physics seems like it could have implications for many of philosophy’s sub-fields. The one book on it that I read, Quantum Ontology by Peter Lewis, describes how it changes discussions of causality, realism, determinism, and holism, among other topics. What’s the biggest contribution you believe Quantum Physics makes to philosophy?

The most interesting implications of quantum theory for philosophy are in metaphysics, that is, in the most high-level generic account of the kinds of things that exist. On your list of topics, the most important consequences would fall under holism. The central entity that is postulated by quantum theory is an entity described by the mathematical object called the wavefunction of a system. From the very beginning of the theory it has been controversial what (if anything!) the wavefunction represents: is it a physical characteristic of an individual system, or a statistical description of an ensemble of systems, or a representation of the information that an individual has about a system, or what? It was also controversial how the wavefunction behaves through time: does it always evolve deterministically and linearly (in accord with Schrödinger’s equation) or does it sometimes randomly “collapse”? And it was unclear whether the wavefunction provides a complete physical description of a system or only a partial description.

All of these questions were addressed by Einstein, who held that the wavefunction is not a complete physical description because (he argued) if it is complete then it has to collapse, and if it collapses, that not only violates determinism (“God plays dice with the universe”) but, more importantly for Einstein, constitutes a sudden global change in the physical state of the universe (“spooky action-at-a-distance”). So in Einstein’s view the wavefunction is not complete, and further the “collapse” is not a physically real change. On that view it is at least possible to maintain determinism even though quantum theory only provides probabilistic predictions.

We now know that Einstein was wrong on at least two counts. Due to the recent theorem of Pusey, Barrett and Rudolph we know that the wavefunction does indeed represent a physical characteristic of individual systems: it cannot be understood either statistically or epistemically (in terms of information). And due to the epochal theorem of John Bell and the subsequent experimental work it inspired, we also know that quantum non-locality, “spooky action-at-a-distance”, is physically real and not just an artifact of the theory. So we are forced to accept a novel physical entity—the quantum state—as a feature of the physical universe.
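
To make “violation of Bell’s inequality” concrete, here is the standard CHSH form of the inequality, added as a reference point (the notation and numerical bound are textbook material, not drawn from the interview itself):

```latex
% CHSH form of Bell's inequality: E(a,b) is the correlation of the two
% distant outcomes when the detector settings are a and b. Any local
% theory of the kind Einstein hoped for must satisfy the bound below,
% whereas quantum mechanics predicts, and experiments confirm, values
% as large as 2*sqrt(2) for suitably chosen settings on entangled pairs.
\[
\bigl|\, E(a,b) + E(a,b') + E(a',b) - E(a',b') \,\bigr| \;\le\; 2
\]
```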

This does not settle all of the questions raised above. It is still not certain whether the quantum state of the universe evolves deterministically or stochastically, and whether or not it provides a complete physical description of the universe. Different theories provide different answers to these questions. For example, according to the pilot-wave theory, the wavefunction is not complete but evolves deterministically, and according to objective collapse theories it is complete but evolves indeterministically, and according to the Many Worlds theory it is complete and evolves deterministically. So there are still lots of controversies. But what is not controversial (or rather shouldn’t be!) is that there is a physical quantum state, that it is a global, holistic sort of thing, and that there is indeed spooky action-at-a-distance mediated by the quantum state.

The quantum state is a new sort of metaphysical item. It does not fit neatly in any of the metaphysical categories inherited from Aristotle or Kant or developed by pre-quantum classical physics. So it broadens the scope of metaphysics in a fundamentally novel way, and gives philosophers a lot to think about.

Could you explain briefly the difference between pilot-wave theory, objective collapse theory, and Many Worlds theory? What empirical data would we need to decide which of the theories is more likely to be correct?

When presented with a properly formulated, precise physical theory that aspires to account for all known “quantum mechanical” (or “quantum field theoretic”) phenomena, there are immediately two diagnostic questions to ask. As we have seen, such a theory must postulate a quantum state of a system (and ultimately of the universe). A central aspect of how quantum theory makes predictions is the use of Schrödinger’s equation to determine how the quantum state evolves through time. That equation has the mathematical characteristic called “linearity”, which just means the following. Suppose one quantum state, A_initial, evolves over a period of time to A_final, and another, B_initial, evolves over the same time to B_final. If the evolution is linear, then the superposition of the two states (A_initial + B_initial) will evolve to the superposition (A_final + B_final). In other words, it does not matter whether I evolve the two initial states separately and then superpose the results, or superpose the initial states and then evolve: I get the same result either way.
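
In symbols (a minimal sketch; the evolution operator U(t) and the amplitudes α, β are notation added here, not used in the interview):

```latex
% Linearity of the Schrödinger evolution: if the initial states evolve as
% A_initial -> A_final and B_initial -> B_final, then any superposition
% evolves term by term, with the same result whether one superposes the
% states before or after evolving them.
\[
U(t)\,\psi_A = \psi_A^{\mathrm{final}}, \quad
U(t)\,\psi_B = \psi_B^{\mathrm{final}}
\;\;\Longrightarrow\;\;
U(t)\bigl(\alpha\,\psi_A + \beta\,\psi_B\bigr)
= \alpha\,\psi_A^{\mathrm{final}} + \beta\,\psi_B^{\mathrm{final}}
\]
```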

The problem known as Schrödinger’s cat arises from two suppositions: 1) the evolution of the quantum state is always linear, and 2) the quantum state provides a complete physical characterization of a system. If both of these conditions hold then, as Schrödinger pointed out, in certain easy-to-realize experimental situations a cat that is initially alive will end up neither definitely alive (and not dead!) nor definitely dead (and not alive!) but rather, as he said, somehow “smeared” between the two. Schrödinger took this result to be ridiculous: we know that no matter how “smeary” things might be at microscopic scale, cats just end up either alive or dead, full stop.
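
Written out in standard ket notation (a sketch added for clarity; the interview itself does not use this notation), the two suppositions together give:

```latex
% Linearity carries the microscopic superposition up to the cat: the
% final state is a superposition of a "live-cat" term and a "dead-cat"
% term rather than one definite outcome, which is the "smearing"
% Schrödinger found absurd.
\[
\bigl(\alpha\,|\text{no decay}\rangle + \beta\,|\text{decay}\rangle\bigr)
\otimes |\text{cat alive}\rangle
\;\longrightarrow\;
\alpha\,|\text{no decay}\rangle|\text{cat alive}\rangle
+ \beta\,|\text{decay}\rangle|\text{cat dead}\rangle
\]
```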

A pilot-wave theory embraces the resolution that the quantum state is not complete. This allows the theory to retain the linear Schrödinger evolution of the quantum state at all times, but the health of the cat is not a function just of the quantum state: it critically depends as well on the values of some additional variables (often very misleadingly called “hidden variables”) that describe additional physical characteristics of the cat. In the case of Bohmian mechanics, for example, these additional variables are particle positions: the cat is made of particles that always have definite locations, and whether the cat ends up alive or dead depends on the configuration that these particles end up in. The physical role of the quantum state, then, is to “guide” or “pilot” the particles, to determine where they go. In a Schrödinger cat experiment, the final outcome for the cat depends not merely on the initial quantum state of the experiment but also on the initial configuration of the particles. Some configurations will be guided to a live cat and others to a dead cat. But on any single run, the cat unproblematically ends up alive or dead. Different outcomes in different runs result from different initial conditions.
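
For the Bohmian case in particular, the “piloting” can be written down explicitly. This is the standard guidance equation, quoted here for concreteness rather than taken from the interview:

```latex
% Bohmian guidance equation: the velocity of particle k is fixed by the
% wavefunction psi evaluated at the actual particle configuration
% Q = (Q_1, ..., Q_N), while psi itself always obeys the linear
% Schrödinger equation.
\[
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,
\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)(Q_1,\dots,Q_N)
\]
```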

A collapse theory wants to maintain the completeness of the quantum state and to avoid adding any additional variables. If the cat is to end up either alive or dead, the linearity of the Schrödinger evolution must be broken. This is done by postulating the “collapse” or “reduction” of the wavefunction, which can be done in many different ways. The main burden of a collapse theory is to specify precisely how that linearity gets broken. Usually, the collapse is a fundamentally random or stochastic affair: two experiments that start in precisely the same initial state can end up in different final states, one with a live cat and the other with a dead cat.
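
The best-known concrete example is the GRW (Ghirardi, Rimini, and Weber) model. As a rough, back-of-the-envelope illustration of why such a theory leaves microscopic superpositions alone yet removes macroscopic ones almost instantly, here is a short sketch using the commonly cited GRW parameter values (the numbers are standard in the literature, not quoted from the interview):

```python
# Rough GRW-style arithmetic (standard published parameter values, used
# here purely as an illustration; not taken from the interview itself).
COLLAPSE_RATE_PER_PARTICLE = 1e-16   # spontaneous localizations per second, per particle
N_MACROSCOPIC = 1e23                 # rough particle count in a cat-sized object

# An isolated microscopic system almost never collapses on laboratory timescales...
seconds_between_collapses_single = 1 / COLLAPSE_RATE_PER_PARTICLE   # ~1e16 s

# ...but an entangled macroscopic object suffers collapses almost immediately,
# which is why cats always end up definitely alive or definitely dead.
collapses_per_second_macro = COLLAPSE_RATE_PER_PARTICLE * N_MACROSCOPIC   # ~1e7 per second

print(f"single particle: one collapse every ~{seconds_between_collapses_single:.0e} seconds")
print(f"macroscopic object: ~{collapses_per_second_macro:.0e} collapses per second")
```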

Finally, the Many Worlds approach tries to bite the bullet: keep the linear evolution and refuse to add any new variables. While Schrödinger found the outcome of such a theory to be manifestly unacceptable, the Many Worlds theorist tries to maintain that it is not that a single cat ends up “smeared”, but that a single initial cat ends up as a plurality of cats, each in a different “world” or “branch” and each unproblematically alive or dead. One problem for the Many Worlds theorist is to explain how this “branching” occurs and what it means. Branching also raises conceptual problems for understanding how to connect the physical picture provided by the theory to the probabilistic or statistical predictions of the quantum formalism.
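
The probability problem mentioned here can be stated compactly (a standard way of putting it, added for clarity rather than quoted from the interview): both branches of the final state come into existence on every run, so it is not obvious in what sense the squared amplitudes are probabilities of anything.

```latex
% The Everettian probability problem in one line: after branching, both
% outcomes occur, so naive branch-counting suggests 50/50 regardless of
% the amplitudes, while the quantum formalism assigns the Born-rule
% weights |\alpha|^2 and |\beta|^2. Reconciling the two is the Many
% Worlds theorist's task.
\[
\alpha\,|\text{live-cat branch}\rangle + \beta\,|\text{dead-cat branch}\rangle,
\qquad
P(\text{live}) \overset{?}{=} |\alpha|^2,\quad
P(\text{dead}) \overset{?}{=} |\beta|^2
\]
```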

In general, the non-collapse theories—pilot wave and Many Worlds—will make precisely the same empirical predictions as each other, assuming you can make sense of how they make predictions at all. But any collapse theory will, on account of the loss of linearity, make slightly different predictions in some experimental conditions. For example, due to the collapses an isolated system will, over time, spontaneously heat up a bit. Unfortunately, the critical experimental conditions where different predictions are made are very hard to realize (the entire visible universe, over the past 13.7 billion years, would have heated up by only a fraction of a degree Kelvin, which is below what we can observe), so while the experimentalists have been able to put some pressure on precise details of possible collapse mechanisms, they have not been able to rule collapse definitively in or out yet. That may happen in the next decade or so for the sorts of theories under present consideration.

Which of your works on Quantum Physics are you proudest of? Describe your thesis in that work, and how you make your case for it.

I suppose it is still my first book, Quantum Non-Locality and Relativity. It is not so much that the book has a particular thesis; it is rather a work of quintessential conceptual analysis. It is often said that, according to the Theory of Relativity, nothing can go faster than light. But on the other hand, the violation of Bell’s inequality seems to imply that something must go faster than light: the spooky action-at-a-distance. In this book I consider many different “somethings” (matter, causal connections, information) that might be thought to go faster than light, and investigate whether or not each is really prohibited by Relativity. There appears to be a tension between violations of Bell’s inequality and Relativity, and I wanted to investigate in detail just what form that tension takes.

Which philosophers (either past or present) have you found most useful in discussing the relevance of Quantum Physics to philosophy?

The most important person to have written on these topics, and by a large margin, is the physicist John Stewart Bell. Anyone interested in these issues should first of all devote their time to very carefully reading Bell’s works, because Bell is a very, very careful thinker and writer. (I have, not coincidentally, just founded the John Bell Institute for the Foundations of Physics. Check out our website at JohnBellInstitute.org.)

Other than Bell, are there good writers or books you would recommend for people wishing to learn more about this topic?

There are many different recent books, aimed at different audiences, that are well worth reading. You mentioned Peter Lewis’s book above. For a lovely overview of the history of quantum theory with respect to foundational questions, there is Adam Becker’s recent book What Is Real?, which is pitched to a popular audience. Much of Becker’s history relies on Mara Beller’s older Quantum Dialogue, which goes into much more historical and technical detail, but really does not require much background knowledge. For people comfortable with undergraduate level math, Travis Norsen’s recent Foundations of Quantum Mechanics is the first undergraduate physics text designed to cover foundational issues. Jean Bricmont has two recent books, Understanding Quantum Mechanics and Quantum Sense and Nonsense, both of which tilt strongly in the direction of pilot wave theory. The former is more technical, with detailed discussions of some of the mathematics, and the latter is at a popular level. GianCarlo Ghirardi’s Sneaking a Look at God’s Cards covers a lot of topics, including quantum computation and quantum cryptography and is at a popular level. It also goes into some detail on collapse theories. The most extensive discussion of Many Worlds (and only Many Worlds) is David Wallace’s The Emergent Multiverse. And last, but I hope not least, my own book Philosophy of Physics: Quantum Theory is due out from Princeton University Press on March 12.

Where do you think the philosophical study of Quantum Physics will go in the future?

There will continue to be development and—where possible—empirical testing of the three approaches to understanding quantum theory: pilot wave, collapse, and Many Worlds. And there will be more and more detailed and careful discussions of what the precise implications of each of these approaches are for metaphysics and ontology, and in some cases of whether an approach is really conceptually coherent. There is a lot of work to be done.

 

You can ask Dr. Tim Maudlin questions about his work in the comments section below. Comments must conform to our community guidelines and comment policy.

*

The purpose of the Recently Published Book Spotlight is to disseminate information about new scholarship to the field, explore the motivations for authors’ projects, and discuss the potential implications of the books. Our goal is to cover research from a broad array of philosophical areas and perspectives, reflecting the variety of work being done by APA members. If you have a suggestion for the series, please contact us here.

6 COMMENTS

  1. “We now know that Einstein was wrong on at least two counts. Due to the recent theorem of Pusey, Barrett and Rudolph… And due to the epochal theorem of John Bell…”

    Einstein was not wrong. Neither the PBR nor Bell theorems are valid when applied to an entity that manifests only a single bit of information (as defined by Shannon’s Capacity theorem). Since that is exactly what a quantum entity appears to be, the theorems do not apply to the very case for which they were intended. Both PBR and Bell effectively assume that more than a single bit is always present. But a single-bit entity has no measurable properties at all; its existence can be “detected”, but not measured, precisely because there is nothing left to measure once its information-conveying capacity has been entirely exhausted by the mere act of a yes/no detection.

    • The physical world displays phenomena that violate Bell’s inequality. That cannot be accounted for by any theory with the sort of locality Einstein wanted. Measures of Shannon information have nothing to do with that.

      • It has now been demonstrated, by actual construction (independently reproduced and verified), that peculiar, classical objects, manifesting only a single bit of information, will reproduce the so-called quantum correlations, with (unoptimized) detection efficiencies comparable to the best ever obtained in any Bell tests – significantly higher than any supposed theoretical limit. It has nothing to do with either spooky action or hidden variables. See my comment here for further details:
        https://disqus.com/home/discussion/societyforscience/beyond_weird_and_what_is_real_try_to_make_sense_of_quantum_weirdness/#comment-4272228526

        • The so-called “detection loophole” (discussed in my book Quantum Non-Locality and Relativity) has been closed. One cannot account for the apparent violations of Bell’s inequality by appeal to low detector efficiencies.

        • The detection loophole is not the issue. The problem is that the “self-evidently true” assumption, that two uncorrelated measurements (such as position and momentum, or two spin components) ought always to be possible for a classical entity, is in fact false for a certain peculiar class of objects – objects manifesting only a single bit of information. To put it bluntly, it is foolhardy even to attempt a Bell test on such objects, because there is no possibility whatsoever of ever getting the type of uncorrelated measurements that Bell assumed to be characteristic of the classical realm.

  2. I wonder if you are familiar with Prigogine’s approach to q.m.? His claim that it is incomplete…and his use of large Poincaré systems in approaching the (non) measurement problem….
    I am an amateur but his work does not seem to get much mention…
    Also, are you familiar with the work of the philosopher of science Isabelle Stengers? She writes about these questions in Cosmopolitics II, referring also to Nancy Cartwright’s ‘How the measurement problem is an artefact of mathematics.’
