1 The quantum and the macro

At the workshop on Physics and Decisions: An Exploration that took place at the Humboldt University in Berlin on 4–6 December 2019, quantum mechanics seemed to be relevant to several of the topics under discussion. In this symposium contribution I wish to try to summarise these connections in a somewhat systematic way, focusing in particular on the concept of emergence.

I shall begin by distinguishing three aspects of the relation between quantum mechanics and the macroscopic world (as discussed in particular on the last morning of the workshop).

The first is the prima facie qualitative tension between the quantum superposition principle and the definiteness of the macroscopic world (as in Schrödinger’s cat paradox). Various attractive strategies for overcoming this tension are available and have been worked out in detail, but I shall not review them here. The one we focused on was the idea of multiplicity provided by the multiverse (or Everett, or many-worlds) approach. If one accepts the idea that the reality that we observe corresponds only to one of many components of a superposition of quantum states, then that basic tension disappears.

The second aspect is the detailed question of whether and how, even at the level of individual components (‘branches’, ‘worlds’), one can get an adequate description of macroscopic phenomena (e.g. the usual behaviour of a living cat, or the motions of the bodies in the solar system). Addressing this question provides the justification for the idea of multiplicity in the first place, i.e. how worlds may emerge as structures within the universal wave function. It is a technical question, and it has been studied very successfully for almost 50 years under the heading of ‘decoherence theory’.

The third aspect, finally, is the question of whether, despite the emergence of worlds and of classical behaviour, quantum mechanics is in fact directly relevant to certain macroscopic phenomena. We shall see a number of ways in which this might be true, but at least one is obvious. Indeed, as Niels Bohr never tired of emphasising, the very notion of a ‘quantum phenomenon’ is defined in terms of macroscopic laboratory equipment: quantum mechanics impinges on the macroscopic world by making macroscopic objects behave in apparently probabilistic ways. The Born rule itself (the quantum mechanical recipe for assigning probabilities) tells us about unexpected behaviour of macroscopic objects.

All three aspects relate to general questions of reduction and emergence. We shall now look at how quantum theory illuminates them, with particular regard to Everett worlds, classical behaviour, and the Born rule.

2 A mini-primer on decoherence

For a more extensive review, see e.g. Bacciagaluppi (2020).

As a nice mesoscopic example of decoherence one can look at the handedness of chiral molecules. These are chemical analogues of Schrödinger’s cat: at the ‘higher’, more ‘macroscopic’ chemical level of description one observes these molecules as either right-handed or left-handed, but at the fundamental quantum level one would expect them to be in superpositions of right- and left-handed states.

What happens in fact is that chiral molecules interact with the electromagnetic field (spontaneously and largely uncontrollably). Interactions between quantum systems tend to produce entangled states, i.e. superpositions in which certain states of one system (e.g. the living cat state and the dead cat state) are coupled with certain states of another system (e.g. the undecayed atomic state and the decayed atomic state). And the electromagnetic field happens to couple (indeed, extremely quickly and effectively) to the right- and left-handed states of a chiral molecule rather than to superpositions thereof. This means that the original superposition of right- and left-handed states is transformed into a superposition of states of a very much larger composite system. It then becomes in practice impossible to observe any effects of this superposition. In the familiar case of the two-slit experiment, we are able to observe a superposition of an electron passing through the upper and lower slit by bringing together (‘interfering’) the two components of the state. But now we would have to bring together different components of the state of the composite of molecule and electromagnetic field. This is beyond our control.
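Schematically (the notation is mine and purely illustrative), writing \(|R\rangle\) and \(|L\rangle\) for the right- and left-handed states of the molecule and \(|E_0\rangle\) for the initial state of the electromagnetic field, the interaction takes
\[
\bigl(\alpha\,|R\rangle + \beta\,|L\rangle\bigr)\otimes|E_0\rangle \;\longrightarrow\; \alpha\,|R\rangle|E_R\rangle + \beta\,|L\rangle|E_L\rangle ,
\]
with \(\langle E_R|E_L\rangle \approx 0\) almost immediately. Any interference between the two components of the molecule’s state is proportional to this overlap, which is why it becomes unobservable in practice unless one can also control the state of the field.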

Of course, when we observe the molecule, we as observers split (many-worlds!), but decoherence ensures that we split into an observer who sees a right-handed molecule, and one who sees a left-handed one, never a molecule in a superposition. Furthermore, once we have observed a right-handed molecule, in that world the molecule will remain right-handed. Handedness is a stable property of chiral molecules in Everett worlds.

The ‘more macroscopic’ a system, the more unavoidable the effects of decoherence, and the more complex the stable structures that emerge from the universal wave function. This, however, is a rule of thumb. Whether or not decoherence effects are relevant needs to be assessed on a case-by-case basis. Not all molecules are chiral, for instance (ammonia being a standard counterexample, where we do observe superpositions of its two ‘handed’ pyramidal configurations), and there is no easy criterion for when a system should count as macroscopic (superconducting systems consist of a macroscopic number of particles, but they behave in distinctively quantum ways).
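In the ammonia case (again in schematic notation of my own), the two ‘handed’ pyramidal configurations \(|R\rangle\) and \(|L\rangle\) are not stable: the molecule tunnels between them, and the energy eigenstates are the superpositions
\[
|\pm\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|R\rangle \pm |L\rangle\bigr) ,
\]
split by the well-known inversion frequency of roughly 24 GHz, so that superpositions of the handed configurations are directly observable rather than decohered away.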

Indeed, there might be cases in which not all quantum effects have been suppressed by decoherence even at a clearly macroscopic level. A famous suggestion by Hameroff-Penrose links the phenomenon of consciousness with the possibility of quantum superpositions within microtubules in the brain (and their subsequent active suppression). Others interpret the mathematically quantum-like effects described within ‘quantum cognition’ as actual quantum effects (quantum superpositions of ‘vote Trump’ and ‘vote Clinton’ until the fateful first Tuesday in November!). At present, most such proposed macroscopic effects of quantum mechanics remain speculative at best, in particular any holistic behaviour due to entanglement, but plausible cases for the continuing relevance of quantum superpositions at the macroscopic level can be found in quantum biology, notably the studies of possible quantum effects in the navigational system of migrating birds.

3 Reduction and emergence

The concept of multiplicity in the Everett approach is best understood in terms of emergence: Everett was interested in stable structures arising within the universal wave function, and the theory of decoherence since developed provides powerful tools for identifying them. In his book on the multiverse theory, itself one of the most articulate expositions of the Everett approach, David Wallace (2012) phrases the question of reduction and emergence in terms of the concept of instantiation: a theory A instantiates a theory B over a certain domain D if there is a (relatively simple) mapping from the solutions of A within D to (approximate) solutions of B. This is a conceptualisation of emergence (indeed a functionalist one), because B will in general be autonomous from the details of the lower-level theory (in particular, it could be multiply instantiated), and because finding the relevant mapping may be possible only from a top-down perspective (more about this presently). It is also a conceptualisation of reduction, because ontologically speaking the entities at the higher level B are analysed as being suitable patterns in the more fundamental level A (Wallace adapts the ‘real patterns’ idea from Dennett (1991)). In this sense, reduction and emergence are seen as two sides of the same coin. I find this analysis extremely helpful. As Jenann Ismael and I have written in our review of Wallace’s book (Bacciagaluppi and Ismael 2015, p. 138):

This method quite generally is the strategy for ‘interpreting’ a fundamental theory. It turns the Ramsey/Lewis/Horwich method on its head. Instead of trying to implicitly define theoretical primitives in everyday or observational vocabulary, it treats the theory’s basic concepts as ontological primitives and interprets everyday concepts in that ontological setting by identifying something that plays that role.
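Schematically (the notation here is mine, not Wallace’s): A instantiates B over the domain D if there is a reasonably simple map
\[
f : \mathrm{Sol}_A\big|_D \;\longrightarrow\; \mathrm{Sol}_B ,
\]
taking each solution (history) of A restricted to D to something that approximately satisfies the equations of B; the higher-level entities of B are then identified with the patterns in A that such a map picks out.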

With specific reference to the emergence of worlds in Everett, decoherence theory tells us when different components of the universal wave function become dynamically independent of each other, so that for all intents and purposes they behave as if the other components were not there. Furthermore, these components show typical kinds of behaviour. First of all, many models of macroscopic systems display not only dynamical stability and in this sense deterministic behaviour, but even quantitatively Newtonian behaviour. (Narrow ‘wave packets’ provably follow approximately Newtonian trajectories, and decoherence makes sure that wave functions split into narrow wave packets.) The ‘classical world’ (insofar as it exists) is thus recognised as one of many components of the universal wave function that behave approximately classically. However (and surprisingly), models of classically chaotic systems like the weather turn out to be branching all the time, so that what we usually think of as classical unpredictability is in fact indeterminism emerging from the quantum level! Finally, the macroscopic worlds that emerge are never really (even approximately) classical, but are punctuated by ‘quantum phenomena’. Decoherence is in fact responsible for the existence of stable measurement records and for the indeterministic aspect of quantum measurements: from the internal perspective of a world, the deterministic branching of the universal wave function appears as indeterministic ‘collapse’.
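The claim about narrow wave packets can be made a little more precise using Ehrenfest’s theorem (a standard result, stated here only schematically): for a particle of mass m in a potential V,
\[
\frac{d}{dt}\langle x\rangle = \frac{\langle p\rangle}{m}, \qquad
\frac{d}{dt}\langle p\rangle = -\Bigl\langle \frac{\partial V}{\partial x}\Bigr\rangle \;\approx\; -\frac{\partial V}{\partial x}\bigl(\langle x\rangle\bigr),
\]
where the final approximation holds precisely when the packet is narrow compared with the scale on which the force varies, so that the centre of the packet traces out an approximately Newtonian trajectory.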

The appearance of collapse from within a world is an example of how the top-down perspective is essential in the analysis of emergent behaviour. From the bottom up, even if one were always able to identify the correct variables for a decoherence analysis, one would at most see a deterministic branching structure. Superpositions are still there (this in fact has been a major source of opposition to the Everett approach). It is only through the adoption of the higher-level perspective that one realises that the theory now predicts exactly the indeterministic (or classically unpredictable) kind of behaviour that we observe: a ‘Copernican’ shift in perspective, which Everett explicitly likened to Galileo’s argument that if the Earth moved, we would not feel it.

4 Decisions

This crucial role of the top-down perspective becomes even clearer when we consider the emergence of the quantitative probabilistic aspects of quantum mechanics (i.e. the Born rule), especially in the much-discussed decision-theoretic approach by Deutsch-Wallace (cf. Wallace 2012, Chap. 5). Clearly, at the fundamental level of the universal wave function there are no probabilities (the original title of Everett’s thesis was Wave Mechanics Without Probability). But we can ask whether from the perspective of a splitting agent there is anything in the formalism that plays the functional role of probabilities in guiding our decisions. Within this perspective—and using a number of assumptions that can be motivated only within this perspective—Deutsch-Wallace identify branch weights (the squared moduli of the coefficients of the wave function components) as what plays this role, thus recovering the Born rule. In this sense, quantum probabilities are a truly emergent concept that makes sense only at the higher level.
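In symbols (a schematic statement of the result, not of the derivation): if after decoherence the relevant state has the form
\[
|\Psi\rangle \;=\; \sum_i c_i\,|\mathrm{branch}_i\rangle , \qquad w_i = |c_i|^2 ,
\]
then the Deutsch-Wallace theorem says that a rational agent’s preferences over quantum bets can be represented as maximising \(\sum_i w_i u_i\), where \(u_i\) is the utility of what happens in branch i; the branch weights \(w_i\) thus play exactly the role that probabilities play in classical decision theory.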

But quantum mechanics may have a further effect on the macroscopic level of decision-making, through the very fact of there being a multiplicity of worlds (which are still in a quantum superposition, even if realistically they are never going to reinterfere). Surprising as this might sound, it is actually far from implausible that it should make a difference whether it is a single ‘I’ who faces the consequences of our decisions or a whole multiplicity of our successors.

The Deutsch-Wallace approach rules out such a difference (and is thereby able to exploit the powerful tools of classical decision theory), by explicitly assuming that what matters to an agent deliberating before a quantum split are the utilities for their successors in the various branches (‘in-branch’ utilities). As pointed out by various critics of the Deutsch-Wallace approach, this need not be the case. In particular, when an agent is deliberating whether or not to accept a bet involving a branching of their world, they might be thrilled by the very prospect of splitting (‘having their cake and eating it’), as evidenced by the commercial availability of world-splitting apps. Or an agent might be moved by considerations of distributive justice, as suggested by Huw Price (none of our successors should be very badly off). Simon Saunders himself, one of the researchers who have done most for the revival of Everett’s fortunes since the 1990s, once told me that being an Everettian makes him more risk-averse (something along the lines of: ‘If I drink a glass of wine at dinner, there is a world in which driving home I run over a child and kill them’). Conversely, an agent who believes in Everett might be keen to seize any opportunity to benefit at least some of their successors (my own excuse for occasionally playing the lottery is precisely that Everett might be right!). Some might even consider it rational to play quantum Russian roulette, when they would never play the classical version (but that seems to me just making most of their successors appallingly badly off).
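To illustrate (this is my own toy formulation, not one taken from the literature): where the Deutsch-Wallace agent values an act at \(\sum_i w_i u_i\), an agent moved by Price-style distributive concerns might instead use something like
\[
V \;=\; (1-\lambda)\sum_i w_i\,u_i \;+\; \lambda\,\min_i u_i , \qquad 0 < \lambda \le 1 ,
\]
which penalises acts that leave even a low-weight successor very badly off, and which no longer reduces to ordinary expected utility.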

The considerations above are about agents choosing some course of action in the knowledge of a later branching of their world: for instance we might accept a quantum bet; but even more commonly, our classical world will split of its own accord (betting on good weather when driving home after dinner is a quantum bet). From the decision-theoretic perspective it is irrelevant how the process of deliberation itself is modelled, as long as such behaviour emerges from the underlying description. In particular, I believe it is irrelevant whether or not we also split when taking a decision. Establishing what kind of process this is remains a task for cognitive neuroscience, but deliberating could be a high-level description of a classical neurophysiological process within a single non-branching world, it could be a high-level description of a classically unpredictable process involving massive branching, or it could even involve quantum branching in an essential way, either initiating it or (less likely) exploiting interference effects in the brain.

All of this is perfectly compatible with physicalism, but also provides fresh scope for dualism (at least in decisions that do involve branching): specifically, as suggested by Christian, when we branch, in particular when making a decision, our (many-)minds will by preference cluster in certain branches. While I am not sure I want to follow him in embracing dualism, I do believe that an Everett-Deutsch-Wallace framework in which probabilities are not fundamental but only emerge from a decision-theoretic perspective may be especially advantageous in arguing for the emergence of conscious choices. Indeed, while weights of later branches play the role of probabilities in the Deutsch-Wallace approach, this approach is presumably inapplicable to weights attached to any branches defining the very choice we are making. From a first-person perspective, these weights will not be probabilities for the result of our own decision (as they would be from a third-person perspective). Thus, the fact that probabilities are not fundamental, and do not emerge until we get to the very high level of agents, arguably removes what might be thought of as excessive constraints at the physical level for the emergence of conscious decision-making. But I shall leave that as a speculative point.