# On the Gravitization of Quantum Mechanics 1: Quantum State Reduction

DOI: 10.1007/s10701-013-9770-0

- Cite this article as:
- Penrose, R. Found Phys (2014) 44: 557. doi:10.1007/s10701-013-9770-0


## Abstract

This paper argues that the case for “gravitizing” quantum theory is at least as strong as that for quantizing gravity. Accordingly, the principles of general relativity must influence, and actually *change*, the very formalism of quantum mechanics. Most particularly, an “Einsteinian”, rather than a “Newtonian” treatment of the gravitational field should be adopted, in a quantum system, in order that the principle of equivalence be fully respected. This leads to an expectation that quantum superpositions of states involving a significant mass displacement should have a finite lifetime, in accordance with a proposal previously put forward by Diósi and the author.

### Keywords

Quantum theory · Linear superposition · Measurement · Gravitation · Principle of equivalence · Diósi-Penrose state reduction

The title of this article (and of its companion, [16]) contains the phrase “gravitization of quantum mechanics” in contrast to the more usual “quantization of gravity”. This reversal of wording is deliberate, of course, indicating my concern with the bringing of quantum theory more in line with the principles of Einstein’s general relativity, rather than attempting to make Einstein’s theory—or any more amenable theory of gravity—into line with those of quantum mechanics (or quantum field theory).

Why am I phrasing things this way around rather than attempting to move forward within that more familiar grand programme of “quantization of gravity”? I think that people tend to regard the great twentieth century revolution of *quantum theory* as a more fundamental scheme of things than gravitational theory. Indeed, quantum mechanics, strange as its basic principles seem to be, has no evidence against it from accepted experiment or observation, and people tend to argue that this theory is so well established now that one must try to bring the *whole* of physics within its compass. Yet, that other great twentieth century revolution, namely the *general theory of relativity*, is also a fundamental scheme of things which, strange as *its* basic principles seem to be, also has no confirmed experiments or observations that tell against it—provided that we comply with Einstein’s introduction of a cosmological constant \(\Lambda \), in 1917, which now appears to be needed to explain the very large-scale behavior of cosmology. So why give quantum theory pride of place in this proposed union?

It is true that there are far more phenomena, on a human scale, that are found to require quantum mechanics for their explanation than require general relativity—far, far more. Also, I think it is felt that since quantum mechanics normally refers to very small things, and general relativity to very big things—and we tend to think of big things as being made up of small things—then the theory of the small things, i.e. quantum mechanics, must be the more fundamental. However, I think that this is misleading. Moreover, there is another issue that I regard as far more important than this issue of *size*, namely that of the *consistency* of the theory.

We know of the problems of the infinities that so frequently tend to arise in the quantum theory of fields, and much work is geared to trying to eliminate or, at least, deal with these infinities in a consistent and appropriate way. But, of course, general relativity also has its infinities, these arising inevitably in the gravitational collapse to a black hole, or perhaps of an entire universe. These are serious issues, representing potential mathematical inconsistencies in the global applicability of each theory. These are issues that certainly do need attending to, and it is often argued that the infinities of one theory might be alleviated if the principles of the other theory can be appropriately brought to bear on them.

This brings us to the quantum “*measurement paradox*”. We can think of this as a fundamental conflict between two of the very foundational principles of quantum mechanics, namely *linearity* and *measurement*. Let us have a look at these issues, starting with linearity, which is an essential part of the *unitary* evolution **U** of the quantum state-vector. In Fig. 1, we envisage a high-energy photon aimed at some brown object, which ejects various other objects as soon as the photon hits it. In Fig. 2, a mirror is inserted in the path of the photon, and the reflected photon hits a green object instead, which ejects a quite different family of objects when hit by the photon. In Fig. 3, the mirror is replaced by a beam-splitter (effectively a half-silvered mirror) which splits the photon state into a quantum superposition of the two considered earlier, so that its state now becomes a superposition of being transmitted, and aimed at the brown object, and of being reflected, and aimed at the green object. Quantum linearity demands that the two superposed photon states, as they emerge from the beam-splitter, each individually maintain the evolution that it would have achieved in the absence of the other. If quantum linearity holds universally, then the results of the photon impacts (for a photon of high energy) could evidently be macroscopic events.
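The linearity being invoked here can be made concrete in a toy two-path model (a sketch with an arbitrary illustrative unitary, nothing specific to the figures): evolving the superposition gives exactly the superposition of the individually evolved branches.

```python
import numpy as np

# Two orthogonal photon paths after the beam-splitter:
# "transmitted, towards the brown object" and "reflected, towards the green object".
transmitted = np.array([1.0, 0.0], dtype=complex)
reflected = np.array([0.0, 1.0], dtype=complex)

# A 50/50 beam-splitter puts the photon into an equal-amplitude superposition
# (the factor i on the reflected branch is the conventional reflection phase).
superposed = (transmitted + 1j * reflected) / np.sqrt(2)

# Any subsequent U-evolution is some unitary matrix; an arbitrary example:
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

# Linearity: evolving the superposition equals superposing the evolved branches.
lhs = U @ superposed
rhs = (U @ transmitted + 1j * (U @ reflected)) / np.sqrt(2)
assert np.allclose(lhs, rhs)
```

The point of the paradox is that nothing in this algebra switches off when the two branches lead to macroscopically distinct outcomes.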

This issue was clearly appreciated by Schrödinger, as he illustrated the problem with his famous “cat” thought experiment. In his version he considered the decay of a radioactive atom, but it will fit in better with what I have to say if I employ a beam-splitter instead. In Fig. 1, the brown object is replaced by a murderous device (photon detector connected to a gun) which kills a cat. In Fig. 2, a mirror is inserted between the photon source and the device, so the cat survives. But in Fig. 3, the mirror is replaced by a beam-splitter so, according to quantum linearity (as demanded by **U**), the resulting quantum state is one involving a superposition of a dead and a live cat!

Sometimes people object to this direct kind of description of a quantum **U**-evolution, pointing out that I do not appear to have taken into consideration the complicated *environment* that must be entangled with the detector and the cat, etc.—since in the “environmental decoherence” description of quantum measurement, this environmental involvement is all-important, the unobserved degrees of freedom in the air molecules, for example, having to be averaged out in our quantum descriptions. Other objections come from the fact that no “observer” has been brought into the picture, it being frequently argued that it is by bringing in an observer’s consciously perceived alternatives—necessarily of one clear outcome or another—that finally breaks the chain of continuing superpositions. In Fig. 1, I have indicated the environment, with some dots to represent air molecules, in the situation described previously in Fig. 1. I have also included a human observer. The observer’s conscious perception of the dead cat is depicted in a “thought bubble”, but whether or not such a thing can be represented as part of the quantum state is not important here, because one can see from the observer’s unhappy expression that the cat’s demise has been consciously perceived. In Fig. 2, I have done the same, but with the mirror inserted, so that the cat remains alive, accompanied by a different environment, represented by some slightly differently placed dots. Now the observer consciously perceives the cat to be alive (as indicated in the thought bubble), and this is made evident from the now happy expression on the observer’s face. None of this appears to affect the situation arising when the mirror is replaced by a beam splitter, as depicted in Fig. 3. The environment is now a superposition of these two possibilities, and the observer’s facial expression is now a superposition of a smile and a frown, as the outward expression of the superposed conscious perception of a dead and live cat.

Of course, some might object that conscious perceptions are not like this, and our normal streams of consciousness do not involve us in perceiving such gross superpositions. Of course they do not, in our actual experiences, but I see no contradiction in this kind of conscious perception, and we need to explain *why* they do not occur. The **U**-evolution of the standard quantum formalism, with its implied linearity, provides, in itself, no explanation for this remarkable but utterly familiar fact.

To get the right answer for the un-superposed nature of the perceived macroscopic world, we need to wheel in the *other* part of quantum formalism, namely the reduction **R** of the quantum state, according to which the state is taken as “jumping”, probabilistically, to an eigenstate of some quantum operator that is taken to represent the action of quantum measurement. If we don’t bring in **R** then we just don’t get a description of the physical world that we all perceive. Strictly speaking, **R** is simply *inconsistent* with **U**. Whereas **U** is a continuous and deterministic evolution, **R** is (normally) discontinuous and probabilistic. Moreover, any measuring apparatus is itself constructed from the same kind of “stuff” that constitutes all the quantum systems that are under examination, being built from quantum particles, quantum fields and quantum forces. So why do we not use **U** to describe its action in the course of performing a measurement?

Of course there are innumerable different ways that quantum physicists and philosophers try to patch up this inconsistency, normally taking **U** to represent an underlying truth of Nature and regarding **R** as coming about somehow as an approximation, or as an “interpretation” of how the quantum state is supposed to be viewed in relation to physical reality. I do not propose to enter into a discussion of any of these alternative viewpoints, but I merely present my own view, which is to regard **U** itself as an approximation to some yet-undiscovered non-linear theory. That theory would have to yield **R** as another approximation to a reality, describing how quantum measuring devices actually evolve in a way that subtly deviates from the action of **U**.

The idea to be pursued here is to regard *gravitation* as being the key to the issue of how standard quantum theory is to be modified. Other, rather different, gravitational proposals have been put forward; see [5–8, 17]. My own approach to this is to take the principles of general relativity as having a defining role in how quantum mechanics is to be modified. Some of the relevant issues, relating to the idea that gravity should be involved in this modification of quantum mechanics, are listed in Fig. 4.

The specific proposal under consideration here concerns a quantum superposition of two states of a massive object, each of which, on its own, would be a *stationary* state. This superposition is to be taken as being *unstable* (owing to some non-linear effects coming from the putative unknown “gravitization” of standard quantum mechanics), with a lifetime \(\tau \), of the order of

\[ \tau \approx \frac{\hbar}{E_{G}}, \]

where \(E_{G}\) is taken to be a fundamental *uncertainty* in the energy of the superposed state (see [12]), and the above formula is taken to be an expression of the Heisenberg time–energy uncertainty relation (in analogy with the formula relating the lifetime of a radioactive nucleus to its mass/energy uncertainty).

The quantity \(E_{G}\) is the gravitational self-energy of the *difference* between the mass (expectation) distributions of the two stationary states in superposition. (If the two states merely differ from one another by a rigid translation, then we can calculate \(E_{G}\) as the gravitational interaction energy, namely the energy it would cost to separate two copies of the lump, initially considered to be coincident and then moved to their separated locations in the superposition.) The calculation of \(E_{G}\) is carried out entirely within the framework of Newtonian mechanics, as we are considering the masses involved as being rather small and moved very slowly, so that general-relativistic corrections can be ignored.
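As a rough numerical illustration of this prescription (the sphere’s size and density below are my own illustrative choices, not taken from the text), one can compute \(E_{G}\) for two initially coincident copies of a uniform sphere pulled apart to large separation, and the corresponding lifetime \(\tau \approx \hbar /E_{G}\):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34  # reduced Planck constant, J s

# Hypothetical lump: a uniform silica-like sphere of 1 micron radius
# (illustrative numbers only).
R = 1.0e-6    # radius, m
rho = 2200.0  # density, kg/m^3
M = (4.0 / 3.0) * math.pi * R**3 * rho

# Mutual Newtonian energy of two coincident identical uniform spheres is
# -6GM^2/(5R), and zero at infinite separation, so the energy needed to pull
# the two copies apart "all the way to infinity" is:
E_G = 6.0 * G * M**2 / (5.0 * R)

# Diósi-Penrose lifetime estimate, tau ~ hbar / E_G
tau = HBAR / E_G
print(f"M = {M:.2e} kg, E_G = {E_G:.2e} J, tau = {tau:.2e} s")
```

For a sphere of this size the lifetime comes out on the order of tens of milliseconds, which is what makes proposals of this kind potentially testable.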

The key physical input is the *principle of equivalence* (see [15]), which is the basic foundational principle of general relativity. Let us first imagine a table-top quantum experiment being performed, where we wish to calculate the wavefunction of the quantum system under consideration, so that Schrödinger’s equation can then be used to calculate the time-evolution of this state. In this particular case, we are interested in taking the Earth’s *gravitational field* into consideration. We may envisage two different procedures for doing this (illustrated in Fig. 9). The first (with the green coordinates) would be the conventional quantum physicist’s approach, where we simply include a term in the quantum Hamiltonian representing the Newtonian potential of the gravitational field, treating the gravitational field as providing an “ordinary force” in the same way that we would for any other force (such as an electric or magnetic force). This procedure gives us what I shall call the *Newtonian* wavefunction \(\psi \) (written in green in Fig. 9). The second procedure is more in the spirit of Einstein’s general relativity, where we imagine doing our quantum mechanics in a freely falling frame (purple coordinates). In this frame there is no gravitational field, the accelerated fall cancelling it out, and our *Einsteinian* wavefunction \(\Psi \) (written in purple in Fig. 9), described in terms of these free-fall coordinates, evolves according to a Schrödinger equation whose quantum Hamiltonian has no term representing the gravitational field.

We would expect these two quantum evolutions to agree with one another—and, indeed, a famous experiment performed in 1975 by Colella, Overhauser and Werner confirmed that Einstein’s principle of equivalence is indeed respected by quantum mechanics in this respect. In fact, a direct calculation shows that the Newtonian and Einsteinian wavefunctions are related to each other simply by one being a *phase multiple* of the other. This multiple is explicitly shown in Fig. 9 (outlined with a yellow broken line), 3/4 of the way down the picture for the dynamics of a single particle, and at the bottom of the picture for the dynamics of a system of many particles.

Now, surely, this shows that standard quantum mechanics is perfectly consistent with Einstein’s equivalence principle, since an overall *phase* multiplier for the wavefunction ought not to affect the interpretation in terms of probabilities, when a measurement (**R**) is performed on the system. However, there is an additional subtlety here, because if we examine this particular phase factor we see that it contains a term in \(t^{3}\) in the exponent (with coefficient proportional to the square \(g^{2}\) of the gravitational acceleration vector \(\vec{g}\)), where \(t\,({=}T)\) is the time in both coordinate systems. Because of this non-linear dependence of the phase factor on \(t\), we find that the notion of *positive frequency* differs in the two viewpoints (Newtonian and Einsteinian). Accordingly, the two different reference systems, the Newtonian one and the Einsteinian one, refer to two different quantum field theory *vacua*.
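The role of the \(t^{3}\) phase term can be checked symbolically. The following sketch (with a sign convention of my own choosing for the uniform field, not the paper’s Fig. 9 itself) verifies that a plane-wave solution of the free Schrödinger equation in the free-fall frame, multiplied by a phase whose exponent contains a \(t^{3}\) term with coefficient proportional to \(a^{2}\), solves the Newtonian Schrödinger equation exactly:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
m, a, hbar, k = sp.symbols('m a hbar k', positive=True)

# Freely falling (Einsteinian) coordinate; a is the uniform gravitational acceleration.
X = x - a * t**2 / 2

# A plane-wave solution of the FREE Schrodinger equation in the free-fall frame.
Psi = sp.exp(sp.I * (k * X - hbar * k**2 * t / (2 * m)))

# Candidate relating phase: note the t**3 term, coefficient proportional to a**2.
theta = (m / hbar) * (a * t * x - a**2 * t**3 / 6)

# Newtonian wavefunction: the Einsteinian one times the phase factor.
psi = sp.exp(sp.I * theta) * Psi

# Residual of the Newtonian Schrodinger equation with uniform-field
# potential V = -m*a*x (so that the classical force is +m*a).
residual = (sp.I * hbar * sp.diff(psi, t)
            + hbar**2 / (2 * m) * sp.diff(psi, x, 2)
            + m * a * x * psi)

print(sp.simplify(residual / psi))  # vanishes identically
```

Since the exponent’s \(t^{3}\) piece is what survives any redefinition of energy zero, it is exactly this term that shifts the notion of positive frequency between the two frames.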

The fact that physics in an accelerating frame is described by a different vacuum from that in an unaccelerated one is familiar in relativity, where according to what is called the “Unruh effect” [2, 18] the accelerated vacuum is a *thermal vacuum* with a temperature \(\hbar g/2\pi ck\) (where \(k\) is Boltzmann’s constant). Here, we are considering the Galilean limit \(c\rightarrow \infty \), so we see that the temperature goes to zero. Nevertheless, the accelerated vacuum remains different from the unaccelerated one even in this limit, although in this limiting case the accelerated vacuum is not a thermal one, but the phase discrepancy between the Newtonian and Einsteinian wavefunctions agrees with that shown in Fig. 9 (confirmed in a private communication from B. S. Kay).
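For orientation (using standard SI constants), the Unruh temperature for an Earth-strength acceleration is fantastically small, consistent with its vanishing altogether in the \(c\rightarrow \infty \) limit:

```python
import math

HBAR = 1.055e-34  # J s
C = 2.998e8       # m/s
K_B = 1.381e-23   # J/K (Boltzmann constant)

def unruh_temperature(a: float) -> float:
    """Unruh temperature hbar*a / (2*pi*c*k_B) for proper acceleration a (m/s^2)."""
    return HBAR * a / (2.0 * math.pi * C * K_B)

# Earth-surface acceleration: the result is of order 10^-20 K.
print(unruh_temperature(9.8))
```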

Although, strictly speaking, the notion of alternative vacua is a feature of quantum *field* theory, rather than of the “Galilean” non-relativistic quantum mechanics that is under consideration here, the issue has direct relevance to the latter also. Standard quantum mechanics indeed requires that energies remain positive (i.e. that frequencies remain positive). This is not normally a problem, because the quantum dynamics governed by a positive-definite Hamiltonian will preserve this condition. But in the situation arising here we appear to be forced into the prospect of its violation unless the vacua are kept separate, the presence of the \(t^{3}\) term coming about through the adoption of the Einsteinian perspective with regard to the gravitational field.

So long as the background gravitational field is treated as simply *given*, this difference in vacua is of no particular concern, since we can use one or the other description consistently, without any discrepancy between the two choices of vacuum as regards the results of measurement. But when the gravitational field itself becomes involved in a quantum superposition, the issue is more serious. In Fig. 10 I have illustrated this kind of situation, where a massive sphere is placed in a quantum linear superposition of two separate locations. We are to consider how quantum mechanics is to be used to treat this situation in accordance with the (preferred) *Einsteinian* viewpoint. The problem now is that in order to describe this superposition of gravitational fields, according to this viewpoint, we are confronted with the problem of superposing states that relate to two different vacua.

Strictly speaking this is *illegal*, according to the rules of quantum field theory, as the two states that are proposed to be under superposition belong to different Hilbert spaces. The general problem that arises would be that when we try to form an inner product between a state described in relation to one vacuum with a state described in relation to a different vacuum, we tend to get a *divergence* which violates the very rules that Hilbert spaces are supposed to satisfy (that inner products should be finite numbers), so that the probability interpretation (Born rule) now makes nonsense. Nevertheless, we might take the view that such things come about merely because some mathematical idealization has been made (such as is the case for momentum states, for example, which are, strictly, not Hilbert space members) and that if full details of the situation are properly encoded, then such divergences would not occur. But how are we to deal with this when the very Hilbert spaces are different, so that superpositions cannot actually be performed? An analogous situation might come from some mathematical treatments of, for example, a superconductor, where one might consider the superconducting state to be described in relation to a vacuum different from the standard one. Of course, if an actual physical superconductor were in a quantum state belonging to a Hilbert space with a different vacuum from that of the Hilbert space describing the system before the superconducting state were set up, then there would be no way to build a superconductor, since transitions from one Hilbert space to another, each with a different vacuum, would be “against the rules” of standard quantum field theory.

An estimate can be made of the *error* that is involved in trying to form a stationary state out of the superposition of locations, where we take the view that the divergences involved in the superposition will come about from the coefficient of \(t^{3}\) in the “discrepancy” phase factor that we see in Fig. 10, this coefficient providing a measure of the fundamental energy uncertainty \(E_{G}\): the gravitational self-energy of the *difference* between the two mass distributions involved. (See [12] for this derivation and for an alternative motivation for this energy uncertainty.) When \(E_{G}\) is substituted into Heisenberg’s time-energy uncertainty formula, we get the aforementioned estimate of the lifetime of such a superposed state, before it spontaneously decays into one alternative or the other.

One point should be made clear about this proposal [3, 4, 11, 12], namely that *all* **R** processes of state-reduction are to be the result of actions of this gravitational proposal. In a deliberate measurement by means of a conventional quantum detector, **R** would be triggered by some movement of mass within the detector, sufficient to reduce the state within a very small period of time \(\sim \tau \). In other situations, where there is no deliberate measurement of the quantum system, but some considerable entanglement with an extended random environment, the major mass displacement would likely come about from the mass displacement of material within that environment, so when the environment state collapses to one alternative or the other, so does that of the quantum system itself, because the two parts, being entangled, both reduce together. This makes contact with the point of view whereby **R** is deemed to occur via *environmental decoherence*, but now we have a clear ontology of an objectively *real* reduction of the quantum state, and we do not have to appeal to some vague concept of the environmental degrees of freedom being, in some sense, “unobservable”.

It is worth remarking that, by the time the displacement reaches the point of *contact* of the two instances of the sphere, the value of \(E_{G}\) is already nearly 2/3 of the value it would reach for a displacement all the way out to infinity. Thus, for a uniformly solid body like this, we do not gain much by considering displacements in which the two instances of the body are moved apart by a distance of more than roughly its own diameter.
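This contact claim can be checked from the standard Newtonian expressions for uniform spheres (a sketch in units of \(GM^{2}/R\)); the ratio at contact comes out at \(7/12\approx 0.58\), a little under 2/3:

```python
# E_G for rigidly displacing a uniform sphere (mass M, radius R) by a centre-to-centre
# distance b >= 2R: the energy to pull two coincident copies apart to separation b.
# Mutual energy of coincident uniform spheres is -6GM^2/(5R); once the two instances
# no longer overlap (b >= 2R), the point-mass value -GM^2/b applies (shell theorem).
# All values below are in units of G*M**2/R.
def e_g(b_over_R: float) -> float:
    assert b_over_R >= 2.0, "formula assumes the two instances no longer overlap"
    return 6.0 / 5.0 - 1.0 / b_over_R

at_contact = e_g(2.0)       # spheres just touching: 6/5 - 1/2 = 7/10
at_infinity = 6.0 / 5.0     # full separation
print(at_contact / at_infinity)  # 7/12 ~ 0.583
```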

If, on the other hand, we try to apply this procedure to a superposition of two point-particle locations, calculating \(E_{G}\) from the *difference* between the two mass distributions (the mass difference distribution indicated by the brown–green difference shown in Fig. 13), then we get an infinite answer if the mass distributions are actually taken as the delta functions representing these point-particle locations. Such an infinite answer for \(E_{G}\) would give a *zero* decay time, in other words an instantaneous reduction of any superposed state of actual material—and we would have no quantum mechanics!

Clearly this is not the correct answer, and indeed the procedure has not been carried out correctly, because (as noted earlier) the procedure of calculating \(E_{G}\) applies only when each of the two states in superposition is actually *stationary*. A state involving delta function mass distributions like this would not be stationary, according to Schrödinger’s equation, since the delta-function mass distributions would instantly spread out from these locations. What we need to do is first to solve the stationary Schrödinger equation for the material in our object. Even this would lead to a nonsense, however, if carried out strictly, for an actual stationary state, since such a state would have to be spread out over the entire universe, and we would now get the opposite problem of obtaining a *zero* answer for \(E_{G}\).

What one tends to do, in practice, when considering the wavefunction of a stationary state is to factor out the mass centre, regarding the wavefunction as being concerned with distances *relative* to the mass centre. This procedure is a little artificial, and one can adopt the more systematic procedure of modifying the Schrödinger equation in the form of the self-coupled “Schrödinger-Newton equation”, for a wavefunction \(\psi \), which incorporates an additional term in the Hamiltonian, given by the gravitational potential due to the expectation value of the mass distribution in \(\psi \) itself (see [10, 13]). The “Newton” term prevents the stationary solutions from being infinitely spread out. In situations of relevance here, to a reasonable enough approximation, this amounts to the aforementioned procedure of factoring out the mass centre, and then just solving the Schrödinger equation. This saves us from having to solve the non-linear Schrödinger-Newton equation in detail in order to calculate \(E_{G}\). On the other hand, incorporating the Schrödinger-Newton equation has another theoretical importance in also providing the possible stable stationary states to which a quantum superposition can reduce by the operation of **R**.
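In symbols, the Schrödinger-Newton system referred to here couples the wavefunction to the Newtonian potential sourced by its own expected mass density (one common single-particle form; conventions vary):

\[ i\hbar \frac{\partial \psi }{\partial t} = -\frac{\hbar ^{2}}{2m}\nabla ^{2}\psi + m\,\Phi \,\psi , \qquad \nabla ^{2}\Phi = 4\pi G\, m\,|\psi |^{2}, \]

so that \(\Phi \) is the Newtonian potential of the mass distribution \(m|\psi |^{2}\), and the attractive self-term is what holds the stationary solutions together.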

It might be presumed, since we are looking at space-time differences whose measure is of the order of the extremely tiny Planck scale, that nothing of observable consequence would be likely to come from considerations of this kind. Indeed, experiments aimed at testing the influences of quantum mechanics on space-time structure—the presumed consequences of “quantum gravity” in the usual interpretation of that term—would be enormously far from what can be achieved by current technology. However, we are here concerned with the role of gravitational principles on the structure of quantum mechanics, not the other way about, where the experimental situation is very different. One way of looking at this issue is to realize that the scales at which quantum mechanics is expected to influence space-time structure would indeed be the absurdly tiny Planck length \((\hbar G/c^{3})^{1/2}\approx 1.6\times 10^{-35}\) m and Planck time \((\hbar G/c^{5})^{1/2}\approx 5.4\times 10^{-44}\) s, these being so very tiny, compared with ordinary scales, because they involve dividing a very small quantity by a very large one. However, the quantity of relevance to our considerations of the “gravitization of quantum mechanics” under consideration here is \(\tau \approx \hbar /E_{G}\), which is one very small quantity divided by another very small quantity (since \(E_{G}\) is scaled by the gravitational constant), and which need be neither very small nor very large. Accordingly, one must examine the quantities involved in any proposed experiment very carefully, to see whether the time scale \(\tau \) turns out to be one that comes within the scope of the experiment being envisaged.
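The Planck-scale numbers quoted here are easy to check directly from the standard SI values:

```python
import math

HBAR = 1.055e-34  # J s
G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m/s

planck_length = math.sqrt(HBAR * G / C**3)  # ~1.6e-35 m
planck_time = math.sqrt(HBAR * G / C**5)    # ~5.4e-44 s
print(planck_length, planck_time)
```

Note that the Planck time is the Planck length divided by \(c\), hence the \(c^{5}\) under the square root; by contrast \(\tau \approx \hbar /E_{G}\) contains no factor of \(c\) at all, which is why it can land at laboratory scales.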

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.