In an earlier version of this text, a subtitle was added to the title of this book: A view on the quantum nature of our universe. This raised objections: “your view seems to be more classical than anything we’ve seen before!” Actually, this can be disputed. We argue that underlying classical laws can be turned into quantum mechanical ones practically without leaving any trace, and we insist that what comes out is real quantum mechanics, including all “quantum weirdness”. The nature of our universe is quantum mechanical, but it may have a classical explanation: the underlying laws may be completely classical. We show how ‘quantum mechanical’ probabilities can originate from completely classical probabilities.

Whether the views presented here are ‘superior’ to other interpretations, we leave to the reader to decide. The author’s own opinion should be clear: I do not believe in the contrived concoctions some of my colleagues have come forward with; compared to those, I would prefer the original Copenhagen interpretation without any changes at all.

It may seem odd that our theory, unlike most other approaches, contains no strange kinds of stochastic differential equations, no “quantum logic”, no infinity of other universes, and no pilot wave, just completely ordinary equations of motion that we have hardly been able to specify, as they could be almost anything. Our most essential point is that we should not be deterred by ‘no go theorems’ if these contain small print and emotional ingredients in their arguments. The small print that we detect is the assumed absence of strong local as well as non-local correlations in the initial state. Our models show that there should be such strong correlations. Correlations do not require superluminal signals, let alone signals going backwards in time.

The emotional ingredient is the idea that the conservation of the ontological nature of a wave function would require some kind of ‘conspiracy’, as it is deemed unbelievable that the laws of Nature themselves can take care of that. Our point is that they obviously can. Once we realize this, we can consider studying very simple local theories.

Is a cellular automaton equivalent to a quantum theory, or indeed, a quantum field theory? As stated above, the answer is: formally yes, but in most cases the quantum equations will not resemble the real world very much. Obtaining locality in the quantum sense out of a cellular automaton that is local classically is hard, and the positivity of the Hamiltonian, or the boundedness of the Hamiltonian density, is difficult to prove in most cases.

In principle it is almost trivial to obtain “quantum mechanics” out of classical theories. We demonstrated how it can be done with a system as classical as the Newtonian planets moving around the sun. But then difficulties do arise, which of course explain why our proposal is not so simple after all. The positivity of the Hamiltonian is one of the prime stumbling blocks. We can enforce it, but then the plausibility of our models needs to be scrutinized. At the very end we have to concede that the issue will most likely involve the mysteries of quantum gravity. Our present understanding of quantum gravity suggests that discretized information is spread out in a curved space–time manifold; this is difficult to reconcile with Nature’s various continuous symmetry properties such as Lorentz invariance. So, yes, it is difficult to get these issues correct, but we suggest that these difficulties will only indirectly be linked to the problem of interpreting quantum mechanics.

This book is about these questions, but also about the tools needed to address them; they are the tools of conventional quantum mechanics, such as symmetry groups and Noether’s theorem.

A distinction should be made between on the one hand explicit theories concerning the fate of quantum mechanics at the tiniest meaningful distance scale in physics, and on the other hand proposals for the interpretation of today’s findings concerning quantum phenomena.

Our theories concerning the smallest scale of natural phenomena are still very incomplete. Superstring theory has come a long way, but seems to make our view more opaque than desired; in any case, in this book we investigated only rather simplistic models, for which it is at least clear what they say.

1 The CAI

What we found seems to be more than sufficient to extract a concrete interpretation of what quantum mechanics really is about. The technical details of the underlying theory do not make much difference here. All one needs to assume is that some ontological theory exists; it will be a theory that describes phenomena at a very tiny distance scale in terms of evolution laws that process bits and bytes of information. These evolution laws may be “as local as possible”, requiring only nearest neighbours to interact directly. The information is also strictly discrete, in the sense that every “Planckian” volume of space may harbour only a few bits and bytes. We also suspect that the bits and bytes are processed as a function of local time, in the sense that only a finite amount of information processing can take place in a finite space–time 4-volume. On the other hand, one might suspect that some form of information loss takes place, such that information may be regarded as occupying surface elements rather than volume elements, but we could not elaborate this very far.

In any case, in its most basic form, this local theory of information being processed does not require any Hilbert space or superposition principles to be properly formulated. At the most basic level of physics (but only there), the bits and bytes we discuss are classical bits and bytes. At that level, qubits do not play any role, in contrast with more standard approaches considered in today’s literature. Hilbert space only enters when we wish to apply powerful mathematical machinery to address the question of how these evolution laws generate large scale behaviour, possibly collective behaviour, of the data.
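
A minimal sketch of this point, in entirely hypothetical code (the evolution law, the number of states, and all names are our own illustration, not a model from this book): a deterministic automaton that merely permutes a finite set of ontological states can be rewritten, without any change of content, as a unitary operator on a Hilbert space spanned by those states.

```python
import numpy as np

# Hypothetical evolution law: N ontological states, state i -> perm[i].
# No superpositions ever occur; the law just shuffles classical data.
N = 5
perm = [1, 2, 3, 4, 0]

# The same law as a matrix on the Hilbert space spanned by the states:
# column i carries a single 1, in row perm[i].
U = np.zeros((N, N))
for i, j in enumerate(perm):
    U[j, i] = 1.0

# A permutation matrix is unitary, so the standard quantum machinery
# (eigenstates, a Hamiltonian with U = exp(-i H dt)) becomes available.
assert np.allclose(U @ U.T, np.eye(N))

state = np.zeros(N)
state[0] = 1.0      # an ontological state: a single 1, all other entries 0
print(U @ state)    # after one time step: still a single basis state
```

The point of the sketch is only that Hilbert space enters as bookkeeping; the evolution itself never leaves the ontological basis.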

Our theory for the interpretation of what we observe is now clear: humanity discovered that phenomena at the distance and energy scale of the Standard Model (which comprises distances vastly larger, and energies far smaller, than the Planck scale) can be captured by postulating the effectiveness of templates. Templates are elements of Hilbert space that form a basis that can be chosen in many ways (particles, fields, entangled objects, etc.), and they allow us to compute the collective behaviour of solutions to the evolution equations; these computations do require the use of Hilbert space and linear operations in that space. The original observables, the beables, can all be expressed as superpositions of our templates. Which superpositions one should use differs from place to place. This is weird but not inconceivable. Apparently there exists a powerful scheme of symmetry transformations allowing us to use the same templates under many different circumstances. The rule for transforming beables to templates and back is complex and not free of ambiguity; exactly how the rules are to be formulated, for all objects we know about in the universe, is not known or understood, and must be left for further research.

Most importantly, the original ontological beables do not allow for any superposition, just as we cannot meaningfully superimpose planets, but the templates, with which we compare the beables, are elements of Hilbert space and require the well-known principles of superposition.

The second element in our CAI is that objects we normally call classical, such as planets and people, but also the dials and all other detectable signals coming from measurement devices, can be directly derived from beables, in principle without the intervention of the templates.

Of course, if we want to know how our measurement devices work, we use our templates, and this is the origin of the usual “measurement problem”. The issues often portrayed as mysteries in quantum theory (the measurement problem, the ‘collapse of the wave function’, Schrödinger’s cat) are completely clarified by the CAI. All wave functions that will ever occur in our world may seem to be superpositions of our templates, but they are completely peaked, ‘collapsed’, as soon as we use the beable basis. Since classical devices are also peaked in the beable basis, their wave functions are collapsed. No violation of Schrödinger’s equation is required for that; on the contrary, the templates, and indirectly also the beables, exactly obey the Schrödinger equation.
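
That one and the same state is ‘collapsed’ in the beable basis while looking like a superposition among templates is nothing but a change of basis. The following toy sketch (a random unitary standing in for the unknown beable-to-template map; purely our illustration) makes this concrete.

```python
import numpy as np

# A random unitary V stands in for the (unknown) map from the beable basis
# to some template basis; a stand-in, not the physical mapping.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V, _ = np.linalg.qr(A)                  # columns of V: template basis vectors

psi_beable = np.zeros(4)
psi_beable[2] = 1.0                     # 'collapsed': a single 1 in the beable basis

psi_template = V.conj().T @ psi_beable  # the same state, in template coefficients
print(np.abs(psi_template) ** 2)        # generically spread out: a 'superposition'
```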

In short, it is not Nature’s degrees of freedom themselves that allow for superposition; it is the templates we normally use that are man-made superpositions of Nature’s ontological states. The fact that we hit upon the apparently inevitable paradoxes concerning superposition of the natural states is due to our intuitive thinking that our templates represent reality in some way. If, instead, we start from the ontological states that we may one day succeed in characterizing, the so-called ‘quantum mysteries’ will disappear.

2 Counterfactual Definiteness

Suppose we have an operator \(\mathcal{O}^{\mathrm{op}}_{1}\) whose value cannot be measured since the value of another operator, \(\mathcal{O}_{2}^{\mathrm{op}}\), has been measured, while \([\mathcal{O}_{1}^{\mathrm{op}},\mathcal{O}_{2}^{\mathrm{op}}]\ne0\). Counterfactual definiteness is the assumption that, nevertheless, the operator \(\mathcal{O}^{\mathrm{op}}_{1}\) takes some value, even if we don’t know it. It is often assumed that hidden variable theories imply counterfactual definiteness. We emphasize categorically that no such assumption is made in the Cellular Automaton Interpretation. In this theory, the operator \(\mathcal{O}_{2}^{\mathrm{op}}\), whose value has been measured, apparently turned out to be composed of ontological observables (beables). Operator \(\mathcal{O}_{1}^{\mathrm{op}}\) is, by definition, not ontological and therefore has no well-defined value, for the same reason why, in the planetary system, the Earth–Mars interchange operator, whose eigenvalues are \(\pm1\), has neither of these values; it is unspecified, in spite of the fact that planets evolve classically, and in spite of the fact that the Copenhagen doctrine would dictate that these eigenvalues are observable!
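
A two-state toy version of this interchange operator (our illustration; the labels are hypothetical): on the classical configurations \(|EM\rangle\) and \(|ME\rangle\), the swap operator does have eigenvalues \(\pm1\), but its eigenvectors are superpositions of the two ontological states, so neither eigenvalue is ever ‘taken’.

```python
import numpy as np

# The interchange ('swap') operator on the two classical configurations
# |EM> and |ME>: it exchanges the two planets.
swap = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

vals, vecs = np.linalg.eigh(swap)
print(vals)  # [-1.  1.]  -- the eigenvalues Copenhagen would call observable
print(vecs)  # columns ~ (|EM> -/+ |ME>)/sqrt(2): superpositions, hence not beables
```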

The tricky thing about the CAI when applied to atoms and molecules is that one often does not know a priori which of our operators are beables and which are changeables or superimposables (as defined in Sects. 2.1.1 and 5.5.1). One only knows this a posteriori, and one might wonder why this is so. We are using templates to describe atoms and molecules, and these templates give us such a thoroughly mixed-up view of the complete set of observables that we are left in the dark until someone decides to measure something.

It looks as if the simple act of a measurement sends a signal backwards in time and/or with superluminal speed to other parts of the universe, to inform observers there which of their observables can be measured and which cannot. Of course, that is not what happens. What happens is that now we know which quantities can be measured accurately and which measurements will give uncertain results. The Bell and CHSH inequalities are violated just as they should be in quantum field theory, while quantum field theory nevertheless forbids sending signals faster than light or back to the past.
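
For reference, the violation itself is easy to exhibit in standard quantum mechanics. The sketch below (a textbook check, with the standard optimal angles; the code is our illustration) evaluates the CHSH combination in a singlet state and reaches the Tsirelson value \(2\sqrt{2}\), beyond the classical bound of 2.

```python
import numpy as np

def obs(theta):
    # Spin measurement along angle theta in the x-z plane:
    # cos(theta)*sigma_z + sin(theta)*sigma_x, eigenvalues +/- 1.
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def E(a, b):
    # Correlation <A(a) x B(b)> in the singlet state; equals -cos(a - b).
    return singlet @ np.kron(obs(a), obs(b)) @ singlet

# Standard optimal settings for Alice (a, a') and Bob (b, b'):
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(S)   # about -2.828, i.e. |S| = 2*sqrt(2) > 2 (the CHSH bound)
```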

3 Superdeterminism and Conspiracy

Superdeterminism may be defined to imply not only that all physical phenomena are direct consequences of physical laws that leave nothing anywhere to chance (which we refer to as ‘determinism’), but also that the observers themselves behave in accordance with the same laws. They, too, cannot perform any whimsical act without a cause in the near past as well as in the distant past. By itself, this statement is so obvious that little discussion would be required to justify it; what makes it more special is that it makes a difference. The fact that an observer cannot reset his or her measuring device without changing physical states in the past is usually thought not to be relevant for our description of physical laws. The CAI claims that it is. Further explanations were given in Sect. 3.8, where we attempted to demystify ‘free will’.

It is often argued that, if we want any superdeterministic phenomenon to lead to violations of the Bell-CHSH inequalities, this would require conspiracy between the decaying atom observed and the processes occurring in the minds of Alice and Bob, of a kind that should not be tolerated in any decent theory of natural law. The whole idea that a natural mechanism could exist that drives Alice’s and Bob’s behaviour is often found difficult to accept.

In the CAI, however, natural law forbids the emergence of states in which beables are superimposed. Neither Alice nor Bob will ever be able to produce such states by rotating their polarization filters. Indeed, the states their minds are in are ontological in terms of the beables, and they will not be able to change that.

Superdeterminism has to be viewed in relation to correlations over space-like distances. We claim not only that there are correlations, but that the correlations are extremely strong. The state we call the ‘vacuum state’ is full of correlations. Quantum field theory (to be discussed in Part II, Sect. 20) must be a direct consequence of the underlying ontological theory, and it explains these correlations. All 2-particle expectation values, called propagators, are non-vanishing both inside and outside the light cone. The many-particle expectation values are non-vanishing there as well; indeed, by analytic continuation, these amplitudes are seen to turn into the complete scattering matrix, which encapsulates all laws of physics that are implied by the theory. In Sect. 3.6, it is shown how a 3-point correlation can, in principle, generate the violation of the CHSH inequality, as required by quantum mechanics.

In view of these correlation functions, and the law that says that beables will never superimpose, we now suspect that this law forbids Alice and Bob from both changing their minds in such a way that these correlation functions would no longer hold.

3.1 The Role of Entanglement

We do not claim that these should be the last words on Bell’s interesting theorem. Indeed, we could rephrase our observations in a somewhat different way. The reason why the standard, Copenhagen theory of quantum mechanics violates Bell is simply that the two photons (or electrons, or whatever particles are being considered) are quantum entangled. In the standard theory, it is assumed that Alice may change her mind about her setting without affecting the photon that is on its way to Bob. Is this still possible if Alice’s photon and Bob’s photon are entangled?

According to the CAI, we have been using templates to describe the entangled photons Alice and Bob are looking at, and this leads to the violation of the CHSH inequalities. In reality, these templates reflect the relative correlations of the ontological variables underlying these photons. To describe entangled states as beables, their correlations are essential. We assume that this is the case when particles decay into entangled pairs, since the decay has to be attributed to vacuum fluctuations (see Sect. 5.7.5), and the vacuum itself cannot be a single, translation-invariant ontological state.

Resetting Alice’s experiment without making changes near Bob would lead to a state that, in quantum mechanical terms, is not orthogonal to the original one, and therefore not ontological. The fact that the new state is not orthogonal to the previous one is quite in line with the standard quantum mechanical descriptions; after all, Alice’s photon was replaced by a superposition.

The question remains how this could be. If the cellular automaton stays as it is near Bob, why is the ‘counterfactual’ state not orthogonal to it? The CAI says so because the classical configuration of Alice’s apparatus has changed, and we stated that any change in the classical setting leads to a change in the ontological state, which is a transition to an orthogonal vector in Hilbert space.

We cannot exclude the possibility that the apparent non-locality in the ontic–template mapping is related to the difficulty of identifying the right Hamiltonian for the standard quantum theory in terms of the ontic states. We should find a Hamiltonian that is the integral of a local Hamiltonian density. Strictly speaking, there may be a complication here: the Hamiltonian we are using is often an approximation, a very good one, but it ignores some subtle non-local effects. This will be further explained in Part II.

3.2 Choosing a Basis

Some physicists of previous generations thought that distinguishing different basis sets is very important. Are particles ‘real particles’ in momentum space or in configuration space? Which are the ‘true’ variables of electromagnetism, the photons or the electric and magnetic fields? The modern view is that any basis serves our purposes as well as any other, since, usually, none of the conventionally chosen basis spaces is truly ontological.

In this respect, Hilbert space should be treated as any ordinary vector space, such as the space in which the coordinates of the planets in our planetary system are defined. It makes no difference which coordinate frame we use. Should the \(z\)-axis be orthogonal to the local surface of the earth? Parallel to the earth’s axis of rotation? Orthogonal to the ecliptic? Of course, the choice of coordinates is immaterial. This is exemplified by Dirac’s notation, which is extremely general and as such quite suitable for the discussions presented in this book.

But then, after having declared all sets of basis elements to be as good as any other, in as far as they describe some familiar physical process or event, we venture to speculate that a more special basis choice can be made: a basis in which all super-microscopic physical observables are beables. These are the observable features at the Planck scale, and they must all be diagonal in this basis. Moreover, in this basis, the wave function consists only of zeros and a single one: it is ontological. This special basis, called the ‘ontological basis’, is hidden from us today, but it should be possible, in principle, to identify it. This book is about the search for such a basis. It is not the basis of the particles, of the fields of the particles, or of atoms or molecules, but something considerably more difficult to put our hands on.

We emphasize that all classical observables, describing stars and planets, automobiles, people, and eventually pointers on detectors, will also be diagonal in the same ontological basis. This is of crucial importance, and it was explained in Sect. 4.2.

3.3 Correlations and Hidden Information

An essential element in our analysis may be the observations expressed in Sect. 3.7.1. It was noted that the details of the ontological basis must carry crucial information about the future, yet in a concealed manner: the information is non-local. It is a simple fact that, in all of our models, the ontological nature of the state the universe is in will be conserved in time: once we are in a well-defined ontological state, as opposed to a template, this feature will be preserved in time (the ontology conservation law). It prevents Alice and Bob from choosing any setting that would ‘measure’ a template that is not ontological. Thus, this feature prevents counterfactual definiteness. Even the droppings of a mouse cannot disobey this principle.

In adopting the CAI, we accept the idea that all events in this universe are highly correlated, but in a way that we cannot exploit in practice. It would be fair to say that these features still carry a sense of mystery that needs to be investigated more, but the only way to do this is to search for more advanced models.

4 The Importance of Second Quantization

We realize that, in spite of all observations made in this book, our arguments would gain much more support if more specific models could be constructed in which solid calculations support our findings. We do have specific models, but so far these have been less than ideal for explaining our points. What would really be needed is a model, or a set of models, that obviously obey quantum mechanical equations while being classical and deterministic in their set-up. Ideally, we should find models that reproduce relativistic, quantized field theories with interactions, such as the Standard Model.

The use of second quantization will be explained further in Sects. 15.2.3 and 22.1 of Part II. We start with the free Hamiltonian and insert the interactions at a later stage, a procedure that is normally assumed to be possible using a perturbation expansion. The trick used here is that the free-particle theory will be manifestly local, and the interactions, represented by rare transitions, will be local as well. The interactions are introduced by postulating new transitions that create or annihilate particles. All terms in this expansion are local, so we have a local Hamiltonian. To handle the complete theory, one then has to carry out the full perturbation expansion. Obeying all the rules of quantum perturbation theory, we should obtain a description of the entire, interacting model in quantum terms. Indeed, we should reproduce a genuine quantum field theory this way.
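
The structure referred to here is the standard one of perturbative quantum field theory (textbook material, not a result special to our models): a local free part plus local interaction terms, with the evolution operator expanded as a Dyson series whose terms are built entirely from local operators,

$$\begin{aligned} H=\int \mathrm{d}^{3}\vec{x}\, \bigl(\mathcal{H}_{0}(\vec{x})+\lambda\, \mathcal{H}_{\mathrm{int}}(\vec{x}) \bigr),\qquad U_{I}(t)=\sum_{k=0}^{\infty}\frac{(-i\lambda)^{k}}{k!} \int_{0}^{t}\mathrm{d}t_{1}\cdots \int_{0}^{t}\mathrm{d}t_{k}\,T \bigl(H_{\mathrm{int}}(t_{1})\cdots H_{\mathrm{int}}(t_{k}) \bigr), \end{aligned}$$

where \(T\) denotes time ordering and \(H_{\mathrm{int}}(t)\) is the interaction term in the interaction picture. Each order in \(\lambda\) only involves the local operators \(\mathcal{H}_{\mathrm{int}}(\vec{x})\), which is what keeps the expansion local term by term.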

Is this really true? Strictly speaking, the perturbation expansion does not converge, as is explained also in Sect. 22.1. However, we can argue that this is a normal situation in quantum field theory. Perturbation expansions formally always diverge, yet they are the best we have; indeed, they allow us to do extremely accurate calculations in practice. Therefore, reproducing these perturbative expansions, regardless of how well they converge, is all we need to do in our quantum theories.

A fundamentally convergent expression for the Hamiltonian does exist, but it is entirely different. The differences are in non-local terms, which we normally do not observe. Look at the exact expression, Eq. (2.8), first treated in Sect. 2.1 of Chap. 2:

$$\begin{aligned} H_{\mathrm{op}}\delta t=\pi-i\sum_{n=1}^{\infty}{1\over n} \bigl(U_{\mathrm{op}}(n\delta t)-U_{\mathrm{op}}(-n \delta t) \bigr). \end{aligned}$$
(10.1)

This converges, except for the vacuum state itself. Low energy states, states very close to the vacuum, are the states where convergence is excessively slow. Consequently, as was explained before, terms that are extremely non-local sneak in.
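
This behaviour is easy to check numerically. In the sketch below (our own check, with \(\delta t=1\)), Eq. (10.1) is evaluated on a single energy eigenstate, where \(U_{\mathrm{op}}(n)\) acts as \(e^{-in\omega}\); the partial sums approach \(\omega\) for \(0<\omega<2\pi\), but at fixed truncation the error grows as \(\omega\) approaches the vacuum value \(\omega=0\).

```python
import numpy as np

def H_partial(omega, N):
    # Partial sum of Eq. (10.1) on an eigenstate, where U(n) -> exp(-i n omega):
    #   H_N = pi - i * sum_{n=1}^{N} (1/n) * (exp(-i n omega) - exp(+i n omega))
    n = np.arange(1, N + 1)
    return np.pi - 1j * np.sum((np.exp(-1j * n * omega) - np.exp(1j * n * omega)) / n)

N = 1000
for omega in (2.0, 0.5, 0.05):
    approx = H_partial(omega, N).real
    print(f"omega = {omega:4.2f}   partial sum = {approx:7.4f}   error = {approx - omega:+.4f}")
# The error at fixed N grows markedly as omega -> 0: near the vacuum,
# ever more terms of the (increasingly non-local) series are needed.
```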

This does not mean that the cellular automaton would be non-local; it is as local as it can be. It does mean, however, that if we wish to describe it with infinite precision in quantum mechanical terms, the Hamiltonian may generate non-localities. One can view these non-localities as resulting from the fact that our Hamiltonian replaces time difference equations, linking instants separated by integer multiples of \(\delta t\), by differential equations in time; the states in between the well-defined time spots are necessarily non-local functions of the physically relevant states.

One might argue that there should be no reason to try to fill the gaps between integer time steps, but then there would not exist an additive energy function, which we need to stabilize the solutions of our equations. Possibly, we have to use a combination of a classical, integer-valued Hamilton function (Chap. 19 of Part II) and the periodically defined Hamiltonian linking only integer-valued time steps, but exactly how to do this correctly is still under investigation. We have not yet found the best possible approach towards constructing the desired Hamilton operator for our template states. The second-quantized theory that will be further discussed is presently our best attempt, but we have not yet been able to reproduce the quite complex symmetry structure of the Standard Model, in particular Poincaré invariance.

As long as we have no explicit model that, to the extent needed, reproduces Standard Model-like interactions, we cannot verify the correctness of our approach. Lacking that, what has to be done next is to carry out calculations in models that are as realistic as possible.

Possibilities for experimental checks of the CA theory, unfortunately, are few and far between. Our primary goal was not to change the quantum laws of physics, but rather to obtain a more transparent description of what it is that is actually going on. This should help us to construct models of physics, particularly at the Planck scale, and if we succeed, these models should be experimentally verifiable. Most of our predictions are negative: there will be no observable departure from Schrödinger’s equation or from Born’s probability expression.

A more interesting negative prediction concerns quantum computers, as was explained in Sect. 5.8. Quantum devices should, in principle, allow us to perform a very large number of calculations ‘simultaneously’, so that there are mathematical problems that can be solved much faster than with standard, classical computers. Then why have these quantum computers not yet been built? There seem to be some ‘technical difficulties’ standing in the way: the quantum bits that are employed (qubits) tend to decohere. The remedy expected to resolve such disturbing features is a combination of superior hardware (the qubits must be disconnected from the environment as well as possible) and software: error correction codes. And then, the quantum computer should be able to perform mathematical miracles.

According to the CA theory, our world is merely quantum mechanical because it would be too complicated to follow, even approximately, the cellular data at the length and time scale where they occur, most likely the Planck units. If we could follow these data, we could do so using classical devices. This means that a classical computer should be able to reproduce the quantum computer’s results if its memory cells and computing time were scaled to Planckian dimensions. No one can construct such classical computers, so quantum computers will indeed be able to perform miracles; but there are clear limits: the quantum computer will never outperform a classical computer scaled to Planckian dimensions. This would mean that numbers with millions of digits can never be decomposed into prime factors, even by a quantum computer. We predict that there will be fundamental limits to our ability to avoid decoherence of the qubits. This is a non-trivial prediction, but not one to be proud of.