
Of course we assume that the reader is familiar with the ‘Schrödinger representation’ as well as the ‘Heisenberg representation’ of conventional quantum mechanics. We switch between the two freely.

1 Time Reversible Cellular Automata

The cogwheel model, described in Sect. 2.2, can be mapped onto a Zeeman atom with equally spaced energy levels. It is the prototype of an automaton. All our deterministic models can be characterized as ‘automata’. A cellular automaton is an automaton where the data are imagined to form a discrete, \(d\)-dimensional lattice, in an \(n=d+1\) dimensional space–time. The elements of the lattice are called ‘cells’, and each cell can contain a limited amount of information. The data \(Q(\vec{x},t)\) in each cell \((\vec{x},t)\) could be represented by an integer, or a set of integers, possibly but not necessarily limited by a maximal value \(N\). An evolution law prescribes the values of the cells at time \(t+1\) if the values at time \(t\) (or \(t\) and \(t-1\)) are given (Footnote 1). Typically, the evolution law for the data in a cell at the space–time position

$$\begin{aligned} (\vec{x},t),\quad\vec{x}=\bigl(x^{1},x^{2},\ldots, x^{d}\bigr),\quad x^{i}, t\in\Bbb {Z}, \end{aligned}$$
(5.1)

will only depend on the data in neighbouring cells at \((\vec{x}\,', t-1)\) and possibly those at \((\vec{x},t-2)\). If, in this evolution law, \(\|\vec{x}\,'-\vec{x}\|\) is limited by some bound, then the cellular automaton is said to obey locality.

Furthermore, the cellular automaton is said to be time-reversible if the data in the past cells can be recovered from the data at later times, and if the rule for this is also a cellular automaton. Time reversibility can easily be guaranteed if the evolution law is assumed to be of the form

$$\begin{aligned} Q(\vec{x},t+1)=Q(\vec{x},t-1)+F(\vec{x},\{Q(t) \} ), \end{aligned}$$
(5.2)

where \(Q(\vec{x},t)\) represents the data at a given point \((\vec{x}, t)\) of the space–time lattice, and \(F(\vec{x},\{Q(t)\})\) is some given function of the data of all cells neighbouring the point \(\vec{x}\) at time \(t\). The \(+\) here stands for addition, addition modulo some integer \(N\), or some other simple, invertible operation in the space of variables \(Q\). Of course, we then have time reversibility:

$$\begin{aligned} Q(\vec{x},t-1)=Q(\vec{x},t+1)-F(\vec{x},\{ Q(t) \} ), \end{aligned}$$
(5.3)

where − is the inverse of the operation +.
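To make the structure of Eqs. (5.2) and (5.3) concrete, here is a minimal numerical sketch, not taken from this book: a ring of \(L\) cells with \(N\) states each, where the function \(F\) is chosen, purely as an assumption for illustration, to be the sum of the two nearest neighbours at time \(t\). Running the forward rule and then the inverse rule returns the initial data exactly.

```python
import numpy as np

# Illustrative sketch of the reversible update rule of Eqs. (5.2)-(5.3):
#   Q(x, t+1) = Q(x, t-1) + F(x, {Q(t)})   (mod N),
# with F chosen here (an assumption) as the sum of the two nearest neighbours.

N = 5          # number of states per cell; '+' is addition modulo N
L = 16         # number of cells on a ring (periodic boundary conditions)

def F(Q):
    """Local function of the data at time t: sum of the two neighbours."""
    return (np.roll(Q, 1) + np.roll(Q, -1)) % N

def step_forward(Q_prev, Q_now):
    """Eq. (5.2): the configuration at t+1 from those at t-1 and t."""
    return (Q_prev + F(Q_now)) % N

def step_backward(Q_now, Q_next):
    """Eq. (5.3): recover the configuration at t-1 from those at t and t+1."""
    return (Q_next - F(Q_now)) % N

rng = np.random.default_rng(0)
Q0, Q1 = rng.integers(0, N, L), rng.integers(0, N, L)   # data at t = 0 and t = 1

# Evolve forward a number of steps, then run the inverse rule back to the start.
history = [Q0, Q1]
for _ in range(10):
    history.append(step_forward(history[-2], history[-1]))

Qa, Qb = history[-2], history[-1]
for _ in range(10):
    Qa, Qb = step_backward(Qa, Qb), Qa   # step the pair (t-1, t) backwards

assert np.array_equal(Qa, Q0) and np.array_equal(Qb, Q1)   # time reversal works
```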

The simple cogwheel model allows for both a classical and a quantum mechanical description, without any modification of the physics. We can now do exactly the same thing for the cellular automaton. The classical states the automaton can be in are regarded as an orthonormal set of basis elements of an ontological basis. The evolution operator \(U_{\mathrm{op}}(\delta t)\) for one single time step of duration \(\delta t\) is a unitary operator, so that all its eigenvalues are unimodular phases, with eigenstates \(|E_{i}\rangle\):

$$\begin{aligned} U_{\mathrm{op}}(\delta t)|E_{i}\rangle=e^{-i\omega _{i}}|E_{i} \rangle, \quad0\le\omega_{i}< 2\pi, \end{aligned}$$
(5.4)

and one can find an operator \(H_{\mathrm{op}}\) such that

$$\begin{aligned} U_{\mathrm{op}}(\delta t)=e^{-iH_{\mathrm{op}}\delta t},\quad0\le H_{\mathrm{op}}< 2 \pi/\delta t. \end{aligned}$$
(5.5)

However, one is free to add integer multiples of \(2\pi/\delta t\) to any of the eigenvalues of this Hamiltonian without changing the expression for \(U_{\mathrm{op}}\), so that there is a lot of freedom in the definition of \(H_{\mathrm{op}}\). One may add arbitrary phase angles to the eigenstates, \(|E_{i}\rangle\rightarrow e^{i\varphi_{i}}|E_{i}\rangle\), and these modifications may also depend on possible conserved quantities. Clearly, one can modify the Hamiltonian quite a bit without damaging its usefulness as the generator of the evolution operator \(U_{\mathrm{op}}\). In Sect. 2.2.2, it is demonstrated how quite complex energy spectra can emerge this way in relatively simple generalizations of the cogwheel model.
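The following sketch (a 5-state cogwheel with \(\delta t=1\); both choices are assumptions made only for illustration) shows how the permutation operator of a deterministic automaton defines a unitary \(U_{\mathrm{op}}\), how the eigenphases of Eqs. (5.4)–(5.5) yield one possible Hamiltonian, and how shifting any eigenvalue by a multiple of \(2\pi/\delta t\) leaves \(U_{\mathrm{op}}\) unchanged.

```python
import numpy as np

# Illustrative sketch: a 5-state cogwheel as a permutation matrix, and the
# freedom in the Hamiltonian of Eqs. (5.4)-(5.5). Parameters are assumptions.

n, delta_t = 5, 1.0
U = np.roll(np.eye(n), 1, axis=0)          # one time step: state k -> state k+1 (mod n)

w, V = np.linalg.eig(U)                    # U V = V diag(w), with |w_i| = 1
omega = (-np.angle(w)) % (2 * np.pi)       # U|E_i> = e^{-i omega_i}|E_i>, Eq. (5.4)
E = omega / delta_t                        # one valid choice of H_op eigenvalues, Eq. (5.5)

# Shifting any eigenvalue by an integer multiple of 2*pi/delta_t leaves U_op unchanged:
E_shifted = E + (2 * np.pi / delta_t) * np.arange(n)
U_rebuilt = V @ np.diag(np.exp(-1j * E_shifted * delta_t)) @ np.linalg.inv(V)
print(np.allclose(U_rebuilt, U))           # True: many Hamiltonians generate the same U
```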

Modifications of this sort in the Hamiltonian may well be needed if we wish to reflect the locality property of the cellular automaton in the Hamiltonian:

$$\begin{aligned} H_{\mathrm{op}}\,{\stackrel{{?}}{=}}\,\sum_{\vec{x}} \mathcal{H}_{\mathrm{op}}(\vec{x}),\qquad\bigl[ \mathcal{H}_{\mathrm{op}} \bigl(\vec{x}\,'\bigr),\mathcal{H}_{\mathrm{op}}(\vec{x})\bigr] \rightarrow0\quad \hbox{if }\|\vec{x}\,'-\vec{x}\|\gg1, \end{aligned}$$
(5.6)

but it is an important mathematical question whether a Hamiltonian obeying Eq. (5.6) can be constructed even if the classical cellular automaton is local in the sense described above. This problem is discussed further in Sect. 5.6.1, in Sect. 9.1, and in Part II, Chaps. 14 and 22. There, we shall see that the situation can become quite complex. Very good approximations may exist for certain cellular automaton systems with possibly some modified locality properties and a Hamiltonian that approximately obeys a locality principle as in Eq. (5.6).

Note that a Hamiltonian that obeys (5.6), in combination with Poincaré invariance, will correspond to a fully renormalized quantum field theory, with all of its complexities, and this ensures that finding a complete mathematical solution to the problems sketched here will not be easy.
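As a purely illustrative check of what Eq. (5.6) demands (this is not the construction of Part II), one can build Hamiltonian densities with bounded support on a small spin chain and verify that densities with non-overlapping support commute exactly, while overlapping ones generically do not. The chain length and couplings below are arbitrary assumptions.

```python
import numpy as np
from functools import reduce

# Illustrative check of the locality requirement (5.6) on a 6-site spin chain.

L = 6
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site):
    """Place a single-site operator at `site` of the L-site chain."""
    factors = [I2] * L
    factors[site] = op
    return reduce(np.kron, factors)

def H_density(x):
    """A two-site density, supported on sites x and x+1 (an assumed toy model)."""
    return embed(sz, x) @ embed(sz, (x + 1) % L) + embed(sx, x)

def comm(a, b):
    return a @ b - b @ a

print(np.allclose(comm(H_density(0), H_density(3)), 0))  # True: disjoint support
print(np.allclose(comm(H_density(0), H_density(1)), 0))  # False: overlapping support
```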

2 The CAT and the CAI

The Cellular Automaton Theory (CAT) assumes that, once a universal Schrödinger equation has been identified that encapsulates all conceivable phenomena in the universe (a Grand Unified Theory, or a Theory of Everything), it will feature an ontological basis that maps the system onto a classical automaton. It is possible, indeed quite likely, that the true automaton of the universe will be more complex than an ‘ordinary’ cellular automaton as sketched here, but it may well share some of its main characteristics. The extent to which this is so will have to be sorted out by further research.

How the symmetries of Nature will be reflected in these classical rules is also difficult to foresee; in particular, it is difficult to imagine how Lorentz invariance and diffeomorphism invariance can be realized in them. Probably, they will refer to more general quantum basis choices. This we shall assume for the time being; see Part II, Chap. 18, on symmetries.

This theory then does seem to be what is usually called a ‘hidden variable theory’. Indeed, in a sense, our variables are hidden; if symmetry transformations exist that transform our basis into another one, which diagonalizes different operators, then it will be almost impossible for us to tell which of these is the ‘true’ ontological basis, and so we will have different candidates for the ‘hidden variables’, which will be impossible to distinguish in practice.

The Cellular Automaton Interpretation (CAI) [104] takes it for granted that this theory is correct, even if we will never be able to explicitly identify any ontological basis. We assume that the templates presently in use in quantum mechanics are to be regarded as superpositions of ontological states, and that the classical states that describe outcomes of observations and measurements are classical distributions of ontological states. If two classical states are distinguishable, their distributions have no ontological state in common (see Fig. 4.1b). The universe is in a single ontological state, never in a superposition of such states, but, whenever we use our templates (that is, when we perform a conventional quantum mechanical calculation), we use superpositions just because they are mathematically convenient. Note that, because the Schrödinger equation is linear, our templates obey the same Schrödinger equation as the ontological states do.

In principle, the transformation from a conventional quantum basis to an ontological basis may be quite complex. Only the eigenvalues of the Hamiltonian will be unaffected, and also in deterministic models, these can form quite general spectra, see Fig. 2.3c. In group theory language, the Hamiltonians we obtain by transforming to different basis choices, will form one single conjugacy class, characterized by the set of eigenvalues.
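A minimal numerical illustration of this conjugacy-class statement, with a random Hermitian matrix standing in for the Hamiltonian (all choices below are assumptions), is that a unitary change of basis \(H\rightarrow GHG^{-1}\), as in Eq. (5.7) below, leaves the spectrum untouched:

```python
import numpy as np

# Illustrative sketch: all Hamiltonians related by a unitary change of basis
# form one conjugacy class, characterized by the same set of eigenvalues.

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2                       # a random Hermitian "Hamiltonian"
G, _ = np.linalg.qr(rng.normal(size=(6, 6))
                    + 1j * rng.normal(size=(6, 6)))   # a random unitary matrix
H_tilde = G @ H @ G.conj().T                   # the same Hamiltonian in another basis

print(np.allclose(np.linalg.eigvalsh(H),
                  np.linalg.eigvalsh(H_tilde)))   # True: identical spectrum
```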

Eventually, our quantum system should be directly related to some quantum field theory in the continuum limit. We describe quantum field theories in Part II, Chap. 20. At first sight, it may seem obvious that the Hamiltonian should take the form of Eq. (5.6), but we have to remember that the Hamiltonian definitions (2.8), (2.18), (2.26) and the expressions illustrated in Fig. 2.3 are only well-defined modulo \(2\pi/\delta t\), so that, when different, non-interacting systems are combined, their Hamiltonians do not necessarily have to add up.

These expressions do show that the Hamiltonian \(H_{\mathrm{op}}\) can be chosen in many ways, since one can add any operator that commutes with \(H_{\mathrm{op}} \). Therefore it is reasonable to expect that a Hamiltonian obeying Eq. (5.6) can be defined. An approach is explained in much more detail in Part II, Chap. 21, but some ambiguity remains, and convergence of the procedure, even if we limit ourselves to low energy states, is far from obvious.

What we do see is this: a Hamiltonian obeying Eq. (5.6) is a sum of terms, each of which is finite and bounded from below as well as from above. Such a Hamiltonian must have a ground state, that is, an eigenstate \(|\emptyset\rangle\) with the lowest eigenvalue, which can be normalized to zero. This eigenstate should be identified with the ‘vacuum’. This vacuum is stationary, even if the automaton itself may have no stationary solution. The next-to-lowest eigenstate may be a one-particle state. In the Heisenberg picture, the fields \(Q(\vec{x},t)\) will behave as operators \(Q_{\mathrm{op}}(\vec{x},t)\) when we pass on to a basis of template states, and as such they may create one-particle states out of the vacuum. Thus, we arrive at something that resembles a genuine quantum field theory. The states are quantum states in complete accordance with a Copenhagen interpretation. The fields \(Q_{\mathrm{op}}(\vec{x},t)\) should obey the Wightman axioms [55, 79]. Quantum field theories will be discussed further in Chap. 20.

However, if we start from just any cellular automaton, there are three ways in which the resulting theory will differ from conventional quantum field theories. One is, of course, that space and time are discrete. Well, maybe there is an interesting ‘continuum limit’, in which the particle masses are considerably smaller than the inverse of the time quantum, but, unless our models are chosen very carefully, this will not be the case.

Secondly, the generic cellular automaton will not even remotely be Lorentz invariant. Not only will the one-particle states fail to exhibit Lorentz invariance, or even Galilei invariance; the states where particles may be moving with respect to the vacuum state will be completely different from the static one-particle states. Also, rotation symmetry will be reduced to some discrete lattice rotation group if anything at all. So, the familiar symmetries of relativistic quantum field theories will be totally absent.

Thirdly, it is not clear whether the cellular automaton should be associated with a single quantum model or with many inequivalent ones. The addition or removal of other conserved operators in \(H_{\mathrm{op}}\) is akin to the addition of chemical potential terms. In the absence of Lorentz invariance, it will be difficult to distinguish the different types of ‘vacuum’ states one thus obtains.

For all these reasons, most cellular automaton models will be very different from the quantized field theories for elementary particles. The main issue discussed in this book, however, is not whether it is easy to mimic the Standard Model in a cellular automaton, but whether one can obtain quantum mechanics, and something resembling quantum field theory, at least in principle. The origin of the continuous symmetries of the Standard Model will stay beyond what we can handle in this book, but we can discuss the question to what extent cellular automata can be used to approximate and understand the quantum nature of this world. See our discussion of symmetry features in Part II, Chap. 18.

As will be explained later, it may well be that invariance under general coordinate transformations will be a crucial ingredient in the explanation of continuous symmetries, so it may well be that the ultimate explanation of quantum mechanics will also require the complete solution of the quantum gravity problem. We cannot pretend to have solved that.

Many of the other models in this book will be explicitly integrable. The cellular automata that we started off with in the first section of this chapter illustrate that our general philosophy also applies to non-integrable systems. It is generally believed, however, that time-reversible cellular automata can be computationally universal [37, 61]. This means that any such automaton can be arranged in special subsets of states that obey the rules of any other computationally universal cellular automaton. One would be tempted to argue, then, that any computationally universal cellular automaton can be used to mimic systems as complicated as the Standard Model of the subatomic particles. However, in that case, being physicists, we would ask for one single special model that is more efficient than any other, so that any choice of initial state in this automaton describes a physically realizable configuration.

3 Motivation

It is not too far-fetched to expect that, one day, quantum gravity will be completely solved, that is, a concise theory will be phrased that gives an air-tight description of the relevant physical degrees of freedom, and a simple model will be proposed that shows how these physical variables evolve. We might not even need a conventional time variable for that, but what we do need is an unambiguous prescription telling us how the physical degrees of freedom will look in a region of space–time that lies in the future of a region described at an earlier time.

A complete theory explaining quantum mechanics can probably not be formulated without also addressing quantum gravity, but what we can do is formulate our proposal, and establish the language that will have to be employed. Today, our description of molecules, atoms, fields and relativistic subatomic particles is interspersed with wave functions and operators. Our operators do not commute with operators describing other aspects of the same world, and we have learned not to be surprised by this, but just to choose a set of basis elements as we please, guess a reasonable looking Schrödinger equation, and calculate what we should find when we measure things. We were told not to ask what reality is, and this turned out to be useful advice: we calculate, and we observe that the calculations make sense. It is unlikely that any other observable aspects of fields and particles can ever be calculated; it will never be more than what we can derive from quantum mechanics. For example, given a radioactive particle, we cannot calculate exactly at what moment it will decay. The decay is controlled by a form of randomness that we cannot control, and this randomness seems to be far more perfect than what can be produced using programmed pseudo-random sequences. We are told to give up any hope of outsmarting Nature in this respect.

The Cellular Automaton Interpretation (CAI) suggests to us what it is that we actually do when we solve a Schrödinger equation. We thought that we were following an infinite set of different worlds, each with some given amplitude, and that the final events we deduce from our calculations depend on what happens in all these worlds. This, according to the CAI, is an illusion. There is no infinity of different worlds, there is just one, but we are using the “wrong” basis to describe it. The word “wrong” is used here not to criticize the founding fathers of quantum mechanics, who made marvellous discoveries, but to repeat what they of course also found out, which is that the basis they were using is not an ontological one. The terminology used to describe that basis does not disclose to us exactly how our world, our single world, ‘actually’ evolves in time.

Many other proposed interpretations of quantum mechanics exist. These either regard the endless number of different worlds all to be real, or they require some sort of modification, mutilation rather, to understand how a wave function can collapse to produce measured values of some observable without allowing for mysterious superpositions at the classical scale.

The CAI proposes to use the complete mathematical machinery that has been developed over the years to address quantum mechanical phenomena. It includes exactly the Copenhagen prescriptions to translate the calculations into precise predictions when experiments are done, so, at this point, definitely no modifications are required.

There is one important way in which we deviate from Copenhagen however. According to Copenhagen, certain questions are not to be asked:

Can it be that our world is just a single world where things happen, according to evolution equations that might be fundamentally simpler than Schrödinger’s equation, and are there ways to find out about this? Can one remove the element of statistical probability distributions from the basic laws of quantum mechanics?

According to Copenhagen, there exist no experiments that can answer such questions, and therefore it is silly even to ask them. But what I have attempted to show in this work is that not experimentally, but theoretically, we may find answers. We may be able to identify models that describe a single classical world, even if forbiddingly complex compared to what we are used to, and we may be able to identify its physical degrees of freedom with certain quantum variables that we already know about.

The cellular automaton described in the preceding sections would be the prototype example; it is complicated, yet quite possibly not complicated enough. It has symmetries, but in the real world there are much larger symmetry groups, such as the Lorentz or Poincaré group, showing relations between different kinds of events, and these are admittedly difficult to implement. The symmetry groups, think of space–time translation symmetry, may actually be at the roots of the mysterious features that were found in our quantum world, so that these may have natural explanations.

Why should we want just a single world with classical equations describing its evolution? What’s wrong with obeying Copenhagen’s dictum about not asking questions, if they will not be experimentally accessible anyway?

According to the author, there are overwhelming motivations: if a classical model does exist, it will tremendously simplify our view of the world, and it will help us once and for all to understand what really happens when a measurement is made and a wave function ‘collapses’. It will explain the Schrödinger cat paradox once and for all.

Even more importantly, the quantum systems that allow for a classical interpretation form an extremely tiny subset of all quantum models. If it is indeed true that our world falls in that class, which one may consider to be likely after having read all this, then this restricts our set of allowable models so much that it may enable us to guess the right one, a guess that otherwise could never have been made. So indeed, what we are really after is a new approach towards guessing what the ‘Theory of the World’ is. We strongly suspect that, without this superb guide, we will never even come close. Thus, our real motivation is not to be able to better predict the outcomes of experiments, which may not happen soon, but rather to predict which class of models we should scrutinize to find out more about the truth.

Let us emphasize once more that this means that the CAI/CAT will primarily be of importance if we wish to decipher Nature’s laws at the most fundamental time- and distance scales, that is, the Planck scale. Thus, an important new frontier, where the empire of the quantum meets the classical world, is proclaimed to be near the Planckian dimensions. As we also brought forward repeatedly, the CAI requires a reformulation of our standard quantum language also when describing the other important border: that between the quantum empire and the ‘ordinary’ classical world at distance scales larger than the sizes of atoms and molecules.

The CAI actually has more in common with the original Copenhagen doctrine than many other approaches, as will be explained. It will do away with the ‘many worlds’ more radically than the De Broglie–Bohm interpretation. The CAI assumes the existence of one or more models of Nature that have not yet been discovered. We do discuss many toy models. These toy models are not good enough to come anywhere close to the Standard Model, but there is reason to hope that one day such a model will be found. In any case, the CAI will apply only to a tiny subclass of all quantum mechanical models usually considered to explain the observed world, and for this reason we hope it might be helpful to pin down the right procedure to arrive at the correct theory.

Other models were presented in this work just to display the set of tools that one might choose to use.

3.1 The Wave Function of the Universe

Standard quantum mechanics can confront us with a question that appears to be difficult to answer: Does the universe as a whole have a wave function? Can we describe that wave function? Several responses to this question can be envisaged:

(1) I do not know, and I do not care. Quantum mechanics is a theory about observations and measurements. You can’t measure the entire universe.

(2) I do care, but I do not know. Such a wave function might be so much entangled that it will be impossible to describe. Maybe the universe has a density matrix, not a wave function.

(3) The universe has no fixed wave function. Every time an observation or measurement is made, the wave function collapses, and this collapse phenomenon does not follow any Schrödinger equation.

(4) Yes, the universe has a wave function. It obeys the Schrödinger equation, and at all times the probability that some state \(|\psi \rangle\) is realized is given by the absolute square of the inner product. Whenever any ‘collapse’ takes place, it always obeys the Schrödinger equation.

The agnostic answers (1) and (2) are of course scientifically defensible. We should limit ourselves to observations, so don’t ask silly questions. However, they do seem to admit that quantum mechanics may not have universal validity. It does not apply to the universe as a whole. Why not? Where exactly is the limit of the validity of quantum mechanics? The ideas expressed in this work were attacked because they allegedly do not agree with observations, but all observations ever made in atomic and subatomic science appear to confirm the validity of quantum mechanics, while the answers (1) and (2) suggest that quantum mechanics should break down at some point. In the CAI, we assume that the mathematical rules for the application of quantum mechanics have absolute validity. This, we believe, is not a crazy assumption.

In the same vein, we also exclude answer (3). The collapse should not be regarded as a separate axiom of quantum mechanics, one that would invalidate the Schrödinger equation whenever an observation, a measurement, and hence a collapse takes place. Therefore, according to our theory, the only correct answer is answer (4). An apparent problem with this would be that the collapse would require ‘conspiracy’, a very special choice of the initial state, because otherwise we might accidentally arrive at wave functions that are quantum superpositions of different collapsed states. This is where the ontological basis comes to the rescue. If the universe is in an ontological state, its wave function will collapse automatically, whenever needed. As a result, classical configurations, such as the positions and velocities of the planets, are always described by wave functions that are delta-peaked at these values of the data, whereas wave functions that are superpositions of planets in different locations will never be ontological.

The conclusion of this subsection is that, as long as we work with templates, our amplitudes are psi-epistemic, as they were in the Copenhagen view, but a psi-ontic wave function does exist: the wave function of the universe itself. It is epistemic and ontological at the same time.

Now, let us go back to Copenhagen, and formulate the rules. As the reader can see, in some respects we are even more conservative than the most obnoxious quantum dogmatists.

4 The Rules

As for the Copenhagen rules that we keep, we emphasise the ones most important to us:

(i) To describe a physical phenomenon, the use of any basis is as legitimate as any other. We are free to perform any transformation we like, and rephrase the Schrödinger equation, or rather, the Hamiltonian employed in it, accordingly. In each basis we may find a useful description of variables, such as positions of particles, or the values of their momenta, or the energy states they are in, or the fields of which these particles are the energy quanta. All these descriptions are equally ‘real’.

But none of the usually employed descriptions are completely real. We often see that superpositions occur, and the phase angles of these superpositions can be measured, occasionally. In such cases, the basis used is not an ontological one. In practice, we have learned that this is just fine; we use all of these different basis choices in order to have templates. We shall impose no restrictions on which template is ‘allowed’, or which of them may represent the truth ‘better’ than others. Since these are mere templates, reality may well emerge to be a superposition of different templates.

Curiously, even among the diehards of quantum mechanics, this was often thought not to be self-evident. “Photons are not particles, protons and electrons are”, was what some investigators claimed; photons, they said, must be regarded as mere energy quanta. They certainly thought it was silly to regard the phonon as a particle; it is merely the quantum of sound. Sometimes it is claimed that electric and magnetic fields are the ‘true’ variables, while photons are merely abstract concepts. There were disputes on whether a particle is a true particle in the position representation or in the momentum representation, and so on. We do away with all this. All basis choices are equivalent. They are nothing more than a coordinate frame in Hilbert space. As long as the Hamiltonian employed appears to be such that finite-time evolution operators turn diagonal operators (beables) into non-diagonal ones (superimposables), these basis choices are clearly non-ontological; none of them describes what is really happening. As for the energy basis, see Sect. 5.6.3.

Note that this means that it is not really the Hamiltonian that we will be interested in, but rather its conjugacy class. If a Hamiltonian \(H\) is transformed into a new Hamiltonian by the transformation

$$\begin{aligned} \tilde{H}=G H G^{-1}, \end{aligned}$$
(5.7)

where \(G\) is a unitary operator, then the new Hamiltonian \(\tilde {H}\), in the new basis, is just as valid as the previous one. In practice, we shall seek the basis, or its operator \(G\), that gives the most concise possible expression for \(\tilde{H}\).

(ii) Given a ket state \(|\psi\rangle\), the probability for the outcome of a measurement to be described by a given state \(|a\rangle\) in Hilbert space is exactly given by the absolute square of the inner product \(\langle a|\psi\rangle\).

This is the well-known Born rule. We shall never modify its mathematical form; only the absolute squares of the inner products can be used. However, there is a limitation in principle: the bra state \(\langle a|\) must be an ontological state. In practice, this is always the case: the bras \(\langle a|\) usually correspond to classical observations. The Born rule is often portrayed as a separate axiom of the Copenhagen Interpretation. In our view it is an inevitable consequence of the mathematical nature of quantum mechanics as a tool to perform calculations, see Sect. 4.3.

The most important point where we depart from Copenhagen is that we make some fundamental assumptions:

(a) We postulate the existence of an ontological basis. It is an orthonormal basis of Hilbert space that is truly superior to the basis choices that we are familiar with. In terms of an ontological basis, the evolution operator for a sufficiently fine mesh of time variables does nothing more than permute the states.

How exactly to define the mesh of time variables we do not know at present, and it may well become a subject of debate, particularly in view of the known fact that space–time has a general coordinate invariance built in. We do not claim to know how to arrive at this construction—it is too difficult. In any case, the system is expected to behave entirely as a classical construction. Our basic assumption is that a classical evolution law exists, dictating exactly how space–time and all its contents evolve. The evolution is deterministic in the sense that nothing is left to chance. Probabilities enter only if, due to our ignorance, we seek our refuge in some non-ontological basis.

(b) When we perform a conventional quantum mechanical calculation, we employ a set of templates for what we thought the wave function is like. These templates, such as the orthonormal set of energy eigenstates of the hydrogen atom, just happen to be the states for which we know how they evolve. However, they are in a basis that is a rather complicated unitary transformation of the ontological basis.

Humanity discovered that these templates obey Schrödinger equations, and we employ these to compute probabilities for the outcomes of experiments. These equations are correct to very high accuracies, but they falsely suggest that there is a ‘multiverse’ of many different worlds that interfere with one another. Today, these templates are the best we have.

(c) Very probably, there are several different choices for the ontological basis, linked to one another by Nature’s continuous symmetry transformations such as the elements of the Poincaré group, but possibly also by the local diffeomorphism group used in General Relativity. Only one of these ontological bases will be ‘truly’ ontological.

Which of them is truly ontological will be difficult or impossible to determine. The fact that we shall not be able to distinguish the different possible ontological bases will preclude the possibility of using this knowledge to perform predictions beyond the usual quantum mechanical ones. This was not our intention anyway. The motivation for this investigation has always been that we are searching for new clues for constructing models more refined than the Standard Model.

The symmetry transformations that link different (but often equivalent) ontological basis choices are likely to be truly quantum mechanical: operators that are diagonal in one of these ontological bases may be non-diagonal in another. However, in the very end we shall only use the ‘real’ ontological basis. This will be evident in axiom (e).

(d) Classical states are ontological, which means that classical observables are always diagonal in the ‘truly’ ontological basis.

This would be more difficult to ‘prove’ from first principles, so we introduce it indeed as an axiom. However, it seems to be very difficult to avoid: it is hard to imagine that two different classical states, whose future evolution will be entirely different in the end, could have non-vanishing inner products with the same ontological state.

(e) From the very beginning onwards, the Universe was, and it always will be, in a single, evolving ontological state. This means that not only the observables are diagonal in the ontological basis, but the wave function always takes the simplest possible form: it is one of the elements of the basis itself, so this wave function contains a single 1, with zeros everywhere else.

Note that this singles out the ‘true’ ontological basis from other choices, where the physical degrees of freedom can also be represented by ‘beables’, that is, operators that at all times commute. So, henceforth, we only refer to this one ‘true’ ontological basis as ‘the’ ontological basis.

Most importantly, the last two axioms completely solve the measurement problem [93], the collapse question and Schrödinger’s cat paradox. The argument is now simply that Nature is always in a single ontological state, and therefore it has to evolve into a single classical state.

5 Features of the Cellular Automaton Interpretation (CAI)

A very special feature of the CAI is that ontological states never form superpositions. From day and time zero onwards, the universe must have been in a single ontological state that evolves. This is also the reason why it never evolves into a superposition of classical states. Now remember that the Schrödinger equation is obeyed by the ontological states the universe may be in. This, then, is the reason why this theory automatically generates ‘collapsed wave functions’ that describe the results of a measurement, without ever parting from the Schrödinger equation. For the same reason, ontological states can never evolve into a superposition of a dead cat and a live cat. Regarded from this angle, it actually seems hard to see how any other interpretation of quantum mechanics could have survived in the literature: quantum mechanics by itself would have predicted that if states \(| \psi \rangle\) and \(| \chi \rangle\) can be used as initial states, so can the state \(\alpha| \psi \rangle +\beta| \chi \rangle\). Yet the superposition of a dead cat and a live cat cannot serve to describe the final state. If \(| \psi \rangle\) evolves into a live cat and \(| \chi \rangle\) into a dead one, then what does the state \(\alpha | \psi \rangle+\beta| \chi \rangle\) evolve into? The usual answers to such questions cannot be correct (Footnote 2).

The Cellular Automaton Interpretation adds some notions to quantum mechanics that do not have any distinguished meaning in the usual Copenhagen view. We introduced the ontological basis as being, in some sense, superior to any other choice of basis. One might naturally argue that this would be a step backwards in physics. Did we not emphasise, in Sect. 5.4, that all choices of basis are equivalent? Why would one choice stand out?

Indeed, what was stated in rule #i in Sect. 5.4 was that all basis choices are equivalent, but what we really meant was that all basis choices normally employed are equivalent. Once we adopt the Copenhagen doctrine, it does not matter anymore which basis we choose. Yet there is one issue in the Copenhagen formalism that has been heavily disputed in the literature and now is truly recognized as a weakness: the collapse of the wave function and the treatment of measurements. At these points the superposition axiom fails. As soon as we admit that one superior basis exists, this weakness disappears. All wave functions may be used to describe a physical process, but then we have to tolerate the collapse axiom if we do not work in the ontological basis.

The CAI allows us to use a basis that stands out. It stands out because, in this basis, we recognize wave functions that are special: the ontological wave functions. In the ontological basis, the ontological wave functions are the wave functions that correspond to the basis elements; each of these wave functions contains a single 1, with zeros everywhere else. The ontological wave functions exclusively evolve into ontological wave functions again. In this basis, there is no room for chance anymore, and superpositions can be completely avoided.

The fact that, also in the usual formalism of quantum mechanics, states that start out with a classical description, such as beams of particles aimed at each other, end up as classical probability distributions of particles coming out of the interaction region, in our view can be seen as evidence for an ‘ontology conservation law’, a law that says that an ontological basis exists, such that true ontological states at one moment of time, always evolve into true ontological states at later times. This is a new conservation law. It is tempting to conclude that the CAI is inevitable.

In the ontological basis, the evolution is deterministic. However, this term must be used with caution. “Deterministic” cannot imply that the outcome of the evolution process can be foreseen. No human, nor even any other imaginable intelligent being, will be able to compute faster than Nature itself. The reason for this is obvious: our intelligent being would also have to employ Nature’s laws, and we have no reason to expect that Nature can duplicate its own actions more efficiently than having them happen in the first place. This is how one may restore the concept of “free will”: whatever happens in our brains is unique and unforeseeable by anyone or anything.

5.1 Beables, Changeables and Superimposables

Having a special basis and special wave functions also allows us to distinguish special observables or operators. In standard quantum mechanics, we learned that operators and observables are indistinguishable, so we use these concepts interchangeably. Now, we will have to learn to restore the distinction. We repeat what was stated in Sect. 2.1.1: operators can be of three different kinds:

(I) beables \(\mathcal{B}_{\mathrm{op}}\): these denote a property of the ontological states, so that beables are diagonal in the ontological basis \(\{|A\rangle, |B\rangle,\ldots \}\) of Hilbert space:

    $$\begin{aligned} \mathcal{B}_{\mathrm{op}}^{\,a}|A\rangle= \mathcal {B}^{\,a}(A)|A\rangle,\quad\hbox{(beable)}. \end{aligned}$$
    (5.8)

Beables will always commute with one another, at all times:

$$\begin{aligned} \bigl[{\mathcal{B}}_{\mathrm{op}}^{\,a}(\vec{x}_{1},t_{1}), \mathcal{B}_{\mathrm{op}}^{\,b}(\vec{x}_{2},t_{2}) \bigr]=0\quad \forall\vec{x}_{1}, \vec{x}_{2}, t_{1}, t_{2}. \end{aligned}$$
(5.9)

Quantized fields, copiously present in all elementary particle theories, do obey Eq. (5.9), but only outside the light cone (where \(|t_{1}-t_{2}|<|\vec{x}_{1}-\vec{x}_{2}|\)), not inside that cone, where Eqs. (20.29) of Part II do not hold, as can easily be derived from explicit calculations. Therefore, quantized fields are altogether different from beables.

(II) changeables \(\mathcal{C}_{\mathrm{op}}\): operators that replace an ontological state \(|A\rangle\) by another ontological state \(|B\rangle\), such as a permutation operator:

    $$\begin{aligned} \mathcal{C}_{\mathrm{op}}|A\rangle=|B\rangle,\quad\hbox{(changeable)}. \end{aligned}$$
    (5.10)

Changeables do not commute, but they do have a special relationship with beables; they interchange them:

$$\begin{aligned} \mathcal{B}_{\mathrm{op}}^{(1)}\mathcal{C}_{\mathrm{op}}= \mathcal{C}_{\mathrm{op}}\mathcal{B}_{\mathrm{op}}^{(2)}. \end{aligned}$$
(5.11)

We may want to make an exception for infinitesimal changeables, such as the Hamiltonian \(H_{\mathrm{op}}\):

$$\begin{aligned}{} [ {\mathcal{B}}_{\mathrm{op}}, H_{\mathrm{op}}]=i {\partial \over \partial t}\mathcal{B}_{\mathrm{op}}. \end{aligned}$$
(5.12)
(III) superimposables \(\mathcal{S}_{\mathrm{op}}\): these map an ontological state onto any other, more general superposition of ontological states:

    $$\begin{aligned} \mathcal{S}_{\mathrm{op}}|A\rangle=\lambda_{1}|A\rangle + \lambda_{2}|B\rangle+\cdots,\quad\hbox{(superimposable)}. \end{aligned}$$
    (5.13)

All operators normally used are superimposables, even the simplest position or momentum operators of classroom quantum mechanics. This is easily verified by checking the time-dependent commutation rules (in the Heisenberg representation). In general (Footnote 3):

$$\begin{aligned} \bigl[\vec{x}(t_{1}), \vec{x}(t_{2})\bigr] \ne0,\quad \hbox{if}\quad t_{1}\ne t_{2}. \end{aligned}$$
(5.14)
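The distinction between these three classes can be made concrete in a toy example (a 3-state automaton, chosen purely as an assumption): a beable is a diagonal matrix in the ontological basis, a changeable is a permutation matrix, and the interchange property (5.11) can be checked directly, while a superimposable maps a basis state onto a superposition as in Eq. (5.13).

```python
import numpy as np

# Illustrative sketch of beables, changeables and superimposables for a
# 3-state automaton (all matrices below are assumptions, not from the book).

C = np.array([[0, 0, 1],        # changeable: |A> -> |B> -> |C> -> |A>
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

B1 = np.diag([4.0, 5.0, 6.0])   # a beable: some property of the ontological states
B2 = C.T @ B1 @ C               # the "interchanged" beable

print(np.allclose(B1 @ C, C @ B2))            # True: the relation of Eq. (5.11)
print(np.allclose(np.diag(np.diag(B2)), B2))  # True: B2 is again diagonal, i.e. a beable

# A superimposable maps an ontological state onto a superposition, Eq. (5.13):
S = np.array([[1 / np.sqrt(2),  1 / np.sqrt(2), 0],
              [1 / np.sqrt(2), -1 / np.sqrt(2), 0],
              [0, 0, 1]])
print(S @ np.array([1.0, 0.0, 0.0]))          # (|A> + |B>)/sqrt(2): not ontological
```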

5.2 Observers and the Observed

Standard quantum mechanics taught us a number of important lessons. One was that we should not imagine that an observation can be made without disturbing the observed object. This warning is still valid in the CAI. If measuring the position of a particle means checking whether the wave function perhaps has \(\vec{x}\,{\stackrel{{?}}{=}}\,\vec{x}^{(1)}\), this may be interpreted as having the operator \(P_{\mathrm{op}}(\vec{x}^{(1)})\) act on the state:

$$\begin{aligned} |\psi\rangle\rightarrow P_{\mathrm{op}}\bigl(\vec{x}^{(1)}\bigr)| \psi\rangle, \quad P_{\mathrm{op}}\bigl(\vec{x}^{(1)}\bigr)=\delta \bigl(\vec{x}_{\mathrm{op}}-\vec{x}^{(1)}\bigr). \end{aligned}$$
(5.15)

This modifies the state, and consequently all operators acting on it after that may yield results different from what they did before the “measurement”.

However, when a genuine beable acts on an ontological state, the state is simply multiplied by the value found, but will evolve in the same way as before (assuming we chose the ‘true’ ontological basis, see axiom #\(c\) in Sect. 5.4). Thus, measurements of beables are, in a sense, classical measurements. They would be the only measurements that do not disturb the wave function, but of course, such measurements cannot be performed in practice.

Other measurements, which seem to be completely legal according to conventional quantum mechanics, will not be possible in the CAI. In the CAI, as in ordinary quantum mechanics, we can consider any operator and study its expectation value. But since the class of physically realizable wave functions is now smaller than in standard QM, certain states can no longer be realized, and we cannot tell what the outcome of such a measurement could possibly be. Think of any (non infinitesimal) changeable \(\mathcal{C}_{\mathrm{op}}\). All ontological states will give the ‘expectation value’ zero to such a changeable, but we can consider its eigenvalues, which in general will not yield the value zero. The corresponding eigenstates are definitely not ontological (see Sect. 5.7.1).

Does this mean that standard quantum mechanics is in conflict with the CAI? We emphasize that this is not the case. It must be realized that, also in conventional quantum mechanics, it may be considered acceptable to say that the Universe has been, and will always be, in one and the same quantum state, evolving in time in accordance with the Schrödinger equation (in the Schrödinger picture), or staying the same (in the Heisenberg picture). If this state happens to be one of our ontological states then it behaves exactly as it should. Ordinary quantum mechanics makes use of template states, most of which are not ontological, but then the real world in the end must be assumed to be in a superposition of these template states so that the ontological state resurfaces anyway, and our apparent disagreements disappear.

5.3 Inner Products of Template States

In doing technical calculations, we perform transformations and superpositions that lead to large sets of quantum states which we now regard as templates, or candidate models for (sub) atomic processes that can always be superimposed at a later stage to describe the phenomena we observe. Inner products of templates can be studied the usual way.

A template state \(|\psi\rangle\) can be used to serve as a model of some actually observed phenomenon. It can be any quantum superposition of ontological states \(|A\rangle \). The inner product expressions \(|\langle A|\psi\rangle|^{2}\) then represent the probabilities that ontological state \(|A\rangle\) is actually realized.

According to the Copenhagen rule #ii, Sect. 5.4, the probability that the template state \(|\psi_{1}\rangle\) is found to be equal to the state \(|\psi_{2}\rangle\) is given by \(| \langle\psi _{1}|\psi_{2}\rangle|^{2}\). However, already at the beginning, Sect. 2.1, we stated that the inner product \(\langle\psi_{1}|\psi_{2}\rangle\) should not be interpreted this way. Even if their inner product vanishes, template states \(|\psi _{1}\rangle \) and \(|\psi_{2}\rangle\) may both have a non-vanishing coefficient with the same ontological state \(|A\rangle\). This does not mean that we depart from Copenhagen rule #ii, but that the true wave function cannot be just a generic template state; it is always an ontological state \(|A\rangle\). This means that the inner product rule is only true if either \(|\psi_{1}\rangle\) or \(|\psi_{2}\rangle\) is an ontological state while the other may be a template. We are then considering the probability that the template state coincides exactly with one of the ontological states \(|A\rangle\).

Thus, we use the Born interpretation of the inner product \(|\langle \psi_{1}|\psi _{2}\rangle|^{2}\) if one of the two templates is regarded as a candidate ontological state. This is legitimate, as we know that the ontological states are complicated superpositions of our templates. There are unobserved degrees of freedom, and how these are entangled with this state is then immaterial. Thus, one of our template states can be assumed to represent a probability distribution of the ontological states of the universe, while the other is a model of an ontological state.

We see that the inner product rule can be used in two ways; one is to describe the probability distribution of the initial states of a system under consideration, and one is to describe the probability that a given classical state is reached at the end of a quantum process. If the Born rule is used to describe the initial probabilities, the same rule can be used to calculate the probabilities for the final states.
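A small numerical sketch of the point made above (three ontological states and coefficients chosen as assumptions): two templates can be mutually orthogonal and yet both have a non-vanishing coefficient with the same ontological state, while the Born probabilities \(|\langle A|\psi\rangle|^{2}\) over the ontological basis still add up to one.

```python
import numpy as np

# Illustrative sketch: orthogonal templates sharing support on the same
# ontological state, and Born probabilities over the ontological basis.

A, B, C = np.eye(3)                        # ontological basis vectors |A>, |B>, |C>

psi1 = (A + B) / np.sqrt(2)                # template 1
psi2 = (A - B) / np.sqrt(2)                # template 2

print(np.vdot(psi1, psi2))                 # 0.0: the two templates are orthogonal
print(abs(np.vdot(A, psi1))**2,            # 0.5
      abs(np.vdot(A, psi2))**2)            # 0.5: both overlap the same |A>

# Born probabilities of template 1 over the ontological basis sum to one:
print(sum(abs(np.vdot(x, psi1))**2 for x in (A, B, C)))   # 1.0
```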

5.4 Density Matrices

Density matrices are used when we know exactly neither the ontological states nor the templates. One takes a set of templates \(|\psi _{i}\rangle\), and attributes to them probabilities \(W_{i}\ge0\), such that \(\sum_{i}W_{i}=1\). This is called a mixed state. In standard quantum mechanics, one then finds for the expectation values of an operator \(\mathcal{O}\):

$$\begin{aligned} \langle \mathcal{O} \rangle=\sum_{i} W_{i} \langle\psi_{i}|\mathcal{O}|\psi_{i}\rangle={\mbox{Tr}} ( \varrho \mathcal {O});\quad\varrho =\sum_{i} W_{i} |\psi_{i}\rangle \langle\psi_{i}|. \end{aligned}$$
(5.16)

The operator \(\varrho \) is called the density matrix.

Note that, if the ontological basis is known, and the operator \(\mathcal {O}\) is a beable, then the probabilities are indistinguishable from those generated by a template,

$$ |\psi\rangle=\sum_{i}\lambda_{i}\bigl| \psi^{\mathrm{ont}}_{i}\bigr\rangle ,\quad|\lambda_{i}|^{2}=W_{i}, $$
(5.17)

since in both cases the density matrix in that basis is diagonal:

$$\begin{aligned} \varrho =\sum_{i}W_{i} \bigl| \psi_{i}^{\mathrm{ont}}\bigr\rangle \bigl\langle \psi _{i}^{\mathrm{ont}}\bigr|. \end{aligned}$$
(5.18)

If \(\mathcal{O}\) is not a beable, the off-diagonal elements of the density matrix may seem to be significant, but we have to remember that non-beable operators in principle cannot be measured, and this means that the formal distinction between density matrices and templates disappears.

Nevertheless, the use of density matrices is important in physics. In practice, the density matrix is used to describe situations where less can be predicted than what can maximally be obtained in ideal observations. Take a single qubit as an example. If we consider the expectation values of the Pauli matrices \(\sigma_{x},\sigma_{y},\) and \(\sigma_{z}\), a calculation for a pure qubit state will yield

$$\begin{aligned} |\langle\sigma_{x}\rangle|^{2}+|\langle\sigma _{y}\rangle|^{2}+|\langle\sigma_{z} \rangle|^{2}=1, \end{aligned}$$
(5.19)

whereas a mixed state will give

$$\begin{aligned} |\langle\sigma_{x}\rangle|^{2}+|\langle\sigma _{y}\rangle|^{2}+|\langle\sigma_{z} \rangle|^{2}< 1. \end{aligned}$$
(5.20)

This amounts to a loss of information in addition to the usual quantum uncertainty.
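A minimal single-qubit sketch of Eqs. (5.16), (5.19) and (5.20) follows (the state and the weights \(W_{i}\) below are arbitrary assumptions): the Bloch-vector test distinguishes a pure template from a mixed state, and the trace formula reproduces the weighted expectation values.

```python
import numpy as np

# Illustrative sketch of Eqs. (5.16), (5.19) and (5.20) for a single qubit.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_norm_sq(rho):
    """|<sx>|^2 + |<sy>|^2 + |<sz>|^2 computed from a density matrix."""
    return sum(abs(np.trace(rho @ s))**2 for s in (sx, sy, sz))

# A pure state (an assumed template): |psi> = cos(a)|0> + sin(a)|1>
a = 0.3
psi = np.array([np.cos(a), np.sin(a)], dtype=complex)
rho_pure = np.outer(psi, psi.conj())
print(bloch_norm_sq(rho_pure))             # 1.0, as in Eq. (5.19)

# A mixed state: rho = sum_i W_i |psi_i><psi_i|, Eq. (5.16), with assumed weights
W = [0.7, 0.3]
states = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
rho_mixed = sum(w * np.outer(s, s.conj()) for w, s in zip(W, states))
print(bloch_norm_sq(rho_mixed))            # 0.16 < 1, as in Eq. (5.20)

# Tr(rho O) reproduces the weighted expectation values of Eq. (5.16):
O = sz
print(np.trace(rho_mixed @ O).real,
      sum(w * (s.conj() @ O @ s).real for w, s in zip(W, states)))   # both 0.4
```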

6 The Hamiltonian

As was explained in the Introduction to this chapter, Sect. 5.1, there are many ways to choose a Hamiltonian operator that correctly produces a Schrödinger equation for the time dependence of a (cellular) automaton. Yet the Hamiltonian for the quantum world as we know it, and in particular for the Standard Model, is quite unique. How does one derive the ‘correct’ Hamiltonian?

Of course there are conserved quantities in the Standard Model, such as global and local charges, and kinematical quantities such as angular momentum. Those may be added to the Hamiltonian with arbitrary coefficients (chemical potentials), but they are usually quite distinct from what we tend to call ‘energy’, so it should be possible to dispose of them. Then, there are many non-local conserved quantities, which explains the large number of possible shifts \(\delta E_{i}\) in Fig. 2.3, Sect. 2.2.2. Most of such ambiguities will be removed by demanding the Hamiltonian to be local.

6.1 Locality

Our starting expressions for the Hamiltonian of a deterministic system are Eqs. (2.8) and (2.26). These, however, converge only very slowly for large values of \(n\). If we apply such expansions to the cellular automaton, Eqs. (5.2) and (5.3), we see that the \(n\mathrm{th}\) term will involve interactions over neighbours that are \(n\) steps separated. If we write the total Hamiltonian \(H\) as

$$\begin{aligned} H=\sum_{\vec{x}}\mathcal{H}(\vec{x}),\qquad\mathcal {H}( \vec{x})=\sum_{n=1}^{\infty}\mathcal{H}_{n}( \vec{x}), \end{aligned}$$
(5.21)

we see contributions \(\mathcal{H}_{n}(\vec{x})\) that involve interactions over \(n\) neighbours, with coefficients dropping typically as \(1/n\). Typically,

$$\begin{aligned} \bigl[ {\mathcal{H}}_{n}(\vec{x}), \mathcal{H}_{m}\bigl( \vec{x} \,'\bigr)\bigr]=0\quad\hbox{only if}\quad |\vec{x}-\vec{x} \,'|>n+m, \end{aligned}$$
(5.22)

while in relativistic quantum field theories, we have \([\mathcal {H}(\vec{x}), \mathcal{H}(\vec{x}\,')]=0\) as soon as \(\vec{x}\ne\vec{x}\,'\). Considering that the number of interacting neighbouring cells that fit in a \(d\)-dimensional sphere with radius \(n\) may grow rapidly with \(n\), while the leading terms start out being of the order of the Planck energy, we see that this convergence is too slow: large contributions spread over large distances. This is not a Hamiltonian with the locality structure typical for the Standard Model.
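This slow spreading can be seen already in the simplest possible example, sketched below under illustrative assumptions (a one-step ‘shift’ automaton on a ring of 32 cells, \(\delta t=1\)): although the update rule only moves data to a neighbouring cell, the Hamiltonian obtained directly from \(U_{\mathrm{op}}=e^{-iH_{\mathrm{op}}\delta t}\), with eigenvalues restricted to \([0,2\pi/\delta t)\), has matrix elements that decay only like \(1/|\vec{x}-\vec{x}\,'|\).

```python
import numpy as np

# Illustrative sketch: the straightforward Hamiltonian of a shift automaton,
# obtained from the branch choice of Eq. (5.5), is not local in the sense of (5.6).

n, dt = 32, 1.0
U = np.roll(np.eye(n), 1, axis=0)              # one time step: shift every cell by one site

w, V = np.linalg.eig(U)                        # U is unitary, eigenvalues e^{-i omega}
omega = (-np.angle(w)) % (2 * np.pi)           # branch choice 0 <= omega < 2*pi
H = V @ np.diag(omega / dt) @ np.linalg.inv(V)

for d in (1, 2, 4, 8):
    print(d, abs(H[0, d]))                     # ~1.00, 0.50, 0.26, 0.14: slow ~1/d decay
```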

Now this should not have been surprising. The eigenvalues of the Hamiltonians (2.8) and (2.26) are both bounded to the region \((0, 2\pi/\delta t)\), while any Hamiltonian described by equations such as (5.21) should be extensive: its eigenvalues may grow proportionally to the volume of space.

A better construction for a cellular automaton is worked out further in Part II, Chap. 21. There, we first introduce the Baker–Campbell–Hausdorff expansion. In that expansion, also, the lowest terms correspond to a completely local Hamiltonian density, while all terms are extensive. Unfortunately, this comes at a price that is too high for us: the Baker–Campbell–Hausdorff series does not seem to converge at all in this case. One could argue that, if it is used only for states where the total energies are much less than the Planck energy, the series should converge, but we have no proof of that. Our problem is that, in the expressions used, the intermediate states can easily represent higher energies.

Several attempts were made to arrive at a positive Hamiltonian that is also a space integral over local Hamiltonian densities. The problem is that the cellular automaton only defines a local evolution operator over finite time steps, not a local Hamiltonian that specifies infinitesimal time evolution. Generically valid formal procedures seem to fail, but if we stick closer to procedures that resemble perturbative quantum field theories, we seem to approach interesting descriptions that almost solve the problem. In Chap. 22 of Part II, we use second quantization. This procedure can be summarized as follows: consider first a cellular automaton that describes various types of particles, all non-interacting. This automaton will be integrable, and its Hamiltonian, \(H_{0}\), will certainly obey locality properties, and have a lower bound. Next, one must introduce interactions as tiny perturbations. This should not be difficult in cellular automata; just introduce small deviations from the free particle evolution law. These small perturbations, even if discrete and deterministic, can be handled perturbatively, assuming that the perturbation occurs infrequently, at sufficiently separated spots in space and time. This should lead to something that can reproduce perturbative quantum field theories such as the Standard Model.

Note that, in most quantum field theories, perturbation expansions have been used with much success (think for instance of the calculation of the anomalous magnetic moment \(g-2\) of the electron, which could be calculated and compared with experiment with superb precision), while it is still suspected that the expansions eventually do not converge. The non-convergence, however, sets in at very high orders, way beyond today’s practical limits of accuracy in experiments.

We now believe that this will be the best way to construct a Hamiltonian with properties that can be compared to experimentally established descriptions of particles. But it is only a strategy; it was not possible to work out the details, because the deterministic free-particle theories needed here are not yet sufficiently well understood.

Thus, there is still a lot of work to be done in this domain. The questions are technically rather involved, and therefore we postpone the details to Part II of this book.

6.2 The Double Role of the Hamiltonian

Without a Hamiltonian, theoretical physics would look completely different. In classical mechanics, the central point is that a mechanical system obeys an energy conservation law. The energy is a non-negative, additive quantity that is locally well-defined. It is these properties that guarantee the stability of mechanical systems against complete collapse or completely explosive solutions.

The classical Hamiltonian principle is a superb way to implement this mechanism. All that is needed is to postulate an expression for the non-negative, conserved quantity called energy, which turns into a Hamiltonian \(H(\vec{x}, \vec{p} )\) if we carefully define the dynamical quantities on which it depends, namely the canonical pairs of positions \(x_{i}\) and momenta \(p_{i}\). The ingenious idea was to take as equations of motion the Hamilton equations

$$\begin{aligned} {\mathrm{d}\over \mathrm{d}t} x_{i}(t)={\partial \over \partial p_{i}}H(\vec{x}, \vec{p} ) ,\qquad{\mathrm{d}\over \mathrm{d}t} p_{i}(t)=-{\partial \over \partial x_{i}}H( \vec{x}, \vec{p} ). \end{aligned}$$
(5.23)

This guarantees that \({\mathrm{d}\over \mathrm{d}t}H(\vec{x}, \vec{p} )=0\). The fact that the equations (5.23) allow for a large set of mathematical transformations makes the principle even more powerful.
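As a minimal illustration of this double role (a harmonic oscillator \(H=\frac{1}{2}p^{2}+\frac{1}{2}x^{2}\) with unit mass and frequency, integrated with a simple symplectic scheme; all choices are assumptions made only for this sketch), the same function \(H\) both generates the motion via Eq. (5.23) and supplies the conserved quantity that keeps the orbit bounded:

```python
import numpy as np

# Illustrative sketch of Eq. (5.23) for an assumed toy Hamiltonian H = p^2/2 + x^2/2.

def dH_dx(x, p): return x       # partial H / partial x
def dH_dp(x, p): return p       # partial H / partial p
def energy(x, p): return 0.5 * p**2 + 0.5 * x**2

x, p, dt = 1.0, 0.0, 0.01
energies = []
for _ in range(10000):
    # leapfrog (kick-drift-kick) integration of Hamilton's equations
    p -= 0.5 * dt * dH_dx(x, p)
    x += dt * dH_dp(x, p)
    p -= 0.5 * dt * dH_dx(x, p)
    energies.append(energy(x, p))

print(max(energies) - min(energies))   # of order 1e-5: the energy stays (almost) constant
```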

In quantum mechanics, as the reader should know, one can use the same Hamiltonian function \(H\) to define a Schrödinger equation with the same property: the operator \(H\) is conserved in time. If \(H\) is bounded from below, this guarantees the existence of a ground state, usually the vacuum.

Thus, both quantum and classical mechanics are based on this powerful principle, where a single physical variable, the Hamiltonian, does two things: it generates the equations of motion, and it gives a locally conserved energy function that stabilizes the solutions of the equations of motion. This is how the Hamiltonian principle describes equations of motion, or evolution equations, whose solutions are guaranteed to be stable.

Now how does this work in the discrete, deterministic systems of the kind studied here? Our problem is that, in a discrete, classical system, the energy must also be discrete, but the generator of the evolution must be an operator with continuous eigenvalues. The continuous differential equations (5.23) must be replaced by something else. In principle, this can be done: we could attempt to recover a continuous time variable, and derive how our system evolves in terms of that time variable. What we really need is an operator \(H\) that partly represents a positive, conserved energy, and partly a generator of infinitesimal time changes. We elaborate this issue further in Part II, Chap. 19, where, among other things, we construct a classical, discretized Hamiltonian in order to apply a cellular automaton version of the Hamilton principle.

6.3 The Energy Basis

In Sect. 5.5.1, it was explained that a deterministic model of a quantum mechanical system is obtained if we can find a set of beable operators, (5.8), that commute at all times, see Eq. (5.9). The ontological states are then eigenstates of these beables. There is a trivial example of such operators and such states in the real world: the Hamiltonian and its eigenstates. According to our definition they form a set of beables, but unfortunately they are trivial: there is only one Hamiltonian, and the energy eigenstates do not change in time at all. This describes a static classical world. What is wrong with it?

Since we declared that superpositions of ontological states are not ontological, this solution also tells us that, if the energy eigenstates were considered ontological, no superpositions of these would be admitted, while conventional physics only re-emerges when we do consider superpositions of the energy eigenstates. Only superpositions of different energy states can be time-dependent, so yes, this is a solution, but no, it is not the solution we want. The energy basis solution emerges, for instance, if we take the model of Sect. 2.2.2, Figs. 2.2 and 2.3, replace all loops by trivial loops having only a single state, and put all the physics in the arbitrary numbers \(\delta E_{i}\). It is in accordance with the rules, but not useful.

Thus, the choice of the energy basis represents an extreme limit that is often not useful in practice. We also see this when a very important procedure is considered: it seems that we will have to split the energy into two parts. On the one hand, there is a large, classical component, where energy, being equivalent to mass, acts as the source of a gravitational field and as such must be ontological, that is, classical; this part must probably be discretized. On the other hand, we have the smaller components of the energy that act as eigenvalues of the evolution operator over sufficiently large time steps (much larger than the Planck time); these must form a continuous spectrum.

If we considered the energy basis as our ontological basis, we would regard all of the energy as classical, but then the part describing the evolution disappears; that is not good physics. See Part II, Fig. 19.1, in Sect. 19.4.1. The closed contours in that figure must be non-trivial.

7 Miscellaneous

7.1 The Earth–Mars Interchange Operator

The CAI surmises that quantum models exist that can be regarded as classical systems in disguise. If one looks carefully at these classical systems, it seems as if any classical system can be rephrased in quantum language: we simply postulate an element of a basis of Hilbert space to correspond to every classical physical state that is allowed in the system. The evolution operator is the permutator that replaces a state by its successor in time, and we may or may not decide later to consider the continuous time limit.

Naturally, therefore, we ask the question whether one can reverse the CAI, and construct quantum theories for systems that are normally considered classical. The answer is yes. To illustrate this, let us consider the planetary system. It is the prototype of a classical theory. We consider large planets orbiting around a sun, and we ignore non-Newtonian corrections such as special and general relativity, or indeed the actual quantum effects, all of which are negligible corrections here. We regard the planets as point particles, even if they may feature complicated weather patterns, or life; we just look at their classical, Newtonian equations of motion.

The ontological states are defined by listing the positions \(\vec{x}_{i}\) and velocities \(\vec{v}_{i}\) of the planets (which commute), and the observables describing them are the beables. Yet this system, too, allows for the introduction of changeables and superimposables. The quantum Hamiltonian here is not the classical Hamiltonian, but

$$\begin{aligned} H^{\mathrm{quant}}=\sum_{i} \bigl(\vec{p}_{x, i}^{\mathrm{op}}\cdot\vec{v}_{i}+\vec{p}_{v, i}^{\mathrm{op}}\cdot\vec{F}_{i}(\mathbf {x})/m_{i} \bigr), \end{aligned}$$
(5.24)

where

$$\begin{aligned} \vec{p}_{x, i}^{\mathrm{op}} =-i{\partial \over \partial \vec{x}_{i}},\qquad\vec{p}_{v, i}^{\mathrm{op}}=-i{\partial \over \partial \vec{v}_{i}}, \quad\hbox{and}\quad \mathbf {x}=\{\vec{x}_{i}\}. \end{aligned}$$
(5.25)

Here, \(\vec{F}_{i}(\mathbf {x})\) are the classical forces on the planets, which depend on all positions. Equation (5.24) can be written more elegantly as

$$\begin{aligned} H^{\mathrm{quant}}=\sum_{i} \biggl(\vec{p}_{ x, i}^{\mathrm{op}}\cdot{\partial H^{\mathrm{class}}\over \partial \vec{p}_{i}}- \vec{p}_{ p, i}^{\mathrm{op}}\cdot{\partial H^{\mathrm {class}}\over \partial \vec{x}_{i}} \biggr), \end{aligned}$$
(5.26)

where \(\vec{p}_{p, i}^{\mathrm{op}}=m_{i}^{-1}\vec{p}_{v, i}^{\mathrm{op}}\). Clearly, \(\vec{p}_{x, i}^{\mathrm{op}}\), \(\vec{p}_{v, i}^{\mathrm{op}}\) and \(\vec{p}_{p, i}^{\mathrm{op}}\) are infinitesimal changeables, and so is, of course, the Hamiltonian \(H^{\mathrm{quant}}\). The states of the planets now span a Hilbert space, and the planets behave as if they were quantum objects. We did not modify the physics of the system.
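As a check, added here for clarity (it uses only the canonical commutation rules between \(\vec{x}_{j}\) and \(\vec{p}_{x,i}^{\mathrm{op}}\), and between \(\vec{v}_{j}\) and \(\vec{p}_{v,i}^{\mathrm{op}}\), componentwise equal to \(i\delta_{ij}\)), the Heisenberg equations generated by (5.24) reproduce exactly the classical equations of motion,

$$\begin{aligned} {\mathrm{d}\over \mathrm{d}t}\vec{x}_{j}=i\bigl[H^{\mathrm{quant}},\vec{x}_{j}\bigr]=\vec{v}_{j},\qquad{\mathrm{d}\over \mathrm{d}t}\vec{v}_{j}=i\bigl[H^{\mathrm{quant}},\vec{v}_{j}\bigr]=\vec{F}_{j}(\mathbf {x})/m_{j}, \end{aligned}$$

so that the beables \(\vec{x}_{i}\) and \(\vec{v}_{i}\) evolve along the classical orbits while continuing to commute with one another at all times.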

We can continue to define more changeables, and ask how they evolve in time. One of the author’s favourites is the Earth–Mars interchange operator. It puts the Earth where Mars is now, and puts Mars where planet Earth is. The velocities are also interchanged.Footnote 4 If Earth and Mars had the same mass, then the planets would continue to evolve as if nothing had happened. Since the masses are different, however, this operator has rather complicated properties as time evolves. It is not conserved in time.

The eigenvalues of the Earth–Mars interchange operator \(X_{EM}\) are easy to calculate:

$$\begin{aligned} X_{EM}=\pm1, \end{aligned}$$
(5.27)

simply because the square of this operator is one. In standard QM language, \(X_{EM}\) is an observable. It does not commute with the Hamiltonian because of the mass differences, but, at a particular moment, \(t=t_{1}\), we can consider one of its eigenstates and ask how it evolves.

Now why does all this sound so strange? How do we observe \(X_{EM}\)? No-one can physically interchange planet Earth with Mars. But then, no-one can interchange two electrons either, and yet, in quantum mechanics, this is an important operator. The answer to these questions is that, for the planetary system, we happen to know what the beables are: they are the positions and the velocities of the planets, and this turns them into ontological observables. The basis in which these observables are diagonal operators is our preferred basis. The elements of this basis are the ontological states of the planets. If, in a quantum world, investigators one day discover what the ontological beables are, everything will be expressed in terms of these, and all other operators will no longer be relevant.

It is important to realize that, even though \(X_{EM}\) is an observable in Copenhagen language (since it is Hermitian), we cannot measure it to see whether it is \(+1\) or −1. This is because we know the wave function \(|\mathrm{ont}\rangle\): its amplitude is 1 for the actual positions of Earth and Mars, and 0 when we interchange the two. This is the superposition

$$\begin{aligned} |\mathrm{ont}\rangle={\textstyle{1\over \sqrt{2}}} \bigl(| X_{EM}=1 \rangle+| X_{EM}=-1 \rangle \bigr); \end{aligned}$$
(5.28)

which is a superposition of two template states. According to Copenhagen, a measurement would yield \(\pm1\) with \(50~\%/50~\%\) chances.
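The algebra of such an interchange operator can be made explicit in a toy model (an illustration added here, not taken from the text): give ‘Earth’ and ‘Mars’ each a handful of discrete ontological states, represent \(X_{EM}\) as the permutation matrix that swaps the two labels, and verify Eqs. (5.27)–(5.28): \(X_{EM}^{2}=1\), the eigenvalues are \(\pm1\), and an ontological basis state is an equal-weight superposition of two \(X_{EM}\) eigenstates, so that a Copenhagen measurement of \(X_{EM}\) would indeed yield \(\pm1\) with \(50~\%/50~\%\) chances.

```python
# Toy model of the Earth-Mars interchange operator X_EM (illustrative only).
import numpy as np

n = 5                                   # number of discrete states per planet (arbitrary)
dim = n * n                             # basis |earth_state, mars_state>

# X_EM permutes the ontological basis: |e, m>  ->  |m, e>
X = np.zeros((dim, dim))
for e in range(n):
    for m in range(n):
        X[m * n + e, e * n + m] = 1.0

assert np.allclose(X @ X, np.eye(dim))          # X_EM^2 = 1  =>  eigenvalues +-1
eigvals = np.linalg.eigvalsh(X)
print("eigenvalues of X_EM:", sorted({float(v) for v in np.round(eigvals, 10)}))

# An ontological state |e=1, m=3>, written in the X_EM eigenbasis as in Eq. (5.28):
e, m = 1, 3
ont = np.zeros(dim); ont[e * n + m] = 1.0
plus  = (ont + X @ ont) / np.sqrt(2)            # eigenvector with X_EM = +1
minus = (ont - X @ ont) / np.sqrt(2)            # eigenvector with X_EM = -1
print("Born probabilities for X_EM = +1, -1:",
      abs(plus @ ont) ** 2, abs(minus @ ont) ** 2)   # 0.5 and 0.5
```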

7.2 Rejecting Local Counterfactual Definiteness and Free Will

The arguments usually employed to conclude that local hidden variables cannot exist begin with assuming that such hidden variables should imply local counterfactual definiteness. One imagines a set-up such as the EPR-Bell experiment that we exposed in Sect. 3.6. Alice and Bob are assumed to have the ‘free will’ to choose the orientation of their polarization filters at any time and in any way they wish, and there is no need for them to consult anyone or anything to reach their (arbitrary) decision. The quantum state of the photon that they are about to investigate should not depend on these choices, nor should the state of a photon depend on the choices made at the other side, or on the outcome of those measurements.

This means that the outcomes of measurements should already be determined by some algorithm long before the actual measurements are made, and also long before the choice was made what to measure. It is this algorithm that generates conflicts with the expectations computed in a simple quantum calculation. It is counterfactual, which means that there may be one ‘factual’ measurement but there would have been many possible alternative measurements, measurements that are not actually carried out, but whose results should also have been determined. This is what is usually called counterfactual definiteness, and it has essentially been disproven by simple logic.

Now, as has been pointed out repeatedly, the violation of counterfactual definiteness is not at all a feature that is limited to quantum theory. In our example of the planetary system, Sect. 5.7.1, there is no a priori answer to the question which of the two eigenstates of the Earth–Mars exchange operator, with eigenvalue \(+1\) or −1, we are in. This is a forbidden, counterfactual question. But in the case of the planetary system, we know what the beables are (the positions and velocities of the planets), whereas in the Standard Model we do not know this. There, the illegitimacy of counterfactual statements is not immediately obvious. In essence, we have to posit that Alice and Bob do not have the free will to change the orientation of their filters; or, if we say that they do, their decisions must have their roots in the past and, as such, they affect the possible states a photon can be in. In short, Alice and Bob make decisions that are correlated with the polarizations of the photons, as explained in Sect. 3.6.

It is the more precise definition of ‘free will’, as the freedom to choose one’s state at any given time, that should be used in these arguments, as was explained in Sect. 3.8.

7.3 Entanglement and Superdeterminism

An objection often heard against the general philosophy advocated in this work is that it could never accommodate entangled particles. The careful reader, however, must realize by now that, in principle, there should be no problem of this sort. Any quantum state can be considered as a template, and the evolution of these templates will be governed by the real Schrödinger equation. If the relation between the ontological basis and the more conventional basis choices is sufficiently complex, we will encounter superimposed states of all sorts, so one definitely also expects states where particles behave as being ‘quantum-entangled’.

Thus, in principle, it is easy to write down orthonormal transformations that turn ontic states into entangled template states.

There are some problems and mysteries, however. The EPR paradox and Bell’s theorem are examples. As explained in Sect. 3.6, the apparent contradictions can only be repaired if we assume rather extensive correlations among the ‘hidden variables’ everywhere in the Universe. The mapping of ontic states into entangled states appears to depend on the settings chosen by Alice and Bob in the distant future.

It seems as if conspiracy takes place: due to some miraculous correlations in the events at time \(t= 0\), a pair of photons ‘knows in advance’ what the polarization angles of the filters will be that they will encounter later, and how they should pass through them. Where and how did this enter in our formalism, and how does a sufficiently natural system, without any conspiracy at the classical level, generate this strange behaviour?

It is not only entangled particles that appear to be so problematic. The conceptual difficulty is already manifest at a much more basic level. Consider a single photon, regardless of whether it is entangled with other particles or not. Our description of this photon in terms of the beables dictates that these beables behave classically. What happens later, at the polarization filter(s), is also dictated by classical laws. These classical laws in fact dictate how myriads of variables fluctuate at what we call the Planck scale, or more precisely, the smallest scale where distinguishable physical degrees of freedom can be recognized, which may or may not be close to what is usually called the Planck scale. Since entangled particles occur in real experiments, we claim that the basis transformations will be sufficiently complex to transform states that are ontic at the Planck scale into entangled states.

But this is not the answer to the question posed. The usual way to phrase the question is to ask how ‘information’ is passed on. Is this information classical or quantum? If it is true that the templates are fundamentally quantum templates, we are tempted to say, well, the information passed on is quantum information. Yet it does reduce to classical information at the Planck scale, and this was generally thought not to be possible.

That must be a mistake. As we saw in Sect. 3.6, the fundamental technical contradiction goes away if we assume strong correlations between the ‘quantum’ fluctuations—including the vacuum fluctuations—at all distance scales (also including correlations between the fluctuations generated in quasars that are separated by billions of light years). We think the point is the following. When we use templates, we do not know in advance which basis one should pick to make them look as much as possible like the ontological degrees of freedom. For a photon going through a polarization filter, the basis closest to the ontological one is the basis where the coordinates are chosen to be aligned with the filter. But this photon may have been emitted by a quasar billions of years ago; how did the quasar know what the ontological basis is?

The answer is that indeed the quasar knows what the ontological basis is, because our theory extends to these quasars as well. The information, ‘this is an ontological state, and any set of superimposed states is not’, is a piece of information that, according to our theory, is absolutely conserved in time. So, if that turns out to be the basis now, it was the basis a billion years ago. The quasars seem to conspire in a plot to make fools of our experimenters, but in reality they just observe a conservation law: the information as to which quantum states form the ontological basis is conserved in time. Much like the law of angular momentum, or any other exactly conserved entity, this conservation law tells us what this variable is in the future if it is known in the past, and vice versa.

The same feature can be illustrated by a thought experiment where we measure the fluctuations of photons emitted by a quasar, but first we send the photons through a polarization filter. The photographs we make will be classical objects. Here also, we must conclude that the photons emitted by the quasar already ‘knew’ what their polarizations were when they left, billions of years ago. This isn’t conspiracy, it is just the consequence of our conservation law: ontological states, indeed only ontological states, evolve into other ontological states.

We must conclude that, if there seems to be conspiracy in our quantum description of reality, then that is to be considered as a feature of our quantum techniques, not of the physical system we are looking at. There is no conspiracy in the classical description of the cellular automaton. Apparent conspiracy is a feature, not a bug.

The answer given here, is often called superdeterminism. It is the idea that indeed Alice and Bob can only choose ontological states to be measured, never the superimposed states, which we use as templates. In a sense, their actions were predetermined, but of course in a totally unobservable way. Superdeterminism only looks weird if one adheres to the description of entangled particles as being quantum systems, described by their quantum states. The ontological description does not use quantum states. In that description, the particles behave in a normal, causal manner. However, we do have to keep in mind that these particles, and everything else, including Bob and Alice’s minds, are all strongly correlated. They are correlated now as strongly as when the photons were emitted by the source, as was laid downFootnote 5 in the mouse-dropping function, Fig. 3.2 in Sect. 3.7.

In Sect. 3.8, it was explained in explicitly physical terms, what ‘free will’ should really stand for, and why superdeterminism can clash with it.

7.4 The Superposition Principle in Quantum Mechanics

What exactly happened to the superposition principle in the CA Interpretation of quantum mechanics? Critics of our work have brought forward the objection that the CAI disallows superposition, while obviously the superposition principle serves quite well as a solid backbone of quantum mechanics. Numerous experiments confirm that if we have two different states, any superposition of these states can also be realized. Although the reader should have understood by now how to answer this question, let us attempt to clarify the situation once again.

At the most basic level of physical law, we assume only ontological states to occur, and any superposition of these, in principle, does not correspond to an ontological state. At best, a superposition can be used to describe probabilistic distributions of states (we call these “template states”, to be used when we do not have the exact information at hand to determine with absolute certainty which ontological state we are looking at). In our description of the Standard Model, or any other known physical system such as atoms and molecules, we do not use ontological states but templates, which can be regarded as superpositions of ontological states. The hydrogen atom is a template, all elementary particles we know about are templates, and this means that the wave function of the universe, which is an ontological state, must be a superposition of our templates. Which superposition? Well, we will encounter many different superpositions when doing repeated experiments. This explains why we were led to believe that all superpositions are always allowed.

But not literally all superpositions can occur. Superpositions are man-made. Our templates are superpositions, but that is because they represent only the very tiny sector of Hilbert space that we understand today. The entire universe is in only one ontological state at a time, and of course it cannot go into a superposition with itself. This fact becomes manifestly evident when we consider the “classical limit”. In the classical limit we again deal with certainties. Classical states are also ontological. When we do a measurement, by comparing the calculated “template state” with the ontological classical states that we expect in the end, we recover the probabilities by taking the norm squared of the amplitudes.

It appears that for many scientists this is difficult to accept. During a whole century we have been brainwashed with the notion that superpositions occur everywhere in quantum mechanics. At the same time we were told that if you try to superimpose classical states, you will get probabilistic distributions instead. It is here that our present theory is more accurate: if we knew the wave function of the universe exactly, we would find that it always evolves into one classical state only, without uncertainties and without superpositions.

Of course this does not mean that standard quantum mechanics would be wrong. Our knowledge of the template states, and of how these evolve, is very accurate today. It is only because it is not yet known how to relate these template states to the ontological states that we have to perform superpositions all the time when we do quantum mechanical calculations. These do lead to statistical distributions in our final predictions, rather than certainties. This could only change if we found the ontological states, but since even the vacuum state is expected to be a template, and as such a complicated superposition of uncountably many ontic states, we should expect quantum mechanics to stay with us forever—but as a mathematical tool, not as a mystic departure from ordinary, “classical”, logic.

7.5 The Vacuum State

Is the vacuum state an ontological state? The vacuum state is generally defined to be the state with lowest energy. This also means that no particles can be found in this state, simply because particles represent energy, and, in the vacuum state, we do not even have enough energy to allow for the presence of a single particle.

The discretized Hamiltonian is only introduced in Chap. 19 of Part II. It is a beable, but being discrete it can only be a rough approximation of the quantum Hamiltonian at best, and its lowest energy state is highly degenerate. As such, it is not good enough to serve as a definition of a vacuum. More to the point, the Hamiltonian defined in Sect. 19.2 is quantized in units that seem to be as large as the Planck mass. It will be clear that the Hamiltonian to be used in any realistic Schrödinger equation has a much denser, basically continuous, eigenvalue spectrum. The quantum Hamiltonian is definitely not a beable, as we explained above, in Sect. 5.6.3. Therefore, the vacuum is not an ontological state.

In fact, according to quantum field theories, the vacuum contains many virtual particles or particle–antiparticle pairs that fluctuate in and out of existence everywhere in space–time. This is typical for quantum superpositions of ontological states. Furthermore, the lightest particles in our theories are much lighter than the Planck mass. They are not ontological, and demanding them to be absent in our vacuum state inevitably turns our vacuum itself into a non-ontological state as well.

This is remarkable, because our vacuum state has one more peculiar property: its energy density is almost perfectly vanishing. Due to the cosmological constant, there is energy in our vacuum state, but it amounts to no more than the mass of about 6 protons per \(m^{3}\), an extremely tiny number indeed, considering that most length scales in particle physics are very much smaller than a metre. This very tiny but non-vanishing number is one of Nature’s greater mysteries.

And yet, the vacuum appears to be non-ontological, so that it must be a place full of activity. How to reconcile all these apparently conflicting features is not at all understood.Footnote 6

The vacuum fluctuations may be seen as one of the primary causes of non-vanishing, non-local correlations, such as the mouse-dropping function of Sect. 3.7. Without the vacuum fluctuations, it would be difficult to understand how these correlations could be sustained.

7.6 A Remark About Scales

Earlier, we raised the question in what way our quantum world differs from a more classical, chaotic system such as the Van der Waals gas. There is one important aspect, which actually may shed some further light on some of the ‘quantum peculiarities’ that we encounter.

The picture of our world that we obtain seems to be as follows. Imagine a screen displaying the evolution of our cellular automaton. We imagine its pixels to have roughly the size of one Planck length, \(10^{-33}~\mbox{cm}\). All possible states seem to occur, so that our screen may appear to feature mainly just white noise. Now, the scale of atoms, molecules and sub-atomic particles is roughly \(10^{-8}\) to \(10^{-15}~\mbox{cm}\), typically some 20 orders of magnitude larger. It is as if we are looking at a typical computer screen from a distance of roughly one light year.

However, imagine that we flip just one pixel from 0 to 1 or back, without touching any of its neighbours. This is the action of an operator that modifies the energy of the system by typically one unit of Planck energy, which is roughly the kinetic energy of a moderately sized airplane. Therefore:

flipping one pixel has a formidable effect on the state we are looking at.
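To check the orders of magnitude quoted here (an editorial estimate; the airplane mass and speed below are merely illustrative figures):

$$\begin{aligned} E_{\mathrm{Planck}}=\sqrt{\hbar c^{5}/G}\approx2\times10^{9}~\mbox{J},\qquad{\textstyle{1\over 2}}mv^{2}\approx{\textstyle{1\over 2}}\bigl(2\times10^{5}~\mbox{kg}\bigr) \bigl(250~\mbox{m/s}\bigr)^{2}\approx6\times10^{9}~\mbox{J}, \end{aligned}$$

so a single flipped Planck-sized pixel indeed carries an energy comparable to the kinetic energy of a large airliner.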

If we want to make less energetic changes, we have to spread the change in information over a domain that is typically thousands of times bigger than the Planck size, so that it carries much less energy. This means that

States that are easier to encounter in ordinary systems would require flipping millions of pixels, not just one.

It is not obvious what we have to conclude from this. It may well be that we have to connect this observation with our ideas of information loss: in making a change representing information that is not easily lost, we will find that millions of pixels are involved.

Finally, when we apply an operator affecting some \((10^{20})^{3}~\mbox{pixels}\), we achieve a state where a tiny atom or molecule is flipped to another quantum state. Thus, although our system is deterministic, modifying just one single pixel is effectively forbidden; this is not what can be achieved in simple quantum experiments. It is important to realize this fact when discussing the vacuum correlation functions in connection with Bell’s theorem and similar topics.

7.7 Exponential Decay

Vacuum fluctuations must be the primary reason why isolated systems, such as atoms and molecules in empty space, show typical quantum features. A very special quantum mechanical property of many particles is the way they can decay into two or more other particles. Nearly always, this decay follows a perfect exponential decay law: the probability \(P(t)\) that a particle of a given type has not yet decayed after an elapsed time \(t\) obeys the rule

$$\begin{aligned} {\mathrm{d}P(t)\over \mathrm{d}t}=-\lambda P(t), \quad\rightarrow\quad P(t)=P(0)e^{-\lambda t}, \end{aligned}$$
(5.29)

where \(\lambda\) is a coefficient that often does not at all depend on external circumstances, or on time. If we start off with \(N_{0}\) particles, then the expectation value \(\langle N(t) \rangle\) of the particle number \(N(t)\) after time \(t\) follows the same law:

$$\begin{aligned} \bigl\langle N(t)\bigr\rangle =N_{0} e^{-\lambda t}. \end{aligned}$$
(5.30)

If there are various modes in which the particle can decay, we have \(\lambda =\lambda_{1}+\lambda_{2}+\cdots ,\) and the ratios of the \(\lambda _{i}\) equal the branching ratios of the observed decay modes.

Now how can this be explained in a deterministic theory such as a cellular automaton? In general, this would not be possible if the vacuum were a single ontological state. Consider particles of a given type. Each individual particle will decay after a different amount of time, precisely in accordance with Eq. (5.29). Also, the directions in which the decay products fly will be different for every individual particle, and, if there are three or more decay products involved, the energies of the various decay products will also follow a probability distribution. For many existing particles, these distributions can be accurately calculated following the quantum mechanical prescriptions.

In a deterministic theory, all these different decay modes would have to correspond to distinct initial states. This would be hopeless to accommodate if every individual particle had to behave like a ‘glider solution’ of a cellular automaton, since all these different decay features would have to be represented by different gliders. One would quickly find that millions, or billions, of different glider modes would have to exist.

The only explanation of this feature must be that the particles are surrounded by a vacuum that is in a different ontological state every time. A radioactive particle is continuously hit by fluctuating features of its surrounding vacuum. These features represent information that flies around and, as such, must be represented by almost perfect random number generators. The decay law (5.29) thus emerges naturally.

Thus it is inevitable that the vacuum state has to be regarded as a single template state, which, however, is composed of infinitely many ontological states. The states consisting of a single particle roaming in a vacuum form a simple set of different template states, all orthogonal to the vacuum template state, as dictated by the Fock space description of particle states in quantum field theory.
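The statistical mechanism described above can be mimicked in a few lines (a deliberately crude sketch, added here; the pseudorandom generator merely stands in for the fluctuating vacuum, and all parameter values are arbitrary): let each surviving particle be hit, at every time step, by a fresh sample from a deterministic pseudorandom ‘environment’, and let it decay whenever that sample falls below \(\lambda\,\delta t\). The survival curve then follows the exponential law (5.29)–(5.30).

```python
# Crude sketch: exponential decay emerging from a pseudorandom "environment".
# Illustrative only; the pseudorandom generator stands in for the fluctuating vacuum.
import random

random.seed(1)                       # deterministic source of "vacuum" fluctuations
lam, dt = 0.2, 0.05                  # decay constant lambda and time step (arbitrary units)
n0, n_steps = 100_000, 60

alive = n0
for step in range(1, n_steps + 1):
    # each surviving particle is hit by an independent vacuum sample this step
    alive = sum(1 for _ in range(alive) if random.random() > lam * dt)
    if step % 20 == 0:
        t = step * dt
        expected = n0 * (1.0 - lam * dt) ** step     # ~ n0 * exp(-lam * t) for small dt
        print(f"t = {t:4.1f}   simulated <N(t)> = {alive:6d}   "
              f"exponential law = {expected:9.1f}")
```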

7.8 A Single Photon Passing Through a Sequence of Polarizers

It is always illustrative to reduce a difficulty to its most basic form. The conceptual difficulty one perceives in Bell’s gedanken experiment, already shows up if we consider a single photon, passing through a sequence of polarization filters, \(F_{1}, \dots,F_{N}\). Imagine these filters to be rotated by angles \(\varphi_{1},\varphi _{2},\dots,\varphi_{N}\), and every time a photon hits one of these filters, say \(F_{n}\) , there is a probability equal to \(\sin^{2}\psi_{n}\), with \(\psi _{n}=\varphi _{n-1}-\varphi_{n}\), that the photon is absorbed by this filter. Thus, the photon may be absorbed by any of the \(N\) polarizers. What would an ontological description of such a setup be like?

Note that the setup described here resembles our description of a radioactive particle, see the previous subsection. There, we suggested that the particle is continuously interacting with the surrounding vacuum. Here, it is easiest to assume that the photon interacts with all filters. The fact that the photon arrives at filter \(F_{n}\) as a superposition of two states, one that will pass through and one that will be absorbed, means that, in the language of the ontological theory, we have an initial state that we do not quite know: there is a probability \(\cos^{2}\psi_{n}\) that we have a photon of the pass-through type, and a probability \(\sin^{2}\psi_{n}\) that it is of the type that will be absorbed. If the photon passes through, its polarization is again well-defined to be \(\varphi_{n}\). This determines the distribution of the possible states that may or may not pass through the next filter.

We conclude that the filter, being essentially classical, can be in a very large number of distinct ontological states. The simplest of all ontological theories would have it that a photon arrives with polarization angle \(\psi_{n}\) with respect to the filter. Depending on the ontological state of the filter, we have probability \(\cos^{2}\psi_{n}\) that the photon is allowed through, but rotated to the direction \(\varphi _{n}\), and probability \(\sin^{2}\psi_{n}\) that it is absorbed (or reflected and rotated).
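As a sanity check of this ‘simplest of all ontological theories’ (an illustration added here, with arbitrary filter angles), one can simulate a photon that carries nothing but a polarization angle: at each filter the hidden state of the filter is sampled, the photon passes with probability \(\cos^{2}\psi\) and has its angle reset to the filter’s orientation, and otherwise it is absorbed. The fraction of photons surviving the whole chain reproduces the standard quantum prediction \(\prod_{n}\cos^{2}(\varphi_{n-1}-\varphi_{n})\).

```python
# Sketch: a single hidden angle reproduces Malus-law statistics for a filter chain.
# The filter angles below are arbitrary illustrative choices.
import math, random

random.seed(2)
phi = [0.0, 0.3, 0.9, 1.1, 1.6]      # orientations of filters F_1 ... F_N (radians)
trials = 200_000

survived = 0
for _ in range(trials):
    angle = phi[0]                   # photon prepared with polarization along F_1
    passed = True
    for f in phi[1:]:
        psi = angle - f              # angle between photon polarization and filter
        # the filter's hidden (ontological) state decides: pass with prob cos^2(psi)
        if random.random() < math.cos(psi) ** 2:
            angle = f                # polarization is reset to the filter orientation
        else:
            passed = False           # absorbed (or reflected)
            break
    survived += passed

quantum = math.prod(math.cos(phi[i - 1] - phi[i]) ** 2 for i in range(1, len(phi)))
print(f"simulated survival fraction: {survived / trials:.4f}")
print(f"quantum prediction:          {quantum:.4f}")
```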

Now, return to the two entangled photons observed by Alice and Bob in Bell’s set-up. In the simplest of all ontological theories, this is what happens: Alice’s and Bob’s filters act exactly as above. The two photons both carry the same information, in the form of a single angle \(c\). Alice’s filter has angle \(a\), Bob’s has angle \(b\). As we saw in Sect. 3.7, there is a 3-point correlation between \(a\), \(b\) and \(c\), given by the mouse-dropping function, (3.23) and Fig. 3.2.

Now note that the mouse-dropping function is invariant under rotations of \(a\), \(b\) and/or \(c\) by \(90^{\circ}\). The nature of the ontological state depends very precisely on the angles \(a\), \(b\), and \(c\), but each of these states differs from the others by multiples of \(90^{\circ}\) in these angles. Therefore, as soon as we have picked the desired orthonormal basis, the basis elements will be entirely uncorrelated. This makes the correlations unobservable whenever we work with the templates. Assuming the ontological conservation law to be at work here, we find that the ontological nature of the angles \(a\), \(b\) and \(c\) is correlated, but not the physical observables. It is to be expected that correlations of this sort will pervade all of physics.

Our description of a photon passing through a sequence of polarization filters requires that the ontological initial state includes the information as to which of the filters actually absorbs (or reflects) the photon. According to standard quantum mechanics, this is fundamentally unpredictable. Apparently, this means that the exact ontological state of the initial photon cannot be known when it occurs. This makes our ‘hidden variables’ invisible. Due to the conspicuous correlation functions of these initial states, an observer of the hidden variables would have access to information that is forbidden by the Copenhagen doctrine. We suspect this to be a special—and very important—property of the cellular automaton.

7.9 The Double Slit Experiment

Now that we have some idea of how quantum mechanics should be explained in terms of a cellular automaton, one might consider settings such as the double slit experiment. It actually makes more sense to consider more general optical settings with screens with openings in them, lenses, polarizers, birefringent elements, Stern–Gerlach splitters, etc.

The general question would be how to understand how given \(|\mathrm {in}\rangle\) states lead to given \(|\mathrm{out}\rangle\) states. The more specific question is how this could yield interference patterns, and in particular, how these could become dependent on phase angles, which are generally thought of as typical quantum phenomena.

As for the first question, our general theory says that the number of possible in-states and the number of possible out-states are huge, and transitions can occur between many in-states and many out-states. As was explained before, the amplitudes that are obtained in the end actually represent the probabilities that were put in when we constructed the initial states; this is how Born’s probabilities eventually came out.

The phases emerge primarily when we consider the time dependence of the states. All ontological states were postulated to obey evolution equations where the evolution operators \(U(t)\) were written as \(e^{-iHt}\), where \(H\) is the Hamiltonian. In our simplified ontological models, we see what these phase angles really mean: if our system tends to become periodic with period \(T\), the phase factor \(e^{-iHT}\) returns to one. The phase therefore indicates the position of an ontological variable in its periodic orbit.

This should explain everything. All ontological variables consist of basic elements that are periodic in time. The probability for a given in-state to evolve into a given out-state depends on where in its periodic orbit it hits. If there are two or more paths from a given in-state to a given out-state, the probability increases when the two paths are in phase and decreases if they are completely out of phase. Of course this is true if these ontological variables are classical waves, in which case this is the standard interference phenomenon, as is the case with photons. The ontological variables associated to photons are essentially the Maxwell fields. Now we see that this is more generally true. All ontological variables, in their most pristine form, must apparently be periodic in time, and if there are many ways for one ontological state to evolve into another ontological state, the probabilities depend on the extent to which one phase angle is reached in more different ways (more probably) than the opposite phase angle.
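A minimal numerical illustration of this phase bookkeeping (added here; the numbers are arbitrary): assign to each path the phase \(e^{-i\omega t}\) accumulated by a periodic ontological variable, add the amplitudes of two paths, and the resulting probability oscillates between enhancement (paths in phase) and suppression (paths out of phase).

```python
# Two paths from one in-state to one out-state: amplitudes add, probabilities interfere.
# Arbitrary illustrative numbers; phases come from a periodic variable of frequency omega.
import cmath, math

omega = 2 * math.pi          # one full revolution per unit time of the periodic variable
t1 = 1.00                    # time spent along path 1
for t2 in (1.00, 1.25, 1.50):                    # path 2: in phase, quarter out, opposite
    a1 = cmath.exp(-1j * omega * t1) / math.sqrt(2)
    a2 = cmath.exp(-1j * omega * t2) / math.sqrt(2)
    prob = abs(a1 + a2) ** 2                     # ranges from 2 (in phase) down to 0
    print(f"t2 = {t2:.2f}   relative probability = {prob:.2f}")
```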

We do emphasize that this is not a very familiar formulation for classical processes. What we are looking at here is the ultimate physics relevant near Planckian scales, where much of what is going on will be new to physicists.

8 The Quantum Computer

Quantum mechanics is often endowed with mysterious features. There are vigorous attempts to turn some of these features to our advantage. One famous example is the quantum computer. The idea is to use entangled states of information carriers, which could be photons, electrons, or something else; these carriers can represent vastly more information than ordinary bits and bytes, and are therefore called qubits.

Since the machines that investigators plan to construct would obey ordinary quantum mechanics, they should behave completely in accordance with our theories and models. However, this seems to lead to contradictions.

In contrast with ordinary computers, the amount of information that can be carried by the qubits of a quantum computer increases, in principle, exponentially with the number of cells, and consequently it is expected that quantum computers will be able to perform calculations that are fundamentally impossible on ordinary computers. In principle, an ordinary, classical computer would never be able to beat a quantum computer, even if it had the size of the universe.

Our problem is then that our models underlying quantum mechanics are classical, and therefore they can be mimicked by classical computers, even if an experimentalist were to build a ‘quantum computer’ in such a world. Something is wrong.

However, quantum computers of this kind still have not been constructed. There appear to be numerous practical difficulties. One difficulty is the almost inevitable phenomenon of decoherence. For a quantum computer to function impeccably, one needs to have perfect qubits.

It is generally agreed that one cannot make perfect qubits, but what can be done is correct them for the errors that may sometimes occur. In a regular computer, errors can easily be corrected, by using a slight surplus of information to check for faulty memory sites. Can the errors of qubits also be corrected? There are claims that this can be done, but in spite of that, we still don’t have a functioning quantum computer, let alone a quantum computer that can beat all classical computers. Our theory comes with a firm prediction:

Yes, by making good use of quantum features, it will be possible, in principle, to build a computer vastly superior to conventional computers, but no, these will not be able to function better than a classical computer would do if its memory sites were scaled down to one per Planckian volume element (or perhaps, in view of the holographic principle, one memory site per Planckian surface element), and if its processing speed were increased accordingly, to typically one operation per Planckian time unit of \(10^{-43}\) seconds.

Such scaled classical computers can of course not be built, so that this quantum computer will still be allowed to perform computational miracles, but factoring a number with millions of digits into its prime factors will not be possible—unless fundamentally improved classical algorithms turn out to exist. If engineers ever succeed in making such quantum computers, it seems to me that the CAT is falsified; no classical theory can explain perfect quantum computers.
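To attach rough numbers to the comparison made in this prediction (an editorial back-of-the-envelope estimate; the Planck length and the choice of a cubic metre are the only inputs, and nothing here is meant as a statement about actual hardware):

```python
# Rough comparison: classical Planck-density memory vs. the state space of n qubits.
# Editorial estimate; all numbers are order-of-magnitude only.
import math

planck_length = 1.6e-35                    # metres
cells_per_m3 = (1.0 / planck_length) ** 3  # one classical memory site per Planck volume
print(f"classical sites per cubic metre: ~10^{math.log10(cells_per_m3):.0f}")

# A register of n qubits spans 2^n complex amplitudes; find where that overtakes
# the Planck-density classical memory of a cubic metre.
n = math.ceil(math.log2(cells_per_m3))
print(f"2^n exceeds that number already for n = {n} qubits")
```

A register of a few hundred perfect qubits would already span more amplitudes than there are Planck-sized memory sites in a cubic metre, which is what makes the prediction above both sharp and falsifiable.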