
It may take years, decades, perhaps centuries to arrive at a comprehensive theory of quantum gravity, combined with a theory of quantum matter that will be an elaborate extension of the Standard Model. Only then will we have the opportunity to isolate the beables that are the basic ingredients for an ontological theory; the templates of that theory will be projected onto the beable basis. Only then can we tell whether the CAI really works. Conceivably however, we may be able to use the CAI as a guide to arrive at such theories. That was the main motivation for writing this book.

1 What Will Be the CA for the SM?

There are numerous problems remaining. The first of these was encountered by the author right away: how to convince a majority of researchers who have been working in this field for many decades that the views expressed here may well be basically correct. In particular, we state the most important conclusions:

  • There is only a single, essentially classical, discrete universe, not an infinity of distinct universes as advocated in the Many Worlds Interpretation, whether or not these worlds are guided by a pilot wave that is active in all of them.

  • The Born probabilities hold exactly, requiring no corrections of any form, since they were put into the state vectors (templates) to describe the probability distribution of the initial states. What was put in comes out unchanged: the Born probabilities.

  • The “collapse of the wave function” takes place automatically, without requiring any corrections to Schrödinger’s equation, such as non-linearities. This is because the universe has been in a single ontological state from the Big Bang onwards, and the final result of an experiment will also always be a single ontological state. The final state can never be a superposition, such as a live cat superposed with a dead cat.

  • The underlying theory may well be a local one, but the transformation of the classical equations into the much more efficient quantum equations involves some degree of non-locality. This non-locality leaves no trace in the physical equations, apart from the well-known, apparent ‘quantum miracles’.

  • It is very much worthwhile to search for more models in which the transformation can be worked out in detail; this could lead to a next generation of Standard Models, with ‘cellular automaton restrictions’ that can be tested experimentally.
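The second point above can be made concrete in a toy model. If the ontological evolution is a permutation of a finite set of beable states, the induced operator on the Hilbert space spanned by those states is a permutation matrix, hence unitary, and any Born probabilities assigned to the initial state are transported unchanged. A minimal sketch (the four-state automaton and its particular update rule are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Toy ontological evolution: a permutation of 4 beable states,
# state i -> perm[i] at each time step (an assumed example rule).
perm = [2, 0, 3, 1]
N = len(perm)

# Induced evolution operator on the Hilbert space spanned by the
# beable basis: a permutation matrix, which is automatically unitary.
U = np.zeros((N, N))
for i, j in enumerate(perm):
    U[j, i] = 1.0
assert np.allclose(U @ U.T, np.eye(N))  # unitarity

# A "template" state encoding Born probabilities for the initial beables.
psi = np.sqrt(np.array([0.4, 0.3, 0.2, 0.1])).astype(complex)

# Evolve: the probabilities are merely permuted, never distorted.
for _ in range(3):
    psi = U @ psi
print(np.round(np.abs(psi) ** 2, 2))  # prints [0.2 0.4 0.1 0.3]
```

Whatever probability distribution is put into the template at the start reappears, merely relabelled, at every later time; nothing in such dynamics can distort it.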

The problem of how to set up such searches in a systematic way is very challenging indeed. Presumably a procedure corresponding to second quantization has to be employed, but as yet it is only clear how to do this for fermionic fields. The problem is then that we must also replace the Dirac equation for the first-quantized particles by a deterministic one. This could be done for free, massless particles, which is not a bad start, but it is not good enough to proceed. We also have some rudimentary ideas about bosonic, force-carrying fields, as well as a suggestive string-like excitation (worked out further in Part II), but again, it is not yet known how to combine these into something that can compete with the Standard Model as we know it today.
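For the free, massless case mentioned above, the simplest deterministic model is a shift automaton: the entire configuration moves one lattice site per time step, a discrete analogue of a single-chirality massless particle obeying \(\partial_t\psi=-\partial_x\psi\). A minimal one-dimensional sketch (lattice size, boundary conditions, and initial data are illustrative assumptions):

```python
from collections import deque

# Deterministic automaton for a free, massless, right-moving excitation
# in one dimension: the whole bit field shifts one site per time step.
L = 8
field = deque([0] * L)
field[2] = 1  # one excitation ("particle") at site 2

def step(f):
    """One deterministic, reversible update: shift right by one site,
    with periodic boundary conditions (x -> x + 1 per time step)."""
    f.rotate(1)

for _ in range(3):
    step(field)

print(list(field))  # the excitation has moved from site 2 to site 5
```

Because each step is a permutation of the configurations, the evolution is exactly reversible and can equally be written as a unitary operator, which is what makes a quantum description of this classical rule possible in the first place.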

2 The Hierarchy Problem

There is a deeper reason why any detailed theory involving the Planck scale, quantum mechanics and relativity may be extremely difficult to formulate. This is the empirical fact that there are two or more radically different scales of very special significance: the Planck scale and the mass scale(s) of the most significant particles in the system. The amazing thing about our world is that these various scales are many orders of magnitude apart. The Planck scale is at \(\approx10^{19}~\mbox{GeV}\), the nuclear scale is at \(\approx1~\mbox{GeV}\), while there are also electrons, and finally neutrinos at some \(10^{-11}~\mbox{GeV}\).

The origin of these large differences in scales, which are essential for the universe to function the way it does, is still entirely obscure. We could add to this that there are very few experiments that reach accuracies better than 1 part in \(10^{11}\), let alone \(10^{19}\), so that it is questionable whether any of the fundamental principles pertaining to one scale are still valid at another; they could well be, but everything could also be different. There is no lack of speculative ideas to explain the origins of these numbers. The simplest observation one can make is that fairly down-to-earth mathematics can easily generate numbers of such magnitudes, but to make them arise naturally in fundamental theories of Nature is not easy at all.

Most theories of Planck scale physics, such as superstring theory and loop quantum gravity, make no mention of the origins of large numbers, whereas, we believe, good theories should.¹ In discrete cellular automata, one can certainly observe that, if lattices in space and time play any role, special things may happen when the lattice is exactly regular: such lattices have zero curvature. The curvature of our universe is extremely tiny, being controlled by numbers even more extreme than the hierarchy scales mentioned: the cosmological constant is described by a dimensionless number of the order of \(10^{-122}\). This might mean that we are indeed dealing with a regular lattice, but one that must accommodate rare lattice defects.

In general, a universal theory must explain the occurrence of very rare events, such as the mass terms causing zitterbewegung in fermions like the electron. We do believe that cellular automaton models are well positioned to allow for such very rare, special events, but it is far too early to try to understand them.

In short, the most natural way to incorporate hierarchies of scales into our theory is not yet clear.