Paradigm shifts—or scientific revolutionsFootnote 1—happen in the dark. The established scientific worldview continues to dominate the discourse, while a pocket of resistance emerges. Within this heretical breeding ground, the challenges threatening the orthodox view are, relentlessly and uncompromisingly, being addressed. Unbeknownst to most, a handful of brave pioneers is questioning the status quo. They believe that the by now glaring cracks in the current edifice of knowledge warrant not only the contemplation of radical new ideas but, crucially, the abandonment of many cherished assumptions.

Today, our understanding of the world and ourselves is challenged on three fronts. We do not understand the relationship between our own consciousness and physical reality (Chap. 11). Unexpectedly, the very nature of this reality, as far as we can probe, appears truly outlandish and bizarre (Chap. 10). Even the nature of physical laws and knowledge seems elusive (Chap. 9). Overall, existence itself—including the uniquely improbable cosmic evolution leading to this very moment in time—is unfathomable (Chap. 8). We are lacking a foundational understanding of the world. In essence, the current materialistic and reductionistic scientific worldview appears to have reached its limits. After all the amazing success in decoding the workings of the world (Chaps. 2–7), giving us the gift of technology, sadly, this edifice of knowledge now seems outdated and ineffective. However, what should fill the void once we retire this current sketch of existence? The answer is: information.

Information is an elusive concept. However, it can be formalized and mathematized. Indeed, information is the very notion that unlocked the latest, and most dramatic, technological surge: the emergence of information-processing capabilities. In the words of Floridi (2014, p. 4):

The information society has been brought about by the fastest growing technology in history. No previous generation has ever been exposed to such an extraordinary acceleration of technological power over reality, with the corresponding social changes and ethical responsibilities. [...] The computer presents itself as a culturally defining technology and has become a symbol of the new millennium, playing a cultural role far more influential than that of mills in the Middle Ages, mechanical clocks in the seventeenth century, and the loom or the steam engine in the age of the Industrial Revolution.

Moreover, information is the unifying thread connecting aspects of classical physics (Sect. 2.1), complexity theory (Chap. 6), quantum mechanics (Sects. 4.3.4 and 10.3.2), cosmology (Sects. 4.1 and 10.1.2), string/M-theory (Sects. 4.3.2 and 10.2.2), and loop quantum gravity (Sect. 10.2.3). Slowly, a computational and information-theoretic approach to reality is emerging. Specifically, information is a prime candidate for the foundations of the world. Indeed (Davies 2014, p. 95):

[A]n alternative view is gaining in popularity: a view in which information is regarded as the primary entity from which physical reality is built. It is popular among scientists and mathematicians who work on the foundations of computing, and physicists who work in the theory of quantum computing.

The eminent physicist John Wheeler was one of the first human minds to realize this.

1 The Many Faces of Information

An interesting dichotomy emerged. At the same time as the notion of matter started to disintegrate and dematerialize (Sect. 10.4.1 and Davies and Gregersen 2014), the intangible concept of information became robust. Both developments were unexpected.

1.1 The Philosophy of Information

The philosopher Luciano Floridi has laid out a philosophy of information (Floridi 2010, 2014). The overlap of philosophy and information is seen as very fruitful (Floridi 2014, p. 16):

PI [the philosophy of information] possesses one of the most powerful conceptual vocabularies ever devised in philosophy.

Philosophy is now understood as “conceptual engineering.” Notwithstanding (Floridi 2014, p. 30):

What is information? This is the hardest and most central problem in PI and this book could be read as a long answer to it. Information is still an elusive concept. This is a scandal not by itself, but because so much basic theoretical work relies on a clear analysis and explanation of information and of its cognate concepts.

The problem is that (Floridi 2010, p. 1):

Information is notorious in coming in many forms and having many meanings. It can be associated with several explanations, depending on the perspective adopted and the requirements and desiderata one has in mind.

A general definition of information is the following:

Definition 13.1

\(\sigma \) is an instance of information, understood as semantic content, if and only if:

  1. \(\sigma \) consists of n data, for \(n \ge 1\);

  2. the data are well formed;

  3. the well-formed data are meaningful.

See Floridi (2010). Some of the approaches trying to capture the enigma of information are related to probability spaces (Bar-Hillel and Carnap 1953), algorithmic information theory (Chaitin 2003), and data spaces (Floridi 2014). The constructor theory of information (Deutsch and Marletto 2015) aims at “a physical theory of the regularities in the laws of physics required for there to exist what has been vaguely referred to as ‘information’” (Durham and Rickles 2017, p. 104). However, the most successful approach is known as information theory.

1.2 The Computability of Information

In 1948, the engineer and mathematician Claude Shannon published a seminal paper (Shannon 1948). Information theory was born (Guizzo 2003):

In that paper, Shannon defined what the once fuzzy concept of “information” meant for communication engineers and proposed a precise way to quantify it—in his theory, the fundamental unit of information is the bit. He also showed how data could be “compressed” before transmission and how virtually error-free communication could be achieved. The concepts Shannon developed in his paper are at the heart of today’s digital information technology. CDs, DVDs, cell phones, fax machines, modems, computer networks, hard drives, memory chips, encryption schemes, MP3 music, optical communication, high-definition television—all these things embody many of Shannon’s ideas and others inspired by him.

It is, however, worth noting that (Davies and Gregersen 2014, p. 5):

When the foundation of information theory was laid down by Shannon, he purposely left out of the account any reference to what the information means, and dwelt solely on the transmission aspects.

Notwithstanding, Shannon reduced the notion of information to a pragmatic and tangible entity by providing an operational definition. He took the binary digit, \(d \in \{0,1\}\), to be the fundamental unit in information theory. Now, the information content of any kind of message can be encoded using binary digits—called bits. This subtle and seemingly unremarkable shift in perspective had huge ramifications.
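To make this concrete, here is a minimal sketch, in Python, of how an arbitrary text message can be reduced to a string of bits. The choice of message and of UTF-8 as the character encoding are illustrative assumptions only.

```python
# A minimal sketch: reducing a text message to Shannon's binary digits.
# The UTF-8 character encoding is an illustrative choice.
message = "it from bit"

# Encode each byte of the message as 8 binary digits (bits).
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

print(bits)       # e.g. '0110100101110100...'
print(len(bits))  # 11 characters x 8 bits = 88 bits
```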

For one, information theory utilizes discrete mathematics. The unbridgeable infinity of real numbers between 0 and 1 is overcome by postulating two binary states. In essence, information is quantized, similar to the idea Max Planck invoked at the genesis of quantum physics (Sect. 4.3.4). A new formal model emerged (Hromkovič 2010) from discrete mathematics (Steger 2001; Biggs 2003), rivaling the success of its infinite cousin, called continuous mathematics. Recall the tension and unity of these two mathematical branches recounted in Sect. 5.3.

The discrete binary system of data encoding has advantages over its analogue counterpart, characterized by the continuous and smooth. Bits represent the common ground where semantics, logic, and the physical can converge. This follows from the many possibilities of physically representing the true-false logic of Boolean algebra (Boole 1854) as transistors, switches, circuits, tapes, and CDs. As a result (Floridi 2010, p. 29):

[I]t is possible to construct machines that can recognize bits physically, behave logically on the basis of such recognition, and therefore manipulate data in a way we find meaningful.

However, the most powerful aspect of digital computation is the possibility of error-correction (Deutsch 2011, p. 140):

Without error-correction all information processing, and hence all knowledge-creation, is necessarily bounded.

Perhaps the most fruitful concept emerging from this novel digital, computational, and informational paradigm is universality. This notion goes back to Alan Turing and his infamous Turning machine, laying the benchmark for universality, or Turing completeness. A universal computer can perform any possible mathematical manipulation (Turing 1936; Church 1936; Turing 1938). In other words, computers can access any level of mathematical complexity without the need of being complex themselves. Essentially (Seife 2007, p. 18):

This means that you can, in theory, do the most complicated algorithms, the most intricate computerized tasks, if you are able to read, write, or erase a mark on a tape and move the tape around.

Just as complexity emerges from elegant simplicity (Sect. 5.2.2), so too is the computability of reality encoded in simple rules. Moreover, universal computers can simulate each other efficiently. As a result, the messy details of any particular instantiation of a computer can be disregarded. Even Turing's cumbersome but smallest possible skeleton of computation suffices to unlock the magic of universal information processing. Today, the work of Turing has been extended to the Church–Turing–Deutsch principle: a universal computing device can simulate every physical process (Deutsch 1985).
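To illustrate Seife's point, the following sketch implements a bare-bones Turing-style machine: a tape, a read/write head, and a small rule table. The particular rule table, which increments a binary number, is a toy example chosen for illustration and is not Turing's original construction.

```python
# A bare-bones "read, write, erase a mark and move the tape around" machine.
def run(tape, head, state, rules):
    """Run a tiny Turing-style machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))              # sparse tape: position -> symbol
    while state != "halt":
        symbol = tape.get(head, "_")          # '_' denotes a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy rule table for binary increment: start at the rightmost digit in state 'carry'.
rules = {
    ("carry", "1"): ("0", "L", "carry"),      # 1 + carry -> 0, propagate the carry
    ("carry", "0"): ("1", "L", "halt"),       # 0 + carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),       # ran off the left end: prepend a 1
}

print(run("1011", head=3, state="carry", rules=rules))  # 1011 + 1 = 1100
```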

Intriguingly, the computational structure emerging from bits of information mirrors the inherent paradoxes found in mathematics. Recall how Kurt Gödel single-handedly brought mathematics to its knees with his incompleteness theorems (Sect. 2.2). Turing’s halting problem (Sect. 9.4.1) goes a step further by defining the general concept of a formal system (Gleick 2011, p. 212):

Any mechanical procedure for generating formulas is essentially a Turing machine. Any formal system, therefore, must have undecidable propositions. Mathematics is not decidable. Incompleteness follows from uncomputability.

Finally, Gregory Chaitin uncovered the inherent randomness in mathematics by introducing uncomputable numbers and extending the legacy yet again (Sect. 9.4.1). Unfortunately, Turing’s own life ended tragically (Seife 2007, p. 20):

Sadly, Turing himself would not play a major role in the newborn science of information theory. In 1952, Turing, a homosexual, pleaded guilty to charges of “gross indecency” for his dalliance with a nineteen-year-old boy. To avoid imprisonment, he consented to undergo “treatment”—a set of hormone injections that were supposed to end his sexual proclivities. They didn’t, and his “moral turpitude” was a stain that he never recovered from. Two years later, the tortured Turing apparently killed himself with cyanide.

1.3 Information is Physical

Up to now, the notion of information has remained intangible. Even if we encode data as bits, the content, representation, and ontology of information appear separate. How, then, can information be physical? In other words, what link establishes the relationship between the ethereal nature of information and its physicality?

The first hint was given by Shannon himself. He reinterpreted the notion of entropy, found in thermodynamics, in an information-theoretic context. Thermodynamics—pioneered by Rudolf Clausius, William Thomson (Lord Kelvin), James Clerk Maxwell, Ludwig Boltzmann, and Josiah Willard Gibbs—offers two laws:

Theorem 13.1

It holds that:

  1. The energy of the universe is constant.

  2. The entropy of the universe always increases.

See Huang (1987). The notion of entropy was forced upon physicists, as thermodynamics required a novel measure which somehow corresponded to the unavailability of energy. This entropy turned out to be just as measurable as temperature, volume, or pressure. In statistical mechanics, it represents the measure of uncertainty about the state of a physical system. More loosely, it is an expression of the disorder, or randomness, of a system.

Shannon reinterpreted physical entropy as a measure of the uncertainty about a message—the lack of information about it. Specifically, given all the possible messages a communication source can produce, how probable is a specific message? The answer is given by recasting the equation defining physical entropy:

$$\begin{aligned} H(X) = -\sum _{i=1}^n \mathrm {P}(X_i) \ln \left( \mathrm {P}(X_i) \right) , \end{aligned}$$
(13.1)

where P is a probability mass functionFootnote 2 and X is a discrete random variable with possible values \(\{X_1,\dots , X_n\}\) (Shannon 1948). This is a remarkable insight: physical and informational entropy share the same mathematical expression. Interestingly, such a link had already been discovered earlier, albeit published in a German physics journal. There, it was established that each unit of information brings a corresponding increase in entropy of \(k T \ln 2\) units, where k is Boltzmann's constant and T the environment's temperature (Szilárd 1929).
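A minimal numerical sketch of Eq. (13.1), computed in natural units (nats) to match the natural logarithm in the formula; the example probability distributions are arbitrary choices for illustration.

```python
import math

def entropy(probabilities):
    """Shannon entropy H(X) = -sum_i P(X_i) ln P(X_i), in nats (Eq. 13.1)."""
    return -sum(p * math.log(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: ln 2 ~ 0.693 nats (= 1 bit)
print(entropy([0.9, 0.1]))   # biased coin: ~0.325 nats, less uncertainty
print(entropy([1.0]))        # a certain message carries no information: 0
```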

The final step in unmasking information as physical, tying all the strings together, was taken by Rolf Landauer, a refugee from Nazi Germany working at IBM. In essence (Landauer 1996):

Information is not a disembodied abstract entity; it is always tied to a physical representation. It is represented by engraving on a stone tablet, a spin, a charge, a hole in a punched card, a mark on paper, or some other equivalent. This ties the handling of information to all the possibilities and restrictions of our real physical world, its laws of physics and its storehouse of available parts.

Landauer made the relationship of \(k T \ln 2\) per bit exact. For reversible computations the entropy does not increase, i.e., no heat is dissipated. In other words, processing information by flipping bits from zero to one and vice versa conserves information and entropy. In contrast, Landauer’s principle states that only the erasure of information—an irreversible operation—increases entropy (Landauer 1961, 1996). Information is physical: by deleting its physical manifestation as strings of bits, the universe reacts. Experiments have confirmed the validity of this principle (Bérut et al. 2012; Jun et al. 2014; Hong et al. 2016). In essence, the process of erasing a bit in one place transfers information to another place, in the form of heat. In other words:

Information cannot be destroyed.
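A quick numerical check of the \(k T \ln 2\) relationship: at an assumed room temperature of roughly 300 K, erasing a single bit must dissipate about \(3 \times 10^{-21}\) joules of heat.

```python
import math

k = 1.380649e-23   # Boltzmann's constant in J/K
T = 300            # assumed room temperature in kelvin

landauer_limit = k * T * math.log(2)   # minimum heat dissipated per erased bit
print(landauer_limit)                  # ~2.9e-21 J per bit
```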

Recently, physicists have been able to build a contraption which converts information into mechanical work (Paneru et al. 2018). The engine exceeds the conventional bound of the second law of (nonequilibrium) thermodynamics and, for the first time, achieves a bound set by a generalized second law of thermodynamics.

In his famous paper called Information is Physical, Landauer outlines the implications of information's physical properties for the nature of physical laws (Landauer et al. 1991). He also reminds us of the great physicist Wheeler.

2 It from Bit

John Archibald Wheeler was one of the pioneers who helped develop general relativity further (Misner et al. 1973). He coined the words "black hole" (Thorne 1995, p. 536) and "wormhole" (Misner and Wheeler 1957). Moreover, he was also involved in quantum mechanics, coining the term "quantum foam" (Thorne 1995, p. 536, also related to Sect. 10.1). He worked with Niels Bohr on nuclear fission (Bohr and Wheeler 1939). His interests included the interpretation of quantum mechanics (Wheeler and Zurek 1983). He also devised the famous delayed-choice experiment, a quantum enigma where a choice now appears to alter the past (Sect. 10.3.2.2). Wheeler contributed to the first attempt at devising a theory of quantum gravity, in the form of the Wheeler–DeWitt equation (Sect. 10.2.1). Finally, he was also involved in the study of quantum information. Notably, two of his former Ph.D. students discovered the important no-cloning theorem (Wootters and Zurek 1982), which underpins quantum encryption technology (Sect. 10.3.2.1). However, perhaps his most insightful work will turn out to be related to the nature of information (and consciousness, as discussed in the next chapter), igniting a potential paradigm shift (Gleick 2011, p. 9f.):

Increasingly, the physicists and the information theorists are one and the same. The bit is a fundamental particle of a different sort: not just tiny but abstract—a binary digit, a flip-flop, a yes-or-no. It is insubstantial, yet as scientists finally come to understand information, they wonder whether it may be primary: more fundamental than matter itself. They suggest that the bit is the irreducible kernel and that information forms the very core of existence. Bridging the physics of the twentieth and twenty-first centuries, John Archibald Wheeler, the last surviving collaborator of both Einstein and Bohr, put this manifesto in oracular monosyllables: “It from Bit.” Information gives rise to “every it—every particle, every field of force, even the spacetime continuum itself.” This is another way of fathoming the paradox of the observer: that the outcome of an experiment is affected, or even determined, when it is observed. Not only is the observer observing, she is asking questions and making statements that must ultimately be expressed in discrete bits. “What we call reality,” Wheeler wrote coyly, “arises in the last analysis from the posing of yes-no questions.” He added: “All things physical are information-theoretic in origin, and this is a participatory universe.” The whole universe is thus seen as a computer—a cosmic information-processing machine.

Wheeler offered a monumental shift in understanding the nature of reality. Quantum mechanics, and thus reality, is about information. Reality is quantized because information is quantized. "The bit is the ultimate unsplittable particle" (Gleick 2011, p. 357). If reality is built upon and utilizes the abstract notion of information, no wonder the quest to find a tangible foundation for it fails (Sect. 10.4.1). Moreover, consciousness is intrinsically woven into the informational fabric of existence. The notion that the world exists "out there," independent of the mind, is abandoned.

Wheeler proposed these unconventional views in an influential article called Information, Physics, Quantum: The Search for Links (Wheeler 1990). There, he sets out to depose some core concepts of the prevailing scientific worldview:

(1) The world cannot be a giant machine, ruled by any preestablished continuum physical law. (2) There is no such thing at the microscopic level as space or time or spacetime continuum.

The laws of nature are emergent and became manifested at the Big Bang; reality is a finite structure. In his own words (Wheeler 1990):

It from bit. Otherwise put, every it—every particle, every field of force, even the spacetime continuum itself—derives its function, its meaning, its very existence entirely—even if in some contexts indirectly—from the apparatus-elicited answers to yes or no questions, binary choices, bits. It from bit symbolizes the idea that every item of the physical world has at bottom—at a very deep bottom, in most instances—an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.

Reality is animated by information; subjective consciousness is intimately intertwined with objective reality. Wheeler offers five clues supporting his idea:

  1. A topological argument: The boundary of a boundary is zero.

  2. Without question, no answers exist.

  3. The super-Copernican principle rejecting "now-centeredness."

  4. Consciousness.

  5. Complexity.

He also discusses the Bekenstein bound, introduced below. The notion of consciousness is picked up in the next chapter. Regarding complexity and information processing, in the words of the quantum mechanical engineer Seth Lloyd (Lloyd 2014, p. 125f.):

Everywhere you look, you see immense variation and complexity. Why? How did the universe get this way? We know from astronomical observation that the initial state of the universe, fourteen billion years ago, was extremely flat, regular, and simple. Similarly, the laws of physics are simple: the known laws of physics could fit on the back of a T-shirt. Simple laws, simple initial state. So where did all of this complexity come from? The laws of physics are silent on this subject.

By contrast, the computational theory of the universe has a simple and direct explanation for how and why the universe became complex. The history of the universe in terms of information-processing revolutions, each arising naturally from the previous one, already hints at why a computing universe necessarily gives rise to complexity. In fact, we can prove mathematically that a universe that computes must, with high probability, give rise to a stream of ever-more-complex structures.

In a nutshell (Lloyd 2006, p. 3):

The computational capability of the universe explains one of the great mysteries of nature: how complex systems such as living creatures can arise from fundamentally simple physical laws.

In this context, recall Chap. 6. For details on complexity and information, see Haken (2006).

Today, some of the proponents of "it from bit" come from the field of quantum information and computation (Aspelmeyer et al. 2003; Nielsen and Chuang 2007). These are the practitioners grappling with the notion of information at the quantum level of reality. For instance, Anton Zeilinger has realized many important quantum information protocols (Bouwmeester and Zeilinger 1997; Poppe et al. 2004) alongside his work on the foundations of quantum mechanics (Nairz et al. 2003; Gröblacher et al. 2007; Giustina et al. 2013; Ma et al. 2012). He describes his views on information and quantum mechanics in the book Dance of the Photons: From Einstein to Quantum Teleportation (Zeilinger 2010). There, on page 267, we can read:

Information has a significant role in quantum physics, and that role seems to go beyond the role it plays in physics.

[...]

We can now make a very important observation. This is the observation that the concepts reality and information cannot be separated from each other.

However, he admits (quoted in Brockman 2006, p. 223):

What I believe but cannot prove is that quantum physics requires us to abandon the distinction between information and reality.

Zeilinger is convinced (quoted in Brockman 2006, p. 224):

Once you adopt the notion that reality and information are the same, all quantum paradoxes and puzzles—like the measurement problem [...]—disappear.

Then, Lloyd analyzes the computational properties and capacities of reality itself. In other words, do physical systems compute? Specifically, his career focuses on quantum computation (Lloyd 2006, p. 53):

A few years ago, acting on a suggestion from the physicist Richard Feynman, I showed that quantum computers can simulate any system that obeys the known laws of physics (and even those that obey as yet undiscovered laws!) in a straightforward and efficient way.

See Lloyd (1996). All interactions of elementary particles in the universe not only convey energy but crucially also information. In this sense the entire universe is computing reality. In his words (quoted in Wired Magazine 2006):

Atoms and electrons are bits. Atomic collisions are “ops” [logical operations per second]. Machine language is the laws of physics. The universe is a quantum computer.

In detail (Lloyd 2014, p. 125):

[N]ot only does the universe register and process information at its most fundamental level, as was discovered in the nineteenth century, it is literally a computer: a system that can be programmed to perform arbitrary digital computations.

Moreover (Lloyd 2002):

Merely by existing, all physical systems register information. And by evolving dynamically in time, they transform and process that information. The laws of physics determine the amount of information that a physical system can register (number of bits) and the number of elementary logic operations that a system can perform (number of ops).

Lloyd analyzed the physical limits of computation by asking what a physical system with a mass of one kilogram confined to a volume of one liter—the ultimate laptop—can compute. The answer is \(10^{51}\) operations per second on \(10^{31}\) bits, compared to today's laptops performing \(10^{10}\) operations per second on \(10^{10}\) bits (Lloyd 2000); a back-of-the-envelope derivation of the first figure is sketched at the end of this section. The universe is also a physical system. Lloyd placed an upper limit on its computational capacities: no more than \(10^{120}\) operations on \(10^{90}\) bits can have been performed over its entire history (Lloyd 2002). For Lloyd it is very clear (Wired Magazine 2006):

[E]verything in the universe is made of bits. Not chunks of stuff, but chunks of information—ones and zeros.

Physical systems interact in a language consisting of information, whose syntax is given by the laws of physics.
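Lloyd's figure for the ultimate laptop can be recovered, at the order-of-magnitude level, from the Margolus–Levitin bound, which limits a system of energy E to at most \(2E/(\pi \hbar)\) elementary operations per second. The sketch below assumes, for illustration, that the laptop's entire rest-mass energy \(E = mc^2\) is available for computation.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m = 1.0                  # mass of the "ultimate laptop" in kg

E = m * c**2                                # total energy available, ~9e16 J
ops_per_second = 2 * E / (math.pi * hbar)   # Margolus-Levitin bound
print(f"{ops_per_second:.1e}")              # ~5.4e+50, i.e. of order 10^51
```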

2.1 It from Qubit

A classical bit is in either one of two states—0 or 1. On the physical realization of this information, a digital computer performs its computations. Bestowing classical bits with the powers of the quantum realm unlocks a new level of computation. One key property of quantum systems is that they exist in a state of superposition until a measurement is made. For instance, an electron can have a spin “pointing” up or down (Sect. 3.2.2.1) or a photon can have a horizontal or vertical polarization. Let \(|{\uparrow } \rangle \) and \(|{\downarrow } \rangle \) denote the spin-up and spin-down states of an electron in bra-ket notation (Sect. 3.1.4), respectively. In general, electrons exist in a state of superposition

$$\begin{aligned} | \psi \rangle = a |{\uparrow } \rangle + b |{\downarrow } \rangle , \end{aligned}$$
(13.2)

where \(| \psi \rangle \) is related to the wave function, and a and b are complex numbers with \(|a|^2 + |b|^2 = 1\). In a sense, the electron simultaneously occupies the two opposite states. By encoding a classical bit using a quantum system, the binary digital information is augmented by a superposition of 0 and 1. A qubit is born (Schumacher 1995). In general (Grover 2001):

Just as classical computing systems are synthesized out of two-state systems called bits, quantum computing systems are synthesized out of two-state systems called qubits. The difference is that a bit can be in only one of the two states at a time, on the other hand a qubit can be in both states at the same time.

A qubit, represented by the state \(| \psi \rangle \), is a linear combination of the states corresponding to 0 and 1

$$\begin{aligned} | \psi \rangle = a | 0 \rangle + b | 1 \rangle . \end{aligned}$$
(13.3)

A classical bit can be examined to determine whether it is in the state 0 or 1. Remarkably, a qubit's quantum state cannot be determined in this way. In other words, there is no way of discovering the values of a and b from a single qubit. However, by measuring a qubit, the outcome is 0 with probability \(|a|^2\) or 1 with probability \(|b|^2\). Consequently, new ways of processing information emerge (Nielsen and Chuang 2007).
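A minimal sketch of this measurement rule: a qubit with amplitudes a and b yields 0 with probability \(|a|^2\) and 1 with probability \(|b|^2\). The specific (real) amplitudes below are an arbitrary choice for illustration.

```python
import random

# An arbitrary normalized qubit state a|0> + b|1>, here with real amplitudes.
a, b = 0.6, 0.8                       # |a|^2 + |b|^2 = 0.36 + 0.64 = 1

def measure(a, b):
    """Collapse the qubit: return 0 with probability |a|^2, else 1."""
    return 0 if random.random() < abs(a) ** 2 else 1

outcomes = [measure(a, b) for _ in range(100_000)]
print(outcomes.count(0) / len(outcomes))   # ~0.36: frequencies reveal |a|^2,
                                           # but a single shot never reveals a, b
```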

Next to superposition, quantum computers employ other quantum-mechanical phenomena, such as entanglement (Brown 2000). Stated loosely, this gives them more power, as they utilize a novel computational layer of reality. Consider a classical 2-bit system, where there are \(2^2 = 4\) possible states: (00), (01), (10), (11). Now, the corresponding 2-qubit system is described by \(| \psi \rangle = a_1 | 00 \rangle + a_2 | 01 \rangle + a_3 | 10 \rangle + a_4 | 11 \rangle \), where the \(a_i\) are the complex coefficients obeying \(\sum _i |a_i|^2=1\). In other words, the 2-qubit quantum system, corresponding to two classical bits, can utilize four bits of information in its computation. In general, n qubits are associated with \(2^n\) classical bits. It appears as if the quantum world can harness more computational power. However, there is a catch, as one has to distinguish between the quantum states and the actual information which is accessible. Indeed, to describe the state of n qubits requires \(2^n\) classical bits. Unfortunately, there is no way in which \(2^n\) classical bits can be stored using n qubits and then reliably read out later (Holevo 1973).
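This exponential bookkeeping is easy to make explicit: describing a general n-qubit state classically requires \(2^n\) complex amplitudes, which quickly becomes unmanageable. The memory figures in the sketch below assume, for illustration, 16 bytes per complex number.

```python
# Classical cost of storing a general n-qubit state: 2^n complex amplitudes.
# Assumes 16 bytes per complex number (two 64-bit floats) for illustration.
for n in (2, 10, 30, 50):
    amplitudes = 2 ** n
    print(n, amplitudes, f"{amplitudes * 16 / 1e9:.2e} GB")
# n=30 already needs ~17 GB; n=50 needs ~18 million GB. Yet Holevo's bound
# says only n classical bits can be reliably read back out of n qubits.
```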

It is sometimes stated that quantum computers can effectively solve problems that would take conventional computers longer than the age of the universe to solve. The power of superposition allows the creation of an immense number of parallel computational branches. However, this is, unfortunately, only applicable to some very specific problems. In the words of the computer scientist Scott Aaronson (quoted in Horgan 2016):

In particular, if an event can happen one way with a positive amplitude,Footnote 3 and another way with a negative amplitude, those two amplitudes can “interfere destructively” and cancel each other out, so that the event never happens at all. The goal, in quantum computing, is always to choreograph things so that for each wrong answer, some of the paths leading there have positive amplitudes and others have negative amplitudes, so they cancel each other out, while the paths leading to the right answer reinforce.

It’s only for certain special problems that we know how to do that. Those problems include a few with spectacular applications to cryptography, like factoring large numbers, as well as the immensely useful problem of simulating quantum mechanics itself.

Specifically, Grover's algorithm is a quantum search algorithm utilizing the principles of superposition and entanglement (Grover 1996). The quantum speed gain is impressive. A classical search algorithm's performance grows linearly, in direct proportion to the size N of the input data set, whereas Grover's algorithm grows with \(\sqrt{N}\). Then, Shor's algorithm represents another milestone in quantum computing (Shor 1999). Given an integer N, the algorithm finds all its prime factors. Theoretically, any encryption key can be broken by a quantum computer of comparable size in reasonable time.Footnote 4 In comparison, a classical computer requires eons to crack 256-bit encryption. At the time of writing, the Oak Ridge National Laboratory's Summit supercomputing machineFootnote 5 is the fastest computer, running at 200 petaflops, where one petaflop corresponds to \(10^{15}\), or approximately \(2^{50}\), floating-point operations per second. Note that there are \(31,536,000=365\cdot 24\cdot 60\cdot 60\) s in a year. To explore all the \(2^{256}\) combinations related to the encryption, Summit requires the following number of years

$$\begin{aligned} \frac{2^{256}}{200\cdot 2^{50} \cdot 365 \cdot 24\cdot 60 \cdot 60} \approx 1.6 \cdot 10^{52}. \end{aligned}$$
(13.4)

Recall that the age of the universe is \(13.8\cdot 10^{9}\) years. In effect, quantum computers could render all of today's cryptography useless (Sect. 7.4.3). However, quantum computing is still in its infancy: in 2014, the number 56,153 was quantum factorized into 241 and 233 using 4 qubits (Dattani and Bryans 2014), and in 2015, basic quantum computation was achieved with silicon (Veldhorst et al. 2015).
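The estimate in Eq. (13.4) can be checked directly, together with the square-root speed-up a Grover-style search would offer over the same exhaustive search. The hardware figures are those assumed in the text; the comparison is a rough illustration of the \(\sqrt{N}\) scaling only.

```python
import math

ops_per_second = 200 * 2 ** 50         # Summit, as assumed above
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000

keys = 2 ** 256                                        # brute-force search space
years_classical = keys / (ops_per_second * seconds_per_year)
years_grover = math.sqrt(keys) / (ops_per_second * seconds_per_year)

print(f"{years_classical:.1e}")   # ~1.6e+52 years, as in Eq. (13.4)
print(f"{years_grover:.1e}")      # ~4.8e+13 years with a sqrt(N) speed-up
```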

Essentially, quantum mechanics makes statements about information. Heisenberg's uncertainty principle is simply a limit to universal information retrieval (Sect. 10.1). Measurements result in the fragile fuzziness of superpositions becoming manifested as a single state—the apparent wave function collapse—and are essentially information transfers. Depending on how we "interrogate" an electron, it behaves as a wave or a particle. Indeed, all answers to the fundamental questioning of reality are always binary. Entanglement (Sect. 10.3.2.1) fuses quantum systems into a single information entity—it encodes information. This new system appears to be unconstrained by space and time but very much obeying the rules of quantum information (Jaeger 2009). In essence, space and time become impotent or non-existent at the fundamental level of reality—all that remains is information-theoretic. Furthermore (Wootters 2007, p. 229):

[It is] remarkable that, even though entanglement by itself does not constitute a communication channel, the presence of entanglement allows modes of communication that are not possible without it.

Consequently, "it from qubit" appears to be the more precise mantra (Deutsch 2004; Vedral 2012; D'Ariano 2015). Although (Jaeger 2009, p. 189):

A central question when considering information in relation to the foundations of quantum mechanics is whether quantum information and classical information differ, and if so, how fundamental their differences are.

Some see quantum information as fundamental (Deutsch 2004, p. 93):

Although [...] the classical information storage capacity of a qubit is exactly one bit, there is no elementary entity in nature corresponding to a bit. It is qubits that occur in nature. Bits, Boolean variables, and classical computation are all emergent or approximate properties of qubits [...].

In any case (Jaeger 2009, p. 256):

Given that quantum theory involves probability, state preparation and state measurement, which are essential elements of signaling, and that communication is based on the establishment of correlations using these, it is clear that information theory will remain of considerable relevance to the investigation of the foundations of quantum physics. [...] the perspective provided by the recent focus on information has contributed to what is the most detailed picture yet of the broader implications of quantum theory.

Moreover (Brukner and Zeilinger 2006, p. 47):

There are at least three different ways in which quantum physics is connected with the concept of information. One is the relationship between quantum interference and knowledge. This was at the very heart of the early debates concerning the meaning of quantum mechanics, most notably the Bohr-Einstein dialogue. [...] The debate was resolved by the Copenhagen interpretation in the most radical, conceptually challenging and foresightful manner, although for many physicists today, the Copenhagen interpretation is still conceptually unacceptable. The second connection between quantum physics and information was the discovery in the early 1990s that quantum concepts could be used for communication and for processing information in completely novel ways. These include such topics as quantum cryptography, quantum teleportation and quantum computation. The third connection between quantum physics and information has been emerging gradually over the last few years with the conceptual groundwork for this connection going back to the works of von Weizsaecker and Wheeler. It is the notion that information is the basic concept of quantum physics itself. That is, quantum physics is only indirectly a science of reality but more immediately a science of knowledge.

Yet another proposition utilizing quantum information as a unifying theme is (Pawłowski et al. 2009):

We suggest that information causality—a generalization of the no-signalling condition [i.e., information cannot be transmitted faster than light]—might be one of the foundational properties of nature.

2.2 The Ur-Alternatives

While Wheeler was instrumental in popularizing the notion of an information-theoretic reality, he was not the first to think along these lines. Probably the first person to do so was the mathematician, philosopher, inventor, and mechanical engineer, Charles Babbage (Gleick 2011). His interests included the manipulation of information: messaging, encoding, and processing. In 1642, Blaise Pascal, a mathematician, physicist, and inventor, had constructed an adding machine. Three decades later, Gottfried Wilhelm Leibniz, the co-creator of calculus and an influential philosopher (Sect. 8.1.2), improved on the design. However, Babbage realized that these prototypes were all very similar to an abacus and not automatic. In 1822, he presented a working model of an automatic mechanical calculator, called the Difference Engine, to the Royal Society. His next project, described in 1837, was the Analytical Engine, the design for the first mechanical general-purpose computer. In principle, this contraption was Turing-complete—igniting the age of computation. The mathematician Ada Lovelace was initially Babbage's acolyte before becoming his muse. She was the first person to realize that the Analytical Engine was more than a calculator. "It had been an engine of numbers; now it became an engine of information" (Gleick 2011, p. 116). Lovelace devised an algorithm for the engine, emerging as the first computer programmer in history. Babbage himself "took an information-theoretic view of the new physics" (Gleick 2011, p. 375) that was emerging. Newtonian mechanics (Sect. 2.1.1) imposed the notion of a clockwork universe. In contrast, to Babbage, "nature suddenly resembled a vast calculating engine, a grand version of his own deterministic machine" (Gleick 2011, p. 376).

Carl Friedrich von Weizsäcker was an eminent and distinguished physicist and philosopher. He played an important role in the development of 20th-century physics, in particular in astrophysics and nuclear physics. Other contributions of his centered on the understanding of the nature of reality and time, and the interpretation of quantum mechanics. The occasion of his 90th birthday in 2002 prompted the compilation of essays—a homage to his work—by renowned physicists. The book is aptly titled Time, Quantum and Information (Castell and Ischenbeck 2003). One chapter is also contributed by Zeilinger.

Von Weizsäcker was the first person to think about a quantum theory of information (Castell and Ischenbeck 2003, VI):

Weizsäcker called the elementary unit of information in quantum theory an ur. As an all encompassing theory of physics, quantum theory should contain the possible fundamental forms of matter, elementary particles, and their interactions. It should thus permit the construction of particles and interactions from quantized bits of information. This hypothesis is called the ur-hypothesis, which was developed during the 1970s at the Max Planck Institut zur Erforschung der Lebensbedingungen der wissenschaftlich-technischen Welt in Starnberg.

He introduced the novel concept of the most basic informational entity of reality in 1971 (von Weizsäcker 1971), 19 years before Wheeler’s “it from bit” (Wheeler 1990). Indeed, von Weizsäcker first started to conceive of these ideas in the 1950s (von Weizsäcker 1952, 1955, 1958). Utilizing the notion of an information-theoretic foundation of reality, he set out to axiomatically construct a unified quantum theory (von Weizsäcker 1975, 1985; Lyre 1995, 1998). At the Big Bang, the universe began with one ur—one bit of information. Today, the information content is \(10^{120}\) urs. From this result, the estimated \(10^{80}\) nucleons in the universe can be derived. Moreover, ur theory was shown to be connected with the Bekenstein-Hawking entropy seen below (Görnitz 1988).

Unfortunately, von Weizsäcker's information-based ideas never attracted a larger audience. Notably, in Anglo-Saxon countries his contributions remain overlooked. In modern monographs discussing the fundamental nature of information, von Weizsäcker is not mentioned at all or only in a different context (Jaeger 2009; Davies and Gregersen 2014; Aguirre et al. 2015; Durham and Rickles 2017). Moreover, popular accounts, chronicling the (quantum) information-theoretic revolution, omit his legacy as well (Siegfried 2000; Seife 2007; Gleick 2011; Vedral 2012). In contrast, Wheeler is prominently featured. In retrospect, they both helped establish what is today known as digital physics.

3 Digital Physics

Digital physics, and by extension, digital philosophy, is a movement of contemporary scientists who believe in the fundamental nature of information. By taking the notion of digital information seriously, a new worldview emerges. For one, the universe is a giant information-processing machine—a digital computer. Then, reality is a finite structure and infinities are only harbored in the abstract realm the human mind can access (recall Fig. 2.1 in Sect. 2.1). In essence, everything in the universe is quantized or grainy—including space and time. Mathematically speaking, we are dealing with discrete entities and not continuous ones (Sect. 5.3).

The physicist and computer scientist Edward Fredkin is a pioneer of this approach. He invented the computational circuit called the Fredkin gate (Fredkin and Toffoli 1982). It is suitable for reversible computing and is universal, meaning that any arithmetic operation can be constructed entirely of such gates. Moreover, it is also relevant for quantum computing (Patel et al. 2016). Early work on digital physics can be found in Fredkin (1992). Other proponents of this idea also include the mathematician Chaitin (Sect. 9.4.1). He traces the genesis of the philosophy back to Leibniz, the discoverer of base-two arithmetic (Chaitin 2005). The first person to claim that the universe is a digital computer was the IT pioneer Konrad Zuse. Specifically, he proposed that the cosmos is being computed by some kind of computational system, for instance, by cellular automata (Sect. 5.2.2). This idea was outlined in his 1969 book called Rechnender Raum (Zuse 1969)—the calculating space. The physicist, computer scientist, and entrepreneur Stephen Wolfram (Sect. 5.2.2) proposed the idea that the universe, and all its inherent complexity, is built from simple programs. The computational systems he invokes are also cellular automata. He outlines this idea in the 2002 book, called A New Kind of Science, comprising one thousand two hundred pages (Wolfram 2002). Furthermore, Lloyd, another digital physics proponent, proposes a theory of quantum gravity based on quantum computation (Lloyd 2005). The Nobel laureate Gerard 't Hooft entertains the notion that quantum gravity is linked to (the dissipation of) information ('t Hooft 1999). Moreover, he has proposed a cellular automaton interpretation of quantum mechanics ('t Hooft 2016). In 2010, the Foundational Questions Institute,Footnote 6 or FQXi, held its third essay contest. It was co-sponsored by Scientific American.Footnote 7 The question posed was "Is Reality Digital or Analog?" and the various essays attacked the problem from a multitude of angles.Footnote 8 In 2013, the 856-page book, called A Computable Universe: Understanding and Exploring Nature as Computation, was published. It contains contributions from many scientists, including Fredkin, Chaitin, Wolfram, and Lloyd (Zenil 2013).
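To make the kind of computational substrate Zuse and Wolfram have in mind concrete, here is a minimal sketch of an elementary cellular automaton: each cell's next state depends only on itself and its two neighbors. Rule 110, chosen here, is one of the rules Wolfram studies and is known to be computation-universal; the grid size, boundary conditions, and initial state are illustrative choices.

```python
# A minimal elementary cellular automaton with periodic boundary conditions.
def step(cells, rule=110):
    n = len(cells)
    return [
        # Read off bit (left*4 + center*2 + right) of the rule number.
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31        # a single "on" cell in the middle
for _ in range(20):
    print("".join(" #"[c] for c in cells))
    cells = step(cells)
```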

In a nutshell, digital physics, or synonymously, digital philosophy, can be summarized as follows, taken from Fredkin’s webpageFootnote 9 devoted to the subject:

Digital Philosophy (DP) is a new way of thinking about the fundamental workings of processes in nature. DP is an atomic theory carried to a logical extreme where all quantities in nature are finite and discrete. This means that, theoretically, any quantity can be represented exactly by an integer. Further, DP implies that nature harbors no infinities, infinitesimals, continuities, or locally determined random variables.

An introductory text is Fredkin (2003). Four laws of digital physics are laid out:

  1. Information is conserved.

  2. The fundamental process of nature must be a computation-universal process.

  3. The state of any physical system must have a digital representation.

  4. The only kind of change is that caused by a digital informational process.

In the novel paradigm of a finite nature of reality, physics appears in a new light. For instance, “five big questions with pretty simple answers” are (Fredkin 2004):

  1. What is the origin of spin?

  2. Why are there symmetries and CPT (charge conjugation, parity, and time reversal)?

  3. What is the origin of length?

  4. What does a process model of motion tell us?

  5. Can the finite nature assumption account for the efficacy of quantum mechanics?

Indeed, there may exist experimental predictions of digital physics (Fredkin 2004):

Digital mechanics [digital physics implies this discrete process called digital mechanics which must be a substrate for quantum mechanics] predicts that for every continuous symmetry of physics there will be some microscopic process that violates that symmetry. We are, therefore, able to suggest experimental tests of the finite nature hypothesis. Finally, we explain why experimental evidence for such violations might be elusive and hard to recognize.

Recall that the notion of symmetry was the common thread in Chaps. 3 and 4, from which much of theoretical physics emerged.

3.1 The Illusion of the Infinite

Carlo Rovelli, one of the founders of loop quantum gravity, recently paraphrased a saying of Karl Popper (Chap. 9 and Sect. 8.1.1), an influential philosopher of science (Rovelli 2017, p. 208):

The only truly infinite thing is our ignorance.

Today, most people believe that the universe is spatially infinite. However, we can only ever see the observable universe (Halpern and Tomasello 2016):

Because of the expansion of space and the finite age of the cosmos, there exists a horizon beyond which the light emitted by objects will never be able to reach us, marking the bounds of the observable universe.

The latest estimation of the radius is 14,200 Mpc, or 46.3 billion light-years (Halpern and Tomasello 2016). Indeed, even what we once believed to be the Big Bang singularity—an instant where the laws of general relativity break down and the temperatures, densities, and energies of the universe become infinitely large—has been revised. In the words of the theoretical astrophysicist Ethan Siegel (Siegel 2018):

But this picture [existence of Big Bang singularity] isn’t just wrong, it’s nearly 40 years out of date! We are absolutely certain there was no singularity associated with the hot Big Bang, and there may not have even been a birth to space and time at all.

[...]

The idea of a Big Bang singularity went out the window as soon as we realized we had a different state—that of cosmic inflation—preceding and setting up the early, hot-and-dense state of the Big Bang.

The notion of inflation is a postulated exponential, but extremely brief, expansion of space in the early universe, around \(10^{-36}\) s after the breakdown of general relativity (Guth 1981; Collins et al. 1989; Peacock 1999; Peebles 1993; Penrose 2004). Crucially, “inflation cannot arise from a singular state, because an inflating region must always begin from a finite size” (Siegel 2018).

Infinity, like zero, is a strange concept. Our minds can only approximately comprehend its abstract nature. Mathematically, it becomes tractable. The work of the mathematician Georg Cantor uncovered different "types" of infinities. For instance, there exist more real numbers than natural numbers, although both sets of numbers are infinite (Cantor 1874). Interestingly, Cantor's continuum hypothesis—relating to the question of whether there lies an "infinity" between the integers and the real numbers (Cohen 2008)—was demonstrated to be an example of the incompleteness of mathematics Gödel had theorized about: the truth or falsity of the hypothesis cannot be determined within the mathematical framework we know today (specifically, set theory). Notwithstanding, we can never observe an instantiation of infinity—or zero—in our physical universe. This insight prompted the following warning from the eminent mathematician David Hilbert (paraphrased in Ellis and Silk 2014):

Although infinity is needed to complete mathematics, it occurs nowhere in the physical Universe.

More recently, the cosmologist Max Tegmark observed (quoted in Brockman 2015, p. 48ff.):

I was seduced by infinity at an early age. Georg Cantor’s diagonality proof that some infinities are bigger than others mesmerized me, and his infinite hierarchy of infinities blew my mind. The assumption that something truly infinite exists in nature underlies every physics course I’ve ever taught at MIT—and, indeed, all of modern physics. But it’s an untested assumption, which begs the question: is it actually true?

[...]

In the past, many venerable mathematicians were skeptical of infinity and the continuum. The legendary Carl Friedrich Gauss denied that anything infinite really exists, saying “Infinity is merely a way of speaking” [...]. In the past century, however, infinity has become mathematically mainstream, and most physicists and mathematicians have become so enamored with infinity that they rarely question it.

[...]

Let’s face it: despite their seductive allure, we have no direct observational evidence for either the infinitely big or the infinitely small. We speak of infinite volumes with infinitely many planets, but our observable universe contains only about \(10^{89}\) objects (mostly photons) . If space is a true continuum, then to describe even something as simple as the distance between two points requires an infinite amount of information, specified by a number with infinitely many decimal places. In practice, we physicists have never managed to measure anything to more than about 17 decimal places. Yet real numbers with their infinitely many decimals have infested almost every nook and cranny of physics, from the strengths of electromagnetic fields to the wave functions of quantum mechanics: we describe even a single bit of quantum information (qubit) using two real numbers involving infinitely many decimals.

Not only do we lack evidence for the infinite, but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. [...] Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it.

Perhaps many of today’s problems faced by physicists stem from this clash of philosophies. The crux of quantum gravity essentially centers around the failed attempts to naively quantize—essentially finitize—gravity (Sect. 10.2). Discrete quantum mechanics won’t be married to continuous general relativity. This schism runs deep. Again, FredkinFootnote 10:

The utterly fantastic success of Mathematical Analysis (the mathematics of continuous functions of continuous variable) as applied to physics and engineering, tends to blind us to the possibility that the ultimate nature of space and time might be discrete.

This fundamental tension between the finite and the infinite was outlined in Sect. 5.3. Essentially, Fig. 5.9 in Sect. 5.4 provides a schematic overview of the four different types of knowledge generation utilizing formal representations. The analytical description of nature is the one employing continuous mathematics—the infinite. As mentioned, this approach has been spectacularly successful in describing so-called fundamental processes. This is the story of Chaps. 2–4. Most of theoretical physics is unearthed by this approach. In contrast, only slowly is the fundamental relevance of discrete mathematics being realized—finiteness raises its head. This formal representation lies at the heart of what is called the algorithmic description of nature in this book. This mode of knowledge-generation has uncovered a wealth of understanding relating to the complex phenomena surrounding us and comprising us—a tale which is told in Chaps. 6 and 7. Perhaps now is the time that the human mind unleashes the power of yet another approach to decoding reality. After over three hundred years of utilizing the fundamental-analytical paradigm of knowledge generation, and a couple of decades employing the complex-algorithmic one, the fundamental-algorithmic paradigm is emerging. Guided by an inherently finite information ontology, new insights about the fundamental nature of reality are being gained.

4 An Information Ontology

The intangible notion of information undeniably has a physical manifestation. Moreover, it holds a fundamental role in the interpretation of quantum physics. This raises the ontological question: Is nature fundamentally discrete or continuous? See also Holden (2004). Digital physics goes a step further and asks: Is nature fundamentally digital or analogue? The philosophical questions relating to an information ontology, or even a digital ontology, are currently a hot topic in fundamental theoretical physics. In other words, modern theories of quantum gravity are providing insights into this novel approach to reality.

Why is this new paradigm only emerging now? Aaronson proposes an answer to why everything appears to be coming together only now (Aaronson 2005):

In my (unbiased) opinion, the showdown that quantum computing has forced—between our deepest intuitions about computers on the one hand, and our best-confirmed theory of the physical world on the other—constitutes one of the most exciting scientific dramas of our time. But why did this drama not occur until so recently? Arguably, the main ideas were already in place by the 1960’s or even earlier. I do not know the answer to this sociological puzzle, but can suggest two possibilities. First, many computer scientists see the study of “speculative” models of computation as at best a diversion from more serious work. [...] And second, many physicists see computational complexity as about as relevant to the mysteries of Nature as dentistry or tax law.

Quantum computing, string/M-theory, and loop quantum gravity are converging with respect to a very specific topic: the black hole information paradox.

4.1 The Cosmic Hologram

Black holes represent the ultimate interface where quantum mechanics meets general relativity. At first, they were thought to be just a theoretical curiosity in the emerging field of general relativity.

4.1.1 Black Holes

At the end of their life, after having "burned up" all the available energy, stars die and transform into various astrophysical entities. While some stars simply explode at the end of their life-cycle, others contract into small compact objects, like white dwarfs (Chandrasekhar 1931) or neutron stars (Baade and Zwicky 1933). However, if the original star is massive enough, the stellar remnant remaining after the gravitational collapse can have an extreme gravitational pull. So much so that not even electromagnetic radiation can escape. A black hole is born, effectively cutting itself off from the rest of the universe (Oppenheimer and Snyder 1939). The equations of general relativity break down at the center of a black hole. Here we find the so-called gravitational singularity, a region where the curvature of space-time becomes infinite—at least mathematically. However, this singularity is not "naked": it is shielded from the rest of the universe by the black hole's event horizon. This is basically a border of causality where anything crossing it will forever be trapped inside. It is interesting to see how physicists believing in the reality of infinity deal with this issue. For instance, Stephen Hawking, explaining Roger Penrose's cosmic censorship conjecture (Penrose 1969), observes (quoted in Hawking and Penrose 1996, p. 21):

Nature abhors a naked singularity.

Perhaps the true reason is the fictitious nature of mathematical infinity. Recall that the Big Bang represents yet another “singularity” in general relativity.

Black holes are, by their very definition, hard to detect. Indirect evidence speaks of a gigantic black hole at the center of the Milky Way (Johnson et al. 2015). Notwithstanding a black hole’s bizarre nature, it is very easily described. By knowing only three externally observable classical parameters (i.e., mass, electric charge, and angular momentum) any black hole can be fully classified (Israel 1967; Carter 1970; Hawking 1971). This is known as the “no-hair” theorem. It also states that “a large amount of information is lost when a body collapses to form a black hole” (Hawking and Penrose 1996, p. 39). By introducing thermodynamics and quantum mechanics into the picture, a puzzle emerges.

In the early 1970s, the physicist Jacob Bekenstein, a former Ph.D. student of Wheeler, was theorizing about the entropy of black holes and discovered an astonishing fact: black hole entropy has a remarkable geometric interpretation. The entropy was found to be proportional to the area of the event horizon (Bekenstein 1972, 1973). Unlike for any other object in the universe, the entropy of a black hole scales with the area of its surface rather than with its volume. As a result, any matter dropped into a black hole will only increase its entropy by as much as it increases the event horizon. Somehow, the three-dimensional nature of ordinary entropy is reduced to two dimensions. Hawking picked up on these ideas and showed that black holes, in fact, also radiate energy due to quantum effects (Hawking 1974). This thermal radiation corresponds to a temperature set by the black hole’s surface gravity and hence its mass. In general, thermal radiation has no informational content. In other words, it cannot encode any signal. In Hawking’s words (quoted in Hawking and Penrose 1996, p. 26):

This [the black hole radiation being thermal] is too beautiful a result to be a coincidence or just an approximation.

As a next step, Hawking refined Bekenstein’s calculation of the black hole entropy \(S_{\text {BH}}\). He derived the following equation (Hawking 1975)

$$\begin{aligned} S_{\text {BH}} = \frac{k c^3 A}{4 G \hbar }, \end{aligned}$$
(13.5)

where A is the area of the horizon. We now have an amalgamation of very different fundamental constants, coming from thermodynamics (Boltzmann’s constant k), quantum mechanics (Planck’s constant \(\hbar \)), special relativity (the speed of light c), and general relativity (Newton’s gravitational constant G). Essentially, the Bekenstein-Hawking entropy is one-fourth the area of the event horizon, measured in Planck areas.
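
To get a feel for the magnitudes involved, equation (13.5) can be evaluated numerically. The following minimal sketch is an illustration only: it simply plugs the standard SI values of the constants and one solar mass into the formula for a Schwarzschild black hole.

    # Sketch: evaluating the Bekenstein-Hawking entropy (13.5) for a
    # solar-mass Schwarzschild black hole. Constants in SI units.
    import math

    k    = 1.380649e-23   # Boltzmann's constant [J/K]
    hbar = 1.054572e-34   # reduced Planck constant [J s]
    c    = 2.99792458e8   # speed of light [m/s]
    G    = 6.67430e-11    # gravitational constant [m^3 kg^-1 s^-2]
    M    = 1.989e30       # one solar mass [kg]

    r_s = 2 * G * M / c**2          # Schwarzschild radius, ~3 km
    A   = 4 * math.pi * r_s**2      # horizon area [m^2]

    S_BH = k * c**3 * A / (4 * G * hbar)   # entropy [J/K]
    print(S_BH / k)                        # dimensionless entropy, roughly 1e77

The dimensionless entropy \(S_{\text {BH}}/k\) comes out at roughly \(10^{77}\), an enormous number that foreshadows the information-theoretic reading discussed below.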

Now the paradox emerges. Due to Hawking radiation, black holes lose mass and eventually evaporate. Recall that any matter, and thus information, falling into a black hole inescapably gets trapped there. Indeed, the no-hair theorem tells us that all we can ever know about it is its mass, electric charge, and angular momentum. A black hole could have been “fed” with the most intricate configurations of matter and information, yet at the end of its life cycle everything appears to have vanished without a trace—any information about the black hole’s composition is lost. The key question is: Where did all the information go? In essence, what happened to all the bits of information once the black hole has evaporated? There exist three alternatives (Penrose 2004, p. 840):

  1. Information is lost when the black hole evaporates.

  2. Information is stored in a final “nugget,” a remnant of the hole.

  3. Information is returned to the universe in a final explosion.

The crux of the issue is the following (Gleick 2011, p. 358):

According to quantum mechanics, information may never be destroyed. The deterministic laws of physics require the states of a physical system at one instance to determine the states at the next instance; in microscopic detail, the laws are reversible, and information must be preserved. [...] The loss of information would violate unitarity, the principle that probabilities must add up to one.

For classical physics, time reversal symmetry is a discrete symmetry.Footnote 11 It guarantees that the equations of motion can be run backwards, yielding a single unique past history. In quantum mechanics, the wave function encapsulates the distribution of probability for a given property. Schrödinger’s equation (Sect. 3.1.4) encodes the time evolution of the wave function. It is deterministic and time-reversal symmetric. A foundational assumption of quantum mechanics is that probability is conserved. This notion, called unitarity, ensures that the described properties will always have some possible value (including zero). In other words, probabilities add up to one. As a result, the quantum states are preserved and no information is lost. This conservation of information now refers to the quantum information describing the full informational content of the wave function and not just a single probabilistic measurement. It is worth noting that the Copenhagen interpretation (Sect. 10.3.2.2) is neither deterministic nor time-reversal symmetric. One of its core postulates is that a measurement irreversibly collapses the wave function, manifesting a specific result from the many probabilities.

The black hole information paradox has prompted many discussions among theoretical physicists (Susskind 2008). Bets were made on whether information is really lost or not. Some physicists presented highly original solutions. For instance, Hawking changed his initial stance that information is lost, subsequently losing a bet. He offered an argument for information preservation (Hawking 2005) utilizing Feynman path integrals (Sect. 10.1.1). However, “his new formulation struck some physicists as cloudy and left many questions unanswered” (Gleick 2011, p. 359). Penrose presented what he calls conformal cyclic cosmology, where the universe iterates through infinite cycles and each ending cycle spawns a new one with a Big Bang singularity (Penrose 2010). However, the most intriguing solution came from string/M-theory.

4.1.2 AdS/CFT Duality

M-theory is the eleven-dimensional overarching framework unifying the five known string theories (Sect. 4.3.2 and Witten 1995). At the core of its discovery lie dualities (Duff 1999; Kaku 2000). They are powerful mathematical tools—symmetry principles—linking different theories to each other. In essence, dualities relate quantities that appear to be distinct. For instance, T-duality associates strings in a space-time with radius R to strings in a space-time with radius 1/R (Giveon et al. 1994). The surprising result is that “a string cannot tell the difference between a big circle and a small one” (Duff 1999, p. 325). S-duality uncovers the equivalence of a theory which is mathematically intractable to another theory in which calculations are easy (Sen 1994). Specifically, S-dualities relate ten-dimensional string theories to M-theory. Moreover, even the dualities are dual to each other.
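
The flavor of T-duality can be illustrated with the textbook mass formula for a closed string on a circle of radius R. The sketch below is an illustration only, written in units where the string scale \(\alpha '=1\) and ignoring the oscillator contributions; it checks numerically that swapping momentum and winding numbers while inverting the radius leaves the spectrum untouched.

    # Sketch: the closed-string mass-squared contribution from momentum (n)
    # and winding (w) modes on a circle of radius R, in units alpha' = 1.
    # Under the T-duality map R -> 1/R with n <-> w the spectrum is unchanged.

    def mass_squared(n, w, R):
        return (n / R)**2 + (w * R)**2

    R = 2.7
    for n in range(-2, 3):
        for w in range(-2, 3):
            original = mass_squared(n, w, R)
            dual     = mass_squared(w, n, 1.0 / R)   # R -> 1/R, n <-> w
            assert abs(original - dual) < 1e-12
    print("spectrum invariant under T-duality")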

One of the most powerful and fruitful dualities was discovered by Juan Maldacena (Maldacena 1998). In a nutshell, there exists a correspondence between theories with gravity in d dimensions and theories without gravity in \((d-1)\) dimensions. Specifically, a quantum gravitational theory in the bulk of a space is equivalent to a quantum field theory on its surface. Formally, anti-de Sitter (AdS) space is a manifold with negative curvature. It is closely related to a hyperbolic space, meaning it has a boundary “at infinity.” On one side of the duality, aspects of M-theory formulated on AdS space are considered. On the other side, conformal field theories are analyzed. These are quantum field theories (Sects. 3.1.4, 3.2.2.1, 4.2, and 10.1.1) which are invariant under conformal transformations (i.e., functions that preserve angles). Specifically, the AdS/CFT duality links a string theory in a five-dimensional AdS space to a supersymmetric (Sect. 4.3.2) Yang-Mills theory (Sect. 4.2) on its four-dimensional boundary (Maldacena 1998). In other words, the string theory with gravity describes the five-dimensional space-time, while the quantum field theory of point particles with no gravity operates on the four-dimensional space-time. Both descriptions are equivalent, meaning that no observer could ever determine if she was inhabiting the five-dimensional world or its four-dimensional boundary.

To summarize, in the words of Aaronson (quoted in Horgan 2016):

Ideas from quantum computing and quantum information have recently entered the study of the black hole information problem—i.e., the question of how information can come out of a black hole, as it needs to for the ultimate laws of physics to be time-reversible. Related to that, quantum computing ideas have been showing up in the study of the so-called AdS/CFT (anti de Sitter / conformal field theory) correspondence, which relates completely different-looking theories in different numbers of dimensions, and which some people consider the most important thing to have come out of string theory.

Maldacena’s publication is the most referenced high-energy physics paper, with 12,968 citations up to the end of 2017.Footnote 12

4.1.3 The Bounds of Information

The Planck length is the scale at which quantum gravitational effects should become apparent. It is defined as

$$\begin{aligned} \ell _\mathrm {P} =\sqrt{{\frac{\hbar G}{c^3}}} \approx 1.6 \times 10^{-35}\ [\mathrm {m}], \end{aligned}$$
(13.6)

employing the speed of light c, Planck’s constant \(\hbar \), and the gravitational constant G. The early work of Bekenstein and Hawking gave a glimpse of a new feature of quantum gravity. The black hole entropy (13.5) can be expressed in terms of information, recalling Shannon’s reinterpretation of physical entropy as an information-theoretic concept. Specifically (Bekenstein 2003, p. 61):

[A] hole with a horizon spanning A Planck areas has A / 4 units of entropy. [...] Considered as information, it is as if the entropy were written on the event horizon, with each bit (each digital 1 or 0) corresponding to four Planck areas.

Consequently, and unlike in any other theory, the total number of bits that can be stored in a given bounded region of space is predicted to be finite rather than infinite. This idea is, of course, in line with the speculative philosophy of digital physics.
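
As a simple numerical illustration of this finiteness, one can count the bits allowed on the boundary of an everyday region of space. This is a sketch under the assumption of Bekenstein’s “one bit per four Planck areas” picture; the choice of a one-centimeter sphere is arbitrary.

    # Sketch: the Planck length (13.6) and the maximal number of bits on the
    # boundary of a sphere of radius 1 cm, assuming one bit per four Planck areas.
    import math

    hbar, c, G = 1.054572e-34, 2.99792458e8, 6.67430e-11
    l_P = math.sqrt(hbar * G / c**3)     # Planck length, ~1.6e-35 m

    R = 0.01                             # an arbitrary 1 cm region [m]
    A = 4 * math.pi * R**2               # boundary area [m^2]
    bits = A / (4 * l_P**2)              # ~1e66 bits: huge, but finite

    print(l_P, bits)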

In 1981, Bekenstein presented an exact bound on the entropy of a physical system \(S_{\text {matter}}\). Only the mass (or equivalently, the energy) and the size of the system are relevant. The relation is (Bekenstein 1981)

$$\begin{aligned} S_{\text {matter}} \le 2 \pi \frac{kER}{\hbar c}, \end{aligned}$$
(13.7)

utilizing Boltzmann’s constant k, Planck’s constant \(\hbar \), and the speed of light c. E represents the mass-energy of the matter system and R is the radius of the smallest sphere that can enclose the matter system. In order to derive this result, Bekenstein had to generalize the second law of thermodynamics, establishing black hole thermodynamics (Bekenstein 1974). The Bekenstein bound relates the information capacity of a system, in bits, to its mass-energy and size. Specifically (Vedral 2012, p. 185):

The number of bits that can be packed into any system is at most \(10^{44}\) bits of information times the system’s mass in kilograms and its maximum length in meters. [...]

It is amazing that to calculate something as profound as the information carrying capacity of an object, out of its infinitely many possible properties, we only need two: area and mass.
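
A minimal numerical sketch of the bound (13.7), expressed directly in bits by dividing the entropy by \(k \ln 2\), reproduces the order of magnitude quoted by Vedral. The chosen system of one kilogram enclosed in a one-meter sphere is an arbitrary illustration.

    # Sketch: the Bekenstein bound (13.7) expressed as a maximum number of
    # bits, evaluated for a hypothetical 1 kg system enclosed in a sphere of
    # radius 1 m. Dividing the entropy bound by k*ln(2) converts it to bits.
    import math

    k, hbar, c = 1.380649e-23, 1.054572e-34, 2.99792458e8

    def max_bits(mass_kg, radius_m):
        E = mass_kg * c**2                                # rest mass-energy [J]
        S = 2 * math.pi * k * E * radius_m / (hbar * c)   # entropy bound [J/K]
        return S / (k * math.log(2))                      # bits

    print(f"{max_bits(1.0, 1.0):.2e}")   # ~2.6e43 bits, of the order Vedral quotes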

Fourteen years later, the holographic bound was introduced (Susskind 1995). It is a weaker bound and essentially defines how much information can be contained in a specified region of space. In other words, the focus now lies on the area A enclosing a matter system, without any knowledge of its nature. Mathematically, using Planck units,Footnote 13

$$\begin{aligned} S_{\text {vol}} \le \frac{A}{4}. \end{aligned}$$
(13.8)

It then holds that

$$\begin{aligned} S_{\text {matter}} \le 2 \pi ER \le \pi R^2 = \frac{A}{4}. \end{aligned}$$
(13.9)

Some of the implications are the following (Bekenstein 2003, p. 63):

The visible universe contains at least \(10^{100}\) bits of entropy, which could in principle be packed inside a sphere a tenth of a light-year across.

Moreover, consider packing information—for instance, bits stored in RAM chips—into a spherical volume. Then (Bekenstein 2003, p. 63):

[T]he theoretical ultimate information capacity of the space occupied by the heap [of chips] increases only with the surface area. Because volume increases more rapidly than surface area, at some point the entropy of all the chips would exceed the holographic bound.

At this point, a black hole will be formed. A black hole is thus the densest information storage device allowed by the laws of physics. Or, equivalently, a black hole is the most entropic object that can be fitted inside a spherical volume.
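
Bekenstein’s heap-of-chips argument can be sketched numerically. In the toy calculation below, the assumed storage density of \(10^{30}\) bits per cubic meter is an arbitrary illustrative figure; the point is only that any fixed volumetric density, growing as \(R^3\), must eventually overtake the holographic bound (13.8), which grows as \(R^2\).

    # Sketch: the heap-of-chips argument. Assume a hypothetical storage
    # density of 1e30 bits per cubic meter (an arbitrary illustrative value).
    # The stored bits grow with volume (R^3), while the holographic bound
    # (13.8), expressed in bits, grows with area (R^2): at some radius the
    # heap would exceed the bound and collapse into a black hole.
    import math

    hbar, c, G = 1.054572e-34, 2.99792458e8, 6.67430e-11
    l_P2 = hbar * G / c**3                      # Planck area [m^2]
    density = 1e30                              # assumed bits per m^3

    def stored_bits(R):
        return density * (4.0 / 3.0) * math.pi * R**3

    def holographic_bound_bits(R):
        A = 4 * math.pi * R**2
        return A / (4 * l_P2 * math.log(2))

    R = 1.0
    while stored_bits(R) < holographic_bound_bits(R):
        R *= 2
    print(f"bound exceeded at roughly R = {R:.1e} m")

With the deliberately modest density chosen here the crossover happens only at an absurdly large radius; with denser storage it happens correspondingly earlier.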

The derivation of the holographic bound was motivated by ’t Hooft’s research. In 1993, using simple arguments, he claimed that at the Planck scale our world is no longer three-dimensional. Rather, he found that reality is described by bits located on a two-dimensional lattice (’t Hooft 1993). Leonard Susskind formalized these ideas in the context of string theory, introducing the holographic bound (Susskind 1995). Now its name can be understood: ’t Hooft and Susskind are arguing that our reality is a hologram (Bekenstein 2003, p. 63):

In the everyday world, a hologram is a special kind of photograph that generates a full three-dimensional image when it is illuminated in the right manner. All the information describing the 3-D scene is encoded into the pattern of light and dark areas on the two-dimensional piece of film, ready to be regenerated. The holographic principle contends that an analogue of this visual magic applies to the full physical description of any system occupying a 3-D region: it proposes that another physical theory defined only on the 2-D boundary of the region completely describes the 3-D physics. If a 3-D system can be fully described by a physical theory operating solely on its 2-D boundary, one would expect the information content of the system not to exceed that of the description on the boundary.

Our universe has a four-dimensional space-time structure. There should thus exist a set of physical laws, operating on the three-dimensional boundary of physical space-time, describing the very same physical reality. In a nutshell, our physical universe is a hologram that is isomorphic to the quantum information encoded on the surface of its boundary. Again, an observer cannot know if she inhabits the bulk space or the boundary. Note also that the usual assumptions about our universe are now dropped: it is no longer taken to be infinite, without boundary, and expanding indefinitely.

The holographic bound has been generalized and extended to any type of space-time (Bousso 1999). This new upper bound on information is called the covariant entropy bound and is associated with a maximum information capacity of one bit per Planck area. Both the Bekenstein bound and the holographic bound can be derived from it. However, how can one construct a holographic theory?

4.1.4 Tying It All Together

Surprisingly, all the different worlds are starting to converge.

Strings and Loops

Most intriguingly (Bousso 2002):

The AdS/CFT correspondence realizes the holographic principle explicitly in a quantum gravity theory.

Moreover, the AdS/CFT duality offers a solution to the black hole information paradox (Hawking 2005). The holographic principle can also help find a non-perturbative definitionFootnote 14 of string theory, and it can be useful in deriving a background-independent formulationFootnote 15 of string theory. Remember that loop quantum gravity, string theory’s rival, is a non-perturbative and background-independent theory (Sect. 10.2.3).

There was a recent claim that loop quantum gravity violated the holographic principle (Sargın and Faizal 2016). However, this turned out to be an error. In the words of the theoretical physicist Sabine HossenfelderFootnote 16:

[A]fter having read the paper I did contact the authors and explained that their statement that the LQG [loop quantum gravity] violates the Holographic Principle is wrong and does not follow from their calculation. After some back and forth, they agreed with me, but refused to change anything about their paper, claiming that it’s a matter of phrasing and in their opinion it’s all okay even though it might confuse some people.

Indeed, within this new toolbox of concepts and formalisms, the incompatibilities of string theory and loop quantum gravity could vanish, potentially making them the “two sides of the same coin” (Hossenfelder 2016). Moreover, questions about quantum information and entanglement are “exactly what people in loop quantum gravity have been working on for a long time” (Hossenfelder 2016). In addition (Hossenfelder 2016):

Meanwhile, in a development that went unnoted by much of the string community, the barrier once posed by supersymmetry and extra dimensions has fallen as well.

We now have a higher-dimensional theory of loop quantum gravity incorporating supersymmetry (Bodendorfer et al. 2013). Things are changing. The “younger people in string theory [...] are very open-minded” and they “are very interested [in] what is going on at the interface” (Hossenfelder 2016). Recall the days when string theory was said to be the “only game in town” (Sect. 10.2.1). Could information and computation be the unifying element for the two theories of quantum gravity? Indeed, the holographic principle is relevant in both theories. While string theory offered the powerful AdS/CFT duality, loop quantum gravity theorists are also trying to incorporate this feature. In the words of Bekenstein (2003, p. 65):

Holography may be a guide to a better theory. What is the fundamental theory like? The chain of reasoning involving holography suggests to some, notably Lee Smolin of the Perimeter Institute for Theoretical Physics in Waterloo [a pioneer of loop quantum gravity], that such a final theory must be concerned not with fields, not even with spacetime, but rather with information exchange among physical processes. If so, the vision of information as the stuff the world is made of will have found a worthy embodiment.

The Bleeding Edge

Recall that black hole entropy was the starting point of the whole discussion opening up new horizons. Moreover, this information-theoretic angle of attack appears to be the nexus where many different theoretical fragments meet: the Bekenstein-Hawking black hole entropy can be derived from loop quantum gravity (Smolin 1995; Rovelli 1996), string theory (Strominger and Vafa 1996), and von Weizsäcker’s ur-alternatives (Görnitz 1988).

This line of thinking—building on the holographic principle and AdS/CFT duality—has recently been extended even further. The ontology of reality is being probed ever deeper. Now entanglement enters the picture (Van Raamsdonk 2010):

[W]e argue that the emergence of classically connected spacetimes is intimately related to the quantum entanglement of degrees of freedom in a non-perturbative description of quantum gravity.

In other words, quantum entanglement creates space-time. Maldacena and Susskind then joined the effort (Maldacena and Susskind 2013). In a further iteration, the entanglement giving rise to the emergence of space-time is itself based on one fundamental concept: quantum information (Verlinde 2017). In this formalism, the enigma of dark energy (Sect. 10.3.1) finds an explanation. Naturally, these speculations are at the bleeding edge of contemporary theoretical physics, including quantum information theory. However, they seem to wrap up many isolated phenomena into a unified and broad view of reality. Moreover, recall all the problems encountered by the conventional materialistic scientific worldview, asserting the reality of matter (Sect. 10.4.1). The newly forming ontology speaks of space-time as emergent and of information as the only fundamental entity of reality.

Computational Complexity Theory to the Rescue

The role of quantum information and computation is essential for this new worldview. Crucially, only a finite number of bits can be stored in a bounded region of space, which translates into a finite number of qubits per volume.Footnote 17 As a result, in the words of Aaronson (quoted in Horgan 2016):

So, that immediately suggests a picture of the universe, at the Planck scale [...] as this huge but finite collection of qubits being acted upon by quantum logic gates—in other words, as a giant quantum computation.

Interestingly, a specific problem recently identified in the context of Hawking radiation, called the firewall paradox (Almheiri et al. 2013), has an easy quantum computational answer. If the standard conjectures in theoretical computer science are true, then the paradox can never actually arise in the universe (Harlow and Hayden 2013). At the heart of this information-theoretic approach lies the power of computational complexity, a concept from theoretical computer science. The most famous problem in this framework is the unsolved P versus NP problem (Cook 1971). At its core, the challenge is to classify computational problems according to their inherent difficulty. For this, different complexity classes are defined. Specifically, can a problem whose solution can be verified quickly (in nondeterministic polynomial time NP) also be solved quickly (in polynomial time P)? (A toy illustration of this asymmetry between verification and search is sketched at the end of this subsection.) Symbolically, does our universe allow \(P=NP\) or \(P \ne NP\)? The answer to this question has huge consequences for the theory of computation. If equality holds, Aaronson observes that “it would mean that we’d grossly underestimated the abilities of our existing computers” (quoted in Horgan 2016). The whole puzzle is also related to the incompleteness of mathematics (Sect. 2.2) and the undecidability in computation (Sect. 9.4.1). Returning to the issues at hand, the connection between computational complexity and quantum gravity is currently being addressed by Susskind and Aaronson (Susskind 2018):

For how long a time does classical GR [general relativity] hold during the evolution of a black hole? This connection between black holes and complexity classes is unexpected, and in my opinion very remarkable. Broadly speaking it says that the longer classical general relativity describes the interior of black holes, the less quantum computers have power to solve PSPACE-complete problems [where PSPACE refers to a complexity class].

In broader terms (Cowen 2015):

“It appears more and more that the growth of the interior of a black hole is exactly the growth of computational complexity,” says Susskind. If quantum entanglement knits together pieces of space, he says, then computational complexity may drive the growth of space—and thus bring in the elusive element of time.

Recall that Susskind is not only a pioneer of string theory but has also made many other important contributions over the years. This new interest in quantum information appears to mark a departure from the orthodox views in the community. Resources on computation are Hopcroft et al. (2008), Hromkovič (2010), Aaronson (2013), Cockshott et al. (2015), and Moore and Mertens (2016).
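
Returning to the P versus NP question raised above: the asymmetry between verifying and finding a solution can be made tangible with a toy instance of the subset-sum problem, which is NP-complete. The following sketch is an illustration only; the specific numbers are arbitrary. It contrasts the cheap, polynomial-time check of a proposed certificate with the brute-force search over all \(2^n\) subsets.

    # Sketch: the verify-versus-search asymmetry behind P vs. NP, using the
    # subset-sum problem. Verifying a proposed certificate is fast, whereas
    # the brute-force search below inspects all 2^n subsets.
    from itertools import combinations

    def verify(numbers, certificate, target):
        # polynomial-time check of a claimed solution
        return all(x in numbers for x in certificate) and sum(certificate) == target

    def brute_force(numbers, target):
        # exponential-time search over all subsets
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    nums = [3, 9, 8, 4, 5, 7]
    cert = brute_force(nums, 15)         # finds some subset summing to 15
    print(cert, verify(nums, cert, 15))  # the certificate checks out: True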

A New Ontology

The merger of (quantum) information with quantum gravity reveals a radically new ontology of reality. The nature of the universe is as follows:

  1. Infinities are abstract concepts never encountered in the physical world. [Digital physics]

  2. Space-time is discrete and comprised of “atoms.” [Loop quantum gravity’s quantized volume operator]

  3. Reality’s finite structure is brought about by the digital nature of information. [Information bounds, “it from bit”]

  4. The universe is a computational engine. [Digital physics]

  5. What appears as a three-dimensional universe is the result of quantum information encoded on its two-dimensional surface. [Holographic principle, AdS/CFT]

All of this can be summarized as the informational-digital ontology.

The HolometerFootnote 18 at Fermilab is designed to detect holographic fluctuations in space-time. In other words, it should be able to detect the pixelation of space-time. Recently, claims supporting the holographic principle have been made, based on apparent observations in the cosmic microwave background data, competing with standard cosmological models (Afshordi et al. 2017). It is also interesting to note that the science writer Michael Talbot already presented the notion of a holographic universe in his book by the same name in 1991 (Talbot 1991). Notably, he unified completely separate angles of research, building on the conclusions of the eminent physicist David Bohm and the psychologist and psychiatrist Karl H. Pribram. Talbot writes (Talbot 1991, p. 1f.):

Intriguingly, Bohm and Pribram arrived at their conclusions independently and while working from very different directions. Bohm became convinced of the universe’s holographic nature only after years of dissatisfaction with standard theories’ inability to explain all the phenomena encountered in quantum physics. Pribram became convinced because of the failure of standard theories of the brain to explain various neurophysiological puzzles.

4.2 A Simulated Reality

The holographic principle, with strong support from theoretical physics and quantum information theory, suggests that the world is essentially an illusion. Specifically, the three-dimensional nature of space is fictitious. At the heart of reality lies a two-dimensional computational grid. If one zooms into the fabric of the universe, one hits an endpoint. This is reached at the Planck length, where every Planck area carries one bit (or qubit) of information. From this foundation, our illusion of three spatial dimensions, in which elementary particles (with and without mass) interact, is computed. In the words of one of the pioneers of an information-theoretic reality (Bekenstein 2003, p. 65):

Our innate perception that the world is three-dimensional could be an extraordinary illusion.

Can we go a step further? Could this illusion be more elaborate than we dare to dream? Is reality itself perhaps a vast simulation? Are “it from bit” and digital physics actually uncovering a radically different ontology of reality? One in which everything is simulated? In effect, the ontology we are embedded in would be one which is simulated—most likely computed—within a vaster and more fundamental ontology encompassing the simulation.

The notion that reality is an illusion has a long history. Recall Zhuang Zhu’s butterfly dream recounted in the Preface. Or the Buddhist notion of anicca, describing reality as a vast and impermanent illusion (Chap. 1). The postmodern philosopher Jean Baudrillard introduced the notion of simulacra in the context of simulations (Baudrillard 1981). A simulation is the imitation of the operation of a real-world process or system. In contrast, simulacra represent the last step in four stages of disassociating from reality. From faithful representations, higher levels of “perversion” finally reveal the simulacrum, which bears no relation to reality anymore. As it is not a copy of reality, it becomes a reality in its own right. Baudrillard coined the term hyperreality for this: “It is the generation by models of a real without origin or reality” (Baudrillard 1994, p. 1).

Naturally, the premise of a fictitious reality is also encountered in science fiction (Botz-Bornstein 2015):

Solaris (1972) and Stalker (1979) by Andrei Tarkovsky as well as The Matrix by the Wachowski brothers are science fiction films with a highly metaphysical appeal. In addition, all three films deal with the possible falseness of what we generally supposed to be a “reality”. In The Matrix, a posthuman reality of millions is declared to be due to cognitive manipulations effectuated by machines and computers. People do not live their everyday lives in a human way in the real world but inside a computer program.

Today, the notion of a simulated reality has been adopted by some Silicon Valley tech billionaires, potentially helping fund research on such outlandish ideas (Griffin 2016).

In the science community, Brian Whitworth has proposed that the physical world is a virtual reality (Whitworth 2008, 2010). However, the most popular version of a simulated universe goes under the name of the simulation hypothesis. An early version was proposed by the robotics and artificial intelligence researcher Hans Moravec (Moravec 1999). Then, the philosopher Nick Bostrom developed and expanded the argument. In a nutshell (Bostrom 2003):

Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that if this were the case, we would be rational to think that we are likely to be among the simulated minds rather than among the original biological ones. Therefore if we do not think that we are currently living in a computer simulation, we are not entitled to believe that we shall have descendants who will run lots of simulations of their forebears. That is the basic idea.

More technically (Bostrom 2003):

[A]t least one of the following propositions is true: (1) the human species is very likely to become extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we shall one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

A posthuman stage of civilization is “where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints” (Bostrom 2003). More generally (Herbrechter 2013, back cover):

Posthumanism is a major reassessment of the most pressing of contemporary debates.

Ancestor-simulations are simulations of ancestral life which are indistinguishable from reality to the simulated observer. Creationism (Sect. 12.2.2), in effect, implies an ancestor-simulation: although our universe appears billions of years old to the uninitiated, it would actually be a couple of thousand years old, filled with fictitious (simulated) evidence of its epochal history.

Bostrom presents his argument in detail as follows. One of the following statements is correct (Bostrom 2003):

  1. The fraction of all human-level technological civilizations that survive to reach a posthuman stage is close to zero.

  2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is close to zero.

  3. The fraction of all observers with human-type experiences that live in simulations is close to one.

Bostrom, of course, believes option three is the most probable one. He asks (Bostrom 2003):

If there were a substantial chance that our civilization will get to the posthuman stage and run many ancestor-simulations, then how come we are not living in such a simulation?

He is proposing an either/or argument: “unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation” (Bostrom 2003). Either there is no technologically advanced species in the universe capable of creating high-fidelity simulations or, once such simulations are discovered, they proliferate. Especially through nested ones, where simulated observers in the simulated realities create their own simulated realities with simulated observers—a process that could go on indefinitely. As humanity’s computational prowess, and the understanding of reality as information-theoretic, increases, we should expect to be able to construct detailed simulations of reality in the near future, including observers—unless humanity destroys itself (see also the Epilogue).
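
The counting at the core of Bostrom’s argument can be condensed into one expression. In the sketch below, the variable names and the numerical inputs are illustrative assumptions; the expression for the expected fraction of simulated observers follows the structure of Bostrom (2003).

    # Sketch: the counting behind Bostrom's argument. With f_p the fraction
    # of civilizations reaching a posthuman stage, f_i the fraction of those
    # running ancestor-simulations, and n_bar the average number of such
    # simulations each runs, the expected fraction of observers who are
    # simulated is f_sim = f_p*f_i*n_bar / (f_p*f_i*n_bar + 1).
    def simulated_fraction(f_p, f_i, n_bar):
        x = f_p * f_i * n_bar
        return x / (x + 1)

    # Arbitrary illustrative inputs.
    for f_p, f_i, n_bar in [(1e-6, 1e-3, 1e6), (0.1, 0.01, 1e6), (0.5, 0.5, 1e9)]:
        print(f_p, f_i, n_bar, simulated_fraction(f_p, f_i, n_bar))

Unless one of the first two factors is driven essentially to zero (options one and two of the trilemma), even a modest number of simulations pushes the fraction of simulated observers towards one (option three).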

Are there any indications that the simulation hypothesis is more than an entertaining thought experiment? If we inhabit a simulation, the following observations can be expected:

  1. The finite nature of the computational process running the simulation should render the simulation finite as well.

     a. There should exist no measurements with infinite accuracy within the simulation.

     b. Changes can only happen in discrete steps.

     c. The accuracy of the initial conditions should determine how systems evolve in time.

     d. There should exist a minimal non-zero value and a maximal value for physical quantities.

  2. The simulation should become fuzzy and uncertain at the “borders” of the simulation.

  3. Informational entities should be unconstrained by space and time.

  4. There should exist glitches in the simulation.

  5. There should exist hacks in the simulation.

  6. The simulation should be optimized.

Indeed, this is what we observe. Heisenberg’s uncertainty principle gives a limit to how accurate measurements can be (1a). Quantum mechanics was the first theory speaking of discreteness: the infamous quantum leap (1b). Chaos theory (Sect. 5.1.3) displays a path-dependence sensitive to the accuracy of the initial conditions (1c). The speed of light is constant and the third law of thermodynamics forbids any physical system from reaching absolute zero (−273.15 \(^\circ \)C) (1d). Our entire commonsensical classical world disintegrates at the quantum level (2). Entangled systems are only constrained by the laws of quantum information and are not impeded by space or time (3). Mathematics suffers from inherent incompleteness and randomness and computation is fundamentally undecidable (Sect. 9.4)—to everyone’s great surprise (4). The holographic principle (and AdS/CFT duality) allows the three-dimensional simulation to be rendered using only two-dimensional computation (5). Quantum mechanics and general relativity are incompatible due to the different nature of their underlying “programming” (i.e., the discrete vs. the continuous), exposing a missing feature in the simulation (5). The exactly fine-tuned values of the “natural constants” (Sect. 10.3.1), allowing for complex structure formation, are simply the parameters of the simulation (6). The location of Earth within the universe and the current time in cosmic history (“axis of evil” and “coincidence problem,” Sect. 10.3.1) both appear to be very special and not coincidental (6).

It may seem surprising that thinking about reality in terms of a simulation allows many phenomena to appear in a very different light. However, how feasible is such a computation? In the words of AaronsonFootnote 19:

[O]ur observable universe could be simulated by a quantum computer—or even for that matter by a classical computer, to high precision, using a mere \(\approx 2^{10^{122}}\) time steps.

But crucially, if you believe that the observable universe couldn’t be simulated by a computer even in principle—that it has no mapping to any system of bits or qubits—then at some point the speculative shoe shifts to the other foot. The question becomes: do you reject the Church–Turing Thesis? Or, what amounts to the same thing: do you believe, like Roger Penrose, that it’s possible to build devices in nature that solve the halting problem or other uncomputable problems? If so, how? But if not, then how exactly does the universe avoid being computational, in the broad sense of the term?

In essence, he claims that the simulation hypothesis is unfalsifiable. He refutesFootnote 20 the claims some people have made that a recent publication has falsified the hypothesis (Ringel and Kovrizhin 2017). Some physicists support the idea with humor, sidestepping the profound philosophical implications. For instance, the Nobel laureate George Smoot gave a TEDx talk with the tongue-in-cheek title You are a Simulation and Physics Can Prove It (Smoot 2013). Indeed, the general public also appears to find this idea intriguing (Lewin 2016):

The 17th annual Isaac Asimov Debate at New York’s American Museum of Natural History sold out in just 3 minutes online, host Neil deGrasse Tyson told the audience. The debate featured five experts chewing on the idea of the universe as a simulation.

Some attempts to find empirical evidence have been made (Beane et al. 2014). Naturally, many people find the notion of a simulated universe preposterous. Specifically, who is doing the programming on what sort of computer, in what kind of reality and why? In effect, the simulation hypothesis is a variation of the theistic intelligent design argument, shifted towards programming “deities” or aliens.

4.3 Alternatives and Opposition

The cosmologist Max Tegmark goes a step further with his proposed ontology of reality. He retreats from the informational paradigm and invokes a radical form of Platonism (Sect. 2.2). An overview of the current situation is found in the following (Brockman 2016, p. 228f.):

Computation is different from mathematics. Mathematics turns out to be the domain of formal languages, and is mostly undecidable, which is just another word for saying uncomputable (since decision making and proving are alternative words for computation, too). All our explorations into mathematics are computational ones, though.

[...]

A growing number of physicists understand that the universe is not mathematical, but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable, mathematical notions (such as continuous space) makes progress possible.

Tegmark closes the loop by redeclaring the primacy of mathematics. His ideas are summarized under the term of the mathematical universe hypothesis (Tegmark 2008, 2014). In a nutshell, reality is a mathematical structure. Tegmark proceeds as follows (Tegmark 2008):

In this section, we will discuss the following two hypotheses and argue that, with a sufficiently broad definition of mathematical structure, the former implies the latter.

  1. External Reality Hypothesis: There exists an external physical reality completely independent of us humans.

  2. Mathematical Universe Hypothesis: Our external physical reality is a mathematical structure.

In his book, called Our Mathematical Universe, Tegmark adds the following concepts (Tegmark 2014, Chapter 12):

  1. Computable Universe Hypothesis: Our external physical reality is a mathematical structure defined by computable functions.

  2. Finite-Universe Hypothesis: Our external physical reality is a finite mathematical structure.

Tegmark claims that the mathematical universe hypothesis is, in principle, testable and falsifiable. It should not come as a surprise that today many eminent physicists are pondering radical new ideas for the ontology of reality. In the words of Tegmark (2014, p. 8):

If my life as a physicist has taught me anything at all, it’s that Plato was right: modern physics has made abundantly clear that the ultimate nature of reality isn’t what it seems.

He lists some of the responses to the question “What is reality?” (Tegmark 2014, p. 9). A shortened selection is:

  • Elementary particles in motion.

  • Quantum fields in curved space-time.

  • Strings in motion.

  • A divine creation.

  • A social construct.

  • A neurophysiological construct.

  • A dream.

  • Information.

  • A simulation.

  • A mathematical structure.

  • We have no access to what Immanuel Kant called “das Ding an sich.”

  • Reality is fundamentally unknowable.

  • Not only don’t we know it, but we couldn’t express it if we did [...] (postmodern answer by Jacques Derrida and others).

  • Reality is all in our head (constructivist answer).

  • Reality doesn’t exist (solipsism).

Many of these themes and concepts have appeared throughout this book, some even serving as trusty companions on the voyage.

Another approach claiming that information is the primordial essence of reality comes from Fisher information, a concept from mathematical statistics. Ronald Fisher was a geneticist who was instrumental in the development of modern statistics. He was a prolific researcher.Footnote 21 The physicist B. Roy Frieden utilizes Fisher information to claim that, in general, “information is at the root of all fields of science” (Frieden 2004, back cover). In effect, he unifies much of physics utilizing this grounding principle. Examples include Schrödinger’s wave equation in quantum mechanics and the Maxwell-Boltzmann distribution in statistical mechanics.
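
To make the central quantity of Frieden’s program less abstract, the following sketch is a toy example, unrelated to Frieden’s actual derivations: it estimates the Fisher information that a Gaussian sample carries about its mean and compares it with the textbook value \(1/\sigma ^2\).

    # Sketch: a Monte Carlo estimate of the Fisher information carried by a
    # sample about the mean of a Gaussian with known sigma. For this model
    # the analytic value is 1/sigma^2; the score is d/dmu ln f = (x - mu)/sigma^2.
    import random

    mu, sigma, n = 0.0, 2.0, 200_000
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    score_sq = [((x - mu) / sigma**2)**2 for x in samples]
    fisher_estimate = sum(score_sq) / n

    print(fisher_estimate, 1 / sigma**2)   # both roughly 0.25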

Floridi opposes the notion of a digital ontology. The ideas of Wheeler, Fredkin, Lloyd, and others represent an unsatisfactory approach “to the description of the environment in which informational organisms like us are embedded” (Floridi 2010, p. 339). Floridi argues in favor of an informational structural realism. Structural realism was discussed in Sect. 2.2.1, and its ontic version was introduced in Sects. 6.2.2 and 10.4.1.

Others oppose the entire notion of an information-theoretic basis of reality. The science writer John Horgan, famous for his book on the end of science (Horgan 1997), believes (Horgan 2011):

[T]he everything-is-information meme violates common sense.

More specifically (Horgan 2011):

The concept of information makes no sense in the absence of something to be informed—that is, a conscious observer capable of choice, or free will [...]. If all the humans in the world vanished tomorrow, all the information would vanish, too.

The question of how consciousness might enter an information ontology is the topic of Chap. 14. Moreover, in 2011 the link between AdS/CFT duality, entanglement, and algorithmic complexity theory had not been established. The current work at the interface of these topics supports an information ontology.

In the collection of essaysFootnote 22 published as the book It From Bit or Bit From It (Aguirre et al. 2015), Wheeler’s dictum is analyzed in great detail. One finds that (Aguirre et al. 2015, p. 3):

Some entrants argued against Wheeler’s stance that It derives from Bit, and these contributions appear in Chaps. 14–18.

There are 19 chapters in total. Chapter 17 argues the following (Barbour 2015, p. 197):

Examination of what Wheeler meant by “it” and “bit” then leads me to invert his aphorism: “bit” derives from “it”. I argue that this weakens but not necessarily destroys the argument that nature is fundamentally digital and continuity an illusion.

The quantum information expert Gregg Jaeger, whose work was introduced above in Sect. , also rejects the fundamental nature of information. In detail (Jaeger 2009, p. 234f.):

The idea that physics is reducible to information is problematic for at least two reasons. One difficulty is that it is far from clear that all physical things have anything intrinsic corresponding to informational magnitudes, much less that they are “submitting to information-theoretical descriptions” in all their aspects. [...] A second, insurmountable difficulty is that any information-theoretic description of an object is, by definition, entirely different from the existent it describes. A physical entity is not a simulacrum and cannot be equated with its own description; that this issue could have been ignored is a symptom of the influence of postmodernism [...].

He goes on to examine the work of Zeilinger and Landauer. It is perhaps safe to say that at this point the discussion has become philosophical. In the end, we all adopt a worldview and try to classify reality within its bounds. This is why it is so important to critically examine all such conceptual frameworks—old and new. Key notions relate to certainty (Sect. 8.1.1) and the question “Is the universe queerer than we can suppose?” (Sect. 12.4.4).