1 Introduction

Throughout their history, humans have been aware that the outcomes of some actions, for example, throwing a die, flipping a coin, or drawing a stick out of a bundle of sticks bearing different symbols, are difficult if not impossible to predict. Many cultures have interpreted this as hidden divine knowledge (believing that God obviously knew the outcome of any such action) or even as divine intention and plan (God expressing His will through these “chance” practices); people thus often used such actions as a method of divination, an approach known as cleromancy, aiming to access God’s intentions.

Unpredictability was thus, for a long time, believed to be simply a reflection of our limited knowledge, in contrast with God’s unlimited knowledge. On that view, randomness in any real, fundamental sense does not exist: chance is simply how unpredictable things appear to humans, while God knows their reality. Divine providence, hidden in God’s mysterious ways, could be revealed in the casting of lots.

A brief, even cursory review of premodern scholars’ views of “chance” (and its relation to providence) will confirm this general stance. For example, according to Augustine (fourth–fifth centuries AD): “Nothing in our lives happens haphazardly. Everything that takes place against our will can only come from God’s will, his Providence, the order he has created, the permission he gives, and the laws he has established” (Augustine 2011, 118, 12–32). John Calvin (sixteenth century), like Augustine, claimed that divine providence precludes randomness in the world: “it must be observed that the providence of God, as it is taught in Scripture, is opposed to fortune and fortuitous accidents” (Calvin 1813, I:16, 233). Avicenna (tenth–eleventh centuries) and Averroes (twelfth century) argued similarly (although Avicenna was a much stronger determinist than Averroes): since there is a primary cause to everything, everything (except human will and action) must have a prescribed cause, hence no fundamental randomness (Belo 2007). The poet-philosopher Omar Khayyam (eleventh–twelfth centuries) expressed a similarly deterministic view of the world in one of his quatrains: “And the first Morning of Creation wrote/What the Last Dawn of Reckoning shall read” (Britannica Academic 2021a).

However, the belief that free will conflicts with absolute determinism led other scholars, including Thomas Aquinas, to reject absolute determinism in the world (e.g., Hoffman and Michon 2017, 1–36). Aquinas also considered the relationship of chance to the existence of divine providence and concluded that the latter does not preclude contingency and does not negate the occurrence of fortune and chance (Strumia 2002).

The rise of modern science seemed to side with determinism. In the early seventeenth century, Johannes Kepler showed that the orbits and motions of planets could be described using simple laws, making it possible to predict the positions and speeds of all planets at all times, even centuries or millennia ahead. Half a century later, Isaac Newton showed that those laws followed from his universal law of gravity and laws of motion, more deeply grounding the predictability of the motions of all objects here on Earth or in the heavens. A century and a half later, Pierre-Simon de Laplace (1840) took these developments to their logical, final, and stunning conclusion, arguing that if one had enough brain or computing power, one could determine the position and speed of every object in the universe at all times, past, present, and future. Also, famously, Laplace argued, contra Newton, that God was not needed for the clock-universe to work precisely for all infinity—except perhaps to set it off at t = 0 (the time of creation).

By the nineteenth century, however, physical phenomena began to reveal a serious difficulty: gases are made of enormous numbers of molecules (though at that time no one knew what those were exactly), which are moving at various speeds, constantly colliding and changing directions and speeds in unpredictable ways. So, it seemed, the Laplacean claim—that we can know the positions and speeds of all objects in the universe at all times—was, at least in practice, not true. While we might be able to know that about planets, the molecules of a gas were another matter. Indeed, tiny differences in the initial positions, speeds, or directions of molecules would change the distribution drastically after a long enough time. While “chaos” (phenomena so sensitive to initial conditions as to be unpredictable after a long enough time) had not yet made its big entry into science, its seeds were already in place.

By the early twentieth century, various phenomena, experiments, and theoretical developments led to the formulation of quantum mechanics, with Heisenberg’s foundational ‘Uncertainty Principle’ and fundamental randomness (at least in the standard, Copenhagen interpretation). Indeed, Mark P. Silverman refers to quantum randomness as “the mother of all randomness” (Silverman 2014, 112). However, a minority of physicists have insisted that behind quantum randomness is a deterministic reality, with Albert Einstein leading that camp.

By the late twentieth century, randomness became an important fixture in physics and other scientific fields, and even in technology (for example, cryptography). Moreover, Laplace’s claim of total determinism was shown to be wrong even for planets in the solar system, as we shall see.

Randomness is not merely a subject of academic, philosophical, and scientific study; it also bears directly on human lives. Indeed, randomness may impact our survival, at the individual, group, or species levels. Humans need some ability to predict things, for example, when it will rain (for farming schedules and such), and if nature exhibits too much randomness, then life becomes unmanageable. In fact, rainfall does carry some element of randomness, in terms of timing and amount; however, it turns out that we can extract patterns from the data, which allows us to predict rainfall by month and by region (Eagle 2005, 752).

At the species level, the importance of randomness to humanity can be exemplified by the asteroid that struck Earth some 66 million years ago and resulted in the disappearance of the dinosaurs, paving the way for the emergence of primates and humans. Was that a random event? Was it predictable as per Laplace? Could the asteroid have been smaller, and thus not have exterminated all the dinosaurs, or could it have fallen in some large desert and thus not have had its transformational effect for mammals, primates, and humans? I will review this particular case in the section I devote to the bombardment of Earth by meteoroids.

Yet science retained its remarkable ability to predict outcomes of processes and phenomena (at least probabilistically) even in situations where randomness seemed to play a fundamental role. Behind the randomness, as I will try to show, and this will be my main thesis, there seems to be some order after all, which is reflected in most, if not all, areas of the cosmos: the tiny quantum fluctuations in the very early universe (which led to clumping of matter and the formation of stars and galaxies), the formation of planets, asteroids hitting Earth, solar activity (big flares and eruptions), supernova explosions and gamma-ray bursts, all impacting life and animals on Earth.

2 What Is Randomness?

In the introduction, I mentioned ‘randomness’, ‘chance’, ‘haphazardness’ (in the quotes from Augustine and Laplace), ‘chaos’, ‘probability’, ‘determinism’, and so on, without defining them. I basically take “random” to mean “unpredictable” and “chaos” to refer to processes or systems which are so sensitive to initial conditions that they sooner or later become unpredictable, even though they have no intrinsic randomness. There are many examples of such chaotic phenomena, from the weather to the orbits of planets in our solar system.

At the end of the introduction, I alluded to some order underlying the randomness of a given system, giving us the possibility of making at least probabilistic predictions.

We need more precise definitions, however. We can adopt the basic definitions given by the Encyclopedia of Mathematics (2021) or the Britannica Academic (2021b):

  • ‘Randomness’ per se is not defined in either the Encyclopedia of Mathematics (EM) or the Britannica Academic (BA); however, they define ‘random events’ (“Any combination of outcomes of an experiment that has a definite probability [but not certainty] of occurrence”—EM, n.d.) and ‘random variable’ (“a numerical description of the outcome of a statistical experiment”—BA, n.d.). Since random numbers, variables, or events are not necessarily equally probable, a probability distribution “describes how the probabilities are distributed over the values of the random variable” (BA). We can thus surmise that randomness is the absence of exact and specific predictability of the outcome of a given measurement, or (equivalently) the existence of random variables in the process, which are described by a probability distribution.

  • ‘Chance’ and ‘probability’ (also, commonly, ‘odds’, ‘likelihood’, or ‘expectation’) can be defined as: the extent to which something is likely to happen, with “extent to which” being quantifiable (a “frequentist” definition of ‘probability’ is the ratio of the number of times a given outcome occurs to the total number of trials).

  • A ‘stochastic process’ is one which involves random variables or events. For example, the process of radioactive decay of unstable nuclei can be said to be stochastic and is described probabilistically. “More generally, a stochastic process refers to a family of random variables indexed against some other variable or set of variables” (BA, n.d.). Oftentimes, the variation (or “indexing”) of random elements with time is what defines the stochastic process.

  • Phenomena or processes are said to be ‘deterministic’ when the outcome or next state of the system can—in principle—be calculated or described completely if one knows the current state fully. This is what Laplace expressed, as he assumed that all physical processes follow well-defined and fixed laws and thus allow one to predict all future events (of any system, even the entire universe) from current states. Indeterminism, on the other hand, holds that at least some events in the world do not follow deterministic laws but, instead, involve some element of randomness or unpredictability.

  • ‘Chaos’ combines determinism on short scales and unpredictability over extended periods of time, due to some sensitivity of the system to its initial conditions; the system is thus deterministic over the next short time-step, but unpredictable over long time intervals. How long it takes before a system’s behavior becomes effectively unpredictable depends on the physical parameters and interactions involved in each case. It is important to note that chaotic phenomena are not fundamentally random, but they are still, practically speaking, impossible to predict after long enough times.

  • ‘Pseudo-random numbers’ are numbers generated (usually) by a numerical algorithm or (sometimes) by a physical device, in sequences that appear random, such that it is extremely difficult to extract the rule, function, or operation that generated them, even though such a rule exists. Pseudo-random numbers are very widely used in simulations (Monte Carlo or other) in various fields (meteorology, climatology, cryptography, economics, etc.); a minimal sketch of such a generator is given just after this list.
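
As promised in the last definition, here is a minimal sketch of a pseudo-random number generator, in this case a linear congruential generator (the constants and the seed are illustrative choices, not taken from any source cited here). The output looks erratic, yet anyone who knows the seed and the three constants can reproduce the entire “random” sequence exactly.

```python
# Minimal sketch of a pseudo-random number generator: a linear congruential
# generator. The sequence looks random but is fully determined by the seed
# and the three constants below.

class LCG:
    def __init__(self, seed=12345):
        self.state = seed
        self.a = 1664525          # multiplier (commonly used illustrative value)
        self.c = 1013904223       # increment
        self.m = 2 ** 32          # modulus

    def next_uniform(self):
        """Return a pseudo-random float in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

rng1 = LCG(seed=2021)
print([round(rng1.next_uniform(), 3) for _ in range(8)])

# Re-seeding with the same value reproduces the identical "random" sequence:
rng2 = LCG(seed=2021)
print([round(rng2.next_uniform(), 3) for _ in range(8)])
```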

Having given simple definitions of the terms being used in this topic, we can now focus on randomness and present its different types and characteristics.

Carmen Batanero offers four widely held conceptions of the term ‘randomness’ (Batanero 2015, 34–49):

  • Randomness as equi-probability: where people think (or assume) that the possible outcomes of an unpredictable process are equally probable. (This is a misconception, as the probability distribution of a random variable need not be uniform.)

  • Randomness as the opposite of causality, or as a special type of cause.

  • Randomness as uncertainty in an outcome: the existence of multiple possibilities under the same conditions.

  • Randomness as a way to describe some phenomenon when information about it is limited, making it “unpredictable” (to us).

Batanero (2015, 38) also gives five scientific meanings/conceptions of randomness, along with the problems that they address and the procedures used in such cases. They can be summarized as follows:

  • Intuitive: what we think of as luck and fate, unknowable except perhaps (in the past) with divination tools (dice, coins).

  • Subjective: what we think of as “possibilities”, which start (in our minds) as all equally probable but get updated using methods such as Bayes’ theorem.

  • Classical: events being equally probable based on a lack of knowledge about any underlying factors; this is used as a basis for fair betting in games of chance, with probabilities computed using combinatorial analysis.

  • Frequentist: related to the frequency of outcomes, probabilities estimated and used as projections in the long run.

  • Formal: experiments are performed, random sequences are observed and measured; mathematical properties are described, simulations are conducted using pseudo-random numbers or sequences.

In the rest of this paper, I will be dividing randomness into “fundamental” (found in quantum systems), that is, intrinsic and not due to our limited epistemic (knowledge) capabilities, and “chaotic”, which is due to the exponential growth of uncertainties and appears erratic only because we cannot follow the system with any precision in our calculations.

Let us now focus on what I am calling the “fundamental” (quantum) type of randomness.

As I have mentioned, there are several interpretations of quantum theory, including some that assume determinism (“hidden variables theories”, most notably); however, standard interpretations consider phenomena at the smallest levels (elementary particles, atoms, molecules) as fundamentally random. The best example is the decay of an unstable nucleus, say Aluminum-30, which has a half-life of 3.6 seconds. A half-life is the time after which half of a sample (say 1 gram) of the given element will have decayed. But there is absolutely no way of knowing which nucleus will be among the half that will have decayed. If we zoom in and focus on one specific nucleus, we may wait a millisecond or a century before it decays, even though half the sample will have decayed in 3.6 seconds and then half of the remaining half will decay after another 3.6 seconds, and so on. The process is fundamentally random (i.e., not knowable even in theory), even though there is a simple probabilistic rule (half will decay over each half-life) that allows us to make predictions and use the material and the law that regulates it.
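
A minimal simulation makes this concrete. In the sketch below (the 3.6-second half-life is the figure quoted above; the sample size and time step are arbitrary illustration choices), every surviving nucleus has the same small probability of decaying in each time step, so no individual decay can be predicted, yet about half of the sample disappears over every half-life.

```python
import random

# Minimal sketch: stochastic decay of a sample of unstable nuclei.
# The 3.6 s half-life is the value quoted above for Aluminium-30; the sample
# size and time step are arbitrary choices made only for illustration.

HALF_LIFE = 3.6                         # seconds
DT = 0.1                                # time step, in seconds
P_DECAY = 1 - 0.5 ** (DT / HALF_LIFE)   # decay probability per nucleus per step
STEPS_PER_HALF_LIFE = round(HALF_LIFE / DT)

random.seed(0)
nuclei = 10_000                         # initial number of simulated nuclei

for step in range(1, 4 * STEPS_PER_HALF_LIFE + 1):
    # Each surviving nucleus decays in this step with the same small probability;
    # which particular nuclei decay is unpredictable even in principle.
    nuclei -= sum(1 for _ in range(nuclei) if random.random() < P_DECAY)
    if step % STEPS_PER_HALF_LIFE == 0:
        print(f"after {step * DT:4.1f} s: about {nuclei} nuclei remain")

# Expected output: roughly 5000, 2500, 1250, 625 nuclei surviving.
```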

More generally, according to the Heisenberg Uncertainty Principle, it is impossible to predict precisely where an electron will be at any given moment. However, the Schrödinger Equation allows us to calculate the probability of finding an electron in any spot (of any size, small or large) at any moment. Moreover, the Schrödinger Equation allows us to draw 3-D distributions of a single electron’s probability distribution (or that of a large ensemble, assuming they are under the same physical conditions), telling us where it (or they) will more likely be found in a measurement.
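
In standard textbook notation (a compact restatement of the two claims in this paragraph, not tied to any particular source cited here), these read:

\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
P(\mathbf{r},t)\,d^{3}r \;=\; \lvert \psi(\mathbf{r},t)\rvert^{2}\, d^{3}r ,

where \psi is the wave function obtained by solving the Schrödinger equation, i\hbar\,\partial_{t}\psi = \hat{H}\psi, and \lvert\psi\rvert^{2} is the probability density of finding the electron near position \mathbf{r} at time t.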

And these probability distributions in the quantum world translate into non-uniform distributions in the densities of particles at small scales. This also applied to the universe itself when it was small enough for quantum effects to matter on the scale of the whole cosmos.

3 Randomness in the Early Universe; Galaxy Formation

Quantum randomness played a key role in the affairs of our universe from the earliest times. When the universe was about 10⁻³⁶ seconds old, and under the effect of the inflaton field that suffused it, the universe underwent an “inflation”, that is, a period of exponential growth in size by a factor of about 10²⁶, taking it from smaller than an atom to about 1 millimeter in diameter. This inflation had a number of consequences, including (in what concerns us here) the amplification of tiny quantum-level fluctuations in the inflaton field to macroscopic scales.
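
To give a sense of scale (a back-of-the-envelope reading of the factor quoted above, not a detailed inflationary model), exponential growth by a factor of about 10²⁶ corresponds to roughly 60 “e-folds” of expansion:

\frac{a_{\mathrm{end}}}{a_{\mathrm{start}}} \;\approx\; e^{N} \;\approx\; 10^{26}
\quad\Longrightarrow\quad
N \;\approx\; 26 \ln 10 \;\approx\; 60 .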

The usual analogy is that of a balloon into which we blow air, with the surface of the balloon representing the spatial dimensions of the universe and the radial direction (inside and outside the balloon) representing time, past and future. (The center of the balloon would be the origin of space and time, the “singularity” from which the universe came out.) If tiny letters are written on the surface of the balloon, any exponential increase in its size would make the letters macroscopically large and separate. And if the writing on the surface of the balloon were done by quantum fluctuations acting on ink that had been uniformly distributed on the surface early on, then we would see how large-scale structures (the big letters in our analogy, the galaxies in the real universe) would have emerged, especially as gravity would soon start to act on clumps of matter whenever they became large enough for gravity to affect them.

Thus, large-scale cosmic structure (the distribution and sizes of galaxies) is due in some fundamental way to (i) the quantum fluctuations that happened in those early times, (ii) the inflation that magnified those fluctuations to macroscopic levels, (iii) the expansion that continued on afterward, and (iv) the gravitational attraction between large clumps of matter, which themselves resulted from those fluctuations in the density of microscopic particles.

The fluctuations (Fig. 4.1) also translated into slight variations in the radiation that was emitted when (about 380,000 years later) electrons and protons started to bind into atoms (the universe having expanded and cooled enough to allow atoms to form and not break up immediately). That radiation, “decoupled” from the atoms (no longer interacting with matter), filled the “small” universe and, with the continued expansion, had its wavelength stretched, reaching microwave scales today. This radiation, known today as the “cosmic microwave background” because it fills the cosmos almost uniformly in all directions at microwave wavelengths, constitutes one of the main pieces of evidence for the Big Bang model.

Fig. 4.1 Growth of quantum fluctuations in the density of matter in the early universe: an essentially uniform density early on, developing into pronounced peaks later. (Source: Nidhal Guessoum)

But this “fossil” radiation should carry the imprint of the original fluctuations in density and not be totally uniform. If we measure the radiation in various directions in the sky, we should see some tiny differences. Indeed, the mapping of this cosmic microwave background has been performed with higher and higher precision and resolution over the last 20 years, with the Wilkinson Microwave Anisotropy Probe (WMAP), which operated between 2001 and 2010, and with the Planck satellite, which was launched in 2009 and operated until 2013, both producing cosmic maps (Fig. 4.2, below).

Fig. 4.2 Maps of the cosmic microwave background radiation produced by WMAP (top: https://map.gsfc.nasa.gov/media/121238/index.html) and Planck (bottom: https://wmap.gsfc.nasa.gov/media/121238/index.html)

Variations in the temperature of the cosmic radiation were found to be of the order of 10⁻⁴, in agreement with the theoretical calculations. What matters to us here is the fundamental role that quantum fluctuations, which are random in nature but follow well-known and well-understood laws, played in the formation of structures (galaxies, clusters) in the universe. No model or theory could ever predict what galaxy would have formed, where, and with what size or characteristics, but the appearance of such structures and the statistical distribution of their sizes were “written” in the quantum laws from the start.

4 Randomness and Chaos in the Formation of the Solar System

Fast forward about 9 billion years, and in one region of what will later be called the Milky Way galaxy, a nebula (a big cloud of gas and dust) is rotating very slowly around itself. (Everything moves, and almost everything rotates, in the cosmos, because of gravity’s pull on one side or another of a given object by nearby or passing objects.) But the nebula being large enough (originally a few light-years across and far more massive than the Sun), its own mass pulls everything inward. The nebula thus contracts, and the density becomes much larger in the central regions. The smaller the nebula becomes, the faster it rotates (this is a manifestation of the ice-skater rotation effect, or more technically a result of the conservation of angular momentum). And the faster rotation makes the nebula flatten (this is the “pizza dough effect”, or more technically the result of collisions dissipating energy while the total angular momentum is conserved), making the solar system flat (except for the farthest matter, which was too far to be drawn to the disk).
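
The “ice-skater” and “pizza dough” effects invoked here both come down to conservation of angular momentum. In a crude one-number model of the nebula (treating it as a body of mass M, characteristic radius R, and angular speed \omega; a simplification for illustration, not a detailed model):

L \;\sim\; M R^{2} \omega \;=\; \text{const}
\quad\Longrightarrow\quad
\omega \;\propto\; \frac{1}{R^{2}} ,

so a contraction of the nebula by a factor of ten speeds its rotation up by roughly a factor of a hundred.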

At this point, some creative chaos occurs. First, most of the matter will have fallen to the inner “small” region of the nebula (a few million kilometers across), with densities and pressures reaching very high values, raising the central temperature to above 10 million degrees, which then allows for nuclear fusion reactions to occur—a star (the sun) is born! The sun then clears up its surrounding region by swallowing up what is very close by and blowing away the remaining nearby gas with its powerful radiation.

The dust in the inner region then undergoes countless collisions and mergers. “Planetesimals” start to form, and small rocky objects slowly emerge, with sizes between about one third and a few times that of Earth.

But the collisions between rocks being numerous, and their outcomes being very sensitive to the speeds and directions of the colliding objects, the formation of Earth, with a perfect size and at a perfect location around the Sun, that is, right in the middle of the “habitable” zone, seems rather fortuitous. Indeed, a “perfect” planet for future complex life must have the right size for a “good” atmosphere to form (thanks to its appropriate gravity), be rocky (with a solid surface for animals to later evolve and thrive), and be in a region where water, after it is brought over by comets and asteroids, will be liquid, allowing life to form sometime later; plants will then absorb the carbon dioxide (which would have been released into the atmosphere by volcanic and other geological activity) and release oxygen, the latter being vital for the animals (and humans later) to prosper and evolve.

Thus, it is clear that Earth was not necessarily bound to appear, at least not in this particular solar system, considering all the randomness that is involved in the formation of planets. But if we recall that there are tens of billions of solar systems in our Milky Way galaxy alone, then a planet of roughly the size of Earth (a bit smaller or larger would still have worked) at the right location and with the right kind of star, would have appeared somewhere.

However, there is an important difference between an Earth-size, an Earth-like, and a life-bearing planet. Indeed, life-friendly planets are probably not so easy to find, as a number of criteria must be simultaneously fulfilled. And this is just at the formation stage. More trouble lies ahead.

Indeed, it is not enough to form a planetary system; one must also ensure its stability, that is, that the planets will not crash into each other before life has had a chance to evolve, as well as the survival and prosperity of life, that is, that it is not wiped out by unpredictable, disastrous impacts or bursts of radiation.

5 Stability of the Planetary System

Once the planets had formed and the system (the Sun and everything else) had settled, the main issue was whether it would remain stable. Do the planets keep their orbits over long periods of time; are the planets’ motions regular enough for their positions to be predictable after millions or billions of years; and can we trace back their positions (absolute and relative) into the far past to determine what happened at various points?

Before comets were understood to be small objects, with negligible gravitational effects on other bodies, their gravitational pull when passing by planets at various distances had to be considered. Newton realized that these pass-bys could lead to incremental changes in the speeds and/or directions of motion of planets, and he concluded that God had to intervene from time to time to restore order, or else the planets would crash after some time. Leibniz, in contrast, rejected the idea of God having made an imperfect creation and needing regular interventions to save the world.

Laplace, as we noted earlier, believed in total determinism, though this did not contradict Newton’s realization that comets could disturb planetary orbits. The point that remained unclear was whether small gravitational effects (the comets would soon be known to be really tiny compared to planets) would end up disrupting the whole system. Chaos had not yet appeared in physics; this had to wait until 1963, when Edward Norton Lorenz published his seminal paper ‘Deterministic Nonperiodic Flow’ (1963, 130–141), the foundational work of chaos theory, and so the issue did not worry scientists too much (yet).

Well before chaos theory came on the scene, it was realized that there could be an issue with the stability and predictability (or lack thereof) of three-body problems. No general solution had been found for a gravitationally interacting system of three objects (say the Sun, the Earth, and the Moon), and around 1888, King Oscar II of Sweden was enticed to offer a prize for the scientist who would solve the problem or at least make major advances on it (Scott 2007, 21–22; Charap 2018, 67). Jules Henri Poincaré (1854–1912), by then already an accomplished mathematician, submitted a 158-page essay for which he was awarded the prize in January 1889. Just before publication, however, he realized he had made an important mistake. The revised version (submitted in January 1890) was 270 pages long and concluded that not only were there no general solutions to the problem for all initial conditions (Scott 2007), but in some cases the system would vary unpredictably over the positions-velocities parameter space, the hallmark of what we would come to call ‘chaos’. Poincaré (1908) would later write: “It may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon” (Maitland 1914, 67).

Despite further work on the stability of dynamical systems, particularly by the Russian mathematician and physicist Aleksandr Mikhailovich Lyapunov (1857–1918), with his introduction of the ‘Lyapunov exponent’, which characterizes the timescale over which a system becomes highly unstable, unpredictable, and chaotic, the problem was mostly forgotten until the aforementioned Lorenz discovered it while running weather forecast numerical simulations (on computers) around 1960. However, this was not applied to planets and their orbits until 25 years later.
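
To see what a Lyapunov-type divergence looks like in practice, the sketch below runs the logistic map, the simplest standard example of a chaotic system (a toy illustration, not a model of weather or planetary orbits): two trajectories that start 10⁻¹⁰ apart separate roughly exponentially until the difference is of order one and all memory of their initial closeness is lost.

```python
# Minimal sketch of sensitive dependence on initial conditions, using the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4.0).
# Toy illustration of a Lyapunov-type divergence, not a model of planetary
# dynamics or of the weather.

r = 4.0
x, y = 0.4, 0.4 + 1e-10          # two almost identical initial conditions

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:3d}: separation = {abs(x - y):.3e}")

# The separation grows roughly like exp(lambda * step), with lambda = ln 2
# for r = 4, until it saturates near the size of the interval [0, 1].
```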

In 1989 and 1990, Jacques Laskar made important contributions to the problem by performing high-precision numerical calculations on the time evolution of the solar system, taking up to eight planets into account and evolving the system for up to 100 million years (Laskar 1989, 237–238; 1990, 266–291; 2013, 239–270). He found that a difference of 1 kilometer in the initial position of a planet could grow to 1 AU (150 million km) over 95 million years! He also showed that for certain initial conditions, some planets could be ejected from the solar system (and, by extension, from other planetary systems out there), which would later help explain the discovery of “rogue planets”. Others would later confirm these findings, for example, Gerald J. Sussman and Jack Wisdom: “The evolution of the entire planetary system has been numerically integrated for a time span of nearly 100 million years. This calculation confirms that the evolution of the solar system as a whole is chaotic, with a time scale of exponential divergence of about 4 million years” (Sussman and Wisdom 1992, 56).
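
Laskar’s quoted figures already imply a rough divergence timescale (a back-of-the-envelope consistency check, not his actual computation): an uncertainty growing from 1 km to 1 AU ≈ 1.5 × 10⁸ km over 95 million years corresponds to an e-folding (Lyapunov) time of

\tau \;\approx\; \frac{95\ \text{Myr}}{\ln\left(1.5\times 10^{8}\right)} \;\approx\; \frac{95\ \text{Myr}}{18.8} \;\approx\; 5\ \text{Myr} ,

which is of the same order as the roughly 4-million-year divergence timescale quoted by Sussman and Wisdom.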

Additional simulations, more and more precise with faster and faster computers, would later reach the following important conclusion: the orbits of the planets in our solar system are stable for 99% of the initial conditions, but the phase (where along its orbit a planet would be after, say, 10 million years) is not predictable, at least for some planets. If a planet’s rotation axis is inclined, as both Earth’s and Mars’ are, then big changes in its position along its orbit over millions of years make it difficult for scientists to trace climate conditions, and hence the evolution and survival of life, backward or forward in time. In other words, we do not know whether July in the year 5 million BC corresponded to summer or winter in each hemisphere. We do have ice cores that can tell us what the temperature was like in that era, but they are not precise enough to determine the weather in a given month north or south of the equator.

6 Formation of the Moon

Another “random” event, unpredictable due to the huge number of collisions that had occurred before—and a very lucky one for life and humans billions of years later—occurred about 50 million years after Earth had formed. The planet had not yet fully settled, indeed its surface had not yet hardened, when an object the size of Mars (about ten times less massive than Earth) hit it in what is commonly referred to as “the giant impact”. The object, which has been called Theia, the name of the mythical mother of Selene, the Greek goddess of the Moon, was completely destroyed, its debris scattered in space, along with the parts of Earth that were excavated in the collision.

Indeed, contrary to the way other planets in the solar system have acquired moons (co-accretion of many moons for the large, gaseous, outer planets; capture by Mars of small objects from the asteroid belt), Earth acquired a moon in a giant collision-breakup. Evidence for this comes mainly from the high similarity between the composition of rocks on the moon and the geological material below the crust of our planet (Young et al. 2016, 493). The giant impact thus produced a relatively large moon (the largest moon/planet ratio in the solar system), which enabled it to play a vital role in the evolution of life on Earth. Indeed, for the climate to be stable and not vary widely and wreak havoc on life, the inclination of Earth’s axis needs to be stable, that is, not vary by more than about 1 degree; a large moon provides that stability. Moreover, a large moon produces substantial tides, which lead to the mixing of nutrients and life forms on coasts, further helping biological evolution. Furthermore, a recent paper (Grewal et al. 2019, 3669) has indicated that the giant impact may have brought important chemical elements (carbon, nitrogen, and sulfur) to the bulk Earth, elements which were vital for the emergence of life later.

This seems like the luckiest event that one could possibly envision for a planet like Earth. Without that giant impact, the chemical conditions for life to form may have been insufficient, and there would have been no moon to stabilize the climate and allow for an upward evolution of life to complex animals and ultimately humans.

However, here too, statistics make even such a fluke event not so wondrous. Indeed, studies of embryonic star-planet systems (using the infrared Spitzer Space Telescope, in particular) find some tentative evidence of “catastrophic collisions” (Gorlova et al. 2007, 516–535). It has been estimated that 5–10% of such planetary systems undergo collisions big enough to produce large moons. If so, our lucky event was not such a rare incident after all.

7 Randomness in the Bombardment of the Earth by Meteoroids

There are two important issues in the history of our planet: when did life appear and what impact did the bombardment by meteoroids and energetic solar radiation have on the emergence and evolution of life and the appearance of primates and humans?

The strong gravity exerted by the giant planets (Jupiter and Saturn) on objects orbiting or passing nearby tugs those objects, slightly modifying their motions, sometimes enough to send them toward the inner regions of the solar system, to then be attracted by Earth, other planets, our Moon (thus the heavy cratering), or the Sun.

In the last 40 years or so, scientists have become convinced that a “late heavy bombardment” occurred between 3.8 and 4.1 billion years ago (Fig. 4.3). The main evidence for this comes from the radiometric dating of a number of craters on the Moon—hence the other name “lunar cataclysm” (Tera et al. 1974, 1–21; Cohen et al. 2000, 1754); the absence of an atmosphere there leads to a rather pristine preservation of the craters, small and big.

Fig. 4.3 Time plot of Moon bombardment in the first billion years, showing the ‘lunar cataclysm’, that is, the ‘late heavy bombardment’; the points marked correspond to the lunar cataclysm, Earth evidence, and the oldest fossils. (Source: Nidhal Guessoum)

A number of hypotheses have been advanced to explain this major event: a dynamical instability in the outer Solar System; the collisional disruption of a large Mars-crossing asteroid; a gravitational event that swept objects out of the asteroid belt; and other possible scenarios.

Interestingly, the earliest fossils of life forms that have been found on Earth date back to roughly that time. It is often assumed that such a sustained assault on Earth would have wiped out life, if it existed then in whatever form. Recently, however, studies have suggested that craters produced by such impacts could have been ideal for the appearance of life, for meteoroids bring water and iron, and impact craters present important helpful features, including secondary minerals which can act as templates or catalysts for prebiotic syntheses, diverse impact energies resulting in different rates of organic syntheses, and so on (Cockell 2006, 1845–55).

Another important event, which had a crucial consequence for the emergence of humans on Earth, was the fall of an asteroid about 66 million years ago. The meteoroid is estimated to have been about 10 kilometers wide, and it is believed to have struck just off the coast of the Yucatan peninsula in Mexico. The blast it produced in the shallow sea was the equivalent of 10 billion Hiroshima-type atomic bombs, or 100 trillion tons of TNT; it released 10,000 billion tons of carbon- and sulfur-rich gases, blocking sunlight for months, igniting fires in many forests, and producing a dead world, killing almost all reptiles, most birds, and plants, starving the dinosaurs to death within a few months, and paving the way for mammals. It was a sudden and brief event on cosmic and geological timescales, but a momentous one by all measures.

This event is often cited as an example of how randomness and chaos rule the world: our existence could very well never have come about if the asteroid had been smaller or if it had hit in a place where its impact would have been much less devastating.

This is where statistics and probabilities again come to play an important role in our understanding of the world. A study of craters and meteorites found around the globe has allowed us to infer an empirical law of the frequency of such strikes as a function of the size of the incoming, falling rock/meteoroid. This is not an easy, straightforward matter, it must be stressed, partly because not all meteorites are found or registered, craters are often eroded, and the data is thus necessarily incomplete. A 20-year record of “air bursts” (explosions of medium-size, 1–20 meter, meteoroids as they hit the thicker part of the atmosphere) is shown below (Fig. 4.4).

Fig. 4.4 Record of small meteoroids entering and disintegrating in the atmosphere between 1994 and 2013 (225 daytime and 301 nighttime events). https://www.nasa.gov/sites/default/files/bollide.jpg

And Fig. 4.5 is a similar map made for “fireballs” (very bright meteors) recorded between April 15, 1988, and April 22, 2019.

Fig. 4.5 Fireballs reported by US government sensors (April 15, 1988 to April 22, 2019). Alan B. Chamberlin (JPL/Caltech—https://cneos.jpl.nasa.gov/fireballs/)

From these records (data), one can plot the frequency of, and time intervals for, meteoroids entering Earth’s atmosphere as a function of their diameters and the energies they release (Fig. 4.6).

Fig. 4.6 Frequency and time interval of meteoroids entering Earth’s atmosphere as a function of their diameter and corresponding energy release; the ~80-m Tunguska and ~10-km Chicxulub impactors are marked on the curve. (Source: Nidhal Guessoum)

From the plot, one can see that 10-km size asteroids strike every 100 million years or so, 1-km meteoroids hit every million years or so, 100-m rocks arrive at Earth roughly every 1000 years, and so on. Again, we find that random events of these kinds follow rather simple probabilistic and statistical laws, and they can thus be forecast in terms of probabilities.
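
The numbers just quoted can be tied together by a simple power law (a rough reading of the figures in this paragraph, not a fit to the actual crater and fireball catalogues): if the mean interval between impacts scales as a power of the impactor’s diameter, the quoted points imply an exponent between roughly 2 and 3, as the sketch below shows.

```python
import math

# Rough power-law reading of the recurrence intervals quoted in the text
# (10 km ~ every 1e8 yr, 1 km ~ every 1e6 yr, 100 m ~ every 1e3 yr).
# Illustration only; real estimates rest on full crater and fireball catalogues.

points = [(10_000, 1e8), (1_000, 1e6), (100, 1e3)]  # (diameter in m, interval in yr)

# Slope between successive points in log-log space: interval ~ diameter**slope
for (d1, t1), (d2, t2) in zip(points, points[1:]):
    slope = math.log(t1 / t2) / math.log(d1 / d2)
    print(f"{d2:>6} m to {d1:>6} m : interval grows roughly as diameter^{slope:.1f}")

# Interpolate within the lower segment (exponent ~3) for an intermediate size:
d, d0, t0 = 300, 100, 1e3
print(f"~{d} m impactor: roughly one per {t0 * (d / d0) ** 3:,.0f} years")
```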

The important conclusion from this realization is that Earth was bound to be hit by a large asteroid, within a hundred million years or so. The probability of a dinosaur-exterminating asteroid hitting Earth during the last 300 million years was more than 99%. Humans and other highly evolved creatures might not have existed precisely 4.567 billion years after Earth’s formation, but intelligent and conscious creatures would have appeared sooner or later, since evolution was unfolding in full glory.

8 Randomness in the Sun’s Activity

The Sun’s activity and evolution during its first billion years is also an important issue pertaining to both randomness and the emergence and evolution of life on Earth. There are, however, a number of important uncertainties in this regard: (a) the composition of Earth’s early atmosphere is not well known; (b) there are indications that liquid water existed fairly early on the surface of our planet, but this is not strongly established; (c) it is widely believed that the young Sun was less luminous (by about 25–30%, since its core gradually heats up and produces more energy), but without an Earth atmosphere richer in greenhouse gases, it would have kept the planet frozen for the first 2 billion years (Sagan and Mullen 1972, 52–56); (d) the random solar flare and eruption activity, which was very probably much stronger early on, and which would have repeatedly zapped our planet with UV and X-ray radiation; (e) as with the bombardment, it is not clear whether the energetic radiation would have hindered or helped the emergence and the evolution of life on Earth.

I will focus here only on the randomness part, namely, the solar surface (magnetic) activity, which is observed in the sunspots and flares/eruptions that appear on the Sun, and which leaves a mark in tree trunks (over decades, centuries, and sometimes millennia) or ice cores (over centuries, millennia, and longer periods). Sunspots are the footprints of flares, which together with the much stronger coronal mass ejections send out large quantities of charged particles (electrons and protons) into the solar system. This “solar wind”, along with the energetic photons of the accompanying X-ray and UV radiation, can break organic molecules on Earth or at least ionize them and induce reactions or even mutations in the DNA of cells. And this can either kill a cell or produce a mutation that is most likely destructive, but it can also lead to the appearance of different species or life forms—evolution.

Reconstructing early solar activity is extremely difficult. Figure 4.7 shows the variations in sunspots over the last 400 years (upper panel) and over the last 40 years (lower panel). In fact, one can produce plots of sunspot numbers for the last 10,000 years (Yin et al. 2007) from Carbon-14 concentrations in tree-rings and/or geomagnetic variations. One method by which one can explore solar activity in the first 1–4 billion years is to study stars that are very similar to the Sun but have different ages. Egeland et al. (2016, 330–334) studied five such sun-like stars and found that young, fast-rotating ones show many times larger variability in their activity, while old, slowly rotating ones display very little variability.

Fig. 4.7 Sunspot number (solar activity) variation over the last 400 years (top: https://www.swpc.noaa.gov/products/solar-cycle-progression) and the last 40 years (bottom: https://www.swpc.noaa.gov/products/solar-cycle-progression)

What must be stressed here, however, is that while solar magnetic activity is chaotic, it still follows quasi-periodic cycles. The most famous and easily noticeable cycle of solar activity is the 11-year cycle, which takes the Sun from “solar maximum” to “solar minimum” and back, periodically; neither the 11-year interval nor the level of activity repeats exactly, as can be seen in Fig. 4.7.

Figure 4.7 clearly shows both the irregularities in the activity over short time scales (month to month, year to year) and the quasi-regularity (cyclicity) over long time scales (decades), even though the magnitude of the activity in each cycle varies substantially.

9 Randomness, Order in the World, and Divine Providence

What we have learned from this general review of “random” processes in the cosmos is that “randomness” is ubiquitous in the universe. We have found important examples of it ranging from the earliest times of the universe to rainfall in farmers’ fields today, including the chaotic formation of the planets and our moon, the subsequent meteoritic bombardment and energetic radiation that Earth was subjected to, particularly in the first billion years or so, and the chaotic solar activity still going on today.

The second important idea that I have stressed is that there are two kinds of randomness: the “fundamental” one, which is due to quantum indeterminacy (as believed by most physicists), and the “epistemic” one, which manifests itself in chaotic phenomena and which is only due to our inability to theoretically determine (calculate) the state of a system over long timescales if it is highly sensitive to initial conditions and the equations that describe it are non-linear.

The third and perhaps most important idea I have stressed is that all the randomness that we encounter in nature is, however, not without some underlying order or probabilistic-statistical pattern. In each case, whether quantum-based or chaotic, we have found some laws regulating the random process. Even the two big “lucky” events that allowed life to evolve and let humans emerge after a few billion years, namely, the planetary collision from which our Moon was born and the asteroid strike that killed the dinosaurs and the big reptiles and allowed mammals to prosper and produce primates, were not “flukes” but rather events expected to occur on long enough timescales.

One comes away from such a big-history review amazed that randomness not only follows simple, elegant laws and produces beautiful order, but was in fact necessary for differences to arise in nature and for variety to emerge. Without quantum fluctuations, the universe would have been utterly homogeneous, and complex structures (galaxies, stars, planets, etc.) would not have formed.

What emerges is a multi-layered picture of our universe: at various scales (of space or time), processes can be probabilistic, then collectively predictable, then chaotically unpredictable, then globally or long-term predictable, and so on. For instance, Earth’s atmosphere, made of gases, has: quantum probabilistic (un)predictability at the lowest scale (individual atoms and molecules); then statistical predictability of the gases’ characteristics (temperature, pressure, etc.); chaotic behavior of the weather, becoming unpredictable over just a few days; predictability of the climate over longer timescales (seasons and long cycles); instability over very long terms; and so forth.

In earlier times, randomness used to be conceived of as God’s way of hiding His plans (recall that many cultures practiced cleromancy, casting lots by throwing dice or bones with symbols to try to “uncover” divine plans or divine hints). More recently, and with the realization that randomness may play a key role in cosmology and biology, a number of authors have insisted that randomness strongly conflicts with divine providence. For instance, Robert Charles Sproul wrote: “The mere existence of chance is enough to rip God from his cosmic throne. … If chance exists in its frailest possible form, God is finished” (Sproul 1994, 3). Similarly, Benedikt Paul Göcke, reviewing randomness in cosmology and biology, formulated an ‘Argument [against the existence of God] from Chance and Randomness’: “If there is a random state of affairs in the universe, then God does not know that his providential plans are fulfilled” (Göcke 2015, 233–254). As I explained earlier, if the collision between Theia and the early Earth had occurred with different parameters (a different size for Theia, a different angle of collision, etc.), life would probably have evolved differently, perhaps not leading to primates and humans; likewise, if the 10-km asteroid that struck Earth 66 million years ago had been substantially smaller or had hit elsewhere, the dinosaurs would not have been exterminated, and we humans would not be here today.

However, what we have seen in this review is that there are probabilities and regularities behind those random events. For instance, 10-km sized meteoroids strike Earth every 100 million years or so, thus sooner or later one such asteroid or comet would have hit Earth and changed the course of life’s history on the planet. And if we believe that the primate niche is inscribed in the general evolutionary scheme of Earth’s environment, then a species more or less similar to humans would have emerged at some point (around now, give or take a few hundred million years).

Is God’s plan then built on probabilities and statistics? There are two ways to answer this question.

The first is by an analogy used by Peter van Inwagen in his essay ‘The Place of Chance in a World Sustained by God’ (van Inwagen 1988, 225): when he and his wife decided to have a child, they knew they wanted one to be born sometime in the next year or so, but they did not plan for a specific child, say a girl with detailed characteristics. According to van Inwagen, God made general plans for creation, established the laws (some of which are probabilistic) by which natural processes will lead to the creation of various objects and beings, and most importantly sustained those laws and interactions (the way he sustained those causal interactions is, according to van Inwagen, how God acts in the world), and let things unfold.

This view is still subject to the critical retort: so God does not know the characteristics of each object, creature, and event at every point in time?!

The second way to answer the above question is, in my view, to insist that God being outside of time “sees” everything happening everywhere and everywhen, thus God does know the full characteristics of everything even though the process that leads to this or that may be partially or fully probabilistic or even random, unlike the above child conception analogy.

But what about quantum randomness, which is “fundamental”? Quantum randomness can be related to God in an interesting way that was suggested by Serkan Zorba in an essay titled ‘God is Random: A Novel Argument for the Existence of God’: only God has the ability to generate absolutely random numbers or sequences, as opposed to pseudo-random numbers, which follow complex, difficult to break, but nonetheless deterministic rules. He writes: “I will propound the idea that the epistemic cost of unpredictable randomness is infinite intelligence, and thereby present a new a posteriori argument for the existence of God from the irreducible randomness of the quantum world” (Zorba 2016, 51).

At the other end of the theological spectrum, a number of western theologians have proposed “open” relations between God and nature, whereby God willingly granted some freedom and autonomy to nature by setting some fundamental indeterminacy in the world; just as God granted humans free will, He may have granted nature free processes. Traditional Islamic theologians will not accept such an “open, free” relation between God and nature; however, panentheistic traditions (including some Sufi conceptions of God, the world, and humans) might integrate characteristics of nature, including any intrinsic randomness, into the (mysterious) divine nature.

Our understanding of randomness in the world, nature, and the cosmos is far from robust or complete, even in the scientific realm. And the question of how it can be integrated into any religious or spiritual conception of the world, or into any theistic or even deistic philosophy, remains largely open.