1 No Single Definition

While “random” is a familiar word, its definition is surprisingly hard to pin down. Some think of random events as lacking a cause or purpose. For others, the idea is bound up with improbability, indeterminism, or unpredictability. Making matters worse, the definition found in one discipline tends to conflict with those given in others. A statistician’s random sequence has little to do with random mutations in evolutionary biology.

This chapter offers an overview of the terrain, looking at how randomness and closely related terms are used in a variety of disciplines. As we will see, some are more relevant to the question of providence than others.

2 Purpose

Let’s begin with purpose. Driving down the road is a purposeful act. A rock bouncing down a hillside and coming to rest in the road is not. It might just as well have landed in a ditch. Its precise path down the hill and where it lands are random. Many of the best examples of randomness are natural events like this: leaves blowing in the wind, water molecules bouncing off the side of a glass, or the timing of the next meteorite strike on the moon. In contrast, artifacts are designed to mitigate the effects of random changes in conditions. A car might run over that rock in the road, but its tires and suspension are designed to minimize the risk of an accident. Perhaps a lack of agency or purpose is at least a necessary condition for randomness.

Not so, as we sometimes use random processes for our own ends. Consider games with dice or ping-pong balls in a state lottery. In both cases, manufacturers ensure that no particular outcome has a higher probability than another. Random number generators in computers are designed to produce random-looking sequences. If randomness can be employed in a purposeful way, then the two are not mutually exclusive.
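To make that purposeful design concrete, here is a minimal sketch of a pseudorandom number generator, the sort of deterministic rule hiding behind a computer’s “random” numbers. (The example is mine, not from the chapter’s sources; the constants are those of the classic Park–Miller generator.)

```python
# A minimal linear congruential generator (LCG): a fully deterministic,
# purposefully designed rule that produces random-looking output.

def lcg(seed: int):
    state = seed
    while True:
        state = (16807 * state) % 2147483647  # next state is fixed by the last
        yield state / 2147483647              # scale into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
# The same seed yields the same "random-looking" sequence on every run:
# purposeful design and a random appearance are compatible.
```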

3 Probability and Statistics

Improbability was mentioned above. Perhaps probability theory could help to define randomness. Take a fair coin, flip it fifty times, and record the order of heads and tails. Call the following sequence S1:

HHTTT HHHTT HHTTT TTHTT HTHTH HHTHH HTTHH THTTT HTHTH HHTHT

This seems like a typical random distribution of heads and tails. What is the probability of getting this sequence? For one toss of the coin, P(H) = ½ and P(T) = ½. So for fifty tosses, the overall probability for this sequence is P(S1) = (½)⁵⁰, a small number. One might therefore think that small probabilities are indicative of random events, but basic probability theory immediately presents a problem with this. What if, instead of S1, the coin produces S2: fifty tails in a row? Such a sequence is physically possible. One might think that P(S1) > P(S2), but that is false. If the coin is fair, then P(S2) = (½)⁵⁰, the same small number. Of course, S2 doesn’t look like a random sequence, so small probabilities alone do not seem to be a good indicator of randomness. At best, we might say that tossing the coin is a random process, but that process need not yield a random-looking product (Smith 1998, 149).
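The equality of these two small probabilities is easy to verify computationally. Here is a minimal sketch (my illustration, using exact rational arithmetic so no rounding obscures the point):

```python
from fractions import Fraction

# Checking the claim above: any specific sequence of fifty fair-coin flips
# has probability (1/2)^50, whether it looks random (S1) or not (S2).

s1 = ("HHTTT HHHTT HHTTT TTHTT HTHTH "
      "HHTHH HTTHH THTTT HTHTH HHTHT").replace(" ", "")  # the S1 above
s2 = "T" * 50                                            # S2: fifty tails

def prob(seq: str) -> Fraction:
    p = Fraction(1)
    for _ in seq:
        p *= Fraction(1, 2)  # each fair flip contributes a factor of 1/2
    return p

print(prob(s1) == prob(s2))  # True: identical probabilities
print(float(prob(s1)))       # ~8.88e-16, the same small number for both
```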

Turning to a related area of mathematics, “randomness” is well-defined in statistics, but it is a technical term that does not hew closely to everyday usage. Statistical randomness applies only to a sequence of events, not to the way in which the sequence was produced. It has nothing to say about the process, whether the dice used were perfectly fair or obviously asymmetrical. Statistical randomness is a matter of patterns and pattern matching. A fully random sequence lacks any patterns or correlations. On this approach, S1 would count as random but S2 not at all. But these are extremes. Statistical randomness is a relative notion. Consider a third sequence, S3:

HTTT HTTT HTTT HTTT HTTT HTTT HTTT HTTT HTTT HTTT HTTT HTTT HTTT

Statistical tests would judge S3 to be more random than S2, but much less random than S1. This is because S3 is a repetition of one H followed by three T’s. The fact that one can specify such a pattern means that it is not completely random. In statistics, there are many well-known tests that detect degrees of correlation or patterns within a data set (e.g., the chi-square test). Mathematicians Andrei Kolmogorov and Gregory Chaitin emphasized the compactness of a description in defining degrees of randomness. To reproduce S1, the best one can do is specify each data point individually, spelling out the entire string of events one by one. There is no shorter set of instructions. The fact that S1 is incompressible in this way means that the sequence is random. S2, in contrast, can be reproduced by two rules:

1. Print T
2. Repeat step 1 forty-nine more times

One need not mention all fifty points of data in order to reproduce S2. That such rules exist shows that S2 is not random. Finally, S3 is slightly more random than S2 because it requires a less compact set of instructions to reproduce:

1. Print H
2. Print T
3. Repeat step 2 two more times
4. Repeat steps 1–3 twelve more times
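While true Kolmogorov complexity is uncomputable, an ordinary compression utility gives a rough, practical proxy for the ranking just described. The sketch below is my illustration; it uses longer sequences than the fifty-flip examples so that the compressor’s fixed overhead does not swamp the comparison:

```python
import random
import zlib

# Compressed size as a crude stand-in for Kolmogorov/Chaitin complexity:
# the more compressible a sequence, the less random it is.

random.seed(0)
n = 4000
s1 = "".join(random.choice("HT") for _ in range(n))  # random-looking, like S1
s2 = "T" * n                                         # all tails, like S2
s3 = "HTTT" * (n // 4)                               # repeating pattern, like S3

for name, s in (("S1-like", s1), ("S2-like", s2), ("S3-like", s3)):
    size = len(zlib.compress(s.encode(), level=9))
    print(f"{name}: {len(s)} symbols -> {size} bytes compressed")
# Expected ordering: the all-T sequence compresses the most, the HTTT
# pattern nearly as much, and the random-looking sequence far less.
```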

The relation between compressibility and statistical randomness can be rigorously defined, and it plays an important role in communications theory. There seems to be little relevance here, though, to the question of providence. Some sort of pattern in nature could be evidence of divine action, although attempts to make such a case have not met with success.Footnote 1 Moreover, God could exert meticulous control over every natural process and yet make any sequence of events look random. In short, if Kolmogorov/Chaitin compressibility is relevant to the question of providence, it is not clear how.

Let’s return to probability theory and consider a distinction. Probability can be understood in either an objective or subjective way. Physical symmetries, like those in dice or coins, ground objective probabilities. There is a fact-of-the-matter about the probability of rolling a 5 with a fair die. If two people disagree, at least one of them must be wrong. The correct probabilities in these situations are part of reality, independent of what anyone believes.

Subjective probabilities are different. Say that my neighbor believes there is better than a 50% chance that a Democrat will be the next President of the United States. This probability captures his degree of confidence about a future state of affairs. My provost, on the other hand, might say there is a 75% chance, and both would be right given that they are merely describing their own subjective degrees of confidence. While quantifying beliefs like this might not seem to have much practical value, Bayes’ Theorem is a well-known rule for how one’s subjective probability should be updated in light of new information (Joyce 2019). Bayesian probability theory has proven to be extremely useful, with applications everywhere from rational decision making to artificial intelligence. One of its key features is that although people’s experiences will lead them to assign different probabilities to an event before data-gathering has begun (the so-called prior probability), values tend to converge as more observations are made. In other words, it doesn’t seem to matter that people disagree about how likely an event is (i.e., their subjective priors differ). The correct application of Bayes’ Theorem ensures that those differences will shrink as data accumulates.
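That convergence is easy to see in a toy model. The following sketch is my illustration (Beta-distribution priors are a standard textbook device, not anything specific to Joyce 2019): two observers begin with very different priors about a coin’s bias and update on the same flips.

```python
import random

# Bayesian convergence: after h heads and t tails, a Beta(a, b) prior
# becomes Beta(a + h, b + t); its mean a/(a + b) is the observer's estimate.

random.seed(1)
true_p = 0.7                 # the coin's actual bias (hidden from both observers)

a1, b1 = 10.0, 10.0          # observer A: prior mean 0.5 (thinks it's fair-ish)
a2, b2 = 3.0, 12.0           # observer B: prior mean 0.2 (thinks it favors tails)

for flip in range(1, 1001):
    heads = random.random() < true_p
    a1, b1 = (a1 + 1, b1) if heads else (a1, b1 + 1)
    a2, b2 = (a2 + 1, b2) if heads else (a2, b2 + 1)
    if flip in (1, 10, 100, 1000):
        m1, m2 = a1 / (a1 + b1), a2 / (a2 + b2)
        print(f"after {flip:4d} flips: A believes p = {m1:.3f}, B believes p = {m2:.3f}")
# Despite very different priors, both estimates approach 0.7 as data accumulates.
```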

The two interpretations of probability can help us to get a handle on the problem of randomness and providence. Subjective probability is useful for describing finite beings with limited experience. While people believe all sorts of weird things and so would rank their subjective degrees of belief accordingly, an omniscient being would hold every belief with certainty—a probability of 100%. God’s subjective probabilities do not change over time.Footnote 2

The question then becomes how God’s providential control can be squared with objective probabilities. If God wants a fair die to land on 5, in what sense is there only a 1/6 chance of that happening?Footnote 3 From a God’s-eye perspective, it seems as if the only real probabilities are 1 and 0. To some extent, physics backs this up. We use dice and coins in games to introduce a degree of uncertainty. With enough information, however, these events are perfectly predictable. The angular and linear momentum of the dice leaving your hand, along with the strength of gravity and the table’s coefficient of friction, determine the outcome of the roll. We do not have that information available and could not do the calculations before the dice came to rest even if we did, but based on the physics alone the outcome is in principle predictable. Dice are at best random for all practical purposes, but not fundamentally so.

There are other reasons for doubting that probabilities should be interpreted objectively. One is that perfectly good yet contradictory probability distributions apply to the same events. Say that a malfunctioning machine randomly fills soda pop bottles anywhere from one drop to completely full.Footnote 4 One could measure the bottles over the course of the day to calculate the probability that the machine will fill ¾ of a bottle. But what does it mean to be ¾ full? For a 20 cm tall bottle it could mean that the liquid reaches 15 cm in height. Note, however, that if the bottle has a total volume of 1 liter, then ¾ full could mean 750 ml. One way of determining ¾ full seems just as good as the other. While it might not be obvious, the measurement based on height will typically not agree with the one based on volume. In other words, there is no one fact about when the bottle is ¾ full and so no one probability P(¾ full). Two different numbers will emerge depending on whether one measures by height or by volume.
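A toy model makes the conflict vivid. Suppose, purely for illustration, that the bottle is conical, so its contained volume grows as the cube of the fill height, and that the faulty machine picks a fill height uniformly at random. Then “at least ¾ full by height” and “at least ¾ full by volume” receive different probabilities:

```python
import random

# Hypothetical toy bottle: volume grows as the cube of fill height,
# V(h) = V_max * (h / H)**3, and the machine picks a HEIGHT uniformly.

H = 20.0            # bottle height in cm (from the example in the text)
N = 1_000_000       # number of simulated bottles

by_height = 0       # count of bottles at least 3/4 full by height
by_volume = 0       # count of bottles at least 3/4 full by volume

for _ in range(N):
    h = random.uniform(0.0, H)
    if h >= 0.75 * H:
        by_height += 1
    if (h / H) ** 3 >= 0.75:   # fraction of total volume
        by_volume += 1

print("P(at least 3/4 full by height):", by_height / N)  # ~0.250
print("P(at least 3/4 full by volume):", by_volume / N)  # ~0.091
```

One filling process, two perfectly reasonable descriptions of the same event, two different probabilities.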

The problem is that the probability based on height seems just as real and correct as the one based on volume. Neither has a better claim to be the objective probability of interest. For the notion of objective probability to make sense, a unique probability distribution needs to exist, what physicists call a “natural measure.” The reason many theorists favor subjective probability is that situations like the bottle are the norm and examples with a single probability distribution like dice are the exceptions.

If all probabilities are subjective, then probability theory isn’t going to be of much help with the questions at hand. As we saw, an omniscient being would hold all beliefs with certainty. Even if the non-uniqueness problem can be overcome, it might still be the case that objective probabilities are only prima facie, as in the dice example. Again, dice rolls are random only for all practical purposes. In terms of the underlying physics, the dice must roll precisely the way they do. Can all examples of objective probability be reduced in this way? To answer that question, let’s turn from probability theory to natural science.

4 Physics

Physics provides many examples of randomness.

4.1 Statistical Mechanics

What we experience as air temperature depends on the average kinetic energy of the molecules in the air around us. The higher the average kinetic energy, the warmer the air feels. Statistical mechanics is the area of physics that relates the aggregate behavior of particles to detectable properties, like temperature and pressure. Such averages are not directly calculated from the velocities of each molecule in the room. There are far too many molecules in even a cubic millimeter of air to track or simulate on a computer. Since no one can track the exact evolution of a system with more than a dozen or so particles, physicists must resort to probabilities. The precise state of such systems fluctuates randomly.
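A small simulation illustrates the aggregate-to-observable relation. This is my sketch, not anything from the sources cited here; it assumes an ideal gas of nitrogen-mass particles and considers only translational motion, where equipartition gives (3/2)k_BT of kinetic energy per molecule:

```python
import math
import random

# Temperature as an aggregate property: recover T from the mean squared
# speed of many simulated molecules via (3/2) k_B T = (1/2) m <v^2>.

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26         # approximate mass of an N2 molecule, kg
T_true = 300.0       # temperature we sample from, K
N = 100_000          # number of simulated molecules

sigma = math.sqrt(k_B * T_true / m)  # std dev of each velocity component

mean_v2 = sum(
    random.gauss(0, sigma) ** 2
    + random.gauss(0, sigma) ** 2
    + random.gauss(0, sigma) ** 2
    for _ in range(N)
) / N

T_est = m * mean_v2 / (3 * k_B)
print(f"Temperature recovered from {N} molecules: {T_est:.1f} K")  # ~300 K
```

No single molecule has a temperature; the observable emerges only from the statistics of the ensemble.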

4.2 Chaos Theory

Chaos poses some of the same challenges as statistical mechanics, but often for far simpler systems, like the tumbling of an odd-shaped moonFootnote 5 or a dripping faucet. Even the best supercomputers supplied with information from the most advanced measuring devices cannot accurately track the evolution of a chaotic system. There are two reasons for this. First, chaotic systems display sensitive dependence on initial conditions (SDIC). Even with a set of equations that perfectly models the evolution of the system, the slightest error in the initial conditions will explode exponentially fast. Since all physical measurements involve some error, there is no way to provide the model with initial conditions that are perfectly accurate and precise. Because of SDIC, state predictions based on less than perfect information fail for all chaotic systems. Second, digital computers have a finite amount of storage and memory. No matter how many digits they can store, most calculations will involve some amount of roundoff error. Much like the measurement errors in the previous case, these will cause state predictions of a chaotic system to fail in a relatively short time.
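The logistic map, a standard textbook example (not one discussed in this chapter), shows SDIC with a single line of arithmetic per time step:

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) at r = 4, a standard chaotic system.

r = 4.0
x1 = 0.400000000   # initial condition
x2 = 0.400000001   # the same, perturbed by one part in a billion

for step in range(1, 51):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1 = {x1:.6f}, x2 = {x2:.6f}, gap = {abs(x1 - x2):.2e}")
# The gap grows roughly exponentially; within a few dozen steps the two
# trajectories are unrelated, just as the measurement-error argument says.
```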

4.3 Instability and Singular Points

Think of a perfectly symmetrical sphere balancing on a knife edge. Say that the system exists in a vacuum and is isolated from all vibrations. In principle, the sphere could remain in place indefinitely. If the sphere were to fall at some point in the future, nothing in the laws of nature at present dictates which way it would fall. The relevant equations have so-called singular solutions that block any predictions about how the system will evolve. Physicists Joseph Boussinesq and James Clerk Maxwell, still working in the age of classical mechanics, hoped that such systems could provide insight into free will (van Strien 2014, 175–76).

While the sphere example involves several idealizations, singular points pose real-world obstacles for mechanical engineers. If a locomotive were to come to rest in such a state, there would be no way to know which way the train would go when started again.

4.4 Norton’s Dome

Consider another system from classical mechanics, in this case a point particle situated at the top of a frictionless dome. When the mathematics for this system is restricted to “well-behaved” force functions, the particle will remain in place until a new force nudges it along. So far, this is much like the sphere on the knife edge. However, what if we loosen the restrictions? Instead of ruling some solutions out by fiat, let’s allow a wider range of functions.Footnote 6 In that case, the particle may move off the dome at some unknown time without being nudged. While this might seem impossible, many papers have been written exploring “Norton’s dome” (Norton 2008). What is not controversial is that, unless we simply choose to ignore such possibilities, the particle will move off the dome without perturbation at some finite time in the future. Moreover, there is no way to know when this will happen.
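For the mathematically inclined, Norton’s own presentation (2008) can be stated compactly. In units chosen to absorb the physical constants, the dome’s height as a function of the radial distance \( r \) along its surface is \( h(r) = \frac{2}{3g} r^{3/2} \), and Newton’s second law for the particle reduces to

\[
\frac{d^{2}r}{dt^{2}} = \sqrt{r}.
\]

Besides the expected solution \( r(t) = 0 \), in which the particle sits at the apex forever, this equation admits a family of solutions, one for every time \( T \):

\[
r(t) =
\begin{cases}
0, & t \le T,\\
\tfrac{1}{144}\,(t - T)^{4}, & t \ge T.
\end{cases}
\]

Each member satisfies the equation everywhere, yet the particle spontaneously begins to move at the arbitrary time \( T \), with nothing to determine which \( T \) it will be.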

4.5 Spontaneous Symmetry Breaking

There are many different types of symmetry in physics. The simplest are spatial, like a pin balancing on its tip. The pin looks the same no matter from which side one approaches it. When the pin falls, the symmetry is broken. Other symmetries are purely mathematical, like those that give rise to conservation laws.Footnote 7 Still others are about physical processes like the formation of magnets. The atoms in a piece of iron have their own tiny magnetic poles. Above 770 °C, these poles are randomly oriented and so the piece of iron as a whole is not magnetic. As the metal cools below the Curie point, the magnetic poles of the atoms align, and a weak magnetic field emerges. At this point, the symmetry is broken. The magnetic field has a defined orientation in space. There is no way to know in advance, however, which direction this alignment will take place. It seems to be a random process.

There are many other examples of spontaneous symmetry breaking in condensed matter physics. It also plays an important role in particle physics, including the behavior of the recently verified Higgs boson. In each case, a system evolves from an unstable, symmetric state to a stable, asymmetric one. But the choice of asymmetric state appears to be random. Nothing in nature prefers one possibility rather than another.
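A toy computation captures the idea. The “Mexican hat” potential below is the stock textbook illustration of spontaneous symmetry breaking (my example, not the iron magnet itself): the symmetric state at the center is unstable, every downhill direction is equivalent, and an arbitrarily small random nudge selects the final resting angle.

```python
import math
import random

# Spontaneous symmetry breaking in the potential V(x, y) = (x^2 + y^2 - 1)^2:
# the peak at the origin is symmetric but unstable; the circle of minima at
# radius 1 is stable, and WHICH point on it is reached depends on a tiny nudge.

nudge_angle = random.uniform(0, 2 * math.pi)  # the symmetry-breaking event

# Gradient descent from a point displaced slightly in that random direction
x, y = 1e-6 * math.cos(nudge_angle), 1e-6 * math.sin(nudge_angle)
for _ in range(10_000):
    r2 = x * x + y * y
    x -= 0.01 * 4 * x * (r2 - 1)  # dV/dx = 4x(r^2 - 1)
    y -= 0.01 * 4 * y * (r2 - 1)  # dV/dy = 4y(r^2 - 1)

print(f"Settles at angle {math.degrees(math.atan2(y, x)):.1f} degrees, "
      f"radius {math.hypot(x, y):.3f}")  # radius -> 1, angle is random
```

The final state is stable and asymmetric, but nothing in the potential prefers one angle over another.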

4.6 Quantum Mechanics

The best-known examples of randomness in physics have to do with quantum mechanics. One is radioactive decay. Quantum mechanics can only be used to predict the probability of a uranium atom decaying within a designated time. While conditions must be right in order for decay to occur, there is no hidden mechanism that causes a particular atom to decay precisely when it does. It “just happens.” This is also true for quantum measurements and the collapse of the wavefunction, part of the famous Schrödinger’s Cat scenario. This thought experiment restricts the outcome of a measurement to two states: the cat lives or the cat dies. There is a 50% chance for either. Once again, nothing in the laws of nature determines which will happen. All measurement events in quantum mechanics are to some degree random.
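A short sketch shows what the theory does and does not provide. For a hypothetical isotope with a made-up half-life, quantum mechanics yields the probability of decay within a given time; individual decay times can only be sampled, never predicted:

```python
import math
import random

# Radioactive decay: decay times are exponentially distributed with rate
# lam = ln(2) / t_half. The theory fixes the distribution, not the event.

t_half = 10.0                 # half-life in minutes (made-up value)
lam = math.log(2) / t_half    # decay constant

# Two "identical" atoms in identical environments decay at different times
for atom in (1, 2):
    t = random.expovariate(lam)
    print(f"atom {atom} decays after {t:.2f} minutes")

# What quantum mechanics does give us: P(decay within t) = 1 - exp(-lam * t)
print("P(decay within one half-life):", 1 - math.exp(-lam * t_half))  # 0.5
```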

These examples show that randomness has been part of physics for centuries. Do they pose a problem for divine providence? Some might, but most do not. Let’s work back through the list in a different order.

Most physicists would dismiss cases like Norton’s dome as physically impossible. While the mathematics might allow the particle to move by itself, not all solutions to the governing equations need to be treated as physically realistic. Mathematical possibility is broader than physical possibility. Norton’s dome is therefore nothing but an idealized thought experiment.Footnote 8

As for instability and singular solutions, it’s true that the sphere on the knife-edge will not move until perturbed in some way—the slightest vibration or the impact of a single air molecule. But if we knew what the perturbation was, then it would be trivial to predict which way the sphere will fall. Once all the physical conditions and influences are accounted for, the outcomes are completely predictable, much like the dice example mentioned earlier. Examples like this are only random in a superficial way.

A useful device for sorting out the other cases is what is now called a “Laplacian demon.” Physicist Pierre-Simon Laplace discussed the idea of a super intelligence with perfect knowledge of the laws of nature and the state of every particle in the universe at a point in time ([1814] 1902, 4). With such information in hand, a Laplacian demon could calculate all future states of a universe governed by Newton’s laws. For our purposes, we need to expand on this idea a bit. Let’s give the Laplacian demon unlimited computational capacity and perfect knowledge of the state of any given system. If there is a truth about how events will unfold given the physical conditions and the laws of nature, the Laplacian demon would be able to accurately predict it. Such a being is clearly an idealization. (The observable universe contains a finite number of particles, which puts a limit on computational capacity. Plus, no physical measurements are perfectly precise.)

Even this improved Laplacian demon falls far short of omniscience. Laplace’s idealized intelligence is not a model of divine knowledge, but rather a limiting case. It is more like a perfect computer that solves equations based on error-free information. This means that if a Laplacian demon could predict the outcome of an event, then surely God knows it as well.

With some further analysis, it turns out that only quantum mechanics would pose a fundamental challenge to a Laplacian demon. Classical statistical mechanics and chaos are just more complex versions of the dice example given earlier. Given enough information, a physicist could predict how a pair of dice will roll. Given unlimited computational capacity—no roundoff errors—and perfect measurements, a Laplacian demon could calculate the collisions in a many-particle system and track the evolution of chaotic ones.

The physics behind spontaneous symmetry breaking can be far more complex, but it also requires some sort of perturbation for a system to move from an unstable, symmetrical state to a stable, asymmetrical one. Like the sphere on the knife-edge, a Laplacian demon with complete knowledge of all the physical influences could predict how and when these symmetries would be broken, with one exception. In the examples involving the most sophisticated physics, the perturbations will sometimes be due to quantum fluctuations. Not even a Laplacian demon could predict when a given uranium atom will decay or whether Schrödinger’s cat will live or die. The type of randomness involved is intrinsic and cannot be resolved with epistemic access to some underlying physics. Many in the science-and-religion literature refer to this as “ontological randomness” to emphasize that it is real and not merely perceived. The other examples involve “epistemic randomness,” which is ultimately based on a lack of knowledge. This is why the Laplacian demon is useful. By definition, it has access to all the physical facts, even ones that are hidden to us, and so is not susceptible to epistemic randomness.

As Nidhal Guessoum points out, physicists do not use “epistemic randomness” to describe these events (private conversation). The more standard terminology refers instead to determinism. Take two identical systems, say two solar systems with the same sizes and orbits of planets and identical stars. Determinism says that if those systems have the same overall configuration at one point in time, then they will remain in perfect sync unless something interferes with one system or the other (Butterfield 2005). The relevant laws and state of the system at one instant determine the evolution of that system arbitrarily far into the future. Given that the same laws of nature govern both, if the two systems have the same overall state at one instant, the two will evolve in lockstep. Except for a few special cases (Earman 2007), classical mechanics is deterministic. This is the underlying truth that Laplace was trying to illustrate.

For the most part, quantum mechanics is also deterministic. The evolution of a system according to Schrödinger’s equation would pose no challenge to a Laplacian demon. The reason that it cannot predict radioactive decay and the outcome of a Schrödinger’s cat experiment is that those events are indeterministic. Two uranium atoms in identical environments will likely decay at different times. Two Schrödinger experiments with identical cats might result in one cat alive and the other dead. The laws of nature and the initial conditions do not fix a unique set of future states for these systems.

This gets to the heart of the matter vis-à-vis randomness and providence. If a system is deterministic, then no matter how complex or chaotic it is, God would know its future states. Recall Clark’s bowler analogy (section 1.4 of this volume). The physics of bowling is deterministic. Given the angular and linear momentum imparted to the ball, the pins must fall the way they do. A Laplacian demon would rightly predict which pins will remain. But what if quantum events were manifested at the level of our experience, and bowling involved some degree of indeterminism? In that case, no amount of skill, knowledge, or precision could guarantee that when the ball leaves the bowler’s hands it will produce a strike. This illustrates one concrete challenge involving randomness. Can God providentially govern a universe that contains irreducibly indeterministic processes without intervening along the way? Is a world with quantum events in some sense risky for God?

There is one more thing to note about quantum mechanics. Not all interpretations are indeterministic. The orthodox Copenhagen approach is, as are others that involve a collapse of the wavefunction, such as the GRW (Ghirardi–Rimini–Weber) interpretation. But Bohmian mechanics and the Everettian many-worlds interpretation are not. This means that quantum mechanics has not proved that some events are indeterministic. In the next century, most physicists might come to reject a collapse of the wavefunction and thereby restore determinism to quantum mechanics. In any case, of the many ways of understanding randomness, indeterminism appears to be one of the better candidates.

5 Biology

Another obvious place to look for randomness in science is evolutionary biology. Random mutations play a key role in Neo-Darwinian evolution. In what sense are they random? In part, the word is used to deny any sort of teleology or directedness in the process. In Lamarckian evolution—a theory which predates Darwin—changes from one generation to the next had a clear direction. Lamarck believed that nature responds to the needs of a species. According to this theory, giraffes evolved long necks because of their persistent stretching for leaves on tall trees over many generations. Likewise, the more elephant ancestors used their trunks, the more functional those trunks became in their progeny.

Darwin explicitly rejected this sort of directed evolution. He believed instead that changes from one generation to the next were random: some might prove useful in acquiring food, resisting pests, finding a mate, and so on, but most would be maladaptive. (“Most” because it is easier for a mutation to undermine a useful trait than for it to produce an adaptive one.) With the discovery of genetics, we can say more precisely that there is no directionality to genetic mutations with respect to the evolution of the species in which they occur. This is the sense in which mutations are random. How does this sense of randomness compare to those in physics?

The answer depends on the cause of a given mutation. Some mutations arise during cell division. Errors occur when genes fail to produce exact copies of themselves. But such events need not be random in any deep sense. The underlying biochemical processes could be fully deterministic. There is only one causal pathway for a mutation given the interactions of the organic molecules involved. Other mutations are due to external sources, such as radiation. While the exact chain of events is more complex, it would be just as tractable to a Laplacian demon as the collisions of ping-pong balls in a lottery machine.Footnote 9 In short, the underlying processes responsible for random mutations are on a par with examples of deterministic, epistemically random events in physics.
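A toy copying model illustrates the direction-blindness described above (my illustration; real mutation biochemistry is far richer): each base is copied with a small error probability, and nothing in the rule aims at an adaptive outcome.

```python
import random

# "Random" point mutations: a direction-blind copying error per base.
# The rule knows nothing about what would benefit the organism.

BASES = "ACGT"
MUTATION_RATE = 0.01  # per-base copying error probability (made-up value)

def copy_with_errors(genome: str) -> str:
    out = []
    for base in genome:
        if random.random() < MUTATION_RATE:
            # Error: substitute a different base, chosen blindly
            out.append(random.choice([b for b in BASES if b != base]))
        else:
            out.append(base)
    return "".join(out)

parent = "".join(random.choice(BASES) for _ in range(60))
child = copy_with_errors(parent)
diffs = sum(a != b for a, b in zip(parent, child))
print(parent)
print(child)
print(f"{diffs} mutation(s), none directed toward any adaptive goal")
```

Note that every step here is deterministic given the pseudorandom draws, which is precisely why such mutations would be tractable to a Laplacian demon.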

Philosopher of biology Alan Love discusses a less obvious type of biological randomness in the writing of paleontologist Stephen Jay Gould (Chap. 7 of this volume). Gould highlighted the role of contingency in evolution (1989, 48–51). Would the phylogenetic tree of life look different if evolution were restarted from the same initial conditions? In other words, if we could “run the tape again,” would natural selection produce roughly the same set of species? Gould thought not. There is too much contingency involved to think that evolution would play out the same way again. The distant ancestors of any species would have been extremely lucky to survive and pass on their genes. Consider the earliest primate. Think of all the things that could have happened before it had a chance to reproduce. It could have been killed by a predator. It might have died from disease or starvation. Say that the asteroid that hit the Yucatan Peninsula 66 million years ago had missed the Earth by a few hundred miles. Dinosaurs would have continued to dominate for some time. As Gould put it,

replay the tape a million times from a Burgess beginning, and I doubt that anything like Homo sapiens would ever evolve again. … Wind the tape of life back to Burgess times, and let it play again. If Pikaia does not survive in the replay, we are wiped out of future history. (1989, 289, 323)

The tree of life seems to have been shaped by this ever-present contingency. If crucial events had gone differently, our ecosystems would be populated with other species, and humans would not be among them.

In most ways, evolutionary contingency does not pose any new problems for divine providence. The Yucatan asteroid and many of Gould’s other examples are no more unpredictable or indeterministic than those in statistical mechanics or chaos theory. There is, however, a possible exception. Say that the first primate had been eaten by a carnivore before reproducing. Was this event foreseeable? The answer is tied up with the difficult question of free will. Many theists hold that humans have a robust sort of free will that is not compatible with determinism.Footnote 10 A Laplacian demon, therefore, could not predict one’s free choices. And while some, like Descartes, did not believe that animals have free will, theists generally believe that high-functioning animals do as well. If the predator in question had the free will to pursue our primate ancestor or some slower-looking prey, then that decision would be indeterministic. No amount of knowledge about the relevant laws and conditions would allow one to predict which choice the predator would make. At best, there is an objective probability that the creature would make one choice rather than the other.

Does God know what choices a free creature will make? This is a contentious issue. Most theistic philosophers would answer “yes,” but there is little agreement about how God knows this. They do agree, however, that if God has exhaustive foreknowledge, the way in which God knows our future choices is nothing like the prediction of a Laplacian demon. In contrast, open theists deny that there is a definitive fact-of-the-matter now about what a free creature will choose. There is simply no truth “out there” to be known, and so the answer is “no,” God does not know the outcome of free choices.

Going back to Gould’s notion of contingency in evolution, does predation by higher animals pose a challenge to divine providence? Maybe. If open theism is right and God does not precisely know the outcome of free choices, and if prehistoric predators had free will—both of which are questionable—then that sort of contingency would make a particular view of providence more difficult.

Of the many concepts in mathematics and the natural sciences that are related to randomness, few seem to pose a problem for divine providence. The main challenge comes from indeterminism. It is difficult to see how God could exercise providential control over nature without intervening if events are indeterministic. Quantum fluctuations shortly after the Big Bang might have produced an uninhabitable universe. Free will choices at key points in history might have led to a world with far more suffering, or, if the predation example is correct, one without Homo sapiens. In either case, God could not guarantee how events would unfold in the future from a given set of conditions at creation.

There is clearly a conceptual tension, then, between some forms of randomness and providence, one that I have not sought to resolve in this essay.