Is there evidence for the action of providence in the cosmos? Most cosmologists would answer “no.” For them, the universe has no purpose at all, and it is ruled only by randomness acting within the laws of physics (ultimately, general relativity and quantum field theory). To their eyes, the outcomes of the universe, including the existence of life and human beings on Earth, are radically contingent. However, this view is now being discussed anew, in the context of a series of discoveries made in recent decades. This chapter presents a brief summary of the issue. It addresses how science displaced the human from the center of the universe by unfolding the importance of contingency, how contingency itself came to be questioned, and how cosmologists are trying to answer this question.

As a matter of fact, most cosmologists would subscribe to the materialistic view as stated by Jacques Monod, a French biologist and Nobel Prize winner. In 1970, Monod published Chance and Necessity, a book on the alleged philosophical implications of contemporary discoveries in molecular biology, including his own on the role of RNA in decoding DNA. The book had a strong impact on the general public and was translated into many languages. At its very end, Monod summarized his views on the cosmos in a few sentences: “The old covenant is broken. Finally the human knows that it is alone in the indifferent immensity of the universe, from which it emerged by chance. Its destiny and its duty are written nowhere. To it belongs the choice between the Kingdom and darkness.”

1 The Indifferent Universe of Materialism

Monod’s statement is clear: the universe is indifferent to the existence of the human, which has emerged by chance. With these words, Monod put a conclusion to the long-lasting philosophical project of materialism, which started 25 centuries ago with the works of Democritus (470–360 BCE) and Epicurus (341–270 BCE), and was subsequently developed in more detail by Lucretius (99–55 BCE). According to Lucretius’ work, De Rerum Natura (Lucretius 1993), the world is eternal and ruled by chance. There is no “first principle” (archè) that produces the world, as pre-Socratic philosophers thought, and no god to enforce his will. Of course, Monod’s views are much more sophisticated than Lucretius’. But both worldviews share the same ontological reductionism and materialism. For Monod, the deciphering of the DNA and RNA codes is an important step forward in the scientific endeavor, and it shows, after the unraveling of the mysteries of matter by physics and chemistry, that life itself can be understood through the interplay of chance (random mutations) and necessity (the laws of biochemistry and selection pressure), very much as Lucretius, in his time, considered this interplay to be the full and final explanation of the “nature of things.”

Monod’s views are interesting for another reason, namely his choice of words in his final sentences: “the old covenant,” “destiny and duty,” “the Kingdom” (with a capital in the original French “Royaume”), and “darkness.” It is not clear why Monod chose these specific words. In his book, he commented on, and criticized, animism and the separation between mind and matter, and he explained that the attraction of the human to metaphysical explanations and religions is likely coded in one’s very DNA. According to Monod, there is no God, and the universe is indifferent to the human. In its loneliness, the human has a choice between, on the one hand, the “Kingdom,” which in Monod’s mind is the quest for “objective knowledge” (i.e., science), a quest that is demanding and valuable, and, on the other hand, the “darkness,” which likely refers to superstition and obscurantism. Monod uses these Biblical motifs in order to subvert them, and to criticize the monotheistic views of the world, especially God’s existence and providence.

2 General and Special Providence

For monotheistic religions—principally Judaism, Christianity, and Islam—every created being has a purpose in accordance with God’s plan. To be more specific, it is useful to distinguish between the various modes of action that creation implies in monotheistic doctrines. First, God has the power to bring things into existence out of nothingness. All three religions agree on creation “ex nihilo,” that is, from nothing other than God’s Wisdom and Will. Second, God does not bring into existence all that is possible. He has the capacity to make choices, and to give existence to a subset of possible things within a larger ensemble. Third, God organizes created things “ex numero et pondere,” that is, he measures things, and makes the cosmos appear as an organized whole out of all the individual creatures that are brought into being and their interconnections. Finally, God has a specific relationship with each one of his creatures, to which he provides what is necessary for its subsistence, and for achieving his plans for it. Obviously, this relationship is unique in the case of the human being, who is gifted with spirit and free will to fulfill a central role in God’s plan for the Creation.

It is important to realize that each of these characteristics corresponds to a specific issue in metaphysics and philosophy. God as the Creator (attribute 1) is an answer to Leibniz’s (1714) question: “Why is there something rather than nothing?” God as the Chooser (attribute 2) explains why we live in this specific world rather than in a world with different characteristics. God as the Organizer (attribute 3) is an explanation for the existence of laws of physics in the world, including the possibility that these laws drive change or evolution in the cosmos. All these actions are linked to what can be called “General Providence,” directed toward all creatures. Finally, because God is the Provider (attribute 4), humans have a specific relationship with God, and they should thank him and behave in accordance with his Will, inasmuch as God gives purpose and meaning to their lives. This can be called “Special Providence,” because it refers to a particular link that God has with each of his creatures, and especially with each human being. We could say that, according to those who believe in him, God is a sufficient explanation for many, if not all, patterns of our existence.

In contrast, materialism does not regard God as a necessary explanation for any of the four above-mentioned philosophical issues. These issues can receive satisfactory answers without invoking a transcendent Agent. For instance, materialism claims that there is no cause of the existence of matter, and that we have to assume that matter has always been there, although its form can change. For materialism, having God as the “Creator” just shifts the issue from the existence of matter to the existence of God. Where monotheism answers that God is the only necessary being, materialism claims that matter could very well be the necessary being, without multiplying explanatory entities. As for the “Chooser” among a set of possible things, materialism holds that things appear by chance out of an ensemble of possibilities. In an infinite universe, everything that is possible will end up happening. For the third issue, the organization of the cosmos, the laws of physics do the job of the “Organizer.” Of course, we do not know why these laws hold, but this issue is hidden under the consideration that the laws are primarily creations of the human mind to describe observations and experiments. And finally, for materialism, the only purpose and meaning of the human are those the human gives to itself, as Monod wrote at the end of his book.

According to Monod, the deciphering of the riddles of biology through the discovery of DNA and RNA was the final step in the destruction of the world of the “old covenant,” which relied on God’s existence and providence and flourished in the medieval synthesis of, for instance, Thomas Aquinas’ Summa Theologiae or Dante Alighieri’s Convivio and Divina Commedia. In this medieval synthesis shared by Jews, Christians, and Muslims, and based on Aristotelian physics and Ptolemaic astronomy interpreted in the light of the Sacred Scriptures, the centrality of the human appears through its location at the central and lowest place of the world, as a result of a “Fall” that was both physical and moral. But, step by step, modern science dismantled this medieval synthesis in which the human being was central in a closed world with a finite age. Let’s briefly consider this shift.

In the Copernican model of the world, the Earth is a planet, and the Sun occupies the central place in the universe. However, at the beginning of the seventeenth century, it appeared that the stars were other suns located at large distances, and that the Sun was one star among many. When, in 1610, Galileo Galilei saw mountains on the Moon, spots on the Sun’s surface, satellites orbiting Jupiter, and a ring around Saturn, he showed that there is nothing special about the Earth. The passage from the closed world to the infinite universe was hard to accept, because it displaced humanity from its central role. Isaac Newton gave the key to understanding the structure of the world with the laws of motion and gravitation. He showed that there were many “centres of fall” in the cosmos. However, in his view, there was still some providence in the universe, because only God could guarantee that the law of gravitation holds at a distance. Empty space, as the locus of gravitation, became the sensorium Dei. Moreover, for Newton, God corrects the motions of the planets to keep the solar system stable, because the mere application of the law of gravitation seemed unable to account for the details of astronomical observations. Later, Pierre-Simon de Laplace explained this disagreement by computing the gravitational influence of all the planets of the solar system on each one of them, thanks to Newton’s law of gravitation, and Albert Einstein elucidated the puzzling characteristics of a law of gravitation acting at a distance with the curved space-time of general relativity. Each of these steps effectively placed God at a greater distance.

The true scale of the universe was discovered incrementally, with the identification of the shape of the Milky Way by William Herschel (1785), the first measurement of the distance of a star by Friedrich Bessel (1838), and finally the discovery of the extragalactic nature of the “spiral nebulae” by Edwin Hubble (1926). At each step forward, the universe appeared larger, and the Earth became more insignificant in terms of size and location. Physical cosmology, which appeared at the beginning of the twentieth century, was built on the foundations of general relativity and on the growing power of telescopes, spectrographs, and photographic plates to probe the deep universe. The “principle of mediocrity,” which was born at the time of the Copernican model, is at the basis of cosmological theory. We humans live on an ordinary planet, orbiting an ordinary star, on the periphery of an ordinary galaxy, the Milky Way, which includes about 100 billion stars. The Milky Way itself belongs to the Local Group, with several other galaxies, notably the Andromeda galaxy (M31) at 778 kpc, our closest giant neighbor, which is about twice as big as the Milky Way. The Local Group itself lies at the periphery of the Local Supercluster, centered on the Virgo cluster, which is located at a distance of 16.4 Mpc, and where the giant galaxy M87 occupies the center. There are many superclusters in the universe, at the crossings of the filaments and sheets that constitute the large-scale structures. This “principle of mediocrity” translates into the so-called Copernican principle, stating that the Earth is not located at a special position or at a special epoch of the universe. From the cosmological point of view, “there” is like “here,” and “some time ago” is like “now.” Another (although not equivalent) way to restate the Copernican principle is to postulate the “universality of the laws of physics,” without which any attempt to develop cosmology is hopeless. This universality of the laws is constantly checked through the cross-consistency of all astronomical observations, and there are dedicated research programs to detect any change in the constants of physics.

It is possible to solve the Einstein equation of general relativity under the so-called cosmological principle, which states that, above a given scale, there are “average properties” of the universe that do not depend on location. The cosmological principle, which is another way to rephrase the “principle of mediocrity,” leads to the simplifying assumptions of a homogeneous and isotropic universe with a uniform time flow. Under these assumptions, Einstein’s equation can be solved to obtain the Friedmann-Lemaître equations, which describe the space-time evolution of the universe. The observations of Hubble and Humason (1931) confirmed what these equations described: we live in an expanding universe, although it is important to emphasize that it is space itself that is expanding, and not matter “exploding” into previously empty space.
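
As a point of reference, the first of these equations can be written in its standard modern form (a sketch for orientation; the notation follows current textbook conventions rather than the original papers):

```latex
% First Friedmann-Lemaitre equation for the scale factor a(t):
% H is the Hubble rate, \rho the mean density of the universe,
% k the spatial curvature index, and \Lambda the cosmological constant.
H^{2} \equiv \left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^{2}}{a^{2}} \;+\; \frac{\Lambda c^{2}}{3}
```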

Two models competed over the following 30 years: the Big-Bang model (BBM), in which the universe emerged out of an “initial singularity” (Friedmann 1922, 1924; Lemaître 1927) and matter is diluted by the expansion, and the Steady State model (SSM), in which the universe is eternal and eternally expanding, with new matter constantly appearing to fill in the voids and compensate for the dilution (Bondi and Gold 1948; Hoyle 1948). In BBM, there is a special time (the initial singularity), but not in SSM. For this reason, these two models seemed to carry philosophical implications. Would BBM be reminiscent of Biblical creation, and SSM of the eternal universe of Lucretius? Whereas Pope Pius XII claimed in his address of 1951 that science confirmed the Biblical narrative of creation, Georges Lemaître was more careful about this type of “concordism,” because he knew that physics, based on conservation laws, is unable to conceive of the appearance of something out of nothing. For his part, Fred Hoyle, one of the proponents of SSM, also saw the Big Bang as a revival of the creation ex nihilo in the fiat lux of Genesis, and preferred an eternal universe in which things have plenty of time to appear “by chance,” in the wake of Lucretius’ views.

The discovery of the Cosmic Microwave Background (Penzias and Wilson 1965) provided evidence for a universe much hotter and denser in the past, and strongly favored the Big-Bang model. SSM was quickly abandoned. The CMB observations are consistent with the theory of Big-Bang nucleosynthesis, in which light elements such as ³He, ⁴He, and ⁷Li are made during the first three minutes of the expanding universe. The heavier elements are forged in stellar interiors, and thermonuclear reactions provide stars with their source of energy (Burbidge et al. 1957).

Another piece of the cosmological puzzle is the existence of “dark matter,” whose gravitational effects appear in the fast motions of cluster galaxies, as well as in the flat rotation curves of spiral galaxies and the deviation of light rays by gravitational lensing (Zwicky 1933; Rubin et al. 1978, 1980; Tyson et al. 1984). Dark matter is mostly made of unknown particles, which are not those that make up “normal matter” (the particles of the so-called Standard Model of particle physics), whose density is constrained by nucleosynthesis. Dark matter is the dominant component of matter, and rules gravitation at large scales. To end this brief sketch of the panorama offered by modern cosmology, it is necessary to mention the discovery of the accelerated expansion of the universe (Perlmutter et al. 1999), which is interpreted as the result of “dark energy,” whether it is due to a non-zero cosmological constant or to an unknown “scalar field” associated with a still-to-be-discovered particle.

Many pages of the narrative of cosmic evolution are now written, and the Earth appears as a tiny, peripheral bit of matter within the vast expanse of space. Probably one of the most striking pictures illustrating this narrative is the Hubble Deep Field image taken by the Hubble Space Telescope (Williams et al. 1996), which unveils about 10,000 galaxies in a field of 11 square arcminutes. We see distant galaxies as they were in the past, when their light was emitted and the universe was much younger than it is now. Subsequent galaxy counts seem to show that the observable universe, defined as the sphere around us from within which light can reach us during the finite age of the universe (13.8 Gyr is the current best measurement), includes about 100 billion galaxies, each of them containing 1–1000 billion stars.
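
As a rough back-of-the-envelope check, these round numbers multiply out to an order of magnitude of 10²² stars (an illustrative estimate, not a measurement):

```python
# Back-of-the-envelope star count for the observable universe, using the
# round numbers quoted above (illustrative orders of magnitude only).
n_galaxies = 1e11        # ~100 billion galaxies in the observable universe
stars_per_galaxy = 1e11  # a representative value in the 1-1000 billion range
total_stars = n_galaxies * stars_per_galaxy
print(f"~{total_stars:.0e} stars")  # -> ~1e+22 stars
```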

The richness of the universe unveiled by modern cosmology increased further with the identification of the first exoplanet around a Sun-like star (Mayor and Queloz 1995), which was followed by the discovery of thousands of planetary systems around nearby stars, many of them harboring rocky planets similar to the Earth and located in the so-called Habitable Zone that enables the presence of liquid water. After the surveys conducted by the Kepler satellite, it is believed that most stars have planets. One of the main goals now is to detect bio-signatures in the atmospheres of some of these exoplanets that would signal the existence of elementary life forms, for instance unicellular organisms with something akin to a photosynthetic function producing biotic oxygen. Several projects are in preparation to make use of the forthcoming James Webb Space Telescope (launch in 2021) and the 30-meter-class telescopes (commissioning in 2025) to attempt such detections.

3 Is the Universe Fine-Tuned for Life?

It is clear that the development of modern cosmology bolstered the idea of an indifferent universe. What would be the purpose and meaning of human life among so many billions of stars and planets? Cosmological, geological, and biological evolution appear to have been contingent, with random processes at every stage. Theologians and deist/theist philosophers would have to work out how God acts in this universe within the constraints given by science (see, e.g., Barbour 1997; Saunders 2002; Polkinghorne 2005; Ward 2007). Yet the idea of a purposeless Earth (and human) in a vast universe began to be criticized just a few years after the publication of Monod’s essay, during a meeting held in Krakow in 1973 to celebrate the 500th anniversary of Copernicus’ birth. Brandon Carter, a physicist, suggested that we should be careful about too systematic an application of the Copernican principle: clearly we live in a zone of the universe where, and at an epoch when, our very existence as observers is possible. This is known as the Weak Anthropic Principle (WAP). Carter proposed a stronger statement, the Strong Anthropic Principle (SAP): the universe must have the overall properties that make our existence as observers possible; any change in the values of these overall properties would have made our existence either more difficult or impossible. These two statements triggered a long-lasting controversy among scientists and philosophers of science, with attitudes ranging from exasperation to excitement (Bertola and Curi 1993).

WAP seems reasonable. It reads like a reminder of the series of “decentring discoveries” of the last centuries, and it questions each one of them. We should not be surprised that we are located in a galaxy (very few stars, if any, form outside of galaxies), and more specifically in a spiral galaxy: elliptical galaxies are mostly made of old stars, most of them 10-Gyr-old Red Giants that have inflated in their late stages and destroyed nearby planets, whereas late-type, irregular galaxies have low contents of the heavy elements that are necessary to form planets. We should not be surprised to live in a universe that is 13.8 Gyr old, because the chemical enrichment of the interstellar medium took a few Gyr before the formation of the Solar System, 4.56 Gyr ago. The emergence of life took 0.5 Gyr on Earth, and evolution proceeded slowly to produce the first diversification of animals and plants. As a result, we should not be surprised to live not only in an old universe but also in a vast one, because the expansion has lasted all this time, and galaxies have been able to travel away from their neighbors. Similarly, the Earth, located around a G-type star (among the most frequent spectral types in the Milky Way), is in the habitable zone, enjoying what are sometimes called “Goldilocks” conditions, “not too hot and not too cold,” that enable the existence of liquid water. The existence of liquid water is also conditioned by the Earth’s mass, which enables plate tectonics and the presence of an atmosphere at a sufficient pressure. Less massive planets have no tectonics, and more massive ones keep their thick initial atmosphere of hydrogen and helium. Finally, the Moon played a role in stabilizing the rotation axis of the Earth, very much as Jupiter gave the inner rocky planets some kind of gravitational protection against deadly comets coming from the outskirts of the solar system.

The list of all these conditions (plus many others) gave birth to the “Rare Earth Hypothesis” (REH; Ward and Brownlee 2000), which is antagonistic to the “principle of mediocrity.” It is not easy to weigh the low probability of gathering all these conditions against the very large number of planets present in the Milky Way (or in the universe), a trade-off that could be coarsely encapsulated in Drake’s frequency equation (Drake 1961). The general feeling among the community of astrophysicists is that the very large number of planets should more than compensate for the restrictive list of REH. That this feeling is widely shared manifests itself in the organization of a growing international community of active astrobiologists, as well as in the development of costly research programs aiming at detecting bio-signatures in solar system planets and satellites, and in nearby exoplanets.
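
For reference, Drake’s equation in its usual form writes the number N of detectable civilizations in the Galaxy as a product of a rate and of frequencies (the standard textbook formulation, not a notation specific to this chapter):

```latex
% Drake's frequency equation (Drake 1961):
% R_* : rate of star formation in the Galaxy; f_p : fraction of stars with
% planets; n_e : habitable planets per planetary system; f_l, f_i, f_c :
% fractions where life, intelligence, and communication technology emerge;
% L : lifetime of the communicating phase of a civilization.
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```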

Of course, WAP and SAP were immediately criticized for being too anthropocentric. What is at stake here is not really the emergence of human beings, that is, life with superior cognitive capabilities, but rather the existence of planets that make simple life possible, or even complex life, that is, multicellular life, or perhaps more, the existence of predators and prey, a powerful trigger for biological evolution. It is possible that the existence of multicellular life forms and predator-prey couples is linked to some overall property of the Earth (like the Great Oxygenation Event), since the motility involved in predator-prey couples requires more oxygen for metabolism. Multicellular life forms (Gabonionta) seem to have appeared as early as 2.1 Gyr ago (El Albani et al. 2010), and maybe even motile life forms (El Albani et al. 2019). As far as life forms with superior cognitive capabilities are concerned, we get into the hazy field of the emergence of the human, which might be totally contingent (Gould 1989) or, on the contrary, unavoidable, provided the mechanisms of convergent evolution are at work (Conway-Morris 2003).

Clearly the emergence of life on Earth required many local conditions, and WAP has been discussed mostly against the background of REH. In the context of theism, WAP may be seen as evidence for divine providence, but it is of no consequence for our overall understanding of nature. The case for SAP is different, as it involves the very possibility of life anywhere in the universe. As a consequence, it triggered a fierce debate that is still going on now, more than 40 years after it started. The ensemble of pieces of evidence that was initially referred to as SAP is now preferentially called “the apparent coincidences” of the properties of the universe (Carr and Rees 1979), the universe “just right for life” (Davies 2007), the “fitness of the cosmos for life” (Barrow et al. 2008), or the “fortunate universe” (Lewis and Barnes 2016). The list of these pieces of evidence is impressive (see, e.g., Barrow and Tipler 1986 for a first thorough study, and the above-mentioned references). One characteristic example is the formation of carbon nuclei in the interiors of stars through the so-called triple-alpha process. It is made possible by the existence of a “resonance” in the carbon energy levels that is ultimately due to a happy combination of various constants of physics (Hoyle 1981). Without such a coincidence in the constants, all within tightly defined ranges, carbon would have formed in much lower quantities in stars, making the formation of planets and of carbon-based life a much more difficult process.
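
Schematically, the triple-alpha process proceeds in two steps; the “resonance” mentioned above is an excited state of carbon (the Hoyle state, at about 7.65 MeV) lying just above the combined rest energy of beryllium-8 and an alpha particle:

```latex
% The triple-alpha process (schematic). The starred carbon nucleus is the
% excited "Hoyle state" at about 7.65 MeV, which then decays radiatively
% (through a gamma cascade) to the ground state.
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} \;\rightleftharpoons\; {}^{8}\mathrm{Be},
\qquad
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} \;\rightarrow\; {}^{12}\mathrm{C}^{*}
\;\rightarrow\; {}^{12}\mathrm{C} + \gamma\gamma
```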

Rees (1999) produced a convincing summary of how most of the large-scale patterns of the universe that seem to be necessary for harboring life are based on “just six numbers.” The number of space dimensions D = 3 is a condition for the stability of planetary orbits around stars, as well as of electron orbitals around nuclei. One or two space dimensions (D = 1 or 2) are just too simple for the development of complexity, and D = 4 or larger produces the above-mentioned instabilities. The relative matter density of the universe Ωm = 0.25 assesses the influence of gravitation on the expansion of the universe, and the possibility of forming galaxies and stars. With a significantly lower value, no galaxies and stars would have formed. With a significantly larger value, the universe would have collapsed within a few Gyr, without leaving enough time for the development of life. The magnitude of the cosmological constant Λ = 0.75 gives the value of the acceleration of the expansion. A larger cosmological constant would have produced a much faster acceleration that would have hampered galaxy and star formation. The intensity of the matter fluctuations at the epoch of recombination, as measured on the CMB, is Q = 10⁻⁵. Smaller values would have given a much smaller number of galaxies now, and larger values would have given very dense galaxies where planetary systems would be unstable because of frequent gravitational encounters between stars. Admittedly, the fine-tuning of these last three parameters is not very tight: values different by 10–20% would not give qualitatively different evolutions for the content of the universe. The efficiency of nuclear binding (the fraction of rest mass released when hydrogen is fused into helium) is ε = 0.007. With a slightly larger value, Big-Bang nucleosynthesis would have been more efficient, and all stars would have burnt their nuclear fuel in a short amount of time. On the contrary, a slightly smaller value would have made stars unable to light up. Finally, the ratio of the electromagnetic force to the gravitational force is N = 10³⁶; it defines the sizes of planets as well as of the life forms that would live on them. A slightly smaller value would make for smaller life forms, probably unable to evolve toward complexity, whereas a slightly larger value would slow down planet formation.
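
The six parameters just described can be condensed into a short recap; the following sketch simply restates the values and roles given in the paragraph above (the variable names and one-line glosses are ours, for illustration, and do not all follow Rees’s notation):

```python
# Rees's "just six numbers" as summarized above (Rees 1999).
# Values and glosses restate the text; names are illustrative.
rees_numbers = {
    "D":       (3,     "space dimensions: stability of orbits and orbitals"),
    "Omega_m": (0.25,  "matter density: governs galaxy and star formation"),
    "Lambda":  (0.75,  "cosmological constant: acceleration of the expansion"),
    "Q":       (1e-5,  "fluctuation amplitude at recombination (from the CMB)"),
    "epsilon": (0.007, "efficiency of nuclear binding: stellar burning"),
    "N":       (1e36,  "electric-to-gravitational force ratio: planet sizes"),
}

for name, (value, gloss) in rees_numbers.items():
    print(f"{name:>8} = {value:<8g} {gloss}")
```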

This list gives an idea of the kind of arguments around the “coincidences” or the “apparent fine-tuning” of the properties of the universe. Maybe each value by itself could be considered a coincidence, but the whole fine-tuning argument relies on the number of these coincidences, which seems unexpected, or puzzling. The heart of the issue is that the universe seems to be fit for life, or bio-friendly, and this unavoidably brings to mind the religious statements about a world created for the human being. Of course, what fine-tuning might say is that the world is finely tuned for the existence of life, and not necessarily for the specific existence of the human being. But the religious discourse about providence starts already at this level of general providence, and the sacred scriptures remind the believer that the world was created as a gift to the human, before developing the topic of the special providence by which God feeds, teaches, and loves each one of us.

4 Debates on Fine-Tuning

Because fine-tuning is reminiscent of providence, very much as the Big-Bang model was considered similar to the fiat lux of Genesis by Hoyle and Pope Pius XII, the evidence for fine-tuning was criticized by some, and discussed and interpreted by others.

The first attitude toward the argument is denial. After all, how exactly are the constants fine-tuned? Some of them seem to be loosely tuned. The cosmological constant might be one half, or twice, its measured value without much change in the fitness of the universe for life. However, the “natural” value proposed by theory is much larger: it should be of order 10¹²⁰ in the units where the measured value is close to 1, and the fact that it is so small is already an extraordinary fine-tuning, one that led many theorists to think that some kind of (unknown) mechanism should have set it exactly to zero (Weinberg 1989). Consequently, the observational measurement of its non-zero value was really a surprise. The list of fine-tuning coincidences includes some tight ones and some loose ones, but the overall probability of finding all of them gathered in a single realization of a random process appears very low. The evidence is such that it demands an explanation.
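
Expressed as a vacuum energy density, the mismatch mentioned above is usually summarized as a discrepancy of roughly 120 orders of magnitude:

```latex
% The cosmological-constant problem in order-of-magnitude form
% (Weinberg 1989): the observed vacuum energy density lies ~120 orders
% of magnitude below the naive quantum-field-theory estimate.
\frac{\rho_{\Lambda}^{\mathrm{obs}}}{\rho_{\Lambda}^{\mathrm{QFT}}} \;\sim\; 10^{-120}
```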

Alternatively, one might think fine-tuning is a kind of observer selection effect. If the universe did not have the properties that enable our existence, nobody would be here to see the hostility of cosmic conditions. However, this attitude seems contrary to the scientific quest, which always questions facts. White (2003) illustrates this situation with the metaphor of the firing squad. If a convict must be executed by a platoon of soldiers, and if he survives the execution, he would not say: there is nothing surprising in discovering that I am alive, because if I were dead, I would not be able to discover anything. On the contrary, he would try to understand whether there was a plot to save him, or whether the soldiers’ guns had some kind of malfunction.

The last type of criticism of cosmological fine-tuning consists in denigrating this position as another shameful type of “Intelligent Design,” a line of argument that tried to disprove the theory of evolution. However, cosmological fine-tuning is completely part of mainstream science, and does not contradict cosmic, geological, or biological evolution, which are in fact very much part of the case. If the case for fine-tuning is sound and serious, it has to be interpreted, and this interpretation comes at a cost. Four roads seem to have been explored, one with more success than the others.

The first road consists in slightly modifying the rules of science by incorporating SAP into them as a methodological principle. In addition to, for instance, Occam’s razor, the refusal of final causes, and the Copernican principle, we would include the principle that the universe has the properties required to host observers. If SAP were a starting principle, no further discussion would be needed. However, asserting SAP as a first principle looks like a desperate patch. It is a glaring admission of weakness, and, because it stops further exploration of the topic, it may hide a whole avenue of interesting discoveries.

The second road is to confess our ignorance. In his well-known “Seven World Riddles” (Du Bois-Reymond 1880), Du Bois-Reymond lists the apparent teleological arrangement of nature as one of the riddles that might remain unanswered in spite of the efforts of scientists. We surely will not be able to unravel all the laws of nature. This line of argument, called the ignorabimus, might be defended by very different kinds of people. Such a standpoint means that cosmology as a science is reaching its limits, and that we should consider the patterns of the universe, and the coincidences that make it, as just “happenstance.” The argument from cosmological fine-tuning leads us to question our very existence as observers, and we know that self-reference can produce tricky problems. The issue of fine-tuning might be linked to the way we become aware of the cosmos, and we know that our consciousness itself may be a scientific riddle, as put forth, for example, by the philosophical standpoint of “mysterianism” (McGinn 1991).

The third road is to accept some sort of teleological argument—something most scientists would be reluctant to do. Whereas the first road added a new principle to science, this third road would suppress one of its fundamental principles, namely the refusal of final causes. This kind of interpretation of the properties of the universe in terms of final causes is favored by deism (which would speak only of general providence) and by theism (whether Judaism, Christianity, or Islam), which would add special providence to general providence. However, final causes do not belong exclusively to monotheism: pantheistic views can endorse them. They might consider that matter is slowly “becoming aware” of itself (see, e.g., Reeves 1981), and that this process requires properties of matter that make complexity possible. A variant of these views can be found in Wheeler’s Participatory Anthropic Principle (PAP). PAP is an interpretation of quantum mechanics “à la Wigner” in which the observer is responsible for the collapse of the state vector. Thus the observation of the universe by observers puts its wavefunction in a state that enables their very existence as observers. In this context, the universe must have the properties that enable the existence of observers, because the observers select the wavefunction of the universe. Here, “must have” describes the result of a causal process.

The mainstream interpretation of SAP is the multiverse, which re-injects randomness into cosmology. According to this proposal, our universe is drawn from a large ensemble of realizations (or random draws) in a kind of “cosmic lottery” that explores a whole range of possibilities for the laws of nature and the values of the constants of physics. One specific example of this explanation is Lee Smolin’s theory (Smolin 1997), in which stellar black holes are the seeds of subsequent Big Bangs with different parameter sets. The parameter sets that enable the existence of massive stars (and their final evolution into black holes) are very fecund, and they are also those that enable the existence of life. However, the so-far preferred scenario for the multiverse comes from particle physics. An overarching law, or “fundamental law” (valid at high densities and temperatures), would produce “derived laws” (valid at lower densities and temperatures) by “symmetry breaking,” where some of the characteristics of the derived laws, including the values of the constants, are determined at random.

There is a mechanism for producing such symmetry breaking: cosmic inflation (Guth 1981; Linde 1982; Albrecht and Steinhardt 1982). Inflation was introduced to explain properties of the observable universe such as flatness, isotropy, and the absence of magnetic monopoles. The overall idea is that, when the universe was very young (typically 10⁻³⁵ s after the “singularity”), a scalar field was responsible for an exponential expansion that diluted the universe, homogenized its initial density irregularities, and flattened its space curvature as measured within a Hubble radius (the size of the observable universe). This process can occur repeatedly, in various places, giving rise to “eternal inflation” and a large number of Big Bangs and “universes.” CMB observations have already corroborated some of the predictions of inflation. The scalar field is not known yet, but there is hope that forthcoming observations of the CMB will be able to test a large fraction of the possible models, and especially the possibility that the scalar field is simply the recently discovered Higgs boson.
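
In its simplest form, the exponential phase described above corresponds to a nearly constant Hubble rate H; a conventional benchmark (model-dependent, quoted here as an assumption) is that about 60 “e-folds” of such growth suffice to solve the flatness and horizon problems:

```latex
% Quasi-exponential growth of the scale factor a(t) during inflation,
% with a nearly constant Hubble rate H; N_e counts the "e-folds".
a(t) \propto e^{H t},
\qquad
N_{e} \equiv \int H \,\mathrm{d}t \;\gtrsim\; 60
```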

Grand Unified Theories (GUTs) should give us the nature of the overarching or fundamental law. The best candidate could be superstring theory, one of the two major attempts to unify general relativity and quantum physics. Particles are described as vibration modes of “superstrings” in a 10D space. At lower energies, some of the dimensions have to be “compactified,” that is, to become very small, in order to make predictions in agreement with our observed 3D universe. However, the theory is currently undergoing a strong crisis, since there seem to be 10⁵⁰⁰ different ways to compactify the extra dimensions. At this stage, either we have to contemplate the possibility that these 10⁵⁰⁰ universes actually exist, which Susskind (2008) calls the “landscape,” or there is a yet-to-be-found process that makes only one compactification possible, which is the hope of those who still defend the theory. In any case, the theory is not tested yet, and there are debates about the very possibility of its testability.

5 The Cost of Each Option

This brief overview shows that it is not easy to get rid of SAP. As scientists, we are not ready to accept the existence of final causes, because their refusal is one of the fundamental principles of modern science. Nor can we accept the ignorabimus, which would mean the end of scientific exploration in a whole field of cosmology. Finally, we would be very reluctant to include in the list of our fundamental principles an anthropic principle that seems a completely ad hoc way to solve a potentially fruitful crisis.

Since we cannot be satisfied by final causes, the ignorabimus, or the anthropic principle, there is only one option left, that is, the multiverse. It seems an attractive solution, because it is suggested by GUTs. However, there is a cost to this option, and even a “double” cost. First, it may not be testable. Could a non-testable theory be considered scientific? At this stage, we might have to slightly twist the definition of what a scientific theory is, by accepting that it may have untestable consequences beyond our observable universe, provided there is sufficient evidence within our observable universe (evidence that is still missing for superstring theory; Woit 2006; Smolin 2006). Second, are we sure that we have gotten rid of final causes with the whole process of the multiverse? Shouldn’t the overarching law that produces the cosmic lottery itself be considered “fine-tuned,” since it is able to produce bio-friendly universes among its many duller outcomes?

There are three possible answers to this question. We may find in the future that one, and only one, fundamental theory is self-consistent and possible; but why would it be this specific theory, the one that makes bio-friendly universes possible? Or we may find that several or many theories are self-consistent and possible. Maybe all these theories exist in reality, in the spirit of the “ultimate ensemble theory” (Tegmark 1998): there is no choice, and everything exists. Or maybe only one of these possible overarching theories actually exists, and we are led to ask why this one rather than another: there has been a “choice,” which it is difficult to attribute to randomness, because randomness presupposes a process for the realization of random outcomes, and, by definition, there is no physical process above the fundamental law.

Of course, it is very difficult to guess whether we shall have hints toward one or another of these three answers in the future. At this stage of bold speculation, the preferred option is a matter of faith. In any case, we shall always face the issue raised by Leibniz: why is there something rather than nothing? And why is there this “something” rather than another one? Or, in other words: what is the origin of the substance that makes the world (Haeckel 1900)? What puts the “fire in the equations” that transforms mathematics into matter (Ferguson 2004)? Is it the mere logic of the only possible solution? Or does all that is logically possible have a correspondence in matter? Or is there a still unknown metaphysical process or reality that triggers or makes a choice?

Finally, there is another aspect of the issue to be considered. The observed fine-tuning favors bio-friendly universes, but is life the rule or the exception? Many scientists think that elementary life might be ubiquitous, given the large number of planets in habitable zones found by current surveys. We have no evidence of it yet, but we hope that, in the coming decades, we will be able to conduct spectroscopic surveys of bio-signatures on a few dozen planets around nearby stars. A single positive result would confirm the intuition of many astrobiologists, while negative results would only put a limit on the statistical frequency of life. Now, what about observers? So far, the only known observers are present on Earth. The problem is that once animals with superior cognitive capabilities appeared on Earth (such as dolphins, elephants, dogs, or apes), it took just a few tens of millions of years to produce Homo sapiens, and just a few hundred thousand years more to produce a technologically developed civilization, one that would be detectable through its short-wavelength radio emissions within a sphere of about 70 light-years, which encompasses about 40,000 stars and probably a similar number of planets. Why does a whole body of evidence, from Fermi’s paradox to the absence of any signal in the 60-year-old SETI surveys, point to the silence of the universe? At this stage of the reflection on the “Great Filter” (Hanson 1998; Bostrom 2008), there are three possibilities: (i) the “emergence bottleneck,” in which life is a very rare event, in the wake of REH; (ii) the “Gaian bottleneck,” in which elementary life appears frequently but vanishes quickly, because it is not able to control the evolution of its planet and to prevent the latter from turning into a desert without an atmosphere or into an ice ball, following a scenario that might have played out on Mars (Chopra and Lineweaver 2016); or (iii) the “self-destruction bottleneck,” in which technologically developed civilizations disappear “just after” they start developing, either through war or through the exhaustion of natural resources. Do these considerations change the perspective on fine-tuning? It seems that they may have at least an ethical consequence. If we are actually alone in the (observable) universe, because of one of the three above-mentioned bottlenecks, or maybe a combination of the three, doesn’t this loneliness bring back a new centrality to the Earth and give a certain sense of responsibility to the human? It is interesting to note that the multiverse explanation tends to transform SAP into a variant of WAP (we are located in a specific location of the multiverse, that is, a bio-friendly universe), and that the silence of the universe tends to transform WAP into SAP, because the large bio-friendly universe would then have just a single outcome of humankind as observers.

At the end of this overview, let us come back to Monod. For him, the world can be fully described by the interplay of chance and necessity. This interplay explains the contingency of the human being, and its subsequent meaninglessness. In Monod’s words, human beings appeared by chance in an indifferent universe. Necessity is the set of constraints given by the laws of nature, within which random processes can unfold. It turns out that, with WAP and SAP, this necessity is more constraining than what Monod envisioned in the 1960s, when he wrote his book. The set of constraints corresponds tightly to the possibility of a bio-friendly universe, and a small change in these constraints would have the dramatic consequence of making the universe more hostile to life. In this context, the multiverse appears as a solution to relax the strong constraints of the current laws, by injecting a new dose of randomness into the process, at an earlier stage.

With the multiverse, matter is necessary, and all patterns, including the derived laws, are the products of randomness and necessity. Necessity is now understood as the frame of constraints imposed by the overarching law. Is this overarching law necessary (as the only one that is self-consistent), or are there various possibilities? No matter the answer, the question remains as to what has put “the fire in the equations.” If there are various possibilities, we are still facing the classical issue: why is the universe so? What gave preponderance to one choice over another? Or, to put it in Leibniz’s words: “Moreover, if the things have to exist, we have to explain why they have to exist in such a way, and not otherwise.”

For a long time ahead, the hypothetical overarching law might appear similar to Lucretius’ clinamen, something whose action is “just enough” to let randomness play its role, but whose origin is mysterious. Believers might think they are still right to continue to see it as evidence for providence in the multiverse. Of course, the multiverse appears still larger and more astonishing than previous views of the universe. But that is not a problem for those who believe in God’s Will and Power. The vast expanses of sterile universes in the multiverse might just be a consequence of God’s creative power, which makes the overarching law with the purpose of creating a bio-friendly universe that is not “indifferent,” and ultimately creatures like the Human. “I am the Lord, the God of all mankind. Is there anything too difficult for me?” (Jeremiah 32:27).