Abstract
The practice of science is sometimes described as a coldly rational route that follows rigid guidelines to reach certain truth. The reality is very different, and much more interesting and exciting. To be a good scientist you need curiosity, imagination, a sceptical attitude and the willingness to admit mistakes when you make them, which you will.
General Principles
The practice of science is sometimes described as a coldly rational route that follows rigid guidelines to reach certain truth. The reality is very different, and much more interesting and exciting. To be a good scientist you need curiosity, imagination, a sceptical attitude and the willingness to admit mistakes when you make them, which you will.
How science works is defined by its methodology (Fig. 3.1). To be a scientist you have to be curious about the Universe and ask questions about some aspects that you do not understand. You have to concentrate on a particular problem that looks to be of manageable size and imagine possible explanations. You then have to devise observations and experiments to test these explanations, and ask whether the Universe behaves in accordance with one or another of your ideas. You usually have to modify your ideas to accommodate the results of your observations and experiments, and devise further tests in a continuing cyclical process. In practice not all of these steps take place in every scientific enquiry, nor do they always occur in the same order, so the scientific method is better described as a set of general principles. Discoveries often result from complex interactions over long periods between several individuals with different approaches, temperaments and modes of working. An example is the discovery of the genetic code, described in ‘Life’s Greatest Secret’ by Matthew Cobb. Major discoveries are also made by accident, X-rays and radioactivity being prime examples.
This procedure is not foolproof, nor is it guaranteed to give you the correct answers. The correct answer may be one that has not occurred to anyone. It is also likely that there is a limit to what we can understand, a limit imposed by the structure of the human brain. It is surprising to me that brains that evolved millions of years ago to cope with life on the African savannah can conceive of something as small as an atom. But we do not know where the limit of human understanding lies, so it is best to assume it is far away or we may be tempted to give up searching.
Some accounts of the scientific method stress the necessity of repeating experiments as often as possible. But it is also possible to test predictions made about single events that occurred in the past – historical science complements experimental science. For example the giant impact hypothesis supposes that the Moon was formed by the glancing collision of a Mars-sized body with the young Earth, creating a ring of debris from which the Moon formed. It is known that all the planets and moons sampled so far in the Solar system have different isotope ratios of elements such as oxygen and titanium, so we would expect that the Moon rocks would have different isotope ratios to Earth rocks. But direct measurements show that these ratios are essentially identical. So an active area of research is the discussion and analysis of other hypotheses about the origin of our Moon that accommodate the isotope evidence. I discuss in Chap. 6 the evidence supporting the idea that the extinction of dinosaurs, and the subsequent evolution of humans, was made possible by a massive change in environment caused by a single chance event – an asteroid strike some 65 million years ago.
Science is not the affirmation of a set of beliefs, but a process of enquiry aimed at building a testable body of knowledge constantly open to rejection or confirmation. There are three specific features of science that distinguish it from supernaturalism. I call these features: nullius in verba, Occam’s razor, and uncertainty.
Nullius in Verba
‘Nullius in verba’ is the motto of the Royal Society, the premier body of scientists in Britain, founded in 1660. Other countries have similar bodies of leading scientists, called National Academies. The motto translates roughly as ‘on the word of no one’, or ‘take nobody’s word for it’. It encapsulates the view that you should base your beliefs on your own assessment of the best available evidence and not take anyone’s word for it. So there is no room for dogma or for an appeal to authority, tradition or ancient texts – this is a major difference from how supernaturalism operates. This difference between supernaturalism and naturalism can be summarised by saying that, while supernaturalism has authorities who make assertions, naturalism has experts who are familiar with the best available evidence.
But now we have to ask – where is the best available evidence to be found? The answer is that it is found in the peer-reviewed literature. Scientists are elected to their National Academies on the basis of the quality of their papers in this literature. When a scientist or, more likely these days, a group of scientists, feels that they have made some new observations or arrived at some novel insights about some aspect of the natural world from their experiments, they write down what they have done in enough detail for other scientists familiar with the field to be able to repeat the observations and experiments. The main reason for doing this is to ensure that the observations are reliable, in the sense that they can be repeated by other, independent scientists. The completed writing is referred to as a manuscript or paper, and this is submitted to a learned journal that specialises in the appropriate branch of science. The journal editor sends the paper out for review by several anonymous experts in the field – the peers. The term ‘peer’ means ‘equal’, but is often misunderstood to mean ‘superior’.
These peers are asked to read the paper in detail and to assess the validity of the observations, experiments and conclusions. They may suggest that some of the conclusions are not justified until further observations or experiments have been performed, or point out flaws in the reasoning used by the authors. They may think that, although the conclusions are valid, they are not sufficiently important or novel to justify publication in the journal to which they have been submitted. The editor passes on these criticisms to the authors and asks them to revise the paper in light of the peers’ comments. Such is the pressure on space in the best journals that only those papers that contain the most innovative observations can be accepted for publication – the others are rejected, and the authors may then send them to other journals, where the entire process is repeated. The top two journals are Nature and Science – they appear every week. Figure 3.2 shows the front covers of these journals that appeared in November 2009.
Once a paper has been accepted into the peer-reviewed literature, its assessment by other scientists does not stop. Its conclusions are critically discussed at science conferences and in laboratories around the world. Eventually a general consensus on a given topic emerges. An example would be the general consensus that global warming has a human-made component. This does not mean that every climate scientist agrees with this conclusion, but it does mean that the weight of the evidence available today points in this direction. Future discoveries may of course modify this conclusion. Scientific ideas are always open to challenge and change in the light of new evidence.
Contrast this elaborate assessment procedure with the lack of such procedures by which religious claims can be assessed. How can you assess claims made in documents written hundreds or thousands of years ago by people whose knowledge of the world was inferior to ours? You either accept such claims based on the authority of those making them, or you do not. What you cannot do is to assess such claims in the same way that you can assess scientific claims by reference to the peer-reviewed literature. This is why there are so many different religious beliefs but only one science. Hypotheses made in a supernaturalist framework are not testable, which is why there is so much conflict between different religions – there is no way of resolving differences between them. For example, in the Christian religion alone, there are between 9,000 and 33,000 recognised distinct denominations, depending on how they are defined. Each of these denominations claims that its interpretation of the Bible is the correct one. Science on the other hand advances cumulatively, step-by-step, reaching broad agreement. Compared with religion, science speaks with one voice, and so is humanity’s only universal language.
Not all the claims made in the peer-reviewed literature are correct – scientists are human, they make mistakes, they are prone to dogma, and some commit fraud. They can be influenced by things such as politics, seniority, charisma, one-upmanship and the struggle to obtain research funding. Nevertheless, the peer-reviewed literature is still the best source of information about the world that we have. The same is true for the arts and the humanities – the peer-reviewed literature in these fields of study is the best source of information about the state of understanding in these fields. You should ignore health scares that appear in the media until the evidence has been peer-reviewed and confirmed by independent laboratories.
Occam’s Razor
The second distinctive aspect of science, and in my view, the most important, is summarized in Fig. 3.3.
William of Occam was a Franciscan philosopher who came from the village of Ockham in the county of Surrey in the United Kingdom. He tackled the problem that for any given body of evidence you can almost always postulate several, quite different explanations. William argued that in this situation the best way to proceed is to prefer the simplest explanation that is consistent with all the available evidence – the explanation that makes the fewest assumptions. The word ‘razor’ is used to mean that unnecessary assumptions are shaved away. Occam’s razor is sometimes referred to as the ‘principle of parsimony’; the word ‘parsimony’ means ‘economy’. Statements similar to this can be traced back as far as Aristotle, so it is an old idea. A good example is the debate over the ‘face’ on the surface of the planet Mars.
In 1976 the Viking spacecraft took the photograph of a rock formation on the Martian surface, shown in Fig. 3.4a. This photograph caused a lot of excitement because many people, including some scientists, interpreted it to mean that Martians had carved the image of a human face on the rock. Now this interpretation clearly makes a lot of assumptions – that intelligent creatures exist or have existed on the planet Mars, that they know what humans look like, or even look like humans themselves and that they want to signal to us. A much simpler interpretation, that is, one with fewer assumptions, is that this is just an accidental effect of the angle of sunlight that happens to remind us of a face. We humans are programmed to recognise faces because we are highly social animals. Some people see faces in clouds, fires, teacups and items of pastry! The picture shown in Fig. 3.4b was taken in 1998 by the Global Surveyor spacecraft when the angle of illumination of the same region was different, and it clearly supports the simpler interpretation. But let’s be honest – the simpler interpretation is also less interesting, so we have to be careful to avoid the natural human tendency to prefer more interesting explanations.
Now it is important to grasp that Occam’s razor does not say that you should prefer the simplest hypothesis because it is more likely to be correct. There is no a priori reason why the Universe should be simple. Biology certainly is not simple. What Occam’s razor says is that you should prefer the simplest hypothesis because it is the best way to proceed. It is the best way to proceed because it is easier, as well as usually quicker and cheaper, to test simple hypotheses than to test complex hypotheses. So William is defining a method – an essential part of the scientific method. However, it is important to appreciate that Occam’s razor is only a guiding principle and not an infallible rule. Indeed, Occam’s razor can mislead you if the correct explanation is not one that has occurred to you. I know this from personal experience – a lecture in which I describe my experience of being misled by Occam’s razor can be viewed online by finding ‘Croonian lecture 2011’ on the website of the Royal Society of London.
I cannot overemphasise the importance of Occam’s razor to the practice of science. If you abandon this principle, you might as well believe any interpretation of the world that you find comforting and appealing – and many people do exactly that. It is probably because of Occam’s razor that the majority of leading scientists today are not supernaturalists, because postulating invisible active agents clearly requires more assumptions than does the naturalistic view that such agents do not exist. Let us now look at some of the evidence.
Religious Belief Among Leading Scientists
A survey of the incidence of religious belief among leading American scientists was published in 1998, while a more detailed survey of belief among Fellows of the Royal Society of London was published in 2013 (Fig. 3.5).
The response rate in the survey of members of the US National Academy of Sciences was about 50 %, while that of Fellows of the Royal Society was about 23 %. Both surveys show that a large majority of the scientists in these two societies who responded to the survey reject the supernatural. But interestingly, this rejection of the supernatural by leading scientists is a recent trend, and if we look further back in the history of science, we see a different pattern.
We need first to ask the question – who was the first scientist in the modern sense? What I mean by this is – who is the first person we know about who used the basic experimental methodology that modern scientists use? Most people think it was an Ancient Greek philosopher, such as Aristotle, Plato or Archimedes. Ancient Greek philosophers based their views in many cases on empirical observations of the world rather than on appeals to authority, and thereby made many advances in understanding. But they were not renowned for formulating hypotheses and then testing them by experiment in the routine way that modern scientists do – they preferred to think rather than to experiment, which is why scientists tend to be disdainful about philosophers.
The first recorded scientist, defined as I have described, was an Arab called Ibn al-Haytham, who carried out experiments on many aspects of vision and optics in the tenth century. This was the middle of a period when in the Arab world there was a tremendous flowering in the arts, literature and sciences, called the Arabic Golden Age. Some people call it the Islamic Golden Age but many people of different faiths contributed to it, and they all wrote in Arabic. It is also surprising to learn that many of the discoveries that we in the West associate with people like Copernicus, Galileo and Newton in the sixteenth and seventeenth centuries were initiated in the Arab world, at a time when free enquiry was not encouraged by the religious authorities in Europe. Ibn al-Haytham was the first person to demonstrate that light travels from objects into eyes and not from eyes onto objects, as had been suggested by earlier Greek philosophers. He also showed by experiment that light travels in straight lines.
Now Ibn al-Haytham was a devout Muslim – that is, he was a supernaturalist. He studied science because he considered that by doing this he could better understand the nature of the god that he believed in – he thought that a supernatural agent had created the laws of nature. The same is true of virtually all the leading scientists in the Western world, such as Galileo, Newton, Faraday, Heisenberg and Planck who lived after al-Haytham, until about the middle of the twentieth century. There were a few exceptions – Pierre-Simon Laplace, Siméon Poisson, Albert Einstein, Paul Dirac and Marie Curie were naturalists for example. Charles Darwin experienced a decline in his Anglican belief in a benevolent god as he grew older, and in a private letter written two years before his death in 1882 said “I have never been an atheist in the sense of denying the existence of God… I think that generally (and more and more so as I grow older), but not always… that an agnostic would be a more correct description of my state of mind”. The term ‘agnostic’ was coined in 1869 by Darwin’s friend and supporter, Thomas Henry Huxley, who defined it as the position that “it is wrong for a man to say that he is certain of the objective truth of a proposition unless he can provide evidence which logically justifies that certainty”. In other words, Huxley was disputing the statement by religious people that their knowledge of the supernatural is certain.
Both agnosticism and atheism come in strong and weak versions. The strong agnostic thinks that the supernatural is unknowable even in principle, while the weak agnostic thinks that there is no empirical evidence to support the existence of the supernatural, and that therefore one should not believe in the supernatural unless, and until, such evidence is found. The strong form of atheism is defined as the belief that the supernatural does not exist, while the weak form of atheism is indistinguishable from the weak form of agnosticism. One way to remember the difference between weak agnosticism and strong atheism is that weak agnosticism assumes that the supernatural does not exist because there is no evidence for it, but does not rule it out as a logical possibility, while strong atheism asserts that the supernatural does not exist and does rule it out as a logical possibility. This is another debate where definitions are important.
Other correspondence suggests that Darwin was sometimes inclined to the deist view that an intelligent agent had created the Universe and the laws by which it operates, but thereafter had no interaction with it. This deist view is equivalent in practical terms to being a naturalist because it allows science to function in a naturalistic framework, as well as ruling out miracles and praying for divine intervention. For this reason, deism is the only type of religious belief that is not in conflict with the naturalist approach upon which science depends. Most of the other leading scientists up to about the middle of the twentieth century were supernaturalists however.
What can we deduce from these historical facts? Firstly, it is obvious that believing in the supernatural does not prevent you becoming a leading scientist. We can also deduce that such people separate the way they think about science from the way they think about religion. When doing science they use Occam’s razor, but when doing religion they abandon Occam’s razor, because postulating invisible agents clearly requires more assumptions than not postulating them. These assumptions include the origins, properties, and interests of such agents. These assumptions vary greatly among the different religions – in the monotheistic religions God is good, omniscient and omnipotent (hence the capital G), but this is not the case for some of the gods in many polytheistic religions.
What about the evidence that, according to polls and surveys, most leading scientists today have no religious beliefs? One can only speculate about the reasons for this. My suggestion is that it is because, at the end of the day, religious explanations are not really explanations – they may be emotionally appealing but they are intellectually unsatisfying, because they posit even greater mysteries than the ones you are trying to explain. As the philosopher Anthony Grayling so eloquently puts it “To answer the question of how the universe came into existence by saying ‘God created it’ is not in fact to answer the question, but to explain one mystery by appealing to an even greater mystery – exactly like saying that the universe rests on the back of a turtle, and then ignoring the question of what the turtle rests on”.
Naturalistic Origins of Religious Beliefs
One interesting problem that some naturalist scientists and philosophers of today are tackling is how to explain the universal persistence of supernatural beliefs – why do they occur, what accounts for their particular features, what purposes do they serve? In recent years, anthropologists studying the huge variety of supernatural beliefs found around the world, and psychologists seeking evolutionary explanations for religious beliefs, have proposed a number of hypotheses.
There are two general types of explanation offered, but they are not mutually exclusive – elements of both may be correct. The first type assumes that supernatural beliefs have direct survival value for humans and thus are adaptive features. Common sense purposes of the major religions include comforting people who are suffering, allaying the fear of death, providing explanations of things we do not understand, and encouraging social cohesion in a competitive world. Before the real causes of human disease started to be identified some 150 years ago, people often interpreted illness as punishment for flouting the will of a deity, so they would plead with their deity for relief. Humans are also probably the only species that are aware they are going to die and they often fear what may lie beyond. Most people grieve when they lose their loved ones and like to believe that their personalities survive in some sense after death. However, anthropologists such as Pascal Boyer, author of Religion Explained: The Human Instincts that Fashion Gods, Spirits, and Ancestors, and Scott Atran, author of “In Gods We Trust: The Evolutionary Landscape of Religion”, point out that there are many thousands of minor supernatural beliefs where these obvious, common sense explanations do not always apply, so they suppose there must be some deeper underlying reasons for supernatural beliefs, related to how the cognitive systems in the human brain interpret the world.
A recent hypothesis for the naturalistic origin of religious beliefs is that offered by Michael Shermer in his book “The Believing Brain”. Shermer has introduced two new terms to explain his hypothesis – patternicity and agenticity.
Patternicity and Agenticity
Patternicity is defined as the tendency to find meaningful patterns in both meaningful and meaningless noise. Shermer uses the following scenario to illustrate this idea. Imagine that you are an early human walking along the savannah of an African valley 3 million years ago. You hear a rustle in the grass. It probably is the wind, but it might be a lion. What you decide, and you must decide quickly, could determine whether you live or die. If you assume that the rustle is actually made by a dangerous predator like a lion, and not the wind, you will take rapid evasive action and thus may survive for another day. But if it turns out to be the wind and you took evasive action, you have merely wasted a little effort in moving away quickly – and you still survive. So there is less cost in always assuming that the correlation between your hearing a rustle and the possible presence of a threat is meaningful, even when it is not. Shermer proposes that our brains have evolved mechanisms that assume patterns are always meaningful because this aids our survival, but one consequence is that our brains have no means to distinguish between true and false patterns. In his words, “our brains are belief engines that connect the dots and create meaning out of the patterns we think we see in nature.... We are the descendants of primates who most successfully employed patternicity”. In his book Shermer then discusses the evidence from peer-reviewed studies of human behaviour that supports this idea. A recent paper by Foster and Kokko in the Proceedings of the Royal Society presents a model that explains why evolution by natural selection favours strategies that make many false causal associations in order to establish those associations that are essential for survival and reproduction (see “Further Reading”).
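The cost asymmetry behind this argument can be made concrete with a little expected-cost arithmetic. The sketch below is illustrative only: the probability of a lion and the two costs are invented numbers, not values from Shermer's book or from the Foster and Kokko model.

```python
def expected_cost(p_lion, always_flee, cost_flee, cost_caught):
    """Average cost per rustle for a fixed strategy.

    always_flee=True  -> you pay the small effort of fleeing every time.
    always_flee=False -> you pay nothing unless it really was a lion.
    """
    if always_flee:
        return cost_flee
    return p_lion * cost_caught

# Assumed, purely illustrative numbers:
P_LION = 0.01        # lions are rare
COST_FLEE = 1        # small effort of evasive action
COST_CAUGHT = 1000   # being caught is catastrophic

print(expected_cost(P_LION, True, COST_FLEE, COST_CAUGHT))   # 1
print(expected_cost(P_LION, False, COST_FLEE, COST_CAUGHT))  # 10.0
```

Even though 99 % of rustles are harmless under these assumptions, the jumpy strategy is ten times cheaper on average, because one catastrophic miss outweighs many wasted sprints. That is the sense in which selection can favour a brain that treats every pattern as meaningful.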
Agenticity is defined as the tendency to infuse patterns with meaning, intention and agency. Humans, and some other animals possess a mental property, called ‘intentionality’ or ‘theory of mind’. Intentionality is the ability of humans and some other animals to treat other objects and animals as agents like themselves, that is, agents with minds that have desires, beliefs and intentions. Each time you have a conversation, you adopt an intentional stance towards the other person – you make assumptions about their desires, beliefs and intentions because you believe the other person is an active agent like yourself, with a mind like yours. The other person is making similar assumptions about you. The term ‘adopting the intentional stance’ was suggested by the philosopher Daniel Dennett.
Adopting the intentional stance is clearly an important survival tactic for animals, especially for social animals like ourselves, whose survival prospects improve if we co-operate with other humans. So the suggestion is that our brains are so hard-wired to produce this type of thinking that we tend to extend it to other objects and events that affect us. For example, when a rock falls and injures us, some people tend to assume that this means that there is an active agent making the rock do this – they believe that the rock moves because of some intentionality. Such people commonly think that the agent is invisible because they cannot see an agent. In primitive societies today, it is believed that many objects in the environment – trees, rocks, rivers, mountains and so on – are inhabited by invisible spirits that can be influenced by ritual practices. Before we feel superior to this type of thinking, how many among the most rational of us shout at our PCs when they do not do what we want? It is easy to slip momentarily into responding as though they were active agents. We all sympathize with Basil Fawlty losing his temper with his car when it refuses to start in the BBC TV comedy series Fawlty Towers. This adoption of the intentional stance is also a common experience among survivors of life-threatening accidents – they attribute meaning to their survival in terms of actions by a supernatural agent.
The Shermer hypothesis offers a naturalistic explanation of why it is so common to believe that intentional agents control the Universe. The agents are often thought to be invisible and control the Universe from the top down, in contrast to the scientific view which states that the Universe is controlled by unvarying regularities that work from the bottom up. These supposed intentional agents include gods, souls, spirits, demons, angels and the like, but also include conspiracies led by groups of people who manipulate the world’s economy in secret.
Evidence that this tendency to interpret the world in a supernatural fashion is partly genetically determined comes from the Minnesota Twin Family Study, in which the religiosity of identical twins raised apart in different environments was compared with that of fraternal twins raised apart. These studies began in 1979 by studying identical twins who were separated at birth and raised in different families. It was found that an identical twin reared apart from their co-twin is about as similar to that co-twin in personality, interests and attitudes as identical twins who were reared together. The conclusion is that these similarities are due to genes rather than environment, while any differences between identical twins reared apart must be due to their environments. The results are interpreted to mean that about 50 % of the tendency to be religious is genetically determined.
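The arithmetic behind such heritability estimates can be sketched in a few lines. The twin correlations below are illustrative values chosen to reproduce the roughly 50 % figure quoted above; they are not the Minnesota study's actual numbers, and the formulas are the standard textbook approximations rather than the study's full statistical model.

```python
def h2_reared_apart(r_mza):
    """Identical twins reared apart share all their genes but (by
    assumption) none of their environment, so their trait correlation
    directly estimates the heritability h^2."""
    return r_mza

def h2_falconer(r_mz, r_dz):
    """Falconer's formula for twins reared together: identical twins
    share all genes, fraternal twins on average half, so
    h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Illustrative correlations (assumed, not measured):
print(f"{h2_reared_apart(0.50):.0%}")    # 50%
print(f"{h2_falconer(0.50, 0.25):.0%}")  # 50%
```

Both routes give the same estimate here by construction; the point is only to show why a twin correlation of about 0.5 for religiosity translates into the "about 50 % genetically determined" claim.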
Given the universality of interpreting the world in an intentional fashion, it is perhaps not surprising that some scientists have also fallen into this trap. For example, the first version of the Gaia hypothesis of James Lovelock defined Gaia as “a complex entity involving the Earth’s biosphere, atmosphere, oceans and soil: the totality constituting a feedback or cybernetic system which seeks an optimal physical and chemical environment for life on this planet”. The simpler view, held by most scientists, is that the biosphere is a comprehensible mixture of air, water, soil and organisms, whose behaviour is explicable in terms of different steady states produced by negative feedback effects. There is no sense in which such a system can be said to seek anything, and Lovelock has since stated that this aspect of his original proposal was meant only in a metaphorical sense.
Similarly, some physicists suggest that, because the Universe has properties that allow human life to appear, those properties must have been designed with this intention in mind – the so-called strong anthropic principle. It is hard to imagine a finer example of unconscious arrogance, as well as ignorance of the mechanisms of evolution, than to assert that humans are the purpose of the Universe, but this example does illustrate the tendency we all have to adopt the intentional stance.
On this intentional hypothesis, the widespread tendency to explain the world in supernatural terms is not itself an adaptive feature of evolution, but a byproduct of parts of the mind that evolved because they aid survival in other ways. It is essential to realise that a common feature of evolution is that properties which evolved because they help survival in a particular environment may have important, but quite unrelated, consequences, later on. For example, the evolution of feathers in some dinosaurs was probably connected with the development of warm-bloodedness, which requires insulation to combat the loss of heat, but had the unrelated, but important, consequence of permitting the subsequent development of flight.
Thus the universality of the intentional stance does not mean that supernatural beliefs are necessarily correct, only that they may originate naturally in the cognitive systems that all humans use to interpret the world. This hypothesis amounts to saying that, while supernaturalism tells us something about what goes on inside the human head, there is currently no convincing evidence that it also tells us what goes on outside the human head. So we have to be alert to avoid being victims of our biology, especially today when we find ourselves living with a stone-age mentality in a space-age world that is changing much more rapidly than our brains are evolving.
Uncertainty
The third distinctive feature of science is that its conclusions are never certain but are always provisional. The provisional nature of scientific knowledge is the aspect most commonly misunderstood by non-scientists. Often when science is mentioned in the media, the impression is given that scientific knowledge is absolute and certain. This is not the case.
If we ask what science is, it is not a set of data, it is not a set of techniques, it is a set of ideas. These ideas are based on reason applied to data and techniques, but data and techniques do not on their own constitute science – it is the ideas that constitute science. These ideas are based on the best evidence available at the time, but they are not sacrosanct – they are always open to change to accommodate new data and new ideas. It is this openness to change that explains why science is so successful at understanding the Universe. Let me be very clear about this – science is the most successful human endeavour in history. Despite all the problems in the world, it is the case that never before in human history have so many people been so well fed, or had so many opportunities to lead long, healthy and interesting lives. These advances stem from the application of scientific ideas and discoveries. Science works, so it may seem surprising that scientific knowledge is not certain in an absolute sense. Why is this?
It is because you cannot predict the future. You can never be certain that even long-held and very successful scientific ideas will not change as a result of future discoveries. Even if you had a theory that was completely correct, you could never be sure that this was the case, because you could never be certain that future discoveries would not falsify it. Let me give you an example, illustrated in Fig. 3.6.
In 1687 Isaac Newton published his master work that marks the beginning of science in the Western world – the Principia Mathematica. This was the first book to propose general natural laws in a quantitative fashion. Newton’s laws of motion and his equations that describe gravity are incredibly successful and precise – precise enough to be used to send astronauts to the Moon and to land spacecraft on the planet Mars. Nevertheless, the concepts on which Newton based his laws of motion were shown to be incorrect just over 200 years later by Albert Einstein in 1905. Newton’s concepts with respect to motion are wrong because he supposed that time and space are absolute and independent – this agrees with our common-sense perceptions of time and space. Einstein had the genius to realise that because experiments show that the velocity of light is constant, irrespective of the velocity of the light source, this view must be wrong – time and space are relative to one another, not independent. Experiments done since Einstein’s time show that the faster a clock moves, the slower it ticks, and that the faster an object moves, the heavier it becomes. In other words, our common-sense perceptions of time and space are wrong. Newton’s laws of motion work well enough in practice because relativistic effects become significant only at velocities much faster than the ones we normally have to deal with. These effects impinge on human affairs in only a few areas, notably particle accelerators, which would not work unless relativistic effects were taken into account in their design, and the GPS satellite system, whose clocks must be corrected for relativistic effects.
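How small, yet how consequential, these effects are can be seen in a rough back-of-the-envelope calculation. The sketch below uses an assumed orbital speed for a GPS satellite of about 3.9 km/s (an illustrative figure, not taken from this chapter), and considers only the special-relativistic slowing of a moving clock; the real GPS correction also includes a larger gravitational effect of the opposite sign.

```python
import math

C = 299_792_458.0   # speed of light in vacuum, m/s
V_SAT = 3_874.0     # assumed GPS orbital speed, m/s (illustrative value)

# Lorentz factor: gamma = 1 / sqrt(1 - v^2/c^2)
gamma = 1.0 / math.sqrt(1.0 - (V_SAT / C) ** 2)

# Fractional rate by which the moving clock runs slow
fractional_slowing = 1.0 - 1.0 / gamma

# Accumulated lag over one day, converted to microseconds
seconds_per_day = 86_400
lag_us = fractional_slowing * seconds_per_day * 1e6

print(f"gamma - 1        ~ {gamma - 1:.3e}")
print(f"lag per day (us) ~ {lag_us:.1f}")
```

The fractional effect is of order one part in ten billion, yet left uncorrected it would accumulate into microseconds per day – and since GPS positioning relies on timing light signals, a microsecond of clock error corresponds to hundreds of metres of position error.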
So here we have an example of a very successful scientific theory that was accepted for over 200 years, but whose basic concepts, on which the Industrial Revolution was partly founded, are now known to be incorrect. Science is a uniquely successful human activity precisely because it employs this inbuilt, self-correcting mechanism. So certainty is an illusion. Scientific knowledge is not a fixed destination but a moving target. This is why media commentators discussing science who use the word ‘proof’ demonstrate that they do not understand how science works. In science, proof is not an option. Disproof, on the other hand, is an option: if we discovered human fossils in rocks older than the rocks containing dinosaurs, our current ideas about the evolution of mammals would be instantly disproved. Einstein famously said that no amount of experiments could prove him correct, but a single experiment could prove him wrong. So if you crave certainty, you will not find it in science. This is why media interviewers who ask scientists “Are you certain that…?” are demonstrating their ignorance of how science works.
Science and Religion Compared
Science tells you how the world works, but it does not tell you how to behave or what to admire – science is morally and aesthetically neutral. The major world religions, on the other hand, do offer instruction in these important areas, but this advice is based on their supernatural interpretations of the world. Because there is no general agreement on the nature and intentions of postulated supernatural agents, it is not surprising that different religions take conflicting moral positions on such things as warfare, the status of women, and sexual behaviour. Science, by comparison, helps you to predict the likely results of any action that you are considering. Some people argue that religions are really just ethical systems aimed at persuading people to behave better, and thus that religious beliefs should be encouraged, but this argument is incorrect. All religions start by postulating explanations about the nature of the physical Universe and the forces that control it, based on human experience. It is only later that some religions try to justify their beliefs by recommending or enforcing certain types of behaviour.
Religious beliefs do have some major positive effects. For example, many people that run charities helping the poor and disadvantaged are motivated by their religious outlook. It is also obvious that such beliefs have inspired and stimulated many forms of art, especially painting, sculpture, architecture and music. You have only to look up at the ceiling of the Sistine Chapel in the Vatican in Rome, or listen to the King’s College Choir in Cambridge, to realise this. Science has not inspired art to anything like the same extent, but in my personal view, what science is doing is to reveal a Universe whose complexity and beauty surpass anything imagined by supernaturalists. Figure 3.7 shows a photograph of the Crab Nebula, the expanding remnants of a massive star that was observed by Chinese astronomers to explode in 1054. Such supernova explosions have a biological relevance because they distribute into space the elements out of which stars like the Sun and its planets are made.
On the other hand, for some, though by no means all, people, science lacks the emotional appeal of religion – it presents a view of human life that is bleak and joyless by comparison, because of the absence of any discernible, overall purpose in the Universe. This view conflicts with the purpose-driven, individual lives that we all lead. This relative lack of appeal is probably the main reason why the majority of people confine their interest in science to its useful applications or dangers, and turn to religion to seek meaning and comfort, especially in times of grief and hardship. Belief in the supernatural provides the possibility of hope in circumstances where a naturalistic approach might provide none. As the poet T.S. Eliot wrote, “Humankind cannot bear very much reality”.
The urge to believe in the existence of a personal, all-loving and all-powerful God is very strong in many, but not all, people. The strength of this tendency is shown by the lengths of irrational reasoning to which some people will go in attempting to explain how such a God can permit horrible things to befall innocent people. For example, the argument has been advanced that the Holocaust was permitted by God because he has given humans free will, that is, the ability to make choices between different courses of action. The problem with this argument is that an all-loving God would surely wish to prevent such events, and an all-powerful one could, so permitting them implies that he lacks one attribute or the other. Nor does this argument explain the terrible things that happen not because of human actions but because of natural disasters such as earthquakes and tidal waves.
An example of this type of thinking was shown by Rowan Williams, a former Archbishop of Canterbury, who was heard to say, when witnessing from close quarters the deliberate destruction of the Twin Towers in New York in 2001, that “God is useless”. He later explained that this terrible event had been permitted because God has given us free will. Thus the depth of his need to believe in an all-loving God overrode the simpler explanation of such events provided by the naturalistic viewpoint. On the naturalistic view of the world, such events present no such problem – bad things happen to innocent people because they were unlucky enough to be in the wrong place at the wrong time, while bad behaviour exists because violence, cheating and aggression had survival value during the early evolution of humans, just as they did for other animals.
Individual humans are physically poorly adapted to survive and so there is a strong evolutionary tendency for humans to band into tribes. Being a member of a tribe improves survivability but often results in conflict with other tribes, which may decrease survivability. We see tribalism at many levels in the modern world, in the form of political, religious and sporting groupings. Joining a group of like-minded people is a natural thing to do, but it can lead to bad behaviour if loyalty to the tribe becomes stronger than our basic morality. For example, the Roman Catholic Church has a long history of covering up child sexual abuse by a small number of priests in order to preserve the reputation of their tribe. Tribalism is pervasive and controls much of our behaviour. Even scientists are not immune to the lure of tribalism – leading scientists are elected to National Academies which rigorously promote their members’ interests.
The Naturalistic Origins of Moral Values
Many people derive their moral values from their religious beliefs. The creationists prominent in the USA reject evolution partly because they fear that acceptance of the evolutionary origin of humans will undermine the basis for morality and lead to social breakdown. They think that if the idea that humans are just another sort of animal becomes widely accepted, it will lead to an increase in violence and disorder. What little evidence there is in the peer-reviewed literature does not tend to support this view, and I will now discuss this evidence as an example of how scientists, in this case, social scientists, try to understand the world.
In 2005, a freelance palaeontologist called Gregory Paul published a paper in the electronic peer-reviewed Journal of Religion and Society, which is freely available online. This paper compares the incidence of various indicators of the moral state of a society, such as homicide, juvenile and early adult mortality, teenage pregnancy and abortion, and the incidence of sexually transmitted diseases, with the incidence of religious belief and acceptance of human evolution in 18 prosperous democracies of the world for which extensive data are available. This comparison showed a negative correlation between the acceptance of human evolution and the degree of religious belief. Thus the least religious nation of those surveyed, Japan, shows the highest acceptance of human evolution, while the lowest level of acceptance is found in the most religious developed democracy, the USA.
These correlations support the view of creationists that religious belief tends to lead to the rejection of belief in human evolution, but their further conclusion that the latter leads to a less moral society is contradicted by the data on the incidence of the indicators of low moral standards listed above. All these indicators correlate positively with religious belief, the leader being the USA, which has the highest rates of homicide, early mortality, teenage pregnancy and abortion, and sexually transmitted infection in the developed world. The most successful countries by these indicators are the secular democracies, such as France, the Scandinavian countries and Japan. Some of these correlations are illustrated in Fig. 3.8.
Correlations must be interpreted with great caution because the observation that two phenomena are correlated with each other does not necessarily mean that one causes the other. For example, important causal factors in the high rate of homicide in the USA are likely to be the easy availability of firearms and the wide disparities in wealth between different groups of people. Gregory Paul himself cautions in the Introduction to his paper that “This is not an attempt to present a definitive study that establishes cause versus effect between religiosity, secularism and societal health. It is hoped that these original correlations and results will spark further research and debate on the issue”. In a later paper published in the journal Evolutionary Psychology (see Further Reading), Paul presents more extensive evidence to support his view that high levels of religious belief are not well correlated with more moral societies.
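The danger of reading causation into correlation can be made concrete with a toy simulation. The sketch below uses entirely made-up data, not Paul's: a hidden factor Z drives two quantities X and Y, which never influence each other directly, yet the two end up strongly correlated.

```python
import random

random.seed(0)

# Hypothetical data: a hidden confounder Z drives both X and Y.
# Neither X nor Y causes the other.
z = [random.gauss(0, 1) for _ in range(1000)]
x = [zi + random.gauss(0, 0.3) for zi in z]  # placeholder variable
y = [zi + random.gauss(0, 0.3) for zi in z]  # placeholder variable

def pearson(a, b):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = pearson(x, y)
print(f"r(x, y) ~ {r:.2f}")  # high, despite no causal link between x and y
```

A naive reading of such an r would conclude that X causes Y or vice versa; the correct conclusion is only that the two vary together, for reasons the correlation itself cannot reveal – precisely the caution Paul urges about his own societal data.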
If the naturalistic assumption is correct, moral values must originate from natural sources. An important aim of evolutionary theory is to explain why the vast majority of people have a sense of natural justice, that is, a sense of right and wrong, despite the fact that moral behaviour of that sort is rarely observed in the interactions of non-human animals.
Some evolutionary biologists suggest that morality is a product of natural forces acting through evolution at both the level of individuals and the level of groups of individuals. The basic argument is that those behaviours that increase the probability of survival and reproduction become selected for during evolution. Some of these behaviours are linked to emotional states such as guilt and empathy, so that to us these emotions appear compelling. It is hypothesized that all social animals, from ants to elephants, have modified their behaviour to become less ‘selfish’ because this increases the survival of the group as a whole. On this view, human morality is a natural phenomenon that evolved to increase human co-operation by restricting selfishness.
Individual humans are physically weak and not specialised for running or combat in the way that many other animals are. One of the reasons that, despite these limitations, humans are so successful, is because they co-operate with one another. A simple example is described in the essay The Biological Basis for Morality by the biologist, Edward O. Wilson, which is freely available online.
Imagine a Palaeolithic band of five hunters. One considers breaking away from the others to look for an antelope on his own. If successful, he will gain a large quantity of meat and hide – five times as much as if he stays with the band and they are successful. But he knows from experience that his chances of success are very low, much less than the chances of the band of five working together. In addition, whether successful or not, he will suffer animosity from the others for lessening their prospects. By custom the band members stay together and share the animals equitably. So the hunter stays.
We know from experiments with non-human animals that behaviour is partly determined genetically, so if the tendency of humans to co-operate has a genetic component, it follows that genes predisposing people to behave in this way will increase in frequency in the human population. Over thousands of generations, such increases produce those emotions that underlie moral behaviours such as co-operation. In other words, moral feelings are more accurately described as moral instincts. We experience these instincts as conscience, self-respect, shame and outrage. Further discussion of the naturalistic origins of moral behaviour among humans can be found in the book The Origins of Virtue by Matt Ridley.
A vital by-product of the strong human tendency to co-operate with one another is the development of technology. If you compare the behaviour of humans with that of other animals, it is clear that we have become the dominant species on the planet because we can control our environment by means of technology. Thus the success of the human species today depends upon co-operation, even to the extent of putting the interests of the community above that of the individual. This survival advantage of co-operation would have been especially important when early humans were evolving over several million years on the African savannah. Which brings me to evolution.
Further Reading
How Science Works: David Goodstein. www.cs.wisc.edu/~dluu/data/papers. A useful discussion about the criteria that distinguish the practice of science.
Religion Explained: The human instincts that fashion gods, spirits and ancestors. Pascal Boyer. Published by Vintage, 2001. ISBN 0 099 282763. This book discusses the evidence for the intentionality argument for the origin of religious beliefs.
In Gods We Trust: The evolutionary landscape of religion. Scott Atran. Published by Oxford University Press 2002. ISBN-13 978-0-19-517803-6. An anthropologist’s view of the naturalistic origins of religious beliefs.
Breaking the Spell: Religion as a Natural Phenomenon. Daniel C. Dennett. Published by Allen Lane 2006. ISBN-13 978-0-713-99789-7. This book argues that religious beliefs originate from the cognitive systems that have evolved in the human brain.
How We Believe; The Search for God in an Age of Science. Michael Shermer. Published by W.H Freeman and Company 2000. ISBN 0-7167-4161-x. This book addresses the question as to why people believe in God, as determined from surveys, and examines the validity of the different reasons they give.
The Believing Brain. Michael Shermer. Published by Constable & Robinson Ltd. 2011. ISBN 978-1-78033-529-2. In this book Shermer uses his thirty years of research as a psychologist to draw the conclusion that, contrary to common expectation, human beliefs come first, and it is only later that we seek evidence to support them.
The Origins of Virtue. Matt Ridley. Published by the Penguin Group 1996. ISBN 0-670-86357-2. This book discusses the origins of co-operation as an evolutionary strategy that led to human society.
The Chronic Dependence of Popular Religiosity upon Dysfunctional Psychosociological Conditions. Gregory Paul, Evolutionary Psychology vol.7, 398–441 (2009). This paper uses 25 indicators of social health to determine the association between degrees of religious belief and the moral health of first world democracies.
The Evolution of Superstitious and Superstitious-like Behavior. Kevin Foster and Hanna Kokko. Proceedings Royal Society B 276, 31–37 (2009).
Life’s Greatest Secret. Matthew Cobb. Published by Profile Books 2015. ISBN-10 1781251401. A detailed account of the tortuous path to unravelling the role of DNA.
© 2016 Springer Science+Business Media Dordrecht
Ellis, J. (2016). How Science Works. In: How Science Works: Evolution. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-7749-0_3
Print ISBN: 978-94-017-7747-6
Online ISBN: 978-94-017-7749-0