The phenomenon of consciousness represents the last enigma. After the long journey of the human mind—attempting to decode the workings of the universe—it has returned to face its own existence. Just as understanding the fundamental nature of reality turned out to be an elusive quest, the challenge of understanding one's own mind appears truly stupendous. The psychologist Susan Blackmore summarizes (Blackmore 2005, p. 1):

What is consciousness? This may sound like a simple question but it is not. Consciousness is at once the most obvious and the most difficult thing we can investigate. We seem either to have to use consciousness to investigate itself, which is a slightly weird idea, or have to extricate ourselves from the very thing we want to study.

Consciousness is not only a vexing problem for scientists and philosophers alike, it is also a deep mystery for every conscious being. On the one hand, it is so familiar. Indeed, my conscious experiences of the world and of dreams—flowing in the river of time, eternally locked into this moment of "now"—are all that I know. I am intimately and fundamentally a manifestation of my consciousness. On the other hand, consciousness creates an insurmountable schism: reality divides into the inner and outer world. My first-person subjective experience is pitted against an outer, objective reality. The two realities collide. One is characterized by "what does it feel like?" and the other by "out there." The observer and the observed emerge. The central conundrum is the following: How do our brains conjure up subjective, conscious experiences? In other words, where is the greenness of green to be found in our brains? After all, it is mediated by intrinsically featureless electromagnetic radiation with a wavelength between 495 and 570 nm. In more technical language, what are the neural correlates—the pinpointed electrochemical neuronal activity in the brain—of qualia, the subjective conscious experiences? Even more puzzling is metacognition: thinking about thinking, or the awareness of one's own awareness. The mind can relate to the mind as a mind.
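To see how little of the experience the physical description carries, consider a deliberately crude sketch in Python (an illustration added here, not part of the cited literature; the band boundaries are conventional textbook values). The program holds the complete physical story, a number and a label, yet the greenness is nowhere in it:

    # The "physics" of green light reduces to a number; the returned label
    # is just another symbol, not the subjective experience of green.
    def color_label(wavelength_nm: float) -> str:
        bands = [
            (380, 450, "violet"),
            (450, 495, "blue"),
            (495, 570, "green"),   # the band cited in the text
            (570, 590, "yellow"),
            (590, 620, "orange"),
            (620, 750, "red"),
        ]
        for lo, hi, name in bands:
            if lo <= wavelength_nm < hi:
                return name
        return "outside the visible spectrum"

    print(color_label(520))  # -> "green"; no quale to be found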

The neuroscience pioneer Christof Koch summarized the enigma as follows (Koch 2012, p. 23):

Without consciousness there is nothing. The only way you experience your body and the world of mountains and people, trees and dogs, stars and music is through your subjective experiences, thoughts, and memories. You act and move, see and hear, love and hate, remember the past and imagine the future. But ultimately, you only encounter the world in all of its manifestations via consciousness. And when consciousness ceases, this world ceases as well.

Many traditions view humans as having a mind (or psyche), a body, and a transcendental soul. Others reject this tripartite division in favor of a mind-body duality. The ancient Egyptians and Hebrews placed the psyche in the heart, whereas the Maya located it in the liver. We moderns know that the conscious mind is a product of the brain. To understand consciousness, we must understand the brain.

But there's the rub. How the brain converts bioelectrical activity into subjective states—how photons reflected off water are magically transformed into the percept of an iridescent aquamarine mountain tarn—is a puzzle. The nature of the relationship between the nervous system and consciousness remains elusive and the subject of heated and interminable debates.

Since 1995, specialists and interested laypersons have flocked to the picturesque Swiss city of Lucerne every two years. The focus of the Swiss Biennial on Science, Technics + Aesthetics lies on contemporary challenges to knowledge. Topics range from quantum physics to cosmology. Over the years, many intellectuals have presented their thoughts, the likes of Heather A. Berlin, Hans-Peter Dürr, Marcelo Gleiser, Stuart Hameroff, Donald D. Hoffman, John Horgan, Brian Josephson, Koch, Lawrence M. Krauss, Roger Penrose, Dean Radin, Martin Rees, Matthieu Ricard, Abner Shimony, Henry Stapp, Franz X. Vollenweider, Anton Zeilinger, and the Dalai Lama's English translator, Thupten Jinpa. In 2001, the eminent philosopher Ernst von Glasersfeld summarized (von Glasersfeld 2001):

Every two years René Stettler, the owner and director of the Neue Galerie of Lucerne, organizes a two-day symposium for scientists, philosophers, and artists to present and discuss their views on a topic thought to be of interest to a general audience. The main purpose of these events is to foster interdisciplinary discussion and the New Gallery sees itself as a “cultural laboratory”. This year’s symposium had the title “The Enigma of Consciousness” and attracted between four and five hundred people, filling in the city’s theatre.

In 2016 and 2018 the topic was, yet again, The Enigma of Consciousness. A diverse list of speakers addressed the issue, including scholars of Buddhism, anthropology, and Amazonian shamanism. Some conclusions from the conference are discussed in Sect. 14.2.2.

It has become apparent that the enigma of consciousness entails a large complex of problems. Challenges originate from issues as diverse as free will, quantum theory, psychoactive drugs, altered states of consciousness, contemplative traditions, meditation, information processing, virtual reality, and artificial intelligence. Indeed, we now have to ask ourselves how unique our human consciousness is. There is very compelling evidence that many animals—perhaps even insects—also have an inner sensation that "it feels like something." The fact that animals possess complex inner worlds—with a great capacity for suffering—represents a pressing, albeit mostly ignored, ethical challenge when it comes to industrial livestock production. For instance, in 2016, approximately 65.8 billion chickens, 1.5 billion pigs, and 302 million cattle were slaughtered globally. Viewed from a historical perspective (Harari 2015, p. 104f.):

Domesticated chickens and cattle may well be an evolutionary success story, but they are also among the most miserable creatures that ever lived.

To this day, the nature of consciousness is as elusive as ever, despite the tremendous advances in neuroscience and clinical technology of the last decades. As such, the issue has the potential to attract a lot of publicity—and funding. A case in point is the controversial neuroscientist Henry Markram (Schneider 2016):

[His] research proposal received the biggest research funding grant in history: one billion Euros from the European Union, for his "brainchild" (as journalists dubbed it), the Human Brain Project. The modest promise Markram originally made to secure this mind-boggling mountain of cash: he intended to simulate the entire human brain in his supercomputer by 2023, the possibility of artificial consciousness specifically not excluded.

Unfortunately, events did not unfold as planned (Schneider 2016):

Now however, his consortium partners took over, Markram was dethroned in a scientists' coup and pushed aside to tinker on his seemingly less ambitious, but just as science-fictionary mouse Blue Brain simulation. Once in control of almost everything and everyone, with all the big money going through his hands, Markram is now only one of 12 project leaders and far from being the boss.

This, after (Abbott 2015):

[M]ore than 150 leading neuroscientists sent a protest letter to the European Commission, charging, among other things, that the committee was acting autocratically and running the project's scientific plans off course. Led by the charismatic but divisive Henry Markram, a neuroscientist at the Swiss Federal Institute of Technology in Lausanne (EPFL), which is coordinating the HBP [Human Brain Project], the committee had stirred up anger in early 2014 when it revealed plans to cut cognitive neuroscience from the initiative.

The controversy continues (Schneider 2016):

After 3 years and almost EUR 150 Mio spent, HBP [Human Brain Project] delivered no published results worth mentioning.

This lack of progress, despite the availability of vast resources, makes consciousness an even more mystifying phenomenon. Indeed (Burkeman 2015):

It would be poetic—albeit deeply frustrating—were it ultimately to prove that the one thing the human mind is incapable of comprehending is itself.

Koch succinctly captures the essence of this discrepancy (Koch 2012, p. 23):

[A]stronomy can make testable statements about an event that took place 13.7 billion years ago [referring to NASA’s Cosmic Background Explorer data]! Yet something as mundane as a toothache, right here and now, remains baffling.

Perhaps this assessment can be placed in a different context. After all, the human brain is the most complex structure we have discovered in the universe. That is, if we look from the outside. Looking from the inside, we would never guess that something so intricate and sophisticated is lurking behind our eyes, in the silent darkness of our skulls.

1 The History and Philosophy of Our Minds

The enigma of consciousness has entranced the human mind ever since it became aware of itself. However, a new chapter in the saga opened over two decades ago (Burkeman 2015):

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness [...]. Though he didn’t realize it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life—perhaps the central mystery of human life—and revealing how embarrassingly far they were from solving it.

Indeed, in serious academic circles, the notion of consciousness was taboo (Burkeman 2015):

By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time.

[...]

As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

Then, in 1990, Francis Crick, the co-discoverer of the double helix, wrote an article with Koch, titled Towards a Neurobiological Theory of Consciousness. It opens as follows (Crick and Koch 1990):

It is remarkable that most of the work in both cognitive science and the neurosciences makes no reference to consciousness (or "awareness"), especially as many would regard consciousness as the major puzzle confronting the neural view of the mind and indeed at the present time it appears deeply mysterious to many people.

As a scientist, it is always risky to go against the established status quo. Indeed, Koch recalls (quoted in Burkeman 2015):

A senior colleague took me out to lunch and said, yes, he had the utmost respect for Francis [Crick], but Francis was a Nobel laureate and a half-god and he could do whatever he wanted, whereas I didn’t have tenure yet, so I should be incredibly careful. Stick to more mainstream science! These fringey things—why not leave them until retirement, when you’re coming close to death, and you can worry about the soul and stuff like that?

This highlights yet another example of an entrenched scientific paradigm in the Kuhnian sense (Sect. 9.1.3). It is not always the unquenchable thirst for knowledge that sets the scientific agenda; often, mundane constraints imposed by authority outlaw certain ideas. It is then up to maverick scientists to herald the start of a new paradigm. However, the stigmatization of consciousness can still be felt today. In the words of the neuroscientist Antonio Damasio (Damasio 2011):

This [the conscious mind] is a mystery that has really been extremely hard to elucidate. All the way back into early philosophy and certainly throughout the history of neuroscience, this has been one mystery that has always resisted elucidation, has got major controversies. And there are actually many people that think we should not even touch it; we should just leave it alone, it’s not to be solved.

Whereas neuroscientists and cognitive scientists can retreat into the specificities and technical details of their research, philosophers are exposed to the full brunt of the controversy (Burkeman 2015):

The consciousness debates have provoked more mudslinging and fury than most in modern philosophy, perhaps because of how baffling the problem is: opposing combatants tend not merely to disagree, but to find each other’s positions manifestly preposterous.

It did not help when Chalmers started to talk about zombies (Chalmers 1996). He stresses (quoted in Burkeman 2015):

Look, I’m not a zombie, and I pray that you’re not a zombie but the point is that evolution could have produced zombies instead of conscious creatures—and it didn’t!

The zombie scenario goes as follows (Burkeman 2015):

[I]magine that you have a doppelgänger. This person physically resembles you in every respect, and behaves identically to you; he or she holds conversations, eats and sleeps, looks happy or anxious precisely as you do. The sole difference is that the doppelgänger has no consciousness; this—as opposed to a groaning, blood-spattered walking corpse from a movie—is what philosophers mean by a “zombie”.

This idea is reminiscent of solipsism, the radically skeptical stance that only one’s own mind is known to exist. Not everyone was intrigued (Burkeman 2015):

The withering tone of the philosopher Massimo Pigliucci sums up the thousands of words that have been written attacking the zombie notion: “Let’s relegate zombies to B-movies and try to be a little more serious about our philosophy, shall we?” Yes, it may be true that most of us, in our daily lives, think of consciousness as something over and above our physical being—as if your mind were “a chauffeur inside your own body”, to quote the spiritual author Alan Watts. But to accept this as a scientific principle would mean rewriting the laws of physics. Everything we know about the universe tells us that reality consists only of physical things: atoms and their component particles, busily colliding and combining. Above all, critics point out, if this non-physical mental stuff did exist, how could it cause physical things to happen—as when the feeling of pain causes me to jerk my fingers away from the saucepan’s edge?

In his 1994 talk, Chalmers rocked the boat of the philosophy of mind by introducing the "hard problem of consciousness" (Chalmers 1995). The "easy problem" of consciousness relates to explaining the brain's dynamics in terms of its functional or computational organization. The hard problem can be related to an observation by the mathematician and philosopher Alfred N. Whitehead (Whitehead 1953, p. 68):

But the mind in apprehending also experiences sensations which, properly speaking, are qualities of the mind alone.

In essence, the hard problem of consciousness is the challenge of explaining how and why we have phenomenal experiences (qualia). How do 1,400 g of organic matter, organized as a neural network and constrained by the laws of nature, give rise to first-person conscious experiences? How do sensations acquire their specific characteristics such as colors and tastes? More specifically, how do neurophysiological and biochemical processes translate into phenomenal and subjective perception? This issue had already baffled the biologist Thomas Huxley in 1866 (quoted in McGinn 2004, p. 56):

How it is that anything so remarkable as a state of consciousness comes about as a result of irritating nervous tissue, is just as unaccountable as the appearance of the Djinn when Aladdin rubbed his lamp in the story.

The cognitive scientist Donald D. Hoffman reiterates (Hoffman 2015):

Now, Huxley knew that brain activity and conscious experiences are correlated, but he didn’t know why. To the science of his day, it was a mystery. In the years since Huxley, science has learned a lot about brain activity, but the relationship between brain activity and conscious experiences is still a mystery.

Essentially, the hard problem of consciousness is a reiteration of the mind-body problem. This dualism—the schism between the physical body and an ethereal mind—goes back to the philosopher, mathematician, and scientist René Descartes (Descartes 1641). Cartesian dualism contrasts with monism, the notion that there exists only one fundamental essence. In greater detail, in the words of the psychologist Tania Lombrozo (quoted in Brockman 2015, p. 271ff.):

In the beginning, there was dualism. Descartes famously posited two kinds of substance, non-physical mind and material body. Leibniz differentiated mental and physical realms. But dualism faced a challenge—explaining how mind and body interact.

We now know, of course, that mind and brain are intimately connected. Injuries to the brain can alter perceptual experience, cognitive abilities, and personality. Changes in brain chemistry can do the same. [...]

In fact, it appears the mind is just the brain. [...]

Or maybe not.

In our enthusiasm to find a scientifically acceptable alternative to dualism, some of us have gone too far the other way, adopting a stark reductionism. Understanding the mind is not just a matter of understanding the brain. But then, what is it a matter of? Many alternatives to the mind=brain equation seem counterintuitive or spooky. Some suggest that the mind extends beyond the brain to encompass the whole body, or even parts of the environment, or that the mind is not subject to the laws of physics.

[...]

Rejecting the mind in an effort to achieve scientific legitimacy—a trend we’ve seen with both behaviorism and some popular manifestations of neuroscience —is unnecessary and unresponsive to the aims of scientific psychology. Understanding the mind isn’t the same as understanding the brain. Fortunately, though, we can achieve such understanding without abandoning scientific rigor.

Naturally, not everyone believes that the hard problem of consciousness even exists. The famous philosopher Daniel Dennett, for instance, sees consciousness as a "bag of tricks." Like any magic performance, we are enthralled by it only as long as we do not know how it was done. Once we know the trick, the illusion disappears and we are disappointed. By simply studying the brain, according to Dennett, we will soon be able to uncover all of its magic. A central claim, made in his book Consciousness Explained, is that qualia do not—indeed, cannot—exist (Dennett 1991). In effect, the hard problem of consciousness vanishes, and with it all the zombies. In greater detail (Burkeman 2015):

Not everybody agrees there is a Hard Problem to begin with—making the whole debate kickstarted by Chalmers an exercise in pointlessness. Daniel Dennett, the high-profile atheist and professor at Tufts University outside Boston, argues that consciousness, as we think of it, is an illusion: there just isn't anything in addition to the spongy stuff of the brain, and that spongy stuff doesn't actually give rise to something called consciousness. Common sense may tell us there's a subjective world of inner experience—but then common sense told us that the sun orbits the Earth, and that the world was flat. Consciousness, according to Dennett's theory, is like a conjuring trick: the normal functioning of the brain just makes it look as if there is something non-physical going on. To look for a real, substantive thing called consciousness, Dennett argues, is as silly as insisting that characters in novels, such as Sherlock Holmes or Harry Potter, must be made up of a peculiar substance named "fictoplasm"; the idea is absurd and unnecessary, since the characters do not exist to begin with. This is the point at which the debate tends to collapse into incredulous laughter and head-shaking: neither camp can quite believe what the other is saying. To Dennett's opponents, he is simply denying the existence of something everyone knows for certain: their inner experience of sights, smells, emotions and the rest. (Chalmers has speculated, largely in jest, that Dennett himself might be a zombie.) It's like asserting that cancer doesn't exist, then claiming you've cured cancer; more than one critic of Dennett's most famous book, Consciousness Explained, has joked that its title ought to be Consciousness Explained Away. Dennett's reply is characteristically breezy: explaining things away, he insists, is exactly what scientists do. When physicists first concluded that the only difference between gold and silver was the number of subatomic particles in their atoms, he writes, people could have felt cheated, complaining that their special "goldness" and "silveriness" had been explained away. But everybody now accepts that goldness and silveriness are really just differences in atoms. However hard it feels to accept, we should concede that consciousness is just the physical brain, doing what brains do.

Chalmers replies (Chalmers 2014):

My friend Dan Dennett, who’s here today, has one [radical idea about consciousness]. His crazy idea is that there is no hard problem of consciousness. The whole idea of the inner subjective movie involves a kind of illusion or confusion. Actually, all we’ve got to do is explain the objective functions, the behaviors of the brain, and then we’ve explained everything that needs to be explained. Well I say, more power to him. That’s the kind of radical idea that we need to explore if you want to have a purely reductionist brain-based theory of consciousness. At the same time, for me and for many other people, that view is a bit too close to simply denying the datum of consciousness to be satisfactory. So I go in a different direction. In the time remaining, I want to explore two crazy ideas that I think may have some promise.

More on Chalmers’ crazy ideas will follow in Chap. 14. Blackmore also rejects dualism and thus the validity of the hard problem of consciousness. Specifically, she questions the idea of neural correlates of consciousness. In detail, Blackmore argues (quoted in Brockman 2015, p. 141ff.):

Consciousness is a hot topic in neuroscience and some of the brightest researchers are hunting for the neural correlates of consciousness (NCCs)—but they will never find them. The implicit theory of consciousness underlying this quest is misguided and needs to be retired.

The idea of the NCCs is simple enough and intuitively tempting. If we believe in the "hard problem of consciousness"—the mystery of how subjective experience arises from (or is created by or generated by) objective events in a brain—then it's easy to imagine that there must be a special place in the brain where this happens. Or if there is no special place then some kind of "consciousness neuron", or process or pattern or series of connections. [...]

The trouble is it depends on a dualist—and ultimately unworkable—theory of consciousness. The underlying intuition is that consciousness is an added extra—something additional to and different from the physical processes on which it depends. [...]

Dualist thinking comes so naturally to us. We feel as though our conscious experiences are of a different order from the physical world. But this is the same intuition that leads to the hard problem seeming hard. It is the same intuition that produces the philosopher's zombie—a creature that is identical to me in every way except that it has no consciousness. [...]

Consciousness is not some weird and wonderful product of some brain processes but not others. Rather, it is an illusion constructed by a clever brain and body in a complex social world. We can speak, think, refer to ourselves as agents and so build up the false idea of a persisting self that has consciousness and free will.

Very Short Introductions is a book series published by Oxford University Press. The books offer, as the name suggests, very concise introductions to various topics, ranging from happiness (Haybron 2013) to reality (Westerhoff 2011). There are currently 622 titles. Blackmore wrote the one on consciousness (Blackmore 2005). There we can read:

It seems we have some tough choices in thinking about our own precious self. We can hang on to the way it feels and assume that a persisting self or soul or spirit exists, even though it cannot be found and leads to philosophical troubles. We can equate it with some kind of brain process and shelve the problem of why this brain process should have conscious experiences at all, or we can reject any persisting entity that corresponds to our feeling of being a self. I think that intellectually we have to take this last path. The trouble is that it is very hard to accept in one’s own personal life. It means taking a radically different view of every experience. It means accepting that there is no one who is having these experiences. [P. 81]

[...]

There are two really fundamental assumptions that almost everyone makes. The first is that experiences happen to someone; that there cannot be experiences without an experiencer. [...] This has to be thrown out. The second assumption is that experiences flow through the conscious mind as a stream of ideas, feelings, images, and perceptions. The stream may break, change direction, or be disrupted, but it remains a series of conscious events in the theater of the mind. [...] This has to be thrown out. [...] This is how the grand delusion of consciousness comes about. We humans are clever, speaking, thinking creatures who can ask ourselves the question “Am I conscious now?”. Then, because we always get the answer “yes”, we leap to the erroneous conclusion that we are always conscious. The rest follows from there. [P. 128f.]

To some, these assertions may sound deeply troubling and existentially threatening. This reaction, however, could be a result of specific socio-cultural and religious programming. Again, Blackmore (2005, p. 68):

Among religions, Buddhism alone rejects the idea of self. [...] [The historical Buddha] taught that human suffering is caused by ignorance and in particular by clinging to a false notion of self; the way out of suffering is to drop all the desires and attachments that keep recreating the self. Central to his teaching, therefore, is the idea of no-self. This is not to say that the self does not exist, but that it is illusory—or not what it seems.

For more on the notion of suffering and happiness in Buddhism, see Sect. 7.4.2.1. For the exceptional display of mental prowess of Buddhist meditators, see Sect. 9.3.5.

The rejection of dualism by scientists and philosophers is mainly motivated by a fundamental assumption regarding the nature of reality: namely, the conviction that the reductionist materialist paradigm is the best, and perhaps only, template for decoding reality. This, of course, is a very sensible approach. However, one should not forget that reductionism is a tool—perhaps even a philosophy—for dealing with the nature of reality and is not a theory in itself. Moreover, materialism is based on the intuitions a human mind acquires through its perception of reality. In the context of this book, we know that reductionism, while spectacularly successful (Sect. 5.1) in understanding the fundamental processes of nature (Chaps. 3 and 4), fails as an explanatory matrix (Sect. 5.2) for complex phenomena (Chap. 6). Furthermore, the notion of materialism appears highly problematic in the framework of our best theory of the microscopic world (Sects. 10.3.2 and 10.4). Indeed, the very notion of scientific inquiry is plagued by a multitude of issues (Chap. 9). Recall from the end of the last chapter (Westerhoff 2011):

If we follow scientific reduction all the way down, we end up with stuff that certainly does not look like tiny pebbles or billiard balls, not even like strings vibrating in a multidimensional space, but more like what pure mathematics deals with. [P. 51]

The moral to draw from the reductionist scenario [...] seems to be that either what is fundamental is not material, or that nothing at all is fundamental. [P. 54]

In the end, one sides with Chalmers or Dennett based on deeply held assumptions and beliefs about the nature of reality and the self. Of course, both sides will insist that they are justified in their assessments by a rational, intellectual, and inevitable decision-making process—but this too is just an illusion. The conclusion is as obvious as it is disheartening: our minds have cultivated intuitions and constructed narratives about the universe and ourselves which are simply false. However, which ones have to be exchanged, and what should fill the void, is fuel for potentially endless debates. Some personal insights on the mind, collected from various thinkers, can be found in Brockman (2011) and Marcus and Freeman (2015).

2 Modern Neuroscience

In the preceding section, the mind used itself as a tool to try to understand itself. In other words, the mind contemplated its own existence—the philosophy of mind was born. However, what happens if we expose the human mind to the contemporary tools of science? In the preface of the textbook Foundational Concepts in Neuroscience we can read (Presti 2016, p. xiii):

Neuroscience —the science of brain and behavior—is one of the most exciting fields in the landscape of contemporary science. Its rapid growth over the last several decades has spawned many discoveries and a large number of popular books. Contemporary news is filled with stories about the brain, brain chemistry, and behavior. Photos and drawings of brains and nerve cells grace the pages of newspapers and magazines.

The recent explosion we are witnessing in the field of neuroscience is driven by technology. Again, the human mind is ingenious enough to decode reality in a way that allows for the engineering of technology (the genesis of this knowledge generation process is described in Chap. 2). Specifically, the mind's newfound ability to observe its own physiological basis—its rooting in reality—has shone a bright light on this previously obscure topic. Indeed, the clinical technology uncovering the brain's activity has come a long way: from electroencephalography (EEG), computed tomography (CT), positron-emission tomography (PET), and functional magnetic resonance imaging (fMRI) to magnetoencephalography (MEG). Perhaps the most stunning and beautiful image of the human brain is the connectome, a map of neural connections in the brain. It uncovers the brain's wiring diagram (Sporns et al. 2005). The casual reader might gloss over the fact that some of these diagnostic tools utilize highly evolved physics: PET detects antimatter (see Fig. 4.1) and MEG employs superconducting quantum interference devices. Some researchers have used artificial intelligence—specifically, machine learning algorithms—to reconstruct the original picture, given an fMRI scan of a person looking at that picture (Shen et al. 2017). Other examples of how modern technology can read our minds can be found in articles published in the New Scientist, a weekly magazine covering science and technology. These range from recording the inner monologues in our minds (Thomson 2014; Brown 2017) to brain implants allowing paralyzed people to type with their thoughts (Hamzelou 2017). Neuroscientists can now follow a thought as it moves through the brain (Haller et al. 2018).
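To give a flavor of how such decoding studies work, here is a toy sketch on synthetic data (my illustration; the actual pipeline of Shen et al. (2017) uses deep image features and far more elaborate models). The core idea, fitting a regularized linear map from voxel activity to stimulus features, takes only a few lines:

    # Toy decoder: recover stimulus features from simulated fMRI voxel data.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(42)
    n_trials, n_voxels, n_features = 200, 1000, 10

    features = rng.normal(size=(n_trials, n_features))   # stimulus descriptors
    encoding = rng.normal(size=(n_features, n_voxels))   # unknown brain response map
    voxels = features @ encoding + rng.normal(scale=5.0, size=(n_trials, n_voxels))

    # Fit on 150 trials, then evaluate the decoding on the 50 held-out trials.
    decoder = Ridge(alpha=10.0).fit(voxels[:150], features[:150])
    print("held-out R^2:", decoder.score(voxels[150:], features[150:]))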

Exposing the intimate experiences within our brains has mind-boggling potential. But there are also reasons to be cautious. There exists a lot of "neuro-bunk" (Crockett 2012)—myths about the brain. From the role of mirror neurons to the importance of oxytocin, many spectacular claims have been debunked. For instance (Crockett 2012):

So speaking of love and the brain, there’s a researcher, known to some as Dr. Love, who claims that scientists have found the glue that holds society together, the source of love and prosperity. [...] [I]t’s a hormone called oxytocin. You’ve probably heard of it. So, Dr. Love bases his argument on studies showing that when you boost people’s oxytocin, this increases their trust, empathy and cooperation. So he’s calling oxytocin “the moral molecule.”

Now these studies are scientifically valid, and they’ve been replicated, but they’re not the whole story. Other studies have shown that boosting oxytocin increases envy. It increases gloating. Oxytocin can bias people to favor their own group at the expense of other groups. And in some cases, oxytocin can even decrease cooperation.

Furthermore (Jarrett 2013):

Neuroscientist V.S. Ramachandran says these cells [mirror neurons] shaped our civilisation; in fact he says they underlie what it is to be human—being responsible for our powers of empathy, language and the emergence of human culture, including the widespread use of tools and fire. When mirror neurons don’t work properly, Ramachandran believes the result is autism.

For the record, a detailed investigation earlier this year found little evidence to support his theory about autism. Other experts have debunked Ramachandran’s claims linking mirror neurons to the birth of human culture.

The referenced study is Kilner and Lemon (2013). Also, recall the bug in the fMRI software potentially affecting 15 years of research (Sect. 9.2.2).

2.1 Perceiving the Outer World

Perhaps the most astonishing story neuroscientists tell us is about perception—the very way we come to know an external world. Yet again, a very basic intuition we hold is deeply flawed: the notion that our eyes translate electromagnetic radiation into electrical impulses from which the brain reconstructs a faithful image of the world. In the words of the neuroscientist David Eagleman (Eagleman 2011):

[O]ur brains sample just a small bit of the surrounding physical world. [P. 77]

Instead of reality being passively recorded by the brain, it is actively constructed by it. [P. 82]

You’re not perceiving what’s out there. You’re perceiving whatever your brain tells you. [P. 33]

Indeed (Eagleman 2016, p. 73):

Despite the feeling that we’re directly experiencing the world out there, our reality is ultimately built in the dark, in a foreign language of electrochemical signals. The activity churning across vast neural networks gets turned into your story of this, your private experience of the world: the feeling of this book in your hands, the light in the room, the smell of roses, the sound of others speaking.

If our perception of reality is not a faithful image of the outer reality, what is it then? Research suggests (Seth 2017):

So perception—figuring out what’s there—has to be a process of informed guesswork in which the brain combines these sensory signals with its prior expectations or beliefs about the way the world is to form its best guess of what caused those signals. The brain doesn’t hear sound or see light. What we perceive is its best guess of what’s out there in the world.

[...]

Instead of perception depending largely on signals coming into the brain from the outside world, it depends as much, if not more, on perceptual predictions flowing in the opposite direction. We don’t just passively perceive the world, we actively generate it. The world we experience comes as much, if not more, from the inside out as from the outside in.

[...]

If hallucination is a kind of uncontrolled perception, then perception right here and right now is also a kind of hallucination, but a controlled hallucination in which the brain’s predictions are being reined in by sensory information from the world. In fact, we’re all hallucinating all the time, including right now. It’s just that when we agree about our hallucinations, we call that reality.

Eagleman also agrees (Eagleman 2011, p. 46):

[W]hat we call normal perception does not really differ from hallucinations, except that the latter are not anchored by external input.

In other words, the perception of the external world is a virtual reality simulation (Hoffman 2000).

This existentially challenging claim can perhaps be best understood in the context of visual illusions (Lotto 2012):

So, tomorrow morning when you open your eyes and look “out into” the world, don’t be fooled. You’re in fact looking in. You’re not seeing the world covered in a blue blanket at all; you’re seeing a world... an internal map of value-relations derived from interactions within a particular, narrow context.

And yet color is the simplest sensation the brain has. What may surprise you is that even at this most basic level we never see the light that falls onto our eyes or even the real-world source of that light. Rather, neuroscience research tells us that we only ever see what proved useful to see in the past. Illusions are a simple but powerful example of this point.

There exist many examples of such illusions (Shepard 1990). The crux of the issue is that the illusion can never be deconstructed by the intellect: any knowledge or understanding of the illusion remains virtually powerless to diminish its magnitude. Moreover, we know that our visual perception suffers from some severe restrictions. For instance, there exists a significant blind spot in our visual field where the optic nerve passes through the optic disc of the retina (Ramachandran and Gregory 1991). This lack of photoreceptor cells cuts a hole into the visual field. Indeed, the area of sharp central vision is restricted to the fovea, a structure in the inner retinal surface. As a result, our visual field has only a small high-resolution part, approximately the size of a thumbnail held at arm's length (Fairchild 2005); a back-of-the-envelope check of this claim follows the quotation below. Our peripheral vision, in turn, is very limited: low-resolution and mostly monochromatic (Anderson et al. 1991). Moreover, our eyes are constantly in motion, erratically scanning the visual field (Deubel and Schneider 1996):

When we inspect a visual scene, periods of fixation are interrupted by fast ballistic movements of the eyes, the saccades. By means of these goal-directed eye movements, the fovea is brought to “interesting spots” of the scene. For instance, a common observation is that when a subject views the picture of a person, the nose and mouth are fixated more often and first in sequence compared to other objects of the picture, such as spots on the cheek.
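The promised back-of-the-envelope check requires nothing but elementary trigonometry; the sketch below assumes a thumbnail width of 2 cm and an arm's length of 60 cm (typical values, chosen here for illustration). The resulting visual angle of roughly two degrees matches the angular extent usually attributed to the fovea:

    # Visual angle subtended by a thumbnail held at arm's length.
    import math

    thumbnail_width_cm = 2.0   # assumed typical width
    arm_length_cm = 60.0       # assumed typical viewing distance

    angle = 2 * math.atan(thumbnail_width_cm / (2 * arm_length_cm))
    print(f"visual angle: {math.degrees(angle):.1f} degrees")   # ~1.9 degrees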

My personal subjective experience of vision shows no trace of these limitations. I perceive a continuous, steady, high-resolution, chromatic picture of the world, covering my entire field of vision. The visual hallucination of the world that I continually experience is very immediate and compelling. This constant stream of perception is, however, also temporally restricted (Herzog et al. 2016):

We experience the world as a seamless stream of percepts. However, intriguing illusions and recent experiments suggest that the world is not continuously translated into conscious perception. Instead, perception seems to operate in a discrete manner, just like movies appear continuous although they consist of discrete images.

Finally, the entire perception of time itself is a mental construct akin to an illusion and very malleable (Hodinott-Hill et al. 2002; Eagleman 2008).

If our trusted visual perception of the world is indeed an inaccurate representation of it, we should expect ramifications. The McGurk effect is a fascinating example of how visual information can affect auditory perception (McGurk and MacDonald 1976): watching a person's lips silently form one sound while hearing another alters which sound is perceived. Change blindness is another striking example of perceptive deficiency. It occurs when a change in a visual stimulus goes undetected by the observer. It turns out that humans are very bad at noticing even major differences introduced into an image while it flickers off and on again (a minimal mock-up of this paradigm follows the quotation below). In other words, we can be oblivious to changes happening right in front of our very own eyes. Indeed (Simons and Levin 1997):

Given failures of change detection, we must question the assumption that we have a detailed representation of our visual world.
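The flicker paradigm referred to above is simple enough to sketch (a minimal mock-up with synthetic images, not the stimuli of the original studies): two versions of a scene alternate, separated by a brief blank mask, and the mask is what prevents the change from simply popping out:

    # Minimal flicker-paradigm mock-up: two scenes differing in one patch
    # alternate, with a blank frame masking the transient.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    scene_a = rng.random((100, 100))
    scene_b = scene_a.copy()
    scene_b[40:50, 60:70] += 0.5           # the single change to be spotted
    frames = [scene_a, np.zeros((100, 100)), scene_b, np.zeros((100, 100))]

    fig, ax = plt.subplots()
    image = ax.imshow(scene_a, cmap="gray", vmin=0, vmax=1.5)
    for i in range(40):                    # flicker for about ten seconds
        image.set_data(frames[i % 4])
        plt.pause(0.25)                    # roughly 250 ms per frame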

This effect can be dramatically increased if the observer’s attention is captured, a phenomenon known as selective attention. The results are disheartening. The human mind is astonishingly restricted when perceiving reality. In detail (Chabris and Simons 2009, p. xi, 5f.):

About twelve years ago, we conducted a simple experiment with the students in a psychology course we were teaching at Harvard University. To our surprise, it has become one of the best-known experiments in psychology. It appears in textbooks and is taught in introductory psychology courses throughout the world.

[...]

With our students as actors and a temporarily vacant floor of the psychology building as a set, we made a short film of two teams of people moving around and passing basketballs. One team wore white shirts and the other wore black. [...]

They [the students] asked volunteers to silently count the number of passes made by the players wearing white while ignoring any passes by the players wearing black. The video lasted less than a minute. [...]

Immediately after the video ended, our students asked the subjects to report how many passes they’d counted. [...] The pass-counting task was intended to keep people engaged in doing something that demanded attention to the action on the screen, but we weren’t really interested in pass-counting ability. We were actually testing something else: Halfway through the video, a female student wearing a full-body gorilla suit walked into the scene, stopped in the middle of the players, faced the camera, thumped her chest, and then walked off, spending about nine seconds onscreen. After asking the subjects about the passes, we asked the more important questions:

Q: Did you notice anything unusual while you were doing the counting task?

A: No.

[...]

Q: Did you notice anyone other than the players?

A: No.

Q: Did you notice a gorilla?

A: A what?!?

Amazingly, roughly half of the subjects in our study did not notice the gorilla! Since then the experiment has been repeated many times, under different conditions, with diverse audiences, and in multiple countries, but the results are always the same: About half the people fail to see the gorilla.

Our mind’s capacity to perceive an inaccurate representation of reality is not restricted to vision. The expectation and context of any experience can alter how this experience is perceived. One experiment in particular will make oenologists despair (Plassmann et al. 2008):

We propose that marketing actions, such as changes in the price of a product, can affect neural representations of experienced pleasantness. We tested this hypothesis by scanning human subjects using functional MRI while they tasted wines that, contrary to reality, they believed to be different and sold at different prices. Our results show that increasing the price of a wine increases subjective reports of flavor pleasantness as well as blood-oxygen -level-dependent activity in medial orbitofrontal cortex, an area that is widely thought to encode for experienced pleasantness during experiential tasks.

Believing that a wine is expensive results in it tasting better, irrespective of the actual quality. This is not imagined: the brain constructs an actual experience of reality based on the priming. The experiment has recently been replicated (Schmidt et al. 2017):

Informational cues such as the price of a wine can trigger expectations about its taste quality and thereby modulate the sensory experience on a reported and neural level.

Expectations related to experiences change how they are perceived across a variety of sensory domains, including pain (Schmidt et al. 2017), vision (Summerfield and De Lange 2014), smell (De Araujo et al. 2005), and hearing (Kirk et al. 2009). While it seems astounding that expectations shape the way we taste, see, hear, and smell, the finding that pain is also under the influence of our mind’s expectations appears preposterous. Indeed, if you believe that the pain you feel is the result of an intentional act, it will hurt more than if you thought it was administered unintentionally (Gray and Wegner 2008).

Where does this leave us? Our closest intuitions about the world fail. Everything we thought was "out there" is mostly in the mind. What we call our sober state of consciousness is in fact an elaborate hallucination—an internal simulation—guided by some external stimuli. What we perceive is a creation of our minds. However, we should still be confident that the objects in our minds correspond to objects "out there." If evolution has tailored the human mind to construct an internal simulation of an external reality, we should expect it to be a fairly accurate one. Even if the true nature of reality is forever beyond our reach, our senses should still give us an approximation.

We know that we only perceive a tiny fraction of the physical universe. We see just a tiny slice of the entire electromagnetic spectrum. The same is true for the frequency range of human hearing. We are oblivious to much of the richness of the cosmos, as we cannot feel gravity. We are blind and deaf to the forces and activity of the seething atomic realm. “Tens of billions of neutrinos from the sun traverse each square centimeter of the Earth every second” without us noticing (Fukuda et al. 1998). Although we are blind to many aspects of reality, surely evolution ensured that we at least glimpse the ones important for survival and gene reproduction. Well (Gefter 2016):

Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.

In a nutshell, Hoffman argues (quoted in Nave 2016):

Evolution isn’t about truth, it’s about making kids. Every bit of information that you process costs calories, meaning that’s more food you need to kill and eat. So an organism that sees all of reality would never be more fit than one tuned only to see what it needs to survive.

The following example is offered (quoted in Nave 2016):

[The] Australian jewel beetle's reproductive strategy proceeded very effectively. Then, Homo sapiens—and its habit of dumping used beer bottles—entered the picture. Unable to distinguish between these brown glass containers and the shell of a potential mate, the male beetles began attempting to copulate with discarded vessels.

The beetles nearly went extinct. Hoffman explains (Hoffman 2015):

The Australian jewel beetle is dimpled, glossy and brown. [...] Now, as it happens, these bottles are dimpled, glossy, and just the right shade of brown to tickle the fancy of these beetles. [...] Now, the males had successfully found females for thousands, perhaps millions of years. It looked like they saw reality as it is, but apparently not. Evolution had given them a hack. A female is anything dimpled, glossy and brown, the bigger the better.

To test the hypothesis that natural selection does not, in fact, favor veridical perception, Hoffman ran computer simulations (Mark et al. 2010); a stripped-down caricature of the logic follows the quotations below. He reports (Hoffman 2015):

So, in my lab, we have run hundreds of thousands of evolutionary game simulations with lots of different randomly chosen worlds and organisms that compete for resources in those worlds. Some of the organisms see all of the reality, others see just part of the reality, and some see none of the reality, only fitness. Who wins?

Well, I hate to break it to you, but perception of reality goes extinct. In almost every simulation, organisms that see none of reality but are just tuned to fitness drive to extinction all the organisms that perceive reality as it is. So the bottom line is, evolution does not favor veridical, or accurate perceptions. Those perceptions of reality go extinct.
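The promised caricature follows (a toy rendering of the "fitness beats truth" logic, with all numbers invented for illustration; Hoffman's actual simulations rest on evolutionary game theory and are far richer). Here, fitness is a non-monotonic function of resource quantity; a "truth" strategy perceives the quantity and pays an information cost for the extra resolution, while a cheap "fitness-only" strategy perceives nothing but a payoff tag:

    # Caricature of "fitness beats truth": two strategies choose between
    # two offered resources whose payoff peaks at an intermediate quantity.
    import numpy as np

    rng = np.random.default_rng(1)

    def payoff(q):
        # Non-monotonic fitness: too little or too much is bad.
        return np.exp(-((q - 50.0) ** 2) / (2 * 15.0 ** 2))

    TRUTH_COST = 0.05                     # assumed cost of veridical perception
    trials = 100_000
    q = rng.uniform(0, 100, size=(trials, 2))
    p = payoff(q)

    # "Truth" strategy: sees quantities and grabs the larger one, on the
    # realist hunch that more is better -- then pays its perception cost.
    truth = p[np.arange(trials), np.argmax(q, axis=1)] - TRUTH_COST
    # "Fitness-only" strategy: sees nothing but the payoff tag and takes it.
    fitness_only = p.max(axis=1)

    print("truth strategy mean payoff:       ", round(truth.mean(), 3))
    print("fitness-only strategy mean payoff:", round(fitness_only.mean(), 3))

In this toy world the truth strategy loses on both counts: its realist heuristic misfires whenever more is not better, and it pays for resolution it cannot exploit.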

This work has been further elaborated (Hoffman and Singh 2012). If the human mind does not see reality per se—the noumenon—what does it then perceive? Hoffman offers the interface theory of perception (Hoffman et al. 2015). In his words (Hoffman 2015):

Well, fortunately, we have a very helpful metaphor: the desktop interface on your computer. Consider that blue icon for a TED Talk that you’re writing. Now, the icon is blue and rectangular and in the lower right corner of the desktop. Does that mean that the text file itself in the computer is blue, rectangular, and in the lower right-hand corner of the computer? Of course not. Anyone who thought that misinterprets the purpose of the interface. It’s not there to show you the reality of the computer. In fact, it’s there to hide that reality. You don’t want to know about the diodes and resistors and all the megabytes of software. If you had to deal with that, you could never write your text file or edit your photo. So the idea is that evolution has given us an interface that hides reality and guides adaptive behavior. Space and time, as you perceive them right now, are your desktop. Physical objects are simply icons in that desktop.

In other words, we should take the symbols we perceive—for instance, snakes and trains—seriously, but not literally. In effect, what Hoffman is arguing is that (quoted in Brockman 2006, p. 91):

Consciousness is all that exists. Space-time and matter never were fundamental denizens of the universe but have always been among the humbler contents of consciousness.

This sounds like a modern version of the intuitions Immanuel Kant cultivated in his radical Critique of Pure Reason (Kant 1781). There we can read in the Transcendental Aesthetic (translated by Meiklejohn 2003):

From this investigation it will be found that there are two pure forms of sensuous intuition, as principles of knowledge a priori, namely, space and time. [§1]

Space is nothing else than the form of all phenomena of the external sense, that is, the subjective condition of the sensibility, under which alone external intuition is possible. [§4]

Time is nothing else than the form of the internal sense, that is, of the intuitions of self and of our internal state. [§7]

[...] if we take away the subject, or even only the subjective constitution of our senses in general, then not only the nature and relations of objects in space and time, but even space and time themselves disappear; and that these, as phenomena, cannot exist in themselves, but only in us. [§9]

Hoffman's ideas about consciousness and reality—criticized, for instance, in Cohen (2015)—will reappear in the context of Chap. 14.

2.2 Perceiving the Inner World

If our sober state of consciousness—tricking us into believing we are perceiving an external world—is a hallucination, then dreams represent the ultimate virtual reality. Thomas Metzinger is a theoretical philosopher and a philosopher of mind. His view on consciousness, which he calls the Ego Tunnel (Metzinger 2009), is that the self is an illusion: our conception of a self is akin to a tunnel-vision-like experience of reality. Indeed, Metzinger explains (quoted in Flanagan 2009):

The phenomenal Ego is not some mysterious thing or little man inside the head but the content of an inner image. [...] By placing the self-model within the world-model, a center is created. That center is what we experience as ourselves, the Ego.

In other words, our sense of self is a virtual simulation within a larger virtual reality comprising the external world.

During the night of May 6, 1986, Metzinger had a dream featuring an out-of-body experience (OBE). He reports (Metzinger 2009, p. 133f.):

When I became afraid of not being able to sustain the condition much longer, I flew back up, somehow returned to my physical body, and awoke with a mixture of great pride and joy. [...] I jumped out of bed and went over to my sister (who slept in the same room), woke her up, and told her, with great excitement, that I had just managed to do it again [dream of an OBE], that I had just been down in the garden, bouncing around on the lawn a minute ago. My sister looked at her alarm clock and said, “Man, it’s quarter to three! Why did you wake me up? Can’t this wait until breakfast? Turn out the light and leave me alone!” She turned over and went back to sleep. I was a bit disappointed at this lack of interest. [...]

At that moment, I woke up. I was not upstairs in my parents’ house in Frankfurt but in my basement room, in the house I shared with four friends about thirty-five kilometers away. It was not quarter to three at night; the sun was shining and I had obviously been taking a short afternoon nap. [...] I was unsure how real this situation was. I did not understand what had just happened to me. I didn’t dare move, because I was afraid I might wake up again, into yet another ultrarealistic environment.

Metzinger had not only experienced a lucid dream with an OBE, he had also witnessed the phenomenon of false awakening: a vivid dream about one's own awakening from sleep. To this day, Metzinger is still affected by that episode. He explains (Metzinger 2009, p. 134f.):

To wake up twice in a row is something that can shatter many of the theoretical intuitions you have about consciousness—for instance, that the vividness, the coherence, and the crispness of a conscious experience are evidence that you are really in touch with reality. Apparently, what we call “waking up” is something that can happen to you at any point in phenomenological time. This is a highly relevant empirical fact for philosophical epistemology. [...] False awakenings demonstrate that consciousness is never more than the appearance of a world. There is no certainty involved, not even about the state, the general category of conscious experience in which you find yourself. So, how do you know that you actually woke up this morning? Couldn’t it be that everything you have experienced was only a dream?

This is very unsettling. Not only are our perceptions of reality hallucinations, we can also never be sure about the nature of any experience we are having. In other words, synthetic experiences in their entirety—be they dreams, psychosis, or psychedelic trips—are indistinguishable from "real" experiences. In fact, experiencing a self and a world constitutes reality-independent consciousness.

But why do we dream? No one knows for sure. Monitoring rats' brains appears to show that they dream about running down a corridor to collect food that was inaccessible during the day—in other words, they dream of a better future (Ólafsdóttir et al. 2015). In humans, dreaming has been associated with forgetting unnecessary information, called reverse learning (Crick et al. 1983; Shepherd 1983), with keeping the brain functioning in the continual activation theory (Zhang 2005), with processing emotional events for mental health (Walker and van der Helm 2009), with overall problem-solving activities (Barrett 1993), and with memory capabilities (Wamsley et al. 2010; Wamsley and Stickgold 2011). Others have argued that dreaming is an epiphenomenon, meaning it has no function as such (Flanagan 2000). In any case, dreaming is a very creative activity. The discovery of the molecular structure of benzene by August Kekulé was inspired by a dream (Schultz 1890)—ushering in the science of organic chemistry. The neurologist Otto Loewi successfully reconstructed an experiment he had dreamt of, winning him a Nobel Prize (York III 2004). The unusual and extremely talented mathematician Srinivasa Ramanujan claimed that the goddess of Namakkal would visit him in his dreams and show him equations (Sect. 2.2). However, the often-cited anecdote that Niels Bohr discovered his Nobel Prize-winning model of the atom in a dream appears to be an urban legend (Runco and Pritzker 1999, p. 600). Today, a drug exists that can induce lucid dreaming (LaBerge et al. 2018).

Every day, the most remarkable thing happens to me. I wake up. Instantly, the memories of who I am enter my mind. I start to sense myself and the external world I woke up to. Then I open my eyes. If I choose to believe the neuroscientists and philosophers, I should be skeptical of the perceptions my brain forms about the external world. I should assign them a status similar to dreams. Can I, at least, rely on the faithfulness of my own memories? After all, without memory the continuous flow of subjective experiences threatens to disintegrate. Without memory, I exist only in the now, with no autobiographical trail extending into the past and no manual for interacting with the world and other conscious beings. When I am snowboarding, my brain relies on nondeclarative memories. In contrast, declarative memory gives me access to stored knowledge. Nearly everything depends on memory. As can be expected, understanding memory in the human brain is a daunting task. It is known that long-term memory relies on synaptic plasticity—the way the brain forms new synapses or varies the strength of existing ones (Martin et al. 2000); a schematic sketch of this mechanism follows the quotation below. In general (Foster 2009, p. 13f.):

As we have seen from the work of [Frederick] Bartlett [(Bartlett 1932)], memory is not a veridical copy of the world, unlike a DVD or video recording. It is perhaps more helpful to think of memory as an influence of the world on the individual. Indeed a constructivist approach describes memory as the combined influences of the world and the person’s own ideas and expectations. [...] So an event, as it occurs, is constructed by the person who experienced it. [...]

Later, when we come to try to remember that event [for instance, watching a movie], some parts of the film come readily to mind, whereas other parts we may re-construct—based on the parts that we remember and on what we know or believe must have happened. [...] In fact, we are so good at this sort of re-construction (or “filling in the gaps”) that we are often consciously unaware that it is happening. [...] This is an especially worrying consideration when we reflect on the degree to which people can feel that they are “remembering” crucial features of a witnessed murder or a personally experienced childhood assault, when—instead—they may be “reconstructing” these events and filling in missing information based on their general knowledge of the world.

Today, we know that eyewitness testimonies can be frighteningly inaccurate and the errors frequent—with far-reaching and dramatic consequences (Buonomano 2011). Two victims of false eyewitness testimony are campaigning together for changes in the judicial system (Thompson-Cannino et al. 2009). Jennifer Thompson-Cannino was raped as a young woman in 1984. She falsely identified Ronald Cotton as the perpetrator, who was then sentenced to life for a crime he did not commit. After eleven years, the real rapist was found and identified with, at the time, novel DNA fingerprinting technology. Cotton and Thompson-Cannino are now, together, vocal activists. The fallibility of human memory has motivated specialists to request a more scientific approach to trial evidence (Fraser 2012).

Even more worrying and troubling than misremembering the past are entirely false memories masquerading as ambassadors of the past. This can happen when memories are distorted through the incorporation of new information. In other words, the act of retrieving memories also has a component of storage. Using the metaphor of computer memory, reading information also results in writing. A benign version of this is called the misinformation effect (Loftus 2005). The very way people are questioned can affect their answers—slight cues or suggestions can be incorporated into a person’s memory and claimed as their own. This can lead to the disturbing phenomenon of false memories. Researchers have managed to implant false memories (for instance, remembered words) in experiments (Roediger and McDermott 1995). Indeed (Foster 2009, p. 78):

Less benignly, it is also possible to create—using suggestions and misleading information—memories for “events” that the individual believes very strongly happened in their past but which are, in fact, false.

For instance, researchers convinced people that, as children, they had been lost in a shopping mall (Loftus et al. 1996). Amazingly (Eagleman 2016, p. 26):

They [the participants in the experiment] may start to remember a little bit about it [the false memory of being lost in a mall]. But when they come back a week later, they’re starting to remember more. [...] Over time, more and more detail crept into the false memory.

The fabrication of memories does not only happen under experimental conditions. In the 1980s and early 1990s therapists and counselors started to uncover “repressed” memories in patients—with dramatic consequences. Luckily, today we know about false memories. One victim reported (Buonomano 2011, p. 57):

At the end of 2 1/2 years of therapy, I had come to fully believe that I had been impregnated by my father twice. I remember that he had performed a coat hanger abortion on me with the first pregnancy and that I performed the second coat hanger abortion myself.

Gynecological examinations spoke of an entirely different past. False memories lie on one side of the memory spectrum. In contrast, people who suffer from the neurological disorder called hyperthymesia remember an excessive number of their experiences. Often, a person with hyperthymesia can remember any day of their life, going back to childhood, in great detail. However (Rodriguez McRobbie 2017):

Most have called it [hyperthymesia] a gift but I call it a burden. I run my entire life through my head every day and it drives me crazy.

Other researchers have been able to manipulate memories in mice—literally. Using optogenetic techniques,Footnote 14 the scientists could trigger a fear response in mice by activating a previously formed memory of fear (Liu et al. 2012). Then, three years later, researchers implanted false memories into the minds of mice during sleep (De Lavilléon et al. 2015). Not only is our mind susceptible to fictitious content, it can also, in theory, be edited.

If our dreams and memories are constructs of the brain, what about our sense of identity, our “self?” Indeed, in the words of the clinical neuropsychologist Paul Broks, quoted by the philosopher Julian Baggini (Baggini 2011):

We have a deep intuition that there is a core, an essence there, and it’s hard to shake off, probably impossible to shake off, I suspect. But it’s true that neuroscience shows that there is no centre in the brain where things do all come together.

The question is whether we are our self or whether we have a self (Baggini 2011):

But if you think of yourself as being, in a way, not a thing as such, but a kind of a process, something that is changing, then I think that’s quite liberating.

Akin to a waterfall, which persists as a collective entity while at every instant its actual structure and composition change, our self emerges out of the stream of consciousness. We have already heard prominent scholars argue that the self is an illusion, for instance Blackmore, Dennett, and Metzinger. The self, or the conscious mind, is not the only phenomenon that arises in a brain. There is a lot of activity going on in the subconscious depths, below the threshold of awareness. Indeed (Eagleman 2011, p. 9):

The conscious mind is not at the center of the action in the brain; instead, it is far out on a distant edge, hearing but whispers of the activity.

Eagleman calls the constant activity in the subconscious parts of the brain—steering much of the conscious decision making—alien subroutines (Eagleman 2011, p. 133):

Not only do we run alien subroutines; we also justify them. We have ways of retrospectively telling stories about our actions as though the actions were always our idea. [...] We are constantly fabricating and telling stories about the alien processes running under the hood.

In effect, the conscious self is relegated to the role of a grand narrator, without much actual power or control. Your conscious mind feels like the center of control, but, in fact, it is just retelling the stories of control it is hearing about (Eagleman 2016, p. 73):

Your brain serves up a narrative—and each of us believes whatever narrative it tells. Whether you’re falling for a visual illusion, or believing the dream you happen to be trapped in, or experiencing letters in color [i.e., synesthesia], or accepting a delusion as true during an episode of schizophrenia, we each accept our realities however our brains script them.

Moreover, much of our mental makeup is contingent (Eagleman 2016, p. 76):

It’s not simply that you are attracted to humans over frogs or that you like apples more than fecal matter—these same principles of [evolutionary] hardwired thought guidance apply to all your deeply held beliefs about logic, economics, ethics, emotions, beauty, social interactions, love, and the rest of your mental landscape.

Complex social behaviors, like trust and fairness, depend on which molecules are present in the brain at a given time (Kosfeld et al. 2005; Crockett et al. 2008). Perhaps a task as mundane as eating a sandwich can increase your level of empathy (Danziger et al. 2011). Indeed (Eagleman 2016, p. 206):

The exact levels of dozens of other neurotransmitters—for example, serotonin—are critical for who you believe yourself to be.

For the effects of dopamine on decision making, see Sharot et al. (2009).

Finally, after all these intuitional certainties about our outer and inner perception have been deconstructed and exposed as contingent, malleable, and ambiguous, what is the status of our own body? How much of it is under our control and how much of it do we sense? In 1998, a now-classic experiment was performed. In the rubber-hand illusion, researchers achieved a feat in which healthy subjects experienced an artificial limb as part of their own body (Botvinick and Cohen 1998). In other words, the mind incorporated an artificial object into its self-model and discarded a real part of its own body. Body perception, too, is a mental construct which can easily be deconstructed. Indeed (Metzinger 2009, p. 76f.):

In a similar experiment, [(Armel and Ramachandran 2003)] if one of the rubber fingers was bent backwards into a physiologically impossible position, subjects not only experienced their phenomenal finger as being bent but also exhibited a significant skin-conduction reaction [...] Only two out of one hundred and twenty subjects reported feeling actual pain, but many pulled back their real hands and widened their eyes in alarm or laughed nervously.

Utilizing virtual reality applications, scientists could induce the illusion of body swapping (Petkova and Ehrsson 2008). The abstract reads:

This effect was so strong that people could experience being in another person’s body when facing their own body and shaking hands with it. Our results are of fundamental importance because they identify the perceptual processes that produce the feeling of ownership of one’s body.

In the words of Metzinger, describing a similar experiment (Blanke and Metzinger 2009) he devised to induce virtual out-of-body experiences (OBEs) (Metzinger 2009, p. 99f.):

As I watched my own back being stroked, I immediately had an awkward feeling: I felt subtly drawn towards my virtual body in front of me, and I tried to “slip into” it.

This section on inner and outer perception represents only the tip of the iceberg regarding the unexpectedly bizarre status of consciousness, unknown even to the conscious perceiver itself. The human mind, as will be seen, has a seemingly inexhaustible capacity for irrationality. And once the brain breaks down, things get truly terrifying.

3 Impressionable Consciousness

We all feel in control of our minds and decisions. However, these experiences arising in our consciousness can be influenced by other beings, even nonhuman lifeforms. The human body is host to a universe of microorganisms, including bacteria, archaea, protists, fungi, and viruses. The collective genome of these microorganisms—residing, for instance, in our guts, mouths, and noses—is called the microbiome. Indeed, an average human consists of about 30 trillion human cells and 39 trillion bacteria (Abbott 2016). In health, the relationship with these microbial aliens in our bodies is symbiotic. While it appears obvious that the gut’s microbiome assists digestion, a link between it and the brain, opening a window on behavior, sounds outlandish. However (Smith 2015):

A growing body of data, mostly from animals raised in sterile, germ-free conditions, shows that microbes in the gut influence behaviour and can alter brain physiology and neurochemistry.

The microbes also play a role in the development of the mammalian brain (Heijtz et al. 2011; Ogbonnaya et al. 2015). Moreover, the gut-brain axis allows the microbiome to influence anxiety and depression (Foster and Neufeld 2013). While most studies are based on animal experiments (Smith 2015):

Clues about the mechanisms by which gut bacteria might interact with the brain are starting to emerge, but no one knows how important these processes are in human development and health.

The link between gut microbiota and behavior, especially, is an intriguing and hotly debated topic (Cryan and Dinan 2012). Perhaps your “gut feeling” is indeed based upon an extension of your cerebral capabilities, aided by tiny organisms. The astrobiologist Nigel Goldenfeld summarizes (Brockman 2015, p. 27):

[Y]ou’re in some sense not even human. You have perhaps 100 trillion bacterial cells in your body,Footnote 15 numbering 10 times more than your human cells and containing 100 times as many genes as your human cells. These bacteria aren’t just passive occupants of the zoo that is you. They self-organize into communities within your mouth, guts, and elsewhere, and these communities—microbiomes —are maintained by varied, dynamic patterns of competition and cooperation between the various bacteria, which allow us to live.

Your gastrointestinal microbiome can generate small molecules that may be able to pass through the blood-brain barrier and affect the state of your brain. Although the precise mechanism isn’t yet clear, there’s growing evidence that your microbiome may be a significant factor in mental states such as depression and autism.

A healthy microbiome promotes health in its host and may also be beneficial for mental health. Other microorganisms are not so beneficial. For instance, the parasitic brain infection called toxoplasmosis, affecting about thirty percent of all humans, is associated with intermittent explosive disorder, or IED (Coccaro et al. 2016). IED is characterized by fierce outbursts of anger and violence which represent disproportionate reactions to the situation at hand. Perhaps the last episode of road rage you witnessed was steered by tiny, single-celled organisms. How much of the behaviors we witness in our fellow humans—and ourselves—is the result of alien organisms hijacking our cognitive apparatus?

Another line of research tries to identify how behaviors can be transmitted via genes. In other words, how the choices of your ancestors can leave a mark in you. Specifically, the challenge lies in understanding how experiences can result in transgenerationally inherited behavior. For instance, how parental olfactory experience influences the behavior and neural structure in subsequent generations of mice (Dias and Ressler 2014). Indeed, the trauma suffered by Holocaust survivors appears to have left an epigenetic imprint in the DNA of their children (Yehuda et al. 2016). In mice, epigenetic imprinting from traumatic experiences carries through at least two generations (Callaway 2013). Again, one is forced to ask: How much of me is actually me? How much of my behavior is self-determined and how much is induced by external factors?

3.1 The Gullible Mind

Philip Zimbardo was the leader of the notorious 1971 Stanford Prison Experiment. This was a study of the power of institutions to influence individual behavior, which quickly turned bad (Zimbardo 1971). Two dozen volunteers were randomly assigned to be prisoners and guards in a mock prison. The experiment soon spiraled out of control, as the “guards” developed authoritarian traits and subjected the “prisoners” to psychological torture, while the “prisoners” seemed to passively accept their fate. Zimbardo recalls, showing a portrait (Zimbardo 2008):

This is the woman who stopped the Stanford Prison Study. When I said it got out of control, I was the prison superintendent. I didn’t know it was out of control. I was totally indifferent. She saw that madhouse and said, “You know what, it’s terrible what you’re doing to those boys. They’re not prisoners nor guards, they’re boys, and you are responsible.” And I ended the study the next day. The good news is I married her the next year.

Zimbardo identifies the following mechanisms that persuade normal people to commit acts of evil:

  • a mindless first small step down the road to evil;

  • dehumanization of others;

  • de-individuation of self (anonymity);

  • diffusion of personal responsibility;

  • blind obedience to authority;

  • uncritical conformity to group norms;

  • passive tolerance of evil (inaction, indifference).

Recently, the validity of the study has been questioned,Footnote 16 prompting Zimbardo’s response.Footnote 17 In any case, history bears witness to apparently normal human beings descending into abject cruelty.

Another famous experiment, relating to the innate capacity of humans to subdue others, is Stanley Milgram’sFootnote 18 obedience, or shock, experiment (Milgram 1965). In a nutshell (NPR 2013):

In the early 1960s, Stanley Milgram, a social psychologist at Yale, conducted a series of experiments that became famous. Unsuspecting Americans were recruited for what purportedly was an experiment in learning. A man who pretended to be a recruit himself was wired up to a phony machine that supposedly administered shocks. He was the “learner.” [...]

The unsuspecting subject of the experiment, the “teacher,” read lists of words that tested the learner’s memory. Each time the learner got one wrong, which he intentionally did, the teacher was instructed by a man in a white lab coat to deliver a shock. With each wrong answer the voltage went up. From the other room came recorded and convincing protests from the learner—even though no shock was actually being administered.

The results of Milgram’s experiment made news and contributed a dismaying piece of wisdom to the public at large: It was reported that almost two-thirds of the subjects were capable of delivering painful, possibly lethal shocks, if told to do so. We are as obedient as Nazi functionaries.

However, recent research into the subject reveals a somewhat different picture. It appears as though Milgram manipulated the findings by exaggerating and downplaying results to suit the narrative (Perry 2013).

Is there a way to measure how susceptible people are to being manipulated? In other words, what fraction of the population can be instrumentalized? Unfortunately, we do not know this. However, we do know that our minds constantly manipulate themselves. In 1999, two psychologists made a groundbreaking, albeit disenchanting, discovery about human nature. In an experiment, the test subjects were asked to perform tasks related to humor, grammar, and logic. Then the participants were asked to judge their own performance and ability. The result is the infamous Dunning–Kruger effect (Kruger and Dunning 1999). Grossly incompetent people lack the skill to identify their own lack of skill. This leads to an inflated and distorted self-perception. In summary (Kruger and Dunning 1999):

The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.

The tag line “unskilled and unaware” was born. In contrast, highly competent people are troubled by doubt and indecision, resulting in a self-conscious and distorted perception of themselves as well. The study has since been reproduced (Krueger and Mueller 2002)—however, with a different interpretation, which the original authors challenge (Kruger and Dunning 2002)—and some nuances have been detected (Burson et al. 2006), along with the influence of cultural differences between US Americans and Japanese (Heine et al. 2001). The Dunning–Kruger effect is pervasive in our post-truth world (Chap. 12), where confidence is so often confused with competence.

Indeed, it is possible that political ideology stems from specific cognitive traits in humans. Early research includes Adorno et al. (1950). Newer findings identify a key differentiator in the psychology of liberals and conservatives. In one study, conservatives tended to register greater physiological responses to negative stimuli than their more liberal counterparts (Hibbing et al. 2014). Another study showed that conservatives were more likely to remember things that evoked negative emotions, like images of war, snakes, and roadkill (Mills et al. 2016). Yet another study concluded (van Prooijen et al. 2015):

Our study reveals that negative political emotions and outgroup derogation are stronger among the extremes than among the moderates. These phenomena are attributable to the fear that people at both the left and the right extreme experience as a result of societal and economic developments. It is concluded that fear flourishes mostly among the extremes.

Other “neuropolitical” studies highlight the differences in the minds of people (Jost and Amodio 2012):

[A] meta-analytic review of 88 studies conducted in 12 countries between 1958 and 2002, which confirmed that both situational and dispositional variables associated with the management of threat and uncertainty were robust predictors of political orientation. Specifically, death anxiety, system instability, fear of threat and loss, dogmatism, intolerance of ambiguity, and personal needs for order, structure, and closure were all positively associated with conservatism (or negatively associated with liberalism). Conversely, openness to new experiences, cognitive complexity, tolerance of uncertainty, and self-esteem were all positively associated with liberalism (or negatively associated with conservatism). Subsequent research has also demonstrated that liberals exhibit stronger implicit as well as explicit preferences for social change and equality when compared with conservatives.

In detail, the authors analyzed ideological differences in the context of neuroanatomical structures (Jost and Amodio 2012):

[L]arger ACC [anterior cingulate cortex] volume was associated with greater liberalism (or lesser conservatism). Furthermore, larger right amygdala volume was associated with greater conservatism (or lesser liberalism). [...] Given that the ACC is associated with conflict monitoring and the amygdala is centrally involved in physiological and behavioral responses to threat, this neuroanatomical evidence appears to lend further support to the notion that political ideology is linked to basic neurocognitive orientations toward uncertainty and threat.

However (Jost and Amodio 2012):

It is also important to point out that in all neuroscientific studies of political orientation, the direction of causality is ambiguous; it could be that (a) differences in brain activity lead to liberal-conservative ideological differences, or (b) embracing liberal vs. conservative ideologies leads to differences in brain structure and function.

Other researchers claim (Hodson and Busseri 2012):

In conclusion, our investigation establishes that cognitive ability is a reliable predictor of prejudice. Understanding the causes of intergroup bias is the first step toward ultimately addressing social inequalities and negativity toward out-groups. Exposing right-wing conservative ideology and inter-group contact as mechanisms through which personal intelligence may influence prejudice represents a fundamental advance in developing such an understanding.

Linking lower cognitive abilities with prejudice and right-wing ideology will most likely be shrugged off by the stigmatized group as the result of the scientists’ liberal research agenda. In any case, humans like to belong to groups, which then allows them to vilify non-group members. Studies show that even in groups created at random, in which the members should have no reason to discriminate against the out-group, exactly this does, in fact, happen (Tajfel and Turner 1979). Moreover, people are willing to tolerate unethical behavior from members of their own group (Ariely 2009):

If somebody from our in-group cheats and we see them cheating, we feel it’s more appropriate, as a group, to behave this way. But if it’s somebody from another group, [...] all of a sudden people’s awareness of honesty goes up.

Unfortunately, there appears to be a natural tendency in the human mind towards prejudice. Such implicit biases can be measured (Eagleman 2011, p. 60):

Imagine that you sit down in front of two buttons, and you’re asked to hit the right button whenever a positive word flashes on the screen (joy, love, happy, and so on), and the left button whenever you see a negative word (terrible, nasty, failure). Pretty straightforward. Now the task changes a bit: hit the right button whenever you see a photo of an overweight person, and the left button whenever you see a photo of a thin person. Again, pretty easy. But for the next task, things are paired up: you’re asked to hit the right button when you see either a positive word or an overweight person, and the left button whenever you see a negative word or a thin person. In another group of trials, you do the same thing but with the pairings switched—so you now press the right button for a negative word or a thin person.

The results can be troubling. Subjects’ reaction times are faster when the pairings match a strong unconscious association. For example, if overweight people are linked with a negative association in the subject’s unconscious, then the subject reacts faster to a photo of an overweight person when the response is linked to the same button as a negative word. During trials in which the opposite concepts are linked (thin with bad), subjects take longer to respond, presumably because the pairing is more difficult. This experiment has been modified to measure implicit attitudes toward races, religions, homosexuality, skin tone, age, disabilities, and presidential candidates. The scoring logic is sketched below.
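In essence, such implicit-association tests boil down to comparing reaction times across the two pairing conditions. The following is a minimal sketch in Python; the data and the function name are hypothetical, and real scoring procedures (e.g., the IAT’s D measure) involve further corrections:

    from statistics import mean, stdev

    def iat_effect(congruent_ms, incongruent_ms):
        # Difference of mean reaction times between the two pairing
        # conditions, scaled by the pooled standard deviation.
        diff = mean(incongruent_ms) - mean(congruent_ms)
        return diff / stdev(congruent_ms + incongruent_ms)

    # Hypothetical reaction times in milliseconds.
    congruent = [512, 498, 530, 487, 505]    # pairing matches the association
    incongruent = [601, 587, 620, 575, 598]  # pairing opposes the association
    print(round(iat_effect(congruent, incongruent), 2))  # > 0 indicates a bias

A markedly positive value indicates that the incongruent pairings slowed the subject down, which is precisely the signature of an implicit association.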

However, while such tests give a reliable picture at an aggregate level, it is unclear how accurate they are for detecting individual biases or racism (Lopez 2017).

If biases and prejudices are widespread neural patterns in our brains, can these be manipulated? Indeed, something as innocuous as a magnet can do the trick. Specifically, using transcranial magnetic stimulation to temporarily shut down specific regions of the brain (University of York 2015):

New research involving a psychologist from the University of York has revealed for the first time that both belief in God and prejudice towards immigrants can be reduced by directing magnetic energy into the brain. [...]

The researchers targeted the posterior medial frontal cortex, a part of the brain located near the surface and roughly a few inches up from the forehead that is associated with detecting problems and triggering responses that address them.

[...] people in whom the targeted brain region was temporarily shut down reported 32.8% less belief in God, angels, or heaven. They were also 28.5% more positive in their feelings toward an immigrant who criticised their country.

The study in question is Holbrook et al. (2015). Researchers who have scanned the brains of subjects conclude (Harris et al. 2009):

[R]eligious thinking is more associated with brain regions that govern emotion, self-representation, and cognitive conflict, while thinking about ordinary facts is more reliant upon memory retrieval networks.

3.2 The Irrational Mind

In 1978, Herbert A. Simon was awarded the Nobel Memorial Prize in economic science.Footnote 19 His work was centered around human rationality. Indeed, neoclassical economics (Sects. 7.1.2.1, 7.1.2.3, and 7.2) painted a highly optimistic picture of individuals’ rationality in the context of economic choices. People were thought to always maximize a perceived utility. Simon questioned these assumptions of perfectly rational decision-making with his concept of bounded rationality (Simon 1972). Nearly a quarter of a century later, in 2002, the psychologist Daniel Kahneman was awarded the Nobel Memorial Prize in economic sciences “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty”.Footnote 20 He pioneered the field of behavioral economics, which is steadily growing. In 2017, Richard Thaler, another behavioral economist, received the award for the idea of “nudging” people towards doing what is best for them (Thaler and Sunstein 2008).

Today, behavioral economics has uncovered a trove of embarrassing findings exposing innate and ubiquitous human irrationality. Kahneman was already quoted in Sect. 8.1.1, when he remarked:

Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.

Indeed, the human mind falls prey to an astonishingly wide array of cognitive biases, logical fallacies, and self-deception (Buonomano 2011; McRaney 2012; Hallinan 2014). For instance:

  • Confirmation bias: ignoring information which threatens pre-existing beliefs.

  • Gambler’s fallacy: believing that the observation of previous events influences future outcomes.

  • Neglecting probability: underestimating a risk (e.g., the probability of a car crash) while inflating others (e.g., the probability of a plane crash).

  • Observational selection bias: noticing a novel feature more often and assuming that the frequency of appearances has increased.

  • Status-quo bias: inertia towards change.

  • Negativity bias: bad news captures more attention.

  • Bandwagon effect: groupthink or group pressure.

  • Projection bias: assuming others tend to think like oneself.

  • The current moment bias: favoring pleasure in the “now.”

  • Anchoring effect: relying too heavily on an initial piece of information.

  • Availability heuristic: confusing ease of remembering with frequency of occurrence.

In one experiment, subjects were asked to write down either three or nine reasons why they loved their partner. People who wrote down only three things reported that they loved their partner more than those who had to write down nine. The reason being that (Banaji 2006):

Who has nine amazing properties? So by the time you get to number five you’re struggling. Six is hard and seven is almost impossible. And you make eight and nine up.

Other subjects were asked to first memorize the last four digits of their social security number and then to estimate the number of doctors in New York City. The correlation between the two numbers was around 0.4 (reported in Hubbard 2014, p. 308). In other experiments, the actual decisions of people were influenced (Ariely 2008a):

This was an ad in The Economist a few years ago that gave us three choices: an online subscription for 59 dollars, a print subscription for 125 dollars, or you could get both for 125.

To visualize:

  • A: One-year online subscription, $59.

  • B: One-year subscription to the print edition, $125.

  • C: One-year subscription to the online and print edition, $125.

The story continues:

Now I looked at this, and I called up The Economist, and I tried to figure out what they were thinking. And they passed me from one person to another to another, until eventually I got to the person who was in charge of the website, and I called them up, and they went to check what was going on. The next thing I know, the ad is gone, no explanation.

So I decided to do the experiment that I would have loved The Economist to do with me. I took this [the ad] and I gave it to 100 MIT students. I said, “What would you choose?” These are the market shares—most people wanted the combo deal [i.e., Choice C with 84% and Choice A with 16%]. [...]

But now, if you have an option that nobody wants, you can take it off, right? So I printed another version of this, where I eliminated the middle option [Choice B]. I gave it to another 100 students. Here is what happened: Now the most popular option became the least popular, and the least popular became the most popular [i.e., Choice C went from 84% to 32%, while Choice A went from 16% to 68%].

What was happening was the option that was useless, in the middle, was useless in the sense that nobody wanted it. But it wasn’t useless in the sense that it helped people figure out what they wanted. In fact, relative to the option in the middle, which was get only the print for 125, the print and web for 125 looked like a fantastic deal. And as a consequence, people chose it. [...]

What is the general point? The general point is that, when we think about economics, we have this beautiful view of human nature. “What a piece of work is a man! How noble in reason!” We have this view of ourselves, of others. The behavioral economics perspective is slightly less “generous” to people [...].

Not only economic decisions but also ethical choices can be influenced (Ariely 2009):

So, we got people to the lab, and we said, “We have two tasks for you today.” First, we asked half the people to recall either 10 books they read in high school, or to recall The Ten Commandments, and then we tempted them with cheating. Turns out the people who tried to recall The Ten Commandments—and in our sample nobody could recall all of The Ten Commandments—but those people who tried to recall The Ten Commandments, given the opportunity to cheat, did not cheat at all. It wasn’t that the more religious people—the people who remembered more of the Commandments—cheated less, and the less religious people—the people who couldn’t remember almost any Commandments—cheated more. The moment people thought about trying to recall The Ten Commandments, they stopped cheating.

Indeed, our sense of morality is easily manipulated. The abstract of a study reads (Bateson et al. 2006):

We examined the effect of an image of a pair of eyes on contributions to an honesty box used to collect money for drinks in a university coffee room. People paid nearly three times as much for their drinks when eyes were displayed rather than a control image. This finding provides the first evidence from a naturalistic setting of the importance of cues of being watched, and hence reputational concerns, on human cooperative behaviour.

More on how the human mind decides to deceive others and itself is found in Ariely (2012). Other experiments have shown that people will abandon their judgments under group pressure. Specifically (Asch 1951):

The critical subject was submitted to two contradictory and irreconcilable forces—the evidence of his own experience of an utterly clear perceptual fact and the unanimous evidence of a group of [perceived] equals [who were actors].

A sizable fraction of participants yielded to the pressure. Another way to influence behavior is through smell. The positive effect of citrus scent on cleaning-related behavior (the frequency of participants’ crumb removal while eating) was measured in Holland et al. (2005). The link between testosterone and risk appetite is also known (Coates 2012). This impacts traders (Sect. 7.4.2.3).

Kahneman distinguishes between the experiencing self, “who lives in the present and knows the present” and the remembering self, which “is the one that keeps score, and maintains the story of our life.” The two selves are in conflict with each other. For instance, in remembering time (Kahneman 2010):

From the point of view of the experiencing self, if you have a vacation, and the second week is just as good as the first, then the two-week vacation is twice as good as the one-week vacation. That’s not the way it works at all for the remembering self. For the remembering self, a two-week vacation is barely better than the one-week vacation because there are no new memories added.

Kahneman also categorizes thinking into two components: fast and slow. While the former is characterized by unconscious and automatic responses, the latter is an effortful and logical process. To illustrate fast thinking, consider the following (Kahneman 2011, p. 44). A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost? Most people answer 10 cents. This is, of course, seen to be wrong once we do the calculation via slow thinking (shown below). Also, if a message is printed in bright blue or red—compared to middling shades of green, yellow, or pale blue—it is more likely to be believed (Kahneman 2011, p. 63). Test performances, pitting fast against slow thinking, turn out to be better if the font the test is printed in is bad: the cognitive strain of reading barely legible text makes the brain perform better in its slow mode (Kahneman 2011, p. 65). Such examples go on and on. It is truly humbling, if not frightening, how error-prone and susceptible our minds are while, at the same time, claiming rationality and autonomy. See, for instance, Ariely (2008b) and Kahneman (2011) for the trove of experiments unmasking human irrationality.
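For the record, the slow-thinking calculation is elementary. Writing x for the price of the ball in dollars, the two stated conditions give

\[
x + (x + 1.00) = 1.10 \quad \Rightarrow \quad 2x = 0.10 \quad \Rightarrow \quad x = 0.05,
\]

so the ball costs 5 cents, not 10.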

Even without performing such experiments, the limits of the human mind become apparent. For one, humans are very bad at dealing with probabilities. Consider the following situation (Gigerenzer and Hoffrage 1995):

The probability of breast cancer is 1% for a woman at age forty who participates in routine screening. If a woman has breast cancer, the probability is 80% that she will get a positive mammography. If a woman does not have breast cancer, the probability is 9.6% that she will also get a positive mammography. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?
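It is worth pausing to attempt an estimate before reading on. The exact bookkeeping follows from Bayes’ theorem for conditional probabilities, plugging in the three numbers quoted above (a 1% prevalence, 80% sensitivity, and a 9.6% false-positive rate):

\[
P(\text{cancer} \mid \text{positive}) = \frac{0.80 \times 0.01}{0.80 \times 0.01 + 0.096 \times 0.99} = \frac{0.008}{0.10304} \approx 7.8\%.
\]

Only about one positive mammography in thirteen signals an actual cancer, because the false positives produced by the healthy 99% vastly outnumber the true positives.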

This type of scenario deals with conditional probabilities, for example, the interpretation of false positives. Alarmingly, even physicians get the answer very wrong. Out of 100 doctors, 95 gave an estimate between 70% and 80% (Gigerenzer and Hoffrage 1995). The correct mathematical formalism for dealing with such probabilities is Bayesian inference. Applying Bayes’ theorem (Bayes 1763), as in the calculation above, the correct and highly unintuitive answer is found to be 7.8%. Another example is the Monty Hall problem. It is based on a television game show and is, by now, a well-known mathematical brainteaser. A contestant is placed in front of three doors (Arbesman 2014):

She is told that behind one of them is a car, while behind the other two there are goats. Since it is presumed that contestants want to win cars not goats, if nothing else for their resale value, there is a one-third chance of choosing the car and winning.

But now here’s the twist. After the contestant chooses a door, the game show host has another door opened and the contestant is shown a goat. Should she stick with the door she has originally chosen, or switch to the remaining unopened door?

By switching, the contestant has a 2/3 chance of winning the car. Somehow, miraculously, the odds went from a perceived fifty-fifty chance—after all, two doors remain closed and nothing seems to have changed—to 2/3. By mapping out the probability space (or by simulating the game, as sketched at the end of this discussion) it becomes clear that this is really the case. However, the human mind’s intuitions about probabilities are bad. Even professionally trained human minds (Arbesman 2014):

In fact, Paul Erdős,Footnote 21 one of the most prolific and foremost mathematicians involved in probability, when initially told of the Monty Hall problem also fell victim to not understanding why opening a door should make any difference. Even when given the mathematical explanation multiple times, he wasn’t really convinced. It took several days before he finally understood the correct solution.

Indeed, the cognitive psychologist Massimo Piattelli-Palmarini is quoted as saying that (Vos Savant 1996, p. 15):

[N]o other statistical puzzle comes so close to fooling all the people all the time. [...]

[E]ven Nobel physicists systematically give the wrong answer, and [...] they insist on it, and they are ready to berate in print those who propose the right answer.

But most strikingly, this cognitive impairment seems to be specific to human minds. Pigeons, on the other hand, have a better grasp of probabilities (Herbranson and Schroeder 2010):

Birds completed multiple trials of a standard MHD [Monty Hall Dilemma], with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy. Replication of the procedure with human participants showed that humans failed to adopt optimal strategies, even with extensive training.
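For readers who distrust both their intuition and the mapped-out probability space, the game is easy to simulate. Below is a minimal sketch in Python; the function name and trial count are illustrative and not taken from the cited studies:

    import random

    def monty_hall(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)   # door hiding the car
            pick = random.randrange(3)  # contestant's initial choice
            # The host opens a door that is neither the pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                # Switch to the one remaining closed door.
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += pick == car
        return wins / trials

    print("stay:  ", monty_hall(switch=False))  # converges to ~1/3
    print("switch:", monty_hall(switch=True))   # converges to ~2/3

A few hundred thousand simulated games show staying winning about a third of the time and switching about two thirds—exactly the counterintuitive answer that the contestants, and Erdős, resisted.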

Some ventures into the minds of animals are the following: the philosopher Thomas Nagel’s classic What Is It Like to Be a Bat? (Nagel 1974) or the search for the origins of consciousness in the minds of cephalopods (Godfrey-Smith 2016). Indeed, octopuses are very alien life forms, with three hearts, skin that acts like an organic display by changing both color and texture, and a nervous system largely distributed across their eight semi-autonomous arms. Most intriguingly, they regularly use tools and can solve puzzles, and they can edit their own genes (specifically, their RNA). They have about 33,000 genes (Albertin et al. 2015), compared to the 20,000–25,000 genes in humans (International Human Genome Sequencing Consortium 2004). However, for some reason, they don’t live for very long.

In light of all the troubling findings discussed above, we should not be surprised that humans also resist being convinced by empirical evidence (Ahluwalia 2000). In summary (Kaplan et al. 2016):

Few things are as fundamental to human progress as our ability to arrive at a shared understanding of the world. The advancement of science depends on this, as does the accumulation of cultural knowledge in general.

It is well known that people often resist changing their beliefs when directly challenged, especially when these beliefs are central to their identity. In some cases, exposure to counterevidence may even increase a person’s confidence that his or her cherished beliefs are true.

This last epitome of irrationality is called the backfire effect. All of these insights become acutely worrisome—and amplified—in our modern digital and interconnected age. The Internet constantly feeds our biases, anchoring us in a state of blissful fantasy, where our beliefs get reinforced, but never challenged. This is mediated by mechanisms like filter bubbles (Pariser 2011) and echo chambers (Barberá et al. 2015; Del Vicario et al. 2016). Sadly, unearthing these deficits in the human mind will not be able to change much (Lehrer 2012):

For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes.

For a popular account of human psychology, see, for instance, Freeman and Freeman (2010).

3.3 The Broken Mind

Astonishingly, all the listed cognitive deficiencies and idiosyncrasies of consciousness, plus its warped and constructed perceptions, are associated with a healthy mind. Sadly, the mind can break in many ways, unearthing more enigmas of consciousness.

The Oxford Textbook of Psychopathology is a frightening 840 pages thick, listing every pathology that has been observed in the human mind. The array of potential mental defects is very wide, affecting all aspects of consciousness: the sense of self, perception, experience, memory, attention, general cognition, intelligence, instincts, aggression, affectivity, and sexuality. Perhaps most disheartening are delusions, where the patients are trapped alone in an atom of constructed reality no other mind can access. Other patients can have impaired time perception. Time can accelerate or slow down and even completely grind to a halt. The past and the present can be confused with each other or the future can be impossible to comprehend. Facing so many potential ruptures of the mind, it is astounding that most humans appear to have a normally functioning brain. The neurologist Oliver Sacks was perhaps one of the first researchers to introduce the nightmares of brain dysfunctions to a wide audience. In his popular book, titled The Man Who Mistook His Wife for a Hat , he describes the case histories of some of his patients (Sacks 1985). The title was inspired by the case study of a patient with visual agnosia, unable to distinguish between animate and inanimate things. After reading the 24 essays describing extraordinary disabilities of the mind, one is again left wondering about the status of sober waking consciousness. Sacks also researched the phenomenon of hallucinations (Sacks 2012).

Some mental illnesses are very perfidious, like Tourette syndrome, where the affected people suffer from compulsive tics. The worst is the urge to express obscenities: the more inappropriate the situation, the greater the impulse to swear. Narcolepsy is a sleep disorder which is very debilitating for sufferers. During the day, these people inadvertently and constantly fall into momentary states of deep sleep, while walking the dog, buying groceries, or doing any other activity. The social ostracism must be unbearable. People affected by schizophrenia—about 1% of the population—live in a world which is incoherent and fragmented (Swaab 2014). Inner and outer perception mix, giving rise to hallucinations and delusions. A schizophrenic mind can be instructed by (inner) voices to kill other people, which some then obey. Other mental illnesses are truly bizarre. Cotard delusion is a condition in which the affected person holds the belief that they are dead or do not exist (Ananthaswamy 2016). Patients suffering from aphantasia lose their inner eye and become mentally blind, unable to form mental images.

Sometimes neuroscience can give some insight into the mechanisms underlying the problems. Capgras delusion is a disorder in which a person firmly holds the belief that a friend, spouse, parent, or family member is an impostor (Ramachandran 2007):

So, to explain this curious disorder, we look at the structure and functions of the normal visual pathways in the brain. Normally, visual signals come in, into the eyeballs, go to the visual areas in the brain. There are, in fact, 30 areas in the back of your brain concerned with just vision, and after processing all that, the message goes to a small structure called the fusiform gyrus, where you perceive faces. [...] Now, when that area’s damaged, you lose the ability to see faces, right?

But from that area, the message cascades into a structure called the amygdala in the limbic system, the emotional core of the brain, and that structure, called the amygdala, gauges the emotional significance of what you’re looking at. Is it prey? Is it predator? Is it mate? Or is it something absolutely trivial, like a piece of lint, or a piece of chalk [...].

But maybe, in this chap [a sufferer of Capgras delusion] , that wire that goes from the amygdala to the limbic system, the emotional core of the brain, is cut by the accident. So because the fusiform is intact, the chap can still recognize his mother, and says, “Oh yeah, this looks like my mother.” But because the wire is cut to the emotional centers, he says, “But how come, if it’s my mother, I don’t experience a warmth?” Or terror, as the case may be?

Sometimes we voluntarily impair the brain. The use of botulinum toxin (Botox) in cosmetic applications to reduce facial wrinkles has an unexpected side effect. By temporarily paralyzing the facial muscles used in frowning, a feedback loop is cut, which results in reduced emotional processing in patients (Hennenlotter et al. 2008; Havas et al. 2010). By not being able to express facial emotions, people tend to lose their ability to recognize these emotions in others.

The notion of a split-brain describes the result when the corpus callosum connecting the two hemispheres of the brain is damaged or severed (Sperry et al. 1969). This can happen due to an accident or can be induced surgically, to treat severe forms of epilepsy (Wilson et al. 1978). The results are mind-boggling, as two distinct minds now seem to appear in these split brains. Indeed (Gazzaniga 2011, p. 59f.):

One of the more general and also more interesting and striking features of this [split-brain] syndrome may be summarized as an apparent doubling in most of the realms of conscious awareness. Instead of the normally unified single stream of consciousness, these patients behave in many ways as if they have two independent streams of conscious awareness, one in each hemisphere, each of which is cut off from and out of contact with the mental experiences of the other. In other words, each hemisphere seems to have its own separate and private sensations; its own perceptions; its own concepts; and its own impulses to act, with related volitional, cognitive, and learning experiences.

[...]

Over the past ten years we have collected evidence that, following midline section of the cerebrum, common normal conscious unity is disrupted, leaving the split-brain patient with two minds (at least), mind left and mind right. They coexist as two completely conscious entities, in the same manner as conjoined twins are two completely separate persons.

The mystery of consciousness deepens. A whole unified mind can be divided into two independent whole minds. The functional traits of each hemisphere appear to be very different. In 2008, the neuroanatomist Jill Bolte Taylor delivered one of the most popular TED talks (Bolte Taylor 2008a), describing the battle of her hemispheres during a stroke. She begins:

But on the morning of December 10, 1996, I woke up to discover that I had a brain disorder of my own. A blood vessel exploded in the left half of my brain. And in the course of four hours, I watched my brain completely deteriorate in its ability to process all information. [...]

On the morning of the stroke, I woke up to a pounding pain behind my left eye. And it was the kind of caustic pain that you get when you bite into ice cream. And it just gripped me—and then it released me. [...]

So I got up and I jumped onto my cardio glider, which is a full-body, full-exercise machine. And I’m jamming away on this thing, and I’m realizing that my hands look like primitive claws grasping onto the bar. And I thought, “That’s very peculiar.” And I looked down at my body and I thought, “Whoa, I’m a weird-looking thing.” And it was as though my consciousness had shifted away from my normal perception of reality, where I’m the person on the machine having the experience, to some esoteric space where I’m witnessing myself having this experience.

And it was all very peculiar, and my headache was just getting worse. So I get off the machine, and I’m walking across my living room floor, and I realize that everything inside of my body has slowed way down. And every step is very rigid and very deliberate. [...]

And then I lost my balance, and I’m propped up against the wall. And I look down at my arm and I realize that I can no longer define the boundaries of my body. I can’t define where I begin and where I end, because the atoms and the molecules of my arm blended with the atoms and molecules of the wall. And all I could detect was this energy.

And I’m asking myself, “What is wrong with me? What is going on?” And in that moment, my left hemisphere brain chatter went totally silent. Just like someone took a remote control and pushed the mute button. Total silence. And at first I was shocked to find myself inside of a silent mind. But then I was immediately captivated by the magnificence of the energy around me. And because I could no longer identify the boundaries of my body, I felt enormous and expansive. I felt at one with all the energy that was, and it was beautiful there. [...]

Then all of a sudden my left hemisphere comes back online and it says to me, “Hey! We’ve got a problem! We’ve got to get some help.” And I’m going, “Ahh! I’ve got a problem!”

But then I immediately drifted right back out into the consciousness—and I affectionately refer to this space as La La Land. But it was beautiful there. Imagine what it would be like to be totally disconnected from your brain chatter that connects you to the external world. [...]

And I felt this sense of peacefulness. And imagine what it would feel like to lose 37 years of emotional baggage! Oh! I felt euphoria. It was beautiful.

And in that moment, my right arm went totally paralyzed by my side. Then I realized, “Oh my gosh! I’m having a stroke!” And the next thing my brain says to me is, “Wow! This is so cool!” [...]

Bolte Taylor managed to get help and was taken to a hospital:

When I woke later that afternoon, I was shocked to discover that I was still alive. When I felt my spirit surrender, I said goodbye to my life. And my mind was now suspended between two very opposite planes of reality. Stimulation coming in through my sensory systems felt like pure pain. Light burned my brain like wildfire, and sounds were so loud and chaotic that I could not pick a voice out from the background noise, and I just wanted to escape. Because I could not identify the position of my body in space, I felt enormous and expansive, like a genie just liberated from her bottle. And my spirit soared free, like a great whale gliding through the sea of silent euphoria. Nirvana. I found Nirvana. And I remember thinking, there’s no way I would ever be able to squeeze the enormousness of myself back inside this tiny little body. [...]

And I pictured a world filled with beautiful, peaceful, compassionate, loving people who knew that they could come to this space at any time. And that they could purposely choose to step to the right of their left hemispheres—and find this peace. And then I realized what a tremendous gift this experience could be, what a stroke of insight this could be to how we live our lives. And it motivated me to recover. [...]

So who are we? We are the life-force power of the universe, with manual dexterity and two cognitive minds. And we have the power to choose, moment by moment, who and how we want to be in the world. Right here, right now, I can step into the consciousness of my right hemisphere, where we are. I am the life-force power of the universe. I am the life-force power of the 50 trillion beautiful molecular geniuses that make up my form, at one with all that is. Or, I can choose to step into the consciousness of my left hemisphere, where I become a single individual, a solid. Separate from the flow, separate from you. I am Dr. Jill Bolte Taylor: intellectual, neuroanatomist. These are the “we” inside of me. Which would you choose? Which do you choose? And when? I believe that the more time we spend choosing to run the deep inner-peace circuitry of our right hemispheres, the more peace we will project into the world, and the more peaceful our planet will be.

It took her eight years to completely recover. Her ordeal is described in her book (Bolte Taylor 2008b). Beauty, peace, disintegration, expansion, euphoria, merging with the universe—these are not the words one would expect to hear from a person whose left brain is damaged. The described experiences sound like what users of psychoactive substances often report (Sect. 14.3). Indeed, there is a distinct spiritual message in Bolte Taylor’s words. In effect, she believes in the reality of the extraordinary experiences she witnessed. More on the notion and context of spirituality is found in Chap. 14.

Martin Pistorius contracted a brain infection at the age of twelve. Slowly he lost his cognitive abilities (Pistorius 2015):

My parents were told I was as good as not there. A vegetable, having the intelligence of a three-month-old baby. They were told to take me home and try to keep me comfortable until I died. [...]

I had become a ghost, a faded memory of a boy people once knew and loved. Meanwhile, my mind began knitting itself back together. Gradually, my awareness started to return. But no one realized that I had come back to life. I was aware of everything, just like any normal person. I could see and understand everything, but I couldn’t find a way to let anybody know. My personality was entombed within a seemingly silent body, a vibrant mind hidden in plain sight within a chrysalis.

The stark reality hit me that I was going to spend the rest of my life locked inside myself, totally alone. I was trapped with only my thoughts for company. I would never be rescued. No one would ever show me tenderness. I would never talk to a friend. No one would ever love me. I had no dreams, no hope, nothing to look forward to. Well, nothing pleasant. I lived in fear, and, to put it bluntly, was waiting for death to finally release me, expecting to die all alone in a care home.

I don’t know if it’s truly possible to express in words what it’s like not to be able to communicate. Your personality appears to vanish into a heavy fog and all of your emotions and desires are constricted, stifled and muted within you. For me, the worst was the feeling of utter powerlessness. I simply existed. It’s a very dark place to find yourself because in a sense, you have vanished. Other people controlled every aspect of my life. They decided what I ate and when. Whether I was laid on my side or strapped into my wheelchair. I often spent my days positioned in front of the TV watching Barney reruns. I think because Barney is so happy and jolly, and I absolutely wasn’t, it made it so much worse.

I was completely powerless to change anything in my life or people’s perceptions of me. I was a silent, invisible observer of how people behaved when they thought no one was watching. Unfortunately, I wasn’t only an observer. With no way to communicate, I became the perfect victim: a defenseless object, seemingly devoid of feelings that people used to play out their darkest desires. For more than 10 years, people who were charged with my care abused me physically, verbally and sexually.

It is impossible to comprehend the gravity of this devastating account. There are perhaps only a few torture techniques that can compare to locked-in syndrome, in which a functioning mind is cut off from all human social interaction and from reality itself. Miraculously, Pistorius did not surrender:

My mind became a tool that I could use to either close down to retreat from my reality or enlarge into a gigantic space that I could fill with fantasies.

After he had been trapped in his body for thirteen years, an aromatherapist who came to his care home thought, by chance, that she had detected a sign of life in him—a responsiveness. She urged experts to run tests:

And within a year, I was beginning to use a computer program to communicate. It was exhilarating, but frustrating at times. I had so many words in my mind, that I couldn’t wait to be able to share them. Sometimes, I would say things to myself simply because I could. In myself, I had a ready audience, and I believed that by expressing my thoughts and wishes, others would listen, too.

Today, Pistorius, born in 1975, is still confined to a wheelchair and cannot talk. However (The Scotsman 2011):

So much has happened since then. Being unlocked from that prison, aged 25, learning to read and write and communicate through the written word, finding a job, falling in love and getting married, moving country. Pistorius’ story is a mix of serendipity and determination on an epic scale.

He wrote an autobiography of his truly unique life (Pistorius 2011).

Perhaps the most amazing and shocking aspect of the mind is that it can persist even without much physical neural hardwiring. The following case study surprised experts and laypeople alike. In 2007, a 44-year-old man went to see a doctor because of a problem in his leg (New Scientist 2007):

Scans of the 44-year-old man’s brain showed that a huge fluid-filled chamber called a ventricle took up most of the room in his skull, leaving little more than a thin sheet of actual brain tissue.

“It is hard for me [to say] exactly the percentage of reduction of the brain, since we did not use software to measure its volume. But visually, it is more than a 50 to 75 per cent reduction,” says Lionel Feuillet, a neurologist at the Mediterranean University in Marseille, France.

Feuillet and his colleagues describe the case of this patient in The Lancet. He is a married father of two children and works as a civil servant.

See Feuillet et al. (2007). How is this possible? If consciousness is associated with specific neural correlates in the brain, then how can sober waking consciousness emerge in a brain that is hardly developed? Indeed (New Scientist 2007):

Intelligence tests showed the man had an IQ of 75, below the average score of 100 but not considered mentally retarded or disabled.

“The whole brain was reduced—frontal, parietal, temporal and occipital lobes—on both left and right sides. These regions control motion, sensibility, language, vision, audition, and emotional and cognitive functions,” Feuillet told New Scientist.

Another case study describes a 24-year-old woman who was missing a large part of her brain (the cerebellum, also called “the little brain,” which represents about 10% of the brain’s total volume but contains roughly 50% of its neurons), again without anybody noticing (Yu et al. 2014). And a boy missing the visual processing center of his brain unexpectedly seems to have near-normal sight (Klein 2017). This phenomenon is called blindsight.

The brain displays great plasticity and dynamism. This could explain why some damage to the brain results in an enhanced mind, a phenomenon known as acquired savantism. For instance, Derek Amato suddenly became a musical prodigy after suffering a head injury from diving into a shallow swimming pool (Amato 2013). Previously a normal person without any particularly remarkable skills, he was transformed into a composer and pianist. Other acquired savants, whose abilities emerged after brain trauma, are the artists Jon Sarkin and Tommy McHugh, and the mathematical talents Jason Padgett and Orlando Serrell. At the age of three, Ben Underwood lost both of his eyes because of cancer. Around the age of five, he taught himself echolocation (Rojas et al. 2009). Similar to bats, he used a series of clicking sounds to navigate in space, allowing him to accomplish amazing feats like running, playing basketball, riding a bicycle, rollerblading, playing football, and skateboarding. Underwood died at the age of sixteen as a result of his cancer. People suffering from synesthesia experience a mixing of sensory inputs (Cytowic 2002)—for instance, hearing colors or seeing sounds. For some, letters and numbers are associated with distinct colors. Higher levels of creativity are thought to go hand in hand with synesthesia (Dailey et al. 1997).

What happens if the brains of criminals are analyzed? Pedophiles, for instance, have an abnormally functioning amygdala (Sartorius et al. 2008). Other insights reveal how an emerging brain pathology can drastically change behavior (Eagleman 2011, p. 151f.):

On the steamy first day of August 1966, Charles Whitman took an elevator to the top floor of the University of Texas Tower in Austin. The twenty-five-year-old climbed three flights of stairs to the observation deck, lugging with him a trunk full of guns and ammunition. At the top he killed a receptionist with the butt of his rifle. He then shot at two families of tourists coming up the stairwell before beginning to fire indiscriminately from the deck at people below. The first woman he shot was pregnant. As others ran to help her, he shot them as well. He shot pedestrians in the street and the ambulance drivers that came to rescue them. The night before Whitman had sat at his typewriter and composed a suicide note:

I do not really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I cannot recall when it started) I have been a victim of many unusual and irrational thoughts.

It was after much thought that I decided to kill my wife, Kathy, tonight...I love her dearly, and she has been a fine wife to me as any man could ever hope to have. I cannot rationally pinpoint any specific reason for doing this...

Along with the shock of the murders lay another, more hidden surprise: the juxtaposition of his aberrant actions and his unremarkable personal life. Whitman was a former Eagle Scout and marine, worked as a teller in a bank, and volunteered as a scoutmaster for Austin Scout Troop 5. [...]

A few months before the shooting, Whitman had written in his diary:

I talked to a doctor once for about two hours and tried to convey to him my fears that I felt overcome by overwhelming violent impulses. After one session I never saw the Doctor again, and since then I have been fighting my mental turmoil alone, and seemingly to no avail.

Whitman’s body was taken to the morgue, his skull was put under the bone saw, and the medical examiner lifted the brain from its vault. He discovered that Whitman’s brain harbored a tumor about the diameter of a nickel. This tumor, called a glioblastoma, had blossomed from beneath a structure called the thalamus, impinged on the hypothalamus, and compressed a third region, called the amygdala. The amygdala is involved in emotional regulation, especially as regards fear and aggression.

A similar case study is reported (Eagleman 2011, p. 154f.):

Take the case of a forty-year-old man we’ll call Alex. Alex’s wife, Julia, began to notice a change in his sexual preferences. For the first time in the two decades she had known him, he began to show an interest in child pornography. And not just a little interest, an overwhelming one. [...]

This was no longer the man Julia had married, and she was alarmed by the change in his behavior. At the same time, Alex was complaining of worsening headaches. And so Julia took him to the family doctor, who referred them on to a neurologist. Alex underwent a brain scan, which revealed a massive brain tumor in his orbitofrontal cortex. The neurosurgeons removed the tumor. Alex’s sexual appetite returned to normal.

The lesson of Alex’s story is reinforced by its unexpected follow-up. About six months after the brain surgery, his pedophilic behavior began to return. His wife took him back to the doctors. The neuro-radiologist discovered that a portion of the tumor had been missed in the surgery and was regrowing—and Alex went back under the knife. After the removal of the remaining tumor, his behavior returned to normal.

Such insights motivate the burgeoning field of neurolaw.

The criminal justice system is based on the assumption that criminals had the choice not to act out their crimes. Hence, they are culpable, and incarceration is the right measure to correct such immoral behavior. Neuroscience questions this assumption. Indeed, in the case of Whitman, one wonders what would have happened if he had had his brain tumor removed. More personally, I must assume that if my brain were affected by such an invasive, destructive force, I too would be capable of heinous acts unimaginable to my healthy mind now. Neurolaw presents a dilemma. Clearly, perpetrators do great harm to society, so society needs protection from them. On the other hand, what happens to the notion of responsibility if one’s brain is in fact damaged? Indeed, “My brain made me do it!” is becoming a common criminal defense (Sternberg 2010).

In summary (Eagleman 2011, p. 157f.):

Many of us like to believe that all adults possess the same capacity to make sound choices. It’s a nice idea, but it’s wrong. People’s brains can be vastly different—influenced not only by genetics but by the environments in which they grew up. Many “pathogens” (both chemical and behavioral) can influence how you turn out; these include substance abuse by a mother during pregnancy, maternal stress, and low birth weight. As a child grows, neglect, physical abuse, and head injury can cause problems in mental development. Once the child is grown, substance abuse and exposure to a variety of toxins can damage the brain, modifying intelligence, aggression, and decision-making abilities. The major public health movement to remove lead-based paint grew out of an understanding that even low levels of lead can cause brain damage that makes children less intelligent and, in some cases, more impulsive and aggressive. How you turn out depends on where you’ve been. So when it comes to thinking about blameworthiness, the first difficulty to consider is that people do not choose their own developmental path.

4 The Mind-Body Problem

To reiterate: How do the processes happening in the brain give rise to the stream of subjective conscious experience? Why does it “feel like something”? Why don’t all the subroutines in the brain simply collaborate to ensure survival and procreation without the emergence of an identity—an observer? Why the need for authorship? As discussed, one solution is to rebrand consciousness as an illusion; the enigma of the hard problem of consciousness is then simply “explained away.” This line of thought does not, however, convince everyone. If we do believe in the reality of our selves, the specter of dualism emerges. In essence, how does the ethereal mind interact with the physical body? What is this elusive mind-body link?

One clue comes from psychosomatic disorders, where the mind is a key factor in a physical condition (Dum et al. 2016). Another comes in the guise of the placebo effect (Beecher 1955). Simple beliefs and expectations can trigger spontaneous self-healing in patients. Indeed, this effect is so strong that clinical trials should be conducted as double-blind experiments (Rivers and Webber 1907). It does not suffice to keep the patients ignorant about which pill is the actual medicine and which one is simply a sugar tablet. Surprisingly, the doctor is also not allowed to know. If he or she knows which patients are receiving the medicine and which the placebo, some form of nonverbal communication can subconsciously influence the patients, leading to biases in the study. The placebo effect works on many levels. The color, shape, and size of a pill affect its efficacy. Large capsules appear more effective than small ones, yellow capsules are associated with a stimulating and antidepressant effect, while green ones soothe pain (Buckalew and Coffield 1982). Moreover, blue pills act best as sedatives (Blackwell et al. 1972), although they induce insomnia in Italian men (Vallance 2006). Remarkably, placebo medication works better if the patients are told that they are taking a placebo and the effect is explainedFootnote 22 (Kelley et al. 2012; Locher et al. 2017). These are now called open-label placebos. Some neurophysiological and genetic underpinnings of the placebo effect are emerging (Hall et al. 2015). Strikingly, even placebo surgery works: real surgery for osteoarthritis of the knee proved no more effective than a simulated procedure using only skin incisions (Moseley et al. 2002).
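To see why the doctor must also be blinded, consider a minimal simulation sketch in Python (all numbers, including the size of the assessor’s unconscious bias, are hypothetical assumptions, not values from the cited studies). The simulated drug has no true effect beyond placebo, yet an apparent effect materializes as soon as an unblinded assessor rates the treatment arm slightly higher:

```python
import random
import statistics

random.seed(42)

def run_trial(n_per_arm=500, assessor_blinded=True):
    """Simulate one trial; return the mean rated improvement per arm."""
    # Both arms improve identically: the drug adds nothing beyond placebo.
    drug = [random.gauss(1.0, 0.5) for _ in range(n_per_arm)]
    placebo = [random.gauss(1.0, 0.5) for _ in range(n_per_arm)]
    if not assessor_blinded:
        # Hypothetical unconscious bias: an unblinded assessor rates
        # patients known to receive the drug 0.2 units higher.
        drug = [x + 0.2 for x in drug]
    return statistics.mean(drug), statistics.mean(placebo)

for blinded in (True, False):
    d, p = run_trial(assessor_blinded=blinded)
    print(f"assessor blinded: {blinded!s:5}  drug: {d:.2f}  "
          f"placebo: {p:.2f}  apparent effect: {d - p:+.2f}")
```

The “apparent effect” in the unblinded run is pure artifact; randomization combined with double-blinding is precisely what removes it.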

On the one hand, the effectiveness of traditional medication appears to be declining. The medical term tachyphylaxis describes a sudden and unexplained decrease in the response to an administered drug. These developments are worrying the pharmaceutical industry (Wired Magazine 2009):

It’s not only trials of new drugs that are crossing the futility boundary. Some products that have been on the market for decades, like Prozac, are faltering in more recent follow-up tests. In many cases, these are the compounds that, in the late ’90s, made Big Pharma more profitable than Big Oil. But if these same drugs were vetted now, the FDA might not approve some of them. [...]

The fact that an increasing number of medications are unable to beat sugar pills has thrown the industry into crisis.

But on the other hand, the placebo effect seems to somehow be getting stronger (Tatera 2015):

“If 40 percent of people recover from a chronic illness without a medication, I want to know why,” Jon-Kar Zubieta, study lead and chair of the Department of Psychiatry at the University of Utah, said in the press release. “And if you respond to a medication and half your response is due to a placebo effect, we need to know what makes you different from those who don’t respond as well.”

While the placebo effect might appear to be a strange phenomenon, it simply documents the body’s innate ability to heal itself. Even more bizarre is the nocebo effect, where your beliefs actually harm your body. In one case study, a patient required emergency medical intervention (Reeves et al. 2007):

A 26-year-old male took 29 inert capsules, believing he was overdosing on an antidepressant. Subsequently, he experienced hypotension requiring intravenous fluids to maintain an adequate blood pressure until the true nature of the capsules was revealed. The adverse symptoms then rapidly abated.

The negative effects of Voodoo spells could be linked to the nocebo phenomenon. Indeed, the recent rise of reported nonceliac gluten sensitivity could also be a collective nocebo effect (Di Sabatino and Corazza 2012). The human mind’s ability to harm the “host” it is living in is truly astonishing. What is the evolutionary benefit of allowing organisms to will themselves into a state of reduced fitness? In conclusion (Vallance 2006):

Research into the placebo effect has proved it to be a vastly complicated phenomenon, encompassing complex interactions of conscious and unconscious psychosocial processes. Furthermore, the phenomenon stretches intriguingly across the mind/body gap, a conceptual divide that has plagued scientists and philosophers since Descartes and beyond. Functional neuroimaging may not fully resolve this philosophical conundrum, but it is beginning to unravel some of the placebo effect’s neurobiological components; already several candidate areas and processes are being proposed. Although research so far remains hampered by small samples and lack of replication, it will undoubtedly continue to reveal useful insights.

Perhaps the increasing research on the effects of meditation—even simple mindfulness—can also help unravel more of the mystery (Lutz et al. 2004).

4.1 Free Will

Free will is a very intuitive notion. Indeed, even discussing the topic appears ridiculous, as the counter-thesis seems preposterous. With no free will, who or what is making decisions, and why? Unfortunately, the interpretations of quantum mechanics and neuroscientific insights force us to critically reevaluate the issue. In summary (Eagleman 2011, p. 220f.):

Do human minds interact with the stuff of the universe? This is a totally unsolved issue in science, and one that will provide a critical meeting ground between physics and neuroscience. Most scientists currently approach the two fields as separate, and the sad truth is that researchers who try to look more deeply into the connections between them often end up marginalized. Many scientists will make fun of the pursuit by saying something like “Quantum mechanics is mysterious, and consciousness is mysterious; therefore, they must be the same thing.” This dismissiveness is bad for the field. To be clear, I’m not asserting there is a connection between quantum mechanics and consciousness. I am saying there could be a connection, and that a premature dismissal is not in the spirit of scientific inquiry and progress. When people assert that brain function can be completely explained by classical physics, it is important to recognize that this is simply an assertion—it’s difficult to know in any age of science what pieces of the puzzle we’re missing.

Free Will in Quantum Mechanics

Recall Sects. 4.3.4 and 10.3.2 on quantum mechanics,Footnote 23 specifically Sect. 10.3.2.2 on the interpretation of quantum physics. Before addressing the quantum challenges, a brief overview of the status of free will in physics follows. Isaac Newton’s theory of classical mechanics (Sect. 2.1.1) is the starting point (Lewis 2016, p. 145):

Historically, the greatest challenge to free will has been determinism. If the physical world is like a giant clockwork, what room is there for free human action?

Then, Albert Einstein’s theory of special relativity merged space and time into the space-time continuum (Sect. 3.2.1). This gives rise to an atemporal block universe, in which time, and thus free will, is an illusion (Sect. 10.4.2).

The rise of quantum theory gave free will a new chance (Lewis 2016, p. 145):

So it is not surprising, then, that the advent of quantum mechanics was hailed by many as giving a physical underpinning for free will, since quantum mechanics provides the first suggestion that determinism might fail at the fundamental physical level.

The inherent randomness of the subatomic level of reality appeared to be hospitable to the notion of free will. Unfortunately (Zeilinger 2010, p. 266):

It does not at all follow that quantum randomness explains free will as is often stated.

Moreover, it is the phenomenon of quantum entanglement (Sect. 10.3.2.2) that appears to break causality, since one can no longer distinguish between “cause” and “effect” (Stefanov et al. 2002). A causal ordering of events is a prerequisite for free will. As so often, nature refuses to give clear and unambiguous answers. In the end, the discussion comes down to personally held views (Zeilinger 2010, p. 266):

In the experiment on the entangled pair of photons, Alice and Bob are free to choose the position of the switch that determines which measurement is performed on their respective particle. [...] This fundamental assumption [that choice is not determined from the outside] is essential to doing science. If this were not true, then, I suggest it would make no sense at all to ask nature questions in an experiment [...].

However, the very nature of a measurement in quantum mechanics is problematic. Specifically, it is unclear how (or whether) the spread-out probability wave function, evolving according to Schrödinger’s equation, collapses into a randomly located point as a result of the measurement. This appears to give an observer the power to influence objective reality. See Sect. 10.3.2.2 for alternative, albeit no less unsettling, interpretations.
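In generic textbook notation (a standard formulation, not specific to this book), the tension can be stated compactly. Between measurements, the state $|\psi(t)\rangle$ evolves smoothly and deterministically under the Hamiltonian $\hat{H}$; a measurement of an observable with eigenstates $|a_n\rangle$ then discontinuously and randomly selects one outcome according to the Born rule:
\[
i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle,
\qquad
p(a_n) = \left|\langle a_n|\psi\rangle\right|^2,
\qquad
|\psi\rangle \xrightarrow{\text{measurement}} |a_n\rangle .
\]
The first rule is deterministic and reversible, the second stochastic and irreversible; it is the unexplained handover between the two that invites speculation about the observer’s role.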

Muddying matters even more is the “free will theorem” (Conway and Kochen 2006).Footnote 24 In detail (Maudlin 2011, p. 252):

This paper contains the astonishing claim of a derivation from the predictions of quantum theory to the conclusion “if indeed there exist any experimenters with any modicum of free will, then elementary particles must have their own share of this valuable commodity.”

For the ensuing technical debates, see Conway and Kochen (2007, 2009).

In the end, we appear to be facing two radical and contradictory options regarding free will. Either it does not exist at all, or everything has free will—including electrons. For further reading, see, for instance, Aaronson (2016).

Free Will in Neuroscience

There are certain contexts in which the free will of a person is restricted. For instance, sufferers of Tourette’s syndrome are not free to stop swearing. Split-brain patients can develop alien hand syndrome (Eagleman 2011, p. 163f.):

While one hand buttons up a shirt, the other hand works to unbutton it. When one hand reaches for a pencil, the other bats it away. No matter how hard the patient tries, he cannot make his alien hand not do what it’s doing.

The crux of the question is now whether all of our actions are simply the result of neural activity—the outcome of the subconscious subroutines autopiloting decisions—or if there is a free will unconstrained by biology. To summarize (Swaab 2014, p. 327):

Our current knowledge of neurobiology makes it clear that there’s no such thing as absolute freedom. Many genetic factors and environmental influences in early development [...] determine the structure and therefore the function of our brains for the rest of our lives. As a result, we start life not only with a host of possibilities and talents but also many limitations [...].

Moreover, it appears that most of our actions are not a result of free will (Eagleman 2011, p. 167):

In the 1960s, a scientist named Benjamin Libet placed electrodes on the heads of subjects and asked them to do a very simple task: lift their finger at a time of their own choosing. They watched a high-resolution timer and were asked to note the exact moment at which they “felt the urge” to make the move. Libet discovered that people became aware of an urge to move about a quarter of a second before they actually made the move. But that wasn’t the surprising part. He examined their EEG recordings—the brain waves—and found something more surprising: the activity in their brains began to rise before they felt the urge to move. And not just by a little bit. By over a second. In other words, parts of the brain were making decisions well before the person consciously experienced the urge.

Libet’s experiment is one of the most famous in neuroscience (Libet et al. 1983). While it has been reproduced many times, and some criticism has been addressed (Matsuhashi and Hallett 2008), it remains controversial. But the evidence is mounting. In one experiment, subjects were told that they could freely choose between adding or subtracting numbers appearing on a screen. By analyzing their fMRI data, the researchers could accurately predict which choice would be made several seconds before the subjects reported having consciously made a decision (Soon et al. 2013).
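The logic of such decoding studies can be illustrated with a toy sketch in Python with NumPy (the data are synthetic and the nearest-centroid decoder is a simple stand-in, not the actual analysis pipeline of Soon et al.). A decoder is trained on activity patterns recorded before the reported decision time and tested on held-out trials; above-chance accuracy means the early patterns already carry information about the upcoming choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Synthetic "activity patterns" sampled seconds before the reported urge;
# a weak choice-related signal is baked in to mimic the reported finding.
choices = rng.integers(0, 2, n_trials)               # 0 = add, 1 = subtract
signal = np.where(choices[:, None] == 1, 0.3, -0.3)  # weak per-voxel bias
patterns = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + signal

# Train a nearest-centroid decoder on the first 150 trials.
train, test = slice(0, 150), slice(150, None)
centroids = [patterns[train][choices[train] == c].mean(axis=0) for c in (0, 1)]

def decode(x):
    """Predict the choice whose training centroid lies closest to pattern x."""
    return int(np.argmin([np.linalg.norm(x - c) for c in centroids]))

preds = np.array([decode(x) for x in patterns[test]])
accuracy = (preds == choices[test]).mean()
print(f"decoding accuracy on held-out trials: {accuracy:.0%} (chance: 50%)")
```

Note that above-chance decoding shows only that the early activity is predictive of the choice; whether it is causally decisive is precisely what the philosophical debate is about.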

Overall, neuroscientists and philosophers of mind agree that free will is a contestable notion and that it is illusory in many contexts. Ironically (Gazzaniga 2011, p. 114):

The belief that we have free will permeates our culture, and this belief is reinforced by the fact that people and societies behave better when they believe that is the way things work.

Yet again, we face an existential dilemma. All our intuitions and beliefs about the world center around the existence of our self and the free will accompanying it. However, experimental results expose these ideas as flawed or false. But not everyone agrees (Metzinger 2009, p. 126f.):

As noted previously, the philosophical spectrum on freedom of the will is a wide one, ranging from outright denial to the claim that all physical events are goal-driven and caused by a divine agent, that nothing happens by chance, that everything is, ultimately, willed. The most beautiful idea, perhaps, is that freedom and determinism can peacefully coexist. [...] Determinism and free will are compatible.

[...]

Probably most professional philosophers in the field would hold that given your body, the state of your brain, and your specific environment, you could not act differently from the way you’re acting now—that your actions are preordained, as it were. [...] This is a widely shared view: It is, simply, the scientific worldview. The current state of the physical universe always determines the next state of the universe, and your brain is a part of this universe.

[...]

If we take our own phenomenology seriously, we clearly experience ourselves as beings that can initiate new causal chains out of the blue—as beings that could have acted otherwise given exactly the same situation. The unsettling point about modern philosophy of mind and the cognitive neuroscience of will, already apparent even at this early stage, is that a final theory may contradict the way we have been subjectively experiencing ourselves for millennia. There will likely be a conflict between the scientific view of the acting self and the phenomenal narrative, the subjective story our brains tell us about what happens when we decide to act.