Introduction

The history of the brain as an object of scientific investigation can be viewed as a prime example of how scientific concepts are embedded within, and guided by, the broader cultural background of their time. In the Western tradition, different models of brain structure and function were constructed at different times; these can be considered attempts towards global theories of the brain.

Views on the brain changed from the pneumatic model, which dominated from Greek and Roman times through the seventeenth century, to other forms of mechanical models, including the electronic and computing models of our time. These models were the product of incomplete knowledge of brain structure and function (and this knowledge remains far from complete), complemented with elements from other scientific fields. Fortunately, facts are stubbornly independent of theories; as soon as new facts came to light, new models of the brain were constructed.

In what follows I will discuss the metaphor which provides the ground for most models of the brain, that of the brain-as-a-machine. This metaphor has its roots in the triumphant materialism/mechanism of the seventeenth and eighteenth centuries. Nevertheless, it still deeply influences contemporary culture, at least in the West. Indeed, although brain models are influenced by the leading scientific and philosophical trends at the time of their origin, they have a powerful and long-lasting impact on culture. By this I mean that they influence the self-image of man and therefore the rules and practice of human interactions, which are fundamental constituents of culture. Indeed, no other science seems to be as important as the neurosciences in constructing our image of man, that is, in defining what is human. That is because understanding the brain directly addresses the questions of the physical substrates and processes underlying typical human traits, including perceptions, memories, feelings, beliefs, values, decisions, actions, etc., which have been the goal of philosophical quests, often of religious choices, and sometimes of evolutionary or historical interpretations.

The mechanical brain

Julien Offray de Lamettrie (1748), a French medical doctor, published in Leiden "L'homme machine" (man as a machine). As G. Preti writes in the introduction to his Italian translation (1955), the book is interesting not so much for its intrinsic value but "as a document and expression of a time, of an epoch, of a mentality", in other words of a culture. Indeed, the book documents what, at that time, was a very controversial attempt to interpret the living being, including man, in purely mechanistic terms. Throughout the book, the observations and the theoretical interpretations are intermixed, and therefore the main lines of thinking are difficult to extract. The starting point is the observation that the states of the "soul" are modified by states of the body, including diseases but also sleep/wakefulness, diet and climate, and that there is a parallelism between somatic and psychic characters. A second argument is provided by the structural similarities in the organization of the nervous system of animals, aligned in a kind of descending Scala Naturae of cerebral complexity, from man to monkey, beaver, elephant, dog, fox, cat, etc. Eventually:

Since all the faculties of the soul depend on the particular organization of the brain and on that of the whole body, to the point that they are nothing else than this organization, here appears a machine! (Here and below, my translations from the Italian)

Yes, but which kind of machine? Well, an eighteenth-century machine, consisting of metaphorical springs and wheels and endowed with an intrinsic property of motion and of response to stimuli:

"The sight or the mere idea of an attractive woman elicits particular motions and desires", but also, "the light can contract the pupil, and an isolated muscle when touched can contract," etc.

Ascribing mechanical features to the human mind raised several fundamental problems of a philosophical and ethical nature then, as it does nowadays. One concerns the foundation of the "humanistic values", including "freedom, responsibility, rational choice, love, beauty, and human uniqueness" (Olviedo 2011). De Lamettrie believed that ethical principles originate from a natural law, inscribed in all animals albeit to a different degree. For a debate on the natural foundation of ethics see Stent (1977). The other difficulty is the assessment of responsibility for criminal acts: are they crimes or pathologies? According to De Lamettrie:

It would be desirable that excellent medical doctors would be the judges. Only they would be capable of distinguishing between the guilty and the innocent criminal; if reason is enslaved to a degenerated sense or in furor, who can control it?

Recent difficulties in assessing the responsibility of a mass murderer in Norway with the help of distinguished psychiatrists (The Guardian, August 24, 2012) seem to question whether current understanding of brain-psyche relationships is up to the task.

The discovery of reflex responses of the spinal cord, probably initiated by Whytt with work published in 1751 and 1765 and continued by Procházka (1784) and Hall (1832–1857), could have definitively endorsed a mechanistic view of the nervous system, but it did not. Müller, for example, believed that in the intact animal conscious sensation was always involved in reflex responses. Moreover, the discussion broadened to whether the concept could be extended to cerebral reflexes (references above; for a thorough historical account of reflex action see Clarke and Jacyna 1987).

What characterizes a machine in the seventeenth, eighteenth and nineteenth centuries, but also today, is the reliability and predictability of its response to a given input. Since the brain is obviously complicated, models of the brain tend to be inspired by the most complicated machines known at a given time in history, in a given culture.

The pneumatic model of the brain of classical Roman, Arab and mediaeval medicine, and the associated ventricular theory, with one ventricle for memory, one for imagination and one for perception (the sensorium communis), was still alive well into the sixteenth century. Its longevity was at least in part due to the support of the Church, which judged the animal spirits and the psychic pneuma which animated the ventricles to be compatible with the theory of the soul. The sensorium communis, rather surprisingly, survived well into the nineteenth century (Müller 1841–1844, quoted by Clarke and Jacyna 1987), a precursor, perhaps, of the multimodal sensory areas of contemporary physiology.

The model percolated into the seventeenth and eighteenth centuries, when it became a pneumo-hydraulic model which assumed the occurrence of pressure waves along hollow nerves. The hydraulic model of brain function seems related to two cultural success stories of the time: the development of hydraulic machines, including complex hydraulic automata (see Finger 1994), and the discovery of the circulation of the blood by Harvey, published in De motu cordis (1628).

The discovery of animal electricity by Galvani, the foundation of electrophysiology (essentially the study of brain function by electrical methods), and the demonstration that axons are not hollow disproved the hydraulic theory and paved the way for the cybernetic/electronic model of the brain, with neurons represented as assemblies of resistors and capacitors (McCulloch and Pitts 1943; Rall 1962). The construction of the Perceptron (Rosenblatt 1958) was an important attempt to simulate brain function based on the electronic/cybernetic model.
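To give a flavor of that modelling style, below is a minimal sketch in Python of a McCulloch-Pitts-style threshold unit together with a single Rosenblatt-style learning step; the weights, threshold, learning rate and the toy AND-like task are my own illustrative choices, not reconstructions of the original devices.

```python
# Minimal sketch of a McCulloch-Pitts-style threshold unit and one
# perceptron learning step. Weights, inputs and learning rate are
# illustrative choices, not values from the cited papers.

def threshold_unit(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

def perceptron_update(inputs, weights, target, learning_rate=0.1, threshold=0.5):
    """One Rosenblatt-style update: nudge the weights toward the target output."""
    output = threshold_unit(inputs, weights, threshold)
    error = target - output
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

# Example: a unit that should respond only when both inputs are active (AND-like).
weights = [0.2, 0.2]
for _ in range(20):  # a few passes over the four input patterns
    for inputs, target in [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]:
        weights = perceptron_update(inputs, weights, target)
print(weights, threshold_unit([1, 1], weights, 0.5))  # learned weights; fires for [1, 1]
```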

It should be mentioned that axons are, in fact, hydraulic elements as well, with bidirectional transport of all kinds of molecules. But in contrast to water pipes and blood vessels, the energy for the motion is not generated by a pump but by local molecular transporters along the axon. The axonal flow maintains a bidirectional molecular continuity between the cell body, the axon, and the synapses far away. This continuity is information-loaded, since molecular transport from the cell body continuously updates structural elements of the axon and of the synapses. In the other direction, molecular information from the axonal terminals modifies the cell body, to the point of determining its death or survival.

Axons, therefore, unlike electric cables, are also pipes. Incidentally, this feature allowed an enormous revolution in the study of brain connectivity by exploiting the rather promiscuous axonal transport of horseradish peroxidase, plastic beads, amino acids, fluorescent molecules, dextran, etc., i.e., by neuroanatomical tracing methods (reviewed in Vercelli et al. 2000).

Today, the computer metaphor of the brain seems unavoidable.

The computing brain

In the eighties and early nineties, Marr (1982), the PDP research group (1986), and Churchland and Sejnowski (1992) forcefully advocated the notion that neural operations are computational in nature. Much of the issue lies with the very notion of computation, for which a universally accepted definition seems not to exist. Computation encompasses more than the numerical transformations according to mathematical rules that our school teachers try to convey, with questionable success. Rather, computation encompasses any transformation of symbols according to specified rules. The rules include the sequential instructions (algorithms) which control today's computers. The Turing machine, i.e., the "gedanken" device that can perform any transformation of symbols on the basis of unlimited sets of instructions, is both the precursor of today's computers and the ideal computational device. So, is a Turing machine an appropriate model for the brain?
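As an illustration of "transformation of symbols according to specified rules", here is a minimal Turing-machine sketch in Python; the transition table and the tape contents are toy choices of mine, meant only to make the notion of rule-governed symbol manipulation concrete.

```python
# A minimal Turing-machine sketch: a finite control, a tape, and a table of
# rules of the form (state, symbol) -> (new symbol, move, new state).
# The example program simply flips 0s and 1s and halts at the blank; the
# rules and tape are toy choices to illustrate rule-governed symbol change.

def run_turing_machine(tape, rules, state="scan", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        symbol = tape[head] if head < len(tape) else blank
        if (state, symbol) not in rules:        # no applicable rule: halt
            break
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape)

flip_rules = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    # no rule for ("scan", "_"), so the machine halts at the blank
}

print(run_turing_machine("0110_", flip_rules))  # -> "1001_"
```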

It is hard to deny that, according to the above broad definition of computation, the brain performs computational operations. Markram and colleagues added a twist to the computational style of the brain by proposing that the brain performs liquid computations, rather as the size, location and velocity of a fallen stone can be computed from the ripples it generates in a basin of still water (Maass et al. 2002).
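The flavor of this idea can be conveyed by a crude reservoir-computing sketch: a fixed, randomly connected "liquid" of units is perturbed by an input stream, and only a simple linear readout is fitted to the resulting ripples. The code below is a minimal illustration in that spirit, not the authors' implementation; the network size, scaling factors and the toy memory task are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps = 100, 500

# Fixed random recurrent "liquid"; rescaled so its dynamics stay stable.
W = rng.normal(0.0, 1.0, (n_units, n_units))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(0.0, 1.0, n_units)

u = rng.uniform(-1.0, 1.0, n_steps)        # input stream (the "falling stones")
states = np.zeros((n_steps, n_units))      # the "ripples" left in the liquid
x = np.zeros(n_units)
for t in range(n_steps):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Toy readout task: recover, from the current ripples, the input of 3 steps ago.
target = np.roll(u, 3)
readout, *_ = np.linalg.lstsq(states[10:], target[10:], rcond=None)
prediction = states[10:] @ readout
print("readout correlation:", np.corrcoef(prediction, target[10:])[0, 1])
```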

Computational approaches to brain studies have flourished. For example, we are witnessing the emergence of the new field of "computational neuroanatomy", which started as extrapolations from quantitative structural data to functional properties (Innocenti et al. 1994; Ascoli 1999; Bjaalie 2001; van Pelt et al. 2001) and now includes methodologies whereby structural properties can be identified with techniques other than the traditional anatomical/histological ones (Good et al. 2001; Hickok 2012). Applications of graph theory to brain connectivity have resulted in a vast literature interfacing human and animal structural and functional data (Sporns 2011).
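For a flavor of the graph-theoretical style of analysis, the sketch below applies standard measures (degree, clustering, characteristic path length) from the networkx library to a small, made-up connectivity matrix; the "areas" and their connections are invented for illustration only.

```python
import numpy as np
import networkx as nx

# A small, made-up binary connectivity matrix among five hypothetical areas,
# turned into a graph on which standard network measures are computed.
labels = ["V1", "V2", "MT", "PPC", "PFC"]
connectivity = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
])

graph = nx.from_numpy_array(connectivity)
graph = nx.relabel_nodes(graph, dict(enumerate(labels)))

print("degree:", dict(graph.degree()))
print("average clustering:", nx.average_clustering(graph))
print("characteristic path length:", nx.average_shortest_path_length(graph))
```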

Nevertheless, the notion of a computational brain is not unanimously accepted. Penrose (1995) has turned the metaphor on its head, essentially by applying the logical deduction that if the brain is a computing device, then all the brain does, including the generation of consciousness, should be reproducible by computation. Penrose argues that consciousness is not computable, and he supports the rather bizarre hypothesis that consciousness arises from some as yet undiscovered properties of microtubules and from quantum events (the so-called Hameroff-Penrose theory). The theory has been received with skepticism by most (e.g., Koch and Hepp 2006; Smith 2009; McKemmish et al. 2009, and references therein).

The author accepts the notion that the brain performs operations of a computational nature. But one cannot ignore some profound differences between the brain and man-made computational devices as we know them today.

Teleology of brains and computers

Brains, unlike man-made devices, were not constructed to solve any computationally tractable problem. Rather, the animal brain was tinkered together by evolution to solve specific problems of survival and reproduction in a given ethological niche, with a given body. In development, brain structure and function need validation by specific interactions with the environment. When these are missing, parts of the brain, and the functions they perform, atrophy. This is most clearly observed in situations in which animals (and occasionally humans) are deprived of the critical interactions with the environment at early stages of brain development (Innocenti 2007). Animals living in the same niche but with different bodies, e.g., an herbivore and a predatory carnivore, will solve similar problems differently. This entails that the brain is fitted to the animal body by a combination of genetic and epigenetic mechanisms. An example is the fact that the representation of the sensory organs of the skin is imprinted on the cerebral cortex during a specific period of development (Van der Loos 1978). Furthermore, the human brain does not come with prior knowledge of the body into which it will be fitted. In the course of development, brain structure is sculpted and refined by interactions with the body as well as with the environment. The knowledge of body ownership is probably acquired, and the brain can be misled into interpreting external tools as body parts even in adulthood (Quallo et al. 2009; Slater et al. 2009).

I propose that the fundamental problem the brain was designed to solve is the creation of "Gestalts", i.e., the extraction from the background of perceptual and cognitive entities with some degree of completeness and coherence, based on incomplete knowledge. The successful solution of this problem allows the detection of prey and/or predators in the natural environment. It also guides decisions in a large number of diverse situations, from the choice of a mating partner to that of going to war, choosing a career/profession, investing in financial assets, etc. As discussed elsewhere, connections among cortical (and subcortical) neurons implement the neuronal assemblies which embody the "Gestalts" (Innocenti 2010).

Software/hardware relations in brains and computers

Like Turing machines, computers operate on the basis of instructions specifying sequential operations to be performed on an input. Quite to the contrary, the operations performed by the brain are the result of its internal structure, that is, of the properties of the elements which constitute the brain, the neurons, and of the ways these elements are interconnected (see below). Moreover, the brain uses massively parallel processing lines. For example, about 1 million fibers convey information from the retina to the first relay station in the brain, the lateral geniculate body. From there onwards, the divergence of projections multiplies the number of neurons simultaneously or almost simultaneously involved in computing visual information by some unknown, large factor. The brain, however, also functions serially. Serial operations are performed along all the neural pathways connecting the sensory peripheries to the brain and the brain to the motor effectors, as well as between different brain structures.

The brain, unlike computers, consists of incredibly large numbers of non-identical elements

Neurons come in a broad variety of sizes, shapes, firing properties and genetic make-up. These neuronal differences are a hallmark of nervous system design, as epitomized by the nervous system of C. elegans, which consists of only 302 neurons but at least 118 types of neurons (White et al. 1986). The classificatory efforts of nineteenth-century neuroanatomists, in particular Ramón y Cajal, identified classes of neurons which share certain morphological features, e.g., the shape of their soma and dendrites and/or the distribution of their axons. For the cerebral cortex, the largest neuronal assembly in the brain, two classes of neurons were identified, the pyramidal and the non-pyramidal neurons. The pyramidal neurons, characterized by a roughly conical cell body, one apical dendrite originating at its top and sets of dendrites at its base, appeared to be a roughly homogeneous class. However, the size of the cell body, the distribution of the dendrites, the connections and some molecular markers vary from neuron to neuron. Pyramidal neurons also differ in the caliber of their axons, the part of the neuron which establishes contact with other neurons, and these differences have consequences for the speed at which neurons interact with each other (Caminiti et al. 2009; Innocenti 2011; Tomasi et al. 2012). The non-pyramidal neurons of the neocortex include elements which differ substantially in most of their properties (Markram et al. 2004).

Each element of the brain is a computational device

Each neuron performs transformations, i.e., computations, of the inputs it receives. The computational algorithms are not determined by external instructions but by the physical-molecular structure of the neuron, in ways which are incompletely understood. Simplifying, it can be said that at the level of the dendrites, the input stage of the neuron, the computational algorithms are determined by the diameter and length of the dendritic branches and the distribution of membrane receptors and ionic channels. At the output stage of the neuron, the diameter and length of the axon and of its branches, the spatial distribution of the synapses, as well as their size and molecular composition, determine the computation. Since neurons differ from each other in both their dendritic and their axonal arbors, they also perform different computations. If we take the example of the cerebral cortex, with an estimated 19 billion neurons in female brains and 23 billion in male brains (Pakkenberg and Gundersen 1997), we are clearly dealing with a network of computational devices beyond imagination; and at least as many neurons exist in the rest of the brain. Although each neuron is a computing element in its own right, most neural operations are collective computations, performed by neuronal assemblies. It is usually believed that the computing assemblies are flexible, i.e., task-dependent, and therefore ephemeral, since they are continuously created and dissolved, and a neuron can take part in different assemblies.
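A standard, highly simplified way to make this point concrete is the leaky integrate-and-fire model, in which the neuron's input-output transformation is fixed entirely by its physical parameters (membrane time constant, resistance, threshold) rather than by external instructions. The sketch below uses textbook-style illustrative values, not data from the works cited.

```python
# Minimal leaky integrate-and-fire sketch: the neuron's "algorithm" is set
# by its physical parameters, not by external instructions. Parameter values
# are textbook-style illustrative choices, not data from the cited studies.

def simulate_lif(input_current_nA, duration_ms=200.0, dt_ms=0.1,
                 tau_ms=20.0, resistance_MOhm=10.0,
                 v_rest_mV=-70.0, v_threshold_mV=-55.0, v_reset_mV=-70.0):
    """Return the spike times (ms) of a leaky integrate-and-fire neuron."""
    v = v_rest_mV
    spikes = []
    for step in range(int(duration_ms / dt_ms)):
        # membrane equation: tau * dV/dt = -(V - V_rest) + R * I
        dv = (-(v - v_rest_mV) + resistance_MOhm * input_current_nA) * dt_ms / tau_ms
        v += dv
        if v >= v_threshold_mV:
            spikes.append(step * dt_ms)
            v = v_reset_mV
    return spikes

# The same input produces different firing depending on the neuron's parameters:
print(len(simulate_lif(2.0)))               # baseline neuron
print(len(simulate_lif(2.0, tau_ms=10.0)))  # faster membrane -> more spikes
```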

Memory elements are embedded in each neuronal operation

This is due to the fact that the strength of the connections between two neurons changes over time, depending on the past history of the connections. Interestingly, this phenomenon, now called synaptic potentiation (or depression), was anticipated by Freud (1895), as noticed also by Centonze et al. (2004). The changes consist in modifications of the release of the molecules whereby neurons communicate with each other (neurotransmitters), in modifications of the molecules which bind the neurotransmitters, but also in modifications of the structure of the axonal arbors (Tamahachi et al. 2009). The changes can alter the balance between excitation and inhibition in neural assemblies. They can have adaptive consequences, for example by favoring functional recovery after lesions, as well as pathological consequences, as in the case of epileptic foci whose activity can progressively modify and impair brain function. Most brain operations leave a trace which can last for a lifetime. This trace is often inaccessible to consciousness and can have maladaptive consequences, which justifies the search for methods to modify or erase the trace, such as psychoanalysis, cognitive therapy or pharmacological interventions (for bridges between neuroscience and psychoanalysis see Ansermet and Magistretti 2004). Because this trace biases the computations performed in large neuronal assemblies distributed within the brain, it can be viewed as an algorithmic element akin to a piece of software. And, to the extent that some of the traces are established during the developmental period, they raise the old question of the relative roles of nature versus nurture in determining brain structure and function. The answer is obviously important for contemporary culture in areas as different as education and the assessment of individual/criminal responsibility.
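A toy illustration of how memory can be embedded in connection strength is a generic Hebbian update rule, sketched below; it is a textbook-style caricature, not the specific cellular mechanisms discussed in the papers cited here.

```python
# Toy illustration of memory embedded in connection strength: a generic
# Hebbian rule, strengthening a synapse when pre- and postsynaptic activity
# coincide and weakening it when the presynaptic neuron fires alone.
# Rule and parameters are illustrative, not the mechanisms cited above.

def update_weight(weight, pre_active, post_active,
                  potentiation=0.05, depression=0.02, w_max=1.0, w_min=0.0):
    if pre_active and post_active:
        weight += potentiation          # coincident activity -> potentiation
    elif pre_active and not post_active:
        weight -= depression            # presynaptic firing alone -> depression
    return max(w_min, min(w_max, weight))

# The synapse's final strength depends on the history of activity it has seen:
w = 0.5
history = [(1, 1), (1, 1), (1, 0), (1, 1), (0, 1), (1, 0)]
for pre, post in history:
    w = update_weight(w, pre, post)
print(w)   # a trace of the past left in the connection
```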

The brain is a slow processor. Computers are fast

This seems paradoxical in view of the fact that brains perform effortlessly operations that the fastest computers cannot match. In the course of evolution the brain enlarged progressively, from about 400 cc in Australopithecus to the current roughly 1,500 cc. Because the diameter, and hence the conduction velocity, of cortical axons did not keep up with this volumetric enlargement, the brain became a progressively slower processing device (Caminiti et al. 2009). One possibility is that the brain takes advantage of the slow processing, because this allows expanding the time span of processing "windows", hence enriching cortical dynamics (Caminiti et al. 2009; Innocenti 2011).
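A back-of-the-envelope illustration: conduction delay is simply path length divided by conduction velocity, so if axonal velocity does not scale with brain size, delays grow with the brain. The figures below are indicative orders of magnitude only, not measurements from the cited studies.

```python
# Back-of-the-envelope illustration: conduction delay = path length / velocity.
# Path length and velocities are indicative orders of magnitude only.
path_length_m = 0.15                            # a long cortico-cortical path
for velocity_m_per_s in (2.0, 10.0, 50.0):      # thin unmyelinated to thick myelinated axons
    delay_ms = path_length_m / velocity_m_per_s * 1000.0
    print(f"{velocity_m_per_s:5.1f} m/s -> {delay_ms:5.1f} ms")
# 2 m/s -> 75 ms, 10 m/s -> 15 ms, 50 m/s -> 3 ms: unless velocity keeps up
# with brain size, a larger brain is a slower one.
```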

The brain is self-organizing and can, to some extent, self-repair

The constituents of the brain (neurons, non-neuronal elements, blood vessels, etc.) attain their structure, position, reciprocal relations and, in the case of neurons, their interconnections over a more or less protracted (depending on the species) developmental period, in the absence of an external craftsman. This entails an extremely complex set of interacting operations (Fig. 1, and below).

Fig. 1 An illustration of network causality, applied to development (Weiss 1955). Notice that an at least equally complex network of molecular interactions underlies such a causal network. Reproduced from Innocenti (1990)

Much is made in current clinical publications of the self-repairing properties of the brain, usually labeled "neural plasticity". Indeed, we have had the opportunity of studying, in a few patients, a remarkable functional recovery in spite of profound alterations of cortical structure in early development (Kiper et al. 2002; Zesiger et al. 2002). The possibility of structural and functional repair is not lost in adulthood (Tamahachi et al. 2009; Gilbert and Li 2012). The mechanisms underlying self-repair are usually not known, although there are some candidates.

The brain is capable of free decisions, at least sometimes

Surprisingly, this has become a controversial issue, which would, in itself, justify at least one full paper. Conclusions on the existence or non-existence, and on the nature, of free will have obvious consequences for ethics and law. In this domain it is difficult to ignore one's personal subjective experience that our choices are indeed free, although often prepared by subconscious brain processes, and that they sometimes have to run against severe external or internal constraints. It seems easier, and wiser, to ignore for now statements on the absence of free will that are based on our contemporary primitive and incomplete scientific knowledge of physical reality and of the brain, and on philosophical conundrums.

Genes, brains and behavior

Sociobiology has promoted a further mechanistic view of brain function by implying that behavior is the expression of "selfish genes" whose purpose is self-replication (Dawkins 1976). This controversial notion of the relationship between genes, brain and behavior has percolated into culture to the point that it can be found in at least one recent novel (1Q84; Murakami 2011). The implicit mechanistic view of brain function corresponds to a growing contemporary scientific trend, as suggested by the fact that under the entry "genes and behavior" PubMed reports 317 papers in 1992, 1,175 in 2002 and 1,771 papers up to October 2012. Genetic components have been identified in a variety of behavioral traits, including political attitudes (Hatemi and McDermott 2012), antisocial behavior (Tielbeek et al. 2012), addiction (Ducci and Goldman 2012) and social trust (Oskarsson et al. 2012), and the search for the genetic basis of psychosis is continuing (e.g., Kang et al. 2012; Bennett 2011).

A relatively close relationship seems to exist between genes and behavior in the bee. Different behavioral states correspond to different constellations of gene expression; moreover, gene expression appears to precede and causally control behavior (Zayed and Robinson 2012). The relationship is far more distant in humans. Preuss (2012) has carefully reviewed the work on the FOXP2 gene, once believed to be closely associated with language and speech. Since a considerable amount of work, including mutagenesis in animals, has been performed on this gene, the conclusions thus far provide a paradigmatic illustration of the relations between genes and behavior in humans. As Preuss wrote: "This conclusion merely restates two of the important lessons of experimental population genetics: first, that most phenotypes arise through the interactions of multiple genes (the principle of epistasis), and, second, that most genes influence multiple phenotypes (the principle of pleiotropy)". Similar conclusions can be drawn from the studies on genes and human behavior quoted above.

The core of the problem seems to be that one is dealing with at least three very complex causal networks cascading onto each other. The lowest-level causal network is the genetic regulatory network: genes interact with each other, often indirectly, through multiple molecular nodes. The second level is the network of interacting developmental processes that Weiss (1955) illustrated in a classic diagram (Fig. 1). Such a network causes system-wide repercussions of local disturbances in the developing brain (Payne 1999). In addition, it has been pointed out that the network is intrinsically undetermined, i.e., the outcome of neural development cannot be predicted solely from the interaction of genes and environment (Clarke 2012). The third level is the neural networks responsible for behavior. Each of these networks needs to be specified in terms of its components and of the dynamic interactions among them. We seem to be far from reaching these targets. Within these complexities lurks the concrete possibility of non-linear, chaotic behaviors, that is, of experiencing unimaginable amazement and wonder, the fun part of being a scientist.

Conclusions

There seem to be no alternatives to the mechanical metaphor which guides our exploration and understanding of the brain. And it might be this lack of alternatives which fuels the worried opposition of those who wish to draw a sharp dividing line between machines and humans. This recalls the opposition De Lamettrie experienced with his "L'homme machine", Changeux (1983) with "L'homme neuronal", or that Darwin confronted when he erased the reassuring frontier between man and animals. I have tried to illustrate how profoundly different the brain is from the most complicated machines of today, the computers. In these differences I see the need to redefine what a machine can be: a machine that we might never be able to construct. Such a machine should be capable of generating culture, including moral and esthetic judgments, and should experience joy and sorrow, friendship and love. Toward the end of his career, having contributed to different fields of neuroscience probably more than most scientists of the last century, Roger Sperry wrote "Changing priorities" (1981), a strong plea for the role of neuroscience in founding human values, based on the emergence of consciousness and free will from the complexity which is one of the hallmarks of the human brain.