We have seen how the evolution of the universe brought exploitable energy, complexity and finally, at least on our planet, life. In this chapter we will see why life had to evolve beyond unicellular organisms, creating complex organisms and then aggregates of those organisms – societies.

Intelligence Needs Energy

In order to understand the evolution of life in terms of the subjects covered in Chap. 1, let’s take another look at the example of the container with a semi-divider wall between two compartments:

figure a

If the system is microscopic, thermal disturbances will make balls pass from one side to the other. We can make the system more stable by using heavier balls, but in this case we’ll need more energy to change the state of the system and store some information.

The system with light balls is more efficient, but also fragile. The other is stronger, but inefficient. Another trick could be using several systems of light balls to store the message in a redundant way, but in the end we’d have to use just as much energy as in the heavy ball system.

In nature there are robust “heavy ball” information tools such as DNA and other more agile “light ball” ones like the nervous system. All of them need energy to be set to, or remain in, a particular state – a cost that is unrelated to the information the state carries.

A particular state to which we associate a meaning is what we call syntax. Syntax has to do with the structure of the message: it is the substrate on which we put a message, but has nothing to do with its value. With 5 bits we can pass on a message that reduces uncertainty (the next number on a roulette wheel where the ball will land, see Appendix 2) or not (a number where the ball won’t land). But to store these 5 bits we need energy.
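The 5-bit figure can be checked with a quick back-of-the-envelope calculation: identifying one outcome among n equally likely ones takes log2(n) bits, and a European roulette wheel has 37 pockets. A minimal sketch (the function name is ours):

```python
import math

# Bits needed to identify one outcome among n equally likely outcomes.
def bits_needed(n_outcomes: int) -> float:
    return math.log2(n_outcomes)

# A European roulette wheel has 37 pockets (0-36).
print(f"{bits_needed(37):.2f} bits")  # ~5.21 bits
```

Strictly, 5 bits distinguish only 2^5 = 32 outcomes, so an exact encoding needs 6 bits; 5 bits is the right order of magnitude.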

As it costs energy to build and use syntax, regardless of its utility, every intelligent system has to make sure every processed bit brings information, i.e. reduction of uncertainty about the environment. Intelligent systems must be able to change their mind according to observation.

We do that quite easily for simple cases. For example, how likely is it that a coin is biased if after 100 tosses it comes up heads 51 times and tails 49? Not very likely, as there aren’t that many biased coins around and the ratio of heads to tails we observed is close to 1. But if it comes up heads 89 times and tails just 11 it makes sense to consider the hypothesis that the coin is biased as likely.
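This informal reasoning is Bayesian inference. A minimal sketch, with illustrative assumptions not taken from the text (a prior of 1 in 1000 that a coin is biased, and a biased coin landing heads 80% of the time):

```python
from math import comb

def posterior_biased(heads, tails, prior=0.001, p_biased=0.8):
    """Posterior probability that the coin is biased, given the tosses observed.
    `prior` and `p_biased` are illustrative assumptions, not from the text."""
    n = heads + tails
    like_fair = comb(n, heads) * 0.5**n
    like_biased = comb(n, heads) * p_biased**heads * (1 - p_biased)**tails
    num = like_biased * prior
    return num / (num + like_fair * (1 - prior))

print(posterior_biased(51, 49))  # essentially zero: no reason to suspect bias
print(posterior_biased(89, 11))  # close to one: bias is now the likely hypothesis
```

The same observation thus changes our mind by very different amounts depending on how far it departs from what a fair coin would produce.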

This seems obvious, but while it’s relatively easy for one brain to change opinion, it’s not so easy for a network of brains to do the same. We quite rightly refer to a company’s DNA, not its frontal lobes. “Organizational capabilities are difficult to create and costly to adjust” (Henderson and Clark 1990), in the same way as for DNA.

Unfortunately, an organisation created to extract energy itself needs energy. Not updating its knowledge – keeping the same syntax even when it’s no longer of value, when it produces no more energy – means accepting an energy investment portfolio that will run at a loss.

This is also what ageing means: storing syntax that provides less and less up-to-date information. Evolving, on the other hand, means updating the stored information on the basis of changes in the environment we live in. Life and evolution are a continuous compromise: attempting to keep complexity (and therefore also energy requirements) to a minimum, but not to such an extent that we fail to store enough valid information.

As the apocryphal quotation from Albert Einstein goes: everything should be as simple as it can be, but not simpler. As the original says, observation of the environment will set the limit beyond which we must not simplify: “… the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” (Einstein 1934).

Sleep, Death and Reproduction

Because of the inevitable build-up of useless syntax, every intelligent system has three possibilities: learn more, forget what’s useless, or die.

Learning means storing (and being able to process) new, valid information that effectively reduces uncertainty – information whose storage costs energy, but which lets the system acquire new energy from the environment. Forgetting means freeing up memory from syntax that’s no longer used, syntax that requires energy without producing anything in exchange.

Forgetting is normally done through periods of reduced activity in which the system works on its internal reorganisation, processing less external information.

This is true for every complex system à la Herbert Simon, even non-intelligent ones. Some computer operating systems need to be periodically rebooted because they cannot manage the level of superfluous complexity they produce while operating. To postpone the need for rebooting, computers “reorganise themselves” during periods of scarce activity (as is the case with Microsoft Windows Disk Defragmenter).

Organisms with a central nervous system need sleep: while the role sleep plays isn’t fully understood, it’s widely accepted that one of its functions is to prune many of the synapses created while the organism is awake (Li et al. 2017).

Unfortunately, sleep, like the defragmenter, can only put off the inevitable: death. In time, a computer will accumulate more and more useless syntax. The same goes for a car, a human being, or a civilisation. It’s the natural order of things, and can be well understood in the information-energy framework. As some energy opportunities are exploited, they no longer arise again: for a lion it may be prey that no longer follows a certain trail to a water hole, for a company it may be a market no longer willing to pay a price higher than the cost of production. A lion lying in wait for prey that doesn’t show, or a company manufacturing obsolete products, and storing the information necessary to do so, continues to require the same amount of energy as before, but with no energy return.

The history of management is full of stories of companies that, at the very peak of their success, collapsed disastrously because they couldn’t shake up the company and introduce new production processes. Kodak is a glaring example. Rebecca Henderson, one of the first scholars of corporate organisation to realise how important information and in-house organisation are for companies to evolve (Henderson and Clark 1990), said the following after Kodak went bankrupt: “Kodak is an example of a firm that was very much aware that they had to adapt, and spent a lot of money trying to do so, but ultimately failed.” (Gustin 2012).

Interestingly, the company’s senior vice president had proposed the most sensible solution: “Directing its skills in complex organic chemistry and high-speed coating toward other products involving complex materials” (Shih 2016). This would have meant not sacrificing the huge amount of technical know-how the company had accumulated over the years, but it did require significant changes to the in-house organisation, which in the end proved impossible.

The last option was death. We see death as a tragedy, because most of the information stored by the organism seems lost to us. But this cannot be completely true: if information were irreversibly lost when an organism died, it would have been difficult for life to evolve. So there had to be a way to pass useful information on to future generations. The same is true for companies. The legacy of General Electric, which today is fighting to survive, will remain in terms of management innovation in the “DNA” of many other companies for years to come. Xerox’s research into the usability of computers paved the way for the success of companies like Apple and Microsoft.

Works of pure research stand out too: Claude Shannon was working for AT&T when he published his article on information theory in the company journal. Benoit Mandelbrot was working for IBM when he developed fractal mathematics, and today “Big Internet” business groups like Google publish fundamental studies on applied mathematics. Kodak left its legacy to human knowledge in the form of hundreds of scientific articles.

The same is true for individuals. There’s a legend that tells of the religious conversion of John von Neumann before he died, refuted however by the scientist’s brother (Hargittai 2008). Unlikely as it may seem, it is a plausible idea: how could the most intelligent mind of the time accept that all his knowledge would simply disappear the day he lost his battle with cancer? But the truth is that von Neumann’s knowledge has not been lost, thanks to the thousands of people who have soaked up at least part of his intellectual legacy. Thousands of articles in IT, economics, physics and mathematics contain references to von Neumann’s works.

Language and writing are Homo sapiens’ solution to the loss of information that would otherwise occur when people die.

Unicellular organisms learnt a similar trick billions of years ago. As mentioned above, DNA passes on information not only on how to construct the cell itself, a process perfected over hundreds of millions of years, but also on how to react in certain situations: something that can be learnt during a lifetime and passed on to descendants through epigenetic inheritance –“phenotypic variations that do not stem from variations in DNA base sequences are transmitted to subsequent generations of cells or organisms.” (Jablonka and Raz 2009).

The Limits of Unicellular Organisms

DNA is rightfully seen as a milestone in the evolution of life. Thanks to it, unicellular organisms found a way to store information and filter it through natural selection. More successful DNA strands were more likely to reproduce, less successful ones more likely to become extinct.

But cells soon reached the limits of this mechanism. In fact, the amount of information an isolated cell can store is limited by the fact that it cannot grow out of all proportion: in cells, nutritive material is only distributed by diffusion and the walls are made of protein chains that cannot extend over more than a certain surface area. For this and other reasons, the biggest unicellular organisms grow to just a few centimetres and these are very much a minority (Marshall et al. 2012).

A limit in size also limits information. To increase information capability, life stopped storing information in single cells – by now saturated – and started using networks. Although it occurred in different ways, first with amino acid networks and then with protein networks, life changed gear and introduced a new level of complexity. This didn’t take long: traces of bacteria 3.5 billion years old show that there were communication mechanisms even then (Decho et al. 2009; Lyon 2015; Zhang et al. 2012).

In recent decades the mechanism of bacterial communication, called quorum sensing, has revolutionised our idea of unicellular organisms: more than 99% of bacteria live in communities, so-called biofilms, where the organisms form “a complex web of symbiotic interactions” (Li and Tian 2012). Using quorum sensing, some bacteria (such as cholera) sense not only how many conspecifics are within signal range, but often also which and how many other species of bacteria they are sharing the environment with (Ryan, website; Ng and Bassler 2009).

Many types of bacteria use quorum sensing mechanisms to do more than simple census taking. The myxobacterium Myxococcus xanthus and Amoeba proteus are examples of unicellular organisms that form colonies very similar to multicellular organisms: “In a process likened to ‘the great animal herd migrations’ up to a million cells move toward aggregation sites where fruiting bodies form. Of the initial cell population, only 10–20% will transform into long-lasting, stress-resistant myxospores and survive to reproduce another day. A staggering 65–90% of the initial population collectively suicide, by rupturing their cell envelope (autolysis). … The cell fragments are assumed to provide carbon and energy for development. Another 10% of the population transform into special cells that remain on the periphery … these cells could be a kind of sentry, to prevent pillaging of the sacrificial feast and predation of the dormant myxospores” (Lyon 2007).

From Interaction to Cognitive Processes

We humans usually associate cognitive processes with just one lifeform: the one we call sapiens (Lyon 2015). Yet, as we’ve seen, biological networks like DNA have the ability to learn, and evolution favours the emergence of cognitive processes (Watson 2014).

To understand how simple mechanisms of interaction can produce cognitive abilities, researchers have studied various loose communities found in nature – those in which every element is linked to the others through just an exchange of simple signals. Interactions of this kind typically regulate bacterial communities, flocks of birds and schools of fish (Deneubourg and Goss 1989).

In these cases the community learns through allelomimesis, or imitation, with the help of a mechanism called autocatalysis that prevents the indiscriminate dissemination of the behaviour of just one individual. “The probability of an individual adopting a particular behaviour is a function of the number of individuals already exhibiting that behaviour.” (Goss and Deneubourg 1988).

For example, in a flock of birds on the ground, one bird takes flight because it notices a suspicious movement. The birds nearby lower their alarm threshold, and follow the first that took flight if they see even a slightly suspicious movement. At this point the whole flock takes flight, regardless of whether or not there is actually a predator. One bird acts as a trigger for possible group behaviour through allelomimesis; autocatalysis then induces the whole flock to take flight in case of danger.

Autocatalysis and allelomimesis allow the emergence of Hebbian learning processes, although rudimentary ones: “The strengthening of frequently used trails is also reminiscent of Hebbian reinforcement of active neuronal pathways through long term potentiation” (Couzin 2009).

They can often be observed in the behaviour of humans too. How does a peacefully protesting crowd turn into a crowd of looters? One protester becomes violent, and another might copy the behaviour (allelomimesis). But generally, each protester has their own threshold for doing what others are doing, and will become violent only once a certain number of protesters have already started looting. Each new looter then increases the probability of the crowd looting (autocatalysis) (Buchanan 2007).
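The looting dynamic described above is essentially Granovetter’s threshold model of collective behaviour. A minimal sketch (crowd sizes and thresholds are illustrative, not from the text):

```python
def riot(thresholds):
    """Threshold model of a cascade: each person starts looting once
    the number of looters reaches their personal threshold."""
    looters = sum(1 for t in thresholds if t == 0)  # spontaneous starters
    changed = True
    while changed:
        changed = False
        new = sum(1 for t in thresholds if t <= looters)
        if new != looters:
            looters, changed = new, True
    return looters

# A crowd of 100 with thresholds 0..99: one initiator tips the next, and so on.
print(riot(list(range(100))))            # the whole crowd ends up looting
# Remove the single person with threshold 1 and the cascade stalls immediately.
print(riot([0] + list(range(2, 101))))   # only the initiator loots
```

The autocatalytic feedback is visible in the loop: each new looter lowers the effective barrier for everyone still watching.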

Something similar happens when fake news goes viral on social networks: the logical-analytical abilities of the average user aren’t comparable to those of experts. But the latter, who base their knowledge on experiments and cross-validation, typically don’t dedicate much time to updating their social profiles.

Unfortunately for the quality of the information on so-called social networks, while scientific thought has only been around for 500 years, allelomimetic and autocatalytic behaviour has been part of how instincts have evolved for hundreds of millions – or even billions – of years. Therefore the comments of thousands of people who consider vaccines to be harmful, no matter how little they know about the subject, have more power to convince others than hundreds of scientific articles written on the same subject.

Facebook users act more like the lemmings of the mass suicide myth than a flock of heronsFootnote 1: when someone starts sounding off about vaccines, other users find it all but impossible to avoid following suit, creating a sort of domino effect.

Autocatalysis and allelomimesis of course weren’t introduced by nature to favour the spread of fake news on Facebook. They were meant to guide the evolution of life towards forms of aggregation that were more, and not less, intelligent: it’s thanks to these mechanisms for example that ants find the best and shortest paths to where they’re going (Goss and Deneubourg 1988), a capability of insect colonies we will describe later in this chapter.

Nervous System or the Forgotten Transition

As mentioned at the beginning of this chapter, the genetic code and the nervous system are two different memories – the first long term, the second short term. The genetic code lets living organisms pass information on from one generation to the next, and do so for millions of years, and its discovery has changed the way we think about the evolution of life. The brain, on the contrary, loses the information it contains with the death of the organism. Probably for this reason, scholars have often considered the role played by the brain secondary.

In a fundamental work like The Major Transitions in Evolution, which focuses on the evolution of information processing by living organisms, John Maynard Smith and Eörs Szathmáry failed to consider the emergence of the nervous system as one of the fundamental transitions (Smith and Szathmary 1995).

In (Calcott and Sterelny 2011), Szathmáry himself acknowledges this slip: “the origin of the nervous system is a forgotten transition.” He also adds that they would have included the nervous system in the second edition of the book.

From the point of view of processing information, and therefore that of this book, the emergence of the nervous system, up to the formation of the brain, is a fundamental stage in the evolution of life, and it’s just as important as the emergence of DNA.

Jablonka and Lamb (2006), quoted by Szathmáry in the article above, write: “There are interesting similarities … between the outcomes of the emergence of the nervous system and the transition to DNA and translation, in both of which the interpretation of information involves decoding processes.”

From the first insects to the Anthropocene, the nervous system has had an extraordinary impact on the planet, transforming the interaction between life and the environment, as well as life itself.

In her autobiography, neurobiologist Rita Levi-Montalcini (1987) praises the imperfection of our species in relation to the environment. The fact that we are ill-equipped has forced us to continuously come up with new solutions – mostly produced by our brains.

The essential role of brains, with their ability to analyse data and react as a consequence, is “to serve as a buffer against environmental variation” (Allman 2000). The more sudden and unexpected these environmental variations are, the more organisms have to be able to adapt to them. And for this task, what could be better than short-term memory, able to see the patterns in everything around us?

Origin of Neurons

As is often the case, neurons probably did not appear to do what they do now – create information-processing networks – but rather to find food: “Initially sensing environmental cues (such as the amino acid glutamate indicating prey) the partaking receptors and ion channels may have started to receive internal information (such as the transmitter glutamate) from within the newly evolving synapse” (Achim and Arendt 2014).

The role of glutamate has been studied in the organisms that were probably the first to develop a nervous system: ctenophores, marine invertebrates also known as comb jellies, whose ancestors probably appeared towards the end of the Neoproterozoic Era, around 550 million years ago. A ctenophore’s nervous system coordinates its cilia – hundreds of tiny hair-like appendages the animal uses to move around and search for food. Unlike more complex nervous systems, the ctenophore’s mainly uses L-glutamate as its neurotransmitter (Moroz et al. 2014), which supports this hypothesis about the origin of neuronal transmission (Achim and Arendt 2014).

Of a ctenophore’s many properties, the most interesting could in fact be that it appeared before sponges. In other words, the first complex organisms to form might have immediately adopted a rudimentary nervous system, unlike sponges (Shen et al. 2017).

This has caused quite a stir. Firstly because it’s not universally accepted as having been proven (Marlow and Arendt 2014), and secondly because it goes against the idea – officially denied but intuitively felt – that evolution is always a process of complexification.

The paleo-geneticists will have to make up their minds about the first point, but the second appears to be completely unjustified. Evolution means making the process of storing useful information more efficient, also reducing the amount of energy required to store it (using lighter balls as in the example at the beginning of this chapter).

Going from ctenophores, more complex and active, to sponges, simpler and with modest energy requirements, is just as much a winning strategy as making the nervous system more powerful and finding new solutions to the resulting energy crisis.

Having said this, how could we fail to agree with Rita Levi-Montalcini? Our conscious existence, yearning for knowledge, with memories of a past life – all made possible by our brain – seems a more interesting condition than the static existence of the humble sponge. We should consider ourselves lucky that evolution took the more complex path that led to Homo sapiens.

First Brains and Shallow Neural Networks

We humans have got to know our brains well enough to realise that what we know is next to nothing. Nonetheless, by examining simpler natural neural networks and artificial ones, it’s possible to come up with some hypotheses as to how information-processing capability evolved over time.

The only brains that have been mapped until now are those of the hermaphrodite and the male C. elegans. The connectome of the hermaphrodite’s brain (White et al. 1986), with its 302 neurons, was compiled in 1986 and has been studied extensively, as can be seen in (Watts and Strogatz 1998). The connectome of the male, with 383 neurons, was compiled in 2012 (Jarrell et al. 2012).

The difference between the nervous systems of the two sexes is due to the fact that the male C. elegans, which cannot inseminate itself, has a further purpose in life in addition to energy provision: to mate with the hermaphrodite.

As every adolescent knows, the cognitive processes behind mating mechanisms are challenging. Considering that the C. elegans mating ritual isn’t as simple as the animal’s position on the evolutionary tree might lead us to believe (Jarrell et al. 2012), it’s almost a miracle that the male C. elegans manages to do its job with just a sprinkling of neurons.

Basically, C. elegans has developed a shallow neural network,Footnote 2 in which the sensory layer is a Hopfield network able to recognise some categories, called Gestalt by Hopfield (1982), of the surrounding environment – for example if there’s a hermaphrodite in the vicinity, and what position it’s in. After this initial classification, the information is sent to a second layer (interneurons) and then to a third, last layer, the motor neurons, which pass the stimulus to the relevant muscles (Varshney et al. 2011).
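A Hopfield network of the kind referenced above can be sketched in a few lines. This illustrates the 1982 model itself, not the actual C. elegans wiring: the network stores a binary pattern in its weights and recalls it from a corrupted input, the kind of category ("Gestalt") recognition described in the text.

```python
import numpy as np

def train(patterns):
    """Hebbian storage: weights are the sum of outer products of the patterns."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, steps=10):
    """Synchronous updates: each neuron takes the sign of its weighted input."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]     # corrupt one "pixel"
print(recall(W, noisy))  # converges back to the stored pattern
```

The stored pattern acts as an attractor: nearby (partially corrupted) inputs fall back into it, which is what makes the model a plausible sketch of a sensory classification layer.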

In the same way as in so-called Recurrent Artificial Neural Networks, some neurons have a processing memory and re-analyse their own output.

Considering that nematodes (the phylum C. elegans belongs to) appeared soon after ctenophores (Poinar 2011), it didn’t take long for evolution to go from a distributed neuron network that could only move cilia to one organised in a way that could process information and produce relatively complex behaviour.

By forcing the analogies with artificial networks a little, we’ve gone from the Hopfield Networks of 1982, to Recurrent Neural Networks, developed during the 1990s, in a few million years.

Societies and Natural Selection

If natural selection were the only mechanism of evolution, one might think that communication between complex organisms would not have gone beyond reproduction. Why collaborate with other individuals? This would mean increasing the chances of success of others, with their genetic code – hardly advantageous to our own. Better still, why allow other individuals’ genetic code to propagate at all, instead of inventing a way to clone oneself?

The idea of natural selection influenced more than biologists. As seen in Darwin’s The Descent of Man, welfare and redistribution of wealth were thought to play against the fitness of the species: the weak must perish to leave room for the strong. Scientists didn’t even think collaboration was possible among animals. Therefore, they would never say that a collaborative society could be more successful than a non-collaborative one: collaboration was “human misbehaviour”.

Ronald Fisher (1930), one of the forefathers of statistical experimentation, in The Genetical Theory of Natural Selection considers an original hypothesis on how human civilisations first formed. At a certain point in history, some primitive populations entered a “barbarian” phase. Barbarian societies were organised to evolve through selection: they were divided into social classes, with those best suited to survival at the top, and the strongest elements were selected through family feuds. The strongest were rewarded with reproductive prizes (e.g. polygamy), and finally, they adopted a lineage cult, to favour the male offspring of the strongest.

These barbarian mechanisms, according to Fisher, resulted in the “social promotion of fertility into the superior social strata”. Human societies, and only human ones, started to favour the process of selection: “The combination of conditions which allows of the utilization [sic] of differential fertility for the acceleration of evolutionary changes, either progressive or destructive, seems to be peculiar to man.” (Fisher 1930)

As had been the case for life according to Dawkins (2013), suddenly a structure appears that can evolve thanks to selection – but a mechanism explaining the emergence of this structure is never introduced (see the analysis in Johnson and Sheung Kwan Lam 2010). Dawkins explains the origin of life with the sudden appearance of the Replicator, which can then evolve through natural selection. Fisher is no different: suddenly, society appears, so that it can evolve through natural selection. Neither Dawkins nor Fisher suggests a mechanism explaining why and how these structures emerge.

Never mind that, if societies appear in order to let natural selection do its job, it becomes difficult to explain the emergence of fully collaborative societies (bees and ants), where no internal selection takes place.

The idea of emerging properties, “the whole is more than the sum of the parts”, was still a long way off. It seemed necessary to imagine a mechanism that made it possible to apply natural selection to the evolution of communities of organisms.

The mathematics of genetic variation within populations under the effects of natural selection was eventually perfected by John B. S. Haldane (1932) and finally William Hamilton (1964).

In his inclusive fitness theory, later called kin selection by John Maynard Smith, Hamilton explains why it can pay, in evolutionary terms, to behave altruistically towards a relative: the cost to the altruist is offset by the benefit to the recipient, weighted by the share of genes the two have in common.

As an example, if your sister steals €100 from you for an investment that makes €250, you’ve both made a profit. 100% of your genes have lost €100, but 50% of your genes – the ones you have in common with your sister – have earned €250. That means on average your genes have made a €25 profit. The fact that the champagne eventually purchased with the €250 gain doesn’t whet your taste buds is but a minor detail: the whetted taste buds have 50% of your genes, so you reap the benefits in any case.
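The arithmetic of the sister example is Hamilton’s rule: altruism pays when rB > C, with relatedness r, benefit B to the recipient and cost C to the altruist. A one-line check:

```python
def inclusive_payoff(cost, benefit, relatedness):
    """Average payoff to your genes when a relative gains `benefit`
    at your `cost`, with the given genetic relatedness (sister: r = 0.5)."""
    return relatedness * benefit - cost

# Sister "steals" 100 and turns it into 250: your genes net +25 on average.
print(inclusive_payoff(cost=100, benefit=250, relatedness=0.5))  # 25.0
```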

In The Selfish Gene, Dawkins mentioned it would be worth studying nine-banded armadillos, the female of which gives birth to a litter of identical quadruplets: “some strong altruism is definitely to be expected, and it would be well worth somebody’s while going out to South America to have a look” (Dawkins 2013).

As can be expected, Dawkins’ guess was disproven: despite the fact that armadillos can tell the difference between their twins and non-twins, they apparently behave in a friendly way towards all, regardless of shared genes (Loughry et al. 1998).Footnote 3 What Blanche says at the end of Tennessee Williams’ A Streetcar Named Desire goes for armadillos too: “I have always depended on the kindness of strangers”.

In fact, “Hamilton’s rule almost never holds” (Nowak et al. 2012).Footnote 4 The most extreme form of collaboration, eusociality, would appear to be exactly what’s needed to refute Hamilton’s rule. In eusocial societies “adult members are divided into reproductive and (partially) non-reproductive castes and the latter care for the young” (Nowak et al. 2012). Eusociality is more common in societies in which siblings have 50% of their genes in common with each other than in those with 75%, as is the case with some bees and ants.Footnote 5

Finally, the behaviour of individuals appears to be collaborative in many societies, regardless of how many genes the individuals have in common with each other. Furthermore, no animal societies – as Fisher mentioned – have established an internal selection system.

We must conclude that, from the point of view of evolution, organisms that collaborate for the common good have a greater chance of surviving than those that don’t. This is just as true for humans as it is for other animals. Actually, more so for humans.

Internal selection, as Ludwig von Mises mentioned shortly after the fall of fascist regimes in Italy and Germany, seems to be there to favour tyrants:

As every supporter of economic planning aims at the execution of his own plan only, so every advocate of eugenic planning aims at the execution of his own plan and wants himself to act as the breeder of human stock (Mises 1951).

Insects and Intelligent Societies

The first organised societies of complex organisms emerged 300 million years ago in insect communities, in the form of eusociality in fact.

Without the help of kin selection it would appear difficult to explain why certain insects chose to help each other, sacrificing their own fertility: if they had an “altruism gene”, they would favour the survival of others over their own, so natural selection could not propagate the “altruism gene”. By definition, genes that don’t follow Hamilton’s rule, like the altruism gene, should become extinct.

But “… eusociality is not a marginal phenomenon in the living world. The biomass of ants alone composes more than half that of all insects and exceeds that of all terrestrial non-human vertebrates combined. Humans, which can be loosely characterized as eusocial, are dominant among the land vertebrates” (Nowak et al. 2012).

Natural selection can prevent unsuitable life forms from developing, and favour the success of a mutant organism. But it cannot explain either the emergence of life, or the change from unicellular organisms to complex organisms, nor the organisation of the latter into societies.

The fact that natural selection can filter behaviour that’s unsuitable for survival doesn’t imply that every new behaviour emerges only thanks to selection. If A implies B, B doesn’t necessarily imply A: behaviour that’s better suited to survival is favoured by selection, but this doesn’t mean that if a new behaviour emerges the cause is necessarily natural selection.

The essence of an intelligent system is to continually organise itself in a better way to absorb more energy, storing more and more useful information. The concept of evolvability is more important than that of fitness: the first is a long-term investment, which goes beyond the current generation and genes, the second only lets behaviour emerge that increases the probability of survival of the individual, now.

An intelligent system that doesn’t implement a long-term survival strategy is not worthy of the name. Improving cognitive capacities, or creating new (perhaps external) ones, would appear to be an excellent strategy. This is valid for all intelligent systems, from the biological cell to the nation-state.

The creation of networks to increase the amount of available information, and the capacity to process it, is a common phenomenon. For data analysts and high-energy physicists, the use of personal computers in the 1990s was severely limited by one factor: the capacity of local hard disks. At CERN, over a period of 5 years, I personally witnessed the change-over from workstations – a kind of powerful PC – with their huge external hard disks, to cheaper consumer PCs that processed data in parallel from shared file systems (NFS and AFS, similar to virtual disks) and distributed databases. This path culminated with the Web, which gives non-professional users access to a virtually infinite amount of information, incomparable with what’s on the hard disk of their PC.

This process has intensified in time: today, a typical PC (smartphone) can store just a few tens of gigabytes on the hard disk (not much more than what I had on my workstation in 1998), but processes – in the case of adolescents – hundreds of gigabytes every weekFootnote 6 (approximately the amount of data I processed during my PhD in the 1990s).

Today, a storage system shared by millions of people gives “Homo smartphonicus” access to an abundance of information that would have been unimaginable until recently, with only-local storage.

Something similar happened with eusocial insects, the first animals to develop societies.

Insects were the first to reach an information-processing limit, for the same reason as biological cells a few billion years earlier. Although giant insects roamed the earth during the Permo-Carboniferous period,Footnote 7 300 million years ago, it is not advantageous for insects to grow into giants. Quite the contrary: they’ve implemented various miniaturisation strategies as they evolved. Their compact dimensions have been essential for “microhabitat colonization, [and the] acquisition of a parasitic mode of life, or reduced developmental time as a result of a rapidly changing environment” (Niven and Farris 2012).

To sustain and perfect miniaturisation, insects have developed a passive respiratory system, consisting of spiracles and tracheae that let the cells exchange carbon dioxide for oxygen. As a result, the nervous system developed in a decentralised way,Footnote 8 providing a greater surface area to oxygenate in terms of total volumeFootnote 9 (Niven and Farris 2012).

This miniaturisation strategy therefore comes at a cost: a cognitive limit for the individual, imposed by the concentration of oxygen. Processing information no longer as individuals but as a network then increased the information-processing capacity of the species, without renouncing miniaturisation.

Ants, for instance, communicate with dozens of different pheromones (Jackson and Ratnieks 2006): each ant leaves a track as it passes that acts as a signal for the next ant that passes that way, and each message is subsequently reinforced, weakened or changed by the other sisters in the colony.

Using this relatively simple mechanism, ants, which are blind, successfully solve “the travelling salesman problem”: Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city? (Yogeesha and Pujeri 2014).

It sounds like a miracle, but ants really can find the shortest route between a certain number of points. As the problem is one of the toughest in mathematics, for which no one (including ants) has yet found a completely satisfactory solution (in other words, one valid for any number of cities), it is a noteworthy result.Footnote 10

As was the case when unicellular organisms organised themselves into complex organisms, the idea that sociality in insects emerged as a mechanism used to store and process information has certainly not been the dominant paradigm in science, but it is present.

First of all, the idea that information-processing capabilities can spontaneously emerge is deeply rooted in information technology and neuroscience:

The bridge between simple circuits and the complex computational properties of higher nervous systems may be the spontaneous emergence of new computational capabilities from the collective behaviour of large numbers of simple processing elements (Hopfield 1982).
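Hopfield’s own 1982 model makes the point concrete: a network of binary units wired by a simple Hebbian rule spontaneously acquires content-addressable memory, a capability that none of the units possesses. A minimal sketch – function names and sizes here are illustrative, not from the source:

```python
import random

def train(patterns):
    """Hebbian rule: strengthen the link between units that tend to agree
    across the stored patterns (W is symmetric, zero diagonal)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, sweeps=5, seed=0):
    """Asynchronous updates: each unit aligns with the field of the others,
    so the network settles into the nearest stored pattern."""
    rng = random.Random(seed)
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):   # visit units in random order
            field = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state
```

Store two patterns of ±1s, corrupt one bit of the first, and `recall` restores the original: the “memory” lives in the collective weights, not in any single processing element.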

In addition, for other geneticists, “gene-based theories have provided plausible, albeit incomplete, explanations [for the origins of collaborative societies] … In many social groups, socially learned cooperative behaviours increase the productivity of the individuals in the group relative to that of animals that are not group members, yet the evolutionary effects of such behaviourally transmitted information have rarely been explored” (Jablonka 2006).

There is a long way to go from ant colonies solving the travelling salesman problem to Homo sapiens inventing the GPS satellite system and building car navigators. But the road is marked out.

The Social Body

To many biologists, including Bert Hölldobler and Edward Wilson (2009), the authors of The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies, the problem lies in understanding when and how the “eusociality gene” became established. The reasoning behind this is that, if a complex organism is altruistic, this must be encoded in the genetic code of each individual.

But we are dealing with complex systems: a property of the system doesn’t derive from a property possessed by each individual – this is in fact Simon’s definition of a complex system. Cooperation can be one of the many effects of communication: first comes the ability to communicate, then the instinct to cooperate for a common goal.

If we consider complex organisms, including human beings, this hypothesis doesn’t appear too far-fetched. For human beings, cooperation without prior communication is something of a rarity. Many anthropologists who have lived with hunter-gatherer populations say that learning the language was essential not only to their studies, but to their safety too. When Daniel Everett was with the Piraha, he acknowledged that on one occasion he only managed to avoid being killed by a drunk member of the tribe because he was able to speak the man’s language (Everett 2009).

Even Frans de Waal, a primatologist famous for his idea that culture and morals are not exclusively human traits, acknowledges the fundamental role played by language in the formation of an ethical society:

Members of some species may reach tacit consensus about what kind of behaviour to tolerate or inhibit in their midst, but without language the principles behind such decisions cannot be conceptualized, let alone debated. To communicate intentions and feelings is one thing; to clarify what is right, and why, and what is wrong, and why, is quite something else (Waal 1996).

Chimpanzees surely communicate, and do form collaborative societies. But nothing compares to humans; in fact there’s an incredibly high level of collaboration between humans: “What is most amazing is that our species is able to survive in cities at all, and how relatively rare violence is” (Waal 1996). No other primate could live in cities as crowded as ours without starting a civil war.

For Robin Dunbar, the anthropologist famous for proposing that group size was correlated with brain size among species of social primates, language lays the foundations of society. What’s more, according to Dunbar, language, in the form of gossip, emerged not as a form of communication, but as the evolution of grooming: “gossiping … is the core of human social relationships, indeed of society itself. Without gossip, there would be no society.” (Dunbar 2004).

As mentioned in (Reader 1998), language isn’t much use when hunting. One needn’t even read the adventures of anthropologists gone native – from Colin Turnbull in The Forest People (Turnbull 1961) to Daniel Everett – who tell of their inability to move through the forest in perfect silence on a hunting expedition. As every hunter knows (Forense 1862), language is not only superfluous, it’s to be avoided at all costs: vocal skills are used just for “faking natural sounds in order to lure their prey within range of their weapons” (Botha 2009).

But language does have something to say when it comes to hunting. Language is in fact fundamental if hunters are to peacefully share out the game: it therefore emerges more as a means of negotiation (Reader 1998) than as gossip, or a way to pass on information.

This goes for every means of communication. While communication, to be efficient, should pass on “useful” information, this is not always the case. The exabytes of cute kittens – hardly informative – sent over Facebook might have a role: communication is also used to create the system’s identity. The role of language, and of communication in general, is also to make individuals feel – and therefore become – part of the system as a whole.

Language permits the constitution of the social body, a structure where the single elements go beyond collaboration. In Jean-Jacques Rousseau’s words, it means “the total alienation of each associate, together with all his rights, to the whole community” (Rousseau 1762). We call it a body for a reason: the citizens become the cells of an organism; they stop existing as single elements. When we raise an arm to defend our head from a dangerous object, there’s no altruism in the arm’s action: the cells of the arm wouldn’t survive outside the organism anyway. The self of the cells in the human body is a meta-self, transcendent, not immanent.

For Rousseau, in human societies, like in insect societies, the individual ceases to exist, and is alienated mainly thanks to shrewd communication – the conviction that every person has a well-defined role to play in society, and must play that role and no other.

Language allows the emergence of collaborative societies, but not necessarily fair societies – exactly like ant colonies. Citizens under the social contract (which is an emergent property in Rousseau’s writing, not an actual piece of paper) are safer. But certainly not freer or smarter than they were.