Tim Henderson had one of the most disgusting jobs in the world to perform: to remove an enormous, stinking 40-metre-long blob of fat, grease and domestic waste which had silently tumefied in one of London’s main sewers beneath the upscale suburb of Chelsea. A blob so bloated it had burst through the ageing brickwork of the sewer walls. And Tim was among the lucky ‘flushers’, or trunk sewer technicians, who drew the chore of removing it, chunk by festering chunk.

“We see blockages all the time in household sewer pipes… but to have this much damage on a sewer almost a meter in diameter is mind-boggling,” Thames Water repair and maintenance supervisor Stephen Hunt told Britain’s The Guardian newspaper (Kaplan 2015; Ratcliffe 2015).

The blobs—known as ‘fatbergs’, which can weigh 15 tonnes or more and extend for 80 m—are becoming a common but largely invisible symptom of the modern metropolis, whose citizens and restaurants unthinkingly tip megatonnes of kitchen and other waste down the drain. “Fat goes down the drain easily enough, but when it hits the cold sewers, it hardens into disgusting ‘fatbergs’ that block pipes,” Rob Smith, Thames Water’s chief sewer flusher and member of the local “fatberg hit squad”, told media. In 2013 New York City paid nearly $5m to purge fat buildups from its sewer network (Kaplan 2015).

In a way the ‘fatbergs’ are a distasteful allegory for the substances that also choke the sclerotic arteries of more than a billion urbanites around the world, most of whom will die of the resulting heart disease or stroke. Fatbergs consist of the same unhealthy materials that predominate in the modern diet. They are the loathsome spawn of a culture of waste—of water, nutrients, food and energy—on a mega-industrial scale. They are another form of pollution, growing unseen in the darkness beneath our very feet: in London alone, such blockages flooded 18,000 homes with sewage in recent years. Their mind-boggling size speaks of the burgeoning of giant cities and waistlines as humanity abandons the countryside and its thrifty ways in favour of prodigal urban lifestyles. In themselves, fatbergs are hardly species-threatening—they are just another ugly symptom of a system going sadly wrong. They are another wakeup call.

A Hand-Made World

By the mid-twenty-first century the world’s cities will be home to approaching eight billion inhabitants and will carpet an area of the planet’s surface the size of China. Several megacities will have 20, 30, and even 40 million people. The largest city on Earth will be Guangzhou-Shenzhen, which already has an estimated 120 million citizens crowded into its greater metropolitan area (Vidal 2010).

By the 2050s these colossal conurbations will absorb 4.5 trillion tonnes of fresh water for domestic, urban and industrial purposes, and consume around 75 billion tonnes of metals, materials and resources every year. Their very existence will depend on the preservation of a precarious balance between the essential resources they need for survival and growth—and the capacity of the Earth to supply them. Furthermore, they will generate equally phenomenal volumes of waste, reaching a mountainous 2.2 billion tonnes a year by 2025 (World Bank)—an average of six million tonnes a day—and probably doubling again by the 2050s, in line with economic demand for material goods and food. In the words of the Global Footprint Network, “The global effort for sustainability will be won, or lost, in the world’s cities” (Global Footprint Network 2015).
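
A quick back-of-envelope check of that daily figure (a minimal sketch: the 2.2 billion tonne annual projection is the World Bank estimate cited above; the rest is simple division):

```python
# Rough arithmetic check of the urban waste projection cited above.
annual_waste_tonnes = 2.2e9                 # World Bank projection for 2025, tonnes per year
daily_waste_tonnes = annual_waste_tonnes / 365

print(f"{daily_waste_tonnes:,.0f} tonnes per day")   # ~6 million tonnes/day
print(f"{2 * daily_waste_tonnes:,.0f} tonnes per day if the total doubles again by the 2050s")
```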

As we have seen in the case of food (Chap. 7), these giant cities exist on a razor’s edge, at risk of resource crises for which none of them are fully prepared. They are potential targets for weapons of mass destruction (Chap. 4). They are humidicribs for emerging pandemic diseases, breeding grounds for crime and hatcheries for unregulated advances in biotechnology, nanoscience, chemistry and artificial intelligence.

Beyond all this, however, they are also the places where human minds are joining at lightspeed to share knowledge and wisdom, and to craft solutions to the multiple challenges we face.

For good or ill, the future of civilisation will be written in the cities. They cradle both our hopes and our fears.

Urban Perils

The Brazilian metropolis of São Paulo is a harbinger of the challenges which lie ahead for Homo urbanus, Urban Human. In a land which the New York Times once dubbed “the Saudi Arabia of water” because its rivers and lakes held an eighth of all the fresh water on the planet, Brazil’s largest and wealthiest city and its 20 million inhabitants were almost brought to their knees by a one-in-a-hundred-year drought (Romero 2015). It wasn’t simply a drought, however, but rather a complex interplay of factors driven by human overexploitation of the surrounding landscape, pollution of the planetary atmosphere and biosphere, corruption of officialdom, mismanagement and governance failure. In other words, the sort of mess that potentially confronts most of the world’s megacities.

In the case of São Paulo, climate change was implicated by scientists in making a bad drought worse. This was compounded by overclearing in the Amazon basin, which is thought to have reduced local hydrological cycling, so that less water was transpired by forests and less rain then fell locally. This reduced infiltration into the landscape and inflow to river systems which land-clearing had already engorged with sediment and nutrients. Rivers running through the city were rendered undrinkable by the industrial pollutants and waste dumped in them. The São Paulo water network leaked badly and was subject to corruption, mismanagement and pilfering bordering on pillage. Government plans to build more dams arrived 20 years too late. “Only a deluge can save São Paulo,” Vicente Andreu, the chief of Brazil’s National Water Agency (ANA), told The Economist magazine (The Economist 2014). Depopulation, voluntary or forced, loomed as a stark option, officials admitted. Although the drought eased in 2016, water scarcity remained a shadow over the region’s future.

São Paulo is far from alone: many of the world’s great cities face the spectre of thirst. The same El Niño event also struck the great cities of California, leading urban planners—like others all over the world—to turn to desalination of seawater, using electricity and reverse osmosis filtration (Talbot 2014). This kneejerk response to unanticipated water scarcity echoed the Australian experience where, following the ‘Millennium Drought’, desalination plants were producing 460 gigalitres of water a year in four major cities (National Water Commission 2008)—only to be mothballed a few years later when the dry eased. By the early 2010s there were more than 17,000 desalination plants in 150 countries worldwide, churning out more than 80 gigalitres (21 billion US gallons) of water per day, according to the International Desalination Association (Brown 2015). Most of these plants were powered by fossil fuels, which supply the immense amount of energy needed to push saline water through a membrane filter and remove the salt. Ironically, by releasing more carbon into the atmosphere, desalination exacerbates global warming and so helps to increase the probability of fiercer and more frequent droughts. It thus defeats its own purpose by reducing natural water supplies. A similar irony applies to the city of Los Angeles, which attempted to protect its dwindling water storages from evaporation by covering them with millions of plastic balls (Howard 2015)—thus using petrochemicals in an attempt to solve a problem originally caused by … petrochemicals.
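
To put those desalination figures in perspective, here is a rough back-of-envelope sketch. The 80 gigalitre-per-day output is the International Desalination Association figure cited above; the energy intensity of roughly 3.5 kWh per cubic metre is an assumed typical value for seawater reverse osmosis, not a number given in the text.

```python
# Back-of-envelope sketch of global desalination output and energy demand.
GIGALITRES_PER_DAY = 80              # IDA figure cited in the text
LITRES_PER_US_GALLON = 3.785
KWH_PER_CUBIC_METRE = 3.5            # assumed typical value for seawater reverse osmosis

litres_per_day = GIGALITRES_PER_DAY * 1e9
us_gallons_per_day = litres_per_day / LITRES_PER_US_GALLON        # ~21 billion gallons
cubic_metres_per_day = litres_per_day / 1000                      # 80 million cubic metres
energy_gwh_per_day = cubic_metres_per_day * KWH_PER_CUBIC_METRE / 1e6

print(f"{us_gallons_per_day / 1e9:.0f} billion US gallons per day")
print(f"~{energy_gwh_per_day:.0f} GWh of electricity per day, mostly generated from fossil fuels")
```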

These examples illustrate the ‘wicked’ character of the complex challenges now facing the world’s cities—where poorly conceived ‘solutions’ may only land the metropolis, and the planet, in deeper trouble than before. This is a direct consequence of the pressure of demands from our swollen population outrunning the natural capacity of the Earth to supply them, and of short-sighted or corrupt local politics leading to ‘band-aid’ solutions that don’t work or cause more trouble in the long run.

Other forms of increasing urban vulnerability include: storm damage, sea level rise, flooding and fire resulting from climate change or geotectonic forces; governance failure, civic unrest and civil war, exemplified in Lebanon, Iraq and Syria over the 2010s; disruption of oil supplies and consequent failure of food supplies; worsening urban health problems due to the rapid spread of pandemic diseases and industrial pollution; and still ill-defined but real threats posed by the rise of machine intelligence and nanoscience (Gencer 2013). The issue was highlighted early in the present millennium by UN Secretary General Kofi Annan, who wrote:

Communities will always face natural hazards, but today’s disasters are often generated by, or at least exacerbated by, human activities… At no time in human history have so many people lived in cities clustered around seismically active areas. Destitution and demographic pressure have led more people than ever before to live in flood plains or in areas prone to landslides. Poor land-use planning, environmental management and a lack of regulatory mechanisms both increase the risk and exacerbate the effects of disasters (Annan 2003).

These factors are a warning sign for the real possibility of megacity collapses within coming decades. With the universal spread of smart phones, the consequences will be vividly displayed in real time on news bulletins and social media. Unlike in historic calamities, the whole world will have a virtual ringside seat as future urban nightmares unfold.

New Plagues

From the point of view of an infectious microbe, like the flu virus, ebola, zika, cholera or drug-resistant TB, a megacity is an orgy of gourmet and reproductive opportunities. The larger the city, the more billions of human cells it harbours, on which the bug delights to dine, or in which it can multiply. Furthermore, cities have carefully equipped themselves with the most efficient means for spreading infectious microbes: international airports, schools and kindergartens, air-conditioned offices, steamy night clubs, dating agencies, sporting facilities, hospitals, pet and pest animals, insects, not-so-clean restaurants, markets and food factories, polluted water supplies and rivers, leaky waste dumps and cemeteries. From a microbe’s perspective the modern city is nirvana.

It was those ancient Roman kings, the Tarquins—a dynasty that always received a spectacularly bad press from subsequent republican historians—who laid the essential foundation for the modern city when they built the Cloaca Maxima, one of the world’s first sewers, to move the growing city’s filthy waste further down the Tiber River (Hopkins 2012). Without this simple, enclosed stream draining sources of infection to a safer distance, Rome could never have flourished. The resulting reduction in disease, and especially in infant death rates, in one of the largest concentrations of people at the time led to population growth, economic expansion and, especially, enough surplus males to maintain the standing army on which the city’s subsequent ascendancy was built. One of the world’s earliest examples of public health intervention, it also laid the foundations for modern urban planning—as well as the fatbergs of the future.

The Cloaca Maxima was also a classic case of another ancient human tradition which still survives today: the habit of relocating a problem from A to B and then regarding it as ‘solved’. When cities were relatively small, there was plenty of spare land and ocean around the world to absorb their foul emissions, so they could afford to pollute and generally get away with it. But with the emergence of the megacities and a globalised economy in the modern era this has all changed. Megacities that do not self-cleanse and re-supply their resources risk drowning in their own filth, poisoning their citizens and cultivating waves of pollution and infectious disease which can then travel internationally in a matter of hours.

The World Health Organisation identifies more than a dozen major pandemic disease threats to the global population: avian influenza, cholera, emerging diseases (e.g. nodding disease), Hendra virus, pandemic influenza, leptospirosis, meningitis, Nipah virus, plague, Rift Valley fever, SARS, smallpox, tularaemia, haemorrhagic fevers (like the Ebola and Marburg viruses), hepatitis and yellow fever (World Health Organization 2015a). To this formidable panoply of scourges it adds the worldwide emergence of a new wave of drug-resistant organisms, such as tuberculosis, golden staph, streptococcus, salmonella and malaria, which pose a rising hazard to human health, not only from the diseases they cause that resist treatment, but also from the accompanying loss of antibiotic protection for surgical procedures, cancer therapies and the like. “Epidemics are common occurrences in the world of the 21st century,” WHO explains. “Every country on earth has experienced at least one epidemic since the year 2000. Some epidemics, such as the H1N1 2009, Avian Flu and SARS pandemics, have had global reach, but far more often, and with increasing regularity, epidemics strike at lesser geographic levels. Devastating diseases such as the Marburg and Ebola haemorrhagic fevers, cholera, plague, and yellow fever, for instance, have wreaked havoc on regional and local scales, with much loss of life and livelihoods” (World Health Organization 2015b). By redistributing disease-carrying mosquitoes worldwide, as in the case of the Zika virus, climate change is also augmenting the risks of pandemics, according to the New York Times: “Recent research suggests that under a worst-case scenario, involving continued high global emissions coupled with fast population growth, the number of people exposed to the principal mosquito could more than double, to as many as 8 billion or 9 billion by late this century from roughly 4 billion today” (Gillis 2016).

Of the 60 million or so people who die in our world each year, as many as 15 million die from an infectious disease—the rest perishing chiefly from lifestyle diseases and a much smaller number from accidents and wars (World Health Organization 2014). This underlines the dramatic change in the modern era, in which infectious disease has become a far less common cause of death than was the case throughout most of human history—thanks chiefly to the advent of vaccines, antibiotics and sound public health measures. It also highlights the dramatic rise in deaths from self-inflicted disease and the almost complete failure, so far, of preventative medicine. However, a diet of disaster movies and highly coloured news reports has left the public with the erroneous impression that the risk from infectious disease is much greater than, for example, the risks posed by our own poor food choices, air or water pollution, whereas the opposite is in fact true. If there exists an Andromeda Strain-style agent capable of wiping out the whole of humanity, it has yet to come to the attention of science—and, for good biological reasons, it probably doesn’t exist unless somebody artificially creates it: natural organisms seldom eliminate all their hosts, as to do so is not a good strategy for their own survival. Instead they attenuate and adapt—a lesson humans need also to contemplate.

Viewed from the perspective of a direct threat to the existence of civilisation or the human species as a whole, the risk from infectious disease per se comes a long way behind that of nuclear war, climate change, global toxicity, famines and some of the other technological perils described in this chapter. However, pandemics frequently arise as a synergistic consequence of war, famine, poverty, mass migration, climate change, ecological collapse and other major disasters, and therefore play an amplifying role in endangering the human future. The classic case was the 1918–1919 influenza outbreak, which arose in the immediate aftermath of World War I largely as a consequence of the world-wide movement of soldiers and refugees at a time when many populations were weak from hunger. The ‘Spanish’ flu infected an estimated 500 million people worldwide, killing between 20 and 50 million of them.

The organisms which pose the greatest pandemic dangers in the twenty-first century—such as avian flu, Ebola, HIV and SARS—mostly originate in wild or domesticated animals and often arise out of some sort of environmental decline. As human numbers grow and people push into areas formerly dominated by wildlife and forests, more of these zoonotic diseases (animal-sourced infections) will probably transfer into the human population: as we replace their natural hosts with large concentrations of people, the viruses have little option other than to jump species, if they are to survive themselves. However, the very fact that their likely origins are understood, if not always precisely known, makes it possible to establish detection, early warning and prevention systems, which are the current goal of world health authorities (McCloskey et al. 2014). In the second category of threat are diseases which transfer into humans from domesticated livestock—seasonal influenza outbreaks, the Nipah, Hendra and MERS viruses and food-borne infections like E. coli, salmonella and listeria. Here too, early detection and prevention hold the key to arresting pandemics.

Unknown diseases can strike without warning out of the world’s fragmenting environments, as shown by the cases of HIV, Ebola, Nipah and Zika. HIV originated with SIV, a relatively harmless virus of African monkeys and apes which crossed into humans, who had no resistance to it, during the mid-twentieth century—nobody yet knows for sure how (Cribb 2001)—and by the early 2010s it had claimed 25 million lives and infected a further 35 million individuals, most of whom will eventually die from it. However, the development of preventive strategies, education, better drug therapies and vaccines all promise to reduce the toll. Ebola, a frightening infection in which victims leak contagious blood and convulse, is thought to originate in bats or rodents and first emerged as a human disease with outbreaks in the Congo in 1976 and 1979. A major eruption in West Africa in 2013 infected 25,000 people within a year and killed 10,000. Despite the alarming speed of its onset, the infection was contained and most patients with access to good healthcare made successful recoveries (The Economist 2015b). The experience in these cases suggests that new pandemic diseases crossing into humans from wildlife can be contained and, even if they cause large initial local death tolls, they do not pose an existential threat to humanity at large. However, it is far easier to limit their impact by taking effective medical, public health and quarantine action close to the point of origin—and this depends heavily on the local government, its skills and resources, and its willingness to co-operate with others, nationally, regionally and globally.

The best candidate for a twenty-first century version of the ‘Black Death’ is still the flu virus—in one of its newer evolutions—or one of its close relatives such as avian influenza H5N1, or SARS. The reason is that these viruses can be transmitted in airborne droplets from coughs and sneezes, not just in bodily fluids as is the case with HIV and Ebola. Robert Webster, a professor in the virology division at St Jude Children’s Research Hospital in the US, explains: “Just imagine if the Ebola outbreak in West Africa was transmitted by aerosol. If flu was just as lethal. If H5N1 [avian flu] was as lethal in humans as it is in chickens – and studies have shown that it only takes about three mutations to make it highly lethal. It’s not out of the realms of possibility” (Woolf 2014). One reason flu mutates so often is that the virus is constantly cycling between different poultry, pig and human populations: each host presents it with fresh genetic challenges, forcing it to evolve novel strains in order to adapt. Sometimes these prove more infectious, or more deadly, making it a very clever virus. Australian virologist the late Frank Fenner—one of the heroes of the world campaign to eliminate smallpox—once stated that a neuropathogenic strain of avian flu (one which infects the brain and central nervous system of birds and kills them rapidly) was the plague he most feared, because it would be both highly infectious and highly lethal: in theory, a sneeze on an airliner could kill most of the passengers. So far, however, such a strain has yet to cross from birds to humans. Projections for a major outbreak of a new and deadlier strain of flu suggest that it would infect between 100 million and 1000 million people and kill from 12 million to 100 million of them, depending on how quickly and effectively the outbreak was suppressed (Klotz and Sylvester 2012).

Two other major killers which could potentially claim a large part of the population if widely released are smallpox and SARS. Smallpox, one of the worst human plagues throughout history, which used to kill up to two million people a year, was declared eradicated in 1980 following a global immunization campaign led by WHO. The last known natural case occurred in Somalia in 1977. Since then, the only other reported cases were caused by a laboratory accident in 1978 in Birmingham, England, which killed one person and caused a limited outbreak (World Health Organization 2015c). However, the virus has not been eliminated from the Earth: both the United States and Russia are thought to maintain stocks in their biological warfare laboratories, perpetuating the risk of either a laboratory escape or a deliberate release. To the end of his life, Fenner campaigned for the complete destruction of all smallpox virus stocks worldwide.

In a far-sighted paper published in 1996 Paul Ehrlich and Gretchen Daily argued that, while a pandemic would be a horrible way to end the human population surge, reversing human population growth voluntarily is a wise and sensible way to reduce the risks of future pandemics (Daily and Ehrlich 1996).

Man-Made Killers

The potential for the rapid global spread of a new plague agent was highlighted in 2002–2003 with the outbreak of severe acute respiratory syndrome (SARS). In this outbreak, “a woman infected in Hong Kong flew to Toronto, a city with outstanding public health capabilities. The woman caused infections in 438 people in Canada, 44 of whom died.” Ultimately the disease infected 8000 people worldwide and killed nearly 800 of them. “What if the next infected person flies to a crowded city in a poor nation, where surveillance and quarantine capabilities are minimal? Or to a war zone where there may be no public health infrastructure worthy of the name?” asked Lynn Klotz and Edward Sylvester, writing in the Bulletin of the Atomic Scientists. Checking around, they identified no fewer than 42 laboratories worldwide which keep live stocks of potential pandemic pathogens (PPPs) such as SARS and the 1918 flu virus for ‘scientific and military purposes’ (Klotz and Sylvester 2012).

The risks of a man-made plague were highlighted in a scientific row which broke out in 2014 over the work of University of Wisconsin microbiologist Yoshihiro Kawaoka who, as part of an experiment to understand the evolution of flu viruses, had deliberately engineered a strain of the 2009 killer H1N1 virus into mutated forms to which humans were completely susceptible and had no immune protection. Professor Kawaoka claimed his mutant strains were intended purely to help in the development of vaccines, but other scientists pointed out that if they either accidentally escaped or were deliberately released from his medium-security laboratory, the effects could be horrendous (Connor 2014). The episode underlined the lack of ethical oversight of scientists worldwide engaged in designing new and potentially deadly life-forms.

That the intentional release of a pandemic agent from even the most secure government facility can happen was proved beyond doubt by the 2001 American case in which five people died and 17 became sick following the mailing of anthrax spores to offices of the US Senate and to media. When analysed, the anthrax turned out to be the ‘Ames strain’, a type specifically engineered by American biowarfare scientists from a microbe found in a Texan cow, and subsequently distributed to 16 laboratories across the US. After intensive investigation by the FBI, it was concluded that the microbes had been mailed by a mentally unbalanced employee of the US biowarfare facility at Fort Detrick, Maryland, in the wake of the 9/11 terrorist attacks, to highlight America’s vulnerability to this form of attack and to frighten the Congress into increasing funding for biowarfare research. He succeeded. However, since the suspect committed suicide soon afterwards, his motives were never clarified beyond doubt (US Federal Bureau of Investigation 2016). The important take-home lesson from the event is that no laboratory anywhere in the world, no matter how secure, is proof against malicious distribution of plague agents, whether natural or artificial, by a crazed or fanatical employee, a government acting in its perceived national interest, an undercover enemy agent, or just by plain accident. All biowarfare laboratories—and indeed, many ordinary biotech labs—thus represent an ongoing existential threat to humanity, since their safety, like that of nuclear materials, cannot ever be guaranteed.

This was highlighted in early 2016 when James Clapper, U.S. director of national intelligence, issued a warning that even gene editing (such as the technology known as CRISPR) should be added to the list of weapons of mass destruction, adding that it “increases the risk of the creation of potentially harmful biological agents or products” (Regalado 2016). Other scientists warned that genetically modified lifeforms could be used to target specific groups of humans carrying certain genes, or, if released in agricultural ‘designer crops’, might result in uncontrollable plagues. They cautioned that gene editing technology is far cheaper and easier to access than nuclear or chemical weapons.

Of the dangers of ‘synthetic biology’—the artificial making of novel life-forms, defined by the Global Challenges Foundation as “the design and construction of biological devices and systems for useful purposes”, but adding human intentionality to traditional pandemic risks—the Foundation says it constitutes one of the 12 major existential threats to humanity identified in its 2015 report:

Attempts at regulation or self-regulation are currently in their infancy, and may not develop as fast as research does. One of the most damaging impacts from synthetic biology would come from an engineered pathogen targeting humans or a crucial component of the ecosystem.

This could emerge through military or commercial bio-warfare, bioterrorism (possibly using dual-use products developed by legitimate researchers, and currently unprotected by international legal regimes), or dangerous pathogens leaked from a lab. Of relevance is whether synthetic biology products become integrated into the global economy or biosphere. This could lead to additional vulnerabilities (a benign but widespread synthetic biology product could be specifically targeted as an entry point through which to cause damage) (Global Challenges Foundation 2015).

Machine Minds

In 2014 the world received a startling wakeup call when eminent British cosmologist Stephen Hawking, one of the world’s best-known scientists and a man who has personally benefitted from super-smart technologies to overcome the physical handicaps imposed by his motor neurone disease, uttered a warning that artificial or machine intelligence could be the undoing of humanity. “The development of full artificial intelligence could spell the end of the human race,” he told the BBC. “It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution , couldn’t compete, and would be superseded” (Cellan-Jones 2014).

It wasn’t a new thought: science fiction writers have been grappling with the potential for conflict between human and machine intelligence for decades. It was a key theme of Isaac Asimov’s robot stories written between the 1940s and 1960s, and a central motif in Stanley Kubrick’s 1968 epic film 2001: A Space Odyssey, in which HAL, the suavely paranoiac computer, tries to eliminate the human crew of a spaceship after concluding they are a threat to his mission. But coming from Hawking—who has used latest-generation AI to enhance his ability to think, write and speak with fellow humans, and was impressed by its ability to interpret his wishes—the warning held a certain arresting quality.

Hawking wasn’t alone. Tesla Motors and SpaceX CEO Elon Musk, regarded as one of the world’s technological visionaries, also expressed deep disquiet. Commenting on the emerging power of internet-based artificial intelligence he told a group of science thinkers calling itself the Reality Club: “The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. Please note that I am normally super pro technology, and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand” (Rosenfeld 2014). Elaborating on this comment in a talk at Massachusetts Institute of Technology he said: “I think we should be very careful about artificial intelligence. Our biggest existential threat is probably that … There should be some regulatory oversight at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, he’s sure he can control the demon. Didn’t work out.”

Like many two-edged technologies before it, AI promises to insinuate itself into our hearts, minds and wallets by taking over all the hard, dirty, inconvenient, boring and costly tasks that humans prefer not to do—and few of us have the penetrating gaze of a Hawking or a Musk to see where it all may lead. As with all new technologies, its boosters talk it up; its intelligent critics are heard far more rarely. Of this powerful new technology, the magazine Scientific American said “Like next-generation robotics, improved AI will lead to significant productivity advances as machines take over—and even perform better—certain human tasks. Substantial evidence suggests that self-driving cars will reduce the frequency of collisions and avert deaths and injuries from road transport, because machines avoid human errors, lapses in concentration and defects in sight, among other shortcomings. Intelligent machines, having faster access to a much larger store of information and the ability to respond without human emotional biases, might also perform better than medical professionals in diagnosing diseases” (Meyerson 2015).

The issue received a major airing in January 2015, when over 4000 of the world’s leading technological minds—including Hawking and Musk—signed an open letter organised by the Future of Life Institute, which stated:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.

Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do (Russell et al. 2015).

Although this sounds a bit like asking chemists to come up with better antibiotics but not monkey with poison gas or high explosives, or asking physicists to design better electronic devices but not build better nuclear bombs, it does at least inject the issue of ethics into the early-stage development of a potentially plenipotent and disruptive new technology.

Propelling such concerns is the sharp increase in the use by various countries of robot vehicles, primarily airborne drones, capable of dealing death to those with whom their operators disagree in opinion, interest, politics, belief or culture—as well as to disturbingly large numbers of innocent bystanders, or ‘collateral damage’. This has prompted a group of international scientists and peace activists to form the Campaign to Stop Killer Robots, which demands a moratorium on all new ‘autonomous executions’ until international law has been developed to deal with the issue. The campaigners explain:

Rapid advances in technology are resulting in efforts to develop fully autonomous weapons. These robotic weapons would be able to choose and fire on targets on their own, without any human intervention. This capability would pose a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law.

Several nations with high-tech militaries, including China, Israel, Russia, the United Kingdom, and the United States, are moving toward systems that would give greater combat autonomy to machines. If one or more chooses to deploy fully autonomous weapons, a large step beyond remote-controlled armed drones, others may feel compelled to abandon policies of restraint, leading to a robotic arms race (Campaign to Stop Killer Robots 2015).

The killer robots are a fresh case where technology has outrun society and its ability to manage and regulate it. Remote-control military drones had barely been in use for a decade, and were still unfamiliar to most of the world’s citizens, before technicians were hard at work developing pieces of machinery capable of roaming at will and making their own decisions, under certain rules, about whom to murder. By the mid-twenty-first century such machines will be commonplace in the military arsenals, police forces and security agencies of most countries and maybe even of multi-national corporations—on the pretext of ‘better security’. Like the warhorse, the musket and the aircraft in ages gone by, ‘mindless’ machine killers could become a tactical game-changer, capable of hunting down individuals, menacing entire nations, corporations, regions, cities, leaders, executives or systems of belief and—in the hands of a malignant group or country—of threatening global civilisation as a whole.

However, the greater risk from AI may stem less from autonomous weapons, which operate to some extent under human direction, than from machine intelligence which might seek—for reasons of its own—to dominate, supplant or eradicate humans. Although this may sound like science fiction, it is the issue that so alarmed Hawking and Musk and is based on technologies which already exist or else are now in development. The Global Challenges Foundation explains:

The field [of AI] is often defined as “the study and design of intelligent agents”, systems that perceive their environment and act to maximise their chances of success. Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations.

And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts. There is also the possibility of AI-enabled warfare and all the risks of the technologies that AIs would make possible. An interesting version of this scenario is the possible creation of “whole brain emulations”, human brains scanned and physically represented in a machine. This would make the AIs into properly human minds… (Pamlin et al. 2015).

The Foundation points out that such risks are not standalone; very often they intersect with, compound or trigger other risks in a domino-like chain. The risks from machine intelligence, for example, could easily complement and exacerbate threats from nanotechnology and biotechnology, producing a technology-dominated environment in which mere humans could not survive: for example, the use of drones to distribute viruses engineered to attack only humans carrying a particular set of genes. Of all the various risks facing humankind in this century, the Foundation rates artificial intelligence as the most technologically difficult to overcome, and the hardest of all to form a partnership against, since so many people may have vested interests in its development. In short, the control of AI is liable to prove as problematic, disputed and intractable as the control of the Earth’s climate, nuclear weapons or toxic chemicals.

The precise process whereby machine intelligence would eliminate humanity is not described in any of these scenarios, but the common concern is that any AI created by humans would inherit our own competitive instincts and ruthlessness—and that, unlike in humans, these would not be moderated by a ‘moral’ obligation to protect our species. It may therefore be motivated to eliminate all potential competitors or perceived risks to its own survival, including its creators. The still unanswered question in all this is the Asimovian one: can a machine be endowed with morals?

Nanocracy

A second dimension in which the march of technology imperils the human future is through the rise of the ‘nanocracy’, a condition in which close surveillance and information about individuals throughout the whole of their lives will be maintained by a network of governments, commercial corporations and law enforcement agencies (Cribb 2007).

As whistleblowers Edward Snowden, Chelsea Manning and Julian Assange exposed, modern society and all who dwell in it are already potentially subject to intensive surveillance (Pope 2014). All of our financial, computer and mobile phone records, our health details, purchasing decisions, travel, tastes, hobbies and preferences, our appearances on security cams in shops, offices, taxis and public places all over the modern city, are available to the state—and many of them to private corporations equally powerful. Testifying to the rapid spread of surveillance devices, as early as 2013 Britain alone already had six million CCTV cameras—one for every 11 citizens—according to the British Security Industry Association (BSIA). Our smart phones, satnav vehicles and airlines can report wherever we go with them. ‘Intelligent’ TVs, voice-controlled household devices and smart phones can potentially monitor, record and report our conversations and utterances even in the privacy of our own homes (BBC 2015). Our computers can scan our faces and work patterns for signs of boredom, resentment or dissidence. Technologies to interpret our brain patterns are already in their infancy, as Hawking has warned. All that is missing are computers capacious, fast and powerful enough to store, retrieve and interpret every piece of data on each individual from the moment of birth to the moment of death. These are now just around the corner, thanks to quantum technology.

A quantum computer is a device that goes to the next level of super-miniaturisation, using quantum bits (or qubits), which can exist in a superposition of several states at once, instead of the familiar binary digits (or bits), which exist in only one of two states at a time. The result is a device of massively greater speed, power and memory capacity than conventional technology—colloquially, ‘a supercomputer the size of a room in a matchbox’. Researchers from the University of New South Wales, who created the world’s first working quantum bit in silicon in 2012 (University of NSW 2012), told media at the time that the world’s first quantum computer was probably only 5–10 years away. Dr Andrea Morello said that quantum computers “promise to solve complex problems that are currently impossible on even the world’s largest supercomputers: these include data-intensive problems, such as cracking modern encryption codes, searching databases (author’s italics), and modelling biological molecules and drugs.” Google and NASA claim to have built the most powerful computer ever—the D-Wave 2X—trumpeted as a major breakthrough for artificial intelligence (NASA 2015). Wall Street and banks like Goldman Sachs are investing in quantum computing, in a race to turn atomic particles into fast cash (Bloomberg 2015). Airbus is using an early version to design the jet aircraft of the future (The Telegraph 2015). IBM and the US intelligence research agency IARPA are building the most powerful spying machine in history (IBM 2015).
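
To make the idea of ‘superimposed states’ concrete, here is a minimal illustrative sketch (standard textbook material, not tied to any of the specific machines named above): n classical bits hold exactly one of 2^n possible values at any moment, whereas the state of n qubits is described by 2^n complex amplitudes at once, which is why the descriptive capacity of a quantum register grows exponentially with its size.

```python
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Return the state vector of n qubits in an equal superposition:
    2**n complex amplitudes of equal weight."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

# A classical n-bit register holds one value at a time;
# an n-qubit state is described by 2**n amplitudes simultaneously.
for n in (1, 2, 10, 30):
    print(f"{n:>2} qubits -> {2 ** n:,} amplitudes (vs. one {n}-bit value for {n} classical bits)")

state = uniform_superposition(3)       # 8 amplitudes for 3 qubits
print(state.size, "amplitudes; probabilities sum to", round(float(np.sum(np.abs(state) ** 2)), 6))
```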

By the 2030s, thanks to quantum computing and the universal spread of the internet and electronic devices such as smart phones and closed-circuit cameras, it will probably be feasible to observe and monitor virtually every individual in society for most of their life, automatically and without their consent: our genetic details and unique identifiers like personal smell or other biometrics, all we do and say or that is done or said to us, everywhere we go and everyone we meet, all our financial transactions, private documents and photos, our unique brain patterns and biological indicators, all the vision we generate, every keystroke or touch on a mobile device, every website we visit, TV program we watch or book we read. This can potentially be stored, mined and sifted at light-speed using ‘quputers’—and interpreted by artificial intelligence directed according to the purposes of the person (or intelligence) who authorised the search. For those who may attempt to isolate themselves from this universal electronic espionage, drones or swarms of microscopic ‘nanobots’ will provide close surveillance (Motherboard 2014).

The Orwellian notion of a single, centralised ‘big brother’ surveillance brain is misplaced in the modern world. In reality, the information on individuals in the developed world already exists in hundreds, even thousands of separate databases, most of them owned by the private sector—your bank, your Facebook or email account, your internet service provider, your phone company, your car firm, your supermarket, doctor, golf club or travel agent. By the 2030s these will become retrievable and searchable in microseconds by any agency or corporation with the power to do so—and a quantum computer to do it. Advanced data mining and pattern recognition technology will enable ‘targets’ to be picked out of the population on the basis of their words, thoughts, habits and deeds automatically, without the individual ever having previously come to the attention of law enforcement, security services, political or religious ‘thought-police’ or commercial marketers. And once you have been selected as a target it will be almost impossible to get off the database. The oft-repeated claim that ‘the innocent have nothing to fear’ is nonsense: everyone, guilty or innocent, will potentially be subject to unblinking, 24/7 AI scrutiny throughout their lives.

These are, of course, no less than the enabling technologies for a global surveillance state—though nobody is admitting as much. While it is logical that a complex society of ten billion people requires more laws, regulations and enforcement than a nineteenth-century world of half a billion humans, the advent of quantum surveillance will over-ride and eliminate most aspects of individual freedom. Without strict safeguards, transparency and public oversight, it could potentially render everyone, in effect, state property. On present trends, this will probably be accomplished with the co-operation of the private sector, via internet companies and banks, and with the gullible consent of voters reassured by government claims that spying on everyone is ‘essential to national security’. With many transnational corporations now larger, wealthier and more powerful than individual countries or governments, one of the chief and most intrusive objectives of universal surveillance will be marketing—to precisely target every individual with an avalanche of products and services that anticipate their every whim, before they even know they have it. And finally, political parties and religious bodies may exploit the technology not only to spy on their opponents but to ensure the loyalty of supporters, who may then be coerced by threats to expose aspects of their private lives. This is the dawn of the nanocracy, the rule of the Dwarf Lords (see Pamlin et al. 2015).

As with all advanced technologies—and despite all the self-serving hype from the scientists working on it—there is no guarantee such omnipotence will be used wisely, benignly, ethically or well, that it will be regulated or publicly supervised, or even that its details will be widely known. Indeed, the odds are it will first be employed by political, economic and religious elites to spy on and control those they deem a threat to their power, beliefs, wealth or freedom of action—or else an opportunity to spot customers, recruits or agents of influence. Edward Snowden, who witnessed the birth of the secretive age of universal espionage and blew the whistle on it, told Australia’s ABC in May 2015 that the power to search both our content and metadata is “incredibly empowering for governments, incredibly disempowering for civil society”. It could lead to what he termed a ‘turnkey tyranny’ in which governments claim to follow due process but secretly ratchet up their level of intrusion into private lives without disclosing it. “They are collecting information about everyone, in every place, regardless of whether they have done anything wrong,” he warned (Snowden 2015).

While most people will regard such electronic intrusion mainly as a threat to individual liberty or privacy, there is in fact a far more dangerous aspect to it, which affects the fate of our species. One of the most striking lessons from communism, Nazism, McCarthyism, Jacobinism or the religious fanaticism of the past two centuries is the way they enforced surveillance on their societies, compelling citizens to inform on one another, and driving individuals to self-censor even to the point of suppressing private thoughts contrary to the prevailing doctrine.

The risk such a development on a universal scale poses to the human future in the twenty-first century is its potential to chill or prevent the very debate and change which are vital to our survival. Evidence that surveillance can discourage public discussion or the expression of opinion has already appeared in a study by Wayne State University’s Elizabeth Stoycheff which found “the ability to surreptitiously monitor the online activities of … citizens may make online opinion climates especially chilly”, adding “While proponents of (mass surveillance) programs argue surveillance is essential for maintaining national security, more vetting and transparency is needed as this study shows it can contribute to the silencing of minority views that provide the bedrock of democratic discourse” (Stoycheff 2016).

Many people are by nature explorers of new ideas, adventurers, challengers of accepted opinion, reformers, liberals, researchers, conservationists, pioneers, creators and innovators. These gifted individuals have led every major social and technological transformation since civilization began. They are the foil to our natural conservatism and apathy, the navigators and sources of inspiration in the human ascendancy. Progressive, prosperous and dynamic societies rely on such individuals to inspire and lead us to greater, bolder, wiser futures.

However, under the nanocracy such people will be easily picked out and ‘discouraged’, especially if the changes they propose threaten those who profit most from the status quo. Even if they are not directly censored, most people will self-censor rather than invite scrutiny. Historically, reformers, visionaries and dissidents from Socrates and Jesus to Galileo, Martin Luther King and Nelson Mandela have often paid a high personal price. Under the nanocracy such people won’t even get that far: they will be quietly identified by AI and hushed long before they have a chance to cause trouble.

A human race deprived of its radicals, visionaries, liberals, evangelists, innovators and adventurers will be a lobotomised species, more like a termite mound than a society. It may be stable, organised and industrious—but it will also be less progressive, less creative and less resilient, because it will tend to suppress warning voices and views that contest social norms or argue for reform. It will be a species less able to avoid the main existential threats because—as with climate change and pandemic poisoning—to do so may threaten the self-interest of ruling elites.

The advent of quantum computers and universal surveillance may thus herald a profound fork in the path of human evolution, creating a species less wise, less fit for survival at the precise moment in history when that survival is most in play (Cribb 2016).

The Wealth Divide

Worldwide, while there is abundant evidence that humanity is becoming wealthier and achieving higher living standards as a whole, there is also evidence that wealth is being distributed less evenly across many societies and is concentrating in fewer hands: to quote the old saw, the rich are getting richer and the poor—relatively—poorer. The World Bank maintains an index ranking countries according to their income equality or inequality (World Bank 2015b), which tends to bear this out, while the international aid agency Oxfam argues that half the world’s wealth is now held by just 1 % of its people:

These wealthy individuals have generated and sustained their vast riches through their interests and activities in a few important economic sectors, including finance and insurance, and pharmaceuticals and healthcare. Companies from these sectors spend millions of dollars every year on lobbying to create a policy environment that protects and enhances their interests further. The most prolific lobbying activities…. are on budget and tax issues; public resources that should be directed to benefit the whole population, rather than reflect the interests of powerful lobbyists (Hardoon 2015).

According to the UK’s Guardian newspaper, in 2014, 80 individuals on Earth controlled more wealth than the poorest 3,600,000,000 (Elliott 2015). The Credit Suisse Wealth Report in 2015 came up with a similar estimate: that 1 % of the population controlled half the household assets in the world (Credit Suisse Research Institute 2015). In his book Capital in the Twenty-First Century, economist Thomas Piketty showed that income inequality in North America, Britain and Australasia had climbed steadily for three decades, and by the 2010s was back on a par with where it was in the 1920s–1930s! (Piketty 2014). In the United States, the top 1 % of earners controlled almost one dollar in every five of the nation’s income (their share rising from 8 % in 1980 to nearly 18 % by 2010). The United Kingdom’s rich share rose from 6 to 15 %, while Canada’s grew from 8 to 12 %. Many commentators have been quick to attribute the rise of extremist politics and demagogic figures to disillusion among voters over their dwindling share of national prosperity—since, as the New York Times put it: “the wealthy bring their wealth to bear on the political process to maintain their privilege” (Porter 2014).

The argument that income inequality leads to legislative stalemate and government indecision was advanced by Mian and colleagues in a study of the political outcomes of the 2008–2009 global economic recession (Mian et al. 2012), which found that “…politically countries become more polarized and fractionalized following financial crises. This results in legislative stalemate, making it less likely that crises lead to meaningful macroeconomic reforms.” It also affects intergenerational cohesion, explains Nobel economics laureate Joseph Stiglitz: “These three realities – social injustice on an unprecedented scale, massive inequities, and a loss of trust in elites – define our political moment, and rightly so…. But we won’t be able to fix the problem if we don’t recognize it. Our young do. They perceive the absence of intergenerational justice, and they are right to be angry” (Stiglitz 2016).

From the perspective of the survival of civilisation and the human species, financial inequality does not represent a direct threat—indeed most societies have long managed with varying degrees of income disparity. Where it is of concern to a human race whose numbers and demands have already exceeded the finite boundaries of its shared planet is in the capacity of inequality to wreck social cohesion and hence to undermine the prospects for a collaborative effort by the whole of humanity to tackle the multiple existential threats we face. Rich-against-poor is a good way to divert the argument and so derail climate action, disarmament, planetary clean-up or food security, for instance.

Disunity spells electoral loss in politics; rifts between commanders and their troops breed military defeat; lack of team spirit yields failure in sport; disharmony means a poor orchestra or business performance; family disagreements often lead to dysfunction and violence. These lessons are well known and attested, from every walk of life. Yet humans persistently overlook the cost of socioeconomic disunity and grievances when it comes to dealing with our common perils as a species.

For civilisation and our species to survive and prosper sustainably in the long run, common understandings and co-operation are essential, across all the gulfs that divide us—political, ethnic, religious and economic. A sustainable world, and a viable human species, will not be possible unless the poverty and inequity gaps can be reduced, if not closed. This is not a matter of politics or ideology, as many may argue: it is the same lesson in collective wisdom and collaboration which those earliest humans first learned on the African savannah a million and a half years ago: together we stand, divided we fall.

It is purely an issue of co-existence and co-survival. Neither rich nor poor are advantaged by a state of civilisation in collapse. An unsustainable world will kill the affluent as surely as the deprived.

What We Must Do

  1. Replan the world’s cities so they recycle 100 % of their water, nutrients, metals and building materials

    Pathway: primarily the role of urban planners and civic leaders, many of whom have already begun to develop ‘sustainable cities’. These cities are sharing their knowledge, technologies and experiences with one another around the planet via the internet, often placing cities far in advance of nations in dealing with issues such as climate, water, energy and recycling. Probably the most useful development would be a virtual ‘Library of Alexandria’ through which all urban plans, ideas, technologies, advice and solutions can be shared at lightspeed with cities all around the globe. Partnering between advanced and underdeveloped cities will help. The recycling of water and nutrients is the top priority.

  2. Stop destroying rainforests and wilderness, which forces animal viruses to take refuge in humans.

    Pathway: Global awareness and education are needed that new diseases usually come out of ruined ecosystems, and that those environments are being ruined by our own dollar signals as consumers. Consumer economics thus drives the growing risk of pandemics—and equally offers a solution, through informed consumers, ethical corporations and sustainable industries. Strengthen international efforts to restore soils, water, landscapes and oceans. Build price signals into food and other resource-based products that enable reinvestment in natural capital.

  3. Establish worldwide early warning systems for new pandemics. Publicly fund a major global effort to develop new antibiotics and antivirals.

    Pathway: WHO and world medical authorities are already working on this. It needs to be coupled with predictive systems for ecosystems facing profound stress, whence new pathogens are likely to spread.

  4. Destroy all stocks of extinct plagues. Outlaw the scientific development of novel pathogens with potential to harm humans.

    Pathway: as with nuclear weapons, this path is blocked by the refusal of militarised nations to disarm. Only citizen and voter action can compel them.

  5. Impose a code of ethics and public transparency—on pain of dismissal, refusal to publish and criminal penalties—on all scientific research with the potential to create autonomous machine intelligence or robotic devices which take their own decisions to kill people.

    Pathway: it is time for all scientific disciplines to impose a code of ethics on their practitioners, to reduce the likelihood of science being used for evil, dangerous or existentially risky purposes. Discussion at global scientific congresses should begin at once.

  6. Establish a new human right to prohibit mass surveillance of entire populations and to restrict cradle-to-grave data collection on individuals not suspected of a crime.

    Pathway: Constitutional reform will be necessary in most cases to prevent governments, and stronger privacy laws to prevent corporations, from amassing data on all citizens and misusing it. Citizen and voter action will be essential to drive this. Transparency about, and public control over, data collection must become a fundamental pillar of democracy.

  7. End poverty in all countries and redistribute human wealth more equitably as a primary requirement for the social cohesion necessary to preserve civilisation through its greatest challenges ever.

    Pathway: ending poverty is already cemented in global planning by the Sustainable Development Goals; however, it is necessary to engage transnational corporations more fully in this task, since they now control most of the world’s wealth. Dialogues around this have begun, but they need to make swifter progress, driven by awareness of the existential risk which disunity brings to all.

What You Can Do

  • Live a more sustainable life. Select all your purchases wisely and thus share your wisdom through the potent influence of market economics.

  • Practice the ancient human art of survival by anticipating risk: for every powerful new technology, ask yourself “What does this mean for my grandchildren?” and distinguish potential threats from opportunities.

  • As a voter, demand laws which require public disclosure of advances in artificial intelligence and nanoscience, so that there can be free and fair public debate about which aspects of these powerful new technologies should be freely developed and which should be restricted or banned.

  • Take a moral stand against machines which can kill humans based on an autonomous decision.

  • Take a moral stand against universal data collection and surveillance and their misuse. Demand constitutional reform to protect your freedom from spying.

  • Understand that a fairer distribution of human wealth will lower the burden on the planet, increase the prospects of peace and plenty for all, and build the social cohesion necessary to counter major existential threats to civilisation and human existence. Support social justice as well as legal justice.

  • Don’t buy products or shares in companies that exploit and impoverish other people, that damage the landscape, water or resources needed for human survival, or that spy on their customers. Don’t reward the wealthy for selfish behaviour.

  • Require ethics, decency and fairness of all those you deal with. Enforce them by your economic and democratic political choices.