Anthony Levandowski, a former Google engineer, "…founded a church called the Way of the Future, that was devoted to 'the realization, acceptance, and worship of a God-head based on Artificial Intelligence [AI]'. Machines would eventually become more powerful than humans, he proclaimed, and the members of his church would prepare themselves, intellectually and spiritually, for that momentous transition… 'I don't believe in God,' he [said]. 'But I do believe that we are creating something that basically, as far as we're concerned, will be like God to us.'"Footnote 1

This book is about the dire existential threat posed by modern technology. Indeed, it grows with every passing day. While much of our criticism is focused on Facebook, it’s only because it’s the most persistent and visible representative of what’s wrong with tech in general, which is our real concern.

Fundamentally, it's about what needs to be done to make both technologists and tech companies socially responsible. If they don't change and behave more responsibly, more ethically, they will be subjected to ever more burdensome rules and regulations, thereby making it harder for us all to reap the benefits of technology. In fact, since technologists and tech companies have demonstrated repeatedly that they would rather fight than accept legitimate and reasonable regulations, nothing substantial will happen without the imposition of stiff penalties and clear constraints. In short, they've brought onerous rules and regulations on themselves. What these need to be, and even more important, how they should be administered, is one of the major topics of the book. For this reason, we outline a process for helping to ensure that technology will act on our behalf. By themselves, rules and regulations will not do the job.

A backlash against technology—Techlash!—has been brewing steadily. It’s due to the confluence of a number of important factors. By itself, each is powerful enough, but acting together, their impact is magnified considerably. The following is merely a sample of the many issues we treat throughout:

  1. The monopolistic, predatory power of tech companies and thus the growing chorus to break them up; in a word, they are the "new robber barons";

  2. The shameful, out-and-out unethical behavior of tech companies, most notably Facebook, not to mention Instagram, Google,Footnote 2 etc., which continues to come to light virtually daily; they violate not only our privacy but our deepest sense of self and well-being by collecting, without our explicit knowledge and consent, enormous amounts of personal data, which they then sell to unscrupulous third parties for profit;

  3. The fact that technology and technologists always promise more than they can possibly deliver; in short, they hype the benefits of their creations with little if any thought for the inevitable negative and unintended consequences; much of this is due to serious shortcomings in the education of technologists, namely the lack of serious programs in ethics and management; as a result, and not always for the better, technology affects society as a whole;

  4. And not least of all, the fact that the Democratic and Republican members of Congress who were once staunch supporters of tech companies such as Facebook and Google, in large part because the companies contributed substantially to their campaigns, and who thereby let them operate, if not get away, with few if any regulations at all, are now so angry at their outrageous behavior that they are calling for tough regulations.Footnote 3 It has even made tech regulation one of the few issues on which Democrats and Republicans agree.

Not surprisingly, members of Congress have focused primarily on the monopolistic, predatory power of tech companies. While this is clearly important, they have given less attention to the negative aspects and unintended consequences of technology itself. Nonetheless, this is changing as well.

Dangers

Although it dealt exclusively with Facebook, a front-page article in the November 15, 2018, edition of The New York Times remains one of the most powerful exposés to date of the dangers posed by tech.Footnote 4 From Cyber Bullying to fake news, the unauthorized selling of our personal information to nefarious third parties, and Russian interference with the 2016 US elections, Facebook not only knew about these and other serious problems but deliberately ignored clear and persistent Early Warning Signs that they were guaranteed to occur—indeed, were already occurring. (With few exceptions, a persistent trail of Early Warning Signs and Signals precedes all crises. If they can be picked up and acted upon in a timely manner, then many crises can be prevented before they occur—the best form of Crisis Management (CM). In this way, Signal Detection is one of the key components of Proactive CM. We say more about this later.) To make matters worse, instead of taking quick and decisive action, Facebook aggressively suppressed the signals. In essence, it didn't want to hear any bad news, let alone act on it. All of this despite the fact that subordinates tried repeatedly to warn CEO Mark Zuckerberg and COO Sheryl Sandberg that major crises that could literally affect millions, if not billions, of users were virtually assured to occur. Far worse, Facebook engaged in deliberate smear campaigns to silence its critics, shamefully portraying prominent detractors as anti-Semitic. And it lobbied major Democratic and Republican members of Congress to go easy on regulations. Those same members are now so angry that they have imposed major fines on Facebook and are pressing for tough regulations.

One of the Greatest Threats

To reiterate, technology, which has made our lives incomparably better in every conceivable way, now constitutes one of the biggest threats facing humankind. It threatens not only our physical but also our mental and social well-being. It forces us to confront some of the most important questions facing humankind. What are the extent and the depth to which we are willing to allow technology to intrude into our lives? In other words, how pervasive and invasive are we willing to let technology be? Further, just because we can do something, does it mean that we ought to do it? Such questions are profoundly Ethical, for contrary to the outrageous contention that machines should be the new God, it is we who should control technology for our benefit and not be subservient to it, let alone elevate it to the status of a God.

With regard to the extent, i.e., the pervasiveness and thus the impact, of technology on our lives, an article in The New Yorker noted ominously:

…One study has estimated that by 2030 the "robocalypse" [i.e., the use of robots] will erase eight hundred million jobs.Footnote 5

With regard to invasiveness, AI enthusiasts, along with Elon Musk, have proposed putting chips directly in our brains so that we'll be "improved and enhanced humans," better able to keep up with the increasing demands of modern life. As a result, AI will not only literally be "in us" but be an integral part "of us." The hope is that we'll thereby be able to communicate seamlessly with all of our marvelous devices. Whether intended or not, the direct consequence is that we'll be cyborgs. The lines between humans and machines will have been permanently, if not irreversibly, crossed, if they have not already essentially vanished altogether. We're already so addicted to our innumerable devices that for all intents and purposes they are inseparable parts of us. In the not-so-distant future, the ultimate question will be, "Who and what will be human?" To put it mildly, it's an Ethical and Moral question of the highest and most pressing order.

Even without putting chips in us, there is mounting evidence that prolonged exposure to Social Media adversely affects brain development in young children. It affects adults as well.

Nonetheless, we would be remiss if we didn't acknowledge that there is a persistent, ongoing debate as to how much, if at all, Social Media are responsible for the attitudes and states of mind of young people.Footnote 6 In part, the debate is due to the sharp differences in methods and underlying philosophical positions between those who favor large-scale surveys and those who are partial to one-on-one clinical observations.Footnote 7 Large-scale surveys do not necessarily find that Facebook is directly responsible for mental health issues such as depression; rather, the effects are more insidious, such as the disruption of sleep. Nonetheless, even surveys find that Social Media are an important factor in Cyber Bullying, and in this sense they affect the mental health of children. In addition, clinical observations have found that Social Media are a major threat to normal child development.

As always, the supreme challenges facing us are not technical per se, but profoundly Ethical. They are embodied in the justifications we give for allowing a technology to proceed in the first place, let alone in the ends it is intended to serve. Make no mistake about it: technology is never neutral. Even though it's often used for purposes and in ways other than those for which it was intended, it's always designed with some supposedly desirable end in mind. We say more about the Ethics of tech later.

In proposing that chips be placed in our brains for the prime purpose of engineering the next stages of human evolution, little consideration has been given to what's to prevent someone from hacking into the chips and thereby gaining access to our most personal thoughts and desires, if not directing them altogether, or at the very least seriously affecting them. How, in other words, do we protect ourselves against the unwanted intrusion of malevolent parties? If we have trouble protecting our data, how can we protect our minds and bodies?

The fact that the mind–body organism is a very complicated system, one so highly dependent on the interactions between all of its parts that no one can foresee, let alone predict, all the things that could go wrong, doesn't seem to have crossed the minds of those making such proposals, or at least not to the extent that it should. Indeed, the mind–body organism is as complicated as, if not more so than, any technical system currently in existence.

If this weren't ominous enough, efforts are under way to build robots that can not only read but respond to our emotions. Apparently, more and more of us feel more comfortable talking to an AI-enhanced robot about our deepest feelings and emotional states than to a fellow human being. Again, just because we can do something, does that mean that we should necessarily do it? "How will it impact human beings and society in general?" needs to be given prime consideration, as does "How will it broadly affect our relationships with our fellow humans?"

Table 1 presents a summary of the various threats posed by technology with which we deal throughout. It also includes a brief synopsis of various remedies.

Table 1 Threats posed by technology

Pervasiveness and Invasiveness

While we explore more examples throughout, the preceding is sufficient to illustrate two of the major themes of the book. The first, i.e., Robocalypse, is an instance of the pervasiveness of technology, the extent to which it affects society as a whole. The second, AI chips in our brains, is an example of invasiveness, how deeply it intrudes into our minds and bodies, and thereby also affects us as a whole. Since the two constantly interact and therefore reinforce one another, they act on multiple levels of society simultaneously.

Pervasiveness and invasiveness are in fact two of the major dimensions that are critical in evaluating the threats posed by technology. For example, Facebook scores poorly on both counts. It's pervasive with regard to its effects on society as a whole: its ability to spread and to serve as a platform for fake news, dis- and misinformation, interference in our elections by nefarious foreign governments, etc. It's nothing less than a direct threat to our democracy (see Table 1). It's invasive in that, once again, a growing body of studies shows that the more young people use Facebook, the lonelier, more isolated, and more depressed they are. If this weren't bad enough, Social Media have seriously impaired the ability of young people to engage in spontaneous, unscripted conversations.Footnote 8 After all, there are no "strict rules," as it were, for starting, maintaining, and ending conversations.

In particular, the American Psychological Association has shown that adults, too, suffer from anxiety and depression the more they use Social Media.Footnote 9 The supreme irony is that Social Media, which were supposed "to better connect and bring us together," are one of the greatest threats to our ability to relate socially to one another.

Other dimensions play an equally important role in assessing the threats posed by technology, and thus, hopefully, in our ability to control it for our benefit: (1) whether the potential dangers of a particular technology are preventable and (2) whether they are reversible. Both are important with regard to whether we ought to go forward with the development and subsequent deployment of a technology; stronger still, they need to play a major role in that decision. Thus, if we go forward and later find that a technology is harmful, are the effects reversible or not? Obviously, those technologies that pose grave danger but are neither preventable nor reversible are extremely serious and thus need to be strictly controlled.

One of the most disturbing cases is the following. Although it was done for the ostensible purpose of eliminating the threat of childhood diseases, a Chinese scientist made direct alterations to the DNA of twin girls, thus giving rise to genuine fears of "designer babies." It's a prime case of something that should have been debated seriously, if not prevented altogether, but was not. Even worse, the deeper fear is that it's not reversible. We discuss such considerations further.

We cannot emphasize enough that, left unchecked, technology constitutes an existential threat of the gravest order. Not only are society and democracy under continuous assault, but so is the fundamental nature of the self. We cannot overemphasize the dangers inherent in the fact that, for better and for worse, we possess the godlike ability to intervene at the genetic level and thus alter the basic makeup and thereby the ultimate nature of humans.Footnote 10

Beyond Privacy

Because of its importance and the constant attention that is rightly paid to it—indeed, that it demands—we need to say a few words about privacy. Privacy is in fact the single issue that’s most responsible for the growing doubts about the benefits of technology and thus the rising backlash against it. It also illustrates many of the issues connected with pervasiveness, invasiveness, preventability, and reversibility.

To our detriment, the USA has among the weakest safeguards in the world when it comes to protecting the data that users both willingly and unwillingly give to tech companies every hour of every day. And, as we’ve noted, the USA also has the weakest protections in safeguarding young children from harmful content.

Once people check the "agree box" on an app or program, they essentially give tech companies the unfettered right to use their data with few if any restrictions. In addition, they have virtually no understanding of how, by whom, and for what purposes their data will be used. The result is that there is little reason to trust tech companies to protect us, especially when their profits are tied directly to selling our most personal and highly sensitive information to third parties, with, once again, Facebook being the preeminent example.

In sharp contrast, the EU, and the UK in particular, have some of the strictest privacy protections of all. First, agreement statements must be written in plain, easy-to-understand language that is, as much as possible, devoid of unintelligible legal jargon. Second, users must be informed on a constant and timely basis of how their data are being used; that is, they must give their explicit consent as to how, when, and by whom their data will be used.

The time has come to apply the same tough standards in the USA. Weak, after-the-fact self-regulation just doesn't work. We have to get out in front of potential breaches of our most personal data before they are too onerous to fix.

The situation has gotten worse given that tech companies now collect enormous amounts of data about us, as much as if not more than the government. Accessing our credit information, social security numbers, where we live and work, etc., without our full knowledge and explicit consent—if not stealing them outright—is bad enough, but collection now extends to our preferences for products, what TV shows and movies we like and don't like, what books we read, for whom we vote, and even our deepest values and beliefs. Yes, tech companies have now reluctantly agreed to protect personal data such as credit information, but they have not taken steps to safeguard the other data they have skimmed without our awareness, let alone our full consent. As a result, traditional privacy agreements fall short.

For one, the users of apps need to be made more aware of how they can control the privacy settings on their devices. Information regarding their location, the phone calls they make, the sites they access, etc., can be controlled; by adjusting the privacy settings on their phones, users can block unauthorized third parties from gaining access to their personal information. There need to be clear instructions for "opting in" versus "opting out."

But even more, we need to be reassured that tech companies are doing everything they can to ensure that the data pertaining to us will not be misused or abused by either the company or third parties. Precisely because they have resisted such steps so strongly in the past is exactly why tech companies need to be strictly regulated. They have spent millions of dollars on lobbying to help ensure that they get the weakest regulations possible. For this reason, we propose that before they are allowed to operate, they must first develop and then submit plans to a new government agency, about which we talk later, charged with protecting the public from the dangers attendant to all technologies. In other words, tech companies must pass strict tests before they are given licenses to operate. They must also be rebranded for what they are: "media companies" that are subject to strict government regulations.

Writing in Time, Richard Stengel has proposed the creation of a senior federal official and even a cabinet office to deal with disinformation and election interference.Footnote 11

If “scrubbing our data” isn’t worrisome enough, consider once again that more portentous things are in the offing such as putting chips in our brains so that we’ll be “improved and enhanced humans.” We cannot overemphasize: What’s to prevent someone from hacking into the chips thereby gaining access to our most personal thoughts and desires?

Figure 1 not only summarizes the discussion thus far, but illustrates the intense interactions between all of the various factors.

Fig. 1
figure 1

Interactive effects of technology

In sum, tech needs to be strictly regulated! It cannot be allowed to operate at will.

In closing, a recent article in The New York Times shows that the fears regarding technology are far from overblown.Footnote 12 Using facial recognition technology, it's now possible to match, and thus identify, the faces of those who have had magnetic resonance imaging (MRI) scans of their brains without their knowledge, and thus without their consent. The inescapable conclusion is that, for all intents and purposes, Privacy Is Dead!

Concluding Remarks

This book is about what needs to be done to ensure that technology serves us, not for us to be subservient to it.

Precisely because we have no way of knowing for certain whether the potential threats that we and others have identified—and, even more worrisome, those that we have not—will turn into full-blown crises and calamities, we are sounding repeated words of alarm. Before one can mitigate, and thus hopefully prevent, threats from turning into full-blown, out-of-control crises, one first has to acknowledge their possibility and monitor their status carefully. In our experience, the number of organizations that acknowledge threats and prepare seriously for them is far too small. The time to start preparing is way overdue!

But even more, we have to acknowledge that many of the threats that we and others have identified are already taking place.