Rules and regulations by themselves will not protect us from the false promises and excesses of technology, let alone from its negative and unintended consequences. What’s needed is a carefully orchestrated, ongoing process that continuously examines all of the elements, and especially their interactions, that are key not only to Thinking the Unthinkable but, even more, to coping with it. This chapter provides the broad outlines of such a process.

From 1972 to 1995, the Office of Technology Assessment, or OTA, was an agency of the US Congress. Its main purpose was to provide members of Congress and its committees with supposedly “objective and authoritative analysis of complex scientific and technical issues.” Over the course of its existence, it produced more than 750 studies on a wide range of topics, including acid rain, health care, and global climate change.

When OTA was closed in 1995, Republican Representative Amo Houghton criticized his own party for defunding the agency, noting that “we are cutting off one of the most important arms of Congress when we cut off unbiased knowledge about science and technology.” Critics saw the closure as a prime example of politics overriding science, and numerous scientists have since called for the agency’s reinstatement.

While OTA was shut down in the USA, the model survived, largely in Europe. Further, while campaigning for President in 2008, Hillary Clinton pledged to work to restore the agency if she were elected.

Today, OTA is needed more than ever. It not only needs to be resurrected, but also rebranded as The Office for Socially Responsible Tech, or OSRT.

While the Congressional Research Service has proposed re-establishing something like OTA, the proposal does not go far enough. More is needed than an agency that merely gives “advice” to members of Congress on matters of science and technology.

In particular, OSRT would subject to serious inspection and control those technologies that threaten to make significant alterations in the genetic makeup of humans, that is, at the level of DNA. Such technologies would not be allowed to go forward without OSRT’s direct approval and continuous monitoring. Furthermore, given that technology is more encompassing than ever, OSRT, unlike the old OTA, would have no formal expiration date.

Central to OSRT is the idea of a Science Court, or more appropriately, a Tech Court. First proposed in 1976, the Science Court never took off, for a variety of reasons, mainly political. Under the proposal, scientists on opposing sides of an issue would make their strongest case before a panel of specially trained Scientist Judges. As in a court of law, advocates would have the opportunity to question the evidence submitted by the opposing side. Having heard the evidence, the judges would render their decision. Further, the decision would be published so that the public at large would, it is hoped, gain a clearer understanding of the scientific issues at hand.

The Tech Court would work by having one or more sides argue the benefits of a proposed technology, while others would argue not only its possible disbenefits, but also how it could be systematically abused and misused, as in the case of Social Media and, certainly, facial recognition technology. In short, the Tech Court is the living embodiment of Dialectical Thinking.

The Tech Court would scrutinize each of the elements of the Unthinkable to help ensure that they do not lead to major crises. It would pay especially close attention to the pervasiveness and intrusiveness of a technology, as well as to the preventability and reversibility of its unintended consequences and undesirable side effects.

Most of all, OSRT would be guided by the Precautionary Principle. The burden would rest squarely on the developers of a new technology to make the case that it would not cause significant harm to individuals and to society at large. If they failed to do so, the innovation would be strictly prohibited, curtailed, and/or modified, assuming that it could be; if it could not, it would be withdrawn and eliminated altogether.

Needless to say, the Precautionary Principle has been subject to widespread criticism, the most prevalent being that it’s biased against the new because it imposes a severe threshold on anything truly innovative.

Be this as it may, when it comes to intervening in humans for the express purpose of “redesigning and thus improving the human condition,” the Precautionary Principle must be given the highest priority. We should all be extremely wary of any and all proposals to “improve the human race.”

In sum, more than ever, we need a new agency to protect us from the excesses of technology, and especially from the overinflated claims of its proponents. For regulations to do their job, they must be part of an ongoing process, a mechanism such as the Tech Court, that ensures both that they are appropriate and that a company is carrying them out as needed.

The Toulmin Argumentation Framework, TAF

The eminent Historian and Philosopher of Science Stephen Toulmin developed an ingenious framework for analyzing the structure of arguments. It’s especially pertinent to the idea of the Tech Court, and it’s vital to Thinking the Unthinkable.

The Toulmin Argumentation Framework, or TAF for short, is deceptively simple. It consists of a Claim C, Evidence E, a Warrant W, a Backing B, and a Rebuttal R.
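To make the structure concrete, the five parts can be modeled as a simple data structure. The sketch below is purely illustrative; it is not part of Toulmin’s own formulation, and the field names simply mirror the labels above. Note that the Rebuttal is itself a full argument, a point taken up below.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Argument:
    """A minimal, illustrative model of the Toulmin Argumentation Framework."""
    claim: str                # C: the conclusion the argument terminates in
    evidence: List[str]       # E: the data or facts mustered in support of C
    warrant: str              # W: why C is taken to follow from E; often itself
                              #    the Claim of a prior argument
    backing: str              # B: the deeper assumptions or values behind W
    rebuttal: Optional["Argument"] = None  # R: the opposing argument, in full
```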

All arguments terminate in a Claim C or a set of Claims. The Claim is the end result or conclusion of an argument. With regard to tech, typical Claims include one or more of the following: “Not only is our technology clearly superior, but it’s a major step forward. It’s unquestionably innovative. Indeed, it’s revolutionary. Further, its benefits are not only crystal clear, but indisputable.”

All arguments make use of some kind of Evidence E to support their Claim(s). Typically, E is whatever data or facts one can muster that lend support to the Claim(s). In the case of tech, the Evidence is generally the Track Record of the developers, i.e., their successes with previous innovations, how those innovations have performed, etc. If this is their first venture, then the Evidence is typically their credentials and the testimony and support of peers, backers, teachers, etc.

The strongest E consists of facts from independent authorities supporting all of the claimed benefits and properties of a proposed technology. As with Claims, Evidence differs with regard to how strong or weak it is. Whatever the case, E is typically the Empirical Basis of an argument.

In general, the different sides of an argument start with different Evidence because they are working backward from different Claims. A Claim can thus be either the beginning or the end of an argument, or both. Once one has a preferred Claim, one typically searches for Evidence to support it. Indeed, in the vast majority of cases, one is working backward from a preferred Claim or set of Claims. The term that best captures this process is Confirmation Bias.

Those who are on the opposing side of a technology generally argue that “The benefits of a proposed technology are far from clear; indeed, not only does it lead to their exact opposite, i.e., disbenefits, but to demonstrable harm.” In short, “The reasons for going forward with the proposed technology are not acceptable, i.e., they are unsupported.”

The upshot is that instead of facts, or more generally Evidence, always leading unequivocally to conclusions, more often than not it’s the other way around. One starts with a favored, pet conclusion, or typically a set of conclusions, and then works backward to make it appear that one derived it by first starting with “Impartial Evidence,” “Facts,” etc.

The Warrant is the set of reasons why the Claim follows from the Evidence. By itself, Evidence does not directly imply a Claim. A good way to think of it is that the Warrant is a “conceptual or intellectual bridge” that allows one to go from a limited set of Evidence E to a general conclusion C. To put it another way, the Warrant is the “because” part of an argument. For instance, a typical Warrant is: “Whenever E has occurred in the past, C has followed, because E is not only an indicator of C but a prime factor in its occurrence or causation; since E has occurred this time as well, we are Warranted in concluding C once again.” In this particular example, the Warrant functions as a “continuity preserver.” Supposedly, whenever a particular set of facts or events E has occurred “n” times, we are Warranted in concluding that it will occur an “n + 1”th time. Furthermore, according to this line of thinking (argument), the larger the n, the more entitled we are to conclude that the n + 1st occurrence will follow. Thus, if E has occurred 1000 times, then we feel confident that it will occur the 1001st time.
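This “continuity preserving” Warrant can be written out schematically. The formalization below is not Toulmin’s own; it is only one standard way of making the inductive step explicit, with Laplace’s classical rule of succession as a well-known attempt to quantify why a larger n feels more compelling.

```latex
% The continuity-preserving Warrant as an inference schema:
% E observed on occasions 1..n, plus W, licenses expecting C on occasion n+1.
\[
  \underbrace{E_1,\, E_2,\, \ldots,\, E_n}_{\text{Evidence}}
  \;\overset{W}{\Longrightarrow}\;
  C_{\,n+1}
\]
% Laplace's rule of succession (assuming a uniform prior on the unknown
% underlying rate) quantifies the growing confidence:
\[
  P\bigl(E_{n+1} \mid E_1, \ldots, E_n\bigr) \;=\; \frac{n+1}{n+2},
  \qquad \text{e.g., } n = 1000 \;\Rightarrow\; \frac{1001}{1002} \approx 0.999.
\]
```

Note that the rule presupposes that the underlying process stays the same; when it does not, no value of n helps, which is precisely the point of the example that follows.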

The problem, of course, is that the fact that a rooster has been fed 300 days in a row is no guarantee that its head will not be chopped off on the 301st day!

If the Evidence is the Empirical Basis of an argument, then, from the standpoint of the Philosophic School of Rationalism, the Warrant is its Analytic Basis.

More often than not, the Warrant is the Claim of a prior argument, and so on ad infinitum. The same is true of the Evidence and of all the parts of an argument. In addition, arguments are key parts of problems, indeed some of their most important parts, because arguments are used to define what a problem is in the first place and how it ought to be handled in the second. Even more, they are among the most important parts of Wicked Messes. Indeed, the parts of TAF are generally parts of a Mess.

Most of the time, Warrants are implied rather than stated explicitly. In fact, a great deal of the time they are unconscious; that is, the person making them is not fully aware of doing so. In general, Warrants reflect a person’s entire personal history. In the case of a society, they reflect its general history and current conditions.

In the case of tech, a common taken-for-granted Warrant is “The reputation and standing of a technology’s developers guarantees that it will work as planned.” Or, “The benefits of a proposed technology have been well demonstrated: it’s passed every important test with flying colors.”

Every argument also has a Backing B. B is the deeper set of underlying assumptions, basic reasons, or values as to why a particular Warrant holds. If the Warrant is not accepted at face value, which is often the case, then the Backing is necessary to support it.

In general, the Backing is the larger set of general Philosophic Assumptions a person holds about what is right (Ethics), about human nature, and about the world (Reality itself). Since Backings are generally taken for granted, they are likewise mostly implicit and unconscious.

Another way to think of it is as follows. If the Warrant W is the “conceptual bridge” that allows us to go from the Evidence E to the Claim C, then the Backing B is the “foundation” on which the bridge rests.

In the case of tech, the Backing is the credentials of the developers, plus the fact that they are “well-qualified.” It’s also the general body of stakeholders who will benefit from the technology. Notice that there is a great deal of overlap among all of the elements of TAF. Again, it’s a Mess, and more often than not, a Wicked Mess.

Finally, every argument has a Rebuttal R. In principle, R challenges each and every part of an argument. In terms of the metaphor of arguments as a “bridge” between Evidence and Claims, R attacks E, W, B, and C as strongly as it can. R thus tries to “tear down the entire bridge and its foundation.” For instance: “The Evidence is faulty and/or weak and thus does not support the Claim.” Or: “I accept your E, but it leads to a completely opposite C from the one you’ve claimed.”

In the case of the Tech Court, the Rebuttal is the opposition’s argument as to why the proposed technology is not beneficial, indeed, why every part of the so-called supporting argument is deeply flawed. In short, the only reasonable conclusion, the Opposing Claim, is that the proposed technology should be strictly prohibited.
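Continuing the illustrative Argument sketch from earlier, a Tech Court hearing reduces to two such structures set against each other. Every specific below (the technology, the quoted claims and warrants) is hypothetical, invented purely to show the dialectical shape.

```python
# A purely illustrative instantiation of TAF for the Tech Court.
# All specifics are hypothetical; they are not drawn from any real case.

proponents = Argument(
    claim="Our facial recognition system is beneficial and should be deployed.",
    evidence=[
        "Track Record: the developers' previous innovations and how they performed",
        "Testimony of independent authorities on the system's accuracy",
    ],
    warrant="Past success and expert endorsement indicate future benefit.",
    backing="The developers are well-qualified; stakeholders stand to gain.",
)

opponents = Argument(
    claim="The proposed technology should be strictly prohibited.",
    evidence=[
        "Documented patterns of abuse and misuse in comparable systems",
    ],
    warrant="Evidence of systematic misuse indicates demonstrable harm.",
    backing="The Precautionary Principle: the burden of proof lies with the developers.",
)

# R: each side's Rebuttal is the other side's argument, in full.
proponents.rebuttal = opponents
opponents.rebuttal = proponents

# The dialectical skeleton of a hearing: each side's C, W, and B laid side by side.
for side, arg in [("Proponents", proponents), ("Opponents", opponents)]:
    print(f"{side} claim (C): {arg.claim}")
    print(f"  because (W): {arg.warrant}")
    print(f"  resting on (B): {arg.backing}")
```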