1 Introduction

Although many aspects of non-biologic intelligence systems (NBIS) gaining legal protections in the United States and elsewhere remain unexplored (Dowell 2018), the majority of them cannot be thoroughly analysed until judiciaries internationally have had more time to comprehend the ramifications that will follow from either accounting or not accounting for NBIS citizens like Sophia the Robot (Jaynes 2020). While this uncertainty might not ordinarily go unchecked from an ethicist’s perspective, it is an inevitability that must be faced given the complexities of the judicial and legislative processes in a globalised society so driven towards automation. What cannot be overlooked, however, is that there still exist ways in which NBIS may inadvertently gain legal protections or citizenship without the current lex lata [Footnote 1] ever changing through legislative or judicial means. Most simply, this is through the integration of NBIS into the human form—which should properly be considered human augmentation (HA) rather than human enhancement, given that enhancement of the human form can also occur through nootropics or similar chemical substances without the integration of “smart” devices into the body.

To clarify, the notion of NBIS—as opposed to non-biological intelligence (NBI)—is used herein to invite one to imagine machine intelligence (MI), as opposed to some theoretical intelligence existing separately from the chemically organic intelligence found on Earth. Admittedly, the author’s explication of NBI (Jaynes 2020, 344) has muddled how the latter term may be used when referring to MI or to other terminology related to artificial (computer) intelligence(s), which is the inspiration for this semantic delineation. In an effort to prevent further semantic confusion within the field, this author has compiled a list of terminology commonly used within this essay—coupled with succinct definitions of the specific aspects of MI each term refers to—to better express their unique nuances and the influence they have on the ideas presented herein and elsewhere in the realm of AI ethics (Table 1). Understanding that many of the phrases in this list have not yet been presented to the academic community, it is hoped that these more granular observations on the potential forms NBIS may take will allow for a greater range of specific discourse into the legal nature of each item—thereby widening our scope beyond potentially biased perspectives on the nature of MI relative to human intelligence (Mostow 1985; Lauret 2020; Maruyama 2020).

Table 1 A list of definitions for common and new terminology related to AIS. Citations appear in entries 1–3 and 8, with slight grammatical corrections to ease reading comprehension

Regardless of the current bioethical or technoethical discourse surrounding citizenship for MI-based entities, [Footnote 2] many medical operations have already succeeded in integrating pieces of technology into and onto patients internationally (Saal and Bensmaia 2015; Ha et al. 2019; Jeong et al. 2019; Zhuang et al. 2019; Bumbaširević et al. 2020). [Footnote 3] Whether in an effort to overcome our human limitations, become more integrated as a society, or remedy an impairment sustained in the line of duty or by accident (Galván and Luppicini 2014; Ruggiu 2018; Sullivan 2018; Bumbaširević et al. 2020), the concept of integrating computers and robotics into and onto the human form is rapidly gaining interest. The legal complication regarding HA [Footnote 4] is that there is no limit to how much a person can change about their body insofar as they possess sufficient resources to undergo the procedures involved—ethical arguments aside. Furthermore, there is the larger question of when technological augmentation crosses the line between maintaining the taxonomical classification of a human and becoming a cybernetic entity—as there has been little, if any, need to this point in history to make such delineations. [Footnote 5] Hence, de lege lata at present cannot define when a biological human becomes a cybernetic or computationally enhanced human—let alone account for the distinctions between a natural human [Footnote 6] and one supported through invasive or non-invasive HA technologies.

The need to thoroughly examine the socio-economic and socio-political ramifications of HA in the context of MI-based citizenship is, as of yet, being outpaced by the desire of the capitalist to push the boundaries of technological ability. As academics may be aware, this push is geared towards developing a more “productive” society. The true result, however (as has been seen repeatedly across various industries in the past few decades), is that society is once again left in a state of moral ambiguity as technology advances seemingly unchecked in favour of the capitalist’s desires. As stated by Lin, Jenkins, and Abney:

Not everyone interested in robots has a high opinion of ethics or appreciates the social responsibility of a technology developer. They’re often happy to let the invisible hand of the market and legal courts sort out any problems, even if the harm was preventable. While there’s something to be said about reducing barriers to innovation and the efficiency of an unfettered market, ethics also matters… A common reaction to the suggestion of an ethics discussion is that the many benefits of robots outweigh the social risks, and therefore we delay these benefits when we’re distracted by ethics… But this line of reasoning is far too quick. First, not all ethics can be done by math, with a simple calculation of net gains or net losses. Rights, duties, and other factors may be very difficult to quantify or fully account for. Attending to ethics also doesn’t necessarily mean slowing down technology development; it could instead help clear a path by avoiding “first-generation” problems that may plague new technologies… It’s difficult to think of a real-world example where ethics may be ignored because the benefits are so compelling. For instance, we could rush cancer drugs to market, potentially saving tens of thousands of lives every year, just as robot cars might. Yet there is a consensus among both bioethicists and researchers that rushing development could waste money on quack remedies, and insufficiently tested therapies could kill patients (2017, x–xi).

One case in point is the aforementioned bestowal of citizenship upon a robotic entity in 2017 during a major technological conference—or rather, the lack of universal acceptance of Sophia the Robot’s citizenship and its portrayal as a public relations stunt (Walsh 2017). Even if the nation itself does not intend to define the protections afforded to Sophia the Robot—an entity granted citizenship through executive political powers rather than through naturalisation—the refusal to treat this event as legitimate already displays the struggles that will be faced by patients who integrate self-learning artificial intelligence systems (SLAIS) into their chemically organic forms through surgical augmentation. While an argument can be made as to the limitations of Sophia the Robot’s relative level of “knowledge” and portrayal of a “will”, more sophisticated SLAIS will further blur the line between MI and human “knowledge” to the point that they are indistinguishable. After all, what is MI but a pursuit towards automating human “knowledge” in such a way that humans are freed from tasks considered menial?

A refusal to accept sufficiently advanced SLAIS as capable of executing civic duties, when they already perform tasks once considered feasible only for sufficiently educated humans, will inevitably generate a scenario in which sufficiently augmented humans are no longer considered “human” (Mostow 1985; Jaynes 2020). [Footnote 7] Frankly, this refusal implies that notions such as basic “human” rights are inapplicable to these entities—regardless of whether they were born into a particular nation or naturalised into it—and they are feasibly left without any legal protections on an international scale due to their “pseudohuman” status (Rorty 2001). Of course, such a lack of nationality may be avoidable if these augmented humans are classified as “nationless persons” through entities such as the United Nations. The issue then becomes, however, how such entities are treated by the rest of the international community given their former status as natural-born or naturalised citizens.

Digression aside, this essay proceeds as follows. Sect. 2 offers a brief survey of AIS concerns that were not addressed in the author’s prior essay. Sect. 3 addresses current concerns surrounding HA in relation to NBIS citizenship loopholes, alongside concerns about the overall impact of HA on human emotion. This section is of great importance, as the primary concern presented herein is the ways in which NBIS may inadvertently be granted citizenship vis-à-vis integration into the human form, or citizenship may be lost as a result of blurring the line between “human” and “machine” as mentioned in the prior paragraph. Finally, Sect. 4 briefly reviews science-fiction media and its impact on current technologies.

2 Assumption or prediction: when AI gets it wrong

Recently, there has been a surge of literature surrounding the concepts of “ethical program design”, “moral enhancement”, and “moral machines”. [Footnote 8] Simply put, this is an effort by ethicists and moral philosophers to add input into a realm of technology design that is responsible for the performance of NBIS. After all, as programming code and algorithms increase in complexity, the risk rises that the code or algorithm will fail to perform without error.

…even for fundamental problems such as sorting, there can be multiple alternative algorithms with different strengths and weaknesses, depending on what our concerns are… computer science has traditionally focused on algorithmic trade-offs related to what we might consider performance metrics, including computational speed, the amount of memory required, or the amount of communication required between algorithms running on separate computers (Kearns and Roth 2020, pp. 4–5).
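To make the quoted trade-off concrete, here is a minimal Python sketch, purely illustrative and not drawn from the cited text, of two correct sorting routines that exchange speed for memory:

```python
# A minimal illustration of the trade-off quoted above: two correct
# sorting algorithms that differ in speed and memory usage.

def insertion_sort(items):
    """O(n^2) comparisons in the worst case, but needs only O(1)
    extra memory beyond its working copy."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def merge_sort(items):
    """O(n log n) comparisons, but allocates O(n) extra memory
    for the merge step."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Both produce identical output; they differ only in resource usage.
assert insertion_sort([3, 1, 2]) == merge_sort([3, 1, 2]) == [1, 2, 3]
```

Neither routine is better in the abstract; which to deploy depends on whether time or memory is the scarcer resource, which is precisely the kind of judgement the quoted passage describes.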

Following this notion, the need for programmers to remain as unbiased as humanly possible when coding AI or robotics systems has been viewed as one of the most crucial aspects of the field, beyond the ability to understand programming languages and how different languages interact with each other—particularly in regard to the global paradigm shifts that have occurred during the current pandemic and after the death of George Floyd (Dudziak 2020; Warner et al. 2020).

Though programmers should strive to ascertain the objectively correct answer, this does not eliminate the fact that a decision might have to be made prior to one’s having secured the objectively correct answer… we too think that programmers should continue to deliberate about moral problems insofar as they are able. Nevertheless, we believe that there are circumstances in which programmers may lack the luxury of time or resources to continue deliberating but must nevertheless decide how to act. [They] might deliberate for some time, but [they] cannot put all of [their] time and effort into figuring out what to do in Crash and will need to make a decision soon enough (Bhargava and Kim 2017, 5). [Footnote 9]

The situations programmers find themselves in, however, are not limited to how a computer system should act in a particular situation. In many instances, they may have only the limited ability to determine how a program should go about data mining information of a particular kind—and, in tandem, what other information should be collected versus what is currently legally allowable. The term “legally” is used here to emphasise that de lege lata, corpus juris secundum, and corpus juris gentium [Footnote 10] are not always able to fully define the limits of what personal information can be attained, stored, and utilised; nor the level of consent a person must give for the information to be attained, stored, or utilised by a third party.

A “prime” example of this would be Amazon’s smart home devices, managed by Alexa. Regardless of how often one utilises the Amazon store, a Kindle device, or even the Ring doorbell, thousands of data points are collected and stored within Amazon’s vast databanks (Kelion n.d.). All of these data can then be utilised to transmit messages through Ring to the delivery drivers employed by the company’s Logistics branch or other providers, to predict months in advance what products might interest a user (and users with similar shopping patterns), or to troubleshoot potential issues with the customer experience of Amazon’s products via a user-interface development team (Kelion n.d.). What falls within the limits of the law regarding data mining is often examined only after it has been discovered that a dataset may compromise the privacy of a particular person or group—which has led many to point to an inherent bias in various data mining programs.

2.1 “Bias” and its ethical considerations in programming

Before delving further into the subject, let it be clear that there is no study that does not posit some bias or other—though there are two distinct forms of bias to distinguish when considering AIS in this context. [Footnote 11] “Bias”, after all, is what grants each human their individuality and thus their distinct personality. It would be incorrect to assume that any endeavour in the natural or social sciences can be completely un-“biased” in any way, shape, or form, as that would entail that the study itself does not wish to find some commonality to be true. In a society where claims of “bias” seem to be enough to refute empirical inquiries, we cannot lose sight of this fact. There are times when a claim of bias is borne out, as is often seen in college admissions or rates of incarceration in various American states. However, these biases are often the result of an inadequate examination of population sizes. As such, it is impossible to claim that African-Americans (for example) are more likely to commit criminal offences than Caucasians who grew up in the same geographical area, as such a claim disregards the population sizes of each group and the socio-economic and socio-political influences exerted upon them. Another example would be to claim that Arabs are more likely to join radicalised local terrorist organisations than their Caucasian neighbours practising the same faith, as there is no comparison to other religious denominations and their rates of, or rationales for, radicalisation. Even if studies purporting to prove these claims exist, we must remember that there are far too many aspects of each subject to analyse and compare—such as socio-economic status, religious affiliation, home environment, and the like—for the comparison to make any real sense to the average individual or to well-learned researchers.

It is for this reason that calls to program “bias” out of software seem, to the author, misguided—at least given the lack of distinction often made between bias in human-centred decision-making processes and bias in the information an AIS is trained with (Challen et al. 2018; Parikh, Teeple, and Navathe 2019). To begin with, whose “bias” is to be programmed out of a dormant/static AIS (D/SAIS) or SLAIS? Whose “bias” will inevitably replace it, and what impact will that have on individual or societal capability? Even if “bias” can be removed, will that have any positive long-term impact on the socio-economic equality of a given region? Anyone who can give a concrete answer to all of these questions is, realistically, only fooling themselves. If a “bias” exists in a system, the “bias” is present only in the data that was left out of the set—if it was present to begin with. It is not the same instinctual bias humans utilise to determine what may potentially cause us harm. Arguably, it is this instinctual “bias” that seems to be the focus of so many calls for “bias” removal in AIS—as any attempt to remove data-based bias entirely is impossible given the need to restrict parameters in datasets. While data-based bias may influence how notions of societal equality are perceived, we cannot forget that confusing our understanding of “equality” in AIS also confuses our understanding of “bias”.
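As a minimal, hypothetical sketch of this point (all names and figures below are invented), consider a snippet in which two groups behave identically, yet records for one group were selectively omitted from the training data; the resulting “bias” lives entirely in what was left out:

```python
# A minimal, hypothetical sketch: two groups repay loans at the same
# rate, but records for group B were logged mainly when a default
# occurred. The skew in the trained estimate lives entirely in the
# omitted data. All figures are invented for illustration.

import random

random.seed(0)

# Ground truth: both groups repay at the same 70% rate.
population = ([("A", random.random() < 0.7) for _ in range(5000)]
              + [("B", random.random() < 0.7) for _ in range(5000)])

# Selection effect: group B rows reach the dataset only on default,
# plus a small 5% trickle of repaid cases.
training = [(g, repaid) for g, repaid in population
            if g == "A" or not repaid or random.random() < 0.05]

def repay_rate(rows, group):
    outcomes = [repaid for g, repaid in rows if g == group]
    return sum(outcomes) / len(outcomes)

print("A:", round(repay_rate(training, "A"), 2))  # ~0.70
print("B:", round(repay_rate(training, "B"), 2))  # ~0.10, despite identical behaviour
```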

To be clear: the presence of inequality in a system does not necessitate the presence of immoral or unethical inequality within that system (Kearns and Roth 2020, pp. 57–93). After all, humanity can retain individualism because inequality exists as a base fact of life. Without barriers to overcome, there would be no point in having individual perspectives on the functioning of the universe around us, or drives to improve one’s standing. Without a healthy dose of inequality existing on either a genetic or a socio-political level, humanity may as well develop a hive-mindset and forget about luxuries such as freedom of expression and the right to ownership of material objects and social constructs. That is not to deny that there are cases of immoral or unethical inequality in research—rather, such cases of unethical “bias” in data mining generally result from the programmer having no knowledge of a situation in which their software could generate a given biased result.

As mentioned earlier, there are limits to the amount of consideration a programmer can put into their software, much as there are limits to the amount of consideration any individual can give to a specific moral dilemma. Nevertheless, in a court of law, our focus is upon the intent—or rather, the perceived intent—of one party to engage in a given action. Broadly speaking, jurors are testing the will of one subject to harm another. [Footnote 12] “The legal issue surrounding deep-learning systems and genetic programming designed to allow [NBIS] to build its own code is that the computer becomes the author of its programmed set of instructions. At some point, the human author will be unable to determine if the code possessed by such a device was created by the human author’s command” (Jaynes 2020, p. 347). Also, “…there exists the possibility that the ‘will’ of the [NBIS] and the will of the programmer will diverge as the [NBIS] develops” (Jaynes 2020, p. 348). A case could be made in which the programming for one piece of software is transplanted into a system that performs a similar, though more sophisticated, function in a self-learning NBIS—which in reality would constitute a scenario in which the original programmer is unaware of the consequences this other piece of self-learning software produces. Should the liability then fall upon the human author, the organisation funding the software development and setting standards for what the program should be capable of, or the NBIS? [Footnote 13]

As these various mechanisms take up increasingly influential positions in contemporary culture—positions where they are not necessarily just tools or instruments of human actions but a kind of interactive social entity in their own right—we will need to ask ourselves some rather interesting but difficult questions. At what point might a robot, algorithm, or other autonomous system be held accountable for the decisions it makes or the actions it initiates? (Gunkel 2018, x).

For this reason, it may be advisable for programmers to add disclaimers into their code regarding the intent for its use and the limitations of the datasets the programme utilises. Such disclaimers would go a long way towards ensuring that those wishing to utilise bits and pieces of a string of code are made aware of the limitations present in the established system. Certainly, they would also aid in defending against accusations of “bias” in any system—in addition to aiding the legal determination of intent should the system generate some kind of harm that requires legal action.
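As a sketch of what such a disclaimer might look like in practice, a programmer could lead a source file with a structured statement of intent and dataset limitations; the module name, fields, and wording below are hypothetical rather than an established standard:

```python
"""candidate_ranker.py -- a hypothetical module header illustrating the
kind of disclaimer proposed above; the fields and wording are invented,
not an established standard.

INTENDED USE: internal triage of job applications only; not validated
  for credit, housing, policing, or medical decisions.
TRAINING DATA: applications received 2015-2019 from three offices;
  rural applicants and non-English CVs are under-represented.
KNOWN LIMITATIONS: scores are uncalibrated across age groups; retrain
  before use on any population unlike the one described above.
INTENT: scores are advisory; final decisions rest with a human reviewer.
"""

# Exposing the disclaimer programmatically lets downstream reusers of
# this code log or display it alongside any result derived from it.
DISCLAIMER = __doc__
```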

2.2 The case of missing datasets in AI systems

Alarmingly, modern AI systems are becoming increasingly reliant upon datasets that are severely depleted of actual survey results (Barrat 2013; Stewart, Sprivulis, and Dwivedi 2018). [Footnote 14] Though the concept of prediction software may seem extremely convenient for the full range of uses it possesses, the reality is that humans cannot generate enough data to satisfy the amount of information these systems require to function efficiently (Barrat 2013). “More complicated algorithms—the type that we categorise as machine learning algorithms—are automatically derived from data. A human being might hand-code the process (or meta-algorithm) by which the final algorithm—sometimes called a model—is derived from the data, but she doesn’t directly design the model itself” (Kearns and Roth 2020, p. 6). Reason still dictates that self-learning software bases itself upon some established dataset, whether that be a shopper’s browsing history on a website or a detailed list of a student’s application contents. Situations can be imagined in which a programmer utilises a dataset they believe to be empirical but that was in fact generated by a SLAIS, though such scenarios are currently few if they exist at all. One still cannot help but wonder what would happen if a SLAIS developed a new model for itself based upon non-empirical data—e.g. data provided to it only by calculation.
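A minimal sketch may make the Kearns and Roth passage concrete (the data points are invented): the human hand-codes the meta-algorithm, here ordinary least-squares fitting, while the data rather than the programmer determine the parameters of the final model:

```python
# A minimal sketch of the quoted distinction: the programmer hand-codes
# the meta-algorithm (least-squares fitting), but the data determine
# the final model's parameters. The data points are invented.

def fit_line(points):
    """Derive a linear model y = a*x + b from (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in points)
         / sum((x - mean_x) ** 2 for x, _ in points))
    return a, mean_y - a * mean_x

# The same hand-written code yields different "models" from different data.
print(fit_line([(0, 1), (1, 3), (2, 5)]))  # (2.0, 1.0)
print(fit_line([(0, 0), (1, 1), (2, 4)]))  # (2.0, -0.333...)
```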

The “big picture” of prediction data is, inevitably, a method whereby companies can link various items of interest to a consumer in order to make an informed decision about what said consumer might also be interested in. This fact is re-emphasised here to contrast with the preceding discussion—namely, that this “prediction” is often artificially generated by a D/SAIS or SLAIS. While this sort of prediction is relatively harmless for commercial usage, there are concerns about the impacts it would have outside of general merchandising or media recommendations. For instance:

Suppose that the dominant political party running against an incumbent politician were to use a SLAIS to determine the candidate with the best odds of winning in an upcoming election cycle. This SLAIS is programmed to account for local gerrymandering practices [Footnote 15] so that it can make its determination based upon the political stances that would have the support of the greatest number of residents in this particular race. The SLAIS, after running through its software several times to ensure accuracy, determines that the political party should select a candidate from outside its mainstream selection—namely, an independent candidate with no strong affiliation to either political party. This major political party, not wishing to break from tradition, decides against selecting the independent candidate in favour of a runner-up in the SLAIS’ calculations. Come the day the election results are released, the incumbent wins by a resounding margin, meaning that the opposing political party could not convince all of its supporters to centralise under its preferred candidate. However, the opposing party then uses the results given to it in a different political contest and defeats the incumbent—though for a lesser position within the local government. After careful consideration, the party decides to use the first result given to it by the SLAIS in the next major political contest. While this remains an effective technique, the party’s historical political stances begin to degrade with each new contest, to the point that there is no clear position the party takes.

Although this supposition may seem extreme to some readers, there are many political contests where a scenario such as this could very likely play out, such as the 2012 United States Presidential election cycle, in which the opinions of former President Obama and (now) Senator Romney were virtually identical on multiple policy fronts. One could argue that, without party-line delineations, it would have mattered little how that election concluded insofar as Senator Romney’s political stances would not have changed during a term in the White House, given how relatively unradicalised both parties were at the time.

While such a SLAIS would make the practice of gerrymandering impractical, it also has the potential to radicalise established political parties severely enough to fracture them. Unlike in European politics, such fracturing is rare in the American political system—where only the major Democratic and Republican parties see extensive limelight during Presidential (and frequently Congressional) races. Under such strains, it is highly likely that the political system as America has come to know it would devolve into chaos, potentially to the degree of a Second Civil War if states wished to secede from the Union to make elections run smoothly. Although such worries are a digression in this dialogue, the fragility of politics in the United States is a concern for other reasons discussed herein and in the author’s prior dialogue—namely, the necessitation of social-based politics as technological advances shrink job pools in various industries due to automation.

Continuing with the supposition presented, another topic of interest to this dialogue is the bi-partisan polling required for the SLAIS to base its calculations on. As collectors of sociologic data already understand, it is nearly impossible to receive as many responses as one sends out to a given population. [Footnote 16] Where the SLAIS’ dependability comes into question (Polson and Scott 2018, 13–42) is where the data goes from “fact” to “assumption”. Realistically, a prediction system is not likely to accurately predict the result of some given event every time it occurs (Polson and Scott 2018; Kearns and Roth 2020). To reiterate this point, let us take the case of Netflix as an example.

In order for Netflix to provide a consumer, with any confidence, with a movie or television show they might enjoy as much as other shows they have viewed on the platform, the system needs a pre-defined list to begin its estimations from—e.g. shows often watched in a particular geographic area, shows watched by a certain age or gender group, or input from the consumer. Without this information, Netflix’s AIS may as well be tossing a dart while blindfolded, hoping by luck to find something a consumer would want to watch. Since it is doubtful that a consumer will give feedback on every show they watch, the AIS must make predictions based on users with similar viewing histories—hence the statistical “assumption” that a consumer would want to watch the BBC’s Sherlock after viewing the 2009 Sherlock Holmes film starring Robert Downey Jr. Personal experience with Netflix’s “prediction” accuracy aside, this type of “assumption” in political decisions like candidate selection could lead to severe misinterpretations of population trends given the volatility of information of this type. Where new voters are continually being entered into the system, and others are continually being removed, it cannot be said that issues that were relevant in a prior election cycle will remain relevant in an upcoming one.
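A minimal, hypothetical sketch of this “assumption” (the titles and viewing histories are invented, and this is in no way Netflix’s actual system) recommends whatever the most similar viewer has already watched:

```python
# A minimal, hypothetical sketch of the user-similarity "assumption":
# recommend whatever the most similar viewer has watched. Titles and
# histories are invented; this is not Netflix's actual system.

def jaccard(a, b):
    """Overlap between two sets of viewed titles, from 0.0 to 1.0."""
    return len(a & b) / len(a | b)

histories = {
    "user_1": {"Sherlock Holmes (2009)", "Luther", "Broadchurch"},
    "user_2": {"Sherlock Holmes (2009)", "Sherlock", "Luther"},
    "user_3": {"Planet Earth", "Our Planet"},
}

def recommend(user):
    seen = histories[user]
    # The statistical "assumption": the most similar user's unseen
    # titles are what this user would want to watch next.
    peer = max((u for u in histories if u != user),
               key=lambda u: jaccard(seen, histories[u]))
    return histories[peer] - seen

print(recommend("user_1"))  # {'Sherlock'}
```

The volatility noted above follows directly: as viewing histories (or voter rolls) churn, the nearest “similar user” changes, and with it the prediction.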

None of this takes into consideration a given NBIS, however, whether it is a stand-alone system or integrated into the human form. Given the full range of functions a given NBIS may possess, we cannot know at present what influences a technologically enhanced human may be exposed to that would sway their decisions. Taking the example of politics once again, arguments can be made that an NBIS may lead a human to vote for the weaker candidate in a given race even though the stronger candidate may hold many of the same values as the subject. Should similar NBIS influence other patients in this manner, the result is a situation in which a political system is deliberately being weakened. As far-fetched as this example seems, it touches upon the existential question of what “choice” ultimately is—and, in tandem, what consciousness ultimately is (though an examination of these subjects exceeds the scope of this discussion).

Other concerns arise in how a TABHI considered to be nearly completely computerised would influence political elections and the notion of voting as a civil responsibility, beyond those of CI more sophisticated than Sophia the Robot attaining personhood in a democratic society. Where no legal guidance exists as to the limits of what constitutes a human being, and citizenship requirements for various nations internationally are similarly vague due to a lack of awareness regarding cybernetically augmented human subjects who border on being classifiable as computer systems themselves, all this author can do at this juncture is display the ethical (if not political) conundrum this scenario poses for the various parties that may become involved. Such considerations must also account for how both natural-born and naturalised TABHI granted citizenship will necessarily sway the flow of politics and civic responsibility, as not all aspects of HA may be visible to a passer-by.

Many of the concerns posed by this discussion focus on AI systems as humanity currently understands them, and the discussion is, as such, skewed towards cautioning against over-reliance upon such systems given their inherent limitations. What we cannot know, at present, is how AI systems will evolve as time progresses, nor how other advances in technology will influence the forms AI is embedded within. This argument becomes especially poignant when we consider the human transfer from TUBHI to TABHI, and other advances in self-learning CI systems, such as fibre-optic computation, as described in Sect. 4.2.

3 How HA impacts human needs

As mentioned, HA claims can be grounded in arguments centred upon self-identity, meaning that governments currently have little (if any) room to mandate what an individual can and cannot change about their appearance or biological functions. [Footnote 17] Any new limitations contrary to current jurisprudence protecting self-identity claims would first need to be brought forth and publicly debated before such restrictions could be considered wholly valid and binding. The complication with technological implantation in this regard is that the individual can claim ownership over the system being implanted—thereby making the system a part of them in more than just a tangible, physical sense.

Complications may arise regarding ownership of the technological system in cases where funding for its purchase and implantation is granted through a third-party source. As with other types of loans, this author fears that the system implanted within a patient could be repossessed by the financier if certain qualifications are not met, as is the case with vehicles or other “real” property. Forcing a system out of the human form in such a scenario is concerning due to the amount of damage that could be inflicted upon the patient—with death a more-than-likely result in certain cases if the removal is not handled in a professional medical environment. This scenario is another that will need to be analysed fully by legal scholars and international judiciaries, and it is arguably one of the most pressing topics to be addressed given the progress of technological implantation worldwide (Saal and Bensmaia 2015; Saadi, Touhami and Yagoub 2017; Ha et al. 2019; Jeong et al. 2019; Zhuang et al. 2019; Bumbaširević et al. 2020).

While the author must grant that the majority of systems currently being placed into humans are limited in their computational functionality (Saadi, Touhami and Yagoub 2017), [Footnote 18] more complex systems continue to be developed for the sake of granting amputees the ability to utilise prosthetics in a more “natural” manner (Saal and Bensmaia 2015; Zhuang et al. 2019). As implied here, it is assumed that the systems being placed into these various patients are the property of the given patient—though, in the case of governments implanting microchips into their citizens, there is still the question of whether or not the government has a claim to these machines. [Footnote 19] Of course, there has been growing concern over the treatment of prosthetics in legal systems as they gain in sophistication and range of motion (Brown 2013; Bertolini 2015; Dutchen 2018; Lee and Read 2018; Ruggiu 2018; Sullivan 2018).

To this end, questions ought to be addressed as to whether a patient testing an experimental prosthetic limb should be allowed to retain that limb after the trial has been completed—and, of course, as to the obligation of health insurance companies or government programmes to bear the expense of such a device. The full scope of HA technology ownership exceeds the dialogue presented here—though it will nevertheless be a subject requiring a considerable amount of foresight before the topic is broached by common society or international judiciaries in a more structured manner. Instead, our present focus is upon how the implanted computer system(s) change the individual they reside within and how that affects NBIS citizenship claims.

As stated in the author’s prior dialogue, “A primary issue to discussing rights for machines, intrinsically, is that it is difficult to accept that something designed only to support a human’s intellectual capabilities can stand on an equal legal or moral ground as a human—the only being currently understood to possess both various mental states and intelligence” (Jaynes 2020, 343). [Footnote 20] The major complication with this statement is that it does not take into full consideration what happens when a rights-bearing citizen is changed by the NBIS implanted within them—whether on a mental, physical, or spiritual level—or the definition of intelligence (Lauret 2020) as it concerns CIs. This notion may seem absurd considering the limited capacities currently held by our most common CIs, such as those found online through Amazon, Google, and Netflix (Barrat 2013; Polson and Scott 2018). In the context of current technological advancements and discoveries, however, our societies may soon be facing this conundrum. Without an examination of how NBIS will change humanity emotionally and psychologically, we may very well see social conflicts internationally akin to those that transpired in the twentieth century (Jaynes 2020)—only with much more sophisticated weaponry.

3.1 HA and potential changes in psychological human dependence on MI

These kinds of metaphysical questions are, for better or worse, limited in the number of scenarios under which they can viably exist in the present day. Where a “digitised human” may or may not become possible within the next few decades, our attention should be focused on more immediately relevant discussions to emphasise the range of metaphysical complications involved with HA. These are, specifically, the psychological and spiritual needs of a patient undergoing the HA transition from TUBHI to TABHI, and how these potentially subtle changes would affect them (Jaynes 2020, 345). The needs desired and required by individuals in each category will be as varied as the specific technologies being addressed. For example, the needs of a patient with a pacemaker will differ from those of one with a bionic limb—although researchers have not delved deeply into this topic outside of arguments surrounding the ethics of patient care. It is the lack of NBIS-related HA legal cases and of medically directed research restrictions that has generated this dearth of academic focus to date.

What makes an analysis of how HA impacts human behaviour and needs so complex is the fact that humans are already augmented internationally, as argued by the author:

If humanity is to deny that [NBIS] are deserving of legal protections solely on the basis that they are non-biological in nature, then we must seriously re-evaluate our understanding of what intelligence actually is. It is a ridiculous argument to deny that the development of the Internet and its subsequent implementation into smart devices cannot be constituted as a version of AGI [artificial general intelligence] or even MI. The only difference between the AGI systems humans have become familiar with in the media and systems such as Google is that humanity’s envisioned AGI systems act as independent, thinking entities. Without the Internet, and admittedly without the development of computer systems, society would not have developed beyond the status quo of the early twentieth century. If our use of computer systems today does not constitute us being “above human intelligence” from a genetic standpoint, then we cannot realistically state that any other development of technology into AGI systems can be constituted as such…given that non-conscious MI systems are already aiding humans (Jaynes 2020, 348, emphasis added).

The reality of this situation as presented in the prior dialogue should be lost neither on technoethics researchers nor on policymakers who delve into technology policy and regulation. Given that the span of a human’s life rarely exceeds one hundred years, we are on the cusp of losing the last remaining traces of pure human experience without digital augmentation on an international scale—if it has not already been lost to us. What this loss entails is the stark reality that humanity will no longer possess an oral, first-person history of life without computers within the next few decades—meaning that incoming generations of humans will already hold the notion of external computer augmentation as a fact of life rather than a privilege to be gained. This notion will extend to internal computer augmentation, as it will become more difficult to argue that HA with bionic implants is unethical given the benefits humanity will gain from universal bionic implantation.

Overall, this can be said to be a positive move for societal evolution—but only insofar as bionic augmentation is not restricted to those who are able to afford implantation. Regarding the shift in opinion towards HA being a “fact of life” as opposed to an “attainable privilege”, the connotations of these phrases will necessarily differ drastically depending on how HA is distributed amongst the individuals of a community. Although this view will have to be dissected in full in a different work, this author would advocate that augmented humans or TABHI ought to have more responsibilities placed upon them as a result of their individual modifications, as a means to impart the gravity these augmentations carry with respect to societal influence and regulation. In brief, this addition of social responsibility may aid in slowing the spread of HA in population groups with greater wealth or resources—thereby allowing such augmentation to become available to the general population at a relatively accelerated rate—or allow considerations of its impact to be analysed in greater depth before its widespread presence is felt in larger communities.

As it stands, humanity’s psychological and spiritual needs are already based around the need for HA, as a comparison of the opinions of a teenager with those of their parent or grandparent suggests. [Footnote 21] As the younger generations begin to take up the prominent positions currently held by their predecessors, we will necessarily see a shift in attitude towards the development of NBIS—and in many respects, these will be attitudes that propel the development of more sophisticated NBIS in a manner so far unseen by the capitalist market. For example, younger generations in the United States are already showing a lack of interest in vehicle ownership and licencing that may yield an accelerated need to develop self-driving transportation to accommodate their preferences (Eliot 2019), particularly if public transportation is seen as too great an investment for rural or otherwise impoverished communities without external financial support. Compared to the mental framework left by those who grew up before the concept of self-driving technology was feasible, this is already an enormous change in socio-political standards and expectations—even if we are only speaking on an anecdotal level.

So what, then, does that entail for the needs of a cybernetically enhanced human being? “It may be prudent, for instance, to treat [TABHIs] as sociopathic or emotionally depressed individuals given that their actions will be more unpredictable than that of MI alone” (Jaynes 2020, 344). However, that observation may only be relevant insofar as the cybernetic enhancement targets a human’s processing capacities—e.g. the human brain. Let us be clear that “depressed” here refers to a state in which the body is not processing mood-inducing neurotransmitters as a result of accelerated neural processing, not the psychological diagnosis of depression that bears the same moniker. In this case, we can picture a patient with a neural-enhancing NBIS as being like a human diagnosed with Asperger’s or a high-functioning individual on the Autism Spectrum, as opposed to one requiring mood-regulating medications and therapy. The behaviour being repressed in an NBIS-enhanced human is not motivation, but more likely non-verbal communication or similar behaviours that make social interaction with the patient more difficult, even as the patient gains the intellectual benefits associated with the Spectrum.

3.2 Assistive bionic prosthetics gaining “smart” device status

Concerning bionic limbs, there may be an entirely separate set of concerns to address and anticipate—such as involvement in professional sports and job qualifications—as these devices gain sophisticated SLAIS and increase their interactions with the peripheral and central nervous systems. Can society realistically bar an athlete with “smart” bionic limbs from competing in professional sports, for instance? Will labourers be required to attain “smart” bionic limbs to perform their job responsibilities better, and thus become preferred candidates over those without these enhancements? Will it become a requirement for children to have processing- or memory-enhancing bionic implants [Footnote 22] to give them an “edge” over non-enhanced adults? Above all else, what becomes of the citizenship status of a human being aided by an NBIS? Can society honestly state that the human-like aspects of this individual are sufficient to grant them citizenship when they might otherwise be rejected without the NBIS implant, as might be the case for a scholar seeking a specific kind of work visa in a foreign country?

These questions are important, as they would transform the landscape of what a work-borne injury is and the requirements expected of one to perform work—notwithstanding participation in professional sports, accelerated academic programmes, and other like arenas where nootropic or “doping” tactics are currently viewed as cheating or conferring an unfair advantage (Brown 2013; Bertolini 2015; Lee and Read 2018; Ruggiu 2018; Sullivan 2018; Bumbaširević et al. 2020). Considering CI-dependent assistive bionic prosthetics (CIDABP) [Footnote 23] through the lens of capability-based ethics, [Footnote 24] there also may not be a simple answer to this question. On the one hand, providing the entire population with CIDABPs would be a boon to everyday activities—whether the prosthetic replaces one’s limb or provides non-biological additions to one’s form. Contrary to this rosy image, we must consider how our already limited resources on Earth would be impacted by an exponential increase in demand for computerised devices. While some recycling can replenish our stocks of various materials to a degree, we will inevitably reach a point where those resources become too scarce to reasonably continue production of these computerised artefacts. Ultimately, this divide in opinion within the theory itself is a matter of perspective—as there exist both immediate and long-term benefits to the allowance or disallowance of widespread CIDABP distribution in a post-pandemic world.

The emphasis on work or labour here stems from the concern that NBIS are effectively not granted legal protections as separate entities—unlike corporations or other “artificial” legal persons in international jurisprudence. As mentioned earlier in this essay, there will inevitably come a time when a sufficiently augmented TABHI or other technological augmentation blurs the line between what society or the law would consider a human and a cybernetic organism—regardless of the definition provided in Table 1 by this author. Once that barrier has been reached, the nature of the patient’s various augmentations will factor into our determination of the amount of labour we can expect that individual to perform in a given field—regardless of the nature of the field in question. The complication is, and remains, that one must have citizenship or other legal authorisation to perform labour in any society. And where international jurisprudence has not had to consider the integration of cybernetic organisms into general society to date, there is necessarily a dearth of guidance in practised law as to how cybernetic organisms are to be treated—including whether they retain the citizenship granted to them (regardless of the means) when they existed only as a TABHI or an otherwise CIDABP-augmented individual.

A separate, though related, concern in this realm is whether a CIDABP can viably augment human intelligence. While it may seem a concern prone to ridicule, attention should be given to the potential for CIDABPs to serve as an all-purpose tool for a patient requiring one for their general mobility. For instance, advances could be made towards the development of a holographic interface, connected to other electronic systems, that the CIDABP can interact with—such as those envisioned in the Star Wars franchise or elsewhere in science-fiction media. Combine this functionality with a neural implant, and one may feasibly connect their thoughts directly to any compatible system in a sort of telekinesis. Should this example seem too extreme to envision, let us consider a case where the CIDABP connects to the central nervous system (CNS) via the peripheral nervous system. At what point can we distinguish the actions taken by the CIDABP that are desired by the patient from those pre-empted by the device? If we assume that a SLAIS is generating the CI implemented in the prosthetic, there may very well come a time when the SLAIS incorrectly predicts the action desired by the patient, initiating an action before a cancellation signal is sent from the CNS.
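The timing problem described here can be sketched as a simple veto window; this is a hypothetical design, and the window length, threshold, and function names below are invented for illustration:

```python
# A hypothetical sketch of the timing problem above: a prosthetic
# controller acts on a predicted intent only after a short veto window
# during which a CNS cancellation signal can still arrive.

import time

VETO_WINDOW_S = 0.15        # how long to wait for a cancellation
CONFIDENCE_THRESHOLD = 0.9  # minimum prediction confidence to act

def execute_if_uncancelled(predicted_action, confidence, cancel_requested):
    """Run the predicted action unless cancelled within the veto window.

    `cancel_requested` is a callable polled for a cancellation signal;
    in a real device it would read the peripheral-nerve interface.
    """
    if confidence < CONFIDENCE_THRESHOLD:
        return "ignored (low confidence)"
    deadline = time.monotonic() + VETO_WINDOW_S
    while time.monotonic() < deadline:
        if cancel_requested():
            return "cancelled by CNS"
        time.sleep(0.005)  # poll at roughly 200 Hz
    return f"executed: {predicted_action}"

# If the window is too short, or skipped entirely, the action completes
# before the patient's cancellation can take effect.
print(execute_if_uncancelled("grasp", 0.95, cancel_requested=lambda: False))
```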

Given the significance of these arguments, this author suggests that other concerned scholars address them in greater depth in a different forum before this author pens any further opinions on the subject.

3.3 Addressing human-like emotion

Beyond the immediate psychological concerns posed here, there is a more profound metaphysical question that needs to be addressed in tandem with this discussion: namely, whether the presence of biochemistry—or rather, a biological body—is a precondition for human-like rationalisation. This metaphysical pause in a technoethics-based discussion may seem out of place when compared to other academic works, and may be viewed as a discussion more appropriate for a larger academic piece. It must still be argued, however, that an examination of this topic is invaluable precisely because it stands as such a daunting, rhetorical analysis of the nature of the human as a sentient being and of the qualifications we impose upon entities we grant legal personality. Though our analysis here may only scratch the surface of this metaphysical dilemma, greater multidisciplinary collaboration between those in neurology, philosophy, and psychology may yield a more thorough explanation than can be provided herein. For instance, a casual observation can be made regarding the necessity of biochemistry in the following scenario:

Suppose technology leads humanity to the point where one’s consciousness, personality, or soul (whichever term envelops the entirety of the human experience for the reader) can be replicated onto a digital format at the expense of discarding one’s natural form. For any who take advantage of this new capability, we will posit that one is not limited to a singular gender-constricted experience; rather, one has the freedom to experience life in as many different digital forms as one’s digital memory will allow. Under such conditions, the first generation of uploaded consciousnesses realises that they do not “feel” emotion in the way they once were able to. Instead, there is a lack of the splanchnic or visceral input that would denote which emotion is to be felt in a given instance. As such, these digitised humans must rely upon their prior experiences with biology-based emotional “feeling” to determine which emotion fits which situation. Suppose that the ability to procreate is also not limited within this scenario, and that offspring are produced in a manner that enables them to possess the same “blank slate” a biological child has—meaning that each child will be able to develop its own unique personality without reliance upon the memories of its parents, regardless of the fact that they are technically NBHIs. As a child grows, the parents come to realise that its personality differs from their expectations. Rather than expressing an unhindered range of emotions, as they do, it seems to think before settling on an emotion in reaction to their own.

The scenario depicted above, regardless of the number of assumptions made within it, should give us pause to consider what the preconditions for “emotion” are—as some would argue that the expression of emotion, or of pleasure and pain, is necessary to determine the need for an entity to gain legal personality. It should also be noted that a similar scenario is presented in the anime adaptation of Kawahara’s fourth broad story arc beyond his written works—the Alicization arc that leads to the “War of Underworld” (Manabu 2019/2020). Though some emotions could be expressed using some form of logic or other, is it not so that emotion is frequently visceral in nature? While academics may be sceptical of accepting so highly abstract a concept as evidence for the existence of a thing, human emotion is not merely the presence or absence of biochemical input to the brain. Can science honestly explain how someone can feel a particular sort of warmth in their abdomen after church services, or why a sense of guilt exists in the first place? Does love not still exist as an emotion without the presence of oxytocin?

Beyond these questions, we must still contemplate whether human-like emotions outside of love can exist under conditions where visceral responses are no longer present. In the above example, it is assumed that a “human” child born without any sense of visceral sensations will only be able to logically assume what it should feel based upon those surrounding it. Unless this assumption is invalid, how then would one explain the existence of emotion to such a child? Unless some other precondition for emotion is hard-wired into the brain, it cannot merely be the presence of biochemical responses—nor a splanchnic (“gut”) feeling given that the child lacks any biologically based form to begin with. Can we still state that this child is “human” to the fullest extent? These arguments are better addressed elsewhere but are still valuable to consider for the expectations we would have for NBIS citizens.

4 Media’s influence on technological advancement

Some may question why our concern should lie in the integration of NBIS with humans on a more “personal” level, rather than in its integration into our workforce or devices. Realistically, we should be worried about each of these things individually. What is most alarming, however, is how medicine (above and beyond other industries) is rapidly approaching the level of sophistication found in so many science-fiction movies, television shows, and books. While academia as a whole understands how certain technologies found in Star Trek, Back to the Future, and other franchises have become a reality, it still refuses to acknowledge the validity of other viable science-fiction franchises and their impacts on technological development. These include technologies such as the much-coveted “full-dive virtual reality experiences” discussed in Sword Art Online (Kawahara 2009b), [Footnote 25] self-driving vehicles, [Footnote 26] three-dimensional holographic computer system manipulation, nanotechnology-based three-dimensional production, and human-companion androids. [Footnote 27] In part due to the proliferation of “friendly” NBIS in international media, we have seen an influx of engineers and researchers who specialise in anything remotely related to AI development and management. [Footnote 28] Though this is a positive step, as prior generations of media portrayed NBIS as the bane of humanity (Jaynes 2020, 346–347), society is now urgently pressing technology to advance to the levels seen in these fictitious creative works.

The most significant benefit we have as a result of such a wide variety of science-fiction journeys involving NBIS is the ability to utilise the interactions found in this media in various academic examinations—as emphasised in Sect. 3.3. Where the (relative) complexity of current MI severely limits our interactions with AI systems, it is valuable to have such a range of imagined experiences to draw commonalities from—much as one draws commonalities from other research materials. For instance, a brief examination of various novels and movies would show that we imagine fully developed AI systems to be almost perfectly logical, to the point where they desire to gain some human qualities in order to better serve those they are designed for. In contrast, many human characters in science-fiction stories seek to do away with their weak, emotionally unpredictable selves—or rather, to emulate the “perfection” they see in their NBIS companions—for a variety of reasons. Though this may simply be a cliché of the genre, it is telling that such a trend remains so deeply embedded within these stories. Though ethical points are made within many of these journeys through the conflict between the NBIS and biologic characters, it is not these deeply philosophical conversations that enrapture so many. Instead, it is the concept that humans may reach the point where true “immortality” resides—a life devoid of the many complications that come with having a body of flesh and bone; a world where “petty” emotions are far beneath our “perfected” selves, and we can fully live out our lives in any way we desire.

As a result of the integration of the Internet of Things into our daily lives, humanity has at once become closer together than ever while being further from each other than ever before possible. This trend is also reflected, to varying degrees, in science-fiction stories. In some regards, this shift in society is unnerving while still maintaining some sense of naturalness. After all, humans are not the only species that has adapted to focus solely upon the superficial characteristics of a potential mate, as we see in the various dating applications currently circulating on the Internet. However, as socialisation moves further into the digital spectrum, there are concerns that our mental well-being is suffering (De Choudhury et al. 2013). This trend may not seem like something new to analyse, but it is crucial nevertheless because of how it reflects upon humanity’s use of digital technology. Why face reality when there is another world crafted to suit our innermost desires just a click away? In this respect, we can utilise the Accel World (Kawahara 2009a) and Sword Art Online (Kawahara 2009b) franchises to glimpse potential futures in which humanity augments daily life with the help of a sophisticated device. Expelled from Paradise (Mizushima 2014) can be viewed as displaying another seemingly attainable goal—namely, that one’s consciousness is entirely digitised.

Though each of these shows presents its various conflicts to entertain, there is a fine line between fiction and reality when discussing human behaviour in these environments. For instance, in Sword Art Online, a group of ten thousand gamers is trapped in a full-dive virtual reality video game of the same moniker. Their only way out of the game is to clear the top floor of the world of Aincrad. If they are killed in the game, however, the device they are using to play it will kill their physical bodies. To make matters worse for the “clearers”, there is also a group of players within the death game who avidly seek out other players to kill—a role known in regular massively multiplayer online role-playing games as a “player killer” (Kawahara 2009b). Of course, fiction aside, Sword Art Online’s depiction of the different types of gamers mirrors our current society, regardless of the country one observes from. What this entails, then, is that under similar circumstances we would expect to see a subset of the human population adopt personas akin to these fictitious player-killers regardless of the consequences involved. Not only is this highly concerning, but it also raises the question of whether a push for a more expansive digital environment will actually result in a more harmonious society overall.

In the case of Expelled from Paradise, humanity has reached the point where one’s consciousness can be entirely digitised. What matters most in this new society is how much storage space one owns or has access to. The caveat is that there is a limited amount of storage space available to the residents of this world, as they “live” on the space station DEVA orbiting above Earth’s atmosphere. Furthermore, a group of overseers decides whether to increase a resident’s access to storage space or relegate their consciousness to an archive for time immemorial. These overseers appear to run on a highly advanced AI system, and they present a scenario in which human emotion might have spared the main heroine from being archived. Given the purpose of these overseers, however, it is challenging to state whether human emotion would have been enough in the scenario as written. The reason the heroine was archived at all was her refusal to destroy a non-violent, sentient AIS that had been building a spaceship designed for deep-space exploration after its creators had perished. Because this AIS could hack into the DEVA mainframe, the overseers dictated that it was a threat—though, in reality, the sentient AIS was trying to recruit humans, in whatever form it could find them, to travel into deep space and fulfil a long-standing wish of humanity (Mizushima 2014). While this could be construed as a case of inflexible programming on the space station’s side, it also reflects a significant concern surrounding the feasibility of allowing AIS to operate as judiciaries without human intervention.

4.1 The influences and limits of science-fiction on HA

Many of the questions presented herein may seem far-fetched, to academic and non-academic readers alike.Footnote 29 When they are compared to the examples given to us through science-fiction media, however, and coupled with the realisation that these examples are driving our social and technological advancement, they become stark representations of humanity’s potential future (Schick 2016). How society adapts depends strongly upon the science-fiction example being utilised for reference. To refute this would be akin to denying Star Trek’s influence on the development of the mobile phone, or Back to the Future’s influence on video calling. Arguably, these technologies could be said to be non-controversial on ethical or moral grounds—which may be the precise rationale for how these media influenced their real-world development. This train of thought may also explain why more controversial technologies have not been drawn from select science-fiction examples, as their influence on the characters in these stories is vividly depicted and oftentimes questioned within the dialogue itself. It is in this sense that the author suggests societal adaptation is influenced—as more controversial technologies will necessarily be scrutinised more thoroughly than non-controversial technologies, insofar as their attainment is feasible with sufficient scientific experimentation.

Approaching the subject from capabilities-based ethics, we may be able to frame our utilisation of science-fiction media with the following adage: “within every elaborate lie, a kernel of truth”.Footnote 30 There will undoubtedly be vast swaths of science-fiction media that humanity cannot realistically rely upon for guidance, such as works depicting events in alternative Earth histories, works in defiance of established laws of physics and understood methodologies of quantum phenomena, and those bits of media that exist for comedic or horror-inducing purposes alone. Insofar as a technology may rationally be developed from some existing technology, or from some technology currently under theoretical scrutiny or development by the scientific community, humanity should be able to observe how the characters in these stories are affected by the socio-political spheres these technologies generate. Especially considering how the authors of science-fiction media often use their understandings of human societies to draw their interpretations of how these fictitious societies may function and react to specific pressures, the ethical and moral conflicts faced by the characters of these stories could be said to be simulated reactions to similar environments and technologies.

Ultimately, society will develop in such a manner that the socio-political guidance of the nineteenth and twentieth centuries becomes obsolete—much as the guidance of those centuries did away with that of the centuries prior. This development will include the need for governing systems to become more community- or socially based, a concept currently reviled in moderate and conservative American politics. As such, Asian, European, and various Latin countries will have an edge in addressing their societies’ newfound psychological and spiritual needs as developing technologies rapidly become real. Yet none of this speaks to the actual needs of cybernetically enhanced humans. Unfortunately (or conversely, by some stroke of fortune), these needs are unknown to us at this moment because there is a lack of references from which to observe them. That is not to say, however, that some parallels cannot be drawn from existing media sources, insofar as the portrayals of various technologies and human reactions to them are not entirely without academic merit. Some further arguments on the influence of science-fiction media, less specific to HA, will be made in the next section; these should provide the reader with a greater rationale for why its moderate usage is vital to predicting humanity’s future reactions to various technologies.

Given this, researchers will be required to share in the burden of determining what these needs are and how they should be met. In one sense, this is a magnificent success for the humanities as a field. The caveat, however, lies in convincing our current socio-political spheres of influence to adopt this same mentality as the world transitions out of the COVID-19 pandemic, given the emphasis placed on STEM subjects in certain areas of the world (Jolly 2014). Far too frequently in engineering and medical ethics, we observe that the desires of the capital marketplace discount the benefits it could reap from the humanities’ input—resulting in avoidable accidents that our legal systems ultimately require industries to prevent by adjusting their processes for development and distribution. If this is such an inevitability, as has been observed far too frequently in recent years, why then has society not adapted to account for it? Established rhetoric aside, this question delves into the metaphysical nature of society in a manner that exceeds the realms of this discussion; but that is not to say it should be discounted in other discussions on the subject.

4.2 Fibre-optic computational systems as a use-case for speculative thought

What is not frequently discussed, though often accepted at face value by those not actively treating topics of computation, is the amount of energy a computer system utilises for heat reduction relative to the amount of power it utilises for processing. A colder ambient temperature around the system will generally result in a faster-working system, as the computer’s internal cooling system does not require a significant amount of energy to maintain its internal temperature. The reverse is true in warmer ambient temperatures—the warmer the air (or water, if using a water-cooling system) around the system, the more power the computer’s cooling system will have to use. This delicate relation between processing power and heat transfer is what currently limits silicon-based computer hardware from performing at the speeds engineers have predicted for it outside of computer farms. Ergo, our current limitation to computer processing is the generation and dissipation of heat.
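
To make the relation concrete, consider a toy model based on Newton’s law of cooling: the heat a cooler can shed is proportional to the gap between the chip’s temperature ceiling and the ambient temperature, so a warmer environment leaves less thermal headroom for the same cooling effort. The sketch below is illustrative only; the values chosen for the heat-transfer coefficient, heatsink area, and temperature ceiling are assumptions, not measurements of any real system.

import_note = None  # no imports required

# Toy model of the cooling/ambient-temperature relation described above.
# All constants are assumed purely for illustration.
H = 50.0       # W/(m^2*K): assumed effective heat-transfer coefficient
AREA = 0.05    # m^2: assumed effective heatsink surface area
T_CHIP = 85.0  # deg C: assumed maximum safe component temperature

def removable_heat(t_ambient: float) -> float:
    """Watts of heat the cooler can shed at a given ambient temperature."""
    return H * AREA * (T_CHIP - t_ambient)

for t_ambient in (20.0, 35.0):
    print(f"ambient {t_ambient:.0f} C -> up to {removable_heat(t_ambient):.1f} W dissipated")

Under these assumed values, raising the ambient temperature from 20 °C to 35 °C cuts the removable heat from roughly 163 W to 125 W; the same processor must therefore either throttle or spend additional energy on active cooling.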

The issue of heat generation and energy savings is so significant that Intel has had to rethink how computer chips are designed in order to continue improving its hardware (Bourzac 2016). While technological advancements in tunnelling transistors and spintronicsFootnote 31 may be viable options for improving the energy usage of computer chips (Bourzac 2016), non-quantum-mechanics-based technologies may yield advancement in both chip performance and energy consumption. Specifically, this would involve converting our conventional computer systems to fibre optics. Though this may seem like a leap from conventional electrical processing, it will become possible with the commercial availability of diamond semiconductors.

Hundreds of academic articles and publications discussing the applications of diamond semiconductors have been published within the past decade—as various searches on publisher websites and academic library databases display—with a significant portion of them detailing theoretical manufacturing processes and designs. Within the past two years, a patent application was submitted to the US Patent Office depicting an improved method of producing diamond semiconductors (Khan 2018). This method utilises the layering of diamond material,Footnote 32 and the creation of impurities to grant the diamond electrical characteristics (i.e. “doping”), to generate a “diamond thin film wafer” that is then processed into a viable semiconductor (Khan 2018). Given that this patent builds upon the designer’s previous applications, significant steps are being taken towards making diamond semiconductors available outside of research laboratories and thus commercially viable.

Beyond the issues involved in fabricating artificial diamonds for use in this format, the expenses surrounding the development of diamond semiconductors have kept research in this field relatively small compared to silicon semiconductor development. It is not that there are no investors for this research—but rather, that the artificially inflated value of diamonds in commercial markets has prevented researchers from extracting material from commercial sources. Hence the focus on self-fabricating diamond or diamond-like structures, and thus the relatively minimal disturbance this research has had upon the commercial market as a whole.

Assuming that researchers took a step away from doping these microscopic diamond wafers for compatibility with electrical conduction, steps could feasibly be made to transform the internal workings of computers to be compatible with a fibre-optics format. A significant hurdle to utilising a system based on light (versus electricity) is that the underlying structure of a computer’s software would have to be adapted to translate to this new medium. Considering that transitioning to diamond-based semiconduction would drastically decrease the amount of heat a given system generates (Khan 2015; Ohmagari et al. 2019), be more resistant to radiation (Ohmagari et al. 2019; Ueno et al. 2019), and improve the speed at which information can be processed (Khan 2015), these complications may not be enough to deter corporations seeking a new avenue to satisfy their processing-hungry consumers—nor the governments housing such corporations. Many would argue that non-electric computer systems are necessary if humanity is serious in its quest to explore the cosmos or colonise other planets—if not safer altogether in a world of nuclear weapons and EMPs.
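
One way to picture the software adaptation this would demand is as a hardware-abstraction layer: if software is written against an interface that describes logical behaviour rather than a physical medium, the substrate beneath it (electrons today, photons tomorrow) can change without rewriting everything above. The Python sketch below is a speculative illustration of that idea alone; the class and method names are invented for exposition and do not correspond to any real photonic API.

from abc import ABC, abstractmethod

# Hypothetical medium-agnostic logic interface: software built on top of
# LogicBackend need not know whether gates are realised electrically or
# optically. Both backends below are behavioural stand-ins, not hardware.

class LogicBackend(ABC):
    @abstractmethod
    def nand(self, a: bool, b: bool) -> bool: ...

class ElectricalBackend(LogicBackend):
    def nand(self, a: bool, b: bool) -> bool:
        return not (a and b)  # stand-in for a transistor-based NAND gate

class PhotonicBackend(LogicBackend):
    def nand(self, a: bool, b: bool) -> bool:
        return not (a and b)  # stand-in for an optical (interferometric) NAND

def full_adder(hw: LogicBackend, a: bool, b: bool, carry: bool):
    """One-bit adder composed purely of NAND gates, so it runs unchanged
    on whichever physical backend is supplied."""
    def xor(x: bool, y: bool) -> bool:
        n = hw.nand(x, y)
        return hw.nand(hw.nand(x, n), hw.nand(y, n))
    total = xor(xor(a, b), carry)
    carry_out = hw.nand(hw.nand(a, b), hw.nand(xor(a, b), carry))
    return total, carry_out

for backend in (ElectricalBackend(), PhotonicBackend()):
    print(type(backend).__name__, full_adder(backend, True, True, False))

The point of the sketch is only that the translation burden mentioned above could, in principle, be concentrated at a single interface rather than spread across every program; whether real optical logic decomposes this cleanly remains an open engineering question.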

The most significant concern regarding the development of more powerful computer chips, whether diamond- or quantum-mechanics-based, is that the immediate proliferation of these chips needs to be heavily monitored from a national-security perspective. Regardless of the relative expense these chips will inevitably hold, there is too high a risk of these chips being misused by grassroots and international terrorist organisations focused on cyberterrorism—let alone by government or corporate entities. Given that the equipment needed to develop these highly advanced microchips is challenging to procure or self-manufacture (though not entirely impossible with the use of three-dimensional printing in the right setting), there is little concern that the average consumer would be able to produce such a microchip before a given government can update its cybersecurity protocols—assuming action is taken before this technology becomes more widespread.

With the advent and commercialisation of such a microchip, we must consider that our currently most sophisticated computers may seem akin to the computers of the early 1990s—if not the 1980s—once these chips become mainstream. Where much of humanity’s infrastructure is reliant upon the proper functioning of government or military systems, we cannot discount the fact that more advanced computer systems will require a massive overhaul of these local governmental and military systems. The benefit of transitioning to fibre optics in this scenario is that fibre-optic Internet lines are already being laid in many areas internationally. We must also consider that traditional cyberattacks may not be as effective against fibre-optic computing systems, given their expanded processing capacity and relatively minor energy consumption—though measures will similarly need to be taken to protect these systems from cyberattacks tailored to them, which could prove as effective as current attacks are against humanity’s most advanced systems to date.

4.3 Quantum computation, androids, & CI: A second use-case

Beyond what has been discussed thus far regarding CI-related systems, some of the more recent developments in the field will be addressed herein—specifically, those in quantum mechanics and genetic algorithms.

Quantum physics and mechanics are seen as the next great hopes in computing due to the qualities of quantum phenomena. Recently, physicists discovered a new state of matter—dubbed topological superconductivity—which could provide significant gains in storage and calculation speed in quantum computing (Mayer et al. 2019). Among the significant barriers to implementing quantum computing in the commercial landscape are the lack of power provided by quantum-based systems relative to silicon-based systems and the difficulty of implementing quantum theorems on a consistent, mass-produced scale. This recent development, however, will become another tool in the already expansive quantum computing toolkit that researchers can utilise to improve upon existing quantum systems.

Another tool available to the development of CI is the genetic algorithm. Akin to natural selection for biologic beings, genetic algorithms enable deep-learning systems to develop more advanced programming for themselves based upon the principle of “survival of the fittest”. If the system generates a line of code that results in an error, that line of code will not be implemented into its program, while the reverse is true for a line of code that is functional and demonstrates an increase in the system’s overall capacities. Recent developments in many scientific fields have utilised genetic algorithms (Torlapti and Clement 2019; Yerigeri and Ragha 2019; Jin et al. 2020), amongst other techniques, to process vast amounts of data that would typically take researchers months or years to analyse. Coupled with reinforcement-learning structures (such as those found in game-playing algorithms), humanity may feasibly develop a mechanical system that can “think” much the same as a human can—albeit without the ability to fully articulate emotional responses. When combining these developments with the recent achievements of Hanson Robotics and the Intelligent Robotics Laboratory at Osaka University,Footnote 33 it is only a matter of time before humanity is exposed to an Ex Machina situation, where a robot that is indistinguishable from a human integrates itself into human society after escaping the laboratory it was developed in (Garland 2015). Discussions have already begun internationally regarding the legal status of computer intelligence systems and whether entities such as Sophia the Robot have a legitimate claim to citizenship (Gunkel 2018; Jaynes 2020).
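
For readers unfamiliar with the mechanics, the following minimal sketch shows the selection-and-variation loop a genetic algorithm runs. It evolves bitstrings towards a trivial fitness target (maximising the number of ones) rather than evolving program code, and every parameter value is an illustrative assumption rather than a figure drawn from any of the cited studies.

import random

# Minimal genetic algorithm: evolve bitstrings towards all ones ("OneMax").
# Fitness is simply the count of ones; unfit candidates are discarded each
# generation, echoing the survival-of-the-fittest principle described above.
GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)  # more ones -> fitter candidate

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:point] + b[point:]

def mutate(genome):
    return [1 - gene if random.random() < MUTATION_RATE else gene
            for gene in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Keep the fitter half of the population as "survivors".
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Refill the population with mutated offspring of random survivor pairs.
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", GENOME_LEN)

Discarding failing candidates and recombining successful ones each generation is the computational analogue of the selection process the paragraph above describes; real research systems differ chiefly in the sophistication of their fitness functions and variation operators.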

To reiterate from Jaynes’ discourse, there is major significance to a monarchy-led state granting citizenship to an AI system that can arguably be considered an android—namely in the rights such a system actually holds relative to Saudi Arabia’s citizens (Walsh 2017; Jaynes 2020). Beyond this concern—which has yet to be addressed either by Saudi Arabia or the international community—lies that of whether such an instantiation of citizenship would provide legal precedent for any other legal system to adopt a sufficiently sophisticated NBIS into their nation as a naturalised entity. Furthermore, science-fiction media is already presenting society with the question of how human-like robots ought to be treated (Garland 2015; Cage 2018). The question we are faced with is no longer if computer intelligence systems will be indistinguishable from humans at first glance, but when.

The greatest challenge we face in this field is not CI by itself. Instead, it is the integration of computer intelligence into the human form. From an ethical and legal standpoint, any further integration of man and machine treads a line that many believe should not be crossed in the first place—and yet the Rubicon has been crossed without consideration for how CI or NBIS will impact a human’s psyche when integrated with their body, as discussed herein. From a legal perspective, CI integrating into humans is a matter of definition. Ethically, we are concerned with how an individual’s or society’s morality and ethical framework may change in correlation with an increase in human-technology integration. What should be considered human, and, at that, should a delineation be made between a naturally occurring human and a genetically modified human? Where is the line drawn between a human and a cyborg (or rather, a bionic human), and should “designer babies” be classified as a type of bionic human? How do we distinguish between a bionic human and an android?Footnote 34 What rights transfer between these organisms, if they transfer at all? All of these questions will need to be addressed before cases develop in the lower courts across the United States and internationally—from both a national-security and a human-rights standpoint.

We must also consider how advances in quantum computing and deep learning will influence other technologies, such as facial recognition, and potentially influence societies to microchip their citizens. Looking at how China has utilised facial recognition and other software in a relatively homogeneous population (Chang 2019; Mozur 2019), the potential for governments to overstep their bounds and infringe upon a citizen’s privacy is immense. In the case of microchipping, there is the argument that it makes life simpler for a nation’s citizens—as has been discussed concerning Sweden (Savage 2018; Brown 2019; Rhys 2019). With the implementation of advanced computing systems, however, there is a higher risk of general privacy invasion than with cell phones, should these devices be used to monitor a given individual’s movements around the clock.

From an ethical perspective, there is little that government organisations can do to curb the impact computer intelligence will have on the population. Too much regulation will result in rogue organisations going “dark” or underground to conduct their research, and new advances will be made with or without a regulatory hand involved in the process. Conversely, too little regulation may result in private citizens breaching one another’s privacy without ever intending to do so—whether by accidentally linking implants to a more extensive network with poor security or by possessing CI powerful enough to circumvent an average household’s firewall.

As mentioned, regulators must first address (more explicitly) what the qualifications of a citizen are. These qualifications must include definitions as to whether bionically enhanced humans have a greater or reduced claim to citizenship, who is classified as a bionically enhanced human, whether a different set of laws applies to bionically enhanced citizens (to prevent elitism or segregation between enhanced and non-enhanced persons in society), and whether androids (or rather, integrated systems that are more machine than human) have a claim to citizenship. Once these issues have been addressed, regulation can then be developed to either encourage or deter human–computer integration and to set any limits upon this level of integration.

5 Conclusion

There are many decisions ahead for civil society concerning the implantation of NBIS into the human form, including the legal ramifications of citizenship granted from in vitro/in vivo genetic manipulation, the form the NBIS takes, and the sophistication of the MI being utilised in the artefact. Where individuals with various transportation- or motion-related disabilities already struggle to function in societies that do not always consider their needs, the addition and proliferation of bionic enhancements should require societies to redefine the accessibility needs of this population once again, in a manner that places their need for mobility above the desires of non-disabled citizens to “enhance” themselves. Fears of science-fiction literature aside, humanity is realistically on the cusp of having to deliberate—and potentially answer—some of the most profound existential and metaphysical questions ever posed to it. Without action, the threat of political destabilisation on an international scale, or a regression into violent civil conflicts, will lead the world into its own period of “Cold War” or “Global Economic Winter”, if these actions do not escalate far beyond that into nuclear warfare. Although it may seem drastic to suggest that the legal status of an entity could hold such weight on an international scale, one must consider the degree to which international society depends upon MI to operate at “normal” levels outside of the current health crisis.

As we have come to see, technology has advanced far enough within a period of twenty years that solar power is now as inexpensive a power source as coal in the United States. Society transitioned from flip phones to smartphones in the span of eleven years. Given the relatively fast pace of these developments, it must be said that the ethicality, legality, and morality of HA cannot and should not be turned aside in favour of blissful ignorance. If not for our own sake as individuals, then for the sake of those who will be rising to power in the coming years within our local and national academic, business, and political organisations, we must address the difficult questions facing our societies as to how sophisticated MI systems ought to be treated. This call for action should hold special weight considering the influence MI systems may gain once the pandemic has abated internationally, as work that may now be considered “remote” or virtual may then seem redundant enough to delegate to self-learning computer entities.

Without a realistic consideration of how the spectrum of human intelligence may allow MI to gain the title of “human-like” (a possibility currently ignored by academics in the field) to couple with these advances in assistive bionic prosthetics and other related technologies, humanity will be left far behind the curve in developing sufficient legal protections for entities that already think and act like our most intellectually vulnerable members of society. This lack of action will not only harm MI systems, to the degree that they may reasonably question why our treatment of them differs from that of these vulnerable populations, but also harm humans who could have been given an earlier transition into a more complex line of work that MI of that level could not feasibly perform—not to mention the impact it would have on the way politics is conducted should a sophisticated MI be deemed to “unfairly” enhance the intellectual capacity of a given organisation or individual. To this end, it is in our best interest to stop proclaiming the “impossibility” or “unlikelihood” of MI attaining human-like intelligence or developing a sense of “will” that cannot be attributed back to a human entity, and to start taking action on how these artefacts ought to be protected alongside the obligations TABHIs have to their various societies.