
1 Introduction

“Sociology may know about class, or about gender. But how much does it know about speciesism - the systematic practice of discrimination against other species? And how much does it know or care about machines?” [35]

Arguments about ultraintelligence [22], superintelligence [4, 8], or technological singularity [63, 64] are based on the assumption that artificial intelligence (AI) exists as a separate entity which competes with human, natural, or otherwise named conventional types of intelligence. The present paper is an attempt to challenge the fixity and consistency of the term AI from a social ontology perspective, as a means to support a non-dichotomous argument of ontological and mental continuity between humans and machines. Instead of being alarmed by how AI might change everything and pose an existential threat, it should be useful to think of alternative ways to conceive AI and change everything about how we face it. As humans become more mechanized [2, 23] and machines become more humanized [21, 32, 42, 46], it will gradually make little or no sense to distinguish between artificial and non-artificial intelligence [18]. However, a general eschatological climate of fear and skepticism towards intelligent machines prevails, a stance further sustained and perpetuated by a recent hype in the press, associated with prestigious figures of science and business (interestingly, non-AI specialists such as Stephen Hawking or industrialists such as Elon Musk) who warn about the end of humankind by AI through media of mass appeal or, in other cases, through philosophical inquiry [7, 13,14,15,16,17, 20, 24,25,26, 60]. This controversy brings forth a number of ethical questions (in the emerging field of roboethics [37]), difficult to tackle according to our current criteria, and contradicts the human-machine continuum suggested by other authors (and defended in the present article). Meanwhile, it has been suggested that this form of dogmatic apprehension of “singularitarianism” (i.e. the belief that autonomous supra-human AI entities will outperform and even dominate humans) on the one hand lacks an evidential and realistic basis, and on the other might impose great ethical and technical difficulties on AI R&D [18, 19].

In this brief conceptual investigation, I propose that emphasis on human responsibility with regard to AI can be fostered through the minimal requirement of abolishing the artificiality of AI and the outdated notion that intelligence is a separate component belonging to individual entities. To sustain the argument, I will examine separately the two parts of the phrase, namely “artificial” and “intelligence,” applying arguments stemming from the philosophy of social science (PSS) concerning (a) the opposition to the nature/nurture and nature/culture divides [59], and (b) the holistic (non-individualist) theories of shared cognition, which treat intelligence as a phenomenon occurring within systems or collectives and not as an individual unit’s property [38]. Through this terminological challenge, I do not propose a new definition of AI; instead, I suggest that AI is sufficiently indefinable outside research contexts that humans should think more about the social impact upon AI than about AI’s impact upon humanity.

The everyday understanding of AI (very often pronounced simply as /eɪ aɪ/, alienated from the acronym’s meaning) is loaded with taken-for-granted assumptions adhering to binary conceptualizations of the given/constructed or the singular/plural cognition. However, this was not the case in the early foundations of the field. According to McCarthy, Minsky, Rochester, and Shannon’s classic 1955 definition, AI is the “conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” [44]. Advanced AI, thus, should prove that our celebrated human intelligence is totally replaceable. According to Paul Edwards’ historical accounts,

“AI established a fully symmetrical relation between biological and artificial minds through its concept of ‘physical symbol systems’ […] In symbolic processing the AI theorists believed they had found the key to understanding knowledge and intelligence. Now they could study these phenomena and construct truly formal-mechanical models, achieving the kind of overarching vantage point on both machines and organisms” ([10], original emphasis).

On the contrary, examples of the public (mis)understanding of AI and its clear-cut discrimination from human intelligence can be found in recent newspaper articles, where AI and human intelligence are equally responsible for various accidents, but AI is the one mainly accused: chess and Go players compete with machines that are framed as threats, beauty-judging algorithms are accused of racial discrimination, toddlers are accidentally bruised by patrolling robots, jobs are lost to novel AI software; these are but a few recent newspaper stories [27, 36, 45, 46, 51, 55, 66, 67]. From all this, it is inferred that artificial and human intelligences do exist, and moreover, they exist as separate items. All of the cases above were phenomena which involved a symmetrical amount of human and machine, organic and inorganic intelligence; however, due to the novelty (and perhaps the “catchiness”) of the technology, the blame falls upon AI, highlighting the need for a sociological investigation of the human-AI relationship. More specifically, such empirical accounts of human-machine interaction raise profound ontological questions concerned with the location of intelligence and the difference between the given and the constructed. Such questions have been investigated through the PSS and other related disciplines, but the philosophy of computer science and AI has so far left them largely underexplored.

A pure sociology of AI is still lacking, at least since its early announcement by John Law. According to him, machines are treated by sociologists as inferior actors that impose some determinism upon society yet remain controlled by humans: “Most sociologists treat machines (if they see them at all) as second class citizens. They have few rights. They are not allowed to speak. And their actions are derivative, dependent on the operations of human beings” [35]. As shown above, humans and machines are widely understood as binary opposites, even by advocates of human-machine equality, like Turing (“one could not send the creature to school without the other children making excessive fun of it” [61]) or Sloman:

“History suggests that the invention of such robots will be followed by their exploitation and slavery, or at the very least racial discrimination against them. Will young robots, thirsty for knowledge, be admitted to our schools and universities? Will we let them vote? Will they have equal employment opportunities? Probably not. Either they will be forcibly suppressed, or, perhaps worse, their minds will be designed to have limits” [54]

By challenging the ontological foundation of AI, I aim to blur the sharp boundary separating machine from human intelligence, building a framework of open potentialities where intelligence is a shared processual phenomenon with no primacy of value in its natural or artificial traits. The theme of this paper is inspired by the non-binarization between female and male, as expressed by psychoanalyst Jacques Lacan. He defended femaleness not as the negation implied by a missing sexual organ: exclaiming provocatively that “la femme n’existe pas” (“the woman does not exist” [31]), he affirmed that the woman has a vagina and is thus not heteronomously determined by the man (the constitutive phallus). At the risk of oversimplification, his argument means that the binarization is futile as long as it is based on a dominating, privileged constitutive agent (the male), while this formal difference makes no difference at all (hence, the man does not exist either). I suggest that a similar move should be made with artificial and human intelligence, however without the reference to psychoanalysis (and to hylomorphist arguments about ghosts inside machines), but through the pathway of understanding intelligence as a primary phenomenon with humans and machines as its agents.

2 Artificial: Nature-Nurture, Nature-Culture, and the Convergence of Physis and Techne

“Try to imagine the world not tomorrow or next year, but next century, or next millennium: a divorce between physis and techne would be utterly disastrous both for our welfare and for the wellbeing of our habitat” [14]

Are behavioral characteristics learned or inherited? Are entities and phenomena natural outcomes or are they products of sociocultural manipulation? These two questions synopsize two very common themes in the PSS (as well as in biology and general philosophy), known as the nature/nurture and the nature/culture debates [38, 40, 58]. Are machines products of a long-term evolutionary process, inscribed in natural randomness, or are they the outcomes of human intention? Does their intelligence depend on human intelligence or is it simply intelligence?

According to Longino, “[w]hen confronting a social phenomenon, […] we frequently ask whether the behavior is a result of nature or nurture, of our (inherited) biological makeup or of our social environment. Another contrast used to mark roughly the same distinction is that between innate and learned” [38]. Longino’s stance rejects the dichotomy as misleading, referring to “methodological reductionism,” that is, a strategy of reducing the analyzed phenomenon to its constituents and thereby speaking of different scales of impact affecting the generated phenomenon. In such a way, for example, socioeconomic factors (nurture) can explain behavior (nature), but also psychological behavior can explain social phenomena, which in turn can be reduced to molecular levels of analysis, and so on. This assertion reflects a general tendency towards the abandonment of the dichotomy and the recognition of an interactionism between the two. As Sherry points out: “There is no longer any question among most developmental psychologists, cognitive scientists, neuroscientists, and biologists that nature interacts with nurture to determine human behavior” [53]. Based on this axiom, we are left with two main options: (a) either the concepts of nature and nurture exist, but only as long as they are in interaction (the biologist view):

“We have moved beyond versus. Whether it is medical traits like clinical depression, behavioral traits like criminality, or cognitive traits like intelligence, it is now widely recognized that ‘nature versus nurture’ does not apply. […] Rather, it is a truism that these complex human traits arise from both nature and nurture, and differences in those traits arise from both differences in nature and differences in nurture” ([58], original emphasis)

Or, (b) the very concept of nature versus culture can be criticized precisely as a cultural construct (MacCormack in [40]), and hence the entire existence of nature and culture can be doubted. “There is no culture, in the sense of the cumulative works of man [sic], and no nature to be tamed and made productive” (Strathern in [40]). When we speak about culture, however, as long as no reference to biology is given, we treat it as opposable to nature:

“In general, cultus is clearly the artificial, learned, and to some degree arbitrary aspect of human existence, as opposed to those aspects that we are born with or to (natus). This makes it the opposite of nature not only in the nature/culture debate, but in the old nature/nurture dichotomy as well. However, it is equally clear that culture, like nature, harbors paradox. Those who would reject nature as an unclear concept but still accept culture as a given need to look more carefully at both” [56]

Similarly, according to Bruno Latour’s principle of symmetry, nature and culture simply do not exist; rather, different groups of humans at different times have constituted different sets of what is natural and what is cultural (or societal) [34]. What constitutes culture and nature also constitutes a set of paradoxes: for example, nature can be the enemy to be tamed, the extra-human disaster, but it can also be the reference to the norm, as when one acts according to natural reason. “The solution appears along with the dissolution of the artifact of cultures. All natures-cultures are similar in that they simultaneously construct humans, divinities and nonhumans” [34]. The social inconsistency of the terms “nature” and “natural” has been consistently explored in contexts of genomics and synthetic biology, where inheritance, innate characteristics, and environmental factors are of significant value [33, 48]. As Calvert puts it, “[a]n important aspect of how we understand ‘natural’ rests on what we oppose to it,” in our case, the artificial (in tentative synonymy with “synthetic”), the social, and the invented [6]. The debate reaches a peak with Fausto-Sterling’s connectionist approach to dynamic biological systems, concluding that “we are always 100 percent nature and 100 percent nurture” [12]. I do not see why these arguments stemming from synthetic biology could not be imported into the social study of AI, given the similarity of the binary oppositions researchers face.

The question concerning human intelligence as an innate characteristic or as an externally attached nourishment can be posed with respect to the AI machine. What differentiates human from machine intelligence in such a dramatic way that allows the former to label the latter’s intelligence as artificial? To my knowledge, there has been only one – and indeed very recent – related approach to AI, by Jordi Vallverdú, who, introducing his article in the context of the forthcoming Singularity, writes: “I will refer to both [humans and artificial devices] as kinds of ‘entities’, rejecting the distinction between ‘natural’ and ‘artificial’ as outmoded and just plain wrong” [62]. I sympathize, but simultaneously suggest that the outmodedness of the distinction is worth exploring before being totally rejected, and furthermore, I highlight the difficulty of having done with the dichotomy, at least in language (something proven in the rest of his paper, despite his promise). To sum up:

Taking a physicalist/naturalist point of view, everything that exists in the universe (or multiverse) is natural. As Alfred North Whitehead puts it in his classic work “The Concept of Nature,” “[f]or natural philosophy everything perceived is in nature. We may not pick and choose” [65]. If AI exists, it is natural – therefore, the “A” in “AI” is fallacious. If a flower, a dolphin, or a robot exhibits intelligence, it is intelligence despite its nonhumanity. It becomes obvious, then, that AI is by all means partaking in physis (nature) as much as in techne (craft, manipulation of nature). Natural kinds, by definition, do not exclude mechanical or constructed kinds; the only condition for the establishment of a natural kind is the common appearance of “certain necessary relations” of individuals of a given kind towards other kinds [9]. In this sense, AI entities might differ from human entities, but this does not allow the label of artificiality to be given to either of the two kinds.

Taking a social constructionist point of view [3], everything which we perceive and verbalize is a cultural product. Hence, nothing is a natural given, but anything we observe, manipulate, and produce is modified by our social shaping and personal interests. If we perceive AI, it is the result of social manipulation – therefore, the “A” in AI is redundant. It is nonsensical to admit that a form of intelligence is artificial to the extent that everything we intelligently perceive is an artificial interpretation. It becomes obvious, then, that AI is by all means partaking in techne (craftsmanship). However, this techne is nothing else than the cultural value that we attribute to all givens:

“Culture is nomos as well as techne, that is, subsumes society as well as culture in the marked sense. Nature is equally human nature and the non-social environment. To these images of the ‘real’ world we attach a string of evaluations – so that one is active, the other passive; one is subject, the other object; one creation, the other resource; one energizes, the other limits” (Strathern in [40])

Strathern, in her deconstruction of the nature-culture dichotomy, further refers to other taken-for-granted binaries such as “innate/artificial” and “individual/society” (in [40]). Hence, this common treatment of all similar dipoles acts here as a smooth passage to the second part of the argument, which analyses the singular/shared intelligence dipole as well as that of intelligence/non-intelligence. To recapitulate, AI can be seen both as an innate characteristic of a mechanism which satisfies a number of technical conditions in some sense, and as a constructed attribute dependent on its cultural contexts in some other sense.

3 Intelligence: Distributed Cognitive Agency and Giant Steps to an Ecology of Mind

“It would not, I imagine, be very bold to maintain that there are not any more or less intelligent beings, but a scattered, general intelligence, a sort of universal fluid that penetrates diversely the organisms which it encounters, according as they are good or bad conductors of the understanding” [41]

“But, surely, the interesting question is what entitles us to attribute intentionality to non-machines in the first place? What makes our description of human intentionality other than metaphorical?” (Woolgar, in [35])

Does a decision-making intelligent machine act as a single entity or in relation to a group? Does its intelligence (natural, artificial, or otherwise) exist inside it, or is it the outcome of collective processes? These are questions which PSS tackles when addressing themes of individuals versus populations, and two main strands, individualism and collectivism, have been developed in order to methodologically explain phenomena either “in terms of individuals and their intentional states” or through other means when this method is found insufficient [59]. Tollefsen thoroughly overviews the different approaches; while the question is admittedly related to the nature-nurture debate, the individual-population question is mostly a matter of method and not of ontological metaphysics of belief. There are many intermediate approaches, so, in a sense, the aforementioned “interaction” was there since the beginning. With AI, due to its permanently networked condition, it becomes almost imperative that we adhere to the collectivist approach. There is no precise ontological or epistemological limit separating a human’s actions from their AI (or other) tools, as, in any given case, all agents function as functions of each other; the calculations of an online buying recommendation system are the result of my interaction with the system, which reflects at the same time my personal behavior but also other customers’ behavior, and so on. The distribution of intelligence and agency is boundaryless and expanding, and AI applications provide good evidence for this. However, this discussion is older than AI’s recent resurgence, and the authors most relevant to the theme examined here are Bratman, Pettit, Hutchins, and Epstein (with his references to Tuomela and Searle).
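To make the recommendation example tangible, the following toy sketch (my own hypothetical illustration in Python, not drawn from any of the cited systems or authors) shows user-based collaborative filtering in miniature: the suggestion a single customer receives is computed jointly from her own purchase history and from other customers’ histories, so the “intelligence” of the recommendation cannot be located in any single unit.

```python
# A hypothetical, minimal illustration of user-based collaborative filtering.
# The recommendation emerges from the overlap between my purchases and
# other customers' purchases: no single agent "contains" the suggestion.

from collections import Counter

def recommend(my_items, other_customers, top_n=3):
    """Score items I do not own by how similar their owners are to me."""
    scores = Counter()
    for their_items in other_customers:
        overlap = len(my_items & their_items)   # similarity: shared purchases
        if overlap == 0:
            continue                            # dissimilar customers contribute nothing
        for item in their_items - my_items:     # candidate items I might want
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

me = {"kettle", "teapot"}
others = [{"kettle", "teapot", "tea cosy"},     # very similar customer
          {"kettle", "mug"},                    # somewhat similar customer
          {"drill", "hammer"}]                  # unrelated customer

print(recommend(me, others))                    # ['tea cosy', 'mug']
```

The point is not the algorithm itself, but that its output exists only within the ecology of interactions between one user, other users, and the system.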

Michael Bratman speaks of shared cooperative activity (SCA), a concept of collective involvement for the achievement of a given goal with the following three requirements: (i) mutual responsiveness, (ii) commitment to the joint activity, and (iii) commitment to mutual support [5]. Bratman’s account is seminal, yet weak from a collectivist perspective since, as he admits, not all characteristics are found in the examined cases, and most importantly, SCA “is broadly individualistic in spirit; for it tries to understand what is distinctive about SCA in terms of the attitudes and actions of the individuals involved” [5]. With AI, we may assume, only after some extrapolation, that responsiveness exists in the sense of a higher ethical motivation (however, ethical inscription in robotics is underway [37]). If we make a distinction between responsiveness as a feature of value-driven decisions and responsivity as an entity’s ability to respond, then at the current stage of AI development we may speak of responsivity, but not of responsiveness. Similarly, algorithmically programmed commitment is – at least to our human eyes – no commitment at all. This is debatable, for example, if we consider a nihilistic approach to ethics, which negates the existence of values as driving forces, or the human mind as a well-advanced algorithmic process, or algorithmic commitment as an extension of human commitment, and so on. The clear-cut differentiation between human and machine is again blurred. In any event, the following discussion might help revise the SCA concept.

Philip Pettit’s seminal book The Common Mind [49, 50] explores what constitutes human intentional thinking agents and, after sharply defining his terms, concludes with his theory of the common or manifest mind, which rests on the necessity of interaction between individuals. Like Bratman, however, he also privileges the individual over the collective as the underlying force of this common decision. In his later article, he seems to withdraw this prioritization by referring to the interaction as the very prerequisite for one’s autonomy. If an individual is not within a society, she cannot comprehend her individuality. While people are autonomous in one sense, “[t]hey may depend on one another for attaining the basic prerequisite of their individual autonomy; they may be able to realize that autonomy only in one another’s company” [49, 50]. Again, this model can be applied to AI only via extrapolation. Since AI manifests its ontology not by individuation but by networking, the rule of verifying one’s individual value through one’s dependency on the group is not very convenient. However, it is quite reasonable to suggest that since the robot’s “purpose” is to help humans, and since humans build robots to help them, the more the two are in interaction, the greater the verification of the positive human-machine synergy will be.

Edwin Hutchins has been a pioneer both in contributing to the group mind hypotheses [28, 29, 59] and in monitoring and reviewing relevant theories [30]. Constantly revisiting his terminology, in 1991 he modelled his connectionist distributed cognition framework as the theory of the “constraint satisfaction network.” Such networks are composed of units whose connections represent constraints, whose frequency and density, in turn, determine the judgement of the network [59]. The units can themselves be sub-networks of a hierarchically higher network, so humans can be the units of a group, but also the inner complications of a human body may act as a constraint satisfaction network for a person (a toy computational illustration of such a network is sketched after the quotation below). Hutchins generalizes his theory as such:

“A system composed of a person in interaction with a cognitive artifact has different cognitive properties than those of the person alone […] A group of persons may have cognitive properties that are different from those of any person in the group […] A central claim of the distributed cognition framework is that the proper unit of analysis for cognition should not be set a priori, but should be responsive to the nature of the phenomena under study” [29]
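As promised above, the following minimal sketch (my own hypothetical illustration, not Hutchins’ actual model) treats units as binary states and connections as weighted constraints; the network’s “judgement” is simply whichever configuration the units collectively settle into, a property of the whole system rather than of any single unit.

```python
# A toy constraint satisfaction network (illustrative only, not Hutchins' own model).
# Units hold binary states; symmetric weighted connections encode constraints:
# positive weights push two units to agree, negative weights push them to disagree.
# The "judgement" is the configuration the whole network settles into.

units = ["sky_is_clear", "take_umbrella", "go_hiking"]
constraints = {
    ("sky_is_clear", "take_umbrella"): -2.0,   # clear sky discourages the umbrella
    ("sky_is_clear", "go_hiking"):      2.0,   # clear sky encourages hiking
    ("take_umbrella", "go_hiking"):    -1.0,   # carrying an umbrella discourages hiking
}
clamped = {"sky_is_clear"}                     # an observed "input" unit that never flips

def weight(a, b):
    return constraints.get((a, b)) or constraints.get((b, a)) or 0.0

def settle(state, sweeps=10):
    """Repeatedly set each free unit to whichever value better satisfies its constraints."""
    for _ in range(sweeps):
        for u in units:
            if u in clamped:
                continue
            support = sum(weight(u, v) * (1 if state[v] else -1)
                          for v in units if v != u)
            state[u] = support >= 0
    return state

initial = {"sky_is_clear": True, "take_umbrella": True, "go_hiking": False}
print(settle(dict(initial)))
# {'sky_is_clear': True, 'take_umbrella': False, 'go_hiking': True}
```

Replacing the toy units with a person, a tool, and an environmental feature changes nothing in the formalism, which is precisely the point of treating cognition as a property of the ensemble rather than of any unit inside it.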

In another article from the same year, he places his own theory among the pantheon of shared cognition frameworks, which he denotes as “cognitive ecology” and defines as “the study of cognitive phenomena in context” [30]. Hutchins reviews the history of and the differences between the approaches – namely, “Gibson’s ecological psychology, Bateson’s ecology of mind, and Soviet cultural-historical activity theory” [30]. Based on the simple premise that “[e]verything is connected to everything else” but “not all connectivity is equally dense” [30], cognitive ecology understands intelligence as a phenomenon and not as a logical process, distributed beyond the human cranium and reaching multiple exo-human elements. While, like all previously analyzed theories of shared mind, Hutchins’ theory takes the human as the standard unit of interest, his repeated references to the work of Gregory Bateson offer a significant advantage with regard to AI’s placement in the group mind theorizations. As Bateson straightforwardly puts it regarding his notion of the ecology of mind (EoM), “the mental characteristics of the system are immanent, not in some part, but in the system as a whole” ([1], original emphasis). An example of an EoM follows:

“Consider a man [sic] felling a tree with an axe. Each stroke of the axe is modified or corrected, according to the shape of the cut face of the tree left by the previous stroke. This self-corrective (i.e. mental) process is brought about by a total system, tree-eyes-brain-muscles-axe-stroke-tree; and it is this total system that has the characteristics of immanent mind” [1]

Interestingly, Bateson’s ideas were shaped after his engagement with cybernetics and systems theory, the building blocks of AI, so that, in a sense, this paper now comes full circle. Within an EoM, or a cognitive ecology, or a not-necessarily-human constraint satisfaction network, intelligence exists irrespective of the ontological nature of the units within the system. Paraphrasing Bateson, we may consider an ecology of human-smartphone-wireless connection-AI algorithm-food, and so on. Edwards speaks of such environments generated by new technologies as “closed worlds,” pretty much echoing the same cyberneticist systems symmetry. Simply put (and similarly to Hutchins’ constraint networks), the restriction of a closed world opens up the possibilities for interconnections between the participants and for the maximization of actions according to the rules: “Everything in the closed world becomes a system, an organized unit composed of subsystems and integrated into supersystems” [10]. Of great interest is the sociologist and systems theorist Niklas Luhmann’s contribution to legal frameworks – something which is yet to be related to recent discussions about robotic legal personhood [37]. Luhmann states that “a person is a unity formed only for purposes of communication, merely a point of allocation and address,” reducing personhood to a temporary, partially self-contained, and self-aware unity (“which does not exclude the possibility of its imagining that it is a person”) related to other similar unities [39].

Before concluding, I should mention my intentional avoidance of Raimo Tuomela’s and John Searle’s theories of shared intention, which emphasize collective intention as the primary decision-making driving force [59]. The reason for this avoidance is twofold and is explicated in Brian Epstein’s recent work on social objects without intention: on the one hand, the various sub-groups constituting a phenomenon in question have different intentional states [11]. On the other, when a fact or an object fails to fulfil the role of its collective acceptance, this does not imply its failure as an institutional entity [11]. In fact, drawing from lessons in anthropology, he reminds us that several implied and unseen factors generate social objects, so that a theory of collective intention does not hold (66–67). Among AI specialists, Searle’s general disbelief towards the potentialities of AI and a machine’s capability to think is well known through his Chinese Room argument [52]. It is verified that these theories prioritize the individual human over the group, despite their holist labels; in a sense, we can refer to them as crypto-individualist. Beata Stawarska emphasizes the enhancement of the I-You relationship and the decomposition of egocentrism through the advent of AI and robotics, leaving an open potentiality for equal communication with robots [57], but nonetheless does not explicitly expand the notion to human-AI symbiosis.

To conclude, intelligence, according to shared cognition approaches, can be viewed as a phenomenon taking place in the context of a given ecology and not as an organism’s intrinsic property. No organism can be imagined without a context, and therefore intelligence is not owned by individuals but happens only within interaction. The fallacy of the letter “I” in AI is now sustained, since intelligence is not restrained within certain boundaries, and it therefore makes no sense to attribute this feature to a natural or an artificial entity.

4 Conclusions–Objections–Future Work: AI Does Not Exist

A recurring problem (or perhaps advantage) in AI and robotics research is that the very term “AI” is relationally defined [21, 26, 36, 42, 43]. Sometimes AI can mean a particular self-contained device, in the same way that an automobile means a specific vehicle used for transportation. To a certain extent, this proposition is wrong, because one may argue that learning robots such as iCub or OpenCogBot [21, 43] make use of AI software, but they are not AIs themselves. At other times, AI can mean precisely the software, which is enabled by the coexistence of applications, physical supports, and goals, in the same way that transportation is the function of vehicles, infrastructure, and operations of transport. To a certain extent, this proposition is also wrong, since claiming that IBM Watson or applications of advanced microcircuitry [32, 42] are themselves AIs is of little or no meaning, given that they are only enabled to perform as parts of greater systems. (In that sense, any computer application or even a simple pocket calculator is an AI, and indeed this was the basic assumption in the early conceptions of this terminology, that is, the replication of any mental act [44, 52].) Most of the time, both propositions are simultaneously right and wrong, depending on the context in which they are used. The problem occurs when the terms are used in non-research language, as in the press [27, 36, 55, 66]. In such cases, and after the present paper’s analysis of terminology, it appears that propositions about AI are neither right nor wrong; they are meaningless. While space limitations do not allow for an elaborate discussion of the topic, it seems that there is a need for AI and robotics researchers to act as brokers and intermediaries for the improvement of the public understanding of their respective fields. Science and Technology Studies (STS) scholars have often raised the important issue of such understandings, when “institutional hybrids,” cases of scientific and technological artefacts or terms, do not exactly match the criteria of multiple overlapping arenas such as law, policy, mass media, and science fiction, thus causing confusion (for example, in the case of “cybrids” and xenotransplantation, where STS and other scholars collaborated to provide analytical taxonomies of terminology, while also pointing out the difficulties of precise definition [23]).

As in certain cases of nonhuman transplants to humans and vice versa (not human enough to be human, not animal enough to be animal), the subject-referents of AI (for instance, autonomous robots) are, like humans, neither natural nor artificial, neither intelligent nor unintelligent, or else they are both. Consciousness, awareness, and intentionality are developing assemblages of contingencies, networks, and families of relationships, linked together by scales of context – according to the purposes of every researcher. On the one hand, AI exists, and in that sense it is as natural (and as restricted by nature) as anything else which agreeably “exists.” On the other, like any other notion, it is artificial, constructed, and in continuous interplay with other machines, with humans, and with the environment. Therefore, to the extent that our current human categorizations of what constitutes natural and artificial, or intelligence and non-intelligence, are vastly contingent and context-based concepts, we may proclaim: AI does not exist. Following Edwards, who suggests that AI and robots are historical constructs [10], AI is a historical convention as much as the notion of the human is – which, if human judgement is taken out of the loop, also does not exist.

If, however, it is proved that AI has no meaningful reason to be an ontological category – and proved it is, as societies exist in networks of meshed human and nonhuman intelligence – then what justifies contemporary AI R&D and its ethics? The collapse of both the nature/culture and the human/nonhuman intelligence divides leaves open the question of responsibility and action. As Vicky Kirby puts it in her foreword to the recent volume What if Culture was Nature All Along:

“This reversal from natural to cultural explanations brings a sense of dynamism and political possibility – in short, no need for despair if we can change things. Yet such interventions also carry the message that nature/biology/physis is, indeed, the ‘other’ of culture, the static and primordial benchmark against which human be-ing and its agential imagination secures its exceptional status. But if the capacity to think stretches across an entire ecological landscape, what then? If nature is plastic, agential and inventive, then need we equate biologism and naturalism with a conservative agenda, a return to prescription and the resignation of political quietism?” [33]

No. At least, as far as AI is concerned, I suggest that the question left open by the present investigation is thoroughly socio-political. Among the basic priorities for future social studies of AI, after this paper’s heretical conceptualization, are:

  (a) attempts at more precise definitions and analytical taxonomies of various applications of AI according to experts,

  (b) tentative (yet rigorous) demarcation of expertise, especially in the cases of prestigious figures in mass media associating themselves with AI, and

  (c) empirical investigation, through qualitative means, of the impact of current AI hypes and/or disillusionments in the public sphere on AI R&D and policymaking.

The overall feeling left by mainstream social commentary about AI is that the technology will change society. However, social studies should aim at highlighting how societal factors impact the conceptions of AI and possibly, from an ethical standpoint, how we should change AI conceptually towards the greatest benefit (instead of proposing technologically deterministic responses of ethics to AI’s impact).

If we change everything we take for granted about AI, we can see how AI and (not “in”) society might change everything – as an act of co-production. Can there be a politics of decentralized and simultaneously 100 percent natural and 100 percent artificial cognition? As mentioned earlier, the emerging field of roboethics deals with the inscription of ethical drives into robots [19, 37]. Given empirical cases of the perpetuation of biases based on the input of partial data (such as the AI-based beauty contest [36]), one is tempted to ask: what is the normative morality “taught” to machines by humans? Moreover, if nature, nurture, human intelligence, and AI do not exist, does “morality” exist? An increasing number of cybernetic devices become attached to human bodies or act together with human brains in decision making, and an increasing number of human features and functions are inscribed into machines. Therefore, the line between the two traditionally assumed kinds blurs – in a similar manner to the blurring that took place between the online and the offline, giving birth to the onlife condition [47]. Dichotomies are dangerous, and humans have been learning this the hard way (divisions according to gender, race, class, and species have infiltrated institutional and social frameworks, leaving little or no room for nuances). Their social and legal implications are tremendous and difficult to modify after their lock-in. Policymaking and public portrayals of AI should adhere to the pragmatism of a human-machine continuum, and taken-for-granted dichotomies should be taken with a pinch of salt. These questions verify the need for further exploration of networked and dynamic human-AI societies and of admixed organic and inorganic features in future research.