In this section, we study how industry, science and law have engaged the imaginary of autonomous intelligent robots in society: how they perceive the challenges and construct the visions, goals and strategies for realizing these developments. Our main focus is on the variety of concepts deployed to align different parts and actors of the network. The role of metaphors and analogies is particularly powerful here: in the case of industry, metaphors are taken from the assembly line of the traditional factory and extended to the workings of society; in the case of science, we observe the imaginary of the natural biological ‘machine’, which becomes a metaphor for describing assistive systems for users in need of help and for addressing ‘societal challenges’; in law, the key concepts refer to basic attributes ascribed to natural persons, such as personhood, agency and autonomy, which serve as the yardstick against which it is judged whether such attributes can describe non-human entities.
Industry: Re-making and Extending the Assembly Line
Robots were introduced to manufacturing, mainly of automobiles, in the early 1960s, with the first robots on the market from companies such as KUKA and ABB in the early 1970s. But it was not until the 1980s that robots became a mainstay of industrial production. The key concept describing their function is ‘robotic’, and the classic assembly line of robots is referred to as automation (cf. Suchman 2007). Emblematically, robot arms are installed in factories and on assembly lines, to which a host of formerly routine human actions are delegated, such as sorting, distributing, welding, assembling, bolting and painting, one task leading deterministically to the next. ‘Degrees of machine freedom’ occur within a three-dimensional geometric field, literally shackled to the shop floor. More recently, however, different robots have been imagined and experimented with: ‘cobots’ for work environments beyond manufacturing, e.g., in logistics, warehousing and healthcare; and experiments with drones and driverless cars operating autonomously in complex semi-structured and unstructured environments. This shift in thinking about what robots can achieve is emblematic of the gradual blurring of industrial, service and assistive robotics (euRobotics 2013–2014). Shifts in enabling technologies are projected to take society from the robotic automaton in closed-off secure settings to the autonomous robot in living social and working environments (EUROP 2009); not just automation but autonomy (Floridi and Sanders 2004); not just repetitive movement, but flexibility, adaptability and learning (Siciliano and Khatib 2008). The industry roadmap projected how ‘[w]ith increased flexibility and ease of use, robots are at the dawn of a new era, turning them into ubiquitous helpers to improve our quality of life by delivering efficient services in our homes, offices and public spaces’ (EUROP 2009: 7, cf. also euRobotics 2013–2014: 15).
The anticipation of autonomy is mobilized together with ‘application requirements’ such as adaptation, positioning, human–robot interaction, robot-robot interaction and dependability, to be translated into more concrete ‘product visions’ (Ibid.). Although robot autonomy is not the same as human autonomy (Haselager 2005), it is seen to share some of the same virtual or presumed characteristics: a regulative principle or meta-property to steer actions and strategies not identifiable in any single body part or application. It is not a technical specification of machines; rather, it projects and expects an evolving relation between humans and machines: “Autonomy is the system’s ability to independently perform a task, a process or system adjustment. The level of autonomy can be assessed by defining the necessary degree of human intervention…” (EUROP 2009: 22).
Yet, how do we achieve the desired levels of autonomy? There is no scientific specification that could settle the issue. Rather, the industry’s roadmap (EUROP 2009) provides a list of 66 technical and knowledge gaps needing to be filled. This is presented as “a long and tough task that can only be realized in a series of steps,” which requires bringing industry and academia closer together (Guhl and Zhang 2011: 6). The bridging function thus operates at two levels: between robot and society, and between industry and academia. Here the scientific vision inserts itself, promoting a strategy to ‘build the bridge’ to the machines of tomorrow on the assumption that the scientific principles to get there will be discovered along the way (next section). Yet, industry does not wait for scientific results, but orients towards the making and ordering of the social relations deemed necessary for expanding the assembly line:
The robotics market is not only composed of end user applications and robot technology suppliers but also of service and supply chains which add value. The early stage nature of the robotics market means that these are not yet fully developed (euRobotics 2013-2014: 27).
Signs of developing service and supply chains would evidence a growing market of useful and usable robots; however, potential application domains cannot be directly assessed because they remain to be built (cf. Scott 1998). Central to that construction task is the need to influence legislation, regulation and standards, and to strengthen the common language used in the robotics communities, e.g., through the circulation of strategic documents, roadmaps, newsletters, job announcements, events, etc. Roboticists are actively fostering a community, for example with robot competitions and other outward projections and promotions of the promise of autonomous machines aimed at investors, policymakers and publics.
In the midst of gaps and technical challenges concerning the assembly of robots, their properties and capabilities, a European community of sociable machines and humans remains the dominant imaginary. Success is seen to hinge on the role of European industries in a globally competitive environment, especially the extent to which European robotics can develop the desired technologies internally (euRobotics 2013–2014). euRobotics considered Europe to hold “a leading role in industrial robotics, supplying the world market.” Yet, “this position is vulnerable. Aside from well-established Japanese suppliers, new companies are entering the European market” (euRobotics 2015: 22). Indeed, support for research and innovation in physical-digital systems has seen an upsurge since the 2008 financial and economic crisis (also Fuchs 2018). This includes embedding the public–private partnership funding option within the ‘societal challenges’ framework of Horizon 2020, and linkages to ‘Pre-Commercial Procurement’ and ‘Public Procurement of Innovation’ (euRobotics 2013–2014: 22). Simultaneously, as an industry representative explained (Rommetveit et al. 2015), global competition was an incentive for industry to launch initiatives to sort out legal and ethical issues hindering robotic development:
The obstacles for robots have to be investigated. There is competition with Korea and Japan. .. ELS issues need to be investigated that hinder solutions. European robotics industry has to be made world leader … so that social science related to robotics has leadership of opinion in the world. This is why the Green paper was produced (author’s notes).
In this sense the progression of industry, extending assembly-line robots to societal robot integration, is inseparable from the acceptability of robot autonomy. For the expansion of Europe-owned and -controlled technologies, the way ahead is seen as one of incremental improvements upon core enabling technologies and regulations. Here, the industrial strategy mobilizes and implements an “economic imaginary” (Jessop 2009), giving “meaning and shape to the economic field,” while at the same time performatively suggesting the regulatory environment and technological economy required for its realization. It singles out new domains and value chains in citizens’ living and working environments, including an imagined community of interest (Levidow 2013). This is a Europe attached to its ownership (mainly German) of certain technological and scientific domains of highly competitive advantage (Fuchs 2018), yet vulnerable to international competition. Against this background, the roadmap envisions and orders the way forward toward new ‘partnerships’ of public and private enterprise, humans and machines.
Academic Research: Making Nature’s Friendly Helpers?
Visions of autonomous machines have historically captured the imagination, and are at the heart of ongoing efforts to better understand life and unique human characteristics (Riskin 2007). This, one could say, is a genuinely academic and experimental endeavour, and a philosophical one, at arm’s length from the industry’s practical reasoning. The RCC proposal is an instantiation of what happens when researchers and engineers are granted the opportunity to dream big. The FET Flagship Initiatives scheme held out a promise of large funds and prestige, of revealing nature’s secrets while offering solutions to Europe’s societal, economic and existential problems. The RCC proposal played along, portraying Europe’s high standards of living as an object of global envy: “democracy, advanced economies, social inclusion and quality of life” (RCC 2012: 3), yet presenting these standards as threatened by man-made and natural disasters, the economic downturn, trade imbalances and a dwindling industrial base. The primary challenge was demographic, since “never before in human history have older citizens made up such a large proportion of the European populace” (RCC 2012: 92). In response, the RCC’s primary goal was the making of friendly helpers for care and companionship. The RCC identified a gap between citizens’ expectations and their capabilities to live within the means of available resources. This gap was projected as “the challenge of sustainable welfare,” to be met by “a whole new class of machines to overcome the limitations of today’s machines, new machines based on a whole new science” (ibid.). Autonomous machines were thus posited as a direct response to a European welfare challenge.
Typically, roboticists do not address the topic of autonomy head-on, but rather work by way of conceptual and experimental detours,Footnote 10 in the case of the RCC proposal articulated as sentience, “the ability to integrate across perception, affect, cognition and action” (ibid.: 32). The building of sentient machines was the main scientific challenge, termed the robotics bottleneck. The proposal stated that present robotics is advancing toward adaptable and learning machines capable of acting in unstructured environments (Dario et al. 2011; RCC 2012), but has so far not fully delivered. This lack of knowledge needed to deliver robot capabilities to think, build and act fits well within the industry’s identification of gaps, and a promise to overcome them in one paradigm-changing leap. In the RCC proposal, filling gaps was branded as a challenge of enablement, to be met by building “a clear bridge between basic science, technology and society” and between “scientific vision and its implementation into concrete innovation and engineering objectives” (RCC 2012: 11). The pillars of the bridge were five: Simplexity, Morphological Computation, Novel Fabrication Technologies, Sentience and Society. A biomimetic approach that studies and models the adaptive mechanisms developed by living beings (micro-scale, invertebrate, vertebrate and human) over millions of years underpinned and unified the design of this bridge. With nature seen as an ‘engineer,’ these sorts of ‘mechanisms’ were turned into fundamental design principles taken to drive the evolution of bodies and brains.
Here we can see the differences between the academic and industrial robotics networks, but also their interactions and attempts at adjustment. Industry actors deliberately identify and map, to the greatest possible extent, the 66 knowledge gaps, leading to a set of recommendations for gradual improvements (Guhl and Zhang 2011). Academics, on the other hand, plunge themselves into the unknown unknowns of Nature’s secrets, seeing the gaps as constitutive of the road-building enterprise. To them, industry’s incremental approach is likely to lead to “a gradual loss of controllability and robustness, and … ultimately …to a substantial cost in efficiency and safety” (RCC 2012: 5; cf. Bekey 2005).Footnote 11 They articulate and mobilize the notion of a gap when arguing that these shortcomings of controllability, paralleled in the legal problem of liability, can only be overcome in a paradigmatic leap to sentience. Here, scientific ingenuity is crucial. The challenge is accommodated within the broader roadmap strategy, where concepts and metaphors such as ‘bridging gaps’ and ‘overcoming limits and bottlenecks’ have a central role.
At its core, the RCC vision was similar to the industry vision in being deeply hybrid, a socio-technical imaginary (Jasanoff and Kim 2009) of a new society, driven and assisted by robot companions for humans and by robots working alongside them in most walks of life. Importantly, both are machine-centric in their suggestions of economic and social change. However, the academically driven consortium left relatively little room for input from industry and for its mediating role in delivering actual products to markets. This was paralleled in an initial lack of attention to ethical, legal and societal issues; when the relevant expertise finally came on board, however, it was generously included in the discovery engine. The careful attention to community relations characteristic of the industry roadmap, however, was largely absent. Thus, the main identifier and basis for collective action within the RCC consortium was the fascination with building and exploring things that move, act, think and feel, without a clear pathway to innovation to deliver societal goods and products to market.
Legal Studies: Qualifying as Man’s Friendly Helper or Self-standing Person?
The emergence of robots capable of autonomous decisions is seen as a challenge to human-centric constitutions, and as possibly resulting in a paradigmatic shift in legal thinking, considering the speculative character of machine autonomy (De Cock Buning et al. 2012). In this section we describe the work of two legal networks on this topic: the machine-centric Green Paper developed within the industry roadmap of euRobotics, and the human-centric White Paper of the Robolaw project. We highlight their different positions on robot autonomy in relation to legal frameworks, existing laws and charters, the former proposing electronic personhood for robots. From those basic positions different problem-frames and strategies follow, which we attend to in the section “Public Realignments in Co-producing the Partnership”.
Green Paper: From Human Tools to Electronic Persons
The Green Paper declares itself to be the first European effort at bringing together the robotics and legal communities, supported by an ELS assessment document (Leroux and Labruto 2012). The roadmap metaphor is up-front in the ELS assessment, reminding the reader that the general objective of euRobotics is “to act and find ways to favour the development of European robotics,” and to deal with potential “worries about the consequences of introducing robots into society” (Ibid.: 5). It appears as if ELS issues are “hindering the development of robotics in Europe,” hence the “roadmap to overcome them.”Footnote 12 This framing of ELS issues provides the starting point for interdisciplinary collaboration. ELS issues are presented as ‘barriers’ or ‘obstacles’ that need to be removed, preferably before they arise, and the contributions of legal scholars, ethicists, social scientists and engineers must aim at this road-sweeping task.
Arguably, the communities of engineers and lawyers should get to “know each other” through this work, “share common language, vision and objective” (Leroux et al. 2012: 8) and, predominantly, share the concept of autonomy: “It is precisely that “interdisciplinary” collaboration that is the main reason for the current debate on the meaning of the word autonomy” (Ibid.: 11–12). The Green Paper provides different meanings of ‘autonomy’ in law, engineering and ethics. Yet, the main authors are industry representatives, and the problems in getting on with collaboration are seen primarily from an engineering perspective.Footnote 13 The paper blurs human–machine categories and disciplinary relations, since differing conceptions are themselves among the main obstacles. It also sidesteps an Ethics Roadmap created at the time (Veruggio 2006), and the results of the Ethicbots project (Tamburrini 2009), both of which took a human-centric approach.
The main normative obstacles pertain to issues concerning human autonomy, such as human rights, which are framed as ethical issues, not ‘legal issues.’ An important feature of the authors’ method is to focus only on issues that (speculatively) are specific to autonomous robots and not to technologies in general: “This approach means that we always try to guess if ELS issues disappear or not when we replace the word “robot” by “device”, “robotics” by “technology”” (Leroux and Labruto 2012: 9, our italics). In the analysis of different robotics fields (from assistive, security, toy and sex robots to exoskeletons), ethical issues like privacy, equality and dignity turn out not to be robo-specific, since some analogy can always be found in an adjacent technological field. These human-autonomy-focused issues thus disappear from further analysis.
Having set aside the ethical issues, the Green Paper turns to major legal obstacles to robot autonomy. These touch upon the core of legal systems: the fundamental distinction between legal subjects with agency and legal objects as physical entities. Robots are positioned as hybrids of the two. Depending on the legal regime in question (intellectual property rights, liability, legal capacity), certain autonomy-related qualities are attributed to robots (creativity, making choices, being a person), which serves to attribute legal qualities (authorship, liability, personhood) previously reserved for humans.
The Green Paper distinguishes between robots as physical entities (i.e., objects) and robots as a kind of agent, i.e. as quasi-subjectivities, and ventures into speculative terrains with respect to the second. This tendency is clearly seen in the chapter on intellectual property rights (IPRs). Following a broad outline of existing IPR laws concerning robots as objects of appropriation, the mode of analysis switches to the possibility of ‘robot generated works,’ whereby the machine is considered the subject, or author, of works worthy of IPR protections.
The overarching obstacle, however, remains the issue of liability (De Cock Buning 2012). Nobody wants to invest in robots if there is uncertainty about who is liable for damages caused by their behaviors. Until now, robots have been regarded as physical objects, but this conception might prove problematic for robots with self-adaptive and decision-making abilities (Boscarato 2011). The Green Paper offers a gradient legal analysis in which different forms of non-contractual liability are tailored according to a robot’s increased capability: starting from behaviors determined by producers, the analysis proceeds to machines that can “move freely in the surrounding space.” When a robot leaves the confines of its owner and causes harm or damage, it could be qualified as an ‘animal’ in the sense of article VI. 3:203 of the European Civil Code, and the custodian is liable. When robots possess decision-making and learning skills however, leading to behaviors not intended by producers or programmers, they may be qualified as a ‘child’ in the sense of article VI. 3:104 of the European Civil Code. This qualification becomes especially relevant when robots are modeled after animals, as in the RCC’s bio-mimetic approach. The ‘parents’ or ‘guardians’ of the robot would be held liable for damage or harm if their supervision has been deficient. Note that in such reasoning by analogy, certain actors are foregrounded (e.g., owners and guardians) whereas others disappear or are less relevant (e.g., producers), thus facilitating a transfer of liability.
Based on this, the Green Paper goes on to argue that robot autonomy is a central challenge to human-centric charters and judiciaries (method and practice), stating how “strict differentiation between man and machine (“man-machine-dualism”) is no longer acceptable” and that “man and machine should be considered simultaneously and their actions should be seen as cooperation” (p. 58). In other words, robot autonomy is positioned as a hybrid agency no longer bound by the passive concept of legal objectivity. This framing opens the way for legal innovation, i.e., exploring the self-admittedly speculative possibility of turning robots into ‘electronic persons.’ Robots should be granted “a special legal category” (Ibid.: 61) that would, for instance, allow them to be held directly liable for any damage they cause. This category of ‘electronic personhood’ builds upon notions of software agents in AI, presumptions that embodiment will make software agents more intelligent, and an analogy with the legal personhood of corporations, which suggests that electronic personhood is a solution to the problem of liability and responsibility, i.e., by a “bundling of capacities, material and financial responsibilities” (Ibid.: 61).
The Green Paper also provides an account of European legal structures, the main barrier being the sheer complexity and many-layered character of European law (Ibid.: 13–14, cf. 66). The Paper argues the need to “harmonize the legislation concerning robotics in Europe. Industries are confronted to different regulation and legal constraints which represent barriers making difficult the emergence of new markets.” Here, electronic personhood as a novel construct could prove useful. If the liability problem can be solved on a cross-jurisdictional basis, the future relations and expectations of participants and investors in the European digital market would be provided with legal certainty.
White Paper: Putting Humans and Society First?
The machine-centrism of the Green Paper comes more clearly into view when contrasted with the legal work delivered by Robolaw. One of the main objectives of the project was to investigate how emerging technologies in the field of robotics bear on the content, meaning and setting of laws, and how they will affect existing legal categories and qualifications. Analytically, the authors do not prioritize the claim that robots are about to enter society. Rather, they prioritize the conglomerate of legal regulations across Europe dealing with robotics. The mapping they undertake is of a barely charted terrain, whereby mapping entails a comparative analysis of legislation in several different countries. Newly emerging robotics is presented in the paper as an a-territorial, cross-boundary phenomenon, with legally hybrid entities moving between different legal systems, which calls for the development of a “specific European approach … characterized by core “European values” deriving from the main European sources of law.”
The goal is to avoid robots becoming disruptive to societal structures and to ensure that their benefits are fully exploited. Such a functional perspective “means to put rights – and fundamental rights as recognized by the European Union – first […],” requiring that “the impact the single application may have on society and fundamental rights shall be guiding the choice” (Bertolini and Palmerini 2014: 169).
The analysis in the White Paper orients to existing legal safeguards in society, e.g., how human rights could be affected by future robotics. What occurs in the Green Paper as an ‘obstacle’ is articulated in the White Paper as the premise on which roles for machines in society can be assessed and implemented. This focus on fundamental rights puts humans first, since “human rights are in fact an essential apparatus to deploy in order to promote and guarantee responsible advances in science and technology.” Human rights draw the boundaries for robotics development by indexing the ‘intangible’ zones that cannot be infringed. They function as “a test-bed for the desirability of robotic applications, since they can serve to identify the main goals and achievements expected by advancements in robotic research and industrial applications” (Ibid.: 176–177).
In this section we focused on mediations accomplished through strategic and intellectual work performed by industry, science and law. Whereas this work is premised on the imaginary of autonomous robots, each practice remains dependent on its own epistemic home base and styles of reasoning, although stretching fundamental discipline-specific concepts to meet new goals. This illustrates our analytic point that roadmaps are used to operationalize imaginaries and as aligning devices across networks, practices and institutions. Yet, the focus and overall goal of using a roadmap remain controversial: industrial and academic robotics remain machine-centric (although in different ways), which is transmitted to the legal and regulatory networks (Green Paper), but also becomes subject to controversy and contradiction from a human-centric legal and constitutional approach (White Paper, cf. Nagenborg et al. 2008; see Figure 2 in the electronic supplementary material).
The role of institutions and networks in innovation is a key theme in STS perspectives on co-production, insofar as STS researchers shift attention away from the scientific and technological problem domains, toward the social and institutional relations and networks in which the co-production is embedded and takes shape. In this story the ‘embedding’ is visible as a boundary crossing activity, much like shifting in and out of situational frames of reference. We will now draw upon this theme to address the co-production of science, industry, law, and politics in the making of partnerships and robot autonomy.