In the following section, we discuss the five challenges that emerged in the workshops and explain the various sub-challenges identified within those broader topics. As stated, the workshops aimed to discuss ways to address the identified challenges. The organizers asked the participants to deliberate over different approaches that could remedy some of the challenges they brought forward during the debate. They also encouraged participants to consider older technologies and fields of application, and to reflect on how associated challenges were addressed in the past. After introducing the five challenges, we summarize the recommendations that participants proposed. Finally, for each of the five challenge and recommendation areas, we provide a contextualization that embeds the topic into broader discussions in the human–robot interaction and ELS literature; this sub-sub-section is titled “Discussion” for each of the five topics or challenges. Thus, the first two sub-sub-sections (i.e., “Challenges” and “Recommendations”) in each of the five sub-sections reflect the participant voices, doing justice to their experiences and opinions. By contrast, the third sub-sub-section (i.e., “Discussion”) in each of the five sub-sections connects the participant voices to the broader literature, including current approaches to these topics. By separating the participant voices from the broader literature, we can give more voice and room to the emerging and spontaneous themes from the workshops, while reflecting on these themes in light of recent scholarship.
Privacy and Security
Challenges
During the workshops, particularly those at NewFriends 2015 and 2016, participants tried to find a common understanding of the concept of privacy. As privacy is a protean concept, the discussion soon switched to the ways social robots can move around, monitor their environment, record their interactions with individuals, and register their daily routines and preferences. In this sense, social robots may affect the “physical and informational” spheres that surround humans. The participants highlighted that the ability to physically reach out into the world in an autonomous fashion enables robots to surveil individuals across places and gain access to personal rooms on a scale that was previously impossible.
Second, and linked to the informational privacy aspect, a participant at NewFriends 2015 pointed to transparency (avoiding secretive systems), access to individuals’ records and their uses, as well as privacy controls: How can we prevent information about oneself from being used for unwanted purposes without consent? Is there an informed consent bias?
The participants also raised concerns about their ability to control the collected and processed data, mainly referring to data that robots can collect without humans perceiving it (e.g., muscle changes in exoskeleton devices): Is this data also theirs? What control would we have over our personal information? Would we have the ability to decide for ourselves whether we want the robot to process such data? Participants discussed these questions during the workshops, especially at NewFriends 2015 and 2016. The discussions often revolved around the concept of integrity and the ability to correct or amend the data, but also around the question of how data misuse can be prevented.
Some participants in the JSAI-isAI workshop brought up the issue of what type of permission social pet robots employed in cognitive therapeutic settings or hospitals require. The participants highlighted the difficulties they had with group therapies, where some of the children had parental permission to give images and data to the researchers and others did not.
Concerning security, multiple participants were worried about surveillance and security breaches, especially during ERF and NewFriends 2015 and 2016: What are the consequences of robots being hacked? As the Internet of Things (IoT) community had already documented such events, participants feared that external (and often malicious) agents could take control of the correct functioning of therapeutic robots. Participants stressed the importance of security in this regard, especially in such sensitive contexts.
Recommendations
One of the participants of NewFriends 2016 believed that the issue of privacy and data protection is overly dramatized and does not reflect reality. The participant argued that we, as end users, need to choose among three attitudes towards the processing of personal data, ranging from permissive to restrictive:
1. We (end users) permit robots (and their manufacturers) to collect our data without any control on our side;
2. We indicate which data can be collected and require robots to anonymize these data as soon as they collect them (very much in line with the spirit of the current legislation on privacy in the European Union); or
3. We prohibit social robots (and their manufacturers) from collecting any data.
While transparency from the data controller would lead users to choose scenario 1—because the threat to privacy disappears once users agree on the use of the collected data—the participant argued that the second scenario is the most likely to happen. She argued that the idea of informed consent follows a more typical data protection approach. According to the participant, scenario 3 is entirely the opposite, as it conveys the impression that we should prohibit all data collection at any time. In line with scenario 2, the participants discussed the possibility of pre-defining the risks and establishing processes to identify and mitigate them, in the sense of a fact-based rather than a fear-based approach. Concerning scenario 3, beyond the fact that processing data outside of the primary purpose of the robot should be prohibited, some of the participants suggested that the terms and conditions of companies that collect data should be revised to include the principle of data minimization.
One of the organizers mentioned across different workshops that, if the technology allows for it, the robot should be privacy-friendly. For example, removing cameras from drone navigation would reduce privacy impacts; or using floor vibration detection to notice when a patient has fallen, instead of recording an image of them, could allow for a more privacy-friendly notification system. Participants in the JSAI-isAI workshop referred to the shutter sound that cameras in Korea make when a person takes a picture. With some modification, this could be adopted for social robots to make users aware of when and how data is collected. The problem with this solution—and referring to the Korean camera sound—is that apps are available that allow users to take a picture silently. Research is needed to know whether such a mechanism could still be feasible in robot technology, especially for robots that do not incorporate a robot app store.
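To make the vibration-based idea concrete, the following minimal sketch (all names and thresholds are hypothetical assumptions, not an implementation discussed at the workshops) classifies floor-vibration readings instead of storing images, and emits an audible cue whenever an event is recorded, so users always know when data collection occurs:

```python
import random
import time

FALL_THRESHOLD = 0.8  # hypothetical normalized vibration magnitude


def read_floor_vibration() -> float:
    """Stand-in for a floor-mounted vibration sensor (real hardware API assumed)."""
    return random.random()


def play_notification_sound() -> None:
    """Audible cue signaling that an event was recorded (cf. camera shutter sound)."""
    print("\a", end="")  # terminal bell as a stand-in for a speaker


def notify_caregiver(event: dict) -> None:
    """Send the minimal event record to caregiving staff (transport not modeled)."""
    print(f"ALERT: {event}")


def monitor(cycles: int = 100) -> None:
    for _ in range(cycles):
        if read_floor_vibration() > FALL_THRESHOLD:
            # Only the event type and timestamp are stored: no image or audio
            # of the patient is ever captured.
            event = {"event": "possible_fall", "time": time.time()}
            play_notification_sound()
            notify_caregiver(event)
        time.sleep(0.1)


if __name__ == "__main__":
    monitor(cycles=50)
```

The design choice mirrors data minimization: the system records only what the notification purpose strictly requires.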
Some forward-thinking comments on existing concepts also took place during JSAI-isAI and NewFriends 2016. For instance, some participants suggested an evolutionary modulation of the privacy concept, to reflect the times in which we are living. Others reflected upon the creation of dynamic consent forms: a user could be asked more frequently whether they consent to specific data being processed and would be re-asked for permission when a specific purpose changes.
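One way to read the dynamic consent proposal is sketched below; this is a minimal illustration under our own assumptions (all names hypothetical). Consent is stored per (data type, purpose) pair, so any change of purpose automatically triggers a new consent request:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass
class DynamicConsentManager:
    """Consent is stored per (data_type, purpose) pair; a changed purpose is
    a new key, so the user is re-asked automatically."""

    grants: Dict[Tuple[str, str], bool] = field(default_factory=dict)

    def request_consent(self, data_type: str, purpose: str) -> bool:
        answer = input(f"Allow processing of {data_type} for '{purpose}'? [y/n] ")
        self.grants[(data_type, purpose)] = answer.strip().lower() == "y"
        return self.grants[(data_type, purpose)]

    def may_process(self, data_type: str, purpose: str) -> bool:
        key = (data_type, purpose)
        if key not in self.grants:
            return self.request_consent(data_type, purpose)
        return self.grants[key]


consent = DynamicConsentManager()
if consent.may_process("voice recordings", "speech therapy"):
    print("Processing voice recordings for speech therapy.")
# Re-using the same data for a new purpose triggers a fresh consent request:
consent.may_process("voice recordings", "progress reporting")
```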
Discussion
As highlighted by the participants, social robots affect the privacy of individuals differently than current technologies [12, 33,34,35]. To understand the privacy implications of social robots, it is necessary to distinguish the privacy aspects or types that apply within this context [36]. In that regard, we can differentiate between physical and informational privacy (concerns). Physical privacy refers to the notion of non-interference and the idea of having a private space without prying, surveilling eyes. Social robots challenge physical privacy due to their ability to enter personal spaces [11]. Informational privacy refers to the ability to control information disclosure and aligns more with a central ideal of European data protection law, namely the concept of informational self-determination. With social robots, not only does the collection of personal information become ‘murky’—with social robots constantly processing data in the background [12]—but also its processing and dissemination are hard to grasp for an individual user. In light of the anthropomorphic effect of social robots [37], the ability to grasp the data processing and disclosure abilities of a social robot is challenged. Informational privacy concerns relate not only to the interaction between the user and the robot itself but also to the interaction between individuals through a robot, for example when a robot is hacked or surveillance takes place through a telepresence robot [11].
Since the workshops took place, from 2015 to 2017, the European data protection framework has been revised and the General Data Protection Regulation (GDPR) became applicable in May 2018. The GDPR governs the processing of personal data in Europe, whereby both terms “processing” and “personal data” have been interpreted broadly by legal scholars and by courts such as the European Court of Justice [38,39,40]. While the GDPR has been called “the law of everything” [40] because of its aim to tackle all challenges arising from digitization, it becomes clear when reading its recitals and articles that policymakers’ main concerns when drafting the Regulation were web-based applications, cookies, profiles (also of children), and automated decision-making. The impact of social robots on the “fundamental rights and freedoms, in particular their right to the protection of personal data” (cf. Recital 2 of the GDPR) was not at the forefront of the policy discourse.
Nonetheless, the GDPR contains provisions that will shape how social robots collect and process data of European citizens. First, the GDPR contains information duties for data controllers, that is, information they have to provide to the data subject prior to the processing of their personal information (cf. Art. 13 and 14 of the GDPR). The information duties are an ex-ante transparency requirement, unlike ex-post mechanisms such as explanations [41]. Second, the GDPR provides data subjects (i.e., users of social robots) with individual rights to access the data that social robots process (Art. 15) or to demand erasure of the data processed about them (Art. 17). Third, the GDPR codifies the principles of privacy-by-design and privacy-by-default, which state that systems processing personal data must be designed in a way that fulfills all the requirements set forward in the GDPR. Data controllers must include technical and organizational measures that ensure that the processing principles of the GDPR are met, including security measures such as encryption of data. However, privacy-by-design is not entirely or purely technical, but there are always organizational measures to take into account [42, 43]. The privacy culture in a company should be extended to cover the lifecycle of the robot, from its creation until its deployment.
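As a purely illustrative sketch—not a statement of what legal compliance requires, and with all names hypothetical—a social robot’s data store could expose these subject rights roughly as follows:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class RobotDataStore:
    """Toy data store exposing GDPR-style subject rights: access (Art. 15)
    and erasure (Art. 17). Legal exceptions and audit trails are not modeled."""

    records: Dict[str, List[dict]] = field(default_factory=dict)

    def store(self, subject_id: str, record: dict) -> None:
        self.records.setdefault(subject_id, []).append(record)

    def access(self, subject_id: str) -> List[dict]:
        # Art. 15: the data subject obtains a copy of their data.
        return list(self.records.get(subject_id, []))

    def erase(self, subject_id: str) -> None:
        # Art. 17: erasure on request.
        self.records.pop(subject_id, None)


store = RobotDataStore()
store.store("user-42", {"type": "interaction_log", "detail": "greeting"})
print(store.access("user-42"))
store.erase("user-42")
print(store.access("user-42"))  # -> []
```

Privacy-by-default would additionally require that such a store collects only the minimum data necessary in the first place.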
Legal Uncertainty
Challenges
Many of the discussions across the workshops frequently came back to the uncertain legal landscape around social robots. Importantly, the participants highlighted the difficulty of defining what a robot is in legal terms. Because there is no legislation governing the use and development of robot technologies, only regulatory initiatives, participants added another legal uncertainty: What is the actual applicable law in this transition period? Some pointed to existing legislation that could apply, although they also acknowledged that this might vary largely depending on the robot type, context of use, and other variables. Relevant legislation ranged from Directive 2001/95/EC on general product safety and Directive 85/374/EEC on liability for defective products to the recent Regulation 2017/745 on medical devices, the Low Voltage Directive 2014/35/EU, the Electromagnetic Compatibility Directive 2014/30/EU, and the Radio Equipment Directive 2014/53/EU.
Policymakers have started discussing the legal implications of robots in general but not specifically of social robots, let alone of concrete applications such as therapy or education [15]. Instead, there has been a focus on ethical guidelines. While such an approach is desirable as it promotes an extensive discussion without blocking innovation in a still-emerging field [44], participants, in general, felt that many of their questions would need to be addressed within a more concrete, enforceable format. At JSAI-isAI, not only the enforceability of guidelines and regulations in the field of social robots but also a global understanding of how to create more certainty in dealing with robots was seen as a challenge. In particular, participants from different continents felt that cultural differences might inhibit global guidelines, although they all agreed that worldwide guidelines would be an intermediate step towards legal certainty in the field of robots in healthcare and therapy.
At NewFriends 2016, another uncertainty discussed was whether robots could be certified as helpers. This discussion arose because one of the case studies introduced the question: “What can happen if a robotic system and a competent nurse have contradictory approaches to a particular situation?” During that discussion, questions revolved around whether a robot or an AI system could become a certified nurse or doctor and know what is best for the patient to the same extent as an actual nurse or doctor. The participants highlighted that this would break the hierarchical decision-making currently existing in hospitals, where, most of the time, the doctor decides how to proceed. If technology advances as current forecasts predict, we may have robots and artificial technologies that surpass human knowledge and that are better at making medical decisions.
Recommendations
Concerning legal certainty, participants agreed on the necessity to harmonize definitions of robots, impact, risk, and legal issues concerning social robots. In particular, they agreed that it is vital to incorporate knowledge about domestic laws when implementing robots in different countries, including knowing the habits and cultures of the place. At NewFriends 2015, one participant mentioned the “regulation-by-design” principle to refer not only to the embedding of a hybrid top-down/bottom-up regulatory model into the system to avoid violations of domestic and international laws, but also to a future compliance-by-default model, where both technical and organizational measures are put in place to make robots comply with the law. The Japanese participants mentioned that the Tokku approach could be a model to follow. This approach connects living labs, where robot technology is tested, with the policymaking process to achieve policies based on the evidence collected in the lab.
The participants also maintained that the creation of an agency dealing with robots, as exists in Japan, could help improve the management of all the ELS issues. This agency could be responsible for the development of standards and codes of conduct for engineers and other relevant stakeholders. An international expert committee could also be created to draft group reports and to conduct research on the legal and regulatory implications of such technologies. The agency could also oversee certification bodies that test whether robots comply with existing laws and certify them accordingly.
Both at JSAI-isAI and NewFriends 2016, participants agreed that there should be some purpose limitation. For instance, all the participants at both workshops agreed that the use of robots in the military field should be banned, as the contrary would increase violence and the risk of unfortunate scenarios. In the context of healthcare robotics, however, such clear-cut recommendations were missing. Participants reported positive attitudes towards the future applications of robots and AI technologies in the healthcare domain. The workshop organizers highlighted, however, that this opinion could be biased since all participants worked in related fields.
One participant at NewFriends 2016 wondered what barriers the current regulatory framework poses to the introduction of robots into society. The participant did not believe that the current legal order can respond to potential violations of children’s civil rights and suggested the use of multilateral trade agreements like the Trans-Pacific Partnership to change this and provide protection for these children.
Up to now, standards concerning robot technology have focused only on physical human–robot interaction, although we interact with robots in many forms, including cognitively [44]. Moving much faster than public policymaking, industry has started to develop standards that address the moral implications of robot technology, including the BS 8611:2016 Guide to the Ethical Design and Application of Robots and Robotic Systems, and the 2017 Ethically Aligned Design from the IEEE Global Initiative and Standards Association. Participants at NewFriends 2016 also highlighted in this respect that, because every robot is different (due to their machine learning characteristics), it is going to be very difficult to standardize the ELS aspects concerning robot technology.
Discussion
As workshop participants highlighted, robot legislation is overdue. As a result of the EP Resolution 2015/2103(INL) [15], the European Union is making efforts towards providing legal certainty for robot compliance, as “doing otherwise would negatively affect the development and uptake of robots” [23]. A recent open consultation launched by the EC [45] acknowledged that current European Harmonized Standards do not cover areas such as automated vehicles or machines, additive manufacturing, collaborative robots/systems, or robots outside the industrial environment, among others [46]. That is why the Machinery Directive is likely to be revised, to further consider its suitability for new types of machinery such as robots and for digitization.
The EP resolution did not provide a concrete definition of the word robot, although it acknowledged its importance. The EC reacted cautiously, questioning the need to define “cyber-physical systems, autonomous systems (and) smart autonomous robots” for regulatory purposes [23]. Whether a definition is required may depend on the level of uncertainty surrounding a concept, but it may be deemed necessary by legislators if such a concept has to be regulated explicitly [24]. Pointing in this direction, some legal scholars have defined robotics, albeit without impact on legislation [47, 48]. More recently, the High-Level Expert Group on AI (HLEG AI) defined robotics as “AI in action in the physical world” or, in other words, as “a physical machine that has to cope with the dynamics, the uncertainties and the complexity of the physical world” [49]. However, it remains unclear whether such a definition will be adopted or whether we should focus on broadly accepted definitions found in technical, international standards (such as the definition given by ISO 8373:2012: “actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks”).
Because the EP Resolution suggested that the development of autonomous and cognitive features in a robot makes the current rules on strict liability insufficient, the EC may revise Directive 85/374/EEC concerning liability for defective products.Footnote 3 Indeed, the EC stated, “advanced robots involve, in addition to the machinery part, complex software systems and suppose the provision of services, data transmission and possibly internet connectivity and, depending on the category of robot, a degree of artificial intelligence.” This intertwinement of tangible and intangible parts may further challenge the allocation and extent of liability, that is, to what extent the damages fall within the autonomous behavior of the robot [24, 50]. In this respect, the EC is exploring risk-based approaches that could set a safeguard baseline for current and future uses and developments of robots and AI. One example would be to use impact assessments appraising the consequences of implementing algorithm-driven technologies [51, 52], notably steered to understand how these affect the humans interacting with the robot [53]. As highlighted during the workshops, however, self-learning capabilities make each robot a unique product, which complicates their assessment—something recently highlighted by the Food and Drug Administration (FDA). Robot technologies process vast amounts of data, can learn from experience, and self-improve their performance. By doing so, they challenge the applicability of existing regulations (e.g., medical device regulations), which were not designed for progressive and adaptive AI. In addition, these capacities make the certification process more difficult.
One solution could be an obligatory insurance scheme, as already exists for cars [15]. However, the progressive and adaptive learning of robot technologies challenges the certainty on which insurance schemes rely [54]. Furthermore, robot applications are vast, and their embodiment as well as interaction modes differ widely from one case to another. Thus, it is difficult to determine which robots need insurance and which do not, and, consequently, which companies will offer such services [54, 55].
Beyond mere standalone and provisional approaches, Japanese workshop researchers supported the creation of testing zones for robot technology connected to policymaking. The Japanese “Tokku” model could inform policies revolving around robot technology [18]. These testing zones may be in line with the EP Resolution, which also highlights the need for testing zones, although a formal communication process between robot testing and regulatory development does not currently exist in Europe [56]. New European projects directed towards establishing testing zones for exoskeletons seem to point in that direction, although their connection to policy is unclear for now.Footnote 4
Autonomy and Agency of Robot Technologies
Challenges
Across the different workshops, participants engaged in lively discussions about the responsibility behind robots’ actions. The discussions at NewFriends 2016 and JSAI-isAI revolved around whom to blame in case of a critical event and whether robots can and should be perceived as morally responsible. The latter led to a discussion about how to think about the moral responsibility of robots, merging into an ethical and philosophical debate. Autonomy and the capacity for independent decision-making were what triggered the discussions. The participants at JSAI-isAI highlighted that autonomy is not a fixed term and that it includes different facets, calling it “the 50 shades of autonomy.” They were referring to the capacity of robots to make decisions on their own or by adapting to the users’ will. In this case, they highlighted that if robots adapt their personality to users, this could reduce serendipity and fail to challenge users’ decision-making processes. In the case of autonomous decisions, however, it was uncertain whether a hierarchical rule order should be mandatory, and whether a human should supervise robot decisions.
This led to the question of whether robots should be able to override human decisions in some instances, for security or safety reasons, because the robot may possess more understanding of the issue than the human. In this respect, and beyond the general concern about how much we are still in control, participants mentioned that this scenario could create a dependency and overreliance effect that would be very difficult to overcome. Some participants stressed that there are always people who need to design and maintain the robot and who are in charge of programming it. They insisted on the importance of determining who prepares the controls and how these controls are determined, which could help clarify who is responsible in the end. Other participants added that this is going to be exacerbated in co-participatory design projects, where the end-users are part of the design of the robot.
Regarding the question of moral responsibility, we encountered contradictory positions in the JSAI-isAI workshop. One of the participants highlighted that high-agency entities are typically thought to have more moral responsibility than low-agency entities. For example, we expect moral behavior from adult humans, but not from animals. The participant argued that robots would not be considered morally responsible unless people perceived them as agentic. Other participants mentioned that social cues (e.g., gaze, speech, or human-like appearance) influence how agentic people perceive robots to be and that we were setting higher moral standards for machines than for humans. Another participant suggested that the problem with this approach is that, although some discussions at the European level had taken place, it is not clear whether robots are going to have agency in the first place, and what kind of agency—animal-like, corporation-like, or electronic agent-like—they might have in the second place.
Recommendations
Participants were very confident about what to do concerning robots’ actions: clarity, transparency, and division of responsibilities between developers, manufacturers, adopters (institutions), maintenance, and end-users (particularly if the robot adapts its behavior to the end-user). Closer to a risk-based approach, participants claimed that we should make an effort to avoid unexpected consequences by predicting and modeling agents’ behaviors accordingly. Participants highlighted that, in the future, we might rethink the concept of liability, making it more about prevention than about reaction. To ensure such prevention, some participants referred to the creation of actuation protocols similar to the ones that exist for airplanes. This way, we could limit unfortunate scenarios in advance. In this regard, future protocols may be influenced by the specific context. In the case of autonomous vehicles, for example, this might mean that lanes for autonomous cars are created, separating them from non-autonomous cars. This extra requirement could also work as a safeguard for different types of robot technology such as unmanned ambulances and other care transport, including autonomous ground vehicles in hospitals or robotic wheelchairs.
A compensation fund scheme for non-human faults, similar to compensation funds for natural disasters, was also highlighted at the NewFriends 2016 workshop. The question that arose afterward, regarding possible insurance schemes, was whether all robots needed insurance or not. One of the participants suggested the creation of a matrix to determine liability. According to this participant, who quoted the work of Lohmann [57], the matrix could include the different levels of autonomy of the robot and the level of complexity of the environment.
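As a minimal sketch of what such a matrix could look like—the categories and liability assignments below are purely illustrative assumptions, not Lohmann’s actual proposal—liability could be looked up from the combination of autonomy level and environment complexity:

```python
# Hypothetical liability matrix indexed by the robot's autonomy level and the
# complexity of its operating environment; categories and assignments are
# illustrative only and not taken from Lohmann [57].
LIABILITY_MATRIX = {
    ("teleoperated", "controlled"): "operator",
    ("teleoperated", "open_world"): "operator, then manufacturer",
    ("supervised", "controlled"): "operator and manufacturer",
    ("supervised", "open_world"): "manufacturer, backed by insurance",
    ("fully_autonomous", "controlled"): "manufacturer",
    ("fully_autonomous", "open_world"): "insurance or compensation fund",
}


def allocate_liability(autonomy: str, environment: str) -> str:
    """Look up the primarily liable party for a given configuration."""
    return LIABILITY_MATRIX[(autonomy, environment)]


print(allocate_liability("fully_autonomous", "open_world"))
# -> insurance or compensation fund
```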
This point is related to decision-making processes. The participants mentioned that a decision-making rule should be established, either (1) following a hierarchical model (e.g., the doctor’s decision takes precedence over the nurse’s, but not over the patient’s); or (2) co-participatory with the user. Both include the robot in the loop. In the case of conflicts, some participants referred to dispute resolution systems such as the one on Wikipedia, although there was not much clarity on how to build that.
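A hierarchical decision rule of this kind can be sketched as a simple precedence ordering; the ordering below (patient over doctor over nurse over robot) is our illustrative reading of the example above, not a recommendation:

```python
from typing import Dict

# Illustrative precedence ordering: the patient's explicit choice overrides
# the doctor's, which overrides the nurse's, which overrides the robot's.
PRECEDENCE = ["patient", "doctor", "nurse", "robot"]


def resolve(decisions: Dict[str, str]) -> str:
    """Return the decision of the highest-precedence actor who voiced one."""
    for actor in PRECEDENCE:
        if actor in decisions:
            return f"{actor}: {decisions[actor]}"
    raise ValueError("no decision provided")


print(resolve({"robot": "increase dosage", "nurse": "hold dosage"}))
# -> nurse: hold dosage
```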
The participants noted that, just as insurance companies have assigned different values and importance to different objects, including body parts, a similar approach could be useful in the case of robots, for example, to determine whether a robotic prosthesis is considered equal to a human body part. The participants also acknowledged that this approach should be adopted carefully as it could raise some discriminatory issues in the long term.
Discussion
One of the main problems for roboticists is engagement. If users do not engage with the robot effectively, then it is unlikely that they will interact with the robot for a longer period of time [58], and the return on investment may falter. Establishing long-term engagement, however, is not an easy task. Social entities require an empathic personality to establish relationships and permanent attachment between artificial social agents and humans [58]. Personality is formed through unique, imperfect behaviors. The robots may end up being unique and, in some cases, disobedient to pre-established rules [59]. The robot dinosaur PLEO, for instance, evolves differently depending on the interactions of the user and its internal non-compliant parameters. From a juridical point of view, this more realistic adaptive behavior of robots poses some questions: To what degree will these robots be non-compliant? What is going to be the tolerance level, and in which contexts/circumstances is non-compliance going to be avoided? Who is going to determine the consequences of non-compliance?
Disobedient and imperfect robots that enhance long-term engagement clash completely with the risk-based approaches proposed by European institutions, as they challenge any certification process, not to mention user trust and reliability. Non-compliant robots may also clash with by-design principles, including privacy- or ethics-by-design, as changes in the internal parameters of the robot may lead to disastrous consequences in the long term. This may compromise the predictable behavior of the robot, something crucial in some therapies, for example for autism [60].
Whether these autonomous robots will be considered agents with some juridical relevance is currently under debate. In most jurisdictions, several non-human entities have rights and obligations: corporations are considered legal persons; nature is protected; animals are considered sentient beings; and even a person not yet conceived can be taken into account for hereditary purposes (concepturus in Latin). In its latest proposal, the EP suggested “applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently” [15]. Such a proposal utterly shook part of the juridical community, to the extent that an open letter was written to the EC asking it to focus on the health and safety of humans, and not on robot or AI technologies.Footnote 5
Although there might be different strategies to address the responsibility gap [61, 62], the fact that a robot behaves differently from the designers’ intention should not be a reason to exempt them from being held responsible [24, 63, 64]. Such a decision is societally relevant and might have important consequences that need careful thought [23].
(Lack of) Employment for Humans
Challenges
The participants of the workshops highlighted positive and negative views of the economic impact of robots. A researcher working on robotic wheelchairs highlighted that some human–robot interactions could help elderly individuals be socially active and motivate them to train and do things by themselves. In this respect, the participant argued that the introduction of specific robots in healthcare could reduce caregivers’ workload, meaning that caregivers may have more time for other activities.
Another participant had a contrary vision, at least in the context of cognitive therapies. The participant explained that growing dependence on mental therapy and people’s intentional stance towards computers encourage the introduction of interactive agents and robots into psychiatry. The participant argued that the introduction of computers and interactive agents for mental therapy might lead to the substitution of human narrative therapists on the one hand and the exacerbation of patient issues on the other. Connected to the latter, the participant argued that narcissistic people who wish to talk about themselves while leaving concealed issues hidden in their self-narratives would appreciate these agents as complementing their distorted interpretation of psychological concepts, without achieving therapeutic effects. The participant pointed out that the cultural trend of social reductionism towards psychology might impact patients when interactive agents are introduced in mental therapies.
Some participants highlighted that there is a general belief that robots are going to replace human activity gradually. A participant wondered: Does the widespread implementation and introduction of robots mark the end of our civilized humanity? In general, the rest of the participants were not very fond of the “robots are taking over” scenario, especially because some of the attendees were roboticists developing this technology. Some of the participants wondered why humans needed to surrender to the robot take-over scenario and who was in charge of deciding that this will be the case. A participant pointed out that it is crucially important to develop a strategy for the appropriate implementation of these technologies to avoid apocalyptic scenarios and to ensure a return on investment. Other participants mentioned that the use of robotics in therapy implies in-depth cooperation between engineers and scientists of that particular field, which would increase the number of humans in the loop.
Other participants had a very different opinion. Japanese participants argued that seniors in Japan do not want to be looked after by persons who cannot speak proper Japanese, for instance, and that they did believe that robot technology could help solve the huge problem they face, namely the lack of workforce in care settings. Although tenderly framed, these participants agreed that the comment conveyed the impression that robot technology was surfacing soft but deeply rooted racist arguments. In other words, it was suggested that the employment of technology could be efficient not only in economic terms but also in enabling discriminatory practices. In this respect, other participants wondered what mechanisms exist to evaluate the need to incorporate robots into the workspace globally. They stressed that rehabilitation and social robots have moved from the stage of experimentation, prototyping, and testing to being clinical and educational work tools in our society; and that it had become evident to the robotics community that the usefulness of the robots largely depends on the people who are working with them.
Recommendations
Most of the recommendations called for the incorporation of humans into the design processes of the robot, including end-users, nurses, and doctors in a healthcare context. The participants referred to the design of collaborative systems, so-called cobots, and highlighted the importance of including such types of robots in the protected scope of current harmonized standards, which is lacking at the moment.
At the workshop in Japan, some researchers presented their solution. They introduced to the group a collaborative system called “Practical Intelligent Applications (PRINTEPS),” a platform for developing intelligent applications for co-operation between humans and machines. Via PRINTEPS, software modules related to knowledge-based reasoning, speech dialogue understanding, image sensing, and manipulation can be reconfigured. To promote human–robot interaction, the researchers use different robots for different purposes: the robot NAO takes charge of imparting knowledge, Sota (another robot) takes charge of progress checking, and Jaco2 takes charge of developing students’ interest. The researchers found that preparing the co-operation channels in advance could promote easy and quick use for each target purpose.
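The role-per-robot pattern the researchers describe can be sketched as a reconfigurable mapping from robot to function. PRINTEPS’s actual interfaces are not described here, so everything below is a hypothetical reconstruction; only the robot names come from the account above:

```python
from typing import Callable, Dict

# Hypothetical reconstruction of the role-per-robot pattern; only the robot
# names (NAO, Sota, Jaco2) come from the text, the rest is illustrative.

def impart_knowledge(topic: str) -> None:
    print(f"NAO explains: {topic}")

def check_progress(student: str) -> None:
    print(f"Sota checks the progress of {student}")

def develop_interest(activity: str) -> None:
    print(f"Jaco2 demonstrates: {activity}")

# Reconfigurable mapping from robot to role: swapping entries here echoes
# how modules can be reassigned to different purposes within the platform.
ROLES: Dict[str, Callable[[str], None]] = {
    "NAO": impart_knowledge,
    "Sota": check_progress,
    "Jaco2": develop_interest,
}

ROLES["NAO"]("the water cycle")
ROLES["Sota"]("student A")
ROLES["Jaco2"]("a grasping task")
```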
The overall idea during the workshops was the importance of keeping the human in the loop. In line with the GDPR’s provisions on automated decision-making, the participants were inclined to promote a more pan-human vision of robotics in sensitive contexts.
Discussion
Although much has been said concerning robots and labor, there is little empirical research on how new technologies, and in particular robots and artificial intelligence, affect the labor market. An exception is a recent study that investigated the effects of industrial robots on employment and wages in local labor markets in the United States from 1990 to 2007 [65]. The authors argue that, in a simple task-based model where the robot competes against the human in the performance of a specific task, robots have a positive impact on employment and wages because they increase productivity (what they call the productivity effect); but they also have a negative effect, which is the displacement of workers. By regressing the change in employment and wages on the exposure to robots in each local labor market, the authors showed that they can estimate the local labor market effects of robots [65]. They also highlighted that the number of job losses had been limited in the United States, mainly because there were relatively few robots in this period (1990–2007). In Germany, a macro-economic study found no evidence that robots lead to overall job losses [66]. However, the increasing introduction of robots shifts jobs from manufacturing to service professions. In addition, the authors found that those manufacturing workers who are most exposed to robots are the ones least likely to be replaced and that the introduction of robots might have a detrimental effect on wages [66]. Frey and Osborne [67: 261] concluded that “occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two.” The BBC application created after this study highlighted that “social workers, nurses, therapists and psychologists are among the least likely occupations to be taken over,” mainly because assisting and caring for other people involves empathy, an essential part of the job. A growing interest in the use of emotions and empathy in human–robot interaction for healthcare purposes might change this in the future.Footnote 6 Some researchers have pointed out that by allowing the robot to show attention, care, and concern for the user, as well as to engage in genuine, meaningful interactions, socially assistive robots can be useful as therapeutic tools [68]. However, the latest research points out that emotion research is not quite there yet [69]. Although facial expressions may convey important information for social communication, the way people communicate different emotional states differs considerably between cultures, situations, and people within a single situation [69].
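The identification strategy in [65] can be summarized in a stylized regression specification; the following is a simplified rendering for illustration, not the authors’ exact model:

$$
\Delta y_{c} = \alpha + \beta \, \text{Exposure}^{\text{robots}}_{c} + X_{c}'\gamma + \varepsilon_{c}
$$

where $\Delta y_{c}$ is the change in employment (or wages) in local labor market $c$, $\text{Exposure}^{\text{robots}}_{c}$ measures the local penetration of industrial robots, $X_{c}$ is a vector of controls, and $\beta$ captures the net of the productivity and displacement effects described above.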
The unclear economic impact of robots, particularly in therapy settings, aligns with the discussions in the workshops, where there were varied voices on the topic. However, most experts had a considerate and careful approach, again warning against apocalyptic scenarios. Still, specific challenges in care settings, such as patient preferences for robots instead of caregivers of certain national or racial backgrounds, as brought up by a participant in the workshop in Japan, merit further academic attention. A practical solution to detrimental economic effects is to consult closely with potential users, focusing on user-centered design (e.g., the PRINTEPS project, [70]).
Replacement of Human Interactions
Challenges
During all workshops, the organizers shared with the audience the concerns of the European Parliament that the inclusion of care robots might decrease human–human interactions. Some participants at JSAI-isAI argued that robots currently only have one-to-one interactions and cannot interact well with a group of people. In this respect, human interactions might not be reduced drastically, as social interactions frequently involve more than one person.
Some participants raised inter-generational concerns connected to this discussion: How do children, middle-aged, and elderly individuals perceive robot technology differently? Participants highlighted that, currently, the general population does not have access to robot technology, although they agreed on the importance of educating the population on its correct use. This comment arose from the account of a Japanese experiment in which children repeatedly beat up a robot in a mall [71].
Moreover, participants at NewFriends 2015 pointed to a lack of research about changes in people’s attitudes towards robots over time. A researcher specifically noted the absence of studies on the perceptions and attitudes of the middle-aged group concerning robotics. Connected to this, the question of an age limit (minimum or maximum) for using robots was discussed, as well as the question of whether therapy robots should be included at earlier stages for humans to adapt to a therapy environment led or shared with social robots.
Recommendations
The participants referred to different corpora that could be a sound basis for framing ELS questions, for instance, the United Nations’ Universal Declaration of Human Rights. The participants acknowledged that although one problem is that human rights are interpreted differently in various nations, this should not constitute a barrier to considering it as a possible framework.
Some participants mentioned new ways of using robot technologies. One participant at NewFriends 2016 believed that the current framework of responsibility for moral damage defaults to the compensation of non-material damage, which overlooks actual circumstances and future consequences (e.g., psychological or social impact) and does not acknowledge the source of damage, which is critical to evaluating the moral damage. In the future, social robots could incorporate learning algorithms oriented towards patterns in motor, verbal, psychological, and cognitive conversations to determine whether and how much damage a user experienced.
The same participant envisaged a future where individuals may be forced to deal with therapeutic software agents and robots due to social pressure that prescribes self-control over emotions and mental health. He suggested a more careful approach to the use of such technologies, not only because some people already feel anxiety towards robots, but also because these people may experience double-bind situations where mental problems cannot be solved regardless of whether or not they use the therapeutic system (i.e., social pressure prohibits them from removing themselves from the situation).
Discussion
Given the limited adoption of social robots at this time, we lack evidence of the long-term impact of healthcare and therapy robots. The central question of whether long-term interaction with robots reduces human–human interaction is thus hard to answer. However, studies with therapy and caregiver robots, such as Paro, suggest positive effects on users, particularly among specific population groups such as children with autism spectrum disorder and older adults with dementia.
Shibata and Wada [72], in a mini-review of previous research on the topic, show how Paro has effects similar to animal therapy for seniors. Similarly, Broekens, Heerink, and Rosendal [73] discuss the positive wellbeing effects of introducing assistive social robots to older patients, despite pointing to the limited generalizability of previous studies. Since most research has been done on select population groups (mostly young and old people), we know less about their impact on social interactions more generally. The workshop participants stressed the need to find out more about middle-aged individuals. They also pointed to the limited capabilities of social robots in group settings, where the robots have to interact with more than one individual. Thus, robots might be more likely to replace one-to-one human interaction than group or team interaction. In any case, individuals and patients should have the choice to opt out of interacting with robots, especially in double-bind situations.
The European Parliament [15], a recent report by the Rathenau Institute [74], and part of the literature reflect on the fact that the use of robots could exacerbate social isolation and involve a loss of dignity [75]. Depending on how robots are designed, they might promote, exacerbate, or reduce contact between humans. For instance, a child with autism spectrum disorder interacting with a robot might end up socializing more with the robot than with other peers. However, if the robot is a social mediator, in the sense that it keeps encouraging the child to “ask your classmate, ask your teacher”, then the child will probably understand that to achieve their goal it is better to learn how to interact with another person rather than with the ‘dumb’ robot [76]. The Rathenau report calls on the Council of Europe to clarify to what extent the right to respect for family life should also include the right to meaningful contact [74].
Engineers often develop robots primarily on the premise that the product should appeal to as many end-users as possible. However, while such a procrustean design choice seems legitimate from a return-on-investment point of view, it might not do justice to end-users who are more concerned about how the device helps them in their particular situation. The lack of personalization and adaptation to user needs might impede the effective engagement and successful implementation of such technologies in the long run, thus damaging both interested parties. There are already different approaches geared towards integrating users into design processes [77, 78]. Such initiatives could facilitate access to more personalized technology.
Summary and Meta-Challenges: Uncertainty and Responsibility
Given the breadth and complexity of ELS challenges and recommendations, we provide a summary table of the key discussion points (Table 1) before concluding the article with implications and limitations.
Table 1 Overview of social robot challenges and proposed recommendations

While the key aspects of discussion cluster roughly into the five themes outlined above, two overarching challenges, or meta-challenges, are worth pointing out. The first meta-challenge is uncertainty. It becomes apparent in all themes, yet in different forms. In privacy and security, it relates to a lack of common ground and understanding of the concept itself. In the legal area, uncertainty itself provides the name of the challenge; here it refers to a lack of guidance on which laws apply and how the law can help manage expectations, for example through certification. Within the theme of autonomy and agency, uncertainty is tied to philosophical questions on the ontological status of social robots. When it comes to employment effects, the actual future economic impact of social robots is a big uncertainty. Finally, uncertainty in human–robot interaction is apparent in at least two of the three sub-challenges in Table 1: the role of religions and HRI in group settings. Uncertainty as an overarching theme reflects a lack of ethical, legal, and social standards for service robot technologies that are deployed in therapy. Such uncertainty opens up opportunities for unethical behavior that leverages information asymmetries and exploits technical or legal loopholes (see [79] for a good example of how such loopholes have been exposed). Cross-disciplinary research efforts, including technical and non-technical scholars, and the popularization of social robots, for example through their introduction in educational institutions, could help address this challenge. A critical public sphere with an engaged media system could further alleviate uncertainties and reveal malpractices, problems, and ethical loopholes.
The second meta-challenge is responsibility. Again, this aspect is present across the five themes. In privacy and security, responsibility refers to control, transparency, and accountability. The GDPR has clarified some aspects of responsibility in the area of data protection, but many questions remain for the specific technology of social robots [80]. In the legal area and in the themes of autonomy and agency as well as human–robot interaction, responsibility connects to practical and moral questions when it is not clearly attributable. Examples include co-creation between humans and social robots, co-decisions on important matters, or the use of social robots in group settings. Here, the matter of assigning responsibility for both credit (who should get the reward) and blame (who should get the punishment) is murky. Within the theme of employment effects, questions arise concerning who should be responsible for addressing tensions that arise from automatization through social robots. Recommendations for addressing the meta-challenge of responsibility are based on human-centered design, clear decision-making rules, and appropriate sanctions. Across both meta-challenges, it is clear that technological, legal, and social approaches need to complement each other.