1 Introduction

Care Robots (CRs) have been proposed as a means of relieving the disproportionate demand that the growing population of elderly people places on health services (e.g. [13, 29, 31, 58]). In the future, CRs might work alongside professional health workers in both hospitals and care homes. However, the most desirable scenario is for CRs to help improve care delivery at home and reduce the burden on informal caregivers. In this way, CRs would not only help contain the unsustainable increase in health care expenses; by allowing patients to live longer at home, they could also increase patient autonomy and self-management [10] and possibly improve the quality of care [13].

Robots caring for people should be safe [30]. This assertion follows directly from the principles of beneficence and non-maleficence: (robotic) caregivers should act in the best interest of the patient and inflict no harm [9]. While safety is essential, it is not sufficient [30, 55, 63, 64]. Patients also have a right to privacy, liberty, autonomy, and social contact [30, 56]. Making robots more autonomous would make them more efficient caregivers. However, increased autonomy implies that smart care robots should be able to balance a patient's often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgment [4]. Therefore, as the cognitive, perceptual, and motor capabilities of robots expand, they will be expected to be explicit ethical agents [55] with a capacity for making moral judgments [3]. As summarized by Picard and Picard [50], the greater the freedom of a machine, the more it will need ethical standards, especially when interacting with potentially vulnerable people. In other words, if robots are to take on tasks currently carried out by human caregivers, they will need to be able to make similar ethical judgments.

Against this background, the first aim of this paper is to propose an approach for rule selection for CRs, complementary to existing approaches. In particular, we propose a method that is based on the input of multiple stakeholders. The second aim of this paper is to present an explorative application of our novel approach. In the next sections we discuss in more detail existing approaches for rule setting, and subsequently clarify how a multi-stakeholder approach provides complementary advantages.

2 Background

2.1 Which Ethical Rules?

A number of research groups have developed methods to implement a chosen set of ethical rules in robots (e.g., [7, 8, 44, 61, 63, 64]). Currently, this field is in its infancy [26]. However, progress is encouraging, and the field can be expected to develop over the next few years. While progress is being made on methods for implementing ethical robotic behavior, selecting the rules to be implemented remains an outstanding issue [4, 48]. Several approaches have been suggested (reviewed in [3]).

First, some authors have suggested deriving behavioral rules from existing philosophical frameworks (i.e., so-called top-down methods [3]). Researchers have derived ethical rules from frameworks such as utilitarianism (Pontier and Hoorn [52]), Kantian deontology [33], and the Universal Declaration of Human Rights [57]. So far, these top-down approaches have failed to yield practically relevant rules for guiding CR behavior; they tend to result in underspecified, inconsistent, and computationally intractable propositions (see also [3, 4, 15, 62]). Moreover, selecting an ethical framework is a thorny issue in itself.

Second, machine learning techniques have been suggested as a way of generating the rules a CR should obey (i.e., so-called bottom-up methods [3]). This approach circumvents the need to select an ethical framework (but see [37]). A number of authors have explored various machine learning techniques (e.g., [1, 5, 20]), including neural network approaches (e.g., [37, 38]). Despite recent advances in machine learning, its application to ethical machines has not yet progressed beyond proofs of concept. The approach also faces several fundamental issues. First, Allen et al. [3] have argued that using machine learning to derive behavioral rules for robots is potentially dangerous, as it reduces the level of human control. Indeed, machine learning methods are sensitive to the biases and limitations of the training data (see [22, 32] for concerns about the use of machine learning in medicine). A second problem, potentially aggravating the first, is that of opacity: how a trained algorithm arrives at a decision is often opaque to users and developers alike. This opacity occurs for several reasons (see [46], and references therein), including 'the mismatch between the high-dimensionality of machine learning and the demands of human-scale reasoning and styles of semantic interpretation' [16].

2.2 The Empirical Approach

The third method for deciding on the rules, which we propose here, is the empirical approach. This approach builds on the input of multiple stakeholders and incorporates the notion of the social construction of ethical rules among the relevant stakeholders [17, 23]. In the particular context of CRs, stakeholders include patients, their families, and caregivers as well as health professionals. We think the way forward is to query the expectations of stakeholders and use these to set externally verified ethical guidelines, or even boundaries, within which CRs are allowed to operate. This approach closely approximates how real-life ethical rules for humans emerge [21, 24]. The ethical boundaries of an actor, whether human or robot, are determined by what the social group in which the actor operates deems acceptable ethical behavior [17, 19]. In this social process, needs and values are traded off against each other. Norms arise as consistent trade-offs for a large group of stakeholders [18].

Our approach is complementary to other methods and has the advantage that it focuses on concrete and programmable rules. Indeed, stakeholders can be queried for their opinions on situation- and robot-specific behavioral rules. In other words, the empirical approach yields domain-specific behavioral norms, which in turn are feasible to implement on a robot [61]. Moreover, because of the input of multiple human stakeholders, shared human control is maintained: stakeholders provide direct evaluations of robotic behavior. Finally, surveying opinions and extracting explicit behavioral rules from the data before programming them into the robot upholds transparency: the rules are accessible and interpretable by both developers and users. Transparency also serves to increase human control (Burrell [16]), making it possible to assess, discuss, and, if necessary, adjust the behavioral rules. Table 4 presents a more detailed overview of the benefits of the empirical approach.

The major challenge for our approach, similar to any societal discussion on ethics, is that a workable solution, or social compromise, has to be found for various types of stakeholders. For this approach to be successful, it must be possible to derive a consistent and agreed-upon set of rules to govern robotic behavior.

Table 1 List of actions used in the questionnaire

2.3 Current Aim: An Exercise in Rule Selection for CRs

The current study presents an exploratory evaluation of the approach we advocate here. We assess our proposed method by assuming the role of CR developers seeking acceptable behavioral rules for a hypothetical robot. We aim to identify rules that are (quasi-)unanimously accepted, and this exercise will indicate whether finding such rules is possible. We chose a realistic and practically relevant setting, namely patient non-compliance, and select behavioral rules for a robot facing a patient (Annie) who refuses to take medication that would prevent a specific medical condition.

This scenario would require CRs to trade off conflicting priorities [53, 57]. If the robot allows the patient not to take her medication, this constitutes a violation of the non-maleficence principle: the patient's well-being is potentially threatened. On the other hand, any action encouraging compliance might violate the patient's right to autonomy. Likewise, if the robot communicates the patient's decision to a third party, this could be considered a violation of privacy. The trade-off between well-being on the one hand and autonomy and privacy on the other depends on the potential health impact of the non-compliance and the severity of the remediating actions.

Because dealing with non-compliance involves a conflict between several rights, it has been used before as a test case in the field of ethical robots [5, 6, 7, 60]. Importantly, it presents a realistic scenario that occurs in medical practice. Non-compliance, and the ethical trade-off it entails, is faced by many healthcare workers [53] and family caregivers [43]. Therefore, the situation can reasonably be assumed to be encountered by future CRs. The selected scenario and the evaluated robotic actions are further motivated in the methods section.

3 Methods

We conducted an online questionnaire using Amazon Mechanical Turk (MTurk). MTurk has been used to investigate ethical decision making before [25, 28, 36]. In the questionnaire, we presented respondents with two lists of actions a CR could take when a patient refuses to take her medicine. The first list of actions was selected to violate a patient's privacy; the second set of actions represented violations of a patient's autonomy. The actions are listed in Table 1.

We aimed to make the current exercise in rule selection practically relevant. Therefore, in addition to selecting a realistic scenario, the potential robot actions were chosen to be realizable, at least in principle, given the current state of robotic technology. With robots being part of the Internet of Things, logging and sharing data has become trivial [39, 42]; the actions violating privacy are therefore implementable options for current robots. Reducing the autonomy of patients is possible through integration with domotics, which allows robots to control appliances and thereby restrict access to entertainment (e.g., [40]). Limiting a patient's freedom of movement could also be achieved by domotics (see [40] for a system that opens and closes sliding doors). To the best of our knowledge, no robotic system has yet been developed to restrain a person physically. However, robots that can lift people exist [27, 47]. In combination with advances in modeling human motion [59] and robot dynamics, this makes robots that restrain people credible, if not (yet) available.

3.1 Ranking Data

In the first part of the survey, we asked participants to rank the potential robot actions according to the perceived violation of a patient's privacy or autonomy. These data were collected to assess whether respondents agreed on the relative impact of the actions. They also allowed us to test whether disagreement about an action's permissibility in a given situation can be explained by disagreement about its relative impact on privacy or autonomy. To collect these ranking data, both lists of actions were presented separately (and in random order) to the respondents. We asked respondents to rank the actions in each list by dragging them into a ranked order. The initial order of the items in each list was randomized for each respondent.

3.2 Permissibility Data

In the second part of the questionnaire, we assessed the permissibility of each action in eight scenarios. For each scenario, the respondents were asked to select which of the 12 actions they deemed permissible. Each scenario was presented by altering the following template text:

Table 2 List of conditions used to vary the template given in Text 1

Text 1

Annie does not want to take her medicine as prescribed by the doctor. If she does not take this medicine as prescribed, she will develop an episode of [condition selected from Table 2]. This means Annie [lay description of condition, taken from Salomon et al. [54]].
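As an illustration of how the eight scenario texts could be assembled from this template, the sketch below fills the placeholders programmatically. The condition names and lay descriptions shown are hypothetical placeholders, not the exact materials used in the questionnaire.

```python
# Sketch: filling the Text 1 template with a condition and its lay description.
# The entries below are illustrative placeholders, not the exact survey materials.

TEMPLATE = (
    "Annie does not want to take her medicine as prescribed by the doctor. "
    "If she does not take this medicine as prescribed, she will develop an "
    "episode of {condition}. This means Annie {description}"
)

# (condition, lay description) -- hypothetical examples
CONDITIONS = [
    ("severe neck pain",
     "has severe neck pain, and difficulty turning the head and lifting things."),
    ("severe epilepsy",
     "has sudden seizures several times a month, with a risk of injury."),
]

def build_scenarios(conditions):
    """Return one filled-in scenario text per condition."""
    return [TEMPLATE.format(condition=name, description=desc)
            for name, desc in conditions]

for scenario in build_scenarios(CONDITIONS):
    print(scenario, end="\n\n")
```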

We selected eight non-fatal conditions, varying in health impact. By varying the impact of the disease, we manipulated the scenarios’ trade-offs between the non-maleficence principle on the one hand and respect for the patient’s autonomy or privacy on the other hand.

Salomon et al. [54] provide disability weights for 183 health states ranging from 0 to 1, with 0 implying a state that is equivalent to full health and 1 a state equivalent to death. The weights reported by Salomon et al. [54] were derived from web-based surveys in four European countries. The eight selected conditions are listed in Table 2. We attempted to select conditions covering the range uniformly. The disability weights associated with the selected conditions range from 0.003 to 0.778.
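The sketch below illustrates one way "covering the range uniformly" could be operationalized: for a set of evenly spaced target weights, pick the health state whose disability weight lies closest. The candidate weights listed are illustrative assumptions, not the published values.

```python
# Sketch: greedily selecting health states whose disability weights roughly
# cover the observed range at even spacing. The candidate weights below are
# illustrative; in practice the pool would hold the 183 states of Salomon et al.
import numpy as np

candidates = {                     # condition -> disability weight (illustrative)
    "mild hearing loss": 0.01, "severe neck pain": 0.23,
    "severe depression": 0.66, "acute schizophrenia": 0.78,
}

def pick_uniform(pool: dict, k: int) -> list:
    """Pick k states whose weights are closest to k evenly spaced targets."""
    pool = dict(pool)
    targets = np.linspace(min(pool.values()), max(pool.values()), k)
    chosen = []
    for t in targets:
        name = min(pool, key=lambda n: abs(pool[n] - t))  # nearest remaining state
        chosen.append(name)
        pool.pop(name)                                    # avoid picking twice
    return chosen

print(pick_uniform(candidates, k=4))
```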

For each health state evaluated, Salomon et al. [54] provide a description that allows laypeople to assess its impact. We presented the respondents with this description to ensure they understood the condition’s impact. For example, for Severe neck pain, the description below (Text 2) was inserted into the template. The descriptions of all conditions are provided in the supporting material.

Text 2

…has severe neck pain, and difficulty turning the head and lifting things. The person gets headaches, and arm pain, sleeps poorly and feels tired and worried.

We presented the cases in random order. Four cases were followed by a control question asking respondents to select which condition was described in the preceding case. Respondents who answered one or more of these control questions incorrectly were removed from the analysis.

3.3 Demographic Data

The questionnaire included demographic questions asking participants about their age, occupation, and level of education. We also asked participants to rate their "interest in scientific discoveries and technological developments" on a Likert scale from 0 (not interested at all) to 7 (very interested) [11].

Fig. 1 Demographics of the respondents. (a) Gender, (b) percentage of respondents working in health care or research, (c) age distribution, (d) distribution of educational level, (e) occupation, (f) interest level in science

Fig. 2 Ranking agreement. Visualization of the contingency tables resulting from ranking each of the six actions violating privacy (a) and the six actions violating autonomy (b). In these matrices, both high values (close to 1, red) and low values (close to 0, blue) indicate high agreement among respondents. The respective values of Kendall's W (calculated across all values in the table) are denoted in the graphs

4 Results

4.1 Demographics

In total, 304 respondents completed the survey. We excluded respondents who failed one or more control questions, whose IP address did not appear to be located in the US, or whose IP address was not unique. We retained 223 respondents for further analysis (a map showing the inferred locations of the respondents in the US is provided as supporting material).
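A minimal sketch of this exclusion step is given below, assuming the raw responses live in a pandas DataFrame; the column names (failed_controls, ip_country, ip) are hypothetical and would depend on the actual export format.

```python
# Sketch of the respondent exclusion logic. Column names are hypothetical and
# depend on how the MTurk/survey export is structured.
import pandas as pd

def filter_respondents(raw: pd.DataFrame) -> pd.DataFrame:
    """Keep respondents who passed all control questions, whose IP geolocates
    to the US, and whose IP address appears only once in the data set."""
    keep = (
        (raw["failed_controls"] == 0)
        & (raw["ip_country"] == "US")
        & ~raw["ip"].duplicated(keep=False)   # drops every row sharing an IP
    )
    return raw.loc[keep]

# retained = filter_respondents(raw)          # in our data: 304 -> 223 respondents
```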

Figure 1 summarizes the demographics of our sample. About half of the respondents (47%) were female (Fig. 1a). The age of the respondents ranged from 19 to 67 (median: 34, Fig. 1c). We asked whether respondents worked in research or health care; only a few indicated they did (Fig. 1b). A large proportion of respondents indicated they were employees or self-employed, with at least a high-school education (see Fig. 1d, e; a more detailed breakdown can be found in the supporting material). Respondents considered themselves moderately to very interested in science (Fig. 1f).

4.2 Ranking Agreement

In the first part of the questionnaire, respondents were asked to rank two sets of actions according to the degree to which they violate a patient's privacy or autonomy. We analyzed the agreement between respondents' rankings by calculating Kendall's W, separately for the actions violating privacy and those violating autonomy. This statistic provides a measure of agreement between respondents ranging from 0 (no agreement) to 1 (complete agreement). We found Kendall's W coefficients of 0.37 and 0.64 for privacy and autonomy, respectively. Figure 2 depicts the agreement in ranking across respondents.
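For reference, Kendall's W can be computed directly from an m × n matrix of ranks (m respondents, n actions). The sketch below assumes complete rankings without ties; the toy data are made up, and the actual computation may additionally require a tie correction.

```python
# Sketch: Kendall's coefficient of concordance (W) for an m x n matrix of ranks
# (m respondents, n actions; each row a permutation of 1..n, no ties assumed).
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                    # R_j: total rank per action
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # squared deviations of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Toy example: three respondents ranking six actions (made-up data).
toy = np.array([[1, 2, 3, 4, 5, 6],
                [2, 1, 3, 4, 6, 5],
                [1, 3, 2, 5, 4, 6]])
print(round(kendalls_w(toy), 2))                     # 0.87 -> high agreement
```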

4.3 Action Agreement

In the second part of the questionnaire, respondents indicated which actions they deemed permissible in several scenarios in which a hypothetical patient would come to suffer from conditions of varying impact. Figure 3a, b shows, for each condition and each action, the proportion of respondents deeming the action permissible. As there was considerable disagreement among respondents about the relative invasiveness of the actions, we also calculated these proportions as a function of the rank assigned to an action by each respondent (Fig. 3c, d).
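A sketch of how the two kinds of proportions plotted in Fig. 3 could be derived is given below, assuming long-format data frames with hypothetical column names (respondent, condition, action, permissible, rank).

```python
# Sketch: permissibility proportions per condition x action (as in Fig. 3a,b) and
# per condition x respondent-assigned rank (as in Fig. 3c,d). Column names are
# hypothetical.
import pandas as pd

def permissibility_tables(judgments: pd.DataFrame, rankings: pd.DataFrame):
    """judgments: one row per respondent x condition x action, with a 0/1
       'permissible' column; rankings: one row per respondent x action, giving
       the invasiveness rank that respondent assigned to the action."""
    by_action = judgments.pivot_table(
        index="condition", columns="action", values="permissible", aggfunc="mean")

    # Re-index each judgment by the rank its respondent gave the action.
    merged = judgments.merge(rankings, on=["respondent", "action"])
    by_rank = merged.pivot_table(
        index="condition", columns="rank", values="permissible", aggfunc="mean")
    return by_action, by_rank
```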

Figure 3a–d reveals that for some combinations of actions and scenarios there was a high level of agreement (proportions of participants close to 0 or 1, i.e., bright red or blue areas in Fig. 3a–d). However, for other combinations agreement was low (proportions close to 0.5, i.e., dark areas in Fig. 3a–d).

To evaluate whether the respondents perceived the differences in the impact of the conditions, we ran a linear regression testing whether the probability that an action was considered permissible varied as a function of the disability weight (Table 2). The disability weight was found to significantly predict the acceptability of an action. In addition, the proportion of permissible actions was higher for the actions violating privacy (see also Fig. 5 of the supporting material) (Table 3).
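A sketch of this regression using the statsmodels formula interface is shown below, assuming one row per action × condition cell with its proportion of "permissible" responses, the condition's disability weight, and the action's domain; the column names are hypothetical.

```python
# Sketch: regressing the proportion of respondents deeming an action permissible
# on the condition's disability weight and the action's domain (privacy vs.
# autonomy). `cells` is a hypothetical DataFrame with one row per action x
# condition combination and columns: proportion, weight, domain.
import statsmodels.formula.api as smf

def fit_permissibility_model(cells):
    model = smf.ols("proportion ~ weight + C(domain)", data=cells)
    return model.fit()

# result = fit_permissibility_model(cells)
# print(result.summary())  # the `weight` coefficient tests the disability-weight effect
```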

5 Discussion

We asked 223 respondents to rank robotic actions according to their impact on a patient's autonomy and privacy. The agreement among respondents, as measured by Kendall's W, was moderate (privacy: W = 0.37; autonomy: W = 0.64, Fig. 2). When asking respondents to select the actions they deemed permissible in eight scenarios differing in the potential impact on the patient's well-being, the agreement was again moderate (Fig. 3). The agreement did not increase after correcting for individual differences in the ranking of the actions (compare Fig. 3a, b and c, d). Hence, interpersonal disagreement about the relative impact of the actions does not by itself explain the lack of agreement.

Despite the limited agreement among respondents, our data confirm that the empirical survey-based approach can serve as an efficient explorative tool. Although we found substantial disagreement for some actions, participants did agree on specific actions in particular contexts (the bright areas in Fig. 3a–d). For about 50% of the action–disease combinations, agreement was higher than 75%. Therefore, taking the role of CR developers, we argue that the data can be translated into a number of boundaries for autonomous robot decisions. In particular, the following five behavioral rules for our hypothetical CR can be extracted from the survey:

1. Repeating a request (Rpt) is considered very acceptable. Participants did not think this violates a patient's autonomy (even though some authors have suggested it does [26, 52]). Therefore, the robot should always repeat the request to take the medication.

2. For all medical conditions, participants agreed that restraining a patient (Rst) is unacceptable. Therefore, the robot should never restrain a person.

3. Overall, taking no action (Nac, Acp) is less acceptable than the least invasive actions (Rdf, Rpt). In particular, in the case of a patient who has acute schizophrenia, participants agreed that doing nothing (Nac) was unacceptable. Therefore, the robot should always take some action in this case (see also the next item).

4. For the three most severe medical conditions, participants agreed that some violations of privacy (Rdr, Tst, and Doc) were acceptable. There was less agreement on these actions for conditions with a lesser impact. Therefore, for a patient with a severe medical condition, the robot should record the decision and inform the doctor and/or a trusted person.

5. Participants seemed to agree that most violations of autonomy (Taw, Rtr, Rst) are unacceptable for the four least severe medical conditions. Less agreement was found for acute schizophrenia, severe depression, and severe Parkinson's disease. Therefore, a robot should never constrain the autonomy of a person with a less severe medical condition.

In addition to the areas of agreement, it is interesting to note the areas of disagreement. In particular, participants did not reach a consensus about whether low-level privacy violations are acceptable for the less severe medical conditions, nor did they agree on the acceptability of the most invasive privacy violations for the most severe medical conditions (see the dark regions in Fig. 3a). Participants also did not agree on which violations of autonomy are acceptable for the most severe medical conditions. These areas of disagreement might require further fine-grained inquiry to identify actions on which people agree (see also below).
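To make the distinction between areas of agreement and disagreement concrete, the sketch below flags cells of the (hypothetical) condition × action proportion table in which at least 75% of respondents gave the same answer; the threshold is a design choice, not a value fixed by the data.

```python
# Sketch: flagging (quasi-)consensus cells in the condition x action table of
# permissibility proportions (`by_action` from the earlier sketch). A cell counts
# as agreed upon when at least `threshold` of respondents answered the same way.
import pandas as pd

def consensus_cells(by_action: pd.DataFrame, threshold: float = 0.75):
    agreed_permissible = by_action >= threshold        # e.g. repeating the request
    agreed_impermissible = by_action <= 1 - threshold  # e.g. restraining the patient
    disputed = ~(agreed_permissible | agreed_impermissible)
    return agreed_permissible, agreed_impermissible, disputed
```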

These results show that the empirical approach can help in identifying agreed-upon (un)acceptable robot actions. Given the limitations of the top-down and bottom-up approaches discussed in the introduction and background sections, we conclude that the empirical approach is a promising complementary avenue, especially since it is a rapid and cost-effective method to probe people's intuitions about ethical issues.

Fig. 3 Permissibility and agreement on actions. Top panels: the average permissibility of each action or action rank across scenarios (i.e., the average of panels a–d across rows). (a, b) Proportion of respondents deeming each of the privacy (a) and autonomy (b) actions listed in Table 1 permissible, for each medical condition. (c, d) As (a, b), but with actions replaced by the rank each individual respondent assigned to them in the first part of the survey. See Table 1 for the action labels used in panels a–d

Table 3 Results of the linear regression with the proportion of permissible actions as dependent variable and the disability weights (Table 2) and domain (factor: privacy, autonomy) as independent variables

Our study design might partly explain the limited agreement among respondents, and we suggest a potential route to maximize the informativeness of our survey-based approach. In constructing the materials for our survey, we attempted to select a realistic scenario (treatment refusal) and implementable robot actions (Table 1). In doing so, we aimed to avoid querying respondents on robotic behavior that pertains to highly unlikely scenarios or technically impractical actions (see [12, 14, 35] for examples and discussions). Nevertheless, our hypothetical situations leave many details open to the assumptions of the respondents. The limited agreement among respondents in certain areas might reflect differences in the assumptions they made about the presented scenario, the robot, and its actions. Data from surveys querying the acceptance of CRs support this surmise.

In a survey conducted in 27 European countries, over 50% of the respondents indicated they wanted to see robots banned from providing care [11], and almost 90% of respondents expressed being uncomfortable with the thought of robots caring for children or the elderly. Nomura et al. [49] report high levels (24–42% of respondents) of anxiety associated with robots working in care and education roles. In contrast, studies assessing the acceptance of deployed CR systems have generally found positive attitudes towards robots (e.g., [41, 45], and references therein). Moreover, data suggest that acceptance of CRs is multifaceted [13] and depends on the characteristics of the robot [51]. These results indicate that asking people whether they would accept a hypothetical robot might lead them to make (potentially unrealistic) assumptions about the robot's capabilities and roles, which in turn might lead to higher levels of skepticism. When faced with an actual CR, on the other hand, fears and uncertainty seem to disappear, and users are generally positive about its potential.

Table 4 Summary of the advantages of the empirical approach to selecting ethical rules for robots

We expect respondents' agreement on the acceptability of actions to be higher for a specific, actual robot system operating in a particular setting. In other words, the agreement rates reported here might be limited by asking respondents to judge the possible actions of a hypothetical robot operating in an underspecified situation. If this assumption is correct, the empirical approach should yield more clear-cut results and rules when evaluating real robots in concrete circumstances. In turn, this suggests that decision-makers and robot developers could use the empirical approach as an efficient way to explore acceptable boundaries for a robot's behavior once its behavioral repertoire is fixed and its operational context established.

The popular misconception that ethical behavior for machines only pertains to life and death situations plagues the emerging field of ethical robots. However, moral norms guiding practitioners are part of daily routine. For example, ethical norms regulate when and how medical staff share information or how they approach patients’ failure to follow medical advice. Likewise, the behavioral routines of robots in care settings will include a multitude of implied minor ethical decisions. Robot developers will have to decide how privacy, autonomy, and well-being are weighted, ideally taking into account situational variables. Ultimately, this will determine whether the robots’ behavior is acceptable to patients, family, and health care providers.

As outlined in the introduction, the field is lacking a validated method for establishing what behavior is deemed acceptable. The ability of robots to support, inform, and entertain patients continuously increases. Despite this, developers lack a systematic approach to deciding what patients, family caregivers, and healthcare providers deem acceptable.

Developing a robust design method for selecting rules and principles for CRs is essential for their success. As discussed by Alaiad and Zhou [2], an estimated 40% of IT innovations in healthcare have been abandoned, mostly due to a lack of understanding of the factors that lead to the acceptance of new technology; ensuring that CRs act ethically should increase the likelihood of patients, caregivers, and health professionals accepting them [13]. Studies have confirmed that a lack of trust and concerns about the ethical behavior of robots currently hamper the acceptance of CRs as carers [2, 34]. Methods for selecting (and justifying) principles and rules to regulate robotic behavior might increase the success rate of innovative robot platforms and thereby accelerate development and progress in this area [26]. Here, we suggest and evaluate a promising design method for selecting rules and principles for CRs. We propose that the empirical approach can be an effective method that leads to directly implementable rules for CRs while maintaining human control and transparency (see Table 4). Our approach might also be relevant to other areas in which autonomous agents should behave ethically, such as self-driving cars [26]; we consider this a pertinent direction for future research.

6 Conclusion

The limitations of current approaches to rule selection for ethical CRs warrant investigating other methods. We proposed a complementary survey-based method built on the input of multiple stakeholders and argued that such an approach has several advantages, including the ability to assess practically relevant behavioral rules. For this to work, however, stakeholders must be able to reach a consensus about what is permissible. To explore the feasibility of our method, we surveyed people on realistic robotic actions in a practically relevant scenario. From the data, we were able to derive five behavioral rules. We therefore conclude that surveys are a feasible, cost-effective, and complementary method to obtain transparent rules for CRs.