Which Ethical Rules?
A number of research groups have developed methods to implement a chosen set of ethical rules in robots (e.g., [7, 8, 44, 61, 63, 64]). Currently, this field is in its infancy [26], but progress is encouraging, and the field can be expected to develop over the next few years. While progress is being made on methods for implementing ethical robotic behavior, selecting the rules to be implemented remains an outstanding issue [4, 48]. Several approaches have been suggested (reviewed by [3]).
First, some authors have suggested deriving behavioral rules from existing philosophical frameworks (i.e., so-called top-down methods [3]). Researchers have derived ethical rules from frameworks such as utilitarianism [52], Kantian deontology [33], and the Universal Declaration of Human Rights [57]. So far, these top-down approaches have failed to yield practically relevant rules for guiding CR behavior: they tend to result in underspecified, inconsistent, and computationally intractable propositions (see also [3, 4, 15, 62]). Moreover, selecting an ethical framework is a thorny issue in itself.
Second, machine learning techniques have been suggested as a way of generating the rules a CR should obey (i.e., so-called bottom-up methods [3]). This approach circumvents the need to select an ethical framework (but see [37]). A number of authors have explored various machine learning techniques (e.g., [1, 5, 20]), including neural network approaches (e.g., [37, 38]). Despite recent advances in machine learning, its application to ethical machines has not yet progressed beyond proofs of concept, and the approach faces several fundamental issues. First, Allen et al. [3] have argued that using machine learning to derive behavioral rules for robots is potentially dangerous because it reduces the level of human control. Indeed, machine learning methods are sensitive to the biases and limitations of the training data (see [22, 32] for concerns about the use of machine learning in medicine). A second problem, potentially aggravating the first, is opacity: how a trained algorithm arrives at a decision is often opaque to users and developers alike. This opacity arises for several reasons (see [46] and references therein), including ‘the mismatch between the high-dimensionality of machine learning and the demands of human-scale reasoning and styles of semantic interpretation’ [16].
The Empirical Approach
A third method, which we propose here, is the empirical approach to deciding on the rules. This approach builds on the input of multiple stakeholders and draws on the notion that ethical rules are socially constructed among the relevant stakeholders [17, 23]. In the particular context of CRs, stakeholders include patients, their families, and caregivers, as well as health professionals. We think the way forward is to query the expectations of these stakeholders and use them to set externally verified ethical guidelines, or even boundaries, within which CRs are allowed to operate. This approach closely approximates how real-life ethical rules for humans emerge [21, 24]. The ethical boundaries of an actor, regardless of whether it is a human or a robot, are determined by what the social group in which the actor operates deems acceptable ethical behavior [17, 19]. In this social process, needs and values are traded off against each other, and norms arise as consistent trade-offs across a large group of stakeholders [18].
Our approach is complementary to the other methods and has the advantage of focusing on concrete and programmable rules. Indeed, stakeholders can be queried for their opinions on situation- and robot-specific behavioral rules. In other words, the empirical approach yields domain-specific behavioral norms, which in turn are feasible to implement on a robot [61]. Moreover, because multiple human stakeholders provide direct evaluations of robotic behavior, shared human control is maintained. Finally, surveying opinions and extracting explicit behavioral rules from the data before programming them into the robot upholds transparency: the rules are accessible and interpretable by both developers and users. Transparency also serves to increase human control [16], allowing developers and users to assess, discuss, and, if necessary, adjust the behavioral rules. Table 4 presents a more detailed overview of the benefits of the empirical approach.
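As a purely illustrative sketch (our own, not drawn from the cited work), the kind of explicit, interpretable rule set the empirical approach aims for could be encoded as a simple lookup table in which each candidate robot action carries the level of stakeholder approval it received in a survey. The situation labels, action names, approval values, and acceptance threshold below are all hypothetical.

```python
# Illustrative sketch only: a hypothetical, survey-derived rule table for a care robot.
# Situation labels, action names, and approval values are invented for this example.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    situation: str   # e.g., severity of the health consequence of non-compliance
    action: str      # candidate robot action
    approval: float  # fraction of surveyed stakeholders approving the action

# Hypothetical aggregated survey results.
SURVEY_RULES = [
    Rule("low_risk_refusal",  "remind_later",     0.97),
    Rule("low_risk_refusal",  "notify_physician", 0.41),
    Rule("high_risk_refusal", "remind_later",     0.95),
    Rule("high_risk_refusal", "notify_physician", 0.93),
]

APPROVAL_THRESHOLD = 0.90  # only (quasi-)unanimously accepted actions are permitted

def permitted_actions(situation: str) -> list[str]:
    """Return the actions stakeholders (quasi-)unanimously accept in a given situation."""
    return [r.action for r in SURVEY_RULES
            if r.situation == situation and r.approval >= APPROVAL_THRESHOLD]

print(permitted_actions("high_risk_refusal"))  # ['remind_later', 'notify_physician']
print(permitted_actions("low_risk_refusal"))   # ['remind_later']
```

Because the rules remain an explicit, inspectable data structure rather than learned weights, developers and stakeholders can audit and revise them directly, which is the transparency benefit argued for above.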
The major challenge for our approach, as in any societal discussion on ethics, is that a workable solution, or social compromise, has to be found across the various types of stakeholders. For the approach to be successful, it must be possible to derive a consistent and agreed-upon set of rules to govern robotic behavior.
Table 1 List of actions used in the questionnaire

Current Aim: An Exercise in Rule Selection for CRs
The current study presents an exploratory evaluation of the approach we advocate here. We assess the proposed method by assuming the role of CR developers seeking acceptable behavioral rules for a hypothetical robot. We aim to implement rules that are (quasi-)unanimously accepted; this exercise will indicate whether such rules can be found. We chose a realistic and practically relevant setting, namely patient non-compliance: we select behavioral rules for a robot facing a patient (Annie) who refuses to take medication that would prevent a specific medical condition.
This scenario would require CRs to trade off conflicting priorities [53, 57]. If the robot allows the patient not to take the medication, this constitutes a violation of the non-maleficence principle: the patient’s well-being is potentially threatened. On the other hand, any action encouraging compliance might violate the patient’s right to autonomy. Likewise, if the robot communicates the patient’s decision to a third party, this could be considered a violation of privacy. This trade-off between well-being on the one hand and autonomy/privacy on the other depends on the potential health impact of the non-compliance and the severity of the remediating actions.
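To make the trade-off concrete, a schematic example of our own (not taken from the study): one could think of the acceptability of a remediating action as depending jointly on the intrusiveness of the action and the health impact of the non-compliance. The scales, action names, and proportionality rule below are assumptions for illustration only.

```python
# Schematic illustration only: how the trade-off between well-being and
# autonomy/privacy might be expressed. Scales and thresholds are invented.

# Intrusiveness of candidate actions on an arbitrary 0-3 scale (0 = none, 3 = severe).
ACTION_INTRUSIVENESS = {
    "do_nothing": 0,          # respects autonomy, may harm well-being
    "remind_patient": 1,
    "insist_verbally": 2,
    "notify_third_party": 3,  # protects well-being, infringes privacy/autonomy
}

def acceptable(action: str, health_impact: int) -> bool:
    """An action is deemed acceptable here if its intrusiveness is not
    disproportionate to the potential health impact (also a 0-3 scale)."""
    return ACTION_INTRUSIVENESS[action] <= health_impact

# For a minor condition (impact 1), notifying a third party is disproportionate;
# for a life-threatening condition (impact 3), it is not.
print(acceptable("notify_third_party", health_impact=1))  # False
print(acceptable("notify_third_party", health_impact=3))  # True
```

In the empirical approach, the shape of such a proportionality rule would not be stipulated by the developer but inferred from stakeholders’ judgments of concrete situation-action pairs.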
Because dealing with non-compliance involves a conflict between several rights, it has been used before as a test case in the field of ethical robots [5, 6, 7, 60]. Importantly, it presents a realistic scenario that occurs in medical practice: non-compliance, and the ethical trade-off it entails, is faced by many healthcare workers [53] and family caregivers [43]. Therefore, the situation can reasonably be assumed to be encountered by future CRs. The selected scenario and the evaluated robotic actions are further motivated in the methods section.