In order to better understand the interplay between the goal-directed and the practice-oriented aspects of care activities, let us look, as a first example, at the activity of lifting a patient. If one were to consider lifting exclusively in terms of its immediate external goals (as goal-directed), the activity could be described as moving the patient from the bed to the wheelchair in order to bring them to the toilet (or to an appointment, etc.). From this perspective, the activity of lifting simply consists of safely raising the patient out of bed at a certain angle, with a certain speed and force, and safely placing them in their wheelchair.
Alternatively, seen through the practice-oriented lens, the same activity appears much more complex. During lifting the patient is vulnerable and responsive: they must learn to trust their care giver, and the care giver must establish themselves as an agent worthy of that trust (among other things). Lifting is a moment in which the care giver and care receiver form the therapeutic relationship together. This relationship has value in itself, but it is also necessary for the later care of the patient: it is needed if the care receiver is to be honest about their symptoms, take their medication, and comply with their care plan. Lifting is also a moment in which the care giver is able to assess the neurological and physiological status of the patient, to make eye contact with the patient, and to interact socially with the patient. Thus, under this practice-oriented description, the activity of lifting, properly carried out, requires not only that the care giver efficiently and safely move the patient from one place to another but also that they develop observational skills and meet important general social and medical needs of the patient.
Should the operation of lifting be delegated to robots? In relation to this example, the nature-of-activities approach can be seen as offering two different kinds of contributions to the ethical analysis of care robots. Firstly, it helps make sense of the normative disagreement about the use of robots in care activities. By normative disagreement we refer here to the tension between the different normative conclusions one may draw depending on one’s conception of the care activity. Once we realize that a given care activity can legitimately be described in different ways, we can also make sense of the presence of a wider range of different, potentially contrasting values embedded in the activity. Seen simply as a process of transport, lifting is an activity that requires the safest and most efficient means to be fulfilled. Seen as a moment of socialization, trust-building, and care-taking, lifting is a “practice” (in the sense of care ethics) that requires human responsiveness and human attentiveness to be fulfilled. Depending on our preferred conception of care, we may have contrasting ethical views about what care should look like. From this perspective, our conceptual analysis of the care activity is an important tool toward an ethics of care robots, insofar as a detailed and comprehensive account of the values at stake in healthcare and, relatedly, of the different available conceptions of care is held to be an important part of this enterprise.
Moreover, our reconstruction of the normative disagreement about care activities can also help make sense of the value of patients’ and users’ moral autonomy in deciding how to live their lives, as recently stressed by Sorell and Draper (2014). In her 2014 paper, Sharkey points out that robots may both enhance and diminish the dignity of elderly persons. They may enhance dignity insofar as they refrain from typically human ways of disrespecting elderly persons, such as rude or otherwise offensive behaviour stemming from tiredness, stress, or overwork; however, the elderly person’s dignity may also be endangered by robots insofar as current robots cannot give the “real compassion and empathy or understanding” typical of human companionship. Sharkey then points out that “if older people were to be predominantly looked after by robots, and as a consequence were not able to have access to human companionship, many people would consider their lives to be unacceptably impoverished” (65, emphasis added). However, in the second part of her paper, Sharkey also offers a more nuanced position by distinguishing the impact on dignity of different kinds of robots (assistive, monitoring, companion).
Whereas we agree with Sharkey that we need to give nuanced answers, we also think that the capability approach she proposes offers too narrow a methodology to make sense of the many relevant values pursued by particular care activities. Our nature-of-activities approach is arguably able to offer a broader perspective. In fact, in relation to particular care activities (e.g. lifting), our approach allows one to recognize the legitimacy of different understandings of what those activities are. For some people in some circumstances, lifting is just about moving from one place to another; for them, human company, or even the mere presence of other people, is not part of the activity of lifting. Therefore, if unable to lift themselves, these patients may simply prefer to have a machine rather than a human person supporting them. In other words, an elderly person who endorses this view of the nature of lifting may reasonably prefer to be enabled by a machine to lift safely, efficiently, and autonomously rather than to be caringly, compassionately, empathetically assisted in lifting by a human carer. For these people care is good enough insofar as it enables them to pursue their goals—that is, insofar as it respects their moral autonomy—no matter whether pursuing these goals contributes to the development of a relevant capability or to the promotion of the person’s dignity (see footnote 6).
On the other hand, if an elderly person shares the care ethics view that, in their condition of physical frailty, lifting is an important moment of empathetic interaction with a carer moved by compassion, attentiveness, and responsiveness, then they may reasonably refuse to be supported by a robot in this activity, even if the robot were able to guarantee the same or an even higher level of efficiency and safety in the operation of lifting.
What we are suggesting here is simply that when different descriptions of an activity are equally legitimate—as they arguably are in this case—the normative disagreement about the way in which that activity is to be carried out—for instance, the disagreement between an elderly person and her caretakers about the way in which she is to be assisted—may also be irreducible, and we would therefore rather leave it to elderly persons themselves to decide how they are to be treated. It is a virtue of the nature-of-activities approach that it can make sense of the irreducibility of this conceptual and normative disagreement, and of the related necessity of respecting the patient’s autonomy of choice about their treatment in the presence of this tension between contrasting values.
Certainly, what the limits of autonomy for (mentally competent) patients or users should be is open to debate. Firstly, whereas we may easily accept the idea that in a home setting elderly persons should be able to act on their own view of what taking good care of them means, things are more complex in a nursing home setting. In this context there may be stricter, objective limits to the way in which elderly persons may request to be taken care of. In complex institutions like nursing homes, where different actors with different roles, tasks, and responsibilities interact closely with patients on the basis of complex rules and procedures, patients’ preferences simply cannot and should not always be complied with (see footnote 7); arguably, in these complex social contexts the assessment of what should count as (good) care cannot and should not be left only to the judgements of individual patients. Secondly, to borrow an example from Sharkey and Sharkey (2012): no matter what setting they operate in, we may not want lifting robots to be allowed to release patients over the side of a high balcony in an apartment building, even if explicitly requested to do so by a mentally competent elderly person. However, such limitations would also apply to the behaviour of an autonomy-respecting human helper (Sorell and Draper 2014). With or without robots, autonomy in healthcare may be in tension with other paramount human values: life and health. As this is a well-known crux in other general debates in medical ethics—typically, though not exclusively, assisted suicide and abortion—this is not the place for us to take a position on this broader issue.
A related point, though, is more specifically relevant to care robots and should be mentioned. The legitimacy of different views on the nature of care activities arguably counts in favour of respecting all patients’ and users’ autonomous choices in relation to their assistance and care. However, in the interest of ensuring that everyone is in a position to realize their autonomous choices, we have to make sure that economic and social conditions do not render certain options de facto unavailable. From this perspective, given the economic pressure to replace human work with machines, during the process of introducing robots into care practices we also have to create the social, political, and economic conditions under which those who do not want to be assisted by machines remain able to avoid it. In the absence of the relevant political, social, and economic constraints (see footnote 8), we run the risk that, given the high economic interests at stake, once assistive robots are massively introduced into care institutions and practices, they will be used no matter what the differing preferences, values, and needs of patients and users are.