When what is in question is the promotion of autonomy, independence and some form of human contact, what, if anything, recommends a carebot solution to providing care for older people over a telecare solution, a single-function and simple companion robot solution, or a combination of the two? If the monetary cost of a multi-function, humanoid carebot is taken into account, the answer may be “Nothing”. On the other hand, if financial costs are disregarded, then the answer on the basis of the previous discussion may be that the carebot solution delivers physical help, together with the ability in principle to integrate telecare and sophisticated presence. By ‘sophisticated presence’, as explained in the section “Robots, ‘presence’ and the requirements of care”, we mean that the carebot interacts, and can even initiate interaction, with the user. Moreover, the interaction it offers is far more sensitive and far more demanding than the passive twitches and facial expressions of Paro. By taking over some of the functions of telecare, the ACCOMPANY Care-O-bot® can keep track of the location and condition of the user. It does so, however, from close at hand, potentially enabling quicker intervention or emergency response than conventional telecare devices relaying data to a remote information hub (assuming that the carebot is not itself programmed to summon help from a similar hub). In other words, cost considerations apart, a carebot may give us in a single package a highly desirable embodiment of assistive technology alongside practical help with lifting, carrying and fetching.
The previous discussion, however, may be inadequate for a full answer to the question of the comparative value of low-tech and robotic assistive technology. So far we have been guided by a list of ethical issues raised by philosophers and technologists who have reflected on the capabilities of robots designed or used to provide care for older people, and on the needs of older people as they present themselves in ordinary experience. But perhaps the common sense of philosophers and technologists is a bad guide to the needs or preferences of the elderly. The preferences and needs of the current population of older people are not representative of future older people, and are subject to cultural variation even in the present. Moreover, the list of ethical issues depends on the assumption that the experience of older people, especially in the West, is more or less uniform (Parks 2010).[25]
Any development of an ethical framework for the evaluation of carebots must be informed by the attitudes of older people themselves, with allowance made for large variations in technophobia between people who are currently 60 years old and people who are currently over 80. The importance to an ethical framework of taking user attitudes into account is connected with the value of autonomy. If carebot use is to take its cue from the wishes of individual older users of carebots, and if surveys of older people reveal a range of design-relevant preferences which do not correlate with the design features that engineers intend to realize (Van der Plas et al. 2010), this may suggest either that engineers think they know better than their potential older users what carebots should be like, or that they do not know and have not bothered to find out what older users of carebots might be looking for. Either way, the potential of the engineer-designed carebot to promote the autonomy of older users might be compromised.
ACCOMPANY has conducted research among panels of older people in the UK, the Netherlands and France. The project is investigating what users might want from a carebot, and has found that mobility, self-care and isolation are major preoccupations, while co-learning seems not to be.[26] Does this finding mean that ACCOMPANY should drop co-learning from its designs for robots? Not necessarily. Co-learning may have other effects that older people could benefit from and that they want, even if they want other effects more. There could be a therapeutic rationale for some design features that older people do not want, or do not want much, so long as, on balance, groups of older people have been consulted and listened to in relation to design, and so long as the ACCOMPANY Care-O-bot® accommodates itself to individual users rather than coming up with an agenda of its own. To return to the Siena partner’s method of keeping up older people’s social skills by having the robot adjust its behaviour to the user’s tone of voice: this might have a broadly therapeutic benefit even if the older person does not like it much.
Vallor’s list of ethical concerns indeed anticipates the way that the Siena design might be justified. It in effect asks philosophers and technologists to think about the effects of robot assistance both on the social skills and psychological well-being of older users and on their routine ways of engaging with their surroundings.
The Siena innovations try to improve social skills and, indirectly, the psychological well-being of older users. They also introduce companionship into routine ways of engaging with one’s surroundings, such as watching television, and into helping with tasks such as moving objects from one room to another, which promotes living in orderly and clean surroundings.
Even when the attitudes of users are taken into account, there may be conflicts within the range of ethical values that are individually relevant to providing care for older people. We have already seen that autonomy can conflict with safety: a carebot that is otherwise dedicated to fulfilling the wishes of its older user ought not to comply with a request that is suicidal. Similarly, although older autonomous people have a right to privacy at least as extensive as that of younger people, there may be occasions when a carebot should report a fall to a non-resident carer or a medical assistance hub, even if that is against the wishes of the older person himself or herself.
Against this background, what sort of ethical framework should be proposed for the design of carebots? The framework must identify and define values that should be promoted, or at least respected, by carebot design and use in relation to older people, and it must say which value is, or which values are, overriding when there is a conflict. The ACCOMPANY project addresses isolation and declining physical capacity in older people who continue to live, and want to live, in their own homes. If a robotic companion is to be a solution, its design must promote the following (a schematic encoding of these values is sketched after the list):
- autonomy: being able to set goals in life and choose means;
- independence: being able to implement one’s goals without the permission, assistance or material resources of others;
- enablement: having, or having access to, means of realizing goals and choices;
- safety: being able readily to avoid pain or harm;
- privacy: being able to pursue and realize one’s goals and implement one’s choices unobserved;
- social connectedness: having regular contact with friends and loved ones, and safe access to strangers one can choose to meet.
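As a rough illustration only, and not any part of the ACCOMPANY specification, the six values might be encoded schematically as follows; all identifiers are hypothetical:

```python
from enum import Enum

class Value(Enum):
    """The six values the framework asks carebot design to promote."""
    AUTONOMY = "setting goals in life and choosing means"
    INDEPENDENCE = "implementing one's goals without others' permission or help"
    ENABLEMENT = "having, or having access to, means of realizing goals"
    SAFETY = "being able readily to avoid pain or harm"
    PRIVACY = "pursuing and realizing one's goals unobserved"
    SOCIAL_CONNECTEDNESS = "chosen contact with chosen friends and strangers"

# Deliberately, no fixed ranking is encoded here: how conflicts between the
# values should be resolved is taken up in the discussion that follows.
```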
All of these values lie in the background of most able-bodied, independent adult life, and our approach is to extend them to later life unless there are reasons not to do so. Isolation and physical decline might be thought to be such reasons, unless a technology can compensate for them. The ACCOMPANY scenarios animate these reasons, and a particular design of robot companion compensates for them.
It is, however, inevitable that circumstances will arise where these values are in tension. When this happens one value is likely to be given priority over another. The preceding discussion has suggested that autonomy is a crucial value but that it can be outweighed when respecting it would threaten a user’s life or physical well-being. It might be thought to follow, then, that of the six values, safety is supreme, trumping even autonomy.
This seems to be a mistake. Not every threat to safety, even when realized, produces major injury. When the worst that the exercise of autonomy produces is minor harm, or not-so-minor but tolerable and survivable harm, autonomy might win out over safety. Admittedly, the meaning of ‘major harm’ and ‘minor harm’ varies over a life-course. Falls that are tolerable at 45 years of age and classifiable as minor then would not be classifiable as minor at 90, but the threshold has to be quite high if the older person’s autonomy is not to be in danger of being entirely undermined by too conservative a safety regime. In other words, autonomy, not safety, should normally be the ruling value in carebot design. For example, if an older person prefers being bruised for a week to staying seated or using a walker, not interfering with a decision to get up and be active seems to be consistent with the discretion usually allowed to middle-aged and younger adults with respect to their health and safety, even when minor harm results. Allowing the older person the same discretion might mean designing a carebot so that its prompts to use a walking frame etc. can be disabled (and perhaps later re-enabled) by the user.
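To make the suggested priority rule concrete, here is a minimal sketch, assuming a hypothetical numeric harm scale and a user-controlled prompt setting; none of the names or thresholds comes from ACCOMPANY:

```python
from dataclasses import dataclass

@dataclass
class UserPreferences:
    # The user can disable, and later re-enable, mobility prompts themselves.
    walker_prompts_enabled: bool = True

# Hypothetical harm scale from 0.0 (none) to 1.0 (fatal). The threshold is
# deliberately high: safety overrides autonomy only for major harm, and what
# counts as major would itself be calibrated to the individual user.
MAJOR_HARM_THRESHOLD = 0.8

def defer_to_user(predicted_harm: float) -> bool:
    """Autonomy is the ruling value: defer to the user's choice unless the
    harm predicted from acting on it counts as major."""
    return predicted_harm < MAJOR_HARM_THRESHOLD

def walker_prompt(prefs: UserPreferences) -> str | None:
    """Prompt the user about the walking frame only if prompts are enabled."""
    if prefs.walker_prompts_enabled:
        return "Would you like the walking frame brought over?"
    return None  # prompts disabled: respect the user's discretion, stay silent
```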
Because privacy promotes autonomy by allowing users to discover when unobserved what their limits or vulnerabilities are, and to factor those into their plans, carebots should not normally be able to report information about users to outsiders, or to let anyone into an older person’s home without permission. On the other hand, acting on some kinds of information without reporting to outsiders might be valuable. Thus, if the carebot has, or is connected to, flood sensors in a smart home, there is no reason why it or the smart-home technology cannot trigger a cut in the water supply and then ask the user what they want done next. This is in keeping with autonomy. Cutting the water supply and asking an outsider for subsequent instructions would undermine user autonomy unless the user was incapacitated.
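The flood example suggests a simple escalation order: act locally, consult the user, and involve outsiders only if the user cannot respond. A minimal sketch under those assumptions, with every class and method name invented for illustration:

```python
class SmartHome:
    """Stand-in for smart-home integration; the real interface is unknown."""

    def shut_off_water(self) -> None:
        print("Water supply shut off.")

    def call_assistance_hub(self, message: str) -> str:
        print(f"Contacting assistance hub: {message}")
        return "hub notified"

def handle_flood_alert(home: SmartHome, ask_user) -> str:
    # Step 1: act on the sensor data locally, without reporting to outsiders.
    home.shut_off_water()

    # Step 2: preserve autonomy by asking the user what should happen next.
    answer = ask_user("The water has been shut off after a leak. "
                      "What would you like done next?")
    if answer is not None:
        return answer  # the user remains in charge of the next step

    # Step 3: only if the user is unresponsive, and so possibly incapacitated,
    # is an outside party contacted.
    return home.call_assistance_hub("flood detected; resident unresponsive")
```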
Social connectedness is desirable, other things being equal, because of its potential benefits to physical and mental health. But the ‘other things being equal’ is important: it is possible for social connectedness to empower busybodies, without any benefit to the user. Instead of social connectedness full stop, chosen social connectedness with chosen people seems desirable, with the user deciding, as most adults routinely do, whom to include and whom not to include in their social circle. A user who disliked all eligible social connections might intelligibly choose isolation, but, given the reach of social networks afforded by the World Wide Web, the number of eligible social connections is likely to be much larger than the number of people the user has good reasons or any reasons for shunning.
Enablement might also be in tension with autonomy, since enablement may require individuals to do things for themselves that they might prefer were done for them, or that they might prefer not to do at all. Robotic devices are being developed to help with physical rehabilitation following stroke, accident or amputation. Physiotherapy of this kind often requires patients to be coaxed, persuaded and even paternalistically coerced into repeating movements by physiotherapists, who may themselves move or position the patients in ways that, although initially uncomfortable, are necessary for rehabilitation. Returning someone to a state of greater independence is certainly compatible with autonomy; the question is whether it is compatible with autonomy for a carebot to coerce someone into adhering to regimes that will return them to greater independence.
The answer to this question may lie in what is agreed with the older person at the time a rehabilitation device, or a robot with enabling capabilities, is placed with that person’s consent in their home. In the case of a single-purpose device, there would be no objection to removing a state-funded device that was lying unused or not being used properly. Carebots pose a different challenge, because they are designed to be multi-functional, and these other functions would also be lost if the robot were removed. Enablement functions are not quite the same as those providing potential social interaction. Disliking social interaction and preferring isolation is a matter of taste. Working against a carebot programmed to maintain independence is not simply an expression of taste, but a kind of resistance to independence. Again, the robot and its developers would not necessarily be working against the autonomy of older users if the robot refused to do things that the older person could reasonably do for herself, or which it might be good for her to do for herself. Indeed, we can envisage something of a spectrum of mutual accommodation. At one extreme might be a user’s refusal to co-operate with the robot in maintaining his or her mobility. At the other extreme might be automatic robot compliance with all user requests, even the request to be thrown off the balcony. Between the extremes might be cases where the robot enables the user to eat, drink or smoke excessively. In this respect, choices about the programming of carebots reflect the ethical issues raised more generally in health promotion and public health, where what people want is not necessarily what is good for them, and satisfying their desires can be in tension with their health interests.
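The spectrum of mutual accommodation just described could be caricatured as a three-way classification of user requests. The following sketch is illustrative only; the categories and cut-offs are ours, not ACCOMPANY’s:

```python
from enum import Enum

class Response(Enum):
    REFUSE = "refuse"        # e.g. a request to be thrown off the balcony
    NEGOTIATE = "negotiate"  # e.g. fetching a tenth cigarette of the morning
    COMPLY = "comply"        # e.g. fetching something out of the user's reach

def classify_request(life_threatening: bool,
                     user_could_reasonably_do_it: bool,
                     harmful_if_indulged: bool) -> Response:
    """Neither blanket refusal nor blanket compliance: the middle ground
    mirrors dilemmas familiar from health promotion and public health."""
    if life_threatening:
        return Response.REFUSE
    if harmful_if_indulged or user_could_reasonably_do_it:
        # Declining, or prompting the user to act for themselves, need not
        # offend against autonomy when the user could reasonably do it alone.
        return Response.NEGOTIATE
    return Response.COMPLY
```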
One of the challenges for the ethical framework in ACCOMPANY is that the Care-O-bot® can play a variety of roles (companion, helper and enabler), each of which is subject to different norms in human-to-human service provision.
To take companionship first, we can assume that the Care-O-bot® is not designed to simulate a family member but rather to counteract the experience of being always or mostly alone. The Care-O-bot® might therefore play a role similar to that of a paid companion in late eighteenth- and early nineteenth-century England. The companion was paid to provide constant company, usually for single people, and shared their employer’s home. This was a role that struck a balance between friend and servant. The companion could be a confidante but, unlike the friend, was an employee who had very little autonomy and could be called upon to help with ‘light’ duties, such as sewing or playing sport. As in the case of the Care-O-bot®, the relationship was one-sided, with the feelings, wishes and whims of the employer (or the older user, in the Care-O-bot® case) having most of the weight and those of the companion having little or none. However, it was considered unseemly to be unduly rude to or rough with the companion, which corresponds with the concerns for ‘respectful’ interaction being worked on by the Siena partner in ACCOMPANY.
A helper may be a servant, professional or volunteer, and these three roles will now be considered in turn. Servants are paid to do their employer’s bidding, usually without question. As it operates in ACCOMPANY, Care-O-bot® does not quite take on the traditional role of the servant, because it is intended to perform tasks that users are physically unable, rather than unwilling, to do for themselves. On the other hand, to place Care-O-bot® in the servant role suggests, appropriately enough, that the older user is controlling the robot rather than the robot controlling the older user. It also suggests that the robot should be discreet, keeping household matters private.
To the extent that it is designed for the frail and those with physical impairments, the Care-O-bot® could be associated with the caring roles filled by nurses, healthcare assistants and doctors, especially when it is equipped with interfaces for telehealth interventions. Human carers are not necessarily obedient servants. On the contrary, they are likely to have their own ideas about how much help to give and when, about what constitutes help, and about what form it should take from occasion to occasion. So there may be a tension between placing Care-O-bot® in the caring role and placing it in the servant role. In one, the older person is the boss; in the other, the older person sometimes needs to accommodate the carer. Informal, voluntary care, such as that which might be provided by a friend, incorporates both the care element and that of companionship. It reinforces the idea that, whilst the robot is present at the invitation of the older user, it should not be exploited or ordered about. It is also more of a relationship between equals, even though the older user retains the upper hand and the robot has only limited capacity to withdraw from unsympathetic behaviour or tone.
‘Enabler’ may suggest superiority over the enabled: the human enabler is the one with the knowledge, skills, abilities and powers to enable. This may also raise questions about who is deferred to when the older person and his or her enabler are in conflict. There is a corresponding tension between enablement and autonomy.
When autonomy conflicts with other values that govern the possible roles of Care-O-bot®, which should prevail? A way of summarizing much of the foregoing is by saying that autonomy should. Autonomy can make sense as the organizing value of the ethical framework for the design of carebots. Being the organizing value, autonomy also constrains additions to the value framework: other potential values would have to be consistent with autonomy or else have some independent moral grounding. Should further values be added to those already introduced?
One source of further values is the interests of carers connected to the older person. Carers enter the ethical framework developed so far through its values of safety and social connectedness, in turn constrained by the value of keeping the older person autonomous for as long as possible. This may not be the right way for carers to enter the framework. It might be thought that by putting older people and their choices at the centre of things, the framework denies the dependence of older people on carers and is in any case too individualistic. For example, the framework recognises threats to the autonomy of older people from carers but not the sheer hard work and sometimes sacrifice of their carers. Perhaps the framework needs to reduce the value of autonomy in interactions with the older person the more other people have their choices reduced by their caring role. Concretely, this might mean that the ability of the older person to judge and take risks that could lead to injury and greater dependence might be restricted the more dependent they are on others. It might also justify more monitoring and more reporting to carers.
We are not persuaded that autonomous older persons necessarily overburden carers, even when they are dependent. But it helps to remind ourselves that we are not concerned with the general question of the best way of being fair to carers. We are only concerned with the way that carers’ interests should be represented in a framework for the design of carebots. Since carebots of the kind being developed in the ACCOMPANY project assume only moderate physical disability and near-complete cognitive functioning in the older people who would be living with the Care-O-bot®, the question of trade-offs between autonomy and high dependence does not arise. That does not mean that there are no difficult questions about what carers have a right to know about in the lives of older people and what decisions of older people they have a right to veto, but in general the burden of proof will be on carers rather than the other way round.[27]