One common attribute of many, though far from all, computerized caregivers that draw on AI programs is that they display simulated emotions.Footnote 2 This display is deemed necessary to foster bonding with human subjects: for them to become emotionally invested in these robots, to trust them, to feel that the robots are empathetic or sympathetic, and so on (see also Tanaka et al. 2007; Turkle et al. 2006; Leyzberg et al. 2011; Gonsior et al. 2011). However, as van Wynsberghe points out, “There is no capability exclusive to all care robots” (2012, p. 409). Instead, care robots differ in their capability for locomotion; voice, face, and emotion recognition; and degree of autonomy. Coeckelbergh (2010) distinguishes between “shallow” and “deep” care: what distinguishes the latter from the former is the kind of feelings that accompany human care. He holds that AI can provide only shallow care because it does not actually care about the patient. Coeckelbergh notes that deep care is not guaranteed even from human caregivers, but that they are at least able to provide it.
The term humanoid robot, often used to refer to this kind of caregiver, is misleading because it implies that computerized caregivers must have features that make them seem human, for instance simulated faces, legs, and arms. Merriam-Webster defines humanoid as “having human form or characteristics”; masks from preliterate ages, for instance, were said to have humanoid features. In fact, many robots have no such features.
Moreover, evidence shows that human beings can become emotionally invested in inanimate objects that have no anthropomorphic features. An obvious example is a cuddly toy, such as a teddy bear (Sharkey and Sharkey 2010, pp. 161–190). One of our sons could not go to sleep or to the playground without his “gaki,” a well-worn small blanket, and another was attached to “Jack,” a piece of fur he found, even more strongly than his father was attached to his dark blue, white-topped Sting Ray convertible. The movie Her captures well the attachment one can form to a voice that emanates from a screen, basically a piece of software. In short, just as one can become addicted to anything (though some materials are more addictive than others), one can also become attached to anything (though attachment is more likely if the object displays affection).
Many, indeed most, of the computerized caregivers are not robots, defined as “a machine that looks like a human being and performs various complex acts (as walking or talking) of a human being” (Merriam Webster). Many are merely software programs that can run on any computer, tablet, or smartphone, for instance, programs that provide computerized psychotherapy (discussed below).
For this key reason, we suggest that all AI-enriched programs that provide care and seem affective to those they care for be included in the category under study. To encompass both humanoid robots and the much larger number of computerized caregivers that lack human form, we shall refer to them all as AI caregivers.Footnote 3 Most have no visible human-like features, make no visible gestures, and do not ‘reach out and touch someone’; instead they use mainly their voices to convey affect. We choose our words carefully: we refer to the presentation of emotions that leads humans who interact with AI caregivers to believe that these machines have emotions. Without this feature, AI caregivers are unable to perform much of their care.
Adopting this definition has another major benefit: it excludes from the domain under study all programs that provide exclusively or mainly cognitive services. A prime example is Google Assistant. It answers questions, gives suggestions customized to the user’s preferences, and helps with tasks such as booking flights or making dinner reservations, among other things. Google Assistant presents no emotions; although people can find expressions of emotion in anything, there is nothing in Google Assistant that fosters such projections. Other mainly cognitive services by AI-driven software include Apple’s Siri and Microsoft’s Cortana, which have been designed with a human touch and a rather limited sense of humor but which still do not qualify as AI caregivers, because they are used mostly as sources of information. (In short, these programs are not caregivers and hence are not examined further here; online tutors are also mainly cognitive agents and likewise are not discussed.)
Chatbots constitute a somewhat more complicated case. There is no formal definition of what constitutes a chatbot. However, to the extent that these are mainly interactive, informative agents, they fall into the cognitive category and are not AI caregivers. This is true even if they are given some mannerisms to make them seem friendlier, such as greeting users by their first names when asked, say, about the best place to have dinner. Other chatbots are designed to display emotions in order to manipulate those they interact with, acting like humans who work in sales.
An extreme position holds that all such interactive relationships between humans and AI caregivers are unethical because, by definition, AI caregivers display emotions that they do not have, and hence the relationships are “false” and “inauthentic.” Robert Sparrow makes an applicable point about robot pets: “If robot pets are designed and manufactured with the intention that they should serve as companions for people, and so that those who interact with them are likely to develop an emotional attachment to them, on the basis of false beliefs about them, this is unethical” (2002). Sharkey and Sharkey offer a more nuanced view; they grant that illusion is a part of artificial intelligence, but they draw a line between imagination, or a willing suspension of disbelief, and actual belief. Thus, they maintain that AI researchers must be honest and transparent about their designs in order to avoid deceit (2006, pp. 9–19). However, people are exposed to mild forms of ingratiation and false expressions of solicitude by many sales personnel, financial advisers, politicians, and others. The same is true of many people who read and apply the lessons of Dale Carnegie’s How to Win Friends and Influence People. There seems to be no obvious reason to treat AI caregivers more strictly than humans.Footnote 4
Whether these kinds of manipulative AI caregivers (and humans) need to be restrained depends on whether they cause harm and on the level of that harm, even granted that manipulation is never ethical. If the harm is minimal, it seems reasonable to rely here on “let the buyer (or listener) beware.” If the harm is considerable, regulations set by law and by ethical guidelines should apply to AI caregivers as they do to people (how this can be achieved is discussed below).
Finally, one should note that some manipulation by caregivers, like white lies, is carried out to help those cared for rather than for the benefit of the caregiver. For instance, in medical care, patients who seek expressions of hope are given reassurance even when there is little hope left. Other cases in point are AI caregivers that cheer on people who have lost weight, taken more steps than before, or repeated their exercises during physical therapy, with quite a bit more enthusiasm than a precision instrument would call for. These are all cases in which a measure of manipulation should be tolerated, as with all white lies.
In summary: any form of deception violates a key ethical precept. Kantians would ban it. Utilitarians would weigh the harm it causes against the gain it yields, and would find that many AI caregivers score quite well from the viewpoint of those they care for.