Introduction

Since the early studies of human behavior, emotion has attracted the interest of researchers in many disciplines of neuroscience and psychology. Recent advances in neuroscience are highlighting connections between emotion, social functioning, and decision-making that have the potential to revolutionize our understanding of the role of affect.

Cognitive neuroscience has provided us with new keys to understand human behavior, new techniques (such as neuroimaging), and a theoretical framework for their evaluation. The American neuroscientist A. Damasio has suggested that emotions play an essential role in important areas such as learning, memory, motivation, attention, creativity, and decision-making (Damasio 1994, 1999, 2003).

More recently, these insights have fed a growing field of research in computer science and machine learning: affective computing, which aims at the study and development of systems and devices that use emotion, in particular in human–computer and human–robot interaction. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. Affective computing research relates to, arises from, or deliberately influences emotion or other affective phenomena (Picard 1997). Its three main technologies are emotion detection and interpretation, dialog reasoning using emotional information, and emotion generation and synthesis.

An affective chatbot or robot is an autonomous system that interacts with humans using affective technologies to detect emotions, make decisions, and simulate affective responses. It can have an autonomous natural language processing system with at least these components: signal analysis and automatic speech recognition, semantic analysis and dialog policies, response generation and speech synthesis (a minimal sketch of such a pipeline follows this paragraph). The agent can be just a voice assistant, a 2D or 3D on-screen synthetic character, or a physically embodied robot. Such an artifact combines several types of AI modules to develop perceptive, decision-making, and reactive capabilities, in a real environment for a robot or in a virtual world for a synthetic character. Affective robots and chatbots bring a new dimension to interaction and could become a means of influencing individuals. A robot can succeed at a difficult task and yet will not be proud of it, unless a designer has programmed it to simulate an emotional state. The robot is a complex object that can simulate cognitive abilities, but without human feelings, and without the desire or "appetite for life" that Spinoza calls conatus (the effort to persevere in being), which encompasses both mind and body. Attempts to create machines that behave intelligently often conceptualize intelligence as the ability to achieve goals, leaving unanswered a crucial question: whose goals?
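To make this modular architecture concrete, the following minimal Python sketch chains such components together. All class and method names are illustrative assumptions, not an actual framework:

```python
from dataclasses import dataclass


@dataclass
class Turn:
    """One user turn flowing through the pipeline."""
    audio: bytes          # raw signal from the microphone
    text: str = ""        # transcript produced by speech recognition
    emotion: str = ""     # label inferred from verbal/nonverbal cues
    intent: str = ""      # result of semantic analysis
    reply: str = ""       # answer chosen by the dialog policy


class AffectiveAgent:
    """Illustrative chain: ASR -> affect/semantic analysis -> policy -> TTS."""

    def __init__(self, asr, emotion_detector, parser, policy, tts):
        # Each argument is a pluggable module implementing one component.
        self.asr = asr
        self.emotion_detector = emotion_detector
        self.parser = parser
        self.policy = policy
        self.tts = tts

    def respond(self, audio: bytes) -> bytes:
        turn = Turn(audio=audio)
        turn.text = self.asr.transcribe(turn.audio)            # speech recognition
        turn.emotion = self.emotion_detector.classify(turn.audio, turn.text)
        turn.intent = self.parser.parse(turn.text)             # semantic analysis
        turn.reply = self.policy.select(turn.intent, turn.emotion)  # dialog policy
        return self.tts.synthesize(turn.reply)                 # speech synthesis
```

Because each stage is a separate module, the quality of the whole interaction is bounded by its weakest component, which is why emotion detection and dialog policies are discussed separately below.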

By 2060, 32% of the French population will be over 60 years old, an increase of 80% in 50 years. The burden of dependency and chronic disease will go hand in hand with this aging. Robots could be very useful in following patients throughout an illness and in helping sick, elderly, and/or disabled people to stay at home and reduce their periods of hospitalization. Robots are available 24 h a day, 7 days a week; they are patient, can compensate for perceptual deficiencies (deafness, visual impairment), and provide access to information on the internet more easily than a computer. They are also valuable for continuously recording data and sending it to a doctor to detect abnormal behaviors (depression, stress, etc.) and to follow patients with, for example, bipolar disorders, Holter's disease, and degenerative diseases in the support of daily life. It is already possible to design intelligent systems that train people with cognitive impairment, stimulating memory and language.

Social and emotional robotics aims to create companion robots, which are supposed to provide us with therapeutic assistance or even monitoring assistance. It is therefore necessary to learn how to use these new tools without fear and to understand their usefulness. In the case of neurodegenerative pathologies or severe disabilities, the robot may even be better than humans at interacting with people: the machine can stay in tune with the other at very slow, almost inhuman rhythms, and it listens with kindness and without any impatience. For very lonely people, the machine can also help them avoid the depression that can lead to dementia.

We need to demystify artificial intelligence, elaborate ethical rules, and put the values of the human being back at the center of the design of these robotic systems.

Artificial Intelligence and Robotics

Artificial intelligence and robotics open up important opportunities in numerous applications, for example health diagnosis and treatment support, with the aim of better patient follow-up.

In 2016, the victory of AlphaGo (an artificial-intelligence computer program designed by Google DeepMind) over one of the best Go players, Lee Sedol, raised questions about the promise and risks of using intelligent machines. However, this feat, which followed Deep Blue's victory over Garry Kasparov 20 years earlier, should not lead us to fantasize about what robots will be capable of tomorrow in our daily lives. When AlphaGo beats the Go player, the machine does not realize what it is doing. Despite AI's impressive performance on specific tasks, it is necessary to keep in mind that machine learning systems cannot learn beyond the "real data": they only use past data to predict the future. However, many of the discoveries of our greatest scientists are due to the ability to be counter-intuitive, that is, to set aside current knowledge! Galileo in the sixteenth century had the intuition that the weight of an object had no influence on its speed of fall. Serendipity, the "gift of finding" by chance, is also not a strength of the machine. Faced with a question without a known answer, the human being, with all their cognitive biases and imperfections, is incredibly stronger than the machine at imagining solutions.

The robotics community is actively creating affective companion robots with the goal of cultivating a lifelong relationship between a human being and an artifact. Enabling autistic children to socialize, helping children at school, encouraging patients to take their medication, and protecting the elderly within a living space are only a few examples of how they could interact with humans. Their seemingly boundless potential stems in part from the fact that they can be physically instantiated, i.e., they are embodied in the real world, unlike many other devices.

Social robots will share our space, live in our homes, help us in our work and daily life, and also share a certain story with us. Why not give them some machine humor? Humor plays a crucial role in social relationships; it dampens stress, builds confidence, and creates complicity between people. If you are alone and unhappy, the robot could joke to comfort you; if you are angry, it could help you to put things into perspective, saying that the situation is not so bad. It could also be self-deprecating if it makes mistakes and realizes it!

At Limsi-CNRS, we are working to give robots the ability to recognize emotions and be empathetic, so that they can best help their users. We teach them to dialogue and to analyze emotions using verbal and nonverbal cues (acoustic cues and laughter, for example) in order to adapt their responses (Devillers et al. 2014, 2015); a sketch of the acoustic side of this analysis follows this paragraph. How are these "empathetic" robots welcomed? To find out, it is important to conduct perceptual studies on human–machine interaction. Limsi-CNRS has conducted numerous tests in the laboratory and in EHPADs (French care homes for dependent elderly people), as well as in rehabilitation centers with the association Approche, as part of the BPI ROMEO2 project led by SoftBank Robotics. Created in 1991, the main mission of the association Approche is to promote new technologies (robotics, electronics, home automation, information and communication technologies, etc.) for the benefit of people in a situation of disability, regardless of age and living environment. We are exploring how the expression of emotion is perceived by listeners and how to represent and automatically detect a subject's emotional state in speech (Devillers et al. 2005), but also how to simulate emotional responses with a chatbot or robot. Furthermore, in a real-life context, we often observe mixtures of emotions (Devillers et al. 2005). We also conducted studies around scenarios of everyday life and games with Professor Anne-Sophie Rigaud's team at the Living Lab of Broca Hospital. All these experiments have shown that robots are quite well accepted by patients when they have time to experiment with them. Post-experimental discussions also raised a number of legitimate concerns about the lack of transparency and explanation of the behavior of these machines.
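As an illustration of that acoustic analysis, here is a minimal Python sketch of a baseline feature extractor for emotion detection in speech. It assumes the librosa audio library; the feature recipe (pitch, energy, and MFCCs summarized by mean and standard deviation) is a common baseline, not the exact feature set used in the cited studies:

```python
import numpy as np
import librosa  # assumed available: a widely used audio-analysis library


def acoustic_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Summarize prosodic cues (pitch, energy, timbre) for one utterance."""
    y, sr = librosa.load(wav_path, sr=sr)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)       # fundamental frequency
    rms = librosa.feature.rms(y=y)[0]                   # frame-level energy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # rough timbre summary
    # Mean and variability of each cue: stressed or angry speech, for
    # example, tends to show higher and more variable pitch and energy.
    return np.hstack([
        [np.nanmean(f0), np.nanstd(f0)],
        [rms.mean(), rms.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])
```

A classifier (e.g., logistic regression or a neural network) would then be trained on such vectors paired with annotated emotion labels; building reliable annotated corpora, including the mixtures of emotions mentioned above, is precisely the hard part.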

It is urgent to develop an interdisciplinary research discipline bringing together computer scientists, doctors, and cognitive psychologists to study the long-term effects of coevolution with these machines. The machine will learn to adapt to us, but how will we adapt to it?

The Intelligence and Consciousness of Robots

Machines will be increasingly autonomous, talkative, and emotionally gifted through sophisticated artificial-intelligence programs. Intelligence is often described as the ability to learn to adapt to the environment or, on the contrary, to modify the environment to adapt it to one’s own needs. Children learn by experiencing the world.

A robot is a platform that embeds a large amount of software, with various algorithmic approaches to perception, decision, and action in our environment. Even if each of the object-perception or face-recognition modules is driven by machine learning algorithms, orchestrating all of these modules is very complex to adjust. To give the robot the ability to learn autonomously from its environment, reinforcement learning algorithms are used, and these require humans to design reward metrics. The robot learns by trial and error according to the programmed rewards; in this laborious way, it combines actions in the world with internal representations to achieve the particular tasks for which it is designed (see the sketch after this paragraph).
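The following toy Q-learning sketch makes this dependence on human-designed rewards explicit. The grid world, reward values, and hyperparameters are all invented for illustration; changing the hand-coded reward table changes the behavior the robot learns:

```python
import random

# Toy world: a robot on a line of 5 cells must reach its charging dock.
N_STATES, DOCK = 5, 4
ACTIONS = (-1, +1)                     # move left, move right
REWARD = {DOCK: 10.0}                  # the designer-chosen reward metric

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(2000):                  # trial-and-error episodes
    s = 0
    while s != DOCK:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = REWARD.get(s2, -1.0)       # per-step cost, also programmed by hand
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, act)] for act in ACTIONS)
                              - q[(s, a)])
        s = s2
```

Everything the robot "wants" here is contained in the hand-written REWARD table; the algorithm itself supplies no goals of its own, which is exactly the point made above.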

The integration of intentionality and human-like creativity is a new area of research. These machines are called "intelligent" because they can also learn. For a robot, the task is extremely difficult because it has neither instinct nor intentions on which to base its decisions; it can only imitate human beings. Giving a robot the ability to learn in interaction with the environment and humans is the Holy Grail of artificial-intelligence researchers. It is therefore desirable to teach them the common values of life in society. The ability to learn on one's own constitutes a technological and legal breakthrough and raises many ethical questions. These robots can be, in a way, creative and autonomous in their decision-making, if they are programmed for this. "Consciousness" is a polysemic term; for some, it refers to self-awareness, for others to the consciousness of others, or to phenomenal consciousness, moral consciousness, etc. According to the American neuroscientist A. Damasio (2003), self-awareness comes from the pleasant or unpleasant feelings generated by the state of homeostasis (the mechanisms aimed at the preservation of the individual) of the body. To be conscious, you need a perception of your body and of your feelings.

Robots would need an artificial body with homeostatic characteristics "similar to ours" to be conscious. The goal of researchers such as K. Man and A. Damasio is to test the conditions that would potentially allow machines to care about what they do or "think" (Man and Damasio 2019). Machines implementing a process resembling homeostasis could be built using soft robotics and multisensory abstraction. Homeostatic robots might reap behavioral benefits by acting as if they have feelings. Even if they would never achieve full-blown inner experience in the human sense, their properly motivated behavior would result in expanded intelligence and better-behaved autonomy.

The initial goal of introducing physical vulnerability and self-determined self-regulation is not to create robots with authentic feeling, but rather to improve their functionality across a wide range of environments. As a second goal, introducing this new class of machines would constitute a scientific platform for experimentation on robotic brain–body architectures. This platform would open the possibility of investigating important research questions such as: to what extent is the appearance of feeling and consciousness dependent on a material substrate?

With a materialistic conception of life, we can consider the computer and the human brain to be comparable systems, both capable of manipulating information. But our brain is a massively parallel interconnected network of 10^11 neurons (100 billion), and its connections are not as simple as those of deep learning models. For the moment, we are far from the complexity of life! Experiments conducted at NeuroSpin by Stanislas Dehaene's team (chapter "Foundations of Artificial Intelligence and Effective Universal Induction" in this volume), particularly using subliminal images, have shown that our brain functions mainly in an unconscious mode. Routine actions, such as the recognition of faces or words, are carried out without recourse to consciousness. In order to access consciousness, the human brain sets up two types of information processing: a first level, called "global availability," which corresponds to a vast repertoire of information and modular programs that can be called upon at any time; and a second type, specific to human consciousness: self-monitoring or self-evaluation, i.e., the ability to process information about oneself, which can also be called metacognition. Thus, the brain is able to introspect, control its own processes, and obtain information about itself, which leads to autonomy. The addition of physical vulnerability, discussed above, opens the robot's behavior to new reward functions in reinforcement learning (RL); a minimal sketch follows.
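As a hedged illustration of that last point, here is a minimal sketch of how "physical vulnerability" could be encoded as a homeostatic reward function for an RL loop like the one sketched earlier. The internal variables, set points, and weights are invented for illustration, in the spirit of Man and Damasio (2019), not taken from their work:

```python
# Hypothetical internal variables a vulnerable robot must keep in range,
# with their viable set points and the relative cost of drifting away.
SET_POINTS = {"battery": 0.8, "motor_temp": 0.4, "integrity": 1.0}
WEIGHTS = {"battery": 1.0, "motor_temp": 0.5, "integrity": 2.0}


def homeostatic_reward(body_state: dict) -> float:
    """Reward is highest when the body stays near its viable set points.

    Any drift away from homeostasis is felt as negative reward, so
    self-preserving behavior can be learned rather than hand-scripted.
    """
    drift = sum(w * (body_state[k] - SET_POINTS[k]) ** 2
                for k, w in WEIGHTS.items())
    return -drift


# A healthy body state yields reward 0.0; a damaged, overheating one does not.
print(homeostatic_reward({"battery": 0.8, "motor_temp": 0.4, "integrity": 1.0}))
print(homeostatic_reward({"battery": 0.2, "motor_temp": 0.9, "integrity": 0.7}))
```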

The research challenge is to build autonomous machines able to learn just by observing the world. For a digital system, autonomy "is the capacity to operate independently from a human operator or from another machine by exhibiting nontrivial behavior in a complex and changing environment" (Grinbaum et al. 2017). In April 2016, Microsoft's Tay chatbot, which had the capacity to learn continuously from its interactions with web users, began producing racist language after just 24 h online; Microsoft quickly withdrew Tay. Affective computing and curiosity models will be among the next big research topics. Self-supervised learning systems will extract and use the naturally available relevant context, emotional information, and embedded metadata as supervisory signals. Researchers such as A. Bair (MIT lab) have created an "Intrinsic Curiosity Model," a self-supervised reinforcement learning system.
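The principle behind such curiosity models can be sketched simply: a learned forward model predicts the next state, and its prediction error serves as an intrinsic reward, pushing the agent toward transitions it cannot yet predict. The linear model and dimensions below are illustrative stand-ins for the neural networks used in published work:

```python
import numpy as np


class CuriosityModule:
    """Intrinsic reward from the prediction error of a learned forward model.

    A linear model stands in for the neural networks used in practice; the
    dimensions and learning rate are illustrative, not from the cited work.
    """

    def __init__(self, dim: int = 4, lr: float = 0.1):
        self.W = np.zeros((dim, 2 * dim))   # predicts s' from (s, a) features
        self.lr = lr

    def reward(self, s: np.ndarray, a: np.ndarray, s_next: np.ndarray) -> float:
        x = np.concatenate([s, a])          # crude (state, action) encoding
        pred = self.W @ x                   # forward-model guess at next state
        error = s_next - pred
        # Online update: familiar transitions stop being surprising, so the
        # agent is steadily pushed toward states it cannot yet predict.
        self.W += self.lr * np.outer(error, x)
        return float(np.linalg.norm(error))  # high error => high curiosity
```

Such an intrinsic reward can be added to, or even replace, the hand-designed extrinsic reward in a learner like the Q-learning sketch above.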

How can we assess a system that learns? What decisions can and cannot be delegated to a machine learning system? What information should be given to users about the capacities of machine learning systems? Who is responsible if the machine malfunctions: the designer, the owner of the data, the owner of the system, its user, or perhaps the system itself?

Anthropomorphism

Citizens' imagination about robotics and, more generally, artificial intelligence is mainly founded on science fiction and myths (Devillers 2017). To mitigate fantasies that mainly underline gloomy consequences, it is important to demystify affective computing, robotics, and AI science more broadly. For example, expressions used by experts, such as "the robots understand emotions" and "the robots will have a consciousness" (Devillers 2020), are not understood as metaphors by those outside the technical research community. Many citizens are not yet ready to understand the concepts behind these complex AI machines. These emerging interactive and adaptive systems using emotions will modify how we socialize with machines and with humans. These areas inspire critical questions centering on the ethics, the goals, and the deployment of innovative products that can change our lives and society.

Anthropomorphism is the attribution of human traits, moods, emotions, or intentions to nonhuman entities. It is considered to be an innate tendency of human psychology. It is clear that the multiple forms of voice assistants and affective robots, already in existence and in the process of being designed, will have a profound impact on human life and on human–machine coadaptation. Human–machine coadaptation concerns how AI is used today to affect people's autonomy (in decision, perception, attention, memorization, ...) by nudging and manipulating them. What will be the power of manipulation of the voices of these machines? What responsibility is delegated to the creators of these chatbots/robots?

Through research in affective computing, systems have become increasingly capable of mimicking human behavior. These systems have demonstrated utility in interactions with vulnerable populations (e.g., the elderly, children with autism). The behavior of human beings is shaped by several factors, many of which might not be consciously detected. Marketers are aware of this dimension of human psychology, and they employ a broad array of tactics to steer audiences towards a preferred behavior. Jokinen and Wilcock (2017) argue that a main question in social robotics evaluation is what kind of impact the social robot's appearance has on the user, and whether the robot must have a physical embodiment. The Uncanny Valley phenomenon is often cited to show the paradox of increased human likeness producing a sudden drop in acceptance. One explanation of this kind of physical or emotional discomfort is based on the perceptual tension that arises from conflicting perceptual cues: when familiar characteristics of the robot are combined with mismatched expectations of its behavior, the distortion in the category boundary manifests itself as perceptual tension and feelings of creepiness (Jokinen and Wilcock 2017). A solution to avoid the uncanny valley experience might be to match the system's general appearance (robot-like voice, cartoon-like appearance) with its abilities. This can prevent users from expecting behavior that they will not "see" (Jokinen and Wilcock 2017).

Alternatively, users can be exposed to creatures that fall into the uncanny valley (e.g., Geminoids), making the public more used to them. Humans tend to feel greater empathy towards creatures that resemble them, so if the agent can evoke feelings of empathy in the user towards itself, it can enhance the user's natural feeling about the interaction and therefore make communication more effective. Following the reasoning on perceptual categorization, a robot that appears as a pleasantly familiar artificial agent, and that is perceived as a listening and understanding companion, can establish a whole new category for social robots which, in terms of affection and trust, supports natural interaction between the user and the robot.

Nudges with Affective Robots

The winner of the Nobel Prize in economics, the American Richard Thaler, highlighted in 2008 the concept of the nudge, a technique that consists of encouraging individuals to change their behavior without constraining them, by exploiting their cognitive biases. The behavior of human beings is shaped by numerous factors, many of which might not be consciously detected. Thaler and Sunstein (2008) advocate "libertarian paternalism," which they see as a form of weak paternalism. From their perspective, "Libertarian Paternalism is a relatively weak, soft, and non-intrusive type of paternalism because choices are not blocked, fenced off, or significantly burdened" (Thaler and Sunstein 2008, p. 175, note 27). Numerous types of systems are already beginning to use nudge policies (e.g., Carrot in Canada, for health). Assuming for the time being that nudging humans for their own betterment is acceptable in at least some circumstances, the next logical step is to examine what form these nudges may take. An important distinction to draw is between "positive" and "negative" nudges (sludges), and whether one or both types could be considered ethically acceptable.

The LIMSI team, in cooperation with a team of behavioral economists in France within the AI Chair HUMAAINE (HUman-MAchine Affective spoken INteraction and Ethics) at CNRS (2019–2024), will set up experiments with a robot capable of nudges, involving several more or less vulnerable populations (children, the elderly), in order to develop assessment tools that show the impact of nudging (project BAD NUDGE BAD ROBOT (2018); Dataia 2020). The principal focus of this project is to generate discussion about the ethical acceptability of allowing designers to construct companion robots that nudge a user in a particular behavioral direction for different purposes. At the laboratory scale, and then in the field, the two teams will study whether fragile people are more sensitive to nudges or not. This research is innovative: it is important to understand the impact of these new tools on society and to bring this subject of ethics and manipulation by machines to the international level (IEEE 2017). These objects will address us by talking to us, and it is necessary to better understand our relationship to these chatty objects without awareness, without emotions, and without intentions of their own. Users today are not aware of how these systems work, and they tend to anthropomorphize them. Designers need to avoid these confusions between life and artifacts and to give more transparency and explanation about the capabilities of machines (Grinbaum et al. 2017).

Social roboticists are making use of empirical findings from sociologists, psychologists, and others to inform their spoken-interaction designs, and they effectively create conversational robots that elicit strong reactions from users. From a technical perspective, it is clearly feasible that robots could be programmed to shape, at least to some degree, a human companion's behavior by using verbal and nonverbal cues. But is it ethically appropriate to deliberately design nudging behavior into a robot?

Ethical Implications

We must avoid a lack of trust in artificial-intelligence programs, but also too blind a trust. A number of ethical values are important: the deontology and responsibility of designers, the emancipation of users, measures of evaluation (Dubuisson Duplessis and Devillers 2015), the transparency, explainability, loyalty, and equity of systems, and the study of human–machine coadaptation. Social and emotional robots raise many ethical, legal, and social issues. Who is responsible in case of an accident: the manufacturer, the buyer, the therapist, or the user? How should their functioning be regulated? Should their use be controlled through permits? For what tasks do we want to create these artificial entities? How do we preserve our privacy and our personal data?

Any system must be evaluated before it is placed in the hands of its users (Bechade et al. 2019). How do we evaluate a robot that learns from and adapts to humans, or that learns on its own? Can it be proven that it will be limited to the functions for which it was designed, and that it will not exceed the limits set? How can sludges be detected? Who will oversee the selection of the data that the machine uses for its learning and that directs it toward certain actions?

These important issues have only recently been raised. The dramatic advances in digital technology will one day improve people's well-being, provided we think not about what we can do with it, but about what we want to do with it. That is why the largest international professional digital association, the global scholarly organization IEEE (Institute of Electrical and Electronics Engineers), has launched an initiative to reflect on the ethics of autonomous systems; a dozen working groups on norms and standards have emerged, including one on robot nudging (incentive manipulation). The CERNA, replaced by the French National Pilot Committee for Digital Ethics (CNPEN), has also taken up this subject: artificial intelligence that can provide better health diagnoses, stimulation tools, detection of abnormal behaviors, and better assistance, particularly in cases of disability or loss of autonomy. As an example, one of the three referrals submitted by the Prime Minister to the CNPEN concerns the ethical issues of conversational agents, commonly known as chatbots, which communicate with the human user through spoken or written language. This work of the CNPEN is an extension of the work initiated by CERNA, the Allistene Alliance's Commission for Research Ethics in Digital Science and Technology (https://www.ccne-ethique.fr/en/actualites/cnpen-ethical-issues-conversational-agents). Machines will surely be able to learn on their own, but they will not be able to know whether what they have learned is interesting, because they have no consciousness. Human control will always be essential. It is necessary to develop ethical frameworks for social robots, particularly in health, and to understand the level of human–machine complementarity.

In my book Robots and Humans: Myths, Fantasies and Reality (Devillers 2017), I propose to enrich Asimov's laws with commandments adapted to life-assistant robots. The foundations of these commandments come in part from feedback from experiences of interaction between elderly people and robots. Conversational virtual agents and robots using autonomous learning systems and affective computing will change the game around ethics. We need to build long-term experiments to survey human–machine coevolution and to build ethics-by-design chatbots and robots.