
1 Introduction

Ambient Assisted Living (AAL) can be described as concepts, products, and services that combine new technologies and the social environment to improve the quality of life for people at all stages of their lifetime [1]. From an individual perspective, quality of life can be considered in terms of well-being, which includes emotional (self-esteem, emotional intelligence, mindset), social (friends, family, community), and physical (health, physical safety) aspects of a person's life [2]. Humans are social beings; thus, one of the most important tasks of AAL is facilitating social contact [3]. This is achievable by implementing mechanisms for detecting and processing affect (a generic term covering feelings, moods, emotions, etc.) in a system. Affective data enhances a system's ability to make rational decisions and achieve its goals by serving as extra information for detecting the context of a particular situation and as a mediator through which information can be passed.

Integration of affective capabilities into AAL systems requires knowledge from various fields, including cognitive psychology, neuroscience, medicine, and computer science. This knowledge has been of paramount importance in the artificial intelligence (AI) field of affective computing, which focuses mainly on the study and development of systems and devices that can recognize, interpret, process, and simulate human emotions [4], and which has produced a significant amount of research, algorithms, and methods in this area. One question that has been at the center of attention since the first affective systems appeared concerns their affective abilities; to put it simply – what kind of emotional processes does a system need? Studies answering this question have identified the main affective processes of affective systems (namely, emotion recognition, emotion expression, emotion generation, and emotion mapping on rational behavior); it has been argued that, depending on their focus, not all systems need all of these processes [5].

Another focus of this chapter is AAL applications targeted at helping not only older adults but also younger people (since health disorders can affect anyone at any age) to live independently and comfortably in their living environment. However, living environments include not only users' houses but also the various environments surrounding them, such as city streets, schools, shops, restaurants, and other places. Therefore, these people have needs for movement, social interaction, healthcare, and the acquisition of knowledge and skills – not only those related to specific problem domains (e.g., mathematics) but also basic skills required for everyday life, like eating or cleaning. To support emerging emotional, physical, and mental needs in such extended AAL environments, four AAL application domains – healthcare, education (teaching/learning), mobility (transportation), and social interaction – are analyzed in terms of the previously mentioned affective processes.

The chapter starts with an explanation of the complexity of affective systems and of advancements in the affective computing field, and describes affective processes and their implementations in affective computing systems. Next, the need for emotions in existing AAL application areas is discussed, and a short analysis of AAL systems in the context of basic emotional processes is provided.

2 General Emotional Processes of Affective Systems

Affective computing (AC), whose advancement started in 1997 [4], aims to endow computers with abilities to detect, recognize, interpret, process, and simulate human emotions from visual, textual, and auditory sources, as well as to respond appropriately [6]. AC humanizes human-computer interaction by building artificial emotional intelligence. As natural language interactions with technology continue to evolve (examples include search, bots, and personal assistants), emotion recognition is already emerging to improve advertising, marketing, entertainment, travel, customer service, and healthcare [7].

Advances in data processing speeds and in the disciplines of computer science, AI, machine learning, psychology, and neuroscience are all contributing to the expansion of the AC field [8]. Computers, cameras, and sensors can capture facial expressions, gaze, posture, gestures, tone of voice, speech, patterns of keyboard and/or mouse usage, as well as physiological states (e.g., skin temperature or conductance, heart rate, and blood volume pulse) to register changes in a user's emotional state [6].

Analysis of existing studies shows that numerous computational models of emotions have been developed and applied by researchers working in the AC area. The abundance of such systems and applications has prompted discussion of the main affective processes and of systems' affective abilities in general.

One of the fundamental works in this direction has been done by Hudlicka, who proposed a general affective system framework [9]. The framework focuses on the roles of emotions and their fulfillment in artificial units. Such a general approach allows for the systematic and organized design and implementation of the necessary processes and functions, and enables comparison of the affective mechanisms of various systems. According to AC, an abstract affective component can be identified that executes three processes: affect recognition, affect calculation, and affect expression [4]. Affect calculation may include two separate processes: emotion generation and emotion mapping on behavior [9]. By combining these ideas, Petrovica and Pudane [10] have defined the processes needed specifically for a fully affective system that interacts with a user (see Fig. 1).

Fig. 1. Affective processes performed by an emotion-aware system (only affective interactions with a user are shown) (adapted from [10]).

Emotion recognition is usually done by extracting emotional cues from one or more modalities, e.g., facial expressions [11], gestures [12], body postures [13], voice [14], etc. Perception of various modalities is a precondition for automatically detecting emotions and accordingly adapting the behavior of AAL systems. In the AC field, affect detection is commonly achieved both through non-intrusive sensors, which do not require physical contact (e.g., video cameras, eye trackers, and microphones), and through intrusive sensors, which require physical contact with the human body (e.g., physiological or haptic (touch) sensors). Since the main goal of the AAL field is the development of non-intrusive intelligent systems able to proactively support people with special needs in their daily activities, non-invasive user monitoring is an important aspect of AAL systems [15].
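
To make the recognition step concrete, the following minimal sketch classifies an emotion by nearest-centroid matching over facial action-unit intensities. The feature set, centroid values, and emotion labels are illustrative assumptions, not taken from any of the cited systems.

```python
# Minimal sketch of modality-based emotion recognition: nearest-centroid
# matching over facial action-unit (AU) intensities. All values are
# illustrative placeholders.
import numpy as np

# Hypothetical per-emotion centroids over three AU intensities
# (brow raise, lip-corner pull, jaw drop), each in [0, 1].
CENTROIDS = {
    "happiness": np.array([0.2, 0.9, 0.3]),
    "surprise":  np.array([0.9, 0.3, 0.8]),
    "neutral":   np.array([0.1, 0.1, 0.1]),
}

def recognize_emotion(au_intensities: np.ndarray) -> str:
    """Return the emotion whose centroid is closest to the observed features."""
    return min(CENTROIDS, key=lambda e: np.linalg.norm(au_intensities - CENTROIDS[e]))

print(recognize_emotion(np.array([0.15, 0.85, 0.25])))  # -> "happiness"
```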

Emotional state generation is related to the appraisal of stimuli causing a subjective emotional experience. Emotional responses are triggered by various events that are evaluated as significant for a person's (or robot's/agent's) expectations, needs, or goals. Therefore, the same stimulus can produce distinct emotions, depending on differences in a person's interpretation [9]. In the AC field, affect generation is achieved by computational emotion modeling. One of its goals is to enrich the architecture of intelligent systems with emotion mechanisms similar to those of humans, and thus endow them with the capacity to "have" emotions. In the context of AAL, some studies exist in this direction; e.g., in [16] the authors describe a need-inspired emotion model applied in a HiFi agent whose emotions are generated by evaluating the situation and comparing it to the agent's different needs.
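
The appraisal idea can be illustrated with a small sketch, loosely inspired by need-based emotion generation (cf. the HiFi agent in [16]): an event is appraised against the agent's weighted goals, so the same kind of stimulus yields different emotions depending on which goals it furthers or blocks. The goal names, weights, and thresholds are illustrative assumptions, not a published model.

```python
# Minimal appraisal sketch: emotion emerges from comparing an event
# against the agent's goals. All names and numbers are illustrative.

GOALS = {"be_acknowledged": 0.8, "avoid_interruption": 0.4}  # goal -> importance

def appraise(event_supports: dict) -> str:
    """event_supports maps goal -> +1 (furthered) or -1 (blocked)."""
    valence = sum(GOALS[g] * s for g, s in event_supports.items() if g in GOALS)
    if valence > 0.2:
        return "joy"
    if valence < -0.2:
        return "distress"
    return "neutral"

# Different appraisals of events produce different emotions.
print(appraise({"be_acknowledged": +1}))     # -> "joy"
print(appraise({"avoid_interruption": -1}))  # -> "distress"
```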

Emotion mapping on cognition and behavior means defining the reasoning or behavior changes caused by an emotional experience. Emotions can lead to the expression and communication of different reactions or to the activation of specific actions in a person's (or agent's/robot's) body. Thus, models of emotion effects should deal with the multi-modal nature of emotion. Systems with embodied agents need to express emotions not only through behavior but also through the other modalities available in their particular embodiment (e.g., facial expressions, speech, or gestures). One possible approach for mapping emotions to behavioral reactions is the application of a behavior-consequent model, which aligns an emotional state with physical actions or other direct outward or social expressions, for instance, smiling when happy. Behavior-consequent models are often used to synthesize human-like emotional or social behavior in embodied robots like Kismet [17] or in virtual agents such as Max [18]. Regarding AAL developments able to link emotions with behavioral effects, few projects can be found. For example, in the NICA project [19], a behavioral architecture has been developed for a social empathic robot which can assist a user in interaction with a smart home environment.
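
A behavior-consequent model can be as simple as a lookup from the current emotional state to a coordinated, multi-modal action set, as in the sketch below. The emotion labels, modalities, and mappings are illustrative assumptions, not those of Kismet, Max, or NICA.

```python
# Minimal sketch of a behavior-consequent model: a generated emotional
# state is aligned with outward actions across the modalities available
# to the embodiment. All entries are illustrative.

BEHAVIOR_MODEL = {
    "joy":      {"face": "smile",   "speech_tone": "bright",  "gesture": "open arms"},
    "distress": {"face": "frown",   "speech_tone": "soft",    "gesture": "lowered head"},
    "neutral":  {"face": "relaxed", "speech_tone": "neutral", "gesture": "idle"},
}

def map_emotion_to_behavior(emotion: str) -> dict:
    """Return the multi-modal action set for the current emotional state."""
    return BEHAVIOR_MODEL.get(emotion, BEHAVIOR_MODEL["neutral"])

print(map_emotion_to_behavior("joy"))  # e.g., smiling when happy
```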

Emotion expression is focused on the system's ability to express emotions in response to people's personality, emotions, moods, attitudes, and actions. For AAL systems, such an ability could improve their functionality, since many AAL systems are developed as personal assistants fulfilling two functions:

  1. facilitation of the completion of daily tasks [20];

  2. maintenance of social interaction and communication to prevent social isolation of people [21].

To make virtual companions or assistants not only look realistic but also behave in natural and human-like ways, one of their key characteristics is personality and the ability to exhibit human traits and characteristics, including emotions [22]. In the AC field, such functionality is achieved mainly through affective conversational agents or affective robots; in AAL systems, it is implemented in a similar way – through virtual agents embodied in a system's interface, or through robots. Thus, the ways emotions are expressed by AAL systems (or virtual agents) can be similar to those used by humans, i.e., facial expressions [23], voice and speech [24], and behavior and body posture [25]. In other cases, a reaction to human emotions can be expressed through changes in music, color, and lighting [2].

While all four functional blocks (emotion recognition, emotion generation, emotion mapping on rational behavior, and emotion expression), if implemented properly, ensure that a system is fully affective, it is assumed that a system can still perform well with only a few of these blocks. For example, if a system needs to adapt to a user's emotions, it can achieve its goals just by recognizing emotions and expressing emotions in response. Such an approach is often used in intelligent tutoring systems [5].
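
The following sketch shows such a partially affective system in the spirit of intelligent tutoring systems [5]: only two of the four blocks (recognition and expression) are wired together, with no internal emotion generation or mapping. The recognition stub and response table are illustrative assumptions.

```python
# Minimal sketch of a two-block affective loop: recognize the user's
# emotion, then express a response. No emotion is generated or "felt"
# by the system itself. All labels and responses are illustrative.

RESPONSES = {
    "frustration": "Let's try an easier example together.",
    "boredom":     "Here is a more challenging task!",
    "neutral":     "Keep going, you are doing fine.",
}

def recognize(sensor_cue: str) -> str:
    # Stand-in for a real recognizer (camera, interaction logs, etc.).
    return sensor_cue if sensor_cue in RESPONSES else "neutral"

def express(user_emotion: str) -> str:
    # Direct mapping from the user's state to an expressed response.
    return RESPONSES[user_emotion]

print(express(recognize("frustration")))
```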

AAL systems, in general, are complex in the sense that they need to support social interaction as well as carry out rational functions. This suggests that in AAL systems all four processes are needed: to detect emotions, to generate emotions, to map emotions onto rational processes ("feel" emotions), and to express emotions.

While these components are already recognizable in existing systems, we argue that requirements for affective abilities differ depending on the AAL application area (as opposed to AAL systems as a whole). While a rich affective model might be crucial in some cases, such as when dealing with older adults or targeting long-term interaction and/or companionship, for more specific AAL systems the full set of identified functions is not necessary. To support this claim, we analyze four different areas where AAL can be used, and we use the affective processes as a reference for comparing them, since they provide the main functions required for AAL systems.

In the next section, various AAL application domains corresponding to the requirements of AAL systems are reviewed and analyzed. The main characteristics of all listed application areas are described and then analyzed in the context of AC processes. Analysis of basic affective processes in existing AAL applications should help develop truly affective systems that support users not only physically but also mentally.

3 Affective Computing in AAL

AAL systems are aimed at satisfying the needs of those in care. In research on older adults [6], needs have been divided into four kinds: errand, life curation, emotional health, and comfort needs. Older adults are one of the major user groups of AAL; however, for the younger generation an additional need appears – a need for education, e.g., in the case of autistic children [26]. We have chosen the following application areas in which AAL systems should support specific user needs:

  • education, which supports life comfort in the long term by ensuring that basic life skills are learned;

  • social interaction, which supports emotional and comfort needs;

  • mobility, which supports errand needs as well as comfort, since the ability to move freely increases independence;

  • healthcare, which supports physical (life curation) needs.

3.1 Emotions as Part of AAL in Education

Emotions play a central role in our lives: they ensure survival and support all activities, from the most basic to the most elaborate tasks, including education [27]. Studies have shown that emotions can influence various aspects of human behavior and cognitive processes, such as attention, long-term memorization, decision-making, understanding, remembering, analyzing, reasoning, and the application of knowledge [28]. Emotions and cognition are complementary processes in learning situations in which learners have to draw conclusions, answer causal questions, identify problems, solve tasks, make knowledge-based comparisons, provide logical explanations, and demonstrate the use of acquired knowledge and transfer it to others [29]. A learner's emotional state can influence his or her problem-solving abilities, affect the willingness to engage in the learning process, and shape the motivation to learn. Positive emotions are considered to play an important role in the development of creativity and the ability to adapt to different problems during their solution; conversely, negative emotions can hinder thinking processes and the abilities to concentrate, remember, memorize, solve tasks, reason, and draw conclusions [30].

Learning environments utilizing AC (i.e., monitoring the learner's emotions and/or responding to them [31]) can create different scenarios that help improve educational conditions. A system for emotion identification may detect signals of frustration during the learning process or a lack of understanding during the study of concepts and definitions [27]. With such identification at the beginning of the process, educational staff can start individual psychological assistance for learners, avoiding future problems that interfere with the learning process and, even more, with their lives. Currently, many examples of AC in educational settings already exist, e.g., AutoTutor [32], MathSpring [33], and MetaTutor [34]. However, most of them focus on typically developing individuals and provide knowledge in specific problem domains, e.g., physics, mathematics, or medicine. Still, such developments might be applicable in cases when learners are not able to attend school, for example, children with movement disorders.

If we focus particularly on the AAL field and children with special educational needs, including those with emotional, behavioral, sensory, physical, or mental disabilities, such as children with autism, then the previously mentioned affective learning environments (e.g., MetaTutor) developed for teaching specific problem domains are not applicable. This is because most children with autism have problems learning even the basic skills required for everyday life [26]. In general, autism is a communication disorder that requires early and continuous educational interventions on various levels, including everyday social interaction, communication and reasoning skills, language, understanding norms of social behavior, and imagination [35]. Usually, these skills are relatively self-evident or easy to develop for other children. Basic social interaction skills are generally acquired from a very early age through ongoing experience with the world and interactions with the people around us; children with autism experience difficulties in this domain [36]. The social-emotional domain is strictly interrelated with cognitive and motor development, as it consists of the acquisition of capacities for personal relationships, emotional expression, motivation, and engagement [36]. From an affective perspective, children with autism often have difficulty recognizing emotions in others, sharing enjoyment, interests, or accomplishments, and interpreting facial cues to understand the emotional expressions of others [37]. Without this understanding, they remain oblivious to other people's intentions and emotions, and the lack of such important prior knowledge about the environment hinders children from making informed decisions [38].

In general, education is considered the most appropriate solution for autism; however, planning the learning process for learners with autism is complex, because these learners differ significantly from most other learners in learning style, communication, and social skill development, and often exhibit challenging behaviors [39]. Such differences may strongly influence the educational process and often lead to social exclusion from meaningful participation in learning activities and community life. Exclusion, in turn, further reduces learners' opportunities to learn, grow, and develop [27]. Adapted educational systems facilitating the acquisition of knowledge and skills through the use of AC are crucial if the objective is the successful development of a society where equal opportunities are provided to all children, youth, and adults.

Analysis of existing learning environments targeting the AAL domain allows the conclusion that most developed solutions are aimed particularly at assisting autistic children in communication and interaction with other people. For example, the mobile application CaptureMyEmotion [40] helps teach children to recognize their emotions at the moment of taking photos or recording videos or sounds; the emotions can later be discussed with a caregiver, thus helping children learn about their emotions. Another solution, called Emotional Advisor, has been proposed to help autistic children engage in meaningful conversations in which people are able to recognize their own or other people's emotions; it is capable of teaching and guiding autistic people on how to respond appropriately based on how the other person is feeling or expressing emotions during verbal communication [38]. In [41], an educational system called Face3D has been proposed to help autistic children understand and reason about other people's (for example, relatives') mental and emotional states through virtual agents representing real people, their performance, emotions, and behavior.

A robotic solution called IROMEC (Interactive Robotic Social Mediators as Companions) has been developed to teach autistic children basic social interaction skills [36]. During play with IROMEC, children's specific strengths and needs are taken into consideration, and a wide range of objectives is covered regarding the development of different skills (sensory, communication and interaction, motor, cognitive, social, and emotional) [42]. The robot allows different inputs (e.g., direct operation on a touchscreen, buttons, or remotely controlled switches) that can be changed according to the child's abilities, and it provides personalized feedback according to the child's and therapist's preferences; therefore, IROMEC adapts itself and develops along with the child [36]. Regarding the emotional factor, IROMEC can display a set of basic emotions such as happiness, sadness, fear, surprise, disgust, and anger. In addition, various IROMEC scenarios are aimed at improving the child's self-esteem and emotion regulation, as well as teaching a range of basic emotions [42].

Even though many educational environments target autistic children, more general solutions for people with disabilities also exist (although not many). For example, the Ambient Intelligence Context-aware Affective Recommender Platform (AICARP) has been built to support learners' needs by personalizing and adapting the learning environment during language learning [43]. AICARP provides personalized feedback (e.g., playing different songs or sending signals using a light and a buzzer) when particular emotional states of the learner, i.e., relaxed or nervous, are detected [44].

Overall, it can be concluded that AAL systems aiming to help children during the learning process will not be able to provide full-fledged support if emotional aspects are not considered during system development. Emotions directly affect human cognitive abilities, including learning skills; therefore, non-intrusive detection of the learner's emotional states and an appropriate response (or adaptation) to these emotions are capabilities that should be considered during the design of such AAL systems.

3.2 Emotions as Part of AAL in Social Interaction

One of the goals of AAL is to ensure people's wellbeing, which includes not only satisfying physical needs or running errands but also making sure a person is, putting it simply, happy [45]. This is especially important when a person uses a system long-term, e.g., service robots for older adults and artificial nannies for children [46]. Moreover, research shows that people are more open to a system's suggestions if it uses emotional words [47]. This leads to the conclusion that a user would be more willing to engage with a system that fulfills their emotional expectations, as a result supporting the main functions of AAL as well. A system that satisfies emotional needs has clear advantages, and yet not many AAL systems exist in this direction.

The most straightforward way to implement social and emotional behaviors in an AAL system is through artificial companions. Developing such systems presents multiple challenges, such as unmistakable expression of emotions, the ability to conduct high-level dialogue, and the abilities to learn, adapt, develop a personality, use natural cues, and develop social competencies [48, 49]. While this is not an easy task, research suggests that aside from the already mentioned benefits – satisfying emotional needs, reducing loneliness, and supporting "rational" tasks – companions also reduce stress and, as a consequence, can improve physical health [50]. However, for companions to achieve these goals, an important characteristic is believability – i.e., a user needs to perceive them as acting on their own; emotions are crucial for a companion to be believable [51].

Believable artificial companions have been researched in several areas, including social robotics and virtual assistants, both as chatbots and as characters that provide other activities [52]. In an AAL environment, mobile robots provide more possibilities in terms of running errands or physically helping a user. Moreover, research shows that people tend to empathize and attach to a robotic companion more than to its simulation [53]; robotic pets can be involved in therapy and achieve effects similar to real pets [54], which cannot be done on a 2D screen.

In the field of AC, however, several frameworks and projects for virtual agents have been developed whose behavior is believable. One such development is WASABI – an architecture implemented as a virtual reality companion for playing a card game [18]. For this reason, this subsection reviews different types of assistants; it does not focus on the "practical" functions of companions (such as running errands or reminders to take pills) but rather on the emotional abilities and behaviors that enable them to become emotionally believable.

In general, there are two types of companions: virtual and robotic [52]. Virtual assistants have no physical embodiment, and they may have no virtual body either (e.g., a chatbot). Emotions in companions, however, are closely related to expression through the body, which helps make them readable without misunderstanding [48]; so a companion needs at least some kind of body – even if it is a virtual agent.

Robotic companions are researched in a field called social robotics [17]. Social robots are autonomous robots that can interact with a user in a socially believable manner [17]. They can be grouped into those that use strong approaches and those that use weak approaches: in the strong approach, a robot evolves its abilities over time; in the weak approach, by contrast, a robot merely imitates emotions [49]. In the context of companions, this classification can be extended to virtual assistants as well.
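
The distinction can be illustrated with a small sketch: a weak companion replays a fixed imitation regardless of history, while a strong companion updates an internal state from repeated interactions. The state variable and update rule below are illustrative assumptions, not a published model.

```python
# Minimal sketch contrasting weak and strong approaches [49].

class WeakCompanion:
    def react(self, user_action: str) -> str:
        # Pure imitation: the same canned display regardless of history.
        return "smile" if user_action == "greet" else "idle"

class StrongCompanion:
    def __init__(self):
        self.attachment = 0.0  # evolves over repeated interactions

    def react(self, user_action: str) -> str:
        if user_action == "greet":
            self.attachment = min(1.0, self.attachment + 0.1)
        return "enthusiastic greeting" if self.attachment > 0.5 else "smile"

strong = StrongCompanion()
for _ in range(7):
    print(strong.react("greet"))  # the display changes as the bond develops
```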

Several researchers have noted that for companions to adapt to a user, form a personality, and display believable behavior in the long term, they need to be able to learn [48, 55]. In [55], it is especially accented that future social robots will need to be personalized, for which a sophisticated user model might be needed. This leads to the conclusion that weak approaches will be left to narrow applications and that the development of strong approaches is currently needed.

The weak approach is often used in robots that are zoomorphic, i.e., resemble animals; some of these have no emotions at all, e.g., the robotic parrot RoboParrot, used for educational purposes and therapy [56], or the robotic seal PARO, also used for therapy [57]. Sony's robotic dog AIBO, on the other hand, can express six emotions: happiness, anger, fear, sadness, surprise, and dislike [46]; lately, however, Sony has moved towards strong approaches, claiming that the dog can form an emotional bond with a user [58].

The strong approach in zoomorphic agents was developed almost two decades ago in FLAME, a virtual agent [59]. FLAME is a fuzzy logic adaptive model of emotions that was implemented as a pet dog. A user can give feedback to the pet, thus shaping its behavior and teaching it new rules. The author claims that such learning adapts the pet to the user.

Another group of robots and agents comprises those that resemble neither animals nor humans. They rely on different forms of emotion expression [48]. A well-known example of such a social robot is Mung, which has a simple body and LED lights that allow it to express emotions through colors [60]. An interesting experiment investigated whether movement-based emotions (without, e.g., facial features) can be recognized [61]; the results showed that users still recognize such emotions with sufficient accuracy. Such studies are also important for humanoid robots, since implementing facial features is a complex task from both hardware and software perspectives, and for this reason other approaches are often chosen. One example is Nao – a widely used social robot (see, e.g., [62], where Nao is used to investigate interaction with users, or [63], where Nao interacts with autistic children) that relies on emotion expression through body movements and lights [64].

Humanoids, or robots with human-like expressions, are often used for emotion expression [52]. Not all of them, however, express emotions through complex channels, and not all of them use the strong approach. In [65], a human-like robot, Daryl, is described. While it shows its emotions through verbal cues and movement, the approach used in Daryl cannot be considered strong since (a) the robot does not learn anything and (b) it reacts to the onlooker's shirt color, with emotions assigned arbitrarily to colors.

One of the first anthropomorphic robots was Kismet. Despite the author's claim that Kismet could in theory learn, in reality it did not do so [66]. On its basis, Leonardo, which uses the strong approach, was developed. Leonardo uses gestures and facial expressions for social communication, and can learn about objects and form affective memories, which in turn underlie its likes and dislikes [67, 68]. The already mentioned WASABI has a human life-size body and sophisticated internal models that allow it to display mood and emotions and to build attitudes [18].

The strong approach is currently making its way into the world of social virtual agents. One can see it in robots developed by industry – the most sophisticated and publicly known being Sophia [69] – and also in recently published papers focused on developing methods that solve different learning issues. A model for learning emotional reactions from humans and the environment, similarly to how humans do, has been developed in [70]. Similarly, in [71], a method for learning facial expressions from humans has been implemented and tested. This all leads to the conclusion that research on companions has indeed developed rapidly since 2009, when social robotics was considered "very young" [68], and is on track toward long-term companions that can adapt to and learn from a user.

Currently, there are many advanced approaches in AC that allow modeling advanced user states but are not yet implemented in social robotics, mostly because robots face other challenges that slow down the development of emotional models (such as mechanical limitations, materials used, etc.) [48]. However, it can be concluded that, due to their practical functions and the emotional attachment users form to robotic companions compared to virtual ones, social robots are the future of artificial companionship.

3.3 Emotions as Part of AAL in Mobility

AAL applications are targeted at helping older adults or people with disabilities live independently and comfortably in their living environment; however, living environments include not only the home but also various other environments such as the neighborhood, shopping malls, and other public places [72]. The best way to help people with disabilities is to give them autonomy and independence [73]; therefore, mobility – which includes movement by private car, public transport, and wheelchair, and walking (by the person alone or using walking sticks or exoskeletons) – has become one of the most important areas for AAL solutions [74]. For example, older adults prefer to live as independently as possible at home, but living independently involves many possible risks, such as falling, weakening bodies, memory loss, and wandering, that limit mobility and activities [75]. The main objective regarding people with disabilities is to provide them with access to information resources and the ability to move safely and autonomously in any environment. So far, many environments are not easily accessible for these people on their own, without a guide [72].

In parallel with the development of AAL systems for mobility, AC has also entered this domain. Emotional factors and affective states are crucial for enhanced safety and comfort [76], since essential driver abilities and attributes are affected by emotions, including perception and organization of memory, goal generation, evaluation, decision-making, strategic planning, focus and attention, motivation and performance, and intentions and communication [77]. Furthermore, the mobility of older adults can be affected by emotional factors, e.g., the fear of getting lost or hurt [78]. Current predictions show that the average population age is increasing and that within 50 years one-third of the population in regions like Japan, Europe, China, and North America will be over 60 years old [24]. Therefore, a great number of drivers in the future will be older adults.

Aggressiveness and anger are emotional states that strongly influence driving behavior and increase the risk of causing an accident [77]. As reported in the literature, aggressive or angry behaviors may occur quite easily in people with Alzheimer's disease or other dementias [79]. Furthermore, aging has been found to have negative effects on dual-task performance, and older drivers show declines in information processing and driving performance [24]. Even healthy people can experience a wide range of emotions while driving, e.g., stress (caused by rush-hour traffic congestion), confusion (caused by confusing road signs), nervousness or fear (e.g., in novice drivers), or sadness (caused by a negative event) [77]. While driving, these emotions can have very harmful effects, even causing death. For instance, anger can lead to sudden driving reactions, often resulting in car accidents; sadness or an excess of joy can lead to a loss of attention [80]. Considering the great responsibility a driver has for his/her passengers, other road users, and him- or herself, as well as the fact that steering a car is an activity where even the smallest disturbance potentially has grave repercussions, keeping the driver in the emotional state best suited for driving is of enormous importance. Too low a level of activation (e.g., resulting from emotional states like sadness or fatigue) also leads to reduced attention and prolonged reaction time, and therefore lowers driving performance. In general, loss of mobility as a consequence of any illness puts people at increased risk of social isolation and lower levels of physical activity [81].

By analyzing existing AAL solutions related to mobility and AC, it is possible to distinguish at least three application categories: intelligent solutions for walking, virtual environments for driving, and systems leading to affect-aware cars. All of these categories and examples are discussed below.

Support during walking is of particular importance for older adults and people with problems of vision or movement in general. Currently, several developments (including robotic solutions and mobile applications) have been proposed to provide walking assistance or motivate people to go out and engage in physical activity. In [82], the Elderly-assistant & Walking-assistant robot is described, which can determine a user's intention and identify a walking mode. Its purpose is to provide physical support and walking assistance for older adults, meeting their needs for walking autonomy, friendliness, and security [83].

For example, iWalkActive has been developed [84] to offer people a highly innovative, attractive, and open walker platform that greatly improves a user's mobility in an enjoyable and motivating way, at the same time supporting physical activities that are either impossible or very difficult to perform with traditional non-motorized walkers, e.g., rollators. iWalkActive offers community services such as recording, sharing, and rating walking routes, thus providing a possibility to stay socially connected.

The DALi (Devices for Assisted Living) project was aimed at developing a semi-autonomous, intelligent mobility aid for older adults that supports navigation in crowded and unstructured environments, i.e., public urban places such as shopping malls, airports, and hospitals [85]. The project also takes into account the psychological and socio-emotional needs of older users, including self-consciousness, pride, and fear of embarrassment, because older adults are more focused on achieving emotional goals than younger adults. Thus, the project targets the emotional benefits achieved by improving the sense of safety and reducing the fear of falling. The use of DALi also leads to renewed confidence and contributes to a belief in mastery [85].

The Eyewalker project targets the development of an independent solution that can simply be clipped onto a rollator [86]. Eyewalker involves determining a user's emotional state based on movement analysis, since gait itself provides relevant information about a person's affective state. For emotion detection, acceleration data are analyzed.
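
A minimal sketch of gait-based affect estimation from a rollator-mounted 3-axis accelerometer is given below, in the spirit of the Eyewalker idea [86]. The features (a cadence proxy and movement energy) and the thresholds are illustrative assumptions, not the project's actual method.

```python
# Minimal sketch: crude affect cues from a window of accelerometer data.
import numpy as np

def gait_affect(acc: np.ndarray, fs: float = 50.0) -> str:
    """acc: (n_samples, 3) acceleration window; fs: sampling rate in Hz."""
    magnitude = np.linalg.norm(acc, axis=1)
    energy = float(np.var(magnitude))          # overall movement vigor
    # Zero-crossings of the detrended magnitude as a crude cadence proxy.
    detrended = magnitude - magnitude.mean()
    crossings = int(np.sum(np.diff(np.sign(detrended)) != 0))
    cadence = (crossings / 2) / (len(magnitude) / fs)  # pseudo steps per second
    if energy < 0.05 and cadence < 0.5:
        return "low mood / hesitant gait"
    if energy > 0.5:
        return "agitated"
    return "neutral"

window = np.random.default_rng(0).normal(0, 0.1, size=(500, 3))  # 10 s fake data
print(gait_affect(window))
```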

Besides the already mentioned physical solutions, various mobile or software applications have been developed that focus on facilitating physical activity, including walking, since regular walking benefits mental health, for example, by reducing physical symptoms and anxiety associated with minor stress. Ambient Walk is a mobile application that explores how ambient sound generated by walking and meditative breathing, and the practice itself, impact a user's affective state [87]. Ambient Walk is designed to use audio-visual interaction as an interventional medium that provides novel means to foster mindfulness and relaxation. A similar mobile application has been proposed in [88]: a personalized mobile tool that supports mindful walking to reduce stress and to target diseases such as diabetes or depression, sensing walking speed and providing haptic feedback.

The next category of AAL mobility solutions includes various virtual environments (e.g., driving simulators) aimed at analyzing emotions during driving [89]. For example, young adults with autism have difficulties learning safe driving skills; furthermore, they demonstrate unsafe gaze patterns and higher levels of anxiety [90]. One such virtual reality-based environment is described in [91]: operating as a driving simulator, it integrates an electroencephalogram (EEG) sensor, an eye tracker, and a physiological data acquisition system to recognize several affective states and the mental workload of autistic individuals while they perform driving tasks. Based on the acquired affective data, the system's interventions are adapted to keep users in a flow state. A similar solution, called Driving Simulator, has been designed to elicit driving-related emotions and states, i.e., panic, fear, frustration, anger, boredom, and sleepiness [92]; these affective states are detected by analyzing various physiological body signals (galvanic skin response (GSR), temperature, and heart rate). The Emotional Car simulator described in [80] has been developed with the aim of controlling and reducing the negative impact of emotions during driving. The simulator can capture physiological data through EEG systems and recognize affective states such as excitement, engagement, boredom, meditation, and frustration. Besides emotion recognition, this environment integrates a virtual agent that intervenes to reduce the emotional impact so that the driver can return to a neutral state.

Another area where mobility will be improved in the near future is the use of autonomous cars. As such cars will not require attention from a driver, their use by older users or people with disabilities will be facilitated [74]. Therefore, researchers have been working on various solutions that can be integrated into a car to make it affect-aware. Extensive work has been done in the direction of car-voice integration, since speech is a powerful carrier of emotional information [93] and speech-controlled systems are already integrated into existing cars. Besides voice, emotion recognition can be carried out based on other modalities, e.g., facial expressions and/or body posture [95], physiological signals [96], and even driving style [77]. However, the best way for a car to respond to the driver's emotional state is through voice. An appropriate voice response can be provided in terms of the words used, the presentation of a message by stressing particular words, and speaking in an appropriate emotional tone [93]. Adapting the personality of an automated in-car assistant to the driver's mood can also be important: a badly synthesized voice, or an overly friendly, invariably identical one, is likely to annoy the driver, which soon leads to distraction. Therefore, matching the in-car voice to the driver's emotion is a beneficial adaptation strategy [77]. A solution called Voice User Help is implemented and described in [24]. It is a smart voice-operated system that utilizes natural language understanding and emotionally adaptive interfaces to assist drivers looking for vehicle information with minimal effect on their driving performance. Additionally, the system presents an opportunity for older adult drivers to reduce the learning curve of new in-vehicle technologies and improve efficiency. In parallel with the speech recognition engine, an emotion recognition engine estimates the current emotional state of the user (e.g., angry, annoyed, joyful, happy, confused, bored, neutral) based on prosodic cues; this information is then used by a dialog manager to modify its responses.
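
The dialog-manager step can be sketched as follows: a prosody-based emotion estimate (produced elsewhere) selects how the same information is phrased. This is loosely in the spirit of Voice User Help [24]; the emotion labels and phrasings are illustrative assumptions, not the actual system's output.

```python
# Minimal sketch of an emotion-adaptive in-car dialog response.

PHRASINGS = {
    "angry":    "Short answer: front-left tire pressure is low.",
    "confused": "Step by step: the front-left tire needs air soon.",
    "neutral":  "The front-left tire pressure is below the recommended level.",
}

def dialog_response(driver_emotion: str) -> str:
    # A full system would also adapt word stress and synthesized tone [93].
    return PHRASINGS.get(driver_emotion, PHRASINGS["neutral"])

print(dialog_response("angry"))
```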

Other research related to emotionally responsive cars is presented in [76]. The car can detect abnormal levels of stress and use this information to automatically adapt its interactions with the driver and increase individual and social awareness. Thus, the car can help the driver better manage stress through adaptive music, calming temperature, corrective headlights, an empathetic GPS voice, etc.

3.4 Emotions as Part of AAL in Healthcare

One of the primary applications of AAL systems is healthcare, so it is no surprise that a remarkable number of solutions exist. The overall benefits of using technology in healthcare include increased accessibility and cost-effectiveness, exclusion of the human factor from treatment (including infinite patience and diminished variability), as well as tailoring communication to users' needs [97].

Healthcare applications are intended not only to take care of older adults or people with disabilities but also to monitor users with chronic health conditions [98, 99]. Besides, healthcare in AAL systems concerns not only maintaining physical health but also nurturing mental health. For this reason, it is closely related to cyberpsychology – a research area that originated in psychology and focuses on treating and preventing mental illnesses through technology [97].

Specifically, some developments have been proven to increase the safety of older adults [100], improve the mental safety of chronic patients [101], and enhance the quality of life of autistic children by accurately recognizing their emotions [102]. Healthcare applications also help prevent habits that may lead to health problems in the future, such as overeating [103] and excessive drinking [104].

One can easily see that emotions play a crucial role in healthcare applications. Emotions are related to both the causes and the curing of physiological and mental illnesses [97]; thus, manipulating a person's emotional state can help prevent illnesses as well as treat health problems.

Researchers have found that emotional responses to various emotion elicitors can mitigate or enhance stress-related conditions. One example of physical disease prevention is the detection of the Cardiac Defence Response, a health risk that is not associated with dangerous stimuli. In [105], an algorithm has been designed for automatic recognition of this condition; it can help a patient to self-regulate, and it notifies medical staff of the user's health state. Physical diseases are particularly closely related to emotions in older adults, who are also among the groups most susceptible to depression; for this reason, a solution called SENTIENT has been developed [106]. It monitors a user with the aim of detecting negative or positive emotional valence in real time, thus enabling the detection and treatment of depression at its early stages.

As mentioned before, detection of affective state can also help with treatment, which in the case of AAL systems can mean one of two things, i.e., there are two types of system intervention when problems occur: in one case, the system monitors a user and, if an abnormality is detected, calls a caretaker; in the other, the system intervenes itself [97]. In the case of life-threatening conditions, it is crucial for a system's communication with caretakers to be failsafe; for this reason, researchers seek solutions from both the abnormality detection and the messaging [107] perspectives. Abnormality detection is closely related to how well a system can detect a user's emotional states, which is why several sensor data fusion solutions have been developed (see, e.g., [108], where a method to fuse image and sound has been devised). The question of which sensors to use in AAL systems is still open, since they need to be unobtrusive on the one hand and informative enough on the other. For this reason, wearable sensors and mobile phones are often used (see, e.g., [109]).
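
A common and simple fusion scheme is decision-level (late) fusion, sketched below in the spirit of combining image and sound cues [108]: per-modality class probabilities are merged by a weighted average. The weights, class labels, and probabilities are illustrative assumptions.

```python
# Minimal sketch of decision-level fusion of two modalities.
import numpy as np

EMOTIONS = ["calm", "distress", "neutral"]

def late_fusion(p_image: np.ndarray, p_audio: np.ndarray,
                w_image: float = 0.6, w_audio: float = 0.4) -> str:
    """Combine per-modality probability vectors and pick the top class."""
    fused = w_image * p_image + w_audio * p_audio
    return EMOTIONS[int(np.argmax(fused))]

# Camera is unsure; microphone hears a strained voice -> fused: distress.
print(late_fusion(np.array([0.4, 0.35, 0.25]), np.array([0.1, 0.8, 0.1])))
```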

A system can also intervene itself and try to help in various ways. One such way is changing environmental conditions, e.g., switching on the light at night when distress is noticed [110]. In [111], depressive and manic states of patients suffering from bipolar disorder have been detected based on pitch and speaking rate during phone calls, which can then be used in treatment. Emotion detection and analysis can also be used not only to detect the existing emotional state of a user but also to predict and automatically analyze the behavior of the humans involved [112].

While there are many systems that monitor and analyze users' states, the vast majority of them contact human caretakers once intervention is needed. A current trend in health applications is the move towards ubiquitous healthcare, which means monitoring patients in all environments [107]. One such novel approach is monitoring older adults via the community [113]. Another promising research direction is the personalization of treatment for similar diseases [114].

4 Analysis of Affective Requirements for AAL Application Domains

As described in Sect. 2, four basic affective processes (emotion recognition, affect calculation – consisting of emotion generation and emotion mapping on cognition and behavior – and emotion expression) can be fulfilled by an affective component, unit, or system. The main goal of this section is to provide an analysis and summary of the previously considered AAL systems in terms of these processes.

In general, the relationship between the previously analyzed AAL application domains and all four affective processes is represented in Table 1. If a specific affective process is of high importance and should be included in the development of AAL systems as a functional requirement, it is depicted in black. If not all solutions in the specific AAL application domain require the corresponding functionality, dark grey is used (medium importance). Light grey represents cases where the process is not essential to ensure the intended functionality of the AAL system (low importance).

Table 1. The relationship between affective processes and AAL application domains.

Education.

Emotion recognition and the creation of a user model are essential tasks for AAL systems targeting the provision of educational activities, since reasoning about the learner's emotions and adapting the system's behavior (including the system's own emotion expressions) are further required as feedback. As an example, the previously described IROMEC robot can be mentioned: it carries out user modeling (modeling the child's abilities and emotions) and accordingly adapts itself and provides personalized feedback. In general, emotion recognition is carried out through various modalities. The most popular one, of course, is the identification of facial expressions via cameras, because it is considered a non-intrusive method. However, intrusive approaches (for example, analysis of physiological data) are applied for emotion recognition purposes as well.

Returning to the affective processes, in particular emotion generation: for AAL applications aimed at teaching specific knowledge or skills over a short-term period, it is not particularly important to actually "feel" or generate emotions based on the system's own emotion model. Emotions can simply be imitated (e.g., showing empathy towards learners) as predefined reactions to the learner's emotions, actions, and/or learning outcomes, in order to increase the system's (or pedagogical agent's) believability and gain learners' trust. Thus, there is no need to generate further changes in the system's rational processes and/or behavior according to the system's felt emotions.

Social Interaction.

A significant amount of effort has been dedicated to emotion recognition. A particular challenge for social robots is emotion identification outside the laboratory, i.e., "in the wild". While most social robots recognize a user's emotions from the camera, several use audio signals and body postures as well. In general, emotion recognition in an AAL environment does not differ from emotion recognition done away from a computer. A more interesting task is user modeling, which is crucial for adapting to a user and forming a long-term friendship. While user modeling is also one of the key factors in education and healthcare, for companions it is especially crucial to develop long-term affective models – structures describing a user, his or her interests, and the user's affective attitudes towards various things.

Emotion expression is also very important for companions, in two respects: first, emotional expressions should be clearly understandable for a user; second, they should be socially appropriate. Expressivity in general is a much-researched topic that has resulted in the aforementioned robot Leonardo as well as other developments.

An affective ability that distinguishes social interaction from the other areas is the necessity of calculating the system's internal affective states, including emotion generation and mapping on cognition and behavior. Such an approach allows the system to be more believable over a long time, since emotional displays and emotion influence on behavior are the key to affection formation and the illusion of life (i.e., the belief that the artificial companion is actually alive).

Mobility.

Regarding mobility and transportation in general, there are various options depending on a system's specifics. If a solution is aimed at supporting just walking, there is no need for emotion integration; however, if some form of interaction is involved, emotion inclusion can become an essential task.

In the case of walking assistants, emotion recognition as a system capability is not always required, since most of these developments aim to promote positive emotional outcomes (e.g., reducing the fear of getting lost) through specific actions (for example, the DALi project). Most important would be the creation of a user profile according to which the system would adapt its actions, targeting emotional benefits.

If the aim is long-term interaction and/or communication, which could be the case for affect-sensitive cars, then recognition of the user's emotions and generation of appropriate emotional responses for an in-car assistant via voice or facial expressions may be required. However, the behavior and rational thinking of such systems should not submit to emotions, since this could lead to negative outcomes, for example, car accidents or injuries.

Currently, a great amount of work is already devoted to emotion recognition from the driver's voice, since many cars use voice analysis and speech recognition services. Therefore, the possibility of acquiring affective data is in many cases already integrated into cars; only the analysis of the collected data in the context of emotions still needs to be applied. Regarding this issue, the results of studies and experiments carried out with driving simulators can also be used to analyze drivers' emotions in particular situations, with the aim of creating corresponding driver profiles.

Healthcare.

When it comes to integrating affect into healthcare applications, the largest amount of research and practical studies has been linked to affect recognition. This is a logical consequence of the field's specifics: accurate affective state recognition underlies the entire chain of procedures that healthcare applications carry out. However, emotion recognition is not the only focus of attention. User modeling – and possibly forecasting a user's emotional reactions and consequent behavior – is of utmost importance. Accurate and personalized user models would enable more precise detection of affective states and consequently lead to more accurate evaluation of the user's health condition.

In healthcare, similarly to educational systems, the system does not need to have its own affective state; rather, it should be able to tailor its affective reaction to elicit a particular emotion from the user. As can be seen from existing research, the system's reasoning and decision-making processes closely interact with the user's emotions, monitoring and forecasting them as well as adjusting the system's behavior.

Finally, some emotion expression capacities might be needed if a system performs interventions itself when required. In this case, the functions of a healthcare system merge with companionship functions, so the system might need the affective abilities vital for companions.

5 Conclusions

This chapter has discussed the need to integrate AC approaches and methods into AAL systems to improve their functionality in terms of rational decision-making and enhanced social interaction with the people who use these systems. Four basic emotional processes forming a general affective system framework have been described, and an analysis of various AAL application areas (i.e., education, social interaction, mobility, and healthcare) has been carried out to identify the current capabilities of AAL systems in terms of the listed processes.

Overall, it can be concluded that truly affective AAL systems are not in the far future – separate parts of such systems already exist. Emotion detection is the most studied process in AC; therefore, various methods and algorithms have been developed that can be applied in the development of AAL systems. The analyzed AAL areas are closely intertwined; it can clearly be seen that one system can have multiple functions.

Processes related to the system's emotion expression can be considered the second most developed direction, not only in AC but also in the AAL field. Many researchers are working towards intelligent and expressive social agents that display believable behavior and can be used as personal assistants, teachers, companions, etc. In many cases, such agents represent the system itself and carry out most of the system's functions aimed at direct interaction with a user, thus improving the system's communicative abilities.

Research focused on affect generation – and consequently on endowing systems with the ability to "feel" emotions – already exists, although it is at the very beginning of its development. Currently, most AAL systems merely imitate the ability to "feel" emotions by using predefined emotion and/or behavior patterns as responses to the user's emotions. However, one direction where "feeling" real emotions is of primary interest is companionship and long-term social interaction. While in some areas, such as healthcare, a system's dependency on its own emotions can be unnecessary or even dangerous, in social interaction "emotional glitches", e.g., being offended, can make a companion more believable and life-like. It can be concluded that this is one of the future research directions.

Another trend closely related to the future of AAL is personalization – personal services and personal communication with a user. This means that there is a need to store not only "rational" data, such as health condition, but also the affective data and attitudes of a user – which makes various user modeling techniques (including machine learning) a top-interest research area.