1 Introduction

The application of social robots has the potential to address a variety of societal problems [198]. The increasing use of social robots in various settings, including organizational, educational, and domestic environments [1, 14, 45, 71, 96], means that a larger portion of our interactions will be with these robots. Because of their unique features, social robots may play a critical role in contributing to people’s well-being. Sophisticated social robots have the capability to serve as a “relational artifact” [174, p. 347] for people. The more socially interactive and human-like the robot is, the stronger people’s tendency to anthropomorphize [55] and attribute agency to the robot [10, 187]. Social robots that are intelligent, act autonomously, and respond in socially responsive ways can be perceived as relational partners or relational agents [7, 19, 20, 115], or as change agents [132], which can impact people’s well-being in significant ways. As such, it is important to understand both the impact of robots on well-being and how to design them with the capabilities to actively enhance well-being.

In this paper, we argue that HRI (Human–Robot Interaction) research should take a psychological need-fulfillment perspective on social robot research and design in order to enhance individuals’ motivation and well-being. We introduce a conceptualization and operationalization of the Motivation, Engagement, and Thriving in User Experience (METUX) model [124], specifically applied to the context of social robots. The METUX model is built on the fundamental ideas of Self-Determination Theory (SDT) [48], a major theory of motivation that offers an empirically tested and validated approach to examining factors that promote individuals’ motivation, engagement, and well-being. One central premise of SDT is that individuals’ basic psychological needs are essential to their intrinsic motivation and that intrinsic motivation both directly and indirectly influences well-being [49, 110, 176]. Studies across a wide range of contexts and situations show how the simultaneous fulfillment of these basic needs by an individual’s social environment leads to intrinsic motivation, psychological well-being, and optimal functioning [139]. Given social robots’ capabilities to serve as relational agents or partners, they could form an important source of psychological need-fulfillment for people.

1.1 Contributions and Outline

The psychological need-fulfillment perspective as presented in this paper extends current HRI research agendas on well-being-supportive social robots in several ways. First, this perspective outlines specific underlying mechanisms for fostering healthy forms of motivation and engagement during social robot interactions, thereby serving to promote longer-term well-being. It will help HRI research investigate the relationship between social robots and well-being improvements more precisely. The psychological need-fulfillment perspective is based on the widely validated and empirically tested SDT and, as such, forms a theory-based framework to inspire future research. Each of the psychological needs is essential to well-being, and thwarting any one need will disrupt well-being; thus, it is important to consider them integrally. In current HRI research, however, this has rarely been done.

Second, by accounting for the three basic needs at various spheres of experience (i.e., not only at the interface sphere but also at the task, behavior, and life spheres), the need-fulfillment perspective takes a holistic approach to the design of social robots for well-being. It guards against designs that foster needs in one sphere of experience while undermining them in others, which would result in no improvement in well-being or even in harm to it. When tailored to a specific type of robot or the function it needs to serve, this framework can help robot developers and researchers in the iterative design and evaluation of social robots, including socially assistive robots. As such, it inspires robot designers to deliberately design social robots that are motivating and engaging, ultimately leading to improved well-being.

The remainder of this article is structured as follows. First, we describe SDT and two of its subtheories, as they form the basis of the METUX model. Next, we explain how current HRI research has addressed well-being and discuss the limitations of current approaches. After that, we outline the key ideas of the METUX model, as well as the application of this model to social robot design. By applying the METUX model and conceptualizing the role of basic need-fulfillment in HRI, our paper provides several examples of how social robots can be designed with the aim of increasing the well-being of their users. We propose various starting points for how a need-fulfillment perspective may be incorporated into future research on HRI within different spheres of analysis, ranging from the interface level to the user’s behavior that the robot is supposed to support. We do this by reviewing past work and building on it conceptually to envision potential future directions.

2 Self-Determination Theory

SDT offers a framework for studying human motivation, based on several formal theories [185]. Central to SDT is the examination of how the fulfillment of individuals’ basic psychological needs affects their psychological well-being, motivation, and relationships [137]. Two key principles of SDT seem to be most relevant to HRI: the focus on different types of motivation and the impact of the social context on fulfilling or undermining individuals’ basic psychological needs. These principles will be discussed further below.

2.1 Autonomous Versus Controlled Motivation

One of the core theories within the SDT paradigm is Organismic Integration Theory. This theory distinguishes between two types of motivation: intrinsic and extrinsic. Intrinsic motivation arises from the inherent interest or enjoyment in an activity, whereas extrinsic motivation is driven by external rewards or punishments; that is, the behavior is performed to satisfy external demands or is controlled by external regulations. SDT typically considers intrinsic motivation to apply to positive or well-being-supportive behaviors. Moreover, SDT recognizes that extrinsic motivation can vary in the degree to which it is experienced as controlled. Thus, it places the various forms of extrinsic motivation on an internalization continuum alongside intrinsic motivation [136].

SDT recognizes four distinct types of extrinsic motivation, in addition to amotivation (when there is no motivation at all) and intrinsic motivation. The first type is external regulation, where the behavior is controlled by external demands, rewards, or punishments [48]. An example of this is when students use a robot in class only because their teacher told them they must. The second type is introjected regulation, where the behavior is somewhat internalized, but its regulation is still dependent on external standards. This often appears as ego involvement, to avoid feelings of guilt or to gain others’ respect [135], such as when employees use a robot because their colleagues are also using it. The third type is identified regulation [49], where the behavior is seen as serving an important purpose, even if it is not intrinsically motivating. An example is when patients use a robot to stay physically active. The fourth type is integrated regulation, where the behavior is fully integrated with personal goals and values [49]. An example is when a nurse regards working with social robots as an important part of professional development.
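To make this continuum concrete for empirical work (e.g., coding observed robot-use motives), the ordering can be expressed as a simple data structure. The sketch below is our own minimal illustration, not part of SDT or the METUX model; the type names and the autonomous/controlled cut-off simply follow the descriptions above.

```python
from enum import IntEnum

class MotivationType(IntEnum):
    """SDT's internalization continuum, ordered from least to most autonomous."""
    AMOTIVATION = 0             # no motivation at all
    EXTERNAL_REGULATION = 1     # controlled by external demands, rewards, punishments
    INTROJECTED_REGULATION = 2  # driven by guilt avoidance or others' respect
    IDENTIFIED_REGULATION = 3   # behavior seen as serving an important purpose
    INTEGRATED_REGULATION = 4   # behavior fully aligned with personal goals and values
    INTRINSIC_MOTIVATION = 5    # inherent interest or enjoyment

def is_autonomous(motivation: MotivationType) -> bool:
    """Identified, integrated, and intrinsic forms count as relatively autonomous."""
    return motivation >= MotivationType.IDENTIFIED_REGULATION
```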

Research strongly supports the idea that more autonomous forms of motivation are related to improved performance, social functioning, and overall well-being [49]. This underlines the importance of the quality of motivation, rather than its strength: the specific type of motivation determines how much it contributes to overall well-being. Extrinsic motivation that is highly controlled (e.g., robot use contingent on rewards or punishments) will not foster well-being, while extrinsic motivation that is highly autonomous (e.g., robot use seen as coherent with personal goals and values) will contribute to well-being. In the context of user experience (UX), this means that designing for more autonomous regulations can lead to better outcomes for users [124, 138].

2.2 Basic Psychological Need Theory

Another widely tested and validated theory within SDT is Basic Psychological Need Theory, which identifies three innate psychological needs that are essential for people’s psychological well-being and intrinsic motivation. First, the need for autonomy involves acting with choice, self-determination, and volition. Autonomy, however, does not mean independence, as people can act autonomously while still being dependent on others or even complying with the wishes of others [167, 182]. Second, the need for competence involves feeling effective in one’s efforts and capable of achieving desired outcomes. Last, the need for relatedness involves feeling connected to others, experiencing belonging, and being cared for by others, as well as caring for others.

The extent to which these needs are fulfilled depends on an individual’s social environment and is critical for self-motivation and effective functioning. When these needs are supported, individuals flourish; when need-fulfillment is hindered, the result is ill-being, demotivation, and stress [141]. This premise has been investigated in various contexts, including education [16, 59], interpersonal relationships [133, 180], and healthcare [113, 120]. Across decades of empirical studies in diverse contexts, the fulfillment of autonomy, competence, and relatedness has been found to be crucial for intrinsic motivation and well-being [33, 172, 183]. We therefore argue that it is important to consider these needs when designing and studying social robots. The METUX model [124] provides a solid framework for analyzing the extent to which social robots support need-satisfaction, thereby enhancing motivation and well-being.

2.2.1 Needs Versus Desires

It is important to note the exact meaning of needs in the context of SDT. In HRI, needs typically refer to end-users’ desired robot attributes or outcomes, and individual preferences can vary widely. In SDT, however, the meaning of basic psychological needs is more specific and narrow. Needs in SDT refer to basic psychological needs, analogous to physiological needs like hunger, that all individuals possess [49]. Thus, "needs" in SDT differ from what might be called "desires". People may desire status, money, or authority, but as SDT shows, they do not innately "need" these to be psychologically healthy [49]. The METUX model does not suggest that other constructs, such as privacy and security, are unimportant in technology design, but research shows that the three needs identified by SDT are the most important to consider in the context of well-being. Furthermore, other needs are often by-products of the satisfaction or thwarting of these three needs. For example, the need for privacy will typically emerge in response to frustrations of autonomy caused by controlling circumstances (e.g., personal data being stored).

Regardless of an individual’s desire for a particular basic psychological need, it is essential for fostering psychological growth and well-being, and deprivation or frustration of needs diminishes flourishing. Each need is associated with a unique set of experiences and is distinct from other basic needs [186]. All three needs are essential, and thwarting one need will cause disruptions to well-being, so it is important to consider them holistically. Additionally, basic psychological needs are universal and relevant for users regardless of their demographic characteristics or cultural background [186]. Four decades of empirical research [138, 183] show that these three needs are the most essential and predictive of well-being. Therefore, we argue that - from a well-being perspective - they are the most critically important to assess within social robot research and design.

3 Social Robots and Well-Being

The likelihood of a social robot having an impact on a person’s well-being is influenced by several factors, including the intentions with which it is designed and the frequency and duration of the interaction. Some robots are intentionally designed to have a short-term impact on well-being, with the expectation that the interaction will affect people’s well-being during or immediately after the interaction; for example, social robots that are designed to distract children during vaccinations [17]. However, the impact of such robots is less likely to carry over to people’s overall well-being in life.

Other social robots are designed for repeated sessions or long-term interventions, such as socially assistive robots for children with mental health problems [81], autistic children [148], or service and companion robots for older adults with dementia [69]. But even social robots that are not designed specifically to improve well-being (e.g., social service robots in shopping malls) can have a short-term impact on people’s well-being, as the interaction may lead to feelings of joy or frustration [24, 54]. The interaction with such a robot should at least not undermine people’s well-being. So, to achieve enhanced or prolonged well-being, particularly during long-term interactions with socially assistive robots, it is crucial that social robots are intentionally designed to strengthen or preserve people’s psychological well-being.

To date, research in HRI has not fully exploited this well-being-supportive potential of social robots. While there have been several trials on improving well-being using robots, these have been conducted in limited contexts, and the evidence from such trials is mixed [130]. Some studies demonstrate that the interaction with a social robot can indeed enhance the user’s psychological well-being [130, 156] and report stress reduction and positive increases in mood and comfort [90, 188]. In contrast, other trials reported negative consequences and showed, for example, increased irritability and hallucinations and a decrease in quality of life among patients with dementia using a pet robot [175]. Thus, although social robots have the potential to bring both benefits and harms to users’ well-being, it remains unclear through which mechanisms these effects arise. We argue that this is due to the lack of a theory-based framework that identifies specific well-being determinants in social robot design across different spheres of experience. We will detail this critique below.

First, current HRI research does not always address the specific underlying mechanisms that promote well-being. For instance, several studies have compared robot-assisted intervention groups with traditional intervention groups to examine the effect of utilizing a social robot in improving well-being. While useful as first explorations of the possibilities of social robots as therapy-assistive tools, such studies do not provide insight into which specific robot features are effective in improving well-being. There is a wide range of robot characteristics that might impact well-being, but an experiment comparing user groups with non-user groups cannot reveal which specific features of these robots improve well-being and which do not. A more fine-grained analysis of social robots’ specific well-being determinants would advance this field.

Second, it would help HRI research if such well-being determinants were based on well-proven, validated formal theories of well-being and motivation. In the context of social robots, the application of such theories is rare, with only a few exceptions. Without a theory-based framework, it is difficult to provide theoretical explanations for why social robot interventions are effective or not, limiting the generalizability of empirical results beyond a specific case. Some studies have examined how specific robot features might affect well-being or related concepts such as social anxiety [119], diagnosis of psychiatric disorders [87], or loneliness [123]. And sometimes, studies also touch upon concepts related to basic psychological needs, such as self-disclosure [86] or self-efficacy [134, 170]. However, studies that consider all three basic needs in relation to robot features impacting well-being are scarce. As SDT considers all three needs essential to well-being, this can be seen as an important limitation of such studies. For example, an educational robot that increases its volume and employs arm gestures when students lose their attention [171] might provide competence support during a task but, at the same time, might also thwart the student’s autonomy during the task. In terms of well-being, an increase is then unlikely. Thus, a more holistic approach toward fulfilling all three psychological needs is crucial to truly examine the well-being-supportive potential of social robots. The psychological need-fulfillment perspective presented in this paper is based on a widely validated theory and provides a systematic way of analyzing specific robot features that can impact well-being.

Last, current studies on the impact of social robots on well-being seldom address different spheres of experience. For example, some studies only seem to address the interface sphere by investigating robots’ ways of communicating (e.g., [86, 89, 177]), while others are more focused on the tasks that need to be executed by the user (e.g., [132]) or users’ behavioral changes (e.g., [75]), such as weight reduction as a result of robot-delivered interventions. In the worst case, this might mean that a social robot supports basic needs in one sphere (e.g., in the short term, in the interface sphere) while thwarting basic needs in another sphere (e.g., in the longer term, in the life sphere). A social robot may then have a user-friendly design and be easy to use, yet lead to worsened well-being in the long run. Or, a social robot may help people reach their end goals (life sphere) but do so by prescribing strict rules (task and behavior spheres), thus undermining the user’s well-being. Clearly, this shows the need to study long-term interactions in ecologically valid environments. In the following section, we show how the application of the METUX model could address these shortcomings in current HRI research.

4 The METUX Model

Technology has the ability to affect well-being, both intentionally and unintentionally [5, 124]. Studies within the SDT paradigm have increasingly focused on how technology use affects motivation and well-being through basic need-fulfillment, particularly in the fields of education (e.g., [34, 189]) and gaming (e.g., [122, 126]). The central assumption in this stream of literature is the idea that people will use technology to the extent that their interaction with the technology satisfies their basic needs [124], which in turn increases motivation and well-being.

However, until recently, there was no comprehensive framework for well-being supportive design strategies of technologies and as such, no clear guidance on how to design healthy technology in practice. Peters et al [124] addressed this gap by introducing the METUX model, a conceptual model based on SDT. Their model enables designers to evaluate how their technology design affects motivation, engagement, and well-being. In line with the above-mentioned studies, the model considers need-satisfaction as a mediator between technology use and individuals’ motivation, well-being, and engagement.

4.1 Spheres of Experience

According to the model, need-satisfaction related to technology use can be experienced within six spheres: adoption, interface, task, behavior, life, and society. Peters et al [124] show how each sphere might lead to different outcomes when need-satisfaction is achieved within that sphere. It is important to distinguish between these various levels of need-fulfillment to avoid creating technologies that are need-supporting at one level but need-undermining at another. Based on the METUX model and accompanying measures of need-satisfaction, robot developers can make iterative improvements in social robot design to optimize well-being.

Within the adoption sphere, the central question is the extent to which a person’s motivation to adopt the technology is autonomous. When the adoption of technology is perceived as autonomous, the technology is used out of free will and because it aligns with the user’s values and goals. In contrast, when the adoption of technology is perceived as controlled, the technology is used because someone else demands it. Within the interface sphere, the focus is on how the user interface of the technology supports the satisfaction of the three basic needs for autonomy, competence, and relatedness. Within the task sphere, the question is to what extent engagement in a technology-specific task (e.g., monitoring your heart rate) supports the satisfaction of the three basic needs. The behavior sphere focuses on the extent to which the technology improves need-satisfaction concerning the behavior that the technology is intended to support. For example, monitoring your heart rate might be used to measure exercise intensity and to exercise at a safe and effective level. In this way, exercising might fulfill the need for competence and support an overarching goal (improving physical fitness). For technologies that are designed to improve one’s overall well-being, the life sphere is also critical. For example, socially assistive robots such as Paro are designed to provide comfort, companionship, or stress reduction for elderly patients [162]; in the longer term, this may improve their overall well-being in life. Last, within the society sphere, technology may improve societal well-being. For example, a technology supporting mindfulness practices may lead to improved mental well-being on a societal level, beyond the single user [124].

In our application of the METUX model to HRI, we focus on four spheres of the METUX model: the robot’s interface, the robot-related tasks performed by the user, the person’s behavior assisted by the robot, and how the robot could influence the individual’s overall experience of life. We exclude the adoption sphere because of “its peripheral role preceding actual use” [124, p.6]. It is important to note that in METUX, the term adoption does not refer to the long-term acceptance process as typically used in HRI (e.g., [46]), but rather to one point in time when the initial decision to use a robot is made. In our application of METUX, we explicitly aim to focus on longer-term human–robot interactions that follow the initial adoption (i.e., the decision to use the robot for the first time). We also exclude the society sphere as it extends beyond individual user experiences.

5 Basic Psychological Needs in HRI

So far, only a few studies have explored how to meet people’s psychological needs for autonomy, competence, and relatedness in HRI (e.g., [22, 103, 116]). Although the field of Human-Computer Interaction (HCI) is increasingly considering psychological factors that impact well-being in technology design [27, 50, 129, 157], there is a lack of HRI-specific frameworks for designing robots that support well-being. Such frameworks are crucial because HRI involves physically embodied interaction, and a robot’s ability to physically and socially interact with its environment affects people’s perceptions of the agent. Thus, existing HCI frameworks are inadequate and an HRI-specific framework is necessary to understand the unique ways in which social robots can impact people’s well-being.

Based on the principles of SDT and its associated METUX model, it can be argued that social robots are most likely to support well-being if they support autonomous motivation, which means fulfilling users’ needs for autonomy, competence, and relatedness. Thus, we suggest placing people’s basic psychological needs for autonomy, competence, and relatedness at the core of social robot design. Integrating these needs in robot design is expected to significantly contribute to people’s psychological well-being, engagement, and self-motivation. Ideally, social robots should support people in attaining their intrinsic, longer-term well-being goals (e.g., personal growth, meaningful relationships with others, and contributing to one’s community) [184].

Before we detail the possible implications of this psychological need-perspective on various spheres of experience (i.e., interface, task, behavior, life; see Sect. 6), we will first provide a basic introduction to the meaning of the basic needs in HRI. Drawing from current research in HRI, we argue that each of the three needs - autonomy, competence, and relatedness - appears to be a crucial factor in the context of social robots. While we do not aim to provide a comprehensive review of these concepts here, we will present some examples of how research has explored these concepts in HRI or which related concepts have been utilized.

5.1 Autonomy

Autonomy is characterized by willingness and volition to make choices and act upon them. To support autonomy, the social environment should acknowledge the person’s wishes, preferences, and perspectives, and provide them with choices and rationales for their behavior. Self-endorsed actions, thoughts, and feelings are also essential components of autonomy. This need for autonomy is important for healthy psychological development and functioning [143] as shown in diverse contexts (e.g., [9, 61, 118, 142, 155, 163]). Frustration of autonomy can lead to feelings of pressure and conflict, such as feeling pushed in an unwanted direction. Given the crucial role of autonomy in people’s healthy functioning, supporting autonomy can be considered a crucial component of effective behavioral interventions.

Autonomy can be seen as a key concept in HRI and has been extensively examined in social robot design. Human autonomy is at the center of the design process, according to ethical guidelines from the Institute of Electrical and Electronics Engineers (IEEE) [32], and as such, it is an essential design feature in HRI. With social robots becoming increasingly autonomous, questions about humans’ perceptions of and reactions to different levels of robot autonomy have been a central topic in HRI (e.g., [91, 147, 202]). The balance between human autonomy and machine autonomy has also been the subject of several ethical discussions [23, 56, 62, 78].

In essence, social robots hold great potential to enhance people’s sense of autonomy or self-regulation in their daily lives. Current research has explored how social robots can help users accomplish what they want and consider important, mainly in the context of assisting older people in nursing homes or at home and helping them maintain their autonomy (e.g., [31, 106, 112, 125, 161, 190, 202]). Robots have also been shown to support users in achieving their self-determined goals in other contexts. For example, robots can remove obstacles (e.g., picking up fallen objects from the ground) [145], aid autistic children in developing social skills (e.g., [148, 151]), or assist children in acquiring self-regulated learning skills [42, 80]. In all these ways, robots can contribute to users’ autonomy in life.

5.2 Competence

Feeling competent means experiencing mastery and effectiveness. Competence is supported by the social environment when individuals are provided with optimal challenges and opportunities so they can use and expand their skills and expertise. This requires that goals and activities are challenging, though not overwhelming. Competence is also supported by providing consistent and clear expectations, rules, and consequences. When competence is frustrated, it leads to feelings of ineffectiveness, failure, and helplessness. The impact of competence on intrinsic motivation has been demonstrated in a wide variety of contexts [57, 159].

Central to social robots is that they are designed to help people achieve their personal goals effectively. Especially in educational settings, social robots are increasingly being used as tutors to teach new skills, such as word learning in both school-aged children [42, 82, 192] and adults [154], reading [68], grammar [83], mathematics [149], and speaking skills [70]. They are also being used as social behavior agents, for example, to promote healthier eating habits [132].

In current HRI research, the feeling of competence is often examined through the concept of self-efficacy. Self-efficacy has been conceptualized as a strong predictor for performance, interaction satisfaction, and evaluation in HRI [134]. Research has shown that a robot’s interaction style is a key factor in increasing people’s perceived self-efficacy in HRI [170, 200]. Studies have also demonstrated how social robots can improve users’ self-efficacy in contexts such as diabetes management [26, 29, 131], post-stroke rehabilitation [170], or mathematics learning [104].

Another line of research has focused on the warmth and competence dimensions in social robots and examined how users evaluate a robot’s competence and respond to it [36, 102, 109, 152, 153]. However, to the best of our knowledge, these studies have not related perceived robot competence to a user’s own sense of competence.

5.3 Relatedness

Relatedness refers to the fundamental need for interpersonal attachments and a feeling of belonging. The need for relatedness is supported by an individual’s social environment when others show interest in the person’s activities, are empathetic in responding to a person’s feelings, and convey that the person is significant, loved, and cared for [11, 85]. When relatedness is frustrated, this may lead to social alienation, exclusion, and loneliness [186].

Relatedness is the third basic psychological need that plays a central role in the internalization of extrinsically motivated activities [186]. So, next to a feeling of volition (i.e., autonomy satisfaction) and effectiveness (i.e., competence satisfaction), relatedness satisfaction is required for intrinsic motivation. Activities accompanied by a feeling of connection with those encouraging such goals and activities are more likely to be well-internalized [186].

In the context of technology, the need for relatedness may not always be relevant to consider [74, 122]. Most studies that have taken the role of relatedness in technology use into account focused on how relationships are mediated by technology and how technology use is increasingly social. While in these studies the need for relatedness is satisfied by interactions that are mediated through technology, a social robot holds the potential to be a source of relatedness satisfaction in itself, for example, through social bonding [38, 100, 174]. Relatedness is then supported not by interactions mediated through the social robot but by interaction with the robot itself.

We envision that relatedness support in human–robot interactions can thus take two pathways. First, a robot could evoke companionship and perhaps even relatedness in specific situations. For example, research shows how social robots’ relational verbal behaviors can contribute to children’s perceptions of a robot as a friend [84]. Cobots, too, have been regarded as team players or social entities in several contexts [35], even in manufacturing contexts where there seems to be little need for sociality with robots [146]. Thus, social robots may give people a feeling of belonging [165]. Second, a robot could facilitate social activities with others or with other users of the robot [39, 199], thereby helping users obtain relatedness support from other people in their environment.

6 Need-Fulfillment in Different Spheres of Experience in HRI

Now that we have outlined the general meaning of basic need-fulfillment in the context of HRI, we will discuss the various spheres of experience in social robot design that can influence the fulfillment of these needs. Each of these spheres has a relationship with engagement, motivation, and well-being. Therefore, it is crucial for the design of social robots to closely examine need-fulfillment within these spheres.

First, we will discuss examples of design implications in the interface and task spheres. To do this, we will situate our work within current research in HRI and build upon it to demonstrate how basic psychological needs can be further integrated into future social robot design. Thus, we review past work and utilize it to conceptualize potential future directions. Table 1 illustrates the various ways in which need-fulfillment can be achieved in social robot design within these spheres. These examples are not exhaustive but rather serve to inspire others to consider these spheres and understand the possible implications of basic need-fulfillment within them. Depending on the specific robot type and its application in particular contexts, robot developers and researchers should tailor these paths of need-fulfillment. The relevance of each of the three needs in the various spheres, as well as specific design guidelines and ethical risks [193, 195], should be considered based on the use case.

After discussing the interface and task spheres, we will discuss the behavior and life spheres, which are particularly relevant in researching the consequences of social robots. In the latter two spheres, the focus is on how the robot is actually used in practice. The evaluation revolves around determining if robot implementation leads to need-fulfillment and subsequently to motivation, engagement, and well-being.

6.1 Interface Sphere

At the interface level, a well-being-supportive design of a social robot entails that the user’s direct interaction with the robot should support the satisfaction of psychological needs. Therefore, the specific modes of communication with the robot and the robot’s non-verbal and verbal behaviors toward people should contribute to fulfilling autonomy, competence, and relatedness. Consequently, using or interacting with the robot becomes satisfying.

6.1.1 Interface: Autonomy

Placing human autonomy at the core of the user interface means that social robots should offer users useful options and choices while using the robot. This allows people to adapt how the robot interacts with them to their specific needs and preferences. Individual differences, such as developmental age [150] and voice preferences [28], play a role in users’ attitudes, usage, and engagement with social robots. Offering choices in interaction modes (e.g., through speech or via a tablet interface [99]), interaction flow [131], voice selection, or sound volume empowers users and enhances their sense of freedom, regardless of the specific task that is performed by the robot or the behavior of the user that is supported by the robot. In terms of the specific communication strategies applied in social robots, autonomy can be built in by fostering open-ended dialogues [105], or employing motivational interviewing techniques [164], including asking open-ended questions, and applying mirroring and summarizing [18]. Preliminary studies indicate that social robots can effectively conduct motivational interviews, serving as agents for behavior change [132]. These robots are perceived as nonjudgmental and encourage users to freely express themselves [164]. Additionally, when providing instructions, it is important for the robot’s language to be non-controlling, avoiding words like “should,” “must,” and “have to” [168].
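As a concrete illustration, such interface choices can be exposed as a user-editable interaction profile that the robot consults before each session. The following sketch is our own hypothetical example; the class, field, and function names are not drawn from any cited system.

```python
from dataclasses import dataclass

@dataclass
class InteractionProfile:
    """User-chosen interface settings; the user, not the designer, decides
    how the robot communicates (an autonomy-supportive design choice)."""
    input_mode: str = "speech"   # "speech" or "tablet"
    voice: str = "default"       # preferred synthetic voice
    volume: float = 0.5          # 0.0 (quiet) .. 1.0 (loud)
    open_dialogue: bool = True   # favor open-ended questions over directives

def phrase_prompt(profile: InteractionProfile, topic: str) -> str:
    """Phrase a prompt in non-controlling language, honoring the user's settings."""
    if profile.open_dialogue:
        return f"Would you like to talk about {topic}?"  # invites rather than demands
    return f"We can look at {topic} whenever you are ready."
```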

6.1.2 Interface: Competence

A competence-supportive interface of the robot will help people feel confident, effective, and capable of interacting with and using the robot, enhancing their sense of efficacy [91]. Therefore, the robot’s interface and controls should be clear and easy to use, requiring little to no additional instruction [6]. At a basic level, similar to video games, the interface of a social robot should be “intuitive,” meaning it is easily understood and mastered [140]. The intuitiveness of the robot is often assessed by perceived ease of use and anxiety during interactions [6, 67]. Some specific requirements for a competence-supportive interface will be familiar to social robot engineers, as Drury et al [52] developed heuristics based on Nielsen’s usability principles [117]. For example, the robot’s interface should be consistent, have a clear and simple design, provide useful user feedback, and provide shortcuts and accelerators to adapt to the cognitive ability of the user [52]. Concerning robot behavior, it is important for the robot to exhibit consistent and relatively predictable behavior [40, 63, 95, 152]. Note, however, that when a social robot is too predictable in its behavior, it may also be seen as boring, which negatively influences engagement [20, 152].

Table 1 Need-supportive HRI at different spheres of impact

In their way of communicating, social robots should provide users with positive (non)verbal feedback [43, 111, 121, 197] and acknowledge users’ improvements to support them in achieving their goals. Building upon the principles of SDT, providing relevant information as feedback (e.g., “I am pleased that you already took 6000 steps today, only 2000 more and you’ve reached your goal”) rather than solely evaluative feedback (e.g., “Excellent job”) enhances feelings of efficacy and success, which are both related to the fulfillment of competence [118, 140]. An important caveat here is mental or cognitive load: if the feedback contains excessive information, people may feel overwhelmed, and if the feedback is too challenging, it may deplete their cognitive resources [66]. Thus, an optimal level of feedback richness should be determined.
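To illustrate the difference between informational and purely evaluative feedback, goal progress can be turned into feedback directly. Below is a minimal sketch of the step-count scenario above; the function name and goal value are our own illustrative choices.

```python
def informational_feedback(steps_taken: int, daily_goal: int) -> str:
    """Return feedback that carries goal-progress information
    (competence-supportive) instead of a bare evaluation like 'Excellent job'."""
    remaining = daily_goal - steps_taken
    if remaining <= 0:
        return f"You reached your goal of {daily_goal} steps today. Well done!"
    return (f"I am pleased that you already took {steps_taken} steps today, "
            f"only {remaining} more and you've reached your goal.")

# The example from the text: 6000 steps taken toward an 8000-step goal.
print(informational_feedback(6000, 8000))
```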

6.1.3 Interface: Relatedness

For a social robot, the fulfillment of the need for relatedness could be one of the central elements to incorporate into its interface. Most importantly, to create belonging, a robot could provide people with support and care by showing an interest in them. This can be done through several strategic relational behaviors [84], which are likely to contribute to long-term engagement. Previous research has identified a number of features that can be embodied in robots to effectively influence users’ affect. For instance, personalized small talk [44], addressing users by their names [75, 94], referring to previous encounters [191], or incorporating users’ preferences and interests from previous sessions [22] have proven to be effective ways of designing relational behaviors and building rapport with users. Empathy in robot behavior design, such as facial emotional expressions and eye contact, has also been shown to maintain positive relationships with users [94, 144]. Furthermore, studies have explored the role of social robots in encouraging users’ self-disclosure [4, 88, 89, 107]. For example, robots can respond to participants’ input with empathetic responses (e.g., “I understand” or “This is really interesting”), in that way supporting people’s well-being. Another approach is to let the robot self-disclose [25, 103, 181], such as by sharing its backstory [3].

However, social robot interventions do not always effectively stimulate users’ self-disclosure [3, 179]. Regarding children as users, van Straten et al [178] show that when the robot self-discloses, children perceive the robot as having a decreased capacity to adopt their perspective. Other studies demonstrate that children interacting with a robot that encourages them to self-disclose personal information perceive the robot as a friend and show a willingness to interact with it again [84]. Moreover, when a robot self-discloses personal information, children have a higher appreciation for the robot, and its social influence is stronger [181]. Given the mixed findings in both adult and child samples, further research is needed to better understand the nuances and complexities between social robot design and user self-disclosure.
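Several of the relational behaviors discussed above (addressing users by name, referring to previous encounters, reusing preferences and interests from earlier sessions) presuppose a simple cross-session memory. The sketch below shows one hypothetical way to store and reuse such information; it is our own illustration, not an implementation from the cited studies.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RelationalMemory:
    """Minimal per-user memory enabling relational behaviors across sessions."""
    name: str
    interests: list[str] = field(default_factory=list)
    last_topic: Optional[str] = None

def greeting(memory: RelationalMemory) -> str:
    """Greet by name and refer back to the previous encounter, if any."""
    if memory.last_topic:
        return (f"Hi {memory.name}! Last time we talked about {memory.last_topic}. "
                f"How did that go?")
    return f"Hi {memory.name}, it is nice to meet you!"
```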

6.2 Task Sphere

At the task level, a well-being-supportive design of social robots means that the specific tasks performed by the robot (such as providing exercise instructions, suggesting healthy meals, or assisting with homework) are performed in such a way that they are need-fulfilling. The features and functionalities of the robot that accompany the execution of these tasks should be designed to support people’s need for autonomy, competence, and relatedness. All these specific tasks are aimed at supporting users’ overall behaviors. For example, suggesting a healthy meal (robot task) aims to promote healthy eating (an overarching behavior of the user). Providing exercise instructions (robot task) aims to encourage physical activity throughout the day (overarching behavior). And assisting a child with homework (robot task) enables the child to, for example, learn a second language (overarching behavior). The next section will discuss the sphere of overarching behaviors for which the robot is used. Within the current sphere, the focus is on the robot’s identifiable activities that enable, enhance, or augment the performance of such behaviors.

6.2.1 Task: Autonomy

Concerning autonomy, it is important for individuals to have a certain degree of freedom and choice in relation to robot-related tasks. Feeling autonomous in task performance ultimately enhances intrinsic motivation to engage in the overarching behavior facilitated by those tasks. To satisfy their need for autonomy, people could have the option to choose which tasks the robot performs, the time frame for task completion, the order of tasks, and the specific ways tasks are executed [22]. Van Minkelen et al [111] previously showed the impact of autonomy satisfaction on intrinsic motivation in word learning, where a social robot allowed children to choose between playing a game with three pictures or four pictures. Building upon this, other choices can be incorporated into social robot interventions. For example, patients could choose between being reminded to take medication by default or only when it has been a certain number of hours since the last intake. A robot could be programmed to assist within specific time frames, giving people control over task scheduling. And a patient could choose between different types of exercises to engage in for physical activity throughout the day. As already shown in child-robot interactions by de Greeff et al [47], the possibility of switching between activities is related to users’ motivation.
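One way to operationalize such choices is to have the robot present tasks as a genuine menu rather than a fixed plan. A minimal illustrative sketch follows; the option names are hypothetical.

```python
def offer_task_choice(options: list[str]) -> str:
    """Present tasks as a choice, leaving selection and order to the user."""
    numbered = "; ".join(f"({i + 1}) {option}" for i, option in enumerate(options))
    return (f"Which would you like to do first? {numbered}. "
            f"You can also skip any of these today.")

# e.g., a physical-activity scenario:
print(offer_task_choice(["a short walk", "stretching exercises", "balance practice"]))
```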

It is reasonable to assume that sometimes the robot’s suggested tasks may seem uninteresting or difficult to the user. For example, the user might be asked to practice a skill repeatedly. Previous research shows that experiences of frustration, boredom, or stress can negatively impact intrinsic motivation [128]. Therefore, in such cases, the robot must show empathetic behavior and acknowledge the user’s negative feelings towards the task. We suggest that by acknowledging and accepting the user’s negative feelings and explaining the importance of the activity, a robot can assist individuals in engaging with and deriving benefits from the task. Thus, a social robot should be capable of explaining in an empathetic manner why a specific activity is valuable to the user. Previous studies on users’ self-disclosing behaviors indicate that robots can offer such emotional support during conversations by providing empathetic reactions, particularly when used for an extended period [88], or when the user notices a change in the robot’s response from a neutral to a positive listening attitude [114].

6.2.2 Task: Competence

To fulfill people’s need for competence at the task level, it is essential for the robot to offer optimal challenges during task performance [127, 149, 154]. This ensures that individuals feel confident in their abilities and perceive the activity as neither too difficult nor too easy for regular engagement. For example, when the level of the assignments given by the robot is too high, users will feel overwhelmed and stop practicing [15]. Maintaining long-term motivation and engagement requires finding the right balance. The robot should adapt the activity’s difficulty to match the user’s current (cognitive or physical) capabilities, enabling individuals to expand their skills and capabilities [73, 103, 118].
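A common way to approximate such an optimal challenge is a success-driven staircase: raise the difficulty after consistent success and lower it after repeated failure. The sketch below is our own minimal illustration, not a method taken from the cited studies.

```python
def adjust_difficulty(level: int, recent_successes: list[bool],
                      min_level: int = 1, max_level: int = 10) -> int:
    """Keep the task neither overwhelming nor boring via a simple staircase rule."""
    window = recent_successes[-3:]
    if len(window) < 3:
        return level                      # not enough evidence yet
    success_rate = sum(window) / len(window)
    if success_rate == 1.0:
        return min(level + 1, max_level)  # consistently succeeding: raise challenge
    if success_rate <= 1 / 3:
        return max(level - 1, min_level)  # mostly failing: ease off
    return level                          # within the optimal-challenge band
```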

Next, it seems important that people are provided with opportunities to acquire new skills or abilities while using the robot, even when the robot takes over tasks that would normally provide such opportunities. For example, when a social robot is used to help diabetes patients make healthy food choices, this robot may provide suggestions for healthy meals and snacks during the day [131]. It is then important that users are also given opportunities to learn to make these types of decisions themselves. Otherwise, in the longer term, using the robot may hinder people from making such healthy choices themselves or from knowing why a snack is regarded as healthy or not.

Computational models have proven effective in estimating users’ skill levels [60, 97, 149]. Additionally, providing relevant feedback on mastering tasks [118] could be an important way for social robots to support competence. It may also be important to offer a variety of tasks, as it allows individuals to engage in new activities and experience novelty regularly. Providing a range of activity options may also contribute to people’s well-being, as demonstrated in other contexts [158, 160].

Especially in the field of social educational robotics, personalizing robot-related tasks has become a key area of research [79]. For example, researchers have focused on adapting tasks to individual children’s progress [98]. In an experimental study, Leyzberg et al [98] provided personalized lessons to one group of participants, using an adaptive Hidden Markov Model personalization system. The system selected a lesson based on the learner’s specific skill that required more practice. The other group received non-personalized lessons, randomly chosen from the remaining lessons. Participants in the personalized condition significantly outperformed those in the non-personalized condition.
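The cited system used an adaptive Hidden Markov Model to estimate which skill needed more practice; as a deliberately simplified stand-in for that idea, the lesson-selection step can be sketched as follows. This is our own illustration, not the authors’ implementation, and the skill names and mastery estimates are hypothetical.

```python
def select_lesson(mastery: dict[str, float]) -> str:
    """Pick the lesson targeting the skill with the lowest mastery estimate (0..1)."""
    return min(mastery, key=mastery.get)  # the weakest skill needs practice most

# e.g., estimates updated after each exercise:
print(select_lesson({"addition": 0.90, "subtraction": 0.55, "fractions": 0.35}))
# -> "fractions"
```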

In another study, Szafir and Mutlu [171] conducted a laboratory experiment where participants interacted with a humanlike robot and received instructions. Some participants interacted with an adaptive agent that monitored their real-time EEG engagement data. When a drop in attention was detected, the robot displayed adaptive immediacy cues, such as increasing volume and using arm gestures to re-engage the student. Other participants did not receive such adapted instructions from the robot. The study shows that the use of adaptive agents significantly improves recall performance in a narrative task, resulting in better educational outcomes. It is important to note, however, that the study did not examine participants’ perceptions of need-satisfaction and that the use of increased volume and arm gestures might frustrate users’ need for autonomy.

These studies highlight the positive impact of personalization and adaptation in educational robotics, showing how tailored experiences and responsive feedback can enhance learning and performance.

6.2.3 Task: Relatedness

Regarding the fulfillment of relatedness within the task sphere, designing a robot as a social mediator between people is a crucial approach to support relatedness [39]. The robot can engage users in activities that help them form and maintain fulfilling relationships with others. However, the suitability of involving others in the accompanying activities at a task level depends on the desired ultimate behavior. For example, if the overarching behavior is daily exercise, activities like a robot reminding someone to exercise do not directly involve relatedness with others. However, if the goal is to improve communication skills, an activity such as inviting the user to provide feedback to another person can more directly fulfill one’s need for relatedness.

Still, even in seemingly individualistic activities, opportunities can be found to support the need for relatedness. For example, in the context of exercise, a social robot could suggest finding an exercise partner with whom the user can engage in exercises the next day. By facilitating social connections and interactions, the robot can contribute to fulfilling the user’s need for relatedness, thus enhancing well-being.

Depending on the desired ultimate behavior, social robots can of course also offer tasks that directly foster empathy or compassion for others. For example, social robots provide the opportunity for people to engage in role-play, which is a powerful tool for empathy development and can contribute to forming or sustaining high-quality relationships with others [77, 101].

Fulfilling users’ needs for relatedness can also help them perform tasks that may seem boring, difficult, or uninteresting. In such cases, it is beneficial if users feel that the robot likes, respects, and values them. When individuals perceive this positive regard, they are more likely to exhibit autonomous forms of motivation, even for tasks that are not inherently intrinsically motivating. Previous research has already shown that artificial companions capable of displaying empathetic behavior are more successful in establishing and maintaining positive relationships with users compared to agents that behave neutrally [93]. This includes robot behaviors such as recognizing people’s emotional responses and responding to them appropriately.

6.3 Behavior Sphere

While the interface level focuses on the satisfaction of the interaction with the robot, and the task level is concerned with specific actions performed with the robot, the behavior sphere addresses the effectiveness of the robot in supporting the desired behavior. At the behavior level, a well-being-supportive design of social robots means that need satisfaction is experienced while engaging in the overarching behavior that the robot is intended to support (e.g., exercising, communicating with others, eating healthy, or practicing mindfulness). When the performance of this behavior is need-fulfilling, it contributes to intrinsic motivation and ultimately promotes well-being in the broader life sphere. Researchers examining this sphere may investigate the effectiveness of robot usage (e.g., whether using the robot leads to a healthier BMI or increased confidence in social conversations), as well as the levels of need-fulfillment experienced during the behavior.

It is important to note that the behavior within this sphere of experience is not necessarily dependent on the use of the robot. For example, in the context of practicing meditation, the interface sphere may focus on the satisfaction of interacting with a robot that teaches meditation, while the task sphere may evaluate the satisfaction of completing a meditation guided by the robot. At the behavior level, however, the key considerations are whether the use of the robot results in a higher willingness to meditate and whether need-fulfillment is experienced while meditating. Ultimately, the performance of the behavior should not rely solely on whether the robot stimulates the user to perform it. If this is not the case, or if need-fulfillment during the behavior is not optimal, the design of the interface and task spheres should be revisited to ensure optimal need-fulfillment in the behavior sphere.

6.3.1 Behavior: Autonomy

Fulfilling autonomy within this sphere of experience involves cultivating an intrinsic willingness to engage in the desired behavior (e.g., exercising, eating healthily, participating in social activities with others). To achieve this, it is important for individuals to feel a sense of freedom and choice during the behavior [124]. For example, when it comes to exercising, individuals should feel empowered to make decisions about the type of exercise, exercise difficulty, or exercise frequency. This sense of freedom during exercise is then largely a result of how the robot is designed at the interface and task levels, as the behavior should also be performed independently of the robot (e.g., feeling more motivated to take a walk during the lunch break due to previous encouragement from the robot in the task sphere).

Especially for behaviors that are less likely to be performed out of intrinsic motivation, it can be helpful if a social robot regularly provides meaningful explanations for why a certain behavior is important for the user to engage in. Acknowledging that the requested behavior may evoke negative feelings such as boredom or fatigue can also assist individuals in performing the behavior with greater intrinsic motivation [168]. For example, adhering to medication schedules [169] is often not driven by interest or enjoyment, unlike activities such as exercising. However, based on the principles of SDT, we conceptualize that when a robot acknowledges feelings of resistance and explains the importance of timely medication intake, the user will be less likely to rely on purely controlled or extrinsic motivation. The user will internalize this extrinsically motivated behavior to some extent, resulting in a higher likelihood of performing the behavior without the assistance of the robot.
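Such autonomy-supportive messaging can be templated: acknowledge the feeling, give a meaningful rationale, and avoid controlling words like “must” or “have to”. A minimal sketch under these assumptions follows; all strings are illustrative, not validated intervention content.

```python
def autonomy_supportive_reminder(behavior: str, feeling: str, rationale: str) -> str:
    """Acknowledge resistance, then explain why the behavior matters,
    ending with a choice rather than a demand."""
    return (f"I understand that {behavior} can feel {feeling} right now. "
            f"It matters because {rationale}. The choice is yours; "
            f"would you like me to check in again in ten minutes?")

print(autonomy_supportive_reminder(
    "taking your medication", "tedious",
    "a steady schedule keeps the medicine working well"))
```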

6.3.2 Behavior: Competence

One of the most important ways in which social robots could satisfy the need for competence within the behavior sphere is by helping people feel confident while performing the desired behavior and experience an optimal challenge while doing so. Similar to the task sphere, the behavior the user needs to perform should be neither too difficult and overwhelming nor too easy and boring [127, 149, 154]. Again, whether competence is fulfilled within the behavior sphere may largely be the sum of how competence-supportive the interface of the robot and the tasks given by the robot are.

6.3.3 Behavior: Relatedness

The user’s need for relatedness within the behavior sphere will be fulfilled when the user experiences belonging to others while performing the behavior, for example, by sharing a common bond with others who perform the behavior, by feeling accepted by them, and by feeling companionship. Similar to the fulfillment of relatedness within the task sphere, this may not be relevant for all behaviors (i.e., behaviors that are performed in a more individualistic way).

6.4 Life Sphere

The previous sections focused on using social robots for specific behaviors, while the life sphere examines the consequences of using a robot on one’s overall satisfaction with life. The extent of the measurable impact of a social robot on life depends on its intended purpose. Especially for social robots that deliberately aim to impact overall well-being, it is critical that what is learned while using a social robot transfers to other aspects of life. Only then can the use of a social robot be expected to improve people’s overall well-being in life. However, even when a social robot is not specifically designed to improve well-being, it should at least not harm well-being. Therefore, satisfying the user’s basic psychological needs would be beneficial.

6.4.1 Life: Autonomy

In terms of autonomy, it is crucial within this sphere to prevent people from becoming excessively engaged with or overly reliant on the robot [30]. It is also important to avoid situations where individuals feel incapable of making decisions without depending on the robot. While robots offer opportunities for pleasurable and enjoyable activities, it is essential to ensure that their use does not become compulsive or serve solely the interests of the robot’s creator [30]. People should maintain their autonomy in life, meaning they can make choices based on their personal values without feeling obligated to do things simply because the robot instructs them to. Previous research demonstrated that participants who were encouraged by a robot took more risks, highlighting the need to consider the potential pressure exerted by robots, especially in well-being programs that often target vulnerable user groups [64]. Again, the extent to which autonomy is experienced in this sphere depends on how the robot is designed at the interface and task levels. Researchers focusing on this level can examine whether individuals are still acting in accordance with their personal values or whether they feel compelled to rely on the robot more than they would prefer.

6.4.2 Life: Competence

Regarding competence, the life sphere is concerned with the transferability or generalizability of acquired skills to other contexts. The success of a robot relies on people feeling that it enhances their self-efficacy and abilities in life. Therefore, it is important that individuals not only perceive themselves as effective in using the robot or accomplishing tasks assigned by the robot, but are also able to apply the acquired skills or competencies in different situations. For example, when implementing a robot to improve the social skills of autistic children, it is important that they not only learn and practice skills with the robot but also generalize those skills to interactions with adults in different environments [13, 37]. Long-term research is needed to examine need-fulfillment in the life sphere, although such research is more challenging [12]. Again, it should be noted that excessive reliance on the robot may diminish users’ confidence in real-life situations, as they become overly dependent on the presence of the social robot [30].

6.4.3 Life: Relatedness

With respect to relatedness, two main points seem most important to consider within the life sphere. First, it is important that the use of a social robot contributes to a feeling of being connected to others in life, or at least does not crowd out human relationships [108]. In a negative scenario, over-engagement with the robot could come at the expense of other social contacts. Second, if the robot is intended to teach the user new social skills, these skills must transfer from the interaction with the robot to other domains in life [13, 37]. For robot designers, it is thus important that the robot’s behavior is based on human norms and values and that the robot acts in human-like ways.

The precise impact of robot use in the life sphere may be difficult to assess, as this impact may be somewhat diffuse or diverse. Take, for example, a robot that helps in learning mathematics. At the behavior level, such a robot will help the user to do calculations. As a result, in the life sphere, the user may become more proficient at managing money (competence) or may be able to do grocery shopping independently and make their own decisions on how to spend money (autonomy). Concerning relatedness, the user may save some money that can now be spent on a new hobby, through which the user meets new people and forms new friendships. However, whether such improvements in life are accomplished does not depend solely on the use of the robot; it may also be contingent on other factors in one’s life (e.g., support from others, a stable income, an individual’s personality) [76]. Still, the robot may be “contributing to a cumulative effect that could increase individual or even societal well-being measurably over time” [124, p. 10].

7 Discussion

This paper proposes incorporating a psychological need-fulfillment perspective into HRI to design social robots that support well-being. Based on the METUX model and key principles of self-determination theory, we argue for putting users’ psychological needs for autonomy, competence, and relatedness at the center of robot design. This approach extends previous work on social robot design and research in several ways.

First and foremost, our paper presents a novel lens on the relationship between social robot usage and improvements in users’ well-being. Current literature provides little insight into which underlying mechanisms lead to healthy forms of motivation and engagement during social robot interactions and which lead to unhealthy forms. For improved well-being, such healthy forms of motivation and engagement are crucial. However, the application of formal theories of well-being and motivation to the specific context of social robots is rare, with only a few exceptions [83, 111, 116, 149]. Without such theory-based studies, the literature on social robots runs the risk of consisting of one-shot empirical findings that lack theoretical explanations for why social robot interventions are effective (i.e., motivating, engaging, and contributing to well-being) or not. The psychological need-fulfillment perspective presented in this paper conceptualizes underlying mechanisms for fostering motivation, engagement, and well-being, and we discussed several implications of taking this perspective toward HRI research and design.

Second, by distinguishing between various spheres of experience for need fulfillment, this paper provides a holistic view of the well-being outcomes of social robots. We showed how need-satisfaction related to social robot use can be experienced within the interface, task, behavior, and life spheres. Such a holistic view of well-being in the context of social robot use is an important addition to current literature, which typically investigates well-being in only one sphere of experience. The METUX model shows that well-being is not guaranteed in this way, as it can be undermined in other spheres of experience. To establish overall well-being, it is therefore important that a multidisciplinary team works on robot design and implementation across all spheres of experience. As stressed by Peters et al. [124], the boundaries between the spheres, and between the needs within those spheres, are conceptual and somewhat artificial; in reality, they may overlap and interrelate.

In the current paper, we cannot offer a precise technical blueprint for endowing robots with need-fulfillment capabilities. Instead, we provided initial directions based on SDT, a widely tested and validated theory. We hope this inspires scholars to further translate these directions into design principles that practitioners can use to develop social and socially assistive robots that promote user well-being. As user experiences are highly context-dependent [8, 65, 92], future research should examine the best ways to implement autonomy, competence, and relatedness in social robot design across different contexts (e.g., different types of social robots, user groups, and purposes), thereby externally validating such design principles.

It is important to note that the framework discussed in this paper focuses on the use of social robots in an isolated setting. For analytical purposes, this may be meaningful, but in real life, the robot will often be used alongside other actors in the user’s environment who can also provide need support (e.g., a teacher, therapist, friends, or family). The robot can complement these actors in their need-fulfilling role, and vice versa. The framework conceptualizes how need fulfillment can be optimally incorporated into the design of social robots at the interface and task levels. Empirical research on long-term interactions in ecologically valid environments can then examine the relative strength of need-support provided by the various actors (including the social robot) and its consequences for well-being.

7.1 Limitations

An important limitation of this paper is that we focused on the interface, task, behavior, and life spheres but excluded the adoption and society spheres. We encourage other researchers to conceptualize and investigate these spheres as well. For example, in the adoption sphere, taking self-determination into account may add to the current literature on the acceptance of social robots. Following SDT, the adoption of a social robot is more likely if people are autonomously motivated to adopt it. When the use of a social robot is expected to enhance one’s experience of autonomy, competence, or relatedness in the life sphere, people are more likely to adopt and use the social robot. In contrast, when people’s motivations to adopt the social robot are perceived as externally controlled (e.g., "I use this robot because others tell me to do so"), they are less likely to adopt the robot and use it in the long term. In the society sphere, the impact of social robots on the well-being of people not directly using them could be investigated more closely, for example by comparing the need-fulfillment and well-being scores of user and non-user groups. Furthermore, a society’s norms and beliefs about robots can influence individual user experience, and as such, future research should consider the society sphere closely [196, 201].

7.2 Future Research

The ideas proposed in this paper invite scholars to explore several areas for future research in the field of HRI. In particular, examining the effects of need-supportive design on motivation, engagement, and well-being can offer valuable insights into designing robots that better support basic human psychological needs. To translate the concepts and ideas in this paper into design practice, it is crucial to measure the effects of design choices on the fulfillment of autonomy, relatedness, and competence in the various spheres of experience. Existing measures of need-fulfillment were developed for HCI and will require adaptation to apply to social robots. Future research should therefore focus on developing specific measures of need-support during human–robot interactions, taking into account the precise meaning of the basic needs in HRI within the different spheres of experience. For example, while relatedness might not be relevant in some HCI contexts, we showed that it is highly relevant in HRI. Thus, existing scales, as suggested by Peters et al. [124], should be validated for HRI. Once validated, these scales could provide valuable feedback in design cycles, including the detection of need-frustration in the different spheres of experience.

Measuring users’ need-fulfillment in the various spheres may reveal that features need to be altered or added. Some features may not immediately appear to contribute to need-fulfillment, and their effects may only become apparent over time. For example, if playing with a robotic toy becomes addictive, it can negatively impact the user’s autonomy in the life sphere. Similarly, there are concerns that autistic children may find interacting with a social robot too rewarding [2], or that sex robots may outperform humans in sexual tasks or even replace human partners [166, 173]. Both cases could have a negative impact on the fulfillment of relatedness in life. Overall, this suggested research area could help improve the design of robots and support their effective integration into human environments.
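
As a purely illustrative sketch of how such validated scales could feed back into design cycles, the code below aggregates hypothetical item ratings per need and per sphere and flags low means as potential need-frustration. The items, the 1–7 scale, and the cut-off are assumptions for illustration, not validated instruments.

```python
# Hypothetical sketch: aggregating need-fulfillment ratings per sphere of
# experience to flag potential need-frustration during a design cycle.
# The items, the 1-7 scale, and the cut-off are illustrative assumptions.

from statistics import mean

# ratings[sphere][need] = item scores from a (hypothetical) adapted scale
ratings = {
    "interface": {"autonomy": [6, 5], "competence": [6, 6], "relatedness": [5, 6]},
    "task":      {"autonomy": [5, 6], "competence": [4, 5], "relatedness": [5, 5]},
    "behavior":  {"autonomy": [5, 5], "competence": [5, 4], "relatedness": [4, 5]},
    "life":      {"autonomy": [2, 3], "competence": [5, 5], "relatedness": [3, 2]},
}

FRUSTRATION_CUTOFF = 3.5  # assumed threshold on a 1-7 scale

for sphere, needs in ratings.items():
    for need, items in needs.items():
        score = mean(items)
        if score < FRUSTRATION_CUTOFF:
            print(f"Possible {need} frustration in the {sphere} sphere (mean {score:.1f})")
```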

Second, future research could investigate the transfer of skills, behaviors, and attitudes from robot interactions to the broader context of daily life. Understanding how to support the long-term integration of robots into human environments is crucial for their effective use and for promoting enduring well-being. However, research into what fosters a successful transfer of skills and behaviors from robot interactions to daily life is currently limited. Conducting research in this area could promote social robot use in a variety of settings.

Last, research should analyze the ethical implications of need-fulfillment by social robots. For example, there are potential ethical concerns around the fulfillment of the need for relatedness by robots. SDT states that the need for relatedness is fulfilled when people feel that others are genuinely interested in them, respond empathetically to their feelings, and care for them [49]. While robots could potentially fulfill this need for relatedness, they are of course not genuinely interested in the user. It could therefore be problematic that social robots deceive users in this respect, especially when it concerns vulnerable users (as is often the case with social robots targeted at improving psychological well-being). Similarly, eliciting self-disclosure may be necessary for creating relatedness, but it may also lead to the sharing of private and sensitive data with the robot [21, 53]. To identify and reduce such risks and to ensure that the use of social robots is ethical and responsible, an Ethical Risk Assessment [193, 194] is essential.

7.3 Conclusion

To conclude, this paper may be read as a call to adopt a psychological need-fulfillment perspective in HRI. The suggestions made in this article are not intended to be exhaustive but serve as a starting point to inspire further research and discussion. Although SDT is a widely validated theory, its application in the field of HRI is new, and we encourage others to engage with this topic and contribute to the growing body of knowledge. We hope that this paper will encourage others to put the three psychological needs at the center of design processes and to consider them when researching social robots.