1 Introduction

Cognitive and social sciences have established that human interactions are fundamentally based on normative principles. In particular, most human interactions are influenced by profound social and cultural standards, so-called social norms (Bicchieri and Muldoon 2014). Politeness is a universal social norm in human–human interaction, resulting in an appropriate behavioral pattern expected from people in different contexts and situations. The effects of politeness within interpersonal relationships have been extensively studied by social psychologists, anthropologists, and sociologists (Goffman 1967; Brown et al. 1987). These studies have shown that individuals adopt politeness rules to preserve what has been called face. The concept of face has social and psychological implications related to honor, public image, positive social value, or prestige claimed for oneself. Face is considered a universal psychological need for human beings as social entities.

As far back as 1996, Reeves and Nass (1996) presented the Computers Are Social Actors (CASA) paradigm, postulating that people’s engagement with computers or other new media reflects the natural and social interactions of real life. More recently, studies about politeness have appeared in human–machine interaction (Hayes et al. 2002). Indeed, considerable effort in artificial intelligence (AI) is devoted to developing human-like intelligent systems to be introduced in everyday life. Such systems are conceived to reproduce many human features (such as natural language, emotions, and anthropomorphic aspects) that enable them to communicate with humans in ways that increasingly resemble interpersonal relationships. As a consequence of such anthropomorphization, researchers are investigating whether social norms also have implications within human–machine interactions. In particular, considerable attention is paid to the social rules of etiquette, such as politeness (Whitworth 2005; Whitworth and Liu 2013). The concept of etiquette in human–computer interaction has been defined by Miller and Funk (2001) as the practice of behaving in ways appropriate to the traditional culture of the work environment. By etiquette, Miller means “defined roles, acceptable behaviors and interaction moves of human and intelligent agent participants in a common setting” (Miller 2000).

In this paper, an extensive review of the existing literature was undertaken to examine the role of politeness norms within human–machine interactions. The primary objective guiding this review was to address the following research question:

  • RQ1 Should the social norms, like politeness, that humans apply in interpersonal interactions also be used in interactions with intelligent machines (such as smart vehicles, social robots, and digital assistants)?

Considering the distinct nature of the parties involved in the interaction, a thorough response to this initial question requires a study from two different perspectives. Accordingly, the following research questions have to be posed:

  • RQ2 What are the reasons why technological devices should exhibit polite behavior towards humans?

  • RQ3 What are the motivations for individuals to exhibit politeness towards technological devices?

To answer these two questions, the works collected from the literature have been separated into two groups and discussed in different sections. The first group contains papers dealing with machines designed to adopt polite interactions with humans, with the aim of examining the impact of politeness on human perception of these machines. The second contains studies dealing with the politeness of humans towards intelligent systems, especially those concerning the CASA paradigm. This category also encompasses works that include systems exhibiting polite behavior; however, the examination aims to investigate how individuals respond, for example, whether they adopt the same polite behaviors towards machines as they would in interpersonal interactions. Moreover, several factors have been analyzed, such as the considerable variety of intelligent systems (e.g., digital assistants, embodied agents, smart vehicles), different human–machine interaction contexts, and the significant variability of individuals (e.g., males, females, young people, older people, people with special needs, or people with cultural differences). Finally, the present study aims to delineate research directions oriented toward the development of more trustworthy, acceptable, and socially competent intelligent systems.

To the best of the author’s knowledge, the present study is the first to conduct a thorough analysis of the literature on politeness in the context of interactions between humans and machines.

It is worth underlining that, throughout the paper, we use the terms machine, system, and device interchangeably to indicate any technological device endowed with capabilities to interact with a human more or less naturally.

The rest of the paper is organized as follows. Section 2 introduces the theory of politeness in human–human interactions and the CASA paradigm. Sections 3 and 4 present the literature review methods and a descriptive analysis of the review results, respectively. In Sects. 5 and 6, a description of the published works is presented. Discussions and conclusions are presented in Sects. 7 and 8, respectively.

2 Theoretical background

This section introduces the theory of politeness and the possible strategies humans adopt in interpersonal interactions to understand the importance of adherence to social norms, such as the rules of etiquette. Moreover, the CASA paradigm is presented as the first work that tries to explain the reasons behind people’s perception of machines as social actors.

2.1 Politeness theory

Politeness (Brown et al. 1987) is a powerful mechanism developed in all cultures to facilitate efficient interactions between individuals. Politeness stems from the innate human ability to consider other people. As social beings, humans have psychological mechanisms to develop relationships and cooperate. A premise of social relations is that people seek benefits through interactions, and the effectiveness of interactions can be seriously compromised when etiquette is ignored. Politeness Theory (Brown et al. 1987; Watts 2003) emerged several decades ago within the framework of the pragmatic approach in linguistics. Today, studies about politeness are conducted in different fields, and models of politeness have been applied in several disciplines, including social psychology, sociology, cultural studies, and artificial intelligence. Many models are still inspired by the work of Brown and Levinson conducted in 1978 (Brown and Levinson 1978). The basic idea of the politeness theory proposed by P. Brown and S. Levinson relies on the notion of face proposed by the sociologist Goffman (1967). Goffman introduced the concept of face as “an image of self delineated in terms of approved social attributes, albeit an image that others might share”, thus referring to the individual’s public identity. Namely, face is the image that people project in their social contacts with others. According to this theory, to achieve successful communication, people use particular strategies that create maximally comfortable environments for interactions. This behavior reflects two different human needs. On the one hand, it reveals a desire to be approved and appreciated by the interlocutor (positive face). On the other hand, it shows the need for an independent point of view and freedom of opinion (negative face).

According to Goffman, participants in conversations strive for stability in their relations, which involves maintaining one’s face and respecting the other person’s face. In some speech acts, such as the refusal of a request, the speaker threatens the other’s positive or negative face, performing a so-called face-threatening act (FTA). People try to maintain their positive self-image until an interaction partner violates the rules of politeness; the act of threatening an individual’s positive self-image may then provoke a behavioral response that goes against the norms of etiquette as a means of upholding that self-image. Violating the rules of politeness triggers adverse emotional reactions (Goffman 1967) and causes offense (Culpeper 1996), leading to negative feelings toward the offender and sanctions by the social network (Blake and Davis 1964).

2.2 Politeness strategies

Politeness consists of techniques to prevent the harm caused by FTAs. For assessing FTAs, three major sociocultural variables are considered (Holmes 2006; Brown et al. 1987):

  • Social Distance refers to the relationship between interlocutors. It is a function of the similarity or difference between the participants and is often determined by the frequency of their interaction (e.g., a great distance is determined when the interlocutors do not know each other).

  • Power refers to the power relation between interlocutors. It can be institutional, as in the relationship between a student and a teacher, or determined individually within a particular relationship. It is the degree to which an individual can impose something on another person.

  • Rank of Imposition refers to the importance or severity of the imposition in the given situation. For example, a high rank would occur when we ask for a great favor.

The greater the social distance between the hearer and the speaker, the more politeness is generally advised. The greater the listener’s power over the speaker, the more politeness is recommended. Likewise, the greater the imposition on the listener, the more politeness is generally advised.
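Brown and Levinson summarize these three variables in a single additive weight for a face-threatening act x performed by a speaker S towards a hearer H:

W_x = D(S, H) + P(H, S) + R_x

where D is the social distance between speaker and hearer, P is the hearer’s power over the speaker, and R_x is the rank of imposition of the act. The weightier the act, the more polite the mitigation strategy that is advised.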

According to Brown and Levinson, there are many ways for one to commit an FTA with a specified weight. The following are the four super-strategies for mitigating face threats, ranked in order of the increasing politeness they convey:

  • Direct Request or Bald On Record—A speaker performs a request baldly and does not try to minimize the threat to the hearer’s face. Direct requests tend to contain an imperative without mitigation. They are brief, avoid ambiguity, and communicate no more than necessary. For example, a speaker who wants the door opened might say open the door. Direct requests are performed when the speaker has significantly more power than the hearer or when the threat is minimal. Moreover, if necessary, a face-threatening act may be performed without mitigation for reasons of urgency or efficiency. An order such as “Call an ambulance!” is not considered impolite, as it is mutually understood that there is no time for mitigating actions.

  • Positive Politeness—Positive politeness strategies aim to reduce the threat to the listener’s positive face. They are intended to avoid giving offense by highlighting friendliness. With positive politeness, strategies such as compliments, jokes, and statements of friendship can be used more freely in conversation without harming a particular relationship. When attention is paid to the listener’s positive face, the social distance between the interlocutors is reduced, and a potential FTA is thus weaker. A speaker using this strategy to ask for the door to be opened might say I am sorry to bother you. Could you please help me with the door?

  • Negative Politeness—Negative politeness strategies are aimed at the listener’s negative face and are meant to avoid any imposition on the listener. They are intended to avoid giving offense by showing deference. We use negative politeness when we want to avoid making the listener feel uncomfortable or embarrassed by what we are saying. Typical devices include hedges that soften a statement, apologies, indirect language, and questions in place of orders, all of which keep the request from sounding too demanding or aggressive. For example, an attempt to open the door using the negative politeness strategy might look like this: Could you please open the door?

  • Indirect Request or Off Record strategy—The speaker makes the request vaguely and uses indirect language. By relying on the literal interpretation of the words, the hearer is shielded from a face threat, while the speaker can save face by denying having committed an FTA. A speaker who needs to access a closed doorway might say the door is blocking my way, expressing the desire to pass through the door without directly requesting it.

The speaker generally chooses more polite strategies depending on the seriousness of the request. However, higher-level strategies come at a cost, such as greater effort and a loss of clarity. Thus, speakers generally do not choose strategies that are more polite than necessary. Moreover, strategies for making polite requests differ across cultures. Since each culture perceives politeness differently, it is also important that a request strategy be tailored to the culture of the interlocutor.
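As an illustration only (not drawn from any of the reviewed works), the following sketch shows how Brown and Levinson’s additive weight could drive strategy selection in, for example, a dialogue system; the numeric scales and the thresholds are hypothetical.

```python
# Illustrative sketch: selecting a Brown-and-Levinson super-strategy from
# the estimated weight of a face-threatening act (FTA).
# The [0, 1] scales for D, P, R and the thresholds below are hypothetical.

def fta_weight(distance: float, power: float, rank: float) -> float:
    """Weight of an FTA: W_x = D(S, H) + P(H, S) + R_x."""
    return distance + power + rank

def choose_strategy(weight: float) -> str:
    """Map the FTA weight to a super-strategy, in order of increasing politeness."""
    if weight < 1.0:
        return "bald on record"       # minimal threat: direct, unmitigated request
    if weight < 2.0:
        return "positive politeness"  # stress friendliness and approval
    if weight < 3.0:
        return "negative politeness"  # show deference, minimize imposition
    return "off record"               # hint indirectly; the request stays deniable

# Example: asking a stranger (high D) with some authority over us (high P)
# for a moderate favor (medium R).
w = fta_weight(distance=0.9, power=0.8, rank=0.6)
print(round(w, 1), choose_strategy(w))  # -> 2.3 negative politeness
```

Under these assumed thresholds, the door example above would be phrased with negative politeness (“Could you please open the door?”), matching the intuition that distance, power, and imposition all call for more mitigation.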

2.3 CASA paradigm

According to Weber (1978), social actors perform social actions to drive themselves towards a specific goal, leading to social interactions. The theory of social action postulated by Weber holds that human behavior is influenced by the social environment and by the degree to which it affects the actions of others: an individual will adjust the corresponding action as necessary if an adverse reaction is predicted.

The first attempt at studying CASA dates to 1996, when Reeves and Nass (1996) introduced the CASA paradigm, postulating that people’s interactions with computers or other new media are inherently social and natural, just like real-life interactions. The proposed methodological approach is based on five main steps: (i) pick a social science finding (theory and method) that concerns behavior or attitudes towards humans, (ii) replace human with computer in the statement of the theory, (iii) replace one or more humans with computers in the method of study, (iv) provide the computer with characteristics associated with humans, and (v) determine whether the social rule still applies. By adopting such a paradigm, Reeves and Nass showed that the social rules guiding human–human interaction can be similarly applied to human–computer interaction. Works on the CASA paradigm are included in the present review and are further detailed in Sect. 6.1.

Fig. 1 PRISMA flow diagram

3 Systematic review methods

To answer the research questions posed for this review, a search of studies about politeness within human–machine interactions published between January 1996 and April 2023 was conducted. The year 1996 has been taken as the starting date for this search since it was the year of the first publication of the CASA theory.

The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework (Moher et al. 2015) guidelines have been used to collect publications related to politeness in human–machine interactions. Both academic databases (Scopus, Web of Science, and DBLP) and broad search engines (Google and Google Scholar) have been used. For collecting related studies, the following search query has been used: (Polite OR Politeness OR Etiquette) AND (Machine OR Computer OR Digital Assistant OR Robot OR Smart Systems OR Intelligent Systems OR Artificial Intelligence) AND Human AND Interaction. Search terms have been applied to article titles, keywords, and abstracts, except for Google and Google Scholar, where the search was also performed on other parts of the article. The subject area has been limited to Computer Science for Scopus and DBLP, and to Social Sciences and Science Technology for Web of Science. Since Google Scholar does not allow restriction of the subject area, only the most relevant articles were examined there; mainly, papers from Google Scholar have been selected according to the relevance ranking algorithm of this academic search engine (Beel and Gipp 2009). Reports containing expert opinions about the impact of politeness on children have been collected from Google, since this is a novel research area where only a few research papers could be retrieved through scientific databases. Finally, some papers have been collected through citation searching. As an inclusion criterion, this study considers papers that deal with politeness as the main element of the study.
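For reproducibility, the boolean string can be composed programmatically. The sketch below is an assumption about its concrete form: the field filter shown (TITLE-ABS-KEY) is Scopus syntax, the other databases use their own equivalent title/abstract/keyword filters, and the exact strings submitted to each database are not reported here.

```python
# Hypothetical reconstruction of the search string described above.
# TITLE-ABS-KEY restricts matching to titles, abstracts, and keywords in Scopus.
politeness_terms = ["Polite", "Politeness", "Etiquette"]
system_terms = ["Machine", "Computer", "Digital Assistant", "Robot",
                "Smart Systems", "Intelligent Systems", "Artificial Intelligence"]

query = (
    "TITLE-ABS-KEY((" + " OR ".join(politeness_terms) + ") "
    "AND (" + " OR ".join(f'"{t}"' for t in system_terms) + ") "
    "AND Human AND Interaction)"
)
print(query)
```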

As shown in Fig. 1, 582 papers published between January 1996 and April 2023 were initially collected from the scientific databases. After removing duplicate articles, a total of 281 papers were retained in the identification phase. In the screening phase, based on title and abstract screening, 180 papers were excluded for not meeting the inclusion criteria; indeed, the majority of these papers contained the term politeness only generically and did not study politeness as a variable affecting human–machine interactions. One non-English paper was also excluded. A total of 101 reports were sought for retrieval; among them, only three papers were excluded for text unavailability. Thus, 98 papers were assessed for eligibility. A total of 36 articles were excluded in the eligibility phase after full-text reading. In particular, the exclusion criteria concerned (1) papers that were very preliminary works without experimental results, (2) similar versions of papers already included in the review, and (3) papers concerning team formation, which is out of the scope of this review. Finally, 62 articles qualified for final inclusion.

Analogously, 142 papers were collected from other sources (Google Scholar, citation searching, and expert opinions from Google). After duplicate elimination and screening, 51 papers qualified for final inclusion. In total, 113 papers have been included in this review.

4 Descriptive analysis of review results

This section provides a synthesis of the collected data from different points of view, aimed at delivering a preliminary descriptive assessment. The diagram in Fig. 2 presents the trend of published papers on the topic under review, from 1996 to the initial months of 2023. As can be noted, the area of research on politeness in human–machine interactions has witnessed significant development over the last 5 years, as evidenced by the available literature. A more compact view is provided by Fig. 3, wherein the scientific production is aggregated into five-year periods, beginning from when the CASA theory was initially disseminated. Significantly, there has been a threefold increase in the number of publications within the past half-decade, including the most recent ones released at the beginning of 2023. The observed phenomenon may be linked to the significant progress made in the field of Artificial Intelligence and its increasing integration into daily activities, which has led the scientific community to consider the strong social presence that such new kinds of systems produce.

Fig. 2 Number of annual published papers

Fig. 3 Scientific production grouped by years

Figure 4 shows the papers addressing politeness in interactions with different types of systems. Notably, a substantial proportion of the existing literature, i.e., 38%, pertains to the study of politeness in human–robot interactions, followed by 21% of works on smart speakers. A minor number of publications concern virtual agents (8%), smart vehicles (5%), and smartphones (2%). As evidenced in the next sections, the increasing deployment of social robots in application domains requiring significant social abilities, along with the pervasive presence of intelligent speakers, explains the interest in examining the implications of politeness for these specific categories of systems. Finally, the rest of the extant literature (26%) pertains to investigations of politeness in heterogeneous generic systems, including flight simulators, tablets, and others. These works have been aggregated into a single generic category due to the limited number of studies per system under consideration.

Fig. 4 Number of papers dealing with specific types of systems

A different perspective is provided by Fig. 5, which shows the main features that are affected by politeness within human–machine interactions. Many works (36%) found that politeness is strictly related to the view of smart systems as social actors according to the CASA paradigm. Most works (44%) reported that politeness improves the social acceptance of technology in daily life. The remaining 20% found that politeness improves the human perception of trust toward technology.

Fig. 5 Number of papers related to system requirements

Finally, Fig. 6 shows the number of works that deal with politeness within human–machine interactions from two different points of view: the politeness of machines towards humans and vice versa. As can be seen, these works are almost equally distributed. What emerges from this graph is an increasing interest in understanding the politeness of individuals towards machines.

In the following sections, the details of these papers are presented according to politeness direction.

Fig. 6 Number of papers according to politeness direction

5 Politeness of machines towards humans

This section presents the review results concerning scientific contributions dealing with intelligent systems that adopt politeness strategies in interactions with humans. Such works have been grouped into two major categories. The first contains works that have primarily studied politeness strategies as factors influencing human trust toward intelligent machines. The second includes works that studied the role of politeness in the acceptance of intelligent systems in different social contexts.

5.1 Trust

Advances in AI are making intelligent machines increasingly able to perform complex tasks. Trust represents a crucial factor in determining humans’ propensity to rely on such systems in situations characterized by high uncertainty. Lee and See (2004) define trust as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” that can be influenced by several factors.

Several researchers have shown that the capacity of intelligent machines to exhibit polite behaviors in appropriate situations is among the factors that may contribute to their recognition as trustworthy collaborators with human partners.

Miller (2005) and Parasuraman and Miller (2004) argue that etiquette is closely tied to what is traditionally interpreted as polite or rude behavior. Thus, etiquette in the machine also holds the power to cause positive or negative reactions in individuals. Such works are grounded in the affective method (Lee and See 2004), which is strictly correlated to the affect generated by and toward an entity: people tend to be trustful toward something that is perceived to be pleasant. The experiments conducted by Miller explored the role of politeness by considering two communication styles (i.e., polite and impolite) that a system (i.e., a flight simulator) may adopt during an interaction with an individual, under both high- and low-reliability system conditions. When adopting the polite style, the system is patient and not interruptive; it only asks for a new query once the user has finished the current one. The system behaves the opposite way when adopting the impolite style. The results showed that the polite communication style improves user trust under both low and high system reliability, whereas impolite communication increases the perceived user workload under both conditions.

Spain and Madhavan (2009), similarly to Miller, conducted a study to determine the effects of machine etiquette on perceived system trust. Etiquette was manipulated by tuning the politeness level adopted by the device, varying among three conditions: polite, neutral, and rude. The study showed that participants trusted the polite system more than the rude one. Participants also perceived the polite system as more reliable than the neutral and rude ones, even though all systems were equally reliable.

In Lee and Lee (2022), the authors propose the utilization of politeness to encourage driver–vehicle interaction by improving drivers’ trust in autonomous vehicles. Several practical implications for vehicles’ speech-interaction strategies are outlined in the study. As a result of including politeness in speech interface design, vehicles were recognized as more sociable and trustworthy, and their requests were more actionable and accepted by drivers. The higher the perceived politeness of the vehicle, the higher the trust in the vehicle’s objective performance and personal kindness.

Terada et al. (2021) investigated whether differences in the politeness strategies used by a virtual agent impact negotiated outcomes. The participants engaged in an online negotiation with one of three agents that adopted positive politeness, off-record, or no politeness strategies, respectively. Results showed that agents using the off-record strategy were able to achieve more significant concessions from their human counterparts. In contrast, positive politeness, which does not threaten the other’s face, led to fairer negotiated agreements.

Firdaus et al. (2022) proposed an approach for inducing politeness in dialogue generation based on the user profile. The work presents a response generation framework that employs a reinforced deliberation decoder to fine-tune the responses, introducing politeness according to personalized user information and the dialogue context. Results show that politeness levels differ across age groups and gender in a personalized dialogue agent, for which politeness is essential to obtain the information needed.

Jucks et al. (2018) explored how students assess the communicative behavior of a smart speaker implementing both polite and rude social strategies. The study’s participants were relatively young and technically skilled. Moreover, a simulated indirect communication scenario was employed for the experimentation: the participants were passive observers of the system and did not engage in direct interaction with it; their engagement was limited to listening to the system’s generated responses. Under these conditions, the study revealed that polite responses were regarded as superior in terms of appropriateness and pleasantness compared to rude ones. The polite speaker was evaluated to be more accurate, albeit not considered as having a superior level of expertise compared to the impolite system. Moreover, nearly all social assessments, including likability and the goodwill aspect of trustworthiness, were judged more positively for the polite speaker. Nevertheless, no differences were found in evaluations of expertise, which is viewed as a more content-related aspect of trustworthiness, even though the system’s accuracy, also a content-related aspect, was deemed higher for the polite system.

Similar results have been found in social robotics. Social robots (Fong et al. 2003) are autonomous robots, often endowed with an anthropomorphic physical aspect, that may interact and communicate with humans by following social behaviors and rules designed for the work they have to perform. They have been conceived to be introduced into daily life for performing several kinds of activities in different application domains. The intuitive trust people feel when faced with a robot is a determining factor in whether an interaction between the individual and the machine can be established. Some researchers believe that humans are more willing to cooperate and interact with robots that behave politely toward them.

Tsui et al. (2010) investigated the level of trust an observer perceives towards a robot in a passing-corridor scenario. The study aimed to understand how a robot should behave in front of a human who needs to move through the same space. Mainly, the study evaluated the bystander’s expectations about the robot’s adherence to social protocol and the overall trust in the robot to do the right thing. The experiment results showed that people found the stop behavior of the robot to be the most polite. Participants considered the stop and slow behaviors more trustworthy than the neutral or fast ones. People expect robots to move in the same polite manner as people do; in other words, robots should continue at a relatively constant speed unless there is a need to yield, in which case the robot should slow or stop.

Hendriks et al. (2011) investigated what type of personality people desire for a robotic cleaner. Since the autonomy of this kind of robot allows it to move around homes without instructions, people need to trust it. The authors found that people prefer a calm, polite, cooperative robot vacuum cleaner that works efficiently and follows routines. When users get to know the personality of their robot, they can interact adequately and predict how the robot responds.

Srinivasan and Takayama (2016) examined the effectiveness of different politeness strategies adopted by a robot in obtaining human help for performing its tasks. Following politeness theory, the experiment considered three factors that can influence helping behaviors: the robot’s status, familiarity, and the size of its request. The authors found that people were more willing to comply with robot requests that required less effort and that used a positive politeness strategy.

Inbar and Meyer (2015, 2019) conducted experiments using static images of a robot that performs an access-control task and interacts with people (younger and older, male and female) by applying polite or impolite behavior. The study showed that politeness could be a crucial determinant of people’s perception of peacekeeping robots. The experiment results also highlighted that the age and gender of the people interacting with the robot had no significant effect on participants’ impressions of the robot’s attributes.

Ramachandran and Lim (2021) investigated the design of a robot to perform nursing tasks in hospitals. The robot is designed with animated eyes, a voice with a local accent, and polite, context-appropriate phrases to mimic the behavior of nurses when engaging with patients. Results show that users are more likely to perceive the robot as trustworthy if it communicates politely.

The study in Babel et al. (2022b) aimed to investigate how an autonomous service robot can assert itself when making a request in a human–robot conflict situation. Mainly, the authors aimed to examine whether polite and assertive request sequences that fulfill the human politeness scheme lead to higher compliance than sequences that do not, and whether they are rated as more acceptable, polite, and trustworthy. Regarding the first question, participants were reluctant to follow a robot’s request in the domestic context: most participants considered their tasks more important than those of the robot. Regarding the second, strategies that adhered to human politeness norms were more accepted and rated as more polite and trustworthy than those contradicting them in the first trial. During a second trial, the evaluations of the strategies rejecting human politeness norms became more positive. As the authors claim, this could be due to habituation to the robot’s requests; novelty effects are common in HRI experiments, mainly when participants have limited prior robot experience.

Finally, the study proposed in Kumar et al. (2022) focuses on the impact of social robots’ politeness during interactions with humans, using Lakoff’s politeness rules (Lakoff 1973): (i) refraining from coercively imposing one’s actions or perspectives on others, (ii) providing alternative options to enable individual decision-making, and (iii) generating a sense of parity among the involved parties. The primary objective of that study was to assess the impact of polite robot behaviors on users’ subjective perceptions of enjoyment, trust, and satisfaction while utilizing distinct categories of non-humanoid robots, namely a robot arm and a mobile robot, under diverse conditions (video and live settings) and among diverse age groups, including both young and old adults. According to the findings, robots that adhered to Lakoff’s politeness rules obtained superior ratings in all three dependent variables (enjoyment, satisfaction, and trust) compared to the other conditions. Moreover, the participants were satisfied with the interpersonal engagement in the real-time mode and perceived it as significantly more trustworthy than the video format. Furthermore, the study found that the participants experienced greater satisfaction when engaging with the mobile robot than with the manipulator. A notable disparity was found between younger and older participants’ preferences: the robots based on the three rules were the most favored option among the younger study subjects.

5.2 Social acceptance

Acceptance of technology is a concept that indicates the willingness within a group of users to employ IT tools to support the tasks they are designed for (Dillon 2001). The more pervasive AI systems become, the more critical their acceptance becomes, even more so for technology that has no direct and immediate utility for an individual. Therefore, to deploy AI systems effectively, it is crucial to comprehend the factors that motivate potential users to embrace these technologies in their day-to-day routines. In particular, human–machine interactions have undergone a transformation over the past decades: from simple interactions for achieving functional objectives, we have moved toward interactions incorporating emotional responses, thus providing positive experiences related to the use of technology (Norman 2004).

Nowadays, utilitarian and hedonic views are considered equally important for studying technology acceptance. Utilitarian variables are attributes connected to the usability of a technological device, whereas hedonic variables are connected to the user experience of using a device. Acceptance models, such as the Technology Acceptance Model (Davis 1998) and the Unified Theory of Acceptance and Use of Technology (Venkatesh et al. 2003), initially identified utilitarian factors (such as usefulness and ease of use) as critical elements for user acceptance of technology. Such models have been extended to incorporate additional variables such as subjective norms, computer playfulness, and enjoyment (Venkatesh and Davis 2000; Venkatesh and Bala 2008). In particular, several studies in human–robot interaction point to the relevance of including the hedonic view in evaluating robots (Heerink et al. 2010; Weiss et al. 2011; De Graaf and Allouch 2013). Such works show that enjoyment is a crucial variable for social robot acceptance, directly influencing the intention to use a robot and its perceived ease of use. The perception of engaging with a social entity is pivotal to the positive reception of technology.

Notably, it has been demonstrated that adopting politeness strategies can further enhance the acceptance of robotic systems. Several works have studied the effect of robot politeness during user interactions. As in human–human interaction, variations in the robot’s politeness may cause considerable differences in how the robot is perceived.

Smith et al. (2022) claim that, to be successfully integrated into society, social robots (especially those designed as sociable partners) are expected to behave according to human social norms, especially politeness norms. Hence, their work focuses on understanding how people adhere to different politeness norms in intention- and context-sensitive ways, in order to use these insights to design social robots. Mainly, the authors focused on determining when it is appropriate to use indirect language, i.e., the Indirect Speech Act, where, as previously said, the utterance’s literal meaning does not match its intended meaning. The results of this study indicate that speakers typically use direct speech if any of the following conditions are met (and employ indirect speech in all other scenarios): (i) the utterance is an acknowledgment; (ii) there is potential for harm; (iii) the utterance is directed at oneself; (iv) the utterance requests (rather than provides) information.

Mutlu (2011) presented a study exploring how robots can use verbal and non-verbal cues to improve their proxemic relationship with people, affecting outcomes such as distancing and rapport. During the experiment, the gaze behavior (from gaze following to gaze aversion) and verbal politeness cues (a polite or impolite robot introduction) were varied to shape the participant’s physical and psychological distancing from the robot. The robot that used politeness cues was considered significantly more likable than the one that gave an impolite introduction. The gaze cues influenced how much physical distance people maintained from the robot, mainly when they were not interacting with it. Politeness cues influenced people’s rapport with the robot and their tendency to disclose personal information.

En and Lan (2012) conducted experiments about politeness maxims in dialogues between humans and robots, demonstrating improved human–robot engagement. The experimental study conducted by Salem et al. (2013) with a receptionist robot suggested that the interaction context also significantly impacts participants’ perception of the robot depending on the adopted politeness strategies.

Castro-González et al. (2016) presented a preliminary study assessing the effects of a robot’s verbal attitude on several subjects during rock-paper-scissors games. The robots behaved politely or impolitely during the game by changing the content of their utterances. Compared with rude robots, polite robots were considered more engaging and likable.

In Westhoven et al. (2019), results from web- and video-based studies on the perception of a robot’s help request are reported. Eye expressions and politeness of speech were considered as variables for evaluating effects on the hedonic user experience, perceived politeness, and help intention. The study showed that politeness could increase successful help requests. According to the results, a robot asking humans for help using sad or fearful eye expressions in combination with polite language improves perceived politeness and yields a significantly better hedonic user experience and higher help intention than the other tested combinations.

Kaiser et al. (2019) studied the efficacy of kinesic courtesy cues on people’s approval of non-humanoid mobile robots. Inspired by the non-verbal communication that humans perform using body language (posture, gesture, movement), Kaiser et al. showed that, in machines too, body language conveys social messages of universal significance, leading to people’s approval of robot behavior. For instance, a robot that emulates polite human social behavior, such as stopping and moving out of a bottleneck (such as a doorway), is more accepted and regarded as more courteous by humans than a technology that behaves in a less human-like way.

In Ribino and Lodato (2018) and Ribino et al. (2018), Ribino et al. proposed a norm-based approach for improving robot social acceptance. Mainly, they propose a normative approach that exploits the advantages of goal modeling to make social robots able to reason proactively about dynamic situations. An experiment was conducted with a Nao robot endowed with the ability to choose appropriate social norms in interaction scenarios with older adults.

A different point of view is provided by the research conducted by Babel et al. (2022a). The purpose of this study was to examine how a robot can effectively and appropriately request priority in resource conflict situations while taking into account the robot’s type (humanoid, zoomorphic, or mechanoid), the level of politeness of the request (polite appeals or assertive commands), and the modality employed (verbal or displayed). The research findings indicate that polite appeals were more socially acceptable than assertive commands for all robot types; however, this did not necessarily translate into greater effectiveness in achieving desired outcomes. For the humanoid robot, a verbal appeal proved to be a more effective communication method than a verbal command, because impolite verbal communication, such as a command, may be perceived as contradicting participants’ expectations. The displayed command was deemed more acceptable than the verbal command across all robot models. According to the findings, the effectiveness of the humanoid robot is enhanced when it displays the command instead of saying it. Distancing from the assertive request could be most effective for the humanoid robot, since non-humanoid robots are unlikely to cause face threats due to their non-humanlike design. Moreover, the humanoid robot’s use of polite verbal requests did not increase effectiveness or acceptability compared to other robots utilizing visual requests. The results of this study support the position that there is a marked contrast in compliance rates between the humanoid and mechanoid robots, with the former eliciting substantially lower compliance. According to the findings, a non-anthropomorphic design has the potential to confer a strategic advantage upon public service robots in cases where the demands of the task necessitate a heightened level of assertiveness, such as with security robots.

Other works instead focus on the impact of cultural differences on robot acceptance. The study in Salem et al. (2018) showed that participants, especially Arabic ones, displayed a more positive attitude toward the robot that adopted politeness strategies. The use of positive politeness strategies, among other factors, affected the people who participated in the human–robot experience.

Nomura and Saeki (2010) also analyzed the effects of a robot acting politely on the human perception of the robot. They conducted a psychological experiment in Japan with a small-sized humanoid robot that performs four types of motion according to the politeness level within the Japanese community. The experimental results showed the effects of the robot’s polite motions on human impressions of the robot and some relationships between those impressions and behaviors toward the robot. Mainly, the results highlighted some gender differences: in both males and females, the politeness of robot behaviors influenced the impressions of extroversion and politeness toward the robots; in females, the extroversive impression influenced behaviors toward the robots, while in males, the polite impression influenced behaviors.

Developing robotic assistants for the elderly that adopt politeness strategies also seems promising for establishing a positive emotional and social relationship between the user and the system. Hammer et al. (2016) investigated linguistic variations for a robotic elderly assistant to convey different levels of politeness in human–robot interaction. The authors found that wordings such as requests and actions formulated as shared goals were perceived as polite and persuasive; thus, they can be used as standard strategies for improving the acceptability of a recommendation. There is also evidence that older adults respond positively to robotic companions if they emulate social behavior that matches the seriousness of the situation (Goetz et al. 2003). Moreover, there are also studies showing that recommendations are more persuasive when provided by a polite robot that motivates the elderly to maintain a healthy lifestyle (Buoncompagni et al. 2021). However, if robots look too human-like but do not match the correspondingly high expectations of their behavior, people tend to become disappointed and distrustful of them (Walters et al. 2008).

In Kim et al. (2022), the authors explore the perception of a robot within a group of people. Specifically, in human–human interactions within a group, individuals display a greater propensity to positively or negatively evaluate their peers based on considerations of social relations within the group. The authors investigated whether such a phenomenon also occurs in human–robot interaction. In this study, a human–robot interaction within a heterogeneous cohort of individuals across different age groups was designed, and the authors examined the impact of respecting social relations on users’ evaluation of a robot. The study’s participants deemed the robot that behaved against social relations comparatively less useful and more impolite. The negative assessment was due to the fact that the robot catered to the younger individuals first instead of prioritizing the elderly within a setting encompassing individuals of varying ages. This result indicates that a robot that fails to follow social norms is evaluated negatively by users.

Further studies have examined the positive relationship between politeness and human willingness to use robots, particularly in healthcare contexts. Patients’ compliance, i.e., the extent to which patients conform to healthcare practitioners’ recommendations, is essential in this field: healthcare recommendations that are followed more closely contribute to better patient health status and higher satisfaction with healthcare services. Adopting politeness strategies is another way to enhance compliance. In this direction, the work proposed in Lee et al. (2017) investigated the perceived level of robot politeness as an element that improves the patient’s adherence to the guidelines provided by healthcare assistants. The study found that using a lower politeness level makes a recommendation resemble a command rather than a suggestion. Conversely, polite behavior adopted by a social robot when making recommendations to patients positively influences user compliance, although it does not ensure the patient’s intention to adhere.

The research conducted in Ajibo et al. (2021) sought to explore the efficacy of robotic behavior in fostering adherence to the COVID-19 guidelines outlined by the World Health Organization (WHO), specifically regarding social distancing and the use of masks. The primary objective was to explore the subjective assessment of three attitudinal behaviors a robot displays, specifically being polite or gentle, displaying disapproval, and exhibiting anger, when engaging with individuals who consistently violate the established guidelines. Notably, the research examined the impact of participants’ compliance awareness on their impressions of the robot’s behavior, explicitly concerning its social acceptance. Individuals showing limited awareness of adherence to the WHO guidelines preferred polite and gentle admonishments, considering this approach a viable, effective, and productive method of addressing violations of the guidelines. Individuals with a heightened sense of compliance awareness likewise leaned towards polite and gentle behaviors when judging the appropriateness of actions towards violators; nevertheless, they evaluated displeased and angry behaviors as more efficacious in enforcing adherence to social norms.

The work proposed in Zhu and Kaber (2012) aimed to explore the potential impact of various etiquette strategies on the task performance of both human and robot participants in a simulated medicine delivery task. A humanoid robot and a mechanical-looking robot were employed to deliver medication reminders to participants concurrently involved in a primary cognitive task of solving a puzzle. The study’s findings indicate that the participants were not sensitive to the positive language being communicated through robots, which included expressions of appreciation for human values. Furthermore, this particular strategy did not yield desirable outcomes in terms of reinforcing or augmenting the positive self-image of human users. Utilizing a direct communication style lacking in linguistic courtesy and a combination of positive and negative face-saving strategies to minimize user imposition resulted in moderate perceived etiquette scores among users. On the other hand, implementing a negative face-saving approach centered on advocating for users’ freedom of choice positively impacted user task execution and robot performance. Furthermore, there was evidence that humanoid robot features can supply additional social cues for patients, thus contributing to enhancing both human and robot performance, albeit not improving user-perceived etiquette.

In Jackson et al. (2020), instead, politeness is investigated as a means for robots to reject immoral commands from humans. Despite the inappropriateness of such commands, the robot’s rejection must be handled with tact to maintain its acceptability and likeability. By studying the interrelationship between gender and politeness, the authors investigated the effects of gender stereotypes on human perceptions of robotic non-compliance. The researchers found that robots were perceived more positively when their gender coincided with that of their human interactants.

While the previously cited studies have explored politeness strategies for robot acceptability and confirmed their effectiveness in human–robot interaction, recent studies have also been conducted in the field of intelligent voice assistants to understand the factors motivating individuals to use such devices. Indeed, AI voice assistants, including Amazon Echo, Google Assistant, Apple’s Siri, and Microsoft’s Cortana, have profoundly changed how people consume content and make requests. Some academic researchers have explored the factors influencing the broad individual acceptance of voice technology. Beyond the primary aspect related to utilitarian purposes, there is also evidence that individuals are motivated to interact with such devices by their ability to convey social benefits in the form of social presence and attractiveness (McLean and Osei-Frimpong 2019).

Moreover, the impact of cultural differences on smart speaker acceptance has been considered in Ouchi et al. (2019). In this study, two speaking styles for intelligent speakers are adopted: normal and polite. In the polite style, the smart speaker uses honorific words; in the normal style, it talks informally, which is not polite in Japan. The participants’ impressions of their conversations with the smart speaker were analyzed. The results show that the normal style was considered more friendly, although the polite style was considered kinder than the normal one.

Gupta et al. (2007) introduced POLLy (Politeness for Language Learning), a system that integrates a spoken language generator with an artificial intelligence planner to implement Brown and Levinson’s politeness theory in task-oriented dialogues. The authors conducted an empirical investigation to examine how individuals’ perceptions of politeness differed across various discourse contexts. The study involved a cohort of native English speakers from two distinct cultural backgrounds, British and Indian. Across several situations, the participants were instructed to evaluate utterances automatically generated by POLLy as if they were communicated to them by a friend or by a stranger. The findings indicate that how POLLy communicates aligns with B&L’s expectations regarding language use and the given context. Statements from a friend were perceived as more courteous than a stranger’s comments across all four B&L strategies, indicating that when the social distance between individuals is significant, it is more fitting to use a polite expression; however, if the expression implies excessive distance, it may be perceived as overly courteous. Moreover, Indians tended to consider the statements considerably more courteous than the British did; this was particularly noticeable when the person making the request was a friend, whereas there was little difference in perceptions when the individual was a stranger. Concerning cultural disparities, the study revealed notable distinctions in the assessment of courteousness between Indian and British individuals in situations with a high degree of imposition, such as requests, as well as in circumstances where the social distance, as measured by B&L, was less pronounced, such as when conversing with a friend.

Politeness has also been of particular interest in research about virtual agents. To evaluate the impact of politeness in learning settings, Wang et al. (2008) conducted an experiment in which students completed a learning task with the help of an on-screen agent. The agent responded to the students’ queries by offering polite or direct suggestions. The polite agent positively affected the students’ learning outcomes compared to the direct agent. This politeness effect was stronger with learners who needed more help and consequently had a more productive interaction with the agent. Thus, pedagogical agents can be based on the politeness model by Brown and Levinson to achieve the same benefit as real tutors do by respecting students’ social faces.

The authors in Yang and Dorneich (2018a, b) investigated adapting the interaction style of intelligent tutoring system (ITS) feedback based on human–automation etiquette strategies. The results demonstrated that systematically adapting the interaction style based on etiquette strategies significantly influenced motivation, confidence, satisfaction, and performance. Hence, if virtual tutors can effectively utilize the interaction style to mitigate negative emotions, ITS designers may implement affect-aware adaptations that provide the proper responses in situations where human emotions affect the ability to learn.

In Wu and Miller (2010) and Wu et al. (2009), the authors evaluated the effects of politeness in interactions with virtual agents, demonstrating that variations in etiquette have consequences for human performance. Mainly, their results show that users found polite and familiar virtual agents more likable and trustworthy. Users also perceived less workload when interacting with polite virtual agents. They also showed that gender impacts the perception of politeness in utterances and influences polite behaviors between different genders.

The study presented in Rana et al. (2021) examined the impact of politeness on chatbot conversations according to gender, age, and personality. The findings revealed gender-based dissimilarities concerning emotional expression in adulthood, whereby women demonstrated superior overall emotional sensitivity and intelligence compared to men. The female participants exhibited higher sensitivity towards the chatbot’s standardized response, leading to a reduced rating compared to its polite alternative. Moreover, individuals between the ages of 18 and 24 exhibited a lower capacity to discern the level of politeness of the response compared to those aged 25 and above. From a politeness perception perspective, the combined influence of age and gender determines an optimal evaluator for the chatbot’s politeness performance. Specifically, the analysis reveals that a female from the higher age group is better positioned to judge the chatbot’s level of politeness accurately, whereas a male of the same age group detects only a slight variation in politeness. The younger age cohort exhibits comparable variation in this regard; however, their perception of politeness is measurably lower when contrasted with the elder demographic. Candidates with introverted, disciplined, and innovative traits exhibit superior politeness perception of the chatbot, whereas individuals with extroverted and unimaginative attributes are unlikely to discern substantial differences between the two chatbots.

In their study, Hu et al. (2022) examined the potential employment of Politeness Theory in designing and implementing conversational interfaces for smart display devices intended to improve the user experience and alleviate the cognitive burden of older people. A field deployment study was conducted in which a sample of elderly individuals was instructed to utilize a more direct or a more polite version of a smart display. The empirical findings established four classifications of users, giving guidelines for a tailored design. The research indicates that individuals classified as Socially-Oriented Followers tend to view technological devices as friends and prefer politeness over directness. Utility-Oriented Followers perceive the functionality of a device as analogous to that of a human; consequently, these individuals conform to the guidance provided by the system and articulate positive feedback about the system’s overall politeness. Socially-Oriented Leaders perceive the device in a non-human role; they exhibit a predominantly positive or neutral attitude towards politeness in general but do not expect human-like behavior to be reflected in the system. Finally, Utility-Oriented Leaders perceive the system as a machine; they exhibit a neutral, if not adverse, attitude towards politeness, opting for directness and considering politeness odd and unnecessary.

Finally, other recent works extend politeness research to human–vehicle interaction and explore the effects of politeness strategies on improving the user experience and acceptability of the technology. In Miyamoto et al. (2019), the authors used RoBoHoN to develop a driving support agent. RoBoHoN is a small, easily portable, robot-shaped phone with several functions and capabilities. Used as a driver agent, RoBoHoN selects an utterance based on politeness theory, considering the driver's age, gender, and driving characteristics. The experiment focused on differences in sentence-final style (honorific vs. non-honorific words), which conveys psychological distance in Japanese conversation. The results suggested that an agent using positive politeness strategies is more effective for improving familiarity than one using negative politeness strategies. However, the results refer to a group of university and graduate students and thus generalize only to a narrow category of drivers.

In Miyamoto et al. (2021), the same authors questioned whether a driving support agent (DSA) should provide explicit instructions. Through a video-based study, they evaluated the acceptability of politeness strategies in DSA utterances. The results showed that the negative politeness strategy (NPS) with a formal sentence-final style scored significantly higher on evaluation items related to the functionality of the DSA. Although the authors expected the off-record strategy to be evaluated highly, the results show the usefulness of a DSA that provides explicit instructions while attending to linguistic form.

In Lee et al. (2019), Lee and Lee described how vehicle politeness can positively influence drivers' experience and promote their intention to cooperate with the vehicle. Vehicles giving polite instructions are rated highly for social presence, politeness, satisfaction, and intention to use, especially in typical working situations. However, the adverse effects of politeness in failure situations reveal that a selective politeness strategy works better than a permanently polite one. In failure situations, drivers perceive polite messages as unnecessary excuses; they prefer immediate solutions when complex traffic environments cause failures, and fast recovery is a critical determinant of their positive experience. The results also showed that the effects of vehicle politeness were stable even after controlling for demographic factors such as age and gender.

The study presented by Lanzer et al. (2020) explored the impact of an autonomous delivery vehicle's courtesy on individuals' compliance with its instructions, trust, and acceptance in two distinct settings (a pedestrian crossing and a street) among participants from Germany and China. The polite communication strategy positively impacted compliance, acceptance, and trust, and it produced superior outcomes in both groups; however, there were cultural differences between German and Chinese participants across scenarios. In the crosswalk scenario, the Chinese sample exhibited a trend opposite to the German sample. In the street scenario, when the same polite strategy was applied, almost everyone in the German sample complied with the autonomous delivery vehicle's request to move out of the way, whereas many people in the Chinese sample did not. Both behaviors are congruent with the road traffic regulations and customs of the two cultures. Some German participants stated they did not comply with the vehicle's request to wait at the crosswalk because a traffic rule obligates all vehicles to stop at a zebra crossing. Conversely, pedestrians in the Chinese sample may be accustomed to prioritizing vehicles at zebra crossings, since vehicles stop at crosswalks at a generally very low rate, even though Chinese drivers are also required by law to yield to pedestrians. Furthermore, the findings demonstrate that the communication strategy significantly altered the vehicle's acceptance. In both samples, a notable discrepancy in acceptance was observed between the dominant and polite strategies: the latter exhibited the highest level of acceptance and the former the lowest. A polite, as opposed to dominant, mode of request communication was associated with significantly higher perceived utility and satisfaction with the autonomous delivery vehicle. The same held for trust: the vehicle was trusted more when it communicated its request politely rather than dominantly.

6 Politeness of humans towards machines

According to the prevailing opinion, using courtesy with machines means acknowledging that they should be treated with the same respect as people. Researchers from various disciplines, including computer science and psychology, do not wholly concur with this viewpoint, since other implications should be considered. In particular, some concerns have arisen about using AI systems as assistants to whom a human can give orders. Child psychologists and technology experts debate whether the commanding nature of human interactions with current AI systems may encourage incivility and rude behavior, and whether this form of contact may teach children the incorrect mentality of behaving the same way with other people (Cosic 2022; Zilber 2022; Sarwar 2022; Fairyington 2018; Gartenberg 2017; Truong 2016; Kayaarma et al. 2019). Despite the ongoing debate about whether artificial intelligence systems should be designed to encourage polite behavior, some researchers have observed that as AI systems become more affective and socially capable, people tend to attribute human characteristics to such devices (Reeves and Nass 1996) and interact with them almost unconsciously, applying the rules, norms, and expectations of interpersonal interactions. In the following, the results of studies in these directions are presented.

6.1 Computers as social actors

Technological devices, from smartphones to sophisticated robots, are generally regarded as non-persons that do not warrant humane treatment. It is commonly believed that humans do not relate to machines the same way they do to other humans. However, scientific research has shown that this is only partially true.

As previously said, in 1996 Reeves and Nass (1996) started a research program called Computers as Social Actors, which demonstrated that interactions between humans and computers are very similar to interpersonal social relationships, introducing the concept of the media equation. The media equation is based on the idea that people respond socially to computers. According to the CASA theory, if a medium communicates with us, we unconsciously react as if it were a person. In other words, when interacting with technological devices that send social cues, users adopt the basic social rules and norms they also adopt in human–human interaction. These social rules are followed somewhat unconsciously, requiring only low active mental involvement (Langer 1989).

Several works in social norms and communication, through the adoption of the CASA paradigm, have shown that social dynamics of human–human interaction are also evident in human–computer interactions (Fogg and Nass 1997; Takeuchi 1998; Nass and Moon 2000; Takeuchi et al. 2000; Johnson et al. 2004; Nass et al. 1999; Johnson and Gardner 2009).

Specifically, Nass and Moon's studies (Nass and Moon 2000) demonstrate that people mindlessly apply social rules and expectations to devices, exhibiting social behavior such as politeness. Additionally, the authors advanced some theories about which social norms are more likely to be mindlessly applied to machines. Social behaviors governed by more primitive or automatic processes are more likely to be mindlessly elicited than more structured behaviors, and rules that are commonly followed, like conversational politeness conventions, are more likely to be elicited than rules that are applied less frequently, such as culture-specific customs.

Moreover, the experiments in Moon (2000) demonstrated that machines that adhere to socially acceptable behavioral norms are more successful at obtaining sensitive information from customers than those that disregard these social interaction rules. In particular, Moon observed that humans, being socially oriented creatures, use social rules such as politeness during interaction with technology: people pause, waiting for a response, and use courtesy during interactions as they would with another human being.

From these works, it emerged that mindlessness is the leading explanation for people’s tendency to treat computers socially. Mindlessness is a “state of mind characterized by an over-reliance on categories and distinctions drawn in the past and in which the individual is context-dependent and, as such, is oblivious to novel (or simply alternative) aspects of the situation. Mindlessness is compared to more familiar concepts such as habit,... and automatic (vs controlled) processing” (Langer 1992). Our attention is narrowed to a subset of contextual cues that transmit specific information, ignoring other data that could be potentially relevant. Consequently, since modern technology offers a variety of signals that resemble humanness, from the perspective of mindlessness, these cues are sufficient to trigger an unconscious categorization of machines as social actors (Nass et al. 1997). This categorization, in turn, often leads to a state of Ethopoeia (Langer 1992; Nass and Moon 2000), meaning that people assign human attitudes and attention to non-human objects. Ethopoeia involves a direct response to an entity as a human, although we are aware that it does not warrant human treatment.

Although mindlessness is the prevailing explanation for media equation findings, other alternatives are also considered: anthropomorphism, demand characteristics, and the computer as a proxy (Nass and Moon 2000; Johnson et al. 2004). In discussing the anthropomorphism explanation, Nass and Moon (2000) draw a clear distinction between (a) anthropomorphism, defined as a sincere belief that the computer has human characteristics and warrants human treatment, and (b) "cherished objects", which refers to situations in which people orient to an object and focus on its ability to evoke certain feelings or attitudes. People reacting to cherished objects is not evidence for anthropomorphism and does not explain media equation findings. Thus, in this context, anthropomorphism refers to people acting on the belief that computers are fundamentally human; responding socially to technological devices on this basis would reflect ignorance, psychological impairment, or social dysfunction (Nass and Moon 2000; Johnson et al. 2004). Demand characteristics refer to how participants think about the nature of an experiment and how they are supposed to behave (Nass and Sundar 1994): the experimental situation encourages users to demonstrate social responses, and participants believe that, to engage in the experimental task, they are expected to forget that they are dealing with a machine (Nass and Moon 2000). Finally, the computer as a proxy refers to the idea that individuals respond socially to a computer because they believe the machine is a medium embodying the programmer's responses, or because they think they are interacting with another person via the computer.

Both the anthropomorphism and computer-as-proxy explanations are related to individuals' beliefs about technology: the technological device is treated as a person because it is perceived to be, or to represent, a human being. By contrast, according to the mindlessness explanation, people's social responses to technology are not necessarily consistent with their beliefs about technology. Hence, these factors encourage people to treat computers, virtual agents, smart speakers, robots, and many other technologies as social actors and elicit some social responses (Nass 2004).

Studies on the CASA theory originally referred to simple computers. Today, people have more access to various technologies that display more or less intelligent functions, leading to several extensions of the CASA paradigm (Gambino et al. 2020; Lombard and Xu 2021). Thus, several revisions of the original paradigm have been proposed for a broader spectrum of technologies.

The work proposed in Karr-Wisniewski and Prietula (2010) replicated a study underlying the CASA paradigm on websites, named WASA (Website As Social Actor). The authors' purpose was to determine whether websites can engage the same social response scripts that computers engage in CASA studies. The experimental results demonstrated that, at least for one particular social rule, politeness, websites provoke a more robust social response from humans than the machines used to access them.

The aim of the study in Hoffmann et al. (2009), instead, was to empirically verify whether social effects, as described in the CASA studies, can be replicated when interacting with embodied conversational agents (ECAs). The results showed that participants questioned by an ECA answered more politely and perceived it as more competent than participants questioned via a paper-and-pencil questionnaire.

Carolus et al. (2018, 2019) conducted studies on the effects of smartphones speaking politely or impolitely in interactions with users. They showed that an impolite phone was devalued regarding friendliness and competence, while smartphones that replied politely were considered friendlier. Their results also confirm that smartphones can elicit polite behavior, underlining that ownership and emotional relations influence the users' feedback.

However, among the many technologies, particular attention is paid to interaction with smart-home assistants (e.g., Google Assistant, Amazon Alexa) and social robots, owing to the increased perception of sociality inherent in such technologies. Indeed, both digital assistants and robots are developed with several human-like social cues (Fong et al. 2003; Purington et al. 2017) that enhance their perception as social actors. The following subsections separately introduce works on CASA theory and social responses related to digital assistants and robots.

6.2 Politeness in human–digital voice assistants interactions

The natural-language communication of digital voice assistants, like Apple's Siri, Amazon's Alexa, Google's Assistant, and Microsoft's Cortana, has changed the paradigm of human–machine interactions. As the study of Purington et al. (2017) showed, people tend to personify digital voice assistants and are inclined to use human scripts to interact with technologies that exhibit human-like social cues. In that study, participants who referred to the device with the personified name Alexa and personal pronouns reported more sociable interactions with the device than participants who used the name Echo and object pronouns.

Schneider and Hagmann (2022) attempted to extend the CASA paradigm to voice-based assistants and examined the user-oriented factors that impact media equation effects. The study specifically examined the reciprocity and politeness exhibited by individuals towards voice assistants and aimed to elucidate the underlying reasons for such behavior, considering different user personality traits. An experiment was conducted in which users were free to assist the voice assistant after experiencing a helpful or unhelpful interaction with it. Participants who had received assistance from the voice assistant during their previous interaction exhibited reciprocal behavior, demonstrating a greater willingness to perform additional tasks than those who did not receive such aid, giving evidence that individuals show social responses towards voice assistants. In particular, individuals exhibiting low levels of openness to novelty did not show any appreciable variation in the number of tasks across conditions, whereas subjects with moderate to high levels of openness completed significantly more tasks under varying situations. Individuals with higher levels of openness may experience greater disappointment when their interaction with a voice-enabled assistant fails to fulfill their desired outcome and lacks utility; conversely, when a social exchange succeeds and meets their expectations, they derive considerable satisfaction from it, and their likelihood of carrying out additional tasks is elevated. In the context of CASA research, the personality trait of openness to new experiences is therefore noteworthy: it elicits favorable or unfavorable social reactions depending on whether the user's expectations were fulfilled.

In Lopatovska and Williams (2018), a study explored manifestations of users' personification of Alexa. Some participants reported personification behaviors, most of which were characterized as mindless politeness (saying thank you and please to Alexa). In line with CASA theory, the study confirmed that, even for digital assistants, there is a positive association between more sociable uses of the device and more consequential personification.

However, this new interaction paradigm has raised disagreement in the scientific community. On the one hand, some researchers think that the command-based nature of interactions with such "personified" devices might negatively impact human social behavior, resulting in ruder human-to-human interactions (Cosic 2022; Zilber 2022; Sarwar 2022; Fairyington 2018; Gartenberg 2017; Truong 2016; Kayaarma et al. 2019; Giaccardi et al. 2020).

In particular, researchers have wondered whether this kind of interaction may affect children's politeness, since there is evidence that children are more prone to perceive such systems as intelligent agents and to attribute to them characteristics of living beings (Cosic 2022; Zilber 2022; Sarwar 2022; Fairyington 2018; Gartenberg 2017; Truong 2016; Kayaarma et al. 2019).

Turkle (2005) argues that, due to perceived computer intelligence, children are prompted to reevaluate their beliefs about being alive and thinking. She observed that children attributed intent and emotion to objects that encourage social and psychological engagement. Druga et al. (2017, 2018) investigated how children perceive digital assistants. In particular, an experimental study was conducted with 26 participants (3–10 years old) interacting with Amazon Alexa, Google Home, Cozmo, and Julie Chatbot. The results show that different interaction modalities (such as voice and prosody, interactive engagement, and facilitating understanding) may change how children perceive the intelligence of such technology compared to their own. The reason children may attribute the traits of living things to inanimate objects is underlined by Theory of Mind development (Wellman et al. 2001), which enables young children to perceive the emotional and mental states of other beings. Theory of Mind findings suggest that age is a significant factor in children's reasoning about technology.

Bylieva et al. (2021) conducted a study of children's communication with virtual assistants by analyzing videos in which children (aged 4–10) spoke with the Yandex virtual assistant Alice. Among the concerns the study focused on, one discussed issue is the child's tendency to treat the virtual assistant as a servant, leading them to adopt and learn conversational patterns unsuitable for interpersonal communication. The children used a commanding tone when speaking to Alice, frequently neglecting polite language and often adopting a reproachful tone. It was also observed that, even when communication failed, the children attempted to maintain contact, despite being irritated by Alice's deviation from standard ways of exchanging information.

On the contrary, other scientists believe that being impolite toward devices does not affect interpersonal interactions because people are aware that digital assistants are not human and thus do not deserve courtesy and good manners (Hybridge 2023; Bouzid 2021; Elgan 2023). A first research study was recently conducted by Burton and Gaskin (2019) on the politeness of 274 people, both in general and toward digital assistants. Its findings demonstrated that adults' politeness toward digital assistants did not influence their politeness toward other adult humans. In Bonfert et al. (2018), an experiment was conducted involving adults and a digital assistant that rebukes impolite requests. Test subjects who were corrected for being impolite accepted the assistant's demand and reformulated their requests more politely. From an emotional viewpoint, however, the adults considered the demand for politeness in everyday life tedious and time-consuming. The same results cannot be extended to populations susceptible to personifying digital assistants, such as children.

Despite the results of these works, no evidence yet reaches the statistical significance needed to settle the debate, since the experiments have been conducted on small numbers of participants.

Nevertheless, to mitigate these concerns pertaining to intelligent virtual assistants, organizations have been exploring specific features to intentionally encourage politeness from people. For example, Google has introduced a functionality called Pretty Please (Vincent 2018) that enables its voice assistant to respond positively to polite phrasing such as please and thank you. Pretty Please is mainly conceived to prompt children to say the magic word when they give a command. Similarly, Amazon introduced the Magic Word feature for Amazon Echo, which offers positive reinforcement when kids use the word please while asking questions of Alexa (Amazon 2018).
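As a concrete illustration of how such a magic-word check could operate, consider the minimal Python sketch below; the marker list and response phrasings are invented for illustration and do not reflect Google's or Amazon's actual implementations.

```python
# Illustrative sketch of a "magic word" reinforcement check, in the
# spirit of Pretty Please and Magic Word; not either vendor's design.

POLITE_MARKERS = ("please", "thank you", "thanks")

def respond(command: str) -> str:
    text = command.lower()
    if any(marker in text for marker in POLITE_MARKERS):
        # Positive reinforcement when the child uses the magic word.
        return "Thanks for asking so nicely! Playing your song now."
    return "Playing your song now."

print(respond("Please play my bedtime song"))   # reinforced reply
print(respond("Play my bedtime song"))          # plain reply
```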

Most recently, the same issue has arisen for ChatGPT (Dolan 2023). In this case, if a user uses derogatory words when interacting with Microsoft's GPT-powered Bing AI, it will respond by saying, "I'm sorry, but I don't appreciate being spoken to that way". Similarly, ChatGPT will tell you to "refrain from using offensive language" because it's against its content policy, or that it's "sorry to hear you're upset" but that "as an AI language model, it doesn't have feelings or emotions".

It is worth noting that, since such features are primarily aimed at children, the voice assistants reinforce polite behavior instead of rebuking discourtesy as in Bonfert et al. (2018).

6.3 Politeness in human–robot interactions

Politeness in human–robot interactions also deserves a separate analysis. Current developments in humanoid robots have introduced such devices into highly social contexts. Social robots are mainly designed to autonomously interact with people across various application domains in a natural way, using the same social competencies used by humans (Kanda and Ishiguro 2016). They are used as office receptionists, tour guides, bank tellers replacing ATMs, and assistants for children and older adults. Studies on the importance of being polite with such robots have been addressed from different perspectives.

As in the case of smart speakers, the aforementioned concerns about the effect of rude commands when interacting with these devices have recently been reiterated by researchers in the field of human–robot interaction (Wen et al. 2023, 2022; Williams et al. 2020). Researchers have also highlighted the existence of "ripple effects," wherein human impact on the ethical and social norms of robots can extend to human–human interactions, as previously documented in Lee et al. (2012). Primarily, Wen et al. (2023, 2022) assert that the strategy employed by corporations such as Amazon may have a negative outcome and lead to impolite exchanges between humans and technology, as merely uttering the word "please" does not guarantee politeness. The use of polite language such as "please" at the beginning of a user's request, as seen in interactive designs like "Alexa Please," can itself be impolite because it often precedes a direct command that can come across as threatening. Furthermore, their experiments reveal that employing the wake-word "Please" has a negative impact on the utilization of language forms that are commonly interpreted as polite, such as indirect speech acts (ISAs). Indeed, in numerous cultures, speakers employ ISAs to reduce face threat, especially when they need to issue commands and requests. Wen et al. (2023) showed that encouraging robot interactants to use ISAs could be a more effective strategy than please-based design for encouraging interactant politeness.
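To make the distinction concrete, the toy Python heuristic below contrasts conventionalized ISA frames with a "please"-prefixed imperative; the pattern list is an invented simplification, since genuine ISA detection requires far richer pragmatic analysis than string matching.

```python
# Toy heuristic distinguishing indirect speech acts (ISAs) from direct
# commands; the patterns are illustrative assumptions only.

import re

# Conventionalized ISA frames commonly used to soften requests.
ISA_PATTERNS = [
    r"^(could|can|would|will) you\b",
    r"^would you mind\b",
    r"^i was wondering if\b",
]

def is_indirect(utterance: str) -> bool:
    text = utterance.strip().lower()
    return any(re.match(pattern, text) for pattern in ISA_PATTERNS)

print(is_indirect("Could you close the door?"))  # True: conventional ISA
print(is_indirect("Please close the door."))     # False: "please" + imperative
```

The second example echoes Wen et al.'s point: prefixing an imperative with "please" leaves the utterance a direct command rather than an indirect request.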

The study of Seok et al. (2022), instead, analyzed the impact of cultural differences on politeness in HRI. They argue that the ways people show politeness and respect in different cultures can also matter in how humans interact with robots. They conducted an experiment showing how language and manners differ between English and Korean when people interact with robots. The study found that ISAs were much less common in Korean than in English. Korean relies heavily on honorifics, and Korean speakers used honorifics more in conventionalized than in unconventionalized contexts; conversely, English speakers used ISAs more in conventionalized than in unconventionalized contexts.

In Mugar and Ancheta (2019), a lived experience with a robot is reported. Although the robot was initially developed for managing parking spots, it produced some unexpected results, influencing the culture of the organization where it was employed. While most people interacted with the robot using the commands it was trained to understand, a small group interacted with it using the social norms typical of interpersonal relations, commonly adding please and thank you to their requests. The survey revealed that some people thought of the robot as another organization member and thus treated it with the same respect as a human colleague. Others said they based their interactions on knowing that the rest of the organization was watching, worried about the judgment of their colleagues. This experience showed that a social robot playing even a small role in an organization can draw attention to the social norms relevant to a company's culture. Hence, social robots may be used as reminders of cultural practices and values.

In the same direction, the work proposed in Draper and Sorell (2014) reported the reactions of participants to a possible scenario in which a healthcare robot is programmed to modify the rude behavior of an older individual by not executing a command that has been impolitely requested. Some participants disagreed with the robot's behavior, expecting it to meet human demands regardless of how they were expressed; they also claimed that, being a machine, it has no feelings that can be hurt. Conversely, other participants considered the robot's behavior acceptable because they rejected impoliteness in general and shared the final goal of improving the older person's rude temperament. The robot's behavior was also considered acceptable in the context of rehabilitation and the promotion of the independence of the elderly.

A qualitative analysis in Afyouni et al. (2022) discussed the interaction between patients and a humanoid robot over seven days of cohabitation. The primary objective was to understand how people perceive a robot designed to assist in rehabilitation. A shared attribute among all participants was their adherence to societal norms of respect and politeness toward the robot: all unanimously acknowledged the importance of respectful conduct towards the robot and affirmed their intent to communicate with it courteously. Throughout the week of patient–robot interactions, the patients consistently demonstrated civility and deference towards the robot, evidenced by their use of formal language, including frequent courtesy markers such as "please" and "thank you", and by an overall respectful tone when interacting with the technology.

Politeness towards social robots is also studied to improve the robot experience. In Bothe et al. (2018), the authors proposed a dialogue system for indoor navigation that adapts the robot's behavior and conversation flow to the perceived degree of the user's politeness, providing an experience personalized to the user's mood.
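A minimal sketch of politeness-adaptive dialogue in this spirit follows; the lexical scoring and response policy are illustrative assumptions, not the system of Bothe et al. (2018).

```python
# Hypothetical politeness-adaptive reply selection for a navigation
# robot; cue lists, weights, and styles are invented for illustration.

POLITE_CUES = {"please": 1, "thank you": 1, "thanks": 1}
RUDE_CUES = {"hurry up": -2, "stupid": -2, "shut up": -2}

def politeness_score(utterance: str) -> int:
    """Crude lexical score: positive for courtesy, negative for rudeness."""
    text = utterance.lower()
    cues = {**POLITE_CUES, **RUDE_CUES}
    return sum(weight for cue, weight in cues.items() if cue in text)

def navigate_reply(utterance: str) -> str:
    score = politeness_score(utterance)
    directions = "The meeting room is down the hall, second door."
    if score > 0:
        return "With pleasure! " + directions   # warm, elaborated style
    if score < 0:
        return directions                       # terse, minimal style
    return "Sure. " + directions                # neutral style

print(navigate_reply("Could you please show me the meeting room?"))
```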

In Masson et al. (2017), the authors argue that a politeness effect within an exchange paradigm could produce the endowment effect, namely the tendency to attribute more value to an object when we own it. Two experiments were conducted. In the first, a NAO robot was programmed to behave neutrally, without emitting any non-verbal cues; in the second, the robot was programmed to follow social norms, also using social cues. Results show that the endowment effect can be produced by using politeness rules. Hence, the endowment effect can be a helpful tool for evaluating the activation of social norms in human–machine interactions.

Moreover, some studies focus on understanding the social and moral relations that children could form with a humanoid robot. This question differs for humanoid robots from the other technologies discussed so far. On the one hand, robots, like all the technologies mentioned above, are artifacts created by humans: merely tools, like an electronic calculator. On the other hand, their anthropomorphic physical features and the human-like way they act and speak contribute to attributing to them the same social qualities as human beings, hence making them deserving of moral consideration like living entities. Some studies in this direction have been made using the robotic dog AIBO (Kahn et al. 2006; Melson et al. 2009; Friedman et al. 2003). Such studies demonstrated that children and adults can establish social relationships with robots: although they recognize the robotic dog as a technology, AIBO evokes social relations such as friendship and companionship. These results have stimulated researchers to investigate whether meaningful interactions between children and humanoid robots can be established.

In Kahn et al. (2012), Kahn et al. conducted experiments with a group of 90 children (from 9 to 15 years old) and a robot named Robovie. Their results have shown that children who interacted with Robovie believed it was a social being possessing feelings and intelligence. However, they did not grant it civil liberties or rights, recognizing it as a technology. Despite that, most children thought the robot deserved better treatment than a simple object; for example, most felt that putting the robot in a closet like a simple object was wrong.

Finally, in Lee et al. (2021), the authors address the dynamics of robot etiquette within robot–children interactions at different developmental stages. They state that robots' behaviors can substantially shape interactions with children, thereby affecting the socialization of children who are still developing their sociality through interactions with parents, friends, and others in their environment. The study claims that social robots used for childcare must conform to the norms of human society by incorporating behavioral attributes such as politeness, to help children acquire valuable interpersonal interaction skills. The results also show that etiquette must be modified according to the developmental stages of children, and that the proper employment of social characteristics elicits related social responses and enhances interactions.

7 Discussions

The growing prevalence of artificial intelligence and the trend to make systems more human-like have drawn attention to a new system requirement, so-called social competence. Social competence (Duck 1989; Rose-Krasnor 1997; Park et al. 2021) pertains to an individual's capacity to produce desirable outcomes and demonstrate flexibility when interacting within diverse social environments. As evaluated through interpersonal connections, social competence holds significant influence in social interaction, encompassing the capacity to communicate proficiently with counterparts and to wisely manage interpersonal relationships to achieve constructive developmental outcomes. An essential element of such competence is the ability to adhere to social norms (e.g., rules of politeness) and to act appropriately in response to the social dimensions of a specific environment (Tan et al. 2019).

Introducing the politeness requirement in developing advanced AI systems is currently addressed under various aspects and perspectives. Several factors need to be considered, such as the considerable variety of intelligent systems (e.g., digital voice assistants, embodied agents, smart vehicles), different human–machine interaction contexts, and the significant variability of individuals (e.g., males, females, young, older, special needs people or culturally different persons). Researchers in the field of artificial intelligence have analyzed some of these aspects, providing some findings, but an exhaustive answer is still far away. Several discussions are currently animating the scientific community where opposite positions have arisen (Meyer et al. 2016).

Based on the results of this review, an answer to the research question RQ2 (i.e., What are the reasons why technological devices should exhibit polite behavior towards humans?) is provided by the evidence on the usefulness of machines' polite interactions with humans, summarized in the following.

Beyond the common sense that individuals deserve more respect than any AI system, other factors, such as improving the acceptability of and trust in AI technology, make politeness a valuable requirement in designing these systems. The evidence from the literature is that socially competent systems are more enjoyable, trustworthy, and readily accepted than machines lacking social competencies such as polite behavior (Lee and See 2004; Lee and Lee 2022; Firdaus et al. 2022; Jucks et al. 2018; Fong et al. 2003; Tsui et al. 2010; Ramachandran and Lim 2021). It was also found that polite systems are perceived to be more reliable than neutral or impolite systems, despite all systems being equally reliable (Spain and Madhavan 2009). Moreover, it has been highlighted that etiquette can also mitigate the negative effects of poor machine reliability (Miller 2005; Parasuraman and Miller 2004). As far as robotics is concerned, these findings hold even more weight: politeness strategies are determinants of robot acceptance in society (Smith et al. 2022; Mutlu 2011; En and Lan 2012; Castro-González et al. 2016; Westhoven et al. 2019; Kaiser et al. 2019), mainly when robots are used for particular purposes such as peacekeeping or healthcare assistance (Inbar and Meyer 2015, 2019; Lee et al. 2017; Zhu and Kaber 2012).

Other results showed that politeness influences technology acceptance differently according to the target audience's cultural characteristics, even if the experiments were performed only on a small set of cultures (Salem et al. 2018; Nomura and Saeki 2010). Moreover, it has also been shown that politeness perception depends on personal traits, and in some cases classifications of users are provided (Rana et al. 2021; Hu et al. 2022). In particular contexts such as healthcare, the adoption of politeness strategies enhances users' compliance with recommendations (Lee et al. 2017). Similarly, in educational settings, polite AI systems positively affect students' learning outcomes and performance (Wang et al. 2008; Yang and Dorneich 2018a, b). There are also findings that the conventionality of context is an important element to consider when investigating linguistic behavior in HRI across different cultures (Seok et al. 2022).

Conversely, there are contrasting findings about the impact of politeness according to the gender and age of individuals. Some findings revealed gender- and age-based dissimilarities in the perception of politeness and its impact on human–machine interactions (Firdaus et al. 2022; Kumar et al. 2022; Rana et al. 2021; Miyamoto et al. 2019; Jucks et al. 2018). Other results, instead, show that the effects of politeness were stable across demographic factors such as age and gender (Inbar and Meyer 2015, 2019; Lee et al. 2019). However, in the latter case, the age range of the study participants was relatively young, and no evidence has been reported for older age groups; additionally, in some works the experiments were conducted by showing static images to the participants rather than involving real human–machine interactions. In the former case, by contrast, experiments were conducted with more varied participants, although here too no results can be considered statistically significant. These limitations preclude generalizing the outcomes. Still, when actual social interactions are taken into account, there are indications that politeness exhibits varying effects across age groups and genders.

Finally, politeness is seen as useless in critical working situations where immediate failure recovery is needed (Lee et al. 2019).

On the other hand, an exhaustive answer to the research question RQ3 (i.e., What are the motivations for individuals to exhibit politeness towards technological devices?) is still untimely. There is disagreement within the academic community about humans' politeness towards machines, for reasons that can be outlined as follows. Firstly, politeness is not a binary concept (polite or not polite). It is an elaborate system we apply selectively depending on whom we are talking to, and it extends beyond just please and thank you (Brown et al. 1987; Watts 2003). It comprises a set of social skills to ensure everyone feels affirmed in social interactions; politeness conveys much about the social status of the person we speak to, implying respect, individuality, and inclusion in our social structure (Brown et al. 1987; Watts 2003). Secondly, why did the question of politeness toward machines not emerge with the spread of service devices? Aside from the fact that politeness is a complex issue, how we interact with technology has shifted rapidly over the past ten years. We started with command lines, spelling out exactly what we wanted the technology to do. The next significant advancement was search: we communicated with search engines through simple keywords. Today, instead, we interact with systems (e.g., a digital assistant, a robot, a smartphone) that convey several social cues (such as natural language, facial expressions, a human-sounding voice, and a humanlike shape) that lead us to respond socially, as when we communicate with our peers (Lombard and Xu 2021). Thus, the issue of exhibiting politeness towards a digital assistant, a robot, or any other kind of smart device is correlated with how we want to define our relationship with an artificial intelligence system. Do we want to designate an AI system as a peer? Some researchers (Hybridge 2023; Bouzid 2021; Elgan 2023) argue that an AI system is not a human being with feelings or social status: it does not need to maintain a positive self-image and cannot be hurt, so there is no need to treat it politely. Being polite to an AI system means admitting it can be considered human, which could negatively impact future generations (Elgan 2023). In particular, some people think that promoting human-like behavior toward AI systems in child–machine interaction may influence children's development (Lee et al. 2021; Kahn et al. 2006; Melson et al. 2009; Friedman et al. 2003; Kahn et al. 2012) and may induce them to believe that these systems are living beings (Wellman et al. 2001; Turkle 2005; Elgan 2023; Druga et al. 2017, 2018). On the other hand, research on the CASA paradigm (Fogg and Nass 1997; Takeuchi 1998; Nass and Moon 2000; Takeuchi et al. 2000; Johnson et al. 2004; Nass et al. 1999; Johnson and Gardner 2009; Lombard and Xu 2021) has documented that people tend to attribute human properties to AI systems and to interact with them almost unconsciously, applying social rules, norms, and expectations as in interpersonal relations, thus perceiving the AI system as a social entity.

Hence, some possible motivations for individuals to demonstrate politeness towards technological devices, providing a partial answer to RQ3, may derive from the following considerations. As increasing anthropomorphization affects human behavior, ethical, psychological, and pedagogical issues must be considered, even while underlining that an individual, according to politeness theory, certainly has more power in interactions than any intelligent machine. Some findings reported in the literature lead to different design requirements when an AI system is to be employed in contexts where interactions with vulnerable users (such as children) are recurrent. The commanding nature of interactions with AI systems seems inappropriate for children under a certain age, who cannot fully understand that an object with a human-like appearance is not a living being and may thus adopt rude behavior in interpersonal relations (Cosic 2022; Zilber 2022; Sarwar 2022; Fairyington 2018; Gartenberg 2017; Truong 2016; Kayaarma et al. 2019; Bylieva et al. 2021). Additionally, the purpose of the interaction (which may range from entertainment and education to working goals) may change the perceived usefulness of etiquette. In professional settings where system efficiency is paramount, politeness toward devices could be seen as annoying and time-consuming (Bonfert et al. 2018). Conversely, in educational contexts, an intelligent system that insists on respectful courtesy for the sake of children's good manners could be advisable. Likewise, in working contexts, worry about the judgment of colleagues leads individuals to adopt polite behaviors towards machines to save their positive face (Mugar and Ancheta 2019).

8 Conclusions

This paper presents a comprehensive review of the adoption of politeness in human–machine interactions. The review has considered a wide range of artificial intelligent systems by collecting and analyzing works that deal with politeness from two different perspectives: machine towards human and vice versa. A summary of the answers to the research questions follows:

  • RQ2 What are the reasons why technological devices should exhibit polite behavior towards humans? The results of this study show that socially competent systems are more appreciated and, therefore, more readily accepted than machines that lack social competencies such as politeness. Moreover, it has emerged that politeness promotes greater trust in automated systems, and some insights from politeness theory can also be applied to human–machine interactions. Finally, in particular contexts such as healthcare and education, the adoption of politeness strategies enhances users' compliance with recommendations and students' performance.

  • RQ3 What are the motivations for individuals to exhibit politeness towards technological devices? Firstly, CASA findings proved that when people interact with technological devices that send social cues, they adopt the basic social rules and norms they also adopt in human–human interaction, thus unconsciously treating AI systems as social entities. Secondly, the negative influence that command-based interactions with AI systems may have, especially on children, argues for encouraging politeness toward AI systems. Finally, the protection of an individual's positive face is a motivator to adopt politeness towards machines in social contexts.

Ultimately, the findings from this analysis, particularly those pertaining to CASA theory, underscore the relevance of social norms derived from human–human relations in the context of interactions between humans and machines. Thus, returning to the initial research question RQ1 (Should the social norms, like politeness, that humans apply in interpersonal interactions also be used in interactions with intelligent machines, such as smart vehicles, social robots, and digital assistants?), the answer is affirmative, as long as some conditions are met. Although existing literature has provided initial insights into these conditions, further investigation is deemed necessary, primarily because of the significant implications of a phenomenon as multifaceted as politeness.

8.1 Future directions

In considering politeness in human–machine interactions and the factors that affect its use and perception during interactions, some challenges arise for future research, briefly outlined below.

First of all, it has been demonstrated that the perception of politeness is contingent upon an individual's personal traits, characteristics, and cultural background (Rana et al. 2021; Hu et al. 2022; Salem et al. 2018; Nomura and Saeki 2010). Therefore, personalization should be taken into account. A potential avenue for designing an artificial intelligence system that can effectively adopt polite behavior towards diverse individuals involves exploring methodologies for enabling the system to adapt to different social etiquette and customs. Additionally, given that contextual factors and social dynamics can influence the perception of politeness within a group, it raises the question of how an artificial intelligence system may acquire knowledge of polite rules and behaviors through these contextual and relational factors. A minimal sketch of what such profile-driven adaptation could look like is given below.
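The following hedged Python sketch illustrates one possible shape of such profile-driven adaptation; the profile fields, style labels, and mappings are invented for illustration only and are not drawn from any of the cited works.

```python
# Hypothetical profile-driven politeness selection. Cultural styles,
# profile keys, and the override rule are all illustrative assumptions.

from typing import Dict

# Invented defaults: a curated or learned mapping from a cultural style
# to a preferred politeness strategy family (Brown and Levinson).
CULTURAL_DEFAULTS: Dict[str, str] = {
    "honorific_oriented": "negative_politeness",   # distance-preserving
    "solidarity_oriented": "positive_politeness",  # warmth-oriented
}

def choose_strategy(profile: Dict[str, str]) -> str:
    """Select a politeness strategy from a (hypothetical) user profile."""
    # Per-user preferences, e.g., learned from interaction feedback,
    # override the cultural defaults.
    if profile.get("prefers_directness") == "yes":
        return "bald_on_record"
    culture = profile.get("culture_style", "")
    return CULTURAL_DEFAULTS.get(culture, "positive_politeness")

print(choose_strategy({"culture_style": "honorific_oriented"}))
print(choose_strategy({"culture_style": "solidarity_oriented",
                       "prefers_directness": "yes"}))
```

In practice, such profiles would have to be learned from contextual and relational cues rather than declared up front, which is precisely the open research question raised above.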

Furthermore, the findings of the review indicate that the utilization of politeness strategies in conjunction with a particular type of device has the potential to elicit varying outcomes contingent on the task at hand, objectives, and circumstances. The current body of research lacks studies that offer categorizations of the various contexts, tasks, and purposes of interactions wherein a specific combination of politeness strategies and system type could prove the most successful. Such studies would provide valuable guidance for the development of more effective systems.

It has been also observed that the different developmental stages of children can engender different perceptions of technology and politeness (Lee et al. 2021; Turkle 2005; Druga et al. 2017, 2018). It would be advantageous to investigate potential associations between Theory of Mind (Wellman et al. 2001), politeness strategies (Holmes 2006; Brown et al. 1987; Watts 2003; Lakoff 1973), and technology perception within a broader experimental framework encompassing children of varying age groups.

An additional examination of the relationship between the contextual appropriateness of language and the degree of trust established may yield valuable insights, especially when automated systems must interrupt users' ongoing tasks. Additional experiments may investigate the influence of politeness in system-induced interruptions and its potential implications for user perceptions of the suitability and trustworthiness of information and systems in such scenarios.

On the other hand, for system designers, the findings of this study indicate that the implementation of please-centered wake words in robots and voice assistants may not be efficacious in promoting polite behavior by interactants (Wen et al. 2022, 2023). Implementing system functionality that promotes the use of indirect speech acts may prove a more efficacious approach.

Moreover, the current landscape of artificial intelligence is characterized by a diverse range of users, contexts, and tasks. Consequently, hard-coded social norms and politeness behaviors are impracticable, as they limit the system's adaptability to unforeseen situations that may arise during operation. Establishing at design time all the potential situations that an AI system must manage is a daunting and unrealistic task, rendering hard-coded solutions ineffective (Ribino and Lodato 2019). It is therefore essential to consider self-adaptation and self-learning as primary prerequisites for the development of socially competent systems.

To conclude, attention needs to be paid to future risks arising from such an evolution of artificially intelligent systems. Although people are not ready to see AI systems as peers, artificial intelligence and robotics will likely affect human development substantially in the coming years (Radanliev et al. 2022a, b). Artificial intelligence has already triggered existential questions, such as what cyber risks will emerge from such technologies and how humans can control risks from artificial intelligence (Radanliev et al. 2022). In particular, relating to politeness, as computers become more affectively and socially capable, the potential for unethical uses of such technology increases. Above all, preventive measures must be taken when the interaction involves vulnerable users such as children or older adults needing special attention.

8.2 Limitations

Despite the extensive search for relevant literature through various databases and search engines, there is a possibility that certain pertinent papers may have been missed due to the inherent limitations of the selected terms and databases. Furthermore, due to the extensive growth of research pertaining to this topic, the latest investigations concerning the violations of social norms have not been incorporated. Finally, works on politeness and team formation have also been excluded as they would warrant a separate analysis.