To decide which social abilities a robot requires and which aspects carry less weight, the fields of application and the type and frequency of human contact should be analysed in depth (see also the explanations for the «Relations between Interacting Agents» factor). The evaluation criteria proposed by Dautenhahn (2007), each defined on a spectrum, can be used as a basis for analysing contact within a specific field of application (Dautenhahn 2007, p. 683):

1. Contact with humans (none; distant to repeated; long-term; physical),

2. A robot’s functionality (limited and clearly defined to openly adaptable; shaped by learning),

3. The role of a companion (tool to assistant, companion, partner),

4. Social skills (not required or desirable to essential and necessary).
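For illustration, these four spectra can be captured as a small data structure. The following Python sketch is our own construction, not part of Dautenhahn's proposal: the numeric 0–1 scale, the averaging heuristic, and the example profile are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class ApplicationProfile:
    """Rates a field of application on Dautenhahn's (2007) four spectra.

    Each value is a position on the spectrum, normalized to 0.0-1.0
    (the numeric scale is our assumption; Dautenhahn defines the
    spectra qualitatively, not numerically).
    """
    human_contact: float   # 0 = none, 1 = long-term physical contact
    functionality: float   # 0 = limited/clearly defined, 1 = openly adaptable
    companion_role: float  # 0 = tool, 1 = partner
    social_skills: float   # 0 = not required, 1 = essential

def social_skills_priority(profile: ApplicationProfile) -> str:
    """Illustrative heuristic: the closer a profile sits to the 'upper'
    ends of the spectra, the more weight social abilities deserve."""
    score = (profile.human_contact + profile.functionality
             + profile.companion_role + profile.social_skills) / 4
    if score >= 0.66:
        return "high"
    if score >= 0.33:
        return "medium"
    return "low"

# A hypothetical museum tour-guide robot: repeated but distant contact,
# an assistant-like role, moderate need for social skills.
guide = ApplicationProfile(human_contact=0.5, functionality=0.4,
                           companion_role=0.5, social_skills=0.6)
print(social_skills_priority(guide))  # -> medium
```

Such a profile could serve as a first, coarse screening step before the in-depth analysis the text calls for.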

Two different paradigms can be distinguished regarding the potential relationships that might arise between humans and robots: the caretaker paradigm and the assistant/companion paradigm (Dautenhahn 2007, p. 698). The former states that humans take on a caretaking function when they encounter robots, since the latter are viewed as artificial creatures. In other words, humans look after robots (and not vice versa, as care robots do). According to this robot-centred view, the human must recognize the robot’s needs and react to them. The assistant/companion paradigm, by contrast, focuses on the robot’s supportive role as a useful machine that is able to recognize and respond to human needs in order to be helpful. Thus, an assistant robot that supports people with everyday tasks takes on a role similar to that of a personal guardian or butler. The choice of which paradigm to emphasize in the design of a human–robot interaction is ultimately left to the developers of the technology. The poliTE project and the FASA model, however, adopt the assistant/companion paradigm and focus on the human-centred perspective.

To approach concrete applications in specific interaction situations and application contexts relating to socially intervening artificial assistants, the FASA model is applied below to a series of case studies, identified over the course of the literature search, that reflect interactions between humans and artificial social assistants. At the end of the chapter, we consider a fictitious application example, or thought experiment, to illustrate how the model can be applied to the analysis of a concretely planned interaction situation (for example, in a research project) between a socially intervening artificial assistant and a person. In doing so, we show how the model can be used as a checklist in a research and development project.

6.1 Application example 1 – A robot barista

Hedaoo, S., Williams, A., Wadgaonkar, C., & Knight, H. (2019). A robot barista comments on its clients: social attitudes toward robot data use. In Proceedings of the 14th ACM/IEEE International Conference on Human–Robot Interaction (HRI) (pp. 66–74).

In this application example, a robot barista (NAO, Softbank Robotics) commented on a conversation between two guests in a café setting. The valence of the robot’s comments varied (positive/negative). The addressees of the comments were also varied, as well as the setting of the conversation, e.g., family setting vs. job interview. The basic setting of this application example constitutes a framework within which the test subjects judge the appropriateness of the robot’s behaviour; therefore, some of the factors and factor criteria of the FASA model of social appropriateness are already reflected: the robot is already configured for a specific situation with specific relationships between interacting agents and with individual specifics. It is intended to act:

  • as a barista for guests (factor criterion of the «Relations between Interacting Agents» factor: ‹familiarity and relationship aspects›) in a café setting (factor criteria of the «Situational Context» factor: ‹place›, ‹framing›, ‹participants›)

  • as a conversation partner (factor criterion of the «Individual Specifics» factor: ‹individual shaping of social roles›).

The appropriateness judgements of the test subjects also illustrate the relevance of the factors and factor criteria for human–machine interactions: the judged appropriateness of the robot’s behaviour depended on the social roles and degrees of familiarity that the test subjects brought to the encounter. The factors «Individual Specifics» and «Relations between Interacting Agents» therefore played a particularly influential role in judging the appropriateness of the robot’s behaviour. In addition, the perceived appropriateness of the robot’s comments fluctuated depending on the subjects’ mood in the specific test situation – another link showing the relevance of the «Individual Specifics» factor. The fact that the valence of the robot’s comments affected judgements of appropriateness also points to the «Type of Action, Conduct, Behaviour, or Task» factor: some (speech) actions have typical consequences that manifest in all but unusual situations or usages. Praise and niceties, for instance, are usually met with a positive reception. The «Standards of Customary Practice» factor also plays a role in this application example: the test subjects’ judgements of the appropriateness of body language, as well as of potential conversation analysis and database queries performed by the robot, reflect a) the subjects’ own implicit conceptions of appropriateness, which, if explicated, would presumably allow conclusions about specific customs depending on their social position, habitus, etc., and b) their personal conceptions of intimacy and privacy. These aspects relate to known legal and ethical questions in the context of human–machine interactions and thus also demonstrate that questions of legality, ethics, and social appropriateness are in some cases closely related – which is precisely why we should not be too eager to conflate them.

6.2 Application example 2 – Baby schema

Mussakhojayeva, S., Zhanbyrtayev, M., Agzhanov, Y., & Sandygulova, A. (2016). Who should robots adapt to within a multi-party interaction in a public space? In Proceedings of the 11th ACM/IEEE International Conference on Human–Robot Interaction (HRI) (pp. 483–484).

In this application example, people in mixed groups (parents, their children, and people with no relation to the families) met a robot (NAO) in field tests. The robot adapted its behaviour to either the children or the adults in the group as it presented itself. The results of the experiment showed that the robot’s behavioural adaptations were evaluated differently by different people. The children perceived the robot positively regardless of its verbalizations, whereas the parents perceived it more positively if it adapted its behaviour and language to the children. In general, the robot was perceived more positively by the parents than by the unrelated participants. This difference was very likely rooted in the children’s positive reaction to the robot. The unrelated adults (without children) remarked that the robot should adapt to adults in settings such as banks and hospitals, whereas the parents preferred the robot to adapt to their children, regardless of context.

This case study clearly illustrates the relevance of the «Type of Action, Conduct, Behaviour, or Task» factor with the ‹role identities› criterion, since the relationship between the parents and the children led to differences in judging the robot’s behaviour. The relevance of the «Situational Context» factor (situational criteria ‹place› and ‹framing›) can also be seen, since people with no relation to the children present in the interaction preferred behaviour adapted to adult interaction partners in ‘official’ settings such as banks or hospitals. However, this factor appears to play a subordinate role here, because the role identity of being a parent shifted this preference in favour of unqualified adaptation to the children.

6.3 Application example 3 – That robot touch

Hoffmann, L. (2017). That robot touch that means so much: On the psychological effects of human–robot touch (Doctoral dissertation, University of Duisburg-Essen, Germany).

In our third application example, the influence of different parameters on the perception of contact between humans and robots, and the effect of this contact on the evaluation and perception of the robots, were tested. The results showed that touching certain parts of the body, e.g., the back or the legs, was perceived to be more appropriate from a robot than from a person (a stranger). Furthermore, touch initiated by the human was perceived to be more appropriate than reciprocal touch or touch initiated by the robot. The acceptability of touch varied with the size and mechanical appearance of the robot: appropriateness decreased as the robot’s size and degree of mechanical appearance increased. In addition, touching the robot generally led to positive affect and more positive interaction behaviour. Accordingly, touch appears to be positive in HRI, but any touch should be initiated by humans. For example, it would be conceivable to establish a human-initiated handshake as a greeting.

This example is especially relevant as it demonstrates that a judgement of social appropriateness can differ between humans and robots. Here, a touch that would be inappropriate for a human is judged to be appropriate in the interaction with a robot. In terms of the FASA model, the «Relations between Interacting Agents» factor is again especially relevant in this case study, with emphasis on the ‹familiarity and relationship aspects› criterion, which appears to be applied more ‘narrowly’ to humans than to robots in the case of physical contact. Here, this factor goes hand in hand with the «Standards of Customary Practice» factor and more specifically its ‹values/social norms› criterion. Simply touching a stranger violates socially established norms of behaviour. This is especially true for certain parts of the body, namely the back and the legs in this case study. However, the extent of the applicability or inapplicability of these norms seems to differ between robots and humans. The «Individual Specifics» factor also plays a role, given that the dependency of social appropriateness on the size and mechanical appearance of the robot was, in turn, contingent on personal evaluation structures.

6.4 Application example 4 – If a robot comes down the hallway…

Lauckner, M., Kobiela, F., & Manzey, D. (2014). ‘Hey robot, please step back!’ – Exploration of a spatial threshold of comfort for human-mechanoid spatial interaction in a hallway scenario. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication (pp. 780–787).

Our fourth application example examined the distance perceived to be appropriate when passing someone in a limited space (in this case, a hallway). The robot used was a prototype of a Bosch transporter robot, equipped with a display that could show a human face. It was found that the accepted proximity for a frontal approach was 0.8 m, and the mean accepted distance for passing laterally was approx. 0.4 m. The preferred distance was not significantly influenced by the robot’s autonomy, but increasing the robot’s speed (by 0.8 m/s) increased the preferred distance significantly. The robot’s external appearance had no significant influence on the frontal distance, but a human-like design reduced the preferred lateral distance by 0.1 m. Due to interindividual variability, a frontal distance of 1.1 m and a lateral distance of 0.6 m were recommended for first contact with a social robot in this example.

In terms of the FASA model, the «Situational Context» plays an especially prominent role here (where something is unfolding, how it is spatially arranged as it unfolds); the frontal distance perceived to be appropriate was roughly twice as large as the accepted distance for passing laterally. The «Individual Specifics» factor and the ‹personal evaluation structures› criterion are also important, as they encompass differences in preferred distances from individual to individual. Additionally, the «Standards of Customary Practice» factor can be cited, since the ‹habitus› criterion reflects the field of proxemics, which was the focus of the study. Thus, there are certain distances that have been socially established as typical (Hall 1966), and failing to observe them constitutes a breach of social appropriateness.
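The distances reported in this study can be collected into a simple parameter sketch for motion planning. The code below is illustrative only: the distances come from the study as summarized above, while the function and its interface are our own assumptions.

```python
# Distances (in metres) reported by Lauckner et al. (2014); the
# first-contact/otherwise distinction follows the study's recommendation,
# but this lookup function itself is our illustrative assumption.
ACCEPTED_FRONTAL_M = 0.8
ACCEPTED_LATERAL_M = 0.4
RECOMMENDED_FIRST_CONTACT_M = {"frontal": 1.1, "lateral": 0.6}

def min_passing_distance(direction: str, first_contact: bool = True) -> float:
    """Return a conservative minimum distance for a hallway encounter.

    Uses the study's recommended distances for first contact (which
    account for interindividual variability) and the mean accepted
    distances otherwise.
    """
    if first_contact:
        return RECOMMENDED_FIRST_CONTACT_M[direction]
    return ACCEPTED_FRONTAL_M if direction == "frontal" else ACCEPTED_LATERAL_M

print(min_passing_distance("frontal"))         # -> 1.1
print(min_passing_distance("lateral", False))  # -> 0.4
```

In a real planner these thresholds would additionally be coupled to speed, since the study found that higher speed increased the preferred distance.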

6.5 Application example 5 – Wait for it… Hello!

Yamamoto, M., & Watanabe, T. (2006). Time lag effects of utterance to communicative actions on CG character-human greeting interaction. In Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (pp. 629–634).

In our fifth application example, a series of differently timed variations of a greeting was examined in a Japanese-speaking setting. The robot, a small and somewhat playful device (unazuki-kun), served as the embodiment of a virtual agent. Variations in pauses and delays were found to produce different communication effects: a delay of 0.3 s was desirable for greetings between acquaintances, whereas longer delays were preferred for polite greetings.

In terms of the FASA model, the «Situational Context» factor (situational criterion ‹place›) is reflected in the Japanese cultural setting. There are greater differences between polite and familiar greetings in Japanese culture than in some other cultures (e.g., American culture), but the preferred delays would most likely manifest in some other way in other cultures, or perhaps play no role at all. The ‹familiarity and relationship aspects› criterion of the «Relations between Interacting Agents» factor is also relevant, reflecting the social distance between the two interaction partners. The preferred delay in communication was defined by the familiarity of the two persons (acquaintances vs. interacting agents less familiar with one another), which in turn determined whether a formal, polite greeting was viewed as socially appropriate. The «Standards of Customary Practice» factor, more specifically the ‹habitus› criterion, can also be used to understand social appropriateness in this application example. The ‹habitus› criterion encompasses the typical and ‘ingrained’ types of behaviour and judgement within a group, which includes the general rules of conversation.
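The timing finding can likewise be sketched as a tiny policy function. Only the 0.3 s value for acquaintances is taken from the study; the concrete value for the longer ‘polite’ delay is a placeholder assumption, since the study reports only that longer delays were preferred.

```python
def greeting_delay_s(acquainted: bool) -> float:
    """Delay between a partner's greeting and the agent's response.

    The 0.3 s value for acquaintances comes from the study
    (Yamamoto & Watanabe 2006); the longer delay for polite/formal
    greetings is an assumed placeholder, as the paper reports only
    that 'longer' delays were preferred.
    """
    return 0.3 if acquainted else 0.6  # 0.6 s is our assumption

print(greeting_delay_s(True))  # -> 0.3
```

As the text notes, such a mapping would need to be re-calibrated per cultural setting rather than hard-coded.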

6.6 Application example 6 – CLIPPY

Whitworth, B. (2005). Polite computing. Behaviour & Information Technology, 24(5), 353–363.

The final application example relates specifically to ‘Clippy’, Microsoft’s virtual assistant. Despite its potentially useful assistance function, Clippy was perceived as impolite and disruptive by many users due to its frequent and unsolicited interruptions of the work process. This perception may stem from insufficient attention to the interaction with the user during Clippy’s design. Whitworth writes: “Politeness is any unrequired support for situating the locus of choice control of a social interaction with another party to it, given that control is desired, rightful and optional” (Whitworth 2005, p. 355), and formulates the following informal politeness rules for software (which can largely be transposed to robots) (cf. Fig. 6.1):

Fig. 6.1

Software politeness rules (Whitworth 2005, p. 359)

According to Whitworth, Clippy violated a number of these postulated informal politeness rules while acting as an assistant and was consequently perceived as disruptive and annoying by many users.

In terms of the FASA model, the «Situational Context» factor and its situation criterion ‹media-based and performative mediation› are especially relevant here since the norms of e-conversations fall under this criterion. For virtual or media-based conversations, there are specific norms that must be considered when designing interactions within this context to ensure that they are perceived as socially appropriate. The «Standards of Customary Practice» factor and its ‹habitus› criterion, which encompass the typical and ‘ingrained’ types of behaviour and judgement within a group, including general rules of conversation, are also relevant. The interaction between the different factors is clearly visible in this example. Although the «Standards of Customary Practice» factor includes general rules of conversation, the situation, in this case the media-based interaction, means that another set of norms, some of which are different and some of which overlap with the usual norms, must be considered.

6.7 Fictitious application example/thought experiment

To illustrate the application of the model to planning specific interaction situations in the context of research and development, it might be helpful to consider a fictitious example as a thought experiment. The FASA model can be used as a basis to assess which aspects of social appropriateness should be taken into consideration. We shall examine the criteria factor by factor to determine whether each criterion is relevant, using the questions listed in the model description, and, if so, what consequences we can deduce in terms of the behaviour that we wish the system to perform. As a realistic and widely encountered scenario in state-of-the-art technological development, we will consider the implementation of a robot in a retirement home. We will apply the model to a concrete interaction situation: reminding the residents about an appointment, in this case an upcoming leisure activity that has been planned (see Footnote 1). The following discussion represents one possible approach to this scenario based on the FASA model and makes no claims of exclusivity or completeness.

«Situational Context»: For our appointment reminder, let us begin with the factor «Situational Context» and analyse the situation using the factor criteria listed in the model.

  • ‹Place›: Our analysis begins with the situational criterion ‹place›, which relates to where the behaviour is taking place. Here, the specific context of the retirement home and the cultural setting in which it is located must be considered. In our example, we will assume the case of a Western culture. If the interaction is set within a German-speaking country, the polite Sie form of address should for example be used if the level of familiarity is low (see also «Relations between Interacting Agents»), and a larger interindividual distance should be maintained than in some other societies (see also the ‹habitus› customariness criterion, which encompasses proxemics). Given that the interaction is set within a retirement home, we can draw conclusions about the age of the interacting agents and the possibility that the addressees may have cognitive and physical limitations caused by degenerative diseases or simply advanced age (see also «Individual Specifics») that must be taken into consideration when designing the interaction. Furthermore, within the scope of the ‹place› situational criterion, the degree of publicness (e.g., private vs. public) of the interaction needs to be considered; in the application example, the situation can be assumed to be private, unless the human-robot interaction is unfolding and being recorded as part of a scientific study. Accordingly, the robot does not need to communicate as representatively as would be necessary in a completely public situation.

  • ‹Framing›: The situational criterion ‹framing› asks ‘as what’ the behaviour is unfolding. For example, this criterion encompasses whether the action being evaluated is a ritual or ritualized, and whether it is being performed for its own sake or as a form of commentary, e.g., in the context of art. In our application example, the level of seriousness is the most important aspect to consider. The situation is not very formal, and we can assume that it will occur somewhat regularly. Therefore, the robot’s behaviour can be more informal, jokes or possibly colloquial language are permissible, and no lengthy explanations are required, since the addressees can be assumed to be familiar with the situation.

  • ‹Media-based and performative mediation›: The situational criterion ‹media-based and performative mediation› plays a subordinate role in this situation since the interaction is unfolding in a face-to-face setting. This criterion would play a more prominent role in media-based interactions such as video conferences or discussions in a comments section on the internet.

  • ‹Participants›: For the ‹participants› factor criterion, which encompasses the nature of the interacting agents, we simply need to note that the participants are people; no entities of a different nature play a role or need to be considered.

  • ‹Time›: The situational criterion ‹time› describes whether the appropriateness of a displayed behaviour depends on it being performed at specific times. This criterion also plays a subordinate role in this example, since the performance of the robot’s reminder task does not depend on the specific time at which this reminder is given (unless the reminder would undesirably wake the residents from sleeping, etc.). It is sufficient to select a time window that allows the addressees to complete or interrupt their current activities to participate in the planned leisure activity, or travel to the necessary location.

«Individual Specifics»: As mentioned above, the «Individual Specifics» factor plays a prominent role in this application example.

  • ‹Personal evaluation structures›: The ‹personal evaluation structures› criterion focuses on aspects that influence whether an interaction partner would judge a behaviour as socially appropriate or inappropriate. In the context described here, the age of the addressees must be considered, since the conditions of socialization differ from generation to generation. For example, a higher degree of formality in the form of address may be advisable when interacting with older persons, whereas people of a younger age or from a later generation might perceive a more informal address (such as the German Du form of address) as appropriate, even in the absence of familiarity (see «Relations between Interacting Agents»). Physical and cognitive states also need to be considered, for example in relation to the speaking volume and speed that would be considered appropriate. A hearing-impaired person would judge a higher volume to be appropriate; in the context of a retirement home, a clearer speech style or higher volume might be considered appropriate, depending on the composition of the group of residents – or come across as discriminatory. Furthermore, regarding physical condition, a resident with restricted mobility might need more time to cover distances, which should be taken into consideration when defining the timing of the reminder for the specific group being addressed.

  • ‹Personal characteristics›: The factor criterion ‹personal characteristics› describes the dispositions that interaction partners bring with them. This includes personal preferences regarding certain aspects of the interaction situation. Likes, prejudices, and personal taste play a role, as do chronic distortions of perspective (e.g., a negativity bias), personal attitudes, understanding of irony, personal interests, etc. Since our example concerns a group interaction where the robot addresses multiple people at the same time, this criterion plays a less prominent role. Instead, it makes sense to design the interaction more generally, as it is unrealistic to expect to be able to account for the personal characteristics and preferences of all addressees. Nevertheless, it would be conceivable to have a scenario in which a particular person requires special attention to encourage them to participate in the planned activity. A two-stage process could then be envisaged that first makes a general address to the full group, then addresses a particular individual with a more personalized communication that considers their personal preferences to improve the perception of appropriateness.

  • ‹Individual shaping of social roles›: The criterion ‹individual shaping of social roles› is not too relevant in our example, since potential social roles are primarily meaningful when interacting in a context where the interacting agents have essential roles that determine the interaction itself. Conceivable examples include interactions within professional life, where there are superiors and employees, interactions in school contexts, where there are teachers and students, and so on. In the interaction situation described here, which takes place in the context of a retirement home, social roles have a less prominent meaning because the addressees are unlikely to have any other roles besides being a resident of the facility at this point in time.

«Type of Action, Conduct, Behaviour, or Task»: Next, we need to analyse the «Type of Action, Conduct, Behaviour, or Task» factor and its factor criteria.

  • ‹Time›: Let us begin with the action and behaviour criterion ‹time›, which relates to how and when the behaviour is taking place. Various questions could be relevant here. For example, what type of conversation is it? A dialogue or something else? Is there a fixed exchange between the interacting agents? Are there interruptions? If so, why? In some cases, this criterion could also encompass a sense of tact regarding choosing the right time to address a particular question within the interaction. In the application example, the most relevant aspects are the following: the interaction does not unfold as a classical dialogue with a fixed exchange between the members of the conversation, and it is not an instance of dyadic communication, but a communication with a group of people. In such a context, since the robot cannot necessarily wait until nobody is speaking to avoid interrupting a conversation, it could be viewed as socially appropriate to interrupt existing interactions with an ‘interjection’ before ‘broadcasting’ a general announcement of the upcoming appointment to the room – much as it is appropriate to tap on a glass to interrupt conversation before giving a speech. Furthermore, a sense of tact is not necessary for the robot, since the subject of the interaction is not sensitive.

  • ‹Role identities›: The second action and behaviour criterion, ‹role identities›, is closely linked to the «Relations between Interacting Agents» factor, as well as the ‹individual shaping of social roles› criterion. This criterion relates to questions about who is performing the behaviour and who is judging it. Here, for example, it corresponds to aspects regarding the innate roles of the interacting agents or the roles assigned to them, how a judgement of appropriateness depends on the role of the person performing the relevant behaviour (e.g., depending on age, gender, or ethnicity), how the role identities of the interacting agents relate to one another, the reputations of the interacting agents, and whether people are representing themselves or somebody else within the interaction (this plays an especially important role in political settings). However, since social roles are not that important in this example, as mentioned earlier, this criterion is irrelevant here or plays only a very subordinate role.

  • ‹Intention›: The action and behaviour criterion ‹intention› relates to the motivations, goals, or intentions with which the behaviour or action is taking place. Here, it should for example be considered whether the behaviour has a persuasive intention, whether there is the potential for cooperation, etc. In the example, the robot could potentially also have a persuasive intention in addition to its reminder function, to motivate the residents to participate in the planned activity. In a human context, a persuasive intention can be pursued in subtle and socially appropriate ways by strategies such as particular rhetoric or by mirroring gestures or facial expressions in communication, but that is not yet possible to the same extent for robots. Since the ability to recognize emotions and situations is also significant in this criterion, besides the ability to cooperate, its relevance may increase as technical development continues to progress.

  • ‹Consequences›: Finally, there is the ‹consequences› criterion of the «Type of Action, Conduct, Behaviour, or Task» factor, which encompasses the possible consequences of the interaction. Here, rules of conduct established within specific groups of actors can be considered, as well as the visibility of consequences or the (institutionally normative) enshrinement of behavioural rules (see also the «Standards of Customary Practice» factor). The power dynamics between the interacting agents also play a role; they determine whether a violation of the rules of social appropriateness can be sanctioned by the interaction partner, and, if so, how. The worst-case scenario in the example considered here would be a termination of the interaction, for example if the residents simply ignore the robot and do not participate in the planned activity. Accordingly, the consequences are not serious and there is some leeway for social (in)appropriateness in the design of the interaction, which would not be the case for other scenarios that carry more serious consequences (imagine for example a situation unfolding in front of a court).

«Relations between Interacting Agents»: The «Relations between Interacting Agents» factor also contains various factor criteria that must be considered when designing the interaction.

  • ‹Familiarity and relationship aspects›: The relational criterion ‹familiarity and relationship aspects› asks how the interacting agents relate to one another. This includes consideration of the frequency and duration of the interaction, whether a friendship exists or how close the interacting agents are to one another, what specific power dynamics there are, what expectations they have of one another, etc. In the application example, the robot and the residents already know and have frequently seen one another, and interaction situations have already occurred repeatedly. This allows us to conclude that no highly formal communication or introductory greeting and self-introduction are needed. Since the example does not involve a companion robot, and the robot is instead understood as a service provider, the interaction does not have any special intimacy or familiarity and should instead be kept relatively neutral. There is not expected to be any power imbalance between the interacting agents; this aspect can therefore be neglected. Although the robot gives a reminder of the appointment, its task is not to force the residents to participate, nor does it have the authority to do so; it is simply offering a suggestion.

  • ‹Intention›: This is also reflected in the ‹intention› relational criterion, which describes the interests at stake in the interaction. For example, this includes the question of whether the interaction is cooperative or competitive, whether it serves an economic interest, whether the possible consequences of the interaction are institutionally enshrined, etc. In the application example, no further aspects need to be considered.

  • ‹Context›: The relational criterion ‹context› is comparable to the «Situational Context» factor but focuses instead on the relations between the interacting agents. For example, ‘as what’ do the interacting agents perceive the interaction? Is there a consensus on this perception? Applied to the situation of a robot reminder, it should for example be noted that both parties need to perceive the reminder as voluntary, i.e., the fact that the leisure activity is being proposed as a suggestion needs to be communicated to maintain social appropriateness.

«Standards of Customary Practice»: Finally, we must analyse the «Standards of Customary Practice» factor.

  • ‹Values/social norms›: The first customariness criterion is ‹values/social norms›, i.e., the values and norms according to which social appropriateness is judged. This criterion considers both collective values and virtues, such as fairness and equality, and individual values that the interacting agents may have internalized over the course of their socialization, but which are not necessarily shared by others, and finally any institutionally enshrined customs. In the context of our application example, this factor is of moderate importance, since the situation is not particularly sensitive with respect to any potentially applicable values, but the violation of such a value might still lead to the interaction being terminated. For example, the fact that values such as friendliness or politeness can play an essential role in a retirement home setting needs to be considered. In addition, in the application example, no separately institutionalized customary practices are expected, although any relevant legislation should of course be observed, especially as it relates to the provision of care.

  • ‹Habitus›: The next criterion is the ‹habitus›, which describes the types of behaviour and judgement ‘ingrained’ in a group. This is linked to the frequency of interaction. People who are meeting for the first time (excluding broader societal contexts resulting from perceived roles in the interaction situation) do not have any habitually ingrained types of behaviour and judgement. In our application example, it might have already been established as common practice for the leisure activities to begin slightly late, for instance, so it would accordingly not be socially inappropriate for the robot to account for this time window in its reminder. This criterion also encompasses types of behaviour that would otherwise be inappropriate but have been accepted as appropriate through the habitus of the relevant group (but would not be accepted in other group contexts).

  • ‹Regulative norms›: The final customariness criterion ‹regulative norms› relates to the ethical dimension of the application example. The example could potentially involve vulnerable groups of people, making this criterion especially important. A long-standing and independent area of applied ethics exists for this scenario, identified by the keyword of ‘care ethics’, so we can draw from both the existing professional discourse and the relevant experts and institutions on this topic.
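The factor-by-factor walkthrough above can be supported by treating the FASA model as a literal checklist. The following Python sketch is one possible operationalization and is not part of the model itself; the factor and criterion names are taken from this chapter, while the note-keeping interface is our own assumption.

```python
# The FASA model's factors and criteria as used in the walkthrough above;
# the checklist representation itself is our illustrative sketch.
FASA_MODEL = {
    "Situational Context": [
        "place", "framing", "media-based and performative mediation",
        "participants", "time"],
    "Individual Specifics": [
        "personal evaluation structures", "personal characteristics",
        "individual shaping of social roles"],
    "Type of Action, Conduct, Behaviour, or Task": [
        "time", "role identities", "intention", "consequences"],
    "Relations between Interacting Agents": [
        "familiarity and relationship aspects", "intention", "context"],
    "Standards of Customary Practice": [
        "values/social norms", "habitus", "regulative norms"],
}

def checklist(notes: dict) -> list:
    """Walk every factor/criterion pair and flag those not yet analysed.

    `notes` maps (factor, criterion) tuples to free-text analysis notes.
    """
    open_items = []
    for factor, criteria in FASA_MODEL.items():
        for criterion in criteria:
            if (factor, criterion) not in notes:
                open_items.append(f"{factor} / {criterion}")
    return open_items

# Analysis notes for the retirement-home reminder (abbreviated to one entry):
notes = {("Situational Context", "place"):
         "Retirement home, Western culture, private setting"}
remaining = checklist(notes)
print(len(remaining))  # -> 17 (18 criteria in total, 1 already noted)
```

Used this way, the model functions as the checklist described at the start of the chapter: a development team would iterate until `checklist(notes)` is empty, having either analysed each criterion or explicitly marked it as not relevant.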