1 Motivation

In recent years, robots have come to be used in a growing number of everyday and professional contexts, with service robots increasingly performing their tasks in direct interaction with humans. Service robots are understood here as robots that perform useful tasks for humans in a partially or fully autonomous manner, excluding industrial applications (International Organization for Standardization 2021). The area of application of such service robots thus includes both public and private spaces. In the public sector, for example, service robots already act as information assistants, and more complex tasks such as autonomous cleaning or deliveries can already be realized. In the private context, too, service robots already take over simple cleaning tasks. In the near future, it is conceivable that robots will be used as personal assistants that help people with a wide range of everyday tasks. In this context, robots will take on new roles in the social system (Enz et al. 2011). In the course of this development, the requirements for designing the interaction of robots with humans in their task environment are changing (e.g., Bartneck and Forlizzi 2004). This concerns not only the interaction with people who own the robots or who directly perform a task together with them, but increasingly also people who happen to be in the robots’ task environment (e.g., passers-by in public spaces). In this sense, service robots are increasingly becoming social robots that can be considered part of a mixed society of humans and robots, in which robots, among other things, interact and communicate socially with humans, thereby learning and acquiring experience (e.g., Hegel et al. 2009; Fong et al. 2003). Accordingly, a service robot that performs its task in the context of social interaction can be described as a social (service) robot. This work refers to all types of social service robots (hereafter also referred to simply as “service robots” or “robots”).

In the context of social interaction (both in the private and in the public domain), the appearance and behavior of service robots, as well as the interaction concepts they are equipped with, must fulfil several basic requirements. On the one hand, they must allow efficient task performance; on the other hand, they must suit the social structure in which the robot operates and, beyond this, be acceptable on an individual level. On this basis, the RobotKoop project funded by the BMBF (2018–2021) pursued the vision of cooperative, intelligent service robots that operate in dynamic social settings, act in a trustworthy and acceptable manner, and negotiate and coordinate their actions with the people around them. One major step towards this vision is a context-sensitive, cooperative human-robot interaction strategy. The advantages of robot use in areas such as household, care, communication, and service can only be fully realized if the employed robots are equipped with context- and need-sensitive interaction strategies. These strategies should allow situation- and goal-oriented communication with the environment in which the robot is used. Such human-robot interaction (HRI) needs to be perceived as transparent, acceptable, and trustworthy by the users and the persons in the environment of task execution. In particular, promoting a minimum level of acceptance and an appropriate level of trust are fundamental prerequisites for humans and robots to coexist efficiently, safely, and pleasantly.

Against this background, a checklist is presented in the following that is intended to support the human-centered design of robots, their behavior, and their communication in both science and practice. By providing questions on 60 design topics related to acceptance and trust, together with design recommendations based on these questions, the checklist is intended to contribute to increasing the subjective trustworthiness and acceptance of HRI designs. The questions and design recommendations in the checklist are intended to serve as an orientation aid, source of inspiration, and working tool for practitioners (e.g., in engineering, computer science, and product development) and researchers, and to provide a starting point and basis for discussion for enhancing human-centered HRI design in specific HRI projects.

2 Theoretical background

To answer the question of how robots and their interaction with humans can be optimally designed with respect to acceptance and trust, it is not feasible to formulate a generalizable strategy, given the wide range of applications, tasks, and design possibilities of robots. In the current state of HRI research, many specific design decisions are met with partly contradictory results from individual studies, alongside various isolated collections of guidelines, recommendations, and requirements from different application areas.

The aim of the presented checklist is to integrate and complement the existing results and approaches with the findings from studies and expert discussions of the RobotKoop project. Through its application in science and practice, the checklist might advance and promote the investigation and practical implementation of trustworthy and acceptable HRI.

In the following, a theoretical introduction to the underlying concepts of cooperation, trust, and acceptance is provided. Furthermore, an overview of existing collections of recommendations and requirements from related fields is given, and their relevance for the present work is discussed.

2.1 Cooperation between humans and robots

In recent years, the distribution of tasks between humans and robots in the socio-technical system of joint task execution has been shifting from a situation in which robots and humans mainly co-exist (i.e., the tasks of robots and humans are independent of each other) towards the possibility of cooperative teamwork between humans and robots. In this respect, the concept of human-technology cooperation describes a form of working together between humans and technical systems in which both parties pursue a common goal as team players, coordinating, aligning, and complementing their task performance with each other (e.g., Hoc 2001; Christoffersen and Woods 2002; Klein et al. 2004).

In this context, different types of cooperative interaction between humans and robots can be distinguished; two such types are differentiated, for example, in the work of Onnasch et al. (2016). In the first type (referred to as human-robot cooperation in Onnasch’s work), humans and robots work towards a common goal, with a clear division of tasks between them, and their actions are not directly dependent on each other. The second type is referred to as human-robot collaboration, which goes beyond cooperation by describing a working relationship in which humans and robots also work on sub-goals together simultaneously (and direct physical contact may occur under certain circumstances). In these collaborative scenarios, the roles of humans and robots change. Humans and robots enter into a social exchange with each other—naturally, on the robot’s side, within its restrictions regarding subjectivity and intentionality—and coordinate dynamic solutions to problems, taking into account their respective capabilities. In the following, the term human-robot cooperation is used to refer to both types, since the two situations are difficult to distinguish in practice, merge into each other, or the distinction sometimes does not seem meaningful (Onnasch describes cooperation as a type of collaboration). Ultimately, both forms of working together require similar basic prerequisites with respect to the design of robot behavior and the user interface.

Compared to mere coexistence, this more complex and communication-intensive cooperation between humans and robots creates new requirements for the design of robots and HRI (e.g., Walch et al. 2017; Babel et al. 2021). First, to ensure successful human-robot cooperation, the interface should foster shared situational awareness between humans and robots (e.g., human-robot awareness; Yanco and Drury 2004; Drury et al. 2004). In this sense, each partner should be well informed about the current status of subtasks as well as the current activities and plans of the other. In this way, a certain predictability of the actions of the robotic or human counterpart can be established.

Furthermore, some degree of controllability of the actions of the cooperation partner seems necessary (e.g., Christoffersen and Woods 2002). For example, human users should be able to dynamically adapt the task scope of the robot or, to a certain extent, control the way in which the robot performs tasks. Similarly, in many scenarios it seems desirable that the robot informs the human partner about upcoming actions or the need for support. In many application contexts, it might be necessary for the robot to prevent the human from carrying out potentially risky activities or to point out potential errors. These and other scenarios that can occur in cooperative work between humans and robots require a certain degree of acceptance of and trust in the robotic team partner or household helper. Without this, it could be unpleasant for the user to grant the robot its own competencies and decision-making scope. This, in turn, could have negative psychological consequences, such as anxiety or stress, which—both in work and private contexts—could have even more serious long-term consequences for the user.
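
To make these requirements more concrete, the following minimal sketch outlines one possible shape of a cooperative interaction interface exposing status reporting, task-scope control, and proactive announcements. It is an illustration under our own assumptions, not an interface proposed in the literature cited above; all class and method names are hypothetical.

```python
"""Illustrative sketch (hypothetical, not from the cited literature):
a minimal cooperative-robot interface reflecting the requirements
discussed above: shared awareness of status and plans, controllability
of the task scope, and proactive announcements."""
from dataclasses import dataclass, field


@dataclass
class RobotStatus:
    current_task: str
    progress: float                               # 0.0 .. 1.0
    planned_actions: list[str] = field(default_factory=list)


class CooperativeRobot:
    def __init__(self) -> None:
        self.status = RobotStatus(current_task="idle", progress=0.0)

    def report_status(self) -> RobotStatus:
        """Shared awareness: expose current task, progress, and plans."""
        return self.status

    def set_task_scope(self, tasks: list[str]) -> None:
        """Controllability: let the user dynamically adapt the task scope."""
        self.status.planned_actions = list(tasks)

    def announce(self, message: str) -> None:
        """Predictability: inform the human partner about upcoming actions."""
        print(f"[robot] {message}")


if __name__ == "__main__":
    robot = CooperativeRobot()
    robot.set_task_scope(["vacuum living room", "return to dock"])
    robot.announce("Starting: vacuum living room")
    print(robot.report_status())
```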

In addition to cooperative and collaborative teamwork between humans and robots, even a mere co-existence of humans and robots requires coordination of their respective independent goals and interests. Consequently, in this case too, robots should be equipped with a cooperative interaction interface that enables effective and acceptable coordination between humans and robots.

Against this background, this work aims to provide a collection of design topics for informing the design of the appearance and interface of robots in both the public and private domains. This is intended to support optimized cooperation between humans and robots and to support the formation of an appropriate level of acceptance and trust on the part of users and of people present in the area of the robots’ tasks.

2.2 Acceptance of robots

Understanding and predicting the acceptance of robots and robot behavior is a central research topic in HRI (de Graaf and Allouch 2013), as acceptance is a basic prerequisite for the use of automated technology (see, e.g., the Technology Acceptance Model, TAM; Ghazizadeh et al. 2012). Accordingly, a minimum level of acceptance constitutes a subjective prerequisite for the use of a technical system such as a robot. Due to the large number of publications on the topic, there is currently a wide variety of definitions of acceptance, some of which contradict each other (e.g., Arndt 2011). This is summarized by Königstorfer and Gröppel-Klein (2009): “In acceptance […] research, there is now a consensus that acceptance moves along a continuum that ranges from attitude […] to action (purchase) and regular use of technological innovations” (p. 849, German original translated by the authors). In this sense, acceptance is defined in this paper as the intention to use—a subjective evaluation that influences the extent to which a robot is used (e.g., Naneva et al. 2020).

Since a minimum level of acceptance is a necessary prerequisite for the use of robots, the factors that influence users’ acceptance are of particular interest for HRI design. In line with the general framework of the TAM, Ghazizadeh et al. (2012) identified perceived usefulness and ease of use as subjective factors influencing acceptance. In addition, robot-specific models for predicting acceptance include emotional processes on the part of users. For example, the USUS Evaluation Framework postulates that robot acceptance is significantly influenced by users’ attitudes towards robots and their emotional attachment to robots, in addition to expectations regarding robot performance and efficiency (Weiss et al. 2009). Studies have also identified further robot characteristics that influence acceptance. For example, numerous studies have shown that transparency increases robot acceptance (Alonso and de la Puente 2018; Cramer et al. 2008; Ososky et al. 2014). Similarly, appropriate social behavior of the robot (politeness, social distance, communication behavior) has been found to promote acceptance (de Graaf et al. 2015; Babel et al. 2021, 2022a, b). Furthermore, some studies have shown that the degree of human-likeness affects robot acceptance, with several reporting higher acceptance of human-like robot design (e.g., Barnes et al. 2017; Eyssel et al. 2012; Louie et al. 2014).

However, the influence of specific design and interaction features can vary depending on the robot types, tasks, subject groups, etc. under study. This is supported by the findings of a recent meta-analysis by Naneva et al. (2020), which found wide variation in average acceptance levels across the included studies: the average acceptance of robots tended to be in the negative range in 42% of the 26 included studies. The authors identify several factors in study design that influence the level of acceptance. On the one hand, studies in which robots were either directly interacted with or not interacted with at all showed higher average acceptance than studies in which robots were presented indirectly (picture/video of a robot). On the other hand, robots were better accepted in studies set in the educational field than in studies in the health and care field or in studies in which the application field was not explained in detail. In contrast, age, gender, and publication year had no effect on the acceptance of social robots (Naneva et al. 2020).

2.3 Trust in robots

A second fundamental subjective prerequisite for the enjoyable, efficient, and safe use of and interaction with service robots is an appropriate level of trust in these robots. While trust has been researched in psychology for many decades in the context of interpersonal relationships (e.g., Rempel et al. 1985; Holmes and Rempel 1989), the area of automation trust, i.e., trust in automated technical systems, represents a comparatively new research direction. This research has been growing since the late 1980s, initially focusing on trust processes in the monitoring and operation of professional, automated industrial systems (e.g., Muir and Moray 1996; Lee and Moray 1992). Over the past two decades, the number of research papers on trust processes in automated vehicles (e.g., Hergeth et al. 2017; Kraus et al. 2019, 2020; Beggiato and Krems 2013, 2015) and robots (Miller et al. 2021; Babel et al. 2021, 2022a, b; Kraus et al. 2018) has increased. Fundamentally, at the psychological level, several layers of trust can be distinguished (e.g., Marsh and Dibben 2003). Here, a basic distinction needs to be made between the personality tendency to trust automated technology in general (propensity to automation trust) and a learned attitude with respect to a specific technical system (learned trust).

The general dispositional tendency to trust automated technology has been defined as an overarching individual predisposition to trust automated technology across different contexts, systems, and tasks (e.g., Hoff and Bashir 2015; Kraus 2020). It describes a user’s individual personality tendency to trust a broad set of automated technologies across a range of situations. This predisposition is hypothesized to arise from a combination of the individual user’s personality and the experiences they have with technology over the course of their learning history (e.g., Kraus 2020). It thus represents an individual psychological basis for the formation of learned trust in a specific, new technical system. In line with this, Miller et al. (2021) found propensity to trust to predict learned trust in the assistance robot TIAGo (PAL Robotics) in a laboratory study, supporting previous findings by Kraus et al. (2021) in the domain of automated driving. Additionally, in terms of personality variables, mainly a positive association between extraversion and trust in robots has been reported (e.g., Haring et al. 2013; Alarcon et al. 2021). From this, it can be concluded that when considering and optimizing trust processes in interaction or cooperation with robots, differences in the personality and experiences of users should also be taken into account.

Learned trust in automation, in turn, is commonly defined, following Lee and See (2004), as “an attitude that an agent will help an individual achieve a goal in a situation of uncertainty and vulnerability” (p. 51). In this respect, trust is a dynamic psychological attitude towards a specific technical (automated) system that develops in the course of getting to know and building a relationship with this system (e.g., Miller et al. 2021). The level of trust is influenced by available information—so-called trust cues (Thielmann and Hilbig 2015)—on the basis of which it is calibrated over time. Both information available before the actual interaction with the system and information gained during its use are considered in this process of trust calibration. The optimal result of such a process is a calibrated level of trust—a situation of adequate trust in which users trust a technical system to exactly the degree that corresponds to the system’s capabilities and reliability (e.g., Forster et al. 2018; Lee and See 2004).
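
As a toy illustration of this calibration process (our own sketch, not a model from the trust literature cited here), trust can be thought of as a scalar attitude that is nudged towards each new trust cue; with repeated interaction, it converges on the robot's actual reliability. All parameters below are assumptions chosen for illustration.

```python
"""Toy illustration of trust calibration (hypothetical, not a published
model). Trust is a scalar in [0, 1] that is nudged toward each new trust
cue, e.g., an observed success (1.0) or failure (0.0) of the robot."""
import random


def update_trust(trust: float, cue: float, learning_rate: float = 0.1) -> float:
    """Move the current trust level a small step toward the latest cue."""
    return trust + learning_rate * (cue - trust)


def simulate(true_reliability: float = 0.8, initial_trust: float = 0.5,
             n_interactions: int = 100, seed: int = 42) -> float:
    """Simulate repeated interactions with a robot of fixed reliability."""
    rng = random.Random(seed)
    trust = initial_trust
    for _ in range(n_interactions):
        cue = 1.0 if rng.random() < true_reliability else 0.0
        trust = update_trust(trust, cue)
    return trust


if __name__ == "__main__":
    final = simulate()
    print(f"final trust: {final:.2f} (true reliability: 0.80)")
    # A small gap indicates calibrated trust; trust well above 0.80
    # would indicate overtrust, trust well below it distrust.
```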

The psychological variable trust is particularly relevant for behavior in situations in which the trust-giving agent is exposed to a particular degree of uncertainty, risk, and vulnerability (e.g., Thielmann and Hilbig 2015). This applies to interaction or collaboration with novel robots in both private and public settings. The recent meta-analysis by Naneva et al. (2020) analyzed a total of 30 studies investigating trust in social robots and found wide variability in average trust. Beyond this, the authors report various factors on the side of the systems and the study setup that seem to affect the level of trust. The presented checklist aims at optimizing the calibration of trust in robots by stimulating consideration and informed design of robots’ appearance and interaction concepts, as well as the systematic communication of information about the robot to users (e.g., in the form of trainings, user manuals, or tutorials). On the one hand, this should promote the development of a sufficient degree of trust on the part of users, so that they want to use the robot or accept its task execution at all. On the other hand, this is also intended to prevent overtrust in the robot, which could lead to the robot being assigned tasks that it is not designed to perform or being used in situations that go beyond its scope of application (e.g., Parasuraman and Riley 1997). Similarly, overtrust could lead to not keeping a necessary safety distance or not sufficiently considering the needs of vulnerable groups (e.g., children, elderly people). All of these possible consequences of overtrust represent potential dangers of the use of robots in the public and private sector. Supporting a calibrated level of trust in robot design and in the design of human-robot interfaces is therefore likely to considerably foster a pleasant, safe, and stress-free interaction with the robot.

Based on these theoretical foundations and considerations, the presented checklist was developed with the goal of promoting acceptable and trustworthy interaction between humans and robots. The level of trustworthiness aimed at in the design process should reflect the actual capabilities and reliability of the robot—and not exceed them—to prevent overtrust and the associated, potentially harmful interaction decisions. Leveling out and calibrating the trustworthiness of robot design is an important responsibility of robot and HRI designers. The development and structure of the checklist are described in the following.

3 Development and structure of the checklist

3.1 Development process

The checklist was created in an iterative process in partnership with interdisciplinary experts, in which existing collections of criteria from different disciplines and technical domains, with different focuses, were integrated and expanded. A multi-stage procedure was used, which is outlined in the following (Fig. 1).

1. The development process started with a broad literature search on existing ethical, safety, privacy, and interaction guidelines for the use of robots in private and/or public spaces (e.g., European Parliament 2017; Gelin 2017; Kreis 2018; Salvini et al. 2010). The search platforms used were Google Scholar, Science Direct, and Scopus. Keywords included: robot requirements; Roboter Anforderungen; robot guidelines; human-machine interaction ethic*; Mensch-Roboter-Interaktion; Interaktion Mensch-Roboter; Sicherheit Anforderung; Serviceroboter Anforderungen; service robot ethic* & safety & guideline; collaborative robot guideline; personal robot data safety guideline; Datenschutz Richtlinie Roboter.

2. In addition, key factors that can influence acceptance of and trust towards robots were identified on the basis of the existing literature (e.g., Hancock et al. 2021; de Graaf and Allouch 2013).

3. The current state of the literature was then assessed and evaluated. It was found that there is additional potential to more strongly stress the individual perspective of robot users and of people in the environment of robots with regard to trust and acceptance in HRI design.

4. Based on the literature review, a first version of the checklist (design topics with associated questions and design recommendations) was created, summarizing the preliminary results. The identified questions and recommendations were grouped into categories. This list was further updated on the basis of additional literature.

5. The compiled results were further expanded and adapted through an expert survey and discussion with HRI experts (from both a research and a practice and application perspective). On this basis, subcategories were introduced into the structure of the checklist. The questionnaire included eleven open questions and mainly addressed the experts’ views on several topics in HRI (e.g., “In your experience: What general requirements must a cooperative robot fulfil?”). The questionnaire was sent to 12 experts; five completed questionnaires were returned by the expert groups (jointly completed answers). The questionnaire and the preliminary collection of criteria then served as a basis for discussion of further requirements and design recommendations.

6. On this basis, a second version of the checklist was developed, incorporating subcategories of design topics.

7. The design topics, questions, and design recommendations were then further developed in discussions with additional interdisciplinary experts (psychologists, computer scientists, engineers, and robot manufacturers) within the RobotKoop project. This enabled the integration of further practical feedback into the recommendations.

8. In addition, experts on ethics and data protection in the domain were consulted for feedback.

9. The interdisciplinary feedback was integrated into a third iteration of the checklist.

10. The resulting version was again discussed within the author team (especially with regard to the organization and naming of the subcategories).

11. This resulted in the current version of the checklist.

12. The current version of the checklist does not claim to be final or complete, but is to be further developed on the basis of feedback from the community.

Fig. 1 Flow diagram of the development process of the Trustworthy and Acceptable HRI Checklist (TA-HRI)

3.2 Included factors influencing robot trust and acceptance in the checklist

Trust and acceptance are important foundations of successful HRI and of the integration of robots into everyday life. Study results indicate that, among others, user characteristics (e.g., individual attitudes towards robots, expectations), environmental and task factors (e.g., team collaboration, task characteristics), and robot characteristics can influence trust towards robots (cf. Hancock et al. 2021; de Graaf and Allouch 2013). On the robot side, the meta-analysis of Hancock et al. (2021) identified especially robot performance (e.g., low error rate, high reliability) as well as properties of appearance and robot behavior (e.g., anthropomorphism, physical proximity) as relevant for trust. In this regard, transparency of the robot’s plans, processes, and actions is especially important for establishing realistic expectations towards it, which in turn builds an essential basis for calibrated trust (e.g., Kraus 2020; Kraus et al. 2019). In this sense, communication, bilateral understanding, and task coordination are essential for trust. In line with this, the acceptance of a robot can be promoted if its design leads it to be perceived as useful, adaptable, and controllable, as well as a sociable companion (de Graaf and Allouch 2013).

Against the background of the discussed state of research (see 2.2 and 2.3), the presented checklist integrates a large number of the aforementioned factors influencing the acceptance of and trust in robots into its design topics, questions, and recommendations. In particular, transparency, understandability, and a trustworthy design of both robot appearance and interaction are considered. Furthermore, in order to take the discussed individual differences between users into account, possibilities for customization and individualization are suggested throughout the checklist.

Additionally, characteristics of the situation in which the interaction between humans and robots takes place are commonly viewed as essential for the formation of trust (e.g., Lee and See 2004; Kraus 2020; Hancock et al. 2021) and acceptance (Abrams et al. 2021; Turja et al. 2020; de Graaf et al. 2019). For example, ethical and legal concerns regarding the use of a robot can negatively impact trust (Alaiad and Zhou 2014). Consequently, a robot design that promotes trust calibration and acceptance should also consider ethical, safety, and privacy aspects, as these seem to establish a framework in which trust and acceptance can prosper.

Therefore, in order to establish a functioning, efficient, and subjectively enjoyable integration of robots into existing social systems, adaptive, norm-congruent, and appropriate social behavior of robots is an essential design goal for fostering both trust in and acceptance of robots. Hence, in addition to design and interaction considerations, the presented checklist integrates legal and societal framework conditions, which are mainly based on previous work, as discussed in the following.

3.3 Included ethical, safety and legal requirements in the checklist

The introduction of artificial intelligence (AI) technologies into society poses potential risks to physical safety, data protection, human rights, and fundamental freedoms (Yeung 2018). For this reason, and as technological developments have progressed, a large number of ethical guidelines and lists of recommendations have been published in recent years. A prominent recent example are the ethics guidelines for trustworthy AI developed by an independent group of experts on behalf of the EU Commission (European Commission 2019). These guidelines describe seven ethical prerequisites that should be examined before a system enters the market: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination, and fairness, (6) societal and environmental well-being, and (7) accountability.

In addition, there are long-established guidelines regulating safety for people working with industrial robots. The Machinery Directive 2006/42/EC sets out basic safety requirements for machinery, such as maintaining a minimum distance from people, fitting guards, and providing an emergency stop switch. In Germany, these requirements have been transposed into national law by the Product Safety Act. For collaborative robot systems, the safety requirements of DIN ISO/TS 15066 (Deutsches Institut für Normung e. V. 2017) for collaborative industrial robots apply in particular. For personal assistance robots, DIN EN ISO 13482 (Deutsches Institut für Normung e. V. 2014) specifies requirements for safe design, protective measures, and user information.

Furthermore, the EU’s General Data Protection Regulation (GDPR) applies to robots that store the personal data of users. This means that individuals must give their consent to data processing and that this consent can be withdrawn at any time. It applies in particular to robots with sensors that process audiovisual data in order to interact with their environment. However, the GDPR does not yet contain any explicit specifications regarding robots, as the European Parliament pointed out in a resolution on civil law rules in the area of robotics. The resolution’s recommendations include data protection-friendly default settings and transparent control procedures for affected persons (European Parliament 2017).

The development of this checklist incorporates this and other work (see footnotes in the checklist).

3.4 Structure of the checklist

The checklist includes 60 design topics at different levels of HRI design (see Table 1 for the English version and Table 2 for the German version). A distinction is made between service robots in private households and robots that perform work in public spaces. Based on the preceding theoretical considerations, the design topics are assigned to four areas: (1) design, (2) interaction, (3) legal, and (4) societal environment, which are in turn subdivided into eight categories (Fig. 2). For each design topic, the checklist provides questions that favor acceptable and trustworthy HRI design. Based on each question, exemplary design recommendations are listed that can help optimize the acceptance and trustworthiness of robots.

Table 1 The Trustworthy and Acceptable HRI Checklist (TA-HRI)—English Version
Table 2 The Trustworthy and Acceptable HRI Checklist (TA-HRI)—German Version
Fig. 2 Structure of the Trustworthy and Acceptable HRI Checklist (TA-HRI): the checklist comprises 60 design topics with respective questions and design recommendations in four areas and eight categories
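
To make this structure concrete for tooling or review sessions, the following minimal sketch shows one possible way to represent a checklist entry programmatically. It is an illustration under our own assumptions; the field names and the sample entry are hypothetical and not taken from Tables 1 and 2.

```python
"""Illustrative sketch (hypothetical): one way to represent the TA-HRI
Checklist hierarchy (areas > categories > design topics), where each
topic carries a guiding question and exemplary recommendations."""
from dataclasses import dataclass, field


@dataclass
class DesignTopic:
    area: str            # one of: design, interaction, legal, societal environment
    category: str        # one of the eight subcategories
    question: str        # guiding question for the design discussion
    recommendations: list[str] = field(default_factory=list)
    applies_to: str = "both"   # "private", "public", or "both"


# Hypothetical sample entry illustrating the structure:
sample = DesignTopic(
    area="interaction",
    category="transparency",
    question="Does the robot communicate its current action and goal?",
    recommendations=[
        "Display the current task state on an accessible interface.",
        "Announce upcoming movements before executing them.",
    ],
)

if __name__ == "__main__":
    print(f"[{sample.area}/{sample.category}] {sample.question}")
    for r in sample.recommendations:
        print(f"  - {r}")
```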

The area of ‘design’ includes categories that relate to the design and development of the robot. A trustworthy appearance of the robot, as well as a reasonable and understandable degree of autonomy, should be designed in a way that does not cause uncertainty for the user (Kreis 2018; Rosenstrauch and Kruger 2017; Salvini et al. 2010) and ultimately fosters a calibrated level of trust.

The area of ‘interaction’ describes processes that relate to the direct exchange between humans and robots. During interaction, users gain experience in dealing with the robot and, as a result, calibrate their trust in it. For this, on the one hand, the robot must behave in a trustworthy manner during the interaction and communicate its actions and states transparently (Gelin 2017; Hancock et al. 2011). On the other hand, expectations towards appropriate social behavior of the robot must be fulfilled.

In the areas of ‘legal and societal framework conditions’, a distinction is made between three categories: (1) perceivable data protection & protection of privacy, (2) security & subjective feeling of safety, and (3) subjectively normative robot behavior. The questions in the categories on data protection and security refer, among other things, to the legally regulated demands placed on the technical system in order to protect users (Jacobs 2013; Müller 2014). The questions on the societal framework conditions refer to subjective compliance with normative and moral principles (e.g., European Parliament 2017; Gelin 2017).

4 Discussion

4.1 Application, considerations and scope

This work presented the TA-HRI Checklist, which incorporates design topics to support trustworthy and acceptable HRI design. The design topics, questions, and design recommendations covered in the checklist pursue the goal of stimulating consideration of potential design approaches that can contribute positively to the trustworthiness and acceptance of robots. The questions and recommendations address different levels of design: in addition to physical appearance and interaction, the integration of robots into the legal and social context is also addressed.

The checklist can be used in robot development as a heuristic framework for optimizing interaction design at an early stage. The questions on the respective design topics can serve as a starting point for discussing and evaluating the design of the HRI in an individual design or research project. They are intended to help examine design ideas with regard to possible trust- or acceptance-promoting (or calibrating) aspects and, where necessary, contribute to optimizing the design in this regard. The evaluation should always be carried out in consideration of the specific system, its task, and the users interacting with it. In this sense, successful implementation of the design topics should be evaluated in the specific application context.

The listed recommendations are not to be understood as indisputable design rules. Some of them may not be appropriate for all contexts, robot types, tasks, and user groups. In this sense, the appropriateness and expected gain of each recommendation should be evaluated in the context of the intended robot task and operational context. In practice, it may also be that the design criterion of trustworthiness or acceptability must be subordinated to other criteria (such as effectiveness or efficiency of task execution). This may be the case, for example, in security and emergency scenarios, where the effectiveness of the interaction (e.g., evacuation of a building) may be considered more important than its acceptability (Babel et al. 2022b). Likewise, increasing trust to a maximum that exceeds the actual capabilities and reliability of a robot (overtrust) can lead to dangerous interaction decisions (misuse). Therefore, the ultimate design goal in terms of trust should always be a calibrated rather than a maximum level of trust.

Furthermore, it can sometimes be useful not to implement individual recommendations as mandatory features, but to offer them to the user as optional, individually customizable features. In this sense, the checklist also offers a set of suggestions for implementing personalization in the interaction with robots (e.g., recommendations regarding transparency).

The checklist claims to represent the requirements for trustworthy HRI in neither an all-encompassing nor a final way; rather, it is intended to form the basis of an interdisciplinary, iterative development process. For this reason, feedback and comments on the checklist are explicitly welcome and serve as a basis for continuously updating and enhancing the checklist in an ongoing discussion within the community.

4.2 Strengths, limitations and further development

The presented checklist covers several areas of factors affecting trust in and acceptance of social robots and integrates them with recommendations from a safety and ethics viewpoint. It is the result of an iterative process combining several methods—among others, literature search and discussions with interdisciplinary experts from the fields of engineering, computer science, psychology, and ethics—to arrive at the presented design topics. It is intended as a practical tool for both practitioners and researchers to support the consideration of acceptance and trust in a human-centered robot and HRI design process that emphasizes the perspective of users. In its current form, the checklist aims at providing design impulses rather than a fixed list of rules.

At this stage of development, the checklist still has some limitations, which might be addressed in future iterations and further research. While existing models from the domains of acceptance and trust were used as a basis for the checklist, no explicit integrative theoretical model underlay its development. Based on the areas and categories contained in the checklist, such a model might be derived and empirically tested. This, in turn, would be a valuable starting point for deriving metrics and systematically validating the checklist. Derived metrics could extend the scope of the checklist towards evaluating a specific robot design against its different areas and categories. With this, the checklist would constitute a helpful evaluation tool for trustworthy and acceptable robot design in usability studies and A/B testing. A further limitation is the mainly European-centered perspective of the checklist, which might be extended in future iterations to include additional perspectives. Also, with ongoing technological progress, among others in the field of AI, new technological solutions for the covered challenges of acceptable and trustworthy robot design might become available. Therefore, continuous updating of the checklist to integrate technological developments seems necessary. In the same manner, as robots become more and more present in different domains of daily life over the coming years, legislation and social norms will be adapted and refined, which should also be considered in future iterations of the checklist.
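
As a sketch of how such derived metrics might look in an A/B test, the following snippet aggregates hypothetical per-topic ratings into per-category scores for two design variants. The category names, rating scale, and aggregation by simple means are all assumptions for illustration, not validated metrics from the checklist.

```python
"""Hypothetical sketch of checklist-derived metrics (not part of the
published checklist): each design topic is rated for a given robot
design, and ratings are aggregated per category so that two design
variants can be compared in an A/B test."""
from collections import defaultdict
from statistics import mean


def category_scores(ratings: list[tuple[str, int]]) -> dict[str, float]:
    """Aggregate per-topic (category, 1-5 rating) pairs into category means."""
    by_category: dict[str, list[int]] = defaultdict(list)
    for category, rating in ratings:
        by_category[category].append(rating)
    return {cat: mean(vals) for cat, vals in by_category.items()}


# Hypothetical ratings of two design variants on a 1-5 scale:
variant_a = [("transparency", 4), ("transparency", 5), ("safety", 3)]
variant_b = [("transparency", 3), ("transparency", 3), ("safety", 4)]

if __name__ == "__main__":
    scores_a, scores_b = category_scores(variant_a), category_scores(variant_b)
    for cat in sorted(set(scores_a) | set(scores_b)):
        print(f"{cat}: A={scores_a.get(cat, float('nan')):.2f} "
              f"B={scores_b.get(cat, float('nan')):.2f}")
```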

To conclude, the checklist is intended as a practical tool to enhance the consideration of trust and acceptance in HRI design in both practice and research. The authors look forward to feedback and suggestions from the community for the continuous further development of the checklist.