The trustworthy and acceptable HRI checklist (TA-HRI): questions and design recommendations to support a trustworthy and acceptable design of human-robot interaction

This contribution to the journal Gruppe. Interaktion. Organisation. (GIO) presents a checklist of questions and design recommendations for designing acceptable and trustworthy human-robot interaction (HRI). In order to extend the application scope of robots towards more complex contexts in the public domain and in private households, robots have to fulfill requirements regarding social interaction between humans and robots in addition to safety and efficiency. In particular, this results in recommendations for the design of the appearance, behavior, and interaction strategies of robots that can contribute to acceptance and appropriate trust. The presented checklist was derived from existing guidelines of associated fields of application, the current state of research on HRI, and the results of the BMBF-funded project RobotKoop. The trustworthy and acceptable HRI checklist (TA-HRI) contains 60 design topics with questions and design recommendations for the development and design of acceptable and trustworthy robots. The TA-HRI Checklist provides a basis for discussion of the design of service robots for use in public and private environments and will be continuously refined based on feedback from the community.


Motivation
In recent years, robots have come to be used in more and more everyday as well as professional contexts, with service robots increasingly performing their tasks in direct interaction with humans. Here, service robots are understood as robots that perform useful tasks for humans in a partially or fully autonomous manner, excluding industrial applications (International Organization for Standardization 2021). The area of application of such service robots thus includes both public and private spaces. In the public sector, for example, service robots are already used as information assistants, and more complex tasks such as autonomous cleaning or deliveries are already feasible. In the private context, service robots already take over simple cleaning tasks today. In the near future, it is conceivable that robots will be used as personal assistants that help people with a wide range of everyday tasks. In this context, robots will take on new roles in the social system (Enz et al. 2011). In the course of this development, requirements for the design of robots' interaction with humans in their task environment are changing (e.g., Bartneck and Forlizzi 2004). This concerns not only the interaction with people who own the robots or who directly perform a task together with them, but increasingly also people who happen to be in the robots' task environment (e.g., passers-by in public spaces). In this sense, service robots are turning more and more into social robots that can be considered part of a mixed society of humans and robots, in which robots, among other things, interact and communicate socially with humans, thereby learning and acquiring experiences (e.g., Hegel et al. 2009; Fong et al. 2003). Accordingly, a service robot that performs its task in the context of social interaction can be described as a social (service) robot.
This work refers to all types of social service robots (hereafter also referred to simplified as "service robots" or "robots").
In the context of social interaction (both in the private and in the public domain), the appearance and behavior of service robots, as well as the interaction concepts they are equipped with, must fulfil several basic requirements. On the one hand, they must allow efficient task performance; on the other hand, they must be suitable for the social structure in which the robot operates and, beyond this, be acceptable on an individual level. On this basis, the BMBF-funded RobotKoop project (2018–2021) pursued the vision of cooperative, intelligent service robots which operate in dynamic social settings, act in a trustworthy and acceptable manner, and thereby negotiate and coordinate their actions with the people around them. One major step towards this vision is a context-sensitive, cooperative human-robot interaction strategy. The advantages of robot use in areas such as household, care, communication and service can only fully play out if the employed robots are equipped with interaction strategies that are context- and need-sensitive. These strategies should allow situation- and goal-oriented communication with the environment in which the robot is used. Such human-robot interaction (HRI) needs to be perceived as transparent, acceptable and trustworthy by the users and the persons in the environment of task execution. In particular, the promotion of a minimum level of acceptance and an appropriate level of trust are fundamental prerequisites for an efficient, safe and pleasant togetherness of humans and robots.
Against this background, in the following a checklist is presented that is intended to support a human-centered design of robots, their behavior and communication in both science and practice. By providing 60 questions on design topics related to acceptance and trust, and by making design recommendations based on these questions, the checklist is intended to contribute to an increase in the subjective trustworthiness and acceptance of HRI design. The questions and design recommendations in the checklist are intended to serve as an orientation aid, source of inspiration, and working tool for practitioners (e.g., in engineering, computer science, and product development) and researchers, and provide a starting point and a basis for discussion to enhance human-centered HRI design in specific HRI projects.

Theoretical background
To answer the question of how robots and their interaction with humans can be optimally designed with respect to acceptance and trust, it is not feasible to formulate a generalizable strategy, due to the wide range of applications, tasks, and design possibilities of robots. Regarding the current state of HRI research, many specific design decisions are faced with partly contradictory results from individual studies, as well as various separate collections of guidelines, recommendations, and requirements from different application areas.
The aim of the presented checklist is to integrate and complement the existing results and approaches with the findings from studies and expert discussions of the RobotKoop project. Through its application in science and practice, the investigation and practical implementation of trustworthy and acceptable HRI might be advanced and promoted.
In the following, a theoretical introduction to the underlying concepts of cooperation, trust, and acceptance is provided. Furthermore, a preliminary overview of existing collections of recommendations and requirements from related fields is given, and their relevance for the present work is discussed.

Cooperation between humans and robots
In recent years, the distribution of tasks between humans and robots in the socio-technical system of joint task execution has been changing from a situation in which robots and humans mainly co-exist (the tasks of robots and humans being independent of each other) to the possibility of cooperative teamwork between humans and robots. In this respect, the concept of human-technology cooperation describes a type of working together between humans and technical systems in which both parties pursue a common goal as team players, coordinating, aligning, and complementing their task performance with each other (e.g., Hoc 2001; Christoffersen and Woods 2002; Klein et al. 2004).
In this context, different types of cooperative interaction between humans and robots can be distinguished; for example, two types are differentiated in the work of Onnasch et al. (2016). In the first type (referred to as human-robot cooperation in Onnasch's work), humans and robots work together towards a common goal, whereby there is a clear division of tasks between humans and robots and their actions are not directly dependent on each other. The second type of joint working is referred to as human-robot collaboration, which goes beyond cooperation by describing a working relationship in which subgoals are also worked on together simultaneously (and direct physical contact may occur under certain circumstances). In these collaborative scenarios, the roles of humans and robots change. Humans and robots enter into a social exchange with each other (naturally, on the side of the robot, within its restrictions with regard to subjectivity and intentionality) and coordinate dynamic solutions to problems, taking into account their respective capabilities. In the following, the term human-robot cooperation is used to refer to both types, since the two situations are difficult to distinguish in practice, merge into each other, or the distinction sometimes does not seem meaningful (Onnasch describes cooperation as a type of collaboration). Ultimately, both types of working together require similar basic prerequisites with regard to the design of robot behavior and the user interface.
Compared to mere coexistence, this more complex and communication-intensive cooperation between humans and robots creates new requirements for the design of robots and HRI (e.g., Walch et al. 2017; Babel et al. 2021). First, in order to ensure successful human-robot cooperation, the interface should foster a shared situational awareness between humans and robots (e.g., human-robot awareness; Yanco and Drury 2004; Drury et al. 2004). In this sense, each partner should be well informed about the current status of subtasks, as well as the current activities and plans of the other. In this way, a certain predictability of the actions of the robotic or human counterpart can be established.
Furthermore, some degree of controllability of the actions of the cooperation partner seems necessary (e.g., Christoffersen and Woods 2002). For example, human users should be able to dynamically adapt the task scope of the robot or, to a certain extent, to control the way in which the robot performs tasks. Similarly, in many scenarios it seems desirable that the robot informs the human partner about upcoming actions or a need for support. In many application contexts, it might be necessary for the robot to prevent the human from carrying out potentially risky activities or to point out potential errors. These and other scenarios that can occur in cooperative collaboration between humans and robots require a certain degree of acceptance of, and trust in, the robotic team partner or robotic household helper. Without this, it could be unpleasant for the user to grant the robot competencies and decision-making scope of its own. This, in turn, could have negative psychological consequences, such as anxiety or stress, which, both in work and private contexts, could have even more serious long-term consequences for the user.
In addition to cooperative and collaborative teamwork, a mere co-existence of humans and robots also requires coordination of their respective independent goals and interests. Consequently, in this case too, robots should be equipped with a cooperative interaction interface that enables effective and acceptable coordination between humans and robots.
Against this background, this work aims to provide a collection of design topics for informing the design of the appearance and interface of robots in both the public and private domains. This is intended to support optimized cooperation between humans and robots and support the formation of an appropriate level of acceptance and trust on the part of the users or people present in the area of the robots' tasks.

Acceptance of robots
The understanding and prediction of the acceptance of robots and robot behavior is a central research topic in HRI (de Graaf and Allouch 2013), as acceptance is a basic prerequisite for the use of automated technology (see, e.g., the Technology Acceptance Model, TAM; Ghazizadeh et al. 2012). A minimum level of acceptance thus constitutes a subjective prerequisite for the use of a technical system such as a robot. Due to the large number of publications on the topic of acceptance, there is currently a wide variety of definitions of acceptance, some of which contradict each other (e.g., Arndt 2011). This is summarized by Königstorfer and Gröppel-Klein (2009): "In acceptance [...] research, there is now a consensus that acceptance moves along a continuum that ranges from attitude [...] to action (purchase) and regular use of technological innovations" (p. 849, German original translated by the authors). In this sense, acceptance is defined in this paper as the intention to use: a subjective evaluation that influences the extent to which a robot is used (e.g., Naneva et al. 2020).
Since a minimum level of acceptance is a necessary prerequisite for the use of robots, the factors influencing users' acceptance are of particular interest for HRI design. In line with the general framework of the TAM, Ghazizadeh et al. (2012) identified perceived usefulness and ease of use as subjective factors influencing robot acceptance. In addition, robot-specific models for predicting acceptance include emotional processes on the part of users. For example, the USUS Evaluation Framework postulates that robot acceptance is significantly influenced by users' attitudes towards robots and their emotional attachment to robots, in addition to expectations regarding robot performance and efficiency (Weiss et al. 2009). Studies have also identified additional robot characteristics that influence acceptance. For example, numerous studies have shown that transparency increases robot acceptance (Alonso and de la Puente 2018; Cramer et al. 2008; Ososky et al. 2014). Similarly, appropriate social behavior of the robot has been found to promote acceptance (politeness, social distance, communication behavior; de Graaf et al. 2015; Babel et al. 2021, 2022a). Furthermore, some studies have shown that the degree of human-likeness affects robot acceptance; in this regard, some studies report higher acceptance of humanlike robot design (e.g., Barnes et al. 2017; Eyssel et al. 2012; Louie et al. 2014).
However, it can be noted that the influence of specific design and interaction features can vary depending on the studied robot types, tasks, subject groups, etc. This is supported by the findings of a recently published meta-analysis by Naneva et al. (2020). Overall, there was wide variation in average acceptance levels across the included studies; the average acceptance of robots tended to be in the negative range in 42% of the 26 included studies. The authors identify several factors in the study design that influence the level of acceptance. First, studies in which robots were either directly interacted with or not interacted with at all showed a higher average acceptance than studies in which robots were presented indirectly (picture/video of a robot). Second, robots were better accepted in studies with a setting in the educational field than in studies in the health and care field or in studies in which the application field was not explained in detail. In contrast, age, gender, and publication year had no effect on the acceptance of social robots (Naneva et al. 2020).

Trust in robots
A second fundamental subjective prerequisite for an enjoyable, efficient and safe use of and interaction with service robots is an appropriate level of trust in these robots. While trust has been researched in psychology for many decades in the context of interpersonal relationships (e.g., Rempel et al. 1985; Holmes and Rempel 1989), the area of automation trust, i.e., trust in automated technical systems, represents a comparatively new research direction. Since the late 1980s, this research has been increasing; initially, the focus was on trust processes in the monitoring and operation of professional, automated industrial systems (e.g., Muir and Moray 1996; Lee and Moray 1992). Over the past two decades, the number of research papers on trust processes in automated vehicles (e.g., Hergeth et al. 2017; Kraus et al. 2019, 2020; Krems 2013, 2015) and robots (e.g., Babel et al. 2021, 2022a; Kraus et al. 2018) has increased. Fundamentally, at the psychological level, one can distinguish between several layers of trust (e.g., Marsh and Dibben 2003). Here, a basic distinction needs to be made between the personality tendency to trust automated technology in general (propensity to automation trust) and a learned attitude with respect to a specific technical system (learned trust).
The general dispositional tendency to trust automated technology has been defined as an overarching individual predisposition to trust automated technology across different contexts, systems, and tasks (e.g., Hoff and Bashir 2015; Kraus 2020). It describes a user's individual personality tendency to trust a broad set of automated technology across a range of situations. It is hypothesized that this individual predisposition arises from a combination of the individual user's personality and the experiences they have with technology over the course of their learning history (e.g., Kraus 2020). This predisposition thus represents an individual psychological basis for the formation of learned trust in a specific, new technical system. In line with this, Miller et al. (2021) found the propensity to trust to predict learned trust in the assistance robot Tiago (PAL Robotics) in a laboratory study, which supports previous findings in the domain of automated driving. Additionally, in terms of personality variables, mainly a positive association between extraversion and trust in robots has been reported (e.g., Haring et al. 2013; Alarcon et al. 2021). From this, it can be concluded that when considering and optimizing trust processes in interaction or cooperation with robots, differences in the personality and experiences of users should also be taken into account.
Furthermore, learned trust in automation is commonly defined, following Lee and See (2004), as "an attitude that an agent will help an individual achieve a goal in a situation of uncertainty and vulnerability" (p. 51). In this respect, trust is a dynamic psychological attitude related to a specific technical (automated) system that develops in the course of getting to know and building a relationship with this system (e.g., Miller et al. 2021). The level of trust is influenced by available information, so-called trust cues (Thielmann and Hilbig 2015), on the basis of which it is calibrated over time. Both information available before the actual interaction with the system and information available during its use are considered in this process of trust calibration. The optimal result of such a calibration process is a calibrated level of trust: a situation of adequate trust, characterized by the fact that users trust a technical system to exactly the degree that corresponds to the capabilities and reliability of the system (e.g., Forster et al. 2018; Lee and See 2004).
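The notion of trust calibration can be illustrated with a small, purely hypothetical sketch (not part of the checklist or of the cited models): if one assumes, for illustration, that a user's trust and a system's actual reliability can be placed on a common 0–1 scale, comparing the two yields the calibrated, overtrust, and undertrust regions discussed in the trust literature. The function name, the scale, and the tolerance band are illustrative assumptions.

```python
def classify_trust(trust: float, reliability: float, tolerance: float = 0.1) -> str:
    """Classify a user's trust level relative to a system's actual reliability.

    Both values are assumed to lie on a common 0..1 scale; the scale and the
    tolerance band are illustrative assumptions, not part of the TA-HRI checklist.
    """
    if trust > reliability + tolerance:
        return "overtrust"    # risk: over-reliance, use beyond the robot's scope
    if trust < reliability - tolerance:
        return "undertrust"   # risk: disuse, rejection of a capable system
    return "calibrated"       # trust matches the system's capabilities

# Example: a highly reliable robot that the user nevertheless distrusts
print(classify_trust(trust=0.3, reliability=0.9))  # undertrust
```

The sketch makes the asymmetric risks explicit: overtrust leads to misuse (e.g., assigning tasks outside the robot's scope), undertrust to disuse of a capable system.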
The psychological variable of trust has particular relevance for behavior in situations in which the trust-giving agent is exposed to a particular degree of uncertainty, risk, and vulnerability (e.g., Thielmann and Hilbig 2015). This applies to interaction or collaboration with novel robots in both private and public settings. In the recent meta-analysis by Naneva et al. (2020), a total of 30 studies that investigated trust in social robots were analyzed. Overall, a wide variability of average trust was found. Beyond this, the authors report various factors on the side of the systems and the study setup that seem to affect the level of trust. The presented checklist aims at optimizing the calibration of trust in robots by stimulating consideration and an informed design of robots' appearance and interaction concepts, as well as a systematic communication of information about the robot to users (e.g., in the form of trainings, user manuals, or tutorials). On the one hand, this should promote the development of a sufficient degree of trust on the part of the user, so that he or she wants to use the robot or accepts its task execution at all. On the other hand, this is also intended to prevent overtrust in the robot, which could lead to the robot being assigned tasks that it is not designed to perform or being used in situations that go beyond its scope of application (e.g., Parasuraman and Riley 1997). Similarly, overtrust could lead to users not keeping a necessary safety distance or not sufficiently considering the needs of vulnerable groups of people (e.g., children, elderly people). All these possible consequences of overtrust represent potential dangers of the use of robots in the public and private sector. Supporting a calibrated level of trust in robot design and in the design of human-robot interfaces is therefore likely to considerably foster a pleasant, safe, and stress-free interaction with the robot.
Based on these theoretical foundations and considerations, the presented checklist was developed with the goal of promoting an acceptable and trustworthy interaction between humans and robots. The level of trustworthiness aimed at in the design process should reflect the actual capabilities and reliability of the robot (and not exceed them) in order to prevent overtrust and the associated, potentially harmful interaction decisions. Leveling out and calibrating the trustworthiness of robot design is an important responsibility of robot and HRI designers. The development and structure of the checklist are described in the following.

Development process
The checklist was created in an iterative process in partnership with interdisciplinary experts. Existing collections of criteria from different disciplines and technical domains, with different focuses, were to be integrated and expanded. A multi-stage procedure was used, which is outlined in the following (Fig. 1):

1. First, existing guidelines, recommendations, and requirement collections from related fields of application were compiled.
2. In addition, key factors that can influence acceptance of and trust towards robots were identified on the basis of existing literature (e.g., Hancock et al. 2021; de Graaf and Allouch 2013).
3. The current state of the literature was then assessed and evaluated. It was found that there is additional potential to more strongly stress the individual perspective of robot users and of people in the environment of robots in terms of trust and acceptance in HRI design.
4. Based on the literature review, a first version of the checklist (design topics with associated questions and design recommendations) was created, which summarized the preliminary results. The identified questions and recommendations were grouped into categories. This list was further updated on the basis of additional literature.
5. The compiled results were further expanded and adapted through an expert survey and discussion with HRI experts (from both a research and a practice and application perspective). On this basis, subcategories were introduced into the structure of the checklist. The questionnaire included eleven open questions and mainly referred to the experts' views on several topics in HRI (e.g., "In your experience: What general requirements must a cooperative robot fulfil?"). The questionnaire was sent to 12 experts. Five completed questionnaires were returned by the expert groups (jointly completed answers). The questionnaire and the preliminary criteria collection then served as a basis for discussion of further requirements and design recommendations.
6. On this basis, a second version of the checklist was developed, incorporating subcategories of design topics.
7. The design topics, questions and design recommendations of the checklist were then further developed in discussions with additional interdisciplinary experts (psychologists, computer scientists, engineers and robot manufacturers) within the RobotKoop project. This enabled the integration of further practical feedback into the recommendations.
8. In addition, experts on the subject of ethics and data protection in the domain were consulted to provide feedback.
9. The interdisciplinary feedback was integrated into a third iteration of the checklist.
10. The resulting version was again discussed within the author team, especially with regard to the organization and naming of the subcategories.
11. This resulted in the current version of the checklist.
12. The current version of the checklist does not claim to be final or complete, but is to be further developed on the basis of feedback from the community.

Included factors influencing robot trust and acceptance in the checklist
Trust and acceptance are important foundations of successful HRI and of the integration of robots into everyday life. Study results indicate that, among others, user characteristics (e.g., individual attitudes towards robots, expectations), environmental and task factors (e.g., team collaboration, task characteristics), and robot characteristics can influence trust towards robots (cf. Hancock et al. 2021; de Graaf and Allouch 2013). On the robot side, Hancock et al. (2021) identified in a meta-analysis especially robot performance (e.g., low error rate, high reliability) as well as properties of appearance and robot behavior (e.g., anthropomorphism, physical proximity) as relevant for trust. In this regard, transparency of the robot's plans, processes and actions is especially important for establishing realistic expectations towards it, which in turn build an essential basis for calibrated trust (e.g., Kraus 2020; Kraus et al. 2019). In this sense, communication, bilateral understanding and task coordination are essential for trust. In line with this, the acceptance of a robot can be promoted if, by its design, it is perceived as useful, adaptable and controllable, as well as a sociable companion (de Graaf and Allouch 2013). Against the background of the discussed state of research (see 2.2 and 2.3), the presented checklist integrates a large number of the aforementioned factors influencing the acceptance of and trust in robots in the entailed design topics, questions and recommendations. In particular, transparency, understandability, and a trustworthy design of both robot appearance and interaction are considered. Furthermore, in order to account for the discussed individual differences between users, possibilities for customization and individualization are suggested throughout the checklist.
Additionally, characteristics of the situation, in which the interaction between humans and robots takes place, are commonly viewed as essential for the formation of trust (e.g., Lee and See 2004;Kraus 2020;Hancock et al. 2021) and acceptance (Abrams et al. 2021;Turja et al. 2020;de Graaf et al. 2019). For example, ethical and legal concerns regarding the use of a robot can negatively impact trust (Alaiad and Zhou 2014). Consequently, a design of robots that promotes trust calibration and acceptance should also consider ethical, safety, and privacy aspects as these seem to establish a framework in which trust and acceptance can prosper.
Thus, in order to establish a functioning, efficient and, from a subjective viewpoint, enjoyable integration of robots into existing social systems, adaptive, norm-congruent and appropriate social behavior of robots is an essential design goal for fostering both trust in and acceptance of robots. Therefore, in addition to design and interaction considerations, the presented checklist integrates legal and societal framework conditions, which are mainly based on previous work, as discussed in the following.

Included ethical, safety and legal requirements in the checklist
The introduction of artificial intelligence (AI) technologies into society poses potential risks to physical safety, data protection, human rights, and fundamental freedoms (Yeung 2018). For this reason, and as technological developments have progressed, a large number of ethical guidelines and recommendation lists have been published in recent years. A recent prominent example is the ethics guidelines for trustworthy AI, which were developed by an independent group of experts on behalf of the EU Commission (European Commission 2019). These guidelines describe seven ethical prerequisites which should be examined before a system enters the market: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination, and fairness, (6) societal and environmental well-being, and (7) accountability.
In addition, there are long-established guidelines regulating safety for people working with industrial robots. The Machinery Directive 2006/42/EC sets out basic safety requirements for machinery, such as maintaining a minimum distance from people, fitting guards, and providing an emergency stop switch. In Germany, these requirements have been transposed into national law by the Product Safety Act. For collaborative robot systems, the safety requirements of the DIN ISO/TS 15066 standard for industrial robots (Deutsches Institut für Normung e. V. 2017) apply in particular. For personal assistance robots, DIN EN ISO 13482 (Deutsches Institut für Normung e. V. 2014) specifies requirements for safe design, protective measures, and user information.
Furthermore, the EU's General Data Protection Regulation (GDPR) applies to robots that store users' personal data. This means that individuals must give their consent to data processing and that this consent can be withdrawn at any time. This applies in particular to robots with sensors that process audiovisual data in order to interact with their environment. However, the GDPR does not yet contain any explicit specifications regarding robots, which was pointed out by the EU Parliament in a resolution on civil law rules on robotics. The resolution's recommendations include data-protection-friendly default settings and transparent control procedures for affected persons (European Parliament 2017).
The development of this checklist incorporates this and other work (see footnotes in the checklist).

Structure of the checklist
The checklist includes 60 design topics at different levels of HRI design (see Table 1 for the English version and Table 2 for the German version). A distinction is made between service robots in private households and robots that perform work in public spaces. Based on the preceding theoretical considerations, the design topics are assigned to four areas: (1) design, (2) interaction, (3) legal framework conditions, and (4) societal framework conditions, which are in turn subdivided into eight categories (Fig. 2). For each design topic, the checklist provides questions to favor an acceptable and trustworthy HRI design. Based on each question, exemplary design recommendations are listed that can help to optimize the acceptance and trustworthiness of robots.
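The hierarchical organization described above (areas subdivided into categories, which contain design topics with questions and recommendations) could, for instance, be mapped onto a simple data structure if one wished to integrate the checklist into a review or evaluation tool. The following sketch is an illustrative assumption: the class and field names, and the example content, are hypothetical and not taken from the checklist itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignTopic:
    """One of the 60 design topics: a guiding question plus example recommendations."""
    name: str
    question: str
    recommendations: List[str] = field(default_factory=list)

@dataclass
class Category:
    """One of the eight categories grouping related design topics."""
    name: str
    topics: List[DesignTopic] = field(default_factory=list)

@dataclass
class Area:
    """One of the four areas: design, interaction, legal, societal framework conditions."""
    name: str
    categories: List[Category] = field(default_factory=list)

# Illustrative fragment (content paraphrased, not quoted from the checklist)
checklist = [
    Area("design", [
        Category("robot appearance", [
            DesignTopic(
                name="trustworthy appearance",
                question="Does the robot's appearance avoid causing uncertainty?",
                recommendations=["Match the appearance to the robot's actual capabilities."],
            ),
        ]),
    ]),
]
print(len(checklist[0].categories[0].topics))  # 1
```

Such a representation would let a project team iterate over all topics, record per-topic evaluation notes, and filter by area or category during a design review.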
The area of 'design' includes categories that relate to the design and development of the robot. A trustworthy robot appearance, as well as a reasonable and understandable degree of autonomy, should be designed in a way that does not cause uncertainty for the user (Kreis 2018; Rosenstrauch and Kruger 2017; Salvini et al. 2010) and that ultimately fosters a calibrated level of trust.
The area of 'interaction' describes processes that relate to the direct exchange between humans and robots. During interaction, users gain experience in dealing with the robot and, as a result, calibrate their trust in it. For this, on the one hand, the robot must act in a trustworthy manner and communicate its actions and states transparently (Gelin 2017; Hancock et al. 2011). On the other hand, expectations towards an appropriate social behavior of the robot must be fulfilled.
In the areas of 'legal and societal framework conditions', a distinction is made between the following three categories: 1. perceivable data protection & protection of privacy, 2. security & subjective feeling of safety, and 3. subjectively normative robot behavior. The questions of the categories regarding data protection and security refer, among other things, to the legally regulated demands placed on the technical system in order to protect the users (Jacobs 2013; Müller 2014). The questions on the societal framework conditions refer to the subjective compliance with normative and moral principles (e.g., European Parliament 2017; Gelin 2017).

Application, considerations and scope
This work presented the TA-HRI Checklist, which comprises design topics to support trustworthy and acceptable HRI design. The design topics, questions, and design recommendations covered in the TA-HRI Checklist pursue the goal of stimulating a consideration of potential design approaches that can contribute positively to the trustworthiness and acceptance of robots. The questions and recommendations address different levels of design aspects: in addition to physical appearance and interaction, the integration of robots into their legal and social context is addressed.
The checklist can be used in robot development as a heuristic framework for optimizing the interaction design at an early stage. The questions on the respective design topics can serve as a starting point for discussions and for the evaluation of the HRI design in an individual design or research project. They are intended to help examine design ideas with regard to possible trust- or acceptance-promoting (or calibrating) aspects and, if necessary, to contribute to optimizing the design in this regard. The evaluation should always be carried out under consideration of the system, its specific task, and the users interacting with it. In this sense, the successful implementation of the design topics should be evaluated in the specific application context.
The listed recommendations are not to be understood as indisputable design rules. Some of them may not be appropriate for all contexts, robot types, tasks, and user groups. In this sense, the appropriateness and expected gain of each recommendation should be evaluated in the context of the intended robot task and operational context. In practice, it may well be that the design criterion of trustworthiness or acceptability must be subordinated to other criteria (such as effectiveness or efficiency of task execution). This may be the case, for example, in security and emergency scenarios, where the effectiveness of the interaction (e.g., evacuation of a building) may be considered more important than its acceptability (Babel et al. 2022b). Also, increasing trust to a maximum that exceeds the actual capabilities and reliability of a robot (overtrust) can lead to dangerous interaction decisions (misuse). Therefore, the ultimate design goal in terms of trust should always be a calibrated rather than a maximum level of trust.
Furthermore, it can sometimes be useful not to implement individual recommendations as mandatory features, but to offer them to the user as optional, individually customizable features. In this sense, the checklist also offers a set of suggestions for implementing personalization in the interaction with robots (e.g., the recommendations regarding transparency).
The checklist does not claim to represent the requirements for trustworthy HRI in an all-encompassing or final way; rather, it is intended to form the basis for an interdisciplinary, iterative development process. For this reason, feedback and comments on the checklist are explicitly welcome and serve as a basis for continuously updating and enhancing the checklist in an ongoing discussion within the community.

Strengths, limitations and further development
The presented checklist covers several areas of factors affecting trust in and acceptance of social robots and integrates them with recommendations from a safety and an ethical viewpoint. It is the result of an iterative process combining several methods, among them a literature search and discussions with interdisciplinary experts from the fields of engineering, computer science, psychology, and ethics, to arrive at the presented design topics. It is intended as a practical tool for both practitioners and researchers to support the consideration of acceptance and trust in a human-centered robot and HRI design process that emphasizes the perspective of users. In its current form, the checklist aims at providing design impulses rather than a fixed list of rules.
At this stage of development, the checklist still has some limitations, which might be addressed in future iteration steps and further research. While existing models from the domains of acceptance and trust were used as a basis for the checklist, no explicit integrative theoretical model underlay the checklist's development. Based on the areas and categories contained in the checklist, such a model might be derived and empirically tested. This in turn would be a valuable starting point for the derivation of metrics and a systematic validation of the checklist. Derived metrics could extend the scope of the checklist to the evaluation of a specific robot design against its different areas and categories. With this, the checklist would constitute a helpful evaluation tool for trustworthy and acceptable robot design in usability studies and A/B testing. A further limitation is the mainly European-centered perspective of the checklist, which might be broadened in future iteration steps to include additional perspectives. Also, with ongoing technological progress, among others in the field of AI, new technological solutions for the covered challenges of acceptable and trustworthy robot design might become available. Therefore, continuous updating of the checklist to integrate technological developments seems necessary. In the same manner, in the coming years, as robots become more and more present in different domains of daily life, legislation and social norms will be adapted and refined, which should also be considered in future iterations of the checklist.
To conclude, the checklist is intended as a practical tool to enhance the consideration of trust and acceptance in HRI design, both in practice and in research. The authors look forward to feedback and directions from the community for the continuous updating of the checklist.

Table 1 (excerpt) The TA-HRI Checklist: design topics with questions and design recommendations. The scope indicates whether an item applies to private households (Private), public spaces (Public), or both (Both); letters in parentheses refer to the footnotes of the original table.

– The robot only performs actions within the scope of the task assigned to it. The robot requires permission for each function/task from the user/operator before it can perform it (f). [Both]
– Does the robot signal autonomy? If the robot is working in a fully autonomous mode, this is communicated in the interface (e.g., icon, lights) (h); depending on the area of application and task, this option can be deselected. [Both]
– Does the robot act autonomously to an appropriate degree within the scope of the tasks assigned to it, and does it communicate efficiently? If tasks are given to the robot, they are performed as effectively and efficiently as possible. This includes reducing task-related queries and information to a user-adaptive minimum (i). [Both]
– Is an optional successive increase in the level of autonomy implemented for standard tasks? If desired by the users, the robot can become successively more autonomous in the execution of standard tasks by learning from past interactions and reducing the extent of queries (i). [Private]
6. Is the level of autonomy and proactivity of task execution adaptable to the task, area of use, and user group? The level of autonomy and proactivity can be adjusted to the task according to user preferences (j). [Private]

Trustworthy interaction
1. Does the robot support a calibrated level of trust? The robot and its interaction are designed to support the dynamic and situation-specific formation of a calibrated level of trust for each subtask. For this, the design recommendations of the category "Transparent communication" are fundamental prerequisites (k). [Both]
2. Does the robot adapt immediately to user input? The robot adapts its task execution directly after receiving an input. The task scope of the robot can be extended and restricted if necessary (f). [Both]
3. Are the robot's current reliability and probability of error communicated? The robot's design and interaction mechanisms allow for maximum reliability in task execution. The robot communicates errors and limitations of its reliability dynamically and in a timely manner (l, m, n). [Both]
4. Does the robot coordinate the task execution with users to an appropriate degree? The robot reassures itself with the users to an appropriate degree prior to action execution (g). [Both]
5. Is a user-adaptive level of reassurance by the robot implemented? The extent to which the robot coordinates its task execution with the user/operator can be configured by the user (e.g., all actions vs. unusual actions). [Both]
6. Does the robot use intuitive interaction mechanisms resembling social, interpersonal communication? The robot uses intuitive mechanisms of interpersonal communication appropriately (without excessive anthropomorphizing or an inappropriate degree of attachment) (o). [Both]
7. Are deviations from expected objects or situations communicated? If an object or task anomaly is detected by the robot, this is communicated to the user and clarification is attempted. [Private]

Transparent communication
– The robot makes the object/person recognition transparent and comprehensible for users, thereby allowing for the identification of errors in the person recognition (q). [Both]
– Is the robot's communication modality adapted to the environment? The interaction of the robot is adapted to the task environment and is (in the optimal case) implemented in a multimodal design to ensure universal usability (r). [Both] If applicable, in private households a voice dialog is recommended. [Private] In public spaces and noisy environments, warning sounds and visual interaction are often more beneficial. [Public]
5. Are system boundaries transparent and comprehensible? The robot communicates situations for which system limitations exist, explains their consequences, and warns about possible errors (f). [Both]
6. Is the robot able to draw attention to itself? The robot's interaction concept is designed to allow it to attract attention when necessary (p). [Both]
7. Does the robot avoid communicating unnecessary information? By default, the robot limits the communicated information to what is necessary for task execution, unless its task is communication (s).
8. Does the robot give feedback on faulty operation/mistreatment? The robot provides feedback when operation by the users is not in accordance with the task or could cause damage to the robot's hardware or software (t). [Both]
9. Is the robot equipped with a possibility of announcing its entry into a room? To prevent startling people by a sudden, unexpected entry, the robot signals its entry beforehand. To avoid excessive disturbance, this option can be switched off in accordance with the situation. [Both]
10. Is there an adaptive level of coordination with users? Users can adjust the frequency of the robot's coordination with them and the autonomy level for individual tasks (j). [Private]
11. Does the robot demonstrate critical tasks before it first executes these? The robot demonstrates critical tasks to users before final permission to perform these tasks in the future is given (e.g., demo mode or tutorial).

Appropriate social behavior
1. Does the robot adapt to the environment and its interaction partners when performing its tasks? When people enter the robot's movement space, the robot adjusts its movement sequences so that people can move around undisturbed (h, u). [Both]
2. Is the robot as inconspicuous, discreet, and non-disruptive as possible? The robot performs its task discreetly, unobtrusively, and with a minimum level of interference. Both the noise generated by the task and the communication are reduced to the minimum required for task execution (v). [Both]
3. Does the robot have a suitable and culturally appropriate level of politeness? The robot adheres to social norms and communicates in a culturally compliant, friendly, and polite manner that at the same time allows efficient task completion (w). [Both]
4. Does the robot respect the personal distance zone? The robot does not violate the human's personal space (a minimum distance of 1.5 m is recommended). Physical contact with humans is acceptable if it is relevant to the task and permission has been granted by the user (x, y). [Private] The robot does not violate the human's personal space; a minimum distance of 1.5 m is recommended (x). [Public]
5. Does the robot react appropriately to inattentive persons? The robot recognizes when people in its environment are inattentive and adjusts its movements and actions accordingly (s). [Both]
6. Does the robot assert itself only within defined limits (e.g., emergencies)? The situations in which assertive behavior by the robot is allowed are to be coordinated with the users. It should be possible for the user to stop the assertive action at any time (z, aa). [Both]

Perceptible data protection and protection of privacy
1. Have the data protection regulations/laws of the respective country and the corresponding situation at the robot's operating location been considered in the design? Depending on the applicable law or regulation, the robot requires explicit consent for the use of cameras/microphones and the further processing of the collected data. The implemented data protection measures are communicated transparently to the users (j). [Both]
2. Are the processing and storage of data limited to the personal data needed for the robot to perform its task? The robot does not process and store any specific identification features of the surrounding persons beyond those required for task completion.

Security & subjective feeling of safety
– The robot recognizes critical and dangerous objects and interacts with them with limited force and speed and without endangering its environment (ad, af). [Both]
6. Does the robot avoid collisions and warn of them in a timely manner? The robot is equipped with sensor technology that monitors distances to people in the immediate environment and has automatic emergency braking as well as a perceivable, preventive collision avoidance system (h, ag, ah). [Both]
7. Does the robot keep a safe distance to people? The robot detects persons and acts with a perceivable minimum distance (ad, ae). [Both]

Subjectively normative robot behavior
1. Does the robot respect the dignity and rights of humans? Actions of the robot do not affect human rights and respect human dignity (f, g). [Both]
2. Does the robot coordinate moral decisions with a human? To increase the acceptance and trustworthiness of the robot, decisions involving a moral component are not made by the robot but by a human (g). [Both]
3. Does the robot follow generally applicable legislation? Robot behavior and robot interaction do not cross any legal boundaries (f, g). [Both]
4. Is discrimination of groups of people by the robot ruled out? The robot does not discriminate (e.g., based on gender, age, ethnicity) (g). User-adaptive interaction concepts build on the factual requirements of users and not on stereotyped assumptions. [Both]
5. Does the robot allow for universal usability and the inclusion of vulnerable and impaired people in the interaction? The interaction of the robot is internationally understandable and includes people with disabilities, for example, through multimodality. The robot can adapt its behavior to the needs of vulnerable and impaired persons (ai). [Public]
6. Does the robot help to provide relief for humans? The robot takes over monotonous, repetitive, or stressful tasks. The robot is not used in competition with humans (a). [Public]
7. The robot takes over tasks in the socio-technical system in a way that allows the design of complete tasks for humans as well as an experience of competence and self-efficacy (g). As far as possible, the human does not come into the position of being a mere supervisor of the robot's task execution. [Public]
8. Are the activities of the robot retrospectively reconstructible? The robot has a black box that keeps an activity log to reconstruct task execution (e.g., after accidents); this recording is done in accordance with data protection regulations (g). [Both]
9. Does the robot complement rather than replace interpersonal, social contacts? The robot does not simulate a human being. It encourages users to have real social contact with other people (j, aj). [Private]
10. Does the robot avoid emotional attachment of the users beyond a healthy level? The robot is designed to prevent excessive emotional attachment of the users to it. In this regard, decisions regarding the humanization, personality, and communication style of the robot are made in an informed manner (a, j).

Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Dr. Johannes Kraus is a postdoctoral researcher at the Human Factors department of Ulm University and head of the subject area "human-robot interaction". His research interests lie in the decision processes in the interaction with automated systems, especially automated vehicles and robots. His research focuses on trust processes and the role of user personality and attitudes.
Franziska Babel is a PhD student at the Department of Human Factors at Ulm University. She has a background in engineering psychology and human factors. Her research interests are human-robot interaction (technology acceptance, persuasive robotics), human-machine interaction, and persuasive technology. In her thesis, she investigates conflict resolution between humans and robots.
Dr. Philipp Hock is a post-doctoral researcher at the Department of Human Factors at Ulm University. He has a background in computer science and his research interests are human-machine interaction, persuasive technology and automated driving.
Katrin Hauber is a student assistant in the Department of Human Factors at Ulm University. Her research interests are human-robot interaction and automated driving.

Prof. Dr. Martin Baumann is head of the Department of Human Factors at Ulm University. His main research interests are the psychological basis of human-machine interaction in different domains, mainly traffic, human-robot interaction, interaction with intelligent systems, and the development and validation of concepts of cooperative human-machine systems.