Over the last decade, the range of robotic applications in the healthcare sector has expanded rapidly. These applications range from dispensing medication to providing more personalized services to care recipients. However, this kind of robotization is associated with severe ethical and societal implications. To advance the design and acceptability of socially interactive robots, it is therefore necessary to consider and analyze those concerns. The RoPHa research project aims at supporting care-dependent people in leading a more independent life. This chapter examines potential ethical challenges and impacts in elderly care that were discussed during the design of the robotic system in the project. To evaluate the effect of assistive robotics on elderly care in practice, the MEESTAR model was applied. The ethical implications of the proposed applications were mapped to seven moral dimensions, such as autonomy, justice, and privacy. Each dimension was examined from three different perspectives (individual, organizational, and societal). All identified ethical implications were graded based on their degree of ethical justifiability. The results include ethically relevant questions regarding the role of the robotic system, its technical implications, economic and distrust barriers, occupational safety, data security, as well as the legal and safety responsibilities of all involved parties.
RoPHa: Robots for the Support of Older People and People in Need of Care
Against the backdrop of demographic developments in Germany and other Western societies, the support of older people and people in need of care is a standard topic in discussions of possible applications for robots. According to the latest report of Eurostat (Eurostat, 2020), the relative share of older people in the total population is projected to increase gradually, reaching 29.4% by 2050.
The RoPHa (Robuste Perzeption für die interaktive Unterstützung älterer Nutzer bei Handhabungsaufgaben im häuslichen Umfeld) project—funded by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF)—aims at supporting care-dependent people in leading an independent life for longer, especially in the context of food consumption. The assistive robot should not only aid in preparing meals but should also assist the care recipient in consuming their meal. Since older adults face serious nutrition concerns and deficits, food intake is a practical application for a robotic system. The overall objective is to enhance the capabilities of interactive assistance robots to safely perform everyday manipulation tasks in complex and dynamic environments. The developed technologies were implemented and demonstrated on the Care-O-bot® 4 service robot. Together with experts from the care sector, different use cases were defined, including preparing and serving food to the patient.
To advance the design and acceptability of socially interactive robots, it is necessary to discuss the ethical, societal, and legal perspectives on these issues. The MEESTAR model (Manzeschke et al., 2015) was used to evaluate the ethical implications of the envisioned technical assistance systems. This instrument was specifically designed to provide an ethical evaluation of socio-technical arrangements. It determines the impact of such arrangements with respect to their design and functionality based on concrete scenarios. The evaluation aided in the identification of ethically relevant problems as well as in the joint development of proposed solutions.
7.2.1 Use Case Definition
The MEESTAR model requires a representative description of the technical system, which is provided here through graphical sketches and diagrams. The description was formulated in as much detail as possible, with various design variables in mind, to achieve a reliable system assessment. It was regarded as an initial version of the system description; the assessment process is then continually reiterated as more profound knowledge regarding aspects and features of the system and its environment, including users and other participants, becomes available.
For the initial system description, a care needs assessment was conducted by organizing preliminary observations in three facilities of the “Stiftung evangelische Altenheim.” These facilities provide different forms of nursing care services, including daycare, care for younger people, and care of persons with dementia. The assessment aided in identifying the needs and unmet needs of people requiring care, as well as the support currently provided for them, and in defining desirable assistive functionalities of a robotic system. Based on the preliminary on-site observations, two personas (see Fig. 7.1) were identified as potential users of the robotic assistance solution.
One persona suffers from the consequences of a stroke (hemiparesis). She can still eat independently but has difficulties with fine motor skills. The second persona suffers from incomplete paraplegia from the cervical spine onward and is therefore no longer able to eat independently. Based on the proximity of the robot to the user, the RoPHa project defined three key assistive functionalities, including practical tasks:
“Preparatory tasks,” e.g.:
Setting the table
Cleaning the table
“Assistance at the table,” e.g.:
Opening a bottle
“Direct interaction,” e.g.:
7.2.2 Ethical Evaluation
To consider all relevant implications, the ethical evaluation of the system requires the expertise of multiple disciplines. In the form of an interdisciplinary workshop, MEESTAR provides a reference framework for structuring discussions about system-related ethical aspects with respect to a set of pre-defined dimensions, as outlined below and depicted in Manzeschke et al. (2015, p. 14).
Since MEESTAR seeks to determine and evaluate the ethical implications of socio-technical arrangements using concrete scenarios, a description of the system, including the intended context of use, was presented to all workshop participants. Three interdisciplinary working groups were formed, consisting of members of the research groups involved in the project and the pilot users. Each group reflected on and analyzed the case study from one of the three levels of observation defined in MEESTAR: the individual level (IL), the organizational level (OL), and the societal level (SL).
The first step focused on the identification of ethical problems and challenges. The technical assistance system was assessed with regard to the seven ethical evaluation dimensions named in MEESTAR: care, autonomy, safety, privacy, justice, participation, and self-conception.
In addition, the identified ethical problems and areas of conflict were ranked by severity and classified into four degrees of concern:
ethically unobjectionable (1)
ethically sensitive (2)
ethically extremely sensitive (3)
ethically unacceptable (4)
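The evaluation scheme can be pictured as a matrix mapping each combination of the seven MEESTAR dimensions and three observation levels to one of the four degrees of concern. The following is a minimal illustrative sketch of that structure (it is not part of the RoPHa software; all names and the example grades for "autonomy" are taken from this chapter's discussion):

```python
# Illustrative sketch of the MEESTAR evaluation matrix:
# (dimension, perspective) -> degree of ethical concern (1-4).
from enum import IntEnum

class Concern(IntEnum):
    UNOBJECTIONABLE = 1       # ethically unobjectionable
    SENSITIVE = 2             # ethically sensitive
    EXTREMELY_SENSITIVE = 3   # ethically extremely sensitive
    UNACCEPTABLE = 4          # ethically unacceptable

DIMENSIONS = ["care", "autonomy", "safety", "privacy",
              "justice", "participation", "self-conception"]
PERSPECTIVES = ["individual", "organizational", "societal"]

def empty_matrix():
    """One (initially ungraded) slot per dimension/perspective pair."""
    return {(d, p): None for d in DIMENSIONS for p in PERSPECTIVES}

ratings = empty_matrix()
# Example grades from the chapter's "autonomy" discussion:
# denying freedom of choice vs. pressure to reduce the care burden.
ratings[("autonomy", "individual")] = Concern.EXTREMELY_SENSITIVE
ratings[("autonomy", "societal")] = Concern.SENSITIVE

def most_severe(ratings):
    """Highest degree of concern assigned so far, or None if ungraded."""
    graded = [v for v in ratings.values() if v is not None]
    return max(graded) if graded else None
```

Such a matrix only records the grades; per MEESTAR, the substantive work remains the structured group discussion behind each entry.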
Each issue is analyzed individually, discussed from different perspectives, and jointly assessed and addressed by developing a shared stance within the research group and establishing suitable procedures. In the next step, the identified ethical problems were evaluated and ranked according to the four degrees of severity mentioned above. The working session sought to form specific problem clusters from the identified and ranked ethical problem situations, which served as a basis for the further solution-oriented procedure. A central goal of the workshop was the creation of a “map” of ethically relevant problem contexts and the joint development of proposed solutions with regard to the three levels of observation in MEESTAR. Generally, a MEESTAR workshop does not provide answers to ethical questions; rather, it opens a space of reflection within which relevant ethical questions of the “good life” can be thematized, analyzed, and discussed—questions that have relevance throughout the project period (and partly beyond) regarding the design and functionality of the arrangement. Due to the project’s orientation, the discussion mainly considered the contribution of technical assistance systems to an improvement of life (or to the maintenance of a high quality of life) for older target groups.
The ethical dimension “care” was relevant for all involved parties. IL associated care mainly with the implications of assistive robotic care for older people’s dignity. The usage of the system is ethically not justifiable if it compromises the quality of care and if it does not support older people and people in need of care in maintaining their independence and improving their quality of life while also preserving their potential.
SL associated the impact of a robotic care system on the concept of care, on the one hand, with the increase of social isolation and, on the other hand, with the decrease in commitment and solidarity in families and communities.
OL expressed ethical concerns regarding the marketing and selling of care interaction as a commercial product. From both a societal and an organizational perspective, the raised ethical concerns were evaluated as ethically sensitive, i.e., as concerns that can be compensated for in practice.
The ethical dimension “autonomy” was not a relevant ethical concern for OL. From an individual and societal perspective, autonomy was associated primarily with freedom of choice. Both perceived the system as a threat to autonomy if users did not have the freedom to choose or refuse the system, which was evaluated as ethically extremely sensitive, or if users were forced to use it to reduce the care burden on the rest of society, which was assessed only as ethically sensitive.
IL associated the ethical dimension of safety with implications related to physical security. Concerns were raised regarding the safe use of the robotic system, especially while food is served directly to the user, e.g., whether the robot can recognize a situation in which a patient has difficulty swallowing a meal and needs help. Here, the robot should react appropriately by making an emergency call. This situation would therefore require a reliable and robust emergency-awareness system.
On the one hand, SL associated safety with implications related to occupational safety, e.g., the elimination of jobs, which led them to deem the usage of the system ethically sensitive. On the other hand, safety was also associated with legal protection, including legal liability and safety responsibilities in the case of accidents. If unauthorized data access were detected, the system would be rated as ethically extremely sensitive, thus linking the ethical principle of safety to the topic of data security.
While SL associated safety with occupational safety, OL identified further implications for the safety responsibilities of employers and for data security, including a possible loss of competence within the organization through the replacement of human workers, which rendered the usage of the system ethically sensitive. OL addressed the concern that workforce skills might shift and change with automation and growing dependency on technology. This was considered highly morally sensitive.
All participants associated privacy mainly with data security, including the possibility of misusing the technology for the surveillance of users and employees. SL furthermore addressed the involvement and interest of third parties, such as health insurance companies, in monitoring the health activities of patients. This ethical implication was considered ethically sensitive.
IL related the ethical dimension of justice to economic concerns. Concerns were raised over possible financial criteria that would either exclude people from using the system or determine which type of assistance—technical or personal—the user would receive; this was rated as ethically extremely sensitive.
In addition to the concerns mentioned above, SL discussed the broader social implications and pointed out that the increased use of technical assistance systems could forestall further political discussion and hence the social upgrading of these occupational groups, essentially making it difficult for them to gain financial and social recognition; the usage of the system was therefore rated as highly sensitive.
IL associated the ethical dimension of participation with implications for the quality of interaction, which had already been addressed as aspects of the ethical principle of autonomy. SL furthermore identified social implications that might result from frequent and sustained human–machine interaction, including the deterioration of social skills such as communication skills, which led them to deem the usage of the system ethically sensitive. OL pointed out the necessity of including the organization in the decision-making process. There was intense discussion among the participants about the extent to which it is possible and desirable for a technical system to satisfy the need for human interaction.
Discussion of the ethical dimension of self-conception resulted in concerns similar to those already listed as implications of the aforementioned dimensions.
Quality of life should not only be considered from an individual perspective. Rather, it should be socially negotiated and communicated in order to understand what can be mutually expected and what is seen as the standard of social cohesion.
All participants agreed that it is neither desirable nor intended to completely replace interpersonal relationships and human care with human–machine relationships, since the quality of human care and relationships cannot be technically simulated; nor would such substitution be desirable for the further development of society. The consortium assessed the integration of social and communicative skills critically and rejected it for two reasons. First, integrating social and communicative functions into the system could create a social bond or a user’s dependency on the system. Second, the system should be prevented from influencing the user’s opinion-forming and decision-making processes through standardized communication modules or adaptive functions that adjust to the user’s interests. Especially given the vulnerability of the target group and possible cognitive impairments, there is a risk that an intensive social relationship developing between the user and the system could contribute to users forgetting that they are interacting with a technical device. Language capabilities should therefore only be present to the extent that purpose-bound communication is possible.
In this context, a distinction between “helping and assisting” that Prof. Manzeschke has elaborated might prove helpful.
Help is a person-to-person activity in which one person makes their resources, abilities, or self available to another to achieve goals that the latter can no longer achieve on their own. Assistance is the technical simulation and substitution of help. It contains the functional element of human help without the “admixture” of the social aspect of the human encounter. This is partly experienced as very relieving. Technical assistance ranges from simple aids to complex technical arrangements. With the help of this fundamental distinction, it is possible to precisely target what the system should and should not do, and which form—help or assistance—is necessary, desirable, or preferable in which situation. Regardless of this general attitude, however, the question was raised to what extent the act of eating is a social practice whose social elements would be eliminated by reducing it to a purely functional interaction of preparing and serving food. This could, under certain circumstances, be perceived as a deterioration of the quality of life of the persons concerned—especially if the use of the robot leads to a reduction of the presence of the caregiver.
The project team agreed that the robot’s functions must always be transparent. Complete and accessible information for consumers will be necessary when the system is introduced to the market. Strategies include creating detailed, easy-to-understand instructions explaining the product’s usage and specification for the different user groups. In addition, there was widespread agreement that the technical system should not be sold to vulnerable groups as a stand-alone, “unattended” system, as constant and competent monitoring and assessment of the user’s competence would be necessary. Nevertheless, the question arose as to whether and how knowledge of the user’s loss of ability could be obtained, as the user, out of shame or fear of failure, may not be willing to reveal that they are having problems using the system, while at the same time being frustrated that they can no longer use it autonomously. This resulted in the discussion of the ethically relevant question of how to deal with persons who once used the technical system but then, due to a cognitive or performative loss of ability, can no longer benefit from the assisted care.
The following distinction between models of use and distribution might serve as a basis for answering those questions:
The assistance system is distributed as a stand-alone device. This would allow any potentially interested party to purchase, install, and use the device unaccompanied (Dyadic model).
The device is installed under the supervision of a competent technician and is tailored to the needs and abilities of the individual user. This mainly includes questions of the range of functions, in the case of a modularized solution, and questions of the inclusion of third parties in the interaction between user and system (Extended dyadic model with the potential inclusion of third parties).
The use of the device is made possible exclusively in combination with a supervising person who is informed about the user’s competencies in dealing with the system and about a possibly undesired emotional attachment to the system (Triadic model).
The RoPHa project aims at supporting care-dependent people in leading a more independent life in the context of food intake. To advance the design and acceptability of socially interactive robots, it is necessary to evaluate the ethical implications of the envisioned technical assistance systems. The MEESTAR model was explicitly designed to provide an ethical evaluation of socio-technical arrangements. It determines the impact of such arrangements with respect to their design and functionality based on concrete scenarios. The evaluation aided in the identification of ethically relevant problems as well as in the joint development of proposed solutions. An emotional or social bond between the user and the system could lead to undesirable social consequences regarding expectations and trust in the system. On the one hand, users could overestimate the robot’s capabilities and inevitably experience frustration if the robot cannot provide the desired functionality, empathy, and support. On the other hand, a social trust relationship opens the possibility of the technical system influencing the user’s decision-making processes.
The discussion focused mainly on whether the consortium should actively contribute to creating an emotional and social bond between the user and the technical system, e.g., through design decisions and decisions regarding the system’s functionality. The consortium believes that the system should only be used with the user’s consent, to ensure that the user perceives it as helpful support and not as an unwanted or frightening coercive measure. There was consensus that technical safety would have to be ensured to avoid accidents. Regardless of this objective, there will always be a residual risk of technical failure or dysfunction. To compensate for this, the user should be informed in advance, in an understandable and comprehensible form, about possible risks.
The topic of security was discussed in its various facets. Particular attention was paid to the idea of monitoring all participants collaborating with the robot. To avoid profound ethical implications, it was agreed that it is necessary to define early in the system design how data will be collected, processed, and stored in RoPHa. Furthermore, the user should be informed in an understandable and accessible form about the collection and storage of data and should have the option of terminating the use of the system at any time. The principles of data economy and local data storage should apply to the research context to minimize the ever-present risk of data misuse.
The evaluation based on the MEESTAR model proved valuable, as it made it possible to define a clear role for a care robot, including defining its social, communicative, and technical capabilities.
Eurostat. (2020). Ageing Europe – Looking at the lives of older people in the EU – 2020 edition. Publications Office of the European Union. https://doi.org/10.2785/628105
Manzeschke, A., Weber, K., Rother, E., & Fangerau, H. (2015). Ethical questions in the area of age appropriate assisting systems. Druckerei Thiel Gruppe. Retrieved January 27, 2022, from https://www.researchgate.net/publication/304743219_Ethical_questions_in_the_area_of_age_appropriate_assisting_systems
© 2023 The Author(s)
Abdel-Keream, M. (2023). Ethical Challenges of Assistive Robotics in the Elderly Care: Review and Reflection. In: Engel, U. (eds) Robots in Care and Everyday Life. SpringerBriefs in Sociology. Springer, Cham. https://doi.org/10.1007/978-3-031-11447-2_7