1 Introduction

The application of algorithms based on artificial intelligence (AI)Footnote 1 is spreading in the world of work, including the health care sector. AI systems can imitate human problem solving, allowing them to assist with or perform tasks that require cognitive abilities (e.g., [2]). The transfer of agency from humans to AI-assisted technologies may, therefore, have a significant impact on social structures and practices in the health care context and on the social and moral normsFootnote 2 manifested therein.

Recently, there has been an increase in the research and development of AI-assisted technologies for nursing care [5,6,7]. Against the backdrop of current challenges in professional care, such as the shortage of skilled workers, workforce aging and growing care needs resulting from increasingly aging societies and population growth [8], AI technologies promise to optimize nursing workflows, e.g., by providing automated tracking and analysis of care recipients’ activities and health data and by identifying options for clinical decision-makingFootnote 3 [6, 10].

Nurses assist individuals in activities that “contribute to their health or recovery or to dignified death that they would perform unaided if they had the necessary strength, will, or knowledge” [11]. In doing so, they take responsibility for the well-being of humans who are limited in their decision-making ability and/or dependent on professional care.

Advocating for the interests and needs of those in need of care is a key aspect of professional care. Hence, nurses are often confronted with complex decisions that require the inclusion of multiple perspectives, taking into account the individual situation of care recipients. Frequently, their decisions have morally significant consequences (e.g., [12, 13]).

Areas of application of AI-assisted technologies already in use range from activity and health tracking to care coordination and communication; the systems are based on, e.g., computer vision, predictive modeling, natural language processing, or speech recognition [7]. It has been shown that such technologies can make assessments and processes more efficient—such as by early detection and prevention of adverse events or by reducing the time needed for documentation—enabling nurses to focus on humanistic aspects of care, including communication (e.g., [10]). Moreover, AI may offer the opportunity to make care services more personalized (by integrating individual health data) and to provide evidence-based health information for decision-making [14].

However, the implementation of AI-assisted technologies also creates new challenges. It has been argued that the adoption of such technologies may be associated with adverse effects such as a depersonalization of the nurse–patient relationship [15] and impaired communication [16], thereby undermining the holistic approach to care practice. Depending on the system design and the field of application, the individuality of those in need of care could gradually be reduced to what can be empirically captured (the so-called datafication of patients) (e.g., [17, 18]). Furthermore, a nonrepresentative selection of datasets and/or the quantification and categorization of data for the training of AI models carry the potential to discriminate against particular (sociodemographic) groups, such that their needs and characteristics are overlooked [19, 20]. Another possible drawback is that nurses’ reliance on algorithms could erode their ability or willingness to critically reflect on their actions (e.g., [15]).

Overall, the risk of neglecting care recipients’ interests and of (behavioral) repercussions within care processes already exists independently of AI technology, but it can be exacerbated by this technology, particularly by systems that have a direct impact on human–human relationships.

To ensure that the implementation of AI applications supports human agency in an ethically aligned way, possible tendencies of implemented algorithms to perpetuate or change social structures and the moral norms anchored therein need to be identified early (e.g., [4]). At present, however, unintended consequences associated with a dehumanization or depersonalization of care are not systematically assessed during the system design process. Existing ethical guidelines for AI are usually formulated as highly abstract ethical principles that appear too indeterminate, i.e., normatively ambiguous, to guide the design of technologies based on moral claims [21]. To effectively inform choices made during the design process, guidelines need to be specified for specific contexts of use.

This study complements ethical principles considered relevant for the design of AI-assisted technology in health care with a context-specific conceptualization of the principles from the perspectives of individuals potentially affected by the implementation of AI technologies in long-term care facilities in Germany. With this approach, we provide indications regarding which concepts of the investigated ethical principles ought to receive particular attention during the design of AI technologies to ensure that these technologies are not blind to the moral interests of stakeholders in the German care sector.

2 The need to contextualize AI ethics frameworks

The need to develop norms and standards to achieve ethically aligned AI systems is being critically discussed by various organizations (e.g., [22, 23]), the private sector (e.g., [24]) and researchers (e.g., [25, 26]). Consequently, numerous ethical guidelines for AI have been developed in recent years [27, 28]. However, these guidelines seem to be rarely considered in practice [29]. This cannot be explained solely by the number of frameworks to choose from and/or the limited (sanctioning) mechanisms that currently reinforce their normative claims. An obstacle to the effective translation of ethical principles into practice is the high degree of epistemic uncertainty regarding the risks and opportunities associated with the (non)fulfillment of ethical principles. To resolve this uncertainty, context-specific conceptualizations of the proposed principles, e.g., via bottom-up case studies with relevant stakeholders, are needed [30, 31]. The current guidelines are usually formulated as highly abstract principles that “leave much room for interpretation as to how they can be practically applied in specific contexts of use such as LTC [long-term care]” [32, p. 2].

Correspondingly, it is widely agreed that the design of technologies implemented in socially sensitive areas, such as the health care sector, should not be informed solely by predefined normative principles (adapted to the technology’s abilities) but also by local phenomena (i.e., thick ethical concepts)Footnote 4 that appear morally salient to those who are potentially affected by the implementation of such technology (e.g., [31, 34, 35]). To adequately assess and operationalize stakeholders’ perspectives, several researchers have stressed the need for a stronger investigation not only of stakeholders’ situated conceptualizations of proposed principles (e.g., [36]) but also of possible associations of these conceptualizations with specific tasks [37]. Existing approaches that aim to translate stakeholder perspectives into design requirements in a principled manner, such as value-sensitive design (VSD) [38] or participatory design (PD) [39], usually do not consider how ethical principles are realized through situational factors embedded in specific real-life contexts (e.g., [40]).

Aiming to complement ethical principles with context-specific perspectives of individuals potentially affected by AI-assisted decision-making, we focus on the framework proposed by Beauchamp and Childress [41]. A mapping review by Floridi et al. [42] of the literature on ethical guidelines for AI in health care suggests that the key principles incorporated by many AI initiatives are consistent with the ethical principles proposed therein.Footnote 5

The most influential framework in health care practice (hereafter referred to as the principles of biomedical ethics) proposes the following four prima facie principles for the ethical evaluation of health care practice:

  • Beneficence: all norms, dispositions and actions aiming to benefit or promote the well-being of other persons [41, pp. 217–218]. It “(1) present[s] positive requirements of action, [that] (2) need not always be followed impartially, and (3) generally do not provide reasons for legal punishment when agents fail to abide by them” [ibid., p. 219].

  • Nonmaleficence: the obligation to abstain from causing harm to others [ibid., p. 155]. It is conceptualized as “(1) … negative prohibitions of action, [that] (2) must be followed impartially, and (3) provide moral reasons for legal prohibitions of certain forms of conduct” [ibid., p. 219].

  • Respect for autonomy: both the negative obligation that autonomous actions should not be subjected to controlling constraints and the positive obligation to disclose information as well as to promote the capacities for autonomous choice [ibid., p. 105]. The realization of the principle is assumed to require liberty (independence from controlling influences) and agency (capacity for intentional action) [ibid., p. 100].

  • Justice: broadly defined as the obligation to fairly distribute benefits, risks and costs under conditions of scarce resources [ibid., pp. 13, 250]. In the absence of social consensus on specific theories of justice (such as utilitarian, libertarian, communitarian, egalitarian, capability and well-being theories), policies are expected to integrate various elements of these theories on a case-by-case basis [ibid., p. 313].

Further references to ethical principles considered relevant in care contexts can be found in nursing theories with their respective value orientations [43], in professional codes of ethics (e.g., [44, 45]) and to some extent in other (bioethical) approaches to health care ethics [46,47,48]. In particular, relational theories of health care and nursing, such as the ethics of care [49,50,51], raise normative objections to the principles of biomedical ethics. Based on the assertion that social relationships and the recognition of the vulnerability of those in need of care should be the focus of ethical considerations of care work, the principle of respect for autonomy, in particular, is criticized as resting on an overly individualistic view of human beings. We assume that such perspectives are not necessarily incompatible with the principles of biomedical ethics; instead, they could be integrated into the framework through context-specific conceptualization and adaptation of the principles. In fact, Beauchamp and Childress conceptualized their principles as an analytical framework of general norms derived from common moralityFootnote 6 that serves as a practical instrument for moral reasoning [41, p. 17] and requires further specification to provide direct guidance within specific contexts [ibid., p. 9].Footnote 7

However, we narrowed our search space to the three principles of beneficence, respect for autonomy and justice. Nonmaleficence requires the intentional avoidance of actions that (may) cause harm and that are, therefore, legally prohibited [ibid., p. 219].Footnote 8 In our study, however, we wanted to encourage participants to reflect on decision-making situations in which their moral intuitions are (presumably) not primarily guided by internalized rules of conduct. More importantly, we decided not to include a scenario prompting reflection on the principle of nonmaleficence because we aimed to respond to the (potential) vulnerability of participants in the care-recipient group and to minimize the risk of causing psychological/emotional harm (such as feeling uncomfortable, embarrassed, or upset) to them [53, 54]. Due to the mutual relations between the principles, it must nevertheless be assumed that some participant statements also relate to the principle of nonmaleficence.

3 Research questions

While former studies have assessed, e.g., medical students’ views of the principles of biomedical ethics (based on four scenarios) [55], the influence of the principles on health care practitioners’ attitudes toward AI technology [56] and student rankings of the principles within decision-making in ethical scenarios [57], to our knowledge, no qualitative study has assessed whether the principles are morally salient to direct stakeholders in the German care sector. Moreover, no study to date has examined which situational factors of specific real-life contexts stakeholders regard as promoting the actualization of ethical principles. As outlined in the previous section, it is assumed that such complementary data will help to translate ethical principles into practice. Therefore, the present study first aimed to illuminate the established principles of biomedical ethics from the perspective of direct stakeholders in the German care sector, i.e., nurses and care recipients (to ensure that multiple perspectives are factored into the analysis [58]). To meet this goal, we formulated the following research questions:

Q1: Are the principles of beneficence, respect for autonomy and justice morally salient to participants?

Q2: How do participants conceptualize the principles? Which situational factors (in particular, demands) do participants regard as promoting the actualization of their concepts of these principles in situations involving moral decision-making occurring in everyday nursing practice?

We further aimed to provide initial indications of which concepts of the investigated ethical principles ought to receive particular attention when designing AI technologies to ensure that they are not blind to the moral interests of stakeholders in the German care sector. In the third research question, we, therefore, analyzed participant expectations regarding the actualization of their concepts of the principles in the context of AI-assisted decision-making.

Q3: Which potential influences do participants anticipate from the use of AI-assisted technology in situations involving moral decision-making (care tasks) with regard to the actualization of their concepts of the principles?

4 Methods

We conducted scenario-based semistructured interviews (see [59, 60]) focusing on situations involving moral decision-making occurring in everyday nursing practice. With this approach, we prompted participants to reflect upon the three ethical principles of beneficence, respect for autonomy and justice as well as the potential influences of AI-assisted technology on the actualization of the principles.

4.1 Participants

In total, semistructured interviews were conducted with 15 nurses and 15 care recipients between October 2021 and May 2022. In the care-recipient group, 2 interviews were excluded due to insufficient comprehensibility of the participants’ statements, resulting in 13 analyzable interviews. Recruitment took place through telephone and e-mail inquiries to long-term care facilities within Germany. Participants in the nurse group had to be employed as registered nursing professionals. Participants in the care-recipient group had to be at least 18 years old, have no cognitive or communicative impairments (in everyday social life in the facility) and have received care for at least 1 year. Their sociodemographic characteristics are reported in Table 1.

Table 1 Participants’ sociodemographic characteristics

4.2 Procedure

For the nurse group, the duration of interviews ranged from 60 to 90 min.Footnote 9 We originally planned to conduct the interviews on-site (i.e., at the facility in which the participants lived or worked); however, in some cases, this was not possible due to the coronavirus disease 2019 (COVID-19) pandemic. Therefore, some interviews were conducted digitally. In the care-recipient group, the length of the interviews was limited to 60 min. Most of these interviews were conducted at the nursing home in which the care recipients lived at the time. Interview audio was recorded using a conventional voice recorder. All interviews were conducted by one of the authors.

4.3 Scenarios

With a multidisciplinary group of researchers and a registered nurse, we developed three scenarios depicting different care tasks associated with moral decision-making as potential fields of application for AI technology [5,6,7]. The scenarios were revised after pilot testing with two individuals. To assess the ecological validity of the scenarios, participants were asked whether they experienced decision-making situations in their (professional) everyday life similar to those described in the scenarios. Overall, agreement was high for all scenario variants.Footnote 10

The first scenario (see Scenario SI 1) describes a situation in the field of basic care (bodily care), and the second scenario (see Scenario SI 2) describes a situation in the field of social care (interaction and relationship). In both scenarios, a nurse must decide whether to follow the expressed will of a person in need of care or to perform a care task against his or her will, i.e., the nurse must weigh the principles of respect for autonomy and beneficence. The third scenario (see Scenario SI 3) describes a situation in which workflows must be prioritized (organization of workflows) due to staff shortages; specifically, a nurse has to decide between caring for one person (who needs emotional support) or caring for many (as part of routine on-site care). This scenario prompts reflection on the principle of justice.

Two versions of each scenario were presented, one in which the nurse decides with the support of an AI-assisted technology and one in which the nurse makes the decision without this technology.

Analysis of results related to Q1 and Q2 was primarily based on statements referring to the scenarios without AI technology; in contrast, analysis of results related to Q3 primarily focused on statements referring to the scenarios with AI technology. Presenting both versions of each scenario was intended to increase the salience of the difference between the two situations. The resulting six situations were presented to participants in written form or, if necessary, read aloud.

For each situation, participants were asked to answer questions concerning (a) possible implications of the outlined decision, (b) their moral evaluation of the outlined decision, and (c) their rationale for the evaluation made in (b). In addition, the participants were asked to describe their conception of good care. In order not to influence the moral reasoning of the participants and to be able to assign their statements inductively to the ethical principles, the participants were not given the definitions of the principles.
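For clarity, the following minimal sketch (in Python; all identifiers are ours and purely illustrative, not part of the study materials) enumerates the six interview situations obtained by crossing the three scenarios with the two decision conditions, together with the question prompts asked for each situation.

```python
from itertools import product

# The three care tasks involving moral decision-making (see Sect. 4.3).
scenarios = {
    "SI 1": "basic care (bodily care)",
    "SI 2": "social care (interaction and relationship)",
    "SI 3": "organization of workflows (prioritization under staff shortage)",
}

# Each scenario was presented in two versions.
conditions = ("with AI-assisted decision support", "without AI-assisted decision support")

# Questions asked for every situation.
questions = (
    "(a) possible implications of the outlined decision",
    "(b) moral evaluation of the outlined decision",
    "(c) rationale for the evaluation made in (b)",
)

# Crossing scenarios with conditions yields the six situations presented to participants.
for (scenario_id, task), condition in product(scenarios.items(), conditions):
    print(f"Scenario {scenario_id} ({task}), {condition}")
    for question in questions:
        print(f"  {question}")
```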

4.4 Data analysis

The recorded interviews were first pseudonymized and then transcribed. A content analysis following Kuckartz [62] was carried out using the MAXQDA analysis software [63]. Participants were pseudonymized as follows: nurses were labeled G1, G2,…, G15; care recipients were labeled R1, R2,…, R13. The transcripts were analyzed through the stepwise construction of codes. Initial main codes were derived deductively from our research questions; further main codes and subcodes were derived inductively from the data. Together with a third researcher, we independently performed the coding; occasional differences in our codes were discussed and resolved within the research team.
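To illustrate the labeling convention and the deductive/inductive code structure described above, the sketch below (Python; the code tree shown is a hypothetical stub for illustration, not the study’s actual code system) shows how pseudonymized speakers, codes and coded segments might be represented.

```python
from dataclasses import dataclass, field

# Pseudonym labels as described above: nurses G1..G15, care recipients R1..R13.
nurse_ids = [f"G{i}" for i in range(1, 16)]
care_recipient_ids = [f"R{i}" for i in range(1, 14)]

@dataclass
class Code:
    """A node in the code tree. Main codes were derived deductively from the
    research questions; further main codes and subcodes were added inductively."""
    name: str
    subcodes: list["Code"] = field(default_factory=list)

# Hypothetical stub of a code system (illustrative only).
code_system = [
    Code("moral salience of the principles (Q1)"),
    Code("conceptualization of the principles (Q2)", [
        Code("beneficence"),
        Code("respect for autonomy"),
        Code("justice"),
    ]),
    Code("anticipated influence of AI technology (Q3)", [
        Code("identified risks"),
        Code("identified opportunities"),
    ]),
]

@dataclass
class Segment:
    """A coded transcript segment linking a pseudonymized speaker to a code."""
    speaker: str  # e.g., "G9" or "R5"
    code: str
    quote: str

# Example segment using a quotation reported in Sect. 5.1.1.
example = Segment(
    speaker="G9",
    code="beneficence",
    quote="Caring requires perceiving the persons in need of care as comprehensively as possible.",
)
print(example.speaker, "->", example.code)
```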

5 Results

5.1 Contextualization of biomedical ethics principles

In the qualitative content analysis, participant moral reasoning clearly reflected the three principles of beneficence, respect for autonomy and justice (Q1). However, the results also suggested that the principles’ definitions may need to be extended to include care-specific concepts.

Superordinate findings regarding participants’ contextualized perspectives of the principles (Q2) are described below (principle concepts are italicized). Tables of all key aspects associated with the principles (including situational factors considered to promote the actualization of their concepts of the principles) as well as corresponding anchor quotations are provided in the Supplementary Information.

5.1.1 Beneficence

Participants’ concepts of beneficence were highly multifaceted. Many facets referred to the relationship between the nurse and care recipient as well as specific caring actions. In other words, participants seemed to think of the principle as a dynamic process within care procedures that also impacts the actualization of the other principles.

The participants largely agreed that the overarching aim of beneficence is, on the one hand, the prevention of (physical) harm as well as the satisfaction of basic needs and, on the other hand, the promotion of care recipients’ emotional well-being. This conceptualization is largely consistent with the definition of Beauchamp and Childress [41].

As shown in Table SI 1, participant statements regarding critical requirements for achieving these aims (in situations involving moral decision-making) can be broadly grouped into three categories, namely, recognizing needs, assuming responsibility and meeting needs. These requirements provide a nuanced understanding of the principle of beneficence within the context of long-term care. In particular, participants highlighted demands that specified “positive requirements for actions” [ibid., p. 204]. Participants pointed out that the recognition of care recipients’ needs is the basis for the realization of the subsequent aspects and demands that nurses, inter alia, holistically assess care recipients’ needs, e.g., “Caring requires perceiving the persons in need of care as comprehensively as possible. Their wishes, needs, problems” (G9).

The assumption of responsibility, which precedes the performance of concrete nursing actions, was viewed as closely linked to the demands of obtaining extended information on patients’ (health) condition and of weighing the possible consequences associated with the available options for action. In addition, many participants highlighted that communication plays a central role in building trust within this stage of caring processes: “If we talk to the patients, for example, explain why a particular treatment is important, the patients usually allow the treatment to be carried out” (G15).

Finally, participants stressed that meeting care recipients’ needs often requires nurses to respond to their patients according to a given situation and, if necessary, to adapt their (planned) actions accordingly, e.g., “The art of nursing involves applying abstract knowledge to the person and the specific situation” (G9).

5.1.2 Respect for autonomy

Participants’ contextualized understanding of respect for autonomy was roughly categorized into the concepts of individual autonomy and relational autonomy, which differ in their respective aims and demands (see Table SI 2).

In line with the definition of Beauchamp and Childress [41], many participants argued that respect for autonomy requires care recipients to be self-determined as well as free from interference when making decisions, e.g., “Respect for autonomy requires that I regard the person in need of care as the decision-maker” (G9). Correspondingly, participants emphasized that nurses should trust in patients’ decision-making competency and, if necessary, improve their ability to make fully informed and independent decisions, e.g., “It is important to promote competence to make their own decisions… To do this, we often have to provide information” (G7). Limits to this understanding of patient autonomy were identified in the associated risks of self-endangerment and harm to uninvolved personsFootnote 11 as well as with regard to care recipients with cognitive impairments.

At the same time, many participants pointed out that patients’ exercise of agency is usually embedded in social relationships and that patients may not be capable of claiming the right to autonomy. Accordingly, some participants reasoned that patient autonomy may also be preserved by retaining a person’s sense of identity rather than independence, particularly with cognitively impaired persons. Thus, autonomy should be understood as a relational process involving the demand to holistically assess care recipients’ individual situation and motives. Several participants argued that nurses should consider the possibility of internalized incapacitation. Moreover, participants assumed that (relational) autonomy can also be preserved within shared decision-making. Relatedly, many emphasized the possible demand of ascertaining care recipients’ motives and needs through nonverbal communication as well as through consulting colleagues, e.g., “To strengthen the autonomy of people in need of care, it is important to talk to colleagues from other professional groups about particular residents. This opens up new perspectives” (G12).

5.1.3 Justice

As depicted in Table SI 3, participants identified nondiscrimination and, more particularly, distributive justice, i.e., the fair allocation of resources, as focal concepts of justice in everyday nursing practice. These concepts also fit well into the broad definition of justice proposed by Beauchamp and Childress [41].

Several participants argued that their concept of justice prohibits treating people differently due to characteristics such as “their religion or the color of their skin” (G6).

Many participants emphasized the relevance of a fair allocation of time and attention to care recipients, presumably due to the frequent scarcity of nursing staff, which demands that health professionals set priorities. However, the participants held different views on what constitutes a fair distribution of these resources. While some reasoned that nurses “… shouldn’t concentrate on an individual patient because [they] might get the impression that he or she needs [them] more than other patients” (G4) (i.e., the equality principle), others articulated the view that the allocation of resources should be based on individual needs for basic care and/or social support (i.e., the need principle).

In addition, several participants mentioned that the realization of these concepts is not always achievable in (professional) everyday life. One obstacle to the realization of the first concept is that some care recipients may be more “visible” than others. A specific issue with a strictly needs-oriented allocation of resources is that care recipients’ ability to articulate their needs may be limited due to cognitive and/or communicative impairments.

5.2 Expected influence of AI technology on the actualization of the principles

Participant statements relating to their expectations regarding the actualization of the principles of beneficence, respect for autonomy and justice in the context of AI-assisted decision-making are categorized below into identified risks and opportunities.

5.2.1 Identified risks

Participant-identified risks regarding the use of AI-assisted technology frequently relate to the principle of beneficence and, in particular, associated aspects concerning the nurse–patient relationship. Many participants were concerned that the adoption of such technologies could compromise the promotion of emotional well-being, which is one of the core aims of beneficence. For instance, one participant reasoned that “…the use of the device could lead to patients feeling that the nurse only looks at the screen and no longer talks to them” (G12).

The participants highlighted that the use of AI-assisted technology may negatively impact demands related to the recognition of care recipients’ individual needs (i.e., recognizing needs). Risks identified in this context mostly addressed nurses’ empathy for and awareness of the vulnerability of persons in need of care, both for care recipients in general and for care recipients with impaired communicative abilities, e.g., “[With this technology,] I think the nurse would no longer be as aware of what the person in need of care is expressing in a nonverbal manner” (G7). Similarly, some participants expressed the fear that AI assistance could discourage nurses from exploring care recipients’ motives, such as in the event that a care recipient refused certain care procedures, e.g., “…when using such technology, nursing professionals … would tend to reflect less. They would spend less time thinking about what the other person wants” (G9). Moreover, care recipients expressed concern that the use of AI technology would disrupt interpersonal communication with nurses, as nurses might be preoccupied with operating the technology, e.g., “From my point of view, it is more personal and much more pleasant to talk to a nurse who is not simultaneously busy using such technology” (R5).

Mainly with regard to tasks in the context of social care and the organization of workflows, individual participants articulated the worry that AI-based decision support could impair the willingness of nurses to take responsibility for patients’ well-being (i.e., assuming responsibility). One participant stated, “I think [the nurse] feels validated when using the technology and questions less whether a decision is appropriate” (G10). Relatedly, participants assumed that nurses’ experiential knowledge would decrease as a consequence of regularly using such technology. While they reasoned that such support may be suitable for providing orientation and confidence in (time-)critical situations, they likewise expressed the view that the ability to weigh and balance risks and opportunities could gradually decrease, e.g., “I see a disadvantage in that you would probably tend to think less independently and instead follow standard procedures” (G15). One care recipient, moreover, raised the concern that particularly inexperienced nurses may no longer learn to independently weigh the possible consequences of decisions in situations with moral implications, e.g., “I believe it depends on how long a nurse has been in the profession. A person who hasn’t been doing it for very long would certainly be highly influenced by the decision support [of AI technology]. Will that person ever be capable of making such decisions on his or her own?” (R5).

In the context of basic care, participants were also concerned about possible influences on patients’ autonomy. As shown in Sect. 5.1.2, many participants perceived that both relational and individual autonomy could be improved by communication. Relatedly, some individuals expressed discomfort about the possibility that information asymmetries and dependencies would increase if nurses “…don’t engage in negotiation with the resident as much” (G1). One nurse explained, “Nurses are in a position of power over people in need of care. In uncertain situations, they enforce what they think is right. I think this disparity could become even greater [with such technology]” (G9).

Finally, several participants noted that the introduction of AI technology could negatively impact the objective of considering individual (subjective) needs when allocating resources (i.e., the need principle, see Sect. 5.1.3), e.g., “Since the system is fed by data, [I assume that] in case of doubt, it would recommend caring for the higher number of care recipients regardless of the individual feelings of those in need of care. Very pragmatic” (G12). This perception that, in situations involving aspects of distributive justice, the individual situation of those in need of care might be reduced to measurable data is closely related to the identified risks relating to beneficence. One person stated that “…such a decision must always be made after weighing all the individual points that play a role in a given situation. …a computer can’t grasp how someone feels inside” (G3).

5.2.2 Identified opportunities

In addition to possible risks, the interviewed nurses and care recipients also identified several opportunities arising from the use of AI-assisted technology. Again, considerations primarily focused on beneficence. In particular, with regard to basic care tasks, participants reasoned that the use of such technology could prevent physical harm. Many participants assumed a positive influence of AI assistance on the empirical basis of decisions made under uncertainty, e.g., “Such applications would certainly provide added value not just by shortening the decision-making process but also, I think, above all ensuring that decisions are more empirically sound” (G12).

While several participants were concerned that the ability to weigh benefits and risks associated with different caring actions could decrease with regular use of AI technology (see Sect. 5.2.1), some also expressed the hope that the expanded information base would provide assurance and guidance to inexperienced nurses in (time-sensitive) critical situations, e.g., “…I think in situations in which it is important to act quickly, a system like this could be very helpful for new colleagues. Because you really, yes, sometimes you don’t know what to do for a moment” (G10). Some participants further assumed that this decision support could motivate nurses to reconsider their intuitions, e.g., “In order to reflect on your own intuition, I think such a system is actually quite useful. At least, if the various aspects that are important in certain situations are highlighted” (G6).

In addition to the potential support of AI technology in situations requiring nurses to weigh their options to prevent (physical) harm (a key aspect of assuming responsibility), one care recipient envisioned that this technology could support a holistic assessment of patients’ needs in the first place, e.g., “Nurses are different. Some make little effort to recognize what is going on in a person in need of care. …such technology could, perhaps, identify more precisely where the shoe pinches” (R3).

In the context of social and basic care, several participants identified a further opportunity arising from the expanded information base associated with AI-assisted technology: “I think such technology could provide reassurance to some residents because they can get additional information, sort of like a second opinion” (G13). The participants reasoned that, in this manner, AI technology could promote care recipients’ ability to make informed choices and improve their perceived self-determination (i.e., individual autonomy), e.g., “It would be good if there was a bit more transparency in the interaction between the nursing staff and the residents. With such technology, some residents would probably be more likely to be convinced because they would see that the information referred to was not made up but documented” (G13).

Another positive aspect mentioned by participants was a potential benefit regarding a fair(er) distribution of resources. Referring to tasks related to organizing workflows, participants noted that the adoption of AI technology may provide a more objective basis for workflow prioritization (when the technology is informed by patient needs), e.g., “Such systems can have a positive effect. Because with them, I think, you are less driven by emotions but more objective, that is, really guided to what is needed” (G5). Relatedly, participants expressed the hope that, depending on the system design, the technology could strengthen the concept of nondiscrimination (see Sect. 5.1.3); in other words, resources could be distributed independently of the visibility of individual care recipients and instead be guided by their need for care.

Overall, participants mainly perceived advantages in the adoption of AI technology for expanding and increasing the objectivity of nursing professionals’ (information) bases for clinical decision-making.

6 Discussion

To complement ethical principles considered relevant for the design of AI-assisted technology in health care with a context-specific conceptualization of the principles from the perspectives of individuals potentially affected by the implementation of such technology, we first investigated stakeholders’ contextualized perspectives on three principles: beneficence, respect for autonomy and justice (Q1 and Q2). Building upon this analysis, we investigated participant expectations regarding the actualization of their concepts of the principles in the context of AI-assisted decision-making (Q3). Thus, we provided initial indications regarding which principles ought to receive particular attention when designing AI technologies for nursing care.

Our analysis of participant reasoning in situations involving moral decision-making that occur in everyday nursing practice indicates that nurse and care recipient perspectives are largely compatible with the principles of beneficence, respect for autonomy and justice. Thus, these three principles of biomedical ethics are applicable to the field of nursing care and are a suitable starting point for explaining and categorizing nurse and care recipient beliefs and reasoning in situations involving moral decision-making in the fields of basic care, social care and the organization of workflows (Q1).

Moreover, these results demonstrate that a qualitative analysis of stakeholder reflections on ethical principles based on scenarios depicting care tasks associated with moral decision-making (Q2) provides a more nuanced understanding (i.e., context-specific conceptualization) of the principles as well as of their actualization through situational factors and, in particular, demands.

The results confirm that the principles’ definitions need to be specified for and adapted to care-specific requirements. Participant concepts of beneficence were largely consistent with the definition of Beauchamp and Childress [41]; in particular, participants highlighted the demands to recognize care recipients’ needs and to assume responsibility for the identified needs. With regard to respect for autonomy, many participants noted that autonomy may require that care recipients be free from controlling influences and/or that (capacities for) autonomous choice be promoted [ibid., p. 105] (the concept of individual autonomy). However, other participants argued that patient autonomy can also be ensured by preserving a person’s sense of identity as well as by utilizing shared decision-making (the concept of relational autonomy). Caregiver and care recipient concepts of justice were, again, broadly in line with the definition of Beauchamp and Childress. Many participants referred to “the obligation to fairly distribute benefits, risks and costs under conditions of scarce resources” [ibid., pp. 13, 250]. Our analysis additionally suggests that the participants hold different views on what constitutes a fair distribution. While some advocated an equal allocation of resources to each care recipient (the equality principle), others argued for an allocation of resources based on individual needs for basic care and/or social support (the need principle).

Hence, our analysis indicates that a stakeholder-oriented specification of the principles allows for, or even requires, integration of specific theories of healthcare and nursing. Notably, statements relating to demands perceived as critical to the actualization of beneficence closely corresponded to Tronto’s assumption that beneficent care should be regarded as a dynamic process and, in particular, assessed along different phases [51]. Moreover, participant concepts of respect for autonomy referring to it as a relational process closely relate to feminist reconceptualizations of autonomy (e.g., [64, 65]). Such accounts highlight the importance of interpersonal or social conditions and a person’s sense of identity for the realization of autonomy, which therefore contrast with an individualistic interpretation of autonomy.

With regard to Q3, our analysis showed that the participants anticipated risks as well as opportunities relating to the actualization of their concepts of all three principles, and especially their concepts of beneficence, in the context of AI-assisted decision-making. In particular, care recipients reasoned that the use of AI-assisted technology could disrupt interpersonal relations as well as communication (the concept of recognizing needs) [15]. Both groups assumed that there would be a negative impact on nurses’ experiential knowledge (the concept of assuming responsibility) [ibid.] and that the technology could discourage nurses from exploring care recipients’ motives. On the other hand, participants envisioned that such technology, particularly with regard to basic care tasks, could prevent physical harm, e.g., by providing evidence-based health information for decision-making in uncertain conditions [14] and by motivating nurses to reconsider their intuitions (the concept of assuming responsibility).

Possible influences on the realization of respect for autonomy mainly concerned two aspects: on the one hand, an increase in information asymmetry was considered to reduce care recipient autonomy; on the other hand, an expanded information base was considered to promote care recipients’ ability to make informed decisions, thereby strengthening their autonomy (the concept of individual autonomy). Moreover, participants expected that adopting AI technology in tasks related to organizing workflows could negatively impact the consideration of individual (subjective) needs when distributing resources (the need principle); however, adoption of this technology may improve the distribution of resources independent of the visibility of individual care recipients (the concept of nondiscrimination).

In conclusion, our study generated a prospective understanding of how AI-assisted technologies might modify social structures and practices as well as existing asymmetries within care contexts. Participants reasoned that such technologies may improve and augment nurses’ abilities, assist in the identification of novel solutions to well-known problems such as discrimination, and help to manage complexity (e.g., within tasks that demand situational weighing). At the same time, however, participants warned that AI technology carries the inherent risk of unintended side effects, such as an objectification and rationalization of the nurse–care recipient relationship.

6.1 Implications for future research

The study results underscore the importance of a context-specific conceptualization of ethical principles relevant for AI-assisted decision-making to address the current epistemic uncertainty regarding the risks and opportunities associated with the (non)fulfillment of ethical principles. Moreover, existing guidelines not only appear too vague to guide the design of technologies based on ethical principles but are also blind to stakeholders’ individual needs and interests. To ensure that ethical guidelines for AI assistance are sensitive to the interests and needs of stakeholders, AI technology guidelines should be specified within specific contexts of use. Relatedly, we recommend that future studies also consider both nurse and care recipient perspectives when generating bottom-up knowledge regarding the actualization of ethical principles in the context of AI-assisted decision-making. While considering ethical requirements within situations involving moral decision-making falls within the responsibility of nurses, their fulfillment also needs to be assessed by care recipients.

In addition, future studies should assess in greater detail how the implementation of AI-assisted technology may alter nurses’ tasks and affect their perceived moral coercion. The use of digital care services can be associated with moral distress (e.g., [66]), i.e., the experience of not being able to act according to personal and professional values [67], which is frequently reported by nurses [68]. However, to date, no studies have focused on the influence of AI-based systems.

Our study, moreover, suggests that the ethical principles of beneficence, respect for autonomy and justice provide suitable guidance for the development of care-specific indicators that can help to align AI-assisted technologies (in the field of nursing) with stakeholders’ moral interests. Specifying such indicators in terms of more concrete design considerations for AI systems and the corresponding instrumental principles (such as explicability [42]) requires the integration of interdisciplinary and transdisciplinary perspectives (e.g., from the social sciences, computer science and occupational sciences) to provide a rich(er) understanding of the coconstruction of technological and social phenomena (see also, e.g., [69, 70]). In addition, specifying the ethical assessment of AI-assisted technologies for care (e.g., using methods from the field of technology assessment [71]) requires broader knowledge of the technological possibilities of specific AI-assisted applications.

Further research is also needed to determine whether stakeholders’ reflections on moral decision-making situations associated with different bioethical principles (such as ‘integrity’, ‘autonomy’, ‘vulnerability’ or ‘dignity’, proposed by Rendtorff [47] and Häyry [48] as specifically European principles) can broaden the set of ethical principles considered relevant for the design of AI. Similarly, different analytic methods, such as grounded theory, may help to identify further ethical principles relevant for the design of AI-assisted technology in nursing care.

Ultimately, it may be necessary to develop innovative system design approaches that enable the integration of ethical principles in an iterative process throughout a technology’s entire lifecycle. Traditional engineering processes and current risk analysis methods do not allow for a continuous assessment of possible risks, i.e., there is no open feedback loop between operators and system designers. As many algorithms underlying AI technologies are able to adapt to their environment (and given the black-box nature of frequently applied deep learning models), it would be useful if information on the extent to which technologies already in use affect social structures were available during the design process [72].

6.2 Study limitations

The results of this study must be interpreted in light of some limitations. First, while there is ample reason to prospectively deliberate on the potential consequences of emerging technologies, individuals who are unfamiliar with such technologies may have a limited understanding of the technologies’ abilities and their impacts on everyday (professional) life. In our study, this is particularly likely in the care-recipient group. Second, although our scenarios were designed to be comparable to real-life situations, the addition of different context-specific information could result in different principle-related statements. Third, we decided not to include a scenario prompting reflection on the principle of nonmaleficence because we aimed to respond to the (potential) vulnerability of participants in the care-recipient group. However, future studies could explore nurses' moral reasoning regarding nonmaleficence.

7 Conclusion

Artificial intelligence (AI)-assisted technologies may exert a profound impact on social structures and practices in health care contexts. Our study helps to translate ethical principles considered relevant for the design of AI-assisted technology in health care into practice. In particular, our analysis provides a context-specific conceptualization as well as adaptation of the well-established principles of biomedical ethics in the context of long-term care and, building upon this, generates bottom-up knowledge regarding the actualization of the ethical principles in AI-assisted decision-making in care contexts. Thus, we provide initial indications regarding which concepts of the investigated ethical principles ought to receive particular attention when designing AI technologies to ensure that these technologies are not blind to the moral interests of stakeholders in the care sector.