1 Introduction

In the course of the increasing privatization and economization of the healthcare sector, artificial intelligence and other digital solutions (Bär 2011; Marckmann 2021; Mohan 2018) offer companies and organizations (such as hospitals) promising opportunities to develop and innovate organizational procedures and internal processes, the services offered (e.g., in the prevention, diagnosis, or treatment of diseases), and business models (van Giffen et al. 2020). The potential of artificial intelligence to independently evaluate and interpret large volumes of data algorithmically (e.g., through artificial neural networks) and thus make autonomous decisions opens up new paths in operational marketing, economic orientation, and quality of care. At the same time, new opportunities and obstacles arise with regard to the operational organization and design of work environments and processes and the distribution of responsibilities and powers.

In this paper, we attempt to profitably combine a sociological analysis of the power relations and work processes in healthcare organizations under the growing influence of AI innovations with an ethical-normative analysis. Based on the sociological findings, we try to show how power becomes an ethical matter by asking which theories are relevant for an ethical analysis of the problem of shifting power relations. Power is not only a political or sociological concept but is also relevant in the context of ethics, e.g., when it comes to asking whether a doctor or a medical AI system, which can undoubtedly assume a certain epistemic authority, has the overriding power and legitimacy to influence and manipulate social relations and ethical relationships in work organizations. Furthermore, ethics also speaks of power when it comes to agency and self-determination. Ethical power can thus prove to be a complementary concept to ethical responsibility.Footnote 1 We hold that an organizational ethics framework, particularly when integrated with aspects of virtue ethics, enables a precise analysis of issues pertaining to evolving power dynamics.

In the current literature, there are no studies that deal extensively with the problem of power shifts and power imbalances in relation to the specific use of AI in healthcare institutions. The issue of power is either focused on problems of data protection (Bavli et al. 2024) or is solely concerned with the question of how the use of AI changes global power structures (Polcumpally 2022), how the state can exercise information control (Abbate 2023), or how power imbalances between developers and users of AI systems can be assessed (Maas 2023). The few studies that touch on shifting power relations due to the use of AI systems usually do so without reference to organizational socio-technical causes or ethical issues.Footnote 2

The following article opens with an exploration of the role of AI systems in healthcare, focusing on diagnostics, therapy, and decision support. It examines the configurations of power within healthcare organizations, beginning with a comprehensive discussion of power as a social construct, informed by Foucault's theories, and of its interplay with labor. It then probes the specific power dynamics within healthcare, emphasizing the fluidity and variability of power relations and AI's influence on these frameworks. The role of AI in formalizing, standardizing, and digitizing processes is explored, as are the consequent shifts in power dynamics stemming from human-technology interactions.

As a next step, the study explores ethical concerns, specifically the significance of virtue ethics and procedural justice within healthcare organizations: it seeks methods to detect and mitigate power imbalances by assessing a hypothetical yet plausible scenario in which a medication recommendation system is implemented in a clinical setting. The subsequent exposition of three distinct perspectives (restitution, exploitation, and compensation) on addressing the unjust creation and allocation of power highlights the essential role of virtues in ethical decision-making and organizational behavior. Our reflections then concentrate on achieving organizational justice in AI-centric healthcare organizations, addressing the need to mitigate new structural imbalances introduced by AI. References to Aristotle and Rawls highlight justice as a social virtue and the tenets of procedural justice. Finally, we discuss the challenges of preserving justice amid evolving power dynamics and AI's effects on employee autonomy, motivation, and patient well-being.

We conclude that the implementation of AI in healthcare should be considered as part of a holistic ecosystem, where ethical guidelines need to be put into practice. It is crucial that organizational measures are taken for quality assurance and continuous improvement, including collaboration between all decision-makers and stakeholders. AI-enabled healthcare institutions and tools must maintain organizational equity and transparency to prevent concentrations of power. An appropriate response to unwanted power shifts requires a compensatory strategy that promotes innovation and is morally justifiable. This can be achieved by promoting organizational justice and professional virtues.

2 Power (con)figurations in healthcare organizations—a sociological pre-study

2.1 What is power in general?

In order to approach the question of what effects the use of AI can be expected to have on organizational power relations, or how AI systems will possibly change and influence power structures and constellations in and around healthcare organizations, we first need to take stock of the underlying assumptions and theoretical premises.

Occasionally, the term ‘power’ is used synonymously with ‘domination’; there is little that can be said about power that would have general and unchallenged validity (Imbusch 2006). Following Foucault's understanding of power, we do not want to define power as a fixable quantity or a clearly definable entity. Rather, power manifests itself in the form of different (action) strategies, which in turn can give rise to different types of power. By power, Foucault does not mean a general system of domination, a governmental power with institutions, or a mode of subjugation, but rather the multiplicity of power relations within a social formation (Foucault 2020). Accordingly, power never exists on its own, but only as a social construct in connection with and in relation to other people. Power is thus to be understood as a contextual quality that only comes to bear in certain socially produced frames of reference. For example, the power relationship between doctor and patient is always determined by situational dependence on this particular role configuration. In another context, the power relationship between the same persons could be completely reversed.

Power can therefore always be understood first and foremost as an attribute of social relations and relationships. This perspective also reveals an important interaction between power and work: just as power is a feature of social relations, the organization of work and the concrete design of work have an indirectly constitutive effect on the organizational relations of power and domination in which they are embedded. Referring back to Foucault, it can thus be stated that power only exists as action (in actu). Accordingly, power relations appear as an ensemble of forms of action that operate in a (structurally) limited space of possibilities and are oriented towards further possible actions (Foucault 2005). This argument is further supported by the fact that social (power) positions are not only formally established in structures but can also be established informally, or undermined in the opposite sense. Accordingly, communicative behaviors as well as cooperative relations among employees have an impact on social relations and therefore shape power relations in an organization to the same extent.

By switching the level of analysis from the interactional to the structural level, power can complementarily be understood as an attribute of (complex) social networks. Power is distributed in these networks (Foucault 1994), whereby its concentrations and the relationships fluctuate permanently. Persistent power structures in organizations thus require a continuous reproduction of their enabling conditions. As a consequence, power refers only to a relationship of dependency within a social relationship itself (or a network of social relationships and interdependencies) that, due to the reciprocity and temporal dynamics of social relationships, can never be described as static, but only (as with Foucault) as a volatile manifestation of a fluid or unstable figuration of structures and relations. In this sense, we want to understand power as a quality of social structural conditions that, in combination with other sociological and psychological conditions, establishes, interferes with, and institutes the behavior of specific people. Based on this, the power relations in healthcare organizations most important for our analysis will be presented in brief and analyzed with regard to their contextuality and variability. On the one hand, our considerations are of a theoretical nature; on the other hand, we also refer to our empirical data, which consist of more than 30 interviews with medical staff as well as one week of ethnographic field research on two different hospital wards. Our empirical data were collected as part of the “VUKIM” research project, which is funded by the Federal Ministry of Education and Research (BMBF).

In this paper, we do not develop a full theory of power in relation to AI. Instead, we would like to refer to the work of Mark Coeckelbergh, who has elaborated, in line with our approach, that “AI has power over us. Blurring the line between technology and politics, in particular, between AI and power, we can thus speak of AI as ‘artificial power’: not because AI is all-powerful, but because power is exercised through AI.” (Coeckelbergh 2022, 123f.). However, what is missing or remains relatively vague in Coeckelbergh’s work is an answer to the question of how the power exercised by AI is concretely reflected in social and economic institutions, such as healthcare organizations, and what changes in organizational structures this power can bring about. We would like to close this gap with the following sociological analysis, accompanied by a virtue ethics perspective that can be made fruitful for the design of just healthcare organizations that want to rely more and more on digital solutions.

2.2 Power in healthcare organizations

2.2.1 Dynamic power structures in healthcare organizations

Understanding dynamic power structures in healthcare organizations is central to assessing the expected impact of AI in the healthcare system, because it is still not uncommon to get the impression of purely rationally acting companies and organizations that function and behave according to plan and in line with organizational charts and process descriptions (Berger 2018). In practice and in empirical observation, however, we encounter "living" organizations in which people pursue individual interests and goals, make decisions, and have needs. The (individual) actions of individual actors in the collective also constantly reshape the (social) structures of the organization. Accordingly, all actors in organizations must first be regarded as participants in decision-making processes or as potential decision-makers (Wilz 2010). Hence, decision-making competencies, areas of responsibility, and powers are important indicators of the distribution of power and power relations in organizations, because the ability to make certain decisions and to decide about specific topics is also the ability to dispose of a state of affairs. The question of who may or must decide on which issue is thus always linked to a legitimizing position of power. However, it is not only the formal decision-making bodies in management that play an important role in organizational decision-making processes. If one focuses on the actual implementation of decisions, the relevance of employees and the workforce also becomes apparent. Ultimately, this is where the course is set for how decisions are dealt with in an organization. Are they accepted or rejected? Heeded or not heeded? Implemented or undermined? These questions are not decided at the management level alone but manifest themselves in everyday working practice.

Furthermore, power is not only embedded in social relations; power and knowledge also form two closely intertwined and interdependent concepts (Foucault 2020). Seen in this light, there is no power relation "without a corresponding field of knowledge being constituted and no knowledge that does not at the same time presuppose and constitute power relations" (Foucault 1994). This connection, elaborated by Foucault, also makes clear how much power and how many power potentials are "hidden" in the workforce of organizations. It is undisputed that the more deeply people are involved in a work process, the more knowledge and understanding they accumulate in this context. This knowledge, in turn, enables them to influence the work process according to their ideas and needs and within the scope of their possibilities. Knowledge (about organizational processes) can therefore also be understood as a resource for exercising power.

2.2.2 Healthcare organizations as institutions of power

Being a resource for the exercise of power is not the only reason why hospitals and other medical institutions can be understood as institutions of power in several respects. On the one hand, healthcare organizations such as hospitals can already be described as hierarchically structured organizations by virtue of their legal and formal constitution; on the other hand, interpersonal power relations within these structures are continuously (re)produced and institutionalized in the daily actions and communications between the staff on a micropolitical level of action. Thus, everyday work in the medical sector is also conditioned by the strong hierarchies between the individual professions: doctors, for example, enjoy extensive autonomy of action, while employees in nursing or administration act on the basis of doctors' prescriptions or instructions (Meier et al. 2020). Finally, these structures and the views on justice, on the relationship between ethical and non-ethical competences, and on the correct coordination of one's own and others' decision-making powers, which are weighted differently depending on the profession, are integrated into the functional logic and visions of action of the health system as a whole.

While similar power configurations can also be observed in organizations outside the health system, the characteristic that is special to medicine is that in these institutions, power is exercised over the bodies and health of patients by doctors, nurses, and other staff. In this sense, hospitals can be understood as an institutional (power) configuration. The medical staff are themselves subject to the specific figurations of power. At the same time, they exercise power in the form of treatments and interventions on patients, through which they can be understood as both recipients and protagonists of power. This reciprocity is also repeatedly evident in our interviews, in which it is explained on numerous occasions that one has to come to an arrangement with the requirements and conditions on site in order to be able to work successfully. Here, "conditions" explicitly refers to the supervisors, the technology used, and the patients' own will. One nurse interviewed reported: "You always have to have it on your mind: What does the doctor want me to do? How and when do I do this?", and a colleague of hers added: "[…] and I still have to type it into the programs and make sure that it all fits".

Moreover, this exercise of power is not always based on the complete consensus of both parties. For example, although it can usually be assumed that an unconscious emergency patient would approve of his or her treatment, certainty in such a case can only be obtained afterwards. As previously stated, knowledge can be understood as a power resource, which is also evident in the doctor-patient relationship. Accordingly, it can be assumed that there is usually a difference in knowledge between the doctor and the patient with regard to the treatment. Although it is the doctor's task to inform the patient as precisely as possible about the intervention, treatment, side effects, etc., and about alternatives, the doctor ultimately always has a knowledge advantage that cannot be eliminated in the brevity of a consultation. The extent to which the power relationship between doctor and patient can be precisely determined, and how AI systems affect this relationship, will be discussed in more detail later on.

For the following considerations of the influences and effects of the application of AI systems, three power relations are particularly relevant: a) the organizational structure already discussed, i.e., the power structures within the organization – in other words, the question of how the social relations in the organization change. This figuration is, to a certain extent, the field in which it is decided how ethical principles, guidelines, standards, and requirements are dealt with. In addition, b) the relationship between people and technology is relevant, as it also allows conclusions to be drawn about the development of the medical professions (such as those of doctors or nurses). Finally, c) the unity of humans and technology in healthcare is an important actor, especially with regard to the increasing use of AI systems in medicine.Footnote 3 Since a large part of the scientific discussion regarding the ethical challenges, risks, and possible consequences of the use of AI in medicine focuses on AI systems that are used primarily in diagnostic and/or therapeutic contexts, we follow this discourse in this paper. Accordingly, we focus in particular on those AI systems that (in the future) will be used either autonomously or as assistance or decision-support systems in prevention, diagnostics, or therapy and which are thus directly integrated into the work, responsibility, and decision-making sphere of the medical or nursing profession.Footnote 4 So, whenever AI is mentioned in this paper without further specification, it usually refers to AI systems that are used in a diagnostic or therapeutic context.

2.2.3 AI-specific nodes of power within health organizations

2.2.3.1 Human-technology interaction

With regard to the relationship between people and technology, it can be argued that new technologies as well as ongoing technological developments and their dissemination have always been factors that shape social relations of power (Popitz 1995). However, technology not only shapes specific relations of power; technology and technologies themselves have always been sources, resources, instruments, and means for the exercise of power and domination (Imbusch and Steg 2022). This observation can also be applied to the use of AI in the health sector, since the implementation of AI has comprehensive and far-reaching effects on the respective power configurations (for example, with regard to decision-making competences in certain work-related contexts). The question of how far-reaching the consequences of these developments will be for the (working) relationship between human and machine cannot be answered without further ado, however, as this is not a monocausal relationship. The relationship between humans and technology can neither be determined statically nor universally but is in turn dynamic over time and varies from case to case and from constellation to constellation. Regardless of the specific form, technologies, such as AI in this case, are undoubtedly to be understood as entities that affect social relations as well as the power structures within which they are used.

With increasing digitalization, and especially with the introduction of AI, two different (work-related) developments can be expected in healthcare organizations. It can be assumed that the introduction of AI will further increase the degree of formalization, standardization, and datafication (Timmermans and Epstein 2010; Timmermans et al. 2017; Jansen 2019) in healthcare organizations. After all, artificial intelligences, as data-processing algorithms, are ultimately based on principles of stochastic (relational) and statistical aggregation and evaluation. AI systems are therefore not only dependent on the data that are fed in but are also ultimately strictly limited in their output. The growing importance of data suitable for analysis and evaluation with AI, and its increasing use, is also accompanied by a preference for a certain data-centered model of medical practice; or rather, the power to define this medical perspective is gaining ever more validity. This assumption can also be justified from a technical point of view by considering that the implementation of sociotechnical AI solutions becomes easier the more the social environment adapts to the needs of the AI system. In this, a latent tendency toward standardization of the social can be identified, whether in terms of the specific use of AI or in the creation of the structural (formal-legal) and social (institutional) conditions for it. This standardization and norming of behavior and knowledge takes place as AI generates certain standardization effects on knowledge and action in social practices, e.g., in the work process. However, this is accompanied by the possibility of far-reaching shifts in power within organizations. On the one hand, standards offer security against arbitrariness; on the other hand, their implementation always requires adaptation on the part of the actors. Beyond this, the question arises of who defines the standards and how they are implemented. The trends described here can also be found in our empirical material. For example, many of the physicians and nurses we interviewed as part of our research project emphasized the increase in documentation work (in volume as well as in level of detail) that accompanies the advancing digitization of work processes.
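
The dependence of AI systems on standardized input data and the strict limitation of their output can be made concrete with a minimal sketch in Python. The sketch is our own, purely illustrative construction; all names in it are hypothetical and it does not describe any real clinical system. It shows that a trained classifier can only ever return labels contained in its training data and can only process cases that fit a predefined schema, which is precisely why the social environment tends to adapt to the system's data requirements rather than the other way around.

```python
# Minimal, hypothetical sketch (for illustration only): an AI classifier's
# output space is fixed by its training data, and its input must conform
# to a standardized schema -- the work environment has to adapt to both.

from dataclasses import dataclass

@dataclass(frozen=True)
class StandardizedCase:
    # Only features defined in advance can be processed; experiential,
    # embodied knowledge of the staff has no field in this schema.
    age: int
    systolic_bp: int
    lab_crp: float

class ToyDiagnosisModel:
    def __init__(self, training_labels: set[str]):
        # The diagnoses present in the training data exhaustively
        # define what the system can ever "say".
        self.known_labels = training_labels

    def predict(self, case: StandardizedCase) -> str:
        # Placeholder decision rule standing in for a trained model.
        label = "infection_suspected" if case.lab_crp > 50.0 else "unremarkable"
        assert label in self.known_labels  # output strictly limited
        return label

model = ToyDiagnosisModel({"infection_suspected", "unremarkable"})
print(model.predict(StandardizedCase(age=67, systolic_bp=140, lab_crp=82.3)))
# A condition absent from the training labels can never be returned, and
# a case that cannot be expressed in the schema cannot be assessed at all.
```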

Furthermore, from a power-analytical perspective, the spread of AI and standardization in the professional field of physicians could also cause a conflict between developers of AI and practicing medical staff regarding interpretive sovereignty and expertise, for example, over questions of work practice or definitions of health, normality, and deviation (Laufenberg 2016; Huber 2020). The same can be said for implicit, uncertain knowledge and subjective working capacity, that is, experiential knowledge, skills, and abilities that depend on the individual person, are inscribed in the body or mind, and are therefore by nature difficult to depict quantitatively, or that elude adequate detection and thus algorithmic evaluation altogether (Pfeiffer 2014). This circumstance accordingly also affects the relationship between humans and technology: subjective characteristics may be harder to bring to bear in a system geared towards the generation and processing of (standardized) quantifiable data, and the resulting "skill-set" will undoubtedly differ from the current status quo. Moreover, the more extensively a workflow is standardized, the less room there is for individual deviations and adaptations, regardless of whether these would have a positive or negative impact on the work process or the outcome of the work.

This context highlights the importance of developing AI systems in line with the work processes in which they will be used and in collaboration with the people who will later use them (Herrmann and Pfeiffer 2023; Pfeiffer 2020; von Richthofen et al. 2022). After all, AI systems are also human-designed programs, a fact that is relevant with regard to decision-making processes, both for autonomous AI systems and for so-called decision-support systems. Thus, in the case of autonomous decision-making systems, it must be taken into account that even before the algorithmic decision, a multitude of human influences have shaped the effective functioning of the AI, for example in the form of the selection of applied criteria or training data, or generally in the course of prioritization, assignment, classification, and filtering processes (Diakopoulos and Deussen 2017). The same applies, conversely, to AI assistance systems that do not make (action) decisions in the sense of an acting actor but ‘only’ present recommendations for action or research results. The development and use of these AI assistance systems, however, presuppose an almost immeasurable number of constitutive decisions made in advance, ranging from the conception of the functional scope and the determination of the application area of the AI system, through data collection and evaluation, up to the design of the user interface and the interpretation of the data. If one considers everything that is needed for the production of AI systems – the number of people and interests involved, the required coordination processes, decision dependencies, access to training data, etc. – it becomes apparent that applications of artificial intelligence are already involved in complex power relationships during their production, quite independently of their later function and area of application (Bray 2007; Mellström 2009). This observation ultimately also concerns the technical-authoritative claim to neutrality often ascribed to AI systems, which are not to be understood as neutral technical tools but always as products of the (social, economic, legal, and ethical) contextual conditions of their production, and which can therefore also be understood as manifestations of power relations and their negotiation processes.
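
To make this point more tangible, consider a brief, purely hypothetical Python sketch (our own construction, not taken from any cited system; all names and values are invented). Design-time choices such as a risk threshold, a filter on which cases the system sees at all, and a cap on the number of displayed suggestions all pre-shape what clinical staff will ever get to see, long before any 'algorithmic decision' takes place.

```python
# Hypothetical sketch: many "decisions" of a decision-support system are
# in fact design-time choices made by developers, long before any
# algorithmic output reaches clinical staff.

RISK_THRESHOLD = 0.7            # who counts as "high risk" is a developer choice
EXCLUDED_WARDS = {"palliative"} # filtering: whose cases the system sees at all
MAX_SUGGESTIONS = 3             # prioritization: how many options users are shown

def recommend(cases: list[dict]) -> list[dict]:
    # Classification, filtering, and ranking all reflect prior human
    # selections (criteria, training data, interface design).
    visible = [c for c in cases if c["ward"] not in EXCLUDED_WARDS]
    flagged = [c for c in visible if c["risk_score"] >= RISK_THRESHOLD]
    ranked = sorted(flagged, key=lambda c: c["risk_score"], reverse=True)
    return ranked[:MAX_SUGGESTIONS]

cases = [
    {"id": 1, "ward": "cardiology", "risk_score": 0.91},
    {"id": 2, "ward": "palliative", "risk_score": 0.95},  # filtered out by design
    {"id": 3, "ward": "cardiology", "risk_score": 0.55},  # below chosen threshold
]
print(recommend(cases))  # only case 1 ever reaches the user
```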

2.2.3.2 Intertwinings between system, organization, and individual

In general, many social science studies on AI in healthcare refer primarily to the question of the substitutability of human labor. For AI applications in clinical psychiatry, a study among psychiatrists (registered with Sermo, a global network platform open to verified and licensed physicians) concludes that only a minority of less than 4% believe that AI could make their own work redundant in the future. Likewise, only 17% think AI could replace humans in empathetic care. In contrast, 75% of respondents expect support in documenting and updating medical records and 54% in synthesizing information (Doraiswamy et al. 2020). To a certain extent, such study results also reflect the state of development of a large number of current AI systems, which already achieve the same level of competence as doctors in certain isolated tasks (Heyen and Salloch 2021) but are still far from being able to replace holistic professional profiles. Nevertheless, it seems foreseeable that this relationship may develop further in favor of AI systems in the future. From an ethical perspective, the distribution of decision-making authority between the AI system and the doctor is of particular importance. For example, around 60% of the participants in one study are of the opinion that doctors should not rely on an AI system to make a diagnosis (Buchkremer et al. 2020). Contrary to the often-assumed substitution potential of AI applications, it is rather apparent that, at least for now, substitution (at least of doctors) is neither desired nor possible. On the contrary, the current state of knowledge regarding the acceptance of AI systems makes it clear that decision-making competence is mostly desired in human hands. At least for the moment, the question of how the expected changes in the distribution of important (work-related) resources and the changing social relations (within healthcare organizations) will affect work in the healthcare system, and thus healthcare provision, seems much more urgent.

However, it remains important not to examine these change and transformation processes independently of the surrounding environment. Hospitals and other healthcare organizations cannot be thought of without their environment, within which they act and behave as a part of the system. The health system as an overarching political-economic regime thus not only determines the formal legal constitution of many healthcare organizations; individual organizations are also strongly integrated into the path dependency of the entire system and dependent on it in their prospects for action and development. This aspect also reflects a special characteristic of medicine in general, which, viewed as a social practice constitutively intertwined with political and economic rationalities, is always searching for a suitable control and utilization of life (Laufenberg 2016). In this context, economization and the associated introduction of private-sector control and allocation procedures also have an important influence on organizations (Endreß and Matys 2010). These, in turn, can have an impact on the functional logic of the healthcare system as a whole as well as on the organization and practice of work in detail.

2.2.3.3 The doctor-patient-relationship

In continuation of digitalization processes that have already taken place, a strong and unreflective orientation towards AI data can lead to a shift or reduction of human competencies in a specific work context. As already mentioned, a gradual shift in competences can be expected for medical staff, which could also turn out to be a loss of certain qualities. As in other professions, knowledge gained from experience is an important pillar of professional competence in the medical field (Vogd 2004). However, experiential knowledge can only be acquired in practice and through actual "doing". If certain tasks are increasingly taken over by artificial intelligence, it is foreseeable that less experiential knowledge can be acquired, at least for these specific tasks. It is still unclear to what extent the routine use of AI systems has a negative impact on the abilities and skills (or skillsets) of users and possibly narrows their horizon of experience. This risk is all the more important from an "ethical point of view, as a triad of personal abilities, skills, and experience at the level of the human-to-human relationship between the treating person and the person being treated is of essential importance for decisions to become responsible" (Liedtke and Langanke 2021). The problem described by Liedtke and Langanke can be extended by two additional dimensions. On the one hand, people and staff are increasingly dependent on the technology used; on the other hand, the question arises of what right patients should or must be given to have a say in their treatment. Furthermore, this raises the question of how the role of doctors and medical staff will change in the future health system.

These questions are also relevant to the hopes of democratization and of breaking up traditional power relations in the healthcare sector, which are likewise associated with digitalization and the accompanying standardization processes. Indeed, there is the possibility that digital transformation could sustainably strengthen the power position of patients vis-à-vis healthcare providers (Haring 2019). By gaining easier access to information through the internet, patients can independently reduce the knowledge advantage of medical staff and strengthen their own knowledge and negotiation base. In addition, there are possibilities for pooling the interests of certain patient groups or for exchanging information on health services and service providers. In this way, several areas of responsibility that traditionally fall within the competence of medical staff are, to a certain extent, made accessible to patients or can be accomplished by them. However, these findings cannot be transferred to artificial intelligence applications without further ado. In contrast to everyday technologies such as the internet or certain gadgets for recording simple biomarkers, the evaluation of AI-generated results in some cases places considerable demands on the understanding and typification abilities of the interpreting users (Vogd 2018). In collaboration with artificial intelligence, medical staff could thus increasingly take on the role of interpreting, communicating, or, if necessary, relativizing the results. In this respect, AI applications can also be compared with other highly technical workflows in medicine, e.g., laboratory tests. Samples can be mixed up or labeled incorrectly, or the laboratory chemistry can be influenced by certain drugs and produce incorrect results. Accordingly, the primacy of medical interpretation applies here as well. The more the technically conveyed (laboratory) information is based on complex preparation processes, the more critical contextualization is needed by an experienced professional who is able to put the relevance of the technically generated data into perspective if necessary (Klinke and Kadmon 2018).

Accordingly, it cannot be assumed that AI systems will change the power relationship between medical staff and patients in favor of the latter in the short or medium term. As this relationship is already diverse and varies with context, AI applications can also have very different effects. In this respect, it can rather be argued that the manufacturers of the corresponding AI systems represent a new, important authority in the healthcare system, one that also bears corresponding responsibility in the design of future treatment methods and working practices in the medical field. With the increased use of AI systems for the diagnosis and treatment of patients, particular consideration must be given to "the risk of perpetuating incorrect clinical practices contained in the training data" (Heyen and Salloch 2021).

2.3 Provisional conclusion

Taking these interrelations into account when considering changing power relations, decision-making processes, and areas of responsibility is therefore an important level of context and analysis. It can be concluded that macro-political decisions and system-inherent requirements, as well as the resulting organizational adaptation and compensation efforts, have a profound effect on actual working practice. Furthermore, it becomes apparent that medical practice and the associated decision-making processes in the course of treatment are not solely based on medical factors, but also include a large number of social factors. From this fact, conclusions can be drawn about the dependencies and freedoms in the field of medical practice. This circumstance must also be taken into account when introducing and using AI systems. As shown, a multitude of influencing factors can be listed, all of which have a more or less powerful effect on the working practice of medical staff, whether they are of a social, economic, legal, or normative nature. Organization and structure can accordingly be seen not only as a space of power distribution (power distribution system) but also as a space of preventing injustice and enabling ethical conduct.

We can thus state as an interim result that the sociological analysis has identified numerous key aspects to which ethical analysis can now be applied. The next step is to explain how organizational power becomes a matter of ethical inquiry. The following three points are intended to build a bridge to the ethical analysis that follows.

  1. The “power” of organizational structures: Healthcare institutions are not only places of individual fates and encounters, but also institutions with systems of rules in which work is standardized, decision-making paths and hierarchies are ordered, and the distribution of competencies, means, and rights is determined (Burmeister et al. 2021). These conditions are currently subject to massive changes, especially due to technological change, and fundamentally influence the ethical actions of all persons working in these institutions.

  2. The (new) power of AI technologies: Healthcare institutions are naturally also susceptible to the fact that the introduction of new, and in the future probably even more AI-driven, technologies in the fields of therapy, diagnosis, data management, communication, etc., changes the power relations, decision-making processes, and responsibilities within an organizational unit or the entire organization. Especially with regard to AI, it must always be taken into account that a matter of technology is also a matter of power (cf. Coeckelbergh 2022, ch. 5).Footnote 5

  3. The need to transform these powers into ethical service for people working in healthcare organizations: The powers described in points 1 and 2 can fundamentally change the balance of justice and the general working atmosphere for the better or the worse; ways must be sought to take positive account of the new situation in individual and collective action.

3 How power becomes ethical in the clinic: a normative approach

3.1 AI-induced shifts of power relations: scenarios and ethical challenges

The sociological findings have identified nodes at which shifts in power relations are possible and at which an ethical analysis can start. Based on a counterfactual case study or scenario, we would like to conduct an ethical inquiry that allows us to draw conclusions about flawed ethical designs of AI-driven health organizations. The analysis is carried out along the following guiding questions: Who gains more or less power in a healthcare institution through the use of AI? Do shifted power relations automatically lead to an unequal distribution of power in the organization? If so, what are the risks associated with the unequal distribution of power? Does the use of AI technologies only lead to shifts in power relations and responsibilities in areas where these technologies are predominantly used, or does their use affect all areas of the healthcare organization? Subsequently, we will make suggestions on how to counteract unethical shifts in power relations in AI-driven health organizations or prevent them from occurring in the first place. Here is a possible scenario, in which a new AI-based medication recommendation system (cf. Ochoa et al. 2021; Poulose et al. 2022) has been rolled out.

Imagine that your hospital will soon have a new AI-based medication recommendation system (MRS) that can assist the medical care providers with the selection of an appropriate medication for the patients. Every year, countless prescription and treatment errors occur in the healthcare system. The new system promises to lead to more consistent decisions that can reduce human error and workload. In your clinic, the system is introduced after appropriate testing. The doctors rely more and more on the system's prescription suggestions, the technical administrative apparatus becomes larger, and the nursing staff realizes that the prescriptions the system makes are not always in line with the patients' experiences and tolerances. The doctors concentrate on other fields of activity, and the nursing staff fears for their jobs. The hospital is finally back in the black, but patient satisfaction has not improved. Overall, the number of incorrect prescriptions is decreasing, although doctors continue to check every recommendation issued by the system, and nursing staff must rectify some recommendations owing to their better knowledge of the patients and close interaction with them.
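
The division of labor implied in this scenario can be illustrated with a minimal, purely hypothetical Python sketch (our own construction; the function and field names are invented for illustration and do not describe any real MRS). It shows a human-in-the-loop workflow in which the system proposes, the physician must confirm, the nursing staff can object on the basis of patient knowledge, and every step is logged, which simultaneously redistributes responsibility and creates new possibilities of oversight and surveillance.

```python
# Hypothetical sketch of the scenario's MRS workflow (illustrative only):
# the system proposes, a physician must confirm, nursing staff can object
# on the basis of patient knowledge, and every step is written to an audit
# log -- redistributing responsibility while enabling new oversight.

import datetime

audit_log: list[dict] = []

def mrs_suggest(patient_id: str) -> str:
    # Stand-in for the AI recommendation component (assumed, not real).
    return "drug_A_500mg"

def prescribe(patient_id: str, physician_confirms: bool,
              nurse_objection: str | None) -> str:
    suggestion = mrs_suggest(patient_id)
    final = (suggestion
             if physician_confirms and nurse_objection is None
             else "manual_review")
    audit_log.append({
        "time": datetime.datetime.now().isoformat(),
        "patient": patient_id,
        "mrs_suggestion": suggestion,
        "physician_confirmed": physician_confirms,
        "nurse_objection": nurse_objection,
        "final_decision": final,
    })
    return final

print(prescribe("pt-001", physician_confirms=True, nurse_objection=None))
print(prescribe("pt-002", physician_confirms=True,
                nurse_objection="known intolerance"))  # nurse overrides MRS
```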

The scenario presented here, if taken further, would most likely amount to a shift in the balance of power and a change in everyone’s work situation through the use of the AI-supported MRS. It could be that staff in technical administration gain more power, but also more responsibility. Doctors and nurses must take on more supervisory tasks than ever before, which could make the work less attractive. On the other hand, doctors and nurses can concentrate more on other areas and no longer bear sole responsibility for prescription mistakes. If the system works transparently, the patient can better understand how this or that prescription came about. The (digital) contact between the patient and the drug manufacturer and the developer of the MRS becomes closer, while the relationship with the nurse might no longer be as important in this respect. In conclusion, all those who have a lot to do with the system gain more power but also more responsibility, while those whose activities are now taken over by the MRS lose power. In addition, the system increases the general possibilities of control and surveillance, which can lead to a feeling of restricted personal freedom at work and at the bedside.

If we now assume that the unequal distribution of power caused by the introduction of the new MRS is ethically questionable, we should consider how to address this problem.Footnote 6 To do so, an important clarification is necessary: Since this is an ethical analysis, we will not speak of social equality here, but of justice or fairness. In this respect, our considerations are based on a non-ideal, non-comparative, corrective, and (imperfectly) procedural concept of justice (cf. the taxonomy of Miller 2021) that is open to connection with the classical Aristotelian notion of justice as a social virtue.Footnote 7 In the current literature on the question of distribution, numerous authors distinguish between equality and justice (Nozick 1997; Arneson 2008). This is important for our topic because power is distributed differently than, for example, money (in the sense of salaries). Against this background, we can point out three promising ways of coping with the problem of the unfair distribution of power.

The Restitution view (RV) means that it must be the ethical goal of the organization to (re)establish a fair distribution of power. It can be assumed that AI is still technically and ethically immature and therefore promotes rather than prevents the emergence of power differentials, especially when the technologies are in the hands of people whose aim is surveillance and control. How to achieve the goal of restitution is a question that only the operational management (e.g., of a hospital) can answer, provided that the executive floor has recognized the problem. However, RV should only be applied if it can be assumed that, before the use of AI, power was distributed fairly or more fairly than after its introduction.

The Exploitation view (EV) uses the shifting of power relations to deliberately “improve” organizational structures in health facilities without paying attention to how justice can be restored, or injustices can be remedied.

The Compensation view (CV), finally, refers to the acceptance of the power shifts and the compensation of the unequal or unfair distribution of power through a fair distribution of responsibilities, i.e., whoever has more power also has more responsibility and can then also be held liable first, in accordance with their responsibility, in cases of damage. The following questions, which do not arise in RV, must be asked: Should the person who has more power, and thus more responsibility, also get more money than before? Isn’t the increase in power already rewarding enough? Or is this increase ‘eaten up’ again by the fact that the person who has more power can also be held more liable? This is not the place to answer these questions; they show, however, that the extent of power cannot be disproportionate to the extent of responsibility.

As a result of this scenario, we consider CV to be more plausible than RV and EV because power is generally difficult to distribute equally or fairly, since in every organization power and its use also depend on factors that are difficult to capture in terms of organizational ethics, e.g., the personalities of the individuals, the length of time an individual has belonged to the company, different attitudes of the workforce towards technical innovations ("corporate culture"), and other organization-specific customs and traditions. RV only works if the status quo ante was already just, which is rarely the case. Who knows of an institution in which everything was fair? So, this is more about a hardly measurable comparison between different states in time: ‘Institution X at t1 was fairer than at t2 after a new AI system was rolled out’.Footnote 8 Incidentally, another problem would be that the rollout of the MRS would have to be reversed if restitution could not be achieved in any other way. CV is also preferable to EV because the latter has no regard for equity principles and the peculiarities of health organizations. EV tends to increase the likelihood of organizational structures being overstretched and medical staff being overdemanded. There is a high probability that new asymmetries will emerge in the workforce or that existing ones will be exacerbated, finally leading to an unequal distribution of benefits and to one-sided control and domination.

In contrast, CV also relieves employees and employers of certain responsibilities because there is no obligation to eliminate all injustices; in return, legal measures must be taken to prevent all stakeholders from misusing their old and new power, e.g., if someone uses it to unfairly secure many goods at someone else's expense. In view of CV, compensations do not exclude improvements, to the extent that new challenges are mastered well and old mistakes are avoided. This is where a basic principle of sufficiency comes into play: healthcare facilities do not always have to become better; they should first be good enough. AI systems can certainly be justified in this context, as they can compensate for injustices: If a robot suddenly does dirty job φ that person X used to do, then person X will no longer be envious of or angry at person Y, who has never done dirty jobs but gets paid the same or more.

It is important to emphasize at this point that, according to the preferred CV, acceptance of the power shifts can only be guaranteed if AI allows for the compensation of the disadvantages associated with power shifts within healthcare organizations. Compensating mainly refers to the rebalancing of powers and to a refinement of the instruments by means of which this rebalancing is to be carried out. This method is anything but easy to implement. If it fails, CV inevitably collapses into RV or EV.

To successfully implement CV in health organizations, the workforce must have some measure of trust in the medical AI system. If this trust is not there, it has to be built up. But how can it be built? In our view, it seems necessary to introduce and cultivate certain virtues as explicit and power-enabling factors. Where virtues are recognizable, trust also tends to exist or can arise. That the presence of virtues alone cannot guarantee trust also seems indisputable. Much more is needed in terms of institutional safeguards; one could therefore say that the presence of virtues in actors or institutions is a necessary but not sufficient condition for building trust in digital products, systems, and processes (Budnik 2018). For this reason, virtue-based processes of trust-building must be accompanied by a procedural ethical component, which we want to derive from the above-mentioned non-ideal, non-comparative, corrective, and (imperfectly) procedural concept of justice.

Incidentally, we do not believe that questions of justice and virtue in AI-driven healthcare organizations can be boiled down to trust. It is more the other way round. We are defending CV, because we are convinced that in the course of accelerated digital transformation, inequalities and injustices must be constantly compensated for in order to gain or not lose trust. There is no doubt that it is not only organizational or procedural justice that maintains or increases trustworthiness (Frazier et al. 2010; Colquitt and Rodell 2011), but also the virtues: "Individuals, who display traits such as justice, honesty, empathy and the like, acquire (public) trust. Trust, in turn, makes it easier for people to cooperate and work together, it creates a sense of community and it makes social interactions more predictable." (Hagendorff 2022).

This already indicates that trust in AI cannot be intrinsically good. It may be quite appropriate not to trust certain AI systems due to concerns about reliability, transparency, accountability, or the idea that AI systems as non-agents are not fitting recipients of trust at all.Footnote 9 For this reason, it seems advisable to initially trust the transparent organizational structures created, represented, and controlled by humans rather than the opaque AI systems.

3.2 Organizing power ethically in AI-driven healthcare organizations

Our normative analysis, which is based on the sociological findings and the compensation view (CV), needs to fit into an ethical framework that can support the analysis of the role of AI in organizations and structural power relations. Within business ethics, numerous approaches develop general organizational ethics, primarily for the private economic sector (Johnson 2021). These models can only be partially applied to our topic, as hospitals are primarily not about doing business but about a healing mission that can be better fulfilled if the administrative and organizational structures serve this mission (Gibson et al. 2009). But what exactly do we understand by organizational ethics?

Organizational ethics is “the applied ethics discipline that addresses the moral choices influenced and guided by values, standards, principles, rules, and strategies associated with organizational activities and business situations. Organizational ethics focuses both on the choices of the individual and the group. Since antiquity, the moral features of commercial activity mandated a code of ethics to ensure virtuous decision-making and preserve the common good.” (Letendre 2015). If we speak of organizational ethics in the field of medicine and nursing, we still need to specify: “Organizational ethics is concerned with the ethical issues faced by managers and governors in healthcare organizations and the ethical implications of organizational decisions and practices on patients, staff, and the community.” (Gibson et al. 2009, 243). The need for discussion on organizational ethics arises in the health sector in three places (ibid.): (a) ethical issues emerging in clinical care because of decisions taken elsewhere in the organization, (b) ethical issues in clinical care with wide-reaching organizational implications, and (c) ethical issues related specifically to the business aspects of healthcare organizations.

In the following, we are primarily interested in examining ethical issues that arise from shifts in power as a result of the use of digital systems (Kluge 2017; Manzeschke 2021; Mirbabaie et al. 2021). The three aspects just mentioned will also play a role, although only in connection with the planned or actual use of AI technologies. Based on the sociological pre-study and the scenario already played out, we are able to recognize that AI creates new kinds of power relationships that need to be evaluated ethically. Nevertheless, classical organizational ethics can help us to operationalize our analysis, as it can generate "positive knowledge spillovers" (Schultz and Seele 2022) for future AI ethics. Since “AI ethics is still in an early stage dealing with the institutionalization of ethics to address ethical challenges raised in organizational environments” (ibid., 100f.), our analysis needs to draw on established theories and methods of organizational ethics such as Daniels and Sabin's (2002) accountability for reasonableness framework, the Corporate Ethical Virtues model (CEVM) by Kaptein (2008), or the classic “stakeholder impact analysis”. On the other hand, traditional organizational ethics as part of business ethics can also benefit from the new situation: "Business ethics can learn from AI ethics in catching up with the digital transformation, allowing for cross-fertilization between the two fields" (Schultz and Seele 2022, 100).

Due to the special conditions in AI-driven healthcare organizations (vulnerability of patients, scarcity of resources, high ethical standards), it can be expected that successful medical ethics in organizations will depend more and more on a good organization of ethics. A good organization of ethics can only be ensured if the organizational structures are just and the people who fill these structures with life are virtuous. The interplay of these factors can ensure that trust is built in health facilities. Trust has been emphasized by several authors as a key to the effectiveness of organizational ethics in healthcare organizations (Buchanan 2000; Goold 2001). One could therefore say that where trust exists, it can be used to discover and ethically address "blind spots" (Hagendorff 2022), i.e., power shifts that have hardly been visible so far but that are momentous for the ethical design of health organizations.

As we will show in the next section, a virtue-ethical account seems to offer a suitable method for making organizations ethical "from within", especially in the health sector. In one of the most influential books on biomedical ethics, Beauchamp and Childress (2019), who are not virtue ethicists themselves, have emphasized the growing institutional importance of virtues over the years.Footnote 10 Virtues can not only help people perform negative duties better but also ensure that they promote their own well-being as well as that of others more strongly. Virtuous health professionals are often creative and open to innovation. Particularly in times of rapid technological change and epistemic uncertainty, specific virtues are more necessary than ever, especially to meet current questions of justice (including AI fairness) that arise during the formal and material reorganization of healthcare institutions. Freely adapting Kant,Footnote 11 one can say that procedural justice without virtues is empty, and virtues without procedural justice are blind.

3.3 From organizational ethics (back) to virtue ethics

To ensure that structural power relations do not shift to the disadvantage of employers and employees, or that power relations that have already shifted do not have negative consequences for the stakeholders, it is necessary to create just and “virtuous” organizational structures that are supported by all. But how can organizational justice be achieved and virtues cultivated in order to stand the test of shifting structural power relations?

At the beginning of this paper, we already said that, with AI, new institutional and technological powers can fundamentally change the balance of justice and the general working atmosphere for the better or the worse. This applies all the more to the health sector with its vulnerable groups of people and its working conditions, which are further aggravated by pandemics, staff shortages, etc. Under difficult and volatile conditions, it is more important than ever that organizational structures are brought into line with the needs and abilities of those who live in and with them. For this reason, we believe that combining a procedural understanding of justice with virtue-based professional ethics is the key to better meeting new technological, social, and ethical challenges.

3.3.1 How to achieve organizational justice in AI-driven health organizations?

We have seen above that, according to CV, it is necessary to find ways to compensate for the structural disadvantages caused by shifted power relations. From our point of view, it seems appropriate to speak of “organizational justice” (Greenberg 1987; Rai et al. 2022) as an enhanced form of Rawlsian procedural justice rather than of structural or distributive justice, because:

  1. To mitigate the negative effects of institutional and technological powers, it is necessary to look at the procedural, not outcome-based, character of justice. This focus makes sense for the development of a fair organizational structure in healthcare institutions affected by technological change and may even, in the final step, make fair distribution mechanisms easier and quicker to find and implement. In general, this change of perspective allows us to bypass the difficult question of whether an AI system can or should take care of the fair allocation of medical goods in the future, and instead to understand that question as an organizational ethics problem: clarifying the dimensions of the corresponding areas of responsibility as a basis for the allocation of certain decision-making powers.

  2. According to Aristotle, justice must again be understood more as a (social) virtue of character (Nicomachean Ethics, Book V), not only as a "virtue of social institutions" (Rawls 1971, 3), insofar as just organizational structures should always include employees who are themselves just and good, or who at least strive to be.

  3. Procedural justice cannot remedy structural injustice (Young 2011), since the latter is primarily not an institutional-ethical but a political-social problem.

However, the answer to the question of the extent to which justice can be maintained or restored in the face of AI-driven power shifts does not yet capture the ethical problem in its entirety. Shifting power relations can indeed lead to numerous injustices, because employees are excluded from decisions or because new monitoring technologies divide the workforce into those who monitor and those who are monitored. In healthcare settings, however, other factors are also important: as a result of these shifts, actors whose rights and duties were clearly defined before the shift may no longer have moral responsibility clearly attributed to them afterwards. In addition to limiting the personal autonomy of all employees, which can be accompanied by feelings of disenfranchisement, there can also be a loss of motivation or frustration at work. All of this ultimately comes at the expense of patient welfare, because demotivated or even ill staff cannot provide the level of care patients normally expect.

We have just established that the good use of AI in healthcare institutions depends not only on whether and to what extent the modified organizational structures are up to the new technical challenges, but is also significantly determined by the correct application of practical knowledge. The double aspect of justice, as a virtue of institutions and as a virtue of character, which has so far been neglected in the discussion on ethical AI within health organizations, thus offers a suitable basis for combining considerations of organizational ethics with virtue theory. Organizational justice only exists where virtues, as practical derivations of ethical principles, come into play.Footnote 12 If the organization is the gearbox, then the virtues are the oil that keeps the gearbox running smoothly.

3.3.2 Virtues as institutional practices of trust, empowerment, and resilience

Virtues in healthcare organizations are dispositions that enable the workforce to do good for the organization (i.e., for others) and for themselves. In combination with organizational justice, and according to the CV, they help to compensate for injustice or to prevent it from occurring in the first place without ethically overburdening the workforce. However, the establishment of organizational justice cannot be the only motivation for medical staff to practice virtues; it is important that they also practice virtues because they realize that virtues are good for themselves. In addition, virtues offer further advantages for properly dealing with shifts in power relations: (a) interest should not merely focus on the emerging technology or the socio-technical design of one's workplace but should also extend to the cultivation of one's character; (b) virtues make it possible to engage better with technical and organizational developments and thus create space for "technomoral education" (Vallor 2016)Footnote 13; (c) with the help of virtues, it is possible to better understand individuals and groups in their intentions and efforts to accept, promote, or hinder AI processes; (d) it is also necessary to look at those virtues of resilience and empowerment that are suited to standing the test of shifting structural power relations; (e) with the help of virtues, trust can be built so that power shifts are mitigated in their negative consequences or do not happen at all.

It follows from this description that there may be good reasons to claim that it is possible to turn organizations into good organizations, i.e., "virtuous organizations", with an AI-friendly ethical culture. Of course, organizations are only as virtuous as the people who run them. Moreover, there are always free riders who hinder the building and maintenance of a good and just organizational structure. For these reasons, Vries et al. suggest that the most important thing to ask is "how organizations can facilitate that their members can exercise and develop their moral character" (Vries et al. 2018, 671). In the spirit of organizational justice, which requires structures that allow all members to develop their moral character, we draw on the Corporate Ethical Virtues Model (CEVM) by Kaptein (2008).Footnote 14 This model incorporates a special set of virtues that should be embodied in the organizational culture. However, it is not designed for healthcare institutions with their specific needs, so we need to complement it with specific virtues required in the fields of biomedicine, public health, and AI technologies: What skills and competencies do stakeholders in healthcare organizations ultimately need in order to ensure compliance with general human values? Which principles, practices, and virtues are necessary to encourage healthcare professionals to become more ethically literate, to identify and prevent harmful power asymmetries, and, if necessary, to transform them into good power asymmetries and symmetries?

In the following, Kaptein's CEVM is enriched with the principle-related biomedical virtues of Beauchamp and Childress (2019), Vallor's taxonomy of "technomoral virtues" (Vallor 2016), Hagendorff's list of basic and second-order AI virtues (Hagendorff 2022), and the catalogue of virtues that Hähnel (2016) has compiled for public health actors, and then applied to stakeholders in healthcare organizations. Kaptein's organizational virtues are treated as equal in normative relevance to the individual virtues elaborated by Beauchamp and Childress, Vallor, Hähnel, and Hagendorff. Where several of these catalogues converge in the following table, a virtue qualifies for the set of virtues that might be necessary to meet changing role configurations (for example, in the context of the doctor-patient relationship, cf. chapter 2.1) and the particular infrastructural requirements of working environments in healthcare institutions that make heavy use of AI, in order to stand the special test of shifting structural power relations. These virtues cannot and should not appeal to all stakeholders to the same extent.

Tursunbayeva and Renkema (2022) have pointed out that "we often present our findings broadly for healthcare professionals instead of detailing them for doctors or nurses, as the latter were very seldom mentioned in qualifying studies." We therefore try to show in our matrix which stakeholders are particularly addressed by which virtues, in order to counter the free-rider problem and to attend to certain responsibility gaps. This list is certainly incomplete and not yet concrete enough, but it offers starting points for the development of a virtue-based professional ethics. These extensions allow us to draw a preliminary picture of those character traits that need to be fostered in order to establish an organizational structure for AI-driven healthcare institutions in which desired and undesired power shifts will occur and can be countered, both preventively and retrospectively, with the present virtue matrix. Our analysis is certainly only a beginning. It shows, however, that more empirical research is needed to make profession-specific differentiations within a virtue-based organizational ethics for healthcare (Table 1).

Table 1 Transformative virtues for AI-driven healthcare organizations

4 Summary and open questions

The question of what effects the use of AI systems will ultimately have can only be definitively answered by empirical studies. Predictions about AI and the effects of its use thus encounter the same basic problem that already applies to predictions about the effects and developments of digitalization: they are made on the basis of data that depict the past, whose significance cannot simply be extrapolated into the future, since society develops contingently (Grunwald 2021). However, by taking a comparative look at other (digital) innovations or technologies and their embedding in organizations, and building on the preceding analysis of work and power relations in healthcare organizations, some conclusions can be drawn that seem sensible for the introduction and application of AI systems.

For example, ethical principles regarding the use of AI in organizations must relate not only to the AI system itself but also to its context of use. Since AI systems are mostly developed with a view to more efficient, productive, or reliable processing of certain work content, it must not be overlooked that even supposedly isolated tasks are usually components of complex work processes and procedures, to which a multitude of very different requirements and expectations are attached. Implementing an AI system in a specific work environment does not mean replacing or changing just one work step, but rather reconfiguring the entire "running system". Therefore, a variety of organizational measures are required "to deal with AI-related shortcomings and to engage in quality assurance and continuous improvement. These include collaboration among relevant decision makers and knowledge-holders" (Herrmann and Pfeiffer 2022). How a workforce deals with a technical system (e.g., AI) is therefore a multidimensional process of collaborative actions and individual decisions that requires ongoing coordination. The introduction of artificial intelligence systems in the medical field should therefore always keep in mind the health system as a holistic ecosystem and consider the impact of new technology at all levels. Technology and its applications are (too) often seen as isolated entities, ignoring the fact that they are implemented in complex socio-technical systems. There is therefore a need to translate ethical guidelines and principles within healthcare organizations into concrete working practice, which is usually done through (one-sided) formal guidelines and directives.

To translate ethical guidelines and principles within healthcare organizations into concrete working practice, the catalogue of virtues presented here and tailored to healthcare organizations can be used specifically in performative studies and professional training. It also helps significantly in tailoring jobs to the new AI requirements. In our view, jobs such as those of doctors, which are primarily oriented towards foreground processes of diagnosis and therapy, are still the least affected by these changes, as AI will not replace these work processes so quickly but at best supplement them, even though the change will certainly also have a major impact on the medical profession. Rather, AI will increasingly and comprehensively find its way into administrative background processes that affect every employee in the hospital, such as "clinical documentation, medical records management, or claims processing" (Tursunbayeva and Renkema 2022).

However, virtues are required across all levels of the organizational hierarchy to prevent and compensate for harmful concentrations of power, which can arise, for example, when AI enables managers to monitor staff disproportionately. Organizational justice and transparency are indispensable prerequisites for good leadership of healthcare institutions. If they are neglected, even the virtues cannot guarantee that (1) job autonomy is taken seriously, (2) learned skills can be exercised and extended for the benefit of patients (against "de-skilling"), (3) basic organizational structures function and are even improved, and (4) quality control and job feedback are effective (the possibility to give feedback must not lie solely in the hands of management).

It is, of course, desirable that the whole workforce exhibit and practice all the virtues listed in the table. This will certainly never be fully realized. It is therefore important that the AI-specific virtues necessary to maintain autonomy and enable justice are practiced at least by those groups likely to benefit from or be disadvantaged by shifting power relations. These virtues and their underlying principles need to be embedded in the future workflows of healthcare organizations. AI should thereby improve the workflow but not increase the workload, which can happen if doctors have to become "part-time data scientists". Moreover, if doctors suddenly have more time thanks to the relief provided by AI systems, they may be tempted or expected to take on even more patients. In this respect, virtues such as modesty or humility, which protect against excessive ambition or greed, also seem important.

The question of which virtues can be demanded from whom remains a difficult issue that depends on the respective job profile. For this reason, the aim must be to look not only at the job profile of doctors but also at those of nurses, health economists, managers, etc. (Britnell 2019). Certainly, there are also virtues, such as flexibility, that all doctors and healthcare professionals across the hierarchy must have. Such virtues always (critically) reflect the new organizational structures, presumably "agile hierarchies that can adapt to volatile environments" (Tursunbayeva and Renkema 2022), in which they are supposed to be practiced.

In chapter 3.1, we described the compensation view (CV), which seemed to us the most appropriate way of responding to power shifts in an ethically responsible manner. However, some questions remain open: Can the AI-supported health institution of the future, if it must constantly work out compensation strategies, be successful at all and serve the well-being of patients? Does the dictum that AI systems solve the very problems they create also apply here? And are there not also models that can do without the great effort of compensating for errors, disproportions, and injustices, by producing trust and justice from the very beginning?

We are aware that our considerations primarily apply to highly developed Western societies, although there are also major differences between individual healthcare systems: think of the liberal system in the US, the well-developed healthcare systems of Switzerland and Norway, or the Chinese system, which is characterized by strong urban-rural differences. Not to mention societies that have consistently very poor healthcare structures or no system at all. The future will show whether AI can be used to equalize inequalities and injustices between Western and still very underdeveloped non-Western healthcare systems (e.g., Ho 2022). There is also the question of whether AI will cause or compensate for power shifts within the underdeveloped healthcare systems themselves. In any case, there cannot and will not be a "one-size-fits-all" solution.

Be that as it may, more research is needed to answer these questions. In the sociological part, we aimed to show how complex, multidimensional, and context-dependent power relations in the healthcare system are, and how the use of AI can consequently shift power relations in health organizations in multiple ways, at different levels and intersections. In the ethical part, we tried to describe the normative implications these shifts may have. We concluded that an adequate response to undesirable and damaging shifts in power relations can only lie in the development of a compensatory strategy that promotes innovation without (morally) overburdening stakeholders. To realize this goal, we believe that two interrelated tools are necessary to ensure that job design in future AI-driven healthcare organizations proves resistant to undesirable power shifts: the socio-technical production of organizational justice, and its maintenance as a leadership task through the cultivation of profession-specific virtues that build trust in the transformed organization and the new technologies incorporated in it.