1 Introduction

The emergence of Artificial Intelligence (AI) systems [Footnote 1] gives rise to various new types of ethical issues. The relatively new discipline of AI ethics investigates these issues and develops approaches to address them. In this endeavor, AI ethics does not ‘start from scratch’ but builds on a rich and well-established body of literature on computer ethics. However, this article argues that computer ethics not only provides concepts and methods for an ethical approach to designing and using AI systems. It makes the case that the emergence of AI systems and AI regulation also showcases developments that have significant implications for computer ethics and make it necessary to reexamine key assumptions of the discipline. Some challenges that the emergence of AI systems poses for individual approaches and methods of computer ethics have already been discussed in recent publications [1,2,3]. This article contributes to this discourse by discussing two further trends showcased by the emergence of AI systems and AI regulation. It raises the questions of which challenges they pose, which opportunities they provide, and how they affect the relationship among different approaches to computer ethics.

First, as James Moor stated in his seminal article “What Is Computer Ethics?” [4], policy-oriented computer ethics builds on the assumption that emerging computer technologies–such as AI systems–pose ethical issues primarily because they provide us with new capabilities. He argues that such new capabilities entail “new choices for actions” [4], which exist in a policy vacuum. That is, given these new choices for actions, often “no policies for [ethical] conduct” exist or “existing policies seem inadequate” [4], as the developments in computer technology outpace “ethical, […] and legal developments” [5]. The significance of Moor’s concept of the policy vacuum for computer ethics lies in the fact that it serves as a justification for the existence of computer ethics as a stand-alone discipline [6,7,8,9] and establishes a core set of research questions for the field [4, 9, 10]. However, unlike many other computer technologies, the emergence of AI systems led to calls for regulation that relatively quickly resulted in policy advancements. For instance, the European Commission recently proposed the Artificial Intelligence Act (AI Act), laying down comprehensive rules for AI systems [11]. In light of the emergence of AI regulation, policy-oriented computer ethics, therefore, needs to address the question of what role it plays in highly regulated environments and how it takes into account and relates to existing policies.

Second, computer ethics and related disciplines discuss power primarily in terms of how technology affects “the way in which power is distributed and exercised in society” [12]. Furthermore, to ensure that technology supports achieving an ethically sound distribution of power in society, various scholars call for stakeholder integration in design processes or a ‘democratization of technology’ [1, 13,14,15]. However, as Friedman et al. [1] note, computer ethics focusing on the design of computer systems has often not sufficiently considered power relationships among actors once they are involved in design processes. In the context of AI systems, such issues are especially prevalent. This is because AI systems are a prime example of computer systems consisting of various technical components which are usually developed and operated by relatively independent actors [16, 17]. These include, among others, actors or groups of actors involved in data management and data preparation, model development, as well as deployment, use, and refinement of such systems [17]. Consequently, agency is highly distributed in the design of AI systems [15], and the question of how to account for power imbalances among actors involved in design processes deserves particular consideration [18].

Thus, the emergence of AI systems and AI regulation raises questions that computer ethics needs to address. This article reexamines computer ethics in light of the emergence of AI systems and AI regulation by investigating new challenges and opportunities. It does not aim at developing AI-specific solutions to the discussed challenges but uses AI as an example to analyze how computer ethics needs to evolve in changing socio-technical environments. It focuses on policy- and design-oriented computer ethics, as these approaches to computer ethics are most clearly affected by the emergence of AI systems and AI regulation. Moreover, this article will demonstrate that novel interdependencies arise between the two approaches to computer ethics as a result.

The article proceeds as follows: Sect. 2 provides an overview of different approaches to computer ethics as well as the implications of the emergence of AI systems and AI regulation for these approaches. Furthermore, this section also outlines how computer ethics and AI ethics relate to each other. Section 3 discusses novel challenges arising in light of the emergence of AI systems and AI regulation, whereas Sect. 4 explores novel opportunities. Section 5 addresses new interdependencies between policy- and design-oriented computer ethics, manifesting as either conflicts or synergies. Lastly, Sect. 6 concludes by highlighting the key insights of this article and reflecting on the requirements for a productive integration of design- and policy-oriented computer ethics in light of these findings.

2 Computer ethics and the emergence of AI systems and AI regulation

According to van den Hoven [19], “[c]omputer ethics is a form of applied or practical ethics [which] studies the moral questions that are associated with the development, application, and use of computers and computer science.” Computer ethics has developed over several decades, and perspectives of computer ethics have evolved significantly over time. While computer ethics can be traced back to Wiener’s cybernetics and information ethics [20, 21], the term itself was coined by Walter Maner and his computer ethics initiative in the mid-1970s [22]. Earlier publications focus primarily on practices relating to computer technology (especially its use) and, on a more abstract level, the challenges to existing ethical concepts [22, 23]. Later, computer ethics began to also examine policies that guide actions enabled by computer technology [4], the professional conduct of computer specialists [24,25,26], and the design of computer technology itself [10, 27, 28]. In line with the aim of this article, the remainder of this section focuses on policy- and design-oriented computer ethics in more detail. However, it first addresses the relationship between computer ethics and AI ethics to provide conceptual clarity.

2.1 Computer ethics and AI ethics

The rapid development and dissemination of AI systems in recent years has been “accompanied by constant calls for applied ethics” [29]. In response, AI ethics emerged and gained significant public and scholarly attention [30]. While there is not necessarily a “categorical difference between computer ethics and the ethics of AI” [31]–one can be understood as a subset of the other–the discourses in the two disciplines differ in some respects. Stahl [31] identifies differences regarding, for instance, the scope, topics and issues, theoretical basis and referenced disciplines, solutions and mitigation, as well as importance and impact.

In its evolution, AI ethics has not customized the entirety of the methods and theories of computer ethics for the AI context. Rather, it focuses mainly on AI-specific issues. Yet, as AI systems are computer systems, the more general computer ethics remains highly relevant in the context of AI. It provides methods and theories that can support understanding and addressing the ethical issues of AI systems. However, as outlined in the introduction, the emergence of AI systems and AI regulation showcases developments that have significant implications for computer ethics and make it necessary to reexamine key assumptions of the discipline.

The issues these developments pose for computer ethics are not necessarily unique to the AI context. For instance, just like AI systems, platform ecosystems face increasing regulation [32, 33], and blockchain-based systems raise the question of who among the involved actors has the power to impose design decisions regarding the system’s protocol [18, 34, 35]. Thus, some of the challenges for computer ethics discussed in this article also arise in other contexts. Yet, AI is an exceptionally suitable case for discussing and reflecting on these developments, as many recent trends in the development of computer technology occur simultaneously in the context of AI and can, therefore, be examined in relation to each other.

Thus, the emergence of AI systems and AI regulation does not necessarily require developing a customized version of computer ethics for AI. Accordingly, this article attempts to reexamine (general) computer ethics in light of AI systems and AI regulation to identify challenges that these systems pose for selected approaches of the discipline.

2.2 Policy-oriented computer ethics

Moor [4] holds the view that “computer ethics [emphasis in original] is the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology.” This reasoning is based on the observation that novel computer technologies “provide us with new capabilities [which] in turn give us new choices for actions” [4]. Therefore, the emergence of new computer systems often results in situations “in which we do not have adequate policies in place to guide us” [36]. Thus, according to Moor, computer ethics aims to develop coherent conceptual frameworks for understanding ethical problems involving computer technology and ultimately to replace such “policy vacuums with good policies supported by reasonable justifications” [9].

Addressing policy vacuums is especially important in computer ethics because the “logical malleability” of computer technology makes it a universal tool that enables human beings to do an “enormous number of new things” [22]. This vast field of application means that computer technology “can produce policy vacuums in larger quantities than other technologies” [9]. This finding also applies to AI systems.

However, policy-makers increasingly push toward passing “legally binding regulations” addressing some of the ethical issues AI systems pose [37]. A prime example of this push is the European Commission’s proposal for the AI Act. The AI Act is a policy proposal laying down harmonized rules on Artificial Intelligence in the European Union [11]. The rules concern “the development, placement on the market and use of AI systems.” Depending on the risk that a system poses, they include, for instance, “prohibitions and a conformity assessment system adapted from EU product safety law” [38]. Thus, once the AI Act goes into effect, AI systems in the EU will be deployed in a highly regulated environment.

This does not mean that policy vacuums are no longer a concern of computer ethics. For instance, as Smuha et al. [39] note, the AI Act’s “list of prohibited practices seems heavily inspired by recent controversies.” Therefore, future AI systems that enable activities not yet possible today could once again exist in a policy vacuum. Nevertheless, computer ethics will increasingly be applied in highly regulated contexts. Moving forward, computer ethics, therefore, needs to reflect on its role in contexts where there is no longer a policy vacuum.

2.3 Design-oriented computer ethics

Following the design turn in applied ethics, which had directed attention to the “design of institutions, infrastructure, and technology,” computer ethics, too, began to address the design of computer technology itself (i.e., separately from the behavior of the developers and designers) [19]. Disclosive Computer Ethics [10, 40, 41] and Value Sensitive Design [27, 42] are approaches indicative of the design turn in computer ethics. Both argue that problems of computer ethics can be solved not only by developing policies that regulate practices relating to computer systems (e.g., their use) but also by accounting for ethical values and principles in the design of computer technology. To achieve an alignment of technology design and ethical values, computer ethics accounts for values as well as “norms, practices, and incentives, perhaps originating from different stakeholders” [42].

The more analytic Disclosive Computer Ethics focuses “on morally opaque practices” [40] and the “moral deciphering of computer technology” or, more specifically, its “design features.” It concerns the exposure of opaque moral features (or “embedded normativity”) of computer technology [10]. The more constructive Value Sensitive Design argues that moral features in the design of (computer) technology can not only be analyzed ex-post but can already be accounted for in design processes. It “provides theory, method, and practice to account for human values in a principled and systematic manner throughout the technical design process” [42].

However, AI systems consist of several components, such as training data, an algorithm that infers decision rules from data based on a learning method, an algorithm that classifies cases based on the learned decision rules, and some form of end-user application that uses these classifications and translates them into decisions [43,44,45]. In many instances, the required system components are developed and/or controlled not by one but by several actors who specialize in one or a few tasks in the development of AI systems [46]. Furthermore, design decisions can be influenced not only by the engineers and data scientists directly involved in the development of Machine Learning capabilities but also by “their [respective] managers, product designers, clients, executives, and others” [47].
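To make this distribution of tasks more tangible, the following minimal sketch separates the components listed above into functions that, in practice, could each be developed and operated by a different actor. It is purely illustrative: all names, the data, and the naive threshold rule are assumptions made for this example and are not drawn from the cited works or any specific system.

```python
# Illustrative sketch only: component split of a simple AI system across actors.
from typing import List, Tuple

def learn_decision_rule(training_data: List[Tuple[float, int]]) -> float:
    """Model development (actor A): infer a simple threshold rule from labeled data."""
    positive_scores = [score for score, label in training_data if label == 1]
    return min(positive_scores)  # naive rule: scores at or above the lowest positive example pass

def classify(score: float, threshold: float) -> int:
    """Model operation (actor B): apply the learned decision rule to a new case."""
    return 1 if score >= threshold else 0

def end_user_application(score: float, threshold: float) -> str:
    """Deployment/use (actor C): translate the classification into a decision."""
    return "approve" if classify(score, threshold) == 1 else "reject"

# Training data prepared by yet another actor in charge of data management.
training_data = [(0.9, 1), (0.8, 1), (0.4, 0), (0.3, 0)]
threshold = learn_decision_rule(training_data)
print(end_user_application(0.85, threshold))  # -> approve
```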

Hence, AI systems need to be understood as “a complex network” of technical and non-technical components in which individual designers often lack the capacity to steer or control the design of the system at large [15]. As Barocas & Selbst [48], Danks & London [49], and others show, ethically problematic features of AI systems can have their roots in tasks performed by many of the involved actors, such as data management and data preparation, model development, as well as deployment, use, and refinement of AI systems. Consequently, the distribution of agency regarding design decisions among the involved actors poses challenges for addressing threats to the realization of ethical values, the consideration of ethical principles, and fundamental rights such systems can pose [17]. Prominent advocates of design-oriented computer ethics, such as Friedman et al. [1], acknowledge this as one of the ‘grand challenges’ that the discipline is facing today. They note that many Value Sensitive Design projects have assumed that the “practices, organizational policies, or legal frameworks in place will support ‘doing the right thing,’ without needing to be explicit about the role and importance of power relationships” among the involved actors.

Considering the emergence of larger and more complex socio-technical systems such as AI systems, these power relationships gain importance, as they affect how computer ethics can engage in designing such systems. In the case of such systems, computer ethics needs to account for how agency is distributed among the actors involved in AI systems and to what extent involved actors have the power to realize design decisions. This article adopts a definition of power focusing on outcomes [Footnote 2], according to which power is the “ability of agents” to “realize a certain outcome” or “bring about certain […] state of affairs” ([12], see also [50]). The emergence of larger and more complex socio-technical systems such as AI systems raises questions for computer ethics that go beyond how technology affects how power is distributed in society and which societal actors should take part in designing technical artifacts. It also raises the question of how power manifests in the broader social, economic, and political features of such systems (cf. [51]), as these features co-determine the ability of actors involved in the design process to ultimately impact design decisions. Computer ethics needs to address the question of which of the actors involved in the development and operation of AI systems have (and should have) the ability to realize specific design decisions [Footnote 3].

3 Novel challenges for computer ethics

Building on the explanations in Sect. 2, the following paragraphs exemplify how the emergence of AI systems and AI regulation challenges computer ethics in practice.

Concerning AI systems, biased decision models that unfairly discriminate against groups or individuals are a widely discussed ethical issue. Such bias can be caused by various factors, such as biased training data or algorithms [48, 49]. However, in many cases, some actors involved in developing AI systems can account for and mitigate such biases to prevent biased training data or a biased algorithm from leading to problematic outcomes. For instance, as Danks & London [49] note, algorithmic processing can be used to “offset or correct for” biased training data, or the end-user application in which an AI system is embedded can be set up in a way that it does “not take action solely on the basis of the algorithm output” in cases where a biased output is to be expected. This way, developers can attempt to “develop a system that is overall unbiased, even though different components each exhibit […] bias” [Footnote 4]. To account for and react to biases, developers of decision models or end-user applications need information on the properties of the training data or the decision model, respectively [17]. However, due to, for instance, business interests, actors involved in managing and preparing training data or developing decision models can decide against providing access to this information [45], even at the cost of negatively affecting the ability of developers and users to account for and react to this ethical issue.
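As a rough illustration of the second mitigation strategy mentioned above (not acting solely on the basis of the algorithm output), the sketch below shows an end-user application that defers cases to human review where biased outputs are to be expected. All names and values (e.g., bias_prone_groups) are hypothetical assumptions for this example, not taken from the cited works.

```python
# Hypothetical sketch: an end-user application that does not act solely on the
# model output in cases where a biased output is to be expected.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    group: str           # attribute for which bias in the training data is suspected
    model_score: float   # output of an upstream decision model (between 0 and 1)

# Groups flagged by the data-managing or model-developing actor as bias-prone.
bias_prone_groups = {"group_b"}

def decide(case: Case, threshold: float = 0.5) -> str:
    """Translate a model score into a decision, deferring to human review where needed."""
    if case.group in bias_prone_groups:
        return "refer_to_human_review"  # do not rely solely on the algorithm output
    return "approve" if case.model_score >= threshold else "reject"

print(decide(Case("c-001", "group_a", 0.72)))  # -> approve
print(decide(Case("c-002", "group_b", 0.72)))  # -> refer_to_human_review
```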

As design specifications in both novel regulation and regulatory proposals demonstrate, policy-makers are further actors that can impact the ability of actors involved in socio-technical systems to account for specific ethical values and principles in design. For instance, the proposal for the AI Act prohibits the deployment of certain types of AI systems. It prescribes technical and non-technical requirements for the (legal) use of AI systems and enforces them through the threat of penalties. Thereby, it incentivizes certain design decisions while disincentivizing others. In the AI Act, obligations concern, for instance, the establishment of quality management systems, the provision of technical documentation, or ensuring data governance in accordance with specified standards [11]. Often, such obligations reflect specific ethical principles or values, such as privacy, fairness, or transparency. Moreover, as promoting one value or principle often comes at the expense of another, they also reflect value tradeoffs. For instance, as Sect. 5 discusses in more detail, privacy regulations can hamper bias mitigation strategies that require integrating more data [37]. Furthermore, values like fairness can be defined in various conflicting ways. Thus, requiring an AI system to make fair decisions according to one definition of fairness makes it impossible to achieve fair decisions according to a conflicting definition of fairness [53]. Therefore, such regulatory interventions can obstruct or compel design decisions that promote or hinder the realization of specific values [18]. Consequently, they can reduce the developers’ scope for design and hamper their ability to negotiate and account for values themselves or in accordance with further stakeholders.
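The conflict between fairness definitions can be made concrete with a small, invented toy example: when the share of actually qualified individuals differs between two groups, predictions that satisfy demographic parity (equal selection rates) will generally violate equal opportunity (equal true positive rates), and vice versa. The data below are assumptions chosen purely for illustration.

```python
# Toy illustration (invented data) of two conflicting fairness definitions.

def selection_rate(preds):
    return sum(preds) / len(preds)

def true_positive_rate(labels, preds):
    predictions_for_qualified = [p for l, p in zip(labels, preds) if l == 1]
    return sum(predictions_for_qualified) / len(predictions_for_qualified)

# Group A: 8 of 10 individuals are actually qualified (label 1).
labels_a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
preds_a  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Group B: 2 of 10 individuals are actually qualified.
labels_b = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
preds_b  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Demographic parity (equal selection rates): satisfied, 0.5 vs. 0.5.
print(selection_rate(preds_a), selection_rate(preds_b))

# Equal opportunity (equal true positive rates): violated, 0.625 vs. 1.0.
print(true_positive_rate(labels_a, preds_a), true_positive_rate(labels_b, preds_b))
```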

Such limited agency of developers regarding design decisions poses new conceptual and practical challenges for design- and policy-oriented computer ethics. As the technical, social, economic, and political features of a socio-technical system like an AI system can constrain the involved actors’ ability to design technical components in accordance with ethical values and principles, design-oriented computer ethics needs to consider not only what designers ought to do and how technology should be designed. It also needs to address the questions of what individual developers have the ability to do, what constraints there are for design decisions, and which actors set these constraints. For more analytical approaches to design-oriented computer ethics, such as Disclosive Computer Ethics, the question arises as to which of the involved actors have the ability to address problematic ethical features of computer systems that are integrated into larger socio-technical systems once these features have been disclosed. For more constructive approaches to design-oriented computer ethics, such as Value Sensitive Design, the question arises as to which actors involved in a socio-technical system can assert design decisions that align with specific ethical principles or values and can, therefore, successfully apply these approaches. Conversely, they also have to engage with the question of which actors lack the ability to apply them successfully and how they can change this circumstance [17].

Policy-oriented computer ethics also faces challenges in light of the complex actor constellations in AI systems. In the process of making policy for the ethical use of computer technology, it needs to take into account the ability of actors to achieve certain outcomes. This is because if policy-makers do not outright ban specific applications but assign obligations to their development, deployment, or use, these obligations need to be assigned to some role or actor. Yet, if policy-makers assign obligations to actors incapable of fulfilling them, these obligations will not achieve the intended results. While this may seem trivial in theory, it leads to major challenges in practice. For instance, if policy-oriented computer ethics seeks to ensure that actors involved in AI systems guarantee that potential bias in training data does not lead to biased decisions that harm individuals, it is challenging to determine which involved actors can or should be addressed: actors in charge of data collection and management (to ensure that there is no bias in the training data), actors in model development (to ensure that compensatory bias is applied so that decisions are unbiased), operators (to question decisions and not rely on them in cases where decisions might be biased), or providers ([17], see also [49]) [Footnote 5].

Thus, the rise of larger and more complex socio-technical systems such as AI systems forces policy-oriented computer ethics to determine not only what ethical practices relating to computer technology are [4] but also which actors have the ability to engage in these practices and to which actors the respective obligations should be assigned. To ensure the intended effects of policy measures, it is crucial to account for the involved actors’ power to achieve specific outcomes.

4 Novel opportunities for computer ethics

As the discussion of challenges in Sect. 3 shows, computer ethics needs to account for the complex actor constellations in socio-technical systems such as AI systems and consider how power manifests in their broader technical, social, economic, and political features. However, these features should not be perceived as unchangeable or as (only) a hindrance to computer ethics. The way that the technical, social, political, and economic features of socio-technical systems determine the power of involved actors to shape the design of computer technology is contingent. It can be influenced in a variety of ways [18]. Enabling and stimulating ethical reflection and conduct by impacting these features of socio-technical systems should thus be seen as a field of activity for computer ethics. Furthermore, computer ethics can make use of how power manifests in socio-technical systems to achieve its goals.

The new opportunities for design-oriented computer ethics are twofold. First, it can propose design features for technical components that co-determine the ability of actors involved in the socio-technical system to achieve specific outcomes. While this approach does not directly facilitate accounting for a specific ethical value or principle within the given system, it enables actors to apply methods of computer ethics. Second, acknowledging differences in the ability to assert design decisions among actors involved in a socio-technical system can help to identify powerful actors. These actors can then be encouraged to enforce compliance with specific ethical values or principles in the socio-technical system at large.

The first approach, that is, engaging with the technical, social, political, and economic features of a socio-technical system to shape the ability of actors involved in the system to achieve specific goals of computer ethics, can be pursued, for instance, by aiming for transparency in the system’s design. Section 3 described how a lack of information on, for instance, training data or properties of a decision model can hamper efforts to mitigate bias in AI systems. Conversely, a higher degree of transparency regarding the properties of the training data and the decision model can support actors in accounting for these properties and compensating for bias. A more transparent design allows for “a broader conversation about the values, operation, and limitations” of an AI system and can thereby foster the ability of involved actors to account for ethical values and principles in the system’s design [15]. Yet, if achieving greater transparency conflicts with other ethical values (or business interests), it is necessary to weigh these values (or interests) against each other [54, 55].
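As a sketch of what such transparency could look like in practice, the example below shows a machine-readable ‘datasheet’ that a data-providing actor might ship alongside training data so that downstream actors can account for known properties. The fields, values, and the helper function are assumptions made for illustration; they do not reflect a standardized schema or any requirement discussed in the cited sources.

```python
# Hypothetical 'datasheet' accompanying a training dataset (illustrative fields only).
training_data_sheet = {
    "dataset": "loan_applications_2015_2020",            # invented name
    "collection_period": "2015-2020",
    "known_sampling_skews": [
        "applicants under 25 are underrepresented",
        "region X contributes 70% of all records",
    ],
    "sensitive_attributes_present": ["age", "postal_code"],
    "intended_use": "credit risk scoring",
}

def downstream_bias_check_needed(sheet: dict) -> bool:
    """Downstream actors can use the disclosed properties to decide whether
    compensatory measures (e.g., reweighting or human review) are required."""
    return bool(sheet["known_sampling_skews"]) or bool(sheet["sensitive_attributes_present"])

print(downstream_bias_check_needed(training_data_sheet))  # -> True
```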

The second approach, that is, focusing on powerful actors to ensure that specific ethical values or principles are accounted for, can be demonstrated by the use of data access control for protecting sensitive data. Actors developing machine-learning-based AI systems often strive for ever more data to enhance the respective system’s quality and accuracy [45]. However, as Yeung [44] notes, this striving for ever more data can be ethically problematic. Individuals can have “a legitimate interest in not being evaluated and assessed” based on information that is “morally and/or causally irrelevant” to the decision, even if this information “may have a very high degree of predictive value (i.e., statistical relevance).” In contexts where data are not widely available and individual actors control specific information exclusively, these actors can leverage their position by not granting access to specific types of sensitive information. Thereby, these actors can prevent this information from being used to train a decision model or make decisions, even if they are not directly involved in either of these activities. This is possible, for instance, where personal tracking devices generate otherwise unavailable data [Footnote 6].
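A minimal sketch of such data access control, assuming hypothetical field names, is given below: an actor that exclusively controls certain records withholds sensitive attributes before sharing the data with actors who train decision models or make decisions.

```python
# Hypothetical sketch: withholding sensitive attributes before granting data access.
SENSITIVE_FIELDS = {"heart_rate_history", "health_status", "religion"}

def share_for_model_training(record: dict) -> dict:
    """Return a copy of the record without sensitive fields, so that these cannot
    be used to train a decision model or to make decisions about the individual."""
    return {key: value for key, value in record.items() if key not in SENSITIVE_FIELDS}

record = {
    "user_id": "u-42",
    "steps_per_day": 8200,
    "heart_rate_history": [62, 71, 80],  # generated by a personal tracking device
    "postal_code": "10115",
}

print(share_for_model_training(record))
# -> {'user_id': 'u-42', 'steps_per_day': 8200, 'postal_code': '10115'}
```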

Thus, if design-oriented computer ethics is applied by actors in a dominant position in a socio-technical system, these actors can not only affect the design of the technical components or applications they themselves develop. To a varying degree, they can also shape the broader socio-technical system by co-determining if (and if so, how) values are accounted for in the system at large [18].

However, the two approaches can be in conflict with one another. This is because attempts by an actor to shape a socio-technical system as a whole (including technical components and applications that are developed and operated by other actors) require that this actor possesses a degree of assertiveness regarding design decisions that other actors lack. For instance, because sensitive information can be used to identify or mitigate bias (cf. [52]), not granting access to sensitive information can get in the way of efforts to identify or mitigate bias in AI systems. Thus, protecting privacy can conflict with achieving unbiased decisions [37]. Moreover, if design-oriented computer ethics uses an actor’s dominant position within a socio-technical system to promote a specific ethical value, this can hinder efforts of other actors to negotiate and account for different values. Consequently, conflicts can arise between applying design-oriented computer ethics to determine design decisions in accordance with ethical values and applying it to enable further actors to engage in ethical considerations in design processes.

Moreover, new opportunities arise not only for design-oriented computer ethics but also for policy-oriented computer ethics. Like design-oriented computer ethics, policy-oriented computer ethics can acknowledge and make use of existing features of a socio-technical system to achieve its goals or attempt to influence them. First, it can take advantage of how power manifests in a socio-technical system’s technical, social, political, and economic features by targeting and assigning obligations specifically to actors who hold a powerful position because of these features. The proposal for the AI Act provides a prime example of this approach. Earlier whitepapers on AI assign obligations for ensuring that specific properties of socio-technical systems are met to those actors who are “best placed” to address them. To illustrate this approach, the whitepaper clarifies that, for example, “while the developers of AI may be best placed to address risks arising from the development phase, their ability to control risks during the use phase may be more limited. In that case, the deployer should be subject to the relevant obligation” [57]. The AI Act, instead, assigns most obligations to providers [Footnote 7] of AI systems. It uses the providers’ position—characterized by providing market access—to ensure that other actors involved in the respective AI system ensure adequate data governance, provide technical documentation, or establish a quality management system [11]. In doing so, the AI Act avoids the difficulties it would have faced if it did not delegate these tasks to providers, such as the need to engage in the micromanagement of assigning obligations according to the capabilities of individual actors involved in an AI system [17].

Second, policy-oriented computer ethics can advocate policies that change how power manifests in a socio-technical system’s technical, social, political, and economic features. For instance, the “AI Act proposes a new, central database, managed by the Commission, for the registration of ‘stand-alone’ high-risk AI systems” to help actors such as the regulatory authorities, civil society, or journalists to “uncover illicit AI” [38]. The proposed database aims at “enhanced oversight by the public authorities and the society” of high-risk AI systems [58]. Thus, this proposal fosters an infrastructure enabling various groups of actors to engage in discourses on the risks that AI systems pose for the realization of ethical values, the consideration of ethical principles, and fundamental rights. Furthermore, it challenges a status quo in which some groups of actors are often excluded from these discourses.

5 Conflicts and synergies

Based on the findings in the previous sections, it is evident that policy- and design-oriented computer ethics can engage with the same actors, computer systems, value conflicts, or–more generally–states of affairs. Both approaches can be applied to target specific actors involved in a socio-technical system to encourage or coerce them to ensure that one particular ethical value or principle is accounted for in the system at large. Moreover, both approaches can be applied to influence how the technical, social, political, and economic features of a socio-technical system co-determine the distribution of power among the actors involved in the system.

This raises the question of how the two approaches to computer ethics relate to one another. Brey [10] argues that design-oriented computer ethics is complementary to “mainstream” or policy-oriented computer ethics. Yet, this assessment needs to be re-evaluated in light of the findings of the previous sections. This section makes the case that while these two approaches can be complementary (i.e., they can create synergies), they can also be in conflict with one another.

If design-oriented computer ethics is applied in contexts where policy constrains design decisions, developers and design-oriented computer ethicists need to take this circumstance into account. As outlined above, policies can affect the application of design-oriented computer ethics in two ways. On the one hand, they can affect the consideration of specific values in design decisions. On the other hand, they can affect the ability of actors involved in a socio-technical system to influence design decisions and thus shape technology in line with ethical values and principles. This can lead to conflicts if the respective approaches to computer ethics promote conflicting values (or operationalizations of values) or if one approach aims at enhancing the ability of specific actors to achieve their goals in a way that counteracts the other approach.

For instance, as Jobin et al. [37] note, “the need for ever-larger, more diverse datasets to ‘unbias’ AI might conflict with the requirement to give individuals increased control over their data and its use to respect their privacy and autonomy.” This can result in conflicts between design- and policy-oriented computer ethics. The European General Data Protection Regulation (GDPR), a regulation that primarily aims at enhancing the data protection rights of individuals and thereby strengthening their fundamental rights in the digital age [59], can conflict with approaches of design-oriented computer ethics aiming at mitigating bias in AI systems. Article 10(5) of the AI Act specifically addresses this issue and defines an exemption to the GDPR, which allows providers to “process special categories of data” according to Article 9(1) of the GDPR “to the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems” ([11], see also [39]). This exception to the data protection rules is needed to prevent the GDPR from obstructing bias mitigation strategies. Thus, depending on whether policy-oriented computer ethics promotes data protection regulation as initially defined in the GDPR or exceptions to data protection regulations as proposed by the AI Act, it conflicts with or complements such approaches to bias mitigation.

Moreover, design-oriented computer ethics can also counteract policy-oriented computer ethics. For instance, there is an ongoing debate about whether using AI to manipulate (or ‘nudge’) individuals into making choices for benign purposes, such as acting environmentally friendly, is ethically acceptable [44, 60]. Policy-oriented computer ethicists might conclude that such manipulation is not ethically justifiable and propose policies that prohibit it. However, defining which practices constitute ethically unacceptable manipulation is challenging. For example, the AI Act addresses this issue by prohibiting AI systems that deploy “subliminal techniques” [11] or exploit specific types of vulnerabilities linked to, for instance, “age, physical or mental disability” [11]. However, as Smuha et al. [39] note, this approach is “under-protective, as it only applies to the exploitation of a limited set of vulnerabilities” and leaves “the door open to many non-subliminal manipulative AI practices.” Therefore, if design-oriented computer ethicists assume that manipulation for specific benign purposes is legitimate, they could exploit such legal loopholes by customizing the design of manipulative AI systems to evade regulation. In such a case, an AI system is not situated in a policy vacuum because it enables novel actions that policy-makers have not yet considered; rather, placing the system outside the scope of existing policy is the very intention behind the respective design decisions.

Yet, as stated above, policy- and design-oriented computer ethics can also complement each other. As policies co-determine how a socio-technical system’s technical, social, political, and economic features influence the ability of involved actors to assert design decisions, policy-oriented computer ethics can enable developers to apply approaches to account for ethical values and principles in the design process. For instance, individual powerful actors in socio-technical systems can often prevent developers from accounting for ethical values and principles in technical design if this conflicts with their commercial interests. Here, policy-oriented computer ethics can promote regulation that establishes a threat of fines for not ensuring that technology design accounts for specific (operationalizations of) values. In doing so, it can change the cost-benefit analysis of these actors and soften or end their resistance to design decisions in accordance with specific ethical values.

Conversely, the design of the technical components of socio-technical systems also co-determines how well the respective socio-technical system can be regulated. For instance, in the case of AI systems, designing systems more transparently and providing explanations for how output is generated makes it possible to identify problematic uses. In turn, this enables the “formulation and justification of policies for the ethical use of such technology” [4].

6 Conclusion

This article reexamines foundational assumptions of computer ethics in light of the emergence of AI systems and AI regulation. It outlines both the challenges and the opportunities arising in this context. The main challenges concern how a socio-technical system’s technical, social, political, and economic features can hinder a successful application of policy- and design-oriented computer ethics. Furthermore, the article underlines that powerful actors in socio-technical systems can intentionally influence these features to co-determine the ability of other actors involved in the socio-technical system to achieve specific outcomes. With advancing regulation, AI systems are often no longer deployed in policy vacuums, which suggests that policy-makers are becoming such powerful actors. Thus, computer ethics will increasingly need to account for them as such in the future.

However, as mentioned before, this article argues that the emergence of AI systems and AI regulation does not only exemplify new challenges or hindrances for computer ethics. They also present new opportunities. Indeed, features of AI systems that potentially hinder a successful application of approaches to computer ethics are (often) only contingent, and computer ethics can influence them. Doing so can enable actors involved in designing and operating AI systems to account for ethical values and principles in the system’s design and use. Furthermore, computer ethics can acknowledge and make use of how power manifests in the technical, social, political, and economic features of AI systems. It can use the powerful position of specific actors in AI systems to assert how ethical values and principles are being accounted for in design decisions or other practices relating to the respective AI system.

Furthermore, the emergence of AI systems and AI regulation showcases novel interdependencies between policy- and design-oriented computer ethics. These interdependencies manifest as either conflicts or synergies. Policy- and design-oriented computer ethics have mainly been discussed as complementary in the pertinent literature [10, 61, 62]. However, this article shows that the two approaches can also be at odds with one another. Therefore, computer ethicists should engage with the question of whether pursuing certain goals has unintended effects on the application of design- or policy-oriented computer ethics elsewhere in a socio-technical system that can lead to such conflicts. Further research should investigate ways to systematically avoid or resolve such conflicts (where they were not consciously caused) and establish complementarity.

If computer ethics takes the developments showcased by the emergence of AI systems and AI regulation into account and adapts accordingly, its methods and concepts become more applicable in the context of AI. This provides new possibilities not only for computer ethics but also for AI ethics. Computer ethics benefits because it can apply its methods and concepts more easily and effectively in discourses on the ethical issues of AI. This improved applicability may also extend to other systems that share features with AI systems, such as complex constellations of involved actors, severe power imbalances, or a high degree of regulation. AI ethics, in turn, benefits because it can incorporate methods of computer ethics more easily and thereby augment the methodological and conceptual toolkit available to it.

Lastly, there are two crucial limitations to this article. First, this article focuses on design- and policy-oriented computer ethics specifically. However, as noted in Sect. 2, there are further approaches to computer ethics. Presumably, some of these approaches, such as professional ethics, are also affected by the developments discussed in this article. Further research should, therefore, examine the challenges and opportunities that arise for these other approaches to computer ethics due to the emergence of AI systems and AI regulation. Second, while the emergence of AI systems and AI regulation is a prime example to showcase the developments discussed in this article, these developments are not unique to AI. Policy advancements such as the European Digital Services Act [32] and the European Digital Markets Act [33] call into question the existence of a policy vacuum in relation to other computer technologies. Furthermore, other emerging technologies, such as blockchain technology, also exhibit power struggles among actors involved in design processes concerning the advancement of the respective system [18, 34, 35, 63]. Thus, while the challenges and opportunities discussed in this article are well illustrated by the emergence of AI systems and AI regulation, they are similarly prevalent in other contexts—and offer a rich field of study for future research.