1 Introduction

Artificial intelligence (AI) ethics as a discipline is still in the making, given not only the novelty of pervasive and powerful AI as such but, even more, its societal and commercial use and implications, which raise questions of moral reasoning, normative controversy, as well as governance and regulation [149, 154]. This article argues that some structural analogies from other applied ethics fields may serve as knowledge transfer bridges for AI ethics and provide cross-fertilization. We thereby build on a long tradition of cross-fertilization and knowledge transfer between subfields of applied ethics [38, 74, 123, 124]. Recently, a proposal to adopt medical ethics insights for the broader field of AI ethics has been published, with important spillovers [166]. In analogy, we propose to adopt insights from the field of business ethics to advance the institutionalization of organizational AI ethics, as ethical concerns about environmental and societal issues have produced experience with institutionalization, regulation, and management procedures for tackling ethical issues in corporate environments [24, 133]. Thus, insights about the successes and failures of institutionalizing ethics may help advance the discussion about AI ethics [32, 69, 149]. In turn, business ethics can gain by catching up with the recent challenges arising with AI in organizational contexts.

Broadly defined, business ethics focuses on “the study of business situations, activities, and decisions where issues of right and wrong are addressed” [31]. In this sense, business ethics comprises the study of the collective or ‘legal entity’ that produces goods and services, as well as of its individual members and the external stakeholders related to the business. From this perspective, technology corporations can be seen as powerful and pioneering collectives that maintain close relationships with governments and regulators via public affairs, lobbying, and corporate political activity, and sometimes also close collaborations regarding security and intelligence [6, 71].

Corporate misconduct and shady business practices have substantially contributed to the growth of business ethics as a discipline and, in a broader vein, to the advancement of governance and regulation [132, 161]. Over the past decades, business ethics has built a substantial body of literature dealing with various corporate scandals and with implementing positive organizational ethics [24, 122]. Some of these ethics and (corporate) social responsibility discussions have gradually been formalized and led to voluntary (soft) and mandatory (hard) law guiding business practices in both national and transnational realms [28, 133], thus triggering an institutionalization of organizational ethics [83, 98, 146]. Nonetheless, corporate misconduct and challenges with implementing ethics in firms’ daily practices persist, indicating limitations of institutionalizing ethics in businesses [7, 8, 95, 119, 126]. Consequently, AI ethics may gain from insights into both the successes and failures of institutionalizing business ethics in recent decades.

In the following, we suggest five well-researched and institutionalized topics and concepts, established for decades in business ethics, that may help to advance the current debate and institutionalization of AI ethics: (1) stakeholder management, (2) reporting on non-financial digital issues, (3) corporate governance and regulation, (4) AI/tech ethics in tertiary education, and, as the overarching topic, (5) greenwashing and digital ethics washing. We first outline the new and thriving discourse on AI ethics of recent years as a (sub-)discipline of the broader field of applied ethics, conjointly with business ethics as a valuable source of knowledge about the successes and failures of institutionalizing organizational ethics [32, 51, 69, 142, 149]. We then depict each of the five core concepts by elaborating on current AI ethics challenges while highlighting the knowledge transfer bridge from business ethics. Ultimately, we point out where limitations and potential for cross-fertilization between business and AI ethics arise, stressing the value of joining forces between applied ethics fields.

2 AI ethics as a (sub-) discipline of applied ethics

In the following, we outline why AI ethics should be integrated into the canon of applied ethics as a distinct field. Applied ethics as a field deals with “all systematic efforts to understand and to resolve moral problems that arise in some domain of practical life, […] or with some general issue of social concern” [174]. Within the (traditional) field of applied ethics, as Winkler continues, three major subfields are salient: biomedical, environmental, and business ethics. Whereas bio(medical) ethics revolves around issues of ethical concern in medicine and biomedical research, environmental ethics is more future-oriented, focusing on the sustainable preservation of the earth’s ecosystems and the biosphere [174]. Business ethics, which will be central to the arguments presented below, is concerned with ethical issues arising in professional contexts and with corporations’ conduct. All three subfields have grown over several decades, building a substantial research body and transitioning beyond a philosophical sub-discipline of applied ethics. For example, business ethics is considered an established field in general management and an academic discipline in its own right, with “chairs, societies, master’s programs, conferences, and dedicated journals” [122, 139]. Similarly, bioethics is considered a well-established field in biomedical research and public health, substantially contributing to advancing the practice of medicine [165]. Beyond these salient subfields, several other important research areas of applied ethics exist [for a comprehensive overview, see, e.g., 23].

A new and thriving discourse of applied ethics has gradually moved into the public and scholarly spotlight in the past seven years: AI ethics. AI ethics as “a part or an extension of computer ethics” is thereby closely related to, or sometimes used as an equivalent for, digital ethics, cyberethics, tech ethics, information ethics, data ethics, internet ethics, and machine and robot ethics [35, 73, 98, 103, 128, 149, 154, 156]. AI has spread rapidly across many areas of everyday life, driven by advancements in several areas of computer science and particularly in machine learning techniques, such as decision trees, support vector machines, neural nets, and deep learning (see, e.g., [29, 164]). With the gradual commercialization and deployment of AI across multiple sectors, such as self-driving cars, job applicant assessment, bank lending, and autonomous weapon systems, questions about AI decision-making have become a major concern [17, 94, 98, 125].

Criticism has been raised about AI’s predictive and classificatory potential leading to systemic errors and adverse outcomes for individuals and society. Numerous recent examples have shown that even unintended errors in AI’s design and training process can trigger profound ethical challenges regarding discrimination, fairness, privacy, and accountability [10, 39]. As a consequence, the study of the ethics of AI has become a salient focal point of academic discourse and has drawn the attention of practitioners, policymakers, and the general public [114]. Seen as a continuation and extension of the computer ethics discourse, AI ethics is still at an early stage of institutionalizing ethics to address the ethical challenges raised in organizational environments [69, 98, 149]. As Powers and Ganascia [125] note: “[t]he ethics of AI may be a “work in progress,” but it is at least a call that has been answered.”

2.1 AI ethics building on the tradition of knowledge transfer and cross-fertilization in applied ethics

This article strives to contribute to the progressing work in AI ethics by seeking to shed light on knowledge transfer bridges from the field of business ethics [98], thus underlining how AI ethics may benefit from drawing on previous knowledge and cross-fertilizing with other applied ethics disciplines. In this regard, we build on a long tradition of reciprocal enrichment and knowledge sharing between subfields of applied ethics [38, 74, 123, 124]. To illustrate this point, one may consider Eiser et al. [38] and Poitras [123] outlining the common ground between medical and business ethics; Hoffman [74] exploring the interconnections between environmental and business ethics; and Potter [124] linking medical and environmental ethics. The most recent, hands-on approach to such a mutually enriching knowledge transfer has been published by Carissa Véliz [166], who transfers insights from medical ethics to AI ethics. The analogy between the two “is quite close,” but as Véliz also writes, “the analogy between medical and digital ethics is not perfect, however. The digital context is much more political than the medical one, as well as more dominated by private forces, and it will have to develop its own ethical practices” [166]. In light of this accurate description, and in analogy to Véliz, this article proposes to adopt insights from the well-established field of business ethics for AI ethics. The aim is not to present a ‘better’ analogy but to complement and empower Véliz's [166] work with a selection of well-researched and institutionalized topics and concepts established for decades in business ethics, thereby hoping to advance AI ethics through a multilateral perspective, ideally joining forces from various established ethics fields (see also [32, 51]). Given the scale and frequency of business scandals of the past decades, there is a lot to learn from business ethics about tackling the ethical challenges of institutionalizing organizational ethics [83, 146]. As Venkataramakrishnan notes [167]: “[t]he assumption that tech ethics is mutually exclusive with innovation is at best lazy; so is the view that ethical treatment is an “optional” extra for companies that can afford it. […] Other sectors, such as coal and oil, have faced a reckoning over the impact they have on society; there is no reason why technology should not do the same.” Over the past decades, many ethical and social responsibility debates have been translated into voluntary and mandatory regulation in national and transnational contexts. Consequently, in the following, we portray five knowledge bridges connecting business and AI ethics, explaining today’s challenges in AI ethics followed by the respective business ethics concept we propose to adopt concerning the institutionalization of organizational AI ethics (Fig. 1).

Fig. 1 Five knowledge bridges to advance the institutionalization of organizational AI ethics

3 Five business ethics areas to advance AI ethics

3.1 Stakeholder management

The digital transformation driven by AI and algorithmic decision-making comes with new and often invisible or unclear societal impacts [3]. Related ethical issues often remain blurry (job loss, dehumanization, singularity) or technologically mediated (face recognition, privacy (by design)), and few influential non-governmental organizations (NGOs) address AI ethics. Initially, AI ethics boards and committees were created to engage with societal concerns. However, some have become mired in controversy, dissolved shortly after their launch [94], or seen ethicists step down [4]. Thus, they are often considered a public relations measure or an ethical façade, a practice critics have labeled ‘ethics washing’ or ‘machinewashing’ [11, 108, 148].

Here, the stakeholder management approach may offer guidance on openly and systematically addressing the ethical issues of AI to initiate dialog, participation, and deliberation. In 1984, stakeholder theory and management were introduced to counter the dominance of shareholder or stockholder theory [52, 54]. Whereas shareholder theory claims that the social responsibility of corporations is to increase profits, stakeholder theory underlines that corporations have to actively manage relations with stakeholders, defined as everyone who affects or is affected by the company [14]. Over the past decades, stakeholder theory became a highly influential framework to analyze ethical problems that arise in corporate decision-making [53]. However, stakeholder theory also attracted criticism, which is helpful for understanding where its theoretical and practical limitations lie. Orts and Strudler [119] summarized the central weaknesses of stakeholder theory in three points: (1) the issue of defining and identifying stakeholders; (2) the semantic vagueness and conceptual flexibility when it comes to approaching concrete stakeholder problems; and (3) the challenge of balancing conflicting stakeholder interests in corporate decision-making without a common measure. Banerjee [7] also takes up this point, warning about “stakeholder colonialism” when stakeholder theory is utilized as a means to regulate stakeholder behavior. As a consequence, the risk arises that corporations focus unilaterally on stakeholders with a financial or competitive influence on the firm, while disregarding the interests of marginalized stakeholder groups [7].

First attempts to define and engage stakeholders in AI ethics exist [3, 5, 97, 169]. Still, a rollout similar to the one business ethics saw 35 years ago is yet to come. In this regard, stakeholder theory as a concept for strategic organizational management provides a systematic approach to identifying, addressing, and balancing the competing demands of individuals and groups affected by technology companies [36, 53]. As stressed by Ayling and Chapman [5], stakeholders of technology firms are those “who either have direct roles in the production and deployment of AI technologies or who have legitimate interests in the usage and impact of such technologies.” In light of the critique above, the complexity and opposing nature of stakeholder interests make applying stakeholder theory in practice challenging [119]. However, the relative adaptability of the theory as a managerial instrument for responding to the moral demands of internal and external stakeholders has been demonstrated in many types of organizations and sectors, including healthcare institutions, NGOs, and, recently, on-demand labor platforms [100]. Proper stakeholder management in AI ethics means going beyond an ethical façade and addressing the interests and demands of everyone who affects or is affected by products and services featuring AI, regardless of their salience or relative power [5, 53]. Consequently, engaging with stakeholder theory can advance practice-driven approaches to AI ethics, while shedding new light on the theory of the firm when discussing narrow and broad definitions of stakeholders in digital environments [70].

3.2 Reporting on non-financial digital issues

A core challenge of AI ethics is the opacity and obfuscation of algorithms and third-party data collection [22, 40]. Open source as a transparent alternative remains, to a large extent, a niche segment. The reasons for this are manifold (patents, protecting business models, value chains). However, frequent scandals shed light on ethical issues where AI and data are used for political surveillance, disinformation, or manipulation [58, 67, 131]. Organizations and AI developer teams often make bold claims about how effectively their algorithms perform, yet give insights into neither the training data nor the code underlying the algorithm. The opacity and obfuscation are most striking in the context of law enforcement technology involving AI. A recent scandal involving the corporation Clearview, which offers a facial recognition app to law enforcement agencies, made this evident [72]. Hill [72] outlines that the corporation built a large-scale database with millions of photos scraped without permission from various social media sites and used them to train its opaque AI system. Similarly, an AI system outlined in an academic publication made headlines with the claim that “[w]ith 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime” [48]. An open response letter to the publication outlet, signed by over 1000 experts and researchers, made clear that “there is no way to develop a system that can predict or identify “criminality” that is not racially biased—because the category of “criminality” itself is racially biased” [10, 27].

Major scandals (see, e.g., Enron) led to new standards in business ethics, with businesses disclosing information on ethically sensitive topics like integrity and societal or environmental impacts [132, 161]. Corporate Social Responsibility (CSR) reporting, or non-financial reporting, has substantially evolved in recent years, providing an essential resource for standardizing and comparing corporate conduct for both strategic and ethical reasons [96, 141, 162]. The non-profit Global Reporting Initiative (GRI) and the International Integrated Reporting Council (IIRC) offer standardized key performance indicators for CSR reporting plus reporting guidance for the material disclosure of stakeholder dialog engagement [88, 102, 171]. The CSR reporting standards strive to support corporations in their efforts to transparently share economic, environmental, and social information with their stakeholders [88]. The main goal is to enhance the quality and comparability of the reported information, based on specified disclosure metrics [59], thus allowing stakeholders to assess reliable and consistent information about a firm’s societal and environmental impacts. CSR reporting can even match mandatory financial reporting standards via digital-data-based reporting taxonomies like XBRL, as established by the U.S. Securities and Exchange Commission for financial reporting [138]. Some technology companies, such as Deutsche Telekom, already include ‘data protection and data security’ chapters in their CSR reporting [33]. And (local) governments are moving forward with legislation that requires reporting on automated decision systems, although they remain vague about what needs to be disclosed [30, 158]. These examples show that transparent and standardized disclosure on algorithms and data collection processes is undoubtedly part of the future of a more institutionalized AI ethics as corporate digital responsibility [70].

Whereas CSR reporting has come a long way from often incomparable ‘glossy’ public relations reports to established standards with clearly delineated disclosure metrics, AI ethics reporting can build on these insights to advance the recent discourse about ‘what and how’ to report on AI systems [7, 77, 163, 168]. The GRI and similar standards for CSR information disclosure can serve as a starting point to determine the type of information that can help stakeholders assess corporate conduct and increase the transparency of AI systems and the handling of personal data [86]. The challenge for AI ethics reporting lies in the timely provision of reliable and comparable information on algorithmic systems based on disclosure metrics [59]. Recent research has already begun outlining types of information that may be relevant in this regard, such as fairness metrics, impact assessments, bias testing, system accuracy, and workflow verification [16, 92, 101, 145]. Analogous to the Environmental, Social, and Governance (ESG) framework, Herden et al. [70] outline a list of 20 items indicating highly relevant information domains, such as energy and carbon footprint, socially compatible automation, and data responsibility and stewardship.
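To make the notion of standardized disclosure metrics more tangible, the following minimal Python sketch illustrates how two commonly discussed group-level indicators (per-group accuracy and the demographic parity difference) could be computed and packaged as machine-readable disclosure items. It is an illustration only; the function and field names are hypothetical and not part of the GRI or any other existing reporting standard.

```python
# Illustrative sketch only: two commonly discussed group fairness indicators,
# computed and packaged as standardized disclosure items. Function and field
# names are hypothetical, not part of any existing reporting standard.
import numpy as np

def disclosure_metrics(y_true, y_pred, group):
    """Return overall and per-group accuracy plus the demographic parity difference."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {"overall_accuracy": float((y_true == y_pred).mean())}
    positive_rates = {}
    for g in np.unique(group):
        mask = group == g
        report[f"accuracy_group_{g}"] = float((y_true[mask] == y_pred[mask]).mean())
        positive_rates[g] = float(y_pred[mask].mean())
    # Demographic parity difference: gap in positive prediction rates across groups.
    report["demographic_parity_difference"] = max(positive_rates.values()) - min(positive_rates.values())
    return report

# Example: a binary classifier's decisions for two demographic groups.
print(disclosure_metrics(
    y_true=[1, 0, 1, 0, 1, 0],
    y_pred=[1, 0, 1, 1, 0, 0],
    group=["a", "a", "a", "b", "b", "b"],
))
```

Such simple, reproducible figures are one way that disclosure could move from narrative claims about fairness toward comparable, auditable indicators, in the spirit of GRI-style metrics.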

Recent proposals for the regulation of AI in the European Union foresee reporting obligations for providers of high-risk AI systems, which encompass the disclosure of AI-related incidents and malfunctioning [44]. In addition, the EU’s General Data Protection Regulation (GDPR) strives to establish transparent insights into algorithmic decision-making, aimed at individuals as well as at third-party and regulatory oversight [85]. In contrast to CSR reporting, this regulatory approach goes beyond the provision of a single report covering all stakeholders and includes an “individual right to explanation,” thus providing so-called data subjects and experts with distinct kinds of information [85]. In practice, this far-reaching approach to transparency creates high costs for corporations and raises doubts about how “meaningful” the presented information is for individuals [34]. In light of these and other challenges, Edwards and Veale [37] suggest focusing on creating better algorithms a priori, via certification systems and privacy-by-design requirements. Overall, stimulating the discussion on the formalization of AI ethics information disclosure is an important task for practitioners and academics.
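To illustrate what a “meaningful” individual explanation might look like in practice, the following minimal sketch translates a simple, entirely hypothetical scoring model’s decision into a short, person-specific explanation of the kind the “right to explanation” debate revolves around. It makes no claim about GDPR compliance; the feature names, weights, and threshold are invented for illustration.

```python
# Illustrative sketch only: turning a hypothetical credit-scoring model's decision
# into a short, person-specific explanation. Feature names and weights are made up.
WEIGHTS = {"income": 0.8, "existing_debt": -1.2, "years_employed": 0.5}
THRESHOLD = 1.0

def explain_decision(applicant: dict) -> str:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    # Rank features by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = ", ".join(f"{name} ({value:+.2f})" for name, value in ranked[:2])
    return f"Application {decision} (score {score:.2f}); main factors: {reasons}"

print(explain_decision({"income": 1.5, "existing_debt": 0.9, "years_employed": 2.0}))
```

Even such a toy example shows the tension the literature describes: the output is easy to generate for a linear model, yet whether it is genuinely meaningful to the affected individual, or merely technically compliant, remains an open question.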

3.3 Corporate governance and regulation

Next to ethics boards and committees, corporations and legislators have created soft-law guidelines to govern AI ethics. Soft-law guidelines represent voluntary measures to govern the ethical development and deployment of AI [110]. In essence, the guidelines build on high-level ethical principles and values to align AI systems with the common good [159]. Over 200 such soft-law guidelines have been issued by public and private actors within the past five years, compiled and clustered in the repository of Standards Watch [117, 150]. A recent systematic review by Jobin et al. [79] analyzed the ethical content of 84 of them, finding that the guidelines converge on five major principles: (1) transparency, (2) justice and fairness, (3) non-maleficence, (4) responsibility, and (5) privacy. However, Jobin [80] cautions that “despite an apparent convergence on certain ethical principles on the surface level, there are substantive divergences on how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented.” Thus, ethical guidelines have been criticized as lacking the moral authority to represent the public good [159]. Their inherent vagueness is understood by critics as instrumental communication to engage in self-regulation while preventing governmental regulation [11].

This reveals a striking analogy to business ethics, where self-regulation was praised for closing regulatory gaps in global governance and as a means for gaining ‘moral legitimacy’ (see, e.g., [51, 121, 137, 173]). Analogous to AI ethics, critics warned about the weaknesses of ascribing “mid-level principles” to individuals, ignoring the complexity of organizational contexts, and lacking philosophical depth [8, 95]. As the business ethics literature evolved over the years, several authors have warned about the conceptual and practical limits of industry self-regulation [51, 107, 130], leading critics to condemn it as a lobbying strategy. In a public health study published in the Lancet, Moodie et al. [112] describe industry self-regulation as a means that lacks proof of effectiveness and safety. In a press interview about the study, Rob Moodie vividly summarizes the core issue at hand: “[s]elf-regulation is like having burglars install your locks” [120]. Thus, the business ethics literature indicates persistent challenges that go along with the implementation of ethics in organizational contexts, particularly the gap between theoretical principles and practice [8].

Doubts about industry self-regulation are also characteristic of the current AI ethics debate. Governing the ethical challenges of AI has become a high priority for legislators worldwide [51, 91, 113]. Particularly, the European Union has recently made headlines for its moves to go beyond the self-regulatory pledges of technology corporations [42, 43, 101]. These steps have not remained unnoticed by private sector companies engaged in the development and deployment of AI [153]. In 2020, the lobbying budget of Big Tech companies, e.g., in the European Union, reached an all-time high to fight upcoming legislation, as a leaked document published by the New York Times revealed [136]. Thus, although hard-law regulation for AI is on the way, at least in some jurisdictions [45], AI ethics may still benefit from business ethics research, especially as current AI regulations come with many omissions and gaps meant to be filled by soft-law instruments and combinations of co- and self-regulation [26, 32, 101, 143]. Business ethics research, with its “conceptualization of CSR as a form of co-regulation that includes elements of both voluntary and mandatory regulation,” can be beneficial in this regard [57]. Consequently, past experiences and learnings from CSR may illuminate the institutionalization of AI ethics and industry standard-setting [147] to advance the AI ethics field [2, 28, 151].

Instead of waiting for a major AI scandal with fatal implications, it would make sense to anticipate such events [115]. Business ethics was substantially advanced after the Enron and WorldCom scandals in the U.S. in 2001 and 2002, which subsequently led to the Sarbanes–Oxley Act of 2002, enforcing accountability and demanding an ethics officer and a dedicated code of ethics that puts values into practice [109, 132]. A recent publication by Mökander and Floridi [111] provides a glimpse into one possible pathway for transferring principles into practice via ethics-based auditing of AI. Besides, one of the crucial achievements of business ethics, central to codes of ethics, is whistleblowing and whistleblower protection, a mechanism that may have helped in the recent Google case of Timnit Gebru [78, 80]. Similarly, the trend for disclosure of ethical content is heading towards mandatory CSR reporting in the EU, and India recently revised its Companies Act towards mandatory social responsibilities [57]. Consequently, AI ethics may benefit from these insights.

3.4 AI/Tech ethics in tertiary education

The more AI spreads beyond traditional boundaries, the more paramount becomes the question of how to embed ethics in the higher education of software developers, engineers, and AI practitioners in general [60, 153]. Graduates will be working for organizations that are inventing the future, including upcoming scandals and potential disasters. Johnson [81], therefore, recently stressed, “[t]he question is not whether engineers make moral decisions (they do!), but whether and how ethical decision-making can be taught.” Higher education institutions face increasing demands to offer curricula that prepare students for the practical challenges arising with AI’s development and deployment and provide them with a comprehensive understanding of AI’s ethical and philosophical impacts on the broader society [20, 153]. Some institutions, such as Harvard, have already begun experimenting with pilot courses on ethical reasoning embedded in their computer science curricula, organized conjointly with philosophy departments [66]. However, little is known about such ethics courses in AI/tech curricula on a global scale. A recent review of 115 syllabi from university technology ethics courses by Fiesler et al. [50] found a lack of consistency in the course content taught and a lack of standards. Course content may cover topics as diverse as law, policy, privacy, and surveillance, as well as social and environmental impact, cybersecurity, and medical/health [50]. For Fiesler et al. [50], this broad topic range and inconsistency in teaching content across syllabi do not come as a surprise, given the current lack of standards, which leaves educators with leeway to design courses at their own discretion. As Garrett et al. [56] note, “if AI education is in the infancy stage of development, then AI ethics education is barely an embryo.”

Here, a glimpse at business ethics may give some indication of ways in which ethical reasoning may become institutionalized as an integral part of computer science education. In the past, the way business ethics was taught varied widely, with business school courses ranging from compulsory to elective to no business ethics courses at all [47, 55]. In addition, the integration of business ethics into curricula was often facilitated by non-experts and characterized by a lack of monitoring of the integration process [87, 134]. However, scandals such as Enron and WorldCom led to more formalized implementation processes and legal prescriptions such as the Sarbanes–Oxley Act in the U.S. [132, 161]. In response to the outrage over ethical and financial misconduct, the Sarbanes–Oxley Act legislated ethical behavior for corporations listed on the stock market and their auditors [132]. Among other things, a code of ethics became a legal requirement for publicly traded companies. Due to the new legislation, business school accreditors began to ask for dedicated business ethics courses and professors reflecting the legal prescriptions [106, 161]. If a business department wants to achieve the so-called Triple Crown Accreditation (AACSB (U.S.), AMBA (U.K.), and EQUIS (EU)), ethics courses and dedicated faculty are a must today to prepare students for ethical dilemmas they may face in their future careers [132, 161].

This institutionalization process can be a helpful analogy to advance AI ethics curricula and revitalize a debate that started in computer ethics several years ago [13, 25, 161]. Currently, the U.S. is one of the few examples where the integration of ethics into accredited computer science programs moves in this direction. The Accreditation Board for Engineering and Technology (ABET) requires students to have “[a]n understanding of professional, ethical, legal, security and social issues and responsibilities” [1, 135]. However, the precise implementation of ethics in the curricula is left to the institutions and professors [56]. In this regard, curricula design may draw on the rich and multifaceted literature already established in applied ethics [3, 69, 104, 105, 128, 139], provide a diverse scope stretching across Western and Eastern ethics [41], and include topical approaches, even those informed by other scientific fields, such as cognitive (neuro)science [62, 63]. Further, AI ethics education can build on innovative approaches [20, 21] and education technologies that were not present two decades ago, opening new pathways for engaging with ethical reflection [127].

In addition, closer attention needs to be paid to the role of regulatory bodies making legal prescriptions about the ethics content in AI education, analogous to the Sarbanes–Oxley Act [49, 135]. The EU’s GDPR and the EU’s more recent regulatory proposals are indicative of more concrete prescriptions to provide future employees with profound knowledge about sensitive ethical topics such as data privacy and security, bias avoidance, and equal treatment [44, 144]. Thus, even without explicit legal prescriptions, the need for regulatory compliance will certainly increase the demand for more institutionalized AI ethics education. Another pathway may build on the 2009 “no ethics, no grant” decision of the U.S. National Science Foundation [116], which decided that institutions receiving funds have to teach ethics. A similar road could be taken for AI curricula: no AI/tech ethics, no degree (or no accreditation).

3.5 Greenwashing and ethics washing

As shown, AI ethics is both in high demand and on the rise. Yet, the previous points also indicate that some damage has already been caused regarding the credibility and moral authority of AI ethics [148]. Critics point to the possibility that “ethical AI” or “responsible AI” represents an invention by Big Tech to manipulate academia and to avoid regulation [32, 61, 118, 160]. Recently, the terms “ethics washing” and “machinewashing” have been coined [108, 170], referring to “a strategy that organizations adopt to engage in misleading behavior (communication and/or action) about ethical Artificial Intelligence” [142]. In light of this deceptive strategy, “ethics bashing” has also entered the scene, criticizing the “trivialization of ethics and moral philosophy now understood as discrete tools or pre-formed social structures such as ethics boards, self-governance schemes or stakeholder groups” [11]. This abuse of ethics and the disregard of reflexive moral reasoning are particularly worrisome, and stakeholders are certainly right to strongly object to such corporate practices. However, as shown above, applied ethics as a discipline of reflexive moral reasoning and inquiry goes much deeper and offers a range of analytic and practical tools that can help prevent unethical behavior in corporate contexts [123, 126]. As in every academic discipline, limitations exist: consider Tenbrunsel and Smith-Crowe [157], who discuss weaknesses of normative theory and how biases impede rational moral decision-making, or Bartlett [8], who highlights the persistent gap between theory and practice. Admitting and actively engaging with such limitations is what characterizes ethics as a discipline of critical inquiry about morals, and what helps to develop and improve the analytic and practical tools used by organizations [7, 95, 119, 157]. In sum, “[e]thics has powerful teeth, but these are barely being used in the ethics of AI today,” creating the risk of ethics washing and ethics bashing [129].

The reputational damage that goes along with ethics washing proves the proximity of AI ethics and business ethics, as private entities are crucial actors striving to dominate both market and non-market spheres [142]. On the one hand, ethics washing may be seen as an instrumentalization of societal values to gain a larger market share, such as promoting “AI for good” while selling surveillance technology [61, 82, 160]. On the other hand, ethics washing serves as a means to avoid external regulation, as in the case of corporate lobbying and self-regulatory approaches to prevent regulation or influence it in favor of technology firms [61, 129, 160]. Thus, as shown by business ethics research, persuasion and lobbying fulfill instrumental purposes as “non-market strategies” not only to succeed in market competition but to conquer and dominate markets through shaping legal frameworks [75, 142].

In business ethics, greenwashing has been researched since the mid-1980s, when environmental ethics and the green movement gained traction [9]. NGOs like Greenpeace raised awareness by presenting specific criteria for identifying corporate greenwashing, and governmental actors, such as the U.S. Federal Trade Commission, followed suit with regulatory guidelines to help practitioners avoid making unfair or deceptive environmental claims [46, 65]. This role of governing bodies with regard to green corporate communication, in particular, is well suited to inform the debate on ethics washing and to anticipate challenges and advancements in AI ethics.

Over the years, a substantial body of business ethics literature has examined greenwashing practices through various theoretical lenses, ranging from the individual level (e.g., agency theory [15]) through the organizational level (e.g., organizational institutionalism [64]) up to the institutional level (e.g., legitimacy theory [152]). Thus, the greenwashing literature provides a range of typologies to better understand misleading corporate claims and to explore the fine line between deceptive and non-deceptive communication and action [90, 93, 99, 142]. Consequently, there is no need to reinvent the wheel; rather, a rich body of greenwashing research exists that can inform the study of AI ethics washing and the accompanying deceptive practices of technology corporations [61].

4 Limitations and future research

Recognizing the new and prevalent AI ethics that has emerged in recent years, this article drew on five salient areas of the business ethics literature to elaborate on the potential for knowledge transfer and spillovers between these applied ethics fields concerning the institutionalization of organizational ethics. Against this background, future research is needed to study the five depicted insights in depth. One major limitation, however, is the scope of this knowledge spillover: just as business ethics over the past decades failed to make the business world ethical, AI ethics cannot be expected to make AI fully ethical. This, however, is no reason not to keep pushing the application of ethics to societally and technologically important fields such as AI and business and their overlaps. Thus, beyond the scope of this paper, fruitful avenues for future research open up when it comes to: (1) defining, identifying, and engaging with AI stakeholders and, for instance, mapping their salience; (2) the type of information on algorithms that specific stakeholders may need, value, and be able to comprehend concerning AI; (3) building effective governance mechanisms that help to connect ethical codes to practice; (4) designing AI/tech ethics courses and generating teaching content that provides ethical backgrounds while preparing students for the ethical challenges they will face in their future careers; and (5) gaining a more profound understanding of the causes and outcomes of ethics washing, particularly in light of different stakeholders. Overall, the five knowledge bridges represent a non-exhaustive list of concepts the business ethics literature may offer, opening space for future research to extend this initial set and identify topic areas where spillover effects and cross-fertilization between AI ethics and business ethics may occur. Such mutually beneficial knowledge creation may even emerge in areas where business and AI ethics currently differ, as will be discussed next.

From an AI ethics perspective, the depicted business ethics concepts may provide new insights that can help to develop the research agenda. However, it is essential to stress the potential limits of the structural analogy between the two applied ethics fields. Whereas corporations can be seen as artificial entities or legal persons [12], AI and automated decision-making systems (currently) lack such a status [19, 84]. Thus, when the rights of others are violated, the AI, or respectively the algorithmic decision-maker, that caused the infringement cannot be held accountable in the way a corporation can. Although in some jurisdictions (e.g., European Union, UK) legal framework adjustments are being considered regarding the civil and criminal liability of AI, such legal overhauls are regarded as troublesome and unlikely [19, 76]. However, Jowitt [84] argues that legal personhood for AI should be granted if the threshold of “bare, noumenal agency in the Kantian sense” is reached. With constant progress made, AI may well advance in the coming years, further fueling this complex debate. Nevertheless, at least from a short-term perspective, it remains unfeasible to treat AI as a legal person and, thus, a potential stakeholder. For the business ethics literature, AI as a latent stakeholder is likewise an important topic for future research, showing where the two fields of applied ethics can produce cross-fertilizing insights.

A second area where the structural analogy between business and AI ethics diverges emerges from how AI is developed and distributed. The twentieth-century business context was characterized by economies of scale, where corporations focused on standardized goods, produced, distributed, and marketed in mass [155]. However, today’s digital business environment differs substantially. AI as a product or service may be costly to develop in the first place but can be digitally replicated at close to zero cost. On top of this comes unprecedented customization due to AI’s ability to adapt at an individual level based on fine-grained user data [175]. Thus, once developed, AI can be quickly reproduced and offered even at a personal level.

Consequently, small and less visible firms can successfully compete in the market, often out of sight of the public eye. This represents a substantial difference from the twentieth-century multinational enterprise, whose business conduct could be easily and closely followed by the public and dedicated NGOs alike. In light of a larger number of small and less exposed firms, the watchdogs of public concerns could become less effective at spotting irresponsible business practices. Further, whereas ethical issues related to goods and services produced, distributed, and marketed in mass were relatively salient and easy to spot, AI’s digital nature and individual adaptability render ethical challenges much more complex and harder to uncloak. Ethical issues, such as biases and discrimination triggered by AI, can remain hidden, with the affected individual unaware [140]. Ultimately, from a business ethics perspective, the increasing adoption of AI in business products and processes makes it necessary to critically revisit established concepts and theories in light of the computer science knowledge that AI ethics can provide.

5 Conclusions

This manuscript strives to create awareness for further expanding the recent discourse on AI ethics by highlighting topics and concepts about the institutionalization of organizational ethics discussed in other applied ethics fields. Business ethics, in particular, has a long history of dealing with the challenges of institutionalizing ethics in organizational contexts. By building on this established research body, our manuscript has sought to highlight the common ground between business ethics and AI ethics, discussing topics and concepts about the institutionalization of organizational ethics that can trigger a joint debate. Given the rapid deployment and use of AI across multiple areas of life, the new and thriving discourse on AI ethics as a discipline will undoubtedly become increasingly important for other applied ethics fields. For instance, the business ethics debate can benefit from the recently expanding discussions on algorithmic biases [10, 68, 89, 172]. Thus, both systematic and practical potential lies in joining forces between the ethics of AI, medicine, business, and beyond. Here, ethics is understood as an academic discipline meant to prevent ethics washing (and not fuel it). Therefore, the contribution of ethics lies not in adding another box to be ticked, in the sense of an offer in service as a supplier, but in reflecting on and contributing to the human discourse characterized by fuzziness and openness and guided by theory and reason. This is the ‘tool’ that AI ethics may provide as an ‘early indication system,’ without turning it into a measure for the quantitative toolbox, which is not what ethics (as a philosophical discipline) is about in the first place [18].