
1 Introduction

The increasing integration of artificial intelligence (AI) into software engineering (SE) businesses is revolutionizing technology development, necessitating the incorporation of ethical requirements into management practices. This shift, emphasized by research [12, 30], calls for aligning AI functionalities with ethical principles, which are essential for guiding decision-making toward trustworthy AI systems. Ethical requirements provide tangible actions derived from broader ethical principles like transparency, fairness, and privacy. For instance, the general principle of transparency becomes the need for “explainability” in AI, ensuring decision-making processes are clear and comprehensible for users [18]. As AI becomes more prevalent in sensitive sectors like healthcare and education, SE organizations face increasing pressure from stakeholders, including developers, users, and regulators, to ensure AI systems like ChatGPT are not only innovative but also responsible and trustworthy [18, 30].

Creating AI systems that are ethical and in sync with societal norms is a crucial aspect of trustworthy AI [12, 29]. Despite this, the SE management stakeholders who guide decision-making find it challenging to incorporate ethical requirements into their practices effectively [1, 5, 12]. A primary challenge lies in determining which ethical requirements are relevant to the business and representing them accordingly in management approaches [1, 5]. This difficulty is compounded by a noticeable disconnect among these stakeholders in recognizing the value of ethical requirements [1, 5]. Existing ethical guidelines, which focus primarily on the technical aspects of SE projects, further exacerbate this gap by neglecting the equally critical managerial dimensions that guide decision-making [25, 36]. This omission leads to the undervaluation of ethical considerations and exposes organizations to legal, reputational, and regulatory repercussions [1, 4].

To address the challenges SE management stakeholders face in determining and valuing ethical requirements in AI systems, our study utilizes the IEEE Standard Model Process for Addressing Ethical Concerns during System Design (IEEE Std 7000–2021) [19]. The standard serves as a vital tool at the concept exploration and concept of operations (ConOps) development stage, offering a comprehensive roadmap for embedding ethical considerations into the creation and operation of autonomous and intelligent systems (A/IS). It encourages managerial stakeholders to engage actively in four critical areas: identifying ethical requirements relevant to their System of Interest (SOI), eliciting these requirements based on applicability, prioritizing their importance, and incorporating them into management strategies as key stakeholder success factors. While the standard acknowledges that ethical consideration is not solely the responsibility of management, it underscores management’s pivotal role in establishing ethical benchmarks and supervising their outcomes. Consequently, our research is driven by two fundamental questions:

RQ1: What ethical requirements do SE management stakeholders consider crucial for an AI-empowered SOI?

RQ2: How can ethical requirements be effectively evaluated and integrated as success factors in SE management strategies for an AI-empowered SOI?

The primary aim of this study is to underscore the crucial role of ethical requirements for SE businesses, particularly in AI-enhanced environments. By addressing the outlined research questions, we seek to guide organizations to circumvent ethical pitfalls and cultivate a culture of trustworthiness in AI development. Our objective is to contribute significantly to the ongoing conversation about integrating ethics into AI and SE practices, ultimately aiming to bolster stakeholder trust and position organizations as frontrunners in ethical AI deployment.

The remainder of this study is organized as follows: Sect. 2 provides an overview of the background and existing literature, while Sect. 3 describes our research methodology, including data collection, analysis, and key findings. Discussions based on our insights are presented in Sect. 4, and Sect. 5 offers the study’s conclusions.

2 Background

AI ethics aims to ensure AI technologies are developed and utilized in alignment with ethical and societal values, preventing unforeseen consequences or harm. It examines the ethical principles and moral concerns tied to the creation, implementation, and usage of AI systems [26]. While AI ethics also encompasses concerns about machine behaviors and the potential emergence of singularity-level intelligent AI [26], this study does not explore that dimension. Issues like bias, surveillance, job displacement, transparency, safety, existential threats, and weaponized AI underscore the imperative of instilling ethical considerations into AI engineering. Consequently, private, public, and governmental stakeholders have established AI principles as ethical guidelines. Notable among these are the EU’s trustworthy AI guidelines (AI HLEG), IEEE’s Ethically Aligned Design (EAD), the Asilomar AI Principles, and the Montreal Declaration for Responsible AI [18, 19]. Guiding principles distilled from various guidelines, as outlined by Ryan and Stahl [32] and Jobin et al. [21], include Transparency, Justice, Non-maleficence, Responsibility, Privacy, Beneficence, Autonomy, Trust, Sustainability, Dignity, and Solidarity.

2.1 Ethical Requirements

Ethical requirements are multifaceted, requiring careful consideration and interdisciplinary collaboration spanning technology, law, philosophy, and the social sciences [24]. Ethical requirements for AI are primarily derived from foundational ethical principles, such as transparency and fairness, and are pivotal for fostering trustworthy AI [15]. They help interpret the guiding principles and standards that ensure AI systems’ ethical design, creation, deployment, and operation. From the principle of privacy, for instance, follows the ethical requirement of privacy and data protection, entailing that AI systems should handle personal and sensitive data carefully, in accordance with legal regulations and best practices [15, 21]. As such, ethical requirements help build trust and align AI endeavors with human values and societal aspirations [15]. In SE, however, ethical requirements are predominantly articulated as functional and non-functional requirements during the development phase [15]; they are seldom addressed at the management level, typically only insofar as needed to meet legal mandates like the General Data Protection Regulation (GDPR) [1, 24].

2.2 Trustworthy AI

With the increasing integration of AI across various aspects of human life, the concept of Trustworthy AI has evolved to encompass a broader range of societal and environmental considerations, including the implications for employment, societal equity, and the environment. Despite the presence of specific frameworks and guidelines from organizations, governments, and international bodies, the critical requirements that truly define what makes AI trustworthy remain a central concern [12, 29]. The AI HLEG and IEEE EAD have been instrumental in identifying critical ethical requirements, significantly shaping the discourse on trustworthy AI [18, 19]. These frameworks outline key ethical principles that guide both academia and industry professionals. The AI HLEG highlights seven key requirements for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. Concurrently, the IEEE EAD emphasizes five: human rights, well-being, accountability, transparency, and awareness of AI’s potential for misuse [19]. There is notable convergence in these requirements, which we explain as follows:

- Human agency and oversight: emphasizes the importance of human rights and underscores the indispensability of human direction and supervision.
- Technical robustness and safety: stresses crafting AI systems that resist threats, prioritize safety, have inherent protective mechanisms, and exhibit consistent, dependable, and replicable outcomes.
- Privacy and data governance: navigates the privacy terrain, advocating for data integrity, quality, and accessibility.
- Transparency: entails a commitment to traceability, explainability, and effective communication of AI processes.
- Diversity, non-discrimination, and fairness: encourages equitable AI practices, advocating for unbiased algorithms, universal design principles, and inclusive stakeholder engagement.
- Societal and environmental well-being: focuses on AI’s societal imprint, ranging from its ecological footprint to its broader societal repercussions and democratic implications.
- Accountability: encompasses regular auditing, transparent reporting, harm minimization, and effective remedial mechanisms.

These requirements find application in tools like ECCOLA and Ethical User Stories (EUS), which are pivotal in executing the IEEE Std 7000–2021 approach of this study.

ECCOLA is an Agile-oriented method designed to enhance developers’ awareness and execution of AI ethics in SE [36]. It synthesizes ethical requirements from the AI HLEG and EAD, consolidating them into seven core themes, or requirements, with associated sub-requirements. The approach takes the form of a 21-card deck organized around these seven primary requirements: transparency, data, agency and oversight, safety and security, fairness, well-being, and accountability, plus a stakeholder analysis card. Each requirement is further represented by one to six dedicated sub-requirement cards, and each card is segmented into three components: the rationale behind its importance, actionable recommendations, and a tangible real-world example [36].

The Ethical User Story (EUS) concept integrates the user story methodology with an ethical toolset, facilitating the extraction of ethical requirements during technological design or development processes [16]. In SE and Agile methodologies, user stories bridge business objectives and development activities by succinctly capturing customer demands [10]. These stories act as conduits fostering understanding between developers and users: they distill intricate concepts into more targeted pieces of information, bolstering communication and collaboration to ensure goal alignment. A standard user story is structured as: “As a [user role], I want [goal or need] so that [reason or benefit].” Here, the “user role” delineates a specific user’s identity or function, the “goal or need” specifies the desired outcome from the software, and the “reason or benefit” pinpoints the underlying motivation or value driving this desire, helping to describe a user’s requirement for the SOI concisely and clearly [10].
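To make the template concrete, the following minimal sketch (our own illustration, not part of the EUS toolset; the class and field names are assumptions for exposition) renders a user story from the three template slots:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    role: str     # the "user role": who is asking
    goal: str     # the "goal or need": what they want from the SOI
    benefit: str  # the "reason or benefit": why they want it

    def render(self) -> str:
        return f"As a [{self.role}], I want [{self.goal}] so that [{self.benefit}]."

# One of the study's EUS instances, expressed through the template.
story = UserStory(
    role="system administrator",
    goal="streamline the management of GDPR requirements",
    benefit="the service remains unaffected by user information or data erasure requests",
)
print(story.render())
```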

2.3 Standard Model Process for Addressing Ethical Concerns During System Design

The IEEE Std 7000–2021 provides a practical approach for SE businesses to identify and address ethical issues during the system design of their system of interest (SOI). We focus on the concept exploration and concept of operations (ConOps) development stage, which emphasizes proactive communication with stakeholders to help identify and prioritize the ethical values to be integrated at the system design stage [20]. The procedure entails discerning these values from the operational concept, which lays out the system’s functionality, and from the value propositions and dispositions, which highlight the system’s benefits and potential outcomes. Central to IEEE Std 7000–2021 is the concept of Ethical Value Requirements (EVRs). EVRs epitomize the essential worth of ethical requirements, ensuring that systems resonate with societal standards and uphold human rights, dignity, and well-being [12, 18, 20]. The standard advocates for meaningful engagement of primary stakeholders, especially those in management roles, throughout the design phase in:

- Identifying pertinent ethical requirements by scrutinizing relevant ethical regulations, policies, and guidelines, including gathering stakeholder feedback;
- Eliciting these ethical requirements based on their relevance to the SOI;
- Prioritizing the inherent value of these requirements; and
- Incorporating these values into the system’s core objectives, with consistent communication and compliance monitoring involving all concerned parties.

Defining and embedding ethical requirements in this way can bolster an SOI’s credibility, trustworthiness, and perceived value, weaving ethics seamlessly into the system’s design and development [20].
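To illustrate the four engagement steps, the sketch below models them as a minimal pipeline over candidate ethical requirements. IEEE Std 7000–2021 prescribes the process, not an implementation; the data structure, example requirements, and scoring scheme here are our own simplifying assumptions:

```python
from dataclasses import dataclass

@dataclass
class EthicalRequirement:
    name: str       # e.g., "Privacy and data protection"
    relevant: bool  # does it apply to this SOI?
    priority: int   # stakeholder-assigned weight (higher = more important)

def identify() -> list[EthicalRequirement]:
    # In practice: scrutinize regulations, policies, guidelines, and
    # stakeholder feedback; here two hard-coded candidates stand in.
    return [
        EthicalRequirement("Privacy and data protection", relevant=True, priority=3),
        EthicalRequirement("Explainability", relevant=True, priority=2),
        EthicalRequirement("Weaponization concerns", relevant=False, priority=1),
    ]

def elicit(reqs):
    """Keep only the requirements relevant to the SOI."""
    return [r for r in reqs if r.relevant]

def prioritize(reqs):
    """Order requirements by their stakeholder-assigned value."""
    return sorted(reqs, key=lambda r: r.priority, reverse=True)

def incorporate(reqs):
    """Hand the ranked requirements over to management objectives."""
    for r in reqs:
        print(f"Embed '{r.name}' into system objectives (weight {r.priority})")

incorporate(prioritize(elicit(identify())))
```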

2.4 Implementing Ethical Requirements in SE Management

Aligning software development with an organization’s objectives is primarily achieved through SE management, which integrates critical success factors into operational and decision-making frameworks [14, 28]. Despite its importance, there is a scarcity of tools that embed ethical requirements within SE management [3, 5]. Notably, the adaptation of canvas models for representing ethics is gaining traction among researchers and practitioners seeking to elevate ethical considerations in their practices [22, 27, 37]. Canvas tools are graphical representations that clarify intricate business concepts and facilitate stakeholder alignment. They break down various business facets, like customer segments or value propositions, into an easily digestible format, often serving as a business snapshot that enhances understanding and communication [8, 28]. Notable canvas approaches include the Ethics Canvas [22], which leverages the foundational blocks of the business model canvas to stimulate discussions on the ethical implications of technology; however, its ethical scope is broad and does not precisely target AI ethics or its requirements. The Open Data Institute’s Data Ethics Canvas [27] offers a lens through which data practices can be ethically evaluated. Vidgen et al. [37] introduce a business ethics canvas, drawing inspiration from the applied ethics principles of the Markkula Center, which focuses on addressing data-centric ethical issues in business analytics; this canvas, however, predominantly covers the data ethics dimension. A more comprehensive approach is the Trustworthy AI Implementation (TAII) canvas [2], which extends the TAII framework [3]. It outlines the interplay of ethics within a company’s broader ecosystem, touching upon corporate values, business strategies, and overarching principles, but it does not precisely pinpoint ethical requirements, potentially making it challenging for SE management stakeholders to translate into actionable management practices [3].

3 Research Methodology

We adopt an exploratory approach to address our research questions. This approach is in line with Hevner et al.’s Design Science method, particularly its “build” component, given the innovative nature of our study and the limited coverage of this topic in existing literature [17]. Exploratory methods provide valuable flexibility, especially when delving into less-explored research areas [35]. Hevner et al. emphasize the importance of adapting their seven guidelines; our primary focus lies in developing conceptual artifacts, as outlined in their “Design as an artifact” guideline. While this phase typically yields conceptual insights rather than fully developed systems, the design science approach is crucial for shaping novel artifacts, even in the face of challenges [17].

3.1 Data Collection

We collaborated with 12 Finnish SE executives on an AI-enhanced project, at its concept exploration stage, focused on autonomous marine transport for emission reduction and the enhancement of passenger and cargo experiences. These executives represent various businesses specializing in different aspects of intelligent and autonomous SE, as detailed in Table 1. Our objective was to identify the essential ethical requirements these stakeholders deemed necessary for the AI-enabled System of Interest (SOI). To initiate our study, we secured the informed consent of our industry partners, emphasizing their entitlement to withdraw or request data deletion at any phase. Leveraging their SE background, which granted them a foundational understanding of the concepts, we embarked on a collaborative project segmented into three specific use cases. A series of workshops grounded in the brainstorming technique delineated in [33] facilitated familiarization with the key frameworks: IEEE Std 7000–2021, ECCOLA, and the EUS concept.

During these sessions, the participants, who were predominantly executives, actively engaged in selecting pertinent ethical requirements from the 21 ECCOLA cards, highlighting those that resonated significantly with their business operations. The focus coalesced around the ethical themes encapsulated by cards #2 Explainability, #3 Communication, #5 Traceability, #7 Privacy and Data, #8 Data Quality, #9 Access to Data, #12 System Security, #13 System Safety, #14 Accessibility, #16 Environmental Impact, and #18 Auditability. This careful selection served as a guide to pinpoint the ethical themes critical to their enterprise, facilitating a nuanced exploration. Extensive notes were documented to address subsequent inquiries and emerging concerns.

Table 1. SE Management Stakeholders

In eight workshops, each spanning one to three hours, we collaboratively formulated EUS using the ECCOLA method, tailoring the card selections to the requirements of each specific use case. Our detailed notes amounted to 367 in total, resulting in the creation of 253 EUS instances [34]. Examples of these instances include:

“As a [company CEO], with automated truck deliveries, I want [to have information, before sending my trucks, on how data is handled], so that [I can feel secure that my data will not leak to unwanted parties].”

“As a [company data protection manager], I want to [authenticate the collected data] so that I can [ensure validity].”

“As a [system administrator], I want to [streamline the management of GDPR requirements] so that I can [ensure that the service remains unaffected by user information or data erasure requests].”

“As a [project stakeholder], I want the system [to feature clear and explainable logic] to [prevent project overruns or operational errors caused by unclear system descriptions].”

3.2 Data Analysis

We conducted our analysis utilizing content analysis, a systematic approach for dissecting qualitative data to discern recurring themes, patterns, and categories, ultimately yielding valuable insights [39]. In analyzing the EUS, we adopted an interpretive content analysis approach, prioritizing narrative interpretations of meaning over purely statistical inferences. This method enabled us to differentiate between manifest content, which represents overt messages in communication, and latent content, which encompasses subtle or underlying implications [39]. To streamline the analysis, we established a coding system. For instance, ‘TR’ was used as a code to symbolize ‘transparency’, while ‘DA’ represented ‘data’; these are two examples of the various codes we employed throughout the analysis. The codes were then used to highlight specific ethical requirements within the dataset; for example, ‘TR’ pinpointed instances where transparency was a focal point in user stories. As we observed emerging patterns, we sought to identify correlations between the codes and overarching themes. These themes were then cross-referenced with the central themes of the ECCOLA cards.
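As a minimal sketch of the coding step (the code table and keyword triggers below are illustrative assumptions; the actual coding in the study was performed interpretively by the researchers, not automatically):

```python
# Illustrative code table; 'SS' and 'FA' are hypothetical additions beyond
# the two codes ('TR', 'DA') named in the text.
CODES = {
    "TR": "transparency",        # e.g., explainability, communication, traceability
    "DA": "data",                # e.g., privacy, data quality, access to data
    "SS": "safety and security", # hypothetical code for security/safety stories
    "FA": "fairness",            # hypothetical code for accessibility/fairness stories
}

KEYWORDS = {
    "TR": ["explain", "communicat", "traceab", "transparen"],
    "DA": ["data", "privacy", "gdpr"],
    "SS": ["secur", "safe", "authenticat"],
    "FA": ["access", "fair", "bias"],
}

def code_story(story: str) -> set[str]:
    """Assign manifest-content codes to one ethical user story."""
    text = story.lower()
    return {code for code, terms in KEYWORDS.items()
            if any(term in text for term in terms)}

eus = ("As a [system administrator], I want to [streamline the management of "
       "GDPR requirements] so that I can [ensure that the service remains "
       "unaffected by user information or data erasure requests].")
print({CODES[c] for c in code_story(eus)})  # -> {'data'}
```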

Utilizing the MoSCoW prioritization technique [11], a popular tool in project management, software development, and business analysis, the executives classified the EUS by significance into “Must have”, “Should have”, “Could have”, and “Won’t have”. “Must have” captures indispensable requirements without which the project is incomplete. “Should have” comprises valuable yet non-critical elements whose omission would not jeopardize the project. “Could have” entails requirements that, while beneficial, are not urgent and can be tackled if resources permit. “Won’t have” covers requirements that are either irrelevant to the current project or simply unfeasible, possibly deferred for later consideration or omitted altogether [11]. The comprehensive prioritization can be found in Table 2. Of the 12 industry partners, nine participated in these classification exercises, while three were unavailable (denoted as N/A). The activity spanned several sessions, resulting in 177 of the 253 EUS receiving priority rankings.
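For illustration, such rankings can be tallied per MoSCoW category and per ECCOLA card; the records below are placeholders rather than the study’s data (see Table 2 for the actual breakdown):

```python
from collections import Counter

# (ECCOLA card, MoSCoW label) pairs for a handful of illustrative EUS.
prioritized = [
    ("#5 Traceability", "Must have"),
    ("#8 Data Quality", "Must have"),
    ("#12 System Security", "Should have"),
    ("#14 Accessibility", "Could have"),
    ("#16 Environmental Impact", "Won't have"),
]

overall = Counter(label for _, label in prioritized)
must_have_by_card = Counter(card for card, label in prioritized if label == "Must have")

print(overall)            # distribution across the four MoSCoW categories
print(must_have_by_card)  # cards attracting "Must have" stories
```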

Table 2. Prioritization breakdown

3.3 Findings

The prioritization of the EUS yielded seven distinct sub-requirements, categorized under four primary requirements. These sub-requirements are #5 Traceability, #3 Communication, #8 Data Quality, #9 Access to Data, #7 Privacy and Data, #12 System Security, and #14 Accessibility. They fall under the broader categories of Transparency, Data, Safety and Security, and Fairness. These emerged as crucial for SE management stakeholders, as illustrated in Fig. 1.

Fig. 1. Essential Ethical Requirements

4 Discussion

We examine our findings in the context of existing research.

4.1 Essential Ethical Requirements

We analyze the seven identified ethical requirements and explore their significance and implications for stakeholders in SE management.

Traceability is pivotal in enhancing transparency and ensuring accountability within AI systems. It provides stakeholders with vital information to scrutinize and interpret the system’s decisions [36]. By prioritizing traceability, those in SE management roles can effectively identify and manage the inherent risks associated with AI technology. This focus requires a detailed documentation process encompassing data sources, applied algorithms, computational models, and justifications for particular outputs. Such comprehensive records identify potential weak points that could be prone to errors or biases, enabling risk mitigation strategies to be deployed proactively [21]. As Ryan and Stahl [32] underscore, maintaining stringent traceability practices reinforces accountability and fortifies customer and stakeholder trust, consequently elevating the organization’s reputation.

Communication is central to disseminating essential details about an AI system’s architecture, development phases, and functionalities to all pertinent stakeholders. Effective communication involves transparently articulating the system’s objectives, capabilities, limitations, and possible repercussions. By doing so, stakeholders engaged in the project can gain a well-rounded understanding of the initiative’s scope and aims, allowing them to identify and proactively address technical and ethical challenges. Open and transparent dialogue among SE management stakeholders can facilitate collaborative problem-solving and mitigate potential adverse outcomes. One challenge in communication within SE management is the complexity of technical jargon and the volume of information related to AI project documentation. However, prioritizing strategic communication can align expectations and clarify objectives [32].

Data Quality ensures that data serves its designated purpose and can be relied upon for making well-informed decisions within AI systems [6, 18, 23]. For SE management, data quality is a strategic component that influences the efficacy and efficiency of AI deployments. Subpar data quality elevates risks such as data breaches, security lapses, and other data-centric complications. These issues can inflate development expenses by necessitating the resolution of data inconsistencies, which in turn may lead to project delays and increased rework costs. Such disruptions can compromise the quality of AI solutions, diminishing customer satisfaction and eroding revenue and market share. Conversely, a commitment to high-quality data practices can assist SE management in curbing development costs, elevating product quality, enriching customer experience, and mitigating risks [18, 23].

Access to Data facilitates SE management by granting stakeholders insights into the data utilized in projects, development progression, and other pertinent details, aiding in identifying and mitigating risks associated with the data chosen for the SOI. As businesses accumulate vast and diverse data sets, maintaining streamlined access becomes indispensable to prevent data landscapes from turning chaotic and complex [3]. Moreover, with tightening regulatory landscapes, such as the GDPR and the California Consumer Privacy Act (CCPA), adept data management, particularly regarding access, has gained paramount significance. Conversely, inefficient data access practices can result in gaps in understanding the data’s availability, quality, security measures, proprietorship, and overarching governance [18].

Privacy and Data are key elements in maintaining the integrity of AI systems, safeguarding against data breaches, and avoiding biased or discriminatory outcomes. AI systems often require access to data, including sensitive or personal information, that demands stringent protection measures. SE management stakeholders can play a vital role by incorporating strong privacy and data handling practices. These measures enable the ethical utilization of data, safeguarding against biased or prejudicial data sets and avoiding harm to individuals or groups. Wang et al. [38] point out that while data can provide invaluable benefits to organizations, it can also pose risks. High-profile cases like Meta (formerly Facebook) underscore the necessity for striking a balanced approach between exploiting data’s benefits and mitigating its associated risks, both from a social and regulatory standpoint.

System Security focuses on deploying security protocols like authentication and encryption to safeguard against unauthorized system or data access while ensuring that the system can quickly recover from any security breaches. The ultimate objective is to guarantee the system’s safe and reliable operation across diverse scenarios without harming users or society. Cheatham et al. [9] note that AI technology’s relative infancy means that SE management stakeholders often lack the refined understanding necessary to grasp societal, organizational, and individual risks fully. This lack of understanding can lead to underestimating potential dangers, overvaluing an organization’s ability to manage those risks, or mistakenly equating AI-specific risks with general software risks. To avoid or minimize unforeseen consequences, these stakeholders must enhance their expertise in AI-related risks and involve the entire organization in comprehending both the opportunities and responsibilities of AI technology.

Fairness entails management practices that avoid biased algorithms or data sets that may lead to discrimination or unfair treatment of certain groups [18]. It also means supervising AI system design and development so as not to perpetuate or exacerbate societal inequalities. Berente et al. [5] explain that management stakeholders can ensure that the teams responsible for developing and deploying AI systems are diverse regarding gender, race, and ethnicity to mitigate bias in decision-making. Diversity can help ensure that AI is designed and deployed fairly and ethically for all users, thereby increasing the adoption and acceptance of AI by a broader range of users.

4.2 Towards a Business Case for Ethical Requirements

To address RQ2 effectively, we introduce the Ethical Requirements Canvas, depicted in Fig. 2. This canvas serves to underline not just the importance but also the intrinsic value of ethical requirements, thereby constructing a business case for their integration. Business cases are essential for management to evaluate a project’s costs, benefits, risks, and alternatives, ensuring alignment with the organization’s strategic goals [40]. The Ethical Requirements Canvas serves as a practical instrument that not only integrates ethical considerations into management practices but also highlights their business value [28]. Consequently, the canvas provides a pragmatic method for aligning ethical requirements with the organization’s broader goals, articulating their significance and potential for adding value in business terms.

Fig. 2. Ethical Requirements Canvas

The canvas comprises nine sections:

- Section one presents the ethical requirements identified through our research; these are displayed for reference and awareness, not for rigid adherence.
- Section two focuses on identifying the organization’s stakeholders. Here, SE management can discuss various categories of stakeholders, such as human and non-human agents, different age groups, societal standing, and levels of vulnerability.
- Section three outlines the essential business operations necessary to realize the value proposition of integrating ethical requirements.
- Section four lists the resources required for effective implementation.
- Sections five and six allow SE management to assess the societal, internal, and external impacts of incorporating these ethical parameters into their SOI.
- Section seven explores the financial, reputational, and other costs associated with integrating, or with overlooking, ethical requirements.
- Section eight evaluates the benefits and potential monetization of ethical requirements.
- Section nine illuminates the distinct advantages of ethical considerations, assisting in identifying vital initiatives that enhance the benefits of ethical requirements, potentially serving as critical determinants of success [7]. These benefits include elevating the organization to Trustworthy AI business status, akin to the positive reputational impact observed in companies with sustainability initiatives. This can enhance stakeholder engagement, as the business is perceived as ethical and trustworthy, and potentially expand market share and boost profitability through increased user trust [7, 27, 28].

While the Ethical Requirements Canvas provides a systematic framework for visualizing and assessing ethical considerations, it has inherent limitations. Its structured nature risks oversimplifying complex ethical dilemmas, potentially fostering a compliance-centric mindset at the expense of cultivating a deeper ethical culture [31]. Such an approach risks satisfying only minimum legal standards rather than aspiring to ethical excellence, which may lead to the marginalization of crucial ethical aspects [13, 28, 31]. Additionally, while adaptability is one of the canvas’s strengths, it also poses challenges. Our research identified seven core ethical requirements, but their relevance and prioritization can differ significantly among organizations due to unique contextual factors, industry norms, and stakeholder expectations. It is therefore critical to balance adherence to industry standards with the strategic objectives of the organization when applying the canvas.

4.3 Limitation

A limitation inherent to our research is its specific focus on the marine transportation sector within Finland, potentially circumscribing the external validity and generalizability of our findings to other geographical contexts or industries experiencing AI-driven digital transformations. Despite this, we argue that our research lays a foundational framework that can be adapted and scrutinized in various settings [33].

For future studies, we plan to validate the Ethical Requirements Canvas via workshops with SE management teams and industry-wide surveys. These evaluations will not only gauge the canvas’s usability and relevance but will also fine-tune its alignment with both organizational demands and ethical standards.

5 Conclusion

In this study, we have made three principal contributions. First, we compiled a comprehensive set of ethical requirements reflecting the perspectives of SE management stakeholders. Second, we presented a stakeholder-centric approach that is responsive to the challenges faced by the industry. Third, we introduced the “Ethical Requirements Canvas,” a novel tool designed to elucidate and integrate the value of ethical considerations into SE management practices. The canvas not only acts as an ethical roadmap for practitioners but can also facilitate risk management and promote judicious decision-making [28]. From an academic standpoint, our framework lays the groundwork for further inquiry into the integration of ethical requirements in AI and SE management, encouraging cross-disciplinary research and assessments of tool efficacy. On a practical level, our work supports SE managers in embedding ethical principles more deeply within their processes, thereby advocating for the development of trustworthy AI systems.