1 Introduction

The rapid advancement of autonomous and intelligent systems (AIS) heralds a new era of innovation, promising profound positive effects across diverse aspects of human life worldwide. Spanning education, healthcare, business, industry, transportation, and societal structures, AI technologies, including Machine Learning (ML), Deep Learning, Natural Language Processing (NLP), and Computer Vision, intertwine to offer myriad opportunities for improving human well-being. Amid these prospects lie vital ethical considerations regarding the potential implications of AIS. One significant concern revolves around the notion that using certain AI systems may lead to a morally challenging responsibility gap, wherein no one bears responsibility for the actions undertaken by AIS (Oimann 2023; Taylor 2024). Ethical concerns surrounding AIS pertain to ensuring the security and safety of using these systems, as they can inadvertently cause harm despite good intentions. Because the inner mechanisms of these systems may be incomprehensible even to their creators, their behavior is unpredictable, which makes assigning moral responsibility for it unreasonable. This unpredictable behavior poses risks that extend across multiple domains, including damage to reputation or identity, biased biometric identification and inappropriate categorization, access limitations, threats to democratic systems, economic impacts such as market destabilization, and implications for climate engineering, healthcare technologies and telemedicine, business operations, and social policies.

In the ethical context, it is crucial to prevent AI applications from causing harm, particularly societal harm among diverse individuals or groups. This requires developers of AI algorithms to uphold human rights by ensuring continuous human oversight throughout the lifecycle of AI systems. Therefore, the development and implementation of effective procedures and processes for assessing the ethical dimensions of AIS, as well as delineating responsibilities, rights, and duties, are paramount.

Given the potential risks associated with AI systems, proactive monitoring throughout all stages, from design and development to deployment and usage, is essential. This proactive approach facilitates the effective governance of AI security and promotes the responsible use of AI technologies from an ethical perspective.

2 State of the art in AI ethics

As the use of automated decision-making systems becomes more prevalent, ensuring transparency and security in AI applications becomes paramount to fostering trust. Given the myriad ethical considerations associated with AI systems, there is an increasing emphasis on regulating the development and deployment of trustworthy AI systems. This is essential for enabling individuals and society to harness the benefits of AI while holding developers accountable for the systems they create. Ethical frameworks guiding the development and utilization of AI systems primarily rely on ethical theories, ethical principles, and the preservation of human values.

2.1 AI ethical guidelines and standards

To mitigate the potential negative impacts of AI-based systems, establishing societal, policy, and ethical guidelines is crucial to ensure that these systems remain centered on humanity and prioritize ethical principles and human values. Numerous institutions have issued AI ethics guidelines with a focus on fostering AI trustworthiness and ethical adherence. In April 2018, 25 European countries signed a Declaration of Cooperation on AI, committing to addressing pressing issues concerning AI’s societal, economic, ethical, and legal implications. Four additional countries joined this initiative in May 2018 (European Commission 2018a).

In December 2018, the High-Level Expert Group on Artificial Intelligence (AI HLEG), tasked with supporting the implementation of the European Union (EU) AI strategy, released the first draft of EU AI guidelines (Ethics Guidelines for Trustworthy AI). These guidelines, subjected to open consultation, were officially published in April 2019. Aimed at promoting trustworthy AI, the guidelines advocate for AI that is lawful (i.e., compliant with applicable laws and regulations), ethical (adhering to ethical principles and human values), and robust (addressing both technical and social dimensions). The guidelines provide a framework for achieving trustworthy AI, emphasizing the operationalization of ethical principles in socio-technical systems and recommending measures to mitigate risks based on their magnitude (High-Level Expert Group 2019).

In October 2020, the European Parliament issued a resolution proposing a framework for addressing the ethical aspects of AI technologies (Framework of Ethical Aspects of Artificial Intelligence, Robotics, and Related Technologies). This resolution underscores the importance of user trust in the development and implementation of AI technologies, particularly in scenarios where biased datasets or opaque algorithms can introduce inherent risks. It also emphasizes the significance of transparency and accountability of AI applications. According to the framework, AI systems should be deemed high-risk if their development, deployment, and utilization pose significant risks of causing harm or injury to individuals or society, breaching fundamental human rights and safety rules. Additionally, the framework advocates for conducting risk assessments of AI technologies and developing a risk-based approach, particularly for high-risk AI systems (European Parliament 2020).

Several researchers have delved into existing AI ethics guidelines. Jobin et al. (2019) offered a thorough survey and analysis of prevailing principles and guidelines regarding ethical AI. Their review, which scrutinized 84 ethical guidelines from national and international organizations across different countries, revealed a consensus on five core principles: transparency, justice and fairness, nonmaleficence, responsibility, and privacy.

Hagendorff (2020) conducted a comprehensive analysis of 22 major AI ethics guidelines, highlighting both commonalities and gaps. Ryan and Stahl (2021) provided a comprehensive clarification of artificial intelligence ethics guidelines tailored for developers and users, elucidating their content and normative implications in depth. Recently, Huang et al. (2023) presented an up-to-date global overview of AI ethics guidelines and principles, drawing from 146 guidelines released by companies, organizations, and governments worldwide.

2.2 Related work on ethical approaches

In recent decades, numerous authors have conducted extensive studies on the ethical implications of AI, as evidenced by Friedman (1997), Flanagan et al. (2008), and Stahl et al. (2022) and the references cited therein. This has spurred significant interest in exploring various ethical approaches to address ethical dilemmas. Liu et al. (2021) focused on surveying privacy and security issues within deep learning, while Zhang et al. (2021) explored key ethical and privacy concerns in AI, tracking their evolution over recent decades through bibliometric analysis. Several authors have addressed the moral costs associated with responsibility gaps (Sparrow 2007; Taylor 2021), while Arrieta et al. (2020) concentrated on the concept of explainable AI (XAI).

It is widely recognized that the societal ramifications of AI implementation must be scrutinized through an ethical lens. Consequently, ethical considerations surrounding AI form a central theme in many scholarly works. Pflanzer et al. (2023) conducted an in-depth examination of the social, political, and ethical dimensions surrounding AI. The researchers curated an overview of the contributions found in the selected articles, which were authored by a diverse group of scholars. Their objective was to assist readers in discerning conceptual linkages among the various arguments presented. The articles span a wide array of domains, including but not limited to autonomous vehicles, healthcare robotics, algorithmic policing, and AI-driven personal assistants.

Several articles delve into the implementation of key ethical areas such as transparency, fairness, justice, and responsibility within the realm of AI. These discussions often advocate for democratizing AI to better uphold these principles. However, Himmelreich (2023) challenges the notion of democratizing AI, contending that it fails to meet legitimacy standards, introduces governance redundancies, and perpetuates various injustices. Instead, Himmelreich proposes a nuanced approach to democratization, emphasizing the enrichment and enhancement of existing infrastructures rather than simply increasing participation.

One prominent concern revolves around determining the ethical behavior of autonomous vehicles (AVs) in unavoidable collision scenarios. This moral dilemma has garnered considerable attention, as reflected in the substantial body of literature dedicated to this topic, as seen in (Bonnefon et al. 2016; Gawronski and Beer 2017; Gogoll and Müller 2017; Greene 2016; Karnouskos 2020; Shariff et al. 2017) and their respective references. Several authors have proposed ethical decision-making frameworks to guide AV behavior in such situations. For instance, De Moura et al. (2020) introduced a decision-making algorithm grounded in a Markov Decision Process (MDP) to govern AV behavior under normal circumstances. Upon encountering a dilemma, the severity of potential collisions is assessed based on the harm to different parties involved. The authors further proposed three deliberative processes based on distinct ethical theories: Rawlsian contractarianism (Rawls 1999), utilitarianism (Mill 1998), and egalitarianism (Broom 1995). Subsequently, they developed corresponding ethical decision-making algorithms within the MDP framework to dictate AV responses in moral dilemma scenarios.
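
To make the idea concrete, the following is a minimal sketch, not a reproduction of De Moura et al.'s algorithm: a toy MDP in which an autonomous vehicle's dilemma actions carry harm estimates for the parties involved, and the cost function aggregates those harms either in a utilitarian way (total harm) or in a Rawlsian, maximin way (harm to the worst-off party). All state names, transition probabilities, and harm values are hypothetical.

```python
# Illustrative sketch only: a toy MDP with harm-weighted costs, loosely inspired by
# the approach described above. States, harms, and the two aggregation rules are
# hypothetical and are not taken from De Moura et al. (2020).

GAMMA = 0.9

# state -> available actions -> list of (next_state, probability)
transitions = {
    "cruise":  {"keep_lane": [("cruise", 0.95), ("dilemma", 0.05)],
                "slow_down": [("cruise", 1.0)]},
    "dilemma": {"swerve":    [("stopped", 1.0)],
                "brake":     [("stopped", 1.0)]},
    "stopped": {},
}

# harm to (passenger, pedestrian) for taking an action in a state; purely illustrative
harms = {
    ("cruise", "keep_lane"): (0.0, 0.0),
    ("cruise", "slow_down"): (0.05, 0.0),
    ("dilemma", "swerve"):   (0.5, 0.1),
    ("dilemma", "brake"):    (0.2, 0.45),
}

def cost(state, action, rule):
    """Aggregate the harms of an action under a chosen ethical rule."""
    h = harms.get((state, action), (0.0, 0.0))
    if rule == "utilitarian":   # minimise total expected harm
        return sum(h)
    if rule == "rawlsian":      # minimise the harm to the worst-off party
        return max(h)
    raise ValueError(rule)

def value_iteration(rule, sweeps=100):
    """Standard value iteration minimising discounted ethical cost."""
    V = {s: 0.0 for s in transitions}
    for _ in range(sweeps):
        for s, actions in transitions.items():
            if not actions:
                continue
            V[s] = min(
                cost(s, a, rule) + GAMMA * sum(p * V[s2] for s2, p in outcomes)
                for a, outcomes in actions.items()
            )
    return V

def best_action(state, rule):
    """Pick the action with the lowest expected ethical cost in the given state."""
    V = value_iteration(rule)
    actions = transitions[state]
    return min(actions, key=lambda a: cost(state, a, rule)
               + GAMMA * sum(p * V[s2] for s2, p in actions[a]))

if __name__ == "__main__":
    for rule in ("utilitarian", "rawlsian"):
        print(f"{rule}: best action in dilemma -> {best_action('dilemma', rule)}")
```

Running the sketch shows that the two aggregation rules can already select different actions in the same dilemma, which is precisely why the choice of underlying ethical theory matters for AV behavior.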

However, Rhim et al. (2021) criticized these MDP-based algorithms for overlooking the intuitive aspects of human decision-making. In response, they devised an integrative ethical decision-making framework for AV moral dilemmas. This framework incorporates both intuitive and cognitive moral reasoning processes, recognizing the dual-process and pluralistic nature of human moral deliberation. By defining various variables pertinent to AV moral dilemmas, this comprehensive framework aims to elucidate the complete ethical decision-making process (Rhim et al. 2021). In the realm of transportation, Gaio and Cugurullo (2023) advocate for prioritizing mobility justice over policies that favor individual transportation modes. They support their argument by aligning with societal goals of proximal cities and urban containment.

3 Ethical Principles in the AI-related Context

A set of essential ethical principles and correlated values that should guide the development, deployment, and use of AIS has been established based on fundamental rights. Principlism, an approach to applied ethics, offers practical guidance centered on four basic ethical principles: beneficence, nonmaleficence, respect for autonomy, and justice, derived from a common morality (Beauchamp and Childress 2019). Principlism is sensitive to situational nuances and contexts, allowing for indefinite degrees of specification as cases become more contextualized. It does not advocate hierarchical ordering of principles; none of the principles is formally of a higher order than any of the others. Mutual tensions or conflicts among ethical principles can give rise to ethical dilemmas, which are situations where adhering to one principle may lead to a violation or neglect of another. In such cases, it is challenging to determine which principle should take precedence. Ethical decision-making becomes complex and requires further specification, balancing judgments, weighing various values, and careful consideration of interests at stake. However, certain maxims formulated at a specified level can be reasonably overridden based on compelling ethical and moral considerations, allowing room for ethical judgment, which should be developed through critical evidence-based reflection rather than intuition or random choice. The principles and related values must be discussed on a practical level and ethical decision-making should always be context-dependent.

Principlism is a framework applicable across all fields of ethics, including AI ethics, as its principles are universally correlated with fundamental human rights and values, such as respect for human dignity, individual freedom, equity, the rule of law, equality, non-discrimination, and solidarity. These principles establish the moral and ethical boundaries within which an action is morally relevant and ethically acceptable in society. Incorporating the core ethical principles into the AIS development process enables early identification and consistent addressing of ethical issues. However, identifying ethically acceptable trade-offs can be challenging in certain situations. It is essential to acknowledge that certain fundamental rights and correlated principles are absolute and cannot be subject to a balancing exercise (e.g., human dignity) (High-Level Expert Group 2019).

As noted by Floridi et al. (2018), the four foundational bioethical principles resonate remarkably well with the emerging ethical landscape presented by artificial intelligence. However, their analysis suggested that these principles, though crucial, may not cover all ethical considerations. The authors highlighted the need for an additional principle: explicability, which encompasses both intelligibility (answering “How does it work?”) and accountability (answering “Who is responsible for the way it works?”). To foster the development of a Good AI Society and ensure socially preferable outcomes of AI, ethical principles should be integrated into the standard practices of AI. AIS should be designed to promote social empowerment, reduce inequality, uphold human autonomy, prevent harm, including undermining existing social structures, and ensure equitable distribution of benefits among all stakeholders (Floridi et al. 2018).

The Ethics Guidelines for Trustworthy AI, issued by the AI HLEG, provide a framework for implementing trustworthy AI through ethical principles and requirements. This framework aims to ensure the ethical and robust nature of AI systems by adhering to the fundamental ethical principles:

  • Respect for human autonomy: AI systems must allow individuals interacting with them to retain full and effective self-determination. Human-centric design principles should be followed, offering meaningful opportunities for human choice while ensuring human oversight over AI systems’ operations. No interference with human decision-making autonomy is acceptable. AIS should be designed to enhance and empower human cognitive, social, and cultural skills.

  • Prevention of harm: AI systems must avoid causing harm to individuals or groups or otherwise adversely affecting humans, including vulnerable populations, and must not exacerbate existing negative impacts. Any negative impacts resulting from power or information asymmetries, such as those between governments and citizens, employers and employees, or businesses and consumers, must be mitigated. AI systems and their operating environments must be secure and safe.

Alongside the principle of nonmaleficence, it is essential to underscore the principle of beneficence, which firmly emphasizes the central importance of promoting the well-being of individuals and the planet. This principle advocates for utilizing AIS solely for the benefit of individuals and society, striving to maximize benefits for both.

  • Fairness: Ensuring equitable distribution of benefits and costs, including equal access to education, services, goods, and technologies, is essential. Measures must be taken to avoid unfair biases, discrimination, and stigmatization, and competing interests and objectives must be balanced judiciously. Given AI systems’ vast potential to promote fairness in society, this principle underscores the necessity of fostering shared benefit and shared prosperity stemming from AI.

  • Explicability: Processes must be transparent, and the capabilities and purpose of AI systems must be explicitly communicated. Autonomous decisions should be explainable to affected individuals or groups to the greatest extent possible. If it is not possible to explain why an AI system has generated a particular output or decision, other measures, such as traceability, auditability, and transparent communication on system capabilities, should be implemented. The degree of explicability depends on the context and potential consequences but must always respect fundamental human rights. Explicability is crucial for establishing and maintaining trustworthy AI (High-Level Expert Group 2019).

Ethical principles outlined in the guidelines should be further translated into concrete requirements, such as transparency, accountability, privacy, data governance, bias detection and suppression, safety, accuracy, reliability, and human oversight. These requirements can be effectively reviewed during subsequent evaluation processes, such as independent ethical audits, risk management, or AI ethics certification.
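
As a rough illustration of how such requirements might be tracked for later review, the sketch below records them as auditable checklist items. The field names, statuses, and evidence entries are assumptions for illustration, not a prescribed audit schema from any of the cited guidelines.

```python
# A minimal sketch, assuming a simple checklist representation of the requirements
# named above; an actual ethical audit or certification scheme would be far richer.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    name: str                                       # e.g. "transparency"
    evidence: list = field(default_factory=list)    # documents, test reports, logs
    status: str = "open"                            # "open" | "satisfied" | "waived"

REQUIREMENTS = [Requirement(n) for n in (
    "transparency", "accountability", "privacy", "data governance",
    "bias detection and suppression", "safety", "accuracy",
    "reliability", "human oversight",
)]

def outstanding(requirements):
    """Return requirements still lacking evidence, so a reviewer can follow up."""
    return [r.name for r in requirements if r.status == "open" or not r.evidence]

if __name__ == "__main__":
    REQUIREMENTS[0].evidence.append("model card v1.2")
    REQUIREMENTS[0].status = "satisfied"
    print("Outstanding items:", outstanding(REQUIREMENTS))
```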

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems has identified high-level ethical principles applicable to AIS, regardless of whether they apply to physical robots or software systems. These principles include embodying the highest ideals of human beneficence (as a superset of human rights), prioritizing maximum benefits to humanity and the environment, and mitigating risks and negative impacts through transparent and accountable AI systems. These principles have subsequently been elaborated upon in additional IEEE Standards Association (IEEE SA) documents (IEEE 2016; IEEE 2018; IEEE 2021; IEEE 2023).

All AI ethics guidelines emphasize the foundational importance of adhering to basic ethical principles such as beneficence, nonmaleficence, respect for autonomy, and justice. Additionally, they stress the significance of transparency and explainability as integral components of trustworthy AI.

In their work, Bickley and Torgler (2023) highlight the necessity for robust AI systems to possess moral and ethical reasoning capabilities grounded in common sense. They advocate for leveraging cognitive architectures to imbue AI with ethical capabilities, particularly focusing on enhancing transparency, explainability, and accountability. Understanding the decision-making processes of AI systems is paramount, and exploring their machine-equivalents of motivations, attitudes, and values is imperative. Despite the complex and uncertain journey toward advanced AI, the potential for progress may be more rapid than anticipated. Therefore, achieving a comprehensive understanding of AI is essential to harness its potential benefits for humanity and society while mitigating potential drawbacks.

Similarly, Andrada et al. (2022) aim to categorize various forms of transparency in human-technology interactions. Their analysis underscores the importance of considering all dimensions of transparency when designing ethical AI systems. In a related vein, Walmsley (2021) discussed different facets of transparency and proposed contestability as an alternative approach to transparency.

Slota et al. (2023) conducted interviews with 26 stakeholders to investigate the challenges posed by AI, including issues related to the distribution of agent empowerment and the complexities of establishing accountable systems. They advocate for the development of accountable sociotechnical systems that can be scrutinized, questioned, and adjusted to mitigate unjust risks. These systems must not only exhibit autonomy but also prioritize transparency.

AI’s impact on fairness is another critical consideration. Similar to transparency, fairness is an essential ethical principle that warrants thorough examination and discussion. Maas (2022) investigates fairness by scrutinizing power imbalances among stakeholders, encompassing both those who shape AI (such as developers) and those affected by it (such as users). Drawing upon the concept of domination, Maas proposes external auditing and design-for-value as strategies to mitigate the negative effects of asymmetrical power dynamics.

Yazdanpanah et al. (2023) propose a comprehensive research agenda to advance responsible AI. They assert that the deployment of any autonomous system should not only demonstrate trustworthiness but also provide an explanation of how AI effectively fulfills societal needs in a responsible manner.

While ethical principles serve as the cornerstone of AI ethics, it is vital to acknowledge that they should not serve as the sole instruments utilized in ethical AI development and deployment.

4 Ethical theories applied to AI systems

In addition to ethical principles, various ethical approaches contribute to the foundation of ethical deliberation in decision-making processes. One such approach is casuistry, which begins by examining the specific circumstances of real-life cases and the explicit maxims individuals employ when facing moral dilemmas. It delves into the context of each case, assessing similarities and differences between specific types of cases on a practical level (Jonsen and Toulmin 1988). However, decision-making in AI ethics must remain strictly context-dependent, going beyond mere comparisons of similar cases.

Traditional ethical theories, including utilitarianism, virtue ethics, and deontological ethics, offer guidance in determining the ethical acceptability of actions within specific contexts. They help identify and prioritize values throughout the development of ethical and trustworthy AI systems. Notably, these ethical theories assist in pinpointing where the most relevant set of values lies. Designers should acknowledge that certain values are indispensable for the ethical implementation of AIS, and any absence or violation of these values can result in harm to individuals, social groups, or society. Values derived from utilitarian or deontological perspectives differ from those elicited through virtue ethics (IEEE 2021). Consequently, just as all four ethical principles are applied concurrently in qualified ethical decision-making, we cannot rely solely on utilitarian or deontological ethics: each approach contributes important considerations regarding consequences, rules, and virtues.

The theory of principlism is also reflected in ethical theories, encompassing the ethical principles of beneficence, nonmaleficence, respect for autonomy, and justice. From a deontological standpoint, the ethical principles of autonomy and justice necessitate respect for human dignity. Conversely, from a utilitarian perspective, the ethical principles of beneficence and nonmaleficence advocate for actions that maximize positive consequences and minimize harm. The following text outlines the fundamental characteristics of ethical theories, emphasizing their potential for deliberation and determination of relevant ethical values.

4.1 Utilitarian ethics: assessing consequences, promoting happiness

Utilitarianism, a traditional ethical theory associated with British philosophers and economists Jeremy Bentham and John Stuart Mill, is one of the consequentialist theories that focuses on the consequences of an action (Bentham 2012; Mill 1998). It employs a common metric for measuring consequences known as a unit of utility. From a utilitarian perspective, utility encompasses happiness, pleasure, or well-being. Utilitarianism, therefore, adopts a reason-based approach, giving prominence to the outcomes (pleasure or suffering) and consequences (benefits and harms) of actions as important features during ethical assessments. According to utilitarian ethics, an action is deemed right if it promotes happiness and wrong if it tends to produce the opposite of happiness for anyone affected by the action. Utilitarianism also advocates for the greatest amount of good for the greatest number of individuals, often referred to as the greatest happiness principle, which entails enhancing societal well-being. Practically, utilitarianism mandates that individuals act to maximize the good consequences (benefits) and minimize the bad consequences (harm).

Utilitarian ethics, when applied to AI systems, evaluates the overall good produced by the system as the primary ethically relevant feature in ethical assessments. It considers the utility of all affected parties, prioritizing their happiness. In the context of system design, utilitarianism proves valuable in aligning business objectives with the long-term well-being (happiness) of individuals. Monitoring the utility consequences of developed AIS involves project teams striving to maximize happiness for the greatest number of human beings impacted by the system under development. They must assess all potential benefits and harms resulting from system deployment across various stakeholders in the short, medium, and long terms by translating benefits and harms into values that can be weighed based on their significance. Ultimately, ethical values should be seamlessly integrated into the concept of operations to ensure that the developed AI system adheres to ethical standards and benefits all involved parties (IEEE 2021).
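
The following minimal sketch illustrates this kind of utilitarian bookkeeping: expected benefits and harms are scored per stakeholder group and time horizon and then aggregated with group weights. The stakeholder names, weights, and scores are hypothetical placeholders; any real assessment would require a far richer elicitation and deliberation process.

```python
# Illustrative sketch only: tabulating expected benefits and harms per stakeholder
# group and horizon, then forming a crude weighted utilitarian aggregate.

# (stakeholder, horizon) -> (expected benefit, expected harm), on an agreed 0-10 scale
impacts = {
    ("patients",   "short"): (6.0, 1.0),
    ("patients",   "long"):  (8.0, 0.5),
    ("clinicians", "short"): (3.0, 2.0),
    ("clinicians", "long"):  (5.0, 1.0),
    ("public",     "long"):  (4.0, 3.0),
}

# relative size / significance of each stakeholder group (assumed weights)
group_weight = {"patients": 0.5, "clinicians": 0.2, "public": 0.3}

def net_utility(impacts, group_weight):
    """Weighted sum of (benefit - harm) across all entries: a crude utilitarian aggregate."""
    return sum(group_weight[g] * (b - h) for (g, _), (b, h) in impacts.items())

if __name__ == "__main__":
    print(f"Aggregate net utility: {net_utility(impacts, group_weight):.2f}")
```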

4.2 Deontological ethics: upholding duties and universal moral principles

Deontological ethics, deriving its name from the Greek word deon meaning duty (sometimes referred to as duty ethics), seeks to establish universal rules that delineate boundaries for individual actions. As a normative direction in philosophical ethics, deontological ethics, primarily associated with Immanuel Kant, focuses on decision-making and subsequent actions based on firmly established principles from which moral obligations arise. Unlike consequentialist theories assessing the morality of actions based on their outcomes, deontological ethics asserts that certain actions are inherently right or wrong, irrespective of their consequences, and the ethicality of actions is assessed based on their adherence to established rules and principles.

Kant’s analysis of morality begins with the premise that moral actions are undertaken solely for the sake of duty, not for any concrete objective. This approach requires unconditional validity, leading to the moral law and the categorical imperative. Consequently, the concept of freedom must also encompass these moral principles. The categorical imperative demands that individuals act according to maxims (subjective principles of willing) that can be universally applied as moral laws. This duty-based ethics focuses on what we are obliged to do or refrain from doing, regardless of consequences, as the right action should emanate from its fundamental nature, and the moral value of an action does not lie in its expected results. As Kant states, “An action from duty has its moral worth not in the purpose that is to be attained by it, but in the maxim according to which it is resolved…” (Kant 2011). Actions from duty are guided by the quality of the agent’s maxim. Maxims derive from the will’s orientation; thus, an action’s moral value lies in the principle of the will regardless of the purposes that may be realized in this action. Having a will (as the ability to act rationally) means possessing knowledge of the reasons or principles that direct the action: “Since reason is required for the derivation of actions from laws, the will is nothing other than practical reason” (Kant 2011). Moral principles are rational in themselves, have unconditional validity, and apply equally to the will of every rational being.

Kant’s philosophy emphasizes autonomy, defined as the ability to give oneself one’s own law. Free rational beings not only align their decisions and actions with rational principles given by external factors but also possess the ability to formulate moral principles independently. Recognizing these principles is intrinsic to the rational agent and reflects their inherent rationality, rather than being imposed by external forces.

Agent-centered deontologists emphasize Kant’s notion of the moral quality of acts residing in the subjective volitional principles, or maxims, guiding an agent’s actions. Maxims serve as the motivators for a good will, which Kant deemed as the fundamental principle of morality: “Act as if the maxim of your action were to become through your will a universal law of nature” (Kant 2011). Actions are deemed morally praiseworthy if they stem from respect for the moral law. According to deontological ethics, universal moral laws should constrain the actions of all rational beings. Unethical acts are considered fundamentally irrational, as no rational agent would deliberately act irrationally, i.e., contrary to these moral laws. Thus, any moral reasoning procedure should be grounded in the will, operating on the basis of our maxims (Kant 2011).

In practice, deontological ethics does not seek to compile a set of diverse situations and states into a predetermined unified model of duty-based actions (e.g., widely adopted contemporary ethical codes). Instead, it aims to enhance vigilance to the possibilities presented by specific situations and to heighten awareness of the responsible choices of moral actions they entail. This approach embodies an ethics of action (ethics of maxims) as a conscious, rational, and autonomous attribute of our responsible existence. Deontological ethics does not pursue motives of virtue or expediency, but this does not mean that achieving a purpose is irrelevant. The principles of the good will aim to achieve the desired purpose, as this is the goal of the good will. However, the moral value of an action lies in the emphasis on the value of the intention behind the will, not on the outcome of this intention.

In the context of AIS, deontological ethics aims to align the personal ethics of design teams (or their value maxims) with the expectations of stakeholders. Maxims represent an individual’s intentions or reasons for acting in a particular manner. Personal maxims encapsulate individual values with universal validity; for instance, the value maxim of honesty prevents designers from deceiving clients. In the process of designing AI systems, it is crucial to ascertain how respect (or disrespect) for human dignity can be embedded into the system. AI systems hold the potential to undermine human dignity by disregarding individuals’ autonomy in making choices over matters that significantly impact their lives, such as those concerning data privacy or medical decisions.

A significant implication of deontological ethics in AI is its emphasis on respecting human dignity and autonomy. AI systems must be designed in a way that respects user consent and autonomy, avoiding manipulative practices or coercion. Moreover, developers have a duty to ensure AI systems are secure and reliable, thereby preventing harm from malfunctions or misuse. Therefore, the deontological approach in AI development prioritizes the inherent rightness of actions and the moral integrity of the processes involved, rather than merely focusing on outcomes. Ensuring transparency in AI algorithms, maintaining user privacy, and preventing bias are seen as essential duties that developers must uphold regardless of potential benefits or efficiencies that might arise from compromising these principles. This approach requires AI developers to acknowledge the intrinsic moral duties they owe to users, society, and the broader human community. Detecting any potential harm is paramount, necessitating the exploration and prioritization of alternative design options. By eliciting value priorities as the personal maxims of responsible design teams and stakeholders, expectations regarding how a system should treat human beings can be identified (IEEE 2021).

4.3 Virtue ethics: fostering moral character and virtues

Virtue ethics, rooted in the works of Aristotle, emphasizes the virtues or moral character of individuals rather than solely the utility and consequences of actions (as seen in utilitarianism) or adherence to duties (as in deontological ethics). What sets virtue ethics apart from consequentialism or deontological ethics is its focus on the centrality of virtue within the theory. While consequentialists define virtues as traits that lead to favorable outcomes and deontologists view them as traits possessed by individuals who fulfill their duties, virtue ethicists regard virtues and practical wisdom as central concepts.

A virtue is deemed a worthy trait of character, representing a specific manifestation of values held by individuals that contribute to their moral goodness. It embodies a predisposition to perceive, expect, value, feel, desire, choose, act, and react in particular ways. Unlike a framework of duties, virtues dictate certain actions without necessitating regulations, rules, or obligations. Virtues can be understood as ingrained character qualities that facilitate an individual’s moral and communal integration. A virtuous person embodies moral goodness and admiration, consistently acting and feeling in accordance with ethical standards (Hursthouse and Pettigrove 2022).

Aristotle’s concept of virtues is foundational to virtue ethics. According to Aristotle, virtues are habitual and firm dispositions to do good, shaping a person’s actions and emotions. He identified two types of virtues: moral virtues, which are products of adopting good habits acquired through practice (e.g., courage, temperance, self-discipline, moderation, modesty, humility, generosity, friendliness, truthfulness, honesty, and justice), and intellectual virtues, which are produced and increased by instruction and therefore require experience and time. Intellectual virtues, through which the mind achieves truth in affirmation or denial, include scientific knowledge (episteme), artistic or technical skill (techne), intuitive understanding (nous), practical wisdom (phronesis), and philosophical wisdom (sophia). Moral dispositions are developed through corresponding activities: initially, individuals perform virtuous actions, often guided by teachers or personal experience; over time, these habitual actions evolve into virtues as individuals consistently and deliberately choose to engage in them. Aristotle emphasized the concept of the golden mean, the desirable middle ground between extremes of excess and deficiency, advocating for balanced and moderate behavior (Aristotle 2004). Aristotle’s concept of the golden mean provides a context-specific model for ethical decision-making, making it useful for navigating morally complex situations.

The virtue approach finds application in various domains of applied ethics. In the context of AI, virtue ethics seeks to promote the flourishing of individuals and stakeholders over long-term system use, facilitating the maintenance or enhancement of their virtuous character. Thus, the virtue ethical approach anticipates how an AI system influences an individual’s habitual character and virtuous behavior. It presupposes that systems affecting human behavior either foster certain personal character qualities or deter others. Consequently, design teams should contemplate how their AI system impacts the virtuousness of individuals using the system over a long-term period. Through virtue ethical analysis, project teams should compile a list of virtues that stakeholders aim to cultivate in human users of the developed system, along with a list of virtues that may be compromised as a result of using the system. Additionally, considerations regarding the cultural or regional context into which the system will be deployed are essential (IEEE 2021).

4.4 Multifaceted ethical approaches to AI ethics

Various scholars explore diverse ethical theories and approaches to guide the ethical integration of AIS into society. Jenkins et al. (2023) propose a comprehensive framework for evaluating the ethical consequences of AI systems, emphasizing the importance of considering both positive and negative impacts, particularly in domains such as journalism, criminal justice, and law. Similarly, Bauer (2020) advocates for a utilitarian approach to Artificial Moral Agents (AMAs), positing its superiority over virtue-theoretic approaches in terms of practicality and additional benefits. On the other hand, Begley (2023) challenges the conventional reliance on normative ethics in philosophical inquiries, advocating for a case-by-case, non-methodological approach to stimulate deeper ethical investigations. Concurrently, Stenseke (2023) delineates a methodology for instilling ethics in machines, drawing inspiration from virtue ethics. Stenseke highlights the significance of virtue ethics in machine ethics, emphasizing its role in cultivating artificial virtuous agents and deliberating on their integration into moral contexts. As underscored by Stenseke, it is crucial to recognize that while virtue ethics offers valuable insights, it is not inherently superior to other ethical frameworks like deontology or consequentialism in every regard; rather, it draws our attention to important aspects of morality that are overlooked in the field of machine ethics.

Various ethical theories applied to AI ethics offer diverse perspectives on ethical dilemmas. For instance, in the context of deploying an AI-driven facial recognition system in public spaces to bolster security, a virtue ethicist might endorse this system, viewing it as morally right because it embodies virtues like care, justice, and safety. From the virtue ethics standpoint, the decision to deploy such technology is not solely judged by its crime prevention potential but rather by its alignment with virtuous traits and its contribution to society’s overall moral character. In contrast, deontologists would evaluate the ethicality of deploying the facial recognition system based on its adherence to established moral norms or principles. They would assess whether it aligns with maxims such as respecting individuals’ autonomy, privacy, and dignity. A deontologist may raise concerns about the system’s deployment if it infringes upon individuals’ autonomy, dignity, or privacy rights, irrespective of its intended outcomes. Additionally, within the given scenario, a utilitarian approach to AI ethics would scrutinize the AI system’s overall utility and its contribution to happiness and well-being. Utilitarian ethicists would examine whether deploying the AI-driven facial recognition system maximizes the greatest good for the greatest number of individuals, thereby enhancing overall societal well-being. They would prioritize the improvement of public safety over privacy concerns and consider the interests and happiness of the majority of individuals affected by the system.

This diversity of perspectives underscores the complexity and multifaceted nature of ethical considerations in AIS development and highlights the need for interdisciplinary dialogue and collaboration.

5 Values and AI-based systems

Due to the rapid pace at which AI systems are increasingly evolving, it is imperative to assess their short-term and long-term impacts on human values and societal norms to ensure that they serve to benefit humanity and society. The absence of universal standards for integrating human norms and moral values into autonomous intelligent systems has spurred significant interest in this field. It is essential that AIS are designed in a way that allows them to adopt, learn, and align with the norms and values of the communities they serve. Moreover, these systems must transparently communicate and explicate their actions to instill trust. Ongoing discussions on the ethical considerations related to embedding values into AIS involve researchers, ethicists, and various academic, governmental, and business entities. The IEEE Global Initiative underscores the importance of employing value-based design methodologies to develop sustainable systems that promote human well-being and freedom. However, it is crucial to recognize that values embedded in AIS are community-specific rather than universal. Furthermore, AI systems may encounter conflicting norms and values, necessitating the acknowledgment of culturally distinctive values in AI design (IEEE 2016).

Developers design or program AI systems to reflect values in their decision-making processes and actions. Autonomous systems may encounter unforeseen situations requiring algorithmic procedures to select the best course of action. Human oversight over algorithmic decisions is thus paramount to ensure that AI systems prioritize human rights, as delineated in the Universal Declaration of Human Rights (United Nations 1948). To foster community trust, researchers and technologists should prioritize transparency regarding their processes, products, values, and design practices. Additionally, they should embrace accountability for the outcomes and consequences of the AI systems they develop.

In the endeavor to embed norms and values into AIS, the IEEE Global Initiative has outlined three primary goals:

  1. Identifying the norms and values of the specific communities affected by AIS.

  2. Incorporating these norms and values within AIS.

  3. Evaluating the compatibility of the implemented norms and values with those of the community (IEEE 2016; IEEE 2018).

The European Group on Ethics in Science and New Technologies (EGE) has issued a Statement on Artificial Intelligence, Robotics, and Autonomous Systems, proposing a comprehensive set of principles grounded in fundamental values essential to AI ethics discussions. These principles encompass human dignity, autonomy, responsibility, justice, equity, solidarity, democracy, accountability, security, safety, bodily and mental integrity, data protection, privacy, and sustainability (European Commission 2018b). Similarly, the IEEE Standards Association (IEEE SA) has put forth a set of values commonly applied to AI system design. These ethical values include but are not limited to autonomy, fairness, privacy, respect, transparency, trust, care, control, and sustainability (IEEE 2021).

Leikas et al. (2019) summarized ethical values and principles from various documents pertinent to the ongoing discourse on AI ethics. They proposed a modified set of values crucial for the ethical development of AIS. These values encompass integrity and human dignity, autonomy, human control, responsibility, justice, equality, fairness, solidarity, transparency, privacy, reliability, security and safety, accountability, explicability, sustainability, and the role of technology in society. The authors emphasized the necessity of a multi-perspective and systematic discussion involving relevant stakeholders, such as ethical experts, developers, and end-users, throughout the AI system design process.

Recognizing the significance of accountability for the design process, AI system performance, and data governance, I underscore the importance of independent ethical assessment processes. Ethical audits or AI ethics certifications play a pivotal role in verifying compliance with ethical requirements for trustworthy AI. Drawing from personal experience, I attest to the considerable value added by such evaluation processes and the marked enhancement in operational effectiveness and governance procedures resulting from independent reviews. The IEEE SA has developed the IEEE CertifAIEd AI Ethics Certification Program, initiated under the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), and offered under the global CertifAIEd mark. This program sets standards for certification processes that promote accountability and transparency, prevent unacceptable algorithmic bias, and protect ethical privacy in AI systems. Through thorough independent review, assessment, and verification, this program safeguards and advances the adoption of AI products or services. The certification process validates an organization’s commitment to upholding human values and fundamental rights, ensuring compliance with ethical requirements that foster trust in AI systems while mitigating associated risks and negative consequences (IEEE 2023). For more information about the program, please refer to Jedlickova (2024a). The list of authorized assessors is accessible in the certification registry of the IEEE Conformity Assessment Program (IEEE Certification Registry 2024).

6 Ethical quandaries in the realm of autonomous and intelligent systems

Ethical dilemmas frequently emerge in the development and deployment of AIS, presenting challenges that require careful consideration and resolution. A significant ethical dilemma revolves around bias in AI algorithms, which can lead to discriminatory outcomes in various domains. For example, AI-driven recruitment systems may inadvertently perpetuate gender or racial biases present in historical hiring data, resulting in unfair treatment of certain demographic groups.
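
One widely used, if crude, check for this kind of bias is to compare selection rates across demographic groups. The sketch below computes per-group hiring rates and a disparate-impact ratio, using the common four-fifths heuristic as an illustrative threshold; the sample data and the threshold are assumptions for demonstration, not a legal standard.

```python
# A minimal sketch, assuming a binary hiring decision and a single protected attribute:
# compute selection rates per group and flag a low disparate-impact ratio.

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, hired) pairs with hired in {0, 1}."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, decision in records:
        total[group] += 1
        hired[group] += decision
    return {g: hired[g] / total[g] for g in total}

def disparate_impact(records, threshold=0.8):
    """Return per-group rates, the min/max rate ratio, and whether it clears the threshold."""
    rates = selection_rates(records)
    lowest, highest = min(rates.values()), max(rates.values())
    ratio = lowest / highest if highest else 1.0
    return rates, ratio, ratio >= threshold

if __name__ == "__main__":
    sample = [("A", 1)] * 30 + [("A", 0)] * 70 + [("B", 1)] * 15 + [("B", 0)] * 85
    rates, ratio, passes = disparate_impact(sample)
    print(rates, f"ratio={ratio:.2f}", "passes" if passes else "flags potential bias")
```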

Another ethical dilemma concerns the privacy implications of AI technologies, particularly in the context of data collection and surveillance. The widespread deployment of facial recognition systems raises concerns about the erosion of personal privacy and the potential for mass surveillance, as evidenced by controversies surrounding the use of facial recognition technology by law enforcement agencies in various countries.

Moreover, the increasing autonomy of AI systems raises questions about accountability and decision-making responsibility. For instance, when autonomous vehicles are involved in accidents, determining liability becomes complex, as responsibility may rest with various entities such as the vehicle’s manufacturer, the AI algorithm designer, the vehicle operator, or other parties. This highlights the need for clear guidelines and regulations to address accountability in the development and deployment of autonomous systems.

Additionally, ethical dilemmas can arise from the potential misuse of AI technologies for malicious purposes. The development of deepfake technologies, which allow for the creation of highly realistic fake videos, raises concerns about the spread of disinformation and the manipulation of public opinion. Instances of deepfake videos being used to fabricate false narratives or defame individuals underscore the ethical challenges associated with the proliferation of AI-driven misinformation.

Last but not least, the use of AI in healthcare services presents its own set of ethical dilemmas, particularly regarding patient privacy, consent, and the potential for bias in diagnostic algorithms. The use of AI-based diagnostic tools may inadvertently prioritize certain patient populations or overlook specific demographic groups, leading to disparities in healthcare outcomes.

Addressing ethical dilemmas requires a multifaceted approach that incorporates ethical principles, legal frameworks, technological safeguards, and ongoing stakeholder engagement. By proactively identifying and mitigating ethical risks, developers and policymakers can contribute to the conscientious and ethical progress of AIS, ensuring that they serve the best interests of society while upholding fundamental ethical values.

7 Boundaries and barriers: ethical challenges in the development of AIS

Navigating the complex landscape of ethical considerations in the development of AIS presents numerous challenges and limitations. One of the limitations arises from the inherent ambiguity and subjectivity surrounding ethical principles, making it difficult to establish universally applicable guidelines. Additionally, the rapid progress of technological advancement often outpaces the development of ethical frameworks, leaving developers without clear guidance on how to address emerging ethical dilemmas.

Another significant limitation lies in the tension between ethical ideals and practical constraints. While organizations strive to prioritize ethical principles such as transparency, fairness, and accountability, they must also contend with competing demands such as time-to-market pressures, resource constraints, and proprietary concerns. Balancing these conflicting priorities can pose significant challenges and may result in compromises that undermine ethical integrity.

Furthermore, the interdisciplinary nature of AI development introduces complexities and communication barriers between stakeholders from diverse backgrounds, including technologists, ethicists, policymakers, and end-users. Bridging these gaps and fostering effective collaboration requires dedicated efforts to promote mutual understanding and alignment around shared ethical objectives.

Moreover, the dynamic and evolving nature of AI technologies poses inherent uncertainties and risks, making it challenging to anticipate and address ethical implications comprehensively. Ethical considerations, especially those pertaining to transparency and accountability, are closely linked to the unpredictability of autonomous systems, particularly observed in “bottom-up” systems where the landscape becomes more intricate. These systems independently generate and modify their own programming to achieve predefined success criteria. While we may comprehend the system’s objectives, predicting its actions to attain them proves challenging. Consequently, as autonomy increases, assigning moral responsibility to those involved in developing or utilizing AI systems becomes increasingly difficult, given their diminishing control over the system due to its high degree of autonomy and capacity for self-learning. Furthermore, attributing responsibility to the system itself is irrelevant, as it lacks consciousness and cannot be subject to punishment or blame (Oimann 2023; Taylor 2024). However, even within this complexity, it is crucial to acknowledge that ultimate accountability for the behavior and consequences of such systems rests with the humans responsible for their design and implementation. They possess the capability and knowledge necessary to make ethical decisions, influencing their choices concerning system architecture, algorithmic frameworks, and input data, which significantly shape the behavior and output of “bottom-up” systems.

8 Ethical horizons: guiding future paths in AI development

To address the obstacles and limitations inherent in ensuring ethical compliance in the development of AIS, a multifaceted approach is necessary. Enhancing interdisciplinary collaboration among stakeholders from various fields, including technology, ethics, law, and the social sciences, is one key avenue. By bringing experts together, researchers can gain a more comprehensive understanding of the ethical implications of AI technologies and develop holistic solutions. Moreover, fostering collaboration between industry, academia, and policymakers can facilitate the development of regulatory frameworks that promote ethical AI development. Clear and consistent regulations can provide guidance to developers while also ensuring accountability and transparency in AI deployment. Future research should prioritize exploring the practical implementation of regulatory risk-related requirements outlined in the AI Act, approved by the EU Parliament on March 13, 2024. This legislation aims to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability by addressing AI risks while fostering innovation. The AI Act mandates the establishment of risk management systems by providers to assess and mitigate risks throughout the entire design and development process (European Parliament 2024). For further insights into AI risk management, refer to Jedlickova (2024b). Giudici et al. (2024) propose an AI risk management framework focused on evaluating AI-related risks using four measurable statistical principles of SAFEty: Sustainability, Accuracy, Fairness, and Explainability, each accompanied by a set of statistical metrics called Key AI Risk Indicators (KAIRI). These metrics allow for assessing the safety and trustworthiness of AI applications and ensuring continuous monitoring of their performance. The framework can support AI developers, stakeholders, and regulators by guiding both development and oversight. It facilitates compliance with regulatory standards, enabling the measurement and mitigation of AI-related risks. This adaptable risk management system can be implemented across various AI applications (Giudici et al. 2024).
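
As a simple illustration of what a provider-side risk management artifact might look like, the sketch below implements a toy risk register in which each identified risk is scored by severity and likelihood and re-scored after mitigation. The scoring scales, fields, and example entries are assumptions; they do not reproduce the AI Act's prescribed documentation or the KAIRI metrics.

```python
# A minimal sketch of a risk register: severity x likelihood scoring with residual
# re-scoring after mitigation. Scales and entries are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int                        # 1 (negligible) .. 5 (critical)
    likelihood: int                      # 1 (rare) .. 5 (almost certain)
    mitigation: str = ""
    residual_severity: int | None = None
    residual_likelihood: int | None = None

    def score(self, residual=False):
        """Inherent or residual risk score = severity * likelihood."""
        s = self.residual_severity if residual and self.residual_severity else self.severity
        l = self.residual_likelihood if residual and self.residual_likelihood else self.likelihood
        return s * l

register = [
    Risk("Biased outcomes for under-represented groups", 4, 3,
         "Bias testing on curated benchmark; re-weighting", 4, 1),
    Risk("Unexplained high-impact decisions", 3, 4,
         "Post-hoc explanations surfaced to human reviewers", 3, 2),
]

if __name__ == "__main__":
    for r in sorted(register, key=lambda r: r.score(), reverse=True):
        print(f"{r.description}: inherent={r.score()}, residual={r.score(residual=True)}")
```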

Moving forward, the field of ethical AI development presents numerous opportunities for future research and exploration, involving the continued refinement and development of ethical frameworks tailored to specific AI applications and contexts. Ethical guidelines must provide specificity, clarity, and transparency to effectively navigate decision-making in dilemmatic situations. AI developers need support in decision-making to ensure that their actions are not only legally compliant but also morally sound. While they possess expertise in their respective domains, they are also human beings with moral integrity and conscience. It is worth noting that a significant portion of existing guidelines is often convoluted and lacks clarity, posing challenges for developers to interpret and implement them effectively.

Furthermore, directing investments into research and development initiatives aimed at overcoming specific barriers can yield innovative solutions. Technologies that enhance transparency, fairness, accountability, and risk assessment in AI systems, such as explainable AI (XAI) techniques or the governance framework known as Fairness, Accountability, and Transparency (FAT), offer the potential to bolster transparency and accountability within AI systems. These advancements mitigate concerns related to biased or opaque decision-making processes. XAI has emerged as a promising remedy to foster transparency in AI, particularly in the realm of “bottom-up” systems characterized by heightened autonomy and algorithms capable of self-generation and adaptation, resulting in unforeseen decisions. Recent research indicates that XAI focuses on developing explainable techniques that empower end-users to comprehend, trust, and effectively manage the evolving landscape of AI systems (Saeed and Omlin 2023; Taylor 2024). Additionally, FAT Forensics presents a potential solution in the realm of algorithmic fairness, accountability, and transparency by facilitating the development, evaluation, comparison, and deployment of FAT tools. By constructing FAT tools modularly from the foundation, FAT Forensics ensures their resilience and accountability, while also safeguarding against downstream errors. Algorithmic fairness, accountability (including robustness, safety, security, and privacy), and transparency (including interpretability and explainability) often lack comprehensive evaluation strategies and software solutions, so tooling in this area is a valuable addition that could streamline research efforts. In response to these challenges, an open-source Python framework named FAT Forensics for evaluating, comparing, and deploying FAT algorithms can help address such shortcomings (Sokol et al. 2022).
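
For illustration, the sketch below applies one generic post-hoc XAI technique, permutation feature importance as implemented in scikit-learn, to a synthetic classifier. It is offered only as an example of the kind of transparency tooling discussed here; it does not use the FAT Forensics API, and the synthetic features carry no real-world meaning.

```python
# A minimal sketch of post-hoc explanation via permutation feature importance.
# Synthetic data; the goal is only to show how shuffling a feature reveals how
# much the model's held-out performance depends on it.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade held-out performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance drop = {importance:.3f}")
```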

Recommender systems (RSs) have become integral tools for managing information across various online platforms, from social media to e-commerce. These algorithms predict user choices to optimize platform offerings and reduce information overload, as noted by Milano et al. (2020). RSs play a crucial role in facilitating interactions on multisided platforms with numerous participants. From a social good perspective, this approach to RSs offers the benefit of raising user awareness about the potential implications of their preferences without imposing biases on the content they encounter. Additionally, users may have the option to receive recommendations related to social good alongside standard suggestions based on their preferences. This approach aims to strike a balance between paternalism and tolerance by enhancing user awareness without restricting their freedom. Fabbri (2023) supports the concept of multi-stakeholder recommender systems (MRSs) and pro-ethical design applied to conversational recommender systems (CRSs), as proposed by Milano et al. (2021). He suggests that these approaches can be integrated into various domains where algorithmic recommendations influence human decisions, ranging from e-commerce to health. By directing human choices towards ethical purposes alongside profit motives, these systems hold promise for promoting societal well-being. Jannach et al. (2021) define conversational recommender systems (CRSs) as software that assists users in achieving recommendation-related goals through multi-turn dialogue. CRSs play a vital role in addressing the explanation problem inherent in RSs’ pro-ethical design. When users seek explanations for specific recommendations, CRSs enable nuanced responses that may not be immediately apparent.
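
A minimal sketch of this idea, under the assumption that a separate “societal value” score is available for each candidate item, is to re-rank recommendations by blending that score with predicted user relevance. The items, scores, and blending weight below are hypothetical and are not drawn from Milano et al. or Fabbri.

```python
# Illustrative sketch only: re-ranking candidate recommendations by blending predicted
# user relevance with an assumed societal-value score, as one crude reading of the
# multi-stakeholder / pro-ethical design idea discussed above.

items = {
    # item id: (predicted relevance to user, estimated societal value), both in [0, 1]
    "clickbait_article": (0.92, 0.10),
    "local_news_report": (0.74, 0.80),
    "health_guidance":   (0.60, 0.95),
    "viral_meme":        (0.88, 0.30),
}

def rerank(items, alpha=0.7):
    """Score = alpha * relevance + (1 - alpha) * societal value; alpha is a policy choice."""
    scored = {name: alpha * rel + (1 - alpha) * soc for name, (rel, soc) in items.items()}
    return sorted(scored, key=scored.get, reverse=True)

if __name__ == "__main__":
    print("alpha=1.0 (relevance only):", rerank(items, alpha=1.0))
    print("alpha=0.7 (blended):       ", rerank(items, alpha=0.7))
```

The blending weight alpha is itself a value judgment; a pro-ethical design would expose and justify that choice to users and stakeholders rather than bury it in the ranking pipeline.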

Defining a company’s core values and cultivating a culture of ethical responsibility within organizations and the broader AI community are essential steps toward promoting ethical awareness and accountability. This entails providing education and training on ethical principles and practices, as well as creating mechanisms for reporting and addressing ethical concerns. Education and training initiatives play a crucial role in equipping stakeholders with the knowledge and skills necessary to effectively navigate ethical challenges in AI. By providing ongoing training on ethical principles and practices, organizations can ensure their long-term ethical commitment and cultivate a culture of ethical responsibility.

Integrating ethical impact assessments into the AI development process is crucial for evaluating potential ethical consequences and guiding decision-making to prioritize societal benefit while minimizing harm. Continuous oversight, ethical audits, and certification processes enable expert scrutiny to identify vulnerabilities and effectively address ethical considerations, including data governance, privacy protection, and fairness in algorithmic decision-making.

Finally, promoting public engagement and dialogue on ethical AI issues is essential for building trust and fostering societal acceptance of AI technologies. By involving end-users in discussions about the ethical implications of AI development and deployment, organizations can ensure that AI technologies reflect societal values and priorities.

In summary, despite the inherent challenges of ensuring ethical compliance in AIS development, it also presents an opportunity for innovation and advancement. Addressing these challenges requires a multifaceted approach, including interdisciplinary collaboration, regulatory and ethical frameworks, technological innovation, education and training, and public engagement. By adopting such a holistic approach, stakeholders can collectively strive for the responsible and ethical advancement of AI technologies.

9 Conclusion

The development, deployment, and utilization of artificial intelligence, robotics, and related technologies, alongside the software, algorithms, and data they use or generate, should prioritize serving humanity and society. While AI technologies offer significant opportunities, they also present substantial risks that necessitate addressing through a comprehensive regulatory framework. All personnel involved in the relevant processes must uphold ethical principles rooted in fundamental rights to ensure the development, deployment, and utilization of AI systems in a trustworthy manner.

Trustworthy AI systems must be responsive to various criteria reflecting the diverse values of all stakeholders. Approaches such as ethical assessments, auditing, AI ethics certification, or risk management have considerable potential to mitigate negative AI risks (Jedlickova 2024a; 2024b). Trustworthy AI aligns with the OECD’s values-based AI principles, which succinctly encapsulate the content of all aforementioned values and principles. These include benefiting people and the planet, human-centered values and fairness, transparency and explainability, robustness, security, and safety, as well as accountability (OECD 2023).

Establishing a value-based corporate culture and practices for the development and deployment of products or services based on autonomous and intelligent systems is essential. Fostering a corporate culture that prioritizes ethics, where individuals feel empowered to raise safety concerns and promptly address AI safety issues and early risk signals, is integral to successful leadership in the AIS domain. Ethical values such as fairness, accuracy, confidentiality, transparency, and privacy should serve as the bedrock of AI technologies. Trustworthy and ethical AI systems must exhibit reliability, safety, security, resilience, accountability, transparency, explainability, interpretability, and privacy enhancement. Achieving trustworthy AI necessitates a careful balance of these characteristics tailored to the context of the AIS use.

Ethical theories and principles are increasingly vital in preventing unintended implications of AI decisions. While AI technologies may be legal and safe, from an ethical standpoint, they do not always contribute positively to human well-being, as underscored by the IEEE Global Initiative. Technologies developed without considering well-being metrics can adversely affect people’s mental health, emotions, autonomy, and ability to achieve goals. Thus, it is imperative to identify appropriate metrics and indicators focusing on increasing human well-being. Well-being indicators should guide the development and implementation of AI systems, enabling the modeling of various scenarios and impacts to enhance societal benefits and avert unintended consequences. Well-being metrics, assessed subjectively through cognitive, affective, and eudaemonic dimensions, as well as objectively through factors conducive to well-being, can align with the objectives of human rights. It is important not to view human rights and well-being as competing priorities, where one is favored at the expense of the other (IEEE 2018). It is crucial to uphold all ethical principles and ensure respect for fundamental human rights, values, and well-being throughout the AI technology lifecycle. Incorporating ethical requirements into AI systems, implementing continuous and effective human oversight, independent evaluation of AIS operations, conformity assessment, risk management, and timely resolution of issues are fundamental imperatives. Achieving this level of integration requires continuous refinement of methodologies, tools, and good practices for ethical assessment and compliance monitoring.

Despite potential tensions among requirements, values, or principles, such as the accuracy-explainability tradeoff, trustworthy AI systems must meet all ethical requirements (Petkovic 2023). Responsible balancing of ethical requirements and principles should be integral to the AI process.