1 Introduction

Artificial Intelligence (AI) is a term used to refer to the ‘science and engineering of making intelligent machines,’ generally by using a computer to model intelligent behavior with minimal intervention from humans (McCarthy 2007; Hamet and Tremblay 2017). Since its inception at the 1956 Dartmouth College workshop, AI has developed to the point that it is being successfully deployed across a variety of domains around the globe, ranging from healthcare to fields such as education, engineering, economics, and finance (see, e.g., Panesar 2019; Luckin et al. 2016; Gogas and Papadimitrou 2021; Mhlanga 2020). Across these fields of application, the intended use of the algorithmic model must be defined by its developer in order to apply, develop, validate, and deploy the software or machine being used (Bitterman et al. 2020).

The rationale behind this need for definition and specificity can be thought of as twofold: (1) to ensure that AI is fit for purpose by achieving a desired and intended outcome, for example, an AI-based medical device being used to diagnose a specific condition; and (2) to avoid unintended harm to end users or future harm to others. One important way to meet these aims is to employ a "human-centric" or "human-centered" approach. Human-centered AI is a movement by AI developers and academics, among others, that aims to narrow the gap between the values of the AI engineer(s), the people who will use the technology (end users), and anyone else who might be impacted by it. Essentially, the main objective is to align the values of developers, who typically center algorithms, with those of end users, in order to prioritize human outcomes (Shneiderman 2022). The aims of such efforts are to mitigate potential harm and ultimately to promote human well-being (Stanford Institute for Human-Centered Artificial Intelligence, n.d.). Although this is not a universally held goal, some assert that developing human-centric AI is necessary for society’s long-term stability, despite many challenges (Bryson and Theodorou 2019; see also Lukowicz 2019). Moreover, some question which humans are being centered and whose values are being emphasized, particularly given the legacy of the West's dehumanization of, and epistemological dominance over, non-Western societies (Mhlambi 2020).

Despite this, and given the increasing pivot toward human-centered AI, the objectives of this paper are twofold: (1) we maintain that AI has always been human-centered, and we aim to highlight how certain philosophical perspectives, such as dualism, have created the illusion of AI operating independently of human values; and (2) we introduce two alternative frameworks, namely Ubuntu and maximum feasible participation (MFP), that can better align the objectives of human-centered AI with a community-based or participatory perspective. We conclude by emphasizing the key points, discussing implications, and offering directions for future research.

1.1 Becoming human-centric

There is much discussion today about AI becoming human-centric. The thrust of this exchange is that the algorithms that underpin AI have drifted from human control and do not reflect the values of their users. Brian Christian (2020) refers to this issue as the “alignment problem.” As a result, critics are calling for human values to undergird the development of AI-based technology (Dhanrajani 2018; Guszcza 2018; Xu 2019). Accordingly, algorithms must be more closely connected to everyday values, beliefs, and commitments if developers seek to mitigate potential harm, such as the alienation expressed by workers in several industries (Vicent 2019; Shneiderman 2022).

Yet the disconnect between AI-based technology and end users is beginning to cause personal and social problems. For instance, not long after Microsoft deployed its AI chatbot, Tay, on Twitter, it quickly began hurling derogatory insults and praising Hitler on the public social platform (Matyszczyk 2016). More recently, Meta, previously known as Facebook, deployed an AI-powered chatbot that promptly announced that Donald Trump was still president, that the election had been stolen, and that it had an Asian wife and watched anime (Thorbecke 2022). While many still see AI as a rational and reliable technology for human development, the root of the misalignment argument must be understood and underscored.

How has this separation occurred? The root of this drift away from human control is an issue as old as Plato (Grayling 2019). That is, throughout the Western tradition, philosophers have sought a base of knowledge and ethics that avoids the contingencies that are a part of daily life. To accomplish this, an escape from the everyday, subjective world is required to gain insight. For example, the Absolute Truth, a fixed reality where facts lie uncontaminated by humans (AllAboutPhilosophy.org n.d.), has been invoked to supply this foundation. The assumption is that once this pristine referent is discovered, real, unaltered knowledge and sound ethical principles become achievable.

Achieving this objective reality, from which facts can be retrieved, became much simpler with the theoretical maneuver made by Descartes in the first half of the seventeenth century (Bordo 1987). Rather than speculate about unknown and absolute foundations, he declared that subjectivity (mind) could be divorced from objectivity (physical reality). Later on, the fact–value and mind–body distinctions would become popular. In each case, the idea is that particular knowledge can be separated from subjectivity, or human contamination, and treated as objective and universal. According to Descartes, this distinction is both possible and necessary to secure reliable knowledge and unambiguous ethical standards.

In philosophical terminology, this separation of facts from values, or subjectivity from objectivity, is referred to as dualism (Robinson 2018). Dualism, accordingly, is the maneuver that allows algorithms, and AI, to appear autonomous because of their status as strictly logical and objective. Within the context of dualism, AI appears to gain autonomy yet, after some time, is thought to need an orientation supplied by human values. However, as long as dualism is in play, this ambivalence is difficult to resolve, especially when autonomy, and, at the extreme end, “sentience” (Tiku 2022), is thought to be the strength of AI. That is, as AI-guided technology begins to appear self-directed, and even to improve on human traits, interference by users would only compromise this technology and must be restricted.

Contemporary philosophy has a solution to such philosophical dissonance. Through a rejection of dualism, the difficulty of introducing human values in the face of ‘objectivity’ is overcome. For instance, drawing on phenomenology, many writers claim that all knowledge originates in the lifeworld, or the everyday world of human experience (Bakewell 2016). Consequently, nothing is devoid of human values, not even AI and associated technologies. When filtered through the lens of the lifeworld, AI is revealed to have a human base that is thoroughly mediated by human values.

Edmund Husserl, for example, undercuts the dualism linked to Descartes with a simple phrase. Husserl (1964) asserts that consciousness is always directed towards an object, which he terms "intentionality". As a result, the dichotomy between objectivity and subjectivity is dissolved, as both are united within the conscious experience. In acknowledging the role of intentionality, the spheres of objectivity and subjectivity converge into conscious action. Therefore, the lifeworld is the reality that is formed through intentional acts, according to Husserl (1970). Additionally, this world is an outgrowth of human action and is replete with values and meanings that are situational. Consequently, facts are never objective, but are rather subjective and should be interpreted based on the intentions of social actors, as phenomenologists argue.

In terms of AI, the key implication is that AI is never autonomous and disconnected from human values. Indeed, algorithms are a product of conscious activity and carry the standpoints that accompany this connection. As a result, AI can be treated as a mode of human expression, rather than a technology that relieves humans of involvement altogether. In the absence of dualism, a new relationship is established between AI and users that enables them to direct or redirect this technology at any time.

The problem, however, is that attention is often diverted away from this connection in the hope of making algorithms appear objective, unbiased, and scientific. Given the lifeworld perspective, algorithms are not autonomous and universal but constitute a worldview with unique traits and values. Nonetheless, the following are some of the assumptions associated with dualism that reinforce the illusion of algorithmic autonomy:

1. Dualism assumes that facts are empirical and are associated with the empirical features of behavior or events.

2. Dualism treats facts as concrete, measurable data points. In this regard, Lyotard (1984) comments that the current period is the age of the information “bit,” with the prevailing belief that reliable knowledge comes in the form of neatly packaged pieces of data.

3. Dualism assumes that facts are causally linked, meaning that a natural association exists between them. For example, if A happens, then B is likely to happen as a result.

4. Dualism assumes that the relationships between facts are entirely logical, with a precise and discoverable relationship between each one.

These themes constitute the backdrop of algorithms while providing this technology with the appearance of autonomy and engendering an illusion of untrammeled rationality. In other words, this background philosophy frames technology in a particular way that directs attention away from any human connection. However, considering the lifeworld, this externality is no longer justified.

Three ideas are particularly noteworthy in this realization. First, algorithms are never divorced from human reach. Second, AI always has a value orientation that underpins any technical operations. Third, this AI-based technology can be guided by many values that do not attempt to hide human presence. The point is that although a particular philosophical maneuver (dualism) and practice (technical focus) strive to hide the connection of algorithms to human action, this association is at the core of their creation and use.

The moral of this assessment is that no one should be striving to make algorithms humane, given that human values are already in action. Making these devices human-centric, accordingly, does not involve a monumental discovery. What is involved, instead, is a decision to make this technology less alienating to stakeholders and community members. That is, persons must explicitly decide to make AI reflect what they desire, rather than deny their presence. Next, we present two alternative perspectives, namely Ubuntu and maximum feasible participation, that hold the potential to reduce algorithmic alienation and to support community and stakeholder well-being.

1.2 Moving from the implicit to the explicit: the need for community-centric values

What motivated our elucidation of the human values that underlie AI is a call to explicitly put forth images of AI that are beneficial and that consider the well-being of all stakeholders and communities. By doing so, we can support the development of AI that aligns with ethical and moral principles and promotes the greater good of society. One such approach is seeing AI from the perspective of Ubuntu, a community-centered framework. Another is maximum feasible participation, a framework that demands the substantial inclusion of community members or stakeholders throughout the AI lifecycle, from development to deployment. These two perspectives can be advantageous for both developers and community members.

1.3 Ubuntu

Ubuntu is an African philosophy and a social ontology that provides guideposts for how one relates to other human beings. Originating from the Nguni and Bantu language families (Mugumbate and Nyanguru 2013), it is a form of humanism that elevates a constant concern for the collective, community, or stakeholders and highlights the intersubjective nature of all community members’ lives and outcomes. Essentially, an Ubuntu outlook acknowledges that all lives are, to a great extent, entangled and that the behavior of one individual can influence the lives of all community members or stakeholders. In other words, the fates of all community members are linked (Dawson 1995).

Ubuntu, as a way of viewing social relationships, is a distinct worldview that prioritizes community interrelatedness over a view of individuals as atomized. For instance, an axiom most commonly attributed to the Ubuntu philosophy is “I am because we are.” This declaration epitomizes the core values underpinning the Ubuntu framework. These values relate to sharing, solidarity, humanness through participation, and the acknowledgment that individuals of high rank in society are positioned only with the support of community members or stakeholders (Mugumbate and Nyanguru 2013). Applied to artificial intelligence, Ubuntu recognizes that AI is only possible through the extraction of training data from the community.

A growing number of scholars are arguing for the use of an Ubuntu framework in the approach to AI development and deployment. For instance, Langat et al. (2020) and van Norren (2021) consider the ethical implications of AI and the advantages of adopting an Ubuntu worldview. These scholars contend that, when facing privacy issues, an Ubuntu framework would prioritize transparency to community members or all stakeholders. Moreover, they argue that an Ubuntu lens would elevate community participation and democratization in matters of algorithmic decision-making. Gwagwa et al. (2022) maintain that an Ubuntu approach, with its consideration of the impact of AI work on community members, could benefit not only African AI developers but also offer universal benefits, as its inherent imperative to embrace all community members or stakeholders could reduce problems of inclusivity and diversity. Adams (2021) maintains that Ubuntu can be a valuable philosophy in the struggle to decolonize Western-dominant AI. Mhlambi (2020) asserts that Ubuntu, by framing individuals as communal beings, will ultimately promote community well-being and social solidarity. Black (2018), in his thesis dealing with the urgent need for Ubuntu within AI research, maintains that, given the acceptance of an Ubuntu philosophical outlook, “An AI researcher… accepts the responsibility they have to develop AI to the benefit of all people and not to the benefit of some while harming others” (2018, p. 26). Accordingly, Ubuntu exalts community elevation over a type of techno-marginalization.

Consider how Ubuntu contrasts with the axiom popularly expressed in the West, “I think, therefore I am,” which elevates the individual or entity over community well-being. This is underscored by Bakewell’s (2016) assessment of existentialism, a European philosophy commonly associated with the French philosopher Jean-Paul Sartre. In her book At the existentialist café: freedom, being and apricot cocktails, Bakewell describes existentialism as an anxiety that extends no further than the self. This is a glaring contrast with an Ubuntu outlook that concerns community well-being and interconnectedness.

In the case of artificial intelligence and its expanding utility in the West, such a narrow concern for the “I,” or the entity deploying AI, would primarily seek to bolster undergirding neoliberal ambitions while trivializing the potential long-term impact on community members and stakeholders. A consequence would be to reinforce the marginalization of vulnerable communities by alienating them from the developmental process of AI technology and exacerbating inequities that exist at social, political, and economic levels. Without healthier, community-centric approaches like Ubuntu, organizations will continuously have to focus on mitigating the harm associated with philosophical frameworks that prioritize only the "needs of the business," leading to ongoing algorithmic casualties.

Currently, the evidence of harm caused by AI is increasing, as both private and public organizations are implementing AI in their operations. For example, Obermeyer et al. (2019) found issues within algorithms developed by a major health insurance company that used healthcare costs as a proxy for patients' health needs. In the training data, white Americans' health expenditures were significantly higher than African Americans'. From this, it was inferred, seemingly logically, that white Americans were unhealthier than African Americans and needed more healthcare intervention. On the surface, the premise seems reasonable. However, this proposition does not hold for African Americans, given that it does not account for the complex historical and contemporary racialized relationship between African Americans, the United States (US), and the healthcare industry.

Indeed, in the US, African Americans, compared to white Americans, routinely endure disproportionate levels of morbidity while having less access to healthcare due to the various ways that systemic racism divorces stigmatized social groups from health-promoting resources (Paradies 2006). Furthermore, African Americans are more distrustful of the medical industry owing to the centuries-long history of ghastly, unethical medical experimentation by healthcare workers, e.g., James Marion Sims (Washington 2006). As a result, African Americans have disparate access to healthcare and are less likely to trust the healthcare industry, leading to fewer medical encounters and lower healthcare spending compared to white Americans. Such a social dynamic produces a misleading pattern in training data.
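To make this mechanism concrete, the following is a minimal, purely illustrative sketch in Python. The synthetic data, parameter values (e.g., a hypothetical 40% reduction in access to care), and the generic linear model are invented for illustration and are not the algorithm audited by Obermeyer et al. (2019); the sketch only shows how training on cost as a proxy label for health need can reproduce the pattern just described.

```python
# Illustrative sketch only: synthetic data and hypothetical parameters, not the
# algorithm audited by Obermeyer et al. (2019). It shows how a model trained to
# predict *cost* (a proxy for health need) inherits unequal access to care.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 20_000

# Latent health need is identically distributed in both groups.
group = rng.integers(0, 2, n)                  # 0 = full access, 1 = constrained access
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Utilization and spending track need but are suppressed for the constrained
# group (a hypothetical 40% reduction standing in for access barriers).
access = np.where(group == 1, 0.6, 1.0)
visits = need * access * 4 + rng.normal(0, 0.5, n)            # prior-year visits
prior_cost = need * access * 800 + rng.normal(0, 100, n)      # prior-year spending
future_cost = need * access * 1_000 + rng.normal(0, 150, n)   # proxy label

# Train a generic model on the proxy label -- the step that imports the bias.
X = np.column_stack([visits, prior_cost])
risk_score = LinearRegression().fit(X, future_cost).predict(X)

# Enroll the top 20% of scores in an extra-care program, as a cost-based
# screening tool might.
flagged = risk_score >= np.quantile(risk_score, 0.8)

for g in (0, 1):
    m = group == g
    print(f"group {g}: flagged {flagged[m].mean():.1%} of members; "
          f"mean true need of those flagged = {need[m & flagged].mean():.2f}")
# Although both groups have the same distribution of need, the constrained
# group is flagged far less often, and only its sickest members cross the
# threshold.
```

Even in this toy setting, it is the proxy label, not any explicit use of race, that transmits the inequity into the model's scores, which is precisely why scrutiny of what the label actually measures matters.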

In this case, appropriate consultation with subject matter experts, i.e., stakeholders who study health inequity and inequality, could have provided rich insight to the algorithm's developers. Furthermore, by failing to acknowledge that data are products of social processes and not objective representations of social reality, the AI developers inadvertently exacerbated racial disparities in healthcare, further harming a vulnerable community. An Ubuntu framework would be valuable in this case because of its preoccupation with including relevant stakeholders in algorithmic decision-making, with the aim of expanding equitable access to healthcare. Such a community-centric approach would foster community well-being. Maximum feasible participation is another framework that can assist in aligning with the goals of human-centric AI.

1.4 Maximum feasible participation

To date, maximum feasible participation (MFP) has not been proposed in the literature as a framework for AI. MFP is a political phrase that comes from a provision in the US Economic Opportunity Act of 1964, enacted during the tenure of President Lyndon B. Johnson (Rubin 1969). At the time, the US faced high poverty rates, which compelled the Johnson administration to formally declare a “War on Poverty,” the unofficial name for the Economic Opportunity Act.

Under the auspices of the War on Poverty legislation, a more robust social safety net was established to help lift US citizens out of impoverished conditions. One example was the rapid creation of community action programs across the US. These programs aimed to tackle social issues such as health, education, and poverty by infusing government funding and resources into needy communities. However, a significant requirement within the Economic Opportunity Act's language became notorious for igniting ongoing conflicts between the major political parties and their ideological supporters. The requirement mandated that residents, or groups predominantly comprising poor individuals, who would be served and impacted by the community action programs must participate in the governance and decision-making of those programs to the greatest extent possible, hence “maximum feasible participation” (Rubin 1969).

The broad goal of this tactic was to counterbalance the conventional top-down, paternalistic nature of government funding programs while empowering community members to exercise greater self-determination by participating in the design of programs that would have a direct impact on their community's outcomes. Essentially, rather than public and private officials alone determining how government funds would be distributed to community members, the maximum feasible participation clause required laypeople to participate in the decision-making process in a collaborative effort with officials, which was, at the time, a novel policy within the US political context. Accordingly, Rubin (1969) states that "community action [or participation] [was viewed as]…a vehicle for community development" (p. 18).

Until a 1967 amendment to the MFP mandate was introduced, there was ongoing debate, and constant confusion, about how it should be implemented. The amendment, a significant development, specified that at least one-third of the board members of community action programs needed to be community members, while public officials and industry leaders were limited to a maximum of one-third of board membership. This stands in stark contrast to vague ideas of community participation or democratization that lack clear and detailed operationalization.

To illustrate the potential challenges that a maximum feasible participation framework in the field of AI could face, it is worth noting that many city and organizational leaders pushed back against this approach. They cited reasons such as a lack of expertise among community members, rumors that the strategy would enable community members to undermine professional authority, and concerns that it could lead to revolution (Rubin 1969). Rubin (1969) states, “welfare agencies and politicians made massive efforts to retain their doctrine, dogma, and power, while the leaders of the poor did, indeed, use federal funds to try to force institutional change.”

While the maximum feasible participation clause would go on to see thousands of community members and stakeholders employed by community action programs, it would never reach its full potential given the constant struggle for power by city officials and organizational leaders. However, such a mandate that sought inclusivity in the decision-making process regarding government funding distribution can provide relevant guidance in efforts to democratize and align with the goals of human-centric AI.

1.5 The social benefit of a community-centric approach

A community- or stakeholder-centric approach from the perspective of Ubuntu or maximum feasible participation offers many potential benefits to organizations deploying AI. In the legal realm, the inclusion of stakeholders in the total lifecycle of AI would reduce the number of class action and personal litigation suits against organizations deploying AI models on citizens. According to the Ethical Tech Initiative of DC (2021), an organization that maintains a database of AI litigation cases in the US, there have been over 30 lawsuits related to the use of AI technology by various organizations. The most commonly cited themes in these lawsuits, as reported by plaintiffs, were concerns about transparency; lack of human, scholarly, and expert review; issues related to the reliability of the technology; and the improper use of variables such as gender and race in algorithmic decision-making. These concerns have been raised by multiple plaintiffs across multiple lawsuits, indicating a significant issue with the implementation of AI technology in various industries.

In one class action lawsuit, Bauserman v Unemployment Insurance Agency (2015), the Michigan Unemployment Insurance Agency, through its use of algorithmic decision-making, falsely accused 40,000 residents of insurance fraud, which led to civil penalties and the seizure of citizens' tax refunds (Ethical Tech Initiative of DC 2021). As a result, the state was forced to distribute tens of thousands of monetary waivers to those affected (Pluta 2022). The lack of stakeholder participation, that is, human review, in the lifecycle of the algorithm is cited as one of the contributing reasons for the flawed automated system. In this particular situation, implementing an MFP approach could have reduced the chances of Michigan residents losing their insurance benefits and suffering negative psychological consequences. Unfortunately, it is the taxpayers who, in the long run, fund such state inconsideration and lack of foresight.

Considering AI and medicine, an Ubuntu approach would seek to, among other things, cultivate patient and public collaborations in the active collection of data, sharing of methodologies, and the dissemination of research results to the public. Each party would understand that such a way of relating to each other would be mutually beneficial. In the UK, McKay et al. (2022) shed light on three primary efforts made in the medical and AI industry to try to actively involve the public in the governance of AI, namely “lay representation on data access committees, patient and public involvement groups, and citizen forums.” Although scholars argue that there is a need for improved integration of these methods, the efforts in the UK do provide a framework for organizations looking to develop patient and public partnerships.

In public health, an explicit stakeholder/community-centric approach would align AI values with more upstream health interventions, that is, the efforts made to prevent individuals from acquiring disease (Shultz et al. 2019). Through a circular relationship, as seen in Fig. 1, such a model would not only involve stakeholders in the research, investigation, development, and deployment of AI, but could also support the training and employment of stakeholders and community members, which ultimately increases equity and subsequently reduces inequality.

Fig. 1 Maximum feasible participation model: one-third minimum participation from stakeholders (consumers and professionals) with a maximum of one-third participation from AI/ML engineers (original source)
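Because the MFP mandate specifies numerical thresholds rather than a vague commitment to participation, the composition rule in Fig. 1 can be checked mechanically. The following is a minimal, hypothetical sketch of such a check; the role labels, function name, and example roster are invented for illustration and are not drawn from any existing governance tool.

```python
# Hypothetical sketch of the composition rule in Fig. 1: a review board for an
# AI deployment should seat at least one-third stakeholders (consumers and
# professionals) and at most one-third AI/ML engineers. Role labels and the
# example roster are invented for illustration.
from collections import Counter

MIN_STAKEHOLDER_SHARE = 1 / 3
MAX_ENGINEER_SHARE = 1 / 3

def check_mfp_composition(roles):
    """roles: list of seat labels, e.g. 'stakeholder', 'engineer', 'official'."""
    counts = Counter(roles)
    total = len(roles)
    stakeholder_share = counts["stakeholder"] / total
    engineer_share = counts["engineer"] / total
    return {
        "stakeholder_share": round(stakeholder_share, 2),
        "engineer_share": round(engineer_share, 2),
        "meets_mfp": (stakeholder_share >= MIN_STAKEHOLDER_SHARE
                      and engineer_share <= MAX_ENGINEER_SHARE),
    }

# A hypothetical nine-seat board: three stakeholders, three AI/ML engineers,
# and three public officials.
board = ["stakeholder"] * 3 + ["engineer"] * 3 + ["official"] * 3
print(check_mfp_composition(board))
# {'stakeholder_share': 0.33, 'engineer_share': 0.33, 'meets_mfp': True}
```

As with the 1967 amendment described above, the value of such a rule lies in its operational clarity: compliance can be verified from a board roster rather than merely asserted.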

Additionally, sustained engagement with community members and stakeholders could theoretically provide increased access to health data for researchers. This would potentially enable quicker ways to prevent and reduce disease, as well as provide longitudinal data rather than reliance on cross-sectional data. Although AI is predominantly deployed for midstream and downstream preventive health measures, that is, the efforts designed to mitigate the impact of a disease or injury and the efforts that reduce the impact of an ongoing injury or disease through clinical intervention, respectively (Shultz et al. 2019), a stakeholder- and community-centric AI would bolster public health interventions.

2 Conclusion

This paper contributes to the literature in two primary ways. First, we provide a response to recent suggestions that AI should become more human-centric. Specifically, we clarify that AI is already human-centered. However, the philosophy of dualism, which holds that human experience is separate from objective reality, creates an illusion of a divide between human actions and AI decisions, leading some to believe that AI operates independently of human influence. In reality, AI technology reflects human activity, as it is designed by humans in pursuit of human-derived goals. Second, although proponents of human-centered AI aim to promote human well-being, we present two alternative, community-centered frameworks, Ubuntu and MFP, that can better align AI with the aspirations of those seeking to democratize and center ethical values, and thereby support community well-being. These frameworks provide clear guidance on how AI development and deployment can become more inclusive, rather than alienating, across various industries. In conclusion, it is important to highlight that when organizations choose to develop and deploy AI, whether their focus is on maximizing profits (i.e., their "bottom line") or promoting community well-being, each decision reflects a specific set of human values. This is not a matter of misalignment: ultimately, the choice of which values to prioritize when deploying AI depends on the organization's goals and priorities, as well as the ethical considerations that guide its decision-making process.