1 Introduction

The issue of democratization has been raised recently in various sectors of society [12, 13, 25, 43]. All institutions are thought to benefit from this process. AI decision-making is no exception. Many theorists and practitioners contend that this technology will be improved by democratization. For example, McQuillan [23] proposes the creation of people’s councils to bolster ethical decision-making surrounding machine learning. Wong [42] maintains that the accountability for reasonableness framework (AFR) for industry and researchers could help reconcile dissimilar values in algorithmic decision-making by buttressing regulation, publicizing design methodology, and accounting for the values of non-technical individuals, among other things. In the healthcare domain, Rubeis et al. [30] argue that while increased democratization in healthcare decision-making is needed from all stakeholders, a more defined concept of democratization is due for healthcare practitioners. Although some problems have been noted regarding AI democratization, none of these issues is insurmountable.

In a recent article, Himmelreich [16] summarizes many of the traditional shortcomings of democracy. He notes, for example, that democracy is expensive and requires time, commitment, and personal involvement. These are technical matters that limit participation and slow outcomes. However, those who support democracy believe that this form of governance is worth the cost. Nonetheless, accommodations and resources must be available to support this process.

On the other hand, some critics argue that democratic decisions that are made will be suboptimal. This objection can be found in discussions as early as Plato [37]. The main reason underpinning this criticism is that not everyone is an expert, and thus, some uninformed input may receive unwarranted consideration. Additionally, humans are flawed and may lack the courage to make difficult choices, or may be swayed by flashy rhetoric and engage in reckless behavior [6]. In the end, people seem to muddle through this process and make satisfactory decisions, even if corrective action is needed in the later stages.

Generally, widespread participation is considered to be a virtue, even when decisions are judged to be bad [25]. Further, people think of themselves as free and the outcomes are, for the most part, thought to be legitimate. Therefore, the answer to the question “Why democracy?” seems to be fairly obvious to most persons. Freedom in daily life decision-making is highly valued and institutions do not function effectively if they are not considered to be legitimate. That being said, some other issues, such as what is democracy, what to democratize, and how to democratize require further attention.

These questions, however, do not raise doubts about the value or cost of democracy but labor to refine this process. Concerning AI, the need for democracy seems to be compelling and has widespread support [36]. Refinement, certainly, is a worthwhile consideration, especially when attempting to enhance participation. In fact, improving democracy is seldom resisted. The purpose of this paper is to imagine how to move forward with the democratization of AI decision-making.

1.1 What is democracy?

Himmelreich [16] raises doubts about democratizing AI, especially if this process introduces another layer of governmental bureaucracy. Such a development, he argues, will only exacerbate the problems associated with democracy. Bad decisions will likely be made because of uninformed or unprepared stakeholders, political motives, or morally questionable actions. The dubious motives and behavior of government bureaucrats and other representatives will not improve AI, but most likely impede progress by undue regulation and diverting attention from serious issues.

However, many are calling for a direct style of democracy. This mode of democracy in AI discourse is somewhat in vogue nowadays and is almost immediately applauded [41], although this idea needs to be fleshed out. Here is where community-based thinking and strategies may lend some assistance. Community-based work has begun to infiltrate a variety of areas, such as research and local governance [26]. The idea is that all processes are improved if they are brought closer to the people.

Two points guide community-based work. The first is that local knowledge is vital. This principle aims to elevate important local definitions and the accompanying narratives about knowledge. Valid knowledge, accordingly, is connected to human interests and the ways that individuals and communities organize their lives [14]. In other words, participation shapes local realities and thus provides norms and behaviors with meaning and legitimacy. Developers, for example, who ignore this pool of information will most likely propose irrelevant policies.

Given the importance placed on locally produced knowledge, the second principle of community-based work highlights the need for local control of all interventions [26]. If local knowledge is crucial to successful planning, what better way is there to ensure that this information informs all developments? In the case of AI, the narratives of users can be built into algorithmic models. In this way, the values, beliefs, commitments, and definitions that motivate individuals and communities will be reflected in this technology. As part of a community-based strategy, the dominant narratives of developers are replaced by local storylines. More concretely, the models that transform input into output embody local concerns, such as language use and local cultural norms [31, 34].

What is being proposed thus far is that democratization is expected to bring AI development and deployment closer to the people. Local knowledge and control are central to this practice. With this local participation, AI becomes more useful, less harmful, and truly human-centric. But the issue remains, who are the people? There are a lot of persons involved in the development, deployment, and use of this technology. Thus, where should direct participation begin?

1.2 A boundary issue

What needs to be recognized at this juncture of this discussion is that participation in direct democracy is not superficial [2]. Participation signifies far more than the involvement or consultation of stakeholders. Likewise, participation refers to more than seeking and accepting input from users. With participation, the narratives that generate knowledge and shape information landscapes are brought to the forefront of AI development [4]. Indeed, very different realities that counter the status quo may be at stake. For instance, by participating in the AI development process, marginalized communities can glean more epistemic justice, a process whereby the knowledge systems and worldviews of indigenous and other non-dominant groups are treated as equal to those of dominant groups. In this way, the range of possible knowledge bases expands, a change that goes beyond soliciting input.

Specifically, when non-dominant groups participate in the data labeling process, rather than having labels and concepts imposed on them, their realities are supported. Take, for example, the Māori, an indigenous group who regard all data for and about the Māori as potential property that embodies the living essence of the Māori people. One of the principles laid out in the charter of Te Mana Raraunga, the Māori Data Sovereignty Network [22], states that “Māori should be involved in decisions about the collection of and access to Māori data, analysis and interpretation” (p. 3, par. 1). The Māori maintain that any usage of their data is directly tied to their group’s well-being, future, and ultimately, their reality.

In a similar vein, the National Congress of American Indians of the United States has supported the development of a resolution asserting control of indigenous data in efforts to foster more self-determination, increase autonomy over indigenous knowledge, and further aims of decolonization, among other things [11]. Such examples are indicative of non-dominant groups seeking to define their realities and elevate their narratives, which ultimately supports group well-being.

These scenes of participation also include employees who represent a community within the workplace. Aradau and Blanke [1] highlight how, in 2019, over 4000 Google employees signed a petition asking the company to halt its partnership with the US military, a partnership that planned to weaponize AI in the realm of surveillance technologies. This type of democratization in AI decision-making is described by Aradau and Blanke as a “little tool of friction,” which refers to “actions that slow down, try to move in a different direction, or…produce hindrances” (p. 152, par. 3). These scholars emphasize that such frictions are important because, at the very least, they foster democratic scenes of dissensus, a counter-narrative to the potentially harmful, dominant narrative [1].

Additional scenes of participation within AI democratization include online citizen science projects such as Zooniverse (Zooniverse.org). Zooniverse is a research organization that combines machine intelligence with human, particularly non-technical, intelligence to aid in training algorithms, among other research projects. Moreover, a growing number of organizations in the field of AI and ethics are actively seeking participation from individuals in the social sciences, law, philosophy, and policy to ensure a multidisciplinary perspective in AI decision-making processes.

Such perspectives on participation produce very different, anti-Cartesian scenes. No longer do persons confront reality and, if accurate, reproduce this situation as faithfully as possible. Reality, instead, is recognized to be an outgrowth of participation, and this activity shapes the realities of individuals and communities. The purpose of participation, accordingly, has important implications beyond simply demographic representation. What is especially pertinent now are the realities that are created and may be overlooked.

Democratization, therefore, should not be taken lightly. As Stanley Fish [9] suggests, reality hangs in the balance. But what reality is going to be accepted as valid and represented in practices such as AI development? Even if democracy is difficult and sloppy, participation is key to any serious assessment. After all, how can planning go forward in the absence of reality? Who participates in critical decision-making relates to the reality that is in play. Further, what is the boundary that delimits the range of participation [35]?

The first issue that must be addressed is the identification of the decision domain. In other words, what should be the focus of attention in the development and deployment of AI? For example, emphasis could be placed on data, algorithms, model development, storage, or the marketplace [21]. These facets are often identified as significant in the technology spectrum, and each of these elements can be democratized. The processes may be different in each case, but the general aim is to promote widespread participation.

Another aspect of the boundary problem relates to who will participate. Important to remember are the decision domains and the various realities that may be brought to this situation. Data scientists, responsible technologists, and social scientists could collaborate with local persons to accomplish the task of drawing boundaries around community and narrative relevancy (locality) to reduce potential blind spots. These local persons could be community members or workers at a job site. However, when drawing the boundary of participation, the first step should be properly defining the immediate task.

Yet, what is the meaning of immediate? Take the implementation of AI in a warehouse. In this example, data and storage may have some relevance. But clearly, algorithms and model development are more immediately relevant. These factors are closest to the work. Additionally, participation could involve inventors, investors, designers, end users, or promoters. In a warehouse, however, the pace and rhythm of effort can be affected by the use of AI to organize the flow of work. Reports tend to show that fatigue and feelings of being overworked can result from inappropriately designed AI, or AI that is model-centric and not human-centric. The specific participation of front-line employees in model development is most relevant in this workplace scenario [17, 20].

Based on the above designation of immediacy, democratization should stress the participation of the workers who are affected by the development of algorithms that are employed to organize work. The instructions that are utilized can thus reflect and become more inclusive of the narratives of these persons, instead of other rules that may be irrelevant. What must be remembered is that algorithms are never neutral, despite the idealized status granted to these devices. Some narrative is always operative, and some rules are regulating behavior. Democratization strives to ensure that the appropriate narratives inform the algorithms that shape workflow.

Particularly important are the realities created by these narratives [4]. Using narratives imported by developers that fail to convey the realities of workers will likely cause problems, such as fatigue and bad attitudes toward work. What is important is not simply input, but a concern for the relevant interpretation of narratives. In other words, participation is superficial, unless the creative capacity of narratives is appreciated and built into algorithms. When the properly interpreted realities of the narratives are adopted, the flow of work becomes more acceptable to workers, and alienation is reduced.

This sort of consultation is not so difficult to imagine. This solicitation of narratives, and their proper interpretation, is indicative of self-determination. In management studies, sometimes, the term self-management is used [33]. In the framework of organizations, such as workplaces, the value of participation has been recognized since the onset of the Human Relations School in the 1920s. The moral of this history is that worker participation is beneficial and not very difficult to undertake [24]. The general result of this involvement is that work conditions are more commodious, production improves, and employees have better attitudes toward their jobs.

Democracy may not be neat. Nonetheless, the benefits seem to outweigh the problems. Those who worry that the democratization of AI may be too difficult should recognize how this process can be focused. While all of the components of AI development are significant, focusing on the most immediate decision domain can be helpful. What may initially appear to be a daunting task can thus become manageable.

Even within a narrowed scope of AI democratization, some critics fear that this process may devolve into a free-for-all. They worry that experts will be marginalized and their ideas ignored. On the other hand, those with minimal expertise, and perhaps mostly political motives, could dominate discussions. The result of this disorganization will be faulty decisions and poorly designed and deployed AI. However, direct democracy differs from representative models and attempts to address this issue through the coordination of various viewpoints. The point is to avoid governing by individual fiat.

There are ways to avoid the monopolization of discussions by the most educated or aggressive persons. The Delbecq and Delphi methods are available to facilitate widespread participation [7]. Although somewhat effective, these strategies are mostly technical correctives. What is especially noteworthy in direct democracy, however, are the realities brought to deliberations and the proper interpretation of these perspectives in any outcomes. Here is where dialogue becomes important, which is not necessarily a part of these well-known techniques.

1.3 Democracy and dialogue

At the core of representative democracy is the individual. Decisions are based on personal choices. Persons gather information, consider their interests, and cast a vote. Within this framework, they act on their respective beliefs and assessments. In the end, these votes are tallied, and policies are pursued [40]. The collective good is a product of a myriad of individual decisions. Voting is treated as a personal matter.

Direct democracy is different. The basic axiom is that personal narratives are never written alone. In this sense, persons are not like atoms, and policies should reflect collective interests [2]. Voting is thus replaced by dialogue as decisions are made through a lot of talking. If dialogue simply ends in a vote, this process is considered to be a failure. The goal is to reach a conclusion that everybody can live with, at least for a while.

The point to remember is that dialogue is not merely consultation, even if this interaction continues for some time. To take part in real dialogue, notes Hans-Georg Gadamer [10], persons must “risk” their original positions to truly engage others. Since persons bring their realities to dialogue, they must recognize the limits of those initial positions to entertain the realities of others. When these shifts in orientation occur, persons can begin to understand one another and undertake worthwhile negotiations. Dialogue can thus begin.

Although talking is necessary to enter into dialogue, there are many other factors at play. What must be recognized is that persons bring realities with them to a dialogical meeting. And for dialogue to begin, these different realities must be entered and interpreted properly. Persons must overcome their original positions, so that one another’s narratives can be given serious consideration. Such reflection and maneuvering must occur before a dialogical meeting can be said to be possible. To borrow from Enrique Dussel [8], others must be viewed as having understandable and legitimate positions that are worthy of engagement.

Such recognition, however, does not mean that every position will not be critiqued, subject to modification, or eventually rejected. What dialogue guarantees is that all viewpoints will be correctly interpreted, as their proponents intend, and receive a fair hearing. Dialogue is thus not an encounter but a meeting. The narratives that each interlocutor brings to this dialogical situation should be interpreted as the author intends. In this way, the various realities that are possible can be given serious consideration. In a warehouse, for example, what workers have to say should be viewed as expressing a, possibly, very different perspective than initially expected. This new reality is the focus of attention in dialogue.

Jürgen Habermas [15], for example, has proposed a procedure to promote this kind of assessment in the form of an ideal speech situation. In this meeting, all persons are allowed to speak and expect to be heard as they intend. The assumption is that when expressed in this environment, all ideas will receive serious consideration. And if initially some ideas are rejected, they may stimulate other, more acceptable thoughts. In this non-threatening setting, criticism should not be taken personally but promote continued investigation and discussion.

In this situation, experts and novices will both clarify their positions, with the aim of having, at minimum, parts of their perspectives included in the final positions. Similar to a Quaker meeting, the hope is that everyone will be on board with the final decisions. No one should walk away thoroughly disgruntled. And even if a proposal is completely rejected, after a thorough examination, this outlook can be revisited in later meetings.

In more concrete terms, this dialogue could begin with the goal of assessing the proper work pace that will be built into the algorithms that organize task assignments. With this aim in mind, supervisors and workers can begin their exchanges. The purpose is not merely to win an argument by proving that others are wrong, but to listen to the respective narratives that are advanced, particularly the various realities of work. Often, the difficulty of tasks is not appreciated by supervisors, and this exercise is insightful. In these cases, what workers have to say makes sense and begins to guide the work process. But sometimes, what they have to say is unclear, or unacceptable, and only a minimal understanding is achieved. Nonetheless, this process helps this group of discussants to grow.

The purpose of this dialogue is to cultivate the cooperation necessary to create a robust AI. No longer will AI generation be an ad hoc activity, with separate factions offering criticisms and proposals with no endgame in sight. Will dialogue eliminate the excesses that are feared by skeptics of this strategy? Probably not. However, there is a message that is not a prominent part of representative models. That is, there is a normative expectation pertaining to participation and outcomes [3, 25].

Participation is encouraged and all positions are treated with dignity. Additionally, outcomes are touted to be collective initiatives. The overall message is that everyone's interests are important and should be reflected to some degree, based on dialogue, in the final decisions. Persons, in other words, should not be jockeying for position and attempting to game the system to merely seek advantages. Will everyone adhere to the purport of this strategy? There is no guarantee. Nonetheless, the conditions are in place whereby every proposal can be challenged and the common good reinforced.

When the democratization of AI is raised, the purpose is often to correct the biases in algorithms [18]. These biases, of course, have implications for marginalized populations and for appreciating any disparities. But much more is at stake. An important, but often overlooked, issue relates to the narrative character of reality and how these stories should be understood. These narratives expose the need for dialogue and the significance of local realities in the creation of AI. A correct rendering of these realities requires entry into these narratives. Accordingly, this mode of democratization can lead beyond the correction of biases and challenge discriminatory practices.

1.4 Structural injustice

Another issue raised by critics is whether increased democracy can deal with structural injustice. Representative democracy addresses this question indirectly. Because the focus is on the individual, personal barriers to inclusion and advancement are sometimes addressed. Petitions are made to alter individual behavior, but institutional modes of discrimination and exclusion are often left intact. For example, persons can be encouraged to apply to a range of schools, based on a moral argument about diversification and democracy, while technicalities remain in place to restrict enrollment and success [5, 29]. Individuals are thus expected to figure out how to navigate institutions, while the so-called structural barriers remain in place.

Direct democracy provides a much broader perspective; the general expectations are different. Because democracy is understood to be a social affair, barriers to inclusion are the focus of attention. Moreover, everyone is expected to reflect critically on these impediments and work for their elimination. Personal advancement is not necessarily the measure of success, but whether institutions are changed sufficiently to promote real inclusion and human well-being.

In Habermas’ [15] ideal speech situation, the aim is not merely to encourage individual expression. Instead, everyone is responsible to establish a setting where all expressions receive a fair hearing. For this end to be achieved, for example, stereotypes must be addressed that may discredit certain speakers and their ideas. Additionally, all stakeholders are expected to promote tolerance, so that everyone feels comfortable as part of the group. Persons not only watch out for themselves but are required to attack any behaviors, barriers, or threats to inclusion that they may detect.

In direct democracy, persons are not simply individual agents. They are, instead, coproducers of all outcomes [15, 19]. For this reason, collective responsibility is not viewed as a burden or a sacrifice, as is often the case in representative strategies. An interpersonal commitment is assumed. The intention is that mutual support is commonplace and that persons protect one another from assault. Outsmarting or surpassing others is not a part of dialogue. In this process, persons move together to achieve mutually determined aims. All are equals in this activity, a condition that must be actively defended and maintained by the stakeholders.

Structural barriers are usually thought to be social factors that individuals cannot change. In a representative democracy, these features provide the background that serves to guide discussions. But because direct democracy is, from the outset, a collective undertaking, these features are not beyond the reach of persons. These structures are no longer institutional imperatives lurking behind the scenes. Rather, these institutions are nothing more than collective actions that can be rethought through continued, direct deliberation. The institutional procedures that guide discourse also arise from deliberations and interpersonal agency [27].

Direct democracy allows narratives to proliferate. These stories, moreover, are embodied in their creations, such as institutions. As a result, direct democracy extends to the core of institutions. What this extension means is that democracy can deal with structural issues, since nothing escapes the prospect of rewriting narratives. If democracy is truly direct, and an outgrowth of collective deliberation, injustice imposed by faulty institutions can be rectified. The radical potential of democracy, however, must be recognized [32].

2 Conclusion

An array of persons is worried about the social impact of AI. Reports have circulated, for example, that many persons are concerned about losing their jobs to this technology, while others feel overworked at the workplace [28, 39, 44]. Accordingly, the democratization of AI seems to have wide support [36]. This practice is thought to promote transparency and lessen the misuse of this technology.

Some criticisms, nonetheless, have been raised about this movement to democratize AI [16]. Many of the complaints are standard and relate to the sloppiness of democracy. Other warnings pertain to what critics believe to be a lack of focus. After all, democratizing AI is a broad endeavor, and identifying points of entry may be challenging. Therefore, how can democratization proceed most effectively?

Identifying the decision domain can make this activity manageable. For example, those who are most directly affected by algorithms at a workplace could easily participate in their development and control their use. There is a long history of workplace democracy that testifies to the feasibility of this practice [24]. Highly technical matters have been democratized, along with appropriate capacity building.

Most important about this participation is to recognize the existence of local realities, particularly their narrative character. These narratives are relevant in the creation of less-disruptive AI. This technology will be truly democratized when this knowledge base guides AI development and use. Most important is not only participation but the incorporation of these narrative realities into AI. When this change occurs, this technology will have a less antagonistic relationship with users and others affected by AI.

All those who are involved in the development and use of AI must remember that this technology is not a private but a public interest. Therefore, the development and deployment of AI must be considered legitimate by the public. As Himmelreich [16] correctly points out, legitimacy is vital to support and acceptance. Democratization is thus warranted and necessary. Any drawbacks to this process pale in comparison to the advantages. Specifically, making AI transparent and accessible, not to mention relevant, is consistent with the promotion of socially responsible AI.