1 Background

Algorithmic technologies, defined as “computer-programmed procedures that transform input data into desired outputs in ways that tend to be more encompassing” [1], have gained popularity in organizations in recent decades. Many tools used in employment and hiring now automate processes and decision-making. While many researchers believe that applying algorithms can improve resource allocation and decision-making coordination [2], facilitate the transparency and efficiency of decision-making within and across organizations [3], and enhance organizational learning [4], algorithmic decision-making is subject to data and developer biases. Artificial intelligence (AI) systems used in employment contexts (e.g., recruiting, performance evaluation, and related activities) are considered “high risk” for people’s lives under the European AI Act [5]. When organizations rely on algorithmic controls rather than conventional technical or bureaucratic controls for decision-making, transparency and objectivity can be limited [6], and privacy can be compromised [7, 8].

Algorithmic control is a term used to describe automated labor management practices in the contemporary digital economy [9]. In this study, we discuss algorithmic controls that employ AI technologies to collect data on workers’ behavior, physiology, and emotions to model, predict, and modify their organizational behavior. Organizational equality can be challenged in an age when algorithmic controls are comprehensively employed in organizations. Various organizational inequalities, such as gender, class, and racial inequalities, have been studied in recent decades [10, 11]. Where algorithmic controls are employed in organizations, the literature shows that workers can be more constrained [6], confused and frustrated [12], voiceless [13], non-cooperative [1], resistant [14, 15], and discriminated against [16, 17]. However, research addressing the organizational inequalities that arise from algorithmic controls is significantly lacking [1, 18]. This may be associated with an underdeveloped literature on managing AI risks and social impacts. More research must be conducted to understand how algorithmic controls deviate from traditional controls, how they reproduce organizational inequalities, and how to mitigate these inequalities in organizations.

Literature on trustworthy AI is emerging that addresses the lawfulness, ethics, and responsibility of AI systems [19,20,21,22,23]. Developing trustworthy AI technologies contributes to the accountability [19, 20], explainability [24], and transparency [19] of the algorithm and ensures non-discrimination [21] for end users and privacy for the individuals whose data are collected [24]. Trustworthy AI also establishes requirements for each stage of AI technologies (e.g., data sanitization, robust algorithms, and anomaly monitoring) to support AI robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability [25]. While many regulatory entities have published guidance on developing trustworthy AI [24, 26], assurance of such trustworthiness may only be obtained through adequate internal controls [22] or external compliance and ethics-based audits [19, 27, 28]. Through a systematic literature review, we examine whether and how trustworthy algorithmic controls can mitigate relevant organizational inequalities.

Participatory AI, also known as inclusive and equitable AI or co-creative AI, is a field that has emerged in recent years [29]. It involves models that allow various stakeholders to participate in the design, development, and decision-making processes of AI systems [29]. Unlike traditional approaches in which AI systems are primarily developed and controlled by a small group of experts, participatory AI seeks to democratize technology by actively engaging individuals and communities who may be affected by or have valuable insights into its applications [30]. Participatory AI systems involve various participatory processes, including co-design, public consultation, citizen science initiatives, and ongoing collaboration among researchers, developers, and the community. Through these processes, participants collectively define the problem, identify data sources, create algorithmic models, establish evaluation metrics, and shape the AI system’s deployment and governance framework [29].

Participatory AI recognizes that the influence of AI is not limited to technical aspects but also includes ethical and sociocultural contexts. It can help identify potential biases AI can produce in sensitive areas such as healthcare and criminal justice [31]. By incorporating diverse and inclusive perspectives from marginalized groups and communities, domain experts, policymakers, and civil society organizations, participatory AI aims to combat bias and inequality and address the needs and values of a diverse population. This systematic review also explores the potential of using participatory AI to combat biases and inequalities in algorithmic controls.

In the following sections, this article first reviews three well-known types of controls (technical, bureaucratic, and algorithmic) to identify what makes algorithmic controls unique. Second, it explains the methodology used for the systematic literature review. Third, it summarizes what the review revealed about how algorithmic controls can promote organizational inequalities and how trustworthy characteristics and participatory AI can mitigate these inequalities. Finally, the article concludes with the major takeaways of this research.

1.1 Technical, bureaucratic, and algorithmic controls

Organizational controls are dialectical processes in which employers continuously innovate to maximize the value obtained from workers. In other words, the control process refers to any process that managers use to direct and motivate employees to meet organizational expectations in desired ways. The literature has studied two broad types of controls: normative and rational. Under normative controls, employers earn employees’ trust [32] to obtain desired work performance. In contrast, rational controls are used when employers appeal to workers’ self-interest to compel desired behavior [33]. This study focuses on one type of these so-called rational controls: algorithmic controls.

To illustrate how rational controls work, it is essential to understand that the rational model of choice assumes that human behavior has some purpose. In this model, actors enter decision situations with known objectives, which helps them determine the value of the possible consequences of an action [34]. In rational control systems, worker behavior is directed by well-designed tasks, clear objectives, and reasonable incentives.

Within rational control systems, technical controls rely on the intervention of machines or computer software to substitute for the presence of a supervisor; they have historically been employed in the physical and technological aspects of production [33]. Bureaucratic controls are based on standardized rules and procedures that guide worker behavior [35]; they reduce the time and effort supervisors spend managing their subordinates. Algorithmic controls allow workers’ data to be collected, modeled, predicted, and modified through algorithms, thereby improving, substituting for, or supplementing traditional means of organizational control (i.e., technical and bureaucratic controls) [18]. Technical, bureaucratic, and algorithmic controls are discussed in detail below. The mechanisms, procedures, benefits, adverse effects, and resistance actions of these three kinds of controls are compared and summarized in Table 1.

Table 1 Mechanisms and direction, evaluation, and discipline of worker behaviors for technical, bureaucratic, and algorithmic controls

1.1.1 Technical controls

Technical controls are exercised through physical devices that replace direct supervision [1]. Early in the twentieth century, as assembly lines were developed, employers began to set a machine-driven pace and to limit workers’ workspace [36]. Technical controls have made it possible for employers to direct workers through external devices (e.g., the pace of production can be set by a machine). To evaluate their workers, employers record the frequency and duration of work tasks, worker productivity, accuracy, and response time [37]. If workers do not cooperate or follow the directives and goals set by their employers, secondary workers are recruited to replace them [36]. Employers expect workers to perceive constant surveillance, which leads workers to police their own behavior in compliance with set directions [38]. However, employees may sabotage the machines and related equipment [39] or collectively withhold effort [40] to resist managerial objectives.

However, technical controls may have a negative impact on worker motivation. For example, technical controls can make workers experience alienation because workers are deprived of the sense that they command their own actions [41]. This is evidenced by workers’ various individual and collective actions to resist technical controls, such as sabotaging machines and related equipment, stealing supplies, loafing on the job, and developing alternative technical procedures, among other measures.

1.1.2 Bureaucratic controls

Bureaucratic controls emerged in the years following World War II. Unlike technical controls, bureaucratic controls direct employees through narratives, such as job descriptions, rules, and checklists. In the bureaucratic model, decision-making is conducted by people with both power and competency who interpret master plans; such master plans provide rules and procedures governing contingencies, performance expectations, and individual behavior for organizational decision-making [35]. Thus, the bureaucratic model applies when organizations have stable domains and the costs of developing master plans are under control.

Under bureaucratic controls, incentives and penalties are implemented to discipline employees [42]. Leaders of an organization may direct worker behavior by creating rule systems (e.g., employee handbooks) that detail how to perform specific tasks and make decisions and how member compliance with organizational directives is evaluated. On the other hand, bureaucratic controls can create a rigid, dehumanized work environment [43]. The feeling of losing freedom and autonomy may lead to resistance actions similar to those discussed for technical controls, such as worker strikes. Additional resistance actions may include cynicism, workarounds, and pro forma compliance [44].

1.1.3 Algorithmic controls

In recent decades, algorithmic controls have become a significant force in allowing employers to reconfigure employer-worker relationships within and across organizations. Algorithms are defined by Gillespie [45] as computer-programmed procedures for transforming input data into the desired output, and therefore, algorithmic controls can be understood to use these computer-programmed procedures to collect, analyze, and model workers’ behavioral data in order to predict and modify workers’ organizational behaviors.

Scholars have identified multiple economic benefits of algorithmic controls. For example, algorithmic controls, such as algorithmic recommendations in the workplace, can enable individual workers to make more accurate decisions than before. Algorithmic controls also save large amounts of human labor in organizational decision-making by removing humans partially or almost entirely from the process. For example, algorithms can scan many resumes in seconds, thus improving the efficiency of the recruiting process. Algorithmic controls also efficiently provide prompt insights and feedback. In addition, algorithmic controls can automate coordination processes, which helps maximize economic value for employers.

Team coordination, “a process that involves the use of strategies and patterns of behavior aimed to integrate actions, knowledge, and goals of interdependent members” [46], is used to achieve common goals. Automating coordination processes has been shown to provide economic efficiency [47]. Employers can also use algorithmic controls to automate organizational learning to create economic value. Studies have shown how employers have used algorithmic controls to identify and learn from user patterns across individuals and then responsively change system behavior in real time [3]. Moreover, some emerging literature suggests algorithmic controls exercised through platforms (e.g., Uber) can enhance motivation and enjoyment for gig workers through a sense of “being your own boss” and opportunities to connect with others [48]. As mundane, repetitive, and tedious jobs are automated under algorithmic controls, workers can take on higher-value work and pursue retraining and reskilling opportunities, leading to higher job satisfaction [49].

Compared to technical and bureaucratic controls, worker activities can be more constrained under algorithmic controls because of a lack of transparency in how the controls direct, evaluate, and discipline workers. That is, workers often do not fully understand how algorithms are being used to direct, evaluate, and discipline them [6]. By facilitating interactive and crowd-sourced data and procedures, these controls can also tighten managers’ power over workers and remove employees’ discretion. In the context of the public sector, Borry and Getha-Taylor [50] found that algorithmic controls resulted in elevated gender and racial inequalities with regard to job categories and job functions. For example, under algorithmic controls, women were more highly represented in administrative support and paraprofessional roles but less represented in the technician, skilled craft, and service maintenance categories. In addition, white employees were overrepresented relative to non-white employees in the public sector, particularly in the officials and administrators, professionals, and protective services categories. Employees can also find themselves under high pressure when algorithmic controls are in place because they fear being replaced by automation, which increases the risk of mental health issues [49].

As under the other categories of controls, workers resist tighter employer controls and defend their autonomy and individuality [1]. Under algorithmic controls, workers may engage in non-cooperation. However, due to the instantaneous and interactive nature of algorithms, the tactics of non-cooperation differ from those under conventional controls. First, workers resist controls by ignoring recommendations or rewards provided by the algorithms. For example, web journalists and legal professionals have been found to abandon algorithmic risk evaluations, obtain desired risk scores by manipulating the input data, and claim that the rules and computational analyses behind the algorithmic systems are problematic [51]. Second, workers engage in non-cooperation by disrupting algorithmic recording. For instance, Uber drivers may turn off their driver mode in particular areas, stay only in residential areas, and frequently log off to avoid unwanted trips [9]. Third, workers have also been shown to leverage algorithms to resist controls. For example, some Airbnb hosts attempt to figure out the characteristics or behaviors that potentially impact their ratings by studying all the accessible information (e.g., online forums, the company’s technical documentation, and competitors’ profiles and ratings) [15]. Finally, workers resist algorithmic controls by personally negotiating with clients to bypass or alter algorithmic ratings. The inflated ratings of workers (e.g., Uber drivers, Airbnb hosts) in online labor markets can be explained by such personal negotiation [14].

Overall, algorithmic controls exhibit greater efficiency than technical and bureaucratic controls. While algorithmic controls have the potential to overcome some negative impacts of traditional means of organizational control (e.g., the lack of comprehensive analyses of the factors causing poor performance), they also amplify existing negative impacts of traditional controls (e.g., worker frustration) and introduce new sources of negative impacts (e.g., loss of employee privacy). We present a summary of the mechanisms and the direction, evaluation, and discipline of worker behaviors for technical, bureaucratic, and algorithmic controls in Table 1. We also provide a comparison of the benefits, negative impacts, and workers’ resistance actions for the three types of organizational controls in Table 2.

Table 2 Benefits, negative impacts, and resistance actions for technical, bureaucratic, and algorithmic controls

2 Methods

2.1 Literature search

2.1.1 Database search

A comprehensive search strategy was employed to identify relevant articles in the field of organizational inequalities associated with algorithmic controls. An initial search was conducted on ScienceDirect, resulting in 1,094 records. The search was limited to articles that were written in English, published as scientific research articles, accessible through ScienceDirect, openly accessible on Google Scholar, or accessible through university interlibrary loans, and that addressed organizational inequalities and/or power asymmetry in the context of algorithmic controls. Filtering these records by article type, language, and subject area yielded 372 records.

2.1.2 Supplementary search

An additional search was performed using Google Scholar to augment the initial findings; it identified 35 relevant records. The inclusion criterion for this search was open accessibility, ensuring that the articles were openly accessible on Google Scholar.

2.2 Screening process

2.2.1 Title and abstract screening

A systematic screening process was implemented for the remaining 407 records, which involved reading titles, keywords, and abstracts in full. This stage led to the exclusion of 354 records, primarily based on relevance to the research topic and alignment with the inclusion criteria. The screening process aimed to identify articles addressing organizational inequalities and power asymmetry in the context of algorithmic controls.

2.2.2 Full-text review

Subsequently, the remaining records underwent a full-text review. During this phase, articles were thoroughly examined for appropriateness to the research focus. Duplicates were identified and removed, resulting in a final set of 53 articles for detailed analysis.
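For readers tracing the record counts through the stages above, the selection flow reduces to simple arithmetic. The following minimal sketch reproduces the tallies reported in this section; the variable names are ours, and the figures are only those stated in the text.

```python
# Arithmetic check of the screening flow reported above.
database_records = 1094       # initial ScienceDirect search
after_db_filters = 372        # after article-type, language, and subject filters
supplementary_records = 35    # Google Scholar supplementary search

records_screened = after_db_filters + supplementary_records          # 407
excluded_at_screening = 354                                          # titles/abstracts
entered_full_text_review = records_screened - excluded_at_screening  # 53
final_set = 53                # reported after full-text review and de-duplication

assert records_screened == 407
assert entered_full_text_review == final_set == 53
```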

2.3 Data extraction and analysis

2.3.1 Data extraction

Data were extracted from the selected articles, including information on study design, methodologies employed, key findings, and relevance to organizational inequalities associated with algorithmic controls.

2.3.2 Synthesis and analysis

A qualitative synthesis approach was adopted to analyze the extracted data. Patterns, themes, and commonalities across studies were identified to gain insights into reducing organizational inequalities in the context of algorithmic controls.

2.4 Quality assessment

A quality assessment of the included articles was conducted to evaluate the rigor and validity of the studies. This assessment considered study design, sampling methods, and reporting transparency.

2.5 Reporting

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [82] were followed to ensure transparent reporting of the systematic review process. The PRISMA flow diagram (Fig. 1) illustrates the step-by-step inclusion and exclusion of articles.

Fig. 1 The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram

2.6 Limitations

Potential limitations of this systematic review include the initial restriction to 372 records based on article type, language, and subject area; the exclusion of non-English-language articles; and the reliance on available databases. These limitations are discussed to provide a comprehensive overview of the study’s scope.

3 Results

3.1 Organizational inequality in algorithmic controls

Inequality is defined as “systematic disparities between participants in power and control over goals, resources, and outcomes” [83]. Over recent decades, various types of organizational inequalities, including gender [84,85,86], class [52], and racial inequalities [87, 88], have been studied. Organizational inequality involves all possible forms of divergence in the treatment of and opportunities for workers within the same organization. In organizations, female, lower-class, and non-white workers may be paid less, have fewer opportunities for promotion, be excluded from important meetings, receive fewer benefits, and be less likely to be hired and more likely to be fired, among other disparities, compared to male, upper-class, and white workers. From the late 1990s to the 2000s, specific organizational processes that produce inequalities were identified and studied, including job allocation [11], performance evaluation, and wage setting [10].

As algorithmic controls have become popular, they have also introduced new sources of organizational inequalities. According to Eubanks [12], poor and working-class people are targeted by many algorithmic controls across governments and corporations, leading to negative consequences. For example, more personal information is collected from the poor and working-class, and these data are less likely to be kept private or secured. Worse still, these classes are labeled by predictive models and algorithms as risky investments, which makes them vulnerable in claiming equal resources and rights. These new systems have been shown to have the most destructive effects on low-income communities and people of color. However, the growth of automated decision-making impacts the quality of democracy for us all. Eubanks [12] argued that the use of automated decision-making algorithms “shatters the social safety net, criminalizes the poor, intensifies discrimination, and compromises our deepest national values” (p. 12). In other words, the social decisions about who we are and what we want to achieve have been reframed as systems engineering problems in the age of algorithms. In addition, societal biases toward people of color may also be perpetuated through training AI-powered algorithmic controls with biased training data [89].

Recent research has analyzed organizational inequality using Kellogg et al.’s [1] “6Rs” model of algorithmic controls [18]. Kellogg et al. [1] proposed six algorithmic controls that employers use to control workers, comprising direction mechanisms (i.e., restricting and recommending), evaluation mechanisms (i.e., recording and rating), and discipline mechanisms (i.e., replacing and rewarding). Algorithmic recommending provides suggestions to workers to ensure that their decisions are made within managers’ preferences. Algorithmic restricting helps managers control the type and amount of information to which workers have access and limit workers’ behaviors in certain ways, while algorithmic recording is used to monitor, aggregate, process, and report worker behaviors. Algorithmic rating is used to identify patterns in the recorded data that are relevant to worker performance measures. Algorithmic replacing is used to automatically fire workers with low performance and replace them with qualified eligible candidates. Finally, algorithmic rewarding is used to determine and distribute professional and material rewards among workers to modify their behaviors. We summarize the research findings on organizational inequalities associated with algorithmic controls in Table 3.

Table 3 A summary of organizational inequalities associated with algorithmic controls

3.2 How trustworthy AI-powered algorithmic controls reduce organizational inequalities

Research on trustworthy AI has emerged in the past few years, which generally examines AI accountability, transparency, explainability, interpretability, fairness, safety, privacy, and security [19, 20, 22, 23, 110,111,112,113]. Research has advocated compliance and ethics-based audits [19, 27, 28] as well as impact assessments [114] for hiring and employment algorithms to ensure AI trustworthiness. Based on the review of the literature, we suggest that developing accountable, transparent, explainable, fair, and privacy-enhanced AI-powered algorithmic controls may mitigate organizational inequalities involved in algorithms.

3.2.1 Accountability

Accountability fundamentally refers to the answerability of actors for outcomes to auditors, users, regulators, and other key stakeholders [110]. In the context of AI systems, accountability is achieved when organizations keep records along the dimensions of time, information, and action to ensure that misbehavior in a protocol can be attributed to a specific party; to accomplish accountability, organizations also need to demonstrate partial information about the contents of these records to affected people or to oversight entities [20]. AI-powered algorithmic controls need to be developed with accountability while following modern accountability approaches (e.g., creating logs and audit trails that explain decision-making at each step and maintaining contact lists of the key actors involved in AI development and management) [19, 22, 113]. Organizations may require third-party audits to examine algorithmic accountability; they may also demand continuous audits during AI deployment and use [19]. As lawmakers, businesses, and the general public demonstrate increasing concern about incompetent, unethical, and misused AI-powered algorithmic controls, algorithmic audits have become increasingly demanded, whether by law or by clients’ requests. While an algorithm is in use, continued internal monitoring and governance should also be in place [22]. In addition, providing processes for workers to access and challenge algorithmic decisions is important, as workers most often have no procedures for doing so [115].
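To make this record-keeping concrete, the following minimal sketch illustrates one way an audit trail along the time, information, and action dimensions might look. It is our own hypothetical illustration, not a scheme drawn from the cited works; all class, field, and value names are assumptions.

```python
import hashlib
import json
import time
import uuid

class DecisionAuditLog:
    """Hypothetical append-only log attributing each algorithmic decision
    to a model version, an accountable owner, and a hash of its inputs."""

    def __init__(self, path="decision_audit.jsonl"):
        self.path = path

    def record(self, model_id, model_version, accountable_owner,
               inputs, output, rationale):
        entry = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),                # the "time" dimension
            "model_id": model_id,
            "model_version": model_version,
            "accountable_owner": accountable_owner,  # who answers for the outcome
            # The "information" dimension: hash the inputs so the log can
            # attribute decisions without exposing the raw worker data.
            "input_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,                        # the "action" dimension
            "rationale": rationale,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["event_id"]

# Hypothetical usage: log a screening decision so an auditor can later
# attribute it to a specific model version and accountable team.
log = DecisionAuditLog()
log.record(
    model_id="resume-screener", model_version="1.4.2",
    accountable_owner="hr-analytics-team",
    inputs={"applicant_id": "A-1001", "features": [0.3, 0.7]},
    output="advance_to_interview",
    rationale="score 0.82 above threshold 0.75",
)
```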

3.2.2 Transparency

Transparency reflects the extent to which information about an AI system is available to individuals; it is usually compromised in the context of “black box” models. Algorithmic controls rely on rules embedded in code that are inaccessible to the public or end users, which differs from previous instantiations of bureaucratic controls. For example, employers using algorithmic rewarding often keep their algorithms secret to discourage manipulation and rating inflation, which offers workers limited transparency. According to Burrell [6], algorithms can be opaque for three major reasons: intentional secrecy, required technical literacy, and machine-learning opacity. All of these factors may contribute to increased organizational inequality and delayed detection. An implemented algorithm should be accompanied by comprehensive transparency measures covering AI commissioning, model building, decision-making, and investigation [19].

In addition to algorithmic transparency, the training datasets used should also be made transparent. The training data used in algorithmic analyses can already be biased in a way that allows these algorithms to inherit the prejudices of prior decision-makers [93]. Gebru et al. [116] proposed data transparency measures, such as datasheets that include the question of whether the dataset identifies any subpopulations (e.g., by race, age, or gender). Moreover, auditability and traceability measures, such as logging mechanisms, are common solutions to transparency issues. Therefore, the system must have interfaces that enable auditing as well as policies that allow effective access to those interfaces [20].
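As an illustration of such data transparency measures, the sketch below encodes a datasheet-style record in the spirit of Gebru et al. [116]. The field names and example values are our own assumptions, not that proposal’s exact schema.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Datasheet-style metadata attached to a training dataset; fields
    follow the spirit, not the exact schema, of Gebru et al. [116]."""
    dataset_name: str
    motivation: str                    # why the dataset was created
    collection_process: str            # how and from whom data were gathered
    identifies_subpopulations: bool    # e.g., by race, age, or gender
    subpopulation_fields: list = field(default_factory=list)
    known_biases: list = field(default_factory=list)
    recommended_uses: str = ""
    prohibited_uses: str = ""

# Invented example for an imagined workplace dataset.
sheet = Datasheet(
    dataset_name="warehouse-productivity-2023",
    motivation="Forecast shift-level productivity for scheduling",
    collection_process="Scanner logs from three fulfillment centers",
    identifies_subpopulations=True,
    subpopulation_fields=["age_band", "gender"],
    known_biases=["Night shifts over-represented in the sample"],
    recommended_uses="Aggregate staffing forecasts",
    prohibited_uses="Individual-level discipline decisions",
)
# A reviewer or auditor can answer the subpopulation question directly:
print(sheet.identifies_subpopulations, sheet.subpopulation_fields)
```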

Traceability means that the outputs of a computer system can be understood through the processes of its design and development. To achieve traceability, sufficient records must be maintained during the algorithm’s development and operation so that its creation can be reproduced; this evidence should also be amenable to review [20]. Traceability measures may mitigate the organizational inequalities associated with opaque predictive algorithmic controls [18]. Therefore, building auditable and traceable algorithms, and maintaining that auditability and traceability through deployment and use, can effectively enhance algorithmic transparency, thereby contributing to inequality mitigation.

3.2.3 Explainability

Explainability means the algorithm and its output can be explained in a way that a human being can understand. Many organizational inequality issues associated with algorithmic controls result from a lack of explainability. For instance, due to the large information asymmetry between the algorithm and workers, algorithms can easily nudge workers to make decisions in the manager’s best interest [18]. Additionally, the objective function and limitations of the algorithm are often unknown to workers, which leads them to rely on their own heuristics to decide whether to accept algorithmic recommendations [18]. Workers may become suspicious and frustrated about opaque and unclear guidelines on how they are evaluated and rewarded [109].

To mitigate the explainability issues of algorithms in the workplace, the algorithmic controls that organizations acquire and use must be equipped with explainability measures that are understandable not only to technical experts but also to workers who may be non-technical. NIST [24] proposed that risk from a lack of explainability may be managed by descriptions of how models work, tailored to individual differences such as the user’s knowledge and skill level. Therefore, developers of management algorithms should consider the knowledge and skill levels of intended users when designing and implementing explainability measures. In addition, providing the training workers need to better understand algorithmic decision-making is an essential step toward mitigating workplace inequality.
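As one hypothetical illustration of an explanation tailored to non-technical workers, the sketch below ranks features by permutation importance (a generic model-agnostic technique, not a method prescribed by NIST [24]) and rephrases the top driver in plain language. The feature names and data are invented for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Invented feature names standing in for workplace metrics.
feature_names = ["tasks_per_hour", "error_rate", "tenure_months"]

# Synthetic stand-in for a worker-evaluation dataset.
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much shuffling each feature degrades accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)

# Rephrase the top driver in everyday language for the affected worker.
top_feature, top_score = ranked[0]
print(f"The largest driver of this rating is '{top_feature}' "
      f"(importance {top_score:.2f}); the remaining factors matter less.")
```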

Toreini et al. [23] also proposed using explainability technologies to enhance the explainability of AI algorithms. Explainability technologies focus on explaining and interpreting an outcome for stakeholders (including end users) in a human-understandable manner. Overall, implementing explainable algorithmic controls is one of the essential steps in mitigating the workplace inequality introduced by algorithmic controls.

3.2.4 Fairness

Fairness generally concerns freedom from bias and discrimination. NIST [24] has identified three major categories of AI bias to be considered and managed: systemic (e.g., biases existing in training datasets and in the organizational norms, practices, and processes involved in developing and using AI), computational (e.g., biases stemming from systematic errors due to non-representative samples), and human (e.g., biased perception by individuals or groups of AI system information used to support decision-making). All of these can occur in the data used to train the algorithm, in the algorithm itself, in the use of the algorithm, or in people’s interaction with the algorithm. Many social and racial inequalities can be reinforced through algorithmic restricting, recommending, recording, rating, rewarding, and replacing [12, 69, 91, 92, 101,102,103, 106,107,108].

Perhaps the most effective way to detect biases in algorithms is a fairness audit. Fortunately, fairness audit procedures have been established by governments [117] and in the literature [27, 28]. Debiasing tools and techniques are also well established in the literature to assist bias detection [21]. Fairness assessments should be conducted continuously and periodically while algorithmic controls are in use to prevent biases from developing through reinforcement learning and lack of monitoring.
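As a small illustration of one check a fairness audit might include, the sketch below computes group selection rates and the selection-rate ratio underlying the well-known “four-fifths rule.” The group labels and decision records are invented, and this single metric is not a substitute for the audit procedures cited above.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, e.g., a screening
    algorithm's decisions over some review period."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest; values
    below 0.8 are conventionally flagged for further review."""
    return min(rates.values()) / max(rates.values())

# Invented decisions: 40/100 of group_a selected vs. 25/100 of group_b.
records = ([("group_a", True)] * 40 + [("group_a", False)] * 60
           + [("group_b", True)] * 25 + [("group_b", False)] * 75)

rates = selection_rates(records)       # {'group_a': 0.4, 'group_b': 0.25}
ratio = disparate_impact_ratio(rates)  # 0.25 / 0.40 = 0.62
print(f"ratio = {ratio:.2f}; flag for review: {ratio < 0.8}")
```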

3.2.5 Privacy

Privacy refers broadly to the norms and practices that help safeguard human autonomy, identity, and dignity [24]. The use of algorithms in workplaces poses privacy challenges for organizations. For instance, data are sometimes collected outside the workplace and might not be job-related; organizations may also make inferences about workers at the individual or group level without their knowledge [100]. Therefore, organizations under algorithmic controls may be more vulnerable to the risks of a data breach, consequent legal liabilities, and non-compliance with privacy regulations [18]. To protect workers’ privacy, privacy should be addressed from the design phase onward. Many organizations follow established privacy laws and standards (e.g., the General Data Protection Regulation (GDPR)) or internally developed rules to enforce privacy measures in algorithms. Privacy serves as an important algorithmic principle in many AI audit standards and frameworks [24]. Privacy may be audited and enhanced prior to deployment and use and continue to be audited post-deployment. In addition, workers’ awareness is critical for their personal privacy [117] and can be improved through continuous organizational training.
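As one hypothetical sketch of privacy measures applied from the design phase, the code below pseudonymizes worker identifiers with a keyed hash and releases only a noise-perturbed aggregate. All names and parameters are illustrative assumptions; a real deployment would require a proper privacy analysis and compliance review.

```python
import hashlib
import hmac
import random

SECRET_SALT = b"rotate-me-and-store-in-a-vault"  # placeholder secret

def pseudonymize(worker_id: str) -> str:
    """Keyed hash so identifiers cannot be re-linked without the salt
    (basic pseudonymization, which is weaker than full anonymization)."""
    return hmac.new(SECRET_SALT, worker_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

def noisy_mean(values, scale=1.0):
    """Mean with Laplace noise in the spirit of differential privacy;
    choosing `scale` correctly requires a real privacy analysis."""
    true_mean = sum(values) / len(values)
    # The difference of two exponentials yields Laplace(0, scale) noise.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_mean + noise

# Invented weekly-hours data keyed by worker ID.
shifts = {"w-1001": 42.0, "w-1002": 38.5, "w-1003": 45.0}
pseudonymized = {pseudonymize(k): v for k, v in shifts.items()}
print(noisy_mean(list(pseudonymized.values()), scale=0.5))
```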

3.3 How participatory AI-powered algorithmic controls reduce organizational inequalities

Participatory algorithmic controls have the potential to reduce organizational inequality by involving a diverse range of stakeholders in the design, development, and decision-making processes of algorithms. By actively including individuals and communities who may be affected by algorithmic systems, participatory approaches can help mitigate biases, address power imbalances, and ensure that algorithms are developed and deployed in a fair and equitable manner. The following subsections describe several ways in which participatory algorithmic controls can contribute to reducing organizational inequality.

3.3.1 Diverse perspectives and contextual knowledge

Participatory development is an approach that prioritizes the inclusion of diverse perspectives and contextual knowledge in the development of algorithms. This methodology acknowledges that stakeholders from a range of backgrounds can provide valuable insights and experiences that are instrumental in identifying and addressing biases, blind spots, and potentially discriminatory outcomes in algorithms [23]. By including voices often excluded from decision-making processes, participatory algorithmic controls mitigate biases and discriminatory effects in areas such as hiring, management, and employment. Collaborative efforts bring together individuals from marginalized communities, end users, domain experts, and civil society organizations to ensure that a diverse range of perspectives and contextual knowledge is considered [118]. For example, participatory AI involving domain experts in algorithm design can effectively address algorithmic biases by aligning models and decision-making criteria with specific domain needs and mitigating explicit and implicit biases in algorithms [119, 120]. It also empowers marginalized communities by actively involving them in the design and decision-making processes of algorithmic systems, leading to more equitable outcomes, bridging the digital divide, addressing systemic inequalities, democratizing AI, promoting social justice, and providing a sense of ownership and control over technologies that affect them [23, 118, 121].

3.3.2 Co-design and co-creation

When it comes to algorithmic controls, participatory approaches that involve co-design and co-creation are crucial. This means that stakeholders work together to design and develop algorithms, recognizing that each person brings unique insights and experiences to the table. By involving stakeholders in defining problems, shaping models, and establishing evaluation metrics, participatory algorithmic controls promote collective problem-solving and ensure that algorithms align with shared values and aspirations [122].

Research by Dekker et al. [123] emphasized the importance of co-design in algorithmic decision-making because involving diverse stakeholders in the design process can help identify ethical concerns, assess potential impacts, and explore alternative solutions. By actively involving stakeholders through co-design and co-creation, organizations can benefit from diverse perspectives and collective intelligence, ensuring that algorithmic systems are more responsive to the needs and values of different stakeholders [122]. For example, involving workers in the co-design and co-creation of algorithms used in hiring processes can help mitigate biases and power imbalances. Workers can contribute their insights and experiences to shape algorithms that reduce discrimination, promote fairness, and consider a broader range of qualifications beyond traditional metrics. Similarly, in performance management systems, involving workers in algorithm design can ensure that the evaluation criteria are aligned with their specific job requirements, thereby providing a more accurate and equitable assessment of their performance. This participatory approach empowers workers and gives them a sense of ownership and control over the technologies that affect them. It also mitigates the potential for algorithms to perpetuate or exacerbate power imbalances and inequalities in the workplace.

3.3.3 Accountability and transparency

Participatory algorithm development also prioritizes accountability and transparency, recognizing the importance of increased visibility and comprehension of the decision-making process of the algorithm. By involving stakeholders in the development and implementation of algorithms, organizations can enhance transparency, encourage scrutiny, and address biases and unfair outcomes [124, 125]. More specifically, participatory algorithms enhance transparency by making the decision-making criteria, data sources, and algorithmic models accessible to stakeholders. This transparency enables individuals to gain insights into the decision-making process, understand the factors that influence algorithmic outcomes, and identify underlying rules, assumptions, and biases embedded in the algorithms [126]. Therefore, stakeholders can scrutinize algorithmic systems, validate their fairness and effectiveness, and detect potential biases, discriminatory effects, or unintended consequences.

3.3.4 Algorithmic education and awareness

Algorithmic education and awareness programs are essential for fostering a deeper understanding of algorithmic systems. Brisson-Boivin and McAleese [127] highlighted the significance of providing educational opportunities that extend beyond technical knowledge and encompass broader societal perspectives. Participatory approaches foster a culture of learning and collaboration, where stakeholders can contribute their unique perspectives and experiences. Involving stakeholders from diverse backgrounds allows for a knowledge exchange that bridges the gap between technical experts and non-experts. This inclusive approach facilitates stakeholders’ comprehension of the underlying principles, processes, and implications of algorithms, thereby enabling them to engage in a more informed and critical manner with algorithmic decision-making. Through education, stakeholders develop a critical awareness of the potential harms, unintended consequences, and ethical dilemmas associated with algorithmic decision-making [127, 128].

Tsamados et al. [129] emphasized the importance of empowering stakeholders to actively identify and challenge biases. By providing workshops, training sessions, and dialogues, organizations employing algorithmic controls can cultivate a shared understanding of algorithmic controls. Moreover, workers can better understand how algorithms provide a recommendation, restrict their activities, rate and reward them, and drive other related activities. Such programs empower workers to recognize and fight against biases and unfair outcomes in algorithms.

Overall, participatory algorithmic controls contribute to the reduction of organizational inequality through their focus on diversity, inclusion, transparency, accountability, and empowerment. By actively engaging stakeholders in design and decision-making, organizations can tap into a wider range of perspectives, address biases, and develop algorithmic management systems that better serve the needs and values of a diverse population of workers. Through this inclusive approach, participatory algorithmic controls promote more equitable and responsible organizational practices.

4 Conclusion

This study provides an overview of the differences among the technical, bureaucratic, and algorithmic controls employed for organizational management, the organizational inequalities potentially introduced by algorithmic controls, and approaches to mitigating such inequalities. Our findings revealed various types of inequalities in organizations that correspond to Kellogg et al.’s [1] six mechanisms of algorithmic control that employers use to control workers (i.e., restricting, recommending, recording, rating, replacing, and rewarding). To mitigate these organizational inequalities, we propose the development of trustworthy algorithmic controls and participatory algorithmic controls.

Trustworthy algorithmic controls provide ways for workers to understand how a particular decision is made, thus enhancing the transparency and accountability of algorithmic recommendation and evaluation. Through effective internal review and external assurance of the algorithm, workers’ data privacy can be ensured and protected. Because algorithms employed in the employment context are considered high risk, they must undergo conformity assessment according to applicable laws and standards (e.g., the European AI Act) to ensure accuracy, fairness, privacy, transparency, human oversight, and data governance. Future generations of algorithmic controls should be made lawful and ethical by undergoing the conformity and compliance audits required by law as well as voluntary audits and assessments that enhance trust and acceptance by workers.

Participatory development of algorithms offers a set of recommendations and implications that can promote fairness, equity, and inclusivity within organizations. By involving diverse stakeholders in algorithm design, development, and decision-making processes, organizations can address biases, enhance transparency, integrate ethical considerations, empower stakeholders through education, and continuously improve algorithms based on feedback. These practices help build trust, legitimacy, and accountability while ensuring that algorithms align with societal values and reduce organizational inequality. Ultimately, participatory algorithmic controls foster responsible and human-centric algorithmic controls that benefit workers in organizations.