Introduction

The rapid advancement of artificial intelligence (AI) and related technologies has enabled organizations to process large amounts of information and make decisions at a pace surpassing human capacity (Black and van Esch 2020). In the organizational context, we consider AI as machines, operating instead of or in collaboration with human organizational members, “performing cognitive functions that are usually associated with human minds, such as learning, interacting, and problem-solving” (Raisch and Krakowski 2021, p. 192).

Over the past few decades, collaboration between HR professionals and AI has significantly increased across human resource management (HRM) functions, including but not limited to recruitment and selection, coaching and training, performance management, and compensation. For instance, organizations use AI to support employee learning and development by personalizing the individual learning path through relevant training recommendations that account for employees’ skills, job tasks, and career plans (Nicastro 2020). In many HRM functions, HR professionals can benefit from collaborating with AI in designing streamlined and fair processes (Li et al. 2021).

This paper introduces a typology of HR–AI collaboration systems that informs which design principles may foster social acceptability of these systems among HR professionals and stakeholders. While past work has focused on structural dimensions explaining on which tasks humans and AI can collaborate to improve decision-making accuracy and economic gains (Puranam 2021), we suggest that important social dimensions determine the acceptance and, ultimately, the adoption of these systems. We discuss how organizations can integrate AI into their HRM processes for increased organizational efficiency and decision quality, and we address concerns about AI explainability, high stakes contexts, and the threat to professional identities. Unless organizations take into consideration how AI profoundly reshapes relations between employees and technology, HR–AI collaboration systems may create more resistance than value. We suggest that organizations need to carefully reflect on design principles that mitigate these critical concerns to enhance the viability of such systems. Furthermore, we examine the implications of HR–AI collaboration systems for “organizing for good”—improving organizational effectiveness and fairness while preserving the vital role of HR professionals in the design of HR–AI collaboration systems that are accepted by stakeholders.

The future of HR–AI collaboration systems

Emerging research evidence on AI application in management suggests that AI automates some tasks in the workflow rather than replacing entire roles or functions (Raisch and Krakowski 2021). With these AI applications, one approach to divide and allocate tasks between humans and AI is the sequential division of labor (Puranam 2021), which means that AI and humans make decisions that are sequentially related. For example, AI can screen and shortlist applicants, whom hiring managers later interview, and managers make the final hiring decisions. Humans may also check training data for biases and periodically monitor outcomes at key steps of the HRM process. In most HRM functions, hybrid autonomy (i.e., human and AI share the decision authority over an HRM process, Charlwood and Guenole 2022) is becoming increasingly prevalent, which has important design implications for HR–AI collaboration systems.

Our typology (see Fig. 1) depicts the degree of autonomy humans and AI will likely have in performing work tasks in the future of HR–AI collaboration systems. We focus on two dimensions of work tasks that past scholarship has found to be critical for the division of labor: routine versus non-routine and low versus high cognitive complexity (Tschang and Almirall 2021). The routine versus non-routine dimension captures whether work tasks are well defined and occur frequently or are rare due to process deviation, unclear business rules, or incomplete data. Routine tasks may be easily automated by creating explicit, programmed rules (Autor et al. 2003), while non-routine tasks tend to require human intervention because they are unique and ambiguous. Low versus high cognitive complexity relates to the degree of cognitive effort that work tasks demand. Specifically, highly cognitively complex tasks require flexibility, creativity, communication, and analytical and problem-solving skills (Autor et al. 2003).

Fig. 1 Task division between humans and AI in future HR–AI collaboration systems

Tasks at the nexus of routine and low cognitive complexity are perfect candidates for full, AI-supported automation. They occur frequently and generate high volumes of data that algorithms can learn from, and the domain space for pre-trained data is well suited to avoid major data transferability issues. In low cognitively complex tasks, AI is effective in reducing information search time and processing costs, which frees up employee time for more cognitively complex tasks and thus enhances employee productivity (Tarafdar et al. 2019). In these routine and low cognitively complex tasks, there may be little need for or value in having humans involved.

Currently, for non-routine, low cognitively complex tasks, AI predictions may be less accurate due to a lack of available training data. Yet, AI automation may progressively become self-governing, so that organizations can modularize these tasks to make them solvable by AI (Tschang and Almirall 2021). Indeed, novel AI methods such as self-supervised learning can manage more complex data and do not require humans to hand code knowledge, making them flexibly implementable in different work situations. Therefore, non-routine tasks can be transformed into routine AI tasks, particularly in narrow application domains. For instance, in performance management, Amazon has started to use AI to track employee productivity, automate warnings, and even trigger job terminations without human intervention (Lecher 2019). Although HR professionals could still override non-routine, low cognitively complex, AI-generated decisions such as job terminations, a future in which AI could perform them single-handedly is not far away.

Furthermore, we argue that in digitally transformed organizations, AI will be able to make routine, highly cognitively complex decisions. For instance, for quarterly bonus decisions in a business unit, AI algorithms can generate allocation recommendations based on large amounts of macro-economic, organizational, business unit, and individual-level performance data from previous quarters. AI can even factor in data from employee engagement and satisfaction surveys and allocate bonuses in a way that maximizes employee engagement. In such a case, allocation rules could be set by strategic decisions made upfront (e.g., a 3-year strategic vision plan), established in code, and executed by smart contracts.
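To make concrete what allocation rules "established in code" could look like, the following minimal sketch distributes a fixed bonus budget across a business unit according to weights fixed upfront by a strategic decision. The weights, scores, budget, and employee records are hypothetical placeholders rather than recommendations, and the sketch deliberately omits the predictive modelling that would generate the underlying performance and engagement data.

```python
# Minimal sketch of allocation rules "established in code": a fixed budget is
# split across a business unit according to weights set upfront by a strategic
# decision. All weights and data below are hypothetical.
from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    employee_id: str
    performance_score: float  # normalized 0-1, e.g., from prior quarters
    engagement_score: float   # normalized 0-1, e.g., from survey data

# Strategic parameters fixed upfront (illustrative values, not prescriptions).
PERFORMANCE_WEIGHT = 0.7
ENGAGEMENT_WEIGHT = 0.3

def allocate_bonuses(records, budget):
    """Split `budget` proportionally to each employee's weighted composite score."""
    scores = {
        r.employee_id: PERFORMANCE_WEIGHT * r.performance_score
        + ENGAGEMENT_WEIGHT * r.engagement_score
        for r in records
    }
    total = sum(scores.values())
    if total == 0:  # guard against an all-zero quarter: fall back to an equal split
        return {eid: budget / len(scores) for eid in scores}
    return {eid: budget * s / total for eid, s in scores.items()}

unit = [
    EmployeeRecord("E1", performance_score=0.9, engagement_score=0.6),
    EmployeeRecord("E2", performance_score=0.4, engagement_score=0.8),
    EmployeeRecord("E3", performance_score=0.7, engagement_score=0.7),
]
print(allocate_bonuses(unit, budget=30_000))
```

Executing such rules via smart contracts would, in essence, mean deploying the same logic on a platform that enforces it automatically once the strategic parameters have been signed off.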

AI may soon execute many highly cognitively complex tasks that humans used to perform tacitly and that could be reframed as solvable AI pattern-recognition problems (Tschang and Almirall 2021). However, because non-routine, highly cognitively complex decisions typically have strategic, long-term implications within organizations, human involvement may still be needed to alleviate political conflicts (e.g., HR professionals will need to negotiate with other stakeholders such as the CFO or labor law specialists) and create accountability for those decisions. Furthermore, because those tasks are idiosyncratic and ill-defined, they may require human creativity, synthesis, and sensemaking. In this regard, humans need to support AI in obtaining and classifying contextual knowledge and translating it into design parameters.
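As a summary device, the four quadrants discussed above can be written down as a simple lookup. The labels paraphrase our reading of Fig. 1 and the surrounding discussion; they are illustrative, not an implementation of any particular HR system.

```python
from typing import Literal

Routineness = Literal["routine", "non-routine"]
Complexity = Literal["low", "high"]

# Illustrative encoding of the Fig. 1 typology; the labels paraphrase the
# division of labor discussed in the text and are not prescriptive.
DIVISION_OF_LABOR: dict[tuple[str, str], str] = {
    ("routine", "low"): "full AI automation; little human involvement needed",
    ("non-routine", "low"): "AI automation with human override, trending toward autonomy",
    ("routine", "high"): "AI decisions executed within rules set upfront by humans",
    ("non-routine", "high"): "human-AI collaboration with shared decision authority",
}

def expected_division(routineness: Routineness, complexity: Complexity) -> str:
    return DIVISION_OF_LABOR[(routineness, complexity)]

print(expected_division("non-routine", "high"))
```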

Social acceptability in the new paradigm

In our hypothetical, digitally transformed organization, AI will autonomously perform tasks that were once performed jointly with humans. We posit that AI and humans will collaborate and share decision-making authority in non-routine, highly cognitively complex tasks. Yet, organizations will need to ensure the social acceptability of future HR–AI collaboration systems if they wish to reap their potential benefits. When social acceptability is low, employees may engage in individual or collective resistance against such systems (Kellogg et al. 2020). Recent scholarship suggests that the social and material dimensions of human–AI collaboration systems are embedded in a series of complex relations that evolve over time, which blends the technology with the process of organizing (Bailey et al. 2022). Consequently, anticipating how the functional role of employees and AI develops in use can inform design principles that are conducive to system adoption and long-run viability.

We posit that social acceptability can be shaped by multiple factors, such as AI explainability, high stakes contexts, and professional identities (e.g., DeStefano et al. 2022). First, AI explainability, defined as “the extent to which the internal mechanism of a model can be explained in human terms” (Shrestha et al. 2020, p. 23), will affect whether stakeholders accept the HR–AI collaboration system. The dominant AI approach in HRM relies on machine learning algorithms, which are generated automatically based on patterns learned from the data. As those algorithms become more complex and generate more accurate predictions, they also become harder to comprehend, leaving stakeholders with the impression that algorithmic decisions are made in a “black box”. Stakeholders tend to resist using AI algorithms when they do not understand the processes underlying the AI-generated decisions, especially when they disagree with those decisions (Waardenburg et al. 2022).

Second, social acceptability may vary by context. In high stakes contexts where decisions have a major impact on personal lives or careers (Tambe et al. 2019), social acceptability is likely lower, because stakeholders will scrutinize and critically analyze algorithmic decisions. This may also be true in contexts where algorithmic decisions affect the whole organization rather than a few employees.

Third, the professional identity of HR professionals may influence the extent to which they accept and implement HR–AI collaboration (Vaast and Pinsonneault 2021). Research suggests that professionals seek to deepen their autonomy and defend their professional boundaries against new practices, occupational classes, or technologies in order to enhance their status by increasing their specialization (e.g., Burrell and Fourcade 2021; Noordegraaf 2011). Furthermore, recent work has found that professional identities shape how professionals integrate knowledge claims generated by AI and whether they accept or ignore AI decisions (Lebovitz et al. 2022). Hence, HR professionals will support or reject HR–AI collaboration depending on whether they view it as enhancing or undermining their status and professional identity.

Taking into account the implications of the factors analyzed above, we next illustrate the degree of expected social acceptability for the four types of tasks in our typology of future HR–AI collaboration systems shown in Fig. 1. We predict that routine, low cognitively complex tasks automated by AI will enjoy a relatively high social acceptability. First, low complexity makes algorithmic processes more interpretable, and stakeholders can compare AI-generated decisions with past decisions based on similar parameters, as those exist in high volume. Second, these decisions are typically operational (low stakes) and impact a limited number of stakeholders, which means that scrutiny will remain low. Furthermore, HR professionals do not define themselves professionally by those mundane, non-complex tasks, and may gladly reduce red tape by outsourcing them to AI.

As tasks go from routine to non-routine, and from low to high cognitive complexity, the social acceptance of AI involvement will likely deteriorate. On the one hand, non-routine tasks offer little precedent that can inform stakeholders on the validity of current decisions (i.e., comparisons with outcomes from past similar decisions remain limited). On the other hand, detailed and clear justifications for the AI decision-making process will not be readily available, because high task complexity makes for multiple computational steps that potentially involve a large number of parameters. Hence, the model’s internal mechanisms will be less explainable to stakeholders. Furthermore, non-routine, highly cognitively complex tasks tend to be strategic in nature and occur in high stakes contexts, which makes them heavily scrutinized and prone to strong criticism. Finally, because performing these strategic tasks constitutes an important signal of HR professionals’ expertise and social status, relying on AI may undermine their decision autonomy and threaten their professional identity. Following this logic, without humans in the loop, AI involvement in non-routine, highly cognitively complex tasks will suffer from low social acceptability.

Organizing for organizational fairness

Although HR–AI collaboration systems may help reduce human biases and arbitrariness in decision-making, outsourcing important decisions to AI in HRM functions can affect stakeholders’ perceptions of procedural and distributive fairness (Tambe et al. 2019). Evidence suggests that stakeholders are particularly sensitive to algorithmic decisions in high stakes contexts. For instance, Lee (2018) found that employees perceived unfairness in and distrusted algorithmic decisions on hiring and performance evaluation, while generally accepting mundane decisions, such as those on scheduling and work assignments. When employees perceive that algorithmic decisions lack procedural and distributive fairness, they may feel that the psychological contract has been broken (i.e., they may believe that the employer failed to fulfill its promises, Rousseau 1989). This may lead to decreased organizational commitment, trust, and work effort. Despite their potentially important effect on system acceptance and viability, employees’ fairness perceptions of decisions generated by an HR–AI collaboration system in high stakes contexts have received scarce empirical attention.

We also posit that organizing for organizational fairness requires organizations to accommodate stakeholders’ preference for being involved in the system design. Research finds that employees resist centralized decision-making and imposed designs, which are typical features of formal authority structures (Narayanan et al. 2021). Therefore, we suggest that organizations involve employees in designing novel HR–AI collaboration systems to meet their preference for self-design. For instance, employees and labor unions may work with HR professionals in deciding if and under what conditions AI should issue important personnel decisions (e.g., termination notices). This could enhance employees’ understanding of how decisions are made, thus enhancing perceptions of procedural fairness in high stakes contexts.

Designing HR–AI collaboration systems for organizational fairness involves addressing novel complex challenges, such as enhancing AI explainability (Parent-Rocheleau and Parker 2022). Typically, the lack of AI explainability stems from the fact that probabilistic calculations underlying algorithmic outputs remain hidden from humans. In these cases, explaining how the algorithm works (algorithm transparency) and how the final decision is made (process transparency) can help to enhance perceptions of fairness among stakeholders. Recent work further suggests that organizations can design the choice environment for informing stakeholders by presenting probabilistic, context-based outputs rather than deterministic, decontextualized ones (Gal et al. 2020). For instance, introducing information about confidence scores of algorithmic outputs can provide some indication of certainty, which may prompt human reflection on the outputs and make them more interpretable. As such, machine learning algorithms could produce human-interpretable information on reasoning and context for the generated outputs, serving as an expert assistant. Furthermore, organizations can use algorithmic brokers (i.e., actors solving knowledge boundaries between AI developers and stakeholders) to translate the algorithmic outcomes to stakeholders for better explainability (Kellogg et al. 2020).
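A minimal sketch of what such probabilistic, human-interpretable outputs might look like is shown below. A logistic regression trained on synthetic data stands in for the HR algorithm; the feature names, the confidence threshold that routes low-confidence cases to human review, and the per-feature contribution display are all assumptions made for illustration rather than a description of any existing system.

```python
# Sketch of presenting probabilistic, context-based outputs: a logistic
# regression stands in for the HR algorithm, its predicted probability serves
# as a confidence score, low-confidence cases are routed to human review, and
# per-feature contributions give a simple human-readable rationale.
# The data, feature names, and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["tenure_years", "training_hours", "performance_rating"]
X = rng.normal(size=(200, 3))                                   # synthetic candidate data
y = (X @ np.array([0.8, 0.3, 1.2]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(x, threshold=0.7):
    """Return the probability, a human-review flag, and simple contributions."""
    proba = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = dict(zip(feature_names, model.coef_[0] * x))
    needs_human_review = max(proba, 1 - proba) < threshold      # low-confidence case
    return {"probability": round(float(proba), 2),
            "needs_human_review": needs_human_review,
            "feature_contributions": {k: round(float(v), 2) for k, v in contributions.items()}}

print(explain_decision(X[0]))
```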

HR professionals in the design of HR–AI collaboration systems

In our typology of future, digitally transformed organizations, we expect that social acceptability will be particularly low for non-routine, highly cognitively complex tasks. More importantly, HR professionals’ role is poised to become prominent in the strategic shift to HR–AI collaboration. They will be involved in designing these systems, which necessitates addressing the root issues of poor AI explainability, high stakes contexts, and threat to professional identities. HR professionals may serve as HRM system architects by applying their domain knowledge and closely collaborating with AI developers and top management, ideally acting as gatekeepers to ensure that the system leverages technological and economic gains while preserving organizational fairness.

More specifically, HR professionals will likely select and set up AI-enabled applications, mitigating potential issues introduced by AI in HRM processes. For instance, they may identify discrimination patterns in AI recruiting algorithms in the design phase and suggest ways to remove bias from the data inputs (Cowgill 2019). Algorithmic biases often result from training data that reproduce existing organizational biases in employment practices. HR professionals may also consult with managers to address their concerns about diminished managerial discretion in managing their employees. As such, HR professionals could play an important role in reaching collective consensus and thus promoting the social acceptability of the HR–AI collaboration system.
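As one concrete check HR professionals could run when reviewing an AI screening tool in the design phase, the sketch below computes group-level selection rates and flags groups whose rate falls below four-fifths (80%) of the highest group's rate, a common screening heuristic in US employment-selection guidance. The counts are hypothetical, and such a check is a starting point rather than a complete bias audit.

```python
# Minimal adverse-impact check on AI screening outcomes using the
# four-fifths (80%) rule: compare each group's selection rate with the
# highest group's rate. A coarse screening heuristic, not a full bias audit;
# the counts below are hypothetical.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected in {0, 1}."""
    totals, selected = Counter(), Counter()
    for group, sel in outcomes:
        totals[group] += 1
        selected[group] += sel
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: {"selection_rate": round(r, 2),
                "impact_ratio": round(r / best, 2),
                "flag": r / best < threshold}
            for g, r in rates.items()}

screening = [("group_a", 1)] * 40 + [("group_a", 0)] * 60 \
          + [("group_b", 1)] * 25 + [("group_b", 0)] * 75
print(adverse_impact(screening))
```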

Rapid technology advances make it easier for HR professionals to be actively involved in co-designing HR–AI collaboration systems. For instance, the latest advances in generative AI (e.g., OpenAI’s ChatGPT) can support HR professionals with little programming knowledge in generating lines of code (Davenport and Mittal 2022). Specifically, HR professionals could leverage their domain knowledge and support AI developers in creating algorithms on tasks that are tightly linked to their professional identity (e.g., non-routine, highly cognitively complex). This may help to maintain HR professionals’ status and to mitigate the threat of AI to their professional identity. Furthermore, recent empirical work has found that if respected peers participate in designing and testing the algorithms (social proofing), stakeholders are more likely to accept algorithmic outcomes (DeStefano et al. 2022).

Discussion and implications for future research

In this essay, we envision a division of labor between human HR professionals and AI along the dimensions of routine versus non-routine and low versus high cognitive complexity of tasks. We also discuss the social acceptability of these scenarios of HR–AI collaboration systems. We argue that organizations will need to carefully address concerns about AI explainability, high stakes contexts, and threat to professional identities to foster high social acceptability and the long-term viability of HR–AI collaboration systems. However, most typologies risk being overly simplistic, and ours is no exception. First, it does not fully capture the feedback loops between HR professionals and AI. While we outline how algorithms could learn from outcomes generated by humans in the real world and update themselves, our discussion does not account for how humans learn from algorithmic outputs and adapt their behavior over time. This feedback loop has significant implications not only for human proclivity to accept AI involvement but also for the outlook of future HR–AI collaboration systems. Second, our typology does not include the scenario of no division of labor between human HR professionals and AI (i.e., humans and AI do not specialize in different tasks but instead perform all tasks together, see Choudhary et al. 2023). Will human–AI ensembles be a viable solution to low social acceptance of AI involvement? And will this form of human–AI collaboration preserve the role of human professionals? These are important questions for future research to explore both within and outside the domain of HRM systems.

Organization design scholars may benefit from the relational perspective that views emerging technologies as enacted by a dynamic set of relations constituted by many functions (Bailey et al. 2022). For instance, a newly implemented HR–AI collaboration system that aims at improving recruiting processes can reveal gaps in organizational knowledge (e.g., employees' expertise areas), which may benefit other HRM processes as well. Leveraging machine learning algorithms in recruiting may progressively give rise to novel functions that were not initially intended, such as mapping organizational knowledge for enhancing talent development. Therefore, one open issue for organization design scholars is to factor in the dynamics of human experience with emerging technologies, which may extend the functions traditionally performed by humans with such technologies. Our typology hints at how the functional role of employees and AI may evolve over time and thus is a foundational step towards this dynamic understanding of human–AI relations, which can inform design principles of such collaboration systems.

Future work should also examine how the latest developments in AI technology update our typology of HR–AI collaboration systems. One example is generative AI, a technology that creates its own content without any human intervention by drawing from large language models (LLMs) to predict the likelihood of the next word in a text (Felten et al. 2023). Future versions of generative AI may be able to perform non-routine, highly cognitively complex tasks autonomously once the technology surpasses human abilities in terms of creativity, synthesis, and sensemaking. Of course, the question remains whether humans will prefer to preserve a role in these tasks even at the cost of lower performance. Future work should examine social acceptability whenever employees and generative AI collaborate on such non-routine, highly cognitively complex tasks. Will generative AI produce outputs with high explainability and hence foster social acceptability? Will LLMs exacerbate the detrimental effects of algorithms by threatening professional identities, given the human-like forms of interaction they enable? Future scholarship on organization design should study key design principles that make those novel collaboration systems acceptable among stakeholders. Organizing for good also requires scholars to gather much needed evidence on whether generative AI can benefit all employees throughout the organization, whether it has potential dehumanization effects, and how it influences perceptions of organizational justice (Budhwar et al. 2023).