1 Introduction

Artificial intelligence (AI) technologies are being used in increasingly diverse organizational practices, creating new types of human–machine configurations and playing a growing role in many aspects of contemporary organizing (Boden 2016; Seidel et al. 2018), such as managerial decision-making, design, and manufacturing (Brynjolfsson and McAfee 2014). Systems incorporating these technologies can be described as rational agents that perform tasks guided by their functionalities and input parameters, autonomously and with little or no user intervention. Thus, AI technologies constitute a new type of material agency in contemporary organizing.

Recent contributions to the literature on AI in organizational practice have expressed enthusiasm for the ways in which decisions can be made automatically, e.g., through problem-solving programs (algorithms). Management by AI, defined as governance undertaken by software algorithms to control, shape or influence a multitude of actors (see, e.g., Möhlmann and Zalmanson 2017; Möhlmann and Henfridsson 2019), is surrounded by hype, and there is a need for exploration beyond the hype to critically examine the potentials and perils surrounding AI’s use in organizational contexts. Following recent calls for understanding how the outcomes and inputs of algorithmic decisions affect individuals and organizations (Holmström 2021), we extend this view by considering not only how AI shapes the behaviour of a multitude of organizational actors, but also the co-constitution of AI and social contexts (as advocated, for example, by Lindgren and Holmström 2020). In so doing we take a decidedly context-sensitive approach (Johns 2006), aiming to extend understanding of how AI technologies shape and are shaped by actors in their specific contexts. We develop a conceptual framework recognizing four types of AI contexts. This extends and complements current theorization of AI management, which has hitherto paid little attention to variations in the embeddedness of systems and the associated implications for human behaviour. This is important because recent developments in AI systems and their uses, including their increasing assumption of managerial roles, are transforming work as we know it (Brynjolfsson and McAfee 2014; Ford 2016; Holmström 2021). Hence, we find not only enthusiasm but also high levels of anxiety about AI technologies and their effects (Susskind and Susskind 2015).

Recent technological advances have lent some support to the feasibility of realizing ‘human-level AI’ (Bostrom 2016), and some scholars have gone so far as to consider the point at which technology surpasses human intelligence (e.g., Kurzweil 2005). Thus, given the rapid diffusion of AI technologies in society, analysis of AI’s social implications is increasingly important (Gill 2020; Liberati 2020). Recent advances in AI have pushed it out of research labs and into today’s work contexts. In computer science, the algorithms applied by AI have been defined as sets of instructions that can be computed in a particular order to achieve a result or goal (Moschovakis 2001). However, social scientists have adopted a broader, relational understanding of algorithms (Orlikowski and Scott 2015; Lindgren and Holmström 2020), which includes the possibility of unforeseen outcomes arising from algorithmic instructions (Neyland 2016; Neyland and Möllers 2017).

We use the term algorithms to refer to “an emergent family of technologies that build on machine learning, computation, and statistical techniques, as well as rely on large data sets to generate responses, classifications, or dynamic predictions that resemble those of a knowledge worker” (Faraj et al. 2018: 62). As such, AI algorithms are designed to improve decision-making, often using real-time data. Today’s AI algorithms can combine data from diverse sources (sensors, digital archives and/or remote inputs), analyse the data instantly, and act on insights derived from those data (McAfee and Brynjolfsson 2017). With rapid improvements in data storage systems, processing speeds, and analytic techniques, they are beginning to demonstrate high sophistication in analysis and decision-making (Crawford 2021). AI algorithms are thus beginning to perform, reliably and accurately, an increasing array of tasks that historically have been performed by humans.
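To make this concrete, consider the following minimal sketch (our illustration; the toy data, features and the choice of a random-forest model are assumptions, not drawn from the cited works) of the pattern such algorithms follow: learn from historical records, then generate dynamic predictions on incoming data that resemble a knowledge worker’s judgements.

# A minimal sketch of the pattern described above: an algorithm trained on
# historical data that generates dynamic predictions on new inputs.
# The variable names and toy data are hypothetical illustrations.
from sklearn.ensemble import RandomForestClassifier

# Historical records: each row combines features from diverse sources
# (e.g., sensor readings, archival statistics, remote inputs).
historical_features = [[0.2, 1.0, 3], [0.9, 0.1, 7], [0.4, 0.8, 2], [0.8, 0.3, 9]]
historical_decisions = ["approve", "reject", "approve", "reject"]  # past human judgements

model = RandomForestClassifier(random_state=0)
model.fit(historical_features, historical_decisions)

# New data arriving in (near) real time is classified instantly,
# producing the kind of response a knowledge worker might give.
incoming = [[0.5, 0.7, 4]]
print(model.predict(incoming))          # e.g., ['approve']
print(model.predict_proba(incoming))    # confidence over the possible classes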

Against this backdrop, some scholars suggest that AI technologies are not likely to replace human workers but to augment their capabilities, with smart machines working alongside people (Davenport 2018). We extend this line of reasoning by focusing on ways in which AI technologies and organizational practices mutually shape each other. In this article, we focus on work contexts where there are interactions between algorithmic management and workers (Preda 2009), and seek to conceptualize the entangled nature of algorithmic and social settings. The importance of understanding complex socio-technical spaces has been acknowledged since the 1990s in both workplace studies (Heath and Luff 1992) and organization studies (Carlile 2002). However, today’s scenarios of entangled social–digital spaces pose new challenges for theorizing the relationships between algorithms and social contexts. Specifically, we seek to investigate the co-constitution of AI technologies and organizational context (Sandberg et al. 2020; Faraj and Pachidi 2021) by addressing the following research question: What is the role of AI in the constitution of work contexts in contemporary organizations? We take a first step towards answering this overarching question by developing a typology to map the complexity of the emerging terrain, thereby displaying its range and scope by critically synthesizing findings and issues from the literature. The typology serves as a heuristic device for considering the broader implications of AI in terms of transparency and algorithmic management.

2 AI management: what we do and do not know

2.1 How AI algorithms transform society

Our ambition to examine AI management is prompted by AI’s pervasive roles in today’s society. AI’s social consequences have been explored in literature on the politics of algorithms (Burrell 2016). As people’s lives are increasingly influenced by algorithms and AI, algorithms have been described as ‘black boxes’ (Hallinan and Striphas 2014; Pasquale 2015) that may be difficult (or even impossible) to understand, even for the systems’ designers. Hence, algorithms are likely to transform societal contexts in qualitatively different ways from historical advances in technology. Exploring how AI algorithms interact with people to shape societal contexts, we identify three pervasive research streams in the literature:

First, AI algorithms may transform work contexts by replacing expertise. The idea that AI’s primary effects are in automation is well established in research focused on the evolution of modern technology (Brynjolfsson and McAfee 2014). AI-powered technologies can now perform (inter alia) highly complex information retrieval, logistics, inventory, financial and translation tasks. For example, as illustrated by Riley (2018), algorithms are replacing (rather than merely augmenting) candidate screening processes in some settings. In such cases, automation involves the use of AI algorithms for ‘simple’ (not simplistic) labour, leading to displacement of workers from the tasks being automated. Beyond labour automation, advances in AI may also increase productivity (Brynjolfsson et al. 2018) and reduce biases (Riley 2018). Of course, there is not always a sharp distinction between automation and augmentation.

Second, AI algorithms may transform work contexts by augmenting expertise, i.e., by enhancing (but not replacing) human expertise. Daugherty and Wilson (2018) highlight several ways in which AI technologies can augment people’s capacities at work. They can amplify human capabilities by providing insights derived from real-time and/or archived data, facilitate interactions between people (or on behalf of people), and be embedded in robots and machines that work alongside humans, for example in manufacturing plants. Such robots and machines can work collaboratively with humans, as is now typical in automobile assembly. This view is also echoed in studies of the enhancement of service technicians’ abilities through data accessed by sensor technologies (Jonsson et al. 2008, 2018), and conceptual analyses of AI and humans living symbiotically (Jarrahi 2018) and developing ‘meta-intelligence’ (Lichtenthaler 2020). However, business models that go beyond augmentation, typically deployed by players in ‘the gig economy’ such as Uber and Lyft, are becoming increasingly common and significantly reshape work boundaries.

Third, AI algorithms may transform work contexts by shaping and being reshaped by work contexts and their boundaries. This is most prominent in the current introduction of new transformative, high-profile digital business models such as Uber, Airbnb, and Lyft. In these business models an algorithm directly assumes the roles of surrounding institutional devices, rather than ‘replacing expertise’ or performing managerial functions on behalf of the organization with some degree of human managerial interaction. This research stream focuses directly on the workers’ experience of interaction with the system itself. Because of the recent growth of this nascent empirical phenomenon, the results are still tentative and far from conclusive, but reports suggest that workers feel controlled and seek to game the system (cf. Rosenblat and Stark 2016; Lee 2018). We argue that more elaborate forms of algorithmic management will be progressively introduced across more work contexts.

In sum, with the increases in computing power and growth of advanced systems for handling vast amounts of data, society as we know it is transforming before our eyes. There is seemingly no way to stop the trend, so we are better off trying to understand the everyday impacts of algorithms that are increasingly not only replacing and augmenting expertise, but also managing our work and lives. This article is largely rooted in the third research stream shown in Table 1, and explores in detail the ways in which algorithms are transforming work contexts.

Table 1 How AI algorithms transform work contexts

2.2 Artificial intelligence in organizational contexts: a typology

The pitting of human against machine portrayed in AI literature is not new. This human/machine tension has been an integral part of the development of AI theory and practice (Dreyfus and Dreyfus 1988; Dreyfus 1999; Ensmenger 2012). For example, the development of automated chess programs capable of beating human opponents was an important testing ground for AI technology around the turn of the millennium, and humans have lost that battle. However, AI systems’ technological ‘intelligence’ does not account for their social implications, which we have only started to address. As people’s lives are increasingly shaped by algorithms and AI, social scientists, legal scholars and philosophers are beginning to critically appraise these aspects of AI (Mittelstadt et al. 2016).

2.2.1 Algorithmic management

In our typological exploration of how AI technologies shape, and are shaped by, people we recognize two key dimensions of AI systems: algorithmic management and transparency. We also recognize the importance of contextual sensitivity because “insufficient attention to context could lead to a poor understanding of how variables at one level of analysis affect those at a different level of analysis, to an underappreciation of the significance of apparently trivial context effects” (Hällgren et al. 2018: 112). Thus, it could be responsible for “one of the most vexing problems in the field: study-to-study variation in research findings” (Johns 2006: 389).

Inspired by the conversation about AI’s capability to reshape work as we know it (e.g., in decision-making and analysis) and its managerial abilities, we root our typology in the third research stream (on AI reshaping work contexts and boundaries). This is because the level of algorithmic management, i.e., the degree to which algorithms and the surrounding institutional devices that support them assume managerial functions, determines AI’s influence on fundamental organizational processes such as delegation, coordination and decision-making. For example, Möhlmann and Henfridsson (2019) found that workers engaged with online labour platforms such as Uber respond to measures of control with market-like responses (e.g., cancelling accepted rides and collective coordination) and organization-like responses (e.g., organizing strikes and encouraging others to engage in collective action). Concurring, Pignot (2021) finds that Uber drivers are glued to the algorithm (e.g., for one more fare, or the next price surge). Tied to the organization, the workers cannot offer any real resistance to the management of their situation. Similarly, Woodcock (2020) found that the food delivery service Deliveroo electronically replicates the factory panopticon of control, through illusions of control and freedom.

These studies, like most studies of algorithmic management, have critically explored the gig economy or variations thereof. Therefore, analysis of its implications (especially positive implications) beyond such industries is rare. However, algorithmic management is also seeping into other industries. Ovetz (2020) argues that algorithmic management of university learning is redefining and deskilling academic workers, who are becoming less specialized and more self-disciplined precarious “platform” workers who can labour remotely under the control of algorithmic management. Using the example of the singer, rapper and ‘media personality’ Lil Nas X going viral through the social media platform TikTok, Collie and Wilson-Barnao (2020) argue that creative work and labour is about to be redefined. For example, material may not be deleted but it can be suppressed by an algorithm, which may thus influence the outcome. More importantly, the same algorithms that may create a star may influence the outcome of an entire election (Thorson et al. 2021). The level of algorithmic management is thus a concern for us all, and there is no reason to believe it stops at platform-type solutions. Hence, we define the level of algorithmic management as the degree of data-based tracking, evaluation and decision-making (Möhlmann and Zalmanson 2017).

2.2.2 Transparency

As algorithmic decision-making is being embedded in increasingly diverse settings, there are growing calls for algorithmic transparency (Diakopoulos 2016; Pasquale 2015). Such transparency, and explanations of how algorithmic outputs are reached, may serve a number of goals, including increases in trust, effectiveness, persuasiveness, efficiency, and satisfaction (provided the algorithms’ purposes align with the interests of users and others they may affect). Thus, transparency is the second dimension of our typological framework. Transparency may be at the level of platform design and algorithmic mechanisms, or more deeply at the level of a software system’s logic. In relation to AI, transparency is not simply a matter of revealing information or hiding it from people but also making “some parts of organizational and social life knowable and governable” and keeping others hidden (Flyverbom 2016: 112). The ‘disclosure devices’ involved (Hansen and Flyverbom 2015: 872) are neither exclusively human nor entirely computational. Instead, networks of human and non-human agents set the level of transparency associated with any AI application. Sometimes transparency is impossible, as the details of an AI system may be not only protected by corporate secrecy or indecipherable to those without technical skills, but inscrutable even to its creators because of the scale and speed of its design (Crain 2018). Moreover, advanced algorithms are increasingly being at least partially designed by preceding algorithms. Araujo et al. (2020) have experimentally shown that people have mixed feelings about automated decision-making at the societal level, depending on the types of decisions that are made. Accordingly, there is a clear need for a context-specific approach.

This raises the following key questions regarding transparency: What is it that we can and should disclose about our algorithms, and to whom? Diakopoulos (2016) describes five broad categories of information that might be considered for disclosure. First, information on human involvement, i.e., the goals, purposes, and intentions of algorithms, including who has direct control over them. Second, information on the datasets that drive algorithms in various ways, including their quality (e.g., their accuracy, completeness, and uncertainty) and timeliness (since validity may change over time). Third, information on the underlying models, particularly the inputs (features or variables used, and their weightings, if any). Fourth, information on the inferences made by the algorithms, such as classifications or predictions, their accuracy and potential errors. Fifth, information on whether and when an algorithm is being employed, particularly if personalization is involved. In addition to the five categories highlighted by Diakopoulos (2016), information on ‘visibility’, in terms of elements of curated experience that have been filtered away, may be important. Ultimately, regardless of the kind of transparency considered, the end-user decides its importance and utility (cf. Burrell 2016). We conceptualize transparency as the ability to understand “how or why a particular classification has been arrived at from inputs” (Burrell 2016: 1).
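As a hedged illustration of how these categories might be operationalized (a sketch of our own, not an instrument proposed by Diakopoulos; all field contents are hypothetical), the five categories can be captured as a simple structured disclosure record:

# A sketch (our illustration) of Diakopoulos's (2016) five disclosure
# categories as a structured record an organization might publish.
from dataclasses import dataclass

@dataclass
class AlgorithmDisclosure:
    human_involvement: str   # goals, purposes, intentions; who controls the algorithm
    data: str                # quality and timeliness of the driving datasets
    model: str               # inputs, features/variables and any weightings
    inferences: str          # classifications/predictions, accuracy, potential errors
    presence: str            # whether/when the algorithm is employed, incl. personalization

# Hypothetical example for an applicant-screening algorithm.
disclosure = AlgorithmDisclosure(
    human_involvement="Screening tool operated by the HR analytics team",
    data="Application records 2018-2023; completeness audited quarterly",
    model="Gradient-boosted trees over 42 features; weights not published",
    inferences="Shortlist/no-shortlist; estimated 7% false-negative rate",
    presence="Applied to all applications; no per-applicant personalization",
)
print(disclosure)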

2.2.3 AI in context: a typology

Our typology of AI use in organizational contexts is simply based on a two-by-two matrix of low and high levels of transparency and algorithmic management. As shown in Fig. 1, the four resulting categories are called Automated AI contexts (with high algorithmic management and low transparency), Commissioned AI contexts (with high algorithmic management and high transparency), Augmented AI contexts (with low algorithmic management and high transparency), and Opaque AI contexts (with low algorithmic management and low transparency). We believe the typology may facilitate exploration of broader implications of algorithmic management, including its effects in empirical settings other than the almost fully automated gig economy. It may also facilitate evaluation of variations in results of previous and future studies, and potentially highlight aspects that warrant further attention. Most importantly, it may facilitate analysis of the entangled nature of algorithmic management and social settings, starting from the observation that their mutual shaping will be highly case-dependent. For instance, in low transparency, high algorithmic management situations, the algorithms are likely to shape their social surroundings more than vice versa. The four types are discussed in the following section.

Fig. 1 AI use in organizational contexts: a typology
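The logic of the typology can be expressed compactly. The sketch below (our illustration; reducing each dimension to a binary high/low flag is a simplification, since both dimensions vary continuously and their assessment is context-dependent) maps the two dimensions onto the four context types shown in Fig. 1:

# Our typology as a two-by-two lookup. Treating each dimension as a simple
# high/low flag is an illustrative simplification of Fig. 1.
def classify_ai_context(algorithmic_management_high: bool, transparency_high: bool) -> str:
    if algorithmic_management_high and not transparency_high:
        return "Automated AI context"
    if algorithmic_management_high and transparency_high:
        return "Commissioned AI context"
    if not algorithmic_management_high and transparency_high:
        return "Augmented AI context"
    return "Opaque AI context"

# Example: a gig-economy platform with pervasive data-based tracking,
# evaluation and decision-making, but little insight into how outputs arise.
print(classify_ai_context(algorithmic_management_high=True, transparency_high=False))
# -> Automated AI context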

2.3 AI in organizational contexts: four types

A typology is not a theory (Sutton and Staw 1995; Weick 1995), but rather a way to reduce the myriad instances of a phenomenon into a smaller number of classes whose members share key attributes, thereby facilitating analysis. The purpose of our typology of AI use in organizational contexts is to facilitate understanding of AI’s social implications and identification of future challenges for AI management by recognizing the four types that are listed in the preceding section, shown in Fig. 1, and sequentially described below. Thus, it is intended to provide an analytical tool for exploring basic assumptions, rather than a map of messy reality (Collier et al. 2012). We also provide empirical examples, chosen to illustrate themes rather than to cover the wide variation within the classes.

2.3.1 Opaque AI contexts

In opaque contexts, people make decisions using outputs of algorithms, without understanding how the algorithms generated the outputs. Thus, high degrees of trust are placed in the algorithms to produce accurate inputs for processes, routines and decisions initiated, established and made through human agency. This is the ‘simplest’ form of algorithmic management: human actors are still involved and have agency in most of the associated processes, but the low transparency may be problematic.

A case in point is the forestry industry’s current transition from the production and sale of wood products and of heavy machinery (such as harvesters) to operators, towards the provision of services associated with operating that machinery (Nylen and Holmström 2011). These services include, inter alia, maintenance, operational control, measurements of harvested material (and material that could potentially be harvested), and scheduling for both harvesting and regenerating stands. For example, measurements of harvested materials (before and after harvest) are sent to lumber mills (and/or other users such as ‘biorefineries’), where staff and other algorithms calculate optimal ways to use them (Müller et al. 2019). The users’ capacities and requirements also provide important feedback, and in many cases feedforward information. Thus, this seemingly simple process involves at least three sets of actors, all of whom have access to proprietary information but cannot act by themselves, without the involvement of others and the algorithm(s) that link their operations. Hence, they all require high levels of confidence in the algorithms.

Based on the above illustrative example, we suggest that opaque AI contexts are likely to create ‘narrow’ business solutions, with fragments of systems controlled by multiple actors, rather than one large system provider controlling all the information (Brynjolfsson et al. 2018). This will limit the impact of AI by subordinating the algorithms’ roles to those of human actors, who will strive to keep their agency in relation to the algorithm, and to each other, to avoid losing potential business advantage. We conclude that in opaque AI contexts the algorithms will receive high levels of trust that few, if any, people challenge, even though the algorithms dictate important aspects of their working lives and routines. Thus, in opaque AI contexts the working conditions are significantly shaped by the degree of trust in, and the benefits provided by, the algorithms.

2.3.2 Automated AI contexts

Whereas opaque contexts are characterized by involvement of people in managerial processes and algorithms producing inputs for decisions, automated contexts are characterized by the removal of people from both preparation of information and subsequent decision-making processes. That is, the managerial processes are automated from start to finish with as little human involvement as possible. The automated nature of the contexts has also transformed society as we know it by, for example, giving rise to the gig economy, in which workers perform tasks assigned by a system with little or no direct agency.

For instance, in Uber’s model, which has disrupted the taxi industry in many major cities (Greenwood and Wattal 2017), customers are assigned drivers (who use their own cars rather than traditional taxis) to take them from point A to point B through an app provided by the corporation. The platform uses a rating system when assigning drivers to trips and areas, some of which are naturally more lucrative than others. This distribution of trips involves no human decision-making and relies on non-transparent information provided by the system, which ‘nudges’ drivers through ‘surge pricing’ in particular areas. Hence, Uber drivers reportedly feel controlled by an algorithm that they do not understand and cannot control (Möhlmann and Zalmanson 2017; Pignot 2021). The drivers respond by trying to game the system, for example by switching between similar services, or turning off the phone and its geographical positioning. In a recent move, Uber has even started to manage its customers: a customer who consistently receives bad reviews will eventually be unable to use the services (Möhlmann and Zalmanson 2017). This is only one example of a growing trend. In a totally different sector, hospitality, the asymmetry of algorithmic information increases Airbnb’s power to influence and control the practices of hosts (Kavidas et al. 2016), which makes the hosts feel controlled (see also Woodcock 2020). Others suggest that this ‘pure’ form of algorithmic management is reshaping universities (Ovetz 2020), transforming the music industry (Collie and Wilson-Barnao 2020), and even threatening democracy (Thorson et al. 2021).
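To illustrate the kind of non-transparent ‘nudging’ described above, consider the following toy model (ours alone; Uber’s actual pricing and matching logic is proprietary and far more complex, and all weights and caps here are invented):

# A toy illustration (not Uber's actual algorithm, which is proprietary) of
# demand-based 'surge' pricing and rating-weighted driver assignment.
def surge_multiplier(open_requests: int, available_drivers: int) -> float:
    """Raise prices where demand outstrips driver supply, nudging drivers there."""
    if available_drivers == 0:
        return 3.0  # hypothetical cap on the multiplier
    ratio = open_requests / available_drivers
    return min(3.0, max(1.0, ratio))

def assign_driver(drivers):
    """Prefer nearby, highly rated drivers; the weighting here is invented."""
    return max(drivers, key=lambda d: d["rating"] - 0.1 * d["distance_km"])

drivers = [
    {"name": "A", "rating": 4.9, "distance_km": 5.0},
    {"name": "B", "rating": 4.6, "distance_km": 1.0},
]
print(surge_multiplier(open_requests=12, available_drivers=6))  # 2.0
print(assign_driver(drivers)["name"])                           # 'B'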

Based on these insights, we posit that use of algorithms in automated AI contexts can significantly reshape their settings, most powerfully through non-emotional, rational decision-making. Stripping human emotions from working contexts (and hence potentially both enhancing efficiency and eliminating bias and discrimination) may appear tempting, but it also raises risks of reducing humans to robots and removing some of our humanity. Therefore, in accordance with previous authors, such as Möhlmann and Zalmanson (2017) and Pignot (2021), we suggest that eventually there will be a counter-reaction to maintain human agency in such contexts, e.g., by gaming the system or through legal and civil rights movements. Instead of relying on the algorithms to work in their favour, workers are likely to distrust the system and find ways to manipulate it, thereby reducing the AI solution’s efficacy.

2.3.3 Augmented AI contexts

A shared characteristic of opaque and automated contexts is heavy managerial reliance on non-transparent algorithms. In augmented AI contexts the algorithms used in managerial processes are much more transparent. In addition, algorithmic management is low (as in opaque contexts, but not automated contexts), and the algorithms are believed to improve rather than replace human involvement.

An example of this category is ‘hotspot policing’, in which reported crime statistics are used in decisions regarding the allocation of police resources. Algorithmic analysis of crimes (reported by humans) identifies ‘hotspots’, for example in the form of heat maps showing frequencies of particular types of crimes at particular times and places within a city. Managers in the city’s police organization then at least partly allocate their resources in accordance with the analysis. Hotspot policing has become increasingly popular as a crime prevention strategy during the past decade, partly due to its potential to improve the efficiency of use of scarce police resources and reduce crime rates (Ratcliffe 2004). Hence, ‘predictive policing’ (such as hotspot policing) is a highly anticipated product in the Big Data era (Ridgeway 2018: 410). Another example of the use of algorithmic management in policing contexts is the collection and assessment of offenders’ modus operandi to link them to crime patterns, automatically estimate risk exposure, and facilitate prevention of the crimes and/or arrests of offenders. “Such estimations can assist law enforcement agencies when linking crimes into series and thus provide a more comprehensive understanding of offenders and targets, based on the combined knowledge and evidence collected from different crime scenes” (Boldt et al. 2018: 167). In a nutshell, by applying augmented solutions, a type of meta-intelligence that improves managerial processes can be developed (Lichtenthaler 2020). These are examples of algorithmic uses of locally sourced and applied data, which are thus reasonably transparent to anyone with local knowledge, by middle managers who may, or may not, allocate resources accordingly.
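The underlying mechanism can be as simple as binning geocoded crime reports into grid cells and ranking the cells. The sketch below is our own simplified illustration (not any deployed policing system; the grid-binning approach, cell size and coordinates are assumptions):

# A simplified sketch (ours, not a deployed policing system) of hotspot
# identification: bin geocoded crime reports into grid cells and rank them.
from collections import Counter

def hotspots(reports, cell_size=0.01, top_n=3):
    """reports: iterable of (latitude, longitude) tuples from human-filed reports."""
    cells = Counter(
        (round(lat / cell_size), round(lon / cell_size)) for lat, lon in reports
    )
    return cells.most_common(top_n)  # the most crime-dense cells, i.e., the 'hotspots'

# Hypothetical geocoded reports; managers would weigh the ranked cells
# alongside local knowledge when allocating resources.
reports = [(63.825, 20.263), (63.826, 20.264), (63.825, 20.262), (63.790, 20.300)]
for cell, count in hotspots(reports):
    print(f"cell {cell}: {count} reports")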

Based on these insights, we posit that AI solutions are perceived in such contexts as useful tools that support managerial processes by identifying patterns in datasets that are too large for humans to comprehend sufficiently to optimise responses without assistance. In augmented AI contexts, AI improves managerial processes by enabling better-informed decisions. This also suggests that AI may be more easily adopted in augmented contexts than in the other types, as it does not require trust in other actors, and involves less significant changes in working procedures as well as less interference with human agency. This leads us to conclude that most growth in AI solutions in the near future will probably be of an augmented nature.

2.3.4 Commissioned AI contexts

In contrast to opaque contexts, where algorithms provide information that humans act upon, commissioned contexts are characterized by human agency providing information that algorithms use to make decisions.

Major examples of commissioned AI contexts are provided by the emerging internet of things (IoT), which here refers to the integrated use of smart home technologies, primarily to enhance residents’ quality of life. However, businesses may also benefit from the associated automation of daily tasks, optimization of power consumption, and assistance in routine operations. As Liberati (2020) notes, IoT devices (or wearable computers, e.g., Google Glass) are generating “a new collective subject with its different collective needs and appetites by merging the living body of many subjects into one” (with mutual exchange of collective and individual experience). Although the technological foundations for IoT and smart home concepts seem to be well established and anticipation of applications is high (Papert and Pflaum 2017), they have not yet been widely adopted by consumers (Smirek et al. 2016). The IoT provides numerous examples of fairly transparent data inputs (e.g., consumption statistics) and automatic decision-making by algorithms in settings where humans could not assess the information or respond quickly enough.
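A minimal sketch of such commissioned decision-making follows (our illustration; the devices, thresholds and price figures are invented). Humans define the decision criteria in advance, while the algorithm makes the moment-to-moment decisions on streaming consumption data that arrive too quickly for human response:

# A minimal sketch (ours; devices and thresholds are invented) of commissioned
# decision-making in a smart home: humans define the criteria in advance,
# and the algorithm decides instantly on streaming consumption data.
PEAK_PRICE_PER_KWH = 0.40       # resident-defined cost threshold (hypothetical)
COMFORT_RANGE_C = (19.0, 23.0)  # resident-defined comfort band (hypothetical)

def decide(temperature_c: float, price_per_kwh: float) -> str:
    """Return an action; the human only sets the thresholds, never the decision itself."""
    if temperature_c < COMFORT_RANGE_C[0]:
        return "heat_on"           # comfort overrides cost
    if price_per_kwh > PEAK_PRICE_PER_KWH:
        return "defer_appliances"  # shift flexible loads away from price peaks
    if temperature_c > COMFORT_RANGE_C[1]:
        return "cool_on"
    return "idle"

# Streaming readings arrive faster than a resident could react to each one.
for reading in [(18.5, 0.30), (21.0, 0.55), (24.0, 0.20)]:
    print(decide(*reading))
# -> heat_on, defer_appliances, cool_on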

Based on these insights, we posit that in commissioned AI contexts algorithms will remove humans as intermediaries and make immediate decisions on their behalf. In accordance with findings by Liberati (2020), individuals will not become cyborgs in such contexts since they will retain single bodies. However, a major social implication is that human intervention will be too slow for ‘good’ decisions, and hence removed from some of the processes. In commissioned AI contexts, the managerial processes are likely to be focused on defining the decision criteria rather than making the decisions. Because of the perceived advantages offered by such use of AI, we expect commissioned AI contexts to become increasingly prevalent in both industries and society at large.

The four types of AI contexts and their implications are summarized in Table 2.

Table 2 Summary of the four types of AI contexts

In closing, these AI contexts are by no means static. For example, the AI solution described by Riley (2018), initially implemented to reduce biases in hiring by augmenting human decision-making, eventually became fully automated. Moreover, despite the interplay between automation and augmentation, the ultimate goal is generally automation (Raisch and Krakowski 2021), and hence movement towards the upper left corner of our typological matrix (automated AI contexts). Hence, we hypothesize that while AI solutions are here to stay, there will be increasing resistance to their wide implementation. This suggests that the longer full managerial automation is delayed during the implementation of AI solutions, the less resistance they will receive.

Clearly, transparency and algorithmic management are not the only dimensions of AI use in organizations, so there are massive variations within each of the classes of our typology, which must be considered in any detailed analysis of AI’s impacts on managerial processes. However, the typology clearly indicates a need to address the diversity of AI solutions and their effects in order to enhance understanding of solutions’ impacts (positive and negative) on organizations, the humans they may augment or displace, and other stakeholders. For this, we need empirical research that seriously addresses the contexts in which AI is implemented. In the next section, we offer some reflections on major aspects that should be considered.

3 AI management: three recommendations

In contrast to most considerations of AI, in this conceptual article we have advocated a context-sensitive approach to AI management to explore how algorithms interact with workers to shape societal contexts. Contexts vary widely (Gill 2020), and we posit that impacts of the use of AI will vary substantially, depending (inter alia) on the levels of algorithmic management and transparency. We thus do not offer the typology as a theory, but as a framework for further theorizing the consequences, a few of which we have detailed. The framework augments emerging literature on algorithms in work environments (e.g., Faraj et al. 2018; Orlikowski and Scott 2015) by theorizing the social implications of AI.

In accordance with Raisch and Krakowski (2021), we suggest that interactions between people and AI involve mutual shaping. Moreover, we believe that our typology provides not only a new conceptualization of AI management but also a new and useful vocabulary. This may help efforts to elucidate not only how AI shapes the behaviour of a multitude of actors, but also how these actors shape AI (and thus the evolving nature of AI in modern organizations). Even if today’s AI technology is approaching ‘human-level’ intelligence (Bostrom 2016) and rapidly improving, we suggest that the most significant changes will not come from disruptive technologies. Instead, they will come from AI slowly seeping into, and transforming, everyone’s lives. This typology may thus prove useful in meeting urgent needs, engendered by the rapid expansion of potentially disruptive AI technologies, to explain variations in approaches, implications and feelings regarding applications of AI. We identify three main recommendations (presented in this section) for organizations considering AI solutions, which are applicable in any of the types of AI contexts, although each of them may be more relevant for some types than others.

3.1 Explicitly define the purpose of organizational AI use

AI comes in various shapes and forms, and an organization hosting AI technologies must actively seek the AI solution that optimally fits the organization’s needs. Considering the context-sensitive approach to AI outlined in this paper, it is important for organizations to be vigilant and clear about their purpose for using AI. If the purpose of an AI solution is not defined in advance, the technology is unlikely to meet it.

An important step towards achieving organization-AI fit is to examine the characteristics of the organization’s decision-making. The increases in the volume and quality of data generated by AI tools often pose challenges for decision-makers by overloading them with information (Sivarajah et al. 2017). However, decision-makers today can increasingly leverage the problem-solving, reasoning, perception and communication capacities that contemporary AI offers, which extend far beyond human processing capabilities (Rzepka and Berger 2018). Use of AI technologies in decision-making can also help individuals and organizations to overcome potential biases closely tied to human decision-making behaviour (Tversky and Kahneman 1974; Riley 2018). Consequently, in augmented and commissioned AI contexts, AI technologies are becoming increasingly valuable in strategic decision-making, but the associated displacement of human agents and loss of control in opaque contexts will inhibit their acceptance.

To leverage AI’s potential utility in strategic decision-making, organizations need to embrace AI technologies for executing strategic decisions. Delegation of decision-making to AI involves managers transferring authority and thereby losing a degree of control (Bostrom 2016). Managers in organizations typically tend to be reluctant to give up control, and therefore often hesitate to delegate decision-making (Steffel et al. 2016). Thus, it is crucial to be explicit about the purpose of organizational AI use to avoid such uncertainties, and identify ways to foster its acceptance.

3.2 Define the appropriate level of transparency and algorithmic management for organizational AI use

Contemporary organizations are increasingly adopting AI technologies for knowledge work (Davenport and Kirby 2015), but there is high variation in the AI technologies available to organizations, and adaptation to specific needs is critical. AI technologies bring both benefits and challenges to organizations, as well as to the workforce that will use and/or be affected by them (Anthes 2017). The challenges are particularly associated with the performance of knowledge work by AI systems and the consequent transformation of work (Sion 2018). It is important to recognize not only that organizations can use various forms of AI technologies (such as machine learning, natural language processing, and virtual assistants) for certain types of work, but also that the appropriate configurations will strongly depend on the associated levels of transparency and algorithmic management.

There is growing interest in addressing the issues of algorithmic management and transparency to improve AI use in work settings. Algorithmic management may be involved in decision-making at levels ranging from technology-mediated decision support (e.g., Newell and Marabelli 2015) to fully automated management practices through the performance of complex tasks that were previously the responsibility of human actors (Brynjolfsson and McAfee 2014). Similarly, transparency, or the lack thereof, is a key aspect of AI and algorithms. For instance, Uber drivers have expressed deep concerns about a lack of transparency regarding how the Uber algorithm works, especially in allocating rides and calculating their earnings (Möhlmann and Zalmanson 2017). Building on recent advances in AI, algorithmic management shifts the prospects for automation to a higher level. While previous authors (e.g., Brynjolfsson and McAfee 2014) have argued that automation has traditionally focused largely on simple work tasks, today’s machine learning algorithms are increasingly adaptive and self-learning (Ananny 2016; Burrell 2016; Faraj et al. 2018).

A key element for successful deployment of AI for organizational uses is the definition of appropriate levels of transparency and algorithmic management. If AI is used to augment managerial processes, the focus should be on transparency, but if it is used to make faster decisions on behalf of humans, the focus should be on defining the decision-making rules. Finally, if AI is used in an opaque setting, the focus should be on building trust in the system, or between actors. Fully automated AI solutions will remain rare for some time, and when they emerge they will either be disruptive or have benefitted from development out of one of the other types.

3.3 Be aware of the context-dependent nature of AI

AI technologies have entered social contexts in various forms, and could potentially impact any industry (Gartner.com 2021). Research on AI technologies in other contexts (such as the home, entertainment or education) has identified important theoretical concepts and design principles for successful human interaction with automated and intelligent machines, including the provision of transparency and of shared control between humans and AI.

AI technologies are fundamentally changing the nature of work (Lu et al. 2018). While enjoying the benefits of AI-centric automation strategies, many organizations struggle to manage their knowledge and capabilities, both within and beyond their organizational boundaries, when implementing AI technologies (Davenport and Kirby 2015). Practitioners and academics have problematized the future of work, specifically regarding the knowledge and skills humans require to work together with machines (Susskind and Susskind 2015). Unlike traditional information technologies, AI algorithms can be trained to perform knowledge work previously done by humans (Faraj et al. 2018), which calls for specific sensitivity to contextual factors. Examination of the realities of AI and algorithmic management allows us to see not only how these technologies are actually working but also for whom and for whose benefit. This is particularly important in commissioned and augmented AI contexts, where the rationale for applying AI is to improve decisions; in opaque and automated AI contexts, the benefits rely on trust and on disruptive AI solutions, respectively. While the hype surrounding AI technologies and their marketing highlights broad benefits and universal gains, the context-specific consequences of AI are much more complex. To be successful, AI technologies must be integrated into existing social contexts before they can transform them, and as discussed here, this raises complex opportunities and perils that require careful consideration.

4 Conclusion

AI technologies hold great promise for addressing problems in organizational contexts. However, the potential benefits must not obscure the potential associated perils. The results of our exploration include a typology of AI use in organizational contexts, based on variations in two dimensions (transparency and algorithmic management), which extends the literature on AI management. The core of our argument is that algorithmic management is not restricted to high-profile cases such as Uber and Airbnb, but can be found in everyday technologies that we already rely upon. This conclusion has important implications for how we theorize and organize, since the risk is not that we will be taken by surprise, but that we will slowly and imperceptibly become increasingly managed by algorithms. Today, practical applications of AI can be found in the home, car, office, bank, hospital and myriad other contexts. Thus, AI technologies perform diverse tasks throughout the various contexts that we engage with, and play increasingly pervasive roles in our everyday lives. Previous researchers debated whether or not AI can be achieved (Dreyfus 1999), but AI can no longer be portrayed as the pursuit of a ‘dream’ (Ekbia 2008): it is already here to stay.

We argue that algorithms are accompanied by a set of challenges related to transparency. Specifically, Burrell (2016) sees three modes of algorithmic opacity. First, algorithms are sources of competitive advantage and, therefore, likely to be proprietary. Hence, access to codes and to the data that enable their learning will, according to Pasquale (2015), become a growing point of contention between actors, including workers, seeking to understand algorithmic action. Second, algorithms are becoming more specialized and complex, and are likely to be composed by multiple authors with different perspectives. Thus, they are becoming increasingly difficult to understand, even for their creators (Burrell 2016), who may include preceding generations of algorithms, and impossible for the workers who interact with them to comprehend. Third, algorithms become non-transparent in their application, as they often use extremely large datasets that are impossible for humans to analyse without assistance. Thus, to understand how AI systems shape and are shaped by social contexts, conventional technical exploration of algorithmic transparency is insufficient; we must also scrutinize their real-world use, and explore the challenges and opportunities that workers experience (Ananny and Crawford 2018; Burrell 2016). To address these challenges, we propose three recommendations for informed use of AI in contemporary organizations. First, be explicit about the purpose of organizational AI use. Second, define the appropriate levels of transparency and algorithmic management for organizational AI use. Third, be aware of the context-dependent nature of AI.

To facilitate exploration of the contextual dynamics of AI’s organizational uses, our two-dimensional typological matrix presents four types that we believe are interesting in themselves. We also contribute to the debate on ‘narrow’ AI applications, which are tied to a specific context with a specific and limited dataset, versus broader applications (e.g., Brynjolfsson et al. 2018). For example, we expect AI configurations with high levels of algorithmic management and transparency to be more likely to produce wide AI applications, while configurations with low levels may be likelier to produce narrow applications. These expectations are consistent with current trends observed in many contexts.