1 Introduction

The ongoing digital transformation is challenging the way in which business is conducted and value is created and captured (Vial 2019). While prior digitalization waves focused on replacing paper as the physical carrier of information, leveraging the Internet as a global communication infrastructure, and developing reactive, partly automated business processes and systems (e.g., Legner et al. 2017), the next wave will be about transforming these processes and systems into proactive autonomous systems (AS). Such systems represent complex “systems of systems” of differing maturity, quality, reliability, and performance, which may develop their own dynamics (Boardman and Sauser 2006; Maier 1999). In the information systems (IS) context, a common characteristic of AS is their reliance on large amounts of data, along with the use of advanced technologies—such as the Internet of Things, Artificial Intelligence (AI), Machine Learning, or Blockchain—that allow for gathering and processing ‘big’ data with limited, or even no, human involvement.

Today, AS can be found in various fields of application. Popular examples include driverless cars, smart cities, and smart homes, which often rely on a combination of sensors, algorithms, and self-executing code. Besides these tangible AS that link the physical world to the information world (Barrett 2006), we note a growing number of intangible AS in the form of software systems that operate either entirely in the background or at the interface with humans. Examples are intelligent chatbots, smart contracts, and recommender systems (Murray et al. 2021a; Pfeiffer et al. 2020; Rutschi and Dibbern 2020; Wang et al. 2019a, b), as well as algorithmic management and control systems, such as those used by Uber and other gig economy firms to manage their digital workforce (Cram and Wiener 2020; Möhlmann et al. 2021; Wiener et al. 2021).

Even though AS are designed, developed, and implemented in a process of socio-technical interaction, once in use, the embedded technology takes on the role of an autonomous agent (or actor) that can make decisions and perform actions independently of humans (Baird and Maruping 2021). In other words, what has been created in a socio-technical way by implementing patterns—including organizational rules, as well as social norms and values—into a technical system turns into a techno-social system once in operation, where social agents in the organizational environment respond to the technical system and where the system may self-adapt to environmental changes. Thus, agency, decision rights, and responsibility are handed over to technology agents, while the ultimate accountability and the decision rights to change these systems still reside with the governing entity owning those systems (Kellogg et al. 2020). This calls for a better understanding of AS in a broader context, where the autonomy of technical systems as agents must be analyzed in relation to human agents. In fact, changes in the autonomy of one (human or technology) agent may have consequences for the autonomy of another agent. Accordingly, the notion of “conjoined agency” between human and technology agents has been conceptualized as one way to acknowledge the new types of interdependencies that arise in the course of increasing technology autonomy (Murray et al. 2021b).

Another way to view AS is by considering their temporal dimension, as captured by the notion of sustainability, which generally refers to long-term existence. This means that, once in use, AS should be able to persist, and the technology agents embedded in these systems should be able to fulfill their function, for a longer period of time without human intervention; otherwise, they cannot be considered truly autonomous. In this sense, sustainable autonomous systems (SAS) may refer to self-learning technical systems that constantly improve themselves, such as an autonomous vehicle that, on a daily commute, keeps optimizing the route it takes. Put differently, SAS are characterized by their ability to adapt to changing circumstances and to be responsive to environmental changes. In doing so, SAS may not only optimize themselves in accordance with some predefined output criteria (e.g., quality or performance), but also with regard to their consumption of resources (e.g., an autonomous vehicle constantly improving its fuel consumption). On a larger scale, this points to another perspective on sustainability, directed towards the effects of AS use and operation. As such, sustainability may also concern the long-term economic, social, and environmental effects of using AS (Hart and Milstein 2003), commonly referred to as the “3Ps” (profit, people, and planet) of the triple bottom line (Elkington 1997). This perspective includes the effects of SAS on the efficient use of tangible resources, such as energy (e.g., smart offices), space (e.g., smart cities), food (e.g., smart fridges), or natural resources (e.g., smart agriculture), as well as their effects on intangible resources, such as the longevity of data (e.g., for auditing purposes) or human and social capital in general.
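To make this idea of continuous self-optimization concrete, consider the following minimal sketch in Python (purely illustrative; the RouteOptimizer class and its parameters are hypothetical and not drawn from the cited literature): an agent maintains running estimates of per-route fuel consumption, usually exploits the cheapest known route, and occasionally explores alternatives so that it can adapt when conditions change.

```python
import random

class RouteOptimizer:
    """Hypothetical sketch of a self-optimizing agent: an autonomous
    vehicle maintains per-route estimates of fuel consumption and
    gradually favors the cheapest route on its daily commute."""

    def __init__(self, routes, epsilon=0.1, learning_rate=0.2):
        self.estimates = {r: 0.0 for r in routes}  # estimated fuel cost per route
        self.epsilon = epsilon                     # exploration rate
        self.lr = learning_rate                    # weight of new observations

    def choose_route(self):
        # Occasionally explore an alternative route; otherwise exploit
        # the route currently believed to consume the least fuel.
        if random.random() < self.epsilon:
            return random.choice(list(self.estimates))
        return min(self.estimates, key=self.estimates.get)

    def observe(self, route, fuel_used):
        # Fold the observed consumption into the running estimate;
        # because recent observations keep being incorporated, the
        # agent adapts if traffic or weather conditions change.
        self.estimates[route] += self.lr * (fuel_used - self.estimates[route])

# Simulated daily commutes: the agent converges on the cheapest route.
agent = RouteOptimizer(["highway", "city", "scenic"])
true_cost = {"highway": 5.0, "city": 6.5, "scenic": 7.0}
for day in range(100):
    route = agent.choose_route()
    agent.observe(route, true_cost[route] + random.gauss(0, 0.3))
print(agent.estimates)  # the "highway" estimate settles near 5.0
```

The residual exploration rate is what keeps such a system responsive to environmental change: without it, the agent would lock onto whatever route happened to look best early on.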

While the debate around SAS is not new, the emergence of blockchain has fueled innovative solutions, but also concerns regarding the energy consumption of blockchains based on the so-called “proof of work” consensus mechanism (Sedlmeir et al. 2020). Ecological sustainability, however, is only one important aspect of SAS; further aspects need to be considered. For example, as unintended and unforeseen second-order or spillover effects can result from the deployment of SAS, the question arises whether we really want to rely on systems running on ‘autopilot.’ Here, critical ethical questions emerge (Tang et al. 2020), including questions of fairness regarding the decision rules according to which AS act (Dolata et al. 2021); for instance, how a driverless car should react to unforeseen circumstances affecting humans (Kirkpatrick 2015).
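The energy cost of “proof of work” follows directly from its design: participants must repeatedly try nonces until a block’s hash falls below a difficulty target, so almost all of the computation is, by construction, discarded. A minimal Python sketch of this mechanism (illustrative only, not an actual blockchain implementation):

```python
import hashlib

def mine(block_data: str, difficulty_bits: int):
    """Search for a nonce such that SHA-256(block_data:nonce) falls
    below the difficulty target. On average, 2**difficulty_bits hash
    attempts are needed; this repeated, discarded work is the source
    of proof-of-work's energy consumption."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # valid proof found
        nonce += 1

# A toy difficulty of 20 bits already requires ~1 million hash attempts
# on average; production networks demand many orders of magnitude more
# per block, which is what drives their aggregate energy footprint.
nonce, digest = mine("block #1: Alice pays Bob 5 units", difficulty_bits=20)
print(f"nonce={nonce}, hash={digest[:16]}...")
```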

In recent years, IS research has begun to pick up the concept of autonomy and to study it from different perspectives. Yet it is important to note that the concept of autonomy is by no means new to the IS field. For example, autonomy has been an inherent characteristic of intelligent software agents (Jennings et al. 1998), which have been the subject of research in various fields of application, such as supply-chain automation and improvement (Nissen and Sengupta 2006) or electronic auctions (Adomavicius et al. 2008). It is only recently, however, that the concept of autonomy has gained increasing interest with regard to the phenomena described above.

Against this backdrop, in this editorial, we seek to synthesize and integrate different autonomy concepts and to develop a framework that can serve as a basis for future research on (S)AS in various IS contexts and settings. In particular, drawing on the IS and related literatures, we first identify and review different autonomy concepts and their definitions. On this basis, we then elaborate on the relationships among those concepts and present a multi-perspective framework for studying (S)AS in a broader “systems of systems” context, along with promising directions for future research. Our framework has been inspired by the existing literature on autonomy and AS, as well as by our experiences as editors during the review process for our special issue on SAS in BISE. In total, we received 12 submissions, two of which were accepted and published in this issue.

2 Synthesizing Autonomy Concepts

The concept of autonomy has been of interest in IS research and related fields, such as management and organization sciences, for quite some time. Specifically, scholars have been interested in understanding how, why, and under what circumstances autonomy is assigned to human agents (e.g., at the individual, team, or organizational level), or designed ‘into’ technology agents, and what the consequences or outcomes of such assignments or designs are. Also, in an organizational context, granting autonomy has often been viewed as paradoxical, given that it arguably contradicts the common view of organizations as hierarchies, control systems, or complex systems characterized by interdependencies. For example, Wiedner and Mantere (2019, p. 659) have asked “how organizations divest or spin off units with the aim of establishing two or more autonomous organizational entities while simultaneously managing their continued interdependencies.” In fact, one may view the principles of control and direction as antipodes of autonomy. This inherent tradeoff is also visible in the various definitions of autonomy. On the one hand, these definitions have in common that autonomy refers to an agent’s freedom of action, choice, and decision-making without being constrained, restricted, or controlled by others (see Table 1); that is, control should reside with the autonomous entity rather than an external one. On the other hand, however, autonomy is often granted by others (e.g., in an act of delegation of decision rights and responsibilities), which may be seen as a classical principal-agent relationship (Eisenhardt 1989; Fama 1980; Jensen and Meckling 1976). In such a relationship, the principal is typically viewed as the one who controls and monitors the actions of the agent (as opposed to the agent self-controlling its actions). As such, it is of little surprise that the role of technology has traditionally been limited to an assisting function that needs to be instrumentalized in order to improve organizational outcomes, including efficiency, effectiveness, and innovation. In other words, human beings have often been assumed to keep full control over technology (i.e., its functions and outcomes), as well as its usage (Bhattacherjee 1996, 1998), even if a certain task had been fully automated.

However, it was soon recognized that technology, once in place, may create its own agency in that the rules and mechanisms embedded in a given technology can change organizations in unexpected ways (Markus 1983; Markus and Robey 1988; Orlikowski 1992). This is also nicely reflected in the concept of technology affordance, which implies that a given technology may be used in various ways that are difficult to predict (Leonardi 2011; Strong et al. 2014). In fact, one source of this variability of technology affordances lies in the autonomy of its users, that is, their freedom to use the technology in ways not fully prescribed by those who developed it. Likewise, the agency of technology is visible in the concept of drift, whereby digital technologies, once implemented and used, often enter into a process of “deviating from their planned purpose for a variety of reasons often outside anyone’s influence” (Ciborra 2000, p. 4). In a similar vein, actor network theory has emphasized the role of the technological artifact as a (non-human) actor that can take on agency and serve as a source of action (Latour 1996).

Table 1 Overview of autonomy concepts (and selected sub-concepts) and their definitions

However, if technology is viewed as a new (non-human) agent that has autonomy, then it also seems obvious that this agent cannot be considered in isolation but must be viewed and understood in relation to its surrounding agents and their autonomy. Further, autonomy as a state and with respect to a particular task or action can only be attributed to one type of agent, which means that surrounding agents must grant or accept this autonomy and may act as possible counterparts of the new autonomous agent. In contrast, extant studies tend to focus on one particular autonomy concept or perspective, such as IT tool autonomy (Seidel et al. 2019) or the question of how human designers are influenced by a design tool taking autonomous actions (Ye and Kankanhalli 2018). To the best of our knowledge, a systematic attempt to synthesize existing autonomy concepts is still lacking. Accordingly, drawing on a review of prior literature, we compiled a systematic overview of different autonomy concepts and their definitions (see Table 1), thereby explicitly distinguishing between the relevant human/social and technology agent (who?) and the relevant task and/or its properties (what?). On this conceptual basis, we elaborate on the interrelations among human, technology, and task autonomy in the following section.

3 Multi-Perspective Framework of (Sustainable) Autonomous Systems

As can be inferred from Table 1 (see above), the various concepts of autonomy differ in terms of two main attributes, namely the agent (who?) and the task and/or its properties (what?). Here, relevant agents can be described and distinguished in various ways, including the distinction between human versus non-human (i.e., technology) agents; the level of analysis (e.g., individual, team, or organizational level); and the specific role or type (e.g., a user or a designer representing a human being versus a design tool or a particular IT system representing a technology agent). Similarly, relevant tasks may be described in general terms (e.g., high-level actions or organizational practices), or at a more detailed level, with reference to a specific sub-task or tasks (e.g., task scheduling, sequencing, and timing) and/or with reference to a particular task domain (e.g., software design).

Adding to this, closer examination of the various autonomy concepts reveals that their definitions also vary in terms of autonomy properties (how?), i.e., the specific attributes or features that different authors associate with autonomy. Corresponding properties are expressed in two ways: in an inclusive way, such as having freedom or discretion over how a task is being carried out (e.g., in terms of task scheduling and/or work methods); and, often in addition, in an exclusive way (in relation to other agents), such that actions can be carried out independently of others, or without the involvement, influence, and control of others. Interestingly, it is exactly this ‘exclusion of others’ that is often questioned and in fact the subject of investigation, with the overarching question frequently being whether, in practice, any agent is able to act truly and fully autonomously (i.e., without involvement of, interdependency with, and control by others). In other words, an autonomous agent is not isolated from the rest of the world. As soon as an autonomous agent carries out a particular act, it often must interact with others, influences the actions of others, and/or produces consequences for others (e.g., the institution for which the work is being carried out or the owners of assets), so that at least the outcome, or the way in which the task is carried out, matters to an external stakeholder. Also, even if an autonomous agent replaces the work of another agent, the act of replacement may be viewed as an act of interaction in which the replaced agent’s own autonomy is likely to be affected. Moreover, the initial formation or design of an autonomous agent typically involves other agents as well.

Given the above, we argue that different agents (characterized by some form of autonomy), as well as their relations to each other and relevant tasks, represent inherent key features of any (S)AS. As such, it is important to clearly define the system boundaries and to consider possible linkages between relevant agents and their surroundings, since, by definition, the autonomy of one agent depends on its relations to other agents. This appears to be of particular importance when considering the installation of a new autonomous agent as an act of change (i.e., one that changes the way in which work has been carried out previously). More specifically, it means that ‘someone’ must define or design the new agent and that ‘someone else’ will be affected (e.g., by being replaced or by having to interact with the new agent). Corresponding linkages across autonomous agents are inherent in any conceptualization of (S)AS and hence should be considered when researching such systems.

Against this backdrop, we derived a multi-perspective research framework capturing the key building blocks of (S)AS; namely, human and technology agents (who?) having at least some level of autonomy over carrying out some kind of task(s) (what?). Here, notable tensions are likely to arise in the course of defining and attributing autonomy to particular human agents and/or technology agents, especially if they have to interact with each other, as reflected in the notion of conjoined agency (Murray et al. 2021a, b). For example, one possible tension concerns the question of who is responsible for controlling the process and outcome of relevant work tasks. Moreover, tensions related to control, as well as other tensions, may be triggered, or intensified, by specific events, such as changes in environmental conditions (e.g., a change in task regulations necessitating an adaptation of the AS), pointing to the need for considering the sustainability of AS (i.e., SAS). A graphical representation of our framework is provided in Fig. 1.

Fig. 1 Framework of (sustainable) autonomous systems
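To make the framework’s building blocks more tangible, the following sketch (in Python; a hypothetical encoding we introduce here for illustration, not part of the framework itself) expresses the who/what/how dimensions distilled from Table 1 as a simple data model:

```python
from dataclasses import dataclass, field
from enum import Enum

class AgentType(Enum):
    HUMAN = "human"            # e.g., user, designer, team, organization
    TECHNOLOGY = "technology"  # e.g., design tool, IT system, DAO protocol

@dataclass
class Agent:
    name: str
    type: AgentType
    level: str                 # who? e.g., "individual", "team", "organizational"

@dataclass
class Task:
    domain: str                # what? e.g., "software design"
    properties: list = field(default_factory=list)  # e.g., scheduling, sequencing

@dataclass
class AutonomyRelation:
    """How? An agent's autonomy over a task, expressed both inclusively
    (freedom/discretion over the task) and exclusively (independence
    from other agents)."""
    agent: Agent
    task: Task
    discretion: list           # inclusive properties, e.g., ["work methods"]
    independent_of: list       # exclusive properties: other agents excluded

# Example: a design tool autonomously scheduling design sub-tasks,
# independently of the human designer who configured it.
designer = Agent("designer", AgentType.HUMAN, "individual")
tool = Agent("design tool", AgentType.TECHNOLOGY, "individual")
task = Task("software design", ["scheduling", "sequencing"])
relation = AutonomyRelation(tool, task, ["task scheduling"], [designer])
```

The point of the encoding is simply that an autonomy relation always references other agents: the exclusive dimension cannot be stated without naming whom the agent is independent of.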

4 Framework Illustration: Decentralized Autonomous Organizations

In the following, we use the example of decentralized autonomous organizations (DAOs) to illustrate how the above-introduced (S)AS framework can be applied. DAOs have arisen as one of many socio-technical innovations attributed to the introduction of blockchain technology. In prior literature, DAOs are defined as ledger-enforced, value-creating entities that run solely on the blockchain without interference from a single source of authority or governance. All rules and incentive structures are codified in smart contracts to achieve “self-operation, self-governance, and self-evolution” (Wang et al. 2019a, b, p. 871).

Based on our (S)AS framework, we can explain DAOs and their ability to autonomously enact control and to be controlled in a recursive, dynamic process (Yeung 2019). Figure 2 illustrates the relations between human, technology, and task autonomy in a blockchain-based DAO environment, where human agents and technology agents (also referred to as “actors” in the literature on DAOs) interact in a triadic relationship between rules, DAO protocol, and practices. This triadic relationship mirrors the general “trifecta” of an IT-based regulation system that is “made up of rules, practices and IT artifacts and their relationships” (De Vaujany et al. 2018, p. 755), which in turn serves as a promising lens to instantiate our (S)AS framework in the DAO context. Generally, rules describe the regulating statements directing an active agent in a network (based on Giddens 1984). In the context of DAOs, rules define the way in which particular tasks, processes, or transactions should be carried out. In the DAO protocol (technology agent), these rules are defined by the DAO community members (human agents) in the form of chunks of code (smart contracts). Practices result from the execution of the rules inscribed in the DAO protocol; that is, rules are automatically executed whenever a set of criteria defined in the smart contract is met.

Fig. 2 SAS design and use in the context of decentralized autonomous organizations (DAOs) (based on De Vaujany et al. 2018)
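To illustrate the rule-to-practice mechanics described above, the following Python sketch (purely hypothetical; actual DAOs codify such rules as smart contracts in languages like Solidity and execute them on-chain) models rules as condition-action pairs that the protocol executes automatically whenever their criteria are met:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """A codified rule (stand-in for a smart contract): its action is
    executed automatically once its criteria are met."""
    description: str
    criteria: Callable[[dict], bool]  # condition over the DAO's state
    action: Callable[[dict], None]    # state change enacted as practice

@dataclass
class DAOProtocol:
    """Hypothetical DAO protocol (technology agent): it holds rules
    defined by community members (human agents) and turns them into
    practices without human intervention."""
    state: dict = field(default_factory=lambda: {"treasury": 100, "proposals": []})
    rules: list = field(default_factory=list)

    def add_rule(self, rule):
        self.rules.append(rule)   # SAS design: rules materialized in code

    def process(self):
        for rule in self.rules:   # SAS use: autonomous rule execution
            if rule.criteria(self.state):
                rule.action(self.state)

def majority_approved(state):
    return any(p["votes_for"] * 2 > p["votes_total"] and not p["paid"]
               for p in state["proposals"])

def fund_approved(state):
    for p in state["proposals"]:
        if p["votes_for"] * 2 > p["votes_total"] and not p["paid"]:
            p["paid"] = True
            state["treasury"] -= p["amount"]

dao = DAOProtocol()
dao.state["proposals"].append(
    {"id": 1, "votes_for": 6, "votes_total": 10, "amount": 10, "paid": False})
dao.add_rule(Rule("fund proposals approved by simple majority",
                  majority_approved, fund_approved))
dao.process()
print(dao.state["treasury"])  # 90: the rule fired autonomously
```

In this toy model, add_rule corresponds to SAS design (human agents materializing rules in code), while process corresponds to SAS use (the technology agent enacting practices without further human involvement).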

The triadic relationship between rules, DAO protocol, and practices can be understood as a triadic relationship between human, technology, and task autonomy (see Fig. 2). In other words, an IT-based regulation system can be expressed as a SAS. Here, the definition of rules and their materialization in the DAO protocol may be conceptualized as SAS design, whereby human autonomy in defining the rules directly translates into technology autonomy in executing the rules in the form of practices (i.e., task autonomy). This implies that there is an overlap between SAS design and SAS use via the intermediary role of the DAO protocol. Human agents (i.e., DAO community members) exert autonomous control of the DAO via SAS design, while technology agents (i.e., the DAO protocol) exert autonomous control of the DAO through SAS use. Taken together, this also implies that DAOs can be considered an instrument to execute control through an autonomously acting system; i.e., control through technology, not control of technology. Such blockchain-based organizations autonomously execute tasks with other autonomous agents or in an interplay with human agents in practice. Therefore, SAS in use can be regarded as techno-social systems in which technology agents, such as the DAO protocol, enact control over other (external) agents. However, such an autonomously acting technology has been designed by humans through encoding rules, such as DAO decision rights and responsibilities, in an act of control materialization in the DAO protocol.

While a SAS is meant to remain in use over a longer period of time, the practices, as well as the originally implemented rules, may not remain aligned with community members’ goals and/or contextual conditions forever. In the face of contextual changes, context-related tensions may arise (DuPont 2017). In a continuous sensemaking process between controls inserted into the DAO and rules executed by the DAO (i.e., practices), the desire to adjust rules (i.e., to implement different control instantiations within the DAO) may arise in the human agent sphere. Figure 2 illustrates these interdependencies, where human agents autonomously assess the rules and controls to be implemented in the technology agent sphere (SAS design) and where the DAO protocol autonomously executes tasks that enforce control over both human and non-human agents (SAS use).

5 Future Research Directions

As illustrated by the DAO example above, we see a strong need and a bright future for research adding to our understanding of the implications surrounding the design, development, and use of IS characterized by both autonomy (i.e., AS) and sustainability (i.e., SAS). Here, it should be noted that neither autonomy nor sustainability is a fixed end state or an ultimate goal; rather, both represent IS characteristics that need to be better understood, as they appear in various forms and degrees, change over time, and have manifold consequences for individuals, organizations, and society. In this regard, we hope that our synthesis of different autonomy concepts, as well as our multi-perspective research framework of (S)AS, will prove useful in facilitating and guiding the design of exciting future research studies on this topic.

In our original call for papers for this special issue, we listed a series of potentially relevant research questions in relation to the design and use of (S)AS along four more general themes:


Enabling Conditions, Determinants, and Goals of (S)AS Design and Use

  • What goals drive the design, development, and use of (S)AS and what potential tensions and/or paradoxes can be associated with those goals?

  • How do organizations create an effective balance between different sustainability goals?

  • Under what conditions do they prioritize certain goals at the expense of other goals?

  • How can (S)AS be designed to achieve a particular set of objectives and ‘cushion’ their inherent tensions?

  • What level of digital maturity and what dynamic capabilities are needed for the value-enhancing use of (S)AS?


Implementing (S)AS and Managing Their Use

  • How can the implementation and use of (S)AS be controlled and governed?

  • Who should oversee corresponding control and governance activities?

  • Designing and developing (S)AS from an end-to-end point of view may require novel and mindful systems engineering, evaluation, and testing approaches that go beyond traditional ones; if so, what would such approaches look like? Who would approve them?

  • What criteria or requirements would have to be met to ensure the proper functioning of (S)AS along with their seamless integration into existing structures?

  • How can interdependencies among different (S)AS be managed in due consideration of sustainability?


Outsourcing (S)AS and Governing (S)AS-Based Organizations, Platforms, and Networks

  • If the development of (S)AS were outsourced to third-party vendors, who would ensure their adaptation to environmental changes and how would this process be governed?

  • How should network and/or platform-based organizations—whose operations and business models tend to be based on the use of algorithmic systems—be governed and regulated with respect to sustainability goals?


Ethical and Societal Implications of (S)AS

  • What ethical dilemmas and issues arise from algorithmic decision-making, and how can managers, organizations, and society cope with them?

  • What are the limitations of (S)AS and how can their appropriate use be influenced by a societal discourse?

Some of these questions have already been addressed by the two research articles (Heßler et al. 2022; Jussupow et al. 2022) and the interview (Beck et al. 2022) published in this special issue. In particular, Heßler et al. (2022) contribute to a better understanding of the link between human and technology autonomy in relation to the task of decision support on digital lending platforms. Specifically, they find support for the perceived importance of user (i.e., human) autonomy and empathy in decision-making contexts characterized by self-humanization needs, as is the case on prosocial digital lending platforms (as opposed to for-profit lending platforms). They also find that when users place stronger importance on their autonomy and empathy, this is associated with higher degrees of algorithm aversion and thus a stronger preference for human-like (as opposed to machine-like) decision support. Relatedly, Jussupow et al. (2022) explore the tensions between human and technology autonomy in relation to diagnostic decision-making by studying how radiologists come to use AI systems in different ways and what role AI-based assessments play in this process when they confirm or disconfirm radiologists’ human-based assessments. Drawing on a revelatory case study of an AI system used for stroke diagnosis at a hospital, the study results show how radiologists develop distinct system usage patterns through three context-specific sensemaking processes: sensedemanding, sensegiving, and sensebreaking. Further, the authors find that radiologists’ diagnostic self-efficacy plays a crucial but different role in each of the three sensemaking processes.

Of course, despite these interesting and valuable contributions to the existing body of knowledge, many of the above-listed research questions remain open. As such, we hope that our special issue on (S)AS, including the editorial at hand, will provide researchers with some inspiration and eventually lead to a vibrant stream of research on corresponding systems in a broad range of IS contexts and settings.