1 Introduction

Recent advances in computer hardware and software have given rise to “The Second Machine Age” (Brynjolfsson and McAfee 2016), which is increasingly powered by what is commonly called artificial intelligence (AI). “Artificial General Intelligence” (Goertzel and Pennachin 2007: 1) that is comparable to or supersedes human-level intelligence will remain out of reach for quite some time, but so-called “narrow AI” (ibid: 1) has left the research labs and enjoys rapid and widespread adoption across most industries. Current AI rests on technologies like machine learning, deep neural networks, big data, the internet of things and cloud computing. As such, contemporary AI can be perceived as a general-purpose technology (Trajtenberg 2019) and has the potential to dramatically change the economy (Furman and Seamans 2019). Whilst organizations of all sizes and from all sectors have started to pursue their goals with the help of AI, limited experience and knowledge are available concerning the impact AI will have on the economy and on society in general. Since AI very quickly makes an already complex world even more complex, economic research that contributes to an understanding of the impact of AI technologies is urgently required but still scarce (Agrawal et al. 2019).

The purpose of this paper is to outline relevant economic patterns in a world with AI with the help of an analytical perspective rooted in institutional economics. It is meant to contribute to a better understanding of institutional evolution in an increasingly complex world (Rosser and Rosser 2017). To achieve this, it supports the idea that the time has come for a more “entrepreneurial economics” (Koppl et al. 2015: 22) which assumes a creative world where “the system must adapt to unforeseen and unforeseeable changes that represent new possibilities and new opportunities in the adjacent possible” (ibid: 22). Entrepreneurial economics, as it is understood here, includes a process of discovering how economic patterns change under the influence of technological innovation as well as processes of economic design to shape such patterns. In this context, it appears important to identify, observe, question and discuss economic patterns, both in the sense of patterns observable in the real world and in the sense of patterns of economic thought, to allow our shared mental models (Denzau and North 1994) to appreciate and adapt to a world with AI. The dual role of economics puts additional weight on this endeavor: the discipline not only provides theories for the explanation but also concepts for the design of a world with AI (Parkes and Wellman 2015; Wagner 2001). In what follows, evidence for the relevance of these patterns is provided and the perspective is brought to life by conducting an interdisciplinary integrative literature review (Torraco 2016). Initially, the institutional economic perspective and the key patterns and problems perceived through this theoretical lens will be laid out. Subsequently, the phenomena in question will be reviewed one by one in more detail. The original contribution of this paper is that it synthesizes interdisciplinary views on a world with AI to derive the resulting economic patterns and to make them accessible to institutional economics by establishing suitable notions and analytical frameworks. As a result, guidance is given for interdisciplinary research to further explore the economic patterns of a world with AI.

2 A world with AI from the perspective of institutional economics

One underlying idea of this contribution is that interdisciplinarity requires discipline. In this sense, the institutional economic perspective is the discipline that provides an infrastructure enabling disciplines like computer science, information science, electrical engineering, robotics, management science, organization science, law, sociology, psychology, ethics, and philosophy to contribute to an interdisciplinary and thus joint understanding of the impact of AI on the economy and society.

In general terms, institutions can be defined as “a set of formal and informal rules including their enforcement mechanisms” (Furubotn and Richter 2005: 7). Institutionalists perceive an institution either as an important variable that explains social, political and economic life, or as an outcome of social, political and economic life that itself requires explanation (Lowndes and Roberts 2013: 6–10). The economic patterns under review in this article can be understood as variables that help to explain a world with AI and, at the same time, require further exploration and explanation. Following the idea of a more entrepreneurial economics introduced above, these patterns are perceived to require economic design.

The notion of artificial intelligence causes many concerns and misunderstandings because people assign different meanings to it. It is useful here to follow Tegmark (2018), in the tradition of Goertzel and Pennachin (2007), who defines it as non-biological intelligence and distinguishes narrow intelligence, the ability to accomplish a narrow set of goals like playing chess or driving a car, from general intelligence, the ability to accomplish virtually any goal including learning. Contemporary AI is at the beginning of a dynamic journey from narrow to more general intelligence. A key technology at the heart of current AI is “machine learning”, which refers to algorithms that perform tasks without using explicit instructions and rely on patterns and inference instead (Russell and Norvig 2016). Supervised and unsupervised machine learning have to be distinguished, which from an institutionalist perspective implies different degrees of autonomy. For reasons that will be explained further below, artificially intelligent algorithms will in this paper be referred to as AI agents. The natural habitat of AI agents based on machine learning is environments with accessible digital data. These may be so-called “big data” environments, which can be defined as “the information assets characterized by such a high volume, velocity and variety to require specific technology and analytical methods for its transformation into value” (Mauro et al. 2015: 103).
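To make the distinction concrete, the following minimal Python sketch (an illustration added for this review under its own assumptions, relying on the scikit-learn library and synthetic data rather than any system from the cited literature) contrasts supervised learning, where the algorithm is given labeled examples, with unsupervised learning, where it must find structure on its own:

```python
# Minimal sketch: supervised vs. unsupervised machine learning
# (illustrative only; synthetic data, scikit-learn assumed to be installed).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data: 200 two-dimensional points drawn from 2 groups.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised learning: the algorithm receives the labels y and
# learns to reproduce them from the inputs X.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: the algorithm receives only X and must
# infer a grouping itself -- a higher degree of autonomy.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("first unsupervised cluster labels:", km.labels_[:10])
```

From an institutionalist perspective, the relevant difference is that the unsupervised variant operates without explicit human feedback on each case, which is one sense in which the degrees of autonomy differ.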

Social settings where AI agents get involved with humans have been described as human–agent collectives or HAC (Jennings et al. 2014) and result in situations where AI agents may operate autonomously and where “sometimes the humans take the lead, sometimes the computer does, and this relationship can vary dynamically” (ibid: 80). A relatively simple example is sports venues where human media managers work with AI agents that oversee sensors and learning algorithms which record the gestures and body language of the athletes as well as the applause and reactions of the audience, compare them with match scores and automatically compile the highlights of a match day into a video clip (Rundel 2019; Ohanian 2019). More generally, HAC arise within and between already existing institutions like states, markets, communities, firms, non-profit organizations, governmental and non-governmental organizations and more.

Institutionalists agree that “institutions matter” (North 1994) in that they influence human beliefs and actions and thus have an impact on social, political and economic outcomes. An initial research proposition for what follows is that some general ‘institutional matters’ that were carved out by economists in the past are of high relevance in a world with AI. But the main research proposition is that AI will give a special twist to these matters. There is no claim to completeness; inspired by existing traces of economic analysis, no more than a prelude can be offered through an initial review of the following economic patterns:

  • From homo economicus to machina economica.

  • Micro-division of labor.

  • Triangular agency relationships and next level information asymmetries.

  • New factors of production.

  • Economics of AI networks.

With AI in the game, the question arises of how this technology will impact existing institutions and what the knock-on effects on social, political and economic life will be. How will AI shape important economic patterns?

3 From homo economicus to machina economica

A fundamental pattern that the discipline of economics observes is that social outcomes arise from the interactions of utility-maximizing human individuals who make more or less rational decisions. Over the years, the underlying homo economicus model of man (Kirchgässner 2013) has been confronted with substantial criticism and more flexible perspectives have been proposed (Tomizuka 2015). But given the influence of neoclassical economics on management science and thus on decades of business and management education (Ghoshal 2005), its practical relevance appears to be more substantial than ever (Gintis and Khurana 2016). And even if some scholars and practitioners have realized that it may be time to re-invent organizations, many firms as well as organizations from other sectors, including NPOs and NGOs, are today designed to be a home for the ‘economic man’, or in other words for primarily self-interested, rational decision makers (Laloux 2014).

It is here that AI enters the stage, and it does not come as a surprise that AI agents, too, are designed to be economic actors. Homo economicus has long served as a welcome role model for AI (Huberman 1988; Russell and Norvig 2016), and it does not take much to grasp that AI actually is the better actor economicus (Wagner 2001): it behaves algorithmically rather than fuzzily, it always acts dispassionately rather than sometimes emotionally, and its reasoning is logical rather than intuitive. But whilst the new species of “machina economicus” (Parkes and Wellman 2015), or rather “machina economica”, behaves more economically than man, it too is faced with bounded rationality. Algorithms work with finite computational resources, which in practice means that they cannot achieve Turing completeness and are limited to linear bounded automata (Hopcroft et al. 2014). In complex social environments with data from human society, these limitations can, for example, lead to bias in AI decision making (Baeza-Yates 2018; Boddington 2017). In addition, narrow AI solutions are extremely bounded in that they are highly specialized on specific tasks and thus might not behave rationally beyond their dedicated domain.
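The contrast between idealized and bounded machine rationality can be sketched as follows (a stylized Python illustration under assumptions made for this review, not a model from the cited literature): an agent that maximizes expected utility over all available actions versus one that, owing to a finite computation budget, can only evaluate a sample and must satisfice within it:

```python
# Stylized sketch: 'machina economica' with and without a computation budget
# (hypothetical decision problem; illustrative assumptions only).
import random

random.seed(0)

# 1000 hypothetical actions, each with three possible payoffs
# and associated probability weights.
actions = {f"action_{i}": [(random.random(), random.uniform(-1, 1))
                           for _ in range(3)] for i in range(1000)}

def expected_utility(outcomes):
    # Probability-weighted payoff, normalized over the listed outcomes.
    total_p = sum(p for p, _ in outcomes)
    return sum(p * u for p, u in outcomes) / total_p

# Unbounded rationality: evaluate every action and pick the best.
best = max(actions, key=lambda a: expected_utility(actions[a]))

# Bounded rationality: a finite budget allows only 50 evaluations,
# so the agent satisfices within a random sample of actions.
BUDGET = 50
sample = random.sample(sorted(actions), BUDGET)
bounded_best = max(sample, key=lambda a: expected_utility(actions[a]))

print("unbounded choice:", best)
print("bounded choice:  ", bounded_best)
```

The point of the sketch is not the specific numbers but the structure: even a perfectly dispassionate maximizer chooses suboptimally once its search is resource-constrained.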

It must be encouraging for the economic discipline to observe that AI agents come closer to the economic model of man than man himself. With AI becoming part of the object of study, economic models to explain social, political and economic outcomes should work even better. And from an institutional perspective, ‘rules of the game’ derived with the help of economic theories may prove more relevant for artificial agents than for humans (Varian 1995; Parkes and Wellman 2015). However, rational choice as a building block of economic theory rests on the concepts of methodological and normative individualism (Kirsch 2004). This is where further implications need to be considered.

3.1 The rise of collective intelligence and the end of individualism as we know it

AI increases connectivity. It increases connectivity between humans, but it also changes the way people interconnect (Turkle 2017), since computers and AI have become active parts of the network, where technology creeps in between humans or, as Varian (2014) notes, “computers are in the middle of virtually every transaction”. This means that humans, when they make choices and communicate, team up with computers in general and with AI in particular (Carr 2014; Cowen 2014; Kasparov 2008). On a larger scale, human–agent collectives thus develop properties of collective intelligence (Malone and Bernstein 2015) and enable smarter institutional structures. Malone (2018) calls markets, hierarchies, and democracies that rely on people and computers thinking together “superminds”. Despite all the potential progress, from the point of view of critical economic analysis, such superminds lead to a new type of principal–agent problem, which will be further discussed below. At this stage, it is important to note that due to the emerging inseparability of man and machine (Davenport and Kirby 2016; Ito 2019), human choices can no longer be understood to take place on a standalone basis. This in turn means that methodological individualism as a building block of economic theory has to be questioned.

3.2 The interference of AI with normative individualism

In economic theory, individualism is the methodological point of departure for analysis, but at the very same time, for many scholars, it is also the normative point of reference (Buchanan 1975; Parisi 2004). Normative individualism means that “only individuals can be the ultimate point of reference of moral obligations” (von der Pfordten 2012) or, in more narrow economic terminology, the ultimate point of reference for the internalization of external effects. A second implication of having AI around as an economic actor is that it interferes with individualism as a norm. In principle, there appear to be two root causes for this interference: the properties of the digital environment and the properties of the digital entity itself. On the one hand, AI operates in a digital environment and “the merging of AI and cyberspace … will potentially lead to entities which will have the capacities of intellect personhood…without any legal attachment to physical space and thus to states. Therefore, entities possessing actual personhood will be out of reach of the legal authority of states” (Puaschunder 2018: 5). This results in a necessity to find ways of clarifying the legal status of artificial agents and digital personality (Bryson et al. 2017). On the other hand, the digital entity itself poses new challenges when it comes to moral obligations and to the internalization of external effects. AI agents become more and more autonomous, but compared to human individuals they lack an essential property that can be considered a sufficient condition to qualify as an ultimate normative point of reference: they have nothing to lose. In the context of emerging HAC, the issue is as much of practical relevance as it is a philosophical question. Driverless cars, for example, have become feasible. But in critical traffic situations with unavoidable trade-offs, “should a driverless car decide who lives or dies?” (Naughton 2015). Accountable algorithms have become a dedicated field of interest in AI (Kroll et al. 2016; Boddington 2017). This and the sheer fact that individual AI agents have no “skin in the game” (Taleb 2018) suggest that normative individualism needs to be critically reviewed as a foundation for economic and social institutions.

As an interim conclusion, it can be noted that in a world with AI the individual actor is not only an element of analysis (homo economicus) but also an element of design (machina economica). In addition, the review shows that the fundamental assumption of methodological individualism made by institutional economists may pose challenges to economic analysis and design in an AI world where human and artificial actors are closely connected. Moreover, the discussion shows that the emergence of AI agents and their distinct properties have the potential to undermine economic and institutional order rooted in normative individualism.

4 Micro-division of labor

An immediate knock-on effect of the introduction of AI agents into society is that they reinforce and accelerate an economic pattern that Adam Smith once described in his famous pin factory example (Smith 1999) and that has ever since fundamentally shaped economic development by creating unprecedented complexity (Beinhocker 2007; Hausmann et al. 2014): division of labor and specialization.

Prior to industrialization, human collectives consisted of a limited number of specialized roles. Over the last centuries, division of labor and specialization have increased considerably. For example, the number of occupations available in the USA has risen from around 300 in the year 1850 to almost 1000 today (Bureau of Labor Statistics 2019). The effects of the differentiation of the labor market and increasing economic exchange can be traced in a dramatic rise of GDP along with a differentiation of the markets for products and services. A tribal society has several hundred products at its disposal, which is less than 1% of the 70,000 products a leading superstore can offer (Beinhocker 2007; Scrapehero 2019). Human–agent collectives including AI agents unfold new dynamics of specialization and differentiation, which are illustrated by the more than 500 million products available on just one leading ecommerce platform alone (Scrapehero 2018). In this context, and in a world where new knowledge is derived from combining existing knowledge, an important specialist role for AI assistants is to ease “needle-in-the-haystack discovery problems” (Agrawal et al. 2019). But they also take over broader cognitive tasks and produce new knowledge by recombining conceptual inputs, which leads to new possibilities of exchange, division of labor and specialization (Koppl et al. 2015). AI thus increasingly and ever more autonomously operates at the heart of an economic pattern of which Adam Smith had already noted that “the invention of all those machines by which labour is so much facilitated and abridged, seem to have been originally owing to the division of labour” (Smith 1999: 109).

The persistent market economic pattern of division of labor, specialization, and differentiation is now accelerated by the “endeavor to connect not only everyone, but also everything” (Pticek et al. 2016). Humans became a minority on the Internet around a decade ago. The number of ‘occupations’ adopted by autonomous artificial agents is unknown but can be expected to be orders of magnitude higher than the number of occupations available to humans in the economy. And it is rising quickly. Agrawal et al. (2018) argue that, as AI improves, businesses must adjust their division of labor between humans and machines.

In the light of these developments, a long-standing question of economic order becomes increasingly difficult to answer: “How is this process with its far-reaching division of labor controlled in its entirety, so that everyone comes by the good on which his existence depends?” (Eucken 1950: 18).

Since software often operates “behind the scenes” (Jennings et al. 2014: 85), its rationale and actions are regularly not readily apparent to the humans involved. Also, co-operation ‘at eye level’ with artificial agents will be the exception rather than the norm, simply because a micro-division of labor amongst artificial agents leads to a degree of fragmentation that can no longer be deciphered by humans (see Table 1).

Table 1 Examples of human interaction with AI

A use case that illustrates the transformational effects of the patterns described above is a smart autonomous intersection in traffic management, where self-driving cars would no longer need traffic lights (Ratti 2015). In such a HAC, each intersection becomes an ‘invisible pin factory’ which connects everything and where humans delegate their previous roles in traffic control to a much larger number of decentralized, specialized and invisible algorithms, but where co-operation at eye level becomes a challenge (Jaffe 2015).

Due to the pattern of increasing division of labor and specialization, today’s economy has become so complex that it is no longer possible for a single individual—be it a customer, a senior manager, a specialist employee, or another stakeholder—to know exactly how a large organization creates value with its products and services. They all rely on institutional arrangements to support them in an advantageous exchange with such organizations or parts thereof. To the extent that learning algorithms autonomously continue to follow this pattern, a comparable situation will arise for humans in general. The purpose of micro-division of labor and of micro-specialization from an economic point of view is very simple: to create gains from specialization and exchange, or in other words to generate positive externalities whilst avoiding negative externalities.

According to Ashby’s law of requisite variety, only variety can absorb variety (Ashby 1956, 1958). Given the economic pattern described above, the variety of a world with AI will quickly be out of reach for human institutions. This implies that human–agent collectives will have to evolve institutional arrangements that build on AI to guide AI behavior in areas which are difficult, if not impossible, for humans to understand and access. The field of agent-based computational economics (Tesfatsion and Judd 2002; Epstein and Axtell 1996) illustrates that, at least in principle, social and economic institutions can emerge bottom-up amongst artificial agents.
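A minimal simulation can illustrate this bottom-up emergence (a generic Python sketch under simplified assumptions, not a reproduction of the models in Tesfatsion and Judd 2002 or Epstein and Axtell 1996): artificial agents repeatedly play a two-strategy coordination game with random partners and imitate more successful peers, so that a shared convention, a rudimentary institution, emerges without any central design:

```python
# Minimal agent-based sketch: a convention emerging bottom-up among
# artificial agents (simplified illustrative assumptions only).
import random

random.seed(42)
N, ROUNDS = 100, 200

# Each agent starts with a random convention, e.g. 'A' vs. 'B'.
strategies = [random.choice(["A", "B"]) for _ in range(N)]

for _ in range(ROUNDS):
    payoffs = [0] * N
    # Random pairwise interactions: coordination pays, miscoordination does not.
    for _ in range(N):
        i, j = random.sample(range(N), 2)
        if strategies[i] == strategies[j]:
            payoffs[i] += 1
            payoffs[j] += 1
    # Imitation: each agent copies a randomly observed, more successful peer.
    for i in range(N):
        j = random.randrange(N)
        if payoffs[j] > payoffs[i]:
            strategies[i] = strategies[j]

share_a = strategies.count("A") / N
print(f"share following convention A: {share_a:.0%}")  # tends to 0% or 100%
```

Although no agent is told which convention to adopt, the population typically converges on one of them, which is the sense in which an institution can be said to emerge from the micro-level.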

In summary, this section clarifies that in a world with AI a transition takes place from the economic pattern of division of (human) labor and specialization to micro-division of labor and even further specialization. The increased decentralization and fragmentation are pursued with the help of algorithms which operate on the level of tasks rather than on the level of roles. The distinctive properties of the phenomenon lead to the conclusion that suitable institutional arrangements for economic order will, at least in part, have to be evolved bottom-up from the micro-level of AI agents.

5 Triangular agency relationships and next level information asymmetries

The emergence of HAC with artificial economic agents and an accelerating pattern of micro-division of labor and specialization puts another well-known institutional challenge into the spotlight: the principal–agent problem. For a long time, organization economics (Ouchi and Barney 1986) and, more specifically, agency theory (Eisenhardt 1989) have recognized that agents do not always act in the best interest of their principal. This is possible as and when agents have more information about a situation than their principal. The principal–agent problem arises if the interests of the agent and the principal are not aligned and the agent exploits an existing information asymmetry.

As will be shown, principal–agent problems are of importance in a world with AI, but first it is necessary to review how the scope, the scale and the structure of principal–agent relationships change. At the outset, a pattern change can be observed: a typical principal–agent relationship involving AI consists not of two but of three actors (see Fig. 1). These include the human user of AI as a principal, the AI agent, and the provider of the AI agent, who is in a dual role. On the one hand, the AI provider owns the AI agent and is thus also in the role of a principal. On the other hand, the AI provider is a supplier of AI services to the user and thus in the role of an agent. Agency problems arising in hierarchies (Fig. 1a, b) were first identified by Simon (1951) and later became a cornerstone of the theory of the firm (Alchian and Demsetz 1972; Jensen and Meckling 1976). Agency problems arising in market transactions like purchasing and rental agreements (Fig. 1c) have been widely analyzed in transaction cost theory and contract theory (Akerlof 1970; Williamson 1985; Furubotn and Richter 2005).

Fig. 1 Asymmetric information in triangular agency relationships with AI. The size of the circles symbolizes the type of actor: small = software, medium = human, large = organization. The angle of the triangles symbolizes the degree of information asymmetry. Source: own representation

Triangular agency relationships of the kind that arises in a world with AI have previously been largely neglected by microeconomic analysis; an exception is the analysis of markets for temporary agency work (Mitlacher 2008). And indeed, temporary agency work comes closest to what happens when the user accesses and interacts with AI agents which are provided by a third party. Given the technologies that empower the AI agent and in the light of the increasing autonomy granted, a transition from what is today called software as a service (SaaS) to software agents as a service (SAaaS) can be detected. In this constellation, the specialized software agent regularly has more information available to itself than its principals. What is particularly tricky here is that human users may be misled by certain types of AI agents emulating interaction at eye level, especially if they bear names like Siri or Alexa. The user experiences distinct interactions with software agents adopting distinct roles, although in the back end many if not all of these agents might tap into the same computational resource of the AI provider. This points to the overall information asymmetry between the AI provider and the user. Given that AI is a general-purpose technology, AI providers operate in multiple business environments and thus combine data collected and processed by a multitude of software agents in various domains like consumer behavior, social media activities, or mobility. When a situation is reached where “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about” (Bennett 2010), next level information asymmetries are in place.

It is not only the structure of principal–agent relationships that changes but also their scope and scale. Information asymmetries between humans and AI agents are fostered by three underlying developments. First, AI is much faster than humans at accessing and processing information that is available in digital form. AI agents possess an unbeatable structural competitive advantage here. Second, users in the developed world are now almost always online and, via various applications, directly and indirectly interact with an abundance of services on a continuous basis, which is reflected in hundreds of thousands of server contacts per week (Evangelho 2019). The resulting “dataveillance” (Clarke 1988; van Dijck 2014; Degli 2014) leads to an unprecedented scope and scale of information asymmetries. Third, how AI agents process information and how they arrive at decisions is no longer easily traceable. AI is used to make predictions (Agrawal et al. 2018), but its behavior is no longer predictable. As can be seen in the literature, the inexplicability and non-transparency of decisions made by machine learning systems are becoming increasingly evident (Doshi-Velez et al. 2017; Contissa 2018).

In an AI world, next level information asymmetries in triangular agency relationships can occur in various constellations. In addition to the broad user scenario sketched above, at least employment relationships involving new forms of micromanagement (Golumbia 2015) and the exposure of citizens to authoritarian states pursuing AI strategies (Botsman 2017; Lee 2018) appear to be of relevance. Just like common information asymmetries, this next level phenomenon is a prerequisite for agency problems, which are bound to occur if the interests of the agent and the principal are not aligned and the agent exploits the advantageous position. In consumer markets, this is obviously the case, since AI providers typically offer their services to users “for free” whilst generating income from advertising. AI uses big data to predict consumer behavior. This information can be used to influence purchasing decisions. Consumers may be manipulated and induced into suboptimal purchases (O’Neil 2017). Yeung (2017) classifies the algorithmic manipulation and subtle persuasion and regulation of behavior with the help of big data as “hypernudging”. Referring to Morozov (2013), Danaher (2018) puts it more drastically when he concludes that “the AI assistant will imprison us within a certain zone of agency” (ibid: 18). At least in the very short term, this may not be against the will of the affected individuals, and Schull (2014) observes asymmetric collusion, which expresses itself in an increasing acceptance of continuous algorithmic surveillance in return for highly tailored convenience.

Whilst agency problems between human individuals and AI providers are already of high practical relevance, issues between AI agents and AI providers are still rather theoretical, since an alignment of interests can be assumed. However, the inexplicability and the lack of transparency of decisions made by AI agents (Doshi-Velez et al. 2017) represent a challenge for AI providers. And the increasing autonomy of agents, combined with underlying machine learning technologies reaching out for intellect personhood (Puaschunder 2018), suggests that the interests of AI agents and AI providers will sooner or later go separate ways. A foretaste of this is offered by the currently discussed challenges of AI bias and discrimination (Kim 2017).

At this stage, it can be noted that in an AI world the phenomenon of asymmetric information rises to a next level because the way AI agents acquire and process information systematically differs from that of humans. On this basis, the analysis comes to the conclusion that the standard agency relationship in a world with AI is a triangular one that includes the typical principal, the AI agent and the AI provider in a dual role as both agent and principal. In case of conflicting interests between the involved economic actors, this setting is bound to intensify agency problems.

6 New factors of production

Next level information asymmetries arise from data. As described above, economic interactions based on micro-division of labor and specialization generate an unprecedented superabundance of data, and in this context the previously portrayed AI agents can be perceived as natural inhabitants of a growing data sphere. The interplay of these economic patterns requires further consideration.

A factor of production is an input used in the economic process to produce goods or services. Classic economic theory distinguishes three factors of production, namely natural resources, labor and capital (Samuelson 2010). Today, data can be perceived as a factor of production (Varian 2019). This is based on data’s growing economic importance as an input for goods and services, which is, for example, indicated by the rise of data-driven tech companies and by the increasing dependence of whole sectors of the economy, such as mobility or healthcare, on data. But recognizing data as a factor of production is more than an analytical step. It represents an economic pattern shift: unlike the classic factors of production, data are not scarce. Rather, data are being generated continuously and at an increasing rate as a by-product or simply “anyways” (Hilbert 2016: 140). Data growth is fueled by increased levels of digital activity and interconnectivity by humans, embedded devices like sensors and supporting technologies like utility grids (Reinsel et al. 2018). Data have been described as the “new oil” for the economy, but whilst both need to be refined to be useful, unlike oil, the consumption of data is non-rival (Varian 2019). Using data does not diminish them. Quite the contrary: data tend to generate more data. The contemporary phenomenon of big data is typically described by increasing volume, high velocity and valence, veracity, and substantial variety (Demchenko et al. 2013).

During the production process for goods and services and as a side effect of social interactions, data are generated, identified, collected, and analyzed. This results in information which in turn can be transformed into knowledge, and learning takes place. Like land, data can be private or public. Since information goods and services that result from data are non-rival in consumption, it depends on the criterion of excludability whether such goods and services are public or private. In economics, private goods that are non-rival are known as club goods (Musgrave 1939; Buchanan 1965).

To extract value from data requires other production factors, namely labor and capital. Human labor is not very productive at generating, identifying, collecting, analyzing and learning from data. AI-based machine labor, in comparison, is. Machine labor in the sense of machine learning can be seen as the natural complement to data and as another new factor of production (Brynjolfsson 1994).

The properties of AI agents described in the previous section continuously propel machine labor to higher levels of factor productivity. Thus, it does not come as a surprise that the exploitation of data by humans at an economically relevant scale is increasingly mediated by machine learning. This phenomenon has only recently begun to catch the interest of economists (Chen and Venkatachalam 2017). In practice, humans are more often than not excluded from direct access to data as a production factor. The crowding out of humans implies that the data sphere develops into a club good with AI positioned as a gatekeeper. That this can cause distribution problems is supported by recent economic developments and the concentration of wealth (Furman and Seamans 2018). In the short run and with narrow AI, control will be concentrated in the hands of AI providers. With a longer-term view and with the emergence of more general artificial intelligence, AI agents may take control of data and of information goods.

Substantial research will be necessary to fully understand the emerging phenomena of economic and social order, but as a first step the following propositions regarding data and machine labor as new factors of production (see Fig. 2) may be derived:

  • First, the data sphere is unbounded, shows fast growth and exhibits non-rivalry of consumption. This means that in comparison to other factors and goods, overconsumption is not a problem.

  • Second, the data sphere does not only grow, it also shows low entropy. This means that once generated in digital form, data are likely to persist. This increases the likelihood of “data repurposing” (Tucker 2019: 2), which means that it is not necessarily clear how data, once generated, could be used in the future. Once created, data can in principle be repurposed indefinitely, which may lead to unforeseen negative externalities.

  • The first two propositions include the possibility of “data spillovers” (Tucker 2019: 2), which come into effect when individuals are adversely affected by the use of data that was originally created for another purpose. This also means that privacy is difficult to enforce.

  • Third, data diversity complements and further nurtures already existing “cambiodiversity” (Koppl et al. 2015: 7). This means that data diversity spurs micro-division of labor and specialization and increases the complexity of the economy.

  • Fourth, the growing pool of data is costly to access; ‘mining’ and ‘refining’ are required and, in comparison to humans, AI agents are well suited to undertake these tasks. This, in turn, results in a growing dependence on AI which reinforces the triangular agency problem described above. It also leads to a crowding out and exclusion of humans from directly accessing the data sphere.

Fig. 2 Classic and new factors of production in an AI world

In summary, the review identifies data and machine labor as distinct factors of production in a world with AI. It is shown that the characteristics of “machina economica” and the phenomenon of micro-division of labor introduced in the preceding sections support this distinction. With the help of a variety of research propositions, it is concluded that the challenge will be to evolve institutions that internalize negative externalities which stem from the use of data and to balance out distribution problems that are caused by lack of direct and unmanipulated access to data and information goods.

7 Economics of AI networks

Once data and complementing machine labor are recognized as relevant factors of production, a further economic pattern needs to be accounted for: the economics of networks.

A good or service exhibits network effects if the value to a new user from adopting it increases with the number of users who have already adopted it (Shapiro and Varian 2008; Varian 2017). And “modern, complex technologies often display increasing returns to adoption in that the more they are adopted, the more experience is gained with them, and the more they are improved” (Arthur 1994: 116). AI agents fit this description exactly. But AI agents give the economics of networks an entirely new impetus, since the well-known economic pattern of “learning by using” (Rosenberg 1999; Atkinson and Stiglitz 1969) is now automated, increasingly autonomous and thus factor-driven. The pattern change the economy is confronted with is that AI is capable of Lamarckian evolution (Dyson 1999), which in short means that whilst under conditions of natural evolution giraffes do not grow taller by stretching their necks, AI agents in a figurative sense do (ibid). And as a general-purpose technology with Lamarckian training, AI agents can operate in sales today and in purchasing, HR, or lobbying tomorrow.

It is important to note that network effects spring from positive feedback, which makes the strong grow stronger and the weak grow weaker (Shapiro and Varian 2008). In a world with AI, there are substantial economies of scale from data. These arise primarily due to direct network effects (Goldfarb and Trefler 2018), also called demand-side economies of scale (Varian 2017): AI agents that can process more data generate more accurate results, which increases demand for their services, which in turn further improves their quality. This pattern leads to a competition for data where positive feedback loops enable further data collection (Goldfarb and Trefler 2018). Varian (2019) claims that data typically exhibit decreasing returns to scale. However, Agrawal et al. (2018) argue that this is not necessarily the case and that there may indeed be increasing returns. The behavior of AI providers appears to confirm this: following so-called platform strategies (Gawer and Cusumano 2002, 2008), large AI providers have learned how to develop two-sided markets (Rochet and Tirole 2003) where the platforms sell below cost or at no cost on the more price-sensitive side with the aim of expanding the network, whilst charging the other side for the value of the resulting externality (Biancotti 2018; Rysman 2009). This approach leads to the acquisition of exclusive ownership of large user-generated datasets.
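The feedback loop described above can be made concrete with a stylized simulation (a Python sketch under strongly simplifying assumptions introduced here, not an empirical model from the cited sources): two competing AI services start with a small difference in data stocks; prediction accuracy grows with data, users gravitate towards the more accurate service, and those users generate the data of the next period, so that the initial advantage compounds:

```python
# Stylized positive-feedback loop: data -> accuracy -> users -> data
# (illustrative simplifying assumptions only; not an empirical model).
import math

data = [1000.0, 1100.0]   # service B starts with a 10% data advantage
users = [0.5, 0.5]        # initial market shares

for _ in range(30):
    # Prediction accuracy improves with data, with diminishing returns.
    accuracy = [math.log(d) for d in data]
    # Users gravitate towards the more accurate service (logit choice).
    weights = [math.exp(5 * a) for a in accuracy]
    users = [w / sum(weights) for w in weights]
    # More users generate proportionally more data for the next period.
    data = [d + 1000 * u for d, u in zip(data, users)]

print(f"final market shares: A = {users[0]:.1%}, B = {users[1]:.1%}")
```

Notably, even though the sketch assumes diminishing returns to data in the accuracy function, the demand-side feedback still produces a near winner-take-all outcome, which captures in miniature the tension between Varian (2019) and Agrawal et al. (2018) noted above.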

In practice, demand-side economies of scale can aggregate based on the principle of division of labor and in the form of layers. If, for example, one user chooses to interact with a certain AI agent (e.g., Uber for transport) whilst another user prefers a competing AI agent (e.g., Lyft), but both use the same AI agent for orientation (Google Maps), then the back-end AI provider benefits from scale effects.

The demand-side economies of scale from data on the level of the AI agent meet supply-side economies of scope on the level of the AI provider. As already noted, AI is a general-purpose technology, which means that useful algorithms and useful data can cross-fertilize a variety of applications and can be put to use in various domains (Goldfarb and Trefler 2018). The combination of the two effects nurtures the collective intelligence (Malone and Bernstein 2015) arising from the intra-firm co-operation between humans and AI agents. The development of such “superminds” (Malone 2018) is bound to increasingly enable AI agents with complementary abilities to self-organize and co-operate directly without human intervention, which effectively results in what is commonly known as swarm intelligence (Bonabeau et al. 1999; Kennedy 2011).

The combination of demand-side economies of scale and supply-side economies of scope nurtures the rise of ‘platform’ competition rather than conventional competition (Gawer and Cusumano 2008). In line with the economic pattern described above, this suggests trends towards market dominance and risks of monopolization in the provision of AI, which is:

  • Enabled by data and machine labor as factors of production,

  • Driven by organizations or even states pursuing platform strategies that build on AI as a general-purpose technology and thus deliberately integrate across distinct business lines (Khan 2016) or even areas of human life in general (Lee 2018),

  • And facilitated by prevailing legislation and property rights structures which make it easy for AI agents and AI providers to receive data from users and citizens.

This can result in exclusive ownership of some types of big data by a few corporations, which may constitute a barrier to entry in markets for AI-based services (Rubinfeld and Gal 2017). Further, it can lead such platforms to control the technological infrastructure on which not only their customers but also their competitors depend (Khan 2016). The control over infrastructure also includes control over algorithms and thus over behaviors displayed in the market. When it comes to price setting, this can lead to a new level of opacity where anticompetitive behavior can be difficult to detect (Furman and Seamans 2019). Overall, this seems to portend the possibility of AI-dependent industries having a winner-take-all market structure. This argument potentially has to be extended to the competitiveness of nations and in the long run even to the competitiveness of AI in comparison to the human species (Kurzweil 2005; Nordhaus 2015; Bostrom 2017).

In essence, this section has served to depict how the previously discussed economic patterns contribute to and interrelate with network effects that arise in a world with AI. It becomes apparent that “machina economica” and micro-division of labor fuel network effects. To benefit from the economics of networks in an AI world, it is necessary to exploit data and machine labor as factors of production, which in turn increases information asymmetries and can thus become the source of triangular agency problems.

8 Conclusions for a world with AI

This contribution comes with some limitations. It has entered new territory in breadth, with the risk of insufficient depth in the areas covered. Nevertheless, the discussion has shown that machina economica, micro-division of labor and specialization, triangular agency relationships with next level information asymmetries, as well as network effects based on the properties of data and machine labor as new factors of production, are not only relevant but also interdependent institutional matters in a world with AI. The proposition that AI gives a new meaning to these matters could be strengthened.

Methodologically, the advent of AI has promising as well as challenging implications for the discipline of economics. AI agents are new objects of study that have an increasing impact on the economy as a whole. On the one hand, economics seems to offer just the right theories for analysis, since machina economica promises to be more rational than humans will ever be. It is therefore not surprising that business and politics seek advice for institutional design and development (Duflo 2017; Roth 2002). On the other hand, economics is methodologically based on individualism. This is challenged by an emerging inseparability of man and machine in which human choice can no longer be understood to take place on a standalone basis, an aspect that will require further research and exploration and which is further complicated by the economic pattern of micro-division of labor and specialization. There are good reasons to assume that this pattern is further accelerated by the widespread adoption of AI. In human–agent collectives, where humans and AI co-evolve, ongoing division of labor and specialization, especially on the part of the machines, will crowd out interactions at eye level, and humans will have to cope with the conglomerate effects of various AI agents that operate ‘behind the scenes’, which implies limits to direct institutional design.

The challenges on the methodological level are complemented by the issue on the normative level of perceiving AI agents as a point of reference for moral obligations and for the internalization of external effects. In summary, enhanced rationality and increased autonomy, along with increasing inseparability embedded into the pattern of division of labor and specialization, call for new research efforts regarding the methodological and normative foundations of institutional economics.

The above will be important when trying to come to grips with AI-induced next level information asymmetries. In the past, economists identified such asymmetries as relevant in principal–agent relationships, which are omnipresent in the economy. With AI in the game, there is a pattern change in that triangular agency relationships become relevant both in the private domain of the market and in the public domain of the state. Research that helps to gain a better understanding of such triangular agency problems is required in order to analyze emerging phenomena and to identify and develop suitable institutional settings.

The need to advance institutional economic theory is further nurtured by the fact that an economy with artificial intelligence uses data and AI-based machine labor as new factors of production. The properties of these factors not only reinforce the pattern of micro-division of labor and specialization. They are also a source of potential negative externalities and the basis for the pattern of increasing network returns.

In conclusion, technological progress in AI, the institutions governing triangular agency relationships, and economies of scale and scope will determine future dependence on AI, problems resulting from information asymmetries, issues revolving around market and political dominance, and thus overall economic and political dynamics and development.

Last but not least, this review has, sometimes explicitly, sometimes implicitly, shown the dual role of economics in an AI world. The discipline of economics serves as a scientific approach to explain social, political and economic life with AI as well as institutions involving AI. At the same time, it provides elements of design both on the level of the machine actor and on the level of rules. In sum, and as already stated in the introduction, the dynamics of the economic patterns outlined in this article and the dual role of economics require economics to become more entrepreneurial (Koppl et al. 2015).