Artificial intelligence in the field of economics

The history of AI in economics is long and winding, much like that of the evolving field of AI itself. Economists have engaged with AI since its beginnings, albeit to varying degrees and with changing focus across time and place. In this study, we explore the diffusion of AI and of different AI methods (e.g., machine learning, deep learning, neural networks, expert systems, knowledge-based systems) through and within economic subfields, taking a scientometric approach. In particular, we centre our accompanying discussion of AI in economics on the problems of economic calculation and social planning as posed by Hayek. To map the history of AI within and between economic subfields, we construct two datasets containing bibliometric information on economics papers, based on search query results from the Scopus database and the EconPapers (and IDEAS/RePEc) repository. We present descriptive results that map the use and discussion of AI in economics over time, place, and subfield. In doing so, we also characterise the authors and affiliations of those engaging with AI in economics. Additionally, we find positive correlations between the quality of institutional affiliation and engagement with or focus on AI in economics, and negative correlations between the Human Development Index and the share of learning-based AI papers.


Introduction
The quest for artificial intelligence 1 (AI) has affected many fields and disciplines; economics is no exception. It began with the dream of creating human- or animal-like machines and automata, such as Leonardo da Vinci's robot knight and walking lion, or Vaucanson's duck. 2 Humans remain fascinated by synthesizing things, by imitating the appearance of natural things, and by understanding non-artificial functionalities.
The journey of AI has moved from the storytelling of Homer's tripods to philosophies of syllogism and logical reasoning, and on towards a theory of computation, or the manipulation of symbols by machine, thanks to luminaries such as Aristotle, Leibniz, Boole, and Turing. The field flourished with important conferences such as the 1948 Interdisciplinary Conference held at Caltech, which cemented the view that the brain might be compared to a computer, or the famous 1956 Summer Research Project on Artificial Intelligence at Dartmouth College, seen by many as the key gathering in the history of AI (for an overview, see Nilsson, 2010). Herb Simon, for example, shifted his interests and concerns from administration and economics towards human problem solving, seeking to discover the symbolic processes that people use in their thinking. The computer was therefore used as a general processor for symbols. In other words, he found tools within computer languages that classical mathematical languages could not offer when exploring the processes of human thinking (Simon, 1991). The use of AI was therefore closely linked to attaining more rigor in the behavioral and social sciences.
Cybernetics-in the spirit of Norbert Wiener-has helped us put more weight on feedback and therefore to understand adaptive processes and how stability is maintained. Morgan (2003), for example, stresses that "under the influence of cybernetic thinking, the economic behaviour of each individual was pictured as being controlled by personal feedback loops" (p. 276). The cybernetics that inspired economics derived its insights from the study of messages, the development of computing machines and automata, psychology, and the nervous system (Wiener, 1954). Wiener (1954) stressed that "[t]o live effectively is to live with adequate information. Thus, communication and control belong to the essence of man's inner life, even as they belong to his life in society" (p. 18). Especially when viewed through a macroscopic lens, cybernetics allows us to avoid making assumptions about the contents, connections, and structure of economic systems beyond a simple input-output-through-transformation modelling approach (Billeter-Frey, 1996; Cochrane & Graham, 1976). This permits study of the sequences of economic events and of path dependency; when calculating over an entire economic system in which inputs are transformed into outputs by a black-box economy, time and space, both local and global, matter tremendously. In the end, feedback is the property of being able to adjust future conduct based on past performance; it is therefore a method of controlling a system by reinserting into it the results of its past performance (Wiener, 1954). This allows definition of various societal concepts, such as law, which in this context could be defined as the "ethical control applied to communication" (Wiener, 1954, p. 105).
Economics in the twentieth century emerged as a science in the "mould of engineering" (Morgan, 2003, p. 276), relying on a certain precision in representing the world with quantitative techniques rather than the words and verbal arguments of the nineteenth century (Morgan, 2003, p. 287). Paul Samuelson (2004, p. 49), for example, recollects that when he
began the study of economics back in 1932 on the University of Chicago Midway, economics was literary economics. A few original spirits-such as Harold Hotelling, Ragnar Frisch, and R. G. D. Allen-used mathematical symbols; but, if their experiences were like my early ones, learned journals rationed pretty severely acceptance of anything involving the calculus. Such esoteric animals as matrices were never seen in the social science zoos. At most a few chaste determinants were admitted to our Augean stables. Do I seem to be describing Eden, a paradise to which many would like to return in revulsion against the symbolic pus-pimples that disfigure not only the pages of Econometrica but also the Economic Journal and the American Economic Review? Don't believe it. Like Tobacco Road, the old economics was strewn with rusty monstrosities of logic inherited from the past, its soil generated few stalks of vigorous new science, and the correspondence between the terrain of the real world and the maps of the economics textbook and treatises was neither smooth nor even one-to-one.
Thus, as Morgan points out, the modelling and tool-based approach gave economics "an aura of scientific modernity" (p. 277), of a more advanced and proper science, via the "desire to ape natural science" (p. 287): using, for example, mathematics to formulate general laws, statistics or econometrics to predict economic events (p. 277), or engineering interventions in the economy (p. 305). The engineering metaphor also suggests that twentieth-century economics is best characterized as a science of applications and implies a technical art, one that relies on tacit knowledge and decidedly human input, as in the eighteenth-century term "art of manufactures" (Morgan, 2003, p. 276). As a consequence, a form of social engineering evolved during the middle and late twentieth century (Morgan, 2003). Similarly, the use of AI may give the impression that Hayek's spontaneous order is a thing of the past. The economics literature is still undecided on how AI may transform society, with a lively discourse evident in the literature. 3 Big Data, machine learning (ML), and deep learning (DL) allow us to ask whether economic calculation has become less impossible, supporting advancement towards more social planning.
However, although we may end up with more reliable data about the initial conditions from which to predict the future, this will not give us enough theoretical understanding of the actual phenomena to be predicted, which is equally important for good predictions (Simon, 1996). Nor can ML or Big Data solve the problem of the utilization of knowledge (Hayek, 1945), particularly the local, situational knowledge that is important for the here and now. Thus, there is a limit to how much of an "architect" an economist can be. 4 Data points or statistics themselves are not free of the ambitions, potential power struggles, gaming and manipulation, or politics of societal decision making (see, e.g., Chan et al., 2019). AI has also taught us that simple heuristics of mere "interestingness" can lead to powerful searches that result in realized activities or scientific discoveries (Simon, 1996). AI systems are interesting for economics due to their search for powerful problem-solving algorithms and for procedural rationality: "Procedural rationality takes on importance for economics in those situations where the 'real world' out there cannot be equated with the world as perceived and calculated by the economic agent" (Simon, 1978, p. 505). Simon sees the power of a theory of computation in domains that are highly uncertain and rapidly changing, and therefore too challenging to allow objectively optimal actions to be identified or implemented. However, social planning on a societal scale also calls "for modesty and restraint in setting the design objectives and drastic simplification of the real-world situation in representing it for the purposes of design" (Simon, 1996, p. 141); in some sense, a deliberate filtering of information to prevent overload (particularly with such high stakes), as on the societal scale we are dealing with a complex adaptive system of systems (Holling, 2001; Levin et al., 2013).
Against the backdrop of such complexity, the policy intervention itself can be seen as a complex adaptive system (Hawe, 2015; May et al., 2016). At each step of implementation, a new situation is created, which becomes the starting point for the next stage of design, planning, or implementation. "The real result of our actions is to establish initial conditions for the next succeeding stage of action" (Simon, 1996, p. 163), so we must be adaptive and consider potential feedback mechanisms to keep up: "The planners make their move (i.e., implement their design), and those who are affected by it then alter their own behavior to achieve their goals in the changed environment" (Simon, 1996, pp. 153-154). This highlights the need to also consider varying spatial and temporal scales in the complex system's feedback and feedforward mechanisms (Holling, 2001; Simon, 1996). For example, the behaviors of those whom the planner seeks to influence (i.e., the planned) may only become visible to the planner on longer timescales (e.g., months to years) and often only at an aggregate level.
Thus, modern economics has been influenced by engineering, or engineering mathematics, in terms of modelling techniques, simulation technologies, and experimental methodologies (Sent, 1997), particularly for the more practical, policy-focused applied economists. Some have advocated that economics is an engineering or design science, highlighting the policy-oriented nature of economics and the economist's role (increasingly called upon) in designing policy interventions for real-world settings by adapting and combining theoretical models with practical implementation know-how. After all, as Mankiw (2006, p. 29) highlights, "God put macroeconomics on earth not to propose and test elegant theories but to solve practical problems". However, later mainstream economists, not unlike their predecessors who borrowed from physics, continued to borrow from many other disciplines besides engineering, such as evolutionary biology, cognitive and behavioural sciences, and of course, mathematics. 5 In this broader light, the equation of economists with engineers is widely questioned by economists on the grounds that economics is a positive science and not engineering. 6 As Lazear (2000, p. 99) reminds us, "economics is not only a social science, it is a genuine science".

4 An architect, in the sense we convey here, is an economist performing the role of economic designer; in other words, an economist who can be, and is, called upon to design, manage, and integrate real-world institutions and market mechanisms that align incentives and behaviours with underlying objectives, including consideration of human, social, and cultural aspects. Historically, this description aligns most closely with the French economist-engineers (Mariotti, 2021, p. 557) or polytechnical engineers. Note also the links to management engineering (Omurtag, 2009) and, more generally, the common ground with operations research and management science. However, as Hayek (1952) contends, "The application of engineering technique to the whole of society requires indeed that the director possess the same complete knowledge of the whole society that the engineer possesses of his limited world" (p. 97). This assumes away the pooling of local knowledge of the here and now (i.e., the knowledge of particular circumstances of time, place, or task) between groups of individuals in society (i.e., the collective experience), as if a "complete concentration of all relevant knowledge is possible" (p. 97): a tall ask for a human society, which presents as a living system of dynamic networks of individuals, groups, and institutions.
The relationship between engineering and economics is also interesting to consider, as both are key players when it comes to building and operating our future smart society, especially considering the role that engineering plays in the development of AI itself. Engineers are typically concerned with a single end, deliberately directing all effort and resources towards this end and with complete (sometimes illusory) control of the particular little world that constitutes their project scope (i.e., dealing with mostly known quantities). However, as Hayek (1952) argues, whilst "the engineer's ideal is [often] based on the disregard of the most fundamental economic fact …, the scarcity of capital" (p. 97), the economist's mindset comes from an appreciation of the scarcity of everything-knowledge, capital, time, etc.-and economists are often trying to optimise over many (potentially competing) dimensions while dealing almost exclusively with (messy) human behaviours. Hayek (1952) warns us of the dangers associated with an "engineering mentality" in economics, i.e., of taking efficiency as the primary driver and designing independently of the broader (embedded) social context. Further, there are tensions between the engineering and scientific aspects of economics (Mankiw, 2006; Sent, 1997). On the science side, economists attempt to develop and unify their scientific knowledge (i.e., journey-driven). On the engineering side, economists are more practical and outcome-driven. Others, however, advocate the complementary nature of "economics as engineering" and "economics as science" (Duflo, 2017; Roth, 2002). 7 AI-like all fields at one point or another-has gone through many seasons of activity and focus in its long and winding history. The way AI is discussed and applied in economics specifically, and in the economic calculation debate, is less explored.
Haenlein and Kaplan (2019) use the analogy of four seasons, in turn, when recounting the history of the AI field: spring, summer, winter, and fall. Spring, led by the likes of Asimov, Turing, Minsky, McCarthy, Simon, and Newell, forms the roots of AI in automata, the antecedents of neural networks, and symbolic logical reasoning. AI summer began with the 1956 Dartmouth Project, followed by two decades of significant success and advances in the field (in theory and practice). In 1973, the US Congress argued for reducing the level of spending on AI research, and the British Research Council grew increasingly cautious of optimistic AI researchers. This marked the beginning of (the first) AI winter, a period in which AI research (and funding for it) fell dramatically to almost a standstill as the early promises and hype of the field failed to materialise. As funding dried up, AI entered a 41-year-long winter with little interest or activity in the field, aside from the brief rise (and fall) of expert systems and other knowledge-based systems in the late 1970s. By the 2000s, statistical advances, growing computational power, and data ubiquity allowed AI researchers to harvest the learnings of previous AI generations (i.e., AI fall), again with increasing interest from academia and industry and excitement among the general public. Neural networks, ML, and DL lead the way in this (current) season. Others, however, argue that this narrow focus falls short of true general AI, the sort imagined by early spring and summer AI researchers and science fiction more generally. Mostly, economists are focused on AI-fall theories and methods, likely due to the complementary nature of their use and application (i.e., prediction, clustering, classifying) and a strong preference for mathematics and procedural rationality.
See, for example, Parkes and Wellman (2015) regarding the 'rationality' common ground between AI researchers and economists, and the links between AI and "classical" behavioural economics (Kao & Velupillai, 2015). We suspect, however, that this means missing out on, or ignoring, what else can be learned from earlier attempts to create models of the human mind and nature. See  for further discussion.
In this paper, we explore the history and development of AI in economics, taking a scientometric approach. Our descriptive study looks to shed light on the take-up of AI methods in economics through time and across locations, characterising the average and the influential authors and institutions involved by economic subfield, academic age, and country. Further, we empirically explore the productivity of the economic calculation/social planning discussion in the AI economics literature. Our study builds on related literature in business, operations management, finance, and economics (Dhamija & Bag, 2020; Ghoddusi et al., 2019; Goodell et al., 2021; Liang & Liu, 2018; López-Robles et al., 2019; Loureiro et al., 2021; Ruiz-Real et al., 2021) by analysing Journal of Economic Literature (JEL) categories for higher granularity on economic subfields and by differentiating papers into learning/non-learning AI methods and economic calculation/social planning papers in the empirical analysis. It is still early days as to where AI in economics may take us, not to mention what world this might create and for what good (or bad). By characterising who has propagated AI methods through the different subfields of economics (e.g., labour, environmental), and how, we seek to uncover what resistance we may face as AI economists in the future and what hidden gems from a not-so-distant past may still remain ripe for the picking. Studies like this are important for mapping fields of science and the evolution of knowledge, and in turn, how these affect society. We need to do so in a flexible manner, capturing a broad AI research landscape without sacrificing granularity on economics' subfields, which is why we complement our empirical analyses with two data sources (Scopus and EconPapers; see Sect. 2.1). This allows us to take advantage of the relative strengths and weaknesses of each database in a complementary fashion.
We supplement our empirical analysis with an accompanying narrative review of the literature on AI, AI in economics, and the AI economic calculations/social planning discourse. This seeks to uncover aspects of the literature not easily captured by bibliometric analysis without sacrificing insights that may be neglected by the inclusion/exclusion process and quality controls required in systematic reviews. This is particularly important in fast-evolving fields such as AI. Several future research avenues are identified and peppered throughout the paper. In the final sections, we summarise the main insights and arguments of the paper before highlighting what we see as the most fruitful avenues for future research applying AI to the field of economics.

Data description
To gather the data needed to map this history of AI within and between economic subfields, we construct two datasets containing bibliometric information on economics papers, based on search query results from the Scopus database and the EconPapers (and IDEAS/RePEc) repository. The latter contains information on the JEL classification system, which distinguishes between 20 (main) economics subfields and thus provides a much higher-resolution analysis by field; however, we retain the Scopus data as it allows us to explore the individual and institutional factors of research in similarly fine detail.
We perform a search query (see Table 1) to retrieve relevant article records from the Scopus and EconPapers databases. 8 The search query is based on the causal methods matrix (CMM), which seeks to functionally map the landscape of AI methods available for problem solving, relative to the complexity of the problem faced and the information at hand. Our query has been kept sufficiently broad to avoid excluding relevant records from the search and to ensure that results were not skewed towards more recent AI developments. The current AI literature is heavily focused on the connectionist approach, whereas in earlier times it was largely focused on symbolic/semantic reasoning and rule-based knowledge engineering. Our assumption is that economic publications that discuss, test, or apply AI in relation to the economic calculation/social planning discourse are likely to mention more general AI terms (e.g., "machine learning", "artificial intelligence"), as opposed to the computer science or applied AI literature, which discusses more specific AI terms (e.g., "support vector machines", "backward propagation") owing to its methodological, rather than conceptual, focus.
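As an illustration, the general shape of such a query string can be sketched in Python. The phrase list below is abbreviated from Table 1, and build_scopus_query is our own hypothetical helper, not part of the study's actual retrieval pipeline.

```python
# Sketch of how a Scopus advanced-search string of this shape can be
# assembled. The phrase list is abbreviated from Table 1; build_scopus_query
# is a hypothetical helper, not part of the study's pipeline.

PHRASES = [
    "artificial intelligence",
    "machine learning",
    "deep learning",
    "neural network*",  # trailing wildcard matches "network", "networks", ...
    "expert system",
]

def build_scopus_query(phrases, subject_area="ECON"):
    """OR together quoted phrases inside TITLE-ABS-KEY and restrict the subject area."""
    joined = " OR ".join(f'"{p}"' for p in phrases)
    return f"TITLE-ABS-KEY({joined}) AND SUBJAREA({subject_area})"

query = build_scopus_query(PHRASES)
print(query)
```

A string of this form can then be submitted to the Scopus advanced search interface (or programmatically, e.g., via the pybliometrics package mentioned below).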
In Scopus, we conduct a "TITLE-ABS-KEY" search for the desired words in the title, abstract, and keywords of the texts, restricting to "SUBJAREA (ECON)" for articles classified to the economics field. 9 Similarly, we searched the EconPapers database for the desired words in the titles, abstracts, and keywords of the texts. While the full collection in EconPapers also captures articles in non-economics fields, we restrict the search results to papers for which JEL codes (see Table S1 in Appendix) are identified. The final Scopus search retrieved 6279 items 10 (4999 journal articles, 731 books, 545 book chapters/series, and 4 trade journals or conference proceedings), and 6949 records were retrieved from the EconPapers database (4492 journal articles, 2074 working papers, and 383 book or book chapter records).

8 The final search was performed on 31 August 2021. 9 The four economic sub-fields in Scopus are broadly defined as General Economics, Econometrics and Finance; Economics, Econometrics and Finance (miscellaneous); Economics and Econometrics; and Finance.
Using the pybliometrics (Rose & Kitchin, 2019) Python package, 11 we also collect and collate, for each Scopus item retrieved, information on the authors (e.g., affiliation, first year of publication), the article (e.g., journal, number of authors), and the journal (e.g., publication year). This results in a set of five derived datasets, all linked to each other by document/author/affiliation Scopus IDs and by ISSN for the publication-title information. See Tables S2 to S4 in Appendix for further detail on the variables retrieved by our Scopus search.

Empirical strategy
We begin with some descriptive statistics, discussing the overall growth of AI in economics over time before delving into details such as growth and diffusion through different economic subfields (i.e., using JEL codes). We also differentiate between learning and non-learning AI papers (Boolean) using a self-refined (by trial and error) list of search phrases 12 that identify more specific AI terms associated with learning-based AI methods. We then map the co-occurrence of JEL classifications, also observing the change in learning/non-learning AI use and discussion over time by JEL classification. We undertake a similar process for papers that focus on problems of economic calculation or social planning, again using a self-refined list of search phrases. 13 Next, we turn our focus to describing institutional/affiliation factors such as relative standing and AI productivity. Lastly, we describe the individual (i.e., author-level) factors of those publishing on AI in economics, such as academic age and country of origin.
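A minimal sketch of how such a Boolean learning/non-learning flag can be computed over record metadata is shown below. The phrase list is illustrative only (the study's full self-refined list is given in its footnote 12), and is_learning_paper is a hypothetical helper, not the authors' actual code.

```python
# Minimal sketch of a Boolean learning/non-learning flag. The phrase list is
# illustrative only; is_learning_paper is a hypothetical helper, not the
# authors' actual classification code.

LEARNING_PHRASES = [
    "machine learning",
    "deep learning",
    "neural network",
    "statistical learning",
    "reinforcement learning",
]

def is_learning_paper(title, abstract, keywords=""):
    """Flag a record as a 'learning AI' paper if any learning-related phrase
    appears in its title, abstract, or keywords (case-insensitive match)."""
    text = " ".join([title, abstract, keywords]).lower()
    return any(phrase in text for phrase in LEARNING_PHRASES)
```

For example, a record titled "Forecasting inflation with deep neural networks" would be flagged as learning, while a record on rule-based expert systems would not.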

Results
Looking at the overall number of AI-related papers retrieved from Scopus with subject area ECON, we find that since 1986 the number of AI papers in economics has been steadily increasing (see Fig. 1), coinciding with the major AI winter. Prior to this year, very few ECON papers with AI-related content (n = 11) matched our search terms; perhaps corresponding to the rise (and fall) of expert systems with some slight lag. The share of all ECON papers in Scopus with AI-related content has also been steadily rising, up to approximately 2.4% in 2021 from 0.13% in 1986. One potential reason could be that advances in ML and neural networks were becoming more important in economics. In the early years, scholars such as Herbert Simon and Allen Newell placed a strong emphasis on symbolic systems (Cockburn et al., 2019; Hunt, 2007). Simon, for example, recounts in his autobiography the achievement of the Logic Theorist (LT), which helped to prove some of the theorems in symbolic logic given by Russell and Whitehead in Volume I of their Principia Mathematica 14:

I have always celebrated December 15, 1955, as the birthday of heuristic problem solving by computer, the moment when we knew how to demonstrate that a computer could use heuristic search methods to find solutions to difficult problems. According to Ed Feigenbaum, who was a graduate student in a course I was then teaching in GSIA, I reacted to this achievement by walking into class and announcing, "Over the Christmas holiday, Al Newell and I invented a thinking machine." (If, indeed, I did say that, I should have included Cliff Shaw among the inventors.) Of course, LT wasn't running on the computer yet, but we knew precisely how to write the program (p. 206).

Table 1 Search phrases for AI publication records retrieval

Search phrases: artificial intelligence, artificial general intelligence, intelligent machine, intelligent system, intelligent agent, machine learning, deep learning, neural network, statistical learning, natural language processing, expert system, knowledge system, knowledge-based system, knowledge engineering, semantic reasoning, symbolic reasoning, and logical reasoning

Wildcards are also used in search terms (e.g., network*)

14 Correspondence with Bertrand Russell (see Simon, 1991, pp. 207-208). Dear Earl Russell: Mr. Newell and I thought you might like to see the enclosed report of our work in simulating certain human problem-solving processes with the aid of an electronic computer. We took as our subject matter Chapter 2 of Principia, and sought to specify a program that would discover proofs for the theorems, similar to the proofs given there. We denied ourselves devices like the deduction theorem and systematic decision procedures of an algorithmic sort; for our aim was to simulate as closely as possible the processes employed by humans when systematic procedures are unavailable and the solution of the problem involves genuine "discovery." The program described in the paper has now been translated into computer language for the "Johnniac" computer in Santa Monica, and Johnniac produced its first proof about two months ago. We have also simulated the program extensively by hand, and find that the proofs it produces resemble closely those in Principia. At present, we are engaged in extending the program in the direction of learning (of methods as well as theorems) and self-programming.

Very truly yours, Herbert A. Simon, Head, Industrial Management Department.

Dear Mr. Simon, Thank you for your letter of October 2 and for the very interesting enclosure. I am delighted to know that Principia Mathematica can now be done by machinery. I wish Whitehead and I had known of this possibility before we both wasted ten years doing it by hand. I am quite willing to believe that everything in deductive logic can be done by a machine.

Yours very truly, Bertrand Russell.

Early heuristic programs focused on tasks such as proving theorems in geometry or playing games such as chess or checkers (Nilsson, 2010). Newell et al. (1959) developed the General Problem Solver (GPS) program, part of an agenda to understand information processing and human problem solving (for a discussion, see Torgler, 2021). GPS was, as Nilsson (2010) notes, an outgrowth of their earlier work on the Logic Theorist, which was based on manipulating symbol structures via operators. According to Newell et al. (1959), GPS used means-ends systems of problem-solving heuristics, classifying "things in terms of the functions they serve, and oscillating among ends, functions required, and means that perform them" (p. 9). Means-ends analysis was therefore achieved by comparing the problem goal with the present situation and noticing the differences; this then leads to action(s) to reduce or eliminate the differences between goal and present states. Minsky (1986, 2006) also refers to means-ends analysis as a "difference-engine". In the 1970s, expert systems became dominant, pushed by a second generation of AI scholars such as Edward Feigenbaum and Raj Reddy, most of whom were more interested in knowledge representation than in actual human intelligence (McCorduck, 2019). Expert systems can be defined as "computer programs, designed to make available some of the skills of an expert to non-experts" (Siler & Buckley, 2005, p. xii). Such systems were attractive to businesses as consulting systems, designed to help humans in their decision-making process and guide the uninitiated towards more favourable outcomes (for discussion, see ). However, these systems generally break down when confronted with problems outside their area of expertise-or even within it, if knowledge (e.g., common sense) beyond the provided rules is required (Nilsson, 2010). Nilsson (2010, p. 326) refers to a story in which John McCarthy interacted with the famous medical expert system MYCIN by typing in some information about hypothetical patients, namely a male who underwent amniocentesis. MYCIN accepted those parameters without complaint, as the fact that males cannot get pregnant was not included in the expert knowledge, thereby demonstrating a core limitation: a lack of common sense. Another downfall is that the knowledge engineering itself is difficult, often needing considerable investments of time, money, and collective effort to develop and maintain fit-for-purpose expert systems (Turban, 1988); nor is the knowledge transfer or acquisition process trivial (Nisbett & Wilson, 1977; White, 1988). As evident in Fig. 1, the mid to late 1980s was the period classified as the AI winter. Many AI sponsors (government and industry) ceased or reduced their funding, disappointed by exaggerated hopes, promises, and expectations.
A strong debate emerged during the long AI winter over whether the metaphor of the mind as a computer is useful. Reservations emerged about the comparability of mind and machine (for a discussion, see Hunt, 2007, pp. 637-642). For scholars such as Herb Simon, both the human mind and the computer were symbolic systems. Others felt that the computational model was a poor fit, or simply inadequate, and became disillusioned with information-processing models (Dreyfus, 1965; Neisser, 1976; Taube, 1961). Dreyfus emphasized that humans have uniquely human forms of information processing that are inaccessible to a mechanical system, referring in particular to issues around the ill-structured data of daily life. Nilsson (2010, p. 314) quotes Jacob Schwartz (1986), who raises the point that:

… a basic goal of AI research has been the discovery of principles of self-organization robust enough to apply to a wide variety of information sources. Any such organizing principle would have to allow coherent structures capable of directly guiding some form of computer action to be generated automatically from relatively disorganized, fragmented input. The present state of AI research is most fundamentally characterized by the fact that no such robust principle of self-organization is as yet known, even though many possibilities have been tried (p. 491).

15 On the study of autonomous creativity in AI see, for example, Boden (1998), and also Rowe and Partridge (1993) and Wiggins (2001), who formalise Boden's propositions. See also Colton and Wiggins (2012) for a detailed review and Du Sautoy (2019) for a book-length discussion. More recently, Mikalef and Gupta (2021) discuss the positive role of AI (capability) in organisational creativity via human augmented intelligence, and also substitution effects from automating manual and repetitive tasks, hence freeing up time for humans to dedicate to more creative means and ends.
In the 1970s, scholars started to argue that thinking does not proceed serially (Hunt, 2007); theories were therefore developed around parallel-processing systems (see, e.g., Rumelhart et al., 1986). Influenced by the structure of the brain, connectionists stressed that knowledge is stored in the connections among neurons, simulating the parallel processing of small neural networks (Hunt, 2007). Mitchell (2019) refers to an article published in The Scientist in 1988 citing a top official at the Defense Advanced Research Projects Agency (DARPA)-the agency that provided the majority of AI funding over the years-discussing the power of neural networks: "I believe that this technology which we are about to embark upon is more important than the atom bomb" (p. 34). As automated data-gathering techniques and inexpensive mass-memory storage apparatus became available and more accessible to the masses, ML techniques became even more important (Nilsson, 2010). AI researchers started to develop a large set of algorithms that enabled computers to learn from data, to the point that ML became its own subdiscipline of AI (Mitchell, 2019). In recent years, learning-from-data approaches using, for example, deep neural networks have been very successful thanks to the better availability of Big Data. As the rise of Big Data has profound implications for the way science is done, such questions and comparisons are important; the manner in which data are collected, curated, and integrated into scientific modelling is essential to understanding our world (Coveney et al., 2016). Varian (2014), for example, stresses that:
[s]ince computers are now involved in many economic transactions, Big Data will only get bigger. Data manipulation tools and techniques developed for small datasets will become increasingly inadequate to deal with new problems. Researchers in machine learning have developed ways to deal with large datasets and economists interested in dealing with such data would be well advised to invest in learning these techniques (pp. 24-25).
Economists are interested in ML as it is predictive. In general, data analysis in econometrics can be classified into four groups: (1) prediction, (2) summarization, (3) estimation, and (4) hypothesis testing (Varian, 2014). Traditional data analytics techniques struggle with observational Big Data, which are often noisy and, unlike survey data or experimental data, are not collected to answer specific questions. AI and Big Data analytics try to overcome such deficiencies. In addition, ML can be implemented first, followed by attempts to explain phenomena that better identify the underlying correlations and co-occurrences; hence, moving towards identifying a causal relationship. Mullainathan and Spiess (2017) stress that "machine learning provides a powerful tool to hear, more clearly than ever, what the data have to say" (pp. 103-104), but this could also be seen as an invasion of personal privacy if/when taken too far, particularly in areas of concern to the data owner or custodian (e.g., where risks of gaming the system exist, or where data are proprietary or sensitive). More data in the end also means less privacy (Agrawal et al., 2018). Figure 2 indicates that a large share of AI papers in economics use ML or other learning-based AI methods, and the rate is even higher for EconPapers. In other words, in EconPapers the primary focus is on connectionist AI: machine and deep learning, neural nets, etc. In the 1990s (1990-1999), the share was 71.1%, followed by 79.9% in the first decade of the 21st century (2000-2010), and 80.5% in the following decade (2010-2020). For Scopus, the values are in the range of roughly 40 to 55 percent (1990-1999: 40.5%; 2000-2010: 55.8%; 2010-2020: 49.7%) across the board, a much more balanced distribution.
Figure 2 also shows the share of economic calculation papers (dashed lines) relative to learning (blue) and non-learning (orange) papers. In other words, the blue (orange) dashed line shows the share of learning (non-learning) papers coded into the economic calculation / social planning discourse. In EconPapers, we can see several time periods of increasing interest in economic calculation / social planning from the learning-based AI literature, but less so recently. Scopus tells a similar story, almost appearing to plateau in the last 5+ years. However, given the explosive productivity in recent years (see Fig. 1), it is salient that the share of economic calculation papers has not plummeted further; albeit, there is little room for EconPapers to fall much further anyway. Over time, the relative share of economic calculation / social planning papers to all ECON AI papers has remained relatively low, but also more stable in recent years. For EconPapers, the relative shares (of all ECON AI papers) from the 1990s (1990-1999)16 onwards are reported in Table S5. Unfortunately, paper-level citation data is not available in the EconPapers dataset for comparison.

Subfield differences
Next, we look at the subfield differences in economics via JEL classification codes. To account for the total volume of papers published under each JEL classification, we obtain the number of papers with each JEL code in every year in which there is at least one AI-related paper, calculating the proportion of papers on the topic. A total of 734,819 records were returned, with 329,795 journal articles (44.9%) (Table 2). We present results using fractional counting, meaning that we divide each paper's contribution by the number of JEL classifications listed on the paper. The correlation between normal counting and fractional counting is (not surprisingly) quite high (ρ = 0.96). Overall, the 6949 AI papers in the EconPapers dataset account for 0.95% of all economics papers based on our method of identifying AI papers. Tables S7 and S8 in the Appendix provide summary statistics for the top-20 journals (most papers published) for Scopus and EconPapers, respectively, including the most frequent JEL codes used in each journal.
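The fractional counting scheme described above can be sketched as follows; this is a minimal illustration under our reading of the method, and the function name and toy data are ours, not the paper's:

```python
from collections import defaultdict

def fractional_counts(papers):
    """Each paper contributes 1/k to each of the k JEL codes it lists."""
    counts = defaultdict(float)
    for jel_codes in papers:
        weight = 1.0 / len(jel_codes)
        for code in jel_codes:
            counts[code] += weight
    return dict(counts)

# Toy example: three papers with their JEL code lists
papers = [["C", "G"], ["Q"], ["C", "E", "O"]]
counts = fractional_counts(papers)
# "C" receives 1/2 from the first paper and 1/3 from the third
```

Under full (normal) counting each code would instead receive 1 per paper; as noted above, the two schemes correlate highly here (ρ = 0.96).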
To better see the subfield differences, we first rank the fields by share of AI papers, which ranges from 0.19% (H, Public Economics) to 2.37% (Q, Agricultural and Natural Resource Economics; Environmental and Ecological Economics) (see Fig. 3). The relatively high rate of AI papers in Q could be driven by the fact that the ecological environment is a complex system that benefits from the use of Big Data and ML techniques.17 Herb Simon (1995) defines AI as "complex information processing" (p. 939). In addition, Q is a field with a natural interest in knowledge-based systems, and AI offers interesting opportunities for improved environmental management and global conservation. ML, for example, has been important in the study of climate change. Deep neural networks have also helped to better map biodiversity around the world (Dauvergne, 2020). As Fig. 3 shows, a large proportion of the papers use an ML approach (around 87%). Cambridge University, for example, has recently launched the UKRI Centre for Doctoral Training in the Application of Artificial Intelligence to the study of Environmental Risks (AI4ER). As they stress, their goal is to "train researchers uniquely equipped to develop and apply leading edge computational approaches to address critical global environmental challenges by exploiting vast, diverse and often currently untapped environmental data sets".18 Naturally, C (Mathematical and Quantitative Methods) has a high relative share (compared to other fields) of AI papers, reporting the largest proportion of ML papers. Also, not surprisingly, fields such as Financial Economics (G) rely heavily on ML (88.3 percent of studies) and AI more generally. Looking to the far right of Fig. 3, unsurprisingly, we see that B (History of Economic Thought, Methodology, and Heterodox Approaches) and P (Economic Systems) are, in comparison to other subfields, more focused on topics of economic calculation, social planning, and social engineering. However, the share of economic calculation AI papers is only a small component of the overall share of AI papers (i.e., 8.1% for P and 6.8% for B). Interestingly, we see the least engagement by G (Financial Economics) and E (Macroeconomics and Monetary Economics), despite the implications any change in economic organisation would have on global financial institutions, and also the suggestion by some that money could become outdated in the new economy (e.g., replaced by data). If we correlate the share of ML papers in AI with the share of economic calculation papers in AI, we observe a statistically significant negative correlation (ρ = −0.7323, significant at the 1% level).

16 Leading up to the commonly referred-to Y2K (or Millennium) bug, problem, glitch, or error.

17 For a discussion on the importance of AI for sustainable entrepreneurship, see .
This suggests that economic fields which place a lot of relative weight on applying ML or other learning-based approaches (relative to non-learning AI) are also less likely to discuss the issues of economic calculation and social planning (at least not as a main contribution of the text, assuming main contributions appear in the title, keywords, and/or abstract of the document). This actually contradicts what we see visually in Fig. 2, wherein the relative share of economic calculation papers for learning AI papers is consistently greater than the relative share for non-learning AI papers. However, this could also be an artifact of the greater productivity of learning AI papers in EconPapers, again relative to non-learning AI papers.

Subfield co-occurrences
In Fig. 4, we show how different combinations of JEL codes are used together in AI-related economics papers, implementing a network analysis approach. The network graphs were created using Gephi with a force-directed (Fruchterman-Reingold) layout. The nodes represent the 20 JEL classifications, with size showing the frequency with which each JEL classification was used. Edges show the number of times a pair of JEL codes were used together (thickness and colour of the edge). Co-occurrence is weighted by the total number of JEL classifications used in the paper. Overall, we see a strong link between the O (Economic Development) and Q (Agricultural and Environmental Economics) pair, the C (Mathematical and Quantitative Methods) and G (Financial Economics) pair, and the C and E (Macroeconomics and Monetary Economics) pair. While Q is the most often used JEL classification among all AI-econ papers, C has the highest weighted degree, indicating that it is used more often in conjunction with other JEL codes. This is particularly evident when the network is split into learning- and non-learning-based papers (Fig. 4b and c), in which the strength of the connection between C and E and G is drastically reduced in the latter. In contrast, the connection strength between the pair O and Q is similar in the two subsets, while O is most linked to other JEL classifications, often taking a more central position. The centrality and connectedness of O make sense, as AI is often associated with economic discussions of innovation, technological change, and growth and is expected to be a powerhouse for future society. Comparing Fig. 4b to c, we can also see more balanced participation by all the economic fields in the non-learning AI papers, as compared to the learning papers where Q, O, C, and G dominate. This could perhaps indicate the generally less technical nature of non-learning AI papers, easing entry for academics into the space and hence supporting a more balanced distribution of economic subfields. However, it could also indicate the economics fields with the most ready application of learning-based AI (i.e., the low-hanging fruits), with other subfields possibly still working out what AI can do for their research or fields of application.

Fig. 3 Share of AI-papers, by JEL code. Share is calculated based on full counting method. Share of learning papers (left panel) shows the proportion of ML-related papers (darker shade) and non-ML papers (lighter shade), respectively
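The edge construction described above can be sketched as follows. The exact normalisation is only loosely specified in the text ("weighted by the total number of JEL classifications used in the paper"), so the 1/k weighting below is an assumption on our part, as are the function and variable names:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(papers):
    """Weighted JEL co-occurrence edges: each pair of codes on a paper
    listing k distinct codes contributes 1/k to that pair's edge weight."""
    edges = Counter()
    for codes in papers:
        unique = sorted(set(codes))
        if len(unique) < 2:
            continue  # single-code papers create no edges
        weight = 1.0 / len(unique)
        for a, b in combinations(unique, 2):
            edges[(a, b)] += weight
    return edges

# Toy example: two papers sharing the C-G pair
edges = cooccurrence_edges([["C", "G"], ["C", "E", "G"]])
# ("C", "G") accumulates 1/2 from the first paper and 1/3 from the second
```

Node size (frequency of each code) and weighted degree then follow by summing a node's edge weights, which a tool such as Gephi computes when laying out the graph.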

Subfields over time
In Fig. 5, we take a look at changes over time. Overall, the share of economics papers using or discussing AI tends to increase in more recent years. This remains true across a number of fields, particularly those reporting the highest shares in Fig. 3 (C, Q, M). Interestingly, the field of law and economics has increased the attention it pays to AI, possibly due to growing interest in how AI can affect society, which also raises questions around regulatory requirements and legal aspects of AI. Campedelli (2020), for example, shows that researchers are increasingly focusing on cyber-related crime topics, but also that relevant themes in algorithmic discrimination, fairness, and ethics are still rarely discussed (see also footnote 3). We should remember, however, that every day "[e]conomic transactions, like legal transactions, do not take place in a vacuum, but in a social and moral context" (Griffiths and Lucas, 2016, p. 30), highlighting the need to bring these issues more to the forefront.

Institutional characteristics
When looking at which institutions are pushing AI in economics (see Fig. 6), we can see a positive correlation between the quality of the university/institution and the ranking based on the number of AI papers produced (ρ = 0.4, statistically significant at the 1% level), although we did not normalize by department size. To measure the ranking information, we relied on Amir and Knauff (2008), whose method grades departments based not on research productivity but on the strength of the PhD program, as measured by a department's ability to place doctoral graduates in top-level economics departments or business schools. As they stress: "[w]ithin the respective context, faculty hires probably constitute a more reliable and stable indicator of influence than journal citations" (p. 185). As only 58 institutions were listed, we exclude the universities/institutions not listed in the ranking, which means that we only explore the relationship among relatively renowned departments.
Those 58 institutions jointly publish 5.6% of all the AI publications in economics. For all of the counting that we do (institutions, countries, authors' academic age), we weight by the inverse of the number of authors listed on the paper. For example, if only one out of three authors is affiliated with Harvard, Harvard is credited with 33% of the contribution. These investigations rely only on Scopus data. Our finding highlights one of two things that are not mutually exclusive: (1) researchers from higher-ranked institutions are more focused on AI, or (2) higher-ranked institutions are focused on hiring (more productive) AI researchers or researchers that apply AI in their work.

Fig. 4 Network analysis of JEL codes of AI-econ papers (EconPapers). Panel a shows the network of all 6,949 AI-econ papers and panels b and c show the networks of learning-based papers (n = 5607) and non-learning-based papers (n = 1342), respectively
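The inverse-author weighting of affiliation credit can be sketched as follows; a minimal illustration in which the function name and the example affiliations are hypothetical:

```python
from collections import defaultdict

def affiliation_credit(papers):
    """Credit each affiliation 1/n for every author it supplies
    on a paper with n authors in total."""
    credit = defaultdict(float)
    for author_affiliations in papers:  # one affiliation per author
        weight = 1.0 / len(author_affiliations)
        for aff in author_affiliations:
            credit[aff] += weight
    return dict(credit)

# One of three authors is at Harvard -> Harvard is credited with 1/3
credit = affiliation_credit([["Harvard", "MIT", "MIT"]])
```

Each paper thus distributes exactly one unit of credit, so summing over all affiliations recovers the paper count.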

Author characteristics
Next, we take a closer look at the academic age of the scholars who publish on AI (Fig. 7). Academic age is defined as the number of years between the year of publication and the year of the author's first publication recorded in Scopus. To calculate the authors' average academic age for each type of contribution within a given year, we apply the inverse of the number of authors on each article as weights. For example, if an article is co-authored by two academics, then the weight is one-half (0.5). Overall, we observe that over the last three decades, economics scholars publishing on AI are on average older. One possible explanation is that collaboration has increased in economics over time, meaning that the number of authors per paper has increased (Torgler & Piatti, 2013). There are also differences between learning and non-learning AI papers. Scholars applying learning approaches are consistently younger than those applying non-learning approaches, possibly reflecting a skills gap among older researchers or a shorter-term focus among younger researchers; in other words, a focus on the modern, connectionist/neural network approaches whilst neglecting earlier symbolic/logic approaches, or the integration of the two.
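The weighted average described above can be sketched as follows, assuming (as stated) that each author on an n-author article receives weight 1/n; the function name and toy data are ours:

```python
def weighted_mean_academic_age(articles, pub_year):
    """Average academic age across articles published in pub_year.
    Each article supplies its authors' first-publication years; each
    author is weighted by the inverse of the article's author count."""
    total = weight_sum = 0.0
    for first_pub_years in articles:
        w = 1.0 / len(first_pub_years)
        for fy in first_pub_years:
            total += w * (pub_year - fy)  # academic age of this author
            weight_sum += w
    return total / weight_sum

# Two co-authors (weight 0.5 each) plus a solo author (weight 1.0)
articles = [[2010, 2000], [1995]]
avg = weighted_mean_academic_age(articles, 2020)
```

Because each article's weights sum to one, this is equivalent to averaging the per-article mean academic ages, so no single large team dominates the yearly figure.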

Country characteristics
Finally, we also look at which countries are producing more AI papers (Fig. 8). Overall, the US leads in number of publications (18.54%), followed by China (9.03%) and the UK (5.89%). The top 10 countries are responsible for 57.64% of all the AI economics publications (Scopus). Among those nations with relatively high production of AI papers, China and India show a particularly high rate of ML research. If we correlate the share of ML papers in AI with the Human Development Index (HDI) 19 -which assesses the development of a country along the key dimensions of human development (a long and healthy life, being knowledgeable, and having a decent standard of living)-we observe a negative correlation of ρ = −0.546. This means that less developed nations are putting a lot of relative weight on applying ML, relative to other AI approaches; again, potentially lacking a longer-term focus. It should be noted, however, that we restricted the sample to countries with more than 30 AI papers (n = 41). We do not observe any statistically significant correlation between the share of economic calculation papers in AI and the HDI. We do, however, note that, again compared to other nations, Mexico, Portugal, and the United Arab Emirates focus significantly more on issues of social planning and engineering, and economic calculation. We also see greater focus (comparatively) in countries such as Lithuania and Austria, and less focus in places like China, Japan, and Korea, where sociotechnical society and monitoring is typically more advanced and widely adopted. In future studies, it could be interesting to explore the relationship between the share of economic calculation/social planning AI papers and a country's technological innovation, e.g., using the Global Innovation Index.
Exploring the institutional, economic, and cultural factors of countries that engage with AI, with certain types of AI (e.g., learning, non-learning), and with the AI social planning and economic calculation discourse could be another interesting endeavour for future work. For example, whether more autocratic or democratic countries are more focused on certain applications or methods of AI, or how government use/application of AI influences a country's research directions.
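The country-level correlation with the sample restriction can be sketched as below. The text does not state whether the reported ρ is Pearson or Spearman, so we show a plain Pearson coefficient with the more-than-30-papers filter applied; the function, variable names, and toy rows are all hypothetical:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# (country, n_ai_papers, ml_share, hdi) -- hypothetical rows
rows = [("A", 120, 0.80, 0.70), ("B", 45, 0.60, 0.85),
        ("C", 20, 0.90, 0.60), ("D", 200, 0.75, 0.78)]
kept = [(ml, hdi) for _, n, ml, hdi in rows if n > 30]  # drop C (n <= 30)
rho = pearson([m for m, _ in kept], [h for _, h in kept])
```

In the paper's data this yields ρ = −0.546 over the 41 eligible countries; the toy rows above merely illustrate the filtering and sign convention.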

Limitations
There are four main limitations to the present study. First, Scopus is just one of many databases available for electronic records of scientific literature; for example, Web of Science (WoS) and Google Scholar both offer valid alternatives. Comparing the coverage (records, citations, journals) of different scientific literature databases is in itself an active area of scientific enquiry (Aksnes & Sivertsen, 2019; Falagas et al., 2008; Harzing, 2019; Harzing & Alakangas, 2016; Levine-Clark & Gil, 2021; Martín-Martín et al., 2018a, 2018b; Singh et al., 2021), so we will not delve too deeply into this discussion. In the social sciences, Scopus has outperformed WoS in content coverage (Norris & Oppenheim, 2007; Martín-Martín et al., 2018a, 2018b) and citation coverage (Martín-Martín et al., 2021). There is also evidence to suggest the same in economics (Aksnes & Sivertsen, 2019; Harzing, 2019; Martín-Martín et al., 2018a, 2018b). Whilst Google Scholar is known to have the greatest coverage across all disciplines (Martín-Martín et al., 2021), especially for non-research-article document types, and is free to use, its data quality can at times be questionable and there is no API to easily access the structured back-end data (Else, 2018; Mingers & Leydesdorff, 2015). Nor does Google Scholar provide detailed metadata (e.g., author affiliations, funding details), so data scraping/wrangling/cleaning can become time consuming and its use (on a larger scale) is typically limited to those with technical programming skills. Whilst there is evidence to suggest that Microsoft Academic had improved coverage in economics and the social sciences (Levine-Clark & Gil, 2021), unfortunately, this database service has recently been decommissioned. 21 Other databases such as CrossRef and Dimensions appear to perform similarly to Scopus and WoS (Harzing, 2019).
Future work could extend this study by replicating this study using different bibliographic sources and/or undertaking citation/content coverage comparisons between database alternatives mentioned above (and others). However, such an exercise is beyond the scope of this paper.
Second, in the current study, we refined the record search and classification phrases ourselves (see Table 1 and footnotes 11 and 12): the search terms were drawn from previous literature, and the classification terms were refined manually (iteratively, based on a quick title overview of search results), as opposed to a more robust and/or data-driven process (Liu et al., 2021) or structured literature review. Such data-driven strategies are obviously not free of issues or bias (e.g., lacking reference to earlier symbolic or analogical reasoning AI) but could aid future replicability. Further, other AI methods (and/or more nuanced AI terms) may have been applied or mentioned in papers' titles, keywords, or abstracts that we have not been able to identify with our current search and classification strategy (i.e., missed papers or false negatives). We also note the potential for false positives and negatives in the record classification stage. Third, paper-level citation data is not available in the EconPapers dataset for comparison with the top cited papers of Scopus at this stage. Future work could also explore insights from EconPapers data at the author and affiliation level, in addition to paper-level citations and quality indicators. Fourth, the current descriptive empirical analysis focuses only on quantitative measures of the literature, neglecting more qualitative measures. Future work could use qualitative methods such as keyword co-occurrence and topic modelling to thematically map the main topics of discourse in the respective literatures (learning, non-learning, economic calculation/social planning); see, for example, the recent works of Sestino and Mauro (2021) and Loureiro et al. (2021).

Conclusions
The history of AI use and discussion in economics is long and winding, much the same as the evolving field of AI itself. Since its beginnings, economists have engaged with AI, but in varying degrees and with changing focus over time and place. However, the transformative nature of AI also requires that we discuss and debate the longer-term implications of AI and the many paths a future AI-fuelled society could take, whilst also remaining vigilant to its potential for misuse (e.g., media manipulation of the masses, deep fakes, algorithms profiling or targeting certain racial or ethnic groups). In general, our results provide insights regarding the evolution of AI in economics, between its subfields, and across various author and affiliation demographics. More specifically, we have explored the diffusion of AI and different AI methods (e.g., ML, DL, neural networks, expert systems, knowledge-based systems) through and within economic subfields, taking a scientometrics approach. To map such a history of AI within and between economic subfields, we construct two datasets containing bibliometric information on economics papers based on search query results from the Scopus database and the EconPapers (and IDEAs/RePEc) repository. We build on related literature by analysing JEL categories for higher granularity on economic subfields and by differentiating papers by learning or non-learning AI methods and by economic calculation/social planning content. Further, we supplement the empirics with an accompanying narrative review of the literature on AI, AI in economics, and the AI economic calculation/social planning discourse. This seeks to uncover aspects of the literature not easily captured by bibliometric analysis and to guide the reader towards relevant and related literature.
Our empirical results indicate that AI in economics follows a similar trajectory (with a slight delay) to the AI field itself: activity in the 1970s, downturn in the 1980s, and increasing interest in more recent times. Primarily, this appears driven by learning approaches. However, not all economic subfields engage equally in learning-based AI. Indeed, those that have come to conceptualise the economy as a complex system are usually more receptive to AI and Big Data. In particular, environmental, agricultural, natural resource, and ecological economists focus (in relative terms) extensively on AI, especially on learning-based AI methods. This is not overly surprising, as these subfields of economics are generally the most open to interdisciplinary and transdisciplinary approaches. Of those papers with AI-related content, those citing a mathematical and quantitative focus (JEL classification) most frequently occur in conjunction with other economic subfields, which makes sense considering the dominance of learning approaches (relying heavily on mathematics and computational theories) in AI economics. There also appears to be some subtle undertone of the challenges and opportunities AI holds for topics such as the future of work, discussed in economics with papers often citing economic development, innovation, technological change, and growth JEL classifications. Further, we find increasing interest in AI over time for those at the intersection of economics and law, possibly due to new regulatory and legal challenges that are emerging in this field; see, for example, Calvano et al. (2020), Hardyns and Rummens (2018), and Leib et al. (2021). Others in the history of economic thought also look to be following suit, suggesting growing interest in reviving or revisiting the relevance of AI for and in economics.
Regarding the discourse on economic calculation, social planning, and social engineering in the AI economics literature, we find that the share of economic AI papers dedicated to the topic has remained relatively stable over time, despite exponential growth in the number of economic AI papers. Looking to subfield differences, we see (unsurprisingly) the most engagement from those focused on the history of economic thought/methodology/heterodox approaches and on comparing different types of economic systems (e.g., capitalist, socialist, transitional economies). Interestingly, we find relatively lower engagement in the financial economics and macroeconomic/monetary policy subfields, which is surprising considering that some are projecting money will be replaced. Many are expressing the criticality of data in future society as opposed to money (Zuboff, 2015), also highlighting potential issues that could arise in data privacy, surveillance, security, and forgery. For data can be collected for many reasons (Helbing & Hausladen, 2022): "… 'for security reasons', 'to save the world', 'to improve the state of the world'" (p. 2), and its ubiquity is only increasing, making it ever more likely that this data power could accumulate in the wrong hands. In some cases, AI can judge people's personalities more accurately than their own family or friends by mining their digital footprint (Youyou et al., 2015), and there are already governments which tap into such resources to guide and fine-tune their policies. This highlights the current social risk of a possible alliance between mainstream-oriented economists and incurious computer engineers (i.e., the type who truly internalise "machina economica") in dealing with the application of AI systems to business and society.
This also highlights the importance of informational self-determination in this new economy, as well as other collaborative principles such as the 'open' movement and increasing civil engagement in planning initiatives (Helbing & Hausladen, 2022). Looking to other fields outside economics for inspiration and common ground, the AI ethics literature could offer transferable insights, particularly around the ethics of AI (see footnote 3).
Moreover, those at the productive forefront of AI economics are scholars at higher-ranked universities and institutions. This shows that the big players (or at least their affiliate members) have their sights set on how AI may transform their own research and the ways of the world. We also found that economists engaging with AI are getting older over time, possibly due to more frequent collaborations, which would require us to normalize by the overall trend in economics to identify its importance for AI research. However, those engaging with learning-based AI approaches are consistently younger than those who wield the non-learning methods. Further, we find that countries with lower scores on the Human Development Index appear more focused on learning-based AI, perhaps because they are coming later to the game and hence starting primarily from where the AI field is focused now. However, future developments require an understanding of what a method is able to achieve and what it cannot. This requires mapping out the possible landscape of AI methods with respect to the problems they solve (i.e., a meta-theory of AI methods); for example, mapping AI methods along dimensions such as the number and influence of causal components (Minsky, 1992), similarity to other problems solved, relevance to the problem context itself (Minsky, 1991; Singh, 2003), or the content, flows, and uncertainty of information or knowledge about the problem (Bickley & Torgler, 2021a). The same should be done for the many different ways to think (represent and reason) about types of problems, including, e.g., the expressivity of the representation, ease of reflection, transparency of reasoning, tractability of inference, or capability with uncertain knowledge (Singh, 2003).
This requires that we also, from time to time, revisit earlier seminal and possibly forgotten literature to see what else can be learned or perhaps what we may have missed along the way.
As a first next step, we may look to characterize how AI has propagated through and within other subject areas and fields beyond economics. We could also take a closer look at individual papers, extracting from each how the methods were used or applied (i.e., collecting more contextual paper-level information). In doing so, we would start to understand the value, understanding, and use of AI methods across and within other fields. Future research could also explore the story and dynamics of citations, citation networks, co-author networks, etc. in AI research, and thus would need to consider more carefully the implications of citation coverage when deciding which database to use. Further work could undertake citation/content coverage comparisons to determine which databases are most applicable to AI, and also take advantage of the qualitative aspects of bibliographic data via keyword co-occurrence and topic modelling. Focusing on the economic calculation and social planning discourse, it could be interesting to explore the relationship between the share of these AI papers and a country's level of technological development or innovation, as well as the other institutional, economic, social, and cultural factors that drive an individual's engagement with AI, certain types of AI (e.g., learning or non-learning), and the economic calculation debate.
Historically, there has never been a time with more efficient means to systematically collect and analyse information. What is lacking, however, is AI with common sense and many ways to think and reason about the world beyond narrow or highly specialised domains. This means combining the many methods of AI developed throughout history (e.g., ML, expert systems, symbolic logical reasoning), for instance via a neuro-symbolic approach, and, further, mapping these methods functionally (e.g., in terms of what problems they can help solve). Most AI of today will fail once it meets the messiness of the real world and its many moral/ethical dilemmas. Beyond statistical randomisation, for human-like organic interactions and spontaneous order to take place, AI must send price signals indistinguishable from those of humans. In other words, AI will need to mimic the behaviours of individuals in markets, and in markets of markets and other socioeconomic systems, so we know it can participate in a truly human world. Being sensitive to the quality and quantity of data inputs, learning-based AI approaches must find signals among the noise and other interferences inherent in Big Data. In general, as Marcus and Davis (2019) point out, "deep learning, is fabulous at learning but poor at compositionality and the construction of cognitive models; the other, classical AI, incorporates compositionality and the construction of cognitive models, but it is mediocre at best at learning" (p. 94). As cognitive models are an important element of economics, the field cannot afford to disregard this factor. This is particularly important when working towards an AI that is robust, reliable, and able to function in a complex and ever-changing world. Humans are uniquely spontaneous and creative when faced with new and unexpected circumstances, for which no training data are available and the data we do have are often imperfect.
AI systems need to be able to work in an open system that constantly changes. Marcus and Davis (2019, p. 16) are right to emphasise that we cannot practise every situation in advance, nor are we able to foresee what sort of information and data we will need when acting in any given situation. Common sense remains a core hurdle to overcome. As Marvin Minsky often reminded us, AI systems do not really know why they cannot push a string. We generally do not think to ask ourselves such questions because their answers seem obvious; these are the sorts of knowledge and experience we describe as intuition and common sense.
Engineers and planners will become more fashionable in the AI world. 23 AI may encourage a shift towards pushing or arguing for more central planning, thanks to now having access to the kind of "supermind" that would probably cause Hayek to turn in his grave. Hayek (1952) warned us about the subjective character of the data: in the end, our data or facts are only ideas or concepts. It is naïve to assume, as he stresses, that "all the sense qualities (or their relations) which different men had in common were properties of the external world", meaning that:
it could be argued that our knowledge of other minds is no more than our common knowledge of the external world… knowledge and beliefs of different people, while possessing that common structure which makes communication possible, will yet be different and often conflicting in many respects… But the concrete knowledge which guides the action of any group never exists as a consistent and coherent body. It only exists in the dispersed, incomplete, and inconsistent form in which it appears in many individual minds, and the dispersion and imperfection of all knowledge are two of the basic facts from which the social sciences have to start (p. 29-30).

23 On the fascination exercised by AI on computer engineers and economists, see the discussion in Parkes and Wellman (2015) and Mariotti's (2021) criticisms.
AI and Big Data will not solve the problem of the compatibility of intentions and expectations of different people. Global networks create complexity and allow potential instabilities that are impossible to manage by a top-down planning approach, which suggests that we need to rely on flexible adaptation to local needs (Helbing, 2015, p. 88). As Hayek argued, there will always be local, situational knowledge that accumulates separately from the central knowledge, and not everything is meticulously reported back up the chain. Gmeiner and Harper (2021) point out that there is a difference between economic calculation and planning, as planning also requires making decisions in the realm of the (socio)political, and decision-making processes are not per se efficient, as is evident in the Public Choice literature. Aspects around incentives, power, and authority can corrupt the benefits of AI; as Gmeiner and Harper (2021) put it, "planning creates more data, which feeds calculation. Data problems can cascade through this feedback loop and could even corrupt deep learning algorithms, which are only as good as the data that train them" (p. 6). Unintended consequences remain important, given the oftentimes black-box nature of AI, its sensitivity to manipulation of the information and knowledge flows it receives, and the fact that it may decide on behalf of humans or cause harm (financial, health, or otherwise). The good, the bad, and the ugly are all parts of the same AI. How goals are defined and decided upon has major implications when AI is involved.
There is also a danger that AI and Big Data may disregard qualitative phenomena by emulating the natural sciences, "replacing the picture of the world in terms of sense qualities by one in which the units are defined exclusively by their explicit relations… It not only frequently leads to the selection for the study of the most irrelevant aspects of the phenomena because they happen to be measurable, but also to 'measurements' and assignments of numerical values which are absolutely meaningless" (Hayek, 1952, p. 50-51). In general, Hayek (1952) reminds us that "the great lesson of humility which science teaches us, that we can never be omnipotent or omniscient, is the same as that of all great religions: man is not and never will be the god before whom he must bow down" (p. 102).
Not even when the Golem becomes a reality.