Identity of AI

With the explosion of Artificial Intelligence (AI) as an area of study and practice, it has gradually become very difficult to mark its boundaries precisely and specify what exactly it encompasses. Many other areas of study are interwoven with AI, and new research and development topics that require an interdisciplinary approach frequently attract attention. In addition, several AI subfields and topics are home to long-standing controversies that give rise to seemingly never-ending debates that further obfuscate the entire area of AI and make its boundaries even more indistinct. To tackle such problems in a systematic way, this paper introduces the concept of the identity of AI (viewed as an area of study) and discusses its dynamics, controversies, contradictions, and opposing opinions and approaches coming from different sources and stakeholders. The concept of the identity of AI emerges as a set of characteristics that shape the current outlook on AI from epistemological, philosophical, ethical, technological, and social perspectives.


Introduction
It is very difficult to tell with high accuracy what people have in mind when they talk about AI; the term/label 'AI' has become overloaded. To some people, the proliferation of related fields, such as applied statistics, data science, predictive analytics, and biometrics, creates the impression that AI is gradually losing its identity to these other fields.
Compiling from definitions from several online English dictionaries (Merriam-Webster, Cambridge, Britannica, Collins, Vocabulary.com, Dictionary.com), the term identity is used in this paper to denote the set of characteristics, qualities, beliefs, etc. by which a thing or person is recognized or known and is distinguished from others. The paper extends this term from things and persons also to the research and engineering area of Artificial Intelligence (AI) and discusses important qualities, beliefs and insights related to the identity of AI.
The question that immediately follows is: What are those distinguishing characteristics, qualities, beliefs, etc. of AI as an area of study? This question has been taken as the research question in the analyses conducted in an attempt to elucidate the notion of identity of AI.
Note that this is an open question, and it is where the notion of identity of AI that many people intuitively and implicitly have in their minds starts to diffuse. As AI continues to boom at an unprecedented pace, as other more-or-less related areas and fields continue to overlap more and more with AI, and as new subfields and topics continue to open under the umbrella that we call AI, it becomes increasingly difficult to draw a clear distinction between AI and non-AI.

[Table 1, the methodological approach taken in this paper; each row represents one element/component of the approach, with a label/short name: include the perspectives of different stakeholders into the current big picture of the concept of identity of AI; include earlier work, milestones, and established topic areas, i.e. some of the relevant seminal work from the past, in order to get a more complete picture; present some historically important ideas that persist or have been overcome by newer developments; cover important sources of attempts to define intelligence and AI, and the evolution of characteristics of the identity of AI.]

… games, and robotics, to name but a few, have long been well-established AI topic areas that certainly contribute to the notion of AI's identity. It is the second group that makes AI's identity very vague. The focus of this research has been largely on this second group, roughly corresponding to the first two rows in Table 1, thus the subsequent sections discuss primarily the characteristics and opinions from the second group. This second group can be seen as creating the "variable part" of AI's identity. As for the resources other than text ones, statistics and graphs from [3], AI training courses from Stanford and MIT, as well as resources available from Kaggle and GitHub repositories have proven to be very valuable in the author's past work, so they have been largely consulted here as well.
Some of them are not referenced explicitly in the subsequent sections, but have nevertheless supported the creation of the big picture.
The big picture of the current identity of AI has emerged from the analyses carried out in this research effort, but has remained vague. Its characteristics (A-G from the Introduction) are elaborated in detail in the rest of the paper. Note that the paper's section titles do not map to these characteristics one-to-one. There is a reason for that: the controversies around these characteristics have created several important dichotomies in AI as an area of study, so analyzing these dichotomies leads to a more comprehensive understanding of the whole area. The "Discussion" section, however, brings all these pieces together around characteristics A-G.

Definition of AI?
The question mark in this section heading is intentional-there are many attempts to define AI, yet there is no widely accepted, 'standard' definition. The reason is that the concept of intelligence is much too complex, not well understood [4], and "nobody really knows what intelligence is" [5].

Selected definitions
As an introduction to a more detailed discussion on definitions of intelligence and AI, here are some examples according to which AI is:
• the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages [6];
• the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines [3];
• the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment [7].
The first definition has been selected as a representative of definitions coming from general reference sources, such as dictionaries and encyclopedias. The second one comes from one of the most authoritative websites when it comes to AI, that of the Association for the Advancement of Artificial Intelligence (AAAI). The third one is the one adopted in the widely used and quoted textbook on AI.
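To make the agent-oriented definition above concrete, here is a minimal sketch (purely illustrative; the thermostat rule, its threshold, and the function names are invented for this example, not taken from [7]): the agent repeatedly receives a percept from its environment and returns an action that affects it.

```python
# A minimal reflex agent in the sense of the agent-oriented definition:
# the agent is just a mapping from percepts to actions.

def thermostat_agent(percept):
    """Percept: current temperature; action: 'heat_on' or 'heat_off'."""
    return "heat_on" if percept < 20.0 else "heat_off"

def run(agent, percepts):
    """Feed a stream of percepts to the agent and collect its actions."""
    return [agent(p) for p in percepts]
```

Any mapping from percept histories to actions fits this schema; 'intelligence' then becomes a property of how well the chosen actions serve the agent's objectives.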

Collections of definitions
Typical approaches in compiling definitions of AI and human intelligence are to search for them on the Web and in literature, to look for the definitions used by high-reputation research groups, centers, and labs, to ask experts to provide their own definitions, and to ask experts to comment on existing definitions in an attempt to refine them and possibly merge them into one. An obvious drawback of these approaches is the lack of completeness; as Legg and Hutter notice, many such definitions are buried deep inside articles and books, which makes it impractical to look them all up [8].
Still, existing incomplete collections provide useful insights into different views on intelligence and AI, make it possible to classify the included definitions into meaningful categories, and also prompt researchers to extract common and essential features from different definitions and devise their own ones. For example, Legg and Hutter have collected and surveyed 70 definitions of intelligence and noticed strong similarities between many of them [8]. By pulling out commonly occurring features and points from different definitions, they have adopted the view that "intelligence [is the concept that] measures an agent's ability to achieve goals in a wide range of environments."

Faggella makes a clear separation between AI as an area of study and AI as an entity, and focuses on the latter [9]. His approach has been to start from only five practical definitions from reputable sources (Stanford University, AAAI, etc.) and ask experts to comment on them or to provide their own definition. The final result of an analysis of these comments is the following attempt to define AI: "Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviors and actions that help the entity achieve a particular goal or objective over a period of time."

Marsden has started from about two dozen established definitions, coming from different areas of business and science [10]. By spotting common themes in his list of definitions, he has synthesized them into the following one of his own: "[AI is any] technology that behaves intelligently (insofar as it responds adaptively to change). The capacity to respond adaptively to change through the acquisition and application of knowledge is a hallmark of intelligence-i.e. the ability to cope with novel situations." Although his definition suffers from the same deficiency as many others-circularity caused by the terms 'intelligently' and 'intelligence'-it stresses adaptivity to change as an important feature of AI.

Formal approaches
Legg and Hutter have also constructed a formal mathematical definition of machine intelligence, calling it universal intelligence [5]. It includes mathematically formalized descriptions of an agent, its goal(s), its environment(s), the observations it receives from the environment(s), the actions it performs in the environment(s), and reward signals that the environment(s) send(s) to the agent as information of how well the agent is performing in pursuing its goal(s). The agent is supposed to compute its measure of success from the reward signals and wants to maximize it.
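In compact form, the idea can be written as follows (a sketch using the notation of [5]; the full construction formalizes environments as computable probability measures and bounds total rewards):

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here \pi is the agent, E the class of environments, K(\mu) the Kolmogorov complexity of environment \mu (so simpler environments carry more weight), and V_{\mu}^{\pi} the expected total reward \pi obtains in \mu. The agent's universal intelligence \Upsilon(\pi) is thus high exactly when it achieves goals across a wide range of environments.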
Another existing formal definition of AI is Russell's [11]. It is based on the concept of bounded optimality, i.e. the capacity to generate maximally successful behavior given the available information and computational resources. Formally, bounded optimality is exhibited by an agent's program l_opt that belongs to the set L_M of all programs l that can be implemented on a machine M and that satisfies

l_opt = argmax_{l ∈ L_M} V(l, M, E, U)

where E is an environment class, U is a performance measure on sequences of environment states, and V is the expected value according to U obtained by the agent when executing the function implemented by the program l (argmax is an operation that finds the argument that gives the maximum value from a target function).
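The argmax over programs can be rendered as a toy sketch (illustrative only; the function names are invented, the "program set" is a tiny hand-picked list, and real bounded optimality quantifies over every program implementable on the machine, which is intractable in general):

```python
# Toy sketch of bounded optimality: among the candidate programs that "fit on
# the machine" (here: a small list), l_opt is the one with the highest expected
# performance V over the environment class under the performance measure U.

def V(program, environments, U):
    """Expected value of running `program` across the environment class."""
    return sum(U(program(env)) for env in environments) / len(environments)

def l_opt(programs, environments, U):
    """Pick the bounded-optimal program: argmax of V over the candidates."""
    return max(programs, key=lambda l: V(l, environments, U))
```

The point of the definition survives the toy: optimality is judged relative to what the machine can actually implement, not relative to an idealized unbounded reasoner.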

Issues with definitions
The lack of consensus on the definition of AI is expected and is caused by different cognitive biases [12]. These biases form part of people's judgment and cannot always be avoided. Another issue stems from the fact that definitions, in the attempt to be concise, often lack aspects that many researchers believe are essential for the entire area of AI. For example, the definition of AI proposed by the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) does not mention Natural Language Processing (NLP), one of the oldest and most popular topics in the area of AI, although it does mention other topics explicitly [13].
AI researchers often rely on the notion of rationality instead of the vague concept of intelligence. Rationality refers to the ability to choose the best action to take in order to achieve a certain goal, given certain criteria to be optimized and the available resources [7,11,13], and is a much more specific concept than intelligence. However, focusing on rationality alone leaves little room for other important aspects of intelligence, such as intention, abstraction, emotions and beliefs.
Yet another issue is that whatever the definition of AI one chooses, there should be a way to test if a system is intelligent according to that definition. The widely known Turing test [14] is nowadays disliked and even dismissed by many. They criticize it as misleading [15], insufficient [9], not convincing [16], not formal enough [11], and largely based on a chat-like model of interaction between humans and computers that excludes criteria important in, e.g., self-driving cars and robots [17].
Unfortunately, alternatives to the Turing test are also limited and are not widely studied and accepted [18]. An implicit assumption underlying such tests-that a demonstration of some kind of intelligent behavior clearly reveals genuine intelligence-is not true [15,19]. For example, the AI system in a self-driving car does not see or understand its environment in the way that humans do.
Most definitions of intelligence and AI also reflect a very human-centric way of thinking [20]. The major criticism here is that practical AI technologies like statistical-pattern matchers should not be necessarily anthropomorphized in an attempt to mimic human abilities; they should rather be seen as complementing human abilities [11,21].

Working definitions
Still, from the pragmatic point of view, there are researchers who believe that it is better to have at least some explicit 'working definitions' of intelligence and AI that serve current research contexts, improve coherence, and make research and communication of ideas more efficient, than to wait for new fundamental discoveries and consensus about the definitions. To this end, Wang has suggested several working definitions of AI depending on whether different practical contexts require building systems that are similar to the human mind in terms of structure, behavior, capability, function, or principle [20]. Linda S. Gottfredson has proposed the following definition of intelligence: "Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings-'catching on,' 'making sense' of things, or 'figuring out' what to do" [22]. The last bit of this definition is of special interest for understanding the level of 'intelligence' of current AI systems.
Obviously, these different working definitions give the area of AI different identities. They differ in focus, correspond to different levels of abstraction, set different research goals, and require different methods. Practical developments based on these different definitions/approaches produce different results, and must be evaluated according to different criteria. An AI system relying on an 'integrated' definition, i.e. one satisfying the criteria of all these working definitions, does not exist yet.

AI, its topics, and other disciplines: blurring borderlines, merging scopes
A quick glance at the Quick Topic Analysis graph at AITopics.org [3], https://shorturl.at/kDEPW, reveals that Machine Learning (ML) is by far the most popular topic in AI today. A similar graph from the same source indicates that neural networks are the most popular subarea of ML. Other popular ML subareas include statistical learning, learning graphical models, reinforcement learning, ML performance analysis, inductive learning, evolutionary systems, decision tree learning, etc. Roughly, the popularity of these ML subareas follows a geometric progression: each subarea is about one half to two thirds as popular as the previous one. These graphs are continuously updated at AITopics.org, but for quite some time they have remained roughly the same in shape and in terms of the dominance of ML and particularly neural networks. Jiang et al. have presented a somewhat different graph-a semantic network of important concepts in AI-based on the search results in Web of Science [23]. They also provide an analysis of why ML and especially deep learning have become so popular.
While having these up-to-date insights is certainly useful, it looks as if non-experts are not aware of them. More precisely, it often looks as if the non-ML parts of AI are simply overlooked and ignored by many. Moreover, in industry and in the popular press alike, people often disregard the fact that AI subsumes ML and use the terms AI and ML as synonyms. Alternatively, one often sees the use of 'AI and ML', 'AI/ML', 'AI or ML', 'AI and neural networks', 'AI and deep learning', and the like, which reflects colloquial rather than factual use and makes the identity of AI even more indistinct.
Things get even more complicated with the fact that AI partially overlaps and intersects with other disciplines, areas and fields, like math, statistics, pattern recognition, data mining, knowledge discovery in databases, programming, neurocomputing, etc. They all use concepts, tools, and techniques from AI and vice versa. In interdisciplinary contexts, it is often hard to say which specific discipline prevails. Interested readers are welcome to take a look at a nice graphical illustration of these intersections and relationships [24].
Such complex relationships between different disciplines and their topic areas, as well as the popularity of ML, often bring up questions like "What is the difference between AI and Data Science (DS)?", "What is the difference between ML and DS?", and "What is the difference between ML and statistics?" Note that, again, people who are not AI experts often exhibit a tendency to use AI and DS as synonyms. However, DS "works by sourcing, cleaning, and processing data to extract meaning out of it for analytical purposes" [25], i.e., its center of attention is data analysis, whereas AI is about understanding the mechanisms of intelligence and building systems (agents) that can perform tasks normally requiring intelligence. Mahadevan makes a comment that "ML is to statistics as engineering is to physics… statistics is the science that underlies … machine learning" [26].

AI dichotomies
Different stakeholders see AI from different perspectives. This is the source of several dichotomies that further diffuse the identity of AI. An illustration of such different perspectives is the already mentioned distinction of AI as an entity and as an area of study [9,27]. There are others as well.

Human-driven AI vs. autonomous AI
To most people, the term 'autonomous AI' brings two real-world mental associations: autonomous robots and self-driving vehicles. The autonomy of such systems is strongly related to the degree to which the system can run independently [28].
Here the major problems arise in situations when communications are poor, or the sensors fail, or the system is physically damaged-the system should have a range of intelligent capabilities to handle such unexpected situations on its own.
The human-driven part of autonomous AI systems is, however, still large-in addition to turning these systems on and off, it is humans that create the programs that the systems execute, and much of system maintenance is also on the human side.
At a more basic level, and bearing in mind the AITopics.org Quick Topic Analysis graph (https://shorturl.at/kDEPW) [3], much of AI today is about ML model building and making predictions. Although tools like AutoML can automate these tasks to a great extent, in many real-world situations such tasks are mostly human-driven. It is still most often the case that human experts select variables from datasets to train the model with, and run various kinds of dimensionality reduction and feature engineering processes. Training ML models is not only time-consuming and resource-demanding but often largely manual. Interpreting the predictions obtained by feeding the model previously unseen data is also a human task in most cases.
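As a tiny illustration of such a human-driven step, here is a hand-rolled variance filter for feature selection (a stdlib-only sketch; the function names and the threshold are invented, and a practitioner would normally combine this with richer criteria such as correlation with the target or domain knowledge):

```python
# A human-driven feature-selection step: drop columns whose variance falls
# below a threshold chosen by the practitioner. Near-constant columns carry
# little information for most models.

def variance(column):
    """Population variance of one feature column."""
    mean = sum(column) / len(column)
    return sum((x - mean) ** 2 for x in column) / len(column)

def select_features(rows, threshold):
    """rows: equal-length feature vectors; returns indices of kept columns."""
    columns = list(zip(*rows))
    return [i for i, col in enumerate(columns) if variance(col) >= threshold]
```

Even this trivial filter involves human judgment twice: in choosing the criterion (variance) and in choosing the cut-off, which is exactly the kind of expert decision the paragraph describes.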
In fact, things are even more complex with human-driven AI. Although currently there is nothing conscious and human-cognitive in the systems labeled AI, it is also important to understand that human intelligence also has performance constraints and limitations [29]. As a simple example, consider again the case of self-driving vehicles-while level 5 self-driving is still questionable, there is also no guarantee that, safety-wise, human steering is better than self-driving (and self-driving vehicles and their algorithms are continuously being improved).
Likewise, there are arguments against the so-called finality argument, i.e. that "the final goal that a rational agent happens to have at the beginning of its existence will be really final" (unlike humans, who can always reconsider their final goals and possibly change them) [30]. The rationale is as follows: an agent's goal is not separate from its understanding of the world, and this understanding can change over time as the agent learns from experience; thus the agent's understanding of the given goal may change as well. Note, however, that the finality argument also has many proponents and for now it remains an open debate. In reality, current AI systems can at best choose the path to take to achieve their human-set goal [13].
Finally, there are more and more voices in support of integrating human-driven AI, autonomous AI, and human intelligence in practical developments and use of intelligent systems [29, 31-33]. The rationale is to leverage all AI technology, but also to keep humans in the loop in order to achieve the right balance.

Industry/Academia dichotomy
It is a well-known fact in many fields that the expectations that industry has about innovations differ to an extent from what researchers are pursuing. Industry has to care not only about the quality and utility of its products and services, but also about pressures coming from the market, about competitiveness, and about continuous improvement at a very practical level. Academia and research are more about ideas and visions, and how to develop prototypes, evaluate them, and contribute to existing knowledge in different fields. There are also many initiatives and funding programs that support bringing research and industry together in an attempt to get "the best of both worlds". Likewise, many big companies have their own research departments whose activities pertain to practical innovations to become part of future products and services offered by these companies.

(Perspective article: Discover Artificial Intelligence (2022) 2:23 | https://doi.org/10.1007/s44163-022-00038-0)
In the case of AI, the hype surrounding the field for quite some time now contributes to the fact that many companies today use the term 'AI' in advertising their products simply because it is so fashionable. However, there is a danger of overselling AI if the term 'AI' is used just for marketing purposes, leading to what Waddell calls "AI washing" and a "fake-it-till-you-make-it attitude": "If you can swap in 'data analytics' for 'AI' in a company's marketing materials, the company is probably not using AI" [34].
Some reality check here can introduce another perspective. Figure 1 shows the declining trend in popularity of some of the best-selling AI technologies-ML, deep learning and NLP-on the Gartner Hype Cycle curve [35] for AI. Companies are typically interested in how they can build different AI innovations into products, i.e. in the rightmost end of the curve. Contrary to that, researchers typically focus on emerging topics that belong to the leftmost, rising end of the curve. In the past few years, these have included, e.g., knowledge graphs, small data, and generative AI (not shown explicitly in Fig. 1). Such technologies typically do not get much attention in industry before investors become interested in them. For instance, the approach called small data refers to what Andrew Ng calls "smart-sized, 'data-centric' solutions" [36]. The idea is very appealing: instead of training neural networks with huge volumes of data, some of which may be inconsistent, noise-generating, or erroneous, how about selecting small sets of representative, high-quality data? This would not only drastically reduce the training time-it would be much closer to how humans build their mental models of the world [37]. But before this idea becomes actionable, research efforts are needed to find out how to get small-size, good-quality data.
Another phenomenon related to the notion of identity of AI and interesting in the context of industry/academia dichotomy is that of the AI effect-discounting (mostly by researchers) AI systems, devices and software as not being really intelligent but "just technology", after results are achieved in meeting a challenge at the leftmost slope of the curve [38]. On the other hand, companies are interested in further developing that same "just technology", provided that it survives the test of time long enough to prove profitable.

Artificial General Intelligence (AGI) vs. narrow AI
Informally, if a program, an agent, or a machine represents and/or embodies generalized human cognitive abilities and is capable of understanding, learning, and performing any intellectual task a human is capable of, it is considered to be AGI, or an AGI system/entity. Just like humans, when faced with an unfamiliar task, AGI can act in order to find a solution. It is expected to achieve complex goals in a variety of environments, while simultaneously learning and operating autonomously. AGI, or strong AI, is best understood as the original goal of AI as a discipline [39], as opposed to many current practical AI systems, called narrow AI, capable of performing specific tasks (e.g., self-driving cars, face recognition technology, and checkers-playing programs). Just like AI can be discussed both as an area of study and as an entity, AGI can also be seen both as a theoretical and practical study of general, human-level intelligence, and as an engineered system that can display and exhibit the same general intelligence as humans.
Although currently AGI does not exist, there is a slowly growing research community around that idea. There is also an open discussion between supporters and opponents of AGI. Some supporters believe that AGI will be realized in this century [40]. On the other hand, opponents argue from philosophical and other points of view that AGI cannot be realized [41].
The open discussion between supporters and opponents of AGI fuels a lot of speculation that AGI development will eventually produce artificial superintelligence that surpasses human intelligence and replaces humans as the dominant life form on Earth [42]. Researchers like McLean et al. and Naudé and Dimitri fear "an arms race for AGI" that would be detrimental to humanity if the resulting AGI systems break out of control [43,44]; they analyze sources of risk for such a scenario and propose ways to mitigate these risks.
The attitude of the majority of the AI community when discussing AGI still remains reserved. Some researchers propose to strive to create not full-fledged AGI systems but practical AI systems "compatible with human intelligence" [45], i.e. "systems that effectively complement us" [29] and represent "the amplification of human intelligence" [45].
An important milestone on the path to achieving AGI would certainly be overcoming Moravec's paradox [46]. It refers to the fact that cognitive tasks that are difficult for humans can be relatively easy for computers, and vice versa. "It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility" [46].
Meanwhile, much of AI remains narrow AI, meaning that current AI systems carry out specific 'intelligent' behaviors and perform specific practical tasks in specific contexts. There is nothing wrong with this approach-the history of AI so far has seen a number of useful results, systems, and technologies that are all narrow AI. Moreover, interconnecting multiple narrow AI systems might lead to a higher-quality outcome [29], and narrow AI systems pose no existential threat to humans [44].

Explainable AI vs. black-box AI
In a recent philosophical study of intelligence [47], explanations and explainability are identified as important characterizations of the epistemic distinctiveness of the notion of intelligence. Informally, an explanation is a clarification of a certain phenomenon, process, procedure, or fact(s), and typically comes as a set of statements that describe the causes, context, and consequences of the phenomenon/process/procedure/fact(s). Explainability itself is defined as follows: given a certain audience, explainability refers to the details and reasons a model gives to make its functioning clear or easy to understand [48]. Put more simply, explainability is a property of those AI systems that can provide a form of explanation for their actions [13].
It was recognized long ago that many AI systems, in order to be useful and interactive, should be able to explain their built-in knowledge, reasoning processes, results and the recommendations they make [49-53]. Remember, though, that back then most of AI was focused on symbolic and logical reasoning, and that the field of NLP was not as advanced as it is today. Thus the early explanation approaches and techniques all had their limitations. Consequently, early AI systems could not carry on a convincing, explanatory dialog with the user in real-world environments.
The interest in AI systems that can generate meaningful explanations of their behavior has revived with the AI explosion in the last decade. However, it has been accompanied by a considerable shift of focus, due to the rapid development and dominance of ML. In ML, especially in neural networks and deep learning models, the complexity of opaque internal representations is huge. Although these new models are more effective and exhibit much better performance than the early models, they are also characterized by reduced explainability. As Rodu and Baiocchi write, "some of the most successful algorithms are so complex that no person can describe the mathematical features of the algorithm" [54]. This has led to the rise of a whole movement in AI, called explainable AI or XAI [55-58], best reflected in DARPA's large project with the same name [59].
XAI is defined in a way similar to how explainability is defined: given an audience, an XAI system is one that produces details or reasons to make its functioning clear or easy to understand [48]. In other words, an AI model is explainable if it is possible to trace (in detail and in a manner understandable to humans) its reasoning, its decision-making processes, its predictions and the actions it executes. Note the 'audience' part in these definitions; as Arya et al. note, one explanation does not fit all-each explanation generated by an XAI system should target specific users and should be tailored for their interests, background knowledge and preferences [60]. Also, explainability is not exactly the same concept as interpretability, which is the ability to explain or to provide the meaning in terms understandable to a human; a model can be explained, but the interpretability of the model is something that comes from the design of the model itself [48,61].
Wu notes an important detail here: simple models, like decision trees and logistic regression models, are interpretable-humans can directly trace how these models transform input to output by examining the model parameters [62]. In other words, there is no need to further explain such models; they are interpretable and self-explanatory.
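Wu's observation can be illustrated with a toy logistic regression whose behavior is entirely visible in its parameters (the feature names and coefficient values are made up for this example): each feature's contribution to the decision is simply its weight times its value, so a human can trace the prediction directly.

```python
import math

# An interpretable model: with the weights in plain sight, a human can trace
# exactly how each feature pushes the prediction up or down.

WEIGHTS = {"age": 0.04, "blood_pressure": 0.02}  # hypothetical coefficients
BIAS = -4.0

def contributions(features):
    """Per-feature contribution to the logit: weight * value."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

def predict_proba(features):
    """Logistic model: sigmoid of the bias plus the summed contributions."""
    logit = BIAS + sum(contributions(features).values())
    return 1.0 / (1.0 + math.exp(-logit))
```

For instance, with age 50 and blood pressure 100, each feature contributes 2.0 to the logit, the bias cancels them out, and the predicted probability is exactly 0.5; no further explanation machinery is needed.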
On the other hand, there are models that are not straightforwardly interpretable to humans. We call them black-box models [58, 61-63]. Currently, these black-box models usually outperform interpretable models on unstructured data (like text and images), but there are cases when interpretable models perform on par with black-box models, e.g. in the case of modeling structured clinical data [61].
The current dominance of ML in AI is implicitly reflected in the previous paragraphs. Moreover, some researchers call the entire XAI field just explainable ML [61] or interpretable ML [63]. Some of the techniques typically used in explainable ML include identification of feature importance/dominance, development of simple interpretable surrogate models based on the predictions of the black-box model, and various visualization techniques.
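The first technique mentioned above, identification of feature importance, can be sketched in a few lines as permutation importance (a stdlib-only toy; the model, data, and names are invented for illustration): permute one feature's column and measure how much the black-box model's error grows.

```python
import random

# Permutation importance sketch: if shuffling a feature's column barely changes
# the black-box model's error, that feature mattered little to the predictions.

def mean_abs_error(model, rows, targets):
    """Mean absolute error of the model's predictions on labeled rows."""
    return sum(abs(model(r) - t) for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(model, rows, targets, feature_index, rng):
    """Error increase after shuffling one feature's column across the rows."""
    baseline = mean_abs_error(model, rows, targets)
    column = [r[feature_index] for r in rows]
    rng.shuffle(column)
    permuted = [r[:feature_index] + [v] + r[feature_index + 1:]
                for r, v in zip(rows, column)]
    return mean_abs_error(model, permuted, targets) - baseline
```

A feature the model ignores yields an importance of zero, since shuffling its column leaves every prediction unchanged; the technique treats the model purely as a black box, which is exactly why it is popular in explainable ML.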
The rivalry between black-box and explainable models is actually quite complex. "Explainability may not be vital in all domains, but its importance becomes underlined in domains with high stakes such as AI applications in medical diagnostics, autonomous vehicles, finance, or defense" [64]. Also, "black-box predictive models, which by definition are inscrutable, have led to serious societal problems that deeply affect health, freedom, racial bias, and safety…" [63]. Likewise, inscrutability of black-box models can hamper users' trust in the system and eventually lead to rejection/abandonment of the system; "algorithmic biases [of black-box systems]… have led to large-scale discrimination based on race and gender in a number of domains ranging from hiring to promotions and advertising to criminal justice to healthcare" [58].
Loyola-González proposes a fusion of explainable and black-box approaches in real-world domains, stressing that "experts in the application domain do not need to understand the inside of the applied model, but it is mandatory for experts in machine learning to understand this model due to, on most occasions, the model need to be tuned for obtaining accurate results" [65]. Petch et al. elaborate on this idea by proposing to train models using both black-box and interpretable techniques and then compare the accuracy of their predictions [61]. In practice, there are a number of applications in which experts are happy with post-hoc and/or sufficiently detailed explanations coming from the system (see, e.g., [66]). Still, situations also arise in practice in which AI systems behave in ways unexplainable even to their creators, which can create important security and interpretability issues [67].
Although users generally prefer explainable and interpretable systems to black-box ones, the cognitive load of interpreting the explanations such systems provide can still diminish the explanations' benefits. The task at hand must be difficult enough for explanations to benefit users, and it is important to define measures of explanation effectiveness, bearing in mind that these can change over time [57,64,68].

Open problems in AI
Several fields, topics and problems related to AI have recently been receiving a lot of attention in research publications, as well as in the press, videos and interviews. They extend the identity of AI in their own way.

Energy consumption and carbon footprint of AI
Carbon footprint is a term often used in discussions about sustainability. Adapting the definition from [69], it can be understood as the total greenhouse gas emissions caused by the activities of a person or an organization, or by an event, a service, a product, etc. It is usually expressed as the carbon dioxide equivalent of these activities, events, processes and entities.
The carbon footprint of AI is primarily related to the fact that today's AI systems, and especially large-scale deep neural networks, consume a lot of power. For example, Strubell et al. have calculated that the carbon emissions of training one large NLP model are roughly equivalent to the carbon emissions of five cars over their entire lifetimes [70]. Similarly, it is well known that the data centers that store data around the world consume huge amounts of energy [71], and much of today's AI relies on data.
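Estimates of this kind rest on a simple accounting identity: emissions equal the energy drawn by the hardware, scaled by the data center's overhead (PUE) and by the carbon intensity of the local grid. The sketch below illustrates the arithmetic; all of the numbers (GPU count, power draw, PUE, grid intensity) are illustrative placeholders, not figures from Strubell et al.:

```python
# Back-of-the-envelope carbon estimate of a training run, in the spirit of
# the estimates discussed above. All numbers below are illustrative.

def training_co2e_kg(gpu_count, gpu_power_kw, hours, pue, grid_kg_co2e_per_kwh):
    """CO2-equivalent emissions = energy drawn by the accelerators
    * data-center overhead (PUE) * carbon intensity of the local grid."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# 8 GPUs at 0.3 kW each, two weeks of training, PUE 1.5,
# and a grid intensity of ~0.43 kg CO2e per kWh.
print(round(training_co2e_kg(8, 0.3, 24 * 14, 1.5, 0.43), 1))
```

The same identity also shows why such projections vary so widely: each factor (hardware efficiency, PUE, grid mix) can differ by large multiples between deployments, which is part of the disagreement discussed next.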
Still, as Schwartz et al. carefully note, these things are not just black and white: much of the recent computationally expensive work pushes the boundaries of AI [72]. Also, Patterson et al. predict that the energy demand for data storage and model training will plateau and then shrink [73]. Hölzle explains the misconceptions that make projected power demands of ML models, like the ones reported by Strubell et al., largely inflated [74].
Along these lines, Aimee van Wynsberghe defines sustainable AI as "a movement to foster change in the entire lifecycle of AI products (i.e. idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice" [75]. Note that the focus here is not only on AI systems and technology; it also includes a great deal of social, environmental, ethical, and even political context [76].

Bias
"If the training data is biased, that is, it is not balanced or inclusive enough, the AI system trained on such data will not be able to generalize well and will possibly make unfair decisions that can favor some groups over others. Recently the AI community has been working on methods to detect and mitigate bias in training datasets and also in other parts of an AI system" [13].
This general statement can have serious consequences in practical applications of AI. If there are prejudices in the training data, or if the developers of an AI system unconsciously introduce their cognitive biases into the algorithm design, the results will be biased and often unfair [77]. For example, it has been reported that OpenAI's GPT-3 language model [78] exhibits bias in the form of strong associations between Muslims and violence [79]. There are many other examples of biased results in AI [77,80-84].
An obvious question here is: Why not simply filter out prejudices and other kinds of bias from the training data? It turns out that this is often easier said than done, because such filtering can eliminate data that is useful in other parts of the overall model [85]. It is also very difficult for developers not to embed subjective value judgments about what to prioritize in their algorithms, even when their intentions are good. Note also that if the training data is incomplete, there is a high risk of the data being inherently biased from the start. Moreover, as Dr. Sanjiv M. Narayan from Stanford University School of Medicine has commented in an interview [86], "All data is biased. This is not paranoia. This is fact." Finally, special care should be taken when there are datasets other than the one used for training the model; the model designers must ensure that the trained model generalizes well to the other, external datasets [87,88].
Another obvious question is: How can bias in AI be avoided or fixed, i.e. kept out of AI tools? It is usually not possible to do so completely, since the so-called 'static assumption' (that the data does not change over time and that all biases show up before the system is put into actual use) is typically not realistic. However, the problem is certainly mitigated by ensuring that the training data is representative of the target application context and audience, building multiple versions of the model with multiple datasets, conducting data subset analysis (verifying that the model performs equally well across different subsets of the training data), and updating the datasets and re-training the model over time [86]. The already mentioned 'data-centric' approach [36] is also helpful in reducing the initial bias to an acceptable minimum. There are also automated tools, such as IBM's AI Fairness 360 library (open-sourced through GitHub) and Watson OpenScale, as well as Google's What-If Tool, that can help spot biases early, visualize model behavior over multiple datasets, and mitigate risks.
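The data subset analysis mentioned above can be sketched in a few lines: compute the model's accuracy separately for each subgroup of the evaluation data and flag large gaps. The records, group labels, and the 0.1 gap threshold below are invented for illustration:

```python
# A minimal "data subset analysis" sketch: compare model accuracy across
# subsets of the evaluation data. Records and groups are illustrative.

records = [
    # (group, true_label, predicted_label)
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 1),
]

def accuracy_by_group(rows):
    """Accumulate (correct, total) counts per group, then divide."""
    stats = {}
    for group, truth, pred in rows:
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (truth == pred), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

per_group = accuracy_by_group(records)
gap = max(per_group.values()) - min(per_group.values())
print(per_group)
# Flag the model if performance differs too much across subsets.
print("possible bias" if gap > 0.1 else "looks balanced")
```

Fairness toolkits such as those named above automate this kind of per-group comparison (along with many more refined metrics), but the underlying check is this simple.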
Schwartz et al. have provided the most systematic and comprehensive study to date of bias in AI and of how to mitigate it [89].

AI ethics
The scope of AI ethics is broad. It covers, among other areas, data protection, environment, democracy, rule of law, security and policing, dual use, and human rights and fundamental freedoms, including freedom of expression, privacy and non-discrimination [94].
To operationalize AI ethical principles, companies should identify existing infrastructure that a data and AI ethics program can leverage, create a data and AI ethical risk framework tailored to the company's industry, change the views on ethics by taking cues from the successes in health care, optimize guidance and tools for product managers, build organizational awareness, formally and informally incentivize employees to play a role in identifying AI ethical risks, and monitor impacts and engage stakeholders [99].
Responsible AI is a somewhat narrower and more concise concept than ethical AI (AI ethics), although the two are often used as synonyms. It denotes a set of principled and actionable norms to ensure that organizations develop and deploy AI responsibly [95]. This boils down to assessing and explaining the social implications of using AI systems, guaranteeing fairness in AI development by eliminating biased data, processes and decisions, protecting data and users' privacy, documenting AI systems in detail and informing users when they are interacting with AI, enforcing AI system security (preventing attacks and unwanted changes in system behavior), ensuring transparency (interpretability, explainability) of AI systems, and complying with inclusiveness standards [95,100].
Trustworthy AI is another similar term, used to denote AI that is not only ethically sound, but also robust, resilient, lawful and characterized by a high level of trust throughout its lifecycle (design, development, testing, deployment, maintenance) [97,101]. Li et al. stress the interdisciplinary nature of trustworthy AI [96], and Thiebes et al. discuss key aspects of different pertinent frameworks and guidelines for achieving trustworthy AI [97].
Sustainable AI is a relatively new research topic in the field of AI ethics. It focuses on developing and deploying AI systems in a way compatible with sustaining environmental resources, economic models and societal values [75]. It includes measuring AI carbon footprints and the energy consumption needed for training algorithms, as well as addressing the tension between innovation in AI and general sustainable development goals, all while serving the needs of society at large.
Many authors stress that it is people who develop AI systems, and that if something goes wrong the problem is not really AI; it is the human factor [102-104].

Fear of AI (AI anxiety)
The recent huge wave of AI development and its integration into almost every aspect of modern living has also caused fear/anxiety in many people. This fear is multi-dimensional. Li and Huang have identified eight dimensions of AI anxiety [105]:
1. privacy violation anxiety: disclosure or misuse of private data stored in datasets, obtained by biometric devices, collected from surveillance cameras, etc.
2. bias behavior anxiety: discrimination against individuals or groups by AI systems
3. job replacement anxiety: worry about being replaced by AI systems or entities in the workplace
4. learning anxiety: perception of AI as being difficult to learn (as learning it becomes a must)
5. existential risk anxiety: fear that all intelligent life on Earth will be destroyed by AI
6. ethics violation anxiety: fear that the behavior of AI entities may violate the rules of human ethics in interaction with humans
7. artificial consciousness anxiety: fear that AI entities may develop human-like consciousness that will eventually set them apart from humans and give rise to a new, artificially created species
8. lack of transparency anxiety: discomfort about the opacity of AI training and decision-making processes
Many of these anxieties are fueled by negative propaganda, often spread by insufficiently informed individuals and doomsayers, or by exaggerated and as yet unverified claims appearing in the popular press (of the 'AGI is here!' kind; see, e.g., [42]). This is supplemented by widespread skepticism caused by AI failures [106]. The controversy here is that some scholars and influential people express concerns about the future of AI much akin to these anxieties, while others oppose this kind of fear and argue for the acceptance of AI because of the benefits it brings to humans [107,108]. The debate is still open.
Fear and anxiety are much studied in psychology; hence, research on AI anxiety is often conducted by psychologists in the context of different personality traits [109], fear of loneliness, of drone use, of autonomous vehicles, and of being unemployed, all of which are elevated by media exposure to science fiction [110-113].
Different sets of principles have been associated with the ethics-related terms discussed above. For ethical AI (AI ethics), proposed sets of principles include: transparency, justice and fairness, non-maleficence, responsibility, and privacy (in addition: beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity) [92]; beneficence, non-maleficence, autonomy, justice, and explicability [91]; privacy, manipulation, opacity, bias, human-robot interaction, employment, the effects of autonomy, machine ethics, artificial moral agency, and AI superintelligence (AGI) [93]; and proportionality and do no harm, safety and security, fairness and non-discrimination, sustainability, right to privacy and data protection, human oversight and determination, transparency and explainability, responsibility and accountability, awareness and literacy, and multi-stakeholder and adaptive governance and collaboration [94].
Responsible AI, a set of principled and actionable norms to ensure organizations develop and deploy AI responsibly, is associated with: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness [95].
Trustworthy AI, i.e. ethically sound, technically robust, resilient, and lawful AI, built with trust throughout its lifecycle, is associated with: robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability [96]; beneficence, non-maleficence, autonomy, justice, and explicability [97]; and reliability, safety, security, privacy, availability, usability, accuracy, robustness, fairness, accountability, transparency, interpretability/explainability, ethical data collection and use of the system outcome, and more (yet to be defined) [98].

Discussion: the big picture
Current identity of AI, emerging from the reviews and insights
presented in the previous sections, is summarized in Table 3. This big picture is still vague, and will require periodic updates and extensions over time due to the dynamics of the entire area of AI. It is also subject to different interpretations by different experts and other stakeholders, as they all focus on different aspects of AI. Still, it provides a general sense of what researchers, practitioners, educators, developers, end users, analysts, organizations, and policy makers have in mind when they use the term 'AI'. These different views and perspectives are unlikely to ever get completely unified, so their blend shown in Table 3 is not definitive; it is more like the current track record of the identity of AI. The left column of Table 3 lists the elements of the current identity of AI that have emerged in this research effort as distinguishing characteristics, qualities and themes of AI as an area of study. The right column is a condensed recap of the major approaches related to each specific characteristic; these approaches contradict each other to an extent and thus make the current identity of AI somewhat indistinct.
To shed yet another light on the big picture depicted in Table 3, one should keep in mind several other facts. Human-level intelligence, consciousness and general cognitive capabilities of machines are not yet in sight, but they nevertheless continue to present challenges to researchers. An intriguing observation in this regard is that children surpass even the most sophisticated AI systems in several areas and everyday tasks, like social learning, exploratory learning, telling relevant information from irrelevant noise, and being curious about the right kinds of things [1,114]. This is not to say that humans surpass AI systems in all tasks; successful AI achievements like large language models and game-playing applications are certainly better than humans at coping with the complexities of the corresponding domains. However, scientists are still polarized about the idea of creating AGI without consciousness and its own goals and survival instincts [45].
In addition, the disagreements about different aspects of the identity of AI shown in Table 3 are unlikely to be resolved soon, because developments in AI and in other relevant areas are interdependent. For example, neuroscience and cognitive psychology still cannot tell how human consciousness is created. Human reasoning is still a black box in itself, and much of it does not happen at a conscious level; hence XAI models can at best try to imitate human explanation capabilities in a convincing way. When it comes to trust, human verification of AI system decisions is still desirable and, in order to avoid subjectivity, should be based on objective performance measures.
Finally, it happens that AI techniques that have fallen out of focus return in new technological contexts. Knowledge graphs [115,116] are an example. Knowledge graphs are a natural continuation of earlier techniques like semantic networks, ontologies, and RDF graphs. While researchers and practitioners today know only from experience that deep neural networks can achieve great results in certain contexts [117], without fully understanding the details of their operation, knowledge graphs are much better understood and much more interpretable AI technologies, and they continue to develop. Their graph-based structures are suitable for describing the semantics of interlinked entities using the basic object-attribute-value model. This model is certainly limited in the types of knowledge it can represent and has other deficiencies as well [118], but combined with, e.g., a trained NLP model, it facilitates different tasks in question answering systems based on knowledge graphs. Another example is embedded AI, or embedded intelligence [119]. It is an AI topic area focused on the deployment of different AI techniques on edge computing devices. AI applications can be implemented on cloud servers and then run by edge and mobile devices, but recent developments in hardware technologies have enabled partial implementation of AI on edge devices as well.
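The object-attribute-value model behind knowledge graphs can be sketched as a set of triples with a pattern-matching query, which is the basic operation question answering systems perform over such graphs. The entities and relations below are invented for illustration:

```python
# A minimal knowledge graph as object-attribute-value triples, with a naive
# pattern query. Entities and relations are illustrative.

triples = {
    ("Ada_Lovelace", "occupation", "mathematician"),
    ("Ada_Lovelace", "collaborated_with", "Charles_Babbage"),
    ("Charles_Babbage", "designed", "Analytical_Engine"),
    ("Analytical_Engine", "type", "mechanical_computer"),
}

def query(subject=None, attribute=None, value=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return {
        (s, a, v) for (s, a, v) in triples
        if (subject is None or s == subject)
        and (attribute is None or a == attribute)
        and (value is None or v == value)
    }

# What did Charles Babbage design?
print(query(subject="Charles_Babbage", attribute="designed"))
# In which triples does the Analytical Engine appear, in any role?
print(query(value="Analytical_Engine") | query(subject="Analytical_Engine"))
```

Real knowledge graph stores add schemas, inference, and query languages such as SPARQL on top of this idea, but the interpretability comes precisely from the fact that every stored fact is an explicit, human-readable triple.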

AI has expanded from early topics such as knowledge representation and games to predictive analytics, object recognition, deep learning, large language models, self-driving vehicles, sophisticated robotics, convincing applications and much more. It is nowadays discussed not only by researchers and practitioners, but also by business people, social scientists, government organizations, informed journalists and other stakeholders. All of them have their own views and understanding of AI, which further diffuses the notion of identity of AI and its defining characteristics and beliefs.
Complex deep learning systems underlying narrowly focused applications are often superior to humans in the corresponding tasks, with the drawback that they can "make a perfect chess move while the room is on fire". Contrary to that, children outperform complex AI systems in many simple tasks. This is just one of many controversies that surround AI and make its identity indistinct. The current discourse of AI research and development focuses predominantly on extending its limits and improving performance, rarely depicting such controversies of the intensive dynamics of AI in a comprehensive manner.

Table 3  The current identity of AI: distinguishing characteristics and condensed recaps of the major related approaches

Machine learning and especially neural networks currently dominate the entire area of AI. Traditional AI fields, such as representation and reasoning, cognitive modeling, games and computer vision, are still around. NLP and robotics are very popular, and currently rely largely on ML models. There is a notable overlap of AI with data science and predictive analytics, as both of these areas implicitly express "proprietary rights" over ML. Statistics, neurocomputing, pattern recognition, and other fields also partially overlap with AI.

C. Autonomous behavior (the degree to which an AI system can run independently): Autonomous robots and self-driving vehicles have advanced to an impressive level. Sensor and communication technologies have become essential elements of these systems. Still, much of AI is human-driven and goal-directed. Integrative approaches (human-in-the-loop) are being proposed to keep the best of both worlds.

D. Stakeholders' views (the different views of AI coming from different stakeholders): Researchers typically come up with ideas and experiments that push the boundaries of AI. Industry tends to build them into products only if the innovations proposed by researchers are profitable. Industrial applications rely on training models with huge datasets, whereas researchers have started to turn their attention to data-centric approaches with smaller datasets of quality data. Business analysts such as The Gartner Group regularly update research and development trends in AI; that perspective can be used to reconcile and harmonize the somewhat opposing approaches of academia and industry. 'AI' is overused as a marketing term. Government bodies and policy makers gradually develop legal documents to support AI development at the strategic level.

E. Explainability (the explainability and interpretability of AI systems and their behavior): The need for explainability and interpretability is evident and is often expressed in the context of the opaqueness of deep learning models. Black-box models typically perform better than explainable models. The XAI movement has not yet achieved all of its envisioned results. Tailored, user-centered explanations are still in the focus of relevant research groups and applications. Hybrid approaches are suggested, in which both explainable and black-box models are used depending on the objectives and on specific user groups.

F. Ethics (guidelines and principles for developing and deploying AI systems according to the norms of human and social ethics): There is a great deal of overlap between the terms AI ethics, trustworthy AI, responsible AI, and sustainable AI. Concerns about transparency, bias, fairness, energy consumption, carbon footprint, user privacy, accountability, social responsibility, reliability and lawfulness have received a lot of attention in AI developments in recent years.

G. Other: To be maintained over time, as the area of AI advances. Currently AGI and AI anxiety are among the topics that receive considerable attention from psychologists, philosophers, and social scientists. AGI still remains a vision, although time after time the general press publishes excited claims that new developments are on the verge of AGI. In the meantime, interconnecting multiple narrow AI systems seems to be more realistic.