1 Introduction

The concept of artificial intelligence (AI) has been around since antiquity (e.g., [1]). It is clear from investigative literature (e.g., [2, 3]), popular culture (e.g., [4]), and even ancient philosophers (e.g., [5]) that humans have long been intrigued by the idea of creating artificial life, be it from stone or machines, with some sort of intelligence to help, serve, or protect human life. Modern AI has matured into a reputable science and technology thanks to the development of powerful computers, a better theoretical understanding of what AI is and how it works, and the availability of large amounts of data [6].

AI has been defined in many ways [7,8,9,10]. The different interpretations of AI generally converge on two major descriptions: (i) ‘the ability to think, understand, and problem-solve like a human’ and (ii) ‘the ability to mimic human thinking’. Another important aspect in defining AI lies in the words ‘artificial’ and ‘intelligence’. ‘Artificial’ generally refers to anything humans build (e.g., [9, 11]). ‘Intelligence’ refers to a computer’s ability to learn (independently), understand, and reason like a human [12]. However, there is currently no clear consensus on how to define intelligence (e.g., [13]). Instead, more philosophical concepts of intelligence (Weak AI and Strong AI) are often used to differentiate between varying degrees of machine intelligence (e.g., [12]). Machine Learning (ML) is often used interchangeably with AI; though related, the two are not the same. Machine learning is a subset of AI and describes a set of techniques used to solve data-related problems without being explicitly programmed [14]. In this article, by AI we refer to both rule-based and machine learning techniques [12], unless mentioned otherwise.
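
To make the distinction between rule-based and machine learning techniques concrete, the following minimal sketch contrasts the two on a hypothetical spam-filtering task. All data, labels, function names, and thresholds are made up for illustration only.

```python
# Toy contrast: rule-based decision logic vs. logic learned from labeled data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win a free prize now", "meeting at noon tomorrow",
            "free cash offer inside", "lunch with the project team"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Rule-based AI: a human explicitly encodes the decision logic.
def rule_based_is_spam(text: str) -> int:
    return int("free" in text or "prize" in text)

# Machine learning: the decision logic is estimated from labeled examples
# rather than written by hand.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

new_message = "claim your free prize today"
print(rule_based_is_spam(new_message))                        # 1
print(model.predict(vectorizer.transform([new_message]))[0])  # likely 1
```

Both approaches fall under AI as used in this article; the difference is whether the decision rules are hand-written or inferred from data.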

AI technology, including machine learning techniques, is capable of processing information (e.g., [15]), identifying patterns (e.g., [16]), making predictions (e.g., [17]), and even operating robots and autonomous devices (e.g., [18, 19]). Machine Learning (ML) and its subset Deep Learning (DL) power most digital applications today, providing efficiencies and new avenues for value creation. This trend will only continue as we move further into the fourth Industrial Revolution.

However, the future of AI is not without concerns. In recent years, ethical and moral dilemmas have emerged regarding how AI is used in modern-day applications (e.g., [20, 21]), specifically the use of AI in the public domain and the (un)intentional consequences machine learning algorithms have on human well-being and economic choices (e.g., [22]). In addition, policymakers who lack expertise in AI are not always equipped to prevent unethical or inhumane uses of the technology, nor do they want to limit their countries' digital competitiveness with overly stringent AI policies. It is clear that the advancement of AI needs to be governed by more human-centric principles (referred to hereafter as ‘humanizing AI’), ones that are easily understood by all stakeholders and that benefit society.

Left undefined, humanizing AI is an ambiguous concept, and a key challenge is that there is no universally accepted approach guiding best practice for the design and use of AI. In a narrow definition, humanizing AI means the process of creating and using AI that (i) understands not only human emotions but also human unconscious dynamics, (ii) has the capability to interact with humans in a natural, human-like manner, and (iii) during this interaction processes information in a similar way to people. Producing AI that processes information similarly to people does not automatically produce a symbiotic relationship between humans and AI; it is, however, a requirement for building a trusting relationship with machines. We believe that humanizing AI needs to manifest at multiple, interconnected levels to help bridge the gap between humans and machines that currently exists (e.g., [23]).

In this paper, we argue that AI conceptualization and application need to be less artificial and more human-like. We are not arguing that AI needs to look more like human beings, but rather that humanizing AI sets a foundation for AI development to integrate aspects of human intelligence, cognition, and behavior which complement human limitations and promote human values. We contribute to the existing literature on human-centric AI advancements by providing a motivational framework to explain the operationalization of AI today. This paper also provides a multilayered behavioral approach to developing and applying AI in a more humane and equitable way. The existing literatures on usability, user experience, human-centered design, and human–computer interaction (e.g., [24,25,26,27,28]) all have behavioral elements, but not all of them consider a multilayered approach. Even the ISO standards related to human-centered design for interactive systems [29] lack a multilevel viewpoint. However, [26,27,28] discuss the necessity of understanding technology development (and AI development with it) from a multilevel perspective. Our paper provides a unique viewpoint in this discussion. Finding a way to build a symbiotic relationship with AI as we transition into a digital world is of crucial importance if humanity wants to benefit from technology [30, 31].

To discuss the rationale for our framework and multilayered approach, we structure the paper as follows. First, we provide an overview of current concerns with AI. Next, we explain the importance of humanizing AI and introduce our framework for humanizing AI from a multilevel perspective. Finally, we conclude and suggest future research directions.

2 Current concerns with AI

AI’s existing and potential benefits are undeniable, but it may do more harm than good in some cases. Academics and business professionals frequently raise concerns about possible biases, lack of transparency and explainability, power imbalances, and liability, to mention a few issues [32].

Even the best AI tools can institutionalize existing biases that may be present in the training data. The creation and deployment of AI solutions have been predominantly male-led and concentrated in specific parts of the world, such as the US and China (e.g., [33]). The active field of AI has limited diversity in terms of gender and race [34]. This not only unbalances the beneficiaries of the technology, but also limits diversity in use and propagates potential bias in how AI functions (e.g., [33]).

Our experience with real-life AI projects is that companies often underestimate the difficulty of creating a clean and unbiased training dataset. Beyond being aware of the problem of possible bias, domain experts need to do considerable work to eliminate embedded biases in the training data and make a clean dataset available for the AI algorithm to learn from.

We often have very little visibility into how and why an AI tool makes its decisions, especially with deep learning neural network approaches. Further research is necessary to make deep learning approaches explainable. In general, businesses (and humans) will only trust AI-enabled systems if they can fully understand how AI makes decisions and predictions [35]. Tackling issues of explainability is difficult, as biased modeling is often discovered only after the development stage [36]. Therefore, researchers need to focus not only on post-modeling explainability but also address explainability in earlier stages. Current standardization efforts (e.g., [37]) aim to bring a harmonized approach to the issue of transparency. Beyond building these frameworks, it is also important that experts from various domains speak the same language. To address this, the IEEE 7007 Working Group has developed a set of ontologies representing norms and ethical principles, data privacy and protection, transparency and accountability, and ethical violation management [38].
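
To illustrate what post-modeling explainability can look like in practice, the following minimal sketch applies one widely used post-hoc technique, permutation feature importance. The data and feature names are synthetic, and the example is not drawn from the cited standardization work.

```python
# Post-hoc explanation sketch: permutation feature importance on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                               # three synthetic features
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)  # only feature 0 matters

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffling one feature at a time and measuring the drop in accuracy indicates
# how much the model relies on that feature: an explanation produced after training.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["feature_0", "feature_1", "feature_2"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Such post-hoc summaries are useful, but, as argued above, they do not replace attention to explainability in the earlier design stages.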

There seems to be a global consensus that policymakers will need to establish rules and control mechanisms to ensure that AI tools are safe and will aid rather than harm society (e.g., [39, 40]). However, there is no global consensus on how to build a framework to regulate AI. For example, the European Union and the United States differ widely on how they want to address AI regulation.

To create an EU-wide framework, the European Commission issued a draft proposal of the Artificial Intelligence Act to the European Parliament on 21 April 2021. The proposal establishes a framework for determining whether an AI application poses a significant risk that would subject it to additional obligations, including a conformity assessment, auditing requirements, and post-market monitoring [41]. The US is taking a slower and more fragmented approach. Lawmakers have introduced bills that aim to control the unwanted effects of AI algorithms from various angles [42, 43]. Government offices and US regulatory agencies have also outlined their respective views and positions on regulating AI tools, but a comprehensive federal framework does not yet exist.

AI also has the potential to change both market and regulatory dynamics. Because of this, building the proper legal framework is necessary, not only to guard against individual and societal harm but also to ensure a level playing field for businesses. The current imbalance is putting considerable power into the hands of tech companies. To prevent market tipping, lawmakers may consider forcing tech companies in data-driven markets to share some of the data they collect [44] or explain how their algorithms work [45].

3 The importance of humanizing AI

It is not only the prospects of AI that need to be addressed but also the way it is currently used. Big data companies such as social media platforms are renowned for their ability to influence human behavior, which has led to scandals and data privacy breaches (e.g., [46, 47]). Not addressing this will only make it more difficult for policymakers to make any significant changes to how big tech companies leverage human data to maximize gain. It is naive to continue to think of humans as superbeings able to fully control themselves in the face of increasingly sophisticated online persuasion and manipulation tactics. Equally concerning is the way mechanistic algorithms (applications of narrow or weak AI) influence complex human behavior.

If AI had any kind of embodied representation today, it would have to be Mr. Spock from Star Trek. Governed by logic and capable of making rational decisions without being swayed by emotion, Mr. Spock would run the world without space for human error or irrational behavior, diminishing humanity to an artificial society governed by algorithms. A better representation of humankind would be Homer Simpson, limited in cognitive capacity, persuadable and irrational, but also caring and supportive. Homer Simpson would benefit greatly from Mr. Spock’s characteristics if they didn’t undermine human values.

Humanizing AI requires more than embodiment alone. We also need to consider the underpinning AI architecture paradigms that govern machine-to-human interaction [48, 49]. These underpinnings help us understand how machines engage with their environment to make sense of the world and how to interact effectively (e.g., [50]). Bringing this back to the Mr. Spock metaphor, this reflects his ability to sense, plan, and interact with the environment to find the best possible solution. AI must not become a replacement for human cognition; rather, it should be a tool to enhance human life.

4 Multilevel approach to humanizing AI

So far, we have discussed the reasons why we need to humanize AI. In this section, we discuss how a behavioral lens can help us humanize it from inception to deployment. Considering only human-centric usages of AI is not enough. A multilevel approach is required, one which focuses on AI from creation all the way through to societal impact. First, designing AI to think like humans is one way of bringing humans and AI closer together. Can behavioral science help us design AI technology that has the capability to take human thought patterns into consideration in its functioning, that is, to embed knowledge of human thinking into the algorithms [51]? Second, a behavioral lens is needed to consider more human-centric ways of serving people. How do automation and AI usage facilitate human functioning, and what about fairness and transparency? Finally, from a macro perspective, how can behavioral science facilitate a more positive and ethical impact of AI on society? We need to discuss the mechanisms underlying existing phenomena and behavior-informed strategies to humanize AI from the micro, meso, and macro perspectives.

5 Humanizing AI from an algorithm perspective (micro)

Creating more human-like AI starts at the programming level. As with the micro perspective in behavioral science, understanding the software (the brain) helps us understand the hardware (behavior). One of the main goals in AI development is to create intelligent machines that can understand, think, and act like human beings [52]. This type of AI is often referred to as strong AI or artificial general intelligence. Currently, AI's capabilities are narrow in scope (often referred to as weak AI or artificial narrow intelligence), being able to execute only specific tasks such as automation, surveillance, or autonomous driving.

Understanding machine intelligence and AI architecture is key to guiding more human-centric AI design. However, as AI computation becomes more ‘complex’, it gets harder to figure out how intelligent machines make decisions (e.g., [53]). To guide the evolution of AI operationalization in an explainable and responsible manner, various (micro-level) mechanisms need to be in place. These mechanisms, i.e., audit trails (e.g., [54]), interpretability (e.g., [55]), and algorithmic design choices (e.g., [56]), can guide AI development and deployment into the future.
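
As a concrete, if simplified, illustration of the first of these mechanisms, the following sketch wraps a model so that every decision is logged with its inputs, model version, and timestamp. The `AuditedModel` class, field names, and log format are hypothetical, not a standard API; the point is only that auditability can be designed in at the algorithm level.

```python
# Minimal decision audit trail sketch: append-only log of every prediction.
import json
from datetime import datetime, timezone

class AuditedModel:
    def __init__(self, model, model_version, log_path="decision_audit.jsonl"):
        self.model = model                  # any callable model
        self.model_version = model_version
        self.log_path = log_path

    def predict(self, features: dict):
        prediction = self.model(features)
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "inputs": features,
            "prediction": prediction,
        }
        with open(self.log_path, "a") as f:  # append-only audit log
            f.write(json.dumps(record) + "\n")
        return prediction

# Example: a trivial rule standing in for a trained model.
model = AuditedModel(lambda x: int(x["income"] > 50000), model_version="v0.1")
model.predict({"income": 62000, "age": 41})
```

In a production setting such a log would also need integrity protection and access controls, but even this minimal form makes individual decisions reviewable after the fact.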

5.1 Anthropomorphism

It is well known that advancements made in machine learning algorithms are often anthropomorphized to represent human-like features [57]. Anthropomorphism is defined as the attribution of human-like traits to non-human objects, animals, and entities (e.g., [58]). Some researchers and businesspeople argue that for AI to become more integrated into human life, or to enhance human properties, it needs to be more human-like [59]. AI researchers argue that to become more human-like, AI needs to exhibit human characteristics such as conversational abilities (e.g., [60]), using mental shortcuts to make decisions (e.g., [61]), being empathetic (e.g., [62]), or looking more human physically (e.g., [63, 64]). However, we should note the importance of delineating what human-like means for machines. With the term human-like we refer to the creation of behavioral similarities to humans in machines, and do not mean the ontological definition of human-likeness (e.g., humans as conscious, experiencing, emotional beings).

The development of AI functionality and mechanisms has been cognitively inspired and modeled after the human brain (e.g., [65, 66]). For example, the design of Artificial Neural Networks (ANNs) is based on the way neurons in the brain process and exchange information. The development of Convolutional Neural Networks (CNNs) used in computer vision is based on how cats process visual information neurologically [67]. Besides traditional artificial intelligence approaches, Bioinspired Intelligent Algorithms (BIAs) also model the functioning of human (or other living) organisms at the micro level (e.g., [68]). BIAs have strong underpinnings in neuroscience and biological systems, which are reflected in their working mechanisms. Genetic Algorithms (GA), Evolutionary Algorithms (EA), and Bee Colony Algorithms (BCA) are examples of BIAs. The benefit of using BIAs is that they are more explainable than traditional neural networks (e.g., [68,69,70]).
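
To illustrate the bio-inspired mechanisms referred to above (selection, crossover, and mutation), the following is a minimal genetic algorithm sketch. The fitness function and parameter values are arbitrary toy choices, not taken from the cited works.

```python
# Minimal genetic algorithm: evolve bitstrings toward a toy objective.
import random

def fitness(bits):
    return sum(bits)                      # toy objective: maximize the number of 1s

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover and mutation: recombine parents to refill the population.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```

Each step has a direct biological analogue, which is part of why such algorithms are often considered easier to explain than large neural networks.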

5.2 Anthropomorphic Algorithms

Some recent attempts to build more human-like AI at the micro level involve infusing neural networks with decision science theory to develop anthropomorphic algorithms that use mental shortcuts to mimic human decision-making (e.g., [61]). Heuristics and mental shortcuts, often referred to as cognitive errors or limitations of human intelligence, do serve an evolutionary purpose [71]. They help us make quick decisions in difficult or uncertain situations while using limited information and cognitive resources [72].

Infusing algorithms with decision theory helps machines think more like humans, make faster decisions by minimizing information requirements and computational power, and generate more accurate predictions in line with human cognition [73]. This could lead to a better user experience or higher customer satisfaction with product suggestions. Finally, it also addresses the issue of explainability, as the built-in shortcuts make the decision rules applied in complex models transparent (e.g., [74]). The latter is a significant issue for current modern AI, especially for ANNs. In sum, using behavioral theory to create more human-like AI not only helps address current limitations of existing AI, but can also provide pathways to more transparent operability and symbiotic human–machine design.
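
As an illustration of the kind of mental shortcut referred to above, the following sketch implements a well-known fast-and-frugal heuristic from decision science, "take-the-best": cues are checked in order of validity and the first cue that discriminates decides, ignoring all remaining information. The cue names and validities are hypothetical, and this is not the specific algorithm of the cited works.

```python
# Take-the-best heuristic: decide which of two cities is larger using the first
# discriminating cue, ordered by cue validity.
CUES = [                   # (cue name, validity), most to least valid
    ("has_airport", 0.9),
    ("is_capital", 0.8),
    ("has_university", 0.7),
]

def take_the_best(city_a: dict, city_b: dict) -> str:
    """Return which city is judged larger, based on the first discriminating cue."""
    for cue, _validity in CUES:
        if city_a[cue] != city_b[cue]:
            return "A" if city_a[cue] else "B"
    return "guess"         # no cue discriminates: fall back to guessing

a = {"has_airport": True, "is_capital": False, "has_university": True}
b = {"has_airport": True, "is_capital": True, "has_university": False}
print(take_the_best(a, b))  # -> "B": decided by the second cue alone
```

The decision rule is transparent by construction: the explanation of any output is simply the single cue that settled it, which is exactly the explainability benefit discussed above.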

One of the major reasons for anthropomorphizing AI beyond application design is to consider the ethical foundations of anthropomorphic design (e.g., [75]). Bioethical principles are often used as the basis for developing ethical AI, e.g., respecting human rights and dignity, accountability, transparency, and promoting well-being [57]. These ethical considerations are seen as important viewpoints in the design and application of AI [76, 77]. In fact, anthropomorphizing AI has the potential not only to provide perspectives on finding effective ways of coexisting, but also to provide a foundation for the ethical use of AI and how it can enhance human properties and life. For example, anthropomorphizing AI can support ethical considerations beyond existing bioethical principles, providing a broader perspective on the meaning of ethical use. [78], for instance, questions whether it is ethical for AI applications or robots to make certain people (those in need of social connection, e.g., the elderly or the mentally unwell) believe that the machine is capable of building an emotional connection. Broadening the perspective of ethical use will become increasingly important as more human-like AI becomes available.

Advancing AI through an anthropomorphic lens is a discussion that we believe requires further attention, as the human–machine relationship is not purely objective and rational but is governed by experiences, emotions, and heuristics.

6 Humanizing AI from an application perspective (meso)

In this section, we consider potential approaches to humanizing AI from an application perspective where the ‘how’ is emphasized more than the ‘what’. Technology is always a means to an end and the way it is used depends on the intended purpose. Technology often emerges from a human-centric purpose motive (e.g., improving humanity, helping people to stay connected, making tasks easier). As time passes, the profit motive takes over and users become the target of exploitation, especially if investor metrics are involved in the further development of applications.

This development is prominent in consumer applications such as social media (e.g., Facebook), food delivery (e.g., Deliveroo), and ride sharing (e.g., Uber), which use algorithms to maximize profit margins and influence users and service providers [79]. Though issues relating to online exploitation and manipulation (e.g., [80]), psychological harm (e.g., [81]), data privacy (e.g., [82]), and misconduct (e.g., [83]) have been reported, little preventative action is being taken.

AI developments for business focus mainly on automation (e.g., process automation, robotics), smart solutions (e.g., just-in-time production, IoT-enabled smart buildings), and programs that help business managers make better decisions (e.g., talent management platforms, business intelligence systems). Businesses see the development and application of AI as a strategic imperative to create business value through increased efficiencies, revenue, or cost reductions (e.g., [84, 85]). However, there are also many company-focused AI solutions aimed at tracking and spying on employees. Whether in consumer or business contexts, the collection and use of personal data need to be governed and curated based on ethical considerations and human-enhancing properties. To achieve this, well-being metrics should be considered before investor metrics. This would fundamentally change future business-to-consumer (B2C) and business-to-business (B2B) application development. Other mechanisms which can be used to ensure more human centricity, accountability, and safety in application are audit trails (e.g., [54]), Responsible AI governance (e.g., [86]), and data bias and proportionality checks (e.g., [87]), among others.

Human-like AI application also needs to be considered within ‘Industry 4.0’ (I4.0), the term given to the fourth industrial revolution currently taking place, characterized by big data, velocity and connectivity, cyber security systems, and embedded intelligence often referred to as smart technologies (e.g., [88]). The exponential growth of data and of machine-to-machine and machine-to-human connectivity within I4.0 brings various knowledge management and data interpretation challenges. Ontologies can provide a solution to bridge the complexity of data semantics and enable machine reasoning (e.g., [89, 90]). Within I4.0, machine intelligence needs to move away from narrow AI approaches and evolve toward cross-domain intelligence.
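
To show how an ontology can make machine data amenable to machine reasoning, the following minimal sketch uses the rdflib library. The namespace, class hierarchy, and instance data are hypothetical I4.0 examples, not drawn from the cited works.

```python
# Tiny ontology sketch: shared semantics let a machine reason over shop-floor data.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/factory#")
g = Graph()
g.bind("ex", EX)

# A tiny ontology: TemperatureSensor is a kind of Sensor.
g.add((EX.TemperatureSensor, RDFS.subClassOf, EX.Sensor))
# Instance data coming from the shop floor.
g.add((EX.Sensor42, RDF.type, EX.TemperatureSensor))
g.add((EX.Sensor42, EX.attachedTo, EX.Machine7))

# A machine (or another system) can now query over the shared semantics:
# find everything that is a Sensor, including instances of its subclasses.
results = g.query(
    "SELECT ?s WHERE { ?s rdf:type/rdfs:subClassOf* ex:Sensor }",
    initNs={"rdf": RDF, "rdfs": RDFS, "ex": EX},
)
for row in results:
    print(row.s)   # -> http://example.org/factory#Sensor42
```

The value here is that the meaning of "sensor" is encoded once and shared, so heterogeneous systems can interpret each other's data without bespoke integration logic.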

7 Humanizing AI from an organizational perspective

AI is reshaping the ways organizations operate. Bionic companies, which combine human capabilities with machine intelligence, are no longer viewed as futuristic. According to a recent McKinsey survey, the three business functions where AI adoption is most common are service operations, product and service development, and sales and marketing [91].

Technology complexity is not the biggest obstacle to large-scale AI adoption – human nature is. Resistance to change and fear of the unknown are often cited as key barriers to organizational adoption of AI, especially in core business areas where machines are used to perform complex cognitive functions [92]. While employing AI to automate highly manual processes (process automation) is gaining widespread acceptance, intelligent automation (decision automation) still needs to earn trust [93].

A recent research report by IBM argues that “trusted, explainable AI is crucial to widespread adoption of the technology”, including maintaining brand integrity and meeting regulatory compliance [94]. The efforts behind explainable AI (XAI) aim to address the concerns related to lack of trust and seek to ensure that humans can easily understand the machine’s decisions by providing solutions to black box issues and transparent explanations. But will these efforts be enough?

Research suggests that people supported by machines frequently trust the decisions of AI more than they should. This is even true for explainable AI solutions. Recently, a Google engineer believed that the company’s DL-enabled chatbot ‘LaMDA’ had become sentient due to its human-like conversational abilities [95]. Research evidence indicates that, in many instances, people make worse decisions than they would without the assistance of machine intelligence [96, 97]. This raises the question of what is needed to humanize AI and make it more easily accepted by organizations.

First, AI tools have several advantages in performing certain customer-facing tasks quicker and more accurately than humans, especially when they can demonstrate high cognitive intelligence and empathetic behavior. These solutions, however, must be accepted and trusted by customers, and they need to deliver socially acceptable performance [98]. Our view is that once customers and society accept and trust AI tools, organizations will endorse them more readily.

Second, AI tools need to focus on more than explainability and ethics, eliminating unwanted bias in decision-making, and showing perceptible empathy. They also need to deliver the diversity of human decisions. If we let only a very few algorithms perform specific cognitive tasks, we might end up amplifying systemic risks, such as flash-crash events on the stock markets due to high-frequency trading [99], or monopolizing the core software engines behind cognitive AI tools [100].

Third, AI tools must convey to human users that their decision automation is subject to errors; not all automated decisions will be accurate. The higher the cognitive function, the more likely they are to make mistakes, just like humans. We tend to have more trust in people who are like us in some way, and it is easier for us to predict the reactions of people who resemble us [101, 102]. This notion needs to be designed into AI tools and employee training during the operationalization phase if the AI solution is to be easily accepted by organizations and have the desired result.

Lastly, organizations will need to manage the risks that come with introducing intelligent machines. Operationalizing low-cognitive AI solutions (such as process automation) poses fewer risks, which can be reasonably mitigated by appropriate controls [103]. The ultimate danger is that intelligent machines might seize control over their environment and refuse human control. Further research addressing this AI control problem, together with heavy-handed ex-ante regulation for highly cognitive AI tools, will help to mitigate the risk that, as the late physicist Stephen Hawking put it, “The development of full artificial intelligence could spell the end of the human race” [104].

8 Humanizing AI at the societal level (macro)

As mentioned before, AI as a technology has reached a level at which its usage has become ubiquitous. Although machine learning algorithms help us to process data efficiently, predict outcomes, and make intelligent decisions, it is the way these data analytic approaches are used that needs to be scrutinized. In this section, we discuss some of the more precarious applications of AI that have (or continue to have) a deep impact on human behavior and society at large. Note that our review does not go into the details of subjective and objective measurement of human well-being, as defined by [105]; the three societal examples below are presented only to highlight the need for more human-like considerations of AI in society.

9 China’s Social Credit System

In 2020, a countrywide social credit system (SCS) was implemented in China which scores citizens based on their offline and online behavior. Using AI-powered surveillance cameras, payment tracking through Alipay and other Chinese online payment methods, and social media surveillance, China’s centralized SCS can evaluate a citizen’s score and thus grant or restrict access to resources based on how well someone behaves (e.g., [106]). This application of AI monitors and socially engineers behavior and grants or denies access to public resources. From the perspective of humanizing AI, this approach is questionable as it monitors and coerces behavior in an opaque way, leaving it unclear to citizens how to influence their score.

SCSs are nothing new. Online platforms like Uber and Airbnb allow customers and service providers to evaluate the experience they had with each other. User-generated reviews act as a form of social validation and authority. Reviews not only affect the likeability of and demand for a service, but also impose a level of control over how service providers and customers interact with each other, based on the expected behaviors governing the system (e.g., [107, 108]). However, the application of social ranking systems beyond consumer apps needs to be further reviewed.

9.1 Facebook segmentation algorithms

In recent months, Facebook and its operating platform have faced significant public backlash because of how its algorithms promote hatred and discourage diverse views [109]. Facebook uses AI to classify and segment its users based on all kinds of characteristics, ranging from behavioral profiles and social ties to psychological traits it can infer using machine learning and pattern recognition. Facebook uses these data to enhance platform engagement and better serve its advertisers by offering more targeted advertising.
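
To make the mechanics of such segmentation concrete, the following toy sketch clusters synthetic behavioral features into micro-segments. The features and data are invented, and this is in no way Facebook's actual system; it only illustrates how users can be grouped into fine-grained segments that advertisers or recommenders might then target.

```python
# Toy behavioral micro-segmentation via clustering on synthetic user features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic per-user features: daily minutes on platform, share of political
# content engaged with, number of social ties.
users = np.column_stack([
    rng.gamma(2.0, 20.0, size=1000),
    rng.beta(2.0, 5.0, size=1000),
    rng.poisson(150, size=1000),
])

segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(users)
print(np.bincount(segments))   # number of users per micro-segment
```

The technique itself is neutral; the concern raised in this section is how such segments are subsequently used to shape what people see.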

However, the unintended consequence of creating micro-segments is that users get stuck in echo chambers and are less exposed to different information and opinions. Studies have found that this kind of algorithmic design leads to more siloed thinking, reinforcing latent beliefs and potentially fueling more hatred [109].

These events make clear that AI-driven social media platforms have a significant impact on attitude formation and offline behaviors and thus are not neutral. Platforms that serve humankind need to be transparent in operations and designed with human well-being in mind. Recent EU regulation aims to force social media companies operating in the EU to disclose how their algorithms work. The unintended consequences of for-profit platforms need to be considered during the design stages and addressed later if necessary. Allowing the profit motive to take over the purpose motive is a key challenge to be addressed in the pursuit of a symbiotic relationship between humans and machines.

9.2 Cambridge Analytica Psychographic Profiling

The 2016 US presidential election was criticized because of the use of online psychographic profiling to sway voter decisions. The company responsible for facilitating the online profiling and micro-targeting of US citizens was Cambridge Analytica. It allegedly used personal data, improperly collected through a personality test on Facebook, which provided rich user profiles to identify attitudes, beliefs, and opinions. These data were then used to create micro-targeted ads aimed at influencing election participation and voter decisions [110].

Another consequence of this kind of profiling is that user data can be used to make highly accurate predictions about political beliefs, sexual orientation, fears and desires, and other sensitive information that is impossible to obtain with traditional survey approaches unless respondents are asked directly [111, 112]. This means that data companies with advanced machine learning algorithms know more about a user than relatives or close friends do [113]. If this information is used to influence attitudes and behavior for political reasons, then AI poses a risk to the social fabric of humanity and to political integrity. Global institutions and countries need to address these applications and provide guidelines on ethical considerations in AI design and usage.

At the macro level, techno-social mechanisms need to be in place to ensure that the usage and advancement of AI are improving societal outcomes. This is necessary for technology adoption and trust in AI (e.g., [53]). Institutional mechanisms that govern AI development and deployment at the societal level focus mainly on protecting human values (e.g., equality, fairness, safety, and privacy – in [114]), accountability (e.g., holding AI developers accountable – in [115]), and incentives (e.g., funding or promoting AI-driven technologies that strengthen human values – in [116]). These mechanisms need to be in place to prevent both individual harm as well as societal harm (e.g. [40]).

10 The power motive of AI usage at the societal level

The three examples given highlight the power motive of AI usage at the societal level. The institutions that can install these mechanisms do so mainly to exert or expand their existing power over the people they control within their system. So far, the emphasis has been on governing and control (see Fig. 1, which represents our motivational framework of AI operationalization today: purpose, profit, and power). Using AI to socially engineer behavior, manipulate democratic voting decisions, and segregate people based on similar characteristics, values, and/or beliefs does not benefit human well-being or promote equality. It enables “divide and power” asymmetries within society and highlights major concerns related to privacy, surveillance, discrimination, and bias. It is also not unthinkable that, with ongoing datafication (the digitization of all aspects of our daily lives and the evaluation of these data), SCSs in various forms could emerge in other parts of the world [106].

However, we can see potential avenues for human-enhancing and societally desirable usage of AI despite these examples of human-limiting approaches. Enabling societal development using AI-powered systems can allow people to flourish in more ways than are currently practiced. AI systems can augment human functioning and replace repetitive or dangerous tasks, allowing humans to focus more on strengthening qualities such as creativity, connection, altruism and emotional intelligence. AI and other enabling technologies can be used to help provide equal access to resources to facilitate human growth and well-being, emphasizing morals, fairness, ethics and even philosophy.

Fig. 1 Motivations driving AI creation and usage

11 Conclusion and research directions

Modern-day AI development and deployment show great potential in creating value for business and society. However, the current state and the future of AI are not without concerns. Ethical and moral dilemmas have arisen in recent years due to AI usage in the public domain and the (un)intentional consequences algorithms have on economic choices and human well-being. Moreover, policymakers often lack the speed and motivation to regulate the market with the required laws in time, while overly strict AI policies can limit a country’s digital competitiveness. It is clear that new perspectives are needed to help solve current issues and advance the field. In this paper, we argue that AI conceptualization and application need to be less artificial and more human-like. AI development and deployment need to be governed by more human-centric principles, ones that are easily understood by all stakeholders and that benefit society. We therefore propose a multilayered behavioral approach to address the issues and potential solutions.

This paper also reviewed the mechanisms underlying existing phenomena and behavior-informed strategies to humanize AI from the micro, meso, and macro perspectives. In terms of mechanisms, we highlighted the importance of audit trails, interpretability, and algorithmic design choices at the micro level; Responsible AI governance and data bias and proportionality checks at the meso level; and techno-social mechanisms such as protecting human values, accountability, and incentives at the macro level. For strategies, we proposed solutions which help build trusted and explainable AI and support technology adoption, such as the development of anthropomorphic algorithms and other human-like features, making clear how algorithms make decisions, minimizing human and machine bias, and ensuring that the usage of AI augments and protects human life through incentivization and accountability.

Humanizing AI also means introducing ethical principles into the activities related to the planning, development and deployment of AI tools. The Responsible AI Guidelines, as developed by the US Department of Defense, provide detailed guidelines to ensure that ethical considerations are integrated into the design, training, and operationalization of AI, and define a process that is responsible, reproducible and scalable [117]. They go well beyond spelling out the need for explainability; they also aim to ensure that the decisions of AI tools are in line with human values. We believe that design research efforts should focus on investigating the core aspects of these themes, such as responsible AI, explainable AI and anthropomorphic design.

This paper provides various avenues for future research. First, from the micro perspective, future research should focus on exploring ways to build algorithms that are able to mimic human decision-making processes and make decisions in a more human-centric manner. This is not only important to make AI more understandable but is also an avenue to create intelligent systems with more general intelligence. Second, from the meso perspective, future research efforts should focus on finding ways to promote the application and adoption of more equitable, trusted, and responsible AI solutions to help overcome existing barriers and build a stronger relationship between humans and machines. Finally, from a macro perspective, future research efforts should focus on investigating and designing mechanisms that promote and secure human properties as the application of intelligent systems at the societal level becomes more common. The field of behavioral data science, where ML experts and behavioral scientists work together, can make a valuable contribution to humanizing AI in future research efforts.

Addressing today’s AI challenges is crucial if we want to build a more symbiotic relationship between humans and machines. Humanizing AI does not automatically lead to a more symbiotic relationship between humans and machines but does set a necessary foundation for its development based on human values and potential. This is not only important to build better AI, but also helps humankind to better understand what it means to be human in a digital world. Once again, we require wisdom to guide the future of AI.