Our WoS search covering 1991 to 2020 yielded 1125 publications in the area of ELSE implications of AI; no publications were found in this area prior to 1991. After analyzing and cleaning the initial dataset, 1028 publications were selected for more detailed analysis. The results of the data analysis are discussed in Sects. 3.1, 3.2, 3.3, 3.4, 3.5 and 3.6 below.
Global mapping of ELSE implications of AI publications and future trend predictions
We have presented the global trend of ELSE implications of AI publications from 1991 to 2020 (Fig. 1). Publication numbers remained relatively low (< 20 per 5-year interval) from 1991 until the 2006–2010 interval, which saw 36 publications, a 140% increase over the preceding 2001–2005 interval. Note that there were only 4 publications in the 1991–1995 interval, too few to be visible in Fig. 1. The following 2011–2015 interval saw 75 publications, an increase of 108% over 2006–2010. The largest jump occurred in the 2016–2020 interval, with a > 1000% increase in publications over the 2011–2015 period. This reflects the rapid development of AI and its application across diverse sectors, drawing attention to different aspects of the ELSE implications of AI.
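For concreteness, the interval-over-interval growth figures above follow the standard percent-increase formula. The short Python sketch below reproduces the reported 140% and 108% jumps from the stated interval counts; the 2001–2005 count of 15 is not stated directly but is implied by the reported 140% rise to 36.

```python
# Percent increase between consecutive 5-year intervals.
# The 2001-2005 count (15) is implied by the reported 140% rise to 36.
def pct_increase(old: int, new: int) -> float:
    """Relative growth of `new` over `old`, in percent."""
    return (new - old) / old * 100

print(pct_increase(15, 36))  # 140.0  -> reported jump into 2006-2010
print(pct_increase(36, 75))  # ~108.3 -> reported jump into 2011-2015
```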
The 2011–2020 period saw enormous growth in the application of AI technologies across various fields and, consequently, an increase in the concerns surrounding their use. Our prediction shows an increase in the volume of publications on ELSE issues of AI over the next 10 years (Fig. 2). Four hundred and twenty-six publications are forecast for 2021, which could increase to 572 in 2025 and to 754 in 2030. As research on AI increases and its applications permeate every sector of society, we can only expect an increase in human–AI interactions and in the ELSE implications of these interactions, and hence more publications.
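The text does not specify the model behind the Fig. 2 forecast. As a rough illustration only, the sketch below shows one simple approach: fitting an exponential trend to annual counts and extrapolating. The yearly values are hypothetical placeholders, not the study's data, and the study's own forecasting method may well differ.

```python
import numpy as np

# Hypothetical annual publication counts for 2016-2020 (illustrative only;
# the study's underlying yearly series is not reproduced in the text).
years = np.array([2016, 2017, 2018, 2019, 2020])
pubs = np.array([95, 130, 170, 230, 310])

# Assume exponential growth: fit log(pubs) as a linear function of year.
slope, intercept = np.polyfit(years, np.log(pubs), 1)

for target in (2021, 2025, 2030):
    forecast = np.exp(slope * target + intercept)
    print(target, round(float(forecast)))
```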
Global mapping of ELSE implications of AI publications by continent
The number of publications from 1991 to 2020 in each continent is presented in Fig. 3. Nearly half of the 1028 publications were from Europe (468; 46%). There has been at least one publication from most European countries, with England alone contributing more than a quarter of the publications from Europe. Europe is followed by North America and Asia with 339 (33%) and 136 (13.2%) publications, respectively. The USA and China, the world's leaders in AI investment [4], topped their respective continents with 280 and 32 publications. Since these two countries have massive committed funding for AI research, this high publication trend was not surprising. The remaining continents, Africa, Australia, and South America, accounted for a total of 85 (8%) publications. The distribution of publications across the world mirrors the economies of the world's nations and regions. This suggests that the use of AI technologies in these regions is either very low and/or there is little political or social willingness to consider ELSE issues there. However, the number of publications might go up in the coming years as AI research and investment pick up in those continents.
Research areas
We found that the ELSE implications of AI have been studied by researchers across various areas. We have grouped these sub-areas into six main research areas; Appendix 1 details the sorting of sub-areas under each broad research area. The main research areas were: Computer and Information Science (Computer Science, Information Science, Robotics, etc.), accounting for 34% of the 1028 publications (Fig. 4); followed by Humanities and Social Sciences (Sociology, Government and Law, Philosophy, Education, Communication, etc.) with 31%; Health Sciences (General and Internal Medicine, Radiology, Health Care, Neurology, Ophthalmology, etc.) with 15%; Business and Economics (Business, Commerce, Economics, Management, etc.) with 9%; and Science, Engineering, and Technology (Engineering, Science and Technology, Architecture, etc.) with 8% of publications. The Agricultural Sciences research area (Environmental Science, Occupational Health, Food Science, etc.) had the smallest proportion of publications, at 3%.
For agricultural sciences, the low number of publications is partly because the implementation of AI technologies in agriculture is still in its infancy. Of late, however, more research and national resources are being directed to this sector, with several public and private funding agencies investing in agriculture [2, 52, 55, 63]. The USDA National Institute of Food and Agriculture (NIFA) has funded four AI Institutes around agroecosystems and food since 2020 and has just announced another cycle of such funding for 2022. Ayed and Hanana [1] highlighted the role of AI in improving the food and agriculture sector in five areas: (1) efficient crop production and marketing; (2) efficient management techniques to combat climate variations and pest and weed infestations; (3) early weather forecasting that farmers can adapt their practices to; (4) monitoring of soil health by analyzing extensive satellite and camera imagery using machine learning and deep learning; and (5) protection of the environment through spot application of nutrients and pesticides using sensors combined with machine learning and robotics. They also indicated that the capabilities of AI technologies can help reduce the cost of employee training, minimize human error, and enable precise decision-making in the food industry. Modern innovations in agricultural production and the food industry have paved the way for novel food production and processing equipment and technologies, including smart machines and production lines [39]. AI-driven big data approaches have the potential to integrate agricultural food production, processing, food safety risk factors, and genomic data in ways that could transform public health strategies to prevent foodborne diseases and rapidly respond to outbreaks. As with any AI use, privacy, intellectual property, data ownership, and security are also top concerns [57]. While AI use in medicine, education, and even policy most directly affects people, AI use in agriculture also has the potential to dramatically impact ecosystems [30, 57]. Agriculture currently relies on large amounts of physical labor performed by field workers and data analysis performed by agricultural economists. Both roles could potentially be replaced by AI, with fieldwork taken over by robots and data analysis taken over by complex models [47, 57]. Though this may be cost-effective for the agricultural industry, it also raises important ethical questions of AI agency and accountability [47, 57]. Would an AI system or its creator(s) be held accountable in the event an AI error leads to a foodborne disease outbreak? Given the application of AI across sectors, it is important to explore and create ethical frameworks in other research areas, such as agriculture and government and law, as well.
In the area of legal studies, despite the evidently rapid growth and adoption of AI technologies around the world, most countries have not been able to develop and implement relevant regulations on AI usage at the same pace. A report by Cognilytica [17] indicated that most countries have little to no AI laws and regulations. Even in countries where such laws exist, they mostly concern data privacy and protection, leaving other equally important areas unaddressed, such as autonomous decision-making, facial recognition, conversational systems, malicious AI usage, AI ethics and bias, and the oft-debated question of AI legal personhood and liability [17]. Though the USA is a world leader in AI, it appears to have taken a laid-back, democratic approach to AI regulation. While there are no comprehensive federal laws on AI usage, some states have enacted their own laws [13]. It has been argued that the US approach fosters an enabling environment for AI research and development. This is in stark contrast to China's approach, where the government has an overreaching authoritarian interest in AI research and applications [13]. Across the world, the EU has the most comprehensive and harmonized AI regulatory framework [17], prohibiting some applications of AI (e.g., applications likely to violate fundamental rights), heavily restricting some high-risk applications (e.g., systems used as a safety component), and lightly regulating others deemed to be of low risk [21].
Sources of funding
Overall funding for AI research worldwide is increasing dramatically, including funding for research on the ELSE implications of AI. As mentioned earlier, the analysis presented here is based solely on the authors' acknowledgment of funding sources in their publications. We have not presented an analysis of the total amount of worldwide funding for ELSE implications of AI research, since we did not have access to such data and it was beyond the scope of our study. However, we have provided a general overview of worldwide AI funding in the introduction.
Of the 1028 publications analyzed, 269 (26.2%) acknowledged a funding source, and our analysis of funding sources is based on these 269 publications. The number of funding-acknowledged publications from each continent followed a trend similar to the overall distribution of publications by continent (see Sect. 3.2). Approximately half (131; 49%) of these publications were from Europe, followed by North America with 80 (30%) and Asia with 49 (18%); Africa, Australia, and South America collectively contributed 3%. We attribute Europe's lead partly to the fact that most European countries are connected through the European Union's single economic and political framework, whereas North America has no comparable economic or political union. Our analysis further revealed that 11 countries had no more than one funding source. Country-wise, the USA (26) topped the list with the largest number of funding sources, followed by England (21) and China (9). Also noteworthy is the gap in the number of funding sources across countries: the USA had 26 funding organizations, compared with countries such as Turkey, Cameroon, and Romania, which had only one each.
In the USA, research on the ELSE implications of AI has mainly been funded by public institutions. For example, the National Science Foundation (NSF), the National Institutes of Health (NIH), and several universities were acknowledged in 19, 14, and 14 publications, respectively. Eight publications acknowledged funding from other institutions such as the Centers for Disease Control and Prevention, the Office of Naval Research, and the National Aeronautics and Space Administration. In China, 5, 7, 3, and 1 publications acknowledged funding from the National Social Science Foundation, the National Natural Science Foundation, the Ministry of Education, and the Ministry of Science and Technology, respectively. In European countries, research on the ELSE implications of AI has been funded by the European Commission, the National Academy of Sciences, the Engineering and Physical Sciences Research Council, and other national institutes such as the Alan Turing Institute, the German Research Foundation, and the Swiss National Science Foundation.
Delving into the political economy of the respective countries indicates that funding sources in developing countries are most often limited to a single government agency, while in developed countries, such as those in Europe and North America, many private companies and foundations provide funding for ELSE implications of AI research in addition to government funding. This could be attributed to the fact that developing countries do not have as many resources as developed countries. This may call for the European Union and North America to share information on research funding strategies and models across the globe, so that the true impact of AI on society can be assessed at a global level.
Keyword and co-occurrence analysis
We performed a keyword analysis on the keywords (n = 5143) provided by the authors of the 1028 publications. The 30 most frequent simple noun phrases from our text analysis are presented in Fig. 5. "Artificial intelligence", predictably, had the highest frequency, occurring 580 times. This was much higher than the frequencies of the next five simple noun phrases, "artificial intelligence ethics", "machine ethics", "big data", "natural language processing", and "deep learning", which occurred 93, 46, 40, 33, and 27 times, respectively. Though the frequencies of the remaining phrases were below 20, they still carry vital information about the text we analyzed. The 30 phrases presented in Fig. 5 clearly show the two recurring themes that are the foci of this study: AI, and ELSE implications of AI. Seventeen of the phrases relate to AI and 13 to the ELSE implications of AI.
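The text does not name the tool used to extract simple noun phrases. A minimal sketch of one common approach, assuming spaCy's noun_chunks over the author keywords together with a frequency counter, is given below; the sample keywords are illustrative stand-ins for the full set of 5143.

```python
from collections import Counter

import spacy  # assumes the en_core_web_sm model has been downloaded

nlp = spacy.load("en_core_web_sm")

# Stand-in for the 5143 author-supplied keywords (illustrative values).
keywords = ["artificial intelligence", "machine ethics",
            "artificial intelligence ethics", "big data"]

counts = Counter()
for doc in nlp.pipe(keywords):
    # noun_chunks yields simple noun phrases; lowercase to merge variants.
    counts.update(chunk.text.lower() for chunk in doc.noun_chunks)

for phrase, freq in counts.most_common(30):
    print(phrase, freq)
```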
Vakkuri and Abrahamsson [64] employed a systematic mapping study to identify the key concepts of AI ethics. They identified key phrases such as “robot ethics”, “moral agency”, “human rights”, and “AI ethics”, similar to what we have found in our study.
Figure 6 shows the results of a keyword extraction approach based on the importance of each word (TextRank). The word cloud shows keywords not seen in the frequency analysis, including "responsible AI", "social implication", "algorithmic bias", "public policy", "robot rights", "ethical design", "ethical issue", "general data protection regulation", "human control", and "responsible innovation". The results of the text analysis highlight responsibility, privacy, transparency, trust, sustainability, and freedom as the main ethical principles in the publications we analyzed. Similar findings were reported by Jobin et al. [38] in a study on the global landscape of AI ethics guidelines.
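TextRank ranks words by running PageRank over a graph whose edges connect words that co-occur within a small window. A minimal sketch, assuming networkx for the PageRank step (the input sentence and the textrank_keywords helper are illustrative, not the study's implementation):

```python
import networkx as nx


def textrank_keywords(tokens, window=2, top_k=10):
    """Rank words via PageRank over a co-occurrence graph (TextRank-style)."""
    graph = nx.Graph()
    for i, word in enumerate(tokens):
        # Link each word to its neighbors within the sliding window.
        for other in tokens[i + 1 : i + 1 + window]:
            if other != word:
                graph.add_edge(word, other)
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]


# Illustrative input; in the study this would run over the keyword corpus.
tokens = ("responsible ai requires transparent and accountable "
          "design of ai systems").split()
print(textrank_keywords(tokens))
```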
As part of the co-occurrence analysis, we present a network of word co-occurrences showing how frequently a specific word follows another (Fig. 7). As in the keyword analysis, "artificial" and "intelligence" had the highest co-occurrence (shown by the wider network link between them). This both complements and reinforces the findings from the keyword analysis (Fig. 5), where "artificial intelligence" was the simple noun phrase with the highest frequency. The network shows a broader set of words and the links among them, providing insight into the concepts discussed in the publications we analyzed. Figure 7 shows links among words such as "health", "digital", "care", "policy", and "informatic"; we can deduce that these words come from publications that discussed the ethical and policy implications of digital health. Similarly, the network shows links among "moral", "agency", "artificial", "agent", "machine", and "responsibility", giving rise to the phrases "artificial moral agent", "moral agent", "moral agency", and "moral responsibility". These concepts usually occur in studies where the authors debate the accountability of AI, i.e., whether AI should be seen as a moral agent and whether an AI system or its developer should bear moral responsibility.
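A word-follows-word network like the one in Fig. 7 can be built from directed bigram counts, with each count serving as an edge weight (hence the wider link between "artificial" and "intelligence"). A minimal sketch over an illustrative token stream, since the study's exact tooling is not specified:

```python
from collections import Counter

# Illustrative token stream; in the study this would be the keyword corpus.
tokens = ("artificial intelligence ethics artificial intelligence "
          "policy moral agency moral responsibility").split()

# Count how often each word immediately follows another (directed bigrams);
# these counts become edge weights in a co-occurrence network.
bigrams = Counter(zip(tokens, tokens[1:]))
for (w1, w2), weight in bigrams.most_common(5):
    print(f"{w1} -> {w2}: {weight}")
```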
Content analysis of highly cited publications
Broadly, the emerging international AI code of ethics is centered on three guiding principles: transparency, accountability, and positive impact [15]. Within the existing AI literature, these principles have been applied both broadly and narrowly, forming everything from long lists of principles for AI development to ideal codes of ethics for deploying AI in medicine [26, 66]. Any field that could benefit from rapid, aggregate data processing has the potential to be shaped and changed by AI. AI could become an integral part of medicine, economics, policy, scientific research, marketing, customer service, engineering, and beyond [9]. We chose 30 highly cited publications, whose citation counts ranged from 40 to 232, for detailed content and thematic analysis. Of the 30 publications, ten each were in Humanities and Social Sciences and in Computer Science, and the remaining ten were collectively in the Medicine, Environmental Sciences, and Engineering research areas. We analyzed the 30 publications by looking for common themes, patterns, research methods, central research questions, and main results. We found that the existing AI ethics literature falls into three primary themes: (i) ethical framework, (ii) specific application, and (iii) ethics in practice (Table 1). We briefly describe these three themes below.
Table 1 Highly cited publications (> 40 citations) on ELSE implications of AI, thematically analyzed by ethical framework, specific application, and ethics in practice

Ethical framework
Ethical frameworks refer to publications with a primarily academic aim. These publications reviewed existing literature to define principles, models, and philosophical problems in research on the ELSE implications of AI. They are not concerned with a particular discipline, but rather with how AI ethics as a whole is conceptualized and analyzed. To varying degrees, most of the publications analyzed were concerned with beneficence, non-maleficence, autonomy, justice, and explicability, the key principles outlined by Floridi et al. [26]. Floridi et al. attempted to lay the groundwork for unpacking and understanding the problems and themes related to the production and use of AI in society by framing directions for future research and underpinning the current understanding of the issues. Many of the publications in this category were organized along similar themes. For example, Bostrom and Yudkowsky [11] discussed ethical problems in AI based on the need for morality, transparency, responsibility, auditability, and incorruptibility. Floridi and colleagues have published widely on the topic of AI ethics, including a highly cited introduction to a themed issue on "What is data ethics?" [28]. This article was presumably highly cited because of the definitions it provided for these prominent emerging topics in AI and data ethics. In 2004, Floridi published "Open Problems in the Philosophy of Information", defining 18 problems that must be solved in order to create and integrate ethical AI into people's lives [25]. Our analysis suggested that Floridi and colleagues were the most prolific publishers on AI ethical frameworks [26,27,28]. Other authors appeared less frequently among the highly cited publications we analyzed, but consistently referred to the frameworks and thematic framing of Floridi et al.'s work in their subsequent publications.
Another method of defining AI ethics is the creation of models. These models are grouped with the frameworks because they are often highly academic, adhere to many of the same principles, and a model is inherently a type of framework. Allen et al. [3] attempted to create a model for the design of moral AI systems, using well-known philosophical frameworks such as Kant's categorical imperative, deontology, utilitarianism, and consequentialism. Models in this area were mostly conceptual but contained suggestions for potentially quantitative work. For example, Wallach et al. [67] developed a cognitive science-based model that could theoretically be adapted for moral decision-making within AI. While these cognitive models present ethical and computational challenges, the extension of ethical AI frameworks to models that could be used by machines has been explored.
Finally, two publications fell into the context of a social framework for AI. Early work on this subject by O'Sullivan and Haklay [51] attempted to describe how AI fits into agent-based ethical models, and whether such a model even makes sense for the world in which we live. Floridi and Sanders [27] discussed the conditions necessary for AI to be considered a moral agent that can be held accountable for its actions, and what steps would be required to censure immoral acts by AI. In one of the earliest publications, Gasser reviewed the principles of distributed AI and proposed how it could be socially conceptualized to create functional AI systems [31]. This article shows that framing research on this theme has been actively developed for decades, even before AI began to assume a role in everyday life.
Specific application
The specific application theme categorizes publications concerned with how ELSE implications of AI arise within the context of one particular field, such as healthcare, the manufacturing industry, or environmental policy. These publications did not discuss broader philosophical problems or implications of AI outside their particular field. They were either more abstract, focusing on ethical questions, or more concrete, considering how to address ethical problems; but at either end of the spectrum, their focus remained narrow.
Many of the examples in this category came from healthcare. Research in the healthcare field appeared to have more readily embraced the boom in AI, while also being highly concerned about the ethical dilemmas arising from its increased use. A review by Yu et al. [73] touched upon everything from the challenges of collecting high-quality data to the consequences of AI use in healthcare, such as the loss of healthcare jobs, medical negligence and malpractice by AI, and data privacy concerns for patients. Many of these same themes were discussed in the more narrowly focused publications on healthcare AI. Hashimoto et al. [34] raised many of the same issues while examining the applications and ethical pitfalls of AI usage in surgery. Though they acknowledged that the data analysis capabilities of AI could be a useful tool for surgeons, they were also wary of surgeons being unable to assess how or why AI detects specific patterns in large datasets. Similarly, Ngiam and Khor [49] analyzed the opportunities for machine learning in oncology and concluded that doctors would need to understand how machine learning worked in order for its use to be ethical and successful. They also noted that substantial privacy concerns needed to be addressed for AI to be incorporated into oncology. A similar article by Ting et al. [62] looked at the potential for AI use in ophthalmology but ultimately drew conclusions similar to those of Ngiam and Khor [49], indicating that AI ethics issues were similar across healthcare subfields.
Outside of the healthcare field, we found a highly cited publication on the function of robots in common jobs and tasks [72]. This publication did not have a strong ethical focus but discussed how workplace interactions might be affected by the introduction of robots and how some current consumers perceive robot-provided services, indicating that current AI research recognized the need for both internal and customer-facing assessments of AI implications. Additionally, French and Geldermann [30] explored how AI-based quantitative analysis could be used in environmental decision-making. This publication did not pose explicit ethical questions but rather weighed the implications of implementing AI decision-making in an environmental policy context.
Ethics in practice
Ethics in practice falls at the intersection of the two previous themes. It examines current ethical principles and frameworks within the field of AI ethics and how they are applied by governments, businesses, non-governmental organizations, and researchers. As in Sect. 3.6.2, the field and type of application may vary, but generally all publications in this theme were concerned with how AI ethical frameworks can be, or have been, evaluated and used in "real world" contexts outside academia. Additionally, they focused on the broader context of applying AI ethical frameworks rather than on one particular field.
The moral machine experiment by Awad et al. [5] broadly explored how autonomous vehicles should resolve moral dilemmas by questioning individual study participants. Participants were presented with choices of whether a vehicle should swerve to hit a person or a pet, or stay on course, with the demographics of the people and pets varied in each scenario. The researchers were thus attempting to build a moral model for an AI technology based on the opinions of real people. This was one of the narrowest studies within this category. Others looked at broad issues, such as the ethical implications of ambient AI (AI that is embedded in everyday objects, such as household items) [58]; another examined current examples of how businesses, governments, and universities were using AI systems and suggested a three C model (Confidence, Change, and Control) for how such organizations could address the ethical concerns inherent in these technologies [40].
Similar to the previous categories, different authors organized their studies along identical themes. For example, Jobin et al. [38] examined how AI guidelines changed or overlapped based on location and organization by analyzing AI reports produced by private companies, governments, and academic institutions. Common emergent themes included transparency, justice and equity, non-maleficence, responsibility and accountability, privacy, freedom and autonomy, trust, sustainability, dignity, and solidarity [38]. For some of these categories, there were discrepancies in how the terms were defined and to whom they should apply. This shows that while common themes may exist in the realm of AI ethics, consensus on how to address the multifaceted ethical questions it raises still seems far away. In a bid to find some consensus across industrialized, Western societies, Cath et al. [15] analyzed AI reports issued by the EU Parliament, the US White House, and the UK House of Commons, attempting to answer the question of what constitutes a "good AI society". Transparency, accountability, and positive impact were highlighted across the three reports. Each report varied in its specifics and in the extent to which it laid out a concrete plan, but all highlighted how AI should help to build a "good" human society. Benefit to humans was a consistent thread across all three themes.