Artificial intelligence (AI) is “concerned with intelligent behavior in artifacts” [50]. AI is an umbrella term used to refer to computing systems with the capacity to accurately interpret input data, learn from such data, and use those learnings to complete specific tasks, including machine learning, deep learning, natural language processing, and computer vision [40]. As computing technology advances, it is expected that applications of AI will become increasingly prevalent in everyday life, especially in industrialized societies [15]. Due to the nature of AI, most of the foundational AI research has occurred within the field of computer science [19]. However, use-inspired AI has the potential to reshape the way we live, by incorporating AI into more systems that are fundamental to the functions of contemporary society, which consequently has far-reaching ethical implications [28].

AI is primarily a computer science discipline, but its applications transcend the field. Today, applications of AI are seen in areas as varied as healthcare, music, visual arts, linguistics, biology, agricultural sciences, and geology. As AI grows beyond its home field of computer science, new ethical, legal, social, and economic (ELSE) issues relating to its applications in new fields arise, while already known ones are brought to the fore. Economically, funding for AI has increased significantly worldwide, with a shift in agency. National governments were once the prime drivers behind strategic technologies, from networked systems to nuclear energy, and supported foundational work on AI techniques [4]. But today, governments mostly rely on private companies to build their AI software, furnish their AI talent, and produce the AI advances that underpin economic and military competitiveness [4]. Globally, an estimated $40 billion was invested in AI companies in 2020 alone. AI companies based in the United States of America (USA) had the largest share of this investment, $25.2 billion (64% of total global investment) [4]. Investments in the USA represent a 194% increase since 2015, followed by China and Israel with investments of $5.4 billion and $3.1 billion, respectively; these represent increases of 71% for China and 1109% for Israel since 2015 [4]. AI investment in Western Europe, India, Japan, and Singapore is growing quickly as well [4]. These statistics reflect a capital-intensive and lucrative AI enterprise operating worldwide across different sectors.

Legally, increased awareness of AI has led to the realisation that existing legislation, education, and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms [10, 29]. In 2018, Deloitte surveyed 1900 information technology and related business executives from seven countries [18]. More than 46% of the executives in that survey believed that AI was critically important to the success of their business, while between 16% (in China) and 49% (in Australia) expressed major concerns about ethical issues and risks associated with AI, including bias in AI algorithms, trust and transparency, cybersecurity vulnerabilities, autonomous decision-making, manipulation of information, and legal liability [18]. In a similar vein, Baum [8] suggested that AI should either be designed based on predetermined ethical laws or incorporate social choice analysis into the abilities of the AI itself.

Ethically, one of the central issues in AI ethics is how AI systems can be created such that they do not share the ethical biases of their creators [8, 41]. When Google’s word embedding tool was used to solve verbal analogy problems, researchers found that it performed the task with blatant and rampant gender bias [41, 65]. AI bias has the potential to be a particular problem in fields prone to unequal treatment of individuals, such as medicine, or in fields that could suffer severe environmental impacts if the AI involved is concerned only with maximum yields, such as agriculture [30, 32, 66]. Bias is far from the only concern accompanying AI proliferation: while bias applies to the AI itself, additional concerns about how AI is utilized and regulated need deliberation. Transparency, credibility, auditability, reliability, and recoverability of AI data are believed to be some of the essential elements needed to combat implicit biases and to ensure that AI is created to be, or can be adapted to be, as neutral and ethical as possible [42].

For the responsible development of AI, it is imperative that ELSE issues be incorporated from the outset of technology development, both for better adoption and to gain the trust of stakeholders. Many existing studies attempt to create ELSE frameworks for AI evaluation [3, 11, 26, 27]; however, studies at a global level are missing. To explore the ELSE implications of AI research and existing ethical frameworks globally, we conducted a scientometric study. The objectives of this study were to (i) map and predict the global trends of publications on ELSE implications of AI, (ii) critically evaluate the ELSE implications of AI in different research areas (disciplines), and (iii) qualitatively analyze the existing ethical frameworks and major themes used in highly cited publications related to ELSE implications of AI.

Materials and methods

To accomplish our study objectives, we employed a mixed-methods design, combining quantitative methods (a scientometric approach and a keyword and co-occurrence analysis) with a qualitative method (content analysis). We elaborate on each of these research methods below.

Application of scientometrics analysis

Scientometrics can be defined as the “quantitative study of science, communication in science, and science policy” [35]. Scientometric approaches have been used effectively in several studies to map citation patterns, understand the social networks of research areas, perform thematic analyses of research, and project future trends of a research area [46, 59]. We used the “Web of Science” (WoS) database (accessed via University of Maryland Library, 2021) [68], which is widely regarded as one of the most reliable sources of information on published work in a given research area and has therefore been used extensively by researchers [46, 59]. This database covers different types of publications, namely peer-reviewed journal articles, proceedings, editorials, book chapters, and review papers.

Data collection from Web of Science and other sources

Data collection from the WoS database consisted of a two-stage process. First, we performed an initial search using the term “social and ethical implications of AI” to retrieve all the publications in this area. Subsequently, we conducted advanced searches using different combinations of targeted words or phrases, such as “social implications of AI”, “ethical implications of AI”, “AI and ethics”, “AI and legal issues”, and “AI and economics”, to capture as many publications as possible that addressed relevant aspects of ELSE implications of AI. We then carefully checked both the initial and advanced searches to include all the relevant publications for analysis without missing any publication in this area. We combined the results from each search and removed the duplicates. We considered publications between 1991 (the earliest publication in this area available in the WoS database) and December 2020 (when the data were downloaded for analysis). In addition, we used Google Scholar to search for highly cited publications in the area of ELSE implications of AI. Each publication was further screened by title, keywords, and abstract to confirm that it fell within the area of ELSE implications of AI. Publications were excluded when a screening of their titles and keywords showed that they were not related to the ELSE issues of AI; for instance, publications characterized only by keywords such as “narrative”, “reality”, “imagination”, or “corporate social responsibility” were excluded as outside the scope of our research.
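The merge-and-screen step above can be sketched in code. This is an illustrative Python sketch, not the authors' actual workflow: the record fields (`doi`, `title`, `keywords`) and the toy records are invented for demonstration, and only the exclusion terms come from the text.

```python
# Sketch: combine results from multiple search strings, deduplicate,
# and drop records whose keywords fall entirely outside the study scope.

def merge_searches(result_sets):
    """Union several search-result lists, deduplicating by DOI (or title as fallback)."""
    seen, merged = set(), []
    for results in result_sets:
        for rec in results:
            key = rec.get("doi") or rec["title"].lower().strip()
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

# Out-of-scope keyword examples quoted in the text above.
EXCLUDED = {"narrative", "reality", "imagination", "corporate social responsibility"}

def in_scope(record):
    """Keep a record only if its keywords are not limited to out-of-scope terms."""
    kws = {k.lower() for k in record.get("keywords", [])}
    return bool(kws - EXCLUDED)

# Toy result sets from two hypothetical search strings:
search_a = [{"doi": "10.1/x", "title": "AI and ethics", "keywords": ["AI ethics"]}]
search_b = [{"doi": "10.1/x", "title": "AI and ethics", "keywords": ["AI ethics"]},
            {"doi": "10.1/y", "title": "Fiction study", "keywords": ["narrative"]}]

merged = merge_searches([search_a, search_b])   # duplicate removed
kept = [r for r in merged if in_scope(r)]       # out-of-scope record dropped
```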

Keyword and co-occurrence analysis

To evaluate the ELSE implications of AI in different research areas (disciplines), we utilized keyword and co-occurrence analysis techniques. Keyword analysis is used to identify relevant topics in different research fields as well as to predict trends [43]. We utilized the keywords of all the publications and their respective frequencies. The authors’ own classification, in terms of the keywords they provide, is one of the most meaningful indicators of an article’s content [43, 69]. The keywords are available in several formats and are readily usable by text-mining tools. Co-occurrence analysis, on the other hand, is a text-positioning technique that indicates how often a word occurs together with other words [37]. This technique is founded on the knowledge that semantic representations and relations of words can be inferred from their occurrence patterns in texts [14]. It has been widely used for keyword extraction and text classification [44, 48, 61].
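The basic idea of co-occurrence counting can be illustrated with a minimal sketch. This is not the package used in the study (see the next section); the keyword strings here are invented examples, and each string is treated as one short text whose word pairs are tallied.

```python
# Minimal co-occurrence counter: tally pairs of words appearing in the
# same keyword string.
from collections import Counter
from itertools import combinations

def cooccurrences(texts):
    counts = Counter()
    for text in texts:
        # Sort unique words so each unordered pair gets one canonical key.
        words = sorted(set(text.lower().split()))
        for pair in combinations(words, 2):
            counts[pair] += 1
    return counts

keywords = ["artificial intelligence",
            "artificial intelligence ethics",
            "machine ethics"]
pairs = cooccurrences(keywords)
```

Words that recur together across many keyword strings accumulate high pair counts, which is exactly the signal the network visualizations later in the paper are built on.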

We performed a text analysis on all the author-provided keywords in the publications obtained from the WoS database to identify recurring themes. This analysis was carried out in R [53] using the UDPipe package [70]. We focused on two functionalities of the package: co-occurrence analysis and keyword analysis based on simple noun phrases. The co-occurrence function was set to identify co-occurrences among words tagged as nouns or adjectives, since these are more informative than other parts of speech [6]. The extraction algorithm in UDPipe searches the text for noun phrases following a rule-based linguistic pattern, (A|N)*N(P + D*(A|N)*N)*, interpreted as “start with an adjective or noun, another noun, a preposition, a determiner, an adjective or noun, and next a noun again” [70]. Co-occurrence-based keyword ranking was carried out using the TextRank R package, which constructs a word network of co-occurrences and assigns higher weights to more frequent co-occurrences [71]. TextRank then applies the Google PageRank algorithm to measure the importance of every word in the network. We tuned the TextRank algorithm to include only nouns and adjectives, which provides more accurate results [16].
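The TextRank idea can be sketched as follows. The study used the R packages udpipe and textrank; this plain-Python version only illustrates the underlying algorithm (a PageRank-style iteration over a weighted word co-occurrence network) on toy edges, with illustrative damping and iteration parameters.

```python
# Sketch of TextRank-style word ranking over a co-occurrence network.
from collections import defaultdict

def textrank(edges, damping=0.85, iters=50):
    """edges: iterable of (word_a, word_b, weight) co-occurrence links."""
    graph = defaultdict(dict)
    for a, b, w in edges:
        graph[a][b] = graph[a].get(b, 0) + w
        graph[b][a] = graph[b].get(a, 0) + w
    nodes = list(graph)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # Each neighbor m passes its score in proportion to the
            # weight of the m-n link relative to m's total link weight.
            rank = sum(score[m] * graph[m][n] / sum(graph[m].values())
                       for m in graph[n])
            new[n] = (1 - damping) / len(nodes) + damping * rank
        score = new
    return score

# Toy co-occurrence edges (weights are made-up frequencies):
edges = [("artificial", "intelligence", 5),
         ("intelligence", "ethics", 2),
         ("machine", "ethics", 2)]
ranks = textrank(edges)
top = max(ranks, key=ranks.get)
```

Because "intelligence" sits at the most heavily weighted hub of this toy network, it ends up with the highest score, mirroring how "artificial intelligence" dominates the rankings reported in the results.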

Content analysis of highly cited publications

To analyze the existing ethical frameworks and major themes, we utilized the content analysis method for the highly cited publications. Content analysis is a commonly used research technique for analyzing qualitative data [23]. It provides a systematic and objective means of describing and quantifying phenomena [20, 60]. In content analysis, data can be reduced to concepts, categories, and themes that describe the research phenomenon [24, 36].

We sorted the publications by the number of times they were cited and chose highly cited publications (cited 40 times or more) for the content analysis. Research shows that publications cited 40 times or more can generally be considered highly cited [22]. From each of these publications, we gathered the research questions, research methods, major results/themes, and any future directions for research or policy suggestions. We further utilized thematic analysis to identify these themes. Thematic analysis is “a method for identifying themes and patterns of meaning across a dataset in relation to a research question”, which can be applied across theoretical approaches [12].
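The selection step reduces to sorting by citation count and applying the threshold. A toy sketch (the records are invented; only the threshold of 40 comes from the text):

```python
# Sketch: keep publications cited 40 times or more, sorted by citations.
CITATION_THRESHOLD = 40

pubs = [{"title": "A", "cites": 232},
        {"title": "B", "cites": 41},
        {"title": "C", "cites": 12},
        {"title": "D", "cites": 40}]

highly_cited = sorted(
    (p for p in pubs if p["cites"] >= CITATION_THRESHOLD),
    key=lambda p: p["cites"],
    reverse=True,
)
```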

Data analysis plan

We carefully examined each publication record to document the type of publication, title, authors and their affiliations, abstract, year of publication, source of publication, country affiliation of the corresponding author, author keywords, source of funding, the number of times each publication was cited, and the research area of the publication. We analyzed the data to characterize global patterns of the publications, research efforts in different geographical regions worldwide, research areas of the publications, and sources of funding. The information on the country of origin of the publications was sorted alphabetically and listed under six geographic regions, namely Asia, Europe, Africa, North America, South America, and Australia. Additionally, we sorted the publications by year, counted their frequency, and predicted the number of ELSE implications of AI-related publications for the next 10 years (2021–2030) using an exponential smoothing function (MS Excel for Microsoft 365, Microsoft Corp., Redmond, WA). We used the previous 10 years of historical data (2011–2020) for our forecast, since we were predicting the trend for the next 10 years (2021–2030). Information on the areas of ELSE implications of AI was collected from the publication records to determine the research area that each publication belongs to. To organize the research areas, we first sorted them alphabetically by subject area and then combined related sub-areas into six broad research areas. Each publication was then sorted into one of those six broad research areas. The sorted data portrayed the frequency of different research areas publishing on ELSE implications of AI. The information on sources of funding was treated similarly by sorting country-wise. It is worth mentioning that the analysis of sources of funding in this paper was based solely on the authors’ acknowledgment of funding sources in their publications.
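The forecasting step was done with Excel's built-in exponential smoothing, which fits a trend-capable ETS model. A simplified sketch of the same family of methods is Holt's linear (trend) exponential smoothing; the smoothing parameters and the yearly counts below are illustrative placeholders, not the study's fitted values or actual data.

```python
# Simplified trend-capable exponential smoothing (Holt's linear method).
def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    """Fit level and trend on `series`, then project `horizon` steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        # Update level toward the new observation, trend toward level change.
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Hypothetical yearly publication counts for 2011-2020:
history = [8, 10, 14, 18, 25, 45, 90, 160, 260, 380]
forecast = holt_forecast(history, horizon=3)  # projections for 2021-2023
```

Because the fitted trend on a rising series is positive, the projected counts keep growing, which is the qualitative behavior reported for the 2021–2030 forecast.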

Results and discussion

Our WoS search between 1991 and 2020 yielded 1125 publications in the area of ELSE implications of AI. No publications were found in this area prior to 1991. After analyzing and cleaning the initial dataset, 1028 publications were selected for more detailed analysis. The results stemming from data analysis are discussed in Sects. 3.1, 3.2, 3.3, 3.4, 3.5 and 3.6 below.

Global mapping of ELSE implications of AI publications and future trend predictions

We present the global trend of ELSE implications of AI publications from 1991 to 2020 (Fig. 1). Publication numbers remained relatively low (< 20) over the 5-year intervals from 1991 until the 2006–2010 interval, which saw 36 publications, a 140% increase over the previous (2001–2005) interval. Note that there were only 4 publications in the 1991–1995 interval, which is why this interval is barely visible in Fig. 1. The following 2011–2015 interval saw 75 publications, an increase of 108% from the previous interval (2006–2010). The largest jump occurred in the 2016–2020 interval, yielding a > 1000% increase in publications over the 2011–2015 period. This reflects the rapid development of AI, its application in diverse sectors, and the growing attention to different aspects of the ELSE implications of AI.
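The interval-to-interval growth rates quoted above can be checked arithmetically. Note that the 2001–2005 count (15 publications) is inferred here from the stated 140% increase to 36; the other counts come directly from the text.

```python
# Arithmetic check of the reported interval growth rates.
def pct_increase(prev, curr):
    """Percentage increase from prev to curr, rounded to whole percent."""
    return round((curr - prev) / prev * 100)

growth_2006_2010 = pct_increase(15, 36)  # 2001-2005 (inferred) -> 2006-2010
growth_2011_2015 = pct_increase(36, 75)  # 2006-2010 -> 2011-2015
```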

Fig. 1
figure 1

The global trend of ELSE implications of AI publications from 1991 to 2020 in 5-year intervals

The 2011–2020 period saw an enormous growth in the application of AI technologies in various fields and consequently, an increase in the concerns surrounding its use. The future prediction shows an increase in the volume of ELSE issues of AI publications over the next 10 years (Fig. 2). Four hundred and twenty-six publications are forecasted for 2021, which could increase to 572 in 2025, and to 754 in 2030. As research on AI increases with its applications permeating every sector of society, we can only expect an increase in human–AI interactions and the ELSE implications of these interactions, hence more publications.

Fig. 2
figure 2

Future prediction of publication trend in ELSE implications of AI from 2021 to 2030. Dashed lines are the upper and lower bounds at a 95% confidence interval

Global mapping of ELSE implications of AI publications

The number of publications from 1991 to 2020 in each continent is presented in Fig. 3. Nearly half of the 1028 publications were from Europe (468; 46%). There has been at least one publication from most European countries, with England alone contributing more than a quarter of the publications from Europe. Europe is followed by North America and Asia with 339 (33%) and 136 (13.2%) publications, respectively. The USA and China, the world’s leaders in AI investment [4], topped their respective continents with 280 and 32 publications, respectively. Given these two countries’ massive committed funding for AI research, this high publication output is unsurprising. The remaining continents, Africa, Australia, and South America, accounted for a total of 85 (8%) publications. The distribution of publications across the world mirrors the economies of the world’s nations and regions. This suggests that, in the lagging regions, either the use of AI technologies is still very low and/or there is little political or social willingness to consider ELSE issues. However, the number of publications may increase in the coming years as AI research and investment pick up in those continents.

Fig. 3
figure 3

Global distribution of ELSE implications of AI publications between 1991 and 2020. The map was generated at:

Research areas

We found that the ELSE implications of AI have been studied by researchers across various areas, which we grouped into six main research areas. Appendix 1 details the sorting of sub-areas under each broad research area. The main research areas were: Computer and Information Science (Computer Science, Information Science, Robotics, etc.) with 34% of the 1028 publications (Fig. 4), followed by Humanities and Social Sciences (Sociology, Government and Law, Philosophy, Education, Communication, etc.) with 31%, Health Sciences (General and Internal Medicine, Radiology, Health Care, Neurology, Ophthalmology, etc.) with 15%, Business and Economics (Business, Commerce, Economics, Management, etc.) with 9%, and Science, Engineering, and Technology (Engineering, Science and Technology, Architecture, etc.) with 8% of publications. The Agricultural Sciences (Environmental Science, Occupational Health, Food Science, etc.) research area had the smallest share of publications, 3%.

Fig. 4
figure 4

Distribution of the 1028 publications sorted into six broad research areas

For agricultural sciences, the low number of publications is partly because the implementation of AI technologies in agriculture is still in its infancy. Of late, however, more research and national resources are being directed to this sector, with several public and private funding agencies investing in the agricultural sectors [2, 52, 55, 63]. The USDA National Institute of Food and Agriculture (NIFA) has funded four AI Institutes around agroecosystems and food since 2020 and has just announced another cycle of such funding for 2022. Ayed and Hanana [1] highlighted the role of AI in improving the food and agriculture sector in five areas: (1) efficient crop production and marketing; (2) efficient management techniques to combat climate variations and pest and weed infestations; (3) early weather forecasting that farmers can adapt their practices to; (4) monitoring of soil health by analyzing extensive satellite and camera imagery using machine learning and deep learning; and (5) protection of the environment through spot application of nutrients and pesticides using sensors combined with machine learning and robotics. They also indicated that the capabilities of AI technologies can help reduce the cost of employee training, minimize human error, and enable precise decision-making in the food industry. Modern innovations in agricultural production and the food industry have paved the way for novel food production and processing equipment and technologies, including smart machines and production lines [39]. AI-driven big data approaches have the potential to integrate agricultural food production, processing, food safety risk factors, and genomic data in ways that can transform public health strategies to prevent foodborne diseases and respond rapidly to outbreaks. As with any use of AI, privacy, intellectual property, data ownership, and security are also top concerns [57].
While AI use in medicine, education, and even policy most directly affects people, AI use in agriculture also has the potential to dramatically impact ecosystems [30, 57]. Agriculture currently relies on large amounts of physical labor by field workers and data analysis by agricultural economists. Both roles could potentially be replaced by AI, with fieldwork taken over by robots and data analysis taken over by complex models [47, 57]. Though this may be cost-effective for the agricultural industry, it also raises important ethical questions of AI agency and accountability [47, 57]. Would an AI system or its creator(s) be held accountable in the event that an AI error leads to a foodborne disease outbreak? Given the application of AI in other sectors, it is important to explore and create ethical frameworks in other research areas, such as agriculture and government and law, as well.

In the area of legal studies, despite the evidently rapid growth and adoption of AI technologies around the world, most countries have not been able to develop and implement relevant regulations on AI usage at the same pace. A report by Cognilytica [17] indicated that most countries have little to no AI laws and regulations. Even in countries where such laws exist, they mostly concern data privacy and protection, leaving unaddressed other equally important areas such as autonomous decision-making, facial recognition, conversational systems, malicious AI usage, AI ethics and bias, and the oft-debated question of AI legal personhood and liability [17]. Though the USA is a world leader in AI, it appears to have taken a laid-back, democratic approach to AI regulation. While there are no comprehensive federal laws on AI usage, some states have enacted their own laws [13]. It has been argued that the USA’s approach fosters an enabling environment for AI research and development. This is in stark contrast to China’s approach, where the government has an overreaching authoritarian interest in AI research and applications [13]. Across the world, the EU has the most comprehensive and harmonized AI regulatory framework [17], prohibiting some applications of AI (e.g., applications likely to violate fundamental rights), heavily restricting some high-risk applications of AI (e.g., systems used as a safety component), and lightly regulating others that are deemed to be of low risk [21].

Source of funding

Overall funding for AI research worldwide is increasing dramatically, which includes funding for ELSE implications of AI research. As mentioned earlier, the analysis presented here is based solely on the authors’ acknowledgment of funding sources in their publications. We have not presented an analysis of the total amount of funding worldwide for ELSE implications of AI research, since we did not have access to such data and it was beyond the scope of our study. However, we provided a general overview of worldwide AI funding in the introduction.

Out of the 1028 publications that were analyzed, 269 (26.2%) acknowledged a funding source, and our analysis of funding sources is based on these 269 publications. The number of funding-acknowledged publications from each continent followed a similar trend as the overall distribution of publications (see Sect. 3.2). Approximately half (131; 49%) of the publications were from Europe, followed by North America with 80 (30%) publications and Asia with 49 (18%) publications, with Africa, Australia, and South America collectively contributing 3%. We attribute Europe’s lead partly to the fact that European countries are connected through the European Union’s single economic and political framework, whereas North America has no comparable economic or political union. Our analysis further revealed that 11 countries had no more than one source of funding. Country-wise, the USA (26) topped the list with the highest number of funding sources, followed by England (21) and China (9). Also noteworthy is the gap between countries in the number of funding sources: the USA had 26 funding organizations, compared with only one each in countries such as Turkey, Cameroon, and Romania.

In the USA, ELSE implications of AI research have mainly been funded by public institutions. For example, the National Science Foundation (NSF), the National Institutes of Health (NIH), and several universities were acknowledged in 19, 14, and 14 publications, respectively. Eight publications acknowledged funding from other institutions such as the Centers for Disease Control and Prevention, the Office of Naval Research, and the National Aeronautics and Space Administration. In China, 5, 7, 3, and 1 publications acknowledged funding from the National Social Science Foundation, the National Natural Science Foundation, the Ministry of Education, and the Ministry of Science and Technology, respectively, for ELSE implications of AI research. In European countries, ELSE implications of AI research has been funded by the European Commission, the National Academy of Sciences, the Engineering and Physical Sciences Research Council, and other national institutes such as the Alan Turing Institute, the German Research Foundation, and the Swiss National Science Foundation.

Delving into the political economy of the respective countries indicates that funding sources in developing countries are most often limited to a single government agency, whereas in developed countries, such as those in Europe and North America, many private companies and foundations provide funding for ELSE implications of AI research in addition to government funding. This could be attributed to the fact that developing countries do not have as many resources as developed countries. This may call for the European Union and North America to share information on research funding strategies and models across the globe, in order to assess the true impact of AI on society at a global level.

Keyword and co-occurrence analysis

We performed a keyword analysis on the keywords (n = 5143) provided by the authors of the 1028 publications. The 30 most frequent simple noun phrases from our text analysis are presented in Fig. 5. “Artificial intelligence”, predictably, had the highest frequency, occurring 580 times. This was much higher than the frequencies of the next five simple noun phrases, “artificial intelligence ethics”, “machine ethics”, “big data”, “natural language processing”, and “deep learning”, which occurred 93, 46, 40, 33, and 27 times, respectively. Though the frequencies of the remaining phrases were below 20, they still retain vital information about the text we analyzed. The 30 phrases presented in Fig. 5 clearly show the two recurring themes that are the foci of this study: AI, and ELSE implications of AI. Seventeen of the phrases are related to AI and 13 to ELSE implications of AI.

Fig. 5
figure 5

Thirty most frequent simple noun phrases found in 5143 keywords

Vakkuri and Abrahamsson [64] employed a systematic mapping study to identify the key concepts of AI ethics. They identified key phrases such as “robot ethics”, “moral agency”, “human rights”, and “AI ethics”, similar to what we have found in our study.

Figure 6 shows the results of a keyword extraction approach based on the importance of each word (TextRank). The word cloud shows previously unseen keywords, including “responsible AI”, “social implication”, “algorithmic bias”, “public policy”, “robot rights”, “ethical design”, “ethical issue”, “general data protection regulation”, “human control”, and “responsible innovation”. The results of the text analysis highlight responsibility, privacy, transparency, trust, sustainability, and freedom as the main ethical principles in the publications we analyzed. Similar findings were reported by Jobin et al. [38] in a study on the global landscape of AI ethics guidelines.

Fig. 6
figure 6

Word cloud of 100 most frequent relevant keywords (from TextRank). “Artificial intelligence” could not be plotted in this figure because it has a much higher frequency relative to the other keywords

As part of the co-occurrence analysis, we present a network of word co-occurrences showing how frequently a specific word follows another word (Fig. 7). As in the keyword analysis, “artificial” and “intelligence” had the highest co-occurrence (shown by the wider network link between them). This both complements and reinforces the findings of the keyword analysis (Fig. 5), where “artificial intelligence” was the simple noun phrase with the highest frequency. The network shows a broader set of words and the links among them, providing insight into the concepts discussed in the publications we analyzed. Figure 7 shows links among words such as “health”, “digital”, “care”, “policy”, and “informatic”; we can deduce that these words are from publications that discussed the ethical and policy implications of digital health. Similarly, the network shows links among “moral”, “agency”, “artificial”, “agent”, “machine”, and “responsibility”, giving rise to the phrases “artificial moral agent”, “moral agent”, “moral agency”, and “moral responsibility”. These concepts usually occur in studies where the authors debate the accountability of AI, i.e., whether AI should be seen as a moral agent and whether an AI system or its developer should bear moral responsibility.

Fig. 7
figure 7

Word co-occurrence network for 5143 keywords in all publications. Each word is shown at a node with links indicating how these words were connected. “Artificial” is shown as the central node

Content analysis of highly cited publications

Broadly, the emerging international AI code of ethics is centered on three guiding principles: transparency, accountability, and positive impact [15]. Within the existing AI literature, these principles have been applied both broadly and narrowly, to form everything from long lists of principles for AI development to ideal codes of ethics for deploying AI in medicine [26, 66]. Any field that could benefit from rapid, aggregate data processing has the potential to be shaped and changed by AI. AI could become an integral part of medicine, economics, policy, scientific research, marketing, customer service, engineering, and beyond [9]. We chose 30 highly cited publications, whose citation counts ranged between 40 and 232, for detailed content and thematic analysis. Of the 30 publications, Humanities and Social Sciences and Computer Science each had ten publications, and the remaining ten were collectively in the Medicine, Environmental Sciences, and Engineering research areas. We analyzed the 30 publications by looking for common themes, patterns, research methods, central research questions, and main results. We found that the existing AI ethics literature falls into three primary themes: (i) ethical framework, (ii) specific application, and (iii) ethics in practice (Table 1). We briefly describe these three themes below.

Table 1 Highly cited publications (≥ 40 citations) on ELSE implications of AI thematically analyzed by ethical framework, specific application, and ethics in practice

Ethical framework

Ethical frameworks refer to the publications with a primarily academic aim. These publications reviewed existing literature to define principles, models, and philosophical problems in ELSE implications of AI research. They are not concerned with a particular discipline, but rather with how AI ethics as a whole is conceptualized and analyzed. To varying degrees, most of the publications analyzed were concerned with beneficence, non-maleficence, autonomy, justice, and explicability, the key principles outlined in [26]. Floridi et al. attempted to lay a groundwork for unpacking and understanding the problems and themes related to the production and use of AI in society, by framing directions for future research and underpinning the current understanding of the issues. Many of the publications in this category were organized along similar themes. For example, Bostrom and Yudkowsky [11] discussed ethical problems in AI based on the need for morality, transparency, responsibility, auditability, and incorruptibility. Floridi and colleagues have published widely on the topic of AI ethics, including a highly cited introduction to a themed issue, “What is data ethics?” [28]. This article was presumably highly cited because of the definitions it provided for these prominent emerging topics related to AI and data ethics. In 2004, Floridi published “Open Problems in the Philosophy of Information”, defining 18 problems that must be solved in order to create and integrate ethical AI into people’s lives [25]. Our analysis suggested that Floridi and colleagues were the most prolific publishers on AI ethical frameworks [26,27,28]. Other authors did not publish as frequently among the highly cited publications we analyzed, but consistently referred to the frameworks and thematic framing of Floridi et al.’s work in their subsequent publications.

Another method of defining AI ethics is the creation of models. These models are grouped with the frameworks because they are often highly academic, adhere to many of the same principles, and a model is inherently a type of framework. Allen et al. [3] attempted to create a model for the design of moral AI systems, using well-known philosophical frameworks, such as Kant’s categorical imperative, deontology, utilitarianism, and consequentialism. Models in this area were mostly conceptual but contained suggestions for potentially quantitative work. For example, Wallach et al. [67] developed a cognitive science-based model that could theoretically be adapted for moral decision-making within AI. While these cognitive models presented ethical and computational challenges, the extension of ethical AI frameworks to models that could be used by machines has been explored.

Finally, a few publications fall into the context of a social framework for AI. Early work on this subject by O’Sullivan and Haklay [51] attempted to describe how AI fits into agent-based ethical models, and whether such a model even makes sense for the world in which we live. Floridi and Sanders [27] discussed the conditions necessary for AI to be considered a moral agent that can be held accountable for its actions, and what steps would be required to censure immoral acts by AI. In one of the earliest publications, Gasser reviewed the principles of distributed AI and proposed how it could be socially conceptualized for creating functional AI systems [31]. This article shows that active framing research has been developed on this theme for decades, even before AI began to assume a role in everyday life.

Specific application

The specific application theme categorizes publications that are concerned with how ELSE implications of AI arise within the context of one particular field, such as healthcare, manufacturing, or environmental policy. These publications did not discuss broader philosophical problems or implications of AI outside of their particular field. They were either more abstract and focused on ethical questions or more concrete with consideration of how to address ethical problems, but at either end of the spectrum, their focus remained narrow.

Many of the examples in this category came from healthcare. Research in the healthcare field appeared to have more readily embraced the boom in AI, while also being highly concerned about any ethical dilemmas from the increased use of AI. The review by Yu et al. [73] touched upon everything from the challenges of collecting high-quality data to the consequences of AI use in healthcare, such as loss of healthcare jobs, medical negligence and malpractice by AI, and data privacy concerns for patients. Many of these same themes were discussed in the more narrowly focused publications on healthcare AI. Hashimoto et al. [34] raised many of these same issues while examining the applications and ethical pitfalls of AI usage in surgery. Though they acknowledged that the data analysis capabilities of AI could be a useful tool for surgeons, they were also wary about surgeons being unable to assess how or why AI detects specific patterns in large datasets. Similarly, Ngiam and Khor [49] analyzed the opportunities for machine learning in oncology and concluded that doctors would need to understand how machine learning worked in order for its use to be ethical and successful. They also discussed substantial privacy concerns that needed to be addressed for AI to be incorporated in oncology. A similar article by Ting et al. [62] looked at the potential for AI use in ophthalmology but ultimately drew conclusions similar to those reported by Ngiam and Khor [49], indicating that the AI ethics issues in healthcare were similar across healthcare subfields.

Outside of the healthcare field, we found a highly cited publication on the function of robots in common jobs and tasks [72]. This publication did not have a strong ethical focus but discussed how workplace interactions might be affected by the introduction of robots and how consumers currently perceive robot-provided services, indicating that current AI research recognized the need for both internal and customer-facing assessments of AI implications. Additionally, French and Geldermann [30] explored how AI-based quantitative analysis could be used in environmental decision-making. This publication did not pose explicit ethical questions but rather weighed the implications of implementing AI decision-making in an environmental policy context.

Ethics in practice

Ethics in practice falls at the intersection of the two previous themes. It examines current ethical principles and frameworks within the field of AI ethics and how those are applied by governments, businesses, non-governmental organizations, and researchers. As in Sect. 3.6.2, the field and type of application may vary, but generally, all publications in this theme were concerned with how AI ethical frameworks can be or have been evaluated and used in “real-world” contexts outside of academia. Additionally, they focused on the broader context of applying AI ethical frameworks rather than on a particular field.

The moral machine experiment by Awad et al. [5] broadly explored how autonomous vehicles should solve moral dilemmas by questioning individual study participants. Participants were presented with choices of whether a vehicle should swerve to hit a person or a pet, or stay on course. The demographics of the people and pets were varied in each scenario. Thus, the researchers were attempting to build a moral model for an AI technology based on the opinions of real people. This was one of the narrowest studies within this category. Others examined broader issues, such as the ethical implications of ambient AI (AI embedded in everyday objects, such as household items) [58], while another looked at current examples of how businesses, governments, and universities were using AI systems and suggested a three-C model (Confidence, Change, and Control) for how such organizations could address the ethical concerns inherent in these technologies [40].

Similar to the previous categories, different authors organized their studies along similar themes. For example, Jobin et al. [38] examined how AI guidelines changed or overlapped based on location and organization by analyzing AI reports produced by private companies, governments, and academic institutions. Common emergent themes included transparency, justice and equity, non-maleficence, responsibility and accountability, privacy, freedom and autonomy, trust, sustainability, dignity, and solidarity [38]. For some of these categories, there were discrepancies in how these terms were defined and to whom they should apply. This shows that while common themes may exist in the realm of AI ethics, consensus on how to address the multifaceted ethical questions it raises still seems far away. In a bid to find some consensus across industrialized, Western societies, Cath et al. [15] analyzed AI reports issued by the EU Parliament, the USA White House, and the UK House of Commons, attempting to answer the question of “what constitutes a good AI society?” Transparency, accountability, and positive impact were highlighted across the three reports. Each report varied in its specifics and in the extent to which it laid out a concrete plan, but all highlighted how AI should help to build a “good” human society. The benefit to humans was a consistent thread across all three reports.

Summary and conclusion

The development and application of artificial intelligence have a promising future and the potential to transform our society. Like any emerging technology, however, AI poses challenges. ELSE issues associated with AI are very important, and incorporating them from the inception of technology development will promote transparency and ensure trust among end users. To map the global scenario of publications on the ELSE implications of AI, we analyzed 1028 publications worldwide. Our analysis showed that publication activity gained momentum over the last 5 years, and we predict that it will grow to more than 750 publications annually over the next 10 years. Europe ranked first with 468 publications (46%), followed by North America (339; 33%). However, considering individual countries, the USA topped the list with 280 publications. We also found a similar trend in the sources of funding acknowledged by the authors in the publications. This trend indicated that funding, especially in developed countries, is available for research related to ELSE implications of AI, and we anticipate that future funding for this research area will continue to grow. In terms of research areas, findings from our study indicated that computer science had the largest number of publications on the ELSE implications of AI, followed by the social sciences. From the content analysis of highly cited publications, we found that the existing AI ethics literature falls into three primary themes: (1) ethical framework, (2) specific application, and (3) ethics in practice.

Governments have an important role to play in displaying the political will that encourages the continued development of AI. At the same time, they must consider regulations that would curb the excesses of AI. We suggest that national governments continue to anticipate potential problems arising from AI and create country-specific legislation to address these problems, rather than importing laws and regulations wholesale from other countries. Additionally, our results showed that the ELSE implications of AI have not been explored globally in the agricultural sector. As we have already discussed, the agricultural sciences are inherently tied to agriculture, the environment, and food production systems, which directly impact human life. Thus, it is important to explore the ELSE implications of applying AI in the agricultural sciences in general, and in environment and food systems in particular, and to conduct more applied research that incorporates multiple stakeholders’ perspectives and a multidisciplinary approach. Innovations with the potential to transform an entire society can be rejected if they are not trusted and adopted by end users. For any transformative technology to succeed, one needs to take multiple stakeholders’ perspectives into account. One also needs to keep communication channels open between technology developers and end users. Doing so would promote transparency, inclusion, and trust, and foster successful adoption among the intended end users.