1 Introduction

Recent technological advancements have accelerated the use of artificial intelligence (AI) in our daily lives. However, most people have not learned how AI works in a systematic manner and have relied on journalistic writing or popular media for information. This kind of incidental learning (Green et al. 2021), as opposed to intentional learning, has a significant impact: its fragmented nature can cause misconceptions that not only go unchallenged but also proliferate through social media.

Along with the increase in AI applications, there has been an increase in studies on whether people hold accurate conceptions of AI. For example, a recent study revealed that K-12 teachers (who teach kindergarteners to 12th graders) in the southeastern US tended to hold some accurate conceptions of AI but at the same time held crucial misconceptions that “AI is expensive, that AI can learn on its own, and AI is not biased” (Antonenko and Abramowitz 2023). Studies on children’s understanding of AI and on curricular design are also increasing; for instance, one study examined the validity of an AI educational module for middle school students based on how the accuracy of their AI knowledge improved (Zhang et al. 2022). A survey of 470 Pakistani doctors and medical students showed that, while approximately 70% of respondents in this developing country had a basic knowledge of AI, their awareness of its medical applications was considerably lower (Ahmed et al. 2022). Diaz et al. (2021) surveyed 219 professionals in the medical physics field in 31 countries; the average level of AI knowledge was 2.3 ± 1.0 (mean ± standard deviation) on a 1–5 scale, and a gender difference was highlighted, with male respondents reporting more expert knowledge than their female counterparts. An American study of 5,101 adults found that knowledge of AI differed by educational attainment as well as race (PEW Research Center 2023).

This scholarship on the accuracy of AI knowledge is relatively new and builds on previous studies of self-perceived understandings, perceptions, attitudes, and expectations about AI. For instance, a 2019 study of adults in Australia showed that, while the respondents’ basic understanding of AI varied, the majority considered themselves to have a poor understanding of AI (Selwyn and Cordoba 2022). Perhaps one of the most comprehensive studies on the correlation between perceived understanding and demographics is an international survey conducted in late 2021 across 28 countries (Ipsos 2022). Of the 19,504 respondents, 64% claimed that they had a good understanding of what AI is. The respondents’ confidence in their AI understanding was positively correlated with their educational degree as well as their household income, and business owners and senior executives/decision makers were more confident in their understanding than the employed population as a whole. South Africa, Peru, Chile, Russia, Mexico, Korea, and India were the top countries: over 72% of respondents there believed they had a good understanding of what AI is. At the lower end, half of the respondents from France and Germany, and less than half of those from Italy and Japan, responded positively to having a good understanding of AI. The general trend from this Ipsos study is that, despite the varying degrees of understanding, there is general optimism that AI will bring a positive impact to various sectors of society, including education, income, and technology.

While a certain level of optimism exists about the problem-solving capabilities of AI technology, some people are concerned about over-reliance on a technocentric approach in which social problems are addressed solely through technology; these critics worry that we may lose sight of solving social problems for actual social good (Hagendorff and Wezel 2020). This kind of critical view of “AI solutionism” aligns with some of the concerns that scholars and the public have about the social implications of widely permeating AI technology. Scholarly interest in ethical, legal, and social issues (ELSI) has followed this trend. Most notably, biases based on gender, race, and class in AI development have been widely discussed (Buolamwini and Gebru 2018; Kruply 2020; Leavy 2018). Such empirical research, demonstrating a need to address ELSI, has led to the creation of various ethical guidelines. Fjeld et al. (2020) identified eight common concepts across these guidelines, and Ikkatai et al. (2022a) conducted a quantitative study on public attitudes toward AI ethics using the eight concepts. In their study, participants in Japan were asked to express their attitudes toward four scenarios regarding AI voice replication of a deceased person, AI customer service, unmanned autonomous weapons, and crime prevention. The results showed that the public’s attitudes varied depending on the context, and age was a significant indicator of differences in attitude. Hartwig et al. (2022) conducted a quantitative study on public attitudes toward AI ethics in Japan and the US, which revealed that country and age were the most informative factors in predicting attitudes on AI ethics.

As the problems requiring AI technology are increasingly “wicked problems” (Rittel and Webber 1973), which are multifaceted, complicated, and in need of cross-cultural cooperation (ÓhÉigeartaigh et al. 2020), it is important to investigate, across countries, how accurately people understand AI and how that understanding correlates with their attitudes toward ethical, legal, and social concerns. In other words, it is important to understand how the level of AI knowledge relates to attitudes toward ELSI.

1.1 International comparison: understanding, diversity, and inclusion

For this study, we chose Japan, the United States (US), Germany, and the Republic of Korea (hereafter South Korea, or simply Korea) to conduct our survey. These four countries represent four distinct combinations of trust in and self-perceived understanding of AI on a global scale. A survey on the correlation between people’s trust in AI and their perceived understanding of AI revealed that these four countries exhibit different traits (Ipsos 2022). Germany represents a typical correlation: people there say they trust companies that use AI as much as other companies and are confident in their perception of what AI is. Korea represents a cluster of countries where approximately three quarters of participants perceived themselves to have a strong understanding of what AI is but had a relatively low level of trust in it. The US represents another cluster where a majority of respondents perceived their understanding of AI to be fair but reported low trust in the use of AI. Japan represents a fourth trait, where less than half of respondents said they had a good understanding of AI and also had a low level of trust in companies that use AI.

Diversity and inclusion in AI are emerging as important factors in addressing the understanding of AI, and these four countries also represent a range of attitudes toward social issues, particularly diversity and inclusivity. According to a 2018 survey, the global trend is for people to favor increased gender equality but to be less enthusiastic about increased ethnic, religious, and racial diversity (PEW Research Center 2019). In all four countries, men were more likely than women to state that gender equality had improved in their country. A high percentage of respondents believed the ethnic, religious, and racial makeup of their society had become more diverse in the previous 20 years in Germany (84%), South Korea (84%), and the US (77%), whereas only 55% of people believed so in Japan. People believed diversity was a positive change in South Korea (68%), the US (61%), and Germany (50%), whereas only 43% of those in Japan believed it was a positive trend. As for generational differences, a significantly greater percentage of younger adults in Japan favored greater diversity than people over 50 years old. In terms of educational background, those with higher levels of education were generally more likely to favor greater diversity in their country. The gap between those with lower and higher levels of education was greatest in Germany, with a 21-point difference, lower in the US and Japan, with an 18-point difference each, and smallest in South Korea, with an 11-point difference. Among those with a higher level of education, South Korea and the US had the highest percentages favoring greater diversity, at 73% and 71%, respectively, followed by Germany at 65% and Japan at 55%.

While these statistics are relevant in understanding the degree to which ELSI is understood and considered, there is no study on the correlation between AI understanding and ELSI concerning diversity and inclusion.

1.2 Four dilemma situations for the use of AI

We identified four common situations and created the following diversity- and inclusion-related dilemmas: the adoption of AI voice assistance in an office setting (“voice” scenario), AI use by a recruiter to shortlist applicants (“recruiting” scenario), AI use for facial recognition in border control (“facial recognition” scenario), and AI use for automated decision-making on immigration applications (“immigration” scenario). These four situations represent the ELSI of AI use involving potentially vulnerable populations in society. We measured public attitudes toward these four dilemmas and analyzed the correlation between those attitudes and the respondents’ knowledge of AI.

The first scenario (“voice”) describes the dilemma of using a female voice when implementing an AI voice assistance system. Most voice assistance software, such as Alexa, Google Assistant, Siri, and Xiaoice, comes with a human-sounding voice that is set to a female default (Fosch-Villaronga and Poulsen 2022), because anthropomorphism of the artificial voice is received better by the user (Li and Sung 2021). Marketing research shows that female voices generate a more positive attitude toward the brand being advertised (Martín-Santana et al. 2017). However, some claim that the female personalities these voice assistants embody symbolically represent the gender bias that exists in women’s employment, which is in turn indicative of the gender bias deeply rooted in the tech industry (UNESCO 2020).

The second scenario (“recruiting”) describes the use of AI in recruitment. Increasingly, more companies are using AI in their hiring practices as it allows for expediency, efficiency, and convenience for both the recruiter and candidates (Stephan and Erickson 2017). At the same time, there are concerns raised about the ethical and legal implications of such technology use (Dattner et al. 2019; Yam and Skorburg 2021). Particularly mentioned in this scenario is how people with physical disabilities may be subjected to discriminatory hiring practices because of the limitations of AI (Tilmes 2022).

The third scenario (“facial recognition”) describes the use of facial recognition AI and the ethical concerns it may pose for transgender people. This scenario takes place in an airport, where a facial recognition system may cause real consequences for travelers whose appearance does not match the gender indicated on their official travel documents (Donnelly et al. 2022; Keyes 2018; Wilkinson 2021). On the other hand, facial recognition systems can be effective for security purposes and crime prevention (Wang and Siddique 2020).

The fourth scenario (“immigration”) describes the use of AI for border control. An increasing number of people are displaced in the world due to global warming and military conflicts. Switzerland has introduced AI algorithms to place asylum seekers in the cantons (counties) that give them the best chance of employment (Information Technology Newsweekly 2018). The improvements in efficiency that AI can bring to border control are also received favorably (Isleyen and Ucar 2019). Canada has introduced the use of AI for temporary visa and immigration applications (Government of Canada 2021). On the other hand, there is much concern over the potential violation of human rights with the use of AI in border control (Access Now 2021; Beduschi 2021; Molnar and Gill 2018).

Thus, to reiterate, our research question is the following: How does the level of AI knowledge relate to attitudes toward the ELSI of AI use in these four scenarios?

1.3 Research question

How does the level of AI knowledge relate to ELSI awareness of AI in the four scenarios? Ikkatai et al. (2022b) measured ELSI awareness using three items: ethics (whether the use of AI is ethically correct), tradition (whether it is favorable from a traditional perspective), and law (whether policies and laws are established). They showed that people’s attitudes toward AI can be divided into four groups based on the responses to these three items (Hartwig et al. 2022). Based on previous findings that people hold both positive and negative views of AI (PEW Research Center 2022), this study added one further item, social benefit, which assesses whether the AI is beneficial to society (Table 1). We thus quantitatively measured public ELSI awareness using four items: ethics, tradition, law, and social benefit.

Table 1 Questionnaire design of Q1–Q5

RQ: What kinds of knowledge and ELSI awareness of AI do people have in each country?

2 Methodology

2.1 Respondents

We contracted Cross Marketing Inc., a research company based in Japan, to collect data from Japan, the US, Germany, and Korea using their data pool. We asked the company to collect about 1,000 samples per country matching each country’s current demographic profile for age, gender, and location as closely as possible. For Japan, the company collected data from 1,271 respondents (men = 720, women = 551) aged 20 to 69 years (mean ± SD = 47.3 ± 13.2); the survey was conducted February 14–19, 2022. For the US, the company collected data from 1,151 respondents (men = 561, women = 590) aged 20 to 69 years (mean ± SD = 44.0 ± 14.3); the survey was conducted February 14–24, 2022. For Germany, the company collected data from 1,075 respondents (men = 536, women = 539) aged 20 to 69 years (mean ± SD = 45.8 ± 14.1); the survey was conducted February 14–19, 2022. For Korea, the company collected data from 1,192 respondents (men = 632, women = 560) aged 20 to 69 years (mean ± SD = 44.1 ± 12.9); the survey was conducted February 14–21, 2022.

2.2 Procedure

The online questionnaire consisted of (1) sociodemographic variables, (2) knowledge of AI, and (3) questionnaire items for each scenario. We prepared the questionnaire in Japanese, English, German, and Korean using double-back translations from English to Japanese, English to German, and English to Korean.

1. Sociodemographic variables: age, sex, location, education, occupation, and household income (see Appendix 1). Age, sex, and education were included in our analysis. The responses to education were categorized into three groups for analysis: “more than university” (“university” and “graduate school”), “below university” (“elementary school/junior high school,” “high school,” and “junior college/vocational school”), and “other” (“other,” “do not know,” and “do not want to answer”).

2. Understanding of AI: four short quizzes, developed in previous studies, were used to measure the level of AI knowledge. A set of three quizzes (Quizzes 1–3) had been developed by Ikkatai et al. (2022a, b) and Hartwig et al. (2022) and tested with the public in Japan and the US. Another study developed the fourth quiz (Quiz 4), which was tested with the public in Japan, the US, and Germany (Ikkatai et al. 2022b). In this study, we used the total number of correct answers (0 to 4) to Quizzes 1–4 for analysis (a minimal scoring sketch is given at the end of this subsection).

   Quiz 1. Which of the following options is the most appropriate explanation of AI as of today? (1: A robot that thinks and acts on its own, without human assistance; 2: A program that makes decisions based on learning results; 3: A computer that interacts with people; 4: A new type of smartphone). The correct answer is 2.

   Quiz 2. Which of the following options is the most appropriate explanation of what AI can do as of today? (1: It makes moral decisions on its own; 2: It understands and interprets human languages; 3: It develops software on its own; 4: It has free will). The correct answer is 2.

   Quiz 3. Which of the following options is the most appropriate explanation of AI developers as of today? (1: The government is developing AI; 2: Information scientists and researchers are developing AI; 3: Computer programs are developing AI without human intervention; 4: Everyone is developing AI using smartphones). The correct answer is 2.

   Quiz 4. Which of the following options is the most appropriate statement about the performance of current AI technology compared to the performance of humans on various tasks? (1: The performance of AI is always better than the performance of humans on all tasks; 2: The performance of AI and humans is identical on all tasks; 3: The performance of AI is better than the performance of humans on some tasks; 4: The performance of AI is never better than the performance of humans on any task). The correct answer is 3.

3. Scenarios and items: there were four different scenarios (Fig. 1; see Appendix 1 for all the scenarios), describing the use of AI in immigration screening (“immigration” scenario), recruiting (“recruiting” scenario), facial recognition (“face” scenario), and voice assistance (“voice” scenario). These scenarios were created based on actual recent cases (see Introduction). Each scenario described both the beneficial and the anxiety-inducing aspects of the AI from the viewpoint of the person in charge of deciding whether to use it. The last sentence of each scenario presents the ethical dilemma: whether or not to use the AI technology (Fig. 1).

Fig. 1 Scenario (d) “voice.”

After reading each scenario, the respondents were asked what they thought about the implementation of this AI technology (Q1) and what they thought of this AI from the viewpoints of ethics (Q2), tradition (Q3), law (Q4), and social benefit (Q5). The respondents rated the items on a seven-point scale (Table 1); a larger value indicates stronger disagreement with the item.
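For concreteness, the following is a minimal sketch in Python of the data preparation described above: scoring the quiz responses into the 0–4 knowledge index and recoding education into the three analysis groups. All column names and the input file are hypothetical assumptions; the authors’ actual preparation (done for SPSS) is not published, so this is an illustration, not their code.

```python
# Minimal sketch of the data preparation described above; the column names
# (quiz1-quiz4, education) and "survey.csv" are hypothetical assumptions.
import pandas as pd

# Correct options for Quizzes 1-4, as stated in the questionnaire.
ANSWER_KEY = {"quiz1": 2, "quiz2": 2, "quiz3": 2, "quiz4": 3}

def knowledge_score(row: pd.Series) -> int:
    """Total number of correct answers across Quizzes 1-4 (range 0-4)."""
    return sum(int(row[quiz] == correct) for quiz, correct in ANSWER_KEY.items())

# Collapse the raw education responses into the three analysis groups;
# "other", "do not know", and "do not want to answer" fall through to "other".
EDUCATION_GROUPS = {
    "university": "more than university",
    "graduate school": "more than university",
    "elementary school/junior high school": "below university",
    "high school": "below university",
    "junior college/vocational school": "below university",
}

df = pd.read_csv("survey.csv")  # hypothetical input file
df["ai_knowledge"] = df.apply(knowledge_score, axis=1)
df["edu_group"] = df["education"].map(EDUCATION_GROUPS).fillna("other")
```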

2.3 Analysis

First, we plotted the distribution of Q1 (seven-point scale) to compare the level of concern about each scenario by country. Second, we investigated how each variable affected ELSI awareness. ELSI was divided into four items (ethics, tradition, law, and social benefit), and an analysis was conducted for each of them. We conducted a multiple linear regression to investigate the relationship between the responses on ethics, tradition, law, and social benefit for each scenario (Q2–Q5, seven-point scales, dependent variables) and five independent variables: age, sex (“men” served as the baseline), education (“below university” served as the baseline), knowledge of AI (the number of correct responses to the four quizzes), and country (“Japan” served as the baseline). This regression alone can examine differences between Japan and Germany, Japan and the US, and Japan and Korea, but not differences among the other country combinations (i.e., Germany and the US, Korea and the US, and Korea and Germany). Therefore, we also conducted multiple linear regressions with Germany and with the US as the baseline, respectively. Multicollinearity was assessed using the variance inflation factor (VIF); a high VIF suggests that a variable is highly correlated with the other variables, and we eliminated any variable with VIF > 2. The analysis was conducted using IBM SPSS Statistics 25 software. Statistical significance was set at p < 0.05.
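As an illustration of this analysis, here is a minimal sketch in Python with statsmodels; the authors used IBM SPSS, so this is a reconstruction under assumptions rather than their actual workflow, and the column and category names (e.g., q2_ethics, sex values of “men”/“women”) are hypothetical.

```python
# Minimal sketch of the regression described above, using statsmodels rather
# than SPSS; all column and category names are hypothetical assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey.csv")  # hypothetical prepared data set

# Dummy-code the categorical predictors, then drop the baseline categories
# used in the paper: men, "below university", and Japan.
X = pd.get_dummies(
    df[["age", "ai_knowledge", "sex", "edu_group", "country"]],
    columns=["sex", "edu_group", "country"],
).drop(columns=["sex_men", "edu_group_below university", "country_Japan"])
X = sm.add_constant(X.astype(float))

# Multicollinearity screen: eliminate any predictor with VIF > 2, as in the paper.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
X = X.drop(columns=vif[(vif > 2) & (vif.index != "const")].index)

# One model per dependent variable (Q2-Q5); shown here for the ethics item.
model = sm.OLS(df["q2_ethics"].astype(float), X).fit()
print(model.summary())  # unstandardized coefficients (B) and p-values
```

Rerunning the same model with the country dummies re-based on Germany or the US yields the remaining pairwise country comparisons.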

3 Results

The simple tabulation of the number of responses for each variable is shown in Appendix 1.

3.1 Level of AI knowledge

The mean ± standard deviation (SD) number of correct answers was 2.83 ± 1.22 in Japan, 2.81 ± 1.07 in Korea, 2.70 ± 1.07 in Germany, and 2.23 ± 1.22 in the US. Japan had the highest percentage of respondents who answered all four quizzes correctly (38.0%). In both Germany (36.6%) and Korea (35.9%), three correct answers was the most common result. In the US, the percentages of respondents with three (27.5%) and two (27.4%) correct answers were nearly equal (Fig. 2).

Fig. 2 Number of correct answers to the four quizzes on the accuracy of respondents’ AI knowledge. The graph legend (0–4) shows the number of correct answers to the four AI quizzes
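Under the same hypothetical column names as in the sketches above, these descriptive statistics could be reproduced along the following lines:

```python
# Minimal sketch: per-country mean and SD of the 0-4 knowledge index and the
# percentage distribution of correct-answer counts (hypothetical column names).
import pandas as pd

df = pd.read_csv("survey.csv")  # hypothetical prepared data set

print(df.groupby("country")["ai_knowledge"].agg(["mean", "std"]).round(2))
dist = (df.groupby("country")["ai_knowledge"]
          .value_counts(normalize=True)
          .mul(100).round(1)
          .unstack(fill_value=0))  # rows: countries; columns: 0-4 correct answers
print(dist)
```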

3.2 Level of disagreement/agreement for the scenarios (Q1)

The percentage of those who were willing to adopt AI (sum of the responses 1, 2, and 3) was highest in the “voice” scenario in Japan (56.2%), the US (57.0%), and Korea (66.3%), and in “face” in Germany (53.4%). The percentage of those who were not willing to adopt AI (sum of the responses 5, 6, and 7) was highest in “immigration” in the US (35.3%), Germany (39.3%), and Korea (25.3%), and in “recruiting” in Japan (26.1%) (Fig. 3). Although there were differences among countries, respondents were relatively accepting of the use of AI in “voice” and relatively opposed to its use in “immigration.”

Fig. 3 Responses to the four scenarios (“immigration,” “voice,” “recruiting,” and “face”) in Japan, the US, Germany, and Korea
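The percentages reported above can be computed from the raw seven-point Q1 ratings as follows; this is a hedged sketch with hypothetical column names (e.g., q1_voice), not the authors’ code.

```python
# Minimal sketch: share of respondents willing (Q1 responses 1-3) and
# unwilling (responses 5-7) to adopt the AI, per country and scenario.
import pandas as pd

df = pd.read_csv("survey.csv")  # hypothetical file; one Q1 column per scenario

for scenario in ["immigration", "voice", "recruiting", "face"]:
    q1 = df.groupby("country")[f"q1_{scenario}"]
    willing = q1.apply(lambda s: s.isin([1, 2, 3]).mean() * 100).round(1)
    unwilling = q1.apply(lambda s: s.isin([5, 6, 7]).mean() * 100).round(1)
    print(scenario, "willing:", willing.to_dict(), "unwilling:", unwilling.to_dict())
```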

3.3 Relationship between the responses to ELSI awareness and AI knowledge (Q2–5)

The unstandardized coefficient (B) of AI knowledge was statistically significant across ethics, tradition, law, and social benefit in the “recruiting” and “immigration” scenarios (Tables 3 and 5): respondents with more AI knowledge had stronger ELSI concerns for these two scenarios. Across all four scenarios, the coefficient of AI knowledge was statistically significant for ethics and law (Tables 2, 3, 4, 5): those who understood AI had stronger concerns about ethics and law, but not about tradition and social benefit.

Table 2 Statistical results of the responses to the “voice” scenario
Table 3 Statistical results of the responses to the “recruiting” scenario
Table 4 Statistical results of the responses to the “face” scenario
Table 5 Statistical results of the responses to the “immigration” scenario

The unstandardized coefficients (B) of age and sex were statistically significant across ethics, tradition, law, and social benefit in all scenarios (Tables 2, 3, 4, 5): older respondents had stronger ELSI concerns than younger respondents, and women stronger concerns than men, across all scenarios. The coefficient of education was not statistically significant for ELSI awareness in any scenario (Tables 2, 3, 4, 5), meaning that the responses did not differ by level of education.

3.4 Countries’ differences in responses to ELSI awareness

In the “voice” scenario, the unstandardized coefficient (B) for the US (relative to Japan) was statistically significant for ethics, tradition, and law: US respondents, more than Japanese respondents, agreed with the ethics, tradition, and law items, but not with social benefit. Similarly, Germany, more than Japan, significantly agreed with tradition, law, and social benefit, but not with ethics. Korea, more than Japan, significantly agreed with ethics, law, and social benefit, but not with tradition (Table 2).

In the “recruiting” scenario, the US, more than Japan, significantly agreed with tradition and law, but not with ethics and social benefit. Germany, more than Japan, significantly agreed with law and social benefit, but not with ethics and tradition. Korea, more than Japan, significantly agreed with ethics, tradition, law, and social benefit (Table 3).

In the “face” scenario, the US, more than Japan, significantly agreed with tradition and law, but not with ethics and social benefit. Germany, more than Japan, significantly agreed with law but not with ethics, tradition, and social benefit. Korea, more than Japan, significantly agreed with law and social benefit, but not with ethics and tradition (Table 4).

In the “immigration” scenario, the US, more than Japan, significantly agreed with law but not with ethics, tradition, and social benefit. Germany, more than Japan, significantly agreed with ethics, tradition, law, and social benefit. Korea, more than Japan, significantly agreed with ethics, law, and social benefit but not with tradition (Table 5).

Some characteristics were found across scenarios. For example, the US, more than Japan, significantly agreed with law across all scenarios. Germany, more than Japan, significantly agreed with law but disagreed with social benefit across all scenarios. Korea, more than Japan, significantly agreed with law and social benefit across all scenarios (Tables 2, 3, 4, 5).

4 Discussion

4.1 RQ: What kinds of knowledge and ELSI awareness of AI do people have in each country?

The level of AI knowledge was higher in Japan than in the US, Germany, and Korea. This tendency was in line with previous studies (Hartwig et al. 2022; Ikkatai et al. 2022a, b). However, as an international survey showed, less than half of respondents from Japan reported having a good understanding of AI (Ipsos 2022). Japanese respondents may feel that they do not understand AI, regardless of their actual knowledge of it.

The level of AI knowledge of Korean respondents ranked second in this study. However, in the same international survey, 72% of Korean respondents believed they had a good understanding of what AI is (Ipsos 2022). It remains unclear where this difference between Korea and Japan comes from, but Japanese people might feel a sense of ambivalence toward AI.

Half of the German respondents answered positively to having a good understanding of AI (Ipsos 2022), while the level of AI knowledge of German respondents ranked third in our study. Taken together, these results show that the relationship between self-perceived understanding of AI and measured knowledge of AI differs by country, with no clear correlation found.

For the US respondents, the level of AI knowledge ranked fourth in this study, yet more than half of the Ipsos survey’s US respondents (63%) believed they had a good understanding of what AI is (Ipsos 2022). Expectations of and optimism toward AI may be high in the US regardless of, or perhaps because of, people’s actual understanding of AI.

In terms of the ELSI items of ethics, tradition, law, and social benefit, particularly large differences were found in law and social benefit.

Regarding law, Japanese respondents had stronger concerns about this item: respondents in the US, Germany, and Korea were all relatively more positive than those in Japan about the current state of AI law across the scenarios. This suggests three possibilities, detailed below.

The first is that the system of laws and regulations for AI may be better established in the US and Germany than in Japan. Japan has guidelines for AI (i.e., the “Social Principles of Human-Centric AI” by the Government of Japan, Cabinet Office, Council for Science, Technology and Innovation), but there are still no legally binding rules for the use of AI in general.

In the US, the White House published the “Blueprint for an AI Bill of Rights” in 2022. It applies five principles for protecting civil rights or democratic values: (1) safe and effective systems, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback (The White House 2022). In 2021, the European Commission published the “Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)” for regulating AI, and categorized AI depending on the level of its risk (European Commission 2021).

In Germany, AI laws are mostly provided by EU regulations (Spies 2022). In addition, the Conference of Independent Data Protection Supervisory Authorities of the Federal Government and the Länder (DSK) issued the Hambach Declaration on April 3, 2019, listing seven principles for data protection. This reveals the skepticism of the supervisory authorities toward AI systems under the provisions of the General Data Protection Regulation (GDPR) (Tribess 2023). The German government makes specific reference to the protection of employee rights in the context of the use of AI in the workplace. Innovation-inhibiting ex ante regulations are to be avoided. The German government rejects AI systems for biometric recognition in public spaces and automated government scoring systems and aims for their prohibition under European law.

The second possibility is that people in the US, Germany, and Korea may understand AI laws better than those in Japan. However, it remains unclear whether AI laws have been publicized widely enough to affect people’s understanding of the laws established by governments and related organizations.

The third possibility relates to the public’s trust in the governments and companies that regulate and use AI. Trust in government is 41% on the OECD average and 49% in Korea, but lower in Japan (24%) (OECD 2021). Furthermore, only 39% of Japanese respondents trusted companies that use AI, although this did not differ greatly from the US (35%) and Germany (42%) (Ipsos 2022).

As for social benefit, Korea, more than Japan, was relatively positive about the social benefit of AI, whereas Germany, more than Japan, showed stronger concerns about social benefit across the scenarios. This is consistent with a previous international survey: the percentage who said the development of AI is a good thing for society was 69% in Korea, 65% in Japan, 47% in Germany, and 47% in the US (PEW Research Center 2020). Although the use of AI varies from scenario to scenario, our study revealed that the two Asian countries, Japan and Korea, had relatively positive attitudes toward social benefit across scenarios.

The responses to the “voice” and “face” scenarios differed from those to “recruiting” and “immigration.” AI knowledge was significantly related to social benefit in the “recruiting” and “immigration” scenarios (Tables 3 and 5) but not in the “voice” and “face” scenarios.

We consider three possible reasons for the stronger concerns about “recruiting” and “immigration.” The first is that people often hear about cases relevant to these scenarios in their daily lives through the Internet, TV, and newspapers, and such exposure may have made them feel that these uses could become serious problems in the near future. For example, in 2018, an AI hiring tool introduced by Amazon was reportedly discriminatory against women and caused controversy (BBC 2018). This may have raised awareness that using AI in human resources is not necessarily a good thing.

The second possible reason is that the “recruiting” and “immigration” scenarios may have been perceived to have more serious consequences than the “voice” or “face” scenarios. As a poor implementation of AI could result in unemployment in the “recruiting” scenario, or the rejection of an asylum application in the “immigration” scenario, the severity of these consequences may seem dire compared to the perpetuation of gender stereotyping in the “voice” scenario or harassment in the name of crime prevention in the “face” scenario. The potential for human rights violations may have caused stronger concern toward these scenarios. However, as Segun (2021) argues, there are different value systems that need to be considered when engaging globally with AI ethics, as conceptions of ethics in Western countries may differ from those in East Asian, African, Middle Eastern, or South American cultures. There are also varying degrees of inclusion of transgender people (as in the “face” scenario) and people with physical disabilities (as in the “recruiting” scenario) across the countries studied (Dicklitch-Nelson and Rahman 2022; Heyer 2015). The current perception of AI ethics, human rights, and diversity and inclusion in each country would need to be scrutinized further to come to a better understanding of the trends observed in this study.

Third, the respondents may have felt that they could not decide for themselves whether to use AI in these scenarios: the decision to implement an AI tool would be an organizational one that an individual could not change in “recruiting” and “immigration.” By contrast, the respondents were relatively positive about the current ELSI situation in the “voice” scenario (Fig. 3). UNESCO (2020) claims that gender bias is deeply rooted in the tech industry, yet voice assistance systems are widely used in society. People may have felt that the problems raised in the “voice” scenario were less serious, and that it would be relatively easy to change the voice settings from a female to a neutral voice on one’s personal digital device, unlike the uses of AI in “recruiting” and “immigration.”

Despite such variations in the results of our survey data from four countries, we found an overall tendency for respondents with more AI knowledge to show stronger concerns about ELSI, especially regarding ethics and law, across the four scenarios. At the same time, our investigation showed that the relationship between AI knowledge and self-perceived understanding varies greatly across countries, suggesting that the image and understanding of AI technologies may be strongly influenced by cultural and societal factors. In particular, we found that attitudes toward law and social benefit are conditioned by local cultures.

Additionally, our results support previous findings. First, older respondents and women were more likely than younger respondents and men, respectively, to have stronger concerns about the use of AI across the scenarios. This tendency was also found in earlier studies in Japan and the US (Hartwig et al. 2022; Ikkatai et al. 2022b). We also need to emphasize that many of our respondents had positive attitudes toward the use of AI in each scenario. For example, in the “immigration” scenario, more than 40% of respondents in each country agreed with the implementation of AI (sum of options 1, 2, and 3), and in the “voice” scenario, about 50% of respondents in each country agreed (Fig. 3). This supports previous findings that people have mixed views toward AI (PEW Research Center 2020; Ipsos 2022).

This study had four limitations. The first was the specificity of the scenarios. Each scenario focused on certain issues and included both positive and negative aspects of AI to balance the descriptions, but the public responses might have differed had we used different issues and aspects. For example, in the “recruiting” scenario we focused on the employment of persons with disabilities, while other issues exist, such as gender and racial discrimination. It should therefore be noted that the responses apply only to our specific scenarios and cannot be generalized. Also, the implementation of AI may vary from country to country. The authors confirmed that our scenarios apply at least in Japan, the US, Germany, and Korea, but they may not be applicable or relevant in other countries.

The second limitation was the public recognition of ethics, tradition, law, and social benefit. As we only asked the participants to choose within the options of a seven-point scale, it remains unclear how they understood each perspective. In particular, the response to social benefit differed between scenarios. Further study is required to investigate how the understanding of each perspective is different in each country using free descriptions and interview surveys.

The third limitation was the participants. As this study was conducted on the Internet, the participants were limited to internet users. We consider that this approach still reaches most people, as the percentage of internet use exceeds 90% in each country (98% in Korea in 2021, 91% in Germany in 2021, 91% in the US in 2020, and 90% in Japan in 2020) (World Bank 2022). However, internet surveys are prone to self-selection bias, where participants choose to respond based on their own motivations, and to non-response bias, where individuals who choose not to respond may have different characteristics than those who do, partly because of the very convenience of responding online.

The fourth limitation was how we measured individual “understanding of AI.” In this study, we used the number of correct responses to the four quizzes as an index of the understanding of AI. However, an “understanding of AI” should include various aspects, such as the accuracy of AI knowledge, AI knowledge that does not depend on accuracy, an awareness of the risks and benefits of AI, and self-perception of AI. Therefore, what we have measured is only part of the “understanding of AI.” A comprehensive account would require considering how to measure the understanding of AI more effectively.

5 Conclusions

This study investigated what helps to develop ELSI awareness regarding the normative social use of AI and what kinds of ELSI concerns people have. We collected responses in Japan, the US, Germany, and Korea using four scenarios. We found that respondents who had more AI knowledge were likely to show stronger concerns about ELSI across the four countries. Positive responses to the scenarios were highest in the “voice” scenario in Japan, the US, and Korea, and in the “face” scenario in Germany. Japanese respondents had stronger concerns about law, and the US, more than Japan, was relatively positive about the current state of the law. Korea, more than Japan, was relatively positive about social benefit, whereas Germany, more than Japan, showed stronger concerns about social benefit across the scenarios. ELSI concerns about AI thus differ across the four items (ethics, tradition, law, and social benefit) as well as across the scenarios. Our results suggest that deepening individuals’ knowledge and understanding of AI is critical to furthering the discussion on the ELSI of AI. While not all aspects of ELSI are of constant relevance, a country’s culture also affects how AI is used and how people express concerns about it. AI is a new technology that is rapidly spreading across countries, and its ELSI should be discussed with the public both globally and locally. These results may lend insight to ongoing discussions about the societal impacts of scientific and technological innovations, such as cultural differences in bioethics regarding the implementation of nanotechnology (Matsuda and Hunt 2009) or informed consent (Tham et al. 2022).