1 Introduction

More than two decades ago, information scientist Rob Kling and colleagues (2000) warned against interpreting social phenomena as independent of and layered on a specific technology. History may be repeating itself: the deployment of AI technologies has raised similar concerns. Research in academic and non-academic fields alike often talks about “social impacts” in terms so broad as to leave doubts about the openness of its perspective: the “social” is just something to be taken care of. On the contrary, society is much more than a passive recipient of the transformations carried out by inevitable technological development. As Dourish and Bell (2011, 51) observed for the Internet, a “naive orientation towards social impacts frames the relationship between the social and technical too narrowly”.

In general, the AI community has been awakening to this risk and taking the first steps to avoid such a “layer cake” model. In the last few years, an articulated discussion has developed over the regulatory, technical and ethical aspects of AI, but the growing and relevant social aspects of AI—at both the design and use stages—have captured the attention of the AI community only to a lesser extent. For instance, the European Commission documents on AI are original contributions that set the stage for framing and regulation (European Commission 2020; European Commission 2021). Technical studies (Sirbu et al. 2019), in addition to research on ethics (Floridi and Cowls 2019) and privacy (Stone et al. 2016), normally address and investigate the “social impacts” of AI. In the eyes of many, these efforts satisfy the need for a deeper knowledge of the social world where these AI-based systems come to operate. Along with some important recent work in computer science that seeks to open the perspective to the “social” (Dignum 2019), there are other promising approaches to studying the societal implications of AI.

Sartori and Theodorou (2022), for example, highlight the need for a proper sociology for AI in order to develop a sociological perspective within the AI community. Only when AI is conceived as a sociotechnical system can a more fruitful approach for the future of AI be thought through. From design to use, there is a path to balance a Human-in-Control (HIC) approach with an ecosystem organized around diversity (in data collection and algorithmic models) and intersectionality (in society).

Cave and Dihal (2019) dig into the narratives surrounding AI to extrapolate the most common visions among the public. Hopes and fears about AI scenarios might heavily influence how the public perceives and approaches the technology. The diverse and composite configuration of these visions gives the general public a role in the acceptance and adoption of the technology, mediated through the interaction between the media and public opinion. The public perception of AI is deemed essential to “how AI is deployed, developed and regulated” (ibid., p. 331).

This article falls in between these two approaches, elaborating a socio-technical perspective that overcomes the current neglect of how individuals frame, the media portray, and policy makers regulate AI. To “problematize” the conversation about AI (Roberge et al. 2020), identifying and accounting for who is involved with its role, purpose and imaginaries within the socio-technical system is a mandatory step. Just as sociologist Patrice Flichy (1995) argued for the need to contemplate all subjects—from design to use—engaged in the “frame of use” proper to a specific technology, here we focus on individuals and their levels of awareness, knowledge and emotional response when it comes to AI. Our main argument holds that, as with previous technologies, not only do these levels vary across social groups depending on many socio-demographic factors, but they are also crucial for the diffusion and acceptance of the technology in society.

In this direction, the article is organized in six sections. After the introduction, we build the theoretical framework for our main argument relying on established concepts such as “technology as a practice” (Suchman et al. 1999; Star 1999), “sociotechnical imaginaries” (Jasanoff 2015) and “narratives” (Cave et al. 2020). We illustrate the main existing hopes and fears associated with AI and robots, as they refer to “imaginaries” that shape and organize how society sees and interprets technology. The third section gives a detailed description of the data and methods that, notwithstanding the limitations, ground our novel socio-technical approach in an original empirical study. The fourth section illustrates the results of the ad-hoc survey conducted within the University of Bologna. It examines the level of awareness (Sect. 4.1) and opinions (Sect. 4.2) with respect to some relevant socio-demographic variables. The latter are also crucial to analyse both utopian and dystopian narratives that, replete with exaggerations, are telling about individuals’ emotional responses (Sect. 4.3) and perceived likelihood in the future (Sect. 4.4). A deeper dive into the perception of narratives (Sect. 4.5) also offers some relevant insights about the role of gender and competence. The fifth section focuses on two issues—the current state of “AI anxiety” and the underestimated point of view of “non-experts”—which we deem essential to problematize and to sustain the need for a sociological perspective for AI, opening the floor to future comparative research (Sect. 6).

2 The theoretical framework: a call for a socio-technical perspective

A too narrow definition of the relations between society and technology usually leads towards an attitude of “technological determinism”, where the technology unfolds its logic over a unique path, impacting society and determining its output (Ogburn 1922; Winner 1977). Alternatives have been played out focusing on the reverse logic of how social factors are responsible for shaping technical development and adoption (Mackenzie and Wajcman 1999). In dealing with technicalities, different stakeholders shape technology while solving their conflicts over resources, affordances and power. To a different degree, the constructionist school (Pinch and Bijker 1984), the varieties of approaches that refer to Science, Technology and Society (STS) studies (Callon 1986) and Actor-network Theory (Latour 2005) investigated the relation between humans and artifacts, and how social groups—other than developers—are relevant for diffusion and adoption, for successes and failures. Technologies are products of social practices, actions and decisions: from their design to their application and use phases, they come out of contexts, with their specific institutional and organizational cultures, that elaborate on and imagine what they are needed for and where they might be employed. For our argument, AI might be useful to screen thousands of job curricula or historical photos, alleviating Human Resources’ recruitment work (Dastin 2018) or historians’ classification and colorization of black-and-white pictures (Goree 2021). Yet, it was easily foreseeable that out-of-context AI systems would come up with biased suggestions, such as recruiting higher percentages of males or steering a middle path on colours (leading to beiges). If thinking about technology as neutral is misleading, considering AI simply the latest tool to ease the trivial or complex problems individuals, institutions, states and firms have to deal with is worse.

While acknowledging that the march of technology is not inevitable is becoming more common, the AI community still needs to take some steps forward. Sartori and Theodorou (2022), for example, highlight the need for a proper sociological perspective within the AI community. As such, sociological insights enter the picture connecting inequalities and AI systems, in an effort to offset the recently discovered magnifying-glass effect in the process of “automating” inequalities (Eubanks 2018). More open technical approaches—such as the HIC—are also speaking up for a fairer AI (based on shared principles) that is transparent and explainable, accountable and contestable (Sartori and Theodorou 2022). Further down the line, a sociotechnical perspective calls for considering the values and inequalities, institutional and organizational practices that are embedded in technology.

Another interesting approach relates to the narratives surrounding AI, used to extrapolate the most common visions among the general public. AI technologies—navigation systems, chatbots, or music and movie recommendation systems—are increasingly present in our daily life, introducing significant changes. Their rising presence in our everyday life notwithstanding, laypeople find it extremely difficult to understand these systems’ functioning and consequences. Narratives help in this direction through their broader capacity to convey meaning to the social and cultural changes that come along with the technology (Natale 2019; Natale and Ballatore 2020). The study of how individuals approach new technologies and how their perceptions, understanding and expectations originate and unravel offers insights into the multidimensional relation between technology and society.

To a varying degree, all actors involved in the AI process—from start to finish—influence the construction of narratives and their power over the public. Over the centuries, narratives have always played such a role, as for the press (Eisenstein 1980), the telephone (Fisher 1992; Marvin 1988) and the Internet (Mosco 1999; Levy 1984). As such, narratives are a building block of a broader “socio-technical imaginary”, defined as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of advances in science and technology” (Jasanoff 2015, p. 4).

Visions of desirable futures (or of resistance to them) are supported by shared understandings imbued with values and expectations about society, modernity, human agency and technical potentialities. Again, the Internet serves as an example. The distributed, decentralized network infrastructure at the heart of the Internet reflects shared meanings and ethical attitudes across different social groups that overcame sectoral boundaries in the university, the military and the tech industry in the seventies. Values such as freedom, access and liberalism combined into what has been labelled Internet culture or the “Californian ideology” (Levy 1984; Barbrook and Cameron 1995).

Although AI technologies are not comparable to Information and Communication Technologies (ICTs) in terms of the autonomy of use afforded to the final user, the idea of the imaginary is nonetheless relevant to our case. As Jasanoff’s definition informs the conceptual frame that surrounds any technology, it applies to AI and its community as well. By cutting through the dualism of structure and agency, it “combines some of the subjective and psychological dimensions of agency with the structured hardness of technological systems, policy styles, organizational behaviours, and political cultures” (Jasanoff 2015, p. 24). A sociotechnical perspective for AI allows for a deeper investigation of the social, economic and political roots of these imaginaries, disentangling possible conflicts over competing visions. As happened in the past with the Internet (Lesage and Rinfret 2015; Levy 1984), how the AI community envisions its future steps is key for pluralism and democratic accountability (Crawford 2021).

AI developers, policy makers and the media are other key players in the field. Research shows that not only do AI developers pursue specific technical goals, but their readings might also be a source of influence (Greshko 2019; Dillon and Schaffer-Goddard cit. in Cave et al. 2020, p. 8), as much as their collective imaginary (Robertson 2010; Bory and Bory 2016).

Policymakers, too, might choose among different forms of regulation based on their beliefs and perceptions of AI, diverting public and private funding or affecting governance choices (Natale 2019). As with the Internet, they acted for a long time on an uncritically optimistic view that technical development is necessary and desirable under the auspices of future economic well-being (Wyatt 2003). Another example of the public’s influence comes from rising awareness about robots’ potentialities. Lin et al. (2008) identify public perceptions as one of the main market forces currently shaping the development of military robotics and related regulation. Away from AI, in 2015 the European regulation of Genetically Modified crops changed—widening the powers to restrict or prohibit their production (EU Commission 2015)—not as the result of new scientific data. The change was driven by the increased perception of risk among consumers (Malyska et al. 2016).

It is a well-known fact that the media contribute to the framing of the public (DeFleur and Ball-Rokeach 1989; Cave et al. 2018) by covering selected features of emerging technologies. With regard to AI, the media discussion that has developed since 2014 is quite sophisticated in tone, but not in content (Ouchchy et al. 2020). For instance, when bound to the ethics of AI, it does not go deep into the technicalities of different types of AI but uses specific examples to thematize the topic at large. While specialized writers might lack specific knowledge when it comes to recommendations, Ouchchy et al. (2020) find a sound interest in the public debate about regulation. For the media, accounting for both negative and positive social implications is key to supporting a balanced framing and portrayal of AI. This could pave the way to even-handed media reporting.

Sometimes, science fiction jumps onto the scene with strikingly accurate descriptions of emerging technological issues, drawing on narratives that have expressed hopes and fears over centuries. Musa Giuliano (2020) shows how fiction might act as a cautionary tale that nudges or forestalls some imaginaries compared to others. When we add “intersectionality” to the picture, there is room for more “intersectional sociotechnical imaginaries” that critically address the dominant narratives and the related AI potentialities (Ciston 2019). To add to this, it is relevant to consider that collectively held and institutionally stabilized visions are publicly discussed and performed by the media as much as they are instrumentally used by firms in trumpeting their notion of technological advancement (e.g., despite promoting the first internal Committee on Ethics, at the end of 2020 Google wound up firing its two most prominent names because of so-called conflicting views; see Footnote 1).

As sociologist Alberto Melucci wrote, since “the future is born of the past, it is equally true that the past is also continuously shaped by the future” (1996, p. 12); a sociotechnical perspective for AI thus offers new tools to link past, present and future. How future technological development is—individually and collectively—imagined, coordinated and aggregated into a vision of the world is worth investigating. For one, “imagined futures” are a way to control the unpredictability of the future for strategic actions in fields concerning money and innovation (Beckert 2016). For another, concepts such as uncertainty (Giddens 1990) and risk (Beck 1992) nicely fit the study of AI technologies. As “expert systems” intervening in the material and social worlds, they are tools for mediating knowledge asymmetries and balancing feelings of emotional anxiety. Tightly associated with the idea of modernity in the Western world, anxiety especially arises from the lack of technical expertise, which, in turn, requires a leap of faith in the technology. For the completion of everyday routines, expert systems require trust (Giddens 1991). Calling attention to the relevance of shared meanings and collective imaginaries, future individual expectations and trust is an important addition to the study of technology “put in context”. Narratives, sociologically conceived as “organizing visions” for society (Mosco 2004), offer the bridge to an original contribution to the debate. As they reflect and reproduce traditional lines of social, economic and political inequality, narratives can be telling about collective and individual knowledge in a society increasingly organized around AI. What follows is a closer look at the dominant AI narratives.

2.1 AI narratives

Relevant works in the field dig into the narratives regarding AI in the English-speaking countries of the West, looking both inside and outside the world of fiction (Cave et al. 2019; Cave et al. 2020; Cave and Dihal 2019). Narratives can originate both from how the scientific community and the media cover the topic and from how books, movies and TV series speculate about technology (Cave et al. 2018). There is a propensity to describe AI in either overly optimistic or overly pessimistic tones, which confirms a long-term trend (Fast and Horvitz 2017). In other words, both utopian and dystopian narratives trace back to the main recurrent hopes and fears connected to technology. Talking about AI, Cave and Dihal find four main narrative scenarios of hope, each matched by a scenario of fear. We briefly describe them in pairs.

Immortality-dehumanization. The first dyad relates to the medical field, in which AI is the cornerstone of new and important areas of research. The extreme evolution of this scenario foresees humans conquering Immortality, while the dystopian drift is Dehumanization, where humans lose their essence, ditching values and emotions.

Freedom-obsolescence. Freedom refers to the condition of humans liberated from tedious or tiring tasks, be they physical or cognitive. No matter how astonishing technical developments are, this optimistic scenario, where AI and robots totally replace humans in the sphere of work, leaving time to engage only in leisure activities, is far from reality. The far-stretched opposite representation, Obsolescence, is the risk linked to this technical turning point.

Gratification-alienation. The optimistic scenario focused on Gratification sees AI and robots becoming an essential element of the relational sphere, satisfying every possible human desire. The drift of this utopian narrative foresees a world where machines could satisfy all possible relational desires, leading to a scenario of Alienation, where people prefer interacting with technologies rather than with other people.

Dominance-uprising. The last dyad concerns the use of AI in the military field. Identifying new tools that allow nations or communities to dominate and maintain security over a territory is a major hope. The matching fear is one of the most iconic narratives in Western filmography: the uprising of machines that seize physical and cognitive power, escaping human control.

Why are narratives so powerful? As one of the tools reflecting the content of the social imaginary, narratives forge how actors perceive and understand technology in their daily life. When interpreted as practice (Suchman et al. 1999), technology is telling about the underlying relations of production and use. However, the reality depicted in these eight scenarios turns out to be somewhat disconnected from what, so far, are the plausible technical possibilities of AI and its purposes under development (Floridi 2020; Musa Giuliano 2020).

To explain this misalignment, researchers have pinpointed how expectations and imagined affordances of AI (Neff and Nagy 2016) influence users’ perceptions and understandings. A well-known case study is Tay, one of Microsoft’s most advanced chatbots. Soon after its launch in 2016, Tay started to interact with Twitter users in such an inflammatory and obscene manner that it was shut down within 24 hours. Tay was the battlefield where designers’ and users’ expectations about what she should do, or how to use her, fully conflicted. As Nagy and Neff (2015, p. 1) point out, rather than in terms of fixed capacities, we should think in terms of imagined affordances, for this allows us to render together users’ attitudes, designers’ intentions, and both the materiality and functionalities of the technology. As such, technical and imagined affordances are crucial to understanding narratives, their articulations and their possible implications for society at large.

A second accredited explanation for this misalignment concerns the anthropomorphized notions of technologies (Zemčík 2021), driven by the need for social connection, by the wish to understand the relevant technology, and by the goal of promoting its acceptance (Salles et al. 2020), especially when robots are the object of research (Katz et al. 2015). Science fiction and the media greatly contribute to this end, bearing and fostering social, political and ethical issues. Overall, in a socio-technical perspective, technologies have capacities that extend to the social realm through interactions, perceptions and actions: they are never neutral.

All in all, the socio-technical perspective drafted here leads us to formulate the research questions we investigate with an ad-hoc survey on awareness and knowledge of AI technology. The key elements of this perspective disentangle the socio-technical imaginaries behind the dominant AI narratives, which—as we will see in Sect. 4—powerfully shape attitudes and emotional responses in the public.

3 Data and methods

3.1 The survey: questionnaire and sample description

To investigate the novel topic of public perceptions of AI and robots, we opted for an exploratory approach, consisting of a dedicated survey administered to people affiliated with the University of Bologna. An original ad-hoc questionnaire providing all essential definitions (Footnote 2) was specifically designed for this survey. Research questions were informed by the need to know to what degree social practices, actions and decisions (Sect. 2) shaped the perception of AI technology, leading us to investigate levels of knowledge, awareness and trust.

Respondents were asked to express their opinions on both robots and AI and solicited to substantiate their answers through qualitative open-ended questions (Footnote 3). They were surveyed about their level of awareness (Footnote 4) and their attitude towards the further future development (Footnote 5) of AI. In relation to their visions of a (desirable or not) technological future, we also investigated the emotional response (concern or excitement) to the dominant AI narratives and their perceived likelihood in the next 15 years. Table 1 shows the eight scenarios—used as benchmarks—ordered to present, first, hopes and, then, fears.

Table 1 The eight AI narratives

The questionnaire was sent by e-mail to all the people belonging to the University of Bologna, be they students, professors or other employees; 5,391 respondents completed the survey. The sample is made up of 57% women and 43% men, born between 1950 and 2003. The respondents were divided with respect to their generation: 22% were born between 1950 and 1989; 23% between 1990 and 1996; 54% between 1997 and 2003 (Footnote 6). Competence in the field of technology is used to classify respondents by their closeness to and experience with information technology (IT) or computer science (CS) (which we refer to as “competence”): 8% graduated (undergraduate, master’s or PhD) in the two selected fields, 38% attended at least one university course or are programming-savvy, while 55% have no competence.
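For illustration, the following is a minimal sketch of how the two categorical variables could be derived from raw responses; all column names and values are hypothetical, not the actual questionnaire variables.

```python
import pandas as pd

# Hypothetical respondent records; the column names are illustrative,
# not the actual questionnaire variables.
df = pd.DataFrame({
    "birth_year": [1972, 1993, 1999],
    "degree_field": ["sociology", "computer science", None],
    "it_course_or_programming": [False, True, False],
})

# Generation bins used in the text: 1950-1989, 1990-1996, 1997-2003.
df["generation"] = pd.cut(
    df["birth_year"],
    bins=[1949, 1989, 1996, 2003],
    labels=["1950-1989", "1990-1996", "1997-2003"],
)

# Competence: a degree in IT/CS dominates, then course attendance or
# programming experience, then no competence.
def competence(row):
    if row["degree_field"] in ("information technology", "computer science"):
        return "degree in IT/CS"
    if row["it_course_or_programming"]:
        return "course or programming-savvy"
    return "no competence"

df["competence"] = df.apply(competence, axis=1)
print(df[["generation", "competence"]])
```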

3.2 Limitations of the study

This study is subject to several limitations. First, although it can provide important original insights, the sample should not be considered representative of the whole population. At the same time, the availability of data was so crucial for starting to investigate such a novel topic that—even without external funds—we set up the survey, fully aware of the non-representativeness of the final sample. Nevertheless, the exploratory nature of this study allows for considering the results as first steps upon which to build future representative surveys.

Second, given the peculiar population, the sample is obviously skewed with respect to some relevant socio-demographic characteristics, such as age and education. As expected, students make up the great majority: 77% of respondents are under the age of 30 and 42% of the sample holds a degree. Further investigations are, therefore, needed to verify whether our results hold in other samples.

4 Results

Here, we present results on awareness of AI, opinions on robots and AI, emotional responses to the narratives, and the perceived likelihood of the different future scenarios by gender, generation and competence (4.1, 4.2, 4.3, 4.4). A deeper dive into competence and gender (4.5) also offers further interesting insights.

4.1 Awareness of AI

First of all, we wanted to assess the level of AI awareness by asking whether participants had heard, read or seen material related to the topic of AI in the last 12 months. In the sample, 76% answered positively, 16% negatively, while the remaining 8% said they were unable to say for sure if what they had read, heard or seen had anything to do with AI.

Table 2 reflects the absence of substantial differences by generation (except a slightly lower percentage among the youngest), while differences emerge with respect to the other two variables. While 85% of men claim they have read about AI in the last year, the percentage drops to just under 70% among women. Unsurprisingly, there is a higher percentage of contact with the topic (90%) among those who have a degree in IT or CS than among those who have no competence (70%) or those who have “only” attended a university course in the field or are programming-savvy (82%).

Table 2 Contact with AI issues in the last 12 months by gender, generation and competence in IT or CS fields; percentages

To test the level of general knowledge of AI, we proposed six technologies (Virtual assistants, Smart Speaker, Google Search, Facebook Tagging, Recommendation systems, Google Translate), asking respondents whether each actually uses AI.
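For illustration, here is a minimal sketch of how such a knowledge score could be computed, assuming (as the item list suggests) that the correct answer is “yes” for each of the six technologies; the identifiers are hypothetical, not the actual survey codes.

```python
# The six technologies proposed in the questionnaire; identifiers are
# illustrative. A correct answer is assumed to be "yes" for every item.
ITEMS = [
    "virtual_assistants", "smart_speaker", "google_search",
    "facebook_tagging", "recommendation_systems", "google_translate",
]

def knowledge_score(answers: dict) -> int:
    """Count how many of the six technologies a respondent correctly
    identifies as using AI."""
    return sum(1 for item in ITEMS if answers.get(item) is True)

# Example: a respondent who recognizes AI in four of the six systems.
answers = {item: (i < 4) for i, item in enumerate(ITEMS)}
print(knowledge_score(answers))  # -> 4
```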

As shown in Table 3, we found differences across all three variables considered. With regard to gender, there is a greater—yet not particularly high—ability among men to correctly identify the presence of AI in the proposed examples. The gap narrows within the category “three correct answers out of six”, substantiating the result that, among women who do not score “all correct”, the majority has sufficient knowledge to identify at least half of the systems that use AI.

Table 3 Technologies using AI (Virtual assistants, Smart Speaker, Google Search, Facebook Tagging, Recommendation systems, Google Translate) correctly identified by gender, generation and competence in IT or CS; percentages

The distance between the first two generations (1950–1989; 1990–1996) and the youngest (1997–2003) is interesting: about 50% of older respondents provide all the correct answers, while among the youngest the percentage falls to 37%. This result suggests that belonging to the digital-native generation—that is, being the first to be born at a time when technologies such as smartphones, social media and AI already existed (Bennett et al. 2008)—does not translate into a better understanding or more proficient use of the technology behind these tools.

Finally, the level of competence portrays the biggest difference. While attending a course or being programming-savvy does not seem to make a big difference compared to having no competence at all, a degree in IT or CS does: it increases by almost 20 percentage points the share of individuals who correctly identify AI in all the suggested examples. Thus, with regard to general knowledge of AI, competence plays the greatest role. Moreover, competence could also be the factor behind some of the gender and generation differences. Unsurprisingly, there are lower percentages of correct answers among women and the youngest (1997–2003), since these groups count fewer graduates in IT or CS.

4.2 Opinions on robots and AI

Existing literature fully agrees that one of the salient predictors of knowledge and support for AI is gender: all around the world, women have a worse image of AI than men (Eurobarometer 2017). In the attempt to understand the opinions of individuals towards AI, competence with technology is also found to be a good predictor (Zhang and Dafoe 2019).

Overall, our sample reveals a positive attitude towards these technologies. The modal response was “quite positive” for both robots (60%) and AI (58%), followed by “very positive” (20%; 22%): about 4 out of 5 respondents claim to have more positive than negative opinions. 18% are “not very positive” and 2% “not at all positive”. To further investigate the factors behind positive views, Table 4 shows interesting differences by gender, generation and competence with regard to “very positive” opinions, which are slightly more frequent for AI than for robots, regardless of the type of breakdown.

Table 4 “Very positive” opinion on robots and AI by gender, generation and competence in IT or CS; percentages

Our data displays a gender divide in opinions: a higher percentage of men (30%; 32%) shows a very favourable attitude towards both technologies, compared to women (12%; 16%). Generation seems to have a slight influence only on robots: the percentage of “very positive” opinions among the youngest (1997–2003) and the middle (1990–1996) generations is about 4 percentage points lower than among the oldest. There are no considerable differences in opinions about AI. Relevant, instead, is the role of education in the technical fields: the higher the level of competence, the higher the percentage of positive opinions. This is true for both robots and AI.

4.3 Emotional responses to narratives

Telling differences emerge when respondents are confronted with the narratives: much variation in concern or excitement arises depending on the scenario. Overall, our data reveals that the Freedom and Gratification scenarios are the ones that polarize the least with respect to gender, generation and competence. These narratives register the lowest levels of concern and—together with Alienation—are perceived as the most likely to happen. These results are in line with Cave and Dihal (2019), with the only exception being Gratification, which scores among the lowest in the UK.

Gender reveals a clear trend: women are more concerned across all narratives (Table 5). These differences are even stronger with reference to the scenarios addressing fears about AI (Dehumanization, Obsolescence, Alienation and Uprising). When it comes to hopes, a more homogeneous emotional response is recorded for Freedom, Gratification and Dominance, with the only exception being Immortality. Not only does the latter elicit more concern than enthusiasm, but it also substantiates gender differences.

Table 5 Emotional responses across scenarios by gender, generation and competence in IT or CS; percentages of concern

The variable generation provides considerably less precise indications for understanding which factors are associated with different emotional responses to the considered narratives. In the Italian context, the data lead us to presume that generation does not influence respondents’ attitudes. Keeping in mind that these are small percentage differences, we can only note that fewer people born between 1997 and 2003 declare themselves concerned about the scenarios of hope (Immortality, Freedom, Gratification and Dominance) compared to the oldest generation (1950–1989). Conversely, the in-between generation (1990–1996) is the least worried about the scenarios of fear, while the youngest (1997–2003) are more worried. Possible explanations should consider that younger respondents might not clearly distinguish the different technologies behind AI and robots, or that they were socialized to technology through darker and dystopic fictions and movies (such as Black Mirror, Westworld or Ex Machina).

Again, competence does affect the emotional response. In three scenarios out of eight (Immortality, Dehumanization and Obsolescence), those who have no competence are more concerned than those who have it. Again, the closer the relationship with IT or CS, the lower the percentage of concern recorded. Freedom and Uprising record minor differences (although with a similar trend), while the Gratification, Dominance and Alienation scenarios show no differences.

4.4 Perceived likelihood of the narratives

Investigating the technological future, respondents were also asked whether they consider each scenario likely to happen in the next 15 years (Table 6). Across the scenarios, women are slightly more inclined to consider every scenario likely to happen. While the negative scenarios polarize the opinions of men and women, there is greater alignment on the positive ones.

Table 6 Narratives’ likelihood in the next 15 years by gender, generation and competence in IT or CS; percentages

Immortality is the only scenario showing no difference. There is general agreement regardless of gender, generation and competence: in the next 15 years, it is unlikely for AI to reach a level of development that leads to eternal life. In the other seven scenarios, there are some small differences by generation. The youngest (1997–2003) perceive the negative scenarios (Dehumanization, Obsolescence, Alienation and Uprising) as less likely than older respondents do. The opposite happens for Freedom and Gratification.

Looking at competence, there is a constant trend across all scenarios with the exception of two: Immortality and Freedom. The former registers substantial agreement at the lower bound, while the latter is the only one in which graduates reach their peak percentage.

In the other six scenarios, the lower the competence, the higher the percentage of those considering these futures achievable. Confirming that competence does play a role in articulating attitudes towards AI, it is to be noted that the difference between high and no competence reaches its highest levels in the scenarios of fear.

Our results are in line with Neri and Cozman’s (2020) analysis of public tweets between January 2007 and January 2018 in English-speaking countries. Most of the risk perception is associated with existential risks that stretch from foreseeing the end of humanity to the advent of an Artificial General Intelligence (AGI). With regard to the narratives portraying existential risks, our data reveals that 47% of the sample thinks that Dehumanization is likely to happen. Likewise, Uprising, one of the scenarios most discredited by experts (Stone et al. 2016; Brooks 2017), is considered plausible by 3 respondents out of 10. These results are particularly intriguing in supporting the misalignment between real technical achievements and collective imaginaries.

4.5 Narratives by competence and gender

This section offers a deeper dive into the effects of competence and gender on the perception of robots and AI.

4.5.1 Proficiency profiles

A different way to evaluate the role of competence is to profile respondents along a more articulated line of expertise, isolating the most and least expert. The “Proficient” profile comprises all those who heard, read or saw something about AI, correctly identified all six suggested AI technologies and hold a degree in IT or CS. The “Not proficient” profile collects those who did not hear, read or see anything about AI (or did not know whether it concerned this topic), made at least three mistakes out of the six suggested technologies and have no competence in IT or CS.
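The two profiles amount to a simple classification rule. A minimal sketch under the definitions above follows (argument names are hypothetical; respondents matching neither rule fall outside both profiles):

```python
# A sketch of the two proficiency profiles as defined in the text.
# heard_about_ai is False both for those with no contact and for those
# who did not know whether the material concerned AI.
def proficiency_profile(heard_about_ai: bool, correct_answers: int,
                        degree_in_it_cs: bool, no_competence: bool) -> str:
    # Proficient: contact with AI topics, all six items correct,
    # and a degree in IT or CS.
    if heard_about_ai and correct_answers == 6 and degree_in_it_cs:
        return "Proficient"
    # Not proficient: no (recognized) contact, at least three mistakes
    # (i.e., at most three correct answers), and no competence.
    if not heard_about_ai and correct_answers <= 3 and no_competence:
        return "Not proficient"
    return "Neither"

print(proficiency_profile(True, 6, True, False))   # -> "Proficient"
print(proficiency_profile(False, 2, False, True))  # -> "Not proficient"
```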

As shown in Table 7, the Proficient profile has a better opinion of both AI and robots. In this group, the modal response is “very positive” (52–54%), while among the Not proficient it collects only 6% for robots and 8% for AI. Among the latter, almost 60% declare a quite positive opinion, while about 30% feel “not very positive”; among the Proficient, sceptics fall to 5%. Again, results from surveys around the world confirm this trend: technical competence (Zhang and Dafoe 2019), or even just contact with more general sources of information about AI (Eurobarometer 2017), can improve people’s opinion of these technologies.

Table 7 Opinions about robots and AI by Proficiency profiles; percentages

4.5.2 Gender

It is even more interesting to look at Tables 8 and 9. Respondents were asked whether they favour a further future development of AI systems, because we wanted to check to what degree this disposition might influence their perception of the narratives.

Table 8 Emotional response to the narratives of those “strongly in favour” to AI’s further development by gender; percentages
Table 9 “Strongly in favour” to AI further future development among respondents who are concerned and those who are excited about different narratives by gender; percentages

In the scenarios of hope (with the only exception being Immortality), Table 8 reflects the absence of gender differences among those who are strongly in favour of the further development of AI. It could be further noticed that Freedom shows a peculiar pattern: women are (slightly) more enthusiastic than men. One possible interpretation of this anomaly emerges from our qualitative data (Footnote 7). Women express appreciation for the potential aid that robots and AI systems could provide in domestic activities within the household. This is especially true in the case of assisting robots: women—usually responsible for care labour—foresee potential material help.

When we turn to the scenarios of fear, a consistent gender difference is in clear sight. Men strongly in favour register less concern about negative future evolutions. Table 6 helps in interpreting this result, as men perceive the scenarios of fear as less likely to be realized in the next 15 years.

Table 9 highlights that gender does—again—play a role in the perception of AI and the subsequent attitudes towards its development in the future. Whether the emotional response is excitement or concern, men favour a further future development of AI systems more than women. Consider both men and women who are concerned about the eight scenarios: among them, the percentages of men strongly in favour of future development are double those of women across all scenarios. Similarly, if we switch to those who are excited, the data register percentages for men almost twice those of enthusiastic women.

Overall, we have a threefold intuition about the gender divide in the perception of AI, to be investigated further. Women consider each of these “extreme” scenarios more likely than men do (Table 6). Accordingly, the emotional response follows: even among those who are strongly in favour, there are higher levels of concern among women (Table 8). The differences in being “strongly in favour” between the worried and the enthusiastic are greater among men than among women, and this holds true across scenarios (Table 9). This suggests that the emotional response influences the attitude of men, keeping non-realistic fearful reactions at bay. These original insights highlight the importance of further research on the gender divide and how it mediates opinions, knowledge and the sociotechnical imaginaries about AI.

5 Discussion

Pursuing the goal of testing perceptions of and attitudes towards AI and the associated narratives in our sample brought us some novel insights that allow for problematizing the discussion about AI around two key points.

5.1 Are we experiencing a state of AI anxiety?

As mentioned earlier, the Western idea of modernity is intertwined with uncertainty and risk, in a clearly future-oriented posture. Anxiety—inseparable from uncertainty and risk—is a common emotional response to the openness of the future, especially when it comes to technology. Overall, pessimistic scenarios elicit stronger emotional responses, with some important differences related to gender, generation and competence. With few exceptions (such as the Gratification and Freedom scenarios, Table 5), the data render a picture of a wary public. Emotional responses suggest a connection to a state of “AI anxiety”, which has spurred some debate over the last few years.

Amid worries about computational capacity and achievements in mimicking human reasoning, public discourse has developed surrounded by confusion about what AI could really achieve. Since the mid-2010s, powerful and well-known public figures have expressed alarming concerns. Among others, Elon Musk, Bill Gates and Stephen Hawking (Kolodny 2014; Lanier 2014) called for more attention to future developments, as there is no guarantee that humans will remain in control. However, AI experts claim that wiping out humans, or their substitution in the labour market, are strategies for “selling fear” (Umoh 2017).

As cognitive scientist Margaret Boden illustrates (2016), the future of AI has always been hyped, for good or for bad, switching from accounts of AGI to an in-control narrow AI, from enthusiasm to preoccupation. Since the hypothesis of AGI and an intelligence explosion dates back to the fifties and sixties (Good 1965), framing the risks associated with AI is not new. Nor is it new to enlist pessimistic portrayals of robots and AI systems taking over control, or further speculations about their impacts on society. Whatever the drivers, in public discourse there is a mismatch between what AI is and what it is thought to be in the eyes of people. Our data supports this misalignment through the high percentages of people who believe that pessimistic scenarios (Dehumanization and Uprising) are likely to become reality (Table 6).

Johnson and Verdicchio (2017a; 2017b) point to three potential causes of AI anxiety within the general population: inaccurate portrayals, the absence of humans and institutions from the theoretical framework, and confusion about the concept of autonomy. Without any doubt, one of the reasons for AI anxiety is the fallacious representation of future developments in technology, linking back to the ecosystem of different players in the AI process (Sect. 2). When it comes to robots and AI, successful fiction novels and Hollywood movies play a grand role in supporting opposing views, whether enthusiastic or horrified predictions. Robots’ appetite for freedom (Garland’s Ex Machina), AI yearning for domination (Cameron’s Terminator) or uprising (HBO’s Westworld), lonely hyper-individualized humans falling in love with virtual assistants (Jonze’s Her), or an emotionless human with enhanced mental capabilities transforming into a supercomputer (Besson’s Lucy) are just a few examples of technological imagining that contribute to shaping social imaginaries. Not always technically feasible, they can nudge or forestall competing narratives over a single technology.

According to Johnson and Verdicchio (2017b), the absence of humans and institutions from the picture is a second factor leading to anxiety. Put simply, thinking about AI as software, as lines of code disembedded from social structures and institutions, supports the portrayal of a superintelligence in power with no need for humans. This blindness to the social and political roles of humans couples with our call for a socio-technical perspective when studying AI systems (Sect. 2). It also links to the third cause.

The third reason for anxiety—confusion about the concept of autonomy—bounces back to the dualism of structure and agency and the mediation of sociotechnical imaginaries. What does it mean for an intelligent machine to be autonomous? It could refer to the capacity to collect and operate on data without the programmer knowing the final output (as in the AlphaGo case). It could also point to a robot’s ability to explore its surroundings in an open environment, like the 2021 Boston Dynamics dancing robot. Yet, the often-forgotten main difference between humans and AI artifacts is that the latter are not endowed with free will and the ability to make decisions. This confusion finds solid confirmation in our data.

To explain this conflation, we add a fourth cause: the tendency to anthropomorphize technology and fictionalize its (potential) affordances (see Sect. 2.1). Attributing to robots and AI systems the same kind of agency humans have is the source of a distorted portrayal of future technical possibilities. Moreover, the required trust in expert systems, which increasingly organize our daily routine, clashes with the lack of expertise in judging and controlling AI technologies, which are depicted as even more powerful than humans. As a future-oriented emotion, anxiety kicks in. Our qualitative data supports this conflation of attributed meanings in sustaining worrisome opinions about robots and AI:

“[..] Thinking about machines that could decide autonomously and act rationally like humans worries me”;

“[..] The idea of being surrounded by tools that rationally act as if they were human frightens me”;

“Men won’t be able to fully control autonomous machines? Yes, there is a concrete chance that robots will escape human control”.

Further down the line, the representation of robots or AI systems as embodied helps structure both positive and negative narratives. The positive narratives of Freedom and Gratification, along with the more negative Obsolescence and Uprising, have roots in and allow for the aforementioned conflation. Freedom suggests an easier daily life thanks to domestic robots, virtual assistants or AI recommender systems, while Gratification refers to friendlier and more fruitful social relationships. Imagining embodied robots or AI systems opens the door to closeness and affection (Fortunati et al. 2015; Turkle 2012), pushing some narratives over others. It also reproduces the very same structure of biases and stereotypes that applies offline (gender being a striking example: Pillinger 2019; Unesco 2019).

A final important remark about this conflation returns to regulation. Attributing agency to robots and AI systems is a step towards switching responsibilities away from AI developers. It is those who design and create them that should remain accountable for their actions and should collaborate across disciplines to mitigate abusive uses of such technologies. The functioning of AI systems comes with social and moral consequences, but AI technologies remain amoral artifacts designed and created for specific goals. The recent EU study (Delvaux 2017) considering civil law rules—such as granting liability for damages—applicable to robots and AI has been harshly criticized: it concretizes fears about the shift of responsibility away from the tech industries that develop and own such artifacts. In 2017 Saudi Arabia granted citizenship to Sophia the Robot, an intelligent humanoid developed by Hanson Robotics. When this shift in responsibility is coupled with conferring citizenship or unrealistic portrayals, the discussion comes full circle.

Since excessively pessimistic representations can unjustifiably increase risk perception among the public, the general misalignment can either foster over-regulation or hinder possible beneficial social implications (Stone et al. 2016). As recalled in Sect. 2, public opinion plays an important role in influencing regulation. For instance, the attention of the legislator might be directed to issues related to Artificial General Intelligence (AGI)—one of the main current concerns among the general public—even though, whatever the path to AGI, it remains far from being technically possible. Moreover, misdirected attention could overshadow actual problems such as biases in AI (Bolukbasi et al. 2016; Buolamwini and Gebru 2018), which tend to automate (Eubanks 2018; Benjamin 2019) and reproduce existing intersectional discriminations and stereotypes in our society (Joyce et al. 2021).

As Gillespie (2010, p. 356) warned for the Internet, “it is in many ways making decisions about what that tech is, what it is for, what sociotechnical arrangements are best suited to help it achieve that and what it must not be allowed to become”. In this direction comes the recent EU proposal (European Commission 2021), the first-ever regulatory framework on AI.

For this composite explanation of AI anxiety to be useful, some major events of the last few years help set the stage. Global-scale scandals—such as the Cambridge Analytica events in 2017, Facebook’s massive 2021 data leak (36 million profiles breached in Italy alone), or less-known cases of automated decision systems—have shaped public debate and knowledge about technology. Real examples of the latter abound, and they may affect perceptions within the public: AI systems that attribute a defendant’s risk score for recidivism (Angwin et al. 2016) or screen applications to college (Naughton 2020; Lamont 2021); algorithms that evaluate teachers’ quality, college rankings, job applications, policing and sentencing (O’Neil 2016). Moreover, Tesla’s unexpected autonomous car crashes (Stilgoe 2018; BBC 2018) or the anomalous behaviours of high-frequency trading AI programs (e.g., Knight Capital Group’s bankruptcy, Neri and Cozman 2020) can negatively influence public discourse.

To add to this, one should not forget that prevalent narratives are forged and reinforced by big tech corporations (such as Amazon, Google and Microsoft). Their actions, for example in developing AI Ethics principles or programs promoting “AI for social good”, are functional to their vision of technology and to the user’s final adoption. They propagate specific ideas of scientific and technological progress (e.g., for biotechnology see Smith 2015), often portrayed as serving the “public good”.

As mentioned earlier, policy makers also contribute to the shaping of narratives. Not only might they be influenced by other mechanisms of narrative diffusion (science fiction, movies, media, corporations), but they could shape regulation accordingly. The same happens for the media, the film industry and science fiction. As Jasanoff argues (2015, p. 27), “coalitions between corporate interests and the media, through advertising and outright control, are increasingly likely to play a pivotal role in making and unmaking global sociotechnical imaginaries”. Conflicting views between the main actors shaping the public discourse, seasoned with worldwide scandals and mundane algorithmic decision systems, may leave the public sceptical, reinforcing a feeling of anxiety.

5.2 Non-experts view on AI

A second point for discussion relates to non-experts: do they matter? Although intellectuals and researchers are the legitimate actors to lead the discussion about future scenarios, let us not forget that non-experts do face the need for understanding. A mirror-like image comes from the past, as the Internet was spreading across users. STS studies gathered much research about how humans and artifacts interact, offering examples of how anti-cycling groups contributed to a safer bike design in late nineteenth-century Europe (Bijker 1995), how farmers resisted each new technological innovation (from electrification to the telephone and cars) in the United States in the early twentieth century (Kline 2003), or how non-users of the Internet were taken out of the picture because “non-use” opposed the desirability of “use” (Wyatt 2003). Non-users could resist and reject the Internet, just as non-experts might mis- or under-use the technology behind AI. Regardless, they interact with it and forge its imaginary, as we have tried to illustrate.

Undoubtedly, experts are those entitled to discuss and substantiate with evidence to what degree both hopeful and fearful events may concretize with real consequences (Neri and Cozman 2020). Nevertheless, the average citizen, through her imagination, does wonder and ponder about technological future scenarios. Notably, when her knowledge and awareness are low, collective imaginaries come into play in mediating with reality, with special regard to the most fearful things. Not only does leaving humans and institutions out of the picture fuel anxiety, but it also makes the AI community lose ground in the race for a fairer AI for people.

As seen in Sect. 2, considering all actors engaged in the frame of use (Flichy 1995) suits a sociotechnical perspective that comprises all the subjects involved. To really take a step forward towards a true AI for people, non-experts should be consulted along the process of design and deployment. One reason for this is that AI developers should understand what values are important for those who will be using the AI system they design. For example, journalists ask to go beyond important general principles, calling for AI systems to embody the core values (truth, impartiality and originality) of their profession (Komatsu et al. 2020). Another reason goes back to the intersectional issues raised by inequalities in AI design, development and training. Those who design the technology and train the algorithms at the heart of automated decision systems should be knowledgeable about the diversity needed all along the process. From data collection to design, from deployment to applications, diversity is the critical issue to address in order to mitigate the propagation and reproduction of inequalities. In this direction, Design Justice is a growing community advocating a new approach to the design of technology that mixes together design, power and social justice (Costanza-Chock 2020).

6 Conclusion

This article aimed to investigate the perceptions and attitudes of the general public towards AI, relying on original data collected within the University of Bologna. The theoretical hook lies in a call for a sociotechnical perspective in the study of technology: especially when it comes to AI, it balances dominant deterministic approaches. Contrary to previous technological innovations—from the press to the Internet—individuals cannot have direct access to “use” AI technologies: they do not own them like a bike or a microwave, nor do they access them like the mobile phone or the Web, so as to really adapt them to their purposes. Nevertheless, people’s attitudes and perceptions are crucial in the formation and reproduction of the sociotechnical imaginaries that sustain technological development. Since AI narratives are a building block of the broader imaginary, we analysed data about their perception that, although not representative of the Italian population, offer some relevant insights to be carried forward in future research. Awareness, knowledge and emotional responses change by gender, generation and competence. Rarely should individuals be considered a “general public”, because they might be policy makers, developers, journalists, writers, entrepreneurs or non-experts. As such, they can be influenced by, and at the same time shape, the visions around technology. Deepening and digging into the social side of AI is a novel but indisputable requirement within the AI community. Future research should invest in an “AI for people”, going beyond the undoubtedly much-needed efforts in ethics, explainability and responsible AI.