Abstract
Digging into the social side of AI is a novel but pressing requirement within the AI community. Future research should invest in an “AI for people”, going beyond the undoubtedly much-needed efforts into ethics, explainability and responsible AI. The article addresses this challenge by problematizing the discussion around AI, shifting the attention to individuals and their awareness, knowledge and emotional response to AI. First, we outline our main argument on the need for a socio-technical perspective in the study of the social implications of AI. Then, we illustrate the main existing narratives of hopes and fears associated with AI and robots. As building blocks of broader “sociotechnical imaginaries”, narratives are powerful tools that shape how society sees, interprets and organizes technology. An original empirical study within the University of Bologna collects the data to examine levels of awareness, knowledge and emotional response towards AI, revealing insights to be carried forward in future research. Replete with exaggerations, both utopian and dystopian narratives are analysed with respect to relevant socio-demographic variables (gender, generation and competence). Finally, focusing on two issues—the state of AI anxiety and the point of view of non-experts—opens the floor to problematizing the discourse around AI, sustaining the need for a sociological perspective in the field of AI and discussing future comparative research.
1 Introduction
Two decades ago, information scientist Rob Kling and colleagues (2000) warned against interpreting social phenomena as independent of, and layered on top of, a specific technology. As history may be repeating itself, the deployment of AI technologies has raised similar concerns. Research in academic and non-academic fields alike often talks about “social impacts” in terms so broad that they leave doubts about the openness of the perspective: the “social” is just something to be taken care of. On the contrary, society is much more than a passive recipient of the transformations carried out by an inevitable technological development. As Dourish and Bell (2011, 51) observed for the Internet, a “naive orientation towards social impacts frames the relationship between the social and technical too narrowly”.
In general, the AI community has been awakening to this risk and taking the first steps to avoid such a “layer cake” model. In the last few years, an articulated discussion has developed over the regulatory, technical and ethical aspects of AI, but only to a lesser extent have the increasingly relevant social aspects of AI—both at the design and use stages—captured the attention of the AI community. Notably, the European Commission documents on AI are original contributions that set the stage for framing and regulation (European Commission 2020; European Commission 2021). Technical studies (Sirbu et al. 2019), in addition to research on ethics (Floridi and Cowls 2019) and privacy (Stone et al. 2016), normally address and investigate the “social impacts” of AI. In the eyes of many, these efforts satisfy the need for a deeper knowledge of the social world where these AI-based systems come to operate. Along with some important recent work in computer science that seeks to open the perspective to the “social” (Dignum 2019), there are other promising approaches to studying the societal implications of AI.
Sartori and Theodorou (2022), for example, highlight the need for a proper sociology for AI to develop a sociological perspective within the AI community. Only when AI is conceived as a sociotechnical system can a more fruitful approach for the future of AI be thought through. From design to use, there is a path to balance a Human-in-Control (HIC) approach with an ecosystem organized around diversity (in data collection and algorithmic models) and intersectionality (in society).
Cave and Dihal (2019) dig into the narratives surrounding AI to extrapolate the most common visions among the public. Hopes and fears about AI scenarios might heavily influence how the public perceives and approaches the technology. Given its diverse and composite configuration, the general public plays a role in the acceptance and adoption of the technology, mediated through the interaction between the media and public opinion. The public perception of AI is deemed essential to “how AI is deployed, developed and regulated” (ibid., p. 331).
This article falls in between these two approaches, elaborating a socio-technical perspective that overcomes the current neglect of how individuals frame, the media portray, and policy makers regulate AI. To “problematize” the conversation about AI (Roberge et al. 2020), identifying and accounting for who is involved, with what role, purpose and imaginaries, within the socio-technical system is then a mandatory step. Just as sociologist Patrice Flichy (1995) argued for the need to consider all subjects—from design to use—engaged in the “frame of use” proper to a specific technology, here we focus on individuals and their levels of awareness, knowledge and emotional response when it comes to AI. Our main argument holds that, as for previous technologies, not only do these levels vary across social groups depending on many socio-demographic factors, but they are also crucial for the diffusion and acceptance of the technology in society.
In this direction, the article is organized in six sections. After the introduction, we build the theoretical framework for our main argument, relying on established concepts such as “technology as a practice” (Suchman et al. 1999; Star 1999), “sociotechnical imaginaries” (Jasanoff 2015) and “narratives” (Cave et al. 2020). We illustrate the main existing hopes and fears associated with AI and robots as they refer to “imaginaries” that shape and organize how society sees and interprets technology. The third section gives a detailed description of the data and methods that, notwithstanding the limitations, ground our novel socio-technical approach in an original empirical study. The fourth section illustrates the results of the ad-hoc survey conducted within the University of Bologna. It examines the levels of awareness (Sect. 4.1) and opinions (Sect. 4.2) with respect to some relevant socio-demographic variables. The latter are also crucial to analyse both utopian and dystopian narratives that, replete with exaggerations, are telling about individuals’ emotional responses (Sect. 4.3) and the perceived likelihood of the scenarios in the future (Sect. 4.4). A deeper dive into the perception of narratives (Sect. 4.5) also offers some relevant insights about the role of gender and competence. The fifth section focuses on two issues—the current state of “AI anxiety” and the underestimated point of view of “non-experts”—that we deem essential to problematize the debate and to sustain the need for a sociological perspective for AI, opening the floor to future comparative research (Sect. 6).
2 The theoretical framework: a call for a socio-technical perspective
A too narrow definition of the relations between society and technology usually leads towards an attitude of “technological determinism”, where the technology unfolds its logic along a unique path, impacting society and determining its outcomes (Ogburn 1922; Winner 1977). Alternatives have played out a reverse logic, focusing on how social factors are responsible for shaping technical development and adoption (Mackenzie and Wajcman 1999). In dealing with technicalities, different stakeholders shape technology while solving their conflicts over resources, affordances and power. To different degrees, the constructionist school (Pinch and Bijker 1984), together with the varieties of approaches that refer to Science, Technology and Society (STS) studies (Callon 1986) and Actor-network Theory (Latour 2005), has investigated the relation between humans and artifacts, and how social groups—other than developers—are relevant for diffusion and adoption, for successes and failures. Technologies are products of social practices, actions and decisions: from their design to their application and use phases, they come out of contexts, with their specific institutional and organizational cultures, that elaborate on and imagine what they are needed for and where they might be employed. For our argument, AI might be useful to screen thousands of job CVs or historical photos, easing Human Resources’ recruitment processes (Dastin 2018) or historians’ classification and colorization of black-and-white pictures (Goree 2021). Yet, it was easily foreseeable that out-of-context AI systems would come up with biased suggestions, such as recruiting higher percentages of males or steering a middle path on colours (leading to beiges). If thinking about technology as neutral is misleading, considering AI simply the latest tool to ease the trivial or complex problems individuals, institutions, states and firms have to deal with is worse.
While acknowledging that the march of technology is not inevitable is becoming more common, the AI community still needs to take some steps forward. Sartori and Theodorou (2022), for example, highlight the need for a proper sociological perspective within the AI community. As such, sociological insights enter the picture connecting inequalities and AI systems in the effort to offset the recently discovered magnifying-glass effect in the process of “automating” inequalities (Eubanks 2018). More open technical approaches—such as HIC—also speak up for a fairer AI (based on shared principles) that is transparent and explainable, accountable and contestable (Sartori and Theodorou 2022). Further down the line, a sociotechnical perspective calls for considering the values and inequalities, institutional and organizational practices, that are embedded in technology.
Another interesting approach relates to the narratives surrounding AI and the most common visions they convey to the general public. AI technologies are increasingly present in our daily life, introducing significant changes: navigation systems, chatbots, or music and movie recommendation systems. Their rising presence in everyday life notwithstanding, laypeople find it extremely difficult to understand these systems’ functioning and consequences. Narratives help in this direction, given their broader capacity to convey meaning to the social and cultural changes that come along with the technology (Natale 2019; Natale and Ballatore 2020). The study of how individuals approach new technologies and how their perceptions, understanding and expectations originate and unravel offers insights into the multidimensional relation between technology and society.
To varying degrees, all actors involved in the AI process—from start to finish—influence the construction of narratives and their power over the public. Over the centuries, narratives have always played a role, as for the press (Eisenstein 1980), the telephone (Fisher 1992; Marvin 1988) and the Internet (Mosco 1999; Levy 1984). As such, narratives are building blocks of a broader “socio-technical imaginary”, defined as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” (Jasanoff 2015, p. 4).
Visions of desirable futures (or resistance to them) are supported by shared understandings imbued with values and expectations about society, modernity, human agency and technical potentialities. Again, the Internet serves as an example. The distributed, decentralized network infrastructure at the heart of the Internet reflects shared meanings and ethical attitudes across different social groups that overcame sectoral boundaries between the university, the military and the tech industry in the seventies. Values such as freedom, access and liberalism combined into what has been labelled Internet culture or the “Californian ideology” (Levy 1984; Barbrook and Cameron 1995).
Although AI technologies are not comparable to Information and Communication Technologies (ICTs) for the autonomy of use attributable to the final user, the idea of the imaginary is nonetheless relevant to our case. As Jasanoff’s definition informs the conceptual frame that surrounds any technology, it applies to AI and its community as well. By cutting through the dualism of structure and agency, it “combines some of the subjective and psychological dimensions of agency with the structured hardness of technological systems, policy styles, organizational behaviours, and political cultures” (Jasanoff 2015, p. 24). A sociotechnical perspective for AI allows for a deeper investigation of the social, economic and political roots of these imaginaries, disentangling possible conflicts over competing visions. As happened in the past with the Internet (Lesage and Rinfret 2015; Levy 1984), how the AI community envisions its future steps is key for pluralism and democratic accountability (Crawford 2021).
AI developers, policy makers and the media are other key players in the field. Research shows that not only do AI developers pursue specific technical goals, but their readings might also be a source of influence (Greshko 2019; Dillon and Schaffer-Goddard cit. in Cave et al. 2020, p. 8), as much as their collective imaginary (Robertson 2010; Bory and Bory 2016).
Policymakers, too, might choose among different forms of regulation based on their beliefs and perceptions of AI, diverting public and private funding or affecting governance choices (Natale 2019). As for the Internet, they long acted on an uncritically optimistic view that technical development is necessary and desirable under the auspices of future economic well-being (Wyatt 2003). Another example of the public’s influence comes from rising awareness about robots’ potentialities: Lin et al. (2008) identify public perceptions as one of the main market forces currently impacting the development of military robotics and related regulation. Away from AI, in 2015 the European regulation of Genetically Modified crops changed—widening the powers to restrict or prohibit their production (EU Commission 2015)—not as the result of new scientific data, but driven by the increased perception of risk among consumers (Malyska et al. 2016).
It is a well-known fact that the media contribute to the framing of the public (DeFleur and Ball-Rokeach 1989; Cave et al. 2018) by covering selected features of emerging technologies. With regard to AI, the media discussion that has developed since 2014 is quite sophisticated in tone, but not in content (Ouchchy et al. 2020). For instance, when it turns to the ethics of AI, it does not go deep into the technicalities of different types of AI but uses specific examples to thematize the topic at large. While specialized writers might lack specific knowledge when it comes to recommendations, Ouchchy et al. (2020) find a sound interest in the public debate about regulation. For the media, accounting for both negative and positive social implications is key to supporting a balanced framing and portrayal of AI. This could pave the way to even-handed media reporting.
Sometimes, science fiction jumps onto the scene with remarkably accurate descriptions of emerging technological issues, drawing on narratives that have expressed hopes and fears over the centuries. Musa Giuliano (2020) shows how fiction might act as a cautionary tale that could nudge some imaginaries forward or forestall others. When we add “intersectionality” to the picture, there is room for more “intersectional sociotechnical imaginaries” that critically address the dominant narratives and the related AI potentialities (Ciston 2019). It is also relevant to consider that collectively held and institutionally stabilized visions are publicly discussed and performed by the media as much as they are instrumentally used by firms in trumpeting their notion of technological advancement (e.g., Google, promoter of the first internal Committee on Ethics, wound up firing its two most prominent names at the end of 2020 because of so-called conflicting viewsFootnote 1).
As sociologist Alberto Melucci wrote, while “the future is born of the past, it is equally true that the past is also continuously shaped by the future” (1996, p. 12); a sociotechnical perspective for AI thus offers new tools to link past, present and future. How future technological development is—individually and collectively—imagined, coordinated and aggregated into a vision of the world is worth investigating. For one, “imagined futures” are a way to control the unpredictability of the future for strategic actions in fields concerning money and innovation (Beckert 2016). For another, concepts such as uncertainty (Giddens 1990) and risk (Beck 1992) nicely fit the study of AI technologies. As “expert systems” intervening in the material and social worlds, they are tools for mediating knowledge asymmetries and balancing feelings of emotional anxiety. Tightly associated with the idea of modernity in the Western world, anxiety especially arises from the lack of technical expertise, which, in turn, requires a leap of faith in the technology. For the completion of everyday-life routines, expert systems require trust (Giddens 1991). Calling attention to the relevance of shared meanings and collective imaginaries, future individual expectations and trust are an important addition to the study of technology “put in context”. Narratives, sociologically conceived as “organizing visions” for society (Mosco 2004), offer the bridge to an original contribution to the debate. As they reflect and reproduce traditional lines of social, economic and political inequalities, narratives can be telling about collective and individual knowledge in a society increasingly organized around AI. What follows is a closer look at the dominant AI narratives.
2.1 AI narratives
Relevant works in the field dig into the narratives regarding AI in the English-speaking countries of the West, looking both inside and outside the world of fiction (Cave et al. 2019; Cave et al. 2020; Cave and Dihal 2019). Narratives can originate both from how the scientific community and the media cover the topic and from how books, movies and TV series speculate about technology (Cave et al. 2018). There is a propensity to describe AI in either overly optimistic or overly pessimistic tones, which confirms a long-term trend (Fast and Horvitz 2017). In other words, both utopian and dystopian narratives trace back to the main recurrent hopes and fears connected to technology. Talking about AI, Cave and Dihal find four main narrative scenarios of hope, each mirrored by a scenario of fear. We briefly describe them in pairs.
Immortality-dehumanization. The first dyad relates to the medical field, in which AI is the cornerstone of new and important areas of research. The extreme evolution of this scenario foresees humans conquering Immortality, while the dystopian drift is Dehumanization, where humans lose their essence, ditching values and emotions.
Freedom-obsolescence. Freedom refers to the condition of humans liberated from tedious or tiring tasks, be they physical or cognitive. No matter how astonishing technical developments are, this optimistic scenario, where AI and robots totally replace humans in the sphere of work, leaving time to engage only in leisure activities, is far from reality. Obsolescence, the far-fetched opposite representation, is the risk linked to this technical turning point.
Gratification-alienation. The optimistic scenario focused on Gratification sees AI and robots becoming an essential element of the relational sphere, satisfying every possible human desire. The drift of this utopian narrative is a scenario of Alienation, where people prefer interacting with technologies rather than with each other.
Dominance-uprising. The last dyad concerns the use of AI in the military field. Identifying new tools that allow nations or communities to dominate and maintain security over a territory is a major hope. The corresponding fear evokes one of the most iconic narratives in Western filmography: the uprising of machines that seize physical and cognitive power, escaping human control.
Why are narratives so powerful? As one of the tools reflecting the content of the social imaginary, narratives forge how actors perceive and understand technology in their daily life. When interpreted as practice (Suchman et al. 1999), technology is telling about the underlying relations of production and use. However, the reality depicted in these eight scenarios turns out to be somewhat disconnected from what, so far, are the plausible technical possibilities of AI and the purposes under development (Floridi 2020; Musa Giuliano 2020).
To explain this misalignment, researchers have pinpointed how expectations and imagined affordances of AI (Neff and Nagy 2016) influence users’ perceptions and understandings. A well-known case study is Tay, one of Microsoft’s most advanced chatbots. Soon after its launch in 2016, Tay started to interact with Twitter users in such an inflammatory and obscene manner that it was shut down within 24 hours. Tay was the battlefield where designers’ and users’ expectations about what she should do, or how to use her, fully conflicted. As Nagy and Neff (2015, p. 1) point out, rather than in terms of fixed capacities, we should think in terms of imagined affordances, since this allows us to consider together users’ attitudes, designers’ intentions and both the materiality and functionalities of the technology. As such, technical and imagined affordances are crucial to understanding narratives, their articulations and their possible implications for society at large.
A second accredited explanation for this misalignment concerns the anthropomorphized notions of technologies (Zemčík 2021), driven by the need for social connection, the desire to understand the relevant technology, and the aim of promoting its acceptance (Salles et al. 2020), especially when robots are the object of research (Katz et al. 2015). Science fiction and the media greatly contribute to this end, bearing and fostering social, political and ethical issues. Overall, in a socio-technical perspective, technologies have capacities that extend to the social realm through interactions, perceptions and actions: they are never neutral.
All in all, the socio-technical perspective drafted here leads us to formulate the research questions we investigate with an ad-hoc survey with regard to awareness and knowledge of AI technology. The key elements of this perspective disentangle the socio-technical imaginaries behind the dominant AI narratives, which—as we will see in Sect. 4—powerfully shape attitudes and emotional responses in the public.
3 Data and methods
3.1 The survey: questionnaire and sample description
To investigate the novel topic of public perceptions of AI and robots, we opted for an exploratory approach, consisting of a dedicated survey administered to people affiliated with the University of Bologna. An original ad-hoc questionnaire providing all essential definitionsFootnote 2 was specifically designed for this survey. The research questions were informed by the need to know to what degree social practices, actions and decisions (Sect. 2) shape the perception of AI technology, leading us to investigate levels of knowledge, awareness and trust.
Respondents were asked to express their opinions on both robots and AI and solicited to substantiate their answers through qualitative open-ended questions.Footnote 3 They were surveyed about their level of awarenessFootnote 4 of and attitude towards the further future developmentFootnote 5 of AI. Relative to their visions of a (desirable or undesirable) technological future, we also investigated the emotional response (concern or excitement) to the dominant AI narratives and their perceived likelihood in the next 15 years. Table 1 shows the eight scenarios—used as benchmarks—ordered with hopes first and fears second.
The questionnaire was sent by e-mail to all the people belonging to the University of Bologna, whether students, professors or other employees. In total, 5,391 respondents completed the survey. The sample comprises 57% women and 43% men, born between 1950 and 2003. Respondents were divided by generation: 22% were born between 1950 and 1989; 23% between 1990 and 1996; 54% between 1997 and 2003.Footnote 6 Competence in the field of technology is used to filter respondents by their closeness to and experience with information technology (IT) or computer science (CS) (which we refer to as “competence”): 8% graduated (undergraduate, master’s or PhD) in the two selected fields, 38% attended at least one university course or are programming-savvy, while 55% have no competence.
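Purely as an illustration of how such breakdowns can be operationalized (this is not the authors’ code, and the field names and toy records below are our own hypothetical assumptions, not the survey’s actual data structure), the generation cohorts and per-cohort shares could be computed along these lines:

```python
# Hypothetical sketch: bucket respondents into the three generation cohorts
# used in the survey and tabulate the share of positive answers per cohort.
from collections import defaultdict

def generation(birth_year):
    """Map a birth year (1950-2003) to the cohorts used in the survey."""
    if birth_year <= 1989:
        return "1950-1989"
    if birth_year <= 1996:
        return "1990-1996"
    return "1997-2003"

def share_by_generation(respondents, flag="aware_of_ai"):
    """Share of respondents with `flag` set to True, within each cohort."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in respondents:
        cohort = generation(r["birth_year"])
        totals[cohort] += 1
        positives[cohort] += bool(r[flag])
    return {cohort: positives[cohort] / totals[cohort] for cohort in totals}

# Toy stand-in for the 5,391 survey records (fields are assumptions).
sample = [
    {"birth_year": 1975, "aware_of_ai": True},
    {"birth_year": 1992, "aware_of_ai": True},
    {"birth_year": 1999, "aware_of_ai": False},
    {"birth_year": 2001, "aware_of_ai": True},
]
print(share_by_generation(sample))  # one share per cohort
```

The competence categories (degree, course/programming-savvy, none) could be mapped and cross-tabulated analogously.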
3.2 Limitations of the study
This study is subject to several limitations. First, although it can provide important original insights, the sample should not be considered representative of the whole population. At the same time, the availability of data was so crucial to start investigating such a novel topic that—even without external funds—we set up a survey, being aware of the non-representativeness of the final sample. Nevertheless, the exploratory nature of this study allows for considering the results as first steps upon which to build future representative surveys.
Second, given the peculiar population, the sample is obviously skewed with respect to some relevant socio-demographic characteristics, such as age and education. As expected, students make up the great majority, with 77% of the sample under the age of 30 and 42% holding a degree. Further investigations are, therefore, needed to verify whether our results hold in other samples.
4 Results
Here, we present some results on awareness of AI, opinions on robots and AI, emotional responses to the narratives, and the perceived likelihood of the different future scenarios by gender, generation and competence (Sects. 4.1, 4.2, 4.3, 4.4). A deeper dive into competence and gender (Sect. 4.5) also offers further interesting insights.
4.1 Awareness of AI
First of all, we wanted to assess the level of AI awareness by asking whether participants had heard, read or seen material related to the topic of AI in the last 12 months. In the sample, 76% answered positively, 16% negatively, while the remaining 8% said they were unable to say for sure if what they had read, heard or seen had anything to do with AI.
Table 2 reflects the absence of substantial differences by generation (except for a slightly lower percentage among the youngest), while differences emerge with respect to the other two variables. While 85% of men claim they have read about AI in the last year, the percentage drops to just under 70% among women. Unsurprisingly, there is a higher percentage of contact with the topic (90%) among those who have a degree in IT or CS than among both those who have no competence (70%) and those who have “only” attended a university course in the field or are programming-savvy (82%).
To test the level of general knowledge of AI, we proposed six technologies (virtual assistants, smart speakers, Google Search, Facebook tagging, recommendation systems, Google Translate), asking whether each actually uses AI.
As shown in Table 3, we found differences across all three variables considered. With regard to gender, there is a greater—yet not particularly high—ability among men to correctly identify the presence of AI in the proposed examples. This gap narrows within the category “three correct answers out of six”, substantiating the result that, among women who do not score “all correct”, the majority has sufficient knowledge to identify at least half of the systems that use AI.
The distance between the first two generations (1950–1989; 1990–1996) and the youngest (1997–2003) is interesting: about 50% of older respondents provide all the correct answers, while among the youngest the percentage falls to 37%. This result suggests that belonging to the digital-native generation—that is, being the first to be born at a time when technologies such as smartphones, social media and AI already existed (Bennet et al. 2008)—does not mean a better understanding and more proficient use of the technology behind these tools.
Finally, the level of competence portrays the biggest difference. If attending a course or being programming-savvy does not seem to make a big difference compared to having no competence at all, a degree in the IT or CS fields does: it increases by almost 20 percentage points the share of individuals who correctly identify AI in all the suggested examples. Thus, with regard to general knowledge of AI, competence plays the greatest role. Moreover, competence could also be the factor behind some of the gender and generation differences. Unsurprisingly, there are lower percentages of correct answers among women and the youngest (1997–2003), since both groups register fewer graduates in IT or CS.
4.2 Opinions on robots and AI
Existing literature broadly agrees that one of the salient predictors of knowledge of and support for AI is gender: all around the world, women have a worse image of AI than men (Eurobarometer 2017). In the attempt to understand individuals’ opinions towards AI, competence with technology has also been found to be a good predictor (Zhang and Dafoe 2019).
Overall, our sample reveals a positive attitude towards these technologies. The modal response was “quite positive” for both robots (60%) and AI (58%), followed by “very positive” (20%; 22%): about 4 out of 5 respondents claim to have more positive than negative opinions. A further 18% are “not very positive” and 2% are “not at all positive”. To further investigate the factors behind positive views, Table 4 shows interesting differences by gender, generation and competence with regard to “very positive” opinions, which are slightly greater for AI than for robots, regardless of the type of breakdown.
Our data display a gender divide in opinions: a higher percentage of men (30%; 32%) shows a very favourable attitude towards both technologies, compared to women (12%; 16%). Generation seems to have a slight influence only on robots: the percentage of “very positive” opinions among the youngest (1997–2003) and middle (1990–1996) generations is just under that of the oldest, by about 4 percentage points. There are no considerable differences in opinions about AI. Relevant, instead, is the role of education in the technical fields: the higher the level of competence, the higher the percentage of positive opinions. This is true for both robots and AI.
4.3 Emotional responses to narratives
Telling differences emerge when respondents are confronted with narratives: much variation over concern or excitement arises depending on the scenario. Overall, our data reveal that the Freedom and Gratification scenarios are the ones that polarize the least with respect to gender, generation and competence. These narratives register the lowest levels of concern and—together with Alienation—they are perceived as the most likely to happen. These results are in line with Cave and Dihal (2019), with the only exception of Gratification, which scores among the lowest in the UK.
Gender reveals a clear trend: women are more concerned across all narratives (Table 5). These differences are even stronger with reference to the scenarios addressing fears about AI (Dehumanization, Obsolescence, Alienation and Uprising). When it comes to hopes, a more homogeneous emotional response is recorded for Freedom, Gratification and Dominance, with the only exception of Immortality. Not only does the latter elicit more concern than enthusiasm, but it also substantiates gender differences.
The variable generation provides considerably less precise indications about the factors associated with different emotional responses to the considered narratives. In the Italian context, the data suggest that generation does not influence respondents’ attitudes. Keeping in mind that these are small percentage differences, we can only note that fewer people born between 1997 and 2003 declare themselves concerned about the scenarios of hope (Immortality, Freedom, Gratification and Dominance) compared to the oldest generation (1950–1989). Conversely, the middle generation (1990–1996) is the least worried about the scenarios of fear, while the youngest (1997–2003) are more worried. Possible explanations should consider that younger respondents might not clearly distinguish the different technologies behind AI and robots, or that they were socialized to technology through darker, dystopian fiction and films (such as Black Mirror, Westworld or Ex Machina).
Competence, again, does affect the emotional response. In three scenarios out of eight (Immortality, Dehumanization and Obsolescence), those who have no competence are more concerned than those who have it: the closer the relationship with IT or CS, the lower the percentage of concern recorded. Freedom and Uprising record minor differences (although with a similar trend), while Gratification, Dominance and Alienation show no differences.
4.4 Perceived likelihood of the narratives
Investigating the technological future, respondents were also asked whether they consider each scenario likely to happen in the next 15 years (Table 6). Across the scenarios, women are slightly more inclined to consider every scenario likely to happen. While the negative scenarios polarize the opinions of men and women, there is greater alignment on the positive ones.
Immortality is the only scenario showing no difference. There is general agreement regardless of gender, generation and competence: in the next 15 years, AI is unlikely to reach a level of development that leads to eternal life. In the other seven scenarios, there are some small differences by generation. The youngest (1997–2003) perceive the negative scenarios (Dehumanization, Obsolescence, Alienation and Uprising) as less likely compared to older respondents. The opposite happens for Freedom and Gratification.
Looking at competence, there is a constant trend across all scenarios with two exceptions: Immortality and Freedom. The former registers substantial agreement at the lower bound, while the latter is the only one in which graduates record the highest percentage.
In the other six scenarios, the lower the competence, the higher the percentage considering those futures achievable. Confirming that competence does play a role in articulating attitudes towards AI, it is to be noted that the difference between high and no competence is at its highest in the scenarios of fear.
Our results are in line with Neri and Cozman’s (2020) analysis of public tweets posted between January 2007 and January 2018 in English-speaking countries. Most of the risk perception is associated with existential risks, ranging from the end of humanity to the advent of an Artificial General Intelligence (AGI). With regard to narratives portraying existential risks, our data reveal that 47% of the sample thinks that Dehumanization is likely to happen. Likewise, Uprising, one of the scenarios most discredited by experts (Stone et al. 2016; Brooks 2017), is considered plausible by 3 respondents out of 10. These results are particularly intriguing in supporting the misalignment between real technical achievements and collective imaginaries.
4.5 Narratives by competence and gender
This section offers a deeper dive into the roles of competence and gender in the perception of robots and AI.
4.5.1 Proficiency profiles
A different way to evaluate the role of competence is to profile respondents over a more articulated line of expertise around the most and least experts. The “Proficient” profile comprises all those who heard, read or saw something about AI, correctly identified all six suggested AI technologies and got a degree in IT or CS. The “Not proficient” profile collects those who didn’t hear, read or see anything about AI (or didn’t know if it concerned this topic), made at least three mistakes out of the six suggested technologies and have no competence in IT or CS.
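The two profiles above amount to a simple three-way classification rule over the survey answers. As a minimal illustrative sketch (the field names `heard_about_ai`, `ai_quiz_errors` and `it_cs_degree` are hypothetical stand-ins for the survey's actual variables, not its real codebook):

```python
def profile(respondent: dict) -> str:
    """Assign 'Proficient', 'Not proficient', or 'Intermediate' to a respondent,
    following the rule described in the text."""
    heard = respondent["heard_about_ai"]    # "yes" / "no" / "dont_know"
    errors = respondent["ai_quiz_errors"]   # mistakes out of the six suggested AI technologies
    degree = respondent["it_cs_degree"]     # True if the respondent holds an IT or CS degree

    # Proficient: heard about AI, identified all six technologies, and has a degree in IT/CS
    if heard == "yes" and errors == 0 and degree:
        return "Proficient"
    # Not proficient: no contact with the topic, at least three mistakes, no competence
    if heard in ("no", "dont_know") and errors >= 3 and not degree:
        return "Not proficient"
    # Everyone else falls between the two extremes
    return "Intermediate"


print(profile({"heard_about_ai": "yes", "ai_quiz_errors": 0, "it_cs_degree": True}))
# → Proficient
```

Note that the two profiles deliberately do not partition the whole sample: respondents who meet only some of the criteria fall in the residual middle group.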
As shown in Table 7, the Proficient profile has a better opinion of both AI and robots. In this group, the modal response is “very positive” (52–54%), while among the Not proficient it accounts for 6% for robots and 8% for AI. Among the latter, almost 60% declare a quite positive opinion, while about 30% feel “not very positive”. Sceptics fall to 5% among the Proficient. Again, results from surveys around the world confirm this trend: technical competence (Zhang and Dafoe 2019) or even just contact with more general sources of information about AI (Eurobarometer 2017) could improve people’s opinion of these technologies.
4.5.2 Gender
It is even more interesting to look at Tables 8 and 9. Respondents were asked whether they favour a further future development of AI systems, since we wanted to check to what degree this disposition might influence their perception.
In the scenarios of hope (with the only exception being Immortality), Table 8 reflects the absence of gender difference among those who are strongly in favour of further development of AI. It could further be noticed that Freedom shows a peculiar performance: women are (slightly) more enthusiastic than men. One possible interpretation of this anomaly emerges from our qualitative data (footnote 7). Women express appreciation for the potential aid that robots and AI systems could provide in domestic activities within the household. This is especially true in the case of assisting robots: women—usually responsible for care labour—foresee potential material help.
When we turn to the scenarios of fear, a consistent gender difference is evident. Men strongly in favour register fewer concerns about negative future evolutions. Table 6 helps interpret this result, as men perceive the scenarios of fear as less likely to be realized in the next 15 years.
Table 9 highlights that gender does—again—play a role in the perception of AI and the subsequent attitudes towards its future development. Whether the emotional response is excitement or concern, men favour a further future development of AI systems more than women. Looking at both men and women who are concerned about the eight scenarios, men strongly in favour of future development show percentages that double those of women across all scenarios. Similarly, among those who are excited, the percentages for men are almost twice those of enthusiastic women.
Overall, we have a threefold intuition about the gender divide in the perception of AI to investigate further. Women consider each of these “extreme” scenarios more likely than men do (Table 6). Accordingly, the emotional response follows: even among those who are strongly in favour, there are higher levels of concern among women (Table 8). The differences in “strongly in favour” between the worried and the enthusiastic are greater among men than women, and this holds true across scenarios (Table 9). This suggests that the emotional response influences men’s attitudes, keeping at bay non-realistic fearful reactions. These original insights highlight the importance of further research on the gender divide and how it mediates opinions, knowledge and the sociotechnical imaginaries about AI.
5 Discussion
Pursuing the goal of testing perceptions of and attitudes towards AI and the associated narratives in our sample brought us to some novel insights that allow for problematizing the discussion about AI around two key points.
5.1 Are we experiencing a state of AI anxiety?
As mentioned earlier, the Western idea of modernity is intertwined with uncertainty and risk with a clear future-oriented posture. Anxiety—inseparable from uncertainty and risk—is a common emotional response to the openness of the future, especially when it comes to technology. Overall, pessimistic scenarios elicit higher emotional responses with some important differences related to gender, generation and competence. With few exceptions (such as the scenarios of Gratification and Freedom, Table 5), the data render a picture of a wary public. Emotional responses suggest a connection to a state of “AI anxiety”, which spurred some debate over the last few years.
Public discourse has developed amid confusion about what AI can really achieve, fuelled by worries about its computational capacity and its achievements in mimicking human reasoning. Since the mid-2010s, powerful and well-known public figures have expressed alarming concerns. Among others, Elon Musk, Bill Gates and Stephen Hawking (Kolodny 2014; Lanier 2014) called for more attention to future developments, as there is no guarantee that humans will remain in control. However, AI experts claim that scenarios of wiping out humans or substituting them in the labour market are strategies for “selling fear” (Umoh 2017).
As cognitive scientist Margaret Boden (2016) illustrates, the future of AI has always been hyped, for good or ill, switching from accounts of AGI to an in-control narrow AI, from enthusiasm to preoccupation. Since the hypothesis of AGI and an intelligence explosion dates back to the fifties and sixties (Good 1965), framing the risks associated with AI is nothing new. Nor are pessimistic portrayals of robots and AI systems taking over control, or further speculations about their impacts on society. Whatever the drivers, in public discourse there is a mismatch between what AI is and what it ought to be in the eyes of people. Our data support this misalignment through the higher percentages of people who believe that pessimistic scenarios (Dehumanization and Uprising) are likely to become reality (Table 6).
Johnson and Verdicchio (2017a, 2017b) point to three potential causes of AI anxiety within the general population: inaccurate portrayals, the absence of humans and institutions within the theoretical framework, and confusion about the concept of autonomy. Without any doubt, one reason for AI anxiety is fallacious representations of future technological development, linking to the ecosystems of different players in the AI process (Sect. 2). When it comes to robots and AI, successful fiction novels and Hollywood movies play a major role in supporting opposing enthusiastic or horrified predictions. Robots’ appetite for freedom (Garland’s Ex Machina), AI yearning for domination (Cameron’s Terminator) or uprising (HBO’s Westworld), lonely hyper-individualized humans falling in love with virtual assistants (Jonze’s Her), or a human with enhanced mental capabilities transforming into an emotionless supercomputer (Besson’s Lucy) are just a few examples of technological imagining that contribute to shaping social imaginaries. Not always technically feasible, they can nudge or forestall competing narratives over a single technology.
According to Johnson and Verdicchio (2017b), the absence of humans and institutions from the picture is a second factor leading to anxiety. Put simply, thinking about AI as software, as lines of code disembedded from social structures and institutions, supports the portrayal of a superintelligence in power with no need for humans. This blindness to the social and political roles of humans couples with our call for a socio-technical perspective when studying AI systems (Sect. 2). It also links to the third cause.
The third reason for anxiety—confusion about the concept of autonomy—bounces back to the dualism of structure and agency and the mediation of sociotechnical imaginaries. What does it mean for an intelligent machine to be autonomous? It could refer to the capacity to collect and operate on data without the programmer knowing the final output (as in the AlphaGo case). It could also point to a robot’s ability to explore its surroundings in an open environment, like the 2021 Boston Dynamics dancing robot. Yet the often-forgotten main difference between humans and AI artifacts is that the latter are not endowed with free will and the ability to make decisions. This confusion finds solid confirmation in our data.
To explain this conflation, we add a fourth cause: the tendency to anthropomorphize technology and fictionalize its (potential) affordances (see Sect. 2.1). Attributing to robots and AI systems the same kind of agency humans have is the source of a distorted portrayal of future technical possibilities. Moreover, the trust required in the expert systems that increasingly organize our daily routines clashes with the lack of expertise in judging and controlling AI technologies, which are depicted as even more powerful than humans. As a future-oriented emotion, anxiety kicks in. Our qualitative data support this conflation of attributed meanings in sustaining worrisome opinions about robots and AI:
“[..] Thinking about machines that could decide autonomously and act rationally like humans worries me”;
“[..] The idea of being surrounded by tools that rationally act as if they were human frightens me”;
“Men won’t be able to fully control autonomous machines? Yes, there is a concrete chance that robots will escape human control”.
Further down the line, the representation of robots or AI systems as embodied helps structure both positive and negative narratives. The positive narratives of Freedom and Gratification, along with the more negative Obsolescence and Uprising, have roots in and allow for the aforementioned conflation. Freedom suggests an easier daily life thanks to domestic robots, virtual assistants or AI recommender systems, while Gratification refers to friendlier and more fruitful social relationships. Imagining embodied robots or AI systems can elicit closeness and affection (Fortunati et al. 2015; Turkle 2012), pushing some narratives over others. It also reproduces the very same structure of biases and stereotypes that applies offline (gender being a striking example: Pillinger 2019; Unesco 2019).
A final important remark about this conflation returns to regulation. Attributing agency to robots and AI systems is a step towards shifting responsibilities away from AI developers. It is those who design and create them who should remain accountable for their actions and should collaborate across disciplines to mitigate abusive uses of such technologies. The functioning of AI systems comes with social and moral consequences, but AI technologies remain amoral artifacts designed and created for specific goals. The recent EU report (Delvaux 2017) considering civil law rules—such as granting liability for damages—applicable to robots and AI has been harshly criticized: it concretizes fears about the shift of responsibility away from the tech industries that develop and own such artifacts. In 2017, Saudi Arabia granted citizenship to Sophia the Robot, an intelligent humanoid developed by Hanson Robotics. When this shift in responsibility is coupled with conferring citizenship or unrealistic portrayals, the discussion comes full circle.
Since excessively pessimistic representations can unjustifiably increase risk perception in public opinion, this general misalignment can either foster over-regulation or hinder possible beneficial social implications (Stone et al. 2016). As recalled in Sect. 2, public opinion plays an important role in influencing regulation. For instance, the attention of the legislator might be directed to issues related to Artificial General Intelligence (AGI)—one of the main current concerns among the general public—even though AGI remains far from technically possible. Moreover, such misdirected attention could overshadow actual problems such as biases in AI (Bolukbasi et al. 2016; Buolamwini and Gebru 2018), which tend to automate (Eubanks 2018; Benjamin 2019) and reproduce existing intersectional discriminations and stereotypes in our society (Joyce et al. 2021).
As Gillespie (2010, p. 356) warned for the Internet, “it is in many ways making decisions about what that tech is, what it is for, what sociotechnical arrangements are best suited to help it achieve that and what it must not be allowed to become”. In this direction comes the recent EU proposal (European Commission 2021), the first ever regulatory framework on AI.
To complete this composite explanation of AI anxiety, some major events of the last few years help set the stage. Global-scale scandals—such as the Cambridge Analytica events in 2017 or Facebook’s massive 2021 data leak (36 million profiles breached in Italy alone)—shaped public debate and knowledge about technology, as did less known cases involving automated decision systems. Many real examples of the latter might affect public perceptions: AI systems that attribute defendants’ risk scores for recidivism (Angwin et al. 2016) or screen college applications (Naughton 2020; Lamont 2021); algorithms that evaluate teacher quality, college rankings, job applications, policing and sentencing (O’Neil 2016). Moreover, Tesla’s unexpected autonomous car crashes (Stilgoe 2018; BBC 2018) or anomalous behaviours of high-frequency trading AI programs (e.g., Knight Capital Group’s bankruptcy, Neri and Cozman 2020) can negatively influence the public discourse.
To add to this, one should not forget that prevalent narratives are forged and reinforced by big tech corporations (such as Amazon, Google and Microsoft). Their actions, for example in developing AI ethics principles or programs promoting “AI for social good”, are functional to their vision of technology and to users’ final adoption. They propagate specific ideas of scientific and technological progress (e.g., for biotechnology see Smith 2015), often portrayed as serving the “public good”.
As mentioned earlier, policy makers also contribute to shaping narratives. Not only might they be influenced by other mechanisms of narrative diffusion (science fiction, movies, media, corporations), but they could shape regulation accordingly. The same happens for the media, film industry and science fiction. As Jasanoff (2015, p. 27) argues, “coalitions between corporate interests and the media, through advertising and outright control, are increasingly likely to play a pivotal role in making and unmaking global sociotechnical imaginaries”. Conflicting views between the main actors shaping public discourse, seasoned with worldwide scandals and mundane algorithmic decision systems, may leave the public sceptical, reinforcing a feeling of anxiety.
5.2 Non-experts’ view on AI
A second point for discussion relates to non-experts: do they matter? Although intellectuals and researchers are the legitimate actors to lead the discussion about future scenarios, let’s not forget that non-experts also face the need for understanding. A mirror-like image comes from the past, as the Internet was spreading across users. STS studies gathered much research about how humans and artifacts interact, offering examples of how anti-cycling groups contributed to a safer bike design in late nineteenth-century Europe (Bijker 1995), how farmers resisted each new technological innovation (from electrification to the telephone and cars) in the United States in the early twentieth century (Kline 2003), or how non-users of the Internet were taken out of the picture because “non-use” opposed the desirability of “use” (Wyatt 2003). Non-users could resist and reject the Internet, just as non-experts might mis- or under-use the technology behind AI. Regardless, they interact with it and forge its imaginary, as we have tried to illustrate.
Undoubtedly, experts are those entitled to discuss and substantiate with evidence to what degree both hopeful and fearful events may concretize with real consequences (Neri and Cozman 2020). Nevertheless, the average citizen, through her imagination, does wonder and ponder about technological future scenarios. Notably, when her knowledge and awareness are low, collective imaginaries come into play in mediating with reality, with special regard to the most fearful scenarios. Not only does leaving humans and institutions out of the picture fuel anxiety, but it also makes the AI community lose ground in the race for a fairer AI for people.
As seen in Sect. 2, considering all actors involved in the frame of use (Flichy 1995) suits a sociotechnical perspective that comprises all subjects involved. To take a real step forward towards a true AI for people, non-experts should be consulted along the process of design and deployment. One reason for this is that AI developers should understand what values are important for those who will be using the AI systems they design. For example, journalists ask to go beyond important general principles, calling for AI systems to embody the core values (truth, impartiality and originality) of their profession (Komatsu et al. 2020). Another reason goes back to the intersectional issues brought by inequalities in AI design, development and training. Those who design the technology and train the algorithms at the heart of automated decision systems should be aware of the diversity needed all along the process. From data collection to design, from deployment to applications, diversity is the critical issue to address in order to mitigate the propagation and reproduction of inequalities. In this direction, Design Justice is a growing community advocating a new approach to the design of technology that brings together design, power and social justice (Costanza-Chock 2020).
6 Conclusion
This article aimed to investigate the perceptions and attitudes of the general public towards AI, relying on original data collected within the University of Bologna. The theoretical hook lies in a call for a sociotechnical perspective in the study of technology: especially when it comes to AI, it balances dominant deterministic approaches. Unlike previous technological innovations—from the press to the Internet—individuals cannot directly “use” AI technologies: they do not own them as they would a bike or a microwave, nor access them as they would a mobile phone or the Web, adapting them to their purposes. Nevertheless, people’s attitudes and perceptions are crucial in the formation and reproduction of the sociotechnical imaginaries that sustain technological development. Since AI narratives are a building block of the broader imaginary, we analysed data about their perception that, although not representative of the Italian population, offer some relevant insights to be carried forward in future research. Awareness, knowledge and emotional responses change by gender, generation and competence. Rarely should individuals be considered as a “general public”, because they might be policy makers, developers, journalists, writers, entrepreneurs or non-experts. As such, they can be influenced by and, at the same time, shape the visions around technology. Deepening and digging into the social side of AI is a novel but indisputable requirement within the AI community. Future research should invest in an “AI for people”, going beyond the undoubtedly much-needed efforts into ethics, explainability and responsible AI.
Change history
30 July 2022
Missing Open Access funding information has been added in the Funding Note.
Notes
Robot: “A robot is defined as a machine that can assist a human in everyday chores without asking for continuous instructions or supervision (let’s think of it as a collaborator on the assembly line, in cleaning or in dangerous activities for humans such as rescuing in case of natural disasters). Kitchen tools are not to be considered as robots”.
AI: “The term “Artificial Intelligence” (AI) refers to computer systems that perform specific tasks, make decisions without explicit instruction and, to some extent, act rationally like a human. Some applications are:—Text recognition and language translation—Predicting individual searches on Google—Facial recognition—Verifying the reliability of fake news on social media—Assisting in travel (Google maps; self-driving vehicles or drones)—Supporting complex decisions (e.g. large-scale emergency management)”.
Open questions are not extensively used in this contribution, except for Sect. 5.1.
To investigate awareness of AI, we submitted two different questions to respondents. The first aimed to detect contact with the topic: respondents were asked whether they had heard, read or seen something about AI during the last year. In addition to “yes” and “no”, they could report that they didn’t know whether what they saw or read had to do with AI. The second aimed to detect the level of general knowledge through six different technologies or AI applications: virtual assistants, smart speakers, Google Search, Facebook tagging, recommendation systems and Google Translate. For each, we asked whether it makes use of AI.
On a four graded scale: “strongly unfavourable”, “more unfavourable than favourable”, “more favourable than unfavourable”, “strongly favourable”.
Percentages do not always equal 100 due to rounding.
Here, we considered qualitative open questions filled out by non-student women only (N = 563).
References
Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing Accessed 23 May 2016
Barbrook R, Cameron A (1995) The Californian ideology, mute 1:3 London. Rev Sci Cult 6(1):44–72. https://doi.org/10.1080/09505439609526455
BBC (2018) Tesla in fatal California crash was on Autopilot. BBC. https://www.bbc.com/news/world-us-canada-43604440 Accessed 18 March 2018
Beck U (1992) Risk society: towards a new modernity. Sage, London
Beckert J (2016) Imagined futures: fictional expectations and capitalist dynamics. Harvard University Press, Cambridge
Benjamin R (2019) Race after technology: abolitionist tools for the new jim code. Soc Forces 98(4):1–3. https://doi.org/10.1093/sf/soz162
Bennett S, Maton K, Kervin L (2008) The “digital natives” debate: a critical review of the evidence. Br J Edu Technol 39(5):775–786. https://doi.org/10.1111/j.1467-8535.2007.00793.x
Bijker WE (1995) Of bicycles, bakelites, and bulbs: toward a theory of sociotechnical change. MIT Press, Cambridge
Boden MA (2016) AI: its nature and future. Oxford University Press, New York
Bolukbasi T, Chang KW, Zou J, Saligrama V, Kalai A (2016) Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems 4349–4357. https://arxiv.org/abs/1607.06520
Bory S, Bory P (2016) New Imaginaries of the Artificial Intelligence. Im@go. J Soc Imagin 6:66–85. https://doi.org/10.7413/22818138047
Brooks R (2017) The seven deadly sins of predicting the future of AI. Robots, AI, and other stuff. https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/ Accessed 23 October 2018.
Buolamwini J, Gebru T (2018) Gender shades: Intersectional accuracy disparities in commercial gender classification. Proc Mach Learn Res 81:77–91
Callon M (1986) Some elements of a sociology of translation: domestication of the scallops and the fishermen of St Brieuc Bay. In: Law J (ed) Power, action and belief: a new sociology of knowledge. Routledge & Kegan Paul, London, pp 196–223
Cave S, Dihal K (2019) Hopes and fears for intelligent machines in fiction and reality. Nat Mach Intell 1:74–78. https://doi.org/10.1038/s42256-019-0020-9
Cave S, Craig C, Dihal K, Dillon S, Montgomery J, Singler B, Taylor L (2018) Portrayals and perceptions of AI and why they matter. The Royal Society, London
Cave S, Coughlan K, Dihal K (2019) “Scary Robots” Examining Public Responses to AI. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (331–337). https://doi.org/10.1145/3306618.3314232
Cave S, Dihal K, Dillon S (eds) (2020) AI narratives: a history of imaginative thinking about intelligent machines. Oxford University Press, Oxford
Ciston S (2019) Intersectional AI is essential: polyvocal, multimodal, experimental methods to save artificial intelligence. CITAR J 11(2):3–8. https://doi.org/10.7559/citarj.v11i2.665
Costanza-Chock S (2020) Design justice: community-led practices to build the worlds we need. The MIT Press, Cambridge
Crawford K (2021) The atlas of AI. power, politics, and the planetary costs of artificial intelligence. Yale University Press, New Haven
Dastin J (2018) Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G Accessed 11 October 2018
DeFleur M, Ball-Rokeach S (1989) Media system dependency theory. In: DeFleur M, Ball-Rokeach S (eds) Theories of mass communication. Longman, New York, pp 292–327
Delvaux M (2017) Report with recommendations to the commission on civil law rules on robotics (2015/2103) (INL). European Parliament Committee on Legal Affairs, Brussels. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html Accessed 27 January 2017
Dignum V (2019) Responsible artificial intelligence: how to develop and use AI in a responsible way. Springer Nature, Switzerland
Dourish P, Bell G (2011) Divining a digital future: mess and mythology in ubiquitous computing. The MIT Press, Boston
Eisenstein EL (1980) The printing press as an agent of change. Cambridge University Press, New York
EU Commission (2015) Directive (EU) 2015/412 of the European Parliament and of the Council of 11 March 2015 amending Directive 2001/18/EC as regards the possibility for the Member States to restrict or prohibit the cultivation of genetically modified organisms (GMOs) in their territory. Official Journal of the European Union, 1–8. http://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex%3A32015L0412
Eubanks V (2018) Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York
Eurobarometer (2017) Special Eurobarometer 460: Attitudes towards the impact of digitisation and automation on daily life. Technical Report. https://ec.europa.eu/jrc/communities/sites/jrccties/files/ebs_460_en.pdf
European Commission (2020) White Paper on Artificial Intelligence—a European approach to excellence and trust. COM(2020) 65 final. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
European Commission (2021) Proposal for a Regulation laying down harmonised rules on artificial intelligence, COM(2021) 206 final, https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence
Fast E, Horvitz E (2017) Long-term trends in the public perception of artificial intelligence. In: Proceedings of the AAAI Conference on Artificial Intelligence 31.1. https://ojs.aaai.org/index.php/AAAI/article/view/10635
Fisher C (1992) America calling: a social history of the telephone to 1940. University of California Press, Berkeley
Flichy P (1995) L’innovation technique. Récents développements en sciences sociales. Vers une nouvelle théorie de l’innovation La découverte, Paris
Floridi L (2020) AI and Its new winter: from myths to realities. Philos Technol 33(1):1–3. https://doi.org/10.1007/s13347-020-00396-6
Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Harvard Data Sci Rev 1:1. https://doi.org/10.1162/99608f92.8cd550d1
Fortunati L, Esposito A, Lugano G (2015) Introduction to the special issue “beyond industrial robotics: social robots entering public and domestic spheres.” Inf Soc 31(3):229–236. https://doi.org/10.1080/01972243.2015.1020195
Giddens A (1990) The consequences of modernity. Polity Press, Cambridge
Giddens A (1991) Modernity and self-identity: self and society in the late modern age. Polity, Cambridge
Gillespie T (2010) The politics of “platforms.” New Media Soc 12(3):347–364. https://doi.org/10.1177/1461444809342738
Good IJ (1965) Speculations concerning the first ultraintelligent machine. Adv Comput 6:31–88. https://doi.org/10.1016/S0065-2458(08)60418-0
Goree S (2021) The Limits of Colorization of Historical Images by AI. Hyperallergic. https://hyperallergic.com/639395/the-limits-of-colorization-of-historical-images-by-ai Accessed 21 April 2021
Greshko M (2019) The real science inspired by “Star Wars”. National Geographic. https://www.nationalgeographic.com/science/article/151209-star-wars-science-movie-film Accessed 1 December 2019
Jasanoff S (2015) Future imperfect: science, technology, and the imaginations of modernity. In: Jasanoff S, Kim SH (eds) Dreamscapes of modernity: sociotechnical imaginaries and the fabrication of power. The University of Chicago Press, Chicago, pp 1–33
Johnson K (2021) Black and Queer AI Groups Say They’ll Spurn Google Funding. Wired. https://www.wired.com/story/black-queer-ai-groups-spurn-google-funding/ Accessed 5 October 2021
Johnson DG, Verdicchio M (2017a) Reframing AI discourse. Mind Mach 27:575–590. https://doi.org/10.1007/s11023-017-9417-6
Johnson DG, Verdicchio M (2017b) AI anxiety. J Am Soc Inf Sci 68(9):2267–2270. https://doi.org/10.1002/asi.23867
Joyce K, Smith-Doerr L, Alegria S, Bell S, Cruz T, Hoffman SG, Noble SU, Shestakofsky B (2021) Toward a sociology of artificial intelligence: a call for research on inequalities and structural change. Socius Sociol Res Dyn World 7:1–11. https://doi.org/10.1177/2378023121999581
Katz JE, Halpern D, Crocker ET (2015) In the company of robots: views of acceptability of robots in social settings. In: Vincent J, Taipale S, Sapio B, Lugano G, Fortunati L (eds) Social robots from a human perspective. Springer, Cham, pp 25–38
Kline R (2003) Resisting consumer technology in rural America: the telephone and electrification. In: Oudshoorn N, Pinch T (eds) How users matter: the co-construction of users and technology. The MIT Press, Cambridge, pp 51–66
Kling R, McKim G, Fortuna J, King A (2000) Scientific collaboratories as socio-technical interaction networks: a theoretical approach. Paper presented at the Americas Conference on Information Systems, Long Beach. https://aisel.aisnet.org/amcis2000/375
Kolodny C (2014) Stephen Hawking is terrified of artificial intelligence. Huffington Post. https://www.huffingtonpost.co.uk/entry/stephen-hawking-artificial-intelligence_n_5267481 Accessed 5 May 2014
Komatsu T, Gutierrez Lopez M, Makri S, Porlezza C, Cooper G, MacFarlane A, Missaoui S (2020) AI should embody our values: Investigating journalistic values to inform AI technology design. In: Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society 11: 1–13. https://doi.org/10.1145/3419249.3420105
Lamont T (2021) The student and the algorithm: how the exam results fiasco threatened one pupil’s future. The Guardian. https://www.theguardian.com/education/2021/feb/18/the-student-and-the-algorithm-how-the-exam-results-fiasco-threatened-one-pupils-future Accessed 18 February 2021
Lanier J (2014) The myth of AI: a conversation with Jaron Lanier. Edge.org. https://www.edge.org/conversation/jaron_lanier-the-myth-of-ai Accessed 14 November 2014
Latour B (2005) Reassembling the social: an introduction to actor-network-theory. Oxford University Press, Oxford
Lesage F, Rinfret L (2015) Shifting media imaginaries of the Web. First Monday 20:10. https://doi.org/10.5210/fm.v20i10.5519
Levy S (1984) Hackers: heroes of the computer revolution. Anchor Press/Doubleday, Garden City
Lin P, Bekey G, Abney K (2008) Autonomous military robotics: risk, ethics, and design. California Polytechnic State University, San Luis Obispo. https://apps.dtic.mil/sti/pdfs/ADA534697.pdf
MacKenzie D, Wajcman J (1999) The social shaping of technology. Open University Press, Buckingham
Malyska A, Bolla R, Twardowski T (2016) The role of public opinion in shaping trajectories of agricultural biotechnology. Trends Biotechnol 34(7):530–534. https://doi.org/10.1016/j.tibtech.2016.03.005
Marvin C (1988) When old technologies were new: thinking about electric communication in the late nineteenth century. Oxford University Press, New York, pp 9–32
Melucci A (1996) The playing self: person and meaning in the planetary society. Cambridge University Press, Cambridge
Metz C (2021) Who Is Making Sure the A.I. Machines Aren’t Racist?. The New York Times. https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html Accessed 15 March 2021
Mosco V (1999) Cyber-monopoly: a web of techno-myths. Science as Culture 8(1):5–22. https://doi.org/10.1080/09505439909526528
Mosco V (2004) The digital sublime: Myth, power, and cyberspace. MIT Press, Cambridge
Musa Giuliano R (2020) Echoes of myth and magic in the language of Artificial Intelligence. AI & Soc 35(4):1009–1024. https://doi.org/10.1007/s00146-020-00966-4
Nagy P, Neff G (2015) Imagined affordances: Reconstructing a keyword for communication theory. Soc Media Soc. https://doi.org/10.1177/2056305115603385
Natale S (2019) If software is narrative: Joseph Weizenbaum, artificial intelligence and the biographies of ELIZA. New Media Soc 21(3):712–728. https://doi.org/10.1177/1461444818804980
Natale S, Ballatore A (2020) Imagining the thinking machine: technological myths and the rise of artificial intelligence. Convergence 26(1):3–18. https://doi.org/10.1177/1354856517715164
Naughton J (2020) From viral conspiracies to exam fiascos, algorithms come with serious side effects. The Guardian. https://www.theguardian.com/technology/2020/sep/06/from-viral-conspiracies-to-exam-fiascos-algorithms-come-with-serious-side-effects Accessed 6 September 2020
Neff G, Nagy P (2016) Talking to Bots: Symbiotic Agency and the Case of Tay. Int J Commun 10:4915–4931
Neri H, Cozman F (2020) The role of experts in the public perception of risk of artificial intelligence. AI & Soc 35:663–673. https://doi.org/10.1007/s00146-019-00924-9
O’Neil C (2016) Weapons of math destruction: How big data increases inequality and threatens democracy. Crown, New York
Ogburn WF (1922) Social change: with respect to culture and original nature. Viking Press, New York
Ouchchy L, Coin A, Dubljević V (2020) AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. AI & Soc 35(4):927–936. https://doi.org/10.1007/s00146-020-00965-5
Pillinger A (2019) Gender and feminist aspects in robotics. http://www.geecco-project.eu/fileadmin/t/geecco/FemRob_Final_plus_Deckblatt.pdf
Pinch TJ, Bijker WE (1984) The social construction of facts and artefacts: or how the sociology of science and the sociology of technology might benefit each other. Soc Stud Sci 14(3):399–441. https://doi.org/10.1177/030631284014003004
Roberge J, Senneville M, Morin K (2020) How to translate artificial intelligence? Myths and justifications in public discourse. Big Data Soc 7(1):1–13. https://doi.org/10.1177/2053951720919968
Robertson J (2010) Gendering humanoid robots: Robo-sexism in Japan. Body Soc 16(2):1–36. https://doi.org/10.1177/1357034X10364767
Salles A, Evers K, Farisco M (2020) Anthropomorphism in AI. AJOB Neurosci 11(2):88–95. https://doi.org/10.1080/21507740.2020.1740350
Sartori L, Theodorou A (2022) A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Ethics Inf Technol 24:2 (forthcoming)
Sîrbu A, Giannotti F, Pedreschi D, Kertész J (2019) Public opinion and algorithmic bias
Smith E (2015) Corporate imaginaries of biotechnology and global governance: syngenta, golden rice, and corporate social responsibility. In: Jasanoff S, Kim SH (eds) Dreamscapes of modernity: sociotechnical imaginaries and the fabrication of power. The University of Chicago Press, Chicago, pp 254–276
Star SL (1999) The ethnography of infrastructure. Am Behav Sci 43(3):377–391. https://doi.org/10.1177/00027649921955326
Stilgoe J (2018) Machine learning, social learning and the governance of self-driving cars. Soc Stud Sci 48(1):25–56. https://doi.org/10.1177/0306312717741687
Stone P, Brooks R, Brynjolfsson E, Calo R, Etzioni O, Hager G, Hirschberg J, Kalyanakrishnan S, Kamar E, Kraus S, Leyton-Brown K, Parkes D, Press W, Saxenian A, Shah J, Tambe M, Teller A (2016) Artificial intelligence and life in 2030. One Hundred Year Study on Artificial Intelligence: report of the 2015–2016 Study Panel. Stanford University, Stanford. https://apo.org.au/sites/default/files/resource-files/2016-09/apo-nid210721.pdf Accessed 6 September 2016
Suchman L, Blomberg J, Orr JE, Trigg R (1999) Reconstructing technologies as social practice. Am Behav Sci 43(3):392–408. https://doi.org/10.1177/00027649921955335
Turkle S (2012) Alone together: why we expect more from technology and less from each other. Basic Books, New York
Umoh R (2017) Why this artificial intelligence expert says Elon Musk is “selling fear”. CNBC. https://www.cnbc.com/2017/09/06/artificial-intelligence-expert-says-elon-musk-is-selling-fear.html Accessed 6 September 2017
Unesco, EQUALS Coalition (2019) I’d blush if I could: Closing gender divides in digital skills through education. https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1
Winner L (1977) Autonomous technology: technics out-of-control as a theme in political thought. The MIT Press, Cambridge
Wyatt SM (2003) Non-users also matter: the construction of users and non-users of the Internet. In: Oudshoorn N, Pinch T (eds) How users matter: the co-construction of users and technology. The MIT Press, Cambridge, pp 67–79
Zemčík T (2021) Failure of chatbot Tay was evil, ugliness and uselessness in its nature or do we judge it through cognitive shortcuts and biases? AI & Soc 36(1):361–367. https://doi.org/10.1007/s00146-020-01053-4
Zhang B, Dafoe A (2019) Artificial intelligence: American attitudes and trends. SSRN Electron J. https://doi.org/10.2139/ssrn.3312874 Accessed 19 January 2019
Funding
Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Sartori, L., Bocca, G. Minding the gap(s): public perceptions of AI and socio-technical imaginaries. AI & Soc 38, 443–458 (2023). https://doi.org/10.1007/s00146-022-01422-1