1 Introduction

In the transition to a digital era, education in G20 countries faces two challenges: reaping the benefits of AI and related technological advances to improve educational processes in the classroom and at the system level; preparing students for new skillsets for increasingly automated economies and societies, including, for some of them, the skills to contribute to the further development of digitalisation. (Vincent-Lancrin & van der Vlies, 2020, p. 3)

The vision of integrating Artificial Intelligence (AI) in education, as discussed in the quoted paragraph from the 2020 OECD blueprint, is part of an ongoing push to harness digital solutions to improve teaching and learning. AI is an umbrella term for machine-based systems “that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments” (Vincent-Lancrin & van der Vlies, 2020, p. 4). The promotion of AI in education implies both pedagogical and epistemological shifts. Utilizing AI in education can reshape teacher instruction, advance personalized learning, and provide adaptive learning plans based on different students’ needs. Advocates of AI in education stress that employing automated tools for student evaluation can support learning processes, improve the decision-making of teachers and administrators, and reduce biased patterns of decision-making (U.S. Department of Education, Office of Educational Technology, 2023). In addition, AI raises epistemological questions that require educators to consider how knowledge is produced, deployed, and interpreted, and what kinds of knowledge are valued.

The use of AI in education opens up new opportunities and challenges. It requires educators to reconsider and reimagine educational goals and to give thought to the limitations and risks of integrating AI into teaching and learning (OECD, 2021). While there is a growing literature on the potentials and pitfalls of AI in education, there is less consideration of the processes by which social imaginaries are shaped, and of how the implementation of new technologies, such as AI in education, reflects the different visions through which social actors seek to improve social, political, and cultural realities. Yet the challenge of imaginaries in an ultra-digitized world is the glorification of instant forms of knowledge, which, rather than promoting an appreciation of the symbolic meaning of experience, encourages the integration of such platforms into education to achieve instrumental goals.

The first section briefly reviews visions of AI as a transformative social force. I discuss how, alongside utopian visions, AI has raised concerns regarding its potential risks and harms to society. Drawing on Jasanoff (2015) and Hasse (2023), I consider how sociotechnical imaginaries are interrelated with the implications of new technologies, such as AI, in education. Following Hasse (2023), the notion of Socratic ignorance is employed in this paper as a point of departure for questioning our predispositions about new technologies and complicating existing debates about the integration of AI in education. In the final section, I argue that a critical constructivist approach to technology can provide a more nuanced understanding of Socratic ignorance and support teachers and students as they negotiate human-technology relations when using digital technologies. I shall argue that such an understanding transcends the common division between utopian and dystopian views of [educational] technologies.

2 AI, Sociotechnical Imaginaries, and Education

When reading recent literature about AI, I recalled the 1987 book Architect or Bee? The Human Price of Technology, written by the Irish engineer and social activist Mike Cooley (1987). Cooley recognized the possible ramifications of automation for social, cultural, and political life. Akin to current debates about the risks of big data and AI, Cooley and his peers were concerned about dehumanization and the loss of human skills that accompany automation. His genuine concern regarding the envisioned technological future, first voiced in the 1970s, is captured in a passage quoted in the journal AI & Society:

“The tragic waste our society makes of its most precious asset—the skills, ingenuity, energy, creativity and enthusiasm of ordinary people”; and “the myth that computerisation, automation and use of robotic devices will automatically free human beings from soul destroying, backbreaking tasks and leave them free to engage in more creative work.” (Cooley as quoted in Gill, 2016, p. 436)

The vehement call to reconsider the assumptions underlying utopian visions of new technologies originated in the 1970s as a response to various initiatives to replace human labor with machines. Counter-initiatives, such as the Lucas Plan in England, offered an alternative vision based on human–machine symbiosis and human-centered systems (rather than machine-centered ones). The concern, as Cooley indicated, is not limited to labor in and of itself, but relates to how individuals, communities, and societies envision human relations, and to broader ontological issues pertaining to the meaning of being fully human.

The inspiring question (originally raised by Marx in Capital) of the difference between being an architect and being a bee is not merely about knowledge, skills, or even fundamental questions regarding human nature. Rather, it invites a careful examination of the sociopolitical forces that, inter alia, determine how individuals and societies grapple with ontological questions concerning the meaning of being fully human, as well as how people imagine their individual and collective futures. As Cooley argues:

Either we will have a future in which human beings are reduced to a sort of bee-like behaviour, reacting to the systems and equipment specified for them; or we will have a future in which masses of people, conscious of their skills and abilities in both a political and technical sense, decide that they are going to be architects of a new form of technological development which will enhance human creativity and mean more freedom of choice and expression rather than less. (Cooley, 1987, p. 100)

Thus, as one considers the current hype around AI in education, it is essential that, beyond examining the potentials, benefits, and challenges of new educational technologies, one questions whether and how innovative technologies, such as AI, enhance creativity rather than reduce education to a technocratic apparatus based on instrumental reasoning. In this respect, Means (2018) contends that while the rhetoric of digital education entrepreneurs revolves around desired models of education in the twenty-first century, such as creativity, imagination, and critical thinking, the practices of these digital solutions represent a “myopic vision of knowledge and learning as static abstractions detached from social context and deeper forms and ethical development” (p. 85). In addition, Means (2018) suggests that the push for integrating AI tools in education reflects how sociotechnical imaginaries have become algorithmic imaginaries, which “serves to obscure the structural conditions and economic interests and power relations shaping the ‘smartification’ and ‘datafication’ of life and their potential impact on human interaction on social outcomes” (p. 113). In this sense, the nature of algorithmic education is “anti-relational, anti-dialogical, and rooted in the assumption of education and child development that do not accord with social and cognitive science” (p. 117). The growing permeation of digital technology and AI in education marks how educational imaginaries have been transformed into a lucrative model for transnational corporations.

This issue is inherently related to how reality is mediated and how our social imaginary worlds develop. Imaginaries, as Jasanoff (2015) explains, are intrinsically entrenched within our social, cultural, and political worlds, and reflect how we understand social reality and envision potential future trajectories. The development of educational technologies and the drive to integrate innovative educational tools, such as AI, relate to how social actors, such as scientists, technology entrepreneurs, policymakers, and non-governmental organizations, imagine various prospects that can advance the common good. Jasanoff (2015) calls this collective imaginative effort sociotechnical imaginaries:

Sociotechnical imaginaries thus are “collectively held and performed visions of desirable futures” (or of resistance against the undesirable), and they are also “animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology.” Unlike mere ideas and fashions, sociotechnical imaginaries are collective, durable, capable of being performed; yet they are also temporally situated and culturally particular. Moreover, as captured by the adjective “sociotechnical,” these imaginaries are at once products of and instruments of the coproduction of science, technology, and society in modernity. (Jasanoff, 2015, p. 19)

The promise of AI (and other digital technologies) to transform education, prepare students for a changing world, and provide them with the required skills for work is interrelated with the development of sociotechnical imaginaries. Rahm (2023) demonstrates the intrinsic relations between sociotechnical imaginaries and educational policy. She contends that “educational imaginaries have represented a form of governance in between education and technology – ways of coordinating the relationship between citizens and an increasingly technological society” (p. 18). In addition, Rahm (2023) shows how education “has repeatedly been set up as one of, if not the, most appropriate and effective means for adjusting the citizen to the effects of computerization, promoting computer literacy, and, later on, fostering the completely quantified citizen” (p. 18).

The claim regarding quantifying education relates to a broader critique of the tendency to instrumentalize and commodify education. Business models, such as accountability and standardization, have been fostered as means to improve students’ skills for a competitive workforce while overlooking the humanistic goals of education that democratic societies require in order to thrive (Mamlok, 2021; Williamson, 2013). This critique (which has been raised recurrently over the past few decades) is not limited to AI or to educational technologies in general. In the case of educational technologies, however, sociotechnical imaginaries have become part of a growing industry driven by the profit motive. What is at stake is how the prevailing discourse about the promise of AI in education rests on techno-managerial visions that endeavor to make both teaching and learning more efficient, automated, and individualized (Rahm, 2023). In addition, the push toward instrumental goals of education is related to what Ashton et al. (2010) call Digital Taylorism, which refers to the “translation of knowledge work into working knowledge” and “enables innovation to be translated into routines that might require some degree of education but not the kind of creativity and independence of judgement that is often associated with the knowledge economy” (p. 846).

Indeed, these critiques raise concerns regarding the nature of education and the tendency to instrumentalize it. They also demystify the intricate ways in which technology reinforces social values and, in the case of education, serves as an effective means of fostering technocratic visions of knowledge. Nevertheless, sociotechnical imaginaries are neither static nor monolithic; just as they can facilitate instrumental goals, they can also open up other social perspectives, realities, and purposes (Jasanoff, 2015). In this respect, Hasse (2023) provides an intriguing analysis of imaginaries as they relate to human–machine relations, and claims that opening up the complex relations between humans and technology requires us to consider how “new materialism and Socratic ignorance reveal the relational learning process that can lead to the unfolding of relational agency” (p. 76). Her call for developing Socratic ignorance, namely the “acknowledgment of one’s own ignorance” (Hasse, 2023, p. 76), is particularly interesting in the context of the philosophical examination of technology; instead of relying upon preconceived perceptions of human-technology and human–human relations, Socratic ignorance gives one the space to suspend one’s convictions and reconsider human-technology relations. Embracing such an approach “emphasizes that sociotechnical imaginaries are not static but are tested in workplace learning, as technology is put to use. In the learning processes, relations change as humans and technology move from being imagined to being practiced” (Hasse, 2023, p. 77).

Hasse’s argument is based on the premise that imaginaries can be understood as an interpretation of the present that can, to some extent, be linked to how humans envision the future. Elaborating on this idea opens various ways to consider how sociotechnical imaginaries can support our understanding of everyday reality. Jasanoff (2005, 2015) rightly notes that sociotechnical outcomes and the emergence of new technologies yield varied responses among different societies, nations, political regimes, and political actors. In the context of digital technology and education, the degree to which new technologies, such as AI, are accepted, interpreted, and implemented varies and is contingent on collective norms, perspectives, and social imaginaries. This point is particularly important in light of dystopian and deterministic views of technology, which often dismiss the potential of sociotechnical imaginaries to encourage different visions of education that move beyond instrumental goals.

In this sense, Socratic ignorance can be understood as a disposition with which to engage with and evaluate new technologies. Consider, for example, the recent development of OpenAI’s ChatGPT, which has elicited a variety of reactions (both utopian and dystopian), proposals, and connections to education. Some experts recognize the relevance of ChatGPT to teaching and learning and suggest that the advanced chatbot can improve personalized learning by inputting “student essays, discussion board responses, and other assignments into ChatGPT to seek out alignment to assignment requirements and to seek out evidence for the need of further instruction/intervention” (Glaser, 2023, p. 1946). In addition, ChatGPT can enhance existing AI tools that automate grading. Beyond personalizing learning and grading, ChatGPT holds the potential to translate educational content and improve interactive and adaptive learning experiences (Grassini, 2023). Against this promising vision, concerns have been raised with respect to students’ honesty, plagiarism, security issues that can violate students’ privacy, and “risks related to racial discrimination due to inherent biases in the training contents” (Birenbaum, 2023, p. 3). These concerns have led several leading universities and school districts to ban students from using ChatGPT (Herman, 2023; Rosenzweig-Ziff, 2023).

These contradictory reactions are congruent with Jasanoff’s (2002) analysis of the wide variety of responses to the emergence of new technologies. However, the concept of Socratic ignorance can help us develop a more nuanced understanding of ChatGPT and other AI technologies: by suspending our predispositions and acknowledging our ignorance about certain aspects of a new technology, we can potentially generate new insights about its benefits and pitfalls and, consequently, about its multifarious pedagogical implications (Hasse, 2023). Hasse offers humanoid robots as a prime example of social imaginaries and of how the practice of working with these robots may give rise to Socratic ignorance:

The first relation begins with zero potential for Socratic ignorance. The robot developers develop robots, and the practitioners have imaginaries of how robots work. The second relation arises when the substantiated robots are implemented into local practices. Here the possibility for Socratic ignorance arises, as practitioners see how robots work in practice. Robot developers can reach Socratic ignorance if they follow their robots into a local practice and learn about local motives for work. (Hasse, 2023, p. 79)

While Hasse focuses on robots, her observation is pertinent to human-technology relations in a broader sense. For example, the reactions to ChatGPT noted briefly above demonstrate the discrepancies between the first and second relations, which are based on different collective social imaginaries, values, and goals of education. For some, ChatGPT brings about new visions of education that can support students’ competencies and improve efficiency in instruction and in the evaluation of students’ performance. For those who are more skeptical, ChatGPT signifies the degradation of teaching and learning and a movement toward a technocratic kind of educational discourse that lacks the critical nuance needed to sustain democratic societies.

Considering each of these perceptions is beyond the scope of this paper. Yet it is essential to note that what is missing from the current debate about ChatGPT (and other educational technology tools) is an articulation of the philosophical approach that guides educators in evaluating new technologies, such as AI and ChatGPT (as its latest development). Considering philosophical approaches to education can help us clarify what kind of knowledge is valued, how knowledge is evaluated, what kinds of human experiences educators wish to offer their students, what the meaning of being fully human is, and what kind of society we wish to develop. These are some of many longstanding questions that merit attention not only in the case of AI (and other emerging technologies) and education but also as a starting point for assessing any new educational practice. In the context of this paper, the emergence of AI has significant implications for both the epistemological and ontological dimensions of social imaginaries, as illustrated in the following section.

3 Moving Beyond Instrumentalism

Examining the epistemological dimensions of imaginaries of technological innovation in education requires us to look carefully at how knowledge is produced, deployed, and interpreted (Mamlok, 2021). Enhancing our ability to evaluate and discern different forms of knowledge is particularly vital in light of current global challenges, including global warming, populism, inequity, and migration. In this sense, Cooley’s provocative question of whether our mindset is that of a bee or of an architect is important as we consider educational goals in general and as we integrate new technological innovations into education.

Beyond the instrumentalization and commodification of education, I posit that part of the problem of how we treat digital technologies in educational settings is the dichotomous division between perceiving technology through utopian visions and through dystopian lenses. I contend that both approaches are mistaken. The utopian approach tends to treat digital technologies as neutral and overlooks the sociopolitical elements of design and production. Relying on technological determinism, the dystopian approach looks carefully at the structural grounds of technology design but misses the latent social possibilities of those technologies (Mamlok & Knight-Abowitz, 2021). In the context of this paper, neither approach supports the creation of imaginaries that can move beyond a reductionist understanding of reality. Looking at the world beyond instrumental imaginaries that decontextualize reality is needed to create the conditions that allow students to develop a more coherent understanding of reality. Two prominent approaches that offer a compelling analysis of technology are postphenomenology and critical constructivism. In what follows, I provide a brief review of both and suggest that each can help us better grasp knowledge and imaginaries in an ultra-technological world.

Developed by Don Ihde (1990, 2006, 2012), postphenomenology is a philosophy that critically explores human-technology relations on the basis of phenomenology and empirical studies of science and technology. In general, phenomenological analysis has perceived technology as “a broad, social, and cultural phenomenon, with a special focus on the ways in which technology alienates human beings from themselves and from the world they live in” (Rosenberger & Verbeek, 2015, p. 10). While acknowledging the significance of phenomenology in elucidating various forms of human experience, Ihde (1993) suggests that it falls short of capturing the essence of actual experience within a technologically advanced world. Empirical studies of science and technology offer valuable insights into the societal impact of technology, yet they tend to overlook the intricate philosophical connections among science, technology, and culture. Postphenomenology aims to integrate philosophical exploration with empirical studies and to offer a more nuanced understanding of human-technology-world relations.

While postphenomenology cannot be reduced to a single concept, it is essential, for the purpose of this paper, to highlight human-technology relations, as identified by Ihde (2009), through four different dimensions: embodiment, hermeneutic, alterity, and background.

Embodiment relations refer to human experience wherein technology becomes an integral and inseparable part of our bodies. When one uses eyeglasses, vision is mediated through the eyeglasses. These relations are transparent; since the devices recede into the background of one’s experience, one does not consider the nature of using eyeglasses or hearing aids; rather, they become part of one’s body (Rosenberger & Verbeek, 2015). Ihde (2009) points out that embodiment relations “become part of our ordinary experience” (p. 42). He schematizes embodiment as (human-technology) → world.

Hermeneutic relations entail the interpretation and mediation of information or knowledge facilitated by technology. For example, the sound of a timer alerts us to a task we have to finish; safety devices installed in cars alert us through visual signals or sounds. Ihde (2012) schematizes these relations as human → (technology-world).

Alterity relations refer to engaging with “technologies themselves as quasi-objects or even quasi others” (Ihde, 2012, p. 43). Consider, for example, the interaction with ATMs, dialogue boxes in various software, or timely examples of virtual interactive assistants, such as Siri or Alexa. The nature of these experiences is based on devices that mimic, to some degree, human interaction. Ihde (2012) schematizes these relations as I → technology (-world).

Background relations refer to technological devices that operate in the background, such as refrigerators and air-conditioning. While such technologies remain in the background, they shape the nature of our experiences (e.g., having a refrigerator has changed how people keep their food and how they perceive the notion of food in general). Ihde (2012) schematizes these relations as I → technology (world-).
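For ease of reference, the four schematizations can be set side by side. The following compact rendering simply collects the in-text notations above; it is a summary aid rather than Ihde’s own typography:

\begin{align*}
\text{Embodiment:} &\quad (\text{human--technology}) \rightarrow \text{world}\\
\text{Hermeneutic:} &\quad \text{human} \rightarrow (\text{technology--world})\\
\text{Alterity:} &\quad \text{I} \rightarrow \text{technology}\ (\text{-world})\\
\text{Background:} &\quad \text{I} \rightarrow \text{technology}\ (\text{world-})
\end{align*}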

Postphenomenology offers a compelling analysis of how technology has transformed the way subjects interpret the world and has become an indispensable part of how reality is mediated and interpreted. While embracing some aspects of postphenomenology (such as the relation between mediation and technology and the rejection of determinism), Feenberg (2020) contends that what is missing in postphenomenology is the role of the sociopolitical dimensions in the construction of technology: “Critical constructivism has focused on the collective process of transforming technology in which these subjects are engaged” (p. 29). Feenberg argues that, when one looks at the evolution of technology, it is evident that technological developments have involved collective perceptions that altered technologies and were part of hermeneutic relations. Thus, he claims that “instead of the world interpreted through technology, it is technology that is interpreted within a world” (p. 29), which can be schematized as human → (world-technology).
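Feenberg’s inversion becomes clearer when his schema is juxtaposed with Ihde’s hermeneutic relation; again, this rendering follows the in-text notation rather than either author’s own typesetting:

\begin{align*}
\text{Ihde (hermeneutic):} &\quad \text{human} \rightarrow (\text{technology--world})\\
\text{Feenberg (critical constructivism):} &\quad \text{human} \rightarrow (\text{world--technology})
\end{align*}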

Feenberg’s claim supports this paper’s concern regarding sociotechnical imaginaries and their relevance to education. AI in education, as pointed out earlier, can be understood through different kinds of imaginaries and interpreted in various ways by different actors. Adopting a critical constructivist approach can help teachers and students negotiate the intricate layers and the underlying political forces that determine the nature of new digital platforms (such as ChatGPT and personalized learning solutions). What is at stake is the kinds of imaginaries evoked through innovative technologies and their usage in education, that is: Do these imaginaries distort reality or reveal new horizons of knowledge? Are these platforms used in education as instrumental devices to perpetuate the status quo, or might they help students unleash their imaginations? Does the design of the platforms used in education support the realization of different kinds of social imaginaries?

Indeed, adopting a more constructivist approach requires educators to move beyond a reductionist understanding of algorithmic education, which refers to “a web of corporate interests and speculative narratives that project sociotechnical solutions and futures for school and society based on digital technology platforms” (Means, 2018, p. 105). In a more concrete reference to imaginaries, Ben Williamson claims:

Sociotechnical imaginaries are not just science fiction fantasies. The dreamscapes of the future that are dreamt up in science laboratories and technical [Research & Development] departments sometimes, through collective efforts, become stable and shared objectives that are used in the design and production of actual technologies and scientific innovations—developments that then incrementally produce or materialize the desired future. (Williamson, 2016, p. 222)

There is a lot of truth in the warnings of Williamson (2016) and Means (2018), especially when considering immersive technologies that can lead to a naïve understanding of reality and a blurring of the boundaries between reality and virtuality or between truth and imaginaries. Hasse’s (2023) call to use Socratic ignorance as a means of questioning the influence of technologies on our imagination is of great import as we consider the rapid changes in new technologies and the ways in which they can transform collective imaginaries. The point of departure of Socratic ignorance is acknowledging that there are things about new technologies that we do not know. Yet recognizing the potential benefits and limitations of new technologies also requires educators to recognize the sociopolitical dimensions of those technologies and to understand the political and cultural contexts in which they operate.

In this vein, Feenberg’s (1991, 2010, 2020) work on critical constructivism helps us reach a more nuanced understanding of technology design and balances utopian and dystopian views of technologies. I propose that understanding Socratic ignorance in terms of human → (world-technology) can provide a more complete theoretical framework for considering the relations between students/teachers and technology, and can help them develop a more critical approach as they negotiate various forms of imaginaries.

Questions remain regarding how educators can translate the notion of Socratic ignorance into productive practice. One may rightly suggest that, while Socratic ignorance can help us scrutinize the nature of imaginaries and recognize our limited knowledge, or what Freire (1998) calls our unfinishedness, it may, for some, inadvertently reinforce a distorted representation of reality. Demystifying the fine line between how social imaginaries are developed, embraced, or rejected is one of the challenges of current education and requires us to consider both the epistemological and ontological dimensions of living in a digital culture.

4 Conclusion

We must not allow our common sense to be bludgeoned into silence by the determinism of science and technology, into believing that the future is already fixed. The future is not ‘out there’… It has yet got to be built by human beings and we do have real choices, but these choices will have to be fought for, and the issues are both technical and political… We have to decide whether we will fight for our right to be the architects of the future, or allow a tiny minority to reduce us to bee-like responses. (Cooley, 1987, p. 77)

I opened this paper with Cooley’s distinction between being an architect and being a bee. This distinction goes deeper, problematizing the meaning of being fully human. How can we negotiate between the symbolic and the material world? What are the conditions under which we can envision a transformative reality and transcend what Wendy Brown (2005) calls homo economicus? Or, if we follow Zygmunt Bauman’s (2011) liquid world, how can we balance surrendering to a bulimic consumerist culture against overpowering it?

These questions become more complex as we consider the rapid changes in digital technologies, such as AI, that reshape how we perceive the world, how we value knowledge, and how we construct our imaginaries. In the case of education, imaginaries can be understood as a reinforcement of social values that tend to focus on instrumental and technocratic goals. Nevertheless, I have contended that sociotechnical imaginaries should not be conceived through a deterministic lens. Since imaginaries interrelate with social, cultural, and political worlds, they are not static; as Jasanoff (2015) notes, imaginaries can support the process of investigating “how, through the imaginative work of varied social actors, science and technology become enmeshed in performing and producing diverse visions of the collective good, at expanding scales of governance from communities to nation-states to the planet” (p. 11).

Thus, as new technologies emerge, it is essential that policymakers, educators, and non-governmental organizations reconsider the purposes of education and advance a humanistic vision of education that supports students’ capacities to move beyond a technocratic understanding of reality and to envisage new prospects, opportunities, and alternatives for living in a world mediated by and through new technologies. Reconsidering the purpose of education in digital times invites us to revisit and deliberate on foundational issues of education, such as: What goals of education are achieved by integrating AI tools in education? How do these purposes serve the public good? What kinds of digital pedagogies can support the advancement of social skills that transcend the preparation of young people for the workplace and encourage them to be politically active? How can educators use AI tools to develop students’ critical skills and improve their capacity to read (political, social, historical, and cultural) reality with greater nuance? These are a few of the many questions that merit further exploration and research. In this sense, Socratic ignorance can support the examination of our predispositions regarding new technologies and our ability to question them, as well as to unfold new imaginaries.

I have argued that, while embracing the notion of Socratic ignorance, it is important to develop a nuanced understanding of technology, one that recognizes its lack of neutrality and supports a deeper understanding of how knowledge is produced, deployed, and interpreted in the digital age. Against determinist views of technology, and in order to advance a creative, productive, and transformative understanding of educational technologies, I have suggested that a critical understanding of technology grounded in Socratic ignorance and critical constructivism can enhance our capacity to think critically about technology and open new landscapes of ideas, hopes, and imaginaries.