Introduction

When the term AI first saw the light of day at the Dartmouth workshop in 1956, the proposal for the conference included the assertion that ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’ (McCarthy et al., 2006, p. 12). In the past, the focus was on creating machines that could simulate learning and intelligence. However, the current discourse on AI in the public domain is shifting away from mere simulation towards acknowledging the significant disruption of human processes and practices that AI has set in motion. Indeed, one could encapsulate this disruption by highlighting the stark contrast between the rapid evolution of AI and the comparatively slower pace at which most educational actors are acquainting themselves with these advancements. Unlike transformative technologies before it, AI continues to develop at a speed and scale that allow little time for acceptance and for the subsequent, necessary mechanisms of oversight and governance.

Traditional AI systems focused on narrow tasks, such as playing chess (Mainzer & Mainzer, 2020). In contrast, current foundation models possess a pre-trained, generalised knowledge base that enables them to perform a wide array of language-related tasks. This versatility positions foundation models as potential game-changers in education, in which learning is most often supported by linguistic interactions. Today’s AI foundation models, defined as ‘the base models trained on large-scale data in a self-supervised or semi-supervised manner that can be adapted for several other downstream tasks’ (Bommasani et al., 2021), are capable of simulating not only every aspect of learning but also a wide range of creative activities: solving complex problems, generating functional computer code, and quoting much of the authored world. These AI applications produce a wide and general variety of outputs based on large language models, which can be adapted to a range of educational interactions including information searches, creating exemplars, generating detailed explanations or summaries, translating languages, creating quiz questions, and even simulating dialogue for interactive learning scenarios.
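To make this versatility concrete, the following minimal sketch shows how a single pre-trained model might be steered towards several of the educational interactions listed above through prompting alone. The `complete` helper is a hypothetical placeholder for whatever text-generation interface a given model exposes, and the prompt templates are illustrative assumptions rather than recommendations.

```python
# Minimal sketch: one pre-trained foundation model, several downstream
# educational tasks, adapted purely through prompting.
# `complete(prompt)` is a hypothetical placeholder for a model's
# text-generation call; no specific vendor API is assumed.

def complete(prompt: str) -> str:
    raise NotImplementedError("Plug in your model's text-generation call.")

def make_task_prompt(task: str, material: str) -> str:
    """Build a prompt that steers the same base model to a new task."""
    templates = {
        "summary": "Summarise the following text for a 14-year-old:\n{m}",
        "quiz": "Write three multiple-choice questions, with answers, on:\n{m}",
        "explanation": "Explain the following concept step by step:\n{m}",
        "translation": "Translate the following passage into French:\n{m}",
    }
    return templates[task].format(m=material)

lesson = "Photosynthesis converts light energy into chemical energy."
for task in ("summary", "quiz", "explanation", "translation"):
    prompt = make_task_prompt(task, lesson)
    # print(complete(prompt))  # same model, four different tasks
```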

In this manifesto, we consider the UNICEF definition of AI, which is future-proof, human-oriented, and data-dependent (Holmes et al., 2022). AI refers to ‘machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments’ (OECD, 2019, para. 12). AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour when provided additional context (UNICEF, 2021). Special focus is placed on the educational and societal impacts of large language models (LLMs) and generative AI (GenAI) over more traditional AI technologies, such as machine learning (ML). In what follows, we use the generic term artificial intelligence (AI) to cover these distinct technologies.

The advanced capabilities of AI may be seductive as a potential technical solution both for the educational system’s shortcomings during the global school closures caused by the COVID-19 pandemic and for current educational challenges around diversity in the learning process. Through adaptive learning environments (Dogan et al., 2023; Minn, 2022), AI can identify and respond to students’ unique learning challenges, preferences, and pace, fostering a more inclusive and effective educational environment. Similar to AI, the use of mobile technologies in the classroom has been discussed both as an opportunity, such as when they are used to support pedagogy or learning activities, and as a distraction in instances when they are applied without pedagogical strategies. As such, some schools have chosen to ban such devices or technologies in their codes of behaviour. In this context, the ongoing debates regarding the need to limit access to different types of technologies for K12 learners continue to highlight the concerns among educational stakeholders—notably parents, teachers, school principals, and policy makers—regarding their use. The integration of technology in school activities and curricula remains a contentious issue in which technologists and technophobes offer different perspectives on a complex phenomenon where technological transformation creates both opportunities and challenges (Culver, 2017; Romero et al., 2016).

Likewise, AI accelerates the rate of change in different educational domains by developing models, enriched through machine learning, that are used to generate predictions or create educational content. This process can disrupt the standard assessment paradigm (SAP), as defined by Mislevy et al. (2012), by personalising the assessment process and allowing for more accurate and relevant measurements. Here AI excels at replicating educational elements, such as evaluation methods like multiple-choice questions, essays, and short-answer questions, while also supporting adaptive learning systems where learning analytics are used to support the learning process. However, it is important to note that AI cannot replicate every aspect of learning. For instance, when evaluating learning processes centred around a shared understanding or values fostering metacognition, as well as in competency-based assessment of activities in which human empathy, morality, and subjectivity are required, AI tools are limited in their ability to provide genuinely sensitive feedback, even if they can be designed to simulate a certain type of empathic relationship with the end-user (Montemayor et al., 2022). This underscores why AI should be considered an enhancement of teaching practices that augments the learning experience through a hybrid intelligence approach, rather than a replacement for certain teaching tasks. Hence, while AI technologies hold enormous potential for the learner’s experience and education in general, there is also sufficient cause for concern (Dwivedi et al., 2023).

Another challenge of AI technologies concerns human creation and originality. The integration of AI carries the potential for varying degrees of plagiarism and unethically facilitated collaborative creation of intellectual content. This ethical concern looms over both students and educators, encompassing not only the manner in which students use AI, but also the guidance provided by their teachers (Dwivedi et al., 2023). The intellectual property challenges raised by generative AI have led various research journals, newspapers, and higher education institutions to set up guidelines regulating the ethical use of AI in the production of new works.

This manifesto advocates for a balanced and thoughtful integration of AI technology into the educational landscape. It does this by recognising the current tensions between AI’s inherent potential and its ability to disrupt the current human-centred education paradigm. It also seeks to distance itself from the often polarising debate where AI is either a transformative solution with the ability to greatly benefit education or an unpredictable technology, controlled by powerful companies with hidden motives. While both sides may yet hold some truth, by focusing solely on AI’s underlying technology, we risk losing sight of the very essence of education: the support of human development and well-being within a community.

In the following sections, the learning scientists and education experts behind this manifesto present several recommendations to help mitigate the scale of AI’s disruption in educational settings. Our hope is that these recommendations provide context to educational stakeholders and developers seeking guidance on how to integrate AI technology, while retaining teacher and student agency and supporting, rather than displacing, future teaching and learning processes. Finally, we acknowledge that these recommendations, presented here at the dawn of AI’s adoption, will need to evolve in order to keep pace with this rapidly changing technology.

Empowering Students and Teachers as Decision-Makers

Cuban (1986) recognises that one of the main challenges associated with integrating technology into education is the exclusion of teachers from the decision-making process. Involving teachers in the participatory design process and empowering them to adapt technology to the needs of their students is indispensable for ensuring that the resulting learning solutions align with their needs and those of their learners (Frøsig, 2023). Tedre et al. (2023) propose a similar approach in their project engaging Finnish students in a collaborative machine learning design process. Teachers are an essential component of the learning process in a human-centred educational system because they help students develop a common understanding of their roles and duties based on a distinct purpose and core set of values. This relationship, based on collaboration and shared goals, is an essential component of education.

However, the prevailing trend in educational technology to prioritise automated indicators, even though they can miss aspects of the learning experience not easily represented by data, can lead to learning analytics being emphasised at the expense of pedagogic principles and ethics (Williamson, 2022). For example, factors such as socio-economic status, values, and motivations are difficult to quantify as data but nonetheless play a substantial role in a learner’s progress. Sahlberg and Hasak (2017) argue that mining big data can divert educators, leaders, pundits, and policymakers from meeting the diverse and unique needs of their students. As such, they promote the use of ‘Small Data’ (Lindstrom, 2016) to underline ‘small clues that uncover huge trends’ (Sahlberg & Hasak, 2017, p. 7), typically centred on students’ progress, emotions, behaviours, and other important observable details.

To enable meaningful integration, educational technologies should empower teachers to make informed choices, not only within their classrooms but also at the national level. As Sahlberg and Hasak (2017) advocate, this can be achieved by giving more autonomy to educators, aiming for an emancipation from school bureaucracy. It is imperative to reverse the current trend of surrendering decision-making power to technologies and, instead, place educators at the forefront when shaping the educational journey of students (Tedre et al., 2023). Learning is a social process and, currently, only humans can process the full spectrum of observable and unobservable factors that impact student learning. Furthermore, sidelining teachers runs the risk of excluding a substantial body of professional expertise and longitudinal knowledge regarding their learners. Additionally, teachers can act as a counterbalance to an over-reliance on learning analytics, in which machine technology and its outputs are automatically assumed to be correct (Swiecki et al., 2022).

Impact of Artificial Intelligence on Existing Educational Paradigms

The widespread availability of technologies such as generative AI poses a significant challenge to traditional pedagogical instruction-based practices and assessment methodologies. While the current educational systems in most Organisation for Economic Co-operation and Development (OECD) member countries focus on evaluating student performance based on their ability to meet specific learning goals, there are initiatives afoot that seek to put student engagement and agency above the role of learning goals (Harouni, 2015).

The current evolution of AI agents is moving them beyond text-based interactions towards multimodal collaboration where AI is seen as a partner rather than merely a tool. Here AI can assume the role of either a coach or a teammate, which can support self-regulation, collaboration, knowledge co-construction, and problem solving (Cress & Kimmerle, 2023; Dwivedi et al., 2023; Lodge et al., 2023; Mollick & Mollick, 2023; Sharples, 2023).

Contemporary AI technologies can be considered an extension of existing knowledge and skills, extending and enhancing them in ways that go beyond what either a human or a machine could do individually. More precisely, such technologies can be integrated into the thinking and learning processes students engage in. For example, AI image generators can potentially enhance and build on human capabilities for creativity (Lodge et al., 2023).

Furthermore, Sharples (2023) introduced several ideas for how contemporary generative AI tools could be used to scaffold students in collaborative and dialogical learning: (a) a generator of possibilities; (b) an opponent in argumentation; (c) an assistant in design; (d) an exploratory tool; and (e) a collaborator in creative writing. However, because the reliability and accuracy of information provided by generative AI is not guaranteed, metacognitive skills like self-reflection and critical thinking are needed when students work with generative AI. In practice, human learners need to self-monitor their learning goals and states, continuously evaluate AI responses, and adapt their own learning strategies or prompts accordingly (Lodge et al., 2023). Evaluating responses is a complex skill in itself: it requires students to compare AI responses with scientifically or expertly grounded ones, and to evaluate how relevant the AI responses are to the context of the problem.
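As an illustration of one of these scaffolding roles, the sketch below casts a chat-style model as Sharples’ ‘opponent in argumentation’ by means of a role-setting system message. The `chat` helper and the message format are assumptions modelled on common chat-completion interfaces, not a reference to any particular product.

```python
# Illustrative sketch: a generative model as an "opponent in argumentation"
# (Sharples, 2023), configured through a role-setting system message.
# `chat(messages)` is a hypothetical placeholder for a chat-completion call.

def chat(messages: list[dict]) -> str:
    raise NotImplementedError("Plug in your chat-completion call here.")

OPPONENT_ROLE = (
    "You are a debate opponent. Whatever position the student takes, "
    "present the strongest counter-arguments, name the kind of evidence "
    "that would support them, and end each turn with one probing question. "
    "Do not simply agree: your goal is to exercise the student's reasoning."
)

def argue_against(student_claim: str) -> str:
    """Return a counter-argument to the student's claim."""
    return chat([
        {"role": "system", "content": OPPONENT_ROLE},
        {"role": "user", "content": student_claim},
    ])

# argue_against("Homework should be abolished in primary school.")
```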

From a future perspective, simply fine-tuning existing language models for educational purposes is not an adequate approach to designing AI that can fully participate as an agent in social learning activities. According to Sharples (2023), current AI technologies lack, for example, ‘long-term memory, the ability to reflect on its output and consolidate its knowledge from each conversation. More fundamentally, it does not capture the affective and experiential aspects of what it takes to be a learner and teacher’ (p. 7).

Likewise, it is important to start considering how UX, in the framework of co-creative AI systems (CAIS), might encourage, or even discourage, new modes of human–AI co-creativity by virtue of its interactive design (Feldman, 2017). The idea of a text box as the ideal gateway to AI for the majority of users and use cases already seems dated. It will be interesting to see how users might perceive AI differently when engaged in a ‘spoken conversation’ versus typing words into a text box (Rezwana & Maher, 2021). Given the ability of UX design to shape the cognitive and emotional factors in human–AI interactions, there is a need to support further UX analysis and development for improving AI technologies in education.

Artificial Intelligence in Human-Centred Education

In advocating a human-centred approach to education, it is essential to establish frameworks that guide the ethical and responsible use of AI technologies. Creating these frameworks requires an ongoing dialogue between educators, learners, AI developers, technologists, legislators, and other stakeholders, with the express purpose of creating alignment between the means (e.g. technology or pedagogy) and the objectives (e.g. the purpose of education) of its use. AI can be viewed as a tool for achieving specific goals, but those goals should be clearly defined, shared among all stakeholders, and transparent in purpose. Within this context, the European Union’s (EU) recent AI Act takes the first steps in creating a legally binding framework for the regulation of AI’s development and use in its member states. Importantly, one of the primary stated goals of the EU’s AI Act is that AI systems should be overseen by people rather than left to operate autonomously, recognising the role humans play in guiding their use and preventing harmful outcomes (Helberger & Diakopoulos, 2023; Kazim et al., 2023).

Given its potential for disruption, teachers should have the autonomy to reject technologies that may not align with their pedagogical philosophy or the unique needs of their students. Simultaneously, learners should be granted the right to explore and test, under the guidance of educators, emerging technologies that might support their learning and co-creative processes. This dual approach respects the agency of both teachers and learners, fostering an environment where educational technologies are not imposed but collaboratively chosen based on their merit and relevance.

To support this aim, teachers will need access to ongoing professional development in the form of multi-disciplinary training groups and dynamic learning communities. Romero (2023) argues that this calls for a simultaneous focus on improving teachers’ digital competences and on the time and space necessary to allow for their acculturation to AI. Access to diverse types of acculturation activities is critical if teachers are to develop the confidence and agency necessary to gauge the impact of rapidly evolving technologies. For instance, participation in dynamic learning communities can help teachers develop a diversity of expertise that benefits not only their classrooms but the school and wider community at large. Furthermore, participation in these learning communities gives teachers access to peers whose specific digital competencies might complement their own weaker areas. This also serves to benefit learners who, without this framework of shared expertise, may not have had access to technologies such as educational robotics, artificial intelligence, maker education, or creative programming. Continuous professional development also plays a key role in ensuring that teachers are prepared for potential disruptions caused by AI in education. Given the speed at which AI technologies are advancing, ongoing professional training will be key to ensuring that teachers have access to the latest pedagogical methods and materials as well as continuous updates on the technology’s evolution. Additionally, co-design activities between researchers, learners, and educators should not only be encouraged but proactively initiated and supported through curricula, dedicated learning environments, and research partnerships. An example of such an activity can be found in Finland with the Generation AI project, which aims to empower teachers and students by increasing their data agency and AI literacy through the use of co-created AI tools, materials, and pedagogies.

Hybrid Intelligence

Akata et al. (2020) introduce hybrid intelligence (HI) as a paradigm that combines human and machine intelligence to enhance human intellect and capabilities, emphasising collaboration rather than replacement. In formulating a research agenda for HI, they identify four key challenges: Collaborative HI, Adaptive HI, Responsible HI, and Explainable HI. These challenges underscore the need to address the intricate dynamics between humans and intelligent systems, laying the foundation for the evolution of AI technologies. The emphasis on collaboration between AI and humans within hybrid intelligence aligns with the idea of empowering individuals to actively engage with AI systems towards a defined objective. Additionally, the pursuit of adaptive intelligence supports individuals’ capacity to learn and adapt to evolving technological landscapes, as facilitated by improved data and AI literacy.

In contemplating the potential of hybrid intelligence to augment human capabilities through technology, it is essential to consider the foundational human resources that have been instrumental in the development of AI. The agentic use of AI, as highlighted by the emphasis on data and AI literacy, serves to bridge the gap between AI’s technological potential and the capacity to develop new activities and products. By nurturing the ability to comprehend and navigate the complexities of data and artificial intelligence, learners can become active participants in the development and utilisation of AI. This objective aligns seamlessly with the overarching goal of augmenting human capabilities, emphasising the empowerment of individuals to make informed decisions and contributions in a technology-driven landscape. These types of participatory approaches can also support the development of the agentic uses of AI (Tedre et al., 2023).

These participatory approaches become a pivotal aspect in addressing the challenges set forth by Akata et al. (2020) and implemented in the workshops by Tedre et al. (2020). They not only align with the goals of Collaborative and Adaptive HI, but also reinforce the importance of foundational human resources in shaping the trajectory of hybrid intelligence. The development of data and AI literacy is not just an objective but a catalyst, allowing learners to benefit from AI and to develop different applications of hybrid intelligence in education.

Lodge et al. (2023) refer to hybrid learning as an educational approach in which generative AI systems work in conjunction with human learners to promote both cognitive and metacognitive aspects of learning. From a cognitive perspective, AI technologies can be used to scaffold instances of information processing, content generation, or problem solving. From a metacognitive perspective, AI can support learners with features like real-time feedback, adaptive questioning, and self-assessment prompts that help learners monitor, evaluate, and adjust their learning strategies. This makes AI an active interlocutor or teammate (Lodge et al., 2023).
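A minimal sketch of this hybrid pattern follows: each AI turn pairs cognitive support (an answer) with a metacognitive prompt that asks the learner to monitor and evaluate. The `complete` helper and the prompt wording are illustrative assumptions, not a description of any existing system.

```python
# Sketch of a hybrid-learning turn: cognitive support (an answer) is
# paired with metacognitive scaffolding (a self-assessment prompt).
# `complete(prompt)` is a hypothetical text-generation placeholder.

def complete(prompt: str) -> str:
    raise NotImplementedError("Plug in your model's text-generation call.")

METACOGNITIVE_PROMPTS = [
    "Before reading my answer: what did you expect it to say?",
    "Which part of this explanation are you least confident about?",
    "How could you check whether my answer is actually correct?",
]

def hybrid_turn(question: str, turn_index: int) -> str:
    """Answer the question, then append a rotating reflection prompt."""
    answer = complete(f"Answer this study question clearly:\n{question}")
    # Rotating through the prompts exercises monitoring and evaluation
    # alongside content acquisition, rather than only delivering answers.
    reflection = METACOGNITIVE_PROMPTS[turn_index % len(METACOGNITIVE_PROMPTS)]
    return f"{answer}\n\nReflection: {reflection}"
```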

Creative Uses of Artificial Intelligence in Education

Beyond conventional uses of AI as a tool for generating text, images, or other content, AI emerges as a dynamic force not only for co-creating knowledge in participatory settings, but also for reshaping human practices. This transcendent use of AI for good (#AI4good) involves a paradigm shift in which AI tools engage in a shared, collaborative process with human agents, contributing to the conceptualisation and development of critical knowledge (Septiani et al., 2023). At this pinnacle, AI is seamlessly integrated into the creation of transformative knowledge, fostering agency and catalysing a profound evolution in human practices (Romero, 2023). It is here that humans and AI cohabit, playing to each other’s strengths, each lending the qualities that make it unique, and together becoming true collaborators (Wu et al., 2021).

Figure 12.1, showing the six levels of the #PPAI6 model, presents a way of differentiating between the types of creative engagement in human–AI activities (Romero, 2023). The model provides a continuum that starts with passive consumption and evolves into more active and participatory forms of engagement, emphasising the diverse ways in which individuals and groups can collaborate with AI in the learning process; a minimal schematic encoding of the levels is sketched after the list below.

Fig. 12.1 Six levels of creative engagement in human–AI in education: (1) passive consumer, (2) interactive consumer, (3) individual content creation, (4) collaborative content creation, (5) participatory knowledge co-creation, and (6) expansive learning supported by AI

  • Level 1. Passive consumer: The learner consumes AI-generated content without understanding how it works.

  • Level 2. Interactive consumer: The learner interacts with AI-generated content, and the AI system adapts to the learner’s actions.

  • Level 3. Individual content creation: The learner creates new content using AI tools.

  • Level 4. Collaborative content creation: A team creates new content using AI tools.

  • Level 5. Participatory knowledge co-creation: A team creates content using AI tools in collaboration with stakeholders around a complex problem.

  • Level 6. Expansive learning supported by AI: In formative interventions supported by AI, participants’ agency may expand or transform problematic situations. AI tools can be used to help identify contradictions in complex problems and help generate concepts or artefacts to regulate conflicting stimuli and foster collective agency and action. AI tools can be used to assist in the modelling of activity systems as well as in the simulation of new actions, facilitating the expansive visualisation process.
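For readers who prefer a schematic view, the six levels above can be encoded as a simple ordered type, for example when tagging observed classroom activities in a study. This encoding is our illustrative assumption; the model itself (Romero, 2023) is defined in prose, not code.

```python
from enum import IntEnum

class PPAI6(IntEnum):
    """The six #PPAI6 levels of creative engagement (Romero, 2023),
    ordered from passive consumption to expansive learning."""
    PASSIVE_CONSUMER = 1
    INTERACTIVE_CONSUMER = 2
    INDIVIDUAL_CONTENT_CREATION = 3
    COLLABORATIVE_CONTENT_CREATION = 4
    PARTICIPATORY_KNOWLEDGE_CO_CREATION = 5
    EXPANSIVE_LEARNING = 6

def supports_agentic_use(level: PPAI6) -> bool:
    """Levels 5 and 6 are the ones the text links to agentic uses of AI."""
    return level >= PPAI6.PARTICIPATORY_KNOWLEDGE_CO_CREATION

assert not supports_agentic_use(PPAI6.INTERACTIVE_CONSUMER)
assert supports_agentic_use(PPAI6.EXPANSIVE_LEARNING)
```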

In order to support the agentic use of AI, levels 5 and 6 of the model are designed to contribute to learners’ acculturation to AI and its fundamental principles. This strategic approach aims to empower learners, enabling their active engagement in participatory activities where hybrid intelligence flourishes through the synergistic collaboration of human–AI systems. These higher levels also aim to raise student and teacher awareness of AI through the lens of agentic and creative engagement.

In order to support agentic and creative engagement and establish new paradigms of human–AI co-creativity, more work will need to be done to address the challenges, latent or otherwise, to its acceptance. Foremost among these are displacement concerns, human biases towards AI-created work, and the difficulty of assigning value to AI co-creations given the subjective value assigned to creative works. Understanding creativity requires a nuanced consideration of specific cultural and human sensitivities, both in terms of their manifestation and their evaluation. Consequently, as we contemplate human–AI co-creativity, it becomes imperative to explore the role of culture in shaping and influencing this collaborative process, understanding that it will need to navigate the same subjective, norm-aligned evaluation and value-assignment processes as other creative projects. Likewise, Magni et al. (2023) highlight the phenomenon of human gatekeeping, where some forms of AI-produced work are assigned a lower value based on the perception of effort, or lack thereof. While there is research showing that anthropomorphic AI agents can moderate this producer-identity effect on the creative evaluation process (Glikson & Woolley, 2020; Israfilzade, 2023; Magni et al., 2023), further solutions will need to be found that counter the tendency for humans to ascribe value to effort or innate ability. Finally, there is a growing fear of displacement as humans wrestle with AI’s impact on future workforce and professional opportunities (Thomson & Thomas, 2023; Tiwari, 2023).

At its pinnacle, the transformative potential of AI can transcend its role from knowledge co-creation to actively transforming human practices (Romero, 2023). At the highest level of the #PPAI6 model of creative engagement, AI is integrated into the creation of critical knowledge, fostering agency and reshaping human practices. The collaborative process between generative AI tools and human agents becomes a shared endeavour to develop agency and enact transformative changes. This aligns seamlessly with the goal of expansive learning, in which formative interventions contribute to expanding or transforming problematic situations. AI tools, in this context, play a vital role in identifying contradictions, generating concepts, regulating conflicting stimuli, and ultimately fostering collective agency and action.

Inclusivity and Diversity in Artificial Intelligence

The challenges in ethics and inclusivity arise from the high complexity and diversity of cognitive technologies, including human–AI applications in education. The AI Act developed at the European level is a clear example of the complexity of creating consensus on AI principles that ensure citizens’ rights while simultaneously supporting innovation. Attempts to align individual issues with stakeholders are hindered by the many tensions between stakeholder objectives. Interventions in one part of the AI ecosystem (e.g. the need for learners’ privacy) can have consequences in other parts (e.g. the use of facial recognition to identify learners’ engagement). These tensions require a participatory approach to the design of AI systems developed for educational purposes (Holmes et al., 2021).

Stahl (2021) proposes three main requirements for interventions meant to improve the ethical design and integration of AI. Firstly, interventions need to clearly delineate the boundaries of the ecosystem (e.g. the educational actors engaged) not only in relation to the actors but also to the geographical scope and the topics addressed. Secondly, interventions should focus on knowledge development, support, maintenance, and dissemination within AI ecosystems. In education, this raises the need for the different educational actors to develop an understanding of AI fundamentals and of the way human–AI collaboration can support teaching and learning processes. Lastly, interventions need to be adaptive and flexible to emerging needs. In relation to inclusivity, there is a need to support pre-service and in-service teachers in their acculturation to the fundamentals of AI. This support will aid their decision-making regarding the integration, or not, of AI technologies and allow them to consider the agentic and creative engagement learners can develop using these tools.

Advancing Towards a More Human-Centred Education in the Age of AI

The challenges and opportunities presented in this manifesto highlight the need to develop critical thinking as well as an acculturation to AI among each of the varied educational stakeholders. In particular, developing critical thinking skills has become essential given the escalating challenges posed by the integration of generative AI in both educational and non-educational contexts. These include, but are not limited to, the generation of fake information, unethical AI-generated content, and impersonations such as deepfakes. The proliferation of misinformation across digital formats has the potential to manipulate public opinion, incite conflicts on various grounds (e.g. racial, religious), and exacerbate existing inequalities and stereotypes such as gender disparities (Vartiainen et al., 2023). While students should be taught the skills necessary to recognise and dismiss fake information, there is also a need to regulate the types of AI content that could compromise students’ privacy and integrity.

It is not just generative AI that impacts our students’ everyday lives. AI is ubiquitous and pervasive (social media and mobile phones are examples), and often coupled with massive-scale data collection. This has given rise to a plethora of complex challenges and ethical dilemmas, including uneven power relationships, privacy rights violations, total surveillance, hybrid influencing, behaviour engineering, and algorithmic biases (Page et al., 2022). Kahila et al. (submitted) acknowledge how computational processes are changing cultural practices and decision-making by individuals, organisations, and institutions, and suggest the development of data agency as a response to these societal challenges.

Educators face the crucial task of equipping citizens with the skills necessary to navigate a society permeated by AI systems and tools. In this context, fostering acculturation to AI from an early age, as part of an overarching digital literacy framework that includes critical thinking, emerges as a pivotal strategy. To this end, the Digital Competence Framework for Citizens (DigComp 2.2) (Vuorikari et al., 2022) offers a comprehensive guide, encompassing elements tailored to interacting with AI systems. To bolster citizens’ AI literacy, educators can leverage this framework as a foundation for planning and developing curricula and course materials. Furthermore, by integrating these competencies into educational practices, educators contribute to the cultivation of a digitally competent citizenry capable of discerning, evaluating, and navigating the intricate landscape of information in the age of AI.

Building on the DigComp 2.2 framework, educators will also need to develop age-appropriate tools and materials to ensure that students at every level have access to fundamental AI concepts. Such AI competency frameworks are needed to navigate the evolving landscape of education in the age of AI and to support a human-centred approach to education as a major pillar of our educational systems. This manifesto calls for a re-evaluation of the current contradictions, emphasising the need to empower teachers, to ethically regulate the use of transformative technologies, and to uphold the rights of both educators and learners. By doing so, we can forge a path towards an educational future where technology complements and enhances the human experience rather than overshadowing it.