Introduction

As artificial intelligence (AI) is integrated into education, it is imperative to move beyond the perspectives of researchers and policymakers and include the perspective of middle schoolers. Doing so promotes inclusivity, counters potential biases, and ensures that learners have a voice in the design and integration of AI in education. In this chapter, we first review Higher Education students’ perspectives on the use of AI in education before presenting a study of middle school students who participated in a semester of activities designed to develop their AI literacy and their capacity for creative and transformative uses of AI.

Higher Education Students’ Perspective on the Use of AI in Education

In Higher Education (HE), the study by Meade et al. (2023) examined student opinions on the use of generative artificial intelligence, particularly applications such as ChatGPT. The results revealed that over 60% of students had a basic understanding of AI tools. The study also highlighted a number of ethical and developmental concerns, including standardisation, decolonisation, the reinforcement of biases, deskilling, and the potential for impeded skill development due to an over-reliance on technology. Although the issue of academic integrity was raised, several students pointed to their use of ChatGPT as a research assistant, highlighting its function in structuring ideas rather than producing content. The resulting student recommendations called for an increased focus on AI acculturation efforts and the implementation of alternative assessment strategies that prioritise the development of critical thinking skills. Suggestions were also made to increase student–faculty dialogue around the rules of AI use in HE environments and to provide regular updates regarding AI’s rapidly evolving capabilities.

In a study of more than 6,300 HE students across Germany, von Garrel and Mayer (2023) observed that two-thirds were using generative AI tools (e.g., ChatGPT or GPT-4), with STEM disciplines showing a higher rate of adoption, possibly due to their existing affinity for technology. In all domains except art, art sciences, and sports, clarifying questions and explaining subject-specific concepts were the most common uses of AI in students’ studies. In the Social Sciences, students used AI primarily for studying literature (30.3%), translation (28.6%), and text creation (25.4%). Engineering students, by contrast, used AI for research (32%), translation (30.7%), and problem-solving and decision-making (30.3%).

In Idroes et al. (2023), undergraduate students in Romania identified a range of significant benefits and drawbacks of AI use in HE. Virtual assistants, with their ability to support teachers during lessons and provide prompt responses to student questions, were acknowledged as the primary benefit of AI use (42.9%), although improved time management, enhanced interactivity, lesson personalisation, and increased engagement were also mentioned. A notable portion (52.7%) of students also highlighted universal access to AI tools and inclusivity, particularly for students with special needs, as an important benefit. When specifically surveyed about AI use in the assessment process, a significant proportion of the participants (49.5%) identified continuous, timely feedback from virtual AI assistants as a major benefit. Students also highlighted the benefits of automated exam grading and the resulting decrease in grading errors. However, students also expressed concerns regarding the drawbacks associated with the integration of AI in HE. The primary concern, indicated by a significant portion (37.4%) of the Romanian undergraduates in Idroes et al. (2023), involved the impact AI could have on interpersonal connections and how that might affect the overall quality of education. Additional concerns encompassed potential internet addiction, reduced student–teacher interactions, and the peril of information loss resulting from system malfunctions. In Canada, online students participating in the study by Seo et al. (2021) identified the benefits of personalised learner–instructor interactions, but also noted that AI had the potential to diminish interpersonal relationships by reducing the number of human-to-human interactions. In the United Arab Emirates (UAE), a study by Farhi et al. (2023) on the impact of ChatGPT usage highlighted its popularity among students as an assistant, but also reinforced the potential for its unethical use and excessive dependency.

Middle Schoolers’ Perspectives on AI

In its Digital Education Outlook, the Organisation for Economic Co-operation and Development (OECD) provides an overview of its members’ efforts to integrate generative AI into educational contexts and offers recommendations for its use going forward (OECD, 2023). As such, middle school students are exposed to AI through a variety of educational initiatives. Interestingly, even when these students are not offered access to AI acculturation programmes, they continue to use AI tools outside of the school environment in a variety of educational contexts (e.g. completing homework). Within this context, a number of studies have focused on how middle school students perceive AI and its uses. For example, in Marrone et al. (2022), middle schoolers, when questioned about the relationship between AI and human creativity, voiced doubt about AI’s ability to replicate the human creative process. While these students believed that human creativity remained distinct and irreplaceable, they acknowledged that future technical breakthroughs might enable AI to approximate human levels of creativity.

The Life Bloom Academy

In this section, we outline an interdisciplinary project where teachers from the Life Bloom Academy, a middle school located in Cagnes-sur-Mer within the Alpes-Maritimes region of France, collaborated on a semester-long research intervention programme to acculturate their students to AI. A particular emphasis was placed on developing students’ critical thinking, while they considered the ethical questions surrounding AI and its use in educational settings.

Procedure

Prior to their visit to the Maison de l'Intelligence Artificielle (MIA), students participated in preparatory sessions in history, geography, and moral and civic education classes. During these sessions, students considered the question, ‘What is intelligence?’ and debated what constitutes its different forms. Equipped with these insights, students visited the MIA, taking part in STEAM-based activities and interactive demonstrations led by MIA staff. These activities served as prerequisites for the immersive AI activities students later undertook back at the Life Bloom Academy.

Building on these novel experiences, and having enriched their knowledge through mathematics, science, and technology lessons, the students took part in a second debate on the challenges of AI in history, geography, and moral and civic education. This debate focused on specific questions aimed at enabling students to apply their new learning about AI. Students, working individually or in pairs, answered the following questions: ‘What are the positives and negatives of different AI uses in education?’ and ‘What advice would you give to AI developers?’ Student responses were aggregated, revealing a number of AI use cases and their potential impact on daily life.

Middle School Students’ Perspectives on AI in Education

The following section presents five perspectives on AI, aggregated from student responses collected by their teachers, representing one of the outcomes of the academy’s interdisciplinary project. Students were asked to summarise how AI might impact their lives before considering the potential risks and ethical concerns around its widespread use. Two important reflections arose from these student perspectives: the idea that AI’s intelligence derives from a combination of pattern recognition and the processing of vast amounts of data and, as such, requires programming to function; and students’ apprehension regarding the possible threats posed by AI to their freedom, free will, and future professional opportunities.

Students’ Perception of the Nature of AI

Upon completion of the acculturation activities, the students revealed two primary themes regarding AI and its use cases. On one hand, they recognised the value of its ability to automate tasks, citing examples such as automated check-out machines in supermarkets and the autopilot function available in Tesla automobiles. On the other hand, students acknowledged the importance of human involvement in the training and programming of AI models, particularly when those models are expected to interact with humans, such as in smart home devices. These examples underscored the students’ awareness of AI’s transformative impact on various aspects of daily life. One participant stressed that ‘AI makes almost everything easier and automated these days, many things are possible. For example, more and more Tesla cars are equipped with autopilots. In addition, more and more professions are being replaced by AIs, such as cashiers in supermarkets.’ Another anecdote, involving the voice-activated assistant Alexa, illustrated the idea that AI, while capable of evolving and learning independently, is still dependent on human input and guidance: ‘AI could help people in their day-to-day work. It is a lot of work to program them, because yes, AI is nothing without humans. For example, Alexa, who answers our questions orally and with whom we can have conversations, is an AI and it is developing. I said “goodbye” to her when I was leaving and she said she didn’t understand. So, I asked her to say “goodbye” to the people who said it to her. Now she'll give me sweet little expressions to say “goodbye” to me! She can develop on her own, which makes her an AI.’ This dual awareness reflects a nuanced understanding among middle schoolers, recognising both the potential benefits of AI automation and the importance of human intervention in its training and development.

Students’ Concerns About Privacy and Social Control in the Era of AI

Students also raised concerns regarding the integration of AI into our lives and its impact on users’ privacy and security. In their anecdotes, students pointed to AI’s ability to personalise the user experience (e.g. by tailoring recommendations and advertisements), but also acknowledged the ethical implications of widespread data collection and usage. ‘AI can guide choices through our personal data. It offers us objects, services, and goods that correspond to our tastes. When we search and browse websites, an AI collects this data and resells it. This is why the advertisements we see are often personalised, but does this really respect our privacy? A lot of people are not comfortable with all their data being sold and, even if it is unlikely, this data can be hacked and used for blackmail.’ Students also expressed their fear of AI’s misuse given the potential for data breaches, hacking, and even the endangerment of lives in extreme cases such as hospital ransomware attacks: ‘There have even been hospitals hacked for ransom, putting the lives of others at risk! All this allows us to see that the data used by AI jeopardises our security and our freedom.’ These apprehensions, sharpened by concerns over hacking incidents in critical settings like hospitals, underscore the students’ understanding that the use of data by AI poses significant risks to both personal security and individual freedom.

Yet students’ concerns regarding AI’s use relate not only to privacy but also to social control and democracy, and they underscored the urgent need for ethical considerations and robust safeguards in the widespread integration of AI technologies. ‘As AI becomes more and more integrated into our lives, new risks associated with its use are emerging. Its use makes us think about the notions of privacy and security, that is to say the increased risks of hacking our data on the web. Especially since, if AI is used for bad purposes, the risks would be much greater. The new risks that must therefore be taken into account are the risks to the security of our data, our confidentiality and the malicious uses of AI, such as for the establishment of a neo-totalitarian regime.’ Another student added, ‘AI is not human and it can meet needs. But it can impact our lives and our freedom. For example, browsing cookies or recommendations follow us in our internet searches. When we go to a site, we accept cookies and the computer takes into account our taste for this site and can offer us similar ones. It reduces our freedom because we feel observed and it prevents us from making our own choices. We are offered things, so we are influenced. Soon, in our daily lives, AI will help us with repetitive and household tasks. But we should not completely depend on it.’ These perspectives, while showing a good understanding of the technology, also highlight the students’ capacity to perceive its inherent risks and the ways in which it might be abused or used unethically.

Students’ Perception of AI in the Service of Sustainable Development

While some students raised concerns regarding AI’s use, others shared an optimistic view of AI as a potential solution to pressing global issues, particularly in the realms of ecology and sustainable development. The envisioned use cases ranged from optimising agricultural practices by anticipating diseases and managing water resources to deploying specialised robots for cleaning the oceans. ‘AI may seem like a solution to address ecological and sustainable development issues. It can help us create sustainable innovations, manage energy, organise depollution and recycling efforts, or resolve tense situations. Thanks to AI, it would be possible to optimise soil management and the yield of agricultural land by, for example, anticipating the appearance of diseases, optimising water use or adjusting production to demand. The robot might just look like a little chip that could scan the land to optimise for water, etc. To clean the oceans, we could make two types of robots, one that stays on the surface of the water to pick up the waste that is present there; it would have the shape of a small boat and it would be formed of a large pocket to collect waste. The second type of robot could go underwater to pick up trash. It would have the shape of a large fish and, so that it would not scare marine animals, it would have a very large pocket to store waste.’

For these students, the potential uses of AI for sustainable development extend to addressing the challenges of climate change, from altering consumption habits to preventing food waste through innovations like smart refrigerators. They see AI as a transformative force that, if ethically and sustainably implemented, could contribute significantly to overcoming critical environmental and resource-related challenges globally. ‘With the challenges of climate change, natural resources are dwindling and the risk of famine and food shortages is intensifying. A change in our habits is then necessary, and AI can help us in this change and in controlling our food consumption in order to preserve our resources. For example, we could create an AI that would prevent food waste with a smart refrigerator that would prevent us from wasting food. We could automate and improve the production of greenhouses thanks to AI, which would analyse the temperature and humidity. All this to say that AI can help us solve our global problems.’ The students thus acknowledged the critical role AI can play in addressing climate change and resource scarcity, emphasising that a shift in human habits is necessary and that AI can be a powerful ally in fostering this change. The proposed solutions, such as AI-managed smart refrigerators to prevent food waste and the automation of greenhouse production through AI analysis of environmental factors, highlight the technology's potential to enhance sustainability in food systems. Septiani et al. (2023) consider this type of use case a form of agentic and transformative creativity made possible by AI.

Students’ Perception of the Potential of AI in Healthcare

Students have high expectations regarding the use of AI in the healthcare sector, particularly in the areas of care and treatment. The notion that AI could assist, or even replace, human professionals in certain medical tasks underscores the technology's perceived capacity to enhance efficiency and precision in healthcare delivery. ‘Another area where AI could be useful is in healthcare and more specifically in care and treatment. Indeed, AI could help, or even replace, professionals in their work by performing tasks or analyses in the medical field. AI could, for example, take care of delicate operations, without the intervention of personnel. One can imagine that it would be useful in the event of an accident. We could also use AI to optimise the treatment of diseases. If AI is able to detect and recognise them, it would be possible to make precise diagnoses, whether to prevent the first symptoms or to treat them at a more advanced stage, while monitoring the evolution and therefore adapting treatments. The advantage of this system would be to have a large database that would replace the limited experience of health(care) professionals.’ Additionally, students shared their concerns regarding the security risks of AI systems in healthcare. ‘If a robot containing AI is hacked, it would be dangerous. For example, if a medical robot is hacked, it could cause deaths. The same problem applies for the use of AI in the army. If there are robots in a war, they will not be afraid to die and therefore they can do much more damage and death.’ Moreover, students raised the risk of weakening human capacities. ‘The other risk is the weakening of human capacities: if everything is done by AI and it no longer works, then what are we going to do, because we will no longer be able to do everyday things?’ The students shared their concerns regarding the consequences of excessive dependence on technology and the dangers of developing an over-reliance on AI. In doing so, they stressed the need to maintain and nurture human competencies in the event of unforeseen challenges or failures in these systems.

Students’ Expectations of AI in the Service of Education

The students were also able to identify different use cases for AI in support of educational administrative tasks. ‘The AI would be able to help teachers with administrative tasks. It would be able to do all the administrative things in relation to the director, helping him with accounting or sending emails. It would also be very useful to the teacher: for the roll call, the preparation of exercises and lessons, or the evaluation of learning.’ The students also identified the potential for AI to personalise the learning experience by providing real-time scaffolding, lesson difficulty scaling, and ongoing, persistent feedback.

Discussion

In this chapter, we have reviewed different studies on student acculturation to AI, focusing primarily on middle school and Higher Education students. We observe that a majority of Higher Education students use generative AI tools, such as ChatGPT, even though their understanding of how AI works varies widely. For its part, Higher Education continues to grapple with deskilling and dependency as potential impacts of AI use. Middle schoolers are also using generative AI and reflecting on its ability to reproduce human creativity.

In order to better understand student perspectives on AI usage in educational environments, the Life Bloom Academy developed an interdisciplinary action-research activity for middle school students exploring the impact of AI on critical thinking and ethics. The project helped students comprehend AI and its ramifications, emphasising both its potential and its ethical problems. Reflections collected from the project showed that students have high expectations of AI and its different use cases. At the same time, they are also aware of the potential opportunity costs associated with AI use, such as deskilling due to over-reliance and the potential for data theft and misuse. The work of the interdisciplinary team, based on five pedagogical inputs, allowed students to express themselves autonomously about their citizenship in the age of AI. Importantly, the experiences shared by these middle school students demonstrate their ability to critically analyse the potential and risks associated with AI use, but the depth and accuracy of their analysis depends on their understanding of AI fundamentals. Therefore, acculturation to AI should be viewed as a basic requirement for all students, as a way of developing their citizenship and agency in the age of AI.