Introduction

There seems to be widespread agreement on the need to introduce AI to students much earlier than college. The articles in this special issue provide a wide range of compelling rationales for doing so: AI has become woven into our daily lives, our data is routinely collected and analyzed, AI is used to filter the news and information we see, there is a shortage of AI specialists in industry and government, and there is no indication that the impact of AI on our lives will fade anytime soon. There are many good reasons to increase opportunities for children and adolescents to learn about AI and to best prepare them for the world that awaits them. The articles in this special issue lay important groundwork for how to go about providing these much-needed experiences.

Until the last decade, though, AI remained a specialization in computer science largely limited to upper-level undergraduate and graduate courses. So, what are the key challenges to teaching AI prior to college? What should we teach and how? As a way to frame my commentary, I first review a few conceptual challenges that, in my view, should be addressed by AI Education researchers and curriculum designers. I then provide reactions to the curriculum proposals, studies, and frameworks presented by the authors in the special issue, and conclude with suggestions for much-needed research that will inform future work in AI Education.

Unique Challenges to K-12 AI Education

Perhaps the most obvious challenge to teaching AI is that it is an advanced topic in computer science that remains a specialty pursued primarily at the graduate level. Understanding AI systems often requires a sophisticated understanding of concepts from mathematics, statistics, logic, and more. There are certainly many advanced topics in STEM that have a direct influence on society that we are not trying to insert into K-12 curricula to a similar degree (e.g., civil engineering, neuroscience, quantum physics). It is not ridiculous to argue that the list of skills required to “really do AI” is out of reach for most K-12 learners. Related to this challenge is the preparation of K-12 teachers to teach AI - it is unlikely that even CS-trained teachers will be adequately prepared. So what is possible? What does it mean for a K-12 learner to learn and understand AI?

A related point is that AI is an exceptionally young field of study, especially when considered alongside other fields taught at the K-12 level (AI was born in 1956 versus 3000 BC for Mathematics, for example). There may be wisdom in giving AI a little more time to mature and for its deep relationships with statistics, math, and computer science to be better fleshed out. Granted, the field’s rapid growth in the nearly 70 years since John McCarthy gave it a name is unlike the first 70 years of any other field in human history, but nonetheless, convergence on a set of agreed foundational ideas lags behind what we see in mathematics, physics, and other disciplines. I do note, however, that Paul Rosenbloom has argued that computing has indeed reached this threshold (Rosenbloom, 2012), and that Touretzky et al.’s article in this special issue presents a proposal for what these core ideas might be. Two other examples worthy of consideration are modern molecular biology and data science - both fields have emerged and matured rapidly in the last 40–50 years and are being taught at the K-12 level. Touretzky et al. discuss current data science curricula at length, but no articles in the special issue highlight potential shared goals with recent changes to K-12 Biology education (Bybee, 2012).

The final issue I highlight is less a hurdle to overcome and more a potential alternative way to think about preparing K-12 learners for their AI-heavy futures: perhaps K-12 educational goals should be adjusted to more directly provide foundational knowledge that will enable future AI learning. So rather than have units devoted to AI, the focus would be on ensuring that related areas of early learning, such as mathematics, critical thinking, and problem solving, align with what will be needed to do AI at the university level. This approach may be more appealing to school districts that face more requirements than they can meet. It could also suggest that emerging computer science education efforts across the nation should include AI-focused content. Such a change would still require changes to teacher preparation and training to facilitate teachers’ ability to make connections to AI when appropriate and to better motivate the need for learning the basics (e.g., relating ideas from representing knowledge to the decision-making of non-player characters in video games). Of course, not all papers in this special issue address formal education, and so consideration of organized informal learning opportunities for AI seems critically important as well.

Framing the Articles

Admittedly, these are primarily conceptual and strategic challenges unique to AI as a discipline. The many practical hurdles that accompany any effort to create new curricula and materials apply as well. These practical challenges will vary greatly across contexts, states, districts, etc., and will likely defy general solutions. Our role in the AIED community should be to draw simultaneously on our knowledge and experience with AI and on our commitment to conducting rigorous educational research. Of course, any plan to teach AI to K-12 learners should be properly motivated and driven by evidence. When we lack evidence for a decision we must make in designing a curriculum or activity, we should seek that evidence and refine our approach accordingly. Further, given the long history of designing and investigating AI-based learning technologies in AIED, it follows that we will begin to see AIED systems that use AI as the target learning domain - i.e., systems that use AI to assess learning of, and to teach, AI.

Turning to the contributions in this special issue, the articles each make compelling and thoughtful cases for how to teach AI to K-12 children. They cover substantial ground in identifying (1) what learners should know about AI, (2) what they should be able to do with AI, and (3) how they should relate their learning about AI to their own lives. While there are common threads between them, they also each propose very different kinds of approaches to the problem. While considering different ways to provide this commentary, the issues that kept arising for me related to the developmental appropriateness of the content. Of all potential dimensions to consider, age seems the most critical in thinking about the content and structure of the approaches presented. What kids need to know, what they will be able to understand, the depth at which they are able to engage with AI, and what will motivate them are all likely to vary greatly by age. Thus, my commentary begins with two articles that address general issues: the Five Big Ideas of AI (Touretzky et al.), a proposed framework for structuring AI content and activities, followed by teacher education (Tang et al.). After this, I work through the articles from those focusing on younger K-12 learners to those focusing on older ones.

The Five Big Ideas of AI

Recognizing the many gaps and shortcomings from several existing sets of standards (CSTA, NGSS, etc.), Touretzky et al. provide a thorough history and detailed description of their work to define the “Five Big Ideas in AI”. The AI4K12 initiative brings in the perspective of many groups (academia, industry, education, government) and has emerged as the most influential effort to bring clarity to AI Education by providing a structure around what should be taught. I found the historical approach to be helpful in a number of ways, but most importantly to see how AI was largely overlooked in existing (and high profile) work on defining standards for computer science. While their analysis reveals many relevant connections to these standards, especially with the treatment of the Data Science Curriculum Framework (DSCF), overall (for me) the exercise revealed major gaps and oversights in these related efforts.

The AI4K12 initiative’s mission is to provide guidelines for teaching AI at the K-12 level. The Five Big Ideas (perception, representation and reasoning, learning, natural interaction, and societal impact) act as the basis for their approach. My favorite aspect of the approach is that, for each of the five ideas, the team has identified grade band progressions, split up as K-2, 3–5, 6–8, and 9–12. A deep dive into machine learning, Big Idea 3, is given as an example of how to provide grade-band-relevant instruction. Many intriguing ideas are proposed for making complex ideas accessible and understandable. For example, I greatly appreciated the authors’ approach to leveraging metaphors and the intuitive way in which ML has been carved up into more manageable sub-concepts (e.g., training a model, neural networks, reasoning). There is a great deal of potential to use the framework both for guiding instruction and for more basic research on validating the developmental appropriateness of K-12 AI learning.

The Five Big Ideas in AI is certainly an intuitive and appealing proposal for how to organize AI instruction. Indeed, its influence on other articles in the special issue, and on the field more broadly, has been significant. The article concludes with a useful list of example activities and resources that educators could use, as well as a discussion of implementation. Each of these activities could easily be the basis for an investigation and refinement of the framework. Therefore, I wholeheartedly endorse the article’s closing call to conduct testing in classrooms. It would not be difficult, for example, to investigate the level of vocabulary used in lessons across the different grade bands to confirm its appropriateness. This seems critical for advanced topics like neural networks and knowledge representation. The members of the AI4K12 initiative have no doubt considered these issues, and I’m excited to see how the framework evolves as it continues to be adopted, evaluated, and refined.

Teacher Preparation

One of the most prominent challenges facing AI Education is ensuring that educators have sufficient content knowledge and skills to teach AI confidently and effectively. This is a relatively urgent challenge given the current lack of institutional AI learning opportunities for teachers. Tang et al. address this challenge directly by presenting ML4STEM, a professional development program aimed at helping teachers of all grade levels gain knowledge about machine learning, along with teaching strategies that seek to foster positive attitudes about AI. Their approach leverages an existing ML learning tool (SmileyDiscovery) that has been shown to be accessible to learners with limited background in mathematics. ML4STEM has been carefully designed around the widely used TPACK PD framework to include content knowledge (CK), pedagogical knowledge (PK), and technological knowledge (TK). This is important because, as demand continues to grow for AI Education, teacher training programs will need to respond and will likely seek out programs such as ML4STEM.

Tang et al.’s approach is innovative in a number of ways. One feature with great promise is the clear labeling of teacher roles to align with the objectives of the program. In the case of ML4STEM, teachers spend time in two roles: Teachers as Learners and Teachers as Designers. This framing structures the two 75-minute sessions that comprise the program and helps set expectations for the teachers involved. For example, as learners, participants gain basic knowledge and skills about ML, and as designers, they gain experience creating AI lessons. An evaluation along the dimensions of the TPACK-based approach yielded very positive results and stands as initial evidence that we can bring teachers up to a sufficient level of expertise to teach ML, and improve their interest in doing so, in a relatively short amount of time. It would be interesting to conduct more detailed interviews with teachers in the study who gained or lost interest in the subject to find out why.

The ML4STEM study included 18 teachers spread across grade levels. Although the results were encouraging, the study does not address whether the same PD is suitable for teachers of different grade levels. It is not at all clear, for example, that SmileyDiscovery as currently designed is something elementary-aged children could use. In future iterations of the program, it may be worth incorporating different tools to match the age-level interests of participants, or simply creating different versions of the program targeted at different age levels. Certainly, pedagogical content knowledge varies by age level as well. The authors acknowledge the small scale of their study and the early-stage nature of the work, but even so, it was not fully clear why the program is limited to ML when more foundational aspects of AI could easily have been chosen (for example, from the Five Big Ideas). Nonetheless, ML4STEM is an excellent example of how to cleanly design AI Education professional development, and it provides evidence that it is possible to offer meaningful learning opportunities to teachers in a short amount of time.

Elementary AI Education

With a focus on learners in grades 4–5, Ottenbreit-Leftwich et al. present the only paper in the special issue focused on elementary-aged learners. Importantly, the article takes the approach that we should first find out what kids and teachers know about AI before making too many choices about how best to help them learn. This foregrounds the difficulty of accounting for younger learners’ incoming knowledge and understanding of AI, a challenge that will certainly persist for all AI Education research moving forward.

Combining the perspectives of students and teachers on their everyday experiences with AI, the article seeks to uncover entry points for AI education. The authors report a number of themes that may serve as effective pathways for introducing AI to children. For me, one of the most important findings from the interviews was that children had limited views of the difference between programmed behavior and behavior resulting from reasoning or learning. This is incredibly important because we want to instill the notion that behavior that appears intelligent does not always imply that the system producing it is intelligent. Rather than assuming intelligence drove some behavior, a child should be skeptical and want to understand how some intelligent (or not) system is able to perform the observed task. The authors accurately frame this as a conceptual change issue, suggesting that we should work to create reliable tools to measure the depth to which learners understand the difference between direct programming and AI-produced behavior. As the authors put it, “we need to provide them with opportunities to open the black box of AI.”

The remainder of the Ottenbreit-Leftwich et al. article is a treasure trove of useful insights into how 4th and 5th graders, and their teachers, conceive of AI. The influence of everyday experiences turned out to be even more profound than I imagined. This presents, in my opinion, significant challenges for AI Education research at all K-12 levels. Not only do everyday experiences set up critical conversations about ethics and privacy, but they also imply that instruction may need to be fine-tuned to address fundamental misconceptions about AI that children bring with them to class.

Middle School AI Education

Middle school is quite possibly the sweet spot for introducing AI to children. It is a time when they are already beginning to explore more advanced topics in STEM, such as algebra and biology. But more broadly, it is a critical time to provide positive and engaging STEM learning experiences since it is established as a period when identities begin to take hold and STEM interest can be fragile (Maltese & Tai, 2011). Two articles in the special issue focus on middle school level AI education with both prominently featuring ethical considerations of AI and (interestingly) being delivered in a workshop/informal format. In addition, both articles report empirical results based on emerging instruments for AI knowledge, attitudes, and more, and report findings on incoming beliefs and knowledge about AI.

Williams et al. take a unique approach by emphasizing AI for creativity and dance. This positions the curriculum content to better meet the relevance needs and expectations of learners. It further enables a more direct application of active learning, one of their driving principles (the other two being embedded ethics and reduction of barriers). Impressively, when the pandemic forced the program online, which presented obvious challenges to the active components of their plans, the team was still able to achieve positive results. A finding that stands out for the authors (and for me) is that only 17% of the participants felt they were “smarter” than AI, and only a third thought they could “control” AI. While there are many potential reasons for these self-reported beliefs, they are somewhat shocking and stand as further evidence that early AI education is needed (and soon!). These beliefs may be related to misconceptions derived from representations of AI in popular movies or video games.

An additional unique feature of the Williams et al. article is their detailed analysis of student projects. It is genuinely impressive that middle school learners were able to achieve what they did (e.g., a smile detection program). Detailed analyses provided by the authors offer interesting glimpses into the nature of these learning experiences. One consistent issue arising was the dependency between coding skills and project work. For me, this suggests that integration of AI content should be carefully balanced with introductory coding skills - standalone curricula may be problematic in some cases, especially in schools that have not prioritized CS education. The projects also highlight excellent examples of the importance of providing opportunities for learning with personally relevant content. It was nice to see students finding those connections and building on them in their projects.

Shifting to Zhang et al., this article was motivated by prior curricula that, according to the authors, over-emphasized AI knowledge and skills. They present a tightly integrated, ethics-first plan, also for learners in middle school. The program implements the well-known Developing AI Literacy (DAILy) curriculum, which brings together technical concepts and skills, ethical and societal issues, and career futures in AI. As advertised, DAILy expertly weaves ethical and societal considerations throughout its coverage. For the 19 students in the study, it appears to have been effective: all but one participant expressed a more nuanced view of AI and its implications. Concerns about AI expressed in post-interviews struck me as nuanced and addressed very important issues like worker displacement, bias in classification systems, and the spread of misleading deepfake video clips. The beauty of these findings is that even if most of the children in the program do not choose AI careers, it is impossible to argue with the broad societal value of this kind of knowledge.

My only quibble with Zhang et al. lies in the use of “AI or not” as a teaching tool (i.e., learners judge whether the technologies in everyday life require AI). I very much appreciate the activity’s outcome of helping learners realize that AI is not required for everything and that it is often used without our knowledge. However, it is not at all clear that even seasoned AI researchers always agree on what “counts” as AI. A slightly different angle would be to present a range of examples and ask where the AI is, then map those observations back to features of intelligence identified through group discussion. This would further reinforce the mapping between AI and human intelligence in a way that is relevant to all learners.

High School AI Education

Two papers in the special issue are devoted to high school level AI Education. Until more consistent CS and AI Education emerges at the K-8 levels, focusing on this age group will continue to bring unique challenges. Given varying levels of incoming mathematical, CS, AI, and ethics sophistication, any curriculum proposal should emphasize flexibility and provide numerous entry points for learners. The two papers offer very different perspectives on the challenge and unique ways of approaching AI learning.

Bellas et al. present the results of a European effort (Erasmus+) to define an AI curriculum for high school students and teachers who have no previous knowledge of AI. The authors provide an extensive review of related curriculum development efforts that is critical to consider when designing an AI curriculum. They outline a 2-year sequence for learning about AI. It is carefully designed and organized around a slight variation on the Five Big Ideas (Bellas et al. offer an expanded version with 8 key ideas). The curriculum is also practical in its use of teaching units that can be implemented as needed. In addition, learning activities draw primarily on mobile learning as a way to keep costs manageable. Two key features of the design worth highlighting are (1) its attention to math prerequisites for different components of the curriculum, and (2) its integration with helping students learn programming in Python. In other words, the curriculum allows learners to learn Python and AI at the same time, thus offering a model for combining AI and CS education. Given the 2-year time frame, this is certainly a feasible goal to set.

Curriculum evaluation is notoriously challenging, especially for curricula that cover an extended period of time. To provide an initial evaluation, the authors summarize feedback from teachers at their partner schools and responses to technical questionnaires sent to students who participated in a portion of the curriculum. Together, the results seem encouraging given the critical role teachers play in the overall success of implementing new curricula. The authors provide several examples of how the feedback mapped to changes in the curriculum. Of all the feedback, the point that stood out most for me was that existing AI textbooks were not a fit for the needs of the teachers of the curriculum. This suggests the need for a more focused effort to create AI learning materials designed specifically for K-12 teachers and students. Although the student questionnaires were somewhat limited (2–3 items per question), the responses do suggest (again) that K-12 learners have the capacity to learn advanced concepts in AI. As a whole, Bellas et al. provide the most expansive K-12 AI curriculum that I have seen. To what extent the Erasmus+ curriculum can be used in practice, and whether the specific teaching units are effective, will only become clear with time and research.

The other paper focused on high school learners comes at AI Education from a very different perspective and is the only paper in the special issue not focused on curriculum design or development. Instead, Leitner et al. describe a game (called ARIN-561) for teaching a core set of AI techniques: Search, Bayesian Networks, Decision Trees, Clustering, and Linear Regression. The award-winning game is highly innovative, driven by input from high school learners, visually appealing, and designed around a compelling narrative (escape from an alien planet). AI concepts are cleverly woven into the game mechanics. For example, in searching for their companion robot, learners are able to apply a breadth-first search algorithm and visualize its execution in-game. Given the heavy (perhaps over-) emphasis on machine learning in the articles in this special issue, it was good to see coverage of what is traditionally an early topic in introductory AI courses. Subsequent activities expand the learner’s toolkit to include depth-first and greedy search algorithms, thus providing a basis for algorithm comparison and the knowledge to select the proper algorithm in future tasks (arguably one of the most important basic skills for any AI developer or researcher).

Another strength of the paper is in how the authors have carved up the different roles and relationships people can have with AI. Specifically, they use AI Consumer to refer to someone who simply uses AI-based technologies, AI Operator to refer to one who applies AI techniques to solve problems, and AI Developer for people who implement and improve AI algorithms. I believe these distinctions have promise to act as an organizing tool for K-12 learners to think about career possibilities in CS and AI (they could easily be generalized for CS). The version of the game presented in the article prioritizes the AI consumer and operator roles, so it will be exciting and interesting to see how the team introduces more developer-style interactions. The game does allow the player to adjust algorithms and so there is promise to expand this functionality in future versions of the game. One popular game mechanic that occurred to me was to allow the player to level up to earn more worker bots; they could then be run in parallel using variations on the algorithms to complete similar tasks. This would set up natural comparisons that could be used to influence future game choices. As the authors acknowledge, the prototype has been pilot tested but larger scale studies are the next step. Although the authors don’t discuss in-game scaffolding extensively, I find it likely that far more support will be needed to ensure that learners stay on productive paths and are able to make progress, especially if the plan is to release the game publicly.

Defining a K-12 AI Education Research Agenda

I was delighted and encouraged to see the influence of the learning sciences on the designs, activities, and content presented in the articles in this special issue. The inclusion of research or learner data in most of the articles was similarly positive. Relatedly, several papers proposed first versions of generalized instruments for measuring AI attitudes and knowledge. If these tools can evolve, be tested across systems and curricula, and be adapted for different grade levels, empirical K-12 AI research could see a much-needed boost. If combining forces with the CS Education movement makes sense (I think it does), tools for measuring AI knowledge could be incorporated into the csedresearch repository of over 135 instruments (covering program evaluation, cognitive, and noncognitive factors related to measuring computational thinking).

Another important step forward in many of the articles was the goal to assess incoming knowledge and attitudes about AI of K-12 students. Ottenbreit-Leftwich et al. describe several studies attempting to capture what K-12 learners know about AI and how they feel about it, which will be critical moving forward to both contextualize AI learning activities and to counteract the potentially negative influences from society and popular media on AI.

In the late 2000s, I was part of a team at the USC Institute for Creative Technologies that built virtual humans for the Boston Museum of Science. Our goal was to improve public understanding of AI technologies by developing an exhibit with two virtual humans who could answer questions about how they worked (the characters were twin sisters named Ada and Grace, after the CS pioneers Ada Lovelace and Grace Hopper). My favorite finding was that “fear of AI” was significantly reduced after interacting with Ada and Grace (Swartout et al., 2010). It was not clear why it was reduced, but two possible explanations were (1) the appealing and jovial nature of the twins, and (2) the availability of a “science behind” area next to the exhibit showing the details of the speech recognition, NLU, and question-answering systems running on terminals. Zooming out from these examples, it is clear that we simply do not know what prior experiences learners may have had with AI (say, from movies like The Terminator) or whether they even realize they use AI systems on a daily basis. Wherever they stand with respect to AI, all AI Education efforts should seek to undo misconceptions and foster healthier, more realistic concerns around AI, such as worker displacement and ethical application.

While these are positive steps, it is time now to define a shared, more basic research agenda to place our emerging frameworks, curricula, and systems on more solid empirical ground. There are well-defined and relevant roadmaps for such an endeavor. For example, similar work in early mathematics education has been going on for decades and could inform and guide such an empirical agenda (National Research Council, 2009). Collaborations with math education researchers and developmental psychologists could be incredibly fruitful. For example, longitudinal studies on AI conceptual understanding and beliefs could inform the ideal progression through topics and identify key stages for reflection on cross-cutting issues such as ethics and bias. AIED’s long history of developing technologies that address precisely these goals is also worth drawing on here.

One promising strand of research in developmental psychology is emerging evidence that children have the capacity to grasp a surprising level of complexity when content is presented to them in ways that are developmentally appropriate (Kelemen et al., 2014). This suggests that some of the seemingly challenging AI concepts that we instinctively may delay might actually be accessible earlier in a child’s life. We therefore have a need to forge deepened collaborations between AI Education researchers and developmental psychologists to advance our understanding of what developmental AI learning looks like empirically. The Five Big Ideas of AI by Touretzky et al. could very well be the best place to start: it would not be difficult to analyze the grade bands for each of the five ideas and identify a series of studies to help elaborate on what is and is not developmentally appropriate. We may find that the band age ranges need to change or that predictive connections between math and CS learning emerge (e.g., algebraic understanding could predict development of representational skills). Such research may also reveal important similarities with developmental research on molecular biology and data science, also complex topics making their way into K-12 curricula.

Conclusion

This special issue captures a wealth of knowledge and reflects a great deal of effort to define a path forward for K-12 AI Education. Each paper makes a strong case for why it is critical to introduce AI to learners at younger ages, so the debate now turns to what should be learned and how to present it in engaging and appropriate ways. The background sections of the articles capture the full breadth of efforts to do this so far and propose numerous key innovations. It was unfortunate that no articles addressed the challenge of introducing AI to K-2 learners, but this may indicate that we, as a community, are not yet convinced it is the right time. Research could help us understand that question better, and perhaps mold the way mathematics and computer science are introduced at that age to promote future learning of AI. From the integration of ethics to the proliferation of online tools for exploring AI concepts, there now seems to be sufficient design and implementation work to begin pursuing an aggressive research agenda to unpack how children learn AI and to better understand the support they need. As such empirical work matures and we come to better understand the learning processes associated with AI, I look forward to the future iterations of K-12 AI Education approaches that emerge.