1 Introduction

The digital transformation continues to progress dynamically. The changes brought by ICT influence how we communicate, work, and gather information in all areas of our daily life. With the technologies of "Artificial Intelligence" (AI), a new stage of this development has been reached. Due to the given technical conditions, a rapidly increasing diffusion of these technologies can be observed, which places considerable demands on the professionalization of teachers in all subjects. On the one hand, overarching fields of action are affected - media education, advisory, and assessment tasks change or become superfluous - on the other hand, a wide range of new opportunities arise for subject-specific teaching, but also challenges with regard to its forms, content, and methods.

With regard to the interdisciplinary cross-sectional task of supporting the design of holistic educational and study programs for teaching in the digitally networked "AI world", the following questions arise: 1. What are the special features of AI systems compared to the conventional informatics systems that have mainly been discussed in the context of digital transformation so far? 2. What does the AI-related area of competencies for teachers encompass? Since teachers combine pedagogical, subject-specific, and digitality-related competencies in their work, general "AI competencies" are not sufficient. Following Sect. 3, in which, among other things, the special features of AI systems are discussed, the teaching-related AI competence areas are therefore characterized deductively in Sect. 4 on the basis of the DPACK model and illustrated with some examples.

2 Related Work

The profound changes brought by the digital transformation have been addressed in school and teacher education in numerous works, and competence frameworks for structured description and exploration as well as for practice orientation have been created, such as the US "Framework for 21st Century Learning" [17]. It was developed by the non-profit organization "Partnership for 21st Century Learning" (P21), which is made up of representatives from industry, education, and the public sector, and addresses required skills in the domains of "learning & innovation", "information, media & technology", and "life & career". Another framework that aims to offer an understanding of what "digital competence" is, is the EU framework "DigComp". It was not developed in the context of teacher education but describes competencies that citizens need to live in a "digital world". It addresses the areas "Information and data literacy", "Communication and collaboration", "Digital content creation", "Safety", and "Problem-solving". The newer version "DigComp 2.2" [4] already includes an AI-related update. The European Framework for the Digital Competence of Educators "DigCompEdu" [3] builds on an older version 2.0 (2016) of the EU digital literacy framework DigComp, which did not yet include an AI update. A widely accepted model for teacher competencies is TPACK [14]. TPACK and DPACK [8], which builds on it (see Sect. 3.1), have origins in Computer Science Education (CSE). In our consideration, we want to take into account not only the user-oriented but also the technological and socio-cultural perspectives according to the "Dagstuhl Declaration" (cf. [5]), in which these perspectives are considered on an equal footing and presented as the three sides of the so-called "Dagstuhl Triangle"; this is explicitly taken into account in DPACK. With its focus on digital competencies and its background in CSE, DPACK is well suited for the interdisciplinary cross-cutting task we want to address, namely to support the design of holistic education and study programs for teaching in the digitally networked "AI world".

In the scientific literature where TPACK or its derivatives appear in the context of "AI", there are several works that use TPACK to describe and explore teaching and learning that has AI as its subject, such as Seonghun Kim et al. [13], Druga et al. [6], and [20]. However, contributions that, like this one, conversely aim to determine the AI competencies required for teaching have been scarce. Under the impression of the digitization push in the context of Covid-19, [25] gives some general indications of what the inclusion of AI in teaching methods and contents as well as in the design of teaching-learning environments could mean, but the socio-cultural and technological perspectives, which we want to address in our approach here, are not further considered there. Celik [2] describes an AI competency model derived from TPACK, which also includes an ethical component. However, he mainly has a specific subset of AI systems in mind, as the work focuses heavily on specific application competencies for AI-based self-learning tools, i.e., tools that provide individualized, adaptive feedback in real time, with ways for the teacher to analyze learning progress, etc. The field of possible applications of AI software is much wider, as shown in [10]. Also, in comparison, "digital competence" is understood in a more holistic way in the DPACK model. Scientific contributions that, like the present one, aim to determine required AI-related competencies for teachers holistically, taking into account the Dagstuhl perspectives, do not seem to exist so far.

Moreover, there is no consensus on a definition of those informatics systems that produce the AI phenomena addressed here (cf. Sect. 3.2). The European Commission's "AI Watch" report [18] provides an informative overview: in the context of a possible political and legal evaluation, 64 AI definitions and provisions from politics, industry, and research are compiled and evaluated there. Other recent articles that describe the basic characteristics of AI, reflect the state of the debate in terms of societal, cultural, or ethical challenges, and present new potential applications in education include [10, 15] (cf. Sect. 3.3). A competency framework with a CSE background for K-12 education, from which possible and necessary AI competencies can be obtained, was presented in [16]. From a CSE perspective, [21] discusses some fundamental shifts regarding "AI Thinking" or "Computational Thinking 2.0" compared to the "Computational Thinking" discussed so far, especially in the context of "Machine Learning" systems. To justify our framework, a provision that we consider appropriate is given in Sect. 3.2 and presented for discussion.

3 Theoretical Background

In this section, the structure and background of TPACK and DPACK are first briefly presented. We then justify the need to consider the "AI-K" domain of AI-related competencies (the "D" of DK is replaced here by "AI") separately within the digitality-related knowledge (DK) domain, by addressing the special characteristics of AI and showing that problems and requirements must be taken into account here that do not occur with conventional informatics systems.

3.1 The TPACK and DPACK Models

In the TPACK competency framework, the three domains of teacher professional knowledge introduced by Shulman [19], general "Pedagogical Knowledge" (PK), subject-matter "Content Knowledge" (CK), and the domain of "Pedagogical Content Knowledge" (PCK), are complemented by a domain of "Technological Knowledge" (TK) (cf. [14]). Replacing the "T" with a "D" in DPACK [9] is intended to emphasize that not only technical application knowledge but "Digitality related knowledge" is taken into account in the TK sector, which is characterized in DPACK by the three perspectives of the Dagstuhl Triangle (Fig. 1). "Digitality related knowledge" (DK) is the competence necessary to recognize, describe, reflect on, and shape phenomena in a culture of digitality [8]. The use of the term competence is also intended to illustrate that the requirements for teachers are not only at the level of "knowledge" [8]. If it were solely about discussing and explaining digital literacy in the context of phenomena of the "digitally networked world", it would be sufficient to apply the Dagstuhl model as in [16]. However, teachers' digital competence must additionally be discussed in the context of their content-related (i.e., subject-related) and pedagogical competencies (cf. [8] Area D).

3.2 AI Systems as Special Informatics Systems

The AI-PACK competence framework refers to a subdomain of informatics phenomena. Informatics phenomena are events caused by automated information processing. In order to define this domain, an appropriate identification of the informatics systems that produce these phenomena is needed, and here some difficulties arise in the context of AI. The "AI Watch" report [18] notes that AI is usually described in relation to human intelligence or intelligence in general, with many definitions referring to machines that behave like humans or are capable of actions that require intelligence. Consequently, these definitions are ambiguous, describe a "moving target", as in "Tesler's Theorem" ("AI is whatever hasn't been done yet."), or generate undesirable anthropomorphic associations. Moreover, AI need not behave human-like in any way; for example, typical "non-AI tasks" such as mathematical calculations can also be performed by products of AI techniques such as Machine Learning. Definitions from a technical perspective ("How does it work?", "How was it made?") list specific techniques and approaches used to develop appropriate software, such as the European Commission's "AI Act" definition [18]. This results in clearer provisions, but such a list must be updated permanently, especially if it is very detailed and does not refer to central principles.

Therefore, the question arises: what specifically characterizes automated information processing in AI systems? A characterization based on the computer science design approach results from the usual division of the field into 1. "knowledge-based AI", sometimes also called "classical", "symbolic", or "rule-based AI" (GOFAI, "Good Old-Fashioned AI"), and 2. "Machine Learning" (ML) [10, 16]. In GOFAI systems, a "knowledge base" is built from appropriately structured and prepared data representing content ("facts"), which forms the basis for a heuristic, rule-based search for precisely specified solutions. ML systems follow a different paradigm: the initial search is not for solutions but in the space of possible functions [12]. In ML, functions are found through an iterative, data-driven optimization ("training process"), usually with the help of examples and a complex approximation procedure that includes an objective function [10]. A function found in this way is finally used in the application context to compute usable solutions. It is not a requirement that the resulting software follow comprehensible rules; it is examined only by means of statistical tests. This basic principle can be found in the different variants of ML [11]. Here we refer to both the software that performs such an optimization and the software that is the product of such a process as ML software.
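To make the ML paradigm described above concrete, the following minimal Python sketch (our own toy illustration with invented data, not taken from the cited works) adapts a parameterized function to example data by iteratively minimizing an objective function and then uses the found function to compute an output in an application context.

# Minimal illustration of the ML paradigm: "searching the space of possible
# functions" by iteratively optimizing an objective function on example data.
# (Hypothetical toy example for illustration only.)

# Training examples: pairs (x, y) that implicitly describe the desired behavior.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

# Candidate function family: f(x) = w * x, parameterized by w.
def f(x, w):
    return w * x

# Objective function: mean squared error of f on the examples.
def loss(w):
    return sum((f(x, w) - y) ** 2 for x, y in examples) / len(examples)

# Training process: iterative, data-driven optimization (gradient descent).
w = 0.0
learning_rate = 0.01
for _ in range(1000):
    grad = sum(2 * (f(x, w) - y) * x for x, y in examples) / len(examples)
    w -= learning_rate * grad

# The found function is judged only statistically (here: remaining loss),
# not verified against explicitly programmed rules.
print(f"learned w = {w:.3f}, remaining loss = {loss(w):.4f}")

# Application context: use the learned function to compute a usable output.
print(f"prediction for x = 5.0: {f(5.0, w):.2f}")

The decisive point is that the developer specifies examples and an objective, not the processing steps themselves; the quality of the result can only be judged statistically.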

In summary, we follow a view according to which an alternative problem-solving paradigm is applied in AI systems: the information-processing procedure that is to produce the desired outputs need not be described. Historically central to this approach are, first, the search for answers in a knowledge base using general inference rules (GOFAI) and, second, the data-driven adaptation of system behavior guided by an evaluative objective function (ML). The approach of focusing on the problem description rather than on the solution path has similarities to the paradigm of declarative programming, especially in GOFAI. However, only in some cases is finding a path to a solution essentially left to the computer, as in PROLOG, where the input database containing inference rules forms the basis for a rule-based (depth-first) search for correct answers to queries. Furthermore, it should be taken into account that AI systems can be modular and combine functions that may have been generated using different approaches. In this way, we obtain a relatively stable basis for a purposeful delimitation of the domain in our context, as well as a clearer basis for explanations of the phenomenon domain from a computer science perspective.
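For contrast, the following sketch (again a hypothetical toy example, written in Python rather than PROLOG) illustrates the knowledge-based, declarative idea: facts and inference rules are stated, and a generic depth-first backward-chaining procedure searches for answers to a query; the solution path is not programmed explicitly, but it can be traced exactly.

# Minimal GOFAI-style sketch (hypothetical toy example): facts and inference
# rules form a knowledge base; a generic depth-first backward-chaining search
# answers queries, similar in spirit to PROLOG.

facts = [("parent", "alice", "bob"), ("parent", "bob", "carol")]

# Each rule: (head, body). The head holds if all body goals hold.
# Identifiers starting with an uppercase letter are variables (as in PROLOG).
rules = [
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    (("ancestor", "X", "Y"), [("parent", "X", "Z"), ("ancestor", "Z", "Y")]),
]

def is_var(term):
    return term[0].isupper()

def walk(term, bindings):
    # Follow variable bindings to the current value of a term.
    while is_var(term) and term in bindings:
        term = bindings[term]
    return term

def unify(a, b, bindings):
    # Try to make two atoms equal; return extended bindings or None on failure.
    bindings = dict(bindings)
    for x, y in zip(a, b):
        x, y = walk(x, bindings), walk(y, bindings)
        if x == y:
            continue
        if is_var(x):
            bindings[x] = y
        elif is_var(y):
            bindings[y] = x
        else:
            return None
    return bindings

def prove(goals, bindings, depth=0):
    # Depth-first search: try facts first, then rules, for the first open goal.
    if not goals:
        yield bindings
        return
    goal, rest = goals[0], list(goals[1:])
    for fact in facts:
        if fact[0] == goal[0] and (b := unify(goal, fact, bindings)) is not None:
            yield from prove(rest, b, depth)

    def rename(t):
        # Give rule variables a depth suffix so each application uses fresh names.
        return f"{t}{depth}" if is_var(t) else t

    for head, body in rules:
        if head[0] != goal[0]:
            continue
        b = unify(goal, tuple(rename(t) for t in head), bindings)
        if b is not None:
            renamed_body = [tuple(rename(t) for t in g) for g in body]
            yield from prove(renamed_body + rest, b, depth + 1)

# Query (in PROLOG: ancestor(A, carol)?): who are carol's ancestors?
for b in prove([("ancestor", "A", "carol")], {}):
    print("answer: A =", walk("A", b))

Here the developer only declares what is true (facts and rules) and what is asked (the query); the general search procedure finds the answers, and every inference step can be reconstructed.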

3.3 Peculiarities of AI Systems

From the user-oriented perspective (U), the application of AI systems often seems familiar and simple at first. The verb "to google" refers to querying a well-known knowledge-based AI system and has already entered common usage. ML-generated software can, among other things, make computers good at "hearing", "seeing", or natural language processing (e.g., translating text). Generative systems can use a few keywords or linguistic input to produce, e.g., well-formulated essays, artistic-looking images, or program code for various questions in seconds. AI systems can also act as adaptive or interesting interaction partners in complex game-like environments. Chatbots based on GPT-4 may be almost indistinguishable from a human in short conversations [1]; thus, they pass the "Turing test" in many cases [23]. Users without basic computer science education, even young children [22], can easily take on the role of developers ("trainers") with ML and create their own solutions to problems, such as gesture controls or intelligent software agents. In contrast to this often intuitive usability, however, the interpretation and use of the outputs of these software systems require special skills if one does not want to be exposed to undesired effects or cause them with one's own products. ML systems produce "only" approximate solutions based on the presented examples, which in part require stochastic methods of interpretation ("confusion matrix"). Although in GOFAI the behavior of the system can be explained by studying the programmed logic [10], intuitive software production in the form presented above is not readily possible there. ML systems in particular have hidden limitations, including a number of "hard" problems listed in [15]: "one-shot learning", i.e., the ability to learn correct classification skills from only one or a few examples of a given class of objects, cross-domain generalization, causal inference, concrete meaning ("grounding"), the complexity of time scales and memory, and metacognition. Separately taught knowledge of these properties is of paramount importance to all teachers because of the enormous impact they have on our personal lives [16].
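As a small illustration of the stochastic interpretation mentioned above, the following sketch (a hypothetical toy example with invented labels) computes a confusion matrix and derived quality measures for the outputs of a binary classifier.

# Toy illustration of interpreting ML outputs statistically via a confusion
# matrix (invented data; values are for illustration only).

true_labels = ["cat", "cat", "dog", "dog", "dog", "cat", "dog", "cat"]
predictions = ["cat", "dog", "dog", "dog", "cat", "cat", "dog", "cat"]

classes = ["cat", "dog"]
# matrix[i][j] counts examples of true class i predicted as class j.
matrix = [[0 for _ in classes] for _ in classes]
for t, p in zip(true_labels, predictions):
    matrix[classes.index(t)][classes.index(p)] += 1

print("confusion matrix (rows: true class, columns: predicted class)")
for cls, row in zip(classes, matrix):
    print(f"{cls:>4}: {row}")

# Derived quality measures are statistical statements about the examples,
# not guarantees about individual future outputs.
correct = sum(matrix[i][i] for i in range(len(classes)))
accuracy = correct / len(true_labels)
tp = matrix[0][0]  # "cat" treated as the positive class
precision = tp / sum(matrix[i][0] for i in range(len(classes)))
recall = tp / sum(matrix[0])
print(f"accuracy = {accuracy:.2f}, precision(cat) = {precision:.2f}, recall(cat) = {recall:.2f}")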

From the socio-cultural perspective (S), teachers (as well as students) need to be empowered to analyze the impact, opportunities, and challenges of AI. In addition, they need to know how to address potential problems in the use of AI in order to ensure responsible use (see Domain S in [16]). To this end, it is also critical to clearly characterize the role of humans. Some (ethical) grievances are inherent in the technical nature of these systems [15]: for example, AI may not deliver the intended performance or defensible reliability, may produce biased or toxic results, violate privacy (or copyrights), produce false information about the world, or lack explainability, and it carries the consequences of a lack of diversity, e.g., with regard to gender or race, among the people who research and develop AI in industry and academia. AI systems are thus spawning a large number of new, but sometimes ethically dubious, tools and applications in the digitally networked world, including educational ones. The UNICEF AI definition [24], referred to among others in the update of the EU digital literacy framework "DigComp 2.2" [4], points out that the design and behavior of AI systems are always subject to goals determined by human system designers. This is a fact that can be concealed by ostensible autonomy, objectivity, anthropomorphism, eloquence, or the like, whereby people without appropriate reflective competencies, especially children, can easily fall prey to fatal deceptions.

From a technological perspective (T), AI systems do not add any new functions to the basic processes of digital transformation, namely "capturing and storing" (digitization), "processing" (automation), and "transmitting and disseminating" (networking) information (cf. [7]). In the case of AI software, however, the essential part of the information processing takes place through a process that has not been designed manually but is produced from "facts" or examples and the objectives (cf. Sect. 3.2). Although the function of both types of AI is crucially based on the input data used, ML systems differ in practice from GOFAI systems in this respect: in GOFAI, general inference algorithms search for the specified solutions in a space generated by the facts and rules, and this search can then be traced exactly [10]. The scope of application of ML systems has proven to be surprisingly extensive; on the other hand, the functionality generated in the training process by iterative approximation can only satisfy statistical quality criteria. This also clearly distinguishes the products of ML methods from those of "manual" software development, where humans use structured decomposition and analytical insight to pursue, among other things, the quality criterion of correctness, which is verified and, if possible, validated by various methods. In principle, most of the presented peculiarities of AI systems result from the way their information-processing process was produced. Therefore, appropriate informatics education is also of central importance in establishing AI competencies.

4 AI-PACK - AI Competencies for Teachers

In this section, we briefly describe each of the AI-related fields of the model (Fig. 1) and give some illustrative examples from each of the three perspectives of the "Dagstuhl Triangle".

Fig. 1. AI-K with AI-PK, AI-CK, and AI-PCK within the DPACK model. (The "A" was inserted into TPACK for stylistic reasons; we retain this for the name of the model.)

4.1 AI-K: AI Related Knowledge

AI-K refers to the competence of being able to recognize, understand, reflect on, and thus shape AI phenomena from a technological, socio-cultural, and user-oriented perspective.

If the focus is only on digital competencies, i.e., if pedagogical and subject content competencies and their interactions do not play a special role, it is sufficient to apply the Dagstuhl model here. An application of the Dagstuhl model with respect to general, non-pedagogical, or content-related AI competencies in the context of CS K-12 education is available in [16]. Accordingly, competencies would include being able, for example, to critically question suggestions and prices (e.g., in online stores) as results of conscious and unconscious use (A-"How do I use this?"), to discuss reliability, e.g., in the context of self-driving cars (S-"What are the effects?"), or to select an appropriate ML procedure, e.g., to automatically recognize images containing certain artifacts (T-"How does this work?"). However, to perform their jobs, teachers need some additional or more specific knowledge beyond what students are expected to acquire. Especially in non-computer-science contexts, the question of specific applications usually arises first; linked to this are then questions of how the technology works and what the societal implications are.

4.2 AI-PK: AI Related Pedagogical Knowledge

AI-PK refers to the competence to recognize and reflect on the potentials, limitations, and risks of AI for teaching-learning processes and thus to be able to design contemporary teaching-learning settings.

The area comprises the general, non-subject-specific part of the AI competencies that is necessary to plan and implement lessons that are effective for learning. These competencies are located in the subarea DPK (cf. Fig. 1): "How can I teach (in general) 'with', 'about', and 'in spite of' the phenomena of artificial intelligence?" [8]. This includes answers to questions such as "Where are my students currently in relation to digital media?", "How is digitalization currently changing society in general?", and "How has the socialization of students changed, and what opportunities, but also what problems and risks, need to be considered with regard to teaching?" In AI-PACK, this is now applied to AI with its specifics. What are the implications for teaching in general of applications of the "AI world" from the students' environment, such as "TikTok", "Photomath", "DeepL", "Teachable Machine", "ChatGPT", or "Midjourney"? For teachers, a number of AI applications are also discussed that help with lesson planning, delivery, and reflection in general. Learning analytics is about "measuring, collecting, analyzing, and evaluating data about learners and their context with the goal of understanding and optimizing learning and the learning environment" (G. Siemens); in this context, great expectations are sometimes placed on corresponding AI applications.

A-"How do I use this?": This area is about the use of applications that allow, e.g., the planning of effective lessons with a generative system like ChatGPT; the consideration of teaching forms that include adaptive self-learning systems, which allow, e.g., individualized and differentiated individual or group work or analyze learning levels and progress; and the use of reflection apps that support reflecting on and processing experiences.

T-“How does it work?": Following on from this, teachers should be able to roughly illustrate, for example, how ChatGPT was trained and produces its outputs, how apps with adaptive reward mechanisms work and respond to learners’ attention and motivation, or how texts are classified or learning profiles are assessed.

S-“What are the effects?": Critically interpret and evaluate the outputs, outcomes, alerts, or notifications of AI tools and learning environments. What are the potentials, limitations, and risks for the learning group of such teaching-learning processes with adaptive self-learning systems that include personalized AI tutors or adaptive reward mechanisms? What is the impact of AI feedback on the learning group? What are the risks beyond the intended goals, e.g., with regard to privacy rights, such as data protection and copyright, or unfairness?

4.3 AI-CK: AI Related Content Knowledge

AI-CK refers to the competency of being able to recognize and reflect on the implications of the increasing use of AI in one’s discipline and the resulting impact on the scientific discipline, the professional field, and the subject.

The area comprises the part of digital literacy (cf. Fig. 1) that is necessary to teach a subject or a topic confidently: "In what ways and with what procedures does AI come into play in subject-specific science, or how are its methods affected?", "How are the corresponding professional fields changing because of AI systems?", "How is my subject being changed as a result?", "Does content disappear or is new content added?" (cf. [8]). The starting point is corresponding subject-specific applications, e.g., the use of AI in reception or writing processes (language subjects), in translations (foreign languages), for the classification and explanation of artifacts (history), or in the identification of plants and animals based on photographs (biology). Corresponding competencies related to the aforementioned applications from the field would be,

... A-“How do I use it?": knowing and applying corresponding relevant professional AI tools.

... T-"How does it work?": to be able to describe how solutions are created technically, e.g., the classification of a plant image, and what the differences are compared to traditional "manual" methods.

... S-“What are the effects?": to be able to evaluate AI solutions in a subject-specific way and to represent how such AI applications change the tasks and professional fields of experts in the subject. For example, what changes occur in history when applications are available that classify and explain artifacts such as images or writings? How reliable are the corresponding outputs and to what extent might the system be biased?

4.4 AI-PCK: AI Related Pedagogical Content Knowledge

AI-PCK refers to the competence to recognize and reflect on the topic- and subject-specific influence, potentials, and limitations of AI on teaching-learning processes and learners, and thus to design contemporary teaching-learning settings.

In the DPACK model, the central area of "Digitality related Pedagogical and Content Knowledge" (DPCK) refers to knowing the most useful forms of presentation of relevant subject or thematic content, e.g., content of timeless and general importance, in which aspects of teachability, instructiveness, and relevance (the most meaningful analogies, illustrations, examples, explanations, and demonstrations) are also embodied [19] - understood under the conditions of digitality. It follows that teachers should be "digitally competent" in deciding what should be covered, how, and with what, i.e., without digital media or AI tools if necessary.

On the other hand, the area also refers to knowledge about the application possibilities of technology and pedagogical techniques with regard to targeted competency goals, as well as knowledge about how technology can help solve some of the problems that students face (cf. [14]). In addition, with regard to the Dagstuhl perspectives, beyond this application knowledge there are also the skills to reflect technologically and socio-culturally on the digital means (in the case of AI-PCK, the AI software) and to address and design them appropriately [8]. This means, for example, being able to,

... A-“How do I use this?": to generate subject-specific teaching materials or media involving AI (e.g., task variations, texts, images, videos, or simulations that include avatars, etc.) or to use appropriate AI-based tools to better convey subject content, e.g., to generate different text or translation variations and discuss them instructively in class (foreign language teaching) or to create instructive simulations or educational games using AI tools (e.g., in STEM subjects).

... T-“How does it work?": to describe how the applications mentioned work, i.e. to be able to explain how these systems have been trained and on the basis of which technical principles the outputs are generated.

... S-"What are the effects?": to assess the didactic value of, e.g., self-evaluations using AI tools such as chatbots, translators, tools that explain artifacts, or "intelligent" math tools, and to work through them appropriately with the students; or to motivate subject content that is not to be dropped, even though tools, such as translators, may exist that could take over these tasks.

5 Discussion and Outlook

In our paper, we have presented a framework that enables the structured description and exploration of AI education requirements for contemporary professional teaching. With AI-PACK, we outline the AI-related domains of teacher professional knowledge AI-PK, AI-CK, and AI-PCK based on the DPACK model.

Our presentation narrows down the field of AI systems via their technical nature (design approach), which requires specific competencies; it is therefore based on an informatics perspective. A media-pedagogical perspective may raise further issues, e.g., the problem that dealing with systems that feign human characteristics and abilities requires specific competencies regardless of whether the AI technology described in Sect. 3.2 was used, or that pretended "intelligence", as in the case of the famous "Mechanical Turk", is generated not by an informatics system but by covertly working humans (cf. [10]). The UNICEF AI definition [24] therefore includes, for example, systems that appear intelligent but are not AI systems from the technical perspective described here. CS education can have a relieving effect here: if teachers understand how the outputs of AI systems are generated and how informatics problems are solved using AI methods, many properties of the applications of the described area can be derived systematically, and a more reflective handling of, e.g., the outputs of such systems becomes possible. From a didactic point of view, understanding how AI processes subject-related data and thus builds up or applies its internal modeling could also provide new insights with regard to subject-related understanding, e.g., in comparison to corresponding manual processes.

Further research is needed with regard to the concretization, evaluation, and didactic design of the fields. On the one hand, the presented model contains strongly subject-related fields whose evaluation and concretization require content-related and subject-didactic expertise (AI-CK and AI-PCK). On the other hand, it contains interdisciplinary intersections that are particularly well suited as subjects of interdisciplinary study programs, such as general methods of lesson preparation and implementation (AI-PK), as well as the obvious intersections in the T-areas ("How does it work?") and the related basics of computer science education. Therefore, in addition to the specific clarification of AI-PCK for CSE, we see the additional task for CSE of addressing the cross-curricular supplementary need for CS AI education in an appropriate way.