Introduction

Artificial intelligence (AI) is affecting more and more fields of the digitalized world, such as business (Ng et al., 2021), arts (Epstein et al., 2020), communication (Androutsopoulou et al., 2019), and science (Sharpless & Kerlavage, 2021). The importance of AI competence as a future skill for all citizens is underlined by its implementation into the latest edition of the European DigComp 2.2 framework (Vuorikari et al., 2022). Consequently, AI is increasingly used in education (De Laat et al., 2020; Ifenthaler & Seufert, 2022; Zawacki-Richter et al., 2019). AI in education is defined as a combination of "machine learning, algorithm productions, and natural language processing" (Akgun & Greenhow, 2021, p. 1) with the potential to reduce teachers' workload, contextualize students' learning, (semi-)automate assessments (Ifenthaler et al., 2018), and provide intelligent tutoring systems (Ifenthaler et al., 2024). Accordingly, teachers apply intelligent tutoring systems and adaptive learning environments to support the individual learning pathways of their students (Castro-Schez et al., 2021; Ifenthaler & Schumacher, 2023; Park et al., 2023). Educators can use learning analytics to identify at-risk students and initiate personalized support (Ifenthaler, 2015). Furthermore, educators can benefit from AI systems through automated scoring tools (Ludwig et al., 2021) or recommender systems (Hemmler et al., 2023). However, a responsible and effective application of AI in education depends on competent teachers who are proficient in the different facets of AI (Caena & Redecker, 2019; Ng et al., 2021; Seufert et al., 2021). While teachers seem open to working with AI in schools, the few existing projects are based on individual initiatives rather than on changes in the organizational learning culture (Roppertz, 2020). Furthermore, Rietz and Völmicke (2020) identify a lack of individual learner support: their analysis shows that newly developed tools focus mostly on organizational tools for school administration rather than on learners.

This study's objective is to analyze the dimensional structure of an AI competence model and the evidence-based development of an instrument for assessing teachers' self-rated AI competence. The assessment of teachers' AI competencies is a crucial step in identifying teachers' readiness to deal with the challenges and opportunities presented by introducing AI technology into the field of education. On a micro level, teachers can be enabled to reflect on their existing knowledge, attitudes, and skills while pinpointing opportunities to improve their teaching capabilities in an increasingly digital society (Nielsen et al., 2015). From an organizational perspective, schools and administrative decision-makers can establish further teacher training possibilities based on the analysis of teachers' competencies. Furthermore, the results of competence assessment can be used to shape decisions in the educational programs of teachers at universities and on a political level (Ifenthaler et al., 2024). The AI competence model was conceptualized based on experts' understanding of the AI field and consists of six dimensions. Further, the study aimed to empirically confirm the robustness of the AI competence model in order to nurture the development of professional learning opportunities for the AI competence of pre-service and in-service teachers.

Background

Current research on the acceptance and use of AI in classroom practice, with a specific focus on teachers' AI competence, is scarce. Furthermore, existing studies on AI competence fall short of a holistic view of AI competence (Delcker et al., 2024). Still, the existing literature on teachers' AI competence identifies different fields of expertise, which can be summarized in distinctive competence dimensions. Teachers are expected to demonstrate basic knowledge of the functionality of AI (Attwell et al., 2020). For example, they must be able to identify whether an application uses AI (Long & Magerko, 2020). Teachers need to be aware of data security risks and of how they can ensure data privacy when collecting, analyzing, and managing data in education (Papamitsou et al., 2021). Teachers must identify AI's potential and risks in education, society, and the workplace (Attwell et al., 2020).

Additionally, they must be aware of the competencies that AI requires (Massmann & Hofstetter, 2020). Furthermore, teachers should be interested in AI, open to trying new AI tools, able to critically reflect on the possibilities of AI, and willing to become active agents in AI implementation processes. Teachers need to be able to deploy AI tools in their instructional design, and they need the competence to teach about AI (Gupta & Bhaskar, 2020; Zhang & Aslan, 2021). The enumerated competencies have to be accompanied by ongoing teacher professionalization and training, including teachers' ability to further educate themselves about AI through professional networks, as well as to implement AI in administrative processes (Al-Zyoud, 2020; Butter et al., 2014).

Current research on AI competence frameworks combines these fields of expertise in varying ways. Huang (2021) proposes a framework that emphasizes specific AI-related knowledge, such as machine learning, robotics, and programming, in combination with more general key competencies (e.g., self-learning and teamwork). In contrast, Kim et al. (2021) base their model on AI knowledge, AI skills, and AI attitudes, underlining the importance of critical reflection for ethical AI implementation. Sanusi et al. (2022) follow this idea and implement ethics of AI as a competence connecting the other parts of their model, namely learning, team, and knowledge competence. Further, in designing and implementing AI systems in the context of education, consideration of and compliance with ethical norms and values are of utmost importance (Heil & Ifenthaler, 2024). Richards and Dignum (2019) proposed the so-called ART principles (Accountability, Responsibility, and Transparency) as the foundations of AI systems. Algorithms and data need to allow accountability for the decisions made by an agent and reflect the organization's moral values.

Furthermore, a clear chain of responsibility concerning the involved stakeholders must be evident. Ultimately, in terms of data and algorithms implemented, the AI system needs to be developed in a form that provides insights into its mode of operation. These ethical considerations are paired with legal regulations and frameworks. Countries and regions have implemented data protection laws, such as the European General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), to prevent data misuse. Accordingly, knowledge about the specific regulations, as well as their application in teaching and learning, has to be part of a holistic framework of teachers' AI competence.

The proposed frameworks form a valuable basis for identifying and modeling general AI competencies. Still, multiple shortcomings hinder the direct use of one of these frameworks as a blueprint for assessing teachers' AI competencies. Firstly, they do not capture the ability of teachers to include AI in their learning and teaching practice, nor their perspective on the implications AI might have for education in general (Tuomi, 2022). Secondly, these frameworks propose theoretical considerations but do not deliver actionable measurement instruments. In contrast, SELFIEforTEACHERS (Economou, 2023) is designed as a tool for teachers to self-reflect on their perceived digital competencies and is based on the DigCompEdu framework (European Commission: Joint Research Centre et al., 2017). SELFIEforTEACHERS covers a wide variety of digital competencies, but AI technology can hardly be found in the tool.

Laupichler et al. (2023), as well as Ng et al. (2021), underline the need for instruments that offer a holistic but actionable approach to AI competence measurement: as of now, published research is often too general, leading to extensive questionnaires that are unable to capture the numerous constructs related to AI competence in detail. On the other hand, some instruments are too detailed and only collect data on very specific components of AI competence, such as attitudes (Sindermann et al., 2021) or anxiety (Wang & Wang, 2022) towards AI technology. Furthermore, Laupichler et al. (2023) point out that some instruments might only be valid for certain courses, such as the one developed by Dai et al. (2020), who proposed an instrument to measure the influence of an AI course on students' anxiety.

The instrument presented in this paper has been developed to assess perceived AI competence from a practical perspective. In contrast to long instruments like SELFIEforTEACHERS, it focuses on perceived AI competence and does not include items connected to more general digital competences. At the same time, it covers the various constructs associated with perceived AI competence, and their relationships, instead of focusing on a single construct. School leaders and educators can use it in teacher training to collect data about the perceived AI competence of in-service and pre-service teachers.

The instrument presented in this paper follows the same principle as other instruments currently being developed to assess AI competence (Laupichler et al., 2023; Sindermann et al., 2021; Wang & Wang, 2022): it collects data on teachers' perceived competencies, not results of performance tasks (Schoenfeld, 2010). Although there is a clear difference between the perceived competences and the actual competences of a person, perceived competences are related to actual competences (Arnold, 1985), and measuring perceived competences is a frequently chosen approach when assessing the competences of teachers (Sumaryanta et al., 2018).

Pilot study

A pilot study was conducted to explore possible additions and adaptations to the existing frameworks of AI competence. Qualitative interviews were conducted with different stakeholders in the educational context, as well as with experienced stakeholders from the field of ICT. In total, N = 35 stakeholders took part in the study from April until May 2021. Of the 35 stakeholders, 15 were in-service teachers, 9 worked as instructors in training companies, and 11 worked in various ICT-related occupations, such as software development, IT consulting, and project management. All participants were selected based on their position and field of expertise within their respective organizations and had prior knowledge about AI within their field. An interview guideline was used to structure the interviews. The first part of the guideline contained questions regarding the participants' demographic data. The second part focused on AI competencies in a pedagogical context and on which dimensions might belong to AI competencies; these questions were guided by the results of the literature review. In the last part, the participants were asked to rank the different components of AI competencies based on their perceived level of importance. The recorded interviews were transcribed in Microsoft Word and imported into MAXQDA for further analysis, following Kuckartz's content-structuring analysis method (Kuckartz & Rädiker, 2023).

Based on an in-depth systematic literature review and the results of the analysis of the expert interviews, the following six dimensions of AI competence for teachers have been identified:

(1) Theoretical Knowledge about AI (TH): Teachers need to know the difference between AI and traditional computer programs and how AI can be used. More precisely, teachers need to be able to identify future fields of application for AI. This requires a basic understanding of different AI technologies, such as machine learning, deep learning, data mining, or artificial neural networks.

(2) Legal Framework and Ethics (LF): Teachers need to be able to handle data in line with ethical considerations, especially when working with student data. They must be aware of the challenges that can arise for fairness, equality, and transparency when AI is used. This enables them to prevent discrimination and to evaluate the results of AI technology. Teachers have to ensure data protection under local laws, such as the GDPR in Europe, at all times.

(3) Implications of AI (IP): Teachers need to identify the challenges and potentials of AI in education, society, and the workplace. AI changes the competencies required to be a capable citizen. In education, different forms of learning might emerge. Society and the workplace might be at risk of alienation. Not every problem can be solved or approached with AI. Competent teachers have to factor these considerations into their practice.

(4) Attitude toward AI (AT): Teachers have to be open to AI and engage with it. They have to keep an open mind to identify use cases for new technology and its potential for their students. It is important for teachers to critically reflect on their own beliefs about AI and their handling of it.

(5) Teaching and Learning with AI (TL): Teachers need to be able to implement AI into their teaching. This includes AI as a general topic, AI for individual or cooperative learning, or AI as an assessment tool. In addition, they have to identify how AI affects education processes, focusing on the possible cooperation of humans and machines. Furthermore, teachers need to act as role models for the application of AI.

(6) Ongoing Professionalization (PF): Teachers must understand the importance of continuing professionalization. AI has to be identified as a quickly evolving field, which makes continuous, demand-driven training necessary. This includes forming a professional network with colleagues and with university and industry partners. Furthermore, the implementation of AI into organizational processes is subsumed in this dimension.

In summary, AI competence in the context of education is a set of skills that enables teachers to develop, apply, and evaluate AI for learning and teaching processes in an ethically responsible way. Research shows that the relationship between different competence fields is a key factor for investigating teachers' knowledge and skills (Blömeke et al., 2016; Schoenfeld, 2010). Systematic reviews in the field of AI literacy and competence emphasize the need for the holistic, multidimensional approach chosen in this study (Knoth et al., 2024; Sperling et al., 2024). The interrelationship between these fields can be illustrated by the following example: a teacher who wants to use AI in her class must be open to its use (AT) in order to identify relevant parts of her teaching as potentially benefiting from AI. She then decides to use AI to grade her students' papers automatically. To be able to do that, she needs to understand how an AI tool might use different techniques to fulfill that task (TH). Simultaneously, she needs to be aware of the ethical and legal considerations that play a role in automated grading (LF). She also needs to realize the implications (IP) the usage might have for her workplace, such as a more efficient way of using her work hours. As the teacher is not very experienced with AI, she decides to complete an online training program (PF). Once she has collected all relevant information and set up her AI tool, she informs her students about the process and how she wants to include it in her teaching and learning (TL).

Current study

Previous research has shown that perceived AI competence can be measured with longer, more general instruments (Caena & Redecker, 2019). In addition, specific constructs belonging to AI literacy, such as attitude or anxiety, can be measured in more detail (Sindermann et al., 2021; Wang & Wang, 2022). This study aimed to establish a scale that measures teachers' perceived AI competencies and to confirm the six dimensions of the AI competence model without focusing on a single dimension or requiring a lengthy questionnaire.

The guiding research questions were:

RQ1: Are the items of the perceived AI competence instrument a fitting representation of the underlying factors?

RQ2: How do the six factors in the model represent the overall perceived AI competence?

Answering these research questions is an important step toward validating the developed instrument and, therefore, ensuring its practical applicability.

Method

Participants

Teachers at vocational schools in Germany were contacted via publicly available email addresses to participate in the online survey. The focus on vocational schools is rooted in the heterogeneity of the German school system. The different types of schools, such as pre-schools, elementary schools, high schools, and vocational schools, lead to wide variation in the competencies of teaching personnel, both between and within these schools. While the instrument does not focus on a specific type of school, sampling from one type of school eliminates school type as an influencing factor on the perceived competencies of the surveyed teachers (Pfost & Artelt, 2014; Rohm et al., 2021). The final convenience sample included N = 480 participants (47% female, 53% male). Their mean age was 39 years (SD = 11.36). The average work experience of the participants was ten years (SD = 5.43).

Participation was voluntary, and no incentives in the form of money, vouchers, or sweepstakes were offered. All procedures performed in studies involving human participants followed the ethical standards of the institutional/national research committee.

Instrument

The AICO_edu (AI Competence Educators) questionnaire was developed based on the six dimensions of the perceived AI competence model presented above. Each dimension was captured through six to eight items (TH: 8; LF: 6; IP: 8; AT: 8; TL: 8; PF: 7), which were answered on a five-point Likert scale (1 = strongly disagree; 5 = strongly agree). The items were created based on the results of the pilot study. As a first step, they were reviewed by the interview partners of the pilot study to ensure that the wording and the structuring of the subscales were in line with the instrument's planned practical area of application. The structure was further checked through an expert discussion among educational researchers as part of a research colloquium. Cronbach's alpha for the six dimensions and corresponding sample items are presented in Table 1.

Table 1 Cronbach’s alpha of the six dimensions
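For illustration, the internal consistency of each subscale can be estimated as in the following minimal sketch using the psych package in R (the environment used for the analyses reported below). The item column names (TH01 ... PF07) and the file name are hypothetical and not taken from the published instrument.

```r
# Minimal sketch: Cronbach's alpha per subscale with the psych package.
# Column names (TH01 ... PF07) and the file name are hypothetical;
# the actual items are listed in the Appendix of the paper.
library(psych)

d <- read.csv("aico_edu_responses.csv")  # hypothetical file of Likert answers (1-5)

subscales <- list(
  TH = paste0("TH0", 1:8),
  LF = paste0("LF0", 1:6),
  IP = paste0("IP0", 1:8),
  AT = paste0("AT0", 1:8),
  TL = paste0("TL0", 1:8),
  PF = paste0("PF0", 1:7)
)

# Raw Cronbach's alpha for each of the six dimensions
sapply(subscales, function(items) psych::alpha(d[, items])$total$raw_alpha)
```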

Procedure and analysis

A quantitative study using a convenience sampling method in vocational schools was conducted over a period of two months in 2021 to examine the robustness of the perceived AI competence model. As a standard research data-protection practice, all data were stored and analyzed anonymously. Data were cleaned and combined for descriptive and inferential statistics using R (https://www.r-project.org). All effects were tested at the .05 significance level.
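Continuing the hypothetical naming from the sketch above, the cleaning step and the subscale means used later (for RQ2) might look as follows; listwise deletion is an assumption here, as the paper does not detail its cleaning rules.

```r
# Sketch of data cleaning under the assumed column names from above:
# keep complete responses only and compute subscale means for later use (RQ2).
item_cols <- unlist(subscales)
d <- d[complete.cases(d[, item_cols]), ]   # listwise deletion (assumed rule)

for (dim in names(subscales)) {
  d[[paste0(dim, "_mean")]] <- rowMeans(d[, subscales[[dim]]])
}
```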

Confirmatory factor analysis

A confirmatory factor analysis was used to examine the construct validity of the developed questionnaire. Confirmatory factor analysis (CFA) and exploratory factor analysis (EFA) are types of structural equation modeling. While EFA is applied to datasets to identify unknown relationships within the data, CFA is used to confirm theory-based assumptions about the structure of the data, in which unobservable latent factors are measured by observable indicators (Brown & Moore, 2012).

As we developed the original questionnaire based on the theoretical and empirical pre-analyses described in the pilot study, we conducted a confirmatory factor analysis to validate our measurement instrument (RQ1). The first CFA analyzed how well the 45 developed items represent the six dimensions of AICO_edu.

In addition, we intended to investigate whether the six dimensions presumed to represent an overall AI competence, derived from the in-depth literature review and the qualitative interviews with stakeholders, reflect one unobservable latent factor, AI competence (RQ2). Therefore, we conducted a second CFA, measuring how well the means of the six sub-categories, used as indicators, represent one factor.

Several fit indices were applied, such as chi-square, the Root Mean Square Error of Approximation (RMSEA), the Comparative Fit Index (CFI), and the Tucker-Lewis Index (TLI). These global indices represent how well the assumed model fits the data.
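As a minimal sketch, such a CFA can be specified with the lavaan package in R; shown here is the six-factor model (RQ1), reusing the hypothetical item names introduced above, together with the fit indices named in this section.

```r
# Minimal sketch of the six-factor CFA with lavaan (item names as assumed above).
library(lavaan)

model_6f <- '
  TH =~ TH01 + TH02 + TH03 + TH04 + TH05 + TH06 + TH07 + TH08
  LF =~ LF01 + LF02 + LF03 + LF04 + LF05 + LF06
  IP =~ IP01 + IP02 + IP03 + IP04 + IP05 + IP06 + IP07 + IP08
  AT =~ AT01 + AT02 + AT03 + AT04 + AT05 + AT06 + AT07 + AT08
  TL =~ TL01 + TL02 + TL03 + TL04 + TL05 + TL06 + TL07 + TL08
  PF =~ PF01 + PF02 + PF03 + PF04 + PF05 + PF06 + PF07
'

fit_6f <- cfa(model_6f, data = d)

# Global fit indices reported in the paper
fitMeasures(fit_6f, c("chisq", "df", "pvalue", "rmsea", "cfi", "tli"))
```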

Results

Confirmatory factor analysis (RQ1)

A confirmatory factor analysis with a six-factor model was conducted to analyze how well the 45 items of the AICO_edu instrument represent the six underlying dimensions. The results are presented in Table 2.

Table 2 Results of the six-factor Confirmatory Factor Analysis

The six-factor model (M1) does not meet the criteria of a good fit, as the RMSEA is higher than .08, and neither the CFI nor the TLI is higher than the cut-off value of .95 (Hernandez et al., 2019; Savalei, 2012). Due to low covariances between some items of the same dimension, eight items were removed to improve the representation of the underlying latent factors. A further analysis of the wording of the items showed that some of the removed items had not been phrased precisely enough to guarantee valid answers: the term "fields of application" in TH02 and TH03 can be interpreted as fields of application within occupations or as fields of application in general. TH08 should be rephrased into "knowledge about databases" to fit the TH category better. The three items removed from the LF category need to be further contextualized; they need to be more specific about the type of student data, where the data is collected, and how the data is analyzed. The removed items are marked with an asterisk in the instrument in the Appendix. The results are presented in Table 3.

Table 3 Results of the six-factor Confirmatory Factor Analysis after the Removal of Items
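How the reduced model might be refit can be sketched as follows, continuing the lavaan example above. Only TH02, TH03, TH08, and three unnamed LF items are identified in the text, so the removal list below is incomplete and purely illustrative.

```r
# Illustrative refit after dropping flagged items (removal list incomplete;
# the full set of eight removed items is marked in the Appendix).
removed <- c("TH02", "TH03", "TH08")   # plus three LF items and two others

model_rev <- paste0(
  names(subscales), " =~ ",
  sapply(subscales, function(x) paste(setdiff(x, removed), collapse = " + ")),
  collapse = "\n"
)

fit_rev <- cfa(model_rev, data = d)
fitMeasures(fit_rev, c("rmsea", "cfi", "tli"))
```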

After the removal of items, the fit indices of the model improved and moved closer to the desired cut-off values: the RMSEA approached the cut-off value of .08, and the CFI and TLI approached the cut-off value of .95.

The results of the six-factor confirmatory factor analysis suggest that the items can partially be grouped into the six theoretical dimensions (TH, LF, IP, AT, TL, PF) of perceived AI competence. Some of the results align with previously developed models, such as attitudes in connection with AI technology (Sindermann et al., 2021). The results also show that new items have to be developed, or existing items rephrased, in cases where removing an item improved the fit of the model, as more suitable items improve the representation of the underlying factors.

Single-factor analysis (RQ2)

A confirmatory factor analysis with a single-factor model was conducted to analyze how well the six dimensions of the AICO_edu instrument, operationalized as the means of the respective subscales, represent perceived AI competence. The results of the CFA are presented in Table 4.

Table 4 Results of the Single-Factor Confirmatory Factor Analysis of the Six Subscales
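A minimal sketch of this second CFA, reusing the subscale means computed in the cleaning step above; the call to modindices() illustrates how the modification indices discussed below might be obtained.

```r
# Sketch of the single-factor CFA on the six subscale means (RQ2).
model_g <- '
  AIcompetence =~ TH_mean + LF_mean + IP_mean + AT_mean + TL_mean + PF_mean
'

fit_g <- cfa(model_g, data = d)
fitMeasures(fit_g, c("rmsea", "cfi", "tli"))

# Largest modification indices point to local misfit (see the PF discussion below)
head(modindices(fit_g, sort. = TRUE))
```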

Modification indices hinted toward a conflicting relationship between the PF dimension and the other dimensions of the model (Whittaker, 2012). A detailed explanation can be found in the discussion section of the paper. Removing the PF dimension from the model resulted in the model shown in Table 5 and Fig. 1.

Table 5 Results of the Confirmatory Factor Analysis after the Removal of the PF Dimension
Fig. 1 Model representing the five sub-categories and their relationship to the latent factor AI competence

Although the relative fit indices improved, the new model does not meet the criteria of a good fit concerning the CFI and the TLI. Further analysis of the items regarding their semantic and topical alignment hinted toward possible interrelationships between the PF and the AT dimensions. Removing the AT dimension from the model resulted in the model shown in Table 6.

Table 6 Results of the Confirmatory Factor Analysis after the Removal of the PF and AT Dimensions

The final model meets the criteria of a good fit concerning the CFI and the TLI, as well as the RMSEA (see Fig. 2).

Fig. 2 Model representing the four sub-categories and their relationship to the latent factor AI competence

The results of the single-factor analysis underline the assumption that perceived AI competence consists of multiple constructs. The results align with previous research on AI competencies (Caena & Redecker, 2019; Huang, 2021; Kim & Kim, 2022; Laupichler et al., 2023; Sanusi et al., 2022). Most importantly, the results suggest that professionalization and attitudes might not be directly connected to perceived AI competence.

Discussion and conclusion

The theoretical assumptions about teachers' perceived AI competence underline the construct's multi-dimensionality. The evaluation of the instrument emphasized the modularity of perceived AI competence.

Teachers have to be able to implement AI in the classroom and in organizational processes (Attwell et al., 2020; Gupta & Bhaskar, 2020). Their pedagogical practice has to be backed up by theoretical knowledge about the functionality of AI, as well as legal requirements and ethical considerations (Massmann & Hofstetter, 2020; Schmid et al., 2021). However, AI theory and tools are rarely part of teacher education and professional development programs (Vazhayil et al., 2019). Improving competencies and knowledge about AI, as well as implementing them in practice, therefore depends on teachers' attitudes towards AI.

The findings of this study uncover specific issues for further examination. The developed instrument (see Appendix) can collect evidence about teachers' perceived AI competence. As described in the method section, some items need to be improved to enable more valid data collection. Furthermore, the two categories, Professionalization (PF) and Attitude (AT), do not fit into the model structure in their current form. The PF dimension may be influenced by the lack of training possibilities in the field of AI (Caena & Redecker, 2019; Seufert et al., 2021). Teachers' interest in professional development, which has been assessed in the PF dimension, can therefore be interpreted in two ways. Teachers might already possess AI competence: they know about the importance of AI, want to educate themselves further, and have a high interest in professional development opportunities. A high score on the PF scale would then reflect an AI-competent teacher.

On the other hand, teachers might not be proficient in dealing with AI in the context of education. These teachers are interested in professional development to close their existing knowledge gaps. A high score on the PF scale would then reflect a teacher with a low perceived AI competence level. Other items in the PF scale are also influenced by this contradiction, such as participation in professional networks.

The same assumptions hold for parts of the AT scale, which led to its removal from the final model. A positive attitude towards AI in education might stem from a non-critical reflection on the possible risks and opportunities of the underlying technology. While this results in high scores in the AT dimension, it would not be considered a high AI competence. On the other hand, a more critically informed approach towards AI could lead to lower scores on the AT scale but should be interpreted as a higher AI competence. These possible conflicts align with research such as the work of Blömeke et al. (2015) and Shavelson (2013). As both the PF and the AT dimensions are relevant for a holistic model of AI competence (Knoth et al., 2024; Sperling et al., 2024), further research needs to be conducted on how these dimensions can be incorporated into a valid scale. Possible solutions might be found in a highly context-specific scale for ongoing professionalization and a stronger compartmentalization of attitudes. Both approaches stand in contrast to the initial goal of this study to create a manageable scale for the assessment of AI competence, as specifying both the PF and the AT scale in this way would lead to more items for those two dimensions.

The role of professional development

The follow-up review process of the instrument will address these contradictions and create a better distinction between the reasons behind teachers' interest in further training opportunities or professional networks. The findings from the confirmatory factor analysis support this reconsideration of the PF subscale, especially the improvement of the model fit indices after removing the dimension from the model. The PF dimension is an important addition to the existing models by Huang (2021), Kim et al. (2021), and Sanusi et al. (2022), in which the ongoing professionalization of in-service teachers has not yet been considered.

Digital literacy in the field of AI needs to harness networking abilities to provide constant professionalization regarding important topics of the field. The increasing usage of AI technology for teaching and learning requires competent teachers who can identify challenges and opportunities for all stakeholders in vocational schools (Pedro et al., 2019). The risk of students cheating (Oravec, 2022), racial or gender bias in algorithms (Baker & Hawn, 2021), and questions of transparency in coding (Bogina et al., 2022) are some examples that make a sustainable professionalization of teachers necessary. The inclusion of professionalization in the framework fulfills an additional demand: AI competence for teachers should not be viewed as a static set of skills but rather as a sustainable development of competences that adapts to the ongoing changes in the field of AI.

A reliable model for AI measurement

Furthermore, the confirmatory factor analysis and the modification indices hint toward further possibilities for improving the model and the questionnaire (Whittaker, 2012). The removal of items led to an improvement in model fit. In some cases, modification indices suggest moving items to different dimensions. These findings can be traced back to theoretical assumptions and statements of the experts on which the construction of the instrument is based: the experts stated that some items might be allocated to various dimensions due to the multidimensionality of perceived AI competence. Reformulating items and splitting items into multiple items might help to overcome these problems.

Limitations

Various factors limit the presented study. Most importantly, while AI is gaining traction as a research topic in the field of education, the implementation of AI in teaching practices, school development, and teacher training has only just started to gain attention (Attwell et al., 2020). As a result, many of the surveyed teachers might not have been in contact with AI, or at least not frequently. Furthermore, the construction of the questionnaire is based on the limited research findings that exist on the AI competencies of teachers. Although the instrument has been developed with the help of experts from the field of vocational education as a first explorative step toward examining the AI competence of vocational teachers, the findings of the study hint toward room for improvement in the construction of the instrument (Shi et al., 2019; Whittaker, 2012). While the results show that the developed dimensions represent a common factor, this factor might not be AI competence. Firstly, linking the data on participants' perceived AI competence to the results of practical tests and experiments will deliver better insights into the connections of the dimensions to the construct of AI competence. Secondly, tests of AI knowledge and practical AI usage might close the gap between perceived AI competence and actionable AI competence.

The contrast between perceived competence and other competence measurements is another significant limitation. Analyzing the results of knowledge tests or the practical usage of AI technology in teaching and learning processes might yield better, or at least different, results than the assessment of perceived competences. In addition, information about the relationship between attitudes, intended usage, and actual usage could be analyzed (Venkatesh et al., 2003). On the other hand, collecting data with knowledge tests or practical assignments for AI tools is difficult due to the fast-moving development in the field of AI, as well as the lack of clear regulation for the usage of AI in teaching and learning. At the time of the data collection, no AI tool was approved for teaching and learning at German schools, making it impossible to perform a more practical, realistic measurement of AI usage.

With its Teaching and Learning (TL) dimension, the current AICO_edu model is developed explicitly for education-specific AI competencies. After modification, however, the AICO instrument also allows for other context-specific use cases. For instance, AICO_man could include a dimension focusing on management. In this scenario, the Teaching and Learning (TL) scale could be removed from the questionnaire, and a Management (MA) scale could be added. This scale would then consist of items that target the relationship between AI and management ("I can make management decisions based on the results of an AI tool," "I know how AI can be integrated into management processes"). Furthermore, performance tasks could be added to the instrument to increase the validity of the scales and to counteract the problems of self-reported competence data. These performance tasks might include quiz questions or the analysis of sample data.

AI is an emerging field of interest for learning and teaching, which must be further integrated into theory development (Gibson & Ifenthaler, 2024) and teacher education programs. Currently, AI is seldom a part of teacher training programs or continuing teacher education (Roppertz, 2020). The findings of the presented study are therefore being used for the instructional design of two training programs. These programs aim to overcome the current shortcomings in AI training for in-service and pre-service teachers by combining theoretical knowledge about AI with hands-on practical solutions for vocational school practice. The evaluation of these programs can help to identify further dimensions of AI competence and methods to measure the AI competencies of schoolteachers.