1 Introduction

While the theoretical and technological foundations of Artificial Intelligence (AI) have existed since the 1950s, the convergence of various factors, such as increased computational power, the extensive availability of data, and sophisticated machine learning algorithms, has resulted in remarkable advancements. The widespread integration of AI into various aspects of our daily social life raises a pressing question about how to educate citizens and prepare them for a world full of intelligent technological devices. At the same time, it places increasing pressure on teachers, who are expected to not only gain knowledge and understanding of AI but also acquire new competences enabling them to teach about AI, for AI and with AI (Holmes et al., 2022; Kim & Kwon, 2023).

The relationship between digital competence and AI seems indisputable, as digital competence plays a pivotal role in the effective educational use and adoption of digital technologies, including those based on AI (Ng et al., 2023; Polak et al., 2022; Velander et al., 2023). However, empirical studies exploring the interaction between digital competence and AI adoption are, to the best of our knowledge, limited and therefore warrant further research.

Teachers play a pivotal role in the integration of AI in K-12 education (Casal-Otero et al., 2023) and are recognized as the largest user group of AI technology (Crompton et al., 2022). They are crucial to the successful adoption of AI systems and the realization of the potential benefits of digital data in education (European Commission, 2022). Nevertheless, the topic of AI is relatively new to both students and teachers (Lindner & Berges, 2020), and as a result, teachers’ attitudes may oscillate between complete resistance and overconfidence (Casal-Otero et al., 2023). Such attitudes have a significant impact on the effectiveness of AI integration in education: if teachers perceive AI technology as detrimental to learning, they may either refrain from adopting it or use it in a negative way (Crompton et al., 2022).

The current situation reveals that teachers have limited knowledge and understanding of what AI is and, consequently, of how it could be used in education (Chounta et al., 2022; Hrastinski et al., 2019; Ng et al., 2023). Moreover, they express uncertainty about the competences necessary for its effective use (Velander et al., 2023), and empirical evidence on their AI-related knowledge is scarce (Chounta et al., 2022). Furthermore, some AI applications may lack transparency, which leads to concerns about accountability and trust (European Commission, 2022). Trust, as is well reported in the literature, plays a pivotal role in the adoption of any technology (Nazaretsky et al., 2021; Qin et al., 2020), but few studies have explored K-12 teachers’ trust in integrating AI-based technologies into their teaching practices (Ayanwale et al., 2022; Kim & Kwon, 2023; Nazaretsky et al., 2021, 2022; Su et al., 2023). In addition, the interplay between trust in AI and other variables is currently understudied. Against this background, the present study seeks to identify the variables that may influence K-12 teachers’ trust in AI and to provide practical implications for teacher preparation and in-service professional development regarding AI and the implementation of AI tools and processes in K-12 education. The research aims to answer the following questions:

RQ1: What is the relation between trust in AI, knowledge of AI and digital competence?

RQ2: What is the relation between trust in AI, age, sex, teaching experience and International Standard Classification of Education (ISCED) levels?

1.1 Teachers’ digital competence and artificial intelligence in education

Teachers’ digital competence can be broadly defined as the individual teacher’s proficiency in the critical and reflective use of digital technologies in the various dimensions of their profession, with the pedagogical dimension being a core aspect. Frameworks such as TPACK, the UNESCO ICT Competency Framework or DigCompEdu, to mention just a few of the most widely adopted, guide educators on the use of digital technology to improve teaching and learning.

Around the world, studies looking at teachers’ digital competence point to low levels and identify gaps to be addressed (Cabero-Almenara et al., 2022; Diz-Otero et al., 2023; Lucas et al., 2021; Runge et al., 2023; Tomczyk et al., 2023; Tzafilkou et al., 2023). In response, and further prompted by the challenges brought by the pandemic, international organizations and national governments have launched measures and initiatives with the main goal of promoting the development of teachers’ digital competences (e.g., Council of Ministers of Portugal, 2020; European Commission, 2020; Estonian Education and Research Ministry, 2021). However, despite these efforts to enhance teachers’ digital competences, questions arise as to whether such initiatives adequately prepare teachers for an education environment infused with AI technology.

A recent global survey, involving more than 450 schools and universities, uncovered a concerning trend: fewer than 10% of these institutions have established institutional policies or formal guidance regarding the use of AI applications (UNESCO, 2023a). Furthermore, the survey highlighted slow progress in equipping teachers with the necessary AI-related digital competence. To address this gap, recommendations suggest providing teachers and students with guidance and support in understanding the emerging changes that accompany AI (Holmes et al., 2022; UNESCO, 2023b). This need is emphasized by various studies that underscore teachers’ limited understanding of AI and, consequently, of how it could be used in education (Hrastinski et al., 2019; Ng et al., 2023; Velander et al., 2023).

Some authors argue that AI-related knowledge, skills, and attitudes can build on and expand existing digital competences (European Commission, 2022; Heck et al., 2021; Ng et al., 2023; Vuorikari et al., 2022). The relation between the two is therefore intricate, suggesting that digital competence is a fundamental prerequisite for effective AI adoption (Ng et al., 2023; Velander et al., 2023). Nevertheless, although there is a substantial amount of literature on the relation between teachers’ digital competence and their attitudes of trust regarding the integration of digital technologies into their teaching practices (Hatlevik, 2017; Tzafilkou et al., 2023), there is a research gap when it comes to exploring the relation between digital competence, knowledge of AI and trust in AI adoption.

1.2 Teachers’ trust in AI

As mentioned above, few studies have explored K-12 teachers’ trust in integrating AI-based technologies into their teaching practices (Ayanwale et al., 2022; Kim & Kwon, 2023; Nazaretsky et al., 2021, 2022; Su et al., 2023). Understanding the factors that affect trust and reduce distrust is critical, particularly given the crucial role that teachers play in decision-making processes regarding AI in education (Nazaretsky et al., 2021). In addition, doing so may help inform policy-making within the field.

Qin et al. (2020) conducted a user-focused study on trust in AI-based educational systems and identified several influential factors. These included technological elements (such as system functionality and interpretability), contextual factors (including data management and teacher competence), and personal factors (such as cognitive ability and teacher interaction preferences). Crockett et al. (2020) compared perspectives on AI between the general public and individuals who had studied computer science at tertiary level. Their findings indicate that individuals with specific education in computer science had a significantly higher perception of risk associated with AI applications than the general public. This contrast highlights the critical role of education in improving public trust and understanding of AI.

Recently, Nazaretsky et al. (2021) have made three contributions to this area. Initially, they investigated the interactive attitudes of teachers towards AI technologies, highlighting teachers’ reluctance to accept AI-based recommendations that clash with their pre-existing knowledge. This study presented preliminary findings emphasizing the crucial significance of gaining a more profound comprehension of teacher-AI interactions for more effective use of AI educational technologies in K-12 education. Following this, they proposed a new tool to measure teachers’ trust in AI educational technology and provided evidence of its validity. Through exploratory factor analysis, they identified eight key factors that impact teachers’ trust in AI (Nazaretsky et al., 2022). They found that teachers were positive about the benefits of AI-based educational technology and confident in their ability to implement it effectively in the classroom. However, teachers expressed concerns about the need for significant pedagogical changes when adopting this technology, which could potentially increase their workload. In addition, teachers appreciated methods aimed at increasing trust in AI and did not express much concern about the prospect of being replaced by AI. They also found that teachers had more trust in guidance provided by human experts or peer educators than by AI. Teachers’ concerns related to the perceived lack of transparency in how AI formulates decisions, and to the belief that one of the main limitations of AI in the context of K-12 education is its lack of human qualities such as emotion and intuition. Finally, to promote teachers’ trust in AI, they presented a professional development plan for teachers focused on expanding their theoretical and practical knowledge of AI in the educational field. This research also highlighted the importance of increasing teachers’ familiarity with AI to improve trust (Nazaretsky et al., 2022).

Ayanwale et al. (2022) studied the factors affecting teachers’ behavioral intention and readiness to teach AI. They found that anxiety was not a significant predictor of behavioral attitude, whereas the extent to which teachers recognize the value of AI significantly influenced it. Other factors affecting teachers’ intention and readiness pertained to attitude, confidence, and relevance. The authors concluded that knowledge of AI is essential to build confidence and trust.

Chounta et al. (2022) found no correlation between professional experience and teachers’ opinions regarding the use of AI in education. Almost half of the teachers participating in their study had limited knowledge of AI but exhibited positive attitudes towards incorporating it in education. Teachers considered that AI could assist them in designing learning activities and reviewing homework assignments. Conversely, they highlighted their apprehensions regarding the effort they would need to invest in mastering the proper use of AI technologies, as well as potential trust issues. Trust was also found to be a crucial factor affecting the level of adoption of AI in the study by Choi et al. (2023).

Previous studies have attempted to explain what may affect teachers’ trust in AI. However, most of this research has concentrated on factors such as the effectiveness and comprehensibility of AI tools, changes in teaching approaches, and the transparency of these tools. There is limited research exploring possible differences in trust in AI based on other variables. Wang et al. (2023) found that teachers of different genders exhibited no significant differences in AI readiness. When studying differences related to the age, sex and years of experience of teachers evaluating a summary generated by ChatGPT, Vázquez-Cano et al. (2023) found that results were independent of these variables. Both studies stress the need for further research that can produce comparable results and enlarge the current body of knowledge.

2 Materials and methods

2.1 The instrument

This study used a quantitative approach, with an instrument developed by Nazaretsky et al. (2022) to investigate teachers’ trust in AI (TAI) and the factors influencing it. The instrument consists of 24 items across eight factors: Self-efficacy (F1), AI-based vs. Human advice/recommendation (F2), Anxieties related to using AI (F3), AI lack of human characteristics (F4), Perceived benefits of AI (F5), Preferred means to increase Trust in AI (F6), AI perceived lack of transparency (F7) and Required shift in pedagogy to adopt AI (F8). Respondents are asked to indicate their position on each item using a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). In addition, the instrument included three other sections: one to collect demographic information (age, sex, teaching experience and ISCED level taught); one to collect participants’ perceptions of their knowledge of AI (KAI); and one to collect their self-assessed digital competence (DC) proficiency level. KAI was measured using a six-point scale proposed by Chounta et al. (2022) with the following options: 1-I have never heard of AI; 2-Not sure what AI is; 3-I have limited knowledge about AI; 4-I know what AI is; 5-I know a lot about AI; and 6-I am an expert in AI. DC was measured using the DigCompEdu proficiency scale (Redecker, 2017), which includes the following levels: A1-Newcomer, A2-Explorer, B1-Integrator, B2-Expert, C1-Leader and C2-Pioneer. Participants were familiar with this proficiency scale and the meaning of each label, as DigCompEdu was adopted as a foundational document for the national strategic plan for the digital transition in education approved by the Portuguese government in 2020. Within this plan, 92% of Portuguese teachers self-assessed their digital competence, which was then translated into this same scale (Lucas & Bem-Haja, 2021) and used to allocate teachers to training programs (currently ongoing).
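
To make the coding of these scales concrete, the following minimal R sketch shows one way the two self-report scales could be encoded as ordered variables (R being one of the tools later used for the analyses); the data frame and column names are hypothetical and not taken from the authors’ materials.

```r
# Minimal sketch (not the authors' code): encoding the two self-report scales
# as ordered factors in R; data frame and column names are hypothetical.
resp <- data.frame(
  kai = c(3, 4, 5),          # 1 = "never heard of AI" ... 6 = "expert in AI"
  dc  = c("B1", "A2", "C1")  # DigCompEdu proficiency labels
)

resp$kai <- factor(resp$kai, levels = 1:6, ordered = TRUE)
resp$dc  <- factor(resp$dc, levels = c("A1", "A2", "B1", "B2", "C1", "C2"),
                   ordered = TRUE)

# Numeric versions (1-6) can then be analysed alongside the 24 Likert items (1-5)
resp$kai_num <- as.integer(resp$kai)
resp$dc_num  <- as.integer(resp$dc)
```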

2.1.1 The translation of the instrument

To ensure the content validity of the Portuguese version of the questionnaire, we followed Brislin’s translation model (1986) and MAPI’s (2005) guidelines. Initially, two proficient Portuguese native speakers, fluent in English, translated the instrument. Subsequently, an independent translator, fluent in Portuguese but with English as their native language, back-translated the instrument. The back-translation was thoroughly reviewed and compared with the original version. Disparities were examined by the original translators and discussed with an expert panel. A unified version was agreed upon and tested with six teachers in a pilot study. Feedback from the pilot study was reviewed, and minor adjustments, such as normalizing expressions in Portuguese, were made.

2.2 Sample and procedure

The cross-sectional study surveyed a sample of 211 teachers from primary and secondary schools in Portugal, who were enrolled in a MOOC covering topics related to digital assessment. The demographic data of the sample are shown in Table 1.

Table 1 Demographic characteristics of the participants

As part of one of the MOOC learning modules, in which the topics of datafication and AI were addressed, teachers were invited to test their attitudes towards the adoption of AI-based tools. The intention of the survey was fully disclosed. Participation was voluntary and anonymous, and informed consent was obtained.

2.3 Factorial validity and reliability

The Mardia tests of multivariate skewness and kurtosis were conducted on a dataset consisting of 211 observations across 24 variables. The results revealed substantial departures from normality: the test indicated a multivariate skewness of 4669.72 (p ≤ 2.1e−121) and a kurtosis of 20.2 (p ≈ 0). Additionally, a small-sample skewness of 4741.48 was observed (p ≤ 2.2e−128).
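
A minimal R sketch of this step, assuming the 24 item responses sit in a data frame named items (a hypothetical name) and using the psych package; this is illustrative and not the authors’ script.

```r
# Minimal sketch: Mardia's multivariate skewness and kurtosis with the psych
# package. 'items' is a hypothetical 211 x 24 data frame of item responses.
library(psych)

res <- mardia(items, plot = FALSE)
print(res)  # prints multivariate skewness, small-sample skewness and kurtosis with p-values
```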

Given the non-normal distribution of the data, the weighted least squares mean and variance adjusted (WLSMV) estimator was selected within the structural equation modelling (SEM) framework for the confirmatory factor analysis (CFA). The results indicate that the comparative fit index (CFI) and the Tucker–Lewis index (TLI) exhibit values of 0.998 and 0.997, respectively, demonstrating an exceptionally high level of model fit with the observed data. The robust root mean square error of approximation (RMSEA) stands at 0.051, falling within the range indicative of a favorable fit, with a 90% confidence interval spanning from 0.040 to 0.062. Furthermore, the standardized root mean square residual (SRMR) is 0.067, further underscoring the model’s good fit.
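
As an illustration, the sketch below specifies a CFA of this kind in lavaan with the WLSMV estimator and ordered (Likert) indicators, following the item-to-factor assignments in Table 2; the item column names are assumptions, and the two single-item factors (F7: Q12, F8: Q19) are omitted because single-indicator latent variables require additional identification constraints.

```r
# Minimal sketch (not the authors' exact specification): CFA with WLSMV in lavaan,
# treating the Likert items as ordered categorical variables.
library(lavaan)

model <- '
  F1 =~ Q16 + Q17 + Q18                    # Self-efficacy
  F2 =~ Q23 + Q24                          # AI-based vs. human advice/recommendation
  F3 =~ Q13 + Q14 + Q15                    # Anxieties related to using AI
  F4 =~ Q8 + Q9 + Q10 + Q11                # AI lack of human characteristics
  F5 =~ Q1 + Q2 + Q3 + Q4 + Q5 + Q6 + Q7   # Perceived benefits of AI
  F6 =~ Q20 + Q21 + Q22                    # Preferred means to increase trust in AI
'

fit <- cfa(model, data = items, estimator = "WLSMV", ordered = TRUE)
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))  # scaled/robust variants can also be requested
standardizedSolution(fit)                           # standardized loadings, as reported in Table 2
```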

Table 2 shows the eight latent factors (F1-F8) linked to their observed indicators (Q16-Q18, Q23-Q24, Q13-Q15, Q8-Q11, Q1-Q7, Q20-Q22, Q12, and Q19). All factor loadings are statistically significant and surpass 0.4, except for item Q14. This meets the threshold for factor loadings for a sample size greater than 200 (Hair et al., 2019), indicating a sound association between the latent factors and the observed variables. Given the low loading of item Q14, the results for F3 should be read with caution.

Based on the reliability coefficients presented, F1, F2, F5 and F6 demonstrate a notably high level of reliability, as evidenced by their Cronbach’s alpha coefficients, all of which exceed 0.8. F3 and F4 also exhibit alpha coefficients surpassing the threshold of 0.6 (Robinson et al., 1991), indicating acceptable reliability for the purpose of exploratory research. These results indicate strong internal consistency for these factors.
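
A minimal R sketch of the reliability step, computing Cronbach’s alpha per multi-item factor with the psych package; the item groupings follow Table 2 and the column names are assumptions.

```r
# Minimal sketch: Cronbach's alpha for each multi-item factor.
# F7 (Q12) and F8 (Q19) are single items, so no alpha is computed for them.
library(psych)

factor_items <- list(
  F1 = c("Q16", "Q17", "Q18"),
  F2 = c("Q23", "Q24"),
  F3 = c("Q13", "Q14", "Q15"),
  F4 = c("Q8", "Q9", "Q10", "Q11"),
  F5 = c("Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7"),
  F6 = c("Q20", "Q21", "Q22")
)

sapply(factor_items, function(cols) psych::alpha(items[, cols])$total$raw_alpha)
```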

Table 2 Psychometric properties of the instrument

To calculate the score for each factor, a weighted average of its constituent items was computed, using the factor loadings as weights. The global TAI score is the sum of the scores of all factors.
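
A minimal sketch of this scoring rule in R, with purely illustrative loadings (not the estimated values): each factor score is a loading-weighted average of its items, and the global TAI score is the row-wise sum of the factor scores.

```r
# Minimal sketch of the scoring rule; the loadings below are hypothetical examples.
loadings <- list(
  F1 = c(Q16 = 0.80, Q17 = 0.75, Q18 = 0.85),
  F2 = c(Q23 = 0.90, Q24 = 0.88)            # ...one named vector per factor
)

score_factor <- function(w, data) {
  drop(as.matrix(data[, names(w)]) %*% w) / sum(w)  # weighted average per respondent
}

factor_scores <- sapply(loadings, score_factor, data = items)
TAI <- rowSums(factor_scores)  # global TAI score as the sum of factor scores
```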

2.4 Data analysis

The analyses were conducted using R and JASP. Multivariate skewness and kurtosis were evaluated using Mardia tests. Afterwards, CFA was used with the WLSMV estimator in the SEM framework to assess the factorial validity of the instrument. The overall fit of the factor model was determined by indices including CFI, TLI, RMSEA, and SRMR. Internal consistency was further examined via a reliability analysis.

A series of general linear regression analyses was carried out to assess the relations between TAI and age, sex, teaching experience and ISCED level. These variables were then integrated into a predictive model to determine whether it could provide a satisfactory level of predictive performance for TAI. To this end, a machine learning regression approach was used, with boosting as the ensemble method and K-fold cross-validation. Network analysis was employed to investigate the relations among the variables TAI, KAI and DC. To determine whether these direct relations persisted while controlling for the impact of other variables, a partial correlation network analysis was conducted. Additionally, three mediation analyses were conducted to explore whether the relation between TAI and DC was mediated by KAI.

3 Results

3.1 Relation between TAI, KAI and DC

A network analysis was performed as a correlational model to assess the relation between the variables TAI, KAI and DC. The resulting network is shown in Fig. 1.

Fig. 1 Network with correlations between TAI, KAI and DC

As shown in Fig. 1, a statistically significant positive correlation was observed among all variables. However, the relation between TAI and DC, although statistically significant, can be characterized as weak (r = 0.17, p = 0.01; see Cohen, 1988). KAI emerges as a central factor within the network, with values above 1 for the betweenness, closeness and expected influence metrics. Given that TAI is assessed using eight different factors, and recognizing the potential for differential effects that could explain the observed result, these individual factors were integrated into the model and the overall TAI score was removed. The results of this detailed analysis are presented in the network diagram on the left-hand side of Fig. 2.
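
For illustration, a correlation network of this kind, together with node centrality indices, could be estimated with the qgraph package as sketched below; the data frame name and the estimation settings are assumptions rather than the authors’ exact choices.

```r
# Minimal sketch: correlation network and node centrality with qgraph.
# 'scores' is a hypothetical data frame holding the TAI, KAI and DC scores.
library(qgraph)

net <- qgraph(cor(scores), graph = "cor", sampleSize = nrow(scores),
              layout = "spring", labels = colnames(scores))

centrality_auto(net)  # node centrality indices, including betweenness and closeness
```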

Fig. 2 Left network: Bivariate correlations between the eight factors of TAI, KAI and DC. Right network: Partial correlations between the eight factors of TAI, KAI and DC. Note: F1: Self-efficacy; F2: AI-based vs. Human advice/recommendation; F3: Anxieties related to using AI; F4: AI lack of human characteristics; F5: Perceived benefits of AI; F6: Preferred means to increase Trust in AI; F7: AI perceived lack of transparency; F8: Required shift in pedagogy to adopt AI

The results derived from the left-hand side network in Fig. 2 show that F1 and F6 are associated with DC and highlight the existence of a significant relation between DC and TAI. Interestingly, the robust correlation observed in Fig. 1 between KAI and TAI appears to be supported by F1, F2, F6 and F5, while no relation between KAI and F3, F4 and F7 seems to exist. It is important to note that the absence of edges between nodes within the network implies the absence of significant relations.

To verify whether these direct relations persist after controlling for the influence of other variables, a network analysis with partial correlations was carried out (see the right-hand side network in Fig. 2). The results indicate that the relation between KAI and F2 disappears, while a relation between KAI and F3 emerges. However, the most striking result is that, when the influence of other variables is controlled for, the relations between DC and F1 and F6 disappear, whereas the strong relation between DC and KAI persists. This result suggests that the previously observed relation between TAI factors and DC appears to be mediated by KAI. To validate this assumption, three mediation analyses were performed: one for the overall TAI score and the other two for factors F1 and F6. The analysis assessing whether KAI mediates the relation between DC and TAI (overall) is shown in Fig. 3.
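
A minimal qgraph sketch of the partial correlation variant, under the same assumptions as above; here each edge reflects the association between two nodes after conditioning on all remaining nodes.

```r
# Minimal sketch: partial correlation network over the eight TAI factor scores,
# KAI and DC. 'factor_data' is a hypothetical data frame with these ten columns.
library(qgraph)

net_pcor <- qgraph(cor(factor_data), graph = "pcor",
                   sampleSize = nrow(factor_data),
                   layout = "spring", labels = colnames(factor_data))
```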

Fig. 3 Path diagram of the mediation model with TAI as the dependent variable, DC as the predictor and KAI as the mediator

The analysis shows that KAI acts as a significant mediator in the apparent relation between DC and TAI. Specifically, the indirect effect (a*b) is 0.648, with a 95% CI ranging from 0.364 to 0.969. In fact, when examining Fig. 3, it becomes clear that, in the absence of KAI, the significant relation between DC and TAI ceases to exist. The results show that 88% of the total effect can be attributed to mediation. Conceptually, this suggests that higher levels of DC are associated with increased levels of KAI, and increased levels of KAI correspond to increased levels of TAI. Similar patterns emerge from the mediation analyses for F1 (a*b = 0.185, 95% CI [0.107, 0.272]) and F6 (a*b = 0.146, 95% CI [0.064, 0.235]). KAI consistently plays a mediating role, meaning that higher DC scores are associated with higher KAI scores, and higher KAI scores are associated with higher levels of the TAI factors F1 and F6. The cumulative results emphasise that KAI is a robust and substantial predictor of TAI.
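
A minimal lavaan sketch of such a simple mediation model (DC → KAI → TAI) with a bootstrapped indirect effect; the variable names and bootstrap settings are assumptions, not the authors’ exact analysis.

```r
# Minimal sketch: mediation DC -> KAI -> TAI with a bootstrapped indirect effect.
# 'scores' is a hypothetical data frame holding the DC, KAI and TAI scores.
library(lavaan)

med_model <- '
  KAI ~ a * DC
  TAI ~ b * KAI + cprime * DC
  indirect := a * b           # mediated (indirect) effect, a*b
  total    := cprime + a * b  # total effect
'

med_fit <- sem(med_model, data = scores, se = "bootstrap", bootstrap = 1000)
parameterEstimates(med_fit, boot.ci.type = "perc")  # 95% percentile CIs for a*b
```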

3.2 Relation between TAI, age, sex, teaching experience and ISCED levels

To assess the relation between TAI and age, sex, teaching experience and ISCED level, four general linear models were carried out, one for each independent variable. The descriptive results of these analyses are shown in Fig. 4.

Fig. 4 Relation between TAI, age, sex, teaching experience and ISCED levels

According to Fig. 4, there appears to be no significant relation between TAI and the four variables analyzed. Inferential results show a lack of significant relation between TAI and age, as indicated by F(1, 210) = 0.186, p = 0.666, η²p = 0.0001. Similarly, no significant relation was found between TAI and sex, as indicated by F(1, 210) = 1.231, p = 0.269, η²p = 0.0006, TAI and teaching experience, as indicated by F(1, 210) = 0.048, p = 0.827, η²p < 0.0001, or TAI and ISCED level, as indicated by F(1, 210) = 0.118, p = 0.889, η²p = 0.0001.
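
A minimal R sketch of one of these models, with partial eta squared obtained from the effectsize package; variable and data frame names are assumptions.

```r
# Minimal sketch: one general linear model per predictor, with partial eta squared.
# 'teachers' is a hypothetical data frame with TAI and the demographic variables.
library(effectsize)

m_age <- aov(TAI ~ age, data = teachers)
summary(m_age)                      # F-test of the kind reported above
eta_squared(m_age, partial = TRUE)  # partial eta squared

# Analogous models: aov(TAI ~ sex, ...), aov(TAI ~ experience, ...), aov(TAI ~ isced, ...)
```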

Although these variables do not show individual relations with TAI, it was worth investigating whether their combined inclusion in a predictive model would give satisfactory predictive performance for TAI. To this end, a machine learning regression approach was adopted, using boosting as the ensemble method and K-fold cross-validation. This analysis yielded poor results, with the most effective model accounting for only 1.6% of the variance in TAI and a mean absolute percentage error (MAPE) of 101.8%, indicating very poor predictive accuracy. Consequently, it can be concluded that TAI is independent of age, sex, teaching experience and ISCED level in this sample of teachers.
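
A comparable boosting workflow with k-fold cross-validation could be sketched in R as below, using caret with a gradient boosting learner; this is an illustrative equivalent rather than the authors’ exact pipeline, and all object names are hypothetical.

```r
# Minimal sketch: gradient boosting with 10-fold cross-validation via caret,
# followed by MAPE computed on the held-out predictions.
library(caret)

set.seed(1)
ctrl <- trainControl(method = "cv", number = 10, savePredictions = "final")
boost_fit <- train(TAI ~ age + sex + experience + isced, data = teachers,
                   method = "gbm", trControl = ctrl, verbose = FALSE)

boost_fit$results  # cross-validated R-squared and error metrics

cv_pred <- boost_fit$pred
mape <- mean(abs((cv_pred$obs - cv_pred$pred) / cv_pred$obs)) * 100
mape
```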

It is important to note that, in contrast to the lack of relation between TAI and these variables, the results show a significant negative relation between DC and age, as indicated by F(1, 210) = 5.031, p = 0.026, η²p = 0.024. There is also evidence of a male advantage in DC scores, as indicated by F(1, 210) = 4.413, p = 0.037, η²p = 0.021. These results are consistent with those typically found in the related literature (Cabero-Almenara et al., 2022; Lucas et al., 2021), which adds robustness to our findings and suggests they do not result from a biased sample.

4 Discussion

The present study sought to identify the variables that may influence K-12 teachers’ TAI with the following research questions:

RQ1: What is the relation between TAI, KAI and DC?

RQ2: What is the relation between TAI, age, sex, teaching experience and ISCED levels?

4.1 What is the relation between TAI, KAI and DC?

The study results show that there is a statistically significant positive correlation between all three variables and that KAI is a robust and substantial predictor of TAI. The relation between KAI and TAI is consistent with the findings of Kaya et al. (2022), who observed that individuals’ self-assessed knowledge levels regarding AI were linked to their attitudes toward it. Those with a better understanding of technological innovations tended to have greater awareness of their practical applications and value, and consequently, a more positive outlook on AI.

However, it becomes clear that in the absence of KAI, the significant relation between DC and TAI ceases to exist. This result supports the perspective of Polak et al. (2022), who argue that DC is a prerequisite for the acquisition of KAI. At the same time, it is noteworthy that teachers with different levels of DC do not show significant differences in their attitudes towards AI. This highlights the inadequacy of traditional definitions of DC to address the rapid advances in AI. As a result, it is recommended that existing DC frameworks be reviewed and updated to incorporate relevant AI concepts (Ng et al., 2023; Polak et al., 2022). Thus, although DC frameworks and related policies have been continually adjusted to incorporate AI-related concepts (e.g., Vuorikari et al., 2022), the absence of a direct relation between TAI and DC indicates that, despite the widespread recognition of the significance and potential of AI education, its full potential remains untapped (Luckin & Holmes, 2016). While current research highlights the importance of teachers having sufficient DC to integrate AI into subject teaching (Casal-Otero et al., 2023; Ng et al., 2023), the challenge lies in the fact that AI literacy has not been sufficiently integrated into the definition of DC (Velander et al., 2023). As highlighted by UNESCO, public policy development for AI in the education sector is still in its early phase, which makes it difficult to identify common components at this initial stage (UNESCO, 2019).

To accurately assess the relations among the eight factors of TAI, KAI, and DC, a network analysis was conducted. Our bivariate correlation results indicate a close relation between KAI and TAI, supported by F1, F2, F6, and F5. In contrast, no relation was observed between KAI and F3, F4, and F7. In Nazaretsky et al. (2022), the eight factors measured were categorized into three domains: perceived benefits of AI in an educational setting (F5), reasons for not trusting AI diagnosis (F3, F4, F7), and working alongside AI to improve pedagogy (F1, F2, F6, F8). These three domains correspond to the three key elements that characterize trust as identified by Vereschak et al. (2021): vulnerability (F3, F4, F7), positive expectations (F5), and attitude (F1, F2, F6, F8). Therefore, from a purely bivariate correlation perspective, KAI is associated with positive expectations (perceived benefits of AI in an educational setting) and attitudes (working alongside AI to improve pedagogy). However, factors related to vulnerability (reasons for not trusting AI diagnosis) do not appear to be associated with KAI.

The connections among F1, F2, F6, and F5 imply that the attitude and positive-expectation elements are intertwined, namely through the element of confidence (Vereschak et al., 2021). Lucas et al. (2021) employed machine learning and multiple linear regression techniques to demonstrate that teachers’ confidence is a predictor of their digital competence. Additionally, different studies (Hatlevik, 2017; Tzafilkou et al., 2023) have demonstrated that confidence and digital competence are positively correlated. Consequently, the relation between KAI and F1, F2, F6, and F5 is well established: confidence in AI is positively correlated with KAI.

Vereschak et al. (2021) suggest that vulnerability is a crucial component in building trust. The intention behind creating this vulnerability is to immerse participants in a state of fragility in which they can realize the importance of their own decisions. Nevertheless, this vulnerability includes uncertain elements: sometimes the likelihood of outcomes can be estimated, while at other times it remains unquantifiable. Clearly, in AI, there is uncertainty due to the unpredictability of the world and the boundaries of human knowledge and capabilities. As a result, teachers are not able to objectively evaluate the suitability of implementing AI (Bentley et al., 2023). This can cause a swing between complete reluctance and excessive confidence (Casal-Otero et al., 2023). Therefore, the relation between KAI and this intricate, multifaceted and unstable vulnerability cannot be adequately gauged by relying solely on bivariate correlations.

To ensure precision and independence in our results, we conducted a partial correlation network analysis to control for the influence of other variables. The analysis revealed a positive correlation between KAI and F3, which pertains to Anxieties Related to Using AI-based EdTech. Increased anxiety towards the integration of AI was associated with elevated levels of KAI. This phenomenon corresponds with the findings reported by Crockett et al. (2020), which indicate that individuals with specific training in computer science, in contrast to the general public, have a higher perception of risk in relation to AI applications.

Describing this phenomenon requires, above all, recognizing that the presence of AI carries potential hazards, which do not necessarily equate to people holding an adverse view or stopping the use of related software (Kaya et al., 2022). Some of these risks and resulting anxieties are similar to past computer anxieties, including fears of job displacement, learning anxiety, and privacy invasion. However, distinct anxieties have also emerged, including fears of artificial consciousness, lack of transparency, and moral transgression, which have yet to be fully explored (Li & Huang, 2020). Concerns arising from emerging risks are distinct from computer anxiety, since all computer ethical standards are based on human norms and computers are merely tools. However, present-day AI systems can act as ethical agents, potentially creating unprecedented ethical dilemmas in human-machine interactions (Li & Huang, 2020). Therefore, an exclusively ethical discussion that lacks the backing of ample AI knowledge is currently insufficient to promote deep understanding and application throughout the entire AI lifecycle (UNESCO, 2022). We thus argue that the idea that heightened computer experience reduces anxiety (Heinssen et al., 1987), which was often valid during the computer era, may not necessarily apply to the current stage of AI development. In the current context, it is crucial to have sufficient knowledge and understanding of AI to be able to identify potential dangers associated with it. Paradoxically, as individuals gain deeper insights into AI, they may face increased levels of uncertainty, which could potentially worsen the anxiety surrounding AI.

Our results are in line with the findings of Nazaretsky et al. (2022), which suggest that the substitution of teachers by AI is not teachers’ foremost concern. Nevertheless, they highlight that this outcome deviates from the conclusions of public opinion research. This viewpoint is supported by Chounta et al. (2022), who indicate that while there is a popular assumption that the implementation of AI in the workplace could lead to human unemployment, this concern seems to be of less importance for K-12 teachers. Nonetheless, according to a European Commission evaluation of the possible influence of AI on teachers’ existing tasks, many of these tasks appear to be amenable to straightforward automation. One possible explanation proposed is that computer technology has advanced to the point where it can perform certain tasks that require a higher level of human cognition, or, in other words, that teachers are currently engaged in relatively mechanistic tasks within the current educational system (Tuomi, 2018).

We compare teachers’ attitudes to societal attitudes not to suggest that teachers should be replaced. As emphasized by the European Commission (2022), AI is not meant to replace teachers, but rather to support them in their work. Nevertheless, it is worth considering the reasons for teachers’ low anxiety: whether it is due to their limited knowledge of AI and its potential risks, or to a full understanding of the learning experiences that AI systems themselves cannot provide, which would alleviate their anxiety.

4.2 What is the relation between TAI, age, sex, teaching experience and ISCED level?

Our data show no relation between teachers’ TAI and factors such as age, sex, teaching experience and ISCED level. This result is consistent with previous studies (Kaya et al., 2022; Vázquez-Cano et al., 2023; Wang et al., 2023), which suggest that age, sex and educational level are not predictors of attitudes towards AI. Moreover, Chounta et al. (2022) highlight the absence of a noteworthy statistical correlation between professional experience and teachers’ opinions regarding the use of AI in education.

Considering that we are in the initial phase of AI in education, when comparing the TAI data with the DC data it is worth reflecting on the early incorporation of computers into education. Previous studies also examined the impact of personal variables such as sex and age on attitudes and anxieties towards computer applications, and the conclusions reached at that time are consistent with the present TAI data. For example, Loyd and Gressard (1984) found that students from various age groups exhibited no significant differences in their attitudes towards computers. Heinssen et al. (1987) highlighted the lack of consistent sex differences in computer anxiety. Moreover, the study conducted by Pope-Davis and Twing (1991) found no sex interaction effect, although men had a slightly stronger tendency towards positive average computer attitudes. Notably, Dyck and Smither (1994) explicitly indicated that, when computer experience was controlled for, no apparent sex differences existed in computer anxiety and computer attitudes.

Therefore, the finding that teachers’ TAI is independent of their age, sex, teaching experience and ISCED level is explicable, as it resembles the initial phases of computer integration into education. As highlighted by UNESCO, despite several countries implementing regulations to assimilate AI into the syllabus, its widespread incorporation in K-12 education has not yet been achieved (UNESCO, 2022). Because AI is still a novel concept, a significant number of teachers have a limited understanding of its implementation (Crompton et al., 2022).

4.3 Limitations and future directions

Based on the previous discussion, it is evident that without sufficient KAI, a substantial connection between DC and TAI cannot be established. In other words, in the context of future research on AI in education, as well as in the definition of public policies for teacher training, enhancing teachers’ KAI levels holds the key potential to improve their TAI. This relation is supported by Qin et al. (2020), who posit that individual factors play a critical role in the development of trust. These factors comprise cognitive understanding of AI, tendencies towards autonomy, teacher interaction preferences, and recognition of the nature of learning. Nazaretsky et al. (2022) also highlight the paramount importance of enhancing teachers’ theoretical and practical knowledge of AI in education to cultivate trust in AI, especially in the context of K-12 education.

However, raising teachers’ KAI levels may imply moving beyond short-term planning and instead envisioning future scenarios to explore potential developments in education (Seufert et al., 2021). It is imperative to recognize that improving KAI may result in an increase in teachers’ anxiety levels about AI, given its positive correlation with F3. Until uncertainties surrounding AI in education are fully clarified and/or diluted through the gradual incorporation of AI into educational routines and practices, this phenomenon should be viewed as an unavoidable reality. As a result, we have to admit the possibility that teachers may hold both positive expectations and anxieties towards AI. In this respect, comprehensive policies and educational initiatives are essential to equip teachers with the knowledge and understanding necessary to navigate the complexities of AI in education effectively.

Nevertheless, this study has some inherent limitations. Firstly, it is challenging to generalize the findings, as the research sample consists solely of Portuguese teachers from primary and secondary schools. Furthermore, the predictive effects of age and teaching experience on TAI may be biased due to the limited number of young teachers in the sample; nevertheless, in Portugal, as in many other European countries, more than half of in-service teachers are aged 50 years or older (OECD, 2019). Future studies involving other samples, including teachers from other educational contexts and younger teachers, would be of interest, as this would facilitate the comparison of findings across different nations and educational settings. Additionally, it is important to note that KAI and DC were each measured with a single self-assessment question, which may lead teachers to overestimate or underestimate their actual levels.

5 Conclusions

The study aimed to provide practical implications for teacher preparation and in-service professional development regarding AI and the implementation of AI tools and processes in K-12 education. The findings reveal no significant relation between TAI and variables such as teachers’ age, sex, teaching experience and ISCED level. A statistically significant positive relation was observed among KAI, TAI and DC. Importantly, KAI acts as a crucial mediator in the relation between DC and TAI: the significant relation between DC and TAI disappears in the absence of KAI. Therefore, despite the ongoing adaptation of DC frameworks and related policies, the absence of a direct relation between TAI and DC suggests that the full potential and importance of AI in K-12 education has yet to be fully realized. In summary, this study highlights the significance of improving teachers’ TAI during the current stage of AI integration in K-12 education. Such improvement can be accomplished by augmenting teachers’ KAI.