Abstract
In the rapidly evolving landscape of education, Artificial Intelligence (AI) has emerged as a transformative tool with the potential to revolutionize teaching and learning processes. However, the successful integration of AI in education depends on the trust and acceptance of teachers. This study addresses a significant gap in research by investigating the trust dynamics of 677 in-service Science, Technology, Engineering, Arts, and Mathematics (STEAM) teachers in Nigeria towards AI-based educational technologies. Employing structural equation modelling for data analysis, our findings reveal that anxiety, preferred methods to increase trust, and perceived benefits significantly influence teachers' trust in AI-based edtech. Notably, the lack of human characteristics in AI does not impact trust among STEAM teachers. Additionally, our study reports a significant gender moderation effect on STEAM teachers' trust in AI. These insights are valuable for educational policymakers and stakeholders aiming to create an inclusive, AI-enriched instructional environment. The results underscore the importance of continuous professional development programs for STEAM teachers, emphasizing hands-on experiences to build and sustain confidence in integrating AI tools effectively, thus fostering trust in the transformative potentials of AI in STEAM education.
1 Introduction
Technologies used in education refer to those concepts, tools, innovations, and advancements applied for various purposes in educational settings to support teachers’ duties while also helping students learn effectively and improve their achievement [1]. Recently, the world has witnessed an advancing array of modern and emerging educational technologies that are fast gaining attention worldwide. One of these emerging technologies is artificial intelligence (AI). AI has been conceptualized as a machine's capacity to think and behave like a human, or, put differently, as computerized systems programmed to imitate humans and behave in humanlike ways [2]. AI, one of the vital driving forces of the 21st century, is rapidly bringing transformational change to almost all human endeavours, including education. While AI already has several applications in education, its full potential is yet to be harnessed in instructional settings, unlike in the business domain [3, 4]. Given the speed at which AI is advancing and its remarkable possibilities in education, it is reasonable to expect that it will significantly impact the education sector within a few years [5, 6].
As AI rapidly advances, it finds numerous applications in the field of education. Researchers, particularly those in instructional design and computer science, are actively investigating the optimal ways in which AI can support students and teachers [7]. AI-based edtech, for instance, offers the potential for a student-centred approach [8], personalized learning experiences [9, 10], and the ability to identify students' affective and cognitive needs while delivering tailored support in response to these needs [11, 12]. Beyond these possibilities, teachers also benefit from the capability to monitor their students' learning progress [13]. Teacher dashboards, for example, provide real-time notifications to teachers about their students [14, 15]. The integration of AI in education empowers teachers to assess their pedagogies and effectively plan and implement lessons [16, 17]. However, as noted by [18], cited in [19], the role teachers play in incorporating AI-based edtech has been neglected, particularly concerning teachers' trust in the technology.
Trust in technology is a significant predictor of the extent to which teachers rely on technology [20]. Since the interaction (calibration) between teachers’ trust and AI-based edtech may impact the outcomes of technology utilization, there is a need to examine teachers’ trust (TT) in AI-based edtech. This examination is essential for understanding the dynamics involving the trustor (teachers), the referent of trust (AI-based edtech), and the nature of trusting (the risks or vulnerabilities associated with trusting or depending on AI-based edtech) [21, 22]. Given that the trust teachers develop in AI-based edtech plays a vital role in determining the functions of technology in instructional settings, there is a need to scrutinize and model the factors that predict their trust in AI, particularly among STEAM teachers in Nigeria. Our observations reveal a dearth of studies detailing trust in AI edtech among STEAM teachers in Nigeria. Despite the considerable potential of AI-based edtech to revolutionize STEAM education, research investigating STEAM TT in AI is notably scarce in Nigeria. The importance of scientific skills in the contemporary world underscores the need for STEAM teachers to utilize technologies, including emerging ones such as AI, robotics, and augmented reality (AR), in their instructional processes. This is essential to prepare the next generation of students for the lives and jobs of the future, a future predicted to be AI-dominated. Unfortunately, the prevailing situation in the majority of Nigerian secondary schools is the opposite: STEAM teaching and learning rely, in most cases, on conventional methods of teaching. While the government advocates the integration of technology in schools, its implementation under the National Policy on Education faces challenges, including those related to teacher variables [23].
In Nigeria, the effectiveness of STEAM education hinges on the holistic resolution of the challenges bedeviling it in the country [24]. In this context, the role of technology, particularly emerging technologies like AI, cannot be overstated [25].
Numerous studies on AI-based edtech in Nigeria have primarily focused on teachers' perceptions, perceived utility, ease of use, opportunities, advantages, and challenges associated with AI implementation and use [17, 26,27,28,29]. Moreover, the majority of these studies have centred on teachers in a general sense, neglecting the specific context of STEAM teachers. While established theories like the Technology Acceptance Model (TAM) [30] and the Theory of Academic Resistance [30] have been employed to underpin the factors influencing teachers' acceptance and adoption of new technology, none of these theories has explored TT in AI-based edtech or the distinctive characteristics of such technologies within an educational framework [31]. Although TAM and the Theory of Reasoned Action (TRA) possess significant behavioural components that accurately predict the intention to accept or use technology, their explanatory power is limited, lacking consideration for additional factors impacting users' trust in technology [32,33,34,35]. This study builds upon the TAM as its theoretical foundation while extending its constructs by incorporating external factors such as AI anxieties, the absence of human characteristics in AI, preferred strategies to enhance trust in AI, and the level of technology literacy. We also introduced gender as a moderating factor to enhance the robustness of the research [36, 37]. Our contribution extends to empirical insights into AI in Nigeria by investigating the relationship between STEAM TT in AI-based edtech and the extended external constructs explored in this study, an aspect often overlooked by previous research. To the best of our knowledge, this study likely represents the first empirical endeavour to illuminate the connections among the constructs considered.
Structurally, this paper begins with an introduction, followed by a comprehensive literature review and hypothesis development, methods, results, discussion, conclusion, and limitations and future work.
2 Artificial intelligence in education
AI, one of the emerging technologies of the fourth industrial revolution, is increasingly gaining popularity in education globally, especially in the areas of intelligent tutoring systems, automatic essay scoring, learning analytics, smart assistive technologies, and autonomous pedagogical agents such as teacher bots and robots that support social-emotional development. This has paved the way for the application of intelligent tutoring systems (ITSs), which have been described as the instructional model of the 21st century [38]. As a result, teachers’ professional work is now being challenged on multiple fronts by AI-based tools that can automate pedagogical decision-making and teaching activities in schools [39]. Aside from the threat of job loss through work automation, emerging technologies such as AI-based systems have raised a host of new ethical concerns, including but not limited to data insecurity, racial bias, and issues around trust, which in turn are currently driving cross-sectoral policy development both nationally and internationally, including in education [40].
The primary goal of applying AI to education is to improve students' learning experiences [41], because such systems offer powerful pedagogical tools that can bring about effective instruction [42]. AI tools in education, such as simulation-based methods, virtual and augmented realities, and 3-D technology, help students gain practical and experimental learning experiences in and outside of the classroom [2, 41, 43]. AI-based educational technologies such as robots or cobots working alongside teachers are used to teach students routine tasks such as spelling and pronunciation [41, 44, 45]. Beyond instructional settings, AI-based educational technologies are also being used in school administration [46]. Additionally, AI-based educational tools can assist teachers in personalizing instruction for their learners and give isolated children and children with disabilities access to better and more efficient learning opportunities [47, 48]. Research has demonstrated that AI-based educational tools can offer personalized training for students in dynamic and sophisticated learning environments [49]. While quality education requires the active involvement of human teachers, AI-based educational technologies promise additional quality support at all levels of education [50].
2.1 Teachers’ trust (TT) in educational technologies
Teachers’ perspectives on the adoption of AI in education, especially in teaching and learning, are vital since they are the direct stakeholders in charge of bringing AI-based educational technologies into the classroom [51]. This confirms the need to examine TT and the factors responsible for predicting it. In AI-assisted decision-making and studies in educational contexts, trust is rarely defined [52]. While trust in human beings generally increases over time through frequent interactions, the reverse is the case with technologies, where constant errors and malfunctions decrease trust over time [53]. In the case of AI systems, however, the opposite may also be true [54], since direct interaction may cause an initially low degree of trust to rise [55]. For AI to maintain its social license, especially in the context of education, the question of trust is crucial. The AI High-Level Expert Group (AI HLEG) of the European Commission argues that if an AI-based system does not demonstrate to users that it is trustworthy, its widespread acceptance and adoption will be seriously hampered, and its numerous potential advantages will go unrealized [56]. While trust remains important for a variety of technology adoptions [57], AI also presents a variety of qualitatively different trust concerns in comparison to previous technologies [58]. In the case of trust in technology, there is neither volition nor moral agency. Therefore, trust in technologies is based on beliefs about the features of the technologies rather than on will or motives, as technologies have none [59].
Studies that itemize the factors contributing to teachers’ adoption of technologies, especially those predicting their trust in emerging technologies such as AI, are very scarce. Most studies have centred mainly on factors such as experience [60], teachers’ readiness [61], schools’ technology policies [62], pressure to use technology [63, 64], and school variables [65], among others. Because teachers are critical stakeholders in the integration and use of technologies, especially AI-based educational technologies, in an emerging world [66], the current scarcity of studies on the factors predicting TT in AI-based educational technologies in science (STEAM) education in the Nigerian context is a concern.
Given the crucial role that trust plays in technology adoption and use, it is necessary to understand critically what elements influence TT in AI-based educational technologies. Trust is a key predictor of the desire to embrace AI-based systems [67]. An examination of TT in AI is therefore crucial because it can inspire the creation of pertinent policies and also result in regulatory actions with potentially grave repercussions. The findings of this kind of study would help policy and decision-makers in education, especially in STEAM education, craft and implement pertinent policies for the adoption and use of AI-based educational technologies [68, 69]. This study, therefore, explored STEAM TT in AI-based educational technologies using a structural equation modelling approach.
3 Literature review and hypotheses development
Structural equation modelling, according to [70], is useful when complex datasets are being analyzed and when the direct and indirect relationships between variables are being examined. It is also useful for identifying causes or consequences existing among individual variables or groups of variables [71]. This study arises from the need to investigate the relationships between STEAM TT in AI-based educational systems and AI anxieties (AN), perceived benefits (PB), AI’s lack of human characteristics (LC), preferred methods to increase AI trust (PI), level of technology literacy (ICT skill), and the moderating effect of gender. The conceptual model is shown in Fig. 1.
3.1 AI anxieties
Issues with computer usage, a lack of proficiency with new technologies, and protracted technology use, to mention a few, have all been associated with technology-related anxiety [72]. Teachers’ use of diverse technological tools in the educational environment may lead to anxiety, which in turn leads to frustration and confusion, the consequences of which are noticeable during classroom interactions [73]. The issues surrounding the adoption and utilization of AI-based educational technologies and the novelty of the technology may generate feelings of fear and apprehension among teachers. This further conditions their views of the complexity related to the use of the technology [74].
Anxiety towards the adoption and use of AI-based technology can occur due to the confused attitude of teachers toward technological improvements, confusion around technology autonomy, and ignorance relating to the socio-technicalities of the technology [75]. Therefore, anxieties related to the adoption and use, or trust in AI-based educational technologies can be expressed as apprehension or panic nervousness arising from the unknown directions of AI-based technology developments [75]. High levels of technology anxiety are associated with negative attitudes toward technology, whereas positive experiences with technology use are associated with extremely positive attitudes toward technology [76,77,78]. A high level of technology anxiety may result in trouble using technology, according to research that has linked technology anxiety to actual technology use [73]. Additionally, [79] and [80] have documented a direct correlation between technological anxiety and a number of other factors, such as age, frequency of technology use, past experiences using technology, and neuroticism. Concerning trust in AI-based systems in education, the study hypothesized that:
H1. AI anxieties predict TT in AI-based educational technologies.
3.2 Perceived benefits
According to [81], for teachers to utilize the potential of AI-based educational technologies, they must be aware of the instructional contributions of the technology. In other words, the level of belief or trust a teacher has that technology will increase their performance and drastically reduce their effort refers to perceived usefulness or benefits [82]. This concept was made popular by [83], who claimed that perceived advantages relate to how strongly users of technology feel that utilizing it will improve their ability to execute their jobs. The use of AI in educational activities entails potential benefits, and teachers’ awareness of these benefits may affect their perceptions of the usefulness of AI in education [74, 84]. This position is corroborated by [85], who note that AI-based systems can be effectively adopted in education when teachers are sufficiently aware of their pedagogical benefits and are knowledgeable enough to use them. In other words, the more teachers are aware of the benefits of using AI-based systems in education, the more they will use the systems to improve motivation and engagement among their students [86]. In the same vein, [87] report that teachers who are knowledgeable about the use of AI-based systems in education are better positioned to select relevant AI-based systems for instructional purposes; hence, teachers’ knowledge of the role of technology is proportional to the successful integration of technology in an educational setting [88]. Based on this literature, we propose that:
H2. Perceived benefits of AI predict TT in AI-based educational technologies.
3.3 Lack of human characteristics by AI
According to [31], humans possess several distinctive qualities, such as the capacity for perception, emotion, and cognition, which AI cannot replicate [89]. Humans can maintain numerous conflicting mentalities at once because they have free will, consciousness, and emotions that occasionally entail irrational conflicts. This is not true for machines, whose mental processes are limited to logical progression. Although AI algorithms have developed to mimic human behaviour, it remains challenging for machines, at least in the short term, to replicate human features [90]. Despite the huge potential of AI-based educational technologies in instructional settings, there are also ethical issues surrounding the validity of the decisions taken by such systems [91, 92], such as racial and cultural discrimination [93, 94] and concerns related to fairness [95]. These, among others, have created challenges for educators in understanding the rationales underpinning AI-based systems’ decisions [96]. These situations arise from LC. Therefore, it becomes crucial for people working with sophisticated AI-based systems to create accurate mental models of how these systems' various cognitive capabilities relate to human cognition. As AI becomes more autonomous, the human versus machine conflict is likely to take on a new shape [97]. Consequently, this can aggravate teachers' resistance to or distrust of AI-based educational technologies. Given the literature, we hypothesize that:
H3. AI’s lack of human characteristics predicts TT in AI-based educational technologies.
3.4 Preferred means to increase trust (PI) in AI
Teachers play significant roles in preparing the next generation of students, especially for AI [98]; therefore, many of the reasons for training teachers to have a working knowledge of AI in education are congruent with those recommended for preparing them in digital skills [99]. A crucial element closely related to integrating technology in the classroom is teachers' professional development. This is because teachers' knowledge of their area of expertise and their understanding of how to effectively incorporate technology to support students' learning and achievement work together to raise their level of technological knowledge, their confidence in it, and their attitudes toward it [100]. The amount of instruction teachers receive in using technology directly relates to how well it is incorporated into the classroom: ongoing professional development programs for teachers and ongoing support for effective practice are among the top factors in the successful implementation of technology in education [100]. Consequently, technology-related training fosters teachers' recognition of the roles played by emerging technologies in students' learning [101]. In this regard, emerging evidence suggests that training programmes and interventions are germane to mitigating AI-related biases and therefore help improve decision-making processes [102, 103] as users' knowledge of AI increases. Training teachers to work with data provided by AI-based educational technologies helps to improve their trust in making pedagogical decisions based on data [104]. Given the foregoing, we propose that:
H4: Preferred means to increase trust in AI-based edtech predict TT in AI-based educational technologies.
3.5 Level of technology literacy
Technology competence or literacy is conceptualized as the ability to effectively use diverse technologies for various objectives [62]. Teachers’ technology literacy or competence is an important predictor of technology integration in instructional settings [105]. This is because teachers’ understanding of educational technologies, and of how best to blend them with field knowledge for productivity, is very important [106]. Teachers’ technological competencies are also expected to be high for them to use technologies beneficially in instructional environments [107]. Skill level and experience can positively influence technology use [108], and this has been verified empirically [109, 110]. According to [111], the Internet, especially its educational tools, is expanding at an exponential rate, and the abilities needed to thrive in technology-based societies are likely to overlap with those needed to study in technologically enhanced classrooms. There is evidence that most educators who expressed negative or indifferent opinions about incorporating technology into their teaching practices lack the necessary background knowledge and expertise to make "informed decisions" about technology integration [105]. Studies have also shown that while there is high usage of technology among teachers, their level of integration of educational technologies in their classrooms is lower [112,113,114]. Given the reviewed literature, we propose that:
H5: Level of technology literacy predicts TT in AI-based educational technologies.
3.6 Moderating effect of gender
Gender has been reported to influence teachers’ beliefs and behaviours [115, 116]. Several studies that examined teachers’ ICT skills based on gender reported significant differences between male and female teachers, while others did not find any gender-based influence. Regarding the integration of technology in instructional settings, [117] report that male teachers integrate technologies more frequently than their female counterparts. According to several studies [e.g. 117–119], however, gender has not significantly influenced teachers' acceptance, integration, and use of technologies in the classroom. Additionally, studies report that female teachers are more anxious about using technology in their lessons [79, 118,119,120,121]. However, [122] report that female instructors incorporate technology more than male teachers do, and that their perceptions of their proficiency in technology have improved, while those of their male colleagues have remained the same. Regarding the importance of gender as a determining factor in predicting teachers' adoption and integration of technology in teaching and learning, the literature continues to show discrepant conclusions. However, this has not been investigated in terms of STEAM TT in AI. To fill this gap in the literature, we propose the following hypotheses:
H6. Significant differences exist between male and female teachers regarding the relationship between anxieties related to using AI-based edtech and trust in AI-based educational technologies.
H7. Significant differences exist between male and female teachers regarding the relationship between perceived benefits of AI-based edtech and trust in AI-based educational technologies.
H8. Significant differences exist between male and female teachers regarding the relationship between AI-based edtech’s lack of human characteristics and trust in AI-based educational technologies.
H9. Significant differences exist between male and female teachers regarding the relationship between preferred means to increase trust in AI-based edtech and trust in AI-based educational technologies.
H10. Significant differences exist between male and female teachers regarding the relationship between the Level of ICT literacy and trust in AI-based educational technologies.
4 Methods
4.1 Participants
Six hundred and seventy-seven (677) in-service STEAM teachers from three states in Nigeria made up the study's participants. Males make up 74.6% of the sample, while females make up 25.4%. The majority of the STEAM teachers have had at least some training on the value and use of technology in the classroom. Due to the hype around the use of AI in education in Nigeria, the majority of the teachers are aware of the technology, even though a sizable portion have not used it. The survey's aim and objectives were explained to the STEAM teachers, who participated willingly, and they were assured of anonymity for the duration of the study for ethical reasons. Their demographic profiles are further displayed in Table 1.
4.2 Instrument for data collection
The data used in the study were collected across three states in Nigeria using a Google Forms-based questionnaire. The instrument was adapted from [67] to suit the purpose of the study. The constructs adapted are anxieties connected with the use of AI-based edtech (AN, 3 items), perceived benefits (PB, 7 items), LC in AI (LC, 4 items), TT (TT, 3 items), and preferred methods of increasing trust in AI (PI, 3 items). A total of 20 items were adapted from [67]. We developed and included an item on the level of technology literacy (ICT skill) and one on gender (see Table 2 for Cronbach’s alpha, CR—composite reliability, and AVE—average variance extracted). The final instrument, which was divided into two sections, was validated by academic experts in Test and Measurement and Educational Technology. The first section gathered the STEAM teachers' demographic information, including gender, age, subject taught, type of school, location of the school, and degree of ICT literacy. The second section contained the 20 items they responded to, with options ranging from strongly disagree (1) to strongly agree (7) on a Likert scale. The survey was shared via the in-service teachers' schools and other online professional platforms across Nigeria. The Google Forms-based survey was left open for three months (February to April 2023), after which data collection was closed. During the three months that it was open, daily prompts were sent out across the platforms on which the instrument was shared to remind the teachers.
4.3 Data analysis
The study examined the proposed relationships between AI-based educational technology constructs and the moderating effects of gender using structural equation modelling (SEM), a method for evaluating and modifying conceptual models, including the relationships among their variables, simultaneously. Using WarpPLS 7.0 [123] to analyze the data, this study applied the partial least squares–structural equation modelling (PLS–SEM) method to facilitate theory building [125, 126]. As a first step in the analysis, we determined whether the sample size for this study was adequate. Two methods are suggested for estimating the minimum sample size required for a PLS–SEM study: the inverse square root method and the gamma-exponential method [123, 124]. These methods produce estimates similar to those obtained from Monte Carlo simulations. The inverse square root method tends to overestimate the minimum sample size, whereas the gamma-exponential approach provides a more precise estimate [124]. Since the inverse square root method is more conservative and ensures a sufficient power level, researchers are advised to report estimates from both methods [123]. The minimum sample sizes required for this study based on both methods are shown in Fig. 2.
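The inverse square root method admits a simple closed-form sketch: the minimum sample size follows from the standard-normal quantiles for the chosen significance level and power, divided by the smallest path coefficient expected to be significant. The function below is our own illustration (not part of WarpPLS), assuming a one-tailed test:

```python
import math
from statistics import NormalDist

def min_sample_size_isrm(p_min, alpha=0.05, power=0.80):
    """Inverse square root method for the minimum PLS-SEM sample size,
    assuming a one-tailed test at the given alpha and power."""
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    return math.ceil((z / abs(p_min)) ** 2)

# Parameters reported in this study: alpha = 0.05, power = 0.80,
# minimum absolute significant path coefficient = 0.130.
print(min_sample_size_isrm(0.130))  # → 366
```

With the study's parameters this reproduces the minimum of 366 reported below, illustrating why the collected sample of 677 is comfortably adequate.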
The study determined that 677 in-service teachers constituted a suitable and sufficient sample size for the research. This determination was made using the inverse square root and gamma-exponential methods, which estimated minimum required sample sizes of 366 and 353, respectively (see Fig. 3). The significance level (alpha) used in the study was 0.05, the minimum absolute significant path coefficient was set at 0.130, and the desired power level was 0.80. To assess the validity of the measurement model, we employed convergent and discriminant validity indices. Convergent validity was established by evaluating several measures, including the CR, Cronbach's alpha, and Dijkstra's PLSc reliability index, each of which was required to exceed a threshold of 0.70. Additionally, the AVE needed to be greater than 0.50 to indicate convergent validity. Discriminant validity was assessed by examining the square root of the AVE for each latent variable, which should be higher than the correlations between that variable and all other latent variables [125]. Furthermore, cross-loadings and the heterotrait–monotrait ratio of correlations (HTMT) were considered, with HTMT values of less than 0.85 required to establish discriminant validity. We also evaluated the structural model using several criteria. Stone–Geisser's Q2 was employed to determine the predictive relevance of the model, with a value greater than 0 indicating relevance. The significance of paths in the model was assessed using the Stable3 method, which required a test statistic (T) value higher than 1.645 in a one-tailed test. The variance inflation factor (VIF) was examined, and values below 3.3 were considered acceptable. Effect sizes (f2 values) were calculated to measure the impact of the exogenous variables on the endogenous variable, with values of 0.35, 0.15, and 0.02 indicating large, medium, and small effects, respectively [126,127,128].
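The f2 effect-size criterion mentioned above can be illustrated with a short sketch: Cohen's f-squared is the change in R-squared when an exogenous variable is dropped, scaled by the full model's unexplained variance. The R-squared values below are hypothetical, not results from this study:

```python
def cohen_f2(r2_full, r2_reduced):
    """Cohen's f-squared: the change in R-squared when one exogenous
    variable is dropped, scaled by the full model's unexplained variance."""
    return (r2_full - r2_reduced) / (1 - r2_full)

def effect_label(f2):
    # Thresholds cited in the text: 0.35 large, 0.15 medium, 0.02 small.
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

# Hypothetical values: the full model explains 50% of the variance in
# trust; dropping one predictor lowers that to 44%.
f2 = cohen_f2(0.50, 0.44)
print(round(f2, 2), effect_label(f2))  # → 0.12 small
```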
The adjusted R-square coefficient (R2) was used to determine the amount of variance in the endogenous variable explained by the exogenous variables. Finally, a two-stage approach was employed to test for moderating effects. In this approach, factor scores were first calculated and then used to construct interaction (product) terms. This method was chosen over variable orthogonalization and indicator products as the preferred approach for the study [129].
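As a rough illustration of the two-stage approach, the moderation term is simply the element-wise product of standardized stage-one factor scores, which then enters the structural model alongside the main effects. The scores below are hypothetical, not data from this study:

```python
from statistics import mean, pstdev

def standardize(scores):
    """Z-score a list of factor scores."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

def interaction_term(scores_a, scores_b):
    """Stage two of the two-stage approach: the interaction indicator is
    the element-wise product of the standardized factor scores."""
    return [a * b for a, b in zip(standardize(scores_a), standardize(scores_b))]

# Hypothetical stage-one factor scores for one predictor (e.g. anxiety)
# and the gender moderator (0 = female, 1 = male).
anxiety = [2.1, 3.4, 1.8, 4.0, 2.9]
gender = [0, 1, 0, 1, 1]
product = interaction_term(anxiety, gender)
# `product` would enter the structural model alongside the main effects.
```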
5 Results
5.1 Measurement model validity assessment
Before conducting a detailed analysis of the structural relationships, a measurement model assessment was carried out to validate the constructs. In this analysis, we compared the model-implied and empirical indicator correlation matrices using both existing and new indices [123]. The new indices used in the analysis were the standardized root mean squared residual (SRMR), standardized mean absolute residual (SMAR), standardized chi-square (SChS), standardized threshold difference count ratio (STDCR), and standardized threshold difference sum ratio (STDSR). These were complemented by existing indices such as the Tenenhaus GoF (GOF—small > = 0.1, medium > = 0.25, large > = 0.36), Simpson's paradox ratio (SPR—acceptable if > = 0.7, ideally = 1), R-squared contribution ratio (RSCR—acceptable if > = 0.9, ideally = 1), nonlinear bivariate causality direction ratio (NLBCDR—acceptable if > = 0.7), and statistical suppression ratio (SSR—acceptable if > = 0.7). An acceptable fit of the measurement model was indicated by SRMR and SMAR values lower than 0.1. For SChS, a p-value equal to or lower than 0.05 indicated an acceptable fit at the 0.05 level of significance. Acceptable fit was also indicated by STDCR and STDSR values equal to or greater than 0.7. Overall, the model fit and quality indices demonstrated a good fit to the data, as evidenced by the following values: SRMR = 0.09 (less than 0.10), SMAR = 0.08 (less than 0.10), SChS = 8.912 (p < 0.05), STDCR = 0.87 (greater than 0.70), STDSR = 0.73 (greater than 0.70), GOF = 0.47 (greater than 0.36), SPR = 1.00 (greater than 0.70), RSCR = 1.00 (greater than 0.90), NLBCDR = 0.80 (greater than 0.70), and SSR = 1.00 (greater than 0.70). In summary, the measurement model demonstrated an acceptable fit across the various indices, indicating that it adequately captured the relationships between the constructs being studied.
In addition to assessing the validity and reliability of the measurement model, we evaluated the links between the latent variables and their manifest variables. The research model included the following reflective constructs: gender, anxiety when using AI-based education technology (AN), perceived benefits (PB) of AI-based education technology, the lack of human characteristics (LC) in AI-based education technology, preferred methods of increasing trust (PI), level of ICT literacy, and teachers' trust (TT) in AI-based education technology (Fig. 1). These constructs were classified as reflective because of the high correlations among their measurement items. We used AVE, CR, and Cronbach's alpha coefficient to measure scale reliability, and AVE to assess convergent validity. A measurement instrument has convergent validity when respondents understand the question statements (or other measures) associated with each latent variable as intended. The following criteria are recommended for establishing convergent validity: the p-value of each loading should be 0.05 or smaller, the loadings should be equal to or greater than 0.50, and the AVE should be greater than 0.50 for each dimension [125, 128, 130–133]. As Table 1 shows, convergent validity was achieved, with all dimensions having an AVE greater than 0.50 and significant item-to-factor loadings.
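The AVE criterion can be made concrete: for a reflective construct, AVE is the mean of the squared standardized loadings, so it exceeds 0.50 only when the items share more variance with the construct than with measurement error. A minimal sketch with hypothetical loadings (not the study's estimates):

```python
import numpy as np

# Hypothetical standardized loadings for one reflective construct (illustration only)
loadings = np.array([0.72, 0.68, 0.81, 0.65])

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return float(np.mean(loadings ** 2))

# Convergent-validity checks from the text: all loadings >= 0.50 and AVE > 0.50
assert np.all(loadings >= 0.50)
print(round(ave(loadings), 3))  # 0.515, above the 0.50 threshold
```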
Construct reliability was assessed using Cronbach's alpha coefficients and CR. While some evidence suggests that Cronbach's alpha [98, 134–136] may have weak psychometric properties [137], researchers are advised to consider the CR coefficient a more reliable measure [138]. CR coefficients are also referred to as Dillon-Goldstein reliability coefficients and congeneric reliability coefficients [139, 140]. A measurement instrument is reliable when respondents understand the question statements or measures associated with each latent variable in a similar way. According to various researchers, including Fornell and Larcker [125], Hair et al. [127, 130, 131], Kock [133], and Kock and Lynn [141], both the CR and Cronbach's alpha coefficients should exceed 0.7, with the CR coefficient generally considered more precise than Cronbach's alpha [123, 125, 141]. In this study, all CR values exceeded 0.70, indicating good construct reliability [128, 130, 131, 142, 143]. Cronbach's alpha values ranged from 0.700 to 0.840, and Dijkstra's PLSc ranged from 0.639 to 0.926. Table 1 shows that all AVE values were greater than 0.50, while the CR and Cronbach's alpha coefficients were higher than 0.70.
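The two reliability coefficients differ in how they weight the items: CR (the Dillon-Goldstein form) is computed from the standardized loadings as (Σλ)² / ((Σλ)² + Σ(1 − λ²)), whereas Cronbach's alpha assumes equal loadings and works from item variances. A sketch with hypothetical loadings and a toy item matrix:

```python
import numpy as np

# Hypothetical standardized loadings (assumed values for illustration)
loadings = np.array([0.72, 0.68, 0.81, 0.65])

def composite_reliability(loadings):
    """Dillon-Goldstein / congeneric (composite) reliability from loadings."""
    sum_l = loadings.sum()
    error = (1 - loadings ** 2).sum()      # residual variance per item, summed
    return float(sum_l ** 2 / (sum_l ** 2 + error))

def cronbach_alpha(items):
    """Cronbach's alpha from a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return float(k / (k - 1) * (1 - item_vars / total_var))

print(round(composite_reliability(loadings), 3))  # 0.808, above the 0.70 benchmark
```

With perfectly parallel items alpha reaches 1; with unequal loadings, CR is the more faithful summary, matching the preference expressed above.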
Measurement instruments such as question statements are typically used to test discriminant validity. An instrument with good discriminant validity prevents respondents from confusing the measures associated with one latent variable with those for other latent variables. WarpPLS outputs include HTMT ratios and other coefficients [123, 125, 130, 132] that provide useful information for assessing discriminant validity, including correlation coefficients among latent variables, square roots of AVEs, and loadings and cross-loadings among constructs (see Tables 3 and 4). We also provide p-values, 90% confidence intervals, and the HTMT ratios (see Table 2). Under the Fornell-Larcker criterion, the square root of each construct's AVE is compared with that construct's correlations with the other constructs. As demonstrated in Tables 3 and 4, the square roots of the AVEs were larger than the correlations in every case, and each indicator loaded more strongly on its own construct than on any other. HTMT, which is based on heterotrait-monotrait correlations (HMC), has been shown to outperform the Fornell-Larcker criterion and cross-loading assessment. Discriminant validity of reflective measurement models is established by HTMT values of not more than 0.85 [144]. As shown in Table 5, the model has discriminant validity.
The HTMT ratio values presented in Table 5 were below the benchmark of 0.85, confirming the discriminant validity of the model. Additionally, Table 6 displays the confidence intervals for the HTMT ratios; a 90 per cent confidence interval is considered acceptable when it excludes the value of 1. In Table 2, the value of 1 falls outside the lower and upper limits in each case. This indicates that the variables in the model exhibit both convergent and discriminant validity and are reliable. Based on these findings, the structural model was assessed to examine the relationships between the variables.
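The HTMT ratio itself is straightforward to compute from the item correlation matrix: the mean correlation between items of different constructs, divided by the geometric mean of the mean within-construct item correlations. A sketch with a hypothetical four-item matrix (two items per construct; not the study's data):

```python
import numpy as np

def htmt(corr, idx_a, idx_b):
    """Heterotrait-monotrait ratio from an item correlation matrix.

    corr  : correlation matrix of all items
    idx_a : indices of items measuring construct A
    idx_b : indices of items measuring construct B
    """
    # mean heterotrait-heteromethod correlation (items of A against items of B)
    hetero = corr[np.ix_(idx_a, idx_b)].mean()

    # mean monotrait-heteromethod correlation within one construct (off-diagonal only)
    def mono(idx):
        block = corr[np.ix_(idx, idx)]
        return block[np.triu_indices(len(idx), k=1)].mean()

    return float(hetero / np.sqrt(mono(idx_a) * mono(idx_b)))

# Hypothetical correlations: items 0-1 measure construct A, items 2-3 measure construct B
corr = np.array([
    [1.00, 0.60, 0.30, 0.25],
    [0.60, 1.00, 0.28, 0.32],
    [0.30, 0.28, 1.00, 0.55],
    [0.25, 0.32, 0.55, 1.00],
])
print(round(htmt(corr, [0, 1], [2, 3]), 3))  # approximately 0.50, below the 0.85 benchmark
```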
5.2 Structural model assessment
Table 3 provides the full collinearity VIFs for all the latent variables together with p-values; this served as a redundancy assessment, since the indicators of a reflective latent variable should be redundant [133]. As a rule of thumb, full collinearity VIFs of 3.3 or less suggest the absence of multicollinearity in the model and of common method bias [133, 145]; this is also the recommended VIF threshold for PLS-based SEM with latent variables [145]. All VIFs in the model were below the 3.3 threshold, indicating that this study does not have a multicollinearity issue. It is worth noting, however, that categorical (nominal) predictor variables such as level of ICT skills cannot be used directly in WarpPLS; they must first be converted into dummy variables. One category must be omitted and used as the reference group against which the results are compared; omitting it also avoids errors caused by zero variance and multicollinearity.
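Both steps above can be sketched in a few lines; the data and category labels here are hypothetical. Each full collinearity VIF is 1/(1 − R²) from regressing one variable on all the others, and dummy coding expands a k-level categorical variable into k − 1 indicator columns, omitting the reference category:

```python
import numpy as np

def full_collinearity_vifs(X):
    """VIF for each column of X, regressing it on all the other columns."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])     # add an intercept column
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = ((y - A @ beta) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        r2 = 1 - ss_res / ss_tot
        vifs.append(1.0 / (1.0 - r2))
    return vifs

def dummy_code(values, reference):
    """Indicator columns for a categorical variable, dropping the reference category."""
    levels = [lvl for lvl in sorted(set(values)) if lvl != reference]
    return {lvl: [1 if v == lvl else 0 for v in values] for lvl in levels}

# Orthogonal toy columns: both VIFs come out at 1 (no collinearity)
print(full_collinearity_vifs([[1, 0], [0, 1], [1, 1], [0, 0]]))

# Hypothetical ICT-skill levels, with "low" as the reference group
print(dummy_code(["low", "mid", "high", "mid"], reference="low"))
```

Values above 3.3 for any column would flag the multicollinearity (and potential common method bias) discussed above.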
After establishing the reliability and validity of the measurement model, we examined the structural model to assess the direct effects of the latent variables and the amount of variance predicted by the model, as depicted in Fig. 3. The model predicted more than thirty per cent of the variance in TT in AI-based educational technologies, as indicated in Table 2. The model's predictive capability was further confirmed by Stone-Geisser's test: the Q² value (shown in Table 2) exceeded 0 for the construct, indicating predictive relevance. The next step involved determining the magnitude of the proposed relationships between the latent variables, as illustrated in Fig. 4. Each relational hypothesis in the model had a distinct path coefficient value. Among them, the relationships proposed for PI → TT (H4) and PB → TT (H2) exhibited some of the strongest coefficients, whereas the relationships between AN and TT and between ICT skills and TT were comparatively weak.
Table 7 presents the t-values of the path coefficients and the f² statistic, calculated to confirm the significance of the proposed relational hypotheses and the effect sizes of the latent variables. Based on the results, the PLS analysis supports all the proposed relational hypotheses in the model at the 0.05 significance level, except for H3. Cohen's effect-size thresholds are 0.02 (small), 0.15 (medium), and 0.35 (large) [127, 128]. Even when the corresponding p-values are statistically significant, f² values below 0.02 suggest effects too weak to be relevant from a practical standpoint; for reflective latent variables, all effect sizes should be equal to or greater than 0.02 [127]. H3 has an f² of 0.001, which makes its inclusion in the model questionable, since we cannot ensure a sufficient effect size. Following Cohen's and Kock's guidelines, H1, H2, and H5 have small effects (f² below 0.15), whereas H4 has a medium effect size (below 0.35).
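The f² statistic compares the model's R² with and without the focal predictor; the helper below applies the 0.02/0.15/0.35 thresholds cited above. The R² values in the example are hypothetical, not the study's estimates:

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f2 for a predictor: relative change in R2 when it is removed."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def f2_label(f2):
    """Conventional thresholds: 0.02 small, 0.15 medium, 0.35 large."""
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

# Hypothetical example: dropping a strong predictor lowers R2 from 0.32 to 0.20
print(f2_label(f_squared(0.32, 0.20)))  # "medium"
print(f2_label(0.001))                  # "negligible", like H3 above
```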
Nonetheless, to complement the analysis of direct effects, we conducted a two-stage analysis of the moderating effects of gender across all the paths (see Fig. 4).
As shown in Table 8 and Fig. 5a, there is a significant difference between male and female teachers in the relationship between anxieties related to using AI-based educational technologies and trust in them, supporting H6. The interaction effect is negative: female teachers are less likely to report anxieties about using AI-based edtech, while male teachers are more likely to do so. Moreover, males and females differ in their preferred means of increasing trust in AI-based education technology, level of ICT literacy, and use of AI-based educational technologies, which supports H9 and H10 (see Table 8, Fig. 5d and e). However, no significant differences were found between male and female teachers concerning the PB of AI-based edtech, lack of human characteristics, and trust in AI-based educational technologies; therefore, H7 and H8 were not supported (see Table 8, Fig. 5b and c).
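The logic of this moderation test can be sketched with a product-term regression on simulated data: a negative interaction coefficient means the anxiety-trust slope differs by gender in the direction reported for H6. Everything below (the 0/1 coding of gender, the coefficients, the noise level) is an assumption for illustration, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
anxiety = rng.standard_normal(n)                    # standardized anxiety scores
gender = rng.integers(0, 2, size=n).astype(float)   # 0 = male, 1 = female (assumed coding)
noise = rng.standard_normal(n) * 0.5

# Simulated trust scores with a built-in negative anxiety-by-gender interaction
trust = -0.3 * anxiety + 0.1 * gender - 0.25 * anxiety * gender + noise

# OLS with a product term: columns are intercept, anxiety, gender, interaction
X = np.column_stack([np.ones(n), anxiety, gender, anxiety * gender])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
print(round(float(beta[3]), 2))  # interaction coefficient: negative, near the simulated -0.25
```

In the two-stage PLS approach the same product term is formed from estimated latent variable scores rather than raw items, but the interpretation of its sign is identical.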
6 Discussion
Exploring STEAM teachers' trust (TT) in AI-based systems with regard to their adoption and use in the teaching and learning of STEAM is vital. Our study, therefore, examined the influence of AN, PB, LC, PI, and ICT skills, and the moderating effect of gender, on STEAM teachers' trust in AI-based educational technologies in a K-12 setting, using a structural equation modelling approach. Our proposed model predicts over thirty per cent of the variance in TT in AI-based systems in education. Each relational hypothesis has a different path coefficient value, and among the strongest relationships are those between PI and TT in AI-based educational technologies [99, 100] and between PB and TT in AI-based educational technologies [19, 81, 134, 146]. By contrast, the relationships between AN and TT and between ICT skills and TT were the weakest. Our study also shows that LC has an f² of 0.001, which makes its inclusion in the model questionable, since we cannot ensure a sufficient effect size. The findings additionally show that AN, PB, and ICT skills have small effects (f² below 0.15), whereas PI has a medium effect size (below 0.35).
Our findings also show a significant difference between male and female teachers [147] in the relationship between anxieties related to using AI-based educational technologies and trust in them, with a negative interaction effect: female teachers are less likely to report anxieties about using AI-based edtech, while male teachers are more likely to do so. This is an interesting finding, as most studies tend to report results favouring males in terms of technology usage [119, 120, 146]. Moreover, males and females differ in their preferred means of increasing trust in AI-based education technology, level of ICT literacy, and use of AI-based educational technologies [115, 116]. However, no significant differences were found between male and female teachers concerning the PB of AI-based edtech, lack of human characteristics, and trust in AI-based educational technologies [148–150].
7 Conclusion
In conclusion, our results show that anxiety, preferred methods to increase trust, and PB influence trust in AI-based edtech, while LC does not have a strong influence. By implication, the findings of this study should be an eye-opener to all stakeholders and policymakers in education, especially science education, on the need to evaluate existing frameworks for technology integration so that they incorporate the adoption processes, integration procedures, and effective use of AI-based educational technologies in STEAM classrooms in Nigeria. Given the implications of the findings for STEAM educators and STEAM education in the era of AI, policymakers are also expected to make informed decisions on what must be done to ensure that STEAM teachers trust and use AI-based educational technologies effectively and efficiently in their pedagogical processes. In this regard, the findings might prompt Ministry of Education officials and other relevant stakeholders in the education sector to design effective professional development programs that ensure K-12 STEAM teachers are well trained and grounded in the integration and use of emerging technologies in their pedagogy. Finally, since STEAM teachers play important roles in how their students learn in and outside the classroom, their trust in AI-based educational technologies must be strategically improved: AI technology is central to the future of work and life in the fourth industrial revolution, and students must not only be taught using the technology but also be prepared to embrace and use it.
8 Limitations and future work
We propose that subsequent researchers consider using a larger sample of participants, even though the sample size of this study is sufficient for inference purposes. A larger sample is suggested because the findings from the present sample may not represent the opinions of the majority of STEAM teachers in Nigeria, and a larger sample would further validate the findings of this study. Also, the study was conducted among secondary school STEAM teachers in Nigeria; the opinions of teachers at this level of education may not represent the views of STEAM teachers in higher institutions in Nigeria. Hence, extending this study to STEAM lecturers in higher institutions might be worthwhile for further validation of the present results. In addition, we suggest that subsequent studies address the gender imbalance observed in this study; a more balanced gender distribution might bring up new discussions on the moderating effects of gender in a study of this nature.
Data availability
The data that support the findings of this study are available on request.
Abbreviations
- STEAM: Science, Technology, Engineering, Arts and Mathematics
- AI: Artificial intelligence
- AR: Augmented reality
- CR: Composite reliability
- AVE: Average variance extracted
- SEM: Structural equation modelling
- PLS: Partial least squares
- PLS-SEM: Partial least squares structural equation modelling
- HTMT: Heterotrait-monotrait ratio correlation
- VIF: Variance inflation factor
- HMC: Heterotrait-monotrait correlations
- SMAR: Standardized mean absolute residual
- SRMR: Standardized root mean squared residual
- STDCR: Standardized threshold difference count ratio
- STDSR: Standardized threshold difference sum ratio
- ICT: Level of ICT literacy
- TT: Teachers' trust in AI-based edtech tools
- AN: Anxieties related to using AI-based educational tools
- PB: Perceived benefits of AI-based educational tools
- LC: Lack of human characteristics
- PI: Preferred means to increase trust
- K-12: Secondary school (senior) level of education
- TAM: Technology Acceptance Model
- TRA: Theory of Reasoned Action
- AI HLEG: AI High-Level Expert Group
- SChS: Standardized chi-square
- NLBCDR: Nonlinear bivariate causality direction ratio
- RSCR: R-squared contribution ratio
- SSR: Statistical suppression ratio
References
Veletsianos G. Emerging technologies in distance education. Edmonton: AU Press; 2010.
Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad Med. 2018;93(8):1107–9.
Luckin R, George K, Cukurova M. AI for school teachers. 2021. CRC Press.
Luckin R, Cukurova M. Designing educational technologies in the age of AI: A learning sciences-driven approach. Br J Educ Technol. 2019;50(6):2824–38. https://doi.org/10.1111/bjet.12861.
Holmes W, Bialik M, Fadel C. Artificial intelligence in education. Boston: Center for Curriculum Redesign, 2019. https://curriculumredesign.org/wp-content/uploads/AIED-Book-Excerpt-CCR.pdf
Nabiyev VV. Yapay zeka: İnsan-bilgisayar etkileşimi [Artificial intelligence: Human-computer interaction]. Seçkin Yayıncılık; 2010.
Pokrivcakova S. Preparing teachers for the application of AI-powered technologies in foreign language education. Sciendo. 2019. https://doi.org/10.2478/jolace-2019-0025.
Luan H, Geczy P, Lai H, Gobert J, Yang SJ, Ogata H, et al. Challenges and future directions of big data and artificial intelligence in education. Front Psychol. 2020. https://doi.org/10.3389/fpsyg.2020.580820.
Hwang GJ, Xie H, Wah BW, et al. Vision, challenges, roles and research issues of artificial intelligence in education. Comput Educ. 2020;1:100001. https://doi.org/10.1016/j.caeai.2020.100001.
Shum SJB, Luckin R. Learning analytics and AI: politics, pedagogy and practices. Br J Edu Technol. 2019;50(6):2785–93. https://doi.org/10.1111/bjet.12880.
Chen X, Zou D, Xie H, Cheng G. Twenty years of personalized language learning: topic modeling and knowledge mapping. Educ Technol Soc. 2021;24(1):205–22.
Mislevy RJ, Yan D, Gobert J, Sao-Pedro M. Automated scoring in intelligent tutoring systems. In Handbook of automated scoring, 2020; (pp. 403–422). Chapman and Hall/CRC.
Wang Y, Zhao P. A probe into spoken English recognition in English education based on computer-aided comprehensive analysis. Int J Emerg Technol Learn. 2020;15(3):223–33.
Keuning T, van Geel M. Differentiated teaching with adaptive learning systems and teacher dashboards: the teacher still matters most. IEEE Trans Learn Technol. 2021;14(2):201–10. https://doi.org/10.1109/TLT.2021.3072143.
van Leeuwen A, Knoop-van Campen CA, Molenaar I, Rummel N. How teacher characteristics relate to how teachers use dashboards: results from two case studies in K-12. J Learn Analyt, 2021; 8(2): 6–21. https://doi.org/10.18608/jla.2021.7325
Celik I, Dindar M, Muukkonen H, et al. The promises and challenges of artificial intelligence for teachers: a systematic review of research. TechTrends. 2020;66:616–30. https://doi.org/10.1007/s11528-022-00715-y.
Zawacki-Richter O, Marín VI, Bond M, Gouverneur F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int J Educ Technol High Educ. 2019;16(1):1–27. https://doi.org/10.1186/s41239-019-0171-0.
Seufert S, Guggemos J, Sailer M. Technology-related knowledge, skills, and attitudes of pre-and in-service teachers: the current situation and emerging trends. Comput Hum Behav. 2021;115: 106552. https://doi.org/10.1016/j.chb.2020.106552.
Celik I. Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Comput Hum Behav. 2023;138: 107468.
Glikson E, Woolley W. Human trust in artificial intelligence: review of empirical research. Acad Manag Ann. 2020;14(2):627–60. https://doi.org/10.5465/annals.2018.0057.
Fulmer CA, Gelfand MJ. At what level (and in whom) we trust: Trust across multiple organizational levels. J Manag. 2021;38(4):1167–230.
Pirson M, Malhotra D. Foundations of organizational trust: What matters to different stakeholders? Organ Sci. 2011;22(4):1087–104.
Oni S. Revitalizing Nigerian Education in Digital Age. London: Trafford Publishing; 2012.
Aina JK. STEM education in Nigeria: development and challenges. Curr Res Lang Liter Educ. 2022;3:53. https://doi.org/10.9734/bpi/crlle/v3/2258C.
Fomunyam KG. Teaching stem education in Nigeria: challenges and recommendations. Int J Mech Eng Technol. 2019;10(12):85–93.
Atiku SO, Boateng F. Rethinking education system for the fourth industrial revolution. In Human Capital Formation for the Fourth Industrial Revolution, 2020. (pp. 1–17). IGI Global. https://doi.org/10.4018/978-1-5225-9810-7.ch001
Chitra L. Artificial intelligence meets augmented reality (1st ed.). BPB Publications. 2019.
Hinojo-Lucena FJ, Aznar-Díaz I, Cáceres-Reche MP, Romero-Rodríguez JM. Artificial intelligence in higher education: a bibliometric study on its impact in the scientific literature. Educ Sci. 2019;9(1):51. https://doi.org/10.3390/educsci9010051.
Popenici SAD, Kerr S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Res Pract Technol Enhanced Learn. 2017;12(1):22. https://doi.org/10.1186/s41039-017-0062-8.
Bart R. Understanding academics’ resistance towards (online) student evaluation. Assess Eval High Educ. 2014;39(8):987–1001.
Nazaretsky T, Cukurova M, Alexandron G. An instrument for measuring teachers’ trust in AI-based educational technology. In LAK22: 12th international learning analytics and knowledge conference. (2022, March). (pp. 56–66).
Ball DM, Levy Y. Emerging educational technology: assessing the factors that influence instructors’ acceptance in information systems and other classrooms. J Inf Syst Educ. 2008;19(4):431.
Sun H, Zhang P. The role of moderating factors in user technology acceptance. Int J Hum Comput Stud. 2006;64(2):53–78.
Thompson R, Compeau R, Deborah E, Higgins C. Intentions to use information technologies: an integrative model. J Organ End User Comput. 2006;18(3):25–47.
Bashir I, Madhavaiah C. Consumer attitude and behavioural intention towards Internet banking adoption in India. J Indian Busin Res. 2015. https://doi.org/10.1108/JIBR-02-2014-0013.
Hou M, Lin Y, Shen Y, Zhou H. Explaining pre-service teachers’ intentions to use technology-enabled learning: An extended model of the theory of planned behavior. Front Psychol. 2022;13:1.
Sun L, Zhou X, Sun Z. Improving cycling behaviors of dockless bike-sharing users based on an extended theory of planned behavior and credit-based supervision policies in China. Front Psychol. 2019;10:2189. https://doi.org/10.3389/fpsyg.2019.02189.
Adelana OP, Akinyemi L. Artificial intelligence-based tutoring systems utilization for learning: a survey of senior secondary students’ awareness and readiness in Ijebu-Ode, Ogun State. UNIZIK J Educ Res Pol Stud. 2021;9:16–28.
Shen L, Su A. The Changing Roles of Teachers with AI. Revolutionizing Education in the Age of AI and Machine Learning. IGI Global, 2020; 1–25.
Adams C, Thompson TL. Interviews with digital objects: a special issue on speaking with the digital. Explor Media Ecol. 2020;19(3):249–54.
Timms MJ. Letting artificial intelligence in education out of the box: educational cobots and smart classrooms. Int J Artif Intell Educ. 2016;26(2):701–12.
Chen L, Chen P, Lin Z. Artificial intelligence in education: a review. IEEE Access. 2020;8:75264–78.
Mikropoulos TA, Natsis A. Educational virtual environments: a ten-year review of empirical research (1999–2009). Comput Educ. 2011;56(3):769–80.
Snyder H. Literature reviews as a research methodology: an overview and guidelines. J Bus Res. 2019;104:333–9.
Fang Y, Chen P, Cai G, Lau FCM, Liew SC, Han G. Outage-limit-approaching channel coding for future wireless communications: root-protograph low-density parity-check codes. IEEE Veh Technol Mag. 2019;14(2):85–93.
Chassignol M, Khoroshavin A, Klimova A, Bilyatdinova A. Artificial intelligence trends in education: a narrative overview. Procedia Comput Sci. 2018;136:16–24.
Sekeroglu B, Dimililer K, Tuncal K. Artificial intelligence in education: application in student performance evaluation. Dilemas Contemporáneos: Educación, Política y Valores. 2019;7(1):1–21.
Pedro F, Subosa M, Rivas A, Valverde P. Artificial intelligence in education: challenges and opportunities for sustainable development. Paris: UNESCO; 2019.
Mohammed PS, Watson EN. Towards inclusive education in the age of artificial intelligence: perspectives, challenges, and opportunities. In: Knox J., Wang Y., Gallagher M. (eds) Artificial Intelligence and Inclusive Education. Perspectives on Rethinking and Reforming Education. Singapore: Springer. 2019.https://doi.org/10.1007/978-981-13-8161-4_2
Grosz BJ, Stone P. A century-long commitment to assessing artificial intelligence and its impact on society. Commun ACM. 2018;61(12):68–73.
Gupta KP, Bhaskar P, Singh S. Prioritization of factors influencing employee adoption of e-government using the analytic hierarchy process. J Syst Inform Technol. 2017;19(1/2):116–37. https://doi.org/10.1108/JSIT-04-2017-0028.
Vereschak O, Bailly G, Caramiaux B. How to evaluate trust in AI-assisted decision making? A survey of empirical methodologies. Proc ACM Human-Comput Inter. 2021;5(CSCW2):1–39.
Madhavan P, Wiegmann DA. Similarities and differences between human–human and human– automation trust: an integrative review. Theor Issues Ergon Sci. 2007;8(4):277–301.
Hengstler M, Enkel E, Duelli S. Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technol Forecast Soc Chang. 2016;105:105–20.
Ullman D, Malle BF. Human-robot trust: Just a button press away. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17), March 6–9, 2017; 309–310. https://doi.org/10.1145/3029798.3038423
The European Commission's AI High-Level Expert Group (AI HLEG). Ethics Guidelines for Trustworthy AI. European Commission, 2018. Retrieved from https://ec.europa.eu/
Söllner M. “Trust”. MIS Quarterly Research Curations, Ashely Bush and Arun Rai, 2016; Eds. http://misq.org/research-curations
Makridakis S. The forthcoming Artificial Intelligence (AI) revolution: its impact on society and firms. Futures. 2017;90:46–60.
McKnight DH. Trust in a specific technology: an investigation of its components and measures. ACM Trans Manag Inform Syst. 2011;2(2):1–25.
Hsu S, Kuan PY. The impact of multilevel factors on technology integration: the case of Taiwanese grade 1–9 teachers and schools. Educ Tech Res Dev. 2013;61(1):25–50.
Inan FA, Lowther DL. Factors affecting technology integration in K-12 classrooms: a path model. Educ Tech Res Dev. 2010;58(2):137–54.
van Braak J, Tondeur J, Valcke M. Explaining different types of computer use among primary school teachers. Eur J Psychol Educ. 2004;19(4):407–22.
O’Dwyer L, Russell M, Bebel D. Elementary teachers’ use of technology: Characteristics of teachers, schools, and districts associated with technology use. Boston: Technology and Assessment Study Collaborative, 2003a, Boston College.
O’Dwyer L, Russell M, Bebell D. Elementary teachers' use of technology: Characteristics of teachers, schools, and districts associated with technology use. inTASC Publications, 2003b; 2.
Pelgrum WJ. Obstacles to the integration of ICT in education: results from a worldwide educational assessment. Comput Educ. 2001;37:163–78.
Scherer R, Siddiq F, Tondeur J. The technology acceptance model (TAM): a meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Comput Educ. 2019;128:13–35.
Nazaretsky T, Cukurova M, Ariely M, Alexandron G. Confirmation bias and trust: human factors that influence teachers' attitudes towards AI-based educational technology. In CEUR Workshop Proceedings. 2021, September). (Vol. 3042).
Manikonda L, Kambhampati S. Tweeting AI: perceptions of lay versus expert twitterati, in: Twelfth International AAAI Conference on Web and Social Media, 2018.
Stone P, Brooks R, Brynjolfsson E, Calo R, Etzioni O, Hager G, Hirschberg J, Kalyanakrishnan S, Kamar E, Kraus, S….et al. Artificial intelligence and life in 2030: the one hundred year study on artificial intelligence, 2016.
Byrne B. Structural equation modeling with AMOS: basic concepts, applications, and programming. 2nd ed. New York: Taylor & Francis Group; 2010. ISBN 9780805863727.
Fraenkel JR, Wallen NE, Hyun HH. How to design and evaluate research in education. 7th ed. New York: McGraw-Hill; 2012.
Sareen P. Techno stress creators—an exploratory research on teaching and non-teaching staff working in colleges. Int J Manag Human. 2019;3:1–7.
Almaiah MA, Alfaisal R, Salloum SA, Hajjej F, Thabit S, El-Qirem FA, Lutfi A, Alrawad M, Al Mulhe A, Alkhdour T, et al. Examining the Impact of Artificial Intelligence and Social and Computer Anxiety in E-Learning Settings: Students’ Perceptions at the University Level. Electronics. 2022;11:3662. https://doi.org/10.3390/electronics11223662.
Sánchez-Prieto JC, Cruz-Benito J, Therón R, García-Peñalvo FJ. How to measure teachers' acceptance of AI-driven assessment in eLearning: A TAM-based proposal. In Proceedings of the Seventh International Conference on Technological Ecosystems for Enhancing Multiculturality. 2019. (pp. 181–186). https://doi.org/10.1145/3362789.3362918
Johnson DG, Verdicchio M. AI, agency and responsibility: the VW fraud case and beyond. Ai Soc. 2019;34(3):639–47.
Huang HM, Liaw SS. Exploring users’ attitudes and intentions toward the web as a survey tool. Comput Hum Behav. 2005;21(5):729–43. https://doi.org/10.1016/j.chb.2004.02.020.
Kumar P, Kumar A. Effects of web-based projects on pre-service and in-service teachers' attitudes towards computers and technology skills. J Comput Teach Educ. 2003;19(3):87–92.
Jawahar IM, Elango B. The effects of attitudes, goal setting, and self-efficacy on end user performance. J End User Comput. 2001;13(2):40–5. https://doi.org/10.4018/joeuc.2001040104.
Agbatogun A. Self-concept, computer anxiety, gender and attitude towards interactive computer technologies: a predictive study among Nigerian teachers. Int J Educ Dev Using ICT. 2010;6:55–268.
Namlu A, Ceyhan E. Computer anxiety: A study on university students. Eskisehir: Anadolu University Publishing; 2002.
Xu L. The Dilemma and countermeasures of AI in educational application. In 2020 4th international conference on computer science and artificial intelligence. 2020. (pp. 289–294).
Sugandini D, Purwoko PA, Resmi S, Reniati M, Kusumawati RA. The role of uncertainty, perceived ease of use, and perceived usefulness towards the technology adoption. Int J Civil Eng Technol. 2018;9(4):660–9.
Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40.
Oh S, Yoon O. Validation of haptic enabling technology acceptance model (HE-TAM): integration of IDT and TAM. Telematics Inform. 2014;31(4):585–96.
Cavalcanti AP, Barbosa A, Carvalho R, Freitas F, Tsai YS, et al. Automatic feedback in online learning environments: A systematic literature review. Comput Educ. 2021;2:100027. https://doi.org/10.1016/j.caeai.2021.100027.
Wang Y, Liu C, Tu YF. Factors affecting the adoption of AI-based applications in higher education. Educ Technol Soc. 2021;24(3):116–29.
Edwards C, Edwards A, Spence PR, Lin X. I, teacher: Using artificial intelligence (AI) and social robots in communication and instruction. Commun Educ. 2018;67(4):473–80. https://doi.org/10.1080/03634523.2018.1502459.
Mishra P, Koehler MJ. Technological pedagogical content knowledge: a framework for teacher knowledge. Teachers College Record. 2006;108(6):1017–54. https://doi.org/10.1111/j.1467-9620.2006.00684.x.
Shneiderman B. Design lessons from AI’s two grand goals: human emulation and useful applications. IEEE Trans Tech Soc. 2020;1:73–82. https://doi.org/10.1109/TTS.2020.2992669.
Yang SJ, Ogata H, Matsui T, Chen NS. Human-centered artificial intelligence in education: seeing the invisible through the visible. Comput Educ. 2021;2:100008.
Shin D. How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. Comput Human Behav. 2020;109:106344. https://doi.org/10.1016/j.chb.2020.106344.
Sao Pedro MA, Baker RS, Gobert JD. What different kinds of stratification can reveal about the generalizability of data-mined skill assessment models. In Proceedings of the third international conference on learning analytics and knowledge. 2013 (pp. 190–194).
De Cremer D, De Schutter L. How to use algorithmic decision-making to promote inclusiveness in organizations. AI and Ethics. 2021;1(4):563–7. https://doi.org/10.1007/s43681-021-00073-0.
Dietvorst BJ, Simmons JP, Massey C. Overcoming algorithm aversion: people will use imperfect algorithms if they can (even slightly) modify them. Manage Sci. 2018;64(3):1155–70. https://doi.org/10.1287/mnsc.2016.2643.
Almusharraf N, Alotaibi H. An error-analysis study from an EFL writing context: human and automated essay scoring approaches. Technol Knowl Learn. 2022. https://doi.org/10.1007/s10758-022-09592-z.
Dorr KN, Hollnbuchner K. Ethical challenges of algorithmic journalism. Digit Journal. 2017;5(4):404–19. https://doi.org/10.1080/21670811.2016.1167612.
Korteling JE, van de Boer-Visschedijk GC, Boswinkel RA, Boonekamp RC. Effecten van de inzet van non-human intelligent collaborators op opleiding en training [Effects of deploying non-human intelligent collaborators on education and training]. Report TNO 2018 R11654. Soesterberg, Netherlands: TNO Defence, Safety and Security; 2018.
Ayanwale MA, Sanusi IT, Adelana OP, Aruleba KD, Oyelere SS. Teachers’ readiness and intention to teach artificial intelligence in schools. Comput Educ Artif Intell. 2022;3:100099. https://doi.org/10.1016/j.caeai.2022.100099.
UNESCO. ICT competency framework for teachers. Paris: UNESCO; 2018.
Buabeng-Andoh C. Factors influencing teachers’ adoption and integration of information and communication technology into teaching: a review of the literature. Int J Educ Dev Inf Commun Technol. 2012;8(1):136–55. https://files.eric.ed.gov/fulltext/EJ1084227.pdf
Plair S. Revamping professional development for technology integration and fluency. Clear House. 2008;82(2):70–4.
Sellier A-L, Scopelliti I, Morewedge CK. Debiasing training improves decision making in the field. Psychol Sci. 2019;30(9):1371–9.
Morewedge CK, Yoon H, Scopelliti I, Symborski CW, Korris JH, Kassam KS. Debiasing decisions: improved decision making with a single training intervention. Policy Insights Behav Brain Sci. 2015;2(1):129–40.
Arnold KE, Lynch G, Huston D, Wong L, Jorn L, Olsen CW. Building institutional capacities and competencies for systemic learning analytics initiatives. In: Proceedings of the Fourth International Conference on Learning Analytics and Knowledge; 2014. p. 257–60.
Bordbar F. English teachers’ attitudes toward computer-assisted language learning. Int J Lang Stud. 2010;4(3):27–54.
Bayrak N, Bayrak G. The effects of in-service training courses about the use of technology on teachers’ technological pedagogical content knowledge self-confidence. YYU J Educ Fac. 2021;18(1):1009–41. https://doi.org/10.33711/yyuefd.957385
Doğru M, Şeren N, Koçulu A. An investigation about primary school teachers’ self-efficacy perception related to technology use from the point of variables. Eur J Soc Econ Res. 2017;4(12):464–72.
Chen R. Investigating models for pre-service teachers’ use of technology to support student-centered learning. Comput Educ. 2010;55(1):32–42.
MacCallum K, Jeffrey L, Kinshuk K. Factors impacting teachers’ adoption of mobile learning. J Inf Technol Educ. 2014;13(1):141–62.
Sipila K. The impact of laptop provision on teacher attitudes towards ICT. Technol Pedagog Educ. 2010;19(1):3–16.
Tilak S, Glassman M, Peri J, Xu M, Kuznetcova I, Gao L. Need satisfaction and collective efficacy in undergraduate blog-driven classes: a structural equation modelling approach. Australas J Educ Technol. 2022;1:75–90.
Başaran M, Ülger IG, Demirtaş M, Kara E, Geyik C, Vural OF. Investigation of teachers’ use of technology in the distance education process. OPUS Int J Soc Res. 2021;17(37):4619–45. https://doi.org/10.26466/opus.903870
Safa BS, Arabacıoğlu T. Investigation of the educational technology usage levels of primary school teachers in terms of individual innovativeness characteristics. Ondokuz Mayis Univ J Educ. 2021;40:1. https://doi.org/10.7822/omuefd.686056.
Bolat D, Korkmaz Ö, Çakır R. Determination of the level of secondary school teachers to use information technologies and to integrate them into their courses. J Ahmet Keleşoğlu Educ Faculty. 2020;2(2):229–50.
Estalkhi NN, Mohammadi M, Bakshiri N, Kamali J. Gender differences among EFL teachers’ beliefs and their classroom practice in Iranian context. Proceedings of INTED2011 Conference. 7–9 March 2011, Valencia, Spain.
Li W. Teachers’ beliefs, gender differences and mathematics. Paper presented at the Annual Meeting of the American Association for the Advancement of Science; June 1996; San Jose, CA, USA.
Tondeur J, Valcke M, van Braak J. A multidimensional approach to determinants of computer use in primary education: teacher and school characteristics. J Comput Assist Learn. 2008;24(6):494–506.
Semerci A, Aydin KM. Examining high school teachers’ attitudes towards ICT use in education. Int J Progress Educ. 2018;14:93–105.
Awofala AO, Akinoso SO, Fatade AO. Attitudes towards computer and computer self-efficacy as predictors of pre-service mathematics teachers’ computer anxiety. Acta Didact Napoc. 2017;10:91–108.
Çakir T. The attitudes of preschool teachers and principals towards computer using. Anthropologist. 2014;18:735–44.
Halder S, Chaudhuri S. Computer self-efficacy and computer anxiety of trainee teachers: issue of concern. Proc Epistem. 2011;4:1–7.
Adams NB. Educational computing concerns of postsecondary faculty. J Res Technol Educ. 2002;34(3):285–303.
Kock N. WarpPLS User Manual: Version 7.0. Laredo, TX: ScriptWarp Systems; 2020.
Kock N, Hadaya P. Minimum sample size estimation in PLS-SEM: the inverse square root and gamma-exponential methods. Inf Syst J. 2018;28(1):227–61.
Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res. 1981;18(1):39–50.
Avkiran NK, Ringle CM, editors. Partial least squares structural equation modeling: recent advances in banking and finance. Cham, Switzerland: Springer; 2018. https://doi.org/10.1007/978-3-319-71691-6.
Kock N. Advanced mediating effects tests, multi-group analyses, and measurement model assessments in PLS-based SEM. Int J e-Collab. 2014;10(3):1–13.
Ayanwale MA. Can experience determine the adoption of industrial revolution 4.0 skills in 21st-century mathematics education? Res Soc Sci Technol. 2023;8(1):74–91. https://doi.org/10.46303/ressat.2023.6
Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum; 1988.
Hair JF, Hult GTM, Ringle CM, Sarstedt M. A primer on partial least squares structural equation modeling (PLS-SEM). 2nd ed. Thousand Oaks, CA: Sage; 2017.
Hair JF. Next-generation prediction metrics for composite-based PLS-SEM. Ind Manag Data Syst. 2020 (forthcoming).
Ayanwale MA, Molefi RR, Matsie N. Modelling secondary school students’ attitudes toward TVET subjects using social cognitive and planned behavior theories. Soc Sci Humanit Open. 2023;8(1):100478.
Hair JF, Ringle CM, Sarstedt M. PLS-SEM: indeed a silver bullet. J Mark Theory Pract. 2011;19(2):139–51.
Kock N. Common method bias in PLS-SEM: a full collinearity assessment approach. Int J e-Collab. 2015;11(4):1–10.
Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16(3):297–334.
Kline RB. Promise and pitfalls of structural equation modelling in gifted research. 2010. https://doi.org/10.1037/12079-007.
Sijtsma K. On the use, the misuse, and the very limited usefulness of Cronbach’s alpha. Psychometrika. 2009;74:107–20.
Peterson RA, Kim Y. On the relationship between coefficient alpha and composite reliability. J Appl Psychol. 2013;98(1):194.
Tenenhaus M, Vinzi VE, Chatelin YM, Lauro C. PLS path modeling. Comput Stat Data Anal. 2005;48(1):159–205.
Rakov VA. Lightning electromagnetic fields: modeling and measurements. In: Proc 12th Int Zurich Symp Electromagn Compat; 1997. p. 59–64.
Kock N, Lynn GS. Lateral collinearity and misleading results in variance-based SEM: an illustration and recommendations. J Assoc Inf Syst. 2012;13(7):546–80.
Kline RB. The mediation myth. Basic Appl Soc Psychol. 2015;37(4):202–13.
Molefi RR, Ayanwale MA. Using composite structural equation modeling to examine high school teachers’ acceptance of e-learning after Covid-19. New Trends Issues Proc Humanit Soc Sci. 2023;10(1):1–11. https://doi.org/10.18844/prosoc.v10i1.8837.
Kock N. Using indicator correlation fit indices in PLS-SEM: selecting the algorithm with the best fit. Data Anal Perspect J. 2020;1(4):1–4. https://scriptwarp.com/dapj/2020_DAPJ_1_4/Kock_2020_DAPJ_1_4_XsCorrMatrixIndices.pdf.
Henseler J, Ringle CM, Sarstedt M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J Acad Mark Sci. 2015;43(1):115–35. https://doi.org/10.1007/s11747-014-0403-8.
Oladele JO, Ayanwale MA, Ndlovu M. Technology adoption for STEM education in higher education: students’ experience from selected sub-Saharan African countries. Pertanika J Sci Technol. 2023;31(1):237–56. https://doi.org/10.47836/pjst.31.1.15
Ayanwale MA. Evidence from Lesotho secondary schools on students’ intention to engage in artificial intelligence learning. In: 2023 IEEE AFRICON Conference (accepted).
Sánchez-Mena A, Martí-Parreño J, Aldás-Manzano J. Teachers’ intention to use educational video games: the moderating role of gender and age. Innov Educ Teach Int. 2018;56:318–29.
Baydas O, Goktas Y. Influential factors on pre-service teachers’ intentions to use ICT in future lessons. Comput Hum Behav. 2016;56:170–8.
Teo T, Milutinovic V. Modelling the intention to use technology for teaching mathematics among pre-service teachers in Serbia. Australas J Educ Technol. 2015;31:363–80.
Author information
Contributions
OP focused on paper development, data collection, theoretical framework and hypotheses development, and data coding. MA handled data screening and management, analysis, and interpretation. TT was involved in data collection, direction of the investigation, and other logistics for the smooth development of the paper. All authors approved the final manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ayanwale, M.A., Adelana, O.P. & Odufuwa, T.T. Exploring STEAM teachers’ trust in AI-based educational technologies: a structural equation modelling approach. Discov Educ 3, 44 (2024). https://doi.org/10.1007/s44217-024-00092-z