Abstract
The use of digital media in education has been addressed in numerous technology acceptance models, but there is very little research linking acceptance and assessment using mobile devices, a reality in educational institutions. This work aims to extend research by developing the TAM model and studying teachers’ perceived usefulness of mobile devices in terms of how they understand assessment: generically, as summative and formative assessment, or as the complementarity of the two. This study compares three models using partial least squares structural equation modeling (PLS-SEM) on a sample of 262 master’s degree students (pre-service teachers). The results show the validity of the three proposals and confirm the advantages of considering assessment specifically in acceptance models, as well as the importance of addressing its modalities separately, given the better results obtained in the two models that do so. The study also confirms the importance of self-efficacy in the use of mobile devices as a predictor of usefulness and intention to use in all three models. The comparative approach and the development of the perceived usefulness construct in assessment represent a new contribution to the field of acceptance studies.
1 Introduction
Information and communication technologies (ICTs) are nowadays an essential tool in the teaching–learning processes in all educational institutions, from the lowest levels to higher education (Burbules et al., 2020; Valverde-Berrocoso et al., 2021). This digital inclusion has facilitated access to an unprecedented variety of resources in education, leading to the creation of new virtual learning environments (Gómez-Galán, 2020) and the development of new methodologies, such as collaborative work or flipped classrooms (Ciobanu, 2022; Clark et al., 2022; Hossain et al., 2021; Sun et al., 2022), especially after the pandemic (Sar & Misra, 2020).
Several studies on assessment in digital environments attempt to define assessment frameworks (Gómez-Ruiz et al., 2022; Moccozet et al., 2019) and present a large number of technological tools for this process (Cosi et al., 2020), with mobile devices as one of the main research lines in the development of teaching (Guardia et al., 2019; Moreno-Guerrero et al., 2020), given their universality and potential (Abd-Karim et al., 2018; Criollo et al., 2021). The advantages of these devices in the educational context have been summarised as portability, interactivity, context sensitivity, connectivity, and individuality (Ally & Prieto-Blázquez, 2014). For Al-Emran et al. (2020), their use is an opportunity for ubiquitous access to media and resources, enriched content, new means of interaction and collaboration, and new methodological and assessment procedures (Aljawarneh, 2020; Baier & Kunter, 2020; Reisoğlu & Çebi, 2020).
Therefore, it is necessary to examine the concept of assessment in greater depth, since it comprises two modalities: formative assessment and summative assessment (Black, 1993; Dixson & Worrell, 2016; Scriven, 1967). Consequently, if a teacher understands these modalities differently, his or her perspective on the use of mobile devices may also differ depending on the assessment carried out; a distinction that should be considered in research.
The described potential of mobile devices contrasts, in many cases, with the reluctance of teachers to introduce innovations in a process as rigorous as assessment (Hébert et al., 2021). These tools have been extensively validated in the introduction of new classroom methodologies (Bernacki et al., 2020; Marín-Díaz et al., 2022; Vieira & Ribeiro, 2018), or as supplementary and substitute teaching materials (Wan-Sulaiman & Mustafa, 2020); but there are still very few initiatives that address the use of mobile devices in assessment, a practice that is currently not widespread (Nikou & Economides, 2016).
It should also be considered that the ultimate decision on the use of technological devices for assessment in their classrooms, whether face-to-face or in technology-mediated instruction, rests with the teachers (Olimov, 2021). The knowledge of the factors influencing the teacher’s decision to use technology and to know the current state of its technological acceptance will contribute to establishing appropriate training policies based on an effective and real diagnosis, which will allow teachers to be trained in those aspects in which they face limitations for the inclusion of technology in the assessment process (García-Aretio, 2019).
The determination of this acceptance and its factors is nowadays addressed through different technology acceptance models described in the literature and validated in practice, the most widespread being the TAM model (Davis, 1989; Fishbein & Ajzen, 1975). Regarding the variables that the TAM model includes (Fig. 1), Davis (1989) described the behavioural intention to use through the attitude toward using and stated that the factors that determine them are perceived usefulness, understood as “the degree to which a person believes that using a particular system would enhance their job performance” (p. 320), and perceived ease of use, defined as “the degree to which a person believes that using a particular system would be free from effort” (p. 320).
Technology acceptance model (Davis, 1989)
The TAM model has been widely discussed in studies on the technological acceptance of teaching in virtual environments (Bizzo, 2021; Cruz-Benito et al., 2019; Terán-Guerrero, 2019) and in teaching via mobile devices (Hao et al., 2017; Mutambara & Bayaga, 2021; Sánchez-Prieto et al., 2017b), but few studies have addressed the technology acceptance of assessment using mobile devices. Among these, Nikou and Economides (2017a, 2017b) are the most relevant authors in the field with their proposal of the Mobile-Based Assessment Acceptance Model (MBAAM). This model, which is based on TAM, suggests the influence of ten factors intended to predict perceived ease of use and usefulness, thus extending Davis’ (1989) explanation of the intention to use in this field.
Research on the acceptance of technology has mainly been carried out on in-service teachers (Sánchez-Prieto et al., 2019), hence this study proposes to explore the acceptance of technology in the group of future teachers, as they will be the main actors in these new e-learning or m-learning frameworks (Anisimova et al., 2020; Castañeda-Vázquez et al., 2019; Sánchez-Prieto et al., 2019).
Therefore, this research aims to study perceived usefulness and its current definition in this field. Given that the TAM model proposes a generic perceived usefulness, this proposal is to reformulate the construct through two new models that specifically include formative and summative assessments, since, as indicated above, the use of mobile devices may vary depending on the type of assessment to be carried out. The research questions to be addressed are as follows:
• Which of the proposed perceived usefulness constructs explains a higher intention to use mobile devices in e-assessment? (R1)
• What are the implications of formative and summative assessments for the construct of perceived usefulness? (R2)
A review of the literature and the proposed models is provided in the remainder of this section. The methodology is presented in Sect. 2, and the results in Sect. 3. The discussion of the results and the conclusions are included in Sect. 4.
1.1 Literature review and models development
In educational practice, assessment is conducted from a dual perspective depending on its purpose, distinguishing between summative and formative assessments (Scriven, 1967). In general, summative assessment focuses on the final product and accountability, being a form of assessment aimed at obtaining a final grade (Knight, 2002). In contrast, formative assessment focuses on monitoring the process and all the factors that influence it, being more oriented toward feedback and continuous improvement (Morris et al., 2021).
The debate in research on these assessment modalities has generated three different currents: the first assuming the non-differentiation between formative and summative assessments, the second assuming their existence as independent modalities, and the third assuming their necessary complementarity for effective assessment (Lau, 2016). The three conceptions and their implications for the assessment process are presented below.
On the one hand, the first current describes assessment in general terms as a single whole, without distinguishing between process and product, and understanding its importance in terms of objectives and effectiveness (Mejía-Pérez, 2012). This current corresponds to the classical conception of assessment, without distinguishing between formative and summative modalities or exploring their possibilities (Shepard, 2006; Tyler, 1950).
On the other hand, the second current assumes their independence and is in favour of differentiating between the effects of the two on learning (Harlen & James, 1997; Patton, 1996), treating both modalities independently and separately (Black & Wiliam, 1998). This is consistent with an established idea in assessment, which distinguishes between the effects of the two and considers that a new combined model may be a problem for teachers in the search for the combination of different techniques, objectives, and goals (MacLellan, 2001).
Finally, the third approach considers these assessments jointly, implying that a teacher’s whole assessment process is the sum of formative and summative assessments (Ahmad & Bhat, 2019; Buchholtz et al., 2018), as their purposes are fully related and complementary within a complete process (Black et al., 2003) mediated by feedback and guidance aimed at the achievement of learning goals and their final certification (Dixson & Worrell, 2016; Dolin et al., 2018).
1.2 Perceived usefulness modelling: a triple comparison
This study reports three TAM-based models with perceived usefulness defined from the three assessment perspectives to find out which one explains, to a greater extent, the perceived usefulness of mobile devices in e-assessment for teachers (Horvat et al., 2012). These models will reveal how most teachers understand assessment, which will provide insight into the actual usefulness of assessment and how best to identify it in acceptance models and may lead to the analysis of the implications for teacher education and training.
1.2.1 Model A (MA): Perceived usefulness as a generic construct
The first model understands assessment from a classical perspective (Tyler, 1950), assuming the generic perceived usefulness of the TAM model for its study. This usefulness means that if teachers perceive mobile devices as useful for assessment, they will be more likely to use them in their classrooms (Davis, 1989). According to this assumption, it is proposed that:
• Perceived usefulness (PU) positively predicts teachers’ intention to use (BIU) mobile devices for e-assessment in future teaching practice (MAH1).
A second predictor of the intention to use mobile devices is, according to TAM, perceived ease of use: professionals who innovate in their teaching must make an additional effort in terms of technical knowledge and training, taking on a greater workload for effective inclusion (Thorsteinsson & Niculescu, 2013). This research focuses on teachers in training, a group assumed to find mobile devices easier to use given their native digital competence (Evans & Robertson, 2020); since this does not hold in all scenarios, however, assessing this construct in the new models is recommended (Kimmons et al., 2017). Therefore, it is proposed that:
• Perceived ease of use (PEOU) positively predicts teachers’ intention to use (BIU) mobile devices for e-assessment in future teaching practice (MAH2).
Furthermore, in the early stages of adopting technological tools, perceived ease of use can become an internal barrier that conditions both perceived usefulness and intention to use them (Sánchez-Prieto et al., 2019; Venkatesh & Bala, 2008). That is, if teachers find it difficult to use a mobile phone for assessment, not only will this condition the final intention to use, but they will also not find the devices useful in their teaching practice (Al-Gasawneh et al., 2022). The following is therefore proposed:
• Perceived ease of use (PEOU) positively predicts teachers’ perceived usefulness (PU) of mobile devices for e-assessment in their future teaching practice (MAH3).
The constructs of perceived usefulness, perceived ease of use, and behavioural intention from the TAM model are preserved in this study, but the construct of attitude toward using has been removed due to its limited moderating effect (Hu et al., 2003), thus simplifying the instrument by reducing the number of items. This elimination is justified by the evolution of the TAM model (Venkatesh et al., 2003) and by acceptance models applied in the educational field (Sánchez-Prieto et al., 2017a).
Having defined the three constructs proposed in TAM, the models presented in this paper are extended with the construct of self-efficacy, an antecedent of perceived usefulness according to the literature (Nikou & Economides, 2017b). Previous studies have shown that experience is a relevant factor in the use of mobile devices: the higher the level of skill and dexterity in their use, the less effort is required to use them (Anderson, 1996), which is related to perceived usefulness.
Self-efficacy is defined in research as an individual’s perception of their ability to use mobile devices to perform certain tasks (Nikou & Economides, 2017a). In the educational context, it can be inferred that the higher the level of skill with the use of devices (self-efficacy), the greater the ease of use and perceived usefulness in assessment processes (Pikkarainen et al., 2004; Wang et al., 2020). Based on these premises, it is proposed that:
• Mobile self-efficacy (MSE) will have a positive effect on teachers’ perceived usefulness (PU) in using mobile devices for e-assessment (MAH4).
Accordingly, the first model proposed is the TAM model extended with mobile self-efficacy as an antecedent of perceived usefulness, following Nikou and Economides’ (2017b) formulation of the construct (Fig. 2).
1.2.2 Model B (MB): Formative perceived usefulness and summative perceived usefulness
A second model is based on the understanding of summative and formative assessments as two distinct dimensions (MacLellan, 2001), converting the generic usefulness of the TAM model into a specific usefulness for each type of assessment.
The construction of the model has been based on the perceived usefulness construct of Davis (1989), the theoretical analysis of formative and summative assessments presented here, and Moore and Benbasat’s (1991) proposal for modelling. These new usefulness constructs are also derived from Scriven’s (1991) distinction and the formulation of two parallel and distinct constructs. Therefore, the hypotheses regarding perceived usefulness are as follows:
• Perceived usefulness of formative assessment (PUFA) positively predicts teachers’ intention to use (BIU) mobile devices for e-assessment in future teaching practice (MBH1).
• Perceived usefulness of summative assessment (PUSA) positively predicts teachers’ intention to use (BIU) mobile devices for e-assessment in future teaching practice (MBH2).
In addition, the constructs of ease of use (PEOU), behavioural intention (BIU), and mobile self-efficacy (MSE) are maintained from the previous model, preserving the hypotheses to test the differences between the models. Since there are two perceived usefulness constructs, PEOU and MSE each give rise to a double hypothesis, one for each usefulness, such that:
• Perceived ease of use (PEOU) positively predicts teachers’ intention to use (BIU) mobile devices for e-assessment in their future teaching practice (MBH3).
• Perceived ease of use (PEOU) positively predicts teachers’ perceived usefulness of mobile devices in formative assessment (PUFA) for e-assessment in their future teaching practice (MBH4).
• Perceived ease of use (PEOU) positively predicts teachers’ perceived usefulness of mobile devices in summative assessment (PUSA) for e-assessment in their future teaching practice (MBH5).
• Mobile self-efficacy (MSE) will have a positive impact on teachers’ perceived usefulness of mobile devices in formative assessment (PUFA) for e-assessment (MBH6).
• Mobile self-efficacy (MSE) will have a positive impact on teachers’ perceived usefulness of mobile devices in summative assessment (PUSA) for e-assessment (MBH7).
The second model of the study thus combines the two perceived usefulness constructs (formative and summative) with the constructs of the TAM model and mobile self-efficacy (Fig. 3).
1.2.3 Model C (MC): Perceived usefulness of assessment as a formative construct
A third group of authors considers assessment as a combination of summative and formative assessments, assuming that the assessment process is the sum of its components (Dolin et al., 2018). This implies adopting, in acceptance models, a single construct that brings together the indicators defined in the previous model for the perceived usefulness of formative and summative assessments, establishing a single dimension of assessment, the assessment perceived usefulness (APU).
This dimension is understood as a formative construct, assuming that the sum of all indicators provides the overall measure of the dimension (Simonetto, 2012), the usefulness of assessment. Consequently, the following hypothesis is formulated for perceived usefulness:
• Perceived usefulness of assessment (APU) positively predicts teachers’ intention to use (BIU) mobile devices for e-assessment in their future teaching practice (MCH1).
As in the previous model (MB), the constructs of ease of use (PEOU), behavioural intention (BIU), and mobile self-efficacy (MSE) are kept from the original model, and the hypotheses of these constructs are maintained to carry out the triple comparison:
• Perceived ease of use (PEOU) positively predicts teachers’ intention to use (BIU) mobile devices for e-assessment in their future teaching practice (MCH2).
• Perceived ease of use (PEOU) positively predicts teachers’ perceived usefulness of mobile devices for e-assessment (APU) in their future teaching practice (MCH3).
• Mobile self-efficacy (MSE) will have a positive impact on teachers’ perceived usefulness of mobile devices for e-assessment (APU) (MCH4).
The third and final model combines a single construct on assessment (bringing together the items of the perceived usefulness of formative and summative assessments) with the constructs of the TAM model and self-efficacy as an antecedent of perceived usefulness (Fig. 4).
The methodology followed is presented in Sect. 2. The results are included in Sect. 3, whereas the discussion, main findings, limitations, and future prospects are in Sect. 4.
2 Methods
2.1 Measurement tool
To conduct this study, a questionnaire consisting of a set of socio-demographic variables and the items that compose the dimensions was designed. The socio-demographic variables included are age, gender, the academic degree of access to the master’s degree, teaching experience, and the mean number of hours of daily use of mobile devices. The second section consists of 27 items to assess acceptance through the study constructs using a Likert scale ranging from 1 to 7 (1 = strongly disagree, 7 = strongly agree) (Matas, 2018). The complete list of items can be found in the complementary database to this research, available at: https://bit.ly/3T3DK8q.
The measurement tool designed is based on the analysis of current models of technology acceptance assessment (Davis, 1989; Nikou & Economides, 2017a, 2017b; Venkatesh & Bala, 2008; Venkatesh & Davis, 2000) and the evaluation of comparative structural equation models developed in PLS-SEM in the field of m-learning (Alshurideh et al., 2020).
In its construction, the items related to behavioural intention, perceived ease of use, and perceived usefulness have been taken and adapted from the TAM3 model (Venkatesh & Bala, 2008), an evolution of the original proposal by Davis (1989), adapting them for mobile technologies in e-assessment. The items related to the perceived usefulness of formative and summative assessments (model B) have been developed by the authors based on the considerations of Moore and Benbasat (1991) and the delimitation of the classical concept of formative and summative assessments (Scriven, 1967). Finally, the mobile self-efficacy construct has been adapted from the original self-efficacy proposal (Anderson, 1996) and its adaptation to the field of mobile technologies (Nikou & Economides, 2017a).
The administration of the questionnaires, the entire design process, and results and data processing have been approved by the Research Ethics Committee of the University of Salamanca. Data collection was conducted in November 2022, with a total survey period of 28 days. The questionnaire was distributed through the LimeSurvey application (Engard, 2009).
2.2 Population and sample
The study population is composed of students enrolled in the Master’s Degree in Teacher Training for Secondary and Upper-Secondary Education, Vocational Training and Languages, which is specific training for teachers before access to the teaching profession, responsible for providing pedagogical knowledge to graduates in the areas of knowledge related to teaching in Secondary Education (Vilches & Gil, 2010). This study addresses the group of teachers in training as this is a period which, depending on its development, can be a barrier or a driver in technological changes and the training of future teachers (Sánchez-Prieto et al., 2019). The questionnaire was administered to the students of the University of Salamanca (N = 292), and 262 responses were collected.
2.3 Data analysis technique
The partial least squares structural equation modelling (PLS-SEM) has been applied in this study to analyse the data collected from the questionnaires. The analysis was carried out using the statistical software SmartPLS 3.2.9 (Hair et al., 2019; Sarstedt et al., 2016) and consisted of several steps: the evaluation of the measurement model (convergent validity and reliability analysis, collinearity and cross-loading checks), the evaluation of the structural model (study of the effects, effect sizes, and mediations) and the behavioural explanation of the model (predictive power and model fit indicators) (Bayaga & Kyobe, 2021; Hair et al., 2022).
This study relies on PLS-SEM given its characteristics: it presents a triple comparison focused on perceived usefulness in which one of the models treats the usefulness construct as formative, an analysis that is only possible using PLS-SEM (Ajms, 2015; Bollen, 2011).
3 Results
First, a description of the mean scores on the questionnaire items (which will later form the basis for the study of the different models) is presented (Fig. 5). The dimensions with the highest mean score were perceived ease of use (x̅ = 4.75), intention to use (x̅ = 4.27), and perceived usefulness of mobile devices in e-assessment (x̅ = 4.22). In contrast, the lowest mean scores were found in the usefulness of mobile devices for summative assessment (x̅ = 3.90), mobile self-efficacy (x̅ = 4.18), and perceived usefulness of mobile devices for formative assessments (x̅ = 4.18).
In an initial approximation to the differences between the perceived usefulness constructs across the models, future teachers rated the generic perceived usefulness, which does not distinguish between summative and formative assessments, higher than the specific constructs; however, since the items are formulated differently, these scores can hardly be compared in purely descriptive terms. The comparison between the perceived usefulness of formative assessment (x̅ = 4.18) and the perceived usefulness of summative assessment (x̅ = 3.90), in contrast, showed higher values for formative assessment. Therefore, it is suggested that future teachers find the use of mobile devices more useful in formative assessment, which is explored below through the structural equation analysis.
3.1 Evaluation of the measurement model
The evaluation of the measurement model included the study of reliability (Cronbach’s alpha and the composite reliability index) and convergent validity (average variance extracted, AVE). Data for the first two models (reflective models) and the reflective part of the third model (MC) are shown in Table 1. Regarding the loadings, following the recommendations of Hair et al. (2010), indicators with loadings higher than 0.5 were accepted, and those with a lower value were removed (BIU_03, PEOU_01, and MSE_05 were removed in the three models). After the removal, the reliability of the items was confirmed. The results also confirmed convergent validity through the AVE, Cronbach’s alpha (α), and composite reliability (CR), with values above 0.5, 0.7, and 0.6, respectively (Fornell & Larcker, 1981).
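These reliability and validity statistics follow standard closed-form formulas and can be checked outside SmartPLS. A minimal Python sketch using numpy (the loadings below are hypothetical, not the study’s values):

```python
import numpy as np

def convergent_validity(loadings: np.ndarray) -> dict:
    """Composite reliability (CR) and average variance extracted (AVE)
    from the standardized outer loadings of one reflective construct."""
    squared = loadings ** 2
    error = 1.0 - squared                  # indicator error variances
    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + error.sum())
    ave = squared.mean()
    return {"CR": cr, "AVE": ave}

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha from an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical loadings for a three-item construct; the construct passes
# when AVE > 0.5, alpha > 0.7, and CR > 0.6.
stats = convergent_validity(np.array([0.82, 0.78, 0.74]))
```

The sketch makes explicit that CR and AVE depend only on the outer loadings, while alpha requires the raw item responses.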
For the formative construct of assessment usefulness in the third model (MC), the values of the variance inflation factor (VIF), used to rule out collinearity problems (Hair et al., 2022), are presented in Table 2. Items with values lower than 5 are acceptable (Hair et al., 2021), and, as all items were below this value, the analysis continued. Subsequent bootstrapping with 5000 sub-samples (Hair et al., 2022) revealed low weights for the indicators. However, the recommendation is to retain items with low weights if their loadings are higher than 0.5 (Hair et al., 2021; Ramayah et al., 2017); hence, all items were retained even if they did not contribute to explaining the dimension. Under this criterion, the perceived usefulness of assessment dimension (MC) was maintained, although its items do not fully define it.
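The VIF check applied to the formative indicators can be reproduced by regressing each indicator on the others; the following is a sketch under the assumption of a plain numpy data matrix (not the study’s data):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor per column of an (n, k) indicator matrix:
    VIF_j = 1 / (1 - R2_j), with R2_j from regressing column j on the rest."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        # auxiliary regression of indicator j on the remaining indicators
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1 - (y - Z @ beta).var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

# Indicators with VIF < 5 are kept, matching the criterion applied above.
```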
Discriminant validity was assessed according to two criteria: the Fornell-Larcker criterion (Fornell & Larcker, 1981) and the heterotrait-monotrait ratio of correlations (HTMT) (Henseler et al., 2015). The results showed optimal discriminant validity for all three models (Table 3).
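Both discriminant validity criteria have simple numerical definitions. The helper functions below are an illustrative sketch (hypothetical names, not SmartPLS output): HTMT is computed for two constructs from their item matrices, and the Fornell-Larcker condition is checked from AVEs and construct correlations.

```python
import numpy as np

def htmt(X: np.ndarray, Y: np.ndarray) -> float:
    """Heterotrait-monotrait ratio (Henseler et al., 2015) for two
    reflective constructs measured by item matrices X and Y
    of shape (n_respondents, n_items)."""
    kx, ky = X.shape[1], Y.shape[1]
    R = np.corrcoef(np.hstack([X, Y]), rowvar=False)
    hetero = R[:kx, kx:].mean()                          # between constructs
    mono_x = R[:kx, :kx][np.triu_indices(kx, 1)].mean()  # within X
    mono_y = R[kx:, kx:][np.triu_indices(ky, 1)].mean()  # within Y
    return hetero / np.sqrt(mono_x * mono_y)

def fornell_larcker_ok(ave: np.ndarray, construct_corr: np.ndarray) -> bool:
    """Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed
    its correlations with every other construct."""
    root = np.sqrt(ave)
    off = construct_corr - np.diag(np.diag(construct_corr))
    return bool((root[:, None] > np.abs(off)).all())
```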
3.2 Structural validity of the model
The analysis of the structural validity of the models included the adjusted R2 values, i.e. the explained variance (Cohen, 1988). The first model (MA) explained 48% of the variance in the behavioural intention to use, the second model (MB) explained 58%, and the third (MC) explained 59%, making MC the model with the highest percentage of explained variance. Furthermore, the Stone-Geisser test (Geisser, 1974; Stone, 1974) showed positive Q2 values for each model, confirming their predictive relevance.
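The adjusted R2 values reported here follow the usual correction for the number of predictors; as a quick reference:

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjusted R2 for n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustration: with n = 262 respondents and 3 predictors, an R2 of 0.50
# shrinks only slightly, to about 0.494.
```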
The path coefficients (Hair et al., 2022) of the three models are shown in consecutive order in Fig. 6. In the first model (MA), usefulness positively predicts the intention to use mobile devices (MAH1 is accepted), and ease of use positively predicts the intention to use (MAH2 is accepted), but does not predict perceived usefulness (MAH3 is not confirmed). Regarding the second model (MB), the usefulness of formative assessment and the usefulness of summative assessment both positively predict the final intention to use (MBH1 and MBH2 are accepted), while ease of use is only a predictor of formative usefulness (MBH4 is accepted and MBH5 rejected). In the third model (MC), the formative construct “perceived usefulness of assessment” positively predicts the behavioural intention to use (MCH1 is accepted), but in this model ease of use neither predicts the intention to use nor the usefulness of mobile devices in assessment (MCH2 and MCH3 are not accepted).
Finally, mobile self-efficacy is confirmed as a predictor of perceived usefulness in all three proposed models: the generic perceived usefulness of TAM (MAH4 is accepted), the two differentiated usefulness constructs of the second model (MBH6 and MBH7 are accepted), and the assessment perceived usefulness of the third model (MCH4 is accepted).
The bootstrapping results (Davison & Hinkley, 1997) (Table 4) show the significance of the relationships proposed above, as well as the effect sizes of each of these relationships (small: 0.02 ≤ f2 < 0.15; medium: 0.15 ≤ f2 < 0.35; large: f2 ≥ 0.35) (Cohen, 1988).
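Cohen’s f2 behind these labels compares the explained variance with and without a given predictor; a minimal sketch:

```python
def f_squared(r2_full: float, r2_without: float) -> float:
    """Cohen's f2: loss of R2 when one predictor is dropped,
    scaled by the unexplained variance of the full model."""
    return (r2_full - r2_without) / (1 - r2_full)

def f2_label(f2: float) -> str:
    """Thresholds used above: small, medium, large."""
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"
```

For example, a predictor whose removal drops R2 from 0.50 to 0.40 has f2 = 0.20, a medium effect.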
Previous research has found that variables such as perceived ease of use can have a direct but also an indirect effect on other study variables (Rothmann, 2015). It is therefore important to evaluate these effects to determine whether perceived ease of use influences behavioural intention to use through other constructs, thus providing a complete understanding of the model. The indirect effects, their size, and significance are shown in Table 5. It follows from this table that ease of use has an indirect effect on behavioural intention to use only in model B, through the perceived usefulness of mobile devices for formative assessment. Moreover, self-efficacy has an indirect effect in all three models, acting through the perceived usefulness described in model A, the two perceived usefulness constructs (formative and summative) in model B, and the perceived usefulness of assessment in model C.
To conclude the analysis of the results, it should be emphasised that this study aimed to carry out a triple comparison between a model based on the generic usefulness of TAM (MA), a model with two differentiated usefulness constructs for formative and summative assessments (MB), and a model with a single usefulness combining formative and summative assessments (MC). After confirming the validity of the models (reliability of the items and validity), we describe the differences found in their effect sizes. MA, MB and MC can predict behavioural intention through perceived usefulness with a large effect size (except MBH2, which predicts it with a small effect size). Ease of use has a medium effect size on the behavioural intention in model A and on formative usefulness in model B. Finally, self-efficacy in using mobile devices has a large effect on perceived usefulness (MA) and perceived usefulness of assessment (MC), and a medium effect on perceived usefulness for summative and formative assessments (MB).
The predictive relevance (Q2) confirms that all three models have predictive power. According to this criterion, the best model is the one with the highest score in the comparison (Hair et al., 2011, 2013), with MC being the most parsimonious and generalisable model (Sharma & Kim, 2012), practically equal to MB. Considering the Akaike information criterion (AIC) (Bozdogan, 1987), model C is significantly better than model B, and both B and C outperform model A. This criterion, which has high predictive power and is stable for model comparison (Sharma et al., 2022), also confirms MC as the best model, followed by MB (Table 6).
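Several AIC formulations circulate for comparing PLS models (Sharma et al., 2022). As an illustrative assumption rather than the exact formula used by the software, a common regression-style variant is AIC = n ln(SSE/n) + 2k, where lower values indicate the better model:

```python
import numpy as np

def aic(residuals: np.ndarray, n_params: int) -> float:
    """AIC = n * ln(SSE / n) + 2k for a model with n_params parameters;
    the model with the lowest AIC is preferred."""
    n = residuals.size
    sse = float((residuals ** 2).sum())
    return n * np.log(sse / n) + 2 * n_params
```

Under this formulation, a model with smaller residuals or fewer parameters scores lower, which is how parsimony enters the comparison between MA, MB, and MC.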
After completing the description of the results, the discussion and conclusions drawn from the comparative study, as well as the limitations of the research and future prospects, are covered in Sect. 4.
4 Discussion
The study highlights the importance of the usefulness of mobile devices in assessment processes for teachers in training. In response to the research questions posed (R1 and R2), the model that best predicts the intention to use mobile devices is the one that considers perceived usefulness as a formative construct (MC). However, considering that seven of the ten items of the formative usefulness construct are not significant, it can be argued that the best model for predicting intention to use is model B, since it explains only 1% less variance than MC while retaining all items in two distinct constructs.
Therefore, the usefulness of assessment should be approached as two distinct constructs, since both the reflective model (MB) and the formative model (MC), which specifically consider the usefulness of assessment, perform better than Davis' (1989) generic model. Future acceptance studies should not address usefulness in a non-specific way, since doing so would ignore the differences, confirmed in this study, that teachers draw between the two assessment modalities when using mobile devices.
4.1 Implications of assessment for technology adoption models
Previous studies have shown the relevance of generic perceived usefulness in models of technology adoption in the field of m-learning (Harchay et al., 2019), but to date no literature has been published on a perceived usefulness construct that addresses the particularities of assessment; this work therefore contributes two new models specific to the field. It highlights the relevance of assessment in teaching processes, interpreted from the perspective of the usefulness of technology for this purpose, making its measurement crucial for the study of the intention to use technology in teaching.
The results of the second model (MB), which addresses formative and summative assessment in differentiated constructs, suggest that teachers attach greater importance to the use of mobile technology in formative assessment, which also acts as a stronger predictor of the intention to use mobile technology in assessment. This is in line with the literature: formative assessment is perceived as more flexible and open to innovation (Nikou & Economides, 2021), while summative assessment remains reluctant to incorporate new techniques into its processes (Shepard, 2006).
The study also supports the association between self-efficacy in the use of mobile devices and perceived usefulness in all three models, which confirms that self-efficacy is an influential factor related to the perceived usefulness of technology in assessment (Rahmawati, 2019). This finding underlines the potential of ICT in the teaching–learning process: teachers who feel prepared and effective in the use of technology will perceive much greater usefulness for assessment processes, and their intention to use technology and to shift methodologies towards virtual environments will be correspondingly higher.
4.2 Implications of the study for teaching practice
Regarding teaching, the study reveals the importance of continuous teacher training in improving the perceived usefulness of mobile devices, especially for formative assessment. Work on the usefulness of mobile devices through new assessment techniques and tools is already a reality in continuous training in many countries (Jin et al., 2021), encouraging a shift in teaching towards the inclusion of technology in classroom practice.
Moreover, the models developed have confirmed self-efficacy in the use of mobile devices as a relevant factor in both usefulness and intention to use, in line with the studies by Nikou and Economides (2017a, 2017b). Therefore, teaching practice should promote training and the development of specific policies focused not only on the usefulness or the basics of assessment but also on making teachers effective in the use of mobile devices for their teaching–learning processes and their assessments.
Finally, it has also been found that ease of use is a significant factor only in its relationship with the perceived usefulness of formative assessment (model B). This can be explained by the regular, standardised use of mobile devices in modern society (Smith, 2021), especially among future teachers; ease of use was indeed the highest-scoring dimension in the descriptive analysis carried out.
4.3 Implications of the study for initial teacher training
The research confirms the need for training based on technological and pedagogical innovation rather than on the mere use of mobile devices, a less relevant construct in these assessment models. Such innovation should focus on self-efficacy in the use of devices and on teachers' perceived usefulness of those devices for assessment, assuming that training in their basic use has already been completed (Domingo-Coscollola et al., 2020).
This approach to training in assessment through mobile devices, centred on self-efficacy and technological usefulness, will be highly relevant for producing future teachers with a strong intention to use, who are up to date in assessment techniques and in the use of mobile devices for this purpose, and who can support the transition of education towards the new educational reality (Souabi et al., 2021). Ultimately, these changes in the training of future teachers require the establishment of new training policies in higher education institutions and the design of new, updated study plans, with technologically competent teaching staff and innovation as the focus of training, reaffirming the need for an education mediated by technology (Canales-García et al., 2020; Skulmowski & Rey, 2020).
5 Conclusions
Technology adoption in education has been addressed in numerous models, mostly in the field of e-learning and in studies related to methodology, but there is little research focused on the technological acceptance of mobile devices in e-assessment (Alrfooh & Lakulu, 2020).
This study has not only explored this area of the teaching process in depth but has also provided an exhaustive analysis, grounded in the theoretical currents on the conception of assessment and the study of its usefulness, based on the evaluation of three different technology acceptance models in future teachers. This research contributes to the advancement of knowledge from the following perspectives:
(1) First, this research supports the differences between summative and formative assessment from the teacher's perspective and the importance of modelling them specifically as two distinct usefulness constructs of mobile devices for assessment in technology adoption models, explaining a higher percentage of the intention to use than the generic TAM model.
(2) Second, formative assessment shows greater relevance than summative assessment in the specific adoption models (MB, MC). The perceived usefulness of formative assessment explains a higher intention to use in the model with two constructs (MB), and more items are representative of formative than of summative usefulness in the model with a formative construct of perceived usefulness of assessment (MC).
(3) Third, the results show the high predictive power of self-efficacy in the use of mobile devices on the perceived usefulness of assessment, making it an appropriate construct in all three proposed models. By contrast, ease of use is a factor of lesser relevance and explanatory power, possibly as a consequence of the current digital competence of teachers.
(4) Finally, both the initial training of future teachers and in-service teacher training should focus on developing formative assessment, the effective use of mobile devices in teaching, and their usefulness in assessment processes, in order to increase teachers' intention to use mobile technology in assessment.
This study has some limitations. From a methodological perspective, the results may have been conditioned by sample access: despite trying to ensure heterogeneity by administering the questionnaire to all students enrolled in the master's degree, all participants share training within the same academic programme. Future studies could widen participation to students from other institutions and geographical areas, leading to more generalisable results. Future research could also consider the specific usefulness of summative and formative assessment as the main usefulness constructs of the model, extend the study, and analyse its relationship with other related antecedents.
References
Abd-Karim, R., Abu, A., Adnan, A., & Suhandoko, A. (2018). The Use of Mobile Technology in Promoting Education 4.0 for Higher Education. 2, 34–39. https://doi.org/10.26666/rmp.ajtve.2018.3.6
Ahmad, B., & Bhat, G. J. (2019). Formative and summative evaluation techniques for improvement of learning process. European Journal of Business & Social Sciences, 7(5), 776–785.
Ajms, E. (2015). Structure Equation Modeling Basic Assumptions and Concepts: A Novices Guide. Asian Journal of Management Sciences, 3(1), Article 1. https://www.ajmsjournal.com/index.php/ajms/article/view/70
Al-Emran, M., Arpaci, I., & Salloum, S. A. (2020). An empirical examination of continuous intention to use m-learning: an integrated model. Education and Information Technologies, 25(4), 2899–2918. https://doi.org/10.1007/s10639-019-10094-2
Al-Gasawneh, J. A., Al Khoja, B., Al-Qeed, M. A., Nusairat, N. M., Hammouri, Q., & Anuar, M. M. (2022). Mobile-customer relationship management and its effect on post-purchase behavior: The moderating of perceived ease of use and perceived usefulness. https://digitallibrary.aau.ac.ae/handle/123456789/672
Aljawarneh, S. A. (2020). Reviewing and exploring innovative ubiquitous learning tools in higher education. Journal of Computing in Higher Education, 32(1), 57–73. https://doi.org/10.1007/s12528-019-09207-0
Ally, M., & Prieto-Blázquez, J. (2014). What is the future of mobile learning in education? RUSC. Universities and Knowledge Society Journal, 11(1), Article 1. https://doi.org/10.7238/rusc.v11i1.2033
Alrfooh, A., & Lakulu, M. (2020). A systematic review of mobile-based assessment acceptance studies from 2009 to 2019. Journal of Theoretical and Applied Information Technology, 97(20), 1–25.
Alshurideh, M., Al Kurdi, B., Salloum, S. A., Arpaci, I., & Al-Emran, M. (2020). Predicting the actual use of m-learning systems: a comparative approach using PLS-SEM and machine learning algorithms. Interactive Learning Environments, 1–15. https://doi.org/10.1080/10494820.2020.1826982
Anderson, A. A. (1996). Predictors of computer anxiety and performance in information systems. Computers in Human Behavior, 12(1), 61–77. https://doi.org/10.1016/0747-5632(95)00019-4
Anisimova, T. I., Sabirova, F. M., & Shatunova, O. V. (2020). Formation of Design and Research Competencies in Future Teachers in the Framework of STEAM Education. International Journal of Emerging Technologies in Learning (IJET), 15(02), Article 02. https://doi.org/10.3991/ijet.v15i02.11537
Baier, F., & Kunter, M. (2020). Construction and validation of a test to assess (pre-service) teachers’ technological pedagogical knowledge (TPK). Studies in Educational Evaluation, 67, 100936. https://doi.org/10.1016/j.stueduc.2020.100936
Bayaga, A., & Kyobe, M. (2021). PLS-SEM technique and phases of analysis – implications for information systems’ exploratory design researchers. 2021 Conference on Information Communications Technology and Society (ICTAS), 46–51. https://doi.org/10.1109/ICTAS50802.2021.9395029
Bernacki, M. L., Greene, J. A., & Crompton, H. (2020). Mobile technology, learning, and achievement: advances in understanding and measuring the role of mobile technology in education. Contemporary Educational Psychology, 60, 101827. https://doi.org/10.1016/j.cedpsych.2019.101827
Bizzo, E. (2021). Aceptación y adopción del e-learning en los países en desarrollo: Una revisión de la literatura. Ensaio: Avaliação e Políticas Públicas em Educação, 30, 458–483.
Black, P. J. (1993). Formative and summative assessment by teachers. Studies in Science Education, 21(1), 49–97. https://doi.org/10.1080/03057269308560014
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. https://doi.org/10.1080/0969595980050102
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Formative and summative assessment: Can they serve learning together? Annual meeting of the American Educational Research Association, Chicago.
Bollen, K. A. (2011). Evaluating effect, composite, and causal indicators in structural equation models. MIS Quarterly, 35(2), 359–372. https://doi.org/10.2307/23044047
Bozdogan, H. (1987). Model selection and Akaike’s Information Criterion (AIC): the general theory and its analytical extensions. Psychometrika, 52(3), 345–370. https://doi.org/10.1007/BF02294361
Buchholtz, N. F., Krosanke, N., Orschulik, A. B., & Vorhölter, K. (2018). Combining and integrating formative and summative assessment in mathematics teacher education. ZDM Mathematics Education, 50(4), 715–728. https://doi.org/10.1007/s11858-018-0948-y
Burbules, N. C., Fan, G., & Repp, P. (2020). Five trends of education and technology in a sustainable future. Geography and Sustainability, 1(2), 93–97. https://doi.org/10.1016/j.geosus.2020.05.001
Canales-García, A., Fernández-Valverde, M., & Ulate-Solís, G. (2020). Aprender y enseñar con recursos TIC: experiencias innovadoras en la formación docente universitaria. Ensayos Pedagógicos, 15(1), 235–248.
Castañeda-Vázquez, C., Espejo-Garcés, T., Zurita-Ortega, F., & Fernández-Revelles, A. (2019). La formación de los futuros docentes a través de la gamificación, tic y evaluación continua. SPORT TK-Revista EuroAmericana de Ciencias del Deporte, 8(2), Article 2. https://doi.org/10.6018/sportk.391751
Ciobanu, R.-C. (2022). M-learning and E-learning Educational Solutions Impact in the COVID-19 Pandemic. Informatica Economica, 26(3), 64–73.
Clark, R. M., Kaw, A. K., & Braga Gomes, R. (2022). Adaptive learning: helpful to the flipped classroom in the online environment of COVID? Computer Applications in Engineering Education, 30(2), 517–531. https://doi.org/10.1002/cae.22470
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2.a ed.). Routledge. https://doi.org/10.4324/9780203771587
Cosi, A., Voltas, N., Lázaro-Cantabrana, J. L., Morales, P., Calvo, M., Molina, S., & Quiroga, M. Á. (2020). Formative assessment at university through digital technology tools. Profesorado, Revista de Currículum y Formación del Profesorado, 24(1), 164–183. https://doi.org/10.30827/profesorado.v24i1.9314
Criollo, S., Guerrero-Arias, A., Jaramillo-Alcázar, Á., & Luján-Mora, S. (2021). Mobile Learning Technologies for Education: Benefits and Pending Issues. Applied Sciences, 11(9), Article 9. https://doi.org/10.3390/app11094111
Cruz-Benito, J., Sánchez-Prieto, J. C., Therón, R., & García-Peñalvo, F. J. (2019). Measuring Students’ Acceptance to AI-Driven Assessment in eLearning: Proposing a First TAM-Based Research Model. In P. Zaphiris & A. Ioannou (Eds.), Learning and Collaboration Technologies. Designing Learning Experiences (pp. 15–25). Springer International Publishing. https://doi.org/10.1007/978-3-030-21814-0_2
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Davison, A. C., & Hinkley, D. V. (1997). Bootstrap methods and their application. Cambridge University Press. https://doi.org/10.1017/CBO9780511802843
Dixson, D. D., & Worrell, F. C. (2016). Formative and summative assessment in the classroom. Theory into Practice, 55(2), 153–159. https://doi.org/10.1080/00405841.2016.1148989
Dolin, J., Black, P., Harlen, W., & Tiberghien, A. (2018). Exploring Relations Between Formative and Summative Assessment. In J. Dolin and R. Evans (Eds.), Transforming Assessment: Through an Interplay Between Practice, Research and Policy (pp. 53–80). Springer International Publishing. https://doi.org/10.1007/978-3-319-63248-3_3
Domingo-Coscollola, M., Bosco-Paniagua, A., Carrasco-Segovia, S., & Sánchez-Valero, J.-A. (2020). Fomentando la competencia digital docente en la universidad: Percepción de estudiantes y docentes. Revista de Investigación Educativa, 38(1), Article 1. https://doi.org/10.6018/rie.340551
Engard, N. C. (2009). LimeSurvey. Public Services Quarterly, 5(4), 272–273. https://doi.org/10.1080/15228950903288728
Evans, C., & Robertson, W. (2020). The four phases of the digital natives debate. Human Behavior and Emerging Technologies, 2(3), 269–277. https://doi.org/10.1002/hbe2.196
Fishbein, M., & Ajzen, I. (1975). Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Addison-Wesley.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.2307/3151312
García-Aretio, L. (2019). Necesidad de una educación digital en un mundo digital. RIED. Revista iberoamericana de educación a distancia. https://doi.org/10.5944/ried.22.2.23911
Geisser, S. (1974). A predictive approach to the random effect model. Biometrika, 61(1), 101–107. https://doi.org/10.2307/2334290
Gómez-Galán, J. (2020). Media Education in the ICT Era: Theoretical Structure for Innovative Teaching Styles. Information, 11(5), Article 5. https://doi.org/10.3390/info11050276
Gómez-Ruiz, M. Á., Vázquez-Recio, R., López-Gil, M., & Ruiz-Romero, A. (2022). La pesadilla de la evaluación: Análisis de los sueños de estudiantes universitarios. Revista Iberoamericana de Evaluación Educativa, 15(1), 139–160. https://doi.org/10.15366/riee2022.15.1.008
Guardia, J. J., Del Olmo, J. L., Roa, I., & Berlanga, V. (2019). Innovation in the teaching-learning process: The case of Kahoot! On the Horizon, 27(1), 35–45. https://doi.org/10.1108/OTH-11-2018-0035
Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate Data Analysis. Prentice-Hall.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: indeed a silver bullet. Journal of Marketing Theory and Practice, 19(2), 139–152. https://doi.org/10.2753/MTP1069-6679190202
Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. https://doi.org/10.1108/EBR-11-2018-0203
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2013). Partial Least Squares Structural Equation Modeling: Rigorous Applications, Better Results and Higher Acceptance. Long Range Planning, 46(1), 1–2.
Hair, J. F., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). Evaluation of Formative Measurement Models. In J. F. Hair Jr., G. T. M. Hult, C. M. Ringle, M. Sarstedt, N. P. Danks, & S. Ray (Eds.), Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook (pp. 91–113). Springer International Publishing. https://doi.org/10.1007/978-3-030-80519-7_5
Hair, J., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2022). A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). https://doi.org/10.1007/978-3-030-80519-7
Hao, S., Dennen, V. P., & Mei, L. (2017). Influential factors for mobile learning acceptance among Chinese users. Educational Technology Research and Development, 65(1), 101–123. https://doi.org/10.1007/s11423-016-9465-2
Harchay, A., Berguiga, A., Cheniti-Belcadhi, L., & Braham, R. (2019). Student Perception of Mobile Self-assessment: An Evaluation of the Technology Acceptance Model. Interaction Design and Architecture(s), 2019, 109–124. https://doi.org/10.55612/s-5002-041-008
Harlen, W., & James, M. (1997). Assessment and Learning: differences and relationships between formative and summative assessment. Assessment in Education: Principles, Policy & Practice, 4(3), 365–379. https://doi.org/10.1080/0969594970040304
Hébert, C., Jenson, J., & Terzopoulos, T. (2021). “Access to technology is the major challenge”: Teacher perspectives on barriers to DGBL in K-12 classrooms. E-Learning and Digital Media, 18, 204275302199531. https://doi.org/10.1177/2042753021995315
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8
Horvat, L., Balen, J., & Martinović, G. (2012). Proposal of mLearning system for written exams. Proceedings ELMAR, 2012, 345–348.
Hossain, S. F. A., Shan, X., Nurunnabi, M., Tushar, H., Mohsin, A. K. M., & Ahsan, F. T. (2021). Opportunities and Challenges of M-Learning During the COVID-19 Pandemic: A Mixed Methodology Approach. In E-Collaboration Technologies and Strategies for Competitive Advantage Amid Challenging Times (pp. 210–227). IGI Global. https://doi.org/10.4018/978-1-7998-7764-6.ch007
Hu, P.J.-H., Clark, T. H. K., & Ma, W. W. (2003). Examining technology acceptance by school teachers: a longitudinal study. Information & Management, 41(2), 227–241. https://doi.org/10.1016/S0378-7206(03)00050-8
Jin, Y., Lin, C.-L., Zhao, Q., Yu, S.-W., & Su, Y.-S. (2021). A study on traditional teaching method transferring to e-learning under the covid-19 pandemic: from chinese students’ perspectives. Frontiers in Psychology, 12, 632787. https://doi.org/10.3389/fpsyg.2021.632787
Kimmons, R., Clark, B., & Lim, M. (2017). Understanding web activity patterns among teachers, students and teacher candidates. Journal of Computer Assisted Learning, 33(6), 588–596. https://doi.org/10.1111/jcal.12202
Knight, P. T. (2002). Summative Assessment in Higher Education: Practices in disarray. Studies in Higher Education, 27(3), 275–286. https://doi.org/10.1080/03075070220000662
Lau, A. M. S. (2016). ‘Formative good, summative bad?’ – a review of the dichotomy in assessment literature. Journal of Further and Higher Education, 40(4), 509–525. https://doi.org/10.1080/0309877X.2014.984600
MacLellan, E. (2001). Assessment for learning: the differing perceptions of tutors and students. Assessment & Evaluation in Higher Education, 26(4), 307–318. https://doi.org/10.1080/02602930120063466
Marín-Díaz, V., Sampedro, B. E., Aznar, I., & Trujillo, J. M. (2022). Perceptions on the use of mixed reality in mobile environments in secondary education. Education + Training,65(2), 312–323. https://doi.org/10.1108/ET-06-2022-0248
Matas, A. (2018). Diseño del formato de escalas tipo Likert: Un estado de la cuestión. Revista Electrónica De Investigación Educativa, 20(1), 38–47.
Mejía-Pérez, O. (2012). De la evaluación tradicional a una nueva evaluación basada en competencias. Revista Electrónica Educare, 16(1), Article 1. https://doi.org/10.15359/ree.16-1.3
Moccozet, L., Benkacem, O., Berisha, E., Trindade, R. T., & Bürgi, P.-Y. (2019). A versatile and flexible e-assessment framework towards more authentic summative examinations in higher-education. International Journal of Continuing Engineering Education and Life Long Learning, 29(3), 211–229. https://doi.org/10.1504/IJCEELL.2019.101032
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222.
Moreno-Guerrero, A.-J., Rodríguez-Jiménez, C., Gómez-García, G., & Ramos Navas-Parejo, M. (2020). Educational Innovation in Higher Education: Use of Role Playing and Educational Video in Future Teachers’ Training. Sustainability, 12(6), Article 6. https://doi.org/10.3390/su12062558
Morris, R., Perry, T., & Wardle, L. (2021). Formative assessment and feedback for learning in higher education: a systematic review. Review of Education, 9(3), e3292. https://doi.org/10.1002/rev3.3292
Mutambara, D., & Bayaga, A. (2021). Learners’ and teachers’ acceptance of mobile learning: an exploratory study in a developing country. International Journal of Learning Technology, 16(2), 90–108. https://doi.org/10.1504/IJLT.2021.117763
Nikou, S., & Economides, A. (2017b). Mobile-based assessment: Investigating the factors that influence behavioral intention to use. Computers & Education. https://www.sciencedirect.com/science/article/pii/S0360131517300283
Nikou, S. A., & Economides, A. A. (2021). A Framework for Mobile-Assisted Formative Assessment to Promote Students’ Self-Determination. Future Internet, 13(5), 116. https://doi.org/10.3390/fi13050116
Nikou, S., & Economides, A. (2016). An outdoor mobile-based assessment activity: measuring students’ motivation and acceptance. International Journal of Interactive Mobile Technologies (iJIM), 10, 11–17. https://doi.org/10.3991/ijim.v10i4.5541
Nikou, S., & Economides, A. (2017a). Mobile-based assessment: integrating acceptance and motivational factors into a combined model of self-determination theory and technology acceptance. Computers in Human Behavior, 68, 83–95. https://doi.org/10.1016/j.chb.2016.11.020
Olimov, S. S. (2021). The innovation process is a priority in the development of pedagogical sciences. European Journal of Research Development and Sustainability, 2(3), 86–88.
Patton, M. Q. (1996). A world larger than formative and summative. Evaluation Practice, 17(2), 131–144. https://doi.org/10.1177/109821409601700205
Pikkarainen, T., Pikkarainen, K., Karjaluoto, H., & Pahnila, S. (2004). Consumer acceptance of online banking: an extension of the technology acceptance model. Internet Research, 14(3), 224–235. https://doi.org/10.1108/10662240410542652
Rahmawati, R. N. (2019). Self-efficacy and use of e-learning: a theoretical review Technology Acceptance Model (TAM). American Journal of Humanities and Social Sciences Research (AJHSSR), 3(5), 41–55.
Ramayah, T., Hwa, C., Chuah, F., Ting, H., & Memon, M. (2017). Chapter 8: Assessment of Formative Measurement Models. In Partial Least Squares Structural Equation Modeling (PLS-SEM) Using SmartPLS 3.0: An Updated and Practical Guide to Statistical Analysis. Pearson.
Reisoğlu, İ, & Çebi, A. (2020). How can the digital competences of pre-service teachers be developed? Examining a case study through the lens of DigComp and DigCompEdu. Computers & Education, 156, 103940. https://doi.org/10.1016/j.compedu.2020.103940
Rothmann, S. (2015). A structural model of technology acceptance. South African Journal of Industrial Psychology, 41(1), 1–2.
Sánchez-Prieto, J. C., Olmos-Migueláñez, S., & García-Peñalvo, F. J. (2017a). MLearning and pre-service teachers: an assessment of the behavioral intention using an expanded TAM model. Computers in Human Behavior, 72, 644–654. https://doi.org/10.1016/J.CHB.2016.09.061
Sánchez-Prieto, J. C., Olmos-Migueláñez, S., & García-Peñalvo, F. J. (2017b). ¿Utilizarán los futuros docentes las tecnologías móviles? Validación de una propuesta de modelo TAM extendido. Revista de Educación a Distancia (RED), 52, Article 52. https://revistas.um.es/red/article/view/282191
Sánchez-Prieto, J., Hernández-García, Á., García-Peñalvo, F., Chaparro-Peláez, J., & Olmos, S. (2019). Break the walls! Second-order barriers and the acceptance of mlearning by first-year pre-service teachers. Computers in Human Behavior, 95, 158–167. https://doi.org/10.1016/j.chb.2019.01.019
Sar, A., & Misra, S. N. (2020). A study on policies and implementation of information and communication technology (ICT) in educational systems. Materials Today, 8. https://doi.org/10.1016/j.matpr.2020.10.507
Sarstedt, M., Hair, J. F., Ringle, C. M., Thiele, K. O., & Gudergan, S. P. (2016). Estimation issues with PLS and CBSEM: Where the bias lies! Journal of Business Research, 69(10), 3998–4010. https://doi.org/10.1016/j.jbusres.2016.06.007
Scriven, M. (1967). The Methodology of Evaluation. In R. W. Tyler, R. M. Gagne, & M. Scriven (Eds.), Perspectives of Curriculum Evaluation (pp. 39–83). Rand McNally.
Scriven, M. (1991). Beyond formative and summative evaluation. Teachers College Record, 92(6), 19–64. https://doi.org/10.1177/016146819109200603
Sharma, P. N., Liengaard, B. D., Hair, J. F., Sarstedt, M., & Ringle, C. M. (2022). Predictive model assessment and selection in composite-based modeling using PLS-SEM: Extensions and guidelines for using CVPAT. European Journal of Marketing. https://doi.org/10.1108/EJM-08-2020-0636
Sharma, P., & Kim, K. (2012). Model Selection in Information Systems Research Using Partial Least Squares Based Structural Equation Modeling. International Conference on Interaction Sciences. https://www.semanticscholar.org/paper/Model-Selection-in-Information-Systems-Research-Sharma-Kim/cfde34aa3bd19983b07dc16fc2801cdd377b05d7
Shepard, L. (2006). La evaluación en el aula. In R. Brennan (Ed.), Educational Measurement (4th ed., pp. 623–646). Praeger Westport.
Simonetto, A. (2012). Formative and reflective models: State of the art. Electronic Journal of Applied Statistical Analysis, 5(3), Article 3-7. https://doi.org/10.1285/i20705948v5n3p452
Skulmowski, A., & Rey, G. D. (2020). COVID-19 as an accelerator for digitalization at a German university: establishing hybrid campuses in times of crisis. Human Behavior and Emerging Technologies, 2(3), 212–216. https://doi.org/10.1002/hbe2.201
Smith, C. A. (2021). Development and Integration of Freely Available Technology into Online STEM Courses to Create a Proctored Environment During Exams. Journal of Higher Education Theory and Practice, 4. https://papers.iafor.org/submission59360/
Souabi, S., Retbi, A., Idrissi, M. K., & Bennani, S. (2021). Towards an Evolution of E-Learning Recommendation Systems: From 2000 to Nowadays. International Journal of Emerging Technologies in Learning (IJET), 16(06), Article 06. https://doi.org/10.3991/ijet.v16i06.18159
Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society: Series B (methodological), 36(2), 111–133. https://doi.org/10.1111/j.2517-6161.1974.tb00994.x
Sun, Y., Li, N., Hao, J. L., Di Sarno, L., & Wang, L. (2022). Post-COVID-19 Development of Transnational Education in China: Challenges and Opportunities. Education Sciences, 12(6), Article 6. https://doi.org/10.3390/educsci12060416
Terán-Guerrero, F. N. (2019). Acceptance of university students in the use of Moodle e-learning systems from the perspective of the TAM model. UNEMI, 12(29), 63–76.
Thorsteinsson, G., & Niculescu, A. (2013). Examining teachers’ mindset and responsibilities in using ICT. Studies in Informatics and Control, 22(2), 315–322. https://doi.org/10.24846/v22i3y201308
Tyler, R. (1950). Basic Principles of Curriculum and Instruction. University of Chicago Press.
Valverde-Berrocoso, J., Fernández-Sánchez, M. R., Dominguez, F. I. R., & Sosa-Díaz, M. J. (2021). The educational integration of digital technologies preCovid-19: Lessons for teacher education. PLoS ONE, 16(8), e0256283. https://doi.org/10.1371/journal.pone.0256283
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39, 273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x
Venkatesh, V., & Davis, F. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Vieira, H., & Ribeiro, C. P. (2018). Implementing Flipped Classroom in History: The reactions of eighth grade students in a Portuguese school. Yesterday and Today, 19, 35–49. https://doi.org/10.17159/2223-0386/2018/n18a3
Vilches, A., & Gil, D. (2010). Máster de formación inicial del profesorado de enseñanza secundaria. Algunos análisis y propuestas. Revista Eureka sobre Enseñanza y Divulgación de las Ciencias, 661–666.
Wang, R., Chen, L., & Solheim, I. (2020). Modeling dyslexic students’ motivation for enhanced learning in E-learning systems. The ACM Transactions on Interactive Intelligent Systems, 2. https://doi.org/10.1145/3341197
Wan-Sulaiman, W. N. A., & Mustafa, S. E. (2020). Usability elements in digital textbook development: a systematic review. Publishing Research Quarterly, 36(1), 74–101. https://doi.org/10.1007/s12109-019-09675-3
Acknowledgements
Not applicable.
Data Transparency
Study data can be provided by the corresponding author upon request.
Funding
Not applicable.
Author information
Contributions
Conceptualization, data curation, formal analysis, methodology, resources, investigation resources, writing – original draft, writing – review & editing: A.O-L. Conceptualization, data curation, formal analysis, methodology, supervision, writing – review & editing: J-C.S-P. Conceptualization, formal analysis, investigation, resources, supervision, writing – review & editing: S.O-M. All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Ethics approval and consent to participate
This research was approved by the Research Ethics Committee of the University of Salamanca, and all participants consented to take part.
Consent for publication
Not applicable.
Competing interests
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ortiz-López, A., Sánchez-Prieto, J.C. & Olmos-Migueláñez, S. Perceived usefulness of mobile devices in assessment: a comparative study of three technology acceptance models using PLS-SEM. J. New Approaches Educ. Res. 13, 2 (2024). https://doi.org/10.1007/s44322-023-00001-6