1 Inclusion, digital competences and ICT

Digital society is already a reality. In previous decades, education incorporated Information and Communication Technologies (ICTs) into teaching practice; the COVID-19 pandemic, however, has shown that there is no turning back and that ICTs are here to stay. The far-reaching changes generated by the immersion of ICTs in the education system have created a demand for more flexible spaces that respond to the needs emerging from new teaching and learning situations, which in turn require strategies that increase the capacity of educators to carry out their practice regardless of the formative scenario in which they must teach (Ayale-Pérez & Joo-Nagata, 2019; Ilomäki et al., 2016).

Digital competence (i.e., the capacity to use technology and the cognitive capacity required to do so successfully) is essential within the current and future process of social inclusion. The phenomenon of ICTs and their impact on education can be viewed from a dual perspective. On the one hand, they have contributed to the expansion of the digital divide, and their incorporation has promoted inequalities (e.g., people who have access vs. those who do not; people who have skills vs. those who do not) as well as differences not only in development spaces but also in the boundaries drawn by digital literacy (Nilholm, 2021; Vyrastekova, 2021; Mercader & Duran-Bellonch, 2021). On the other hand, ICTs present an opportunity for the inclusion of disadvantaged people (training, work, health, access to information, expression, etc.). Diversity and social inclusion are therefore directly linked to the use of ICTs in the so-called Information and Knowledge Society. However, it is important to take into account that the mere presence of digital technologies and materials does not guarantee an inclusion process; rather, their proper use and management favour social inclusion as long as several conditions are met, which Cabero & Córdoba (2009) highlighted as necessary for true educational digital inclusion: “Firstly, ICTs must be present; then, access to and use of technology; and lastly, a process of digital literacy must be generated to learn the symbolic languages of an information and knowledge society”.

From the standpoint of education, the aim is “to provide opportunities of professional and personal growth through meaningful experiences that lead to the development of useful skills, knowledge, aptitudes and habits to become a functional part of society”. To this end, social inclusion is especially important, as it generates a conglomerate of opportunities through the global acceptance of its members, insofar as it is considered an essential strategy to overcome the inequalities (personal, cultural and economic) that derive from difference. In this sense, digital inclusion carries a value of justice, as it improves quality of life through accessibility to digital services, digital literacy, the responsible use of ICTs, and access to education and the job market.

In this context, the lack of digital competences can favour educational marginalisation, since an educator who does not have the fundamental (digital) skills to perform well in the new school (i.e., one “pushed” and “transformed” by the events of recent years) may become obsolete. As Maestre et al. (2017) stated, “such cultural context leads to a certain marginalisation for those people who do not have the necessary competences to perform well in the current mutation and transformation of digital society”. As different authors have pointed out (Díaz-García et al., 2016; He & Zhu, 2017; Hsu, 2010; Misk Foundation, 2021), research suggests that better qualification in the use of technology increases the probability that educators will successfully integrate ICTs in their teaching. It is thus necessary to reinvent the teacher role and, consequently, the education systems.

The European Framework for the Digital Competence of Educators (DigCompEdu) (Barragán-Sánchez et al., 2021; Cabero-Almenara et al., 2011, 2020a, b; Redecker & Punie, 2017; Rodríguez-García et al., 2019) aims to support national, regional and local efforts to promote the digital competence of educators, providing a European space of reference with a common language and logic (Ilomäki et al., 2016; Mattar et al., 2022). The present study builds on that framework to establish this research line, through the design of a scale of teacher digital competences that measures different dimensions. This article focuses on the competence called “student empowerment”, in which accessibility, inclusion, personalisation and students’ active commitment to their own learning provide a view of the digital needs of an inclusive society (Derenzis et al., 2020; Lin et al., 2020).

In this study, we set out to validate the “DigCompEdu Check-In” scale, which educators use for self-reflection, allowing them to self-evaluate their strengths and weaknesses/needs or areas for improvement in digital learning. Given the importance of having a powerful instrument for self-evaluating the competences of educators, the objectives of the present study were to: a) analyse the reliability of the instrument through Cronbach’s alpha and McDonald’s omega; b) analyse the construct validity of the scale through the simple correlations obtained in an exploratory factor analysis; c) guarantee that the information obtained through the analyses and their different interpretations reflects the reality that is intended to be measured; and d) determine the variables and structure of the scale, showing the relationships between the different dimensions.

2 Method

2.1 Description of the sample

The study population comprised the faculty members of the public universities of Andalusia (Spain) in the academic year 2021/2022. Participants were selected using incidental or convenience criteria, depending on their availability to complete the questionnaire (Hernández-Sampieri et al., 2014).

A total of 2,262 faculty members from different public universities of Andalusia completed the questionnaire. Of the total sample, 1,236 (54.6%) were women and 1,026 (45.4%) were men, with two predominating age ranges: 50–54 years (37.3%) and 40–49 years (29.4%). Table 1 shows the data obtained regarding the university of origin, field of knowledge and years of teaching experience.

Table 1 University, field of knowledge and years of teaching experience of the participating faculty
Table 2 Reliability of the instrument and its dimensions obtained with Cronbach’s alpha and McDonald’s omega

As can be observed in Table 1, all public universities and fields of knowledge are represented. Over half of the participants (51.9%) have 20 or more years of teaching experience, which grants stability to the findings of this study, as these faculty members have developed their teaching careers over many years.

With respect to their digital profile, only 1.5% did not use technology in their teaching practice. As Fig. 1 shows, 24% of the faculty use technology during 51–75% of the classroom time, whereas almost 30% use it during 11–25% of the classroom time. Only 7.6% of the faculty spend 0–10% of the classroom time using technology.

Fig. 1 Time spent by the faculty using technology in the classroom

Fig. 2 Social networks used by the faculty

With respect to their competence in managing different technologies, the results indicate that, in their everyday practice, 81% used a computer, 70% used a tablet, 72.1% used a smartphone and 66.4% used the Internet.

As can be observed in Fig. 2, most of the sample stated that they use between one and three social networks, whereas only 5.2% do not use any social network.

2.2 Data-gathering instrument

The questionnaire used is currently being piloted with educators from all Member States of the EU (Joint Research Centre, 2019). It is worth pointing out that each competence is represented by a single item; thus, the most generic concept that encompasses the entirety of the specific content of the competence is selected. The 22 items that compose the questionnaire correspond to the 6 competence areas: professional commitment (4), digital resources (3), digital pedagogy (4), evaluation and feedback (3), student empowerment (3) and facilitating student digital competence (5). This article contributes the translation and adaptation of this instrument to the Spanish context (Appendix 1).

The questionnaire was administered online to all the teaching staff of the public universities; after two weeks, it was sent again to those who had not answered and to those whose messages had bounced because the email address was not correctly stated. In the end, the study gathered the answers of a total of 2,262 professors from different public universities in Andalusia.

2.3 Data gathering and analysis procedure

The instrument was administered online, using Google Forms, in the first months of 2021 in the different public universities of Andalusia. The data matrix was recoded for operational reasons, although such recoding does not affect the validity and reliability of the measure. The reliability, discriminant validity and convergent validity of the questionnaire were calculated using the following coefficients: Cronbach’s alpha, McDonald’s omega, composite reliability (CR), average variance extracted (AVE) and maximum shared variance (MSV).
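Although these coefficients were obtained with SPSS, the following minimal Python sketch illustrates how the two main reliability coefficients can be computed; the file name and item labels (e.g., A1–A4 for professional commitment) are hypothetical placeholders, not the authors' actual coding.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def mcdonald_omega(items: pd.DataFrame) -> float:
    """Omega total from the standardised loadings of a single-factor solution."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items)
    lam = fa.loadings_.ravel()
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

# Hypothetical export of the 22 Likert-type item responses, one column per item.
items = pd.read_csv("digcompedu_checkin_items.csv")
dim_a = items[["A1", "A2", "A3", "A4"]]  # e.g., the professional commitment items
print(cronbach_alpha(dim_a), mcdonald_omega(dim_a))
```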

To determine the construct validity of the instrument, an exploratory factor analysis (EFA) was carried out. Factor extraction was performed using the principal components method, and the obtained factors were orthogonally rotated using the varimax method with Kaiser normalisation. The SPSS statistical software was used to carry out all statistical analyses.
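For illustration, the same EFA pipeline (adequacy checks, principal components extraction and varimax rotation) can be reproduced outside SPSS, for instance with the Python factor_analyzer package; this is a sketch that reuses the hypothetical item file above, not the authors' actual script.

```python
import pandas as pd
from factor_analyzer import (FactorAnalyzer, calculate_bartlett_sphericity,
                             calculate_kmo)

items = pd.read_csv("digcompedu_checkin_items.csv")  # hypothetical export

# Applicability checks: KMO sampling adequacy and Bartlett's sphericity test.
_, kmo_total = calculate_kmo(items)
chi2, p_value = calculate_bartlett_sphericity(items)
print(f"KMO = {kmo_total:.3f}; Bartlett chi2 = {chi2:.1f} (p = {p_value:.4f})")

# Six factors extracted by principal components and varimax-rotated.
efa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
efa.fit(items)
print(efa.loadings_)                     # rotated component matrix (cf. Table 4)
print(efa.get_factor_variance()[2][-1])  # cumulative proportion of variance
```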

After defining the number of factors, a confirmatory factor analysis (CFA) was performed to confirm the variables and structure of the instrument by means of structural equation modeling with path diagrams (Ruiz et al., 2010). The estimation method used to test the theoretical model was weighted least squares (WLS), which provides consistent estimates in samples that do not meet normality criteria (Ruiz et al., 2010). For this last procedure, the AMOS statistical software was used. The Kolmogorov–Smirnov goodness-of-fit test was also carried out, confirming that the data are not normally distributed (p < 0.001 for all items).
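The CFA itself was run in AMOS; a rough open-source analogue could look like the sketch below, using the semopy package (assuming its WLS objective, which is not guaranteed to match AMOS's implementation exactly) and SciPy's Kolmogorov–Smirnov test. The lavaan-style measurement model shows only the first two dimensions, and all item labels are again hypothetical.

```python
import pandas as pd
import semopy
from scipy import stats

items = pd.read_csv("digcompedu_checkin_items.csv")  # hypothetical export

# Normality screen: Kolmogorov-Smirnov test per standardised item.
for col in items.columns:
    z = (items[col] - items[col].mean()) / items[col].std(ddof=1)
    d_stat, p = stats.kstest(z, "norm")
    print(f"{col}: D = {d_stat:.3f}, p = {p:.4f}")

# Measurement model (dimensions C-F are declared analogously).
desc = """
ProfessionalCommitment =~ A1 + A2 + A3 + A4
DigitalResources       =~ B1 + B2 + B3
"""
model = semopy.Model(desc)
model.fit(items, obj="WLS")          # WLS estimation, as in the study
print(semopy.calc_stats(model).T)    # chi-squared, GFI, NFI and other indices
```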

3 Results

The reliability of the instrument was analysed using Cronbach’s alpha and McDonald’s omega, both globally and for each of its dimensions. Cronbach’s alpha is the most widely used method for estimating internal consistency, although some authors have advocated the omega coefficient, as it is a more stable estimate that better reflects the true reliability level (Ventura-León & Caycho-Rodríguez, 2017).
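For reference, the standard definitions are given below, with k items, item variances σᵢ², total-score variance σ_X², and standardised loadings λᵢ from a single-factor model; alpha weights all items equally, while omega weights them by their loadings, which is what makes it the more stable estimate:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)
\qquad
\omega = \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^2}
              {\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^2 + \sum_{i=1}^{k}\bigl(1-\lambda_i^2\bigr)}
```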

The results show a global Cronbach’s alpha of 0.946 and a global McDonald’s omega of 0.967. Both indices are very high (> 0.9), indicating that the questionnaire has a high degree of reliability (O’Dwyer & Bernauer, 2016). The partial results also show high reliability indices for each dimension of the instrument: professional commitment (0.749 and 0.842), digital resources (0.628 and 0.807), digital pedagogy (0.841 and 0.821), evaluation and feedback (0.788 and 0.790), student empowerment (0.733 and 0.784) and facilitating student digital competence (0.853 and 0.898). As can be observed, all dimensions present an alpha above 0.7, except digital resources, and all omega values exceed 0.8, except those of evaluation and feedback and student empowerment. Moreover, according to Fox (1987), values from 0.700 and even 0.600 are acceptable when opinions or critiques are being estimated, and when the scales are applied in different contexts (Barclay et al., 1995).

To determine the validity, we analysed the simple correlations of each item with its theoretical dimension or construct through an exploratory factor analysis. The results are shown in Tables 3 and 4.

Table 3 Correlations of the items with the associated dimensions
Table 4 Rotated component matrix

All items obtained correlations above 0.700 with their associated dimension. Therefore, all items are integrated in their respective dimensions (Carmines & Zeller, 1979).

The construct validity of the test was assessed through an exploratory factor analysis, whose applicability was first confirmed with the KMO test, which yielded a coefficient of 0.971, and Bartlett’s sphericity test, with a significance (p-value) below 0.001, indicating that the factor analysis can be applied.

The results, which explain 73.53% of the variance, confirm the 6 theoretical factors proposed.

The removal of items A4, C3 and F4 (loadings < 0.7) guarantees the content validity and increases the explained variance. To complement the theoretical model proposed by the exploratory factor analysis (EFA), we performed a confirmatory factor analysis (CFA), which allowed us to compare the results. Some authors highlight the suitability of this confirmatory analysis for validating an instrument adapted to another language or applied to a different population, since it is otherwise not possible to guarantee that the items are understood in the same way, or that the latent variables or factors have the same conceptualisation (Batista-Foguet et al., 2004). The aim is to confirm the variables and structure of the scale, showing the existing relationships between the different dimensions. Figure 3 presents the proposed structure diagram with the item-dimension and dimension-dimension correlation indices.

Fig. 3 Structure diagram of the «DigCompEdu Check-In» questionnaire

As can be observed in the model, the six latent variables or dimensions are adequately correlated with their items; likewise, the correlations between dimensions are strong (between 0.63 and 0.95), and the only weak correlation appears between dimension B and its item B3 (0.49). This was noted among the limitations of the study, although the results confirm the proposed theoretical model.

To evaluate the quality of the model, the goodness-of-fit indices were calculated. Table 5 shows the obtained values and the reference values for the fit of the model according to Lévy et al. (2006): chi-squared (CMIN), goodness-of-fit index (GFI), parsimony goodness-of-fit index (PGFI), normed fit index (NFI) and parsimony normed fit index (PNFI).

Table 5 Fit indices
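As a reading aid, the standard definitions of the incremental indices are given below, where χ²_m and df_m belong to the tested model and χ²_b and df_b to the baseline (independence) model; CMIN is simply the model chi-squared, commonly judged through the CMIN/df ratio:

```latex
\mathrm{NFI} = \frac{\chi^2_b - \chi^2_m}{\chi^2_b}
\qquad
\mathrm{PNFI} = \frac{df_m}{df_b}\,\mathrm{NFI}
```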

Complementarily, we calculated the coefficients of composite reliability (CR), average variance extracted (AVE) and maximum shared variance (MSV). Table 6 shows the results and the reference values for the fit of the model (Hair et al., 2010).

Table 6 Convergent and discriminant validity of the model
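These coefficients follow the usual definitions (Hair et al., 2010): with standardised loadings λᵢ of the k items of a construct, error variances εᵢ = 1 − λᵢ², and inter-construct correlations r, convergent validity requires CR > 0.7 and AVE > 0.5, and discriminant validity requires MSV < AVE:

```latex
\mathrm{CR} = \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^2}
                   {\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^2 + \sum_{i=1}^{k}\varepsilon_i}
\qquad
\mathrm{AVE} = \frac{\sum_{i=1}^{k}\lambda_i^2}{k}
\qquad
\mathrm{MSV} = \max_{m \neq n} r_{mn}^2
```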

All the obtained values meet the reference values. Therefore, the reliability of the model (CR) and its convergent (AVE) and discriminant (MSV) validity are demonstrated.

4 Discussion and conclusions

The aim of the present study was to provide a tool with high potential to measure and identify the digital competence of faculty members, focusing on student inclusion and empowerment. This work considered the UNESCO International Report on the Future of Education, which indicates that the educational response to the COVID-19 crisis showed the capacity of educators to make use of their professional knowledge and collaborate in a creative manner (UNESCO, 2021). Many studies have examined teacher digital competence in the Spanish educational context (Bullón et al., 2009; Cabero-Almenara et al., 2011; Durán et al., 2019; Lores Gómez et al., 2019; Touron et al., 2018) and in the international context (Drossel & Eickelmann, 2017; Engen, 2019; Pérez Díaz, 2019; Reisoğlu & Çebi, 2020); however, the transformations caused by ICTs in the education system (at all levels, from early childhood education to higher education) have made it necessary for researchers and educators to design, validate and establish a common teacher digital competence framework. In this sense, the present study gains relevance, since the “DigCompEdu Check-In” scale was validated for two large groups: a) non-university educators, including early childhood, primary and secondary education and baccalaureate; and b) faculty members.

With respect to the reliability and validity of the “DigCompEdu” instrument, the findings allow generating accurate scientific knowledge for the improvement of education quality in universities and non-university institutions, as well as informing the design of training and counseling plans (Gisbert Cervera & Lázaro Cantabrana, 2015; Roblero, 2020; Rodríguez-García et al., 2019) and developing alternatives that can attend to the demands of the Knowledge Society (García-Valcárcel et al., 2015).

As many of the obtained results show, the «DigCompEdu Check-In» instrument presents high reliability indices, both globally and in all its dimensions, with values that can be considered valid, since most of them are similar to those obtained by its authors in the German context (Ghomi & Redecker, 2019); thus, a priori, the tool appears highly suitable for analysing the European Framework for the Digital Competence of Educators.

Among the future research lines, we propose replicating this study in other university or non-university contexts, which would further support its reliability and validity. In turn, this study opens other lines of research, such as actions that enable the measurement of teacher digital competences beyond self-perception, and the possibility of creating a tool that incorporates a larger number of items than the diagnostic instrument.

With respect to the limitations of the study, the type of measurement instrument, i.e., a questionnaire designed for teachers to self-evaluate their competences, may influence the use and application of the validated scale. This could be addressed by adding a complementary measure of concurrent validity that grants robustness to the empirical evidence.