1 Introduction

In educational contexts, the integration of Information and Communication Technologies (ICT) has emerged as a crucial element (Vásquez et al., 2023), playing a fundamental role in the teaching-learning process (Lomos et al., 2023; Şimşek & Ateş, 2022) and transforming educational practices by making them more interactive and productive (Lin et al., 2017). Its relevance is such that more and more digital resources are being implemented in both traditional and online teaching environments, facilitating the creation of active and participatory educational settings (Jogezai et al., 2021). It is therefore imperative that society develop one of the eight key competences proposed by the European Commission (2006) for the appropriate use of these technologies: digital competence (DC).

This concept has been defined as “the confident and critical use of Information Society Technology for work, leisure and communication” (Søby, 2013, p. 135) and involves the “confident, critical and responsible use of, and engagement with, digital technologies for learning, at work, and for participation in society” (European Commission, 2019). It is underpinned by “basic skills in ICT: the use of computers to retrieve, assess, store, produce, present and exchange information, and to communicate and participate in collaborative networks via the Internet” (European Communities, 2006, p. 14). However, despite technological advancement and the continuous evolution of DC, growing controversy has arisen around the most precise definition of the concept (Falloon, 2020). In this context, it is crucial to recognize that a solid foundation in education remains essential. The main members of the educational community (teachers, students and parents) must possess this basic DC to adapt effectively to the changes and challenges that constantly arise (Guillén-Gámez et al., 2023a), as it provides the foundation needed to understand, evaluate and effectively use the new tools and technologies introduced into the educational environment. In general terms, and for this study, basic DC is understood as the “ability of individuals to appropriately use digital tools and facilities to identify, access, manage, integrate, evaluate, analyse and synthesize digital resources, construct new knowledge, create media expressions, and communicate with others, in the context of specific life situations, to enable constructive social action; and to reflect upon this process” (Martin, 2005, p. 135).

In this sense, Gümüş and Kukul (2023, p. 3) state that “the acquisition of digital competencies in education plays a significant role in improving the knowledge and skills of teachers and students around the world”, and the same must hold for parents (Martínez-Piñeiro et al., 2018). Within this triangulation of educational agents, Kiryakova (2022) emphasizes that DC is essential for teachers, since it enables a richer and more engaging educational experience, adapted to contemporary technological needs. It also empowers students, from the teachers’ point of view, preparing them to face the challenges of an increasingly digitalized world (Zakharov et al., 2022). Previous studies have shown that students’ DC can be enhanced by improving teachers’ DC (Lin et al., 2023), since a lack of these skills on the part of educators can have negative repercussions on students’ academic performance (Yazar & Keskin, 2016). For the third group, training in basic digital skills plays an essential role in the teaching-learning process (Alharbi et al., 2023; Nikken & Jansz, 2014), since it empowers parents and tutors to actively participate in their children’s digital education (Romero et al., 2021), strengthening collaboration between family and school.

In this context, it is important to remember that the use of digital resources plays a determining role in the acquisition and development of DC, as pointed out by Guillén-Gámez et al. (2020). This relationship has also been examined by Ghomi and Redecker (2019), as well as by Cabero-Almenara et al. (2023) and Lucas et al. (2021), who established a positive correlation between the availability of digital resources and the level of digital literacy. Nevertheless, what matters is not solely the frequency of use of ICT resources, but the way in which they are applied, since, as DC training increases, the use of these resources becomes more effective (O’Malley et al., 2013).

Given how rapidly digital resources advance and are integrated into classrooms, it is necessary to develop basic digital skills in all the subareas that make up and structure DC (Tomczyk, 2019), which requires valid and reliable instruments that allow the level of literacy to be determined for any group involved in the educational process. However, the scientific literature remains quite scarce regarding instruments that analyze the basic DC of teachers, students and parents in a heterogeneous and joint manner. To justify this statement, a bibliographic search of DC measurement instruments was carried out with the following inclusion criteria: (1) the title of the paper must contain the phrase “digital competence instrument”, “digital competence scale” or “digital competence framework”; (2) the instruments must have undergone some type of validation, whether expert validation or construct validation (exploratory factor analysis, EFA; confirmatory factor analysis, CFA); and (3) the studies must be from the last five years. Table 1 shows that there are many instruments, each with a different structural taxonomy for measuring DC. However, three specific aspects stand out: (1) no instrument has measured the DC of parents; (2) no instrument has measured the DC of the three main educational agents in a heterogeneous and joint manner, leaving a gap in the scientific literature; and (3) very few instruments have tested multigroup invariance in order to offer more robust validity across all groups.

Table 1 Bibliographic analysis of instruments in DC

Therefore, the contribution of this study, and consequently its purpose, is to validate a psychometric instrument with sufficient methodological rigor (including multigroup invariance) to evaluate the basic DC of the three main agents in the teaching process (teachers, students, and parents), and which serves to evaluate the DC of these agents at any educational stage (Early Childhood Education, Primary Education, Secondary Education and Higher Education).

2 Method

2.1 Design, type of Sampling and Confidentiality

A non-experimental, ex post facto design was used. Data collection was carried out using intentional non-probabilistic sampling, as well as snowball sampling (Leighton et al., 2021). The data were collected during the 2022/2023 academic year from across the Dominican Republic. Before participants completed the questionnaire, they were informed about the purpose of the research, which formed part of a doctoral thesis. Data collection was carried out anonymously using an online form, without any marking that could compromise the identity of the participants.

2.2 Preliminary Analyzes for the Sample of Participants

A total of 1,335 participants responded. However, Kline (2023) notes a series of important issues to consider in any survey validation process. First, missing data arise when participants do not respond to a specific item. Since the survey was administered with Google Forms, all items were labeled as required, which minimized the probability of omitted responses. Second, outliers were detected using the Mahalanobis distance (D2). Kline (2023) recommends eliminating all observations (subjects) with a p-value below 0.001 in the two distance calculations, p1 and p2. In this research, 166 subjects were eliminated based on the p-values reported by the AMOS software, leaving a sample of 1,169. In addition, 20 cases were eliminated because participants had marked the option of early childhood education students, which was considered a marking error: for this study, the authors took the view that students of that chronological age are not qualified to respond to a survey. The final sample comprised 1,149 participants. The distribution is shown in Table 2.
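As a minimal sketch of this screening step (a simplified stand-in for the AMOS p1/p2 output, not the authors' exact workflow), the Mahalanobis distance check with Kline's (2023) p < .001 rule could be implemented as follows; the file name and item columns are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def mahalanobis_outliers(df: pd.DataFrame, alpha: float = 0.001) -> pd.Series:
    """Flag rows whose squared Mahalanobis distance has a chi-square p-value < alpha."""
    x = df.to_numpy(dtype=float)
    diff = x - x.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(x, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)  # squared distances D2
    p_values = chi2.sf(d2, df=x.shape[1])               # upper-tail p-values
    return pd.Series(p_values < alpha, index=df.index)

# Hypothetical usage:
# responses = pd.read_csv("survey.csv")
# clean = responses[~mahalanobis_outliers(responses)]
```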

Table 2 Sample distribution

2.3 Instrument

To measure the DC of the different agents of the educational community, the instrument by Carrera et al. (2011) was used. However, the instrument lacked psychometric evidence of reliability and construct validity, since its authors had only carried out expert judgment validation. The test format is a 7-point Likert scale, from 7 (I have the skills to do it) to 1 (I do not have the skills to do it). The instrument had a total of 23 dimensions, from which the six dimensions most representative of basic digital skills were selected. Each of the selected factors is theoretically defined as follows:

  1. Skills in management and transfer of technological data: ability to store information on various devices, including the ability to facilitate the smooth transfer of data between computers and mobile devices, as well as knowing how to configure and identify different peripheral devices.

  2. Software and hardware skills: ability to interact efficiently with computer programs and physical devices. In software, this includes proficiency in installing, configuring and using applications, while in hardware it refers to the skill in operating and maintaining physical devices and components, such as computers, printers and peripherals.

  3. Web navigation skills: ability to use and carry out actions effectively with web browsers, reflecting on the security of websites, browsing the web through links and sending files.

  4. Skills in using word processors: ability to create, edit and format text documents. This includes mastering functions such as writing, spelling and grammar correction, as well as handling advanced features such as styles, tables and inserting graphics, facilitating the creation of professional and well-structured documents.

  5. Data processing and management skills: ability to collect, organize and analyze information efficiently. This includes skills in database management, interpretation of statistical data, and application of techniques to ensure the integrity and accuracy of information.

  6. Multimedia presentation design skills: ability to create visually attractive and effective content through the use of graphic design tools.

2.4 Procedure and Verification of Assumptions

First, the sample was divided into two subgroups drawn at random, with the objective of examining the internal composition of the instrument, following the guidelines proposed by Hinkin et al. (1997). Each subsample was used to analyze the construct validity process by applying EFA and CFA.

The purpose of the EFA is to discover the underlying structures among the items, classifying them based on the correlation coefficients between them (Sencan, 2005). In other words, the EFA assumes that the correlations (covariances) between the observed items can be explained by a smaller number of latent factors (Mulaik, 2018). For the analysis, the oblimin rotation method and the principal axis factoring method were used, which analyze the common variance between items to answer questions such as: how many factors are there, what are those factors, and how are they related (Mvududu & Sink, 2013)?
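A hedged sketch of this EFA configuration, using the Python factor_analyzer package as a stand-in for the authors' software, might look as follows; the data file and DataFrame are hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("dc_items.csv")  # hypothetical file of Likert item responses

# Suitability checks: KMO (> 0.8 is satisfactory) and Bartlett's sphericity (p < .05)
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)

# Principal axis factoring with oblimin (oblique) rotation, six factors
fa = FactorAnalyzer(n_factors=6, method="principal", rotation="oblimin")
fa.fit(items)
loadings = fa.loadings_              # pattern matrix of factor loadings
variance = fa.get_factor_variance()  # (variance, proportion, cumulative proportion)
```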

Second, the CFA was used to verify the relevance of the proposed theoretical models (Perry et al., 2015). Structural equation modeling was applied based on the polychoric correlation matrix and robust maximum likelihood estimators. Convergent validity was also verified, which refers to the degree of certainty that the proposed items measure the same latent factor (Cheung & Wang, 2017); it was evaluated through average variance extracted (AVE) values. For discriminant validity, the MSV index (maximum shared squared variance) was considered. Finally, to determine whether the factorial structure of the model is invariant with respect to the type of educational agent and the educational stage, multigroup analyses were carried out to establish whether the instrument was equally valid across groups.
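A minimal sketch of such a CFA, assuming the Python semopy package rather than the AMOS software the authors used, is shown below; the factor and item names are illustrative, and only two of the six factors are spelled out.

```python
import pandas as pd
from semopy import Model, calc_stats

# lavaan-style model description; remaining factors defined analogously
desc = """
F1 =~ DIM1_3 + DIM1_4 + DIM1_5 + DIM1_6
F2 =~ DIM2_2 + DIM2_3 + DIM2_4 + DIM2_6
"""

data = pd.read_csv("dc_items.csv")   # hypothetical file of item responses
model = Model(desc)
model.fit(data)                      # default ML estimation; robust ML is an AMOS option
fit = calc_stats(model)              # includes chi2, CFI, NFI, TLI, RMSEA, among others
print(fit.T)
```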

Once adequate validity was obtained, the assumption of multivariate normality was verified. Multivariate normality is considered acceptable when the Mardia coefficient is less than p(p + 2) (Raykov & Marcoulides, 2008), where p is the number of items. The assumption is checked by contrasting this threshold with the multivariate kurtosis value reported by SPSS Amos (Ping & Cunningham, 2013). The calculation was carried out using the final 20 items of the instrument. The formula returned a value of 440, while the multivariate kurtosis index was 92.562. Since the Mardia coefficient was below the threshold, we concluded that the multivariate normality assumption was met.
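As a worked check of this criterion, with the final p = 20 items:

```latex
% Mardia criterion: multivariate kurtosis below p(p + 2) supports normality
\[
p(p+2) = 20 \times (20+2) = 440, \qquad 92.562 < 440 ,
\]
```

so the observed multivariate kurtosis falls well below the threshold.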

The last procedure was to check the internal consistency of the instrument, using several reliability coefficients: Cronbach’s alpha, composite reliability, the Spearman-Brown coefficient, the Guttman split-half coefficient and McDonald’s omega. All analyses were performed using IBM SPSS version 24.0 and AMOS version 24.0.

3 Results

3.1 Comprehension Validity: Statistical Analysis of the Items

To check the comprehension validity of the instrument, the scientific literature recommends examining the kurtosis (K) and skewness (A) coefficients, which must lie within ±1.5 (Pérez & Medrano, 2010). On this basis, the following items were eliminated from subsequent analyses: DIM1.1, DIM1.2, DIM2.7, and DIM5.6. For the same check, Meroño et al. (2018) recommend eliminating items with a standard deviation (SD) below 1. Table 3 shows that all remaining items meet this requirement.
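A small sketch of this screening rule, assuming a hypothetical DataFrame of item responses: items are flagged when skewness or kurtosis falls outside ±1.5, or when the standard deviation is below 1.

```python
import pandas as pd

def screen_items(items: pd.DataFrame, bound: float = 1.5) -> pd.DataFrame:
    """Return per-item skewness, kurtosis, SD, and a keep/drop decision."""
    stats = pd.DataFrame({
        "skewness": items.skew(),
        "kurtosis": items.kurtosis(),  # excess kurtosis, as reported by SPSS
        "sd": items.std(),
    })
    stats["keep"] = (stats["skewness"].abs() <= bound) \
        & (stats["kurtosis"].abs() <= bound) \
        & (stats["sd"] >= 1.0)
    return stats

# Hypothetical usage:
# report = screen_items(items)
# retained = items.loc[:, report["keep"]]
```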

Table 3 Coefficients of asymmetry, kurtosis and standard deviation

The degree of discrimination of each item was also examined through the corrected correlation coefficient between the item score and its factor. This procedure identifies items whose removal would increase the reliability of their factor. Shaffer et al. (2010) state that items must be excluded from the instrument if the item-total correlation coefficient is below 0.40. Table 4 presents the discrimination analysis through two parameters: the corrected item-total correlation and Cronbach’s alpha if the item is deleted. No item of the instrument has a corrected item-total correlation below 0.60, complying with the authors’ recommendations. Therefore, no item was eliminated from subsequent analyses.
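A hedged sketch of the two discrimination statistics reported in Table 4, computed here from first principles (the corrected item-total correlation correlates each item with the sum of the remaining items; alpha-if-deleted recomputes Cronbach's alpha without the item); column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def discrimination_table(items: pd.DataFrame) -> pd.DataFrame:
    """Corrected item-total correlation and alpha-if-deleted for each item."""
    rows = {}
    for col in items.columns:
        rest = items.drop(columns=col)
        rows[col] = {
            "corrected_item_total_r": items[col].corr(rest.sum(axis=1)),
            "alpha_if_deleted": cronbach_alpha(rest),
        }
    return pd.DataFrame(rows).T
```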

Table 4 Analysis of the scale discrimination index

Table 5 presents the correlation matrix between the latent factors of the instrument when applying the oblimin rotation method, indicating that the factors are correlated. This finding suggests a common underlying dimension across the six latent factors that compose the instrument.

Table 5 Factorial correlation matrix

3.2 Construct Validity (EFA)

The results related to construct validity are shown through the application of the EFA method, following the recommendations of Gümüş and Kukul (2023). The Kaiser-Meyer-Olkin (KMO) index and Bartlett’s test of sphericity were checked to verify the suitability of the data for factor analysis and the adequacy of the sample size. Authors such as Worthington and Whittaker (2006) establish that KMO values above 0.8 are satisfactory; in our study, the KMO value was 0.975. Bartlett’s test yielded a significant result (p < .05), with a chi-square value of 45184.141 and 630 degrees of freedom (DF), consistent with p(p − 1)/2 = 36 × 35/2 for the 36 items. These values are considered appropriate according to the literature on the EFA stage (Watkins, 2021).

Following these preliminary analyses, the EFA was applied to a total of 36 items. In the literature, retained factors are expected to have an eigenvalue (λ) greater than one (Cattell, 1966). Furthermore, it is recommended that items with factor loadings below 0.40 be eliminated from the model, as well as items that do not load on their corresponding factor (Gümüş & Kukul, 2023). Table 6 shows that all items meet these criteria, so none were eliminated from subsequent analyses.

Table 6 Exploratory factor analysis of the instrument

The results of the EFA revealed the 36 items grouped according to their theoretical factors. The emerging factors, in line with the theoretical foundations, take the following names: Factor 1 (items DIM3.2, DIM3.3, DIM3.7, DIM3.6, DIM3.4, DIM3.1, DIM3.5), which explained 67.04% of the variance in the participants’ scores; Factor 2 (items DIM5.5, DIM5.4, DIM5.3, DIM5.7, DIM5.1, DIM5.2), explaining 6.35% of the variance; Factor 3 (items DIM6.4, DIM6.6, DIM6.3, DIM6.7, DIM6.5, DIM6.2, DIM6.1), explaining 5.03% of the variance; Factor 4 (items DIM4.5, DIM4.6, DIM4.3, DIM4.4, DIM4.2, DIM4.1), with 3.45% of the variance; Factor 5 (items DIM2.4, DIM2.6, DIM2.3, DIM2.5, DIM2.2, DIM2.1), with 3.26% of the variance; and finally, Factor 6 (items DIM1.6, DIM1.4, DIM1.5, DIM1.3), with 2.04% of the variance. The total explained variance was 87.17%.

3.3 Construct Validity (CFA)

The CFA was performed to determine how well the structure obtained in the EFA fit the data (Bandalos & Finney, 2018). The goal was to achieve an instrument as simple and concise as possible, with fewer items, without compromising reliability or validity. The first model examined was the final latent structure obtained through the EFA. Table 7 shows that this model did not meet any of the fit indices recommended by Hu and Bentler (1999), which led to a second model. In this second model, all items that showed an excessively high error covariance with any other item of the instrument were eliminated, as recommended by Byrne (2013). To do this, the modification indices (MIs) of the covariances between items were analyzed, interpreting them as interactions between errors. Specifically, the following items were eliminated: DIM2.1, DIM2.5, DIM3.1, DIM3.2, DIM3.3, DIM3.4, DIM4.1, DIM4.2, DIM5.1, DIM5.2, DIM5.3, DIM5.6, DIM6.1, DIM6.2, DIM6.3 and DIM6.6.

The second model was significant and met all recommended requirements. Table 7 shows the coefficients obtained for each index analyzed. The normed chi-square (CMIN/DF) was 2.258, indicating the adequacy of the model to the data, with values below 5 interpreted as satisfactory (Kline, 2011). The comparative fit index (CFI) and the normed fit index (NFI) must be equal to or greater than 0.95 (West et al., 2012); in the second model, values of 0.988 and 0.978, respectively, were observed and interpreted as acceptable. The IFI (incremental fit index) and TLI (Tucker-Lewis index) are incremental fit indices, for which the literature recommends values above 0.95 (Hu & Bentler, 1999); in the model these values were 0.988 and 0.985, respectively, which is satisfactory. Finally, the RMSEA (root mean square error of approximation) measures the discrepancy between the observed and predicted covariance matrices per degree of freedom, with satisfactory values below 0.06 (Hu & Bentler, 1999); the second model obtained a value of 0.057, within the accepted standard. Furthermore, to improve the model, error covariances were added between related items (Schreiber et al., 2006). Specifically, covariances were drawn between the error terms e1-e3 and e5-e7, due to the relationships between items DIM1.3-DIM1.5 and DIM2.2-DIM2.4, respectively.
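For reference, a common sample-based formulation of the RMSEA consistent with this cutoff (a standard definition, not taken from the cited sources) is:

```latex
\[
\mathrm{RMSEA} = \sqrt{\frac{\max\left(\chi^2 - df,\; 0\right)}{df\,(N-1)}}
\]
```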

Table 7 Model goodness-of-fit indicators

Figure 1 shows the final factor model as a result of the CFA and the findings on the relationship between the latent factors and their items. The standardized correlation values can also be observed from the CFA results.

Fig. 1 Diagram of the confirmatory factor analysis. Own elaboration

3.4 Convergent and Discriminant Validity

For the AVE coefficient to be satisfactory, each factor of the instrument must have a value greater than 0.50. Furthermore, the square root of the AVE (shown on the diagonal) must be higher than the correlations between the factors (Hair et al., 2010). Table 8 shows AVE values above 0.5 and, in turn, square roots of the AVE (diagonal) higher than the correlations between the latent factors.

To evaluate discriminant validity, the MSV index (maximum shared variance) was used, where the criterion is that its value be lower than the AVE of each factor (Fornell & Larcker, 1981). Table 8 shows that, although the factors are considerably related, discriminant validity between them is maintained.
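A minimal sketch, under the Fornell-Larcker conventions cited above, of how AVE and MSV can be computed from standardized loadings and factor correlations; the numbers below are illustrative toys, not the study's values.

```python
import numpy as np

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of the squared standardized loadings."""
    return float(np.mean(loadings ** 2))

def msv(factor_corrs: np.ndarray, j: int) -> float:
    """Maximum shared variance: largest squared correlation with any other factor."""
    r = np.delete(factor_corrs[j], j)
    return float(np.max(r ** 2))

loadings_f1 = np.array([0.82, 0.85, 0.79, 0.88])  # hypothetical factor 1 loadings
corrs = np.array([[1.0, 0.61],
                  [0.61, 1.0]])                   # hypothetical 2-factor correlations

print(ave(loadings_f1) > 0.50)              # convergent validity criterion
print(msv(corrs, 0) < ave(loadings_f1))     # discriminant validity criterion
```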

Table 8 Convergent and discriminant validity coefficients

3.5 Analysis of the Internal Consistency of the Instrument (Reliability)

The literature describes different methods for measuring the reliability of instruments. Nunnally (1978) considers that the minimum acceptable value of a reliability coefficient is around 0.7, although values close to 0.80 are preferable. Furthermore, according to Çokluk et al. (2012), a value between 0.80 and 1.00 is considered highly reliable. The values found in this study were greater than 0.90 (Table 9), so the internal consistency obtained can be described as very good. Regarding the composite reliability (CR) coefficient, the value must be above 0.7 for all factors (Heinzl et al., 2011); this criterion was also met. Likewise, the Spearman-Brown, Guttman split-half and McDonald’s omega coefficients reached the recommended thresholds, so the reliability of the instrument was very satisfactory for each latent factor and for the total scale.
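A hedged sketch of two of the coefficients listed above: the Spearman-Brown split-half coefficient (here with an odd/even split, one of several possible splits) and McDonald's omega computed from standardized loadings; data and loadings are hypothetical.

```python
import numpy as np
import pandas as pd

def spearman_brown(items: pd.DataFrame) -> float:
    """Split-half reliability with the Spearman-Brown correction: 2r / (1 + r)."""
    half1 = items.iloc[:, 0::2].sum(axis=1)  # odd-numbered items
    half2 = items.iloc[:, 1::2].sum(axis=1)  # even-numbered items
    r = half1.corr(half2)
    return 2 * r / (1 + r)

def mcdonald_omega(loadings: np.ndarray) -> float:
    """Omega: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    num = loadings.sum() ** 2
    return float(num / (num + np.sum(1 - loadings ** 2)))
```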

Table 9 Reliability coefficients

3.6 Multigroup Invariance Analysis

To evaluate the invariance of the factorial structure of the model in relation to the type of educational agent (teachers, students and parents) and the educational stage (kindergarten, primary, secondary and university), a multigroup analysis was carried out. Invariance by type of educational agent would be established if there were no significant differences (p > .05) between the Unconstrained Model and the Measurement Weights Model. Likewise, following the proposal of Cheung and Rensvold (2002), invariance can be corroborated by the CFI coefficient, where a difference equal to or less than 0.01 between the Unconstrained Model and the Measurement Weights Model indicates invariance. For the type of educational agent, no significant differences were found between the two models (p = .098), meeting the minimum criterion to accept model invariance across types of educational agents (Byrne et al., 1989; Marsh, 1993). In addition, the difference between the CFIs obtained was 0.001, allowing the metric invariance model to be accepted in both cases. It can be concluded that metric invariance establishes the equivalence of the basic meaning of the construct through the factor loadings across the three groups (teachers, students and parents). For the analysis between educational stages, no significant differences were found between the two models (p = .583), so the instrument is equally valid for analyzing DC at any educational stage (see Table 10).
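A small sketch of the two decision rules applied here: the chi-square difference test between the nested models (p > .05) and Cheung and Rensvold's ΔCFI ≤ .01 criterion; the numeric inputs are placeholders for the values in Table 10.

```python
from scipy.stats import chi2

def invariance_decision(chi2_unc: float, df_unc: int,
                        chi2_con: float, df_con: int,
                        cfi_unc: float, cfi_con: float) -> dict:
    """Compare the unconstrained model with the measurement-weights model."""
    p_diff = chi2.sf(chi2_con - chi2_unc, df_con - df_unc)  # chi-square difference test
    delta_cfi = abs(cfi_unc - cfi_con)
    return {
        "p_diff": p_diff,
        "chi2_invariant": p_diff > 0.05,    # non-significant difference supports invariance
        "delta_cfi": delta_cfi,
        "cfi_invariant": delta_cfi <= 0.01  # Cheung & Rensvold (2002) criterion
    }
```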

Table 10 Multigroup analysis of factorial invariance

4 Discussion and Conclusions

Training in DC stands as an imperative to guarantee effective and relevant education in the contemporary era. In this context, digital training for teachers, students and parents is essential since, first, it enables educators to effectively integrate technology into teaching; second, it provides students with key 21st-century skills; and lastly, it fosters effective collaboration between school and family, preparing everyone for the ever-changing digital environment. Indeed, it is the responsibility of educational institutions to ensure that all participants in the educational process have comprehensive access to digital tools and resources, both inside and outside the classroom. This access must go hand in hand with effective training in DC in order to enhance student learning.

In this study, an instrument measuring the basic DC of the main agents of the educational community (teachers, students, and parents) was tested and validated. The literature contains many instruments with similar results; however, no study analyzes these skills across the entire educational community in a heterogeneous way with the same instrument, valid for all educational stages, which is this study’s contribution to the progress of science. To do so, we started from the expert validation carried out by Carrera et al. (2011), which initially configured the measurement instrument.

Different techniques were used to validate the scale: comprehension, construct, convergent, discriminant and invariance validity. The initial pool comprised 40 items. First, the dispersion values were checked in order to adjust the successive correlations of the items, as recommended by Pérez and Medrano (2010) and Meroño et al. (2018). Subsequently, the discrimination of each item was verified through the corrected correlation coefficient between the item score and its factor, achieving very satisfactory levels, with values higher than those recommended by Shaffer et al. (2010) and Munro (2005). Bartlett’s test was also checked before performing the EFA, which used principal axis factoring with oblimin rotation.

The EFA yielded a scale composed of 36 items in six factors. The EFA results were then confirmed by CFA. For this procedure, several fit indices were used and the results were compared with the acceptable values given by Hu and Bentler (1999), Kline (2011) and West et al. (2012). After examining all these values across several models, the final model fell within the range of acceptable values specified in the literature. Subsequently, the discriminant and convergent validity of the instrument were verified, finding satisfactory values in both the AVE and MSV indices, as recommended by Hair et al. (2010) and Fornell and Larcker (1981). The last type of validity verified was invariance by type of educational agent and educational stage, which yielded satisfactory coefficients, showing that the instrument is valid for any group and educational stage.

Therefore, unlike other instruments, the basic DC scale is validated for any type of educational agent, whether teachers, students, or parents, as well as for any educational stage (Early Childhood Education, Primary Education, Secondary Education and Higher Education). With this scale, each group will be able to evaluate their DC in relation to fundamental technological skills and address any deficiencies that may exist, thus allowing them to improve their capabilities in this area.

Beyond establishing the validity of the instrument, it is essential to reflect on how to improve both the design and the methodology of the study. One limitation lies in the type of sample used, which was non-probabilistic. Therefore, the results should be interpreted with caution when applied to other members of the educational community with similar characteristics, avoiding extrapolation of the findings to all teachers, students and parents. Looking ahead, it would be relevant to collect a more representative sample of these agents in order to achieve a more adequate generalization of the results and to guarantee that the instrument is equally valid for the entire educational community.