The COVID-19 pandemic of 2020–2022 highlighted the increasing need for digital skills in higher education. The temporary closure of universities necessitated a rapid shift to online instruction. This was particularly challenging for those undergraduate students who were scheduled to conduct research. Against this backdrop, we aimed to create a short test of digital literacy:

- that would help further develop digital undergraduate research (cf. Kerres & Otto, 2022);

- as an easy-to-use tool for the ongoing evaluations and assessments in the context of undergraduate research (cf. Singer et al., 2022);

- not as an isolated test construction, but as one employed within a broader context that also allows conceptual insights and joint development of tools and concepts: here, the European Digital Competence Framework for Citizens (Carretero et al., 2017).

This article presents the digital competence scale we developed and its context within the European digiUR project for the promotion of undergraduate research, describes the characteristics of the scale, and discusses how it can be used and further developed.

1 Introduction

Our digital competence scale was created based on DigComp2.1, the Digital Competence Framework for Citizens (Carretero et al., 2017). Therefore, we will first provide an introduction to DigComp2.1 and then present the context of promoting undergraduate research in Europe, in which the scale was used. Section 1 concludes with our starting hypotheses on the requisite properties of the scale, on students’ digital competence, and on the usefulness of digital competence in higher education.

1.1 DigComp2.1: The digital competence framework for citizens

The Digital Competence Framework for Citizens (DigComp2.1) resulted from the project “Learning and Skills for the Digital Era” that began in 2005 (Carretero et al., 2017, p. 6). The long-term goal was “to provide evidence-based policy support to the European Commission and the Member States on harnessing the potential of digital technologies to innovate education and training practices, improve access to lifelong learning and to deal with the rise of new (digital) skills and competences needed for employment, personal development and social inclusion” (p. 6). Thus, we see the creation of our scale in the context of both personal development and, as we will explain later, social inclusion.

The DigComp2.1 framework is already highly differentiated in concept, encompassing five broad areas of digital competence: (1) information and data literacy, (2) communication and collaboration, (3) digital content creation, (4) safety, and (5) problem solving. In turn, each area includes three to six competences. The DigComp2.1 analytical framework also includes eight proficiency levels: the four main levels Foundation, Intermediate, Advanced, and Highly Specialized, each divided into two sublevels. The proficiency levels are distinguished by increasing task complexity, increasing autonomy of the proficient practitioner, and different cognitive domains such as remembering or creating. The DigComp framework is complemented by DigCompEdu (Redecker, 2017), which defines educator competences. These include, for example, facilitating learners’ digital competence (in the five competence areas mentioned above) and competences in professional engagement, such as the reflective practice necessary for teaching.

Meanwhile, there is abundant literature on DigComp and its use in assessing competence, especially in school settings (e.g., Siiman et al., 2016). An overview of DigComp-based assessment tools is provided by Mattar et al. (2022). Among the most important of these is DigCompSAT (Clifford et al., 2020). Using classical test-construction methods, DigCompSAT created databases of items in English, Latvian, and Spanish. The resulting online questionnaire comprises 82 items that assess citizens’ digital competences (https://europa.eu/europass/digitalskills/screen/home). The DigCompSAT study provided the perfect basis for us to develop a short test of digital literacy.

1.2 digiUR

The context for the development of our digital competence scale was the European project digiUR (www.digiur.eu), “A European Network for Digital Undergraduate Research.” This project pursued the priority of “Innovative Practices in the Digital Age” in higher education under pandemic conditions, focusing on undergraduate research; digital literacy therefore played a key role. As mentioned, we referred to the European DigComp2.1 framework (Carretero et al., 2017) to measure digital literacy (here “digital competence”) in undergraduate students, in particular using questions from the DigCompSAT study (Clifford et al., 2020).

The project was embedded in a European network for the promotion of undergraduate research (UR), an innovative pedagogy that has become an important focus of universities worldwide (cf. Healey & Jenkins, 2009; Brew, 2012; Hensel & Blessinger, 2020; Mieg et al., 2022a, 2023). Studies show that UR supports rapid and deep learning, improves student retention, and promotes diversity and inclusion (cf. Mieg & Haberstroh, 2022). Thus, UR is considered one of the high-impact educational practices and has become a hallmark of Europe’s leading research universities (Fung et al., 2017). The advancement of digital forms of UR is considered both an important progression and a catalyst for networking on UR in Europe (cf. https://digiur.eu/klaipeda-communique/).

Research shows that success in UR may depend on individual characteristics such as self-efficacy (e.g., Reitinger & Altrichter, 2022), i.e., the expectation that one can make a difference oneself. Self-efficacy also plays a major role in organizational and workplace studies (e.g., Lunenburg, 2011). Similarly, the promotion of critical thinking is often considered a rationale for UR (e.g., Council of Undergraduate Research [CUR], n.d.; Petrella & Jung, 2008). At the very least, it is clear that critical thinking can be promoted through university teaching (Abrami et al., 2015). UR is also known to foster inclusion (cf. Mieg & Haberstroh, 2022), which is a consideration for digiUR and all other European projects (“leaving no-one behind”, European Commission, 2020, p. 115).

1.3 Hypotheses

The expectations for the development and application of the short assessment scale are summarized in the following hypotheses.

  • Hypothesis 1 (scale): The short digital competence scale is reliable and valid.

As stated in Hypothesis 1: In order to be usable, the short scale for DigComp2.1 must first be both reliable and valid.

  • Hypothesis 2 (digital competence in university settings): The digital competence scale should be able to detect differences found by previous studies on digital literacy, e.g., by subject (higher digital competence in STEM), gender (lower among women), or study level (higher in advanced students).

Hypothesis 2 is informed by several previous studies. Senkbeil et al. (2019) analyzed data from the German Educational Panel—which also recorded digital literacy (performance test items)—and found that digital literacy differed by subject area and gender. However, the overall findings on digital literacy are not clear-cut. There are mixed findings on gender and age differences (Peng & Yu, 2022). For example, when younger children are included and assessment occurs via performance testing, then girls seem to be favored (Siddiq & Scherer, 2019). This raises the question of whether observed gender gaps in digital literacy instead reflect differences in self-appraisal. However, self-assessed digital literacy seems to increase with academic education level (Senkbeil et al., 2019; Inan Karagul et al., 2021).

  • Hypothesis 3 (pandemic): Digital competence has a compensatory effect on lack of research experience.

Hypothesis 3 arises from the context of the Erasmus+ application for the digiUR project, under the call Partnerships for Digital Education Readiness (European Commission, 2020), which states: “The current COVID-19 crisis has greatly accelerated the need for modernisation and digital transformation of education and training systems across Europe. The goal is to reinforce the ability of education and training institutions to provide high quality, inclusive digital education” (p. 115). Because the COVID-19 pandemic severely limited opportunities for student research (Grineski et al., 2020), the hope was that digital competence would have a compensatory effect on students’ diminished research experiences; that is, digital competence should incorporate or support digital research, or should feed into general methodological research competence.

2 Method

The development and use of the scale were embedded in an online student survey. In the following sections, we first present the scale development, then the other variables developed and used, and finally the four individual studies conducted.

2.1 Digital competence

DigCompSAT developed items for the five competence areas and all specific competences. Since we wanted to limit the scale to about ten items, it was clear that not all digital competences could be covered. To produce a core set of items, we selected, from each competence area, the item that correlated most strongly with the overall test in the DigCompSAT study (cf. Table 1 and Appendix). In a second round, we could have selected the second-best items. However, we chose instead to construct five general items, one for each of the five competence areas. Our reasons for doing so were:

  • First, we wanted to be sure that the basic concept of DigComp2.1 would not be obscured by questions that were too specific without covering sufficient aspects of the framework.

  • Second, specific questions, especially if they were technology-related, ran the risk of quickly becoming outdated, especially since the pandemic had accelerated the development and use of digital tools.

The risk of using two item types in the same scale was that, from the respondents’ perspective, the scale might split along these two item types.

Table 1 Included questions / items for Digital Competence (top rows: items from DigCompSAT, Clifford et al. (2020); bottom rows contain new items; see also the Appendix)

2.2 Further variables

In addition to demographic and study-related information, the survey included further key variables:

  (1) self-assessed digital competence

  (2) self-assessed research competence

  (3) research-related self-efficacy

  (4) critical thinking

  (5) lack of motivation (for student research under COVID-19)

  (6) study situation for students with fewer opportunities (under COVID-19)

The first two variables (1, 2) were framed as assessments from the perspective of peers, in accordance with surveys in expertise research (Mieg, 2009). Peers (or a professional community) can be considered a relevant reference group for the attribution of competence. The question for self-assessed digital competence was: “If your fellow students would approach you on practical digital problems (Internet, programming…), how confident would you be to do so?” The question for self-assessed research competence was: “If your fellow students approached you to give advice on practical research problems (from research design to analysis), how confident would you be to do so?” Responses were recorded on a five-point scale ranging from “Not at all confident” to “Very confident.”

For research-related self-efficacy (3) and critical thinking (4), variables were taken from published survey instruments, specifically the two items that correlated highest with the overall scale. The two items for research-related self-efficacy were taken from the work of Reichow (2021, translated; cf. also Wessels et al., 2021); they read:

  • (3a) I am confident that I can develop a useful research design to answer my research question, although I will have to show a lot of ingenuity in doing so.

  • (3b) I am confident that I can find a research gap, even if the literature in the area is confusing.

The two items for critical thinking are from Sosu (2013, p. 117). They are:

  • (4a) I usually try to think about the bigger picture during a discussion.

  • (4b) I often re-evaluate my experiences so that I can learn from them.

Agreement was recorded on a five-point scale, from “I definitely disagree” to “I definitely agree.”

The two remaining variables (5, 6) serve to complement Hypothesis 3. “Felt a lack of motivation” was a tick-list item in the Grineski et al. (2020) survey on the initial impacts of the COVID-19 pandemic on undergraduate research. Accordingly, variable (5) has only the two possible answers “Yes” and “No”. Variable (6) was constructed by us and was aimed at students with fewer opportunities. We asked whether the students’ study situation had changed, with five response categories: strongly worsened / somewhat worsened / neither / somewhat improved / strongly improved. Since each university has its own regulations for identifying and addressing students with fewer opportunities, the trigger question differed between universities (e.g., “Do you have any special challenges to overcome in your studies…”).

2.3 Surveys

Our study comprises four surveys at universities in four countries, two conducted in German and two in English (cf. Table 2). Full surveys were conducted at the University of Oldenburg, Germany, and LCC International University, Lithuania. Response rates were good (Oldenburg, 11%) or very good (LCC, 35%), but there were some very incomplete data sets, so not all responses were included in the analyses. The surveys at the University of Vienna and the University of Warwick were conducted or initiated by members of the digiUR project and consist of opportunity samples.

Table 2 Surveys, samples

3 Results

We present the results with reference to the three hypotheses. One focus is on an in-depth scale analysis.

3.1 Hypothesis 1: Quality of the scale

Hypothesis 1 stated: The short digital competence scale is reliable and valid.

Cronbach’s α of 0.87 and McDonald’s ωt of 0.88 indicate satisfactory internal consistency (reliability) for the whole scale. This is a remarkable result, given that the scale combines four different samples from four countries in two languages.

Table 3 shows the item characteristics of all ten digital competence items. Leaving out any single item would result in a decrease in Cronbach’s α. Additionally, corrected item-test correlations indicate satisfactory differentiation, ranging from rit = .465 (for the specific information item) to rit = .706 (for the general digital problem-solving item). In the following, we will use Digital Competence as the single total score for this scale (more precisely: the arithmetic mean of the ten items).

Table 3 Item characteristics of all ten digital competence items (for the definition of the variables see Table 1)
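For readers who want to reproduce this type of item analysis with their own data, the following minimal Python sketch computes Cronbach’s α, the corrected item-total correlations, and α-if-item-deleted. It is an illustration only; the DataFrame `items` (one column per digital competence item, one row per respondent) and the simulated data are placeholders, not part of the original study materials.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of all *other* items (r_it)."""
    return pd.Series({col: items[col].corr(items.drop(columns=col).sum(axis=1))
                      for col in items.columns})

# Illustration with simulated 5-point responses; replace with the real item data.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(200, 10)),
                     columns=[f"item_{i}" for i in range(1, 11)])

print("alpha:", round(cronbach_alpha(items), 3))
print(corrected_item_total(items).round(3))
for col in items.columns:  # alpha if item deleted
    print(col, round(cronbach_alpha(items.drop(columns=col)), 3))
```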

Regarding construct validity of the digital competence scale, our most general criterion of digital literacy is self-assessed digital competence. The correlation of Digital Competence with self-assessed digital competence is very high (r = .572, p < .001, N = 1161). Further, face validity results from the finding that the values for Digital Competence are significantly higher for students who have already worked as research assistants than for the other students (research assistants: M = 3.2, N = 73; others: M = 2.9, N = 1107; t-test: t = 4.507, df = 1178, p < .001); since they are undergraduates, working as research assistants is mostly limited to data analyses, literature reviews and text editing, which now require digital literacy.

Therefore, Hypothesis 1 is confirmed.

We can use the ten digital competence items as a scale for digital literacy and their average - Digital Competence - as a measure. Figure 1 shows the distribution of Digital Competence (N = 1218). Digital Competence ranges from 1.0 (lowest value) to 4.0 (highest value). The mean is 2.88 (s = .54), the median 2.9, and the mode 3.1 (most frequent value). Only 1% of the students show a very low Digital Competence (1.5 or lower), while 11% rate their Digital Competence as very high (more than 3.5). The differences in Digital Competence between our four university samples (cf. Table 2) are small but significant (ANOVA, F(3, 1214) = 7.209, p < .001).

Fig. 1 Distribution of Digital Competence in 0.5 units (the upper limit always belongs to the smaller class, e.g., the value 1.5 belongs to the lowest class). N = 1218
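The descriptive statistics and the comparison across universities could be reproduced along the following lines. This sketch assumes a DataFrame `df` with the hypothetical columns `digital_competence` (the ten-item mean) and `university`; these names are placeholders for the actual variables and the analysis package used in the study.

```python
import pandas as pd
from scipy import stats

def describe_and_compare(df: pd.DataFrame) -> None:
    """Descriptive statistics for Digital Competence and a one-way ANOVA across universities."""
    dc = df["digital_competence"].dropna()
    print(dc.describe())                                   # mean, sd, min, max, quartiles
    print("median:", dc.median(), "mode:", dc.mode().iloc[0])
    print("share <= 1.5:", round((dc <= 1.5).mean(), 3))   # very low Digital Competence
    print("share > 3.5:", round((dc > 3.5).mean(), 3))     # very high Digital Competence

    # One-way ANOVA: do the university samples differ in mean Digital Competence?
    groups = [g["digital_competence"].dropna().to_numpy()
              for _, g in df.groupby("university")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")
```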

Regarding scale structure, parallel analysis suggests two components. In particular, we find an imbalance in the data between the specific items on the one hand and the general questions on the other. Principal component analysis (PCA) with varimax rotation indicates a general component (PCA I) and a specific component (PCA II), together explaining 59.3% of the variance (see Table 4). Thus, the sub-components are not content-related but assessment-related. Note that the specific digital safety item (CA4_specific) is out of line, correlating strongly with the general component (0.638) and less so with the specific component (0.335). In addition, Table 4 shows the results of McDonald’s ω scale consistency analysis assuming a hierarchical two-factor model (eigenvalues: g = 3.06, F1 = 1.19, F2 = 0.69; explained variance: g = 0.66, F1 = 0.45, F2 = 0.40). However, for this two-factor model, consistency drops to a McDonald’s ωh of 0.65 (Fig. 2).

Table 4 Two factors, left: varimax-rotated components (PCA I and II); right: hierarchical two-factor model (with factors F1 and F2)
Fig. 2 Hierarchical two-factor model for digital competence
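A minimal sketch of how such a two-component solution can be computed is given below: loadings are taken from a PCA of the item correlation matrix and then varimax-rotated. This is not the routine used in the study; it relies only on NumPy/pandas (with varimax implemented directly), and `items` again stands in for the actual item data.

```python
import numpy as np
import pandas as pd

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Orthogonal varimax rotation of a loading matrix (items x components)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        target = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):  # converged
            break
        criterion = s.sum()
    return loadings @ rotation

def pca_loadings(items: pd.DataFrame, n_components: int = 2) -> pd.DataFrame:
    """Unrotated PCA loadings (eigenvectors scaled by sqrt of eigenvalues) of the correlation matrix."""
    corr = np.corrcoef(items.to_numpy(), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)                # ascending order
    top = np.argsort(eigvals)[::-1][:n_components]
    loadings = eigvecs[:, top] * np.sqrt(eigvals[top])
    print("explained variance:", round(eigvals[top].sum() / len(eigvals), 3))
    return pd.DataFrame(loadings, index=items.columns,
                        columns=[f"PCA {i + 1}" for i in range(n_components)])

# Usage: rotated = varimax(pca_loadings(items).values)
```

The sign and ordering of the rotated components may differ from a standard statistics package, but the pattern of high and low loadings should be comparable.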

The short scale for Digital Competence does not capture the competence structure of DigComp2.1. With regard to the competence areas of DigComp2.1, one might have expected a content-related five-component structure reflecting the respective areas; this is not supported by our findings. However, it is noteworthy that:

  • Our short scale shows some divide related to the competence areas (see Table 3): both specific and general items on digital information, communication and content management (competence areas CA1 to CA3) show lower item-total correlations than all items (specific and general) on digital safety and digital problem solving (CA4 and CA5). The main focus of Digital Competence is therefore on problem solving and digital safety.

  • The two components could be pure artifacts resulting from the low item difficulty of most of the specific questions (with the sole exception of CA4_specific on safety). The three most difficult items (CA3_general, CA4_general, CA5_general) determine component 1, while the easiest item (CA1_specific) determines the second component. This interpretation is supported by the fact that the second factor of the hierarchical model, F2, shows such a low eigenvalue (0.69) that it cannot stand by itself.

  • Replacing our short scale for digital competence with the two PCA components would lower validity: our criterion Self-Assessed Digital Competence correlates highly with the overall scale for Digital Competence (r = .572, p < .001), while less so with the two sub-components (r = .464, p < .001, and r = .337, p < .001, for PCA I and PCA II, respectively).

To conclude, our analyses suggest that digital competence is best captured by the total score of the ten-item short scale, i.e., Digital Competence. We therefore refrain from using the two factors or sub-components of Digital Competence in further analyses.

Having dispensed with a two-factor solution, let us turn to the question of whether even shorter scales for digital competence could be used. Obvious candidates are a scale consisting of the five specific questions from DigCompSAT (DCs) and a scale consisting of the five newly created general questions (DCg). Reliability would then decrease: DCs would still have an acceptable Cronbach’s alpha of 0.78, DCg a good Cronbach’s alpha of 0.83, and both a slightly lower construct validity (correlation with self-assessed digital competence, DCs: r = .514, DCg: r = .525). With DCs and DCg, we can replicate nearly all results obtained with Digital Competence, albeit almost always to a weaker degree and with weaker discriminatory power (e.g., for differences between study subjects). However, from the perspective of undergraduate research, the general focus of our study, stepwise regression analyses for Self-Assessed Research Competence (to identify factors influencing research competence) retain Digital Competence while excluding DCs and DCg; if we leave out Digital Competence, both variables - DCs and DCg - enter the regression. DCs and DCg thus represent slightly different aspects and do not substitute for each other or for Digital Competence. Hence, we clearly prefer the ten-item short scale, as a combination of specific and general questions, over the five-item scales DCs and DCg.
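The stepwise comparison of Digital Competence, DCs, and DCg can be illustrated with a simple forward-selection procedure. This is only a sketch under assumed column names (e.g., `research_competence`, `digital_competence`, `dcs`, `dcg`, `self_efficacy`, `years_of_study`), not the exact routine of the statistics package used in the study.

```python
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, criterion: str, candidates: list,
                     alpha_enter: float = 0.05) -> list:
    """Repeatedly add the candidate predictor with the smallest p-value below alpha_enter."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            fit = sm.OLS(df[criterion], X, missing="drop").fit()
            pvals[var] = fit.pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_enter:   # no remaining predictor enters the model
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# e.g. forward_stepwise(df, "research_competence",
#                       ["digital_competence", "dcs", "dcg",
#                        "self_efficacy", "years_of_study"])
```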

3.2 Hypothesis 2: Digital competence at the university

Hypothesis 2 stated: The digital competence scale should be able to detect differences found by previous studies on digital literacy, e.g., by subject (higher digital competence in STEM), gender (lower among women), or study level (higher in advanced students).

As expected, the values for Digital Competence are significantly lower for women than for men and other students (M = 3.1 vs. 3.5; t-test: t = -5.530, df = 1193, p < .001). The values for Digital Competence are higher for STEM students than for students of other subjects (M = 3.6 vs. 3.1; t-test: t = 6.709, df = 1204, p < .001). In addition, Digital Competence increases significantly with the length of study - even though we are only dealing with undergraduates here (correlation with Years of Study, r = .109, p < .001, N = 1171). Hypothesis 2 is thus confirmed. The results of other studies can be replicated with the short scale for Digital Competence, which speaks for the content validity of the short scale.

In these analyses, the short scale (with Digital Competence as total score) proves superior to the single item on self-assessed digital competence. For the single item, too, significantly lower values are found for women and higher values for STEM students (t-tests, p < .001). Unlike the short scale, however, Self-Assessed Digital Competence shows no significant differences by years of study (r = .024, p = .420; ANOVA: F(3, 1262) = 0.317, p = .813). Digital Competence shows some differences by age (but no significant correlation with age) and correlates with Years of Study even when age is controlled for (r = .135, p < .001, N = 1165).

Finally, we asked ourselves to what extent the effects of Digital Competence are attributable to the fact that it is a self-assessment. To estimate this, we created a residual variable of Digital Competence from the regression on Self-Assessed Digital Competence; i.e., we statistically removed all influences that could be traced back to the self-assessment of digital literacy. This residual variable shows the same difference by gender (women: M = −0.11 vs. others: M = 0.23, t = -5.549, df = 1143, p < .001) and the same correlation with Years of Study (r = .109, p < .001, N = 1116) as Digital Competence. However, the difference for STEM disappears (STEM: M = −0.01 vs. others: M = 0.00, ns.). Even with this residual variable, we find a significantly higher value for student research assistants (M = 0.33 vs. M = −0.04, t = 2.954, df = 1121, p < .01), which we considered an indication of validity (as to digital literacy). Digital Competence thus seems to indicate more digital literacy than can be attributed to pure self-assessment.
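The residualization step can be sketched as follows: Digital Competence is regressed on the single self-assessment item, and the residuals are then compared between groups. The column names (`digital_competence`, `self_assessed_dc`, `gender`) are placeholders for the actual variables, and the sketch is illustrative rather than the procedure of the original analysis software.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def residualize(df: pd.DataFrame, outcome: str, predictor: str) -> pd.Series:
    """Residuals of `outcome` after removing the linear effect of `predictor`."""
    data = df[[outcome, predictor]].dropna()
    fit = sm.OLS(data[outcome], sm.add_constant(data[predictor])).fit()
    return fit.resid  # indexed like the rows that entered the regression

def compare_by_group(df: pd.DataFrame, resid: pd.Series, group_col: str, group_value):
    """t-test of the residual variable between one group and all other respondents."""
    groups = df.loc[resid.index, group_col]
    return stats.ttest_ind(resid[groups == group_value], resid[groups != group_value])

# Usage (placeholder column names):
# resid = residualize(df, "digital_competence", "self_assessed_dc")
# t, p = compare_by_group(df, resid, "gender", "female")
```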

3.3 Hypothesis 3: Digital competence and the pandemic

Hypothesis 3 stated (regarding the pandemic situation): Digital competence has a compensatory effect on lack of research experience.

The specific situation the digiUR project dealt with was the pandemic. Did digital competence therefore play a special role in the pandemic, especially in the context of undergraduate research? In our study, we had three areas to investigate to determine whether they could be positively influenced: (1) self-assessed research competence (Research Competence); (2) motivational collapse in student research during the pandemic (Lack of Motivation); and (3) an improved study situation for students with fewer opportunities. Table 5 provides an overview of which other variables the target variables correlate with. The correlations do not imply causality and can at best be indicative. We are interested in whether a possible change in Digital Competence could be related to a change in self-assessed research competence.

Table 5 Pearson correlations for self-assessed research competence, lack of motivation (for student research under COVID-19), and improved study situation for students with fewer opportunities (under COVID-19)

As Table 5 shows, Digital Competence correlates significantly with Research Competence (r = .284, p < .001, N = 1154). Other variables, such as self-efficacy, years of study or research experience, also show relatively high correlations with research competence. Nevertheless, when we control for the influence of these variables, the significant correlation of Digital Competence with Research Competence remains (r = .192, p < .001, N = 1072). An increase in Digital Competence corresponds with an increase in Research Competence. Hypothesis 3 is thus confirmed (as far as we could test it). Specifically, we see that when students have no research experience (Having Research Experience = 0), Digital Competence still correlates significantly with Research Competence (r = .267, p < .001, N = 263), even when controlling for other significant influencing variables such as Self-Efficacy and Years of Study (r = .252, p < .001, N = 235). Thus, we can assume that digital competence provides a compensatory effect in the absence of specific research experience.
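Partial correlations of this kind can be computed by residualizing both variables on the control variables and correlating the residuals. The following sketch illustrates this under assumed column names; the reported p-value does not adjust the degrees of freedom for the number of controls, so it is approximate.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def partial_corr(df: pd.DataFrame, x: str, y: str, controls: list):
    """Pearson correlation between x and y after partialling out the control variables."""
    data = df[[x, y] + controls].dropna()
    Z = sm.add_constant(data[controls])
    resid_x = sm.OLS(data[x], Z).fit().resid
    resid_y = sm.OLS(data[y], Z).fit().resid
    r, p = stats.pearsonr(resid_x, resid_y)   # p approximate: df not corrected for controls
    return r, p, len(data)

# e.g. partial_corr(df, "digital_competence", "research_competence",
#                   ["self_efficacy", "years_of_study", "research_experience"])
```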

In this context, we should emphasize the important role of self-efficacy. Self-Efficacy shows a supportive pattern for all target variables, not only Research Competence but also Lack of Motivation (for student research under COVID-19) and the Study Situation for Students with Fewer Opportunities. The correlations of Self-Efficacy with these variables are clearly larger than those of all other variables (Table 5). In contrast, Digital Competence shows no correlation with Lack of Motivation (cf. Table 5), but it does correlate with the Study Situation for Students with Fewer Opportunities (r = .155, p < .01, N = 350). This correlation remains significant when controlling for Self-Efficacy and Years of Study (r = .132, p < .01, N = 331). The effect does not disappear even when we control for Self-Assessed Digital Competence (r = .115, p < .05, N = 331). That is, regardless of whether Digital Competence is self-assessed, there is a correlation with the Study Situation for Students with Fewer Opportunities. This suggests that the Digital Competence short scale should be used not only for questions of undergraduate research, but also in research on universities and inclusion.

4 Discussion

We structure the discussion around the three hypotheses, relating first to the digital competence scale, second to the differences among students that can be measured with it, and third to the compensatory effect in the pandemic.

4.1 The scale on Digital Competence (Hypothesis 1)

According to the usual criteria (Cronbach’s alpha, McDonald’s omega, correlation with external criteria), the ten-item digital competence scale we created on the basis of DigComp2.1 (Carretero et al., 2017) is reliable and valid. It also seems more useful than the single-item self-assessment question alone, since, for example, the new scale differentiates by level of study (years of study), unlike the single item (cf. 3.2). The new short scale cannot reasonably be shortened further; a five-item scale with specific items from DigCompSAT (Clifford et al., 2020) seems less appropriate (cf. 3.1).

The construction of the short scale from specific and general items has resulted in an inhomogeneity that may give rise to further review and revision of the scale. There were reasons for this method of construction. In light of our analyses, we should reconsider them:

  • The first reason was that DigComp2.1 has a very differentiated, analytically derived structure of five competence areas and several proficiency levels. Unfortunately, the five-factor solution could not be found in the data. Our analyses instead revealed two clusters of variables: on the one hand, the items on problem solving with digital means and on safety (CA4 and CA5, cf. Tables 1 and 3), which enter the short scale with the greatest weight; on the other hand, the items on communication with digital means (CA1 to CA3).

  • The second reason was that specific items become obsolete too quickly. An indication of this would have been some correlation of the specific items with age; this is not the case (cf. 3.2). However, if we consider the items on communication with digital means (CA1 to CA3) as more "specific" (in the sense of "concrete") than the items on problem solving and safety (CA4 and CA5), we find that the second factor (F2) of a hierarchical two-factor model based on CA1 to CA3 is not appropriate as a scale (cf. 3.1). That is, the "general" or "abstract" items CA4 and CA5 seem to be needed for the assessment of digital literacy.

For the time being, the short scale with specific and general items seems to be the tool of choice. Next steps could be to test the created short scale in comparison with a scale consisting of specific items only or to conduct further studies on the structure of the digital competence scale.

4.2 Measurable differences among undergraduates (Hypothesis 2)

Digital Competence scores among undergraduates follow the patterns found in previous studies (cf. Inan Karagul et al., 2021; Peng & Yu, 2022; Senkbeil et al., 2019; Siddiq & Scherer, 2019): scores were lower for women, higher for STEM students, and higher for more advanced students.

These results demonstrate the content validity of the short scale. It is also important to note that the effect of Digital Competence cannot be reduced to the fact that it is based on self-assessment (as argued for gender differences by Siddiq & Scherer, 2019). We were able to statistically subtract the effect of self-assessment from Digital Competence - and still demonstrate validity (criterion: correlation with work as a student research assistant).

The finding that digital competence does not depend on age is remarkable. Digital competence (as assessed here on the basis of DigComp2.1) is a matter of academic education: it clearly increases with the duration of studies, regardless of age. The surprising finding is that even the five-item short scale (DCg, cf. 3.1), which correlates slightly negatively with age (younger students see themselves as more competent), nevertheless correlates positively with years of study (more advanced students see themselves as more competent). The dependence of digital literacy on years of study that Senkbeil et al. (2019) found for German students can thus be generalized.

4.3 Compensatory effect of Digital Competence in the pandemic? (Hypothesis 3)

We have to be careful with the interpretation of effects here, because the data and the research design only allow correlational statements. We can say that we found evidence that digital competence correlates with research competence regardless of undergraduate research experience. Likewise, digital competence correlates with an improved study situation for students with fewer opportunities during the pandemic.

It should be noted that these are usually small effects; the correlations mostly remain below r = .3, which means that the explained variance remains below 10%. An exception is self-efficacy, for which our analyses found many, often high correlations. Not only does self-efficacy play a role in research competence, it also seems to counteract motivational deficits and to be beneficial for students with fewer opportunities - adding to the growing literature on self-efficacy and inclusion (e.g., Marra et al., 2009; Zhu et al., 2019). When considering digital competence or literacy in the academic context, it is therefore important to include sufficient other variables and influencing factors. This makes a reliable, valid, and tested short scale for digital competence all the more important.