Abstract
With the current emphasis placed on ICT skills development in education, accurate information about how well students master these skills becomes invaluable. Despite the widespread use of self-report measures of ICT skills, their accuracy has been questioned. To our knowledge, no large-sample analysis of the heterogeneity in reporting behavior in the domain of ICT competencies exists; we fill this gap. We investigate the (in)comparability of self-reports of online communication skills (e.g., the use of social networks, data sharing) between two contrasting groups of students: (a) students from elite, high-performing grammar schools and (b) students from economics schools (total N = 1,070 students, 17 secondary schools). Using the anchoring vignette method, we identify scale usage differences among respondents and adjust their self-reports for these differences. We show that grammar school students significantly underestimate their skills: before the adjustment, they report significantly lower levels of online communication skills; after the adjustment, they have non-significantly higher levels of these skills. Differential academic demands thus might be a relevant factor in students’ self-assessment of online communication skills. In practice, students’ under- or over-estimation of skills might affect their access to ICT-related jobs and the effectiveness of educational decision-making in the ICT domain. We also show the potential of the anchoring vignette method to explain the paradoxical negative relationships between self-reported skills and achievement test results identified in the literature. Further research could explore this phenomenon in other domains of digital competence and among other student populations.
Notes
In the Czech Republic’s school curricula, the frequency with which a particular subject is taught is expressed in so-called “week lessons”. One week lesson means that the subject is taught once a week across the whole school year. The number of week lessons in the curriculum indicates how often the subject is taught across the four years of study.
Data available at the CERMAT web pages https://vysledky.cermat.cz/data/Default.aspx.
In our study, we not only compared the curricula of the two groups of students, but we also administered a short ICT achievement test with items covering the five competence areas defined in DigComp. In the test, grammar school students scored significantly higher than economics students, which is in line with results of achievement tests in other areas.
Cities with a population of over 50,000 inhabitants are considered big in the Czech Republic. There are only about 20 such cities in the country.
Note that in our sample, less than 1/5 of the respondents reported having 201 or more books in their household. By setting this threshold, we capture a group of “exceptional” families (less than 20% of the total), which might be considered to possess a high level of scholarly culture.
In the models, we use the following dummy variables: Grammar school (1 = YES, 0 = NO), where the reference group is students from economics schools; Male (1 = YES, 0 = NO), where the reference group is females; Home population > 50,000 (1 = YES, 0 = NO), where the reference group is students living in a location with a population of 50,000 inhabitants or fewer; and finally Household books – 201 or more (1 = YES, 0 = NO), where the reference group is students having 200 or fewer books in their household.
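As an illustration, the dummy coding described above might be constructed as follows (a minimal sketch; the raw column names and values are hypothetical, not taken from the study’s actual dataset):

```python
import pandas as pd

# Hypothetical raw survey data; variable names and values are illustrative only.
df = pd.DataFrame({
    "school_type": ["grammar", "economics", "grammar"],
    "gender": ["F", "M", "M"],
    "home_population": [120_000, 8_000, 60_000],
    "household_books": ["201+", "101-200", "201+"],
})

# Dummy coding with the reference groups described in the note above.
df["grammar_school"] = (df["school_type"] == "grammar").astype(int)   # ref: economics schools
df["male"] = (df["gender"] == "M").astype(int)                        # ref: females
df["big_city"] = (df["home_population"] > 50_000).astype(int)         # ref: population <= 50,000
df["books_201_plus"] = (df["household_books"] == "201+").astype(int)  # ref: 200 or fewer books

print(df[["grammar_school", "male", "big_city", "books_201_plus"]])
```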
References
Aesaert, K., & van Braak, J. (2015). Gender and socioeconomic related differences in performance based ICT competences. Computers and Education, 84, 8–25. https://doi.org/10.1016/j.compedu.2014.12.017
Angelini, V., Cavapozzi, D., Corazzini, L., & Paccagnella, O. (2012). Age, health and life satisfaction among older Europeans. Social Indicators Research, 105(2), 293–308. https://doi.org/10.1007/s11205-011-9882-x
Araujo, T., Wonneberger, A., Neijens, P., & de Vreese, C. (2017). How much time do you spend online? Understanding and improving the accuracy of self-reported measures of Internet use. Communication Methods and Measures, 11(3), 173–190. https://doi.org/10.1080/19312458.2017.1317337
Bago d’Uva, T., O’Donnell, O., & van Doorslaer, E. (2008). Differential health reporting by education level and its impact on the measurement of health inequalities among older Europeans. International Journal of Epidemiology, 37(6), 1375–1383. https://doi.org/10.1093/ije/dyn146
Blanco, M., & Lopez Boo, F. (2010). ICT skills and employment: A randomized experiment (IZA Discussion Papers, No. 5336). Institute for the Study of Labor (IZA). Retrieved July 31, 2021, from https://ssrn.com/abstract=1716131
Blazek, R., Janotova, Z., Potuznikova, E., & Basl, J. (2019). Mezinárodní šetření PISA 2018: Národní zpráva. Czech School Inspectorate. Retrieved July 31, 2021, from https://www.csicr.cz/Csicr/media/Prilohy/PDF_el._publikace/Mezin%c3%a1rodn%c3%ad%20%c5%a1et%c5%99en%c3%ad/PISA_2018_narodni_zprava.pdf
Bonsang, E., & van Soest, A. (2012). Satisfaction with social contacts of older Europeans. Social Indicators Research, 105(2), 273–292. https://doi.org/10.1007/s11205-011-9886-6
Burdova, J., Dolezalova, G., Chamoutova, D., Klenhova, M., Skacelova, P., Trhlikova, J., Ulovcova, J., & Vojtech, J. (2011). Uplatnění absolventů škol na trhu práce–2010. Prague: National Institution of Technical and Vocational Education. Retrieved July 31, 2021, from http://www.nuov.cz/uploads/Vzdelavani_a_TP/Uplatneni_2010_internet.pdf
Danner, R. B., & Pessu, C. O. (2013). A survey of ICT competencies among students in teacher preparation programmes at the University of Benin, Benin City, Nigeria. Journal of Information Technology Education, 12, 33–49. https://doi.org/10.28945/1762
Ferrari, A. (2013). DIGCOMP: A framework for developing and understanding digital competence in Europe. Retrieved July 31, 2021, from http://publications.jrc.ec.europa.eu/repository/bitstream/JRC83167/lb-na-26035-enn.pdf
Foucault Welles, B., Vashevko, A., Bennett, N., & Contractor, N. (2014). Dynamic models of communication in an online friendship network. Communication Methods and Measures, 8(4), 223–243. https://doi.org/10.1080/19312458.2014.967843
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Duckworth, D. (2020). Preparing for life in a digital world: IEA International Computer and Information Literacy Study 2018 international report. Springer. https://doi.org/10.1007/978-3-030-38781-5
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2014). Preparing for life in a digital age: The IEA International Computer and Information Literacy Study international report. Springer. Retrieved July 31, 2021, from https://link.springer.com/content/pdf/10.1007%2F978-3-319-14222-7.pdf
Gravill, J., Compeau, D., & Marcolin, B. (2002). Metacognition and IT: The influence of self-efficacy and self-awareness. AMCIS 2002 Proceedings. Retrieved July 31, 2021, from https://aisel.aisnet.org/amcis2002/147
Gravill, J. I., Compeau, D. R., & Marcolin, B. L. (2006). Experience effects on the accuracy of self-assessed user competence. Information and Management, 43(3), 378–394. https://doi.org/10.1016/j.im.2005.10.001
Hakkarainen, K., Ilomäki, L., Lipponen, L., Muukkonen, H., Rahikainen, M., Tuominen, T., Lakkala, M., & Lehtinen, E. (2000). Students’ skills and practices of using ICT: Results of a national assessment in Finland. Computers and Education, 34(2), 103–117. https://doi.org/10.1016/S0360-1315(00)00007-5
He, J., & van de Vijver, F. J. R. (2016). The motivation-achievement paradox in international educational achievement tests: Toward a better understanding. In R. B. King & A. B. I. Bernardo (Eds.), The psychology of Asian learners: A Festschrift in honor of David Watkins (pp. 253–268). Singapore: Springer Science. Retrieved July 31, 2021, from https://doi.org/10.1007/978-981-287-576-1_16
Hohlfeld, T. N., Ritzhaupt, A. D., & Barron, A. E. (2013). Are gender differences in perceived and demonstrated technology literacy significant? It depends on the model. Education Technology Research and Development, 61, 639–663. https://doi.org/10.1007/s11423-013-9304-7
Ilomäki, L., & Rantanen, P. (2007). Intensive use of ICT in school: Developing differences in students’ ICT expertise. Computers & Education, 48(1), 119–136. https://doi.org/10.1016/j.compedu.2005.01.003
Kaufman, J. H., Engberg, J., Hamilton, L. S., Yuan, K., & Hill, H. C. (2019). Validity evidence supporting use of anchoring vignettes to measure teaching practice. Educational Assessment, 24(3), 155–188. https://doi.org/10.1080/10627197.2019.1615374
King, G., Murray, C., Salomon, J., & Tandon, A. (2004). Enhancing the validity and cross-cultural comparability of measurement in survey research. American Political Science Review, 98(1), 567–583. https://doi.org/10.1017/S000305540400108X
Kyllonen, P. C., & Bertling, J. (2013). Innovative questionnaire assessment methods to increase cross-country comparability. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), A handbook of international large-scale assessment data analysis: Background, technical issues, and methods of data analysis (pp. 277–286). Chapman Hall/CRC Press.
Lau, W. W. F., & Yuen, A. H. K. (2014). Developing and validating of a perceived ICT literacy scale for junior secondary school students: Pedagogical and educational contributions. Computers and Education, 78, 1–9. https://doi.org/10.1016/j.compedu.2014.04.016
Lyons, A. C., Zucchetti, A., Kass-Hanna, J., & Cobo, C. (2019). Bridging the gap between digital skills and employability for vulnerable populations. G20 Insights. Retrieved July 31, 2021, from https://www.g20-insights.org/wp-content/uploads/2019/05/t20-japan-tf7-9-bridging-gap-between-digital-skills-employability.pdf
Martínez-Cantos, J. L. (2017). Digital skills gaps: A pending subject for gender digital inclusion in the European Union. European Journal of Communication, 32(5), 419–438. https://doi.org/10.1177/0267323117718464
Merritt, K., Smith, D., & Renzo, J. C. D. (2005). An investigation of self-reported computer literacy: Is it reliable? Issues in Information Systems, 6(1), 289–295. https://doi.org/10.48009/1_iis_2005_289-295
National Institute for Education. (2019a). Nová soustava oborů vzdělání poskytujících střední vzdělání s maturitní zkouškou obory kategorie M a L. Retrieved December 14, 2019, from http://zpd.nuov.cz/celkove_lm.htm
National Institute for Education. (2019b). Rámcový vzdělávací program pro gymnázia. Retrieved December 14, 2019, from http://www.nuv.cz/file/159
Organisation for Economic Co-operation and Development. (2017). PISA 2015 technical report. France: OECD Publishing. Retrieved July 31, 2021, from http://www.oecd.org/pisa/sitedocument/PISA-2015-technical-report-final.pdf
Peart, M. T., Gutiérrez-Esteban, P., & Cubo-Delgado, S. (2020). Development of the digital and socio-civic skills (DIGISOC) questionnaire. Educational Technology and Research Development, 68, 3327–3351. https://doi.org/10.1007/s11423-020-09824-y
Peled, Y. (2020). Pre-service teacher’s self-perception of digital literacy: The case of Israel. Education and Information Technologies. https://doi.org/10.1007/s10639-020-10387-x
Rabe-Hesketh, S., & Skrondal, A. (2002). Estimating Chopit models in gllamm: Political efficacy example from King et al. Retrieved July 31, 2021, from http://www.gllamm.org/chopit.pdf
Scharkow, M. (2016). The accuracy of self-reported internet use–a validation study using client log data. Communication Methods and Measures, 10(1), 13–27. https://doi.org/10.1080/19312458.2015.1118446
Scherer, R., & Siddiq, F. (2019). The relation between students’ socioeconomic status and ICT literacy: Findings from a meta-analysis. Computers and Education, 138, 13–32. https://doi.org/10.1016/j.compedu.2019.04.011
Siddiq, F., Hatlevik, O. E., Olsen, R. V., Throndsen, I., & Scherer, R. (2016). Taking a future perspective by learning from the past-A systematic review of assessment instruments that aim to measure primary and secondary school students’ ICT literacy. Educational Research Review, 19, 58–84. https://doi.org/10.1016/j.edurev.2016.05.002
Van de Vijver, F. J. R. (2018). Towards an integrated framework of bias in noncognitive assessment in international large-scale studies: Challenges and prospects. Educational Measurement: Issues and Practice, 37(4), 49–56. https://doi.org/10.1111/emip.12227
Vonkova, H. (2019). Life satisfaction among different groups of children: Self-reports, differential scale usage and anchoring vignettes. Child Indicators Research, 12(6), 2111–2136. https://doi.org/10.1007/s12187-019-09629-3
Vonkova, H., Bendl, S., & Papajoanu, O. (2017). How students report dishonest behavior in school: Self-assessment and anchoring vignettes. Journal of Experimental Education, 85(1), 36–53. https://doi.org/10.1080/00220973.2015.1094438
Vonkova, H., & Hrabak, J. (2015). The (in)comparability of ICT knowledge and skill self-assessments among upper secondary school students: The use of the anchoring vignette method. Computers & Education, 85, 191–202. https://doi.org/10.1016/j.compedu.2015.03.003
Vonkova, H., Hrabak, J., Kralova, K., & Papajoanu, O. (2021). Developing a framework for the examination of anchoring vignette assumptions using cognitive interviews: A demonstration in the ICT skills domain. Field Methods, 33(4). https://doi.org/10.1177/1525822X21991281
Vonkova, H., Zamarro, G., & Hitt, C. (2018). Cross-country heterogeneity in students’ reporting behavior: The use of the anchoring vignette method. Journal of Educational Measurement, 55(1), 3–31. https://doi.org/10.1111/jedm.12161
Weiss, S., & Roberts, R. D. (2018). Using anchoring vignettes to adjust self-reported personality: A comparison between countries. Frontiers in Psychology, 9, 1–17. https://doi.org/10.3389/fpsyg.2018.00325
West, M. R., Kraft, M. A., Finn, A. S., Martin, R. E., Duckworth, A. L., Gabrieli, C. F., & Gabrieli, J. D. (2016). Promise and paradox: Measuring students’ non-cognitive skills and the impact of schooling. Educational Evaluation and Policy Analysis, 38(1), 148–170. https://doi.org/10.3102/0162373715597298
Acknowledgements
This work was supported by the Czech Science Foundation under Grant GA ČR 17-02993S “Factors influencing the ICT skill self-assessments of upper-secondary school students”.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Appendices
Appendix A
Specification of the CHOPIT model
The text is based on the description provided by Vonkova, Bendl, and Papajoanu (2017) but adapted to reflect our domain of online communication skills, explanatory variables, and our use of more than one anchoring vignette.
The CHOPIT model consists of two parts: the self-assessment part and the anchoring vignette part. First, we model the self-reported online communication skills, reflecting the ordinal nature of responses (we use a 7-point scale). For each student \(i\left( {i = 1, \ldots ,I} \right)\) we define a latent online communication skills variable \(Y_{si}^{*}\) as follows:

$$Y_{si}^{*} = {\varvec{X}}_{i} {\varvec{\beta}} + \varepsilon_{si} ,$$
where \({\varvec{X}}_{i}\) is a vector of explanatory variables. We use the following dummy variables: Grammar school (1 = YES, 0 = NO), where the reference group is students from economics schools; Male (1 = YES, 0 = NO), where the reference group is females; Home population > 50,000 (1 = YES, 0 = NO), where the reference group is students living in a location with a population of 50,000 inhabitants or fewer; and Household books – 201 or more (1 = YES, 0 = NO), where the reference group is students having 200 or fewer books in their household. \({\varvec{\beta}}\) is a vector of unknown parameters and \(\varepsilon_{si}\) is the error term, assumed to be normally distributed with mean 0 and variance 1 and independent of \({\varvec{X}}_{i}\). The reported online communication skills \(Y_{si}\) is an ordinal variable based upon the latent online communication skills variable:

$$Y_{si} = k\quad {\text{if}}\quad \tau_{i}^{k - 1} < Y_{si}^{*} \le \tau_{i}^{k} ,\quad k = 1, \ldots ,7,$$
where \(\tau_{i}^{k} ,k = 1, \ldots , 6\) are the thresholds \(\left( {\tau_{i}^{0} = - \infty , \tau_{i}^{7} = \infty } \right)\), which are allowed to differ with the explanatory variables of the students:

$$\tau_{i}^{1} = {\varvec{X}}_{i} {\varvec{\gamma}}^{1} ,\quad \tau_{i}^{k} = \tau_{i}^{k - 1} + \exp \left( {{\varvec{X}}_{i} {\varvec{\gamma}}^{k} } \right),\quad k = 2, \ldots ,6,$$

where \({\varvec{\gamma}}^{k} ,k = 1, \ldots ,6\) are vectors of unknown parameters. In our study, they capture differences in the reporting behavior of the students with respect to their characteristics (type of school, gender, the grade they attend, the population of the location the student lives in, and the number of books in their household). If the thresholds are the same for all students, i.e., if no heterogeneity in reporting behavior \(\left( {\tau_{i}^{k} = \tau^{k} } \right)\) is assumed, we obtain a common ordered probit model.
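The respondent-specific thresholds are commonly parameterized as \(\tau_i^1 = {\varvec{X}}_i{\varvec{\gamma}}^1\) with \(\tau_i^k = \tau_i^{k-1} + \exp({\varvec{X}}_i{\varvec{\gamma}}^k)\) for higher categories, which keeps them strictly increasing. A minimal numerical sketch of this parameterization follows; the parameter values are made up for illustration, not estimates from our study:

```python
import numpy as np

def thresholds(X, gammas):
    """Respondent-specific thresholds tau^1..tau^6 under the standard
    CHOPIT parameterization: tau^1 = X @ gamma^1 and
    tau^k = tau^{k-1} + exp(X @ gamma^k) for k >= 2, so thresholds
    are strictly increasing for every respondent.

    X: (n, p) covariate matrix; gammas: (6, p) threshold parameters.
    """
    n = X.shape[0]
    tau = np.empty((n, 6))
    tau[:, 0] = X @ gammas[0]
    for k in range(1, 6):
        tau[:, k] = tau[:, k - 1] + np.exp(X @ gammas[k])
    return tau

# Illustrative values (not estimates from the study):
X = np.array([[1.0, 1.0],   # respondent with the dummy set to 1
              [1.0, 0.0]])  # reference-group respondent
gammas = np.zeros((6, 2))
gammas[0] = [-2.0, 0.5]     # the dummy shifts the whole response scale
tau = thresholds(X, gammas)
```

A shifted first threshold with unchanged increments moves the entire scale for that group — exactly the kind of scale-usage difference the vignettes are meant to detect.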
If we use only the self-reported online communication skills, the parameters \({\varvec{\beta}}\) and \({\varvec{\gamma}}^{1}\) are not separately identified (only their difference is identified). In other words, we cannot distinguish between the real online communication skill levels and the differences in reporting styles if we only use the self-reports. Therefore, we need additional information to identify the parameters. The anchoring vignette evaluations serve as the source of this information. In the second part of the CHOPIT model, we model the responses to the anchoring vignettes as follows:

$$Y_{vi}^{*} = \theta_{v} + \varepsilon_{vi} ,\qquad Y_{vi} = k\quad {\text{if}}\quad \tau_{i}^{k - 1} < Y_{vi}^{*} \le \tau_{i}^{k} ,$$
where \(\theta_{v}\) corresponds to the level of the online communication skills of the hypothetical person in vignette \(v\), and \(\varepsilon_{vi}\) is the error term, assumed to follow a normal distribution with mean 0 and variance \(\sigma_{v}^{2}\) (assumed to be equal for all vignettes) and to be independent of \(\varepsilon_{si}\) and of \({\varvec{X}}_{i}\). \(Y_{vi}\) is the evaluation of vignette \(v\) by student \(i\). The thresholds between response categories for the vignettes are modeled in the same way as the thresholds between response categories for the self-reports. This enables the identification of \({\varvec{\gamma}}^{1}\) (i.e., the parameters reflecting differences in response styles) and subsequently the identification of \({\varvec{\beta}}\) (the parameters reflecting real differences in online communication skills).
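The two parts are combined into a single likelihood and estimated jointly, e.g., via gllamm (Rabe-Hesketh & Skrondal, 2002). Purely as an illustration, one respondent’s log-likelihood contribution under the standard CHOPIT parameterization might be sketched as follows; the function, variable names, and numerical values are ours, not the authors’ estimation code:

```python
import numpy as np
from scipy.stats import norm

def chopit_loglik_i(y_self, y_vign, X_i, beta, gammas, thetas, sigma_v):
    """Log-likelihood contribution of one respondent on a 7-point scale.

    y_self: self-report category (1..7); y_vign: vignette ratings (1..7);
    thetas: vignette locations theta_v; sigma_v: vignette error SD.
    """
    # Respondent-specific thresholds, shared by self-report and vignettes:
    # tau^1 = X gamma^1, then tau^k = tau^{k-1} + exp(X gamma^k).
    tau = [X_i @ gammas[0]]
    for k in range(1, 6):
        tau.append(tau[-1] + np.exp(X_i @ gammas[k]))
    tau = np.concatenate(([-np.inf], tau, [np.inf]))  # tau^0 .. tau^7

    # Self-assessment part: latent mean X beta, variance fixed at 1.
    mu = X_i @ beta
    ll = np.log(norm.cdf(tau[y_self] - mu) - norm.cdf(tau[y_self - 1] - mu))

    # Vignette part: latent mean theta_v, variance sigma_v^2, same thresholds.
    for v, y in enumerate(y_vign):
        ll += np.log(norm.cdf((tau[y] - thetas[v]) / sigma_v)
                     - norm.cdf((tau[y - 1] - thetas[v]) / sigma_v))
    return ll

# Illustrative call with made-up values (not estimates from the study):
g = np.zeros((6, 1))
g[0] = [-3.0]  # thresholds at -3, -2, ..., 2 for every respondent
ll = chopit_loglik_i(4, [4], np.array([1.0]), np.array([0.0]),
                     g, np.array([0.0]), 1.0)
```

Estimation then maximizes the sum of these contributions over all respondents, jointly over \({\varvec{\beta}}\), the \({\varvec{\gamma}}^{k}\), the \(\theta_{v}\), and \(\sigma_{v}\).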
Appendix B
The set of estimated parameters for version 2 of the ordered probit and the CHOPIT model, including the threshold parameters γ for the CHOPIT model
See Table 5.
About this article
Cite this article
Vonkova, H., Papajoanu, O. & Kralova, K. Student online communication skills: Enhancing the comparability of self-reports among different groups of students. Educ Inf Technol 27, 2181–2205 (2022). https://doi.org/10.1007/s10639-021-10685-y