Reading-Aloud Versus Self-Administered Student Questionnaires: An Experiment on Data Quality

  • Cornelia Gresch
  • Rolf Strietholt
  • Michael Kanders
  • Heike Solga
Chapter

Abstract

A major finding from recent large-scale assessments of student achievement is that a remarkable proportion of students around the world are poor readers. This calls into question the quality of the data retrieved from self-administered background questionnaires. A better administration mode, especially for this student population, might be to have the administrator read the questionnaire aloud, as is done in surveys at elementary schools. To provide empirical evidence on whether reading aloud improves data quality, we conducted an experimental study with 664 twelve-year-old students in lower secondary schools in Germany. One finding is that, unsurprisingly, reading questionnaires aloud increases survey time. Regarding data quality, however, item non-response rates decrease somewhat in the reading-aloud group, and filtering procedures also work better. This effect can be found regardless of students' migrant status or reading speed. Even though data quality among slow readers is generally poorer, the improvement they achieve through reading aloud is comparable to that of fast readers. Regarding acceptance of the mode, analyses of the role of migrant status and reading speed suggest that slow readers and migrant students particularly prefer having the questionnaires read aloud. Our study indicates that reading questionnaires aloud may be a meaningful administration mode not only in early primary school grades but also at the beginning of secondary school. Data quality in studies involving at-risk students can particularly benefit from this mode.

Copyright information

© Springer Fachmedien Wiesbaden 2016

Authors and Affiliations

  • Cornelia Gresch, Berlin, Germany
  • Rolf Strietholt, Dortmund, Germany
  • Michael Kanders, Dortmund, Germany
  • Heike Solga, Berlin, Germany