
Metacognition and Learning, Volume 6, Issue 2, pp 155–177

Measuring strategy use in context with multiple-choice items

  • Jennifer Cromley
  • Roger Azevedo

Abstract

A number of authors have presented data that challenge the validity of self-reports of strategy use or strategy choice. We created a multiple-choice measure of students’ strategy use based on the work of Kozminsky and Kozminsky (2001) and tested it with three samples as part of a series of studies examining the fit of the DIME model of reading comprehension. One study was conducted at the high school level (N = 175) and two at the undergraduate level (N = 185 and N = 737). Across the three studies with three different samples, we found good evidence for the internal consistency reliability and concurrent validity of this type of measure. Commonality analysis suggested that strategy use mainly makes a shared contribution to comprehension with other predictors, especially inference, and to some extent vocabulary, background knowledge, and word reading. The measure was relatively easy to construct and easy to administer to large numbers of students, and it showed much stronger evidence of concurrent validity than self-ratings of frequency of strategy use.
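Commonality analysis, as used in the abstract, partitions the variance in a criterion (here, comprehension) into portions explained uniquely by each predictor and portions explained jointly. A minimal sketch of the two-predictor case follows, using synthetic data and hypothetical variable names (`strategy`, `inference`), not the study's actual scores:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS regression of y on X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 500
shared = rng.normal(size=n)              # source of overlap between predictors
strategy = shared + rng.normal(size=n)   # hypothetical strategy-use score
inference = shared + rng.normal(size=n)  # hypothetical inference score
comprehension = strategy + inference + rng.normal(size=n)

# R^2 for the full model and for each predictor alone
r2_full = r_squared(np.column_stack([strategy, inference]), comprehension)
r2_s = r_squared(strategy, comprehension)
r2_i = r_squared(inference, comprehension)

# Commonality decomposition: unique + unique + common = full-model R^2
unique_strategy = r2_full - r2_i   # variance only strategy explains
unique_inference = r2_full - r2_s  # variance only inference explains
common = r2_s + r2_i - r2_full     # variance the two explain jointly
```

A large `common` relative to the unique components is the pattern the abstract reports: strategy use predicts comprehension mostly through variance it shares with other predictors.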

Keywords

Strategy use · Comprehension · Validity · Reliability · Measurement

References

  1. Alexander, P. A., & Judy, J. E. (1988). The interaction of domain-specific and strategic knowledge in academic performance. Review of Educational Research, 58(4), 375–404.
  2. Azevedo, R., Moos, D. C., Greene, J. A., Winters, F. I., & Cromley, J. C. (2008a). Why is externally-regulated learning more effective than self-regulated learning with hypermedia? Educational Technology Research and Development, 56(1), 45–72.
  3. Azevedo, R., Witherspoon, A. M., Graesser, A., McNamara, D., Rus, V., Cai, Z., Lintean, M., & Siler, E. (2008b). MetaTutor: An adaptive hypermedia system for training and fostering self-regulated learning about complex science topics. Paper presented at a symposium on ITSs with agents at the annual meeting of the Society for Computers in Psychology, Chicago, IL.
  4. Baker, L. (1989). Metacognition, comprehension monitoring, and the adult reader. Educational Psychology Review, 1(1), 3–38.
  5. Baker, L., & Cerro, L. C. (2000). Assessing metacognition in children and adults. In G. Schraw & J. C. Impara (Eds.), Issues in the measurement of metacognition (pp. 99–146). Lincoln: Buros Institute.
  6. Best, R. M., Rowe, M., Ozuru, Y., & McNamara, D. S. (2005). Deep-level comprehension of science texts: The role of the reader and the text. Topics in Language Disorders, 25, 62–80.
  7. Brand-Gruwel, S., Aarnoutse, C., & Van Den Bos, K. P. (1998). Improving text comprehension strategies in reading and listening settings. Learning and Instruction, 8, 63–81.
  8. Cain, K., Oakhill, J., & Bryant, P. (2004). Children’s reading comprehension ability: Concurrent prediction by working memory, verbal ability, and component skills. Journal of Educational Psychology, 96(1), 31–42.
  9. Calvo, M. (2005). Relative contribution of vocabulary knowledge and working memory span to elaborative inferences in reading. Learning and Individual Differences, 15(1), 53–65.
  10. Campbell, N. A., & Reece, J. B. (2001). Biology (6th ed.). San Francisco: Benjamin Cummings.
  11. Cromley, J. G., & Azevedo, R. (2006). Self-report of reading comprehension strategies: What are we measuring? Metacognition and Learning, 1(3), 229–247.
  12. Cromley, J. G., & Azevedo, R. (2007). Testing and refining the direct and inferential mediation model of reading comprehension. Journal of Educational Psychology, 99(2), 311–325. doi: 10.1037/0022-0663.99.2.311.
  13. Cromley, J. G., & Snyder, L. E. (2007). Testing the fit of the DIME model of reading comprehension with undergraduate students. Paper presented at the annual conference of the American Educational Research Association, Chicago, IL.
  14. Cromley, J. G., Snyder-Hogan, L. E., & Luciw-Dubas, U. A. (2010a). Reading comprehension of scientific text: A domain-specific test of the direct and inferential mediation model of reading comprehension. Journal of Educational Psychology, 102(3), 687–700. doi: 10.1037/a001945.
  15. Cromley, J. G., Snyder-Hogan, L. E., & Luciw-Dubas, U. A. (2010b). Cognitive activities in complex science text and diagrams. Contemporary Educational Psychology, 35, 59–74. doi: 10.1016/j.cedpsych.2009.10.002.
  16. Evans, J. E., Floyd, R. G., McGrew, K. S., & Leforgee, M. H. (2001). The relations between measures of Cattell-Horn-Carroll (CHC) cognitive abilities and reading achievement during childhood and adolescence. School Psychology Review, 31(2), 246–262.
  17. Glaser, C., & Brunstein, J. (2007). Improving fourth-grade students’ composition skills: Effects of strategy instruction and self-regulation procedures. Journal of Educational Psychology, 99(2), 297–310.
  18. Gonzalez, J. E., & Uhing, B. M. (2008). Home literacy environments and young Hispanic children’s English and Spanish oral language: A communality analysis. Journal of Early Intervention, 30(2), 116–139. doi: 10.1177/1053815107313858.
  19. Graesser, A. (2007). An introduction to strategic reading comprehension. In D. S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies. Mahwah: Lawrence Erlbaum Associates.
  20. Hadwin, A., Winne, P., Stockley, D., Nesbit, J., & Woszczyna, C. (2001). Context moderates students’ self-reports about how they study. Journal of Educational Psychology, 93(3), 477–487.
  21. Hadwin, A. F., Nesbit, J. C., Jamieson-Noel, D., Code, J., & Winne, P. H. (2007). Examining trace data to explore self-regulated learning. Metacognition and Learning, 2(2), 107–124. doi: 10.1007/s11409-007-9016-7.
  22. Hannon, B., & Daneman, M. (2001). A new tool for measuring and understanding individual differences in the component processes of reading comprehension. Journal of Educational Psychology, 93(1), 103–128.
  23. Hannus, M., & Hyona, J. (1999). Utilization of illustrations during learning of science textbook passages among low- and high-ability children. Contemporary Educational Psychology, 24(2), 95–123.
  24. Hine, D. C. (1994). Hine sight: Black women and the re-construction of American history. New York: Carlson Publications.
  25. Johnson-Glenberg, M. C. (2005). Web-based training of metacognitive strategies for text comprehension: Focus on poor comprehenders. Reading and Writing, 18, 755–786.
  26. Katz, S., Blackburn, A. B., & Lautenschlager, G. J. (1991). Answering reading comprehension items without passages on the SAT when items are quasi-randomized. Educational and Psychological Measurement, 51(3), 747–754.
  27. Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge, England: Cambridge University Press.
  28. Kozminsky, E., & Kozminsky, L. (2001). How do general knowledge and reading strategies ability relate to reading comprehension of high school students at different educational levels? Journal of Research in Reading, 24(2), 187–204.
  29. Leslie, L., & Caldwell, J. (2000). Qualitative reading inventory III. New York: Longman.
  30. MacGinitie, W. H., MacGinitie, R. K., Maria, K., & Dreyer, L. G. (2001). Gates-MacGinitie reading tests, Level 7/9, Form S (4th ed.). Itasca: Riverside.
  31. MacGinitie, W. H., MacGinitie, R. K., Maria, K., & Dreyer, L. G. (2002). Technical report for the fourth edition, Gates-MacGinitie reading tests. Itasca: Riverside.
  32. McKeown, M. G., Beck, I. L., & Blake, R. G. K. (2009). Rethinking comprehension instruction: Comparing strategies and content instructional approaches. Reading Research Quarterly, 44(3), 218–253.
  33. McNamara, D. S., & Magliano, J. P. (2009). Towards a comprehensive model of comprehension. In B. Ross (Ed.), The psychology of learning and motivation (Vol. 51, pp. 297–384). New York: Elsevier Science.
  34. McNamara, D. S., O’Reilly, T. P., Best, R. M., & Ozuru, Y. (2006). Improving adolescent students’ reading comprehension with iSTART. Journal of Educational Computing Research, 34(2), 147–171.
  35. McNamara, D. S., de Vega, M., & O’Reilly, T. (2007). Comprehension skill, inference making, and the role of knowledge. In F. Schmalhofer & C. A. Perfetti (Eds.), Higher level language processes in the brain: Inference and comprehension processes (pp. 233–251). Mahwah: Erlbaum.
  36. Mokhtari, K., & Reichard, C. A. (2002). Assessing students’ metacognitive awareness of reading strategies. Journal of Educational Psychology, 94(2), 249–259.
  37. Naceur, A., & Schiefele, U. (2005). Motivation and learning—The role of interest in construction of representation of text and long-term retention: Inter- and intraindividual analyses. European Journal of Psychology of Education, 20(2), 155–170.
  38. NICHD. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: NICHD.
  39. Oakhill, J., & Yuill, N. (1996). Higher order factors in comprehension disability: Processes and remediation. In C. Cornoldi & J. Oakhill (Eds.), Reading comprehension difficulties: Processes and intervention (pp. 69–92). Mahwah: Erlbaum.
  40. OECD. (2000). Manual for the PISA 2000 database. Paris: OECD.
  41. Richardson, J. T. E. (2004). Methodological issues in questionnaire-based research on student learning in higher education. Educational Psychology Review, 16(4), 347–358.
  42. Samuelstuen, M., & Bråten, I. (2007). Examining the validity of self-reports on scales measuring students’ strategic processing. The British Journal of Educational Psychology, 77(2), 351–378.
  43. Schmalhofer, F., McDaniel, M. A., & Keefe, D. (2002). A unified model for predictive and bridging inferences. Discourse Processes, 33(2), 105–132.
  44. Schmitt, M. (1990). A questionnaire to measure children’s awareness of strategic reading processes. The Reading Teacher, 49, 454–461.
  45. Shorris, E. (1997). On the uses of a liberal education II. As a weapon in the hands of the restless poor. Harper’s Magazine, 295, 50–59.
  46. Starr, C., & McMillan, B. (2001). Human biology (4th ed.). Pacific Grove: Wadsworth Group.
  47. Stromso, H. I., Braten, I., & Samuelstuen, M. S. (2003). Students’ strategic use of multiple sources during expository text reading: A longitudinal think-aloud study. Cognition and Instruction, 21(2), 113–147.
  48. VanSledright, B. A., & Frankes, L. (2000). Concept- and strategic-knowledge development in historical study: Reading American history: A comparative exploration in two fourth-grade classrooms. Cognition and Instruction, 18(2), 239–283.
  49. Veenman, M. V. J. (2005). The assessment of metacognitive skills: What can be learned from multi-method designs? In B. Moschner & C. Artelt (Eds.), Lernstrategien und Metakognition: Implikationen für Forschung und Praxis (pp. 75–97). Berlin: Waxmann.
  50. Willson, V. L., & Rupley, W. H. (1997). A structural equation model for reading comprehension based on background, phonemic, and strategy knowledge. Scientific Studies of Reading, 1(1), 45–63.
  51. Winne, P. H., & Nesbit, J. C. (2009). Supporting self-regulated learning with cognitive tools. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 259–277). Mahwah: Lawrence Erlbaum Associates.
  52. Woodcock, R. W. (1997). Woodcock diagnostic reading battery. Itasca: Riverside.
  53. Zientek, L. R., & Thompson, B. (2009). Matrix summaries improve research reports: Secondary analyses using published literature. Educational Researcher, 38(5), 343–352. doi: 10.3102/0013189X09339056.

Copyright information

© Springer Science + Business Media, LLC 2011

Authors and Affiliations

  1. Department of Psychological Studies in Education, Temple University, Philadelphia, USA
  2. Department of Educational and Counselling Psychology (ECP), Laboratory for the Study of Metacognition and Advanced Learning Technologies, McGill University, Montreal, Canada
