Issues and Methods in the Measurement of Student Engagement: Advancing the Construct Through Statistical Modeling

Chapter

Abstract

This chapter provides an overview of statistical modeling approaches for advancing the measurement of student engagement. Following a discussion of the complexity of defining, and therefore measuring, engagement, it offers a general introduction to and guide for constructing productive measures of the construct. Confirmatory factor analysis and item response theory are then described and used to highlight modern methods for evaluating and scoring engagement instruments. In addition, the bifactor model is presented as a potentially useful approach for disentangling some of the intricacies of engagement and for clarifying its relationship with motivation. The Student Engagement Instrument (SEI) is used throughout the chapter to illustrate these methods and to suggest applications bearing on theoretical issues.
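
As a rough, generic illustration of the two modeling frameworks discussed in the chapter (standard textbook forms, not the chapter's own specification), a bifactor measurement model lets each item load on a general engagement factor and on exactly one specific factor, with all factors mutually orthogonal, while a two-parameter logistic item response model relates a student's latent trait level to the probability of endorsing a dichotomously scored item:

$$ x_{i} = \lambda_{iG}\,\eta_{G} + \lambda_{ik}\,\eta_{k} + \varepsilon_{i}, \qquad \operatorname{Cov}(\eta_{G},\eta_{k}) = 0 $$

$$ P(X_{ij} = 1 \mid \theta_{j}) = \frac{\exp\!\left[a_{i}\,(\theta_{j} - b_{i})\right]}{1 + \exp\!\left[a_{i}\,(\theta_{j} - b_{i})\right]} $$

Here $x_{i}$ is the response to item $i$, $\eta_{G}$ is the general factor, $\eta_{k}$ is the specific factor to which item $i$ is assigned, $\theta_{j}$ is the latent trait level of student $j$, and $a_{i}$ and $b_{i}$ are the item discrimination and location parameters. For Likert-type items such as those on the SEI, a polytomous extension (e.g., a graded response or rating scale model) replaces the dichotomous response function.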

Keywords

Confirmatory Factor Analysis · Item Response Theory · Student Engagement · Classical Test Theory · Bifactor Model

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

1. Center for Cultural Diversity & Minority Education, Madison, USA
