
Research in Higher Education, Volume 31, Issue 4, pp 319–325

Are validity coefficients understated due to correctable defects in the GPA?

  • John W. Young

Abstract

The predictive validity of preadmissions measures such as standardized test scores and high school grades may be understated because of correctable defects in both the freshman-year and cumulative grade point average (GPA). Measurement error in the criterion artificially depresses the size of observed validity coefficients. In this study, item response theory (IRT) was used to develop a more reliable measure of academic performance, the IRT-based GPA, which was then tested in a predictive validity study using data from Stanford University. Results indicate that the IRT-based GPA is more predictable from preadmissions measures than the usual GPA.
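The depressing effect noted in the abstract is the classical attenuation relation from psychometric theory, stated here for context rather than quoted from the article: if $\rho_{XY}$ is the correlation between true predictor and criterion scores, and $r_{XX'}$ and $r_{YY'}$ are their reliabilities, then the observed validity coefficient is

$$r_{XY} = \rho_{XY}\,\sqrt{r_{XX'}\, r_{YY'}}.$$

Holding the true relationship fixed, any gain in criterion reliability $r_{YY'}$ (here, a more reliably measured GPA) raises the observed validity coefficient, which is the mechanism the study exploits.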
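The abstract does not spell out the estimation machinery, but a natural way to operationalize an IRT-based GPA is Samejima's graded response model: each course is treated as a polytomously scored item, each letter grade as an ordered response category, and a student's latent performance is estimated from the full pattern of grades. The sketch below is a minimal illustration under that assumption; the course parameters and the grid-search estimator are hypothetical stand-ins, not the procedure used in the paper.

import numpy as np

# Hypothetical parameters for 3 courses with 4 grade categories (0..3).
# a[j] is course j's discrimination; b[j] holds its ordered thresholds.
a = np.array([1.2, 0.8, 1.5])
b = np.array([[-1.5, 0.0, 1.2],
              [-2.0, -0.5, 0.8],
              [-1.0, 0.3, 1.5]])

def grade_probs(theta, a_j, b_j):
    """P(grade = k | theta) under the graded response model:
    differences of adjacent cumulative logistic curves."""
    cum = 1.0 / (1.0 + np.exp(-a_j * (theta - b_j)))  # P(grade >= k), k = 1..3
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower

def irt_gpa(grades, thetas=np.linspace(-4, 4, 801)):
    """Maximum-likelihood latent performance for one student's grade pattern,
    found by grid search over the latent scale."""
    loglik = np.zeros_like(thetas)
    for j, g in enumerate(grades):
        probs = np.array([grade_probs(t, a[j], b[j])[g] for t in thetas])
        loglik += np.log(probs)
    return thetas[np.argmax(loglik)]

# Example: a student earning grades B, A, B (categories 2, 3, 2).
print(irt_gpa([2, 3, 2]))

Unlike a raw grade average, which ignores differences in grading standards across courses, the latent-trait estimate weights each grade by how informative its course is through the discrimination parameter.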

Keywords

High School, Measurement Error, Test Score, Validity Study, Standardized Test



Copyright information

© Human Sciences Press, Inc. 1990

Authors and Affiliations

  • John W. Young
    Graduate School of Education, Rutgers University, New Brunswick
