
Psychometrika, Volume 46, Issue 3, pp. 257–274

Using aptitude measurements for the optimal assignment of subjects to treatments with and without mastery scores

  • Wim J. van der Linden

Abstract

For assigning subjects to treatments the point of intersection of within-group regression lines is ordinarily used as the critical point. This decision rule is criticized and, for several utility functions and any number of treatments, replaced by optimal monotone, nonrandomized (Bayes) rules. Treatments both with and without mastery scores are considered. Moreover, the effect of unreliable criterion scores on the optimal decision rule is examined, and it is illustrated how qualitative information can be combined with aptitude measurements to improve treatment assignment decisions. Although the models in this paper are presented with special reference to the aptitude-treatment interaction problem in education, it is indicated that they apply to a variety of situations in which subjects are assigned to treatments on the basis of some predictor score, as long as there are no allocation quota considerations.
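
For readers who want a concrete picture of the conventional rule the abstract criticizes, the following sketch (in Python, with hypothetical data and parameter values not taken from the paper) fits a within-group regression of the criterion on aptitude for each of two treatments, takes the aptitude value at which the two regression lines intersect as the critical point, and assigns a subject to the treatment with the higher predicted criterion score at his or her aptitude level. The paper's contribution is to replace this intersection rule with monotone Bayes rules that maximize expected utility; that replacement depends on the utility function chosen and is not reproduced here.

```python
# Minimal sketch of the conventional intersection-point rule (not the
# paper's Bayes rules).  All data below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration samples: aptitude x and criterion y per treatment.
x_a = rng.uniform(0, 100, 60)
y_a = 20 + 0.3 * x_a + rng.normal(0, 5, 60)   # treatment A: flatter slope
x_b = rng.uniform(0, 100, 60)
y_b = 5 + 0.6 * x_b + rng.normal(0, 5, 60)    # treatment B: steeper slope

# Within-group least-squares regression lines y = b0 + b1 * x.
b1_a, b0_a = np.polyfit(x_a, y_a, 1)
b1_b, b0_b = np.polyfit(x_b, y_b, 1)

# Point of intersection of the two regression lines: the usual critical point.
x_star = (b0_a - b0_b) / (b1_b - b1_a)
print(f"critical aptitude score: {x_star:.1f}")

def assign(x):
    """Assign a subject with aptitude x to the treatment whose regression
    line predicts the higher criterion score (equivalently, compare x with
    the critical point x_star)."""
    return "A" if b0_a + b1_a * x > b0_b + b1_b * x else "B"

for x in (20, 50, 80):
    print(x, "->", assign(x))
```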

Key words

aptitude-treatment interaction; decision theory; criterion-referenced measurement



Copyright information

© The Psychometric Society 1981

Authors and Affiliations

  • Wim J. van der Linden
    1. Onderafdeling Toegepaste Onderwijskunde, Technische Hogeschool Twente, Enschede, The Netherlands
