Path-Referenced Assessment of Individual Differences

  • John R. Bergan
  • Clement A. Stone
  • Jason K. Feld
Part of the Perspectives on Individual Differences book series (PIDF)

Abstract

Assessment plays a critical role in meeting instructional needs related to individual differences in student competence. Assessment devices have long been central in determining the educational placement of students with special learning needs. Assessment techniques have also been used to diagnose individual learning problems and to place students in curriculum sequences (Glaser & Nitko, 1971).

Keywords

Latent Class Analysis, Latent Trait, Task Model, Item Difficulty, Addition Task

References

  1. Angoff, W. H. Scales, norms and equivalent scores. In R. L. Thorndike (Ed.), Educational measurement (2nd ed.). Washington, D.C.: American Council on Education, 1971, pp. 508–600.
  2. Ausubel, D. P., & Sullivan, E. V. Theory and problems of child development. New York: Grune & Stratton, 1970.
  3. Bentler, P. M. Multivariate analysis with latent variables. In M. R. Rosenzweig & L. W. Porter (Eds.), Annual Review of Psychology, Vol. 31. Palo Alto, Calif.: Annual Reviews, 1980.
  4. Bentler, P. M., & Weeks, D. G. Multivariate analysis with latent variables. In P. R. Krishnaiah & L. Kanal (Eds.), Handbook of statistics, Vol. 2. Amsterdam: North-Holland, 1980.
  5. Bergan, J. R. The structural analysis of behavior: An alternative to the learning hierarchy model. Review of Educational Research, 1980, 50, 225–246.
  6. Bergan, J. R. Path-referenced assessment in school psychology. In T. R. Kratochwill (Ed.), Advances in school psychology (Vol. 1). Hillsdale, N.J.: Lawrence Erlbaum Associates, 1981a.
  7. Bergan, J. R. Measuring cognitive growth in Head Start programs (Contract No. HHS-105-81-C-008). Paper presented at the annual symposium of the Society for Research in Child Development, Boston, Massachusetts, 1981b.
  8. Bergan, J. R. A domain structure model for measuring cognitive development (Technical Report, Contract No. HHS-105-81-C-008). Tucson: University of Arizona, Center for Educational Evaluation and Measurement, 1981c.
  9. Bergan, J. R. Latent-class models in educational research. In E. W. Gordon (Ed.), Review of research in education (Vol. 10). Washington, D.C.: American Educational Research Association, 1983, pp. 305–360.
  10. Bergan, J. R., & Henderson, R. W. Child development. Columbus, Oh.: Charles E. Merrill, 1979.
  11. Bergan, J. R., & Stone, C. A. Psychometric and instructional validation of domain structures. Unpublished manuscript, University of Arizona, Tucson, 1981.
  12. Bergan, J. R., Stone, C. A., & Feld, J. K. Rule replacement in the development of basic number skills. Journal of Educational Psychology, 1984, 76, 289–299.
  13. Bergan, J. R., Towstopiat, O. M., Cancelli, A. A., & Karp, C. L. Replacement and component rules in hierarchically ordered mathematics rule learning tasks. Journal of Educational Psychology, 1982, 74, 39–50.
  14. Berk, R. A. Handbook of methods for detecting test bias. Baltimore, Md.: Johns Hopkins University Press, 1982.
  15. Berliner, D. C. Developing conceptions of classroom environments: Some light on the T in classroom studies of ATI. Educational Psychologist, 1983, 18, 1–13.
  16. Brainerd, C. J. Piaget's theory of intelligence. Englewood Cliffs, N.J.: Prentice-Hall, 1979.
  17. Brennan, R. L. Applications of generalizability theory. In R. A. Berk (Ed.), Criterion-referenced measurement: The state of the art. Baltimore, Md.: Johns Hopkins University Press, 1980.
  18. Carpenter, T. P., Moser, J. M., & Romberg, T. A. Addition and subtraction: A cognitive perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1982.
  19. Christoffersson, A. Factor analysis of dichotomized variables. Psychometrika, 1975, 40, 5–32.
  20. Cochran, W. G. Sampling techniques (3rd ed.). New York: Wiley, 1977.
  21. Cronbach, L. J., Gleser, G. C., Nanda, H., & Rajaratnam, N. The dependability of behavioral measurements: Theory of generalizability for scores and profiles. New York: Wiley, 1972.
  22. Dayton, C. M., & Macready, G. B. A probabilistic model for validation of behavior hierarchies. Psychometrika, 1976, 41, 189–204.
  23. Flavell, J. H. An analysis of cognitive developmental sequences. Genetic Psychology Monographs, 1972, 86, 279–350.
  24. Fuson, K. C. An analysis of the counting-on solution procedure in addition. In T. P. Carpenter, J. M. Moser, & T. A. Romberg (Eds.), Addition and subtraction: A cognitive perspective. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1982.
  25. Gagné, R. M. The acquisition of knowledge. Psychological Review, 1962, 69, 355–365.
  26. Gagné, R. M. The conditions of learning (2nd ed.). New York: Holt, Rinehart & Winston, 1970.
  27. Gagné, R. M. The conditions of learning (3rd ed.). New York: Holt, Rinehart & Winston, 1977.
  28. Gelman, R., & Gallistel, C. R. The child’s understanding of number. Cambridge, Ma.: Harvard University Press, 1978.
  29. Glaser, R. Instructional technology and the measurement of learning outcomes: Some questions. American Psychologist, 1963, 18, 519–521.
  30. Glaser, R., & Nitko, A. J. Measurement in learning and instruction. In R. L. Thorndike (Ed.), Educational measurement (2nd ed.). Washington, D.C.: American Council on Education, 1971.
  31. Greeno, J. G. A study of problem solving. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 1). Hillsdale, N.J.: Lawrence Erlbaum Associates, 1978.
  32. Hambleton, R. K. Test score validity for standard setting methods. In R. A. Berk (Ed.), Criterion-referenced measurement: The state of the art. Baltimore, Md.: Johns Hopkins University Press, 1980.
  33. Hambleton, R. K. (Ed.). Applications of item response theory. Vancouver, British Columbia: Educational Research Institute of British Columbia, 1983.
  34. Hambleton, R. K., & Eignor, D. R. A practitioner's guide to criterion-referenced test development, validation, and test score usage. Prepared for the National Institute of Education and Department of Health, Education, and Welfare, 1979.
  35. Hambleton, R. K., Martois, J. S., & Williams, C. Detection of biased items with item response models. Paper presented at the meeting of the American Educational Research Association, Montreal, 1983.
  36. Hambleton, R. K., & Novick, M. R. Toward an integration of theory and method for criterion-referenced tests. Journal of Educational Measurement, 1973, 10, 159–170.
  37. Hambleton, R. K., & Rovinelli, R. J. Assessing the dimensionality of a set of test items. Paper presented at the annual meeting of the American Educational Research Association, Montreal, 1983.
  38. Hambleton, R. K., Swaminathan, H., Algina, J., & Coulson, D. G. Criterion-referenced testing and measurement: A review of technical issues and developments. Review of Educational Research, 1978, 48, 1–48.
  39. Henderson, R. W., & Bergan, J. R. The cultural context of childhood. Columbus, Oh.: Charles E. Merrill, 1976.
  40. Horst, D. P. What's bad about grade-equivalent scores. ESEA Title I evaluation and reporting system (Technical Report No. 1). Mountain View, Cal.: RMC Research Corporation, 1976.
  41. Jöreskog, K. G., & Sörbom, D. Advances in factor analysis and structural equation models. Cambridge, Mass.: Abt Books, 1979.
  42. Kratochwill, T. R., Alper, D., & Cancelli, A. A. Nondiscriminatory assessment: Perspectives in psychology and special education. In L. Mann & D. Sabatino (Eds.), Fourth review of special education. New York: Gardner Press, 1980.
  43. Lawler, R. W. The progressive construction of mind. Cognitive Science, 1981, 5, 1–30.
  44. Linn, R. L. Measuring pretest-posttest performance changes. In R. A. Berk (Ed.), Educational evaluation methodology: The state of the art. Baltimore, Md.: Johns Hopkins University Press, 1981.
  45. Lord, F. M. Applications of item response theory to practical testing problems. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1980.
  46. Macready, G. B., & Merwin, J. S. Homogeneity within item forms in domain-referenced testing. Educational and Psychological Measurement, 1973, 33, 351–360.
  47. Muthén, B. Contributions to factor analysis of dichotomous variables. Psychometrika, 1978, 43, 551–600.
  48. Newell, A., & Simon, H. A. Human problem solving. Englewood Cliffs, N.J.: Prentice-Hall, 1972.
  49. Nitko, A. J. Distinguishing the many varieties of criterion-referenced tests. Review of Educational Research, 1980, 50, 461–485.
  50. Nunnally, J. C. Psychometric theory. New York: McGraw-Hill, 1978.
  51. Osterlind, S. J. Test item bias. Beverly Hills, Cal.: Sage Publications, 1983.
  52. Piaget, J. The child's conception of number. New York: Humanities Press, 1952.
  53. Popham, W. J. Criterion-referenced assessment. Englewood Cliffs, N.J.: Prentice-Hall, 1978.
  54. Popham, W. J. Content domain specification/item generation. In R. A. Berk (Ed.), Criterion-referenced measurement: The state of the art. Baltimore, Md.: Johns Hopkins University Press, 1980.
  55. Resnick, L. B., Wang, M. C., & Kaplan, J. Task analysis in curriculum design: A hierarchically sequenced introductory mathematics curriculum. Journal of Applied Behavior Analysis, 1973, 6, 679–710.
  56. Reynolds, C. R. Methods for detecting construct and predictive bias. In R. A. Berk (Ed.), Handbook of methods for detecting test bias. Baltimore, Md.: Johns Hopkins University Press, 1982.
  57. Rogosa, D., Brandt, D., & Zimowski, M. A growth curve approach to the measurement of change. Psychological Bulletin, 1982, 92.
  58. Shepard, L. A. Definitions of bias. In R. A. Berk (Ed.), Handbook of methods for detecting test bias. Baltimore, Md.: Johns Hopkins University Press, 1982.
  59. Siegler, R. S. The rule-assessment approach and education. Contemporary Educational Psychology, 1982, 7, 272–288.
  60. Siegler, R. S. Five generalizations about cognitive development. American Psychologist, 1983, 38, 263–277.
  61. Strenio, J. F., Weisberg, H. I., & Bryk, A. S. Empirical Bayes estimation of individual growth curve parameters and their relationship to covariates. Biometrics, 1983, 39, 71–86.
  62. Subkoviak, M. J. Decision-consistency approaches. In R. A. Berk (Ed.), Criterion-referenced measurement: The state of the art. Baltimore, Md.: Johns Hopkins University Press, 1980.
  63. Thorndike, R. L. Applied psychometrics. Boston, Mass.: Houghton Mifflin, 1982.
  64. Van Lehn, K., & Brown, J. S. Planning nets: A representation for formalizing analogies and semantic models of procedural skills. In R. E. Snow, P. A. Federico, & W. E. Montague (Eds.), Aptitude, learning, and instruction: Cognitive process analyses. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1979.
  65. White, R. T. Learning hierarchies. Review of Educational Research, 1973, 43, 361–375.
  66. Wilkinson, A. C. Partial knowledge and self-correction: Developmental studies of a quantitative concept. Developmental Psychology, 1982, 18, 876–893.
  67. Wright, B. D., & Masters, G. N. Rating scale analysis. Chicago: MESA Press, 1982.
  68. Wright, B. D., & Stone, M. H. Best test design: Rasch measurement. Chicago: MESA Press, 1979.

Copyright information

© Plenum Press, New York 1985

Authors and Affiliations

  • John R. Bergan (1)
  • Clement A. Stone (1)
  • Jason K. Feld (1)

  1. Department of Educational Psychology, University of Arizona, Tucson, USA