Abstract
If the assessment of student outcomes is to be successful in improving the quality and effectiveness of American higher education, measures of student achievement must be linked to the characteristics of academic programs. The Differential Coursework Patterns Project (DCPP), directed by Dr. James Ratcliff at Iowa State University, appears to offer a method of linking outcomes measures to program data. However, questions must be raised about the generalizability of this method. The results of this study suggest that the differential coursework methodology may be used effectively with at least two different measures of educational outcomes. Moreover, this methodology can be used with coursework data gathered either through transcript analysis or students' self-reports. The results of this study also indicate that the choice of statistical techniques may not be generalizable. The techniques selected should be determined by the nature of the institution, the types of outcomes measures used, and the configuration of the data.
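The cluster-analytic step at the heart of the differential coursework methodology can be illustrated with a minimal sketch using Ward's hierarchical grouping (Ward, 1963, cited below): students' coursework profiles are merged into clusters by the minimum-variance criterion, and mean outcome scores are then compared across clusters. The course categories, credit-hour values, and outcome scores below are hypothetical illustrations, not data from the study.

```python
# A minimal sketch of the cluster-analytic step, assuming hypothetical data:
# each student is a vector of credit hours in a few course categories, with
# an associated outcome score (an illustrative COMP-style total).
# Ward's (1963) criterion merges the pair of clusters whose union yields
# the smallest increase in within-cluster error sum of squares.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def ess(points):
    # Within-cluster error sum of squares around the centroid.
    c = centroid(points)
    return sum(sum((p[i] - c[i]) ** 2 for i in range(len(c))) for p in points)

def ward_cluster(data, k):
    # Start with singleton clusters (lists of row indices) and merge
    # greedily until k clusters remain.
    clusters = [[i] for i in range(len(data))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                merged = clusters[a] + clusters[b]
                increase = (ess([data[i] for i in merged])
                            - ess([data[i] for i in clusters[a]])
                            - ess([data[i] for i in clusters[b]]))
                if best is None or increase < best[0]:
                    best = (increase, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

# Hypothetical coursework profiles: credit hours in
# [math/science, humanities, social science].
coursework = [
    [24, 6, 6], [21, 9, 3], [27, 3, 9],   # science-heavy patterns
    [6, 24, 9], [3, 27, 6], [9, 21, 12],  # humanities-heavy patterns
]
outcomes = [198, 191, 205, 176, 170, 181]  # illustrative outcome totals

clusters = ward_cluster(coursework, k=2)
for members in clusters:
    mean_outcome = sum(outcomes[i] for i in members) / len(members)
    print(sorted(members), round(mean_outcome, 1))
```

In the study itself, the coursework data would come from transcripts or self-reports, and the comparison of outcomes across clusters would use techniques such as discriminant or canonical correlation analysis (Klecka, 1980; Thompson, 1984) rather than a simple comparison of means.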
References
Aldenderfer, Mark S., and Blashfield, Roger K. (1984). Cluster Analysis. Quantitative Applications in the Social Sciences, No. 44. Beverly Hills, CA: Sage.
American College Testing Program (1973). Assessing students on the way to college: Technical report for the ACT Assessment Program. Iowa City, IA: American College Testing Program.
American College Testing Program (1987a). College Outcome Measures Program 1987–88. Iowa City, IA: American College Testing Program.
American College Testing Program (1987b). Registering for the ACT Assessment. Iowa City, IA: American College Testing Program.
Astin, Alexander W. (1970). The methodology of research on college impact, part one. Sociology of Education 43(3): 223–254.
Astin, Alexander W., Henson, James W., and Christian, C. E. (1980). The impact of student financial aid programs on student choice. Washington, D.C.: Department of Health, Education, and Welfare. (ERIC Document Reproduction Service No. ED 187 268)
Banta, Trudy W. (1988a). Editor's notes. In Trudy W. Banta (ed.), Implementing Outcomes Assessment: Promise and Perils, pp. 1–4. New Directions for Institutional Research, No. 59. San Francisco: Jossey-Bass.
Banta, Trudy W. (1988b). Promise and perils. In Trudy W. Banta (ed.), Implementing Outcomes Assessment: Promise and Perils, pp. 95–98. New Directions for Institutional Research, No. 59. San Francisco: Jossey-Bass.
Banta, Trudy W., Lambert, E. Warren, Pike, Gary R., Schmidhammer, James L., and Schneider, Janet A. (1987). Estimated student score gain on the ACT COMP exam: Valid tool for institutional assessment? Research in Higher Education 27(3): 195–217.
Edelbrock, Craig (1979). Mixture model tests of hierarchical clustering algorithms: The problem of classifying everybody. Multivariate Behavioral Research 14(3): 367–384.
Educational Testing Service (1987). Guide to the Scholastic Aptitude Test. Princeton, NJ: Educational Testing Service.
ETS College and University Programs (1988). The academic profile: Information booklet. Princeton, NJ: Educational Testing Service.
Ewell, Peter T., and Lisensky, Robert (1988). Assessing institutional effectiveness: Redirecting the self-study process. Boulder, CO: National Center for Higher Education Management Systems.
Forrest, Aubrey (1982). Increasing student competence and persistence: The best case for general education. Iowa City, IA: ACT National Center for the Advancement of Educational Practice.
Forrest, Aubrey, and Steele, Joe M. (1982). Defining and measuring general education knowledge and skills. Iowa City, IA: American College Testing Program.
Halpern, Diane F. (1987). Student outcomes assessment: Introduction and overview. In Diane F. Halpern (ed.), Student Outcomes Assessment: What Institutions Stand to Gain, pp. 5–8. New Directions for Higher Education, No. 59. San Francisco: Jossey-Bass.
Klecka, William R. (1980). Discriminant Analysis. Quantitative Applications in the Social Sciences, No. 19. Beverly Hills, CA: Sage.
Milligan, Glenn W. (1981). A review of Monte Carlo tests of cluster analysis. Multivariate Behavioral Research 16(3): 379–407.
National Governor's Association (1988). Results in education. Washington, D.C.: National Governor's Association.
Nichols, Robert C. (1964). Effects of various college characteristics on student aptitude test scores. Journal of Educational Psychology 35(1): 45–54.
Pike, Gary R. (1984). Television dependency, candidate images, and voting behavior in the 1980 election. Paper presented at the annual meeting of the International Communication Association, San Francisco, May.
Pike, Gary R. (1988a). A comparison of the College Outcome Measures Program (COMP) exam and the ETS Academic Profile. In Trudy W. Banta (ed.), Performance funding report for the University of Tennessee, Knoxville, pp. 64–79. Knoxville, TN: Learning Research Center.
Pike, Gary R. (1988b). Data on selected assessment instruments. In C. Adelman (ed.), Performance and judgement: Essays on principles and practices in the assessment of college student learning, pp. 313–325. Washington, D.C.: U.S. Government Printing Office.
Pike, Gary R., and Banta, Trudy W. (1987). Assessing student educational outcomes: The process strengthens the product. VCCA Journal 2(2): 24–35.
Ratcliff, James L. (1988a). Development of a cluster-analytic model for identifying coursework patterns associated with general learned abilities of college students. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, April.
Ratcliff, James L. (1988b). The Differential Coursework Patterns Project (DCPP) progress report, May 1988. ASHE Newsletter 1(3): 5–8.
Riverside Publishing Company (1988). College BASE: College Basic Academic Subjects Examination. Chicago: Riverside Publishing Company.
Rock, Donald A., Baird, Leonard L., and Linn, Robert L. (1972). Interaction between college effects and students' aptitudes. American Educational Research Journal 9(1): 149–161.
Rock, Donald A., Centra, John A., and Linn, Robert L. (1970). Relationships between college characteristics and student achievement. American Educational Research Journal 7(1): 109–121.
Rossmann, Jack E., and El-Khawas, Elaine (1987). Thinking about assessment: Perspectives for presidents and chief academic officers. Washington, D.C.: American Association for Higher Education.
Study Group on the Conditions of Excellence in American Higher Education (1984). The progress of an agenda: A first report from the Study Group on the Conditions of Excellence in American Higher Education. Washington, D.C.: National Institute of Education. (ERIC Document Reproduction Service No. ED 244 577)
Swinton, Spencer S., and Powers, Donald E. (1982). A study of the effects of special preparation on GRE analytical scores and item types. ETS Research Report No. 82-1. Princeton, NJ: Educational Testing Service.
Thompson, Bruce (1984). Canonical Correlation Analysis: Uses and Interpretation. Quantitative Applications in the Social Sciences, No. 47. Beverly Hills, CA: Sage.
Ward, Joe H., Jr. (1963). Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association 58(301): 236–244.
Wilson, Kenneth M. (1985). The relationship of GRE General Test item-type part scores to undergraduate grades. ETS Research Report No. 84-38. Princeton, NJ: Educational Testing Service.
Cite this article
Pike, G.R., Phillippi, R.H. Generalizability of the Differential Coursework methodology: Relationships between self-reported Coursework and performance on the ACT-COMP exam. Res High Educ 30, 245–260 (1989). https://doi.org/10.1007/BF00992603