
Research in Higher Education, Volume 30, Issue 3, pp. 245–260

Generalizability of the differential coursework methodology: Relationships between self-reported coursework and performance on the ACT-COMP exam

  • Gary R. Pike
  • Raymond H. Phillippi

Abstract

If the assessment of student outcomes is to be successful in improving the quality and effectiveness of American higher education, measures of student achievement must be linked to the characteristics of academic programs. The Differential Coursework Patterns Project (DCPP), directed by Dr. James Ratcliff at Iowa State University, appears to offer a method of linking outcomes measures to program data. However, questions must be raised about the generalizability of this method. The results of this study suggest that the differential coursework methodology may be used effectively with at least two different measures of educational outcomes. Moreover, this methodology can be used with coursework data gathered either through transcript analysis or students' self-reports. The results of this study also indicate that the choice of statistical techniques may not be generalizable. The techniques selected should be determined by the nature of the institution, the types of outcomes measures used, and the configuration of the data.
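
The coursework-pattern methodology discussed here (see Ratcliff, 1988a, and Ward, 1963, in the references) groups students by the pattern of courses they completed and then compares outcome scores across the resulting groups. The sketch below is purely illustrative rather than the DCPP procedure itself: the data are synthetic, the course categories and score scale are hypothetical, and scipy's implementation of Ward's hierarchical clustering stands in for whatever software the project actually used.

```python
# Illustrative only: cluster students by coursework patterns (Ward's method)
# and compare mean outcome scores across clusters. All data are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical data: 200 students x 12 course categories (credit hours),
# plus an outcome score per student. Real coursework data would come from
# transcripts or self-reports, and real scores from an exam such as ACT-COMP.
coursework = rng.poisson(lam=3.0, size=(200, 12)).astype(float)
outcome_scores = rng.normal(loc=180, scale=15, size=200)

# Ward's (1963) hierarchical clustering on the coursework patterns.
links = linkage(coursework, method="ward")

# Cut the tree into a fixed number of coursework-pattern clusters.
n_clusters = 5
labels = fcluster(links, t=n_clusters, criterion="maxclust")

# Compare mean outcome scores across the coursework clusters.
for k in range(1, n_clusters + 1):
    members = labels == k
    print(f"Cluster {k}: n = {members.sum():3d}, "
          f"mean outcome = {outcome_scores[members].mean():6.1f}")
```

In practice, the choice of clustering algorithm, the number of clusters retained, and the technique used to relate cluster membership to outcomes would depend on the institution, the outcome measure, and the configuration of the data, as the abstract notes.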

Keywords

Higher Education · Statistical Technique · Education Research · Student Achievement · Educational Outcome


References

  1. Aldenderfer, Mark S., and Blashfield, Roger K. (1984). Cluster Analysis. Quantitative Applications in the Social Sciences, No. 44. Beverly Hills, CA: Sage.
  2. American College Testing Program (1973). Assessing students on the way to college: Technical report for the ACT Assessment Program. Iowa City, IA: American College Testing Program.
  3. American College Testing Program (1987a). College Outcome Measures Program 1987–88. Iowa City, IA: American College Testing Program.
  4. American College Testing Program (1987b). Registering for the ACT Assessment. Iowa City, IA: American College Testing Program.
  5. Astin, Alexander W. (1970). The methodology of research on college impact, part one. Sociology of Education 43(3): 223–254.
  6. Astin, Alexander W., Henson, James W., and Christian, C. E. (1980). The impact of student financial aid programs on student choice. Washington, D.C.: Department of Health, Education, and Welfare. (ERIC Document Reproduction Service No. ED 187 268)
  7. Banta, Trudy W. (1988a). Editor's notes. In Trudy W. Banta (ed.), Implementing Outcomes Assessment: Promise and Perils, pp. 1–4. New Directions for Institutional Research, No. 59. San Francisco: Jossey-Bass.
  8. Banta, Trudy W. (1988b). Promise and perils. In Trudy W. Banta (ed.), Implementing Outcomes Assessment: Promise and Perils, pp. 95–98. New Directions for Institutional Research, No. 59. San Francisco: Jossey-Bass.
  9. Banta, Trudy W., Lambert, E. Warren, Pike, Gary R., Schmidhammer, James L., and Schneider, Janet A. (1987). Estimated student score gain on the ACT COMP exam: Valid tool for institutional assessment? Research in Higher Education 27(3): 195–217.
  10. Edelbrock, Craig (1979). Mixture model tests of hierarchical clustering algorithms: The problem of classifying everybody. Multivariate Behavioral Research 14(3): 367–384.
  11. Educational Testing Service (1987). Guide to the Scholastic Aptitude Test. Princeton, NJ: Educational Testing Service.
  12. ETS College and University Programs (1988). The academic profile: Information booklet. Princeton, NJ: Educational Testing Service.
  13. Ewell, Peter T., and Lisensky, Robert (1988). Assessing institutional effectiveness: Redirecting the self-study process. Boulder, CO: National Center for Higher Education Management Systems.
  14. Forrest, Aubrey (1982). Increasing student competence and persistence: The best case for general education. Iowa City, IA: ACT National Center for the Advancement of Educational Practice.
  15. Forrest, Aubrey, and Steele, Joe M. (1982). Defining and measuring general education knowledge and skills. Iowa City, IA: American College Testing Program.
  16. Halpern, Diane F. (1987). Student outcomes assessment: Introduction and overview. In Diane F. Halpern (ed.), Student Outcomes Assessment: What Institutions Stand to Gain, pp. 5–8. New Directions for Higher Education, No. 59. San Francisco: Jossey-Bass.
  17. Klecka, William R. (1980). Discriminant Analysis. Quantitative Applications in the Social Sciences, No. 19. Beverly Hills, CA: Sage.
  18. Milligan, Glenn W. (1981). A review of Monte Carlo tests of cluster analysis. Multivariate Behavioral Research 16(3): 379–407.
  19. National Governor's Association (1988). Results in education. Washington, D.C.: National Governor's Association.
  20. Nichols, Robert C. (1964). Effects of various college characteristics on student aptitude test scores. Journal of Educational Psychology 35(1): 45–54.
  21. Pike, Gary R. (1984). Television dependency, candidate images, and voting behavior in the 1980 election. Paper presented at the annual meeting of the International Communication Association, San Francisco, May.
  22. Pike, Gary R. (1988a). A comparison of the College Outcome Measures Program (COMP) exam and the ETS Academic Profile. In Trudy W. Banta (ed.), Performance funding report for the University of Tennessee, Knoxville, pp. 64–79. Knoxville, TN: Learning Research Center.
  23. Pike, Gary R. (1988b). Data on selected assessment instruments. In C. Adelman (ed.), Performance and judgement: Essays on principles and practices in the assessment of college student learning, pp. 313–325. Washington, D.C.: U.S. Government Printing Office.
  24. Pike, Gary R., and Banta, Trudy W. (1987). Assessing student educational outcomes: The process strengthens the product. VCCA Journal 2(2): 24–35.
  25. Ratcliff, James L. (1988a). Development of a cluster-analytic model for identifying coursework patterns associated with general learned abilities of college students. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, April.
  26. Ratcliff, James L. (1988b). The Differential Coursework Patterns Project (DCPP) progress report, May 1988. ASHE Newsletter 1(3): 5–8.
  27. Riverside Publishing Company (1988). College BASE: College Basic Academic Subjects Examination. Chicago: Riverside Publishing Company.
  28. Rock, Donald A., Baird, Leonard L., and Linn, Robert L. (1972). Interaction between college effects and students' aptitudes. American Educational Research Journal 9(1): 149–161.
  29. Rock, Donald A., Centra, John A., and Linn, Robert L. (1970). Relationships between college characteristics and student achievement. American Educational Research Journal 7(1): 109–121.
  30. Rossmann, Jack E., and El-Khawas, Elaine (1987). Thinking about assessment: Perspectives for presidents and chief academic officers. Washington, D.C.: American Association for Higher Education.
  31. Study Group on the Conditions of Excellence in American Higher Education (1984). The progress of an agenda: A first report from the Study Group on the Conditions of Excellence in American Higher Education. Washington, D.C.: National Institute of Education. (ERIC Document Reproduction Service No. ED 244 577)
  32. Swinton, Spencer S., and Powers, Donald E. (1982). A study of the effects of special preparation on GRE analytical scores and item types. ETS Research Report No. 82-1. Princeton, NJ: Educational Testing Service.
  33. Thompson, Bruce (1984). Canonical Correlation Analysis: Uses and Interpretation. Quantitative Applications in the Social Sciences, No. 47. Beverly Hills, CA: Sage.
  34. Ward, Joe H., Jr. (1963). Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association 58(301): 236–244.
  35. Wilson, Kenneth M. (1985). The relationship of GRE General Test item-type part scores to undergraduate grades. ETS Research Report No. 84-38. Princeton, NJ: Educational Testing Service.

Copyright information

© Human Sciences Press, Inc. 1989

Authors and Affiliations

  • Gary R. Pike (1)
  • Raymond H. Phillippi (2)
  1. Center for Assessment Research and Development, University of Tennessee, Knoxville
  2. University of Tennessee, Knoxville
