
The Data-Driven Direct Consensus (3DC) Procedure: A New Approach to Standard Setting

  • Jos Keuning
  • J. Hendrik Straat
  • Remco C. W. Feskens
Chapter
Part of the Methodology of Educational Measurement and Assessment book series (MEMA)

Abstract

Various procedures for establishing performance standards have been proposed in the literature. Among the best-known examples are the Angoff procedure, the Bookmark procedure, and the Direct Consensus procedure. Each of these procedures has its strengths and weaknesses. Some make it possible to establish performance standards relatively quickly and efficiently, but lack empirical rigor; others do incorporate empirical data, but are time consuming and not very intuitive. In the present study, the strengths of the aforementioned standard setting procedures were combined in a new procedure: the Data-Driven Direct Consensus (3DC) procedure. The 3DC procedure divides the complete test into a number of clusters and, unlike Direct Consensus, uses empirical data and an item response model to relate the scores on the clusters to the scores on the complete test. These relationships between the clusters and the complete test are presented to the subject-area experts on a specially designed assessment form. The experts are asked to use this form to indicate the score that students would be expected to achieve in each cluster if they were exactly on the borderline of proficiency. Because of the design of the assessment form, the judgments can readily be based on both content information and empirical data. This is an important difference from Direct Consensus, in which empirical information is less explicit.
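The chapter itself describes the psychometric details; purely as an illustration of the kind of cluster-to-total mapping sketched above, the minimal Python snippet below uses a simple Rasch model (the study itself relies on OPLM; Verhelst & Glas, 1995) with fabricated item difficulties to translate a judged borderline score on one cluster into the expected score on the complete test. All function names, item parameters, and the cluster composition are invented for the example and are not taken from the chapter.

```python
import numpy as np

def rasch_prob(theta, difficulties):
    """Probability of a correct response under a Rasch model (illustrative, not OPLM)."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulties)))

def expected_scores(theta_grid, difficulties):
    """Expected number-correct score at each ability value in theta_grid."""
    # Rows: ability values; columns: items.
    probs = rasch_prob(theta_grid[:, None], difficulties[None, :])
    return probs.sum(axis=1)

def cluster_cut_to_total_score(cluster_cut, cluster_expected, total_expected):
    """Map a judged borderline score on one cluster to the expected total-test score."""
    # Ability level whose expected cluster score is closest to the judged cut ...
    idx = int(np.argmin(np.abs(cluster_expected - cluster_cut)))
    # ... and the expected total score at that ability level.
    return float(total_expected[idx])

# Fabricated difficulties for a 20-item test; the first 5 items form one cluster.
rng = np.random.default_rng(42)
difficulties = np.sort(rng.normal(0.0, 1.0, size=20))
cluster = difficulties[:5]

theta_grid = np.linspace(-4.0, 4.0, 801)
total_expected = expected_scores(theta_grid, difficulties)
cluster_expected = expected_scores(theta_grid, cluster)

# If a panel agrees that a borderline student would score 3 out of 5 on this cluster,
# the corresponding expected score on the complete 20-item test is:
print(cluster_cut_to_total_score(3.0, cluster_expected, total_expected))
```

In a setting like the one described in the abstract, such expected-score relationships would be precomputed from calibrated item parameters and summarized on the assessment form for each cluster, so that the experts' cluster-level judgments can be tied back to a cut score on the complete test.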

Keywords

Angoff · Bookmark · Direct consensus · Empirical data · Standard setting

References

  1. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  2. Angoff, W. H. (1971). Scales, norms, and equivalent scores. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 508–600). Washington, DC: American Council on Education.
  3. Berk, R. A. (1986). A consumer’s guide to setting performance standards on criterion-referenced tests. Review of Educational Research, 56, 137–172.
  4. Busch, J. C., & Jaeger, R. M. (1990). Influence of type of judge, normative information, and discussion on standards recommended for the National Teacher Examinations. Journal of Educational Measurement, 27, 145–163.
  5. Cizek, G. J. (2001). Conjectures on the rise and call of standard setting: An introduction to context and practice. In G. J. Cizek (Ed.), Setting performance standards: Concepts, methods, and perspectives (pp. 3–17). Mahwah: Lawrence Erlbaum.
  6. Cizek, G. J., & Bunch, M. B. (2007). Standard setting: A guide to establishing and evaluating performance standards on tests. Thousand Oaks: Sage Publications.
  7. Council of Europe. (2001). Common European framework of reference for languages: Learning, teaching, assessment. Cambridge: Cambridge University Press. http://www.coe.int/T/DG4/Linguistic/Default_en.asp. Retrieved Nov 2013.
  8. Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah: Erlbaum.
  9. Downing, S. M., & Haladyna, T. M. (2006). Handbook of test development. Mahwah: Erlbaum.
  10. Feskens, R., Keuning, J., Van Til, A., & Verheyen, R. (2014). Performance standards for the CEFR in Dutch secondary education: An international standard setting study. Arnhem: Cito.
  11. Finn, R. H. (1970). A note on estimating the reliability of categorical data. Educational and Psychological Measurement, 30, 71–76.
  12. Goodwin, L. D. (1999). Relations between observed item difficulty levels and Angoff minimum passing levels for a group of borderline candidates. Applied Measurement in Education, 12(1), 13–28.
  13. Gower, J. C. (1971). A general coefficient of similarity and some of its properties. Biometrics, 27, 857–871.
  14. Hambleton, R. K., & Plake, B. S. (1995). Using an extended Angoff procedure to set standards on complex performance assessments. Applied Measurement in Education, 8, 41–55.
  15. Hambleton, R. K., & Pitoniak, M. (2006). Setting performance standards. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 433–470). Westport: Praeger.
  16. Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Newbury Park: Sage.
  17. Hambleton, R. K., Jaeger, R. M., Plake, B. S., & Mills, C. N. (2000). Handbook for setting standards on performance assessments. Washington, DC: Council of Chief State School Officers.
  18. Impara, J. C., & Plake, B. S. (1997). Standard setting: An alternative approach. Journal of Educational Measurement, 34, 353–366.
  19. Jaeger, R. M. (1978). A proposal for setting a standard on the North Carolina High School Competency Test. Paper presented at the spring meeting of the North Carolina Association for Research in Education, Chapel Hill.
  20. Jaeger, R. (1989). Certification of student competence. In R. Linn (Ed.), Educational measurement (pp. 485–511). Washington, DC: American Council on Education.
  21. Kaftandjieva, F. (2004). Methods for setting cut scores in criterion-referenced achievement tests: A comparative analysis of six recent methods with an application to tests of reading in EFL. Arnhem: Cito.
  22. Kane, M. (1998). Choosing between examinee-centered and test-centered standard-setting methods. Educational Assessment, 5, 129–145.
  23. Karatonis, A., & Sireci, S. (2006). The bookmark standard-setting method: A literature review. Educational Measurement: Issues and Practice, 25(1), 4–12.
  24. Landis, J., & Koch, G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
  25. Lewis, D. M., Mitzel, H. C., & Green, D. R. (1996). Standard setting: A bookmark approach. In D. R. Green (Chair), IRT-based standard setting procedures utilizing behavioural anchoring. Symposium conducted at the Council of Chief State School Officers National Conference on Large-Scale Assessment, Phoenix, AZ.
  26. Lewis, D. M., Mitzel, H. C., Green, D. R., & Patz, R. J. (1999). The bookmark standard setting procedure. Monterey: McGraw-Hill.
  27. Linn, R. L. (2000). Assessments and accountability. Educational Researcher, 29(2), 4–16.
  28. Pitoniak, M. J., Hambleton, R. K., & Sireci, S. G. (2002). Advances in standard setting for professional licensure examinations. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
  29. Reckase, M. D. (2006). A conceptual framework for a psychometric theory for standard setting with examples of its use for evaluating the functioning of two standard setting methods. Educational Measurement: Issues and Practice, 25(2), 4–18.
  30. Sireci, S. G., Hambleton, R. K., Huff, K. L., & Jodoin, M. G. (2000). Setting and validating standards on Microsoft certified professional examinations (Laboratory of Psychometric and Evaluative Research Report No. 395). Amherst: University of Massachusetts, School of Education.
  31. Sireci, S. G., Hambleton, R. K., & Pitoniak, M. J. (2004). Setting passing scores on licensure exams using direct consensus. CLEAR Exam Review, 15(1), 21–25.
  32. Van der Linden, W. J., & Hambleton, R. K. (Eds.). (1997). Handbook of modern item response theory. New York: Springer.
  33. Verhelst, N. D., & Glas, C. A. W. (1995). The generalized one parameter model: OPLM. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Their foundations, recent developments and applications (pp. 215–238). New York: Springer.
  34. Woehr, D. J., Arthur, W., & Fehrmann, M. L. (1991). An empirical comparison of cut-off score methods for content-related and criterion-related validity settings. Educational and Psychological Measurement, 51, 1029–1039.
  35. Zieky, M. J., Perie, M., & Livingston, S. (2008). Cutscores: A manual for setting standards of performance on educational and occupational tests. http://www.amazon.com/Cutscores-Standards-Performance-Educational Occupational/dp/1438250304/

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Jos Keuning (1)
  • J. Hendrik Straat (1)
  • Remco C. W. Feskens (1)

  1. Cito, Psychometric Research Center, Arnhem, The Netherlands
