Abstract
Various procedures for establishing performance standards have been proposed in the literature. Among the best-known examples are the Angoff procedure, the Bookmark procedure, and the Direct Consensus procedure. Each of these procedures has strengths and weaknesses. Some make it possible to establish performance standards relatively efficiently and quickly but lack empirical rigor; others do incorporate empirical data but are time consuming and not very intuitive. In the present study, the strengths of the aforementioned standard-setting procedures were brought together in a new one: the Data-Driven Direct Consensus (3DC) procedure. The 3DC procedure divides the complete test into a number of clusters and, unlike Direct Consensus, uses empirical data and an item response model to relate the scores on the clusters to the scores on the complete test. These relationships are presented to the subject-area experts on a specially designed assessment form. The experts are asked to use the form to indicate the score that students would be expected to achieve in each cluster if they were exactly on the borderline of proficiency. Because of the design of the assessment form, the judgment can readily be based on both content information and empirical data. This is an important difference from Direct Consensus, in which empirical information is less explicit.
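The cluster-to-total score mapping described above can be sketched in code. The following is a minimal illustration under a Rasch model with invented item difficulties, not the chapter's actual implementation (which uses the OPLM model of Verhelst and Glas): since the expected score on a cluster and on the complete test are both monotone functions of ability, a judged borderline score on a cluster can be translated into an expected score on the complete test.

```python
import numpy as np

# Hypothetical sketch of the 3DC score-mapping step under a Rasch model.
# All item difficulties below are invented for illustration.

def rasch_prob(theta, difficulties):
    """P(correct) for each item at ability theta under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulties)))

def expected_score(theta, difficulties):
    """Expected number-correct score at ability theta."""
    return rasch_prob(theta, difficulties).sum()

def cluster_cut_to_total_cut(borderline_cluster_score, cluster_difficulties,
                             all_difficulties, grid=np.linspace(-4, 4, 801)):
    """Find the ability whose expected cluster score matches the judged
    borderline score, then return the expected complete-test score there."""
    cluster_curve = np.array([expected_score(t, cluster_difficulties)
                              for t in grid])
    theta_star = grid[np.argmin(np.abs(cluster_curve
                                       - borderline_cluster_score))]
    return expected_score(theta_star, all_difficulties)

# Invented example: a 30-item test, one cluster consisting of 6 of its items.
rng = np.random.default_rng(0)
all_b = rng.normal(0.0, 1.0, size=30)      # difficulties, complete test
cluster_b = all_b[:6]                      # difficulties, one cluster

# A judged borderline score of 4 out of 6 on this cluster maps to an
# expected number-correct score on the complete 30-item test.
total_cut = cluster_cut_to_total_cut(4.0, cluster_b, all_b)
```

In the actual procedure, such curves would be estimated from calibrated item parameters and tabulated on the assessment form, so that experts can see the empirical relationship between each cluster and the complete test while making their content-based judgment.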
References
American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association.
Angoff, W.H. (1971). Scales, norms, and equivalent scores. In R.L. Thorndike (Ed.), Educational measurement (2nd ed.), pp. 508–600. Washington, DC: American Council on Education.
Berk, R. A. (1986). A consumer’s guide to setting performance standards on criterion referenced tests. Review of Educational Research, 56, 137–172.
Busch, J. C., & Jaeger, R. M. (1990). Influence of type of judge, normative information, and discussion on standards recommended for the National Teacher Examinations. Journal of Educational Measurement, 27, 145–163.
Cizek, G. J. (2001). Conjectures on the rise and call of standard setting: An introduction to context and practice. In G. J. Cizek (Ed.), Setting performance standards: Concepts, methods, and perspectives (pp. 3–17). Mahwah: Lawrence Erlbaum.
Cizek, G. J., & Bunch, M. B. (2007). Standard setting: A guide to establishing and evaluating performance standards on tests. Thousand Oaks: Sage Publications Ltd.
Council of Europe. (2001). Common European framework of reference for languages: Learning, teaching, assessment. Cambridge: Cambridge University Press. Retrieved Nov 2013 from http://www.coe.int/T/DG4/Linguistic/Default_en.asp.
Downing, S. M., & Haladyna, T. M. (2006). Handbook of test development. Mahwah: Erlbaum.
Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah: Erlbaum.
Feskens, R., Keuning, J., Van Til, A., & Verheyen, R. (2014). Performance standards for the CEFR in Dutch secondary education: An international standard setting study. Arnhem: Cito.
Finn, R. H. (1970). A note on estimating the reliability of categorical data. Educational and Psychological Measurement, 30, 71–76.
Goodwin, L. D. (1999). Relations between observed item difficulty levels and Angoff minimum passing levels for a group of borderline candidates. Applied Measurement in Education, 12(1), 13–28.
Gower, J. C. (1971). A general coefficient of similarity and some of its properties. Biometrics, 27, 857–871.
Hambleton, R. K., & Plake, B. S. (1995). Using an extended Angoff procedure to set standards on complex performance assessments. Applied Measurement in Education, 8, 41–55.
Hambleton, R. K., & Pitoniak, M. (2006). Setting performance standards. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 433–470). Westport: Praeger.
Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Newbury Park: Sage.
Hambleton, R. K., Jaeger, R. M., Plake, B. S., & Mills, C. N. (2000). Handbook for setting standards on performance assessments. Washington, DC: Council of Chief State School Officers.
Impara, J. C., & Plake, B. S. (1997). Standard setting: An alternative approach. Journal of Educational Measurement, 34, 353–366.
Jaeger, R. M. (1978). A proposal for setting a standard on the North Carolina High School competency test. Paper presented at the 1978 spring meeting of the North Carolina Association for Research in Education, Chapel Hill.
Jaeger, R. (1989). Certification of student competence. In R. Linn (Ed.), Educational measurement (pp. 485–511). Washington, DC: American Council on Education.
Kaftandjieva, F. (2004). Methods for setting cut scores in criterion-referenced achievement tests. A comparative analysis of six recent methods with an application to tests of reading in EFL. Arnhem: Cito.
Kane, M. (1998). Choosing between examinee-centered and test-centered standard-setting methods. Educational Assessment, 5, 129–145.
Karatonis, A., & Sireci, S. (2006). The bookmark standard-setting method: A literature review. Educational Measurement: Issues and Practice, 25(1), 4–12.
Landis, J., & Koch, G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
Lewis, D. M., Mitzel, H. C., & Green, D. R. (1996). Standard setting: A bookmark approach. In D. R. Green (Chair), IRT-based standard setting procedures utilizing behavioural anchoring. Symposium conducted at the Council of Chief State School Officers National Conference on Large-scale Assessment, Phoenix, AZ.
Lewis, D. M., Mitzel, H. C., Green, D. R., & Patz, R. J. (1999). The bookmark standard setting procedure. Monterey: McGraw-Hill.
Linn, R. L. (2000). Assessments and accountability. Educational Researcher, 29(2), 4–16.
Pitoniak, M. J., Hambleton, R. K., & Sireci, S. G. (2002). Advances in standard setting for professional licensure examinations. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA, April 2002.
Reckase, M. D. (2006). A conceptual framework for a psychometric theory for standard setting with examples of its use for evaluating the functioning of two standard setting methods. Educational Measurement: Issues and Practice, 25(2), 4–18.
Sireci, S. G., Hambleton, R. K., Huff, K. L., & Jodoin, M. G. (2000). Setting and validating standards on Microsoft certified professional examinations, Laboratory of Psychometric and Evaluative Research Report No. 395. Amherst: University of Massachusetts, School of Education.
Sireci, S. G., Hambleton, R. K., & Pitoniak, M. J. (2004). Setting passing scores on licensure exams using direct consensus. CLEAR Exam Review, 15(1), 21–25.
Van der Linden, W. J., & Hambleton, R. K. (Eds.). (1997). Handbook of modern item response theory. New York: Springer.
Verhelst, N. D., & Glas, C. A. W. (1995). The generalized one parameter model: OPLM. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Their foundations, recent developments and applications (pp. 215–238). New York: Springer.
Woehr, D. J., Arthur, W., & Fehrmann, M. L. (1991). An empirical comparison of cut-off score methods for content-related and criterion-related validity settings. Educational and Psychological Measurement, 51, 1029–1039.
Zieky, M. J., Perie, M., & Livingston, S. (2008). Cutscores: A manual for setting standards of performance on educational and occupational tests. Princeton, NJ: Educational Testing Service.
© 2017 Springer International Publishing AG
Keuning, J., Straat, J. H., & Feskens, R. C. W. (2017). The Data-Driven Direct Consensus (3DC) procedure: A new approach to standard setting. In S. Blömeke & J.-E. Gustafsson (Eds.), Standard setting in education (Methodology of Educational Measurement and Assessment). Cham: Springer. https://doi.org/10.1007/978-3-319-50856-6_15
Print ISBN: 978-3-319-50855-9
Online ISBN: 978-3-319-50856-6