Behaviormetrika - Call for Papers on "Methodological Innovations in Cognitive Diagnosis Arising from Research Challenges in Practice"

Guest Editors

Chia-Yi Chiu, University of Minnesota, Twin-Cities (cchiu@umn.edu)
Hans Friedrich Köhn, University of Illinois, Urbana-Champaign (hkoehn@illinois.edu)

Motivation and Aim of the Special Issue

Cognitive Diagnosis (CD) models, also known as Diagnostic Classification Models (DCMs), have become a highly active research area in psychometrics and, specifically, in educational measurement. CD provides a fine-grained assessment of an examinee's ability in a given knowledge domain, permitting targeted feedback to students and instructors for improving teaching and learning. Many experts in the field predict that CD is the future of educational measurement (cf. Huff & Goodman, 2007; Stout, 2002).

Publications concerning CD can be roughly divided into papers with a strong applied, data-analytic inclination and papers that are highly theoretical, delving deeply into the mathematical-statistical foundations of CD. This special issue of Behaviormetrika seeks to cover the middle ground; that is, we are soliciting research manuscripts that present a theoretically well-founded solution to a challenging data-analytic problem encountered in practice. The following examples illustrate what we have in mind.

For instance, consider the development of general DCMs. A plethora of models for CD has been proposed in the literature (cf. Fu & Li, 2007; Rupp & Templin, 2008; Sessoms & Henson, 2018). These models differ in how the relation between attribute mastery and the probability of a correct item response is defined. The advent of general DCMs (cf. von Davier, 2008; Henson, Templin, & Willse, 2009; de la Torre, 2011) provided a unifying conceptual and mathematical framework that established a taxonomy for DCMs, thereby making the field more transparent for researchers and practitioners.
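
To make the unification concrete, the log-linear parameterization of Henson, Templin, and Willse (2009) expresses the response probability for item j as a function of the attribute profile. For an item measuring two attributes (notation simplified here for illustration),

    \operatorname{logit} P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
      = \lambda_{j,0}
      + \lambda_{j,1,(1)} \alpha_{i1} q_{j1}
      + \lambda_{j,1,(2)} \alpha_{i2} q_{j2}
      + \lambda_{j,2,(1,2)} \alpha_{i1} \alpha_{i2} q_{j1} q_{j2},

where \alpha_{ik} indicates examinee i's mastery of attribute k and q_{jk} is the corresponding Q-matrix entry. Constraining the intercept, main-effect, and interaction terms recovers specific DCMs, such as the DINA model, as special cases.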

Nonparametric approaches to CD (Chiu & Douglas, 2013; Chiu, Sun, & Bian, 2018; Chiu & Köhn, 2019; Köhn, Chiu, & Brusco, 2015), that is, methods that do not rely on fitting a parametric statistical model, may serve as a second example. The prevailing methods for fitting DCMs use either marginal maximum likelihood estimation relying on the Expectation-Maximization algorithm (MMLE-EM) or Markov chain Monte Carlo (MCMC) techniques. These methods work well for large-scale assessments, where data from hundreds or thousands of examinees are available. In educational micro-environments, however, say, when monitoring instruction and learning at the classroom level, CD-based methods are most useful and needed, yet sample sizes are simply too small for MMLE-EM and MCMC to guarantee reliable estimates of item parameters and examinees' proficiency classes; nonparametric methods fill this gap.
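
As a concrete illustration of the nonparametric idea, the following minimal sketch (hypothetical Python code, not the cited authors' implementation) classifies examinees by the Hamming distance between their observed responses and the ideal response patterns implied by each attribute profile, assuming the conjunctive (DINA-type) ideal response considered in Chiu and Douglas (2013):

    import numpy as np
    from itertools import product

    def npc_classify(X, Q):
        """Nonparametric classification: assign each examinee the attribute
        profile whose ideal response pattern is closest in Hamming distance.
        X: (N, J) binary response matrix; Q: (J, K) binary Q-matrix."""
        X, Q = np.asarray(X), np.asarray(Q)
        J, K = Q.shape
        profiles = np.array(list(product([0, 1], repeat=K)))   # all 2^K profiles
        # Conjunctive (DINA-type) ideal response: 1 iff all attributes
        # required by an item are mastered.
        eta = np.all(profiles[:, None, :] >= Q[None, :, :], axis=2).astype(int)
        # Hamming distance of each examinee to each ideal response pattern.
        d = np.abs(X[:, None, :] - eta[None, :, :]).sum(axis=2)  # (N, 2^K)
        return profiles[d.argmin(axis=1)]                        # estimated profiles

    # Example: J = 3 items, K = 2 attributes
    # Q = [[1, 0], [0, 1], [1, 1]]
    # npc_classify([[1, 0, 0], [1, 1, 1]], Q)  ->  [[1, 0], [1, 1]]

Because this procedure involves no item parameters, it remains applicable when the sample is far too small for MMLE-EM or MCMC; in practice, ties in the minimal distance require a tie-breaking rule.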

As a third example, consider the work on model and parameter identifiability. This line of research originally arose from the observation that performing CD without an identifiability guarantee directly affects parameter estimation, with grave consequences in practice such as the misclassification of examinees. Several highly technical papers (e.g., Chen, Liu, Xu, & Ying, 2015; Gu & Xu, 2021; Köhn & Chiu, 2017; Xu & Shang, 2018) established identifiability conditions for DCMs that permit practitioners to gauge their assessments and avoid the pitfalls of non-identifiability.
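
For illustration, one ingredient that practitioners can check directly is the completeness of the Q-matrix (Köhn & Chiu, 2017). Under the DINA model, a Q-matrix is complete, meaning that all 2^K proficiency classes yield distinct ideal response patterns, if and only if it contains every single-attribute row. A minimal sketch of such a check (hypothetical code, assuming a binary Q-matrix given as rows of item-attribute entries):

    def is_complete_dina(Q):
        """Check completeness of a binary Q-matrix under the DINA model:
        Q is complete iff it contains the K x K identity as a row submatrix,
        i.e., for every attribute there is an item requiring only that
        attribute."""
        K = len(Q[0])
        rows = {tuple(r) for r in Q}
        return all(tuple(1 if i == k else 0 for i in range(K)) in rows
                   for k in range(K))

    # is_complete_dina([[1, 0], [0, 1], [1, 1]])  ->  True
    # is_complete_dina([[1, 1], [1, 0]])          ->  False

Completeness is necessary for the proficiency classes to be distinguishable at all; the papers cited above establish sharper, model-specific conditions for full identifiability of the item parameters.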

In summary, these three examples illustrate methodological work that arose from the necessity to establish a theoretical justification for solving a technical issue encountered with CD in practice. Of course, this list is only a sample and by no means complete; many other instances would also qualify.

Topics of interest for the special issue include, but are not limited to:

  • High-Dimensional CD
  • Multilevel / Longitudinal CD
  • Q-Matrix Theory
  • Cluster Analysis and CD
  • Networks and CD
  • CD in Non-Educational Settings
  • Process Data and CD
  • Implementations of CD Technology


Manuscripts focusing primarily on simulations or data analysis are less likely to be considered for publication in this special issue. We therefore invite manuscript proposals of 1000 words for a preliminary screening, upon which the final selection will be made. We look forward to your proposals!

Deadlines

  • Proposal Submission Deadline: January 31, 2024
  • Notification of Proposal Acceptance: February 29, 2024
  • Manuscript Submission Deadline: August 31, 2024 
  • First Round of Reviews: October 31, 2024
  • Revisions due: December 31, 2024
  • Second Round of Reviews: February 28, 2025 
  • Revisions due: April 30, 2025
  • Final Editorial Decision: May 31, 2025
  • Publication of the Special Issue: July 2025


Please prepare your paper following Behaviormetrika's submission guidelines. All papers must be submitted to the journal's submission system.

While submitting, please select "Yes" for the question “Does this manuscript belong to a special feature?” and then select the special feature “S.I. : Methodological Innovations in Cognitive Diagnosis Arising from Research Challenges in Practice”.

In addition, please indicate in your cover letter that your submission is intended for the "Methodological Innovations in Cognitive Diagnosis Arising from Research Challenges in Practice" special issue.

Contact Details

Queries regarding the special issue can be directed to Chia-Yi Chiu (cchiu@umn.edu) or Hans Friedrich Köhn (hkoehn@illinois.edu).

References

  • Chen, Y., Liu, J., Xu, G., & Ying, Z. (2015). Statistical analysis of Q-matrix based diagnostic classification models. Journal of the American Statistical Association, 110, 850-866.
  • Chiu, C.-Y., & Douglas, J. A. (2013). A nonparametric approach to cognitive diagnosis by proximity to ideal response profiles. Journal of Classification, 30, 225-250.
  • Chiu, C.-Y., & Köhn, H.-F. (2019). Consistency theory for the general nonparametric classification method. Psychometrika, 84, 830-845.
  • Chiu, C.-Y., Sun, Y., & Bian, Y. (2018). Cognitive diagnosis for small educational programs: The general nonparametric classification method. Psychometrika, 83, 355-375.
  • de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76, 179-199.
  • Fu, J., & Li, Y. (2007, April). An integrative review of cognitively diagnostic psychometric models. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.
  • Gu, Y., & Xu, G. (2021). Sufficient and necessary conditions for the identifiability of the Q-matrix. Statistica Sinica, 31, 449-472.
  • Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74, 191-210.
  • Huff, K., & Goodman, D. P. (2007). The demand for cognitive diagnostic assessment. In J. P. Leighton & M. J. Gierl (Eds.), Cognitive diagnostic assessment for education: Theory and applications (pp. 19-60). Cambridge University Press.
  • Köhn, H. F., & Chiu, C.-Y. (2017). A procedure for assessing the completeness of the Q-matrices of cognitively diagnostic tests. Psychometrika, 82, 112-132.
  • Köhn, H. F., Chiu, C.-Y., & Brusco, M. J. (2015). Heuristic cognitive diagnosis when the Q-matrix is unknown. British Journal of Mathematical and Statistical Psychology, 68, 268-291.
  • Rupp, A. A., & Templin, J. (2008). The effects of Q-matrix misspecification on parameter estimates and classification accuracy in the DINA model. Educational and Psychological Measurement, 68, 78-96.
  • Sessoms, J., & Henson, R. (2018). Applications of diagnostic classification models: A literature review and critical commentary. Measurement: Interdisciplinary Research and Perspectives, 16, 1-17.
  • Stout, W. (2002). Psychometrics: From practice to theory and back. Psychometrika, 67, 485-518.
  • von Davier, M. (2008). A general diagnostic model applied to language testing data. British Journal of Mathematical and Statistical Psychology, 61, 287-301.
  • Xu, G., & Shang, Z. (2018). Identifying latent structures in restricted latent class models. Journal of the American Statistical Association, 113, 1284-1295.
