
Extended Asymptotic Identifiability of Nonparametric Item Response Models

  • Theory & Methods
  • Published in Psychometrika

Abstract

Nonparametric item response models provide a flexible framework in psychological and educational measurement. Douglas (Psychometrika 66(4):531–540, 2001) established asymptotic identifiability for a class of models with nonparametric response functions for long assessments. Nevertheless, the model class examined in Douglas (2001) excludes several popular parametric item response models, which can hinder applications in which nonparametric and parametric models are compared, such as evaluating model goodness-of-fit. To address this issue, we consider an extended nonparametric model class that encompasses most parametric models and establish its asymptotic identifiability. The results bridge parametric and nonparametric item response models and provide a solid theoretical foundation for applying nonparametric item response models to assessments with many items.

Fig. 1
Fig. 2


References

  • Chen, Y., Li, X., Liu, J., & Ying, Z. (2021). Item response theory—a statistical framework for educational and psychological measurement. arXiv preprint arXiv:2108.08604.

  • Douglas, J. (1997). Joint consistency of nonparametric item characteristic curve and ability estimation. Psychometrika, 62, 7–28.

  • Douglas, J. A. (2001). Asymptotic identifiability of nonparametric item response models. Psychometrika, 66(4), 531–540.

  • Douglas, J., & Cohen, A. (2001). Nonparametric item response function estimation for assessing parametric model fit. Applied Psychological Measurement, 25(3), 234–243.

  • Falk, C. F., & Cai, L. (2016). Maximum marginal likelihood estimation of a monotonic polynomial generalized partial credit model with applications to multiple group analysis. Psychometrika, 81, 434–460.

  • Johnson, M. S. (2007). Modeling dichotomous item responses with free-knot splines. Computational Statistics & Data Analysis, 51(9), 4178–4192.

  • Lee, Y.-S., Wollack, J. A., & Douglas, J. (2009). On the use of nonparametric item characteristic curve estimation techniques for checking parametric model fit. Educational and Psychological Measurement, 69(2), 181–197.

  • Mikhailov, V. G. (1994). On a refinement of the central limit theorem for sums of independent random indicators. Theory of Probability & Its Applications, 38(3), 479–489.

  • Peress, M. (2012). Identification of a semiparametric item response model. Psychometrika, 77, 223–243.

  • Ramsay, J. O. (1991). Kernel smoothing approaches to nonparametric item characteristic curve estimation. Psychometrika, 56(4), 611–630.

  • Ramsay, J. O., & Abrahamowicz, M. (1989). Binomial regression with monotone splines: A psychometric application. Journal of the American Statistical Association, 84(408), 906–915.

  • Ramsay, J. O., & Winsberg, S. (1991). Maximum marginal likelihood estimation for semiparametric item analysis. Psychometrika, 56(3), 365–379.

  • Sijtsma, K. (1998). Methodology review: Nonparametric IRT approaches to the analysis of dichotomous item scores. Applied Psychological Measurement, 22(1), 3–31.

  • Sijtsma, K., & Molenaar, I. W. (2002). Introduction to nonparametric item response theory. SAGE Publications, Inc.

  • Van der Linden, W. J. (2018). Handbook of item response theory. CRC Press.

  • Winsberg, S., Thissen, D., & Wainer, H. (1984). Fitting item characteristic curves with spline functions. ETS Research Report Series, 1984(2), i–14.


Acknowledgements

The author would like to thank Editor-in-Chief Dr. Sandip Sinharay, an Associate Editor, and a referee for their valuable comments and suggestions.

Author information

Correspondence to Yinqiu He.


Appendix

We present all the lemmas in Section A and provide their proofs in Section B. The proofs of propositions are provided in Section C.

1.1 A. Lemmas

Lemma 1

Consider an integer k such that \(\theta _k\in (\alpha ,\beta )\). There exist constants \(\tilde{C}_{\alpha \beta }>0\) and \(N_{\alpha \beta }\) independent of (n, i) such that when \(n \geqslant N_{\alpha \beta }\),

$$\begin{aligned} P\left( \bar{Y}_{n,-i}=\frac{k}{n-1}\right) \geqslant \tilde{C}_{\alpha \beta }n^{-1}. \end{aligned}$$

Lemma 2

Under the conditions of Theorem 2, there exist constants \(\tilde{C}_{\alpha \beta ,1}\) and \(\tilde{C}_{\alpha \beta ,2}\) independent of (n, i) such that

$$\begin{aligned} \left| 1-P\left( \Theta \in I_{\delta }\mid \mathcal {E}_{n,k} \right) \right| \leqslant \frac{ n\delta \exp (-n\tilde{C}_{\alpha \beta ,1} )}{\tilde{C}_{\alpha \beta ,2}}. \end{aligned}$$

Lemma 3

(Hoeffding's inequality for bounded variables) For any (n, i) and \(m>0\),

$$\begin{aligned} P(|\bar{Y}_{n,-i}-\bar{P}_{n,-i}(\theta )|> m \mid \Theta =\theta )\leqslant 2\exp [- 2(n-1)m^2]. \end{aligned}$$
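As a numerical sanity check outside the proof, the bound in Lemma 3 can be verified by Monte Carlo simulation. The item response probabilities below are arbitrary illustrative values, not the model of the paper:

```python
import math
import random

random.seed(0)

n, m = 50, 0.15          # number of items and the deviation threshold m
theta = 0.4              # conditioning value of the latent trait
# illustrative probabilities P_{n,j}(theta), j != i (an assumption for this demo)
probs = [0.2 + 0.6 * theta * (1 + 0.3 * math.sin(j)) for j in range(n - 1)]
p_bar = sum(probs) / (n - 1)          # \bar{P}_{n,-i}(theta)

trials = 20000
exceed = 0
for _ in range(trials):
    y_bar = sum(1 for p in probs if random.random() < p) / (n - 1)
    if abs(y_bar - p_bar) > m:
        exceed += 1

empirical = exceed / trials
hoeffding = 2 * math.exp(-2 * (n - 1) * m * m)   # right-hand side of Lemma 3
print(empirical, hoeffding)
assert empirical <= hoeffding        # bound holds (with large Monte Carlo margin)
```

The empirical exceedance probability falls well below the Hoeffding bound, as expected for a sum of independent bounded indicators.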

1.2 B. Proofs of Lemmas

1.2.1 B1. Proof of Lemma 1

Let \(\mu _{\theta }=\sum _{j\ne i} P_{n,j}(\theta )\) and \(\sigma ^2_{\theta }=\sum _{j\ne i} P_{n,j}(\theta )(1-P_{n,j}(\theta ))\), which are the mean and variance of \((n-1)\bar{Y}_{n,-i}\) conditional on \(\Theta =\theta \), respectively. By a bound on the normal approximation to the distribution of a sum of independent Bernoulli variables (Mikhailov, 1994), we have

$$\begin{aligned} \left| P\left[ (n-1)\bar{Y}_{n,-i}=k \mid \Theta =\theta \right] -\frac{1}{\sigma _\theta \sqrt{2 \pi }} e^{-\frac{\left( k-\mu _\theta +1 / 2\right) ^2}{2 \sigma _\theta ^2}}\right| \le \frac{c}{\sigma _\theta ^2} \end{aligned}$$
(12)

for some universal constant c.
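To illustrate (12) in the special case of i.i.d. items, where \((n-1)\bar{Y}_{n,-i}\) is Binomial, the script below compares the exact pmf with the shifted normal density. Taking \(c=1\) is an assumption for illustration only, since Mikhailov (1994) leaves the universal constant unspecified:

```python
import math

def binom_pmf(n, k, p):
    # exact Binomial(n, p) pmf computed via log-gamma for numerical stability
    logc = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.exp(logc + k * math.log(p) + (n - k) * math.log(1 - p))

n, p = 400, 0.3
mu, sig2 = n * p, n * p * (1 - p)
sig = math.sqrt(sig2)

max_err = 0.0
for k in range(n + 1):
    # shifted normal density as in display (12)
    approx = math.exp(-((k - mu + 0.5) ** 2) / (2 * sig2)) / (sig * math.sqrt(2 * math.pi))
    max_err = max(max_err, abs(binom_pmf(n, k, p) - approx))

print(max_err, 1.0 / sig2)
assert max_err <= 1.0 / sig2   # error is O(sigma^{-2}), consistent with (12) for c = 1
```

The maximum pointwise error is indeed an order of magnitude below \(\sigma ^{-2}\) here, consistent with the stated rate.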

Define the intervals \(I_{n,1/2}=(\theta _k-n^{-1/2}, \theta _k+n^{-1/2})\) and \(\tilde{I}_{n,1/2}=I_{n,1/2}\cap (\alpha /2,(1+\beta )/2)\). For \(\theta \in \tilde{I}_{n,1/2}\) and \(\theta _k \in (\alpha ,\beta )\),

$$\begin{aligned} \frac{|k-\mu _{\theta }|}{n-1}=|\bar{P}_{n,-i}(\theta _k)-\bar{P}_{n,-i}(\theta )|\leqslant {M}_{\alpha \beta ,2} |\theta _k-\theta | < {M}_{\alpha \beta ,2}n^{-1/2}, \end{aligned}$$
(13)

where in the second inequality, \({M}_{\alpha \beta ,2}\) is a constant independent of (n, i) by Condition 3. Moreover, by Condition 4, there exist positive constants \(L_{\alpha \beta } \) and \(U_{\alpha \beta } \) independent of (n, i) such that

$$\begin{aligned} L_{\alpha \beta }<\sigma ^2_{\theta }/(n-1) < U_{\alpha \beta }. \end{aligned}$$
(14)

Therefore, by (12), (13), and (14), for \(\theta \in \tilde{I}_{n,1/2}\),

$$\begin{aligned} P\left( \bar{Y}_{n,-i}=\frac{k}{n-1} \ \Big | \ \theta \right)>&~ \frac{1}{\sqrt{2\pi \sigma ^2_{\theta } }} e^{-\frac{\left( k-\mu _\theta +1 / 2\right) ^2}{2\sigma _\theta ^2}} - \frac{c}{\sigma ^2_{\theta }} \\>&~ \frac{1}{\sqrt{2U_{\alpha \beta }(n-1)\pi }} e^{-\frac{\left( {M}_{\alpha \beta ,2}(n-1)n^{-1/2} + \frac{1}{2}\right) ^2}{2 L_{\alpha \beta }(n-1)}} - \frac{c}{L_{\alpha \beta }(n-1)}\\ >&~ \frac{\tilde{C}_{\alpha \beta }}{2n^{1/2}}, \end{aligned}$$

where \(\tilde{C}_{\alpha \beta }>0\) is a constant independent of (n, i). Therefore,

$$\begin{aligned} P\left( \bar{Y}_{n,-i}=\frac{k}{n-1} \right) \geqslant&~\int _{\theta \in \tilde{I}_{n,1/2}} P\left( \bar{Y}_{n,-i}=\frac{k}{n-1} \ \Big | \ \theta \right) \textrm{d}\theta > \int _{\theta \in \tilde{I}_{n,1/2}}\frac{\tilde{C}_{\alpha \beta }}{2n^{1/2}}\textrm{d}\theta . \end{aligned}$$
(15)

There exists \(N_{\alpha \beta }\) independent of i such that when \(n\geqslant N_{\alpha \beta }\), \(\tilde{I}_{n,1/2}=I_{n,1/2}\) because \(\theta _k\in (\alpha ,\beta )\). Then by (15),

$$\begin{aligned} P\left( \bar{Y}_{n,-i}=\frac{k}{n-1} \right) >\int _{\theta \in {I}_{n,1/2}}\frac{\tilde{C}_{\alpha \beta }}{2n^{1/2}}\textrm{d}\theta = \tilde{C}_{\alpha \beta }n^{-1}. \end{aligned}$$
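The \(n^{-1}\) rate in Lemma 1 is exact in a toy special case: if \(P_{n,j}(\theta )=\theta \) for all j and \(\Theta \) is uniform on (0, 1) (an illustrative choice, not one satisfying all conditions of the paper), the Beta–Binomial identity makes the marginal distribution of the sum uniform, so every point mass equals exactly one over the number of support points:

```python
import math

def marginal_pmf(m, k):
    # P(S = k) where S | theta ~ Binomial(m, theta) and theta ~ Uniform(0, 1):
    # C(m, k) * Beta(k + 1, m - k + 1), computed via log-gamma for stability
    log_comb = math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)
    log_beta = math.lgamma(k + 1) + math.lgamma(m - k + 1) - math.lgamma(m + 2)
    return math.exp(log_comb + log_beta)

for m in (9, 49, 199):            # m = n - 1 items contributing to Ybar_{n,-i}
    for k in (0, m // 2, m):
        # marginal pmf is exactly uniform: 1 / (m + 1)
        assert abs(marginal_pmf(m, k) - 1.0 / (m + 1)) < 1e-12
print("each point mass equals 1/(m+1), matching the n^{-1} rate of Lemma 1")
```

Here the integral over the \(n^{-1/2}\)-window of \(\theta \) values, each contributing a local-CLT mass of order \(n^{-1/2}\), produces precisely the \(n^{-1}\) lower bound.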

1.2.2 B2. Proof of Lemma 2

To prove Lemma 2, we note that

$$\begin{aligned} 1-P(\Theta \in I_{\delta }\mid \mathcal {E}_{n,k})={P\left( 0<\Theta<\delta \mid \mathcal {E}_{n,k}\right) +P\left( 1-\delta<\Theta <1 \mid \mathcal {E}_{n,k}\right) }. \end{aligned}$$

We next derive an upper bound for \(P\left( 0<\Theta <\delta \mid \mathcal {E}_{n,k}\right) \); the bound for \(P\left( 1-\delta<\Theta <1 \mid \mathcal {E}_{n,k}\right) \) follows from an analogous argument.

By \(P(0<\Theta<\delta \mid \mathcal {E}_{n,k})= P(0<\Theta <\delta ,\ \mathcal {E}_{n,k})/P(\mathcal {E}_{n,k})\), and the lower bound of \(P(\mathcal {E}_{n,k})\) in Lemma 1, it suffices to derive an upper bound of \(P(0<\Theta <\delta ,\ \mathcal {E}_{n,k})\) below. In particular,

$$\begin{aligned} P\left( 0<\Theta<\delta ,\ \mathcal {E}_{n,k}\right) =\int P\biggr (\bar{Y}_{n,-i}=\frac{k}{n-1}\mid \Theta =\theta \biggr ) \, \textrm{I}(0<\theta <\delta )\textrm{d}\theta . \end{aligned}$$
(16)

By \(\bar{P}_{n,-i}(\theta _k)=k/(n-1)\),

$$\begin{aligned} \bar{Y}_{n,-i}=\frac{k}{n-1}\quad \Leftrightarrow \quad \bar{Y}_{n,-i}-\bar{P}_{n,-i}(\theta )=[\bar{P}_{n,-i}(\theta _k)-\bar{\kappa }_{-i}]-[\bar{P}_{n,-i}(\theta )-\bar{\kappa }_{-i}], \end{aligned}$$
(17)

where we define \(\bar{\kappa }_{-i}=\sum _{j\ne i} \kappa _j/(n-1)\). By \(0<\alpha /2<\alpha<\theta _k<\beta \) and Conditions 3 and 4, we have \(\bar{P}_{n,-i}(\theta _k) \geqslant \bar{P}_{n,-i}(\alpha )\) and \(\bar{P}_{n,-i}(\alpha /2)\geqslant \bar{\kappa }_{-i}.\) Therefore,

$$\begin{aligned} \bar{P}_{n,-i}(\theta _k)-\bar{\kappa }_{-i} \geqslant&~\bar{P}_{n,-i}(\alpha )-\bar{P}_{n,-i}(\alpha /2) = \bar{P}_{n,-i}'(\tilde{\alpha } )\alpha /2\geqslant \tilde{m}_{\alpha } \alpha /2, \end{aligned}$$
(18)

where the second equality follows from the mean value theorem with \(\tilde{\alpha }\in (\alpha /2,\alpha )\), and the last inequality follows from Condition 3 with \(\tilde{m}_{\alpha }\) a constant independent of (n, i). Let \(\tilde{C}_{\alpha } = \tilde{m}_{\alpha } \alpha /2\). By Condition 4, there exists \(\delta _{\alpha }>0\) such that for \(\theta <\delta \leqslant \delta _{\alpha }\),

$$\begin{aligned} \bar{P}_{n,-i}(\theta ) -\bar{\kappa }_{-i}< \bar{P}_{n,-i}(\delta ) -\bar{\kappa }_{-i} < \tilde{C}_{\alpha } /2. \end{aligned}$$
(19)

Combining (17), (18), and (19),

$$\begin{aligned}&~P\left( \bar{Y}_{n,-i}=\frac{k}{n-1} \mid \Theta = \theta \right) \\&\quad = ~ P\left( \bar{Y}_{n,-i}-\bar{P}_{n,-i}(\theta )=[\bar{P}_{n,-i}(\theta _k)-\bar{\kappa }_{-i}]-[\bar{P}_{n,-i}(\theta )-\bar{\kappa }_{-i}]\mid \Theta = \theta \right) \\&\quad \leqslant ~P\left( \bar{Y}_{n,-i}-\bar{P}_{n,-i}(\theta )\geqslant \tilde{C}_{\alpha } /2 \mid \Theta =\theta \right) \\&\quad \leqslant ~ 2\exp [-(n-1)\tilde{C}_{\alpha }^2 /2 ], \end{aligned}$$

where the last inequality follows by Lemma 3. By the above inequality and (16),

$$\begin{aligned} P\left( 0<\Theta <\delta \mid \mathcal {E}_{n,k}\right) \leqslant \frac{2\delta \exp [-(n-1)\tilde{C}_{\alpha }^2 /2 ]}{ P(\mathcal {E}_{n,k}) }\leqslant \frac{2n\delta \exp [-(n-1)\tilde{C}_{\alpha }^2 /2 ]}{\tilde{C}_{\alpha \beta }}, \end{aligned}$$

where the second inequality follows from Lemma 1. An analogous bound holds for \(P \left( 1-\delta<\Theta <1 \mid \mathcal {E}_{n,k}\right) \), which completes the proof of Lemma 2.

1.3 C. Proofs of Propositions

1.3.1 C1. Proof of Proposition 1

Let \(x=\Phi ^{-1}(\theta )\). We can equivalently write \(P'_{n,i}(\theta )=h_{n,i}(x)\), where

$$\begin{aligned} h_{n,i}(x)=\frac{a_{n,i} \phi [a_{n,i}(x-b_{n,i})] }{ \phi (x)} = a_{n,i}\exp \left[ -\frac{1}{2}(a_{n,i}^2-1)x^2+a_{n,i}^2b_{n,i}\left( x-\frac{b_{n,i}}{2}\right) \right] . \end{aligned}$$

Note that \(\theta \downarrow 0\) and \(\theta \uparrow 1\) correspond to \(x \rightarrow -\infty \) and \(x\rightarrow +\infty \), respectively. Let \(p_{+}=\lim _{\theta \rightarrow 1}P_{n,i}'(\theta )=\lim _{x\rightarrow +\infty } h_{n,i}(x)\) and \(p_{-}=\lim _{\theta \rightarrow 0}P_{n,i}'(\theta )=\lim _{x\rightarrow -\infty } h_{n,i}(x)\). As \(h_{n,i}(x)=0\) when \(a_{n,i}=0\), it suffices to consider \(a_{n,i}\ne 0\) below. In particular,

$$\begin{aligned} (p_{-}, p_{+})={\left\{ \begin{array}{ll} (+\infty , +\infty ) &{} \text { when } 0<a_{n,i}^2<1 \\ (0, 0) &{} \text { when } a_{n,i}^2>1\\ (0, +\infty ) &{} \text { when } a_{n,i}^2=1, b_{n,i}>0\\ ( +\infty , 0) &{} \text { when } a_{n,i}^2=1, b_{n,i}<0\\ (1, 1) &{} \text { when } a_{n,i}^2=1, b_{n,i}=0. \end{array}\right. } \end{aligned}$$

This shows that \(P'_{n,i}(\theta )\) cannot be uniformly bounded away from 0 and \(\infty \) on (0, 1) unless \(a_{n,i}^2=1\) and \( b_{n,i}=0. \)

On the other hand, when \(\theta \in [\alpha , \beta ]\), \(x\in [\Phi ^{-1}(\alpha ), \Phi ^{-1}(\beta )]\) with the two end points bounded away from \(-\infty \) and \(+\infty \). Therefore, when \(a_{n,i}\in [1/C, C]\) and \(|b_{n,i}|\in [1/C', C']\), there exist \(0< m_{\alpha \beta }< M_{\alpha \beta } <\infty \) such that \(P'_{n,i}(\theta ) = h_{n,i}(x) \in (m_{\alpha \beta }, M_{\alpha \beta })\). Thus Condition 3 is satisfied.
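As an illustrative check (not part of the proof), the change of variables and the closed form for \(h_{n,i}\) can be verified numerically; the values of \(a_{n,i}\) and \(b_{n,i}\) below are arbitrary:

```python
from statistics import NormalDist
import math

nd = NormalDist()                 # standard normal: cdf, inv_cdf, pdf
a, b = 1.3, 0.4                   # illustrative discrimination / difficulty values

def P(theta):
    # probit item response function P(theta) = Phi(a(Phi^{-1}(theta) - b))
    return nd.cdf(a * (nd.inv_cdf(theta) - b))

def h(x):
    # derivative after the change of variables x = Phi^{-1}(theta)
    return a * nd.pdf(a * (x - b)) / nd.pdf(x)

for theta in (0.2, 0.5, 0.8):
    x = nd.inv_cdf(theta)
    eps = 1e-6
    fd = (P(theta + eps) - P(theta - eps)) / (2 * eps)   # finite-difference P'(theta)
    assert abs(fd - h(x)) < 1e-4 * max(1.0, abs(h(x)))
    # closed form: a * exp(-(a^2-1)x^2/2 + a^2 b (x - b/2))
    closed = a * math.exp(-0.5 * (a * a - 1) * x * x + a * a * b * (x - b / 2))
    assert abs(closed - h(x)) < 1e-9 * abs(h(x))
print("probit derivative check passed")
```

The finite-difference derivative of \(P_{n,i}\) and the exponential closed form both agree with \(h_{n,i}(\Phi ^{-1}(\theta ))\) at interior points.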

We next prove that Condition 4 is also satisfied. When \(\epsilon \in (0,1/2)\), \(\Phi ^{-1}(\epsilon )<0\), and then we set \(l_{\epsilon }=\Phi \left[ C_a\Phi ^{-1}(\epsilon ) - C_b\right] .\) When \(\theta \in [0,l_{\epsilon }]\),

$$\begin{aligned} P_{n,i}(\theta )\leqslant&~ P_{n,i}(l_{\epsilon }) =\Phi \big [a_{n,i}(\Phi ^{-1}(l_{\epsilon }) - b_{n,i})\big ] \leqslant \Phi \{ a_{n,i}[C_a\Phi ^{-1}(\epsilon )-C_b+ |b_{n,i}| ] \}\\ \leqslant&~ \Phi \{ a_{n,i}[C_a\Phi ^{-1}(\epsilon )-C_b+C_b] \} \leqslant \Phi \left[ \frac{1}{C_a} C_a \Phi ^{-1}(\epsilon )\right] = \epsilon . \end{aligned}$$

When \(\epsilon \in (1/2,1)\), \(\Phi ^{-1}(\epsilon )>0\), and we set \(l_{\epsilon }=\Phi \left[ C_a^{-1}\Phi ^{-1}(\epsilon ) - C_b\right] ;\) then \(P_{n,i}(\theta )\leqslant \epsilon \) follows by the same argument. An analogous construction yields \(u_{\epsilon }\) such that \(1-P_{n,i}(\theta )\leqslant \epsilon \) for \(\theta \in [u_{\epsilon },1]\). Hence, Condition 4 is satisfied.

1.3.2 C2. Proof of Proposition 2

Let \(x=\Phi ^{-1}(\theta )\). We can equivalently write \(P'_{n,i}(\theta )=h_{n,i}(x)\), where

$$\begin{aligned} h_{n,i}(x)=&~(d_{n,i}-c_{n,i})a_{n,i}\frac{ g'\left[ a_{n,i}(x-b_{n,i})\right] }{\phi (x)} \\ =&~ (d_{n,i}-c_{n,i})a_{n,i} \frac{ \sqrt{2\pi }\, e^{x^2/2}/[e^{a_{n,i}(x-b_{n,i})}+e^{-a_{n,i}(x-b_{n,i})}]}{1+2/[e^{a_{n,i}(x-b_{n,i})}+e^{-a_{n,i}(x-b_{n,i})}]} \end{aligned}$$

which is obtained by

$$\begin{aligned} g'(x)= \frac{1/(e^x+e^{-x})}{1+2/(e^x+e^{-x})}. \end{aligned}$$
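If g is the standard logistic function \(g(x)=1/(1+e^{-x})\) (the usual choice in this model class, stated here as an assumption), the displayed expression for \(g'\) follows from \(g'(x)=g(x)[1-g(x)]=1/(e^{x}+e^{-x}+2)\), which a quick numerical check confirms:

```python
import math

def g(x):
    # logistic function (assumed form of g)
    return 1.0 / (1.0 + math.exp(-x))

def g_prime_displayed(x):
    # the expression for g'(x) in the display above
    s = math.exp(x) + math.exp(-x)
    return (1.0 / s) / (1.0 + 2.0 / s)

for x in (-3.0, -0.7, 0.0, 1.2, 4.5):
    exact = g(x) * (1.0 - g(x))       # standard logistic derivative
    assert abs(exact - g_prime_displayed(x)) < 1e-12
print("g' identity check passed")
```

Both expressions equal \(1/(e^{x/2}+e^{-x/2})^2\), since \((1+e^{-x})(1+e^{x})=e^{x}+e^{-x}+2\).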

Note that \(\theta \downarrow 0\) and \(\theta \uparrow 1\) correspond to \(x \rightarrow -\infty \) and \(x\rightarrow +\infty \), respectively. When \(a_{n,i}=0\), \(h_{n,i}(x)=0\). When \(a_{n,i}\ne 0\), both \(e^{a_{n,i}(x-b_{n,i})}+e^{-a_{n,i}(x-b_{n,i})}\) and \(e^{x^2/2}\) tend to \(+\infty \) as \(|x|\rightarrow +\infty \). Moreover, as the quadratic exponent diverges faster than the linear one, \(e^{x^2/2}/[e^{a_{n,i}(x-b_{n,i})}+e^{-a_{n,i}(x-b_{n,i})}]\rightarrow +\infty \), and thus \(h_{n,i}(x)\rightarrow +\infty \).

On the other hand, when \(\theta \in [\alpha , \beta ]\), \(x\in [\Phi ^{-1}(\alpha ), \Phi ^{-1}(\beta )]\) with the two end points bounded away from \(-\infty \) and \(+\infty \). Therefore, under the conditions in (ii) of Proposition 2, there exist \(0< m_{\alpha \beta }< M_{\alpha \beta } <\infty \) such that \(P'_{n,i}(\theta ) = h_{n,i}(x) \in (m_{\alpha \beta }, M_{\alpha \beta })\). Thus, Condition 3 is satisfied.

We next prove that Condition 4 is also satisfied. When \(\epsilon \) is small such that \(g^{-1}(\epsilon /C_{c,d})<0\), we set \(l_{\epsilon }=\Phi \left[ C_ag^{-1}(\epsilon /C_{c,d}) - C_b\right] .\) Then for \(\theta \in [0,l_{\epsilon }]\),

$$\begin{aligned} P_{n,i}(\theta )-c_{n,i} \leqslant P_{n,i}(l_{\epsilon })-c_{n,i} =&~ \left( d_{n,i}-c_{n,i}\right) g\left( a_{n,i}[C_ag^{-1}(\epsilon /C_{c,d}) - C_b - b_{n,i} ] \right) \\ \leqslant&~ \left( d_{n,i}-c_{n,i}\right) g\left( a_{n,i}[C_ag^{-1}(\epsilon /C_{c,d}) - C_b + |b_{n,i}|] \right) \\ \overset{(i1)}{\leqslant }\&~ \left( d_{n,i}-c_{n,i}\right) g[g^{-1}(\epsilon /C_{c,d})] \leqslant \epsilon , \end{aligned}$$

where the inequality (i1) above is obtained from \(a_{n,i}C_a\geqslant 1 > 0\) and \(|b_{n,i}|-C_b\leqslant 0\). When \(\epsilon \) is large enough that \(g^{-1}(\epsilon /C_{c,d})>0\), we set \(l_{\epsilon }=\Phi \left[ C_a^{-1}g^{-1}(\epsilon /C_{c,d}) - C_b\right] \), and \(P_{n,i}(\theta )-c_{n,i}\leqslant \epsilon \) follows by the same argument. An analogous construction yields \(u_{\epsilon }\) such that \(d_{n,i}-P_{n,i}(\theta ) \leqslant \epsilon \) for \(\theta \in [u_{\epsilon },1]\). Hence, Condition 4 is satisfied.
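The construction of \(l_{\epsilon }\) above can be illustrated numerically, taking g to be the standard logistic function and sweeping parameter values that satisfy the assumed bounds \(a_{n,i}\in [1/C_a, C_a]\), \(|b_{n,i}|\leqslant C_b\), and \(d_{n,i}-c_{n,i}\leqslant C_{c,d}\) (the constants below are arbitrary illustrative choices):

```python
from statistics import NormalDist
import math

nd = NormalDist()
g = lambda x: 1.0 / (1.0 + math.exp(-x))        # assumed logistic g
g_inv = lambda u: math.log(u / (1.0 - u))       # its inverse

C_a, C_b, C_cd = 2.0, 1.5, 1.0                  # illustrative bounds on (a, b, d - c)
eps = 0.1                                       # small enough that g_inv(eps / C_cd) < 0
l_eps = nd.cdf(C_a * g_inv(eps / C_cd) - C_b)   # threshold constructed in the proof

# sweep parameters satisfying a in [1/C_a, C_a] and |b| <= C_b
for a in (0.5, 1.0, 2.0):
    for b in (-1.5, 0.0, 1.5):
        for theta in (l_eps * 0.1, l_eps * 0.5, l_eps):
            # P_{n,i}(theta) - c_{n,i} in the worst case d - c = C_cd
            p_minus_c = C_cd * g(a * (nd.inv_cdf(theta) - b))
            assert p_minus_c <= eps + 1e-6      # tiny slack for floating-point error
print("Condition 4 bound verified on the parameter grid")
```

The bound is tight at \(a_{n,i}=1/C_a\), \(b_{n,i}=-C_b\), \(\theta =l_{\epsilon }\), where \(P_{n,i}(\theta )-c_{n,i}\) equals \(\epsilon \) exactly, mirroring the chain of inequalities ending at \(g[g^{-1}(\epsilon /C_{c,d})]\).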



Cite this article

He, Y. Extended Asymptotic Identifiability of Nonparametric Item Response Models. Psychometrika (2024). https://doi.org/10.1007/s11336-024-09972-7
