A Two-Stage Approach to Differentiating Normal and Aberrant Behavior in Computer Based Testing

Abstract

Statistical methods for identifying aberrances on psychological and educational tests are pivotal for detecting flaws in the design of a test and irregular behavior of test takers. Two approaches have been taken in the past to address the challenge of aberrant behavior detection: (1) modeling aberrant behavior via mixture modeling methods, and (2) flagging aberrant behavior via residual-based outlier detection methods. In this paper, we propose a two-stage method that is conceived of as a combination of both approaches. In the first stage, a mixture hierarchical model is fitted to the response and response time data to distinguish normal and aberrant behaviors using a Markov chain Monte Carlo (MCMC) algorithm. In the second stage, a further distinction between rapid guessing and cheating behavior is made at the person level using a Bayesian residual index. Simulation results show that the two-stage method yields accurate item and person parameter estimates, as well as a high true detection rate and a low false detection rate, under different manipulated conditions mimicking NAEP parameters. A real data example is given at the end to illustrate the potential application of the proposed method.


Notes

  1. Because precise parameter recovery is a prerequisite for the subsequent item and person classification, but is not itself the focus of the study, we moved those results to the appendix to save space.

  2. We avoid the terminology of “power” here because our method is not, strictly speaking, a hypothesis-testing-based method.

  3. Consider an item i, and let “g” denote the probability of guessing this item correctly and “c” the probability of answering it correctly by cheating. Suppose \(N_1 \) examinees answered the item correctly and \(N_2 \) incorrectly, and all \(N_1 +N_2 \) examinees had short RTs. It is then legitimate to assume that all \(N_1 \) guessed the item correctly, in which case the likelihood function becomes \(g^{N_1 }\left( {1-g} \right) ^{N_2 }\). On the other hand, suppose that out of \(N_1 \), \(n_1 \) guessed correctly and \(n_2 \) cheated, and out of \(N_2 \), \(n_3 \) guessed incorrectly and \(n_4 \) cheated incorrectly (\(n_4 \) might be small, which is fine); then the likelihood becomes \(g^{n_1 }\left( {1-g} \right) ^{n_3 }c^{n_2 }\left( {1-c} \right) ^{n_4 }\). Both likelihoods are permissible, and thus the two behaviors are indeterminate even when the response information is taken into consideration.
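
     To make the indeterminacy concrete: maximizing the first likelihood gives \(\hat{g}=N_1 /(N_1 +N_2 )\), whereas maximizing the second gives \(\hat{g}=n_1 /(n_1 +n_3 )\) and \(\hat{c}=n_2 /(n_2 +n_4 )\); because the split \((n_1 ,n_2 ,n_3 ,n_4 )\) is unobserved, either decomposition reproduces the observed short-RT responses equally well, so the responses alone cannot determine which behavior generated them.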

References

  • Baker, F. B., & Kim, S.-H. (2004). Item response theory: Parameter estimation techniques (2nd ed.). New York: Marcel Dekker.

  • Bolt, D. M., Cohen, A. S., & Wollack, J. A. (2002). Item parameter estimation under conditions of test speededness: Application of a mixture Rasch model with ordinal constraints. Journal of Educational Measurement, 39(4), 331–348.

  • Boughton, K. A., & Yamamoto, K. (2007). A hybrid model for test speededness: Multivariate and mixture distribution Rasch models. New York: Springer.

  • Chang, Y.-W., Tsai, R.-C., & Hsu, N.-J. (2014). A speeded item response model: Leave the harder till later. Psychometrika, 79(2), 255–274.

  • Cizek, G. J. (1999). Cheating on tests: How to do it, detect it, and prevent it. London: Routledge.

  • Cohen, A. S., & Wollack, J. A. (2006). Test administration, security, scoring, and reporting. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). Washington: American Council on Education/Praeger Publishers.

  • Drasgow, F., Levine, M. V., & Williams, E. A. (1985). Appropriateness measurement with polychotomous item response models and standardized indices. British Journal of Mathematical and Statistical Psychology, 38(1), 67–86.

  • Drasgow, F., Luecht, R. M., & Bennett, R. (2006). Technology and testing. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 471–515). Washington: American Council on Education/Praeger Publishers.

  • Fan, Z., Wang, C., Chang, H.-H., & Douglas, J. (2012). Utilizing response time distributions for item selection in CAT. Journal of Educational and Behavioral Statistics, 37(5), 655–670.

  • Fitzpatrick, S., & Hickey, M. (2016). Developing achievement levels on the 2014 National Assessment of Educational Progress in grade 8 technology and engineering literacy: Technical report. Washington, DC: National Assessment Governing Board.

  • Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences (with discussion). Statistical Science, 7, 457–511.

  • Glas, C. A., & Meijer, R. R. (2003). A Bayesian approach to person fit analysis in item response theory models. Applied Psychological Measurement, 27(3), 217–233.

  • Goegebeur, Y., De Boeck, P., Wollack, J. A., & Cohen, A. S. (2008). A speeded item response model with gradual process change. Psychometrika, 73(1), 65–87.

  • Karabatsos, G. (2003). Comparing the aberrant response detection performance of thirty-six person-fit statistics. Applied Measurement in Education, 16(4), 277–298.

  • Levine, M. V., & Rubin, D. B. (1979). Measuring the appropriateness of multiple-choice test scores. Journal of Educational and Behavioral Statistics, 4(4), 269–290.

  • Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.

  • McLeod, L. D., & Lewis, C. (1999). Detecting item memorization in the CAT environment. Applied Psychological Measurement, 23(2), 147–160.

  • McLeod, L., Lewis, C., & Thissen, D. (2003). A Bayesian method for the detection of item preknowledge in computerized adaptive testing. Applied Psychological Measurement, 27(2), 121–137.

  • Meijer, R. R., & Sijtsma, K. (2001). Methodology review: Evaluating person fit. Applied Psychological Measurement, 25(2), 107–135.

  • Meyer, J. P. (2010). A mixture Rasch model with item response time components. Applied Psychological Measurement, 34, 521–538.

  • Mislevy, R. J., & Verhelst, N. D. (1990). Modeling item responses when different subjects employ different solution strategies. Psychometrika, 55(2), 195–215.

  • Nering, M. L. (1996). The effects of person misfit in computerized adaptive testing. Unpublished doctoral dissertation, University of Minnesota, Minneapolis.

  • Nering, M. L. (1997). The distribution of indexes of person fit within the computerized adaptive testing environment. Applied Psychological Measurement, 21, 115–127.

  • Rost, J. (1990). Rasch models in latent classes: An integration of two approaches to item analysis. Applied Psychological Measurement, 14, 271–282.

  • Rouder, J. N., Sun, D., Speckman, P. L., Lu, J., & Zhou, D. (2003). A hierarchical Bayesian statistical framework for response time distributions. Psychometrika, 68(4), 589–606.

  • Schnipke, D. L., & Scrams, D. J. (1997). Modeling item response times with a two-state mixture model: A new method of measuring speededness. Journal of Educational Measurement, 34, 213–232.

  • Segall, D. O. (2002). An item response model for characterizing test compromise. Journal of Educational and Behavioral Statistics, 27(2), 163–179.

  • Shao, C., Li, J., & Cheng, Y. (2015). Detection of test speededness using change-point analysis. Psychometrika, 1–24. doi:10.1007/s11336-015-9476-7.

  • Shu, Z., Henson, R., & Luecht, R. (2013). Using deterministic, gated item response theory model to detect test cheating due to item compromise. Psychometrika, 78(3), 481–497.

  • van der Linden, W. J., & Lewis, C. (2014). Bayesian checks on cheating on tests. Psychometrika, 80, 689–706.

  • van der Linden, W. J. (2006). A lognormal model for response times on test items. Journal of Educational and Behavioral Statistics, 31, 181–204.

  • van der Linden, W. J. (2007). A hierarchical framework for modeling speed and accuracy on test items. Psychometrika, 72(3), 287–308.

  • van der Linden, W. J., & Guo, F. (2008). Bayesian procedures for identifying aberrant response-time patterns in adaptive testing. Psychometrika, 73(3), 365–384.

  • van der Linden, W. J., & van Krimpen-Stoop, E. M. (2003). Using response times to detect aberrant responses in computerized adaptive testing. Psychometrika, 68(2), 251–265.

  • van Krimpen-Stoop, E. M., & Meijer, R. R. (2000). Detecting person misfit in adaptive testing using statistical process control techniques. In Computerized adaptive testing: Theory and practice (pp. 201–219). New York: Springer.

  • van Krimpen-Stoop, E. M., & Meijer, R. R. (1999). Simulating the null distribution of person-fit statistics for conventional and adaptive tests. Applied Psychological Measurement, 23, 327–345.

  • von Davier, M., & Rost, J. (1995). Polytomous mixed Rasch models. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Foundations, recent developments, and applications. New York: Springer.

  • Wang, C., & Xu, G. (2015). A mixture hierarchical model for response times and response accuracy. British Journal of Mathematical and Statistical Psychology, 68, 456–477.

  • Wang, C., Chang, H. H., & Douglas, J. A. (2013a). The linear transformation model with frailties for the analysis of item response times. British Journal of Mathematical and Statistical Psychology, 66(1), 144–168.

  • Wang, C., Fan, Z., Chang, H.-H., & Douglas, J. A. (2013b). A semiparametric model for jointly analyzing response times and accuracy in computerized testing. Journal of Educational and Behavioral Statistics, 38(4), 381–417.

  • Wise, S. L., & DeMars, C. E. (2006). An application of item response time: The effort-moderated IRT model. Journal of Educational Measurement, 43(1), 19–38.

  • Wise, S. L., & Kong, X. (2005). Response time effort: A new measure of examinee motivation in computer-based tests. Applied Measurement in Education, 18(2), 163–183.

  • Wright, B. D., & Stone, M. H. (1979). Best test design: Rasch measurement. Chicago: MESA Press.

  • Yamamoto, K. (1989). HYBRID model of IRT and latent class models (RR-89-41). Princeton, NJ: Educational Testing Service.

  • Yamamoto, K. (1995). Estimating the effects of test length and test time on parameter estimation using the HYBRID model (TOEFL Tech. Rep. No. TR-10). Princeton, NJ: Educational Testing Service.

  • Yi, Q., Zhang, J., & Chang, H. (2008). Severity of organized item theft in computerized adaptive testing: A simulation study. Applied Psychological Measurement, 32, 543–558.

  • Zhang, J. (2013). A sequential procedure for detecting compromised items in the item pool of a CAT system. Applied Psychological Measurement, 38, 87–104.

Author information

Corresponding author

Correspondence to Chun Wang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 57 KB)

Supplementary material 2 (pdf 93 KB)

Appendices

Appendix A: The MCMC algorithm

At the rth step, denote current parameter estimates: \(\varvec{a}^{(r-1)} \equiv (a_{1}^{(r-1)},...,a_{J}^{(r-1)})'\), \(\varvec{b}^{(r-1)} \equiv (b_{1}^{(r-1)},...,b_{J}^{(r-1)})'\), \(\varvec{c}^{(r-1)} \equiv (c_{1}^{(r-1)},...,c_{J}^{(r-1)})'\), \(\varvec{\alpha }^{(r-1)} \equiv (\alpha _{1}^{(r-1)},...,\alpha _{J}^{(r-1)})'\), \(\varvec{\beta }^{(r-1)} \equiv (\beta _{1}^{(r-1)},...,\beta _{J}^{(r-1)})'\), \(\varvec{\theta }^{(r-1)} \equiv (\theta _{1}^{(r-1)},...,\theta _{N}^{(r-1)})'\), \(\varvec{\tau }^{(r-1)} \equiv (\tau _{1}^{(r-1)},...,\tau _{N}^{(r-1)})'\), \(\sigma _{\theta \tau }^{(r-1)}\), \(\sigma _{\tau }^{(r-1)}\); the “aberrant” parameters: \(\varvec{d}^{(r-1)} \equiv (d_{1}^{(r-1)},...,d_{J}^{(r-1)})'\), \(\varvec{\pi }^{(r-1)} \equiv (\pi _{1}^{(r-1)},...,\pi _{N}^{(r-1)})'\), \(\sigma _{c}^{(r-1)}\),\(\mu _{c}^{(r-1)}\) and the indicator \(\varvec{\Delta }^{(r-1)}\). Sample each parameter sequentially as follows.
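
A minimal Python sketch of how this sampler state can be laid out is given below; it is an illustration only (the variable names and starting values are assumptions chosen to mirror the notation above), not the authors' implementation.

```python
import numpy as np

# Illustrative container for the sampler state; names mirror the appendix notation.
# A full sweep would overwrite these blocks in place, following Steps 1-11 below.
def init_state(N, J, seed=0):
    rng = np.random.default_rng(seed)
    return {
        "a": np.ones(J), "b": np.zeros(J), "c": np.full(J, 0.2),   # 3PL item parameters
        "alpha": np.ones(J), "beta": np.zeros(J),                   # lognormal RT item parameters
        "theta": rng.normal(size=N), "tau": np.zeros(N),            # person ability and speed
        "sigma_theta_tau": 0.2, "sigma_tau": 0.5,                   # covariance components
        "d": np.full(J, 0.5), "pi": np.full(N, 0.1),                # aberrant success prob. and propensity
        "mu_c": np.log(5.0), "sigma_c": 0.5,                        # aberrant response-time distribution
        "Delta": np.zeros((N, J), dtype=int),                        # behaviour indicators Delta_ij
    }
```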

  1. Update \(\Delta _{ij}\) for each i and j: Draw

    $$\begin{aligned} \Delta ^{(r)}_{ij} \sim Bernoulli(\frac{CT(Y_{ij})\varphi _c^{(r-1)}(t_{ij})\pi ^{(r-1)}_{i}}{IRT(Y_{ij})f^{(r-1)}(t_{ij})(1-\pi _{i}^{(r-1)})+CT(Y_{ij})\varphi _c^{(r-1)}(t_{ij})\pi ^{(r-1)}_{i}}) \end{aligned}$$
    (1)

    where

    $$\begin{aligned} IRT(Y_{ij})&= P_j(\theta _i)^{Y_{ij}}(1-P_j(\theta _i))^{(1-Y_{ij})}, \end{aligned}$$
    (2)
    $$\begin{aligned} CT(Y_{ij})&=d_{j}^{Y_{ij}}(1-d_{j})^{(1-Y_{ij})}. \end{aligned}$$
    (3)

    \(\varphi _c^{(r-1)}(t_{ij})\) is the lognormal likelihood with parameters \(\mu ^{(r-1)}_{c}\) and \(\sigma ^{(r-1)}_{c}\); \(f^{(r-1)}(t_{ij})=f(t_{ij};\tau ^{(r-1)}_{i},\alpha ^{(r-1)}_{j},\beta ^{(r-1)}_{j})\) is the likelihood of the lognormal response-time model; and \(P_j(\theta _i)\) is calculated from the 3PL model.

  2. Update \(c_j\) for each j: Define a latent variable \(w_{ij}\) as follows: if \(Y_{ij}=0\) then \(w_{ij}=0\); if \(Y_{ij}=1\) then \(w^{(r)}_{ij} \sim Bernoulli(\frac{\phi ^{(r-1)}(\theta _i)}{c^{(r-1)}_j+(1-c^{(r-1)}_j)\phi ^{(r-1)}(\theta _i)})\), where \(\phi ^{(r-1)}(\theta _i)=\frac{1}{1+\exp (-a_{j}^{(r-1)}(\theta _{i}^{(r-1)}-b_{j}^{(r-1)}))}\). Then, within the solution-behaviour class, i.e., \(\Delta _{ij}^{(r)} \ne 1\), compute \(T^{(r)}_{j} = \sum ^{N}_{i=1}I(w^{(r)}_{ij}=0)\), the number of people who do not know the correct response to item j, and \(M^{(r)}_{j} = \sum ^{N}_{i=1}I(w^{(r)}_{ij}=0)I(y_{ij}=1)\), the number of people who guess item j correctly. It follows that \(M^{(r)}_{j} \sim Bin(T^{(r)}_{j},c_j)\). Given a beta prior \(Beta(\gamma , \delta )\), \(c^{(r)}_{j}\) can then be drawn from its posterior distribution \(Beta(M^{(r)}_{j}+\gamma , T^{(r)}_{j}-M^{(r)}_{j}+\delta )\) (see the code sketch following Step 11).

  3. Update \(a_j\) and \(b_j\) for each j: Within the solution-behaviour category, i.e., \(\Delta _{ij}^{(r)} \ne 1\), draw \(a^{*}_{j} \sim ln\mathcal {N}(\log a^{(r-1)}_{j},c^2_{a})\) and \(b^{*}_{j} \sim \mathcal {N}(b^{(r-1)}_{j},c^2_{b})\). Following Patz and Junker (1999), a Metropolis-Hastings algorithm is employed to update the two parameters simultaneously with acceptance probability \(min(1,R_{ab})\), where

    $$\begin{aligned} R_{ab}=\frac{\pi _{a}(a^{*}_{j})\pi _{b}(b^{*}_{j})a^{(r-1)}_{j}\prod _{i=1}^{N}IRT(Y_{ij},a^{*}_{j},b^{*}_{j},c^{(r)}_j,\theta ^{(r-1)}_{i})}{\pi _{a}(a^{(r-1)}_{j})\pi _{b}(b^{(r-1)}_{j})a^{*}_{j}\prod _{i=1}^{N}IRT(Y_{ij},a^{(r-1)}_{j},b^{(r-1)}_{j},c^{(r)}_j,\theta ^{(r-1)}_{i})}. \end{aligned}$$
    (4)

    where IRT is calculated from Equation (2), \(\pi _a\) is the lognormal prior density on parameter a, and \(\pi _b\) is the normal prior density on parameter b. To obtain a reasonable acceptance rate, we set the standard deviations of the proposal distributions to \(c_{a} = 0.5\) and \(c_{b} = 0.3\).

  4. Update \(\sigma _{\theta \tau }\): Fix the standard deviation of \(\tau \) and update the correlation \(\rho _{\theta \tau }\); then \(\sigma ^{(r)}_{\theta \tau } = \rho ^{(r)}_{\theta \tau }\sigma ^{(r-1)}_{\tau }\). Since \(\rho _{\theta \tau } \in [-1,1]\), a transformation is needed. Following Wang et al. (2013), compute \(\varphi ^{(r-1)} = \log (\frac{1+\rho ^{(r-1)}_{\theta \tau }}{1-\rho ^{(r-1)}_{\theta \tau }})\) and draw \(\varphi ^{*} \sim \mathcal {N}(\varphi ^{(r-1)},c^2_\varphi )\) (a code sketch of this move is given after Step 11). Accept the sample with probability \(min(1,R_{\varphi })\) and

    $$\begin{aligned} R_{\varphi }=\frac{P(\varvec{\theta },\varvec{\tau }|\varphi ^{*})\pi _{\rho }(\rho ^{*}_{\theta \tau })J(\varphi ^{*})}{P(\varvec{\theta },\varvec{\tau }|\varphi ^{(r-1)})\pi _{\rho }(\rho ^{(r-1)}_{\theta \tau })J(\varphi ^{(r-1)})} \end{aligned}$$
    (5)

    where

    $$\begin{aligned} P(\varvec{\theta },\varvec{\tau }|\varphi )=\prod _{i=1}^{N}f(\varvec{\xi }_i; \varvec{\mu }_p,\varvec{\Sigma }_p|\varphi ) \end{aligned}$$

    is the product of bivariate normal densities,

    \(\pi _\rho \) is the normal prior for the correlation term, and \(J(\varphi )=\frac{2\text {exp}(\varphi )}{(1+\text {exp}(\varphi ))^2}\) is the Jacobian of the transformation. To obtain a reasonable acceptance rate, we set the standard deviation of the proposal distribution to \(c_{\varphi } = 0.5\).

  5. Update \(\theta _i\) and \(\tau _i\) for each i: Within the solution behaviours \(\{\Delta ^{(r)}_{ij} \ne 1\}\), draw \((\theta ^{*}_{i},\tau ^{*}_{i})\) from a bivariate normal proposal distribution with \(\varvec{\mu }= (\theta ^{(r-1)}_{i},\tau ^{(r-1)}_{i})\) and \(\varvec{\Sigma }= \bigl ({\begin{matrix} 1&{}0.25\\ 0.25&{}0.25 \end{matrix}} \bigr )\). Accept the sample with probability \(min(1,R_{\theta \tau })\), where

    $$\begin{aligned} R_{\theta \tau }=\frac{\pi (\theta _i^{*},\tau ^{*}_i)\prod _{j=1}^{J}IRT(Y_{ij},a^{(r)}_{j},b^{(r)}_{j},c^{(r)}_j,\theta ^{*}_{i})f(t_{ij},\tau ^{*}_i)}{\pi (\theta _i^{(r-1)},\tau _i^{(r-1)})\prod _{j=1}^{J}IRT(Y_{ij},a^{(r)}_{j},b^{(r)}_{j},c^{(r)}_j,\theta ^{(r-1)}_{i})f(t_{ij},\tau ^{(r-1)}_i)} \end{aligned}$$
    (6)

    where f is the lognormal likelihood of the response times, IRT(.) is calculated from Equation (2), and \(\pi (.)\) is the density of the bivariate normal prior with mean (0, 0) and \(\varvec{\Sigma }= \bigl ({\begin{matrix} 1&{}\sigma ^{(r)}_{\theta \tau }\\ \sigma ^{(r)}_{\theta \tau }&{}\sigma ^{2,(r-1)}_{\tau } \end{matrix}} \bigr )\).

  6. Update \(\sigma _{\tau }\): Since \(\tau \sim \mathcal {N}(0, \sigma ^2_{\tau })\), we can use an inverse-gamma conjugate prior \(\pi (\sigma _{\tau }) \sim Inv\text {-}Gamma(\gamma _t,\delta _t)\) and draw \(\sigma ^{(r)}_{\tau }\) from

    $$\begin{aligned} Inv\text {-}Gamma(\gamma _t+\frac{N}{2},\delta _t+\frac{\sum ^{N}_{i=1}(\tau ^{(r)}_i)^2}{2}) \end{aligned}$$
  7. Update \(\alpha _j\) for each j: Within the solution behaviours \(\{\Delta _{ij}^{(r)} \ne 1\}\), draw \(\alpha ^{*}_{j} \sim ln\mathcal {N}(\log \alpha ^{(r-1)}_{j},c^{2}_\alpha )\) and accept the sample with probability \(min(1,R_{\alpha })\), where

    $$\begin{aligned} R_{\alpha }=\frac{\pi _{\alpha }(\alpha ^{*}_j)\alpha ^{(r-1)}_j\prod _{i=1}^{N}f(t_{ij},\tau _i^{(r)},\alpha ^{*}_j,\beta ^{(r-1)}_j)}{\pi _{\alpha }(\alpha ^{(r-1)}_j)\alpha ^{*}_j\prod _{i=1}^{N}f(t_{ij},\tau _i^{(r)},\alpha ^{(r-1)}_j,\beta ^{(r-1)}_j)}, \end{aligned}$$
    (7)

    \(\pi _\alpha \) is the lognormal prior density on \(\alpha \), and f is the lognormal response-time density defined in Step 1. \(c_\alpha \) is set to 0.3 to achieve a reasonable acceptance rate for \(\alpha ^{*}_j\).

  8. Update \(\beta _j\) for each j: Within the solution behaviours, we have \(\log t_{ij}+\tau _i \overset{i.i.d.}{\sim } \mathcal {N}(\beta _j,\frac{1}{\alpha _j^2})\). The normal prior is conjugate for \(\beta \), so we can draw \(\beta _j^{(r)}\) from \(f(\beta _j|\log \varvec{t}_{\cdot j},\varvec{\tau }^{(r)},\alpha _j^{(r)},\varvec{\Delta }^{(r)})\), where

    $$\begin{aligned} f(.) \sim \mathcal {N}(\frac{[\alpha _j^{(r)}]^2\sum _{i=1}^{N}(\log t_{ij}+\tau _i^{(r)})I(\Delta ^{(r)}_{ij}=0)}{1+[\alpha _j^{(r)}]^2\sum _{i=1}^{N}(1-\Delta ^{(r)}_{ij})}, \frac{1}{1+[\alpha _j^{(r)}]^2\sum _{i=1}^{N}(1-\Delta _{ij}^{(r)})}) \end{aligned}$$
  9. Update \(\mu _{c}\): Within the aberrant-behaviour category, i.e., \(\Delta _{ij}^{(r)} = 1\), compute the sum of the log response times, \(logY^{(r)}\), and the number of responses with aberrant behaviour, \(n^{(r)}_{c}\). Then draw \(\mu ^{(r)}_{c} \sim ln\mathcal {N}(\frac{\mu _{m}\sigma ^{2,(r-1)}_{c}+\sigma ^{2}_{m}logY^{(r)}}{\sigma ^{2,(r-1)}_{c}+\sigma ^{2}_{m}n^{(r)}_{c}},\frac{\sigma ^{(r-1)}_{c}\sigma _{m}}{\sqrt{\sigma ^{2}_{m}n^{(r)}_{c}+\sigma ^{2,(r-1)}_{c}}})\), where \(\mu _{m}\) and \(\sigma _{m}\) are the parameters of the normal prior for \(\mu _{c}\).

  10. Update \(d_{j}\) for each j: Within the aberrant-behaviour category \(\{\Delta ^{(r)}_{ij} = 1\}\), compute the total number of people engaging in aberrant behaviour on item j, \(nc^{(r)}_{j}\), and the number of correct responses among them, \(tc^{(r)}_{j}\). Given \(tc_{j} \sim Bin(nc_{j},d_j)\) and a conjugate beta prior \(Beta(\alpha _d,\beta _d)\) for \(d_j\), we can draw \(d^{(r)}_j \sim Beta(\alpha _d+tc^{(r)}_{j},\beta _d+nc^{(r)}_{j}-tc^{(r)}_{j})\).

  11. Update \(\pi _i\) for each i: Within the aberrant behaviours \(\{\Delta ^{(r)}_{ij} = 1\}\), compute the number of items person i has cheated on, \(nc^{(r)}_{i}\), and draw \(\pi ^{(r)}_{i} \sim Beta(nc^{(r)}_{i}+\gamma _{m},J-nc^{(r)}_{i}+\delta _{m})\), where \(\gamma _{m}\) and \(\delta _{m}\) are the parameters of the beta prior for \(\pi \).
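
To make several of the conditional draws above concrete, the following is a minimal, self-contained Python sketch of Steps 1, 2, 8, and 10 under the notation of this appendix. It is an illustration under assumed conventions (function and argument names, vectorization over persons and items), not the authors' implementation; the remaining steps would be added analogously.

```python
import numpy as np

def lognormal_pdf(t, mean_log, sd_log):
    """Density of t when log(t) ~ Normal(mean_log, sd_log^2)."""
    return np.exp(-(np.log(t) - mean_log) ** 2 / (2 * sd_log ** 2)) / (t * sd_log * np.sqrt(2 * np.pi))

def draw_delta(Y, T, a, b, c, alpha, beta, theta, tau, d, pi, mu_c, sigma_c, rng):
    """Step 1: draw the behaviour indicator Delta_ij from its Bernoulli full conditional."""
    P = c + (1 - c) / (1 + np.exp(-a * (theta[:, None] - b)))           # 3PL success probability
    irt = P ** Y * (1 - P) ** (1 - Y)                                   # IRT(Y_ij)
    ct = d ** Y * (1 - d) ** (1 - Y)                                    # CT(Y_ij)
    f_rt = lognormal_pdf(T, beta - tau[:, None], 1.0 / alpha)           # solution-behaviour RT density
    g_rt = lognormal_pdf(T, mu_c, sigma_c)                              # aberrant RT density
    num = ct * g_rt * pi[:, None]
    return rng.binomial(1, num / (irt * f_rt * (1 - pi[:, None]) + num))

def draw_c(Y, a, b, c, theta, Delta, gamma0, delta0, rng):
    """Step 2: conjugate Beta draw for the guessing parameter c_j of each item."""
    phi = 1.0 / (1 + np.exp(-a * (theta[:, None] - b)))                 # P(person knows the item)
    w = np.where(Y == 1, rng.binomial(1, phi / (c + (1 - c) * phi)), 0) # latent 'knows it' indicator
    sol = (Delta == 0)                                                   # solution-behaviour cells only
    T_j = np.sum(sol & (w == 0), axis=0)                                 # did not know item j
    M_j = np.sum(sol & (w == 0) & (Y == 1), axis=0)                      # ... but answered correctly
    return rng.beta(M_j + gamma0, T_j - M_j + delta0)

def draw_beta_time(T, tau, alpha, Delta, rng):
    """Step 8: conjugate normal draw for the time intensity beta_j (standard normal prior)."""
    sol = (Delta == 0)
    s = np.sum(sol * (np.log(T) + tau[:, None]), axis=0)
    prec = 1.0 + alpha ** 2 * np.sum(sol, axis=0)                        # posterior precision
    return rng.normal(alpha ** 2 * s / prec, np.sqrt(1.0 / prec))

def draw_d(Y, Delta, alpha_d, beta_d, rng):
    """Step 10: conjugate Beta draw for the aberrant success probability d_j."""
    ab = (Delta == 1)
    nc_j = np.sum(ab, axis=0)                                            # aberrant responses to item j
    tc_j = np.sum(ab * (Y == 1), axis=0)                                 # of which correct
    return rng.beta(alpha_d + tc_j, beta_d + nc_j - tc_j)
```

Because these blocks are conjugate, no accept/reject step is needed; each function is called once per sweep with a shared `rng = np.random.default_rng()`.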
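
Step 4 involves a less standard change of variables (the Fisher-z transform), so a corresponding sketch may also help. The snippet below assumes, as in Step 5, a mean of (0, 0) for the bivariate normal distribution of \((\theta _i ,\tau _i )\), and treats the prior on the correlation as a normal density centred at zero with an assumed standard deviation `prior_sd`; it illustrates the Metropolis-Hastings move rather than reproducing the authors' code.

```python
import numpy as np

def bvn_loglik(theta, tau, rho, sigma_tau):
    """Sum of log bivariate-normal densities with mean (0, 0), Var(theta)=1, Var(tau)=sigma_tau^2."""
    cov = np.array([[1.0, rho * sigma_tau],
                    [rho * sigma_tau, sigma_tau ** 2]])
    inv = np.linalg.inv(cov)
    x = np.column_stack([theta, tau])
    quad = np.einsum("ni,ij,nj->n", x, inv, x)
    return -0.5 * quad.sum() - 0.5 * len(theta) * np.log((2 * np.pi) ** 2 * np.linalg.det(cov))

def update_rho(theta, tau, rho, sigma_tau, rng, c_phi=0.5, prior_sd=0.5):
    """Step 4: one Metropolis-Hastings move for the correlation on the Fisher-z scale."""
    jacobian = lambda phi: 2 * np.exp(phi) / (1 + np.exp(phi)) ** 2      # d(rho)/d(phi)
    phi_old = np.log((1 + rho) / (1 - rho))
    phi_new = rng.normal(phi_old, c_phi)
    rho_new = (np.exp(phi_new) - 1) / (np.exp(phi_new) + 1)
    log_ratio = (bvn_loglik(theta, tau, rho_new, sigma_tau)
                 - bvn_loglik(theta, tau, rho, sigma_tau)
                 + np.log(jacobian(phi_new)) - np.log(jacobian(phi_old))
                 - 0.5 * (rho_new ** 2 - rho ** 2) / prior_sd ** 2)       # normal prior on rho
    return rho_new if np.log(rng.uniform()) < log_ratio else rho
```

The accepted value is then converted back to the covariance via \(\sigma _{\theta \tau } = \rho _{\theta \tau }\sigma _{\tau }\), as described in Step 4.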

Appendix B: Parameter recovery and classification contingency tables

1.1 Simulation Study I

Results for Parameter Estimation

 

| Parameter | EXP 01 (Bias / MSE / MCSE) | EXP 02 (Bias / MSE / MCSE) | EXP 03 (Bias / MSE / MCSE) | EXP 04 (Bias / MSE / MCSE) |
|---|---|---|---|---|
| a | -0.1136 / 0.0582 / 0.0108 | -0.0198 / 0.0753 / 0.0151 | -0.1454 / 0.0897 / 0.0136 | -0.1166 / 0.0858 / 0.0113 |
| b | 0.0284 / 0.0132 / 0.0067 | -0.0241 / 0.0143 / 0.0077 | 0.0084 / 0.0372 / 0.0091 | 0.0314 / 0.0179 / 0.0078 |
| c | 0.0106 / 0.0022 / 0.0025 | 0.0136 / 0.0025 / 0.0023 | 0.0168 / 0.0032 / 0.0029 | 0.0149 / 0.0031 / 0.0026 |
| \(\alpha \) | 0.0368 / 0.0037 / 0.0010 | 0.0709 / 0.0087 / 0.0010 | 0.0749 / 0.0090 / 0.0010 | 0.1299 / 0.0204 / 0.0011 |
| \(\beta \) | -0.0010 / 0.0002 / 0.0010 | -0.0003 / 0.0002 / 0.0010 | -0.0124 / 0.0004 / 0.0010 | -0.0079 / 0.0003 / 0.0010 |
| \(\theta \) | -0.0068 / 0.1173 / 0.0106 | -0.0506 / 0.1317 / 0.0109 | -0.0267 / 0.1166 / 0.0109 | 0.0109 / 0.1194 / 0.0114 |
| \(\tau \) | -0.0069 / 0.0084 / 0.0021 | -0.0062 / 0.0090 / 0.0022 | -0.0139 / 0.0095 / 0.0021 | -0.0099 / 0.0096 / 0.0021 |
| \(\sigma \) | -0.0039 / 0.0000 / 0.0004 | -0.0283 / 0.0008 / 0.0004 | 0.0045 / 0.0000 / 0.0004 | -0.0171 / 0.0003 / 0.0004 |
| \(\sigma _{\tau }\) | -0.0045 / 0.0000 / 0.0001 | 0.0028 / 0.0000 / 0.0001 | -0.0183 / 0.0003 / 0.0001 | -0.0035 / 0.0000 / 0.0001 |
| \(\pi \) | 0.0223 / 0.0013 / 0.0003 | 0.0182 / 0.0019 / 0.0003 | 0.0185 / 0.0018 / 0.0003 | 0.0093 / 0.0026 / 0.0003 |
| \(\mu _c\) | -0.0006 / 0.0000 / 0.0000 | 0.0004 / 0.0000 / 0.0000 | 0.0001 / 0.0000 / 0.0000 | -0.0000 / 0.0000 / 0.0000 |
| \(\sigma _c\) | 0.0022 / 0.0000 / 0.0000 | 0.0025 / 0.0000 / 0.0000 | 0.0007 / 0.0000 / 0.0000 | 0.0003 / 0.0000 / 0.0000 |
| d | -0.0031 / 0.0042 / 0.0006 | -0.0068 / 0.0027 / 0.0004 | 0.0018 / 0.0015 / 0.0004 | 0.0036 / 0.0013 / 0.0003 |

| Parameter | EXP 05 (Bias / MSE / MCSE) | EXP 06 (Bias / MSE / MCSE) | EXP 07 (Bias / MSE / MCSE) | EXP 08 (Bias / MSE / MCSE) |
|---|---|---|---|---|
| a | -0.0639 / 0.0826 / 0.0127 | -0.0244 / 0.1072 / 0.0161 | -0.1083 / 0.0646 / 0.0132 | 0.0775 / 0.0698 / 0.0151 |
| b | 0.0315 / 0.0148 / 0.0067 | 0.0214 / 0.0190 / 0.0104 | 0.0747 / 0.0184 / 0.0066 | -0.0342 / 0.0092 / 0.0070 |
| c | 0.0108 / 0.0025 / 0.0022 | 0.0110 / 0.0037 / 0.0031 | 0.0154 / 0.0019 / 0.0024 | 0.0297 / 0.0020 / 0.0024 |
| \(\alpha \) | 0.1270 / 0.0187 / 0.0010 | 0.2610 / 0.0734 / 0.0011 | 0.0624 / 0.0080 / 0.0010 | 0.1118 / 0.0278 / 0.0010 |
| \(\beta \) | 0.0000 / 0.0004 / 0.0010 | 0.0171 / 0.0007 / 0.0011 | -0.0091 / 0.0005 / 0.0011 | -0.0061 / 0.0002 / 0.0011 |
| \(\theta \) | -0.0020 / 0.1122 / 0.0110 | 0.0082 / 0.1666 / 0.0133 | 0.0410 / 0.1032 / 0.0105 | -0.0709 / 0.1260 / 0.0106 |
| \(\tau \) | 0.0037 / 0.0089 / 0.0021 | 0.0236 / 0.0128 / 0.0022 | -0.0121 / 0.0104 / 0.0023 | -0.0050 / 0.0101 / 0.0022 |
| \(\sigma \) | -0.0173 / 0.0003 / 0.0004 | -0.0104 / 0.0001 / 0.0005 | 0.0008 / 0.0000 / 0.0004 | -0.0004 / 0.0000 / 0.0004 |
| \(\sigma _{\tau }\) | -0.0061 / 0.0000 / 0.0001 | -0.0111 / 0.0001 / 0.0001 | 0.0037 / 0.0000 / 0.0001 | -0.0012 / 0.0000 / 0.0001 |
| \(\pi \) | 0.0107 / 0.0028 / 0.0004 | -0.0060 / 0.0039 / 0.0004 | 0.0160 / 0.0016 / 0.0003 | 0.0102 / 0.0027 / 0.0004 |
| \(\mu _c\) | 0.0005 / 0.0000 / 0.0000 | 0.0002 / 0.0000 / 0.0000 | -0.0003 / 0.0000 / 0.0000 | 0.0002 / 0.0000 / 0.0000 |
| \(\sigma _c\) | 0.0009 / 0.0000 / 0.0000 | 0.0013 / 0.0000 / 0.0000 | 0.0014 / 0.0000 / 0.0000 | 0.0015 / 0.0000 / 0.0000 |
| d | -0.0077 / 0.0013 / 0.0003 | -0.0010 / 0.0005 / 0.0002 | 0.0376 / 0.0068 / 0.0006 | 0.0228 / 0.0029 / 0.0004 |

| Parameter | EXP 09 (Bias / MSE / MCSE) | EXP 10 (Bias / MSE / MCSE) | EXP 11 (Bias / MSE / MCSE) | EXP 12 (Bias / MSE / MCSE) |
|---|---|---|---|---|
| a | -0.1743 / 0.0924 / 0.0160 | -0.0522 / 0.0441 / 0.0122 | -0.0452 / 0.0392 / 0.0154 | -0.0629 / 0.0476 / 0.0154 |
| b | 0.0342 / 0.0224 / 0.0088 | 0.0081 / 0.0129 / 0.0078 | 0.0239 / 0.0268 / 0.0080 | 0.0216 / 0.0128 / 0.0086 |
| c | -0.0038 / 0.0037 / 0.0028 | 0.0068 / 0.0018 / 0.0027 | 0.0097 / 0.0016 / 0.0026 | 0.0264 / 0.0043 / 0.0027 |
| \(\alpha \) | 0.1049 / 0.0221 / 0.0010 | 0.1740 / 0.0457 / 0.0011 | 0.1668 / 0.0481 / 0.0011 | 0.3095 / 0.1103 / 0.0012 |
| \(\beta \) | 0.0087 / 0.0004 / 0.0011 | -0.0088 / 0.0004 / 0.0012 | 0.0375 / 0.0017 / 0.0011 | -0.0033 / 0.0003 / 0.0012 |
| \(\theta \) | 0.0513 / 0.1515 / 0.0127 | -0.0157 / 0.1364 / 0.0119 | 0.0346 / 0.1391 / 0.0114 | 0.0116 / 0.1465 / 0.0127 |
| \(\tau \) | 0.0110 / 0.0095 / 0.0021 | -0.0165 / 0.0109 / 0.0023 | 0.0321 / 0.0112 / 0.0022 | -0.0024 / 0.0135 / 0.0023 |
| \(\sigma \) | -0.0113 / 0.0001 / 0.0005 | 0.0001 / 0.0000 / 0.0005 | -0.0024 / 0.0000 / 0.0004 | 0.0075 / 0.0001 / 0.0005 |
| \(\sigma _{\tau }\) | 0.0063 / 0.0000 / 0.0001 | -0.0030 / 0.0000 / 0.0001 | -0.0062 / 0.0000 / 0.0001 | 0.0048 / 0.0000 / 0.0001 |
| \(\pi \) | 0.0113 / 0.0021 / 0.0004 | 0.0005 / 0.0035 / 0.0004 | 0.0015 / 0.0035 / 0.0004 | -0.0185 / 0.0058 / 0.0005 |
| \(\mu _c\) | -0.0001 / 0.0000 / 0.0000 | 0.0001 / 0.0000 / 0.0000 | -0.0001 / 0.0000 / 0.0000 | -0.0000 / 0.0000 / 0.0000 |
| \(\sigma _c\) | 0.0008 / 0.0000 / 0.0000 | -0.0018 / 0.0000 / 0.0000 | -0.0000 / 0.0000 / 0.0000 | 0.0020 / 0.0000 / 0.0000 |
| d | 0.0148 / 0.0025 / 0.0004 | 0.0044 / 0.0011 / 0.0003 | 0.0017 / 0.0013 / 0.0003 | -0.0018 / 0.0005 / 0.0002 |

Aberrant Behaviour Classification (from stage 1)

| EXP | True +, Predicted + | True +, Predicted - | True -, Predicted + | True -, Predicted - | Total |
|---|---|---|---|---|---|
| 01 | 1112 | 9 | 25 | 28854 | 30000 |
| 02 | 1789 | 4 | 19 | 28188 | 30000 |
| 03 | 1800 | 1 | 25 | 28174 | 30000 |
| 04 | 3177 | 2 | 15 | 26806 | 30000 |
| 05 | 3372 | 7 | 17 | 26604 | 30000 |
| 06 | 6222 | 0 | 32 | 23746 | 30000 |
| 07 | 1944 | 27 | 30 | 27999 | 30000 |
| 08 | 2633 | 2 | 51 | 27314 | 30000 |
| 09 | 2644 | 5 | 30 | 27321 | 30000 |
| 10 | 4048 | 10 | 40 | 25902 | 30000 |
| 11 | 4248 | 10 | 32 | 25710 | 30000 |
| 12 | 6984 | 11 | 38 | 22967 | 30000 |

Classification of cheating-dominant, guessing-dominant, and mixed-behavior (from stage 2)

| EXP | True behaviour | Classified C (Method 1) | Classified G (Method 1) | Classified C (Method 2) | Classified G (Method 2) | Total |
|---|---|---|---|---|---|---|
| 01 | C dominant | 182 | 48 | 182 | 48 | 230 |
| 01 | G dominant | 17 | 78 | 11 | 84 | 95 |
| 01 | Mixed | 1 | 0 | 1 | 0 | 1 |
| 02 | C dominant | 212 | 41 | 212 | 41 | 253 |
| 02 | G dominant | 9 | 91 | 8 | 92 | 100 |
| 02 | Mixed | 0 | 0 | 0 | 0 | 0 |
| 03 | C dominant | 207 | 33 | 206 | 34 | 240 |
| 03 | G dominant | 33 | 150 | 31 | 152 | 183 |
| 03 | Mixed | 1 | 3 | 1 | 3 | 4 |
| 04 | C dominant | 197 | 14 | 197 | 14 | 211 |
| 04 | G dominant | 31 | 169 | 24 | 176 | 200 |
| 04 | Mixed | 0 | 0 | 0 | 0 | 0 |
| 05 | C dominant | 130 | 39 | 130 | 39 | 169 |
| 05 | G dominant | 76 | 288 | 61 | 303 | 364 |
| 05 | Mixed | 6 | 2 | 6 | 2 | 8 |
| 06 | C dominant | 116 | 33 | 117 | 32 | 149 |
| 06 | G dominant | 14 | 386 | 8 | 392 | 400 |
| 06 | Mixed | 0 | 0 | 0 | 0 | 0 |
| 07 | C dominant | 543 | 129 | 543 | 129 | 672 |
| 07 | G dominant | 27 | 60 | 21 | 66 | 87 |
| 07 | Mixed | 3 | 2 | 3 | 2 | 5 |
| 08 | C dominant | 506 | 148 | 508 | 146 | 654 |
| 08 | G dominant | 17 | 82 | 9 | 90 | 99 |
| 08 | Mixed | 0 | 1 | 0 | 1 | 1 |
| 09 | C dominant | 512 | 89 | 514 | 87 | 601 |
| 09 | G dominant | 53 | 115 | 42 | 126 | 168 |
| 09 | Mixed | 6 | 3 | 6 | 3 | 9 |
| 10 | C dominant | 482 | 99 | 483 | 98 | 581 |
| 10 | G dominant | 42 | 158 | 31 | 169 | 200 |
| 10 | Mixed | 0 | 0 | 0 | 0 | 0 |
| 11 | C dominant | 375 | 104 | 376 | 103 | 479 |
| 11 | G dominant | 77 | 272 | 68 | 281 | 349 |
| 11 | Mixed | 10 | 3 | 10 | 3 | 13 |
| 12 | C dominant | 376 | 54 | 378 | 52 | 430 |
| 12 | G dominant | 35 | 364 | 20 | 379 | 399 |
| 12 | Mixed | 0 | 1 | 0 | 1 | 1 |

1.2 Simulation Study II

Results for Parameter Estimation

 

| Parameter | EXP 01 (Bias / MSE / MCSE) | EXP 02 (Bias / MSE / MCSE) | EXP 03 (Bias / MSE / MCSE) | EXP 04 (Bias / MSE / MCSE) |
|---|---|---|---|---|
| \(\alpha \) | 0.0366 / 0.0037 / 0.0010 | 0.0708 / 0.0086 / 0.0010 | 0.0751 / 0.0091 / 0.0010 | 0.1300 / 0.0205 / 0.0011 |
| \(\beta \) | -0.0051 / 0.0003 / 0.0010 | 0.0104 / 0.0003 / 0.0009 | -0.0035 / 0.0002 / 0.0010 | -0.0087 / 0.0003 / 0.0010 |
| \(\theta \) | -0.0165 / 0.1159 / 0.0094 | -0.0102 / 0.1251 / 0.0099 | 0.0061 / 0.1144 / 0.0095 | 0.0032 / 0.1114 / 0.0098 |
| \(\tau \) | -0.0110 / 0.0084 / 0.0021 | 0.0045 / 0.0089 / 0.0022 | -0.0051 / 0.0093 / 0.0021 | -0.0108 / 0.0096 / 0.0022 |
| \(\sigma \) | 0.0019 / 0.0000 / 0.0004 | -0.0238 / 0.0006 / 0.0004 | 0.0100 / 0.0001 / 0.0004 | -0.0097 / 0.0001 / 0.0004 |
| \(\sigma _{\tau }\) | -0.0049 / 0.0000 / 0.0001 | 0.0028 / 0.0000 / 0.0001 | -0.0190 / 0.0004 / 0.0001 | -0.0040 / 0.0000 / 0.0001 |
| \(\pi \) | 0.0223 / 0.0013 / 0.0003 | 0.0182 / 0.0019 / 0.0003 | 0.0185 / 0.0018 / 0.0003 | 0.0093 / 0.0026 / 0.0003 |
| \(\mu _c\) | -0.0006 / 0.0000 / 0.0000 | 0.0004 / 0.0000 / 0.0000 | 0.0001 / 0.0000 / 0.0000 | -0.0000 / 0.0000 / 0.0000 |
| \(\sigma _c\) | 0.0022 / 0.0000 / 0.0000 | 0.0025 / 0.0000 / 0.0000 | 0.0007 / 0.0000 / 0.0000 | 0.0003 / 0.0000 / 0.0000 |
| d | -0.0031 / 0.0042 / 0.0006 | -0.0066 / 0.0027 / 0.0004 | 0.0017 / 0.0015 / 0.0005 | 0.0037 / 0.0013 / 0.0003 |

| Parameter | EXP 05 (Bias / MSE / MCSE) | EXP 06 (Bias / MSE / MCSE) | EXP 07 (Bias / MSE / MCSE) | EXP 08 (Bias / MSE / MCSE) |
|---|---|---|---|---|
| \(\alpha \) | 0.1270 / 0.0187 / 0.0011 | 0.2609 / 0.0734 / 0.0011 | 0.0623 / 0.0080 / 0.0010 | 0.1116 / 0.0278 / 0.0010 |
| \(\beta \) | -0.0023 / 0.0004 / 0.0010 | 0.0171 / 0.0007 / 0.0009 | -0.0205 / 0.0008 / 0.0009 | 0.0076 / 0.0003 / 0.0010 |
| \(\theta \) | -0.0045 / 0.1094 / 0.0096 | -0.0020 / 0.1642 / 0.0122 | -0.0073 / 0.0974 / 0.0090 | -0.0097 / 0.1188 / 0.0097 |
| \(\tau \) | 0.0015 / 0.0088 / 0.0022 | 0.0236 / 0.0128 / 0.0021 | -0.0235 / 0.0108 / 0.0022 | 0.0087 / 0.0101 / 0.0022 |
| \(\sigma \) | -0.0114 / 0.0001 / 0.0004 | -0.0070 / 0.0000 / 0.0005 | 0.0069 / 0.0000 / 0.0004 | 0.0008 / 0.0000 / 0.0004 |
| \(\sigma _{\tau }\) | -0.0065 / 0.0000 / 0.0001 | -0.0117 / 0.0001 / 0.0001 | 0.0032 / 0.0000 / 0.0001 | -0.0011 / 0.0000 / 0.0001 |
| \(\pi \) | 0.0107 / 0.0028 / 0.0004 | -0.0059 / 0.0039 / 0.0004 | 0.0160 / 0.0016 / 0.0003 | 0.0103 / 0.0027 / 0.0004 |
| \(\mu _c\) | 0.0005 / 0.0000 / 0.0000 | 0.0002 / 0.0000 / 0.0000 | -0.0003 / 0.0000 / 0.0000 | 0.0002 / 0.0000 / 0.0000 |
| \(\sigma _c\) | 0.0009 / 0.0000 / 0.0000 | 0.0013 / 0.0000 / 0.0000 | 0.0015 / 0.0000 / 0.0000 | 0.0015 / 0.0000 / 0.0000 |
| d | -0.0076 / 0.0013 / 0.0003 | -0.0010 / 0.0005 / 0.0002 | 0.0374 / 0.0068 / 0.0006 | 0.0227 / 0.0029 / 0.0004 |

| Parameter | EXP 09 (Bias / MSE / MCSE) | EXP 10 (Bias / MSE / MCSE) | EXP 11 (Bias / MSE / MCSE) | EXP 12 (Bias / MSE / MCSE) |
|---|---|---|---|---|
| \(\alpha \) | 0.1050 / 0.0220 / 0.0010 | 0.1739 / 0.0458 / 0.0011 | 0.1672 / 0.0484 / 0.0011 | 0.3094 / 0.1103 / 0.0012 |
| \(\beta \) | 0.0062 / 0.0004 / 0.0010 | -0.0075 / 0.0004 / 0.0009 | 0.0313 / 0.0013 / 0.0008 | -0.0098 / 0.0004 / 0.0010 |
| \(\theta \) | 0.0337 / 0.1448 / 0.0112 | -0.0120 / 0.1342 / 0.0107 | 0.0136 / 0.1358 / 0.0102 | 0.0015 / 0.1453 / 0.0114 |
| \(\tau \) | 0.0085 / 0.0094 / 0.0021 | -0.0151 / 0.0109 / 0.0022 | 0.0258 / 0.0109 / 0.0021 | -0.0088 / 0.0135 / 0.0022 |
| \(\sigma \) | -0.0030 / 0.0000 / 0.0005 | 0.0036 / 0.0000 / 0.0004 | 0.0017 / 0.0000 / 0.0004 | 0.0100 / 0.0001 / 0.0005 |
| \(\sigma _{\tau }\) | 0.0059 / 0.0000 / 0.0001 | -0.0034 / 0.0000 / 0.0001 | -0.0063 / 0.0000 / 0.0001 | 0.0042 / 0.0000 / 0.0001 |
| \(\pi \) | 0.0113 / 0.0021 / 0.0004 | 0.0005 / 0.0035 / 0.0004 | 0.0015 / 0.0035 / 0.0004 | -0.0185 / 0.0058 / 0.0005 |
| \(\mu _c\) | -0.0001 / 0.0000 / 0.0000 | 0.0001 / 0.0000 / 0.0000 | -0.0001 / 0.0000 / 0.0000 | -0.0000 / 0.0000 / 0.0000 |
| \(\sigma _c\) | 0.0008 / 0.0000 / 0.0000 | -0.0018 / 0.0000 / 0.0000 | -0.0000 / 0.0000 / 0.0000 | 0.0020 / 0.0000 / 0.0000 |
| d | 0.0148 / 0.0025 / 0.0004 | 0.0045 / 0.0011 / 0.0003 | 0.0017 / 0.0013 / 0.0003 | -0.0019 / 0.0005 / 0.0002 |

Aberrant Behaviour Classification (from stage 1)

| EXP | True +, Predicted + | True +, Predicted - | True -, Predicted + | True -, Predicted - | Total |
|---|---|---|---|---|---|
| 01 | 1112 | 9 | 25 | 28854 | 30000 |
| 02 | 1789 | 4 | 19 | 28188 | 30000 |
| 03 | 1800 | 1 | 24 | 28175 | 30000 |
| 04 | 3177 | 2 | 15 | 26806 | 30000 |
| 05 | 3372 | 7 | 16 | 26605 | 30000 |
| 06 | 6222 | 0 | 31 | 23747 | 30000 |
| 07 | 1945 | 26 | 31 | 27998 | 30000 |
| 08 | 2632 | 3 | 51 | 27314 | 30000 |
| 09 | 2644 | 5 | 31 | 27320 | 30000 |
| 10 | 4048 | 10 | 42 | 25900 | 30000 |
| 11 | 4249 | 9 | 32 | 25710 | 30000 |
| 12 | 6984 | 11 | 38 | 22967 | 30000 |

Classification of cheating-dominant, guessing-dominant, and mixed-behavior (from stage 2)

| EXP | True behaviour | Classified C (Method 1) | Classified G (Method 1) | Classified C (Method 2) | Classified G (Method 2) | Total |
|---|---|---|---|---|---|---|
| 01 | C dominant | 182 | 48 | 182 | 48 | 230 |
| 01 | G dominant | 19 | 76 | 12 | 83 | 95 |
| 01 | Mixed | 1 | 0 | 1 | 0 | 1 |
| 02 | C dominant | 212 | 41 | 212 | 41 | 253 |
| 02 | G dominant | 9 | 91 | 7 | 93 | 100 |
| 02 | Mixed | 0 | 0 | 0 | 0 | 0 |
| 03 | C dominant | 206 | 34 | 207 | 33 | 240 |
| 03 | G dominant | 33 | 150 | 33 | 150 | 183 |
| 03 | Mixed | 2 | 2 | 1 | 3 | 4 |
| 04 | C dominant | 197 | 14 | 197 | 14 | 211 |
| 04 | G dominant | 32 | 168 | 22 | 178 | 200 |
| 04 | Mixed | 0 | 0 | 0 | 0 | 0 |
| 05 | C dominant | 130 | 39 | 130 | 39 | 169 |
| 05 | G dominant | 79 | 285 | 61 | 303 | 364 |
| 05 | Mixed | 6 | 2 | 6 | 2 | 8 |
| 06 | C dominant | 117 | 32 | 117 | 32 | 149 |
| 06 | G dominant | 13 | 387 | 8 | 392 | 400 |
| 06 | Mixed | 0 | 0 | 0 | 0 | 0 |
| 07 | C dominant | 543 | 130 | 544 | 129 | 673 |
| 07 | G dominant | 28 | 59 | 22 | 65 | 87 |
| 07 | Mixed | 3 | 2 | 3 | 2 | 5 |
| 08 | C dominant | 508 | 145 | 510 | 143 | 653 |
| 08 | G dominant | 18 | 81 | 8 | 91 | 99 |
| 08 | Mixed | 0 | 1 | 0 | 1 | 1 |
| 09 | C dominant | 512 | 89 | 512 | 89 | 601 |
| 09 | G dominant | 54 | 114 | 39 | 129 | 168 |
| 09 | Mixed | 6 | 3 | 6 | 3 | 9 |
| 10 | C dominant | 481 | 100 | 481 | 100 | 581 |
| 10 | G dominant | 47 | 153 | 31 | 169 | 200 |
| 10 | Mixed | 0 | 0 | 0 | 0 | 0 |
| 11 | C dominant | 370 | 109 | 374 | 105 | 479 |
| 11 | G dominant | 77 | 272 | 65 | 284 | 349 |
| 11 | Mixed | 10 | 3 | 10 | 3 | 13 |
| 12 | C dominant | 376 | 54 | 379 | 51 | 430 |
| 12 | G dominant | 40 | 359 | 21 | 378 | 399 |
| 12 | Mixed | 0 | 1 | 0 | 1 | 1 |


Cite this article

Wang, C., Xu, G. & Shang, Z. A Two-Stage Approach to Differentiating Normal and Aberrant Behavior in Computer Based Testing. Psychometrika 83, 223–254 (2018). https://doi.org/10.1007/s11336-016-9525-x
