1 Outline of the Problem

The problem addressed here is that the standard scoring rule in much educational measurement, the number-correct score, is not the optimal scoring rule derived from the IRT model that fits the data. In this chapter, a method is outlined to evaluate the consequences of this discrepancy for an important inference that is often made using IRT: test equating. To explain this further, we first introduce an IRT model and outline the principle of test equating.

The IRT models used in this chapter are the one-, two- and three-parameter logistic models. The data are responses of students labeled with an index n = 1, …, N to items labeled with an index i = 1, …, K. To indicate whether a response is available, we define a variable

$$ d_{ni} = \begin{cases} 1 & \text{if a response of student } n \text{ to item } i \text{ is available} \\ 0 & \text{if this is not the case.} \end{cases} $$
(11.1)

The responses will be coded by a stochastic variable \( Y_{ni} \). In the sequel, upper-case characters will denote stochastic variables and lower-case characters will denote realizations. In the present case, there are two possible realizations, defined by

$$ y_{ni} = \begin{cases} 1 & \text{if } d_{ni} = 1 \text{ and student } n \text{ gave a correct response to item } i \\ 0 & \text{if } d_{ni} = 1 \text{ and student } n \text{ did not give a correct response to item } i \\ c & \text{if } d_{ni} = 0, \text{ where } c \text{ is an arbitrary constant unequal to } 0 \text{ or } 1. \end{cases} $$
(11.2)

Define the logistic function \( \Psi (.) \) as:

$$ \Psi (x) = \frac{\exp (x)}{1 + \exp (x)} \, . $$

In the 3-parameter logistic model (3PLM, Birnbaum 1968), the probability of a correct response depends on three item parameters, \( a_{i} \), \( b_{i} \) and \( c_{i} \), which are called the discrimination, difficulty and guessing parameter, respectively. The parameter \( \theta_{n} \) is the latent proficiency parameter of student n. The model is given by

$$ \begin{aligned} P_{i} (\theta_{n} ) & = c_{i} + (1 - c_{i} )\Psi (a_{i} (\theta_{n} - b_{i} )) \\ & = c_{i} + (1 - c_{i} )\frac{\exp (a_{i} (\theta_{n} - b_{i} ))}{1 + \exp (a_{i} (\theta_{n} - b_{i} ))}. \end{aligned} $$
(11.3)

The 2-parameter logistic model (2PLM, Birnbaum 1968) follows by setting the guessing parameter equal to zero, that is, by imposing the constraint \( c_{i} = 0 \). The 1-parameter logistic model (1PLM, Rasch 1960) follows by imposing the additional constraint \( a_{i} = 1 \).
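To make the response function concrete, the following minimal sketch (in Python with NumPy; the function and parameter names are ours, not from the original) evaluates the probability in (11.3); the 2PLM and 1PLM follow as special cases.

```python
import numpy as np

def prob_correct(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct response under the 3PLM of (11.3).

    With c = 0 this reduces to the 2PLM; with a = 1 and c = 0 to the 1PLM.
    """
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Example: an item with a = 1.2, b = 0.5, c = 0.2 for a student with theta = 0.0
print(prob_correct(0.0, a=1.2, b=0.5, c=0.2))
```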

Note that in the application of the models in high-stakes situations, the number of proficiency parameters \( \theta_{n} \) can become very large. Besides the practical problem of computing estimates of all model parameters concurrently, this also leads to theoretical problems related to the consistency of the estimates (see, Neyman and Scott 1948; Kiefer and Wolfowitz 1956). Therefore, it is usually assumed that the proficiency parameters are drawn from one or more normal proficiency distributions, indexed g = 1, …, G, which are often also referred to as population distributions. That is, \( \theta_{n} \) has the density function

$$ g(\theta_{n} ;\mu_{g(n)} ,\sigma_{g(n)}^{2} ) = \frac{1}{\sigma_{g(n)} \sqrt{2\pi}} \exp \left( - \frac{1}{2}\frac{(\theta_{n} - \mu_{g(n)} )^{2}}{\sigma_{g(n)}^{2}} \right), $$
(11.4)

where g(n) is the population to which student n belongs.

Test equating relates the scores on one test to the scores on another test. Consider a simulated example based on the estimates displayed in Table 11.1. The estimates emanate from two tests. A sample of 5000 students from population A was given test A, consisting of the items i = 1,…,10, while a sample of 5000 other students from population B was given test B, consisting of the items i = 6,…,15. So the anchor between the two tests, that is, the overlap between the two tests, consists of 5 items. The anchor supports the creation of a common scale for all parameter estimates. The responses were generated with the 2PLM. The difficulties of the two tests differed: test A had a mean difficulty parameter, \( \overline{b}_{A} \), of 0.68, while the mean difficulty of test B, \( \overline{b}_{B} \), was equal to −0.92. The mean of the proficiency parameters \( \theta_{n} \) of sample A, \( \mu_{A} \), was equal to −0.25, while the mean of the proficiency parameters of sample B, \( \mu_{B} \), was equal to 0.25. The variances of the proficiency parameters and the mean of the discrimination parameters were all equal to one.
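As a purely illustrative sketch of how such a linked data set can be generated (Python with NumPy; all parameter values below are made up to mimic the description and are not the generating values behind Table 11.1):

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative difficulties: items 1-5 hard, anchor items 6-10 intermediate,
# items 11-15 easy, so that test A (items 1-10) is harder than test B (items 6-15).
b = np.repeat([1.5, -0.1, -1.7], 5)
a = np.abs(rng.normal(1.0, 0.2, 15))     # discriminations with mean roughly one

def simulate_2plm(theta, items):
    """Simulate 2PLM responses of students with proficiencies theta to the given items."""
    p = 1.0 / (1.0 + np.exp(-a[items] * (theta[:, None] - b[items])))
    return (rng.uniform(size=p.shape) < p).astype(int)

theta_A = rng.normal(-0.25, 1.0, 5000)              # sample A, administered items 1-10
theta_B = rng.normal(0.25, 1.0, 5000)               # sample B, administered items 6-15
resp_A = simulate_2plm(theta_A, np.arange(0, 10))
resp_B = simulate_2plm(theta_B, np.arange(5, 15))

print(resp_A.sum(axis=1).mean(), resp_B.sum(axis=1).mean())   # mean number-correct scores
```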

Table 11.1 Example of proficiency estimates and their standard errors on two linked tests

Suppose that test A has a cutoff score of 4, where 4 is the highest number-correct score that results in failing the test. In the fourth column of Table 11.1, the column labeled \( \theta \), it can be seen that the associated estimate on the latent \( \theta \)-scale is 0.02. We chose this point as the latent cutoff point, that is, \( \theta_{0} = 0.02 \). If the Rasch model held for these data, the number-correct score would be the sufficient statistic for \( \theta \). In the 2PLM, the relation between a number-correct score and a \( \theta \)-estimate is more complicated; we return to this below. Searching for the number-correct score on test B with a \( \theta \)-estimate closest to the latent cutoff point, we find that a cutoff score of 6 on test B best matches the cutoff score of 4 on test A. This conclusion is consistent with the fact that the average difficulty of test A was higher than the average difficulty of test B. On the other hand, the sample administered test B was more proficient than the sample administered test A. The columns labeled “Freq” and “Prob” give the frequency distributions of the number-correct scores and the associated cumulative proportions, respectively. Note that 73% of sample A failed their test, while 39% of sample B failed theirs. Again, this is as expected.

The next question of interest is the reliability of the equating procedure. This can be translated into the question of how precisely the two cutoff scores can be distinguished. If we denote the cutoff scores by \( S_{A} \) and \( S_{B} \), and denote the estimates of the positions on the latent scale associated with these two cutoff points by \( \hat{\theta }_{SA} \) and \( \hat{\theta }_{SB} \), then \( Se(\hat{\theta }_{SA} - \hat{\theta }_{SB} ) \) can be used as a measure of the precision with which we can distinguish the two scores. The estimates \( \hat{\theta }_{SA} \) and \( \hat{\theta }_{SB} \) are not independent. Firstly, they both depend on the same linked data set and, secondly, they both depend on a concurrent estimate of all item parameters, \( a_{i} \), \( b_{i} \) and \( c_{i} \), and (functions of) all latent proficiency parameters \( \theta_{n} \). Therefore, the standard error \( Se(\hat{\theta }_{SA} - \hat{\theta }_{SB} ) \) cannot simply be computed as the square root of \( Var(\hat{\theta }_{SA} - \hat{\theta }_{SB} ) = Var(\hat{\theta }_{SA} ) + Var(\hat{\theta }_{SB} ) \); the covariance of the estimates must also be taken into account. The method to achieve this is outlined below, after an outline of a statistical framework and a discussion of the problem of scoring tests with number-correct scores when these are not sufficient statistics.

2 Preliminaries

Nowadays, marginal maximum likelihood (MML, see, Bock and Aitkin 1981) and fully Bayesian estimation (Albert 1992; Johnson and Albert 1999) are the prominent frameworks for estimating IRT models. Mislevy (1986, also see Glas 1999) points out that they are closely related, because MML estimation is easily generalized to Bayes modal estimation, an estimation method that seeks the mode of the posterior distribution rather than the mode of the likelihood function. In this chapter, we adopt the MML and Bayes modal framework. In this framework, it is assumed that the \( \theta \)-parameters are drawn from a common distribution, say, a population proficiency distribution as defined in Formula (11.4). Estimates of the item parameters and the parameters of the population proficiency distribution are obtained by maximizing a likelihood function that is marginalized with respect to the \( \theta \)-parameters.

An important tool for deriving the estimation equations is Fisher’s identity (Efron 1977; Louis 1982). For this identity, we distinguish N independent observations \( y_{n} \) and unobserved data \( z_{n} \). The identity states that the first-order derivatives of the log-likelihood function \( L(.) \) with respect to the parameters of interest \( \delta \) are given by

$$ \frac{\partial L(\delta )}{\partial \delta } = \sum\limits_{n = 1}^{N} E_{z|y} \left( \nabla_{n} (\delta )\,|\,y_{n} \right) = \sum\limits_{n = 1}^{N} \int \cdots \int \left[ \frac{\partial \log p(y_{n} ,z_{n} ;\delta )}{\partial \delta } \right] p(z_{n} |y_{n} ;\delta )\,dz_{n} , $$
(11.5)

where \( p(y_{n} ,z_{n} ;\delta ) \) is the likelihood if \( z_{n} \) were observed, \( \nabla_{n} (\delta ) \) is the first-order derivative of its logarithm, and \( p(z_{n} |y_{n} ;\delta ) \) is the posterior distribution of the unobserved data given the observations.

Bock and Aitkin (1981) consider the \( \theta \)-parameters as unobserved data and use the EM-algorithm (Dempster et al. 1977) for maximum likelihood estimation from incomplete data to obtain estimates of the item and population parameters. In this framework, Glas (1999, 2016) uses Fisher’s identity to derive estimation and testing procedures for a broad class of IRT models.

Standard errors can be obtained as the square roots of the diagonal elements of the covariance matrix of the estimates, \( Cov(\hat{\delta },\hat{\delta }) \), which can be obtained by inverting the observed Fisher information matrix, say, \( Cov(\hat{\delta },\hat{\delta }) = I(\hat{\delta },\hat{\delta })^{ - 1} \). Louis (1982) shows that this matrix is given by

$$ I(\delta ,\delta ) = - \frac{\partial^{2} L(\delta )}{\partial \delta \,\partial \delta^{t} } = - \sum\limits_{n = 1}^{N} E_{z|y} \left( \nabla_{n} (\delta ,\delta^{t} )\,|\,y_{n} \right) - \sum\limits_{n = 1}^{N} Cov_{z|y} \left( \nabla_{n} (\delta )\,|\,y_{n} \right), $$
(11.6)

where \( \nabla_{n} (\delta ,\delta^{t} ) \) stands for the second-order derivatives of \( \log p(y_{n} ,z_{n} ;\delta ) \) with respect to \( \delta \). Evaluated at the MML estimates, the information matrix can be approximated by

$$ I(\hat{\delta },\hat{\delta }) \approx \sum\limits_{n = 1}^{N} {E_{z|y} \left( {\left. {\nabla_{n} (\delta )\nabla_{n} (\delta )^{t} } \right|} \right.\left. {y_{n} } \right)} \, $$
(11.7)

(see Mislevy 1986). In the next sections, this framework will be applied to the issues addressed in this chapter: the reliability of tests scored with number-correct scores and equating errors.
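As a purely illustrative sketch of what the approximation in (11.7) amounts to (Python with NumPy; the gradient vectors below are made up), the information matrix is approximated by a sum over students of (posterior expectations of) outer products of gradient vectors, and standard errors are the square roots of the diagonal of its inverse. In the sketch, each element of grads stands in for a single gradient vector per student, which simplifies the inner expectation in (11.7).

```python
import numpy as np

def covariance_from_gradients(grads):
    """Approximate Cov(delta_hat) in the spirit of (11.7): invert the sum over
    students of outer products of gradient vectors."""
    info = sum(np.outer(g, g) for g in grads)   # each g stands in for E(grad grad^t | y_n)
    cov = np.linalg.inv(info)
    return cov, np.sqrt(np.diag(cov))           # covariance matrix and standard errors

# Toy usage with made-up gradients for three students and two parameters
cov, se = covariance_from_gradients([np.array([0.3, -0.1]),
                                     np.array([-0.2, 0.4]),
                                     np.array([0.1, 0.2])])
print(se)
```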

3 MAP Proficiency Estimates Based on Number-Correct Scores

Glas (1999, 2016) shows how the estimation equations for the item and population parameters of a broad class of IRT models can be derived using Fisher’s identity. This identity can also be applied to derive an estimation equation for a proficiency estimate based on a number-correct score

$$ s = \sum\limits_{i = 1}^{k} {d_{i} y_{i} } , $$
(11.8)

with \( d_{i} \) and \( y_{i} \) as defined in (11.1) and (11.2), dropping the subscript n. The application of Fisher’s identity is based on viewing a response pattern as unobserved and the number-correct score as observed. Define \( L_{s} (\theta ) \) as the logarithm of the product of the normal prior distribution \( g(\theta ;\lambda ) \), with \( \lambda = (\mu ,\sigma^{2} ) \), and the probability of a number-correct score s given \( \theta \). Define \( \{ y|s\} \) as the set of all response patterns resulting in a number-correct score s. Then the probability of a number-correct score s given \( \theta \) is equal to the sum over \( \{ y|s\} \) of the probabilities of the response patterns, \( P(y|\theta ,\beta ) \), given item parameters \( \beta \) and proficiency parameter \( \theta \). Application of Fisher’s identity results in the first-order derivative

$$ \frac{{\partial L_{s} (\theta )}}{\partial \theta } = E_{y|s} \left( {\left. {\nabla (\theta )} \right|} \right.\left. {s,\beta } \right) = \frac{{\sum\nolimits_{{\{ y|s\} }} {\left[ {\frac{\partial \log P(y,\theta ;\beta ,\lambda )}{\partial \theta }} \right]P(y|\theta ,\beta )} }}{{\sum\nolimits_{{\{ y|s\} }} {P(y|\theta ,\beta )} }}. $$
(11.9)

Setting this expression to zero gives the equation for the MAP estimate. The summation over \( \{ y|s\} \) can be computed using the recursive algorithm by Lord and Wingersky (1984). The algorithm is also used by Orlando and Thissen (2000) for the computation of expected a-posteriori estimates of \( \theta \) given a number-correct score s.
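A minimal sketch of the Lord and Wingersky (1984) recursion for the probability of each number-correct score given \( \theta \) under the 2PLM (Python with NumPy; the function and variable names are ours):

```python
import numpy as np

def score_distribution(theta, a, b):
    """Lord-Wingersky recursion: P(S = s | theta) for s = 0, ..., K under the 2PLM."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # item response probabilities at theta
    f = np.array([1.0])                          # after zero items: P(S = 0) = 1
    for p_i in p:
        # score s is reached by s after an incorrect or s - 1 after a correct response
        f = np.concatenate([f * (1.0 - p_i), [0.0]]) + np.concatenate([[0.0], f * p_i])
    return f                                     # length K + 1, sums to one

# Example: a five-item test
a = np.array([0.8, 1.0, 1.2, 0.9, 1.1])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(score_distribution(0.0, a, b))
```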

Note that in expression (11.9), the prior \( g(\theta ;\lambda ) \) cancels in the posterior, so \( p(y|s;\theta ,\beta ,\lambda ) \equiv p(y|s;\theta ,\beta ) \).

As an example, consider the 2PLM, given by expression (11.3) with \( c_{i} = 0 \). The logarithm of the product of the prior and the probability of a number-correct score s becomes

$$ \begin{aligned} L_{s} (\theta ) & = \log \sum\limits_{\{ y|s\} } P(y,\theta ;\beta ,\lambda ) = \log g(\theta ;\mu ,\sigma^{2} ) \\ & \quad + \log \sum\limits_{\{ y|s\} } \prod\limits_{i = 1}^{K} P_{i} (\theta )^{d_{i} y_{i} } \left( 1 - P_{i} (\theta ) \right)^{d_{i} (1 - y_{i} )} , \end{aligned} $$
(11.10)

and

$$ \frac{\partial L_{s} (\theta )}{\partial \theta } = \frac{\mu - \theta }{\sigma^{2} } + \sum\limits_{\{ y|s\} } \left[ \sum\limits_{i = 1}^{K} d_{i} a_{i} \left( y_{i} - P_{i} (\theta ) \right) \right] p(y|s;\theta ,\beta ) \, . $$
(11.11)

The estimation equation can be solved with either the Newton-Raphson algorithm or the EM algorithm. Standard errors can be based on the observed information as defined in expression (11.7). One way of estimating \( \theta \) and computing the standard errors is to treat the estimated item parameters as known constants. However, when we want to compare the estimated proficiencies obtained for two tests through their difference, say, \( Se(\hat{\theta }_{SA} - \hat{\theta }_{SB} ) \), we explicitly need to take the precision of the estimates of all item and population parameters into account. How this is accomplished is outlined in the next section.
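As a sketch of how the MAP estimate given a number-correct score can be computed (Python with NumPy; the function names and the small test are ours), the following enumerates \( \{ y|s\} \) directly, which is feasible only for short tests, and applies Newton-Raphson steps to the gradient (11.11). The Hessian used here is a simple approximation that only drives the iterations; the exact gradient defines the solution.

```python
import numpy as np
from itertools import combinations

def map_given_sum_score(s, a, b, mu=0.0, sigma2=1.0, iters=25):
    """MAP estimate of theta given number-correct score s under the 2PLM,
    solving (11.11) = 0; {y|s} is enumerated, so only suitable for short tests."""
    K = len(a)
    patterns = np.array([[1.0 if i in idx else 0.0 for i in range(K)]
                         for idx in combinations(range(K), s)])   # all patterns with sum s
    theta = mu
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        lik = np.prod(np.where(patterns == 1, p, 1.0 - p), axis=1)
        post = lik / lik.sum()                                    # p(y | s; theta, beta)
        grad = (mu - theta) / sigma2 + post @ ((patterns - p) @ a)   # expression (11.11)
        hess = -1.0 / sigma2 - np.sum(a ** 2 * p * (1.0 - p))        # approximate Hessian
        theta -= grad / hess
    return theta

a = np.array([0.8, 1.0, 1.2, 0.9, 1.1])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print([round(map_given_sum_score(s, a, b), 3) for s in range(6)])
```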

4 Equating Error

Suppose \( \theta_{0} \) is a cutoff point on the latent scale and we want to impose this cutoff point on several test versions. Further, we want to estimate the reliability of the created link. Three procedures for the computation of equating errors will be discussed, using some possible data collection designs displayed in Fig. 11.1.

Fig. 11.1 Four designs for test equating

To introduce the first method, consider the design displayed in Fig. 11.1a. In this design, students were administered both test versions, that is, Version A and Version B. The first measure for the strength of the link is based on the standard error of the difference between the average difficulties of the two versions, say, \( Se(\overline{b}_{A} - \overline{b}_{B} ) \), where \( \overline{b}_{A} \) is the estimate of the mean difficulty of Version A and \( \overline{b}_{B} \) the estimate of the mean difficulty of Version B. The strength of the link is mainly determined by the number of students, but also by the number of item parameters making up the two means. Since the estimates are on a latent scale that is only determined up to a linear transformation, we standardize the standard error by the standard deviation of the proficiency distribution. This leads to the definition of the index

$$ {\text{Equating}}\,{\text{Error }} = \frac{{Se(\overline{b}_{A} - \overline{b}_{B} )}}{Sd(\theta )}. $$
(11.12)

The standard error can be computed as the square root of \( Var(\overline{b}_{A} - \overline{b}_{B} ) \), which can be computed by pre- and post-multiplying the covariance matrix by a vector of weights, that is, \( \varvec{w}^{t} Cov(\hat{\delta },\hat{\delta })\varvec{w} \),

$$ {\text{where}}\;\varvec{w}\;{\text{has elements}}\quad w_{j} = \begin{cases} \dfrac{d_{iA}}{\Sigma_{i} d_{iA}} - \dfrac{d_{iB}}{\Sigma_{i} d_{iB}} & \text{if } j \text{ is related to } Cov(\hat{b}_{i} ,\hat{b}_{i} ) \\ 0 & \text{if this is not the case,} \end{cases} $$
(11.13)

where \( d_{iA} \) and \( d_{iB} \) are defined by expression (11.1), for a student administered test A and a student administered test B, respectively.
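A small sketch of the quadratic-form computation \( \varvec{w}^{t} Cov(\hat{\delta },\hat{\delta })\varvec{w} \) (Python with NumPy; the covariance matrix and weights are made up); the same pattern applies to the weight vectors in (11.15) and (11.18) below.

```python
import numpy as np

def equating_error(cov, w, sd_theta=1.0):
    """Standard error of a weighted contrast of parameter estimates,
    standardized by the standard deviation of the proficiency distribution."""
    var = w @ cov @ w                 # w^t Cov(delta_hat, delta_hat) w
    return np.sqrt(var) / sd_theta

# Toy example: three parameters, contrast between the first two
cov = np.array([[0.020, 0.005, 0.001],
                [0.005, 0.030, 0.002],
                [0.001, 0.002, 0.040]])
w = np.array([1.0, -1.0, 0.0])
print(equating_error(cov, w))
```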

Figure 11.1b gives an example of equating two tests via common items (the so-called anchor). The test consisting of the items A1 and A2 is linked to the test consisting of the items B2 and B3, because A2 and B2 consist of the same items. The larger the anchor, the stronger the link. In this design it is usually assumed that the means of the two proficiency distributions are different. This leads to a second definition of an index for equating error, that is:

$$ {\text{Equating}}\,{\text{Error = }}\frac{{Se(\hat{\mu }_{A} - \hat{\mu }_{B} )}}{Sd(\theta )}, $$
(11.14)

where \( Sd(\theta ) \) is a pooled estimate of the standard deviations of the proficiency distributions of the two populations. In Fig. 11.1c, the test consisting of parts A1 and A2 and the test consisting of the parts B3 and B4 have no items in common, but a link is forged by the students administered C2 and C3.

Again, the standard error can be computed as the square root of the associated variance, which can be computed by pre- and post-multiplying the covariance matrix of the parameter estimates by a vector of weights, that is, \( \varvec{w}^{t} Cov(\hat{\delta },\hat{\delta })\varvec{w} \), where \( \varvec{w} \) has elements

$$ w_{j} = \begin{cases} 1 & \text{if } j \text{ is related to } Cov(\hat{\mu }_{A} ,\hat{\mu }_{A} ) \\ - 1 & \text{if } j \text{ is related to } Cov(\hat{\mu }_{B} ,\hat{\mu }_{B} ) \\ 0 & \text{if this is not the case.} \end{cases} $$
(11.15)

A third method to assess the equating error is based on the position of the cutoff point on the latent scale. This approach gives a more precise estimate of the equating error of the cutoff point, but, as will become clear below, it is somewhat more complicated to compute. Suppose \( \theta_{0} \) is the cutoff point on the latent scale. On both tests, we choose observed cutoff scores, say \( S_{A} \) and \( S_{B} \), that are associated with the same (mean) proficiency level \( \theta_{0} \). Then an equating error index can be defined as

$$ {\text{Equating}}\,{\text{Error = }}\frac{{Se(\hat{\theta }_{SA} - \hat{\theta }_{SB} )}}{Sd(\theta )} $$
(11.16)

where \( \hat{\theta }_{SA} \) and \( \hat{\theta }_{SB} \) are the estimates of the positions on the latent scale associated with the two observed cutoff scores.

To define this standard error, we augment the log-likelihood given the observed data with two observations, one for each of the sum scores \( S_{A} \) and \( S_{B} \). So the complete log-likelihood becomes \( L(\delta ,\theta ) = L(\delta ) + L_{s} (\theta ) \), and the information matrix becomes

$$ I(\delta ,\theta ) \approx E_{\theta } \left( \begin{array}{ccc} \nabla (\delta )\nabla (\delta )^{t} & \nabla (\delta )\nabla (\theta_{SA} )^{t} & \nabla (\delta )\nabla (\theta_{SB} )^{t} \\ \nabla (\theta_{SA} )\nabla (\delta )^{t} & \nabla (\theta_{SA} )\nabla (\theta_{SA} )^{t} & 0 \\ \nabla (\theta_{SB} )\nabla (\delta )^{t} & 0 & \nabla (\theta_{SB} )\nabla (\theta_{SB} )^{t} \end{array} \;\middle|\; y \right). $$
(11.17)

As above, the standard error of the difference between \( \hat{\theta }_{SA} \) and \( \hat{\theta }_{SB} \) can be computed as the square root of the associated variance, which can be computed by pre- and post-multiplying the covariance matrix by a vector of weights, that is, \( \varvec{w}^{t} Cov(\hat{\delta },\hat{\delta })\varvec{w} \). In this case, the vector \( \varvec{w} \) has elements

$$ w_{j} = \begin{cases} 1 & \text{if } j \text{ is related to } Cov(\hat{\theta }_{SA} ,\hat{\theta }_{SA} ) \\ - 1 & \text{if } j \text{ is related to } Cov(\hat{\theta }_{SB} ,\hat{\theta }_{SB} ) \\ 0 & \text{if this is not the case.} \end{cases} $$
(11.18)

Examples will be given below.

  • EAP estimates and another approach to the reliability of number-correct scores.

In test theory we distinguish between global reliability and local reliability. Global reliability is related to the precision with which we can distinguish two randomly drawn students from some well-defined population, while local reliability relates to the precision given a specific test score. We discuss these two concepts in the framework of IRT in turn.

One of the ways in which global reliability can be defined is as the ratio of the true variance to the total variance. In the framework of IRT, consider the variance decomposition

$$ var(\theta ) = var[E(\theta |\varvec{y})] + E[var(\theta |\varvec{y})], $$
(11.19)

where \( \varvec{y} \) is an observed response pattern, \( var(\theta ) \) is the population variance of the latent variable, and \( var[E(\theta |\varvec{y})] \) is the variance of the posterior expectations of the person parameters (say, the EAP estimates of \( \theta \)). Further, \( E[var(\theta |\varvec{y})] \) is the posterior variance of \( \theta \) averaged over the response patterns that can be observed, weighted with their probability of occurrence under the model; that is, it is the average error variance of the EAP estimates. Then reliability is given by the ratio

$$ \rho = \frac{{var[E(\theta |\varvec{y})]}}{var(\theta )} = 1 - \frac{{E[var(\theta |\varvec{y})]}}{var(\theta )} $$
(11.20)

(see Bechger et al. 2003). The middle expression in (11.20) is the variance of the estimates of the person parameters relative to the ‘true’ variance, and the right-hand expression in (11.20) is one minus the average variance of the estimates of the student parameters, say, the error variance, relative to the ‘true’ variance.

The generalization to number-correct scores s is straightforward. If the observations are restricted from \( \varvec{y} \) to s, a student’s proficiency can be estimated by the EAP \( E(\theta |s) \), that is, the posterior expectation of \( \theta \) given s, and the precision of the estimate is given by the posterior variance \( var(\theta |s) \). Then global reliability generalizes to

$$ \rho_{s} = \frac{var[E(\theta |s)]}{var(\theta )} = \frac{{\text{var} (\theta ) - E[var(\theta |s)]}}{var(\theta )}. $$
(11.21)

If the 1PLM holds, s is a sufficient statistic for \( \theta \). Therefore, it is easily verified that \( E(\theta |s) \equiv E(\theta |\varvec{y}) \) and the expressions (11.20) and (11.21) are equivalent. In all other cases, computation of the posterior distribution involves a summation over all possible response patterns resulting in a number-correct score s, and, as already noted above, this can be done using the recursive algorithm by Lord and Wingersky (1984).
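The following sketch (Python with NumPy; the grid approximation of the prior, the number of nodes, and the item parameters are illustrative choices of ours) computes \( E(\theta |s) \) and \( var(\theta |s) \) for every sum score by combining the Lord-Wingersky recursion from the earlier sketch with a grid approximation of a normal prior, and from these the global reliability \( \rho_{s} \) of (11.21).

```python
import numpy as np

def score_distribution(theta, a, b):
    """Lord-Wingersky recursion: P(S = s | theta) under the 2PLM (see the earlier sketch)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    f = np.array([1.0])
    for p_i in p:
        f = np.concatenate([f * (1.0 - p_i), [0.0]]) + np.concatenate([[0.0], f * p_i])
    return f

def eap_given_sum_scores(a, b, mu=0.0, sigma=1.0, n_nodes=81):
    """EAP estimates E(theta|s), posterior variances var(theta|s) for s = 0, ..., K,
    and the global reliability rho_s of (11.21), using a grid prior approximation."""
    nodes = np.linspace(mu - 5 * sigma, mu + 5 * sigma, n_nodes)
    prior = np.exp(-0.5 * ((nodes - mu) / sigma) ** 2)
    prior /= prior.sum()
    ps = np.array([score_distribution(t, a, b) for t in nodes])   # P(S = s | theta_q)
    joint = prior[:, None] * ps
    marg = joint.sum(axis=0)                                      # P(S = s)
    post = joint / marg                                           # posterior over nodes, per s
    eap = (post * nodes[:, None]).sum(axis=0)
    var = (post * (nodes[:, None] - eap) ** 2).sum(axis=0)
    rho_s = 1.0 - (marg * var).sum() / sigma ** 2                 # 1 - E[var(theta|s)] / var(theta)
    return eap, var, rho_s

# Illustrative 20-item test: four discriminations crossed with five difficulties
a = np.repeat([0.8, 0.9, 1.1, 1.2], 5)
b = np.tile([-1.0, -0.5, 0.0, 0.5, 1.0], 4)
eap, var, rho_s = eap_given_sum_scores(a, b)
print(np.round(eap, 2), np.round(var, 3), round(rho_s, 3))
```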

If the 1PLM does not hold, there is variance in \( E(\theta |\varvec{y}) \) conditional on s. This leads to the interesting question of how much extra error variance is created by using s as the basis for estimating \( \theta \). That is, we are interested in the contribution of \( Var(E(\theta |\varvec{y})|s) \) to the total error variance, that is, to the posterior variance \( Var(\theta |s) \). This contribution can be worked out by using an identity analogous to Expression (11.19), that is,

$$ Var(\theta |s) = E(Var(\theta |\varvec{y})|s) + Var(E(\theta |\varvec{y})|s). $$
(11.22)

Note that \( E(Var(\theta |\varvec{y})|s) \) is the squared measurement error given \( \varvec{y} \) averaged over the distribution of \( \varvec{y} \) given s, and \( Var(E(\theta |\varvec{y})|s) \) is the variance of the EAP estimates, also over the distribution of \( \varvec{y} \) given s. In the next section, examples of local reliability estimates will be given.
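A sketch of the decomposition in (11.22) for a short illustrative test (Python with NumPy; the enumeration of \( \{ y|s\} \) is feasible only for small numbers of items, and all parameter values are made up):

```python
import numpy as np
from itertools import combinations

a = np.array([0.8, 1.0, 1.2, 0.9, 1.1])                 # illustrative five-item test
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
nodes = np.linspace(-5.0, 5.0, 81)                      # grid for a standard normal prior
prior = np.exp(-0.5 * nodes ** 2)
prior /= prior.sum()
pq = 1.0 / (1.0 + np.exp(-a * (nodes[:, None] - b)))    # item probabilities per grid node

def moments(weights):
    """Mean and variance of theta under posterior weights over the grid."""
    m = np.sum(weights * nodes)
    return m, np.sum(weights * (nodes - m) ** 2)

K = len(a)
for s in range(K + 1):
    pats = np.array([[1.0 if i in idx else 0.0 for i in range(K)]
                     for idx in combinations(range(K), s)])
    py, eap_y, var_y = [], [], []
    for y in pats:                                      # loop over all patterns with sum s
        lik = np.prod(np.where(y == 1, pq, 1.0 - pq), axis=1)   # P(y | theta_q)
        joint = prior * lik
        py.append(joint.sum())                          # marginal probability of the pattern
        m, v = moments(joint / joint.sum())             # E(theta|y) and Var(theta|y)
        eap_y.append(m)
        var_y.append(v)
    py = np.array(py) / np.sum(py)                      # p(y | s)
    e_var = np.sum(py * np.array(var_y))                # E(Var(theta|y) | s)
    eap_s = np.sum(py * np.array(eap_y))                # E(theta | s)
    var_e = np.sum(py * (np.array(eap_y) - eap_s) ** 2) # Var(E(theta|y) | s)
    print(s, round(e_var + var_e, 3), round(e_var, 3), round(var_e, 3))
```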

  • Examples of Reliability Estimates

In this section, two simulated examples are presented to show the kind of results that the local reliability indices presented above produce.

The first example is created by simulating 1000 response patterns on a 20-item test. The data were created with the 2PLM, with the \( \theta \)-values drawn from a standard normal distribution. The 20 item parameters were formed by crossing a set of four discrimination parameters a = {0.8, 0.9, 1.10, 1.20} with five difficulty parameters b = {−1.0, −0.5, 0.0, 0.5, 1.0}. MML estimates (i.e., Bayes modal estimates) were computed with a standard normal distribution for the \( \theta \)-values. The results are displayed in Table 11.2.

Table 11.2 MAP and EAP estimates and their local reliability

Note that the MAP estimates and the EAP estimates are very similar, as are their standard deviations displayed in the columns labeled \( Sd_{MAP} (\theta |s) \) and \( Sd_{EAP} (\theta |s) \). The last three columns give the variance decomposition as defined in Expression (11.22). It can be seen that \( Var(E(\theta |\varvec{y})|s) \) is relatively small compared to \( E(Var(\theta |\varvec{y})|s) \). So the potential bias introduced in a student’s proficiency estimate by using number-correct scores is much smaller than the average error variance of the estimate. A final observation that can be made from this simulation study is that the global reliability dropped from 0.788 to 0.786 when switching from scoring with the complete response patterns to scoring with the number-correct scores. So the loss in global reliability was negligible.

It is expected that if the variability of the discrimination parameters is increased, \( Var(E(\theta |\varvec{y})|s) \) increases. The reason is that if the discrimination parameters are considered known, the weighted sum score \( \Sigma_{i} d_{i} a_{i} y_{i} \) is a sufficient statistic for \( \theta \). If all discrimination parameters are equal to 1.0, the 2PLM becomes the 1PLM, and then the number-correct score becomes a sufficient statistic. So the more variance there is in the discrimination parameters, the greater the violation of the 1PLM and the less appropriate the number-correct scoring rule becomes.

To investigate this effect, the discrimination parameters of the simulation were changed to a = {0.40, 0.60, 1.40, 1.60}. The results are displayed in Table 11.3. It can be seen that the standard deviations in the columns labeled \( Sd_{MAP} (\theta |s) \) and \( Sd_{EAP} (\theta |s) \) increased somewhat, but the effect was not very large. Further, the values in the column labeled \( Var(E(\theta |\varvec{y})|s) \) clearly increased, while this was less the case in the column labeled \( E(Var(\theta |\varvec{y})|s) \). For instance, if we consider a number-correct score of 10, we observe that the initial values 0.01 and 0.19 changed to 0.04 and 0.17. The net effect was a change in \( Var(\theta |s) \) from 0.20 to 0.21. So the increase in the variance of the \( \theta \)-estimates (that is, of the expectations \( E(\theta |\varvec{y}) \)) was largely counterbalanced by a decrease in the average error variance \( E(Var(\theta |\varvec{y})|s) \).

Table 11.3 MAP and EAP estimates and their local reliability when the variance of the discrimination parameter is increased

5 Simulation Study of Equating Errors

In this section, two sets of simulation studies will be presented. The first study was based on the design displayed in Panel b of Fig. 11.1, which displays a design with a link via common items. The simulation was carried out to study the effect of the size of the anchor. The second set of simulations was based on the design of Panel c of Fig. 11.1, which displays a design with common students. These simulations were carried out to study the effect of the number of students in the anchor.

The studies were carried out using the 2PLM. To create realistic data, the item parameters were sampled from the pool of item parameters used in the final tests in primary education in the Netherlands. Also, the means of the proficiency distributions and the cutoff scores were chosen to create a realistic representation of the targeted application, which entailed equating several versions and cycles of the tests.

For the first set of simulations, two tests were simulated with 2000 students each. The proficiency parameters for the first sample of students were drawn from a standard normal distribution, while the proficiency parameters for the second sample of students were drawn from either a standard normal distribution or a normal distribution with mean 0.5 and variance 1.0. Cutoff points were varied as \( \theta_{0} \) = −0.5 or \( \theta_{0} \) = 0.0. The results are displayed in Table 11.4. The first column gives the length of the two tests; the tests were of equal size. A length of 50 items is considered realistic for a high-stakes test; tests of 20 and 10 items were simulated to investigate the effects of decreasing the test length.

Table 11.4 Simulation of equating via common items

The second column gives the size of the anchor. The total number of items in the design, displayed in the third column, follows from the length of the two tests and the size of the anchor. One hundred replications were made for each of the 24 conditions. For every replication, the item parameters were redrawn from the complete pool of all item parameters of all (five) test providers. The complete pool consisted of approximately 2000 items. The last three columns give the three equating errors defined above. Note that \( Sd(\theta ) \) was always equal to 1.0, so the equating errors were equal to the analogous standard errors.

The results are generally as expected. Note first that there was always a substantial main effect of test length on all three indices. For a test length of 50 items, decreasing the size of the anchor increased the equating errors for the average item difficulties, \( Se(\overline{b}_{A} - \overline{b}_{B} ) \), and the proficiency means, \( Se(\hat{\mu }_{A} - \hat{\mu }_{B} ) \). The effect on \( Se(\hat{\theta }_{SA} - \hat{\theta }_{SB} ) \) was small. This pattern was sustained for a test length of 20 items, but in that case \( Se(\hat{\theta }_{SA} - \hat{\theta }_{SB} ) \) also increased slightly when the anchor was decreased from 10 to 5 items. Finally, there were no marked effects of varying the position of the cutoff points or the differences between the two proficiency distributions.

The second set of simulations was based on the design of panel c of Fig. 11.1, the design with common students. The general setup of the study was analogous to the first one, with some exceptions. All samples of students were drawn from standard normal distributions and the cutoff point was always equal to \( \theta_{0} \) = 0.0. There were three tests in the design: two tests to be equated and a test given to the linking group. As can be seen in the first column of Table 11.5, the tests to be equated had either 40 or 20 items. In the second column, it can be seen that the linking groups were administered tests of 20, 10, or 4 items. These linking tests always comprised an equal number of items from the two tests to be equated. The third column shows how the size of the sample of the linking group was varied. The two tests to be equated were always administered to 2000 students. In general, the results are much worse than those displayed in Table 11.4. In fact, only the combination of two tests of 40 items with a linking group of 1600 students administered a test of 20 items comes close to the results displayed in Table 11.4. Note that linking tests of 40 items via groups administered only 4 items breaks down completely; in particular, the results for \( Se(\hat{\theta }_{SA} - \hat{\theta }_{SB} ) \) with 100, 400, or 800 students in the linking group become extremely poor.

Table 11.5 Simulation of equating via common students

6 Conclusion

Transparency of scoring is one of the major requirements for the acceptance of an assessment by stakeholders such as students, teachers and parents. This is probably the reason why number-correct scores are still prominent in education. The logic of such scoring is evident: the higher the number of correct responses, the higher the student’s proficiency. The alternative of using the proficiency estimates emanating from an IRT model as test scores is more complicated to explain. In some settings, such as computerized adaptive testing, it can be made acceptable that students who respond to more difficult items receive a higher proficiency estimate than students with an analogous score on easier items. However, explaining the dependence of proficiency estimates on item-discrimination parameters is more cumbersome.

A potential solution to the problem is using the 1PLM, where all items are assumed to have the same discrimination index, and the proficiency estimate only depends on the number of correct responses to the items. However, the 1PLM seldom fits educational test data, and imposing it in order to retain all the advantages of IRT leads to a notable loss of precision. Therefore, the 2PLM and 3PLM have become the standard models for analyzing educational test data. In this chapter, a method to combine number-correct scoring with the 2PLM and 3PLM was suggested, and methods for relating standards on the number-correct scale to standards on the latent IRT scale were outlined. Indices for both the global and local reliability of number-correct scores were introduced. It was shown that the error variance for number-correct scoring can be decomposed into two components. The first component is the variance of the proficiency estimates given the response patterns conditional on number-correct scores. This component can be viewed as a measure of the bias introduced by using number-correct scores as estimates of proficiency rather than estimating the proficiency under the 2PLM or 3PLM based on a student’s complete response pattern. The second component can be interpreted as the average error variance when using the number-correct score. The presented simulation studies indicate that, relative to the second component, the first component is small.

When equating two tests, say an older version and a newer version, it is not only the standard error of the proficiency estimates on the two tests that is important, but also the standard error of the differences between proficiency estimates on the two tests. To obtain a realistic estimate of the standard errors of these differences, the whole covariance matrix of the estimates of all item and population parameters in the model must be taken into account. The size of these standard errors depends on the strength of the link between the two tests, that is, on the number of items and students in the design and on the size of the overlap in items and students, respectively. The simulation studies presented in this chapter give an indication of the standard errors of these differences for various possible designs.

The procedure for number-correct scoring was presented in the framework of unidimensional IRT models for dichotomously scored items. It can be generalized in various directions. First of all, a sum score can also be defined for a test with polytomously scored items by adding the scores on the individual items in the test. These sum scores can then be related to a unidimensional IRT model for polytomously scored items, such as the generalized partial credit model (Muraki 1992), the graded response model (Samejima 1969) or the sequential model (Tutz 1990), in a manner that is analogous to the procedure presented above. Also, multidimensional versions of these models (Reckase 1985) present no fundamental problems: the proficiency distributions and response probabilities introduced above just become multivariate distributions in multivariate \( \theta \) parameters. For generalized definitions of reliability, refer to van Lier et al. (2018).

A final remark concerns the statistical framework of this chapter, which was the related Bayes modal and marginal maximum likelihood framework. In the preliminaries section of this chapter, it was already mentioned that this framework has an alternative in fully Bayesian estimation supported by Markov chain Monte Carlo computational methods (Albert 1992; Johnson and Albert 1999). Besides using dedicated samplers, the IRT models discussed here can also be estimated with general-purpose samplers such as BUGS (Lunn et al. 2009) and JAGS (Plummer 2003). However, the details of the generalizations to other models and to another computational framework remain points for further study.