## Abstract

This study compares two algorithms, as implemented in two different computer programs, that have appeared in the literature for estimating item parameters of Samejima’s continuous response model (CRM) in a simulation environment. In addition to the simulation study, a real-data illustration is provided, in which CRM is used as a potential psychometric tool for analyzing measurement outcomes in the context of curriculum-based measurement (CBM) in the field of education. The results indicate that a simplified expectation-maximization (EM) algorithm is as effective and efficient as the traditional EM algorithm for estimating the CRM item parameters. The results also show promise for using this psychometric model to analyze CBM outcomes, although more research is needed before CRM can be recommended as standard practice in the CBM context.

## Introduction

A latent trait model for continuous item scores has been suggested as a limiting form of the graded response model (Samejima, 1973); however, the continuous response model (CRM) has not received much attention in the fields of education and psychology, although there have been a few applications (Bejar, 1977; Ferrando, 2002; Wang & Zeng, 1998). This lack of popularity may be attributable to the preferences of applied researchers for using a simpler linear factor model in modeling continuous measurement outcomes, instead of using a more complex nonlinear model, and to the availability and accessibility of software for estimating linear factor model parameters using a limited-information approach.

Two approaches have been used in practical applications for estimating the CRM item parameters: limited-information and full-information approaches. In the limited-information framework (also known as heuristic estimation), the observed scores are first rescaled between 0 and 1, and the rescaled scores are transformed to logits. Then a linear factor model is fitted to the covariance/correlation matrix obtained from the transformed data (Bejar, 1977; Ferrando, 2002). This approach can be implemented using any well-known popular software (e.g., Mplus, LISREL, SAS, SPSS). Once the linear factor model parameters are estimated, researchers can put them into the IRT metric, if desired, using the linear transformations presented by Ferrando (2002, Equation 5).
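As a rough illustration of the rescaling step described above, the sketch below rescales observed scores to the (0, 1) interval and applies the logit transform. This is a minimal sketch: the function name and the boundary offset are assumptions for this example, not part of any cited procedure.

```python
import math

def rescale_and_logit(scores, k):
    """Rescale observed scores on a 0..k scale to (0, 1) and take the logit.

    A small offset (eps, a hypothetical choice) keeps scores of 0 or k away
    from the boundaries, where the logit is undefined.
    """
    eps = 0.5
    out = []
    for x in scores:
        x = min(max(x, eps), k - eps)   # clamp away from 0 and k
        p = x / k                        # rescale to (0, 1)
        out.append(math.log(p / (1 - p)))  # logit transform
    return out

logits = rescale_and_logit([10, 25, 40], 50)
```

A linear factor model would then be fitted to the covariance matrix of these transformed scores.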

Within the full-information framework, two approaches have been presented in the literature for estimating the CRM item parameters (Shojima, 2005; Wang & Zeng, 1998). Both use the marginal maximum likelihood (MML) method and the expectation-maximization (EM) algorithm (Dempster, Laird, & Rubin, 1977), with slight variations. In the first approach, Wang and Zeng obtained an expected log-likelihood function in the E-step by approximating the integration over the posterior ability distribution with Gaussian quadrature points or equally spaced quadrature points, as is standard procedure in most other psychometric software (e.g., BILOG, MULTILOG). In the M-step, they iteratively solved for the item parameters that maximized the log-likelihood function, using a numerical approach, the Newton–Raphson method. The EM2 program, developed in C and released as an executable file, implements this algorithm and is available for free from its authors.

In the other approach, Shojima (2005) developed a simplified version of the EM algorithm for estimating the CRM item parameters. In this simplified algorithm, Shojima (2005) showed that the expected log-likelihood function can be obtained explicitly in the E-step without any numerical approximation of the integration over the posterior ability distribution. In addition, Shojima (2005) derived closed-form formulas for estimating the item parameters by assuming a uniform prior distribution for each item parameter. Therefore, the item parameters that maximize the log-likelihood function can be obtained in a noniterative fashion, without using the Newton–Raphson method, in the M-step. An R package, EstCRM (Zopluoglu, 2012), that implements Shojima’s (2005) simplified algorithm for estimating the CRM item parameters is available for free.

A practical concern is how well these different approaches recover the CRM item parameters. So far, the limited-information approach has been compared with the full-information approach, as implemented in the EM2 program, in a simulation setting (Ferrando, 2002), and the performances of two full-information approaches have been examined independently in separate experimental settings (Shojima, 2005; Wang & Zeng, 1998).

## Research purpose

There are two main purposes of this study. The first purpose is to compare two different full-information approaches proposed for estimating Samejima’s CRM item parameters in the same setting, using simulated data, and to examine whether the simplified EM algorithm (Shojima, 2005), as implemented in the EstCRM package, outperforms a more traditional EM algorithm (Wang & Zeng, 1998), as implemented in the EM2 program. The second purpose of the study is to compare two full-information approaches, using real data, and to provide an application of CRM in the field of education in the context of curriculum-based measurement (Deno, 1985; Deno & Mirkin, 1977).

## Theoretical background

A response model for continuous measurement outcomes was originally proposed by Samejima (1973) as a limiting case of the graded response model. Wang and Zeng (1998) later proposed a different parameterization of CRM. The aim of their reparameterization was to obtain an item difficulty parameter on the *θ* scale and an item discrimination parameter analogous to its counterpart in discrete IRT models, making the model more interpretable for practitioners. In this model, the probability of the *i*th examinee obtaining a score of *x* or higher on the *j*th item is equal to

\( P(X_{ij} \ge x \mid \theta_i) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{v} e^{-t^2/2} \, dt, \qquad v = a_j \left( \theta_i - b_j - \frac{1}{\alpha_j} \ln \frac{x}{k_j - x} \right) \)

where \( x_{ij} \) is the observed score between 0 and \( k_j \) for examinee *i* on item *j*; \( \theta_i \) is the ability level of examinee *i*; \( a_j \), \( b_j \), and \( \alpha_j \) are the discrimination, difficulty, and scaling parameters, respectively, for item *j*; and \( k_j \) is the maximum possible score on item *j*. In this model, the *a* and *b* parameters are interpreted similarly as in binary and polytomous IRT models. The model includes an additional scaling parameter *α* with no practical interpretation.

The conditional probability distribution function is also derived as

\( f(z_{ij} \mid \theta_i) = \frac{a_j}{\alpha_j \sqrt{2\pi}} \exp \left[ -\frac{a_j^2}{2\alpha_j^2} \left( z_{ij} - \alpha_j(\theta_i - b_j) \right)^2 \right] \)

where \( z = \ln \frac{x}{k - x} \) (Wang & Zeng, 1998). The observed scores (*x*) are transformed to *z* scores as defined above for simplicity in the estimation process.
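The model and the transformed-score density described above can be sketched in a few lines of Python. This is an illustrative translation of the formulas, not code from either program discussed below, and the function names are invented for this example.

```python
import math

def crm_prob_at_least(x, theta, a, b, alpha, k):
    """P(X >= x | theta) under Wang and Zeng's parameterization of CRM."""
    z = math.log(x / (k - x))           # logit transform of the observed score
    v = a * (theta - b - z / alpha)     # standardized deviate
    return 0.5 * (1 + math.erf(v / math.sqrt(2)))  # standard normal CDF

def crm_density_z(z, theta, a, b, alpha):
    """Conditional density of z given theta: N(alpha*(theta - b), alpha^2/a^2)."""
    mean = alpha * (theta - b)
    sd = alpha / a
    return math.exp(-0.5 * ((z - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
```

For an average examinee (*θ* = 0) on an item with *b* = 0, the probability of scoring at least half of the maximum is exactly .5, and the probability decreases as the threshold score *x* increases.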

## Estimating item parameters

As was briefly discussed above, there are two approaches for estimating the CRM item parameters: limited-information and full-information approaches. A detailed technical discussion of estimating the CRM item parameters in the limited-information framework was given by Ferrando (2002). Since the focus of the present article is to compare the two full-information approaches, the limited-information approach will not be covered here.

As in other well-known psychometric software, the MML-EM approach is used to estimate the CRM item parameters within the full-information framework. Two different implementations of the MML-EM approach have appeared in the literature (Shojima, 2005; Wang & Zeng, 1998). Wang and Zeng’s implementation is more traditional, while Shojima’s is a simplified procedure with additional assumptions on the item parameter distributions. In both implementations, the likelihood of the observed response vector for person *i* is defined as

\( L(\mathbf{z}_i) = \int \prod_{j=1}^{n} f(z_{ij} \mid \theta) \, g(\theta) \, d\theta \)

on the basis of the local independence assumption among items, where \( g(\theta) \) is the ability density. Then the log-likelihood function to be maximized in the E-step is derived as

\( \ell = \sum_{i=1}^{N} \int \ln \left[ \prod_{j=1}^{n} f(z_{ij} \mid \theta) \, P(a_j, b_j, \alpha_j) \right] g(\theta \mid \mathbf{z}_i) \, d\theta \)

where \( P(a_j, b_j, \alpha_j) \) is a prior distribution on the item parameters and \( g(\theta \mid \mathbf{z}_i) \) is the posterior ability distribution.

Wang and Zeng (1998) approximated the integration over the ability distribution using Gaussian quadrature points or equally spaced quadrature points to obtain the log-likelihood in the E-step. In the following step, they obtained the item parameter estimates by simultaneously solving the first derivatives of the log-likelihood function with respect to item parameters, using a Newton–Raphson method in the M-step.

In a slightly different implementation, Shojima (2005) showed that the log-likelihood function in the E-step can be computed explicitly, without approximating the integration. Shojima (2005) derived the log-likelihood function in the E-step as equal to

\( \ell^{(k)} = \sum_{i=1}^{N} \sum_{j=1}^{n} \left[ \ln \frac{a_j}{\alpha_j \sqrt{2\pi}} - \frac{a_j^2}{2\alpha_j^2} \left\{ \left( z_{ij} - \alpha_j \left( \mu_i^{(k)} - b_j \right) \right)^2 + \alpha_j^2 \left( \sigma^{(k)} \right)^2 \right\} \right] + \sum_{j=1}^{n} \ln P(a_j, b_j, \alpha_j) \)

where \( \mu_i^{(k)} \) and \( \sigma^{(k)} \) are the estimated mean and standard deviation of the posterior ability distribution in the *k*th EM cycle. The mean and variance of the posterior ability distribution are updated in each EM cycle. In addition, Shojima (2005) showed that the item parameters can be solved for explicitly in the M-step, without using the Newton–Raphson method, if uniform prior distributions are assumed on the item parameters (interested readers are encouraged to review the original sources for additional equations and more technical information).

Two programs are available to practitioners for estimating the CRM item parameters within the full-information framework. The first, the EM2 program (Wang & Zeng, 1998), was developed in C and is available as an executable file from its authors. EM2 uses the following starting values for item *j* in the first EM cycle:

\( a_j^{(0)} = \frac{1}{\sqrt{\operatorname{var}(z_j) - 1}}, \qquad b_j^{(0)} = -\operatorname{mean}(z_j), \qquad \alpha_j^{(0)} = 1 \)

The user can specify the maximum number of EM cycles, the criterion to stop the EM cycles, the maximum number of Newton–Raphson iterations in the M-step, and the criterion to stop those iterations. The syntax file prepared for the program is very similar to the ones prepared for other well-known psychometric software (e.g., BILOG, MULTILOG).

The second option for practitioners is the EstCRM package (Zopluoglu, 2012), developed in the R language (R Development Core Team, 2011). In contrast to the EM2 program, the EstCRM package uses the simplified EM algorithm (Shojima, 2005) to estimate the CRM item parameters. The package uses 1, \( -\operatorname{mean}(z_j) \), and 1 as the starting values for the *a*, *b*, and *α* parameters, respectively, for item *j* in the first EM cycle. The reason for using a different starting value for parameter *a* is that the variance of the transformed scores for item *j* may not be larger than 1, so \( \sqrt{\operatorname{var}(z_j) - 1} \) is not always a real number. The R code for analyzing sample data sets is available in the package manual.
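The starting-value logic described above can be illustrated as follows. The helper names are hypothetical, and the second function simply restates why a variance-based start for *a*, involving \( \sqrt{\operatorname{var}(z_j) - 1} \), can fail to be real.

```python
from statistics import mean, variance

def estcrm_starting_values(z_scores):
    """Starting values as described for EstCRM: a = 1, b = -mean(z), alpha = 1."""
    return 1.0, -mean(z_scores), 1.0

def variance_based_a_start_is_real(z_scores):
    """sqrt(var(z) - 1) is real only when the sample variance of the
    transformed scores exceeds 1; otherwise a fixed start of 1 is safer."""
    return variance(z_scores) > 1.0
```

For example, transformed scores with a sample variance below 1 would make a variance-based start undefined, while the fixed starting values above are always available.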

## Recovery of item parameters for CRM

Wang and Zeng (1998) evaluated their algorithm in a simulation study with 12 conditions (3 [sample size] × 2 [number of items] × 2 [type of ability distribution]) and reported item recovery statistics for the EM2 program. The simulation results indicated that EM2 recovered the item parameters well, especially when the ability distribution was normal and the sample size was larger than 500. The root mean square error (*RMSE*) values were 0.06, 0.04, and 0.05 for parameters *a*, *b*, and *α*, respectively, when the sample size was 2,000, the number of items was 6, and the ability distribution was normal. Similarly, the *RMSE* values were 0.04, 0.05, and 0.03 for parameters *a*, *b*, and *α*, respectively, when the sample size was 2,000, the number of items was 12, and the ability distribution was normal. When the ability distribution was skewed, the *RMSE* values were slightly higher, especially for parameter *b*. However, this study was limited because only one fixed set of parameters was used across all replications.

In another simulation study, Shojima (2005) evaluated the item recovery of his simplified algorithm in nine conditions (3 [sample size] × 3 [number of items]). Parameters *a* and *α* were generated from a lognormal distribution with a mean of zero and a standard deviation of .09, while parameter *b* was drawn from a standard normal distribution, across all conditions and replications. The ability levels were also drawn from a standard normal distribution across all conditions. The results were almost identical to those in Wang and Zeng’s (1998) study for similar conditions. Shojima (2005) reported that the *RMSE* values were 0.04, 0.05, and 0.03 for parameters *a*, *b*, and *α*, respectively, when the sample size was 2,000 and the number of items was 10. Both studies reported that item recovery improved as the sample size and number of items increased.

The two different approaches differed in terms of estimation bias. Shojima’s (2005) simplified algorithm seemed to produce less biased parameter estimates. The estimation bias for the simplified algorithm was not larger than .009 in any experimental condition and was negligible. On the other hand, Wang and Zeng (1998) reported that the estimation bias for the EM2 program ranged from .024 to .048 for parameter *a*, from .001 to .010 for parameter *b*, and from .01 to .048 for parameter *α* when the distribution of ability was standard normal.

Although the two previous studies provide some information regarding the utility of two full-information approaches in estimating the CRM item parameters, they are limited in the conditions used to simulate data when assessing item recovery. In addition, both studies reported the item recovery statistics under independent experimental settings. The goal of the present study is to provide an empirical comparison between two MML-EM algorithms in estimating the CRM item parameters under the same experimental setting, using both simulated and real data. The study provides some insight regarding the utility of two computer programs available for practitioners who may want to use Samejima’s (1973) IRT model for their continuous measurement outcomes.

## Method

### Simulated data

Five independent variables were manipulated in the study: number of items (10 and 20), sample size (500 and 1,000), the distribution of parameter *a* (Lognormal(0, .3), Lognormal(0, .1), Uniform(.4, 2)), the distribution of parameter *b* (Normal(0, 1), Normal(0, .5), Uniform(−3, 3)), and the distribution of parameter *α* (Lognormal(0, .3), Lognormal(0, .1), Uniform(.4, 2)). The independent variables were fully crossed, for a total of 108 conditions. All simulation conditions were based on a common ability distribution: the ability levels of the examinees were drawn from a standard normal distribution across all conditions.

In CRM, the conditional probability distribution function of the transformed score for person *i* on item *j* (\( z_{ij} \)) follows a normal distribution with a mean of \( \alpha_j(\theta_i - b_j) \) and a variance of \( \alpha_j^2 / a_j^2 \) (Shojima, 2005; Wang & Zeng, 1998). Given the generated ability level and item parameters, each \( z_{ij} \) was drawn from a normal distribution with a mean of \( \alpha_j(\theta_i - b_j) \) and a variance of \( \alpha_j^2 / a_j^2 \). Each item was assumed to have a score scale between 0 and 50, so the generated \( z_{ij} \) was transformed to the observed score (\( x_{ij} \)), using the equation

\( x_{ij} = \frac{k_j}{1 + e^{-z_{ij}}} \)

which is the inverse of \( z = \ln \frac{x}{k - x} \).

One hundred data sets that follow CRM were simulated within each experimental condition, given the generated ability levels and item parameters.
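The generation scheme above can be sketched as follows, assuming a standard normal ability distribution and a 0–50 score scale. This is an illustrative sketch, not the code used in the study, and the function name is invented for this example.

```python
import math
import random

def simulate_crm_data(n_persons, a, b, alpha, k=50, seed=1):
    """Simulate CRM responses: z_ij ~ N(alpha_j*(theta_i - b_j), alpha_j^2/a_j^2),
    then map z back to the 0..k score scale via x = k / (1 + e^{-z})."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_persons):
        theta = rng.gauss(0.0, 1.0)  # standard normal ability
        row = []
        for aj, bj, alj in zip(a, b, alpha):
            z = rng.gauss(alj * (theta - bj), alj / aj)  # sd = alpha_j / a_j
            row.append(k / (1.0 + math.exp(-z)))         # inverse logit to 0..k
        data.append(row)
    return data

data = simulate_crm_data(500, a=[1.2, 0.8], b=[-0.5, 0.5], alpha=[1.0, 1.1])
```

Every generated score falls strictly inside the 0–50 interval, as the model implies.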

After simulating the data sets, the item parameters were calibrated with two computer programs, EM2 and EstCRM, implementing the two approaches described above. The maximum number of EM cycles was set to 500, and the EM cycles were terminated when the difference in log-likelihood values between two successive EM cycles was smaller than .01. The same settings were used in both programs. For the EM2 program, the maximum number of Newton–Raphson iterations in the M-step was set to 50, and the criterion to stop the iterations was set to .0001. Since the EstCRM package computes the item parameter estimates directly in the M-step, no comparable setting was needed.

Two dependent variables, *RMSE* and mean error (*ME*), were computed to assess accuracy and bias. The *RMSE* is the square root of the average squared deviation between a parameter and its estimate. For each simulated data set within an experimental condition, the *RMSE* values for the corresponding parameters *a*, *b*, and *α* were computed across items:

\( RMSE_r = \sqrt{ \frac{1}{n} \sum_{k=1}^{n} \left( \lambda_k - \hat{\lambda}_k \right)^2 } \)

where \( RMSE_r \) is the *RMSE* value in the *r*th replication, *n* is the number of items, and \( \lambda_k \) and \( \hat{\lambda}_k \) are the corresponding parameter and parameter estimate for item *k*, respectively. The *RMSE* values were then averaged across all replications within an experimental condition. Similarly, the *ME*s for the simulated data within an experimental condition were computed for the corresponding parameters *a*, *b*, and *α* as follows:

\( ME_r = \frac{1}{n} \sum_{k=1}^{n} \left( \hat{\lambda}_k - \lambda_k \right) \)

The *ME* values were then averaged across all replications for an experimental condition.
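The two dependent variables can be computed as in the following sketch. The estimate-minus-parameter sign convention for *ME* (positive values indicating overestimation) is an assumption made for illustration.

```python
import math

def rmse(true, est):
    """Root mean square error across items for one replication."""
    n = len(true)
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(true, est)) / n)

def mean_error(true, est):
    """Mean signed error across items; positive values indicate that the
    parameter is overestimated on average (sign convention assumed here)."""
    n = len(true)
    return sum(e - t for t, e in zip(true, est)) / n
```

In the study design, these statistics would be computed per replication for each of *a*, *b*, and *α*, then averaged over the 100 replications within a condition.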

### Real data

Data, provided by a large, urban, Midwestern school district, were collected from 2,810 first-grade students at 49 different schools. Three Early Mathematics Curriculum-Based Measurement (EM-CBM) probes were administered by the school district at the beginning of Fall 2008: number identification (NI), quantity discrimination (QD), and quantity array (QA), which are used to measure early numeracy skills at the kindergarten and first-grade levels (Clarke & Shinn, 2004; Lembke & Foegen, 2009). Each probe was individually administered to students for 1 min. The NI probe consisted of numbers from 1 to 90, and students were required to name the numbers orally; the count of correctly named numbers was recorded as the student’s observed score. The QD probe required each student to compare two numbers and identify the larger one; the probe had 54 comparisons, and the number of correct identifications was recorded as the observed score. The QA probe required identifying the number of dots, ranging from 1 to 10, in a box; the probe had 38 boxes, and the number of correctly identified boxes was the observed score for the probe.

In addition to the EM-CBM probes, three reading passages were presented to the students to measure oral reading fluency (Hintze, Christ, & Methe, 2006). The students were asked to start reading the passage and to stop reading after 1 min. The number of words read correctly per minute was recorded as the observed score for each student.

Samejima’s (1973) continuous IRT model was fitted separately to the EM-CBM and curriculum-based measurement reading (CBM-R) passage data. The item parameters were estimated using both programs, and the standard errors of the item parameter estimates were calculated using a nonparametric bootstrap sampling approach with 500 bootstrap samples.

## Results

### Simulation study

The results of the simulation study for all 108 conditions are presented in Tables 1, 2, 3, and 4. Tables 1 and 2 present the results regarding estimation bias. Sample size and number of items did not seem to affect the estimation bias, regardless of which program was used to estimate the item parameters. However, the type of parameter distribution had some effect. In almost all conditions, there was no bias in the estimation of the *b* parameters. Across all 108 conditions, only 3 were flagged as problematic for the EM2 program in terms of bias in estimating the *b* parameters, and the EstCRM package showed no bias in estimating the *b* parameters in any condition. Both programs showed a slightly positive bias in estimating parameter *a* in some conditions; in most of these conditions, one or two of the parameters *a*, *b*, and *α* had a uniform distribution. In the conditions where the parameters had normal or lognormal distributions, the bias for parameter *a* was negligible. In terms of parameter *α*, the EM2 program showed a positive bias in a few conditions regardless of sample size when the number of items was 10; however, there were only two cases in which this bias occurred when the number of items was 20. The conditions in which the bias in estimating parameter *α* occurred were mostly those where parameter *α* had a uniform distribution. In contrast to the EM2 program, the EstCRM package showed a remarkable negative bias in estimating parameter *α* in most conditions. The only conditions in which this bias did not occur were those where all the parameters had either normal or lognormal distributions.

The results regarding the precision of the item parameter estimates are presented in Tables 3 and 4. Both programs estimated parameter *a* with almost identical precision in most conditions. In a few cases, EstCRM seemed to give more precise estimates for parameter *a*, but the difference was negligible. In terms of parameter *b*, the EstCRM package produced more precise estimates than did the EM2 program: overall, the *RMSE* value for parameter *b* was about .23 for the EM2 program, compared with about .13 for the EstCRM package. A closer look at the tables revealed that the EstCRM package outperformed the EM2 program especially in conditions where the distribution of parameter *b* was uniform. In contrast, the EM2 program outperformed the EstCRM package in estimating parameter *α*; the overall *RMSE* values for parameter *α* were .11 and .17 for the EM2 program and the EstCRM package, respectively.

### Real data

The descriptive statistics for the EM-CBM probes (number identification, quantity discrimination, and quantity array) and the CBM reading passages, as well as the correlations among the measures, are presented in Table 5. The data were assumed to be unidimensional in order to fit Samejima’s (1973) continuous IRT model, since unidimensionality is a standard assumption in other IRT models. Eigenvalues were extracted from the correlations among the probes to examine the plausibility of this assumption. Eigenvalues of 2.23, 0.47, and 0.30 and of 2.94, 0.04, and 0.02 were obtained, respectively, from the correlations among the three EM-CBM probes and among the three CBM reading passages. Using a parallel analysis approach (Horn, 1965), the eigenvalues of random multivariate data with the same sample size and number of items were also computed. The 99th percentiles of the first, second, and third eigenvalues from the random data were 1.07, 1.02, and 0.99, respectively. Since the first sample eigenvalues were much higher than the first eigenvalue of the random data and the second sample eigenvalues were much lower than the second eigenvalue of the random data, it was concluded that the unidimensionality assumption is plausible for the three EM-CBM probes, as well as for the three CBM reading passages.

Samejima’s (1973) continuous IRT model was fit to the EM-CBM and CBM reading passage data separately, and the parameters, including their standard errors, were estimated for each probe, as presented in Table 6. Since these measures are not regular items, the parameters are interpreted as *probe parameters*; accordingly, the difficulty and discrimination parameters are interpreted as *probe difficulty* and *probe discrimination*. For instance, fitting Samejima’s continuous IRT model makes it possible to assign a difficulty and a discrimination level to each reading passage administered.

The probe parameters were calibrated using the two approaches as implemented in the two programs. For the EM-CBM probes, the probe parameters calibrated by the two programs were very close to each other; the probe discrimination and probe difficulty parameters were estimated slightly higher by the EM2 program. The empirical standard errors, based on nonparametric bootstrap sampling, were lower for the EM2 program, except for the difficulty parameter of the quantity array probe. The EM-CBM probes differed in difficulty: the quantity array probe was the most difficult, followed by the number identification probe, with the quantity discrimination probe the least difficult. The number identification and quantity discrimination probes were equally discriminating between students with low and high number sense, while the quantity array probe had slightly lower discriminating power.

The *passage difficulty* and *passage discrimination* parameters were estimated for each CBM reading passage administered to the students. As with the EM-CBM probes, EM2 estimated the difficulty and discrimination parameters for the reading passages slightly higher in magnitude, with smaller empirical standard errors (higher precision). All three reading passages were very close to each other in difficulty level, suggesting that the passages were essentially equivalent in difficulty. However, the discrimination parameters for the passages differed in magnitude. The reading passages were highly discriminating between low- and high-ability students in reading, but the discrimination value for the third reading passage is questionable: it is unexpectedly high relative to the usual range observed for discrimination parameters in IRT applications. In addition, the empirical standard error of the estimate is large enough to raise suspicion. This might suggest an item–model fit problem for the third reading passage.

## Discussion

On the basis of the conditions investigated in this study, the results showed that Shojima’s (2005) simplified EM algorithm, as implemented in the EstCRM package, is as effective and efficient as the traditional EM algorithm, as implemented in the EM2 program, in most conditions. Both programs produced similar results regarding estimation bias and precision; however, each had advantages and disadvantages in a few conditions. The bias and precision in estimating parameter *a* were very similar for both programs. Similarly, both programs showed almost no bias in estimating the *b* parameter in almost every condition, but the EstCRM package was more precise in estimating parameter *b*, especially in conditions where the distribution of parameter *b* was uniform. On the other hand, the EM2 program was more effective in estimating parameter *α* in terms of both precision and bias; the negative bias of the EstCRM package in estimating parameter *α* was remarkable.

Both programs are available to practitioners at no charge. One limitation of the EM2 program is that it runs only on 32-bit computers: it was compiled from C source into an executable file about 15 years ago and has not been updated since. The program is available for free from its authors, but its practical use is limited to 32-bit computers. The EstCRM package, on the other hand, is written in R and runs in an R environment on any Windows, Linux, or Mac OS platform. The manual, including sample R code for analyzing the sample data sets, is available online for the EstCRM package.

In addition to the full-information approaches examined in this study, the reader should be aware that the CRM item parameters can also be estimated by fitting a linear factor model using a limited-information approach. From a theoretical point of view, CRM is the more complex nonlinear counterpart to the linear factor model and is more appropriate for such measurement outcomes. From a practical point of view, however, whether a nonlinear model fitted with the full-information approach outperforms a linear model fitted with the limited-information approach is an open question. A previous simulation study suggested that a linear model fitted with the limited-information approach performs as well as the nonlinear model fitted with the full-information approach unless the items are at extreme difficulty levels and highly discriminating (Ferrando, 2002). The full-information approach may also be more efficient than the limited-information approach in the case of a large amount of missing data that does not permit computing a satisfactory covariance matrix. More simulation studies are needed to investigate the conditions under which one approach should be preferred over the other.

A real data illustration was also provided in the study. The CRM was used as a potential psychometric tool for analyzing the data from EM-CBM probes and CBM reading passages. There are many advantages to using the CRM in the context of curriculum-based measurement. First, it provides an analytical way to examine *probe* or *passage equivalency* in a psychometric context when several reading passages or probes are administered to the same sample of students in a single administration. Second, the analytical procedure for linking the scores under the CRM is already developed for common-examinee and common-item designs (Shojima, 2003). Using the recommended approach, the oral fluency reading scores can be linked across samples using the CRM as an alternative method to the observed score equating procedures (Albano & Rodriguez, 2012). Third, the closed formula for estimating the ability level from the calibrated item parameters is derived for the CRM (Samejima, 1973). The estimation of ability under the CRM takes both *probe* or *passage difficulty* and *probe* or *passage discrimination* into account, which is not a standard practice in the current application.

The CRM has the potential to help CBM practitioners with some technical difficulties faced in practice. The real data illustration in this study aimed to introduce CRM for practitioners and to open the doors for its applicability in education, especially for CBM outcomes. Future research should analyze more data for different samples of students by using different sets of probes or reading passages and assess the model fit at the item and person level.

Lastly, one aspect of this study should be noted as a limitation. In the present simulation study, it was assumed that there were no missing data. The presence and nature of missing data will certainly degrade the precision and increase the bias of item parameter estimates, and future simulation studies should manipulate the presence and nature of missing data. It is hypothesized that missing data will affect the item parameter estimates more in the limited-information approach than in the full-information approach, because either list-wise or pair-wise deletion leads to more information loss when the covariance matrix is computed, compared with the full-information approach, which uses all available item responses.

To conclude, as one of the reviewers pointed out, continuous response formats will be used more frequently with the increasing popularity of computer-based administration. This study aimed to address one small piece, item parameter estimation. More studies are needed to examine item parameter estimation, DIF analysis, linking, model fit, and item and person fit in the context of CRM, to provide the best tools for practitioners.

## References

Albano, A. D., & Rodriguez, M. C. (2012). Statistical equating with measures of oral reading fluency. *Journal of School Psychology, 50*, 43–59.

Bejar, I. I. (1977). An application of the continuous response level model to personality measurement. *Applied Psychological Measurement, 1*, 509–521.

Clarke, B., & Shinn, M. R. (2004). A preliminary investigation into the identification and development of early mathematics curriculum-based measurement. *School Psychology Review, 33*, 234–248.

Dempster, A. P., Laird, N., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. *Journal of the Royal Statistical Society B, 39*, 1–38.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. *Exceptional Children, 52*, 219–232.

Deno, S. L., & Mirkin, P. K. (1977). *Data-based program modification: A manual*. Arlington: Council for Exceptional Children.

Ferrando, P. J. (2002). Theoretical and empirical comparisons between two models for continuous item responses. *Multivariate Behavioral Research, 37*, 521–542.

Hintze, J. M., Christ, T. J., & Methe, S. A. (2006). Curriculum-based assessment. *Psychology in the Schools, 43*, 45–56.

Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. *Psychometrika, 30*, 179–185.

Lembke, E., & Foegen, A. (2009). Identifying early numeracy indicators for kindergarten and first-grade students. *Learning Disabilities Research and Practice, 24*, 12–20.

R Development Core Team. (2011). *R: A language and environment for statistical computing*. Vienna: R Foundation for Statistical Computing. Retrieved from http://www.R-project.org/

Samejima, F. (1973). Homogeneous case of the continuous response model. *Psychometrika, 38*, 203–219.

Shojima, K. (2003). Linking tests under the continuous response model. *Behaviormetrika, 30*, 155–171.

Shojima, K. (2005). A noniterative item parameter solution in each EM cycle of the continuous response model. *Educational Technology Research, 28*, 11–22.

Wang, T., & Zeng, L. (1998). Item parameter estimation for a continuous response model using an EM algorithm. *Applied Psychological Measurement, 22*, 333–344.

Zopluoglu, C. (2012). EstCRM: An R package for Samejima's continuous IRT model. *Applied Psychological Measurement, 36*, 149–150.

## Author Note

The author acknowledges Dr. David Heistad, Dr. Chi-Keung (Alex) Chan, and Mary Pickart at the Minneapolis Public Schools for their tremendous support, as well as willingness to share the data for this study. The author also greatly appreciates Dr. Chi-Keung (Alex) Chan for serving as the mentor for his internship.


### Cite this article

Zopluoglu, C. A comparison of two estimation algorithms for Samejima’s continuous IRT model. *Behavior Research Methods, 45*, 54–64 (2013). https://doi.org/10.3758/s13428-012-0229-6


### Keywords

- Continuous response model
- Continuous IRT model
- Item response theory
- Item parameter estimation
- Item parameter recovery
- Simulation
- Curriculum-based measurement