cpm.4.CSE/IRT: Compact process model for measuring competences in computer science education based on IRT models


Abstract

cpm.4.CSE/IRT (compact process model for Computer Science Education based on IRT models) is a process model for competence measurement based on item response theory (IRT) models. It allows the efficient development of measuring instruments for computer science education. cpm.4.CSE/IRT consists of four subprocesses: B1 determine items, B2 test items, B3 analyze items according to the Rasch model, and B4 interpret items by criteria. cpm.4.CSE/IRT is modeled in IDEF0, a standardized and widely used process modeling language. It is implemented in R, an open-source software environment optimized for statistical computing and graphics, with which users interact through the web application framework Shiny. Through coordinated processes, cpm.4.CSE/IRT ensures the quality and comparability of test instruments in competence measurement. cpm.4.CSE/IRT is demonstrated using an example from the competence area of Modeling.




Notes

  1. For the fourth subprocess, items with Differential Item Functioning (DIF) were eliminated and the resulting reduced data set was re-analyzed.

    Input. Based on the information from subprocesses B3.2 and B3.3, items 4, 6, and 9 are eliminated from the data set "dat", and a new reduced data set "datReduced" is generated by

    > datReduced = dat[, -c(4, 6, 9)]

    Output. The re-analysis of the reduced data set and the output of the results are done with

    > resultReAnalyzed=RM(datReduced)

    > summary(resultReAnalyzed)

    A model check with the steps from subprocesses B3.2 and B3.3 shows that no significant model violations are present.
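
    For completeness, the re-analysis can be written as a short self-contained script (a sketch assuming the eRm package and that "dat" is the dichotomous data matrix of the running example):

    library(eRm)                          # provides RM() for Rasch model estimation
    datReduced <- dat[, -c(4, 6, 9)]      # eliminate the DIF items 4, 6, and 9
    resultReAnalyzed <- RM(datReduced)    # re-estimate the Rasch model (CML)
    summary(resultReAnalyzed)             # item parameters and standard errors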

References

  • ACER ConQuest 4 (2018). ConQuest. Retrieved January 2, 2018, from https://www.acer.edu.au/conquest.

  • ACM (Association for Computing Machinery) (2014). ITiCSE´14 (proceedings of the ACM conference on innovation and technology in computer science education). New York: ACM.

  • ACM (Association for Computing Machinery) (2015). ITiCSE´15 (proceedings of the ACM conference on innovation and technology in computer science education). New York: ACM.

  • ACM (Association for Computing Machinery) (2016). ITiCSE´16 (proceedings of the ACM conference on innovation and technology in computer science education). New York: ACM.

  • Andersen, E. B. (1973). A goodness of fit test for the Rasch model. Psychometrika, 38, 123–140.


  • Anderson, L. W., Krathwohl, D. R., & Airasian, P. W. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.

  • Bartolucci, F., Bacci, S., & Gnaldi, M. (2016). Statistical analysis of questionnaires: A unified approach based on R and Stata. New York: Chapman & Hall.

  • Beaton, E., & Allen, N. (1992). Interpreting scales through scale anchoring. Journal of Educational Statistics, 17, 191–204.

  • Beeley, C. (2016). Web application development with R using Shiny. Birmingham: Packt Publishing.

  • Berges, M., & Hubwieser, P. (2015). Evaluation of source code with item response theory. In ITiCSE '15 proceedings of the 2015 ACM conference on innovation and technology in computer science education (pp. 51–56). New York: ACM.

  • Bigsteps (2018). Bigsteps. Retrieved January 2, 2018, from http://www.winsteps.com/bigsteps.htm.

  • Booch, G., Rumbaugh, J. & Jacobson, I. (2005). The unified modeling language user guide. New York: Addison-Wesley.

  • Borg, I., & Staufenbiel, T. (2007). Lehrbuch Theorien und Methoden der Skalierung. Bern: Huber.


  • Botturi, L. (2008). E2ML: A tool for sketching instructional designs. In L. Botturi & S. T. Stubbs (Eds.), Handbook of visual languages for instructional design (pp. 112–132). New York: Information Science Reference.


  • Bühner, M. (2011). Einführung in die Test- und Fragebogenkonstruktion. München: Pearson Studium.

  • Burnham, K., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. New York: Springer.


  • Chang, W. (2013). R graphics cookbook. Beijing: O'Reilly and Associates.


  • ConstructMap (2018). ConstructMap. Retrieved January 2, 2018, from http://bearcenter.berkeley.edu/software/constructmap.

  • CRAN (Comprehensive R Archive Network) (2018). CRAN Task Views. Retrieved January 2, 2018 from https://cran.r-project.org/AS.

  • De Ayala, R. J. (2009). The theory and practice of item response theory. New York: Guilford Press.

  • Derntl, M., & Motschnig-Pitrik, R. (2008). CoUML: A visual language for modeling cooperative environments. In L. Botturi & S. T. Stubbs (Eds.), Handbook of visual languages for instructional design (pp. 155–184). New York: Information Science Reference.


  • Eid, M., & Schmid, K. (2014). Testtheorie und Testkonstruktion. Göttingen: Hogrefe.


  • Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Erlbaum.


  • Fischer, G. H., & Molenaar, I. W. (Eds.). (1995). Rasch models: Foundations, recent developments, and applications. Berlin: Springer.

  • GI (Gesellschaft für Informatik) (2008). Grundsätze und Standards für die Informatik in der Schule: Bildungsstandards Informatik für die Sekundarstufe I. LOG IN, 28(150/151), supplement.

  • GI (Gesellschaft für Informatik) (2016). Bildungsstandards Informatik für die Sekundarstufe II. LOG IN, 36(183/184), supplement.

  • Glas, C. A. W., & Verhelst, N. D. (1995). Testing the Rasch model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Their foundations, recent developments and applications (pp. 69–95). New York: Springer.


  • Goldhammer, F., & Hartig, J. (2012). Interpretation von Testresultaten und Testeichung. In H. Moosbrugger & A. Kelava (Eds.), Testtheorie und Fragebogenkonstruktion (pp. 173–201). Berlin: Springer.

  • Griffin, P. (2007). The comfort of competence and the uncertainty of assessment. Studies in Educational Evaluation, 33, 87–99.


  • Haladyna, T. (2004). Developing and validating multiple choice test items. London: Lawrence Erlbaum Associates Publisher.


  • Horn, R. A. J. (2004). Standards. New York: Lang.


  • Hsieh, S.-C., Lin, J.-S., & Lee, H.-C. (2012). Analysis on literature review of competency. International Review of Business and Economics, 2, 25–50.


  • Hubwieser, P. (1999). Modellierung in der Schulinformatik. LOG IN, 24–29.

  • Hubwieser, P. (2007). Didaktik der Informatik. Grundlagen, Konzepte, Beispiele. Berlin: Springer.


  • Ihaka, R., & Gentleman, R. (1996). R: A language for data analysis and graphics. Journal of Computational and Graphical Statistics, 5(3), 299–314.


  • Institute for Objective Measurement (2018). Tools overview. Retrieved January 2, 2018, from http://www.rasch.org/software.htm.

  • Irtel, H. (1996). Entscheidungs- und testtheoretische Grundlagen der Psychologischen Diagnostik. Frankfurt am Main: Lang.

  • Jonkisz, E., Moosbrugger, H., & Brandt, H. (2012). Planung und Entwicklung von Tests und Fragebogen. In H. Moosbrugger & A. Kelava (Eds.), Testtheorie und Fragebogenkonstruktion (Kapitel 3). Berlin: Springer.


  • Klieme, E., & Maag Merki, K. (2008). Introduction of educational standards in German-speaking countries. In J. Hartig, E. Klieme, & D. Leutner (Eds.), Assessment of competencies in educational contexts (pp. 305–314). Göttingen: Hogrefe.


  • Klieme, E., Hartig, J., & Rauch, D. (2008). The concept of competence in educational contexts. In J. Hartig, E. Klieme, & D. Leutner (Eds.), Assessment of competencies in educational contexts (pp. 3–22). Göttingen: Hogrefe.

  • Knowledge Based Systems (1993). Integrated Definition for Function Modeling (IDEF0). Retrieved March 1, 2014, from http://www.itl.nist.gov/fipspubs/idef02.doc.

  • Koller, I., & Hatzinger, R. (2013). Nonparametric tests for the Rasch model: Explanation, development, and application of quasi-exact tests for small samples. InterStat, 11, 1–16.


  • Koller, I., Alexandrowicz, R., & Hatzinger, R. (2012). Das Rasch Modell in der Praxis. Wien: Facultas.

  • Kron, F. W. (2008). Grundwissen Didaktik. Stuttgart: UTB.


  • Lersch, R., & Schreder, G. (2013). Grundlagen kompetenzorientierten Unterrichtens: Von den Bildungsstandards zum Schulcurriculum. Opladen: Budrich.


  • Linacre, J. M. (1994). Sample size and item calibration stability. Rasch Measurement Transactions, 7, 328.


  • Mair, P., & Hatzinger, R. (2007). CML based estimation of extended Rasch models with the eRm package in R. Psychology Science, 49, 26–43.


  • Martin-Löf, P. (1973). Statistiska modeller. Stockholm: Institutet för Försäkringsmatematik och Matematisk Statistik vid Stockholms Universitet.

  • Mayer, R., Painter, M., & de Witte, P. (1992). IDEF family for concurrent engineering and business reengineering applications. New York: Knowledge Based Systems.


  • Menzel, C., & Mayer, R. (2005). The IDEF family of languages. In P. Bernus, K. Martins, & G. Schmidt (Eds.), Handbook on architectures of information systems (pp. 215–250). Berlin: Springer.


  • Ministep (2018). Ministep. Retrieved January 2, 2018, from http://www.winsteps.com/ministep.htm.

  • Molenaar, I. W. (1995). Some background for item response theory and the Rasch model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Their foundations, recent developments and applications (pp. 3–14). New York: Springer.

  • Moosbrugger, H., & Kelava, A. (Eds.) (2012). Testtheorie und Fragebogenkonstruktion. Berlin: Springer.

  • Mühling, A., Ruf, A., & Hubwieser, P. (2015). Design and first results of a psychometric test for measuring basic programming abilities. In Proceedings of the Workshop in Primary and Secondary Computing Education. WiPSCE’15 (pp. 2–10). New York, NY: ACM.

  • Mullis, I. V. S., Martin, M. O., Foy, P., & Arora, A. (2011). TIMSS 2011 international results in mathematics. Chestnut Hill, MA: TIMSS & PIRLS International Study Center.

  • OECD iLibrary (2018). Retrieved January 2, 2018, from http://www.oecd-ilibrary.org/education/pisa-2009-ergebnisse_9789264095335-de.

  • Paquette, G., Léonard, M., & Lundgren-Cayrol, K. (2008). The MOT+ visual language for knowledge-based instructional design. In L. Botturi & S. T. Stubbs (Eds.), Handbook of visual languages for instructional design (pp. 133–154). New York: Information Science Reference.


  • Ponocny, I. (2001). Nonparametric goodness of fit tests for the Rasch model. Psychometrika, 66, 437–460.


  • Preinerstorfer, D., & Formann, A. (2012). Parameter recovery and model selection in mixed Rasch models. British Journal of Mathematical and Statistical Psychology, 65, 251–262.


  • Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research.

  • Ravitch, D. (1995). National standards in American education. Washington, D.C.: Brookings Institution Press.


  • Rizopoulos, D. (2006). ltm: An R package for latent variable modelling and item response theory analyses. Journal of Statistical Software, 17, 1–25.

  • Robinsohn, S. B. (1971). Bildungsreform als Revision des Curriculums und ein Strukturkonzept für Curriculums-Entwicklung. Berlin: Luchterhand.

  • RStudio (2018). Shiny. Retrieved January 2, 2018, from http://shiny.rstudio.com/.

  • Rumbaugh, J., Jacobson, I., & Booch, G. (2010). The unified modeling language reference manual. New York: Addison-Wesley.

  • Rumm2030 (2018). Rumm2030. Retrieved January 2, 2018, from http://www.rummlab.com.au/.

  • Rychen, S., & Salganik, L. H. (2003). Definition and selection of competencies: Theoretical and conceptual foundations – summary of the final report "Key competencies for a successful life and a well-functioning society". Retrieved January 2, 2018, from http://www.netuni.nl/courses/hre/uploads/File/deseco_finalreport_summary.pdf.

  • Saris, W. E., & Gallhofer, I. N. (2014). Design, evaluation, and analysis of questionnaires for survey research. New York: Wiley.

  • Seifert, A. (2015). Kompetenzforschung in den Fachdidaktiken auf der Grundlage von IRT-Modellen. In U. Riegel, S. Schubert, G. Siebert-Ott, & K. Macha (Eds.), Kompetenzmodellierung und Kompetenzmessung in den Fachdidaktiken (pp. 131–161). Münster: Waxmann.

  • Strobl, C. (2015). Das Rasch-Modell. München: Hampp.

  • Sudol, A., & Studer, C. (2010). Analyzing test items: Using item response theory to validate assessments. In Proceedings of the 41st ACM Technical Symposium on Computer Science Education, SIGCSE '10 (pp. 436–440). New York, NY: ACM.

  • Teetor, P. (2011). R cookbook. Beijing: O'Reilly and Associates.

  • Tew, A. E., & Guzdial, M. (2011). The FCS1: A language independent assessment of CS1 knowledge. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education, SIGCSE '11 (pp. 111–116). New York, NY: ACM.

  • van der Linden, W., & Hambleton, R. K. (1997). Handbook of modern item-response theory. Berlin: Springer.

  • van der Linden, W., & Hambleton, R. K. (2016). Handbook of item-response theory (three volume set). New York: CRC Press.


  • Weinert, F. E. (1998). Vermittlung von Schlüsselqualifikationen. In S. Matalik & D. Schade (Eds.), Entwicklungen in Aus- und Weiterbildung (pp. 23–43). Baden-Baden: Nomos.

  • Wendt, H., Bos, W., Selter, C., Köller, O., Schwippert, C., & Kaspar, D. (Eds.) (2016). TIMSS 2015 Mathematische und naturwissenschaftliche Kompetenzen von Grundschulkindern in Deutschland im internationalen Vergleich. Münster: Waxmann.

  • Wickham, H. (2013). R packages. Beijing: O'Reilly and Associates.

  • Wilson, M. (2005). Constructing measures: An item response modelling approach. Mahwah: Lawrence Erlbaum Associates.

  • Winsteps (2018). Winsteps. Retrieved January 2, 2018, from http://www.winsteps.com/winsteps.htm.

  • Winters, T., & Payne, T. (2005). What do students know? An outcomes-based assessment system. In Proceedings of the First International Workshop on Computing Education Research, ICER '05 (pp. 65–72). New York, NY: ACM.

  • Winters, T., & Payne, T. (2006). Closing the loop on test creation: A question assessment mechanism for instructors. SIGCSE Bulletin, 38(1), 169–172.


  • Zendler, A., & Hubwieser, P. (2013). The influence of teacher training programs on evaluations of central computer science concepts. Teaching and Teacher Education, 34(August), 130–142.

  • Zendler, A., Spannagel, C., & Klaudt, D. (2011). Marrying content and process in computer science education. IEEE Transactions on Education, 54(3), 387–397.


  • Zendler, A., Klaudt, D., & Seitz, C. (2014). Empirical determination of competence areas to computer science education. Journal of Educational Computing Research, 51(1), 71–89.


  • Zendler, A., McClung, O. W., & Klaudt, D. (2015). A cross-cultural comparison of concepts in computer science education: The US–Germany experience. The International Journal of Information and Learning Technology, 32(4), 235–256.


  • Zendler, A., Seitz, C., & Klaudt, D. (2016). Process-based development of competence models to computer science education. Journal of Educational Computing Research, 54(4), 563–597.



Author information


Correspondence to Andreas Zendler.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Data model of cpm.4.CSE/IRT

Figure 24 illustrates the data model of cpm.4.CSE/IRT in simplified UML notation. The central entity type is Data matrix, which is connected to the entity types Item and Person by 1:1..* relationship types; in addition, there is a 1:1 relationship type between Data matrix and Rasch analysis. The entity type (formulated) Competence has 1:1..* relationship types to the entity types Item, Competence area, and Competence dimension. Two further 1..*:1 relationship types exist between Competence concept and Competence dimension as well as between Item and Competence level.

Fig. 24 Data model of cpm.4.CSE/IRT

1.2 Running example

1.2.1 Competence areas (cf. Zendler et al. 2011, 2014)

CA01 Information technology. This competence area includes the two content concepts data and information.

CA02 Modeling. This competence area includes the four content concepts problem, model, structure, and algorithm.

CA03 Computer communication. This competence area includes the two content concepts computer and communication.

CA04 Software engineering. This competence area includes seven content concepts: process, language, computation, system, test, program, and software.

1.2.2 Formulated competences for the competence area CA02 Modeling

C01 Model concept: <<Learners evaluate different model concepts; in particular, they qualify the model concept for computer science>>.

C02 Classification of models: <<Learners determine the usage of different models for software development>>.

C03 Diagram types: <<Learners use diagram types for modeling>>.

C04 Process of modeling: <<Learners analyze requirements and use class diagrams to model them>>.

C05 Modeling languages: <<Learners have an overview of different modeling languages such as Unified Modeling Language (UML), Event-driven Process Chains (EPC), Petri nets, IDEF0 (Integrated DEFinition), SDL (Specification and Description Language), and ERM (Entity Relationship Model)>>.

1.2.3 Items for the competence C03 Diagram types

Item1: "Learners justify different model concepts; in particular, they qualify class diagrams for computer science."

Item2: "Learners adequately apply the concepts of sequence modeling."

Item3: "Learners determine the use of class and sequence diagrams in software engineering."

Item4: "Learners demonstrate the use of models in software engineering."

Item5: "Learners use diagram types for requirements modeling."

Item6: "Learners analyze requirements and apply use case diagrams."

Item7: "Learners analyze requirements and apply sequence diagrams."

Item8: "Learners analyze requirements and apply activity diagrams."

Item9: "Learners analyze requirements and apply state diagrams."

Item10: "Learners analyze requirements and apply class diagrams."

Item11: "Learners are convinced of modeling as a key activity in software engineering."

Item12: "Learners have an overview of various modeling languages such as Unified Modeling Language (UML), Event-driven Process Chains (EPC), Petri nets, IDEF (Integrated DEFinition), SDL (Specification and Description Language), and ERM (Entity-Relationship Model)."

1.3 Central test statistics

Likelihood of person and item parameters for a given data set (see Molenaar 1995, p. 10).

The likelihood L of the person and item parameters for a given data set, assuming that the Rasch model holds in the population, is given by the following equation:

$$ L=\prod \limits_{v=1}^{N}\prod \limits_{i=1}^{k}\frac{\exp \left({x}_{vi}\cdot \left({\xi}_v-{\sigma}_i\right)\right)}{1+\exp \left({\xi}_v-{\sigma}_i\right)}, $$

where \( N \) is the number of persons \( v \) in the total sample; \( k \) is the number of items \( i \); \( x_{vi} \) is the answer of person \( v \) to item \( i \) (in the dichotomous case: 1 = item solved, 0 = item not solved); \( \xi_v \) is the person parameter of person \( v \); \( \sigma_i \) is the item difficulty parameter of item \( i \); and \( \exp \) denotes the exponential function.
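
As an illustration, this likelihood can be evaluated directly in R (a minimal sketch; the data matrix X and the parameter vectors xi and sigma are hypothetical inputs, not part of the cpm.4.CSE/IRT program):

# Joint likelihood of the Rasch model for a dichotomous N x k data matrix X,
# person parameters xi (length N), and item difficulties sigma (length k)
raschLikelihood <- function(X, xi, sigma) {
  eta <- outer(xi, sigma, "-")          # eta[v, i] = xi_v - sigma_i
  prod(exp(X * eta) / (1 + exp(eta)))   # product over all persons and items
}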

Conditional likelihood ratio test according to Andersen (see Glas and Verhelst 1995, p. 86; Bühner 2011, p. 531).

The test statistic of Andersen's conditional likelihood ratio test is:

$$ {\chi}^2=-2\ln \left(\frac{L_0}{cL_1\cdot {cL}_2}\right),\ \mathrm{with}\ df=\left({k}_1-1\right)+\left({k}_2-1\right)-\left({k}_0-1\right), $$

where \( L_0 \) is the likelihood of the total sample; \( cL_1 \) is the conditional likelihood of the first subsample; \( cL_2 \) is the conditional likelihood of the second subsample; \( k_0 \) is the number of items in the total sample; \( k_1 \) is the number of items in the first subsample; \( k_2 \) is the number of items in the second subsample; \( \ln \) is the natural logarithm; and \( df \) is the number of degrees of freedom.
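
In R, Andersen's test is provided by the eRm package (a sketch assuming "dat" is the dichotomous data matrix of the running example; by default, LRtest() splits the sample at the median raw score):

library(eRm)
result <- RM(dat)                    # CML estimation of the Rasch model
LRtest(result, splitcr = "median")   # Andersen's conditional LR test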

Martin-Löf test (see Martin-Löf 1973; Glas and Verhelst 1995, p. 77; Bühner 2011, p. 538).

The test statistic of the Martin-Löf test is:

$$ \chi^2_{\mathrm{MLT}}=-2\ln \frac{\prod \limits_{r=0}^{k}{\left(\frac{n_r}{N}\right)}^{n_r}\cdot {cL}_0}{\prod \limits_{r=0}^{k_1}\prod \limits_{s=0}^{k_2}{\left(\frac{n_{rs}}{N}\right)}^{n_{rs}}\cdot {cL}_1\cdot {cL}_2},\ \mathrm{with}\ df={k}_1\cdot {k}_2-1, $$

where \( cL_0 \) is the conditional likelihood of the overall test; \( cL_1 \) is the conditional likelihood of the first half of the test; \( cL_2 \) is the conditional likelihood of the second half of the test; \( k \) is the number of items of the overall test; \( k_1 \) is the number of items in the first half of the test; \( k_2 \) is the number of items in the second half of the test; \( \ln \) is the natural logarithm; \( n_r \) is the frequency of sum score \( r \) in the total test; \( n_{rs} \) is the joint frequency of sum scores \( r \) and \( s \) in the first and second half of the test; \( r \) is the sum score in the first half of the test; \( s \) is the sum score in the second half of the test; and \( df \) is the number of degrees of freedom.
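
Continuing the sketch above, the Martin-Löf test is available in eRm as MLoef(); with splitcr = "median", the items are divided into two groups at the median of their raw scores:

MLoef(result, splitcr = "median")    # Martin-Löf test: chi-square, df, p-value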

Item-specific Wald test for item i (see Glas and Verhelst 1995, p. 91; Eid and Schmid 2014, p. 187).

The test statistic of the (standard normally distributed) item-specific Wald test is:

$$ {W}_i=\frac{\sigma_i^{(1)}-{\sigma}_i^{(2)}}{\sqrt{\operatorname{var}\left({\sigma}_i^{(1)}\right)+\operatorname{var}\left({\sigma}_i^{(2)}\right)}}\sim N\left(0,1\right), $$

where \( {\sigma}_i^{(1)} \) is the item difficulty for item \( i \) in the first group; \( {\sigma}_i^{(2)} \) is the item difficulty for item \( i \) in the second group; \( \operatorname{var}\left({\sigma}_i^{(1)}\right) \) is the variance of the item difficulty of item \( i \) in the first group; and \( \operatorname{var}\left({\sigma}_i^{(2)}\right) \) is the variance of the item difficulty of item \( i \) in the second group.
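
The item-specific Wald tests can likewise be computed with eRm, continuing the sketch above (the person sample is again split at the median raw score):

Waldtest(result, splitcr = "median")   # one standard normally distributed statistic per item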

χ²-fit test according to Pearson for the Rasch model (see Glas and Verhelst 1995, p. 71; Bühner 2011, p. 534).

The test statistic of Pearson's χ²-fit test for the Rasch model is:

$$ {\chi}^2=\sum \limits_{\underline{x}}\frac{{\left({o}_{\underline{x}}-{e}_{\underline{x}}\right)}^2}{e_{\underline{x}}},\ \mathrm{with}\ df={m}^k-{n}_p-1, $$

where \( {o}_{\underline{x}} \) is the observed frequency of response pattern \( \underline{x} \); \( {e}_{\underline{x}} \) is the expected frequency of response pattern \( \underline{x} \) under the Rasch model; \( m \) is the number of response categories; \( k \) is the number of items; and \( n_p \) is the number of model parameters.
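
A parametric-bootstrap variant of this Pearson fit test is implemented in the ltm package (a sketch; note that ltm's rasch() uses marginal maximum likelihood rather than the conditional approach of eRm):

library(ltm)
fitLtm <- rasch(dat)          # Rasch model fit via marginal maximum likelihood
GoF.rasch(fitLtm, B = 199)    # bootstrapped Pearson chi-square goodness-of-fit test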

Item standardization (see Goldhammer and Hartig 2012, p. 182).

Item difficulties centered to a sum of 0: \( 0\left({\sigma}_i\right)={\sigma}_i-\overline{\sigma} \).

z-values of the item difficulties: \( z\left({\sigma}_i\right)=0\left({\sigma}_i\right)/\mathrm{SD}\left(\sigma \right) \).

Values of the item difficulties in the standardization context of PISA/TIMSS: \( PT\left({\sigma}_i\right)=500+100\cdot z\left({\sigma}_i\right) \).
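
These three standardization steps translate directly into R (a sketch; sigma is assumed to be the numeric vector of estimated item difficulties from the B3 analysis):

sigma0 <- sigma - mean(sigma)   # item difficulties centered to a sum of 0
z <- sigma0 / sd(sigma)         # z-values of the item difficulties
pt <- 500 + 100 * z             # values on the PISA/TIMSS scale (M = 500, SD = 100)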

1.4 R program for cpm.4.CSE/IRT

The following listing shows the R program for the B3 subprocess of cpm.4.CSE/IRT. Due to lack of space, the listing is kept to a minimum: it contains no comments, and functions are called with only the most necessary parameters. Explanations of the individual functions are contained in the documentation of the eRm and ltm packages (see Mair and Hatzinger 2007 and Rizopoulos 2006, respectively).

[Listing (figure b): R program for the B3 subprocess]
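
Because the original listing is reproduced as an image, the following skeleton merely sketches the structure of such a B3 analysis with eRm (an illustration under the assumptions noted in the comments, not the original program):

library(eRm)

dat <- as.matrix(read.table("dat.txt"))   # assumption: dichotomous data matrix

result <- RM(dat)                         # estimate item parameters (CML)
summary(result)

LRtest(result, splitcr = "median")        # model check: Andersen's conditional LR test
MLoef(result, splitcr = "median")         # model check: Martin-Löf test
Waldtest(result, splitcr = "median")      # model check: item-specific Wald tests

pp <- person.parameter(result)            # estimate person parameters
summary(pp)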


Cite this article

Zendler, A. cpm.4.CSE/IRT: Compact process model for measuring competences in computer science education based on IRT models. Educ Inf Technol 24, 843–884 (2019). https://doi.org/10.1007/s10639-018-9794-3

