PROSPECTS

Volume 43, Issue 4, pp 481–493

Developing an item bank for use in testing in Africa: Theory and methodology

Trends/Cases

Abstract

The author describes the steps taken by a research team, of which she was part, to develop a specific methodology for assessing student attainment in primary school, working with the Programme for the Analysis of Education Systems (PASEC) of the Conference of Ministers of Education of French-speaking Countries (CONFEMEN). This methodology provides the basis for an item bank that can be used for many assessment tests. It can also be applied outside the specific context of French-speaking Africa.
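The measurement model most often used to calibrate items for a bank of this kind is the Rasch model (listed in the keywords below). As a purely illustrative sketch, not the team's actual implementation, the model says that the probability of a correct response depends only on the difference between a student's ability and the item's difficulty, which is what allows items calibrated on different samples to be placed on one common scale; the function name here is hypothetical:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is the student's ability and b the item's difficulty,
    both on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the success probability is 0.5;
# easier items (lower b) yield higher probabilities for the same student.
p_matched = rasch_probability(0.0, 0.0)
p_easy = rasch_probability(0.0, -1.0)
```

Because only the difference theta - b matters, two items answered by different groups of students can still be compared once their difficulties are expressed in the same logit metric, which is the property an item bank relies on.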

Keywords

Assessment · Item bank · Item response theory · Rasch model · Curriculum reform · Africa


Copyright information

© UNESCO IBE 2013

Authors and Affiliations

Université du Québec à Montréal, Montréal, Canada
