• Mohamed A. Shahat
  • Annika Ohle
  • David F. Treagust
  • Hans E. Fischer


Educators and policymakers envision the future of education in Egypt as enabling learners to acquire scientific inquiry and problem-solving skills. In this article, we describe the validation of a model for problem solving and the design of instruments for evaluating new teaching methods in Egyptian science classes. The instruments were based on an established model of problem solving and were designed to assess seventh grade students’ problem solving, experimental strategy knowledge, achievement and motivation towards science. The test of students’ knowledge was developed around the topic of density and buoyancy, which will be taught in seventh grade in a later intervention study. The instruments were partly self-developed and partly adapted from recent German studies on strategy knowledge and problem solving. All instruments were translated into Arabic; the translation process and quality control are described. To determine the quality of the instruments, 44 students in Egypt completed the questionnaires and tests. Because the study’s aim was to develop and validate the instruments, an ad hoc but typical sample was drawn from an accessible population; the characteristics of this sample are described. Data were analysed according to classical test theory and, to corroborate the results, the instruments were additionally analysed with the more rigorous Rasch model. The findings demonstrated the reliability of the items and provided evidence of aspects of validity. In addition, this study showed how test items can be successfully developed and adapted in an international study and applied in a different language.


motivation, problem solving, reliability, scientific inquiry, strategy knowledge, validity





Copyright information

© National Science Council, Taiwan 2012

Authors and Affiliations

  • Mohamed A. Shahat (1)
  • Annika Ohle (2)
  • David F. Treagust (3)
  • Hans E. Fischer (4)

  1. Department of Physics, Physics Education, University of Duisburg-Essen, Essen, Germany
  2. Institute for School Development Research (IFS), TU Dortmund, Dortmund, Germany
  3. Science and Mathematics Education Centre, Curtin University of Technology, Perth, Australia
  4. Department of Physics, Physics Education, University of Duisburg-Essen, Essen, Germany
