Educational Studies in Mathematics, Volume 68, Issue 2, pp 149–170

The role of scaling up research in designing for and evaluating robustness


Abstract

One of the great strengths of Jim Kaput’s research program was his relentless drive towards scaling up his innovative approach to teaching the mathematics of change and variation. The SimCalc mission, “democratizing access to the mathematics of change,” was enacted by deliberate efforts to reach an increasing number of teachers and students each year. Further, Kaput asked: What can we learn from research at the next level of scale (e.g., beyond a few classrooms at a time) that we cannot learn from other sources? In this article, we develop an argument that scaling up research can contribute important new knowledge by focusing researchers’ attention on the robustness of an innovation when used by varied students, teachers, classrooms, schools, and regions. The concept of robustness requires additional discipline both in the design process and in the conduct of valid research. By examining a progression of three studies in the Scaling Up SimCalc program, we articulate how scaling up research can contribute to designing for and evaluating robustness.

Keywords

Democratization of access to mathematics · Educational technology · Mathematics education · Randomized experiments · Scaling up

Copyright information

© Springer Science+Business Media B.V. 2008

Authors and Affiliations

  • J. Roschelle (1)
  • D. Tatar (2)
  • N. Shechtman (1)
  • J. Knudsen (1)

  1. SRI International, Center for Technology in Learning, Menlo Park, USA
  2. Department of Computer Science, Virginia Tech, Blacksburg, USA
