Data-based Decision Making: An Overview

Chapter
Part of the Studies in Educational Leadership book series (SIEL, volume 17)

Abstract

School leaders and teachers are increasingly required to use data as the basis for their decisions. But what does using data for decision making mean, and what counts as “data”? In this chapter, the authors address what is meant by the word “data” and examine which kinds of data are available and which are needed. The two should overlap, but sometimes the available data are not needed, and sometimes the needed data are not available. The chapter also discusses why teachers and school leaders should use data. Finally, it describes the process of using data and the different ways data can and should be used.

Keywords

Student Learning · Student Achievement · School Leader · Context Data · Improve Student Learning


Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Woolf Fisher Research Centre, The University of Auckland, Auckland, New Zealand
  2. Faculty of Behavioural Sciences, University of Twente, Enschede, The Netherlands
