Formative and Stealth Assessment

Abstract

Assessment generally refers to the process of gathering information about a person relative to specific competencies and other attributes, in formal or informal learning contexts. This process should lead to valid and reliable inferences about competency levels, which in turn may be used for diagnostic and/or predictive purposes. Too often, classroom and other high-stakes assessments are used for grading, promotion, and placement, but not to enhance learning. In this chapter, we focus on formative assessment, which holds that assessment should (a) encourage and support, not undermine, the learning process for learners and teachers; (b) provide formative information whenever possible (i.e., give useful feedback during the learning process instead of a single judgment at the end); and (c) be responsive to what is known about how people learn, generally and developmentally. This type of assessment has as its primary goal the improvement of learning, which is critical to support the kinds of learning outcomes and processes necessary for students to succeed in the twenty-first century. It is referred to as “formative assessment,” or assessment for learning, in contrast to “summative assessment” (or assessment of learning). This chapter provides an overview of the role of formative assessment in education generally, and also touches on stealth assessment specifically—an evidence-based approach to weaving assessments directly into learning environments (Shute, Computer games and instruction. Charlotte, NC: Information Age Publishers, 2011).
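To make the stealth-assessment idea slightly more concrete, the following is a minimal, purely illustrative sketch; it is not the model from this chapter or the cited work. It tracks a single binary competency whose probability of mastery is updated with Bayes' rule each time a scored in-game action is observed, loosely in the spirit of the evidence-centered design and graphical-model approaches cited in the references (e.g., Almond & Mislevy, 1999; Shute, 2011). The function name update_competency and all probability values are assumptions chosen for illustration.

```python
# Minimal illustrative sketch of stealth-assessment-style scoring (assumed
# values throughout; not the chapter's model). A single binary competency is
# tracked as P(mastered) and updated by Bayes' rule after each scored action.

def update_competency(prior_mastered: float,
                      evidence_positive: bool,
                      p_pos_if_mastered: float = 0.85,
                      p_pos_if_not_mastered: float = 0.30) -> float:
    """Return P(mastered | observed action) for one scored in-game action.

    The two conditional probabilities describe how diagnostic the action is;
    both are illustrative assumptions, not values from the chapter.
    """
    if evidence_positive:
        like_mastered = p_pos_if_mastered
        like_not = p_pos_if_not_mastered
    else:
        like_mastered = 1.0 - p_pos_if_mastered
        like_not = 1.0 - p_pos_if_not_mastered

    numerator = like_mastered * prior_mastered
    denominator = numerator + like_not * (1.0 - prior_mastered)
    return numerator / denominator


if __name__ == "__main__":
    # Start uncertain, then fold in a stream of scored actions collected
    # unobtrusively during play; each update could trigger formative feedback.
    belief = 0.5
    for action_met_scoring_rule in [True, True, False, True]:
        belief = update_competency(belief, action_met_scoring_rule)
        print(f"P(mastered) = {belief:.3f}")
```

In an actual stealth assessment, a single node like this would typically be replaced by a Bayesian network spanning several competencies, and the running estimates would drive formative feedback and task selection rather than a printed number.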

Keywords

Competency · Evidence-centered design (ECD) · Formative assessment · Stealth assessment

References

  1. Almond, R. G., & Mislevy, R. J. (1999). Graphical models and computerized adaptive testing. Applied Psychological Measurement, 23(3), 223–237.
  2. Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. T. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213–238.
  3. Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. New York, NY: Open University Press.
  4. Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148.
  5. Bull, S., & Pain, H. (1995). “Did I say what I think I said, and do you agree with me?”: Inspecting and questioning the student model. In J. Greer (Ed.), Proceedings of AI-ED’95—7th World Conference on Artificial Intelligence in Education (pp. 501–508). Virginia: AACE.
  6. Chappuis, S., & Chappuis, J. (2008). The best value in formative assessment. Educational Leadership, 65(5), 14–19.
  7. Corbett, A. T., & Anderson, J. R. (1989). Feedback timing and student control in the LISP intelligent tutoring system. In D. Bierman, J. Brueker, & J. Sandberg (Eds.), Proceedings of the Fourth International Conference on Artificial Intelligence and Education (pp. 64–72). Amsterdam, The Netherlands: IOS Press.
  8. Council of Chief State School Officers [CCSSO]. (2004). Indicators of quality of teacher professional development and instructional change using data from surveys of enacted curriculum: Findings from NSF MSP-RETA project. Washington, DC.
  9. Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York, NY: Harper Perennial.
  10. Elawar, M., & Corno, L. (1985). A factorial experiment in teachers’ written feedback on student homework: Changing teacher behavior a little rather than a lot. Journal of Educational Psychology, 77(2), 162–173.
  11. Feng, M., Heffernan, N. T., & Koedinger, K. R. (2006). Addressing the testing challenge with a web-based e-assessment system that tutors as it assesses. Paper presented at the 15th International Conference on World Wide Web, Edinburgh, Scotland.
  12. Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., Katzaroff, M., & Dutka, S. (1997). Effects of task-focused goals on low-achieving students with and without learning disabilities. American Educational Research Journal, 34(3), 513–543.
  13. Gustafson, K. L., & Branch, R. M. (2002). What is instructional design? In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (pp. 16–25). Columbus, OH: Merrill Prentice Hall.
  14. Hartley, D., & Mitrovic, A. (2002). Supporting learning by opening the student model. In S. Cerri, G. Gouarderes, & F. Paraguacu (Eds.), Proceedings of the 6th International Conference on Intelligent Tutoring Systems (pp. 453–462). London, UK: Springer-Verlag.
  15. Hindo, C., Rose, K., & Gomez, L. M. (2004). Searching for Steven Spielberg: Introducing iMovie to the high school English classroom: A closer look at what open-ended technology project designs can do to promote engaged learning. In Y. B. Kafai, W. A. Sandoval, N. Enyedy, A. S. Nixon, & F. Herrera (Eds.), Proceedings of the 6th International Conference on Learning Sciences (pp. 606–609). Mahwah, NJ: Erlbaum.
  16. Hoska, D. M. (1993). Motivating learners through CBI feedback: Developing a positive learner perspective. In V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 105–132). Englewood Cliffs, NJ: Educational Technology Publications.
  17. Jennings, J., & Rentner, D. S. (2006). Ten big effects of the No Child Left Behind Act on public schools. Phi Delta Kappan, 88(2), 110–113.
  18. Kay, J. (1998). A scrutable user modelling shell for user-adapted interaction (Doctoral thesis, University of Sydney, Sydney, Australia). Retrieved from http://sydney.edu.au/engineering/it/~judy/Homec/Pubs/thesis.pdf.
  19. Kim, C. (2007). Effects of motivation, volition and belief change strategies on attitudes, study habits and achievement in mathematics education (Doctoral dissertation, Florida State University, Tallahassee, FL).
  20. Koedinger, K., McLaughlin, E., & Heffernan, N. (2010). A quasi-experimental evaluation of an on-line formative assessment and tutoring system. Journal of Educational Computing Research, 43(4), 489–510.
  21. Lai, E. R. (2009). Interim assessment use in Iowa elementary schools (Doctoral thesis, University of Iowa, Iowa City, USA). Retrieved from http://ir.uiowa.edu/etd/393/.
  22. Mislevy, R. J. (1994). Evidence and inference in educational assessment. Psychometrika, 59, 439–483.
  23. *Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1(1), 3–62.
  24. Popham, W. J. (2009). A process—Not a test. Educational Leadership, 66(7), 85–86.
  25. Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4–13.
  26. *Sadler, D. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.
  27. Schwartz, D. L., Bransford, J. D., & Sears, D. L. (2005). Efficiency and innovation in transfer. In J. Mestre (Ed.), Transfer of learning from a modern multidisciplinary perspective (pp. 1–51). Greenwich, CT: Information Age Publishing.
  28. Shute, V. J. (2007). Tensions, trends, tools, and technologies: Time for an educational sea change. In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning (pp. 139–187). New York, NY: Lawrence Erlbaum Associates.
  29. *Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. doi: 10.3102/0034654307313795.
  30. Shute, V. J. (2009). Simply assessment. International Journal of Learning and Media, 1(2), 1–11. doi: 10.1162/ijlm.2009.0014.
  31. *Shute, V. J. (2011). Stealth assessment in computer-based games to support learning. In S. Tobias & J. D. Fletcher (Eds.), Computer games and instruction (pp. 503–524). Charlotte, NC: Information Age Publishers.
  32. Shute, V. J., Graf, E. A., & Hansen, E. (2005). Designing adaptive, diagnostic math assessments for individuals with and without visual disabilities. In L. PytlikZillig, R. Bruning, & M. Bodvarsson (Eds.), Technology-based education: Bringing researchers and practitioners together (pp. 169–202). Greenwich, CT: Information Age Publishing.
  33. Shute, V. J., Hansen, E. G., & Almond, R. G. (2008). You can’t fatten a hog by weighing it—or can you? Evaluating an assessment for learning system called ACED. International Journal of Artificial Intelligence in Education, 18(4), 289–316.
  34. Shute, V. J., & Towle, B. (2003). Adaptive e-learning. Educational Psychologist, 38(2), 105–114.
  35. *Shute, V. J., Ventura, M., Bauer, M. I., & Zapata-Rivera, D. (2009). Melding the power of serious games and embedded assessment to monitor and foster learning: Flow and grow. In U. Ritterfeld, M. Cody, & P. Vorderer (Eds.), Serious games: Mechanisms and effects (pp. 295–321). Mahwah, NJ: Routledge, Taylor and Francis.
  36. Shute, V. J., & Zapata-Rivera, D. (2008). Adaptive technologies. In J. M. Spector, D. Merrill, J. van Merriënboer, & M. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 277–294). New York, NY: Lawrence Erlbaum Associates.
  37. Shute, V. J., & Zapata-Rivera, D. (2010). Intelligent systems. In E. Baker, P. Peterson, & B. McGaw (Eds.), International encyclopedia of education (3rd ed., pp. 75–80). Oxford, UK: Elsevier.
  38. Steinberg, L. S., & Gitomer, D. G. (1996). Intelligent tutoring and assessment built on an understanding of a technical problem-solving task. Instructional Science, 24, 223–258.
  39. Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758–765.
  40. Symonds, K. W. (2004). After the test: Closing the achievement gaps with data. Naperville, IL: Learning Point Associates.
  41. Wiliam, D. (2006). Does assessment hinder learning? Speech delivered at the ETS Europe Breakfast Salon. Retrieved from http://www.decs.sa.gov.au/adelaidehills/files/links/williams_speech.pdf.
  42. *Wiliam, D., & Thompson, M. (2007). Integrating assessment with instruction: What will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning. Mahwah, NJ: Lawrence Erlbaum Associates.
  43. Zapata-Rivera, D., & Greer, J. E. (2004). Interacting with inspectable Bayesian models. International Journal of Artificial Intelligence in Education, 14, 127–163.
  44. Zapata-Rivera, D., Vanwinkle, W., Shute, V. J., Underwood, J. S., & Bauer, M. (2007). English ABLE. In R. Luckin, K. Koedinger, & J. Greer (Eds.), Artificial intelligence in education—Building technology rich learning contexts that work (pp. 323–330). Amsterdam, The Netherlands: IOS Press.

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Florida State University, Tallahassee, FL, USA
