Planning the Evaluation of Online Instruction



There are two types of evaluation, formative and summative. At this stage of the WBID Model, formative evaluation plans are fully developed and summative evaluation plans are developed to a preliminary state. The formative evaluation facilitates the revision of the prototype and its website as they are developed. This evaluation is enacted once the concurrent design stage begins and is then carried into the initial implementation of the online instruction, which would be considered a field trial. The second part of planning, the preliminary planning for summative evaluation, is an important feature of the WBID Model. It allows data about the instructional situation to be collected prior to implementation. Often, valuable information is lost when data on the state of the instructional products or practices are not collected before a new innovation is introduced (Salomon & Gardner, 1986). The final planning for and conducting of summative evaluation occurs after full implementation.

This chapter begins with an overview of the main purposes of evaluation and five general evaluation orientations, followed by a discussion of evaluation methods and tools. We then discuss how to develop each plan and ways to communicate and report formative evaluation findings. The chapter closes with a discussion of preliminary planning for summative evaluation. (Chapter 10 is devoted to the final planning and conducting of summative evaluation and research.)


Keywords: Evaluation · Formative evaluation · Summative evaluation · Efficiency evaluation · Effectiveness evaluation · Appeal evaluation · Microlearning · Communication plan · Usability


  1. Boulmetis, J., & Dutwin, P. (2011). The ABCs of evaluation: Timeless techniques for program and project managers (3rd ed.). San Francisco, CA: Jossey-Bass.
  2. Bryson, J. M. (2004). What to do when stakeholders matter. Public Management Review, 6(1), 21–53.
  3. Burton, L., & Goldsmith, D. (2002). Students’ experiences in online courses: A study using asynchronous online focus groups. New Britain, CT: Connecticut Distance Learning Consortium.
  4. Centers for Disease Control and Prevention (CDC). (2013). Evaluation reporting: A guide to help ensure use of evaluation findings. Atlanta, GA: U.S. Department of Health and Human Services.
  5. Cielo24. (2016). 2016 Federal and state accessibility guidelines and law for educators.
  6. Clark, D. (2015). Kirkpatrick's four level evaluation model. Big Dog and Little Dog’s Performance Juxtaposition.
  7. Davidson-Shivers, G. V., & Reese, R. M. (2014). Are online assessments measuring student learning or something else? In P. Lowenthal, C. York, & J. Richardson (Eds.), Online learning: Common misconceptions, benefits, and challenges (pp. 137–152). Hauppauge, NY: Nova Science Publishers.
  8. Denzin, N. K., & Lincoln, Y. S. (Eds.). (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.
  9. Dick, W., Carey, L., & Carey, J. O. (2015). The systematic design of instruction (8th ed.). Boston, MA: Pearson.
  10. Elkeles, T., Phillips, J. J., & Phillips, P. P. (2017). The chief talent officer: The evolving role of the chief learning officer (2nd ed.). New York, NY: Routledge.
  11. Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2012). Program evaluation: Alternative approaches and practical guidelines. Upper Saddle River, NJ: Pearson Education.
  12. Gagné, R. M., Wager, W. W., Golas, K. C., & Keller, J. M. (2005). Principles of instructional design (5th ed.). Belmont, CA: Wadsworth/Thomson Learning.
  13. Hu, D., & Potter, K. (2012). Designing an effective online learning environment. SEEN.
  14. Hug, T., & Friesen, N. (2009). Outline of a microlearning agenda. eLearning Papers, 16, 1–13.
  15. International Organization for Standardization. (2008). ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) – Part 11: Guidance on usability.
  16. Johnson, R. B., & Dick, W. (2012). Evaluation in instructional design: A comparison of evaluation models. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed., pp. 96–104). Upper Saddle River, NJ: Pearson.
  17. Joint Committee on Standards for Educational Evaluation. (2011). Webpage.
  18. Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.
  19. Lockee, B., Moore, M., & Burton, J. (2002). Measuring success: Evaluation strategies for distance education. Educause Quarterly, 25(1), 20–26.
  20. Lohr, L. L. (2008). Creating graphics for learning and performance: Lessons in visual literacy (2nd ed.). Upper Saddle River, NJ: Pearson/Merrill/Prentice Hall.
  21. Ormrod, J. E. (2014). Educational psychology: Developing learners (8th ed.). Boston, MA: Pearson.
  22. Pettersson, R. (2002). Information design: An introduction. Philadelphia, PA: John Benjamins Publishing Company.
  23. Praslova, L. (2010). Adaptation of Kirkpatrick’s four level model of training criteria to assessment of learning outcomes and program evaluation in higher education. Educational Assessment, Evaluation and Accountability, 22(3), 215–225.
  24. Richey, R. C., & Klein, J. D. (2007). Design and development research. New York, NY: Routledge.
  25. Salomon, G., & Gardner, H. (1986). The computer as educator: Lessons from television research. Educational Researcher, 15(10), 13–19.
  26. Slavin, R. (2015). Educational psychology: Theory into practice (11th ed.). Boston, MA: Pearson.
  27. Smith, P. L., & Ragan, T. J. (2005). Instructional design (3rd ed.). Hoboken, NJ: John Wiley & Sons.
  28. Stufflebeam, D. L., & Coryn, C. L. S. (2014). Evaluation theory, models, and applications (2nd ed.). San Francisco, CA: Jossey-Bass.
  29. U.S. Department of Education, Office of Innovation and Improvement. (2008). Evaluating online learning: Challenges and strategies for success. Washington, DC: U.S. Department of Education, Office of Innovation and Improvement.
  30. van Gog, T., & Paas, F. (2008). Instructional efficiency: Revisiting the original construct in educational research. Educational Psychologist, 43(1), 16–26.
  31. Van Tiem, D. M., Moseley, J. L., & Dessinger, J. C. (2012). Fundamentals of performance improvement: Optimizing results through people, process, and organizations (3rd ed.). San Francisco, CA: Pfeiffer.
  32. W3C. (2016). Accessibility, usability, and inclusion: Related aspects of a web for all. Web Accessibility Initiative.
  33. Wang, M., & Shen, R. (2012). Message design for mobile learning: Learning theories, human cognition and design principles. British Journal of Educational Technology, 43(4), 561–575.
  34. Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.

Copyright information

©  Springer International Publishing AG 2018

Authors and Affiliations

  1. Department of Counseling and Instructional Science, University of South Alabama, Mobile, USA
  2. Division of Research and Strategic Innovation, University of West Florida, Pensacola, USA
  3. Department of Educational Technology, Boise State University, Boise, USA
