Planning the Evaluation of Online Instruction

Abstract

There are two types of evaluation: formative and summative. At this stage of the WBID Model, formative evaluation plans are fully developed and summative evaluation plans are developed to a preliminary state. The formative evaluation facilitates revision of the prototype and its website as they are developed. This evaluation is enacted once the concurrent design stage begins and is then carried into the initial implementation of the online instruction, which is considered a field trial. The second part of planning, the preliminary planning for summative evaluation, is an important feature of the WBID Model: it allows data about the instructional situation to be collected prior to implementation. Valuable information is often lost when data on the state of the instructional products or practices are not collected before a new innovation is introduced (Salomon & Gardner, 1986). The final planning for and conducting of summative evaluation occurs after full implementation.

This chapter begins with an overview of the main purposes of evaluation and five general evaluation orientations, followed by a discussion of the evaluation methods and tools. We then discuss how to develop each plan and ways to communicate and report formative evaluation findings. The chapter closes with a discussion of preliminary planning for summative evaluation. (Chapter 10 is devoted to the final planning and conducting of summative evaluation and research.)

References

  • Boulmetis, J., & Dutwin, P. (2011). The ABCs of evaluation: Timeliness techniques for program and project managers (3rd ed.). San Francisco, CA: Jossey-Bass.

  • Bryson, J. M. (2004). What to do when stakeholders matter. Public Management Review, 6(1), 21–53. https://doi.org/10.1080/14719030410001675722

  • Burton, L., & Goldsmith, D. (2002). Students’ experiences in online courses: A study using asynchronous online focus groups. New Britain, CT: Connecticut Distance Learning Consortium. Retrieved from https://www.ctdlc.org/ResourceDocs/evaluation/StudentExperience.pdf

  • Centers for Disease Control and Prevention (CDC). (2013). Evaluation reporting: A guide to help ensure use of evaluation findings. Atlanta, GA: US Dept. of Health and Human Services. Retrieved from https://www.cdc.gov/dhdsp/docs/evaluation_reporting_guide.pdf

  • Cielo24. (2016). 2016 Federal and state accessibility guidelines and law for educators. Retrieved from https://www.unthsc.edu/center-for-innovative-learning/wp-content/uploads/sites/35/2015/12/2016federalandstateaccessibilityforeducators.pdf

  • Clark, D. (2015). Kirkpatrick's four level evaluation model. Big dog and little dog’s performance juxtaposition. Retrieved from http://www.nwlink.com/~donclark/hrd/isd/kirkpatrick.html

  • Davidson-Shivers, G. V., & Reese, R. M. (2014). Are online assessments measuring student learning or something else? In P. Lowenthal, C. York, & J. Richardson (Eds.), Online learning: Common misconceptions, benefits, and challenges (pp. 137–152). Hauppauge, NY: Nova Science Publishers.

  • Denzin, N. K., & Lincoln, Y. S. (Eds.). (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.

  • Dick, W., Carey, L., & Carey, J. O. (2015). The systematic design of instruction (8th ed.). Boston, MA: Pearson.

  • Elkeles, T., Phillips, J. J., & Phillips, P. P. (2017). The chief talent officer: The evolving role of the chief learning officer (2nd ed.). New York, NY: Routledge.

  • Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2012). Program evaluation: Alternative approaches and practical guidelines. Upper Saddle River, NJ: Pearson Education.

  • Gagné, R. M., Wager, W. W., Golas, K. C., & Keller, J. M. (2005). Principles of instructional design (5th ed.). Belmont, CA: Wadsworth/Thomson Learning.

  • Hu, D., & Potter, K. (2012). Designing an effective online learning environment. SEEN. Retrieved from http://www.seenmagazine.us/Articles/Article-Detail/articleid/2000/designing-an-effective-online-learning-environment

  • Hug, T., & Friesen, N. (2009). Outline of a microlearning agenda. eLearning Papers, 16, 1–13. Retrieved from http://www.academia.edu/2817967/Outline_of_a_Microlearning_agenda

  • International Organization for Standardization. (2008). 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) – Part 11: Guidance on usability. Retrieved from https://www.iso.org/standard/16883.html

  • Johnson, R. B., & Dick, W. (2012). Evaluation in instructional design: A comparison of evaluation models. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed., pp. 96–104). Upper Saddle River, NJ: Pearson.

  • Joint Committee on Standards for Educational Evaluation. (2011). Webpage. Retrieved from http://www.jcsee.org/program-evaluation-standards-statements

  • Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.

  • Lockee, B., Moore, M., & Burton, J. (2002). Measuring success: Evaluation strategies for distance education. Educause Quarterly, 25(1), 20–26.

  • Lohr, L. L. (2008). Creating graphics for learning and performance: Lessons in visual literacy (2nd ed.). Upper Saddle River, NJ: Pearson/Merrill/Prentice Hall.

  • Ormrod, J. E. (2014). Educational psychology: Developing learners (8th ed.). Boston: Pearson.

  • Pettersson, R. (2002). Information design: An introduction. Philadelphia: John Benjamins Publishing Company.

  • Praslova, L. (2010). Adaptation of Kirkpatrick’s four level model of training criteria to assessment of learning outcomes and program evaluation in Higher Education. Educational Assessment, Evaluation and Accountability, 22(3), 215–225. https://doi.org/10.1007/s11092-010-9098-7

  • Richey, R. C., & Klein, J. D. (2007). Design and development research. New York, NY: Routledge.

  • Salomon, G., & Gardner, H. (1986). The computer as educator: Lessons from television research. Educational Researcher, 15(10), 13–19.

  • Slavin, R. (2015). Educational psychology: Theory into practice (11th ed.). Boston: Pearson.

  • Smith, P. L., & Ragan, T. J. (2005). Instructional design (3rd ed.). Hoboken, NJ: John Wiley & Sons.

  • Stufflebeam, D. L., & Coryn, C. L. S. (2014). Evaluation theory, models, and applications (2nd ed.). San Francisco, CA: Jossey-Bass, A Wiley Brand.

  • U.S. Department of Education, Office of Innovation and Improvement. (2008). Evaluating online learning: Challenges and strategies for success. Washington, DC: U.S. Department of Education, Office of Innovation and Improvement.

  • van Gog, T., & Paas, F. (2008). Instructional efficiency: Revisiting the original construct in educational research. Educational Psychologist, 43(1), 16–26.

  • Van Tiem, D. M., Moseley, J. L., & Dessinger, J. C. (2012). Fundamentals of performance improvement: Optimizing results through people, process, and organizations (3rd ed.). San Francisco, CA: Pfeiffer.

  • W3C. (2016). Accessibility, usability, and inclusion: Related aspects of a web for all. Web Accessibility Initiative. Retrieved from https://www.w3.org/WAI/intro/usable

  • Wang, M., & Shen, R. (2012). Message design for mobile learning: learning theories, human cognition and design principles. British Journal of Educational Technology, 43(4), 561–575. https://doi.org/10.1111/j.1467-8535.2011.01214.x

  • Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.

Copyright information

© 2018 Springer International Publishing AG

About this chapter

Cite this chapter

Davidson-Shivers, G.V., Rasmussen, K.L., Lowenthal, P.R. (2018). Planning the Evaluation of Online Instruction. In: Web-Based Learning. Springer, Cham. https://doi.org/10.1007/978-3-319-67840-5_5

  • DOI: https://doi.org/10.1007/978-3-319-67840-5_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67839-9

  • Online ISBN: 978-3-319-67840-5
