Educational Technology Research and Development, Volume 63, Issue 1, pp 97–124

Designing online software for teaching the concept of variable that facilitates mental interaction with the material: systemic approach

  • Natalya A. Koehler
  • Ann D. Thompson
  • Ana-Paula Correia
  • Linda Serra Hagedorn
Development Article

Abstract

Our case study is a response to the need for research and reporting on specific strategies employed by software designers to produce effective multimedia instructional solutions. We present a systemic approach for identifying appropriate software features and for conducting a formative evaluation that assesses both the overall effectiveness of the multimedia instructional design and the effectiveness of specific features of the software. The instructional software for teaching the concept of variable was designed as a research platform and tested with 90 undergraduate students at a Midwestern university. Behavior tracking and data collection instruments (pre-test, surveys, and delayed post-test) were embedded in the software. As part of the design-engineer-develop approach, two potential types of feedback, single-try versus two-tries, were tested in two experimental conditions. The results of the formative evaluation provided preliminary evidence of the effectiveness of the designed software with either type of feedback for both high and low prior knowledge students. An innovative instructional strategy for helping the learner mindfully process the program feedback is described.

Keywords

Formative evaluation · Instructional design · Web-based interactive multimedia · Computer-assisted instruction · Software development · Design experiment


Copyright information

© Association for Educational Communications and Technology 2014

Authors and Affiliations

  • Natalya A. Koehler (1)
  • Ann D. Thompson (2)
  • Ana-Paula Correia (3)
  • Linda Serra Hagedorn (4)

  1. Instructional Design Faculty, International Institute for Innovative Instruction, Franklin University, Columbus, USA
  2. University Professor Emeritus, School of Education, Ames, USA
  3. School of Education, Ames, USA
  4. Associate Dean of Undergraduate Programs, Educational Leadership & Policy Studies (ELPS), Iowa State University, Ames, USA
