
Ten Commandments for the evaluation of interactive multimedia in higher education

  • Selected Conference Paper
  • Published in: Journal of Computing in Higher Education

Abstract

A set of guidelines for redirecting evaluation and research involving interactive multimedia (IMM) in higher education is presented in the form of “Ten Commandments.” Each commandment is “illuminated” with anecdotes and stories that illustrate its importance and application. In light of the complexity of human learning via IMM and the politics of higher education, the commandments stress descriptive approaches to research and evaluation, including “modeling” methods that integrate quantitative and qualitative data.




About this article

Cite this article

Reeves, T.C. Ten Commandments for the evaluation of interactive multimedia in higher education. J. Comput. High. Educ. 2, 84–113 (1991). https://doi.org/10.1007/BF02941590
