
Differences in expert practice: a case from formative evaluation

Published in Instructional Science.

Abstract

A clear case has been made in the literature for the role of the learner in the formative evaluation of instructional materials. In contrast, research on the role of the expert has been limited. This paper describes formative evaluation as practiced by instructional designers and subject matter experts, highlighting their task interpretation, their focus on text features, and their strategies. Content analysis of think-aloud protocols suggests that instructional designers take on the role of a generalist, use a comparative method of review, and appear to be guided by the heuristics of the Instructional Systems Design model. In contrast, subject matter experts approach the task as specialists, use a sequential method of review, and seem to be guided by their domain knowledge. These differences lead to the identification of different problems and the generation of different revision recommendations. The implications of these findings are discussed.




About this article

Cite this article

Saroyan, A. Differences in expert practice: a case from formative evaluation. Instr Sci 21, 451–472 (1992). https://doi.org/10.1007/BF00118558
