Developing Measurement Technique

  • Charles P. Friedman
  • Jeremy C. Wyatt
Part of the Computers and Medicine book series (C+M)


This chapter applies the theoretical material of Chapter 5 to actual practice. In Chapter 5 we introduced theories of measurement that were, in effect, theories of error. In this chapter we address specific procedures for estimating and minimizing error. We discuss the structure of measurement studies, the mechanics of conducting them, and how to use the results of these studies to improve measurement techniques. We consider how to develop measurement methods that yield results that are acceptably reliable and valid. We discuss in detail three specific situations that arise frequently in informatics: first, when the repeated observations in a measurement process are tasks completed by either persons or information resources; second, when the repeated observations are the opinions of judges about clinical cases; and third, when the repeated observations are items or questions on forms. Although the same overall measurement issues apply in all three instances, there are issues of implementation and technique specific to each.
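To make the idea of "estimating error" concrete, the sketch below (purely illustrative, not taken from the chapter) computes Cronbach's alpha, one standard index of reliability when the repeated observations are items on a form or ratings by judges. The ratings matrix is hypothetical; values of alpha near 1.0 indicate that little of the observed variation is measurement error.

```python
# Illustrative sketch: estimating reliability from repeated observations.
# Not the chapter's own procedure; Cronbach's alpha is one common choice.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = objects measured,
    columns = repeated observations (items, tasks, or judges)."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each observation
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 clinical cases each rated by 4 judges on a 1-7 scale.
ratings = np.array([[5, 6, 5, 6],
                    [2, 3, 2, 2],
                    [6, 7, 6, 7],
                    [3, 3, 4, 3],
                    [4, 5, 4, 5]])
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # ~0.98: judges agree closely
```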


Keywords: Response Option, Measurement Process, Information Resource, Object Class, Measurement Study
These keywords were added by machine, not by the authors. The process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Charles P. Friedman (1, 2)
  • Jeremy C. Wyatt (3)
  1. University of North Carolina, Pittsburgh, USA
  2. Center for Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
  3. Imperial Cancer Research Fund, London, UK
