Journal of Digital Imaging, Volume 26, Issue 6, pp 1099–1106

Trend of Contrast Detection Threshold with and without Localization

  • David L. Leong
  • Louise Rainford
  • Tamara Miner Haygood
  • Gary J. Whitman
  • William R. Geiser
  • Beatriz E. Adrada
  • Lumarie Santiago
  • Patrick C. Brennan


Abstract

Published data on contrast detection thresholds are based primarily on research using a location-known methodology. In previous work testing the Digital Imaging and Communications in Medicine (DICOM) Grayscale Standard Display Function (GSDF) for perceptual linearity, this research group used a location-unknown methodology to more closely reflect clinical practice. A high false-positive rate produced high variance, leading to the conclusion that the effect of employing a location-known methodology needed to be explored. Fourteen readers reviewed two sets of simulated mammographic background images, one with the location-unknown and one with the location-known methodology. The results of the reader study were analyzed using receiver operating characteristic (ROC) methodology and a paired t test, and the contrast detection threshold was analyzed using contingency tables. No statistically significant difference was found in GSDF testing, but a highly statistically significant difference (p < 0.0001) was seen in the area under the ROC curve (AUC) between the location-unknown and location-known methodologies. The location-known methodology not only improved the power of the GSDF test but also affected the contrast detection threshold, which changed from +3 gray levels when the location was unknown to +2 gray levels when the location was known. The choice of location known versus location unknown in experimental design must be considered carefully to ensure that the conclusions of the experiment reflect the study's objectives.
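The AUC comparison described above rests on rating-based ROC analysis. As an illustrative sketch only (not the paper's analysis software, and with made-up ratings rather than the study's data), the nonparametric Mann–Whitney estimate of AUC commonly applied to such confidence-rating data can be computed as follows:

```python
# Illustrative sketch: nonparametric (Mann-Whitney) estimate of the area
# under the ROC curve from reader confidence ratings. The ratings below
# are hypothetical examples, not data from this study.

def auc_from_ratings(signal_ratings, noise_ratings):
    """Estimate AUC as P(signal rating > noise rating) + 0.5 * P(tie),
    computed over all signal/noise rating pairs."""
    wins = ties = 0
    for s in signal_ratings:
        for n in noise_ratings:
            if s > n:
                wins += 1
            elif s == n:
                ties += 1
    total = len(signal_ratings) * len(noise_ratings)
    return (wins + 0.5 * ties) / total

# Hypothetical 5-point confidence ratings from one reader on
# target-present (signal) and target-absent (noise) images.
signal = [5, 4, 5, 4, 3, 5]
noise = [1, 2, 1, 3, 2, 1]
print(auc_from_ratings(signal, noise))
```

In a multireader study such as this one, per-reader AUCs for the two conditions would then be compared with a method that accounts for reader and case variability (e.g., the Dorfman–Berbaum–Metz jackknife approach) rather than a simple average.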

Key words

Image perception, contrast threshold, GSDF, ROC, SKE, LKE



Acknowledgments

The authors would like to thank the participants who devoted two reading sessions in support of this research.



Copyright information

© Society for Imaging Informatics in Medicine 2013

Authors and Affiliations

  • David L. Leong¹˒²
  • Louise Rainford²
  • Tamara Miner Haygood³
  • Gary J. Whitman³
  • William R. Geiser⁴
  • Beatriz E. Adrada³
  • Lumarie Santiago³
  • Patrick C. Brennan⁵

  1. Analogic Corporation, Peabody, USA
  2. Diagnostic Imaging, UCD School of Medicine and Medical Sciences, University College Dublin, Dublin 4, Ireland
  3. Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, USA
  4. Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, USA
  5. Brain and Mind Research Institute, Faculty of Health Science, University of Sydney, Sydney, Australia
