Quality of Life Research, Volume 24, Issue 8, pp 1809–1822

Testing item response theory invariance of the standardized Quality-of-life Disease Impact Scale (QDIS®) in acute coronary syndrome patients: differential functioning of items and test

  • Nina Deng
  • Milena D. Anatchkova
  • Molly E. Waring
  • Kyung T. Han
  • John E. Ware Jr.

Abstract

Purpose

The Quality-of-life (QOL) Disease Impact Scale (QDIS®) standardizes the content and scoring of the QOL impact attributed to different diseases using item response theory (IRT). This study examined the invariance of the standardized QDIS IRT item parameters in an independent sample.

Method

Differential functioning of items and tests (DFIT) was examined for a static short form (QDIS-7) across two independent samples: patients hospitalized for acute coronary syndrome (ACS) in the TRACE-CORE study (N = 1,544) and chronically ill US adults in the QDIS standardization sample. "ACS-specific" IRT item parameters were calibrated and linearly transformed onto the metric of the "standardized" IRT item parameters. Differences in IRT model-expected item, scale, and theta scores were then examined. The DFIT results were also compared with those of a standard logistic regression differential item functioning analysis.

Results

Item parameters estimated in the ACS sample showed lower discrimination than the standardized discrimination parameters, but only small differences were found for threshold parameters. In DFIT, values of the non-compensatory differential item functioning index (range 0.005–0.074) were all below the threshold of 0.096, and item-level differences largely canceled out at the scale level. IRT-based theta scores for ACS patients computed using standardized and ACS-specific item parameters were highly correlated (r = 0.995; root-mean-square difference = 0.09). Using standardized item parameters, ACS patients scored one-half standard deviation higher (indicating greater QOL impact) than chronically ill adults in the standardization sample.

Conclusion

The study showed sufficient IRT invariance to warrant the use of standardized IRT scoring of the QDIS-7 in studies comparing the QOL impact attributed to acute coronary disease and other chronic conditions.

Keywords

Item response theory invariance · Differential item functioning · Differential test (scale) functioning · Measurement invariance · Disease-specific quality-of-life measures

Abbreviations

ACS: Acute coronary syndrome

ACS-LT: ACS-specific linearly transformed

CAT: Computerized adaptive testing

CDIF: Compensatory differential item functioning

CFA: Confirmatory factor analysis

DFIT: Differential functioning of items and tests

DICAT: The Computerized Adaptive Assessment of Disease Impact project

DIF: Differential item functioning

DTF: Differential test (scale) functioning

GPCM: Generalized partial credit model

ICC: Item characteristic curve

IPD: Item parameter drift

IRT: Item response theory

MLHFQ: Minnesota Living with Heart Failure Questionnaire

NCDIF: Non-compensatory differential item functioning

PRO: Patient-reported outcome

PROMIS: Patient-Reported Outcomes Measurement Information System

QDIS®: Quality-of-life Disease Impact Scale

QDIS-7: 7-item short form of QDIS®

QOL: Quality-of-life

RMSD: Root-mean-square difference

SAQ: Seattle Angina Questionnaire

TCC: Test characteristic curve

TRACE-CORE: The Transitions, Risks, and Actions in Coronary Events-Center for Outcomes Research and Education project


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Nina Deng (1, 2)
  • Milena D. Anatchkova (1, 3)
  • Molly E. Waring (1)
  • Kyung T. Han (4)
  • John E. Ware Jr. (1, 5)

  1. Department of Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, USA
  2. Measured Progress, Inc., Dover, USA
  3. Evidera, Lexington, USA
  4. Graduate Management Admission Council, Reston, USA
  5. John Ware Research Group, Incorporated, Worcester, USA
