
Quality of Life Research, Volume 23, Issue 1, pp 239–244

Testing measurement invariance of the patient-reported outcomes measurement information system pain behaviors score between the US general population sample and a sample of individuals with chronic pain

  • Hyewon Chung
  • Jiseon Kim
  • Karon F. Cook
  • Robert L. Askew
  • Dennis A. Revicki
  • Dagmar Amtmann

Abstract

Purpose

To test differences between group means, the construct measured must have the same meaning for all groups under investigation. This study examined the measurement invariance of responses to the patient-reported outcomes measurement information system (PROMIS) pain behavior (PB) item bank in two samples: the PROMIS calibration sample (Wave 1, N = 426) and a sample recruited from the American Chronic Pain Association (ACPA, N = 750). The ACPA data were collected to increase the number of participants with higher levels of pain.

Methods

Multi-group confirmatory factor analysis (MG-CFA) and two item response theory (IRT)-based differential item functioning (DIF) approaches were employed to evaluate measurement invariance.

Results

MG-CFA results supported metric invariance of the PROMIS–PB, indicating that unstandardized factor loadings were equal across samples. DIF analyses identified six items with DIF, but their impact was negligible.

Conclusions

Based on the results of both the MG-CFA and IRT-based DIF approaches, we recommend retaining the original parameter estimates obtained from the combined samples.

Keywords

Multi-group confirmatory factor analysis · Differential item functioning · Item response theory · Patient outcome measures · Pain measurement · Psychometrics

Abbreviations

ACPA: American Chronic Pain Association

CFA: Confirmatory factor analysis

DIF: Differential item functioning

IRT: Item response theory

MG-CFA: Multi-group confirmatory factor analysis

PB: Pain behavior

PROMIS: Patient-reported outcomes measurement information system

Acknowledgments

The project described was supported by Award Number 3U01AR052177-06S1 from the National Institute of Arthritis and Musculoskeletal and Skin Diseases. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Arthritis and Musculoskeletal and Skin Diseases or the National Institutes of Health.


Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  • Hyewon Chung (1)
  • Jiseon Kim (2)
  • Karon F. Cook (3)
  • Robert L. Askew (4)
  • Dennis A. Revicki (5)
  • Dagmar Amtmann (6)

  1. Department of Education, Chungnam National University, Daejeon, Korea
  2. Department of Rehabilitation Medicine, University of Washington, Seattle, USA
  3. Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, USA
  4. Department of Rehabilitation Medicine, University of Washington, Seattle, USA
  5. Outcomes Research, United BioSource Corporation, Bethesda, USA
  6. Department of Rehabilitation Medicine, University of Washington, Seattle, USA
