Quality of Life Research, Volume 22, Issue 3, pp 501–507

Measurement invariance of the PROMIS pain interference item bank across community and clinical samples

  • Jiseon Kim
  • Hyewon Chung
  • Dagmar Amtmann
  • Dennis A. Revicki
  • Karon F. Cook



Purpose

This study examined the measurement invariance of responses to the Patient-Reported Outcomes Measurement Information System (PROMIS) pain interference (PI) item bank. The original PROMIS calibration sample (Wave I) was augmented with a sample recruited through the American Chronic Pain Association (ACPA) to increase the number of participants reporting higher levels of pain. Establishing the measurement invariance of an item bank is essential for valid interpretation of group differences in the latent concept being measured.


Methods

Multi-group confirmatory factor analysis (MG-CFA) was used to evaluate successively more constrained levels of measurement invariance: configural, metric, and scalar.
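Each step in this sequence compares a more constrained model (equal loadings, then equal intercepts/thresholds across groups) against the previous one. A common decision aid is the change-in-CFI rule of thumb (a drop of no more than .01 between nested models; Cheung & Rensvold, 2002). The sketch below illustrates that decision logic only; the fit values and the helper name `invariance_step_holds` are hypothetical, not taken from this study.

```python
# Hedged sketch of the Delta-CFI rule for successive invariance tests.
# CFI values here are illustrative placeholders, not the study's results.

def invariance_step_holds(cfi_less_constrained: float,
                          cfi_more_constrained: float,
                          threshold: float = 0.01) -> bool:
    """True if adding constraints worsens CFI by no more than the threshold."""
    return (cfi_less_constrained - cfi_more_constrained) <= threshold

# Hypothetical CFI values for the three nested models.
fits = {"configural": 0.975, "metric": 0.972, "scalar": 0.941}

levels = ["configural", "metric", "scalar"]
for prev, curr in zip(levels, levels[1:]):
    ok = invariance_step_holds(fits[prev], fits[curr])
    print(f"{prev} -> {curr}: {'supported' if ok else 'not supported'}")
```

With these placeholder values the metric step would pass (ΔCFI = .003) and the scalar step would fail (ΔCFI = .031), mirroring the pattern of results reported below, though the actual study relied on the full set of fit indices, not CFI alone.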


Results

Support was found for configural and metric invariance of the PROMIS-PI, but not for scalar invariance.

Conclusions and recommendations

Based on the MG-CFA results, we recommend retaining the original parameter estimates obtained by combining the Wave I community sample and the ACPA participants. Future studies should extend this work by examining measurement equivalence within an item response theory framework, for example through differential item functioning analysis.


Keywords: Factor analysis · Pain interference · Pain measurement · Patient outcome measures · Psychometrics



Abbreviations

ACPA: American Chronic Pain Association
CFA: Confirmatory factor analysis
IRT: Item response theory
MG-CFA: Multi-group confirmatory factor analysis
PI: Pain interference
PROMIS: Patient-Reported Outcomes Measurement Information System



The project described was supported by Award Number 3U01AR052177-06S1 from the National Institute of Arthritis and Musculoskeletal and Skin Diseases. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Arthritis and Musculoskeletal and Skin Diseases or the National Institutes of Health.



Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  • Jiseon Kim (1)
  • Hyewon Chung (2)
  • Dagmar Amtmann (1)
  • Dennis A. Revicki (3)
  • Karon F. Cook (4)
  1. Department of Rehabilitation Medicine, University of Washington, Seattle, USA
  2. Department of Education, Chungnam National University, Daejeon, Korea
  3. Center for Health Outcomes Research, United BioSource Corporation, Bethesda, USA
  4. Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, USA
