Testing measurement invariance of the Patient-Reported Outcomes Measurement Information System pain behaviors score between the US general population sample and a sample of individuals with chronic pain
To compare group means meaningfully, the construct measured must have the same meaning for all groups under investigation. This study examined measurement invariance of responses to the Patient-Reported Outcomes Measurement Information System (PROMIS) pain behavior (PB) item bank in two samples: the PROMIS calibration sample (Wave 1, N = 426) and a sample recruited through the American Chronic Pain Association (ACPA, N = 750). The ACPA data were collected to increase the number of participants with higher levels of pain.
Multi-group confirmatory factor analysis (MG-CFA) and two item response theory (IRT)-based differential item functioning (DIF) approaches were employed to evaluate the existence of measurement invariance.
MG-CFA results supported metric invariance of the PROMIS PB score, indicating that unstandardized factor loadings were equal across samples. DIF analyses flagged six items, but the impact of these DIF items was negligible.
Based on the results of both the MG-CFA and IRT-based DIF approaches, we recommend retaining the original item parameter estimates obtained from the combined samples.
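The logistic-regression approach to uniform DIF detection referenced in the methods can be illustrated with a minimal sketch. This is not the authors' implementation; it is a simplified, self-contained demonstration on synthetic binary item responses, using a likelihood-ratio test that compares a model predicting the item from a matching (rest) score against a model that adds group membership. All variable names and the simulated data are illustrative assumptions.

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Fit a logistic regression by Newton-Raphson; return (beta, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        grad = X.T @ (y - p)
        hess = (X * w[:, None]).T @ X
        beta += np.linalg.solve(hess + 1e-8 * np.eye(X.shape[1]), grad)
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return beta, np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def uniform_dif_stat(item, rest_score, group):
    """Likelihood-ratio statistic for uniform DIF: does group membership
    predict the item response beyond the matching (rest) score?"""
    n = len(item)
    base = np.column_stack([np.ones(n), rest_score])      # item ~ rest score
    full = np.column_stack([base, group])                  # + group term
    _, ll0 = fit_logistic(base, item)
    _, ll1 = fit_logistic(full, item)
    return 2.0 * (ll1 - ll0)                               # ~ chi-square(1) under no DIF

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic demo: two groups, one item with a 1.2-logit uniform DIF effect.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n).astype(float)   # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, n)               # latent trait
anchors = (rng.random((n, 5)) < sigmoid(theta[:, None])).astype(float)
rest = anchors.sum(axis=1)                    # DIF-free matching score
item_fair = (rng.random(n) < sigmoid(theta)).astype(float)
item_dif = (rng.random(n) < sigmoid(theta - 1.2 * group)).astype(float)

stat_fair = uniform_dif_stat(item_fair, rest, group)
stat_dif = uniform_dif_stat(item_dif, rest, group)
print(f"fair item LR = {stat_fair:.1f}, DIF item LR = {stat_dif:.1f}")
# Each statistic is compared against the chi-square(1) critical value (3.84 at alpha = .05).
```

In practice, PROMIS items are polytomous, so ordinal logistic regression and effect-size criteria (not just significance) are used to judge whether flagged DIF is large enough to matter, which is how the study concludes that the impact of the six DIF items is negligible.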
Keywords: Multi-group confirmatory factor analysis; Differential item functioning; Item response theory; Patient outcome measures; Pain measurement; Psychometrics
Abbreviations: ACPA, American Chronic Pain Association; CFA, confirmatory factor analysis; DIF, differential item functioning; IRT, item response theory; MG-CFA, multi-group confirmatory factor analysis; PROMIS, Patient-Reported Outcomes Measurement Information System
The project described was supported by Award Number 3U01AR052177-06S1 from the National Institute of Arthritis and Musculoskeletal and Skin Diseases. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Arthritis and Musculoskeletal and Skin Diseases or the National Institutes of Health.