Abstract
The past 100 years have witnessed an evolution in the meaning of validity and validation within the fields of education and psychology. Validity was once viewed as a property of tests and scales but is now viewed as the extent to which theory and evidence support proposed interpretations and uses of test scores. Uncertainty about what types of validity evidence were needed motivated the current “argument-based” approach, as reflected in the 2014 Standards for Educational and Psychological Testing. According to this approach, investigators should delineate the assumptions required for a proposed interpretation or use to be plausible and then seek evidence that supports or refutes those assumptions. Although validation practices within the field of patient-reported outcome measurement have implicitly included many elements of the argument-based approach, the approach has yet to be explicitly adopted. To facilitate adoption, this article proposes an initial set of assumptions that might be included in most arguments for research-related interpretations and uses of scores from patient-reported outcome measures. The article also includes brief descriptions of the types of evidence best suited for evaluating each assumption. It is hoped that these generic assumptions will stimulate further discussion and debate among quality of life researchers regarding how best to apply modern validity theory to patient-reported outcome measures.
References
Zumbo, B. D., & Chan, E. K. H. (Eds.). (2014). Validity and validation in social, behavioral, and health sciences (Social Indicators Research Series). Cham: Springer.
Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73. https://doi.org/10.1111/jedm.12000.
Edwards, M. C., Slagle, A., Rubright, J. D., & Wirth, R. J. (2017). Fit for purpose and modern validity theory in clinical outcomes assessment. Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care and Rehabilitation, 14(2), 1–10. https://doi.org/10.1007/s11136-017-1644-z.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Kane, M. T. (2013). Validation as a pragmatic, scientific activity. Journal of Educational Measurement, 50(1), 115–122. https://doi.org/10.1111/jedm.12007.
Cook, D. A., Brydges, R., Ginsburg, S., & Hatala, R. (2015). A contemporary approach to validity arguments: A practical guide to Kane’s framework. Medical Education, 49(6), 560–575. https://doi.org/10.1111/medu.12678.
Hatala, R., Cook, D. A., Brydges, R., & Hawkins, R. (2015). Constructing a validity argument for the objective structured assessment of technical skills (OSATS): A systematic review of validity evidence. Advances in Health Sciences Education Theory and Practice, 20(5), 1149–1175. https://doi.org/10.1007/s10459-015-9593-1.
Hawkins, M. (2018). Application of validity theory and methodology to patient-reported outcome measures (PROMs): Building an argument for validity. Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care and Rehabilitation, 27(7), 1695–1710. https://doi.org/10.1007/s11136-018-1815-6.
Reeves, T. D., & Marbach-Ad, G. (2016). Contemporary test validity in theory and practice: A primer for discipline-based education researchers. CBE Life Sciences Education, 15(1), 11. https://doi.org/10.1187/cbe.15-08-0183.
Walton, M. K., Powers, J. H., Hobart, J., Patrick, D., Marquis, P., Vamvakas, S., et al. (2015). Clinical outcome assessments: Conceptual foundation—report of the ISPOR clinical outcomes assessment—emerging good practices for outcomes research task force. Value in Health, 18, 741–752.
Weinfurt, K. P. (2019). Viewing assessments of patient-reported health status as conversations: Implications for developing and evaluating patient-reported outcome measures. Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care and Rehabilitation, 13(1), 1–7. https://doi.org/10.1007/s11136-019-02285-8.
Willis, G. B. (2005). Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks: Sage.
Willis, G. B. (2015). Analysis of the cognitive interview in questionnaire design. Oxford: Oxford University Press.
McClimans, L. (2010). Towards self-determination in quality of life research: A dialogic approach. Medicine, Health Care, and Philosophy, 13(1), 67–76. https://doi.org/10.1007/s11019-009-9195-x.
McClimans, L. (2010). A theoretical framework for patient-reported outcome measures. Theoretical Medicine and Bioethics, 31(3), 225–240. https://doi.org/10.1007/s11017-010-9142-0.
Byrom, B., Gwaltney, C., Slagle, A., Gnanasakthy, A., & Muehlhausen, W. (2019). Measurement equivalence of patient-reported outcome measures migrated to electronic formats: A review of evidence and recommendations for clinical trials and bring your own device. Therapeutic Innovation & Regulatory Science, 53(4), 426–430. https://doi.org/10.1177/2168479018793369.
Eremenco, S., Coons, S. J., Paty, J., Coyne, K., Bennett, A. V., McEntegart, D., et al. (2014). PRO data collection in clinical trials using mixed modes: Report of the ISPOR PRO mixed modes good research practices task force. Value in Health, 17(5), 501–516. https://doi.org/10.1016/j.jval.2014.06.005.
Fayers, P. M., & Machin, D. (2016). Quality of life (3rd ed.). Chichester: Wiley-Blackwell.
Costa, D. S. J. (2015). Reflective, causal, and composite indicators of quality of life: A conceptual or an empirical distinction? Quality of Life Research, 24(9), 1–9. https://doi.org/10.1007/s11136-015-0954-2.
Bollen, K. A., & Bauldry, S. (2011). Three Cs in measurement models: Causal indicators, composite indicators, and covariates. Psychological Methods, 16(3), 265–284. https://doi.org/10.1037/a0024448.
Gobo, G., & Mauceri, S. (2014). Constructing survey data. Thousand Oaks: Sage.
Brennan, R. L. (2001). An essay on the history and future of reliability from the perspective of replications. Journal of Educational Measurement, 38(4), 295–317. https://doi.org/10.1111/j.1745-3984.2001.tb01129.x.
Acknowledgements
I am grateful to my colleagues Karon Cook, PhD, Kathryn Flynn, PhD, Bryce Reeve, PhD, and Arthur Stone, PhD, who provided invaluable feedback on an earlier draft of this paper, helping me to understand what I was trying to say and in many cases providing better ways to say it. The manuscript was much improved thanks to suggestions from two anonymous reviewers. I am also indebted to Karen Staman, MS, who provided invaluable editorial support.
Funding
The author reports no funding for the current work.
Author information
Contributions
KW: conceptualization and drafting of the final manuscript. The author wrote the manuscript and is responsible for the ideas presented herein.
Ethics declarations
Conflict of interest
The author declares that he has no conflict of interest.
Ethical approval
This article does not contain any studies with human participants performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Weinfurt, K.P. Constructing arguments for the interpretation and use of patient-reported outcome measures in research: an application of modern validity theory. Qual Life Res 30, 1715–1722 (2021). https://doi.org/10.1007/s11136-021-02776-7