
Clinical Orthopaedics and Related Research®, Volume 475, Issue 8, pp 1932–1935

Editor’s Spotlight/Take 5: Readability of Orthopaedic Patient-reported Outcome Measures: Is There a Fundamental Failure to Communicate?

  • M. Daniel Wongworawat
Editor's Spotlight/Take 5

When the purpose of treatment is to help the patient by reducing symptoms and improving quality of life, there are some questions that only the patient can answer. As we seek each patient’s answers, it turns out that the way we ask our questions may be as important as the questions themselves. How good are we at asking our patients these questions? Do our patients even understand them?

A recent study of orthopaedic patient-reported outcome measures (PROMs) reported that most such materials are written at a level that is incomprehensible to the average adult reader [2]. The problem of hard-to-understand content is, in fact, even more widespread than that. A total of 97% of the reading material on the American Academy of Orthopaedic Surgeons website was above the 6th grade reading level in one study [4]. Similar findings have been reported across disciplines and in other kinds of patient-education materials [3, 5, 6, 7, 10, 13]. With increasing emphasis on PROMs, the incomprehensibility of medical information written for our patients calls into question the reliability of the feedback we receive.

These concerns prompted Dr. Brent A. Ponce and his team at the University of Alabama at Birmingham to study the readability of PROMs used in orthopaedic surgery. They compared their results against both the Centers for Medicare & Medicaid Services’ (CMS) and the NIH’s recommendations for reading grade levels. Given the reporting to the contrary [2, 4], Dr. Ponce’s group offers a surprising finding—that the vast majority of PROMs are, in fact, written at an acceptable grade level. However, a small number of PROMs remain beyond the reading comprehension of most patients.

Dr. Ponce and his coauthors went beyond analysis; they showed how deliberate editing can improve readability of medical information for patients. Editing improved all of the difficult-to-read PROMs. These suggestions are generalizable to printed information for patients. Using simple steps and available tools, we can apply the findings of Dr. Ponce’s work to material we produce for our patients, from the development of future PROMs to printed discharge instructions.

Please join me for the Take 5 interview with Brent A. Ponce MD, as we explore the important topic of communication with patients.

Take 5 Interview with Brent A. Ponce MD, senior author of “Readability of Orthopaedic Patient-reported Outcome Measures: Is There a Fundamental Failure to Communicate?”

M. Daniel Wongworawat MD: You are familiar with previous reports [2, 4] showing that orthopaedic patient material often exceeds the reading level of those patients. Why do you think your findings are so different?

Brent A. Ponce MD: Assessing readability is challenging, especially in medicine. Physicians generally communicate as they would with peers instead of adopting the perspective of a patient who may not have the same education. Readability algorithms were specifically designed for use in the military, educational, or business sectors—not in medicine. Additionally, the currently used readability algorithms have been around for many years, and each uses a different formula that emphasizes different items to assess text readability. Using a lone metric, as in prior orthopaedic studies, fails to consider the numerous accepted alternative ways to assess readability. For example, one algorithm may use the number of “complex words” in a text to determine grade level, while another may be based upon sentence length. Neither is wrong, but neither really considers all components of the text. To use a baseball analogy, Dave Kingman hit a lot of home runs, but he is not in the Hall of Fame. Many other players with fewer home runs are enshrined at Cooperstown. This is because home runs are but one metric of performance on the baseball field; a more comprehensive view generally is called for. Our study attempted to combine the readability metrics for a more accurate assessment of readability. While we acknowledge that this has some inherent problems, we feel it is the best method for readability assessment with the tools currently available. Interestingly, our study highlighted a need for the academic community to develop better readability metrics for medical texts.
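
For readers who want to experiment with this pooled approach, the sketch below is a minimal Python illustration, not the tooling the study authors used: it computes three published grade-level formulas (Flesch-Kincaid, SMOG, and Coleman-Liau) and reports their median. The syllable counter is a crude vowel-run heuristic, so its scores will only approximate those of dedicated readability software.

```python
import math
import re
import statistics

def _syllables(word: str) -> int:
    """Crude syllable estimate: count runs of vowels, at least one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def median_grade_level(text: str) -> float:
    """Median of three published grade-level formulas.

    Each formula emphasizes different features of the text, so the
    median dampens the skew any single metric can introduce.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    letters = sum(len(w) for w in words)
    syllables = sum(_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if _syllables(w) >= 3)

    # Flesch-Kincaid grade level: sentence length plus syllables per word.
    fk = 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
    # SMOG: driven almost entirely by polysyllabic ("complex") words.
    smog = 1.043 * math.sqrt(polysyllables * 30 / sentences) + 3.1291
    # Coleman-Liau: letters per word and sentences per word; no syllables.
    cl = 0.0588 * (100 * letters / n) - 0.296 * (100 * sentences / n) - 15.8
    return statistics.median([fk, smog, cl])
```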

Dr. Wongworawat: You have done a remarkable job at editing those instruments that are too difficult to read. What approaches can you highlight for us?

Dr. Ponce: The take-away point regarding PROM editing is that readability improvement is possible and predictable. Previous studies [11, 12] have shown that editing is beneficial and effective with other forms of patient-health documents, so, logically, we wanted to know whether this success would extend to PROMs as well. However, not all steps were necessary for all PROMs. While forming shorter sentences and using the active voice helped in some PROMs, these steps were not universally needed. In contrast, substituting smaller, shorter, simpler words was a broadly applicable step that yielded satisfactory results. If someone wished to edit PROMs in one step, removing difficult, technical language would likely be the most appropriate action. Lastly, it should be noted that these editing steps are not original; they have been recommended by CMS [8], and we simply followed those recommendations.

Dr. Wongworawat: You applied several approaches (sentence length, active voice, and so on) when editing documents to improve readability. Can you show us an example where you applied those approaches and highlight the before and after?

Dr. Ponce: In the example below, we modify an original passage with three editing techniques: (1) using the active voice, (2) making the sentences shorter, and (3) removing technical terms. With these steps, the median grade level (MGL) goes from the 9th grade to just below the 5th grade.

Original (MGL 9.1): Occasional giving way with light sports or moderate work. Able to compensate but limits vigorous activities, sports, or heavy work not able to cut or twist suddenly.

Active Voice (MGL 7.3): Your knee occasionally gives way with light sports or moderate work. You are able to compensate, but with limits of vigorous activities, sports, or heavy work. You are not able to cut or twist suddenly.

Shorter Sentences (MGL 7.0): Your knee occasionally gives way with light sports or moderate work. You are able to compensate. You limit vigorous activities, sports, or work. You cannot cut or twist suddenly.

Removal of Technical Terms (MGL 4.7): Your knee gives way at times with light sports or modest work. You are able to adapt. You limit robust activities, sports, or work. You cannot cut or twist quickly.

Yes, arguments may arise over the words chosen as replacements, but overall, we believe that the final version is more easily “readable,” and in turn better understood, than the original. We also attempted to show this in Appendix 2 of the manuscript.
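
As a rough check, the median-grade-level sketch shown earlier can be run on this before-and-after pair. The absolute figures depend on the syllable heuristic and will not reproduce the published MGLs exactly, but the edited version should score several grade levels lower than the original.

```python
original = ("Occasional giving way with light sports or moderate work. "
            "Able to compensate but limits vigorous activities, sports, "
            "or heavy work not able to cut or twist suddenly.")
edited = ("Your knee gives way at times with light sports or modest work. "
          "You are able to adapt. You limit robust activities, sports, or "
          "work. You cannot cut or twist quickly.")

# Expect a clear drop; absolute values vary with the syllable counter.
print(f"original MGL: {median_grade_level(original):.1f}")
print(f"edited MGL:   {median_grade_level(edited):.1f}")
```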

Dr. Wongworawat: There are many readability tests to choose from. You compared reading grade levels among these tests. For the writer, what practical advice do you have in choosing one or two? Which readability test should I choose if I want to analyze material that I’ve written for my patients?

Dr. Ponce: There are numerous readability tests available for use, but a common question arises regarding which readability test is the most applicable to the healthcare field. Many articles have debated this topic, drawing differing conclusions [1, 9, 14]. The Gunning Fog Index (GFI) seems to be an applicable measure for assessing medical texts. Developed for use with business publications and journals, the GFI assesses average sentence length and the percentage of complex words (three or more syllables). However, it is essential to pair it with another algorithm that measures different aspects of PROMs. For this, the Automated Readability Index, developed by the US Air Force for the assessment of technical documents, would be an appropriate complement through its use of average sentence length and average word length in its calculations. Pairing them allows the assessment of multiple aspects of a PROM via two distinct readability algorithms with partial overlap for quality control. Additionally, as stated in the paper, we caution against using one lone algorithm because of the possibility of skewed results, as seen in prior reports concerning PROMs.
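
A two-test workflow along these lines is straightforward to sketch. The code below uses the published GFI and ARI equations; the 6th-grade default target echoes the grade-level guidance cited earlier in this editorial, and the tokenizing heuristics (sentence splitting, vowel-run syllable counts) are simplifying assumptions of this sketch, not part of either formula.

```python
import re

def _counts(text: str):
    """Sentence, word, character, and complex-word counts shared by both tests."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    chars = sum(len(w) for w in words)
    # Complex word: three or more syllables, estimated from vowel runs.
    complex_words = sum(
        1 for w in words if len(re.findall(r"[aeiouy]+", w.lower())) >= 3
    )
    return sentences, n, chars, complex_words

def gunning_fog(text: str) -> float:
    """GFI: average sentence length plus percentage of complex words."""
    s, n, _, cw = _counts(text)
    return 0.4 * (n / s + 100 * cw / n)

def automated_readability_index(text: str) -> float:
    """ARI: average word length plus average sentence length; no syllables."""
    s, n, chars, _ = _counts(text)
    return 4.71 * (chars / n) + 0.5 * (n / s) - 21.43

def readable_enough(text: str, target: float = 6.0) -> bool:
    """Pass only when BOTH tests agree the text is at or below the target grade."""
    return (gunning_fog(text) <= target
            and automated_readability_index(text) <= target)
```

Requiring agreement between the two tests exploits their partial overlap: both include sentence length, but they disagree on what makes a word hard, so a passage must be simple by both definitions to pass.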

Dr. Wongworawat: MGLs are related to sentence and word structure, but having a low MGL does not necessarily mean better understandability. What are the limits of computer analysis, and when is it necessary to test written material using real people?

Dr. Ponce: This is a subtle but critical point, as understandability and readability are not synonymous. The readability equations cannot quantify a reader’s ability to comprehend the text, only how easily it can be read. Understandability incorporates subjective variables such as font size, sentence syntax, and the patient’s ability to digest what was read. Equations cannot quantify this because they do not test understanding. While understandability is an individualized measure that varies between patients, readability is a defined calculation used to assess document complexity on a broad scale, a 40,000-foot view, so to speak. We believe that once issues with readability are addressed, understandability should then be tested.

References

  1. Doak CC, Doak LG, Root JH. Teaching Patients With Low Literacy Skills. 2nd ed. Philadelphia, PA: J.B. Lippincott Company; 1996.
  2. El-Daly I, Ibraheim H, Rajakulendran K, Culpan P, Bates P. Are patient-reported outcome measures in orthopaedics easily read by patients? Clin Orthop Relat Res. 2016;474:246–255.
  3. Eltorai AE, Cheatham M, Naqvi SS, Marthi S, Dang V, Palumbo MA, Daniels AH. Is the readability of spine-related patient education material improving?: An assessment of subspecialty websites. Spine (Phila Pa 1976). 2016;41:1041–1048.
  4. Eltorai AE, Sharma P, Wang J, Daniels AH. Most American Academy of Orthopaedic Surgeons’ online patient education material exceeds average patient reading level. Clin Orthop Relat Res. 2015;473:1181–1186.
  5. Hunt WT, McGrath EJ. Evaluation of the readability of dermatological postoperative patient information leaflets across England. Dermatol Surg. 2016;42:757–763.
  6. John AM, John ES, Hansberry DR, Lambert WC. Assessment of online patient education materials from major dermatologic associations. J Clin Aesthet Dermatol. 2016;9:23–28.
  7. McClure E, Ng J, Vitzthum K, Rudd R. A mismatch between patient education materials about sickle cell disease and the literacy level of their intended audience. Prev Chronic Dis. 2016;13:E64.
  8. McGee J. Centers for Medicare and Medicaid Services, U.S. Department of Health and Human Services. Toolkit for making written material clear and effective. 2012. Available at: https://www.cms.gov/Outreach-and-Education/Outreach/WrittenMaterialsToolkit/index.html?redirect=/writtenmaterialstoolkit/. Accessed May 12, 2017.
  9. Meade CD, Smith CF. Readability formulas: Cautions and criteria. Patient Educ Couns. 1991;17:153–158.
  10. Salmon C, O’Conor R, Singh S, Ramaswamy R, Kannry J, Wolf MS, Federman AD. Characteristics of outpatient clinical summaries in the United States. Int J Med Inform. 2016;94:75–80.
  11. Sheppard ED, Hyde Z, Florence MN, McGwin G, Kirchner JS, Ponce BA. Improving the readability of online foot and ankle patient education materials. Foot Ankle Int. 2014;35:1282–1286.
  12. Terranova G, Ferro M, Carpeggiani C, Recchia V, Braga L, Semelka RC, Picano E. Low quality and lack of clarity of current informed consent forms in cardiology: How to improve them. JACC Cardiovasc Imaging. 2012;5:649–655.
  13. Unaka NI, Statile A, Haney J, Beck AF, Brady PW, Jerardi KE. Assessment of readability, understandability, and completeness of pediatric hospital medicine discharge instructions. J Hosp Med. 2017;12:98–101.
  14. Wang LW, Miller MJ, Schmitt MR, Wen FK. Assessing readability formula differences with written health information materials: Application, results, and recommendations. Res Social Adm Pharm. 2013;9:503–516.

Copyright information

© The Association of Bone and Joint Surgeons® 2017

Authors and Affiliations

  1. Clinical Orthopaedics and Related Research®, Philadelphia, USA
