Current Osteoporosis Reports, Volume 10, Issue 3, pp 217–220

Bone Density Testing Intervals and Common Sense

Authors

    • E. Michael Lewiecki, New Mexico Clinical Research & Osteoporosis Center
    • Neil Binkley, University of Wisconsin
Evaluation and Management (M Kleerekoper, Section Editor)

DOI: 10.1007/s11914-012-0111-6

Cite this article as:
Lewiecki, E.M. & Binkley, N. Curr Osteoporos Rep (2012) 10: 217. doi:10.1007/s11914-012-0111-6

Abstract

Measurement of bone mineral density (BMD) is underutilized, contributing to the underdiagnosis and undertreatment of osteoporosis. Appropriate patient selection for BMD testing leads to more people being treated, fewer fractures, and a decrease in fracture-related health care costs. Although there are well-established indications for BMD testing, it is less clear when a BMD test should be repeated for a patient who does not have osteoporosis and is found to be at low fracture risk with initial testing. BMD testing intervals should be individualized according to the likelihood that the results will influence treatment decisions.

Keywords

Osteoporosis, Screening, Frequency, Interval, Bone density testing

Introduction

Health care systems worldwide are faced with the dilemma of having limited available resources to meet a seemingly inexhaustible demand for services. Many studies are now directed at determining the cost-effectiveness of medical services and procedures. In the field of osteoporosis, treatment guidelines are often based on cost-utility analyses. Measurement of bone mineral density (BMD) by dual-energy X-ray absorptiometry (DXA) is a component of most treatment guidelines. DXA is used to diagnose osteoporosis, estimate fracture risk, and monitor therapy. To use DXA wisely, physicians must know not only when BMD testing should initially be done but also when it should be repeated. A recent study by Gourlay et al. [1] analyzed the rate of bone loss in a cohort of postmenopausal women, with the goal of generating data to guide decisions on appropriate intervals for DXA testing. This study was well done and generated conclusions supported by the data. However, the intense media attention that followed appeared to come to a different conclusion, one that was not supported by the data. Since it is likely that the number of people exposed to the media reports greatly exceeded those who actually read the scientific publication, there is a risk that health care policy and clinical decisions may be based on erroneous information. It is therefore important that all stakeholders in osteoporosis patient management distinguish fact from fiction and recognize the strengths and limitations of the Gourlay et al. [1] study. Moreover, it is essential that clinical osteoporosis decision making be based upon estimation of an individual patient’s fracture risk (which is affected by clinical factors in addition to BMD) rather than simply the time to meet an arbitrary BMD diagnostic threshold.

Study Overview

Gourlay et al. [1] set out to determine the time for a transition from normal BMD or osteopenia to the development of osteoporosis before a hip or clinical vertebral fracture occurred. The study cohort consisted of 4957 ambulatory women representing 51 % of 9704 participants in the Study of Osteoporotic Fractures (SOF). Women were excluded from participation for reasons that included having a diagnosis of osteoporosis at baseline, missing data, or history of hip or clinical vertebral fracture. All were age 67 years or older with a baseline T-score that was -2.49 or better at the femoral neck and total hip. They were followed prospectively for up to 15 years. Thus, the study cohort was clearly defined as ambulatory community-dwelling elderly postmenopausal women with no prior spine or hip fracture and T-score better than -2.5. It is only for such individuals that the data apply.

The primary end point was the estimated time interval for 10 % of participants to make the transition from normal BMD or osteopenia to osteoporosis before a hip or clinical vertebral fracture occurred or before treatment for osteoporosis was started. Not surprisingly in women age 67+ years, it was found that those with baseline BMD that was normal or slightly below normal were unlikely to develop osteoporosis over the next 15 years, suggesting that a long interval between BMD tests would be appropriate for most of them. For women with baseline BMD that was lower, osteoporosis was more likely to develop sooner, suggesting that in these women a shorter testing interval should be considered. The rate of bone loss observed in this SOF subgroup appeared to be approximately 1 % per year, consistent with other data showing a similar age-related rate of bone loss in older women [2].

The authors concluded with a recommendation that for women like the ones in this study, the rescreening interval should be about 15 years when the baseline BMD is normal (-1.00 or better) or there is “mild osteopenia” (T-score -1.01 to -1.49), about 5 years when there is “moderate osteopenia” (T-score -1.50 to -1.99), and about 1 year when there is “advanced osteopenia” (T-score -2.00 to -2.49). These recommendations are reasonable as general guidance, although they do not account for the innumerable confounding factors that may alter the rate of bone loss in an individual patient and do not consider individual fracture risk.
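The band-to-interval mapping above can be sketched as a simple lookup. This is an illustrative sketch only: the function name and return convention are ours, the T-score bands and intervals are those suggested by Gourlay et al. [1], and the mapping applies only to women resembling the study cohort (age 67+ years, no prior osteoporosis treatment or hip/clinical vertebral fracture, baseline T-score better than -2.5).

```python
def gourlay_rescreen_interval_years(t_score: float) -> int:
    """Illustrative lookup of the rescreening intervals suggested by
    Gourlay et al. for women resembling the SOF subcohort. The bands
    are from the study; the function itself is a hypothetical sketch."""
    if t_score <= -2.5:
        # Osteoporosis range: the study's intervals do not apply.
        raise ValueError("T-score in osteoporosis range; intervals do not apply")
    if t_score >= -1.49:   # normal BMD or "mild osteopenia"
        return 15
    if t_score >= -1.99:   # "moderate osteopenia"
        return 5
    return 1               # "advanced osteopenia" (T-score -2.00 to -2.49)
```

As the commentary argues below, a lookup like this captures only the BMD-based bands; individual clinical risk factors may shorten the appropriate interval considerably.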

Strengths and Limitations of the Study

Study strengths include the large cohort size, the long observation time, and presumably rigorous quality in DXA acquisition and analysis.

Limitations include the demographics of the study cohort, which was composed of women age 67 years and older who were mostly (>99 %) Caucasian, with osteopenia or normal BMD, and no prior osteoporosis treatment or hip/clinical vertebral fracture. The study findings do not apply to younger women, men, non-Caucasians, or those with osteoporosis. Most notably, women soon after menopause typically have an accelerated rate of bone loss and, as such, may need shorter screening intervals. This study was not designed to provide information about obtaining a baseline screening study or about monitoring patients receiving pharmacological therapy to reduce fracture risk. Although one of the study end points was clinical vertebral fractures, patients with incident undiagnosed morphometric vertebral fractures would not have been recognized, even though these fractures are common and have important diagnostic [3] and prognostic implications [4, 5]. An additional limitation is that lumbar spine T-score was not reported, although in clinical practice this skeletal site is typically measured and used for diagnostic classification, assessment of fracture risk, and patient management decisions [6, 7]. It is common for the lumbar spine T-score to be lower than the femoral neck and total hip T-scores, especially in early postmenopausal women. Finally, this study did not consider the important role of clinical risk factors for fracture, evaluated by fracture risk algorithms such as FRAX, in identifying patients with osteopenia who are likely to benefit from pharmacological therapy.

Media Reports of the Study

The publication of the study by Gourlay et al. [1] was followed by news reports that were generally factually correct, but often lacking in appreciation of the study limitations and the complexities in managing individual patients. Many of these reports suggested or stated outright that DXA is an expensive overutilized test that should be ordered less frequently to save money for health care systems. One report began with the headline, “Women too often tested for osteoporosis, researchers report” [8], followed by a statement that “unnecessary tests can lead to false positives and prescriptions of drugs.” It was incorrectly stated that the US Preventive Services Task Force (USPSTF) recommends a screening BMD test every 2 years [9], when in reality the USPSTF recommends waiting at least 2 years between screening tests [10]. A national network television news story was titled “Osteoporosis tests unnecessary?” [11]. Some experts were quoted as saying that bone density testing had been “oversold” [12] and that doing fewer BMD studies will “save Medicare dollars” [13]. The unfortunate consequences of these media reports may be that patients will be less inclined to have a bone density test and that physicians will order fewer tests.

Perspective

Medical tests should be ordered only when there is a reasonable probability that the results will influence patient care. While over-ordering of any test, including DXA, may sometimes occur, there is no credible evidence that excessive use of DXA is a systemic problem. In fact, there is strong evidence that just the opposite is true: far too few patients are being screened for osteoporosis, and more DXA testing is needed, not less [14]. US Medicare claims data show that the annual testing rate for BMD testing in women age 65 years and older was only 14 % during 2006 to 2010, with the testing rate declining, not rising, in 2010 [15]. An analysis of Medicare data for 2002 to 2008 showed that 48 % of elderly women did not have a single DXA test, and 25 % had only one test [15]. Thus, almost half of elderly postmenopausal women, a group at high risk for osteoporosis and fragility fractures, have probably never received a BMD measurement. Common sense holds that identifying high-risk patients by BMD testing prior to occurrence of a fracture and utilizing widely available low-cost medication to reduce risk would be cost-effective. What do the data show?

DXA is not an expensive test. Medicare reimbursement for non-facility (office-based) DXA studies is currently approximately $56 [16], far below the cost of providing the service at most facilities [17]. Moreover, inadequate reimbursement is probably a major factor leading to the closing of bone densitometry centers, thereby limiting access to BMD testing and contributing to a decline in BMD tests performed [15]. Reducing the use of an inexpensive clinically useful test seems unlikely to be cost-effective. Studies of osteoporosis disease management programs document that doing more BMD testing reduces fracture rates and saves money. For example, in the Geisinger Health System, implementation of osteoporosis guidelines with increases in BMD testing and treatment was associated with a decrease in hip fractures and an estimated $7.8 million reduction in health care costs during a 5-year period [18]. A Kaiser Southern California program that included more BMD testing reported that in a single year an estimated 935 hip fractures were prevented with cost savings of over $30.8 million [19]. Other studies have found that a variety of osteoporosis screening strategies are clinically effective and cost-effective [20–22].

Evidence-based guidelines of the National Osteoporosis Foundation (NOF) [7], International Society for Clinical Densitometry [6], and USPSTF [23] recommend BMD testing for women age 65 years and older, and for younger women with risk factors for fracture. Monitoring of pharmacological treatment with a repeat DXA study has been recommended 1 to 2 years after starting or changing therapy [6, 7]. Once a treatment effect is observed, the timing of subsequent DXA studies should be individualized according to each patient’s clinical circumstances [24].

For repeat screening of patients who do not have osteoporosis and are not being treated to reduce fracture risk, testing intervals should also be individualized, rather than blindly following guidelines. The Gourlay et al. [1] study accurately represents expected age-related bone loss in a population of older women without osteoporosis. However, the study does not tell us about the rate of bone loss and appropriate testing intervals in other patients who might need shorter BMD testing intervals. Importantly, simply waiting 15 years to cross a -2.5 T-score threshold will ignore many individuals who are at high risk for fracture based upon their BMD plus clinical risk factors. For example, consider a 70-year-old Caucasian American woman with no clinical risk factors for fracture and a femoral neck T-score of -1.4. Her FRAX-estimated 10-year probability of hip fracture is 1.3 %, well below the NOF threshold of 3.0 % for pharmacological therapy [7]. If she has an age-related rate of bone loss of about 1 % per year and waits 15 years to have another DXA, she will then have a femoral neck T-score of about -2.7 and a 10-year hip fracture probability of 7.4 %, far above the NOF treatment threshold. In fact, she would have reached the treatment threshold of 3.0 % by age 75 years, with a T-score of -1.9. If she had clinical risk factors for fracture, she would have reached the treatment threshold even sooner. Waiting 15 years for a repeat DXA study, as suggested by the Gourlay et al. [1] data, would have neglected this person’s high fracture risk.
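The projection arithmetic underlying an example like this can be sketched as follows. A T-score is the number of standard deviations a patient's BMD sits below a young-adult reference mean, so projecting a future T-score from an assumed annual fractional BMD loss is a three-step calculation. The function name and the reference mean/SD values here are illustrative assumptions (not the NHANES reference values used by FRAX), and FRAX probabilities themselves require the external calculator; exact projected T-scores depend on the reference values chosen and on whether loss is compounded, so this sketch will not exactly reproduce the figures quoted above.

```python
def project_t_score(t0: float, years: float, annual_loss: float = 0.01,
                    young_mean: float = 0.85, young_sd: float = 0.12) -> float:
    """Project a future femoral neck T-score assuming a constant
    fractional annual BMD loss (default 1 %/year, as in the example).
    young_mean and young_sd (g/cm^2) are illustrative reference values,
    labeled as assumptions, not an authoritative reference database."""
    bmd0 = young_mean + t0 * young_sd            # back out baseline BMD
    bmd = bmd0 * (1.0 - annual_loss) ** years    # compound the annual loss
    return (bmd - young_mean) / young_sd         # convert back to a T-score
```

With these illustrative inputs, `project_t_score(-1.4, 15)` lands near -2.2; different reference values and a linear rather than compounded loss move the result, which is part of the commentary's point that individual projections, not fixed intervals, should drive retesting decisions.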

Conclusions and Common Sense

A person who is stingy with small sums of money yet wasteful with large sums is called “penny wise and pound foolish.” So it is with governments or societies unwilling to make small investments now that are likely to have large payoffs in the future. Encouraging increased use of BMD testing in appropriately selected patients should be a priority for health care systems. More BMD testing will lead to appropriate treatment to reduce fracture risk, fewer fractures, and cost savings. It is essential that BMD testing intervals for treated and untreated patients be individualized according to the likelihood of results having an influence on treatment decisions. To this end, we suggest a common sense approach, that DXA testing intervals in patients with osteopenia include consideration of clinical risk factors in addition to BMD.

It is widely appreciated that the diagnostic classification of osteopenia includes patients with a wide range of fracture risk. For such individuals, BMD results should be combined with clinical risk factors using various algorithms (eg, FRAX) to determine when fracture risk is sufficiently high that treatment is likely to be cost-effective. Similarly, the purpose of repeating a BMD test in an untreated patient is to reassess fracture risk to determine whether treatment is indicated, rather than simply finding when an arbitrary T-score threshold is crossed. As noted above, an age-related bone loss rate of 1 % per year is typical for elderly women with an initial T-score that is normal or in the osteopenia range, while individual patients may lose bone more slowly or more rapidly than that. Moreover, other clinical risk factors (eg, advancing age, parental history of hip fracture) strongly affect fracture risk. As such, a 15-year interval between baseline and repeat BMD testing is far too long, even when the baseline T-score is better than -1.5. A more appropriate interval in such a patient would be 3 to 5 years, thereby establishing the rate of bone loss for that patient, with the interval for subsequent BMD tests to be determined according to clinical circumstances and fracture risk estimation. Finally, if/when new clinical risk factors for fracture are identified, the interval between BMD tests should be reassessed.

Disclosure

Conflicts of interest: E.M. Lewiecki: has received financial support or owned personal investments in the following categories: Grant/research support (principal investigator, funding to New Mexico Clinical Research & Osteoporosis Center) for Amgen, Eli Lilly, Merck, GlaxoSmithKline; Other support from Amgen (scientific advisory board, speakers’ bureau); Eli Lilly (scientific advisory board, speakers’ bureau); Novartis (speakers’ bureau); Merck (scientific advisory board); GlaxoSmithKline (consultant); and Warner Chilcott (speakers’ bureau). N. Binkley: Research support: Amgen, Lilly, Merck, Tarsa; Consultant: Lilly, Merck.

Copyright information

© Springer Science+Business Media, LLC 2012