
Evidence and Efficacy in the Era of Digital Care

Patients are increasingly turning to digital tools to help manage medical conditions, attracting significant attention from entrepreneurs and developers. Investment in digital health technologies increased from $1.1 billion in 2011 to $14.6 billion in 2020, with diabetes, cardiovascular disease, and mental health as the top-funded disease categories.1 Digital health encompasses a broad spectrum of technology-enabled products, ranging from wellness-focused mobile applications (e.g., fitness trackers) to digital therapeutics prescribed by physicians to treat specific diseases (e.g., digitally delivered cognitive behavioral therapy for substance use disorder). Between these two extremes is a vast set of services that provide “digital care” for the management of chronic diseases. A loosely defined but rapidly growing category, digital care moves beyond wellness but rarely includes discrete, standardized therapies. Rather, these products offer personalized care pathways that collect data and deliver tailored content to patients and clinicians through web-based platforms. For a patient with diabetes, a digital care product might track blood sugars and HbA1c, provide trends and insights for the patient and their physician, and deliver timely reminders to take needed medications.2

Market-driven enthusiasm for digital care has outpaced our understanding of its possible benefits. As digital care products proliferate, patients and their clinicians must navigate a burgeoning set of potential options, often with inadequate data to guide informed decision-making. For example, the American Diabetes Association recently concluded that “insufficient evidence of clinical validity, effectiveness, accuracy, and safety” limits the usefulness of digital health technologies for diabetes.3 A likely result is wasted time, energy, and resources on products that offer limited utility for patients.

These knowledge deficits stand in contrast to other areas of health care. When clinicians choose between medications or counsel patients on treatment options, and when payers make coverage decisions, they are accustomed to drawing on peer-reviewed literature, consensus-based guidelines, and approval from independent regulatory organizations. Such trusted sources of information are comparatively weak or completely absent for most digital care products. Digital care developers sometimes conduct pilot studies with early customers, but the nature of these studies varies widely. They rarely employ randomization or other rigorous designs,4 and few appear in the peer-reviewed literature. While time and expense understandably limit early-stage companies from conducting thorough evaluations, the status quo is discouraging. The scarcity and low quality of data hinder optimal decision-making by clinicians, patients, and payers, and prevent digital care products from competing on their ability to demonstrably improve health. Separating the wheat from the chaff in digital care requires new approaches to evidence generation and standard setting.

Digital care developers have an obligation to produce credible evidence about their programs but, absent sufficient market demand for this evidence, have limited incentives to do so. To strengthen these incentives, private payers, governmental agencies, and employers could use the levers at their disposal—including coverage policy and data aggregation—to catalyze an expansion of the evidence base. For example, payers could refuse to cover digital care programs lacking strong evidence, or cover non-validated tools only on the condition that they be rigorously evaluated and, ideally, that results be disseminated. Because certain digital care programs target narrow patient populations, payers could also work with one another and with health systems to create distributed trial networks that evaluate the impact of digital care programs at scale.

An expanded evidence base could set the stage for a novel accreditation body that devises transparent standards and applies them fairly across the market, raising the bar for evidence and efficacy in digital care. This accreditor—responsible for synthesizing available evidence and certifying digital care programs that meet widely accepted standards—could draw features from several existing models.

The Food and Drug Administration (FDA) has, for decades, been the gold standard for regulation. When its safety and efficacy standards are properly applied, FDA approval assures clinicians, patients, and payers that new drugs and devices have proven benefits that outweigh any risks. The FDA is successful in this role because it has statutory authority to require that substantial evidence of effectiveness be demonstrated through well-conducted trials. Its role in digital health, though, is limited, as few tools currently meet the definition of a regulated medical device. As a result, most digital care products on the market have not undergone formal FDA review. Of course, some elements of traditional FDA review processes, like animal studies, cannot be applied to digital care programs. And others, such as trials to demonstrate safety in healthy subjects, may be less relevant for certain digital care products. However, key aspects are well-suited for digital care, including rigorous trial design, expert advisory groups, independence, and transparency. A limitation of the FDA process is the significant time and expense incurred by both developers and the agency to shepherd products through the stages of approval, which conflicts with the rapid pace of innovation in digital care and the limited funding of individual start-ups and developers.

Non-governmental models for standard setting and accreditation also exist. The Institute for Clinical and Economic Review (ICER) assesses the marginal value of drugs by determining comparative clinical effectiveness and incremental cost-effectiveness with validated methodology. While these metrics are often technically complex and sometimes define value narrowly, ICER supplements FDA review in an important way by considering the costs of interventions as well as their benefits. Consensus over condition-specific incremental cost-effectiveness ratios (e.g., dollars per unit reduction in HbA1c) would allow payers, clinicians, and patients to compare the value of competing digital care products and compare these products to other interventions.
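As a rough sketch (a generic statement of the standard incremental cost-effectiveness calculation, not a methodology attributable to ICER or to any specific digital care product), such a ratio compares a digital care program with usual care:

$$\text{ICER} = \frac{C_{\text{program}} - C_{\text{usual care}}}{E_{\text{program}} - E_{\text{usual care}}}$$

where C denotes total cost and E denotes the clinical effect of interest, expressed here, for example, as dollars per one-percentage-point reduction in HbA1c. A program whose ratio falls below an agreed, condition-specific threshold would be considered cost-effective at its current price.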

Another model is the National Committee for Quality Assurance (NCQA), which manages voluntary accreditation programs for health plans and care delivery organizations. NCQA currently certifies some care delivery interventions—including patient-centered medical homes and population health programs—based on a set of consensus-based criteria. The process requires detailed applications, surveys, virtual reviews, and regular reporting, but is relatively swift, usually taking less than 12 months from application to decision. Although optional and resource intensive, and thus not universally pursued by eligible entities, NCQA certification is a meaningful signal in the industry. For example, more than 95 organizations, including many payers, offer financial and other incentives to medical homes recognized by NCQA.5

Elements from the approaches and processes of the FDA, ICER, and NCQA could be leveraged for standard-setting and accreditation in digital care. Consensus-driven standards, like those used by the FDA for drug and device approval, would set clear expectations for the strength of evidence required and for clinically meaningful outcomes. Standards would need to vary by clinical indication and intended goals. Disease-specific standards could define, for instance, the percent reduction in HbA1c a digital diabetes program must achieve to be considered effective, as well as the price at which it would be considered cost-effective, using ICER-like methods. At a minimum, we should expect the demonstration of at least one measurable, statistically rigorous, and meaningful effect on a clinically relevant metric before a digital care program is adopted at scale.

If pursued, an accreditation process could be modeled after that used by NCQA (i.e., voluntary, thorough, and expeditious) but tailored to digital care. Similar to FDA approval, the process should include review by practicing clinicians with relevant specialty expertise to assess the clinical significance of any submitted evidence. The resulting stamp of approval would help digital care programs attract new customers, encourage developers to submit their products for review, and ultimately raise the bar for evidence and efficacy throughout the industry. It is worth noting that, unlike drugs and devices, digital care programs are not static. Those that employ artificial intelligence and machine learning may change rapidly and in difficult-to-detect ways. A critical feature of any standard-setting, approval, or accreditation process will be the need for ongoing review of evolving products.

Digital care programs are an increasingly common, and potentially innovative, approach to help patients and their clinicians manage chronic disease. However, only by bringing to digital care the discipline and rigor applied to other areas of medical innovation will this potential be fully realized. A heightened focus on standards and accreditation would increase the volume and quality of evidence for digital care programs, supporting high-value investments by payers, competition among developers, and more informed decisions by patients and their physicians.

References

  1. Krasniansky A, Zweig M, Evans B. H1 2021 Digital Health Funding. Rock Health. July 27, 2021. Accessed Aug 6, 2021. Available at https://rockhealth.com/reports/h1-2021-digital-health-funding-another-blockbuster-year-in-six-months/.

  2. Duffy S. Digital therapeutics vs. digital care: defining the landscape. STAT News. Feb 2020. Accessed Aug 25, 2021. Available at https://www.statnews.com/2020/02/20/digital-therapeutics-vs-digital-care/.

  3. Fleming GA, Petrie JR, Bergenstal RM, Holl RW, Peters AL, Heinemann L. Diabetes Digital App Technology: Benefits, Challenges, and Recommendations. A Consensus Report by the European Association for the Study of Diabetes (EASD) and the American Diabetes Association (ADA) Diabetes Technology Working Group. Diabetes Care. 2020;43(1):250-260.


  4. Guo C, Ashrafian H, Ghafur S, et al. Challenges for the evaluation of digital health solutions—A call for innovative evidence generation approaches. NPJ Digit Med. 2020;3:110.

  5. Patient-Centered Medical Home (PCMH). NCQA. Accessed Jan 24, 2022. Available at https://www.ncqa.org/programs/health-care-providers-practices/patient-centered-medical-home-pcmh/.


Acknowledgements

The authors thank Aaron S. Kesselheim, MD, JD, MPH, who provided, without compensation, helpful comments on earlier drafts.

Author information


Corresponding author

Correspondence to William H. Shrank MD, MSHS.

Ethics declarations

Conflict of Interest

Mr. Gondi reports being an advisor at 8VC and prior employment at Humana and Commonwealth Care Alliance. Dr. Powers reports employment and equity holdings with Humana and prior employment by Anthem and Fidelity Investments. Dr. Shrank reports employment and equity holdings with Humana, and serving as a Director at GetWellNetwork.

Disclaimer

The views expressed in this article represent the authors’ views and not necessarily the views or policies of their respective affiliated institutions.




Cite this article

Gondi, S., Powers, B.W. & Shrank, W.H. Evidence and Efficacy in the Era of Digital Care. J GEN INTERN MED (2022). https://doi.org/10.1007/s11606-022-07445-0
