Abstract
Background
Electronic health records (EHRs) have been linked to excessive workload and physician burnout. However, little is known about how physician experience varies across different EHRs.
Objective
To analyze variation in reported usability and satisfaction across EHRs.
Design
Internet-based survey, available between December 2021 and October 2022, integrated into the American Board of Family Medicine (ABFM) certification process.
Participants
ABFM-certified family physicians who use an EHR for which at least 50 physicians responded.
Measurements
Self-reported experience of EHR usability and satisfaction.
Key Results
We analyzed the responses of 3358 physicians who used one of nine EHRs. Epic, athenahealth, and Practice Fusion were rated significantly higher than other EHRs across six measures of usability. Overall, between 10 and 30% of physicians reported being very satisfied with their EHR, and another 32 to 40% reported being somewhat satisfied. Physicians who used athenahealth or Epic were most likely to be very satisfied, while physicians using Allscripts, Cerner, or Greenway were the least likely. EHR-specific factors were the greatest overall influence on variation in satisfaction: they explained 48% of variation in the probability of being very satisfied with Epic, 46% with eClinical Works, 14% with athenahealth, and 49% with Cerner.
Conclusions
Meaningful differences exist in physician-reported usability and overall satisfaction with EHRs, largely explained by EHR-specific factors. User-centric design and implementation, and robust ongoing evaluation are needed to reduce physician burden and ensure excellent experience with EHRs.
INTRODUCTION
The development of the electronic health record (EHR) was spurred by a vision of safer, more cost-effective, and better coordinated patient care.1,2 After the US government initially adopted a market-driven approach that gave rise to a large number of competing EHRs, Congress passed the Health Information Technology for Economic and Clinical Health (HITECH) Act to define “meaningful use” of EHRs in 2009.3,4 The law has been a success inasmuch as EHR use spread quickly.5
EHRs have since been shown to improve some measures of clinician performance and patient outcomes, even if these improvements have emerged more slowly and unevenly than originally anticipated.6 At the same time, they have produced marked downsides for clinicians, largely by increasing documentation burden, especially among primary care providers.7 Approximately 25% of primary care physicians’ working days are spent documenting care in EHRs, and many commit substantial time after clinic hours to documentation.8 Excessive documentation and poor EHR usability have been implicated as predictors of burnout both across physician specialties and specifically among primary care physicians.9,10,11,12 International comparisons showing that clinicians in the USA spend over 50% more time documenting in their EHRs than clinicians in other countries also indicate that increased burden does not necessarily go hand-in-hand with EHR use.13
Despite shared functionality requirements introduced with the meaningful use criteria and the Office of the National Coordinator for Health Information Technology’s EHR certification program, EHRs developed by different vendors vary in important ways. EHRs may alleviate or exacerbate clinician burden, depending in part on how heavily vendors prioritize usability.14 Past research has found that a hospital’s choice of EHR is independently associated with quality metrics and that EHRs differ substantially in their ability to notify clinicians of potentially harmful drug safety issues.15,16 Relatively little research, though, has focused on variation in the experience of using different EHRs.
In this study, we used a unique survey mechanism with a 100% response rate among a representative sample of family physicians to answer three questions. First, how much variation exists across EHR vendor platforms in reported usability and satisfaction? Second, what is the correlation between usability and satisfaction? Finally, how much of EHR satisfaction is attributable to factors unique to each specific EHR, rather than characteristics of the respondents and features of their practice environments?
METHODS
Population
We analyzed cross-sectional responses from family physicians who sought recertification from ABFM during 2022 and who reported providing direct patient care. As of 2022, there were 100,360 ABFM-certified family physicians, accounting for approximately one-third of all primary care physicians in the USA per the 2020 Area Health Resource Files.17 Certified physicians apply to continue their certification on a rolling 3- to 10-year basis, depending on their participation in continuing education activities. Before taking the certifying exam, physicians must complete the Internet-based Continuous Certification Questionnaire (CCQ), and questions cannot be skipped. The survey therefore had a 100% response rate among the population of physicians who were continuing their certification.
In 2022, the CCQ included an expanded set of questions about EHR experience: two questions answered by all respondents, followed by randomization to one of two modules of in-depth EHR questions (Appendix 1). One of these modules concerned usability. Questions were based on the National Electronic Health Record Survey, modified for brevity and applicability, and then pre-tested in a series of hour-long sessions with family physicians.18 The 2022 survey became available on December 12, 2021, and closed on October 17, 2022. Responses were stored on a secure server with controlled access.
We included physicians whose EHR had at least 50 respondents using that platform at their main practice site. We excluded EHRs with fewer respondents because their small samples precluded conclusions about their users’ experience, and because their likely heterogeneity of user experience made them unsuitable as a reference group. All items in this study directly reflected participant responses except for practice location, which we summarized with the Rural-Urban Commuting Area code of the reported primary practice ZIP code to ensure interpretability.
This research was determined to be IRB exempt.
Measures
Our primary dependent variables were measures of EHR usability and satisfaction.
We constructed six measures of usability for (a) entering information; (b) reading information; (c) amount of information; (d) workflow integration; (e) finding information; and (f) usability of alerts, using responses to the question “How would you assess the following usability dimensions of your current primary EHR system?” with response options of “Excellent,” “Good,” “Fair,” “Poor,” “Don’t Know,” or “Not Applicable.”
We constructed a single measure of EHR satisfaction, based on responses to the question “Overall, how satisfied are you with your current primary, outpatient EHR system?” with response options from a 5-point Likert scale ranging from “Very Satisfied” to “Very Dissatisfied,” as well as an option for respondents to select “Not Applicable.”
We present both of the above measures without adjustment for organizational or physician-level characteristics, which vary by EHR.
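To make the coding concrete, these ordinal measures can be sketched in R, the language used for the analysis below; the data frame `d` and all column names here are hypothetical stand-ins, not the study's actual variables, and the middle satisfaction label is assumed from the 5-point Likert description.

```r
# Minimal sketch of coding the ordinal measures; `d` and the raw-response
# column names are hypothetical. The middle Likert label is assumed.
usability_levels <- c("Poor", "Fair", "Good", "Excellent")
satisfaction_levels <- c("Very Dissatisfied", "Somewhat Dissatisfied",
                         "Neither Satisfied nor Dissatisfied",
                         "Somewhat Satisfied", "Very Satisfied")

d$workflow_usability <- factor(d$workflow_raw,
                               levels = usability_levels, ordered = TRUE)
d$satisfaction <- factor(d$satisfaction_raw,
                         levels = satisfaction_levels, ordered = TRUE)

# "Don't Know" and "Not Applicable" are absent from the level sets,
# so they become NA and drop out of the ordinal comparisons below.
```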
Our primary independent variable of interest was EHR used in the primary practice environment. We also included controls for organizational and physician-level characteristics including ownership of primary practice site, size, participation in value-based care payment programs, metropolitan location, gender, years of experience with the EHR they currently use, and years in practice.
Descriptive Analysis
To evaluate the statistical significance of differences in the organizational and physician-level features as stratified by EHR, we used a chi-square test for categorical variables and ANOVA for continuous variables. To assess differences in the dependent variables, which were all on ordinal scales, we used a pairwise Wilcoxon rank sum test with correction of p-values for the false discovery rate using the method of Benjamini and Hochberg.19
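In base R, these tests amount to the following minimal sketch, continuing the hypothetical data frame `d` from above (covariate names such as `ownership` and `years_in_practice` are illustrative):

```r
# Chi-square test for a categorical covariate stratified by EHR.
chisq.test(table(d$ehr, d$ownership))

# One-way ANOVA for a continuous covariate.
summary(aov(years_in_practice ~ ehr, data = d))

# Pairwise Wilcoxon rank sum tests on an ordinal outcome across EHRs,
# with Benjamini-Hochberg false discovery rate correction.
pairwise.wilcox.test(as.numeric(d$satisfaction), d$ehr,
                     p.adjust.method = "BH")
```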
We assessed correlation using Cramér’s V, which is expressed as a number from 0 (no correlation) to 1 (perfect correlation). Specifically, we calculated the pairwise correlation while dropping data from individuals who were not selected to respond to one or both questions to avoid correlating with non-informative missing data.
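A sketch of one such pairwise correlation using the `lsr` package cited below, again with hypothetical column names:

```r
library(lsr)  # provides cramersV()

# Restrict to respondents randomized to both items, so that
# module-assignment missingness does not distort the correlation.
both <- subset(d, !is.na(workflow_usability) & !is.na(satisfaction))

# Cramér's V = sqrt(chi-squared / (n * (min(rows, cols) - 1))).
cramersV(both$workflow_usability, both$satisfaction)
```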
Decomposition Analysis
Based on previous studies in the technology acceptance literature,20 we hypothesized that three broad types of features affect overall EHR satisfaction: physician-level features, organizational features, and EHR-specific features. To analyze the role of each in reported satisfaction, we conducted pairwise, twofold Kitagawa-Oaxaca-Blinder decompositions with a linear probability model of satisfaction on EHR choice for the four most popular platforms. We included only the four most used EHRs because decomposition is relatively data-hungry and the uncertainty for less-used EHRs would be very high.
We adjusted for both physician-level and organizational features. In each decomposition, we dichotomized EHR selection as the platform of interest versus the combination of all others. Similarly, we dichotomized satisfaction as “somewhat” or “very” satisfied versus all other responses. The physician-level features we adjusted for were decades in practice, gender, and years of experience with the EHR; the organizational features were site ownership (academic, government, hospital-/HMO-owned, independent, or other), number of providers at the site (1 to 5, or more than 5), location in a metropolitan area, and participation in value-based care initiatives. We used the adjustment of Gardeazabal and Ugidos to reduce the model’s sensitivity to the omitted reference category in site ownership.21
The results of the decomposition showed the impact of differing baseline physician-level and organizational features across groups and the impact of differing EHR preferences between respondents with different demographics and practice environments. The sum of these features’ contributions was equal to the total difference between satisfaction with a given EHR and satisfaction with all other EHRs. For simplicity, we combined the effects of differences in features and differences in preferences across features for our main analysis. In this same main analysis, we interpreted the model intercept as the effect of EHR choice, as it was not explainable by physician-level or organizational features. Standard errors were calculated with 1000 bootstrap samples.
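The `oaxaca` package cited below implements this twofold decomposition, including the Gardeazabal-Ugidos normalization. The sketch below shows one pairwise decomposition with hypothetical 0/1 codings and covariate names standing in for the study's variables; it is an illustration of the technique, not the authors' code.

```r
library(oaxaca)

# Hypothetical codings: platform of interest vs. all others, and the
# "very satisfied" outcome as the 0/1 dependent variable of the
# linear probability model.
d$epic     <- as.numeric(d$ehr == "Epic")
d$very_sat <- as.numeric(d$satisfaction == "Very Satisfied")

# Three-part formula: outcome ~ covariates | group indicator | categorical
# dummies normalized per Gardeazabal and Ugidos (site ownership here,
# with the "independent" category as the omitted reference).
fit <- oaxaca(
  very_sat ~ decades_in_practice + female + ehr_years +
    own_academic + own_government + own_hospital + own_other +
    large_site + metro + value_based |
    epic |
    own_academic + own_government + own_hospital + own_other,
  data = d,
  R = 1000  # bootstrap replicates for the standard errors
)

# Twofold results: the "explained" component reflects differing features;
# the "unexplained" component, including the intercept, is read in this
# study as the EHR-specific effect.
fit$twofold$overall
```

Rerunning the same decomposition with a somewhat-or-very-satisfied indicator yields the second outcome reported in the Results.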
We conducted all analyses in R 4.1.2, including the packages “tidyverse” 1.3.1 for data management and visualization, “lsr” 0.5.2 for Cramér’s V, and “oaxaca” 0.1.5 for the decomposition.22,23,24,25
Role of the Funding Source
This study was funded by the United States Office of the National Coordinator for Health Information Technology, Department of Health and Human Services, Cooperative Agreement Grant #90AX0032/01-02. The funders assisted with the development of the survey instrument, but had no role in the planning, conduct, writing, or submission of this manuscript.
RESULTS
Respondent Characteristics
A total of 5998 respondents completed the survey and indicated that they provide direct patient care. From these responses, nine EHRs met our minimum number of responses for inclusion in the analyses (Table 1). This left 3358 respondents (56% of all respondents) in our analytic dataset.
We observed significant (p ≤ 0.005) differences across EHR platforms in all measured physician-level and organizational variables. There were notable trends in EHR selection across types of practice and number of providers at the respondents’ main sites. Cerner and Epic were primarily used at hospital-owned sites, while most Practice Fusion users were at small, independent practices. A higher proportion of Cerner and NextGen users were also located outside of major metropolitan areas.
Usability
In unadjusted analyses, our results indicated significant differences in reported usability between platforms across several domains (Fig. 1, Appendix Table 1). In particular, athenahealth, Epic, and Practice Fusion were rated significantly higher than other EHRs in ease of entering information, readability of information, amount of information presented on screen, EHR alignment with workflow, ease of finding relevant information, and usefulness of alerts. eClinical Works was comparable to these EHRs in readability of information.
Fig. 1 Reported usability of different EHR functions across platforms: ease of entering information (A), readability of information (B), amount of information displayed on screen (C), EHR alignment with workflow (D), ease of finding information (E), and usability of alerts (F). EHRs are in descending order of prevalence.
Satisfaction
We also found significant differences in satisfaction between EHR platforms (Fig. 2). Epic had a significantly higher satisfaction rating than any other platform; athenahealth, CPRS, and Practice Fusion had significantly lower satisfaction than Epic, but significantly higher satisfaction than other EHRs. Among Epic users, 37% were very satisfied and another 40% were somewhat satisfied (Appendix Table 2). Satisfaction with athenahealth was slightly lower, with 31% saying that they were very satisfied and 36% saying that they were somewhat satisfied. Satisfaction for CPRS and Practice Fusion was similar to athenahealth. Allscripts, Cerner, and Greenway had much lower rates of physicians saying that they were very satisfied. These rates ranged from 10% for Cerner to 12% for Allscripts. Between 32 and 40% of users for these EHRs reported being somewhat satisfied.
Usability responses were moderately correlated with the outcome of EHR satisfaction, with Cramér’s V coefficients ranging from 0.21 for the usefulness of alerts to 0.27 for the EHR’s integration into workflow. All elements of usability were highly correlated, as indicated by Cramér’s V coefficients between 0.60 for the correlation between the usefulness of alerts and the readability of information, and 0.76 for the correlation between integration into workflows and the ease of finding information.
Decomposition Analysis
The decomposition analysis showed how different variables explain the observed differences in satisfaction across EHRs (Fig. 3). The satisfaction with each platform was affected by the combination of differences in physician-level and organizational variables, as well as the impact of those characteristics on the probability of expressing satisfaction with a given EHR platform. One example of these separate impacts can be seen in how longer experience in practice had counteracting effects in different platforms (Appendix Tables 3 and 4). Our point estimates suggested that more experienced physicians were slightly more likely to use Epic than other platforms but were less likely to be satisfied with it: for every additional decade of experience, respondents were 6.3% less likely to be very satisfied with Epic compared to other platforms. On the other hand, more experienced physicians were slightly less likely to use eClinical Works but were 7.9% more likely to be very satisfied with it for each decade in practice. Not all physician-level and organizational variables were significant in the decomposition analysis. Confidence intervals for all variables are available in Appendix Tables 3 and 4.
Fig. 3 Point estimates from the Kitagawa-Oaxaca-Blinder decomposition showing the impact of physician-level, organizational, and EHR-specific factors on the probability of being (A) very satisfied, or (B) somewhat or very satisfied with the named platform versus the combination of all other platforms.
EHR-specific factors were extremely important. This included not just usability as discussed above, but also factors such as the user interface and implementation of functions, which are difficult to quantify but contribute to the user experience. These EHR-specific factors accounted for 48% of the variation in being very satisfied with Epic, 46% with eClinical Works, 14% with athenahealth, and 49% with Cerner. In the decomposition analysis focused on physicians who are either somewhat or very satisfied, EHR-specific factors explained 52% of variation with Epic, 31% with eClinical Works, 11% with athenahealth, and 44% with Cerner.
Notably, whether EHR-specific factors had positive or negative effects varied by EHR. Based on the characteristics of its users and their estimated preferences, Epic would be expected to have lower satisfaction than other EHRs; however, total satisfaction was higher than that of other EHRs in both analyses. Cerner, on the other hand, was expected to have higher satisfaction than other EHRs based on its users and their preferences. Respondents, though, were less satisfied with Cerner than its competitors on average, largely because of EHR-specific factors.
DISCUSSION
The use of EHRs has been a boon for patient safety and healthcare experience in the USA,26 and substantial credit for their broad adoption should be given to the policies enacted by the HITECH Act and other related regulatory decisions. At the same time, though, the EHR market in the USA is highly fragmented, in part due to the functionality-based, vendor-agnostic regulation during the early years of broad-based EHR adoption.27 This has resulted in a wide variety of clinician experiences with the various EHRs. As attention has turned to the role of the EHR in promoting clinician wellness and continuity of patient care, the variation in EHRs’ approaches to usability and their impacts on satisfaction have emerged as important topics of study. Our analysis of family physician experiences with different EHRs revealed the extent of this variation.
First, we found that athenahealth, Epic, and Practice Fusion received higher reported scores across most aspects of usability compared to other EHRs. Criticism of EHRs is well known, and over a quarter of our respondents were dissatisfied with their EHRs; even for the top-rated EHR, only about a third of respondents gave the highest satisfaction rating. At the same time, there were meaningful differences in satisfaction between EHRs, especially in the proportion of users who reported being very satisfied.
Next, we found that usability metrics were moderately correlated with overall satisfaction, while the correlations among the usability metrics themselves were quite high. This led us to question which way the causality ran for our respondents: whether they mentally weighted and summed the individual metrics to arrive at their overall satisfaction, or whether their overall satisfaction trickled down to individual metrics that received little specific attention. Our survey could not provide evidence for one hypothesis over the other.
Finally, we found that EHR-specific factors were important contributors to overall satisfaction. We were motivated to ask this question by previous research showing that different practice sizes and types tend to use different platforms.28 If physicians in different settings or with different demographics were more likely to express satisfaction with any EHR, this could account for the overall differences in satisfaction that we observed. Our decomposition analysis suggested, however, that variation in EHR features accounted for the majority of observed variation in satisfaction in the cases of Cerner and Epic, and was a substantial contributor to overall satisfaction with eClinical Works, too.
Our findings about the correlation between EHR selection and practice type or size were consistent with previously published research. Namely, we found that the EHR market in large practices, especially hospital- or health system-owned practices, has largely consolidated around Cerner and Epic, while smaller practices use a greater variety of EHRs. Approximately 39% of respondents used one of the many smaller EHRs for which we could not provide meaningful insights due to small sample sizes. An analysis of the 2019 National Electronic Health Records Survey (NEHRS) similarly found that the majority of practices with three or fewer physicians used EHR platforms that were not listed in the survey.28 A divide has also emerged between large teaching hospitals and critical access hospitals in the implementation of advanced EHR features.29 This points to a potential downside of using smaller platforms, which may be slower to implement new features; smaller practices may therefore benefit from easier access to major platforms. Alternatively, smaller platforms may benefit from promoting clinician-directed design, which may allow them to maintain their appeal among primary care providers.30,31
Our study had several important limitations. First, the study population was composed entirely of family physicians. It is not clear how well their experiences with different EHR platforms generalize to other clinicians who provide primary care, such as nurse practitioners and physician assistants, or to primary care physicians in other specialties.
Next, our study was restricted to a single, cross-sectional observation from 2022. This was a limitation inasmuch as the COVID-19 pandemic has substantially changed primary care and the EHR’s role within it, while also increasing burnout.32,33,34 Because we lacked longitudinal data, we could not ascertain the pandemic’s impact on the relationship between EHR platform and EHR satisfaction, nor were we equipped to investigate any other potential causal relationship. Similarly, we did not directly measure the impact of EHR usability on burnout or other measures of physician burden. However, other studies have highlighted EHR-driven work as playing a substantial role in burnout and perceived burden.35,36
Finally, our data did not allow us to examine the potential roles of working hours, patient panel size, or team-based care (including documentation support) in EHR satisfaction. To the extent that these factors are predictable from physician-level and organizational variables, they are accounted for in the decomposition. However, they likely contributed to some of the impact of unobserved factors as well.
CONCLUSION
We found that a representative sample of family physicians experienced substantial variation in the usability of EHRs, which was moderately correlated with satisfaction. The characteristics of the platforms’ different user bases could not fully explain the variation in satisfaction between platforms, and EHR-specific factors significantly shifted opinion in both positive and negative directions. Many of these EHR-specific factors are difficult to measure, but our research suggests that some vendors implement their EHRs in much more physician-friendly ways than others. Following the lead of EHRs with higher usability and satisfaction may point towards ways to use EHRs to improve physicians’ workplace experience.
Data Availability
Data may be accessed for IRB-approved projects subject to the approval of the ABFM Research Governance Board. Please contact the corresponding author for details.
References
Stanberry B. Telemedicine: barriers and opportunities in the 21st century. J Intern Med. 2000;247(6):615-628. https://doi.org/10.1046/j.1365-2796.2000.00699.x.
Burton LC, Anderson GF, Kues IW. Using Electronic Health Records to Help Coordinate Care. Milbank Q. 2004;82(3):457-481. https://doi.org/10.1111/j.0887-378X.2004.00318.x.
Jha AK, Ferris TG, Donelan K, et al. How Common Are Electronic Health Records In The United States? A Summary Of The Evidence. Health Aff. 2006;25(Supplement 1):W496-W507. https://doi.org/10.1377/hlthaff.25.w496.
Jha AK. Meaningful Use of Electronic Health Records: The Road Ahead. JAMA. 2010;304(15):1709-1710. https://doi.org/10.1001/jama.2010.1497.
Halamka JD, Tripathi M. The HITECH Era in Retrospect. N Engl J Med. 2017;377(10):907-909. https://doi.org/10.1056/NEJMp1709851.
Kruse CS, Ehrbar N. Effects of Computerized Decision Support Systems on Practitioner Performance and Patient Outcomes: Systematic Review. JMIR Med Inform. 2020;8(8):e17283. https://doi.org/10.2196/17283.
Rotenstein LS, Holmgren AJ, Downing NL, Bates DW. Differences in Total and After-hours Electronic Health Record Time Across Ambulatory Specialties. JAMA Intern Med. 2021;181(6):863-865. https://doi.org/10.1001/jamainternmed.2021.0256.
Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: Primary Care Physician Workload Assessment Using EHR Event Log Data and Time-Motion Observations. Ann Fam Med. 2017;15(5):419-426. https://doi.org/10.1370/afm.2121.
Robertson SL, Robinson MD, Reid A. Electronic Health Record Effects on Work-Life Balance and Burnout Within the I3 Population Collaborative. J Grad Med Educ. 2017;9(4):479-484. https://doi.org/10.4300/JGME-D-16-00123.1.
Melnick ER, Harry E, Sinsky CA, et al. Perceived Electronic Health Record Usability as a Predictor of Task Load and Burnout Among US Physicians: Mediation Analysis. J Med Internet Res. 2020;22(12):e23382. https://doi.org/10.2196/23382.
Melnick ER, Dyrbye LN, Sinsky CA, et al. The Association Between Perceived Electronic Health Record Usability and Professional Burnout Among US Physicians. Mayo Clin Proc. 2020;95(3):476-487. https://doi.org/10.1016/j.mayocp.2019.09.024.
Peccoralo LA, Kaplan CA, Pietrzak RH, Charney DS, Ripp JA. The impact of time spent on the electronic health record after work and of clerical work on burnout among clinical faculty. J Am Med Inform Assoc. 2021;28(5):938-947. https://doi.org/10.1093/jamia/ocaa349.
Holmgren AJ, Downing NL, Bates DW, et al. Assessment of Electronic Health Record Use Between US and Non-US Health Systems. JAMA Intern Med. 2021;181(2):251-259. https://doi.org/10.1001/jamainternmed.2020.7071.
Hettinger AZ, Melnick ER, Ratwani RM. Advancing electronic health record vendor usability maturity: Progress and next steps. J Am Med Inform Assoc. 2021;28(5):1029-1031. https://doi.org/10.1093/jamia/ocaa329.
Holmgren AJ, Adler-Milstein J, McCullough J. Are all certified EHRs created equal? Assessing the relationship between EHR vendor and hospital meaningful use performance. J Am Med Inform Assoc. 2018;25(6):654-660. https://doi.org/10.1093/jamia/ocx135.
Classen DC, Holmgren AJ, Co Z, et al. National Trends in the Safety Performance of Electronic Health Record Systems From 2009 to 2018. JAMA Netw Open. 2020;3(5):e205547. https://doi.org/10.1001/jamanetworkopen.2020.5547.
Area Health Resources Files. https://data.hrsa.gov/topics/health-workforce/ahrf. Accessed September 29, 2022.
NEHRS - National Electronic Health Records Survey Homepage. Published May 4, 2022. https://www.cdc.gov/nchs/nehrs/about.htm. Accessed July 26, 2022.
Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc Ser B Methodol. 1995;57(1):289-300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x.
Holden RJ, Karsh BT. The Technology Acceptance Model: Its past and its future in health care. J Biomed Inform. 2010;43(1):159-172. https://doi.org/10.1016/j.jbi.2009.07.002.
Gardeazabal J, Ugidos A. More on Identification in Detailed Wage Decompositions. Rev Econ Stat. 2004;86(4):1034-1036. https://doi.org/10.1162/0034653043125239.
Navarro D. Learning Statistics with R. Lulu.com; 2012.
Hlavac M. oaxaca: Blinder-Oaxaca Decomposition in R. SSRN J. Published online 2014. https://doi.org/10.2139/ssrn.2528391.
Wickham H, Averick M, Bryan J, et al. Welcome to the Tidyverse. J Open Source Softw. 2019;4(43):1686. https://doi.org/10.21105/joss.01686.
R Core Team. R: A Language and Environment for Statistical Computing. Published online 2022. https://www.R-project.org/. Accessed October 4, 2022.
Atasoy H, Greenwood BN, McCullough JS. The Digitization of Patient Care: A Review of the Effects of Electronic Health Records on Health Care Quality and Utilization. Annu Rev Public Health. 2019;40(1):487-500. https://doi.org/10.1146/annurev-publhealth-040218-044206.
Morrison Z, Robertson A, Cresswell K, Crowe S, Sheikh A. Understanding Contrasting Approaches to Nationwide Implementations of Electronic Health Record Systems: England, the USA and Australia. J Healthc Eng. 2011;2(1):25-41. https://doi.org/10.1260/2040-2295.2.1.25.
Rotenstein LS, Apathy N, Landon B, Bates DW. Assessment of Satisfaction With the Electronic Health Record Among Physicians in Physician-Owned vs Non–Physician-Owned Practices. JAMA Netw Open. 2022;5(4):e228301. https://doi.org/10.1001/jamanetworkopen.2022.8301.
Adler-Milstein J, Holmgren AJ, Kralovec P, Worzala C, Searcy T, Patel V. Electronic health record adoption in US hospitals: the emergence of a digital “advanced use” divide. J Am Med Inform Assoc. 2017;24(6):1142-1148. https://doi.org/10.1093/jamia/ocx080.
Cifuentes M, Davis M, Fernald D, Gunn R, Dickinson P, Cohen DJ. Electronic Health Record Challenges, Workarounds, and Solutions Observed in Practices Integrating Behavioral Health and Primary Care. J Am Board Fam Med. 2015;28(Supplement 1):S63-S72. https://doi.org/10.3122/jabfm.2015.S1.150133.
Miller H, Johns L. Interoperability of Electronic Health Records: A Physician-Driven Redesign. Manag Care. 2018;27(1):37-40.
Apaydin EA, Rose DE, Yano EM, et al. Burnout Among Primary Care Healthcare Workers During the COVID-19 Pandemic. J Occup Environ Med. 2021;63(8):642-645. https://doi.org/10.1097/JOM.0000000000002263.
Nath B, Williams B, Jeffery MM, et al. Trends in Electronic Health Record Inbox Messaging During the COVID-19 Pandemic in an Ambulatory Practice Network in New England. JAMA Netw Open. 2021;4(10):e2131490. https://doi.org/10.1001/jamanetworkopen.2021.31490.
Patel SY, Mehrotra A, Huskamp HA, Uscher-Pines L, Ganguli I, Barnett ML. Trends in Outpatient Care Delivery and Telemedicine During the COVID-19 Pandemic in the US. JAMA Intern Med. 2021;181(3):388-391. https://doi.org/10.1001/jamainternmed.2020.5928.
Kroth PJ, Morioka-Douglas N, Veres S, et al. Association of Electronic Health Record Design and Use Factors With Clinician Stress and Burnout. JAMA Netw Open. 2019;2(8):e199609. https://doi.org/10.1001/jamanetworkopen.2019.9609.
Tajirian T, Stergiopoulos V, Strudwick G, et al. The Influence of Electronic Health Record Use on Physician Burnout: Cross-Sectional Survey. J Med Internet Res. 2020;22(7):e19274. https://doi.org/10.2196/19274.
Acknowledgements
We thank our collaborators at the American Board of Family Medicine, the Office of the National Coordinator for Health Information Technology, and the Center for Clinical Informatics and Improvement Research (CLIIR) at University of California, San Francisco for their support in developing and distributing the survey instrument.
Funding
This study was funded by the United States Office of the National Coordinator for Health Information Technology, Department of Health and Human Services, Cooperative Agreement Grant #90AX0032/01-02.
Ethics declarations
Conflict of Interest
NH reports no conflicts of interest. AB reports no conflicts of interest. AJH reports no conflicts of interest. LSR receives research support from the American Medical Association and FeelBetter, Inc. ARE reports no conflicts of interest. AHK reports no conflicts of interest. RLP reports no conflicts of interest.