Introduction

Electronic Health Records (EHRs) are a “repository of patient data in digital form, stored and exchanged securely” and an important feature of modern healthcare systems [1]. EHRs can help healthcare professionals plan, document and deliver care for their patients, as well as exchange information with other healthcare providers to provide continuity of care. Consistent use of EHRs can also reduce the rate of medical errors, improve patient safety and quality of care, and improve organisational efficiency [2,3,4,5]. However, if clinicians find EHRs disorganised or complex to use, records may not be completed consistently, which can diminish the benefits associated with EHRs and actively contribute to patient harm [6].

Usability problems in EHRs

Usability, or ease of use, predicts whether a technological system such as an EHR will be accepted and used consistently [7]. This is supported by the technology acceptance model, which states that perceived ease of use (whether the user can operate the system without undue effort), alongside the system’s perceived usefulness, predicts the behavioural intention to use and adopt the system [8, 9]. Studies also show that clinicians report usability problems to be the most common barrier to using EHR systems [10, 11]. Difficulties in completing EHRs due to usability problems also have implications for both clinicians’ and patients’ well-being. Usability problems in EHRs can increase clinicians’ cognitive workload [12], as well as increase time pressures and psychological distress [13]. The extra burden on clinical staff can also limit the time clinicians have available for direct patient care [14]. Improper use of EHRs due to usability problems can have a negative impact on the accuracy and quality of patient record keeping, which can subsequently lead to poor quality patient care [4, 15].

EHR usability in mental health

There has been little research on EHR usability in mental health [16, 17]. In the mental health context, EHRs are used to record sensitive and potentially stigmatising patient information [16, 18]. EHR usability problems can prevent proper completion of patient records and distress mental health patients, who may be expected to repeatedly re-live traumatic experiences when their information is not properly recorded [16, 19]. Improving EHR usability in mental health contexts is particularly challenging because healthcare professionals may have different and at times conflicting requirements of EHRs depending on their clinical teams [16, 17]. Some of these difficulties could be overcome by tailoring the EHR system to the common needs of mental health teams, ideally by involving clinicians throughout this process [17, 20, 21].

Improving EHR usability

Studies that improved EHR usability reported better clinician satisfaction with the system and improvements in clinicians’ cognitive workload and performance [12, 22]. Examples of usability improvements included customising EHRs for different clinical professions, as well as adding navigational pathways, keyboard shortcuts and auto-population of patient records, which can pull information from other sections of the EHR system [12, 17, 22]. Feasibility testing and clinician participation at all stages of EHR improvement have further been advocated as approaches to improve the usability of the system for healthcare professionals [13, 17, 23].

Methods to evaluate usability improvements

Clinician-based surveys are commonly used to evaluate usability improvements [7, 24,25,26,27]. However, many studies do not use validated surveys or describe how their surveys were developed. Objective measures, such as capturing the patient information completed in EHRs, have also been recommended [1, 4]. Usability testing is another objective measure, which can help indicate whether a system is efficient, effective and easy to use for the intended user [28, 29]. This testing asks users to perform realistic tasks using the system in typical conditions, during which the task completion rate, mouse clicks or time taken to complete tasks are recorded [15, 24, 29, 30]. A combination of objective and self-report measures to evaluate usability improvements is likely to be the most informative.
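To illustrate the kind of objective data usability testing produces, the sketch below shows one minimal way such session metrics could be logged. It is purely illustrative and not the instrumentation used in this study; the task names and fields are hypothetical.

```python
# Purely illustrative sketch of logging usability-testing metrics
# (hypothetical task names; not the instrumentation used in this study).
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class TaskLog:
    task_name: str
    completed: bool = False
    clicks: int = 0
    seconds: float = 0.0


@dataclass
class UsabilitySession:
    tasks: List[TaskLog] = field(default_factory=list)

    def start_task(self, task_name: str) -> None:
        self._current = TaskLog(task_name)
        self._start = time.monotonic()

    def record_click(self) -> None:
        self._current.clicks += 1

    def finish_task(self, completed: bool = True) -> None:
        self._current.seconds = time.monotonic() - self._start
        self._current.completed = completed
        self.tasks.append(self._current)

    def summary(self) -> dict:
        done = [t for t in self.tasks if t.completed]
        return {
            "completion_rate": len(done) / len(self.tasks) if self.tasks else 0.0,
            "mean_task_seconds": sum(t.seconds for t in done) / len(done) if done else 0.0,
            "total_clicks": sum(t.clicks for t in self.tasks),
        }


session = UsabilitySession()
session.start_task("Complete risk assessment form")  # hypothetical task
session.record_click()
session.finish_task(completed=True)
print(session.summary())
```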

Aim

The aim of the current study was to improve the usability of EHR assessment forms completed after a mental health assessment within a clinical mental health setting in the UK. This was a feasibility study conducted with two mental health teams prior to implementation across the wider organisation. It was hypothesised that improving the usability of EHR assessment forms in collaboration with clinicians would: (i) increase the number of completed forms, (ii) improve overall clinician experience, and (iii) reduce the time spent completing and duplicating patient information.

Methods

Study design

The present study evaluated usability changes made to existing EHR assessment forms using a pre-post design with three forms of measurement.

Setting and participants

Two community-based adult mental health teams from one mental health NHS Trust in the UK were convenience sampled based on their availability for feasibility testing. Both teams provide community mental health care for about 1000 adult patients and employ around 50 multidisciplinary staff, including psychiatrists, social workers, occupational therapists and community psychiatric nurses.

Procedure

Usability changes were made to existing EHR assessment forms which are typically completed following a mental health assessment with a patient (see Box 1). These EHR assessment forms have been in use within the healthcare organisation since 2015, replacing a previous version of EHRs.

Usability improvements to the EHR assessment forms were made by the EHR team in collaboration with clinicians. First, a focus group was conducted with clinicians to identify common usability barriers; this was followed by iterative changes to the EHR assessment forms with regular consultation with clinicians across the two teams. This process took approximately six months. The new EHR assessment forms were trialled with the two mental health teams from 15th April 2019. Baseline testing was conducted in the two months before implementation, and post-testing was conducted within ten weeks after implementation (i.e. the pre-post evaluation phase ran from 18th February 2019 to 23rd June 2019). See the flow diagram in Fig. 1 for a summary of the study procedure.

Results

Usability testing

Demographics

Participant demographics for the three clinicians participating in usability testing were as follows: gender (female, n = 2; male, n = 1), age (30–40 years, n = 2; 50–60 years, n = 1), and profession (doctor, n = 1; psychiatric nurse, n = 1; social worker, n = 1).

Time taken to complete assessment forms

At baseline, the average time taken to complete all EHR assessment forms was 62.06 min (SD 14.37 min, range 51.29–82.38 min) and after changes to EHR assessment forms was 36.67 min (SD 13.34 min, range 27.23–46.10 min). There was a decrease of 40.9% in time taken to complete EHR assessment forms.
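For clarity, the percentage reduction is the difference between the baseline and post-change means expressed as a proportion of the baseline mean:

\[
\frac{62.06 - 36.67}{62.06} \times 100\% \approx 40.9\%
\]

The same calculation underlies the 71.4% reduction in duplication time reported below.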

Duplication of assessment forms

Clinicians recorded a total of 19 duplications at baseline (SD 4.93; range 3–12) and a total of 3 duplications post-changes (SD 3; range 0–3). At baseline, the average time spent duplicating patient information was 14.36 min (SD 10.28 min, range 2.14–13.34 min) and post-changes was 4.10 min (SD 0.13 min, range 4.00–4.19 min). There was an overall decrease of 71.4% in the time spent duplicating patient information with the updated EHR assessment forms.

Post-usability testing

The System Usability Scale (SUS) was administered immediately after usability testing. The average total SUS score was 19 at baseline (SD 2.64, range 16–21) and 42 post-changes (SD 1.42, range 41–43), indicating increased satisfaction with the usability of the updated EHR assessment forms.

Clinician Experience Survey

Demographics

A total of 71 participants from both teams completed the survey (pre-changes, n = 43; post-changes, n = 28). Key demographic information is reported in Table 3, which shows that there were no significant differences between the groups of clinicians completing the survey at baseline and post-changes.

Table 3 Demographic information for clinicians completing the clinician experience survey, pre and post changes

Clinicians’ experience of EHR assessment forms

A composite survey score was calculated by summing clinicians’ scores on each survey item (see Table 4). The average composite survey score at baseline was 32.2 (SD 6.6) and post-changes was 41.1 (SD 7.8). Since the data met the assumptions of normality (Shapiro-Wilk test, p > 0.05) and equal variance (Levene’s test, F = 2.78, p = 0.1), an independent t-test was conducted; it showed a significant difference between the pre and post groups, t(67) = -3.005, p = 0.004, 95% CI -8.66 to -1.74. Clinicians had a significantly better experience of completing EHR assessment forms following the usability improvements.
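As an illustration of this analysis pipeline, the following sketch shows how the normality check, the homogeneity-of-variance check and the independent t-test could be run with SciPy. The simulated scores are placeholders that only mimic the reported group means and standard deviations; they are not the study data.

```python
# Illustrative sketch of the survey analysis pipeline (simulated placeholder data,
# NOT the study data; values only mimic the reported means and SDs).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
pre_scores = rng.normal(loc=32.2, scale=6.6, size=43)    # composite scores, pre-changes
post_scores = rng.normal(loc=41.1, scale=7.8, size=28)   # composite scores, post-changes

# Normality of each group (Shapiro-Wilk): p > 0.05 gives no evidence against normality.
print(stats.shapiro(pre_scores))
print(stats.shapiro(post_scores))

# Homogeneity of variance across groups (Levene's test).
print(stats.levene(pre_scores, post_scores))

# Independent-samples t-test assuming equal variances.
result = stats.ttest_ind(pre_scores, post_scores, equal_var=True)
print(result.statistic, result.pvalue)

# 95% confidence interval for the difference in means (SciPy >= 1.10).
print(result.confidence_interval(confidence_level=0.95))
```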

Clinicians’ self-reported usage of and satisfaction with EHR assessment forms

Clinicians reported the highest increase in usage and satisfaction for the mental health assessment forms after usability improvements were made (see Appendix Table A.1).

Table 4 Clinicians’ scores on each item of the clinician experience survey and composite survey score, pre and post changes

Proportion of completed EHR forms

The number of completed EHR assessment forms at baseline and post-changes, and the results of the two-proportion z-tests, are shown in Table 5. Clinicians’ use of the following forms increased significantly after the usability improvements were made: mental health assessment, risk assessment, HoNOS and patient care plans. Overall, there was an increase in clinicians’ use of EHR assessment forms after the usability improvements; only the physical health assessment forms did not show a statistically significant increase in usage once the Bonferroni correction was applied.

Table 5 Number of completed EHR assessment forms for new patient episodes opened in both teams and two proportion z-test of differences, pre and post changes
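As an illustration of this comparison, the sketch below shows how two-proportion z-tests with a Bonferroni-adjusted significance threshold could be run using statsmodels. The counts are hypothetical placeholders rather than the values in Table 5.

```python
# Illustrative two-proportion z-tests with a Bonferroni-adjusted threshold
# (hypothetical counts; NOT the values reported in Table 5).
from statsmodels.stats.proportion import proportions_ztest

# (completed forms, new patient episodes) at baseline and post-changes, per form type
forms = {
    "mental health assessment": ((60, 100), (85, 110)),
    "risk assessment": ((55, 100), (80, 110)),
    "physical health assessment": ((20, 100), (30, 110)),
}

alpha = 0.05 / len(forms)  # Bonferroni correction across the family of tests

for name, ((pre_done, pre_n), (post_done, post_n)) in forms.items():
    z, p = proportions_ztest([pre_done, post_done], [pre_n, post_n])
    print(f"{name}: z = {z:.2f}, p = {p:.4f}, significant = {p < alpha}")
```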

Discussion

The current study evaluated usability improvements made to existing EHR assessment forms within a mental health setting in the UK. This was a feasibility study conducted with two mental health teams before usability changes were implemented across the wider healthcare organisation. All changes to the EHR system were made in collaboration with clinicians. Evaluation of the updated EHR assessment forms showed that clinicians spent less time completing forms and duplicating patient information, were satisfied with the usability changes, and completed more EHR assessment forms for patients after usability was improved.

The technology acceptance model proposes that the usability of a system can predict adoption of that system, and it is a widely used model when implementing EHRs in healthcare services [8, 9, 34]. The current findings offer some further insights into how usability can sustain the usage of existing systems. We have developed a framework to conceptualise the present and past research findings: we propose that the usability of a system affects the use of an existing system such as an EHR, and that this effect is mediated by improved user satisfaction and a reduction in unnecessary time spent using the system. The framework also supports the need to engage users when making usability improvements to achieve better outcomes (see Fig. 2).

Fig. 2 Framework to show the relationship between usability and usage of an existing technological system. According to the framework, user feedback can influence the usability of a system. System usability is likely to impact system use, mediated by user satisfaction and time spent using the system

According to this framework, improving usability would need to improve user satisfaction with a system before usage of the system increases. In the current study, both user satisfaction and the use of EHR assessment forms increased after usability was improved, but the direction of this relationship is unclear because user satisfaction and use of EHRs were assessed at the same time. However, previous work supports the notion that increased user satisfaction with a system can subsequently lead to increased usage of that system [12, 22]. As proposed in our framework, clinicians’ satisfaction with the system could also be mediated by the time spent using it. Clinicians in the present study completed EHR assessment forms much more quickly and spent less time duplicating patient information, which likely contributed to improved satisfaction. Reducing clinicians’ time spent using EHRs also has potential long-term implications: information recorded in EHRs is more likely to be accurate and complete, clinicians’ cognitive workload could be reduced, and more clinician time could be spent on direct patient care [12,13,14]. Longitudinal studies or longer follow-ups are needed to evaluate these possible long-term implications. In terms of usage, there was an overall increase in completed EHR assessment forms in the present study, with only the physical health assessment forms not showing a significant increase. This is a promising finding for a short-term feasibility study and is consistent with previous studies which found that when EHRs were difficult to use, clinicians would only partially complete them, find their own workarounds in the system or rely solely on paper-based forms [14, 15, 22]. Such practices can compromise the safety and quality of patient care, so improving EHR usability and clinician satisfaction is important to ensure a consistent and accurate record of patient information.

Specific usability changes in the present study could have further influenced clinicians’ satisfaction and use of the system. One reason could be that these changes improved the general clinical workflow, making EHRs a meaningful and integral tool to support patient care rather than a burdensome task for clinicians [35, 36]. Usability changes such as introducing a visual dashboard, removing duplicate questions across forms, and adding auto-population features were all aimed at making EHR assessment forms quicker and more efficient to use, but could have indirectly improved the workflow of the system [12, 17, 22]. For instance, the auto-population feature used in the present study could have specific implications for mental health services. Auto-populating EHRs with previously recorded information for the same patient could prevent mental health patients from having to repeatedly recall psychologically distressing information [16, 18]. Further, auto-population of letters to primary care clinicians could be extended to clinicians from other services, facilitating integration of services for mental health patients [16, 18]. However, the impact of features such as auto-population on clinical workflow was not directly assessed in the present study and could be evaluated in future work.
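As a purely illustrative sketch of the kind of auto-population described above (the field names and logic are hypothetical and not the implementation used in the study EHR), a new assessment form could be pre-filled from a patient’s most recent record for the fields the forms share, leaving anything the clinician has already entered untouched:

```python
# Hypothetical auto-population sketch (illustrative only; not the study EHR's implementation).
from typing import Any, Dict, Iterable


def auto_populate(new_form: Dict[str, Any],
                  previous_record: Dict[str, Any],
                  shared_fields: Iterable[str]) -> Dict[str, Any]:
    """Pre-fill blank shared fields in a new form from the patient's previous record,
    leaving any values the clinician has already entered untouched."""
    populated = dict(new_form)
    for name in shared_fields:
        if not populated.get(name) and name in previous_record:
            populated[name] = previous_record[name]
    return populated


# Example: carrying forward trauma history so the patient is not asked to repeat it.
previous = {"trauma_history": "Documented at initial assessment",
            "gp_details": "Dr A, Example Surgery"}
new_assessment = {"trauma_history": "", "gp_details": "", "presenting_problem": ""}
print(auto_populate(new_assessment, previous, ["trauma_history", "gp_details"]))
```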

As proposed in our framework, user feedback can be helpful when deciding which usability changes should be made to an existing system. In the present study, usability changes were directly tailored to clinicians in the healthcare organisation. The changes also underwent many iterations based on clinician input. This strategy has been widely recommended in previous studies [13, 17, 23, 37]. Encouraging a dialogue between clinicians and the EHR team can also ensure that usability changes are within the limits of technology and help both parties come to a shared understanding about the purpose of the EHR system. Future usability studies should adopt this approach to make the process of change efficient for both clinicians and EHR teams and increase the likelihood that the usability changes will lead to improved outcomes.

Limitations and future work

The current findings should be interpreted in light of several limitations. First, this was a feasibility study conducted with two mental health teams within a short time-frame, so the findings are not easily generalisable. Another limitation was the small sample size of clinicians in the usability testing and the fact that these clinicians were recruited from only one clinical team. While small sample sizes are generally sufficient to highlight obvious usability errors, future work should include a larger number of clinicians from different clinical teams. A further limitation was the author-developed clinician experience survey, which had not been previously validated. To minimise this limitation, questions in the survey were adapted from previous work and the survey was used in combination with other evaluation methods. The framework examining how usability could affect usage of existing systems could not be verified within the current study. Controlled experimental studies or longitudinal studies using an interrupted time series methodology are needed to further validate this framework [38, 39]. Finally, the content of the EHR assessment forms themselves can also affect clinicians’ satisfaction and usage of the forms; this was beyond the scope of the current work but should be considered in future research.

Conclusions

Poor EHR usability can be a barrier to clinicians using EHRs consistently and accurately. There is limited research on how to improve EHR usability in mental health services. The current feasibility study made usability changes to an existing EHR system by reducing duplication, improving navigation, customising forms to clinical teams and adding auto-population features. The results showed that improving the usability of EHRs reduced clinicians’ time spent completing and duplicating patient information, improved clinicians’ satisfaction with the system and increased usage of EHR assessment forms in the clinical service. It is important to tailor EHR usability improvements to the clinicians who use the system. Future work should improve EHR usability across the wider healthcare organisation and evaluate the longer-term implications for clinicians’ workload and patient care in mental health settings.