
The Electronic Health Record in Ophthalmology: Usability Evaluation Tools for Health Care Professionals

Abstract

Introduction

The adoption of the electronic health record (EHR) has grown rapidly in ophthalmology. However, despite its potential advantages, its implementation has often led to dissatisfaction amongst health care professionals (HCPs). This can be addressed using user centred design (UCD), which is based on the philosophy that ‘the final product should suit the users, rather than making the users suit the product’. There is often no agreed best practice on the role of HCPs in the UCD process. In this paper, we describe practical qualitative methodologies that can be used by HCPs in the design, implementation and evaluation of ophthalmology EHRs.

Methods

A review of current qualitative usability methodologies was conducted by practising ophthalmologists who are also qualified health informaticians.

Results

We identified several qualitative methodologies that could be used for EHR evaluation. These include:

  1. Tools for user centred design: shadowing and autoethnography, semi-structured interviews and questionnaires

  2. Tools for summative testing: card sort and reverse card sort, retrospective think aloud protocol, wireframing, screenshot testing and heat maps

Conclusion

High-yield, low-fidelity tools can be used to engage HCPs with the process of ophthalmology EHR design, implementation and evaluation. These methods can be used by HCPs without the requirement for prior training in usability science, and by clinical centres without significant technical requirements.

Key Summary Points
The adoption of the electronic health record (EHR) has grown rapidly in ophthalmology.
However, despite its potential advantages, its implementation has often led to dissatisfaction amongst health care professionals (HCP).
High-yield, low-fidelity tools can be used to engage HCPs with the process of ophthalmology EHR design, implementation and evaluation.
These methods can be used by HCPs without the requirement for prior training in usability science, and by clinical centres without significant technical requirements.

Digital Features

This article is published with digital features, including a summary slide, to facilitate understanding of the article. To view digital features for this article go to https://doi.org/10.6084/m9.figshare.13067279.

Introduction

Electronic health records (EHRs) are defined by the International Organization for Standardization as “a repository of data in digital form, stored and exchanged securely, and accessible by multiple authorized users” [1].

A number of studies have shown that poorly designed EHRs can be associated with patient and health care professional (HCP) dissatisfaction, reduced patient contact time and physician burnout [2]. Some of the issues include the presence of too many screens, options and prompts. The process of entering data into the system can be unintuitive, with clinicians having to adapt working practices to fit into the workflow of existing EHRs [3]. The impact of COVID-19 has confirmed the necessity and usefulness of structured queries, triage and prioritization; these are elements that can potentially be addressed by well-designed EHRs. This might further drive the usage and adoption of EHRs.

EHR vendors in countries such as the USA are obliged to meet certification requirements set by the Office of the National Coordinator for Health Information Technology in efforts to promote user centred design (UCD). It has been shown that there are significant variations in UCD processes and testing methodologies by vendors [4]. UCD processes and usability testing methodology reports provided by vendors can be complex, making it difficult for HCPs who are not trained in usability science to understand the information.

Fully developed and implemented EHRs should ideally be continuously and independently evaluated by end users, much like post-market surveillance of a pharmaceutical drug or medical device. A systematic review published in 2017 showed that the most commonly used usability evaluation tools were surveys or questionnaires distributed among end users [5]. While surveys are advantageous in determining a user’s perceptions of an EHR system, they are poor at identifying specific usability problems that can be used for targeted improvements.

Ophthalmology is a unique branch of medicine in that it is both a medical and surgical specialty. There is limited published research on usability evaluation of ophthalmology EHRs [6, 7]. The aim of our paper is to discuss practical qualitative methods for usability evaluation of ophthalmology EHRs. These methods can be used by HCPs without the requirement of prior training in usability science, and by clinical centres without significant technical requirements. This allows for continuous end user engagement with the EHR vendor.

Methods

A literature search was conducted on Pubmed, Medline and Google Scholar using the search terms ‘usability testing’, ‘electronic health records’, ‘electronic patient records’. Manual searches of bibliographies, citations and related articles were also undertaken. Eligibility assessment was conducted by YJC and AL who are practising ophthalmic surgeons and qualified health informaticians. This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.

Results and Discussion

We identified six different types of methodologies which can be used for the user centred design process and summative testing process, which have been summarized in Table 1. These methodologies were selected on the basis of their ease of use and accessibility to HCPs who are not trained in usability science.

Table 1 Methodologies for the user centred design process and summative testing process

The authors of this paper are clinicians with formal qualifications in health informatics. We have simplified complex domains of usability science so that these tools and techniques can be understood and used by a wide range of HCPs with different educational backgrounds.

Tools for User Centred Design

The first stage in EHR development is UCD. This process puts the needs of the end user at the forefront of EHR design, to ensure that they are adequately reflected in the final EHR system. This can be particularly challenging in a field such as ophthalmology because of the multidisciplinary approach to patient care. For example, a single outpatient episode in an ophthalmic unit might involve the optometrist, orthoptist, ophthalmic photographer, ophthalmic nurse, ophthalmic technician and the ophthalmologist who will all interact with the EHR and have their own unique requirements.

Effective usability tools are needed to identify these needs, which are often complex and hard for the end user to communicate. A combination of three tools can be used in the UCD process: (1) shadowing, (2) autoethnography and (3) semi-structured interviews and questionnaires.

Shadowing and Autoethnography

Shadowing is a technique where the researcher follows participants in their daily activities over a period of time, with documentation of the user actions by note taking or video recording [8, 9]. This provides a unique opportunity for researchers to understand the different terminologies used in a clinical setting, and what information or clinical events are considered critical to different HCPs. For example, the orthoptist will require specific tools to document the measurement of eye movements, while the medical ophthalmologist will be reliant on temporal comparisons of photographs and imaging of the eye. The researcher functions as an apprentice with the aim to understand and appreciate the role and requirements of the master [10].

The method of autoethnography can follow on from shadowing, as the researcher now has a basic understanding of the practical requirements of the end user. Autoethnography is a research method used in the field of human–computer interaction, where the researcher becomes a participant to obtain a deeper knowledge of the subjective state of end users [11]. This is achieved through the human capacity for empathy. For example, the researcher could engage in forms of self-reflection and writing, as though they were the end user themselves.

There are several limitations to shadowing and autoethnography. Firstly, researchers might have varying degrees of access to real-world clinical settings. Secondly, it might still be difficult for researchers who are not content experts to appreciate the difficulty and varying complexity of certain clinical tasks.

Structured interviews or questionnaire surveys entail a list of questions, with little opportunity for respondents to provide suggestions outside of a rigid template. In the field of ophthalmology, there have been several studies looking at the adoption of EHR in the UK and the USA. For example, a cross-sectional study in 2017 showed that fewer than 50% of ophthalmology units in the UK were using EHR [12]. In the USA, a 2018 cross-sectional study showed that the adoption rate of EHR in ophthalmology was 72%, with respondents having a more negative perception of EHR productivity outcomes and effect on practice cost compared to previous studies [13].

Semi-Structured Interviews and Questionnaires

National cross-sectional surveys are useful to provide information about the general adoption and perception of EHRs. However, the results of such findings often fail to identify specific usability issues that can be targeted for improvement. National surveys are often only conducted once every few years while end users should ideally be engaged continuously so that iterative improvements can be made.

In contrast with structured interviews, semi-structured interviews are in-depth interviews in which respondents are asked pre-defined open-ended questions, with responses subsequently thematically analysed to generate a comprehensive picture of the collective experience. Studies have suggested that five participants could reveal about 80% of all usability problems, although increased sample sizes have reported benefits in usability and utility testing [14–16].
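The five-participant estimate traces back to a simple problem-discovery model popularized by Nielsen [15]: if each participant independently uncovers any given problem with probability p, then n participants are expected to uncover a proportion 1 − (1 − p)^n of all problems. A minimal sketch, assuming Nielsen's commonly quoted average detection rate of p ≈ 0.31 (an assumption for illustration, not a figure reported in this paper):

```python
def proportion_found(n: int, p: float = 0.31) -> float:
    """Expected proportion of usability problems uncovered by n
    participants under the problem-discovery model, where each
    participant finds any given problem with probability p.
    p = 0.31 is Nielsen's commonly quoted average (assumed here)."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} participants: {proportion_found(n):.0%}")
```

With p = 0.31 this gives roughly 84% at five participants, consistent with the 'about 80% with five users' rule of thumb, while the diminishing returns beyond ten participants motivate the larger samples discussed by Faulkner [16].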

Semi-structured interviews can be easily conducted in individual or clusters of ophthalmology units. For example, an open-ended question like “What specific information do you need to record during an oculoplastics consultation?” could reveal information such as the need for templates for eyelid measurements, tear film break-up time, accompanied by anatomical drawings of the eyelids and orbit.

There are several commonly cited limitations to this method. Firstly, manual clustering of themes poses a risk that conclusions will be over-reliant on the researcher’s “often unsystematic views about what is significant and important” [17]. Responses might also be influenced by what the respondent thinks a particular situation requires [18]. People might also react differently depending on how they perceive the researchers [19].

Tools for Summative Testing

Once the end user needs have been ascertained, the system needs to be designed to reflect them and undergo rigorous testing. This is referred to as the summative testing process. Participants involved in this process should reflect the end user demographics of the EHR. Ideally, these are the same users whose needs were addressed in the UCD process. It is important to highlight that UCD and summative testing are not sequential but iterative processes: the system must be constantly redesigned and retested so that end user needs, and hence end user satisfaction, are addressed. There are a number of tools that can be used to conduct summative testing: (1) card sort and reverse card sort, (2) retrospective think aloud protocol and (3) wireframing, screenshot testing and heat maps.

Card Sort and Reverse Card Sort

Card sorting is an effective, cheap and easy way to help understand the expectations of end users about how content should be organized [20]. Although it is a common tool in usability science, the technique is seldom applied to EHRs: a recent literature review of the relative frequency of usability analysis methods in EHR studies found that card sort was used only 1% of the time [5]. This is surprising given that card sorting can be done using affordable software.

In a card sort, a list of relevant topics is first identified. For example, a list of 20–30 topics might include primary complaints, current medications, intraocular pressure, visual acuity, driving status and laboratory blood tests. Participants are then asked to group topics together as categories. Topics such as primary complaints, ocular history, past medical history, systemic history, family history, driving status and allergies could then be grouped under the category of “History”.

Participant agreements about categories provide researchers with information about which items should be grouped together. This can subsequently inform the structure of the EHR.
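Participant agreement can be quantified with a simple pairwise co-occurrence count: for every pair of topics, tally how many participants placed them in the same group. A minimal sketch with made-up sort data (the topic names and groupings below are illustrative only):

```python
from itertools import combinations
from collections import Counter

# Hypothetical card-sort results: each participant's grouping of topics.
# Category labels are not needed for the agreement analysis, only the
# groupings themselves.
sorts = [
    [{"primary complaints", "ocular history", "driving status"},
     {"visual acuity", "intraocular pressure"}],
    [{"primary complaints", "ocular history"},
     {"visual acuity", "intraocular pressure", "driving status"}],
    [{"primary complaints", "ocular history", "driving status"},
     {"visual acuity", "intraocular pressure"}],
]

# Count, for every pair of topics, how many participants placed them
# in the same group.
pair_counts = Counter()
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Agreement = proportion of participants grouping a pair together.
for pair, count in pair_counts.most_common():
    print(pair, round(count / len(sorts), 2))
```

Pairs with near-unanimous agreement are strong candidates for placement in the same EHR section; pairs that split the participants flag areas where the information architecture needs further discussion.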

Tree testing, or reverse card sort, is a technique used to evaluate the ease with which content can be found and navigated within a software’s information architecture [21]. The ‘tree’ is the site structure of the EHR, essentially a simplified structure of the software. This allows the structure of the EHR to be evaluated in isolation, without the effects of factors such as visual design or navigation aids. Users are provided with tasks and asked to complete them by navigating a collection of cards (each bearing a category created during the initial card sort). This evaluative approach tells the researcher whether a predetermined hierarchy is a good way to find information.
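Each participant's navigation through the tree can be classified as a direct success, an indirect success (reaching the target after backtracking) or a failure. A minimal sketch, with a hypothetical task and invented navigation logs (the node names below are assumptions, not the structure of any real EHR):

```python
from collections import Counter

# Hypothetical tree-test task: find where to record tear break-up time.
correct_path = ["Examination", "Anterior segment", "Tear film"]

# Made-up navigation logs, one per participant.
navigated = [
    ["Examination", "Anterior segment", "Tear film"],             # straight there
    ["History", "Examination", "Anterior segment", "Tear film"],  # backtracked
    ["History", "Investigations"],                                # gave up elsewhere
]

def outcome(path):
    """Classify a navigation path against the correct destination."""
    if path[-1] != correct_path[-1]:
        return "failure"
    return "direct success" if path == correct_path else "indirect success"

print(Counter(outcome(p) for p in navigated))
```

A high failure or indirect-success rate for a task suggests the category hierarchy, rather than any visual element, is misleading users.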

Figure 1 provides an example of the user journey based on the clinician inputting a patient’s tear break-up time. This provides researchers with a representation of the way end users navigated through the structure of the EHR to accomplish a particular task.

Fig. 1 Example of the user journey based on the clinician inputting a patient’s tear break-up time

There are several limitations to the card sort methodology [22]. Firstly, this type of study is performed outside the actual EHR system and is stripped of its context. One is able to obtain information about how individuals combine concepts, but not about how effectively users will find relevant information in the final EHR system. Secondly, it is difficult to determine the extent to which the wording of topics influences the way subjects group cards. To counter this limitation, participants can be instructed to think of underpinning concepts beyond the words provided. Lastly, card sorting does not allow users to place one topic into more than one category, whereas the information landscape of an EHR often allows concepts to reside in multiple places across multiple pages.

Retrospective Think Aloud Protocol

Reverse card sorting can be a useful technique for analysis of navigation issues. However, the method above does not provide researchers with the participants’ reasoning when making those particular navigational decisions.

Another useful method is the retrospective think aloud (RTA) protocol. During this process, participants first carry out their tasks silently, while subsequently verbalizing their thoughts in retrospect [23]. The retrospective verbalization can be supported by adjuncts such as video recording of the process or computer log files [15, 23]. The theory behind this is that when verbalization is accompanied by adjuncts, the RTA combines both the benefits of working silently and thinking aloud.

Wireframing, Screenshot Testing and Heat Maps

Following on from the identification of end user usability issues, a low-fidelity prototype of an EHR can be created with a technique known as wireframing. Wireframe mock-ups are two-dimensional illustrations of a webpage or a software’s interface. They do not involve design elements, which allows for quick iterative assembly and testing [24]. The benefits of wireframes include the simplicity with which they determine a software’s information architecture and its intended functionality in the interface.

Wireframes can also be created without the need for coding or programming expertise. It is interesting to note that wireframing was not used by any of the 120 usability studies performed on EHRs [5]. This could be due to the perceived difficulty of creating a prototype owing to a lack of usability training amongst clinicians. Wireframes can be built using simple software from companies such as Balsamiq (https://balsamiq.com). Another alternative would be to simply manually sketch the architecture of the EHR on blank pieces of paper.

One of the limitations of low-fidelity wireframes is the lack of interactivity and functionality of the actual EHR, such as accordion menus, dropdown windows and active links. Wireframes also do not take into account the technical elements of existing EHRs. On the other hand, a fully interactive prototype requires significantly more resources in terms of technical input, time and cost. This would be impractical for clinicians unless they have specific training and resources dedicated to usability science.

Screenshot testing is a usability tool that can be used in conjunction with low-fidelity wireframe prototypes. Chalkmark software developed by Optimal Workshop is a simple method of conducting screenshot testing [25]. Participants are asked to complete a series of tasks that require them to navigate through the wireframes. Quantitative information that can be generated includes the proportion of users whose first click was correct, the locations that participants clicked, and the average time taken to complete a task.
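The first-click and timing metrics described above reduce to straightforward aggregation over the test log. A sketch with hypothetical records (the task, the target region name and the timings are invented for illustration):

```python
# Hypothetical screenshot-test log: one record per participant,
# recording the region they clicked first and the time taken (s).
results = [
    {"task": "record visual acuity", "first_click": "Examination", "time_s": 4.2},
    {"task": "record visual acuity", "first_click": "Examination", "time_s": 6.8},
    {"task": "record visual acuity", "first_click": "History",     "time_s": 9.1},
]
correct_target = "Examination"  # assumed correct first-click region

n = len(results)
first_click_rate = sum(r["first_click"] == correct_target for r in results) / n
mean_time = sum(r["time_s"] for r in results) / n

print(f"first-click success: {first_click_rate:.0%}")
print(f"mean time to complete: {mean_time:.1f} s")
```

First-click success is a particularly useful headline figure, as users whose first click is correct are far more likely to complete the task overall.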

Results of this user testing method can then be displayed as a visual map of activity, indicating the areas where users clicked most often to complete a task. These are known as heat maps.
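A heat map can be approximated by binning the recorded click coordinates into a coarse grid and counting clicks per cell; cells with high counts correspond to the 'hot' areas of the interface. A sketch using made-up pixel coordinates:

```python
from collections import Counter

# Hypothetical click coordinates (x, y in pixels) from a screenshot test.
clicks = [(120, 40), (125, 45), (130, 42), (400, 300), (410, 310)]

CELL = 100  # grid cell size in pixels; coarser cells give smoother maps
heat = Counter((x // CELL, y // CELL) for x, y in clicks)

# Render a crude text heat map: click counts laid out on the grid.
max_x = max(cx for cx, _ in heat)
max_y = max(cy for _, cy in heat)
for cy in range(max_y + 1):
    print(" ".join(str(heat.get((cx, cy), 0)) for cx in range(max_x + 1)))
```

In practice, tools such as Chalkmark render these counts as a colour overlay on the screenshot itself, but the underlying aggregation is the same binning shown here.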

Conclusion

This paper provides HCPs with foundational skills in usability analysis, which are not currently part of the core curriculum in medical schools or specialist training programs. In many countries, no national frameworks exist mandating the use of such tools in EHR design, resulting in variable uptake of these methodologies by the few major ophthalmology EHR vendors [5]. Providing HCPs with these tools will enable them to engage in meaningful conversation with commercial EHR vendors, and play an active role in their development. This will improve the accountability of EHR vendors in adopting usability-driven processes, improve EHR design and improve patient and HCP satisfaction [2].

It is important to appreciate that the usability tools that we described only form one component of the EHR development process. These tools should not be used in isolation but rather in conjunction with other EHR developmental processes such as utility analysis (whether the system provides features needed by the end user) and prototyping. It is, however, beyond the scope of this paper to explore the full details of the EHR development process. The development and refinement of EHRs should be a continuous and iterative process, in which changes at one stage may require evaluation and changes at another stage.

End users should be continuously involved and engaged in usability testing of an EHR, much like post-marketing safety evaluations of technology and medications used in real-world clinical settings. With these tools, which can be deployed in any clinical unit away from resource-rich research centres, we hope that clinical information leads can work together with EHR vendors and various stakeholders to continuously improve the usability of EHRs.

References

  1. International Organization for Standardization (ISO)/DTR 20514. Health informatics—electronic health record—definition, scope, and context. ISO. 2004. https://www.iso.org/obp/ui/#!iso:std:39525:en. Accessed 1 Sep 2015.

  2. Downing NL, Bates DW, Longhurst CA. Physician burnout in the electronic health record era: are we ignoring the real cause? Ann Intern Med. 2018;169(1):50–1.

  3. Miller RH, Sim I. Physicians’ use of electronic medical records: barriers and solutions. Health Aff. 2004;23(2):116–26.

  4. Ratwani RM, Zachary Hettinger A, Kosydar A, Fairbanks RJ, Hodgkins ML. A framework for evaluating electronic health record vendor user-centered design and usability testing processes. J Am Med Inf Assoc. 2017;24(e1):e35–9.

  5. Ellsworth MA, Dziadzko M, O’Horo JC, Farrell AM, Zhang J, Herasevich V. An appraisal of published usability evaluations of electronic health records via systematic review. J Am Med Inf Assoc. 2017;24(1):218–26.

  6. Read-Brown S, Hribar MR, Reznick LG, et al. Time requirements for electronic health record use in an academic ophthalmology center. JAMA Ophthalmol. 2017;135(11):1250–7.

  7. Hribar MR, Read-Brown S, Goldstein IH, et al. Secondary use of electronic health record data for clinical workflow analysis. J Am Med Inf Assoc. 2018;25(1):40–6.

  8. Aldersey-Williams H, Bound J, Coleman R. The methods lab: user research for design. London: Design for Ageing Network.

  9. Brun-Cottan F, Wall P. Using video to re-present the user. Commun ACM. 1995;38(5):61–71.

  10. Wallach D, Scholz SC. User-centered design: why and how to put users first in software development. In: Software for people. Berlin: Springer; 2012. p. 11–38.

  11. Wright P, McCarthy J. Empathy and experience in HCI. In: Proceedings of the SIGCHI conference on human factors in computing systems; 2008. p. 637–46.

  12. Lim SB, Shahid H. Distribution and extent of electronic medical record utilisation in eye units across the United Kingdom: a cross-sectional study of the current landscape. BMJ Open. 2017;7(5):e012682.

  13. Lim MC, Boland MV, McCannel CA, et al. Adoption of electronic health records and perceptions of financial and clinical outcomes among ophthalmologists in the United States. JAMA Ophthalmol. 2018;136(2):164–70.

  14. Aronson J. A pragmatic view of thematic analysis. Qual Rep. 1995;2(1):1–3.

  15. Nielsen J. Usability engineering. Burlington: Morgan Kaufmann; 1994.

  16. Faulkner L. Beyond the five-user assumption: benefits of increased sample sizes in usability testing. Behav Res Methods Instrum Comput. 2003;35(3):379–83.

  17. Bryman A. Social research methods. Oxford: Oxford University Press; 2016.

  18. Gomm R. Social research methodology: a critical introduction. London: Macmillan International Higher Education; 2008.

  19. Denscombe M. The good research guide: for small-scale social research projects. Maidenhead: McGraw-Hill Education; 2014.

  20. Spencer D. Card sorting: designing usable categories. New York: Rosenfeld Media; 2009.

  21. Spencer D, Warfel T. Card sorting: a definitive guide. Boxes Arrows. 2004;2:1–23.

  22. Faiks A, Hyland N. Gaining user insight: a case study illustrating the card sort technique. Coll Res Libr. 2000;61(4):349–57.

  23. Henderson RD, Smith MC, Podd J, Varela-Alvarez H. A comparison of the four prominent user-based methods for evaluating the usability of computer software. Ergonomics. 1995;38(10):2030–44.

  24. Murray G, Costanzo T. Usability and the Web: an overview. Network Notes 61. Information Technology Services, National Library of Canada; 1999.

  25. First-Click Testing Software | Optimal Workshop. https://www.optimalworkshop.com/chalkmark.


Acknowledgements

Funding

No funding or sponsorship was received for this study or publication of this article. The Rapid Service Fees were funded by the authors.

Authorship

All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.

Disclosures

Abison Logeswaran is a Topol Digital Health Fellow who is funded by Health Education England. Yu Jeat Chong and Matthew R Edmunds have no conflicts of interest to declare.

Compliance with Ethics Guidelines

This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.

Author information

Corresponding author

Correspondence to Yu Jeat Chong.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.


Cite this article

Logeswaran, A., Chong, Y.J. & Edmunds, M.R. The Electronic Health Record in Ophthalmology: Usability Evaluation Tools for Health Care Professionals. Ophthalmol Ther 10, 13–20 (2021). https://doi.org/10.1007/s40123-020-00315-0


Keywords

  • Electronic health records
  • Electronic patient records
  • Ophthalmology
  • User experience