Active Involvement of End-Users in an EHR Procurement Process: a Usability Walkthrough Feasibility Case Study

Introduction. Despite federal incentives promoting user-centered design processes during electronic health record (EHR) development, poor EHR usability remains a workflow and safety problem. Usability depends on alignment between EHR characteristics, user attributes, and context of use. Therefore, end-users should participate in EHR procurement to select a product that best fits business needs. Instead, end-users are often relegated to the organizational fringes during technology demonstrations and selection. We hypothesized that a usability walkthrough could enable end-users to evaluate EHR usability, even in resource-constrained settings. This article assesses the feasibility of this method using a real-world case study of an EHR procurement. Method. As part of the hospital's transition to a new EHR, we organized a usability walkthrough for clinical staff. Nine representative end-users identified and categorized a range of usability issues. Our team's experts gathered user feedback and perceptions about the walkthrough using a mixed-methods approach. Finally, we tracked deviations from the planned protocol to identify implementation challenges and root causes. Results. Participants detected 258 usability problems, and usability experts identified 7 additional problems. Both groups generally agreed on usability criteria and severity scores (Krippendorff's α = 0.66 and 0.75, respectively). Thirty-two problems required clinical domain expertise to be identified. Participants indicated they liked the method and would use it for future technology purchases. Vendor preparation oversights caused protocol and timeline deviations; the EHRs were not always properly populated or configured for the walkthroughs. Discussion. The usability walkthrough is a feasible method to involve end-users in EHR evaluation during a procurement process. Engaging clinical domain experts is crucial to identify usability issues that might otherwise be missed.


INTRODUCTION

Background
Government usability standards, including those from the US Office of the National Coordinator for Health Information Technology (ONC), emphasize the importance of user-centered design to maximize electronic health record (EHR) usability.1 Published studies2,3 and best-practice recommendations4,5 show that improving user experience can promote technology adoption, increase system efficiency,6,7 improve patient safety,8-10 and reduce clinician burnout.11 Nevertheless, clinicians cite EHR design as one of the most common causes of clinical errors, long work hours, and dissatisfaction.12,13 Furthermore, not all vendors, including those headquartered in the US, integrate user-centered design principles into the product lifecycle.2,14 It is incumbent upon healthcare organizational leaders and clinical champions to be actively engaged throughout implementation, from technology procurement through system configuration to eventual deployment and training.
The specialty and practice settings are critically important to consider. For example, an emergency physician will not use the EHR like a geriatrician; each will have specific needs within their workflow.24,25 While executives often only invite end-users to EHR demonstrations,24 there are practical ways for users to evaluate products during demonstrations. Project managers can administer the Usability Questionnaire for Demonstrations in Procurement (DPUQ)26 to gather end-user perceptions. Usability experts can then supplement this with a heuristic evaluation during the demonstration (HED).27-30 These methods notwithstanding, data gathered during vendor demonstrations may not be of the best quality, since users rarely interact with the EHR and sales representatives can hide product weaknesses.31 Prospective customers can gather more predictive data by conducting usability tests with clinical information processing scenarios (CLIPS). CLIPS are scripts representing clinical situations with tasks for users to complete while experts observe and collect data.23,31,32 Stakeholders, however, may be apprehensive about the time and cost of simulation testing of multiple EHRs with CLIPS.33,34 Heuristic evaluations are typically more cost-effective but do not involve end-users and can miss important issues.22,23,28 We see a need to combine these methods into a single protocol.

Problem Statement and Research Questions
Schumacher and colleagues described a pragmatic approach to involve end-users in the procurement process: the usability walkthrough.34 During a usability walkthrough, end-users complete CLIPS with the EHR and classify usability problems using a set of heuristics. After the walkthrough, participants rate effectiveness, efficiency, and satisfaction.
A usability walkthrough is like simulation testing in that it involves end-users. In user simulation testing, experts identify problems while watching users test the technology, whereas in a usability walkthrough, users identify the problems.28,34,35 The usability walkthrough is also like a cognitive walkthrough: both methods require users to think and talk through a clinical scenario. However, the cognitive walkthrough evaluates product learnability by asking standardized questions about interface intuitiveness and the ability to guide users through tasks.35,36 The usability walkthrough, by contrast, measures multiple usability dimensions by replacing standardized questions with heuristics. To conduct a usability walkthrough correctly, usability professionals train the end-users to apply the same heuristics experts use.
To summarize, usability walkthroughs permit users to compare EHRs without vendor interference and allow clinicians to champion their needs and preferences.34 Yet, no studies have formally investigated the feasibility of conducting a usability walkthrough during an EHR procurement. We sought to close this knowledge gap by asking four questions: (1) Are end-users able to detect, describe, and prioritize usability problems? (2) Does the usability walkthrough method help identify problems only detectable by clinical experts? (3) How satisfied are end-users with the usability walkthrough process? (4) What are the challenges of implementing a usability walkthrough during EHR procurement? To answer these questions, we conducted an implementation study of a usability walkthrough as a hospital transitioned to a new commercial EHR. In this article, we report on the method's feasibility. The results of the EHR evaluation are published elsewhere.37

Study Context
Leadership at a private, non-profit, 1000-bed teaching hospital in Lille, France, issued a request for proposals (RFP) and organized a procurement process to select a replacement for their current commercial EHR. The process included three steps: (1) a vendor demonstration; (2) a usability walkthrough with each candidate EHR; and (3) technical and economic comparisons of EHRs selected during the walkthrough. The comparisons focused on "back-end" functionality (e.g., data interoperability). In this manuscript, we describe the second step of the process (i.e., the usability walkthrough).
To conserve resources and adhere to a timeline, it was necessary to quickly thin the pool of EHR candidates before the later technical evaluation. We did not set out to exhaustively safety-test products or generate summative statistics during the second phase. Therefore, we limited the number of users recruited.

Usability Workshop Implementation
The project manager (AP) and four usability experts (RM, SG, JS, SP) designed a usability instructional session and a usability workshop that included structured exercises and evaluation instruments.We planned to conduct two instructional sessions and five workshops over three weeks from September to October 2020 (Fig. 1).

Usability Instructional Sessions.
We hosted two face-to-face instructional sessions: the first two weeks before the workshops and a refresher on the day of the first workshop. In both sessions, we explained the dimensions of usability, demonstrated the walkthrough method, reviewed usability assessment criteria (adapted from Scapin and Bastien),29 and introduced a usability issue severity rating scale. We answered questions and furnished the participants with a written summary of all content.
Workshop Design, Preparation, and Participants. We received proposals from five vendors and scheduled five 3-hour workshops over one week: one per candidate (Fig. 1). During each workshop, the vendor presented their EHR to stakeholders. We then excused vendors from the proceedings; they were not permitted to interact with end-users during any other evaluation stage, including the usability walkthrough and end-user debriefing session (outlined below).
A multidisciplinary team of clinical representatives, the procurement manager, and a usability expert (JS) identified common inpatient EHR use scenarios and the end-users concerned. We designed seven CLIPS simulating 59 EHR tasks (Appendix A), targeting frequently used or critical functionalities (Table 1). We also created a clinical dataset for the EHR. We sent the CLIPS and dataset to the EHR vendors two weeks before the usability sessions. We recruited nine end-users: three physicians (an emergency physician from the emergency unit, a cardiologist from the cardiology unit, and a neurologist from the geriatrics unit), three nurses (from the emergency, cardiology, and geriatrics units), one pharmacist (from the central pharmacy), one medical clerk, and one admission officer (i.e., a non-clinical staff member trained to manage administrative and logistic duties). Participants volunteered for the workshops; they were not compensated for their participation. None had been trained to use the candidate EHRs.
Usability Walkthrough. We organized participants into four evaluation groups, each supervised by a usability expert. Each group completed CLIPS at a computer workstation. Three groups included a physician and a nurse, whereas the fourth included the pharmacist, admission officer, and clerk. The usability experts facilitated the walkthrough, tracked time, and gathered field observations. We first gave each group written CLIPS and a patient summary. We then instructed groups to use the EHR to complete tasks, describe issues encountered, and assign each issue a usability criterion (i.e., "guidance," "workload," "compatibility," "significance of codes," "adaptability," "error management," "consistency," or "explicit control"). Participants also assigned a severity level (i.e., "light," "minor," or "major") to each issue. Please refer to Appendices B and C for criterion and severity definitions.29 We audio-recorded comments. The usability experts documented direct quotes during the sessions, issues reported by end-users, and criteria and severity scores. After each task, end-users completed a 4-item questionnaire adapted from Schumacher et al., with 5-point Likert-type items evaluating EHR feature availability, completeness, ease of use, and efficiency.34 We published our questionnaire findings in a companion article.37 The walkthrough ended once participants completed all CLIPS or after two hours had elapsed. Afterward, users completed the System Usability Scale (SUS),38 a validated 10-item questionnaire with Likert-type statements and performance benchmarks; those results are also reported in the companion article.37 We then organized end-users into groups according to professional roles, and a usability expert debriefed each group using a semi-structured interview script exploring EHR strengths and weaknesses (Appendix D).
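SUS responses are conventionally converted to a 0-100 score: odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the sum is multiplied by 2.5. A minimal sketch of this standard scoring rule; the function name and example responses are illustrative, not taken from the study:

```python
def sus_score(responses):
    """Convert ten SUS Likert responses (1 = strongly disagree ...
    5 = strongly agree) into the conventional 0-100 score.

    Odd-numbered items are positively worded (score = response - 1);
    even-numbered items are negatively worded (score = 5 - response).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item_number, response in enumerate(responses, start=1):
        if item_number % 2 == 1:
            total += response - 1
        else:
            total += 5 - response
    return total * 2.5

# A uniformly neutral respondent lands at the middle of the scale.
neutral = sus_score([3] * 10)  # 50.0
```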

Data Collection and Analysis
Question 1: Can End-Users Detect, Describe, and Prioritize Usability Problems? Our usability experts first read the problems identified by the participants and excluded (1) those unrelated to specific EHR characteristics (e.g., opinions without descriptions), (2) those concerned with the technology platform (e.g., connection failures), and (3) those rooted in data upload problems. They then combined multiple descriptions of the same problem (i.e., deduplication) to reach a final list of usability problems.
Next, we created a usability expert "comparison set." We combined the list of problems identified by participants with the problems detected by usability experts. For each problem, we assigned a usability criterion and a severity level. Experts independently categorized problems using our a priori usability criteria and severity levels.29 Disagreements were resolved through consensus.28 We then compared end-users' lists and assignments to the "comparison set." We calculated concordance between end-users and experts using percent agreement and Krippendorff's α.39 We also calculated the average issue detection rate within user profiles when there were multiple representatives (i.e., nurses and physicians).
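For two raters with nominal categories and no missing values, percent agreement and Krippendorff's α can be computed as below. This is a pure-Python sketch of the standard coincidence-matrix formulation, not the authors' analysis code, and the category labels in the test data are illustrative:

```python
from collections import Counter

def percent_agreement(rater1, rater2):
    """Share of units to which both raters assigned the same category."""
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

def krippendorff_alpha_nominal(rater1, rater2):
    """Krippendorff's alpha for two raters, nominal categories, no
    missing data, via the coincidence-matrix formulation:
    alpha = 1 - D_observed / D_expected."""
    coincidences = Counter()
    for a, b in zip(rater1, rater2):
        # Each unit contributes both ordered pairs (a, b) and (b, a).
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
    totals = Counter()
    for (a, _), count in coincidences.items():
        totals[a] += count
    n = sum(totals.values())  # total pairable values = 2 * number of units
    # Observed disagreement: off-diagonal coincidences (nominal delta = 1).
    d_obs = sum(c for (a, b), c in coincidences.items() if a != b) / n
    # Expected disagreement under chance pairing of all values.
    d_exp = sum(totals[a] * totals[b]
                for a in totals for b in totals if a != b) / (n * (n - 1))
    return 1 - d_obs / d_exp
```

Perfect agreement yields α = 1, while α near 0 indicates agreement no better than chance.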

Question 2: Does the Usability Walkthrough Method Identify Problems that Require Clinical Domain Expertise to be Detected? Two usability experts screened problem descriptions to identify those requiring clinical expertise to detect. Since these represented new types of problems only clinicians recognized, our usability experts categorized each problem inductively.
Question 3: How Satisfied are End-Users that Participate in a Usability Walkthrough? After the last walkthrough, we asked participants to provide feedback on the method.
To measure user satisfaction, we developed an eight-item questionnaire with 5-point Likert scales anchored by "strongly disagree" and "strongly agree" (Table 5).
Participants also answered open-ended questions about the strengths and weaknesses of the method. We compared each rating to 3 (i.e., the scale midpoint) using the one-sample Wilcoxon signed-rank test with a significance threshold of 0.05. Two usability experts analyzed the qualitative data inductively to identify important or recurrent themes.
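With a sample this small (nine participants), the one-sample Wilcoxon signed-rank test against the scale midpoint can be computed exactly by enumerating all sign assignments of the ranks. A sketch assuming a two-sided test; the implementation and the example ratings are illustrative, not the study's code or data:

```python
from itertools import product

def wilcoxon_signed_rank_exact(ratings, hypothesized_median=3.0):
    """One-sample Wilcoxon signed-rank test with an exact two-sided
    p-value. Zero differences are dropped; tied absolute differences
    receive average ranks."""
    diffs = [x - hypothesized_median for x in ratings
             if x != hypothesized_median]
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(value):
        positions = [i + 1 for i, v in enumerate(abs_sorted) if v == value]
        return sum(positions) / len(positions)

    ranks = [avg_rank(abs(d)) for d in diffs]
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean_w = sum(ranks) / 2  # null-hypothesis mean of W+
    # Exact null distribution: each rank is positive or negative with p = 1/2.
    null = [sum(r for r, positive in zip(ranks, signs) if positive)
            for signs in product([False, True], repeat=len(ranks))]
    p = sum(abs(w - mean_w) >= abs(w_plus - mean_w) for w in null) / len(null)
    return w_plus, p
```

For nine ratings all above the midpoint, every rank is positive and the two-sided p-value is 2/512 ≈ 0.004, well under the 0.05 threshold.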

Question 4: What are the Challenges Associated with Implementing a Usability Walkthrough during an EHR Procurement Process?
We recorded all deviations from the workshop agenda and evaluation protocol.Two usability experts categorized each deviation according to the root cause.

RESULTS
In this section, we report on the feasibility and utility of our usability evaluation strategy. We reported EHR usability findings in a companion article.37 Two vendors withdrew their applications during the procurement process, leaving only three candidate EHRs for analysis.
Are End-Users Able to Detect, Describe, and Prioritize Usability Problems?
Participants reported 361 usability problems. We excluded 21 issues (5.82%) using our eligibility criteria. After deduplication, the final list consisted of 265 problems: 258 detected by end-users (97.36%) and 7 detected only by usability experts (2.64%) (Table 2). Each end-user within a professional role detected between 26.82% and 70.37% of all problems identified by that group (mean = 42.92%; SD = 14.11). On average, 59.83% of the problems were detected by one participant, 23.73% by two, and 12.75% by three.
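The per-user detection rates and overlap percentages reported above are straightforward set computations over each profile's pooled problem list. A sketch with hypothetical user names and problem IDs (the data below are illustrative, not the study's):

```python
from collections import Counter

def detection_stats(problems_by_user):
    """Per-user share of the pooled problem list, plus the share of
    problems detected by exactly k users."""
    pooled = set().union(*problems_by_user.values())
    rates = {user: len(found) / len(pooled)
             for user, found in problems_by_user.items()}
    counts_by_k = Counter()
    for problem in pooled:
        k = sum(problem in found for found in problems_by_user.values())
        counts_by_k[k] += 1
    overlap = {k: count / len(pooled) for k, count in counts_by_k.items()}
    return rates, overlap

# Hypothetical example: three nurses, five pooled problems.
rates, overlap = detection_stats({
    "nurse_A": {"P1", "P2", "P3"},
    "nurse_B": {"P2", "P4"},
    "nurse_C": {"P2", "P5"},
})
```

Here nurse_A detects 60% of the pooled problems, 80% of problems are found by a single user, and 20% by all three.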

Does the Usability Walkthrough Method Identify Problems That Require Clinical Domain Expertise to Be Detected?
Thirty-two of the 258 problems (12%) required clinical expertise to detect; a complete list appears in Appendix E. In one instance, a pharmacist using the medication review module could not determine how to make a drug substitution, and in another, a physician did not receive an error message after entering the same treatment order twice. We classified these problems into five novel categories. There were instances when patient identifiers were not visible while performing tasks, increasing the risk of wrong-patient selection errors. End-users encountered screens missing critical information, including orders, decision support, care plan actions, and medication changes. There were readability issues associated with typography and inappropriate "hard stops" (e.g., health insurance information queries blocking access to other functions). We found a mismatch between user EHR permissions and real-world scope of practice (e.g., nurses were granted access to prescribe when not permitted in clinical practice). Finally, the EHRs tended to increase workload and cognitive load. For example, some features increased the number of actions per task, failed to provide the user with feedback, or made selections confusing (e.g., a mismatch between medication type and dosing units).

How Satisfied are End-Users with the Usability Walkthrough? Overall, end-users valued the usability walkthrough. The average score for each questionnaire item was at least four on a five-point scale (Table 5).
Participants, however, struggled to assign usability criteria (n = 4 of 9), and all reported some challenges learning the scoring and categorization system. One respondent said, "understanding the criteria takes time," and another would have appreciated "even more training upstream of the evaluations." Nevertheless, most respondents (n = 8 of 9) said the method was "easy to learn." Most participants liked the ability to quantify subjective impressions of the technology and indicated they would be willing to use this method again (n = 8 of 9). They all agreed that the method "clearly distinguished [EHRs'] strengths and weaknesses" and that the data permitted a "detailed comparison of EHRs."

What Challenges Exist when Implementing a Usability Walkthrough during an EHR Procurement Process? Two vendors withdrew their applications. One had concerns about their product's usability; the other did not disclose a reason. The remaining vendors were not prepared for the evaluation. We could not complete all CLIPS due to technical issues (e.g., features not working) and database "locks" that prevented multiple users from opening the same test record simultaneously (Table 6). In some cases, the EHRs were missing patient data, whereas in others, the data that participants were expected to enter were already in the chart.

DISCUSSION

Principal Findings
Initial selection of, or transition to, a new EHR can have seismic consequences for health system outcomes, patient safety, and clinician well-being.40 Choosing one that does not meet organizational needs can destabilize the work system and undermine patient safety. To our knowledge, this is the first study to evaluate the feasibility of a usability walkthrough involving end-users in EHR procurement. After a short training session, clinicians could identify and risk-stratify real EHR usability problems. The procurement team used our results to guide selection decisions and screen candidate EHRs for the third phase of the evaluation.
Our findings match studies of usability testing showing that testing products with as few as 3-5 users can identify up to 85% of interface problems affecting similar users.41 In our study, end-users within the same profile did not detect all the same problems. It is, therefore, crucial to recruit at least 3-5 evaluators per profile.42 While the criteria and scores assigned by end-users and usability experts were consistent, end-users sometimes struggled to disambiguate the "guidance," "workload," and "compatibility" criteria.43 Clinicians also tended to assign higher severity scores than usability experts did. We hypothesize that the consensus method our experts used when assigning severity levels had a moderating effect on scores. Overall, our findings suggest that the usability walkthrough method, with its inclusion of end-users, is valid for identifying usability issues, is generalizable across settings, and may be extensible to other technologies.
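The "up to 85% with 3-5 users" figure follows the Nielsen-Landauer model, in which the expected share of problems found by n independent evaluators is 1 - (1 - λ)^n. A minimal sketch; λ ≈ 0.31 is the average per-user detection probability Nielsen reported across projects, not a value measured in this study:

```python
def problems_found(n_users, detect_prob=0.31):
    """Expected share of usability problems uncovered by n evaluators,
    assuming each independently detects any given problem with
    probability detect_prob (Nielsen-Landauer: 1 - (1 - lambda)**n)."""
    return 1 - (1 - detect_prob) ** n_users
```

With the default λ, five users are expected to uncover roughly 84% of problems, which is why small per-profile panels can still be informative.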
End-users identified 32 usability problems that required clinical expertise to detect. This is consistent with studies showing that end-users with domain expertise identify problems that usability specialists may overlook.44 Some of the problems identified by clinicians could have had severe patient safety consequences. Patient identifier issues are known to increase the risk of wrong-patient prescribing,45 missing patient data can affect medical decision-making, and inappropriate "hard stops" can delay patient care.8,46 Therefore, our findings reinforce the axiom that end-users must be engaged throughout the HIT lifecycle, including the procurement process.

Feedback on Methods and Implications for Practice
Participants said they would use the walkthrough and scoring system for similar projects despite the learning curve. The information helped them objectively evaluate features, compare products systematically, argue preferences, and advocate for their constituency. Data collected without vendor interference may generate more reliable information, and surface more safety concerns, than traditional product demonstrations, enabling business leaders to base decisions on technology elements or objective measures of human-computer interaction.31

Lessons Learned and Limitations
While usability walkthrough methods do not require extensive training, there are opportunities to improve orientation and testing activities.47 Sending the CLIPS and sample patient data to the vendors could have given them an unfair advantage and influenced how they configured their EHRs. In practice, however, the vendors were unprepared for testing. We could not complete several test scripts because vendors did not include the minimum data required for each simulation. This may have biased our selection decisions. Before implementing the usability walkthrough, usability experts should complete a "cross-check" of the EHR setup. We hypothesize that France's lack of incentives for usability work and evaluation could explain vendors' lack of awareness surrounding prerequisites for usability evaluation (e.g., missing mock patient data in a test patient record). Usability requirements included in the ONC certification program and other federal research funding programs may have incentivized vendors to collaborate more effectively with US healthcare organizations and create better test environments.48

Readers should interpret the findings from our case study with caution. Our results are based on the behaviors and observations of a small sample of users recruited at a French hospital during an EHR procurement. The findings could differ across users, settings, products, or time points. Nevertheless, this case study shows how to gather usability data and stakeholder sentiments at the pace of healthcare operations.
Usability research adds up-front costs for the customer.49 The main costs include time spent creating test scenarios, recruiting participants, conducting the usability evaluation, and retaining usability experts to proctor sessions, facilitate debriefs, and analyze the results.50 Executives must be convinced that the method generates a return on investment if they are to budget evaluation resources during procurement. The cost of testing must be weighed against the downstream costs of a poorly designed EHR related to training, workflow, efficiency losses, clinician burnout, staff disenfranchisement, patient endangerment, and legal actions.19

CONCLUSIONS
The usability walkthrough is a feasible method to engage end-users in usability evaluation, compare EHRs during a procurement process, and galvanize buy-in for change among stakeholders. Nevertheless, efforts are needed to raise usability awareness and incentivize vendors to optimize product performance. Without strong federal policies or technical regulations, the procurement process represents an important lever for change.

Table 1
List of End-User Profiles and Summary of End-User Tasks During Clinical Information Processing Scenarios (CLIPS)

Pharmacist (n = 1): Validation of a prescription; entry of medication dispensation.

Admissions officer (n = 1): Entry of a new patient's administrative data, administrative discharge, and consultation and day-hospital pre-admissions.

Secretary (n = 1): Dictation transcription; search and entry of appointment bookings for consultation and hospitalization.

Nurses (n = 3): Patient's arrival in the emergency room: entry of reason for admission, vital parameters, patient prioritization, medication administration, patient orientation, and transfer to another service. Patient hospitalization: entry of bed installation, entry synthesis, initial clinical synthesis, autonomy assessment, Braden score, blood test, care planning, therapeutic administration, care delivery, traceability of a bandage, urinary catheter insertion, and department exit; visualization of the care plan; correction of an error; update of a targeted transmission.

Physicians (n = 3): Patient's arrival in the emergency room: entry of medical observation, prescription, medical decision, and coding. Patient hospitalization: entry of medical history and disease, usual treatment, allergies, initial prescription, progress note, and discharge order and coding; visualization of laboratory results, the care plan, prescriptions, medical observations, and vital parameters; change of prescription; request for infectious disease advice; dictation of discharge letter.

Table 2 Frequency of Problems Detected per User Profile, and Number of End-Users Identifying the Problem ("0" Indicates Only the Usability Expert Identified a Problem)
Acronyms: UE, usability expert; EHR, electronic health record