Background

The field of implementation science (IS) has made great progress in identifying critical approaches to translate evidence-based programs (EBP) into practice [1, 2]. Despite this progress in guiding the implementation of an EBP into a given health setting, persistent dissemination challenges include: (1) inconsistent “scaling up” to varied settings within a health system, (2) rare “scaling out” across different health systems, and (3) difficulty sustaining these changes. When system-level decision makers lack information on the cost of implementing and sustaining EBPs, dissemination and sustainment are deterred [3,4,5]. Some IS frameworks, including the Veterans Affairs Quality Enhancement Research Initiative (VA QUERI) roadmap [6], seek to guide scaling up EBPs by considering different types of implementation costs within the following project phases: (1) pre-implementation, (2) implementation, and (3) sustainment [6]. During the pre-implementation and implementation phases, key cost considerations are as follows: (1) “capacity” for delivering the EBP, including the cost of staff time for both EBP delivery and the implementation strategy of staff training, and (2) comparing the costs of alternate implementation strategies. In the sustainment phase, the focus shifts to estimating the staff time needed to continue delivering the EBP and implementation strategies, as well as other ongoing system costs such as program materials [5].

Recent reviews of cost assessment approaches for IS and improvement science have specified the need to track the staff time required both for EBP delivery and for the implementation strategies used [5, 7,8,9]. Drilling down into staff time costs for both EBP delivery and implementation strategies is important because (1) staff time is a major source of costs for EBP delivery; (2) staff time is a costly element of certain implementation strategies, such as technical assistance and training; and (3) other types of costs, such as program materials, are more straightforward to track. The method of time-driven activity-based costing (TDABC) has been heralded as a relatively pragmatic approach to estimate the staff time required for these different tasks; accordingly, the use of TDABC in IS research has accelerated recently [3,4,5].

As developed by Kaplan et al. [3], TDABC methods specify costs across the several steps of EBP implementation. A central aspect of TDABC is to create a process map and allocate the time each staff actor needs to complete each process map step, inclusive of both EBP delivery and the implementation strategies used [5]. However, a recent review of TDABC by Keel et al. [4] concluded that current approaches to staff time estimation for each step of a TDABC process map remain resource-intensive and called for the development of simpler, more rapid approaches with lower resource burden [4]. Accordingly, the field would benefit from more pragmatic staff time estimation approaches, with balanced attention to rigorous and reliable data collection methods [10].
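To make the TDABC mechanics concrete, the following minimal Python sketch computes the staff cost of an EBP by multiplying the estimated minutes for each process-map step by the cost rate of the staff role performing it; all step names, times, and hourly rates are hypothetical illustrations and are not drawn from any cited study.

```python
# Minimal TDABC sketch: total cost = sum over process-map steps of
# (staff minutes for the step) x (per-minute cost rate for that staff role).
# Step names, times, and hourly rates below are hypothetical illustrations.

HOURLY_RATES = {"health_coach": 30.0, "nurse": 45.0}

# Each tuple: (process-map step, staff role, minutes per occurrence)
PROCESS_MAP = [
    ("staff training (implementation strategy)", "health_coach", 120),
    ("coaching call (EBP delivery)", "health_coach", 25),
    ("chart review before call (EBP delivery)", "nurse", 10),
]

def step_cost(role: str, minutes: float) -> float:
    """Cost of one occurrence of a step, given the role's hourly rate."""
    return HOURLY_RATES[role] / 60.0 * minutes

total = 0.0
for step, role, minutes in PROCESS_MAP:
    cost = step_cost(role, minutes)
    total += cost
    print(f"{step}: {minutes} min by {role} = ${cost:.2f}")
print(f"Total staff cost: ${total:.2f}")
```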

Thus, there is a need to contrast conventional methods of staff time estimation with novel and emerging electronic health record (EHR)-based methods that could address some of the current challenges. The purpose of this brief methodology report is to compare distinct categories of conventional and emerging TDABC approaches to staff time estimation according to the 5 R’s model of pragmatism [11], which covers domains of interest to researchers and health system decision-makers: relevance, rapidity, rigor, resources, and replicability. In contrast to recent reviews and commentaries that considered only conventional approaches to staff time estimation by TDABC [4, 5, 7,8,9,10], this paper also considers innovative automated and semi-automated EHR-based approaches and compares these different approaches on each of the 5 R’s domains. We also provide two illustrative case study examples that delineate why different staff time estimation approaches may be selected. This environmental scan of emerging pragmatic methods for staff time estimation provides the field of IS with options beyond the current standards of observation or asynchronous reporting, and presses the field to identify additional non-intrusive, real-time approaches to assessing implementation costs.

Methods

We conducted an environmental scan, including a literature search, for articles measuring the cost of staff time to implement healthcare-related EBPs. We searched PubMed using the following search terms: (“implementation cost” or “time-driven activity-based cost” or “micro-cost”) and (“health*” or “clinic*”). The literature search was limited to English-language articles published in the past 5 years. Articles’ reference lists were hand-searched for additional articles. We also queried an online community of EHR users (Epic UserWeb) and colleagues with experience in EHR approaches to time capture: a clinical informatics nurse research scientist and two physician informaticists.

Our intent was not to conduct a systematic review, but to use this environmental scan to identify existing categories of staff time estimation approaches and to compare their relative pragmatism from the perspective of the 5 R’s model [11] (Table 1). While not exclusive to IS, the 5 R’s model was selected because it was developed to increase the pragmatism of health research and is an accepted model of pragmatic health research domains [11,12,13]. The 5 R’s framework’s emphasis on relevance, rigor, and replicability is complementary to the approach that Cidav et al. took to track TDABC according to the Proctor et al. framework [12], which specifies who/what/when/how often/for how long an individual delivers an implementation strategy, but the 5 R’s also adds an explicit emphasis on rapid, low-resource-burden approaches [5].
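As a hedged illustration (not Cidav et al.’s actual instrument), the following Python sketch shows how a who/what/when/how often/for how long specification of an implementation strategy activity might be captured as a simple data record; the field names and example values are assumptions for illustration only.

```python
# Hypothetical record specifying who/what/when/how often/for how long an
# individual delivers an implementation strategy (field names illustrative).
from dataclasses import dataclass

@dataclass
class StrategyActivity:
    who: str             # actor delivering the activity
    what: str            # implementation strategy or action performed
    when: str            # project phase (pre-implementation/implementation/sustainment)
    how_often: int       # occurrences per unit of analysis (e.g., per site)
    minutes_each: float  # duration per occurrence ("for how long")

    def total_minutes(self) -> float:
        return self.how_often * self.minutes_each

activity = StrategyActivity(
    who="health coach", what="EBP training session",
    when="pre-implementation", how_often=2, minutes_each=120.0,
)
print(activity.total_minutes())  # 240.0 minutes
```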

Table 1 Application of the 5 R’s to evaluate cost assessment approaches

Approaches to staff time estimation were evaluated from the perspective of system-level decision makers. Decision makers did not participate in the review process, but we considered their perspective on how a new EBP would affect their budget. Using a content analysis approach, two authors (KT and AH) reviewed the distinct approaches to staff time estimation in each article and placed them in categories named for common time capture terms [14]. We evaluated these categories of approaches from the 5 R’s model perspective [11] (Table 1), providing more favorable ratings if they (1) rated highly on relevance to stakeholders, rapidity (and recursiveness), rigor, and replicability, and (2) required few resources.

Results

From our environmental scan, we identified several categories of approaches to estimate the staff time spent implementing EBPs as a part of TDABC [4, 5, 7, 15]. These categories of staff time estimation approaches are applicable to EBP program delivery by managers, supervisors, and staff; to implementation strategies (e.g., training to deliver the EBP and other time spent preparing for the program); to time spent evaluating the program; and to indirect time costs of the program borne by patients and caregivers [5, 7, 15].

With the caveat that the approaches used to capture staff time were not always clearly described in our literature search, and that a given study sometimes used more than one category of staff time estimation approach in concert [4], the most common conventional approaches reported were self-report using a time-reporting template or “time diary” [16,17,18] and uniform self-report estimates of time spent on certain activities [5, 7, 19,20,21]. Some studies also reported a category of direct observation [22,23,24]. Using our pre-specified search terms, we found one study reporting use of an automated EHR-related approach [23]. Our broader environmental scan also identified other articles using semi-automated or automated EHR-based approaches for staff time estimation, including recommendations for their use and reporting [25, 26]. These categories of TDABC approaches are summarized in Table 2 from a 5 R’s model perspective.

Table 2 Comparison of current categories of TDABC approaches to staff time estimation

Self-report/observation categories

We identified numerous articles using conventional self-report or observation approaches to estimate staff time [5, 7,8,9, 28]. As described above, these began with a process map to identify each step of EBP delivery and then estimated the staff time required at each step using one of the following approaches: (a) a uniform estimate of the time needed for a commonly occurring task, (b) retrospective self-report in a “time diary”, or (c) periodic direct observation. However, these approaches are somewhat resource-intensive, especially observation. Further, with these approaches, it may not be feasible to capture costs during the sustainment phase, when there are no grant funds to support observations and/or compilation of self-report data.

Automated/semi-automated EHR-based approaches

For programs in settings that have EHRs, approaches have recently emerged to partly or fully automate the data collection. Semi-automated approaches may include hard stops built into a specific EHR note type that “nudge” a user to input their time—this essentially embeds a contemporaneous time diary into the note. Incorporating a contemporaneous time diary into the clinic note allows staff to review their charting to guide the time estimate they report and may lessen recall bias because the time diary is completed in “real time.” In contrast, fully automated approaches require no action by the staff. Seven categories of fully automated EHR-based approaches to staff time estimation are possible; they are not mutually exclusive and include time spent: (1) documenting care provided, including time within specific encounter types (such as the anticoagulation visit encounters in the second case example below); (2) placing or refilling prescriptions; (3) managing the EHR inbox, including patient messages; (4) managing orders as part of the team; (5) providing direct patient care; (6) working during scheduled work hours; and (7) working outside of scheduled work hours [25].
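These fully automated categories can, in principle, be derived by aggregating timestamped EHR activity records. The following Python sketch assumes a hypothetical export of (user, category, start, end) records and simply sums durations per staff member and category; actual field names, activity categories, and export formats depend on the EHR vendor.

```python
# Hypothetical aggregation of timestamped EHR activity records into
# per-user, per-category staff time totals (export format is illustrative).
from collections import defaultdict
from datetime import datetime

events = [
    # (user, category, start, end)
    ("rn_01", "documentation", "2024-03-01T09:00", "2024-03-01T09:18"),
    ("rn_01", "inbox", "2024-03-01T09:18", "2024-03-01T09:30"),
    ("rx_02", "orders", "2024-03-01T10:00", "2024-03-01T10:07"),
]

totals = defaultdict(float)  # (user, category) -> minutes
for user, category, start, end in events:
    duration = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    totals[(user, category)] += duration.total_seconds() / 60.0

for (user, category), minutes in sorted(totals.items()):
    print(f"{user}: {minutes:.1f} min on {category}")
```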

With automated approaches, data are collected in real time (e.g., by the EHR), which avoids recall bias and reduces resource burden. However, depending on the EHR vendor or other software used to track time, there are limitations in what activities can be tracked, the accuracy of the tracking estimates, and the timeliness of retrieving the data. For example, when clinicians are multitasking and leave EHR windows open, time estimates may be inflated. Regarding relevance, if the tracked encounter type is not specific to the EBP and also captures other tasks, it may not be fully relevant, and the rigor of measurement is decreased. Regarding resources, some automated EHR-based approaches require assistance from the EHR vendor and/or local analysts/informaticists at the outset to determine what data to collect and how to access them. In addition, although collected in real time, the data may not be accessible in real time—data access may also require help from the EHR vendor or a local analyst/informaticist. After the initial set-up, there are some benefits to EHR approaches, notably that when programs reach a sustainment phase [3], an automated method that was set up in the EHR can continue to provide reports of the staff time needed for a certain type of clinical encounter.
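One pragmatic way to mitigate the open-window problem is to reconstruct active time from discrete event timestamps and cap any gap between consecutive events at a maximum plausible value. The Python sketch below uses a hypothetical 5-minute cap; this threshold is an assumption for illustration and would need local validation (e.g., against direct observation).

```python
# Sketch: estimate "active" EHR time from event timestamps by capping long
# gaps that likely reflect an idle, open window. The 5-minute cutoff is a
# hypothetical assumption and should be validated locally.
from datetime import datetime

MAX_GAP_MINUTES = 5.0  # hypothetical idle cutoff

def active_minutes(timestamps):
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    total = 0.0
    for prev, curr in zip(times, times[1:]):
        gap = (curr - prev).total_seconds() / 60.0
        total += min(gap, MAX_GAP_MINUTES)  # cap idle stretches
    return total

# A 45-minute gap (window left open) contributes only 5 active minutes here.
print(active_minutes(["2024-03-01T09:00", "2024-03-01T09:03", "2024-03-01T09:48"]))
```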

Case examples

To further illustrate the tradeoffs of these different approaches, we provide an overview of the approaches employed in two real-world research case examples [26]. In the first case, the study used both self-report and semi-automated approaches for capturing time spent, whereas in the second case, the authors used both automated approaches and direct observation to develop a complete workflow process map and to validate the automated timing calculations. In Table 3, we provide the rationale for the approach selected in each case example, and at least one alternative approach that could have been used.

Table 3 Rationale for use of specific TDABC approaches in the case example pilot trials

The first case example is from a pilot type 2 hybrid implementation/effectiveness trial studying the delivery of an evidence-based physical activity coaching intervention in a primary care clinic [26]. Staff time costs included the following: (1) the implementation strategy of training existing staff to serve as coaches; (2) time spent delivering the six intervention telephone calls to each patient; and (3) time for the implementation strategy of coaches providing technical assistance to patients to share their physical activity data (Fitbit). Approaches to capture time varied across the different elements of the program (Table 3). For time spent training, a conventional self-report time diary was used per the staff employer’s preference, in order to allocate the time spent on this one-time session to the research grant. To optimally capture the time spent in each counseling session, a semi-automated EHR-based approach was used to avoid recall bias: a brief, required contemporaneous time diary was embedded within the behavioral coaching note template in the EHR (Epic Systems). This embedded time collection template can easily be replicated in Epic Systems and other commonly used EHRs by creating a “required field” for time that must be documented before closing the note. In contrast to an alert that fires and interrupts workflow, this approach only nudges staff if the template was left incomplete when signing the encounter. During the pre-implementation phase, the coaches noted that this approach fit their workflow and was minimally burdensome.

The second example is a program evaluation of the staff costs of delivering care at an anticoagulation clinic for different phenotypes of patients—those who needed minimal adjustment to their treatment regimen and those who needed frequent adjustments [23]. Because the authors sought to compare variable costs across patients in an existing anticoagulation clinic where baseline training had already occurred, they did not assess staff training costs. Instead, they used direct observation to detail a process map of each step in the workflow for a patient to engage with the anticoagulation clinic staff. This included multiple steps for in-person visits, from the time of check-in until the time of check-out, as well as the time spent by nurses and pharmacists between in-person clinic visits. Using a proprietary internal database, the authors captured automated data on the time spent by each member of the clinical team in each step of the process map workflow. They also used a subset of direct observation assessments to validate these automated measurements of staff time. Using TDABC, they calculated the costs of the staff time in each step of the workflow and then differentiated the costs for patients who were well controlled versus not well controlled. Although this internal database was proprietary to their system, other EHRs, including the Epic Systems EHR [25], also have the capacity to track staff time.
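As a hedged illustration of the analytic step in this second case (not the authors’ actual code or data), the following Python sketch applies the same TDABC logic to compare per-visit staff costs between two hypothetical patient phenotypes; the roles, times, and rates are invented for the sketch.

```python
# Illustrative comparison of TDABC staff costs per visit between two
# hypothetical patient phenotypes; roles, times, and rates are invented.
HOURLY_RATES = {"nurse": 45.0, "pharmacist": 60.0}

visits = {
    "well_controlled":     [("nurse", 8), ("pharmacist", 4)],
    "not_well_controlled": [("nurse", 15), ("pharmacist", 12)],
}

def cost(role, minutes):
    return HOURLY_RATES[role] / 60.0 * minutes

for phenotype, steps in visits.items():
    total = sum(cost(role, minutes) for role, minutes in steps)
    print(f"{phenotype}: staff cost per visit = ${total:.2f}")
```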

Discussion

This brief methodologic commentary compares several approaches to capturing the portion of implementation costs related to staff time—an important element of implementation according to the VA QUERI Roadmap [6] and other IS process models. In particular, approaches to capturing staff time are critical for transparently reporting to system decision-makers the time required to implement and sustain a program. Overall, the comparison of these approaches in Table 2 may be considered a balance between data quality (i.e., rigor/reliability) and efficiency, in terms of a rapid return of relevant findings with low resource requirements. In terms of rigor/reliability, the observational approaches are the most accurate, followed by the automated and semi-automated EHR-based approaches, and then the retrospective time diary approaches, which are particularly prone to recall bias. In terms of efficiency, the semi-automated/automated EHR approaches stand out for their rapidity and the limited resources needed after their initial set-up, followed by self-report; observational approaches are the slowest and most resource-intensive.

It is interesting to further consider the relative merits of these approaches from the perspective of the VA QUERI Roadmap, which holds that estimates of staff time are most critical to assess in the sustainment phase. Conventional self-report time diary and observational approaches are typically too burdensome for use in the sustainment phase; however, the conventional self-report uniform estimate approaches could be pragmatic in this phase, as could the semi-automated or automated EHR-based approaches. In contrast, during the pre-implementation planning phase of an EBP, estimation may be the only feasible approach if decision makers need data on the time required for alternate implementation strategies before these tasks have been pilot-tested. In sum, advances are needed in highly rapid, rigorous, and low-resource time capture approaches, and the semi-automated and automated approaches described here provide innovative steps toward that goal.

Strengths of this report include its summary of key emerging EHR-based semi-automated and automated approaches to capturing time and the concrete case study examples (Table 3). Further, the 5 R’s model provided a systematic basis on which to evaluate the pragmatism of the different approaches. In addition, reporting staff time as a cost is consistent with the recommendation from the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) [29] to “describe the methods for valuing each resource in terms of its unit cost.” However, depending on the EHR approach used, an automated approach may be challenged to separately report the distinct resource costs of the intervention and the implementation strategy, as others have recommended [5]. Although beyond the scope of this brief review, those applying these different approaches to staff time estimation should keep in mind the CHEERS recommendations to specify which staff are included (e.g., clinical staff, contracted coaches) and from what perspective (e.g., clinical health system staff, research staff) [29].

Limitations include that our environmental scan, focused on conventional self-report/observation approaches and EHR-based semi-automated and automated approaches to staff time estimation, did not include all potential approaches relevant for IS, such as automated assessments by radiofrequency identification (RFID) tags or readers. In addition, our comparisons according to the 5 R’s model are necessarily subjective. A future systematic review would expand and add rigor to this environmental scan. Automated EHR-based approaches have been used internally by health systems more often than in IS research; thus, there are some key limitations in terms of sparse prior reporting of details and validation of these approaches [25]. However, some of the described EHR approaches have been validated against direct observation, with estimates falling within 3 min of each other more than 80% of the time [27]. When used for research, it is reasonable to initially vet the accuracy of automated EHR-based approaches against observation [25], as was done in case example 2—this is particularly important for complex processes that are prone to interruptions.
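One simple way to operationalize such a validation check (illustrative only, not the cited studies’ analysis) is to compute the share of paired automated and observed estimates whose absolute difference falls within a chosen tolerance, such as 3 minutes, as in this Python sketch with invented data.

```python
# Sketch: share of paired (automated, observed) time estimates, in minutes,
# that agree within a tolerance; the pairs below are invented for illustration.
pairs = [(12.0, 10.5), (25.0, 24.0), (8.0, 14.0), (30.0, 28.5)]
TOLERANCE_MIN = 3.0

within = sum(1 for auto, obs in pairs if abs(auto - obs) <= TOLERANCE_MIN)
print(f"{within / len(pairs):.0%} of paired estimates within {TOLERANCE_MIN} min")
```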

Conclusions

We summarized the strengths and limitations of different conventional and EHR-based semi-automated and automated approaches to measuring staff time as a cost in IS studies, with an emphasis on the 5 R’s model as an index of factors that are important to stakeholders. This is critical to allow decision-makers to consider the feasibility of implementing and sustaining programs based on estimates of the staff time required. Going forward, the field should continue to identify additional methods of estimating staff time (and other implementation costs) that are rigorous and replicable, and also relevant, rapid, and low-resource enough to be measured in an EBP sustainment phase.