Administration and Policy in Mental Health and Mental Health Services Research

Volume 35, Issue 1, pp 114–123

Driving with Roadmaps and Dashboards: Using Information Resources to Structure the Decision Models in Service Organizations

Authors

  • Bruce F. Chorpita
    • Department of Psychology, University of Hawai’i at Mānoa
  • Adam Bernstein
    • Department of Psychology, University of Hawai’i at Mānoa
  • Eric L. Daleiden
    • Kismetrics, LLC
  • The Research Network on Youth Mental Health
Original Paper

DOI: 10.1007/s10488-007-0151-x

Cite this article as:
Chorpita, B.F., Bernstein, A., Daleiden, E.L. et al. Adm Policy Ment Health (2008) 35: 114. doi:10.1007/s10488-007-0151-x

Abstract

This paper illustrates the application of design principles for tools that structure clinical decision-making. If the effort to implement evidence-based practices in community services organizations is to be effective, attention must be paid to the decision-making context in which such treatments are delivered. Clinical research trials commonly occur in an environment characterized by structured decision making and expert supports. Technology has great potential to serve mental health organizations by supporting these potentially important contextual features of the research environment, organizing and reporting clinical data as interpretable information that supports decisions and anchors decision-making procedures. This article describes one example of a behavioral health reporting system designed to facilitate clinical and administrative use of evidence-based practices. The design processes underlying this system—mapping of decision points and distillation of performance information at the individual, caseload, and organizational levels—can be implemented to support clinical practice in a wide variety of settings.

Keywords

Organizational change, Evidence-based, Technology, Clinical reasoning, Feedback

Introduction

In the effort to improve the quality of behavioral healthcare for youth, the implementation of evidence-based practices has emerged among the foremost challenges, frequently occupying center stage in training, research and policy discussions (e.g., Chambers et al. 2005; National Advisory Mental Health Council Workgroup 2001; Norquist et al. 1999; U.S. Department of Health and Human Services 1999). Efforts over the last 15 years have focused primarily on the identification, development, and testing of the field’s core innovations: evidence-based treatment protocols (Lonigan et al. 1998; Weisz et al. 2004). Meanwhile, relatively less attention has been paid to such issues as the socio-political, contextual, and organizational aspects of the implementation of those innovations (e.g., Hemmelgarn et al. 2006; Simpson 2002). Although interest in implementation is expanding rapidly (e.g., Fixsen et al. 2005; Greenhalgh et al. 2004), it seems we still know a lot more about what treatments work than about how to support their use in service organizations.

The Importance of Clinical and Quality Management Decisions

Evidence-based practices are typically tested and developed in the context of relatively small, well-funded, and well-developed clinical management infrastructures with strong centralized control. This laboratory context often includes well-trained, highly motivated therapists, rigorous supervision practices, and expert supervisors and project managers. Some of these features are typical of those that characterize successful organizations in general, such as a culture that supports knowledge sharing, strong leadership with a clear strategic vision, visionary staff in key positions, a climate conducive to experimentation and risk-taking, available free resources, provision of staff training and coaching, and maintenance of effective monitoring and feedback systems (Fixsen et al. 2005; Greenhalgh et al. 2004). Thus, investigators commonly test the influence of particular therapeutic practices on clinical outcomes in a highly optimized context. Such conditions may be difficult to replicate in service organizations, and it seems unlikely that all evidence-based practices are robust to such changes in context.

This paper focuses on one central part of that context: the decision-making models that govern the use of the chosen treatment protocols. The current aim is to demonstrate design processes for structuring and informing clinical and quality management decisions, such as (a) which children should get treatment, (b) how should the protocol be managed to optimize results, and (c) are children getting the services on time and as expected? Arguably, such decision models are diverse across clinical research trials, both in the degree to which they are informed by data and in their underlying theory or logic. Nevertheless, given the choice to implement a particular evidence-based protocol, one must consider how to support the associated decision model in which that protocol is embedded.

Information technology has been a key resource in service organizations for many years (e.g., Percevic et al. 2004), and it has been argued more broadly that organizational resources can serve as potent facilitators of implementation and change (e.g., Lehman et al. 2002; Simpson 2002). Traditionally, information technology has largely served decision making related to utilization review and cost management (O’Donohue et al. 2002), and has shown some measure of success in terms of tracking service volume, cost, and levels of care.

Increasingly, however, there is a trend toward using such resources to improve service quality (Lambert 2001; O’Donohue et al. 2002; Sturm 1999). Some sophisticated models have emerged with respect to automated feedback for clinical outcomes (see Kluger and DeNisi 1996, for a discussion of feedback intervention theory more broadly), based on the idea that real-time knowledge of clinical progress can positively impact service delivery, particularly when benchmarked in an interpretive context of “success” or “failure” (e.g., Lambert et al. 2003; Percevic 2001). We propose that the effects of such feedback may be enhanced when it is instrumental (Sapyta et al. 2005), in other words, when it provides guidance for what to do in the face of negative results. Such guidance should presumably be based on the overarching clinical reasoning model that supports a particular treatment protocol (e.g., “if cognitive therapy isn’t working, then try relaxation,” or “if parents are not attending sessions, then switch to self-reward training for child”). Notably, for instrumental feedback to be possible, reliable data must be present at the individual level, and treatment protocols must be flexible in allowing for adjustment.

In a more general sense, clinical reports (e.g., automated computer displays) must be based on a functional mapping of the full range of decisions that are characteristic of clinical experts managing clinical trials (e.g., is the client in crisis, is the client getting better this week, what has the therapist tried so far, etc.). Moreover, this mapping should also take into account key decisions of the organization. For the purposes of illustration, we provide examples of how we chose to map and optimize three procedures in the Child STEPs clinical effectiveness trial. These three procedures—intake and eligibility determination, concurrent review, and quality improvement—are examples of broad classes of decision models that are ubiquitous in service settings. Hence, the processes we illustrate should have direct relevance to service organizations seeking to implement evidence-based approaches.

Our illustration refers to the design of a Behavioral Health Reporting System (BHRS) in the context of a multi-site clinical effectiveness trial as part of a formal research collaborative: Child System and Treatment Enhancement Projects (STEPs). The Child STEPs clinical trial examines the implementation and outcomes of two different ways clinicians can use three different protocols in addition to usual care, to treat three disorder areas (i.e., anxiety, depression, or conduct problems) in nine clinical service organizations in Massachusetts and Hawaii. Participants are children between the ages of 8 and 13. The BHRS was built to facilitate and enhance the implementation of the evidence-based protocols and to structure the decision-making of the research and clinical teams in a highly complex set of environments.

The goal of the illustration below is to demonstrate the application of design principles for building clinical reports, and not to suggest adoption of the particular features of the current system itself. The BHRS described here is specific to the context of the Child STEPs clinical trial, which entailed some procedures requiring resources difficult to replicate in other environments (e.g., weekly data collection from child and parent, regular monitoring of session audio tapes by supervisors, and frequent one-on-one supervision of therapists). Moreover, the tools illustrated were designed for supervision purposes only, and were not used by clinicians. The intention is therefore to highlight the design principles which guided its creation. Those principles could be used to structure clinical practice in a wide variety of settings, with various degrees of automation, and at various levels of cost. Application of these design principles in different organizations would produce reports that are different to the degree that the organizations’ decision processes and performance indicators are different. Across such environments, by mapping decision points and distilling performance information, reports can be built to support evidence-based practice. In providing this illustration, we also make no assumptions about the organizational or contextual variables that are needed for such a system to be implemented successfully in a service organization. Some of these issues are however touched upon in other papers in this special issue.

The Map: Evidence-based Clinical Decision-making

Clinical reasoning is a core activity that transcends multiple clinical and business processes. Thus, formal specification of the decision-making or reasoning framework (e.g., what actions do we take when the client gets worse?) with detailed reference to the information needed at key decision points (e.g., what data will tell us if the client gets worse?) creates the common roadmap for the many stakeholders in a service organization. Through a combination of literature review and consensus-based processes, the Child STEPs management team selected the clinical decision-making framework illustrated in Fig. 1. This model is the metaphorical roadmap that organizes clinical personnel to achieve a particular goal.
[Image: https://static-content.springer.com/image/art%3A10.1007%2Fs10488-007-0151-x/MediaObjects/10488_2007_151_Fig1_HTML.gif]

Fig. 1 Clinical reasoning model in the Child STEPs clinical trial

In this model, clients are first evaluated for their fit to an available treatment protocol. This decision could involve, for example, determining whether a child’s age and diagnosis identify a particular evidence-based protocol that is appropriate. If so, the decision sequence continues with an evaluation of crises or critical events that would preclude delivery of the intervention (e.g., client elopement; family emergency). In the absence of crises, one next considers clinical progress, and in the face of positive evidence, continues to administer the protocol. In the face of negative evidence, one would next consider whether the clinical strategies employed are appropriate. If not, they can be reconsidered; if they are appropriate, the next consideration involves a review of engagement to see whether lack of attendance or poor therapy compliance might be responsible for the lack of clinical gains. When engagement has been ruled out as an influence on negative outcomes, one would consider a variety of other options, including additional consultation, increased supports, or a different intervention altogether.
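Since the model in Fig. 1 is an ordered sequence of checks, it can be summarized compactly in code. The sketch below is purely illustrative: the field names (fits_protocol, has_crisis, and so on) are hypothetical stand-ins for the information sources attached to each decision point, and the sketch is not part of the BHRS itself.

```python
# Illustrative sketch of the clinical reasoning sequence in Fig. 1.
# All field names are hypothetical stand-ins for the information sources
# (the stacked document icons) attached to each decision diamond.

def next_action(case: dict) -> str:
    if not case["fits_protocol"]:           # age/diagnosis do not match an available protocol
        return "re-evaluate fit or consider another protocol"
    if case["has_crisis"]:                  # e.g., client elopement, family emergency
        return "manage the crisis or critical event first"
    if case["making_progress"]:             # positive evidence on outcome measures
        return "continue administering the protocol"
    if not case["strategies_appropriate"]:  # do the clinical strategies fit the case?
        return "reconsider the clinical strategies"
    if not case["engaged"]:                 # attendance or therapy-compliance problems
        return "address engagement (attendance, compliance)"
    return "consider consultation, increased supports, or a different intervention"


example = {
    "fits_protocol": True,
    "has_crisis": False,
    "making_progress": False,
    "strategies_appropriate": True,
    "engaged": False,
}
print(next_action(example))  # -> address engagement (attendance, compliance)
```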

It is important to point out that the decision logic here is essentially rationally determined and is not considered definitive. Other algorithms may characterize a variety of evidence-based protocols equally well or better; indeed, we have outlined different sequences to these same decisions ourselves, depending on the context (cf. Chorpita 2007; Daleiden and Chorpita 2005). The essential point here is that the information system should fully support the clinical decision model chosen, and not the other way around. To return to our metaphor, there may be other maps to other destinations.

The Dashboard: Delivering Model-relevant Information

The point of a well-designed BHRS is to provide simple and usable information to inform each choice point in the decision model. Note in Fig. 1 the stacked document icons that refer to the information sources for each decision (as represented by a diamond). In the way that an automobile dashboard organizes critical information about speed, distance, and remaining fuel, clinical reports are intended to display and organize model-relevant data in an efficient and informative manner.

To create the “dashboards” for the Child STEPs project, the design team went through a formal business process modeling exercise (i.e., building flow diagrams representing the dozens of procedures involved in the study administration). Through this process, aspects of the clinical reasoning model were integrated into an overall business model that included approximately 35 business model diagrams. Each diagram specified the process model (e.g., the sequence of actions taken by performer role, such as “telephone screener contacts client”), the organizational model (e.g., the business relationships among functional units across locations, such as “Boston clinical services site”), and the information model (e.g., the data, forms, and reports needed to guide performers to take actions within their context, such as “intake report”). Many of these business processes are quite common among behavioral healthcare organizations, and three in particular can be seen as central to provision of healthcare.
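To make the three interlocking models concrete, the hypothetical record below pairs one process step with its performer role, organizational unit, and supporting information artifacts. The specific structure and field names are invented for illustration and do not reproduce the actual Child STEPs diagrams.

```python
# Hypothetical representation of one element of a business process model:
# the step and performer role (process model), the functional unit
# (organizational model), and the forms and reports that guide the step
# (information model).
intake_step = {
    "process": "screen new client",
    "performer_role": "telephone screener",
    "organizational_unit": "clinical services site",
    "information_artifacts": ["recruitment script", "client screening log"],
    "next_step": "perform initial assessment",
}
```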

Process #1: Intake and Eligibility Determination

In the Child STEPs project, this generic process was modeled with four procedures: (1) screen new client, (2) perform initial assessment, (3) review initial assessment, and (4) determine client eligibility. Parents of referred children participate in a brief phone screen to determine potential eligibility, and then parent and child participate in a more extensive assessment in person. The assessment includes administration of a semi-structured interview for youth and parents, along with various standardized parent-report and youth-report questionnaire measures. As is typical with a thorough assessment, the informational results of these procedures are sprawling, and unstructured interpretation is difficult.

The information model for this process included numerous forms (e.g., recruitment script, consent forms, forms for each assessment instrument, payment forms, an eligibility worksheet), several tracking logs (e.g., client screening log, enrollment and assessment log), and several strategic aggregate reports (e.g., child screening status report, Diagnostic Assessment Summary (DAS)). Many of the logs are used for quality assurance and improvement, discussed below, but the DAS is of particular interest in supporting the clinical reasoning during the intake and eligibility process. A selection of information from the DAS is shown in Table 1.
Table 1

A subset of data from the diagnostic assessment summary (DAS) report

| Measure | Child | Parent | Cutoff (a) | Elevation (b) |
|---|---|---|---|---|
| Revised Children’s Anxiety and Depression Scales | | | | |
|     Social | 66 | 48 | ≥65 | Child |
|     Separation | 48 | 38 | ≥65 | |
| Achenbach System of Empirically Based Assessment | | | | |
|     Total problems | 61 | 59 | ≥65 | |
|     Internalizing problems | 70 | 67 | ≥65 | Both |
|     Externalizing problems | 42 | 54 | ≥65 | |
| Parenting Stress Index | | | | |
|     Total stress | | 90 | ≥85 | Parent |
|     Parental stress | | 80 | ≥85 | |

(a) Measure-specific cutoff scores for clinically significant disturbance

(b) Reporters whose scores exceed cutoff levels

The DAS was designed to organize initial assessment information for the specific decisions that must be made by the clinical team following an assessment. Critical determinations to be made at this time include the nature of the child’s difficulties, the relative severity of each area of difficulty, and the treatment protocol with which to proceed if a need for services is indicated. The model proposed that clinical team members minimally had to verify (1) that a child’s problem was severe enough to warrant entry into the project, and (2) that the protocol chosen matched the child’s most severe presenting problem.

In the DAS report, total and scale scores from the diverse assessment measures are organized along rows of a single table. Adjacent columns for child and parent scores allow easy comparison for measures with versions for multiple reporters. A third column lists measure-specific cutoff scores for clinically significant disturbance, while an additional column alerts the reader to scale scores that meet these cutoff levels by indicating the associated reporter as parent, child, or both. Thus, the interpretation of cutoffs is relevant to the first decision regarding problem severity (i.e., the presence of scores exceeding clinical cutoff values is an indicator of sufficient severity). Next, the determination of which protocol to use from among the four utilized in the study (each specific to a problem category) was based on the rank-ordered listing of the child’s diagnoses and problems at the bottom of the DAS, as determined by the composite of the child and parent interviews.
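The two determinations described above lend themselves to a simple computational summary. The following sketch, with hypothetical score records and field names, shows how elevation flags and a protocol choice might be derived; it is our illustration of the logic, not the code behind the actual DAS.

```python
# Minimal sketch of the DAS interpretation logic described above.
# Score records, field names, and the ranking are hypothetical examples.

def elevation(child, parent, cutoff):
    """Return which reporter(s), if any, exceed the measure-specific cutoff."""
    flags = []
    if child is not None and child >= cutoff:
        flags.append("Child")
    if parent is not None and parent >= cutoff:
        flags.append("Parent")
    return "Both" if len(flags) == 2 else (flags[0] if flags else "")

scores = [
    {"measure": "RCADS Social",        "child": 66, "parent": 48, "cutoff": 65},
    {"measure": "ASEBA Internalizing", "child": 70, "parent": 67, "cutoff": 65},
]

# Determination 1: is any score severe enough to warrant entry into the project?
severe_enough = any(elevation(s["child"], s["parent"], s["cutoff"]) for s in scores)

# Determination 2: choose the protocol matching the top-ranked problem area
# (the ranking would come from the composite of child and parent interviews).
ranked_problems = ["anxiety", "depression"]   # hypothetical rank-ordered listing
protocols = {"anxiety": "anxiety protocol",
             "depression": "depression protocol",
             "conduct": "conduct protocol"}
chosen = protocols[ranked_problems[0]] if severe_enough else None
print(severe_enough, chosen)   # -> True anxiety protocol
```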

Process #2: Concurrent Review

The generic process of concurrent review (i.e., ongoing quality evaluation of active clinical cases) was modeled with two main procedures: (1) clinical supervision (between provider therapist and project supervisor), and (2) expert consultation (between project supervisor and an evidence-based services consultant). The information model for this process included two forms (i.e., supervision form, session outline from treatment protocol), a tracking log for session audiotapes, and several strategic aggregate reports (e.g., caseload summary, individual client summary). During concurrent review, a Caseload Summary displays multiple cases on a single report, and thereby supports the teams’ case selection decisions (i.e., which cases to prioritize for discussion in supervision or consultation).

The Caseload Summary (see Fig. 2) represents the primary mechanism for comparisons across clients. The Caseload Summary presents a distilled subset of information in various panes (i.e., horizontal regions of the display stacked vertically on each other), with each client’s data organized along a single vertical line. The Caseload Summary panes follow the logic of the decision sequence from top to bottom (see Figs. 1 and 2). Thus, the Critical Events Pane at the top plots a symbol indicating the presence of a recent critical event as reported by the therapist. The Progress Pane plots initial and most recent observations on chosen outcome measures, such that the square represents the first observation, and the triangle represents the most recent. Trajectories of clients who are improving are thus indicated by a line with a weight or foot on the bottom (assuming lower scores represent progress; see Clients 1 and 3), whereas deteriorating trajectories are indicated by a line with an upward arrow (see Client 2). Once data from multiple time points are present (this requires a few weeks in the Child STEPs clinical trial; see below for a description of data sources used), this configuration allows for rapid discrimination by consultants and supervisors regarding those cases most in need of concurrent review and expert attention.
[Image: https://static-content.springer.com/image/art%3A10.1007%2Fs10488-007-0151-x/MediaObjects/10488_2007_151_Fig2_HTML.gif]

Fig. 2 Caseload summary example from the Child STEPs clinical trial

The Practice Pane, immediately below the Progress Pane, plots symbols showing both the historical values (x’s) and most recent values (diamonds) for practices delivered. Thus, both Clients 1 and 2 have had three skills covered prior to the most recent coverage of “Practicing” (i.e., exposure to feared situations). Near the bottom of the caseload summary, the Attendance Pane and Activity Pane show who attended the most recent session, and whether homework, role-plays, or other activities were performed. This display format can be used to aggregate clients across any relevant organizational level (by supervisor, clinic, therapist, city), making the caseload summary a versatile report for large scale administrative review and for guiding supervisory priorities. Viewing the example caseload summary in Fig. 2, supervisors might attend first to Client 2, whose status is summarized in the center of the report and shows an increase in internalizing symptoms. The full individual client summary can then be accessed quickly via a hyperlink. With the client’s structured information available to all team members, discussion is grounded in data and relevant treatment decisions are efficiently brought into focus.
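The case-selection logic can be made concrete with a small ordering rule: clients with recent critical events surface first, followed by those whose most recent score is worse than their first. The sketch below uses invented data and simply illustrates one way such a prioritization could be computed; it is not the BHRS code.

```python
# Hypothetical caseload prioritization consistent with the Caseload Summary logic:
# recent critical events first, then the largest deterioration on the plotted
# outcome measure (assuming lower scores represent progress).

caseload = [
    {"client": 1, "critical_event": False, "first": 68, "latest": 55},
    {"client": 2, "critical_event": False, "first": 60, "latest": 71},
    {"client": 3, "critical_event": False, "first": 64, "latest": 52},
]

def priority(c):
    deterioration = c["latest"] - c["first"]           # positive = getting worse
    return (not c["critical_event"], -deterioration)   # events first, then worst trend

for c in sorted(caseload, key=priority):
    trend = "worsening" if c["latest"] > c["first"] else "improving"
    print(c["client"], trend)
# Client 2 (worsening) is listed first for supervisory attention, as in Fig. 2.
```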

Once a client is selected for discussion, the individual client summary (see Fig. 3) becomes a central tool for monitoring progress, identifying problems, and informing adjustments to the treatment course. The structure of the individual client summary mirrors that of the caseload summary and again reflects the clinical reasoning sequence outlined in Fig. 1. Here, the Critical Events Pane at the top plots all critical events reported over time. The Progress Pane plots over time any and all repeated measures of clinical progress obtained during the course of treatment. The Practice Pane plots symbols showing both the observed values for practices delivered, as well as the expected values as determined by supervisors. Thus, on the Practice Pane, the open circles refer to planned strategies agreed upon in supervision, and the diamonds refer to strategies actually delivered as determined via discussion between the therapist and supervisor during the supervision meeting. Diamonds in circles therefore represent implementation of the therapy session as planned, whereas open circles represent errors of omission relative to the plan, and solitary diamonds represent errors of commission relative to the plan. Near the bottom of the client summary, the Attendance Pane and Activity Pane show all historical values for who attended each session and when, and whether homework, role-plays, or other activities were performed at each session. Also included on the display is a description of the intermediate treatment goal, along with basic facts such as client name and gender, therapist information, and current treatment protocol being used.
[Image: https://static-content.springer.com/image/art%3A10.1007%2Fs10488-007-0151-x/MediaObjects/10488_2007_151_Fig3_HTML.gif]

Fig. 3 Individual client summary example from the Child STEPs clinical trial

In Child STEPs, most of the data display’s divisions plot information from records of weekly meetings between therapists and supervisors. All plots are organized such that the horizontal axis (labeled at the top of the report) represents time in days from intake assessment, thereby allowing for clearer inferences across data series about the relations among events.
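Because the Practice Pane is built from planned versus delivered strategies, its three symbol types reduce to a set comparison. The sketch below is a hypothetical illustration using practice names mentioned elsewhere in the text; the session data themselves are invented, and this is not the BHRS plotting code.

```python
# Hypothetical classification of one session's practices, mirroring the
# Practice Pane symbols: planned and delivered (diamond in circle),
# planned but not delivered (open circle; error of omission), and
# delivered but not planned (solitary diamond; error of commission).

planned   = {"Practicing", "Secret Calming"}   # agreed upon in supervision
delivered = {"Practicing", "Relaxation"}       # reported after the session

as_planned  = planned & delivered    # {"Practicing"}
omissions   = planned - delivered    # {"Secret Calming"}
commissions = delivered - planned    # {"Relaxation"}
print(as_planned, omissions, commissions)
```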

Note that the clinical reasoning model in Fig. 1 specifies that review of progress is a key determinant of subsequent clinical planning. A resultant feature of the information system’s Progress Pane is that any of the various quantifiable measures of progress can be plotted. The Child STEPs project uses weekly phone calls to both children and caregivers to gather a continual stream of outcome data, and the information system plots internalizing or externalizing scales from these calls in the progress area (the child’s treatment protocol determines which of the two types of scales is the default, such that anxiety and depression protocols cause parent and child internalizing scales to be plotted, and conduct protocols cause parent and child externalizing scales to be plotted). Thus, only the information likely to be most applicable is presented by default. Although the Child STEPs project provides weekly scores to guide supervision and planning, such frequent assessments are unlikely to occur in most service environments. Nevertheless, it is possible to use information from less frequent assessments as a guide to decision making (Daleiden and Chorpita 2005), and this matter involves the traditional tradeoff between speed and accuracy of decision making.
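The default-scale rule amounts to a small lookup from protocol to scale pair. The mapping below mirrors the rule as stated in the text; the function and data structure are our own hypothetical shorthand.

```python
# Default Progress Pane scales by treatment protocol, as described in the text:
# anxiety and depression protocols default to internalizing scales, and
# conduct protocols default to externalizing scales.

DEFAULT_SCALES = {
    "anxiety":    ("child internalizing", "parent internalizing"),
    "depression": ("child internalizing", "parent internalizing"),
    "conduct":    ("child externalizing", "parent externalizing"),
}

def default_progress_scales(protocol: str) -> tuple:
    return DEFAULT_SCALES[protocol]

print(default_progress_scales("conduct"))
```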

A variety of information can be quickly gleaned from the individual client summary shown in Fig. 3. It is apparent, for example, that treatment has spanned 4 months, was interrupted for approximately 40 days, and has been attended only by the child. The weekly progress measures of child- and parent-report internalizing symptoms declined at the very start of treatment to near minimum values and may exhibit a floor effect afterwards. Periodic data from three other progress measures also show decreasing scale values. “Practicing” (i.e., exposure) is a core component in the treatment of anxiety and has only begun for the example client in the last two sessions, so it seems likely that continuation will be useful. However, the lower responsiveness of the thought and social problem progress indicators, in combination with the supervisor comment regarding recalcitrant test-taking anxiety, make coping skills such as “Secret Calming” (i.e., covert relaxation procedures, planned for the next session) an additional logical target.

Process #3: Quality Improvement

This generic process was primarily modeled with two procedures: (1) administrative supervision (with principal investigators, project supervisors, and evidence-based services consultants), and (2) management supervision (with principal investigators and site-specific project staff), but quality activities were embedded throughout the other processes, particularly through the use of tracking logs (e.g., in the intake and eligibility process described above). The information model for this process included several strategic aggregate reports (e.g., administrative results reports, clinical results reports). The client enrollment and assessment report, described below, was an important tool for tracking the timeliness and completion of enrollment milestones and repeated assessments performed in Child STEPs. A sampling of information from this report is shown in Table 2. As before, though the specifics of the report reflect the Child STEPs context for which it was designed, the illustration of distillation and presentation principles is believed to be widely applicable to quality improvement efforts by supervisors and directors in other clinical environments.
Table 2

Client enrollment and assessment report

Column groups: Enrollment (E1 = interest/eligibility to initial assessment; E2 = initial assessment to treatment), Treatment based events (T1 = since phone assessment; T2 = since session; T3 = since supervision; T4 = treatment to post assessment), and Periodic assessment (days late at 3, 6, 9, 12, 18, and 24 months). Values in square brackets are no longer increasing; empty cells are not applicable.

| Client # | Therapist | E1 | E2 | T1 | T2 | T3 | T4 | 3 | 6 | 9 | 12 | 18 | 24 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | A | [4] | [8] | | | | 14 | [25] | [−1] | −48 | −67 | −249 | −432 |
| 2 | A | [5] | [7] | 6 | 9 | 3 | | [11] | [−13] | −64 | −165 | −337 | −520 |
| 3 | B | [3] | [11] | 13 | 32 | 9 | | 16 | −73 | −165 | −256 | −438 | −621 |
| 4 | C | [1] | 17 | | | | | −73 | −164 | −256 | −347 | −529 | −712 |

In the client enrollment and assessment report, data relating to each client occupy a single row and are categorized according to their relation to enrollment, treatment based events, or periodic assessment. The enrollment area displays time delays measured in days for key events such as the completion of initial assessment after receipt of interest and eligibility information and the subsequent start of treatment for clients included in the study. The treatment based events area displays days elapsed since the most recent occurrence of treatment sessions, therapist supervision, and weekly phone assessment (which only occurred during treatment in Child STEPs), as well as days elapsed since the end of treatment for those clients awaiting a post assessment. Lastly, the periodic assessment area indicates the number of days late for each of six assessments performed at specified numbers of months following a client’s initial assessment (i.e., 3, 6, 9, 12, 18, and 24 months). Negative values in this area thus indicate the time horizon for planning and scheduling a periodic assessment, while positive values indicate potentially problematic delay.

Formatting is used to add salience to values that may require action and to diminish salience of values that are no longer increasing (e.g., number of days late for a periodic assessment that has been completed) and are thus unlikely to require action. In the sample shown in Table 2, values outside the desired ranges are emboldened, while those that are no longer increasing are enclosed in square brackets. Among the example clients shown, Client 1 has been awaiting post assessment for 2 weeks, indicating that increased scheduling efforts may be needed. The report shows that Clients 2 and 3 are both in active treatment. All values are within normal ranges for Client 2; however, Client 3 has bold values in the Since Phone Assessment and Since Session columns, indicating that increased resources or effort may be needed to overcome the current break in contact. The bold value for Client 3 in the Periodic Assessment area denotes that the time window for a valid 3-month assessment has been reached but the assessment has yet to be completed. Finally, for Client 4, the sample report draws attention to the fact that 17 days have passed following initial assessment without the completion of a treatment session. This delay may indicate the need for a supervisor to support the client’s therapist in surmounting logistical or engagement-related obstacles.
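One way to see how such a report could be generated is to compute each periodic assessment cell from an intake date and a due interval: negative values give the planning horizon, positive values the delay, and completed assessments freeze in brackets. The sketch below is a hypothetical illustration with invented dates and field names, and it marks out-of-range values with asterisks in place of the bold formatting used in Table 2.

```python
# Hypothetical computation of one periodic-assessment cell from the client
# enrollment and assessment report. Dates, field names, and the month-length
# approximation are illustrative only.
from datetime import date, timedelta
from typing import Optional

def periodic_cell(intake: date, months_due: int,
                  completed_on: Optional[date], today: date) -> str:
    due = intake + timedelta(days=int(months_due * 30.4))  # approximate months
    if completed_on is not None:
        return f"[{(completed_on - due).days}]"            # frozen: no longer increasing
    days_late = (today - due).days                         # negative = not yet due
    return f"*{days_late}*" if days_late > 0 else str(days_late)

intake = date(2007, 1, 10)
print(periodic_cell(intake, 3, None, date(2007, 4, 26)))   # *15*  (3-month, 15 days late)
print(periodic_cell(intake, 6, None, date(2007, 4, 26)))   # -76   (6-month, not yet due)
```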

Key Elements of the Context

Although the illustrations of the above procedures necessarily emphasize the use of technology, we conceptualize the BHRS as a support structure to create a context for successful treatment delivery, and the technology illustrated is but one medium through which such aims could be realized. More essential is that creating this context should involve a mapping of existing procedures and resources to build models for clinical reasoning, followed by some effort to optimize the models once identified. Little has been written in this paper about the choices that guided our model optimization for Child STEPs, so we briefly mention a few of them here, with an emphasis on how such concepts might generalize to clinical service organizations.

Compatibility

One issue that can help or hinder implementation involves compatibility with values and beliefs of the practitioners and supervisors. Rogers (1995) has argued that such compatibility accelerates diffusion of innovations. Unfortunately, evidence-based protocols tend to be viewed as incompatible with clinician beliefs and practices (e.g., Addis and Krasnow 2000; Persons 1991). This raises credible concerns about therapist motivation to use a specific evidence-based protocol or to follow the novel suggestions of a supervisor.

As a solution for Child STEPs, the information model included clinical reports that were designed to prioritize individual case-specific outcomes as the primary source of evidence, rather than placing immediate emphasis on treatment fidelity. As reflected in Fig. 1, the clinical logic is to ask the question “is the client getting better?” before asking the question “are we following the protocol?” This decision making algorithm shifts the primary goal from “using evidence-based practices,” an arguably controversial goal in some services settings, to “getting positive outcomes,” a goal more compatible with most practitioners in service organizations. With this shift, evidence-based practices are now simply a strategy by which to achieve the goal. This line of reasoning no longer hinges on the endorsement by practitioners that the use of evidence-based protocols is an end in itself.

Feedback

Regarding motivation, here we considered the literature on feedback in clinical management (e.g., Lambert 2005; Lambert et al. 2005; Sapyta et al. 2005). Because all supervision occurs in a social context, the salience of the feedback on clinical progress to the supervisor, the therapist, and possibly to others (e.g., peers, managers) becomes a motivating force for goal attainment. Sapyta et al. (2005) proposed that feedback needs to be immediate and instrumental to be most effective. Given the automated nature of such systems, the feedback cycle can be quite short (in Child STEPs, weekly). The feedback is instrumental (i.e., when negative, it suggests strategies for the user) in that (1) the Critical Events Pane may highlight critical issues that are barriers to success requiring immediate attention, (2) the Practice Pane may point to areas where fidelity or formulation may be problematic, and (3) the Attendance and Activity Panes may point to areas where engagement may be an issue. It is important to note that other aspects of the organizational culture are likely to moderate openness to feedback (e.g., “psychological safety,” the perception that well-intentioned professional risks will not result in punishment; Edmondson et al. 2001).
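Read this way, instrumental feedback is a pairing of each negative signal with a suggested next step. The pane-related signal names below echo the reports described above, but the mapping and the suggested actions are our own hypothetical illustration, not a feature of the BHRS.

```python
# Hypothetical mapping from dashboard signals to instrumental feedback:
# each negative finding is paired with a suggested next step for supervision.
FEEDBACK_ACTIONS = {
    "critical_event": "address the crisis or barrier before resuming the protocol",
    "practice_drift": "review fidelity and the case formulation in supervision",
    "low_engagement": "problem-solve attendance, homework, and participation",
}

def instrumental_feedback(signals):
    """Return suggested actions for each negative signal present."""
    return [FEEDBACK_ACTIONS[s] for s in signals if s in FEEDBACK_ACTIONS]

print(instrumental_feedback(["practice_drift", "low_engagement"]))
```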

Complexity

One other challenge to optimizing the treatment context involves the complexity associated with evidence-based protocols. With few exceptions, these practices run on different “platforms.” That is, each has its own way of measuring progress, ensuring fidelity, and organizing other aspects of clinical management. From the laboratory perspective, this complexity is often a strength, in that it stems from a highly developed clinical management infrastructure. From the perspective of a service organization, it can be a major limitation, requiring a system to learn the diverse languages of the many protocols it chooses to implement and monitor. Aside from the burden of learning the proprietary management structure associated with each protocol, the diversity of metrics rarely allows aggregated reporting at a system level. In other words, if each program uses a different outcome measure, there is no way to ask such questions as “how are all our clients doing?” or “how often are therapists following the protocols chosen?” These are not trivial questions for service organizations.

As has been illustrated in other contexts (e.g., Daleiden and Chorpita 2005) the system used in the Child STEPs trial involves monitoring critical events, outcomes, practice history, attendance, and rehearsal/homework for four different protocols on a universal set of metrics. Thus, supervisors in the project can view the same reports regardless of the protocols. Though the protocols differ, for example, in choice of outcome measures, unification is made possible by the flexibility of the system to map diverse data types onto the dashboard displays. Any quantifiable measure can be presented in the same fashion. This allows multiple evidence-based practices to coexist with each other and with data from other clinical care.
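One way to read the “universal set of metrics” idea is as a thin adapter layer: any quantifiable measure with a time stamp, a value, and a direction of improvement can be rendered through the same display logic. The sketch below is our own illustration of that idea, with invented measure names and fields; it is not the actual BHRS implementation.

```python
# Hypothetical adapter mapping protocol-specific outcome measures onto a
# common Progress Pane series, so different protocols share one display contract.

class ProgressSeries:
    def __init__(self, label: str, lower_is_better: bool = True):
        self.label = label
        self.lower_is_better = lower_is_better
        self.points = []                      # (days_since_intake, value) pairs

    def add(self, day: int, value: float) -> None:
        self.points.append((day, value))

    def improving(self) -> bool:
        (_, first), (_, latest) = self.points[0], self.points[-1]
        return latest < first if self.lower_is_better else latest > first

# Two protocols, two different measures, one reporting format.
anxiety = ProgressSeries("parent internalizing (weekly phone)")
conduct = ProgressSeries("parent externalizing (weekly phone)")
anxiety.add(0, 71); anxiety.add(21, 58)
conduct.add(0, 66); conduct.add(21, 69)
print(anxiety.improving(), conduct.improving())   # True False
```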

Summary

As efforts to implement evidence-based service programs continue, there should be an increasing emphasis on understanding and modeling of relevant organizational and contextual factors. Our examples were intended to illustrate a modeling and optimization process by which information technology can be used to support and inform a structured set of business procedures and clinical decisions. Although the features of the system illustrated are specific to the Child STEPs project, the design process described may be used to support clinical practice in diverse settings. As the field continues to sharpen its understanding of the systemic and contextual variables related to clinical practice, we expect to witness a resultant evolution of clinical information technology. We are entering an era in which innovation in the information and decision domains may afford some of the best opportunities for improving practice in clinical settings, providing yet another avenue to narrow the science-practice gap.

Copyright information

© Springer Science+Business Media, LLC 2007