Journal of Autism and Developmental Disorders, Volume 41, Issue 5, pp 597–609

Bridging the Research-to-Practice Gap in Autism Intervention: An Application of Diffusion of Innovation Theory

Authors

  • H. E. Dingfelder, Department of Psychology, University of Pennsylvania
  • David S. Mandell, Center for Mental Health Policy and Services Research, Department of Psychiatry, University of Pennsylvania School of Medicine
Original Paper

DOI: 10.1007/s10803-010-1081-0

Cite this article as:
Dingfelder, H.E. & Mandell, D.S. J Autism Dev Disord (2011) 41: 597. doi:10.1007/s10803-010-1081-0

Abstract

There is growing evidence that efficacious interventions for autism are rarely adopted or successfully implemented in public mental health and education systems. We propose applying diffusion of innovation theory to further our understanding of why this is the case. We pose a practical set of questions that administrators face as they decide about the use of interventions. Using literature from autism intervention and dissemination science, we describe reasons why efficacious interventions for autism are rarely adopted, implemented, and maintained in community settings, all revolving around the perceived fit between the intervention and the needs and capacities of the setting. Finally, we suggest strategies for intervention development that may increase the probability that these interventions will be used in real-world settings.

Keywords

Autism spectrum disorders · Intervention · Community mental health · Public schools · Implementation science · Diffusion of innovation

Autism interventionists practicing in the community generally rely on techniques unsupported by research (Stahmer et al. 2005); when efficacious interventions are used, they often are not implemented the way they were designed (Stahmer 2007). Most autism programs incorporate elements of different strategies, which have not been tested in combination (Chasson et al. 2007). As a result, autism programming in the community has not been found to improve adaptive, social, or cognitive functioning (Chasson et al. 2007).

From 2002 to 2004, the National Institute of Mental Health convened stakeholders to inform government agencies about the state of the science of psychosocial interventions for autism. Based on these meetings, Lord et al. (2005) and Smith et al. (2007) wrote thoughtful reviews on the research-to-practice gap, which emphasize that efficacy research is not paralleled by systematic research on the best ways to implement interventions in the settings in which most children receive care. Parallel research in other areas of child psychopathology suggests similar challenges in cultivating the support of payers, practitioners, and families in disseminating and implementing evidence-based practices (Schoenwald and Hoagwood 2001).

We propose applying diffusion of innovation theory (Rogers 1962, 2003) to further our understanding of the dissemination and implementation of efficacious autism interventions. This theory provides a framework for describing how, why, and at what rate new technologies spread through social systems. It has been used to better understand the dissemination and implementation of interventions in diverse fields, such as HIV/AIDS (Svenkerud and Singhal 1998), substance abuse (Gotham 2004), and conduct disorder (Glisson and Schoenwald 2005). It is new, however, to autism intervention research.

The Diffusion of Innovation Model

The guiding principle of the diffusion of innovation framework is that an innovation’s reception depends on its social context, which helps explain why proven-efficacious interventions often go unused in community practice while interventions with minimal research support gain widespread acceptance. As Freedman (2002) summarizes, “social forces consistently trump unvarnished effectiveness” in how interventions are used (p. 1539).

Diffusion is the process by which a new practice (an innovation) is communicated over time among members of a social system (Rogers 1995, 2003). The decision to accept, adopt, and use the intervention is not an instantaneous act, but a process consisting of four stages: dissemination, adoption, implementation, and maintenance (Rohrbach et al. 1993). As depicted in Fig. 1, each stage includes processes that affect how the innovation is integrated into standard practice. During dissemination, administrators are made aware of programs and encouraged to adopt them. During adoption, administrators form attitudes towards the intervention and commit to initiating the program. During implementation, practitioners begin to use the innovation. Re-invention (changing or modifying an innovation) is especially likely at this phase. Finally, during maintenance (or sustainability), the innovation moves from implementation to institutionalization. At this phase, administrators and practitioners make a commitment to continue or discontinue use of the program.
Fig. 1 The innovation-decision process

These four stages are affected by a complex array of variables, including perceived characteristics of the innovation (Rogers 2003), which have been shown to predict 49–87% of the variance in how quickly it is adopted (Rogers 1995). These characteristics have been defined in a wide variety of ways in the diffusion of innovation literature, ranging from objective, “primary” ratings, such as size or cost, to subjective, “secondary” ratings, such as adopters’ ratings of specific attributes of the innovation (Tornatzky and Klein 1982). Here, we take Rogers’ (2003) approach, which emphasizes that it is the potential adopter’s perceptions of the attributes of an innovation, not the attributes as classified by external objective raters, that most strongly affect the rate of adoption. Within his framework, five characteristics of the innovation are most influential: relative advantage, compatibility, complexity, trialability, and observability. We focus here on three of these attributes: relative advantage, compatibility, and complexity, which are the most widely studied attributes in Rogers’ model and have the most consistent significant relationships with innovation adoption (Tornatzky and Klein 1982). While many perspectives are important, we adopt that of program administrators, with the understanding that they often have the most say in whether interventions are adopted (Rohrbach et al. 2005; Yell et al. 2005).

In what follows, we define relative advantage, compatibility, and complexity. Then, within this framework, we pose a practical set of questions facing administrators as they decide about the use of interventions.

What are Relative Advantage, Compatibility, and Complexity?

Relative advantage is the degree to which an innovation is perceived as better than the program it supersedes. Innovations that have an unambiguous advantage are more readily adopted—otherwise, potential users will not consider them further. In other words, “relative advantage is a sine qua non for adoption” (Greenhalgh et al. 2004, p. 594). Nevertheless, relative advantage alone does not guarantee widespread adoption of an innovation. Diffusion also requires that an innovation be compatible with the values, beliefs, past history, and current needs of the adopters. As Rogers (1995) notes, “The innovation may be ‘new wine,’ but it is poured into ‘old bottles’ (the clients’ existing perceptions)” (p. 241). Compatibility with organizational values and capacities also is required (Greenhalgh et al. 2004). Complexity is the degree to which an innovation is perceived as difficult to use. Generally speaking, innovations that are perceived as complex are less easily adopted.

While theoretical presentation can be useful, a model’s utility stems from practical application. Therefore, we present guiding questions within this framework that administrators consider when deciding whether to implement a given intervention: Which program works better? Is the cost of changing worth the relative advantage? Does this program have all the components I need for full implementation? Can my staff do this with the resources available to them? Will important constituencies accept it?

To illustrate the following points, Table 1 applies these questions to two different evidence-based interventions for autism in public schools: discrete-trial training (DTT), a prototypical technique in applied behavior analysis (ABA) based on operant discrimination learning (Lovaas 1987), and pivotal response training (PRT), a less structured, more naturalistic approach (Koegel et al. 1987). While their approaches are quite different, both DTT and PRT have strong empirical support (Howlin et al. 2007; Jocelyn et al. 1998; Koegel et al. 2003; National Research Council 2001; Schreibman 2000), and they raise similar issues with respect to their dissemination and implementation in community settings, as demonstrated in Table 1.
Table 1

Guiding questions applied to the diffusion of discrete trial training (DTT) and pivotal response training (PRT) in public schools

1. Which program works better…

 1a. …based on what evidence?
  • While some randomized trials support the use of DTT, most evidence for DTT and PRT comes from single-subject studies, which may not meet federal standards
  • Although DTT and PRT are accepted as evidence-based practices in recent practice reviews, most evidence about them appears in peer-reviewed journals, which school administrators may have difficulty accessing and interpreting
  • PRT is less widely known, studied, and disseminated than DTT

 1b. …for what children?
  • Children enrolled in efficacy studies of DTT and PRT have more familial resources and are less heterogeneous than those in public schools; administrators may believe that DTT will not be equally effective or appropriate for their students
  • Concerns may be exacerbated by the field’s reliance on single-subject designs

 1c. …for what outcomes?
  • Outcomes in DTT efficacy studies focus on IQ and symptoms of autism
  • Outcomes in PRT studies focus on language, joint attention, play, and social skills
  • Outcomes in schools focus on academic achievement (i.e., reading and math)

2. Is the cost of changing worth the relative advantage offered by the new program?
  • DTT and PRT are expensive because of their time and staffing requirements, particularly the number of hours spent on one-to-one intervention
  • Changing to DTT or PRT requires time and resources
  • DTT and PRT are rarely compared to other interventions, making it difficult to ascertain the advantage of changing programs

3. Does this program have all the components I need for full implementation?
  • DTT and PRT are specific instructional strategies and are not necessarily paired with educational content
  • DTT and PRT are one-on-one interventions and do not include strategies for group work
  • Some components of PRT are difficult to implement in the classroom and must be combined with other interventions

4. Can my staff do this with the resources available to them?
  • DTT and PRT require intensive training and ongoing consultation
  • DTT also requires certification
  • Classrooms are not adequately staffed to deliver one-to-one DTT interventions
  • DTT (and some models of PRT) requires rigorous data collection, which may be perceived as excessively burdensome

5. Will important constituencies accept this program?
  • Payers may endorse one specific program (e.g., Lovaas methods) and not fund other, less-known strategies
  • Some parents may prefer naturalistic, play-based developmental approaches over DTT
  • Some teachers and administrators may not see PRT as sufficiently didactic and/or within the scope of their work

Which Program Works Better?

Mental health programs generally have no legal requirement to implement practices with proven efficacy, although the evidence-based practice movement has made considerable inroads into reconceptualizing community practice (Kazdin 2004; Weisz et al. 2004). School systems, on the other hand, are bound by the Individuals with Disabilities Education Act and the No Child Left Behind Act of 2001 (PL 107–110, Sect. 1001), which require them to implement evidence-based practices (Browder and Cooper-Duffy 2003). When districts do not use interventions with proven efficacy, courts can require costly alternatives, such as increased one-to-one intervention or private school placements (Yell and Drasgow 2000). School administrators, therefore, face enormous pressure to find justifiable programs. Mental health program administrators face similar pressures from payers and constituencies. This search can lead to more questions than answers. What sources will I use to determine the relative effectiveness of a given program? For what children was the program developed and tested, and do these children look like the students I serve? What outcomes did these programs measure, and are they the outcomes that I consider most important? Each question is addressed below.

…Based on What Evidence?

Many researchers assume that proven efficacy will be the primary selection criterion. In reality, administrators use a broader range of evidence. Rather than peer-reviewed journals, they often rely on information gathered locally in partnership with staff and community members (Honig and Coburn 2008). This information often consists of “practice wisdom”: nonempirical indicators that are naturally built into practice, including the opinions of colleagues, client statements, and practitioner observations and inferences (Elks and Kirkhart 1993). Practice wisdom is most frequently used when the research base is perceived as lacking (Herie and Martin 2002). Administrators report difficulty accessing research because the evidence has not been produced or research syntheses are not available. They often perceive scientific evidence as not providing useful recommendations or as not specific to their situation, leaving them feeling that this evidence cannot meaningfully guide decision making (Honig and Coburn 2008).

From this perspective, evidence regarding efficacious autism practices is weak and contradictory. Researchers’ understandable reluctance to draw clear-cut conclusions from ambiguous data is at odds with the demands of practitioners, who must make concrete decisions about which interventions will be used (Stanovich and Stanovich 1997). No Child Left Behind and the practice standards for assessing efficacy used for mental health interventions (Chambless and Hollon 1998; Weisz and Hawley 1998) consider independent randomized controlled trials (RCTs) to be the gold standard. This narrow interpretation of scientific validity has significant implications for adopting intervention practices. The majority of autism intervention studies have used single-subject experimental designs (Machalicek et al. 2007; Odom et al. 2003; Rogers and Vismara 2008). While these studies have made tremendous contributions to the autism intervention field by demonstrating improvement among children with autism across broad areas of functioning, current federal and practice standards may make it difficult for administrators to justify adopting these strategies into practice.

While the National Research Council (2001) outlined key principles common to many intervention programs for autism, it also acknowledged that “there are virtually no data on the relative merit of one model over another” (National Research Council 2001, p. 171). While some states have developed best practice guidelines, many simply list the available treatments without regard for evidence. Even guidelines that consider experimental evidence offer no specific endorsements (Akshoomoff and Stahmer 2006).

…For What Children?

Many practitioners believe that the children they serve present more challenges than the children with whom interventions are tested, and that proven-efficacious programs will not work in their settings (Cochran-Smith and Lytle 1999). The belief that their students or clients are different may have merit: children enrolled in efficacy studies represent only a subset of those served in community settings. Most autism interventions have been tested with preschool-aged children, with comparatively little data to guide administrators’ decisions regarding programming for older children (Akshoomoff and Stahmer 2006). Almost all of the comprehensive intervention programs are university-based. While single-subject studies have tended to occur in more naturalistic settings, such as special education classrooms with a teacher and/or paraprofessional (Machalicek et al. 2007), these studies often focus on a single interventionist targeting a specific problem in a single child. The extent to which administrators and practitioners perceive these studies as applicable to broader programming decisions may be limited as a result. Children enrolled in efficacy studies also typically have greater familial resources and less heterogeneity than the typical child served in community settings, due to exclusion criteria and self-selection (Lord et al. 2005). Almost all of the current interventions in autism have been built on studies with white, middle-class populations. While studies generally advocate for parental involvement, few have studied this involvement in families with fewer financial resources, less education, and more ethnic heterogeneity.

…For What Outcomes?

The mission of the education system is to promote academic achievement; NCLB requires that states implement assessment systems in reading and math (Yell et al. 2005). The mission of the mental health system is to maximize child functioning. Most efficacy studies of autism intervention, however, measure the symptoms of autism (e.g., joint attention, imitation, challenging behaviors) or IQ (National Research Council 2001), outcomes that may not relate directly to those either system values most.

Is the Cost of Changing Worth the Relative Advantage Offered by the New Program?

The resource and staffing requirements for implementing most proven-efficacious autism interventions are intensive. For example, estimated average costs for discrete trial training range from $22,500 to $75,670 per year (Chasson et al. 2007; Motiwala et al. 2006), with an expected course of two to six years (Jacobson et al. 1998). Many evidence-based interventions may be too costly to implement in an era in which the number of children diagnosed with autism in the special education system increases by an average of 17% a year (US Department of Education 2004).
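To put these figures together, a rough worked calculation (using only the ranges cited above, pairing the lowest annual cost with the shortest expected course and the highest with the longest) brackets the per-child cost of a full course of discrete trial training:

$$\$22{,}500 \times 2\ \text{years} = \$45{,}000 \qquad\text{to}\qquad \$75{,}670 \times 6\ \text{years} = \$454{,}020$$

These endpoints bound the investment administrators must weigh before counting any costs of training, supervision, or program change.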

In addition to the cost of the new program per se, the change itself demands effort, time, and resources. Even if administrators are confident in a new program, they still must assess whether it offers a sufficient relative advantage to justify these costs. This question is not easy to answer. The vast majority of intervention studies rely on a treatment-as-usual comparison group in which the usual treatment is not specified (Rogers and Vismara 2008). Administrators therefore cannot determine whether the new program offers worthwhile advantages over current practice. When justifying the costs of changing programs, administrators also take into account whether the new program will be more cost-effective in the long run. Almost no intervention study has assessed short- or long-term costs (National Research Council 2001).

Does this Program Have all the Components I Need for Full Implementation?

Interventions that are most readily transported into community settings address a broad range of needs, with program materials and clear procedural guidelines (Boardman et al. 2005). Autism interventions rarely are designed this way. Rather, they usually represent an instructional strategy without a curriculum (necessary for classroom implementation) or strategies for group work (necessary for classrooms and some mental health settings). While the National Research Council (2001) describes ten comprehensive programs, only three are designed for public schools, and none specifically address issues of implementation in community mental health settings. Of these ten, only four are commercially packaged or have publicly-available manuals (Lord et al. 2005).

Can My Staff Do This with the Resources Available to Them?

While administrators may have primary authority to adopt new programs, frontline staff determine whether and how they are used (Rohrbach et al. 2005). In this respect, teachers and clinicians delivering autism interventions can be thought of as “street-level bureaucrats.” Lipsky (1980) describes the large degree of influence held by public agency employees responsible for implementing public policies. These street-level bureaucrats often experience relative autonomy from organizational authority, and their work objectives are often vague or contradictory. As a result, they have relatively high degrees of discretion in determining the nature, amount, and quality of the work provided by their agencies. When street-level bureaucrats are unable or unwilling to administer policies enacted at a higher echelon, it is unlikely that those policies will be implemented as intended. Taking into account practitioners’ capabilities and preferences therefore is critical for effective delivery of evidence-based practices.

First, administrators must consider whether staff have the skills and resources to implement the program as intended. This is particularly important for autism programming, given the personnel required to deliver the high-intensity, individualized interventions that have proven most efficacious (Simpson 2003). Most evidence-based comprehensive programs require staff to blend several strategies to match children’s needs (Swiezy et al. 2008). Staff therefore must be trained in multiple approaches.

Practitioners may perceive autism interventions as overly burdensome (Maurice et al. 2001). For example, strategies based on applied behavior analysis demand high and consistent effort (Rakos 2006) and take considerable training to master (Lerman et al. 2004). Comprehensive programs for autism are even more complex. For example, STAR (Strategies for Teaching Based on Autism Research), an educational program for children with autism that has published outcome data (Arick et al. 2003), requires that teachers learn three instructional methods in combination with curriculum content, student profiles and data sheets, and a specific classroom set-up.

Most community settings are not staffed by professionals with the skills to implement these programs effectively (Heidgerken et al. 2005; Simpson 2003). For example, special education certifications and child psychology licensure are created under the assumption that generically trained personnel can address the needs of all children with disabilities (Simpson 2003). Even states offering licensure in specific disability categories often do not offer sufficient autism-specific training (Scheuermann et al. 2003). Simpson (2003) notes, “preparing qualified [professionals]…to educate…students with autism …is the most significant challenge facing the autism field” (p. 194). Because practitioners’ preparation so often is poor, administrators must find ways for them to acquire skills on the job. Specialized training is rarely available, however, and often lacks recommended elements of effective practices (Scheuermann et al. 2003).

Administrators also must take into account organizational capacity (Boardman et al. 2005). For example, the most common element of proven-efficacious practices is many hours per day of active engagement (Dunlap 1999). The National Research Council (2001) recommends that children receive a minimum of 25 h a week, 12 months a year, of individualized intervention. Most classrooms have insufficient staff-to-student ratios to meet these requirements, and mental health centers rarely are able to devote that much time to any one child. Comparatively little time, therefore, is available for individualized interaction.
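The arithmetic behind this capacity shortfall is easy to lay out. As a rough illustration (the dose is the National Research Council recommendation above; the assumption that a full-time interventionist can provide on the order of 1,200 h of direct instruction per year is ours, for illustration only):

$$25\ \text{h/week} \times 52\ \text{weeks} = 1{,}300\ \text{h/year per child}$$

Delivering this dose one-to-one would thus consume more than the full annual instructional capacity of one staff member for every child served, a ratio few classrooms or community clinics can approach.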

In community clinics, administrators, practitioners, and parents also must take into account funding for autism intervention. Children with autism have medical expenditures that are extraordinarily high in comparison to expenditures of children without autism diagnoses (Mandell et al. 2006), and families of children with autism are more likely than other families of children with special needs to report problems acquiring appropriate service referrals and obtaining an adequate number of service visits (Krauss et al. 2003).

Will Important Constituencies Accept this Program?

Diffusion of innovation theory emphasizes that interventions must fit the perceived needs, values, and beliefs of key stakeholders. Finding a program acceptable to all stakeholders is challenging because parents, staff, and administrators often disagree about which practices for children with autism are consistent with their values (Feinberg and Vacca 2000; Sperry et al. 1999). Should intervention be highly structured and didactic, or naturalistic and play-based (Ospina et al. 2008)? Should it focus on social or academic development? Should it rely on intensive staff-child interactions or incorporate peer interactions? While this has not been studied in mental health settings, constituencies vary in their acceptance of didactic versus play-based instructional approaches in schools (Jennett et al. 2003). Didactic approaches such as discrete trial training are often perceived by parents as cold, rigid and dogmatic (Simpson 2001). On the other hand, some teachers and administrators embrace these didactic approaches, but balk at less structured, naturalistic play-based interventions. In one study, didactic practices based on operant conditioning were the source of most disagreement among parents, teachers and administrators (Callahan et al. 2008).

Administrators often also must take into account requirements imposed by external funders, such as private or public insurance companies. Insurance companies may emphasize certain intervention strategies over others. For example, Pennsylvania Act 62 of 2008, which requires health insurance companies to cover up to $36,000 yearly for behavioral and other clinical services until the age of 21, explicitly mentions coverage of ABA but not other behavioral or developmental approaches. Many of the services commonly used by children with autism, such as educational testing, rehabilitation, coursework, and educational counseling, are often excluded under private insurance plans (Young et al. 2009).

Summary

Given the current state of evidence and the proliferation of interventions, administrators face difficult decisions in determining which programs to adopt. They must sift through competing claims of effectiveness and weigh whether their staff can implement the program with available resources and whether the program can be sustained over time. To complicate matters further, there often is tremendous parental advocacy for specific approaches. Because these issues are so emotionally charged, administrators may face harsh criticism, regardless of their decision.

Implications for Intervention Researchers

The research-practice gap in autism mirrors that in other conditions of childhood. A large body of research shows that interventions for children with psychiatric and developmental disorders are not as effective in communities as they are in research settings and are not sustained over time (Storch and Crisp 2004; Weisz et al. 2005). The lag between the development of an evidence-based treatment and its integration into routine practice is estimated to be 20 years (Walker 2004). Developers and advocates of effective interventions have a responsibility to cultivate the conditions that will facilitate successful diffusion.

Diffusion of innovation theory highlights that the study of any intervention is always the study of that intervention in a context. Rather than treating contextual factors as nuisance variables, diffusion of innovation theory suggests that they are critical to the adoption and continued, committed use of the intervention. This idea was discussed in the NIH-sponsored meetings on the state of autism intervention research, where there was consensus that key stakeholders (e.g., families, teachers, clinicians, and administrators) must be involved in developing a research agenda to foster large-scale use of effective treatments (Lord et al. 2005). Despite this consensus, there was disagreement on how to move from efficacy trials (testing the intervention under ideal circumstances) to effectiveness trials (testing the intervention under real-world conditions).

In their excellent review of the current challenges facing the field of autism intervention research, Smith et al. (2007) propose a model for systematically validating and disseminating interventions for autism. This model provides a strong framework for setting the agenda of autism intervention research. However, as researchers in related fields have highlighted, for efficacious interventions to be successfully implemented, the community context must be considered explicitly throughout all phases of research.

These researchers (Glasgow et al. 2003; Schoenwald and Hoagwood 2001; Weisz et al. 2004) suggest that it may be necessary to rethink the current “efficacy-to-effectiveness” sequence, often described as the “stage pipeline” model (Rohrbach et al. 2006). A report of the National Institute of Mental Health’s Advisory Council (National Institute of Mental Health 2001) describes a model of treatment development that attends to service delivery issues at the outset. Similarly, Weisz and colleagues (2004) state, “To create the most robust, practice-ready treatments, the field [should] consider a shift from the traditional model to a model that brings treatments into the crucible of clinical practice early in their development and treats testing in practice settings as a sequential process, not a single final phase” (p. 304). They advocate research models that attend to setting characteristics from the start, in the initial pilot and testing phases, and outline steps, from manual development to wide-scale dissemination, that focus on the setting in which the service ultimately will be delivered. This model is designed to accelerate the pace at which interventions are developed, adapted, refined, and implemented in communities (Weisz et al. 2004). Other researchers (Glasgow et al. 2003) have developed frameworks to increase external validity in trials by providing criteria for evaluating interventions on both their efficacy and their applicability to real-world practice. They also emphasize that participatory research methods should be built into efficacy studies, rather than left for later phases of research.

Successful efforts to adapt interventions to the practice context are likely to be bidirectional. In addition to adapting interventions to improve their fit with the values and capacities of public settings, adaptations within these settings may have to occur to improve practices (Hoagwood and Johnson 2003). Researchers have improved organizational capacity to support interventions by working with key stakeholders to improve their willingness and ability to adopt, implement, and maintain the intervention. The ARC (availability, responsiveness, and continuity) model (Glisson and Schoenwald 2005) and the RE-AIM (reach, efficacy, adoption, implementation, and maintenance) framework (Glasgow et al. 2003) address barriers to the “fit” between social context and intervention by focusing organizational and community efforts on a specific population and problem, building community support for services that target the problem, creating alliances among providers and community stakeholders, encouraging desired provider behaviors, and developing a social context that fosters effective service delivery.

To highlight how these considerations can be incorporated within Smith et al.’s (2007) model for conducting autism intervention research, Table 2 presents Smith et al.’s recommendations for each phase of autism intervention research alongside suggestions from dissemination, implementation, and community-based participatory research (Glasgow et al. 2001, 2003; Glisson 2007; Glisson and Schoenwald 2005; Israel et al. 2005; Weisz et al. 2004) on how to refine these phases to enhance the likelihood of uptake of efficacious autism interventions in the community. We also add a fifth phase, which is not explicitly considered within the scope of the Smith et al. (2007) model. The general principles presented are described in more detail below.
Table 2

Incorporating diffusion of innovation research into Smith et al.’s (2007) recommendations for future autism intervention research. For each phase, Smith et al.’s goals and activities are listed first, followed by refinements drawn from dissemination and implementation research.

Phase 1. Formulation and systematic application of a new intervention using single-case or between-group designs

Goals and activities:
 • Conduct initial efficacy studies to refine techniques and document clinical significance of effects

Refinements from dissemination and implementation research:
 • Involve parents, practitioners, and administrators in the research from the early stages:
  – Build on prior community-academic relationships
  – Conduct a community analysis to identify key opinion leaders and other potential community partners
  – Use change agents as “boundary spanners” who cultivate relationships with the community
 • Target research towards issues that are most salient to public interest and actual practice, as identified by these community partners

Phase 2. Manualization and protocol development via pilot studies, surveys, and focus groups

Goals and activities:
 • Manualize the efficacious intervention
 • Put together a manual for the comparison group
 • Develop treatment fidelity measures
 • Test feasibility of implementing manuals across sites
 • Assess acceptability of interventions to clinicians and families
 • Examine sustainability of interventions in community settings
 • Estimate sample size

Refinements from dissemination and implementation research:
 • Continue to elicit guidance from the treatment developers, administrators, community practitioners, and families to encourage:
  – Faithfulness to the core principles guiding the treatment protocol; and
  – Goodness of fit with the clinical setting, practitioners, and clients
 • Make successive modifications to the protocol as problems of fit are identified
 • Build formal mechanisms for evaluating and planning for sustainability of the protocol in community settings, including:
  – Continuing to partner with administrators, practitioners, and families to assess long-term feasibility of the protocol
  – Assessing potential long-term costs and other potential treatment demands
  – Building data collection mechanisms into the protocol that can be used in the community to monitor fidelity of implementation, child outcomes, and stakeholder satisfaction

Phase 3. Efficacy studies using randomized clinical trials

Goals and activities:
 • Evaluate efficacy of the intervention in a large-scale trial
 • Recruit a diverse sample representative of all children with ASD
 • Report exclusions, participation rates, dropouts, and representativeness on key characteristics
 • Demonstrate consistent effects across sites, as a step towards disseminating the intervention

Refinements from dissemination and implementation research:
 • Include outcome measures relevant to the system in which the intervention ultimately will be implemented, and assess both positive (anticipated) and negative (unintended) outcomes
 • Conduct hierarchical analyses of mediators and moderators
 • Include proxy measures of adoption, such as expressed interest of community practitioners in participating or providing feedback on the protocol
 • Consider the representativeness of the intervention agents by:
  – Describing the participation rate and characteristics of those delivering the intervention, and how these agents compare with those who will eventually implement it in the community
  – Including a variety of intervention agents with respect to background/experience, and reporting on potential differences in implementation and outcomes associated with these differences
 • Collect data on likely treatment demands, such as time, staffing, parent involvement required, and total cost of implementation
 • Continue to evaluate and plan for sustainability of the protocol, as described above

Phase 4. Community effectiveness studies using between-group designs

Goals and activities:
 • Assess whether competent clinicians in the community can implement the treatment

Refinements from dissemination and implementation research:
 • Implement two-stage community-based effectiveness trials. First, conduct a series of group-design partial effectiveness studies testing the newly adapted treatment protocol in the context of representative community care:
  – Explore, in stepwise fashion, the extent to which the protocol works with referred youths, in clinical care settings, when used by representative practitioners, and when compared to usual care
 • Then, conduct full effectiveness group-design clinical trials in community settings:
  – Include all referred clients with ASD in community settings with community practitioners
  – Report exclusions, participation rates, dropouts, and representativeness
  – Include organization-wide economic outcomes, along with child outcomes
  – Assess willingness of stakeholders from multiple settings to adopt/adapt the program
  – Continue having administrators, practitioners, and parents assess fit
  – Report on representativeness of settings, participation rates, and reasons for declining
  – Assess practitioners’ ability to implement intervention components in routine practice
 • Systematically program for institutionalization of the program elements after formal study assistance is terminated, by strengthening organizational capacity:
  – Utilize formal data collection mechanisms and comprehensive follow-up to monitor fidelity of implementation, child outcomes, and stakeholder satisfaction, and provide ongoing consultation as needed
  – Plan for self-regulation and stabilization by providing training and program materials, and incrementally facilitating independent use

Phase 5. Sustainability studies focused on the relation between the treatment program and the practice contexts in which it is employed

Goals and activities:
 • Not addressed

Refinements from dissemination and implementation research:
 • Assess the protocol’s sustainability, with treatment fidelity and youth outcomes, over time, after research support is withdrawn

Target Research Towards Issues that are Most Salient to Public Practice

Autism interventions often receive broad media coverage and are adopted before claims of effectiveness have been adequately tested. Researchers increasingly recognize the need to test the validity of these claims quickly, but funding for such tests is slow relative to the rate at which information diffuses through commercial media (Lord 2000). Rapid funding of studies of autism interventions that have attracted significant public interest should be encouraged (Lord 2000). Similarly, intervention research should target ecologically valid outcomes that match the needs of stakeholders.

Enhance Generalizability of Intervention Studies by Including Heterogeneous Samples in More Naturalistic Settings

Most treatment research samples do not reflect the demographics or clinical presentation of the general population (Lord et al. 2005). Lord et al. (2005) suggest some strategies for increasing sample diversity. This issue also has been addressed elsewhere (Swanson and Ward 1995; Yancey et al. 2006). For example, Yancey et al. (2006) reviewed 95 studies describing methods of increasing minority enrollment and retention and identified several key recommendations, including: (a) reducing restraints on eligibility; (b) improving communication with potential minority participants to establish mutually beneficial goals and counteract mistrust of scientific research (e.g., by using personal contact rather than mass mailings); (c) facilitating community involvement by hiring outreach workers from the target population and working through community-based organizations, such as churches and schools; and (d) improving retention by providing intensive follow-up, retaining the same staff over time, and offering accessible locations for intervention implementation and data collection, regular telephone reminders, and timely compensation.

Involve Stakeholders in Research from the Protocol Development Stage, and Have Them Assess the Fit of the Prototype to their Needs, Values, and Setting

Hoagwood and Johnson (2003) identify several organizational elements that must be understood to assess intervention fit, such as local policies, staffing, financing, and coordination of services. It is also critical to identify stakeholders’ perceptions of key barriers to diffusion and to address these barriers collaboratively. Participatory decision making should be used throughout the dissemination and implementation process as a way to foster a strong sense of “team.” Finally, researchers can rely on social validity research, which systematically assesses whether the goals, procedures, and outcomes of specific programs or interventions are acceptable to key stakeholders, to evaluate the fit of the program to stakeholders’ needs and values (Callahan et al. 2008; Gresham et al. 2004).

Include Formal Data Collection and Comprehensive Follow-Up to Monitor Implementation Fidelity, Child Outcomes and Stakeholder Satisfaction, and Provide Ongoing Consultation

A review of autism intervention studies found that only 18% reported fidelity data (Wheeler et al. 2006). While measuring fidelity is fundamental to any study of intervention, it is particularly important for effectiveness studies, where fidelity is expected to be highly variable. Researchers also should study outcomes that are salient to stakeholders, and incorporate measures of stakeholder satisfaction. For example, researchers again can collect and report data to key stakeholders, monitor progress in solving problems stakeholders identify, and recommend changes (Glisson and Schoenwald 2005).

Plan for Intervention Maintenance by Providing Information, Training and Tools, and Incrementally Facilitating Community Practitioners’ Independent Use of the Intervention

In the absence of planning and support, community use of evidence-based interventions is not sustained over time (Shediac-Rizkallah and Bone 1998). Researchers should plan for sustainability by examining implementation during the study and determining what supports or modifications are necessary for it to continue after the study ends. Here again, autism intervention researchers can benefit from strategies identified in other fields, including: addressing multiple pathways to sustainability (e.g., policy change), rather than focusing exclusively on training; securing organizational, in addition to individual, commitment to ensure stability; establishing program ownership among stakeholders and strengthening champion roles and leadership actions; building and maintaining local expertise; and establishing feasible evaluation strategies to monitor implementation quality and effectiveness (Israel et al. 2006; Johnson et al. 2004; Rohrbach et al. 2006).

Conclusion

The combination of increasing numbers of children identified with autism and increased advocacy for interventions that community practitioners often are ill-equipped to provide has created a crisis (Shattuck and Grosse 2007). Researchers often assume that this crisis can be resolved through the development of interventions, which will be disseminated automatically once efficacy is ascertained; however, efficacious treatments are rarely adopted or successfully implemented in community settings (Proctor et al. 2009), leading researchers to conclude that the “pipeline” model assumes an unrealistic progression from efficacy to uptake (Proctor et al. 2009; Schoenwald and Hoagwood 2001; Weisz et al. 2004). The autism intervention research community has the opportunity to learn from these experiences. Some of the collaborators on Lord et al. (2005) believe that “understanding what treatments are most beneficial, when done well, must precede the question of their effectiveness in the real world.” We respectfully disagree. A growing body of research suggests strategies for linking intervention development to the settings in which we hope interventions ultimately will be used. Autism intervention researchers must change current practice by (a) partnering with communities to facilitate the successful adoption, implementation, and maintenance of interventions that have already been developed, and (b) developing new interventions in collaboration with these communities to ensure that the interventions meet communities’ needs and capabilities, thereby increasing the likelihood of successful diffusion.

We do not suggest that researchers abandon the traditional randomized controlled trial. In fact, the RCT can be a key component of this type of research. For example, the National Institute of Mental Health, the Institute of Education Sciences, and the Agency for Healthcare Research and Quality all have renewed their emphasis on comparative effectiveness research, field trials, and community-based participatory research, in which researchers partner with community settings to test interventions using rigorous research designs (Israel et al. 2006; Proctor et al. 2009).

Autism intervention researchers have made enormous advances in our knowledge of the best ways to intervene with children with autism. However, this knowledge has yet to affect many of the children who need it most. By making the diffusion of efficacious interventions a research priority, we can increase the probability that every child with autism benefits from the best intervention models that research has to offer.

Acknowledgments

This work was supported by a grant from the National Institute of Mental Health (F31MH088172-02).

Copyright information

© Springer Science+Business Media, LLC 2010