
Translational Behavioral Medicine, Volume 4, Issue 3, pp 234–237

Methodologies for optimizing behavioral interventions: introduction to special section

  • William T. Riley
  • Daniel E. Rivera
Editorial

An extensive literature has established the effectiveness of various behavioral interventions for a range of conditions [e.g., 1, 2, 3], but this literature often fails to isolate the intervention components that are more or less effective. Therefore, despite numerous controlled trials of various interventions for a given problem, the field has little guidance on how to improve upon previously studied interventions, adapt them to specific populations, contexts, or delivery mechanisms, or streamline them to facilitate use in real-world settings with constrained resources. Behavioral intervention research cannot become a cumulative science that builds upon prior research until intervention studies can answer not only whether the intervention changed behavior, but also how it changed behavior and which intervention components were most effective in changing behavior.

To illustrate the current state of behavioral intervention research, Fig. 1 shows a common meta-analysis forest plot, in this case, the effects of group interventions versus self-help for smoking cessation [4]. Like many systematic reviews of behavioral interventions, this one finds considerable heterogeneity of treatment effects across studies. This heterogeneity can be the result of a variety of factors including differences between studies in sample characteristics, delivery mechanisms and context, and/or differences in intervention components, including the nature, dose, and sequence of these components. In this illustrative meta-analysis, different intervention components were used across the included studies, and treatments were delivered in different dosages by different providers in different contexts. A researcher wanting to develop an improved intervention for group-based smoking cessation would receive little guidance on how to do so from the studies in this meta-analysis.
Fig. 1

Forest plot of group versus self-help programs for smoking cessation (reprinted from Stead and Lancaster, Cochrane Database Syst Rev. 2005; CD001007)

Intervention development largely remains a black box that receives little attention within the typical constraints of publication page limits. Intervention developers typically review the prior intervention literature to glean components for the intervention being developed. The selection of components, however, is predominantly a process of educated guesses based on the components frequently used in prior effective interventions and the experience of the intervention developer with these intervention components in the sample of interest. Theory also contributes to intervention development. Some research suggests that interventions based on theory are more effective than those not based on theory [5], but others have failed to find this relationship [6], and the association could be the result of a thoughtful, conceptual approach to intervention development rather than the validity of a particular theory. Moreover, interventions based on the same theory can differ substantially in their intervention components, whereas interventions based on different theories can appear quite similar. Funding for behavioral intervention research also explicitly rewards innovation. This incentive structure may influence intervention developers to create new, previously untested intervention components and incorporate them into an admixture of existing intervention components obtained from theory-based conceptual models and prior intervention studies.

Once developed, these interventions are seldom refined and rigorously tested prior to the randomized controlled trial designed to determine the efficacy of the final intervention. Qualitative studies of individuals from the target population are increasingly being used to assess the acceptability of intervention components, dosages, and delivery mechanisms [7]. These qualitative studies are critical for assessing acceptability and initial engagement in the intervention. Interventions that are unacceptable or unduly burdensome and fail to achieve good engagement and adherence are unlikely to be found effective; however, one cannot infer that an intervention will be effective just because it is judged highly acceptable by a small group of target individuals. Indeed, it is quite possible that intervention components deemed acceptable may have no effect on behavior, or vice versa. For example, sleep restriction is one of the least acceptable components of behavioral treatments for insomnia, but it is also one of the most effective components [8].

Once initially developed, the prototype intervention is often subsequently tested in a pilot trial, typically an open trial with a small number of participants. In addition to testing the feasibility of the study procedures, the pilot trial is also used to test the feasibility of delivering the intervention and to determine if the intervention effect size is sufficient to justify a larger randomized controlled trial (RCT) of the intervention. In these small samples, however, the effect size’s confidence interval is quite large, which limits the ability of the pilot trial to estimate effect sizes for the larger study [9]. Moreover, because the full intervention package is commonly tested in these pilot trials, these trials provide minimal insight into which intervention components produced the effect observed. Although interventions are often modified following the pilot trial, these modifications are typically based on the participants’ post-study qualitative responses regarding intervention component feasibility, usability, and perceived effectiveness. As a result, large-scale controlled trials of a behavioral intervention usually evaluate a multi-component intervention for which it remains unclear which intervention components are essential, useful, or unnecessary, or which putative mediators of behavior change are affected by these various intervention components.

This special section of Translational Behavioral Medicine attempts to provide alternative approaches to the current intervention development paradigm that can better isolate the effects of intervention components and adjust the dosage and sequencing of intervention components to optimize the intervention. Linda Collins and colleagues have led the development of methodologies to optimize behavioral interventions, and the first paper in this special section [10] describes the Multiphase Optimization Strategy (MOST). Guidance is offered about decisions that need to be made to conduct a MOST trial, including criteria for optimization, the components to be tested, and which cells of a full factorial design can remain untested in a fractional factorial design. These principles of MOST are illustrated through a hypothetical example based on smoking cessation, where the intervention components include nicotine patch, nicotine gum, two forms of in-person counseling, and telephone support. The design of an informative fractional factorial experiment, the corresponding analysis of variance, and the decision-making process regarding which components should form the optimized intervention based on MOST findings are demonstrated. Collins and colleagues also provide several artificial data sets online to practice the application of the decision-making framework.
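The fractional factorial logic underlying MOST can be sketched in a few lines of Python. This is an illustration under assumed effect sizes, not the authors' artificial data sets: five hypothetical binary components are crossed in a 2^(5-1) half fraction (defining relation I = ABCDE), and main effects are recovered by ordinary least squares, the regression equivalent of the factorial ANOVA's main-effect tests.

```python
import itertools
import numpy as np

def half_fraction(k=5):
    """Build a 2^(k-1) fractional factorial design in -1/+1 coding:
    the last factor is set to the product of the first k-1 factors
    (I = ABCDE for k=5), halving the number of experimental cells
    while keeping all main effects estimable."""
    runs = []
    for levels in itertools.product([-1, 1], repeat=k - 1):
        runs.append(list(levels) + [int(np.prod(levels))])
    return np.array(runs)

rng = np.random.default_rng(0)
X = half_fraction(5)                                # 16 cells instead of 32
true_effects = np.array([0.5, 0.3, 0.0, 0.2, 0.1])  # assumed, for illustration
n_per_cell = 20
rows = np.repeat(X, n_per_cell, axis=0)             # participants within cells
y = rows @ true_effects + rng.normal(0, 1, len(rows))

# main-effects regression on the effect-coded design matrix
design = np.hstack([np.ones((len(rows), 1)), rows])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("estimated main effects:", np.round(coef[1:], 2))
```

Because the half fraction keeps the component columns mutually orthogonal, each estimated main effect is uncontaminated by the other main effects, which is what lets a screening experiment rank components for retention or removal.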

Wyrick and colleagues [11] describe the application of MOST to optimize an online program to prevent substance abuse by college athletes. While one MOST trial is often sufficient, this paper shows how the findings from one MOST trial could lead to additional intervention questions that may require additional MOST trials to ensure an optimized intervention. This iterative approach to intervention testing is consistent with the engineering roots of MOST. This paper also illustrates how MOST can be conducted as a cluster randomized design, in this case with school as the unit of randomization. Sample size concerns have been raised regarding cluster randomized trials as well as MOST designs, yet Wyrick and colleagues estimated that they would need to randomize 56 schools with 100 students per school to achieve 90% power to detect a 0.3 effect size, a feasible N for a web-based intervention. Finally, for those concerned about the length of time to conduct a MOST study, Wyrick et al. note that the traditional intervention testing paradigm of conducting a treatment package RCT, followed by post-hoc analysis, treatment package revision, conducting a second treatment package RCT, and repeating the cycle, is a much longer and less efficient process than MOST.
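The design-effect arithmetic behind cluster-randomized sample sizes can be sketched as follows. The intraclass correlation and alpha below are assumptions for illustration only; Wyrick and colleagues' actual calculation depends on their factorial design and their own assumptions, so this sketch is not expected to reproduce the 56-school figure.

```python
from math import ceil
from statistics import NormalDist

def clusters_needed(d, icc, m, alpha=0.05, power=0.90):
    """Clusters per arm for a two-arm comparison of means, using the
    normal approximation and the design effect 1 + (m - 1) * icc,
    where m is the number of individuals per cluster."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n_individual = 2 * (z_a + z_b) ** 2 / d ** 2   # per arm, simple RCT
    deff = 1 + (m - 1) * icc                       # inflation for clustering
    return ceil(n_individual * deff / m)

# illustrative only: the ICC of 0.02 is assumed, not taken from Wyrick et al.
print(clusters_needed(d=0.3, icc=0.02, m=100))
```

Even a small intraclass correlation inflates the required sample substantially at 100 students per school, which is why cluster trials plan in units of schools rather than students.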

Progressing from the selection of components via MOST, Almirall and colleagues [12] focus on the sequencing of intervention components using the Sequential Multiple Assignment Randomized Trial (SMART) for an adaptive weight loss intervention. SMART trials test the optimal sequencing of intervention components and the decision rules for when an intervention dosage should be increased (or decreased), when an intervention should be augmented with another intervention, and/or when an intervention should be terminated due to lack of efficacy and another intervention attempted. In contrast to the standard treatment package RCT, the SMART trial closely parallels the treatment decisions of clinical practice. If clinicians were to practice as per the standard RCT, they would provide the intervention that was found on average to be most effective, and then at the end of the treatment period, if the treatment did not work, attempt no further treatment. Empirically guided clinicians, however, first attempt the treatment with the highest likelihood of success, but if not successful, try other treatment approaches that have at least some support in the literature. These decisions about whether and when to try other approaches, and which ones to try, are the essence of adaptive interventions. SMART uses sequential re-randomizations of study participants to isolate the effects of various treatment options, tailoring variables, and decision rules.
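The sequential re-randomization at the heart of a SMART can be sketched as assignment logic. The arm names and the response rule below are hypothetical placeholders, not Almirall and colleagues' actual weight-loss design: responders to a first-stage component stay the course, while non-responders are re-randomized between augmenting and switching.

```python
import random

def smart_assign(participants, responded, seed=0):
    """Two-stage SMART sketch: everyone is randomized to a first-stage
    component; at the decision point, responders continue while
    non-responders are re-randomized to an augment or switch option."""
    rng = random.Random(seed)
    history = {}
    for pid in participants:
        stage1 = rng.choice(["app_only", "app_plus_coaching"])
        if responded(pid, stage1):
            stage2 = "continue_" + stage1            # responders: maintain
        else:                                        # non-responders: re-randomize
            stage2 = rng.choice(["augment_meal_replacement",
                                 "switch_to_group_sessions"])
        history[pid] = (stage1, stage2)
    return history

# toy response rule for illustration: even-numbered ids "respond"
demo = smart_assign(range(6), lambda pid, arm: pid % 2 == 0)
for pid, (s1, s2) in demo.items():
    print(pid, s1, "->", s2)
```

Because the second randomization is restricted to non-responders, the design yields unbiased comparisons of the embedded adaptive strategies, e.g., "start with app_only, augment if non-responsive" versus "start with app_only, switch if non-responsive."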

MOST and SMART designs were derived in part from engineering approaches, and the paper by Deshpande and colleagues [13] draws even more explicitly from engineering approaches with the application of control systems engineering for optimizing behavioral interventions. Control systems are pervasive in our daily lives. Control systems run the thermostats that maintain a comfortable temperature in our homes regardless of the weather, and the cruise controls on our automobiles that maintain a steady speed regardless of the slope of the road. Applied to behavioral interventions, these same regulatory principles can be used to achieve and maintain a steady state of a desired behavior by adjusting the magnitude of system inputs (i.e., dosages of intervention components) and adjusting for the effects of extraneous variables (i.e., noise, disturbances, uncertainty). This paper illustrates the application of control systems to intervention optimization for low-dose naltrexone treatment for fibromyalgia. In the first part of the paper, a dynamical systems model is obtained idiographically from intensively collected measures during the course of a clinical trial. The estimated models are then shown to serve as the basis for an optimization procedure called “Model Predictive Control” that adjusts the dosage of intervention components at daily intervals based on observed measures of participant response and the influences of external factors. Although the concepts described in this paper are perhaps the most foreign to behavioral scientists among all of the papers in this special section, these concepts are worth taking the time to fully grasp. As our ability to continuously monitor behavior and its mechanisms improves via sensor technologies and smartphones, it becomes possible to adjust intervention type and intensity frequently, and these computational models will be critical to making effective intervention adjustments over time.
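The feedback principle can be made concrete with a far simpler controller than the paper's Model Predictive Control. The sketch below pairs a proportional-integral rule with an assumed first-order response model (decay 0.8, dose gain 0.5; all numbers are illustrative, not from the fibromyalgia trial) and drives a simulated outcome toward a target level by adjusting a daily dose.

```python
def simulate_pi_control(target, days=60, kp=0.6, ki=0.1):
    """Daily dose adjustment via a proportional-integral rule against a
    first-order response model; a deliberately simple stand-in for the
    identified dynamical model and Model Predictive Control in [13]."""
    outcome, integral, trace = 0.0, 0.0, []
    for _ in range(days):
        error = target - outcome                     # distance from the goal
        integral += error                            # accumulated shortfall
        dose = max(0.0, kp * error + ki * integral)  # dose cannot go negative
        outcome = 0.8 * outcome + 0.5 * dose         # assumed plant dynamics
        trace.append(outcome)
    return trace

trace = simulate_pi_control(target=10.0)
print(round(trace[-1], 2))   # settles near the target level
```

The proportional term reacts to today's gap while the integral term removes steady-state error, the same division of labor a thermostat uses; MPC extends this idea by optimizing the dose sequence over a predicted horizon rather than reacting step by step.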

The rapid advances in behavioral sensors and ecological momentary assessment technologies have also re-invigorated the single-case design. Single-case designs were frequently used when operant behavioral interventions were in their infancy, but the frequent and extensive behavioral observations needed to establish stable baselines and strengthen causal inference were burdensome to obtain. Mobile and wireless technologies now allow us to automate behavioral observations and gather the intensive longitudinal data necessary for rigorous single-case designs. Dallery and colleagues [14] describe the advances in methodology of single-case studies and the application of single-case studies to intervention optimization, specifically the use of parametric and component analyses which can be used to test different intervention dosages or components, respectively. In contrast to a typical pilot trial in which the intervention remains unchanged during the trial, a series of single-case trials allows the intervention developer to test the intervention package and specific components of that package in a few participants, revise the intervention based on these findings, and continue to iterate between a few single-case studies and intervention refinements to optimize the intervention. For the same number of participants, many more intervention variants can be tested via a series of single-case studies than the traditional pilot trial.
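One concrete single-case effect metric, the percentage of non-overlapping data (PND), is simple enough to sketch. The step-count numbers below are hypothetical, and PND is offered only as one common nonoverlap index, not as the specific analysis Dallery and colleagues recommend.

```python
def pnd(baseline, intervention, increase_is_improvement=True):
    """Percentage of non-overlapping data for a single-case A-B design:
    the share of intervention-phase points that fall beyond the most
    extreme baseline point, in the direction of improvement."""
    if increase_is_improvement:
        threshold = max(baseline)
        beyond = [x for x in intervention if x > threshold]
    else:
        threshold = min(baseline)
        beyond = [x for x in intervention if x < threshold]
    return 100.0 * len(beyond) / len(intervention)

# hypothetical daily step counts (thousands), baseline then intervention phase
baseline = [4.1, 3.8, 4.5, 4.0, 4.2]
intervention = [4.4, 5.1, 5.6, 6.0, 5.8, 6.2]
print(pnd(baseline, intervention))
```

Automated sensing makes the long, stable data streams such metrics require cheap to collect, which is precisely what makes iterating between single-case tests and intervention refinements practical.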

These various intervention optimization designs are being increasingly accepted as viable alternatives to the traditional intervention development model. As Wyrick and colleagues note, their project and others have been funded by the National Institutes of Health (NIH), and there are NIH Funding Opportunity Announcements that encourage the use of these approaches, specifically the “Innovative Research Methods: Prevention and Management of Symptoms in Chronic Illness (R01)” program announcement (http://grants.nih.gov/grants/guide/pa-files/PA-13-165.html). A strategic priority of the National Cancer Institute’s Science of Research and Technology Branch (SRTB) is the development and adaptation of new methodologies for the behavioral sciences, including these intervention optimization methods (http://cancercontrol.cancer.gov/brp/srtb/about.html). Broadening the base of expertise in these approaches among behavioral scientists remains important, not only for facilitating the adoption of these approaches to optimize behavioral interventions, but also to provide adequate expertise for grant application and journal article reviews.

The optimization approaches described in this special section of Translational Behavioral Medicine offer a diverse and novel set of methodologies that are relevant for optimizing interventions at various stages of intervention development, and for disentangling the effects of various intervention components in multi-component behavioral interventions. With these data, the field can build a cumulative science of behavior change that retains the more effective intervention components, discards the less effective components, and adds new components in a testable, empirical manner. While there are obvious practice and public policy implications for developing optimized interventions that provide the largest effects in the most efficient manner, these approaches also have important implications for advancing the theory and science of behavior change. If intervention components are cleaved based on the mechanism of change targeted, the results from these optimization studies also provide the field with critical insights into the mechanisms of behavior change. In contrast to repeating cycles of RCTs that test various admixtures of intervention components, these optimization approaches provide an efficient means of developing behavioral interventions that produce the largest effects for the most people at the lowest cost. These approaches inform innovation in behavioral medicine in significant ways, from delivering the most important and effective components in the proper amount at the right time to ultimately aiding in the search to better understand the mechanisms of behavior change.

Notes

Conflict of interest

The authors have no conflict of interest to declare. All procedures, including the informed consent processes, were conducted in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000.

References

  1. Daley D, van der Oord S, Ferrin M, et al. Behavioral interventions in attention-deficit/hyperactivity disorder: a meta-analysis of randomized controlled trials across multiple outcome domains. J Am Acad Child Adolesc Psychiatry. 2014; 53(8): 835-847.
  2. Norris SL, Zhang X, Avenell A, et al. Long-term non-pharmacologic weight loss interventions for adults with type 2 diabetes. Cochrane Database Syst Rev. 2005; 18: CD004095.
  3. Smedslund G, Berg RC, Hammerstrøm KT, et al. Motivational interviewing for substance abuse. Cochrane Database Syst Rev. 2011; 11(5): CD008063.
  4. Stead LF, Lancaster T. Group behaviour therapy programmes for smoking cessation. Cochrane Database Syst Rev. 2005; 18: CD001007.
  5. Glanz K, Bishop DB. The role of behavioral science theory in development and implementation of public health interventions. Annu Rev Public Health. 2010; 31: 399-418.
  6. Prestwich A, Sniehotta FF, Whittington C, Dombrowski SU, Rogers L, Michie S. Does theory influence the effectiveness of health behavior interventions? Meta-analysis. Health Psychol. 2014; 33: 465-474.
  7. Akard TF, Gilmer MJ, Friedman DL, Given B, Hendricks-Ferguson VL, Hinds PS. From qualitative work to intervention development in pediatric oncology palliative care research. J Pediatr Oncol Nurs. 2013; 30: 153-160.
  8. Miller CB, Espie CA, Epstein DR, Friedman L, Morin CM, Pigeon WR, Spielman AJ, Kyle SD. The evidence base of sleep restriction therapy for treating insomnia disorder. Sleep Med Rev. 2014; S1087-0792 [Epub ahead of print].
  9. Leon AC, Davis LL, Kraemer HC. The role and interpretation of pilot studies in clinical research. J Psychiatr Res. 2011; 45: 626-629.
  10. Collins LM, Trail JB, Kugler KC, Baker TB, Piper ME, Mermelstein RJ. Evaluating individual intervention components: making decisions based on the results of a factorial screening experiment. Trans Behav Med Pract Policy Res. 2013. doi: 10.1007/s13142-013-0239-7.
  11. Wyrick DL, Rulison KL, Fearnow-Kenney M, Milroy JJ, Collins LM. Moving beyond the treatment package approach to developing behavioral interventions: addressing questions that arose during an application of the Multiphase Optimization Strategy (MOST). Trans Behav Med Pract Policy Res. 2013. doi: 10.1007/s13142-013-0247-7.
  12. Almirall D, Nahum-Shani I, Sherwood NE, Murphy SA. Introduction to SMART designs for the development of adaptive interventions: with application to weight loss research. Trans Behav Med Pract Policy Res. 2014. doi: 10.1007/s13142-014-0265-0.
  13. Deshpande S, Rivera DE, Younger JW, Nandola NN. A control systems engineering approach for adaptive behavioral interventions: illustration with a fibromyalgia intervention. Trans Behav Med Pract Policy Res. 2014. doi: 10.1007/s13142-014-0282-z.
  14. Dallery J, Raiff BR. Optimizing behavioral health interventions with single-case designs: from development to dissemination. Trans Behav Med Pract Policy Res. 2014. doi: 10.1007/s13142-014-0258-z.

Copyright information

© Springer Science+Business Media New York (outside the US) 2014

Authors and Affiliations

  1. National Institutes of Health, Rockville, USA
  2. Arizona State University, Phoenix, USA
