Abstract
In the target article, Slocum et al. (2022) suggested that nonconcurrent multiple-baseline designs can provide internal validity comparable to that of concurrent multiple-baseline designs. We provide further support for this assertion; however, we highlight additional considerations for determining the relative strength of each design. We advocate a more nuanced approach to evaluating design strength, with less reliance on strict adherence to a specific set of rules, because the details of a design matter only insofar as they help researchers convince others that the results are valid and accurate. We bolster Slocum et al.'s argument by emphasizing the relatively low probability that within-tier comparisons would fail to identify confounds, and we extend this logic to suggest that staggering implementation of the independent variable across tiers may be an unnecessary design feature in certain cases. In addition, contrary to arguments made by previous researchers, we contend that nonconcurrent multiple-baseline designs may provide verification within baseline logic. Despite our general support for Slocum et al.'s assertions and our advocacy for a more nuanced approach to determining the strength of experimental designs, we urge experimenters to consider the perspectives of researchers from other fields who may favor concurrent multiple-baseline designs, and we suggest that using concurrent designs when feasible may foster dissemination of behavior-analytic research.
Notes
It is worth emphasizing that this logic rests on affirming the consequent, which is a logical fallacy. Thus, even if NonconMBLs cannot fulfill this requirement, the shortcoming may not be problematic: from a strictly logical standpoint, fulfilling the requirement would provide only a negligible increase in confidence in the experimental effect.
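To make the form of this fallacy explicit (a minimal illustration in standard propositional notation; the mapping to the multiple-baseline context is ours, not Slocum et al.'s): modus ponens validly infers Q from P → Q together with P, whereas affirming the consequent invalidly infers P from P → Q together with Q. If P stands for "the independent variable controls responding" and Q for "the predicted pattern of behavior change is observed," then observing Q raises confidence in P but does not logically entail it.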
References
Carr, J. E. (2005). Recommendations for reporting multiple-baseline designs across participants. Behavioral Interventions, 20(3), 219–224. https://doi.org/10.1002/bin.191
Christ, T. J. (2007). Experimental control and threats to internal validity of concurrent and nonconcurrent multiple baseline designs. Psychology in the Schools, 44(5), 451–459. https://doi.org/10.1002/pits.20237
Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson Education.
Gast, D. L., Lloyd, B. P., & Ledford, J. R. (2018). Multiple baseline and multiple probe designs. In J. R. Ledford & D. L. Gast (Eds.), Single case research methodology: Applications in special education and behavioral sciences (pp. 288–335). Routledge/Taylor & Francis Group. https://doi.org/10.4324/9781315150666
Ghaemmaghami, M., Hanley, G. P., & Jessel, J. (2021). Functional communication training: From efficacy to effectiveness. Journal of Applied Behavior Analysis, 54(1), 122–143. https://doi.org/10.1002/jaba.762
Harvey, M. T., May, M. E., & Kennedy, C. H. (2004). Nonconcurrent multiple baseline designs and the evaluation of educational systems. Journal of Behavioral Education, 13(4), 267–276. https://doi.org/10.1023/B:JOBE.0000044735.51022.5d
Hayes, S. C. (1981). Single case experimental design and empirical clinical practice. Journal of Consulting and Clinical Psychology, 49(2), 193–211. https://doi.org/10.1037/0022-006X.49.2.193
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single subject research to identify evidence-based practice in special education. Exceptional Children, 71(2), 165–179. https://doi.org/10.1177/001440290507100203
Johnston, J. M., Pennypacker, H. S., & Green, G. (2020). Strategies and tactics of behavioral research and practice (4th ed.). Routledge/Taylor & Francis Group.
Kazdin, A. E. (2021). Single-case research designs: Methods for clinical and applied settings (3rd ed.). Oxford University Press.
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34(1), 26–38. https://doi.org/10.1177/0741932512452794
Lehardy, R. K., Luczynski, K. C., Hood, S. A., & McKeown, C. A. (2021). Remote teaching of publication-quality, single-case graphs in Microsoft Excel. Journal of Applied Behavior Analysis, 54(3), 1265–1280. https://doi.org/10.1002/jaba.805
Lloveras, L. A., Tate, S. A., Vollmer, T. R., King, M., Jones, H., & Peters, K. P. (2022). Training behavior analysts to conduct functional analyses using a remote group behavioral skills training package. Journal of Applied Behavior Analysis, 55(1), 290–304. https://doi.org/10.1002/jaba.893
McDevitt, M. A., Pisklak, J. M., Dunn, R. M., & Spetch, M. L. (2022). Forced-exposure trials increase suboptimal choice. Psychonomic Bulletin & Review. Advance online publication. https://doi.org/10.3758/s13423-022-02092-2
Slocum, T. A., Pinkelman, S. E., Joslyn, P. R., & Nichols, B. (2022). Threats to internal validity in multiple-baseline design variations. Perspectives on Behavior Science. https://doi.org/10.1007/s40614-022-00326-1
What Works Clearinghouse. (2020). What Works Clearinghouse standards handbook, version 4.1. U.S. Department of Education. https://ies.ed.gov/ncee/wwc/handbooks
Ethics declarations
Conflict of Interest
The authors have no conflicts of interest to declare.
Cite this article
Smith, S.W., Kronfli, F.R. & Vollmer, T.R. Commentary on Slocum et al. (2022): Additional Considerations for Evaluating Experimental Control. Perspect Behav Sci 45, 667–679 (2022). https://doi.org/10.1007/s40614-022-00346-x