Barton, E. E., Ledford, J. R., Lane, J. D., Decker, J., Germansky, S. E., Hemmeter, M. L., & Kaiser, A. (2016). The iterative use of single case research designs to advance the science of EI/ECSE. Topics in Early Childhood Special Education, 36(1), 4–14. https://doi.org/10.1177/0271121416630011.
Barton, E. E., Lloyd, B. P., Spriggs, A. D., & Gast, D. L. (2018). Visual analysis of graphic data. In J. R. Ledford & D. L. Gast (Eds.), Single-case research methodology: Applications in special education and behavioral sciences (pp. 179–213). New York, NY: Routledge.
Barton, E. E., Meadan, H., & Fettig, A. (2019). Comparison of visual analysis, non-overlap methods, and effect sizes in the evaluation of parent implemented functional assessment based interventions. Research in Developmental Disabilities, 85, 31–41. https://doi.org/10.1016/j.ridd.2018.11.001.
Brossart, D. F., Parker, R. I., Olson, E. A., & Mahadevan, L. (2006). The relationship between visual analysis and five statistical analyses in a simple AB single-case research design. Behavior Modification, 30, 531–563. https://doi.org/10.1177/0145445503261167.
Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6(4), 284–290. https://doi.org/10.1037/1040-3590.6.4.284.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Pearson Education.
DeProspero, A., & Cohen, S. (1979). Inconsistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis, 12(4), 573–579. https://doi.org/10.1901/jaba.1979.12-573.
Fisch, G. S. (1998). Visual inspection of data revisited: Do the eyes still have it? The Behavior Analyst, 21(1), 111–123. https://doi.org/10.1007/BF03392786.
Furlong, M. J., & Wampold, B. E. (1982). Intervention effects and relative variation as dimensions in experts’ use of visual inference. Journal of Applied Behavior Analysis, 15(3), 415–421. https://doi.org/10.1901/jaba.1982.15-415.
Hagopian, L. P., Fisher, W. W., Thompson, R. H., Owen-DeSchryver, J., Iwata, B. A., & Wacker, D. P. (1997). Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis, 30(2), 313–326. https://doi.org/10.1901/jaba.1997.30-313.
Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23–34.
Hitchcock, J. H., Horner, R. H., Kratochwill, T. R., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. M. (2014). The What Works Clearinghouse single-case design pilot standards: Who will guard the guards? Remedial and Special Education, 35(3), 145–152. https://doi.org/10.1177/0741932513518979.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S. L., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practices in special education. Exceptional Children, 71, 165–179. https://doi.org/10.1177/001440290507100203.
Horner, R. H., & Spaulding, S. A. (2010). Single-subject designs. In N. E. Salkind (Ed.), The encyclopedia of research design (Vol. 3, pp. 1386–1394). Thousand Oaks, CA: Sage Publications.
Horner, R. H., Swaminathan, H., Sugai, G., & Smolkowski, K. (2012). Considerations for the systematic analysis and use of single-case research. Education and Treatment of Children, 35(2), 269–290. https://doi.org/10.1353/etc.2012.0011.
Kahng, S. W., Chung, K. M., Gutshall, K., Pitts, S. C., Kao, J., & Girolami, K. (2010). Consistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis, 43(1), 35–45. https://doi.org/10.1901/jaba.2010.43-35.
Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). New York, NY: Oxford University Press.
Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34, 26–38. https://doi.org/10.1177/0741932512452794.
Ledford, J. R., & Gast, D. L. (2018). Single-case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge.
Lieberman, R. G., Yoder, P. J., Reichow, B., & Wolery, M. (2010). Visual analysis of multiple baseline across participants graphs when change is delayed. School Psychology Quarterly, 25(1), 28–44. https://doi.org/10.1037/a0018600.
Maggin, D. M., Briesch, A. M., & Chafouleas, S. M. (2013). An application of the What Works Clearinghouse standards for evaluating single subject research: Synthesis of the self-management literature base. Remedial and Special Education, 34(1), 44–58. https://doi.org/10.1177/0741932511435176.
Penny, J., Johnson, R. L., & Gordon, B. (2000a). The effect of rating augmentation on inter-rater reliability: An empirical study of a holistic rubric. Assessing Writing, 7(2), 143–164. https://doi.org/10.1016/S1075-2935(00)00012-X.
Penny, J., Johnson, R. L., & Gordon, B. (2000b). Using rating augmentation to expand the scale of an analytic rubric. Journal of Experimental Education, 68(3), 269–287. https://doi.org/10.1080/00220970009600096.
Scruggs, T. E., & Mastropieri, M. A. (1998). Summarizing single-subject research: Issues and applications. Behavior Modification, 22(3), 221–242. https://doi.org/10.1177/01454455980223001.
Shadish, W. R. (2014). Statistical analyses of single-case designs: The shape of things to come. Current Directions in Psychological Science, 23(2), 139–146. https://doi.org/10.1177/0963721414524773.
Shadish, W. R., & Sullivan, K. J. (2011). Characteristics of single-case designs used to assess intervention effects in 2008. Behavior Research Methods, 43(4), 971–980. https://doi.org/10.3758/s13428-011-0111-y.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420–428. https://doi.org/10.1037/0033-2909.86.2.420.
Smith, J. D. (2012). Single-case experimental designs: A systematic review of published research and current standards. Psychological Methods, 17, 510–550. https://doi.org/10.1037/a0029312.
What Works Clearinghouse. (2017). Procedures and standards handbook (Version 4.0). Retrieved from https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_standards_handbook_v4.pdf. Accessed 9 Jan 2018.
Wolfe, K., Seaman, M. A., & Drasgow, E. (2016). Interrater agreement on the visual analysis of individual tiers and functional relations in multiple baseline designs. Behavior Modification, 40(6), 852–873. https://doi.org/10.1177/0145445516644699.