
Integrative Psychological and Behavioral Science, Volume 49, Issue 3, pp. 323–349

Single-Case Research Methods: History and Suitability for a Psychological Science in Need of Alternatives

  • Camilo Hurtado-Parrado
  • Wilson López-López
Regular Article

Abstract

This paper presents a historical and conceptual analysis of a group of research strategies known as single-case methods (SCMs). First, we present an overview of SCMs, their history, and their major proponents. We argue that the philosophical roots of SCMs can be found in the ideas of authors who recognized the importance of understanding both the generality and the individuality of psychological functioning. Second, we discuss the influence that the natural sciences' attitude toward measurement and experimentation has had on SCMs. Although this influence can be traced back to the early days of experimental psychology, when incipient forms of SCMs appeared, SCMs reached full development with the subsequent advent of Behavior Analysis (BA). Third, we show that despite the success of SCMs in BA and other (mainly applied) disciplines, these designs are currently not prominent in psychology. More importantly, they have been neglected as a possible alternative to one of the mainstream approaches in psychology, Null Hypothesis Significance Testing (NHST), despite serious controversies about the limitations of this prevailing method. Our thesis throughout is that SCMs should be considered an alternative to NHST because many of the recommendations for improving the use of significance testing (Wilkinson & the TFSI, 1999) are core characteristics of SCMs. The paper closes with a discussion of possible reasons why SCMs have been neglected.

Keywords

History of psychological methods · Single-case research methods · Single-subject designs · Behavior analysis · Conceptual analysis · Behaviorism · Null Hypothesis Significance Testing · NHST

Notes

Acknowledgments

The authors thank Aaro Toomela, João Antonio Monteiro, and the anonymous reviewers for their valuable suggestions and thoughtful comments.

References

  1. Abelson, R. (1997). A retrospective on the significance test ban of 1999 (if there were no significance tests, they would be invented). In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 117–141). Mahwah, NJ: Erlbaum.
  2. Allison, D. B., & Gorman, B. S. (1993). Calculating effect sizes for meta-analysis: The case of the single case. Behaviour Research and Therapy, 31(6), 621–631.
  3. American Psychological Association Board of Scientific Affairs, Task Force on Statistical Inference. (2000). Narrow and shallow. American Psychologist, 55, 965–966. doi: 10.1037/0003-066X.55.8.965.
  4. Ator, N. A. (1999). Statistical inference in behavior analysis: Environmental determinants? The Behavior Analyst, 22(2), 93–97.
  5. Barlow, D. H., & Nock, M. K. (2009). Why can’t we be more idiographic in our research? Perspectives on Psychological Science, 4(1), 19–21. doi: 10.1111/j.1745-6924.2009.01088.x.
  6. Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single case experimental designs: Strategies for studying behavioral change (3rd ed.). Boston: Pearson.
  7. Baron, A. (1999). Statistical inference in behavior analysis: Friend or foe? The Behavior Analyst, 22(2), 83–85.
  8. Behi, R., & Nolan, M. (1996). Single-case experimental designs 1: Using idiographic research. British Journal of Nursing, 5(21), 1334–1337.
  9. Bernard, C. (1927). An introduction to the study of experimental medicine. New York: Macmillan.
  10. Blampied, N. M. (1999). A legacy neglected: Restating the case for single-case research in cognitive-behaviour therapy. Behaviour Change, 16(2), 89–104. doi: 10.1375/bech.16.2.89.
  11. Blampied, N. M. (2000). Single-case research designs: A neglected alternative. American Psychologist, 55(8), 960. doi: 10.1037/0003-066X.55.8.960.
  12. Blampied, N. M. (2001). The third way: Single-case research, training, and practice in clinical psychology. Australian Psychologist, 36(2), 157–163. doi: 10.1080/00050060108259648.
  13. Blampied, N. M. (2013). Single-case research designs and the scientist-practitioner ideal in applied psychology. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. Methods and principles. Washington, DC: American Psychological Association. doi: 10.1037/13937-008.
  14. Bourret, J., & Pietras, C. (2013). Visual analysis in single-case research. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. Methods and principles (pp. 199–217). Washington, DC: American Psychological Association.
  15. Branch, M. N. (1999). Statistical inference in behavior analysis: Some things significance testing does and does not do. The Behavior Analyst, 22(2), 87–92.
  16. Branch, M. N., & Pennypacker, H. S. (2013). Generality and generalization of research findings. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. Methods and principles (pp. 151–175). Washington, DC: American Psychological Association.
  17. Breakwell, G. M., Hammond, S., Fife-Schaw, C., & Smith, J. A. (2006). Research methods in psychology (3rd ed.). London: Sage.
  18. Borckardt, J., Nash, M., Balliet, W., Galloway, S., & Madan, A. (2013). Time-series statistical analysis of single-case data. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. Methods and principles (pp. 251–266). Washington, DC: American Psychological Association.
  19. Borckardt, J. J., Nash, M. R., Murphy, M. D., Moore, M., Shaw, D., & O’Neil, P. (2008). Clinical practice as natural laboratory for psychotherapy research: A guide to case-based time-series analysis. American Psychologist, 63(2), 77–95. doi: 10.1037/0003-066X.63.2.77.
  20. Brossart, D. F., Parker, R. I., & Castillo, L. G. (2011). Robust regression for single-case data analysis: How can it help? Behavior Research Methods, 43(3), 710–719. doi: 10.3758/s13428-011-0079-7.
  21. Campbell, J. M. (2004). Statistical comparison of four effect sizes for single-subject designs. Behavior Modification, 28(2), 234–246. doi: 10.1177/0145445503259264.
  22. Catania, A. C. (2008). The journal of the experimental analysis of behavior at zero, fifty, and one hundred. Journal of the Experimental Analysis of Behavior, 89(1), 111–118. doi: 10.1901/jeab.2008.89-111.
  23. Catania, A. C. (2013). A natural science of behavior. Review of General Psychology, 17(2), 133–139. doi: 10.1037/a0033026.
  24. Catania, A. C., & Laties, V. G. (1999). Pavlov and Skinner: Two lives in science (an introduction to B. F. Skinner’s “Some responses to the stimulus ‘Pavlov’”). Journal of the Experimental Analysis of Behavior, 72(3), 455–461. doi: 10.1901/jeab.1999.72-455.
  25. Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66(1), 7–18. doi: 10.1037//0022-006X.66.1.7.
  26. Chelune, G. J., Naugle, R. I., & Lüders, H. (1993). Individual change after epilepsy surgery: Practice effects and base-rate information. Neuropsychology, 7(1), 41–52.
  27. Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003. doi: 10.1037//0003-066X.49.12.997.
  28. Cohen, L., Feinstein, A., Masuda, A., & Vowles, K. (2014). Single-case research design in pediatric psychology: Considerations regarding data analysis. Journal of Pediatric Psychology, 39(2), 124–137. doi: 10.1093/jpepsy/jst065.
  29. Crosbie, J. (1999). Statistical inference in behavior analysis: Useful friend. The Behavior Analyst, 22(2), 105–108.
  30. Cumming, G., & Fidler, F. (2009). Confidence intervals: Better answers to better questions. Journal of Psychology, 217(1), 15–26. doi: 10.1027/0044-3409.217.1.15.
  31. Davison, M. (1999). Statistical inference in behavior analysis: Having my cake and eating it? The Behavior Analyst, 22(2), 99–103.
  32. De Mey, H. R. A. (2003). Two psychologies: Cognitive versus contingency-oriented. Theory & Psychology, 13(5), 695–709.
  33. Dermer, M. L., & Hoch, T. A. (1999). Improving descriptions of single-subject experiments in research texts written for undergraduates. The Psychological Record, 49, 49–66.
  34. Duryea, E., Graner, S. P., & Becker, J. (2009). Methodological issues related to the use of p < 0.05 in health behavior research. American Journal of Health Education, 40(2), 120–125.
  35. Ferguson, C. J. (2009). An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice, 40(5), 532–538. doi: 10.1037/a0015808.
  36. Fisher, R. A. (1925/1950). Statistical methods for research workers (11th ed.). Edinburgh: Oliver & Boyd.
  37. Frerichs, R. J., & Tuokko, H. (2005). A comparison of methods for measuring cognitive change in older adults. Archives of Clinical Neuropsychology, 20(3), 321–333. doi: 10.1016/j.acn.2004.08.002.
  38. Fritz, A., Scherndl, T., & Kühberger, A. (2013). A comprehensive review of reporting practices in psychological journals: Are effect sizes really enough? Theory & Psychology, 23(1), 98–122. doi: 10.1177/0959354312436870.
  39. Goddard, M. J. (2012). On certain similarities between mainstream psychology and the writings of B. F. Skinner. The Psychological Record, 62, 563–576.
  40. Gravetter, F. J., & Forzano, L. B. (2009). Research methods for the behavioral sciences (3rd ed.). Belmont, CA: Wadsworth, Cengage Learning.
  41. Graziano, A., Raulin, M., & Cramer, K. (2009). Research methods: A process of inquiry. New Jersey: Pearson.
  42. Greenland, S., & Poole, C. (2013). Living with p values: Resurrecting a Bayesian perspective on frequentist statistics. Epidemiology, 24(1), 62–68. doi: 10.1097/EDE.0b013e3182785741.
  43. Hammond, G. (1996). The objections to null hypothesis testing as a means of analyzing psychological data. Australian Journal of Psychology, 48, 104–106. doi: 10.1080/00049539608259513.
  44. Harris, R. J. (1997). Reforming significance testing via three-valued logic. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 145–174). Mahwah, NJ: Erlbaum.
  45. Harlow, L. (1997). Significance testing introduction and overview. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 1–21). Mahwah, NJ: Erlbaum.
  46. Hayes, S. C., & Brownstein, A. J. (1986). Mentalism, behavior-behavior relations, and a behavior-analytic view of the purposes of science. The Behavior Analyst, 9(2), 175–190.
  47. Hayes, S. C., Blackledge, J. T., & Barnes-Holmes, D. (2001). Language and cognition: Constructing an alternative approach within the behavioral tradition. In S. C. Hayes, D. Barnes-Holmes, & B. Roche (Eds.), Relational frame theory: A post-Skinnerian account of human language and cognition (pp. 3–20). New York: Kluwer Academic Publishers.
  48. Heron, W. T., & Skinner, B. F. (1939). An apparatus for the study of animal behavior. Psychological Record, 3, 166–176.
  49. Hineline, P. N., & Laties, V. G. (1987). Anniversaries in behavior analysis. Journal of the Experimental Analysis of Behavior, 48, 439–514. doi: 10.1901/jeab.1987.48-439.
  50. Hunter, J. E. (1997). Needed: A ban on the significance test. Psychological Science, 8(1), 3–7.
  51. Hurtado-Parrado, C. (2006). El conductismo y algunas implicaciones de lo que significa ser conductista hoy [Behaviorism and some implications of what it means to be a behaviorist today]. Diversitas, 2(2), 321–328.
  52. Hurtado-Parrado, H. C. (2009). A non-cognitive alternative for the study of cognition: An interbehavioral proposal. In T. Teo, P. Stenner, & A. Rutherford (Eds.), Varieties of theoretical psychology: International philosophical and practical concerns (ISTP 2007) (pp. 340–348). Toronto: Captus University Publications.
  53. Iversen, I. (2013). Single-case research methods: An overview. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. Methods and principles (pp. 3–32). Washington, DC: American Psychological Association.
  54. Johnston, J. M., & Pennypacker, H. S. (1993a). Strategies and tactics of behavioral research (2nd ed.). Hillsdale, NJ: Erlbaum.
  55. Johnston, J. M., & Pennypacker, H. S. (1993b). Readings for the strategies and tactics of behavioral research. Hillsdale, NJ: Erlbaum.
  56. Johnston, J. M., & Pennypacker, H. S. (2009). Strategies and tactics of behavioral research (3rd ed.). New York: Routledge.
  57. Kratochwill, T. R., & Levin, J. R. (Eds.). (1992). Single-case research design and analysis: New directions for psychology and education. Hillsdale, NJ: Lawrence Erlbaum Associates.
  58. Lambdin, C. (2012). Significance tests as sorcery: Science is empirical; significance tests are not. Theory & Psychology, 22(1), 67–90. doi: 10.1177/0959354311429854.
  59. Lattal, K. (2013). The five pillars of the experimental analysis of behavior. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. Methods and principles (pp. 33–64). Washington, DC: American Psychological Association.
  60. Machado, A., Lourenço, O., & Silva, F. J. (2000). Facts, concepts, and theories: The shape of psychology’s epistemic triangle. Behavior and Philosophy, 40, 1–40.
  61. Machado, A., & Silva, F. J. (2007). Toward a richer view of the scientific method: The role of conceptual analysis. American Psychologist, 62(7), 671–681. doi: 10.1037/0003-066X.62.7.671.
  62. Maggin, D. M., & Chafouleas, S. M. (2012). Introduction to the special series: Issues and advances of synthesizing single-case research. Remedial and Special Education, 34(1), 3–8. doi: 10.1177/0741932512466269.
  63. Malone, J. C., Jr., & Cruchon, N. M. (2001). Radical behaviorism and the rest of psychology: A review/précis of Skinner’s About Behaviorism. Behavior and Philosophy, 29, 31–57.
  64. Manolov, R., Solanas, A., & Leiva, D. (2010). Comparing “visual” effect size indices for single-case designs. Methodology, 6(2), 49–58. doi: 10.1027/1614-2241/a000006.
  65. Martella, R. C., Nelson, R., & Marchand-Martella, N. E. (1999). Research methods: Learning to become a critical research consumer. Boston: Allyn & Bacon.
  66. Martin, G. L., Thompson, K., & Regehr, K. (2004). Studies using single-subject designs in sport psychology: 30 years of research. The Behavior Analyst, 27(2), 263–280.
  67. McGrane, J. (2010). Are psychological quantities and measurement relevant in the 21st century? Frontiers in Psychology, 1, 22. doi: 10.3389/fpsyg.2010.00022.
  68. McSweeny, A. J., & Naugle, R. I. (1993). “T scores for change”: An illustration of a regression approach to depicting change in clinical neuropsychology. The Clinical Neuropsychologist, 7(3), 300–312.
  69. Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834. doi: 10.1037//0022-006X.46.4.806.
  70. Meehl, P. E. (1997). The problem is epistemology, not statistics: Replace significance tests by confidence intervals and quantify accuracy of risky numerical predictions. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 393–425). Mahwah, NJ: Erlbaum.
  71. Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88, 355–383.
  72. Michell, J. (2000). Normal science, pathological science and psychometrics. Theory & Psychology, 10(5), 639–667. doi: 10.1177/0959354300105004.
  73. Moore, J. (1981). On mentalism, methodological behaviorism, and radical behaviorism. Behaviorism, 9(1), 55–77.
  74. Moore, J. (1990). A special section commemorating the 30th anniversary of Tactics of Scientific Research: Evaluating Experimental Data in Psychology by Murray Sidman. The Behavior Analyst, 13(2), 159–161.
  75. Moore, J. (2001). On distinguishing methodological from radical behaviorism. European Journal of Behavior Analysis, 2(2), 221–244.
  76. Moore, J. (2007). Conceptual foundations of radical behaviorism. New York: Sloan Publishing.
  77. Moore, J. (2009). Why the radical behaviorist conception of private events is interesting, relevant, and important. Behavior and Philosophy, 37, 21–37.
  78. Moore, J. (2011). Behaviorism. The Psychological Record, 61, 449–464.
  79. Moore, J. (2013). Methodological behaviorism from the standpoint of a radical behaviorist. The Behavior Analyst, 36(2), 197–208.
  80. Morgan, D. L., & Morgan, R. K. (2009). Single-case research methods for the behavioral and health sciences. Los Angeles: Sage.
  81. Morris, E. K. (1992). The aim, progress, and evolution of behavior analysis. The Behavior Analyst, 15(1), 3–29.
  82. Morris, E. K. (1998). Tendencias actuales en el análisis conceptual del comportamiento [Current trends in the conceptual analysis of behavior]. In R. Ardila, W. López-López, A. Pérez, R. Quiñones, & F. Reyes (Eds.), Manual de análisis experimental del comportamiento [Handbook of the experimental analysis of behavior] (pp. 19–56). Madrid: Biblioteca Nueva.
  83. Morris, E. K., Todd, J. T., Midgley, B. D., Schneider, S. M., & Johnson, L. M. (1990). The history of behavior analysis: Some historiography and a bibliography. The Behavior Analyst, 13(2), 131–158.
  84. Mulaik, S. A., Raju, N. S., & Harshman, R. A. (1997). There is a time and a place for significance testing. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 66–115). Mahwah, NJ: Erlbaum.
  85. Nestor, P., & Schutt, R. K. (2012). Research methods in psychology: Investigating human behavior. Los Angeles: Sage.
  86. Nickerson, R. S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5(2), 241–301. doi: 10.1037//1082-989X.5.2.241.
  87. Nourbakhsh, M. R., & Ottenbacher, K. J. (1994). The statistical analysis of single-subject data: A comparative examination. Physical Therapy, 74(8), 768–776.
  88. O’Donohue, W. T., Callaghan, G. M., & Ruckstuhl, L. E. (1998). Epistemological barriers to radical behaviorism. The Behavior Analyst, 21(2), 307–320.
  89. O’Donohue, W. T., & Houts, A. C. (1985). The two disciplines of behavior therapy: Research methods and mediating variables. The Psychological Record, 35, 155–163.
  90. O’Donohue, W. T., & Kitchener, R. (1999). Handbook of behaviorism. New York: Academic Press.
  91. Ottenbacher, K. J. (1990). Clinically relevant designs for rehabilitation research: The idiographic model. American Journal of Physical Medicine & Rehabilitation, 69(6), 286–292. doi: 10.1097/00002060-199012000-00002.
  92. Osborne, J. W. (2010). Challenges for quantitative psychology and measurement in the 21st century. Frontiers in Psychology, 1(1). doi: 10.3389/fpsyg.2010.00001.
  93. Parker, R. I., & Brossart, D. E. (2003). Evaluating single-case research data: A comparison of seven statistical methods. Behavior Therapy, 34, 189–211. doi: 10.1016/S0005-7894(03)80013-8.
  94. Parker, R. I., & Hagan-Burke, S. (2007). Useful effect size interpretations for single case research. Behavior Therapy, 38(1), 95–105. doi: 10.1016/j.beth.2006.05.002.
  95. Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect size in single-case research: A review of nine nonoverlap techniques. Behavior Modification, 35(4), 303–322. doi: 10.1177/0145445511399147.
  96. Perner, P. (2008). Case-based reasoning and the statistical challenges. Quality and Reliability Engineering International, 24, 705–720. doi: 10.1002/qre.
  97. Perone, M. (1999). Statistical inference in behavior analysis: Experimental control is better. The Behavior Analyst, 22(2), 109–116.
  98. Perone, M., & Hursh, D. E. (2013). Single-case experimental designs. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. Methods and principles (pp. 107–126). Washington, DC: American Psychological Association.
  99. Photos, V., Michel, B. D., & Nock, M. K. (2008). Single-case research. In M. Hersen & A. M. Gross (Eds.), Handbook of clinical psychology (Vol. 1, pp. 224–245). NJ: John Wiley & Sons.
  100. Plazas, E. (2006). B. F. Skinner: la búsqueda de orden en la conducta voluntaria [B. F. Skinner: The search for order in voluntary behavior]. Universitas Psychologica, 5(2), 371–383.
  101. Pruzek, R. M. (1997). An introduction to Bayesian inference and its applications. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 287–318). Mahwah, NJ: Erlbaum.
  102. Rachlin, H. (1992). Teleological behaviorism. American Psychologist, 47(11), 1371–1382.
  103. Rachlin, H. (1994). Behavior and mind: The roots of modern psychology. New York: Oxford University Press.
  104. Rachlin, H. (2013). About teleological behaviorism. The Behavior Analyst, 36(2), 209–222.
  105. Ribes-Iñesta, E. (1997). Causality and contingency: Some conceptual considerations. The Psychological Record, 47, 619–635.
  106. Ribes-Iñesta, E. (2001). Instructions, rules, and abstraction: A misconstrued relation. Behavior and Philosophy, 28(1), 41–55.
  107. Ribes-Iñesta, E. (2003). What is defined in operational definitions? The case of operant psychology. Behavior and Philosophy, 31, 111–126.
  108. Reichardt, C. S., & Gollob, H. F. (1997). When confidence intervals should be used instead of statistical significance tests, and vice versa. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 259–286). Mahwah, NJ: Erlbaum.
  109. Robins, R. W., Gosling, S. D., & Craik, K. (1999). An empirical analysis of trends in psychology. American Psychologist, 54(2), 117–128. doi: 10.1037/0003-066X.54.2.117.
  110. Rozeboom, W. W. (1997). Good science is abductive, not hypothetico-deductive. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 335–391). Mahwah, NJ: Erlbaum.
  111. Rozin, P. (2007). Exploring the landscape of modern academic psychology: Finding and filling the holes. American Psychologist, 62(8), 754–766. doi: 10.1037/0003-066X.62.8.754.
  112. Shadish, W. R. (2014). Statistical analyses of single-case designs: The shape of things to come. Current Directions in Psychological Science, 23(2), 139–146. doi: 10.1177/0963721414524773.
  113. Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13, 90–100. doi: 10.1037/a0015108.
  114. Schmidt, F. L., & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In L. Harlow, S. Mulaik, & J. Steiger (Eds.), What if there were no significance tests? (pp. 37–64). Mahwah, NJ: Erlbaum.
  115. Schweigert, W. A. (2012). Research methods in psychology: A handbook. Long Grove, IL: Waveland Press.
  116. Sharpe, D. (2013). Why the resistance to statistical innovations? Bridging the communication gap. Psychological Methods, 18(4), 572–582. doi: 10.1037/a0034177.
  117. Shull, R. L. (1999). Statistical inference in behavior analysis: Discussant’s remarks. The Behavior Analyst, 22(2), 117–121.
  118. Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. New York: Basic Books.
  119. Silverstein, A. (1988). An Aristotelian resolution of the idiographic versus nomothetic tension. American Psychologist, 43(6), 425–430. doi: 10.1037//0003-066X.43.6.425.
  120. Skinner, B. F. (1938). The behavior of organisms. New York: Appleton.
  121. Skinner, B. F. (1945). The operational analysis of psychological terms. Psychological Review, 52(5), 270–277. doi: 10.1037/h0062535.
  122. Skinner, B. F. (1956). A case history in scientific method. American Psychologist, 11(5), 221–233. doi: 10.1037/h0047662.
  123. Skinner, B. F. (1967). B. F. Skinner. In E. G. Boring & G. Lindzey (Eds.), A history of psychology in autobiography (Vol. 5, pp. 385–413). New York: Appleton.
  124. Skinner, B. F. (1979). The shaping of a behaviorist. New York: Knopf.
  125. Skinner, B. F. (1981). Selection by consequences. Science, 213(4507), 501–504.
  126. Skinner, B. F. (1984). An operant analysis of problem solving. The Behavioral and Brain Sciences, 7, 583–613. doi: 10.1017/S0140525X00027412.
  127. Smith, J. (2012). Single-case experimental designs: A systematic review of published research and current standards. Psychological Methods, 17(4), 510–550. doi: 10.1037/a0029312.
  128. Smith, L. D., Best, L. A., Cylke, V. A., & Stubbs, D. A. (2000). Psychology without p values: Data analysis at the turn of the 19th century. American Psychologist, 55(2), 260–263. doi: 10.1037/0003-066X.55.2.260.
  129. Solanas, A., Manolov, R., & Onghena, P. (2010). Estimating slope and level change in N = 1 designs. Behavior Modification, 34(3), 195–218. doi: 10.1177/0145445510363306.
  130. Stang, A., Poole, C., & Kuss, O. (2010). The ongoing tyranny of statistical significance testing in biomedical research. European Journal of Epidemiology, 25(4), 225–230. doi: 10.1007/s10654-010-9440-x.
  131. Stangor, C. (2011). Research methods for the behavioral sciences. Belmont, CA: Wadsworth Cengage Learning.
  132. Thompson, B. (2002). What future quantitative social science research could look like: Confidence intervals for effect sizes. Educational Researcher, 31(3), 24–31. doi: 10.1002/pits.20234.
  133. Thompson, T. (1984). The examining magistrate for nature: A retrospective review of Claude Bernard’s An Introduction to the Study of Experimental Medicine. Journal of the Experimental Analysis of Behavior, 41(2), 211–216. doi: 10.1901/jeab.1984.41-211.
  134. Toomela, A. (2007a). Culture of science: Strange history of the methodological thinking in psychology. Integrative Psychological and Behavioral Science, 41(1), 6–20. doi: 10.1007/s12124-007-9004-0.
  135. Toomela, A. (2007b). History of methodology in psychology: Starting point, not the goal. Integrative Psychological and Behavioral Science, 41(1), 75–82. doi: 10.1007/s12124-007-9005-z.
  136. Toomela, A. (2008). Variables in psychology: A critique of quantitative psychology. Integrative Psychological and Behavioral Science, 42(3), 245–265. doi: 10.1007/s12124-008-9059-6.
  137. Toomela, A. (2009a). The methodology of idiographic science: The limits of single-case studies and the role of typology. In S. Salvatore, J. Valsiner, J. Travers, & A. Gennaro (Eds.), Yearbook of idiographic science (Vol. 2, pp. 13–33). Roma: Firera & Liuzzo.
  138. Toomela, A. (2009b). What is the psyche? The answer depends on the particular epistemology adopted by the scholar. In S. Salvatore, J. Valsiner, J. Travers, & A. Gennaro (Eds.), Yearbook of idiographic science (Vol. 2, pp. 81–104). Roma: Firera & Liuzzo.
  139. Toomela, A. (2010). Quantitative methods in psychology: Inevitable and useless. Frontiers in Psychology, 1, 1–14. doi: 10.3389/fpsyg.2010.00029.
  140. Toomela, A. (2011). Travel into a fairy land: A critique of modern qualitative and mixed methods psychologies. Integrative Psychological and Behavioral Science, 45(1), 21–47. doi: 10.1007/s12124-010-9152-5.
  141. Toomela, A. (2012). Guesses on the future of cultural psychology: Past, present, and past. In J. Valsiner (Ed.), Oxford handbook of culture and psychology (pp. 998–1033). Oxford: Oxford University Press. doi: 10.1093/oxfordhb/9780195396430.013.0049.
  142. Toomela, A. (2014). Mainstream psychology. In T. Teo (Ed.), Encyclopedia of critical psychology. New York: Springer. doi: 10.1007/978-1-4614-5583-7.
  143. Tryon, W. W. (2001). Evaluating statistical difference, equivalence, and indeterminacy using inferential confidence intervals: An integrated alternative method of conducting null hypothesis statistical tests. Psychological Methods, 6(3), 371–386. doi: 10.1037/1082-989X.6.4.371.
  144. Valsiner, J. (1986). Where is the individual subject in scientific psychology? In J. Valsiner (Ed.), The individual subject and scientific psychology (pp. 1–16). New York: Plenum.
  145. Wachtel, P. L. (1980). Investigation and its discontents: Some constraints on progress in psychological research. American Psychologist, 35(5), 399–408. doi: 10.1037//0003-066X.35.5.399.
  146. Wagenmakers, E., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632–638. doi: 10.1177/1745691612463078.
  147. Wakefield, J. C. (2007). Why psychology needs conceptual analysts: Wachtel’s “discontents” revisited. Applied and Preventive Psychology, 12(1), 39–43. doi: 10.1016/j.appsy.2007.07.014.
  148. Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54(8), 594–604. doi: 10.1037/0003-066X.54.8.594.
  149. Windelband, W. (1894/1998). History and natural science. Theory & Psychology, 8(1), 5–22.
  150. Ximenes, V. M., Manolov, R., Solanas, A., & Quera, V. (2009). Factors affecting visual inference in single-case designs. The Spanish Journal of Psychology, 12(2), 823–832.
  151. Zuriff, G. E. (1985). Behaviorism: A conceptual reconstruction. New York: Columbia University Press.

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Faculty of Psychology, Fundación Universitaria Konrad Lorenz, Bogotá, Colombia
  2. Pontificia Universidad Javeriana, Bogotá, Colombia
