The Behavior Analyst, Volume 40, Issue 1, pp. 17–38

Coal Is Not Black, Snow Is Not White, Food Is Not a Reinforcer: The Roles of Affordances and Dispositions in the Analysis of Behavior

  • Peter R. Killeen
  • Kenneth W. Jacobs
Original Research


Reinforcers comprise sequences of actions in context. Just as the white of snow and the black of coal depend on the interaction of an organism’s visual system with the reflectances in its surrounds, reinforcers depend on an organism’s motivational state and on the affordances—possibilities for perception and action—in its surrounds. Reinforcers are not intrinsic to things; they are a relation among what the thing affords, its context, the organism, and the organism’s history as recapitulated in its current state. Reinforcers and other affordances are potentialities rather than intrinsic features. Realizing those potentialities requires motivational operations and stimulus contexts that change the state of the organism—they change its disposition to make the desired response. An expansion of the three-term contingency is suggested in order to help keep us mindful of the importance of behavioral systems, states, emotions, and dispositions in our research programs.
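The relational view in the abstract can be sketched in code. This is an illustrative toy model, not the authors' formalism: the function names and the scalar representation of "deprivation" and "availability" are our assumptions, chosen only to show that reinforcing value is a relation among the affordance, the context, and the organism's current state rather than a property of the thing itself.

```python
# Toy sketch (hypothetical names, not the authors' notation): a "reinforcer"
# is modeled as a relation among what an event affords, the stimulus context,
# and the organism's current motivational state.

def reinforcing_value(affordance, context, state):
    """Strength with which an event will reinforce, given what it affords,
    the context, and the organism's disposition (motivational state)."""
    if affordance not in state:
        # The organism has no disposition toward this affordance at all.
        return 0.0
    deprivation = state[affordance]              # 0 = sated, 1 = maximally deprived
    availability = context.get(affordance, 0.0)  # does the context afford it?
    return deprivation * availability

# "Food is not a reinforcer": the same pellet in the same chamber reinforces
# a hungry organism but not a sated one.
chamber = {"food": 1.0}
sated = {"food": 0.0}
hungry = {"food": 0.9}

print(reinforcing_value("food", chamber, sated))   # 0.0
print(reinforcing_value("food", chamber, hungry))  # 0.9
```

The point of the sketch is that the same first argument ("food") yields different values as the state argument changes, which is the sense in which the paper argues reinforcers are potentialities realized by motivational operations rather than intrinsic features.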


Keywords: Affordance · Affordee · Dispositions · Four-term contingency · Emotion · Law of effect · Linear algebra · Motivation · Notation · States



We appreciate the formative commentary from four anonymous reviewers and the AE, and from Amy Odum, Tim Shahan, Tom Taylor, Travis Thompson, and Max Hocutt. This is not to imply that they endorse the views expressed in this paper, just that it would have been worse without their help.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.



Copyright information

© Association for Behavior Analysis International 2016

Authors and Affiliations

  1. Department of Psychology, Arizona State University, Tempe, USA
  2. Department of Psychology/296, University of Nevada, Reno, USA
