Abstract
Experiment 1 compared the acquisition of initial- and terminal-link responding in concurrent chains. The terminal-link schedules were fixed interval (FI) 10 sec and FI 20 sec, but some presentations were analogous to no-food trials in the peak procedure, lasting 60 sec with no reinforcement delivery. Pigeons completed a series of reversals in which the schedules signaled by the terminal-link stimuli (red and green on the center key) were changed. Acquisition of temporal control of terminal-link responding (as measured by peak location on no-food trials) was more rapid than acquisition of preference in the initial links. Experiment 2 compared acquisition in concurrent chains under the typical procedure, in which the terminal-link schedules are changed, with a novel arrangement in which the initial-link key assignments were changed while the terminal-link schedules remained the same. Acquisition of preference was faster in the latter condition, in which the terminal-link stimulus-reinforcer relations were preserved. These experiments provide the first acquisition data that support the view that initial-link preference is determined by the values of the terminal-link stimuli.
Some of these data were presented at the annual meeting of the Society for the Quantitative Analyses of Behavior, Washington, DC, May 2000. I thank Tony Nevin and Orn Bragason for helpful comments.
Grace, R.C. The value hypothesis and acquisition of preference in concurrent chains. Animal Learning & Behavior 30, 21–33 (2002). https://doi.org/10.3758/BF03192906