Visual artificial grammar learning by rhesus macaques (Macaca mulatta): exploring the role of grammar complexity and sequence length
Humans and nonhuman primates can learn about the organization of stimuli in the environment using implicit sequential pattern learning capabilities. However, most previous artificial grammar learning studies with nonhuman primates have involved relatively simple grammars and short input sequences. The goal of the current experiments was to assess the learning capabilities of monkeys on an artificial grammar learning task that was more complex than most others previously used with nonhumans. Three experiments were conducted using a joystick-based, symmetrical-response serial reaction time task in which two monkeys were exposed to grammar-generated sequences at sequence lengths of four in Experiment 1, six in Experiment 2, and eight in Experiment 3. Over time, the monkeys came to respond faster to the sequences generated from the artificial grammar than to random versions. In a subsequent generalization phase, subjects generalized their knowledge to novel sequences, responding significantly faster to novel instances of sequences produced using the familiar grammar than to those constructed using an unfamiliar grammar. These results reveal that rhesus monkeys can learn and generalize the statistical structure inherent in an artificial grammar that is as complex as some used with humans, for sequences up to eight items long. These findings are discussed in relation to whether rhesus macaques and other primate species possess implicit sequence learning abilities similar to those that humans draw upon to learn natural language grammar.
Keywords: Artificial grammar learning · Sequence learning · Statistical learning · Rhesus macaques
This research was funded in part by National Institutes of Health Grants HD-38051 and HD-060563, an RCALL seed grant, and the College of Arts and Sciences at Georgia State University. Lisa Heimbauer was an RCALL Fellow during completion of this work.
Compliance with Ethical Standards
Conflict of interest
Lisa A. Heimbauer declares that she has no conflict of interest. Christopher M. Conway declares that he has no conflict of interest. Morten H. Christiansen declares that he has no conflict of interest. Michael J. Beran declares that he has no conflict of interest. Michael J. Owren is deceased.
Human and animal rights
All research protocols used in this study with the monkeys were approved by the Georgia State University Animal Care and Use Committee (protocol A09031). All applicable international, national, and/or institutional guidelines for the care and use of animals were followed including those in the Guide for the Care and Use of Laboratory Animals. No human participants were involved in this research.
Data availability
The datasets analyzed during the current study are available from the corresponding author on reasonable request.