Rapid instructed task learning: A new window into the human brain’s unique capacity for flexible cognitive control
The human ability to flexibly adapt to novel circumstances is extraordinary. Perhaps the most illustrative, yet underappreciated, form of this cognitive flexibility is rapid instructed task learning (RITL)—the ability to rapidly reconfigure our minds to perform new tasks from instructions. This ability is important for everyday life (e.g., learning to use new technologies) and is used to instruct participants in nearly every study of human cognition. We review the development of RITL as a circumscribed domain of cognitive neuroscience investigation, culminating in recent demonstrations that RITL is implemented via brain circuits centered on lateral prefrontal cortex. We then build on this and the recent discovery of compositional representations within lateral prefrontal cortex to develop an integrative theory of cognitive flexibility and cognitive control that identifies mechanisms that may enable RITL within the human brain. The insights gained from this new theoretical account have important implications for further developments and applications of RITL research.
Keywords: Cognitive flexibility · Compositionality · Cognitive control · Prefrontal cortex · Flexible cognitive control · Computational model · Animal models
One of the defining characteristics of human-level intelligence is the ability to rapidly restructure one’s behavior into novel configurations from instruction. This ability is important in everyday life. For instance, it is essential for learning new technologies and new skills at all levels of education. Furthermore, nearly every experimental psychologist uses verbal instructions to inform participants how to perform experimental tasks, yet the mechanisms underlying this process are largely unknown (Monsell, 1996).
The neural and cognitive processes underlying this ability are the focus of an emerging area of cognitive neuroscience research. This new area investigates the neural basis of rapid instructed task learning (RITL; pronounced “rittle”)—a term that we propose to describe the ability to rapidly learn task procedures from instructions. Here we will interpret, distill, and build a novel theory based on current findings regarding this key component of human cognition, helping to establish RITL as a multidisciplinary domain of scientific inquiry and thereby helping to accelerate further research in this area.
We also present RITL as an especially important form of cognitive flexibility, given the extraordinary speed (one trial) and adaptability (involving novel mental configurations) that RITL requires. These attributes make RITL (1) an especially specific and sensitive methodology for the study of flexible cognition and (2) an important source of constraints on the kinds of neural architectures that are capable of implementing flexible human cognition. We support these conclusions by reviewing cognitive, computational, and neuroscientific studies of RITL and postulating a novel neural architecture capable of implementing RITL and other forms of flexible cognitive control.
This article is structured in three main sections, as follows: First we provide an overview of RITL, including its definition, along with a review of previous and current research on the topic. Next, we introduce and discuss our novel neuroscientific theory of RITL (and of flexible cognition generally). Finally, we take a broad view of RITL research, informed by the perspective gained from our new neuroscientific theory, and look toward future research and potential applications.
Review of RITL research
RITL is the process of rapidly (typically, on the first trial) learning a novel rule or task from instructions. Humans often use RITL to learn new tasks, such as how to use new technologies (e.g., a new “smartphone”), how to cook new recipes (e.g., a new kind of lasagna), or how to play an unfamiliar game (e.g., the first time that checkers is played). These tasks can be learned via reinforcement learning (Sutton & Barto, 1998), yet they are learned much more efficiently with RITL. Consider, for a moment, how difficult learning checkers would be without RITL abilities. Instead of rapidly learning the rules for how each piece moves and that the goal is to capture all of the opposing player’s pieces, you would need to randomly select from dozens of possible actions (e.g., moving your pieces to the other end of the board or having the goal of moving every piece in sequence), until an instructor rewards valid moves and, eventually, rewards a win. Reinforcement learning such as this has been investigated much more extensively than RITL, despite the clear utility and high frequency of RITL use in everyday life.
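The efficiency gap described above can be made concrete with a toy simulation (not from the studies reviewed; the task, state counts, and learning scheme are illustrative assumptions). A trial-and-error learner must stumble onto each correct stimulus–response pairing and be rewarded for it, whereas an instructed learner receives the full mapping directly and can perform correctly from the first trial.

```python
import random

def learn_by_reinforcement(task, n_states, n_actions, seed=0):
    """Trial and error: try actions at random, keep only those that earn reward."""
    rng = random.Random(seed)
    policy = {}
    trials = 0
    while any(policy.get(s) != task[s] for s in range(n_states)):
        s = rng.randrange(n_states)
        a = policy.get(s, rng.randrange(n_actions))
        trials += 1
        if a == task[s]:          # the "instructor" rewards a valid move
            policy[s] = a
        else:                     # no reward: discard and try again
            policy.pop(s, None)
    return trials

def learn_by_instruction(task):
    """RITL: the instruction directly specifies the task; one trial suffices."""
    return dict(task), 1

# An arbitrary stimulus-response rule over 10 stimuli and 5 responses
task = {s: (s * 3) % 5 for s in range(10)}
rl_trials = learn_by_reinforcement(task, n_states=10, n_actions=5)
policy, ritl_trials = learn_by_instruction(task)
print(rl_trials, ritl_trials)  # reinforcement needs many trials; RITL needs one
```

Even in this tiny state space, the trial-and-error learner needs many times more trials than the instructed one; the gap grows combinatorially with task size, which is the checkers point above in miniature.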
Although reinforcement learning is the primary means of learning early in life, humans are quickly able to start receiving instruction from others (i.e., ‘supervised’ learning). The initial form of supervised learning—termed ‘shaping’—speeds task learning substantially by reinforcing behaviors as the learner gets closer to the proper task (Fig. 1A). However, even this efficient form of learning is still much slower than RITL. Through RITL, learners can achieve first-trial learning due to the instructor using examples and/or language to directly specify the correct task in the state space of possible tasks (Fig. 1B) (Cole, 2009). Although some tasks are so complex or nuanced that directly specifying the exact task state is difficult or impossible (e.g., performing a professional-level tennis backhand), many tasks in everyday life can be immediately performed on the first try via RITL (Cole, 2009). RITL research investigates how the brain is able to rapidly convert the water of instructions into the wine of novel-task performance.
Defining RITL more precisely: The role of first-trial RITL and neuroscience
The distinction between RITL and reinforcement learning is typically clear, but some learning situations involve aspects of both. For instance, learning can be relatively rapid, yet accomplished with instructions delivered in the form of post-trial feedback. Currently, it is uncertain whether such multitrial feedback-based rapid learning is truly the same as RITL. Since it is evident that first-trial (i.e., without feedback or multitrial integration) performance on a novel task involves RITL, we suggest that RITL research should focus primarily on such “first-trial RITL” situations. Supporting this suggestion, we recently found a sharp shift in behavior between first and second encounters with novel tasks, even when other tasks are performed between those encounters (Cole & Braver, 2012). Cohen-Kdoshay and Meiran (2009) also emphasized the importance of focusing on first trials when investigating the effects of instructions on behavior, in order to rule out effects of long-term memory traces formed across multiple trials.
It will also be important, however, for future research to discover the exact boundaries of what defines RITL. We suggest that cognitive neuroscience can play a critical role here. Specifically, we suggest that identifying the neural mechanisms underlying first-trial RITL will allow for precise categorization of learning episodes as either involving or not involving RITL. In other words, we propose that a learning episode involves RITL insofar as it involves the neural mechanisms underlying the most theoretically established form of RITL: first-trial RITL. It may be that precise boundaries surround first-trial RITL (e.g., completely different brain processes are activated when feedback is involved), or it may be that the same neural mechanisms are involved during first-trial RITL and in a variety of similar learning situations. Adjudicating among these possibilities will be an important new direction for cognitive neuroscience RITL research.
Previous RITL research: Neuropsychology and the role of language
The initial studies of RITL—prior to the recent advent of RITL cognitive neuroscience research—were in the areas of neuropsychology and computational modeling. Milner (1964, 1965) and Luria (1973; Luria, Pribram, & Homskaya, 1964) found that lesions in the lateral prefrontal cortex (LPFC) led to patients with normal linguistic abilities who were seemingly able to understand and remember instructions, yet who had a profound inability to execute those instructions. These and other instances of ‘goal neglect’ (Duncan, Burgess, & Emslie, 1995) provided preliminary evidence that RITL does not simply rely on language or on remembering instructions during execution. Instead, these results suggest the existence of processes in LPFC that convert instructions into task sets that can then be executed as necessary for accurate task performance.
These studies have indicated that, rather than depending on linguistic abilities per se, RITL depends on the ability to rapidly reconfigure task sets. It can be further demonstrated that in many cases, language is not at all necessary for RITL. For instance, the instructions for building IKEA furniture can be learned rapidly, despite the fact that they are conveyed in purely iconic diagrams with no language (see Holmes, 2005, for more examples). Other forms of nonlinguistic RITL involve using imitation or context (e.g., using spatial or temporal proximity to associate an arbitrary stimulus with a response) to rapidly learn new tasks.
Linguistic RITL appears to be the most powerful form, however. This is due to its ability to directly specify task states (see Fig. 1B), even when they are abstract. For instance, the task ‘lift the red items’ is readily specified linguistically, but may require many trials via the other forms of task learning (e.g., lifting of a red dotted hat, lifting of a red dotted shirt [‘lift red dotted clothes?’], lifting of a red striped shoe [‘lift red clothes?’], lifting of a red striped ball [‘ah, lift the red items’]). In other words, many more tasks can be specified with linguistic RITL than would be either practical or possible with other forms of task learning (Cole, 2009).
Previous RITL research: Computational models
The use of punishments and rewards can at best be a part of the teaching process. . . . It is necessary . . . to have some other “unemotional” channels of communication. If these are available it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g., a symbolic language. . . . The use of this language will diminish greatly the number of punishments and rewards required.
—A. Turing, in “Computing Machinery and Intelligence” (1950), in which the Turing test was first proposed
Computational models have been used to explore the mechanisms necessary for RITL since long before neuropsychological RITL research began. Indeed, Alan Turing was inspired by the human capacity for RITL—the ability to convert instructions into novel-task performance—when he made his field-defining advances in computer science. For example, his concept of the universal Turing machine demonstrated a way in which a machine could be rapidly instructed to perform any computable task (Turing, 1937). Turing also used the analogy of RITL to propose the use of high-level programming languages to make instructing a machine much easier (Turing, 1950; see the quote above). From this perspective, one of Turing’s legacies is that every modern computer is, in some sense, a computational model of the human capacity for RITL.
Programming languages were not intended to model human RITL per se, however. Therefore, computers did not yield insight into human RITL until computational models specifically attempting to model the human mind were developed. Early on, production models (consisting entirely of if–then ‘production’ rules) were built to rapidly learn from instruction (Anderson, 1976; Kieras & Bovair, 1986). However, these models required instructions to be represented in the arbitrary codes used by the models rather than to emerge from known cognitive or neural mechanisms, reducing the relevance of these RITL findings to humans.
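The production-model idea can be sketched in a few lines (a minimal illustration, not a reconstruction of Anderson's or Kieras and Bovair's systems; the rules and memory contents are hypothetical). Instructions are encoded as if–then rules that fire whenever their conditions match the current contents of working memory.

```python
# A minimal production system: instructions become if-then rules that fire
# when their condition elements all match the contents of working memory.
def run_productions(rules, working_memory, max_cycles=10):
    """Repeatedly fire the first rule that adds new facts, until none does."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition <= working_memory:        # all condition elements present
                new_facts = action - working_memory
                if new_facts:
                    working_memory |= new_facts
                    break
        else:
            break                                  # no rule produced anything new
    return working_memory

# Instructed task: "if the stimulus is a red hat, respond 'left'"
rules = [
    ({"red", "hat"}, {"respond-left"}),
    ({"respond-left"}, {"press-left-key"}),
]
wm = run_productions(rules, {"red", "hat"})
print(wm)
```

Note how the limitation discussed above appears directly in the sketch: the condition and action elements are arbitrary symbols chosen by the modeler, rather than representations grounded in known cognitive or neural mechanisms.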
This limited relationship of computational modeling to the biological substrates of human learning was overcome, in the 1980s, with the widespread introduction of more biologically plausible ‘connectionist’ models consisting of networks of neuron-like units. However, in sharp contrast to RITL, learning in connectionist models was entirely based on either associative or error-driven algorithms that relied on trial and error (see Fig. 1A) and typically required thousands of trials for learning a new task (Rumelhart, McClelland, & the PDP Research Group, 1986). Importantly, several examples of ‘structured’ connectionist models (in which some connections are specified a priori) showed that simulated networks of neuron-like units could rapidly learn novel tasks from instructions (Noelle & Cottrell, 1996; Schneider & Oliver, 1991). These models used specialized units for the active maintenance of if–then contingencies to allow for rapid reorganization of the model’s internal state to implement novel-task parameters. Although these models were still somewhat slower (dozens of trials) than the first-trial learning that humans are capable of, these computational studies demonstrated the utility of if–then rules and of the active maintenance of information for RITL abilities. More recently, several theoretical neural architectures have proposed more specific biological substrates for RITL computations (Doll, Jacobs, Sanfey, & Frank, 2009; Lebiere & Anderson, 1993; Ramamoorthy & Verguts, 2012; Stocco, Lebiere, & Anderson, 2010; Zylberberg, Dehaene, Roelfsema, & Sigman, 2011). Importantly, these proposed neural substrates converge with the previous neuropsychological results, since in all of the models the active maintenance of task-relevant representations is thought to occur within LPFC (Miller & Cohen, 2001).
RITL and human intelligence
One of the most striking distinctions between human and nonhuman primate intelligence is the tremendous amount of time that it takes for nonhuman primates to learn tasks. For instance, it takes several weeks or months for a macaque monkey to learn a simple delayed match-to-sample task (cf. Verrico et al., 2011), while a human can learn it immediately (“press the button when two stimuli in a row match”). This striking between-species difference illustrates that RITL may be one of the cognitive skills that have expanded most rapidly with the evolutionary changes producing Homo sapiens sapiens, the modern human species.
These considerations suggest that we may learn something about the evolutionary process underlying RITL abilities by comparing humans to other primate species. First, however, it would be informative if we could establish the plausibility of RITL itself as the driving force of evolutionary change—as opposed to RITL emerging solely from the evolution of related cognitive abilities. Importantly, recent simulation studies have shown that RITL and related forms of social learning enhance group survival rates (Rendell et al., 2010; Rendell et al., 2011), providing a plausible selective pressure behind the evolution of RITL specifically (though independent selective pressures on related abilities certainly also played an important role).
If such selective pressures shaped the brain, the relevant between-species difference may lie in anterior LPFC (area 10). This area has undergone a greater evolutionary expansion in humans than almost any other brain area (Avants, Schoenemann, & Gee, 2006). Critically, bonobos have the largest anterior LPFC of all nonhuman primates (Semendeferi, Armstrong, Schleicher, Zilles, & Van Hoesen, 2001). This suggests that anterior LPFC may be especially important for the computations underlying RITL abilities. Consistent with this possibility, several RITL studies have demonstrated the involvement of anterior LPFC during RITL (see discussion below). It may be that the selective pressures pushing improvement of RITL abilities during evolution did so in part by driving computational improvements in anterior LPFC (see below for examples of how, e.g., greater gray-matter volume in LPFC may enhance the computations underlying RITL).
Recent methodological innovations in RITL research
RITL is a complex cognitive behavior and, as such, it is difficult to effectively isolate its subcomponents in laboratory-based experimental studies. For instance, one may want to separate the processes that are specific to the acquisition of task rules (e.g., the “instructions”) from those that are specific to their application. Recently developed RITL experimental designs typically solve this problem by dividing each trial into two separate phases: an encoding phase, in which a new task is communicated through a prearranged notation (e.g., three words describing three consecutive mental operations to perform), and an execution phase, in which the task-specific stimuli are presented and participants perform the instructed task.
Importantly, a subset of these studies (Cole, 2009; Cole, Bagic, et al., 2010; Hartstra et al., 2011; Stocco et al., 2012) also utilized another methodological innovation: the inclusion of an additional set of tasks that had been practiced in a prior session. The inclusion of these control tasks provided an important baseline, enabling the assessment of processes specific to RITL, while controlling for generic processes that might be engaged prior to and during performance of practiced tasks. Note that it is possible that these contrasts might pick up on differences that are also present between moderately practiced and extensively practiced task switches (Yeung & Monsell, 2003); it will be important for future research to explore this possibility. Furthermore, it is possible that the ability to use the same stimuli among a wide variety of tasks in these paradigms (in contrast to Ruge & Wolfensteller’s, 2010, procedure) might add additional processes that would need to be explored further (Rubin & Meiran, 2005). Nonetheless, these studies have consistently shown that anterior, middle (dorsolateral or ventrolateral), and posterior regions of LPFC all contribute to RITL. It will be important for future work to identify the unique contributions that each portion of LPFC makes to RITL abilities.
Key distinctions in RITL research
In this section, we review some of the fundamental distinctions in RITL research covered here. We suggest that further characterization of these and other distinctions will be important for future progress in RITL research.
Form of communication: Nonlinguistic versus linguistic RITL
The existence of nonlinguistic RITL (see above) and the observation that lesions in LPFC can impair RITL without impairing linguistic abilities (Luria, 1973) demonstrate that language is not necessary for RITL. Nonlinguistic RITL involves learning through examples. For instance, imitation leads to RITL via simply copying the observed behavior of someone else, and this may be the most extensively studied form of RITL to date (Heyes, 2001). Imitation is highly related to ‘emulation,’ in which the intentions/goals are copied rather than the specific motor movements (Byrne & Russon, 1998). In contrast, linguistic instructions utilize symbolic representations to communicate task procedures. These forms of RITL communication likely involve some shared and some distinct cognitive and brain processes. Furthermore, there are advantages to each form of RITL communication, depending on the particular task to be learned (see below).
Level of abstraction: Concrete versus abstract RITL
Ultimately, we are embodied creatures. Therefore, concrete (i.e., embodied/sensory–motor) tasks are likely processed differently from abstract tasks. Concrete tasks specify a limited but exact set of percept–action associations. In contrast, abstract tasks specify a less limited but typically inexact set of percept–action associations. For example, the relatively concrete task “point left when you see a red hat” is quite exact (i.e., nearly the entire percept–action scenario is specified), yet those instructions are limited to one small set of scenarios. In contrast, the relatively abstract task “point left when you see clothing” is less exact, but it applies to a wider variety of scenarios. Thus, abstract task instructions and representations can be considered to be compressed (in an information-theoretic sense; Gray & Tall, 2007). Importantly, concrete tasks are more readily communicated via imitation (nonlinguistic RITL), while abstract tasks are more readily communicated via language. See Ruge and Wolfensteller (2010) for an example of concrete RITL, and Cole, Bagic, et al. (2010) for an example of abstract RITL. Note that the use of novel stimuli for concrete RITL may allow for less proactive interference (negative transfer) than the reuse of rules across task contexts in abstract RITL. However, such reuse of rules may alternatively result in positive transfer (from greater practice with them), effectively facilitating performance. Further research will be necessary to explore this issue.
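The compression point can be illustrated with a small sketch (the percepts and rules are hypothetical examples in the spirit of those above, not materials from the cited studies). A concrete rule is an exact but narrow lookup table of percept–action pairs, whereas an abstract rule is a compressed predicate that covers many percepts it has never enumerated.

```python
# Concrete rule: an exact but narrow table of percept-action pairs.
concrete_rule = {("red", "hat"): "point-left"}

# Abstract rule: a compressed predicate ("point left when you see clothing")
# that generalizes across percepts without listing them individually.
def abstract_rule(percept):
    color, category = percept
    return "point-left" if category in {"hat", "shirt", "shoe"} else None

percepts = [("red", "hat"), ("blue", "shirt"), ("green", "ball")]
concrete_out = [concrete_rule.get(p) for p in percepts]
abstract_out = [abstract_rule(p) for p in percepts]
print(concrete_out)  # ['point-left', None, None]
print(abstract_out)  # ['point-left', 'point-left', None]
```

The concrete rule responds only to the one scenario it specifies, while the abstract rule applies across novel scenarios at the cost of being less exact about any single one.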
Level of complexity: Simple versus complex RITL
Task complexity is another potentially important distinction between the Ruge and Wolfensteller (2010) and Cole, Bagic, et al. (2010) studies. Ruge and Wolfensteller only included nonintegrated rules (i.e., rules that could be executed independent of one another), while Cole, Bagic, et al. included three integrated rules for each task. This additional complexity likely led to an additional multirule integration process in Cole, Bagic, et al.’s study. Future work will be necessary to dissociate complexity from abstraction effects in RITL, which were confounded across the two studies. It would be possible to deconfound these two dimensions, for instance, by using novel multistep spatial routes (an example of concrete complex RITL). Note, however, that the Ruge and Wolfensteller paradigm likely involved an integration process between the constituent stimulus and response representations of each task, such that the differences with Cole, Bagic, et al. may have been more of degree (greater complexity) than of kind (simple vs. complex).
Task preparation stage: Instruction versus initial implementation
Recent research has suggested the presence of distinct phases of task preparation during RITL (Cole, Bagic, et al., 2010; Stocco et al., 2012). These stages are similar to the ‘cue’ and ‘target’ stages in the task-switching literature (Monsell, 2003; Ruge, Jamadar, Zimmermann, & Karayanidis, 2011), though there is strong evidence that RITL is distinct from typical forms of task switching (see below). During the instruction stage, a novel task set must be communicated by an instructor and interpreted by the learner. This interpretation process likely involves activation of the proper task semantics (e.g., motor representations of ‘point left’ and visual representations of ‘red hat’). During the initial implementation stage, the task is executed in response to a stimulus for the first time. In some cases (especially during abstract RITL), preparation likely continues during initial task implementation, as the stimulus allows for more concrete specification of the task procedure that is to be executed.
RITL versus typical task switching: Task set formation versus task set retrieval
The vast majority of task-switching experiments have involved switching among a very small number of highly practiced tasks. These typical task-switching paradigms are thought to involve a task set retrieval process from long-term memory (Mayr & Kliegl, 2000), yet such a process is impossible with RITL, since the tasks are novel by definition. Other “task reconfiguration” processes may be shared between RITL and typical task switching, however. Such processes are important constituent parts of RITL, yet identifying the component processes that are unique to RITL is essential for understanding RITL as an independent cognitive construct. Recently developed RITL paradigms have sought to identify such components, by utilizing practiced in contrast to strictly novel tasks in order to effectively isolate RITL from processes that are not unique to RITL (Cole, Bagic, et al., 2010; Stocco et al., 2012). In addition to generic task-switching processes, these paradigms also controlled for stimulus novelty and peculiarities about the specific task rules used. These designs are able to control for these processes by using the same rules across novel and practiced tasks, yet using novel (i.e., never before seen or executed) combinations of those rules for RITL trials, and practiced (i.e., repeatedly performed) combinations for the control condition. Recent work (Cole, Bagic, et al., 2010; Cole & Braver, 2012) has suggested that RITL involves a unique ‘task set formation’ process that leads to an integrated task set that can then be later retrieved during practiced task preparation (i.e., during typical task switching). Further research will be necessary to fully characterize the similarities and differences between these two modes of task preparation. 
For instance, it will be important to explore differences in the amounts of between-task interference (caused from stimulus reuse) across RITL and typical task switching (Ruge et al., 2011), as well as between different RITL experimental paradigms.
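The combinatorial logic of these designs can be sketched as follows (the specific rule names and counts here are illustrative placeholders, not the actual materials of the cited studies). The same constituent rules appear in every task, but only a few rule combinations are practiced beforehand; the remaining combinations serve as novel tasks, so that novel-versus-practiced contrasts isolate task set formation rather than rule novelty.

```python
from itertools import product

# Reuse the same rules across all tasks, but reserve most rule *combinations*
# as novel. All names and counts below are illustrative assumptions.
logic_rules    = ["same", "different", "both-true", "neither"]
semantic_rules = ["sweet?", "loud?", "green?", "soft?"]
motor_rules    = ["left-index", "left-middle", "right-index", "right-middle"]

all_tasks = list(product(logic_rules, semantic_rules, motor_rules))
practiced = set(all_tasks[:4])                # a few combinations practiced in advance
novel     = [t for t in all_tasks if t not in practiced]

print(len(all_tasks), len(novel))  # 64 tasks in total, 60 of them novel combinations
```

Because every individual rule is equally familiar in both conditions, any novel-versus-practiced difference must reflect the combination, not the constituents.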
LPFC processes underlying RITL
The original neuropsychological studies of RITL raised the important question of what specialized processes are implemented in LPFC during RITL. Modern neuroimaging is allowing us to begin addressing this question. For instance, Ruge and Wolfensteller (2010) found that a large portion of LPFC is involved in RITL, but that regions within LPFC differentiate as initially novel stimulus–response rules become practiced. Specifically, Ruge and Wolfensteller found that activity in several anterior LPFC regions (among others) was high in the initial trials but decreased with rule repetition, while more posterior LPFC regions (among others) increased their activity with rule repetition. This suggests an anterior-to-posterior shift in task control with practice.
Also consistent with the anterior-to-posterior gradient of abstraction within LPFC, Cole, Bagic, et al. (2010) looked at a finer (within-trial) time scale and found, using both magnetoencephalography and functional MRI, that activity flowed from middle to anterior LPFC during RITL. This could reflect relatively concrete task information within middle LPFC (e.g., ‘is it sweet?’, ‘are they the same?’, and ‘press left index finger’) being compositionally combined into more abstract/integrative task information within anterior PFC (e.g., the ‘press your left index finger if both objects are sweet’ task) during novel task preparation in order to coordinate task rules. Importantly, Cole, Bagic, et al. found that this pattern reversed once a task became practiced: Activity flowed from anterior to middle LPFC during practiced-task preparation. This suggests that familiarity with a task changes preparation, such that an abstract/integrative task representation (likely recalled from long-term memory) is used to activate and coordinate more concrete representations for task implementation.
Subcortical contributions to RITL
RITL processes are also present outside LPFC. One potentially important set of locations for RITL-related processes is the basal ganglia. The caudate nucleus, midbrain dopamine nuclei, and other parts of the basal ganglia have been associated with procedural learning and the acquisition of new skills. Single-cell recording in primates learning new rule-based tasks, for instance, suggested that caudate responses anticipate responses in LPFC and mediate the acquisition of new behaviors (Pasupathy & Miller, 2005). In humans, Ruge and Wolfensteller (2010) found that caudate activity during novel trials predicted the amount of learning (measured by decreases in reaction times) for subsequent repetitions of the same task.
The basal ganglia (including the striatum and the midbrain dopamine nuclei), however, are also thought to play an important role in controlling/modulating LPFC activity. In particular, it has been suggested that these regions rapidly select and gate the flow of signals from posterior sensory and motor cortical areas to LPFC (Braver & Cohen, 2000; McNab & Klingberg, 2008; Stocco, Lebiere, & Anderson, 2010; see O’Reilly & Frank, 2006, for a detailed computational model of this process). Thus, basal-ganglia involvement in RITL may extend beyond a simple associative role in learning. The capacity to rapidly gate information to LPFC becomes particularly useful when novel tasks must be learned and executed in a single trial, as with first-trial RITL. Supporting this view, Stocco et al. (2012) analyzed a first-trial RITL experiment, explicitly searching for regions whose activity selectively increased during the execution of novel tasks and carefully excluding those regions whose involvement could be ascribed to either stimulus novelty or task difficulty alone. In addition to LPFC and posterior parietal cortex, the results identified the basal ganglia as a key contributor to the execution of newly instructed tasks.
The key role of another subcortical region—the hippocampus—in long-term memory encoding makes this region likely to be important for the transition from novel- to practiced-task performance. This region might also have a more direct role in RITL, however: Some theories have suggested that hippocampus may also be important for working memory of task sets, especially when tasks are novel (Hasselmo & Stern, 2006) and when they consist of conjunctions of rules or sensory/motor representations (O’Reilly, Braver, & Cohen, 1999). This seems to suggest a critical role for hippocampus in RITL, yet it is clear that hippocampus is unnecessary for RITL, given that hippocampal lesion patients can perform RITL. For instance, patient H.M. was able to use RITL to learn and coordinate the rules of a mirror-tracing task, despite an inability to encode those rules in long-term memory (Squire, 2009). Furthermore, it was shown in a large group of lesion patients that those with hippocampal lesions (along with patients with a variety of other lesions) could use RITL to learn and coordinate a complex set of rules for a visual maze task, while only those with LPFC lesions could not (Milner, 1965).
The cognitive control network’s role in RITL
LPFC is strongly connected with a set of cortical regions sometimes referred to as the fronto-parietal cognitive control network (Cole & Schneider, 2007; Dosenbach et al., 2006; Duncan, 2010; Wager, Jonides, & Reading, 2004). This network is thought to be composed of the majority of LPFC, anterior cingulate/presupplementary motor area, posterior parietal cortex, anterior insular cortex, and sometimes posterior middle temporal cortex. These regions are coactive across a wide variety of studies (Wager et al., 2004) and are more strongly correlated with one another at rest than brain regions are on average (Cole & Schneider, 2007). Although the regions can be dissociated using task functional MRI (Cole & Schneider, 2007) and high connectivity thresholds (Dosenbach et al., 2007), they are more often coactive and connected with each other than with sensory–motor or “default-mode” regions (Fox et al., 2005). This suggests that if LPFC is central to RITL, then all or most of the cognitive control network may be as well.
In surprising contrast to this argument, RITL functional MRI studies to date have indicated that little of the control network is involved in RITL specifically. For instance, when looking relative to a resting baseline (see Cole, 2009), the entire control network was active during both novel and practiced conditions for Cole, Bagic, et al. (2010). However, of the control network regions, only LPFC and posterior parietal cortex were selectively active for RITL relative to practiced tasks. Hartstra et al. (2011) found an even more restricted portion of the control network—left posterior LPFC—when looking for RITL versus practiced activations. In contrast, Ruge and Wolfensteller (2010) found a change in the entire control network between RITL and practiced-task performance. Importantly, however, they found that activity in only LPFC and posterior parietal cortex (along with the caudate) correlated with behavioral improvement between RITL and practiced-task performance, suggesting that these regions were especially important during the learning process. Note that Dumontheil et al. (2011) found increases in activity for the entire control network for large versus small instruction sets during RITL, yet it is difficult to interpret this result, given that this contrast did not control for short-term memory load (the number of task rules) and for several other factors controlled for in the other RITL studies. Together, these studies suggest that only a portion of the control network—including LPFC and posterior parietal cortex—involves processes specific to RITL. It will be important for future research to more directly test this conclusion, however, using region-of-interest analysis and statistical dissociations (Henson, 2005). It will also be important to assess the shared and distinct contributions that these two regions make to RITL and to other forms of flexible cognitive control.
Integrative theoretical account of RITL
A compositional theory of flexible cognitive control
Table 1 Basic principles of the compositional theory of flexible cognitive control
Compositionality
The ability to reuse representations with a variety of others, resulting in massive combinatorics of possible representational combinations for task learning. All of the theory’s principles are ultimately tied to the compositionality of representations and how this benefits cognitive flexibility.
Transfer
The combination of practiced rules into novel configurations, resulting in novel-task performance benefiting from previous practice.
Abstraction
The compositional grouping of representations (including feature subsets of full representations) into categories. The activation of such feature subsets of one concept can then take part in representing a different concept. This is highly related to compositionality and facilitates transfer.
Analogy
The recognition of similarity between two or more representations. This allows selection of features common among those representations to form an abstraction, which can then transfer to new tasks during RITL.
Compositional hierarchy
A series of representations, ultimately based on simple sensory–motor features, building upon each other in stages, with increasing abstraction and/or complexity at each stage. Lower-level representations in such a hierarchy can be compositionally combined and coordinated by higher-level (abstract and/or complex) representations to create novel task sets during RITL.
We suggest that two properties of LPFC make it ideal for implementing the rapid compositionality necessary for RITL: (1) rapid updating of activity and connectivity patterns, due to gating by dopamine and/or other basal ganglia signals (Braver & Cohen, 2000; McNab & Klingberg, 2008; O’Reilly & Frank, 2006; Stocco, Lebiere, & Anderson, 2010; Stocco et al., 2012), and (2) high global connectivity (Cole, Anticevic, Repovs, & Barch, 2011; Cole, Pathak, & Schneider, 2010) resulting in latent connectivity (previously unused connections and connection patterns that can quickly come into use when necessary; Fig. 6), allowing for a combinatorial explosion of possible active connectivity patterns across a wide range of possible task semantics.
Another important aspect of LPFC is its ability to represent rules and other abstract concepts (Haynes et al., 2007; Wallis, Anderson, & Miller, 2001). Abstraction is defined here as the grouping of representations or representational features into categories, allowing for the activation of a subset of representational features of one concept in representing a different concept. For instance, the abstract concept ‘circle’ is a category defined by a common set of subfeatures that have been extracted across many instances of perceiving specific imperfect circles (e.g., uniform roundness). Alternatively, a different sort of abstraction (a “policy abstraction”; Badre & D’Esposito, 2009) such as ‘make coffee’ is a category defined by several related sets of action representations, with each set able to lead you to make coffee in a different situation (e.g., one action set might involve grinding coffee beans, while another might not). These examples can also be conceptualized in terms of LPFC having broad (categorical) receptive fields (Seger & Miller, 2010). Abstraction is clearly a critical feature of a compositional architecture, given that its definition is nearly identical to that of compositionality itself (i.e., the ability for a representation to be meaningfully applied across multiple situations). Abstractions are further related to compositionality in that abstractions are likely built from the compositional combination of features constituting the range of a given abstraction-representing neuron’s receptive field (e.g., all relevant cat-like features for representing “cat”).
Note that posterior cortical regions outside LPFC (i.e., in the temporal, parietal, and occipital lobes) build and represent abstractions as well (Kiehl et al., 1999), and that both concrete (Fuster, Bauer, & Jervey, 1985; Pouget, Emeric, Stuphorn, Reis, & Schall, 2005) and abstract (Muhammad, Wallis, & Miller, 2006) representations are projected from posterior cortex to LPFC, such that extensive representational redundancy exists across these posterior and LPFC systems. Critically, the theory differentiates these systems by characterizing posterior cortex as representing the semantics of the external world, in contrast to LPFC organizing representations in terms of task/goal relevance. This, along with rapid updating and latent connectivity, makes novel task-relevant cognitive configurations (largely unconstrained by established semantics/experience) readily available in LPFC during RITL and other situations requiring flexible cognition, while allowing established semantics of the external world to remain intact within posterior cortex. Due to the bidirectional connectivity between LPFC and posterior representations (Fuster et al., 1985), activated representations are distributed across both systems, allowing for simultaneous activation of rich (yet overconstrained) semantics in posterior cortex and flexible (yet underconstrained) representations within LPFC.
Abstraction, compositionality, and transfer are highly related to the concept of analogy—“the perception of like relational patterns across different contexts” (Gentner & Colhoun, 2010, p. 35). As the circle example above illustrates, analogy among multiple instances of imperfect circles can lead to mappings among the common elements between the circles, creating an abstract representation of “circle” that can generalize to new instances, allowing knowledge learned about circles to transfer to new contexts (i.e., to new circles or things similar to circles—e.g., spheres). We suggest that such analogical mapping and the resulting abstract representation (Gentner & Medina, 1998) can occur within LPFC (in concert with other brain regions), leading to the ability to immediately transfer knowledge and skills during RITL. Identification of analogical similarity between existing abstract representations and a novel task (e.g., via keywords during instruction) is also an important part of this process.
There is evidence that LPFC neuron receptive fields include a variety of nonintuitive variants of task-relevant rules (Jun et al., 2010; Rigotti, Rubin, Wang, & Fusi, 2010), such as “is green” being partially represented by neurons that fire most to “not red.” This suggests that the flexibility necessary for RITL may arise in part from coarse-coded conjunctive representations (O’Reilly, Busby, & Soto, 2003; Rigotti et al., 2010) formed from the combination of various semi-task-relevant abstract representations into task sets. It has been suggested that this kind of representational binding occurs via the synchrony/coactivation of neurons with relevant receptive fields (Fries, 2005). Another account has suggested that representational binding occurs via activation of (and feedback from) higher-level conjunction neurons (O’Reilly & Rudy, 2001). We posit that these two mechanisms of binding—synchrony and conjunction—are in fact complementary mechanisms, in which feedforward synchrony can activate higher-level conjunctions and feedback activation of conjunctions can lead to lower-level synchrony in a representational hierarchy (see Table 1), resulting in the binding of representations via both mechanisms. These principles of the theory are similar to those used in a previous computational model of “compositional connectionism” (Hummel et al., 2004). It will be important for future research to verify the exact mechanisms underlying rapid feature binding (sometimes called ‘variable binding’) during RITL, however.
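The proposed complementarity of the two binding mechanisms can be made concrete with a minimal sketch. This is an illustrative toy, not a biophysical model: the unit names and features below are hypothetical, and "synchrony" is reduced to same-step coactivation.

```python
# Toy sketch of the two proposed binding mechanisms: feedforward
# synchrony activating a higher-level conjunction unit, and feedback
# from that conjunction unit re-imposing synchrony at the lower level.
# Unit/feature names are hypothetical illustrations.

def feedforward_conjunction(active_low_level, conjunction_inputs):
    """A conjunction unit fires only when all of its lower-level
    inputs are coactive (synchronous) on the same time step."""
    return conjunction_inputs <= active_low_level  # subset test

def feedback_synchrony(conjunction_active, conjunction_inputs, active_low_level):
    """Feedback from an active conjunction unit reactivates (binds)
    all of its lower-level inputs, yielding synchrony among them."""
    if conjunction_active:
        return active_low_level | conjunction_inputs
    return active_low_level

# A conjunction over two lower-level rule features:
conj = {"is green", "left button"}

# Feedforward: synchronous coactivation of both features drives the conjunction
assert feedforward_conjunction({"is green", "left button"}, conj)
assert not feedforward_conjunction({"is green"}, conj)

# Feedback: activating the conjunction from above synchronizes its constituents
assert feedback_synchrony(True, conj, {"is green"}) == {"is green", "left button"}
```

The point of the sketch is only the directionality: the same conjunction unit participates in binding whether activation arrives bottom-up (synchrony first) or top-down (conjunction first).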
In sum, this theoretical model can be conceptualized as a specific form of domain-general working memory that emphasizes rapid updating, compositionality, and combinatorics of the representations within task sets. Another way to conceptualize the proposed model is as a projection of many posterior cortical representations (i.e., perceptual, motor, semantic, and long-term memories) to LPFC (see Dehaene, Kerszberg, & Changeux, 1998), in which billions of combinations of those representations are functionally available for selection and goal-directed sustained processing at a moment’s notice. The theory postulates that having this extra space for representations to interact combinatorially provides the human brain with an immensely flexible architecture capable of such computational feats as first-trial RITL.
Specific mechanisms of the compositional theory
Table 2 Mechanistic principles of the compositional theory of flexible cognitive control
Multisystem global connectivity
LPFC connectivity with many content-specific systems throughout the brain, giving access to many potentially task-relevant representations.
Rapid updating
A fast change of active content within LPFC, likely via a mechanism (basal ganglia) that gates instruction information (from posterior cortex).
Within-LPFC global connectivity
Extensive connectivity between neurons within LPFC, allowing for complex processing and latent connectivity (see below).
Latent connectivity
Unused connections and connectivity patterns that can become used as necessary by novel tasks during RITL.
Coarse-coded conjunctive representations
A large set of neurons with broad receptive fields that receive inputs from (potentially random) combinations of each other, to produce many conjunctive receptive fields. This allows for representational binding, general processing (see below), and the other principles considered here.
Synchrony binding
Binding via synchronous coactivation of multiple representations, allowing for rapid selection (see below) of sets of representations to achieve massive combinatorics during RITL.
Incremental selection
Slow, multi-trial selection and tuning of task-representing neurons and connections for optimizing task performance from practice. Transfer of subsets of these neurons and connections to new tasks facilitates RITL.
Rapid selection
Fast selection of novel representations and (previously incrementally selected) representations from practiced tasks during RITL.
General processing hierarchy
Specific instantiations of a “compositional hierarchy” (see Table 1). This consists of many connected neural populations, building a wide variety of representations via conjunctions, unions, and other set theory operations, ultimately based on primitives in primary sensory–motor cortices. Due to wiring costs that promote short-distance connectivity, this results in multiple hierarchies of processing, starting from primary cortices and going outward anatomically in terms of complexity and abstraction. We focus on the general processing hierarchy within LPFC.
Hierarchical conservation
A bias to incrementally select and strengthen more posterior (lower-level) representations of a task during practice.
Population adaptive coding
The ability of LPFC as a whole to represent a wide variety of possible tasks. This is accomplished by compositionally selecting sets of individual neurons, each with relatively static and coarse coding, that together specify the processes necessary to implement each specific task.
The many sets of abstract/complex representations are also made possible by extensive within-LPFC global connectivity (only recently investigated; Cole, Anticevic, et al., 2011), which likely allows for the building of sets of abstract/complex features. This principle of the model is consistent with recent nonhuman primate work demonstrating immense variability in LPFC single-neuron receptive fields (Jun et al., 2010), given that such observations could reflect the building of abstract/complex representations via extensive within-LPFC connectivity.
There are clear capacity limits on working memory (Conway & Engle, 1996)—and, by proxy, on RITL and LPFC—such that the tremendous combinatorics of possible sets of coactivated features within LPFC may overwhelm the system’s limited capacity as the appropriate configuration is being searched for. The theory deals with this by allowing sets of features distributed between LPFC and posterior cortex to be incrementally selected, and connections among them to be strengthened over many trials during prior experiences (i.e., to be ‘chunked’ via repeated use; Hebb, 1949; Lynch, 2004), and then rapidly selected (and coordinated within LPFC) with a limited number of other features via activation of instruction semantics during RITL. The reactivation of incrementally selected sets of features allows for immediate transfer of previously learned abilities as novel combinations of such feature sets are rapidly selected during RITL. One important example of this process is the incremental selection of associations between words and rule meanings (i.e., selected and strengthened connections from language regions to LPFC), which can then be rapidly selected by incoming linguistic instructions during RITL. This aspect of the theory is consistent with a recent formulation of working memory in which a distinction exists between activated long-term memory (activation of representations that were incrementally selected and/or strengthened/refined during consolidation) and a ‘region of direct access’ that is able to flexibly select and bind a variety of possible novel representations (Meiran, Cole, & Braver, 2012; Oberauer, 2009).
It is theoretically possible for the building of abstract/complex representations within LPFC to emerge from random connectivity built upon sensory/motor primitives from posterior cortex (Rigotti et al., 2010). There is evidence, however, for a posterior-to-anterior hierarchy of processing or representation in LPFC (see the previous sections). It is possible that this hierarchy supports efficient building of abstract/complex representations used during RITL. However, controversy currently exists regarding the exact nature of this LPFC hierarchy (Badre, 2008; Reynolds, O’Reilly, Cohen, & Braver, 2012): Some studies have suggested that it is a processing hierarchy organized by time or action (Botvinick, 2008; Koechlin et al., 2003), while others have suggested that it is a representational hierarchy organized by abstraction (Badre & D’Esposito, 2007). We suggest that LPFC builds abstract and complex representations—which become processes when activated (due to downstream effects of connectivity)—in a general processing hierarchy. This avoids the current controversy by subsuming the two camps: gradients of abstraction, complexity, action, and time are all built using conjunctions of random sets (and sets of sets) of sensory–motor primitives (ultimately, from primary sensory–motor regions). Consider, for instance, the general processing hierarchy illustrated in Fig. 6. From the several primitives at the bottom of the chart (already somewhat built up from primitives in, e.g., V1), both abstractions (e.g., ‘is green’) and complex task representations (e.g., “press the left button when you see red”) are built. Representational hierarchies of time and action are possible due to the existence of temporal and motor primitives (i.e., neurons that fire for particular event timings or motor movements) for building upon in the hierarchy.
The observed anterior-to-posterior LPFC activation shift with practice (see Fig. 4) can be accounted for by positing a hierarchical conservation principle for the theoretical model. This principle suggests that the incremental selection occurring during practice is biased toward selecting and strengthening representations/connections lower in the general processing hierarchy. Thus, while the initial RITL rapid selection likely involves both high-level and low-level representations, the representations involved can be whittled down over time by incrementally selecting posterior representations to more efficiently represent the task set. The theory suggests that higher-level representations in anterior LPFC are involved during RITL for two reasons: to allow for (1) transfer via abstract representations in anterior LPFC (see above) that can readily transfer rules across task contexts and (2) activation of a wide variety of ad hoc, coarse-coded representations that together can represent the task rapidly but inefficiently during RITL. With practice, abstract representations (in anterior LPFC) can become less involved, as more task-specific (in posterior LPFC) representations/connections become tuned and incrementally selected to perform the task. In the case of an abstract or complex task, the posterior shift cannot go very far, given that anterior representations are necessary to represent such task sets even after connections are selected and tuned (see Fig. 4B). In contrast, concrete stimulus–response associations (see Ruge & Wolfensteller, 2010) can become fully automatic with enough practice (Schneider & Shiffrin, 1977), such that they can go all the way down the hierarchy to sensory–motor cortices (Chein & Schneider, 2005; Schneider & Chein, 2003). This suggests that there may be three learning stages for concrete tasks, based on the states of incrementally selected task representations: (1) RITL, (2) controlled, and (3) automatic (Chein & Schneider, 2012). 
The first stage involves instruction interpretation, ad hoc coarse-coded representation, and transfer, whereas controlled processing involves incremental selection and tuning of representations, eventually resulting in highly efficient automatic processing. It will be important for future research to test these predictions and to better characterize the transitions between stages of skill acquisition.
The combinatorial explosion of possible tasks is a major issue for neural theories of RITL, and several of the principles postulated above help to address it. To illustrate the issue, consider that a conservative estimate of 10,000 concepts available to humans (Biederman, 1987) would result in over 160 billion possible triplets for RITL (see the introduction of the compositional theory above for the combinatorial equation). The human neocortex has only 16 billion neurons (Azevedo et al., 2009), with only a fraction of these being within LPFC, such that it would be impossible for each conceptual combination to have a dedicated neuron. Coarse-coded conjunctions and synchrony binding (see above) would allow for the reuse of neurons across contexts, such that LPFC could represent more combinations than the number of neurons within it, since these mechanisms would allow concepts to be built from sets of reusable subfeatures (O’Reilly et al., 2003). Similarly, the general processing hierarchy could allow for compositional reuse of lower-level concepts via various higher-level representations within the hierarchy. Importantly, these principles help deal with the combinatorial explosion of possible tasks while allowing for efficient compositional transfer of rules during RITL.
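The arithmetic behind these figures is a straightforward binomial-coefficient calculation, assuming the ~10,000-concept estimate cited in the text:

```python
# Back-of-the-envelope check of the combinatorial claim, assuming the
# ~10,000-concept estimate (Biederman, 1987) used in the text.
from math import comb

concepts = 10_000
triplets = comb(concepts, 3)           # unordered three-concept combinations
print(triplets)                        # 166,616,670,000 -> "over 160 billion"

neocortical_neurons = 16_000_000_000   # ~16 billion (Azevedo et al., 2009)
# An order of magnitude more triplets than neocortical neurons, so a
# dedicated neuron per conceptual combination is impossible:
assert triplets > 10 * neocortical_neurons
```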
The compositional theory versus the adaptive coding theory
The present compositional theory is compatible with a variety of existing theories, as we outlined above. However, the compositional theory appears to be incompatible with the adaptive coding theory (Duncan, 2001). This theory posits that neurons within LPFC are adaptive and change their receptive fields across task contexts; the compositional theory, on the other hand, requires that receptive fields be relatively rigid so as to allow for transfer (see Fig. 6). The adaptive coding theory is based on the observation of LPFC being active in humans across many task contexts (Duncan & Owen, 2000) and on the observation of macaque monkey LPFC neurons representing whatever task rule had been used during training (Freedman, Riesenhuber, Poggio, & Miller, 2001). Supporting the compositional theory, however, are the human functional MRI studies considered above, which found that constituent-rule representations remain stable within LPFC despite changes in task context (see Fig. 5).
Also incompatible with the adaptive coding theory, a recent study with macaque monkeys found that different categories are represented in separate LPFC neural populations (Roy, Riesenhuber, Poggio, & Miller, 2010). Importantly, in that study the exact same stimuli were used for both categories (e.g., a large cat could be categorized using either cat vs. dog or large vs. small animal), such that the categories were in conflict. Another recent study showed that nonconflicting categories involving distinct stimuli (e.g., for cars, sedan vs. sports car, and for animals, cat vs. dog) are represented in the same LPFC neurons (Cromer, Roy, & Miller, 2010). This appears to support the adaptive coding theory, yet, when considered along with the results of Roy et al., it is actually compatible with the compositional theory. Specifically, in contrast to the adaptive coding theory, these studies suggest that each LPFC neuron uses a complex static receptive field—of, for instance, “cat OR sedan”—to represent both of the categorical distinctions sedan versus sports car and cat versus dog (rather than shift its receptive field depending on the context) when the categories are not in conflict. When the categories are in conflict, however, LPFC uses neurons with nonoverlapping static receptive fields to reduce interference.
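The contrast between complex static receptive fields and context-shifting receptive fields can be sketched in a few lines. This is a schematic illustration only; the stimuli and feature labels are hypothetical, and real receptive fields are graded rather than set-valued.

```python
# Sketch of a complex static "cat OR sedan" receptive field (cf. Cromer,
# Roy, & Miller, 2010) versus separate populations for conflicting
# categories (cf. Roy et al., 2010). Features are illustrative.

def fires(receptive_field, stimulus_features):
    """A neuron with a static OR receptive field fires if the stimulus
    matches any feature in the field; the field itself never changes."""
    return bool(receptive_field & stimulus_features)

cat_or_sedan = {"cat", "sedan"}   # one static, complex receptive field

# The same neuron serves two nonconflicting category distinctions:
assert fires(cat_or_sedan, {"cat", "large"})       # animal task: cat vs. dog
assert not fires(cat_or_sedan, {"dog", "small"})
assert fires(cat_or_sedan, {"sedan"})              # car task: sedan vs. sports car
assert not fires(cat_or_sedan, {"sports car"})

# Conflicting categories over the SAME stimulus (a large cat) instead
# require neurons with nonoverlapping static fields, with task context
# determining which population is read out:
species_neuron, size_neuron = {"cat"}, {"large"}
large_cat = {"cat", "large"}
assert fires(species_neuron, large_cat) and fires(size_neuron, large_cat)
```

Nothing in the sketch ever rewrites a receptive field across contexts; apparent adaptivity comes entirely from which static fields are consulted.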
In other words, it appears that receptive fields are not adaptive so much as complex—such that they appear to be adaptive in certain contexts. Thus, both data and theory suggest that representations within LPFC are consistent across contexts, allowing for compositional transfer of LPFC representations between related tasks. More specifically, unlike the adaptive coding theory, the compositional theory suggests that the receptive-field properties of LPFC neurons change only slowly, allowing experience-dependent tuning of representations via connection strength changes to incrementally improve task-specific performance, which can then rapidly transfer to new related tasks during RITL.
It may be possible to make the compositional theory compatible with a variant of the adaptive coding theory—population adaptive coding. We suggest that individual neurons have relatively static receptive fields (Roy et al., 2010), allowing for transfer, but that LPFC as a whole is highly adaptive (compatible with Duncan & Owen, 2000). The compositional theory suggests that this is possible because of the great variety of intermixed receptive fields within LPFC (Jun et al., 2010; Rigotti et al., 2010). Specifically, the great variety of static coarse-coded representations within LPFC can be conceptualized as a large set of “basis functions” that can be rapidly selected, such that together they “fit” the task parameters specified by instructions during RITL. The very large number of possible sets of such basis functions allows for highly adaptive population coding within LPFC, while allowing for compositional transfer due to static coding of individual neurons.
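The basis-function idea can be sketched as rapid selection over a fixed pool of coarse, overlapping receptive fields. The "neurons," their fields, and the greedy selection rule below are all illustrative assumptions; the point is only that a static basis plus fast selection yields an adaptive population.

```python
# Sketch of "population adaptive coding": each neuron's coarse receptive
# field is static, but a task-specific SUBSET of neurons is rapidly
# selected so that together they "fit" the instructed task. The fields
# and the greedy-cover selection rule are hypothetical.

BASIS = {                      # fixed, coarse, overlapping receptive fields
    "n1": {"is green", "is red"},
    "n2": {"left button", "right button"},
    "n3": {"is green", "left button"},
    "n4": {"same word", "different word"},
}

def select_population(task_features):
    """Greedily pick neurons whose static fields jointly cover the task."""
    chosen, covered = [], set()
    while covered != task_features:
        name, field = max(BASIS.items(),
                          key=lambda kv: len((kv[1] & task_features) - covered))
        if not (field & task_features) - covered:
            break              # task not representable by this basis
        chosen.append(name)
        covered |= field & task_features
    return chosen, covered

# A novel instruction ("press left if green") selects an ad hoc population:
pop, covered = select_population({"is green", "left button"})
assert covered == {"is green", "left button"}
```

No connection weights change during selection, which is what allows the same basis to be reused, and hence to transfer, across arbitrarily many novel tasks.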
It is important to consider that the same compositional coarse coding described above that results in abstract representations would also result in the kinds of complex receptive fields described by Cromer et al. (2010). Random variations in compositional combinations of representations can result in standard abstractions like “red OR orange” (equivalent to “hot” colors), but they can also result in more counterintuitive, complex representations like “cat OR sedan.” The compositional theory suggests that such combinations of unrelated concepts provide two functions in LPFC: (1) allowing LPFC to represent a wider variety of concepts without increasing the number of neurons and (2) providing a mechanism for “far transfer,” in which similarities between seemingly unrelated concepts allow for learning in one context to transfer to another. To illustrate, consider the possibility that you recently learned to sell your sedan on a new online marketplace (like eBay) and you now want to use the same method to sell your cat. With a set of “cat OR sedan” neurons tuned to the online marketplace concepts and procedure, you can readily transfer them to allow for RITL when selling your cat.
Consider, however, that it is also very important for LPFC to have neurons with very distinct/orthogonal receptive fields, in order to reduce interference when transfer is not possible (e.g., if the online marketplace has different procedures for selling cars and selling animals). There are also other forms of conflict during RITL (i.e., negative transfer) from previous associations. The compositional theory emphasizes selection of the correct activity/connectivity pattern in LPFC for novel-task performance, with suppression of previous associations occurring through activation of orthogonal representations. Identifying the specific mechanisms for selecting and adaptively increasing activation of orthogonal representations will be an important area for future RITL (and general cognitive control) research. Two possible mechanisms include (1) conflict detection by medial prefrontal cortex increasing the activation of orthogonal representations and/or suppressing nonorthogonal representations (Botvinick, Braver, Barch, Carter, & Cohen, 2001; Cole, Yeung, Freiwald, & Botvinick, 2009) and (2) the activation of LPFC neurons during RITL (as part of the complex set of coactive neurons specifying a given task set) suppressing irrelevant previous associations—possibly via LPFC projections to inhibitory neurons in the thalamus (Barbas & Zikopoulos, 2007) and/or via gating by basal ganglia (Stocco, Lebiere, & Anderson, 2010).
Predictions of the compositional theory
The compositional theory makes a variety of predictions about neural and behavioral factors that should lead to increased RITL abilities, and to increased cognitive flexibility generally (e.g., set shifting, divergent thinking/creativity, and fluid intelligence). Our expectation that increased RITL abilities will correspond to increases in general cognitive flexibility comes from the extraordinary speed (one trial) and adaptability (involving complex novel brain configurations) required for RITL—attributes that are shared yet typically taxed less by other forms of flexible cognition. Also supporting this unified view of cognitive flexibility is recent work demonstrating that fluid intelligence (related to RITL; Dumontheil et al., 2011; Duncan, Schramm, Thompson, & Dumontheil, 2012) and creativity—typically considered to be uncorrelated abilities—are actually highly correlated once less noisy ‘latent’ measures are used (Nusbaum & Silvia, 2011; Silvia & Beaty, 2012). The predictions postulated below can be tested in a variety of ways, such as by using variability between individuals, groups, species, or cognitive/brain states. This is not an exhaustive list of predictions of the theory, but rather a general outline of possible predictions. We expect that more explicit computational or mathematical implementations of the theory will make predictions that are more specific and critical for the theory in the future.
One important prediction is that greater global connectivity, both within LPFC and between LPFC and the rest of the brain, should result in greater RITL abilities. Greater multisystem global connectivity would allow LPFC to better access a variety of potentially task-relevant systems, allowing it to receive and influence more task-relevant information during RITL and other situations requiring flexible cognitive control (Cole et al., 2012). Similarly, the within-LPFC global connectivity prediction is based on the resulting increase in latent connectivity, which would likely result in greater representational capacity for novel conceptual configurations (as was discussed above).
Having more neurons within LPFC (measured, e.g., as LPFC gray-matter volume) should also increase representational capacity, resulting in greater RITL abilities. This would increase latent connectivity substantially (exponentially with increases in the number of neurons), yet it would increase representational capacity in other ways as well. Rather than just reflecting the number of possible configurations, as in latent connectivity, having more neurons can increase the orthogonality of those configurations. The theory predicts that increased orthogonality/distinctness should reduce between-rule interference and allow for activation of multiple rules simultaneously during transfer of practiced rules to novel contexts. It will be important, therefore, to assess whether having more LPFC neurons corresponds with greater representational orthogonality within LPFC, and whether that in turn corresponds with better RITL abilities. Importantly, evidence already supports this prediction of the theory. Specifically, the theory is compatible with the finding (described above) that macaque monkeys can perform RITL-like behavior (despite having low LPFC representational capacity) only once between-task interference is eliminated (Cromer et al., 2011; Fig. 2A). The theory suggests that humans are better at nonlinguistic RITL than monkeys primarily because of greater representational capacity, which reduces between-task interference. This not only reduces catastrophic interference during RITL, but also improves transfer of shared concepts between tasks to facilitate RITL further.
Perhaps the most counterintuitive prediction of the theory is that a higher “learning rate” (the rate at which connection weights change between neurons, changing those neurons’ receptive fields) in LPFC should result in slower task learning, via a reduction in RITL abilities. More precisely, the theory predicts that a learning rate that is optimal for most reinforcement-learning (or other forms of incremental learning) situations should be higher than the optimal learning rate for RITL. We expect that this prediction can be tested using computational models manipulating learning rates, or using single-unit recording or neuroimaging to measure differences in the rates of receptive-field change across individuals. This prediction is based on the theoretical claim that a higher learning rate would result in overfitting of LPFC connectivity to practiced task contexts, paradoxically reducing the ability to generalize practiced rules to novel contexts. Thus, the compositional theory directly argues against theories of cognitive flexibility that posit fast weight-based learning in LPFC as a key mechanism (e.g., Bugmann, 2011). Note that a higher learning rate might be effective for situations in which prior learning has no relevance or is incompatible with the to-be-learned task (i.e., negative transfer), but we suggest that such situations are rare once a sufficiently large set of generally relevant abstract rules have been learned.
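The overfitting intuition behind this prediction can be illustrated with a one-parameter delta-rule toy. The targets, rates, and the identification of 1.0 as the "context-general rule" are all illustrative assumptions, not a model from the source:

```python
# Toy delta-rule learner illustrating the prediction: a high learning
# rate overfits the latest context, a low one converges on the shared
# rule. Targets 0.8 and 1.2 stand for the same underlying rule (1.0)
# expressed in two different practiced contexts; all numbers are
# illustrative.

def train(lr, targets, w=0.0):
    for t in targets:
        w += lr * (t - w)      # delta rule: move w toward current target
    return w

contexts = [0.8, 1.2] * 50     # alternate between two practiced contexts
w_high = train(1.0, contexts)  # high rate: w simply tracks the last context
w_low = train(0.05, contexts)  # low rate: w settles near the shared rule

rule = 1.0                     # the context-general value to be transferred
assert abs(w_low - rule) < abs(w_high - rule)
```

In this caricature the fast learner ends exactly at the most recent context's target, so its error on a novel context equals the full context-specific perturbation, whereas the slow learner's weight averages the contexts out.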
Another potentially counterintuitive prediction of the theory is that using a rule in a variety of task contexts should improve RITL performance with that rule. One might expect that using a rule in many contexts would increase the number of associations formed with that rule, increasing between-rule interference in novel contexts. The theory, however, predicts that using the rule in many contexts would reduce overfitting of the rule to any one context, increasing the ability of the rule to generalize to new contexts. The theory’s specific mechanism for this involves strengthening of rule-consistent connectivity and weakening of rule-inconsistent connectivity, such that using the rule in more contexts reduces between-rule connectivity (and thus interference). This leads to the surprising prediction that a paradigm using a rule in many task contexts should show only negligible reductions in performance relative to a paradigm with only a few task contexts for the rule (which would promote overfitting). A related prediction is that using a rule with a variety of other rules should reduce between-rule interference within LPFC during RITL, rather than increasing it. This may be one reason that RITL performance was so high (>90% accuracy) for Cole, Bagic, et al. (2010), despite the use of each rule with many others.
Implications and future directions for RITL research
Potential applications of the compositional theory and of RITL research generally
Our society relies heavily on the human capacity for RITL. For instance, instructed learning is the dominant form of learning in scholastic education and professional training, putting those with lower RITL abilities at a disadvantage. The prominence of RITL in everyday life, along with variation in RITL abilities across individuals and groups (e.g., younger vs. older adults), suggests that future advances in RITL research will have important practical applications.
We present several predictions of the compositional theory as illustrations of potential future applications of RITL research. In the domain of education, the compositional theory predicts that students should be able to increase RITL abilities by practicing a general strategy of identifying common constituent concepts across multiple tasks and trying to apply familiar concepts to new problems whenever possible. This prediction is not completely new, as others have emphasized the importance of metaphor, analogy, and self-explanation for transfer in education (Billing, 2007; Gentner, Loewenstein, & Thompson, 2003; VanLehn, Jones, & Chi, 1992; Wormeli, 2009). However, the mechanistic grounding that the compositional theory provides may lead to elaboration and refinement of strategies promoting between-task transfer, as well as other approaches to improve RITL.
Similar to students seeking between-task transfer, the compositional theory predicts that RITL should be enhanced by instructors (or cognitive tutors; Ritter, Anderson, Koedinger, & Corbett, 2007) emphasizing common concepts among tasks. For instance, instructors could label and repeatedly point out ‘deep structure’ common to solving several word problems in a mathematics course. Lesson plans should also, when introducing new concepts or tasks, build on constituent concepts already familiar to all or most students.
The compositional theory also predicts that there exists an optimal set of abstract concepts that can be recombined to allow for RITL of most tasks that anyone is likely to learn in a lifetime. An important application of future RITL research may be to identify these abstract concepts (perhaps through careful analysis of tasks and rule frequency) and to teach them to students to facilitate RITL abilities. The compositional theory also predicts that it will be important to practice using the abstractions repeatedly in many unique contexts, to strengthen their representational connectivity within LPFC while avoiding overfitting to a small subset of contexts. One major example of this is mathematics—a set of procedural abstractions already identified as important and taught universally—yet there are likely other sets of important common concepts (even in mathematics) that are not being taught at present.
Even with the availability of optimal strategies and lesson plans, there will always be individual differences in RITL abilities. This puts some individuals or groups at a disadvantage. For instance, older adults have difficulty with RITL relative to young adults, such as when learning to use new technology (Hickman, Rogers, & Fisk, 2007). Similarly, deficits in flexible cognitive control—as indexed by fluid reasoning abilities—are present in a wide variety of mental illnesses (Gale, Batty, Tynelius, Deary, & Rasmussen, 2010; Gottfredson & Saklofske, 2009; Koenen et al., 2009). It will be important to identify the severity of RITL (and general flexible cognition) deficits in each of these groups and to look for ways to alleviate the detrimental effects of these deficits (e.g., on education and employment).
The compositional theory may facilitate the transition from identification to treatment of flexible cognition deficits by suggesting mechanisms of action. For example, the theory predicts that drugs affecting dopamine should affect rapid updating in LPFC, potentially improving RITL abilities during mental illness. The theory also predicts that drugs targeting other neurotransmitters within LPFC (e.g., acetylcholine; Croxson, Kyriazis, & Baxter, 2011) may enhance other LPFC mechanisms supporting RITL, such as orthogonality of representations to facilitate transfer. It should also be possible to enhance transfer during mental illness using cognitive strategies similar to those suggested for education, such as emphasizing between-task similarities during RITL. It will be important for detailed computational implementations of the compositional model to make more nuanced predictions of ways that RITL deficits can be alleviated in a variety of mental illnesses.
Future directions for RITL research
The cognitive neuroscience of learning is currently dominated by reinforcement-learning research. However, RITL is a much more powerful form of learning in many instances (see Fig. 1), and the basic cognitive and neural mechanisms underlying this ability are in need of further investigation. Uncovering the basic mechanisms of RITL will likely also provide important insights into flexible cognition generally, as rapidly learning a never-performed task is one of the best demonstrations of cognitive flexibility possible.
It will be especially important for future RITL research to investigate the role of RITL in mental illnesses. This need arises not just from scientific curiosity: mental illnesses that likely impair RITL (and thus the ability to rapidly adapt) become increasingly debilitating as technological innovation accelerates the rate of change in the world. It is currently unclear exactly which mental illnesses affect RITL abilities. However, the observations that LPFC lesions decimate RITL abilities (Luria, 1973) and that diseases such as schizophrenia (which involve LPFC disruption; Barch et al., 2001) limit the ease of cognitive task learning (Barch, Braver, Carter, Poldrack, & Robbins, 2009; Young & Freyslinger, 1995) suggest that RITL is affected by a variety of mental illnesses. The link between general fluid intelligence and RITL (Dumontheil et al., 2011; Duncan et al., 2012)—and the widespread association of mental illness with impaired fluid intelligence (Koenen et al., 2009)—further suggests that RITL deficits are widespread. Research into which mental illnesses, if any, involve RITL deficits could yield important new insights into the nature of those deficits, especially with regard to cognitive flexibility. Furthermore, the mechanisms by which RITL is impaired may differ across mental illnesses, mandating different therapeutic strategies to improve RITL for different mental diseases.
Recent innovations in RITL research promise a bevy of new insights regarding human learning and intelligence. Cognitive paradigm designs that permute rule combinations to investigate novel relative to practiced tasks appear especially promising for isolating RITL processes from stimulus novelty and general task-switching processes (Cole, Bagic, et al., 2010; Stocco et al., 2012). It will be important for future research to investigate how such rule-combination approaches—which lend themselves to investigating ‘abstract’ task learning—differ from ‘concrete’ stimulus–response rule learning (see Fig. 3). It will be especially important for such research to differentiate RITL-specific processes from stimulus novelty and general task-switching processes in these ‘concrete’ rule-learning paradigms, in addition to exploring the possibility of greater between-task interference in ‘abstract’ relative to ‘concrete’ paradigms.
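The permuted-rule logic of such paradigms is easy to make concrete. The sketch below uses placeholder rule names, not the actual rules of any published paradigm: crossing a handful of rules per dimension yields dozens of distinct tasks, only a few of which need to be practiced for every individual rule to be familiar when the remaining tasks are encountered as novel.

```python
from itertools import product

# Hypothetical rule sets; a real paradigm might use, e.g., logic,
# semantic, and motor rule dimensions with four rules each.
dim_a = ["a1", "a2", "a3", "a4"]
dim_b = ["b1", "b2", "b3", "b4"]
dim_c = ["c1", "c2", "c3", "c4"]

tasks = list(product(dim_a, dim_b, dim_c))  # every rule combination
practiced = [(dim_a[i], dim_b[i], dim_c[i]) for i in range(4)]
novel = [t for t in tasks if t not in practiced]

# 4 x 4 x 4 = 64 tasks; practicing just 4 of them exposes every
# individual rule, so novel tasks recombine only familiar parts.
practiced_rules = {r for task in practiced for r in task}
all_rules = set(dim_a + dim_b + dim_c)
print(len(tasks), len(novel), practiced_rules == all_rules)
# prints: 64 60 True
```

Because every rule appears in the practiced set, performance differences on the 60 novel tasks can be attributed to novel rule combinations rather than to unfamiliar individual rules or stimuli, which is what isolates RITL from stimulus novelty and general task switching.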
RITL research provides unprecedented access to the kind of cognitive flexibility that makes human cognition unique. Unlike studies utilizing planning, problem solving, or reinforcement learning, RITL paradigms directly reference novel mental states (rather than forcing pseudorandom exploration), and so are typically better controlled by the experimenter. This increased control allows for more precise and efficient experimental paradigms. For instance, Cole, Bagic, et al. (2010) were able to investigate cognitive flexibility across 64 tasks while carefully controlling for extraneous factors. We expect that this combination of increased experimental control and rapid access to a virtually infinite variety of possible mental configurations will lead to new insights into the impressive human capacity for flexible cognitive control.
RITL is something we encounter every day, yet we understand surprisingly little about it. Beyond providing an understanding of this basic ability, we suggest that the RITL framework provides unprecedented access to an even more fundamental cognitive ability: flexibly controlling cognition and behavior according to task demands. We expect that further development and application of the RITL framework, and of the compositional theory presented here, will lead to the emergence of important new insights into flexible cognitive control.
This work was supported by National Institutes of Health Grant No. MH096801. We thank Todd Braver, as well as Jeff Zacks, Alan Anticevic, Grega Repovs, and Jeremy Reynolds, for invaluable feedback and suggestions during preparation of the manuscript.
- Anderson, J. R. (1976). Language, memory, and thought. Hillsdale, NJ: Erlbaum.
- Azevedo, F. A. C., Carvalho, L. R. B., Grinberg, L. T., Farfel, J. M., Ferretti, R. E. L., Leite, R. E. P., . . . Herculano-Houzel, S. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513, 532–541. doi: 10.1002/cne.21974
- Braver, T. S., & Cohen, J. D. (2000). On the control of control: The role of dopamine in regulating prefrontal function and working memory. In S. Monsell & J. Driver (Eds.), Control of cognitive processes: Attention and performance XVIII (pp. 713–737). Cambridge, MA: MIT Press.
- Bugmann, G. (2011). Modeling fast stimulus–response association learning along the occipito-parieto-frontal pathway following rule instructions. Brain Research, 1–17. doi: 10.1016/j.brainres.2011.09.028
- Cole, M. W. (2009). The biological basis of rapid instructed task learning. Unpublished dissertation, University of California, Berkeley, CA.
- Cole, M. W., & Braver, T. S. (2012). Switching between novel tasks: Evidence for a distinct task set formation process. Manuscript submitted for publication.
- Dosenbach, N. U. F., Visscher, K. M., Palmer, E. D., Miezin, F. M., Wenger, K. K., Kang, H. C., . . . Petersen, S. E. (2006). A core system for the implementation of task sets. Neuron, 50, 799–812. doi: 10.1016/j.neuron.2006.04.031
- Duncan, J., Schramm, M., Thompson, R., & Dumontheil, I. (2012). Task rules, working memory, and fluid intelligence. Psychonomic Bulletin & Review. doi: 10.3758/s13423-012-0225-y
- Fox, M. D., Snyder, A. Z., Vincent, J. L., Corbetta, M., Van Essen, D. C., & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences, 102, 9673–9678. doi: 10.1073/pnas.0504136102
- Gentner, D., & Colhoun, J. (2010). Analogical processes in human thinking and learning. In B. M. Glatzeder, V. Goel, & A. von Müller (Eds.), Towards a theory of thinking: Building blocks for a conceptual framework (pp. 35–48). Berlin, Germany: Springer. doi: 10.1007/978-3-642-03129-8_3
- Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. New York, NY: Wiley.
- Holmes, N. (2005). Wordless diagrams. New York, NY: Bloomsbury.
- Hummel, J. E., Holyoak, K. J., Green, C., Doumas, L. A. A., Devnich, D., Kittur, A., & Kalar, D. J. (2004). A solution to the binding problem for compositional connectionism. In S. D. Levy & R. Gayler (Eds.), Compositional connectionism in cognitive science: Papers from the AAAI Fall Symposium (pp. 31–34). New York, NY: ACM.
- Lebiere, C., & Anderson, J. (1993). A connectionist implementation of the ACT-R production system. In W. Kintsch (Ed.), Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society (pp. 635–640). Hillsdale, NJ: Erlbaum.
- Luria, A. R. (1973). The frontal lobes and the regulation of behavior. In K. H. Pribram & A. R. Luria (Eds.), Psychophysiology of the frontal lobes (pp. 3–28). New York, NY: Academic Press.
- Milner, B. (1964). Some effects of frontal lobectomy in man. In J. M. Warren & K. Akert (Eds.), The frontal granular cortex and behavior. New York, NY: McGraw-Hill.
- Monsell, S. (1996). Control of mental processes. In V. Bruce (Ed.), Unsolved mysteries of the mind: Tutorial essays in cognition (pp. 93–148). Hove, U.K.: Erlbaum.
- Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
- Noelle, D., & Cottrell, G. (1996). Modeling interference effects in instructed category learning. In G. W. Cottrell (Ed.), Proceedings of the 18th Annual Conference of the Cognitive Science Society (pp. 475–480). Hillsdale, NJ: Erlbaum.
- O’Reilly, R. C., Braver, T. S., & Cohen, J. D. (1999). A biologically based computational model of working memory. In A. Miyake & P. Shah (Eds.), Models of working memory: Mechanisms of active maintenance and executive control (pp. 375–411). New York, NY: Cambridge University Press.
- O’Reilly, R. C., Busby, R. S., & Soto, R. (2003). Three forms of binding and their neural substrates: Alternatives to temporal synchrony. In A. Cleeremans (Ed.), The unity of consciousness: Binding, integration, and dissociation (pp. 168–192). Oxford, U.K.: Oxford University Press.
- Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church, J. A., . . . Petersen, S. E. (2011). Functional network organization of the human brain. Neuron, 72, 665–678. doi: 10.1016/j.neuron.2011.09.006
- Rabbitt, P. (1997). Methodology of frontal and executive function. Hove, U.K.: Psychology Press.
- Ramamoorthy, A., & Verguts, T. (2012). Word and deed: A computational model of instruction following. Brain Research, 1–12. doi: 10.1016/j.brainres.2011.12.025
- Rendell, L., Boyd, R., Cownden, D., Enquist, M., Eriksson, K., Feldman, M. W., . . . Laland, K. N. (2010). Why copy others? Insights from the social learning strategies tournament. Science, 328, 208–213. doi: 10.1126/science.1184719
- Ruge, H., Jamadar, S., Zimmermann, U., & Karayanidis, F. (2011). The many faces of preparatory control in task switching: Reviewing a decade of fMRI research. Human Brain Mapping. doi: 10.1002/hbm.21420
- Rumelhart, D. E., McClelland, J. L., & the PDP Research Group. (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1). Cambridge, MA: MIT Press, Bradford Books.
- Schneider, W., & Chein, J. (2003). Controlled and automatic processing: Behavior, theory, and biological mechanisms. Cognitive Science, 27, 525–559.
- Schneider, W., & Oliver, W. L. (1991). An instructable connectionist/control architecture: Using rule-based instructions to accomplish connectionist learning in a human time scale. In K. VanLehn (Ed.), Architectures for intelligence: The Twenty-Second Carnegie Mellon Symposium on Cognition (pp. 113–146). Hillsdale, NJ: Erlbaum.
- Semendeferi, K., Armstrong, E., Schleicher, A., Zilles, K., & Van Hoesen, G. W. (2001). Prefrontal cortex in humans and apes: A comparative study of area 10. American Journal of Physical Anthropology, 114, 224–241. doi: 10.1002/1096-8644(200103)114:3<224::AID-AJPA1022>3.0.CO;2-I
- Singley, M. K., & Anderson, J. R. (1989). The transfer of cognitive skill (Vol. 9). Cambridge, MA: Harvard University Press.
- Stocco, A., Lebiere, C., O’Reilly, R. C., & Anderson, J. R. (2010). The role of the basal ganglia–anterior prefrontal circuit as a biological instruction interpreter. In Biologically inspired cognitive architectures 2010 (pp. 153–162). Amsterdam, The Netherlands: IOS Press.
- Stocco, A., Lebiere, C., O’Reilly, R. C., & Anderson, J. R. (2012). Distinct contributions of the caudate nucleus, rostral prefrontal cortex, and parietal cortex to the execution of instructed tasks. Cognitive, Affective, & Behavioral Neuroscience. doi: 10.3758/s13415-012-0117-7
- Sutton, R., & Barto, A. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.
- Wormeli, R. (2009). Metaphors & analogies: Power tools for teaching any subject. Stenhouse Publishers.