One tradition of thinking about skill will reject the reasons-sensitive ideal. This tradition’s idea is that the skilled agent tunes her capacities through practice until their coordinated production of behavior flows automatically. The more practice-driven automaticity we find in an agent, the higher the level of skill we should expect to see (Fitts and Posner 1967, Anderson 1982, Dreyfus and Dreyfus 1986).
Automaticity theories of skill are motivated in part by the thought that processes that have not been automated through practice consume more of the agent’s limited processing resources, and run more slowly.Footnote 6 Human agents are limited in certain ways, and one way to overcome these limitations is to automate as much as possible.
That’s true as far as it goes, but a newer family of empirical approaches to skill recognizes that automaticity cannot explain the full range of skilled behavior of which human agents are capable (Ericsson 2006, Fridland 2014, Shepherd 2015b, Christensen et al. 2016, Montero 2016), even if automaticity remains in some ways important (see Stanley and Krakauer 2013, Mylopoulos and Pacherie 2017). Much work is being done on the details of studies from sport psychology, sensorimotor adaptation, and cognitive control. But here is a very general reason to doubt that skills can be explained entirely in terms of practice-driven automaticity.Footnote 7 Practice-driven automaticity makes transitions between states rigid – paradigmatically, transitions between stimulus and response. Many action domains, however, present the agent with a wide range of subtly different circumstances. And in many action domains, conditions on the ground may change the ways a stimulus should be registered, or the kind of response that would best suit the agent’s goals. Not only is the action space too complex in many cases; the relationships between states and goal-achievement are too fluid, too holistic, to enable the kind of learning that could lead to automatization. Whenever one renders a process automatic, one should hope that the loss of flexibility will be outweighed by the gains in processing speed and resource consumption. For very complex action domains, this will sometimes be a foolhardy hope.
We are here tracking a general tension between automatization and reason sensitivity. Automatization makes rigid the joints of processes and behavioral sequences. If these joints remain fluid, they are easier to manipulate, to detach, to recombine with other modes of behavior. For familiar reasons to do with the recombinatorial structure of concepts, this is especially true of processes that are driven by capacities for conceptual cognition. The rationale for automatization is that such fluidity is expensive – computationally, temporally, or energetically. So the agent has good reason to automatize, provided the sequence she renders automatic is not a sequence better left fluid.Footnote 8
I have been speaking as though the agent has a choice about the automatization of behavioral sequences. But of course her choices are limited. Learning processes render associations between items in a behavioral sequence more automatic according to their own rules. They run – agents learn – more or less constantly (Collins and Frank 2013). So task-set structures, the quality of behavioral sequence connections, the viability of various state-transitions leading to goal achievement: these things are being constantly assessed by learning and control mechanisms, leading to the pruning of an agent’s behavioral space. And in principle, any number of processes, or state-transition sequences, may be rendered more automatic.
Consider, for example, a three-step tree of behavioral options. The agent is hungry. The first step involves two options: find the kitchen or find the local ramblas. If the agent selects the kitchen, further options will present themselves at the second step: find the pantry, find the refrigerator, etc. The same is true of the ramblas: find the tapas place, find the kebab place, etc. Still further options will be available at the third step, involving the specific food the agent winds up eating. Now say that we associate the specific foods with reward-levels for the agent. And we ask: how will our agent make her way through this behavioral space?
She may do so via a computationally expensive modelling process, whereby she simulates what is best to do at each stage, comparing costs and benefits along the way. Or she may, if she has done this enough, proceed immediately from hunger to some specific option at the third step. Perhaps a connection from hunger to kebab place has become fairly automatized for her. This will be much less expensive computationally. But it will be rigid.
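The contrast can be made concrete with a toy sketch. All of the reward levels below are hypothetical, and the functions are illustrative inventions, not a model from the literature: a model-based agent simulates every branch of the three-step tree before choosing, while an automatized agent follows a single cached stimulus–response link.

```python
# A minimal sketch (all values hypothetical) of the two ways the hungry
# agent might traverse the three-step behavioral tree: expensive
# model-based simulation vs. a cheap but rigid automatized shortcut.

# The behavioral tree: options at each step, rewards at the leaves.
TREE = {
    "hungry": ["kitchen", "ramblas"],
    "kitchen": ["pantry", "refrigerator"],
    "ramblas": ["tapas place", "kebab place"],
}
REWARD = {  # hypothetical reward levels for the specific foods
    "pantry": {"crackers": 2, "cereal": 3},
    "refrigerator": {"leftovers": 5, "cheese": 4},
    "tapas place": {"xipirones": 8, "patatas": 6},
    "kebab place": {"kebab": 7, "falafel": 5},
}

def model_based_choice(state="hungry"):
    """Simulate every branch and back up the best achievable reward.
    Computationally expensive, but flexible: if REWARD changes, so
    does the answer."""
    if state in REWARD:                      # third step: pick the best food
        food = max(REWARD[state], key=REWARD[state].get)
        return [state, food], REWARD[state][food]
    best_path, best_value = None, float("-inf")
    for option in TREE[state]:               # compare costs and benefits
        path, value = model_based_choice(option)
        if value > best_value:
            best_path, best_value = [state] + path, value
    return best_path, best_value

# The automatized alternative: a cached stimulus-response link.  Cheap
# (one lookup, no simulation), but rigid -- it fires even if the kebab
# place has closed or the reward structure has changed.
HABIT = {"hungry": ["kebab place", "kebab"]}

def habitual_choice(state="hungry"):
    return HABIT[state]

print(model_based_choice())   # deliberative route to the highest reward
print(habitual_choice())      # rigid shortcut straight to the kebab
```

The habitual lookup is a single step, where the model-based search must simulate the whole tree; but only the latter remains sensitive to changes in the reward structure. The same cached link could equally be installed at an intermediate node of the tree rather than at the root.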
Note that I am not claiming that a relatively automatized process is incapable, in virtue of its automaticity, of supporting an event we would classify as recognition of, or response to, a reason for action. I see no good reason to deny that relatively automatic processes may qualify as relatively automatic recognition of reasons.Footnote 9 But it does seem plausible to think that processes that have undergone automatization will less frequently qualify as recognizing the reason for what it is in the relevant circumstance. The rationale is that reasons for action often bear subtle relationships to particular characteristics of specific situations. But automatized processes tend to be triggered by relatively stereotyped considerations – stimuli that occur repeatedly enough to drive the automatization in the first place. So automatized responses may often more plausibly qualify as a kind of psychological sensitivity to cues, rather than a cognitive sensitivity to reasons as reasons.
The claim I am presently making is that, depending upon her learning history, any of the steps in this behavioral tree may be reinforced, and may become more automatized. The agent may have an automatic connection between hunger and heading for the kitchen, and she may depend on expensive modelling and reasoning once she gets there. Or the agent may reason her way to the tapas place, at which point the selection of the xipirones are automatic. Or the agent may have automatized an intermediate link, between the kitchen and the refrigerator, for example (see Cushman and Morris 2015, Daw 2015, Haith and Krakauer 2018).
The significance of these points is that the basic conflict between automatization and reason-sensitivity is present at each stage of reason handling, and at each level of abstraction within the hierarchical computational architecture of action production. If the agent’s planning processes are more automatized, then she will be more likely to display what Daw has called ‘abstract habits’ (2015, 13750), and what Cushman and Morris call ‘habits of thought’ (2015, 13817). This will leave room for more flexibility with the implementation of goals that are selected more automatically. In general, at each stage the agent’s learning will be sailing between flexibility and reason-sensitivity on the one hand, and cost-efficiency and higher levels of performance on the other.
Equally exciting is the following idea. Perhaps in some cases what agents automatize is not just a transition link between states, but a link between modes of behaving. Perhaps, that is, agents sometimes automatize a link between a type of stimulus and a subsequent need to engage in more flexible processing. Braem (2017) demonstrated that if you reward participants more for engaging in task switching as opposed to task repetition, they will begin to spontaneously choose to engage in task switching. But task switching is a hallmark of cognitive control – more difficult, more computationally expensive, and ultimately a more flexible mode of behavior. Why would participants behave in this way? It may be that a more automatized entry into task switching behavior gives participants more time and preparation to engage in this mode. Chiu and Egner (2017) demonstrated that if participants are taught to associate (likely implicitly) certain stimuli with a need for task switching in order to succeed, their task switching behavior will display smaller switch costs (i.e., quicker switches). As Braem and Egner note in a recent review, ‘these studies suggest that the choice to be cognitively flexible is very susceptible to its recent reinforcement-learning history’ (2018, 472).
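A crude way to see how such a preference could emerge from reinforcement learning alone is a toy simulation. This is not Braem’s actual paradigm, and the payoff and effort values are invented: a simple delta-rule learner choosing between a cheap ‘repeat’ mode and an effortful but better-rewarded ‘switch’ mode comes to prefer switching.

```python
import random

# Toy sketch (hypothetical values, not Braem's design): the *choice* to
# be cognitively flexible is itself shaped by reinforcement learning.
# Two modes of responding: "repeat" (easy, low reward) and "switch"
# (effortful, but rewarded more, as in the task-switching studies above).
random.seed(0)

PAYOFF = {"repeat": 1.0, "switch": 2.0}   # hypothetical reward levels
EFFORT = {"repeat": 0.1, "switch": 0.5}   # switching costs more effort

q = {"repeat": 0.0, "switch": 0.0}        # learned value of each mode
alpha, epsilon = 0.1, 0.1                 # learning rate, exploration rate

for trial in range(1000):
    if random.random() < epsilon:                 # occasionally explore
        mode = random.choice(["repeat", "switch"])
    else:                                         # otherwise exploit
        mode = max(q, key=q.get)
    reward = PAYOFF[mode] - EFFORT[mode]          # net payoff of the mode
    q[mode] += alpha * (reward - q[mode])         # simple delta-rule update

# Despite its extra effort cost, switching ends up with the higher
# learned value, so the agent spontaneously chooses to switch.
print(q)
```

Nothing in the learner distinguishes a ‘mode of behaving’ from an ordinary response option; flexibility wins simply because its reinforcement history favors it, which is the point of the perspective discussed next.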
Indeed, Braem and Egner argue for a perspective on which cognitive flexibility itself is a mode of responding that needs to be learned in various contexts. There is more research to be done, but I find their arguments plausible, so I quote them at length:
The basic premise of this perspective is that, rather than seeing cognitive flexibility as originating from a standalone module (or brain region) that intervenes – like a deus ex machina – to solve problems in lower-level associative processing, the processes underlying cognitive flexibility are grounded in the same learning framework (and associative network) as simple stimulus-response associations. Thus, while cognitive control processes are higher level in that they can produce generalizable benefits, their regulation must be understood in terms of basic associative-learning processes. (474)
What this means, more generally, is that whether the agent’s behavioral space is pruned in the right way – whether her cognitive routines, behavioral sequences, and action options have been ‘chunked’ and ‘parsed’ (see, e.g., Collins and Frank 2013) in the right way – very much depends upon whether the agent’s behavioral space is well-suited to the domains in which she acts. More complex domains will generally require greater flexibility and reason-sensitivity. Less complex domains will permit greater degrees of automatization. Many domains will reward specific behavioral structures – automatization at some places, expensive flexibility at others. This will depend upon features like the structure of the domain, the stability of the circumstances in the domain, the agent’s level of ability, and the ways an agent’s ability-levels fluctuate across times and circumstances. But it appears that (at least human) agents have tricks to maximize success. In some cases, we may use automaticity to promote flexible modes of behavior.