Introduction

Category learning, our ability to organize our experiences into meaningful concepts that can be leveraged in novel situations, is fundamental to the human experience. Not only are we able to group objects according to basic features such as colour and shape, but we are also capable of learning highly abstract and multivariate categories with relatively little practice. Besides commonplace categories such as “edible” and “friendly” and their antagonistic equivalents, objects can be assembled based on one or several complex perceptual features. Influential category-learning models posit that novel objects are categorized according to their relative positions in a multidimensional psychological space populated with known category members in which the distance between objects determines their degree of similarity (Shepard, 1957). While this perceptual space can be composed of an unlimited number of dimensions, it is often the case that only some inform categorization decisions, and the weights of those few dimensions can vary (Nosofsky, 1986; Seger & Miller, 2010).

Although category learning has been studied for decades (Shepard, 1957; Young & Householder, 1938), more recent work has converged on a comprehensive account that formalizes the computational and neural mechanisms underlying successful learning (Zeithamova et al., 2019). While previous accounts of category learning debated the nature of concept representations (e.g., exemplar vs. prototype), the multifactorial nature of concept learning has been found to implicate myriad neural systems. For instance, while exemplar-based representations recruit lateral prefrontal cortex (LPFC) and parietal areas (Mack et al., 2013), representations based on category prototypes recruit hippocampus and ventromedial prefrontal cortex (Bowman & Zeithamova, 2018). In addition to similarity-based comparisons of representations, successful learning requires higher-order inferential processes, for example, when faced with novelty or uncertainty invoked by a given stimulus (Paniukov & Davis, 2018). A comprehensive account must incorporate each of these processes; how these multiple brain systems interact remains to be defined.

Much of this category-learning work has combined sophisticated neural analyses with formal model predictions to link the representations formed during learning with the task strategies that influence what is learned (Bowman & Zeithamova, 2018; Braunlich & Seger, 2016; Mack et al., 2013; Mack et al., 2016). There is, however, a missing component: the decision-making process itself. Limited work has explored the neural mechanisms that govern how category knowledge, once learned, is applied to novel situations to yield measurable behavioural changes in decision making.

Investigating the neural processes of category decision making requires a computational theory of how such decisions unfold. The Exemplar-Based Random Walk (EBRW) model (Nosofsky & Palmeri, 1997) formalizes category decisions as an evidence accumulation process. Evidence in support of different categories is sampled over time through similarity-based retrieval of category exemplars motivated by the seminal Generalized Context Model (GCM) (Nosofsky, 1986). The key innovation of EBRW is its ability to predict both response probabilities and speed, thereby providing a comprehensive formal account of the behaviour underlying categorization decision making.

Here, motivated by the EBRW framework and recent advances in interrogating brain function with computational theory (Kragel et al., 2015), we leverage a combination of computational models in an exploratory approach to identifying neural processes linked to categorization decision making. Our approach significantly extends prior work that has interrogated brain function with categorization models (e.g., Bowman & Zeithamova, 2018; Davis et al., 2017; Mack et al., 2013) by (1) targeting neural response that fully characterizes decision responses and speeds, and (2) focusing on participant-specific predictions through hierarchical model analyses. Although the EBRW model provides a strong theoretical framework for interpreting potentially informative neural processes, there are currently no analytic approaches for applying EBRW in a hierarchical manner that captures individual differences in neural function in category decision making. As such, we split the primary elements of EBRW – that is, exemplar-based category representations that drive an accumulation of noisy decision evidence – into a two-stage analytic approach to best approximate the formal structure of EBRW: First, we interrogate neural signals related to decision making with a hierarchical variant of the drift diffusion model (DDM; Ratcliff, 1978), a computational model that approximates the decision making mechanism of EBRW. Second, we evaluate the correspondence between the identified brain measures and individually tailored predictions of the GCM, the model that the exemplar-based category representations of EBRW are based on. Since EBRW formalizes that category representations impact decision making through changes to the rate of evidence accumulation (Nosofsky & Palmeri, 1997) and that this sort of category evidence has been correlated with neural function in prefrontal and parietal cortices and striatum (Davis et al., 2017), we focus on the drift rate parameter in the DDM. 
In essence, we test whether or not constraining drift rate to vary across trials in the same manner as fluctuations in neural signal better captures category decisions, and if so, whether or not that neural signal is related to exemplar-based predictions of category evidence. Although this approach is a departure from a pure EBRW analysis, it does provide a pragmatic solution to the challenge of linking brain and model in a category decision-making framework that accounts for response times.

We test the hypothesis that brain activation during a classic categorization task (Mack et al., 2013; Medin & Schaffer, 1978) corresponds to the rate of category evidence accumulation on a trial-by-trial basis. Given that new experiences modulate activity in a network of brain regions (e.g., Kafkas & Montaldi, 2018), we also test the hypothesis that neural decision signals may vary when category knowledge is generalized to novel relative to previously encountered stimuli. Given the exploratory nature of this approach (Thompson et al., 2020), we take a purposefully uninformed view of brain function. Specifically, we interrogate neural response from a parcellation of distinct regions across the whole brain independently identified in a large-scale, data-driven analysis of resting state and task-based interregional connectivity (Schaefer et al., 2018). This approach provides a key first approximation of how to identify neural function underlying complex categorization behaviour through the lens of a formal computational theory.

Methods

The current study leverages a previously published open-access dataset (Mack et al., 2013). This dataset, which includes behavioural data along with structural and functional magnetic resonance imaging (fMRI) data, was downloaded from OSF (https://osf.io/62rgs/). We include most methodological details here; a full description can be found in the original paper.

Participants

Data from 20 participants were included in the primary analyses (age range 19–33 years; mean age 23.5 years; 14 female).

Stimuli

The stimulus set was composed of 16 objects consisting of simple shapes enclosed in a grey, horizontally oriented rectangle (Fig. 1). The simple shape varied based on four salient binary-valued features (colour: red or green, shape: circle or triangle, size: large or small, and position: right or left). For each participant, the four features were randomly assigned to the four dimensions defined by the 5/4 category structure (Medin & Schaffer, 1978). This structure is divided into two categories with the prototype member of category A corresponding to [0,0,0,0] and the prototype member of category B corresponding to [1,1,1,1]. Nine objects served as the training items with five for category A and four for category B. The remaining seven objects served as a transfer set.

Fig. 1

Stimuli and test performance. Stimuli composed of four binary dimensions were split into training (five A and four B items) and transfer sets (example stimulus set shown in top row). Test performance showed typical accuracy (middle) and median reaction time (bottom) performance as in previous reports (e.g., Medin & Schaffer, 1978). Lighter dots depict participant-specific averages, darker dots depict group averages, and error bars depict 95% confidence intervals

Procedures

After providing informed consent in accordance with the University of Texas Institutional Review Board, participants were instructed that they would be shown simple objects composed of different features and that their task was to learn which of two categories each object belonged to through corrective feedback. Participants performed the training phase of the experiment in a behavioural testing room on a laptop computer. On each training trial, one of the nine training stimuli was displayed for 3.5 s and participants indicated the stimulus’s category by pressing one of two labelled keys on the keyboard. Then, a fixation cross was presented for 0.5 s, followed by a feedback display that presented the stimulus, the correct category, and whether the participant’s response was correct or incorrect for 3.5 s. The nine training stimuli were presented 20 times in randomized order during the initial training outside the scanner. Participants also completed an additional four training repetitions inside the MRI scanner during an anatomical scan as a refresher of the training items’ category membership.

After training, participants performed the testing phase during fMRI scanning. On each test trial, one of 16 stimuli (consisting of the nine training stimuli and seven novel transfer stimuli) was displayed for 3.5 s and participants made a category response by pressing one of two buttons on an MRI-compatible button box. A fixation cross was then presented for 6.5 s. No feedback was provided during the testing phase. The 16 stimuli were presented three times in randomized order during six functional runs for 18 total repetitions per stimulus.

fMRI data acquisition and preprocessing

Whole-brain imaging data were acquired on a 3.0T GE Signa MRI system (GE Medical Systems). Structural images were acquired using a T2-weighted flow-compensated spin-echo pulse sequence (TR = 3 s; TE = 68 ms, 256 × 256 matrix, 1 × 1 mm in-plane resolution) with 33 oblique axial slices (3 mm thick, 0.6 mm gap) oriented approximately 20° off the AC-PC line. Functional images were acquired with an echo planar imaging sequence using the same slice prescription as the structural images (TR = 2 s, TE = 30.5 ms, flip angle = 73°, 64 × 64 matrix, 3.75 × 3.75 mm in-plane resolution, bottom-up interleaved acquisition, 0.6 mm gap). An additional high-resolution T1-weighted 3D SPGR structural volume (256 × 256 × 172 matrix, 1 × 1 × 1.3 mm voxels) was acquired for registration and brain parcellation.

Anatomical and functional MRI data for each participant were preprocessed using the fMRIPrep automated MRI workflow (version 1.0.15; Esteban et al., 2019), which included brain extraction, motion correction, co-registration between functional and T1 volumes, and normalization to the MNI 2009c asymmetric brain template. AROMA-identified noise components were regressed out of the functional timeseries. For each run, trial-level beta parameters were estimated from the functional timeseries across the whole brain using the LS-S approach (Mumford et al., 2012). This approach provides whole brain voxel-level estimates of BOLD response for each trial across the entire experiment (i.e., the degree of brain activation on each trial for each participant). These beta estimates across trials were averaged within 100 regions of interest (ROIs) as defined by a resting-state and task-based brain parcellation (Schaefer et al., 2018) and an additional eight ROIs from subcortical regions including right and left hippocampus, caudate, putamen, and thalamus.
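To illustrate the LS-S logic, the sketch below implements its core idea in numpy: each trial's beta comes from a separate GLM containing that trial's regressor plus a single nuisance regressor summing all other trials. This is a toy version with simplified, non-overlapping boxcar regressors and no HRF convolution; all names and values are illustrative, not the actual pipeline.

```python
import numpy as np

def lss_betas(y, trial_regressors):
    """Estimate one beta per trial with the LS-S approach (Mumford et al., 2012):
    for each trial, fit a GLM with that trial's regressor plus one nuisance
    regressor that sums all remaining trials."""
    n_trials = trial_regressors.shape[1]
    betas = np.empty(n_trials)
    for t in range(n_trials):
        target = trial_regressors[:, t]
        others = trial_regressors.sum(axis=1) - target  # all other trials combined
        X = np.column_stack([target, others, np.ones_like(y)])  # + intercept
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        betas[t] = coef[0]  # beta for the target trial only
    return betas

# Toy example: 3 trials as non-overlapping boxcars (no HRF, for simplicity)
rng = np.random.default_rng(0)
n_scans, n_trials = 60, 3
X_trials = np.zeros((n_scans, n_trials))
for t in range(n_trials):
    X_trials[t * 20 : t * 20 + 5, t] = 1.0
true_betas = np.array([1.0, 2.0, 3.0])
y = X_trials @ true_betas + 0.1 * rng.standard_normal(n_scans)
betas = lss_betas(y, X_trials)  # recovers values near [1, 2, 3]
```

In the actual analysis, these voxel-level trial betas were then averaged within each of the 108 ROIs.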

Brain-informed drift diffusion modelling

DDM analyses were conducted by first averaging beta estimates (i.e., trial-specific estimates of neural activation) within each ROI for each trial, resulting in an ROI-specific timeseries (Fig. 2B) that characterized how BOLD response within the ROI varied across trials. Separate DDM simulations were conducted for each ROI wherein trial-by-trial changes in drift rate were linked to the ROI timeseries. Additionally, the relationship between drift rate (v) and ROI activation was allowed to vary by stimulus type (training vs. transfer items). Thus, drift rate was modelled as a linear regression of ROI neural activation, stimulus type, and their interaction (v ~ ROI + type + ROI:type). This relationship was evaluated with DDM simulations implemented with the Hierarchical Drift Diffusion Model (HDDM) library (Wiecki et al., 2013), which performs hierarchical Bayesian parameter estimation to predict response choices and times. Markov chain Monte Carlo sampling was conducted for 20,000 samples with 10,000 burn-in and thinning set to 2. The deviance information criterion (DIC) for each ROI-based model was compared to a baseline model that included no link between drift rate and neural activation (i.e., drift rate was a free parameter within the constraints of the hierarchical parameter estimation). ROI-based models with DIC values at least 10 lower than that of the baseline model were considered significantly better fits (Spiegelhalter et al., 2002).
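To make the brain-to-drift linkage concrete, the sketch below simulates diffusion trials whose drift rates follow a hypothetical v ~ ROI + type + ROI:type regression. The coefficients and trial values are invented for illustration; the actual analysis estimated these regression weights hierarchically with the HDDM library rather than by forward simulation.

```python
import numpy as np

def simulate_ddm(v, a=1.0, z=0.5, dt=0.001, sigma=1.0, t0=0.3, rng=None):
    """Simulate one diffusion trial: evidence starts at z*a and drifts at
    rate v until it hits 0 or a. Returns (choice, reaction time)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t + t0  # t0 = non-decision time

rng = np.random.default_rng(1)
roi_beta = rng.standard_normal(200)         # trial-wise ROI activation (invented)
is_transfer = rng.integers(0, 2, size=200)  # stimulus type: 0 = training, 1 = transfer
# Hypothetical regression linking drift rate to neural signal: v ~ ROI + type + ROI:type
b0, b_roi, b_type, b_int = 1.0, 0.5, -0.2, 0.3
v = b0 + b_roi * roi_beta + b_type * is_transfer + b_int * roi_beta * is_transfer
choices, rts = zip(*[simulate_ddm(v_t, rng=rng) for v_t in v])
```

Trials with higher drift rates tend to produce faster, more accurate responses, which is how the model translates trial-by-trial neural fluctuations into joint predictions of choice and RT.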

Fig. 2

Brain parcellation and analysis schematic. (A) Neural activation from 108 regions of interest (ROI) derived from a data-driven resting-state functional parcellation and anatomically defined subcortical regions were investigated. (B) Mean beta estimates within each ROI were extracted and leveraged to predict trial-by-trial changes in drift rate in DDM simulations of categorization decisions

Parameter estimates from ROI-based models meeting the DIC criterion were further explored by analyzing the posterior distributions of effects due to neural activation and stimulus type (training vs. transfer items) on drift rate. Specifically, the existence of an effect was quantified with the probability of direction, pd, which is defined as the proportion of posterior samples in the most probable direction (i.e., pd ranges from 0.5 to 1 with values closer to 1 for more likely effects). pd is akin to the frequentist p-value and is best interpreted as the degree of evidence against a null effect (Makowski et al., 2019). Given that we performed 108 independent model fits, the issue of multiple comparisons inflating the familywise error rate had to be considered. As such, we performed a permutation test on each of the regions that demonstrated significantly lower DIC values. The permutation test consisted of shuffling the ROI beta values across trials within each participant, performing hierarchical Bayesian parameter estimation, and saving the resulting DIC value. This process was repeated 1,000 times, each time with a random shuffling of beta values, in order to define a null distribution of DIC values for the DDM model fit to that specific ROI. An empirical significance level was then calculated as the proportion of the null distribution that was smaller than the observed DIC value and compared to a Bonferroni-corrected significance level of 4.63 × 10⁻⁴ (α = 0.05 and 108 tests). To aid in interpretation, significance values based on these permutation tests are reported as Bonferroni-adjusted p-values (padj) such that they can be compared to α = 0.05. Although this overall statistical approach mixes frequentist and Bayesian rationales for interpreting statistical significance, it provides an appropriately conservative view for the type of data-driven exploratory analysis central to the current study.
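Both statistics reduce to simple sample proportions. A brief sketch with simulated posterior and null-distribution samples (all values here are hypothetical stand-ins for the model outputs):

```python
import numpy as np

def probability_of_direction(posterior):
    """pd: proportion of posterior samples in the most probable direction."""
    p_pos = np.mean(np.asarray(posterior) > 0)
    return max(p_pos, 1 - p_pos)

def permutation_p(observed_dic, null_dics):
    """Empirical significance: proportion of the null DIC distribution that is
    smaller than (i.e., fits better than) the observed DIC."""
    return np.mean(np.asarray(null_dics) < observed_dic)

rng = np.random.default_rng(2)
posterior = rng.normal(-0.4, 0.2, size=10_000)  # hypothetical posterior samples
null_dics = rng.normal(6725.0, 5.0, size=1_000)  # hypothetical permutation DICs
pd_val = probability_of_direction(posterior)     # near 1: strong negative effect
p_emp = permutation_p(6711.5, null_dics)         # small: observed DIC beats null
alpha_bonf = 0.05 / 108                          # Bonferroni threshold, 4.63e-4
```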

Linking category evidence and neural response

Leveraging the HDDM provides a quantitative means for interrogating the link between neural activation and categorization decisions. In particular, finding that trial-by-trial fluctuations in the neural activation of certain brain regions are related to model-based predictions of behaviour suggests that these regions play an important role in mapping sensory information onto category knowledge. However, the DDM is agnostic to the specific mechanisms underlying computations of category evidence; thus, it represents only one component of the motivating EBRW framework.

To test the hypothesis that the degree of evidence for one category over another is reflected in the neural dynamics of the identified ROIs, we performed an additional analysis that linked participant-specific cognitive model predictions of categorization to neural activation in the ROIs identified by the brain-informed DDM analysis. Specifically, we generated predictions of category evidence with the GCM (Nosofsky, 1986). A key mechanism of GCM is selective attention, whereby diagnostic feature dimensions for a given task are weighted to varying degrees in calculating similarity to stored category exemplars and thus modulate the evidence for each category. It has been previously shown that individual differences in both categorization performance and neural representations (e.g., Braunlich & Love, 2019; Mack et al., 2013) are related to attention weighting in the GCM.

Thus, to quantify participant-specific predictions of category evidence, we fit the GCM to each participant’s behaviour by optimizing its parameters of dimensional attention weights (w), sensitivity (c), and decision gain (γ) with a genetic algorithm approach (differential evolution in scipy version 1.3.0) to maximize the likelihood of response probabilities during the last four repetitions of training (Mack et al., 2013). We focused on end-of-learning behaviour to estimate model parameters with data independent from the data collected during the testing period. Following prior reports, r was set to 1 such that similarity values were computed according to city-block distance (Mack et al., 2013; Zaki et al., 2003). Participant-specific optimized parameters were then leveraged to predict for each stimulus the degree of discriminatory evidence (ev) for the most probable category. This measure of category evidence for stimulus x is the absolute value of the difference between the summed similarity for category A and B:

$$ {ev}_x=\left|\sum_{y\in A}e^{-c\sum_{i\in d}w_i\left|x_i-y_i\right|}-\sum_{y\in B}e^{-c\sum_{i\in d}w_i\left|x_i-y_i\right|}\right|, $$

where d is the set of feature dimensions, A and B are the set of training items in the two categories, w is the set of optimized dimension weights, and c is sensitivity. We then evaluated the relationship between category evidence (ev) and trial-by-trial neural activation with a mixed effects linear regression. Specifically, category evidence was modelled as the response, neural activation as the predictor, and random intercepts were included for participants. Given that category evidence is exponentially distributed, a log link function was included in the regression model. Regression analyses were conducted with Bayesian estimation (rstanarm R package version 2.19.3, R version 4.0.0). Neural activation models were compared to a baseline model that only included random intercepts.
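The evidence equation translates directly into code. The sketch below uses a standard coding of the 5/4 training items with category A’s prototype at [0,0,0,0] and, purely for illustration, the group-mean parameter values reported in the Results; the actual analysis used participant-specific fitted parameters.

```python
import numpy as np

def category_evidence(x, A, B, w, c):
    """GCM discriminatory evidence for stimulus x: |summed similarity to A
    minus summed similarity to B|, with weighted city-block distance (r = 1)."""
    def summed_sim(exemplars):
        d = np.abs(np.asarray(exemplars) - np.asarray(x))  # |x_i - y_i| per dimension
        return np.exp(-c * (d @ w)).sum()                  # sum of exemplar similarities
    return abs(summed_sim(A) - summed_sim(B))

# One common coding of the 5/4 structure, category A prototype = [0,0,0,0]
A = [[0,0,0,1], [0,1,0,1], [0,1,0,0], [0,0,1,0], [1,0,0,0]]
B = [[0,0,1,1], [1,0,0,1], [1,1,1,0], [1,1,1,1]]
w = np.array([0.325, 0.134, 0.416, 0.124])  # group-mean attention weights (illustrative)
c = 6.41                                     # group-mean sensitivity (illustrative)

ev_protoA = category_evidence([0, 0, 0, 0], A, B, w, c)  # high evidence near prototype
ev_protoB = category_evidence([1, 1, 1, 1], A, B, w, c)
```

Stimuli close to one category’s exemplars and far from the other’s yield large ev values; ambiguous stimuli yield values near zero, which is the quantity regressed against trial-by-trial neural activation.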

Results

Testing phase performance (Fig. 1) showed typical results consistent with previous reports (e.g., Medin & Schaffer, 1978). Responses to A and B training items demonstrated clear learning that was retained during test. Responses to novel transfer items varied according to match to category exemplars as determined by each participant’s learning performance (Mack et al., 2013). Median reaction times (RTs) at the group level did not vary across items; however, there was variability across participants and trials. The brain-based DDM analysis offers a means for accounting for this trial-by-trial variability in response choices and times with neural function.

Brain regions related to category decisions

Across the brain, only six of the tested ROIs showed brain-informed HDDM predictions with significantly better accounts of category decisions relative to the baseline model. Interestingly, these six ROIs were composed of three pairs of adjacent regions, with each pair showing similar effects. To simplify presentation of the results and best reflect the nature of their similar effects, we combined these pairs into three ROIs (Fig. 3): an occipital region including extrastriate cortex, a region of mid cingulate cortex, and a region of left lateral prefrontal cortex (PFC) extending into insula. HDDM simulations of these combined ROIs demonstrated significantly lower DICs (occipital: 6,712.9, mid cingulate: 6,713.5, lateral PFC: 6,711.5) relative to baseline (6,725.3). Of these three regions, permutation tests of occipital (padj = 0.0019) and lateral PFC (padj = 0.0016) survived multiple comparison correction whereas mid cingulate did not (padj = 0.095).

Fig. 3

Brain regions linked to category decisions. Drift diffusion models informed by neural activation from occipital (red), mid cingulate (blue), and lateral prefrontal cortex (PFC) (green) each accounted for category decisions significantly better than the baseline model not informed by brain signals

Although these three regions demonstrated similar overall fits to behaviour, the nature of the relationship between ROI activation and drift rate was unique across ROIs (Fig. 4). In the occipital ROI, neural activation was positively related to drift rate but only for transfer items (pdtrain = 0.8, pdtransfer = 0.998). In mid cingulate, neural activation was negatively related to drift rate but only for trained items (pdtrain > 0.999, pdtransfer = 0.676). Finally, the lateral PFC region showed a negative relationship between neural activation and drift rate for both stimulus types (pdtrain = 0.997, pdtransfer = 0.985).

Fig. 4

Effects of neural activation on drift rate. Posterior distributions of the region of interest (ROI) activation effects on drift rate for training (purple) and transfer (orange) stimuli relative to 0 (dotted line) are shown separate for occipital (top), mid cingulate (middle), and lateral prefrontal cortex (PFC) (bottom) ROIs. Shaded regions represent 95% prediction intervals and asterisks denote effects with less than 2.5% of the posterior distribution overlapping with 0

Category evidence in neural activation

The key prediction of EBRW is that category decision making is driven by similarity-based comparisons to category exemplars. Extending this hypothesis to neural function, it follows that brain regions key for category decision making will exhibit activation profiles that track category evidence. To test this prediction, we evaluated the association between category evidence, as derived from GCM-based model fits of learning behaviour, and neural activation in the DDM-identified brain regions. The GCM captured the transfer behaviour well (mean RMSE = 0.042) and the participant-specific optimized parameters (mean values: c = 6.41, w1 = 0.325, w2 = 0.134, w3 = 0.416, w4 = 0.124, γ = 4.48) were consistent with prior reports (Mack et al., 2013; Zaki et al., 2003). These parameter sets were used to calculate category evidence as defined in the methods.

Of the three ROIs, only lateral PFC showed a significant relationship with model-based predictions of category evidence (Fig. 5; R2 = 0.145) with higher lateral PFC activation associated with less discriminatory category evidence (β = -0.004, CI = [-0.007, -0.001], pd = 0.989). These findings support the hypothesis that lateral PFC activation fluctuates as a function of category evidence (Paniukov & Davis, 2018) and that this category evidence plays a role in the accumulation of evidence in category decisions (Nosofsky & Palmeri, 1997).

Fig. 5

Relationship between trial-by-trial lateral prefrontal cortex (LPFC) activation and generalized context model-predicted category evidence (ev). Both group-level (thick black line) and participant-level (thin lines) effects are depicted, with participant lines coloured according to the direction of the effect (blue = negative, yellow = positive). The shaded ribbon and shaded region of the distribution depict 95% prediction intervals of the group effect of LPFC activation

Discussion

By integrating a formal category decision-making model, EBRW, with whole-brain neural measures, we demonstrate that activation in specific brain regions relates to the trial-by-trial dynamics of category decisions. Specifically, we found that activation in LPFC was associated with category decisions and the speed of those decisions through the lens of the computational model. Notably, trial-by-trial activation in this region also related to exemplar-based predictions of category evidence. In both cases, the link to LPFC activation was an inverse relationship: higher activation was accompanied by less category evidence and slower decisions. These findings support an account of LPFC engagement tied to the difficulty of the current category decision, such that LPFC is recruited to resolve conflicts and help drive decision making in ambiguous circumstances (Davis et al., 2017; O’Bryan, Worthy, et al., 2018).

The LPFC has been previously reported to be involved in a variety of tasks consistent with this interpretation. Monkey studies suggest that neurons in LPFC are sensitive to multidimensional feature representations (Mendoza-Halliday & Martinez-Trujillo, 2017) and code for category boundaries (Seger & Miller, 2010). Additionally, human work points to a specific role for LPFC in the control of memory retrieval, particularly in the face of ambiguity and competing information (e.g., Badre & Nee, 2018; Thompson-Schill et al., 1998). Indeed, recent fMRI work focused on category learning has specifically pointed to LPFC playing a role in representing evidence for different categories (Davis et al., 2017; O’Bryan, Walden, et al., 2018; O’Bryan, Worthy, et al., 2018; Paniukov & Davis, 2018). Here, we extend these findings by directly linking trial-by-trial neural signals from LPFC to category decision making and the expression of exemplar-based category knowledge to uniquely support the notion that LPFC is tracking category evidence in a behaviourally relevant manner (Nosofsky & Palmeri, 1997; Paniukov & Davis, 2018).

The current findings are consistent with prior studies of the role of LPFC in category decisions, but important differences are noteworthy. In particular, whereas we defined the exemplar-based decision signal as the difference between category summed similarities, others have looked for neural engagement reflecting the summed similarity for the most likely category or a combined signal reflecting both exemplar similarities and dissimilarities (O’Bryan, Walden, et al., 2018; O’Bryan, Worthy, et al., 2018). These variants of decision signals are correlated in most tasks; nevertheless, each is an intuitive and informative model quantity, and they may represent distinct variables computed in different regions. Future work employing the methods we propose here with different candidate model-based decision signals may help dissociate distinct functions in these brain regions (O’Bryan, Worthy, et al., 2018; Paniukov & Davis, 2018; Zeithamova et al., 2019).

Although the occipital region was not associated with model-based predictions of category evidence, activation in this region did exhibit distinct relationships with evidence accumulation as formalized in the DDM. Decision-related activation was restricted to novel transfer items, such that higher activation was accompanied by quicker and more accurate responses. Prior studies support the notion of concept representation in perceptual cortices. Specifically, it is thought that recurrent feedforward/feedback loops with medial temporal lobe and PFC allow visual regions to make inferences about stimulus features (Hindy et al., 2016; Lee & Mumford, 2003). Moreover, occipital regions respond to the similarity between particular stimuli and category representations (Braunlich & Love, 2019), as well as demonstrating an increase in activation to category-relevant visual stimulus dimensions (Folstein et al., 2013). Thus, the link between occipital activation and decision making we observe may be due to the engagement of neural representations tuned to diagnostic visual dimensions. It follows that this neural tuning may be most helpful and, therefore, recruited when generalizing to new stimuli.

Activation within mid cingulate cortex (MCC) also improved DDM predictions; however, its potential role in category learning is less clear. The relationship between MCC activation and drift rate was restricted to trained items, such that lower activation was associated with higher drift rates. While this relationship resembles the one observed in LPFC, the lack of a similar relationship for transfer items is puzzling. Activation in MCC is often observed across a broad spectrum of cognitive tasks, which calls into question its specificity for any given domain (Yarkoni et al., 2011). That this region did not survive the conservative multiple comparison threshold further warrants caution in interpreting a distinct role of mid cingulate in category learning; future work will undoubtedly shed light on this potential node in the category learning neural network.

Our approach offers a novel method for interrogating neural measures with psychological theory and extends the recent surge in developing innovative approaches for bridging formal theories with brain measures (Bowman & Zeithamova, 2018; Davis et al., 2012; Daw et al., 2006; Forstmann et al., 2011; Mack et al., 2013; Nosofsky et al., 2012; O’Doherty et al., 2007). Much of this work focusses on localizing hypothesized representational structure of stimuli (e.g., similarity relationships between items within and between categories) to neural representations found in activation patterns (Bowman & Zeithamova, 2018; Davis et al., 2014; Mack et al., 2016; Mack et al., 2020). Indeed, targeting representations in both model and brain has proven to be an invaluable tool in characterizing neural mechanisms underlying cognition and in evaluating formal theories. However, these approaches necessarily average over potentially informative neural dynamics that may explain behaviour on a trial-by-trial basis. Such trial-level approaches have become standard for research in reinforcement learning wherein model predictions of how value influences choices depend on prior experiences and responses (Daw et al., 2006; Leong et al., 2017; Radulescu et al., 2019). We extend this approach by linking neural signal to a comprehensive prediction of behaviour in terms of choice probabilities and response times (Mack & Preston, 2016). This tight integration between brain and model is afforded by the EBRW framework that makes explicit predictions for the speed of category decisions based on the structure of category knowledge (Nosofsky & Palmeri, 1997). However, our approach is not limited to category learning; any theory that makes clear predictions about response speed is a potential candidate for the methods that we outline here.

The current findings also extend an already rich literature supporting the comprehensive behavioural predictions of EBRW (Annis & Palmeri, 2019; Nosofsky & Palmeri, 1997, 2015; Palmeri, 1997) to make the first step towards characterizing this model’s neural mechanisms. The central tenet of EBRW is that exemplar-based category representations drive not only what categorization responses observers make but also the speed of those responses. The hypothesized mechanism posits that the degree to which a to-be-categorized stimulus matches stored memory traces of category exemplars dictates which exemplars are retrieved from memory to inform a random walk accumulation of evidence. Given the literature’s view of LPFC noted above, the finding that this region alone was linked to EBRW’s mechanism of category evidence accumulation provides compelling empirical support for the theory.

Our approach relies on first estimating trial-level beta series from the fMRI data (Mumford et al., 2012). This step of the analysis is inherently noisier than alternative methods of linking model to brain dynamics that leverage parametric regressors of time-varying model predictions in a mass univariate approach (e.g., Davis et al., 2012; O’Doherty et al., 2007). However, relationships between model mechanism and neural function identified with the parametric regressor approach are ultimately explanatory in nature. In contrast, by linking neural signal to drift rate in the DDM, our approach isolates brain function that gives rise to behaviour within a generative framework that makes predictions about category decisions on a trial-by-trial basis (Kragel et al., 2015). First focusing on isolating brain regions that comprehensively account for response choice and time allows for a more targeted evaluation of how such regions are further linked to model measures like category representation.

In summary, a comprehensive account of category learning requires an understanding of the dynamics of attention, representation, and decision making at the level of both computational and neural processes. Here, we build on recent advances that link computational model predictions of category representations to neural coding (Mack et al., 2018; Zeithamova et al., 2019) in order to isolate the neural signals of category decision making. We also extend methods of linking brain and behaviour through the DDM (Frank et al., 2015; Mack & Preston, 2016; Roberts & Hutcherson, 2019; White et al., 2012) to demonstrate that trial-by-trial neural signals from occipital, mid cingulate, and lateral PFC track the accumulation of evidence in category decisions. Importantly, LPFC activation tracked participant-specific predictions of exemplar-based category evidence as formalized by EBRW (Nosofsky & Palmeri, 1997). More generally, this approach offers a novel method for quantitatively connecting behavioural data to neural processes with cognitive theory.