Absolutely relative or relatively absolute: violations of value invariance in human decision making
Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of pairs of brightness stimuli while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task-irrelevant absolute values, indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modeling architectures reveal two alternative accounts for this phenomenon, both of which combine absolute and relative processing. One account involves accumulation of differences with activation-dependent processing noise; the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.
Keywords: Computational modeling · Judgment and decision making · Response time models · Inhibition
A central goal of the cognitive, economic, and biological sciences has been to uncover the mechanism underlying human and animal decision-making behavior. Decisions are made based on a set of values that characterizes the choice alternatives (e.g., monetary rewards, likelihoods of events, perceptual properties or subjective utilities). However, when aiming to select the best out of a given set of alternatives, processing information about the absolute value of each alternative can be considered superfluous. Indeed, leading theories of human decision making postulate that task irrelevant information about absolute values is discarded in favor of relative value representations. This relativity of information processing in decision making is crucial for producing context effects and has been formulated both in terms of differences between values (Ratcliff and Rouder 1998; Roe et al. 2001; Tversky and Simonson 1993; Usher and McClelland 2004) and value ratios (Brown and Heathcote 2008; Louie et al. 2013). Concurrently, other theories used to describe decision making in both humans (Usher and McClelland 2001) and decentralized biological systems (e.g., bee colonies; Pais et al. 2013) suggest that some sensitivity to absolute values is, in fact, retained. Value sensitive mechanisms have also been argued to exhibit adaptive advantages, such as breaking decision deadlock between equally valued alternatives faster for high value alternatives than for low value alternatives (Pirrone et al. 2014). This study thus raises two questions: (a) Does human decision making retain sensitivity to absolute information values, above and beyond their relative properties? And if so, (b) by means of what underlying mechanism do absolute value sensitivity and relative value representation coexist?
Ample evidence from psychology and biology indicates that choices and their latencies are the result of a process that sequentially accumulates samples of noisy, momentary, evidence values towards an internal decision criterion (Marshall et al. 2009; Meyer et al. 1988; Ossmy et al. 2013; Ratcliff and Rouder 1998; Ratcliff 1978; Teodorescu and Usher 2013; Usher and McClelland 2004). In this sequential sampling framework, a relative model is one that either accumulates relative momentary values (e.g., momentary evidence value differences) or, alternatively, accumulates absolute values but implements a relative stopping rule (e.g., stop when the difference between accumulators crosses a criterion amount). While values often are associated with economic rewards, perceptual values also constitute an informational basis for decision making, and perceptual decision making manifests many biases and context effects associated with economic decision making (Trueblood et al. 2013; Tsetsos et al. 2012). For binary decisions, accumulation of value differences, as in the Drift Diffusion Model (DDM; Ratcliff 1978), has been related to choice optimality, as a mechanistic implementation of the Bayesian Sequential Probability Ratio Test (Wald and Wolfowitz 1948; but see Drugowitsch et al. 2012; Moran 2014). The DDM has been successful in fitting a plethora of experimental results and has been suggested as a unifying framework for both perceptual and economic decision making (Basten et al. 2010; Ratcliff and McKoon 2008; Towal et al. 2013). Similarly, fractional normalization—an algorithm that divides each individual input value by the sum of all input values, thus discarding absolute values in favor of ratios—has been recently proposed as a canonical neural computation that permeates all levels of brain functions (Carandini and Heeger 2012; Louie et al. 2011; Louie et al. 2013). In this study, we focused on these two relative processes.
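The difference-based stopping rule at the heart of the DDM, and its predicted invariance to additive intensity boosts, can be illustrated with a minimal simulation. This is a generic sketch with illustrative parameter values, not the fitted model reported below; the drift, threshold, and noise values are assumptions.

```python
import random

def ddm_trial(drift_a, drift_b, threshold=1.0, noise_sd=0.1, max_steps=10_000):
    """One diffusion trial: accumulate momentary evidence *differences*
    until a positive or negative threshold is crossed.
    Returns (choice, n_steps); choice is 'A' or 'B'."""
    d = 0.0
    for t in range(1, max_steps + 1):
        d += (drift_a - drift_b) + random.gauss(0.0, noise_sd)
        if d >= threshold:
            return 'A', t
        if d <= -threshold:
            return 'B', t
    return None, max_steps

random.seed(0)
# Because only the drift *difference* enters the accumulator, trials with
# inputs (0.4, 0.3) and additively boosted inputs (0.6, 0.5) are
# statistically identical for this pure model.
trials = [ddm_trial(0.4, 0.3) for _ in range(2000)]
acc = sum(choice == 'A' for choice, _ in trials) / len(trials)
```

The same invariance argument is what the additive-boost manipulation below is designed to test empirically.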
When considering the type and degree of information processing relativity in different model architectures, distinct dependencies between absolute model inputs and model outputs become readily evident. Specifically, both pure difference (pure DDM) and ratio (pure normalization) relations are invariant to manipulations that maintain the difference or the ratio of absolute input values, respectively. In this study, we use a binary perceptual decision paradigm, requiring decision makers to choose the brighter out of a pair of fluctuating gray patches (i.e., evaluate relations between brightness stimuli while ignoring absolute values). Brightness value manipulations were specifically designed to test different types of relativity in the processing of decision information. Despite the relative nature of the task, we present results demonstrating surprising sensitivity of human choices and choice-latencies to task irrelevant absolute information values. An extensive computational investigation of corresponding accumulation models demonstrates that the observed sensitivity to absolute values conforms to neither pure difference nor ratio relations. This calls for a principled synthesis; we propose that the observed sensitivity to absolute values can be traced to two plausible, though, as we will show, apparently incompatible, origins: (a) the dependence of processing noise on input intensity (Brunton et al. 2013; Lu and Dosher 2008) when implemented within a DDM framework or (b) response competition produced by lateral inhibition between evidence accumulators and the resulting temporal dynamics (Teodorescu and Usher 2013; Usher and McClelland 2001, 2004).
Eight Tel-Aviv University undergraduate students participated in Exp. 1 and eight in Exp. 2, in exchange for course credit. Each participant was tested in two 60-minute sessions (on different days but no more than 4 days apart). One subject in Exp. 1 was excluded from the analyses due to experimenter error (the subject performed both sessions consecutively on the same day and consequently RTs were more than 3 standard deviations slower than the rest of the group). One subject in Exp. 2 did not complete the second session and was also excluded from the analyses. Exclusion of these subjects did not affect the general pattern of our results, and all reported data in the remainder of the paper are from the remaining seven participants. All participants had normal or corrected-to-normal vision. The projects were approved by the department’s ethical committee.
All stimuli in this experiment were presented on a ViewSonic Graphics Series G90fB 19” CRT monitor. The monitor was gamma corrected so that physical brightness outputs (in lumens) were linear with respect to MATLAB RGB values, where zero represents the minimum screen brightness and one represents the maximum screen brightness. Note that the range of available screen brightness covers only a small section of the full brightness range, from no light up to extremely high values such as the brightness of the sun (far higher than the maximum screen brightness). Gamma correction was performed using a TES-1332A photo meter. Experiments were coded in MATLAB and were realized using the Cogent Graphics toolbox developed by John Romaya at the LON at the Wellcome Department of Imaging Neuroscience. The stimuli on each trial were composed of two homogeneous, round (1.2 cm diameter), temporally fluctuating gray patches with independently sampled gray levels on each monitor frame refresh. The two gray patches were presented on a black background and were positioned to the right and to the left of a fixation cross (total width from right edge of right patch to left edge of left patch: 5 cm). The head position of the subjects was fixed at approximately 60 cm using a standard chinrest. For baseline trials, gray levels of the target and non-target were normally distributed around means of 0.4 and 0.3, respectively (on a 0 to 1 (maximum screen brightness) scale in MATLAB). In Exp. 1, gray levels of the target and non-target stimuli for the multiplicative-boost condition were normally distributed around means of 0.6 and 0.45, respectively, maintaining the 4/3 ratio but increasing the difference from 0.1 to 0.15 compared with baseline. In the additive-boost condition, gray levels of the target and non-target were distributed around 0.6 and 0.5, respectively, maintaining the difference of 0.1 but decreasing the ratio from 4/3 to 6/5 compared with baseline. In Exp. 
2, the multiplicative-boost condition began the same as a baseline trial, but 100 ms into the trial both target and non-target mean gray levels increased from 0.4 and 0.3 to 0.6 and 0.45, respectively, maintaining the 4/3 ratio but increasing the difference from 0.1 to 0.15. The additive-boost condition was similar, except that after 100 ms the gray levels of the target and non-target increased from 0.4 and 0.3 to 0.6 and 0.5, respectively, maintaining the difference of 0.1 but decreasing the ratio from 4/3 to 6/5. For all conditions in both experiments, on each frame the gray level for each individual patch was separately and independently recalculated as the sum of its designated mean plus a Gaussian random variable N(0, 0.1²). Occasional below-threshold brightness samples might create the appearance of flickers, where the stimulus appears to disappear and immediately reappear. Importantly, these flickers are more likely to occur for stimuli with lower mean brightness, potentially providing alternative strategies for performing the task (e.g., choose the one that flickers less). Thus, final brightness values below 0.1 were truncated to 0.1 (a clearly visible, above-threshold brightness value) to prevent obvious flickering of the stimuli that might attract attention in a bottom-up fashion. Because a brightness of 1 represents the maximum brightness afforded by the screen, final brightness values above 1 were also truncated to 1. Refresh rate was set at 60 Hz (16.6 ms per frame) with new brightness levels independently resampled for each frame and for each stimulus location. Tests were run to evaluate the probability of dropped frames, and no frames were dropped after a full hour of continuous presentation. The location of the target (right/left) was randomly drawn on each trial.
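The per-frame stimulus generation described above can be sketched as follows. The frame count and function names are illustrative, but the means, the N(0, 0.1²) perturbation, and the [0.1, 1] truncation follow the text.

```python
import random

def frame_gray_level(mean, sd=0.1, lo=0.1, hi=1.0):
    """One per-frame gray level: Gaussian around the designated mean,
    truncated to the clearly visible screen range [0.1, 1]."""
    g = random.gauss(mean, sd)
    return min(max(g, lo), hi)

def stimulus_stream(mean, n_frames):
    """Gray levels independently resampled on every 16.6-ms frame."""
    return [frame_gray_level(mean) for _ in range(n_frames)]

random.seed(1)
target = stimulus_stream(0.4, 600)       # baseline target patch, ~10 s of frames
nontarget = stimulus_stream(0.3, 600)    # baseline non-target patch
```

Note that truncation at 0.1 affects the dimmer stream slightly more often, which is precisely the flicker asymmetry the truncation was introduced to prevent.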
Responses were given on the 1 and 3 keys of the keyboard number keypad for the left and right responses, respectively. Subjects were instructed to use the right index finger for the 3 key and the same finger on the left hand for the 1 key. The stimuli stayed on until the response was entered, after which a 1-s Inter-Stimulus-Interval (ISI) preceded the next trial. All trials were randomly assigned to one of three possible conditions: baseline, multiplicative boost, and additive boost. Participants were presented with 10 blocks of 60 trials per session for a total of 1,200 trials per participant. Each block consisted of 40 %, 30 %, and 30 % of randomly intermixed baseline, multiplicative boost, and additive boost trials, respectively. Feedback on error responses was provided in the form of a loud 1-s beep immediately following the response. No feedback was provided after correct responses. After each block, there was a self-timed intermission to allow the subject to rest. During each of these breaks, the average accuracy and RT for the last block were presented on the screen. The participants were instructed to try to maximize their performance (to respond as fast and as accurately as possible), such that if they reached 100 % accuracy they should try to respond faster. They were given a 30-trial practice block. Subjects also were told to keep their eyes focused on the fixation cross throughout the trial. Maintaining a constant fixation facilitates performance of the task by allowing simultaneous monitoring of both patches, while maintaining constant retinotopic mapping between stimulus and response. Nevertheless, in the absence of an eye tracker, there was no way to verify that they actually complied with this request. The experiment was held in a partially darkened room, and subjects were acclimated to the lighting conditions during the practice.
Experiment 1 results
The intensity manipulations in Exp. 1 yielded significant effects on both correct RT (F(2,12) = 10.676, p < 0.01; repeated measures ANOVA) and accuracy (F(2,12) = 23.81, p < 0.0001; repeated measures ANOVA). RT and accuracy results are illustrated in Fig. 1c. A post-hoc analysis (Tukey HSD test) of the accuracy data revealed that subjects were less accurate in the additive boost condition (M = 0.77, SD = 0.04) compared with both the baseline (M = 0.86, SD = 0.04; p < 0.001) and multiplicative boost condition (M = 0.86, SD = 0.04; p < 0.001). Accuracy levels in the baseline and multiplicative boost conditions were practically identical (p = 1). A post-hoc analysis (Tukey HSD test) of correct RTs revealed that compared to the baseline condition (M = 0.95 s, SD = 0.19) subjects responded faster in both the additive boost condition (M = 0.87 s, SD = 0.16; p < 0.02) and the multiplicative boost condition (M = 0.83 s, SD = 0.15; p < 0.01). In a previous study (Teodorescu and Usher 2013), we found slower RTs in a condition similar to the additive boost compared with a condition similar to the multiplicative boost. On this basis, we performed a planned comparison for this contrast alone and indeed found slower RTs in the additive compared with the multiplicative condition (F(1,6) = 7.75, p < 0.05, planned comparison).
Experiment 2 results
The manipulation yielded significant effects on both correct RT (F(2,12) = 35.96, p < 0.00001; repeated measures ANOVA) and accuracy (F(2,12) = 21.67, p < 0.001; repeated measures ANOVA; Fig. 1c). A post-hoc analysis of the accuracy data revealed that subjects were less accurate in the additive boost condition (M = 0.81, SD = 0.088) compared with both the baseline (M = 0.88, SD = 0.056; p < 0.001; Tukey HSD test) and multiplicative boost condition (M = 0.88, SD = 0.082; p < 0.001; Tukey HSD test). Accuracy in the baseline and multiplicative boost conditions was nearly identical (p = 0.97; Tukey HSD test). A post-hoc analysis of the RT data revealed that compared to baseline (M = 1.004 s, SD = 0.188) subjects responded faster in both the additive boost condition (M = 0.914 s, SD = 0.175; p < 0.001; Tukey HSD test) and the multiplicative boost condition (M = 0.860 s, SD = 0.157; p < 0.001; Tukey HSD test). Last, as in Exp. 1, consistent with our previous study (Teodorescu and Usher 2013), we also found slower RTs in the additive compared with the multiplicative condition (F(1,6) = 9.59; p < 0.05; planned comparison).
Discussion of experimental results
Strikingly, these findings demonstrate that both common forms of invariance predicted by purely relative models were violated. For RT, invariance held for neither equi-difference nor equi-ratio intensities (faster RTs in both boost conditions compared to baseline). For accuracy, however, intensity invariance was violated with respect to differences (lower accuracy in the additive boost compared to the baseline condition) but not with respect to ratios (equal accuracy in the multiplicative boost compared to the baseline condition). As we will show below (Computational Modeling section), neither the pure DDM nor the pure normalization model can capture these qualitative patterns.
Additionally, comparing the additive and the multiplicative conditions, we note that the brighter channel is identically distributed in both conditions (mean = 0.6), whereas the dimmer channel is, on average, brighter in the additive boost condition (0.5) than in the multiplicative boost condition (0.45). The finding of slower RTs in the additive versus multiplicative boost condition indicates competition and thus violates a fundamental prediction of independent race models (Teodorescu and Usher 2013). Independent race models are “purely absolute” in that they are driven solely by absolute input values, in contrast to competitive models, in which the interaction between the input values also contributes to the integrated evaluations (Teodorescu and Usher 2013). Thus, the current results speak against “model purity” either in the relative or the absolute sense. As we show in the following modeling section, only “hybrid” models, which maintain sensitivity to both the relative and absolute aspects of the stimuli, can satisfactorily account for our findings. To anticipate our computational modeling results, we find two ways of accounting for the joint observation of absolute and relative processing. The first is an LCA account, which combines absolute and relative evidence dynamically, and the second is a diffusion account, in which the stopping rule is based on differences (thus relative) and a separate component, the processing noise, increases with the magnitude of the input (thus absolute).
Our results indicate a need for computational models of decision making to possess mechanisms for absolute value sensitivity in addition to relative processing. In this section, we investigate how different sources of sensitivity to absolute values interact with different relative decision processes. To this end, we first consider potential sources for absolute value sensitivity that are independent of the modeling framework. We then proceed to explore three model families. The first two originate from a purely relative conceptualization of the decision process based on either ratios (as in the normalization model) or differences (as in the DDM model). Within each of these families, we consider both a purely relative version and a hybrid version where relative processing is not pure but varies parametrically on a continuum between purely absolute and purely relative extremes. We explore a third model family, dynamic relativity (as in the LCA model; Teodorescu and Usher 2013; Usher and McClelland 2001), which is of an intrinsically hybrid nature. The LCA model is related to differential relativity, but here relativity emerges dynamically over time as a result of lateral inhibition. We then test the ability of the different models to account for our results by fitting the models to empirical joint distributions of response probability, correct RT, and error RT from Exp. 1. We conclude by discussing insights from the model fitting exercise and explore model predictions for hypothetical input spaces.
S_i(t) has the same distribution as the momentary physical stimulus value (brightness) at location i (left/right), and γ > 0 is a power coefficient representing the nonlinear nature of the perceptual (or neuronal) transformation.
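Equations 1 and 2 are not reproduced in this excerpt; the sketch below assumes Eq. 1 is the power law I_i(t) = S_i(t)^γ and that Eq. 2 adds zero-mean Gaussian noise whose standard deviation grows with the transformed intensity via the coefficient π introduced later. The exact functional form of the noise term is an assumption.

```python
import random

def momentary_input(s, gamma=0.5, pi_coef=0.7, base_sd=0.05):
    """Hypothetical reading of Eqs. 1-2: power-law transform of the
    physical brightness s, perturbed by intensity-dependent noise."""
    perceived = s ** gamma                    # assumed Eq. 1: compressive for gamma < 1
    noise_sd = base_sd + pi_coef * perceived  # assumed Eq. 2: noise SD scales with intensity
    return perceived + random.gauss(0.0, noise_sd)
```

With the noise switched off, a fourfold increase in brightness only doubles perceived intensity at γ = 0.5, which is why a compressive transform makes a fixed physical difference "look" smaller at higher intensities.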
Computational choice-RT models traditionally have been pure in the sense that they were either purely absolute (independent race model; Usher et al. 2002; Van Zandt et al. 2000; Vickers 1970) or implemented difference and ratio relations in their pure form. However, purely relative and purely absolute processes do not represent discrete states but rather two extremes on a continuum. The balance between absolute and relative processing can be parameterized to interpolate between the two states. The form of this interpolation depends on the type of information processing relativity assumed in the decision stage. Thus, this third mechanism for sensitivity to absolute values will be treated separately for each model family in the following three sections.
Fractional relativity: a normalization model analysis
I_i(t) is the momentary input (Eq. 1), I_i^N(t) is the normalized momentary input, ξ_i(t) is the momentary processing noise, independent over time and across channels (Eq. 2), and the semi-saturation parameter λ ≥ 0 allows the model to transition continuously between a purely relative ratio (or normalization) model (λ = 0) and an asymptotically purely absolute independent race model (λ ≫ ∑I).
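In the spirit of Eq. 3, the role of the semi-saturation parameter can be sketched as follows (illustrative values; the noise term is omitted for clarity):

```python
def normalized_inputs(i_values, lam):
    """Fractional normalization with semi-saturation lam:
    lam = 0 discards absolute values in favor of pure ratios;
    lam much larger than sum(i_values) approaches an independent
    race on (scaled) absolute values."""
    total = sum(i_values)
    return [i / (lam + total) for i in i_values]

# A purely relative model (lam = 0) is invariant to the multiplicative
# boost: baseline (0.4, 0.3) and boosted (0.6, 0.45) inputs normalize
# to identical values because their ratio is preserved.
baseline = normalized_inputs([0.4, 0.3], lam=0.0)
boosted = normalized_inputs([0.6, 0.45], lam=0.0)
```

Any λ > 0 breaks this invariance: the boosted pair then yields larger normalized inputs, reintroducing sensitivity to absolute intensity.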
Differential relativity: a diffusion model analysis
D(t) is the relative decision module tracking the momentary differences between the accumulated evidence in favor of the two response alternatives; 0 ≤ α ≤ 1 is the partial relativity coefficient that allows the model to transition between a purely relative DDM (α = 1 implies: stop when the difference crosses a threshold) and a purely absolute independent race model (α = 0 implies: stop when the largest independent accumulator crosses a threshold; cf. Moreno-Bote 2010; Zylberberg et al. 2012); X_i(t) are the accumulated evidence values at time t corresponding to the target and non-target alternatives; and ξ_i(t) is the momentary processing noise, independent over time and across channels (Eq. 2).
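The α-controlled stopping rule can be sketched as a race between two absolute accumulators whose decision variables subtract a fraction α of the rival's total. This is a simplified reading of Eq. 4, with illustrative parameter values and no between-trial variability:

```python
import random

def partial_relativity_trial(i_a, i_b, alpha, threshold=1.0,
                             noise_sd=0.1, max_steps=10_000):
    """alpha = 1: stop when the difference crosses threshold (pure DDM);
    alpha = 0: stop when the largest accumulator crosses it
    (independent race). Returns (choice, n_steps)."""
    x_a = x_b = 0.0
    for t in range(1, max_steps + 1):
        x_a += i_a + random.gauss(0.0, noise_sd)
        x_b += i_b + random.gauss(0.0, noise_sd)
        if x_a - alpha * x_b >= threshold:
            return 'A', t
        if x_b - alpha * x_a >= threshold:
            return 'B', t
    return None, max_steps

random.seed(2)
outcomes = [partial_relativity_trial(0.4, 0.3, alpha=1.0) for _ in range(300)]
```

For intermediate α, raising both inputs additively speeds threshold crossing while the difference term still drives accuracy, which is the hybrid behavior of interest.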
Dynamic relativity: a leaky-competing accumulator model analysis
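The LCA dynamics analyzed in this section follow the standard leaky competing accumulator update (cf. Usher and McClelland 2001): each unit gains its input, decays with leak k, is suppressed by β times the rival's activation, and is floored at zero. The following is a minimal sketch with illustrative parameter values, not the fitted model:

```python
import random

def lca_trial(i_a, i_b, k=0.2, beta=0.2, threshold=1.0,
              noise_sd=0.1, max_steps=10_000):
    """Leaky competing accumulator sketch: lateral inhibition (beta)
    makes processing relative over time, while early dynamics remain
    sensitive to absolute input values. Returns (choice, n_steps)."""
    x_a = x_b = 0.0
    for t in range(1, max_steps + 1):
        new_a = x_a + i_a - k * x_a - beta * x_b + random.gauss(0.0, noise_sd)
        new_b = x_b + i_b - k * x_b - beta * x_a + random.gauss(0.0, noise_sd)
        x_a, x_b = max(new_a, 0.0), max(new_b, 0.0)  # activations floored at zero
        if x_a >= threshold:
            return 'A', t
        if x_b >= threshold:
            return 'B', t
    return None, max_steps

random.seed(3)
lca_outcomes = [lca_trial(0.4, 0.3) for _ in range(300)]
```

Because absolute inputs drive both units before inhibition-driven relativity takes hold, boosting both inputs speeds threshold crossing, the qualitative signature exploited in the model fits reported later.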
The present section deals with practical, theoretical, and experimental assumptions required for computational modeling and fitting of choice RT data in general and for this study in particular. The first subsection, “Modeling Assumptions,” is less technical and is instructive with regard to the nature of the relation between experimental design and computational modeling. The remaining subsections, however, are more technical and can be safely skipped, either entirely or in part. Those not interested in the technical details presented in this section can continue reading from the Modeling Results section.
In the following model fits, input values were sampled from the same distributions used to generate the stimuli in the experiment (S_i(t); Eq. 1), so that the models “experienced” the exact same external environment as the subjects. This constrained approach to modeling the experimental setting is crucial in discriminating between models that could otherwise closely mimic each other (Teodorescu and Usher 2013), especially in view of recent concerns regarding the falsifiability of response-time models (Jones and Dzhafarov 2013). However, constraining model inputs to the same distribution as the physical stimulus intensities is a nontrivial assumption that may be unsuited for many experimental paradigms. In fact, the common practice when fitting choice RT models to data is to allow input values to vary freely between conditions and assign free parameters for each input and condition combination. What allows us to assume a more constrained relationship between physical stimulus values and input strengths relates directly to the choice of stimuli and the design of the experimental paradigm, which in turn are motivated by neural constraints. In our design, the stimuli are such that pre-decisional neural interactions between the channels are minimized. This is achieved by choosing a low-level perceptual modality (brightness or contrast is arguably the lowest level of visual processing) while maintaining spatial separation between the perceptual evidence streams (for a comprehensive discussion see Teodorescu and Usher 2013).
In addition, the use of highly overlapping, temporally variable stimuli distributions, intermixed within blocks justifies the assumption of constant decision thresholds between experimental conditions. A common selective influence assumption is that under such conditions, only drift-rates should be allowed to vary between conditions (Ratcliff and Smith 2004). However, having the model inputs linked directly to the physical properties of the stimuli allowed us to derive (rather than fit) condition dependent drift-rates (Teodorescu and Usher 2013). All of this resulted in zero free parameters being allowed to vary between experimental conditions. Model freedom was confined to general model parameters, thus forcing the models to predict the pattern of results for all experimental conditions with a single unique set of parameters.
Note that these constraints do not imply that drift rates were fully determined by the physical stimulus values. As mentioned earlier, parametric assumptions were introduced into all the models regarding the psychophysical transformations of physical (energy) values into psychological (perceived) values (Eq. 1) as well as the dependence of processing noise on stimulus intensity (Eq. 2). In addition, assumptions about the relationships between inputs, such as constraining the sum of all drift-rates to equal a constant, also were captured parametrically by incorporating continuous partial relativity parameters. Having drift-rates constrained by the physical properties of the stimuli also allowed us to use computational simulations, based on the results of the model fits, to generate input space mappings, which produce model predictions for a continuum of possible empirical manipulations.
Following the above experimental and theoretical considerations, no parameters in our models were allowed to vary between experimental conditions. Specifically, four standard free parameters were included in all models: (1) threshold (Th), which represents response caution such that higher thresholds lead to slower but more accurate decisions; (2) general processing noise (σ²), representing the intrinsically noisy nature of neural processing; (3) nondecision time (Tnd), corresponding to the time it takes to process all nondecision components, such as perceptual encoding and response generation processes; and (4) a time-step parameter (Ts), which is a scaling parameter that determines the equivalent duration in milliseconds of one computational iteration. The last two parameters are needed to transform all simulated RTs (RT′; expressed as discrete time steps) to real-world RTs (expressed in milliseconds) such that RT = Ts*RT′ + Tnd.
In addition, all models included two parameters that describe the dependence of model inputs on the physical properties of the stimuli. Momentary drift-rates I_i(t) for each channel (i = left/right) were directly derived from the momentary physical brightness values S_i(t) by transforming them through a psychophysical power law (Eq. 1) and perturbing the ensuing value with intensity-dependent random variability (Eq. 2). The psychophysical transformation required one parameter, a power-law coefficient (γ) representing the concavity of the psychophysical transformation (Eq. 1). The intensity-dependent random input variability required an additional parameter, an input-noise coefficient (π), representing the sensitivity of the standard deviation of stimulus-dependent input noise to the momentary perceived stimulus intensity (i.e., post psychophysical transformation; Eq. 2).
Between-trial variability parameters
In order for sequential sampling models to correctly describe the form of both correct and error RT distributions, it is common practice to add some sources of between-trial variability. One source of between-trial variability, starting point variability (SPV), was applied to all models in the same manner. In all our simulations, the accumulation process for each alternative i began at an arbitrary baseline activation level of 0.4, to which we added a uniformly distributed random variable U(0, SPV), independent across channels. All models except the LCA also traditionally require an additional between-trial drift-rate variability parameter (η) to account well for the form of response time distributions (Ratcliff and Rouder 1998). This was implemented in the models as a Gaussian random variable N(0, η²) that is drawn once per trial, independently for each channel i, and added to all inputs I_i for the duration of that simulation trial. The normalized race model predicted distributions that were too symmetrical, so following Ratcliff and Smith (2004) and Teodorescu and Usher (2013) we augmented it with a parameter for between-trial exponential variability in response criteria (τ), such that on each trial response criteria were drawn from an exponential distribution Th + exp(τ).
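The per-trial draws described above can be summarized in one routine. Function name and default values are illustrative, not the fitted parameters:

```python
import random

def trial_parameters(baseline=0.4, spv=0.1, eta=0.05, th=1.0, tau=0.0):
    """Draw the between-trial variability sources once per trial:
    uniform starting-point offsets and Gaussian drift perturbations,
    independently per channel, plus an optional exponential criterion
    increment (used only for the normalized race model)."""
    starts = [baseline + random.uniform(0.0, spv) for _ in range(2)]
    drift_offsets = [random.gauss(0.0, eta) for _ in range(2)]
    threshold = th + (random.expovariate(1.0 / tau) if tau > 0 else 0.0)
    return starts, drift_offsets, threshold

random.seed(4)
starts, offsets, threshold = trial_parameters()
```

Note that `random.expovariate(1.0 / tau)` has mean τ, matching the Th + exp(τ) criterion distribution described above.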
To control the degree of processing relativity in the normalization and DDM models, we also included a partial-relativity parameter, which was implemented differently for the normalization and DDM models (see λ & α in Eqs. 3 and 4, respectively). The LCA model also required two additional parameters for leak (k) and inhibition (β). The inhibition parameter (β) is considered the partial relativity parameter for the LCA model, because in its absence (β = 0) the model reduces to a purely absolute, independent, leaky race model. Note, however, that the LCA is fundamentally value sensitive and it does not have a purely relative form. For the LCA, some sensitivity to absolute input values is retained early in the process for any value of β.
General optimization algorithm
Model goodness of fit scores for group data
Model goodness of fit scores for individual participants
Optimizing the models to predict binned response proportions allows us to test the models on their ability to simultaneously account for both RT distributions of correct and error responses as well as accuracy (i.e., response probabilities).
We fit the models to both group (average observer) and individual participant data. The results are reported in Tables 1 and 2, respectively. Because model comparisons based on individual fits are in close agreement with the group (average observer) fits, in the remainder of the paper we focus our discussion on the model comparisons based on fits to group data. This choice is motivated by our desire to focus the discussion on general qualitative differences between the models rather than on individual differences (Ratcliff and Smith 2004).
AIC and BIC scores
To compare model performance, we provide two goodness-of-fit measures that penalize models for extra complexity: (1) the Bayesian Information Criterion (BIC; Schwarz 1978) and (2) the Akaike Information Criterion (AIC; Akaike 1974). Complexity, in these methods, is operationalized as proportional to the total number of free parameters; the two methods differ with respect to the magnitude of this penalty, AIC generally being more liberal and BIC more conservative.
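For reference, both criteria are simple functions of the maximized log-likelihood; this generic sketch is not tied to the particular fitting pipeline used here:

```python
import math

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: k ln n - 2 ln L."""
    return n_params * math.log(n_obs) - 2 * log_likelihood
```

BIC's per-parameter penalty, ln n, exceeds AIC's penalty of 2 whenever n > e² ≈ 7.4 observations, which is why BIC is the more conservative criterion in practice.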
In its most widely used form the DDM is purely relative (α = 1) and, in the absence of intensity-dependent noise and psychophysical transformations (π = 0; γ = 1), would predict complete invariance to the additive-boost manipulation (see Fig. 2, left panel, for example trajectories demonstrating this). Fitting the purely relative model with γ free to vary, the best fit was achieved with γ = 0.49 < 1 (typically γ ~ 0.5 for brightness stimuli; Geisler 1989), corresponding to a compressive psychophysical transformation. Thus, the DDM “perceived” the brightness difference between the two alternatives in the high-intensity additive-boost condition as smaller than the equivalent physical difference in the lower-intensity baseline condition, leading to higher mean RT for the additive-boost condition, in contradiction to our data (Fig. 3, column 2; Tables 1 and 2, Model 3).
To test the roles of the three sources of absolute value sensitivity within a differential relativity framework, we fit the full DDM model (Eq. 4) to the data from Exp. 1. The model captures all the data well both qualitatively and quantitatively (Tables 1 and 2, Model 4; Fig. 3). Interestingly, the best fit was achieved with a degenerate partial-relativity coefficient α = 1, reducing to a model where sensitivity to absolute values comes solely from the noise component (π = 0.69). Thus, within a DDM framework, our model fits support a dominant role for multiplicative noise over partial modulation of stopping-rule relativity.
The LCA model (Eq. 5) captured all the empirical effects both qualitatively and quantitatively (Tables 1 and 2, Model 5; Fig. 3, rightmost column). Interestingly, the fit was achieved with π < 10^−4, indicating no role for input-dependent noise, in contradiction to the DDM interpretation.
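The qualitative behavior of the LCA can likewise be sketched in a few lines (a simplified, illustrative two-unit implementation in the spirit of Eq. 5; parameter values are not the fitted ones): early in a trial, before lateral inhibition builds up, accumulation is driven by the absolute inputs, so boosted intensities approach the threshold faster even when the input difference is preserved.

```python
import numpy as np

def lca_trial(i1, i2, k=2.0, beta=4.0, sigma=0.05, threshold=0.2,
              dt=1e-3, max_t=5.0, rng=None):
    """One trial of a two-unit leaky competing accumulator: leaky
    accumulation of the absolute inputs (leak k), lateral inhibition
    (beta), and activations truncated at zero."""
    if rng is None:
        rng = np.random.default_rng()
    x1 = x2 = t = 0.0
    while max(x1, x2) < threshold and t < max_t:
        n1, n2 = sigma * np.sqrt(dt) * rng.standard_normal(2)
        x1, x2 = (max(0.0, x1 + (i1 - k * x1 - beta * x2) * dt + n1),
                  max(0.0, x2 + (i2 - k * x2 - beta * x1) * dt + n2))
        t += dt
    return t, x1 > x2

# Noise-free illustration of value sensitivity: an additive boost that
# preserves the input difference still speeds the first threshold crossing.
```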
Input space analysis
To better understand the dynamics and predictions of the different models, we performed a simulation-based computational investigation of input spaces. The best fitting average observer parameters for each model (as in Table S1) were used for these simulations. Using tractable transformations of physical stimulus values into model input values allowed us to derive model predictions for a continuum of possible stimulus value combinations, including the values used in the experiment (Fig. 3, bottom panel). The first observation that stands out is that the models differ only minimally in their predictions for accuracy contours. Except for the purely relative DDM (model 3; Fig. 3, second column), where accuracies are approximately constant for equal differences (contours parallel to the main diagonal), all models predict approximately constant accuracies for constant input ratios (contours fanning out relative to the diagonal). The second observation is that RT dynamics vary qualitatively between the different relative architectures. Purely fractional relativity leads to a fan-like pattern that approximately maintains ratios (normalized race model; Table 1, model 1; Fig. 3, first column). Purely differential relativity predicts approximately parallel diagonal lines that maintain constant differences (DDM; Table 1, model 3; Fig. 3, second column). Unlike the purely relative models, both partially relative models (the DDM with input-dependent noise and the LCA; Table 1, models 4 and 5; Fig. 3, two rightmost columns, respectively) predict RT contour lines that are approximately parallel when the stimuli are clearly discriminable (far from the main diagonal) yet converge towards the main diagonal (rather than fanning out) for less discriminable stimuli. In other words, both partially relative models make the testable prediction that the additive-boost RT speedup should be larger for less discriminable stimuli.
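The logic of this analysis can be reproduced in miniature: simulate mean RT over a grid of input pairs and inspect how it varies along lines of constant difference. A sketch for the purely relative, difference-based DDM with illustrative (not fitted) parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_rt(i1, i2, sigma=0.2, threshold=0.3, dt=2e-3,
            max_t=3.0, n_trials=200):
    """Simulated mean decision time of a purely relative DDM whose drift
    is the raw input difference, for one point in input space."""
    drift = i1 - i2
    n_steps = int(max_t / dt)
    x = np.zeros(n_trials)
    rt = np.full(n_trials, max_t)          # censor trials at max_t
    active = np.ones(n_trials, dtype=bool)
    for step in range(1, n_steps + 1):
        x[active] += (drift * dt
                      + sigma * np.sqrt(dt) * rng.standard_normal(active.sum()))
        crossed = active & (np.abs(x) >= threshold)
        rt[crossed] = step * dt
        active &= ~crossed
    return rt.mean()

# A coarse grid of (target, non-target) input pairs.
rt_map = {(i1, i2): mean_rt(i1, i2)
          for i1 in (0.4, 0.5, 0.6) for i2 in (0.2, 0.3, 0.4)}
```

For this purely relative model, cells with equal input differences, such as (0.4, 0.2) and (0.6, 0.4), yield statistically indistinguishable mean RTs (contours parallel to the main diagonal), whereas partially relative models predict RTs that drop along the diagonal, most steeply for less discriminable pairs.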
The curvature of the heat contours suggests quantitative differences between the two partial-relativity architectures in the rate at which RT changes as a function of absolute input values. This provides an avenue for stronger future tests of the models using several “boost” levels (e.g., baseline; baseline × 5/4; baseline × 3/2; baseline + 0.1; baseline + 0.2; where baseline denotes the mean brightness values used in Exps. 1 and 2).
In this study, we focused on assumptions regarding the relativity of information processing and its sensitivity to absolute values in order to learn about the mechanisms underlying perceptual decision making. Critically, our experimental results demonstrate a sensitivity of response latencies to both additive and multiplicative boosts in brightness intensity values, which was not predicted by purely relative models: RTs speeded up with boosted brightness levels, even when the differences or the ratios were maintained. These effects constitute violations of value invariance predicted by purely relative models of decision making. The ratio and difference theoretical frameworks discussed in this study can be regarded as purely relative in that they forfeit all information about, and thus any sensitivity to, the absolute values representing the choice alternatives. Because successful performance in our discrimination task only requires attending to the relation of the two stimuli (i.e., which one is brighter relative to the other), this pure relativity assumption can be considered rational, at least in the sense that the absolute values are irrelevant to task performance.
Value sensitivity, as it was observed in this study, is reminiscent of Piéron's law, whereby higher intensities lead to faster responses (Geisler 1989). However, previous demonstrations of Piéron's law have confounded higher intensity with higher signal-to-noise ratio (Van Maanen et al. 2012). This resulted in decisions being easier for the higher intensity conditions, as in the multiplicative boost condition in this study, thus allowing for a natural account within standard choice RT models. In the absence of specific constraints on the relationship between physical stimulus values and model inputs, higher signal-to-noise ratios are commonly associated with higher drift-rate differences (in DDM models) or higher drift-rate ratios (in normalization models). This, given a constant decision threshold, directly leads to faster RTs in most models. However, in our experiments, RTs were faster even in the additive boost condition, where the signal-to-noise ratio was not larger than in the baseline condition. Therefore, our results constitute the first demonstration that Piéron's law holds even for conditions where higher intensity is associated with equal or lower signal-to-noise ratios compared with the lower intensity conditions. In addition, we replicate the results of Teodorescu and Usher (2013), demonstrating slower RTs for higher non-target stimulus values, in contradiction to purely absolute models (i.e., independent race models), which predict the opposite pattern. Thus, our experimental framework provides a theoretically motivated benchmark manipulation for simultaneously testing multiple decision-making theories.
We contrasted models that varied with respect to the type of relative information processing (differential vs. fractional), the degree of relativity (on a purely absolute to purely relative continuum), lateral inhibition (an alternative neural mechanism for partially relative evidence integration), and the type of noise (constant or input dependent). The results of the model comparison rule out purely absolute (independent race) models, purely relative models of the differential type based on the integration of (psychophysically transformed) differences, and both pure and partially relative models of the fractional type based on the integration of (psychophysically transformed) normalized inputs. Interestingly, fractional relativity captures the accuracy results quite well, and its failure applies almost exclusively to RTs, demonstrating the importance of fitting multiple dependent measures. In a recent study, normalization was used to explain relative effects of value manipulations on accuracy (Louie et al. 2011). The authors showed that increasing the value of a third, nonpreferred alternative changed the proportion of choices between the two preferred alternatives in favor of the second-best alternative. However, in our study the stronger nontarget value in the additive compared with the multiplicative boost produced slower RTs in addition to lower accuracy. While predicting the accuracy effect correctly, the normalization model did not produce an adequate RT slowdown. Thus, our results suggest that a reexamination of the Louie et al. dataset with respect to choice RTs might provide additional insights into the mechanisms underlying the decision process. Note that we rule out a normalization scheme that is plausible and prevalent in the literature. However, other normalization schemes could potentially be designed based on principles other than ratios, which might be able to better account for our results.
Nevertheless, our results demonstrate that any such scheme should incorporate some measure of absolute value sensitivity and cannot be purely relative.
Due to its purely relative nature, the DDM has been claimed to be incompatible with trial-by-trial sensitivity to absolute values, as previously associated with models of social colony decision making (Pirrone et al. 2014) and now, as per our results, also with findings for human decision making. However, the neurally plausible assumption that processing noise is proportional to momentary input values, coupled with a differentially relative decision mechanism, resolves this challenge. The results of this study imply that the dependence of processing noise on input values is not by any measure just a technical assumption but one with direct theoretical implications for the underlying psychological mechanism. The segregation of general processing noise into general and input-dependent, multiplicative noise components is also consistent with the modeling work of Brunton et al. (2013; cf. Lu and Dosher 2008). Interestingly, Brunton et al. found zero general processing noise and a major role for input-dependent noise. However, their conclusion is contingent on using a differential relativity framework to fit accuracy data. In our computational study, the differential and dynamic relativity frameworks categorically disagreed regarding the role of multiplicative noise. Thus, beyond stressing the added value of fitting RTs, the current work suggests that conclusions about properties of internal processes, such as the roles of general versus multiplicative noise, can critically depend on the, often arbitrary, choice of modeling architecture.
Indeed, dynamic relativity also captured our results well, both quantitatively and qualitatively. The LCA model can be conceptualized intuitively as a dynamic amalgam of absolute and differentially relative processing, occurring at early and late stages, respectively. Under some assumptions, the LCA is asymptotically equivalent to a DDM (Bogacz et al. 2006; Bogacz et al. 2007; Marshall et al. 2009), but with lower decision thresholds for conditions involving higher input values. Interestingly, the two models provide two incompatible accounts of our data. Specifically, the DDM used a purely relative stopping rule such that input-dependent multiplicative noise was solely responsible for producing intensity-related RT-speedup effects. Conversely, multiplicative noise played a detrimental role in the LCA, where speedups were uniquely produced by value sensitivity during the initial stages of the gradual transition from purely absolute to increasingly relative processing as a result of lateral inhibition. Conceptually, the account provided by the LCA can be considered intrinsic to the architecture of the decision mechanism, as the RT speedup is produced by sensitivity to absolute values during the early stages of evidence accumulation. On the other hand, the account provided by the DDM can be considered extrinsic, because it is based on the scaling properties of neural noise, which are independent of the decision mechanism. Indeed, both accounts could be the end products of evolutionary pressures. However, while the former seems to suggest an evolved value-sensitive design feature, the latter is more compatible with a mechanical limitation of information processing, not eliminated through natural selection due either to the lack of a better alternative or to the benefits of value sensitivity (expedited decisions for higher values) outweighing its disadvantages (lower accuracy for equally discriminable higher values).
More generally, this study provides an observation of what could be considered an involuntary, value-dependent, bottom-up speed-accuracy tradeoff (SAT; Heitz 2014; Pirrone et al. 2014). We found that for equal or lower stimulus discriminability, high-intensity stimuli bias decision making towards faster decisions at the expense of higher error rates. Deliberate, top-down SATs have been the subject of extensive investigation in psychology, traditionally studied by manipulating either the subject's goals (e.g., respond fast vs. respond accurately) or by controlling the subject's RT directly via response cues (Heitz 2014). However, such voluntary criterion modulations are relatively slow, effortful, and require executive control based on an intricate understanding of the context, making them inefficient for dealing with trial-by-trial variations in decision values.
Although the sensitivity to absolute values appears to violate “rationality” in the narrow sense, it is possible that it has adaptive value in a broader ecological sense, which includes typical tasks and environmental contingencies. This relates to a different kind of tradeoff, the speed-value tradeoff, which has recently been suggested as more appropriate outside the laboratory (Pirrone et al. 2014). Most decisions in naturalistic environments involve value-based rather than accuracy-based rewards, whereby the agent is rewarded in proportion to the value of the chosen alternative and not with constant rewards for objectively correct (“best” alternative) responses. In such environments, there are several reasons why decisions between high-value alternatives may warrant expedited responses (i.e., a speed-value tradeoff). First, unlike in the laboratory, the set of available alternatives is not always static. Thus, taking a long time to choose might allow additional alternatives to present themselves (Pais et al. 2013; Pirrone et al. 2014). If the values of the alternatives under consideration are low, a new alternative is more likely to be better than the existing ones. In contrast, in a situation where the existing alternatives are already high valued, new alternatives are unlikely to provide benefits but existing (high-valued) alternatives could expire. Second, unlike monetary rewards, which can be infinitely hoarded, many natural resources relevant to survival and reproduction have a short lifespan. Take, for example, a hungry agent deliberating between two unoccupied, fruit-laden patches of berries on opposite sides of a valley. When such perishable resources are abundant (high value of redness), it is unlikely one could take advantage of or consume all of the reward, making deliberation over the difference in absolute quantities less relevant and favoring a quick decision over a slow but objectively “correct” one.
In addition, intense perceptual values can indicate abundance but are also more salient and thus more likely to attract competition. Consequently, dallying too long in deciding could result in these alternatives being occupied by someone else, leading to loss of resources or potential conflict. Alternatively, when dealing with negative rewards that are to be avoided, high stimulus intensities could also serve as a cue for danger (e.g., the fast motion of an incoming projectile or predator; the loud noise of a stampeding herd or rock avalanche). The potentially high cost of not reacting in time to such high-intensity stimuli could again favor quick and frugal reactions over making an “accurate” response too late.
Therefore, a mechanism that allows speeded reactions to high intensity situations in a bottom-up fashion, might be meta-optimal in the sense that, beyond providing satisfying decision quality in most everyday situations, it also captures the merit of expedited decisions under certain unexpected situations characterized by high intensity stimuli. Our study provides evidence for violations of invariance to absolute values and suggests that partially relative information processing is both necessary and sufficient for producing the observed value sensitivity.
Recent studies on decentralized decision making in biological systems, such as house-hunting bees, have revealed parallels between the neural mechanisms responsible for decision making in the human brain and collective decision making in social colonies (Seeley et al. 2012). Indeed, insights from modeling human decision making with lateral inhibition, as in the LCA model, have proven useful in modeling bee colony behavior (Marshall et al. 2009; Pirrone et al. 2014). These parallels suggest similar evolutionary pressures across species whereby retaining sensitivity to absolute values in addition to relative ones might hold adaptive advantages (Pais et al. 2013). The mechanism underlying value sensitivity in both humans and social colonies could be the result of an evolved advantage mediated by lateral inhibition or an accidentally beneficial side effect of mechanical limitations on the variability of information processing. Either way, distinguishing between these two hybrid theoretical frameworks would require an integrated approach. To this end, future investigations, in which the type of noise and the nonlinearity are measured via more complex psychophysical procedures (Brunton et al. 2013; Lu and Dosher 2008), could be used in conjunction with dedicated intensity manipulations of the type presented in this study.
Note that fractional normalization is a specific case of normalization that is purely relative. Mechanistically, this is achieved through inputs inhibiting each other so that only momentary ratios are accumulated. Normalization in its general definition, however, also includes a saturation parameter that allows an adaptive, continuous transition between purely relative, fractional normalization, and asymptotically absolute values. This issue will be addressed more thoroughly in the Methods and Computational sections.
Note that this contrast was not excluded from the Tukey HSD test. Thus, our Tukey HSD statistics represent a conservative significance test.
Note that only clearly visible, above threshold brightness values were used in the experiments. Thus, in Eq. 1 and all that follow, the perceptual threshold value for brightness and the unique dynamics of sub-threshold brightness perception are neglected.
A model corresponding to the Poisson assumption with ξ_i(t) ∼ N(0, π I_i(t) + σ^2) was also tested but did not perform as well.
More technically, under certain assumptions the LCA can be mathematically decomposed into two components, Y_1 = X_1 + X_2 and Y_2 = X_1 − X_2 (Bogacz et al. 2006; Heathcote 1998; Marshall et al. 2009). The former (Y_1) is an absolute component predominant in the initial stages of accumulation; the latter (Y_2) is a differential relativity component dominating the final stages. The absolute component Y_1 represents the speed with which the process approaches the threshold in the initial stages of accumulation, which will be faster when absolute input values are higher. Thus, the LCA can also be conceptualized as a DDM-like process with lower thresholds for conditions with higher absolute input values.
The experiments were fully randomized, containing exactly the same number of right and left trials. In addition, feedback was the same for right and left trials and no rewards were provided. Thus, no bias for right or left response was expected and consequently all data were collapsed over right and left responses.
“Perceived” difference is equivalent to drift-rate in the classic formulation of the DDM model.
1) A. T. was funded by the Fulbright Scholar Program. 2) M.U. is funded by the Israeli Science Foundation (grant: 743/12) and by the German Israeli Foundation (grant, 1130-158.4/2010).
- Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19. doi: 10.1109/TAC.1974.1100705
- Basten, U., Biele, G., Heekeren, H. R., & Fiebach, C. J. (2010). How the brain integrates costs and benefits during decision making. Proceedings of the National Academy of Sciences of the United States of America, 107(50), 21767–21772. doi: 10.1073/pnas.0908104107
- Bogacz, R., Usher, M., Zhang, J., & McClelland, J. L. (2007). Extending a biologically inspired model of choice: Multi-alternatives, nonlinearity and value-based multidimensional choice. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 362(1485), 1655–1670. doi: 10.1098/rstb.2007.2059
- Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson’s method. Tutorials in Quantitative Methods for Psychology, 1(1), 42–45. Retrieved from http://www.tqmp.org/Content/vol01-1/p042/p042.pdf
- Drugowitsch, J., Moreno-Bote, R., Churchland, A. K., Shadlen, M. N., & Pouget, A. (2012). The cost of accumulating evidence in perceptual decision making. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 32(11), 3612–3628. doi: 10.1523/JNEUROSCI.4010-11.2012
- Geisler, W.S. (1989). Sequential ideal-observer analysis of visual discriminations. Psychological Review, 96(2), 267–314. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/2652171
- Heathcote, A., Brown, S., & Cousineau, D. (2004). QMPE: Estimating lognormal, Wald, and Weibull RT distributions with a parameter-dependent lower bound. Behavior Research Methods, Instruments, & Computers: A Journal of the Psychonomic Society, Inc, 36(2), 277–290. doi: 10.3758/BF03195574
- Heitz, R.P. (2014). The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Frontiers in Neuroscience, 8(June), 150. doi: 10.3389/fnins.2014.00150
- Louie, K., Khaw, M. W., & Glimcher, P. W. (2013). Normalization is a general neural mechanism for context-dependent decision making. Proceedings of the National Academy of Sciences of the United States of America, 110(15), 6139–6144. doi: 10.1073/pnas.1217854110
- Marshall, J. A. R., Bogacz, R., Dornhaus, A., Planqué, R., Kovacs, T., & Franks, N. R. (2009). On optimal decision-making in brains and social insect colonies. Journal of the Royal Society Interface, 6(40), 1065–1074. doi: 10.1098/rsif.2008.0511
- Meyer, D.E., Irwin, D.E., Osman, A.M., & Kounios, J. (1988). The dynamics of cognition and action: Mental processes inferred from speed-accuracy decomposition. Psychological Review, 95(2), 183–237. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/3375399
- Pais, D., Hogan, P.M., Schlegel, T., Franks, N.R., Leonard, N.E., & Marshall, J.A.R. (2013). A mechanism for value-sensitive decision-making. PloS One, 8(9), e73216. doi: 10.1371/journal.pone.0073216
- Pirrone, A., Stafford, T., & Marshall, J.A.R. (2014). When natural selection should optimize speed-accuracy trade-offs. Frontiers in Neuroscience, 8(April), 73. doi: 10.3389/fnins.2014.00073
- Raftery, A.E. (1995). Bayesian model selection in social research. Sociological Methodology, 25, 111–163. Retrieved from https://www.stat.washington.edu/raftery/Research/PDF/socmeth1995.pdf
- Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464.
- Seeley, T.D., Visscher, P.K., Schlegel, T., Hogan, P.M., Franks, N.R., & Marshall, J.A.R. (2012). Stop signals provide cross inhibition in collective decision-making by honeybee swarms. Science, 335(6064), 108–111.
- Towal, R.B., Mormann, M., & Koch, C. (2013). Simultaneous modeling of visual saliency and value computation improves predictions of economic choice. Proceedings of the National Academy of Sciences of the United States of America, 110(40), E3858–67. doi: 10.1073/pnas.1304429110
- Tsetsos, K., Chater, N., & Usher, M. (2012). Salience driven value integration explains decision biases and preference reversal. Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659–9664. doi: 10.1073/pnas.1119569109
- Usher, M., Olami, Z., & McClelland, J. L. (2002). Hick's law in a stochastic race model with speed–accuracy tradeoff. Journal of Mathematical Psychology, 46(6), 704-715. doi: 10.1006/jmps.2002.1420
- Zylberberg, A., Barttfeld, P., & Sigman, M. (2012). The construction of confidence in a perceptual decision. Frontiers in Integrative Neuroscience, 6(September), 79. doi: 10.3389/fnint.2012.00079