Abstract
Self-reports remain affective science’s only direct measure of subjective affective experiences. Yet little research has sought to understand the psychological process that transforms subjective experience into self-reports. Here, we propose that by framing these self-reports as dynamic affective decisions, affective scientists may leverage the computational tools of decision-making research, sequential sampling models specifically, to better disentangle affective experience from the noisy decision processes that constitute self-report. We further outline how such an approach could help affective scientists better probe the specific mechanisms that underlie important moderators of affective experience (e.g., contextual differences, individual differences, and emotion regulation). Finally, we discuss how adopting this decision-making framework could generate insight into affective processes more broadly and facilitate reciprocal collaborations between affective and decision scientists towards a more comprehensive and integrative psychological science.
Affective Experiences
Subjective experience is fundamental to human affective processing (Adolphs, 2010; Barrett et al., 2007; Coan & Allen, 2007; Coppin & Sander, 2021; Cowen & Keltner, 2017; LeDoux & Hofmann, 2018; Quigley et al., 2014). Yet self-reports, the only direct measure of subjective affective experience, evoke both reverence and skepticism (Barrett & Westlin, 2021; LeDoux & Hofmann, 2018). Much affective science research relies on methodologies that assume self-reported emotion accurately indicates the presence of or changes in emotion states, permitting investigation of their (neuro)physiological and behavioral dynamics (Coan & Allen, 2007). At the same time, researchers have characterized subjective reports as unreliable measures of affective dynamics that fail to cohere with physiological signatures of emotion (Barrett & Westlin, 2021; LeDoux & Hofmann, 2018; Quigley et al., 2014). Better specifying the psychological mechanisms that transform subjective emotion experiences into self-report ratings might help to resolve this tension, yet surprisingly little work exists on this topic (Scherer & Moors, 2019).
In this review, we propose a two-part solution to this problem. First, consistent with a long history of theorizing in affective science, we view emotion reports as a class of affective decisions (Barrett, 2017; Berkovich & Meiran, 2022; Givon et al., 2020; Karmon-Presser et al., 2018), which take as evidence (among other possible sources) sensory signals (interoceptive: Critchley & Garfinkel, 2017; Terasawa et al., 2013; Wiens, 2005; and proprioceptive: Coles et al., 2019; Stepper & Strack, 1993) and situational appraisals (Lindquist & Barrett, 2008; Scherer & Moors, 2019; Singer-Landau & Meiran, 2021). Second, and more critically, following the example of others in the field (Givon et al., 2020; Karmon-Presser et al., 2018), we argue that viewing self-reports as evidence-based decisions allows us to draw on computational models of perceptual and value-based decision-making to resolve tensions over what affective self-reports mean and to reveal novel insights into the dynamics of affective experience. Specifically, we first establish that self-reports constitute a kind of decision. Next, we show how sequential sampling models of decision-making (SSMs: Forstmann et al., 2016) can disentangle subjective emotion experience from noisy and variable self-reports and identify distinct mechanisms through which moderators of subjective experience (e.g., individual differences, culture, and regulation) change self-reports. Finally, we argue that adopting this computational framework will allow for greater collaborative efforts between decision and affective scientists and theoretical synthesis between these fields.
Self-Reports of Emotions as Affective Decisions
Conventional measures of self-reported affective experience often use (pseudo-)continuous numerical scales in combination with verbal labels to anchor interpretations of specific values (Coan & Allen, 2007; Jebb et al., 2021). Researchers use these scales to elicit and measure structured reflection on current, past, or future affective experiences by mapping them onto predefined choice options. Within a decision-making framework, one can conceive of these ratings as choices where participants evaluate which rating option best represents their affective states by constructing a task “value” for each option and comparing across them (Busemeyer et al., 2019; Givon et al., 2020; Karmon-Presser et al., 2018; Rangel et al., 2008).
For example, when you unexpectedly receive a gift from a close friend, you might feel both surprised and happy. While these categorical labels might not define the affective experience per se, they are useful ways to apprehend our subjective experiences and communicate them to others. If asked how you felt, the option to report these feelings as surprise might have a moderately high task-value because it captures your subjective feelings about the unexpectedness of the gift. Similarly, the option to report these feelings as happiness might also have a moderately high task-value because it captures your subjective feelings of being cared for by your friend. In contrast, the option to report these feelings as anger (if present) would have an extremely low task-value because it fails to capture any component of your subjective feelings. Thus, if presented with all three emotion labels as possible options (e.g., Do you feel happy, surprised, or angry?), you would likely self-report feeling surprised or happy but not angry. If presented with only a choice between happy and angry, you would select happy.
Such a framework emphasizes how simply eliciting self-reported emotion fundamentally structures the interpretation of subjective affective experience as evidence for self-report. Here, we discuss how two specific structural features that vary across affective self-report measures (detection/differentiation vs. affect/emotion) might reconfigure this evidence construction process through which our affective states shape recorded self-reports.
First, self-reports may be structured to elicit (1) affective detection (whether a person experienced a specific affective state like anger) or (2) affective differentiation (which of a number of affective states, like anger or fear, a person experienced). Different specifications of the question can change what evidence is relevant for self-report (Kirkland & Cunningham, 2012). The former instructs participants to construct evidence for the presence/absence of an affective state; the latter instructs participants to accumulate evidence that discriminates between two possible affective states. For example, while interoceptive evidence about heart rate might drive self-reports of anger in presence/absence judgments, it is unlikely to be integrated as evidence for anger vs. fear because it might support the presence of both.
Second, self-reports are often used to measure both (1) more general dimensions of affective states (e.g., valence and arousal: Bradley & Lang, 1994; Kuppens et al., 2013; Russell, 1980) and (2) more specific and complex emotions (Watson et al., 1988). These distinct specifications could likewise shape the construction of evidence during ratings. While decisions about the unpleasantness of an experience (e.g., Was this experience unpleasant or not?) may rely more heavily on sensory evidence from interoceptive sources and less on specific situational appraisals, decisions about whether an experience constituted disgust (e.g., Did you feel disgusted (or not)?) might require careful consideration of both the interoceptive evidence and a much broader set of situational appraisals (Roseman et al., 1990).
However, common across these choice specifications is that the evidence construction process is psychologically and biophysically (e.g., neurally) constrained and thus noisy, due to random variation in the processing environment (both external and internal to the human body: Hilbert, 2012; Ratcliff, 2001). Consequently, recorded self-reports ultimately reflect a composite of both the underlying affective experience and variations in the decision-process (e.g., response biases, random noise).
Sequential Sampling Models of Noisy Affective Self-Reports
Sequential sampling models formalize this decision-making framework by postulating that affective self-reports result from the weighted integration of noisy, experience-relevant evidence for each reporting option (Busemeyer et al., 2019; Forstmann et al., 2016; Givon et al., 2020). To deal with noise, these models assume individuals accumulate variable samples of their experience as evidence over time towards thresholds for various response options, with the ultimate choice and time taken to make that choice resulting from which threshold the accumulated evidence crosses first, and when. Thus, to deploy these models, measurements of self-reports should include not only the content of the self-report but also the time it takes participants to make those reports to capture the dynamics of the evidence accumulation process.
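To make this accumulation-to-threshold process concrete, the following is a minimal Python sketch of a two-boundary diffusion decision applied to an anger-detection report. All parameter values (drift rate, threshold, noise scale, time step) and the option labels are illustrative assumptions, not estimates from data:

```python
import numpy as np

def simulate_self_report(drift, threshold, noise_sd=1.0, dt=0.01,
                         max_t=10.0, rng=None):
    """Simulate one noisy affective self-report as a diffusion process.

    Evidence drifts toward +threshold (report "angry") or -threshold
    (report "not angry"); whichever boundary is crossed first determines
    both the chosen option and the response time.
    """
    rng = np.random.default_rng() if rng is None else rng
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_t:
        evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = "angry" if evidence >= threshold else "not angry"
    return choice, t

rng = np.random.default_rng(0)
# Stronger experiential evidence (higher drift) should yield faster and
# more consistent reports of the matching option than weaker evidence.
strong = [simulate_self_report(drift=1.5, threshold=1.0, rng=rng)
          for _ in range(500)]
weak = [simulate_self_report(drift=0.2, threshold=1.0, rng=rng)
        for _ in range(500)]
```

Comparing the two simulated conditions illustrates the core prediction of the framework: the high-drift condition produces a larger share of “angry” reports with shorter response times, mirroring the claim that intense experiences generate fast, consistent self-reports.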
Inspired by others in the field (Givon et al., 2020; Karmon-Presser & Meiran, 2019), we likewise propose here that experiential evidence is driven by the weighted sum of inputs from a number of sources, including but not limited to interoception, proprioception, appraisals, and action tendencies (Eq. 1), which in turn drives subjective reports: more intense experiences generate stronger evidence for one reporting option over others, which subsequently leads to faster, more consistent reporting of that option. For example, when asked to report whether or not we felt anger (detection), we might recognize as anger an experience where we felt our heart racing (interoceptive evidence), our jaw clenching (proprioceptive evidence), an urge to punch something (action evidence), and wronged by another person (appraisal evidence), weighing them all similarly. Importantly, these weights not only identify whether a source of information is relevant but also how relevant it is for self-reported subjective experience. As discussed above, each of these sources of evidence may be more or less informative for deciding between self-report options depending on the structure of the question eliciting the self-report.
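The weighted integration of evidence sources described in this paragraph can be sketched as a single evidence-construction step. The source names, signal values, and uniform weights below are purely illustrative assumptions chosen to match the anger example, not fitted quantities:

```python
# Hypothetical evidence-construction step: the drift toward an "anger"
# report is a weighted sum of momentary evidence sources (names and
# weights are illustrative assumptions, not estimates from data).
weights = {
    "interoceptive": 0.25,   # e.g., heart racing
    "proprioceptive": 0.25,  # e.g., jaw clenching
    "action_tendency": 0.25, # e.g., urge to punch something
    "appraisal": 0.25,       # e.g., "I was wronged"
}

def evidence_drift(signals, weights):
    """Weighted sum of experience-relevant inputs; each signal is a
    signed strength of evidence for (+) or against (-) the report."""
    return sum(weights[k] * v for k, v in signals.items())

signals = {"interoceptive": 0.8, "proprioceptive": 0.6,
           "action_tendency": 0.7, "appraisal": 0.9}
drift = evidence_drift(signals, weights)  # feeds the accumulation process
```

Setting a source’s weight to zero formalizes the point made above: a question framing (e.g., anger vs. fear rather than anger vs. no anger) can render an otherwise vivid signal, like heart rate, irrelevant to the decision.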
Differences in the context of self-report could also shape the relative contribution of random noise to the resulting decisions by changing the overall evidence-threshold for response. Increasing the thresholds makes self-reports more resistant to noise but results in longer decision times. Reducing thresholds makes reports faster but more subject to random noise (Bogacz et al., 2010). Additionally, asymmetrical thresholds for self-report options also systematically bias responses independent of the underlying affective experience (Forstmann et al., 2010; Leite & Ratcliff, 2011). Response options with lower thresholds are chosen more quickly and frequently because less evidence is required to cross the threshold but are more likely to be chosen in error (i.e., in contradiction to the overall evidence) due to noise.
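A small simulation, again under assumed parameter values and illustrative option labels, shows how asymmetric thresholds alone can bias reports even when the net experiential evidence is zero:

```python
import numpy as np

def race_report(drift, thresh_a, thresh_b, noise_sd=1.0, dt=0.01,
                max_t=10.0, rng=None):
    # Single accumulator drifting between an upper bound (option A,
    # "anger") and a lower bound (option B, "fear"); asymmetric bounds
    # bias the outcome independent of the evidence itself.
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while -thresh_b < x < thresh_a and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("anger", t) if x >= thresh_a else ("fear", t)

rng = np.random.default_rng(1)
# Zero net evidence, but a selectively lowered threshold for "anger":
# anger is nonetheless reported more often, and more quickly.
biased = [race_report(0.0, thresh_a=0.5, thresh_b=1.5, rng=rng)
          for _ in range(500)]
anger_share = sum(c == "anger" for c, _ in biased) / 500
```

In this sketch roughly three-quarters of reports land on the lower-threshold option despite driftless evidence, which is exactly the signature that lets sequential sampling models separate response bias from genuine differences in experience.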
Take for example someone’s decision to self-report their negative experience with an aggressive stranger as anger rather than fear (see Fig. 1). One explanation for this report might be appraisals that the stranger’s aggression is unjustified, providing evidence for anger rather than fear (Kuppens et al., 2003; Lindquist & Barrett, 2008). Another possibility is that they actually appraised the stranger as a threat and thus indicative of fear but responded in error due to a combination of random noise and lack of response caution. A third possibility is that they were already in an irritated mood at the time of response (Schmid & Schmid Mast, 2010), priming their responses towards anger by selectively lowering the threshold of evidence for the anger option. Sequential sampling models can quantify the degree to which each of these mechanisms drive responses to explain and predict variability and stability of self-reports across contexts.
Critically, while for simplicity our examples have involved only binary self-report options, sequential sampling models can also capture decisions between multiple options and even continuous scales (Brown & Heathcote, 2008; Evans et al., 2019; Heathcote et al., 2022; Kvam, 2019; Moran et al., 2015; Ratcliff, 2018; Ratcliff & Rouder, 1998; Roberts et al., 2023; Tillman et al., 2020). This means that the principles of our decision-making framework may be applied equivalently across a variety of self-report measures, so long as response times are measured simultaneously.
Applications of a Sequential Sampling Framework
In practice, application of these models requires multiple instances of self-report to varied affective stimuli. Researchers interested in determinants of affective experience could generate this variability by presenting a range of affective stimuli that differ on a few specific and quantifiable dimensions (e.g., physical color, psychological threat) and recording the content and timing of self-reports. Sequential sampling models fit to these data could then identify how these dimensions shape the affective dynamics specific to an individual’s self-reports. By aggregating the model’s parameters across a sample of participants, researchers could then make inferences about (in)consistencies between individuals (Wiecki et al., 2013).
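In practice this fitting step would be done with dedicated tools (e.g., the hierarchical Bayesian approach described by Wiecki et al., 2013), but the underlying logic can be illustrated with a toy parameter recovery: simulate choices from a known drift rate, then recover that drift from the observed choice proportion by inverting the closed-form choice probability of the symmetric diffusion model. All parameter values here are illustrative assumptions:

```python
import numpy as np

def simulate_choice(drift, threshold=1.0, noise_sd=1.0, dt=0.01, rng=None):
    # One diffusion trial; True if the upper boundary is crossed first.
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    while abs(x) < threshold:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
    return x >= threshold

rng = np.random.default_rng(2)
true_drift, threshold = 0.8, 1.0
choices = [simulate_choice(true_drift, threshold, rng=rng)
           for _ in range(1000)]

# For the symmetric diffusion model starting midway between boundaries,
#   P(upper) = 1 / (1 + exp(-2 * threshold * drift / noise_sd**2)),
# so the drift can be recovered from the observed choice proportion.
p_upper = sum(choices) / len(choices)
recovered_drift = np.log(p_upper / (1 - p_upper)) / (2 * threshold)
```

Real applications also exploit the response-time distributions (which this choice-only sketch ignores) and pool information hierarchically across participants, which is what licenses the between-individual inferences described above.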
While the full potential of a sequential sampling framework remains to be tested, recent studies have begun to explore its utility in characterizing subjective affective experiences (Berkovich & Meiran, 2022; Givon et al., 2020). In these studies, participants indicated whether emotionally evocative images made them feel pleasant or not, and the authors modeled their choices and response times using sequential sampling models. Model-fitting in these studies revealed unexpected asymmetries in the way people experience negative and positive affect: people’s self-reports were not only more sensitive to negative experiences compared to positive experiences (Givon et al., 2020), but also more certain about the intensity of these negative experiences (Berkovich & Meiran, 2022). In other words, people accumulated evidence more quickly and with less noise for negative compared to positive experiences. Consequently, while evidence for negative experiences scaled linearly with intensity, for a positive experience to generate an evidence signal twice as strong as another positive experience (relative to the noise), it needed to be more than twice as intense. These findings suggest that people generally form stronger and more precise impressions of negative experiences, possibly explaining why they sometimes learn faster from negative compared to positive feedback (Gershman, 2015).
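The reported asymmetry can be made concrete with a pair of evidence-mapping functions. The linear and power-law forms, and the exponent below, are assumptions chosen only to illustrate the qualitative pattern, not the functions fitted in these studies:

```python
def negative_evidence(intensity, k=1.0):
    # Negative experiences: evidence strength scales linearly with
    # intensity (illustrative assumption matching the reported pattern).
    return k * intensity

def positive_evidence(intensity, k=1.0, gamma=0.5):
    # Positive experiences: compressive (sublinear) scaling, so doubling
    # the evidence signal requires more than doubling the intensity.
    # The exponent gamma = 0.5 is purely illustrative.
    return k * intensity ** gamma

# Doubling a negative experience's intensity doubles its evidence signal,
# while doubling a positive experience's intensity yields less than
# double the signal; under gamma = 0.5, the positive intensity must
# quadruple before the signal doubles.
neg_ratio = negative_evidence(2.0) / negative_evidence(1.0)
pos_ratio = positive_evidence(2.0) / positive_evidence(1.0)
```

Under this toy parameterization, equally “intense” positive and negative events feed unequal evidence into the accumulation process, which is one way to express why negative experiences are reported faster and more consistently.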
Moreover, these models hold great promise for affective scientists because they identify multiple target processes through which individual differences, context, and regulatory strategies can shape self-reports of affective experience. In a recent paper, Givon et al. (2023) applied these models to elucidate the precise mechanism that underlies previously reported gender differences in affective experience. By distinguishing between the evidence and thresholds for self-reports of valence, they found that women generated significantly stronger evidence towards negative stimuli compared to men but had similar thresholds for responding. These findings suggest that women were not a priori biased towards reporting all stimuli as negative, but rather may actually experience negative stimuli more intensely than men. These results raise important questions about the source of these gender differences in evidence construction. Do women have stronger physiological responses to negative stimuli than men or simply weight equivalent responses more heavily? Alternatively or in addition, do women recruit a different set of appraisals as evidence for these self-reports? Answers to these questions may help us better understand consequential gender differences in the reported prevalence of affective disorders (Altemus et al., 2014), since coherence between physiology and self-reported affective experiences has been found to predict subjective well-being (Brown et al., 2020).
Similarly, these models could better characterize contextual differences in self-reported emotional experience, like those between cultures, and whether they derive directly from differences in the appraisals involved in the evidence generation process (Imada & Ellsworth, 2011; Roseman et al., 1995; Scherer, 1997), or thresholds of evidence for specific response options due to cultural norms (Matsumoto, 1990; Mesquita & Walker, 2003). Alternatively, these models could also advance research on how distinct emotion regulation strategies shape subjective experience and self-reports (McRae et al., 2012; Troy et al., 2018). For example, experimental demand when instructing people to regulate could decrease reports of emotion not because internal experience changes, but because it increases the threshold of evidence required for reporting any emotion (positive or negative), or biases people to respond positively. A sequential sampling framework allows researchers to better explore how various emotion regulation strategies target different substrates of the affective process during self-reports of experience (Gross, 2015). This, in turn, would not only better characterize the specificity and efficacy of emotion regulation strategies but also open up investigations into more complex regulatory strategies that target multiple affective processes or combine multiple techniques (Ford et al., 2019).
Future Directions
At the same time, the quality of inferences about the dynamics of affective self-report will also improve as the sophistication of computational models improves. Newer sequential sampling models enable researchers to segment the evidence accumulation process into distinct stages that weight evidence in different ways at different times (Diederich & Trueblood, 2018; Maier et al., 2020), and incorporate richer forms of data, like eye-movements, to identify on a moment-by-moment basis what evidence is being prioritized by visual attention (Krajbich et al., 2010; Teoh et al., 2020). Such approaches could be easily adopted by affective scientists who seek to understand how moment-to-moment changes in affective experience drive self-reports, by grounding evidence construction in temporally precise measures of physiological activity like skin-conductance response or cardiac inter-beat interval (Butler, 2017). Future models could further be developed using multi-modal methods combining physiological recording and eye-tracking to understand how interoceptive processes and visual attention jointly drive the subjective experience and self-report of emotions.
Additionally, while thus far we have discussed the utility of a decision-making framework only in uncovering the temporal dynamics leading up to a single self-report, our approach also offers insight into how affective experiences may evolve from one self-report to the next. Recent research suggests that naming affective experiences impedes subsequent attempts to regulate these experiences (Nook et al., 2021). Combining this observation with decision-making research on confirmation biases (Chaxel et al., 2013; Navajas et al., 2016; Talluri et al., 2018), sequential sampling models provide a means to test the idea that emotion naming selectively constrains patterns of attention and appraisal, leading to emotional rigidity during subsequent self-reports (Moran et al., 2015; Turner et al., 2021), and thereby to formalize theories about the iterative nature of affective processes (Cunningham et al., 2013; Ford et al., 2019; Gross, 2015; Gross & Barrett, 2011).
In light of accelerating interest in the role of affective states on cognition and behavior across broad swaths of psychological science (Dukes et al., 2021; FeldmanHall & Heffner, 2022; Lerner et al., 2015; Phelps et al., 2014; Roberts & Hutcherson, 2019), we hope that our paper here highlights the utility of a reciprocal approach to understanding subjective affective experience, and affective processes more generally, by drawing on insights from decision-making research. We believe that these kinds of collaborations between affective and decision scientists will spur continued discoveries in the respective fields and contribute to a more comprehensive and integrative psychological science.
References
Adolphs, R. (2010). Emotion. Current Biology, 20(13), R549–R552. https://doi.org/10.1016/j.cub.2010.05.046
Altemus, M., Sarvaiya, N., & Neill Epperson, C. (2014). Sex differences in anxiety and depression clinical perspectives. Frontiers in Neuroendocrinology, 35(3), 320–330. https://doi.org/10.1016/j.yfrne.2014.05.004
Barrett, L. F. (2017). The theory of constructed emotion: An active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience, 12(1), 1–23. https://doi.org/10.1093/scan/nsw154
Barrett, L. F., Mesquita, B., Ochsner, K. N., & Gross, J. J. (2007). The experience of emotion. Annual Review of Psychology, 58(1), 373–403. https://doi.org/10.1146/annurev.psych.58.110405.085709
Barrett, L. F., & Westlin, C. (2021). Navigating the science of emotion. In Emotion measurement (pp. 39–84). Elsevier. https://doi.org/10.1016/B978-0-12-821124-3.00002-8
Berkovich, R., & Meiran, N. (2022). Pleasant emotional feelings follow one of the most basic psychophysical laws (Weber’s law) as most sensations do. Emotion. https://doi.org/10.1037/emo0001161
Bogacz, R., Wagenmakers, E. J., Forstmann, B. U., & Nieuwenhuis, S. (2010). The neural basis of the speed-accuracy tradeoff. Trends in Neurosciences, 33(1), 10–16. https://doi.org/10.1016/j.tins.2009.09.002
Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49–59.
Brown, C. L., Van Doren, N., Ford, B. Q., Mauss, I. B., Sze, J. W., & Levenson, R. W. (2020). Coherence between subjective experience and physiology in emotion: Individual differences and implications for well-being. Emotion, 20(5), 818–829. https://doi.org/10.1037/emo0000579
Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57(3), 153–178. https://doi.org/10.1016/j.cogpsych.2007.12.002
Busemeyer, J. R., Gluth, S., Rieskamp, J., & Turner, B. M. (2019). Cognitive and neural bases of multi-attribute, multi-alternative, value-based decisions. Trends in Cognitive Sciences, 23(3), 251–263. https://doi.org/10.1016/j.tics.2018.12.003
Butler, E. A. (2017). Emotions are temporal interpersonal systems. Current Opinion in Psychology, 17, 129–134. https://doi.org/10.1016/j.copsyc.2017.07.005
Chaxel, A.-S., Russo, J. E., & Kerimi, N. (2013). Preference-driven biases in decision makers’ information search and evaluation. Judgment and Decision Making, 8(5), 561–576.
Coan, J. A., & Allen, J. J. (2007). Handbook of emotion elicitation and assessment. Oxford University Press.
Coles, N. A., Larsen, J. T., & Lench, H. C. (2019). A meta-analysis of the facial feedback literature: Effects of facial feedback on emotional experience are small and variable. Psychological Bulletin, 145(6), 610–651. https://doi.org/10.1037/bul0000194
Coppin, G., & Sander, D. (2021). Chapter 1—theoretical approaches to emotion and its measurement. In H. L. Meiselman (Ed.), Emotion measurement (second edition) (pp. 3–37). Woodhead Publishing. https://doi.org/10.1016/B978-0-12-821124-3.00001-6
Cowen, A. S., & Keltner, D. (2017). Self-report captures 27 distinct categories of emotion bridged by continuous gradients. Proceedings of the National Academy of Sciences, 114(38), E7900–E7909. https://doi.org/10.1073/pnas.1702247114
Critchley, H. D., & Garfinkel, S. N. (2017). Interoception and emotion. Current Opinion in Psychology, 17, 7–14. https://doi.org/10.1016/j.copsyc.2017.04.020
Cunningham, W. A., Dunfield, K. A., & Stillman, P. E. (2013). Emotional states from affective dynamics. Emotion Review, 5(4), 344–355. https://doi.org/10.1177/1754073913489749
Diederich, A., & Trueblood, J. S. (2018). A dynamic dual process model of risky decision making. Psychological Review, 125(2), 270–292.
Dukes, D., Abrams, K., Adolphs, R., Ahmed, M. E., Beatty, A., Berridge, K. C., Broomhall, S., Brosch, T., Campos, J. J., Clay, Z., Clément, F., Cunningham, W. A., Damasio, A., Damasio, H., D’Arms, J., Davidson, J. W., de Gelder, B., Deonna, J., de Sousa, R., … Sander, D. (2021). The rise of affectivism. Nature Human Behaviour, 5(7), 816–820. https://doi.org/10.1038/s41562-021-01130-8
Evans, N. J., Holmes, W. R., & Trueblood, J. S. (2019). Response-time data provide critical constraints on dynamic models of multi-alternative, multi-attribute choice. Psychonomic Bulletin & Review, 26(3), 901–933. https://doi.org/10.3758/s13423-018-1557-z
FeldmanHall, O., & Heffner, J. (2022). A generalizable framework for assessing the role of emotion during choice. American Psychologist, 77(9), 1017–1029. https://doi.org/10.1037/amp0001108
Ford, B. Q., Gross, J. J., & Gruber, J. (2019). Broadening our field of view: The role of emotion polyregulation. Emotion Review, 11(3), 197–208. https://doi.org/10.1177/1754073919850314
Forstmann, B. U., Brown, S., Dutilh, G., Neumann, J., & Wagenmakers, E.-J. (2010). The neural substrate of prior information in perceptual decision making: A model-based analysis. Frontiers in Human Neuroscience, 4. https://doi.org/10.3389/fnhum.2010.00040
Forstmann, B. U., Ratcliff, R., & Wagenmakers, E.-J. (2016). Sequential sampling models in cognitive neuroscience: advantages, applications, and extensions. Annual Review of Psychology, 67, 641–666.
Gershman, S. J. (2015). Do learning rates adapt to the distribution of rewards? Psychonomic Bulletin & Review, 22(5), 1320–1327. https://doi.org/10.3758/s13423-014-0790-3
Givon, E., Berkovich, R., Oz-Cohen, E., Rubinstein, K., Singer-Landau, E., Udelsman-Danieli, G., & Meiran, N. (2023). Are women truly “more emotional” than men? Sex differences in an indirect model-based measure of emotional feelings. Current Psychology. https://doi.org/10.1007/s12144-022-04227-z
Givon, E., Itzhak-Raz, A., Karmon-Presser, A., Danieli, G., & Meiran, N. (2020). How does the emotional experience evolve? Feeling generation as evidence accumulation. Emotion, 20(2), 271–285. https://doi.org/10.1037/emo0000537
Gross, J. J. (2015). Emotion regulation: Current status and future prospects. Psychological Inquiry, 26(1), 1–26. https://doi.org/10.1080/1047840X.2014.940781
Gross, J. J., & Barrett, L. F. (2011). Emotion generation and emotion regulation: one or two depends on your point of view. Emotion Review, 3(1), 8–16. https://doi.org/10.1177/1754073910380974
Heathcote, A., & Matzke, D. (2022). Winner takes all! What are race models, and why and how should psychologists use them? Current Directions in Psychological Science, 31(5), 383–394. https://doi.org/10.1177/09637214221095852
Hilbert, M. (2012). Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making. Psychological Bulletin, 138(2), 211–237. https://doi.org/10.1037/a0025940
Imada, T., & Ellsworth, P. C. (2011). Proud Americans and lucky Japanese: Cultural differences in appraisal and corresponding emotion. Emotion, 11, 329–345. https://doi.org/10.1037/a0022855
Jebb, A. T., Ng, V., & Tay, L. (2021). A review of key Likert scale development advances: 1995–2019. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.637547
Karmon-Presser, A., & Meiran, N. (2019). A signal-detection approach to individual differences in negative feeling. Heliyon, 5(4), e01344. https://doi.org/10.1016/j.heliyon.2019.e01344
Karmon-Presser, A., Sheppes, G., & Meiran, N. (2018). How does it “feel”? A signal detection approach to feeling generation. Emotion, 18(1), 94–115. https://doi.org/10.1037/emo0000298
Kirkland, T., & Cunningham, W. A. (2012). Mapping emotions through time: How affective trajectories inform the language of emotion. Emotion, 12(2), 268–282. https://doi.org/10.1037/a0024218
Krajbich, I., Armel, C., & Rangel, A. (2010). Visual fixations and the computation and comparison of value in simple choice. Nature Neuroscience, 13(10), 1292–1298. https://doi.org/10.1038/nn.2635
Kuppens, P., Tuerlinckx, F., Russell, J. A., & Barrett, L. F. (2013). The relation between valence and arousal in subjective experience. Psychological Bulletin, 139(4), 917–940. https://doi.org/10.1037/a0030811
Kuppens, P., Van Mechelen, I., Smits, D. J. M., & De Boeck, P. (2003). The appraisal basis of anger: Specificity, necessity and sufficiency of components. Emotion, 3(3), 254–269. https://doi.org/10.1037/1528-3542.3.3.254
Kvam, P. D. (2019). A geometric framework for modeling dynamic decisions among arbitrarily many alternatives. Journal of Mathematical Psychology, 91, 14–37. https://doi.org/10.1016/j.jmp.2019.03.001
LeDoux, J. E., & Hofmann, S. G. (2018). The subjective experience of emotion: A fearful view. Current Opinion in Behavioral Sciences, 19, 67–72. https://doi.org/10.1016/j.cobeha.2017.09.011
Leite, F. P., & Ratcliff, R. (2011). What cognitive processes drive response biases? A diffusion model analysis. Judgment and Decision Making, 6(7), 651–687. https://doi.org/10.1017/S1930297500002680
Lerner, J. S., Li, Y., Valdesolo, P., & Kassam, K. S. (2015). Emotion and decision making. Annual Review of Psychology, 66(1), 799–823.
Lindquist, K. A., & Barrett, L. F. (2008). Constructing emotion: The experience of fear as a conceptual act. Psychological Science, 19(9), 898–903. https://doi.org/10.1111/j.1467-9280.2008.02174.x
Maier, S. U., Raja Beharelle, A., Polanía, R., Ruff, C. C., & Hare, T. A. (2020). Dissociable mechanisms govern when and how strongly reward attributes affect decisions. Nature Human Behaviour, 4(9), 949–963. https://doi.org/10.1038/s41562-020-0893-y
Matsumoto, D. (1990). Cultural similarities and differences in display rules. Motivation and Emotion, 14(3), 195–214.
McRae, K., Ciesielski, B., & Gross, J. J. (2012). Unpacking cognitive reappraisal: Goals, tactics, and outcomes. Emotion, 12(2), 250.
Mesquita, B., & Walker, R. (2003). Cultural differences in emotions: A context for interpreting emotional experiences. Behaviour Research and Therapy, 41(7), 777–793. https://doi.org/10.1016/S0005-7967(02)00189-4
Moran, R., Teodorescu, A. R., & Usher, M. (2015). Post choice information integration as a causal determinant of confidence: Novel data and a computational account. Cognitive Psychology, 78, 99–147. https://doi.org/10.1016/j.cogpsych.2015.01.002
Navajas, J., Bahrami, B., & Latham, P. E. (2016). Post-decisional accounts of biases in confidence. Current Opinion in Behavioral Sciences, 11, 55–60. https://doi.org/10.1016/j.cobeha.2016.05.005
Nook, E. C., Satpute, A. B., & Ochsner, K. N. (2021). Emotion naming impedes both cognitive reappraisal and mindful acceptance strategies of emotion regulation. Affective Science, 2(2), 187–198. https://doi.org/10.1007/s42761-021-00036-y
Phelps, E. A., Lempert, K. M., & Sokol-Hessner, P. (2014). Emotion and decision making: Multiple modulatory neural circuits. Annual Review of Neuroscience, 37(1), 263–287. https://doi.org/10.1146/annurev-neuro-071013-014119
Quigley, K. S., Lindquist, K. A., & Barrett, L. F. (2014). Inducing and measuring emotion and affect: Tips, tricks, and secrets. In Handbook of research methods in social and personality psychology, 2nd ed. (pp. 220–252). Cambridge University Press.
Rangel, A., Camerer, C., & Montague, P. R. (2008). A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience, 9(7), 545–556. https://doi.org/10.1038/nrn2357
Ratcliff, R. (2001). Putting noise into neurophysiological models of simple decision making. Nature Neuroscience, 4(4), 336–336. https://doi.org/10.1038/85956
Ratcliff, R. (2018). Decision making on spatially continuous scales. Psychological Review, 125(6), 888.
Ratcliff, R., & Rouder, J. (1998). Modeling response times for two-choice decisions. Psychological Science, 9(5), 347–356.
Roberts, I. D., HajiHosseini, A., & Hutcherson, C. A. (2023). How bad becomes good: A neurocomputational model of flexible affect valuation. OSF Preprints. https://doi.org/10.31219/osf.io/4cu98
Roberts, I. D., & Hutcherson, C. A. (2019). Affect and decision making: insights and predictions from computational models. Trends in Cognitive Sciences, 23(7), 602–614. https://doi.org/10.1016/j.tics.2019.04.005
Roseman, I. J., Dhawan, N., Rettek, S. I., Naidu, R. K., & Thapa, K. (1995). Cultural differences and cross-cultural similarities in appraisals and emotional responses. Journal of Cross-Cultural Psychology, 26(1), 23–38. https://doi.org/10.1177/002202219502600101
Roseman, I. J., Spindel, M. S., & Jose, P. E. (1990). Appraisals of emotion-eliciting events: Testing a theory of discrete emotions. Journal of Personality and Social Psychology, 59(5), 899.
Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178. https://doi.org/10.1037/h0077714
Scherer, K. R. (1997). The role of culture in emotion-antecedent appraisal. Journal of Personality and Social Psychology, 73, 902–922. https://doi.org/10.1037/0022-3514.73.5.902
Scherer, K. R., & Moors, A. (2019). The emotion process: Event appraisal and component differentiation. Annual Review of Psychology, 70(1), 719–745. https://doi.org/10.1146/annurev-psych-122216-011854
Schmid, P. C., & Schmid Mast, M. (2010). Mood effects on emotion recognition. Motivation and Emotion, 34(3), 288–292. https://doi.org/10.1007/s11031-010-9170-0
Singer-Landau, E., & Meiran, N. (2021). Cognitive appraisal contributes to feeling generation through emotional evidence accumulation rate: Evidence from instructed fictional reappraisal. Emotion, 21, 1366–1378. https://doi.org/10.1037/emo0001006
Stepper, S., & Strack, F. (1993). Proprioceptive determinants of emotional and nonemotional feelings. Journal of Personality and Social Psychology, 64(2), 211.
Talluri, B. C., Urai, A. E., Tsetsos, K., Usher, M., & Donner, T. H. (2018). Confirmation bias through selective overweighting of choice-consistent evidence. Current Biology, 28(19), 3128-3135.e8. https://doi.org/10.1016/j.cub.2018.07.052
Teoh, Y. Y., Yao, Z., Cunningham, W. A., & Hutcherson, C. A. (2020). Attentional priorities drive effects of time pressure on altruistic choice. Nature Communications, 11, 3534. https://doi.org/10.1038/s41467-020-17326-x
Terasawa, Y., Fukushima, H., & Umeda, S. (2013). How does interoceptive awareness interact with the subjective experience of emotion? An fMRI study. Human Brain Mapping, 34(3), 598–612. https://doi.org/10.1002/hbm.21458
Tillman, G., Van Zandt, T., & Logan, G. D. (2020). Sequential sampling models without random between-trial variability: The racing diffusion model of speeded decision making. Psychonomic Bulletin & Review, 27(5), 911–936. https://doi.org/10.3758/s13423-020-01719-6
Troy, A. S., Shallcross, A. J., Brunner, A., Friedman, R., & Jones, M. C. (2018). Cognitive reappraisal and acceptance: effects on emotion, physiology, and perceived cognitive costs. Emotion, 18(1), 58.
Turner, W., Feuerriegel, D., Andrejević, M., Hester, R., & Bode, S. (2021). Perceptual change-of-mind decisions are sensitive to absolute evidence magnitude. Cognitive Psychology, 124, 101358. https://doi.org/10.1016/j.cogpsych.2020.101358
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54, 1063–1070. https://doi.org/10.1037/0022-3514.54.6.1063
Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). HDDM: Hierarchical Bayesian estimation of the drift-diffusion model in Python. Frontiers in Neuroinformatics, 7, 14.
Wiens, S. (2005). Interoception in emotional experience. Current Opinion in Neurology, 18(4), 442–447. https://doi.org/10.1097/01.wco.0000168079.92106.99
Acknowledgements
We are expressly grateful to Dr. Brett Ford and Dr. Jennifer Stellar at the University of Toronto for their critical feedback on initial drafts of this article.
Ethics declarations
Funding
We also gratefully acknowledge funding support from the Canada Research Chairs program (to C.A.H.), the Natural Sciences and Engineering Research Council (to W.A.C.), and the Ontario Graduate Scholarship (to Y.Y.T.). All views expressed in this article represent the views of the authors and not of the funding bodies.
Competing interests
The authors declare no competing interests.
Data availability
Not applicable
Code availability
Not applicable
Authors’ contributions
Not applicable
Additional information
Handling Editor: Linda Camras
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Teoh, Y., Cunningham, W.A. & Hutcherson, C.A. Framing Subjective Emotion Reports as Dynamic Affective Decisions. Affec Sci 4, 522–528 (2023). https://doi.org/10.1007/s42761-023-00197-y