Abstract
The affect misattribution procedure (AMP) is a measure of implicit evaluations, designed to index the automatic retrieval of evaluative knowledge. The AMP effect consists in participants evaluating neutral target stimuli positively when preceded by positive primes and negatively when preceded by negative primes. After multiple prior tests of intentionality, Hughes et al. (Behav Res Methods 55(4):1558–1586, 2023) examined the role of awareness in the AMP and found that AMP effects were larger when participants indicated that their response was influenced by the prime than when they did not. Here we report seven experiments (six preregistered; N = 2350) in which we vary the methodological features of the AMP to better understand this awareness effect. In Experiments 1–4, we establish variability in the magnitude of the awareness effect in response to variations in the AMP procedure. By introducing further modifications to the AMP procedure, Experiments 5–7 suggest an alternative explanation of the awareness effect, namely that awareness can be the outcome, rather than the cause, of evaluative congruency between primes and responses: Awareness effects emerged even when awareness could not have contributed to AMP effects, including when participants judged influence awareness for third parties or primes were presented post hoc. Finally, increasing the evaluative strength of the primes increased participants’ tendency to misattribute AMP effects to the influence of target stimuli. Together, the present findings suggest that AMP effects can create awareness effects rather than vice versa and support the AMP’s construct validity as a measure of unintentional evaluations of which participants are also potentially unaware.
Data Availability
All data (including trial-level AMP data) are available via the Open Science Framework (https://osf.io/wfksp/).
Notes
In this paper, we use “attitudes” to refer to latent evaluative knowledge and “evaluations” to refer to observable behaviors, such as self-reports on a Likert scale or binary choices on an AMP. We use “explicit evaluations” to refer to the relatively controlled retrieval of evaluative knowledge and “implicit evaluations” to refer to the relatively automatic retrieval of evaluative knowledge. At the level of measures, we distinguish between “direct measures” and “indirect measures.” We do not assume that direct measures capture exclusively controlled processes or that indirect measures capture exclusively automatic processes; however, we believe that given their relatively controlled nature, direct measures are appropriately characterized as indexing explicit evaluations, and given their relatively automatic nature, indirect measures are appropriately characterized as indexing implicit evaluations.
Although these examples use an associative notation, for the purposes of this project we are agnostic regarding the representational format of attitudes, e.g., whether they are associative (The Yankees–Bad) or propositional (“The Yankees are terrible”).
Throughout the paper, we follow the terminology established by Hughes et al. (2023) and use “awareness effect” to refer to the finding that AMP effects are larger on trials in which participants report being influenced by the primes than on trials in which they do not. However, in the paper we show empirically that (a) such awareness effects can be an outcome, rather than a cause, of AMP effects and (b) such awareness effects need not emerge from privileged first-person access but can also be subserved by inferential mechanisms.
A precise breakdown of reasons for participant exclusions, including in cases where multiple criteria led to an exclusion decision, is available in the open data.
An anonymous reviewer expressed concerns about the English proficiency of participants from non-English-speaking countries. To alleviate these concerns, we refit the main models from Experiments 1–5 to the data of participants from majority-English-speaking countries only and found no substantial deviation from the conclusions reported in the paper. (Participants in Experiments 6–7 were recruited exclusively from the United States.) The corresponding models are available in the open code.
For Experiments 2–4, due to a clerical error, the preregistration documents list the awareness variable, rather than the AMP response, as the dependent variable. We follow the procedure of Hughes et al. (2023) in treating the AMP response, rather than the awareness variable, as the dependent variable. However, the substantive conclusions would remain unchanged even if the awareness variable were treated as the dependent variable.
Given the stepwise model comparison process, the degrees of freedom reported for the likelihood ratio test can differ from experiment to experiment even if the best-fitting model is the same. The reason for this is that the likelihood ratio test takes into account not only the complexity (number of parameters) of the final model but also the complexity (number of parameters) of the penultimate model. For example, the degrees of freedom for the likelihood ratio test will be larger when the penultimate model contains only one main effect rather than two main effects. We thank an anonymous reviewer for bringing this point to our attention.
In response to reviewer feedback, we conducted exploratory analyses to investigate whether including the maximal participant-level random-effects structure in each model leads to similar conclusions as the more parsimonious models reported in the main text.
To test for model overparameterization, we followed the procedure recommended by Bates et al. (2018) and Matuschek et al. (2017) and conducted a principal component analysis on the random-effects covariance matrix of the full model, and then simplified the model if there were principal components accounting for 0% of the variance after rounding to three decimal places. We simplified the model by first disallowing correlation between the random effects; then, if there was still a degenerate component, we dropped the corresponding random slope.
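The degenerate-component check described above can be sketched as follows. This is a minimal illustration in Python with made-up numbers, not the authors’ actual analysis code; in practice, the random-effects covariance matrix would be extracted from the fitted lme4 model (e.g., via the rePCA approach of Bates et al., 2018):

```python
import numpy as np

# Hypothetical random-effects covariance matrix (illustrative values only;
# in practice this comes from the fitted mixed-effects model).
cov = np.array([
    [0.40, 0.10, 0.00],
    [0.10, 0.25, 0.00],
    [0.00, 0.00, 0.00],  # a random effect carrying no variance
])

# The principal components of the random-effects structure are the
# eigenvalues of the covariance matrix, expressed as proportions of
# total variance.
eigenvalues = np.linalg.eigvalsh(cov)
proportions = eigenvalues / eigenvalues.sum()

# A component is degenerate if it accounts for 0% of the variance after
# rounding to three decimal places; if so, the model is simplified
# (first by dropping random-effect correlations, then random slopes).
needs_simplification = bool(np.any(np.round(proportions, 3) == 0.0))
print(needs_simplification)  # → True for this illustrative matrix
```

For the matrix above, the eigenvalues are 0, 0.20, and 0.45, so one component accounts for 0% of the variance and the check flags the model for simplification.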
In Experiment 4, we were unable to follow this approach given that the main model was a generalized additive mixed-effects model. In Experiments 1–3 and 7, the statistical inferences remained unchanged. In Experiments 5 and 6, the best-fitting models remained the same, with minor deviations in planned comparisons. Specifically, unlike in the main models, the first-person awareness effect with respect to positive stimuli was reduced to non-significance in the maximal model of the Experiment 5 data, and the awareness effect was reduced to non-significance with respect to positive stimuli in the maximal model of the Experiment 6 data. Both of these results are indicative of a valence asymmetry effect observed both in the original experiments of Hughes et al. (2023) and in the present work. We return to this effect in the general discussion.
The sign flip between the two correlations is theoretically expected given that in the standard condition, pressing the space bar indicated influence awareness, and in the reversed condition, it indicated lack of influence awareness.
An additional explanation of the substantial effect of response options relies on the idea of response sets, that is, the tendency to respond to self-report items in a way that is determined by the structure rather than the content of the question. Specifically, work involving the evaluative priming procedure suggests that participants are more likely to respond “yes” than “no” if two psychological events (in that case, the prime and target; in the present case, the prime and the response) are evaluatively congruent with each other (e.g., Wentura, 2000). Indeed, in line with this possibility, participants in both conditions of Experiment 2 were more likely to choose the “influenced” rather than the “not influenced” response following congruent than following incongruent trials, independently of whether the default response was “not influenced” or “influenced” (43% vs. 20% in the former condition and 65% vs. 38% in the latter). If response sets indeed contributed to responding on the prime influence measure in this and the remaining experiments, this would raise additional serious concerns about its internal validity. We thank an anonymous reviewer for raising this possibility.
Another piece of evidence against Hughes et al.’s (2023) account emerges from a recent study by Morris and Kurdi (2023). In this study, each participant completed five AMPs randomly selected from a larger set of 16 adapted from Nosek (2005). Crucially, the attitude objects were diverse and included comparisons such as American vs. Canadian, cats vs. dogs, Coke vs. Pepsi, thin people vs. fat people, and Yankees vs. Diamondbacks. As such, there is no theoretical reason to expect these attitudes, as a set, to be correlated with each other; rather, any high intercorrelation may be seen as evidence for method-specific variance, perhaps of the kind suggested by Hughes et al. (2023), inflating the statistical relationship. However, in fact, the average correlation across different AMPs was r = − .001 and did not differ significantly from zero, t(119) = − 0.30, p = .765, BF01 = 9.44, Cohen’s d = − 0.03, or from the correlation among explicit evaluations toward the same comparisons, t(119) = − 0.86, p = .394, BF01 = 6.21, Cohen’s d = − 0.08.
Hussey and Cummins (2022) disputed the validity of this interpretation, pointing out that if the absolute deviation from neutrality in AMP performance is used as the dependent variable in this analysis, an ICC of .26 is observed, suggesting shared variance across different AMPs. Even putting theoretical considerations about the accuracy of relying on absolute deviations aside, it is unclear how this intercorrelation provides evidence for Hughes et al.’s (2023) account given that scores on different AMPs may be correlated with each other for a host of different reasons having nothing to do with awareness of prime influence.
In fact, the correlation between the participant-level mean absolute deviation from neutrality on the set of five AMPs and mean absolute deviation from neutrality on the parallel set of five explicit measures in this sample was r = .464, t(566) = 12.47, p < .001. Importantly, a mean correlation of r = .170 was also observed when the participant-level correlation between extremity in implicit evaluations and extremity in explicit evaluations was calculated using nonoverlapping subsets of attitude objects, thus suggesting that the relationship is not entirely due to variance shared between implicit and explicit evaluations of the same targets. This result, although not conclusive, raises the possibility that the correlation observed by Hussey and Cummins (2022) in absolute deviation across different AMPs is a result of an individual difference having to do with broader tendencies in evaluative behavior, such as the need to evaluate (Jarvis & Petty, 1996), rather than any specific aspect(s) of AMP performance. And, of course, this is only one of many potential hypotheses about why such intercorrelations might emerge.
Depending on participants’ monitor settings (specifically, the refresh rates used), the actual duration of stimulus presentation may have differed from 16 ms. However, none of the conclusions of the present experiment depend on the specific length of exposure.
What is more, based on Experiments 5–6 it seems that, on the AMP, the source of such awareness is more likely to be a configuration of externally observable events (e.g., the combination of seeing the éclair and an involuntary approach movement toward the bakery) rather than privileged introspective awareness only available to the self. As such, even if intentionality were incompatible with this type of genuine introspective awareness, as yet no compelling evidence has been provided that AMP effects are subject to this type of awareness.
References
Bar-Anan, Y., & Nosek, B. A. (2012). Reporting intentional rating of the primes predicts priming effects in the Affective Misattribution Procedure. Personality and Social Psychology Bulletin, 38(9), 1194–1208. https://doi.org/10.1177/0146167212446835
Bar-Anan, Y., & Nosek, B. A. (2016). Misattribution of claims: Comment on Payne et al., 2013. PsyArXiv. https://doi.org/10.31234/osf.io/r75xb
Bargh, J. A. (1989). Conditional automaticity: Varieties of automatic influence in social perception and cognition. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 3–51). Guilford Press.
Bargh, J. A. (1994). The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition. In R. S. Wyer & T. K. Srull (Eds.), Handbook of social cognition: Basic processes; Applications (pp. 1–40). Lawrence Erlbaum Associates Inc.
Bates, D., Kliegl, R., Vasishth, S., & Baayen, H. (2018). Parsimonious mixed models. arXiv. http://arxiv.org/abs/1506.04967
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Bem, D. J. (1972). Self-perception theory. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 6, pp. 1–62). Elsevier. https://doi.org/10.1016/s0065-2601(08)60024-6
Brosch, T., & Sharma, D. (2005). The role of fear-relevant stimuli in visual search: A comparison of phylogenetic and ontogenetic stimuli. Emotion, 5(3), 360–364. https://doi.org/10.1037/1528-3542.5.3.360
Cone, J., & Ferguson, M. J. (2015). He did what? The role of diagnosticity in revising implicit evaluations. Journal of Personality and Social Psychology, 108(1), 37–57. https://doi.org/10.1037/pspa0000014
Cooley, E., & Payne, B. K. (2016). Using groups to measure intergroup prejudice. Personality and Social Psychology Bulletin, 43(1), 46–59. https://doi.org/10.1177/0146167216675331
De Houwer, J., & Moors, A. (2010). Implicit measures: Similarities and differences. In B. Gawronski & B. K. Payne (Eds.), Handbook of implicit social cognition (pp. 176–196). Guilford Press.
Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56(1), 5–18. https://doi.org/10.1037//0022-3514.56.1.5
Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology, 5, 781. https://doi.org/10.3389/fpsyg.2014.00781
Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Harcourt Brace Jovanovich College Publishers.
Fazio, R. H. (2007). Attitudes as object–evaluation associations of varying strength. Social Cognition, 25(5), 603–637. https://doi.org/10.1521/soco.2007.25.5.603
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the automatic activation of attitudes. Journal of Personality and Social Psychology, 50(2), 229–238. https://doi.org/10.1037/0022-3514.50.2.229
Ferguson, M. J., & Cone, J. (2021). The role of intentionality in priming. Psychological Inquiry, 32(1), 38–40. https://doi.org/10.1080/1047840x.2021.1889839
Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. Journal of Abnormal and Social Psychology, 58(2), 203–210. https://doi.org/10.1037/h0041593
Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393
Gawronski, B. (2012). Back to the future of dissonance theory: Cognitive consistency as a core motive. Social Cognition, 30(6), 652–668. https://doi.org/10.1521/soco.2012.30.6.652
Gawronski, B., Ledgerwood, A., & Eastwick, P. W. (2022). Implicit bias ≠ bias on implicit measures. Psychological Inquiry, 33(3), 139–155. https://doi.org/10.1080/1047840x.2022.2106750
Gawronski, B., & Ye, Y. (2013). What drives priming effects in the Affect Misattribution Procedure? Personality and Social Psychology Bulletin, 40(1), 3–15. https://doi.org/10.1177/0146167213502548
Gawronski, B., & Ye, Y. (2015). Prevention of intention invention in the Affect Misattribution Procedure. Social Psychological and Personality Science, 6(1), 101–108. https://doi.org/10.1177/1948550614543029
Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102(1), 4–27. https://doi.org/10.1037//0033-295x.102.1.4
Greenwald, A. G., Klinger, M. R., & Schuh, E. S. (1995). Activation by marginally perceptible (“subliminal”) stimuli: Dissociation of unconscious from conscious cognition. Journal of Experimental Psychology: General, 124(1), 22–42. https://doi.org/10.1037/0096-3445.124.1.22
Greenwald, A. G., Smith, C. T., Sriram, N., Bar-Anan, Y., & Nosek, B. A. (2009). Implicit race attitudes predicted vote in the 2008 U.S. presidential election. Analyses of Social Issues and Public Policy, 9(1), 241–253. https://doi.org/10.1111/j.1530-2415.2009.01195.x
Hahn, A., & Gawronski, B. (2019). Facing one’s implicit biases: From awareness to acknowledgment. Journal of Personality and Social Psychology, 116(5), 769–794. https://doi.org/10.1037/pspi0000155
Hahn, A., Judd, C. M., Hirsh, H. K., & Blair, I. V. (2014). Awareness of implicit attitudes. Journal of Experimental Psychology: General, 143(3), 1369–1392. https://doi.org/10.1037/a0035028
Hughes, S., Cummins, J., & Hussey, I. (2023). Effects on the Affect Misattribution Procedure are strongly moderated by influence awareness. Behavior Research Methods, 55(4), 1558–1586. https://doi.org/10.3758/s13428-022-01879-4
Hussey, I., & Cummins, J. (2022). Evidence against effects on the Affect Misattribution Procedure being unaware: AMP effects involve construct-irrelevant individual differences. PsyArXiv. https://psyarxiv.com/8k94v
Hussey, I., & Hughes, S. (2020). Hidden invalidity among 15 commonly used measures in social and personality psychology. Advances in Methods and Practices in Psychological Science, 3(2), 166–184. https://doi.org/10.1177/2515245919882903
Jachimowicz, J. M., Duncan, S., Weber, E. U., & Johnson, E. J. (2019). When and why defaults influence decisions: A meta-analysis of default effects. Behavioural Public Policy, 3(2), 159–186. https://doi.org/10.1017/bpp.2018.43
Jarvis, W. B. G., & Petty, R. E. (1996). The need to evaluate. Journal of Personality and Social Psychology, 70(1), 172–194. https://doi.org/10.1037/0022-3514.70.1.172
Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339. https://doi.org/10.1126/science.1091721
Jones, M., & Sugden, R. (2001). Positive confirmation bias in the acquisition of information. Theory and Decision, 50(1), 59–99. https://doi.org/10.1023/a:1005296023424
Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697–720. https://doi.org/10.1037/0003-066x.58.9.697
Katz, J. H., Mann, T. C., Shen, X., Goncalo, J. A., & Ferguson, M. J. (2022). Implicit impressions of creative people: Creativity evaluation in a stigmatized domain. Organizational Behavior and Human Decision Processes, 169, 104116. https://doi.org/10.1016/j.obhdp.2021.104116
Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2), 270–280. https://doi.org/10.1177/2515245918771304
Kurdi, B., Hussey, I., Stahl, C., Hughes, S., Unkelbach, C., Ferguson, M. J., & Corneille, O. (2022a). Unaware attitude formation in the surveillance task? Revisiting the findings of Moran et al. (2021). International Review of Social Psychology, 35(1). https://doi.org/10.5334/irsp.546
Kurdi, B., Morehouse, K. N., & Dunham, Y. (2022b). How do explicit and implicit evaluations shift? A preregistered meta-analysis of the effects of co-occurrence and relational information. Journal of Personality and Social Psychology, 124(6), 1174–1202. https://doi.org/10.1037/pspa0000329
Kurdi, B., Lozano, S., & Banaji, M. R. (2017). Introducing the Open Affective Standardized Image Set (OASIS). Behavior Research Methods, 49(2), 457–470. https://doi.org/10.3758/s13428-016-0715-3
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual (Technical Report A-8). University of Florida, Gainesville.
Lee, K. M., Lindquist, K. A., & Payne, B. K. (2018). Constructing bias: Conceptualization breaks the link between implicit bias and fear of Black Americans. Emotion, 18(6), 855–871. https://doi.org/10.1037/emo0000347
Mann, T. C., Cone, J., Heggeseth, B., & Ferguson, M. J. (2019). Updating implicit impressions: New evidence on intentionality and the Affect Misattribution Procedure. Journal of Personality and Social Psychology, 116(3), 349–374. https://doi.org/10.1037/pspa0000146
Mann, T. C., & Ferguson, M. J. (2015). Can we undo our first impressions? The role of reinterpretation in reversing implicit evaluations. Journal of Personality and Social Psychology, 108(6), 823–849. https://doi.org/10.1037/pspa0000021
Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305–315. https://doi.org/10.1016/j.jml.2017.01.001
Melnikoff, D. E., & Bargh, J. A. (2018). The mythical number two. Trends in Cognitive Sciences, 22(4), 280–293. https://doi.org/10.1016/j.tics.2018.02.001
Melnikoff, D. E., & Kurdi, B. (2022). What implicit measures of bias can do. Psychological Inquiry, 33(3), 185–192. https://doi.org/10.1080/1047840x.2022.2106759
Moors, A. (2016). Automaticity: Componential, causal, and mechanistic explanations. Annual Review of Psychology, 67(1), 263–287. https://doi.org/10.1146/annurev-psych-122414-033550
Moors, A., & De Houwer, J. (2006). Automaticity: A theoretical and conceptual analysis. Psychological Bulletin, 132(2), 297–326. https://doi.org/10.1037/0033-2909.132.2.297
Morris, A., & Kurdi, B. (2023). Awareness of implicit attitudes: Large-scale investigations of mechanism and scope. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001464
Moutoussis, M., Fearon, P., El-Deredy, W., Dolan, R. J., & Friston, K. J. (2014). Bayesian inferences about the self (and others): A review. Consciousness and Cognition, 25(100), 67–76. https://doi.org/10.1016/j.concog.2014.01.009
Murphy, S. T., & Zajonc, R. B. (1993). Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures. Journal of Personality and Social Psychology, 64(5), 723–739. https://doi.org/10.1037/0022-3514.64.5.723
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. https://doi.org/10.1037/0033-295x.84.3.231
Nosek, B. A. (2005). Moderators of the relationship between implicit and explicit evaluation. Journal of Experimental Psychology: General, 134(4), 565–584. https://doi.org/10.1037/0096-3445.134.4.565
Oikawa, M., Aarts, H., & Oikawa, H. (2011). There is a fire burning in my heart: The role of causal attribution in affect transfer. Cognition & Emotion, 25(1), 156–163. https://doi.org/10.1080/02699931003680061
Öhman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130(3), 466–478. https://doi.org/10.1037/0096-3445.130.3.466
Payne, B. K., Brown-Iannuzzi, J., Burkley, M., Arbuckle, N. L., Cooley, E., Cameron, C. D., & Lundberg, K. B. (2013). Intention invention and the Affect Misattribution Procedure. Personality and Social Psychology Bulletin, 39(3), 375–386. https://doi.org/10.1177/0146167212475225
Payne, B. K., Cheng, C. M., Govorun, O., & Stewart, B. D. (2005). An inkblot for attitudes: Affect misattribution as implicit measurement. Journal of Personality and Social Psychology, 89(3), 277–293. https://doi.org/10.1037/0022-3514.89.3.277
Payne, B. K., Krosnick, J. A., Pasek, J., Lelkes, Y., Akhtar, O., & Tompson, T. (2010). Implicit and explicit prejudice in the 2008 American presidential election. Journal of Experimental Social Psychology, 46(2), 367–374. https://doi.org/10.1016/j.jesp.2009.11.001
Payne, B. K., & Lundberg, K. (2014). The Affect Misattribution Procedure: Ten years of evidence on reliability, validity, and mechanisms. Social and Personality Psychology Compass, 8(12), 672–686. https://doi.org/10.1111/spc3.12148
Perszyk, D. R., Lei, R. F., Bodenhausen, G. V., Richeson, J. A., & Waxman, S. R. (2019). Bias at the intersection of race and gender: Evidence from preschool-aged children. Developmental Science, 22(3), e12788. https://doi.org/10.1111/desc.12788
Rivers, A. M., & Hahn, A. (2019). What cognitive mechanisms do people reflect on when they predict IAT scores? Personality and Social Psychology Bulletin, 45(6), 878–892. https://doi.org/10.1177/0146167218799307
Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5(4), 296–320. https://doi.org/10.1207/s15327957pspr0504_2
Ruys, K. I., Aarts, H., Papies, E. K., Oikawa, M., & Oikawa, H. (2012). Perceiving an exclusive cause of affect prevents misattribution. Consciousness and Cognition, 21(2), 1009–1015. https://doi.org/10.1016/j.concog.2012.03.002
Schreiber, F., Neng, J. M. B., Heimlich, C., Witthöft, M., & Weck, F. (2014). Implicit affective evaluation bias in hypochondriasis: Findings from the Affect Misattribution Procedure. Journal of Anxiety Disorders, 28(7), 671–678. https://doi.org/10.1016/j.janxdis.2014.07.004
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84(2), 127–190. https://doi.org/10.1037/0033-295x.84.2.127
Theeuwes, J., & Belopolsky, A. V. (2012). Reward grabs the eye: Oculomotor capture by rewarding stimuli. Vision Research, 74, 80–85. https://doi.org/10.1016/j.visres.2012.07.024
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge University Press.
Tucker, R. P., Wingate, L. R., Burkley, M., & Wells, T. T. (2018). Implicit association with suicide as measured by the Suicide Affect Misattribution Procedure (S-AMP) predicts suicide ideation. Suicide and Life-Threatening Behavior, 48(6), 720–731. https://doi.org/10.1111/sltb.12392
Wentura, D. (2000). Dissociative affective and associative priming effects in the lexical decision task: Yes versus no responses to word targets reveal evaluative judgment tendencies. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(2), 456–469. https://doi.org/10.1037/0278-7393.26.2.456
Wentura, D., Müller, P., & Rothermund, K. (2014). Attentional capture by evaluative stimuli: Gain- and loss-connoting colors boost the additional-singleton effect. Psychonomic Bulletin & Review, 21(3), 701–707. https://doi.org/10.3758/s13423-013-0531-z
Williams, A., & Steele, J. R. (2019). Examining children’s implicit racial attitudes using exemplar and category-based measures. Child Development, 90(3), e322–e338. https://doi.org/10.1111/cdev.12991
Wood, S. N. (2017). Generalized additive models: An introduction with R (2nd ed.). Chapman and Hall/CRC.
Preregistrations, materials, data, and analysis scripts are available via the Open Science Framework (https://osf.io/wfksp/).
Cite this article
Kurdi, B., Melnikoff, D.E., Hannay, J.W. et al. Testing the automaticity features of the affect misattribution procedure: The roles of awareness and intentionality. Behav Res (2023). https://doi.org/10.3758/s13428-023-02291-2