
Testing the automaticity features of the affect misattribution procedure: The roles of awareness and intentionality

  • Original Manuscript
  • Published in: Behavior Research Methods

Abstract

The affect misattribution procedure (AMP) is a measure of implicit evaluations, designed to index the automatic retrieval of evaluative knowledge. The AMP effect consists in participants evaluating neutral target stimuli positively when preceded by positive primes and negatively when preceded by negative primes. After multiple prior tests of intentionality, Hughes et al. (Behav Res Methods 55(4):1558–1586, 2023) examined the role of awareness in the AMP and found that AMP effects were larger when participants indicated that their response was influenced by the prime than when they did not. Here we report seven experiments (six preregistered; N = 2350) in which we vary the methodological features of the AMP to better understand this awareness effect. In Experiments 1–4, we establish variability in the magnitude of the awareness effect in response to variations in the AMP procedure. By introducing further modifications to the AMP procedure, Experiments 5–7 suggest an alternative explanation of the awareness effect, namely that awareness can be the outcome, rather than the cause, of evaluative congruency between primes and responses: Awareness effects emerged even when awareness could not have contributed to AMP effects, including when participants judged influence awareness for third parties or primes were presented post hoc. Finally, increasing the evaluative strength of the primes increased participants’ tendency to misattribute AMP effects to the influence of target stimuli. Together, the present findings suggest that AMP effects can create awareness effects rather than vice versa and support the AMP’s construct validity as a measure of unintentional evaluations of which participants are also potentially unaware.


Data Availability

All data (including trial-level AMP data) are available via the Open Science Framework (https://osf.io/wfksp/).

Notes

  1. In this paper, we use “attitudes” to refer to latent evaluative knowledge and “evaluations” to refer to observable behaviors, such as self-reports on a Likert scale or binary choices on an AMP. We use “explicit evaluations” to refer to the relatively controlled retrieval of evaluative knowledge and “implicit evaluations” to refer to the relatively automatic retrieval of evaluative knowledge. At the level of measures, we distinguish between “direct measures” and “indirect measures.” We do not assume that direct measures capture exclusively controlled processes or that indirect measures capture exclusively automatic processes; however, we believe that given their relatively controlled nature, direct measures are appropriately characterized as indexing explicit evaluations, and given their relatively automatic nature, indirect measures are appropriately characterized as indexing implicit evaluations.

  2. Although these examples use an associative notation, for the purposes of this project we are agnostic regarding the representational format of attitudes, e.g., whether they are associative (The Yankees–Bad) or propositional (“The Yankees are terrible”).

  3. Throughout the paper, we follow the terminology established by Hughes et al. (2023) and use “awareness effect” to refer to the finding that AMP effects are larger on trials in which participants report being influenced by the primes than on trials in which they do not. However, in the paper we show empirically that (a) such awareness effects can be an outcome, rather than a cause, of AMP effects and (b) such awareness effects need not emerge from privileged first-person access but rather can also be subserved by inferential mechanisms.

  4. A precise breakdown of reasons for participant exclusions, including in cases where multiple criteria led to an exclusion decision, is available in the open data.

  5. An anonymous reviewer expressed concerns about the English proficiency of participants from non-English-speaking countries. To alleviate these concerns, we refit the main models from Experiments 1–5 to the data of participants from majority-English-speaking countries only and found no substantial deviation from the conclusions reported in the paper. (Participants in Experiments 6–7 were recruited exclusively from the United States.) The corresponding models are available in the open code.

  6. IAPS images were not originally intended for use in online research (Lang et al., 2008). However, given that these images were used as primes in the online experiments conducted by Hughes et al. (2023), we retained them for use in the present studies.

  7. For Experiments 2–4, due to a clerical error, the preregistration documents list the awareness variable, rather than the AMP response, as the dependent variable. We follow the procedure of Hughes et al. (2023) in treating the AMP response, rather than the awareness variable, as the dependent variable. However, the substantive conclusions would remain unchanged even if the awareness variable were treated as the dependent variable.

  8. Given the stepwise model comparison process, the degrees of freedom reported for the likelihood ratio test can differ from experiment to experiment even if the best-fitting model is the same. The reason for this is that the likelihood ratio test takes into account not only the complexity (number of parameters) of the final model but also the complexity (number of parameters) of the penultimate model. For example, the degrees of freedom for the likelihood ratio test will be larger when the penultimate model contains only one main effect rather than two main effects. We thank an anonymous reviewer for bringing this point to our attention.
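
    As a minimal illustration (hypothetical log-likelihoods and parameter counts, not values from the present experiments), the following Python sketch shows why the same final model can be associated with different degrees of freedom depending on the penultimate model in the stepwise comparison.

        from scipy.stats import chi2

        def likelihood_ratio_test(loglik_reduced, k_reduced, loglik_full, k_full):
            # df equals the difference in parameter counts between the two nested models
            statistic = 2.0 * (loglik_full - loglik_reduced)
            df = k_full - k_reduced
            return statistic, df, chi2.sf(statistic, df)

        # Same final model (7 parameters), different penultimate models:
        # one main effect (5 parameters) vs. two main effects (6 parameters)
        print(likelihood_ratio_test(-1510.0, 5, -1500.0, 7))  # df = 2
        print(likelihood_ratio_test(-1505.0, 6, -1500.0, 7))  # df = 1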

  9. In response to reviewer feedback, we conducted exploratory analyses to investigate whether including the maximal participant-level random-effects structure in each model leads to similar conclusions as the more parsimonious models reported in the main text.

    To test for model overparameterization, we followed the procedure recommended by Bates et al. (2018) and Matuschek et al. (2017) and conducted a principal component analysis on the random-effects covariance matrix of the full model, and then simplified the model if there were principal components accounting for 0% of the variance after rounding to three decimal places. We simplified the model by first disallowing correlation between the random effects; then, if there was still a degenerate component, we dropped the corresponding random slope.

    In Experiment 4, we were unable to follow this approach given that the main model was a generalized additive mixed-effects model. In Experiments 1–3 and 7, the statistical inferences remained unchanged. In Experiments 5 and 6, the best-fitting models remained the same, with minor deviations in planned comparisons. Specifically, unlike in the main models, the first-person awareness effect with respect to positive stimuli was reduced to non-significance in the maximal model of the Experiment 5 data, and the awareness effect was reduced to non-significance with respect to positive stimuli in the maximal model of the Experiment 6 data. Both of these results are indicative of a valence asymmetry effect observed both in the original experiments of Hughes et al. (2023) and in the present work. We return to this effect in the general discussion.
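
    For illustration only, the following Python sketch shows the logic of the principal component check described above, using a hypothetical random-effects covariance matrix (not one estimated from the present data): components that account for (rounded) 0% of the variance indicate an overparameterized random-effects structure.

        import numpy as np

        # Hypothetical by-participant random-effects covariance matrix
        # (intercept, slope for prime valence, slope for influence awareness)
        cov = np.array([
            [0.90, 0.10, 0.00],
            [0.10, 0.40, 0.00],
            [0.00, 0.00, 0.00],   # a slope with essentially no variance
        ])

        eigenvalues = np.linalg.eigvalsh(cov)[::-1]    # component variances, descending
        proportion = eigenvalues / eigenvalues.sum()   # proportion of variance per component

        # Components accounting for 0% of the variance (after rounding to three
        # decimal places) suggest simplifying the model, e.g., by removing
        # random-effect correlations or dropping the corresponding random slope.
        print(np.round(proportion, 3))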

  10. The sign flip between the two correlations is theoretically expected given that in the standard condition, pressing the space bar indicated influence awareness, and in the reversed condition, it indicated lack of influence awareness.

  11. An additional explanation of the substantial effect of response options relies on the idea of response sets, that is, responding to self-report items in a way that is determined by the structure rather than the content of the question. Specifically, work involving the evaluative priming procedure suggests that participants are more likely to respond “yes” than “no” if two psychological events (in that case, the prime and target; in the present case, the prime and the response) are evaluatively congruent with each other (e.g., Wentura, 2000). Indeed, in line with this possibility, participants in both conditions of Experiment 2 were more likely to choose the “influenced” over the “not influenced” response following congruent than following incongruent trials, independently of whether the default response was that they were not vs. were influenced by the prime (43% vs. 20% in the former condition and 65% vs. 38% in the latter). If influences of response set indeed contributed to responding on the prime influence measure in this and the remaining experiments, this would raise additional serious concerns about its internal validity. We thank an anonymous reviewer for raising this possibility.

  12. Another piece of evidence against Hughes et al.’s (2023) account emerges from a recent study by Morris and Kurdi (2023). In this study, each participant completed five AMPs randomly selected from a larger set of 16 adapted from Nosek (2005). Crucially, the attitude objects were diverse and included comparisons such as American vs. Canadian, cats vs. dogs, Coke vs. Pepsi, thin people vs. fat people, and Yankees vs. Diamondbacks. As such, there is no theoretical reason to expect these attitudes, as a set, to be correlated with each other; rather, any high intercorrelation may be seen as evidence for method-specific variance, perhaps of the kind suggested by Hughes et al. (2023), inflating the statistical relationship. However, in fact, the average correlation across different AMPs was r = −.001 and did not differ significantly from zero, t(119) = −0.30, p = .765, BF01 = 9.44, Cohen’s d = −0.03, or from the correlation among explicit evaluations toward the same comparisons, t(119) = −0.86, p = .394, BF01 = 6.21, Cohen’s d = −0.08.

    Hussey and Cummins (2022) disputed the validity of this interpretation, pointing out that if the absolute deviation from neutrality in AMP performance is used as the dependent variable in this analysis, an ICC of .26 is observed, suggesting shared variance across different AMPs. Even putting theoretical considerations about the accuracy of relying on absolute deviations aside, it is unclear how this intercorrelation provides evidence for Hughes et al.’s (2023) account given that scores on different AMPs may be correlated with each other for a host of different reasons having nothing to do with awareness of prime influence.

    In fact, the correlation between the participant-level mean absolute deviation from neutrality on the set of five AMPs and mean absolute deviation from neutrality on the parallel set of five explicit measures in this sample was r = .464, t(566) = 12.47, p < .001. Importantly, a mean correlation of r = .170 was also observed when the participant-level correlation between extremity in implicit evaluations and extremity in explicit evaluations was calculated using nonoverlapping subsets of attitude objects, thus suggesting that the relationship is not entirely due to variance shared between implicit and explicit evaluations of the same targets. This result, although not conclusive, raises the possibility that the correlation observed by Hussey and Cummins (2022) in absolute deviation across different AMPs is a result of an individual difference having to do with broader tendencies in evaluative behavior, such as the need to evaluate (Jarvis & Petty, 1996), rather than any specific aspect(s) of AMP performance. And, of course, this is only one of many potential hypotheses about why such intercorrelations might emerge.
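
    As a simplified illustration of the cross-AMP correlation analysis described above (simulated data; for simplicity, every simulated participant completes all 16 AMPs, unlike in the original study), the following Python sketch computes the 120 pairwise correlations between AMP topics and tests their mean against zero.

        import numpy as np
        from itertools import combinations
        from scipy.stats import ttest_1samp

        rng = np.random.default_rng(0)
        amp_scores = rng.normal(size=(500, 16))   # 500 simulated participants x 16 AMP topics

        # One correlation per pair of AMP topics: 16 choose 2 = 120 pairs
        pairwise_rs = [np.corrcoef(amp_scores[:, i], amp_scores[:, j])[0, 1]
                       for i, j in combinations(range(16), 2)]

        # Under the null hypothesis of no method-specific variance linking unrelated
        # attitude objects, the mean cross-AMP correlation should not differ from zero.
        print(np.mean(pairwise_rs), ttest_1samp(pairwise_rs, 0.0))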

  13. Depending on participants’ monitor settings (specifically, the refresh rates used), the actual duration of stimulus presentation may have differed from 16 ms. However, none of the conclusions of the present experiment depend on the specific length of exposure.
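
    As an illustration (example refresh rates only, not measurements from the present data): presentation software can display a stimulus only for a whole number of refresh cycles, so a nominal 16-ms presentation is quantized by the monitor’s refresh rate, as in the following Python sketch.

        def actual_duration_ms(requested_ms, refresh_hz):
            # A stimulus is displayed for a whole number of frames (at least one)
            frame_ms = 1000.0 / refresh_hz
            n_frames = max(1, round(requested_ms / frame_ms))
            return n_frames * frame_ms

        for hz in (60, 75, 120, 144):
            print(hz, round(actual_duration_ms(16, hz), 1))
        # 60 Hz -> 16.7 ms, 75 Hz -> 13.3 ms, 120 Hz -> 16.7 ms, 144 Hz -> 13.9 ms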

  14. What is more, based on Experiments 5–6 it seems that, on the AMP, the source of such awareness is more likely to be a configuration of externally observable events (e.g., the combination of seeing the éclair and an involuntary approach movement toward the bakery) rather than privileged introspective awareness only available to the self. As such, even if intentionality were incompatible with this type of genuine introspective awareness, as yet no compelling evidence has been provided that AMP effects are subject to this type of awareness.


Author information


Corresponding author

Correspondence to Benedek Kurdi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Preregistrations, materials, data, and analysis scripts are available via the Open Science Framework (https://osf.io/wfksp/).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kurdi, B., Melnikoff, D.E., Hannay, J.W. et al. Testing the automaticity features of the affect misattribution procedure: The roles of awareness and intentionality. Behav Res Methods (2023). https://doi.org/10.3758/s13428-023-02291-2


  • Accepted:

  • Published:

  • DOI: https://doi.org/10.3758/s13428-023-02291-2
