Looking into the mind’s eye: Directed and evaluated imagery vividness modulates imagery-perception congruency effects

Abstract

While most people have had the experience of seeing a representation in the mind’s eye, it is an open question whether we have control over the vividness of these representations. The present study explored this issue by using an imagery-perception interface whereby color imagery was used to prime congruent color targets in visual search. In Experiments 1a and 1b, participants were required to report the vividness of an imagined representation after generating it, and in Experiment 2, participants were directed to create an imagined representation with particular vividness prior to generating it. The analyses revealed that the magnitude of the imagery congruency effect increased with both reported and directed vividness. The findings here strongly support the notion that participants have metacognitive awareness of the mind’s eye and willful control over the vividness of its representations.

If you were asked to close your eyes and imagine the face of a loved one, you may have an experience much like seeing them in the real world. While the perceptual basis of mental representations has long been an issue of debate (Kosslyn, 1996; Tye, 1991), recent evidence has favored the notion that generating a representation in the mind’s eye utilizes mechanisms that overlap with those responsible for perceptual experience (Dijkstra et al., 2019; Pearson & Kosslyn, 2015). In other words, the representations we see in our mind and those we see in the world may not be so different from a neural point of view.

An important area of visual imagery inquiry concerns whether there are imagery ability differences across individuals. Indeed, this area was the grounds for one of the earliest psychology studies; in 1880, Sir Francis Galton investigated whether imagery vividness differed across social strata, showing that scientists had less vivid imaginations than those in general society. While differences in imagery vividness across individuals have been extensively investigated (Cui et al., 2007; Isaac & Marks, 1994; Kosslyn et al., 1984), a parallel issue that has been investigated far less concerns differences in the vividness of imagery across mental representations within the same individual (although see Pearson et al., 2011), with no studies to our knowledge investigating whether individuals have volitional control over the vividness of mental representations.

A challenge in visual imagery inquiry is that imagery itself is a subjective construct, and therefore it can be difficult to measure. In other words, if dispositional capacity is the quintessential question in the field of visual imagery, how to measure imagery when we cannot see it is the quintessential challenge visual imagery researchers struggle with. Recently, researchers have begun to make use of the imagery–perception interface in order to objectively measure visual imagery. That is, if imagery and perception are similar mechanistically, they should produce similar effects when perceptual representations are supplanted by imagined ones. Indeed, this tack has been successfully implemented to show that imagery can produce perception-like priming effects during binocular rivalry (Chang et al., 2013; Pearson et al., 2008), visual search (Cochrane et al., 2019; Cochrane et al., 2020; Moriya, 2018; Reinhart et al., 2015), and object discrimination (Cochrane & Milliken, 2019, 2020; Wantz et al., 2015). In a particularly notable example, we observed that if participants were instructed to imagine a color prior to a singleton search task, responding was faster when imagery and target color were congruent than when they were incongruent (Cochrane, Nwabuike et al., 2018; see also Cochrane, Zhu et al., 2018). In other words, color imagery appears to increase the relative saliency of congruent perceptual representations, not unlike the perceptual priming effects reported in the literature (Maljkovic & Nakayama, 1994, 2000; Wolfe et al., 2004).

Accordingly, the purpose of the present study was to use the imagery-perception interface to investigate whether participants can evaluate and volitionally control vividness in the mind’s eye. In particular, participants performed a color singleton search task like that of Cochrane, Nwabuike et al. (2018), where they had to imagine a color square that could be either congruent or incongruent with an upcoming target color. To measure whether participants could evaluate the vividness of imagined representations, they reported the vividness of the representation they generated following each imagery event (Experiments 1a and 1b). To measure whether participants could volitionally control the vividness of imagined representations, participants were prompted to imagine a representation with particular vividness prior to each imagery event (Experiment 2). If participants are capable of evaluating the mind’s eye, we should observe that the size of the imagery congruency effect increases with reported vividness (Experiments 1a and 1b). If participants have volitional control over the mind’s eye, we should find that the size of the imagery congruency effect varies in accord with the vividness prompt (Experiment 2).

Experiments 1a and 1b

Experiments 1a and 1b investigated whether participants were able to evaluate the vividness of representations in the mind’s eye. The experiments presented trials of color singleton search in pairs and had participants imagine a square in the color opposite to that of the previous target in the interval between trials (see Cochrane, Nwabuike et al., 2018). This procedure was used because it does not depend on explicit cues that may independently influence performance (Wolfe et al., 2004). Further, by having participants imagine the opposite color, we put the imagery congruency effect in opposition with the intertrial priming effect (Maljkovic & Nakayama, 1994, 2000), which deconfounds selection history from top-down strategic imagery influences (Awh et al., 2012). Following the trial pair sequence, participants reported the vividness of their visual imagery on a 4-point scale. We then evaluated whether the size of the imagery congruency effect varied as a function of vividness rating across experiments in which imagery-perception congruency was to be expected (Experiment 1a: 80% imagery congruent) or was not (Experiment 1b: 50% imagery congruent). The supposition underlying this congruency manipulation was that imagery would be generated more frequently when it was expected to match the target than when it was not.

Method

Participants

Thirty-two undergraduates in Experiment 1a (25 females, Mage = 19.5 years) and 16 undergraduates in Experiment 1b (11 females, Mage = 19.9 years) at McMaster University took part in exchange for course credit. All participants reported normal or corrected-to-normal vision and normal color vision. A power analysis was conducted to establish an appropriate sample size. The effect size of the imagery congruency effect (d = 1.1) was drawn from a comparable study in the literature (Cochrane, Nwabuike et al., 2018: Experiment 1a). This analysis revealed that a total sample size of 13 participants was sufficient to detect the imagery congruency effect with power greater than .95 for a .05 alpha criterion. We made the a priori decision to use a sample size of 32 participants in Experiment 1a (2,400 total observations) and 16 participants in Experiment 1b (2,400 total observations) because we assumed that it would be substantially more difficult to detect magnitude differences in the imagery congruency effect as a function of vividness rating than to detect the effect itself.
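The sample size estimate above can be approximated with a short calculation. The sketch below is an illustration, not the authors’ actual analysis (which presumably used a dedicated power tool); it applies the normal approximation for a two-tailed paired t test, and the exact noncentral-t computation inflates the estimate by roughly two participants, consistent with the 13 reported above.

```python
from math import ceil
from statistics import NormalDist

def paired_n_approx(d, alpha=0.05, power=0.95):
    """Normal-approximation sample size for a two-tailed paired t test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, ~1.96
    z_b = NormalDist().inv_cdf(power)          # power quantile, ~1.645
    return ceil(((z_a + z_b) / d) ** 2)

# d = 1.1 gives roughly 11 by this approximation; the exact t-based
# computation adds a small-sample correction, landing near 13.
n = paired_n_approx(1.1)
```

The function name is an assumption for illustration; an exact answer requires iterating over the noncentral t distribution, as power software does.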

Apparatus and stimuli

Stimuli were presented using PsychoPy (Version 1.82) on a BenQ 24-in. LED monitor that was connected to a Dell 300 computer. The search display contained one target square and four distractor squares that each subtended an approximate vertical and horizontal visual angle of 2.0°. Search items were displayed in red and green—the search target was the odd-colored square among four homogeneously colored distractor squares. All displays were presented on a black background. On each trial, the five squares were randomly assigned to five of eight possible locations positioned equidistant from each other on the contour of a centrally presented invisible circle. The distance from the center of the screen to each of these locations subtended an approximate visual angle of 5.0°. All squares contained a gap in either the left or right side that subtended an approximate visual angle of 0.5°. The fixation cross was presented in white and subtended a horizontal and vertical visual angle of 0.33°.

Procedure

On each search trial, the oddball colored target square, the four homogeneously colored distractor squares, and the central fixation cross were displayed on screen. The target and distractor squares were each randomly positioned at one of the eight locations that surrounded the central fixation cross. The target color could be presented in red and the distractors in green or vice versa. Participants were instructed to locate the odd-colored target square and indicate as quickly and accurately as possible whether it had a gap in the left or right side. The side of the gap was randomized on a trial-by-trial basis. Participants indicated a left gap by pressing the z key with their left index finger and a right gap by pressing the m key with their right index finger on a standard QWERTY keyboard. The search display remained on screen until a response was made.

Participants were seated approximately 60 cm from the computer screen. Search trials as described above were presented in pairs, and each trial pair sequence began with white text stating, “Press the space bar when you are ready to continue.” Once ready, participants initiated the trial pair sequence by pressing the space bar with their thumb. Following this response, the central fixation cross was displayed for 500 ms followed by the first search trial. Once participants performed the search task of the first trial, a blank screen with the central fixation cross was displayed for 2,000 ms. Participants were instructed at the experiment outset to imagine a square in a color opposite to that of the target in the first search display and maintain this representation until the second search display was presented. For example, if the target was a red oddball among green distractors in the first search display, participants were to imagine a green square during the interval prior to the second search display. The target color was randomized on a trial-by-trial basis such that it was congruent with color imagery 80% of the time in Experiment 1a and 50% of the time in Experiment 1b. Once participants performed the search task of the second trial, participants were prompted to rate the vividness of the visual imagery they generated. Specifically, white text that stated “rate vividness” and the following rating scale were displayed on-screen: 1 = no imagery, 2 = low vividness, 3 = moderate vividness, 4 = high vividness. Participants reported their vividness by pressing the number corresponding to the above rating scale. The trial pair sequence is depicted in Fig. 1.

Fig. 1

The top diagram depicts a congruent trial pair sequence when imagery vividness was rated (Experiments 1a and 1b). The bottom diagram depicts an incongruent trial pair sequence when imagery vividness was cued (Experiment 2)

The practice session consisted of 15 practice trial pair sequences across three separate training phases (five trial pair sequences per phase). In the first phase, participants simply performed the paired search tasks. Here, the instructions on how participants ought to perform the search task were administered. In the second phase, participants implemented the imagery instruction between the trial pairs. At this time, participants were instructed that they were to imagine the colored square and to represent that square in their mind (as opposed to spatially localizing their imagery). Further, participants were informed that color imagery would not always match the upcoming target. In the third phase, participants performed trial pair sequences that were identical to the experimental trials. Here, the instructions on how participants ought to report their imagery vividness were administered. Specifically, participants were instructed that the no-imagery rating constituted the situation when “they did not generate any imagery,” the low-vividness rating constituted the situation when their imagery was “vague and dim,” the moderate-vividness rating constituted the situation when their imagery was “reasonably clear and vivid,” and the high-vividness rating constituted the situation when their “imagery was clear and vivid like that of normal vision.” Participants were also informed that their ratings should be implemented in a relativistic manner to reflect their individual capability.

Following this practice session, participants performed the experimental trials. The participants of Experiment 1a performed 75 trial pair sequences (150 total search trials), and the participants of Experiment 1b performed 150 trial pair sequences (300 total search trials). At the end of the experiment, participants provided a percentage estimate of the frequency with which they implemented the imagery instruction across the experimental trials.

Results

Correct response times (RTs) and error percentages for the second search trial in a pair were the primary dependent variables. Correct RTs less than 200 ms and greater than 2,000 ms were excluded from analysis, resulting in the removal of 2.9% of observations in Experiment 1a and 1.7% of observations in Experiment 1b. Correct RTs were further excluded from analysis if they were identified as outliers by the nonrecursive moving outlier elimination procedure of Van Selst and Jolicoeur (1994), which led to the removal of an additional 2.7% of observations in Experiment 1a and 2.6% of observations in Experiment 1b. Correct RTs and error percentages for the no-imagery and low-vividness ratings were combined and constituted the low category, and the moderate-vividness and high-vividness ratings were combined and constituted the high category. These low and high categories comprised 34.7% (no imagery: 9.7%; low: 25.0%) and 65.3% (moderate: 35.6%; high: 29.7%) of observations in Experiment 1a, and 41.0% (no imagery: 15.1%; low: 25.9%) and 59.0% (moderate: 37.0%; high: 22.0%) observations in Experiment 1b, respectively. The vividness ratings were categorized this way to reduce the number of participants excluded from analyses due to empty cells. Even so, 10 participants were excluded from the analyses of Experiment 1a, and four participants were excluded from the analyses of Experiment 1b. In other words, these participants were excluded for implementing a subjective rating strategy that did not provide observations in both the low and high categories for each level of the imagery color condition. Means were computed from the remaining participants, and the correct RTs and corresponding error percentages were submitted to within-subject ANOVAs that treated imagery color (congruent/incongruent) and vividness rating (high/low) as factors. An alpha criterion of .05 was used to determine statistical significance. 
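The RT cleaning steps described above can be sketched as follows. The second stage here is a simplified fixed-criterion stand-in for the Van Selst and Jolicoeur (1994) procedure, which actually scales the SD criterion with the number of observations per cell; the function name and the 2.5-SD value are illustrative assumptions.

```python
import statistics

def trim_rts(rts, lo=200, hi=2000, sd_criterion=2.5):
    """Two-stage RT cleaning: fixed bounds, then an SD-based outlier pass.

    Simplified stand-in for Van Selst & Jolicoeur's (1994) nonrecursive
    procedure, which adjusts the criterion for the cell's sample size.
    """
    # Stage 1: drop anticipations (< lo ms) and lapses (> hi ms).
    in_bounds = [rt for rt in rts if lo <= rt <= hi]
    if len(in_bounds) < 2:
        return in_bounds
    # Stage 2: drop RTs beyond the criterion distance from the cell mean.
    m = statistics.mean(in_bounds)
    sd = statistics.stdev(in_bounds)
    return [rt for rt in in_bounds if abs(rt - m) <= sd_criterion * sd]
```

Applied per participant and condition cell, this mirrors the order of exclusions reported above: bounds first, then the outlier elimination on the surviving observations.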
The mean percentage estimates of imagery use were 77.0% in Experiment 1a and 64.5% in Experiment 1b. RTs are depicted in Fig. 2, and error percentages are depicted in Table 1 (see Note 1).

Fig. 2

Mean RTs of the imagery congruent and incongruent color conditions across the vividness ratings of Experiments 1a and 1b. The error bars represent the standard error of the mean corrected to remove between-subject variability (Cousineau, 2005; Morey, 2008)
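The within-subject error bars referenced in the caption can be computed with the Cousineau (2005) normalization plus Morey’s (2008) bias correction. A minimal sketch, where the function name and data layout (one list of condition means per subject) are assumptions for illustration:

```python
import statistics
from math import sqrt

def within_subject_sem(data):
    """Corrected within-subject SEMs (Cousineau, 2005; Morey, 2008).

    `data` is a list of per-subject lists, one mean RT per condition.
    Returns one corrected SEM per condition.
    """
    n_subj, n_cond = len(data), len(data[0])
    grand = statistics.mean(v for row in data for v in row)
    # Remove between-subject variability: re-center each subject's scores
    # on the grand mean before computing condition SEMs.
    normed = [[v - statistics.mean(row) + grand for v in row] for row in data]
    correction = sqrt(n_cond / (n_cond - 1))  # Morey's bias correction
    return [statistics.stdev(col) / sqrt(n_subj) * correction
            for col in zip(*normed)]
```

After normalization, only the subject-by-condition interaction variance remains, which is the error term relevant to within-subject comparisons.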

Table 1 Mean error percentages (%) across experiments. The difference column reflects the difference in error percentages across the congruent and incongruent conditions

Experiment 1a

The analysis of RTs revealed a significant interaction of imagery color and vividness rating, F(1, 21) = 11.6, p = .003, ηp2 = .36. The interaction was examined further by performing planned paired t tests that evaluated the effect of imagery color for the high-vividness and low-vividness ratings separately. For the high-vividness rating condition, there was a significant effect of imagery color, t(21) = 4.06, p < .001, d = .67, reflecting faster responses when the target and imagery colors were congruent (753 ms) than incongruent (932 ms; see Note 2). For the low-vividness rating condition, the effect of imagery color was not significant (p = .15), indicating that there was no difference in response speed when the target and imagery colors were congruent (851 ms) and incongruent (896 ms). There were no significant effects in the analysis of error percentages (all Fs < 1).

Experiment 1b

The analysis of RTs revealed a significant interaction of imagery color and vividness rating, F(1, 11) = 4.97, p = .048, ηp2 = .31. The interaction was examined further by performing planned paired t tests that evaluated the effect of imagery color for the high-vividness and low-vividness ratings separately. For the high-vividness ratings, there was a significant effect of imagery color, t(11) = 2.43, p = .033, d = .24, reflecting faster responses when the target color was congruent (733 ms) than incongruent (785 ms) with color imagery. For the low-vividness ratings, the effect of imagery color was not significant (p = .51); RTs did not differ when the target color was congruent (788 ms) versus incongruent (768 ms) with color imagery. There were no significant effects in the analysis of error percentages (all Fs < 2), although the main effect of imagery color approached significance (p = .065).

Comparison of Experiments 1a and 1b

Mean RTs were submitted to an ANOVA that treated imagery color (congruent/incongruent) as a within-subjects factor and experiment (1a/1b) as a between-subjects factor. This analysis revealed a significant interaction of imagery color and experiment, F(1, 32) = 4.69, p = .039, ηp2 = .13, indicating that the magnitude of the imagery congruency effect was larger in Experiment 1a than in 1b.

Discussion

Across two experiments, high-vividness representations in the mind’s eye produced larger imagery congruency effects than low-vividness representations. Further, this modulation of the imagery congruency effect occurred both when imagery was likely to be congruent with the upcoming target (Experiment 1a) and when it was not (Experiment 1b). Given that the imagery congruency effect was larger in Experiment 1a than in 1b, it appears that increased congruency led to increased imagery use and/or conferred an additional performance benefit in its own right. That is, while the postexperiment estimates and distribution of imagery vividness ratings support the conclusion that the participants of Experiment 1b were less likely to imagine than those in Experiment 1a, it is possible that an expectancy that was nonvisual in nature influenced performance as well (see Cochrane & Pratt, 2020; Thomson et al., 2013). These imagery congruency effects were not modulated by ratings of imagery effort, which supports the view that the effects were indeed due to imagery vividness (see online supplemental material). Overall, these findings support the notion that participants were able to accurately evaluate the vividness of representations in the mind’s eye.

Experiment 2

The results of Experiment 1 demonstrate that participants were able to evaluate the vividness of representations in the mind’s eye. Yet an open question is whether participants have volitional control over the vividness of these representations. To examine this issue, a similar experimental procedure to that of Experiment 1a was used, but instead of having participants rate the vividness of imagery at the end of each trial pair sequence, they were cued to generate imagery of a particular vividness at the beginning of each sequence. If participants are able to volitionally control the vividness of representations in the mind’s eye, the size of the imagery congruency effect should increase in accord with the intensity of the vividness cue. Experiment 2 further controls for possible effects in Experiment 1 due to evaluating performance after the fact. That is, while vividness ratings could have reflected how well participants thought they performed when administered after the search task, this could not be the case when vividness was cued in advance.

Method

Participants

Thirty-two undergraduates at McMaster University (23 females, Mage = 18.7 years) participated in exchange for course credit. All participants reported normal or corrected-to-normal vision and normal color vision. The sample size was selected based on Experiment 1.

Apparatus and stimuli

Apparatus and stimuli were identical to those used in Experiment 1.

Procedure

The procedure of Experiment 2 was identical to Experiment 1a, with the exception that participants no longer provided vividness ratings at the end of each trial pair sequence. Instead, white text that stated “no imagery,” “low vividness,” “moderate vividness,” or “high vividness” was displayed at the beginning of the trial pair sequence. These vividness cues were randomized such that each cue was equally probable. At the outset of the experimental session, participants were instructed that they were to generate color imagery corresponding to the vividness cue. In particular, if the high-vividness cue was displayed, participants were to imagine a colored square that was “clear and vivid like that of normal vision”; if the moderate-vividness cue was displayed, participants were to imagine a colored square that was “reasonably clear and vivid”; if the low-vividness cue was displayed, they were to imagine a colored square that was “vague and dim”; and if the no-imagery cue was displayed, they were to “not generate any imagery.” Participants pressed the space bar with their thumb to indicate that they understood the vividness cue, which in turn began the search trial pair sequence. As in Experiment 1a, the target color was congruent with color imagery 80% of the time, and incongruent 20% of the time. The participants performed 200 experimental trial pair sequences (400 total search trials). The trial pair sequence is depicted in Fig. 1.

Results

Correct RTs and error percentages for the second search trial in a pair were the primary dependent variables. Correct RTs less than 200 ms and greater than 2,000 ms were excluded from analysis, resulting in the removal of 1.6% of observations. Correct RTs were further excluded from analysis if they were identified as outliers by the nonrecursive moving outlier elimination procedure of Van Selst and Jolicoeur (1994), which led to the removal of an additional 2.9% of observations. The remaining correct RTs and corresponding error percentages were submitted to within-subjects ANOVAs that treated imagery color (congruent/incongruent) and vividness cue (high/moderate/low/no) as factors. An alpha criterion of .05 was used to determine statistical significance. The mean percentage estimate of imagery use was 74.4%. RTs are depicted in Fig. 3, and error percentages are depicted in Table 1.

Fig. 3

Mean RTs of the imagery congruent and incongruent color conditions for each of the vividness cues of Experiment 2. The error bars represent the standard error of the mean corrected to remove between-subject variability (Cousineau, 2005; Morey, 2008)

The analysis of RTs revealed a significant interaction of imagery color and vividness cue, F(3, 93) = 11.2, p < .001, ηp2 = .27. The interaction was examined further by performing planned paired t tests that evaluated the effect of imagery color for each vividness cue separately. The analysis of the no-imagery cue revealed a significant effect of imagery color, t(31) = 3.85, p < .001, d = .27, reflecting faster responses when the target color was incongruent (750 ms) than congruent (794 ms) with color imagery—this result constitutes an intertrial priming effect (Maljkovic & Nakayama, 1994, 2000). The analysis of the low-vividness cue revealed no effect of imagery color (p = .38), reflecting similar RTs when the target color was congruent (806 ms) and incongruent (825 ms) with color imagery. The analysis of the moderate-vividness cue revealed an effect of imagery color that was not significant, t(31) = 1.91, p = .065, d = .29, although there was a trend toward faster responses when the target color was congruent (788 ms) than incongruent (851 ms) with color imagery. The analysis of the high-vividness cue revealed a significant effect of imagery color, t(31) = 3.37, p = .002, d = .55, reflecting faster responses when the target color was congruent (754 ms) than incongruent (878 ms) with color imagery. There were no significant effects in the analysis of error percentages (all Fs < 2).

Discussion

The present experiment revealed that the size of the imagery congruency effect increased in an approximately linear manner with the intensity of the vividness cue. In particular, there was a highly significant imagery congruency effect when high vividness was cued, a marginal imagery congruency effect when moderate vividness was cued, no imagery congruency effect when low vividness was cued, and a significant intertrial priming effect (i.e., a pattern of results in the opposite direction of the imagery congruency effect) when no imagery was cued. The present result strongly supports the notion that participants have volitional control over the vividness of representations in the mind’s eye.

General discussion

In the present study, we examined whether participants can evaluate and volitionally control visual imagery vividness. We made use of a method previously shown to produce more efficient search when color imagery is congruent, rather than incongruent, with the color of a subsequent search target. The key research question was whether the vividness of subjectively reported color imagery (Experiments 1a and 1b) and the cued vividness of color imagery (Experiment 2) would modulate the magnitude of the imagery congruency effect. Experiments 1a and 1b revealed that the magnitude of the imagery congruency effect increased with reported imagery vividness, and Experiment 2 revealed that it increased with cued imagery vividness. The findings of these experiments converge on the notion that individuals can evaluate and volitionally control the vividness of representations in the mind’s eye.

A potential limitation of the present study is that, while there was a tight correspondence between imagery vividness and the imagery-perception congruency effect, it is not entirely clear whether imagery vividness was exclusively responsible for it. That is, while this pattern of results was not revealed in the present study, we have observed that vividness ratings can, at times, reflect more than imagery vividness alone. For example, participants sometimes report that their vividness was poor when an error was made in the search task (see Cochrane et al., 2020). While this particular behavior could not account for the findings of Experiment 2, it is possible that some other factor (e.g., differing states of attentiveness) played a role in the observed findings. While the behavioral nature of our study does not permit us to tease apart this issue, recent work by Dijkstra et al. (2017) nicely complements the supposition espoused here. Using a similar imagery vividness rating procedure, they revealed that recruitment of perceptual brain regions increased with increased vividness ratings. Aligned with Dijkstra et al., we suspect that the vividness ratings of the present study (mostly) corresponded to the extent to which imagery recruited the brain regions responsible for perceptual experience.

The present findings reveal important insight into the processes underlying attentional guidance. Historically, top-down processes have been reported to have a weak influence on visual search (Theeuwes, 2013), yet there certainly must be a system that allows us to find things in our visual environment that we want to find. The missing ingredient that explains why these top-down effects have been famously weak is that efficient attentional guidance likely depends on the generation of a representation that is visual in nature. Certainly, if you were asked to spot your own car in a crowded parking lot, while it can be difficult at times, it is markedly less difficult than spotting a car you are unfamiliar with. By extracting visual features from memory, you can form a template that guides attention to congruent representations in the external world. We suspect that it is the maintenance of these representations that has a particularly potent influence on attentional guidance, which is why imagery-perception congruency effects have a consistently robust influence on search relative to semantic and pictorial representations (Cochrane, Nwabuike et al., 2018; Cochrane et al., 2019; Cochrane et al., 2020). Indeed, as the conceptually similar working memory findings demonstrate, maintained representations guide attention even when irrelevant to a task (Soto et al., 2005; Soto et al., 2006). The insight the present study adds concerns the extent to which this guidance system depends on visual representations: its capacity to guide increases with increased visual quality.

To paraphrase the renowned philosopher George Berkeley, physical objects in the world do not exist independently of the minds that perceive them; an item only truly exists as long as it is observed (Berkeley et al., 2016). While it is not advisable to take Berkeley at his most literal, the essence of this notion is something that is often lost on us as we go about our day-to-day lives—our perception of the world does not exist in the world itself, but is the product of our internal mechanisms. The reason the pen in front of me appears blue to me and not to someone who is color blind is not a difference in the physical properties of the stimulus, but a difference in the mechanisms governing our perception. What modern imagery research has shown is that, while an external stimulus is sufficient to produce a perceptual experience, it is not necessary. That is, we can flexibly mold the memories of our previous experiences in our mind’s eye in order to produce a phenomenological experience much like seeing objects in the real world. In addition, as the present study reveals, much as viewing an external stimulus in the dark degrades its perceptual quality, we too can turn the light off in our mind’s eye (so to speak) to dim our internal representations.

Notes

  1. In Experiment 1a, participants rated their effort following each vividness rating. The corresponding analyses revealed that reported effort did not influence the size of the imagery congruency effect. All participants of this experiment also completed the Vividness of Visual Imagery Questionnaire–2 (VVIQ-2; Marks, 1995), which purports to measure dispositional imagery ability. The corresponding analysis revealed that the VVIQ-2 score did not correlate with the size of the imagery congruency effect, in keeping with some findings in the literature (Cochrane et al., 2019; Wantz et al., 2015) and contrasting with others (Cui et al., 2007; Pearson et al., 2011). (See supplementary online materials for the details of these analyses.)

  2. All Cohen’s d values reported in the manuscript were based on the aggregate measures of performance in each condition for each participant (i.e., the mean RTs and error percentages). Cohen’s d was computed by taking the difference between the two condition means and dividing it by their pooled standard deviation.
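The computation described in this note can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ analysis code, and it assumes equal numbers of participants contribute to each condition mean (as in these within-subject designs):

```python
import statistics
from math import sqrt

def cohens_d(cond_a, cond_b):
    """Cohen's d from per-participant condition means, pooled-SD denominator."""
    m_a, m_b = statistics.mean(cond_a), statistics.mean(cond_b)
    sd_a, sd_b = statistics.stdev(cond_a), statistics.stdev(cond_b)
    # With equal n per condition, the pooled SD reduces to the root
    # mean of the two variances.
    pooled = sqrt((sd_a ** 2 + sd_b ** 2) / 2)
    return (m_a - m_b) / pooled
```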


Acknowledgements

Financial support for this study was provided in part by a Natural Sciences and Engineering Research Council of Canada Discovery Grant (2019-07021) awarded to Bruce Milliken. The funding agreement ensured the authors’ independence in designing the study, interpreting the data, and writing and publishing the report. The authors report no conflict of interest.

Open practices statement

The experiments reported in this article were not preregistered. The data for all experiments and other supplemental materials are publicly available at the Center for Open Science website (osf.io/h3c8j).

Corresponding author

Correspondence to Brett A. Cochrane.



Cite this article

Cochrane, B.A., Ng, V., Khosla, A. et al. Looking into the mind’s eye: Directed and evaluated imagery vividness modulates imagery-perception congruency effects. Psychon Bull Rev 28, 862–869 (2021). https://doi.org/10.3758/s13423-020-01868-8


Keywords

  • Imagery
  • Visual search
  • Attention capture
  • Metacognition