Retinotopy of visual masking and non-retinotopic perception during masking
Due to movements of the observer and of objects in the environment, retinotopic representations are highly unstable under ecological viewing conditions. The phenomenal stability of our perception suggests that retinotopic representations are transformed into non-retinotopic representations. It remains to be shown, however, which visual processes operate on retinotopic representations and which on non-retinotopic representations. Visual masking refers to the reduced visibility of one stimulus, called the target, due to the presence of a second stimulus, called the mask. Masking has been used extensively to study the dynamic aspects of visual perception. Previous studies using the Saccadic Stimulus Presentation Paradigm (SSPP) suggested both retinotopic and non-retinotopic bases for visual masking. To understand how the visual system deals with retinotopic changes induced by moving targets, we investigated the retinotopy of visual masking, and the fate of masked targets, under conditions that do not involve eye movements. We developed a series of experiments based on a radial Ternus-Pikler display. In this paradigm, the perceived Ternus-Pikler motion serves as a non-retinotopic reference frame to pit the retinotopic against the non-retinotopic visual-masking hypothesis. Our results indicate that both metacontrast and structure masking are retinotopic. We also show that, under conditions that allow observers to effectively read out non-retinotopic feature attribution, the target becomes visible at a destination different from its retinotopic/spatiotopic location. We discuss the implications of our findings within the context of ecological vision and dynamic form perception.
Keywords: Visual masking · Motion perception · Reference frames
Retinotopic and non-retinotopic representations in human vision
The optics of the eye map neighboring points in the environment to neighboring photoreceptors in the retinae, and these neighborhood relations, known as retinotopic organization, are preserved in early visual cortical areas. Under normal viewing conditions, due to the movements of the observer's body, head, and eyes, and to the movements of objects, the stimuli impinging on retinotopic representations are highly dynamic and unstable. Thus, understanding ecological vision requires an understanding of how visual processes operate under these dynamic conditions. To explain the phenomenal stability of our environment, it is often postulated that the brain constructs non-retinotopic representations wherein ego-centric representations (i.e., based on the observer, such as retinotopic representations) are transformed into exo-centric representations (i.e., based outside of the observer, such as spatiotopic representations). However, determining whether a given visual process operates in retinotopic or non-retinotopic representations, and which visual processes operate in non-retinotopic representations, remains one of the fundamental challenges in understanding ecological vision.
Retinotopy of visual masking assessed with the Saccadic Stimulus Presentation Paradigm
Saccadic eye movements constitute a major source of retinotopic instability. However, during these eye movements the world appears phenomenally stable, suggesting that retinotopic shifts caused by saccades are either dismissed or compensated for by the visual system. Theories of dismissal propose that very little information is kept from one saccade to the next and that vision starts tabula rasa after each saccade. Theories of complete compensation propose that all information is remapped across the saccade by taking into account the global shift caused by the saccade. Theories that take intermediate positions between these two extremes have also been proposed (Bridgeman, van der Heijden, & Velichkovsky, 1994). In general, compensation theories rely on three mechanisms: (1) before the initiation of the saccade, retinotopic information is stored in memory; (2) during and after the saccade, retinotopic information is suppressed to prevent inappropriate integration of pre- and post-saccadic images; (3) after the saccade, the new image is integrated with the contents of the memory by taking into account the retinotopic shift caused by the saccade. Because saccadic shifts generally take a few tens of milliseconds, sensory (iconic) memory¹ and backward masking² have been viewed as the major candidates to perform the memorization and suppression tasks, respectively.
In the SSPP, the observer is asked to make a saccade from one fixation point to a second one. Target stimuli (five letters) are presented briefly before the saccade, followed by a mask stimulus (either a nonoverlapping ring, as in Fig. 1, or an overlapping pattern) presented after the saccade. As depicted in Fig. 1, the mask stimulus surrounds (or covers) different letters according to retinotopic and non-retinotopic (spatiotopic) coordinates. By measuring which of the two letters is suppressed from perception, one can infer whether the mask operates in retinotopic or non-retinotopic coordinates. Davidson et al. (1973) reported that the mask suppressed the letter that shared its retinotopic coordinates but appeared to occupy the same position as the letter that shared its spatiotopic coordinates. They proposed that there is a retinotopic visible persistence at which trans-saccadic masking occurs and a spatiotopic sensory memory at which trans-saccadic integration occurs. In a follow-up study, McRae et al. (1987) reported not only retinotopic but also spatiotopic masking. They suggested that the transition from retinotopic to spatiotopic representations takes time, and that the reason Davidson et al. (1973) did not find evidence for spatiotopic masking could be the shorter Inter-Stimulus Interval (ISI) used by Davidson et al. (ca. 80 ms) compared with the ISI used in McRae et al.'s study (153 ms). That masking is retinotopic at short ISIs was also confirmed by Irwin, Brown, and Sun (1988). These authors also presented evidence for a spatiotopic memory integrating information across saccades. However, their data suggested that the spatiotopic integration of information was rather abstract, depending on position and identity information rather than on a detailed, image-like fusion of trans-saccadic stimuli (van der Heijden, Bridgeman, & Mewhort, 1986).
The observation of shifts of neuronal receptive fields in the direction of intended saccades (Duhamel, Colby, & Goldberg, 1992) generated renewed interest in the problem of visual stability across saccades from this perspective (Melcher & Colby, 2008; Wurtz, 2008; Cavanagh, Hunt, Afraz, & Rolfs, 2010; Melcher, 2011). This "remapping of receptive fields" has been associated with shifts in the perceived positions of peri-saccadically presented targets (Ross, Morrone, Goldberg, & Burr, 2001). De Pisapia, Kaunitz, and Melcher (2010) suggested that these shifts, in turn, can help a target stimulus escape from masking. Moreover, they presented evidence for spatiotopic masking at ISIs shorter (48 ms) than those reported in previous studies. Hunt and Cavanagh (2011) presented a brief target before the saccade, followed by a long-duration mask that turned on before the saccade and remained on after the saccade until the subject responded. With this paradigm, Hunt and Cavanagh (2011) showed masking when the mask was presented at the post-saccadic retinotopic coordinates of the location where the target had been presented. Taken together, these studies paint a complex picture of the retinotopy of masking. Part of the reason for this complexity may be that the studies used different types of targets and masks, and widely different parameters. Masking is not a unitary phenomenon (Bachmann, 1994; Breitmeyer & Öğmen, 2006), and the differences between the studies may stem from differences in the types of masking functions and mechanisms evoked. Notwithstanding this issue, these studies show that the SSPP provides a powerful method for exploring the retinotopy of visual masking across saccades. However, the SSPP involves eye-movement-related processes, such as saccadic suppression and efference copy, and cannot be employed to study the retinotopy of visual masking independently of eye movements.
Retinotopy of visual masking in the absence of eye movements
In a recent study, Lin and He (2012) investigated the retinotopy of masking in the absence of eye movements by using a modified version of the object-specific reviewing paradigm (Kahneman, Treisman, & Gibbs, 1992). We use an alternative method that directly pits retinotopic predictions against non-retinotopic predictions. We first introduce our approach; in the Discussion, we compare our methods and findings to those of Lin and He (2012).
Depending upon the ISI between the two frames of the Ternus-Pikler display, two different types of motion are perceived (Pantle & Picciano, 1976). For short ISIs (e.g., 0 ms), observers perceive element motion, in which the leftmost element in Frame 1 is perceived to move to the position of the rightmost element in Frame 2 (Fig. 2b). In this case, no motion is perceived for the other two elements. For long ISIs (e.g., 100 ms), observers perceive group motion, in which all elements are perceived to move together as a group (Fig. 2c). To study non-retinotopic feature attribution, a simple feature called a vernier offset is inserted into the central element of the first frame (Fig. 2d). A purely retinotopic hypothesis predicts that features of the central element in Frame 1 should be integrated into the leftmost element of Frame 2 for all ISIs within the window of temporal integration. Hence, performance should be well above chance level when subjects are asked to report the direction of the probe vernier perceived in the leftmost element in Frame 2, and near chance for the other elements in Frame 2. However, it was shown that performance depends on the ISI. When group motion is established between the Ternus-Pikler elements (ISI = 100 ms), performance is well above chance when subjects are asked to report the perceived direction of the vernier offset in the central element in Frame 2, and near chance for the other elements (Fig. 2d). On the other hand, in the case of element motion (ISI = 0 ms), performance is higher when subjects report the vernier offset perceived in the leftmost element in Frame 2. The illusory attribution of the vernier offset also depends critically on the elicitation of a motion percept. If the leftmost line of the first frame and the rightmost line of the second frame are omitted (Fig. 2e), no apparent motion is induced, since the remaining elements spatially overlap.
In this control display, the percentage of responses in agreement with the probe vernier is high only for the element labeled 1 and is at chance level for element 2, at both ISIs. These results indicate that feature attribution between elements of consecutive Ternus-Pikler display frames is governed by motion-induced grouping; i.e., it follows the dashed arrows in Fig. 2b and c. In other words, the perceived motion of the Ternus-Pikler elements serves as a non-retinotopic reference frame for feature attribution between elements of the Ternus-Pikler display frames.
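The motion-dependent correspondence just described can be summarized as a simple mapping. The sketch below is our own illustration (not code from the original study); the 0-based slot indices are our convention, with Frame 1 occupying slots 0-2 and the horizontally shifted Frame 2 occupying slots 1-3.

```python
# Illustrative sketch of feature attribution in a 3-element Ternus-Pikler
# display. The mapping itself comes from the text; the indexing is ours:
# Frame 1 occupies screen slots 0-2, Frame 2 occupies slots 1-3.

def feature_mapping(group_motion: bool) -> dict[int, int]:
    """Map each Frame-1 element (by slot) to the Frame-2 element
    predicted to inherit its features."""
    if group_motion:
        # Long ISI: all elements are seen to shift together by one slot.
        return {0: 1, 1: 2, 2: 3}
    # Short ISI (element motion): the leftmost element appears to jump to
    # the rightmost slot; the other two stay at their retinotopic slots.
    return {0: 3, 1: 1, 2: 2}

# A vernier inserted into the central Frame-1 element (slot 1):
assert feature_mapping(group_motion=True)[1] == 2   # central Frame-2 element
assert feature_mapping(group_motion=False)[1] == 1  # retinotopic slot
```

In group motion the central element's vernier is thus read out on the central element of Frame 2 (slot 2), while in element motion it stays at the retinotopically matching slot, as the data above show.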
In the present study, we use a similar Ternus-Pikler paradigm to probe retinotopic and non-retinotopic bases of visual masking and to assess non-retinotopic perception during masking.
General methods and materials
All visual stimuli were generated via a Visual Stimulus Generator (VSG 2/5) card manufactured by Cambridge Research Systems. The stimuli were displayed on a 22-inch color monitor set at a resolution of 800 × 600 pixels with a refresh rate of 100 Hz. Subject responses were collected by means of a joystick connected to the computer hosting the VSG card. The distance between the observer and the monitor was fixed at 0.5 m, and a head/chin rest was used to minimize head movements during the experiments. Observers were asked to maintain stable gaze at a fixation cross that remained visible at the center of the monitor throughout the experiment. Our previous experiments indicate that observers are able to keep stable fixation while viewing Ternus-Pikler displays (Boi et al., 2009). Nevertheless, to completely rule out the involvement of eye movements, we ran control experiments with eye-movement monitoring; the results of these experiments are provided in the Appendix. All experiments were conducted in a dimly lit room. Background and Ternus-Pikler disk/square luminances were set at 25 cd/m² and 10 cd/m², respectively, for all experiments. Target and mask luminance levels were set at 30 or 40 cd/m², depending upon subject thresholds, for optimal masking. Four participants, ranging in age from 25 to 37 years, three of whom were naïve to the purpose of the study, took part in the experiments. The experiments were conducted according to a protocol approved by the University of Houston Committee for the Protection of Human Subjects. Informed consent was obtained from every participant, and practice trials were conducted to familiarize observers with the experimental procedures. The results of practice trials were not included in the data analysis.
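For readers reproducing a setup like this, the conversion from degrees of visual angle to on-screen pixels follows from the 0.5-m viewing distance and the display resolution given above. The sketch below is our illustration, not the authors' code; the 40-cm display width is an assumption (typical for a 22-inch 4:3 monitor), so the pixel values are approximate.

```python
import math

# Illustrative visual-angle arithmetic for the setup described in the text.
# Viewing distance (0.5 m) and horizontal resolution (800 px) are from the
# text; the 40-cm display width is our assumption.
VIEWING_DISTANCE_CM = 50.0

def deg_to_cm(angle_deg: float, distance_cm: float = VIEWING_DISTANCE_CM) -> float:
    """On-screen extent (cm) subtending `angle_deg` at `distance_cm`."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

def cm_to_px(size_cm: float, screen_width_cm: float = 40.0, h_res: int = 800) -> int:
    """Convert centimeters to pixels for the assumed display geometry."""
    return round(size_cm * h_res / screen_width_cm)

element_px = cm_to_px(deg_to_cm(1.5))       # 1.5-deg Ternus-Pikler element
eccentricity_px = cm_to_px(deg_to_cm(4.0))  # 4-deg eccentricity
```

Under these assumptions, a 1.5° element spans about 26 pixels and the 4° eccentricity about 70 pixels.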
Experiment 1: Retinotopy of metacontrast masking
Spatial parameters of the target and mask are displayed in Fig. 3c. The variable "x" represents the size of the probe gap, which was varied in the range of 12′ to 25′ to meet each individual subject's masking-threshold requirements. The square and the disk in the Ternus-Pikler stimulus had, respectively, a side and a diameter of 1.5°. Figure 3d displays the timing diagram of a typical trial. The ISI was fixed (0 or 40 ms) for each experimental block, and the target always appeared just before the ISI. Target and mask stimuli were presented for 10 and 20 ms, respectively. Mask onset time was randomized from trial to trial to allow for different ISI-SOA combinations per block. The Ternus-Pikler disks were displayed at an eccentricity of 4° and for a duration of 120 ms. As shown in Fig. 3d, the Ternus-Pikler ISI limits the shortest masking SOA that can be used. Therefore, eccentricity, background luminance, Ternus-Pikler element shapes, and target/mask/disk contrasts were chosen such that Ternus-Pikler group motion was perceived by all observers at a relatively short ISI (40 ms), whereas a strong masking effect was observed at the corresponding SOA (50 ms). In this study, we used only one contrast polarity for targets and masks. The luminance values for the target, mask, disk, and background were 40, 40, 10, and 50 cd/m², respectively. Thus, with respect to the disk within which they were presented, the target and mask had positive contrast polarity. Based on previous research, we would expect quantitatively different but qualitatively similar results for other contrast-polarity combinations (Breitmeyer, Tapia, Kafalıgönül, & Öğmen, 2008). Ternus-Pikler radial motion (upward or downward) was also randomized from trial to trial. In a two-alternative forced-choice design, three naïve observers as well as one of the authors reported the perceived location of the missing corner of the target diamond (left/right).
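The timing constraint noted above (the Ternus-Pikler ISI bounding the shortest usable masking SOA) can be made concrete with a small sketch. The arithmetic follows from the durations in the text; the function name is our own.

```python
# Minimal timing sketch (our illustration, not the authors' code). The
# 10-ms target is shown at the end of Frame 1, and the mask can appear no
# earlier than the onset of Frame 2, so the earliest mask onset relative
# to target onset (the shortest SOA) is target duration + Ternus ISI.

TARGET_DURATION_MS = 10

def shortest_soa_ms(ternus_isi_ms: int) -> int:
    """Earliest possible target-mask SOA (ms) for a given Ternus ISI."""
    return TARGET_DURATION_MS + ternus_isi_ms

assert shortest_soa_ms(0) == 10    # element-motion blocks (ISI = 0 ms)
assert shortest_soa_ms(40) == 50   # group-motion blocks: masking probed at 50 ms
```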
Figure 5b shows performance as a function of SOA for the element motion condition (ISI = 0 ms). A two-way repeated-measures ANOVA shows a significant effect of mask condition (F(2,6) = 29.4; p = 0.012; ηp² = 0.907), as well as of SOA (F(8,24) = 45.89; p = 0.007; ηp² = 0.939). However, when the retinotopic mask condition is removed from the analysis, the metacontrast masking effect (F(1,3) = 0.214; p = 0.675), as well as the SOA effect (F(8,24) = 36.06; p = 0.497), become insignificant. Figure 5c shows performance as a function of SOA when the Ternus-Pikler disks are perceived in group motion (ISI = 40 ms). Once again, significant effects of mask (F(2,6) = 126.09; p < 0.001; ηp² = 0.977) and SOA (F(4,12) = 5.49; p = 0.009; ηp² = 0.647) are observed. Removing the retinotopic mask condition from the analysis once again renders both the masking (F(1,3) = 1.000; p = 0.391) and SOA (F(4,12) = 1.876; p = 0.179) effects insignificant.⁴
These results show clearly that metacontrast masking in the absence of eye movements is retinotopic. Regardless of the perceived motion of the Ternus-Pikler disks, the retinotopic mask significantly masks the target at optimal SOAs, whereas the non-retinotopic mask has no significant effect on performance. In the next experiment, to generalize this result across mask types and masking-function types, we used a spatially overlapping mask that shared structural similarity with the target. In this structure-masking paradigm, we chose a strong mask to generate a Type-A (i.e., monotonic) backward-masking function instead of the Type-B (i.e., non-monotonic) backward-masking function obtained in the metacontrast experiment.
Experiment 2: Retinotopy of structure masking with Type-A masking function
Figure 6d displays the timing diagram of a typical trial in Experiment 2. As in Experiment 1, the ISI was fixed (0 or 40 ms) for each experimental block and the target always appeared just before the ISI. Mask presentation time was again randomized from trial to trial to allow for different ISI-SOA combinations per block. Background luminance, Ternus-Pikler element shapes, and target/mask/disk contrasts were chosen as explained in Experiment 1. Ternus-Pikler radial motion (upward or downward) also was randomized from trial to trial. In a two-alternative forced-choice design, three naïve observers and one of the authors reported the perceived location of the missing side of the target square (up/down).
Together, the results of Experiments 1 and 2 show that backward masking is retinotopic and this finding holds for metacontrast and structure masking, as well as, for type-A and type-B masking functions.
In a recent study, Lin and He (2012) investigated the retinotopy of masking by using a modified version of the object-specific reviewing paradigm (Kahneman et al., 1992). A rectangular object (frame) was presented for a preview period of 200 ms. The target was presented during the last 10 ms of this preview period on one of the two sides of the rectangle. The rectangular frame was then shifted to a new location and displayed for another 200 ms. The mask stimuli were presented during the first 30 ms of the shifted frame. One side of the frame contained a weak mask and the other side contained a strong mask. Neither mask occupied the same retinotopic location as the target, but one of the masks occupied the same rectangle-relative position as the target (i.e., the same side). Observers performed worse when the strong mask occupied the same relative position as the target. Lin and He interpreted this finding as evidence for non-retinotopic, frame-centered backward masking. While this interpretation is plausible, it is difficult to make inferences about masking without observing the complete masking functions and directly comparing retinotopic, non-retinotopic, and baseline conditions. At the single short SOA of 10 ms (corresponding to ISI = 0 ms) used in their experiment, it is difficult to assess whether the difference in performance across the two mask types is due to masking per se or to other factors. In our experiments, we included baseline no-mask measures and multiple SOA values to reveal the full, typical Type-A and Type-B masking functions, and we directly compared retinotopic and non-retinotopic masking conditions under two different motion-grouping conditions. Our results reveal only retinotopic masking.
Previous studies showed that features of a masked target can be observed as part of the mask stimulus (Werner, 1935; Wilson & Johnson, 1985; Herzog & Koch, 2001; Otto, Ogmen, & Herzog, 2006; Öğmen et al., 2006; Breitmeyer, Herzog, & Ogmen, 2008). As indicated in Fig. 1, the vernier offset of the target in the first frame can be observed on the mask stimulus shown in the second frame, even though no vernier is presented either in this element or at this retinotopic location. Similarly, using the sequential metacontrast paradigm, we have shown that features of a target whose visibility is suppressed can nevertheless be perceived along the motion streams to which the target belongs (Otto et al., 2006; Herzog, Otto, & Ogmen, 2012). Our studies showed that the attribution of the target's features to the mask stimulus is a consequence of motion grouping rather than of masking itself (Öğmen et al., 2006; Breitmeyer et al., 2008). The goal of the next experiment was to study this motion-dependent, non-retinotopic feature attribution in masking.
Experiment 3: Non-retinotopic feature attribution
In some trials of Experiments 1 and 2, subjects informally reported perceiving the target to be moving with the Ternus-Pikler elements, as one would expect from non-retinotopic feature attribution. In such cases, the target could be perceived at spatial locations different from where the target stimulus was actually presented. To formally study this, we removed the motion ambiguity from Experiments 1 and 2, and instructed our subjects to spread their attention as discussed in the following section, so as to facilitate the read-out of non-retinotopic feature attribution.
Experimental design and procedures of Experiment 3 were identical to those of Experiments 1 and 2, with the exception of the Ternus-Pikler motion. The Ternus-Pikler motion was made predictably upward in all trials, and the subjects were instructed to spread their attention to the central Ternus-Pikler square in both display frames. The target and mask designs were identical to those of Experiments 1 and 2 for the respective metacontrast and structure-masking conditions. Stimulus timing was also chosen to match that of the previous two experiments. Once again, the ISI was fixed (0 or 40 ms) for each experimental block, and the target always appeared just before the ISI. Mask presentation time was again randomized from trial to trial to allow for different ISI-SOA combinations per block. Background luminance, Ternus-Pikler element shapes, and target/mask/disk contrasts were chosen to match those of the previous experiments. The Ternus-Pikler radial motion was fixed (upward) in all trials to remove motion ambiguity. In a two-alternative forced-choice design, three naïve observers as well as one of the authors reported the perceived missing corner of the target diamond or the location of the missing side of the target square, for the metacontrast and structure-masking conditions, respectively.
Results A: metacontrast masking
Results B: masking by structure
In agreement with the results of Experiments 1 and 2, retinotopic masking is observed when the Ternus-Pikler disks are perceived in element motion (ISI = 0 ms). However, in contrast to the results of our previous two experiments, when observers can focus their attention on the Ternus-Pikler element in the second frame that is grouped with the target-containing element in the first frame, they can identify the target based on its continued appearance along the motion path of that element. This finding is in agreement with our previous results from sequential metacontrast (Otto et al., 2006; Herzog et al., 2012) and the Ternus-Pikler display (Fig. 1). Our subjects' informal reports indicate that a faded but complete copy of the target is perceived at the non-retinotopic destination, in accordance with the motion of the stimulus.
The functional significance of retinotopic masking in the absence of eye movements can be understood by considering how the visual system analyzes the form of moving targets. Under normal viewing conditions, a briefly presented stimulus remains visible for approximately 120 ms after stimulus offset (Haber & Standing, 1970; Coltheart, 1980). Due to this visible persistence, one would expect moving objects to appear highly blurred, with a comet-like trailing smear. Yet our normal perception of objects in motion is relatively clear and sharp (Ramachandran, Rao, & Vidyasagar, 1974; Bex, Edgar, & Smith, 1995; Westerink & Teunissen, 1995; Hammett, 1997), a phenomenon known as motion deblurring (Burr, 1980; Hogben & Di Lollo, 1985; Chen, Bedell, & Ogmen, 1995; Burr & Morgan, 1997). We have proposed a theory according to which dynamic form computation relies on a synergy between retinotopic masking and motion-based reference frames (Öğmen, 2007; Öğmen & Herzog, 2010). According to this theory, masking and motion mechanisms play complementary roles: Masking operates in retinotopic representations to control motion blur, and motion mechanisms provide the reference frame used to compute the features of moving targets non-retinotopically. This is in contrast to theories suggesting that motion deblurring, dynamic form perception, and masking all result from motion mechanisms (Burr, 1980; Burr, Ross, & Morrone, 1986) and to those suggesting that the computation of features and masking are linked by the common process of object substitution or updating (Enns & Di Lollo, 1997; Di Lollo, Enns, & Rensink, 2000; Enns, 2002; Enns, Lleras, & Moore, 2010). Whereas some models of backward masking causally linked backward masking and motion mechanisms (Kahneman, 1967; Burr, 1984), various studies showed that these two processes are largely independent and can be dissociated from each other (Weisstein & Growney, 1969; Stoper & Banffy, 1977; Breitmeyer & Horman, 1981).
Below we discuss independent but complementary roles that these processes play in the perception of dynamic form.
Mechanisms of motion deblurring: Metacontrast, and not motion, mechanisms
Several models have been proposed to explain motion deblurring (Anderson & Van Essen, 1987; Burr, 1980; Burr, Ross, & Morrone, 1986; Martin & Marshall, 1993). According to Burr (1980), motion estimation is achieved by the spatiotemporally oriented receptive fields of motion mechanisms (such as the Reichardt motion detector or equivalent motion-energy models). These motion-based models predict that an isolated moving target should not produce motion blur, provided that it sufficiently stimulates the motion mechanisms. However, this prediction does not agree with findings from various studies showing the perception of extensive blur for isolated moving targets (Bidwell, 1899; Chen et al., 1995; Lubimov & Logvinenko, 1993; McDougall, 1904; Smith, 1969a, b). By using several paradigms directly tailored to test the predictions of motion-based models of deblurring, we showed that the activation of motion mechanisms is not a sufficient condition for motion deblurring. These findings argue against the aforementioned motion-based models of deblurring. Our theoretical (Ogmen, 1993), experimental (Chen et al., 1995), and computational (Purushothaman, Ogmen, Chen, & Bedell, 1998) studies suggest that metacontrast, and not motion, mechanisms underlie motion deblurring. In agreement with these findings, several studies showed a strong correlation between motion smear and metacontrast in terms of their dependence on spatial separation, timing, and eccentricity (Castet, Lorenceau, & Bonnet, 1993; Chen et al., 1995; Di Lollo & Hogben, 1985; Farrell, 1984). Motion deblurring is closely related to "sequential metacontrast masking" (Herzog et al., 2012; Otto et al., 2006; Piéron, 1935), an extended form of metacontrast (Breitmeyer & Öğmen, 2006).
Mechanisms of non-retinotopic feature attribution: Motion, and not metacontrast, mechanisms
In backward masking, it has long been known that features of the target can be perceived as part of the mask, an effect termed feature transposition, feature inheritance, or feature attribution (Werner, 1935; Stewart & Purcell, 1970; Wilson & Johnson, 1985; Hofer, Walder, & Groner, 1989; Herzog & Koch, 2001; Enns, 2002; Sharikadze, Fahle, & Herzog, 2005; Öğmen et al., 2006; Otto et al., 2006). The close relationship between feature attribution and masking led some researchers to suggest that the processing of features for moving stimuli and masking are causally linked by a common process, viz., object substitution or updating (Enns & Di Lollo, 1997; Di Lollo et al., 2000; Enns, 2002; Enns et al., 2010). To test whether feature attribution results from masking or from motion mechanisms, we measured the magnitudes of feature attribution, motion, and masking and computed correlations between these variables (Breitmeyer, Herzog, et al., 2008). Our results showed that, when apparent motion occurs without masking, it correlates positively with feature attribution. Furthermore, when apparent motion occurs with masking, feature attribution remains positively correlated with apparent motion after the contribution of masking is factored out, but does not correlate with masking after the contribution of apparent motion is similarly factored out. Taken together, these findings support the view that feature attribution is based on motion, and not on masking, mechanisms.
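"Factoring out" a third variable here refers to partial correlation. A minimal sketch of that computation follows; the code and the toy data are our own illustration, not the authors' analysis or dataset.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y after the contribution of z is factored out."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy magnitudes (hypothetical, for illustration only):
attribution = [1, 2, 3, 4, 5]
motion      = [2, 4, 6, 8, 10]
masking     = [5, 3, 4, 1, 2]
r = partial_corr(attribution, motion, masking)  # ≈ 1.0 for this toy data
```

Because the toy attribution scores are a linear function of the toy motion scores, their correlation remains 1.0 even after the masking variable is partialled out, mirroring the pattern of results described above.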
In our theory, masking operates in retinotopic representations, while features of a stimulus can escape masking and become visible in non-retinotopic representations. Little is known about the neural correlates of visibility and of non-retinotopic representations; however, we can propose an outline of how this dissociation between retinotopic masking and non-retinotopic visibility can take place, following two general neural-representation schemes.
In a second scheme, which we call activational correlates, the same population of neurons can carry both types of information, but through different activation patterns. For example, it has been suggested that synchronization between neurons in particular frequency bands (e.g., alpha, gamma) may underlie conscious awareness of a stimulus. In this case, the results of Experiment 3 can be explained by masking mechanisms that disrupt synchrony at the retinotopic, but not at the non-retinotopic, level. Of course, structural and activational correlates are not mutually exclusive, and it is possible to find a mixture of the two. In general, our approach highlights the importance of motion-based reference frames and suggests a broad and distributed neural representation that requires coordination between the ventral and dorsal streams to process features in terms of motion-based reference frames.
All of these observations are in agreement with our findings of retinotopic masking under conditions where the observer is stationary and maintains steady fixation. However, under normal viewing conditions, both the subject and objects move. Future studies will determine how reference frames generated by ego-motion (as in the case of eye movements) and exo-motion (as in the case of moving objects, studied herein) are coordinated to work in synergy.
¹ Sensory (iconic) memory is a visual storage mechanism with relatively high capacity and a relatively short time span (rev., Haber, 1983).
² Backward masking refers to the reduction in the visibility of a target stimulus caused by a mask stimulus that follows the target in time. When the mask surrounds but does not spatially overlap with the target, the effect is called metacontrast. When the mask spatially overlaps with and shares structural similarities with the target, it is called backward structure masking (Bachmann, 1994; Breitmeyer & Öğmen, 2006).
³ White (1976) used smooth-pursuit eye movements to study retinotopic versus non-retinotopic aspects of visual masking and reported spatiotopic masking (White, 1976). However, a subsequent study in which eye movements were monitored showed that masking during pursuit was retinotopic, not spatiotopic (Sun & Irwin, 1987).
⁴ It is interesting to note that the retinotopic mask in the element motion condition exerts stronger masking at SOAs shorter than 40 ms than in the static control condition. In other words, the metacontrast masking function appears to shift from Type-B to Type-A. The expression of Type-A versus Type-B metacontrast functions has been attributed to differences in stimulus parameters, criterion contents, or individual differences (Breitmeyer & Öğmen, 2006; Duangudom, Francis, & Herzog, 2007; Francis & Cho, 2008; Albrecht, Klapötke, & Mattler, 2010; Maksimov, Murd, & Bachmann, 2011). In our study, the same subjects were used in both conditions, and inspection of individual data showed that all subjects' metacontrast functions consistently showed increased masking at short SOAs in the Ternus-Pikler condition compared with the static control condition. With individual differences ruled out, stimulus configuration becomes a more likely explanation for this change. In the static condition, target and mask stimuli are embedded inside the same static square; in the dynamic Ternus-Pikler configuration, the square changes into a disk. Additional experiments can determine whether this change accounts for the increased masking at short SOAs.
Michael Herzog is supported by the Swiss National Science Foundation (SNF) project: “Basics of visual processing: from retinotopic encoding to non-retinotopic representations.” We thank the action editor and the reviewers for helpful comments and suggestions.
- Anderson, C. H., & Van Essen, D. C. (1987). Shifter circuits: A computational strategy for dynamic aspects of visual processing. Proceedings of the National Academy of Sciences of the United States of America, 84(17), 6297–6301.
- Bachmann, T. (1994). Psychophysiology of visual masking: The fine structure of conscious experience. Commack, NY: Nova Science Publishers.
- Bidwell, S. (1899). Curiosities of light and sight. London: Swan Sonnenschein.
- Breitmeyer, B. G., & Horman, K. (1981). On the role of stroboscopic motion in metacontrast. Bulletin of the Psychonomic Society, 17(1), 29–32.
- Burr, D. C., Ross, J., & Morrone, M. C. (1986). Seeing objects in motion. Proceedings of the Royal Society Series B: Biological Sciences, 227(1247), 249–265.
- Castet, E., Lorenceau, J., & Bonnet, C. (1993). Inverse intensity effect is not lost with stimuli in apparent motion. Vision Research, 33, 1697–1708.
- Di Lollo, V., & Hogben, J. H. (1985). Suppression of visible persistence. Journal of Experimental Psychology: Human Perception and Performance, 11, 304–316.
- Farrell, J. E. (1984). Visible persistence of moving objects. Journal of Experimental Psychology: Human Perception and Performance, 10, 502–511.
- Hofer, D., Walder, F., & Groner, M. (1989). Metakontrast: Ein berühmtes, aber schwer messbares Phänomen. Schweizer Zeitschrift für Psychologie, 48, 219–232.
- Hogben, J. H., & Di Lollo, V. (1985). Suppression of visible persistence in apparent motion. Perception & Psychophysics, 38(5), 450–460.
- Lin, Z., & He, S. (2012). Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking. Journal of Vision, 12(11):24, 1–18.
- Lubimov, V., & Logvinenko, A. (1993). Motion blur revisited. Perception, 22(Suppl.), 77.
- Martin, K. E., & Marshall, J. A. (1993). Unsmearing visual motion: Development of long-range horizontal intrinsic connections. In S. Hanson, J. Cowan, & C. Giles (Eds.), Advances in neural information processing systems (Vol. 5). San Mateo, CA: Morgan Kaufmann.
- McDougall, W. (1904). The sensations excited by a single momentary stimulation of the eye. British Journal of Psychology, 1, 78–113.
- Piéron, H. (1935). Le processus du métacontraste. Journal de Psychologie Normale et Pathologique, 32, 1–24.
- Pikler, J. (1917). Sinnesphysiologische Untersuchungen.
- Smith, V. C. (1969a). Scotopic and photopic functions for visual band movement. Vision Research, 9, 293–304.
- Smith, V. C. (1969b). Temporal and spatial interactions involved in the band movement phenomenon. Vision Research, 9, 665–676.