
Attention, Perception, & Psychophysics, Volume 79, Issue 2, pp 449–458

Spatial partitions systematize visual search and enhance target memory

  • Grayden J. F. Solman
  • Alan Kingstone

Abstract

Humans are remarkably capable of finding desired objects in the world, despite the scale and complexity of naturalistic environments. Broadly, this ability is supported by an interplay between exploratory search and guidance from episodic memory for previously observed target locations. Here we examined how the environment itself may influence this interplay. In particular, we examined how partitions in the environment—like buildings, rooms, and furniture—can impact memory during repeated search. We report that the presence of partitions in a display, independent of item configuration, reliably improves episodic memory for item locations. Repeated search through partitioned displays was faster overall and was characterized by more rapid ballistic orienting in later repetitions. Explicit recall was also both faster and more accurate when displays were partitioned. Finally, we found that search paths were more regular and systematic when displays were partitioned. Given the ubiquity of partitions in real-world environments, these results provide important insights into the mechanisms of naturalistic search and its relation to memory.

Keywords

Visual search · Spatial memory

As embodied agents, much of human behavior is contingent on our ability to locate and access objects in space—whether tools, resources, other individuals, or sources of information. For the most part, this is accomplished in one of two ways: through search, or through memory—in other words, by exploring the environment, or by leveraging episodic memory for where we have previously observed a target object.1 There has been considerable interest in examining the interplay between these processes, as well as the conditions under which one or the other might be preferred. To date, the literature suggests that the use of memory is enhanced when search is more difficult. Memory use is rare in relatively simple displays with low orienting costs (Kunar, Flusberg, & Wolfe, 2008; Wolfe, Klempen, & Dahlen, 2000), but memory is used increasingly often as search becomes more challenging—for instance, by decreasing target discriminability or increasing stimulus eccentricity (Solman & Smilek, 2012), or by increasing orienting costs through the need for eye or head movements (Solman & Kingstone, 2014; Solman & Smilek, 2010; cf. Ballard, Hayhoe, & Pelz, 1995). Similarly, when the elements of a task support search—for instance, through semantic cues that enable inference of the target locations (Eckstein, Drescher, & Shimozaki, 2006; Neider & Zelinsky, 2006; Torralba, Oliva, Castelhano, & Henderson, 2006)—then even targets in complex, naturalistic displays are less likely to be found via episodic memory (Võ & Wolfe, 2012, 2013).

Of central interest in these studies is the nature of the search–memory interplay in naturalistic settings, with the aim of improving our understanding of routine naturalistic behavior. Here we focused on an aspect of naturalistic environments that has received limited attention in the context of search—the ubiquity of partitions in the world. The bulk of human environments are multiply subdivided into buildings, rooms, items of furniture, and further down into nested compartments, drawers, and containers. There are several good reasons to believe that such partitioning might influence search. First, it has long been known that grouping or otherwise regularizing the configuration of items in search displays can facilitate target detection and improve efficiency (e.g., Bundesen & Pedersen, 1983; Farmer & Taylor, 1980; Humphreys, Quinlan, & Riddoch, 1989; Treisman, 1982; Williams, Pollatsek, & Reichle, 2014). Second, there is evidence that visual attention is typically deployed in a coarse-to-fine ordering: selecting groups first, then homing in on individual objects within them (e.g., Rao, Zelinsky, Hayhoe, & Ballard, 2002; Zelinsky, Rao, Hayhoe, & Ballard, 1997). In this way, subdivided spaces might provide a natural or complementary structure for guiding attention. Finally, on larger scales, we find that human navigation often relies on landmark knowledge—suggesting that spatial encoding is better served by referencing readily identifiable features, rather than by encoding absolute positions (Foo, Warren, Duchon, & Tarr, 2005).

The influence of display partitioning on search has been examined recently by Nakashima and Yokosawa (2013). Participants searched among Cs and Os in either uniform arrays or arrays subdivided by black borders. Nakashima and Yokosawa reported that partitions impair easier searches, perhaps due to perceptual disruption, but critically, these same partitions can also facilitate more difficult searches. One explanation for this result is that partitions, like other forms of grouping, enable more systematic processing of the items in the display (cf. Williams et al., 2014), thereby avoiding attentional inefficiencies such as retracing searched locations or dwelling on and reinspecting items that have already been examined.

In this study, we extended the investigation of partitions in search, with a dual purpose. First, and primarily, we explored how partitions might influence the use of memory in search, using the repeated-search paradigm (Wolfe et al., 2000). Second, using trajectory analysis, we examined how partitions influence the strategic/systematic components of the search process itself, in hopes of clarifying the mechanistic underpinnings of Nakashima and Yokosawa’s (2013) results. In the present study, participants searched repeatedly through partitioned and open (nonpartitioned) displays of object images, and then they were tested on their explicit memory for item positions. We used a masked, mouse-contingent display to enforce serial scanning and enable detailed path analysis metrics. We made several predictions. Most focally, we expected that partitioning search displays would lead to improved memory for item locations, both during search and during explicit free recall. In addition, we expected to find faster search through partitioned displays, supported by more regular search paths.

Critically, note that in this study we approached search in terms of exploratory behavior in general, as opposed to visual search in particular (cf. Hollingworth, 2012; Smith, Hood, & Gilchrist, 2008; Solman & Kingstone, 2015). Indeed, here we used masked displays, which largely preclude any influence of visual features in guiding search. By limiting the influence of low-level featural guidance, we emphasized the two search components of focal interest in the present study—memory use and exploratory strategy.

Method

Participants

A group of 35 participants (six male, 29 female) from the University of British Columbia participated for course credit. All reported normal or corrected-to-normal visual acuity. We obtained informed consent from all participants, and all experimental procedures and protocols were reviewed and approved by the University of British Columbia Behavioral Research Ethics Board.

Displays

Example search displays in the open and partitioned conditions are shown in Fig. 1. The search displays consisted of 48 object images, drawn randomly for each participant from the Bank of Standardized Stimuli (BOSS; Brodeur, Dionne-Dostie, Montreuil, & Lepage, 2010). The items were arrayed in a 7 × 7 grid, with the central position used for the target template, presented in a green box. All item images were 80 pixels wide, spaced to evenly span the vertical dimension of the screen. In the open condition (panel A), the items were presented against a uniform white background. In the partitioned condition (panel B), the items were enclosed within four white rectangles placed on a gray (.5) background. In both conditions, the search display was masked by a similar display, with the items replaced by identical blurred gray patches (with the exception of the target template, which remained visible in the mask display). The partitions, in the appropriate condition, were visible in both the mask and search displays. The search display was visible through the mask via a circular mouse-contingent window, with a radius of 80 pixels. In this way, search was limited to the inspection of individual items (Fig. 2).
Fig. 1

Example unmasked search displays in the open (a) and partitioned (b) conditions. During search, the object images, with the exception of the central target template, were masked by noise patches and were visible only through exploration with a mouse-contingent window
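This masking scheme lends itself to a simple implementation in pygame, the module used to run the experiment. The sketch below is our own illustration rather than the authors' code; the surface and function names (search_surf, mask_surf, draw_masked_display) are hypothetical, and the 80-pixel window radius follows the display description above.

```python
import pygame

WINDOW_RADIUS = 80  # radius of the mouse-contingent window, in pixels

def draw_masked_display(screen, search_surf, mask_surf, mouse_pos):
    """Draw the mask everywhere, then reveal the underlying search display
    through a circular window centred on the mouse position."""
    screen.blit(mask_surf, (0, 0))

    # Start from a fully transparent layer, punch an opaque circle at the
    # mouse position, and keep only the search-display pixels inside it.
    # search_surf is assumed to carry an alpha channel (e.g., composited
    # via convert_alpha()).
    aperture = pygame.Surface(search_surf.get_size(), pygame.SRCALPHA)
    pygame.draw.circle(aperture, (255, 255, 255, 255), mouse_pos, WINDOW_RADIUS)
    aperture.blit(search_surf, (0, 0), special_flags=pygame.BLEND_RGBA_MIN)

    screen.blit(aperture, (0, 0))
    pygame.display.flip()
```

In the event loop, a function like this would be called on every frame with the current pygame.mouse.get_pos(), so that only the items under the window are ever visible.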

Fig. 2

Schematic trial sequences for the experiment. Participants completed two blocks (open and partitioned; only the open case is shown in the figure). In each block, participants completed five repetitions of search for each of the 48 items, followed by an explicit-memory test for each of the 48 items. Both search and memory trials began with a fully masked display and a masked target template. Trial onset was triggered by moving the mouse onto the target template, revealing the target. In search trials, a mouse-contingent window was used to explore the display, locally revealing the masked items, and search terminated when the participant clicked to report the target location. Feedback was given on search trials, with a green flash for correct and a red flash for incorrect responses. On memory trials, the display remained masked, and participants were required to indicate the location where they believed each item to have been

Procedure

Each participant completed two blocks—open displays in one block, and partitioned displays in the other, with the block order counterbalanced across participants. In each block, participants searched for each of the 48 items in five separate repetitions, for a total of 240 search trials. Incorrect trials were recycled to the end of the search period, so that each participant correctly located each target five times. Following search, a single explicit-memory test was presented for the location of each of the 48 items. Our analysis of explicit memory included only the Partition (open, partitioned) factor, whereas the search measures included both the Partition and Repetition (1, 2, 3, 4, 5) factors.

Search and memory trials proceeded in largely the same way (Fig. 2). A trial began with a masked display and a blank green square, where the target template would subsequently appear. Participants triggered the onset of search by moving the mouse-contingent window onto the central green square, revealing the target, whereupon they could use the window to explore the display. Note that prior to revealing the target, the search display was not visible through the window. During search trials, participants moved the window over the display to inspect the items, and they were instructed to click on the item matching the target template. A brief (250-ms) feedback display flashed either green or red, to indicate that the response was correct or incorrect, respectively. A response was deemed correct if the click location was within 80 pixels of the target item’s center. During memory trials, only the mask display was visible (i.e., the item identities were unavailable), and participants were instructed to click on the location where they believed each item had been. No feedback was provided on memory trials.
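For concreteness, the correctness criterion on search trials reduces to a Euclidean distance check against the 80-pixel tolerance; the following minimal sketch (with names of our own choosing) illustrates it.

```python
import math

CLICK_TOLERANCE = 80  # pixels from the target item's center

def click_is_correct(click_pos, target_center, tolerance=CLICK_TOLERANCE):
    """Return True if the click landed within `tolerance` pixels of the
    center of the target item."""
    dx = click_pos[0] - target_center[0]
    dy = click_pos[1] - target_center[1]
    return math.hypot(dx, dy) <= tolerance
```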

Apparatus

The experiment was written and executed in Python using the pygame module, and run on an Apple Mac Mini running OS X 10.6.4 on a 2.4-GHz Intel Core 2 Duo processor. The stimulus displays were presented on a 24-in. Acer V243H monitor at a resolution of 1,920 × 1,080. The seating distance was not rigidly controlled, but was approximately 60 cm. For both search and memory trials, in addition to response time and response location, we recorded the position of the mouse-contingent window at a rate of ~20 Hz.

Results

Outliers

Given that the search items were readily identifiable natural-object images, error rates were low for most participants, with a few exceptions. We excluded these error-prone participants with a recursive outlier removal process. We identified the Partition × Repetition condition cell with the greatest error for each participant, then recursively excluded those participants whose error rates were more than 3.5 standard deviations from the group. This led to the removal of three participants (Zs = 8.7, 8.7, and 21.1). The remainder of the analysis proceeded with N = 32.
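Under our reading of this procedure (take each participant's worst Partition × Repetition cell, z-score those values across participants, drop anyone more than 3.5 standard deviations above the group, and repeat until no one is excluded), the recursion can be sketched as follows; the function and variable names are our own.

```python
import numpy as np

def recursive_outlier_removal(worst_cell_error, cutoff=3.5):
    """worst_cell_error maps participant ID to that participant's highest
    error rate across the Partition x Repetition cells. Returns the IDs
    that survive recursive z-score screening at the given cutoff."""
    kept = dict(worst_cell_error)
    while True:
        ids = list(kept)
        vals = np.array([kept[p] for p in ids], dtype=float)
        z = (vals - vals.mean()) / vals.std(ddof=1)
        outliers = [p for p, zp in zip(ids, z) if zp > cutoff]
        if not outliers:          # no one exceeds the cutoff: stop
            return ids
        for p in outliers:        # otherwise drop them and re-screen
            del kept[p]
```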

Explicit memory

We first evaluated the accuracy of explicit memory and the response speed during this portion of the task. Next, we evaluated the spatial magnitude of the errors made. Participants were significantly more accurate in explicit-memory testing for partitioned (M = 90.9 %) than for open (M = 82.4 %) displays, t(31) = 4.210, p < .001, and were significantly faster in producing these correct responses, t(31) = 2.931, p < .01 (M = 1,596 vs. M = 1,751 ms).

As we noted above, accuracy was quite high, so the analysis of errors was limited, with missing cells due to perfect performance in one or more conditions leading to a reduction of the sample size to 23. Error magnitude was estimated by computing the distance between the response location and the location of the target. Interestingly, we found that although fewer errors were made overall in the partitioned displays, when errors were produced, they were farther from the target in the partitioned displays (M = 321 pixels) than in the open displays (M = 257 pixels), t(22) = 2.276, p < .05. One explanation for this effect is that errors in explicit localization reflect a combination of memory inaccuracies (near-misses) and memory failures (arbitrary location selections). The larger average error distance seen in the partitioned condition may reflect a change in the distribution of these errors, with location memory in partitioned displays having a stronger all-or-none character—that is, selectively fewer near-misses. To test this possibility, we distinguished between near-misses (errors adjacent to the correct location) and failures (errors farther away). A Partition (partitioned, open) × Distance (near, far) analysis of variance (ANOVA) was conducted on the numbers of errors in explicit memory. There were more errors overall in the open displays (as we reported previously), F(1, 31) = 17.7, MSE = 7.45, p < .001, and more near than far errors, F(1, 31) = 9.44, MSE = 4.06, p < .005. Critically, the interaction was also significant, F(1, 31) = 16.8, MSE = 2.03, p < .001, with a larger reduction in near (open, M = 5.28; partitioned, M = 2.22) than in far (open, M = 3.16; partitioned, M = 2.16) misses.
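One way to operationalize this near/far split is sketched below. The sketch assumes (our interpretation, not a definition given in the text) that "adjacent to the correct location" means within one grid step of the target in any direction; grid_spacing would be the center-to-center distance between neighboring items.

```python
def classify_memory_error(response, target, grid_spacing):
    """Label an explicit-memory error as a near-miss or a failure, based on
    how many grid cells separate the response from the correct location."""
    dx = abs(response[0] - target[0]) / grid_spacing
    dy = abs(response[1] - target[1]) / grid_spacing
    cells_away = max(round(dx), round(dy))   # Chebyshev distance in cells
    return "near" if cells_away <= 1 else "far"
```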

Search accuracy

A trial was accurate if the searcher clicked on the item matching the target object, and inaccurate otherwise. Since the items were images of highly distinctive common natural objects, we expected to find relatively few errors. Indeed, across conditions, fewer than 2 % of the trials on average were errors. The rate of errors was not influenced by partition condition, F(1, 31) = 1.38, p = .248, nor by repetition, F(4, 124) < 1, p = .511. The interaction was also nonsignificant, F(4, 124) = 1.47, p = .215.

Response times

The response time (RT) distributions were distinctly nonnormal, particularly as repetition increased, with an average skew >2.5 and an average kurtosis >8.5. Consequently, we used the median as our measure of central tendency. Median RTs are plotted in Fig. 3, and were compared in a Partition (open, partitioned) × Repetition (1, 2, 3, 4, 5) repeated measures ANOVA. Sensibly, the RTs became significantly faster as repetitions increased, F(4, 124) = 296.0, MSE = 643,594.689, p < .001. There was also, however, a small but significant effect of partition, such that RTs were faster for the partitioned than for the open conditions, F(1, 31) = 4.63, MSE = 325,901.097, p < .05. We found no interaction, F(4, 124) = 1.057, p = .381.
Fig. 3

Median response times (RTs) during search, and difference scores between the open and partitioned conditions, plotted across repetitions. Error bars depict one standard error of the mean

Search trajectory

In the following analyses, we used trajectory data to present qualitative overviews of search performance and learning. Trajectory analyses offer detailed information on the time course of search, the accuracies of individual movements and movement components, and the overall character and potential strategic underpinnings of search paths. In the following discussion, we examine how repetition influenced the directed (i.e., toward the target) rate of change of mouse movements during search, yielding a portrait of how quickly searchers were able to orient to the target and how this orienting shifted from random search to accurate ballistic movements over the course of learning.

We used the following resampling method to normalize time so that the aggregate behavioral trends could be estimated. For each trial, the variables of interest were resampled at 40 time points,2 sampled evenly from [0, RT], starting at 0 and incrementing in intervals of (RT/39), such that the final time point coincided with the trial’s RT. At each time point T, we obtained the weighted average of the variable of interest V(T) as:
$$ V(T)=\frac{\sum_{i=1}^{n} N(0,k)\left(t_i-T\right)\, v_i}{\sum_{i=1}^{n} N(0,k)\left(t_i-T\right)}, $$
with N(0, k) indicating the normal probability density function with mean 0 and standard deviation k equal to the sampling interval for that trial (RT/39), and v_i indicating the variable of interest at sample i.
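In code, this weighted resampling might be implemented as follows (a sketch rather than the authors' implementation; function and argument names are ours). Because the same Gaussian kernel appears in the numerator and denominator, its normalizing constant cancels, so an unnormalized kernel suffices.

```python
import numpy as np

def resample_trial(sample_times, values, rt, n_points=40):
    """Resample one trial's time series onto n_points evenly spaced time
    points spanning [0, rt], using Gaussian weights whose standard
    deviation k equals the sampling interval rt / (n_points - 1)."""
    grid = np.linspace(0.0, rt, n_points)        # T = 0, rt/39, ..., rt
    k = rt / (n_points - 1)                      # kernel SD for this trial
    t = np.asarray(sample_times, dtype=float)
    v = np.asarray(values, dtype=float)
    resampled = np.empty(n_points)
    for j, T in enumerate(grid):
        w = np.exp(-0.5 * ((t - T) / k) ** 2)     # unnormalized N(0, k) weights
        resampled[j] = np.sum(w * v) / np.sum(w)  # weighted average V(T)
    return grid, resampled
```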
We first examined the averaged time series of the directed (toward the target) rate of change of mouse movement during search (Fig. 4). These plots were computed by first determining for each sample the projection of the instantaneous movement vector onto the instantaneous vector toward the target, and then resampling as described above. The resulting curves show, over time, how quickly the mouse was moving away from or toward the search target. These time series were computed for each repetition, for both the open (Fig. 4, panel A) and partitioned (Fig. 4, panel B) conditions. Qualitatively, we can see that both the open and partitioned conditions show the same general pattern of behavior: As the number of exposures to a target increases, search takes less time and is marked by increasingly early and rapid movements directly toward the target. We can also note that in the earliest repetitions a moderate negative deflection is present, indicating the frequent occurrence of initial movements away from the target. To evaluate these results quantitatively, we extracted the peak sample from each participant’s average trace and compared the peak latency and peak amplitude across repetitions and partition conditions.
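The per-sample computation of this directed rate of change can be sketched as follows (our own code, with hypothetical names): each instantaneous velocity vector is projected onto the unit vector pointing from the current mouse position toward the target.

```python
import numpy as np

def directed_rate_of_change(positions, times, target):
    """positions: (n, 2) array of mouse samples in pixels; times: (n,) array
    of sample times in seconds; target: (x, y) of the target item.
    Returns, for each successive pair of samples, the component of the
    velocity along the direction to the target (pixels per second);
    positive values indicate movement toward the target."""
    pos = np.asarray(positions, dtype=float)
    t = np.asarray(times, dtype=float)
    velocity = np.diff(pos, axis=0) / np.diff(t)[:, None]
    to_target = np.asarray(target, dtype=float) - pos[:-1]
    unit = to_target / np.linalg.norm(to_target, axis=1, keepdims=True)
    return np.sum(velocity * unit, axis=1)   # per-sample dot product
```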
Fig. 4

Time series for the directed rates of change of mouse position relative to the target, in pixels per second. Positive values indicate movement toward the target, and negative values indicate movement away from the target. Time series are plotted across repetitions (pale to dark traces) for the open (a) and partitioned (b) conditions. Difference scores (partitioned minus open) are plotted for the latency (c) and amplitude (d) of the peak movement speed. Error bars indicate one standard error of the mean

For these analyses, we omitted the highly variable first repetition (latency difference: M = –317, SD = 1,761; peak difference: M = 2.05, SD = 79.7), in which no memory was expected (although some incidental memory might have been present even in the first repetition—i.e., for later targets observed while searching for earlier targets). We focused instead on later repetitions, where the dominant behavior would be expected to be memory-driven. For peak latencies (Fig. 4, panel C), we found only a main effect of repetition, F(3, 93) = 155.2, MSE = 109,652.157, p < .001, with earlier peaks as repetitions increased. There was no significant difference between the open and partitioned conditions, F(1, 31) < 1, p = .440, and no interaction, F < 1, p = .835. For peak amplitudes (Fig. 4, panel D), we found a main effect of repetition, with the amplitude increasing with repetitions, F(3, 93) = 102.7, MSE = 11,929.831, p < .001; a significant effect of partition, with larger amplitudes in the partitioned than in the open condition, F(1, 31) = 4.21, MSE = 45,321.709, p < .05; and a significant Partition × Repetition interaction, F(3, 93) = 4.62, MSE = 11,270.565, p < .005. This interaction was followed up with paired-samples t tests at each repetition to compare the partitioned and open conditions. We found no difference for Repetitions 2, t(31) = –0.187, p = .853, and 3, t(31) = 0.592, p = .558, but significantly higher amplitudes for the partitioned than for the open conditions at Repetitions 4, t(31) = 3.657, p < .001, and 5, t(31) = 2.424, p < .05.

Search strategy

We next examined whether partitioning the display influenced the structure of search, regardless of performance. If search is influenced by the partitions, we would expect that searchers should transition more often between items within a partition than between partitions. We tested this prediction by examining the transition probabilities for the experimental partition set (the set actually displayed), as compared to the transition probabilities for a control partition set (obtained by mirroring the layout of the experimental partitions). Note that the control set was not displayed at any time—we used this strictly as a control case for examining the transition rates. We evaluated these transition probabilities for both the open and the partitioned conditions, with the expectation that transitions within a partition should be equivalent for the experimental and control partitions in the open condition (since no visual markers differentiated these regions), but that within-partition transitions should be amplified in the experimental relative to the control partitions for the partitioned condition, reflecting a bias toward segmenting search episodes by partition, rather than searching the entire display indiscriminately.

For each sample, we determined the item, if any, on which that sample fell, through strict collision with the rectangle where the item was displayed. Samples falling outside any item were given a null coding. Each item was associated with a given experimental partition and with a control partition. Transitions were identified by finding successive samples (ignoring null-coded samples) falling on different items. The transition was recorded along with its classification as being either within a partition or between partitions for both the experimental and control partition sets.
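The transition scoring can be sketched as follows (our own code, with hypothetical names): null-coded samples are dropped, successive samples falling on different items define transitions, and each transition is classified as within- or between-partition under both the experimental and the mirrored control layouts.

```python
def within_partition_proportions(item_sequence, experimental, control):
    """item_sequence gives the item index under each mouse sample, with None
    for samples that fell on no item. `experimental` and `control` map each
    item to its partition in the displayed and mirrored layouts. Returns the
    proportion of item-to-item transitions that stayed within a partition,
    for each partition set (or None if the trial produced no transitions)."""
    visited = [i for i in item_sequence if i is not None]          # drop nulls
    transitions = [(a, b) for a, b in zip(visited, visited[1:]) if a != b]
    if not transitions:
        return None
    proportions = {}
    for name, mapping in (("experimental", experimental), ("control", control)):
        within = sum(mapping[a] == mapping[b] for a, b in transitions)
        proportions[name] = within / len(transitions)
    return proportions
```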

The proportions of transitions within a partition are plotted in Fig. 5. The data were tested with a Partition (open, partitioned) × Set (experimental, control) × Repetition (R1, R2, R3, R4, R5) repeated measures ANOVA. All but one effect [Partition × Repetition: F(4, 124) < 1, p = .625] was significant in the omnibus analysis, including the three-way Partition × Set × Repetition interaction, F(4, 124) = 6.42, MSE = .001, p < .001. We resolved this interaction by conducting a separate Set × Repetition ANOVA for each partition condition. For the open condition, we found no significant effects (largest F = 1.50, p = .206). For the partitioned condition, we found higher rates for the experimental than for the control condition, F(1, 31) = 357.1, MSE = .002, p < .001; lower rates overall as repetitions increased, F(4, 124) = 4.01, MSE = .002, p < .005; and a significant interaction, F(4, 124) = 15.5, MSE = .001, p < .001. Paired-samples t tests comparing the sets at each repetition were all significant, but with decreasing magnitudes as repetitions increased [R1, t(31) = 32.970, p < .001; R2, t(31) = 16.986, p < .001; R3, t(31) = 11.809, p < .001; R4, t(31) = 5.793, p < .001; R5, t(31) = 7.100, p < .001]. The overall decline reflects a reduction in the number of transitions for later repetitions, indicating less searching, per se, and more directed trajectories, potentially cutting across partitions indiscriminately.
Fig. 5

Proportions of item-to-item transitions in search paths in which both items fell within the same partition. Partitions were defined as either experimental (the actual divisions present in the partitioned condition; filled square markers) or control (mirror image of the experimental partitions; empty circle markers). Plotted across repetitions, for the open (solid lines) and partitioned (dashed lines) conditions. Error bars indicate one standard error of the mean

Discussion

The present experiment revealed a number of effects of display partitioning on the performance of search and on explicit memory for search target locations. Evaluating the item–item transitions during search, we found that partitions strongly influenced the trajectory of search, such that searchers were more likely to move from one item to another within the same partition. This systematicity may have facilitated exhaustive search, reducing the demands on memory for which items had already been inspected. We also found that, in later repetitions, searchers in the partitioned condition moved toward the target with an increased peak speed, although the latency of this movement was not significantly altered by the partitions. This increase in movement speed may explain the modest overall RT difference. The increased peak speed in conjunction with an absence of latency differences suggests that retrieval time is essentially fixed, but that the accuracy of either the representation or the guidance may be improved for partitioned as compared to open displays, leading to a more rapid orienting movement. In terms of explicit item location memory, we found a clear effect of partition, with explicit memory being more accurate for partitioned displays, and with these recalled locations being generated more quickly. Notably, the reduction in errors between conditions occurred preferentially for “near-misses”—suggesting that instead of increasing the number of target locations encoded, partitions facilitate a shift from approximate location memory to precise memory. In the case of open displays, memory is sufficient to localize a target within a small cluster of adjacent positions, but more often it fails to pinpoint the exact position.

This study occupies a unique middle territory between visual-search and spatial-memory paradigms. A variety of studies have explored the effects of different configurational factors on spatial memory. When items are presented in a spatial arrangement, the sequence of items generated during free recall tends to cluster on the basis of item proximity (Hirtle & Jonides, 1985; McNamara, 1992; however, this effect may depend closely on the temporal order of item presentation: McNamara, Halpin, & Hardy, 1992; see also Tversky, 1991). Of more direct relevance to the present results, when explicit groupings are formed (e.g., by color, shape, or boundaries), relational memory is often improved for within-group as compared to between-group pairs (Hommel, Gehrke, & Knuf, 2000; McNamara, 1986). With respect to boundaries in particular, there is evidence that children may overestimate distances across boundaries (Cohen, Baldwin, & Sherman, 1978; Kosslyn, Pick, & Fariello, 1974), and in adults boundaries may facilitate the formation of hierarchical representations in memory (Stevens & Coupe, 1978).

Although considerable attention has been given to spatial memory, there are some hurdles to overcome when linking these results to the effects in visual search. Most studies of spatial memory have focused on short-term memory for position sequences, or else on item–item priming or relative position judgments. In other words, when absolute positional recall is measured, this is only for short-term memory of ordered sequences of three or four targets; when larger and unsequenced sets of items have been presented, the measures have mostly focused on relative bearing and relative distance judgments.

There are reasons to suggest that search may provide a more ecological window on spatial memory for object positions. Routine naturalistic search involves (1) the precise localization of large numbers of objects, (2) target sequences generally unrelated to the order of initial exposure, and (3) a gradual buildup of memory through repeated interactions. There is even evidence that the act of searching for an object may confer unique advantages for spatial memory—in real scenes, a target that has been searched for is remembered better than either an item viewed incidentally or an item viewed in the course of an explicit memorization task (Võ & Wolfe, 2012). On the other hand, the search literature itself has for the most part had surprisingly little to say regarding spatial memory. The most popular models of visual search (Itti & Koch, 2000, 2001; Pomplun, Reingold, & Shen, 2003; Wolfe, 1994, 2007) have restricted their attention to bottom-up featural guidance, with some acknowledgement of top-down biases (e.g., from context and expectancies). These models provide exceptionally good accounts of search when it is driven exclusively by the visual properties of an array, but behaviors arising from ongoing search through relatively stable environments remain beyond their scope. Although the importance of memory at multiple spatial and temporal scales is understood (see, e.g., Shore & Klein, 2000, for a review), and although some models do include memory terms at the within-trial level (e.g., Guided Search 4.0: Wolfe, 2007), to date no well-established models of search have incorporated the effects of repeated exposure and the commensurate buildup of spatial memory.

The data here add to a growing body of work addressing the factors feeding into the nebulous “top-down” category incorporated in models of human search performance. To date, top-down considerations have primarily involved general semantic knowledge and related expectancies for particular objects in particular settings (e.g., Chen & Zelinsky, 2006; Eckstein, Drescher, & Shimozaki, 2006; Ehinger, Hidalgo-Sotelo, Torralba, & Oliva, 2009; Henderson, 2003; Navalpakkam & Itti, 2002, 2005; Neider & Zelinsky, 2006; Torralba et al., 2006; Zelinsky et al., 1997), or otherwise, memory developed over the course of repeated presentations (Chun & Jiang, 1998; Jiang & Wagner, 2004; Kunar, Flusberg, & Wolfe, 2008; Olson & Chun, 2002; Solman & Kingstone, 2014; Solman & Smilek, 2010, 2012; Võ & Wolfe, 2012, 2013; Wolfe et al., 2000). Two additional sources of top-down guidance are addressed in the present research: (1) strategic biases in scanpath organization, and (2) semantic-independent configurational aspects of the search display. The present research has confirmed that arbitrary structure (i.e., partitioning) encourages systematicity in scanpaths (De Lillo, Kirby, & James, 2014; Gilchrist & Harvey, 2006; Hooge & Erkelens, 1996; Solman & Kingstone, 2015), and further has demonstrated that arbitrary structure leads to improved memory for target locations.

Several possible mechanisms may underlie the observed memory improvement. First, we note that the more regularized scanpaths during search through partitioned displays may have facilitated accurate spatial encoding by allowing observers to avoid reinspections or gaps during search. Several studies of random search have also shown that paths are adapted to regularities in the arrangement of display items (e.g., clusters: De Lillo, Kirby, & James, 2014; grids: Gilchrist & Harvey, 2006; or circles: Hooge & Erkelens, 1996). The observation that scanpaths are also regularized by partitions suggests that this is a reasonable mechanism for the improvements observed by Nakashima and Yokosawa (2013) in random, perceptually driven search.

It is also possible that both the scanpath effects and the memory improvement may be traced to a common support from the spatial landmarks provided by the boundaries (e.g., corners or edges: Foo et al., 2005; or distinctive context: Cherry & Park, 1993). These background features could provide reference points for spatial memory, both during encoding and during recall. Indeed, in studies of recall for sequences of locations, memory span is increased when the locations are regularly arrayed, symmetrical, or form continuous, nonintersecting paths (Kemps, 1999, 2001), and when the structure of the sequence conforms to the structure of the locations (De Lillo, 2004; De Lillo, Kirby, & James, 2014). In this view, global context in a display serves as an anchor for location memory, facilitating guidance and potentially helping to diversify and separate item representations. If we view group membership as an additional feature for each item, then both encoding and guidance ought to be enhanced for grouped items. Partitions, then, offer an extremely flexible grouping signal—by creating a set membership property independent of lower-level grouping properties, like proximity or color.

A related possibility arises from observations of coarse-to-fine visual orienting (Rao et al., 2002; Zelinsky et al., 1997) and evidence for hierarchical or semihierarchical encoding in spatial memory (De Lillo, 2004; McNamara, 1986, 1992; Stevens & Coupe, 1978; Tversky, 1991). In particular, although it may be difficult to recall a single, precise coordinate in the full display, it may be much easier to encode two smaller pieces of information about each target—that is, the particular partition and the location within that smaller region. Alternatively, this coarse-to-fine encoding could instead emerge over time, so that each target is associated with only a single piece of location memory, but the spatial resolution of this memory improves over time. In this case, partitions in the display might provide a convenient scaffold for early coarse memory, with this advantage facilitating subsequent searches and more detailed encodings.

Conclusion

Despite the complexities of naturalistic search—a torrent of sensory data, effectively boundless environments, and extremely large set sizes—humans demonstrate a remarkable facility for locating the objects they need during routine activity. This facility likely depends on the effective combination of visual ability with memory, each supporting the other, when necessary. Here we have shown that the environment itself may influence this interplay. Whereas previous work has established that when the environment offers semantic cues to facilitate prediction as a guiding factor in search, episodic target memory is reduced (Võ & Wolfe, 2012, 2013), we demonstrated the complementary result—that when the environment offers structural supports to facilitate spatial encoding, episodic target memory is increased.

Footnotes

  1.

    Of course, under naturalistic conditions, even search without episodic memory is likely to be supported to some extent by more general semantic memory—that is, knowledge about where a given class of object is likely to be, as distinct from knowledge of where a particular instance has been directly observed. For the present purposes, we group this particular form of memory-guided search with random search, because both processes are marked by the need for exploration, in contrast to directed orienting in the case of episodic memory.

  2.

    The following analyses are largely insensitive to the number of time points resampled, provided that the sampling was not excessively coarse. In general, resampling rates should be chosen so that the resampling errors are smaller than the variability in the supporting data.

Notes

Author note

This work was supported by the Natural Sciences and Engineering Research Council of Canada, Grant No. RGPIN 170077-11.

References

  1. Ballard, D. H., Hayhoe, M. M., & Pelz, J. B. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience, 7, 66–80. doi: 10.1162/jocn.1995.7.1.66
  2. Brodeur, M. B., Dionne-Dostie, E., Montreuil, T., & Lepage, M. (2010). The Bank of Standardized Stimuli (BOSS), a new set of 480 normative photos of objects to be used as visual stimuli in cognitive research. PLoS ONE, 5, e10773. doi: 10.1371/journal.pone.0010773
  3. Bundesen, C., & Pedersen, L. F. (1983). Color segregation and visual search. Perception & Psychophysics, 33, 487–493.
  4. Chen, X., & Zelinsky, G. J. (2006). Real-world visual search is dominated by top-down guidance. Vision Research, 46, 4118–4133. doi: 10.1016/j.visres.2006.08.008
  5. Cherry, K. E., & Park, D. C. (1993). Individual difference and contextual variables influence spatial memory in younger and older adults. Psychology and Aging, 8, 517–526. doi: 10.1037/0882-7974.8.4.517
  6. Chun, M. M., & Jiang, Y. (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology, 36, 28–71. doi: 10.1006/cogp.1998.0681
  7. Cohen, R., Baldwin, L. M., & Sherman, R. C. (1978). Cognitive maps of a naturalistic setting. Child Development, 49, 1216–1218.
  8. De Lillo, C. (2004). Imposing structure on a Corsi-type task: Evidence for hierarchical organisation based on spatial proximity in serial-spatial memory. Brain and Cognition, 55, 415–426. doi: 10.1016/j.bandc.2004.02.071
  9. De Lillo, C., Kirby, M., & James, F. C. (2014). Spatial working memory in immersive virtual reality foraging: Path organization, traveling distance and search efficiency in humans (Homo sapiens). American Journal of Primatology, 76, 436–446.
  10. Eckstein, M. P., Drescher, B. A., & Shimozaki, S. S. (2006). Attentional cues in real scenes, saccadic targeting, and Bayesian priors. Psychological Science, 17, 973–980.
  11. Ehinger, K. A., Hidalgo-Sotelo, B., Torralba, A., & Oliva, A. (2009). Modelling search for people in 900 scenes: A combined source model of eye guidance. Visual Cognition, 17, 945–978. doi: 10.1080/13506280902834720
  12. Farmer, E. W., & Taylor, R. M. (1980). Visual search through color displays: Effects of target-background similarity and background uniformity. Perception & Psychophysics, 27, 267–272.
  13. Foo, P., Warren, W. H., Duchon, A., & Tarr, M. J. (2005). Do humans integrate routes into a cognitive map? Map- versus landmark-based navigation of novel shortcuts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 195–215. doi: 10.1037/0278-7393.31.2.195
  14. Gilchrist, I. D., & Harvey, M. (2006). Evidence for a systematic component within scan paths in visual search. Visual Cognition, 14, 704–715. doi: 10.1080/13506280500193719
  15. Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7, 498–504.
  16. Hirtle, S. C., & Jonides, J. (1985). Evidence of hierarchies in cognitive maps. Memory & Cognition, 13, 208–217. doi: 10.3758/BF03197683
  17. Hollingworth, A. (2012). Guidance of visual search by memory and knowledge. In M. D. Dodd & J. H. Flowers (Eds.), The influence of attention, learning, and motivation on visual search: Nebraska Symposium on Motivation (pp. 63–89). New York, NY: Springer Science.
  18. Hommel, B., Gehrke, J., & Knuf, L. (2000). Hierarchical coding in the perception and memory of spatial layouts. Psychological Research, 64, 1–10.
  19. Hooge, I. T. C., & Erkelens, C. J. (1996). Control of fixation duration in a simple search task. Perception & Psychophysics, 58, 969–976. doi: 10.3758/BF03206825
  20. Humphreys, G. W., Quinlan, P. T., & Riddoch, M. J. (1989). Grouping processes in visual search: Effects with single- and combined-feature targets. Journal of Experimental Psychology: General, 118, 258–279. doi: 10.1037/0096-3445.118.3.258
  21. Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506. doi: 10.1016/S0042-6989(99)00163-7
  22. Itti, L., & Koch, C. (2001). Computational modeling of visual attention. Nature Reviews Neuroscience, 2, 194–203. doi: 10.1038/35058500
  23. Jiang, Y., & Wagner, L. C. (2004). What is learned in spatial contextual cueing—Configuration or individual locations? Perception & Psychophysics, 66, 454–463. doi: 10.3758/BF03194893
  24. Kemps, E. (1999). Effects of complexity on visuo-spatial working memory. European Journal of Cognitive Psychology, 11, 335–356.
  25. Kemps, E. (2001). Complexity effects in visuo-spatial working memory: Implications for the role of long term memory. Memory, 9, 13–27.
  26. Kosslyn, S. M., Pick, H. L., & Fariello, G. R. (1974). Cognitive maps in children and men. Child Development, 45, 707–716.
  27. Kunar, M. A., Flusberg, S., & Wolfe, J. M. (2008). The role of memory and restricted context in repeated visual search. Perception & Psychophysics, 70, 314–328. doi: 10.3758/PP.70.2.314
  28. McNamara, T. P. (1986). Memory representations of spatial relations. Cognitive Psychology, 18, 87–121.
  29. McNamara, T. P. (1992). Spatial representation. Geoforum, 23, 139–150.
  30. McNamara, T. P., Halpin, J. A., & Hardy, J. K. (1992). Spatial and temporal contributions to the structure of spatial memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 555–564. doi: 10.1037/0278-7393.18.3.555
  31. Nakashima, R., & Yokosawa, K. (2013). Visual search in divided areas: Dividers initially interfere with and later facilitate visual search. Attention, Perception, & Psychophysics, 75, 299–307.
  32. Navalpakkam, V., & Itti, L. (2002). A goal oriented attention guidance model. In H. H. Bülthoff, C. Wallraven, S.-W. Lee, & T. A. Poggio (Eds.), Biologically motivated computer vision 2002 (Lecture Notes in Computer Science, Vol. 2525, pp. 453–461). Berlin, Germany: Springer.
  33. Navalpakkam, V., & Itti, L. (2005). Modeling the influence of task on attention. Vision Research, 45, 205–231. doi: 10.1016/j.visres.2004.07.042
  34. Neider, M. B., & Zelinsky, G. J. (2006). Scene context guides eye movements during visual search. Vision Research, 46, 614–621. doi: 10.1016/j.visres.2005.08.025
  35. Olson, I. R., & Chun, M. M. (2002). Perceptual constraints on implicit learning of spatial context. Visual Cognition, 9, 273–302. doi: 10.1080/13506280042000162
  36. Pomplun, M., Reingold, E. M., & Shen, J. (2003). Area activation: A computational model of saccadic selectivity in visual search. Cognitive Science, 27, 299–312. doi: 10.1016/S0364-0213(03)00003-X
  37. Rao, R. P. N., Zelinsky, G. J., Hayhoe, M. M., & Ballard, D. H. (2002). Eye movements in iconic visual search. Vision Research, 42, 1447–1463. doi: 10.1016/S0042-6989(02)00040-8
  38. Shore, D. I., & Klein, R. M. (2000). On the manifestations of memory in visual search. Spatial Vision, 14, 59–75.
  39. Smith, A. D., Hood, B. M., & Gilchrist, I. D. (2008). Visual search and foraging compared in a large-scale search task. Cognitive Processing, 9, 121–126. doi: 10.1007/s10339-007-0200-0
  40. Solman, G. J. F., & Kingstone, A. (2014). Balancing energetic and cognitive resources: Memory use during search depends on the orienting effector. Cognition, 132, 443–454.
  41. Solman, G. J. F., & Kingstone, A. (2015). Endogenous strategy in exploration. Journal of Experimental Psychology: Human Perception and Performance, 41, 1634–1649.
  42. Solman, G. J. F., & Smilek, D. (2010). Item-specific memory in visual search. Vision Research, 50, 2430–2438. doi: 10.1016/j.visres.2010.09.008
  43. Solman, G. J. F., & Smilek, D. (2012). Memory benefits during visual search depend on difficulty. Journal of Cognitive Psychology, 24, 689–702.
  44. Stevens, A., & Coupe, P. (1978). Distortions in judged spatial relations. Cognitive Psychology, 10, 422–437.
  45. Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features on object search. Psychological Review, 113, 766–786. doi: 10.1037/0033-295X.113.4.766
  46. Treisman, A. (1982). Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance, 8, 194–214. doi: 10.1037/0096-1523.8.2.194
  47. Tversky, B. (1991). Spatial mental models. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 27, pp. 109–145). Orlando, FL: Academic Press.
  48. Võ, M. L.-H., & Wolfe, J. M. (2012). When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. Journal of Experimental Psychology: Human Perception and Performance, 38, 23–41. doi: 10.1037/a0024147
  49. Võ, M. L.-H., & Wolfe, J. M. (2013). The interplay of episodic and semantic memory in guiding repeated search in scenes. Cognition, 126, 198–212. doi: 10.1016/j.cognition.2012.09.017
  50. Williams, C. C., Pollatsek, A., & Reichle, E. D. (2014). Examining eye movements in visual search through clusters of objects in a circular array. Journal of Cognitive Psychology, 26, 1–14. doi: 10.1080/20445911.2013.865630
  51. Wolfe, J. M. (2007). Guided Search 4.0: Current progress with a model of visual search. In W. D. Gray (Ed.), Integrated models of cognitive systems (pp. 99–119). New York, NY: Oxford University Press.
  52. Wolfe, J. M., Klempen, N., & Dahlen, K. (2000). Postattentive vision. Journal of Experimental Psychology: Human Perception and Performance, 26, 693–716. doi: 10.1037/0096-1523.26.2.693
  53. Zelinsky, G. J., Rao, R. P. N., Hayhoe, M. M., & Ballard, D. H. (1997). Eye movements reveal the spatiotemporal dynamics of visual search. Psychological Science, 8, 448–453. doi: 10.1111/j.1467-9280.1997.tb00459.x

Copyright information

© The Psychonomic Society, Inc. 2016

Authors and Affiliations

  1. University of Hawai’i at Mānoa, Honolulu, USA
  2. University of British Columbia, Vancouver, Canada
