# RETRACTED ARTICLE: The plateau in mnemonic resolution across large set sizes indicates discrete resource limits in visual working memory

## Abstract

The precision of visual working memory (WM) representations declines monotonically with increasing storage load. Two distinct models of WM capacity predict different shapes for this precision-by-set-size function. Flexible-resource models, which assert a continuous allocation of resources across an unlimited number of items, predict a monotonic decline in precision across a large range of set sizes. Conversely, discrete-resource models, which assert a relatively small item limit for WM storage, predict that precision will plateau once this item limit is exceeded. Recent work has demonstrated such a plateau in mnemonic precision. Moreover, the set size at which mnemonic precision reached asymptote has been strongly predicted by estimated item limits in WM. In the present work, we extend this evidence in three ways. First, we show that this empirical pattern generalizes beyond orientation memory to color memory. Second, we rule out encoding limits as the source of discrete limits by demonstrating equivalent performance across simultaneous and sequential presentations of the memoranda. Finally, we demonstrate that the analytic approach commonly used to estimate precision yields flawed parameter estimates when the range of stimulus space is narrowed (e.g., a 180º rather than a 360º orientation space) and typical numbers of observations are collected. Such errors in parameter estimation reconcile an apparent conflict between our findings and others based on different stimuli. These findings provide further support for discrete-resource models of WM capacity.

## Keywords

Visual working memory · Short-term memory

Working memory (WM) is a limited-capacity system that enables the active storage of representations in an online state. Two broad classes of models have been proposed to characterize the nature of capacity limits in WM. Discrete-resource models assert a fixed item limit for storage in working memory, such that once a relatively small number of items have been stored, no further information can be stored from additional items (Cowan, 2001; Zhang & Luck, 2008). By contrast, flexible-resource models suggest that WM can store an unlimited number of items, although the fidelity of each stored representation declines as the number of stored items increases (e.g., Bays & Husain, 2008; Wilken & Ma, 2004).

Multiple studies have demonstrated an inverse relationship between the number of stored items and the precision of the stored representations (Anderson, Vogel, & Awh, 2011; Barton, Ester, & Awh, 2009; Bays & Husain, 2008; Zhang & Luck, 2008). Although flexible-resource models naturally explain this empirical pattern, it can also be reconciled with discrete-resource models that allow for variations in mnemonic precision within subspan displays. For example, Barton et al. proposed a hybrid model in which a discrete number of “slots” constrain the allocation of a separate resource that determines mnemonic precision. Critically, this version of the discrete-resource model predicts that declines in mnemonic precision should cease once the number of to-be-stored items exceeds the putative item limit, because such supraspan displays do not lead to the storage of additional items. By contrast, flexible-resource models predict that monotonic declines in precision should be observable across much larger set sizes. Zhang and Luck disconfirmed the latter prediction by demonstrating equivalent mnemonic precision for set sizes that exceeded estimated item limits. Extending this observation, Anderson et al. showed that individual differences in the number of items that could be stored—estimated using both behavioral performance and storage-related neural activity—strongly predicted the set size at which WM precision reached a plateau for each observer. Thus, the resolution-by-set-size function in visual WM strengthens the case for discrete item limits in WM storage.

The goal of the present work was to replicate and expand the boundary conditions of this basic empirical pattern and to address some key alternative accounts of the Anderson et al. (2011) findings. In Experiment 1, we demonstrate that the original results with orientation are replicated with color memoranda. In Experiment 2, we rule out limitations in the encoding of simultaneous stimulus displays as the source of the apparent item limits in memory storage. To test whether encoding is a limiting factor, we examined the precision-by-set-size function in both simultaneous and sequentially presented displays. The rationale was that if the plateau in mnemonic precision observed by Anderson et al. was due to limits in the number of stimuli that could be encoded simultaneously, this plateau should shift to larger set sizes when the stimuli were presented sequentially and the simultaneous encoding demands were cut in half. As will be shown, the shapes of the precision-by-set-size functions in the simultaneous and sequential conditions were identical, thereby ruling out encoding limits as the source of the plateau in the precision-by-set-size function. Finally, in Experiment 3 we used a combination of behavioral data and simulations to show that the approach that we have used to estimate item limits and precision (Zhang & Luck, 2008) leads to systematic errors in parameter estimates under specific stimulus conditions. Specifically, when typical numbers of observations are collected, constricting the range of stimulus values (from 360º to 180º) leads to inaccurate parameter estimates and changes the shape of the precision-by-set-size function from a bilinear to a logarithmic shape. This is a crucial aspect of the results, because the bilinear shape implies a discrete item limit, while the logarithmic shape implies a continuous resource that is divided across much larger numbers of items. 
Crucially, our finding that the logarithmic function is an artifact of flawed parameter estimates provides a clear explanation of the discrepancy between our findings and those from studies using 180º stimuli (Gorgoraptis, Catalao, Bays, & Husain, 2011; Ma & Chou, 2010; Rademaker & Tong, 2010). Thus, the present work strengthens the case for discrete resource limits in visual WM.

## Experiment 1

To test whether the previously reported relationship between asymptotes in precision and WM capacity generalizes across other stimulus dimensions, we replicated Anderson et al. (2011) with a color recall task (Wilken & Ma, 2004; Zhang & Luck, 2008).

### Method

### Subjects

A total of 22 undergraduates at the University of Oregon completed the experiment for course credit. All had normal or corrected-to-normal visual acuity and gave informed consent according to procedures approved by the University of Oregon institutional review board.

### Stimulus displays

The stimuli were generated in MATLAB using the Psychophysics Toolbox extension (Brainard, 1997; Pelli, 1997) and were presented on a 17-in. flat CRT computer screen (refresh rate of 120 Hz). Viewing distances were approximately 77 cm.

The memoranda were colored discs whose colors were drawn from CIE *L* × *a* × *b* color space, centered at the color (*L* = 70, *a* = 20, *b* = 38); see Fig. 1. All objects had radii of 0.93º of visual angle.

In Experiment 1, objects were presented within a square region subtending 10.7º × 10.7º of visual angle, and subjects fixated on a central fixation point that subtended 0.37º × 0.37º. The objects were positioned randomly, with the constraint that no two objects could fall within 1.43º of one another, resulting in a between-object separation of at least two-thirds of an object. At the end of each trial, subjects were cued to recall the color of a single item. A specific object was probed by outlining its position with a thick, white ring with a radius of 0.93º and a rim thickness of 0.37º (see Fig. 1).

### Procedures

Experiment 1 took approximately 1.5 h to complete and consisted of 12 blocks of 60 trials each. The events in a single trial went as follows. First, subjects saw a central fixation point, followed by the presentation of one, two, three, four, five, or six colored discs for 200 ms. A 1,000-ms delay period followed the offset of the discs. Following the delay, a probe ring appeared in the position of a randomly selected disc. This probe was presented for 500 ms, and then a color wheel was presented. This procedure was implemented to reduce any masking caused by the simultaneous presentation of the memory probe and color wheel. The color wheel, which had a radius of 9º and a thickness of 0.74º, consisted of 180 color values that were evenly distributed along the perimeter of the wheel in CIE *L* × *a* × *b* color space and centered at the color (*L* = 70, *a* = 20, *b* = 38). Subjects clicked on the perimeter of this color wheel in an unspeeded response to indicate the color of the sample item that had appeared in the same position as the probe. Each response was followed by a 750-ms blank intertrial interval.
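The original displays were generated in MATLAB with the Psychophysics Toolbox; as an illustrative sketch only, the 180-color wheel described above can be constructed in Python. The number of colors (180) and the center (*L* = 70, *a* = 20, *b* = 38) come from the text, but the radius of the circle in the *a*/*b* plane is our assumption, not a value reported here.

```python
import math

def color_wheel_lab(n=180, L=70.0, a0=20.0, b0=38.0, radius=60.0):
    """n colors evenly spaced on a circle in CIE L*a*b*, centered at (L, a0, b0).

    The radius of the circle in the a*b* plane is a placeholder assumption;
    the article does not report it.
    """
    return [(L,
             a0 + radius * math.cos(2.0 * math.pi * i / n),
             b0 + radius * math.sin(2.0 * math.pi * i / n))
            for i in range(n)]
```

Because the colors lie on a circle, response error can be treated as an angle on that circle, which is what the modeling below exploits.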

### Modeling response error distributions

Offset values were defined by the difference between a subject’s response and the angle of the correct color of the probed sample stimulus (ranging from –180º to 180º). Frequency histograms of response offsets for each set size were analyzed to determine the probability of storage and mnemonic precision at each set size.
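The offset computation above amounts to a signed angular difference wrapped into (–180º, 180º]. A minimal Python sketch (the original analyses were run in MATLAB):

```python
def response_offset(response_deg, target_deg):
    """Signed angular error between response and target, wrapped into (-180, 180]."""
    diff = (response_deg - target_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff
```

For example, a response of 350º to a target of 10º yields an offset of –20º, not 340º, so small errors near the wrap point are treated correctly.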

Maximum-likelihood estimation was used to fit the distribution of the response offsets. Three parameters were estimated: *μ*, the mean of a von Mises distribution corresponding to trials on which the subject had selected the target location; *SD*, the width of the same von Mises distribution (used to operationalize mnemonic precision); and *p*(failure), denoted *P* _{f}. The latter parameter corresponds to the height of a uniform distribution, corresponding to trials on which subjects failed to store the probed item. *P* _{mem} refers simply to the probability that the critical item was stored (1 – *P* _{f}).
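The mixture likelihood being maximized can be sketched as follows. This is a self-contained Python illustration using only the standard library, with the von Mises width expressed as the concentration *κ* rather than the *SD* reported in the text (the two are related, but the conversion is omitted here); the function names are ours, not the authors'.

```python
import math

def bessel_i0(kappa, terms=30):
    # Series expansion of the modified Bessel function I0, adequate for
    # the moderate concentrations typical of these fits.
    return sum((kappa / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def mixture_loglik(offsets_rad, mu, kappa, p_f):
    """Log-likelihood of the uniform + von Mises mixture over (-pi, pi].

    p_f is the failure-to-store (guessing) rate; 1 - p_f corresponds to P_mem.
    """
    vm_norm = 2.0 * math.pi * bessel_i0(kappa)
    ll = 0.0
    for x in offsets_rad:
        von_mises = math.exp(kappa * math.cos(x - mu)) / vm_norm
        ll += math.log(p_f / (2.0 * math.pi) + (1.0 - p_f) * von_mises)
    return ll
```

Maximum-likelihood estimation then searches over (*μ*, *κ*, *p*(failure)) for the parameter triple that maximizes this quantity.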

### Results and discussion

Model fits were compared using an adjusted *R* ^{2} statistic computed on the basis of histograms of the data with 15 bins, each 24º wide. The adjusted *R* ^{2} statistic reflects the proportion of variance explained by a model, weighted by its number of parameters. Models with a greater number of parameters are penalized relative to models with fewer parameters. This statistic ensures a fair comparison between the three-component discrete-resource model and the two-component flexible-resource model. The mixture model representing the predictions of the discrete-resource model (red) was more effective than the flexible-resource model (green) in explaining variance in the response distributions for each set size [discrete-resource *R* ^{2} values: .99 (set size 1 [SS1]), .99 (SS2), .99 (SS3), .99 (SS4), .99 (SS5), .97 (SS6); flexible-resource *R* ^{2} values: .88 (SS1), .87 (SS2), .83 (SS3), .79 (SS4), .81 (SS5), .79 (SS6)]. Moreover, Kolmogorov–Smirnov tests revealed a significant difference between the predicted values of the flexible-resource model and the actual distributions at every set size, whereas the test revealed no difference between the predictions of the discrete-resource model and their fits to each distribution.
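The text does not give the exact parameter penalty used; a standard form of the adjusted *R* ^{2} (an assumption on our part) is:

```python
def adjusted_r2(r2, n_bins, n_params):
    # Standard adjusted R^2: the penalty grows with the number of free
    # parameters, so a 3-parameter model must explain more raw variance
    # than a 2-parameter model to achieve the same adjusted score.
    return 1.0 - (1.0 - r2) * (n_bins - 1) / (n_bins - n_params - 1)
```

With 15 histogram bins, the three-parameter mixture model is penalized more heavily than the two-parameter model, which is the sense in which the comparison is "fair."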

We also assessed whether the discrete- and flexible-resource models could account for the distribution of response errors in the individual subject data. Here again, the discrete-resource model was more effective on average than the flexible-resource model in characterizing the response distributions and in explaining distribution variance for each subject at every set size. Dependent-samples *t* tests revealed a significant advantage for the discrete-resource model in explaining variance (*R* ^{2}) in the distributions for all set sizes (*p* < .001) except set size 1 (*p* = .4). To summarize, at both the group and individual-subject levels, the discrete-resource model was superior to the flexible-resource model in its ability to account for the distributions of response offsets. This replicates previous findings (Anderson et al., 2011; Zhang & Luck, 2008) and suggests that observers could store a subset of the items in the sample array while maintaining no information about the remaining items.

We first examined the probability of storing the probed item (*P* _{mem}) across set sizes (SS1 = .98, SS2 = .93, SS3 = .82, SS4 = .63, SS5 = .49, SS6 = .38), which revealed a significant effect of set size [*F*(5, 16) = 41.25, *p* < .001]. The product of *P* _{mem} and set size provides an estimate of the number of items stored; this estimate rose from set sizes 1–3 and reached a statistical plateau for subsequent set sizes [set size 3 (*M =* 2.46) vs. 4 (*M =* 2.52), *t*(21) = –0.45, *p* = .6; set size 4 vs. 5 (*M =* 2.46), *t*(21) = 0.39, *p* = .69; set size 5 vs. 6 (*M =* 2.30), *t*(21) = 1.2, *p* = .25]. This result notwithstanding, we note that past studies have often documented declining capacity estimates as set sizes move farther past putative item limits. For example, Cusack, Lehmann, Veldsman, and Mitchell (2009) found lower capacity estimates for set size 8 than for set size 4; moreover, they found that the size of the drop from set sizes 4 to 8 was a good predictor of fluid intelligence. Indeed, in our prior work (Anderson et al., 2011), we also observed declining capacity estimates from set sizes 3 to 8. Thus, the stable plateau in capacity estimates in the present experiment is somewhat anomalous when the extant data are considered. Although it may seem that discrete-resource models predict stable capacity limits across large set sizes, this view depends on the strong assumption that the number of items available for report depends only on the total amount of “space” in working memory. However, a growing literature suggests that limits in the number of items stored may instead result from limitations in observers’ ability to filter irrelevant items from relevant ones during encoding (Engle, 2002; Fukuda & Vogel, 2009). This perspective provides a plausible explanation of why capacity estimates may decline at larger set sizes, because it may be harder to select a manageable subset of items to be stored when putative item limits are exceeded.
One possible reason for this is that supraspan displays make it harder for observers to individuate the to-be-stored items from those that exceed the storage capacity (e.g., Ester, Vogel, & Awh, 2012). Clearly, more work is needed to test these hypotheses. For now, we note that discrete-resource models do not entail the prediction that capacity estimates remain the same across large set sizes.
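The capacity estimate used above is simply the product of *P* _{mem} and set size; applied to the reported *P* _{mem} values for Experiment 1, it reproduces the plateau:

```python
def items_stored(p_mem, set_size):
    # Estimated number of items stored: K = P_mem x set size
    return p_mem * set_size

# P_mem values reported for Experiment 1, set sizes 1-6
p_mem = {1: .98, 2: .93, 3: .82, 4: .63, 5: .49, 6: .38}
k_estimates = {n: round(items_stored(p, n), 2) for n, p in p_mem.items()}
```

The estimates rise from set sizes 1 through 3 and then level off near two and a half items, matching the pairwise comparisons reported in the text.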

Recently, Bays, Catalao, and Husain (2009) suggested an alternative explanation of the mixture distributions first reported by Zhang and Luck (2008). Bays et al. (2009) questioned whether the flat component of this mixture distribution was really caused by trials in which no information had been retained about the probed item. Instead, Bays et al. (2009) argued that the flat distribution might be the result of trials on which observers mislocalized the probed object and reported the value of a different item in the display. Because the color for each item varied randomly with respect to every other item, this kind of mislocalization would yield a random (i.e., flat) distribution of response errors relative to the probed item. Indeed, Bays et al. (2009) found that a substantial proportion of responses in their study could be attributed to the erroneous report of nontarget values. To test this possibility in our own data, we calculated the response offset between each response and each of the nontarget values from the same trial. If subjects had consistently reported distractor values during the experiment, this analysis should reveal a central tendency in the response error histogram, indicating that subjects reported nontarget values at greater-than-chance rates. Figure 2 (bottom panel) shows the aggregate distribution of distractor response values for each set size, which reveals no evidence of a central tendency in these error plots, suggesting that observers did not consistently report distractor values in this experiment. Specifically, a swapping analysis in which we determined the proportion of trials on which observers erroneously reported a distractor value (Bays et al., 2009) revealed relatively small proportions of swapping errors across set sizes (SS2 = .01, SS3 = .03, SS4 = .05, SS5 = .07, SS6 = .06).
We note an apparent depression in the error histograms that depict response offsets relative to distractor values, but this drop is not statistically reliable; the frequency of reporting a distractor (i.e., with a distractor offset of 0) is not significantly different than the mean frequency across all distractor offset bins (*p* > .27). To conclude, this analysis suggests that the uniform component of the target distribution is not due to mislocalizations, but rather to a failure to store the probed item.
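The distractor-offset analysis can be sketched as follows; the function name and the binning into 24º histogram bins mirror the target analysis above, but the helper itself is ours, not the authors' code.

```python
def nontarget_offset_histogram(responses, nontargets_per_trial, bin_width=24):
    """Pool signed offsets between each response and every nontarget value
    on the same trial, binned into bin_width-degree bins.

    A central peak (mass near the 0 bin) would indicate swap errors;
    a flat histogram indicates that distractors were not being reported.
    """
    bins = {}
    for resp, nontargets in zip(responses, nontargets_per_trial):
        for nt in nontargets:
            d = (resp - nt) % 360.0
            if d > 180.0:
                d -= 360.0
            b = int(d // bin_width) * bin_width  # left edge of the bin
            bins[b] = bins.get(b, 0) + 1
    return bins
```

Comparing the count in the bin containing zero against the mean count across all bins is the test applied in the text.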

Mnemonic precision (*SD*) declined significantly across the initial set sizes [set size 1 (*M =* 11.6) vs. 2 (*M =* 13.7), *t*(21) = –7.58, *p* < .001; set size 2 vs. 3 (*M =* 16.2), *t*(21) = –2.42, *p* < .05] and then reached an apparent asymptote after set size 3 [set size 3 vs. 4 (*M =* 15.5), *t*(21) = 0.539, *p* = .60; set size 4 vs. 5 (*M =* 17.4), *t*(21) = –1.15, *p* = .26; set size 5 vs. 6 (*M =* 17.9), *t*(21) = –0.367, *p* = .72]. The aggregate precision-by-set-size function was fitted with a bilinear function to calculate the estimated point of asymptote, which was 3.32 items (Fig. 3a; *R* ^{2} = .94, *p* < .01). This is consistent with previous results demonstrating an asymptote in precision at approximately three items (Anderson et al., 2011; Zhang & Luck, 2008).

Following the approach of Anderson et al. (2011), we fitted each individual observer’s precision function with a bilinear function to calculate individual asymptotes in precision. This analytic approach yielded good fits of the data with the bilinear function (average *r* = .59), and the average point of asymptote was 3.20 items. These fits can be compared with those of a logarithmic function, which is the shape of the precision-by-set-size functions predicted by the flexible-resource model (Bays & Husain, 2008). When the same precision functions were fitted with a logarithmic function, we observed a significantly worse fit than when fitting with a bilinear function [average *r*s = .45 for logarithmic, .59 for bilinear; *t*(21) = 2.66, *p* < .05]. Thus, the shapes of precision-by-set-size functions were better fit by the predictions of a discrete-resource model than by those of a flexible-resource model. Moreover, we found a strong relationship between inflections in the bilinear precision function and an independent capacity estimate (*R* ^{2} = .43, *p* < .001; Fig. 3b) obtained from a separate color experiment within the same session. A separate measure of *P* _{mem} was employed to avoid reporting relationships between nonindependent measures of *SD* and *P* _{mem} from the same data set (Brady, Fougnie, & Alvarez, 2011). Indeed, the strength of this relationship is artificially inflated when nonindependent measures are compared, such as when *P* _{mem} at set size 6 is taken from the same data set as the precision-by-set-size function (*R* ^{2} = .72). The true correlation (*R* ^{2} = .43), however, still affirmed the clear prediction of discrete-resource models by showing that putative item limits within each observer robustly predict the shape of the precision-by-set-size function.
We note that this correlation is also inconsistent with a mislocalization explanation of the flat component of the mixture distribution (Bays et al., 2009), because there is no reason why the probability of mislocalizations should correlate with the inflection point of the precision-by-set-size function. Thus, Experiment 1 demonstrates that the basic empirical pattern from Anderson et al. generalizes across both orientation and color memoranda. Working memory for both orientations and colors is subject to discrete item limits.
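The bilinear-versus-logarithmic comparison rests on fitting a two-segment function (a declining limb followed by a plateau) to each observer's precision function. A minimal sketch of such a fit, using a coarse grid search rather than whatever optimizer the authors used:

```python
def bilinear(x, slope, inflection, y0):
    # Rising line up to the inflection point, flat plateau after it
    return y0 + slope * min(x, inflection)

def fit_bilinear(set_sizes, sds, inflections, slopes, intercepts):
    """Least-squares grid search for the best-fitting bilinear parameters.

    The candidate grids are supplied by the caller; a real fit would use a
    continuous optimizer, so this is only a sketch of the model structure.
    """
    best, best_err = None, float("inf")
    for c in inflections:
        for m in slopes:
            for b in intercepts:
                err = sum((bilinear(x, m, c, b) - y) ** 2
                          for x, y in zip(set_sizes, sds))
                if err < best_err:
                    best, best_err = (m, c, b), err
    return best
```

The recovered inflection point is the per-observer "asymptote in precision" that is correlated with the independent capacity estimate.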

## Experiment 2

The purpose of Experiment 2 was to test the alternative hypothesis that asymptotes in precision (Anderson et al., 2011; Zhang & Luck, 2008) are a consequence of encoding limits, rather than storage limits. If observers were unable to encode all of the sample stimuli simultaneously, then apparent item limits in those experiments might be explained without recourse to limits in memory storage per se. To address this hypothesis, we measured mnemonic precision for memoranda that were presented simultaneously (in one display) or sequentially (across two displays). Previous work used longer exposure durations, rather than sequential designs, to address potential encoding limitations in a WM task (Bays, Gorgoraptis, Wee, Marshall, & Husain, 2011). Increasing exposure durations, however, is known to enhance long-term memory for memoranda (e.g., Glanzer & Cunitz, 1966), and this could yield inflated estimates of online storage capacity. In addition, large increases in exposure duration would also be conducive to verbal labeling or other recoding strategies. To avoid such methodological pitfalls, we chose to present the memoranda across two sequential displays, so that the individual display durations would remain constant while encoding time per item would double. If the reported asymptotes in mnemonic precision are a consequence of encoding limits, precision should plateau at a larger set size in the sequential condition.

### Method

### Subjects

A total of 22 undergraduates at the University of Oregon completed the experiment for course credit. All had normal or corrected-to-normal visual acuity and gave informed consent according to the procedures approved by the University of Oregon institutional review board.

### Stimulus displays

The stimuli were generated in MATLAB using the Psychophysics Toolbox extension (Brainard, 1997; Pelli, 1997) and were presented on a 17-in. flat CRT computer screen (refresh rate of 120 Hz). The viewing distances were approximately 77 cm.

In Experiment 2, objects were presented within a square region subtending 10.7º × 10.7º of visual angle, and the subjects fixated on a central fixation point that subtended 0.37º × 0.37º. The objects were positioned randomly with respect to both position and orientation, with the constraint that no objects could fall within 1.43º of one another, resulting in a between-object separation of at least two-thirds of an object. At the end of each trial, subjects were cued to recall the orientation of a single item. A specific object was probed by outlining its position with a thick, black ring with a radius of 0.93º and a rim thickness of 0.37º (see Fig. 4).

### Procedure

Experiment 2 took approximately 1.5 h to complete and consisted of 15 blocks of 64 trials each. The events in a single trial went as follows. First, subjects saw a central fixation point, followed by the presentation of the memory items for 200 ms. In the simultaneous condition, one, two, three, four, six, or eight memory items were presented within the same 200-ms memory array; in the sequential condition, only the even-number set sizes were presented (two, four, six, and eight), and the set sizes were divided into two sequential presentations, such that half of the items were presented on one side of fixation for 200 ms in the first memory array, and the other half of the items were presented on the other side of fixation for 200 ms in the second memory array (Fig. 4). A 1,000-ms delay period followed the offset of the second discs. Following the delay, a probe ring appeared in a randomly selected position. Subjects clicked on the perimeter of this ring in an unspeeded response to indicate the orientation of the sample item that had appeared in the same position. Each response was followed by a 750-ms blank intertrial interval.

### Modeling response error distributions

The modeling procedures in the present experiment were identical to those in Experiment 1.

### Results and discussion

An adjusted *R* ^{2} statistic was used to provide a fair comparison of the three-parameter discrete-resource model with the two-parameter flexible-resource model. The mixture model representing the predictions of the discrete-resource model (red) was more effective than the flexible-resource model (green) in explaining variance in the response distributions for each set size [discrete-resource *R* ^{2} values: >.96 (simultaneous), >.94 (sequential); flexible-resource *R* ^{2} values: >.84 (simultaneous), >.81 (sequential)]. The discrete-resource model was also more effective on average than the flexible-resource model in characterizing the response distributions and in explaining distribution variance for each subject at every set size. Dependent-samples *t* tests revealed significant differences between the two models in explaining variance (*R* ^{2}) in the distributions for all set sizes (simultaneous, *p* < .01; sequential, *p* < .05). Furthermore, there was no difference between the simultaneous and sequential conditions in the variances explained for either the discrete-resource (*p* > .10) or the flexible-resource (*p* > .15) model.

We then estimated *P* _{mem} across set sizes as a function of presentation condition (simultaneous: SS1 = .99, SS2 = .96, SS3 = .85, SS4 = .68, SS6 = .45, SS8 = .35; sequential: SS2 = .93, SS4 = .61, SS6 = .40, SS8 = .29), which revealed a significant effect of condition [*F*(1, 21) = 20.75, *p* < .001] and a significant interaction between set size and condition [*F*(3, 19) = 13.89, *p* < .001], which was driven by the near-ceiling *P* _{mem} values observed in set size 2. Although larger estimates of *P* _{mem} were observed in the simultaneous condition, we note that this result is opposite to what would be predicted if encoding limits had constrained performance in the simultaneous condition. One possible reason for the reduced *P* _{mem} values in the sequential condition was that the total retention interval was slightly longer for the first array presented.

As in Experiment 1, we observed statistically identical capacity estimates across larger set sizes in both simultaneous [set size 4 (*M =* 2.73) vs. 6 (*M =* 2.70): *t*(21) = 0.19, *p* = .86; set size 6 vs. 8 (*M =* 2.77): *t*(21) = 0.42, *p* = .68] and sequential [set size 4 (*M =* 2.44) vs. 6 (*M =* 2.37): *t*(21) = 0.34, *p* = .74; set size 6 vs. 8 (*M =* 2.35): *t*(21) = 0.11, *p* = .91] displays.

We calculated the response offset between each response and each of the nontarget values from the same trial to examine the possibility of swapping as an explanation for a uniform distribution. The aggregate distribution of distractor response values for each set size revealed no evidence of a central tendency in error plots for either the simultaneous (Fig. 5, bottom panel) or sequential (Fig. 6, bottom panel) conditions, suggesting that observers did not consistently report distractor values in this experiment. A swapping analysis in which we determined the proportion of trials on which observers erroneously reported a distractor value (Bays et al., 2009) revealed a relatively small proportion of swapping errors across set sizes for the simultaneous (SS2 = .001, SS4 = .01, SS6 = .01, SS8 = .05) and sequential (SS2 = .001, SS4 = .02, SS6 = .04, SS8 = .02) conditions. Additionally, there was no effect of presentation type on the probability of swapping for any set size (*p* > .17). This analysis replicates and confirms our conclusion that the uniform component of the target distribution is not due to mislocalizations, but rather to a failure to store the probed item. We also demonstrated that mislocalization is not affected by sequentially presented displays.

Mnemonic precision (*SD*) declined significantly across the initial set sizes [set size 1 (*M =* 11.65) vs. 2 (*M =* 15.25): *t*(21) = –8.75, *p* < .001; set size 2 vs. 3 (*M =* 18.73): *t*(21) = –3.80, *p* < .001], and then reached an apparent asymptote after set size 3 [set size 3 vs. 4 (*M =* 20.26): *t*(21) = –1.84, *p* = .08; set size 4 vs. 6 (*M =* 21.16): *t*(21) = –0.88, *p* = .39; set size 6 vs. 8 (*M =* 20.96): *t*(21) = 0.13, *p* = .90]. The aggregate precision-by-set-size function was fitted with a bilinear function to calculate the estimated point of asymptote, which was 3.58 items (Fig. 7a; *r* = .997, *p* < .001). This is consistent with previous results demonstrating an asymptote in precision at approximately three items (Anderson et al., 2011; Zhang & Luck, 2008).

We then tested the hypothesis that asymptotes in precision are the result of encoding limitations by directly comparing precision estimates at each set size between the simultaneous and sequential conditions; if performance in the simultaneous condition were limited by encoding, precision should then be significantly improved in the sequential condition. Because the sequential condition contained only even-numbered set sizes, we compared set sizes 2, 4, 6, and 8 in the simultaneous and sequential conditions. There was no main effect of presentation type (simultaneous vs. sequential) on precision [Fig. 7b; *F*(1, 21) = 0.015, *p* = .90], and there was no presentation-type-by-set-size interaction [*F*(3, 63) = 1.17, *p* = .33]. Thus, encoding limits did not influence precision estimates in the simultaneous condition. More fine-grained analyses of the sequential condition showed that precision declined until a putative item limit was exceeded, and then reached a statistical asymptote [set size 2 (*M =* 16.69) vs. 4 (*M =* 19.12): *t*(21) = –3.72, *p* < .001; set size 4 vs. 6 (*M =* 21.95): *t*(21) = –1.97, *p* = .062; set size 6 vs. 8 (*M =* 20.12): *t*(21) = 1.16, *p* = .26]. We fitted the sequential precision function with a bilinear function (*R* ^{2} = .85, *p* < .05), which revealed an apparent asymptote at 2.9 items, consistent with the hypothesis that no further items were encoded into memory at larger set sizes.

We then tested the hypothesis that fixed item limits constrain resource allocation by fitting each observer’s precision functions, both simultaneous and sequential, with a bilinear function to calculate individual asymptotes in precision. This approach yielded good fits with the bilinear function [mean *r*s = .80 (simultaneous) and .69 (sequential)], and the average points of asymptote were 3.71 and 3.69 items for the simultaneous and sequential conditions, respectively. As in Experiment 1, we examined whether the bilinear function provided a better fit of the precision data than did a logarithmic function, which is the predicted shape according to the flexible-resource model (Bays & Husain, 2008). As was demonstrated in Experiment 1, we found that the bilinear function provided better fits of the precision-by-set-size functions than did the logarithmic function in both simultaneous [average *r* = .42; *t*(21) = 4.63, *p* < .001] and sequential [average *r* = .43; *t*(21) = 2.01, *p* < .05] conditions.

In our individual-differences analysis, we again avoided a potential violation of independence assumptions by examining the relationship between inflections in precision and individual capacity estimates from two different data sets. Thus, we examined the relationship between *P* _{mem} in the sequential condition and inflections in precision in the simultaneous condition (*R* ^{2} = .36, *p* < .01; Fig. 7c) and *P* _{mem} in the simultaneous condition and inflections in precision in the sequential condition (*R* ^{2} = .35, *p* < .01; Fig. 7d). Importantly, we replicated the core empirical pattern from Anderson et al. (2011) in both simultaneous and sequential displays while maintaining independence between the dependent measures.

Finally, the points of asymptote did not differ between the simultaneous and sequential conditions [*M*s = 3.71 and 3.69, respectively; *t*(21) = 0.17, *p* = .87; see Fig. 8a]. Furthermore, individual inflection estimates in the simultaneous and sequential conditions were strongly correlated (*R* ^{2} = .36, *p* < .01; Fig. 8b). This finding suggests that the same central limits constrain resource allocation in these two conditions. Given that sequential displays provided substantially more time for encoding each item, this rules out encoding limits as the reason for asymptotes in the precision-by-set-size function.

## Experiment 3

Although we have shown that precision reaches a stable asymptote at small set sizes and that individual measures of asymptotes in precision are predicted by capacity limits, other researchers have reported failing to find an asymptote in precision during an orientation task in which set size ranged from one to eight items (Gorgoraptis et al., 2011; Ma & Chou, 2010; Rademaker & Tong, 2010). Of course, this apparent contradiction is a significant concern, given that these studies provide what appears to be a conceptual replication of the procedure that we have described here. We show here, however, that this discrepancy can be resolved by taking into account one fundamental difference between these studies and both Anderson et al. (2011) and the present work: Specifically, we employed orientation stimuli whose values covered a full 360º range, while the studies failing to replicate our findings employed gratings or lines that spanned 180º of space. Because the procedure for extracting parameter estimates from response offset distributions requires a 360º space, the data from 180º stimuli must undergo a linear transformation (i.e., multiplied by 2) prior to fitting a von Mises distribution. Although this transformation is reversed after the fitting procedure, the simulations below will show that this procedure leads to systematic errors in parameter estimation. These flawed parameter estimates in turn distort the shape of the precision-by-set-size function unless very large numbers of trials—in this case, at least 750 trials per condition—are collected.

We used both behavioral and simulated data to examine the consequences of employing 180º and 360º stimuli in these procedures. The behavioral study replicated both our findings and the conflicting findings that had been obtained with 180º-orientation stimuli. While the estimated precision-by-set-size function was bilinear with 360º stimuli, it was logarithmic in shape with the 180º stimuli. To test whether the difference in results may have been due to limitations of the analytic method used to estimate precision, we created artificial data sets in which we presumed a bilinear (or a logarithmic) precision-by-set-size function and examined whether the seed function could be recovered in the 180º and 360º conditions. We found systematic errors in the parameter estimates from the 180º condition that changed the shape of the precision-by-set-size function from a bilinear to a logarithmic shape. Critically, this problem with parameter estimation was not present in the 360º data. These findings show that the apparent conflict between past studies examining the precision-by-set-size function can be fully explained by inaccurate parameter estimates with 180º stimuli. In the simulation work below, we detail the potential cause of these flawed parameter estimates and provide some strategies for avoiding this problem.

### Method

### Subjects

A total of 23 undergraduates at the University of Oregon completed the experiment for course credit. All had normal or corrected-to-normal visual acuity and gave informed consent according to the procedures approved by the University of Oregon institutional review board.

### Stimulus displays

The stimuli were generated in MATLAB using the Psychophysics Toolbox extension (Brainard, 1997; Pelli, 1997) and were presented on a 17-in. flat CRT computer screen (refresh rate of 120 Hz). The viewing distance was approximately 77 cm.

In Experiment 3, objects were presented within a square region subtending 10.7º × 10.7º of visual angle, and the subjects fixated on a central fixation point that subtended 0.37º × 0.37º. The objects were positioned randomly with respect to both position and orientation, with the constraint that no two objects could fall within 1.43º of one another, resulting in a between-object separation of at least two-thirds of an object. At the end of each trial, subjects were cued to recall the orientation of a single item; the probed object was indicated by outlining its position with a thick, white ring with a radius of 0.93º and a rim thickness of 0.37º (see Fig. 9). When the stimuli were sampled from 180º of space, observers were instructed that they could click on either side of the ring, because the two responses were equivalent.

### Procedure

Experiment 3 took approximately 2 h to complete and was composed of 24 blocks of 60 trials each. The presentation of stimulus range (180º or 360º) was blocked and counterbalanced. The events in a single trial went as follows. First, subjects saw a central fixation point, followed by the presentation of one, two, three, four, six, or eight discs with oriented gaps for 200 ms. A 1,000-ms delay period followed the offset of the discs. Following the delay, a probe ring appeared at a randomly selected position. The subject clicked on the perimeter of this ring in an unspeeded response to indicate the orientation of the sample item that had appeared in the same position. Each response was followed by a 750-ms blank intertrial interval.

### Simulation approach

The purpose of the simulations was to examine whether the conflicting results obtained with different sets of stimuli were due to limitations in the mixture model analysis that was used to estimate mnemonic precision and the probability of storage. To test this hypothesis, we generated an artificial data set that followed the assumptions of the discrete-resource model, and then examined whether the underlying parameter estimates could be accurately recovered under various conditions.

We generated the artificial data set on the basis of data observed by Anderson et al. (2011). Thus, the precision-by-set-size function was defined by three parameters: the slope (3.77º/item) and *y*-intercept (7.89º) of the initial decline in precision with set size, and a plateau in precision (at 19.2º) after a presumed capacity limit of three items. In this fashion, we generated artificial data for set sizes 1–6. After defining the seed parameters, we ran the simulation over 100 “subjects.” For each set size, we simulated either 120, 300, or 1,000 trials, and each trial was classified as a hit or a miss, with a hit representing successful recall and a miss representing failed recall. The assigned proportion of hits was dependent on the ratio of modeled capacity relative to total set size (e.g., 75% of trials in set size 4 would be hits for a capacity of three). In hit trials, simulated responses were sampled from a von Mises distribution (*μ* = 0), and *k* (a concentration parameter from which precision is calculated) varied with respect to estimated precision values obtained from the modeled precision-by-set-size function. For example, if set size 3 were being simulated, we would feed the estimated *SD* value for set size 3 (3.77 * 3 + 7.89 = 19.20º) into the simulation to create a von Mises distribution based on this parameter value, and sample a single response from that distribution. Thus, for set size 3, 75% of simulated trials were sampled from the von Mises distribution. In miss trials, simulated responses were sampled from a uniform distribution that spanned across the full range of the stimulus space in that condition. This simulation procedure was repeated across set sizes 1–6. The simulated data were then fitted with the mixture model to estimate precision and capacity, thereby providing a clear test of whether accurate parameter estimates could be extracted in each condition.
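The data-generation procedure above can be sketched as follows. This is a minimal sketch of the mixture logic (stored items produce von Mises errors, unstored items produce uniform guesses), assuming the parameter values quoted in the text; the SD-to-kappa conversion (kappa ≈ 1/SD² in radians) and the function name are our assumptions, not the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_subject(set_size, n_trials, capacity=3, slope=3.77,
                     intercept=7.89, plateau=19.2, space=360.0):
    """Generate response errors (deg) under a discrete-resource model.
    Hit trials draw from a von Mises whose width grows linearly with set
    size up to a plateau at the capacity limit; miss trials draw uniform
    guesses spanning the full stimulus space."""
    p_mem = min(1.0, capacity / set_size)        # e.g., 3/4 = .75 at set size 4
    sd_deg = min(slope * set_size + intercept, plateau)
    kappa = 1.0 / np.radians(sd_deg) ** 2        # rough SD -> concentration
    hits = rng.random(n_trials) < p_mem
    return np.where(
        hits,
        np.degrees(rng.vonmises(0.0, kappa, n_trials)),     # stored item
        rng.uniform(-space / 2, space / 2, n_trials))       # random guess
```

For a simulated set size of 4 with a capacity of 3, roughly a quarter of the responses are uniform guesses, which is what the mixture model is later asked to recover.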

## Results and discussion

### Empirical results

First, we examined whether the data replicated prior work showing that the distribution of errors in each condition was best described by a mixture of a target-related von Mises distribution and a guessing-related flat distribution, in line with discrete-resource models that claim a relatively low item limit in working memory (Anderson et al., 2011; Zhang & Luck, 2008). This discrete-resource model was pitted against a flexible-resource model that predicted an error distribution described by a single von Mises distribution.

### 360º behavioral data

The aggregate response distributions in the 360º condition were better fit by the mixture model than by a single von Mises distribution at every set size [mixture-model *R* ^{2} values: .99 (SS1), .99 (SS2), .99 (SS3), .99 (SS4), .98 (SS6), .95 (SS8); von Mises–alone *R* ^{2} values: .99 (SS1), .97 (SS2), .90 (SS3), .91 (SS4), .85 (SS6), .84 (SS8)]. The discrete-resource model was also more effective on average than the flexible-resource model in characterizing the response distributions and in explaining distribution variance for each subject at every set size. Dependent-samples *t* tests revealed significant differences between the two models in explaining variance (*R* ^{2}) in the distributions for all set sizes (*p* < .01) except set size 1 (*p* = .18).

We calculated the response offset between each response and each of the nontarget values from the same trial to examine the possibility of swapping as an explanation for a uniform distribution (Bays et al., 2009). The aggregate distribution of distractor response values for each set size revealed no evidence of a central tendency in error plots (Fig. 10, bottom panel), suggesting that observers did not consistently report distractor values in this experiment. A swapping analysis in which we determined the proportion of trials on which observers erroneously reported a distractor value (Bays et al., 2009) revealed a relatively small proportion of swapping errors across set sizes (SS2 = .01, SS3 = .01, SS4 = .03, SS6 = .02, SS8 = .05). This suggests that the uniform component of the error distribution is due to a failure to store the probed item rather than to a consistent tendency to report unprobed values.
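The logic of this swapping check can be illustrated with a simplified nearest-neighbor rule: a trial counts as a swap when the response lands near a distractor but not near the target. The actual analysis (Bays et al., 2009) estimates swap probability within a mixture model, so the threshold rule and names below are illustrative assumptions only.

```python
import numpy as np

def circ_dist(a, b, space=360.0):
    """Signed circular distance (deg) between angles a and b."""
    return (a - b + space / 2) % space - space / 2

def swap_proportion(responses, targets, distractors, threshold=20.0):
    """Rough swap estimate: proportion of trials on which the response falls
    within `threshold` deg of a distractor value but not of the target.
    A simplified stand-in for the full mixture-model swap analysis."""
    responses = np.asarray(responses, float)
    targets = np.asarray(targets, float)
    near_target = np.abs(circ_dist(responses, targets)) < threshold
    near_distractor = np.array([
        np.any(np.abs(circ_dist(r, np.asarray(d, float))) < threshold)
        for r, d in zip(responses, distractors)])
    return float(np.mean(near_distractor & ~near_target))
```

For example, a response of 100º with a target at 0º and a distractor at 95º would count as a swap, whereas a response of 10º with a target at 12º would not.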

### 180º behavioral data

The same pattern held in the 180º condition [discrete-resource *R* ^{2} values: .999 (SS1), .999 (SS2), .998 (SS3), .99 (SS4), .99 (SS6), .98 (SS8); flexible-resource *R* ^{2} values: .999 (SS1), .992 (SS2), .97 (SS3), .94 (SS4), .90 (SS6), .89 (SS8)]. The discrete-resource model was also more effective on average than the flexible-resource model in characterizing the response distributions and in explaining the distribution variance for each subject at every set size. Dependent-samples *t* tests revealed significant differences between the two models in explaining variance (*R* ^{2}) in the distributions for all set sizes (*p* < .001) except set size 1 (*p* = .32).

We calculated the response offset between each response and each of the nontarget values from the same trial to examine the possibility of swapping as an explanation for a uniform distribution. The aggregate distribution of distractor response values for each set size revealed no evidence of a central tendency in error plots (Fig. 11, bottom panel), suggesting that observers did not consistently report distractor values in this experiment. A swapping analysis in which we determined the proportion of trials on which observers erroneously reported a distractor value (Bays et al., 2009) revealed negligible evidence of swapping errors across set sizes (all swapping probabilities were less than .01). This analysis suggests that the uniform component of the error distribution is due to a failure to store the probed item rather than to a consistent tendency to report unprobed values.

To summarize the results so far, the distributions of response errors in both the 360º and 180º conditions were better described by the mixture model predicted by the discrete-resource model than by the single von Mises distribution predicted by flexible-resource models. We next examine whether estimates of the *P* _{mem} and *SD* parameters differed as a function of stimulus space. Particular emphasis will be placed on the critical issue of whether the shape of the precision-by-set-size function differed across the 360º and 180º conditions.

### Estimates of the probability of storage (P_{mem}) in the 360º and 180º conditions

*P* _{mem} parameters for each set size and stimulus space are illustrated in Fig. 12b. We found a main effect of set size [one, two, three, four, six, or eight items; *F*(5, 20) = 293.4, *p* < .001], but no effect of stimulus space [180º or 360º condition; *F*(1, 24) = 1.68, *p* = .21] and no interaction between set size and stimulus space [*F*(5, 20) = 2.03, *p* = .13]. Thus, although we will detail below how precision estimates differed across these conditions, we found no reliable effect of stimulus space on estimates of *P* _{mem}. Although the equivalent *P* _{mem} estimates across the 180º and 360º conditions may appear to conflict with the clear differences in *SD* estimates across the same conditions, we will later discuss in the simulations why the *SD* parameter may have been more sensitive to the stimulus space manipulation.

### Comparing the precision-by-set-size functions from the 360º and 180º conditions

The primary goal of this study was to examine why different studies had observed different shapes for the precision-by-set-size function. As the results will show, the behavioral data from this study replicated those conflicting findings, such that the precision-by-set-size function was best described by bilinear and logarithmic shapes in the 360º and 180º conditions, respectively. The precision parameters for each set size and stimulus range condition are illustrated in Fig. 12a. We found main effects of set size [one, two, three, four, six, or eight items; *F*(5, 20) = 28.23, *p* < .001] and stimulus space [180º or 360º condition; *F*(1, 24) = 7.51, *p* < .01], as well as a significant interaction between set size and stimulus space [*F*(5, 20) = 3.82, *p* < .05]. Thus, the shift from 360º to 180º had a marked effect on the observed parameter estimates. As Fig. 12a illustrates, the 360º condition replicated the bilinear function observed by Anderson et al. (2011), whereas the 180º condition showed a monotonic rise in *SD* all the way up to set size 8. To verify this apparent difference, we examined whether the bilinear or the logarithmic functions provided a better fit for the aggregate and individual subject data in the 360º and 180º conditions.

### Fits for the precision-by-set-size function in the 360º condition

Precision in the 360º condition declined across set sizes 1–4 [set size 1 (*M =* 13.06) vs. 2 (*M =* 21.14), *t*(24) = –8.66, *p* < .001; set size 2 vs. 3 (*M =* 20.48), *t*(24) = 0.536, *p* = .60; set size 3 vs. 4 (*M =* 23.78), *t*(24) = –3.39, *p* < .01], and then reached an apparent asymptote after set size 4 [set size 4 vs. 6 (*M =* 20.89), *t*(24) = 1.95, *p* = .09; set size 6 vs. 8 (*M =* 23.17), *t*(24) = –1.49, *p* = .16]. This aggregate function was indeed better fit by a bilinear function (*R* ^{2} = .95, *p* < .01) than by a logarithmic function (*R* ^{2} = .42, *p* = .14). This was also true at the individual-subject level, where a paired *t* test showed that the bilinear function (average *r* = .75) provided a better fit than did the logarithmic function (average *r* = .37) [*t*(24) = 6.82, *p* < .0001].

### Fits for the precision-by-set-size function in the 180º condition

Precision estimates in the 180º condition declined across the entire range of set sizes [set size 1 (*M =* 12.95) vs. 2 (*M =* 17.27), *t*(24) = –7.7, *p* < .001; set size 2 vs. 3 (*M =* 19.18), *t*(24) = –2.03, *p* = .054; set size 3 vs. 4 (*M =* 20.65), *t*(24) = –1.86, *p* = .07; set size 4 vs. 6 (*M =* 20.28), *t*(24) = 0.341, *p* = .74; set size 6 vs. 8 (*M =* 22.58), *t*(24) = –2.373, *p* < .05]. The comparisons between set sizes 2 and 3 and between 3 and 4 were marginal, and only the comparison between set sizes 4 and 6 showed no trend toward a difference. For the aggregate data from the 180º condition, the precision-by-set-size function was well described by both a logarithmic function (*R* ^{2} = .95, *p* < .01) and a bilinear function (*R* ^{2} = .89, *p* < .05). Comparing these fits at the individual-subject level, however, revealed a clear advantage for the logarithmic function (average *r* = .61) over the bilinear function (average *r* = .35) [*t*(24) = 6.42, *p* < .001].

To summarize the analysis of the precision-by-set-size functions, we observed different shapes of this function in the 360º and 180º conditions. While the 360º condition replicated the bilinear precision-by-set-size function observed by Anderson et al. (2011), this function had a logarithmic shape in the 180º condition.

### Examining the link between P_{mem} and the inflection point of the precision-by-set-size function

The key finding reported by Anderson et al. (2011) was that the set size at which mnemonic resolution reached its plateau was predicted by the number of items that each observer could hold in memory, as predicted by the hypothesis that storage is limited to a discrete number of items. To examine this correlation in the present work, we tested whether estimates of item limits from one condition predicted the set size at which precision reached a plateau in the other condition (i.e., whether *P* _{mem} from the 180º condition was correlated with the inflection point of the precision-by-set-size function in the 360º condition, and vice versa). We examined the correlation in this manner because of violations of independence that arise when *P* _{mem} and *SD* are estimated from the same data set (Brady et al., 2011). In line with the prior work, estimates of item limits observed in the 180º condition predicted the set size at which precision reached its plateau (i.e., the inflection point of the precision-by-set-size function) in the 360º condition (*R* ^{2} = .24, *p* < .05; Fig. 12c). The same analysis with the 180º data, however, did not replicate the Anderson et al. (2011) findings. That is, we found no trace of a correlation between the item limits estimated from the 360º condition and the inflection point of the precision-by-set-size function in the 180º condition (*R* ^{2} = .001, *p* = .95).

Of course, the conflicting findings in the 360º and 180º conditions raise a crucial question: Were the allocations of mnemonic resources actually different in the 360º and 180º conditions? Although it is plausible that observers may allocate resources differently for different types of stimuli, we also considered the hypothesis that the mixture modeling approach might yield inaccurate parameter estimates when the width of the stimulus space was reduced. To examine whether inaccurate parameter estimates might be responsible for the apparent conflict between the findings in the 180º and 360º conditions, we turn to the simulation results.

### Simulation results

Our primary goal was to examine whether the different precision-by-set-size functions across stimulus sets were due to problems with parameter estimation rather than to a qualitative shift in how resources were allocated across the two conditions. We started by generating an artificial data set based on the assumptions of the discrete-resource model and the aggregate parameter estimates from the Anderson et al. (2011) data set. Because we knew precisely what the underlying parameter values were in this artificial data set, we had the opportunity to test whether the mixture model could accurately extract those values when the stimulus space varied between 360º and 180º. As a follow-up, we then examined the effects of stimulus space when a logarithmic seed function was employed.

We created artificial data sets in which the size of the stimulus space varied parametrically from 360º down to 180º (in increments of 60º). This procedure was implemented by restricting the range of values in the uniform distribution from which miss trials were sampled. For example, if we were simulating data under the assumption of a 240º stimulus space, all random responses would be sampled between –120º and 120º. The assumptions of the von Mises distribution were kept constant across the parametric variations of stimulus space because the results of the empirical study suggested that the distribution of target-related responses was equivalent across the 180º and 360º stimulus spaces. We ran 100 “subjects” through 120, 300, and 1,000 trials of each stimulus space condition for set sizes 1–6. The bilinear precision function used to generate the simulated data was based on the findings reported by Anderson et al. (2011). Thus, we knew that the underlying precision function was bilinear in each artificial data set, and we tested whether this function could be accurately recovered across the different stimulus spaces.

Using the fit (*R* ^{2}) between the recovered *P* _{mem} functions and the seed function as the dependent measure, we found effects of stimulus space [*F*(3, 97) = 19.20, *p* < .001] and trial number [*F*(2, 98) = 3.95, *p* < .05], as well as a significant interaction [*F*(6, 94) = 7.83, *p* < .001]. Nevertheless, although we observed effects of the stimulus space and trial number manipulations, we should point out that all conditions provided fits above .89 to the seed function [with 120 trials, 180º = .89, 240º = .96, 300º = .97, 360º = .97; with 300 trials, 180º = .96, 240º = .99, 300º = .99, 360º = .99; and with 1,000 trials, 180º = .993, 240º = .996, 300º = .996, 360º = .997]. In contrast to the robustness of the estimation of *P* _{mem}, the precision functions showed marked effects of both stimulus space [*F*(3, 97) = 42.86, *p* < .001] and trial number [*F*(2, 98) = 389.67, *p* < .001]. As is shown in Fig. 13, the values for all stimulus spaces were nearly identical to those of the seed function when 1,000 trials were simulated (Fig. 13c), while there was a sharp decline in the fits when only 120 trials were simulated (Fig. 13a). Specifically, we obtained the following fits for each stimulus space across trial numbers: for 120 trials, 180º = .62, 240º = .72, 300º = .78, 360º = .79; for 300 trials, 180º = .79, 240º = .88, 300º = .89, 360º = .90; and for 1,000 trials, 180º = .92, 240º = .95, 300º = .96, 360º = .97. This interaction between stimulus space and trial number was significant when we examined the *R* ^{2} values [*F*(6, 94) = 5.997, *p* < .001]. Thus, we found a monotonic decline in the accuracy of parameter estimates when the stimulus space was reduced from 360º to 180º, although this problem was ameliorated as the number of trials increased. Because past work employing 180º stimuli has been based on too few trials to overcome these problems in the 180º condition, we conclude that there may have been systematic errors in the parameter estimates in those studies. Below, we further examine how these errors would affect the apparent shape of the precision-by-set-size function.

As was done with the empirical data, we fitted each data set with both a bilinear and a logarithmic function to determine which function better characterized the shape of the precision-by-set-size function derived from each stimulus space. Given that the fits of each precision function to the seed function increased monotonically with increases in both stimulus space and trial number, we examined the two extreme stimulus space values: 180º and 360º. Comparing across simulations of both stimulus spaces, the bilinear function better characterized simulations of the larger stimulus space across all trial numbers [for 120, *t*(99) = –2.65, *p* < .01; for 300, *t*(99) = –12.83, *p* < .001; for 1,000, *t*(99) = –24.52, *p* < .001]. Conversely, in the 180º condition, the modeling procedures failed to discriminate between a logarithmic function and the bilinear seed function that was used to generate the data; thus, when 120 or 300 trials were simulated, the resulting precision-by-set-size functions were equally well-described by bilinear and logarithmic functions [for 120, *t*(99) = –0.296, *p* = .77; for 300, *t*(99) = –1.23, *p* = .22]. With 1,000 trials, there was a significantly better fit for the bilinear seed function than for the logarithmic function [*t*(99) = 3.10, *p* < .01]. Further simulations indicated that at least 750 trials were required to disambiguate the bilinear seed function from a logarithmic alternative in the 180º condition. This observation suggests that previous work using 180º stimuli may have revealed a logarithmic shape because of flawed estimates of the *SD* parameter. The implications of this simulation are twofold. First, they show that with typical numbers of observations (i.e., well below 750), the recovery of the bilinear precision-by-set-size function is more reliable with 360º than with 180º stimuli. 
Second, the apparent fit of a logarithmic (or exponential) function (Bays et al., 2011; Bays & Husain, 2008) to precision functions in both our empirical and simulation data can be attributed entirely to inaccurate extraction of the *SD* parameter values when typical numbers of trials are run.

In a final set of simulations, we employed a logarithmic seed function, *f*(*x*) = *a* * log(*b* * *x*) + *d*, where *x* is the set size, *a* is the slope, *b* is the *x*-intercept, and *d* is a constant. The seed logarithmic parameter estimates (*a* = 4.73, *b* = 6.47, and *d* = 2.98) were obtained from the best-fitting logarithmic function to the bilinear precision function used in the previous simulation. As was done in the previous simulation, we fitted each data set with both a bilinear and a logarithmic function to determine which function better characterized the shape of the precision-by-set-size function derived from each stimulus space. As is apparent in Fig. 14, the recovered precision functions were better fit by a logarithmic function than by a bilinear function in both the 180º condition [mean *r*s = .61 for logarithmic, .40 for bilinear; *t*(99) = –5.02, *p* < .0001] and the 360º condition [mean *r*s = .74 for logarithmic, .56 for bilinear; *t*(99) = –4.56, *p* < .0001]. Thus, the general form of the seed function (logarithmic rather than bilinear) was accurately recovered in both the 180º and 360º conditions. Nevertheless, the accuracy of the recovered parameter estimates was once again worse in the 180º condition, as demonstrated by a paired *t* test of the individual-subject fits across the two stimulus spaces, *t*(99) = 4.31, *p* < .001. To sum up, the simulations with a logarithmic seed function showed that parameter estimates from both the 180º and 360º conditions allowed for accurate discrimination between logarithmic and bilinear alternatives, but there was still a significantly better fit between the seed function and the estimated parameter values in the 360º condition.

Of course, it is of interest to determine why parameter estimates were consistently less accurate in the 180º condition than in the 360º condition. Our hypothesis is that the errors in parameter estimation resulted from the linear transformation that is applied to restricted stimulus spaces during the mixture model analysis. For example, when running behavioral studies or simulations in which responses only range from –90º to 90º (i.e., a 180º stimulus space), offset values must be subjected to a linear transformation prior to fitting with a circular distribution such as the von Mises distribution, so that the whole range of offset values spans all 360º of circular space. Specifically, all offset values are multiplied by a factor of 2. The precision parameter estimated by fitting the von Mises distribution is then converted back into its native space by applying the inverse of the linear transformation. Thus, the assumption is that a linear transformation on a response offset distribution, prior to fitting with a circular distribution, allows the correct value to be recovered when the transformation is reversed. Our findings suggest that this assumption is incorrect when typical numbers of trials are run. For example, we simulated a response offset distribution in 360º of space, with an underlying precision estimate of 18º, and compared the resulting parameter estimates when the data were left unchanged and when they were subjected to the same linear transformation that has been used in past studies with 180º stimuli (i.e., the initial doubling of offset values, and a later halving of the *SD* parameter estimate). In the absence of the transformation, the recovered precision value was 18.1º, while the *SD* parameter estimate from the transformed data was 16.5º. Such errors in estimating the *SD* parameter value could explain the logarithmic shape of the precision-by-set-size function when trial numbers were low.
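The transform-and-halve pipeline can be sketched as follows. As a simplifying assumption, we use the circular standard deviation, sqrt(−2 ln R), as a stand-in for the SD recovered by a full von Mises mixture fit; with a large, well-concentrated sample the two routes nearly agree, so this sketch illustrates the doubling, wrapping, and halving steps rather than reproducing the 18.1º-versus-16.5º discrepancy reported above, which arose from mixture-model fits at realistic trial counts.

```python
import numpy as np

rng = np.random.default_rng(11)

def circ_sd_deg(errors_deg):
    """Circular SD in degrees, sqrt(-2 ln R), where R is the mean resultant
    length; a simple stand-in for a fitted von Mises SD."""
    theta = np.radians(np.asarray(errors_deg, float))
    R = np.abs(np.mean(np.exp(1j * theta)))
    return np.degrees(np.sqrt(-2.0 * np.log(R)))

# Response errors with a nominal SD of 18 deg in a full 360-deg space.
sd_true = 18.0
kappa = 1.0 / np.radians(sd_true) ** 2           # rough SD -> concentration
errors = np.degrees(rng.vonmises(0.0, kappa, 100_000))

# Route 1: estimate directly in the native 360-deg space.
direct = circ_sd_deg(errors)

# Route 2: the 180-deg pipeline -- double the offsets, wrap back into
# (-180, 180], estimate, then halve the estimate.
doubled = (errors * 2.0 + 180.0) % 360.0 - 180.0
transformed = circ_sd_deg(doubled) / 2.0
```

Shrinking the number of trials per fit, or adding a uniform guessing component before the doubling step, is where the instability described in the text becomes visible.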

Our density account predicts that the accuracy of parameter estimation should improve with three manipulations: (1) reducing the width of the target distribution (*SD*), which effectively decreases the range of stimulus values over which target-related responses are primarily encountered; (2) increasing the number of trials; and (3) reducing the coefficient of the linear transformation. To test these predictions, we ran simulations in which we manipulated the seed parameter (*SD* = 7, 15, or 23), number of trials (120, 240, or 360), and magnitude of the transformation coefficient (*m* = 1, 2, or 3) while maintaining 360º of stimulus space. For each condition, we calculated the mean squared error (*MSE*) between the seed value and the recovered parameter estimate across all 100 simulated subjects. Thus, if the density of observations within a given stimulus space is positively correlated with the accuracy of parameter estimates, we should find an increase in the *MSE* with increasing *SD*, decreasing number of trials, or increasing transformation coefficient. In line with this prediction (Fig. 15a), we found main effects of magnitude [*F*(2, 98) = 2.93, *p* < .05], trials [*F*(2, 98) = 33.31, *p* < .001], and seed parameter [*F*(2, 98) = 312.965, *p* < .001], as well as a significant interaction between trial number and seed parameter [*F*(4, 96) = 17.30, *p* < .001]. No other interactions were significant (*p*s > .19).

Moreover, we obtained similar results when other manipulations that changed “density” were performed on the same data set, such as an increase in the size of the *SD* seed parameter or a decrease in trial number. To demonstrate the relationship between the size of *SD* and the number of trials required to reliably recover the underlying seed *SD*, we simulated response distributions over a range of *SD* values (every integer value from 5 to 40). The simulation of each *SD* value began with 120 simulated trials over 100 “subjects.” To determine the number of trials required for accurate parameter estimates, we parametrically varied trial numbers until a criterion *MSE* was achieved. If the *MSE* of the sample did not fall below 1 (i.e., if the average squared difference between the seed *SD* and the simulated mean *SD* was not less than 1), we ran the simulation again with the number of trials increased by 5. This was repeated until the *MSE* met our criterion for successful parameter recovery, and the number of trials needed to meet criterion was saved for each *SD* value. This procedure was implemented over 100 iterations to obtain a stable estimate of the number of trials required to meet criterion. As was predicted, an increase in *SD* produced a proportional increase in the number of trials needed to recover the underlying seed *SD* (Fig. 15b). Specifically, a regression of the trial-number-by-*SD* function revealed a slope of 13.4 trials/deg, suggesting that approximately 13 additional trials are required for each additional degree of *SD*. We conclude that increasing *SD* decreases the density of observations relevant for estimating the width of the target-related von Mises distribution, and this yields systematic errors in the *SD* parameter estimate.
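A minimal sketch of this trials-to-criterion procedure, again using the circular SD as a stand-in for the mixture-model fit (an assumption on our part, as are the function names and defaults). The reference value is calibrated from a large sample so that the criterion reflects estimation variance rather than the rough SD-to-kappa conversion.

```python
import numpy as np

rng = np.random.default_rng(3)

def circ_sd_deg(theta):
    """Circular SD (deg) of a sample of angles given in radians."""
    R = np.abs(np.mean(np.exp(1j * theta)))
    return np.degrees(np.sqrt(-2.0 * np.log(R)))

def trials_to_criterion(sd_deg, start=120, step=5, n_subjects=100,
                        criterion_mse=1.0, max_trials=2000):
    """Increase the per-condition trial count in steps of `step` until the
    MSE of the recovered SD across simulated subjects falls below criterion.
    The seed value is calibrated from a large sample of the same
    distribution, so only estimation variance drives the MSE."""
    kappa = 1.0 / np.radians(sd_deg) ** 2
    sd_ref = circ_sd_deg(rng.vonmises(0.0, kappa, 200_000))  # calibrated seed
    n = start
    while n <= max_trials:
        ests = np.array([circ_sd_deg(rng.vonmises(0.0, kappa, n))
                         for _ in range(n_subjects)])
        if np.mean((ests - sd_ref) ** 2) < criterion_mse:
            return n
        n += step
    return None
```

With this stand-in, small *SD* values meet the criterion at the 120-trial floor, while larger *SD* values tend to require more trials, in the spirit of Fig. 15b.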

Overall, the results from the simulations suggest that systematic errors in *SD* parameter estimates occur when the stimulus space is restricted and typical trial numbers are run. Thus, when using stimuli that utilize only 180 unique angles of orientation, significantly more trials per condition are required to recover the underlying precision parameter and to overcome the reduction in density that occurs when linear transformations are performed on the response distributions. These simulations therefore offer a way to reconcile the apparent conflict between our findings and those from conceptually similar studies that have suggested a logarithmic precision-by-set-size function (Bays et al., 2011; Gorgoraptis et al., 2011; Ma & Chou, 2010; Rademaker & Tong, 2010).

## General discussion

There is active debate regarding the nature of capacity limits in visual WM. Discrete-resource models assume that WM capacity is constrained by a discrete resource, such that no further items can be stored once a small item limit has been reached (Anderson et al., 2011; Barton et al., 2009; Zhang & Luck, 2008). By contrast, flexible-resource models assume a continuous allocation of resources without the constraint of a fixed item limit, meaning that each item within a given display will receive a share of WM resources (Bays et al., 2009; Bays & Husain, 2008; Wilken & Ma, 2004). Indeed, multiple studies have demonstrated an inverse relationship between display size and the fidelity of each representation stored in memory (Anderson et al., 2011; Bays et al., 2009; Wilken & Ma, 2004; Zhang & Luck, 2008). Flexible-resource models account for this relationship by assuming that each item in memory receives a smaller proportion of resources with larger displays. Discrete-resource models, however, can also account for this relationship if fixed item limits constrain resource allocation to a fixed number of individuated representations (Barton et al., 2009). Past findings (Anderson et al., 2011; Zhang & Luck, 2008) have supported the discrete account by showing that precision-by-set-size functions are best described by a bilinear function, in which precision declines initially with set size and reaches a stable plateau when individual capacity limits are exceeded. Thus, we have suggested that WM may be best described as a fixed-capacity system whose item limit constrains a separate resource that determines the resolution of the stored representations.

The present experiments strengthen support for the discrete-resource model in three ways. First, we showed that the same empirical pattern is observed when observers store the color rather than the orientation of the stimuli. Second, we ruled out encoding limits as a potential source of the apparent item limit by showing that equivalent precision estimates are obtained with simultaneous and sequential presentations of the memoranda. Finally, we presented behavioral and simulation data that demonstrated an important boundary condition for the mixture-modeling approach developed by Zhang and Luck (2008): When the stimulus space is reduced below 360º, this analytic approach fails to accurately recover the seed precision parameter. Indeed, under these conditions the recovered precision-by-set-size functions were better fit by a logarithmic function than by a bilinear function. This observation was especially striking given that the underlying shape of the precision-by-set-size function in these simulations was bilinear. This finding provides a clear explanation for apparently conflicting results from conceptually similar procedures that have employed orientation stimuli spanning only 180º of orientation space (Gorgoraptis et al., 2011; Ma & Chou, 2010; Rademaker & Tong, 2010).

An important observation in the simulation results was the variation in shape of the precision-by-set-size functions derived from parametric variations of stimulus space. On the one hand, we found that the shapes of precision functions were more bilinear when the stimulus space was larger (e.g., 360º), which is suggestive of discrete resource limits in WM capacity (Anderson et al., 2011; Zhang & Luck, 2008). On the other hand, we found that the shapes of precision functions were more logarithmic (or exponential) when the stimulus space was smaller (e.g., 180º), which is suggestive of a flexible resource that can be allocated across all of the items in the display (Bays et al., 2011; Bays & Husain, 2008). Thus, a simple change in the stimuli had a dramatic effect on the shape of the precision-by-set-size function, even though the underlying function was identical across the different stimuli. These findings suggest that care should be taken when selecting the stimulus space within which to work. Specifically, future empirical investigations of the precision of working memory should account for the large number of trials required to overcome the decline in distribution density that arises when response distributions from smaller stimulus spaces undergo linear transformations prior to fitting with a von Mises distribution.

There is a distinction to be made between two alternative views of the discrete-resource model: the slots + resources model (Anderson et al., 2011; Zhang & Luck, 2008) and the slots + averaging model (Zhang & Luck, 2008). On the one hand, the slots + resources model suggests that individuated representations are stored within individual slots in WM, and a separate neural resource determines the precision of those representations. In this case, the two components of the discrete-resource model (number and resolution) are independent factors, one determining how many items can be stored and the other determining the precision of those items. Thus, the decline in precision with subspan displays can be described as follows: As individuated items are loaded into slots, the separate resource can be flexibly allocated among those slots, and sharing resources among a larger number of slots results in fewer resources available for each represented item. On the other hand, the slots + averaging model suggests that individuated representations are stored within individual slots, and the resource that determines the resolution of each item is inherent within each slot. In this case, both the number of items that can be stored and the precision of each representation in memory are determined by the availability of a common discrete resource. If there are more slots than items to store (e.g., subspan displays), multiple slots can load onto a single representation; consequently, an item encoded from a smaller display may occupy multiple slots, resulting in a higher-precision representation, while the same item encoded from a larger display may only occupy a single slot, resulting in a lower-precision representation.
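The square-root precision gain at the heart of the slots + averaging account can be made concrete with a toy Monte Carlo sketch. The per-slot *SD* of 24º and the handling of leftover slots are our own illustrative assumptions, not parameters from the model itself.

```python
import numpy as np

def averaged_sd(k_slots=3, set_size=1, slot_sd=24.0,
                n_trials=200_000, rng=None):
    """Slots + averaging illustration: each slot stores an independently
    noisy copy of an item, and the report is the average of that item's
    copies. `slot_sd` (24 deg) is arbitrary; slots that do not divide
    evenly among items are ignored here for simplicity."""
    if rng is None:
        rng = np.random.default_rng(0)
    per_item = max(k_slots // set_size, 1)
    copies = rng.normal(0.0, slot_sd, size=(n_trials, per_item))
    return copies.mean(axis=1).std()

# Averaging k independent copies cuts the error SD by sqrt(k):
# set size 1 with 3 slots -> 3 copies averaged, SD near 24/sqrt(3) deg;
# set size 3 with 3 slots -> one slot per item, SD near 24 deg (plateau).
```

On this account, precision improves for subspan displays because items receive multiple slots, and plateaus once every slot holds a different item.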

We tend to favor the slots + resources hypothesis, in part because prior findings have suggested that discrete item limits in WM reflect a distinct aspect of WM ability from that which determines the resolution or precision of these online memories. This number–resolution dichotomy garners support from behavioral studies of individual differences, in which the number of items stored and the precision of the stored representations correspond to distinct factors (Awh, Barton, & Vogel, 2007; Fukuda, Vogel, Mayr, & Awh, 2010). Moreover, while the maximum number of items an individual can store is strongly predictive of fluid intelligence, no such relationship is apparent between mnemonic precision and intelligence (Fukuda et al., 2010). At first glance, this separation between number and resolution seems to conflict with the claim from the slot-averaging hypothesis that both are determined by a single discrete resource.

The number–resolution dichotomy also dovetails with previous brain-imaging data in which activity in disparate brain regions tracked the number of items stored and the information content of the stored items (Xu & Chun, 2006). Neural activity in the parietal cortex, measured by electrophysiological (McCollough, Machizawa, & Vogel, 2007; Vogel & Machizawa, 2004) and functional MRI approaches (Todd & Marois, 2004; Xu & Chun, 2006) has been shown to track the number of items stored in visual working memory. In addition, the profile of this activity across different set sizes is a robust predictor of WM capacity (Todd & Marois, 2004; Vogel & Machizawa, 2004). By contrast, growing evidence shows that the fidelity of representations in working memory is supported by activity in the same sensory areas that enable the initial encoding of the memoranda, in line with the sensory recruitment hypothesis (Awh & Jonides, 2001; Jonides, Lacey, & Nee, 2005; Postle, 2006). Evidence in favor of this hypothesis has come from fMRI work showing that the sustained patterns of activity in primary visual cortex during WM maintenance are qualitatively similar to patterns of activity observed during sensory encoding (Ester, Serences, & Awh, 2009; Harrison & Tong, 2009; Serences, Ester, Vogel, & Awh, 2009). Thus, the behavioral and neural data converge to support a distinction between number and resolution in the maintenance of online visual memories.

One important goal for future research is to provide a more mechanistic account of why mnemonic precision varies across subspan set sizes. Although we and others have referred to the splitting of a specific “resource” for mnemonic precision, the nature of this resource is unclear. As described above, the slot-averaging hypothesis from Zhang and Luck (2008) provides one description of how resources relevant for precision could be distributed. One challenge for this account, however, is that storage-related neural activity (e.g., Vogel & Machizawa, 2004) increases monotonically across early set sizes; this is inconsistent with the claim that all available mnemonic resources are deployed for subspan displays, as claimed by the slot-averaging hypothesis. One hope is that more detailed studies of how sensory regions represent precise visual details during WM storage (Ester et al., 2009; Harrison & Tong, 2009; Serences et al., 2009) will provide a way to directly examine how stimulus-specific representations are influenced by basic variables such as the number of items held in WM.

## Notes

### Author note

Supported by NIH Grant R01 MH077105-01A2 to E.A. E.A. and D.E.A. conceived and designed the experiments, D.E.A. collected and analyzed data, and D.E.A. and E.A. wrote the manuscript. We are grateful to Tim Brady for his advice on the simulation work.

## References

- Anderson, D. E., Vogel, E. K., & Awh, E. (2011). Precision in visual working memory reaches a stable plateau when individual item limits are exceeded. *Journal of Neuroscience, 31*, 1128–1138. doi:10.1523/JNEUROSCI.4125-10.2011
- Awh, E., Barton, B., & Vogel, E. K. (2007). Visual working memory represents a fixed number of items regardless of complexity. *Psychological Science, 18*, 622–628. doi:10.1111/j.1467-9280.2007.01949.x
- Awh, E., & Jonides, J. (2001). Overlapping mechanisms of attention and spatial working memory. *Trends in Cognitive Sciences, 5*, 119–126. doi:10.1016/S1364-6613(00)01593-X
- Barton, B., Ester, E. F., & Awh, E. (2009). Discrete resource allocation in visual working memory. *Journal of Experimental Psychology: Human Perception and Performance, 35*, 1359–1367. doi:10.1037/a0015792
- Bays, P. M., Catalao, R. F. G., & Husain, M. (2009). The precision of visual working memory is set by allocation of a shared resource. *Journal of Vision, 9*(10), 7, 1–11. doi:10.1167/9.10.7
- Bays, P. M., Gorgoraptis, N., Wee, N., Marshall, L., & Husain, M. (2011). Temporal dynamics of encoding, storage, and reallocation of visual working memory. *Journal of Vision, 11*(10), 6, 1–15. doi:10.1167/11.10.6
- Bays, P. M., & Husain, M. (2008). Dynamic shifts of limited working memory resources in human vision. *Science, 321*, 851–854. doi:10.1126/science.1158023
- Brady, T. F., Fougnie, D., & Alvarez, G. A. (2011, October 14). Comparisons between different measures of working memory capacity must be made with estimates that are derived from independent data [Online response to Anderson, Vogel, & Awh, 2011]. *Journal of Neuroscience*. Retrieved from www.jneurosci.org/content/31/3/1128/reply
- Brainard, D. H. (1997). The Psychophysics Toolbox. *Spatial Vision, 10*, 433–436. doi:10.1163/156856897X00357
- Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. *Behavioral and Brain Sciences, 24*, 87–114. doi:10.1017/S0140525X01003922
- Cusack, R., Lehmann, M., Veldsman, M., & Mitchell, D. J. (2009). Encoding strategy and not visual working memory capacity correlates with intelligence. *Psychonomic Bulletin & Review, 16*, 641–647. doi:10.3758/PBR.16.4.641
- Engle, R. W. (2002). Working memory capacity as executive attention. *Current Directions in Psychological Science, 11*, 19–23.
- Ester, E. F., Serences, J. T., & Awh, E. (2009). Spatially global representations in human primary visual cortex during working memory maintenance. *Journal of Neuroscience, 29*, 15258–15265. doi:10.1523/JNEUROSCI.4388-09.2009
- Ester, E. F., Vogel, E. K., & Awh, E. (2012). Discrete resource limits in attention and working memory. In M. I. Posner (Ed.), *Cognitive neuroscience of attention* (pp. 99–112). New York: Guilford Press.
- Fukuda, K., & Vogel, E. K. (2009). Human variation in overriding attentional capture. *Journal of Neuroscience, 29*, 8726–8733.
- Fukuda, K., Vogel, E., Mayr, U., & Awh, E. (2010). Quantity, not quality: The relationship between fluid intelligence and working memory capacity. *Psychonomic Bulletin & Review, 17*, 673–679. doi:10.3758/17.5.673
- Glanzer, M., & Cunitz, A. R. (1966). Two storage mechanisms in free recall. *Journal of Verbal Learning and Verbal Behavior, 5*, 351–360. doi:10.1016/S0022-5371(66)80044-0
- Gorgoraptis, N., Catalao, R. F. G., Bays, P. M., & Husain, M. (2011). Dynamic updating of working memory resources for visual objects. *Journal of Neuroscience, 31*, 8502–8511. doi:10.1523/JNEUROSCI.0208-11.2011
- Harrison, S. A., & Tong, F. (2009). Decoding reveals the contents of visual working memory in early visual areas. *Nature, 458*, 632–635. doi:10.1038/nature07832
- Jonides, J., Lacey, S. C., & Nee, D. E. (2005). Processes of working memory in mind and brain. *Current Directions in Psychological Science, 14*, 2–5.
- Ma, W. J., & Chou, W.-C. (2010). Beyond magical numbers: Towards a noise-based account of visual short-term memory limitations [Abstract]. *Journal of Vision, 10*(7), 728a. doi:10.1167/10.7.728
- McCollough, A. W., Machizawa, M. G., & Vogel, E. K. (2007). Electrophysiological measures of maintaining representations in visual working memory. *Cortex, 43*, 77–94.
- Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. *Spatial Vision, 10*, 437–442. doi:10.1163/156856897X00366
- Postle, B. R. (2006). Working memory as an emergent property of the mind and brain. *Neuroscience, 139*, 23–38. doi:10.1016/j.neuroscience.2005.06.005
- Rademaker, R. L., & Tong, F. (2010). An investigation of the precision and capacity of human visual working memory [Abstract]. *Journal of Vision, 10*(7), 726a. doi:10.1167/10.7.726
- Serences, J. T., Ester, E. F., Vogel, E. K., & Awh, E. (2009). Stimulus-specific delay activity in human primary visual cortex. *Psychological Science, 20*, 207–214. doi:10.1111/j.1467-9280.2009.02276.x
- Todd, J. J., & Marois, R. (2004). Capacity limit of visual short-term memory in human posterior parietal cortex. *Nature, 428*, 751–754. doi:10.1038/nature02466
- Vogel, E. K., & Machizawa, M. G. (2004). Neural activity predicts individual differences in visual working memory capacity. *Nature, 428*, 748–751. doi:10.1038/nature02447
- Wilken, P., & Ma, W. J. (2004). A detection theory account of change detection. *Journal of Vision, 4*(12), 11, 1120–1135. doi:10.1167/4.12.11
- Xu, Y., & Chun, M. M. (2006). Dissociable neural mechanisms supporting visual short-term memory for objects. *Nature, 440*, 91–95. doi:10.1038/nature04262
- Zhang, W., & Luck, S. J. (2008). Discrete fixed-resolution representations in visual working memory. *Nature, 453*, 233–235. doi:10.1038/nature06860