Increasing interletter spacing facilitates encoding of words
Perea, M., & Gomez, P. (2012). Psychonomic Bulletin & Review, 19, 332. doi:10.3758/s13423-011-0214-6
Keywords: Word processing, Letter position coding, Letter spacing, Visual-word recognition, Word perception, Diffusion model
The second goal of the present experiment was to examine the nature of the effect of interletter spacing. To do so, we employed Ratcliff’s (1978) diffusion model for speeded two-choice decisions. This model has been quite successful at accounting for lexical decision data (e.g., Ratcliff, Gomez, & McKoon, 2004; see also Gomez, Ratcliff, & Perea, 2007; Ratcliff, Perea, Colangelo, & Buchanan, 2004; Wagenmakers, Ratcliff, Gomez, & McKoon, 2008). According to the diffusion model account of the lexical decision task, the visual stimulus is encoded so that the relevant stimulus features (e.g., lexical features) are used to accumulate evidence toward a “word” or “nonword” response. The accumulation of evidence is assumed to occur in a noisy manner. These two processes (encoding and accumulation of evidence) are represented by two separate parameters in the model (Ter and drift rate, respectively). Importantly, changes in these two parameters produce qualitatively different effects in the data. If the Ter parameter changes, there should be shifts in the response time (RT) distributions with no change in their shape (see Gomez et al., 2007) and, in addition, no effect on error rates. On the other hand, changes in drift rate produce greater effects in the tail of the RT distributions than in their leading edge (i.e., the .1 quantile) and also affect error rates.1 Therefore, if the effect of interletter spacing takes place in the early encoding, nondecisional stage, it should produce a shift of the RT distribution with no effect on accuracy. Alternatively, if the impact of interletter spacing occurs in the word-processing system, one would expect changes in drift rate, with consequent changes in the RT distributions and error rates. We should note here that Perea et al. 
(2011) briefly discussed the RT distributions (with no fits of the diffusion model) and observed that the effect of interletter spacing on words grew only very slightly as a function of RT quantile, while the changes in error rates were minimal. However, explicit fits are necessary to corroborate that observation, in particular with a wider range of interletter spacing conditions. To obtain stable parameter estimates for the diffusion model, we employed a large number of items per condition (60) in the experiment.
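As a rough illustration of these two signatures, the sketch below simulates first-passage times of a simple Wiener diffusion process. All parameter values, trial counts, and step sizes are arbitrary choices for illustration; they are not the fits reported below. Reusing the same random seed pairs the noise across calls, so a pure Ter change appears as an exactly constant shift at every quantile, whereas a drift-rate change produces an effect that grows toward the slow tail.

```python
import numpy as np

def simulate_diffusion(drift, ter, a=0.1, n=5000, dt=0.001, s=0.1, seed=0):
    """Simulate RTs from a Wiener diffusion between boundaries 0 and a.

    Evidence starts at a/2; the nondecision time `ter` is added to each
    decision time. Reusing the same seed pairs the noise across calls.
    """
    rng = np.random.default_rng(seed)
    x = np.full(n, a / 2)          # accumulated evidence per trial
    t = np.zeros(n)                # decision time per trial
    alive = np.ones(n, dtype=bool)
    while alive.any():
        k = int(alive.sum())
        x[alive] += drift * dt + s * np.sqrt(dt) * rng.standard_normal(k)
        t[alive] += dt
        alive = (x > 0) & (x < a)  # trials at a boundary stay terminated
    return ter + t, x >= a         # RTs and responses (True = upper boundary)

q = [0.1, 0.3, 0.5, 0.7, 0.9]
base_rt, _  = simulate_diffusion(drift=0.25, ter=0.45)
ter_rt, _   = simulate_diffusion(drift=0.25, ter=0.43)  # faster encoding
drift_rt, _ = simulate_diffusion(drift=0.30, ter=0.45)  # better evidence

diff_ter   = np.quantile(base_rt, q) - np.quantile(ter_rt, q)
diff_drift = np.quantile(base_rt, q) - np.quantile(drift_rt, q)
print(diff_ter)    # constant ~.02 shift at every quantile
print(diff_drift)  # effect grows from the .1 to the .9 quantile
```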
A group of 25 students at the University of Valencia took part in the experiment voluntarily. They were native speakers of Spanish, and all had either normal or corrected-to-normal vision.
We selected a set of 300 Spanish words from the B-Pal lexical database (Davis & Perea, 2005). The mean written frequency of these words was 89 occurrences per million words (range: 24–690), the mean length was 5.6 letters (range: 5–6), and the mean number of substitution-letter neighbors was 1.53 (range: 1–4). For the purposes of the lexical decision task, 300 orthographically legal nonwords were also created (mean length: 5.6 letters; range: 5–6) by changing two letters of Spanish words that were not part of the word list. The stimuli were presented in 14-pt Times New Roman font (i.e., the same font as in the Perea et al., 2011, experiments). Five lists of stimuli were created to counterbalance the materials across letter spacings, so that each target appeared once in each list, each time in a different spacing condition. The list of stimuli is available at www.uv.es/mperea/paramspacing.pdf. Participants were randomly assigned to lists.
Participants were tested individually in a quiet room. Presentation of the stimuli and recording of latencies were controlled by a computer using DMDX (Forster & Forster, 2003). On each trial, a fixation point (+) was presented for 500 ms in the center of the monitor. Then, the stimulus item (in lowercase) was presented until the participant’s response. The letter strings were presented centered, in black, on a white background. The participants were instructed to push a button labeled sí “yes” if the letter string formed an existing Spanish word and a button labeled no if the letter string was not a word. Each participant received a different order of trials, and the whole experimental session lasted about 25 min.
Table 1 Mean response times (in milliseconds) and percentages of errors (in parentheses) for words and pseudowords in our experiment
The ANOVA on the latency data showed an effect of interletter spacing, F1(4, 80) = 3.94, MSE = 934, p < .007, η2 = .16; F2(4, 1180) = 7.81, MSE = 5,391, p < .001, η2 = .03. This effect reflected a decreasing linear trend (see Table 1), F1(1, 80) = 6.59, MSE = 2,147, p < .02, η2 = .25; F2(1, 295) = 29.50, MSE = 5,869, p < .001, η2 = .09, while the quadratic/cubic/quartic components were not significant (all Fs < 1).
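The trend decomposition reported above can be made concrete with polynomial contrasts over the five condition means. In the sketch below, the means are hypothetical placeholders (not the values in Table 1), and the standard orthogonal weights assume equally spaced spacing levels:

```python
import numpy as np

# Hypothetical mean RTs (ms) for the five spacing conditions
# (condensed, default, +0.5, +1.0, +1.5); placeholder values only.
means = np.array([623.0, 616.0, 613.0, 611.0, 608.0])

# Orthogonal polynomial contrast weights for five equally spaced levels
linear_w    = np.array([-2, -1, 0, 1, 2])
quadratic_w = np.array([2, -1, -2, -1, 2])

linear_component    = linear_w @ means
quadratic_component = quadratic_w @ means
print(linear_component)     # → -35.0 (negative: RTs decrease with spacing)
print(quadratic_component)  # → 9.0 (small relative to the linear component)
```

A significant linear component with null higher-order components, as in the analysis above, indicates that each increment in spacing buys a roughly constant RT benefit over the range tested.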
The ANOVA on the error data did not reveal any significant effects (both ps > .25).
The ANOVAs on the latency/error data failed to show any significant effects (all Fs < 1).
Diffusion model analysis
Within the diffusion model framework, different data patterns correspond to distinct parameter behavior, and the behavior of the parameters can then be interpreted in terms of psychological processes. To this end, we present fits to the grouped data obtained using the fitting routines described by Ratcliff and Tuerlinckx (2002). We calculated the accuracy and latency (i.e., the RTs at the .1, .3, .5, .7, and .9 quantiles) for “word” and “nonword” responses for all conditions and all participants, and we obtained group-level performance by averaging across subjects (i.e., vincentizing; Ratcliff, 1978; Vincent, 1912). Fitting averaged data is an appropriate procedure for the diffusion model: In previous research (e.g., Ratcliff, Gomez, & McKoon, 2004; Ratcliff, Thapar, & McKoon, 2001), fits to averaged data provided parameter values similar to those obtained by averaging across fits to individual participants. The averaged quantile RTs were used for the diffusion model fits as follows: For each response, the model generated the predicted cumulative probability within the time frames bounded by the five empirical quantiles. Subtracting the cumulative probability at each quantile from that at the next higher quantile yields the proportion of responses expected between each pair of quantiles; these are the expected values for the chi-square computation. The observed values are the empirical proportions of responses that fall within the bins bounded by the 0, .1, .3, .5, .7, .9, and 1.0 quantiles, multiplied by the proportion of responses for that choice (e.g., if there is a .965 response proportion for the word alternative, the proportions would be .965*.1, .965*.2, .965*.2, .965*.2, .965*.2, and .965*.1).
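The vincentizing-plus-binning procedure just described can be sketched as follows. Everything here is hypothetical: the participant RTs are simulated, and a normal CDF (via the standard-library `NormalDist`) stands in for the diffusion model's predicted RT distribution purely to show the mechanics of the chi-square computation.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# Hypothetical "word"-response RTs (s) for 25 participants in one condition
participants = [rng.normal(0.62, 0.12, size=120) for _ in range(25)]

q = [0.1, 0.3, 0.5, 0.7, 0.9]
# Vincentize: average each empirical quantile across participants
group_q = np.mean([np.quantile(p, q) for p in participants], axis=0)

# Observed proportions: the six bins bounded by the 0, .1, .3, .5, .7, .9,
# and 1.0 quantiles hold fixed mass, scaled by the "word" response proportion
p_word = 0.965
observed = p_word * np.array([.1, .2, .2, .2, .2, .1])

# Expected proportions: the model's CDF mass between successive group
# quantiles (a normal CDF stands in for the diffusion model's prediction)
cdf = np.array([NormalDist(0.62, 0.12).cdf(t) for t in group_q])
expected = p_word * np.diff(np.concatenate([[0.0], cdf, [1.0]]))

n_obs = 25 * 120
chi_sq = n_obs * np.sum((observed - expected) ** 2 / expected)
print(chi_sq)  # small when the model's CDF matches the data
```

Because the generating distribution and the stand-in model agree here, the chi-square value is small; misfit (e.g., forcing a shifted CDF) inflates it, which is the logic behind comparing the constrained parameterizations below.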
Table 2 Parameters of the diffusion model for the different scenarios in the experiment (Ter and drift rate free; drift rate free) [table not reproduced]
The unconstrained parameterization yielded a chi-square value of 77.37. Interestingly, the value of the Ter parameter decreased as a function of interletter spacing, from .492 in the condensed condition to .473 in the +1.5 condition. The value of the drift rates for words increased very slightly from the condensed condition (.264) to the +1.5 condition (.278).
Pure distributional shifts (i.e., changes in the locations of distributions) are naturally accounted for by allowing the Ter parameter to vary (i.e., there were five values of Ter, one for each level of spacing, and two drift rates, one for words and one for nonwords). This model yielded a chi-square value of 104.55, which was 14% greater than the value from the unconstrained model (see Ratcliff & Smith, 2010, for a similar result in a perceptual task).
Drift rate parameterization
Across a large variety of manipulations, the mean RT and the variance are correlated; this is so because effects tend to be larger in the tail of the RT distribution than in the faster responses (e.g., the word-frequency effect affects both the mean and the variance of the RTs). These effects are naturally accounted for by allowing the drift rate to vary (i.e., one value of Ter and 10 values of drift rate: one for words and one for nonwords for each level of spacing); this model yielded a chi-square value of 126.63, which was 38% worse than the value for the unconstrained model.
The findings from the present experiment are clear. First, small increases of interletter spacing (relative to the default settings) lead to faster word identification times, extending the findings of Perea et al. (2011) to a wider range of interletter spacing conditions. Second, the effect of interletter spacing shows a decreasing linear trend (see Table 1). Third, the effect of interletter spacing occurs at the encoding level rather than at a decisional level, as deduced from the fits of the diffusion model (see Fig. 1).
What about the locus of the effect of interletter spacing for word stimuli? The locus of this effect is at an encoding level (rather than at the decision level), as deduced from the fits of the diffusion model: The fits were very good when the encoding parameter (Ter) was allowed to vary freely across the spacing conditions, but rather poor when the drift rate (i.e., the quality of lexical information) was allowed to vary across conditions (see Fig. 1). To our knowledge, this is the first time that a manipulation at the stimulus level has produced an effect on encoding time rather than on the quality of information (i.e., drift rates); note that this finding undermines a common criticism of the diffusion model approach, that “everything goes to drift rate.”2 This encoding advantage for words with slightly wider interletter spacing was presumably due to less “crowding” or to a more accurate “letter position coding” process; the present experiment was not designed to disentangle these two accounts, however. Thus, increased letter spacing can be thought to enhance the perceptual normalization phase, which would affect Ter but not drift rates. This pattern of data is consistent with the experimental findings of Yap and Balota (2007), who found that degrading the lexical string led to a shift of the RT distribution, with little or no effect on the error data. Although Yap and Balota did not conduct any explicit modeling, their finding is consistent with the view that stimulus degradation affects the encoding process (i.e., Ter in the diffusion model), namely, that the decision process does not begin until the appropriate information has been extracted from the stimulus.
In sum, the present experiment with adult skilled readers has revealed that small increases in interletter spacing (relative to the default settings) have a positive impact on lexical access, and that the locus of the effect is at an early encoding (nondecisional) stage. This finding opens a new window of opportunities to examine the role of interletter spacing not only in other well-known word identification paradigms, but also in more applied settings (i.e., normal silent reading).
It is important to note here that two influential factors in visual-word recognition experiments, word frequency and repetition, are related to the quality of information (e.g., RT distributions corresponding to high-frequency words [or repeated words] have a less pronounced asymmetry than do those corresponding to low-frequency words [or nonrepeated words]).
The research reported in this article was partially supported by Grant PSI2008-04069/PSIC and CONSOLIDER-INGENIO2010 CSD2008-00048 from the Spanish Ministry of Science and Innovation. We thank Colin Davis, Corey White, and Tessa Warren for helpful comments on an earlier draft. We also thank Carmen Moret for running the participants in the experiment.