Behavior Research Methods

Volume 45, Issue 1, pp 160–168

Sensory experience ratings for over 5,000 mono- and disyllabic words

  • Barbara J. Juhasz
  • Melvin J. Yap


Sensory experience ratings (SERs) reflect the extent to which a word evokes a sensory and/or perceptual experience in the mind of the reader. Juhasz, Yap, Dicke, Taylor, and Gullick (Quarterly Journal of Experimental Psychology 64:1683–1691, 2011) demonstrated that SERs predict a significant amount of variance in lexical-decision response times in two megastudies of lexical processing when a large number of established psycholinguistic variables are controlled for. Here we provide the SERs for the 2,857 monosyllabic words used in the Juhasz et al. study, as well as newly collected ratings on 3,000 disyllabic words. New analyses with the combined set of words confirmed that SERs predict a reliable amount of variance in the lexical-decision response times and naming times from the English Lexicon Project (Balota, Yap, Cortese, Hutchison, Kessler, Loftus, & Treiman, Behavior Research Methods 39:445–459, 2007) when a large number of surface, lexical, and semantic variables are statistically controlled for. The results suggest that the relative availability of sensory/perceptual information associated with a word contributes to lexical–semantic processing.


Keywords: Sensory experience rating · Embodied cognition · Visual word recognition · Semantic richness

Recognizing a visual word is a complex process that includes access to its orthographic, phonological, and semantic representations. The purpose of the present study is to provide sensory experience ratings (SERs) on over 5,000 mono- and disyllabic words. SERs index the degree to which words evoke a sensory or perceptual experience when read silently. The rating is motivated by the grounded-cognition framework, which views conceptual processing as being rooted in the perceptual systems (e.g., Barsalou, Simmons, Barbey, & Wilson, 2003). For example, the word incense is rated relatively high on the 7-point SER scale (5.90). According to the theory, this word may not only create a mental image of incense in the mind of the reader, but also a slight but perceptible olfactory trace. This olfactory trace is believed to be available to the reader when asked to introspectively probe the experience, and it may impact word recognition processes, as evidenced by the impact of SERs on lexical-decision performance (Juhasz et al., 2011).

The activation of sensory/perceptual information during word processing is supported by neuroimaging data. For example, González et al. (2006) demonstrated that words related to the sense of smell produced more activity in olfactory areas of the brain than did control words. In addition, Pulvermüller, Shtyrov, and Ilmoniemi (2005) showed that auditorily presented action verbs related to the face and leg activated corresponding brain regions consistent with the somatotopic map of the motor and premotor cortex. Importantly, this differential activation of brain regions occurred within 200 ms after the unique auditory stimulus was available. Pulvermüller et al. suggested that this time course implies that the activation of sensorimotor information is a relatively automatic component of semantic access.

The neuroimaging results cited above suggest that specific types of words (olfactory and action verbs) are grounded in sensory systems. However, a major strength of the SER variable is that it is not limited to a single sensation, but can be used to probe links between word meaning and all sensory/perceptual modalities. According to the “language and situated simulation” (LASS) model of conceptual processing (Barsalou, Santos, Simmons, & Wilson, 2008), word recognition entails both the activation of a linguistic form and the activation of a situated simulation that is grounded within the perceptual and sensory systems. According to the theory, activation of the linguistic form typically reaches a peak prior to the situated simulations. Situated simulations of sensory and perceptual information reflect deeper conceptual processing, which may be relied on to a greater extent in certain word recognition tasks. In this light, the SER variable may be conceptualized as indexing the degree to which a word evokes a strong or meaningful situated simulation.

Relationships between SER and other semantic variables

SER is an example of a subjective semantic variable, as it is rated by individuals on a 1 to 7 scale. In contrast, objective variables are typically based on corpus statistics. One of the most well-established and studied subjective semantic variables is imageability (e.g., Toglia & Battig, 1978). Imageability is based on the idea that words that easily evoke a mental image are easier to recognize than those that do not. According to Paivio’s (1971) dual-coding theory, words that refer to concrete entities show an advantage in processing due to their ability to bring to mind an image (however, see Schwanenflugel, Harnishfeger, & Stowe, 1988, for an alternative view of concreteness effects). Imageability and SERs are conceptually related. Instructions for imageability, however, tend to stress visual and sound images, with only a brief mention given to “other sensory experiences” (see, e.g., Bennett, Burnett, Siakaluk, & Pexman, 2011; Cortese & Fugett, 2004; Schock, Cortese, & Khanna, 2012). In addition, the instructions require participants to attempt to create a mental image and to judge the ease with which it happens. By contrast, SERs ask participants to judge a word’s ability to evoke an “actual sensation (taste, touch, sight, sound, or smell) you experience by reading the word” (the full instructions for SERs can be found in the Appendix). Juhasz et al. (2011) demonstrated that even though SERs and imageability are significantly correlated for monosyllabic words (r = .463), the two variables independently predict unique variance in lexical-decision latencies. Thus, SERs may be particularly useful to researchers interested in examining sensory activation from words in modalities such as olfaction, taste, touch, and hearing. SERs may also be useful for examining the sensations evoked by both positive and negative emotion words, which have been found to speed lexical-decision latencies (see, e.g., Kousta, Vinson, & Vigliocco, 2009).
Both words that are rated as positive in valence and words rated as negative may produce sensory simulations in the reader. This added sensory information may speed recognition of the words, due to their “richer” semantic information. Recent research has provided evidence that a richer semantic representation affects processing across a variety of word recognition tasks (see Pexman, Hargreaves, Siakaluk, Bodner, & Pope, 2008; Yap, Pexman, Wellsby, Hargreaves, & Huff, 2012; Yap, Tan, Pexman, & Hargreaves, 2011).

Age of acquisition (AoA) is another subjectively rated variable that has been argued to be related to semantic processes (see Juhasz, 2005, for a review), with earlier-acquired words taking less time to recognize. A debate in the literature concerns the locus of AoA effects. However, converging computational efforts and empirical research have suggested that AoA affects access to the semantic representations of words (e.g., Brysbaert, Van Wijnendaele, & De Deyne, 2000; Gullick & Juhasz, 2008; Steyvers & Tenenbaum, 2005). The connectionist modeling efforts of Ellis and Lambon Ralph (2000) suggest that AoA effects may reflect a general learning property in the mental lexicon based on network plasticity and may therefore affect access to orthographic, phonological, and semantic representations. Juhasz et al. (2011) reported a significant correlation between SERs and AoA (r = –.222), consistent with the notion that earlier-acquired concepts are more likely to be tied to the sensory/perceptual systems. In addition, the body–object interaction (BOI) variable (Siakaluk, Pexman, Aguilera, Owen, & Sears, 2008), which reflects the extent to which a human body can interact with a word’s referent, is another subjective semantic variable that is related to the grounded-cognition framework, although it is concerned with one specific aspect: sensorimotor relationships. Hargreaves et al. (2012) recently demonstrated that words with a high BOI produce more activation in the left supramarginal gyrus of the parietal lobe, a region linked to memory for kinesthetic interaction. This finding further supports a role of sensorimotor activation during word recognition. While BOI indexes one aspect of sensory activation during visual word recognition, Juhasz et al. demonstrated that SERs were still a significant predictor of lexical-decision latency when BOI, imageability, and AoA were included in the regression analyses. 
Of course, imageability, AoA, and BOI are only three of a multitude of published subjective semantic variables (see Pexman, 2012, for a review). Others have also been reported, such as Wurm’s (2007) danger and usefulness measures.

Validating semantic effects in large-scale studies of word recognition

Recently, megastudies have been used to examine the potential contributions of various surface, sublexical, lexical, and semantic variables to word recognition performance (see Balota, Yap, Hutchison, & Cortese, 2012, for a review). Megastudies, which contain mean recognition latencies for thousands of words, are very useful for evaluating whether new variables are able to account for additional unique variance in word recognition performance, above and beyond other correlated variables. These approaches have methodological advantages over the traditional factorial design, since many predictor variables are intercorrelated, which makes the selection of materials for factorial designs problematic (see Cutler, 1981). For example, Balota, Cortese, Sergent-Marshall, Spieler and Yap (2004) conducted hierarchical linear regressions to examine the influences of various variables on word-naming and lexical-decision times for a large set of monosyllabic words. Importantly, they reported that semantic variables such as rated imageability and semantic connectivity (i.e., the extent to which words are connected to other words in the semantic network) predicted unique variance even after a host of correlated lexical variables were controlled for. Using a similar approach, Cortese and Khanna (2007) examined the effects of AoA and imageability on word recognition performance by carrying out regression analyses that included both factors. Although AoA was a significant predictor of both lexical-decision and naming performance when imageability was controlled for, imageability effects were observed only in lexical decision, but not naming, when AoA was controlled for. Other researchers (e.g., Pexman et al., 2008; Yap et al., 2011) have also used megastudy data to explore the effects of more objectively defined semantic variables (e.g., number of senses, number of features) on lexical processing.

Juhasz et al. (2011) employed the same procedure to examine the newly developed SER variable. Ratings were collected on over 2,850 monosyllabic words, and hierarchical linear regressions were conducted in the same fashion as reported in Cortese and Khanna (2007). SER was added in a final step after imageability and AoA. Importantly, SER significantly predicted lexical-decision response times in both the Balota et al. (2004) database and the British Lexicon Project database (BLP; Keuleers, Lacey, Rastle, & Brysbaert, 2012). SER did not predict significant variance in naming times in the Balota et al. (2004) database (see note 2 in Juhasz et al., 2011). In further analyses with a subset of items, SER predicted unique variance in lexical-decision times even when BOI was included in the regression. Thus, SER appears to be a distinct subjective semantic variable that reliably predicts lexical-decision response times.

All of the above studies examined word recognition performance on monosyllabic words. As Yap and Balota (2009) noted, multisyllabic words vastly outnumber monosyllabic ones in the English language, and processing multisyllabic words implicates additional processes such as syllabification and stress assignment. Thus, recent efforts have been geared toward characterizing the extent to which effects in monosyllabic word recognition generalize to multisyllabic word recognition. For example, Yap and Balota (2009) examined the influence of two objective semantic variables: semantic neighborhood density (Durda, Buchanan, & Caron, 2006) and WordNet (Miller, 1995) number of senses. Both variables significantly predicted word-naming and lexical-decision latencies for multisyllabic words when these variables were entered in the final step in hierarchical linear regression analyses. The contribution of the variables was larger in lexical decision, as performance on this task may rely more on access to meaning than is the case in word naming (Balota & Chumbley, 1984). No subjective semantic variables were examined in the Yap and Balota study, as the norms were not available at that time. However, in recent efforts, ratings have been collected for imageability (Schock et al., 2012) and AoA (Schock, Cortese, & Yap, 2011), as well as for BOI (Bennett et al., 2011), on multisyllabic words. These efforts will afford a further rigorous test of the effects of subjective semantic variables on word recognition performance.

The major purpose of the present study was to collect and report SERs for 3,000 disyllabic words. Importantly, these disyllabic words are the same ones for which imageability and AoA ratings have recently been made available (Schock et al., 2012; Schock et al., 2011). Having semantic measures for a common set of disyllabic words will allow researchers to more effectively manipulate and control for these variables. These ratings were collected in a manner analogous to the one used by Juhasz et al. (2011) to collect ratings for monosyllabic words. Ratings for the full set of mono- and disyllabic words (the newly collected ones as well as those collected for Juhasz et al., 2011) are available in the supplementary materials for this article. We chose to examine both the mono- and disyllabic words in the same analyses. In order to validate the SER measure, hierarchical regressions, broadly following the procedure of Yap and Balota (2009), were conducted on word-naming latency and lexical-decision times from the English Lexicon Project (Balota et al., 2007).1 In addition to the predictor variables considered in the main analyses reported by Juhasz et al. (2011) for monosyllabic words, we also included variables specific to multisyllabic words, such as number of syllables and position of stress assignment, as well as two objective semantic variables (number of senses and semantic neighborhood density) that were not controlled for by Juhasz et al. (2011).



Method

Participants

A group of 63 native English speakers from Wesleyan University participated in the SER word-rating task for course credit.


Materials

The 3,000 disyllabic words rated in the present study were the ones for which imageability ratings have recently been made available (Schock et al., 2012).


Procedure

The procedure was identical to that used by Juhasz et al. (2011) to collect SERs on 2,857 monosyllabic words. The 3,000 stimuli were divided into six questionnaires, which were administered to groups of participants. Although responses were untimed, the entire session took less than 1 h. Participants were given an instruction sheet (see the Appendix) that asked them to rate the degree to which each word evoked a sensory experience, on a 1 to 7 scale, with higher numbers indicating a greater sensory experience. Next to each word was a scale, and participants were asked to circle their responses. Each questionnaire was rated by 10–11 participants. For each word, responses left blank or circles not clearly marked on a single number were excluded from the computation of the word’s average SER.
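The averaging rule above can be sketched in a few lines. This is an illustrative Python sketch, not the authors' actual scoring script; invalid responses (blanks or ambiguous circles) are coded here as None by assumption.

```python
def average_ser(ratings):
    """Mean SER over valid responses on the 1-7 scale.

    None marks a blank scale or a circle not clearly on one number;
    such responses are excluded, mirroring the rule described in the text.
    """
    valid = [r for r in ratings if r is not None and 1 <= r <= 7]
    if not valid:
        return None  # no usable ratings for this word
    return sum(valid) / len(valid)
```

For example, a word rated 5, 6, and 7 by three raters, with a fourth rater leaving the scale blank, receives an average SER of 6.0.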

Data analysis

Word-naming and lexical-decision response times were analyzed for 4,738 mono- and disyllabic words from the English Lexicon Project (ELP; Balota et al., 2007) to assess the role of SER in word recognition performance. Only words for which all relevant predictor variables were available were included in the analyses. The analysis procedure employed by Yap and Balota (2009) to assess word recognition performance on multisyllabic words was utilized. Average lexical-decision and naming latencies in the ELP database for each word were used as the dependent variables. Predictor variables were entered into a hierarchical linear regression in six steps. In Step 1, 13 dichotomous variables related to the characteristics of word-initial phonemes were entered (Balota et al., 2004). In Step 2, the stress pattern of the word was captured by a dummy code, wherein words with stress on the first syllable comprised the reference group. In Step 3, variables hypothesized to be related to the lexical-word-form level were included in the analyses. These variables consisted of both linear and quadratic length (New, Ferrand, Pallier, & Brysbaert, 2006), number of syllables, orthographic neighborhood size (Coltheart, Davelaar, Jonasson, & Besner, 1977), phonological neighborhood size (Yates, 2005), word frequency (Lund & Burgess, 1996), Levenshtein measures (Yarkoni, Balota, & Yap, 2008), and measures of feedforward and feedback phonological consistency (Yap & Balota, 2009). In Step 4, two measures of the objective semantic properties of the words were entered: the number of senses of the word, as measured in WordNet (Miller, 1995), as well as semantic neighborhood density, based on the average radius of co-occurrence (ARC) measure; words with higher ARC values are associated with denser semantic neighborhoods (Shaoul & Westbury, 2010).
This was followed by Step 5, in which two subjective measures thought to be related to semantic processing were included: imageability (from Cortese & Fugett, 2004, and Schock et al., 2012) and AoA (from Cortese & Khanna, 2007, and Schock et al., 2011).2 SERs were entered in the final step of the analyses, in order to assess their unique contribution to word recognition performance once the variance predicted by the other variables was controlled for.
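The logic of entering predictors in blocks and reading off the change in R² at each step can be illustrated with a small, self-contained sketch. This is not the study's analysis code: the toy data, the two predictor blocks, and every variable name below are invented for illustration.

```python
def solve(a, b):
    """Solve the linear system a @ x = b by Gauss-Jordan elimination."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[col][col]:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def r_squared(xs, y):
    """R^2 of an OLS regression of y on the predictor columns xs (with intercept)."""
    rows = [[1.0] + list(row) for row in zip(*xs)]
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(xtx, xty)
    yhat = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - h) ** 2 for yi, h in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Toy data: response times plus two predictor blocks entered hierarchically.
rt        = [610, 580, 650, 600, 570, 640, 620, 590]
frequency = [3.1, 4.0, 2.2, 3.5, 4.4, 2.5, 2.9, 3.8]   # earlier step: a control block
ser       = [2.0, 4.5, 1.5, 3.0, 5.0, 2.5, 2.2, 4.0]   # final step: SER entered last

r2_step_a = r_squared([frequency], rt)          # controls only
r2_step_b = r_squared([frequency, ser], rt)     # controls + SER
delta_r2 = r2_step_b - r2_step_a                # unique variance attributable to SER
```

Because the models are nested, delta_r2 is the unique variance the final-step predictor explains once the earlier blocks are controlled for, which is exactly the quantity the hierarchical analyses report for SER.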

Results and discussion

The average SER for the 4,738 words included in the analyses was 2.96 (SD = .98). Table 1 reports the correlations between SERs and the other semantic variables included in the analyses. Table 2 reports the effect of SERs on lexical-decision and word-naming latencies. The correlation between SERs and imageability is higher in the combined data set (r = .586) than that reported in Juhasz et al. (2011) for monosyllabic words alone. The size of the correlation is similar to that reported between AoA and imageability (r = –.586) for the words presented in the Gilhooly and Logie (1980) norms (as calculated by Zevin & Seidenberg, 2002), as well as between BOI and imageability for 1,618 monosyllabic nouns (r = .67; Tillotson, Siakaluk, & Pexman, 2008). These correlations are smaller than those reported for different imageability norms (all of which have rs > .80; see Cortese & Fugett, 2004, and Schock et al., 2012). Thus, while SERs and imageability are obviously related constructs, we feel that SERs provide a more direct measure of the degree of sensory activation by visual word forms. The correlation between SERs and AoA (r = –.223) in the combined data set is almost identical to that reported previously for monosyllabic words (r = –.222). This again supports the position that words learned earlier in life are more likely to be tied to sensory/perceptual experiences. Finally, the correlations between SER and the two objective semantic variables included in the present analysis (number of senses and semantic neighborhood density) are negligible, suggesting that they tap different constructs.
Table 1

Correlations between SER and other semantic variables: 1. sensory experience rating; 2. number of senses; 3. semantic neighborhood density; 4. age of acquisition; 5. imageability

** p < .01. *** p < .001

Table 2

Standardized response time (RT) regression coefficients of the item-level regression analyses (n = 4,738), for lexical-decision and naming RTs. Adjusted R² and the change in R² are reported at each step: Step 1, onsets; Step 2, stress; Step 3, lexical variables; Step 4, objective semantic variables; Step 5, subjective semantic variables; Step 6, sensory experience rating.

** p < .01. *** p < .001

The results from lexical decision mirror those reported in Juhasz et al. (2011). With the addition of disyllabic words to the analyses, SERs predict a unique amount of variance in lexical-decision latencies. The effect of SERs on naming latencies is reported in Table 2. In contrast to the analyses reported in Juhasz et al.’s note 2, SERs were a significant predictor of naming latencies when the disyllabic words were included along with the additional control variables. The addition of the new words may have increased the power of the analysis for detecting the influence of SERs. These results support the position that the meaning of a word is in fact accessed during the process of generating the phonology for a word (Yap et al., 2011) and that words that are more likely to evoke a sensory experience show an advantage in the speed with which a phonological word form is activated.

It should be noted that the amount of variance predicted by SERs is rather modest (0.1 % for both lexical decision and word naming) after including so many control variables in the analyses. To recapitulate, in addition to various measures reflecting the orthographic and phonological characteristics of words, we also included four semantic measures (AoA, imageability, number of senses, and neighborhood density). That said, we need to emphasize that the theoretical importance of a variable cannot be fully gauged by the size of its effect. For example, although the Frequency × Phonological Consistency interaction accounts for very little variance (Sibley, Kello, & Seidenberg, 2009), the interaction is theoretically very important and central to the ongoing debate of whether reading aloud involves one or two mechanisms (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; Plaut, McClelland, Seidenberg, & Patterson, 1996). Moreover, not all new variables survive this rigorous and conservative test. For example, imageability effects in monosyllabic word-naming performance were no longer significant once AoA was controlled for (Cortese & Khanna, 2007). Likewise, the semantic size variable introduced by Sereno, O’Donnell, and Sereno (2009) did not account for variance in lexical-decision performance when correlated variables were included in the analysis with 324 words (Kang, Yap, Tse, & Kurby, 2011).

In order to ascertain the robustness of the SER effect, we decided to explore its influence using the BLP, an additional established database of lexical-decision response times. By examining the 4,660 items that are represented in both the ELP and BLP, we found that SERs significantly predicted lexical-decision times in the BLP (β = –.089, p < .001) and accounted for 0.5 % of the variance even after the same control variables were included. Thus, while SERs only predict a small amount of additional variance in word recognition times when added in the final step of the analysis, their effect consistently replicates across different and independent databases.3 To validate SER, we also examined the zero-order correlations between the predictor variables included in Steps 2 to 6 of the hierarchical regression analyses reported above and the lexical-decision response times for the 4,660 items included in both the ELP and BLP (see Kuperman, Stadthagen-Gonzalez, & Brysbaert, 2012). These correlations (reported in Table 3) confirm that while the effect size for SER is modest, it is significantly correlated with response times in the lexical-decision task. SER had smaller zero-order correlations than the other semantic variables in both databases. However, in both databases, SER displayed a larger zero-order correlation with response times than did the feedforward and feedback consistency measures and stress pattern. In addition, in the BLP database, SER displayed a larger raw correlation with response times than did traditional measures of orthographic and phonological neighborhood size, as well as measures of word length.
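For readers unfamiliar with the term, a zero-order correlation is simply the Pearson correlation between a single predictor and the response times, computed without controlling for any other variable. A minimal sketch with made-up values (the numbers below are illustrative, not from the norms):

```python
from math import sqrt

def pearson(x, y):
    """Zero-order (Pearson) correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical items: higher SER tends to go with faster (smaller) RTs,
# so the zero-order correlation comes out negative, as in the text.
rt  = [650, 600, 700, 620, 580, 690]
ser = [4.2, 5.0, 2.1, 3.8, 5.5, 2.4]

r = pearson(ser, rt)
```

Because such correlations ignore all other predictors, shared variance (e.g., between SER and imageability) is not partialled out, which is why the hierarchical and cross-validation analyses are needed.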
Table 3

Correlations between predictor variables and standardized lexical-decision response times (RTs) for items in the English Lexicon Project (Balota et al., 2007) and the British Lexicon Project (Keuleers et al., 2012). Predictors are listed in descending order of the size of their correlation with RTs in each database.

English Lexicon Project: age of acquisition; log HAL frequency; semantic neighborhood density; number of WordNet senses; orthographic Levenshtein distance; phonological Levenshtein distance; number of letters (quadratic); orthographic neighborhood size; number of letters; Levenshtein neighborhood frequency; phonological neighborhood size; number of syllables; Levenshtein consistency; sensory experience ratings; feedback onset consistency (composite); stress pattern; feedforward rime consistency (S1); feedforward onset consistency (composite); feedforward rime consistency (composite); feedback onset consistency (S1); feedback rime consistency (composite); feedforward onset consistency (S1); feedback rime consistency (S1)

British Lexicon Project: age of acquisition; log HAL frequency; semantic neighborhood density; number of WordNet senses; orthographic Levenshtein distance; phonological Levenshtein distance; sensory experience ratings; Levenshtein neighborhood frequency; orthographic neighborhood size; number of letters; number of letters (quadratic); phonological neighborhood size; number of syllables; Levenshtein consistency; feedback onset consistency (composite); feedback onset consistency (S1); stress pattern; feedforward onset consistency (S1); feedforward onset consistency (composite); feedback rime consistency (composite); feedforward rime consistency (S1); feedforward rime consistency (composite); feedback rime consistency (S1)

† p < .10. * p < .05. ** p < .01. *** p < .001

However, although the zero-order correlations are suggestive, they must be interpreted cautiously, as they do not take into account the shared variance between predictors. In order to more conclusively establish the robustness of SER, we conducted supplementary cross-validation analyses, which are less likely to (spuriously) overestimate the amount of variance accounted for by SER (see Tops, Callens, Lammertyn, Van Hees, & Brysbaert, 2012).4 In order to carry out cross-validation, the data set is first partitioned into a training set and a test (i.e., hold-back) set. Over multiple iterations, the training set is used to fit a regression model, and the predictive power of the model is then evaluated with the test set; this ensures that the data used for model fitting are not also used for model testing. The overarching purpose of such an analysis is to converge on an optimal regression model that is high in generalizability; the variables that survive in the final model, and their rank order of importance, are identified.

In line with Tops et al. (2012), we relied on a resampling technique called tenfold cross-validation (Kuhn, 2008), wherein the data set is partitioned into ten folds (i.e., nine folds are used for training and one for testing in each iteration). Using R (R Development Core Team, 2011), we built separate predictive models based on the ELP (speeded naming and lexical decision) and BLP (lexical decision) data, using the resampling-based recursive feature elimination algorithm in the caret package (Kuhn, 2012). Importantly, the SER variable survived this cross-validation procedure in all three data sets, attesting to its reliability and generalizability. Specifically, SER was ranked fourth out of 35 surviving variables for BLP lexical decision (regression coefficient = –.040), eighth out of 20 surviving variables for ELP lexical decision (regression coefficient = –.016), and 24th out of 30 surviving variables for ELP speeded pronunciation (regression coefficient = –.040).
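The analyses above were run in R with the caret package; as a language-neutral illustration of the tenfold partitioning idea (not the caret recursive-feature-elimination code itself), the following Python sketch shows how each item serves in the held-out fold exactly once. The toy model and scoring function are assumptions made for the example.

```python
def ten_folds(items, k=10):
    """Partition items into k roughly equal, disjoint folds."""
    return [items[i::k] for i in range(k)]

def cross_validate(items, fit, score, k=10):
    """Train on k-1 folds and score on the held-out fold, once per fold."""
    folds = ten_folds(items, k)
    results = []
    for i, test in enumerate(folds):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = fit(train)                 # fitting never sees the test fold
        results.append(score(model, test)) # evaluation only on held-out items
    return results

# Toy usage: the "model" is just the training mean, scored by mean absolute error.
data = list(range(50))
scores = cross_validate(
    data,
    fit=lambda train: sum(train) / len(train),
    score=lambda m, test: sum(abs(x - m) for x in test) / len(test),
)
```

A variable "survives" such a procedure when the model selected across folds consistently retains it, which is the sense in which SER's survival in all three data sets attests to its generalizability.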

According to the LASS model of conceptual processing (Barsalou et al., 2008) discussed in the introduction, sensory simulations are activated automatically by word forms, but these simulations reflect deeper conceptual processing and may be relied on more in word recognition tasks (e.g., semantic categorization) that require the specific meaning of a word to be computed. Thus, according to this theory, we would expect SERs to predict a greater amount of variation in tasks such as reading for meaning and semantic categorization. Further research should be conducted to evaluate this claim.

Conclusions

The present study confirms that SERs, a rating of sensory/perceptual activation from words, predict a unique amount of variance in word recognition performance when a large set of existing psycholinguistic variables are statistically controlled. Traditionally, word recognition researchers have focused more on sublexical- and lexical-level characteristics in their work, and most models of visual word recognition do not implement mechanisms for computing meaning (see Pexman, 2012). More recently, the investigation of variables such as BOI (e.g., Siakaluk et al., 2008), danger/usefulness (Wurm, 2007), and SER have provided further evidence that sensory and perceptual experiences with a word play a role when it is recognized; words that are rated as evoking more sensory activation facilitate word recognition. SERs offer a relatively simple way to gauge the degree of sensory activation for all word classes across all sensory modalities and can be used to examine the influence of sensory activation in additional paradigms. In addition, future neuroimaging research using these ratings may examine whether words with a high SER produce a differing pattern of brain activation (see Hargreaves et al., 2012), as suggested by the grounded-cognition conceptual framework.


Footnotes

  1.

    This is a different database than the one examined in Juhasz et al. (2011), which utilized the Balota et al. (2004) database, as well as the newly available British Lexicon Project lexical-decision times (Keuleers et al., 2012).

  2.

    Inclusion of semantic variables at two separate steps did not impact the size or reliability of the SER variable’s regression coefficients. If all semantic variables were added in the same step, the amount of variance predicted by all semantic variables was equivalent to that currently reported at the end of Step 6.

  3.

    We also examined the impact of SERs separately for nouns, verbs, and adjectives (the three most common syntactic categories) for lexical-decision and naming times in the ELP (Balota et al., 2007) and lexical decisions in the BLP (Keuleers et al., 2012). SERs were only a significant predictor of response times for nouns. However, these results must be interpreted with caution, as many more nouns (n = 3,182) than verbs (n = 810) and adjectives (n = 526) were among the items analyzed. Additional work can be directed at exploring the differential impact of SERs on different parts of speech.

  4.

    We are grateful to Marc Brysbaert for suggesting these analyses.


Author note

We thank Alix Haber, Alexandra Pogosky, and Jennifer Brewer for data collection, Jan Lammertyn for assistance with the cross-validation analyses, as well as Marc Brysbaert and two anonymous reviewers for helpful comments on a previous version of the manuscript.

Supplementary material

13428_2012_242_MOESM1_ESM.xls (377 kb)
ESM 1 (XLS 377 kb)


  1. Balota, D. A., & Chumbley, J. I. (1984). Are lexical decisions a good measure of lexical access? The role of word frequency in the neglected decision stage. Journal of Experimental Psychology: Human Perception and Performance, 10, 340–357. doi: 10.1037/0096-1523.10.3.340
  2. Balota, D. A., Cortese, M. J., Sergent-Marshall, S. D., Spieler, D. H., & Yap, M. J. (2004). Visual word recognition for single-syllable words. Journal of Experimental Psychology: General, 133, 283–316. doi: 10.1037/0096-3445.133.2.283
  3. Balota, D. A., Yap, M. J., Cortese, M. J., Hutchison, K. A., Kessler, B., Loftus, B., & Treiman, R. (2007). The English Lexicon Project. Behavior Research Methods, 39, 445–459. doi: 10.3758/BF03193014
  4. Balota, D. A., Yap, M. J., Hutchison, K. A., & Cortese, M. J. (2012). Megastudies: What do millions (or so) of trials tell us about lexical processing? In J. S. Adelman (Ed.), Visual word recognition: Vol. 1. Models and methods, orthography and phonology (pp. 90–115). Hove, U.K.: Psychology Press.
  5. Barsalou, L. W., Santos, A., Simmons, W. K., & Wilson, C. D. (2008). Language and simulation in conceptual processing. In M. De Vega, A. M. Glenberg, & A. C. A. Graesser (Eds.), Symbols, embodiment, and meaning (pp. 245–283). Oxford: Oxford University Press.
  6. Barsalou, L. W., Simmons, W. K., Barbey, A., & Wilson, C. D. (2003). Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7, 84–91.
  7. Bennett, S. D. R., Burnett, A. N., Siakaluk, P. D., & Pexman, P. M. (2011). Imageability and body–object interaction ratings for 599 multisyllabic words. Behavior Research Methods, 43, 1100–1109. doi: 10.3758/s13428-011-0117-5
  8. Brysbaert, M., Van Wijnendaele, I., & De Deyne, S. (2000). Age-of-acquisition effects in semantic processing tasks. Acta Psychologica, 104, 215–226. doi: 10.1016/S0001-6918(00)00021-4
  9. Coltheart, M., Davelaar, E., Jonasson, J. T., & Besner, D. (1977). Access to the internal lexicon. In S. Dornic (Ed.), Attention and performance VI (pp. 535–555). Hillsdale: Erlbaum.
  10. Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204–256. doi: 10.1037/0033-295X.108.1.204
  11. Cortese, M. J., & Fugett, A. (2004). Imageability ratings for 3,000 monosyllabic words. Behavior Research Methods, Instruments, & Computers, 36, 384–387. doi: 10.3758/BF03195585
  12. Cortese, M. J., & Khanna, M. M. (2007). Age of acquisition predicts naming and lexical-decision performance above and beyond 22 other predictor variables: An analysis of 2,342 words. Quarterly Journal of Experimental Psychology, 60, 1072–1082.
  13. Cutler, A. (1981). Making up materials is a confounded nuisance, or: Will we be able to run any psycholinguistic experiments at all in 1990? Cognition, 10, 65–70. doi: 10.1016/0010-0277(81)90026-3
  14. Durda, K., Buchanan, L., & Caron, R. (2006). Wordmine2. University of Windsor. Retrieved from
  15. Ellis, A. W., & Lambon Ralph, M. A. (2000). Age of acquisition effects in lexical processing reflect loss of plasticity in maturing systems: Insights from connectionist networks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1103–1123.
  16. Gilhooly, K. J., & Logie, R. H. (1980). Age-of-acquisition, imagery, concreteness, familiarity, and ambiguity measures for 1,944 words. Behavior Research Methods & Instrumentation, 12, 395–427. doi: 10.3758/BF03201693
  17. González, J., Barros-Loscertales, A., Pulvermüller, F., Meseguer, V., Sanjuán, A., Belloch, A., & Ávila, C. (2006). Reading cinnamon activates olfactory brain regions. NeuroImage, 32, 906–912. doi: 10.1016/j.neuroimage.2006.03.037
  18. Gullick, M. M., & Juhasz, B. J. (2008). Age of acquisition’s effect on memory for semantically associated word pairs. Quarterly Journal of Experimental Psychology, 61, 1177–1185.
  19. Hargreaves, I. S., Leonard, G. A., Pexman, P. M., Pittman, D. J., Siakaluk, P. D., & Goodyear, B. G. (2012). The neural correlates of the body–object interaction effect in semantic processing. Frontiers in Human Neuroscience, 6, 22. doi: 10.3389/fnhum.2012.00022
  20. Juhasz, B. J. (2005). Age-of-acquisition effects in word and picture identification. Psychological Bulletin, 131, 684–712. doi: 10.1037/0033-2909.131.5.684
  21. Juhasz, B. J., Yap, M. J., Dicke, J., Taylor, S. C., & Gullick, M. M. (2011). Tangible words are recognized faster: The grounding of meaning in sensory and perceptual systems. Quarterly Journal of Experimental Psychology, 64, 1683–1691. doi: 10.1080/17470218.2011.605150
  22. Kang, S. H. K., Yap, M. J., Tse, C.-S., & Kurby, C. A. (2011). Semantic size does not matter: “Bigger” words are not recognized faster. Quarterly Journal of Experimental Psychology, 64, 1041–1047.
  23. Keuleers, E., Lacey, P., Rastle, K., & Brysbaert, M. (2012). The British Lexicon Project: Lexical decision data for 28,730 monosyllabic and disyllabic English words. Behavior Research Methods, 44, 287–304. doi: 10.3758/s13428-011-0118-4
  24. Kousta, S.-T., Vinson, D. P., & Vigliocco, G. (2009). Emotion words, regardless of polarity, have a processing advantage over neutral words. Cognition, 112, 473–481.
  25. Kuhn, M. (2008). Building predictive models in R using the caret package. Journal of Statistical Software, 28, 1–26.
  26. Kuhn, M. (2012). “Caret” package (R Package Version 5.15-023). Vienna, Austria: R Foundation for Statistical Computing.
  27. Kuperman, V., Stadthagen-Gonzalez, H., & Brysbaert, M. (2012). Age-of-acquisition ratings for 30,000 English words. Behavior Research Methods. doi: 10.3758/s13428-012-0210-4
  28. Lund, K., & Burgess, C. (1996). Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, & Computers, 28, 203–208. doi: 10.3758/BF03204766
  29. Miller, G. A. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11), 39–41.
  30. New, B., Ferrand, L., Pallier, C., & Brysbaert, M. (2006). Reexamining the word length effect in visual word recognition: New evidence from the English Lexicon Project. Psychonomic Bulletin & Review, 13, 45–52. doi: 10.3758/BF03193811
  31. Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart & Winston.
  32. Pexman, P. M., Hargreaves, I. S., Siakaluk, P. D., Bodner, G. E., & Pope, J. (2008). There are many ways to be rich: Effects of three measures of semantic richness on visual word recognition. Psychonomic Bulletin & Review, 15, 161–167. doi: 10.3758/PBR.15.1.161
  33. Pexman, P. M. (2012). Meaning-level influences on visual word recognition. In J. S. Adelman (Ed.), Visual word recognition: Vol. 2. Meaning and context, individuals and development (pp. 24–43). Hove, U.K.: Psychology Press.
  34. Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103, 56–115. doi: 10.1037/0033-295X.103.1.56
  35. Pulvermüller, F., Shtyrov, Y., & Ilmoniemi, R. (2005). Brain signatures of meaning access in action word recognition. Journal of Cognitive Neuroscience, 17, 884–892. doi: 10.1162/0898929054021111
  36. R Development Core Team. (2011). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from
  37. Schock, J., Cortese, M. J., & Khanna, M. M. (2012). Imageability estimates for 3,000 disyllabic words. Behavior Research Methods, 44, 374–379. doi: 10.3758/s13428-011-0162-0
  38. Schock, J., Cortese, M. J., & Yap, M. J. (2011, November). Imageability and age of acquisition effects in disyllabic word recognition. Paper presented at the 51st Annual Meeting of the Psychonomic Society, Seattle, WA.
  39. Schwanenflugel, P. J., Harnishfeger, K. K., & Stowe, R. W. (1988). Context availability and lexical decisions for abstract and concrete words. Journal of Memory and Language, 27, 499–520.
  40. Sereno, S. C., O’Donnell, P. J., & Sereno, M. E. (2009). Size matters: Bigger is faster. Quarterly Journal of Experimental Psychology, 62, 1115–1122.
  41. Shaoul, C., & Westbury, C. (2010). Exploring lexical co-occurrence space using HiDEx. Behavior Research Methods, 42, 393–413.
  42. Siakaluk, P. D., Pexman, P. M., Aguilera, L., Owen, W. J., & Sears, C. R. (2008). Evidence for the activation of sensorimotor information during visual word recognition: The body–object interaction effect. Cognition, 106, 433–443. doi: 10.1016/j.cognition.2006.12.011
  43. Sibley, D. E., Kello, C. T., & Seidenberg, M. S. (2009). Error, error everywhere: A look at megastudies of word reading. In N. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1036–1041). Austin: Cognitive Science Society.
  44. Steyvers, M., & Tenenbaum, J. B. (2005). The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth. Cognitive Science, 29, 41–78. doi: 10.1207/s15516709cog2901_3
  45. Tillotson, S. M., Siakaluk, P. D., & Pexman, P. M. (2008). Body–object interaction ratings for 1,618 monosyllabic nouns. Behavior Research Methods, 40, 1075–1078. doi: 10.3758/BRM.40.4.1075
  46. Toglia, M. P., & Battig, W. F. (1978). Handbook of semantic word norms. Hillsdale: Erlbaum.
  47. Tops, W., Callens, M., Lammertyn, J., Van Hees, V., & Brysbaert, M. (2012). Identifying students with dyslexia in higher education. Annals of Dyslexia. doi: 10.1007/s11881-012-0072-6
  48. Wurm, L. H. (2007). Danger and usefulness: An alternative framework for understanding rapid evaluation effects in perception? Psychonomic Bulletin & Review, 14, 1218–1225. doi: 10.3758/BF03193116
  49. Yap, M. J., & Balota, D. A. (2009). Visual word recognition of multisyllabic words. Journal of Memory and Language, 60, 502–529. doi: 10.1016/j.jml.2009.02.001
  50. Yap, M. J., Pexman, P. M., Wellsby, M., Hargreaves, I. S., & Huff, M. J. (2012). An abundance of riches: Cross-task comparisons of semantic richness effects in visual word recognition. Frontiers in Human Neuroscience, 6, 72. doi: 10.3389/fnhum.2012.00072
  51. Yap, M. J., Tan, S. E., Pexman, P. M., & Hargreaves, I. S. (2011). Is more always better? Effects of semantic richness on lexical decision, speeded pronunciation, and semantic classification. Psychonomic Bulletin & Review, 18, 742–750.
  52. Yarkoni, T., Balota, D. A., & Yap, M. J. (2008). Moving beyond Coltheart’s N: A new measure of orthographic similarity. Psychonomic Bulletin & Review, 15, 971–979. doi: 10.3758/PBR.15.5.971
  53. Yates, M. (2005). Phonological neighbors speed visual word processing: Evidence from multiple tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1385–1397.
  54. Zevin, J. D., & Seidenberg, M. S. (2002). Age of acquisition effects in word reading and other tasks. Journal of Memory and Language, 47, 1–29. doi: 10.1006/jmla.2001.2834

Copyright information

© Psychonomic Society, Inc. 2012

Authors and Affiliations

  1. Department of Psychology, Wesleyan University, Middletown, USA
  2. Department of Psychology, National University of Singapore, Singapore, Singapore
