Visual and auditory perceptual strength norms for 3,596 French nouns and their relationship with other psycholinguistic variables

Abstract

Perceptual experience plays a critical role in the conceptual representation of words. Higher levels of semantic variables such as imageability, concreteness, and sensory experience are generally associated with faster and more accurate word processing. Nevertheless, these variables tend to be assessed mostly on the basis of visual experience. This underestimates the potential contributions of other perceptual modalities. Accordingly, recent evidence has stressed the importance of providing modality-specific perceptual strength norms. In the present study, we developed French Canadian norms of visual and auditory perceptual strength (i.e., the modalities that have major impact on word processing) for 3,596 nouns. We then explored the relationship between these newly developed variables and other lexical, orthographic, and semantic variables. Finally, we demonstrated the contributions of visual and auditory perceptual strength ratings to visual word processing beyond those of other semantic variables related to perceptual experience (e.g., concreteness, imageability, and sensory experience ratings). The ratings developed in this study are a meaningful contribution toward the implementation of new studies that will shed further light on the interaction between linguistic, semantic, and perceptual systems.

The sensory/perceptual system processes information from the environment through our different senses. More specifically, the sensory system allows the detection and analysis of stimuli through the peripheral nervous system (through the receptors specific to different sensory modalities; Gardner & Martin, 2000). Perception refers to the central processing that transforms sensory information into a meaningful pattern (Keetels & Vroomen, 2012). Perceptual experience based on different sensory modalities (visual, auditory, etc.) is part of our conceptual knowledge (Ernst & Bülthoff, 2004). A large body of evidence has shown that semantics, especially when associated with the perceptual and functional attributes of object concepts, is represented by distributed patterns of activity across multiple modality-specific processing pathways in the brain (Binder & Desai, 2011; Martin, 2007; Meteyard, Cuadrado, Bahrami, & Vigliocco, 2012). Functional neuroimaging studies in healthy participants have consistently demonstrated that the semantic processing of words representing concepts with strong visual, auditory, olfactory, and gustatory association activated the brain network involved in the processing of these sensory characteristics (Barros-Loscertales et al., 2012; Goldberg, Perfetti, & Schneider, 2006; Gonzalez et al., 2006; Kiefer, Sim, Herrnberger, Grothe, & Hoenig, 2008; Simmons et al., 2007). These findings suggest that semantic knowledge remains, at least in part, grounded in its sensory and motor features (Barsalou, 1999, 2008; Borghi & Riggio, 2015; Grush, 2004; Vallet, Brunel, & Versace, 2010). Cognition would thus be indivisible from the sensorimotor states of the body as well as from the characteristics of the surrounding environment (Glenberg, Witt, & Metcalfe, 2013; Versace et al., 2014). When this perspective is applied to memory, we find the different modal sensory components of a single concept are closely related. 
Thus, the activation of one component should automatically propagate to the other associated components (Vallet, Simard, Versace, & Mazza, 2013; Versace et al., 2014) from a perceptual prime (Vallet et al., 2013), or even from a conceptual prime (a word; see Rey, Riou, Vallet, & Versace, 2017). Taken together, these findings demonstrate the potential role of perceptual experience in conceptual knowledge.

Thus, one might argue that the conceptual processing of words partially relies on the ability of each modality to be activated (i.e., its perceptual strength). In line with that view, Lynott and Connell collected perceptual strength ratings for different sensory modalities (visual, tactile, auditory, olfactory, and gustatory) for approximately 400 nouns and 400 adjectives (Connell & Lynott, 2012; Lynott & Connell, 2009, 2013). More specifically, participants were asked to rate the extent to which they experienced each word by means of seeing, hearing, smelling, tasting, or feeling through touch. The ratings ranged from 0 (not experienced at all through this sense) to 5 (greatly experienced through this sense). More importantly, these authors investigated how perceptual strength in different modalities affects word processing. This series of studies yielded two main findings. First, they showed that perceptual strength is a good predictor of both lexical decision and word-naming performance (Connell & Lynott, 2012, 2014). More specifically, words with strong perceptual representations are processed more quickly than words with weaker perceptual representations. This result is in agreement with previous studies reporting that perceptual stimulation leads to faster and/or more accurate conceptual processing in the same modality—that is, a perceptual–conceptual facilitation effect (Kaschak, Zwaan, Aveyard, & Yaxley, 2006; Van Dantzig, Pecher, Zeelenberg, & Barsalou, 2008). Second, these studies showed that the strength of perceptual experience predicts word-processing performance better than semantic variables such as concreteness and imageability do (Connell & Lynott, 2012). Concreteness is defined as the degree to which words refer to objects, individuals, places, or things that can be experienced with our senses (Paivio, Yuille, & Madigan, 1968). 
Concreteness rating norms are based on the degree to which certain words refer to tangible objects, materials, or people that can be easily perceived by our senses (Bonin, Méot, & Bugaiska, 2018). A longstanding literature has pointed out that concrete concepts are processed more quickly and accurately than abstract concepts (Allen & Hulme, 2006; Binder, Westbury, McKiernan, Possing, & Medler, 2005; Fliessbach, Weis, Klaver, Elger, & Weber, 2006; Paivio, Yuille, & Smythe, 1966; Romani, McAlpine, & Martin, 2008). According to the dual-coding theory (Paivio, 2013), this advantage comes from the fact that both concrete and abstract concepts have a verbal code representation, but only concrete concepts also benefit from an imagistic representation (Crutch, Connell, & Warrington, 2009; Crutch & Warrington, 2005; Holcomb, Kounios, Anderson, & West, 1999; Jessen et al., 2000; Paivio, 1991). In this regard, the concept of concreteness is strongly related to the concept of imageability. Imageability refers to the degree to which a word and/or a concept arouses a mental image. In fact, in the experimental language literature, imageability and concreteness ratings are often used interchangeably, because of their high correlation and theoretical relationship (Binder et al., 2005; Fliessbach et al., 2006; Sabsevitz, Medler, Seidenberg, & Binder, 2005).

Both concreteness and imageability are based on the properties of the mental representation evoked by a word, and therefore they do not reflect the actual perceptual experience associated with the concept represented by the word. In addition, concreteness and imageability ratings are not explicitly based on the personal sensory experience of the raters. For this reason, both variables tend to be assessed on the basis of visual experience, neglecting or underestimating the contribution of other modalities (Connell & Lynott, 2012). This is probably the reason why perceptual strength in multiple modalities was found to be a better predictor of word-processing performance than concreteness and imageability (Connell & Lynott, 2012). More recently, Winter (2016) conducted a study to investigate the relationship between perceptual strength and emotional valence. The results of this study indicated that words associated with taste and smell (e.g., “pungent” or “delicious”) had higher absolute emotional valence than words associated with other sensory modalities (e.g., the visual word “yellow” or the auditory word “echoing”; Winter, 2016). In summary, these data clearly show the key role of perceptual strength in word processing. These results highlight the need to make available databases of modality-specific perceptual strength ratings for concepts. Such ratings would allow researchers (1) to control for potential variables influencing concept processing when designing factorial experiments and (2) to test specific hypotheses about the impact of perceptual strength on concept processing. In English, in addition to the ratings for single words (van Dantzig, Cowell, Zeelenberg, & Pecher, 2011), ratings of perceptual strength in different sensory modalities are available for object–property pairs (e.g., TUBA–LOUD, or TUBA–SHINY; van Dantzig et al., 2011). 
In van Dantzig et al.’s study, the participants were asked to rate the degree to which object–property pairs were experienced by seeing, hearing, feeling by touch, tasting, and smelling. However, these norms are recommended for studies employing tasks that use specific concept–property combinations, such as memory tasks (van Dantzig et al., 2011). Ratings based on single words, such as those of Lynott and Connell (2009, 2013), are preferred for more general studies, such as those focused on single-word processing (van Dantzig et al., 2011).

The creation of language-specific norms is important because ratings to the same stimulus can vary considerably, not only in different languages (Sanfeliu & Fernandez, 1996), but also in different cultures (e.g., French in Canada vs. France; see Sirois, Kremin, & Cohen, 2006). Consequently, it has been recommended that normative data should be collected for each culture separately (Bonin, Peereman, Malardier, Méot, & Chalard, 2003).

Until now, no database of modality perceptual strength has been available in French. Only one database includes a similar but more general type of perceptual norm, based on sensory experience ratings (SERs; Bonin, Méot, Ferrand, & Bugaiska, 2015). These authors define the SERs as indicating the degree to which a word evokes a sensory and/or perceptual experience in the mind of the participant, independent of a specific sensory/perceptual modality (Bonin et al., 2015; Juhasz & Yap, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick, 2011). The semantic nature of SERs has been confirmed in both French and English by revealing the significant association between SERs and other semantic variables, such as imageability and age of acquisition (Juhasz & Yap, 2013; Juhasz et al., 2011). In addition, it has been demonstrated that SERs critically contribute to word processing above and beyond the contribution of other lexical and semantic variables (Juhasz et al., 2011). Although SERs are an important step forward in the study of cognition, further perceptual strength ratings in French, specific to different sensory modalities, will be necessary in order to conduct studies addressing the role in cognition of perceptual strength in specific sensory modalities; such ratings are available already in English (Lynott & Connell, 2009).

The aim of the present study was threefold. The first and main aim was to provide modality-specific perceptual strength ratings for a large set of 3,596 French nouns that already have norms of subjective frequency, imageability, and concept familiarity available (Chedid et al., 2018; Desrochers & Thompson, 2009; Study 1). This would represent the largest database for which perceptual strength ratings are available in French. Due to the number of words to rate, in the present work we focused on two modalities of perceptual strength—that is, visual and auditory perceptual strength. These two modalities were chosen because vision and audition have major impacts on word processing (Lynott & Connell, 2013; van Dantzig et al., 2011). Additionally, these are the most studied human senses (Colavita, 1974; Hecht & Reiner, 2009), as well as being the most widely represented in the human cortex (Glasser et al., 2016). Toward this aim, we performed an online rating task following the procedures adopted in our previous work on concept familiarity, using the same set of words (Chedid et al., 2018). In a manner similar to previous studies in English, the participants were asked to separately rate the extent to which they visually or auditorily experienced each word (Juhasz, Lai, & Woodcock, 2015; Lynott & Connell, 2009, 2013). The second aim was to explore the relationship of our newly developed variables with other well-studied semantic variables (Study 2). Our main hypothesis assumed that visual and auditory perceptual strength ratings are semantic in nature. This assumption stems from their relationship with other semantic variables, such as concept familiarity, age of acquisition, and imageability, as in the norms collected by Connell and Lynott (2012), Juhasz and Yap (2013), Juhasz et al. (2011), and Bonin et al. (2015). 
The third aim was to demonstrate that ratings of the strength of visual and auditory perceptual experience are not merely another form of rating for imageability, concreteness, or SER (Study 3). Toward this aim, we extracted the reaction times (RTs) for lexical decision from Ferrand et al. (2010) and used them in a linear regression to demonstrate the contribution of visual and auditory perceptual strength over and above the contributions of conceptually related semantic variables, including imageability, concreteness, and SER.

Study 1

The aim of the study was to collect norms for the visual and auditory perceptual strength of a large set of words. We achieved this in two steps: (1) data collection of visual and auditory perceptual strength for a large set of French words, and (2) norm verification through intra- and interstudy reliability.

Method

Participants

Three hundred four participants (198 women, 106 men), 18–35 years of age (mean age = 25.3, SD = 3.9; mean education in years = 14.1, SD = 3.3), took part in this study. We recruited the participants by e-mail invitations sent to a panel of students from the University of Montreal. The inclusion criteria were as follows: Participants must (1) be between 18 and 35 years old, (2) have normal or corrected-to-normal vision, (3) not have hearing loss (due to the nature of the task), and (4) have no previous history of reading and/or mental disorders. They received a CAN$10 gift card as compensation after completing the experiment.

On the basis of the study by Sirois et al. (2006), we decided to include a homogeneous group of French Canadian native speakers. The language (and its variant) spoken by each participant was assessed using an online questionnaire. Indeed, Sirois et al. showed that ratings of some variables, such as name agreement, visual complexity, and conceptual familiarity, showed differences between French Canadian and European French.

The study was reviewed and approved by the local ethics committee (Comité d’éthique de la recherche vieillissement-neuroimagerie CER IUGM 15-16-33). This committee follows the guidelines of the Tri-Council Policy Statement of Canada, the civil code of Quebec, the Declaration of Helsinki, and the Nuremberg Code.

Stimuli

We selected 3,596 French nouns from those studied by Desrochers and Thompson (2009). The list of 3,596 words was randomly split into 24 lists of approximately 150 words each and was presented to participants for perceptual strength ratings. In each list, five randomly selected words appeared twice in a semirandom order, to compute the test–retest reliability of each participant’s ratings, as we have previously described (Chedid et al., 2018). Thus, a total of 155 words (including the five repeated words) were presented in each list.
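The list construction described above (a random split into 24 lists of roughly 150 words, with five randomly chosen words duplicated within each list for the test–retest check) can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' actual implementation; the function name, the even-split strategy, and the use of a seeded shuffle are ours.

```python
import random

def build_rating_lists(words, n_lists=24, n_repeats=5, seed=0):
    """Randomly split `words` into n_lists sublists and duplicate
    n_repeats randomly chosen words within each list, so that each
    participant's test-retest reliability can later be computed."""
    rng = random.Random(seed)
    shuffled = words[:]
    rng.shuffle(shuffled)
    # Split as evenly as possible into n_lists chunks.
    base, extra = divmod(len(shuffled), n_lists)
    lists, start = [], 0
    for i in range(n_lists):
        size = base + (1 if i < extra else 0)
        chunk = shuffled[start:start + size]
        start += size
        # Duplicate n_repeats words, then reshuffle the session so the
        # repeats appear in a (semi)random order among the other words.
        repeats = rng.sample(chunk, n_repeats)
        session = chunk + repeats
        rng.shuffle(session)
        lists.append(session)
    return lists
```

With 3,596 words this yields 24 sessions of 154 or 155 trials each, matching the approximately 150 unique words plus 5 repeats per list described above.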

Procedure

The timing, sequencing, presentation of stimuli, response recording, and response latencies were controlled by a web application created by Beau and Rey (2015) and previously used in both Rey et al. (2017; https://github.com/sebastienbeau/aphrodite-survey) and Chedid et al. (2018). Participants completed the rating study using an online platform where they submitted their personal information and filled out a screening questionnaire to determine their eligibility to participate. After completing the consent form, they accessed a session consisting of a list of stimuli for which they had to rate the visual and auditory perceptual strength of 155 words. As in Chedid et al.’s study, each participant could complete a single session or divide the rating task across two or more sessions. Participants were not allowed to complete the same session more than once. The ratings were automatically saved by the server in a secure database (PostgreSQL).

The session started with an instruction page, where participants received explanations about and examples of rating perceptual strength. The explanations and instructions for the ratings followed the method used by Lynott and Connell (2009). After these instructions, the rating task began. The order of the 155 words was randomized across participants. Each word was separately presented to the participants, who had to rate the extent to which the meaning of the word could be experienced in each of the perceptual modalities, in the following order: visual (in French: Dans quelle mesure CE MOT vous fait ressentir une expérience visuelle?; English translation: “To what extent do you visually experience WORD?”), then auditory (in French: Dans quelle mesure CE MOT vous fait ressentir une expérience auditive?; English translation: “To what extent do you audibly experience WORD?”). Underneath these questions, a horizontal visual analog scale (VAS) was displayed for the ratings. Participants were asked to move the cursor on this uncalibrated line according to their subjective judgment. To estimate perceptual strength, the left side of the line corresponded to very low, and the right side to very high. The cursor always appeared in the center of the line (equal to 50), and the participant had to give his or her estimation of the strength of his or her experience of the concept represented by the current word by moving the cursor to the left (the extreme left was coded as 0) or to the right (the extreme right was coded as 100). In addition, the rating latencies were also recorded. In the present study, we used VAS rating scales, rather than the Likert scales used by Connell and Lynott (2010), for two main reasons. First, Likert scales should be considered as providing ordinal data. Conversely, VASs are considered as providing continuous data (e.g., Howell, 1992; Parker, McDaniel, & Crumpton-Young, 2002). 
Unlike continuous data, ordinal data limit the array of possible analyses, in some cases precluding analysis. Second, multiple studies have shown advantages of VASs over Likert scales—notably regarding sensitivity and reliability (e.g., Pfennings, Cohen, & van der Ploeg, 1995), as well as for other psychometric parameters (e.g., Voutilainen, Pitkaaho, Kvist, & Vehvilainen-Julkunen, 2016).

Data screening for outliers

Before proceeding to the statistical analysis, the data were screened for outliers within each session (per participant) and then for each item (across participants). The data of 12 participants were removed due to lack of variability in responses (i.e., the same rating was given for all words in the list—e.g., 50 or 100; Brysbaert, Warriner, & Kuperman, 2014; Chedid et al., 2018).

For further data trimming, the mean and standard deviation of all the participants’ ratings in each list were calculated. Participants whose mean score fell outside ± 3.5 standard deviations from the group mean of their list were excluded, in order to attenuate the possible influence of outliers on the ratings (Kuperman, Stadthagen-Gonzalez, & Brysbaert, 2012). Comparable procedures for the detection of outliers have been employed in similar studies providing ratings for word databases (Chedid et al., 2018; Lynott & Connell, 2009). After the screening of all the sessions, the data of 24 participants were discarded because their overall ratings deviated markedly from the group mean (the overall ratings of three participants were more than 3.5 SDs below the mean ratings of the group for the same list, and 21 participants gave ratings more than 3.5 SDs above the other participants’ ratings for the same list). Thus, the data obtained from 268 participants were used in the statistical analyses. Each session was evaluated by a mean of 25 participants (minimum raters per session = 20; maximum raters per session = 29).
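The two screening steps above (removing flat responders, then removing participants whose mean rating falls outside ± 3.5 SDs of the group mean for their list) can be sketched as follows. The dictionary input shape and the function name are our own assumptions; this is an illustration of the logic, not the authors' code.

```python
import statistics

def screen_participants(ratings_by_participant, z_cutoff=3.5):
    """ratings_by_participant: participant id -> list of ratings for one
    session list. Returns the ids kept after removing (1) participants with
    no variability in their responses and (2) participants whose mean rating
    falls outside z_cutoff SDs of the group mean of participant means."""
    # Step 1: drop flat responders (same rating given for every word).
    varying = {p: r for p, r in ratings_by_participant.items()
               if len(set(r)) > 1}
    # Step 2: compare each remaining participant's mean to the group.
    means = {p: statistics.fmean(r) for p, r in varying.items()}
    grand = statistics.fmean(means.values())
    sd = statistics.stdev(means.values())
    return [p for p, m in means.items() if abs(m - grand) <= z_cutoff * sd]
```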

In addition, response latencies were used to set a lower-bound criterion, below which responses were considered invalid. In line with previous studies that used the same criterion (Desrochers & Thompson, 2009; Tsaparina, Bonin, & Méot, 2011), visual inspection of the RT distribution suggested that response latencies below 300 ms derived from a distinct distribution, and these were therefore excluded. Only 0.0032% of the visual and 0.0027% of the auditory perceptual strength samples were discarded (numbers of ratings lost [± SD]: respectively, 92 ± 6 and 74 ± 4). To set an upper-bound criterion, the mean RT of all answers given for each item was calculated, and responses more than 2.5 standard deviations above this mean were rejected as delayed. On average, 0.0118% of the visual and 0.0076% of the auditory perceptual strength samples were rejected (numbers of ratings lost [± SD]: respectively, 437 ± 8 and 266 ± 5).
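The latency-based trimming can be sketched for a single item's list of RTs as follows. This is a minimal sketch under our assumptions: the function name is ours, and we assume the item mean for the upper cutoff is computed after the fast responses have already been removed, which the text does not state explicitly.

```python
from statistics import fmean, stdev

def trim_rts(rts, floor_ms=300, z_cutoff=2.5):
    """Drop latencies below floor_ms (treated as invalid anticipations),
    then drop latencies more than z_cutoff SDs above the mean of the
    remaining responses (treated as delayed)."""
    kept = [rt for rt in rts if rt >= floor_ms]
    m, sd = fmean(kept), stdev(kept)
    return [rt for rt in kept if rt <= m + z_cutoff * sd]
```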

Results

The overall mean perceptual strength rating for the visual modality was 61.4 (SD = 18.0, Min = 2.5, Max = 94.2), and that for the auditory modality was 32.1 (SD = 16.1, Min = 0.6, Max = 95.4).

Intra- and interstudy rating reliability

First, we measured the internal consistency of the ratings by calculating the split-half reliability coefficient. This coefficient was calculated by splitting the ratings of the participants into two groups according to even and odd participant numbers, and by computing a correlation between the even and odd data for each variable separately. A high correlation between the two halves would indicate that they yield similar results and, consequently, that the ratings have good internal consistency. The corrected Pearson correlations were significant for both visual perceptual strength, r(3,596) = .779, p < .001, and auditory perceptual strength, r(3,596) = .745, p < .001, indicating good internal consistency reliability. The good reliability between raters was also confirmed by Cronbach’s alphas of .875 for visual perceptual strength and .854 for auditory perceptual strength. The correlation analysis was corrected with the Holm–Bonferroni method for multiple comparisons.
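The split-half procedure can be sketched as follows. This is an illustrative sketch under two assumptions of ours: that "corrected" refers to the standard Spearman–Brown correction for half-length tests, and that ratings are arranged as a participants-by-items matrix; Cronbach's alpha is computed here by treating each rater as a "test item."

```python
from statistics import fmean

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    mx, my = fmean(x), fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(ratings):
    """ratings: participants x items. Correlate per-item means from even-
    vs odd-indexed participants, then apply the Spearman-Brown correction."""
    item_means = lambda group: [fmean(col) for col in zip(*group)]
    r = pearson(item_means(ratings[0::2]), item_means(ratings[1::2]))
    return 2 * r / (1 + r)  # Spearman-Brown corrected coefficient

def cronbach_alpha(ratings):
    """Inter-rater consistency, treating each participant as a 'test item'."""
    k = len(ratings)
    var = lambda xs: sum((x - fmean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col) for col in zip(*ratings)]
    return k / (k - 1) * (1 - sum(var(r) for r in ratings) / var(totals))
```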

Second, we measured response consistency within participants. To that end, we ran a correlation between the responses to the 120 words that received a double rating (the five words repeated within each of the 24 sessions). High correlations would indicate that participants gave similar ratings to the same words presented twice. Consequently, this would be an indicator of good internal reliability. The Pearson’s correlation between the two responses given for the 120 repeated words across all sessions was computed and showed a strong significant correlation between the first and the second ratings of the same words, both for visual perceptual strength, r(120) = .968, p < .001, and auditory perceptual strength, r(120) = .972, p < .001. These strong correlations between the ratings of repeated items are associated with excellent internal consistency, with Cronbach’s alphas equal to .983 and .984 for the visual and auditory ratings on the repeated items, respectively.

Interstudy reliability was calculated by correlating the visual and auditory perceptual strength ratings with the only perceptual variable already available for French, the SER (Bonin et al., 2015). We ran interstudy correlations on the stimuli common to our database and the SER database. A significant and positive correlation would provide evidence of the convergent validity of our ratings. The results of the correlation analysis showed a significant and positive correlation for the 542 common words, for both visual perceptual strength, r(542) = .461, p < .001, and auditory perceptual strength, r(542) = .332, p < .001 (Table 1).

Table 1 Correlation values for visual and auditory perceptual strength and the semantic variables of Studies 1 and 2

Relationship between the two modalities

To test the relationship between the visual and auditory ratings, we tested the correlation between these two variables. In previous studies on perceptual strength, the authors reported a significant negative correlation between visual and auditory perceptual strength (Connell & Lynott, 2012). In line with these findings, we expected to observe a negative correlation between the visual and auditory perceptual ratings. In agreement with our predictions, a negative and significant correlation was observed, r(3,596) = – .61, p < .001. This means that weaker visual perceptual strength is generally associated with stronger auditory strength, and vice versa. A significant negative correlation between visual and auditory perceptual strength ratings has been previously reported in English (Connell & Lynott, 2012; Lynott & Connell, 2009). Most objects are multimodal in nature, as revealed by the modality exclusivity perceptual strength ratings obtained in previous studies (Lynott & Connell, 2013; Speed & Majid, 2017). Most common objects, such as a “cat,” can be identified through both the visual and auditory modalities. This double association may lead participants to evaluate both perceptual strengths as being strong. Accordingly, the word chat (English translation: “cat”) was rated 87.1 for visual and 74.9 for auditory. In contrast, highly visual objects, such as “wall,” and highly auditory concepts, such as “whistling,” are more rarely associated with the other modality. Consistently, the word mur (English translation: “wall”) was rated 85.4 for visual and 18.7 for auditory, whereas the word sifflement (English translation: “whistling”) was rated 36.8 for visual and 87.9 for auditory. Therefore, the most extreme perceptual strengths in one modality should be negatively associated with the other modality.

Study 2

Visual and auditory perceptual strength ratings are associated with the conceptual dimensions of the words, and thus are considered semantic in nature (Connell & Lynott, 2012; Juhasz & Yap, 2013). The aim of the present study was to establish the relationship between the newly developed visual and auditory perceptual strength ratings and other well-known psycholinguistic semantic variables that have been previously shown to affect word processing (Bonin et al., 2015; Connell & Lynott, 2012; Juhasz & Yap, 2013). We hypothesized a correlation between the visual and auditory ratings and other semantic variables, such as imageability, concreteness, age of acquisition, concept familiarity, and SER.

Method

Significant associations between the visual and auditory perceptual strength scores and other semantic variables were tested using correlations. These semantic variables included concreteness, imageability, conceptual familiarity, age of acquisition, and SER. The complete list of variables and the databases used to obtain them are reported in Table 2. Unfortunately, norms for the semantic variables were not always available for all the words included in the present study. Ratings of concreteness for 542 words were taken from Bonin et al. (2018). Imageability ratings for 3,596 words were taken from Desrochers and Thompson (2009). Concept familiarity refers to the degree to which people come in contact with or think about a specific concept. Concept familiarity ratings for 3,596 words were extracted from Chedid et al. (2018). Age of acquisition (AoA) refers to the age at which a word was first learned. The AoA ratings for 425 words were extracted from Ferrand et al. (2008).

Table 2 Sources and number of words, as well as the means and standard deviations, minimums, and maximums for the psycholinguistic variables used in Studies 2 and 3

Results

Relationship between visual perceptual strength and the other semantic variables

Table 1 shows the results of the correlation analyses between all variables. We found significant and positive correlations between visual perceptual strength and the other semantic variables: concreteness, r(537) = .763, p < .001; imageability, r(3,596) = .862, p < .001; concept familiarity, r(3,596) = .544, p < .001; and SER, r(542) = .461, p < .001. The positive correlations indicate that as visual perceptual strength increased, the values of the other semantic variables also increased. In other words, words with stronger visual perceptual strength tended to be more imageable, more concrete, more conceptually familiar, and higher in SER. We found a negative correlation for AoA, r(420) = – .558, p < .001. This means that the earlier a word is learned, the stronger its visual perceptual strength.

Relationship between auditory perceptual strength and other semantic variables

Auditory perceptual strength also significantly correlated with the five semantic variables: concreteness, r(537) = .100, p = .02; imageability, r(3,596) = .182, p < .001; concept familiarity, r(3,596) = .298, p < .001; and SER, r(542) = .332, p < .001. The positive correlations indicate that as auditory perceptual strength increased, the values of the other semantic variables also increased. In other words, words with stronger auditory perceptual strength tended to be more imageable, more concrete, more conceptually familiar, and higher in SER. We again found a negative correlation for AoA here, r(420) = – .218, p < .001: Earlier-acquired words tend to be stronger in their auditory perceptual strength. As compared to visual perceptual strength, the correlations for auditory perceptual strength were weaker.

The visual and auditory perceptual strength ratings should be related to the conceptual sensory dimensions of the words, and are therefore semantic in nature. It is logical that the perceptual strength of a given concept should also depend on its sensory characteristics, which should in turn be among its conceptual properties. The results showed that visual and auditory perceptual strength strongly correlated with other semantic variables, including imageability, AoA, concreteness, and concept familiarity. These correlations with semantic variables confirm that visual and auditory perceptual strength variables index one aspect of the semantic representations of words.

Study 3

Concreteness, imageability, and SER ratings refer to sensory and perceptual aspects of concept representations. This could raise the question of whether our newly developed variables are merely other forms of the previously studied variables, or whether they independently contribute to explaining the variability in word processing. To address this issue, we conducted a hierarchical regression analysis using lexical decision RTs to determine the contributions of the two newly developed variables over and beyond concreteness, imageability, and SER, once we had controlled for orthographic and lexical variables known to have impacts on the lexical decision task (Bonin et al., 2015; Connell & Lynott, 2012; Juhasz et al., 2011). We hypothesized that both visual and auditory perceptual strength would show significant contributions to lexical decision RT variability, above and beyond the contribution of other lexical and semantic variables.

Hierarchical regression

We used hierarchical regression analyses to determine the proportion of variance in lexical decision RTs that could be explained by concreteness, imageability, SER, and visual and auditory perceptual strength (Connell & Lynott, 2012). Following previous similar studies (Boukadi, Zouaidi, & Wilson, 2016; Cortese & Khanna, 2007; Cortese & Schock, 2013; Sanchez-Gutierrez, Mailhot, Deacon, & Wilson, 2018), we ran several hierarchical regression models in which each of the two modality-specific perceptual variables (auditory and visual) was added separately in the last step. This allowed us to test the contribution of each new variable once the variability explained by all the variables entered in the previous step(s) had been controlled for.
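
The logic of these models can be sketched in a few lines: fit a baseline ordinary-least-squares model, refit with the modality-specific predictor added as a final step, and take the gain in R² (ΔR²) as that predictor's unique contribution. The sketch below runs on randomly generated toy data (the variable names are illustrative), not on the actual norms.

```python
# Hierarchical-regression sketch: R² gain from adding one predictor
# in a final step, on simulated toy data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
frequency = rng.normal(size=n)                      # stand-in control variable
imageability = rng.normal(size=n)                   # stand-in semantic variable
visual = 0.5 * imageability + rng.normal(size=n)    # new predictor, correlated with imageability
rt = -0.4 * frequency - 0.3 * imageability - 0.2 * visual + rng.normal(size=n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y)), *predictors])  # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared([frequency, imageability], rt)          # Step 1: controls + imageability
r2_step2 = r_squared([frequency, imageability, visual], rt)  # Step 2: + visual perceptual strength
print(f"delta R^2 = {r2_step2 - r2_step1:.3f}")
```

Because Step 2 nests Step 1, the R² of the full model can never be lower; the question the F test answers is whether the gain is larger than chance.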

We obtained the values for the dependent variable (RTs) from the lexical decision latencies in Ferrand et al. (2010). As control variables, we extracted values for the following orthographic and lexical psycholinguistic variables for the 3,596 nouns from the French online database Lexique (New et al., 2004; www.lexique.org): word length in number of syllables (N-syllables; e.g., concept = 2), objective lexical frequency calculated from books (FreqBooks; e.g., concept = 7.63 occurrences per million), and orthographic Levenshtein distance 20 (OLD20; i.e., the mean number of insertions, deletions, and substitutions required to turn a word into each of its 20 nearest orthographic neighbors; Yarkoni et al., 2008). We also obtained subjective frequency values from Desrochers and Thompson (2009). The overlap between the words in our database and those with available ratings differed widely across variables: concreteness (537 words), SER (538 words), and imageability (3,124 words). We therefore ran two separate regression models for each of these variables, for six models in total (see Table 3).
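
As an illustration of the OLD20 measure, the sketch below computes Levenshtein edit distances against a toy five-word lexicon and averages the closest matches; because the lexicon is tiny, it averages the 3 nearest neighbors rather than 20. The actual OLD20 values in our analyses were taken from Lexique.

```python
# Levenshtein distance by dynamic programming, then OLD-n: the mean
# edit distance from a word to its n closest neighbors in a lexicon.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (free if equal)
        prev = curr
    return prev[-1]

def old_n(word, lexicon, n=20):
    dists = sorted(levenshtein(word, w) for w in lexicon if w != word)
    return sum(dists[:n]) / n

lexicon = ["chat", "char", "chant", "champ", "chaud"]  # toy lexicon
print(old_n("chat", lexicon, n=3))  # mean of the 3 smallest distances (1, 1, 2)
```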

Table 3 Hierarchical regression coefficient models for lexical decision reaction times in Study 3

In the first model, we entered the lexical and orthographic variables (i.e., N-syllables, FreqBooks, OLD20, and subjective frequency), imageability, and auditory perceptual strength in Step 1, and visual perceptual strength in Step 2. In the second model, we entered the same variables with visual perceptual strength in Step 1, and auditory perceptual strength in Step 2. These models allowed us to test the contribution of each of the two modality-specific perceptual variables above that of the semantic variable of imageability in the prediction of lexical decision RTs.

In the third model, we entered the lexical variables, concreteness, and auditory perceptual strength in Step 1, and visual perceptual strength in Step 2. In the fourth model, we entered visual perceptual strength with the other variables in Step 1, and auditory perceptual strength in Step 2. These models allowed us to determine the contribution of each of the two modality-specific perceptual variables above that of the semantic variable of concreteness in the prediction of lexical decision RTs.

In the fifth model, we entered the lexical variables, SER, and auditory perceptual strength in Step 1, followed by visual perceptual strength in Step 2. In the sixth model, we entered visual perceptual strength with the other variables in Step 1, and auditory perceptual strength in Step 2. These models allowed us to determine the contribution of each of the two modality-specific perceptual variables above that of the more general semantic variable SER in the prediction of lexical decision RTs.

Results

Table 3 shows the standardized regression coefficients of the six models used in Study 3. In the first and second models (all tolerance values > 0.2 and variance inflation factor [VIF] values < 4), we observed significant contributions of visual perceptual strength, F(3124) = 36.94, p < .001, ∆R2 = .007, and auditory perceptual strength, F(3124) = 15.44, p < .001, ∆R2 = .003, to lexical decision RTs, beyond the contribution of imageability. In the third and fourth models (all tolerance values > 0.3 and VIF values < 3), both visual and auditory perceptual strength contributed significantly to explaining the variance in lexical decision RTs beyond the contribution of concreteness [visual: F(537) = 15.24, p < .001, ∆R2 = .017; auditory: F(537) = 5.27, p = .022, ∆R2 = .006]. In the fifth and sixth models (all tolerance values > 0.5 and VIF values < 2), we found a significant contribution of visual perceptual strength above that of SER, F(537) = 4.28, p = .039, ∆R2 = .005, whereas auditory perceptual strength did not contribute significantly to explaining lexical decision RTs, F(537) = 2.56, p = .110, ∆R2 = .003. In conclusion, these results demonstrated for the first time in French the critical role of the visual and auditory perceptual strength evoked by a word, above and beyond the contributions of other semantic variables such as imageability, concreteness, and SER.
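
The tolerance and VIF values reported above follow the standard definitions: each predictor is regressed on all the remaining predictors, tolerance is 1 − R² from that auxiliary regression, and VIF is the reciprocal of tolerance. A minimal sketch on simulated toy predictors:

```python
# Collinearity diagnostics: tolerance and variance inflation factor (VIF)
# for each predictor, computed from auxiliary regressions on toy data.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)   # moderately collinear with x1
x3 = rng.normal(size=n)              # independent of the others
predictors = {"x1": x1, "x2": x2, "x3": x3}

def tolerance_vif(name, predictors):
    y = predictors[name]
    others = [v for k, v in predictors.items() if k != name]
    X = np.column_stack([np.ones(len(y)), *others])   # intercept + other predictors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - (y - X @ beta).var() / y.var()
    return 1 - r2, 1 / (1 - r2)                       # tolerance, VIF

for name in predictors:
    tol, vif = tolerance_vif(name, predictors)
    print(f"{name}: tolerance = {tol:.2f}, VIF = {vif:.2f}")
```

Tolerance near 1 (VIF near 1) means a predictor is nearly independent of the others; the thresholds reported in the text (tolerance > 0.2, VIF < 4, etc.) are conventional rules of thumb.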

Discussion

This study provided ratings of two semantic variables grounded in individuals' perceptual experience, visual and auditory perceptual strength, for 3,596 French nouns. The intrastudy reliability analysis showed that our new ratings were consistent across raters. The interstudy reliability analysis revealed that our ratings were also consistent with the French database of Bonin et al. (2015), who collected ratings for a more general sensory experience variable (SER). Thus, we produced reliable French norms of perceptual strength for two specific modalities, visual and auditory. These norms are freely available at http://lingualab.ca/en/projects/norms-of-visualperceptualstrength and http://lingualab.ca/en/projects/norms-of-auditoryperceptualstrength.

In addition, our study provided critical evidence that visual and auditory perceptual strength are not mere by-products of other semantic variables related to the perceptual experience evoked by a concept, such as concreteness, imageability, and SER. Indeed, we demonstrated that visual and auditory perceptual strength contribute to lexical decision latencies over and beyond the contributions of concreteness, imageability, and SER. This result confirms previous findings obtained in English (Connell & Lynott, 2012) and highlights the key role of perceptual experience in semantics. According to Bonin et al. (2015), higher visual scores are associated with more imageable and earlier-acquired words, a pattern we replicated in Study 2. The association between visual perceptual strength and imageability stresses the richness of conceptual representations. Both perceptual strength and imageability can be considered subjective semantic variables, since they are based on the personal experiences and knowledge of the individual. AoA, in turn, is also considered to have a semantic component, in that it affects both lexical decision and word naming (Brysbaert & Ghyselinck, 2006; Cuetos & Barbón, 2006; Davies, Wilson, Cuetos, & Burani, 2014; Ghyselinck, Lewis, & Brysbaert, 2004; Wilson, Cuetos, Davies, & Burani, 2013). Accordingly, we found that the earlier a word is learned, the stronger its visual perceptual strength. The strong association between visual perceptual strength and imageability suggests that the two variables share some visual/imagistic semantic representations. The association between visual perceptual strength and concreteness, such as the one found here, has been explained in terms of the verbal and imagistic representations of concepts (Crutch et al., 2009; Crutch & Warrington, 2005; Holcomb et al., 1999; Jessen et al., 2000).
It has been demonstrated that concrete concepts have more direct connections to imagistic representations, whereas abstract concepts have only indirect connections to images via other verbal codes (Binder et al., 2005; Crutch et al., 2009; Crutch & Warrington, 2005).

On the other hand, auditory perceptual strength was only weakly related to the other semantic variables. This is not surprising. The instructions used to obtain concreteness ratings do not explicitly tell raters to consider every sensory experience as a form of concreteness, and the instructions used to obtain imageability ratings explicitly ask raters to rely mainly on the “mental image” aroused by the word. These instructions are likely to bias ratings toward the visual perceptual modality, which would explain the results of Study 2 for auditory perceptual strength. Indeed, the association between imageability and auditory perceptual strength was weaker than that between imageability and visual perceptual strength, and the same pattern was observed for concreteness. Taken together, these results support the view that concreteness and imageability ratings mainly capture the visual aspects of sensory experience, confirming previous findings (Bonin et al., 2015; Juhasz & Yap, 2013). Moreover, the relationship between the two modalities, visual and auditory, confirms the multimodality of noun concepts. Strongly auditory nouns frequently refer to things that can also be seen (e.g., chanteuse “singer”: visual = 72.5, auditory = 77; Lynott & Connell, 2013). Although the vast majority of noun concepts in our sample were visually dominant, the correlation analysis indicated that many of these words also had high auditory perceptual strength, and should therefore be characterized as bimodal (e.g., ambulance: visual = 89.40, auditory = 87.14).

Why should future research use these new semantic variables related to perceptual strength? What is their added value compared to concreteness and imageability, the two most widely used semantic variables? The results of Study 3 showed that visual and auditory perceptual strength play a role beyond that of concreteness and imageability in explaining lexical decision RTs. This effect was already reported in an English-language study by Connell and Lynott (2012), although it must be noted that they used a similar but slightly different measure of perceptual strength, namely the strength in the dominant perceptual modality of a concept (maximum perceptual strength). Regarding SER, another semantic variable related to perceptual experience, visual perceptual strength increased the percentage of explained variance in lexical decision RTs, whereas auditory perceptual strength did not. The significant result for visual perceptual strength is particularly important, because it shows that a modality-specific perceptual strength rating can significantly increase the explained variance of lexical decision RTs when added to a general perceptual rating score (SER). The absence of a significant effect for auditory perceptual strength could be due to several factors. First, the analysis was run on a small subset of the words in our database, since SER ratings were available for only 542 words. Second, another possible explanation may lie in the distribution of these 542 words in terms of their visual and auditory properties. To test this hypothesis, we conducted a cluster analysis (see the supplementary data) to determine whether there were different patterns of words in our database, based on their visual and auditory perceptual strength ratings. This analysis distributed the words into three clusters.
Cluster 1 (n = 787) included words with high visual and low auditory perceptual strength. Cluster 2 (n = 1,283) comprised words with weak visual and weak auditory perceptual strength. Finally, Cluster 3 (n = 1,061) was composed of words with strong visual but weak auditory perceptual strength. These results are congruent with those of other studies showing that the visual and haptic modalities tend to group together, whereas the auditory modality stands apart (Lynott & Connell, 2013; Tsaparina et al., 2011). Of the subset of 542 words with SER ratings, 445 (82%) belonged to Cluster 1 (i.e., high visual and low auditory perceptual strength). The fact that the great majority of the words with SERs in the database of Bonin and colleagues had low auditory perceptual strength could thus partly explain why auditory perceptual strength did not improve the prediction of lexical decision RTs. Future studies on a larger database including concepts more grounded in auditory features would help us better understand the role of auditory perceptual strength in word processing.
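
A cluster analysis of this kind can be approximated with a basic k-means procedure (k = 3) on the two rating dimensions. The sketch below runs on simulated (visual, auditory) points forming three loose groups; the clusters reported above, of course, came from the actual 3,596 nouns.

```python
# Plain k-means on 2-D (visual, auditory) points, k = 3. Toy data only.
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest center (Euclidean distance)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its points; keep it if the cluster is empty
        centers = np.array([points[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(2)
group_means = [np.array([85.0, 20.0]),   # strong visual, weak auditory
               np.array([30.0, 25.0]),   # weak on both
               np.array([85.0, 80.0])]   # bimodal
points = np.vstack([m + rng.normal(scale=5.0, size=(30, 2)) for m in group_means])
labels, centers = kmeans(points, k=3)
print(np.round(centers, 1))
```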

This study represents a first, necessary step in providing French Canadian norms of perceptual strength in the two most studied perceptual modalities (i.e., visual and auditory). Our results showed the critical role of these variables in word processing, highlighting the importance of also collecting norms for the three remaining perceptual modalities (olfactory, gustatory, and haptic). Future studies should address this issue.

One limitation of our study is that participants had no way to indicate that they did not know a word they were asked to rate. Nevertheless, according to the available French Canadian familiarity ratings (Chedid et al., 2018), none of these words was of extremely low familiarity to raters, which suggests that most participants knew them. However, we cannot rule out the possibility that some words that received low perceptual strength ratings in both modalities were in fact unknown to certain participants.

In conclusion, our results confirm and extend previous findings showing that visual and auditory perceptual strength ratings cannot be considered merely another form of concreteness, imageability, or SER, since they make independent contributions to the prediction of latencies in word processing. These findings are in line with grounded cognition models, underscoring the importance of perceptual experience in concept representation. Further studies should test the specific impact of these variables on word processing. We are confident that the new visual and auditory perceptual strength ratings for the large set of French nouns presented here will enable new studies investigating the role of perceptual experience in the representation of concepts.

References

  1. Allen, R., & Hulme, C. (2006). Speech and language processing mechanisms in verbal serial recall. Journal of Memory and Language, 55, 64–88. https://doi.org/10.1016/j.jml.2006.02.002

  2. Barros-Loscertales, A., Gonzalez, J., Pulvermuller, F., Ventura-Campos, N., Bustamante, J. C., Costumero, V., . . . Avila, C. (2012). Reading salt activates gustatory brain regions: fMRI evidence for semantic grounding in a novel sensory modality. Cerebral Cortex, 22, 2554–2563. https://doi.org/10.1093/cercor/bhr324

  3. Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–609, disc. 610–660. https://doi.org/10.1017/S0140525X99002149

  4. Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645. https://doi.org/10.1146/annurev.psych.59.103006.093639

  5. Beau, S., & Rey, A. (2015). GitHub repository. https://github.com/sebastienbeau/aphrodite-survey

  6. Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. Trends in Cognitive Sciences, 15, 527–536. https://doi.org/10.1016/j.tics.2011.10.001

  7. Binder, J. R., Westbury, C. F., McKiernan, K. A., Possing, E. T., & Medler, D. A. (2005). Distinct brain systems for processing concrete and abstract concepts. Journal of Cognitive Neuroscience, 17, 905–917.

  8. Bonin, P., Méot, A., & Bugaiska, A. (2018). Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times. Behavior Research Methods, 50, 2366–2387. https://doi.org/10.3758/s13428-018-1014-y

  9. Bonin, P., Méot, A., Ferrand, L., & Bugaiska, A. (2015). Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition. Behavior Research Methods, 47, 813–825. https://doi.org/10.3758/s13428-014-0503-x

  10. Bonin, P., Peereman, R., Malardier, N., Méot, A., & Chalard, M. (2003). A new set of 299 pictures for psycholinguistic studies: French norms for name agreement, image agreement, conceptual familiarity, visual complexity, image variability, age of acquisition, and naming latencies. Behavior Research Methods, Instruments, & Computers, 35, 158–167. https://doi.org/10.3758/BF03195507

  11. Borghi, A. M., & Riggio, L. (2015). Stable and variable affordances are both automatic and flexible. Frontiers in Human Neuroscience, 9, 351. https://doi.org/10.3389/fnhum.2015.00351

  12. Boukadi, M., Zouaidi, C., & Wilson, M. A. (2016). Norms for name agreement, familiarity, subjective frequency, and imageability for 348 object names in Tunisian Arabic. Behavior Research Methods, 48, 585–599. https://doi.org/10.3758/s13428-015-0602-3

  13. Brysbaert, M., & Ghyselinck, M. (2006). The effect of age of acquisition: Partly frequency related, partly frequency independent. Visual Cognition, 13, 992–1011. https://doi.org/10.1080/13506280544000165

  14. Brysbaert, M., Warriner, A. B., & Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46, 904–911. https://doi.org/10.3758/s13428-013-0403-5

  15. Chedid, G., Wilson, M. A., Bedetti, C., Rey, A. E., Vallet, G. T., & Brambati, S. M. (2018). Norms of conceptual familiarity for 3,596 French nouns and their contribution in lexical decision. Behavior Research Methods. Advance online publication. https://doi.org/10.3758/s13428-018-1106-8

  16. Colavita, F. B. (1974). Human sensory dominance. Perception & Psychophysics, 16, 409–412. https://doi.org/10.3758/BF03203962

  17. Connell, L., & Lynott, D. (2010). Look but don’t touch: Tactile disadvantage in processing modality-specific words. Cognition, 115, 1–9. https://doi.org/10.1016/j.cognition.2009.10.005

  18. Connell, L., & Lynott, D. (2012). Strength of perceptual experience predicts word processing performance better than concreteness or imageability. Cognition, 125, 452–465. https://doi.org/10.1016/j.cognition.2012.07.010

  19. Connell, L., & Lynott, D. (2014). I see/hear what you mean: semantic activation in visual word recognition depends on perceptual attention. Journal of Experimental Psychology: General, 143, 527–533. https://doi.org/10.1037/a0034626

  20. Cortese, M. J., & Khanna, M. M. (2007). Age of acquisition predicts naming and lexical-decision performance above and beyond 22 other predictor variables: An analysis of 2,342 words. Quarterly Journal of Experimental Psychology, 60, 1072–1082. https://doi.org/10.1080/17470210701315467

  21. Cortese, M. J., & Schock, J. (2013). Imageability and age of acquisition effects in disyllabic word recognition. Quarterly Journal of Experimental Psychology, 66, 946–972. https://doi.org/10.1080/17470218.2012.722660

  22. Crutch, S. J., Connell, S., & Warrington, E. K. (2009). The different representational frameworks underpinning abstract and concrete knowledge: Evidence from odd-one-out judgements. Quarterly Journal of Experimental Psychology, 62, 1377–1388, 1388–1390. https://doi.org/10.1080/17470210802483834

  23. Crutch, S. J., & Warrington, E. K. (2005). Abstract and concrete concepts have structurally different representational frameworks. Brain, 128, 615–627. https://doi.org/10.1093/brain/awh349

  24. Cuetos, F., & Barbón, A. (2006). Word naming in Spanish. European Journal of Cognitive Psychology, 18, 415–436. https://doi.org/10.1080/13594320500165896

  25. Davies, R., Wilson, M., Cuetos, F., & Burani, C. (2014). Reading in Spanish and Italian: Effects of age of acquisition in transparent orthographies? Quarterly Journal of Experimental Psychology, 67, 1808–1825. https://doi.org/10.1080/17470218.2013.872155

  26. Desrochers, A., & Thompson, G. L. (2009). Subjective frequency and imageability ratings for 3,600 French nouns. Behavior Research Methods, 41, 546–557. https://doi.org/10.3758/BRM.41.2.546

  27. Ernst, M. O., & Bülthoff, H. H. (2004). Merging the senses into a robust percept. Trends in Cognitive Sciences, 8, 162–169. https://doi.org/10.1016/j.tics.2004.02.002

  28. Ferrand, L., Bonin, P., Meot, A., Augustinova, M., New, B., Pallier, C., & Brysbaert, M. (2008). Age-of-acquisition and subjective frequency estimates for all generally known monosyllabic French words and their relation with other psycholinguistic variables. Behavior Research Methods, 40, 1049–1054. https://doi.org/10.3758/BRM.40.4.1049

  29. Ferrand, L., New, B., Brysbaert, M., Keuleers, E., Bonin, P., Meot, A., . . . Pallier, C. (2010). The French Lexicon Project: Lexical decision data for 38,840 French words and 38,840 pseudowords. Behavior Research Methods, 42, 488–496. https://doi.org/10.3758/BRM.42.2.488

  30. Fliessbach, K., Weis, S., Klaver, P., Elger, C. E., & Weber, B. (2006). The effect of word concreteness on recognition memory. NeuroImage, 32, 1413–1421. https://doi.org/10.1016/j.neuroimage.2006.06.007

  31. Gardner, E. P., & Martin, J. H. (2000). Coding of sensory information. In E. R. Kandel, J. H. Schwartz, & T. M. Jessell (Eds.), Principles of neural science (4th ed., pp. 411–429). New York, NY: McGraw-Hill.

  32. Ghyselinck, M., Lewis, M. B., & Brysbaert, M. (2004). Age of acquisition and the cumulative-frequency hypothesis: A review of the literature and a new multi-task investigation. Acta Psychologica, 115, 43–67. https://doi.org/10.1016/j.actpsy.2003.11.002

  33. Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J., Yacoub, E., . . . Van Essen, D. C. (2016). A multi-modal parcellation of human cerebral cortex. Nature, 536, 171–178. https://doi.org/10.1038/nature18933

  34. Glenberg, A. M., Witt, J. K., & Metcalfe, J. (2013). From the revolution to embodiment: 25 years of cognitive psychology. Perspectives on Psychological Science, 8, 573–585. https://doi.org/10.1177/1745691613498098

  35. Goldberg, R. F., Perfetti, C. A., & Schneider, W. (2006). Perceptual knowledge retrieval activates sensory brain regions. Journal of Neuroscience, 26, 4917–4921. https://doi.org/10.1523/JNEUROSCI.5389-05.2006

  36. Gonzalez, J., Barros-Loscertales, A., Pulvermuller, F., Meseguer, V., Sanjuan, A., Belloch, V., & Avila, C. (2006). Reading cinnamon activates olfactory brain regions. NeuroImage, 32, 906–912. https://doi.org/10.1016/j.neuroimage.2006.03.037

  37. Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences, 27, 377–396, disc. 396–442.

  38. Hecht, D., & Reiner, M. (2009). Sensory dominance in combinations of audio, visual and haptic stimuli. Experimental Brain Research, 193, 307–314. https://doi.org/10.1007/s00221-008-1626-z

  39. Holcomb, P. J., Kounios, J., Anderson, J. E., & West, W. C. (1999). Dual-coding, context-availability, and concreteness effects in sentence comprehension: An electrophysiological investigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 721–742. https://doi.org/10.1037/0278-7393.25.3.721

  40. Howell, D. C. (1992). Statistical methods for psychology (3rd ed.). Boston, MA: PWS-Kent.

  41. Jessen, F., Heun, R., Erb, M., Granath, D. O., Klose, U., Papassotiropoulos, A., & Grodd, W. (2000). The concreteness effect: Evidence for dual coding and context availability. Brain and Language, 74, 103–112. https://doi.org/10.1006/brln.2000.2340

  42. Juhasz, B. J., Lai, Y. H., & Woodcock, M. L. (2015). A database of 629 English compound words: Ratings of familiarity, lexeme meaning dominance, semantic transparency, age of acquisition, imageability, and sensory experience. Behavior Research Methods, 47, 1004–1019. https://doi.org/10.3758/s13428-014-0523-6

  43. Juhasz, B. J., & Yap, M. J. (2013). Sensory experience ratings for over 5,000 mono- and disyllabic words. Behavior Research Methods, 45, 160–168. https://doi.org/10.3758/s13428-012-0242-9

  44. Juhasz, B. J., Yap, M. J., Dicke, J., Taylor, S. C., & Gullick, M. M. (2011). Tangible words are recognized faster: The grounding of meaning in sensory and perceptual systems. Quarterly Journal of Experimental Psychology, 64, 1683–1691. https://doi.org/10.1080/17470218.2011.605150

  45. Kaschak, M. P., Zwaan, R. A., Aveyard, M., & Yaxley, R. H. (2006). Perception of auditory motion affects language processing. Cognitive Science, 30, 733–744. https://doi.org/10.1207/s15516709cog0000_54

  46. Keetels, M., & Vroomen, J. (2012). Perception of synchrony between the senses. In M. M. Murray & M. T. Wallace (Eds.), The neural bases of multisensory processes (pp. 147–178). Boca Raton, FL: CRC Press.

  47. Kiefer, M., Sim, E. J., Herrnberger, B., Grothe, J., & Hoenig, K. (2008). The sound of concepts: Four markers for a link between auditory and conceptual brain systems. Journal of Neuroscience, 28, 12224–12230. https://doi.org/10.1523/JNEUROSCI.3579-08.2008

  48. Kuperman, V., Stadthagen-Gonzalez, H., & Brysbaert, M. (2012). Age-of-acquisition ratings for 30,000 English words. Behavior Research Methods, 44, 978–990. https://doi.org/10.3758/s13428-012-0210-4

  49. Lynott, D., & Connell, L. (2009). Modality exclusivity norms for 423 object properties. Behavior Research Methods, 41, 558–564. https://doi.org/10.3758/BRM.41.2.558

  50. Lynott, D., & Connell, L. (2013). Modality exclusivity norms for 400 nouns: The relationship between perceptual experience and surface word form. Behavior Research Methods, 45, 516–526. https://doi.org/10.3758/s13428-012-0267-0

  51. Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45. https://doi.org/10.1146/annurev.psych.57.102904.190143

  52. Meteyard, L., Cuadrado, S. R., Bahrami, B., & Vigliocco, G. (2012). Coming of age: A review of embodiment and the neuroscience of semantics. Cortex, 48, 788–804. https://doi.org/10.1016/j.cortex.2010.11.002

  53. New, B., Pallier, C., Brysbaert, M., & Ferrand, L. (2004). Lexique 2: A new French lexical database. Behavior Research Methods, Instruments, & Computers, 36, 516–524. https://doi.org/10.3758/BF03195598

  54. Paivio, A. (1991). Dual coding theory: Retrospect and current status. Canadian Journal of Psychology, 45, 255–287.

  55. Paivio, A. (2013). Dual coding theory, word abstractness, and emotion: A critical review of Kousta et al. (2011). Journal of Experimental Psychology: General, 142, 282–287. https://doi.org/10.1037/a0027004

  56. Paivio, A., Yuille, J. C., & Madigan, S. A. (1968). Concreteness, imagery, and meaningfulness values for 925 nouns. Journal of Experimental Psychology, 76(1, Pt. 2), 1–25. https://doi.org/10.1037/h0025327

  57. Paivio, A., Yuille, J. C., & Smythe, P. C. (1966). Stimulus and response abstractness, imagery, and meaningfulness, and reported mediators in paired-associate learning. Canadian Journal of Psychology, 20, 362–377.

  58. Parker, P. L., McDaniel, H. S., & Crumpton-Young, L. L. (2002). Do research participants give interval or ordinal answers in response to Likert scales? In Proceedings of the IISE Annual Conference (p. 1). Peachtree Corners, GA: Institute of Industrial and Systems Engineers.

  59. Pfennings, L., Cohen, L., & van der Ploeg, H. (1995). Preconditions for sensitivity in measuring change: visual analogue scales compared to rating scales in a Likert format. Psychological Reports, 77, 475–480. https://doi.org/10.2466/pr0.1995.77.2.475

  60. Rey, A. E., Riou, B., Vallet, G. T., & Versace, R. (2017). The automatic visual simulation of words: A memory reactivated mask slows down conceptual access. Canadian Journal of Experimental Psychology, 71, 14–22. https://doi.org/10.1037/cep0000100

  61. Romani, C., McAlpine, S., & Martin, R. C. (2008). Concreteness effects in different tasks: Implications for models of short-term memory. Quarterly Journal of Experimental Psychology, 61, 292–323. https://doi.org/10.1080/17470210601147747

  62. Sabsevitz, D. S., Medler, D. A., Seidenberg, M., & Binder, J. R. (2005). Modulation of the semantic system by word imageability. NeuroImage, 27, 188–200. https://doi.org/10.1016/j.neuroimage.2005.04.012

  63. Sanchez-Gutierrez, C. H., Mailhot, H., Deacon, S. H., & Wilson, M. A. (2018). MorphoLex: A derivational morphological database for 70,000 English words. Behavior Research Methods, 50, 1568–1580. https://doi.org/10.3758/s13428-017-0981-8

  64. Sanfeliu, M. C., & Fernandez, A. (1996). A set of 254 Snodgrass-Vanderwart pictures standardized for Spanish: Norms for name agreement, image agreement, familiarity, and visual complexity. Behavior Research Methods, Instruments, & Computers, 28, 537–555. https://doi.org/10.3758/BF03200541

  65. Simmons, W. K., Ramjee, V., Beauchamp, M. S., McRae, K., Martin, A., & Barsalou, L. W. (2007). A common neural substrate for perceiving and knowing about color. Neuropsychologia, 45, 2802–2810. https://doi.org/10.1016/j.neuropsychologia.2007.05.002

  66. Sirois, M., Kremin, H., & Cohen, H. (2006). Picture-naming norms for Canadian French: Name agreement, familiarity, visual complexity, and age of acquisition. Behavior Research Methods, 38, 300–306. https://doi.org/10.3758/BF03192781

  67. Speed, L. J., & Majid, A. (2017). Dutch modality exclusivity norms: Simulating perceptual modality in space. Behavior Research Methods, 49, 2204–2218. https://doi.org/10.3758/s13428-017-0852-3

  68. Tsaparina, D., Bonin, P., & Méot, A. (2011). Russian norms for name agreement, image agreement for the colorized version of the Snodgrass and Vanderwart pictures and age of acquisition, conceptual familiarity, and imageability scores for modal object names. Behavior Research Methods, 43, 1085–1099. https://doi.org/10.3758/s13428-011-0121-9

  69. Vallet, G., Brunel, L., & Versace, R. (2010). The perceptual nature of the cross-modal priming effect: Arguments in favor of a sensory-based conception of memory. Experimental Psychology, 57, 376–382. https://doi.org/10.1027/1618-3169/a000045

  70. Vallet, G., Simard, M., Versace, R., & Mazza, S. (2013). The perceptual nature of audiovisual interactions for semantic knowledge in young and elderly adults. Acta Psychologica, 143, 253–260. https://doi.org/10.1016/j.actpsy.2013.04.009

  71. van Dantzig, S., Cowell, R. A., Zeelenberg, R., & Pecher, D. (2011). A sharp image or a sharp knife: Norms for the modality-exclusivity of 774 concept-property items. Behavior Research Methods, 43, 145–154. https://doi.org/10.3758/s13428-010-0038-8

  72. van Dantzig, S., Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2008). Perceptual processing affects conceptual processing. Cognitive Science, 32, 579–590. https://doi.org/10.1080/03640210802035365

  73. Versace, R., Vallet, G. T., Riou, B., Lesourd, M., Labeye, É., & Brunel, L. (2014). Act-In: An integrated view of memory mechanisms. Journal of Cognitive Psychology, 26, 280–306. https://doi.org/10.1080/20445911.2014.892113

  74. Voutilainen, A., Pitkäaho, T., Kvist, T., & Vehviläinen-Julkunen, K. (2016). How to ask about patient satisfaction? The visual analogue scale is less vulnerable to confounding factors and ceiling effect than a symmetric Likert scale. Journal of Advanced Nursing, 72, 946–957. https://doi.org/10.1111/jan.12875

  75. Wilson, M. A., Cuetos, F., Davies, R., & Burani, C. (2013). Revisiting age-of-acquisition effects in Spanish visual word recognition: The role of item imageability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 1842–1859. https://doi.org/10.1037/a0033090

  76. Winter, B. (2016). Taste and smell words form an affectively loaded and emotionally flexible part of the English lexicon. Language, Cognition and Neuroscience, 31, 975–988. https://doi.org/10.1080/23273798.2016.1193619

  77. Yarkoni, T., Balota, D., & Yap, M. (2008). Moving beyond Coltheart’s N: A new measure of orthographic similarity. Psychonomic Bulletin & Review, 15, 971–979. https://doi.org/10.3758/PBR.15.5.971

Author note

G.C. is supported by a Fonds de recherche du Québec–Nature et Technologies (FRQ-NT) fellowship. S.M.B. is supported by a Fonds de recherche du Québec–Santé (FRQS) Chercheur Boursier Junior 2 Scholarship. The work was supported by the Natural Sciences and Engineering Research Council of Canada (Grant 418630-2012 to S.M.B.) and by the Social Sciences and Humanities Research Council of Canada (Grant 430-2015-00699 to M.A.W.).

Author information

Corresponding author

Correspondence to Georges Chedid.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

ESM 1 (DOC 418 kb)

ESM 2 (XLSX 53 kb)

Cite this article

Chedid, G., Brambati, S.M., Bedetti, C. et al. Visual and auditory perceptual strength norms for 3,596 French nouns and their relationship with other psycholinguistic variables. Behav Res 51, 2094–2105 (2019). https://doi.org/10.3758/s13428-019-01254-w

Keywords

  • Perceptual strength
  • Norms
  • Regression
  • Psycholinguistic variables