
Psychonomic Bulletin & Review, Volume 20, Issue 4, pp 773–779

The activation of segmental and tonal information in visual word recognition

  • Chuchu Li
  • Candise Y. Lin
  • Min Wang
  • Nan Jiang
Brief Report

Abstract

Mandarin Chinese has a logographic script in which graphemes map onto syllables and morphemes. It is not clear whether Chinese readers activate phonological information during lexical access, since phonological information is not explicitly represented in Chinese orthography. In the present study, we examined the activation of phonological information, including segmental and tonal information, in Chinese visual word recognition, using the Stroop paradigm. Native Mandarin speakers named the presentation color of Chinese characters in Mandarin. The visual stimuli were divided into five types: color characters (e.g., 红, hong2, “red”), homophones of the color characters (S+T+; e.g., 洪, hong2, “flood”), different-tone homophones (S+T–; e.g., 轰, hong1, “boom”), characters that shared the same tone but differed in segments from the color characters (S–T+; e.g., 瓶, ping2, “bottle”), and neutral characters (S–T–; e.g., 牵, qian1, “lead along”). Classic Stroop facilitation was shown in all color-congruent trials, and interference was shown in the incongruent trials. Furthermore, the Stroop effect was stronger for S+T– than for S–T+ trials, and was similar between S+T+ and S+T– trials. These findings suggested that both tonal and segmental information play a role in constraining lexical access; however, segmental information carries more weight than tonal information. We propose a revised visual word recognition model in which the functions of both segmental and suprasegmental types of information and their relative weights are taken into account.

Keywords

Syllable segment · Tone · Visual word recognition

Whether phonological information is always activated in visual word recognition is a debated issue. Most researchers tend to agree that phonology is activated in lexical access in alphabetic writing systems such as English, in which graphemes map onto phonemes (Alario, De Cara, & Ziegler, 2007; Berent & Perfetti, 1995; Frost, 1998). However, the activation of phonology in logographic writing systems such as Chinese, in which graphemes map onto syllables and morphemes, is less clear-cut. In Chinese, phonological information is not explicitly represented in written characters; for example, the character 马 is pronounced ma3 (the number 3 here denotes the tone feature) and means “horse.” There is no letter–sound correspondence between the visual character and its pronunciation. Chinese readers may entirely bypass phonological information in lexical access (Zhou & Marslen-Wilson, 2009; Zhou, Shu, Bi, & Shi, 1999). Furthermore, Mandarin Chinese has a simple syllable structure in which consonant–vowel is the most common structure for monosyllabic words, most of which have a large number of homophones and phonological neighbors (Chen, Vaid, & Wu, 2009; Dictionary Editing Room of the Language Institute, 2005). These features of the language may push Chinese readers to rely more on orthographic information in order to access the lexical representation. Thus, how much phonological information is involved in reading logographic Chinese is an interesting and important question to address.

Phonology encompasses segments and suprasegments. A segment refers to any discrete unit that can be identified in the stream of speech, such as consonants and vowels. A suprasegment is defined as a vocal effect that extends over more than one segment, such as lexical stress in English, which cannot be carried on an isolated consonant or vowel (Crystal, 2008). Both segments and suprasegments provide useful information in word recognition. For example, the pronunciations of pie and buy differ only in their initial phonemes (/p/ vs. /b/), which is segmental information, yet their meanings and syntactic categories are completely different. Suprasegmental information such as stress can also be important in determining a word’s semantic information. For example, when stress is placed on the first syllable of the word record (/ˈrɛkərd/), it is a noun meaning “an account of facts.” When stress is placed on the second syllable (/rɪˈkɔrd/), it becomes a verb meaning “to set down in writing.” How do readers activate segmental and suprasegmental information in word recognition, and how do Chinese readers activate phonological information, including segmental and tonal information, in a logographic system? These questions motivated the present study.

The functions of segments and suprasegments in spoken word recognition

Segment and lexical stress

Both segmental and suprasegmental information play a role in lexical access in spoken word recognition, although their relative importance is language-specific. One such language specificity is that the prevalence of minimal stress pairs—pairs of words that differ only in stress location—varies across languages (Cutler, Dahan, & van Donselaar, 1997). For languages such as Spanish and Italian, in which minimal stress pairs are very common, segments and suprasegments are both useful. In a cross-modal priming lexical-decision task, Spanish listeners showed inhibition of similar effect sizes when the target word was preceded by a prime mismatched in one vowel, one consonant, or the stress pattern (Soto-Faraco, Sebastián-Gallés, & Cutler, 2001). Similar results were observed in Italian (Tagliapietra & Tabossi, 2005).

For languages in which minimal stress pairs are not common, such as English, suprasegments are relatively less relied upon. In English, stress information may be redundant in word recognition, since it can always be derived from segmental structure (e.g., vowel quality). For example, /ˈrɛkərd/ and /rɪˈkɔrd/ differ in both their stress patterns and vowel qualities. If vowel quality is not altered, mis-stressing has no significant effect on noise-masked word recognition (Slowiaczek, 1990). Using the cross-modal priming paradigm, in which listeners hear an English sentence and at some point during the sentence perform a visual lexical decision task, Cutler (1986) found that a word such as forbear in both stress patterns (i.e., FORbear and forBEAR) facilitated the recognition of words semantically related to each of them (e.g., ancestor, tolerate). This result suggested that stress may not constrain lexical access in English. In summary, whether suprasegments (lexical stress, in this case) are important in constraining lexical access in spoken word recognition depends on whether segments provide sufficient information to distinguish among lexical items. For languages in which minimal stress pairs are rare, stress is treated as dispensable rather than critical information to distinguish words (Cooper, Cutler, & Wales, 2002; Cutler et al., 1997).

Segment and lexical tones

Unlike lexical stress, tone contrasts are not distinguished by differences in segmental structure. In tonal languages such as Mandarin and Thai, different tone homophones—minimal pairs differing only in tone—are abundant (Lee, 2007). Therefore, the investigation of the usefulness of segmental and suprasegmental information in tonal languages provides a unique perspective in the investigation of language-specific constraints on lexical access. In Mandarin, a syllable segment comprises an onset (consonant or empty) and rime (vowel + consonant or vowel only). In terms of suprasegments, there are four lexical tones. The same syllable segment can represent four different meanings, depending on the tone that it carries (Li & Thompson, 1989). For example, 妈 (ma1) means “mother,” 麻 (ma2) means “hemp,” 马 (ma3) means “horse,” and 骂 (ma4) means “scold.” Therefore, it is important to associate the syllable segment with the correct lexical tone.

Syllable segments or lexical tones alone may not provide sufficient information to differentiate spoken words. In an auditory-priming lexical-decision task (Lee, 2007), Mandarin listeners did not show significant priming effects when the prime shared only the segment or only the tone with the target. Reliable facilitation was found only when the prime shared both the segment and the tone with the target. Moreover, Malins and Joanisse (2010) suggested that tonal and segmental information are accessed concurrently and play comparable roles. In their eyetracking study, native Mandarin speakers were asked to select a picture that matched the Chinese word that they heard. Both segmental distractors (i.e., the name of the distracting picture differed from the target only in syllable segment) and tonal distractors (i.e., the name of the distracting picture differed from the target only in tone) slowed down participants’ fixation latencies. However, Tong, Francis, and Gandour (2007) indicated that segments played a more robust role than tones: When listeners were asked to classify syllables on the basis of a single target dimension (tone, consonant, or vowel), the segmental dimension interfered more with tone classification than the tonal dimension did with vowel or consonant classification. Although it is still debatable how segments and tones interact with each other, there is consensus that both play a role during spoken word recognition.

Although both segments and suprasegments are salient phonological properties of spoken language, they are not always explicitly represented in visual words. For example, stress is not marked in written English words, and neither segmental nor tonal information is explicitly represented in visual Chinese characters. Whether and how readers nevertheless access segments and suprasegments thus becomes an interesting question.

The functions of segments and suprasegments in visual word recognition

Segments and lexical stress

Previous literature has shown that segmental information is activated in visual word recognition in alphabetic writing systems such as English and German (e.g., Braun, Hutzler, Ziegler, Dambacher, & Jacobs, 2009; Luo, Johnson, & Gallo, 1998). Although it is not marked in written English words, stress has an impact on word recognition in silent reading (Ashby & Clifton, 2005): Native English readers spent more time on words that contain two stressed syllables than on words with one stressed syllable, after controlling for factors such as word length. The implicit-prosody hypothesis (Fodor, 1998) suggests that readers may generate inner speech and impose a prosodic contour (i.e., suprasegmental information) on text in silent reading. The activation of prosodic information can function as a mechanism to temporarily hold the phonological representation of each word in short-term memory (see Ashby, 2006).

Segments and lexical tones

In a logographic writing system such as Chinese, neither segmental nor suprasegmental information is explicitly represented in the visual characters. However, previous studies have suggested that phonology plays an important role in Chinese lexical access (e.g., Tan & Perfetti, 1997; Xu, Pollatsek, & Potter, 1999). Spinks, Liu, Perfetti, and Tan (2000) used the Stroop paradigm (Stroop, 1935) to investigate whether phonological information is automatically activated in visual Chinese word recognition. In a classic Stroop task, participants name the presentation color of the word rather than saying the word itself. As compared to a noncolor word (e.g., CAT printed in red), participants are slower to name the print color of an incongruent color word (e.g., RED printed in blue) and faster to name the print color of a congruent color word (e.g., RED printed in red). This Stroop paradigm is one of the most powerful tools to address unintentional, automatic word reading. It is also useful for the investigation of phonological coding in lexical access.

Spinks et al. (2000) asked native Mandarin speakers to name the presentation color of Chinese characters. The critical stimuli included color characters (e.g., 红, hong2, “red”), homophones of the color characters (same segment–same tone: S+T+; e.g., 洪, hong2, “flood”), homophones that only shared the same syllable segment (S+T–; e.g., 轰, hong1, “boom”), and a neutral stimulus (S–T–; e.g., 贯, guan4, “passing through”). Significant facilitation for congruent S+T+ (e.g., 洪 in red) and S+T– characters (e.g., 轰 in red), and inhibition for incongruent S+T+ characters (e.g., 洪 in green) were reported. No significant effect was found for incongruent S+T– trials (e.g., 轰 in green). These results suggested that syllable segments may be activated independently of tones, since congruent S+T– characters facilitated naming latencies. It is likely that tones are also activated, since incongruent S+T– did not produce significant inhibition, whereas incongruent S+T+ did. However, since Spinks et al.’s study did not include the S–T+ stimulus type (characters that shared the same tone but differed in syllable segments from the color characters), it remains unclear whether tones play an independent role.

Taft and Chen (1992) suggested that tones may be poorly represented in the mental lexicon as compared to syllable segments. It was difficult for Mandarin speakers to say “no” in a homophone judgment task when the visual characters shared the same segment but differed in tones (e.g., 去, qù, and 曲, qŭ), whereas it was easier to say “no” to visual characters that shared the same tone but differed in vowels (e.g., 去, qù, and 气, qì). These inconsistent findings regarding the importance of segmental and tonal information activation in Chinese visual lexical access motivated our study.

The present study

We modified the Stroop task in Spinks et al. (2000), with the important addition of an S–T+ stimulus type. Participants named the presentation color of Chinese characters in Mandarin. The critical trials included congruent and incongruent color characters (e.g., 红 in red and 红 in green, respectively), and congruent S+T+ (e.g., 洪 in red), S+T– (e.g., 轰 in red), and S–T+ (e.g., 瓶 in red) characters, as well as neutral characters (S–T–; e.g., 牵 in red). A comparison among the S+T+, S+T–, S–T+, and S–T– effects might help tease apart the activation of segmental and tonal information cleanly and address the independent contributions made by each. Previous research had suggested that segmental and suprasegmental information play comparable roles in alphabetic languages such as Spanish, Italian, or Dutch, in which minimal stress pairs are prevalent (Koster & Cutler, 1997; Soto-Faraco et al., 2001; Tagliapietra & Tabossi, 2005). Given the abundance of minimal tone pairs in Chinese and evidence that segmental and tonal information in Chinese also play comparable roles (e.g., Malins & Joanisse, 2010), we hypothesized that congruent S+T– and S–T+ trials should both elicit significant facilitation of color naming, in addition to the S+T+ trials. If tonal information is poorly represented in the Chinese mental lexicon relative to segmental information (e.g., Taft & Chen, 1992), there should be facilitation in the S+T+ and S+T– trials but not in the S–T+ trials. If segmental and tonal information are activated as an integral unit, as reported in Lee (2007), color-naming facilitation should be observed only in the S+T+ trials.

Method

Participants

The participants were 18 native Mandarin-speaking graduate students (12 female, 6 male) with normal or corrected-to-normal vision from a Mid-Atlantic university. Their ages ranged from 21 to 26 years (M = 23.6, SD = 1.24).

Design and materials

The stimuli were written in four different print colors—red, yellow, green, and blue—including 72 (six stimulus types × four colors × three repetitions) critical trials and 60 fillers. The six stimulus types for the critical trials were congruent and incongruent color characters, congruent S+T+, congruent S+T–, congruent S–T+, and neutral characters (see Table 1 for the stimulus characteristics).
Table 1

Stimuli in the critical trials

| Condition         | Color Character | S+T+    | S+T–   | S–T+        | S–T– (neutral)    |
|-------------------|-----------------|---------|--------|-------------|-------------------|
| Character         | 红              | 洪      | 轰     | 瓶          | 牵                |
| Frequency*        | 75.89           | 93.35   | 92.53  | 95.80       | 95.67             |
| Number of strokes | 6               | 9       | 8      | 10          | 9                 |
| Pronunciation     | hong2           | hong2   | hong1  | ping2       | qian1             |
| Translation       | red             | flood   | boom   | bottle      | lead along        |
| Character         | 黄              | 皇      | 晃     | 缠          | 趁                |
| Frequency         | 78.23           | 84.32   | 96.27  | 97.29       | 97.50             |
| Number of strokes | 11              | 9       | 10     | 13          | 12                |
| Pronunciation     | huang2          | huang2  | huang4 | chan2       | chen4             |
| Translation       | yellow          | emperor | sway   | wrap around | take advantage of |
| Character         | 蓝              | 婪      | 览     | 尝          | 宫                |
| Frequency         | 91.73           | 99.28   | 96.68  | 95.34       | 88.85             |
| Number of strokes | 13              | 11      | 9      | 9           | 9                 |
| Pronunciation     | lan2            | lan2    | lan3   | chang2      | gong1             |
| Translation       | blue            | greedy  | view   | taste       | palace            |
| Character         | 绿              | 虑      | 旅     | 洞          | 涂                |
| Frequency         | 90.44           | 87.40   | 88.30  | 89.38       | 95.97             |
| Number of strokes | 11              | 10      | 10     | 9           | 10                |
| Pronunciation     | lü4             | lü4     | lü3    | dong4       | tu2               |
| Translation       | green           | ponder  | travel | hole        | paint             |

*Frequency information was obtained from the bigram frequency database on the Chinese Text Computing website (Da, 2004; http://lingua.mtsu.edu/chinese-computing/). All frequencies are character frequencies per million characters.

Procedure

The experiment was implemented using the DMDX software (Forster & Forster, 2003). The participants were asked to name the color of the characters shown on the computer screen as quickly and accurately as possible. A fixation mark “+” appeared at the center of the screen for 500 ms, followed by the target character, written in bold 48-point Song-Ti font, which disappeared as soon as a color-naming response was made. The intertrial interval was 1,000 ms. The trial was automatically terminated if no response was made within 3,000 ms. The trials were pseudorandomized so that the same color or character did not appear consecutively, and they were preceded by eight practice trials. The first author sat behind the participants and recorded their naming accuracy.
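The pseudorandomization constraint described above (no immediate repetition of the same color or the same character) can be sketched with shuffle-and-repair sampling. This is an illustrative reconstruction, not the authors' actual DMDX item file; the trial representation and field names (`char`, `color`) are assumptions.

```python
import random

def pseudorandomize(trials, max_tries=10_000):
    """Order trials so that neither the same color nor the same character
    appears on two consecutive trials. Each attempt builds the sequence
    greedily from a shuffled pool and restarts on a dead end."""
    for _ in range(max_tries):
        pool = trials[:]
        random.shuffle(pool)
        order = []
        for _ in range(len(trials)):
            # Candidates that do not repeat the previous trial's color or character.
            candidates = [t for t in pool
                          if not order
                          or (t["color"] != order[-1]["color"]
                              and t["char"] != order[-1]["char"])]
            if not candidates:
                break  # dead end; restart with a fresh shuffle
            pick = random.choice(candidates)
            order.append(pick)
            pool.remove(pick)
        if len(order) == len(trials):
            return order
    raise RuntimeError("no valid ordering found")

# Hypothetical trial list: the five red-group characters crossed with four colors.
trials = [{"char": c, "color": col}
          for c in "红洪轰瓶牵"
          for col in ["red", "yellow", "green", "blue"]]
order = pseudorandomize(trials)
```

Rejection-style restarts are a common way to implement such adjacency constraints when the trial list is small.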

Results

Response time (RT) analyses were based on correct trials only. Correct trials in which the naming response failed to trigger the voice key were discarded (2 %). RTs more than two standard deviations above or below their cell mean were also excluded (an additional 3 %). For all subsequent analyses, RTs were log-transformed to improve normality. Table 2 shows the descriptive statistics for RTs and error rates.
Table 2

Response times (RTs, with SDs in parentheses) and errors (with SDs) in each condition

| Condition                   | RT (ms)   | Errors (%) | Stroop Effect (ms) |
|-----------------------------|-----------|------------|--------------------|
| Congruent color character   | 719 (94)  | 0.0 (0.0)  | 58**               |
| Congruent S+T+              | 703 (100) | 0.0 (0.0)  | 74***              |
| Congruent S+T–              | 695 (66)  | 0.0 (0.0)  | 82***              |
| Congruent S–T+              | 742 (84)  | 0.9 (2.7)  | 35*                |
| Incongruent color character | 999 (208) | 8.3 (9.9)  | −222***            |
| S–T– (neutral condition)    | 777 (102) | 0.0 (0.0)  |                    |

* p < .05. ** p < .01. *** p < .001
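The RT preprocessing reported above (excluding responses beyond two standard deviations of the cell mean, then log-transforming) can be sketched as follows. The function name and the example RTs are illustrative assumptions, not the study's data.

```python
import math
from statistics import mean, stdev

def trim_and_log(rts):
    """Drop RTs more than 2 SDs above or below the cell mean,
    then return the natural log of the surviving RTs."""
    m, s = mean(rts), stdev(rts)
    kept = [rt for rt in rts if abs(rt - m) <= 2 * s]
    return [math.log(rt) for rt in kept]

# Hypothetical cell: ten typical naming latencies plus one extreme outlier (ms).
rts = [700] * 5 + [710] * 5 + [3000]
clean = trim_and_log(rts)  # the 3000-ms trial is trimmed; 10 log-RTs remain
```

Note that with very small cells a single outlier inflates the SD enough to survive a 2-SD cut, which is why trimming is usually applied per condition across a reasonable number of trials.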

As compared to the neutral condition, significant facilitation emerged for congruent color characters [t(17) = 3.605, p = .002], congruent S+T+ [t(17) = 5.758, p < .001], congruent S+T– [t(17) = 5.886, p < .001], and congruent S–T+ [t(17) = 2.509, p = .023], and significant inhibition was found for incongruent color characters [t(17) = −6.760, p < .001]. No significant difference in effect sizes was apparent between the facilitations of S+T+ and S+T– [t(17) = .677, p = .508, Cohen’s d = 0.127]. However, the effect size of S+T– facilitation was significantly larger than that of S–T+ facilitation [t(17) = 3.437, p = .003, Cohen’s d = 0.769]. The error rates across all trials were low, ranging from 0 % to 0.9 %, except for the incongruent color characters, for which the error rate was 8.3 %. Analyses on error rates only showed significant inhibition for the incongruent condition (p < .001).
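The comparisons above are paired t tests of each condition against the neutral baseline, with Cohen's d computed on the difference scores. A minimal stdlib sketch, using made-up per-participant RTs rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t_and_d(cond, neutral):
    """Paired t statistic and Cohen's d for difference scores
    (neutral - condition, so positive values indicate facilitation)."""
    diffs = [n - c for c, n in zip(cond, neutral)]
    d_mean, d_sd = mean(diffs), stdev(diffs)
    t = d_mean / (d_sd / math.sqrt(len(diffs)))
    cohens_d = d_mean / d_sd  # standardized mean difference of the pairs
    return t, cohens_d

# Hypothetical per-participant mean RTs (ms); numbers are illustrative only.
neutral = [780, 770, 790, 760, 800]
s_plus_t_minus = [700, 710, 695, 705, 690]
t, d = paired_t_and_d(s_plus_t_minus, neutral)  # large facilitation effect
```

With 18 participants, as in the study, the t statistic would be evaluated against df = 17.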

Additional tests were conducted for the congruent S–T+ stimuli on the basis of color groups, since three of the four color names carried the second tone (红, hong2, “red”; 蓝, lan2, “blue”; and 黄, huang2, “yellow”). As compared with the corresponding control characters, for the red color S–T+ character 瓶 (ping2, “bottle”), the facilitation was marginally significant [t(17) = 1.866, p = .079]; for the blue 尝 (chang2, “taste”), participants showed significant facilitation [t(17) = 2.380, p = .029], and for the yellow 缠 (chan2, “wrap around”), we also found significant facilitation [t(17) = 2.261, p = .037]. However, for the green S–T+ character 洞 (dong4, “hole”), which carries the fourth tone, although a trend of facilitation was shown, the effect was not significant [t(17) = 0.017, p = .987].

Discussion

The present study showed that segmental and tonal information independently constrain lexical access in visual word recognition. In addition, the activation of these constituents is automatic, occurring even when it is not needed in the context of the Stroop experiment. Unlike silent-reading tasks, in which readers are required to process visual word information, the Stroop task requires participants to name the color of the character, not the character itself. Nevertheless, the presence of a visual word resulted in the automatic activation of syllable segmental and tonal information. The Stroop effect across all stimulus types provided evidence that the automatic activation of both segments and suprasegments supports lexical access. Importantly, the significant facilitation in the congruent S–T+ trials suggested that tonal information plays an independent role in visual word recognition. As stated earlier, Mandarin Chinese has abundant minimal tone pairs, and tonal contrasts are not distinguished by segmental changes. Thus, incorporating tonal information in phonological representations is crucial for accurate word recognition.

However, our results suggested that syllable segmental information outweighs lexical tonal information in supporting lexical access. The effect size of the S+T– facilitation was close to that of the S+T+ facilitation, but significantly larger than that of the S–T+ facilitation. These results suggested that the activation of segmental information is more helpful than the activation of tonal information in lexical access. Another way to view these results is that tonal variation did not change participants’ naming latencies when syllable segments were the same (S+T+ and S+T– stimulus types); tonal information was helpful only when syllable segments differed (S–T+ and S–T– types). This interaction suggests that syllable segmental and tonal information may be activated in a hierarchical order: Readers first look to segmental information for lexical access, and when syllable segments cannot provide enough useful information, readers then look to tonal information to resolve any lexical ambiguity. This proposed interaction also suggests that, as compared to syllable segmental information, tonal information may not be as well represented in the mental lexicon. Tong et al. (2007) pointed out that each tone is associated with more words than each segment is; consequently, tonal information exerts fewer constraints on word recognition than does segmental information. In addition, the acoustic realization of tones is greatly influenced by their phonological environment (Chen, 2000). Tone sandhi is a good example: the third tone changes to the second when it is followed by another third tone. Finally, tones may be activated later than segments during visual word recognition, although the time course of activation is beyond the scope of the present study.

Models of visual word recognition have focused on the relations among orthography, phonology, and semantics. For example, the direct visual–orthographic pathway (e.g., Wong & Chen, 1999) supports a direct mapping between orthography and semantics, without the mediation of phonology. The indirect pathway (e.g., Perfetti & Tan, 1998, 1999), on the other hand, supports the inclusion of phonology as the mediation between orthography and semantics. How different phonological constituents (e.g., segments vs. suprasegments) function in visual word recognition has received relatively less attention. Findings from the present study suggest that segmental and suprasegmental forms of information are both activated, although segments may be more important than suprasegments. On the basis of the well-known “triangle” model of visual word recognition (Harm & Seidenberg, 2001; Plaut, McClelland, Seidenberg, & Patterson, 1996), we proposed a revised visual word recognition model in which the functions of both segmental and suprasegmental phonological information are taken into account (see Fig. 1). In this model, segmental and suprasegmental forms of information jointly contribute to lexical access. The activation of both segmental and suprasegmental information is an online and obligatory process. Segmental information may also be more helpful than suprasegmental information in lexical access, at least for Mandarin.
Fig. 1

Revised visual word recognition model

There are several limitations to our study. The S–T+ stimuli included three color names carrying the second tone, which may explain why S–T+ facilitation occurred only when the color characters carried the second tone; that is, the S–T+ facilitation may be partially due to the repeated production of second-tone syllables. If color names with the first, third, or fourth tones were included in the stimuli, the S–T+ facilitation might not prove reliable. However, stimulus selection for those color names turned out to be difficult due to linguistic restrictions. For example, color names such as 粉 (fen3, “pink”) do not have homophones, and other colors may be named in multiple ways; the color brown, for instance, can be named 棕 (zong1) or 褐 (he4). In addition, our study did not address consonant and vowel units separately in examining the activation of syllable segmental information. Tong et al. (2007) suggested that, within the syllable segment, tone is attached more to the vowel in constraining lexical access.

To conclude, the present study has shown evidence that syllable segments and tones independently contribute to lexical access in reading Chinese, although segmental information may play a more primary role than tonal information. A revised theoretical model is proposed to take into account both segmental and suprasegmental information in visual word recognition.

Notes

Author note

The research reported here was supported by a University Graduate Fellowship to the first author. The second author was supported by an NSF IGERT Grant (No. DGE-0801465) awarded to the University of Maryland.

References

  1. Alario, F. X., De Cara, B., & Ziegler, J. C. (2007). Automatic activation of phonology in silent reading is parallel: Evidence from beginning and skilled readers. Journal of Experimental Child Psychology, 97, 205–219. doi: 10.1016/j.jecp.2007.02.001
  2. Ashby, J. (2006). Prosody in skilled silent reading: Evidence from eye movements. Journal of Research in Reading, 29, 318–333.
  3. Ashby, J., & Clifton, C., Jr. (2005). The prosodic property of lexical stress affects eye movements during silent reading. Cognition, 96, B89–B100.
  4. Berent, I., & Perfetti, C. A. (1995). A rose is a REEZ: The two-cycles model of phonology assembly in reading English. Psychological Review, 102, 146–184. doi: 10.1037/0033-295X.102.1.146
  5. Braun, M., Hutzler, F., Ziegler, J. C., Dambacher, M., & Jacobs, A. M. (2009). Pseudohomophone effects provide evidence of early lexico-phonological processing in visual word recognition. Human Brain Mapping, 30, 1977–1989. doi: 10.1002/hbm.20643
  6. Chen, M. Y. (2000). Tone sandhi: Patterns across Chinese dialects. Cambridge, UK: Cambridge University Press.
  7. Chen, H.-C., Vaid, J., & Wu, J.-T. (2009). Homophone density and phonological frequency in Chinese word recognition. Language & Cognitive Processes, 24, 967–982.
  8. Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and nonnative listeners. Language and Speech, 45, 207–228.
  9. Crystal, D. (2008). A dictionary of linguistics and phonetics (6th ed.). Malden, MA: Wiley-Blackwell.
  10. Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201–220.
  11. Cutler, A., Dahan, D., & van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141–201.
  12. Da, J. (2004). A corpus-based study of character and bigram frequencies in Chinese e-texts and its implications for Chinese language instruction. In Z. Pu, T. Xie, & J. Xu (Eds.), Studies on the theory and methodology of the digitalized Chinese teaching to foreigners: Proceedings of the Fourth International Conference on New Technologies in Teaching and Learning Chinese (pp. 501–511). Beijing, China: Tsinghua University Press.
  13. Dictionary Editing Room of the Language Institute, Chinese Academy of Social Sciences. (2005). Modern Chinese dictionary (现代汉语词典). Beijing, China: Commercial Press.
  14. Fodor, J. D. (1998). Learning to parse? Journal of Psycholinguistic Research, 27, 285–319.
  15. Forster, K. I., & Forster, J. C. (2003). DMDX: A Windows display program with millisecond accuracy. Behavior Research Methods, Instruments, & Computers, 35, 116–124. doi: 10.3758/BF03195503
  16. Frost, R. (1998). Toward a strong phonological theory of visual word recognition: True issues and false trails. Psychological Bulletin, 123, 71–99. doi: 10.1037/0033-2909.123.1.71
  17. Harm, M. W., & Seidenberg, M. S. (2001). Are there orthographic impairments in phonological dyslexia? Cognitive Neuropsychology, 18, 71–92.
  18. Koster, M., & Cutler, A. (1997). Segmental and suprasegmental contributions to spoken-word recognition in Dutch. In Proceedings of the Fifth European Conference on Speech Communication and Technology (pp. 2167–2170). Rhodes, Greece.
  19. Lee, C.-Y. (2007). Does horse activate mother? Processing lexical tone in form priming. Language and Speech, 50, 101–123.
  20. Li, C. N., & Thompson, S. A. (1989). Mandarin Chinese: A functional reference grammar. Berkeley, CA: University of California Press.
  21. Luo, C., Johnson, R., & Gallo, D. (1998). Automatic activation of phonological information in reading: Evidence from the semantic relatedness decision task. Memory & Cognition, 26, 833–843.
  22. Malins, J. G., & Joanisse, M. F. (2010). The roles of tonal and segmental information in Mandarin spoken word recognition: An eyetracking study. Journal of Memory and Language, 62, 407–420. doi: 10.1016/j.jml.2010.02.004
  23. Perfetti, C. A., & Tan, L. H. (1998). The time course of graphic, phonological, and semantic activation in Chinese character identification. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 101–118. doi: 10.1037/0278-7393.24.1.101
  24. Perfetti, C. A., & Tan, L. H. (1999). The constituency model of Chinese character identification. In A. Inhoff, J. Wang, & I. Chen (Eds.), Reading Chinese script: A cognitive analysis (pp. 115–134). Hillsdale, NJ: Erlbaum.
  25. Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103, 56–115. doi: 10.1037/0033-295X.103.1.56
  26. Slowiaczek, L. M. (1990). Effects of lexical stress in auditory word recognition. Language and Speech, 33, 47–68.
  27. Soto-Faraco, S., Sebastián-Gallés, N., & Cutler, A. (2001). Segmental and suprasegmental mismatch in lexical access. Journal of Memory and Language, 45, 412–432. doi: 10.1006/jmla.2000.2783
  28. Spinks, J. A., Liu, Y., Perfetti, C. A., & Tan, L. H. (2000). Reading Chinese characters for meaning: The role of phonological information. Cognition, 76, B1–B11.
  29. Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643–662.
  30. Taft, M., & Chen, H.-C. (1992). Judging homophony in Chinese: The influence of tones. In H.-C. Chen & O. J. L. Tzeng (Eds.), Language processing in Chinese (pp. 151–172). Amsterdam, The Netherlands: Elsevier.
  31. Tagliapietra, L., & Tabossi, P. (2005). Lexical stress effects in Italian spoken word recognition. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 2140–2144). Hillsdale, NJ: Erlbaum.
  32. Tan, L. H., & Perfetti, C. A. (1997). Visual Chinese character recognition: Does phonological information mediate access to meaning? Journal of Memory and Language, 37, 41–57.
  33. Tong, Y. X., Francis, A. L., & Gandour, J. T. (2007). Processing dependencies between segmental and suprasegmental features in Mandarin Chinese. Language & Cognitive Processes, 23, 689–708.
  34. Wong, K. F., & Chen, H. C. (1999). Orthographic and phonological processing in reading Chinese text: Evidence from eye fixations. Language & Cognitive Processes, 14, 461–480.
  35. Xu, Y., Pollatsek, A., & Potter, M. C. (1999). The activation of phonology during silent Chinese word reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 838–857. doi: 10.1037/0278-7393.25.4.838
  36. Zhou, X., & Marslen-Wilson, W. D. (2009). Pseudohomophone effects in processing Chinese compound words. Language & Cognitive Processes, 24, 1009–1038.
  37. Zhou, X., Shu, H., Bi, Y., & Shi, D. (1999). Is there phonologically mediated access to lexical semantics in reading Chinese? In J. Wang, A. W. Inhoff, & H.-C. Chen (Eds.), Reading Chinese script: A cognitive analysis (pp. 135–172). Mahwah, NJ: Erlbaum.

Copyright information

© Psychonomic Society, Inc. 2013

Authors and Affiliations

  • Chuchu Li (1)
  • Candise Y. Lin (1)
  • Min Wang (1)
  • Nan Jiang (2)

  1. Department of Human Development and Quantitative Methodology, University of Maryland, College Park, USA
  2. Second Language Acquisition Program, University of Maryland, College Park, USA
