Compensatory cross-modal effects of sentence context on visual word recognition in adults

Reading and Writing

Abstract

Reading involves mapping combinations of a learned visual code (letters) onto meaning. Previous studies have shown that when visual word recognition is challenged by visual degradation, one way to mitigate these negative effects is to provide "top-down" contextual support through a congruent written sentence context. Crowding is a naturally occurring visual phenomenon that impairs object recognition and also affects the recognition of written stimuli during reading. Thus, access to a supporting semantic context via a written text is vulnerable to the detrimental impact of crowding on letters and words. Here, we suggest that an auditory sentence context may provide an alternative source of semantic information that is not influenced by crowding, thus providing "top-down" support cross-modally. The goal of the current study was to investigate whether adult readers can cross-modally compensate for crowding in visual word recognition using an auditory sentence context. The results show a significant cross-modal interaction between the congruency of the auditory sentence context and visual crowding, suggesting that interactions can occur across multiple levels of processing and across different modalities to support reading processes. These findings highlight the need for reading models to specify in greater detail how top-down, cross-modal, and interactive mechanisms may allow readers to compensate for deficiencies at early stages of visual processing.

Notes

  1. Two additional reading tasks were carried out using two short excerpts from newspaper articles. However, because the texts were poorly matched on several factors and one text also contained grammatical errors, we do not report any analyses involving these texts.

  2. One subject had an accuracy rate of less than 60% in the incongruent sentence condition for words in the LDT-auditory context.

Acknowledgements

This research was supported by the Basque Government through the BERC 2018-2021 program; the Spanish State Research Agency through the BCBL Severo Ochoa excellence accreditation (SEV-2015-0490); a "Programa Estatal de Promoción del Talento y su Empleabilidad en I+D+i" fellowship (reference number PRE2018-083945) to C.C.; funding from the European Union's Horizon 2020 Marie Sklodowska-Curie grant agreement No-79954 to S.G.; and grants from the Spanish Ministry of Science and Innovation, Ramon y Cajal RYC-2015-1735 and Plan Nacional RTI2018-096242-B-I0, to M.L.

Author information

Corresponding author

Correspondence to Catherine Clark.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest regarding the authorship and publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Clark, C., Guediche, S. & Lallier, M. Compensatory cross-modal effects of sentence context on visual word recognition in adults. Read Writ 34, 2011–2029 (2021). https://doi.org/10.1007/s11145-021-10132-x

