
Not perceptually equivalent in semantic emotion across visual and auditory modalities: cross-modal affective norms of two-character Chinese emotion-label words

Published in: Current Psychology

Abstract

In recent years, multi-sensory studies have increasingly revealed modality-specific mechanisms underlying lexical processing, yet validated lexical databases with reliable affective norms in both visual and auditory modalities remain scarce, especially in Chinese. This study therefore aims to establish a cross-modal affective database of 350 two-character Chinese emotion-label words and to investigate how neutral speech prosody changes semantic emotion perception in Chinese. Affective ratings on six variables (familiarity, valence, arousal, dominance, intensity, and emotion type) were collected from 364 participants in both visual and auditory modalities, and the reliability and validity of the ratings were rigorously examined. Statistical analyses revealed U-shaped relationships for the valence-arousal and valence-dominance pairwise correlations within each modality, and identified an influence of neutral prosody on access to semantic emotion, indicating that lexical emotion perception cannot be directly equated across the two modalities. Specifically, the auditory modality imposed a neutrality convergence on valence perception and decreased ratings of familiarity and dominance, but did not change the intensity parameter. This study is among the first to introduce a multi-modal perspective into Chinese lexical database construction; it not only supplements existing research tools for selecting grammatically homogeneous Chinese emotion-label words as experimental stimuli, but also motivates further investigation into how speech prosody influences lexical semantic perception.


Data availability

All data generated or analysed during this study are included in this published article and its supplementary information files.

Acknowledgements

We thank the research assistants Jingyi Wu, Jiaqi Zhang, Luyao Jiang, Zhuorui Gao, Leqi Zhou, Yi Lin, Minyue Zhang, and Yu Chen for their support and participation. We are also grateful for all suggestions provided by experts throughout our experiments.

Funding

This work was supported by a grant from the Major Program of the National Social Science Foundation of China (Grant number: 18ZDA293) and by research funding from SONOVA.

Author information

Contributions

Conceptualization, Data Curation, Formal Analysis, Investigation, Methodology, Visualization, Writing - Original Draft [Enze Tang]. Investigation, Material Preparation [Xinran Fan, Ruomei Fang]. Investigation [Yuhan Zhang, Jie Gong]. Conceptualization [Jingjing Guan]. Conceptualization, Funding Acquisition, Supervision, Writing - Review & Editing [Hongwei Ding].

Corresponding author

Correspondence to Hongwei Ding.

Ethics declarations

Ethics approval

This research was approved by the Ethics Committee of School of Foreign Languages, Shanghai Jiao Tong University (Ethics approval number: 2006S12002).

Informed consent

Informed consent was obtained from all participants before the formal experiment.

Competing interests

Dr. Jingjing Guan is employed by the company Sonova China, Shanghai. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Below are the links to the electronic supplementary material.

ESM 1

(PNG 191 KB)

ESM 2

(PNG 179 KB)

ESM 3

(DOCX 79.3 KB)

ESM 4

(XLSX 239 KB)

ESM 5

(XLSX 184 KB)

ESM 6

(DOCX 42.7 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Cite this article

Tang, E., Fan, X., Fang, R. et al. Not perceptually equivalent in semantic emotion across visual and auditory modalities: cross-modal affective norms of two-character Chinese emotion-label words. Curr Psychol 43, 15308–15327 (2024). https://doi.org/10.1007/s12144-023-05476-2
