Emoji-based semantic representations for abstract and concrete concepts

  • Research Article · Cognitive Processing

Abstract

An increasingly large body of converging evidence supports the idea that the semantic system is distributed across brain areas and that the information encoded therein is multimodal. Within this framework, feature norms are typically used to operationalize the various components of meaning that contribute to defining the distributed nature of conceptual representations. However, such features are typically collected as verbal strings, elicited from participants in experimental settings. If the semantic system is not only distributed (across features) but also multimodal, a cognitively sound theory of semantic representations should take into account the different modalities in which feature-based representations are generated, because not all the relevant semantic information may be easily verbalized into classic feature norms, and different types of concepts (e.g., abstract vs. concrete concepts) may consist of different configurations of non-verbal features. In this paper we acknowledge the multimodal nature of conceptual representations and propose a novel way of collecting non-verbal semantic features. In a crowdsourcing task we asked participants to use emoji to provide semantic representations for a sample of 300 English nouns referring to abstract and concrete concepts; the emoji thereby serve as (machine-readable) visual features. In a formal content analysis with multiple annotators we then classified the cognitive strategies used by the participants to represent conceptual content through emoji. The main results of our analyses show that abstract (vs. concrete) concepts are characterized by representations that: (1) consist of a larger number of emoji; (2) include more face emoji (expressing emotions); (3) are less stable and less shared among users; (4) rely on representation strategies based on figurative operations (e.g., metaphors) and on strategies that exploit linguistic information (e.g., rebus); (5) correlate less well with the semantic representations emerging from classic features listed through verbal strings.
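
As a concrete illustration of the kind of comparison referred to in point (5), the sketch below measures how close an emoji-based representation and a verbal feature-norm representation of the same concept are, by averaging pre-trained embeddings and computing a cosine similarity. This is a minimal, hypothetical sketch rather than the analysis pipeline used in the paper: the example concept, emoji sequence, feature list and file paths are illustrative assumptions (the emoji2vec vectors mentioned in the notes below were trained in the same 300-dimensional space as the Google News word2vec vectors, which is what makes such a comparison possible).

```python
# Hypothetical comparison of an emoji-based vs. a feature-norm-based representation.
# Not the authors' pipeline; file paths, the concept and its features are made up.
import numpy as np
from gensim.models import KeyedVectors

# Assumed local copies of pre-trained vectors (emoji2vec: github.com/uclmr/emoji2vec).
emoji_vecs = KeyedVectors.load_word2vec_format("emoji2vec.bin", binary=True)
word_vecs = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",
                                              binary=True)

def average_vector(items, vecs):
    """Average the embeddings of the items that are present in the vector space."""
    found = [vecs[item] for item in items if item in vecs]
    return np.mean(found, axis=0) if found else None

# A hypothetical crowdsourced emoji sequence and verbal feature list for "freedom".
emoji_repr = ["🕊", "🔓", "🏃"]
verbal_features = ["liberty", "independence", "choice", "rights"]

e = average_vector(emoji_repr, emoji_vecs)
v = average_vector(verbal_features, word_vecs)

if e is not None and v is not None:
    cosine = float(np.dot(e, v) / (np.linalg.norm(e) * np.linalg.norm(v)))
    print(f"cosine(emoji representation, verbal features) = {cosine:.3f}")
```

Repeating this per concept and then comparing the resulting similarity scores for abstract versus concrete items is one way such agreement between modalities could be quantified.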

Notes

  1. Emogi.com’s Emoji Report 2015, retrieved 2018-11-02 from www.emogi.com.

  2. Unicode-Consortium. Draft Emoji Candidates: http://www.unicode.org/emoji/future/emoji-candidates.html.

  3. Unicode-Consortium. Submitting Emoji Proposals: https://www.unicode.org/emoji/proposals.html.

  4. The pre-trained emoji embeddings are provided at: github.com/uclmr/emoji2vec.

  5. Emoji are defined by the Unicode Standard, which implements them as characters via code points. Certain modifiers change the skin tone or gender of an emoji, or the nationality of a flag emoji. These modifiers are relevant semantic units, but they cannot be displayed by themselves (see the sketch after these notes).

  6. As this emoji is a recent addition to the Unicode Standard, not all systems support its rendering. Different renderings can be found at: https://emojipedia.org/bar-of-soap/.
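
To make note 5 concrete, here is a minimal sketch (in Python, purely illustrative and not part of the study's materials) of how skin-tone modifiers and flag emoji are built from Unicode code points that carry meaning but are not standalone glyphs:

```python
# Note 5, illustrated: what looks like a single emoji on screen may in fact be a
# sequence of Unicode code points, some of which cannot be displayed on their own.

base = "\U0001F44D"   # THUMBS UP SIGN (U+1F44D)
tone = "\U0001F3FD"   # EMOJI MODIFIER FITZPATRICK TYPE-4, a skin-tone modifier
thumbs_up_medium = base + tone
print(thumbs_up_medium, [hex(ord(c)) for c in thumbs_up_medium])
# -> 👍🏽 ['0x1f44d', '0x1f3fd']   (the modifier never appears as a glyph by itself)

# Flag emoji are pairs of regional indicator symbols rather than single characters.
ireland = "\U0001F1EE\U0001F1EA"   # REGIONAL INDICATOR SYMBOLS for I and E
print(ireland, [hex(ord(c)) for c in ireland])
# -> 🇮🇪 ['0x1f1ee', '0x1f1ea']
```

Any pipeline that counts or classifies emoji in crowdsourced sequences therefore has to segment such multi-code-point sequences consistently, since a naive per-character count would split them apart.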

Acknowledgements

The research has been sponsored by the Creative Language Systems Group (afflatus.ucd.ie). The authors would like to acknowledge the financial and scientific support of Prof. Tony Veale, director of the Creative Language Systems Group, for the realization of this project. This article is the result of the close collaboration of both authors, who contributed to it equally. In general, Philipp Wicke was responsible for the data collection; crowdsourcing was conducted by the Creative Language Systems Group under the supervision of Prof. Tony Veale; the analysis was performed by both authors, and the results were discussed together. Both authors contributed equally to writing the paper and edited each other's text. For the specific concerns of the Italian academic attribution system, Marianna Bolognesi is responsible for writing Sections 1, 2, 3.3, 3.7, 4.5, 5 and 6; Philipp Wicke is responsible for writing Sections 2.1, 3.1, 3.2, 3.4, 3.5, 3.6, 4.1, 4.2, 4.3 and 4.4.

Author information

Corresponding author

Correspondence to Marianna Bolognesi.

Ethics declarations

Ethical approval

The crowdsourcing study in which emoji sequences were collected on Figure Eight has been granted exemption from requiring ethics approval by the Ethics Committee of the first author's home university, University College Dublin, Ireland, under protocol number UCD HREC-LS, Ref.-No.: LS-E-19-7-Wicke-Veale. The study was granted exemption because it consisted of an anonymous survey that did not involve identifiable data or any vulnerable groups. All participants took part in the study voluntarily, thereby agreeing to the terms and conditions of the Figure Eight platform. All procedures performed were in accordance with the ethical standards of the institutional and/or national research committee (UCD HREC-LS, Ref.-No.: LS-E-19-7-Wicke-Veale) and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Special Topic on ‘Eliciting Semantic Properties: Methods and Applications’, guest-edited by Enrico Canessa, Sergio Chaigneau, Barry Devereux, and Alessandro Lenci. Guest editor: Alessandro Lenci (University of Pisa).

Reviewers: Gianluca E. Lebani (Ca’ Foscari University of Venice), Lucia Passaro (University of Pisa).

Appendix

Instructions used for the Crowdsourcing task:

In this job you are asked to provide an Emoji selection for a given word. For this you will be directed to a website that allows you to select Emoji and copy and paste them into this job. The first question will ask you whether you are a native English speaker. You only need to answer this question once and can skip it thereafter. For each task you will be presented with the word you need to describe. Here is an example: you have to represent the word in the red box (“Time Bomb”) with a sequence of Emoji. The second step is to click on the link that directs you to the Emoji keyboard. The Emoji keyboard presents you with a variety of Emoji to pick from. Scroll down to see the entire range of Emoji categories. You can choose any of the Emoji available on this website. Once you have found an appropriate Emoji, click on it; this adds it to the text box at the bottom. After you have selected the Emoji you need, click on the COPY button at the end of the page and return to this task. Back in the task, paste the sequence to finish the task.

Cite this article

Wicke, P., Bolognesi, M. Emoji-based semantic representations for abstract and concrete concepts. Cogn Process 21, 615–635 (2020). https://doi.org/10.1007/s10339-020-00971-x
