
Behavior Research Methods, Volume 50, Issue 1, pp. 392–405

Lisbon Emoji and Emoticon Database (LEED): Norms for emoji and emoticons in seven evaluative dimensions

  • David Rodrigues
  • Marília Prada
  • Rui Gaspar
  • Margarida V. Garrido
  • Diniz Lopes

Abstract

The use of emoticons and emoji is increasingly popular across a variety of new platforms of online communication. They have also become popular as stimulus materials in scientific research. However, the assumption that emoji/emoticon users’ interpretations always correspond to the developers’/researchers’ intended meanings might be misleading. This article presents subjective norms of emoji and emoticons provided by everyday users. The Lisbon Emoji and Emoticon Database (LEED) comprises 238 stimuli: 85 emoticons and 153 emoji (collected from iOS, Android, Facebook, and Emojipedia). The sample included 505 Portuguese participants recruited online. Each participant evaluated a random subset of 20 stimuli for seven dimensions: aesthetic appeal, familiarity, visual complexity, concreteness, valence, arousal, and meaningfulness. Participants were additionally asked to attribute a meaning to each stimulus. The norms obtained include quantitative descriptive results (means, standard deviations, and confidence intervals) and a meaning analysis for each stimulus. We also examined the correlations between the dimensions and tested for differences between emoticons and emoji, as well as between the two major operating systems—Android and iOS. The LEED constitutes a readily available normative database (available at www.osf.io/nua4x) with potential applications to different research domains.

Keywords

LEED · Emoticons · Emoji · Aesthetic appeal · Familiarity · Visual complexity · Concreteness · Valence · Arousal · Meaningfulness · Meaning analysis · Normative ratings · Android · iOS · Facebook · ICTs

Human communication involves the transmission of abstract and concrete information using both verbal and nonverbal symbols (for a review, see Richmond & McCroskey, 2009). In the last few decades, and particularly since the beginning of the 21st century, innovations in technology have dramatically changed the ways that people communicate with each other. Increasing worldwide Internet usage and smartphone ownership, including in emerging economies (Pew Research Center, 2016), have introduced different forms of written communication mediated by information and communication technologies (ICTs). These include instant messaging and e-mail applications based on ICT-device operating systems (Android, iOS) or messaging services (e.g., Gmail, WhatsApp), VoIP system providers (e.g., Skype), social networking sites (e.g., Facebook), and social media platforms (e.g., Twitter).

Some authors have suggested that these forms of communication filter out social, affective, and nonverbal/visual cues and can lead to less effective communication outcomes (e.g., Walther, 1996; Walther & D’Addario, 2001). However, other studies have shown that the absence of such cues does not necessarily render communication less effective. Instead, this absence may promote the implementation of uncertainty reduction strategies to compensate for it (Antheunis, Valkenburg, & Peter, 2007, 2010). In particular, the use of paralanguage cues in written communication has been identified as a strategy to overcome the absence of certain cues, because such cues convey meaning (e.g., Lea & Spears, 1992). These cues include typographical marks (i.e., letters and numbers) and ideograms (e.g., graphic symbols), identified as “typographic or text-based emoticons” and “graphic emoticons,” respectively (e.g., Huang, Yen, & Zhang, 2008; Wang, Zhao, Qiu, & Zhu, 2014). In the late 1990s, the latter emerged as an independent strand of meaning and emotional expression through ideograms and pictographs that could be used across ICT platforms. These came to be known as emoji, created with the goal of facilitating mobile communication (Negishi, 2014; Nelson, Tossell, & Kortum, 2015).

In addition to their massive use in daily written communication, both emoticons and emoji are being used increasingly in applied domains, such as marketing and health, and as stimulus materials in scientific research (e.g., Davidov, Tsur, & Rappoport, 2010; Hogenboom et al., 2013; Skiba, 2016; Thelwall, Buckley, & Paltoglou, 2012; Thelwall, Buckley, Paltoglou, Cai, & Kappas, 2010; Vashisht & Thakur, 2014; Wang & Castanon, 2015). However, their selection, coding, and analysis may be somewhat biased if we assume a direct correspondence between users’ interpretations of emoji/emoticons and their intended meanings (e.g., that a sad face emoji is negative and will be perceived as such).

In this study we report evaluations of emoticons and emoji provided by ICT users. Specifically, we present the Lisbon Emoji and Emoticon Database (LEED), and provide the first set of normative evaluations for 238 stimuli, comprising 85 emoticons and 153 emoji, based on seven evaluative dimensions: aesthetic appeal, familiarity, visual complexity, concreteness, valence, arousal, and meaningfulness. In addition, we examined the meaning attributed to each stimulus. It is our contention that the LEED contributes to the literature by proposing subjective norms for emoji and emoticons and guaranteeing the quality of the codebooks used in both research and practice in a multitude of areas.

Emoticons and emoji in ICT-mediated communication

Emoticons and emoji have been considered a new medium for sharing daily narratives, emotions, and attitudes with others through ICTs (for a review, see Gülşen, 2016). Emoticons (from emotion + icon) are symbols created using punctuation, numbers, or letters with the intention of transmitting feelings, emotional states, or information in the absence of words, or of complementing a written message (Dresner & Herring, 2010; Krohn, 2004; Thompson & Filik, 2016). The first known emoticons, :-) and :-(, were proposed in 1982 and are attributed to Scott E. Fahlman, a professor at Carnegie Mellon’s School of Computer Science, who created them in an attempt to differentiate serious posts from joke remarks on a bulletin board.1 Since then, emoticons have hugely increased in number, and the current list of emoticons is extensive, running from simple symbols to highly complex ones (e.g., www.netlingo.com/smileys). Emoticons include representations of facial expressions, typically sideways [Western style; e.g., ;)], as well as representations of abstract concepts and emotions/feelings (e.g., <3). Other emoticons are represented in a right-way-up position [Eastern style; e.g., (*^.^*)].

Emoji (from the Japanese e [picture] + moji [character]) are graphic symbols with predefined names/IDs and codes (Unicode), which include not only representations of facial expressions, abstract concepts, and emotions/feelings, but also animals, plants, activities, gestures/body parts, and objects. Emoji are presumed to have been first proposed by Shigetaka Kurita in the late 1990s, who created them while working at a Japanese mobile phone operator to facilitate mobile communication (Negishi, 2014). Currently, more than 2,000 emoji are supported by different platforms, and they are constantly evolving and becoming more diverse (http://emojipedia.org). For instance, new Unicode releases (e.g., Unicode 9.0, released in 2016) include emoji that represent different social groups, varying, for example, in ethnicity and age.

There are major differences between emoji and emoticons (Ganster, Eimler, & Krämer, 2012). As compared to emoticons, emoji are colored and are not rotated by 90°, and in those representing facial expressions, the face is often delimited by a circle and may include multiple facial cues.

However, both emoticons and emoji are increasingly popular in our everyday life. They are a constant presence in the ways we communicate in the virtual world (e.g., social media, e-mail, and text messages; Gülşen, 2016). Emoji are also being included in everyday products (e.g., toys, home decoration items, or even clothes). Moreover, emoji have been integrated into the ways artists communicate with their audiences (e.g., Katy Perry’s “Roar” music video) and the ways brands connect with consumers (for a review, see Wohl, 2016). For instance, brands have included emoji in advertising campaigns (e.g., McDonald’s depicted people with emoji as their heads; Beltrone, 2015) and developed new sets of brand-related emoji (e.g., Dove launched a set of curly-haired emoji; Neff, 2015). In another example of emoji popularity, Oxford Dictionaries named the “face with tears of joy” emoji its “word of the year” for 2015. On Twitter alone, this emoji registered 6.6 billion uses that year (@TwitterData).

Scientific research about emoticons and emoji is also increasing. Some studies have examined naturalistic data, such as public messages posted on social media platforms (e.g., Twitter, Google forums, or Facebook) to understand and characterize emoticon/emoji usage. For example, Novak, Smailović, Sluban, and Mozetič (2015) proposed the emoji sentiment ranking, an index of positivity based on the frequency of each emoji used in negative, neutral, and positive tweets. Also, Ljubešić and Fišer (2016) used tweets as their dataset to investigate how popular emoji are on Twitter, which countries exhibit greater emoji usage, and the popularity of specific emoji. Similarly, Tossell and colleagues (2012) conducted a longitudinal study monitoring the use of emoticons in text messages. This type of descriptive analysis can also be conducted in specific domains. For example, Vidal, Ares, and Jaeger (2016) examined tweets about eating situations and how people used emoticons/emoji to spontaneously express food-related emotional experiences. Other studies used similar naturalistic data to monitor a given event (e.g., public health information; Paul & Dredze, 2011) or to examine event-centered reactions, opinions, feelings, evaluations, or emotions (e.g., elections; Burnap, Gibson, Sloan, Southern, & Williams, 2016). Even though these studies have typically relied on emotional word lexicons, more recently researchers have drawn attention to the need to extend these lexicons to include emoticons and emoji (B. Liu, 2012; Pang & Lee, 2008).

Research focusing on emoticon/emoji usage and functions has suggested that these stimuli serve two key functions: to portray emotional or social intent and to reduce potential discourse ambiguity (for a review, see Kaye, Wall, & Malone, 2016). Skovholt, Grønning, and Kankaanranta (2014) showed that such stimuli also function as contextualization cues (e.g., markers of positive attitudes that facilitate message interpretation) and as organizers of social relationships in written interaction (e.g., reducing perceived interpersonal distance by decreasing impersonality/formality). As examples of these functions, Lo (2008) showed that adding emoticons to online messages improved receivers’ understanding of the intensity and valence of the emotions (sad vs. happy) and attitudes (like vs. dislike) expressed by the sender. Likewise, Ganster and colleagues (2012) showed that using a smiling (vs. a frowning) emoji/emoticon influences how a message is evaluated (i.e., more positive and humorous), how the sender is perceived (i.e., more extroverted), and how the receiver feels (i.e., a more positive mood). Derks, Bos, and von Grumbkow (2008) further showed that emoticons strengthen the intensity of a message (e.g., a positive message with a smile emoticon is rated more positively than the same positive message without the emoticon). However, in the case of incongruence between the valences of the message and the emoticon (e.g., a positive message accompanied by a frown emoticon), a message’s interpretation relies more on the text content.

Another line of research has adopted experimental methodologies to examine how the presentation of emoticons/emoji influences different phenomena. For example, Wang and colleagues (2014) focused on how adding positive and negative emoji to messages about workplace performance affects the acceptance of negative feedback. Likewise, Tung and Deng (2007) tested how the presentation of emoji in an e-learning environment affected children’s motivation. Furthermore, Siegel and colleagues (2015) investigated whether including emoji on food packages influenced children’s meal choices. Emoji and/or emoticons have also been used as experimental materials in studies focusing on affective processing (e.g., Han, Yoo, Kim, McMahon, & Renshaw, 2014; Kerkhof et al., 2009; Yuasa, Saito, & Mukawa, 2011). For example, positive and negative emoji have been used as primes to induce valence, influencing responses (event-related potentials) to valenced target words (e.g., Comesaña et al., 2013). Research has shown that novel target words primed with positive emoji are more likely to be erroneously categorized as familiar (e.g., Garcia-Marques, Mackie, Claypool, & Garcia-Marques, 2004). Finally, emoji/emoticons have been used for research method development, for example, as anchors in rating scales assessing current emotional states (e.g., Moore, Steiner, & Conlan, 2013), emotional associations with specific stimuli (e.g., food names; Jaeger, Vidal, Kam, & Ares, 2017), well-being (Fane, MacDougall, Jovanovic, Redmond, & Gibbs, 2016), and pain (e.g., Chambers & Craig, 1998).

Methodologies and tools for emoticons/emoji analysis

The selection, coding, and analysis of emoticons and emoji as direct indicators of the emotional meanings conveyed by messages can follow either human-based (e.g., Park, Baek, & Cha, 2014; Vidal et al., 2016) or computer-based (Davidov et al., 2010; Hogenboom et al., 2013; Vashisht & Thakur, 2014; H. Wang & Castanon, 2015) procedures. Computer-based procedures rely on machine-learning algorithms and semantic lexicons and are thought to provide a more objective analysis of emoticon/emoji usage. Nevertheless, both human-based and computer-based procedures may be prone to bias, because they rely exclusively on the evaluations of, and the meanings attributed by, researchers/analysts, without taking into consideration the ways the stimuli are perceived by users. One area in which this has been particularly worrisome is the field of computer-based sentiment analysis (Thelwall et al., 2012; Thelwall et al., 2010), which allows for detecting and analyzing sentiment/affective reactions on the basis of semantic analysis of written text. Such analyses rely on codebooks developed by researchers from the commonly accepted designations/feelings portrayed by emoticons and emoji (e.g., emoticon-smoothed language models; Liu, Li, & Guo, 2012; the SentiStrength coding manual for sentiment in texts, available at http://sentistrength.wlv.ac.uk; e.g., Thelwall et al., 2012; Thelwall et al., 2010).

The emoji sentiment ranking (Novak et al., 2015) constitutes an attempt to overcome some of these limitations. However, this index focuses exclusively on the valence dimension and does not take into account other relevant information, such as the level of arousal elicited by a given emoji or the meaning attributed to it. Therefore, standardized procedures for the classification of emoticons/emoji are still missing.

In our view, this state of affairs raises two potential problems. First, stimulus selection, coding, and analysis may be prone to biases stemming from researchers’ own evaluations of the stimuli (e.g., analyses based on ad hoc emotionality categorization made by two coders; Park et al., 2014). Second, there may be a biased assumption that emoticon/emoji users’ interpretations necessarily correspond to the meanings intended by developers/researchers. Because emoji/emoticons are not usually labeled when presented (with the exception of the Facebook emoji set), they are open to interpretation. Indeed, users can select an emoji on the basis of superficial visual features, which can lead to misinterpretations of its meaning and intent. For example, one may wish to express sadness by selecting a tearful emoji, and mistakenly choose the “face with tears of joy” instead of the “face with tears of sadness.” Additionally, the same emoticon/emoji can be used to represent a variety of meanings. For instance, a smiley face may be used to express happiness, but it may also be used to express agreement with or liking of something/someone, one’s own physical or mental well-being, empathy, comprehension, or other meanings. Moreover, the same emoticon/emoji can be interpreted differently according to the communication context. For example, emoticons such as :p and ;) are typically described as positive, but they can also be used as markers of irony (Carvalho, Sarmento, Silva, & de Oliveira, 2009) or sarcasm (Thompson & Filik, 2016). Finally, emoji with the same intended meaning may have distinct visual representations across operating systems, potentially leading to different interpretations and evaluations (Miller et al., 2016). To sum up, as with other types of visual stimuli, emoji/emoticons are prone to subjectivity in their evaluation and interpretation, which supports the need to develop a normative database.

Normative data are abundant in the literature (for reviews, see Prada, Rodrigues, Silva, & Garrido, 2015; Proctor & Vu, 1999). These validated databases typically include stimuli such as words (e.g., Bradley & Lang, 1999a), sounds (e.g., Bradley & Lang, 1999b), or images depicting a broad range of contents (e.g., Dan-Glauser & Scherer, 2011; Lang, Bradley, & Cuthbert, 2008). Regarding the latter type of stimuli, some databases include, for example, visual materials such as simple line drawings (e.g., Bonin, Peereman, Malardier, Méot, & Chalard, 2003; Snodgrass & Vanderwart, 1980) or symbols (e.g., McDougall, Curry, & Bruijn, 1999; Prada et al., 2015). Other databases are theme-focused and include specific contents, such as food (e.g., Blechert, Meule, Busch, & Ohla, 2014; Charbonnier, van Meer, van der Laan, Viergever, & Smeets, 2016) or human faces (e.g., Ebner, Riediger, & Lindenberger, 2010; Garrido et al., 2016; Mendonça, Garrido, & Semin, 2016).

The absence of published normative data on visual stimuli such as emoticons and emoji has two important consequences. First, it implies that researchers must make the additional effort of pretesting materials to meet a study’s demands. For example, prior to their affective-priming study, Comesaña and colleagues (2013) had to conduct two extensive pretests in which 180 participants evaluated the valence, arousal, and meaning associated with each emoji. Second, the comparison of results between studies can be challenging, because the stimuli are often categorized ad hoc. For example, in their study on tweets about food, Vidal and colleagues (2016) had two coders categorize the emoji and emoticons as negative, neutral, or positive by considering their intended meaning or available description. Park and colleagues (2014) also had two coders categorize emoticons on three levels, but they considered a different dimension (i.e., emotionality: sad, neutral, and happy) and distinct criteria (the emotions conveyed by the shapes of the eyes and mouth).

In the present article, we present normative ratings of a set of emoticons and emoji from the two most used operating systems, Android and iOS. We also included “reaction” emoji from the most used social networking platform, Facebook. Each stimulus was evaluated with regard to its aesthetic appeal, familiarity, visual complexity, clarity, the valence and arousal of the meaning conveyed, and meaningfulness. Additionally, we assessed the subjective meanings attributed by participants to each stimulus. We selected this set of seven evaluative dimensions on the basis of previous norms for other types of visual stimuli. Specifically, we followed the methodology adopted in a recent validation study (for a detailed review of the dimensions of interest, see Prada et al., 2015), except for the addition of the clarity dimension, which has emerged as relevant for the evaluation of facial expressions (for a review, see Garrido et al., 2016).

Method

Participants

A sample of 505 Portuguese individuals (71.7% women; M age = 31.10 years, SD = 12.70) volunteered to participate in a Web survey. These individuals were recruited online through Facebook (the university institutional page and online-studies advertisement pages) and mailing services (student mailing lists). All participants were native Portuguese speakers or had lived in Portugal for the last 5 years. The sample comprised mostly university students (46.7%) and active workers (43.3%), with at least a bachelor’s degree (46.0%). Most participants indicated that Android/Google (67.5%) or iOS (26.3%) was their operating system.

Stimulus set

The LEED includes 238 stimuli2: 85 emoticons and 153 emoji (77 from iOS, 63 from Android, 9 from Facebook, and 4 from Emojipedia), mostly representing facial expressions of emotions (e.g., “happy face”) and/or symbolic meanings (e.g., “silence”).3

The emoticon set was developed on the basis of the list of emoticons presented in the “Twitter emotion coding instructions”4 for the SentiStrength tool (Thelwall et al., 2010; adapted from Wiebe, Wilson, & Cardie, 2005), used for sentiment detection in short texts. This list included 63 Western-style emoticons (e.g., Emot07; see Fig. 1) and 23 Eastern-style emoticons (e.g., Emot56a; see Fig. 1). One symbol was removed due to its unavailability in mobile phone text packages.
Fig. 1

Sample emoticons and emoji across operating systems for “laughing” and “crying” (the stimulus codes are included)

Because a given emoticon can sometimes vary in its presentation, variations of the same stimulus were included. For example, Emot01 (“laughing, big grin”) has three variations identified in the database, from Emot01a to Emot01c. Each emoticon was generated in black, 28-point Arial font on a white background and was saved as a single image file (72 × 68 pixels, 72 dots per inch, RGB, PNG format).
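For concreteness, a minimal sketch of this image-generation step follows, using the Pillow imaging library; the size, color, font, and format parameters follow the specification above, whereas the font file path and output file name are assumptions.

```python
# Minimal sketch of the emoticon image-generation step (assumed tooling:
# Pillow). The paper specifies black 28-point Arial on a white background,
# saved as a 72 x 68 px, 72-dpi PNG; the font path and file name are assumed.
from PIL import Image, ImageDraw, ImageFont

def render_emoticon(text: str, out_path: str) -> None:
    img = Image.new("RGB", (72, 68), color="white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("Arial.ttf", 28)  # assumed location of Arial
    # Center the emoticon on the canvas
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    x = (72 - (right - left)) / 2 - left
    y = (68 - (bottom - top)) / 2 - top
    draw.text((x, y), text, font=font, fill="black")
    img.save(out_path, format="PNG", dpi=(72, 72))

render_emoticon(":)", "Emot01a.png")  # stimulus code from the database
```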

On the basis of information available from the Unicode Consortium (http://unicode.org/emoji/charts/full-emoji-list.html), we selected emoji with intended meanings similar to those of the emoticons. Figure 1 depicts examples of the emoticons and emoji for “laughing” and “crying.” As in the case of the emoticons, variations of the same emoji were included. The set of 153 emoji was extracted from the Emojipedia database (http://emojipedia.org/) and included stimuli from the two most used and available operating systems at the time the study was performed: Apple iOS 9.3 (used in the iPhone, iPad, iMac, Apple Watch, and Apple TV) and Google Android 6.0.1 (used in Android devices, the Gmail Web interface, Google Hangouts, and the Google Chrome Internet browser).

Emoji were matched across operating systems according to their Unicode references. Of the 153 emoji, 63 stimuli were represented in both operating systems (all 63 Android emoji had a corresponding iOS emoji), 14 were represented only in the iOS operating system (e.g., EmjAp51), and 8 were represented in both operating systems and in the Facebook reaction set (see Fig. 1).5 The latter subset included nine emoji: the like/dislike buttons (EmjFb76 and EmjFb77, respectively), the recently added “Facebook reactions” (five faces expressing emotions, EmjFb07–EmjFb67, and one heart symbol, EmjFb71), and the new “like” button (EmjFb78). Finally, four Emojipedia images (EmjPe86–EmjPe89) were also included in the final set. These Unicode 9.0 emoji were not available in the Android or iOS operating systems at the time of the study (e.g., EmjPe89), but were included to represent potential future official emoji. Each emoji was saved as a single image file (72 × 72 pixels, 72 dots per inch, RGB, PNG format).

The vast majority of the emoji set represents facial expressions (88.89%), the exceptions being popular symbols (3.27%; e.g., heart, EmjAn71; heartbreak, EmjAn72) and hand gestures (7.84%; e.g., hand palm, EmjAp75).

Procedure and measures

The study was conducted using Qualtrics software. Participants were invited to collaborate on a Web survey about the perception and evaluation of emoticons and emoji. After clicking on the hyperlink, participants were directed to a secure webpage and were informed about the goals of the study and its expected duration (approximately 20 min). The initial instructions defined emoticons and emoji and presented examples of each type of stimulus. To avoid overlap, these examples were different from the stimuli used in the evaluation task. Participants were also informed that all the data collected would be treated anonymously and that they could abandon the study at any point by closing the browser, without their responses being considered for analysis.

After providing their informed consent to collaborate in the study (by checking the “I agree” option), participants were asked to provide information regarding their age, sex, educational level, current occupation, and operating system. Following this, they were given specific instructions to evaluate each stimulus on seven dimensions, namely aesthetic appeal, familiarity (subjective frequency), visual complexity, clarity, valence, arousal, and meaningfulness (all dimensions rated on 7-point Likert-type scales; the detailed instructions for each scale are presented in Table 1; see also Garrido et al., 2016; Prada et al., 2015). The order of the dimensions was randomized on each trial of the evaluation task. Finally, participants were requested to write the first meaning or emotion that came to mind for each stimulus in an open-ended response format, or alternatively to select the option “I don’t know” if they were not able to provide a specific meaning or emotion. The instructions also emphasized that responses should be fast and spontaneous and that there were no right or wrong answers.
Table 1

Instructions and scale anchors for each dimension

1. Aesthetic appeal. “In your opinion, considering the visual characteristics of the symbol, and not the object or concept it may depict, how visually appealing is the stimulus?” (1 = Visually unpleasant/unappealing, 7 = Visually very pleasant/appealing)

2. Familiarity. “How frequently do you encounter or see this stimulus in your daily routine? More frequently encountered stimuli are more familiar.” (1 = Not familiar, 7 = Very familiar)

3. Visual complexity. “Considering the complexity of the visual characteristics of the stimulus, and not those of the concept that can be related to it, how much visual detail and complexity does this stimulus contain? The more details the stimulus contains, the more complex it is.” (1 = Very simple, 7 = Very complex)

4. Clarity. “How clear or ambiguous is this stimulus? Stimuli that, in your opinion, clearly convey an emotion/meaning should be considered clear. Otherwise, they should be considered more ambiguous.” (1 = Totally ambiguous, 7 = Totally clear)

5. Valence. “To what extent do you consider this stimulus refers to something positive/pleasant or negative/unpleasant?” (1 = Very negative, 7 = Very positive)

6. Arousal. “To what extent do you consider this stimulus refers to something arousing/exciting or passive/calm?” (1 = Very passive/calm, 7 = Very arousing/exciting)

7. Meaningfulness. “Please indicate to what extent this stimulus conveys a meaning/emotion.” (1 = Conveys no meaning/emotion at all, 7 = Conveys a lot of meaning/emotion)

Participants then proceeded to the main task. To prevent fatigue and demotivation, each participant only saw a subset of 20 randomly selected stimuli from the available pool of 238 stimuli. Each stimulus was presented on a single page of the Web survey. We used a forced response option, such that participants were required to answer each question to progress in the survey. The number of participants evaluating each stimulus varied from 40 to 49. The stimuli were always presented at the top left corner of the page, with all evaluative dimensions presented below. Upon completing the task, participants were thanked and debriefed.
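As an illustration, a minimal sketch of this trial-assignment logic is shown below; the stimulus identifiers are placeholders, and the actual randomization was implemented in Qualtrics rather than in code.

```python
# Minimal sketch of the per-participant design described above: 20 stimuli
# drawn at random from the pool of 238, with the seven rating dimensions
# presented in random order on each trial. Stimulus IDs are placeholders;
# in the study, Qualtrics handled the randomization.
import random

STIMULI = [f"stim_{i:03d}" for i in range(238)]
DIMENSIONS = ["aesthetic appeal", "familiarity", "visual complexity",
              "clarity", "valence", "arousal", "meaningfulness"]

def build_session(rng: random.Random) -> list[dict]:
    trials = []
    for stimulus in rng.sample(STIMULI, 20):   # 20 stimuli, no repeats
        order = DIMENSIONS[:]
        rng.shuffle(order)                     # random dimension order per trial
        trials.append({"stimulus": stimulus, "dimension_order": order})
    return trials

session = build_session(random.Random(42))
print(session[0]["stimulus"], session[0]["dimension_order"])
```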

Results

The norms for the full set of stimuli are provided as supplementary material (see also www.osf.io/nua4x). In the following sections we present (a) the preliminary analysis regarding outlier detection, (b) the analysis of differences by gender and operating system, (c) the subjective rating norms for each dimension, (d) the correlations between the evaluative dimensions, and (e) the analysis of the attributed meanings/emotions.

Preliminary analysis

Because only completed surveys were included in the analysis, there were no missing data. Outliers were defined as ratings more than 2.5 standard deviations above or below the mean evaluation of a given stimulus on a given dimension. This analysis yielded a small percentage (1.32%) of outlier ratings. Moreover, none of the participants responded systematically in the same way (i.e., using the same scale value throughout). Therefore, no participants were excluded.
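A minimal sketch of this outlier criterion is given below, assuming the ratings are organized in long format with one row per rating; the column names are assumptions, not the authors’ actual data layout.

```python
# Minimal sketch of the 2.5-SD outlier criterion: a rating is flagged when it
# deviates from the mean rating of its stimulus-by-dimension cell by more
# than 2.5 standard deviations. Column names are assumed.
import pandas as pd

def flag_outliers(ratings: pd.DataFrame) -> pd.Series:
    cell = ratings.groupby(["stimulus", "dimension"])["rating"]
    z = (ratings["rating"] - cell.transform("mean")) / cell.transform("std")
    return z.abs() > 2.5

df = pd.DataFrame({
    "stimulus":  ["Emot01a"] * 10,
    "dimension": ["valence"] * 10,
    "rating":    [4, 5, 4, 4, 4, 5, 4, 5, 4, 1],   # the 1 is an outlier here
})
df["outlier"] = flag_outliers(df)
print(f"{df['outlier'].mean():.2%} of ratings flagged")   # 10.00%
```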

Emoticons and emoji evaluations

When we compared the evaluations of emoticons and emoji on each dimension for the total sample (see Table 2), the overall results showed that emoji (vs. emoticons) were rated as aesthetically more appealing, t(498) = –24.82, p < .001, d = 1.11; more familiar, t(498) = –23.73, p < .001, d = 1.06; clearer, t(497) = –31.45, p < .001, d = 1.41; more positive, t(498) = –2.50, p = .013, d = 0.11; more arousing, t(498) = –21.51, p < .001, d = 0.96; and more meaningful, t(498) = –31.00, p < .001, d = 1.39. No significant difference emerged for visual complexity.
Table 2

Evaluations of each dimension (means and standard deviations) for emoticons and emoji, for the total sample and for men and women, as well as mean difference tests

                       Total sample (N = 505)   Men (n = 143)        Women (n = 362)      Gender difference test
Stimulus/Dimension     M         SD             M         SD         M         SD         t        p
Emoticons
  Aesthetic appeal     3.01a     1.13           3.09a     1.19       2.89a     1.11        0.91    .373
  Familiarity          3.38a     1.30           3.29a     1.31       3.42a     1.30       –1.03    .306
  Visual complexity    3.39a     1.25           3.35a     1.20       3.41a     1.27       –0.50    .612
  Clarity              3.52a     1.20           3.48a     1.17       3.54a     1.21       –0.49    .622
  Valence              3.96a     0.82           3.96a     0.83       3.95a     0.82        0.08    .941
  Arousal              3.90a     0.81           3.95a     0.80       3.88a     0.81        0.88    .379
  Meaningfulness       3.69a     1.22           3.62a     1.22       3.70a     1.23       –0.39    .699
Emoji
  Aesthetic appeal     4.65b***  1.04           4.55b***  0.99       4.69b***  1.05       –1.42    .141
  Familiarity          4.86b***  1.18           4.55b***  1.14       4.99b***  1.17        3.85    <.001
  Visual complexity    3.52a     1.24           3.52a     1.14       3.53a     1.27        0.05    .965
  Clarity              5.33b***  0.93           5.15b***  0.91       5.41b***  0.93       –2.87    .003
  Valence              4.08b*    0.85           4.01a     0.80       4.11b*    0.87       –1.20    .204
  Arousal              4.84b***  0.86           4.73b***  0.84       4.88b***  0.87       –1.73    .079
  Meaningfulness       5.43b***  0.85           5.29b***  0.81       5.49b***  0.86       –2.43    .015

Subscripts indicate 5,000-bootstrap-sample paired-sample t tests comparing emoticons and emoji on each evaluative dimension, by column. Different subscripts indicate significant differences: *** p < .001, * p < .050. p values for gender differences correspond to 5,000-bootstrap-sample paired-sample t tests.
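For readers who want to reproduce this kind of test, a minimal sketch of a bootstrapped paired-samples test follows; it is a generic illustration under the assumption that each participant contributes one mean rating per stimulus type, not the authors’ actual analysis script.

```python
# Minimal sketch of a 5,000-resample bootstrap test of a paired mean
# difference (e.g., each participant's mean emoticon rating vs. mean emoji
# rating on one dimension). Generic illustration, not the original script.
import numpy as np

def bootstrap_paired_p(x: np.ndarray, y: np.ndarray,
                       n_boot: int = 5000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    diff = x - y                        # per-participant difference scores
    null = diff - diff.mean()           # shift to mean 0 to simulate the null
    boot = np.array([rng.choice(null, size=null.size, replace=True).mean()
                     for _ in range(n_boot)])
    # Two-tailed p: share of null resamples at least as extreme as observed
    return float(np.mean(np.abs(boot) >= abs(diff.mean())))

rng = np.random.default_rng(1)
emoticon_means = rng.normal(3.0, 1.1, size=500)   # simulated data
emoji_means = rng.normal(4.7, 1.0, size=500)
print(bootstrap_paired_p(emoticon_means, emoji_means))   # ~0.0
```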

Gender differences

The general results of the comparison between emoticons and emoji were also observed in the subsamples of both women and men. However, men provided equivalent valence ratings for the emoticons and emoji. We also tested for gender differences in the evaluations of emoticons and emoji for each dimension. As shown in Table 2, no gender differences emerged in the ratings of emoticons. When we replicated this analysis for emoji, the results showed that women evaluated emoji as being more familiar, clear, and meaningful than did men, all ps ≤ .015. This pattern of results remained the same after controlling for the main operating systems used by participants, all ps < .019.

Operating system differences

Emoji evaluations were compared between the Android and iOS operating systems (Table 3). These results showed that iOS emoji were evaluated as being more aesthetically appealing, familiar, clear, and meaningful, all 5,000-sample bootstrapped ps ≤ .006. In contrast, no differences between operating systems were found for visual complexity, valence, and arousal, all 5,000-sample bootstrapped ps ≥ .059.
Table 3

Evaluations of each dimension (means and standard deviations) for Android and iOS emoji for the total sample, as well as mean difference tests

                       Android            iOS                Difference test
Stimulus/Dimension     M        SD        M        SD        t(504)   p
Aesthetic appeal       4.45     1.22      4.77     1.11       6.38    <.001
Familiarity            4.43     1.53      5.05     1.22      10.40    <.001
Visual complexity      3.59     1.32      3.55     1.27      –0.89    .370
Clarity                5.12     1.12      5.30     0.99       3.70    <.001
Valence                3.85     1.01      3.95     0.91       1.89    .059
Arousal                4.74     0.98      4.77     0.95       0.69    .494
Meaningfulness         5.31     1.04      5.42     0.92       2.72    .006

Results of 5,000-bootstrap-sample paired-sample t tests comparing Android and iOS emoji on each evaluative dimension.

Subjective rating norms

To define the subjective rating norms, data were further coded and analyzed by stimulus. For each stimulus, we calculated frequencies, means, standard deviations, and confidence intervals (CIs) in each dimension (see Appendix 1 in the supplementary material). On the basis of these results, stimuli were categorized as low, moderate, or high in each dimension (for a similar procedure, see Prada et al., 2015). When the CI included the response scale midpoint (i.e., 4), stimuli were considered “moderate” on a given dimension. Stimuli were categorized as “low” when the upper bound of the CI was below the scale midpoint and as “high” when the lower bound of the CI was above the scale midpoint. In the case of valence, “low” means negative, “moderate” means neutral, and “high” means positive. Figures 2 and 3 present summaries of this analysis for emoticons and emoji separately.
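A minimal sketch of this categorization rule is shown below; the 95% confidence level is an assumption, since the text does not restate the level used for the CIs.

```python
# Minimal sketch of the low/moderate/high categorization: "moderate" if the
# CI of the mean rating includes the scale midpoint (4), "low" if the CI is
# entirely below it, "high" if entirely above. A 95% t-based CI is assumed.
import numpy as np
from scipy import stats

def categorize(ratings: list[float], midpoint: float = 4.0) -> str:
    x = np.asarray(ratings, dtype=float)
    lo, hi = stats.t.interval(0.95, df=len(x) - 1,
                              loc=x.mean(), scale=stats.sem(x))
    if hi < midpoint:
        return "low"
    if lo > midpoint:
        return "high"
    return "moderate"

print(categorize([5, 6, 5, 7, 6, 5, 6]))   # CI lies above 4 -> "high"
```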
Fig. 2

Emoticon frequency distributions for each dimension level. For valence: low = negative, moderate = neutral, high = positive

Fig. 3

Emoji frequency distributions for each dimension level. For valence: low = negative, moderate = neutral, high = positive

As is shown in Fig. 2, the majority of emoticons were categorized as being low in aesthetic appeal (76.47%), familiarity (57.65%), and clarity (50.59%), and as being moderately arousing (55.29%). Moreover, the results show that most emoticons were categorized as being low (48.24%) or moderate (44.71%) in complexity, and as low (43.53%) or moderate (36.47%) in meaningfulness. Regarding valence, the emoticons were distributed across the three levels: negative (42.35%), neutral (30.59%), and positive (27.06%).

Figure 3 shows that the majority of emoji were categorized as being highly familiar (58.82%), clear (79.08%), arousing (65.36%), and meaningful (88.24%). The results further show that emoji were categorized as being high (49.02%) or moderate (45.10%) in aesthetic appeal, and moderate (54.25%) or low (43.10%) in complexity. Note that the emoji were somewhat polarized in their valence, being mostly categorized as either negative (49.02%) or positive (42.48%) in this dimension. Figure 4 depicts examples of emoticons and emoji for each level of each evaluative dimension.
Fig. 4

Sample emoticons and emoji for each level across dimensions (LEED stimulus codes are included). For valence: low = negative, moderate = neutral, high = positive

Correlations between dimensions

Overall, the results showed significant correlations between the dimensions (see Table 4). For example, meaningfulness was strongly correlated with aesthetic appeal (r = .547), familiarity (r = .648), clarity (r = .743), and arousal (r = .506). Clarity was strongly associated with aesthetic appeal (r = .538) and familiarity (r = .704). Aesthetic appeal was also strongly associated with familiarity (r = .556).
Table 4

Pearson’s correlations between the dimensions

Dimension                1         2         3         4         5        6
1. Aesthetic appeal
2. Familiarity           .556***
3. Visual complexity     –.038***  –.188***
4. Clarity               .538***   .704***   –.175***
5. Valence               .403***   .250***   –.032***  .176***
6. Arousal               .266***   .314***   .106***   .398***   –.005
7. Meaningfulness        .547***   .648***   –.062***  .743***   .123***  .506***

***Correlation is significant at the .001 level (two-tailed).

Analysis of attributed meaning/emotion

In addition to the meaningfulness ratings, participants were asked to indicate the meaning or emotion attributed to each stimulus. Percentages of responses were computed relative to the number of participants who evaluated a given stimulus. Two independent judges coded the meaning/emotion attributed by the participants to each symbol (for a similar strategy, see, e.g., Prada et al., 2015). Synonyms (e.g., “don’t speak” and “silence,” EmjAp31) and singular/plural forms (e.g., “smiles” and “smile,” Emot1c) were included in the same category. The meanings of 15 emoticons were not categorized due to a low percentage of responses (i.e., <25%). For example, of the 42 participants who evaluated Emot32, only eight indicated a meaning, of which two were categorized as “smile,” two as “ignore,” and the remainder were uncategorized. Note that the sum of the percentages of both categories does not necessarily equal 100%. For example, 48.4% of the valid responses for EmjAp47 were categorized as “glad” and 25.8% as “upside down,” whereas the remaining responses (n = 8) were heterogeneous and therefore uncategorized (e.g., “normality,” “sarcasm”).

The percentages of meaning responses varied between 4.3% (Emot75) and 95.0% (Emot01a) for emoticons (M = 49.9%, SD = 24.1); between 46.9% (EmjAn24) and 100% (e.g., EmjAn71) for Android emoji (M = 84.6%, SD = 11.9); and between 48.8% (EmjAp24) and 100% (e.g., EmjAp57) for iOS emoji (M = 86.9%, SD = 11.3). The percentages varied between 90.7% (EmjFb17) and 100% (e.g., EmjFb76) for the Facebook emoji (M = 95.7%, SD = 2.9), and between 74.4% (EmjPe86) and 97.8% (e.g., EmjPe88) for the Emojipedia emoji (M = 82.9%, SD = 10.8). Within each operating system, results regarding the first category showed that, on average, participants agreed on the meanings of both the Android (64.95%) and iOS (66.78%) emoji.
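A minimal sketch of this bookkeeping follows; the responses and category labels are illustrative, and in the study the categorization itself was performed by two human judges rather than by exact string matching.

```python
# Minimal sketch of the meaning-analysis bookkeeping: the share of
# participants who supplied a meaning (vs. "I don't know"), and the share of
# valid responses in each coded category. Data are illustrative; in the study
# the coding was done by two independent judges.
from collections import Counter

responses = ["wink", "wink", "agree", "I don't know", "wink",
             "agree", "irony", "I don't know", "wink"]

valid = [r for r in responses if r != "I don't know"]
print(f"meaning responses: {len(valid) / len(responses):.1%}")   # 77.8%
for meaning, n in Counter(valid).most_common():
    print(f"  {meaning}: {n / len(valid):.1%} of valid responses")
```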

A detailed discussion of the meaning or emotion attributed to each stimulus would be too extensive. The complete meaning analysis is presented in Appendix 2 in the supplementary material, alongside the Unicode intended meaning for comparison purposes. In some cases, the meaning categorization converged with the Unicode intended meaning. For instance, participants attributed a congruent meaning to the “winking face” stimulus in its different formats. For the emoticon (Emot08a), the most frequent meanings were “wink” (40.5%) and “agree” (21.6%); for the iOS emoji (EmjAp08), they were “agree/compliance” (40.0%) and “wink” (28.6%); and for the Android emoji (EmjAn08), they were “wink” (40.6%) and “compliance” (25.0%).

In other cases there was only partial convergence. For example, the emoji “face savoring delicious food” was interpreted as “cheeky/fun” (63.2%) and “tasty” (18.4%) in the iOS emoji (EmjAp10), and as “wink/cheeky” (59.4%) and “tasty” (12.5%) in the Android emoji (EmjAn10). In another example, the emoji “imp” was attributed the meanings “evil” (60.0%) and “mischief/prank” (30.0%) in the iOS emoji (EmjAp70), and “evil/mischief” (62.5%) and “rage” (22.5%) in the Android emoji (EmjAn70).

For other stimuli, the attributed meaning differed across operating systems and from the Unicode intended meaning. For example, the emoji “dizzy face” was attributed the meaning “shocked” (66.7%) in the iOS emoji (EmjAp66), and “confusion” (46.5%) and “hypnotized” (18.6%) in the Android emoji (EmjAn66). These examples clearly illustrate that the meaning participants assign to emoji is not always convergent with their Unicode intended meaning and also varies across operating systems.

Discussion

In this article we have presented the LEED, which includes 238 emoticons and emoji, evaluated across seven evaluative dimensions: aesthetic appeal, familiarity, visual complexity, clarity, valence, arousal, and meaningfulness. Additionally, participants attributed meaning to each stimulus. To our knowledge, this is the first available emoticon/emoji normative database.

Results showed that, in comparison to emoticons, emoji are perceived as more aesthetically appealing, familiar, clear, and meaningful. Most emoticons were categorized as low in aesthetic appeal, familiarity, clarity, valence, and meaningfulness, whereas most emoji were categorized as high in familiarity, clarity, arousal, and meaningfulness. This may be associated with the increasing popularity and use of emoji. Indeed, recent evidence shows that as emoji usage has increased, the usage of emoticons has decreased (Pavalanathan & Eisenstein, 2015). Furthermore, in the case of stimuli depicting facial cues, the graphical representation of emoji may be more appealing because they are better proxies for human facial expressions (e.g., Ganster et al., 2012).

Results also showed no gender differences regarding the evaluation of emoticons. Emoji, however, were evaluated as more familiar, clear, and meaningful by women. This finding converges with empirical evidence showing that women are more likely than men to use emoji (e.g., Fullwood, Orchard, & Floyd, 2013).

Recent literature has suggested the need to take into account possible differences in emoji evaluation across operating systems (Miller et al., 2016). Indeed, our results showed that iOS emoji were evaluated as more aesthetically appealing, familiar, clear, and meaningful than Android emoji. We also found significant correlations between the evaluative dimensions (e.g., stimuli that were perceived as more meaningful were also perceived as more aesthetically appealing, familiar, clear, and arousing). This pattern replicates findings from databases of other visual stimuli using the same evaluative dimensions (Garrido et al., 2016; Prada et al., 2015).

In addition to presenting normative ratings across dimensions, our database includes participants’ interpretations of the meaning of each stimulus. Participants were more likely to attribute meaning to emoji than to emoticons, irrespective of the operating system (iOS vs. Android). It is important to note that even though participants described the meaning in terms of what the stimulus directly represents (e.g., a wink), they were also likely to go beyond this mere description and infer its intent (e.g., being cheeky). This is particularly relevant because it allows researchers to assess the extent to which the intended meaning overlaps with the meaning attributed by users, and more importantly because our findings show that this is not always the case. However, as in previous research, our coding system for meaning has shortcomings that render this overlap assessment somewhat subjective.

Emoticons and emoji are often analyzed in the absence of information about the contexts in which they are communicated (Gaspar, Pedro, Panagiotopoulos, & Seibt, 2016). This was also the case in the present research, in which ratings were obtained by presenting the stimuli in isolation. This can constitute a limitation, because the interpretation of visual stimuli is often context-dependent (e.g., Wolff & Wogalter, 1998). Emoticons/emoji are typically incorporated into a message, and research has already shown that they can influence how the message is interpreted (e.g., Derks et al., 2008; Fullwood et al., 2013). Moreover, the reverse may occur, such that the content of the message can influence the interpretation of the emoticons/emoji (e.g., Miller et al., 2016). For instance, a winking emoticon/emoji can be interpreted differently when accompanying “Let’s go to the movies ;)” versus “Let’s watch a movie at my place ;)”. Furthermore, emoticon/emoji interpretation can also depend on how the sender’s goals are perceived (Gaspar, Barnett, & Seibt, 2015; Gaspar et al., 2016). For instance, a winking emoji accompanying a sarcastic remark can be interpreted differently when the sender is a close friend versus one’s boss.

Another limitation of the present study concerns the specific cultural context in which this dataset was developed. Culture has emerged as a factor that influences emoticon and emoji usage in online communication (Park et al., 2014). Our normative dataset was obtained with Portuguese participants and, according to recent data (Ljubešić & Fišer, 2016), Portugal ranks fourth in Europe for emoji usage on Twitter. Nevertheless, as with other normative databases, generalizations to other populations should be made with caution, and cross-validation is recommended. Therefore, future studies should consider extending this database to other countries/cultures to assess cross-cultural differences and similarities. It should also be noted that differences may arise between studies that analyze how emoticons and emoji are evaluated in isolation from the context in which they are often used and those focusing on how users actually contextualize them in communication. For example, in our study participants perceived emoji as negative or positive, whereas the work by Novak and colleagues (2015) showed that users mostly use positive emoji in their tweets.

Finally, the results from the meaning analysis indicated that intended meaning and users’ interpretation of that meaning do not always overlap. Two independent coders analyzed and categorized the responses given by participants to each stimulus. Although this procedure is not exempt from bias, the lack of overlap constitutes an important indicator that the selection of emoji and emoticons to use in research or practice should be carefully conducted, on the basis of more objective normative data such as that reported in the LEED. Other procedures could be used to determine users’ interpretation of meaning. For instance, researchers could use forced choice tasks (i.e., decide which emotion/meaning is expressed by the stimuli; Vaiman, Wagner, Caicedo, & Pereno, 2017).

The LEED mostly includes stimuli depicting graphical representations of faces. Research has shown that this type of emoji is processed similarly to other human nonverbal information (e.g., voice and facial expression; Yuasa et al., 2011) and that emoji can be used to prime social presence (Tung & Deng, 2007). Therefore, our stimuli can be used in affective processing studies and as experimental primes. Future studies could also seek to expand our normative ratings to other emoji representing humans (e.g., bodily postures and activities). Considering that new emoji varying in age group and skin tone have recently been added to the sets available on different platforms, it would be interesting to examine whether they are suitable as stimulus materials in research designed to examine topics such as person perception, intergroup relations, and social influence.

The LEED is a useful tool for researchers and practitioners (e.g., public health officials) interested in conducting research with naturalistic data (e.g., user-generated messages shared on social media platforms). It can also be used in a variety of experimental paradigms, particularly when control of stimulus characteristics is required. Instead of relying on ad hoc categorization and intended meanings for the selection, coding, and analysis of emoticons and emoji, researchers and analysts can rely on the systematic normative ratings offered by the LEED.
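As a sketch of this use case, the snippet below looks up normative valence and arousal for the emoji in a message instead of categorizing them ad hoc; the file name, column names, and export format are assumptions about how the LEED might be loaded, not a documented interface.

```python
# Minimal sketch of norm-based (rather than ad hoc) emoji coding: look up
# each emoji's normative valence/arousal in the LEED. The CSV file name and
# column names are hypothetical; the stimulus codes follow the LEED prefixes.
import pandas as pd

leed = pd.read_csv("leed_norms.csv")   # hypothetical export of the database
norms = leed.set_index("stimulus_code")[["valence_mean", "arousal_mean"]]

def score_message(emoji_codes: list[str]) -> dict:
    known = [c for c in emoji_codes if c in norms.index]
    found = norms.loc[known]
    return {"valence": found["valence_mean"].mean(),
            "arousal": found["arousal_mean"].mean()}

print(score_message(["EmjAp08", "EmjAp10"]))
```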

This type of database also has the potential to be used in more applied contexts involving ICT-mediated written communication, such as marketing, education, and professional settings (e.g., Skiba, 2016; Skovholt et al., 2014). Particularly promising is the field of health informatics (see, e.g., Eysenbach, 2011). Both human-based and computer-based evaluations of ICT users’ reactions to health-related events have been used for the monitoring and surveillance of a variety of public health issues (e.g., influenza-like illnesses and dengue; Milinovich, Williams, Clements, & Hu, 2014). Such monitoring often relies on computer-based techniques involving machine-learning algorithms and semantic lexicons. These techniques would benefit from being based on normative ratings such as those offered by the LEED.

Footnotes

1. For a first-person account of emoticon history, see www.cs.cmu.edu/~sef/sefSmiley.htm.

2. The full set of stimuli is available as online supplemental material, and at www.osf.io/nua4x. This includes the corresponding Unicode references (http://unicode.org/emoji/charts/full-emoji-list.html) and the intended meanings proposed for each emoticon/emoji by the Unicode Consortium.

3. To identify the stimuli in our database, we used the prefixes Emot = Emoticon, Emj = Emoji, Ap = Apple iOS, An = Google Android, Fb = Facebook, and Pe = Emojipedia.

4.

5. This excludes the new “like” emoji, which had no correspondence, given that the old “like” button was the one used as the correspondent to similar emoji in iOS and Android. Moreover, a “dislike” emoji that does not exist on Facebook (representing the “like” emoji in an inverted position) was also included in the stimulus set because, when the study materials were created, news reports indicated that Facebook would add this option to its platform, which later proved not to be the case.

Notes

Author note

Part of this research was funded by grants from the Fundação para a Ciência e Tecnologia awarded to the first (SFRH/BPD/73528/2010), third (UID/PSI/04810/2013), and fourth (PTDC/MHC-PCN/5217/2014) authors, and by a Marie Curie fellowship (FP7-PEOPLE-2013-CIG/631673) awarded to the fourth author. We thank Nuno Porto for his assistance in preparing the figures.

Supplementary material

ESM 1: 13428_2017_878_MOESM1_ESM.pdf (PDF 309 kb)

References

  1. Antheunis, M. L., Valkenburg, P. M., & Peter, J. (2007). Computer-mediated communication and interpersonal attraction: An experimental test of two explanatory hypotheses. Cyber Psychology and Behavior, 10, 831–836. doi: 10.1089/cpb.2007.9945 CrossRefGoogle Scholar
  2. Antheunis, M. L., Valkenburg, P. M., & Peter, J. (2010). Getting acquainted through social network sites: Testing a model of online uncertainty reduction and social attraction. Computers in Human Behavior, 26, 100–109. doi: 10.1016/j.chb.2009.07.005 CrossRefGoogle Scholar
  3. Beltrone, G. (2015). Everyone is an emoji in this bizarre and terrifying French McDonald’s ad. AdWeek. Retrieved from www.adweek.com/adfreak/everyone-emoji-bizarre-and-terrifying-french-mcdonalds-ad-166335
  4. Blechert, J., Meule, A., Busch, N. A., & Ohla, K. (2014). Food-pics: An image database for experimental research on eating and appetite. Frontiers in Psychology, 5, 617. doi: 10.3389/fpsyg.2014.00617 CrossRefPubMedPubMedCentralGoogle Scholar
  5. Bonin, P., Peereman, R., Malardier, N., Méot, A., & Chalard, M. (2003). A new set of 299 pictures for psycholinguistic studies: French norms for name agreement, image agreement, conceptual familiarity, visual complexity, image variability, age of acquisition, and naming latencies. Behavior Research Methods, Instruments, & Computers, 35, 158–167. doi: 10.3758/BF03195507 CrossRefGoogle Scholar
  6. Bradley, M. M., & Lang, P. J. (1999a). Affective Norms for English Words (ANEW): stimuli, instruction manual, and affective ratings (Technical Report C-1). Gainesville: University of Florida, Center for Research in Psychophysiology.Google Scholar
  7. Bradley, M. M., & Lang, P. J. (1999b). International Affective Digitized Sounds (IADS): Stimuli, instruction manual and affective ratings (Technical Report B-2). Gainesville: University of Florida, Center for Research in Psychophysiology.Google Scholar
  8. Burnap, P., Gibson, R., Sloan, L., Southern, R., & Williams, M. (2016). 140 characters to victory? using twitter to predict the UK 2015 general election. Electoral Studies, 41, 230–233. doi: 10.1016/j.electstud.2015.11.017 CrossRefGoogle Scholar
  9. Carvalho, P., Sarmento, L., Silva, M. J., & de Oliveira, E. (2009). Clues for detecting irony in user-generated contents: Oh…‼ It’s “so easy”; -). In Proceedings of the 1st international CIKM workshop on topic–sentiment analysis for mass opinion (pp. 53–56). New York: ACM Press. doi: 10.1145/1651461.1651471 CrossRefGoogle Scholar
  10. Chambers, C. T., & Craig, K. D. (1998). An intrusive impact of anchors in children’s faces pain scales. Pain, 78, 27–37. doi: 10.1016/S0304-3959(98)00112-2 CrossRefPubMedGoogle Scholar
  11. Charbonnier, L., van Meer, F., van der Laan, L. N., Viergever, M. A., & Smeets, P. A. M. (2016). Standardized food images: A photographing protocol and image database. Appetite, 96, 166–173. doi: 10.1016/j.appet.2015.08.041 CrossRefPubMedGoogle Scholar
  12. Comesaña, M., Soares, A. P., Perea, M., Piñeiro, A. P., Fraga, I., & Pinheiro, A. (2013). ERP correlates of masked affective priming with emoticons. Computers in Human Behavior, 29, 588–595. doi: 10.1016/j.chb.2012.10.020 CrossRefGoogle Scholar
  13. Dan-Glauser, E. S., & Scherer, K. R. (2011). The Geneva Affective Picture Database (GAPED): A new 730-picture database focusing on valence and normative significance. Behavior Research Methods, 43, 468. doi: 10.3758/s13428-011-0064-1 CrossRefPubMedGoogle Scholar
  14. Davidov, D., Tsur, O., & Rappoport, A. (2010). Enhanced sentiment learning using Twitter hashtags and smileys. In Proceedings of the 23rd international conference on computational linguistics (pp. 241–249). Stroudsburg: Association for Computational Linguistics. Retrieved from http://dl.acm.org/citation.cfm?id=1944566.1944594 Google Scholar
  15. Derks, D., Bos, A. E. R., & von Grumbkow, J. (2008). Emoticons and online message interpretation. Social Science Computer Review, 26, 379–388. doi: 10.1177/0894439307311611 CrossRefGoogle Scholar
  16. Dresner, E., & Herring, S. C. (2010). Functions of the nonverbal in CMC: Emoticons and illocutionary force. Communication Theory, 20, 249–268. doi: 10.1111/j.1468-2885.2010.01362.x CrossRefGoogle Scholar
  17. Ebner, N. C., Riediger, M., & Lindenberger, U. (2010). FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior Research Methods, 42, 351–362. doi: 10.3758/BRM.42.1.351 CrossRefPubMedGoogle Scholar
  18. Eysenbach, G. (2011). Infodemiology and infoveillance: Tracking online health information and cyberbehavior for public health. American Journal of Preventive Medicine, 40, S154–S158. doi: 10.1016/j.amepre.2011.02.006 CrossRefPubMedGoogle Scholar
  19. Fane, J., MacDougall, C., Jovanovic, J., Redmond, G., & Gibbs, L. (2016). Exploring the use of emoji as a visual research method for eliciting young children’s voices in childhood research. Early Child Development and Care. doi: 10.1080/03004430.2016.1219730 Google Scholar
  20. Fullwood, C., Orchard, L. J., & Floyd, S. A. (2013). Emoticon convergence in internet chat rooms. Social Semiotics, 23, 648–662. doi: 10.1080/10350330.2012.739000 CrossRefGoogle Scholar
  21. Ganster, T., Eimler, S. C., & Krämer, N. C. (2012). Same same but different!? the differential influence of smilies and emoticons on person perception. Cyber Psychology, Behavior, and Social Networking, 15, 226–230. doi: 10.1089/cyber.2011.0179 CrossRefGoogle Scholar
  22. Garcia-Marques, T., Mackie, D. M., Claypool, H. M., & Garcia-Marques, L. (2004). Positivity can cue familiarity. Personality and Social Psychology Bulletin, 30, 585–593. doi: 10.1177/0146167203262856
  23. Garrido, M. V., Lopes, D., Prada, M., Rodrigues, D., Jerónimo, R., & Mourão, R. P. (2016). The many faces of a face: Comparing stills and videos of facial expressions in eight dimensions (SAVE database). Behavior Research Methods. doi: 10.3758/s13428-016-0790-5
  24. Gaspar, R., Barnett, J., & Seibt, B. (2015). Crisis as seen by the individual: The norm deviation approach. Psyecology, 6, 103–135. doi: 10.1080/21711976.2014.1002205
  25. Gaspar, R., Pedro, C., Panagiotopoulos, P., & Seibt, B. (2016). Beyond positive or negative: Qualitative sentiment analysis of social media reactions to unexpected stressful events. Computers in Human Behavior, 56, 179–191. doi: 10.1016/j.chb.2015.11.040
  26. Gülşen, T. T. (2016). You tell me in emojis. In T. Ogata & T. Akimoto (Eds.), Computational and cognitive approaches to narratology (pp. 354–375). Hershey: Information Science Reference.
  27. Han, D. H., Yoo, H. J., Kim, B. N., McMahon, W., & Renshaw, P. F. (2014). Brain activity of adolescents with high functioning autism in response to emotional words and facial emoticons. PLoS ONE, 9, 1–8. doi: 10.1371/journal.pone.0091214
  28. Hogenboom, A., Bal, D., Frasincar, F., Bal, M., de Jong, F., & Kaymak, U. (2013). Exploiting emoticons in sentiment analysis. In Proceedings of the 28th annual ACM symposium on applied computing (pp. 703–710). New York: ACM Press. doi: 10.1145/2480362.2480498
  29. Huang, A. H., Yen, D. C., & Zhang, X. (2008). Exploring the potential effects of emoticons. Information & Management, 45, 466–473. doi: 10.1016/j.im.2008.07.001
  30. Jaeger, S. R., Vidal, L., Kam, K., & Ares, G. (2017). Can emoji be used as a direct method to measure emotional associations to food names? Preliminary investigations with consumers in USA and China. Food Quality and Preference, 56, 38–48. doi: 10.1016/j.foodqual.2016.09.005
  31. Kaye, L. K., Wall, H. J., & Malone, S. A. (2016). “Turn that frown upside-down”: A contextual account of emoticon usage on different virtual platforms. Computers in Human Behavior, 60, 463–467. doi: 10.1016/j.chb.2016.02.088
  32. Kerkhof, I., Goesaert, E., Dirikx, T., Vansteenwegen, D., Baeyens, F., D’Hooge, R., & Hermans, D. (2009). Assessing valence indirectly and online. Cognition and Emotion, 23, 1615–1629. doi: 10.1080/02699930802469239
  33. Krohn, F. B. (2004). A generational approach to using emoticons as nonverbal communication. Journal of Technical Writing and Communication, 34, 321–328. doi: 10.2190/9eqh-de81-cwg1-qll9
  34. Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual (Technical Report A-8). Gainesville: University of Florida.
  35. Lea, M., & Spears, R. (1992). Paralanguage and social perception in computer-mediated communication. Journal of Organizational Computing, 2, 321–341. doi: 10.1080/10919399209540190
  36. Liu, B. (2012). Sentiment analysis and opinion mining. San Rafael: Morgan & Claypool.
  37. Liu, K.-L., Li, W.-J., & Guo, M. (2012). Emoticon smoothed language models for Twitter sentiment analysis. In Proceedings of the 26th AAAI conference on artificial intelligence (pp. 1678–1684). New York: AAAI Press.
  38. Ljubešić, N., & Fišer, D. (2016). A global analysis of emoji usage. In Proceedings of the 10th Web as corpus workshop (WAC-X) and the EmpiriST shared task (pp. 82–89). Stroudsburg: Association for Computational Linguistics.
  39. Lo, S.-K. (2008). The nonverbal communication functions of emoticons in computer-mediated communication. CyberPsychology & Behavior, 11, 595–597. doi: 10.1089/cpb.2007.0132
  40. McDougall, S. J. P., Curry, M. B., & de Bruijn, O. (1999). Measuring symbol and icon characteristics: Norms for concreteness, complexity, meaningfulness, familiarity, and semantic distance for 239 symbols. Behavior Research Methods, Instruments, & Computers, 31, 487–519. doi: 10.3758/BF03200730
  41. Mendonça, R., Garrido, M. V., & Semin, G. R. (2016). A standardized database for Portuguese faces looking towards left, right and the front: The LRF Face Corpus. Manuscript submitted for publication.
  42. Milinovich, G. J., Williams, G. M., Clements, A. C. A., & Hu, W. (2014). Internet-based surveillance systems for monitoring emerging infectious diseases. Lancet Infectious Diseases, 14, 160–168. doi: 10.1016/S1473-3099(13)70244-5
  43. Miller, H., Thebault-Spieker, J., Chang, S., Johnson, I., Terveen, L., & Hecht, B. (2016). “Blissfully happy” or “ready to fight”: Varying interpretations of emoji. In International Conference on Web and Social Media (ICWSM) ’16 (pp. 259–268). New York: AAAI Press.
  44. Moore, A., Steiner, C. M., & Conlan, O. (2013). Design and development of an empirical smiley-based affective instrument. In 1st workshop on emotions and personality in personalized services (pp. 24–30). Rome: UMAP.
  45. Neff, J. (2015). Dove launches curly-haired emojis to end straight-hair dominance. Advertising Age. Retrieved from http://adage.com/article/digital/dove-launches-curly-haired-emojis-address-void/301203/
  46. Negishi, M. (2014). Meet Shigetaka Kurita, the father of emoji. Wall Street Journal. Retrieved from http://blogs.wsj.com/japanrealtime/2014/03/26/meet-shigetaka-kurita-the-father-of-emoji/
  47. Nelson, R. A., Tossell, C. C., & Kortum, P. (2015). Emoticon use in mobile communications :-). In Z. Yan (Ed.), Encyclopedia of mobile phone behavior (pp. 1–11). Hershey: IGI Global.
  48. Novak, P. K., Smailović, J., Sluban, B., & Mozetič, I. (2015). Sentiment of emojis. PLoS ONE, 10, e0144296. doi: 10.1371/journal.pone.0144296
  49. Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2, 1–135. doi: 10.1561/1500000011
  50. Park, J., Baek, Y. M., & Cha, M. (2014). Cross-cultural comparison of nonverbal cues in emoticons on Twitter: Evidence from big data analysis. Journal of Communication, 64, 333–354. doi: 10.1111/jcom.12086
  51. Paul, M. J., & Dredze, M. (2011). You are what you tweet: Analyzing Twitter for public health. In Fifth international AAAI conference on weblogs and social media (pp. 265–272). New York: AAAI Press.
  52. Pavalanathan, U., & Eisenstein, J. (2015). Emoticons vs. emojis on Twitter: A causal inference approach. arXiv:1510.08480 [cs.CL].
  53. PEW Research Center. (2016). Smartphone ownership and Internet usage continues to climb in emerging economies. Retrieved from www.pewglobal.org/2016/02/22/smartphone-ownership-and-internet-usage-continues-to-climb-in-emerging-economies/
  54. Prada, M., Rodrigues, D., Silva, R. R., & Garrido, M. V. (2015). Lisbon Symbol Database (LSD): Subjective norms for 600 symbols. Behavior Research Methods, 48, 1370–1382. doi: 10.3758/s13428-015-0643-7
  55. Proctor, R. W., & Vu, K.-P. L. (1999). Index of norms and ratings published in the Psychonomic Society journals. Behavior Research Methods, Instruments, & Computers, 31, 659–667. doi: 10.3758/BF03200742
  56. Richmond, V., & McCroskey, J. (2009). Human communication theory and research: Traditions and models. In D. Stacks & M. Salwen (Eds.), An integrated approach to communication theory and research (2nd ed., pp. 223–231). New York: Routledge.
  57. Siegel, R. M., Anneken, A., Duffy, C., Simmons, K., Hudgens, M., Lockhart, M. K., & Shelly, J. (2015). Emoticon use increases plain milk and vegetable purchase in a school cafeteria without adversely affecting total milk purchase. Clinical Therapeutics, 37, 1938–1943. doi: 10.1016/j.clinthera.2015.07.016
  58. Skiba, D. J. (2016). Face with tears of joy is word of the year: Are emoji a sign of things to come in health care? Nursing Education Perspectives, 37, 56–57. doi: 10.1097/01.NEP.0000476112.24899.a1
  59. Skovholt, K., Grønning, A., & Kankaanranta, A. (2014). The communicative functions of emoticons in workplace e-mails: :-). Journal of Computer-Mediated Communication, 19, 780–797. doi: 10.1111/jcc4.12063
  60. Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174–215. doi: 10.1037/0278-7393.6.2.174
  61. Thelwall, M., Buckley, K., Paltoglou, G., Cai, D., & Kappas, A. (2010). Sentiment strength detection in short informal text. Journal of the American Society for Information Science and Technology, 61, 2544–2558. doi: 10.1002/asi.21416
  62. Thelwall, M., Buckley, K., & Paltoglou, G. (2012). Sentiment strength detection for the social web. Journal of the American Society for Information Science and Technology, 63, 163–173. doi: 10.1002/asi.21662
  63. Thompson, D., & Filik, R. (2016). Sarcasm in written communication: Emoticons are efficient markers of intention. Journal of Computer-Mediated Communication, 21, 105–120. doi: 10.1111/jcc4.12156
  64. Tossell, C. C., Kortum, P., Shepard, C., Barg-Walkow, L. H., Rahmati, A., & Zhong, L. (2012). A longitudinal study of emoticon use in text messaging from smartphones. Computers in Human Behavior, 28, 659–663. doi: 10.1016/j.chb.2011.11.012
  65. Tung, F.-W., & Deng, Y.-S. (2007). Increasing social presence of social actors in e-learning environments: Effects of dynamic and static emoticons on children. Displays, 28, 174–180. doi: 10.1016/j.displa.2007.06.005
  66. Vaiman, M., Wagner, M. A., Caicedo, E., & Pereno, G. L. (2017). Development and validation of an Argentine set of facial expressions of emotion. Cognition and Emotion, 31, 249–260. doi: 10.1080/02699931.2015.1098590
  67. Vashisht, G., & Thakur, S. (2014). Facebook as a corpus for emoticons-based sentiment analysis. International Journal of Emerging Technology and Advanced Engineering, 4, 904–908.
  68. Vidal, L., Ares, G., & Jaeger, S. R. (2016). Use of emoticon and emoji in tweets for food-related emotional expression. Food Quality and Preference, 49, 119–128. doi: 10.1016/j.foodqual.2015.12.002
  69. Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23, 3–43. doi: 10.1177/009365096023001001
  70. Walther, J. B., & D’Addario, K. P. (2001). The impacts of emoticons on message interpretation in computer-mediated communication. Social Science Computer Review, 19, 324–347. doi: 10.1177/089443930101900307
  71. Wang, H., & Castanon, J. A. (2015). Sentiment expression via emoticons on social media. Retrieved from arXiv:1511.02556
  72. Wang, W., Zhao, Y., Qiu, L., & Zhu, Y. (2014). Effects of emoticons on the acceptance of negative feedback in computer-mediated communication. Journal of the Association for Information Systems, 15(8), 3. Retrieved from http://aisel.aisnet.org/jais/vol15/iss8/3
  73. Wiebe, J., Wilson, T., & Cardie, C. (2005). Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39, 165–210. doi: 10.1007/s10579-005-7880-9
  74. Wohl, J. (2016, August 16). How marketers can win the great emoji arms race. Advertising Age. Retrieved from http://adage.com/article/digital/marketers-emoji-arms-race/303361/
  75. Wolff, J. S., & Wogalter, M. S. (1998). Comprehension of pictorial symbols: Effects of context and test method. Human Factors, 40, 173–186. doi: 10.1518/001872098779480433
  76. Yuasa, M., Saito, K., & Mukawa, N. (2011). Brain activity associated with graphic emoticons: The effect of abstract faces in communication over a computer network. Electrical Engineering in Japan, 177, 36–45. doi: 10.1002/eej.21162

Copyright information

© Psychonomic Society, Inc. 2017

Authors and Affiliations

  1. Department of Social and Organizational Psychology, Instituto Universitário de Lisboa (ISCTE-IUL), CIS-IUL, Lisbon, Portugal
  2. Goldsmiths, University of London, London, UK
  3. William James Center for Research, ISPA - Instituto Universitário, Lisbon, Portugal
