
1 Introduction

Sentiment analysis on Twitter has been attracting much attention recently due to the rapid growth in Twitter’s popularity as a platform for people to express their opinions and attitudes towards a great variety of topics. Most existing approaches to Twitter sentiment analysis can be categorised into machine learning [7, 11, 13] and lexicon-based approaches [2, 6, 8, 15].

Lexicon-based approaches use lexicons of words weighted with their sentiment orientations to determine the overall sentiment in texts. These approaches have been shown to be more applicable to Twitter data than machine learning approaches, since they do not require training on labelled data and therefore offer domain-independent sentiment detection [15]. Nonetheless, lexicon-based approaches are limited by the sentiment lexicon used [21]. Firstly, sentiment lexicons are composed of a generally static set of words that does not cover the wide variety of new terms constantly emerging on the social web. Secondly, words in the lexicons have fixed prior sentiment orientations, i.e., each term always carries the same sentiment orientation regardless of the context in which it is used.

To overcome the above limitations, several lexicon bootstrapping and adaptation methods have been previously proposed. However, these methods are either supervised [16], i.e., they require training from human-coded corpora, or they are based on studying the statistical, syntactic or linguistic relations between words in general textual corpora (e.g., the Web) [17, 19] or in static lexical knowledge sources (e.g., WordNet) [5], thereby ignoring the specific textual context in which the words appear. In many cases, however, the sentiment of a word is implicitly associated with the semantics of its context [3].

In this paper we propose an unsupervised approach for adapting sentiment lexicons based on the contextual semantics of their words in a tweet corpus. In particular, our approach studies the co-occurrences between words to capture their contexts in tweets and update their prior sentiment orientations and/or sentiment strengths in a given lexicon accordingly.

As a case study we apply our approach to Thelwall-Lexicon [15], which, to our knowledge, is the state-of-the-art sentiment lexicon for social data. We evaluate the adapted lexicons by performing lexicon-based polarity sentiment detection (positive vs. negative) on three Twitter datasets. Our results show that the adapted lexicons produce a significant improvement in sentiment detection accuracy and F-measure on two datasets, but give a slightly lower F-measure on the third.

In the rest of this paper, related work is discussed in Sect. 2, and our approach is presented in Sect. 3. Experiments and results are presented in Sect. 4. Discussion and future work are covered in Sect. 5. Finally, we conclude our work in Sect. 6.

2 Related Work

Existing approaches to bootstrapping and adapting sentiment lexicons can be categorised into dictionary-based and corpus-based approaches. The dictionary-based approach [5, 14] starts with a small set of general opinionated words (e.g., good, bad) and a lexical knowledge base (e.g., WordNet). The approach then expands this set by searching the knowledge base for words that have lexical or linguistic relations to the opinionated words in the initial set (e.g., synonyms, glosses, etc.).

Alternatively, the corpus-based approach measures the sentiment orientation of words automatically based on their association with other strongly opinionated words in a given corpus [14, 17, 19]. For example, Turney and Littman [17] used Pointwise Mutual Information (PMI) to measure the statistical correlation between a given word and a balanced set of 14 positive and negative paradigm words (e.g., good, nice, nasty, poor). Although this approach does not require large lexical input knowledge, it is very slow [21] because it relies on web search engines to retrieve the relative co-occurrences of words.

Following the aforementioned approaches, several lexicons such as MPQA [20] and SentiWordNet [1] have been induced and successfully used for sentiment analysis on conventional text (e.g., movie review data). On Twitter, however, these lexicons perform less well due to their limited coverage of Twitter-specific expressions, such as abbreviations and colloquial words (e.g., “looov”, “luv”, “gr8”), that are often found in tweets.

A few sentiment lexicons have recently been built to work specifically with social media data, such as Thelwall-Lexicon [16] and Nielsen-Lexicon [8], and these have proven to work effectively on Twitter data. Nevertheless, such lexicons are similar to traditional ones in the sense that they all offer fixed, context-insensitive word-sentiment orientations and strengths. Although a training algorithm has been proposed to update the sentiment of terms in Thelwall-Lexicon [16], it requires training on human-coded corpora, which are labour-intensive to obtain.

To address the above limitations, we have designed our lexicon-adaptation approach so that it (i) works in an unsupervised fashion, avoiding the need for labelled data, and (ii) exploits the contextual semantics of words, capturing their contextual information in tweets and updating their prior sentiment orientation and strength in a given sentiment lexicon accordingly.

3 A Contextual Semantic Approach to Lexicon Adaptation

The main principle behind our approach is that the sentiment of a term is not static, as found in general-purpose sentiment lexicons, but rather depends on the context in which the term is used, i.e., on its contextual semantics.Footnote 1 Therefore, our approach functions in two main steps, as shown in Fig. 1. First, given a tweet collection and a sentiment lexicon, the approach builds a contextual semantic representation for each unique term in the tweet collection and subsequently uses it to derive the term's contextual sentiment orientation and strength. The SentiCircle representation model is used to this end [10]. Second, a rule-based algorithm is applied to amend the prior sentiment of terms in the lexicon based on their corresponding contextual sentiment. Both steps are detailed in the following subsections.

Fig. 1. The systematic workflow of our proposed lexicon adaptation approach.

3.1 Capturing Contextual Semantics and Sentiment

The first step in our pipeline is to capture the words' contextual semantics and sentiment in tweets. To this end, we use our previously proposed semantic representation model, SentiCircle [10].

Following the distributional hypothesis that words that co-occur in similar contexts tend to have similar meaning [18], SentiCircle extracts the contextual semantics of a word from its co-occurrence patterns with other words in a given tweet collection. These patterns are then represented as a geometric circle, which is subsequently used to compute the contextual sentiment of the word by applying simple trigonometric identities to it. In particular, for each unique term \(m\) in a tweet collection, we build a two-dimensional geometric circle, where the term \(m\) is situated at the centre of the circle, and each point around it represents a context term \(c_i\) (i.e., a term that occurs with \(m\) in the same context). The position of \(c_i\), as illustrated in Fig. 2, is defined by its Cartesian coordinates \((x_i, y_i)\) as:

$$\begin{aligned} x_i = r_i \cos (\theta _i \pi ) \qquad \qquad y_i = r_i \sin (\theta _i \pi ) \end{aligned}$$

where \(\theta _i\) is the polar angle of the context term \(c_i\), whose value equals the prior sentiment of \(c_i\) in the sentiment lexicon before adaptation, and \(r_i\) is the radius of \(c_i\), whose value represents the degree of correlation (tdoc) between \(c_i\) and \(m\), computed as:

$$\begin{aligned} r_i = tdoc(m,c_i) = f(c_i, m) \times \log {\frac{N}{N_{c_i}}} \end{aligned}$$

where \(f(c_i, m)\) is the number of times \(c_i\) occurs with \(m\) in tweets, \(N\) is the total number of terms, and \(N_{c_i}\) is the total number of terms that occur with \(c_i\). Note that all terms' radii in the SentiCircle are normalised and that all angles are expressed in radians. The trigonometric properties of the SentiCircle allow us to encode the contextual semantics of a term as a sentiment orientation and a sentiment strength. The Y-axis defines the sentiment of the term, i.e., a positive \(y\) value denotes a positive sentiment and vice versa. The X-axis defines the sentiment strength of the term: the smaller the \(x\) value, the stronger the sentiment.Footnote 2 This, in turn, divides the circle into four sentiment quadrants. Terms in the two upper quadrants have a positive sentiment (\(\sin \theta > 0\)), with the upper-left quadrant representing stronger positive sentiment, since its angle values are larger than those in the upper-right quadrant. Similarly, terms in the two lower quadrants have negative sentiment values (\(\sin \theta < 0\)). Moreover, a small region called the “Neutral Region” can be defined. This region, as shown in Fig. 2, is located very close to the X-axis in the “Positive” and “Negative” quadrants only; terms lying in this region have very weak sentiment (i.e., \(|\theta | \thickapprox 0\)). A minimal sketch of this construction is given below.
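To make the construction concrete, the following minimal Python sketch builds the SentiCircle of a term from a collection of tokenised tweets. It assumes tweet-level co-occurrence and that prior sentiment values are scaled to \([-1, 1]\) before being used as angles; all names are illustrative and do not correspond to a reference implementation.

```python
import math
from collections import defaultdict

def senticircle(term, tweets, prior_sentiment, max_strength=5.0):
    """Build the SentiCircle of `term`: a dict mapping each context term
    c_i to its (x_i, y_i) position. `tweets` is a list of token lists and
    `prior_sentiment` maps a term to its prior value in the lexicon."""
    cooc = defaultdict(int)        # f(c_i, m): co-occurrence counts with `term`
    neighbours = defaultdict(set)  # terms each context term occurs with
    vocab = set()
    for tokens in tweets:
        vocab.update(tokens)
        for t in tokens:
            neighbours[t].update(tokens)
        if term in tokens:
            for c in tokens:
                if c != term:
                    cooc[c] += 1

    N = len(vocab)  # total number of terms in the collection
    circle = {}
    for c, f in cooc.items():
        n_c = max(len(neighbours[c] - {c}), 1)               # N_{c_i}
        r = f * math.log(N / n_c)                            # tdoc(m, c_i)
        theta = prior_sentiment.get(c, 0.0) / max_strength   # prior scaled to [-1, 1]
        circle[c] = (r * math.cos(theta * math.pi),          # x_i
                     r * math.sin(theta * math.pi))          # y_i

    # Normalise all radii (distances from the origin) to [0, 1]
    max_r = max((math.hypot(x, y) for x, y in circle.values()), default=1.0) or 1.0
    return {c: (x / max_r, y / max_r) for c, (x, y) in circle.items()}
```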

Fig. 2. SentiCircle of a term \(m\). The neutral region is shaded in blue (Color figure online).

Calculating Contextual Sentiment. In summary, the SentiCircle of a term \(m\) is composed of the \((x, y)\) Cartesian coordinates of all the context terms of \(m\). An effective way to compute the overall sentiment of \(m\) is to calculate the geometric median of all the points in its SentiCircle. Formally, for a given set of \(n\) points \((p_1, p_2,..., p_n)\) in a SentiCircle \(\varOmega \), the 2D geometric median \(g\) is defined as: \(g = \arg \min _{g \in \mathbb {R}^2} \sum _{i=1}^n \Vert p_i - g \Vert _2\). We call the geometric median \(g\) the SentiMedian, as its position in the SentiCircle determines the final contextual sentiment orientation and strength of \(m\).
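Since the geometric median has no closed-form solution, it is usually approximated iteratively. The sketch below uses Weiszfeld's algorithm, which is one standard choice; the definition above only requires the arg-min, so any other solver could be substituted.

```python
import math

def sentimedian(points, iterations=100, eps=1e-6):
    """Approximate the 2D geometric median (SentiMedian) of the points
    of a SentiCircle using Weiszfeld's iterative algorithm."""
    gx = sum(x for x, _ in points) / len(points)  # start from the centroid
    gy = sum(y for _, y in points) / len(points)
    for _ in range(iterations):
        num_x = num_y = denom = 0.0
        for x, y in points:
            d = math.hypot(x - gx, y - gy)
            if d < eps:               # estimate coincides with a data point
                return (x, y)
            num_x += x / d
            num_y += y / d
            denom += 1.0 / d
        nx, ny = num_x / denom, num_y / denom
        if math.hypot(nx - gx, ny - gy) < eps:  # converged
            return (nx, ny)
        gx, gy = nx, ny
    return (gx, gy)
```

The sign of the SentiMedian's \(y\) coordinate then gives the contextual sentiment orientation of \(m\), and its position within the quadrants (or the neutral region) gives the strength.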

Note that the boundaries of the neutral region can be computed by measuring the density distribution of terms in the SentiCircle along the Y-axis. In this paper we use similar boundaries to the ones used in [10] since we use the same evaluation datasets.

3.2 Lexicon Adaptation

The second step in our approach is to update the sentiment lexicon with the terms’ contextual sentiment information extracted in the previous step. As mentioned earlier, in this work we use Thelwall-Lexicon [16] as a case study. Therefore, in this section we first describe this lexicon and its properties, and then introduce our proposed adaptation method.

Thelwall-Lexicon consists of 2546 terms coupled with integer values between \({-}5\) (very negative) and \({+}5\) (very positive). Based on the terms' prior sentiment orientations and strengths (SOS), we group them into three subsets: 1919 negative terms (SOS \(\in [{-}5,{-}2]\)), 398 positive terms (SOS \(\in [2,5]\)) and 229 neutral terms (SOS \(\in \{{-}1,1\}\)).

The adaptation method uses a set of antecedent-consequent rules that decide how the prior sentiment of the terms in Thelwall-Lexicon should be updated according to the positions of their SentiMedians (i.e., their contextual sentiment). In particular, for a term \(m\), the method checks (i) its prior SOS value in Thelwall-Lexicon and (ii) the SentiCircle quadrant in which the SentiMedian of \(m\) resides. The method then chooses the best-matching rule to update the term's prior sentiment and/or strength.

Table 1 shows the complete list of rules in the proposed method. As noted, these rules are divided into updating rules, i.e., rules for updating the existing terms in Thelwall-Lexicon, and expanding rules, i.e., rules for expanding the lexicon with new terms. The updating rules are further divided into rules that deal with terms that have similar prior and contextual sentiment orientations (i.e., both positive or negative), and rules that deal with terms that have different prior and contextual sentiment orientations (i.e., negative prior, positive contextual sentiment and vice versa).

Although they look complicated, the notion behind the proposed rules is rather simple: Check how strong the contextual sentiment is and how weak the prior sentiment is \(\rightarrow \) update the sentiment orientation and strength accordingly. The strength of the contextual sentiment can be determined based on the sentiment quadrant of the SentiMedian of \(m\), i.e., the contextual sentiment is strong if the SentiMedian resides in the “Very Positive” or “Very Negative” quadrants (See Fig. 2). On the other hand, the prior sentiment of \(m\) (i.e., \(prior_m\)) in Thelwall-Lexicon is weak if \(|prior_m|\leqslant 3\) and strong otherwise.

Table 1. Adaptation rules for Thelwall-Lexicon, where prior: prior sentiment value, StrongQuadrant: very negative/positive quadrant in the SentiCircle, Add: add the term to Thelwall-Lexicon.

For example, the word “revolution” in Thelwall-Lexicon has a weak negative sentiment (\(prior={-}2\)) but a neutral contextual sentiment, since its SentiMedian resides in the neutral region (\(SentiMedian \in NeutralRegion\)). Therefore, rule number 10 is applied and the term's prior sentiment in Thelwall-Lexicon is updated to neutral (\(|prior|=1\)). In another example, the words “Obama” and “Independence” are not covered by Thelwall-Lexicon and therefore have no prior sentiment. However, their SentiMedians reside in the “Positive” quadrant of their SentiCircles, so rule number 12 is applied and both terms are assigned a positive sentiment strength of 3 and subsequently added to the lexicon. These two rules are sketched in the code below.
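The following sketch encodes only the two rules illustrated in the examples above; the full rule set in Table 1 covers the remaining combinations of prior and contextual sentiment. The neutral-region boundary `neutral_theta`, the symmetric handling of negative unseen terms, and the application of rule 10 to weak positive priors are illustrative assumptions.

```python
import math

def adapt_term(term, lexicon, sentimedian, neutral_theta=0.05):
    """Update (or expand) the lexicon entry of `term` given its SentiMedian.
    Only a subset of the Table 1 rules is shown."""
    x, y = sentimedian
    theta = math.atan2(y, x)                  # polar angle of the SentiMedian
    in_neutral = abs(theta) <= neutral_theta  # very close to the positive X-axis
    prior = lexicon.get(term)

    if prior is None:
        # Expanding rule (rule 12): an unseen term whose SentiMedian lies in
        # the "Positive" quadrant is added with strength 3 (a strength of -3
        # is assumed here, symmetrically, for a negative SentiMedian).
        if not in_neutral:
            lexicon[term] = 3 if y > 0 else -3
    elif in_neutral and abs(prior) <= 3:
        # Updating rule (rule 10): a weak prior (|prior| <= 3) with neutral
        # contextual sentiment is reset to neutral (|prior| = 1); the positive
        # counterpart may be a separate rule in Table 1.
        lexicon[term] = 1 if prior > 0 else -1
    # All other prior/contextual combinations follow the remaining rules
    # in Table 1 and are omitted from this sketch.
    return lexicon
```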

4 Evaluation Results

We evaluate our approach on Thelwall-Lexicon using three adaptation settings: (i) the update setting, where we update the prior sentiment of existing terms in the lexicon, (ii) the expand setting, where we expand Thelwall-Lexicon with new opinionated terms, and (iii) the update+expand setting, where we apply both settings together. To this end, we use three Twitter datasets: OMD, HCR and STS-Gold. The numbers of positive and negative tweets within these datasets are summarised in Table 2 and detailed in the references given in the table. To evaluate the adapted lexicons under the above settings, we perform binary polarity classification on the three datasets using the sentiment detection method proposed with Thelwall-Lexicon [15]. According to this method, a tweet is considered positive if its aggregated positive sentiment strength is 1.5 times higher than the aggregated negative one, and negative vice versa; a minimal sketch of this decision rule is given below.
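The sketch below aggregates word strengths as a simple sum over the (adapted) lexicon; the full method in [15] uses a more elaborate aggregation (e.g., handling negation and booster words), so this is an illustration of the 1.5-ratio decision rule rather than a re-implementation.

```python
def classify_tweet(tokens, lexicon, ratio=1.5):
    """Label a tokenised tweet as positive or negative by comparing its
    aggregated positive and negative sentiment strengths."""
    scores = [lexicon.get(t, 0) for t in tokens]
    pos = sum(s for s in scores if s > 0)    # aggregated positive strength
    neg = -sum(s for s in scores if s < 0)   # aggregated negative strength
    if pos > ratio * neg:
        return "positive"
    if neg > ratio * pos:
        return "negative"
    return "undecided"                       # neither side dominates
```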

Table 2. Twitter datasets used for the evaluation

Applying our adaptation approach to Thelwall-Lexicon results in substantial changes to the lexicon. Table 3 shows the percentage of words in the three datasets that were found in Thelwall-Lexicon and how their sentiment changed after adaptation. On average, 9.61 % of the words in our datasets were found in the lexicon. Updating the lexicon with the contextual sentiment of words resulted in 33.82 % of these words flipping their sentiment orientation and 62.94 % changing their sentiment strength while keeping their prior sentiment orientation; only 3.24 % of the words in Thelwall-Lexicon remained untouched. Moreover, 21.37 % of words previously unseen in the lexicon were assigned a contextual sentiment by our approach and subsequently added to Thelwall-Lexicon.

Table 3. Average percentage of words in the three datasets that had their sentiment orientation or strength updated by our adaptation approach

Table 4 shows the average results of binary sentiment classification performed on our datasets using (i) the original Thelwall-Lexicon (Original), (ii) Thelwall-Lexicon induced under the update setting (Updated), and (iii) Thelwall-Lexicon induced under the update+expand setting.Footnote 3 The table reports the results in accuracy and three sets of precision (P), recall (R), and F-measure (F1), one for positive sentiment detection, one for negative, and one for the average of the two.

From the results in Table 4, we notice that the best classification performance in accuracy and F1 is obtained on the STS-Gold dataset regardless of the lexicon used. We also observe that the negative sentiment detection performance is always higher than the positive detection performance for all datasets and lexicons.

Table 4. Cross comparison results of original and the adapted lexicons

As for the different lexicons, we notice that on OMD and STS-Gold the adapted lexicons outperform the original lexicon in both accuracy and F-measure. For example, on OMD the adapted lexicon shows an average improvement of 2.46 % and 4.51 % in accuracy and F1 respectively over the original lexicon. On STS-Gold the performance improvement is less pronounced than on OMD, but we still observe a 1 % improvement in accuracy and F1 compared to using the original lexicon. As for the HCR dataset, the adapted lexicon gives on average similar accuracy, but a 1.36 % lower F-measure. This performance drop can be attributed to the poor detection performance on positive tweets. Specifically, we notice from Table 4 a major loss in recall on positive tweet detection when using both adapted lexicons. One possible reason is the sentiment class distribution in our datasets; in particular, HCR is the most imbalanced of the three datasets. Moreover, by examining the numbers in Table 3, we can see that HCR presents the lowest number of new opinionated words among the three datasets (i.e., 10.61 % lower than the average), which could be another reason for the lack of performance improvement.

5 Discussion and Future Work

We demonstrated the value of using the contextual semantics of words for adapting sentiment lexicons from tweets. Specifically, we used Thelwall-Lexicon as a case study and evaluated its adaptation to three datasets of different sizes. Although the potential is palpable, our results were not conclusive, as a performance drop was observed on the HCR dataset when using our adapted lexicons. Our initial observations suggest that the quality of our approach might depend on the sentiment class distribution in the dataset; a deeper investigation in this direction is therefore required.

We used the SentiCircle approach to extract the contextual semantics of words from tweets. In future work we will try other contextual semantic approaches and study how the semantic extraction quality affects the adaptation performance.

Our adaptation rules in this paper are specific to Thelwall-Lexicon. These rules, however, can be generalised to other lexicons, which constitutes another future direction of this work.

All words that have a contextual sentiment were used for adaptation. Nevertheless, the results suggest that the prior sentiments in the lexicon might need to remain unchanged for words with specific syntactic or linguistic properties in tweets. Part of our future work is to detect and filter out those words that are more likely to have a stable sentiment regardless of the contexts in which they appear.

6 Conclusions

In this paper we proposed an unsupervised approach for sentiment lexicon adaptation from Twitter data. Our approach extracts the contextual semantics of words and uses them to update the words’ prior sentiment orientations and/or strength in a given sentiment lexicon. The evaluation was done on Thelwall-Lexicon using three Twitter datasets. Results showed that lexicons adapted by our approach improved the sentiment classification performance in both accuracy and F1 in two out of three datasets.