Unsupervised Fine-Grained Sentiment Analysis System Using Lexicons and Concepts
Sentiment is mainly analyzed at the document, sentence, or aspect level. Document- and sentence-level analysis can be too coarse, since polar opinions can co-occur even within a single sentence. In aspect-level sentiment analysis, opinion-bearing terms often convey polar sentiment in different contexts. Consider the following laptop review: “the big plus was a large screen but having a large battery made me change my mind,” where polar opinions co-occur in the same sentence and the opinion term that describes both opinion targets (“large”) encodes polar sentiments: positive for the screen and negative for the battery. To resolve these differences, our approach identifies opinions with respect to specific opinion targets while taking the context into account. Moreover, since obtaining an annotated training set for each context is difficult, our approach uses unlabeled data.
Keywords: Fine-grained sentiment analysis · Opinion mining · Lexicon
The surging amount of subjective information across the Web, in forms such as reviews, blogs, and bulletin boards, can be useful for decision-making and various other applications. Since manual assessment is not feasible, not only because of the sheer volume but also because some opinionated texts are very long, automatic sentiment analysis becomes extremely useful. Traditional sentiment analysis approaches aim to extract sentiment at the document level [1, 2, 3]. However, consider the following excerpt:
“(1) I bought an iPhone a few days ago. (2) It was such a nice phone. (3) The touch screen was really cool. (4) The voice quality was clear too. (5) Although the battery life was not long…” 
Notice that sentence (3) conveys a positive opinion of the touch screen, whereas sentence (5) describes the battery life negatively. Sentence (2) conveys a positive general opinion of the product.
Researchers have increasingly recognized that even a document with a negative overall classification can contain positive indicators. Consequently, there is growing interest in applying opinion-mining techniques at a more granular level, specifically the phrase or sentence level [5, 6, 7]. However, such approaches are still limited when polar opinions co-occur in the same sentence. For example, in “the big plus was a large screen but still the price was too high,” polar opinions are conveyed for two different opinion targets (screen, price).
Since there can be several opinions in a text, even within the same sentence, we would like to extract each opinion and associate it with the corresponding opinion target. The suggested fine-grained system is designed to identify the sentiment of opinion targets, and can therefore identify multiple, possibly polar, opinions for each occurrence of an opinion target in the text. Opinion targets are entities and their attributes, which are also referred to as aspects.
Labeled data is scarce, and for some aspects not available at all. For example, TripAdvisor provides user ratings for only seven aspects, in addition to the overall rating. Hence, some methods utilize the overall rating of a review, assuming it is generated from a weighted combination of the ratings over all aspects [9, 10]. Since not all websites provide an overall rating in addition to the content, our method uses unlabeled data without any rating. Instead, our system uses conjunction patterns to infer the polarity of adjectives that co-occur with known adjectives, with respect to each opinion target.
Adjectives are words that describe or modify other elements in a sentence and are frequently used to directly convey facts and opinions about the nouns they modify. As such, they have been found useful for sentiment identification [11, 12, 13] and are the backbone of our system; therefore, this paper elaborates mainly on disambiguating the polarity of adjectives across different opinion targets, i.e., aspects. This process is iterative and differs from previous work in that it produces a polarity score for each adjective based on previously discovered adjectives; this score describes how positive (or negative) an adjective is and is useful for sentiment summarization.
Since sentiment is not always conveyed by adjectives, the system is also able to identify concepts by using SenticNet 3, and to further disambiguate their polarity in the relevant context, i.e., for the relevant opinion target, by using the adjective lexicons. For example, our system can successfully predict the sentiment of the excerpt “the pool looks large” and associate it with the relevant aspect, pool, although the adjective large does not modify it.
To summarize, our method has the following properties: (1) it can be trained on unlabeled data, (2) it can determine an adjective’s polarity with respect to the target aspect, and (3) it follows a cascading design that seamlessly supports adding more modules.
The system starts by discovering important aspects in the text. First, it identifies repeating nouns to which opinion-bearing adjectives are often related. Next, these nouns are treated as aspects and clustered into topical aspects, i.e., each topical aspect is represented by a set of aspects. For example, the sentiment of the topical aspect room is calculated by averaging the sentiment of the aspects room, bed, bathroom, and view. This is done in a similar way to previous work, where a set of seed words is used to discover additional ones; however, we use only nouns.
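The averaging step for a topical aspect can be sketched as follows; the function name and the example scores are illustrative, not from the paper:

```python
def topical_sentiment(aspect_scores, members):
    """A topical aspect's score is the average of its member aspects'
    scores (e.g., room = mean of room, bed, bathroom, and view).
    Member aspects that were not scored in the text are skipped."""
    vals = [aspect_scores[m] for m in members if m in aspect_scores]
    return sum(vals) / len(vals) if vals else None
```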
A seed lexicon (SL) - a set of adjectives paired with polarity scores reflecting how positive or negative each adjective is (1 for positive and 0 for negative). The polarity paired with each adjective in this lexicon must not depend on the opinion target, i.e., the polarity of these adjectives is set by a priori convention. For example, the polarity of the adjectives excellent and amazing should always be positive. Two classes of adjectives must be excluded from the seed lexicon: ambiguous adjectives (such as great, which may mean very good or big) and adjectives that express polar sentiment in different contexts (such as big, which can be negative when describing a device but positive when describing a meal).
Reviews (R) - a set of opinionated texts, such as reviews, relevant to the domain of the target aspects, i.e., in which the target aspects are likely to be discussed. For example, TripAdvisor.com is an adequate choice for aspects in the tourism domain.
Conjunction patterns (C) - a set of conjunctions to be matched between pairs of adjectives that co-occur in the same sentence, together with their polarity property, i.e., linear or shifter. For example, the conjunction and has a linear polarity property, whereas the conjunction but indicates a shift in polarity.
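The seed lexicon and the conjunction patterns can be sketched with toy values; the adjectives, conjunctions, and names below are illustrative examples, not the paper’s actual resources:

```python
# Illustrative seed lexicon: target-independent polarities (1 positive, 0 negative).
# Ambiguous ("great") and context-dependent ("big", "large") adjectives are excluded.
SEED_LEXICON = {"excellent": 1.0, "amazing": 1.0, "terrible": 0.0, "awful": 0.0}

# Conjunction patterns and their polarity property.
CONJUNCTIONS = {"and": "linear", "but": "shifter"}

def propagate(known_polarity, conjunction):
    """Infer the polarity of an unknown adjective from a known co-occurring
    one, given the connecting conjunction: linear preserves the polarity,
    shifter flips it."""
    if CONJUNCTIONS[conjunction] == "linear":
        return known_polarity
    return 1.0 - known_polarity
```

For instance, `propagate(SEED_LEXICON["excellent"], "but")` yields 0.0, marking the unknown adjective as negative for this instance.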
The main output of the learning phase is an extended set of aspect-specific lexicons, which include the seed adjectives as well as newly discovered adjectives with their sentiment scores.
The process of creating the aspect-dependent lexicon is performed for each aspect separately. First, the extended lexicon of aspect A (ELA) is initialized with the seed lexicon (SL). Then, the following steps are repeated n times (n is a configurable parameter). We identify all adjectives in each review ri∈R. Then, for each identified adjective a and each discovered aspect A, we check whether a modifies aspect A. Next, for each pair of adjectives a1 and a2 that both modify aspect A, we check whether this pair is connected by a conjunction pattern. If a1 and a2 are connected by a conjunction c, and one of the two adjectives (say a1, without loss of generality) is in the current extended lexicon ELA while the other adjective a2 is not, we compute the polarity score (pol) of a2 according to the conjunction pattern c and adjective a1; for example, if pol(a1) = 0.9 and c is a shifter, then pol(a2) = 1 − pol(a1) = 0.1. At the end of each iteration, the polarity score of each new adjective a2 is computed as the average of the polarity scores computed for its individual instances. Finally, a2 is added to the extended lexicon of A (ELA) with its corresponding polarity score.
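The iterative expansion described above can be sketched as follows, under simplifying assumptions: the adjective pairs modifying the aspect have already been extracted, and each is represented as an (adjective, conjunction, adjective) triple. The function and variable names are illustrative:

```python
from collections import defaultdict

def expand_lexicon(seed, pairs, conjunctions, n_iterations=3):
    """Expand an aspect-specific lexicon from a seed lexicon.

    seed: dict adjective -> polarity in [0, 1] (1 positive, 0 negative)
    pairs: list of (a1, conj, a2) triples, where both adjectives modify
           the target aspect and are linked by the conjunction conj
    conjunctions: dict conj -> "linear" or "shifter"
    """
    lexicon = dict(seed)
    for _ in range(n_iterations):
        scores = defaultdict(list)  # candidate adjective -> instance scores
        for a1, conj, a2 in pairs:
            # orient the pair so the known adjective comes first
            if a1 not in lexicon and a2 in lexicon:
                a1, a2 = a2, a1
            if a1 in lexicon and a2 not in lexicon:
                pol = lexicon[a1]
                if conjunctions[conj] == "shifter":
                    pol = 1.0 - pol  # e.g., pol(a1) = 0.9 -> pol(a2) = 0.1
                scores[a2].append(pol)
        # end of iteration: average per-instance scores, add new adjectives
        for adj, vals in scores.items():
            lexicon[adj] = sum(vals) / len(vals)
    return lexicon
```

Running several iterations lets newly discovered adjectives in turn propagate polarity to further adjectives: with seed `{"excellent": 1.0}` and pairs `("excellent", "and", "spacious")` and `("spacious", "but", "noisy")`, the first iteration adds spacious (1.0) and the second adds noisy (0.0).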
At this point, the polarity of the adjectives modifying the target aspect can be used to calculate its sentiment score for each of its instances, i.e., each time it appears in the text. This lexical approach obtains a relatively high precision rate. However, in some cases the target aspect has no modifying adjectives, or the modifying adjective is not included in the aspect’s lexicon. To increase recall, we use SenticNet 3, a semantic resource containing 14,000 common-sense-knowledge concepts labeled with polarity scores, in a cascading approach. If the lexical approach returns no answer for aspect A, which nevertheless appears in the text, we retrieve concepts using SenticNet 3. If an adjective a1 appears in one of the concepts and belongs to A’s lexicon, the aspect’s sentiment score is determined by the score of a1 in the lexicon. Otherwise, the sentiment of that concept is determined by SenticNet 3. Note that whenever the polarity of an adjective is computed or used, negation, if recognized by a dependency parser, is taken into account.
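The fallback step can be sketched as follows. This is a simplified stand-in: `senticnet_scores` is an illustrative dictionary of concept polarities, not the real SenticNet 3 API, and concepts are represented as plain strings:

```python
def concept_fallback(concepts, aspect_lexicon, senticnet_scores):
    """Resolve an aspect instance's sentiment from retrieved concepts when
    no modifying adjective was found. Prefer the aspect-specific lexicon
    score of an adjective inside a concept; otherwise fall back to the
    concept's own (SenticNet-style) polarity score."""
    for concept in concepts:
        for word in concept.split():
            if word in aspect_lexicon:       # adjective known for this aspect
                return aspect_lexicon[word]
        if concept in senticnet_scores:      # fall back to the concept score
            return senticnet_scores[concept]
    return None  # cascade continues, or no answer for this instance
```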
The final score of an aspect is the average of all of its instances’ scores in the text. The system can output an overall sentiment for a given sentence, based on averaging the calculated sentiment for each aspect in the sentence.
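The two averaging steps can be sketched together; the function name and the example scores are illustrative:

```python
def sentence_sentiment(aspect_scores):
    """aspect_scores: dict aspect -> list of per-instance scores in the
    sentence. Returns each aspect's final score (the average over its
    instances) and the overall sentence score (the average over aspects)."""
    finals = {a: sum(v) / len(v) for a, v in aspect_scores.items()}
    overall = sum(finals.values()) / len(finals)
    return finals, overall
```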
The presented system uses an unlabeled set of opinionated texts to construct sentiment lexicons of adjectives. Each adjective is given a score computed for a specific aspect, which can be used in various ways since adjectives are frequently used to convey sentiment. Methods that use the overall score may be too coarse. Consider the following review taken from TripAdvisor.com, rated as ‘terrible’ (1 of 5 points): “Nice kitchenette, good location next to Museum station. Aircon unit is standalone and controls fully adjustable”. The overall rating is clearly not in accordance with the text: a single overall score cannot capture divergent opinions. The cascading approach makes the system capable of incorporating additional methods, with high-precision methods employed first. Thus, it is configurable, and users can achieve high precision rates at the expense of lower recall, according to their needs.
- 1. Dave, K., Lawrence, S., Pennock, D.: Mining the peanut gallery: opinion extraction and semantic classification of product reviews. In: Proceedings of the 12th International Conference on World Wide Web, Budapest, Hungary, pp. 519–528 (2003)
- 2. Turney, P.D.: Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL’02), pp. 417–424 (2002)
- 3. Yang, K., Yu, N., Zhang, H.: WIDIT in TREC 2007 blog track: combining lexicon-based methods to detect opinionated blogs. In: Proceedings of TREC 2007 (2007)
- 4. Liu, B.: Sentiment analysis and subjectivity. In: Handbook of Natural Language Processing, 2nd edn. (2010)
- 5. Xu, R., Wong, K.F., Lu, Q., Xia, Y., Li, W.: Learning knowledge from relevant webpages for opinion analysis. In: Proceedings of the 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Sydney, Australia, pp. 307–313 (2008)
- 6. Agarwal, A., Biadsy, F., McKeown, K.: Contextual phrase-level polarity analysis using lexical affect scoring and syntactic n-grams. In: Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pp. 24–32 (2009)
- 7. Wilson, T., Wiebe, J., Hoffmann, P.: Recognizing contextual polarity in phrase-level sentiment analysis. In: Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pp. 347–354. Association for Computational Linguistics (2005)
- 9. Wang, H., Lu, Y., Zhai, C.: Latent aspect rating analysis on review text data: a rating regression approach. In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 783–792. ACM (2010)
- 10. Wang, H., Lu, Y., Zhai, C.: Latent aspect rating analysis without aspect keyword supervision. In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 618–626. ACM (2011)
- 11. Kamps, J., Marx, M., Mokken, R.J., De Rijke, M.: Using WordNet to measure semantic orientation of adjectives. In: Proceedings of LREC-04, 4th International Conference on Language Resources and Evaluation, Lisbon, Portugal, vol. 4, pp. 1115–1118 (2004)
- 12. Blair-Goldensohn, S., Hannan, K., McDonald, R., Neylon, T., Reis, G.A., Reynar, J.: Building a sentiment summarizer for local service reviews. In: WWW Workshop on NLP in the Information Explosion Era (NLPIX). ACM, New York (2008)
- 13. Qiu, G., Liu, B., Bu, J., Chen, C.: Expanding domain sentiment lexicon through double propagation. In: IJCAI, vol. 9, pp. 1199–1204 (2009)
- 14. Cambria, E., Olsher, D., Rajagopal, D.: SenticNet 3: a common and common-sense knowledge base for cognition-driven sentiment analysis. In: Twenty-Eighth AAAI Conference on Artificial Intelligence (2014)
- 15. Poria, S., Gelbukh, A., Cambria, E., Yang, P., Hussain, A., Durrani, T.: Merging SenticNet and WordNet-Affect emotion lists for sentiment analysis. In: 2012 IEEE 11th International Conference on Signal Processing (ICSP), vol. 2, pp. 1251–1255. IEEE (2012)
- 17. Poria, S., Cambria, E., Winterstein, G., Huang, G.-B.: Sentic patterns: dependency-based rules for concept-level sentiment analysis. Knowl.-Based Syst. (2014). doi:10.1016/j.knosys.2014.05.005
- 18. Poria, S., Cambria, E., Ku, L.-W., Gui, C., Gelbukh, A.: A rule-based approach to aspect extraction from product reviews. In: COLING, Dublin (2014)