The good, the bad and the implicit: a comprehensive approach to annotating explicit and implicit sentiment

  • Original Paper
  • Language Resources and Evaluation

Abstract

We present a fine-grained scheme for the annotation of polar sentiment in text that accounts for explicit sentiment (so-called private states) as well as implicit expressions of sentiment (polar facts). Polar expressions are annotated below sentence level and classified according to their subjectivity status. Additionally, they are linked to one or more targets with a specific polar orientation and intensity. Other components of the annotation scheme include source attribution and the identification and classification of expressions that modify polarity. Previous research has paid little attention to implicit sentiment, which represents a substantial proportion of the polar expressions encountered in our data. An English and a Dutch corpus of financial newswire text, each consisting of over 45,000 words, were annotated using our scheme. A subset of these corpora was used to conduct an inter-annotator agreement study, which demonstrated that the proposed scheme can be used to reliably annotate explicit and implicit sentiment in real-world textual data, making the created corpora a useful resource for sentiment analysis.
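
As a rough illustration (ours, not the authors'), the sketch below models the scheme's main components as a Python data structure: a polar expression span with a subjectivity status, linked to one or more targets that each carry a polar orientation and an intensity, plus optional source and modifier spans. All class and field names are hypothetical.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Span:
        start: int  # character offset where the span begins
        end: int    # character offset where the span ends
        text: str   # covered text

    @dataclass
    class TargetLink:
        target: Span
        polarity: str   # "positive", "negative", "unknown" or "other"
        intensity: int  # strength of the sentiment towards this target

    @dataclass
    class PolarExpression:
        span: Span
        subjectivity: str  # "subjective" (private state) or "objective" (polar fact)
        targets: List[TargetLink] = field(default_factory=list)
        source: Optional[Span] = None  # sentiment holder, if attributed
        modifiers: List[Span] = field(default_factory=list)  # e.g. intensifiers, diminishers

    # An implicit (polar-fact) expression with one negative target, for the
    # invented sentence "Acme Corp missed its third-quarter forecast."
    expr = PolarExpression(
        span=Span(10, 43, "missed its third-quarter forecast"),
        subjectivity="objective",
        targets=[TargetLink(Span(0, 9, "Acme Corp"), polarity="negative", intensity=2)],
    )
    print(expr.targets[0].polarity)  # -> negative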


Notes

  1. http://mpqa.cs.pitt.edu/corpora/mpqa_corpus/

  2. Wilson (2008) states that an utterance “may be a single phrase or expression, but whenever possible it is a sentence or proposition with references to the source and target of the subjectivity included in the span that is marked”.

  3. When applying our fine-grained annotation scheme to text, we make use of the brat annotation tool (see Sect. 4.2). In brat, the polarity of the sentiment expressed by a polar expression about a certain target is denoted by the colour of the arrow pointing from the polar expression to that target (viz. green for positive, red for negative, purple for unknown and orange for other) and a symbol (viz. +, −, ? and ~); a sketch of brat's standoff encoding is given after these notes.

  4. Other sources use the terms objective polar utterances (Wilson 2008) or evaluative factuals (Nigam and Hurst 2004) for polar facts.

  5. The same applies to the other elements covered by our annotation scheme, viz. modifiers, sources, source expressions, targets and causes.

  6. These modifiers are also sometimes referred to as amplifiers (Quirk et al. 1985).

  7. These modifiers are also sometimes referred to as downplayers or downtoners (Quirk et al. 1985).

  8. In brat, the intensity of the positive or negative sentiment is denoted by the number of plus or minus signs accompanying the arrow pointing from the polar expression to the target at hand.

  9. While Girju (2003) uses the term resultative causative for verbal constructions, we also identify other causative constructions.

  10. The resulting annotations will be made available for research purposes.

  11. Note that the annotators of the Dutch corpus (three Dutch native speakers) are not all the same persons who annotated the English corpus (two Master's students in English and one English native speaker). Only one annotator participated in the inter-annotator agreement study for both English and Dutch.
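
As referenced in note 3, brat stores annotations in a plain-text standoff format alongside the source text. The lines below are an illustrative sketch only, for the invented sentence "Acme Corp missed its third-quarter forecast."; the entity, relation and attribute labels (PolarExpression, Target, Negative, Intensity) are hypothetical stand-ins, not the labels of the authors' actual brat configuration.

    T1	PolarExpression 10 43	missed its third-quarter forecast
    T2	Target 0 9	Acme Corp
    R1	Negative Arg1:T1 Arg2:T2
    A1	Intensity T1 Strong

Each T line declares an entity with its type, character offsets and covered text; the R line is the relation that brat renders as the arrow from the polar expression to its target (note 3); and the A line attaches an attribute such as intensity (note 8).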

References

  • Abdul-Mageed, M., & Diab, M. T. (2011). Subjectivity and sentiment annotation of modern standard Arabic newswire. In Proceedings of the 5th linguistic annotation workshop (LAW V) (pp. 110–118). Portland, Oregon, USA.

  • Artstein, R., & Poesio, M. (2008). Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4), 555–596.

  • Asher, N., Benamara, F., & Mathieu, Y. Y. (2008). Categorizing opinion in discourse. In Proceedings of the 18th European conference on artificial intelligence (ECAI 2008) (pp. 835–836). Patras, Greece.

  • Balahur, A., Hermida, J. M., & Montoyo, A. (2011a). Detecting emotions in social affective situations using the emotinet knowledge base. In Proceedings of the 8th international conference on advances in neural networks, Part III (ISNN 2011), volume 6677 of lecture notes in computer science (pp. 611–620). Springer.

  • Balahur, A., Hermida, J. M., Montoyo, A., & Muñoz, R. (2011b). EmotiNet: A knowledge base for emotion detection in text built on the appraisal theories. In Proceedings of the 19th conference on applications of natural language to information systems (NLDB 2011), volume 6716 of lecture notes in computer science (pp. 27–39). Springer.

  • Banfield, A. (1982). Unspeakable sentences. Boston: Routledge and Kegan Paul.

  • Bermingham, A., & Smeaton, A. F. (2009). A study of inter-annotator agreement for opinion retrieval. In Proceedings of the 32nd annual international ACM SIGIR conference on research and development in information retrieval (SIGIR 2009) (pp. 784–785). Boston, Massachusetts, USA.

  • Bethard, S., Yu, H., Thornton, A., Hatzivassiloglou, V., & Jurafsky, D. (2004). Automatic extraction of opinion propositions and their holders. In Proceedings of the AAAI spring symposium on exploring attitude and affect in text: Theories and applications (pp. 20–27). Palo Alto, California, USA.

  • Boldrini, E., Balahur, A., Martínez-Barco, P., & Montoyo, A. (2009). EmotiBlog: An annotation scheme for emotion detection and analysis in non-traditional textual genres. In Proceedings of the 2009 international conference on data mining (DMIN 2009) (pp. 491–497). Las Vegas, Nevada, USA.

  • Boldrini, E., Balahur, A., Martínez-Barco, P., & Montoyo, A. (2012). Using EmotiBlog to annotate and analyse subjectivity in the new textual genres. Data Mining and Knowledge Discovery, 25(3), 603–634.

  • Breck, E., Choi, Y., & Cardie, C. (2007). Identifying expressions of opinion in context. In Proceedings of the 20th international joint conference on artificial intelligence (IJCAI-2007) (pp. 2683–2688). Hyderabad, India.

  • Choi, Y., Breck, E., & Cardie, C. (2006). Joint extraction of entities and relations for opinion recognition. In Proceedings of the 2006 conference on empirical methods in natural language processing (EMNLP 2006) (pp. 431–439). Sydney, Australia.

  • Choi, Y., & Cardie, C. (2008). Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of the 2008 conference on empirical methods in natural language processing (EMNLP 2008) (pp. 793–801). Honolulu, Hawaii, USA.

  • Choi, Y., Cardie, C., Riloff, E., & Patwardhan, S. (2005). Identifying sources of opinions with conditional random fields and extraction patterns. In Proceedings of the conference on human language technology and empirical methods in natural language processing (HLT-EMNLP 2005) (pp. 355–362). Vancouver, British Columbia, Canada.

  • Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.

  • Dabrowski, M., Acton, T., Jarzebowski, P., & O’Riain, S. (2010). Improving customer decisions using product reviews—CROM—Car Review Opinion Miner. In Proceedings of the 6th international conference on Web information systems and technologies, Volume 1 (WEBIST 2010) (pp. 354–357). Valencia, Spain.

  • Dave, K., Lawrence, S., & Pennock, D. M. (2003). Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of the 12th international conference on World Wide Web (WWW 2003) (pp. 519–528). Budapest, Hungary.

  • Deng, L., Choi, Y., & Wiebe, J. (2013). Benefactive/malefactive event and writer attitude annotation. In Proceedings of the 51st annual meeting of the association for computational linguistics (Vol. 2: Short Papers) (pp. 120–125). Sofia, Bulgaria.

  • Desmet, B., & Hoste, V. (2014). Recognising suicidal messages in Dutch social media. In Proceedings of the 9th international conference on language resources and evaluation (LREC 2014) (pp. 830–835). Reykjavik, Iceland.

  • Devitt, A., & Ahmad, K. (2007). Sentiment polarity identification in financial news: A cohesion-based approach. In Proceedings of the 45th annual meeting of the association of computational linguistics (pp. 984–991). Prague, Czech Republic.

  • Ding, X., Liu, B., & Yu, P. S. (2008). A holistic lexicon-based approach to opinion mining. In Proceedings of the 2008 international conference on Web search and data mining (WSDM 2008) (pp. 231–240). Palo Alto, California, USA.

  • Drury, B. & Almeida, J. J. (2011). Identification of fine-grained feature-based event and sentiment phrases from business news stories. In Proceedings of the international conference on Web intelligence, mining and semantics (WIMS 2011). Sogndal, Norway.

  • Esuli, A., & Sebastiani, F. (2006). SentiWordNet: A publicly available lexical resource for opinion mining. In Proceedings of the 5th conference on language resources and evaluation (LREC 2006) (pp. 417–422). Genoa, Italy.

  • Feng, S., Kang, J. S., Kuznetsova, P., & Choi, Y. (2013). Connotation Lexicon: A dash of sentiment beneath the surface meaning. In Proceedings of the 51st annual meeting of the association for computational linguistics (Vol. 1: Long Papers) (pp. 1774–1784). Sofia, Bulgaria.

  • Ferguson, P., O’Hare, N., Davy, M., Bermingham, A., Tattersall, S., Sheridan, P., et al. (2009). Exploring the use of paragraph-level annotations for sentiment analysis of financial blogs. In Proceedings of the 1st workshop on opinion mining and sentiment analysis (WOMSA 2009) (pp. 42–52). Seville, Spain.

  • Girju, R. (2003). Automatic detection of causal relations for question answering. In Proceedings of the ACL 2003 workshop on multilingual summarization and question answering, Vol. 12 (MultiSumQA 2003) (pp. 76–83). Sapporo, Japan.

  • Halliday, M. (1994). An introduction to functional grammar. London: Edward Arnold.

  • Hu, M., & Liu, B. (2004). Mining and summarizing customer reviews. In Proceedings of the 10th ACM SIGKDD international conference on knowledge discovery and data mining (KDD 2004) (pp. 168–177). Seattle, Washington, USA.

  • Ikeda, D., Takamura, H., Ratinov, L.-A., & Okumura, M. (2008). Learning to shift the polarity of words for sentiment classification. In Proceedings of the 3rd international joint conference on natural language processing (IJCNLP 2008) (pp. 296–303). Hyderabad, India.

  • Kennedy, A., & Inkpen, D. (2006). Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence, 22(2), 110–125.

  • Kessler, J. S., Eckert, M., Clark, L., & Nicolov, N. (2010). The ICWSM 2010 JDPA sentiment corpus for the automotive domain. In 4th international AAAI conference on weblogs and social media data workshop challenge (ICWSM-DWC 2010). Washington, DC, USA.

  • Kim, S.-M., & Hovy, E. (2005). Automatic detection of opinion bearing words and sentences. In Companion volume to the proceedings of the 2nd international joint conference on natural language processing (IJCNLP 2005) (pp. 61–66). Jeju Island, Korea.

  • Kim, S.-M., & Hovy, E. (2006). Extracting opinions, opinion holders, and topics expressed in online news media text. In Proceedings of the workshop on sentiment and subjectivity in text (SST 2006) (pp. 1–8). Sydney, Australia.

  • Kouloumpis, E., Wilson, T., & Moore, J. (2011). Twitter sentiment analysis: The good the bad and the OMG! In Proceedings of the 5th international AAAI conference on weblogs and social media (ICWSM 2011) (pp. 538–541). Barcelona, Spain.

  • Krippendorff, K. (1970). Estimating the reliability, systematic error and random error of interval data. Educational and Psychological Measurement, 30, 61–70.

  • Krippendorff, K. (1980). Content analysis: An introduction to its methodology, chapter 12. Beverly Hills: Sage.

  • Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Thousand Oaks: Sage.

  • Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.

  • Li, S., Lee, S. Y. M., Chen, Y., Huang, C.-R., & Zhou, G. (2010). Sentiment classification and polarity shifting. In Proceedings of the 23rd international conference on computational linguistics (Coling 2010) (pp. 635–643). Beijing, China.

  • Liu, B. (2012). Sentiment analysis and opinion mining. Synthesis lectures on human language technologies. San Rafael: Morgan & Claypool.

  • Macdonald, C., Ounis, I., & Soboroff, I. (2007). Overview of the TREC-2007 blog Track. In Proceedings of the 16th text REtrieval conference (TREC 2007) (pp. 31–43). Gaithersburg, Maryland, USA.

  • Martin, J. R., & White, P. R. (2005). The language of evaluation: Appraisal in English. Hampshire/New York: Palgrave Macmillan.

  • Musat, C., & Trausan-Matu, S. (2010). The impact of valence shifters on mining implicit economic opinions. In Proceedings of the 14th international conference on artificial intelligence: Methodology, systems, and applications (AIMSA 2010), volume 6304 of Lecture Notes in Computer Science (pp. 131–140). Springer.

  • Nakov, P., Rosenthal, S., Kozareva, Z., Stoyanov, V., Ritter, A., & Wilson, T. (2013). SemEval-2013 Task 2: Sentiment analysis in Twitter. In Proceedings of the 7th international workshop on semantic evaluation (SemEval 2013) (pp. 312–320). Atlanta, Georgia, USA.

  • Nigam, K., & Hurst, M. (2004). Towards a robust metric of opinion. In Proceedings of the AAAI spring symposium on exploring attitude and affect in text: Theories and applications (pp. 598–603). Palo Alto, California, USA.

  • O’Hare, N., Davy, M., Bermingham, A., Ferguson, P., Sheridan, P., Gurrin, C., et al. (2009). Topic-dependent sentiment analysis of financial blogs. In Proceedings of the 1st international CIKM workshop on topic-sentiment analysis for mass opinion measurement (TSA 2009) (pp. 9–16). Hong Kong, China.

  • Ounis, I., de Rijke, M., Macdonald, C., Mishne, G., & Soboroff, I. (2006). Overview of the TREC-2006 blog track. In Proceedings of the 15th text REtrieval conference (TREC 2006) (pp. 17–31). Gaithersburg, Maryland, USA.

  • Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1–2), 1–135.

  • Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on empirical methods in natural language processing, volume 10 (EMNLP 2002) (pp. 79–86). Philadelphia, Pennsylvania, USA.

  • Polanyi, L., & Zaenen, A. (2004). Contextual valence shifters. In Proceedings of the AAAI spring symposium on exploring attitude and affect in text: Theories and applications (pp. 106–111). Palo Alto, California, USA.

  • Pontiki, M., Papageorgiou, H., Galanis, D., Androutsopoulos, I., Pavlopoulos, J., & Manandhar, S. (2014). SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014) (pp. 27–35). Dublin, Ireland.

  • Popescu, A.-M., & Etzioni, O. (2007). Extracting product features and opinions from reviews. In Natural language processing and text mining. London: Springer.

  • Quirk, R., Greenbaum, S., Leech, G., & Svartvik, J. (1985). A comprehensive grammar of the English language. London: Longman.

  • Read, J., & Carroll, J. (2012). Annotating expressions of appraisal in English. Language Resources and Evaluation, 46, 421–447.

  • Riloff, E., & Wiebe, J. (2003). Learning extraction patterns for subjective expressions. In Proceedings of the 2003 conference on empirical methods in natural language processing (EMNLP 2003) (pp. 105–112). Sapporo, Japan.

  • Roberts, K., Roach, M. A., Johnson, J., Guthrie, J., & Harabagiu, S. M. (2012). EmpaTweet: Annotating and detecting emotions on Twitter. In Proceedings of the 8th international conference on language resources and evaluation (LREC 2012) (pp. 3806–3813). Istanbul, Turkey.

  • Scherer, K. R. (2005). What are emotions? And how can they be measured? Social Science Information, 44(4), 695–729.

  • Seki, Y., Evans, D. K., Ku, L.-W., Chen, H.-H., Kando, N., & Lin, C.-Y. (2007). Overview of opinion analysis pilot task at NTCIR-6. In Proceedings of the workshop meeting of the National Institute of Informatics (NII) test collection for information retrieval systems (NTCIR-6) (pp. 265–278). Tokyo, Japan.

  • Seki, Y., Evans, D. K., Ku, L.-W., Sun, L., Chen, H.-H., & Kando, N. (2008). Overview of multilingual opinion analysis task at NTCIR-7. In Proceedings of the 7th NTCIR workshop meeting on evaluation of information access technologies (NTCIR-7) (pp. 185–203). Tokyo, Japan.

  • Somasundaran, S., Ruppenhofer, J., & Wiebe, J. (2008). Discourse level opinion relations: An annotation study. In Proceedings of the 9th SIGdial workshop on discourse and dialogue (SIGdial 2008) (pp. 129–137). Columbus, Ohio, USA.

  • Stenetorp, P., Pyysalo, S., Topić, G., Ohta, T., Ananiadou, S., & Tsujii, J. (2012). brat: A Web-based tool for NLP-assisted text annotation. In Proceedings of the demonstrations at the 13th conference of the European chapter of the association for computational linguistics (EACL 2012) (pp. 102–107). Avignon, France.

  • Stoyanov, V., & Cardie, C. (2008). Annotating topics of opinions. In Proceedings of the 6th international conference on language resources and evaluation (LREC 2008) (pp. 3213–3217). Marrakech, Morocco.

  • Strapparava, C., & Mihalcea, R. (2007). SemEval-2007 task 14: Affective text. In Proceedings of the 4th international workshop on semantic evaluations (SemEval 2007) (pp. 70–74). Prague, Czech Republic.

  • Taboada, M., Brooke, J., Tofiloski, M., Voll, K., & Stede, M. (2011). Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2), 267–307.

  • Toprak, C., Jakob, N., & Gurevych, I. (2010). Sentence and expression level annotation of opinions in user-generated discourse. In Proceedings of the 48th annual meeting of the association for computational linguistics (ACL 2010) (pp. 575–584). Uppsala, Sweden.

  • Turney, P. D. (2002). Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th annual meeting of the association for computational linguistics (ACL 2002) (pp. 417–424). Philadelphia, Pennsylvania, USA.

  • Van de Kauter, M., Coorman, G., Lefever, E., Desmet, B., Macken, L., & Hoste, V. (2013). LeTs Preprocess: The multilingual LT3 linguistic preprocessing toolkit. Computational Linguistics in the Netherlands Journal, 3, 103–120.

  • Van de Kauter, M., Desmet, B., & Hoste, V. (2014). Guidelines for the fine-grained analysis of polar expressions, version 2.0. Technical report LT3 14–02, LT3, Language and Translation Technology Team—Ghent University.

  • van Rijsbergen, C. (1979). Information retrieval. London: Butterworths.

  • Wiebe, J., Bruce, R. F., & O’Hara, T. P. (1999). Development and use of a gold-standard data set for subjectivity classifications. In Proceedings of the 37th annual meeting of the association for computational linguistics (ACL 1999) (pp. 246–253). College Park, Maryland, USA.

  • Wiebe, J., Wilson, T., & Cardie, C. (2005). Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2–3), 165–210.

  • Wiegand, M., Balahur, A., Roth, B., Klakow, D., & Montoyo, A. (2010). A survey on the role of negation in sentiment analysis. In Proceedings of the workshop on negation and speculation in natural language processing (NeSp-NLP 2010) (pp. 60–68). Uppsala, Sweden.

  • Wilson, T. (2008). Annotating subjective content in meetings. In Proceedings of the 6th international conference on language resources and evaluation (LREC 2008) (pp. 2738–2745). Marrakech, Morocco.

  • Wilson, T., & Wiebe, J. (2005). Annotating attributions and private states. In Proceedings of the workshop on frontiers in corpus annotations II: Pie in the sky (CorpusAnno ’05) (pp. 53–60). Ann Arbor, Michigan, USA.

  • Wilson, T., Wiebe, J., & Hoffmann, P. (2005). Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing (HLT-EMNLP 2005) (pp. 347–354). Vancouver, British Columbia, Canada.

  • Yu, H., & Hatzivassiloglou, V. (2003). Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the 2003 conference on empirical methods in natural language processing (EMNLP 2003) (pp. 129–136). Sapporo, Japan.

  • Zabin, J., & Jefferies, A. (2008). Social media monitoring and analysis: Generating consumer insights from online conversation. Aberdeen Group Benchmark Report, Aberdeen Group.

  • Zhang, L., & Liu, B. (2011). Identifying noun product features that imply opinions. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies (HLT 2011) (pp. 575–580). Portland, Oregon, USA.

  • Zhuang, L., Jing, F., & Zhu, X.-Y. (2006). Movie review mining and summarization. In Proceedings of the 15th ACM international conference on information and knowledge management (CIKM 2006) (pp. 43–50). Arlington, Virginia, USA.

Acknowledgments

This research was conducted in the framework of SentiFM (Sentiment mining for Financial Markets, http://www.lt3.ugent.be/en/projects/sentifm/), a project funded by a Ph.D. grant of the Agency for Innovation by Science and Technology (IWT), and the HOF project SubTLe (Subjectivity Tagging and Learning, http://www.lt3.ugent.be/en/projects/subtle/).

Author information

Correspondence to Marjan Van de Kauter.


Cite this article

Van de Kauter, M., Desmet, B. & Hoste, V. The good, the bad and the implicit: a comprehensive approach to annotating explicit and implicit sentiment. Lang Resources & Evaluation 49, 685–720 (2015). https://doi.org/10.1007/s10579-015-9297-4
