
Expect the Unexpected: Harnessing Sentence Completion for Sarcasm Detection

  • Aditya Joshi
  • Samarth Agrawal
  • Pushpak Bhattacharyya
  • Mark J. Carman
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 781)

Abstract

The trigram ‘I love being’ is expected to be followed by a positive word such as ‘happy’. In a sarcastic sentence, however, a word like ‘ignored’ may be observed instead. The expected and the observed words are thus incongruous. We model sarcasm detection as the task of detecting incongruity between an observed word and an expected word. To obtain the expected word, we use context2vec, a sentence completion library based on a bidirectional LSTM. However, since the exact word at which such incongruity occurs may not be known in advance, we present two approaches: an All-words approach (which consults sentence completion for every content word) and an Incongruous words-only approach (which consults sentence completion for only the 50% most incongruous content words). The approaches outperform previously reported values for tweets but not for discussion forum posts, likely because of redundant consultation of sentence completion for discussion forum posts. We therefore also consider an oracle case in which the exact incongruous word is manually labeled in a corpus reported in past work. In this case, the performance is higher than that of the All-words approach. These results demonstrate the promise of sentence completion for sarcasm detection.
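
To make the strategy in the abstract concrete, the sketch below illustrates the All-words approach under two loud assumptions: a BERT fill-mask pipeline stands in for the context2vec completion model actually used in the paper, and incongruity is approximated by a crude "observed word absent from the top-k expected completions" heuristic rather than the authors' scoring; the stopword list and function name are likewise hypothetical.

    # Minimal sketch of the All-words strategy, NOT the authors' implementation:
    # a BERT fill-mask pipeline substitutes for context2vec, and incongruity is
    # approximated as "observed word missing from the top-k expected completions".
    from transformers import pipeline

    completer = pipeline("fill-mask", model="bert-base-uncased")

    # Hypothetical stopword list used to keep only content words.
    STOPWORDS = {"i", "the", "a", "an", "being", "is", "am", "are", "to", "of", "and"}

    def incongruity_scores(sentence: str, top_k: int = 20) -> dict:
        """Mask each content word, ask the completion model for the expected
        words, and flag the observed word as incongruous if it is not among them."""
        tokens = sentence.lower().split()
        scores = {}
        for i, observed in enumerate(tokens):
            if observed in STOPWORDS:
                continue  # consult sentence completion only for content words
            masked = tokens.copy()
            masked[i] = completer.tokenizer.mask_token  # e.g. "[MASK]" for BERT
            predictions = completer(" ".join(masked), top_k=top_k)
            expected = {p["token_str"].strip().lower() for p in predictions}
            scores[observed] = 0.0 if observed in expected else 1.0
        return scores

    # "ignored" is unexpected after "I love being", so it should be scored incongruous.
    print(incongruity_scores("I love being ignored"))

The Incongruous words-only variant described in the abstract would restrict these completion calls to the half of the content words ranked most incongruous beforehand, reducing the number of consultations per sentence.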

Keywords

Sarcasm detection · Sentence completion · Sentiment analysis · LSTM

References

  1. Tsur, O., Davidov, D., Rappoport, A.: ICWSM - a great catchy name: semi-supervised recognition of sarcastic sentences in online product reviews. In: ICWSM (2010)
  2. Reyes, A., Rosso, P., Veale, T.: A multidimensional approach for detecting irony in Twitter. Lang. Resour. Eval. 47(1), 239–268 (2013)
  3. Joshi, A., Tripathi, V., Patel, K., Bhattacharyya, P., Carman, M.: Are word embedding-based features useful for sarcasm detection? In: EMNLP (2016)
  4. Khattri, A., Joshi, A., Bhattacharyya, P., Carman, M.J.: Your sentiment precedes you: using an author’s historical tweets to predict sarcasm. In: WASSA, p. 25 (2015)
  5. Veale, T., Hao, Y.: Detecting ironic intent in creative comparisons. In: ECAI, vol. 215, pp. 765–770 (2010)
  6. Maynard, D., Greenwood, M.A.: Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis. In: LREC (2014)
  7. Gibbs, R.W.: The Poetics of Mind: Figurative Thought, Language, and Understanding. Cambridge University Press, New York (1994)
  8. Ivanko, S.L., Pexman, P.M.: Context incongruity and irony processing. Discourse Process. 35(3), 241–279 (2003)
  9. Zweig, G., Burges, C.J.: The Microsoft Research sentence completion challenge. Technical Report MSR-TR-2011-129, Microsoft (2011)
  10. Melamud, O., Goldberger, J., Dagan, I.: context2vec: learning generic context embedding with bidirectional LSTM. In: CoNLL, pp. 51–61 (2016)
  11. Joshi, A., Sharma, V., Bhattacharyya, P.: Harnessing context incongruity for sarcasm detection. In: ACL-IJCNLP, vol. 2, pp. 757–762 (2015)
  12. Riloff, E., Qadir, A., Surve, P., De Silva, L., Gilbert, N., Huang, R.: Sarcasm as contrast between a positive sentiment and negative situation. In: EMNLP, pp. 704–714 (2013)
  13. Rajadesingan, A., Zafarani, R., Liu, H.: Sarcasm detection on Twitter: a behavioral modeling approach. In: ICWSM, pp. 97–106. ACM (2015)
  14. Wallace, B.C., Choe, D.K., Charniak, E.: Sparse, contextually informed models for irony detection: exploiting user communities, entities and sentiment. In: ACL, vol. 1, pp. 1035–1044 (2015)
  15. Wang, Z., Wu, Z., Wang, R., Ren, Y.: Twitter sarcasm detection exploiting a context-based model. In: Wang, J., Cellary, W., Wang, D., Wang, H., Chen, S.-C., Li, T., Zhang, Y. (eds.) WISE 2015. LNCS, vol. 9418, pp. 77–91. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-26190-4_6
  16. Joshi, A., Tripathi, V., Bhattacharyya, P., Carman, M.: Harnessing sequence labeling for sarcasm detection in dialogue from TV series ‘Friends’. In: CoNLL, p. 146 (2016)
  17. Amir, S., Wallace, B.C., Lyu, H., Carvalho, P., Silva, M.J.: Modelling context with user embeddings for sarcasm detection in social media. In: CoNLL, p. 167 (2016)
  18. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
  19. Liu, Q., Jiang, H., Wei, S., Ling, Z.-H., Hu, Y.: Learning semantic word embeddings based on ordinal knowledge constraints. In: ACL-IJCNLP (2015)
  20. Walker, M.A., Tree, J.E.F., Anand, P., Abbott, R., King, J.: A corpus for research on deliberation and debate. In: LREC, pp. 812–817 (2012)
  21. Pedersen, T., Patwardhan, S., Michelizzi, J.: WordNet::Similarity: measuring the relatedness of concepts. In: Demonstration Papers at HLT-NAACL 2004, pp. 38–41. Association for Computational Linguistics (2004)
  22. Ghosh, D., Guo, W., Muresan, S.: Sarcastic or not: word embeddings to predict the literal or sarcastic meaning of words. In: EMNLP (2015)

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Aditya Joshi (1, 2, 3)
  • Samarth Agrawal (2)
  • Pushpak Bhattacharyya (2)
  • Mark J. Carman (3)

  1. IITB-Monash Research Academy, Mumbai, India
  2. Indian Institute of Technology Bombay, Mumbai, India
  3. Monash University, Melbourne, Australia
