Towards Egocentric Sentiment Analysis

  • Estefania Talavera (corresponding author)
  • Petia Radeva
  • Nicolai Petkov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10672)

Abstract

The availability of egocentric data is increasing rapidly with the growing use of wearable cameras. Our aim is to study the effect (positive, neutral, or negative) that egocentric images or events have on an observer. Given egocentric photostreams that capture the wearer's days, we propose a method that assigns a sentiment to the events extracted from them. Such moments become candidates for retrieval according to how likely they are to represent a positive experience for the camera wearer. The proposed approach obtained a classification accuracy of 75% on the test set, with a deviation of 8%. Our model takes a step forward, opening the door to sentiment recognition in egocentric photostreams.
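
As a rough illustration only (this is not the authors' implementation): given events already extracted from a photostream, per-frame sentiment predictions can be aggregated into a single event-level label. In the Python sketch below, classify_frame is a hypothetical stand-in for any per-image sentiment classifier (e.g. a fine-tuned CNN), and the majority-vote aggregation rule is an assumption made for illustration.

    # Minimal sketch: aggregate per-frame sentiment into one event label.
    # Assumes events were already segmented out of the photostream and that
    # classify_frame returns "negative", "neutral", or "positive".
    from collections import Counter
    from typing import Callable, Iterable, TypeVar

    Frame = TypeVar("Frame")  # stands in for any image representation

    def classify_event(
        frames: Iterable[Frame],
        classify_frame: Callable[[Frame], str],
    ) -> str:
        """Label a whole event by majority vote over per-frame predictions."""
        votes = Counter(classify_frame(frame) for frame in frames)
        if not votes:
            raise ValueError("an event must contain at least one frame")
        label, _ = votes.most_common(1)[0]
        return label

    # Toy usage with a dummy classifier; a real system would plug in a
    # trained per-image model here.
    if __name__ == "__main__":
        event = ["img_0001.jpg", "img_0002.jpg", "img_0003.jpg"]
        print(classify_event(event, lambda path: "positive"))  # -> positive

Other aggregation rules, such as averaging per-class confidence scores across an event's frames, would slot into the same interface.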

Keywords

Egocentric images · Moment retrieval · Sentiment analysis

Acknowledgements

This work was partially funded by the Ministerio de Ciencia e Innovación of the Gobierno de España through research project TIN2015-66951-C2, SGR 1219, CERCA, ICREA Academia 2014, and Grant 20141510 (Marató TV3). The funders had no role in the study design, data collection, analysis, or preparation of the manuscript.

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Estefania Talavera (1, 2), corresponding author
  • Petia Radeva (1)
  • Nicolai Petkov (1, 2)
  1. University of Barcelona, Barcelona, Spain
  2. University of Groningen, Groningen, The Netherlands
