
Introduction and Motivation


Part of the book series: Socio-Affective Computing (SAC, volume 8)

Abstract

Multimodal sentiment analysis is a new research field in the area of Artificial Intelligence. It aims at processing multimodal inputs, e.g., audio, visual, and textual data, to extract affective knowledge. In this chapter, we discuss the major research challenges in this field, followed by an overview of the proposed multimodal sentiment analysis framework.




Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Poria, S., Hussain, A., Cambria, E. (2018). Introduction and Motivation. In: Multimodal Sentiment Analysis. Socio-Affective Computing, vol 8. Springer, Cham. https://doi.org/10.1007/978-3-319-95020-4_1

