International Conference on Image Analysis and Processing

Image Analysis and Processing — ICIAP 2015, pp. 683–693

Movie Genre Classification by Exploiting MEG Brain Signals

  • Pouya Ghaemmaghami
  • Mojtaba Khomami Abadi
  • Seyed Mostafa Kia
  • Paolo Avesani
  • Nicu Sebe
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9279)

Abstract

Genre classification is an essential part of multimedia content recommender systems. In this study, we provide experimental evidence that genre classification can be performed from recorded brain signals. A brain decoding paradigm is employed to classify the magnetoencephalography (MEG) data presented in [1] into four genre classes: Comedy, Romantic, Drama, and Horror. Our results show that: (1) there is a significant correlation between the audio-visual features of movies and the corresponding brain signals, especially in the visual and temporal lobes; (2) the genre of movie clips can be classified from the MEG signal with an accuracy significantly above chance level. Moreover, we show that combining multimedia features with MEG-based features achieves the best accuracy. Our study provides a first step towards user-centric media content retrieval using brain signals.
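
The abstract does not reproduce the authors' pipeline, so the following Python sketch only illustrates the kind of analysis described: a correlation check between an audio-visual feature and an MEG-derived feature, genre classification from MEG features against the 25% chance level of four balanced classes, and early fusion of the two feature sets. All data shapes, feature definitions, and the linear SVM classifier are assumptions for illustration, not the authors' exact method.

```python
# Illustrative sketch only; shapes, features and the classifier are assumed,
# not taken from the paper.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical data: 36 movie clips, 9 per genre, with MEG-derived features
# (e.g. band power per sensor) and low-level audio-visual features per clip.
n_clips = 36
X_meg = rng.standard_normal((n_clips, 128))   # placeholder MEG features
X_av = rng.standard_normal((n_clips, 64))     # placeholder audio-visual features
y = np.repeat(np.arange(4), n_clips // 4)     # 0=Comedy, 1=Romantic, 2=Drama, 3=Horror

# (1) Correlation between one multimedia feature and one MEG feature.
r, p = pearsonr(X_av[:, 0], X_meg[:, 0])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# (2) Genre classification from MEG features alone; chance level is 0.25
#     for four balanced classes.
clf = make_pipeline(StandardScaler(), LinearSVC())
acc_meg = cross_val_score(clf, X_meg, y, cv=5).mean()

# (3) Early fusion: concatenate MEG and audio-visual features.
acc_fused = cross_val_score(clf, np.hstack([X_meg, X_av]), y, cv=5).mean()
print(f"MEG-only accuracy: {acc_meg:.2f}, fused accuracy: {acc_fused:.2f}")
```

With the random placeholder features above, both cross-validated accuracies will hover around the 0.25 chance level; the paper's claim is that real MEG features raise this significantly above chance, and that fusing them with multimedia features improves accuracy further.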

Keywords

Multimedia content retrieval · MEG · Genre classification · Brain decoding · Signal processing

References

  1. Abadi, M., Subramanian, R., Kia, S., Avesani, P., Patras, I., Sebe, N.: DECAF: MEG-based multimodal database for decoding affective physiological responses. IEEE Transactions on Affective Computing (2015)
  2. Brezeale, D., Cook, D.J.: Using closed captions and visual features to classify movies by genre. In: International Workshop on Multimedia Data Mining (2006)
  3. Carlson, T.A., Hogendoorn, H., Kanai, R., Mesik, J., Turret, J.: High temporal resolution decoding of object position and category. Journal of Vision 11(10) (2011)
  4. Cox, D.D., Savoy, R.L.: Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage 19(2), 261–270 (2003)
  5. Fisher, R.A.: Statistical methods for research workers. Quarterly Journal of the Royal Meteorological Society 82(351), 119–119 (1956)
  6. Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., Pietrini, P.: Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293(5539), 2425–2430 (2001)
  7. Haynes, J.D., Rees, G.: Decoding mental states from brain activity in humans. Nature Reviews Neuroscience 7(7), 523–534 (2006)
  8. Huang, H.Y., Shih, W.S., Hsu, W.H.: A film classifier based on low-level visual features. In: IEEE Workshop on Multimedia Signal Processing, pp. 465–468 (2007)
  9. Kamitani, Y., Tong, F.: Decoding motion direction from activity in human visual cortex. Journal of Vision 5(8), 152–152 (2005)
  10. Kamitani, Y., Tong, F.: Decoding the visual and subjective contents of the human brain. Nature Neuroscience 8(5), 679–685 (2005)
  11. Koelstra, S., Muhl, C., Soleymani, M., Lee, J.S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., Patras, I.: DEAP: a database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing 3(1), 18–31 (2012)
  12. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics 33(1), 159–174 (1977)
  13. Li, D., Sethi, I.K., Dimitrova, N., McGee, T.: Classification of general audio data for content-based retrieval. Pattern Recognition Letters 22(5), 533–544 (2001)
  14. Nam, J., Alghoniemy, M., Tewfik, A.H.: Audio-visual content-based violent scene characterization. In: International Conference on Image Processing (1998)
  15. Obermayer, K., Blasdel, G.G.: Geometry of orientation and ocular dominance columns in monkey striate cortex. The Journal of Neuroscience 13(10), 4114–4129 (1993)
  16. Oostenveld, R., Fries, P., Maris, E., Schoffelen, J.M.: FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience (2010)
  17. Rasheed, Z., Sheikh, Y., Shah, M.: On the use of computable features for film classification. IEEE Transactions on Circuits and Systems for Video Technology 15(1), 52–64 (2005)
  18. Soleymani, M., Chanel, G., Kierkels, J.J., Pun, T.: Affective characterization of movie scenes based on multimedia content analysis and user's physiological emotional responses. In: IEEE International Symposium on Multimedia (2008)
  19. Soleymani, M., Kierkels, J.J., Chanel, G., Pun, T.: A Bayesian framework for video affective representation. In: International Conference on Affective Computing and Intelligent Interaction (2009)
  20. Sugano, M., Isaksson, R., Nakajima, Y., Yanagihara, H.: Shot genre classification using compressed audio-visual features. In: International Conference on Image Processing (2003)
  21. Tanaka, K.: Mechanisms of visual object recognition: monkey and human studies. Current Opinion in Neurobiology 7(4), 523–529 (1997)
  22. Wang, G., Tanaka, K., Tanifuji, M.: Optical imaging of functional organization in the monkey inferotemporal cortex. Science 272(5268), 1665–1668 (1996)
  23. Xu, M., Chia, L.T., Jin, J.: Affective content analysis in comedy and horror videos by audio emotional event detection. In: IEEE International Conference on Multimedia and Expo (2005)
  24. Xu, M., Jin, J.S., Luo, S., Duan, L.: Hierarchical movie affective content analysis based on arousal and valence features. In: ACM Multimedia (2008)
  25. Xu, M., Xu, C., He, X., Jin, J.S., Luo, S., Rui, Y.: Hierarchical affective content analysis in arousal and valence dimensions. Signal Processing 93(8), 2140–2150 (2013)
  26. Zhou, H., Hermans, T., Karandikar, A.V., Rehg, J.M.: Movie genre classification via scene categorization. In: ACM Multimedia (2010)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Pouya Ghaemmaghami (1)
  • Mojtaba Khomami Abadi (1, 4)
  • Seyed Mostafa Kia (1, 2, 3)
  • Paolo Avesani (1, 2, 3)
  • Nicu Sebe (1)

  1. Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
  2. NeuroInformatics Laboratory (NILab), Bruno Kessler Foundation, Trento, Italy
  3. Centro Interdipartimentale Mente e Cervello (CIMeC), University of Trento, Trento, Italy
  4. Semantic, Knowledge and Innovation Lab (SKIL), Telecom Italia, Trento, Italy