
Character Prediction in TV Series via a Semantic Projection Network

  • Ke Sun
  • Zhuo Lei
  • Jiasong Zhu
  • Xianxu Hou
  • Bozhi Liu
  • Guoping Qiu (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11295)

Abstract

The goal of this paper is to automatically recognize characters in popular TV series. In contrast to conventional approaches that rely on the weak supervision afforded by transcripts, subtitles, or character facial data, we formulate the problem as multi-label classification, which requires only label-level supervision. We propose a novel semantic projection network consisting of two stacked subnetworks with specially designed constraints. The first subnetwork is a contractive autoencoder that reconstructs feature activations extracted from a pre-trained single-label convolutional neural network (CNN). The second subnetwork acts as a region-based multi-label classifier that produces character labels for the input video frame and reconstructs the input visual feature from the mapped semantic label space. Extensive experiments show that the proposed model achieves state-of-the-art performance compared with recent approaches on three challenging TV series datasets (The Big Bang Theory, The Defenders, and Nirvana in Fire).
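
To make the stacked design more concrete, below is a minimal PyTorch-style sketch (not the authors' released code) of the two subnetworks described above: a contractive autoencoder over pooled CNN feature activations, and a multi-label head that predicts character labels and projects the label space back to the visual feature space. All class names, layer sizes, loss weights, and the random stand-in data are illustrative assumptions, and the region-proposal handling used by the actual region-based classifier is omitted.

```python
# Minimal sketch of the two-subnetwork idea; sizes and weights are assumptions.
import torch
import torch.nn as nn

class ContractiveAutoencoder(nn.Module):
    """Subnetwork 1: reconstructs CNN feature activations; a contractive
    penalty on the encoder Jacobian encourages locally invariant codes."""
    def __init__(self, feat_dim=2048, code_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Linear(code_dim, feat_dim)

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)

    def contractive_penalty(self, x):
        # Frobenius norm of the sigmoid encoder's Jacobian:
        # sum_j (h_j (1 - h_j))^2 * sum_i W_ji^2
        h = self.encoder(x)
        w_sq = (self.encoder[0].weight ** 2).sum(dim=1)   # (code_dim,)
        return (((h * (1 - h)) ** 2) * w_sq).sum(dim=1).mean()

class MultiLabelProjectionHead(nn.Module):
    """Subnetwork 2: predicts character labels from the code and projects the
    semantic label space back to the visual feature space."""
    def __init__(self, code_dim=512, num_chars=8, feat_dim=2048):
        super().__init__()
        self.classifier = nn.Linear(code_dim, num_chars)
        self.back_projection = nn.Linear(num_chars, feat_dim)

    def forward(self, code):
        logits = self.classifier(code)
        feat_recon = self.back_projection(torch.sigmoid(logits))
        return logits, feat_recon

# Example training-style step on random data standing in for CNN features.
cae = ContractiveAutoencoder()
head = MultiLabelProjectionHead()
feats = torch.randn(4, 2048)                  # pooled CNN activations (stand-in)
labels = torch.randint(0, 2, (4, 8)).float()  # multi-hot character labels (stand-in)

code, feat_rec = cae(feats)
logits, feat_from_labels = head(code)
loss = (nn.functional.mse_loss(feat_rec, feats)
        + 0.1 * cae.contractive_penalty(feats)   # 0.1 is an assumed weight
        + nn.functional.binary_cross_entropy_with_logits(logits, labels)
        + nn.functional.mse_loss(feat_from_labels, feats))
loss.backward()
```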

Keywords

Video understanding · Character recognition · Convolutional neural network · Autoencoder · Semantic projection

Acknowledgment

This work was jointly supported in part by the National Natural Science Foundation of China under Grant 61773414, in part by the Shenzhen Future Industry Development Funding Program under Grant 201607281039561400, and in part by the Shenzhen Scientific Research and Development Funding Program under Grant JCYJ20170818092931604.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ke Sun (1)
  • Zhuo Lei (2)
  • Jiasong Zhu (1)
  • Xianxu Hou (3)
  • Bozhi Liu (3)
  • Guoping Qiu (3, 4) (corresponding author)

  1. Shenzhen Key Laboratory of Spatial Information Smart Sensing and Services, Shenzhen University, Shenzhen, China
  2. School of Computer Science, University of Nottingham Ningbo, Ningbo, China
  3. Guangdong Key Laboratory of Intelligent Information Processing, College of Information Engineering, Shenzhen University, Shenzhen, China
  4. School of Computer Science, University of Nottingham, Nottingham, UK