Semantic Extraction and Object Proposal for Video Search

  • Vinh-Tiep Nguyen
  • Thanh Duc Ngo
  • Duy-Dinh Le
  • Minh-Triet Tran
  • Duc Anh Duong
  • Shin’ichi Satoh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10133)

Abstract

In this paper, we propose two approaches to the problems of video search: ad-hoc video search and known-item search. First, we combine multiple semantic concepts extracted from multiple networks trained on different data domains. Second, to help users find exactly the video shot they have seen before, we propose a sketch-based search system that detects and indexes the many objects generated by an object proposal algorithm. In this way, we leverage not only the concepts but also the spatial relations between them.
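The abstract's second idea, ranking shots by both the concept labels of indexed object proposals and their spatial layout relative to a user's sketch, can be illustrated with a minimal sketch in Python. The labels, box format, and IoU-based scoring here are illustrative assumptions, not the paper's actual indexing scheme:

```python
# Hypothetical sketch-based shot scoring: each indexed proposal and each
# sketched object is a (concept_label, (x1, y1, x2, y2)) pair. Assumed
# toy data; the paper's real index structure is not reproduced here.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def score_shot(sketch, proposals):
    """For each sketched object, take the best spatially overlapping
    proposal carrying the same concept label; average over the sketch."""
    total = 0.0
    for label, box in sketch:
        overlaps = [iou(box, p_box) for p_label, p_box in proposals
                    if p_label == label]
        total += max(overlaps, default=0.0)
    return total / len(sketch) if sketch else 0.0

# User sketches a person on the left and a car on the right.
sketch = [("person", (0, 0, 40, 100)), ("car", (60, 40, 100, 100))]
shot_a = [("person", (5, 0, 45, 100)), ("car", (55, 35, 100, 100))]
shot_b = [("car", (0, 0, 40, 100)), ("person", (60, 40, 100, 100))]

# Shot A matches both labels and layout; shot B has the same concepts
# but swapped positions, so it scores lower.
print(score_shot(sketch, shot_a) > score_shot(sketch, shot_b))  # True
```

The point of the toy example is that concept occurrence alone cannot separate the two shots; only the spatial relation between the proposals does.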

Keywords

Semantic extraction · Object proposal · Sketch-based search

Notes

Acknowledgement

This research is funded by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant number B2013-26-01. We are thankful to our colleagues Sang Phan, Yusuke Matsui, and Benjamin Renoust, who provided their source code to make our system more efficient.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Vinh-Tiep Nguyen (1)
  • Thanh Duc Ngo (2)
  • Duy-Dinh Le (2)
  • Minh-Triet Tran (1)
  • Duc Anh Duong (2)
  • Shin’ichi Satoh (3)
  1. University of Science, Vietnam National University-HCMC, Ho Chi Minh City, Vietnam
  2. University of Information Technology, Vietnam National University-HCMC, Ho Chi Minh City, Vietnam
  3. National Institute of Informatics, Tokyo, Japan