NII-UIT Browser: A Multimodal Video Search System

  • Conference paper
MultiMedia Modeling (MMM 2015)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 8936)

Abstract

We introduce an interactive system for searching for a known scene in a video database. The key idea is to enable multimodal search: as the database being searched grows, a single modality may no longer be discriminative enough to separate a scene from its near duplicates. In our system, a known scene can be described and searched by its visual cues or its audio genre. Templates are provided so that users can describe the scene quickly and precisely. Moreover, search results are updated instantly as users change the description. As a result, users can generate a large number of candidate queries and find the matching scene in a short time.
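The paper itself does not include code; the following is a minimal, hypothetical Python sketch of the idea described above: each shot is indexed with pre-extracted visual concepts and an audio genre, a query is a set of user-selected cues, and the ranked result list is simply recomputed every time the user edits the description. All names (Shot, score_shot, search) and the count-based fusion rule are illustrative assumptions, not the system's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Shot:
    """One video shot with pre-extracted metadata (hypothetical schema)."""
    video_id: str
    start_sec: float
    visual_concepts: Set[str] = field(default_factory=set)  # e.g. {"face", "car"}
    audio_genre: str = ""                                    # e.g. "speech", "music"

def score_shot(shot: Shot, visual_query: Set[str], audio_query: Optional[str]) -> int:
    """Score a shot by how many of the query's cues it satisfies (simple count-based fusion)."""
    score = len(visual_query & shot.visual_concepts)
    if audio_query and shot.audio_genre == audio_query:
        score += 1
    return score

def search(shots: List[Shot], visual_query: Set[str],
           audio_query: Optional[str], k: int = 10) -> List[Shot]:
    """Return the top-k matching shots; re-run whenever the user edits the query description."""
    scored = [(score_shot(s, visual_query, audio_query), s) for s in shots]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:k] if score > 0]

# Example query: "a scene with two faces and speech on the audio track".
shots = [
    Shot("video_01", 12.0, {"face", "car"}, "music"),
    Shot("video_02", 40.5, {"face", "face_pair"}, "speech"),
]
print(search(shots, visual_query={"face", "face_pair"}, audio_query="speech"))
```

Because scoring is a cheap function of pre-extracted metadata, re-running the whole ranking after every edit to the query is what makes the instant-feedback loop described in the abstract feasible.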




Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Ngo, T.D., Nguyen, V.T., Nguyen, V.H., Le, D.-D., Duong, D.A., Satoh, S. (2015). NII-UIT Browser: A Multimodal Video Search System. In: He, X., Luo, S., Tao, D., Xu, C., Yang, J., Hasan, M.A. (eds) MultiMedia Modeling. MMM 2015. Lecture Notes in Computer Science, vol 8936. Springer, Cham. https://doi.org/10.1007/978-3-319-14442-9_28

  • DOI: https://doi.org/10.1007/978-3-319-14442-9_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-14441-2

  • Online ISBN: 978-3-319-14442-9

  • eBook Packages: Computer Science, Computer Science (R0)
