Exquisitor at the Video Browser Showdown 2020

  • Björn Þór Jónsson
  • Omar Shahbaz Khan
  • Dennis C. Koelma
  • Stevan Rudinac
  • Marcel Worring
  • Jan Zahálka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11962)

Abstract

When browsing large video collections, human-in-the-loop systems are essential. The system should understand the user's semantic information need and, using data-driven methods, interactively help formulate queries that satisfy it. Full synergy between the interacting user and the system can only be obtained when the system learns from the user's interactions while providing immediate responses. Doing so for large-scale multimodal collections with dynamically changing information needs is a challenging task. To push the boundary of current methods, we propose to apply the state of the art in interactive multimodal learning to the complex multimodal information needs posed by the Video Browser Showdown (VBS). To that end, we adapt Exquisitor, a highly scalable interactive learning system. Exquisitor combines semantic features extracted from visual content and text to suggest relevant media items to the user, based on the user's relevance feedback on previously suggested items. In this paper, we briefly describe the Exquisitor system and its first incarnation as a VBS entrant.
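The relevance-feedback loop described above can be illustrated with a minimal sketch. This is not Exquisitor's actual model or code; it uses a simple Rocchio-style linear scorer (difference of positive and negative feature centroids) as a stand-in for the learned classifier, and toy 2-D feature vectors in place of semantic features extracted from visual content and text.

```python
import numpy as np

def interactive_round(features, pos_ids, neg_ids, k=5):
    """One round of relevance feedback over item feature vectors.

    Fits a simple linear scorer (the difference between the centroids of
    positively and negatively judged items) and returns the top-k items
    the user has not yet judged.
    """
    w = features[list(pos_ids)].mean(axis=0) - features[list(neg_ids)].mean(axis=0)
    scores = features @ w                      # linear relevance score per item
    seen = set(pos_ids) | set(neg_ids)         # don't re-suggest judged items
    ranked = [i for i in np.argsort(-scores) if i not in seen]
    return ranked[:k]

# Toy collection: five items with 2-D "semantic" features.
feats = np.array([[1.0, 0.1], [0.9, 0.2], [0.8, 0.0],
                  [0.1, 1.0], [0.0, 0.9]])
# User judged item 0 relevant and item 3 irrelevant; ask for 2 suggestions.
suggestions = interactive_round(feats, pos_ids=[0], neg_ids=[3], k=2)
print(suggestions)  # → [2, 1]: the unseen items most similar to the positive
```

In an interactive session this function would be called once per round, with the user's new judgments appended to `pos_ids` and `neg_ids` each time, so the scorer sharpens as feedback accumulates.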

Keywords

Interactive learning · Video browsing · Scalability

Notes

Acknowledgments

This work was supported by a PhD grant from the IT University of Copenhagen and by the European Regional Development Fund (project Robotics for Industry 4.0, CZ.02.1.01/0.0/0.0/15 003/0000470).


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Björn Þór Jónsson (1)
  • Omar Shahbaz Khan (1)
  • Dennis C. Koelma (2)
  • Stevan Rudinac (2)
  • Marcel Worring (2)
  • Jan Zahálka (3)
  1. IT University of Copenhagen, Copenhagen, Denmark
  2. University of Amsterdam, Amsterdam, Netherlands
  3. Czech Technical University in Prague, Prague, Czech Republic
