Abstract
Based on our experience from VBS2023 and the feedback from the IVR4B special session at CBMI2023, we have substantially revised the diveXplore system for VBS2024. It now integrates OpenCLIP trained on the LAION-2B dataset for image/text embeddings, which are used for free-text and visual-similarity search; a query server that distributes different queries and merges their results; a user interface optimized for fast browsing; and an exploration view for large clusters of similar videos (e.g., weddings, paragliding events, snow and ice scenery).
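The core of such embedding-based free-text search is ranking keyframe embeddings by cosine similarity to a text-query embedding in a shared space. The sketch below is a minimal illustration of that ranking step only: the function name `top_k_shots` is ours, and random vectors stand in for real OpenCLIP encoder outputs, which the actual system would precompute.

```python
import numpy as np

def top_k_shots(query_emb, shot_embs, k=5):
    """Rank video shots by cosine similarity to a query embedding.

    In a diveXplore-style free-text search, `query_emb` would come from
    the OpenCLIP text encoder and `shot_embs` from the image encoder
    applied to shot keyframes; here we only assume both live in the
    same embedding space.
    """
    q = query_emb / np.linalg.norm(query_emb)
    s = shot_embs / np.linalg.norm(shot_embs, axis=1, keepdims=True)
    sims = s @ q                    # cosine similarity per shot
    order = np.argsort(-sims)[:k]   # indices of the k best-matching shots
    return order, sims[order]

# Toy example with random stand-in embeddings (dim 512, as in ViT-B/32).
rng = np.random.default_rng(0)
shots = rng.normal(size=(1000, 512))
query = shots[42] + 0.1 * rng.normal(size=512)  # query close to shot 42
idx, scores = top_k_shots(query, shots, k=3)
```

At VBS scale, the brute-force matrix product above would typically be replaced by an approximate-nearest-neighbor index (e.g., FAISS), but the ranking semantics are the same.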
Acknowledgements
This work was funded by the FWF Austrian Science Fund by grant P 32010-N38.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Schoeffmann, K., Nasirihaghighi, S. (2024). DiveXplore at the Video Browser Showdown 2024. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14557. Springer, Cham. https://doi.org/10.1007/978-3-031-53302-0_34
Print ISBN: 978-3-031-53301-3
Online ISBN: 978-3-031-53302-0