Abstract
Understanding movies and their structural patterns is a crucial task in decoding the craft of video editing. While previous works have developed tools for general analysis, such as detecting characters or recognizing cinematography properties at the shot level, less effort has been devoted to understanding the most basic video edit, the cut. This paper introduces the cut type recognition task, which requires modeling multi-modal information. To ignite research in this new task, we construct a large-scale dataset called MovieCuts, which contains 173,967 video clips labeled with ten cut types defined by professionals in the movie industry. We benchmark a set of audio-visual approaches, including some that address the problem's multi-modal nature. Our best model achieves 47.7% mAP, which suggests that the task is challenging and that highly accurate cut type recognition remains an open research problem. Advances in automatic cut type recognition can enable new experiences in the video editing industry, such as movie analysis for education, video re-editing, virtual cinematography, machine-assisted trailer generation, and machine-assisted video editing. Our data and code are publicly available: https://github.com/PardoAlejo/MovieCuts.
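The abstract frames cut type recognition as multi-label classification over ten classes, scored with mean Average Precision (mAP). The following is a minimal sketch of how such a metric is typically computed per class and then averaged; the array shapes and random placeholder data are illustrative assumptions, not the authors' actual evaluation code (see the linked repository for that).

```python
# Sketch of multi-label mAP evaluation, assuming ten cut-type classes
# as described in the abstract. Placeholder data stands in for real
# ground-truth labels and model confidences.
import numpy as np
from sklearn.metrics import average_precision_score

NUM_CUT_TYPES = 10  # ten cut types defined by industry professionals

# y_true: binary ground-truth matrix, one column per cut type.
# y_score: per-class model confidences in [0, 1].
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1000, NUM_CUT_TYPES))
y_score = rng.random(size=(1000, NUM_CUT_TYPES))

# Average Precision for each class, then the unweighted mean over classes.
per_class_ap = [
    average_precision_score(y_true[:, c], y_score[:, c])
    for c in range(NUM_CUT_TYPES)
]
mean_ap = float(np.mean(per_class_ap))
print(f"mAP: {100 * mean_ap:.1f}%")  # the paper's best model reports 47.7%
```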
Acknowledgements
This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Pardo, A., Heilbron, F.C., Alcázar, J.L., Thabet, A., Ghanem, B. (2022). MovieCuts: A New Dataset and Benchmark for Cut Type Recognition. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13667. Springer, Cham. https://doi.org/10.1007/978-3-031-20071-7_39
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20070-0
Online ISBN: 978-3-031-20071-7