Organizing Video Streams for Clustering and Estimation of Popular Scenes
The widespread diffusion of mobile devices with embedded cameras has opened new challenges in the automatic understanding of video streams acquired by multiple users during events such as sport matches, expos, and concerts. Among these goals is the identification of the most relevant and popular visual contents (i.e., where users look). The popularity of a visual content is an important cue exploitable in several fields, including the estimation of the mood of the crowd attending an event and the estimation of the interest in parts of a cultural heritage site. During live social events, people capture and share videos related to the event. The popularity of a visual content can be obtained through the “visual consensus” among the multiple video streams acquired by the different users’ devices. In this paper we address the problem of detecting and summarizing the “popular scenes” captured by users with mobile cameras during events. For this purpose, we have developed a framework called RECfusion, in which the key popular scenes of multiple streams are identified over time. The proposed system generates a video that captures the interests of the crowd starting from a set of videos, by considering scene content popularity. The frames composing the final popular video are automatically selected from the different video streams by considering the scene recorded by the highest number of users’ devices (i.e., the most popular scene).
Keywords: Video analysis · Clustering · Social cameras · Scene understanding
This work has been performed in collaboration with Telecom Italia JOL WAVE in the project FIR2014-UNICT-DFA17D.
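The abstract describes selecting, at each time instant, the frame recorded by the highest number of devices. The sketch below is an assumed simplification of that scene-popularity step, not the paper's actual pipeline: frames from all devices are greedily clustered by color-histogram similarity, and a representative of the largest cluster (the most popular scene) is kept. The function names, the L1 distance, and the `threshold` value are illustrative choices.

```python
import numpy as np


def color_histogram(frame, bins=8):
    """Per-channel color histogram of an HxWx3 uint8 frame, normalized to sum to 1."""
    hist = np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(frame.shape[-1])
    ]).astype(float)
    return hist / hist.sum()


def most_popular_frame(frames, threshold=0.3):
    """Greedy clustering of frames by L1 histogram distance.

    Returns the index of a representative frame from the largest
    cluster, i.e. the scene captured by the most devices.
    """
    hists = [color_histogram(f) for f in frames]
    clusters = []  # each cluster is a list of frame indices
    for i, h in enumerate(hists):
        for cluster in clusters:
            # compare against the cluster's first member (its representative)
            if np.abs(h - hists[cluster[0]]).sum() < threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    largest = max(clusters, key=len)
    return largest[0]
```

In practice the per-frame descriptor and the clustering step would need to be robust to device-dependent color rendition and viewpoint changes; this sketch only illustrates the majority-vote selection idea.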