Scene Modelling and Classification Using Learned Spatial Relations
This paper describes a method for building visual scene models from video data using quantized descriptions of motion. The method enables us to make meaningful statements about video scenes as a whole (such as “this video is like that video”) and about regions within those scenes (such as “this part of this scene is similar to this part of that scene”). We do this through unsupervised clustering of simple yet novel motion descriptors, which provide a quantized representation of gross motion within scene regions. Using these we characterise the dominant patterns of motion, and then group spatial regions based on both proximity and local motion similarity to define areas with particular motion characteristics. We are able to process scenes in which objects are difficult to detect and track owing to variable frame rate, poor video quality or occlusion, and to identify regions which differ in usage but not in appearance (such as frequently used paths across open space). We demonstrate our method on 50 videos spanning very different scene types: indoor scenarios with unpredictable, unconstrained motion; junction scenes; road and path scenes; and open squares or plazas. We show that these scenes can be clustered using our representation, and that incorporating learned spatial relations into the representation enables more effective clustering.
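The abstract describes two ingredients: a quantized descriptor of gross motion within a scene region, and a grouping of regions that combines spatial proximity with local motion similarity. The following is only a minimal illustrative sketch of how such a descriptor and combined distance might look; the bin count, stationary-motion threshold and the `alpha` weighting are assumptions for illustration, not values from the paper.

```python
import math

# Illustrative sketch only: n_bins, min_mag and alpha below are
# assumed parameters, not the paper's actual settings.

def quantize_motion(dx, dy, n_bins=8, min_mag=0.5):
    """Map a motion vector to a quantized bin:
    0 = near-stationary, 1..n_bins = direction sector."""
    if math.hypot(dx, dy) < min_mag:
        return 0
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return 1 + int(angle / (2 * math.pi / n_bins))

def cell_descriptor(vectors, n_bins=8):
    """Normalised histogram of quantized motion observed in one cell."""
    hist = [0.0] * (n_bins + 1)
    for dx, dy in vectors:
        hist[quantize_motion(dx, dy, n_bins)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def cell_distance(a, b, alpha=0.5):
    """Distance between cells a, b = (x, y, hist): a weighted sum of
    spatial distance and L1 distance between motion histograms."""
    spatial = math.hypot(a[0] - b[0], a[1] - b[1])
    motion = sum(abs(p - q) for p, q in zip(a[2], b[2]))
    return alpha * spatial + (1.0 - alpha) * motion

# Two adjacent cells with opposite dominant motion end up well
# separated under the combined distance despite their proximity.
right = (0, 0, cell_descriptor([(2.0, 0.0)] * 10))
left = (1, 0, cell_descriptor([(-2.0, 0.0)] * 10))
print(cell_distance(right, left))  # → 1.5
```

Clustering cells under a distance of this shape is what lets regions that look identical but are used differently (e.g. a worn path across a plaza) fall into separate groups.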