Learning Contextual Variations for Video Segmentation
This paper deals with video segmentation in vision systems. We focus on the maintenance of background models in long-term videos of changing environments, which remains a real challenge in video surveillance. We propose an original weakly supervised method for learning contextual variations in videos. Our approach uses a clustering algorithm to automatically identify different contexts based on image content analysis. Then, state-of-the-art video segmentation algorithms (e.g. codebook, MoG) are trained on each cluster. The goal is to achieve a dynamic selection of background models. We have evaluated our approach on a long video sequence (24 hours). The presented results show the segmentation improvement of our approach compared to codebook and MoG.
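The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the global mean intensity stands in for the paper's image-content features (e.g. color coherence vectors), a tiny 1-D k-means stands in for its density-based clustering, and a per-cluster mean image stands in for the codebook/MoG background models. All names and thresholds here are illustrative assumptions.

```python
import numpy as np

def frame_feature(frame):
    # Crude context descriptor: global mean intensity.
    # (Stand-in for richer image-content features used in the paper.)
    return float(frame.mean())

def cluster_contexts(features, k=2, iters=20):
    # Minimal 1-D k-means over per-frame features.
    # (Stand-in for the density-based clustering used in the paper.)
    feats = np.asarray(features, dtype=float)
    centers = np.quantile(feats, np.linspace(0.0, 1.0, k))  # deterministic init
    for _ in range(iters):
        labels = np.argmin(np.abs(feats[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean()
    return centers, labels

class ContextualBackground:
    """One background model per context cluster (here: a mean image)."""
    def __init__(self, centers):
        self.centers = centers
        self.models = {}  # cluster label -> (sum of frames, frame count)

    def train(self, frame, label):
        s, n = self.models.get(int(label),
                               (np.zeros_like(frame, dtype=float), 0))
        self.models[int(label)] = (s + frame, n + 1)

    def segment(self, frame, thresh=30.0):
        # Dynamic model selection: pick the model of the nearest context.
        label = int(np.argmin(np.abs(self.centers - frame_feature(frame))))
        s, n = self.models[label]
        background = s / n
        return np.abs(frame - background) > thresh  # foreground mask
```

A usage example with synthetic "day" (bright) and "night" (dark) frames: cluster the training frames into two contexts, train one model per context, then segment a night frame containing a bright foreground patch; the night model is selected automatically, so the patch is detected without the day/night illumination change triggering false foreground.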
Keywords: video segmentation · weakly supervised learning · context awareness · video surveillance · cognitive vision
- 2. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pp. 246–252 (1999)
- 6. Pass, G., Zabih, R., Miller, J.: Comparing images using color coherence vectors. In: ACM International Conference on Multimedia, pp. 65–73. ACM Press, New York (1997)
- 7. Ester, M., Kriegel, H.P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining, Portland, pp. 226–231 (1996)