
Moving Pixels in Static Cameras: Detecting Dangerous Situations due to Environment or People

  • Simone Calderara
  • Rita Cucchiara
  • Andrea Prati
Part of the Studies in Computational Intelligence book series (SCI, volume 282)

Summary

Dangerous situations arise in everyday life, and considerable effort has been devoted to exploiting technology to increase the level of safety in urban areas. Video analysis is one of the most important and fastest-emerging technologies for security purposes. Automatic video-surveillance systems commonly analyze the scene searching for moving objects, and well-known techniques exist to cope with this problem, commonly referred to as “change detection”. Every time a difference against a reference model is sensed, it must be analyzed so that the system can discriminate between a usual situation and a possible threat. When the sensor is a camera, motion is the key element for detecting changes, and moving objects must be correctly classified according to their nature.

In this context we can distinguish between two kinds of threat that can lead to dangerous situations in a video-surveilled environment. The first is due to environmental changes such as rain, fog, or smoke in the scene. These phenomena are sensed by the camera as moving pixels and, subsequently, as moving objects; they share common characteristics such as texture, shape, and color, and can be detected by observing the evolution of these features over time. The second situation arises when people are directly responsible for the danger: a “subject” acts in an unusual way, leading to an abnormal situation. From the sensor’s point of view, moving pixels are still observed, but specific features and time-dependent statistical models should be adopted to learn, and then correctly detect, unusual and dangerous behaviors.

With these premises, this chapter presents two case studies. The first describes the detection of environmental changes in the observed scene and details the problem of reliably detecting smoke in outdoor environments using both motion information and global image features, such as color information and texture energy computed by means of the wavelet transform. The second addresses the problem of detecting suspicious or abnormal people behaviors by means of trajectory analysis in a multiple-camera video-surveillance scenario. Specifically, a technique to infer and learn the concept of normality is proposed, jointly with a suitable statistical tool to model and robustly compare people trajectories.
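
To make the first case study concrete, the sketch below estimates a wavelet-based texture energy ratio between a background model and the current frame for an image block: since smoke smooths edges, a marked drop in high-frequency energy is a cue that the block may be covered by smoke. This is a minimal sketch under stated assumptions, not the chapter’s implementation; it assumes NumPy and the PyWavelets library, and the wavelet choice, block granularity, and threshold are illustrative.

    import numpy as np
    import pywt

    def wavelet_energy(block):
        """High-frequency texture energy of an image block via a single-level 2D DWT."""
        _, (cH, cV, cD) = pywt.dwt2(block.astype(float), 'db1')
        return np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2)

    def smoke_candidate(bg_block, cur_block, ratio_thresh=0.5, eps=1e-6):
        """Flag a block as a smoke candidate when its texture energy drops markedly
        with respect to the background model (smoke blurs the underlying texture)."""
        ratio = wavelet_energy(cur_block) / (wavelet_energy(bg_block) + eps)
        return ratio < ratio_thresh

For the second case study, a plain Dynamic Time Warping distance over sequences of movement directions illustrates how two trajectories can be compared with a circular (angular) local cost, so that directions of 359° and 1° are treated as close. Again a hedged sketch: the statistical trajectory model developed in the chapter is richer, and the function name and cost used here are illustrative only.

    def dtw_angular(seq_a, seq_b):
        """DTW distance between two angle sequences (in radians) with a circular local cost."""
        n, m = len(seq_a), len(seq_b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 1.0 - np.cos(seq_a[i - 1] - seq_b[j - 1])  # 0 when directions coincide
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]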

Keywords

Video Surveillance · Dynamic Time Warping · Energy Ratio · Dangerous Situation · Static Camera
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Simone Calderara (1)
  • Rita Cucchiara (1)
  • Andrea Prati (2)
  1. D.I.I., University of Modena and Reggio Emilia
  2. Di.S.M.I., University of Modena and Reggio Emilia
