An Autonomous Surveillance Vehicle for People Tracking

  • C. Piciarelli
  • C. Micheloni
  • G. L. Foresti
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3617)

Abstract

In this paper, the problem of surveillance and security of indoor environments is addressed through the development of an autonomous surveillance vehicle (ASV). The ASV has been designed to perform object detection by adopting an image alignment method followed by a change detection operation. Hence, in addition to classical robotic tasks (e.g., navigation and obstacle avoidance), the tracking of objects (e.g., persons) moving in an indoor environment is considered. The tracking procedure allows the ASV to keep the objects of interest in the centre of the image and, in specific cases, to focus the image acquisition on particular parts of the object (e.g., the face of a person) for recognition purposes. Experiments have been performed on different real scenarios, both where no object moves inside the monitored scene and where at least one moving object is present in the scene.
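The detection pipeline described above (align consecutive frames to compensate for the vehicle's ego-motion, then difference and threshold them) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: it assumes grayscale input frames and uses OpenCV, with ORB feature matching, a RANSAC-estimated homography, and a fixed difference threshold chosen purely for illustration.

    import cv2
    import numpy as np

    def detect_changes(prev_frame, curr_frame, diff_thresh=30):
        """Align prev_frame to curr_frame, then return a binary change mask.

        Both frames are expected to be single-channel (grayscale) images.
        """
        # Image alignment: match ORB keypoints between the two frames
        orb = cv2.ORB_create(500)
        kp1, des1 = orb.detectAndCompute(prev_frame, None)
        kp2, des2 = orb.detectAndCompute(curr_frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        # Estimate a homography with RANSAC to reject outlier matches
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp the previous frame so that camera motion is compensated
        h, w = curr_frame.shape[:2]
        aligned_prev = cv2.warpPerspective(prev_frame, H, (w, h))

        # Change detection: threshold the absolute frame difference
        diff = cv2.absdiff(curr_frame, aligned_prev)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        return mask

In such a scheme, the resulting mask would highlight independently moving objects (e.g., people), whose image coordinates could then drive the tracking behaviour that keeps the target centred in the field of view.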

Keywords

Mobile Robot · Indoor Environment · Object Detection · Autonomous Underwater Vehicle · Consecutive Frame


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • C. Piciarelli (1)
  • C. Micheloni (1)
  • G. L. Foresti (1)
  1. Department of Computer Science, University of Udine, Udine, Italy
