Multimedia Systems, Volume 12, Issue 3, pp 255–268

Functional calibration for pan-tilt-zoom cameras in hybrid sensor networks

  • Christopher R. Wren
  • Ugur Murat Erdem
  • Ali J. Azarbayejani
Regular paper

Abstract

Wide-area context awareness is a crucial enabling technology for next-generation smart buildings and surveillance systems. It is not practical to gather this context awareness by covering an entire building with cameras; however, the significant gaps in coverage left by a sparse camera installation can make it very difficult to infer the missing information. As a solution we advocate a class of hybrid perceptual systems that build a comprehensive model of activity in a large space, such as a building, by merging contextual information from a dense network of ultra-lightweight sensor nodes with video from a sparse network of cameras. In this paper we explore the task of automatically recovering the relative geometry between a pan-tilt-zoom camera and a network of one-bit motion detectors. We present results both for the recovery of geometry alone and for the recovery of geometry jointly with simple activity models. Because we do not believe a metric calibration is necessary, or even entirely useful, for this task, we formulate and pursue a novel goal that we term functional calibration: a blending of geometry estimation and simple behavioral model discovery. Accordingly, results are evaluated by measuring the ability of the system to automatically foveate targets in a large, non-convex space, rather than by measuring, for example, pixel reconstruction error.
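To make the notion of functional calibration concrete, the sketch below illustrates the flavor of the task in Python: each one-bit motion detector is associated with the pan-tilt settings that centered a target in the camera while that detector fired, and the resulting per-detector preset is later reused to foveate that detector's region on demand. This is a minimal illustration under assumed data, not the paper's joint geometry-and-activity estimation; the names Observation, calibrate, and foveate are hypothetical.

```python
# Hypothetical sketch: learn a (pan, tilt) preset per one-bit motion detector
# by averaging the camera poses that centered a target while that detector fired.
# All names and data here are illustrative, not taken from the paper.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean


@dataclass
class Observation:
    detector_id: int   # one-bit motion detector that fired
    pan: float         # camera pan (degrees) when the target was centered
    tilt: float        # camera tilt (degrees) when the target was centered


def calibrate(observations):
    """Map each detector to the mean (pan, tilt) that foveated its region."""
    poses = defaultdict(list)
    for obs in observations:
        poses[obs.detector_id].append((obs.pan, obs.tilt))
    return {
        det: (mean(p for p, _ in samples), mean(t for _, t in samples))
        for det, samples in poses.items()
    }


def foveate(presets, detector_id):
    """Return the PTZ preset that aims the camera at a detector's region."""
    return presets.get(detector_id)


# Example: three sightings associated with detector 7
data = [
    Observation(7, 31.0, -12.5),
    Observation(7, 29.4, -11.8),
    Observation(7, 30.2, -12.1),
]
presets = calibrate(data)
print(foveate(presets, 7))   # approximately (30.2, -12.13)
```

In the paper's terms, success is judged functionally: whether such presets actually bring targets into view across a large, non-convex space, rather than by pixel reconstruction error or a metric calibration.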

Categories and Subject Descriptors

  • I.2.10 [Vision and Scene Understanding]: Motion
  • I.2.10 [Vision and Scene Understanding]: Video analysis
  • I.4.8 [Scene Analysis]: Sensor fusion
  • I.2.9 [Robotics]: Sensors
  • C.3 [Special-purpose and application-based systems]: Real-time and embedded systems

General Terms

Sensor Networks · Video Surveillance · Adaptive Systems

Copyright information

© Springer-Verlag 2006

Authors and Affiliations

  • Christopher R. Wren (1)
  • Ugur Murat Erdem (2)
  • Ali J. Azarbayejani (3)

  1. MERL Research, Cambridge, USA
  2. Boston University, Boston, USA
  3. MERL Technology, Cambridge, USA
