
Virtual Vision

Virtual Reality Subserving Computer Vision Research for Camera Sensor Networks

Chapter in: Distributed Video Sensor Networks

Abstract

Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called “Virtual Vision”, which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
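To make the camera-collaboration idea concrete, the following is a minimal Python sketch of one plausible control step under stated assumptions: free cameras are greedily assigned to the nearest unobserved pedestrians of interest and then persist with their targets. The class names, the 2D geometry, and the greedy policy are illustrative assumptions, not the multi-camera control strategies actually developed in the chapter.

# Hypothetical sketch: greedy assignment of free cameras to unobserved
# pedestrians. Illustrative only; not the authors' control strategy.
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class Camera:
    cam_id: int
    x: float
    y: float
    target: Optional[int] = None   # id of the pedestrian being observed, if any

@dataclass
class Pedestrian:
    ped_id: int
    x: float
    y: float

def assign_cameras(cameras, pedestrians):
    """Greedily pair each free camera with the closest unobserved pedestrian."""
    observed = {c.target for c in cameras if c.target is not None}
    free_peds = [p for p in pedestrians if p.ped_id not in observed]
    for cam in cameras:
        if cam.target is not None or not free_peds:
            continue   # camera already busy, or everyone is covered
        best = min(free_peds, key=lambda p: math.hypot(cam.x - p.x, cam.y - p.y))
        cam.target = best.ped_id   # camera now persistently observes this pedestrian
        free_peds.remove(best)

cams = [Camera(0, 0.0, 0.0), Camera(1, 10.0, 0.0)]
peds = [Pedestrian(100, 1.0, 1.0), Pedestrian(101, 9.0, 2.0)]
assign_cameras(cams, peds)
for c in cams:
    print(f"camera {c.cam_id} -> pedestrian {c.target}")

In the virtual vision setting, a loop like this would be driven by pedestrian positions estimated from the synthetic video feeds rather than by ground-truth coordinates.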


Notes

  1. With regard to software, a virtual vision simulator consists of an environmental model, character models, an animation engine, and a rendering engine. Most commercial modeling/animation systems enable users to create 3D virtual scenes, including virtual buildings populated by virtual characters, and they incorporate rendering subsystems to illuminate and visualize the scenes. The animation subsystem can animate the virtual characters, but autonomous pedestrian animation is an area of active research in the computer animation community and there are as yet no adequate commercial solutions. (A minimal code sketch of these components appears after these notes.)

  2. We are currently validating our virtual vision paradigm in a collaborative project with the University of California, Riverside, through the development of a virtual vision simulator that emulates a large-scale physical camera network that they have deployed.
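As a rough illustration of how the four components listed in Note 1 fit together, the following Python sketch wires stub versions of them into a fixed-timestep simulation loop that yields one synthetic frame per virtual camera. All class and method names are hypothetical assumptions, not the authors' actual software design.

# Hypothetical outline of the simulator components named in Note 1.
class Environment:
    """Stand-in for the 3D environmental model (e.g., a train station)."""

class Pedestrian:
    """Stand-in for an autonomous virtual character."""
    def __init__(self, ped_id):
        self.ped_id = ped_id

class AnimationEngine:
    """Advances each autonomous pedestrian through the environment."""
    def update(self, ped, env, dt):
        pass  # behavioral and locomotion updates would go here

class Renderer:
    """Illuminates the scene and synthesizes one video frame per camera."""
    def render(self, env, peds, camera_id):
        return f"frame from {camera_id} showing {len(peds)} pedestrians"

class VirtualVisionSimulator:
    """Ties the four components together in a fixed-timestep loop."""
    def __init__(self, env, peds, animator, renderer):
        self.env, self.peds = env, peds
        self.animator, self.renderer = animator, renderer

    def step(self, dt, camera_ids):
        """Advance the virtual world, then emulate each camera's video feed."""
        for ped in self.peds:
            self.animator.update(ped, self.env, dt)
        return {cid: self.renderer.render(self.env, self.peds, cid)
                for cid in camera_ids}

sim = VirtualVisionSimulator(Environment(), [Pedestrian(0), Pedestrian(1)],
                             AnimationEngine(), Renderer())
print(sim.step(dt=1 / 30, camera_ids=["cam0", "cam1"]))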


Acknowledgements

We thank Wei Shao for developing and implementing the train station simulator and Mauricio Plaza-Villegas for his valuable contributions. We thank Tom Strat, formerly of DARPA, for his generous support and encouragement.

Author information


Correspondence to Demetri Terzopoulos.



Copyright information

© 2011 Springer-Verlag London Limited

About this chapter

Cite this chapter

Terzopoulos, D., Qureshi, F.Z. (2011). Virtual Vision. In: Bhanu, B., Ravishankar, C., Roy-Chowdhury, A., Aghajan, H., Terzopoulos, D. (eds) Distributed Video Sensor Networks. Springer, London. https://doi.org/10.1007/978-0-85729-127-1_11


  • DOI: https://doi.org/10.1007/978-0-85729-127-1_11

  • Publisher Name: Springer, London

  • Print ISBN: 978-0-85729-126-4

  • Online ISBN: 978-0-85729-127-1

  • eBook Packages: Computer Science (R0)
