Journal of Intelligent and Robotic Systems, Volume 50, Issue 3, pp 257–295

Optimal Camera Placement for Automated Surveillance Tasks

  • Robert Bodor
  • Andrew Drenner
  • Paul Schrater
  • Nikolaos Papanikolopoulos (corresponding author)

Abstract

Camera placement has an enormous impact on the performance of vision systems, but the best placement to maximize performance depends on the purpose of the system. As a result, this paper focuses on the problem of task-specific camera placement. We propose a new camera placement method that optimizes views to provide the highest resolution images of the objects and motions in the scene that are critical for the performance of some specified task (e.g., motion recognition, visual metrology, or part identification). A general analytical formulation of the observation problem is developed in terms of the motion statistics of a scene and the resolution of observed actions, resulting in an aggregate observability measure. The goal of the system is to optimize, across multiple cameras, the aggregate observability of the set of actions performed in a defined area. The method considers dynamic and unpredictable environments, where the subject of interest changes over time. It does not attempt to measure or reconstruct surfaces or objects, and does not use an internal model of the subjects for reference. As a result, its core formulation differs significantly from camera placement solutions applied to problems such as inspection, reconstruction, or the Art Gallery class of problems. We present tests of the system’s optimized camera placement solutions using real-world data in both indoor and outdoor settings, as well as robot-based experiments with an ATRV-Jr all-terrain robot vehicle in an indoor setting.
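The abstract’s core idea, scoring candidate camera poses by how well they resolve the motions that actually occur in the scene, can be illustrated with a small sketch. The Python fragment below is an illustrative assumption, not the authors’ formulation: it scores a planar camera pose against weighted straight-line motion paths using a simplified observability measure (foreshortening over squared distance, clipped by a field-of-view test) and finds the best single pose by grid search. All names, parameters, and the measure itself are hypothetical.

```python
import numpy as np

def observability(camera, paths, fov=np.radians(60.0)):
    """Aggregate observability of a set of motion paths from one camera pose.

    camera: (x, y, heading) -- planar position and viewing direction.
    paths:  list of (midpoint, unit direction, weight) tuples; `weight` is
            the relative frequency of that motion in the scene statistics.
    """
    cx, cy, heading = camera
    view = np.array([np.cos(heading), np.sin(heading)])
    total = 0.0
    for mid, direction, weight in paths:
        to_target = mid - np.array([cx, cy])
        dist = np.linalg.norm(to_target)
        if dist < 1e-6:
            continue
        ray = to_target / dist
        # Paths outside the field of view contribute nothing.
        if np.dot(view, ray) < np.cos(fov / 2.0):
            continue
        # Foreshortening: motion perpendicular to the viewing ray projects
        # onto the image plane at full length; motion along the ray vanishes.
        foreshorten = np.sqrt(max(0.0, 1.0 - np.dot(ray, direction) ** 2))
        # Image resolution of the moving subject falls off with distance.
        total += weight * foreshorten / dist ** 2
    return total

def best_placement(paths, xs, ys, headings):
    """Exhaustive search over a coarse grid of candidate camera poses."""
    candidates = [(x, y, h) for x in xs for y in ys for h in headings]
    return max(candidates, key=lambda c: observability(c, paths))

# Two dominant motion paths (midpoint, unit direction, frequency weight).
paths = [(np.array([5.0, 5.0]), np.array([1.0, 0.0]), 0.7),
         (np.array([2.0, 8.0]), np.array([0.0, 1.0]), 0.3)]
grid = np.linspace(0.0, 10.0, 11)
headings = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
print(best_placement(paths, grid, grid, headings))
```

For the multi-camera aggregate optimization the paper targets, one plausible extension of this sketch is a greedy loop that places cameras one at a time, each new pose maximizing the observability of the paths least covered by the cameras placed so far.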

Keywords

Camera networks · Robot/camera placement · Observability · Optimization · Sensor networks · Vision-based robotics

Copyright information

© Springer Science+Business Media B.V. 2007

Authors and Affiliations

  • Robert Bodor
  • Andrew Drenner
  • Paul Schrater
  • Nikolaos Papanikolopoulos (corresponding author)

  Department of Computer Science and Engineering, University of Minnesota, Minneapolis, USA
