Generalized Multi-sensor Planning

  • Anurag Mittal
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3951)


Vision systems for various tasks are increasingly being deployed. Although significant effort has gone into improving the algorithms for such tasks, there has been relatively little work on determining optimal sensor configurations. This paper addresses this need. We specifically address and enhance the state-of-the-art in the analysis of scenarios where there are dynamically occurring objects capable of occluding each other. The visibility constraints for such scenarios are analyzed in a multi-camera setting. Also analyzed are other static constraints such as image resolution and field-of-view, and algorithmic requirements such as stereo reconstruction, face detection and background appearance. Theoretical analysis with the proper integration of such visibility and static constraints leads to a generic framework for sensor planning, which can then be customized for a particular task. Our analysis can be applied to a variety of applications, especially those involving randomly occurring objects, including surveillance and industrial automation. Several examples illustrate the wide applicability of the approach.
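The framework described above integrates hard static constraints (field of view, resolution) with a probabilistic visibility term for randomly occurring occluders into a single quality function, which is then optimized over candidate camera configurations. The following is an illustrative sketch of that idea, not the paper's actual formulation: every function name, parameter value, and the Poisson-style occlusion model are assumptions chosen for demonstration.

```python
import math
import itertools

def satisfies_static(cam, point, fov_deg=90.0, max_range=8.0):
    """Static constraints: field of view and a resolution proxy
    (a maximum viewing distance). cam = (x, y, heading_deg)."""
    cx, cy, heading = cam
    dx, dy = point[0] - cx, point[1] - cy
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > max_range:   # too far: resolution too low
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - heading + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def visibility_prob(cam, point, occluder_density=0.1):
    """P(line of sight unoccluded), decaying with path length under an
    assumed Poisson model of randomly occurring occluders."""
    dist = math.hypot(point[0] - cam[0], point[1] - cam[1])
    return math.exp(-occluder_density * dist)

def quality(cameras, sample_points):
    """Mean probability that at least one camera captures each point:
    a camera contributes only if its static constraints hold."""
    total = 0.0
    for p in sample_points:
        miss = 1.0
        for cam in cameras:
            if satisfies_static(cam, p):
                miss *= 1.0 - visibility_prob(cam, p)
        total += 1.0 - miss
    return total / len(sample_points)

# Exhaustive search over a tiny discretized placement space; a real
# system would use a global optimizer such as simulated annealing.
candidates = [(x, y, h) for x in (0.0, 10.0) for y in (0.0, 10.0)
              for h in (0.0, 90.0, 180.0, 270.0)]
points = [(float(x), float(y)) for x in range(1, 10, 2)
          for y in range(1, 10, 2)]
best = max(itertools.combinations(candidates, 2),
           key=lambda cams: quality(cams, points))
print(best, round(quality(best, points), 3))
```

The separation into a boolean static check and a probabilistic visibility term mirrors the hard/soft constraint distinction in the keywords: static constraints gate a camera's contribution, while occlusion only degrades it.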


Keywords: Static Constraint; Quality Function; Face Detection; Soft Constraint; Stereo Match



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Anurag Mittal
  1. Dept. of Computer Science and Engg., Indian Institute of Technology Madras, Chennai, India
