
A General Method for Sensor Planning in Multi-Sensor Systems: Extension to Random Occlusion

Published in: International Journal of Computer Vision

Abstract

Systems utilizing multiple sensors are required in many domains. In this paper, we focus on applications in which dynamic objects appear at random and the system is employed to capture certain user-specified characteristics of those objects. For such systems, we address two tasks: defining measures that evaluate system performance, and determining good sensor configurations that maximize those measures.

We introduce a constraint in sensor planning that has not been addressed earlier: visibility in the presence of random occluding objects. Occlusion causes random loss of object capture from certain views and necessitates the use of other sensors that have visibility of the object. Two techniques are developed to analyze such visibility constraints: a probabilistic approach that determines "average" visibility rates, and a deterministic approach that addresses worst-case scenarios. Apart from this constraint, other important constraints include image resolution, field of view, capture orientation, and algorithmic constraints such as stereo matching and background appearance. These constraints are integrated via a probabilistic framework that allows one to reason about different occlusion events and combines the various multi-view capture and visibility constraints in a natural way. Integrating the resulting capture quality measure across the region of interest yields a measure of the effectiveness of a sensor configuration, and maximizing this measure yields sensor configurations best suited to a given scenario.

The approach can be customized for many multi-sensor applications, and our contribution is especially significant for those involving randomly occurring objects capable of occluding each other. These include security systems for surveillance in public places, industrial automation, and traffic monitoring. Several examples illustrate this versatility by applying the approach to a diverse set of, sometimes multiple, system objectives.
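The evaluation pipeline described above (per-point capture quality under random occlusion, integrated over the region of interest, then maximized over candidate configurations) can be illustrated with a small sketch. Everything below is a hypothetical simplification, not the paper's model: occluder centres are assumed to follow a spatial Poisson process, so the probability that a sensor's line of sight stays clear decays exponentially with ray length, and all function names are ours.

```python
import math
from itertools import product

# Hypothetical sketch (not the paper's exact model): occluder centres are
# assumed to follow a spatial Poisson process with density `occ_density`,
# so the probability that a view ray of length d stays clear is
# exp(-occ_density * obj_width * d), a standard "free path" argument.

def clear_prob(sensor, point, occ_density, obj_width=0.5):
    """Probability that the line of sight from sensor to point is unoccluded."""
    return math.exp(-occ_density * obj_width * math.dist(sensor, point))

def capture_quality(sensors, point, occ_density):
    """A point is captured if at least one sensor has a clear view of it."""
    p_all_blocked = 1.0
    for s in sensors:
        p_all_blocked *= 1.0 - clear_prob(s, point, occ_density)
    return 1.0 - p_all_blocked

def config_score(sensors, region, occ_density=0.05):
    """Average capture quality over a discretized region of interest."""
    return sum(capture_quality(sensors, p, occ_density) for p in region) / len(region)

region = list(product(range(10), range(10)))   # 10x10 grid as region of interest
two_cams = [(0.0, 0.0), (9.0, 9.0)]

# Adding a sensor can only improve the measure, and heavier occlusion
# can only degrade it:
assert config_score(two_cams, region) >= config_score(two_cams[:1], region)
assert config_score(two_cams, region, 0.3) < config_score(two_cams, region, 0.05)
```

Maximizing `config_score` over candidate sensor placements (e.g. with a global optimizer such as simulated annealing, since the objective is typically non-convex) then yields a configuration tailored to the occlusion statistics of the scene.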



Author information


Correspondence to Anurag Mittal.

Additional information

Most of this work was done while A. Mittal was with Real-Time Vision and Modeling Department, Siemens Corporate Research, Princeton, NJ 08540.


Cite this article

Mittal, A., Davis, L.S. A General Method for Sensor Planning in Multi-Sensor Systems: Extension to Random Occlusion. Int J Comput Vis 76, 31–52 (2008). https://doi.org/10.1007/s11263-007-0057-9
