Abstract
Visual sensors provide only uncertain and partial knowledge of a scene. In this article, we present a scene knowledge representation that makes the integration and fusion of new, uncertain, and partial sensor measurements possible. It is based on a mixture of stochastic and set-membership models. We consider that, for a large class of applications, an approximate representation is sufficient to build a preliminary map of the scene. Our approximation mainly relies on ellipsoidal calculus, by means of a normal assumption for stochastic laws and ellipsoidal outer or inner bounding for uniform laws. These approximations allow us to build an efficient estimation process that integrates visual data online. Based on this estimation scheme, optimal exploratory motions of the camera can be automatically determined. Real-time experimental results validating our approach are finally given.
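The fusion of uncertain measurements under the normal assumption mentioned above can be sketched with the standard information-form rule for combining two independent Gaussian estimates of the same state. This is a minimal illustration of the general idea, not the authors' implementation; the function name and the numbers are hypothetical:

```python
import numpy as np

def fuse_gaussian(mu1, P1, mu2, P2):
    """Fuse two independent Gaussian estimates (mean, covariance)
    of the same state using the information (inverse-covariance) form."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)          # fused covariance shrinks
    mu = P @ (I1 @ mu1 + I2 @ mu2)      # precision-weighted mean
    return mu, P

# Two uncertain estimates of a 2-D point (hypothetical values):
mu_a, P_a = np.array([1.0, 0.0]), np.diag([4.0, 1.0])
mu_b, P_b = np.array([3.0, 0.0]), np.diag([1.0, 4.0])
mu, P = fuse_gaussian(mu_a, P_a, mu_b, P_b)
# Along each axis the fused mean leans toward the more certain
# estimate, and the fused covariance is smaller than either input.
```

The same precision-weighted structure underlies recursive Gaussian filtering, which is why such a representation supports efficient online integration of new measurements.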
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Flandin, G., Chaumette, F. (2002). Visual Data Fusion for Objects Localization by Active Vision. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds) Computer Vision — ECCV 2002. ECCV 2002. Lecture Notes in Computer Science, vol 2353. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-47979-1_21
Print ISBN: 978-3-540-43748-2
Online ISBN: 978-3-540-47979-6