
Autonomous Robots, Volume 8, Issue 2, pp 173–190

On the Automated Construction of Image-Based Maps

  • Eric Bourque
  • Gregory Dudek

Abstract

For many tasks, we wish to record or recover the description of a remote environment so that it can be inspected by a person. This is the problem we address in this paper. Rather than recovering a geometric description of an environment, as many robotics systems attempt to do, we seek to recover a model of an environment in terms of its appearance from a set of carefully selected viewpoints. Our hope is that this type of model is both more accessible to humans for many realistic tasks and more readily achieved with automated systems. These viewpoints are locations in the environment associated with views containing maximal visual interest. This approach to environment representation is analogous to image compression. Our goal is to obtain a set of representative views resembling those that would be selected by a human observer given the same task. Our computational procedure is inspired by models of human visual attention appearing in the literature on human psychophysics. We make use of the underlying edge structure of a scene, as it is largely unaffected by variations in illumination. Our implementation uses a mobile robot to traverse the environment and then builds an image-based virtual representation of the environment, keeping only the views whose attention responses were highest. We demonstrate the effectiveness of our attention operator both on single images and in viewpoint selection within an unknown environment.

Keywords: environment modelling · image-based virtual reality · mobile robotics · iconic maps · visual attention · exploration · virtual reality
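The abstract describes selecting the viewpoints whose attention response is highest, with an interest measure built on the scene's edge structure. As a rough illustration of that selection step only, the following is a minimal sketch in Python (using OpenCV and NumPy, which are not part of the paper) that scores each captured view by Canny edge density, a crude stand-in for the paper's actual attention operator, and keeps the top-scoring views.

```python
import cv2
import numpy as np


def edge_interest(image_bgr, low=50, high=150):
    """Score a view by the density of Canny edges.

    Edge density is only a simple, illumination-tolerant proxy for
    'visual interest'; the paper's attention operator is more elaborate.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    return float(np.count_nonzero(edges)) / edges.size


def select_keyframes(frames, k=10):
    """Return the indices of the k views with the highest interest score."""
    scored = sorted(enumerate(frames),
                    key=lambda pair: edge_interest(pair[1]),
                    reverse=True)
    return [index for index, _ in scored[:k]]
```

In the paper's setting, `frames` would be the images captured as the robot traverses the environment, and the returned indices identify candidate viewpoints to retain in the image-based map.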



Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • Eric Bourque, Mobile Robotics Laboratory, Centre for Intelligent Machines, McGill University, Montréal, Québec, Canada
  • Gregory Dudek, Mobile Robotics Laboratory, Centre for Intelligent Machines, McGill University, Montréal, Québec, Canada
