Object Detection for a Mobile Robot Using Mixed Reality

  • Hua Chen
  • Oliver Wulf
  • Bernardo Wagner
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4270)

Abstract

This paper describes a novel Human-Robot Interface (HRI) that uses a Mixed Reality (MR) space to enhance and visualize object detection for mobile robot navigation. The MR space combines the 3D virtual model of a mobile robot and its navigation environment with real data such as physical building measurements, the robot's position acquired in real time, and laser-scanned points. The large number of laser-scanned points is rapidly segmented as belonging either to the background (i.e., the fixed building) or to newly appeared objects by comparing the points with the 3D virtual model. This segmentation not only accelerates the object detection process but also facilitates subsequent object recognition by significantly reducing redundant sensor data. The MR space also helps human operators achieve effective surveillance through real-time visualization of the object detection results. The approach can be applied in a variety of mobile robot applications in a known environment. Experimental results verify the validity and feasibility of the proposed approach.
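The central step summarized above, deciding whether each laser-scanned point belongs to the fixed building or to a newly appeared object, can be illustrated with a minimal sketch. Assuming the 3D virtual building model is available as a sampled point set in the same frame as the (already localized) robot, a nearest-neighbour distance test against the model stands in for the paper's model comparison; the function names, the 0.10 m threshold, and the toy data below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): label laser-scanned points as
# "background" (explained by the known 3D building model) or "new object"
# using nearest-neighbour distance to points sampled from the model surface.
import numpy as np
from scipy.spatial import cKDTree

def segment_scan(scan_points, model_points, threshold=0.10):
    """Return a boolean mask: True = background point, False = new object.

    scan_points  : (N, 3) laser-scanned points in the model frame
    model_points : (M, 3) points sampled from the 3D building model
    threshold    : assumed max distance (metres) for a background match
    """
    tree = cKDTree(model_points)
    dist, _ = tree.query(scan_points)   # distance to the nearest model point
    return dist <= threshold

if __name__ == "__main__":
    # Toy example: a flat "wall" as the model, plus a small box in front of it.
    grid = np.stack(np.meshgrid(np.linspace(0, 5, 50),
                                np.linspace(0, 3, 30)), axis=-1).reshape(-1, 2)
    model = np.column_stack([grid[:, 0], np.zeros(len(grid)), grid[:, 1]])
    scan = np.vstack([model + np.random.normal(0, 0.01, model.shape),      # wall hits
                      np.random.uniform([2, 0.5, 0], [2.5, 1.0, 0.5], (200, 3))])  # box
    is_background = segment_scan(scan, model)
    print(f"background points: {is_background.sum()}, "
          f"new-object points: {(~is_background).sum()}")
```

In this toy run the noisy wall returns fall within the threshold and are discarded as background, while the box points survive as candidate object points, which is the data reduction the abstract attributes to the segmentation step.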

Keywords

Mobile Robot · Virtual Environment · Object Detection · Mixed Reality · Virtual Reality Modeling Language



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hua Chen 1
  • Oliver Wulf 2
  • Bernardo Wagner 2
  1. Learning Lab Lower Saxony (L3S) Research Center, Hanover, Germany
  2. Institute for Systems Engineering, University of Hanover, Germany