
“Anywhere Augmentation”: Towards Mobile Augmented Reality in Unprepared Environments

  • Tobias Höllerer
  • Jason Wither
  • Stephen DiVerdi
Part of the Lecture Notes in Geoinformation and Cartography book series (LNGC)

Abstract

We introduce the term “Anywhere Augmentation” to refer to the idea of linking location-specific computing services with the physical world, making them readily and directly available in any situation and location. This chapter presents a novel approach to “Anywhere Augmentation” based on efficient human input for wearable computing and augmented reality (AR). Current mobile and wearable computing technologies, as found in many industrial and governmental service applications, do not routinely integrate the services they provide with the physical world. Major limitations in the computer’s general scene understanding abilities and the infeasibility of instrumenting the whole globe with a unified sensing and computing environment prevent progress in this area. Alternative approaches must be considered.

We present a mobile augmented reality system for outdoor annotation of the real world. To reduce user burden, we use openly available aerial photographs in addition to the wearable system’s usual data sources (position, orientation, camera, and user input). This allows the user to accurately annotate 3D features from a single position by aligning features in both their first-person viewpoint and in the aerial view. At the same time, aerial photographs provide a rich set of features that can be automatically extracted to create best guesses of intended annotations with minimal user input. Thus, user interaction is often as simple as casting a ray from the first-person view, and then confirming the feature from the aerial view. We examine three types of aerial photograph features (corners, edges, and regions) that are suitable for a wide variety of useful mobile augmented reality applications. By using aerial photographs in combination with wearable augmented reality, we are able to achieve much more accurate 3D annotation positions from a single user location than was previously possible.
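The core interaction described above, casting a ray in the map plane from the user's tracked position and snapping it to the nearest automatically extracted aerial-photograph feature, can be sketched as follows. This is a minimal 2D illustration, not the chapter's implementation; the feature coordinates, the snapping tolerance, and the function name are hypothetical.

```python
import math

def snap_ray_to_feature(user_pos, heading_deg, features, max_offset=3.0):
    """Cast a 2D ray from user_pos along heading_deg (degrees clockwise
    from north, map frame) and return the extracted feature closest to
    the ray, as a best guess for the annotation the user intends.
    Returns None if no feature lies within max_offset meters of the ray.
    """
    dx = math.sin(math.radians(heading_deg))  # east component of ray
    dy = math.cos(math.radians(heading_deg))  # north component of ray
    best, best_off = None, max_offset
    for fx, fy in features:
        vx, vy = fx - user_pos[0], fy - user_pos[1]
        t = vx * dx + vy * dy            # signed distance along the ray
        if t <= 0:                       # feature is behind the user
            continue
        off = abs(vx * dy - vy * dx)     # perpendicular distance to ray
        if off < best_off:
            best, best_off = (fx, fy), off
    return best

# Example: user at the origin facing due north; corner features
# extracted from an aerial photograph (hypothetical coordinates, meters).
corners = [(1.0, 50.0), (-20.0, 30.0), (0.5, 80.0)]
print(snap_ray_to_feature((0.0, 0.0), 0.0, corners))  # → (0.5, 80.0)
```

In the full system, the confirmed 2D map position would be combined with the first-person ray to recover the feature's 3D location; this sketch only covers the snapping step that reduces user input to a single cast-and-confirm gesture.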

Keywords

Aerial Photograph, Augmented Reality, Digital Surface Model, Outdoor Scene, Wearable Computer



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Tobias Höllerer¹
  • Jason Wither¹
  • Stephen DiVerdi¹
  1. Four Eyes Laboratory, Department of Computer Science, University of California, Santa Barbara, USA
