An Easy to Use Mobile Augmented Reality Platform for Assisted Living Using Pico-projectors

  • Rafael F. V. Saracchini
  • Carlos C. Ortega
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8671)

Abstract

In this paper we present an easy-to-use, Computer Vision-based platform for real-time 3D mapping and augmented reality in indoor environments, together with its innovative application in Assisted Living. Information is displayed to the user by projecting it onto the environment with a wearable device containing an embedded pico-projector. The system requires neither markers nor complicated set-ups and relies on low-cost, off-the-shelf equipment. It is also robust to small changes in the environment and can exploit surrounding objects to provide more stable camera tracking. Pilot tests in health care centres and residences demonstrated the efficacy of the initial prototype.
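The markerless camera tracking mentioned above is commonly realised by matching natural image features against a previously built 3D map and recovering the camera pose from the resulting 2D-3D correspondences. The following minimal sketch illustrates that general idea with OpenCV; the choice of ORB features, RANSAC-based PnP, the thresholds, and the helper name estimate_pose are illustrative assumptions and do not describe the platform's actual implementation.

```python
# Minimal sketch of markerless camera-pose tracking against a pre-built
# 3D map of natural features (illustrative only; not the paper's code).
import cv2
import numpy as np

def estimate_pose(frame_gray, map_descriptors, map_points_3d, K, dist_coeffs):
    """Estimate the camera pose of `frame_gray` relative to a feature map.

    map_descriptors : ORB descriptors of the mapped 3D points (N x 32, uint8)
    map_points_3d   : corresponding 3D coordinates (N x 3, float32)
    K, dist_coeffs  : camera intrinsic matrix and distortion coefficients
    """
    # Detect and describe natural features in the current frame.
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return None

    # Match current-frame descriptors against the stored map.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)
    if len(matches) < 6:
        return None

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    object_pts = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # Robust pose estimation from 2D-3D correspondences (PnP + RANSAC).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts, image_pts, K, dist_coeffs, reprojectionError=4.0)
    if not ok or inliers is None or len(inliers) < 6:
        return None
    return rvec, tvec  # camera rotation (Rodrigues vector) and translation
```

The recovered pose can then be used to warp the projected content so that it appears correctly anchored on surfaces in the environment.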



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Rafael F. V. Saracchini (1)
  • Carlos C. Ortega (1)
  1. Technical Institute of Castilla y León, Burgos, Spain
