Mobile Panoramic Vision for Assisting the Blind via Indexing and Localization

  • Feng Hu
  • Zhigang Zhu
  • Jianting Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8927)

Abstract

In this paper, we propose a first-person localization and navigation system for helping blind and visually impaired people navigate in indoor environments. The system consists of a mobile vision front end, with a portable panoramic lens mounted on a smartphone, and a remote GPU-enabled server. Compact and effective omnidirectional video features are extracted and represented on the smartphone front end and then transmitted to the server, where the features of an input image or short video clip are used to search a database of the indoor environment via image-based indexing, yielding both the location and the orientation of the current view. To cope with the high computational cost of searching a large database in a realistic navigation application, data-parallelism and task-parallelism properties of the database indexing are identified, and the computation is accelerated using multi-core CPUs and GPUs. Experiments on both synthetic and real data demonstrate the capability of the proposed system in terms of real-time response and robustness.
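To make the indexing idea concrete, below is a minimal sketch in Python/NumPy of one plausible realization, not the authors' actual features or implementation: each unwrapped panorama is collapsed into a 1-D circular signature whose FFT magnitude is rotation-invariant (a camera rotation is a circular shift of the signature), so the magnitude spectrum can index location, while circular phase correlation against the matched database entry recovers the heading. All function names, the signature design, and the brute-force search are illustrative assumptions.

# Hypothetical sketch of omnidirectional image indexing and orientation
# recovery. Each panoramic frame is reduced to a 1-D circular signature;
# the FFT magnitude of that signature indexes location (rotation-invariant),
# and phase correlation against the matched entry recovers heading.
import numpy as np

def circular_signature(pano, bins=256):
    """Collapse an unwrapped panorama (H x W, grayscale) into a 1-D
    signature over viewing angle by averaging each image column."""
    h, w = pano.shape
    col_means = pano.mean(axis=0)              # one value per image column
    # Resample to a fixed number of angular bins.
    idx = np.linspace(0, w, bins, endpoint=False)
    return np.interp(idx, np.arange(w), col_means)

def rotation_invariant_feature(sig):
    """FFT magnitude of a circular signature: a camera rotation is a
    circular shift of sig, which leaves the magnitude spectrum unchanged."""
    return np.abs(np.fft.rfft(sig))

def estimate_rotation(sig_query, sig_db):
    """Recover the relative heading (in bins) via circular phase correlation."""
    cross = np.fft.irfft(np.fft.rfft(sig_query) * np.conj(np.fft.rfft(sig_db)),
                         n=len(sig_query))
    return int(np.argmax(cross))

def localize(query_pano, db_signatures):
    """Brute-force nearest-neighbour search over the database; this inner
    loop is the part that is trivially parallel across CPU cores or GPU
    threads, since each query-to-entry comparison is independent."""
    q_sig = circular_signature(query_pano)
    q_feat = rotation_invariant_feature(q_sig)
    dists = [np.linalg.norm(q_feat - rotation_invariant_feature(s))
             for s in db_signatures]
    best = int(np.argmin(dists))
    shift = estimate_rotation(q_sig, db_signatures[best])
    heading_deg = 360.0 * shift / len(q_sig)
    return best, heading_deg

With a database of thousands of pre-computed signatures, every distance computation in localize() is independent of the others; that independence is exactly the data-parallelism property the paper identifies and exploits when distributing the search over multi-core CPUs and GPUs.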

Keywords

Panoramic vision · Mobile computing · Cloud computing · Blind navigation

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Computer Science, The Graduate Center, CUNY, New York, USA
  2. Department of Computer Science, The City College of New York, New York, USA
