Smartphone Based Outdoor Navigation and Obstacle Avoidance System for the Visually Impaired

  • Qiaoyu Chen
  • Lijun Wu
  • Zhicong Chen
  • Peijie Lin
  • Shuying Cheng
  • Zhenhui Wu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11909)

Abstract

Interlaced roads and unexpected obstacles restrict the travel of the blind. Existing outdoor blind-assistance systems are bulky or costly, and some cannot even feed back the type or distance of obstacles. It is important for assistive systems to provide navigation, obstacle detection, and ranging at an affordable price and in a portable size. This paper presents a smartphone-based outdoor navigation system for the visually impaired that also helps them avoid multiple types of dangerous obstacles. Geographic information obtained from the GPS receiving module is processed by a professional navigation API to provide directional guidance. To help the visually impaired avoid obstacles, SSD-MobileNetV2 is retrained on a self-collected dataset of 4500 images to better detect typical obstacles on the road, i.e., cars, motorcycles, electric bicycles, bicycles, and pedestrians. A lightweight monocular ranging method is then employed to estimate each obstacle’s distance. Based on category and distance, the risk level of each obstacle is evaluated and conveyed to the user in a timely manner via different tones. Field tests show that the retrained SSD-MobileNetV2 model detects obstacles with considerable precision, and that the vision-based ranging method effectively estimates distance.
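The abstract describes a lightweight monocular ranging step followed by a category-and-distance risk grading. The paper's exact method and thresholds are not reproduced here; the sketch below illustrates one common way such a pipeline is built, using the pinhole-camera similar-triangles relation d = f · H / h (focal length in pixels, assumed real-world object height, detected bounding-box height). All constants — the focal length, the per-category height table, and the risk thresholds — are illustrative assumptions, not values from the paper.

```python
# Illustrative monocular ranging + risk grading, assuming a pinhole camera
# model. All numeric constants below are placeholder assumptions.

FOCAL_LENGTH_PX = 1200.0  # assumed camera focal length, in pixels

# Assumed typical real-world heights (metres) for the five obstacle classes
REAL_HEIGHTS_M = {
    "car": 1.5,
    "motorcycle": 1.1,
    "electric bicycle": 1.1,
    "bicycle": 1.0,
    "pedestrian": 1.7,
}


def estimate_distance(category: str, bbox_height_px: float,
                      focal_px: float = FOCAL_LENGTH_PX) -> float:
    """Estimate distance via similar triangles: d = f * H / h."""
    real_h = REAL_HEIGHTS_M[category]
    return focal_px * real_h / bbox_height_px


def risk_level(category: str, distance_m: float) -> str:
    """Map an obstacle's category and distance to a risk tier.

    Thresholds are illustrative: fast-moving classes get a larger
    danger radius than pedestrians.
    """
    near = 3.0 if category == "pedestrian" else 5.0
    if distance_m < near:
        return "high"
    if distance_m < 2 * near:
        return "medium"
    return "low"
```

In a full system, each risk tier would then be mapped to a distinct audio tone for the user; a detection whose bounding box is 600 px tall and classified as a pedestrian would, under these assumed constants, be placed about 3.4 m away and graded medium risk.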

Keywords

Blind auxiliary system · Walking navigation · Ranging · SSD-MobileNetV2 · Smartphone

Notes

Acknowledgement

This work is financially supported in parts by the Fujian Provincial Department of Science and Technology of China (Grant No. 2019H0006 and 2018J01774), the National Natural Science Foundation of China (Grant No. 61601127), and the Foundation of Fujian Provincial Department of Industry and Information Technology of China (Grant No. 82318075).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
  2. State Grid Fuzhou Electric Power Supply Company, Fuzhou, China
