
Content-Aware Video Analysis to Guide Visually Impaired Walking on the Street

  • Ervin Yohannes
  • Timothy K. Shih
  • Chih-Yang Lin (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11870)

Abstract

Although many researchers have developed systems and tools to assist blind and visually impaired people, such users still face many obstacles in daily life, especially in outdoor environments. When people with visual impairments walk outdoors, they must be informed of objects in their surroundings, yet building a system that handles all of the related tasks is challenging. In recent years, deep learning has enabled many architectures that achieve more accurate results than traditional machine learning. One popular model for instance segmentation is Mask-RCNN, which can both segment and rapidly recognize objects. We use Mask-RCNN to develop a content-aware video analysis system that helps blind and visually impaired people recognize objects in their surroundings. Moreover, from the Mask-RCNN outputs we derive the distance between the subject and each object, as well as the object's relative speed and direction. The resulting content-aware video presents each object's name, its class confidence score, the distance between the person and the object, and the object's speed and direction.
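The abstract names the derived quantities but not the formulas behind them. As a minimal sketch of how such quantities could be computed from per-frame Mask-RCNN detections, the Python fragment below estimates distance with a pinhole-camera model and speed/direction from bounding-box centroid displacement. The Detection structure, FOCAL_LENGTH_PX, and the REAL_HEIGHT_M table are illustrative assumptions, not the authors' published method.

```python
from dataclasses import dataclass
import math

# Illustrative constants: a camera focal length in pixels and real-world
# object heights in metres. These are assumptions for this sketch, not
# calibration values reported by the paper.
FOCAL_LENGTH_PX = 700.0
REAL_HEIGHT_M = {"person": 1.7, "car": 1.5}


@dataclass
class Detection:
    """One Mask-RCNN output: class name, confidence score, bounding box."""
    label: str
    score: float
    box: tuple  # (x1, y1, x2, y2) in pixels


def estimate_distance(det: Detection) -> float:
    """Pinhole-camera estimate: distance = real_height * focal / pixel_height."""
    pixel_height = det.box[3] - det.box[1]
    return REAL_HEIGHT_M[det.label] * FOCAL_LENGTH_PX / pixel_height


def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)


def speed_and_direction(prev: Detection, curr: Detection, dt: float):
    """Apparent speed (pixels/second) and coarse horizontal direction of
    the same tracked object across two frames dt seconds apart."""
    (px, py), (cx, cy) = centroid(prev.box), centroid(curr.box)
    dx, dy = cx - px, cy - py
    speed = math.hypot(dx, dy) / dt
    direction = "left" if dx < 0 else "right" if dx > 0 else "static"
    return speed, direction


# Usage with two hypothetical detections of the same car, 1/30 s apart:
a = Detection("car", 0.98, (100, 200, 220, 290))
b = Detection("car", 0.97, (112, 200, 232, 290))
print(f"distance: {estimate_distance(b):.1f} m")   # ~11.7 m
s, d = speed_and_direction(a, b, dt=1 / 30)
print(f"speed: {s:.0f} px/s, direction: {d}")      # 360 px/s, right
```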

Keywords

Content-aware · Mask-RCNN · Visually impaired · Distance · Speed · Direction · Assistive technology


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ervin Yohannes (1)
  • Timothy K. Shih (1)
  • Chih-Yang Lin (2), email author
  1. National Central University, Taoyuan, Taiwan
  2. Yuan Ze University, Taoyuan, Taiwan
