
YOLO glass: video-based smart object detection using squeeze and attention YOLO network

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

People with visual impairments or blindness need guidance to avoid collisions with outdoor obstacles. Technology has recently established its presence in all aspects of human life, and new devices assist people on a daily basis. However, object detection still faces a reliability challenge owing to real-time dynamics and a lack of specialized knowledge. To address this challenge, YOLO Glass, a video-based smart object detection model, is proposed to help visually impaired people navigate effectively in indoor and outdoor environments. The captured video is first converted into key frames and pre-processed using a correlation fusion-based disparity approach. The pre-processed images are then augmented to prevent overfitting of the trained model. The proposed method uses an obstacle detection system based on a Squeeze and Attention Block YOLO network model (SAB-YOLO). The system assists visually impaired users in detecting multiple objects and their locations relative to their line of sight, and alerts them through audio messages delivered via headphones, helping blind and visually impaired people manage their daily tasks and navigate their surroundings. The experimental results show that the proposed system achieves an accuracy of 98.99%, demonstrating that it can identify objects accurately. The detection accuracy of the proposed method is 5.15%, 7.15% and 9.7% better than that of the existing YOLO v6, YOLO v5 and YOLO v3, respectively.
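Since the paper's implementation is not included here, the following Python sketch only illustrates the pipeline the abstract outlines: key frames are selected from the video stream, objects are detected on each key frame, and a spoken alert reports each object's position relative to the wearer's line of sight. All concrete choices are assumptions rather than the authors' method: simple frame differencing stands in for the key-frame extraction and correlation fusion-based disparity pre-processing, a stock Ultralytics YOLOv8 model stands in for the proposed SAB-YOLO (which is not publicly released), and pyttsx3 supplies offline text-to-speech. The input file name is hypothetical.

```python
# Minimal sketch of the pipeline described in the abstract: select key
# frames from a video, detect objects on each key frame, and speak an
# alert giving each object's position relative to the line of sight.
#
# Assumptions (not from the paper): frame differencing stands in for the
# key-frame / disparity pre-processing step; a stock Ultralytics YOLOv8
# model stands in for the proposed SAB-YOLO, which is not publicly
# released; pyttsx3 supplies offline text-to-speech.
import cv2
import numpy as np
import pyttsx3
from ultralytics import YOLO

KEYFRAME_THRESHOLD = 30.0  # mean absolute pixel difference; tune per camera


def is_key_frame(prev_gray: np.ndarray, gray: np.ndarray) -> bool:
    """Flag a frame as 'key' when it differs enough from its predecessor."""
    return float(np.mean(cv2.absdiff(prev_gray, gray))) > KEYFRAME_THRESHOLD


def describe_position(x_center: float, frame_width: int) -> str:
    """Map a box centre to a coarse direction in the wearer's field of view."""
    third = frame_width / 3
    if x_center < third:
        return "on your left"
    if x_center > 2 * third:
        return "on your right"
    return "ahead"


def run(video_path: str) -> None:
    model = YOLO("yolov8n.pt")        # stand-in detector, not SAB-YOLO
    speaker = pyttsx3.init()
    cap = cv2.VideoCapture(video_path)
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None and is_key_frame(prev_gray, gray):
            result = model(frame, verbose=False)[0]
            for box in result.boxes:
                x1, _, x2, _ = box.xyxy[0].tolist()
                label = model.names[int(box.cls)]
                where = describe_position((x1 + x2) / 2, frame.shape[1])
                speaker.say(f"{label} {where}")
            speaker.runAndWait()      # spoken alerts routed to headphones
        prev_gray = gray
    cap.release()


if __name__ == "__main__":
    run("walk.mp4")  # hypothetical input video
```

On a wearable device the capture source would instead be the glasses' camera (e.g. cv2.VideoCapture(0)), with speech routed to the headphones mentioned in the abstract.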


Availability of data and material

Data sharing is not applicable to this article as no new data were created or analyzed in this research.


Acknowledgements

The author would like to express heartfelt gratitude to the supervisor for his guidance and unwavering support throughout this research.

Funding

No financial support was received for this research.

Author information


Contributions

The authors confirm contribution to the paper as follows: Study conception and design: TS, GB; Data collection: TS; Analysis and interpretation of results: GB; Draft manuscript preparation: GB, TS. All authors reviewed the results and approved the final version of the manuscript.

Corresponding author

Correspondence to T. Sugashini.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

The research supervisor reviewed and approved this manuscript for publication in this journal.

Human and animal rights

This article does not contain any studies with human or animal subjects performed by any of the authors.

Informed consent

I certify that I have explained the nature and purpose of this study to the above-named individual and have discussed the potential benefits of participation. The individual's questions about this study have been answered, and we will remain available to address future questions.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sugashini, T., Balakrishnan, G. YOLO glass: video-based smart object detection using squeeze and attention YOLO network. SIViP 18, 2105–2115 (2024). https://doi.org/10.1007/s11760-023-02855-x


Keywords

Navigation