Real-time pedestrian crossing lights detection algorithm for the visually impaired

  • Ruiqi Cheng
  • Kaiwei Wang
  • Kailun Yang
  • Ningbo Long
  • Jian Bai
  • Dong Liu

Abstract

For lack of intelligent assistive technology, visually impaired people find it difficult to cross roads in urban environments. To tackle this problem, a real-time Pedestrian Crossing Lights (PCL) detection algorithm for the visually impaired is proposed in this paper. Unlike previous works that use analytic image processing to detect PCL in ideal scenarios, the proposed algorithm detects PCL with a machine learning scheme in challenging scenarios, where the PCL have arbitrary sizes and locations in the acquired image and suffer from camera shake and movement. To achieve robustness and efficiency in such scenarios, the detection algorithm is designed with three procedures: candidate extraction, candidate recognition and temporal-spatial analysis. A public PCL dataset with manually labeled ground truth is established for tuning parameters, training samples and evaluating performance. The algorithm is implemented on a portable PC with a color camera. Experiments carried out in various practical scenarios show that both the precision and recall of detection are close to 100%, while the frame rate reaches 21 frames per second (FPS).
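The abstract describes a three-stage pipeline: color-based candidate extraction, candidate recognition with a trained classifier, and temporal-spatial analysis across frames. The sketch below illustrates one plausible way such a pipeline could be organized; it is not the authors' published implementation, and the HSV thresholds, candidate size limits, classifier interface and voting window are illustrative assumptions only.

```python
# Minimal sketch of a three-stage PCL detection pipeline, assuming OpenCV 4.x.
# All thresholds and parameters below are illustrative, not from the paper.
from collections import deque
import cv2
import numpy as np

# Assumed HSV ranges for red/green pedestrian lamps (hypothetical values).
HSV_RANGES = {
    "red":   [(np.array([0, 120, 120]),   np.array([10, 255, 255])),
              (np.array([170, 120, 120]), np.array([180, 255, 255]))],
    "green": [(np.array([40, 120, 120]),  np.array([90, 255, 255]))],
}

def extract_candidates(frame_bgr):
    """Stage 1: color thresholding yields candidate regions of arbitrary size and location."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    candidates = []
    for label, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, lo, hi)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if 8 <= w <= 80 and 8 <= h <= 80:  # assumed plausible lamp size in pixels
                candidates.append((label, (x, y, w, h)))
    return candidates

def recognize(frame_bgr, candidate, classifier):
    """Stage 2: a trained classifier (e.g. features + SVM) accepts or rejects each candidate."""
    label, (x, y, w, h) = candidate
    patch = cv2.resize(frame_bgr[y:y + h, x:x + w], (32, 32))
    return label if classifier(patch) else None

def temporal_spatial_filter(history, detection, min_votes=6):
    """Stage 3: majority voting over recent frames suppresses spurious single-frame hits."""
    history.append(detection)
    votes = [d for d in history if d is not None]
    if len(votes) >= min_votes and votes.count(votes[-1]) >= min_votes:
        return votes[-1]
    return None

history = deque(maxlen=10)  # rolling per-video detection state
```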

Keywords

Pedestrian crossing lights detection · Real-time video processing · Candidate extraction and recognition · Temporal-spatial analysis · Visually impaired people

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2017

Authors and Affiliations

  • Ruiqi Cheng 1
  • Kaiwei Wang 1
  • Kailun Yang 1
  • Ningbo Long 1
  • Jian Bai 1
  • Dong Liu 1

  1. College of Optical Science and Engineering, Zhejiang University, Hangzhou, China