
Traffic Light and Vehicle Signal Recognition with High Dynamic Range Imaging and Deep Learning

  • Jian-Gang Wang
  • Lu-Bing Zhou
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 865)

Abstract

Autonomous vehicles aim to eventually reduce the number of motor-vehicle fatalities caused by human error. Deep learning plays an important role in making this possible because it can leverage the huge amount of training data produced by autonomous-car sensors. Automatic recognition of traffic lights and vehicle signals is a perception module critical to autonomous vehicles: a fatal accident could occur if a vehicle fails to obey a traffic light or a vehicle signal. A practical Traffic Light Recognition (TLR) or Vehicle Signal Recognition (VSR) system faces several challenges, including varying illumination conditions, false positives, and long computation times. In this chapter, we propose a novel approach to recognizing Traffic Lights (TL) and Vehicle Signals (VS) in real time using high dynamic range imaging and deep learning. Unlike existing approaches, which use only bright images, we use both the high-exposure (bright) and low-exposure (dark) images provided by a high dynamic range camera. TL candidates can be detected robustly in the low-exposure frames because they appear against a clean dark background. The candidates are then classified accurately on the consecutive high-exposure frames using a convolutional neural network. This dual-channel mechanism achieves promising results because it exploits the undistorted color and shape information of the low-exposure frames as well as the rich texture of the high-exposure frames. Furthermore, TLR performance is boosted by incorporating a temporal trajectory tracking method. To speed up the process, a region of interest is generated to reduce the search region for TL candidates. Experimental results on a large dual-channel database show that our dual-channel approach outperforms the state of the art, which uses only bright images. Encouraged by the promising performance of the TLR, we extend the dual-channel approach to vehicle signal recognition.
The algorithm reported in this chapter has been integrated into our autonomous vehicle via the Data Distribution Service (DDS) and works robustly on real roads.
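The dual-channel idea described above can be illustrated with a minimal sketch: bright blobs are detected in the low-exposure frame, where lit lamps stand out against a dark background, and the resulting candidate boxes are cropped from the paired high-exposure frame for classification. All names here (`detect_candidates`, `classify_candidates`, `dummy_cnn`) and the threshold value are illustrative assumptions, not the chapter's actual implementation; the trained CNN is replaced by a stub.

```python
import numpy as np

def detect_candidates(dark_frame, thresh=200):
    """Find bright connected regions in the low-exposure frame.

    At low exposure, lit traffic lights saturate while the background
    stays dark, so simple thresholding plus connected-component
    labelling yields candidate boxes (a real system would also filter
    by colour and shape).
    """
    mask = dark_frame >= thresh
    seen = np.zeros_like(mask)
    h, w = mask.shape
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, xs, ys = [(y, x)], [], []
                while stack:  # iterative flood fill over 4-neighbours
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy, cx] and not seen[cy, cx]:
                        seen[cy, cx] = True
                        ys.append(cy)
                        xs.append(cx)
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
                boxes.append((min(xs), min(ys), max(xs) + 1, max(ys) + 1))
    return boxes

def classify_candidates(bright_frame, boxes, cnn):
    """Crop each candidate from the paired high-exposure frame and classify it."""
    return [cnn(bright_frame[y0:y1, x0:x1]) for (x0, y0, x1, y1) in boxes]

# Toy example: one lit lamp in an otherwise dark low-exposure frame.
dark = np.zeros((8, 8), dtype=np.uint8)
dark[2:4, 2:4] = 255                       # the lamp saturates even at low exposure
bright = np.full((8, 8), 120, np.uint8)    # paired high-exposure frame (rich texture)

dummy_cnn = lambda crop: "red"             # placeholder for the trained classifier
boxes = detect_candidates(dark)
labels = classify_candidates(bright, boxes, dummy_cnn)
print(boxes, labels)                       # [(2, 2, 4, 4)] ['red']
```

In the chapter's full pipeline, the candidate boxes would additionally be restricted to a region of interest and linked across frames by temporal trajectory tracking before the CNN verdicts are accumulated.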

Keywords

Traffic light recognition · Vehicle signal recognition · High dynamic range imaging · Deep learning · Autonomous vehicle · Data Distribution Service

Acknowledgements

We have benefited enormously from ideas and discussions with our ex-colleagues: Yu Pan, Serin Lee, Zhi-Wei Song, Boon-Siew Han and Vincensius-Billy Saputra.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Robotics Department, Institute for Infocomm Research, Singapore, Singapore
  2. Autonomous Vehicle Department, Institute for Infocomm Research, Singapore, Singapore
