Traffic Sign Detection for Green Smart Public Transportation Vehicles Based on Light Neural Network Model

  • Chapter
  • First Online:
Computational Intelligence Techniques for Green Smart Cities

Part of the book series: Green Energy and Technology ((GREEN))

Abstract

Aiming to raise the security and safety levels of drivers, vehicles, and pedestrians, this work proposes a traffic sign detection system based on deep learning technologies. The proposed assistance system contributes to building a new smart public transportation system for smart cities and smart environments. Traffic sign detection is one of the most important components of an advanced driver assistance system (ADAS) for safety reasons: reliably detecting road signs helps prevent accidents by ensuring that traffic rules are respected. Achieving a reliable implementation on edge devices such as field-programmable gate arrays (FPGAs) remains a growing challenge. To address this problem, we propose a new traffic sign detection system based on deep convolutional neural networks (DCNNs). The detection system combines YOLO as the object detection model with SqueezeNet as a lightweight backbone for feature extraction; SqueezeNet was chosen to enable a lightweight FPGA implementation. To make the model deployable on an FPGA, several optimization techniques are applied. The resulting lightweight traffic sign detection system was implemented on the PYNQ-Z1 platform. Training and testing experiments were performed on the Chinese Traffic Sign Detection (CTSD) dataset. Based on the experimental results, the proposed system achieves very promising detection accuracy and processing speed: 96% mAP at 16 FPS.
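The chapter itself details the FPGA-oriented optimizations; purely as an illustrative sketch, the snippet below shows symmetric 8-bit post-training quantization, one common technique for shrinking CNN weights for edge deployment. The exact scheme and values used in the chapter are not reproduced here; all numbers and names below are hypothetical.

```python
# Illustrative sketch only: symmetric per-tensor int8 post-training
# quantization, a typical step when fitting a CNN onto an FPGA.
# The weights below are made-up example values, not the chapter's model.

def quantize_int8(weights):
    """Map float weights to int8 codes using a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.42, -0.17, 0.08, -0.31, 0.05, 0.254]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))

# Each code fits in 1 byte instead of 4 (float32): a 4x memory saving,
# with rounding error bounded by half the quantization step.
assert max_err <= scale / 2 + 1e-12
```

In practice such quantization is applied layer by layer after training, trading a small accuracy loss for the reduced memory footprint and integer arithmetic that FPGA fabrics handle efficiently.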



Corresponding author

Correspondence to Riadh Ayachi.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Ayachi, R., Afif, M., Said, Y., Abdelali, A.B. (2022). Traffic Sign Detection for Green Smart Public Transportation Vehicles Based on Light Neural Network Model. In: Lahby, M., Al-Fuqaha, A., Maleh, Y. (eds) Computational Intelligence Techniques for Green Smart Cities. Green Energy and Technology. Springer, Cham. https://doi.org/10.1007/978-3-030-96429-0_4

  • DOI: https://doi.org/10.1007/978-3-030-96429-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-96428-3

  • Online ISBN: 978-3-030-96429-0

  • eBook Packages: Energy, Energy (R0)
