Abstract
Aiming to raise the security and safety levels of drivers, vehicles, and pedestrians, this work proposes a traffic sign detection system based on deep learning technologies. The proposed assistance system contributes to building new public smart transportation systems for smart cities and smart environments. Traffic sign detection is one of the most important components of an advanced driver assistance system (ADAS) for safety reasons: reliably detecting road signs helps prevent accidents by ensuring that traffic rules are respected. Achieving a reliable implementation on edge devices such as field programmable gate arrays (FPGAs) remains a challenge. To address this problem, we build a new traffic sign detection system based on deep convolutional neural networks (DCNNs). The detection system combines the YOLO object detector with the SqueezeNet model, which serves as a lightweight backbone for feature extraction; SqueezeNet was chosen to enable a lightweight FPGA implementation. Several optimization techniques are applied to fit the model onto the FPGA. The resulting lightweight traffic sign detection system was deployed on the PYNQ-Z1 platform. Training and testing experiments were carried out on the Chinese Traffic Sign Detection (CTSD) dataset. Experimental results show that the proposed system achieves very competitive performance in terms of detection accuracy and processing time: 96% mAP at a processing speed of 16 FPS.
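As a rough illustration of why a SqueezeNet backbone suits lightweight FPGA deployment, the parameter count of a Fire module (a squeeze 1×1 convolution feeding parallel 1×1 and 3×3 expand convolutions) can be compared against an equivalent plain 3×3 convolution. The sketch below is not taken from the chapter; it assumes the "fire2" layer sizes from the SqueezeNet paper and counts weights only (no biases).

```python
def fire_module_params(in_ch, squeeze, expand1x1, expand3x3):
    """Weight count of a SqueezeNet Fire module.

    A Fire module replaces one 3x3 conv with a cheap 1x1 "squeeze"
    conv followed by parallel 1x1 and 3x3 "expand" convs.
    """
    return (in_ch * squeeze                 # squeeze 1x1 conv
            + squeeze * expand1x1           # expand 1x1 conv
            + squeeze * expand3x3 * 3 * 3)  # expand 3x3 conv

def standard_conv_params(in_ch, out_ch, k=3):
    """Weight count of a plain k x k convolution."""
    return in_ch * out_ch * k * k

# "fire2" configuration from the SqueezeNet paper:
# 96 input channels, 16 squeeze filters, 64 + 64 expand filters.
fire = fire_module_params(96, 16, 64, 64)   # 11776 weights
conv = standard_conv_params(96, 64 + 64)    # 110592 weights
print(fire, conv, round(conv / fire, 1))    # roughly a 9x reduction
```

Stacking such modules, together with quantization and pruning, is what makes a detector of this kind small enough for the limited on-chip memory of a board like the PYNQ-Z1.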
References
Afif, M., Ayachi, R., Atri, M.: Indoor objects detection system implementation using multi-graphic processing units. Cluster Comput. (2021). https://doi.org/10.1007/s10586-021-03419-9
Afif, M., Ayachi, R., Pissaloux, E., et al.: Indoor objects detection and recognition for an ICT mobility assistance of visually impaired people. Multimed. Tools Appl. 79, 31645–31662 (2020). https://doi.org/10.1007/s11042-020-09662-3
Afif, M., Ayachi, R., Said, Y., et al.: Deep learning-based application for indoor way finding assistance navigation. Multimed. Tools Appl. 80, 27115–27130 (2021). https://doi.org/10.1007/s11042-021-10999-6
Afif, M., Ayachi, R., Said, Y., et al.: Deep learning based application for indoor scene recognition. Neural Process Lett. 51, 2827–2837 (2020). https://doi.org/10.1007/s11063-020-10231-w
Ayachi, R., Afif, M., Said, Y., Abdelaali, A.B.: Pedestrian detection for advanced driving assisting system: a transfer learning approach. In: 2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), pp. 1–5. IEEE (2020)
Audi is advancing the tech that teaches cars to talk to traffic lights: Available at: https://www.digitaltrends.com/cars/audi-traffic-light-recognition-v2i-technology-gains-new-features/. Accessed 1 Jul 2021
Driver Support services: Available at: https://www.volvotrucks.com/en-en/services/driver-support.html. Accessed 01 Jul 2021
Giuffrè, T., Canale, A., Severino, A., Trubia, S.: Automated vehicles: a review of road safety implications as a driver of change. In: Proceedings of the 27th CARSP Conference, vol. 16 (2017)
Ayachi, R., Said, Y., Abdelali, A.B.: Optimizing neural networks for efficient FPGA implementation: A survey. Arch. Comput. Methods Eng. 1–11 (2021)
Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Adam, H.: MobileNets: Efficient convolutional neural networks for mobile vision applications (2017). arXiv preprint arXiv:1704.04861
Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., Keutzer, K.: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size (2016). arXiv preprint arXiv:1602.07360
Tan, M., Le, Q.: Efficientnet: Rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning, pp. 6105–6114 (2019). PMLR
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
Zhang, Y., Wang, Z., Qi, Y., Liu, J., Yang, J.: CTSD: A dataset for traffic sign recognition in complex real-world images. In: 2018 IEEE Visual Communications and Image Processing (VCIP), pp. 1–4. IEEE (2018)
Lechner, M., Jantsch, A., Dinakarrao, S.M.P.: ResCoNN: Resource-efficient FPGA-accelerated CNN for traffic sign classification. In: 2019 Tenth International Green and Sustainable Computing Conference (IGSC), pp. 1–6. IEEE (2019)
Lin, Z., Yih, M., Ota, J.M., Owens, J.D., Muyan-Özçelik, P.: Benchmarking deep learning frameworks and investigating FPGA deployment for traffic sign classification and detection. IEEE Trans. Intell. Veh. 4(3), 385–395 (2019)
Stallkamp, J., Schlipsing, M., Salmen, J., Igel, C.: The German traffic sign recognition benchmark: a multi-class classification competition. In: The 2011 International Joint Conference on Neural Networks, pp. 1453–1460. IEEE (2011)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: SSD: Single shot multibox detector. In: European Conference on Computer Vision, pp. 21–37. Springer, Cham (2016)
Shabarinath, B. B., Muralidhar, P.: Convolutional neural network based traffic-sign classifier optimized for edge inference. In: 2020 IEEE region 10 conference (TENCON), pp. 420–425. IEEE (2020)
Yeom, S.K., Seegerer, P., Lapuschkin, S., Binder, A., Wiedemann, S., Müller, K.R., Samek, W.: Pruning by explaining: A novel criterion for deep neural network pruning. Patt. Recogn. 115, 107899 (2021)
Nahshan, Y., Chmiel, B., Baskin, C., Zheltonozhskii, E., Banner, R., Bronstein, A.M., Mendelson, A.: Loss aware post-training quantization. Mach. Learn. 1–18 (2021)
Young, S., Wang, Z., Taubman, D., Girod, B.: Transform quantization for CNN compression. In: IEEE Transactions on Pattern Analysis and Machine Intelligence (2021)
Ayachi, R., Afif, M., Said, Y., Atri, M.: Strided convolution instead of max pooling for memory efficiency of convolutional neural networks. In: International Conference on the Sciences of Electronics, Technologies of Information and Telecommunications, pp. 234–243. Springer, Cham (2018)
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Ayachi, R., Afif, M., Said, Y., Abdelali, A.B. (2022). Traffic Sign Detection for Green Smart Public Transportation Vehicles Based on Light Neural Network Model. In: Lahby, M., Al-Fuqaha, A., Maleh, Y. (eds) Computational Intelligence Techniques for Green Smart Cities. Green Energy and Technology. Springer, Cham. https://doi.org/10.1007/978-3-030-96429-0_4
DOI: https://doi.org/10.1007/978-3-030-96429-0_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-96428-3
Online ISBN: 978-3-030-96429-0
eBook Packages: Energy (R0)