Autonomous Vehicle Driving in Harsh Weather: Adaptive Fusion Alignment Modeling and Analysis

  • Research Article - Electrical Engineering
  • Published in: Arabian Journal for Science and Engineering

Abstract

Achieving high driving performance and minimizing errors for autonomous vehicles (AVs) in harsh weather are among the greatest challenges facing the autonomous-driving research community. AVs are driven mainly by the sensor fusion of light detection and ranging (LiDAR), radio detection and ranging (RADAR), and camera sensors. In harsh weather such as rain, storms, low lighting, snowfall, and fog, the detection performance of all these sensors is degraded. Camera-based object detection is particularly affected by the various types of noise introduced by adverse weather, and its reliability is critical for error-free AV driving. This article proposes a prediction-based adaptive fusion alignment (AFA) algorithm for robust path and object tracking, combined with a deep convolutional neural network (D-CNN) model, to improve detection accuracy, reduce calculation errors, and minimize overall AV driving error in harsh weather conditions. RADAR and LiDAR processing is not yet deep learning (DL) based; the D-CNN model, with a segmentation-based object classification process applied to camera images, performs the actual object detection and localization. Simulated AV driving accuracy in harsh weather increases significantly with the proposed AFA and D-CNN algorithms.
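To make the adaptive-fusion idea concrete, the sketch below shows one plausible form of weather-dependent sensor weighting in Python. It is an illustration only, not the authors' AFA implementation: the fuse_detections function, the WEATHER_RELIABILITY table, and all numeric values are assumptions introduced for this example.

```python
# Hypothetical sketch of weather-adaptive sensor fusion (not the paper's code).
# Each sensor reports a position estimate and a detection confidence; the
# confidence is scaled by an assumed per-weather reliability factor, and the
# scaled weights combine the estimates into one fused position.

import numpy as np

# Assumed per-sensor reliability under each weather condition (illustrative values).
WEATHER_RELIABILITY = {
    "clear":    {"camera": 0.95, "lidar": 0.95, "radar": 0.90},
    "rain":     {"camera": 0.60, "lidar": 0.70, "radar": 0.85},
    "snowfall": {"camera": 0.50, "lidar": 0.55, "radar": 0.80},
    "fog":      {"camera": 0.40, "lidar": 0.50, "radar": 0.85},
}

def fuse_detections(detections, weather):
    """Fuse per-sensor position estimates into one weighted estimate.

    detections: dict mapping sensor name -> (position xyz, confidence in [0, 1])
    weather:    one of the keys in WEATHER_RELIABILITY
    """
    reliability = WEATHER_RELIABILITY[weather]
    weights, positions = [], []
    for sensor, (pos, conf) in detections.items():
        # Adaptive weight: the sensor's own detection confidence scaled by
        # an assumed reliability factor for the current weather.
        weights.append(conf * reliability[sensor])
        positions.append(np.asarray(pos, dtype=float))
    weights = np.asarray(weights)
    if weights.sum() == 0.0:
        raise ValueError("no usable detections in this weather")
    weights /= weights.sum()  # convex combination of sensor estimates
    return np.average(np.stack(positions), axis=0, weights=weights)

# Example: in fog the camera weight drops, so radar dominates the fused position.
fused = fuse_detections(
    {
        "camera": ([12.1, 3.0, 0.9], 0.55),  # e.g. a D-CNN bounding-box score
        "lidar":  ([12.4, 3.1, 1.0], 0.80),
        "radar":  ([12.3, 2.9, 1.1], 0.90),
    },
    weather="fog",
)
print(fused)  # weighted position, approximately [12.30, 2.97, 1.04]
```

Under this weighting, a low camera confidence in fog shifts the fused estimate toward the radar measurement, which matches the abstract's premise that camera imaging degrades most in adverse weather while RADAR remains comparatively robust.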


Acknowledgements

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIT) (No. 2022R1A2C1007884).

Author information

Corresponding author: Mostafa Zaman Chowdhury.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hasanujjaman, M., Chowdhury, M.Z., Hossan, M.T. et al. Autonomous Vehicle Driving in Harsh Weather: Adaptive Fusion Alignment Modeling and Analysis. Arab J Sci Eng 49, 6631–6640 (2024). https://doi.org/10.1007/s13369-023-08389-1

