Fuel-Saving Control Strategy for Fuel Vehicles with Deep Reinforcement Learning and Computer Vision

Published in: International Journal of Automotive Technology

Abstract

This study combines deep reinforcement learning (DRL) with computer vision to improve the fuel economy of conventional fuel vehicles. In driving cycles with car-following and traffic-light scenarios, the DRL-based fuel-saving control strategy realizes cooperative control of the engine and the continuously variable transmission (CVT). A convolutional neural network extracts usable visual information from an on-board camera, while other signals are obtained from the vehicle's on-board sensors. The detected information together forms the state of the DRL agent, on which the fuel-saving control strategy is built. A Carla–Simulink co-simulation model is established to evaluate the proposed strategy: urban and highway driving-cycle models with visual information are built in Carla, and the vehicle power system is constructed in Simulink. Results show that the fuel-saving control strategy based on DRL and computer vision improves fuel economy. Moreover, in the co-simulation model, the strategy takes an average of 17.55 ms to output a control action, indicating its potential for real-time application.



Acknowledgement

The authors gratefully acknowledge the financial support of the Science and Technology Foundation of Jilin Province under Project No. 20220508151RC.

Author information

Corresponding author

Correspondence to Ling Han.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Han, L., Liu, G., Zhang, H. et al. Fuel-Saving Control Strategy for Fuel Vehicles with Deep Reinforcement Learning and Computer Vision. Int. J. Automotive Technology 24, 609–621 (2023). https://doi.org/10.1007/s12239-023-0051-4

