A Deep Reinforcement Learning-Based Energy Management Strategy for Fuel Cell Hybrid Buses

  • Regular Paper
  • Published in: International Journal of Precision Engineering and Manufacturing-Green Technology

Abstract

An energy management strategy (EMS) plays an important role in hybrid vehicles, as it determines the power distribution between the power sources and thus the energy savings of the vehicle. Rule-based EMSs and optimization-based EMSs currently struggle to achieve optimality and real-time control performance at the same time. With the rapid development of artificial intelligence, learning-based EMSs, which are able to overcome this challenge, have recently gained increasing attention. In this research, a deep reinforcement learning (DRL)-based EMS is proposed for fuel cell hybrid buses (FCHBs), in which fuel cell durability is considered and evaluated using a fuel cell degradation model. The action space of the DRL algorithm is restricted according to the efficiency characteristics of the fuel cell in order to improve fuel economy, and Prioritized Experience Replay (PER) is adopted to improve the convergence of the DRL algorithm. Simulation results of the proposed DRL-based EMS for an FCHB are compared to those of a dynamic programming (DP)-based EMS and a reinforcement learning (RL)-based EMS. The comparison shows that the fuel economy of the proposed DRL-based EMS is improved by an average of 3.63% over the RL-based EMS, while the gap to the DP-based EMS is within an average of 5.69%. In addition, the fuel cell degradation rate is decreased by an average of 63.49% with the proposed DRL-based EMS compared to the same strategy without considering fuel cell durability. Furthermore, the convergence rate of the proposed DRL-based EMS is improved by an average of 30.54% compared to the same strategy without PER. Finally, the adaptability of the proposed DRL-based EMS is validated on a new driving cycle, with the DRL algorithm trained on three other driving cycles.
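As a rough illustration of the two algorithmic ideas highlighted in the abstract, the sketch below shows a DQN-style action selector whose epsilon-greedy policy is masked to the fuel cell's high-efficiency power band, together with a proportional prioritized experience replay buffer. This is not the authors' implementation: the power levels, efficiency band, state variables, network sizes, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of two ideas from the abstract:
# (1) restricting the DQN action space to the fuel cell's high-efficiency
#     power range, and (2) prioritized experience replay (PER).
# All numeric values below are hypothetical.
import random
import numpy as np
import torch
import torch.nn as nn

# Assumed discretized fuel cell power levels (kW) as the action set,
# and an assumed high-efficiency band the policy is confined to.
FC_POWER_LEVELS = np.linspace(0.0, 60.0, 13)
EFFICIENT_RANGE = (5.0, 45.0)
VALID_ACTIONS = [i for i, p in enumerate(FC_POWER_LEVELS)
                 if EFFICIENT_RANGE[0] <= p <= EFFICIENT_RANGE[1]]

class QNet(nn.Module):
    """Small MLP mapping an EMS state (e.g. battery SOC, vehicle speed,
    power demand) to one Q-value per discretized fuel cell power level."""
    def __init__(self, state_dim=3, n_actions=len(FC_POWER_LEVELS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

class PERBuffer:
    """Proportional PER: sample transitions with probability proportional
    to |TD error|^alpha, and return importance-sampling weights."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []
    def push(self, transition):
        # New transitions get the current max priority so they are seen soon.
        max_p = max(self.prios, default=1.0)
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.prios.pop(0)
        self.data.append(transition); self.prios.append(max_p)
    def sample(self, batch_size, beta=0.4):
        p = np.array(self.prios) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()  # normalized importance-sampling weights
        batch = [self.data[i] for i in idx]
        return idx, batch, torch.tensor(weights, dtype=torch.float32)
    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(float(e)) + 1e-6

def select_action(qnet, state, eps=0.1):
    """Epsilon-greedy over the restricted action set only."""
    if random.random() < eps:
        return random.choice(VALID_ACTIONS)
    with torch.no_grad():
        q = qnet(torch.tensor(state, dtype=torch.float32))
    masked = torch.full_like(q, -float("inf"))
    masked[VALID_ACTIONS] = q[VALID_ACTIONS]  # mask out inefficient powers
    return int(masked.argmax())
```

In a full EMS training loop, the buffer's importance-sampling weights would scale each transition's TD loss, and `update` would be called with the fresh TD errors after every gradient step, which is what biases replay toward informative transitions and speeds convergence.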




Acknowledgements

This research was supported by the Shenzhen Science and Technology Innovation Commission (Grant Nos. KQJSCX20180330170047681, JCYJ20210324115800002, JCYJ20180507182628567), the Department of Science and Technology of Guangdong Province (Grant Nos. 2021A0505030056, 2021A0505050005), the National Natural Science Foundation of China (Grant No. 62073311), the CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, the Shenzhen Pengcheng Program, and the Shenzhen Key Laboratory of Electric Vehicle Powertrain Platform and Safety Technology.

Author information

Corresponding author

Correspondence to Suk Won Cha.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zheng, C., Li, W., Li, W. et al. A Deep Reinforcement Learning-Based Energy Management Strategy for Fuel Cell Hybrid Buses. Int. J. of Precis. Eng. and Manuf.-Green Tech. 9, 885–897 (2022). https://doi.org/10.1007/s40684-021-00403-x

