Abstract
In the energy management of HEVs, executing an energy distribution scheme derived from a given EMS changes both the vehicle state and the driving state. Meanwhile, the energy consumption of the powertrain occurs simultaneously with these state transitions. The instantaneous energy (or fuel) consumption, together with the total energy (fuel) consumed over the future, provides a criterion for judging the performance of the strategy. A new energy distribution scheme is then calculated from the current vehicle states to accomplish the energy management. This process contains the main elements of a sequential decision problem: the interaction of the decision maker with the controlled object and the environment it belongs to, the policy (or strategy), states, actions, and costs (or rewards). Because the state transition process of the vehicle shows a distinct Markovian property [54], we model and formulate the HEV energy management problem based on MDP theory. The general modeling is described in this chapter, while the similarities and differences of the modeling process for different energy management problems are described in the relevant sections of subsequent chapters.
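The elements named above (state, action, instantaneous cost, and Markovian transition) can be illustrated with a deliberately simplified sketch. All names, models, and parameter values below are hypothetical assumptions for illustration only, not the formulation developed in this book: the state is reduced to battery SOC and power demand, the action is the fraction of demand supplied by the engine, and fuel cost is taken as proportional to engine energy.

```python
# Toy MDP sketch for HEV energy management (all names and values are
# hypothetical; the book's actual formulation is developed in later chapters).

def step(soc, demand_kw, engine_frac, dt_s=1.0,
         battery_kwh=1.5, fuel_per_kwh=0.08):
    """One Markovian transition: next state and instantaneous cost depend
    only on the current state (soc, demand_kw) and the action (engine_frac)."""
    engine_kw = engine_frac * demand_kw
    battery_kw = demand_kw - engine_kw          # remainder drawn from battery
    # SOC transition (hypothetical linear battery model), clipped to [0, 1]
    soc_next = soc - battery_kw * (dt_s / 3600.0) / battery_kwh
    soc_next = min(max(soc_next, 0.0), 1.0)
    # Instantaneous fuel cost (kg), proportional to engine energy
    fuel_kg = fuel_per_kwh * engine_kw * (dt_s / 3600.0)
    return soc_next, fuel_kg

def rollout(policy, soc0, demand_profile_kw):
    """Accumulate fuel over a drive cycle under a given policy (strategy);
    the cumulative fuel is the criterion for judging strategy performance."""
    soc, total_fuel = soc0, 0.0
    for demand in demand_profile_kw:
        action = policy(soc, demand)            # energy distribution scheme
        soc, fuel = step(soc, demand, action)   # state transition + cost
        total_fuel += fuel
    return soc, total_fuel

# A simple rule-based example policy: lean on the engine when SOC is low.
def thermostat_policy(soc, demand_kw, low=0.3):
    return 1.0 if soc < low else 0.4
```

Under this sketch, comparing `rollout(thermostat_policy, ...)` against other policies over the same demand profile mirrors the decision loop described above: observe the state, apply the strategy, incur an instantaneous cost, and judge the strategy by the cost accumulated over the future.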
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this chapter
Li, Y., He, H. (2022). Background: Deep Reinforcement Learning. In: Deep Reinforcement Learning-Based Energy Management for Hybrid Electric Vehicles. Synthesis Lectures on Advances in Automotive Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-79206-9_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-79194-9
Online ISBN: 978-3-031-79206-9