Torque control strategy and optimization for fuel consumption and emission reduction in parallel hybrid electric vehicles

  • Industrial Application
  • Published in: Structural and Multidisciplinary Optimization

Abstract

To reduce fuel consumption and exhaust emissions in hybrid electric vehicles (HEVs), it is important to develop a well-organized energy management system (EMS). This paper proposes a torque control strategy coupled with optimization for a parallel HEV. First, a torque control strategy is developed. In particular, a function that controls the driving condition, called the internal combustion engine (ICE) torque control function, is introduced. This function selects among the driving conditions (electric motor (EM) driving, ICE driving, and ICE driving assisted by the EM) to reduce fuel consumption and exhaust emissions, and it depends on several design variables that should be optimized. Because numerical simulation of the HEV using Matlab/Simulink is computationally intensive, sequential approximate optimization (SAO) using a radial basis function (RBF) network is adopted to determine the optimal values of these design variables. As a result, the optimal ICE torque control function is determined with a small number of simulation runs. In this paper, CO2 and NOx emissions are minimized simultaneously to reduce fuel consumption and exhaust emissions. Through numerical simulations using typical driving cycles, the trade-off between CO2 and NOx emissions is clarified and the validity of the proposed torque control strategy coupled with the proposed optimization is examined.


Abbreviations

BSFC: Brake Specific Fuel Consumption
DOH: Degree of Hybridization
DP: Dynamic Programming
EA: Evolutionary Algorithm
EM: Electric Motor
EMS: Energy Management System
FLC: Fuzzy Logic Control
HEV: Hybrid Electric Vehicle
ICE: Internal Combustion Engine
JC08: Japan Chassis 08
LHD: Latin Hypercube Design
MOO: Multi-Objective Optimization
NEDC: New European Driving Cycle
RBF: Radial Basis Function
SAO: Sequential Approximate Optimization
SOC: State of Charge
WLTC: Worldwide harmonized Light duty driving Test Cycle

References

  • Baumann BM, Washington G, Glenn BC, Rizzoni G (2000) Mechatronics design and control of hybrid electric vehicles. IEEE/ASME Trans Mechatron 5(1):58–72
  • Chau KT, Wong YS (2002) Overview of power management in hybrid electric vehicle. Energy Convers Manag 43:1953–1968
  • Delprat S, Lauber J, Guerra TM, Rimaux J (2004) Control of a parallel hybrid powertrain: optimal control. IEEE Trans Veh Technol 53(3):872–881
  • Guemri M, Neffati A, Caux S, Ngueveu SU (2014) Management of distributed power in hybrid vehicles based on D.P. or fuzzy logic. Optim Eng 15(4):993–1012
  • Hui S, Ji-hai J, Xin W (2009) Torque control strategy for a parallel hydraulic hybrid vehicle. J Terramech 46:259–265
  • Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. J Glob Optim 13:455–492
  • Kheir NA, Salman MA, Schouten NJ (2004) Emissions and fuel economy trade-off for hybrid vehicles using fuzzy logic. Math Comput Simul 66:155–172
  • Kitayama S, Arakawa M, Yamazaki K (2011) Sequential approximate optimization using radial basis function network for engineering optimization. Optim Eng 12(4):535–557
  • Kitayama S, Srirat J, Arakawa M, Yamazaki K (2013) Sequential approximate multi-objective optimization using radial basis function network. Struct Multidiscip Optim 48(3):501–515
  • Koot M, Kessels JTBA (2005) Energy management strategies for vehicular electric power systems. IEEE Trans Veh Technol 54(3):771–782
  • Kum D, Peng H, Bucknor NK (2011) Supervisory control of parallel hybrid electric vehicles for fuel and emission reduction. Trans ASME J Dyn Syst Meas Control 133:061010-1–061010-10
  • Lin CC, Peng H, Grizzle JW (2003) Power management strategy for a parallel hybrid electric truck. IEEE Trans Control Syst Technol 11(6):839–849
  • Long VT, Nhan NV (2012) Bee-algorithm-based optimization of component size and control strategy parameters for parallel hybrid electric vehicle. Int J Automot Technol 13(7):1177–1183
  • Miettinen KM (1998) Nonlinear multiobjective optimization. Kluwer Academic Publishers
  • Montazeri-Gh M, Poursamad A, Ghalichi B (2006) Application of genetic algorithm for optimization of control strategy in parallel hybrid electric vehicle. J Frankl Inst 343:420–435
  • Pei D, Leamy MJ (2013) Dynamic programming-informed equivalent cost minimization control strategy for hybrid-electric vehicle. Trans ASME J Dyn Syst Meas Control 135:051013-1–051013-12
  • Perez LV, Bossio GR, Moitre D, Garcia GO (2006) Optimization of power management in an hybrid electric vehicle using dynamic programming. Math Comput Simul 73:244–254
  • Schouten NJ, Salman MA, Kheir NA (2003) Energy management strategies for parallel hybrid vehicles using fuzzy logic. Control Eng Pract 11:171–177
  • Sinoquet D, Rousseau G, Milhau Y (2011) Design optimization and optimal control for hybrid vehicles. Optim Eng 12:199–213
  • Wang L, Zhang Y, Yin C, Zhang H, Wang C (2012) Hardware-in-the-loop simulation for the design and verification of the control system of a series–parallel hybrid electric city-bus. Simul Model Pract Theory 25:148–162
  • Wu J, Zhang CH, Cui NX (2008) PSO algorithm-based parameter optimization for HEV powertrain and its control strategy. Int J Automot Technol 9(1):53–69
  • Zhu Y, Chen Y, Wu Z, Wang A (2006) Optimization design of an energy management strategy for hybrid vehicle. Int J Altern Propuls 1(1):47–62


Author information

Correspondence to Satoshi Kitayama.

Appendix: Sequential approximate optimization with radial basis function network

A.1 Radial basis function network and width in the Gaussian kernel

The RBF network is a three-layer feed-forward network. Given the training data {(x_j, y_j)} (j = 1, 2, ⋯, m), where m is the number of sampling points, the output of the network (the response surface) is given by

$$ \widehat{y}(\mathbf{x}) = \sum_{j=1}^{m} w_j K(\mathbf{x}, \mathbf{x}_j) $$
(A1)

where K(x, x_j) is the j-th basis function and w_j is its weight. The following Gaussian kernel is generally used as the basis function:

$$ K(\mathbf{x}, \mathbf{x}_j) = \exp\left( -\frac{(\mathbf{x} - \mathbf{x}_j)^T (\mathbf{x} - \mathbf{x}_j)}{r_j^2} \right) $$
(A2)

In (A2), x_j is the j-th sampling point and r_j is the width of the j-th basis function. The response y_j is evaluated at the sampling point x_j. The RBF network is trained by solving

$$ E = \sum_{j=1}^{m} \left( y_j - \widehat{y}(\mathbf{x}_j) \right)^2 + \sum_{j=1}^{m} \lambda_j w_j^2 \to \min $$
(A3)

where the second term is introduced for regularization. It is recommended that λ_j in (A3) be a sufficiently small value (e.g., λ_j = 1.0 × 10^{-2}). Thus, training the RBF network is equivalent to finding the weight vector w. The optimality condition of (A3) results in the following equation:

$$ \mathbf{w}={\left({\mathbf{H}}^T\mathbf{H}+\boldsymbol{\Lambda} \right)}^{-1}{\mathbf{H}}^T\mathbf{y} $$
(A4)

where H, Λ and y are given as follows:

$$ \mathbf{H} = \begin{bmatrix} K(\mathbf{x}_1, \mathbf{x}_1) & K(\mathbf{x}_1, \mathbf{x}_2) & \cdots & K(\mathbf{x}_1, \mathbf{x}_m) \\ K(\mathbf{x}_2, \mathbf{x}_1) & K(\mathbf{x}_2, \mathbf{x}_2) & \cdots & K(\mathbf{x}_2, \mathbf{x}_m) \\ \vdots & \vdots & \ddots & \vdots \\ K(\mathbf{x}_m, \mathbf{x}_1) & K(\mathbf{x}_m, \mathbf{x}_2) & \cdots & K(\mathbf{x}_m, \mathbf{x}_m) \end{bmatrix}, \quad \boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_m \end{bmatrix} $$
(A5)
$$ \mathbf{y}={\left({y}_1,{y}_2,\cdots, {y}_m\right)}^T $$
(A6)

It is clear from (A4) that training the RBF network is equivalent to computing the matrix inverse (H^T H + Λ)^{-1}. New sampling points are added during the SAO process. With the RBF network, updating the weight vector w is inexpensive, because the additional learning reduces to an incremental calculation of this matrix inverse.
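For concreteness, the following sketch (in Python with NumPy; not part of the original paper, and all function and variable names are our own) illustrates how the response surface of (A1) can be fitted by solving (A3)–(A4):

```python
import numpy as np

def gaussian_kernel(X, centers, widths):
    """Gaussian basis functions of (A2) evaluated for every row of X against every center."""
    # X: (p, n) query points; centers: (m, n) sampling points x_j; widths: (m,) widths r_j
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # squared Euclidean distances
    return np.exp(-d2 / widths[None, :] ** 2)

def fit_rbf(X, y, widths, lam=1.0e-2):
    """Weight vector w of (A4): w = (H^T H + Lambda)^{-1} H^T y, with Lambda = lam * I."""
    H = gaussian_kernel(X, X, widths)      # matrix H of (A5)
    Lam = lam * np.eye(X.shape[0])         # regularization term of (A3)
    return np.linalg.solve(H.T @ H + Lam, H.T @ y)

def predict_rbf(X_query, centers, widths, w):
    """Response surface y_hat(x) of (A1)."""
    return gaussian_kernel(X_query, centers, widths) @ w
```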

The width in the Gaussian kernel plays an important role in achieving a good approximation. The first author of this paper has proposed the following simple estimate of the width (Kitayama et al. 2011):

$$ r_j = \frac{d_{j,\max}}{\sqrt{n}\,\sqrt[n]{m-1}}, \qquad j = 1, 2, \cdots, m $$
(A7)

where r_j denotes the width of the j-th Gaussian kernel, n is the number of design variables, and d_{j,max} is the maximum distance between the j-th sampling point and all other sampling points. Equation (A7) is applied to each Gaussian kernel individually and can therefore handle a non-uniform distribution of sampling points.
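A minimal sketch of the width estimate (A7), under the same assumptions and naming conventions as the snippet above:

```python
import numpy as np

def estimate_widths(X):
    """Width r_j of (A7): r_j = d_{j,max} / (sqrt(n) * (m - 1)^(1/n)) for each sampling point."""
    m, n = X.shape
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances between samples
    d_max = dist.max(axis=1)                                      # d_{j,max} for each j
    return d_max / (np.sqrt(n) * (m - 1) ** (1.0 / n))
```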

A.2 Density function using the RBF network

In SAO, it is important to identify unexplored regions for a good global approximation. Kriging achieves this with the expected improvement (EI) function. To identify unexplored regions with the RBF network, we have developed a function called the density function (Kitayama et al. 2011). The basic idea is simple: local maxima are generated at the sampling points by replacing every output y of the RBF network with +1, so that the minima of the resulting function lie in sparsely sampled regions. The procedure to construct the density function is summarized as follows (a code sketch follows the list):

  1. (D-STEP1)

    The following vector y D is prepared at the sampling points.

    $$ {\mathbf{y}}^D={\left(1,1,\cdots, 1\right)}_{m\times 1}^T $$
    (A8)
  2. (D-STEP2)

    The weight vector w D of the density function D(x) is calculated as follows:

    $$ {\mathbf{w}}^D={\left({\mathbf{H}}^T\mathbf{H}+\boldsymbol{\Lambda} \right)}^{-1}{\mathbf{H}}^T{\mathbf{y}}^D $$
    (A9)
  3. (D-STEP3)

    The density function D(x) is minimized.

    $$ D(\mathbf{x}) = \sum_{j=1}^{m} w_j^D K(\mathbf{x}, \mathbf{x}_j) \to \min $$
    (A10)
  4. (D-STEP4)

    The point minimizing D(x) is taken as the new sampling point.
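
As an illustration only (reusing gaussian_kernel and the widths from the sketches above), D-STEP1 through D-STEP4 might be written as follows; the choice of SciPy's differential_evolution as the global minimizer of D(x) is our own assumption, since the optimizer is not prescribed here:

```python
import numpy as np
from scipy.optimize import differential_evolution

def next_sampling_point(X, widths, bounds, lam=1.0e-2):
    """Construct the density function D(x) of (A8)-(A10) and return its minimizer."""
    m = X.shape[0]
    H = gaussian_kernel(X, X, widths)                            # same matrix H as in (A5)
    y_d = np.ones(m)                                             # D-STEP1: all outputs set to +1 (A8)
    w_d = np.linalg.solve(H.T @ H + lam * np.eye(m), H.T @ y_d)  # D-STEP2: weights of D(x) (A9)

    def density(x):                                              # D-STEP3: D(x) of (A10)
        return float(gaussian_kernel(np.atleast_2d(x), X, widths) @ w_d)

    result = differential_evolution(density, bounds)             # D-STEP4: minimize D(x) over the design domain
    return result.x                                              # new sampling point in an unexplored region
```

The point returned by next_sampling_point would then be evaluated with the expensive simulation and appended to the training data before the RBF network is refitted.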

Figure 15 shows an illustrative example in one dimension, where the black dots denote the sampling points. It can be seen from Fig. 15 that local minima are generated around the unexplored regions. The RBF network is essentially an interpolation between the sampling points; therefore, points A and B in Fig. 15 are taken as the lower and upper bounds of the design variables when minimizing the density function.

Fig. 15 Illustrative example of the density function

Cite this article

Kitayama, S., Saikyo, M., Nishio, Y. et al. Torque control strategy and optimization for fuel consumption and emission reduction in parallel hybrid electric vehicles. Struct Multidisc Optim 52, 595–611 (2015). https://doi.org/10.1007/s00158-015-1254-8

