Optimal Steady-State Base-Calibration of Model Based ECU-Functions

Conference paper

Abstract

In the ECU, a number of submodels are used to calculate signals that cannot be measured directly, for example the EGR mass flow or the exhaust gas temperature. These submodels usually combine physical equations with empirical parts that are modeled by parameters, curves and maps, which can only be calibrated experimentally. Currently, the data set for the steady-state calibration of these submodels is generated by grid measurements on the engine test bench. Exploiting the DoE approach and the physical connections between the submodels offers great potential for reducing the cost and time of their calibration. This paper presents an algorithm that calibrates all of the existing submodels in the “Air System Model” of the Bosch ECU as a network, automatically and with minimal calibration effort on the engine test bench. The algorithm searches for the most informative combination of inputs, such as engine speed, fuel quantity, and the actuator positions of the throttle valve, the EGR valve and the variable turbine geometry (VTG), for the calibration of the network of submodels in the air system. The process is implemented in the framework of “Sequential Experimental Design”. After an initial experiment, in which the submodels are fed with equally and loosely spaced inputs within a predefined range, initial statistical models are built with Gaussian process regression. At each iteration of the process, before the measurement is conducted, a function of the information content with respect to the combination of inputs is derived. This is done in three steps:
  1. First, the relevant system variables for the calibration, such as the air mass flow and the gas temperature after mixing with EGR, are predicted from the combination of inputs, the current statistical models of the relevant submodels and the physical structure of the system.
  2. Based on the predicted system variables, an extended Kalman filter is employed to estimate the variance of the measurement points for the calibration of the submodels.
  3. The information content of the predicted measurement points for the calibration of the submodels is calculated and summed up. It is defined as the reduction of the uncertainty in the unmeasured region of each Gaussian process model when the predicted measurement point is added to the current data set.
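The selection criterion in step 3 can be sketched for a single submodel. This is a minimal numpy illustration under stated assumptions, not Bosch ECU code: the squared-exponential kernel, its hyperparameters and the candidate/unmeasured grids are all illustrative choices, and the EKF-based variance estimate of step 2 is replaced here by the Gaussian process's own predictive variance.

```python
import numpy as np

def sq_exp_kernel(A, B, length=1.0, sigma_f=1.0):
    """Squared-exponential covariance between two input sets (rows = points)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

def predictive_variance(X_train, X_query, noise=1e-4):
    """GP posterior variance at X_query given measured inputs X_train."""
    K = sq_exp_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    k = sq_exp_kernel(X_train, X_query)            # (n_train, n_query)
    v = np.linalg.solve(K, k)
    prior = np.diag(sq_exp_kernel(X_query, X_query))
    return prior - np.sum(k * v, axis=0)           # k*(x,x) - k^T K^-1 k

def information_content(X_train, x_cand, X_unmeasured, noise=1e-4):
    """Reduction of total predictive uncertainty over the unmeasured
    region when the candidate point is (hypothetically) added."""
    var_before = predictive_variance(X_train, X_unmeasured, noise)
    var_after = predictive_variance(
        np.vstack([X_train, x_cand[None, :]]), X_unmeasured, noise)
    return np.sum(var_before - var_after)
```

A candidate that lies near the unmeasured region yields a larger uncertainty reduction than one that duplicates an existing measurement, which is what drives the choice of the next test-bench point.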
With the function derived above, the most informative combination of inputs can be found and used in the experiment of the current iteration. The statistical model of each submodel is updated with the generated data set and used in the next iteration. The process continues until a desired calibration quality is reached, which is described by an “Automatic Stopping Criterion”. At the end, the statistical models of the submodels are converted into maps or curves and stored in the ECU. As a use case, the algorithm was applied to choose the most informative measurement points from a grid measurement for the calibration of three submodels: cylinder charge, pressure upstream of the throttle valve and EGR mass flow. With less than 30 % of the grid measurement points, a slightly better calibration quality was achieved for all three submodels.
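The final conversion of a fitted statistical model into an ECU map amounts to tabulating the model's posterior mean over a regular breakpoint grid. The following is a hypothetical sketch with a numpy-only Gaussian process; the kernel, hyperparameters, axis names and the `to_ecu_map` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gp_posterior_mean(X_train, y_train, X_query, length=1.0, noise=1e-4):
    """GP posterior mean with a squared-exponential kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    return k(X_query, X_train) @ np.linalg.solve(K, y_train)

def to_ecu_map(X_train, y_train, axis1, axis2):
    """Tabulate the calibrated submodel on a regular 2-D grid, as it
    would be stored as an ECU map (breakpoint vectors + table values)."""
    g1, g2 = np.meshgrid(axis1, axis2, indexing="ij")
    grid = np.column_stack([g1.ravel(), g2.ravel()])
    return gp_posterior_mean(X_train, y_train, grid).reshape(g1.shape)
```

For instance, a cylinder-charge submodel trained over engine speed and fuel quantity would be evaluated once on the map's breakpoint grid, and only the resulting table is stored in the ECU.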

Keywords

Steady-state base-calibration of ECU-functions · Sequential experimental design · Gaussian process regression · Mutual information · Extended Kalman filter


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Robert Bosch GmbH, Stuttgart, Germany
