Abstract
Optimal engine operation during a transient driving cycle is key to achieving greater fuel economy, higher engine efficiency, and reduced emissions. To achieve continuously optimal engine operation, engine calibration methods use a combination of static correlations obtained from dynamometer tests at steady-state operating points and road and/or track performance data. As the parameter space of control variables, design-variable constraints, and objective functions grows, the cost and duration of optimal calibration become prohibitive. To reduce the number of dynamometer tests required for calibrating modern engines, this work presents a large-scale simulation-driven machine learning approach. A parallel, fast, robust, physics-based reduced-order engine simulator is used to obtain performance and emission characteristics of engines over a wide range of control parameters under various transient driving conditions (drive cycles). We scale the simulation to 3,906 nodes of the Theta supercomputer at the Argonne Leadership Computing Facility to generate the data required to train a machine learning model. The trained model is then used to predict engine parameters of interest, and the results are compared with those computed by the engine simulator. Our results show that a deep-neural-network-based surrogate model achieves high accuracy: Pearson product-moment correlation values above 0.99 and mean absolute percentage error within 1.07% for engine parameters such as exhaust temperature, exhaust pressure, nitric oxide, and engine torque. Once trained, the surrogate model is fast at inference: it requires about 16 µs to predict the engine performance and emissions for a single design configuration, compared with about 0.5 s per configuration with the engine simulator. Moreover, we demonstrate that transfer learning and retraining can be leveraged to incrementally update the surrogate model to cope with new configurations that fall outside the training data space.
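To make the surrogate-modeling workflow concrete, the sketch below trains a small fully connected network on stand-in data, evaluates it with the two metrics reported above (Pearson product-moment correlation and mean absolute percentage error), and finishes with a brief retraining step in the spirit of the transfer-learning result. This is a minimal sketch assuming a Keras/TensorFlow stack; the synthetic inputs and targets, layer sizes, and hyperparameters are illustrative assumptions, not the paper's configuration, whose training data come from the reduced-order engine simulator run at scale on Theta.

```python
# Minimal sketch of a DNN surrogate for engine performance/emissions.
# All data, layer sizes, and hyperparameters are illustrative stand-ins.
import numpy as np
from scipy.stats import pearsonr
from tensorflow import keras

rng = np.random.default_rng(0)

# Stand-in for simulator data: 8 control parameters -> 4 outputs
# (exhaust temperature, exhaust pressure, NO, torque), all kept
# strictly positive so that MAPE is well defined.
X = rng.uniform(size=(10_000, 8)).astype("float32")
y = np.stack([
    300.0 + 100.0 * X.sum(axis=1),      # exhaust temperature (K)
    1.0 + X[:, 0] * X[:, 1],            # exhaust pressure (bar)
    np.exp(X[:, 2]),                    # NO (arbitrary units)
    50.0 + 20.0 * X[:, 3] * X[:, 4],    # torque (N m)
], axis=1).astype("float32")

# Normalize inputs and targets so all outputs train on a comparable scale.
x_mu, x_sd = X.mean(axis=0), X.std(axis=0)
y_mu, y_sd = y.mean(axis=0), y.std(axis=0)
Xn, yn = (X - x_mu) / x_sd, (y - y_mu) / y_sd

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(4),  # one linear head per engine output
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
model.fit(Xn, yn, epochs=20, batch_size=256, validation_split=0.1, verbose=0)

# Evaluate with the two metrics reported in the abstract.
pred = model.predict(Xn, verbose=0) * y_sd + y_mu
for i, name in enumerate(["T_exh", "P_exh", "NO", "torque"]):
    r, _ = pearsonr(pred[:, i], y[:, i])
    mape = 100.0 * np.mean(np.abs((pred[:, i] - y[:, i]) / y[:, i]))
    print(f"{name}: Pearson r = {r:.4f}, MAPE = {mape:.2f}%")

# Transfer learning / retraining: for configurations outside the original
# training space, reuse the trained weights and continue fitting on the
# new data at a reduced learning rate instead of training from scratch.
X_new = rng.uniform(1.0, 1.2, size=(1_000, 8)).astype("float32")
y_new = np.stack([
    300.0 + 100.0 * X_new.sum(axis=1),
    1.0 + X_new[:, 0] * X_new[:, 1],
    np.exp(X_new[:, 2]),
    50.0 + 20.0 * X_new[:, 3] * X_new[:, 4],
], axis=1).astype("float32")
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")
model.fit((X_new - x_mu) / x_sd, (y_new - y_mu) / y_sd,
          epochs=10, batch_size=256, verbose=0)
```

Once trained, a forward pass through such a network amortizes to microseconds per configuration, which is what makes the surrogate attractive relative to the roughly 0.5 s per configuration for the physics-based simulator.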
Notes
1. Aptly named after a small, intelligent dog that loves to learn new tricks.
Acknowledgment
This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. This material was based upon work supported by the U.S. Department of Energy, Office of Science, under Contract DE-AC02-06CH11357.