Building Robust Prediction Models for Defective Sensor Data Using Artificial Neural Networks

  • Cláudio Rebelo de Sá (corresponding author)
  • Arvind Kumar Shekar
  • Hugo Ferreira
  • Carlos Soares
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 950)

Abstract

Sensors are susceptible to failure when exposed to extreme conditions over long periods of time. They can also be affected by noise or electrical interference. Models (machine learning or otherwise) trained on data from these faulty and noisy sensors may be less reliable. In this paper, we propose a data augmentation approach for making neural networks more robust to missing and faulty sensor data. The approach is shown to be effective in a real-life industrial application that uses data from various sensors to predict the wear of an automotive fuel-system component. Empirical results show that, in this application, the proposed approach leads to more robust neural networks than existing methods.
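The abstract does not specify the augmentation scheme itself, but the general idea of simulating sensor faults at training time can be sketched as follows. This is a rough illustration only, not the authors' method: the function name, the zero-masking convention for "failed" sensors, and all parameter values are hypothetical assumptions.

```python
import numpy as np

def augment_sensor_data(X, rng, p_missing=0.1, noise_std=0.05, n_copies=4):
    """Illustrative augmentation: simulate faulty sensors by randomly
    masking readings (set to 0) and perturbing the rest with Gaussian
    noise, mimicking electrical interference. Returns the original
    samples stacked with n_copies corrupted copies."""
    copies = [X]
    for _ in range(n_copies):
        Xa = X.copy()
        # Randomly "fail" individual readings: a fraction p_missing is zeroed.
        mask = rng.random(Xa.shape) < p_missing
        Xa[mask] = 0.0
        # Add interference-like Gaussian noise to the surviving readings.
        Xa += rng.normal(0.0, noise_std, Xa.shape) * (~mask)
        copies.append(Xa)
    return np.vstack(copies)

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 8))  # 100 samples from 8 hypothetical sensors
X_aug = augment_sensor_data(X, rng)
print(X_aug.shape)  # (500, 8): original plus 4 augmented copies
```

Training a network on the stacked data exposes it to realistic corruption patterns, which is the same intuition behind noise injection for generalization (Sietsma and Dow) and dropout (Srivastava et al.).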

Notes

Acknowledgments

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Cláudio Rebelo de Sá (1, corresponding author)
  • Arvind Kumar Shekar (2)
  • Hugo Ferreira (3)
  • Carlos Soares (4)
  1. Twente University, Enschede, Netherlands
  2. Robert Bosch GmbH, Stuttgart, Germany
  3. INESC TEC, Porto, Portugal
  4. Faculdade de Engenharia, Universidade do Porto, Porto, Portugal