Training Neural Networks on Noisy Data

  • Conference paper
Artificial Intelligence and Soft Computing (ICAISC 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8467)

Abstract

This paper discusses approaches to noise-resistant training of MLP neural networks. We present various aspects of the problem and ways of achieving this goal using two groups of approaches and their combinations. The first group processes each vector differently, depending on the likelihood that the vector is an outlier; this likelihood is determined by instance selection and outlier detection. The second group trains MLP neural networks with non-differentiable robust objective functions. We evaluate the performance of the particular methods at different levels of noise in the data for regression problems.
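
To make the first group of approaches concrete, the sketch below trains a small MLP for regression with a per-sample weighted squared error, where each training vector is down-weighted according to a simple outlier score. This is a minimal illustration only: the k-NN-based scoring in outlier_weights, the network size, the learning rate, and the toy data are assumptions for the example, not the instance-selection, outlier-detection, or robust-objective procedures used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def outlier_weights(X, y, k=5):
    # Down-weight vectors whose target deviates strongly from the median target
    # of their k nearest neighbours in input space (a crude outlier score).
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.ones(n)
    for i in range(n):
        nn = np.argsort(dist[i])[1:k + 1]          # k nearest neighbours, skipping self
        dev = abs(y[i] - np.median(y[nn]))
        w[i] = 1.0 / (1.0 + dev)                   # larger deviation -> smaller weight
    return w / w.mean()

def train_mlp(X, y, w, hidden=16, lr=0.05, epochs=2000):
    # One hidden layer (tanh), linear output, gradient descent on the
    # per-sample weighted squared error: sum_i w_i * (f(x_i) - y_i)^2 / n.
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = h @ W2 + b2 - y
        g = w * err / n                            # weighted residuals drive the gradient
        gh = np.outer(g, W2) * (1.0 - h ** 2)      # backpropagation through tanh
        W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

# Toy 1-D regression problem with a few artificially corrupted targets.
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
y[rng.choice(200, size=10, replace=False)] += 3.0  # injected outliers
model = train_mlp(X, y, outlier_weights(X, y))
print("RMSE against the clean target:",
      np.sqrt(np.mean((model(X) - np.sin(X[:, 0])) ** 2)))

The same skeleton could accommodate the second group of approaches by replacing the weighted squared error with a robust statistic of the residuals (for example a trimmed or median-based criterion), at the cost of a non-differentiable objective that requires a derivative-free or specially adapted optimizer.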

References

  1. Beliakov, G., Kelarev, A., Yearwood, J.: Derivative-free optimization and neural networks for robust regression. Optimization 61(12), 1467–1490 (2012)

    Article  MATH  MathSciNet  Google Scholar 

  2. Ben-Gal, I.: Outlier detection. Kluwer Academic Publishers (2005)

    Google Scholar 

  3. Chen, D., Jain, R.: A robust backpropagation learning algorithm for function approximation. IEEE Transactions on Neural Networks 5(3), 467–479 (1994)

    Article  Google Scholar 

  4. Chuang, C.C., Su, S.F., Hsiao, C.C.: The annealing robust backpropagation (arbp) learning algorithm. IEEE Transactions on Neural Networks 11(5), 1067–1077 (2000)

    Article  Google Scholar 

  5. El-Melegy, M.T., Essai, M.H., Ali, A.A.: Robust training of artificial feedforward neural networks. In: Hassanien, A.-E., Abraham, A., Vasilakos, A.V., Pedrycz, W. (eds.) Foundations of Computational, Intelligence Volume 1. SCI, vol. 201, pp. 217–242. Springer, Heidelberg (2009)

    Chapter  Google Scholar 

  6. El-Melegy, M.: Random sampler m-estimator algorithm for robust function approximation via feed-forward neural networks. In: The 2011 International Joint Conference on Neural Networks (IJCNN), pp. 3134–3140 (2011)

    Google Scholar 

  7. El-Melegy, M.: Ransac algorithm with sequential probability ratio test for robust training of feed-forward neural networks. In: The 2011 International Joint Conference on Neural Networks (IJCNN), pp. 3256–3263 (2011)

    Google Scholar 

  8. El-Melegy, M.: Random sampler m-estimator algorithm with sequential probability ratio test for robust function approximation via feed-forward neural networks. IEEE Transactions on Neural Networks and Learning Systems 24(7), 1074–1085 (2013)

    Article  Google Scholar 

  9. Golak, S., Burchart-Korol, D., Czaplicka-Kolarz, K., Wieczorek, T.: Application of neural network for the prediction of eco-efficiency. In: Liu, D., Zhang, H., Polycarpou, M., Alippi, C., He, H. (eds.) ISNN 2011, Part III. LNCS, vol. 6677, pp. 380–387. Springer, Heidelberg (2011)

    Chapter  Google Scholar 

  10. Guillen, A.: Applying mutual information for prototype or instance selection in regression problems. In: ESANN 2009 (2009)

    Google Scholar 

  11. Hampel, F.R., Ronchetti, E.M., Rousseeuw, P.J., Stahel, W.A.: Robust Statistics: The Approach Based on Influence Functions (Wiley Series in Probability and Statistics), revised edn. Wiley-Interscience, New York (2005)

    Book  Google Scholar 

  12. Hart, P.: The condensed nearest neighbor rule (corresp.). IEEE Transactions on Information Theory 14(3), 515–516 (1968)

    Article  Google Scholar 

  13. Huber, P.J.: Robust Statistics. Wiley Series in Probability and Statistics. Wiley-Interscience (1981)

    Google Scholar 

  14. Kordos, M., Duch, W.: Variable Step Search Algorithm for Feedforward Networks. Neurocomputing 71(13-15), 2470–2480 (2008)

    Article  Google Scholar 

  15. Kordos, M., Białka, S., Blachnik, M.: Instance selection in logical rule extraction for regression problems. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2013, Part II. LNCS, vol. 7895, pp. 167–175. Springer, Heidelberg (2013)

    Chapter  Google Scholar 

  16. Kordos, M., Blachnik, M., Strzempa, D.: Do We Need Whatever More Than k-NN? In: Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2010, Part I. LNCS (LNAI), vol. 6113, pp. 414–421. Springer, Heidelberg (2010)

    Chapter  Google Scholar 

  17. Kordos, M., Rusiecki, A.: Improving MLP Neural Network Performance by Noise Reduction. In: Dediu, A.-H., Martín-Vide, C., Truthe, B., Vega-Rodríguez, M.A. (eds.) TPNC 2013. LNCS, vol. 8273, pp. 133–144. Springer, Heidelberg (2013)

    Chapter  Google Scholar 

  18. Liano, K.: Robust error measure for supervised neural network learning with outliers. IEEE Transactions on Neural Networks 7(1), 246–250 (1996)

    Article  Google Scholar 

  19. Pernia-Espinoza, A.V., Ordieres-Mere, J.B., de Pison, F.J.M., Gonzalez-Marcos, A.: Tao-robust backpropagation learning algorithm. Neural Networks 18(2), 191–204 (2005)

    Article  Google Scholar 

  20. Prechelt, L.: Proben1 – a set of neural network benchmark problems and benchmarking rules. Tech. rep. (1994)

    Google Scholar 

  21. Rousseeuw, P.J., Leroy, A.M.: Robust Regression and Outlier Detection. John Wiley & Sons, Inc., New York (1987)

    Book  MATH  Google Scholar 

  22. Rousseeuw, P.J.: Least median of squares regression. Journal of the American Statistical Association 79(388), 871–880 (1984)

    Article  MATH  MathSciNet  Google Scholar 

  23. Rusiecki, A.: Robust LTS backpropagation learning algorithm. In: Sandoval, F., Prieto, A., Cabestany, J., Graña, M. (eds.) IWANN 2007. LNCS, vol. 4507, pp. 102–109. Springer, Heidelberg (2007)

    Chapter  Google Scholar 

  24. Rusiecki, A.: Robust MCD-based backpropagation learning algorithm. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2008. LNCS (LNAI), vol. 5097, pp. 154–163. Springer, Heidelberg (2008)

    Chapter  Google Scholar 

  25. Rusiecki, A.: Robust learning algorithm based on iterative least median of squares. Neural Processing Letters 36(2), 145–160 (2012)

    Article  Google Scholar 

  26. Rusiecki, A.: Robust learning algorithm based on LTA estimator. Neurocomputing 120, 624–632 (2013)

    Article  Google Scholar 

  27. Salvador, G., Derrac, J., Ramon, C.: Prototype selection for nearest neighbor classification: Taxonomy and empirical study. IEEE Transactions on Pattern Analysis and Machine Intelligence 34, 417–435 (2012)

    Article  Google Scholar 

  28. Tolvi, J.: Genetic algorithms for outlier detection and variable selection in linear regression models. Soft Computing 8, 527–533 (2004)

    Article  MATH  Google Scholar 

  29. Merz, C., Murphy, P.: Uci repository of machine learning databases (2013), http://www.ics.uci.edu/mlearn/MLRepository.html

  30. Wilson, D.L.: Asymptotic properties of nearest neighbor rules using edited data. IEEE Transactions on Systems, Man and Cybernetics SMC-2(3), 408–421 (1972)

    Article  Google Scholar 

  31. Zhang, J.: Intelligent selection of instances for prediction functions in lazy learning algorithms. Artifcial Intelligence Review 11, 175–191 (1997)

    Article  Google Scholar 

  32. Source code and datasets used in the paper, https://code.google.com/p/mlp2013/

Download references

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Rusiecki, A., Kordos, M., Kamiński, T., Greń, K. (2014). Training Neural Networks on Noisy Data. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2014. Lecture Notes in Computer Science (LNAI), vol 8467. Springer, Cham. https://doi.org/10.1007/978-3-319-07173-2_13

  • DOI: https://doi.org/10.1007/978-3-319-07173-2_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-07172-5

  • Online ISBN: 978-3-319-07173-2
