Machine Learning with Known Input Data Uncertainty Measure

  • Wojciech M. Czarnecki
  • Igor T. Podolak
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8104)

Abstract

Uncertainty in the input data is a common issue in machine learning. In this paper we show how knowledge of an uncertainty measure for particular points in the training set can be incorporated into learning. This may boost model accuracy as well as reduce overfitting. We present an approach based on classical training with jitter for Artificial Neural Networks (ANNs). We prove that our method, which can be applied to a wide class of models, is approximately equivalent to generalised Tikhonov regularisation. We also compare our results with some alternative methods, and conclude with a discussion of further prospects and applications.
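
For illustration, the following is a minimal sketch of the general idea described in the abstract: training with jitter where the noise added to each training point is scaled by that point's known uncertainty. All concrete choices here (Gaussian noise, a logistic model, the synthetic data, and the sigma values) are assumptions made for the example and are not taken from the paper.

    # A minimal sketch of training with per-point jitter. Assumes each input
    # x_i comes with a known uncertainty sigma_i (e.g. the standard deviation
    # of its measurement noise); the model, data, and noise scaling below are
    # illustrative choices, not the authors' exact algorithm.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-class data with hypothetical per-point uncertainties.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    sigma = rng.uniform(0.0, 0.3, size=len(X))

    # Logistic model trained by gradient descent on freshly jittered inputs.
    w, b, lr = np.zeros(2), 0.0, 0.1
    for epoch in range(500):
        # Each point is perturbed by noise proportional to its own
        # uncertainty, so unreliable points are "smeared" over a larger area.
        X_jit = X + rng.normal(size=X.shape) * sigma[:, None]
        p = 1.0 / (1.0 + np.exp(-(X_jit @ w + b)))  # sigmoid outputs
        g = (p - y) / len(X)                        # gradient of the log-loss
        w -= lr * (X_jit.T @ g)
        b -= lr * g.sum()

    acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == (y > 0.5)).mean()
    print("training accuracy:", acc)

The regularisation connection quoted in the abstract follows the line of Bishop (Neural Computation 7(1), 1995), who showed that training with small input noise of variance sigma^2 is approximately equivalent to Tikhonov regularisation; with a separate sigma_i per point, the effective smoothness penalty is weighted point by point, which is presumably the generalised form the paper proves.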

Keywords

machine learning · neural networks · classification · clustering · jitter · uncertainty · random variables

Copyright information

© IFIP International Federation for Information Processing 2013

Authors and Affiliations

  • Wojciech M. Czarnecki (1)
  • Igor T. Podolak (2)
  1. Faculty of Mathematics and Computer Science, Adam Mickiewicz University in Poznan, Poland
  2. Faculty of Mathematics and Computer Science, Jagiellonian University, Poland
