Classifier’s Complexity Control while Training Multilayer Perceptrons

Šarūnas Raudys

Conference paper. Part of the Lecture Notes in Computer Science book series (LNCS, volume 1876).

Abstract

We consider an integrated approach to designing a classification rule that merges the strengths of statistical and neural-network methods. Instead of using multivariate models and statistical methods directly to design the classifier, we use them to whiten the data and then train a perceptron on the whitened vectors. Special attention is paid to the magnitudes of the weights and to optimization of the training procedure. We study how the characteristics of the cost function (target values, conventional regularization parameters) and the parameters of the optimization method (learning step, starting weights, noise injected into the original training vectors, into the targets, and into the weights) influence the result. Some of the complexity-control methods discussed here have so far received almost no attention in the literature.
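To make the approach concrete, the sketch below pairs the two steps the abstract describes: the data are first whitened with the pooled sample covariance (the statistical step), and a single-layer perceptron is then trained on the whitened vectors while complexity is controlled through soft targets, a conventional weight-decay regularizer, small starting weights, the learning step, and noise injected into the training vectors. This is a minimal illustration under assumed settings; the toy Gaussian data, parameter values, and variable names are the editor's assumptions, not code or experiments from the paper.

```python
# Minimal sketch of the integrated approach: whiten, then train a
# single-layer perceptron with several of the complexity controls the
# abstract lists. All data and parameter values are illustrative
# assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class Gaussian data (assumed for illustration).
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),
               rng.normal(1.0, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

# 1) Statistical step: whiten with the pooled sample covariance.
mean = X.mean(axis=0)
cov = np.cov(X - mean, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
W_whiten = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
Z = (X - mean) @ W_whiten

# 2) Neural step: train a single-layer perceptron on whitened data.
w = rng.normal(0.0, 0.01, 2)   # small starting weights
b = 0.0
eta = 0.1                      # learning step
lam = 1e-3                     # weight-decay regularization parameter
target = 0.9                   # soft targets instead of +/-1
sigma_noise = 0.05             # noise injected into training vectors

for epoch in range(200):
    for i in rng.permutation(len(Z)):
        x = Z[i] + rng.normal(0.0, sigma_noise, 2)  # jittered input
        t = target * y[i]
        out = np.tanh(w @ x + b)
        delta = (t - out) * (1.0 - out ** 2)        # error * tanh'
        w += eta * (delta * x - lam * w)            # gradient step + decay
        b += eta * delta

print("training accuracy:",
      np.mean(np.sign(np.tanh(Z @ w + b)) == y))
```

Noise injection into the targets or the weights, which the abstract also mentions, would amount to perturbing `t` or `w` inside the inner loop in the same spirit.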

Keywords

Decision Boundary · Classification Rule · Generalization Error · Training Vector · Learning Step


Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

Šarūnas Raudys, Institute of Mathematics and Informatics, Vilnius, Lithuania
