Is pocket algorithm optimal?

  • Marco Muselli
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 904)

Abstract

The pocket algorithm is considered able to provide, for any classification problem, the weight vector that satisfies the maximum number of input-output relations contained in the training set. A convergence theorem ensures that an optimal configuration is reached with probability one as the number of iterations grows indefinitely. In the present paper a new formulation of this theorem is given; a rigorous proof corrects some formal and substantial errors that invalidated previous theoretical results. In particular, it is shown that the optimality of the asymptotic solution is ensured only if the number of permanences for the pocket vector lies in a proper interval of the real axis, whose bounds depend on the number of iterations.
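For readers unfamiliar with the method, the following is a minimal sketch of the pocket algorithm as commonly described in the literature (e.g., by Gallant): the usual perceptron rule runs on randomly drawn training examples, while a separate "pocket" vector stores the weights that achieved the longest run of consecutive correct classifications, the permanences referred to above. All identifiers, the random-sampling loop, and the fixed iteration budget below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pocket_algorithm(X, y, n_iters=10_000, seed=None):
    """Sketch of the pocket algorithm for a single threshold unit.

    X: (n_samples, n_features) real inputs; y: labels in {-1, +1}.
    Parameter names and the stopping rule (a fixed iteration budget)
    are illustrative assumptions, not the paper's notation.
    """
    rng = np.random.default_rng(seed)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb the bias term
    w = np.zeros(X.shape[1])       # current perceptron weight vector
    pocket_w = w.copy()            # best vector found so far (the "pocket")
    run = 0                        # current run of consecutive correct answers
    pocket_run = 0                 # longest run achieved by pocket_w

    for _ in range(n_iters):
        i = rng.integers(len(y))   # draw one training example at random
        if np.sign(X[i] @ w) == y[i]:
            run += 1               # one more "permanence" for w
            if run > pocket_run:   # w has outlasted the pocket vector,
                pocket_w, pocket_run = w.copy(), run  # so put w in the pocket
        else:
            w = w + y[i] * X[i]    # standard perceptron correction
            run = 0                # the run of permanences is broken
    return pocket_w
```

The run counter above corresponds to the "number of permanences" in the abstract; the result discussed in the paper states that the asymptotic solution is optimal only when this count lies in an interval whose bounds depend on the number of iterations.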

Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Marco Muselli
  1. Istituto per i Circuiti Elettronici, Consiglio Nazionale delle Ricerche, Genova, Italy
