Multilayer Perceptron Learning Utilizing Reducibility Mapping

  • Seiya Satoh
  • Ryohei Nakano
Part of the Studies in Computational Intelligence book series (SCI, volume 465)


In the search space of MLP(J), a multilayer perceptron with J hidden units, there exist flat areas called singular regions, created by applying reducibility mapping to the optimal solution of MLP(J−1). Since such singular regions cause a serious slowdown of learning, methods for avoiding them have long been sought. However, avoiding singular regions does not guarantee the quality of the final solutions. This paper proposes a new learning method that does not avoid but makes good use of singular regions to stably and successively find solutions excellent enough for MLP(J). The potential of the method is shown by our experiments using artificial and real data sets.
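To illustrate the idea of reducibility mapping, the following is a minimal NumPy sketch (not the authors' exact algorithm; function names and the bias-free network form are our own assumptions). It embeds the weights of a trained MLP(J−1) into MLP(J) in two standard ways — appending a unit with zero output weight, and duplicating a unit while splitting its output weight — both of which leave the network function unchanged and thus land on a singular region of MLP(J), from which learning can then proceed.

```python
import numpy as np

def forward(W, v, X):
    """Three-layer MLP output f(x) = v^T tanh(W x).
    W: (J, D) input-to-hidden weights, v: (J,) hidden-to-output weights,
    X: (N, D) inputs (a bias can be modeled as a constant input column)."""
    return np.tanh(X @ W.T) @ v

def append_zero_unit(W, v, w_new):
    """Embed MLP(J-1) into MLP(J) by adding a hidden unit whose output
    weight is 0; its input weights w_new are free parameters, so this
    yields a whole region of weight vectors with identical output."""
    W2 = np.vstack([W, w_new])
    v2 = np.append(v, 0.0)
    return W2, v2

def split_unit(W, v, j, lam):
    """Embed MLP(J-1) into MLP(J) by duplicating hidden unit j and
    splitting its output weight into lam*v[j] and (1-lam)*v[j]; any
    lam gives the same network function, again a singular region."""
    W2 = np.vstack([W, W[j]])
    v2 = np.append(v.copy(), (1.0 - lam) * v[j])
    v2[j] = lam * v[j]
    return W2, v2
```

A search method in this spirit would train MLP(J−1), apply such embeddings to its solution to obtain starting points inside the singular regions of MLP(J), and continue training from there rather than from random initial weights.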


Keywords: Multilayer perceptron, learning method, reducibility mapping, singular region, polynomial network




Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Chubu University, Kasugai, Japan
