
Knowledge and Information Systems, Volume 48, Issue 2, pp 493–503

Local search and pseudoinversion: a hybrid approach to neural network training

  • Luca Rubini
  • Rossella Cancelliere
  • Patrick Gallinari
  • Andrea Grosso
Short Paper

Abstract

We consider recently proposed, successful techniques for neural network training that randomly set the weights from the input to the hidden layer, while the weights from the hidden to the output layer are determined analytically via the Moore–Penrose generalized inverse. This study analyses the impact on performance when the completely random sampling of the input-weight space is replaced by a local search procedure over a discretized set of weights. The performance of the proposed training methods is assessed through computational experiments on several UCI datasets.
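The training scheme the abstract describes can be sketched as follows. This is an illustrative toy reconstruction, not the authors' exact algorithm: the sine regression task, the hyperparameters, the discretization grid, and the single-weight hill-climbing move are all assumptions made for the example.

```python
import numpy as np

def hidden(X, W, b):
    # Input-to-hidden map with tanh activations.
    return np.tanh(X @ W + b)

def mse_with_pinv(X, T, W, b):
    # Hidden-to-output weights solved analytically via the
    # Moore-Penrose pseudoinverse (a least-squares fit).
    H = hidden(X, W, b)
    beta = np.linalg.pinv(H) @ T
    return np.mean((H @ beta - T) ** 2), beta

# Toy data: learn y = sin(x) on [-3, 3].
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)

n_hidden = 20
grid = np.linspace(-2, 2, 9)               # discretized set of candidate weights
W = rng.choice(grid, size=(1, n_hidden))   # random start on the grid
b = rng.choice(grid, size=n_hidden)
best_err, beta = mse_with_pinv(X, T, W, b)

# Local search: move one input weight at a time to another grid value,
# recompute the pseudoinverse solution, keep the move if the error drops.
for _ in range(300):
    W_new = W.copy()
    j = rng.integers(n_hidden)
    W_new[0, j] = rng.choice(grid)
    err, beta_new = mse_with_pinv(X, T, W_new, b)
    if err < best_err:
        W, best_err, beta = W_new, err, beta_new
```

The key point the hybrid exploits is that, for any fixed input weights, the optimal output weights are obtained in closed form, so each local-search move only costs one pseudoinverse computation rather than a full gradient-based retraining.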

Keywords

Neural networks · Random projections · Local search · Pseudoinverse matrix


Acknowledgments

The activity has been partially carried out in the context of the Visiting Professor Program of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Italian Istituto Nazionale di Alta Matematica (INdAM).


Copyright information

© Springer-Verlag London 2016

Authors and Affiliations

  • Luca Rubini (1)
  • Rossella Cancelliere (1)
  • Patrick Gallinari (2)
  • Andrea Grosso (1)
  1. Department of Computer Sciences, University of Turin, Turin, Italy
  2. Laboratory of Computer Sciences (LIP6), Université Pierre et Marie Curie, Paris, France
