Matrix Pseudoinversion for Image Neural Processing

  • Rossella Cancelliere
  • Mario Gai
  • Thierry Artières
  • Patrick Gallinari
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7667)

Abstract

Novel strategies have recently been proposed for training Single Hidden Layer Feedforward Networks: the weights from the input to the hidden layer are set randomly, while the weights from the hidden to the output layer are determined analytically via the Moore-Penrose generalised inverse. Such non-iterative strategies are appealing because they allow fast learning, but care is required to achieve good results, particularly in the procedure used for matrix pseudoinversion. This paper proposes a novel approach based on an original determination of the initialisation interval for the input weights, a careful choice of hidden-layer activation functions, and a critical use of the generalised inverse to determine the output weights. We show that this key step suffers from numerical problems related to matrix invertibility, and we propose a heuristic procedure that makes the method more robust. We report results on a difficult astronomical image-analysis problem, chromaticity diagnosis, to illustrate the points under study.
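The training scheme the abstract describes can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' implementation: the uniform initialisation interval, the tanh activation, and the `rcond` truncation threshold are assumptions standing in for the paper's specific choices, which the abstract does not detail.

```python
import numpy as np

def train_slfn(X, T, n_hidden, rcond=1e-8, rng=None):
    """Non-iterative training of a single-hidden-layer feedforward network:
    input weights are drawn at random, output weights come from the
    Moore-Penrose pseudoinverse of the hidden activation matrix.
    Interval, activation, and rcond are illustrative assumptions."""
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    # Random input-to-hidden weights and biases from a fixed interval.
    W = rng.uniform(-1.0, 1.0, size=(n_in, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activation matrix
    # The numerically delicate step: near-zero singular values of H make
    # the pseudoinverse unstable, so pinv truncates them below rcond.
    beta = np.linalg.pinv(H, rcond=rcond) @ T
    return W, b, beta

def predict(X, W, b, beta):
    """Forward pass with the trained weights."""
    return np.tanh(X @ W + b) @ beta
```

The `rcond` truncation is one simple way to guard against the invertibility problems the paper discusses: when the hidden activation matrix is ill-conditioned, dropping the smallest singular values trades a little accuracy for numerical stability.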

Keywords

Pseudoinverse matrix · Weights initialization · Supervised learning



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Rossella Cancelliere (1)
  • Mario Gai (2)
  • Thierry Artières (3)
  • Patrick Gallinari (3)
  1. Università di Torino, Turin, Italy
  2. National Institute of Astrophysics, Turin, Italy
  3. LIP6, Université Pierre et Marie Curie, Paris, France
