An Application of Component-Wise Iterative Optimization to Feed-Forward Neural Networks

  • Yachen Lin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4488)

Abstract

Component-wise Iterative Optimization (CIO) is a method for handling large data in OLAP applications and can be viewed as an enhancement of traditional batch methods such as least squares. The salient feature of the method is to process transactions one by one, optimize the estimate of each parameter iteratively over the given objective function, and update the model on the fly. Applying CIO to feed-forward neural networks with a single hidden layer yields a new learning algorithm. It exploits the internal structure of such networks by using the closed-form expressions of CIO to update the weights between the hidden layer and the output layer. Its computational optimality is a natural consequence of the corresponding property of CIO and is also demonstrated in an illustrative example.
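
The paper's exact update formulas are not reproduced here; the following is a minimal sketch, assuming a squared-error objective and fixed input-to-hidden weights, of how closed-form component-wise updates of the hidden-to-output weights could be applied transaction by transaction. The class name CIOOutputLayer and all parameter choices are illustrative assumptions, not taken from the paper.

```python
# Sketch only: component-wise (coordinate-wise) updates of the
# hidden-to-output weights of a single-hidden-layer network under a
# squared-error objective, processing one transaction at a time.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CIOOutputLayer:
    """Maintains the running sufficient statistics of the least-squares
    objective and refreshes each output weight with its closed-form
    component-wise minimizer after every transaction."""

    def __init__(self, n_hidden):
        self.A = np.zeros((n_hidden, n_hidden))  # running sum of h h^T
        self.b = np.zeros(n_hidden)              # running sum of h * y
        self.w = np.zeros(n_hidden)              # hidden-to-output weights

    def process(self, h, y, sweeps=1):
        # Fold the new transaction into the running statistics.
        self.A += np.outer(h, h)
        self.b += h * y
        # Component-wise closed-form updates: for each weight w_j,
        # minimize the accumulated squared error with the others fixed.
        for _ in range(sweeps):
            for j in range(len(self.w)):
                if self.A[j, j] > 0.0:
                    resid = self.b[j] - self.A[j] @ self.w + self.A[j, j] * self.w[j]
                    self.w[j] = resid / self.A[j, j]
        return self.w

# Illustrative usage: fixed random input-to-hidden weights, toy target.
rng = np.random.default_rng(0)
W_h = rng.normal(size=(4, 3))           # assumed fixed hidden-layer weights
model = CIOOutputLayer(n_hidden=4)
for _ in range(100):                    # transactions arrive one by one
    x = rng.normal(size=3)
    y = np.sin(x).sum()
    h = sigmoid(W_h @ x)                # hidden-layer activations
    model.process(h, y)
print(model.w)
```

Each pass over the components uses only the accumulated statistics, so the model is updated on the fly without revisiting earlier transactions, which is the flavor of processing the abstract describes.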


Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Yachen Lin
  1. Fidelity National Information Services, Inc., 11601 Roosevelt Blvd-TA76, Saint Petersburg, FL 33716
