Abstract
Component-wise Iterative Optimization (CIO) is a method for handling large data in OLAP applications, and it can be viewed as an enhancement of traditional batch methods such as least squares. The salient feature of the method is that it processes transactions one by one, iteratively optimizes the estimate of each parameter with respect to the given objective function, and updates the model on the fly. Applying CIO to feed-forward neural networks with a single hidden layer yields a new learning algorithm. The algorithm exploits the internal structure of such networks by using CIO in closed form to update the weights between the hidden layer and the output layer. Its computational optimality is a natural consequence of the corresponding property of CIO and is also demonstrated in an illustrative example.
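The scheme the abstract describes — processing samples one at a time while updating each hidden-to-output weight in closed form against a squared-error objective — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the data, network size, fixed random input-to-hidden weights, and the use of running sufficient statistics for the coordinate-wise least-squares updates are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration).
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

H = 10                              # number of hidden units (assumed)
W_in = rng.normal(size=(3, H))      # input-to-hidden weights, held fixed here
b_in = rng.normal(size=H)

def hidden(x):
    """Hidden-layer activations of a single-hidden-layer network."""
    return np.tanh(x @ W_in + b_in)

# Running sufficient statistics of the squared-error objective.
A = np.zeros((H, H))                # accumulated sum of h h^T
b = np.zeros(H)                     # accumulated sum of h * y
w = np.zeros(H)                     # hidden-to-output weights being learned

for x_t, y_t in zip(X, y):          # process transactions one by one
    h = hidden(x_t)
    A += np.outer(h, h)
    b += h * y_t
    # Component-wise sweep: each w[j] receives its closed-form
    # least-squares optimum with the other components held fixed.
    for j in range(H):
        r = b[j] - A[j] @ w + A[j, j] * w[j]
        if A[j, j] > 1e-12:         # skip components not yet excited by data
            w[j] = r / A[j, j]

pred = hidden(X) @ w
mse = np.mean((pred - y) ** 2)
```

Because each component update solves a one-dimensional least-squares problem exactly, no learning rate is needed for the output-layer weights, in contrast to gradient-based training of the same layer.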
Copyright information
© 2007 Springer Berlin Heidelberg
Cite this paper
Lin, Y. (2007). An Application of Component-Wise Iterative Optimization to Feed-Forward Neural Networks. In: Shi, Y., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds) Computational Science – ICCS 2007. ICCS 2007. Lecture Notes in Computer Science, vol 4488. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72586-2_67
Print ISBN: 978-3-540-72585-5
Online ISBN: 978-3-540-72586-2
eBook Packages: Computer Science (R0)