
Retracted: Robust Training of Feedforward Neural Networks Using Combined Online/Batch Quasi-Newton Techniques

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2012 (ICANN 2012)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7553)


Abstract

This paper describes a robust training algorithm based on a quasi-Newton process in which online and batch error functions are combined through a weighting coefficient. The coefficient is adjusted so that the algorithm gradually shifts from online to batch training. Furthermore, an analogy is drawn between this algorithm and the Langevin algorithm, a gradient-based continuous optimization method that incorporates the simulated annealing concept. Neural network training experiments demonstrate the validity of the combined algorithm, which achieves more robust training and more accurate generalization than other quasi-Newton-based training algorithms.
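The combined online/batch scheme described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: it blends a single-sample (online) gradient with a full-batch gradient via a weighting coefficient that ramps from 0 to 1 over training, and it uses a plain gradient step in place of the paper's quasi-Newton update. The linear-regression task, the linear ramp schedule, and all names here are assumptions made for the sketch.

```python
import random

random.seed(0)

# Hypothetical stand-in for the paper's training task: fit y = a*x + b
# to noisy samples of y = 2x + 1.
data = [(k / 10.0, 2.0 * (k / 10.0) + 1.0 + random.gauss(0.0, 0.05))
        for k in range(50)]

def grad(w, samples):
    """Gradient of the mean squared error of the linear model w[0]*x + w[1]."""
    ga = gb = 0.0
    for x, y in samples:
        err = w[0] * x + w[1] - y
        ga += 2.0 * err * x
        gb += 2.0 * err
    n = len(samples)
    return [ga / n, gb / n]

def combined_train(epochs=300, lr=0.05):
    """Blend online and batch gradients with a coefficient that grows 0 -> 1."""
    w = [0.0, 0.0]
    for t in range(epochs):
        lam = t / (epochs - 1)                     # weighting coefficient
        g_online = grad(w, [random.choice(data)])  # single-sample "online" error
        g_batch = grad(w, data)                    # full-batch error
        # Combined gradient: early training is online-dominated (noisy),
        # late training is batch-dominated (deterministic).
        g = [lam * gb_ + (1.0 - lam) * go for gb_, go in zip(g_batch, g_online)]
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

w = combined_train()
print(w)  # parameters approach the true values (2.0, 1.0)
```

The decaying weight of the noisy online gradient plays a role analogous to the annealed noise term in the Langevin algorithm mentioned in the abstract: stochasticity early on helps escape poor regions, while the batch-dominated late phase settles into a minimum.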

An Erratum for this chapter can be found at http://dx.doi.org/10.1007/978-3-642-33266-1_72




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ninomiya, H. (2012). Retracted: Robust Training of Feedforward Neural Networks Using Combined Online/Batch Quasi-Newton Techniques. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds) Artificial Neural Networks and Machine Learning – ICANN 2012. ICANN 2012. Lecture Notes in Computer Science, vol 7553. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33266-1_10


  • DOI: https://doi.org/10.1007/978-3-642-33266-1_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33265-4

  • Online ISBN: 978-3-642-33266-1

