Parallel Computation of a New Data Driven Algorithm for Training Neural Networks

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNTCS,volume 7951)

Abstract

Unlike earlier learning algorithms such as the backpropagation (BP) or radial basis function (RBF) algorithms, a new data-driven algorithm for training neural networks is proposed. In this data-driven methodology for training feedforward neural networks, system modeling is performed directly on the input-output data collected from real processes. To improve efficiency, a parallel computation method is introduced, and the performance of parallel computing for the new data-driven algorithm is analyzed. The results show that the parallel computing mechanism yields a substantially higher training speed.
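The abstract describes parallelizing the training of a data-driven model fitted directly to collected input-output pairs. The paper's actual partitioning scheme is not given on this page, so the following is only a minimal sketch, assuming the per-chunk computations are independent and can be merged afterwards; the names `fit_chunk` and `parallel_fit`, and the toy per-chunk statistic, are hypothetical illustrations, not the paper's algorithm.

```python
# Sketch: split the training data into chunks, process chunks concurrently,
# then merge the partial results. A thread pool is used here for portability;
# a CPU-bound weight-function fit would typically use a process pool instead.
from concurrent.futures import ThreadPoolExecutor


def fit_chunk(chunk):
    # Placeholder for the per-chunk computation: here we just accumulate
    # a simple statistic over the chunk's (input, output) pairs.
    return sum(x * y for x, y in chunk)


def parallel_fit(data, workers=4):
    # Partition the (input, output) pairs into roughly equal chunks.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(fit_chunk, chunks)
    # Merge the independent partial results.
    return sum(partials)
```

Because each chunk is processed without reference to the others, the merge step is a plain reduction; this independence is what makes the speedup reported in the abstract plausible for such data-driven fits.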

Keywords

  • artificial intelligence
  • neural networks
  • weight function
  • B-spline function
  • algorithm
  • data driven methodology
  • parallel computation




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, D. (2013). Parallel Computation of a New Data Driven Algorithm for Training Neural Networks. In: Guo, C., Hou, ZG., Zeng, Z. (eds) Advances in Neural Networks – ISNN 2013. ISNN 2013. Lecture Notes in Computer Science, vol 7951. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39065-4_19

  • DOI: https://doi.org/10.1007/978-3-642-39065-4_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-39064-7

  • Online ISBN: 978-3-642-39065-4

  • eBook Packages: Computer Science (R0)