Modeling with Neural Networks: Principles and Model Design Methodology

Chapter in: Neural Networks


Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

Cite this chapter

Dreyfus, G. (2005). Modeling with Neural Networks: Principles and Model Design Methodology. In: Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-28847-3_2
