References
Akaike H. [1973], Information theory and an extension of the maximum likelihood principle, 2nd International Symposium on Information Theory, pp 267–281, Akadémiai Kiadó
Akaike H. [1974], A new look at the statistical model identification, IEEE Transactions on Automatic Control, 19, pp 716–723
Antoniadis A., Berruyer J., Carmona R. [1992], Régression non linéaire et applications, Economica
Bartlett P.L. [1997], For valid generalization, the size of the weights is more important than the size of the network, Neural Information Processing Systems, 9, Morgan Kaufmann
Bishop C. [1993], Curvature-driven smoothing: a learning algorithm for feedforward networks, IEEE Transactions on Neural Networks, 4, pp 882–884
Bishop C. [1995], Neural Networks for Pattern Recognition, Oxford University Press
Björck A. [1967], Solving linear least squares problems by Gram-Schmidt orthogonalization. BIT, 7, pp 1–27
Broyden C.G. [1970], The convergence of a class of double-rank minimization algorithms 2: the new algorithm, Journal of the Institute of Mathematics and its Applications, 6, pp 222–231
Chen S., Billings S.A., Luo W. [1989], Orthogonal least squares methods and their application to nonlinear system identification, International Journal of Control, 50, pp 1873–1896
Draper N.R., Smith H. [1998], Applied Regression Analysis, Wiley
Dreyfus G., Idan Y. [1998], The canonical form of discrete-time nonlinear models, Neural Computation, 10, pp 133–164
Gallinari P., Cibas T. [1999], Practical complexity control in multilayer perceptrons, Signal Processing, 74, pp 29–46
Geman S., Bienenstock E., Doursat R. [1992], Neural networks and the bias/variance dilemma, Neural Computation, 4, pp 1–58
Goodwin G.C., Payne R.L. [1977], Dynamic System Identification: Experiment Design and Data Analysis, Mathematics in Science and Engineering, Academic Press
Goodwin G.C., Sin K.S. [1984], Adaptive Filtering Prediction and Control, Prentice-Hall, New Jersey
Guyon I., Gunn S., Nikravesh M., Zadeh L., eds. [2005], Feature extraction, foundations and applications, Springer
Hansen L.K., Larsen J. [1996], Linear unlearning for cross-validation, Advances in Computational Mathematics, 5, pp 269–280
Haykin S. [1994], Neural Networks: A Comprehensive Foundation, Macmillan
Jolliffe I.T. [1986], Principal Component Analysis, Springer
Kohonen T. [2001] Self-Organizing Maps, Springer
Kullback S., Leibler R.A. [1951], On information and sufficiency, Annals of Mathematical Statistics, 22, pp 79–86
Kullback S. [1959], Information Theory and Statistics, Dover Publications
Kuo B. C. [1992], Digital Control Systems, Saunders College Publishing
Kuo B.C. [1995], Automatic Control Systems, Prentice-Hall
de Lagarde J. [1983], Initiation à l'analyse des données, Dunod, Paris
Lawrance A.J. [1995], Deletion, influence and masking in regression, Journal of the Royal Statistical Society, B 57, pp 181–189
Leontaritis I.J., Billings S.A. [1987], Model selection and validation methods for nonlinear systems, International Journal of Control, 45, pp 311–341
Levenberg K. [1944], A method for the solution of certain nonlinear problems in least squares, Quarterly Journal of Applied Mathematics, 2, pp 164–168
Levin A., Narendra K.S. [1993], Control of nonlinear dynamical systems using neural networks: controllability and stabilization, IEEE Transactions on Neural Networks, 4, pp 1011–1020
Ljung L. [1987], System Identification: Theory for the User, Prentice Hall
MacKay D.J.C. [1992], A practical Bayesian framework for backpropagation networks, Neural Computation, 4, pp 448–472
McQuarrie A.D.R., Tsai C. [1998], Regression and Time Series Model Selection, World Scientific
Marquardt D.W. [1963], An algorithm for least-squares estimation of nonlinear parameters, Journal of the Society of Industrial and Applied Mathematics, 11, pp 431–441
Monari G. [1999], Sélection de modèles non linéaires par leave-one-out; étude théorique et application des réseaux de neurones au procédé de soudage par points, Thèse de Doctorat de l'Université Pierre et Marie Curie, Paris. Available from http://www.neurones.espci.fr
Monari G., Dreyfus G. [2000], Withdrawing an example from the training set: an analytic estimation of its effect on a nonlinear parameterised model, Neurocomputing, 35, pp 195–201
Monari G., Dreyfus G. [2002], Local Overfitting Control via Leverages, Neural Computation
Mood A.M., Graybill F.A., Boes D.C. [1974], Introduction to the Theory of Statistics, McGraw-Hill
Narendra K.S., Annaswamy A.M. [1989], Stable Adaptive Systems, Prentice-Hall
Nerrand O. [1992], Réseaux de neurones pour le filtrage adaptatif, l’identification et la commande de processus, thèse de doctorat de l’Université Pierre et Marie-Curie
Nerrand O., Urbani D., Roussel-Ragot P., Personnaz L., Dreyfus G. [1994], Training recurrent neural networks: why and how? An illustration in process modeling, IEEE Transactions on Neural Networks, 5, pp 178–184
Nørgaard M., Ravn O., Poulsen N.K., Hansen L.K. [2000], Neural Networks for Modelling and Control of Dynamic Systems, Springer
Norton J.P. [1986], An introduction to Identification, Academic Press
Oussar Y. [1998], Réseaux d’ondelettes et réseaux de neurones pour la modélisation statique et dynamique de processus, Thèse de Doctorat de l’Université Pierre et Marie Curie, Paris. Available from http://www.neurones.espci.fr
Oussar Y., Dreyfus G. [2002], Initialization by selection for wavelet network training, Neurocomputing, 34, pp 131–143
Oussar Y., Dreyfus G. [2001], How to be a gray box: dynamic semiphysical modeling, Neural Networks, vol. 14
Plaut D., Nowlan S., Hinton G.E. [1986], Experiments on learning by back propagation, Technical Report, Carnegie-Mellon University
Poggio T., Torre V., Koch C. [1985], Computational vision and regularization theory, Nature, 317, pp 314–319
Press W.H., Teukolsky S.A., Vetterling W.T., Flannery B.P. [1992], Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press
Puskorius G.V., Feldkamp L.A. [1994], Neurocontrol of nonlinear dynamical systems with Kalman Filter trained recurrent networks, IEEE Trans. on Neural Networks, 5, pp 279–297
Rumelhart D.E., Hinton G.E., Williams R.J. [1986], Learning internal representations by error backpropagation, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, pp 318–362, MIT Press
Saarinen S., Bramley R., Cybenko G. [1993], Ill-conditioning in neural network training problems, SIAM J. Sci. Stat. Comp., 14, pp 693–714
Seber G.A.F., Wild C.J. [1989], Nonlinear Regression, Wiley
Seber G.A.F. [1977], Linear Regression Analysis, Wiley
Sjöberg J., Zhang Q., Ljung L., Benveniste A., Delyon B. [1995], Nonlinear black-box modeling in system identification: a unified overview, Automatica, 31, pp 1691–1724
Söderström T. [1977], On model structure testing in system identification, International Journal of Control, 26, pp 1–18
Sontag E.D. [1993], Neural networks for control, Essays on control: perspectives in the theory and its applications, pp 339–380, Birkhäuser
Stone M. [1974], Cross-validatory choice and assessment of statistical predictions, Journal of the Royal Statistical Society, B 36, pp 111–147
Stoppiglia H. [1998], Méthodes statistiques de sélection de modèles neuronaux; applications financières et bancaires, thèse de doctorat de l'Université Pierre et Marie-Curie. Available from http://www.neurones.espci.fr
Stoppiglia H., Dreyfus G., Dubois R., Oussar Y. [2003], Ranking a random feature for variable and feature selection, Journal of Machine Learning Research, 3, pp 1399–1414
Stricker M. [2000], Réseaux de neurones pour le traitement automatique du langage: conception et réalisation de filtres d'informations, thèse de l'Université Pierre et Marie-Curie. Available from http://www.neurones.espci.fr
Tibshirani R.J. [1996], A comparison of some error estimates for neural models, Neural Computation, 8, pp 152–163
Tikhonov A.N., Arsenin V.Y. [1977], Solutions of Ill-Posed Problems, Winston
Vapnik V.N. [1995], The Nature of Statistical Learning Theory, Springer
Waibel A., Hanazawa T., Hinton G., Shikano K., Lang K. [1989], Phoneme recognition using time-delay neural networks, IEEE Transactions on Acoustics, Speech, and Signal Processing, 37, pp 328–339
Werbos P.J. [1974], Beyond regression: new tools for prediction and analysis in the behavioural sciences, Ph. D. thesis, Harvard University
Widrow B., Hoff M.E. [1960], Adaptive switching circuits, IRE Wescon Convention Records, 4, pp 96–104
Wonnacott T.H., Wonnacott R.J. [1990], Statistique économie-gestion-sciences-médecine, 4e édition, Economica
Zhou G., Si J. [1998], A systematic and effective supervised learning mechanism based on Jacobian rank deficiency, Neural Computation, 10, pp 1031–1045
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this chapter
Dreyfus, G. (2005). Modeling with Neural Networks: Principles and Model Design Methodology. In: Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-28847-3_2
DOI: https://doi.org/10.1007/3-540-28847-3_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-22980-3
Online ISBN: 978-3-540-28847-3
eBook Packages: Physics and Astronomy