Introduction to Mapping and Neural Networks

  • Vladimir M. Krasnopolsky
Chapter
Part of the Atmospheric and Oceanographic Sciences Library book series (ATSL, volume 46)

Abstract

In this chapter, the major properties of mappings and of multilayer perceptron (MLP) neural networks (NNs) are formulated and discussed. Several examples of real-life problems (prediction of time series, interpolation of lookup tables, satellite retrievals, and fast emulation of model physics) that can be treated as complex, nonlinear, multidimensional mappings are introduced. The power and flexibility of the NN emulation technique, as well as its limitations, are discussed, and it is shown how various methods can be designed to bypass or reduce some of these limitations. The chapter contains an extensive list of references that gives the interested reader extended background and further detail on each examined topic. It can be used as a textbook and as introductory reading for students and for beginning and advanced investigators interested in learning how to apply the NN technique to emulate complex, nonlinear, multidimensional mappings in different fields of science.
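
As a concrete illustration of the mapping-emulation idea described above, the following minimal sketch (not taken from the chapter; the toy target function, network size, and all variable names are illustrative assumptions) trains a one-hidden-layer MLP with tanh neurons by plain gradient descent to emulate a simple nonlinear two-dimensional mapping, using only NumPy.

# Minimal sketch (illustrative only): a one-hidden-layer MLP emulating a
# nonlinear "target mapping" y = f(x1, x2). The toy target function stands in
# for an expensive physical parameterization; all names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def target_mapping(x):
    # Toy nonlinear mapping R^2 -> R^1 to be emulated.
    return np.sin(x[:, 0]) * np.cos(x[:, 1]) + 0.1 * x[:, 0] * x[:, 1]

# Training set: inputs sampled over the mapping's domain, outputs from the mapping.
X = rng.uniform(-2.0, 2.0, size=(2000, 2))
y = target_mapping(X)[:, None]

n_in, n_hidden, n_out = 2, 20, 1
W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))  # hidden -> output weights
b2 = np.zeros(n_out)

lr = 0.05
for epoch in range(2000):
    # Forward pass: hidden = tanh(X W1 + b1), y_hat = hidden W2 + b2
    H = np.tanh(X @ W1 + b1)
    y_hat = H @ W2 + b2
    err = y_hat - y

    # Backpropagation of the mean-squared error.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1.0 - H ** 2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# The trained MLP now serves as a fast emulation of the target mapping.
X_test = rng.uniform(-2.0, 2.0, size=(500, 2))
emulated = np.tanh(X_test @ W1 + b1) @ W2 + b2
rmse = np.sqrt(np.mean((emulated[:, 0] - target_mapping(X_test)) ** 2))
print(f"emulation RMSE on held-out points: {rmse:.3f}")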

Keywords

Hidden Neuron, Ensemble Member, Target Mapping, Lookup Table, Stochastic Mapping

References

  1. Aires F, Schmitt M, Chedin A, Scott N (1999) The “Weight Smoothing” regularization of MLP for Jacobian stabilization. IEEE Trans Neural Netw 10:1502–1510
  2. Aires F, Prigent C, Rossow WB (2004a) Neural network uncertainty assessment using Bayesian statistics: a remote sensing application. Neural Comput 16:2415–2458
  3. Aires F, Prigent C, Rossow WB (2004b) Neural network uncertainty assessment using Bayesian statistics with application to remote sensing: 3. Network Jacobians. J Geophys Res. doi:10.1029/2003JD004175
  4. Attali J-G, Pagès G (1997) Approximations of functions by a multilayer perceptron: a new approach. Neural Netw 6:1069–1081
  5. Barron AR (1993) Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans Inform Theory 39:930–945
  6. Belochitski AP, Binev P, DeVore R, Fox-Rabinovitz M, Krasnopolsky V, Lamby P (2011) Tree approximation of the long wave radiation parameterization in the NCAR CAM global climate model. J Comput Appl Math 236:447–460
  7. Bishop CM (1995) Neural networks for pattern recognition. Oxford University Press, Oxford
  8. Bishop CM (2006) Pattern recognition and machine learning. Springer, New York
  9. Bollivier M, Eifler W, Thiria S (2000) Sea surface temperature forecasts using on-line local learning algorithm in upwelling regions. Neurocomputing 30:59–63
  10. Cardaliaguet P, Euvrard G (1992) Approximation of a function and its derivatives with a neural network. Neural Netw 5:207–220
  11. Chen T, Chen H (1995a) Approximation capability to functions of several variables, nonlinear functionals and operators by radial basis function neural networks. IEEE Trans Neural Netw 6:904–910
  12. Chen T, Chen H (1995b) Universal approximation to nonlinear operators by neural networks with arbitrary activation function and its application to dynamical systems. IEEE Trans Neural Netw 6:911–917
  13. Chen AM, Lu H, Hecht-Nielsen R (1993) On the geometry of feedforward neural network error surfaces. Neural Comput 5:910–927
  14. Cheng B, Titterington DM (1994) Neural networks: a review from a statistical perspective. Stat Sci 9:2–54
  15. Cherkassky V, Mulier F (2007) Learning from data, 2nd edn. Wiley, Hoboken
  16. Chevallier F, Mahfouf J-F (2001) Evaluation of the Jacobians of infrared radiation models for variational data assimilation. J Appl Meteorol 40:1445–1461
  17. Chevallier F, Morcrette J-J, Chéruy F, Scott NA (2000) Use of a neural-network-based longwave radiative transfer scheme in the ECMWF atmospheric model. Q J Roy Meteor Soc 126:761–776
  18. Cilliers P (2000) What can we learn from a theory of complexity? Emergence 2:23–33. doi:10.1207/S15327000EM0201_03
  19. Cybenko G (1989) Approximation by superpositions of a sigmoidal function. Math Control Signals Syst 2:303–314
  20. DeVore RA (1998) Nonlinear approximation. Acta Numerica 8:51–150
  21. Elsner JB, Tsonis AA (1992) Nonlinear prediction, chaos, and noise. Bull Am Meteorol Soc 73:49–60
  22. Funahashi K (1989) On the approximate realization of continuous mappings by neural networks. Neural Netw 2:183–192
  23. Gell-Mann M, Lloyd S (1996) Information measures, effective complexity, and total information. Complexity 2:44–52
  24. Hansen LK, Salamon P (1990) Neural network ensembles. IEEE Trans Pattern Anal 12:993–1001
  25. Hashem S (1997) Optimal linear combinations of neural networks. Neural Netw 10:599–614
  26. Haykin S (2008) Neural networks and learning machines. Pearson, New York
  27. Hornik K (1991) Approximation capabilities of multilayer feedforward networks. Neural Netw 4:251–257
  28. Hornik K, Stinchcombe M, White H (1990) Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Netw 3:551–560
  29. Hsieh WW (2001) Nonlinear principal component analysis by neural networks. Tellus 53A:599–615
  30. Hsieh WW (2004) Nonlinear multivariate and time series analysis by neural network methods. Rev Geophys. doi:10.1029/2002RG000112
  31. Hsieh WW (2009) Machine learning methods in the environmental sciences. Cambridge University Press, Cambridge
  32. Kon M, Plaskota L (2001) Complexity of neural network approximation with limited information: a worst case approach. J Complex 17:345–365
  33. Krasnopolsky VM (2007) Reducing uncertainties in neural network Jacobians and improving accuracy of neural network emulations with NN ensemble approaches. Neural Netw 20:454–461
  34. Krasnopolsky VM, Fox-Rabinovitz MS (2006) Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction. Neural Netw 19:122–134
  35. Krasnopolsky VM, Kukulin VI (1977) A stochastic variational method for the few-body systems. J Phys G Nucl Phys 3:795–807
  36. Krasnopolsky VM, Gemmill WH, Breaker LC (1999) A multiparameter empirical ocean algorithm for SSM/I retrievals. Can J Remote Sens 25:486–503
  37. Krasnopolsky VM, Gemmill WH, Breaker LC (2000) A neural network multiparameter algorithm for SSM/I ocean retrievals: comparisons and validations. Remote Sens Environ 73:133–142
  38. Krasnopolsky VM, Chalikov DV, Tolman HL (2002) A neural network technique to improve computational efficiency of numerical oceanic models. Ocean Model 4:363–383
  39. Krasnopolsky VM, Lord SJ, Moorthi S, Spindler T (2009) How to deal with inhomogeneous outputs and high dimensionality of neural network emulations of model physics in numerical climate and weather prediction models. In: Proceedings of the international joint conference on neural networks, Atlanta, Georgia, USA, 14–19 June, pp 1668–1673
  40. Lee JW, Oh J-H (1997) Hybrid learning of mapping and its Jacobian in multilayer neural networks. Neural Comput 9:937–958
  41. Liano K (1996) Robust error measure for supervised neural network learning with outliers. IEEE Trans Neural Netw 7:246–250
  42. Luengo J, Garcia S, Herrera F (2010) A study on the use of imputation methods for experimentation with radial basis function network classifiers handling missing attribute values: the good synergy between RBFNs and the event covering method. Neural Netw 23:406–418
  43. Maas O, Boulanger J-P, Thiria S (2000) Use of neural networks for predictions using time series: illustration with the El Niño-Southern Oscillation phenomenon. Neurocomputing 30:53–58
  44. MacKay DJC (1992) A practical Bayesian framework for back-propagation networks. Neural Comput 4:448–472
  45. Maclin R, Shavlik J (1995) Combining the predictions of multiple classifiers: using competitive learning to initialize neural networks. In: Proceedings of the eleventh international conference on artificial intelligence, Detroit, MI, pp 775–780
  46. McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 5:115–137
  47. Nabney IT (2002) Netlab: algorithms for pattern recognition. Springer, New York
  48. Naftaly U, Intrator N, Horn D (1997) Optimal ensemble averaging of neural networks. Comput Neural Syst 8:283–294
  49. Neal RM (1996) Bayesian learning for neural networks. Springer, New York
  50. Nguyen D, Widrow B (1990) Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. In: Proceedings of the international joint conference on neural networks, vol 3, San Diego, CA, USA, 17–21 June, pp 21–26
  51. Nilsson NJ (1965) Learning machines: foundations of trainable pattern-classifying systems. McGraw Hill, New York
  52. Opitz D, Maclin R (1999) Popular ensemble methods: an empirical study. J Artif Intell Res 11:169–198
  53. Reitsma F (2001) Spatial complexity. Master’s thesis, Auckland University, New Zealand
  54. Richman MB, Trafalis TB, Adrianto I (2009) Missing data imputation through machine learning algorithms. In: Haupt SE, Pasini A, Marzban C (eds) Artificial intelligence methods in the environmental sciences. Springer, New York
  55. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL, the PDP Research Group (eds) Parallel distributed processing, vol 1. MIT Press, Cambridge, MA
  56. Selfridge OG (1958) Pandemonium: a paradigm for learning. In: Mechanization of thought processes: proceedings of a symposium held at the National Physical Laboratory. HMSO, London, pp 513–526
  57. Sharkey AJC (1996) On combining artificial neural nets. Connect Sci 8:299–313
  58. Tang Y, Hsieh WW (2003) ENSO simulation and prediction in a hybrid coupled model with data assimilation. J Meteorol Soc Jpn 81:1–19
  59. Vann L, Hu Y (2002) A neural network inversion system for atmospheric remote-sensing measurements. In: Proceedings of the IEEE instrumentation and measurement technology conference, vol 2, pp 1613–1615. doi:10.1109/IMTC.2002.1007201
  60. Vapnik VN (1995) The nature of statistical learning theory. Springer, New York
  61. Vapnik VN, Kotz S (2006) Estimation of dependences based on empirical data (information science and statistics). Springer, New York
  62. Weigend AS, Gershenfeld NA (1994) The future of time series: learning and understanding. In: Weigend AS, Gershenfeld NA (eds) Time series prediction: forecasting the future and understanding the past. Addison-Wesley, Reading, pp 1–70
  63. Werbos P (1974) Beyond regression: new tools for prediction and analysis in the behavioral sciences. Ph.D. dissertation, Committee on Applied Mathematics, Harvard University, Cambridge, MA. Reprinted in Werbos P (1994) The roots of backpropagation. Wiley, Hoboken
  64. Werbos P (1982) Applications of advances in nonlinear sensitivity analysis. In: Drenick R, Kozin F (eds) System modeling and optimization: proceedings of the 10th IFIP conference, 1981. Springer, New York. Reprinted in Werbos P (1994) The roots of backpropagation. Wiley, Hoboken
  65. Wessels LFA, Barnard E (1992) Avoiding false local minima by proper initialization of connections. IEEE Trans Neural Netw 3:899–905
  66. Zorita E, von Storch H (1999) A survey of statistical downscaling techniques. J Climate 12:2474–2489

Copyright information

© Springer Science+Business Media Dordrecht (outside the USA) 2013

Authors and Affiliations

  • Vladimir M. Krasnopolsky
  1. NOAA Center for Weather and Climate Prediction, Camp Springs, USA
