
An Overview of Gradient-Enhanced Metamodels with Applications

  • Luc Laurent
  • Rodolphe Le Riche
  • Bruno Soulier
  • Pierre-Alain Boucard
Original Paper

Abstract

Metamodeling, the science of modeling functions observed at a finite number of points, benefits from all the auxiliary information it can account for. Function gradients are common auxiliary information and are particularly useful for predicting functions with locally changing behaviors. This article reviews the main metamodels that use function gradients in addition to function values. The goal is to give the reader both an overview of the principles involved in gradient-enhanced metamodels and insightful formulations. The following metamodels have gradient-enhanced versions in the literature and are reviewed here: classical, weighted and moving least squares, Shepard weighting functions, and the kernel-based methods, namely radial basis functions, kriging and support vector machines. The methods are set in a common framework of linear combinations between a priori chosen functions and coefficients that depend on the observations. The characteristics common to all kernel-based approaches are underlined. A new \(\nu \)-GSVR metamodel that uses gradients is introduced. Numerical comparisons of the metamodels are carried out on analytical test functions. The experiments are replicable, as they are performed with an open-source toolbox. The results indicate a trade-off between the lower computing time of least squares methods and the greater versatility of kernel-based approaches.
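To make this common framework concrete, the minimal sketch below builds a gradient-enhanced radial basis function (GRBF) interpolant with a Gaussian kernel: the prediction is a linear combination of the kernel and of its first derivatives centered at the sample points, and the coefficients are obtained by enforcing the observed values and gradients (Hermite-Birkhoff interpolation). The function names, the kernel choice and the fixed length-scale are illustrative assumptions; this is a sketch, not the GRENAT implementation nor the exact formulations reviewed in the article.

```python
import numpy as np

def gaussian_kernel_blocks(X, Y, length):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 l^2)) and its derivatives.

    Returns K (values), dK_dx (derivative w.r.t. the first argument),
    dK_dy (derivative w.r.t. the second argument) and d2K (cross derivatives).
    """
    diff = X[:, None, :] - Y[None, :, :]                      # r = x - y, shape (n, m, d)
    K = np.exp(-np.sum(diff**2, axis=2) / (2.0 * length**2))  # shape (n, m)
    dK_dy = diff / length**2 * K[:, :, None]                  # dk/dy_e = (r_e / l^2) k
    dK_dx = -dK_dy                                            # dk/dx_d = -(r_d / l^2) k
    d = X.shape[1]
    d2K = (np.eye(d)[None, None] / length**2
           - diff[:, :, :, None] * diff[:, :, None, :] / length**4) * K[:, :, None, None]
    return K, dK_dx, dK_dy, d2K

def fit_grbf(X, f, G, length=0.4):
    """Solve the Hermite-Birkhoff system for values f (n,) and gradients G (n, d)."""
    n, d = X.shape
    K, dK_dx, dK_dy, d2K = gaussian_kernel_blocks(X, X, length)
    top = np.hstack([K, dK_dy.reshape(n, n * d)])
    bottom = np.hstack([dK_dx.transpose(0, 2, 1).reshape(n * d, n),
                        d2K.transpose(0, 2, 1, 3).reshape(n * d, n * d)])
    A = np.vstack([top, bottom])                              # symmetric collocation matrix
    return np.linalg.solve(A, np.concatenate([f, G.ravel()]))

def predict_grbf(Xnew, X, coeffs, length=0.4):
    """Evaluate the GRBF surrogate at the points Xnew."""
    n, d = X.shape
    K, _, dK_dy, _ = gaussian_kernel_blocks(Xnew, X, length)
    return np.hstack([K, dK_dy.reshape(len(Xnew), n * d)]) @ coeffs

# Toy usage: approximate f(x) = sin(3 x1) + x2^2 from 5 samples with exact gradients.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 2))
f = np.sin(3 * X[:, 0]) + X[:, 1]**2
G = np.column_stack([3 * np.cos(3 * X[:, 0]), 2 * X[:, 1]])
coeffs = fit_grbf(X, f, G)
print(predict_grbf(np.array([[0.5, 0.5]]), X, coeffs))
```

The kriging, support vector regression and least squares variants reviewed in the article follow the same pattern, with different a priori functions and different rules for computing the coefficients from the observed values and gradients.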

Abbreviations

\(\nu \)-GSVR

\(\nu \)-Version of the gradient-enhanced support vector regression (surrogate model)

\(\varepsilon \)-SVR

\(\varepsilon \)-Version of the support vector regression (surrogate model)

\(\nu \)-SVR

\(\nu \)-Version of the support vector regression (surrogate model)

\(\varepsilon _k\)-GSVR

\(\varepsilon _k\)-Version of the gradient-enhanced support vector regression (surrogate model)

BLUP

Best linear unbiased predictor

EGO

Efficient global optimization [33]

GBK

Gradient-based kriging (surrogate model)

GEK

Gradient-enhanced kriging (surrogate model)

GEUK

Gradient-enhanced universal kriging (surrogate model)

GKRG

Gradient-enhanced cokriging (surrogate model, same as GBK, GEK and GEUK)

GLS

Generalized least squares regression (surrogate model)

GradLS

Gradient-enhanced least squares regression (surrogate model)

GRBF

Gradient-enhanced radial basis function (surrogate model)

GRENAT

Gradient-enhanced approximation toolbox (Matlab/Octave toolbox [12])

GSVR

Gradient-enhanced support vector regression (surrogate model)

IDW

Inverse distance weighting method, also called Shepard weighting method (surrogate model)

IHS

Improved hypercube sampling (sampling technique [103])

InOK

Indirect gradient-enhanced ordinary kriging (surrogate model)

InRBF

Indirect gradient-enhanced radial basis function (surrogate model)

KRG

Kriging (surrogate model)

LOO

Leave-one-out

LS

Least squares regression (surrogate model)

MLS

Moving least squares regression (surrogate model)

MSE

Mean square error (quality criterion)

MultiDOE

Multiple design of experiments (Matlab/Octave toolbox [133])

OK

Ordinary kriging (surrogate model)

OCK

Gradient-enhanced ordinary cokriging (surrogate model)

RBF

Radial basis function (surrogate model)

RSM

Response surface methodology

SBAO

Surrogate-based analysis and optimization [18]

SVM

Support vector machine

SVR

Support vector regression (surrogate model)

WLS

Weighted least squares regression (surrogate model)

MARS

Multivariate adaptive regression splines (surrogate model)

ANN

Artificial neural network (surrogate model)

LHS

Latin hypercube sampling (sampling technique)

OA

Orthogonal array (sampling technique)

UD

Uniform design (sampling technique)

RAAE

Relative average absolute error (quality criterion; a short computational sketch of RAAE, RMSE and RMAE follows this list)

RMSE

Root mean square error (quality criterion)

RMAE

Relative maximal absolute error (quality criterion)

LOOCV

Leave-one-out cross-validation
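
The quality criteria abbreviated above (RMSE, RAAE and RMAE) are scalar accuracy measures evaluated on a set of validation points. As a hedged illustration, the sketch below uses the definitions common in the metamodeling literature (e.g. Jin et al. [16]), normalizing RAAE and RMAE by the standard deviation of the reference responses; the function name and the exact normalization are assumptions, and the article's conventions may differ.

```python
import numpy as np

def quality_criteria(y_true, y_pred):
    """RMSE, RAAE and RMAE between reference responses and metamodel predictions."""
    y = np.asarray(y_true, dtype=float)
    err = y - np.asarray(y_pred, dtype=float)
    std = y.std()
    rmse = np.sqrt(np.mean(err**2))       # root mean square error
    raae = np.mean(np.abs(err)) / std     # relative average absolute error
    rmae = np.max(np.abs(err)) / std      # relative maximal absolute error
    return rmse, raae, rmae
```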

Notes

Compliance with Ethical Standards

Conflict of interest

The authors declare that they have no conflict of interest.

References

1. Hughes GF (1968) On the mean accuracy of statistical pattern recognizers. IEEE Trans Inf Theory 14(1):55–63
2. Lions J-L (1971) Optimal control of systems governed by partial differential equations. Springer, New York
3. Cacuci DG (1981) Sensitivity theory for nonlinear systems. I. Nonlinear functional analysis approach. J Math Phys 22(12):2794–2802
4. Jameson A (1988) Aerodynamic design via control theory. J Sci Comput 3(3):233–260
5. Beda L, Korolev L, Sukkikh N, Frolova T (1959) Programs for automatic differentiation for the machine BESM. Technical report, Institute for Precise Mechanics and Computation Techniques, Academy of Science, Moscow
6. Wengert RE (1964) A simple automatic derivative evaluation program. Commun ACM 7:463–464
7. Griewank A (1989) On automatic differentiation. In: Mathematical programming: recent developments and applications. Kluwer, Amsterdam, pp 83–108
8. Paoletti V, Fedi M, Italiano F, Florio G, Ialongo S (2016) Inversion of gravity gradient tensor data: does it provide better resolution? Geophys J Int 205(1):192–202
9. Qin P, Huang D, Yuan Y, Geng M, Liu J (2016) Integrated gravity and gravity gradient 3D inversion using the non-linear conjugate gradient. J Appl Geophys 126:52–73
10. Lorentz R (2000) Multivariate Hermite interpolation by algebraic polynomials: a survey. J Comput Appl Math 122(1–2):167–201. Numerical analysis in the 20th century, vol II: interpolation and extrapolation
11. Lai M-J (2007) Multivariate splines for data fitting and approximation. In: Approximation theory XII: San Antonio, pp 210–228
12. Laurent L (2016) GRENAT (Matlab/Octave toolbox). https://bitbucket.org/luclaurent/grenat
13. Box GEP, Wilson K (1951) On the experimental attainment of optimum conditions. J R Stat Soc B 13(1):1–45
14. Simpson TW, Mistree F, Korte JJ, Mauery TM (1998) Comparison of response surface and kriging models for multidisciplinary design optimization. In: 7th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, no AIAA-98-4755, American Institute of Aeronautics and Astronautics
15. Giunta AA, Watson LT (1998) A comparison of approximation modeling techniques: polynomial versus interpolating models. In: 7th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, no AIAA-98-4758, American Institute of Aeronautics and Astronautics
16. Jin R, Chen W, Simpson T (2000) Comparative studies of metamodeling techniques under multiple modeling criteria. In: 8th symposium on multidisciplinary analysis and optimization, no AIAA-2000-4801, American Institute of Aeronautics and Astronautics
17. Varadarajan S, Chen W, Pelka CJ (2000) Robust concept exploration of propulsion systems with enhanced model approximation capabilities. Eng Optim 32(3):309–334
18. Queipo NV, Haftka RT, Shyy W, Goel T, Vaidyanathan R, Tucker PK (2005) Surrogate-based analysis and optimization. Prog Aerosp Sci 41(1):1–28
19. Peter J, Marcelet M, Burguburu S, Pediroda V (2007) Comparison of surrogate models for the actual global optimization of a 2D turbomachinery flow. In: Proceedings of the 7th WSEAS international conference on simulation, modelling and optimization, World Scientific and Engineering Academy and Society (WSEAS), pp 46–51
20. Marcelet M (2008) Etude et mise en oeuvre d'une méthode d'optimisation de forme couplant simulation numérique en aérodynamique et en calcul de structure. PhD thesis, École Nationale Supérieure d'Arts et Métiers
21. Forrester AIJ, Sóbester A, Keane AJ (2008) Engineering design via surrogate modelling: a practical guide, 1st edn. Wiley, Chichester
22. Kim B-S, Lee Y-B, Choi D-H (2009) Comparison study on the accuracy of metamodeling technique for non-convex functions. J Mech Sci Technol 23:1175–1181
23. Zhao D, Xue D (2010) A comparative study of metamodeling methods considering sample quality merits. Struct Multidiscip Optim 42(6):923–938
24. McKay MD, Beckman RJ, Conover WJ (1979) Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21(2):239–245
25. Santner TJ, Williams BJ, Notz WI (2003) The design and analysis of computer experiments. Springer, New York
26. Fang KT, Li R, Sudjianto A (2005) Design and modeling for computer experiments. Chapman & Hall, Boca Raton
27. Franco J (2008) Planification d'expériences numériques en phase exploratoire pour la simulation des phénomènes complexes. PhD thesis, École Nationale Supérieure des Mines de Saint-Étienne
28. Schonlau M (1997) Computer experiments and global optimization. PhD thesis, University of Waterloo
29. Sóbester A, Leary SJ, Keane AJ (2005) On the design of optimization strategies based on global response surface approximation models. J Glob Optim 33(1):31–59
30. Alexandrov NM, Dennis JE, Lewis RM, Torczon V (1998) A trust-region framework for managing the use of approximation models in optimization. Struct Optim 15(1):16–23
31. Sasena MJ (2002) Flexibility and efficiency enhancements for constrained global design optimization with kriging approximations. PhD thesis, University of Michigan
32. Watson AG, Barnes RJ (1995) Infill sampling criteria to locate extremes. Math Geol 27(5):589–608
33. Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. J Glob Optim 13(4):455–492
34. Matheron G (1970) La théorie des variables régionalisées et ses applications. Les Cahiers du Centre de Morphologie Mathématique de Fontainebleau, École Nationale des Mines de Paris, Fascicule 5
35. Wackernagel H (1995) Multivariate geostatistics: an introduction with applications. Springer, Berlin
36. Isaaks EH, Srivastava RM (1989) An introduction to applied geostatistics. Oxford University Press, New York
37. Hoef JM, Cressie NAC (1993) Multivariable spatial prediction. Math Geol 25(2):219–240
38. Goovaerts P (1997) Geostatistics for natural resources evaluation. Oxford University Press, New York
39. Morris MD, Mitchell TJ, Ylvisaker D (1993) Bayesian design and analysis of computer experiments: use of derivatives in surface prediction. Technometrics 35(3):243–255
40. Koehler JR, Owen AB (1996) Computer experiments. Handb Stat 13:261–308
41. Lewis RM (1998) Using sensitivity information in the construction of kriging models for design optimization. In: Proceedings of the 7th AIAA/USAF/NASA/ISSMO multidisciplinary analysis & optimization symposium, Saint Louis, Missouri
42. Arnaud M, Emery X (2000) Estimation et interpolation spatiale. Hermes Science Publications, Paris
43. Chung H-S, Alonso JJ (2002) Using gradients to construct cokriging approximation models for high-dimensional design optimization problems. In: 40th AIAA aerospace sciences meeting & exhibit, no AIAA-2002-0317, American Institute of Aeronautics and Astronautics
44. Chung H-S, Alonso JJ (2002) Design of a low-boom supersonic business jet using cokriging approximation models. In: 9th AIAA/ISSMO symposium on multidisciplinary analysis and optimization, no AIAA-2002-5598, American Institute of Aeronautics and Astronautics
45. Leary SJ, Bhaskar A, Keane AJ (2004) A derivative based surrogate model for approximating and optimizing the output of an expensive computer simulation. J Glob Optim 30(1):39–58
46. Leary SJ, Bhaskar A, Keane AJ (2004) Global approximation and optimization using adjoint computational fluid dynamics codes. AIAA J 42(3):631–641
47. Laurenceau J, Sagaut P (2008) Building efficient response surfaces of aerodynamic functions with kriging and cokriging. AIAA J 46(2):498–507
48. Laurenceau J, Meaux M (2008) Comparison of gradient and response surface based optimization frameworks using adjoint method. In: 49th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 16th AIAA/ASME/AHS adaptive structures conference, 10th AIAA non-deterministic approaches conference, 9th AIAA Gossamer Spacecraft Forum, 4th AIAA multidisciplinary design optimization specialists conference, no AIAA-2008-1889, American Institute of Aeronautics and Astronautics
49. Dwight R, Han Z-H (2009) Efficient uncertainty quantification using gradient-enhanced kriging. In: 50th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, no AIAA-2009-2276, American Institute of Aeronautics and Astronautics
50. Xuan Y, Xiang J, Zhang W, Zhang Y (2009) Gradient-based kriging approximate model and its application research to optimization design. Sci China E 52(4):1117–1124
51. Lockwood BA, Mavriplis DJ (2010) Parameter sensitivity analysis for hypersonic viscous flow using a discrete adjoint approach. AIAA Paper 447
52. March A, Willcox K, Wang Q (2010) Gradient-based multifidelity optimisation for aircraft design using Bayesian model calibration. Aeronaut J 115(1174):729–738
53. Yamazaki W, Rumpfkeil M, Mavriplis D (2010) Design optimization utilizing gradient/Hessian enhanced surrogate model. In: 28th AIAA applied aerodynamics conference, no AIAA-2010-4363, American Institute of Aeronautics and Astronautics
54. Bompard M (2011) Modèles de substitution pour l'optimisation globale de forme en aérodynamique et méthode locale sans paramétrisation. PhD thesis, Université de Nice Sophia Antipolis
55. Rumpfkeil MP, Yamazaki W, Mavriplis DJ (2011) A dynamic sampling method for kriging and cokriging surrogate models. In: 49th AIAA aerospace sciences meeting including the New Horizons Forum and Aerospace Exposition, no AIAA-2011-883, American Institute of Aeronautics and Astronautics
56. Laurent L, Boucard P-A, Soulier B (2012) Gradient-enhanced metamodels and multiparametric strategies for designing structural assemblies. In: Topping BHV (ed) Proceedings of the eleventh international conference on computational structures technology, 4–7 September, paper 230, Civil-Comp Press, Stirlingshire, UK
57. Laurent L (2013) Stratégie multiparamétrique et métamodèles pour l'optimisation multiniveaux de structures. PhD thesis, École Normale Supérieure de Cachan, 61 avenue du Président Wilson, 94230 Cachan
58. Laurent L, Boucard P-A, Soulier B (2013) Combining multiparametric strategy and gradient-based surrogate model for optimizing structure assemblies. In: ISSMO (ed) 10th world congress on structural and multidisciplinary optimization, Orlando, Florida, USA, 19–24 May
59. Laurent L (2014) Global optimisation on assembly problems using gradient-based surrogate model and multiparametric strategy. In: PhD Olympiad, ECCOMAS, 11th world congress on computational mechanics, Barcelona, Spain, 20–25 July 2014
60. Han Z-H, Görtz S, Zimmermann R (2013) Improving variable-fidelity surrogate modeling via gradient-enhanced kriging and a generalized hybrid bridge function. Aerosp Sci Technol 25(1):177–189
61. Zimmermann R (2013) On the maximum likelihood training of gradient-enhanced spatial Gaussian processes. SIAM J Sci Comput 35(6):A2554–A2574
62. Ulaganathan S, Couckuyt I, Dhaene T, Degroote J, Laermans E (2016) Performance study of gradient-enhanced kriging. Eng Comput 32(1):15–34
63. Ulaganathan S, Couckuyt I, Ferranti F, Laermans E, Dhaene T (2015) Performance study of multi-fidelity gradient enhanced kriging. Struct Multidiscip Optim 51(5):1017–1033
64. Zongmin W (1992) Hermite-Birkhoff interpolation of scattered data by radial basis functions. Approx Theory Appl 8(2):1–10
65. Kampolis IC, Karangelos EI, Giannakoglou KC (2004) Gradient-assisted radial basis function networks: theory and applications. Appl Math Model 28(2):197–209
66. Giannakoglou KC, Papadimitriou DI, Kampolis IC (2006) Aerodynamic shape design using evolutionary algorithms and new gradient-assisted metamodels. Comput Methods Appl Mech Eng 195(44–47):6312–6329
67. Ong Y-S, Lum K, Nair PB (2008) Hybrid evolutionary algorithm with Hermite radial basis function interpolants for computationally expensive adjoint solvers. Comput Optim Appl 39(1):97–119
68. Lázaro M, Santamaría I, Pérez-Cruz F, Artés-Rodríguez A (2005) Support vector regression for the simultaneous learning of a multivariate function and its derivatives. Neurocomputing 69(1–3):42–61
69. Jayadeva, Khemchandani R, Chandra S (2006) Regularized least squares twin SVR for the simultaneous learning of a function and its derivative. In: 2006 international joint conference on neural networks (IJCNN '06)
70. Bloch G, Lauer F, Colin G, Chamaillard Y (2008) Support vector regression from simulation data and few experimental samples. Inf Sci 178(20):3813–3827. Special issue on industrial applications of neural networks: 10th engineering applications of neural networks 2007
71. Jayadeva, Khemchandani R, Chandra S (2008) Regularized least squares support vector regression for the simultaneous learning of a function and its derivatives. Inf Sci 178(17):3402–3414
72. Lauer F, Bloch G (2008) Incorporating prior knowledge in support vector regression. Mach Learn 70(1):89–118
73. Khemchandani R, Karpatne A, Chandra S (2013) Twin support vector regression for the simultaneous learning of a function and its derivatives. Int J Mach Learn Cybern 4(1):51–63
74. Renka RJ (1988) Multivariate interpolation of large sets of scattered data. ACM Trans Math Softw 14(2):139–148
75. Lauridsen S, Vitali R, van Keulen F, Haftka RT, Madsen JI (2002) Response surface approximation using gradient information. In: Cheng et al (eds) Proceedings of the 4th world congress on structural and multidisciplinary optimization, Dalian, China, 4–8 June 2001, vol 4, p 5
76. Kim C, Wang S, Choi KK (2005) Efficient response surface modeling by using moving least-squares method and sensitivity. AIAA J 43(11):2404–2411
77. Breitkopf P, Naceur H, Rassineux A, Villon P (2005) Moving least squares response surface approximation: formulation and metal forming applications. Comput Struct 83(17–18):1411–1428. Advances in meshfree methods
78. van Keulen F, Liu B, Haftka RT (2000) Noise and discontinuity issues in response surfaces based on functions and derivatives. In: 41st structures, structural dynamics, and materials conference and exhibit, no AIAA-00-1363, American Institute of Aeronautics and Astronautics
79. Vervenne K, van Keulen F (2002) An alternative approach to response surface building using gradient information. In: 43rd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, no AIAA-2002-1584, American Institute of Aeronautics and Astronautics
80. van Keulen F, Vervenne K (2004) Gradient-enhanced response surface building. Struct Multidiscip Optim 27(5):337–351
81. Liu W (2003) Development of gradient-enhanced kriging approximations for multidisciplinary design optimization. PhD thesis, University of Notre Dame, Indiana
82. Myers RH, Montgomery DC (1995) Response surface methodology: process and product optimization using designed experiments. Wiley, New York
83. Mazja V (1985) Sobolev spaces. Springer, New York
84. Runge C (1901) Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten. Zeitschrift für Mathematik und Physik 46:224–243
85. Haftka RT (1993) Semi-analytical static nonlinear structural sensitivity analysis. AIAA J 31(7):1307–1312
86. Rasmussen CE (2006) Gaussian processes for machine learning. Adaptive computation and machine learning. MIT Press, Cambridge
87. Stander N (2001) The successive response surface method applied to sheet-metal forming. In: Proceedings of the first MIT conference on computational fluid and solid mechanics, 12–15 June 2001, pp 481–485
88. Lancaster P, Salkauskas K (1981) Surfaces generated by moving least squares methods. Math Comput 37(155):141–158
89. Zhou L, Zheng WX (2006) Moving least square Ritz method for vibration analysis of plates. J Sound Vib 290(3–5):968–990
90. Häussler-Combe U, Korn C (1998) An adaptive approach with the element-free-Galerkin method. Comput Methods Appl Mech Eng 162(1–4):203–222
91. Shepard D (1968) A two-dimensional interpolation function for irregularly-spaced data. In: Proceedings of the 1968 23rd ACM national conference, ACM, pp 517–524
92. Mercer J (1909) Functions of positive and negative type, and their connection with the theory of integral equations. Philos Trans R Soc Lond A 209(441–458):415–446
93. Genton MG (2001) Classes of kernels for machine learning: a statistics perspective. J Mach Learn Res 2:299–312
94. Laurent L, Boucard P-A, Soulier B (2013) Generation of a cokriging metamodel using a multiparametric strategy. Comput Mech 51:151–169
95. Laurent L, Boucard P-A, Soulier B (2013) A dedicated multiparametric strategy for the fast construction of a cokriging metamodel. Comput Struct 124:61–73. Special issue: KRETA
96. Stein ML (1999) Interpolation of spatial data: some theory for kriging. Springer, New York
97. Matérn B (1960) Spatial variation. Lecture Notes in Statistics, vol 36. Springer, Berlin
98. Abramowitz M, Stegun I (1964) Handbook of mathematical functions with formulas, graphs, and mathematical tables. Applied Mathematics Series, vol 55 (1972 printing). U.S. Government Printing Office
99. Lockwood BA, Anitescu M (2012) Gradient-enhanced universal kriging for uncertainty propagation. Nucl Sci Eng 170:168–195
100. Hardy RL (1971) Multiquadric equations of topography and other irregular surfaces. J Geophys Res 76(8):1905–1915
101. Powell MJ (1981) Approximation theory and methods. Cambridge University Press, Cambridge
102. Broomhead D, Lowe D (1988) Multivariable functional interpolation and adaptive networks. Complex Syst 2:321–355
103. Beachkofski B, Grandhi R (2002) Improved distributed hypercube sampling. In: 43rd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, no AIAA-2002-1274
104. Schaback R (2007) A practical guide to radial basis functions. Book for teaching (preprint)
105. Stone M (1974) Cross-validatory choice and assessment of statistical predictions. J R Stat Soc B 36:111–147
106. Geisser S (1975) The predictive sample reuse method with applications. J Am Stat Assoc 70(350):320–328
107. Bompard M, Peter J, Desideri J et al (2010) Surrogate models based on function and derivative values for aerodynamic global optimization. In: Proceedings of the V European conference on computational fluid dynamics, ECCOMAS CFD 2010
108. Rippa S (1999) An algorithm for selecting a good value for the parameter c in radial basis function interpolation. Adv Comput Math 11:193–210
109. Soulier B, Courrier N, Laurent L, Boucard P-A (2015) Métamodèles à gradients et multiniveaux de fidélité pour l'optimisation d'assemblages. In: 12ème Colloque National en Calcul des Structures, Giens, France, 12–22 May, CSMA
110. Ulaganathan S, Couckuyt I, Dhaene T, Degroote J, Laermans E (2016) High dimensional kriging metamodelling utilising gradient information. Appl Math Model 40:5256–5270
111. Chauvet P (1999) Aide-mémoire de géostatistique linéaire. Cahiers de Géostatistique, École Nationale Supérieure des Mines de Paris, Centre de Géostatistique, Fontainebleau, Fascicule 2
112. Mardia K, Marshall R (1984) Maximum likelihood estimation of models for residual covariance in spatial regression. Biometrika 71(1):135
113. Warnes J, Ripley B (1987) Problems with likelihood estimation of covariance functions of spatial Gaussian processes. Biometrika 74(3):640
114. Toal DJ, Bressloff N, Keane A, Holden C (2011) The development of a hybridized particle swarm for kriging hyperparameter tuning. Eng Optim 43:675–699
115. Toal DJJ, Forrester AIJ, Bressloff NW, Keane AJ, Holden C (2009) An adjoint for likelihood maximization. Proc R Soc Lond A 465(2111):3267–3287
116. Laurent L (2013) Multilevel optimisation of structures using a multiparametric strategy and metamodels. PhD thesis, École Normale Supérieure de Cachan, 61 avenue du Président Wilson, 94230 Cachan
117. Vapnik VN (1995) The nature of statistical learning theory. Springer, New York
118. Vapnik VN, Chervonenkis AY (1974) Theory of pattern recognition. Nauka, Moscow (in Russian)
119. Smola AJ, Murata N, Schölkopf B, Müller K-R (1998) Asymptotically optimal choice of ε-loss for support vector machines. In: Niklasson LF (ed) Proceedings of the 8th international conference on artificial neural networks (ICANN 98), Skövde, Sweden, 2–4 September 1998, vol 1. Springer, pp 105–110
120. Smola AJ, Schölkopf B (2004) A tutorial on support vector regression. Stat Comput 14(3):199–222
121. Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, New York
122. Platt J (1998) Sequential minimal optimization: a fast algorithm for training support vector machines. Technical Report MSR-TR-98-14, Microsoft Research
123. Schölkopf B, Bartlett PL, Smola AJ, Williamson R (1999) Shrinking the tube: a new support vector regression algorithm. In: Advances in neural information processing systems, vol 11. MIT Press, Cambridge, MA, pp 330–336
124. Schölkopf B, Smola AJ, Williamson RC, Bartlett PL (2000) New support vector algorithms. Neural Comput 12:1207–1245
125. Chang C-C, Lin C-J (2002) Training ν-support vector regression: theory and algorithms. Neural Comput 14:1959–1977
126. Cherkassky V, Ma Y (2002) Selection of meta-parameters for support vector regression. In: Artificial neural networks—ICANN 2002, international conference, Madrid, Spain, August 28–30, 2002, proceedings. Springer, Berlin, Heidelberg, pp 687–693
127. Vapnik VN (1998) Statistical learning theory. Wiley, New York
128. Vapnik VN, Chapelle O (2000) Bounds on error expectation for support vector machines. Neural Comput 12:2013–2036
129. Chapelle O, Vapnik VN, Bousquet O, Mukherjee S (2002) Choosing multiple parameters for support vector machines. Mach Learn 46(1–3):131–159
130. Chang M-W, Lin C-J (2005) Leave-one-out bounds for support vector regression model selection. Neural Comput 17:1188–1222
131. Laurent L, Boucard P-A, Soulier B (2011) Fast multilevel optimization using a multiparametric strategy and a cokriging metamodel. In: Tsompanakis Y, Topping BHV (eds) Proceedings of the second international conference on soft computing technology in civil, structural and environmental engineering, 6–9 September, paper 50, Civil-Comp Press, Stirlingshire, UK
132. Laurent L, Soulier B, Le Riche R, Boucard P-A (2016) On the use of gradient-enhanced metamodels for global approximation and global optimization. In: VII European congress on computational methods in applied sciences and engineering, Hersonissos, Crete, Greece, June 5–10
133. Laurent L (2016) MultiDOE (Matlab/Octave toolbox). https://bitbucket.org/luclaurent/multidoe
134. Couckuyt I, Dhaene T, Demeester P (2014) ooDACE toolbox: a flexible object-oriented kriging implementation. J Mach Learn Res 15(1):3183–3186
135. Ulaganathan S, Couckuyt I, Deschrijver D, Laermans E, Dhaene T (2015) A Matlab toolbox for kriging metamodelling. Procedia Comput Sci 51:2708–2713
136. Forrester AIJ, Sóbester A, Keane AJ (2006) Optimization with missing data. Proc R Soc A 462(2067):935–945
137. Fritz J, Neuweiler I, Nowak W (2009) Application of FFT-based algorithms for large-scale universal kriging problems. Math Geosci 41(5):509–533
138. Hensman J, Durrande N, Solin A (2016) Variational Fourier features for Gaussian processes. arXiv e-prints
139. Bouhlel MA, Bartoli N, Otsmane A, Morlier J (2016) Improving kriging surrogates of high-dimensional design models by partial least squares dimension reduction. Struct Multidiscip Optim 53(5):935–952
140. Bouhlel MA, Bartoli N, Otsmane A, Morlier J (2016) An improved approach for estimating the hyperparameters of the kriging model for high-dimensional problems through the partial least squares method. Math Probl Eng 2016:11

Copyright information

© CIMNE, Barcelona, Spain 2017

Authors and Affiliations

1. Laboratoire de Mécanique des Structures et des Systèmes Couplés, Conservatoire National des Arts et Métiers, Paris, France
2. École Nationale Supérieure des Mines de Saint-Étienne, Saint-Étienne, France
3. LMT Cachan (ENS Cachan/CNRS/Université Paris-Saclay), Cachan, France
