Four algorithms to construct a sparse kriging kernel for dimensionality reduction
In the context of computer experiments, metamodels are widely used to represent the output of computer codes. Among these models, Gaussian process regression (kriging) is very efficient; see e.g. Snelson (2008). In high dimension, that is, with a large number of input variables but few observations, parameter estimation with a classical anisotropic kriging model can be completely inaccurate: because there are as many range parameters as input variables, the optimization space becomes too large compared to the available information. One way to overcome this drawback is to use an isotropic kernel, which depends on a single parameter; however, this model is too restrictive. The aim of this paper is twofold. Our first objective is to propose a smooth kernel with as few parameters as warranted. We introduce a kernel that is a tensor product of a few isotropic kernels, each built on a well-chosen subgroup of variables. The main difficulty is to find the number and the composition of the groups. Our second objective is to propose algorithmic strategies to overcome this difficulty. Four forward strategies are proposed. They all start with the simplest isotropic kernel and stop when the best model according to the BIC criterion is found. All of them show very good accuracy on simulation test cases, but one of them is more efficient. Tested on a real data set, our kernel shows very good prediction results.
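The kernel described above can be sketched as a product of isotropic kernels, one per subgroup of input variables. The sketch below is a minimal illustration only, assuming a squared-exponential form for each isotropic factor; the function name, the parameterization, and the example groups are hypothetical and not taken from the paper.

```python
import numpy as np

def grouped_isotropic_kernel(x, y, groups, thetas):
    """Tensor product of isotropic squared-exponential kernels,
    one per subgroup of input variables (illustrative sketch only).

    groups -- list of index lists partitioning the input variables
    thetas -- one range parameter per group (instead of one per variable)
    """
    k = 1.0
    for g, theta in zip(groups, thetas):
        # Isotropic within the group: a single squared distance, one range parameter
        d2 = np.sum((x[g] - y[g]) ** 2)
        k *= np.exp(-d2 / (2.0 * theta ** 2))
    return k

# Example: 5 input variables split into two groups, so only 2 parameters
# need to be estimated rather than 5 as in fully anisotropic kriging.
groups = [[0, 1, 2], [3, 4]]
thetas = [1.0, 0.5]
x = np.zeros(5)
y = np.ones(5)
print(grouped_isotropic_kernel(x, y, groups, thetas))  # prints exp(-5.5) ≈ 0.00408677
```

With one group containing all variables this reduces to the isotropic kernel, and with one group per variable it recovers the anisotropic one; the forward strategies in the paper search between these two extremes, scoring candidate groupings with the BIC criterion.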
Keywords: Metamodel · Isotropic · Anisotropic · Clustering
This work benefited from the financial support of the French ANR project “PEPITO” (ANR-14-CE23-0011).
- Durrande N (2001) Étude de classes de noyaux adaptées à la simplification et à l'interprétation des modèles d'approximation. Une approche fonctionnelle et probabiliste. PhD thesis, École Nationale Supérieure des Mines de Saint-Étienne
- Ginsbourger D, Roustant O, Schuhmacher D, Durrande N, Lenz N (2016) On ANOVA decompositions of kernels and Gaussian random field paths. In: Monte Carlo and quasi-Monte Carlo methods, volume 163 of Springer Proc. Math. Stat., Springer, Cham, pp 315–330
- Snelson EL (2008) Flexible and efficient Gaussian process models for machine learning. ProQuest LLC, Ann Arbor, MI. Thesis (Ph.D.)–University of London, University College London, London
- Stitson MO, Gammerman A, Vapnik V, Vovk V, Watkins C, Weston J (1999) Support vector regression with ANOVA decomposition kernels. In: Advances in kernel methods, MIT Press, Cambridge, MA, pp 285–291
- Sudret B (2012) Meta-models for structural reliability and uncertainty quantification. In: Asian-Pacific symposium on structural reliability and its applications, Singapore, pp 1–24
- Yi G (2009) Variable selection with penalized Gaussian process regression models. PhD thesis, University of Newcastle upon Tyne