
Modeling and optimization with Gaussian processes in reduced eigenbases

  • Research Paper
  • Published in Structural and Multidisciplinary Optimization

Abstract

Parametric shape optimization aims at minimizing an objective function f(x), where x is a vector of CAD parameters. This task is difficult when f(⋅) is the output of an expensive-to-evaluate numerical simulator and the number of CAD parameters is large. Most often, the set of all considered CAD shapes resides on a manifold of lower effective dimension, in which it is preferable to build the surrogate model and perform the optimization. In this work, we uncover the manifold through a high-dimensional shape mapping and build a new coordinate system made of eigenshapes. The surrogate model is learned in the space of eigenshapes: a regularized likelihood maximization provides the dimensions most relevant to the output. The final surrogate model is detailed (anisotropic) with respect to the most sensitive eigenshapes and rough (isotropic) in the remaining dimensions. Finally, the optimization focuses on the critical dimensions, the remaining ones being coarsely optimized through a random embedding and the manifold being accounted for through a replication strategy. At low budgets, the methodology leads to a more accurate model and a faster optimization than the classical approach of working directly with the CAD parameters.
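The eigenshape construction summarized above amounts to a principal component analysis of the high-dimensional shape mappings. The following Python sketch (an illustration of the general idea only, with array names and the variance threshold chosen by us, not the authors' implementation) builds the eigenbasis by singular value decomposition and keeps the d′ leading eigenshapes:

```python
import numpy as np

def build_eigenbasis(Phi, var_ratio=0.99):
    """PCA of n shape mappings (rows of Phi, each of dimension D).
    Keeps the d' leading eigenshapes explaining var_ratio of the total
    variance. Illustrative sketch, not the paper's code."""
    phi_bar = Phi.mean(axis=0)                   # mean shape
    _, s, Vt = np.linalg.svd(Phi - phi_bar, full_matrices=False)
    var = s**2 / np.sum(s**2)                    # explained-variance ratios
    d_prime = int(np.searchsorted(np.cumsum(var), var_ratio)) + 1
    V = Vt[:d_prime].T                           # eigenshapes v^1, ..., v^{d'}
    alpha = (Phi - phi_bar) @ V                  # eigencomponents of each shape
    return phi_bar, V, alpha

# toy usage: 20 shapes discretized on D = 500 points, intrinsic dimension 3
rng = np.random.default_rng(0)
Phi = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 500))
phi_bar, V, alpha = build_eigenbasis(Phi)
print(V.shape, alpha.shape)
```

New shapes are then represented by their (low-dimensional) coordinates alpha in this basis, which is where the surrogate model is built.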




Notes

  1. Even after pruning the αj components for \(j>d^{\prime }\) (see the comments at the end of Section 2.2), \(n<d^{\prime }\) may still hold.

  2. As explained at the end of Section 2, all calculations can be restricted to the first \(d^{\prime }\) coordinates of α. Even though \(d^{\prime }\ll D\), \(d^{\prime }\) is approximately as large as d, hence the optimization is still carried out in a high-dimensional space.

  3. In this article, the mapping T(⋅) is the composition of ϕ(⋅) with the projection onto a subspace of \((\mathbf v^{1},\dotsc ,\mathbf v^{D})\).

  4. Since \(\mathcal A\) is not known to be convex, the projection might not be unique.
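The projection in note 3 is straightforward when the eigenshapes are orthonormal: projecting a mapped shape onto the affine subspace spanned by the first d′ eigenshapes reduces to inner products. A minimal sketch, with a hypothetical helper name of our choosing:

```python
import numpy as np

def project_onto_eigenbasis(phi_x, phi_bar, V):
    """Orthogonal projection of a shape mapping phi_x onto the affine
    subspace phi_bar + span(v^1, ..., v^{d'}); the columns of V are the
    orthonormal eigenshapes. Hypothetical helper, for illustration only."""
    alpha = V.T @ (phi_x - phi_bar)    # eigencomponents alpha_1, ..., alpha_{d'}
    return phi_bar + V @ alpha, alpha

# toy example in D = 4 with d' = 2 orthonormal directions
phi_bar = np.zeros(4)
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
proj, alpha = project_onto_eigenbasis(np.array([3.0, 4.0, 5.0, 6.0]), phi_bar, V)
print(alpha)  # -> [3. 4.]
print(proj)   # -> [3. 4. 0. 0.]
```

Note 4 concerns the harder problem of projecting back onto the set \(\mathcal A\) of admissible shapes itself, where no such closed form is available and uniqueness may fail.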



Funding

This research was partly funded by a CIFRE grant (convention #2016/0690) established between the ANRT and the Groupe PSA for the doctoral work of David Gaudrie.


Corresponding author

Correspondence to David Gaudrie.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Responsible Editor: Erdem Acar

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Replication of results

Comparisons between variants of EGO algorithms working in the original X space or in a reduced eigencomponent space are presented in Sections 3 and 4. Two of the three examples are analytical and easily reproducible. To facilitate replication, the pseudo-code of the final approach we propose, AddGP(\(\boldsymbol \alpha ^{a} + \boldsymbol \alpha ^{\overline {a}}\))-EI embed with replication, is given in Section 5. The penalized maximum likelihood was implemented in the R language by extending the kergp package, the additive GP was also built with kergp, and the EI was maximized with the genoud function of the rgenoud package.
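The paper's pipeline relies on the R packages kergp and rgenoud. Purely as an illustration of the additive-GP-with-EI idea (anisotropic kernel on the active eigencomponents, isotropic kernel on the rest), here is a self-contained numpy sketch; the kernel forms, hyperparameter values, toy objective, and candidate-search EI maximizer are all simplifying assumptions of ours, not the authors' implementation:

```python
import numpy as np
from math import erf, sqrt, pi

def k_aniso(A, B, theta):
    # anisotropic squared-exponential: one length-scale per active dimension
    d2 = ((A[:, None, :] - B[None, :, :]) / theta) ** 2
    return np.exp(-0.5 * d2.sum(-1))

def k_iso(A, B, theta):
    # isotropic squared-exponential: a single shared length-scale
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / theta**2
    return np.exp(-0.5 * d2)

def gp_fit_predict(X, y, Xnew, a, theta_a, theta_rest, nugget=1e-8):
    """Additive GP: detailed kernel on the active dims `a`, rough isotropic
    kernel on the remaining dims. Returns posterior mean and st. dev."""
    r = [j for j in range(X.shape[1]) if j not in a]
    K = k_aniso(X[:, a], X[:, a], theta_a) + k_iso(X[:, r], X[:, r], theta_rest)
    Ks = k_aniso(Xnew[:, a], X[:, a], theta_a) + k_iso(Xnew[:, r], X[:, r], theta_rest)
    L = np.linalg.cholesky(K + nugget * np.eye(len(y)))
    w = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks.T)
    var = np.maximum(2.0 - (v**2).sum(0), 1e-12)  # prior variance 2 = sum of kernels
    return Ks @ w, np.sqrt(var)

def expected_improvement(mu, sd, fmin):
    z = (fmin - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    phi = np.exp(-0.5 * z**2) / sqrt(2.0 * pi)
    return (fmin - mu) * Phi + sd * phi

# toy run: f depends mainly on the two "active" eigencomponents
rng = np.random.default_rng(1)
f = lambda X: (X[:, 0] - 0.3) ** 2 + (X[:, 1] + 0.2) ** 2 + 0.01 * X[:, 2:].sum(1)
X = rng.uniform(-1, 1, size=(12, 5)); y = f(X)
for _ in range(10):  # EI maximized over random candidates (genoud stand-in)
    C = rng.uniform(-1, 1, size=(500, 5))
    mu, sd = gp_fit_predict(X, y, C, a=[0, 1],
                            theta_a=np.array([0.4, 0.4]), theta_rest=1.5)
    xnew = C[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.vstack([X, xnew]); y = np.append(y, f(xnew[None, :]))
print(y.min())
```

In the actual method the hyperparameters are estimated by penalized maximum likelihood, the active set a is selected from that estimation, and the sampled points are projected back to the shape manifold before evaluation; none of those steps are reproduced in this sketch.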


About this article


Cite this article

Gaudrie, D., Le Riche, R., Picheny, V. et al. Modeling and optimization with Gaussian processes in reduced eigenbases. Struct Multidisc Optim 61, 2343–2361 (2020). https://doi.org/10.1007/s00158-019-02458-6

