Surrogate-assisted Bounding-Box approach for optimization problems with tunable objectives fidelity

Abstract

In this work, we present a novel framework for multi-objective optimization with expensive objective functions computed at tunable fidelity. This setting is typical of many engineering optimization problems, for example when simulators rely on Monte Carlo methods or on iterative solvers: the objectives can only be estimated, with an accuracy that depends on the computational resources allocated by the user. We propose a heuristic for allocating these resources efficiently, so as to recover an accurate Pareto front at low computational cost. The approach is independent of the choice of the optimizer and overall very flexible for the user. The framework is based on the concept of Bounding-Box, where the estimation error is represented as an interval (in one-dimensional problems) or a product of intervals (in multi-dimensional problems) around the estimated value, naturally allowing the computation of an approximated Pareto front. This approach is then supplemented by the construction of a surrogate model on the estimated objective values. We first study the convergence of the approximated Pareto front toward the true continuous one under some hypotheses. Second, a numerical algorithm is proposed and tested on several numerical test cases.
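To make the Bounding-Box abstraction concrete, the conservative box-dominance rule suggested above can be sketched in a few lines of Python. This is our illustrative reading, not the authors' code: it assumes minimization and componentwise error half-widths, and all names are ours.

```python
import numpy as np

def box_dominates(f_a, eps_a, f_b, eps_b):
    # Box A certainly dominates box B (minimization) when the worst case
    # of A is better than the best case of B in every objective.
    return bool(np.all(f_a + eps_a < f_b - eps_b))

def boxed_pareto_indices(F, E):
    # Indices of designs whose box is not certainly dominated by any other
    # box; these form the approximated Pareto front.
    # F, E: (N, m) arrays of estimated objectives and error half-widths.
    N = len(F)
    return [i for i in range(N)
            if not any(box_dominates(F[j], E[j], F[i], E[i])
                       for j in range(N) if j != i)]
```

With fixed estimates, shrinking the half-widths in E can only shrink this survivor set; in the actual framework the estimates are refined jointly with their errors as more resources are allocated.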


Author information

Correspondence to M. Rivier.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Proof 1

Proof

The proof here is trivial. By definition,

$$\begin{aligned}&{\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ) = \big \{{\varvec{x}}_i,\ i \in \llbracket 1,N \rrbracket \ |\ \text{ non-domination } \text{ condition }\big \} \\&\quad \subseteq \big \{{\varvec{x}}_i,\ i \in \llbracket 1,N \rrbracket \big \} \equiv \big \{{\varvec{x}}_i\big \}_{i=1}^N. \end{aligned}$$

The same can be said for the boxed Pareto optima. \(\square \)

Proof 2

Proof

For \({\varvec{y}}\notin {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big )\), let us assume that \(\not \exists {\varvec{y}}' \in {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ),\ {\varvec{y}}' \succ {\varvec{y}}\). Then \({\varvec{y}}\in {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big ({\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ) \cup \big \{{\varvec{y}}\big \}\Big ) = {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big )\) from Eq. 6, which proves the first implication by contradiction. The second implication is immediate from the definition of \({\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\). \(\square \)
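As a quick numerical illustration of Proofs 1 and 2, a plain (zero-error) non-dominated filter satisfies both the subset property and idempotence. The sketch below assumes minimization; `pareto_indices` is our illustrative name.

```python
import numpy as np

def pareto_indices(F):
    # Non-dominated subset of objective vectors (minimization): i is kept
    # unless some j is no worse in every objective and better in one.
    N = len(F)
    return [i for i in range(N)
            if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                       for j in range(N) if j != i)]

rng = np.random.default_rng(0)
F = rng.random((50, 2))
P = pareto_indices(F)
assert set(P) <= set(range(len(F)))                  # Proof 1: subset property
assert pareto_indices(F[P]) == list(range(len(P)))   # Proof 2: re-filtering changes nothing
```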

Proof 3

Proof

By using the explicit definition of the Pareto front as in Eq. 4, the proof is immediate, as Assumption 1 gives \(\forall {\varvec{x}}\in {\mathcal {X}},\ \forall j \in \llbracket 1,m \rrbracket ,\ f_j({\varvec{x}}) \in \big [{\widetilde{f}}_j({\varvec{x}}) - {\overline{\varepsilon }}_j({\varvec{x}}),\ {\widetilde{f}}_j({\varvec{x}}) + {\overline{\varepsilon }}_j({\varvec{x}})\big ]\). Hence, \(\forall {\varvec{x}}_i \in {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big )\):

$$\begin{aligned}&\forall k \in \llbracket 1,N \rrbracket ,\ \exists j \in \llbracket 1,m \rrbracket ,\ \pm f_j({\varvec{x}}_i) < \pm f_j({\varvec{x}}_k) \\&\quad \implies \pm {\widetilde{f}}_j({\varvec{x}}_i) - {\overline{\varepsilon }}_j({\varvec{x}}_i) < \pm {\widetilde{f}}_j({\varvec{x}}_k) + {\overline{\varepsilon }}_j({\varvec{x}}_k). \end{aligned}$$

Therefore, \({\varvec{x}}_i \in {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}_{{\mathcal {B}}}}^{{\varvec{f}}}\Big (\big \{\big ({\varvec{x}}_i,\overline{{\varvec{\varepsilon }}}({\varvec{x}}_i)\big )\big \}_{i=1}^N\Big )\), which ends the proof. \(\square \)
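Proof 3 states that a design that is Pareto-optimal for the exact objectives can never be certainly dominated in the boxed sense, so the boxed filter never discards it. A randomized check of this inclusion, reusing `pareto_indices` and `boxed_pareto_indices` from the sketches above; the data generation is ours and merely enforces Assumption 1 by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
F_true = rng.random((50, 2))                         # exact objectives f(x_i), unknown in practice
E = 0.05 * rng.random((50, 2))                       # error half-widths eps_bar(x_i)
F_est = F_true + (2 * rng.random((50, 2)) - 1) * E   # estimates inside the boxes (Assumption 1)

exact = set(pareto_indices(F_true))
boxed = set(boxed_pareto_indices(F_est, E))
assert exact <= boxed                                # inclusion established by Proof 3
```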

Proof 4

Proof

We prove this by mathematical induction.

Let us assume that there exists \(k \in {\mathbb {N}}\) such that \({\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ) \subseteq {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\). Then \(\forall {\varvec{x}}\in {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\):

  • if \({\varvec{x}}\in {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ),\ \not \exists {\varvec{y}}\in {\mathcal {X}}, \text{ so } \text{ that } {\varvec{y}}\succ {\varvec{x}}\). Hence, \(\not \exists {\varvec{y}}\in {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k} \subseteq {\mathcal {X}}, \text{ so } \text{ that } {\varvec{y}}\succ {\varvec{x}}\), and therefore \({\varvec{x}}\in {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big );\)

  • if \({\varvec{x}}\notin {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big )\), with Proposition 2, \(\exists {\varvec{y}}\in {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ) \subseteq {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}, \text{ so } \text{ that } {\varvec{y}}\succ {\varvec{x}}\), therefore, \({\varvec{x}}\notin {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big ).\)

In both cases, it follows that:

$$\begin{aligned} {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ) = {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big ). \end{aligned}$$
(13)

Finally, Lemma 1 gives \({\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big ) \subseteq {\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}_{{\mathcal {B}}}}^{{\varvec{f}}}\Big (\Big \{\Big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k},\overline{{\varvec{\varepsilon }}}^{k}\Big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\Big )\Big )\Big \}_{i=1}^N \Big ) = {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k+1}\), which ends the inductive step of the proof, yielding \({\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ) \subseteq {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k+1}\).

The base case holds since \({\mathcal {X}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}}}\Big (\big \{{\varvec{x}}_i\big \}_{i=1}^N\Big ) \subseteq \big \{{\varvec{x}}_i\big \}_{i=1}^N = {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},0}\); the mathematical induction therefore proves that the robustness inclusion is verified. \(\square \)
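The recursion behind this proof can be exercised on the toy data from the previous sketch: start from all candidates, and at each step re-estimate the survivors at a higher fidelity and re-apply the boxed filter. The halving schedule below is our stand-in for the refined half-widths \(\overline{{\varvec{\varepsilon }}}^{k}\); `F_true`, `exact` and `boxed_pareto_indices` are reused from above.

```python
import numpy as np

survivors = list(range(len(F_true)))   # X_tilde^{f,0}: all candidates
eps = 0.05
for k in range(6):
    E_k = np.full(F_true.shape, eps)
    # Re-estimate at the current fidelity so Assumption 1 keeps holding.
    F_k = F_true + (2 * rng.random(F_true.shape) - 1) * E_k
    idx = boxed_pareto_indices(F_k[survivors], E_k[survivors])
    survivors = [survivors[i] for i in idx]          # X_tilde^{f,k+1}
    assert exact <= set(survivors)                   # robustness inclusion (Proof 4)
    eps /= 2                                         # errors shrink with fidelity
```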

Proof 5

Proof

The triangle inequality gives:

$$\begin{aligned}&d_H\Big ({\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big ),\ {\mathcal {P}}_c\Big ) \le d_H\Big ({\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big ),{\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\Big )\nonumber \\&\quad + d_H\Big ({\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big ),\ {\mathcal {P}}_c\Big ). \end{aligned}$$
(14)

Assumption 1 implies that \(\forall {\varvec{x}}\in {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k},\ d_{\infty }\big ({\varvec{f}}({\varvec{x}}),{\widetilde{{\varvec{f}}}}^{l_k}({\varvec{x}})\big ) \le \underset{j}{\max }\ \overline{{\varvec{\varepsilon }}}_j^{l_k}({\varvec{x}})\).

Now, let us suppose that \(\exists {\varvec{a}}\in {\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\), so that \(d_{\infty }\Big ({\varvec{a}}, {\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\Big ) > \underset{(i,j)}{\max }\ \overline{{\varvec{\varepsilon }}}_j^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big )\).

This means that \({\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\cap \big ({\varvec{a}},\overline{{\varvec{\varepsilon }}}_{max}\big ) = \emptyset \) with \(\overline{{\varvec{\varepsilon }}}_{max}\) being the m-dimensional vector where each component is equal to \(\underset{(i,j)}{\max }\ \overline{{\varvec{\varepsilon }}}_j^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big )\).

Therefore, from Definition 8, (i) either \(\exists i \in \llbracket 1,N \rrbracket \) such that \(\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big ),0\big ) \underset{{\mathcal {B}}}{\succ \succ } \big ({\varvec{a}},\overline{{\varvec{\varepsilon }}}_{max}\big )\), or (ii) \(\forall {\varvec{a}}' \in \big ({\varvec{a}},\overline{{\varvec{\varepsilon }}}_{max}\big ), \not \exists i \in \llbracket 1,N \rrbracket \) such that \({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big ) \succ {\varvec{a}}'\).

  • In the first case (i), the dominance can be formulated as follows: \(\exists j \in \llbracket 1,N \rrbracket \) such that \(\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_j}^{{\varvec{f}},k}\big ),\overline{{\varvec{\varepsilon }}}_{max}\big ) \underset{{\mathcal {B}}}{\succ \succ } \big ({\varvec{a}},0\big )\), and since \({\varvec{a}}\in {\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\), \(\not \exists i \in \llbracket 1,N \rrbracket ,\ {\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big ) \succ \succ {\varvec{a}}\). Therefore, it can be inferred that \(\not \exists i \in \llbracket 1,N \rrbracket ,\ {\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big ) \in \big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_j}^{{\varvec{f}},k}\big ),\overline{{\varvec{\varepsilon }}}_{max}\big )\). However, this would mean \(\exists {\varvec{x}}\in {\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k},\ d_{\infty }\big ({\varvec{f}}({\varvec{x}}),{\widetilde{{\varvec{f}}}}^{l_k}({\varvec{x}})\big )> \overline{{\varvec{\varepsilon }}}_{max_i} > \underset{j}{\max }\ \overline{{\varvec{\varepsilon }}}_j^{l_k}({\varvec{x}})\), which contradicts Assumption 1.

  • The second case (ii) implies that \(\exists {\varvec{b}}\in {\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\) such that \(\big ({\varvec{a}},\overline{{\varvec{\varepsilon }}}_{max}\big ) \underset{{\mathcal {B}}}{\succ \succ } \big ({\varvec{b}},0\big )\). However, \(\exists j \in \llbracket 1,N \rrbracket \) such that \({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_j}^{{\varvec{f}},k}\big ) \succ {\varvec{a}}\). Therefore, it follows that \(\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_j}^{{\varvec{f}},k}\big ),\overline{{\varvec{\varepsilon }}}_{max}\big ) \underset{{\mathcal {B}}}{\succ \succ } \big ({\varvec{b}},0\big )\). As \({\varvec{b}}\in {\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\), from Definition 8 it follows that \(\not \exists i \in \llbracket 1,N \rrbracket \) such that \({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big ) \succ \succ {\varvec{b}}\); hence, \(\not \exists i \in \llbracket 1,N \rrbracket ,\ {\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big ) \in \big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_j}^{{\varvec{f}},k}\big ),\overline{{\varvec{\varepsilon }}}_{max}\big )\), which again contradicts Assumption 1.

Hence, we prove by contradiction that \(\forall {\varvec{a}}\in {\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\), it follows that \(d_{\infty }\Big ({\varvec{a}}, {\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\Big ) \le \underset{(i,j)}{\max }\ \overline{{\varvec{\varepsilon }}}_j^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big )\). This statement also holds when the two continuous Pareto fronts (real and approximated) are interchanged, and it can be proved in the same way. As a consequence, the Hausdorff distance can be bounded as follows:

$$\begin{aligned} d_H\Big ({\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big ),{\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\Big ) \le \underset{(i,j)}{\max }\ \overline{{\varvec{\varepsilon }}}_j^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big ). \end{aligned}$$

Hence, Assumption 2 implies that:

$$\begin{aligned} \lim _{k \rightarrow \infty } d_H\Big ({\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big ),{\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big )\Big ) = 0. \end{aligned}$$
(15)

Let us now focus on the second term of the sum in Eq. 14.

Of course, \(\forall {\varvec{a}}\in {\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big ), \exists {\varvec{a}}' \in {\mathcal {P}}_c\) such that \({\varvec{a}}' \succ {\varvec{a}}\) (or \({\varvec{a}}' = {\varvec{a}}\)). Moreover, \(\forall {\varvec{b}}\in {\mathbb {R}}^m\) such that \(\exists {\varvec{a}}' \in {\mathcal {P}},\ {\varvec{a}}' \succ \succ {\varvec{b}}\), Assumption 3 together with Theorem 1 ensures that the recursive discrete efficient set converges toward the continuous real one and that this efficient set is included in \({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\). In other words, \(\forall k \in {\mathbb {N}}, \exists M \in {\mathbb {N}}^*, \exists i \in \llbracket 1,M \rrbracket \) such that \(\forall j \in \llbracket 1,m \rrbracket ,\ \big |a'_j - f_j\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big )\big | < \big |a'_j - b_j\big |\). Thus, \(\forall k \in {\mathbb {N}}, \exists M \in {\mathbb {N}}^*, \exists i \in \llbracket 1,M \rrbracket ,\ {\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}_i}^{{\varvec{f}},k}\big ) \succ \succ {\varvec{b}}\). Hence, \({\widetilde{{\mathcal {P}}}}_c\) is always dominated by \({\mathcal {P}}_c\), and any element dominated by \({\mathcal {P}}_c\) is also dominated by \({\widetilde{{\mathcal {P}}}}_c\) for a sufficient number of points. From Definition 8, it can be deduced that:

$$\begin{aligned} \lim _{N \rightarrow \infty } d_H\Big ({\widetilde{{\mathcal {P}}}}_c\big ({\varvec{f}}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big ),\ {\mathcal {P}}_c\Big ) = 0. \end{aligned}$$
(16)

Finally, by combining Eqs. 14, 15 and 16, we obtain:

$$\begin{aligned} \lim _{(N,l) \rightarrow (+\infty ,+\infty )}d_H\Big ({\widetilde{{\mathcal {P}}}}_c\big ({\widetilde{{\varvec{f}}}}^{l_k}\big ({\widetilde{{\mathcal {X}}}}_{{\widetilde{{\mathcal {P}}}}}^{{\varvec{f}},k}\big )\big ),\ {\mathcal {P}}_c\Big ) = 0, \end{aligned}$$

which ends the proof. \(\square \)
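For completeness, the Hausdorff distance \(d_H\) manipulated throughout this proof can be computed between two finite sets of objective vectors as follows; this minimal implementation with the sup-norm \(d_{\infty }\) is ours, not from the paper.

```python
import numpy as np

def d_H(A, B):
    # Symmetric Hausdorff distance between finite point sets A (n, m)
    # and B (p, m), using the sup-norm d_infty between points.
    D = np.max(np.abs(A[:, None, :] - B[None, :, :]), axis=2)  # pairwise d_infty
    return max(D.min(axis=1).max(), D.min(axis=0).max())       # max of directed distances
```

Evaluated between the estimated and exact discrete fronts, this quantity is bounded by the largest error half-width, as established above.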

Proof 6

Proof

The proof is straightforward and comes from the following inequalities:

If \(\overline{{\varvec{\varepsilon }}}_{SA}^t({\varvec{x}}_i) > {\varvec{s}}_1\),

$$\begin{aligned} \big |{\varvec{f}}_{opt}({\varvec{x}}_i) - {\varvec{f}}({\varvec{x}}_i)\big | = \big |{\widetilde{{\varvec{f}}}}^l({\varvec{x}}_i) - {\varvec{f}}({\varvec{x}}_i)\big | \le \overline{{\varvec{\varepsilon }}}^l({\varvec{x}}_i). \end{aligned}$$

Else,

$$\begin{aligned} \big |{\varvec{f}}_{opt}({\varvec{x}}_i) - {\varvec{f}}({\varvec{x}}_i)\big | = \big |{\varvec{f}}^t_{SA}({\varvec{x}}_i) - {\varvec{f}}({\varvec{x}}_i)\big | \le \overline{{\varvec{\varepsilon }}}_{SA}^t({\varvec{x}}_i) \end{aligned}$$

which comes from Eq. 3. \(\square \)
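The case split of this proof reads as a simple per-point selection rule between the surrogate prediction \({\varvec{f}}^t_{SA}\) and the direct tunable-fidelity estimate \({\widetilde{{\varvec{f}}}}^l\). A sketch under that reading; the vectorized form and names are ours.

```python
import numpy as np

def select_f_opt(f_sa, eps_sa, f_est, eps_est, s1):
    # When the surrogate error estimate exceeds the threshold s1, fall back
    # to the direct estimate f_tilde^l; otherwise keep the surrogate value
    # f_SA^t. The returned bound follows the same case split, so that
    # |f_opt - f| <= eps_opt componentwise, exactly as in Proof 6.
    use_direct = eps_sa > s1
    f_opt = np.where(use_direct, f_est, f_sa)
    eps_opt = np.where(use_direct, eps_est, eps_sa)
    return f_opt, eps_opt
```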


About this article


Cite this article

Rivier, M., Congedo, P.M. Surrogate-assisted Bounding-Box approach for optimization problems with tunable objectives fidelity. J Glob Optim 75, 1079–1109 (2019). https://doi.org/10.1007/s10898-019-00823-9

