
Meta-Modeling in Multiobjective Optimization

Chapter in: Multiobjective Optimization

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5252)

Abstract

In many practical engineering design and other scientific optimization problems, the objective function is not given in closed form in terms of the design variables. Given the values of the design variables, the value of the objective function is obtained by some numerical analysis, such as structural analysis, fluid-mechanic analysis, or thermodynamic analysis. It may even be obtained by conducting a real (physical) experiment and taking direct measurements. These evaluations are usually far more time-consuming than evaluations of closed-form functions. To keep the number of evaluations as small as possible, iterative search may be combined with meta-modeling: the objective function is modeled during optimization by fitting a function through the evaluated points. This model is then used to help predict the values of future search points, so that high-performance regions of the design space can be identified more rapidly. In this chapter, a survey of meta-modeling approaches and their suitability to specific problem contexts is given. Aspects such as dimensionality, noise, and the expense of evaluations are related to the choice of methods. For the multiobjective version of the meta-modeling problem, further questions must be considered, such as how to define improvement in a Pareto approximation set, and how to model each objective function. The possibility of interactive methods combining meta-modeling with decision making is also covered. Two example applications are included: one is a multiobjective biochemistry problem involving instrument optimization; the other relates to seismic design in the reinforcement of cable-stayed bridges.
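The loop described in the abstract (evaluate a few points for real, fit a model through them, screen many candidates cheaply on the model, and spend true evaluations only on the most promising ones) can be sketched in a few lines. The sketch below is illustrative only, not the chapter's method: it uses a simple least-squares quadratic as the meta-model and a hypothetical closed-form test function standing in for an expensive simulation.

```python
import random

def expensive_objective(x):
    # Stand-in for a costly simulation or experiment (hypothetical test function).
    return (x - 0.3) ** 2 + 0.1

def fit_quadratic(archive):
    """Least-squares fit y ~ a*x^2 + b*x + c through the evaluated points."""
    n = float(len(archive))
    s1 = sum(x for x, _ in archive)
    s2 = sum(x ** 2 for x, _ in archive)
    s3 = sum(x ** 3 for x, _ in archive)
    s4 = sum(x ** 4 for x, _ in archive)
    t0 = sum(y for _, y in archive)
    t1 = sum(x * y for x, y in archive)
    t2 = sum(x * x * y for x, y in archive)
    A = [[s4, s3, s2], [s3, s2, s1], [s2, s1, n]]   # normal equations
    rhs = [t2, t1, t0]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for j in range(3):                               # Cramer's rule, column by column
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = rhs[i]
        coeffs.append(det3(m) / d)
    return coeffs                                    # a, b, c

def surrogate_assisted_minimize(objective, budget=12, seed=1):
    rng = random.Random(seed)
    # Initial design: a handful of truly evaluated points.
    archive = [(x, objective(x)) for x in (0.0, 0.5, 1.0)]
    while len(archive) < budget:
        a, b, c = fit_quadratic(archive)             # (re)fit the meta-model
        candidates = [rng.random() for _ in range(200)]
        # Screen many candidates cheaply on the surrogate ...
        nxt = min(candidates, key=lambda x: a * x * x + b * x + c)
        # ... and spend a real evaluation only on the most promising one.
        archive.append((nxt, objective(nxt)))
    return min(archive, key=lambda p: p[1])

x_best, y_best = surrogate_assisted_minimize(expensive_objective)
```

In practice the polynomial surrogate would be replaced by the kinds of models the chapter surveys (e.g., radial basis function networks, Kriging/Gaussian random field models, support vector regression), and the greedy "take the surrogate minimum" rule by an infill criterion that balances exploitation against model uncertainty.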

Reviewed by: Jerzy Błaszczyński, Poznan University, Poland; Yaochu Jin, Honda Research Institute Europe, Germany; Koji Shimoyama, Tohoku University, Japan; Roman Słowiński, Poznan University of Technology, Poland




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Knowles, J., Nakayama, H. (2008). Meta-Modeling in Multiobjective Optimization. In: Branke, J., Deb, K., Miettinen, K., Słowiński, R. (eds) Multiobjective Optimization. Lecture Notes in Computer Science, vol 5252. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88908-3_10


  • DOI: https://doi.org/10.1007/978-3-540-88908-3_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-88907-6

  • Online ISBN: 978-3-540-88908-3

  • eBook Packages: Computer Science (R0)
