A Classifier-Assisted Framework for Expensive Optimization Problems: A Knowledge-Mining Approach

  • Yoel Tenne
  • Kazuhiro Izui
  • Shinji Nishiwaki
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6683)

Abstract

Real-world engineering design optimization problems often rely on computationally expensive simulations in place of laboratory experiments. A common optimization approach is to approximate the expensive simulation with a computationally cheaper model, resulting in a model-assisted optimization algorithm. A prevalent issue in such problems is that the simulation may crash for some input vectors, a scenario which increases the optimization difficulty and wastes computer resources. A common way to handle such vectors is to assign them a penalized fitness and incorporate them into the model training set, but this can severely deform the model and degrade the optimization efficacy. As an alternative, we propose a classifier-assisted framework in which a classifier is incorporated into the optimization search and biases the optimizer away from vectors predicted to crash the simulator, thereby avoiding model deformation. Performance analysis shows that the proposed framework improves performance with respect to the penalty approach, and that it may be possible to ‘knowledge-mine’ the classifier as a post-optimization stage to gain new insights into the problem being solved.
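The full framework is given in the paper itself; the sketch below is only a minimal illustration of the idea in the abstract, assuming a toy 2-D "simulation" that crashes inside a disc. It uses scikit-learn's SVC and GaussianProcessRegressor as stand-ins for the paper's classifier and Kriging model; the function names (run_simulation, screened_objective) and the crude random-search loop are illustrative assumptions, not the authors' implementation.

```python
# Sketch: classifier-assisted screening of candidate vectors.
# Crashed vectors are excluded from the surrogate's training set
# (so they cannot deform the model) and are instead used to train
# a classifier that biases the search away from the crash region.
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def run_simulation(x):
    """Toy expensive simulation: 'crashes' (returns None) inside a disc."""
    if np.linalg.norm(x - np.array([0.5, 0.5])) < 0.25:
        return None                        # simulator failure
    return float(np.sum((x - 0.8) ** 2))   # objective to minimise

# Initial design: sample the unit square and record which vectors crashed.
X = rng.random((40, 2))
y = np.array([run_simulation(x) for x in X], dtype=object)
crashed = np.array([v is None for v in y])

# Kriging-style surrogate trained on valid points only.
gp = GaussianProcessRegressor().fit(X[~crashed],
                                    np.array(y[~crashed], dtype=float))

# SVM classifier trained on all points to predict crash / no-crash.
clf = SVC(kernel="rbf").fit(X, crashed)

def screened_objective(x):
    """Surrogate prediction, heavily penalised if a crash is predicted."""
    x = x.reshape(1, -1)
    penalty = 1e6 if clf.predict(x)[0] else 0.0
    return gp.predict(x)[0] + penalty

# Crude random-search stand-in for the optimizer: the classifier
# biases the search away from the predicted-crash region.
candidates = rng.random((2000, 2))
best = min(candidates, key=screened_objective)
print("best candidate:", best)
```

After the run, the trained classifier can itself be probed, e.g. on a grid of inputs, to map out the predicted crash region; this is the kind of post-optimization 'knowledge mining' the abstract refers to.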

Keywords

Support Vector Machine · Near Neighbour · Optimization Search · Kriging Model · Penalty Approach



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Yoel Tenne (1)
  • Kazuhiro Izui (1)
  • Shinji Nishiwaki (1)
  1. Kyoto University, Kyoto, Japan
