Abstract
Heuristic search can be an effective multi-objective optimization tool; however, the frequent function evaluations it requires can exhaust computational resources. This paper explores a hybrid approach that uses statistical interpolation methods to expand the set of optimal solutions obtained by multiple criteria heuristic search. The goal is to significantly increase the number of Pareto optimal solutions while limiting computational effort. The interpolation approaches studied are kriging and general regression neural networks. The paper develops a hybrid methodology combining an interpolator with a heuristic and examines its performance on several non-linear bi-objective example problems. Computational experience shows that this approach successfully expands and enriches the Pareto fronts of multi-objective optimization problems.
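The core idea of expanding a Pareto front by interpolation can be sketched with a general regression neural network, which reduces to a Gaussian-kernel-weighted average of known outputs. The sketch below is illustrative only (the data, function names, and kernel width are assumptions, not the paper's actual models): given decision vectors and objective values found by the heuristic, it estimates the objective value at an intermediate candidate point.

```python
import numpy as np

def grnn_predict(X_train, y_train, x_query, sigma=0.1):
    """GRNN-style estimate: a Gaussian-kernel-weighted average of the
    training targets, with kernel width sigma (the single smoothing
    parameter of a general regression neural network)."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)  # squared distances to query
    w = np.exp(-d2 / (2.0 * sigma ** 2))           # kernel weights
    return float(np.dot(w, y_train) / np.sum(w))

# Illustrative data: decision vectors on a known Pareto set and one
# objective's values; interpolate at a point between two solutions.
pareto_X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
pareto_f1 = np.array([0.0, 2.0, 4.0])
new_point = np.array([0.5, 0.5])
estimate = grnn_predict(pareto_X, pareto_f1, new_point, sigma=0.5)
```

Because the estimate is a convex combination of the training targets, it always lies within their range, which makes the method a conservative way to densify a front between solutions already certified by the heuristic.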
Appendices
Appendix A
Test functions
Test Problem 1
where d(·) is the Euclidean distance between the fixed point aᵢ and point X.
Test Problems 2 and 3
where
where d(·) is the distance (Euclidean for Test Problem 2 and rectilinear for Test Problem 3) between the fixed point aᵢ and a facility located at point X; M = 200, m = 1, d₁ = 10 and d₂ = 30.
Test Problems 4 and 5
where d(·) is the distance (Euclidean for Test Problem 4 and rectilinear for Test Problem 5) between the fixed point aᵢ and a facility located at point X.
Test Problem 6
where d(·) is the Euclidean distance between the fixed point aᵢ and a facility located at point X, and b = −2.
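The two distance measures used across the test problems can be sketched as follows (function names are illustrative, not from the paper):

```python
import numpy as np

def euclidean(a, x):
    """Straight-line (L2) distance d(a, X), used in Test Problems 1, 2, 4 and 6."""
    a, x = np.asarray(a, dtype=float), np.asarray(x, dtype=float)
    return float(np.sqrt(np.sum((a - x) ** 2)))

def rectilinear(a, x):
    """Rectilinear (Manhattan, L1) distance d(a, X), used in Test Problems 3 and 5."""
    a, x = np.asarray(a, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum(np.abs(a - x)))

# For the same pair of points the rectilinear distance is never smaller,
# e.g. between (0, 0) and (3, 4): Euclidean 5.0 vs rectilinear 7.0.
d_e = euclidean([0, 0], [3, 4])   # 5.0
d_r = rectilinear([0, 0], [3, 4]) # 7.0
```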
The data for the first five test problems are given in Table A1.
Appendix B
Pareto set and Pareto front figures
Figures B1–B18.
Yapicioglu, H., Liu, H., Smith, A. et al. Hybrid approach for Pareto front expansion in heuristics. J Oper Res Soc 62, 348–359 (2011). https://doi.org/10.1057/jors.2010.151