Abstract
Recent studies have demonstrated that the performance of reference-vector (RV) based evolutionary multi- and many-objective optimization algorithms can be improved through the intervention of machine learning (ML) methods. These studies have shown how efficient search directions, learnt from the solutions of intermediate generations, can be utilized to create pro-convergence and pro-diversity offspring, leading to better convergence and diversity, respectively. The entailed steps of data-set preparation, training of ML models, and utilization of these models have been encapsulated as Innovized Progress operators, namely IP2 (for convergence improvement) and IP3 (for diversity improvement). Evidently, the focus in these studies has been on proof of concept, and no exploratory analysis has been done to investigate if, and how drastically, the operators' performance may be impacted when their underlying ML methods (Random Forest for IP2, and kNN for IP3) are varied. This paper seeks to bridge this gap through an exploratory analysis of both IP2 and IP3, based on eight different ML methods, tested against an exhaustive test suite comprising seven multi-objective and 32 many-objective test instances. While the results broadly endorse the robustness of the existing IP2 and IP3 operators, they also reveal interesting trade-offs across the different ML methods, in terms of the hypervolume (HV) metric and the corresponding run-time. Notably, within the gamut of the considered test suite and the ML methods adopted, kNN emerges as the winner for both IP2 and IP3, based on joint consideration of the HV metric and run-time.
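The core idea behind the Innovized Progress operators, as summarized above, is to learn a mapping from earlier-generation solutions to improved solutions, and to apply that mapping to newly created offspring. The following is a minimal illustrative sketch of that idea using a hand-rolled kNN regressor; it is not the authors' implementation, and the function names, training pairs, and toy data are hypothetical.

```python
# Hedged sketch: learn a solution-to-solution mapping with kNN, then use it to
# "progress" a raw offspring. The real IP2/IP3 operators involve additional
# steps (reference-vector association, data-set curation) not modeled here.
import numpy as np

def knn_progress(X_old, X_improved, offspring, k=3):
    """Predict an improved decision vector for `offspring` by averaging the
    improved counterparts of its k nearest neighbours among X_old."""
    dists = np.linalg.norm(X_old - offspring, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                    # indices of k closest
    return X_improved[nearest].mean(axis=0)            # average their targets

# Toy archive: assume each archived solution "improved" by +1 per variable.
X_old = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
X_improved = X_old + 1.0
child = np.array([1.0, 1.0])
progressed = knn_progress(X_old, X_improved, child, k=1)
```

With `k=1`, the offspring is mapped to the improved counterpart of its single nearest archived solution; larger `k` smooths the prediction over several neighbours, which is one reason kNN's accuracy-versus-run-time profile matters in this study.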
Notes
- 1.
For this paper, the 21 seed runs were executed in parallel to save overall run-time, because of which the exact run-time of each individual seed was not traceable. Hence, for the run-time estimate, only the seed corresponding to the median hypervolume was executed again.
Acknowledgement
The authors wish to acknowledge the Government of India for supporting this research through an Indo-US SPARC project (code: P66). The authors also wish to thank Sukrit Mittal for his support throughout this work.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Bhasin, D., Swami, S., Sharma, S., Sah, S., Saxena, D.K., Deb, K. (2023). Investigating Innovized Progress Operators with Different Machine Learning Methods. In: Emmerich, M., et al. Evolutionary Multi-Criterion Optimization. EMO 2023. Lecture Notes in Computer Science, vol 13970. Springer, Cham. https://doi.org/10.1007/978-3-031-27250-9_10
Print ISBN: 978-3-031-27249-3
Online ISBN: 978-3-031-27250-9