In his paper “Jaws 30”, celebrating 30 years since the publication of the first book on Genetic Programming (GP), W. B. Langdon reported John Koza's remarks at GECCO 2022. In particular, Koza highlighted the importance of pursuing research in GP with a multidisciplinary approach. I fully agree with this statement, and I believe it is a relevant path if we want to popularize GP outside the research community and open new perspectives for the next 30 years.
In particular, while GP has demonstrated its suitability for addressing problems across different domains and for producing human-competitive results, it still cannot compete with deep neural networks in terms of adoption. In this sense, I strongly believe future research in GP must be based on a multidisciplinary approach. At GECCO 2022, Koza suggested that such an approach should specifically involve the area of biology, with its concepts and methods. I believe several areas can contribute significantly to the future success of GP. In particular, merging the knowledge acquired from different research areas is a key ingredient for reaching fully interpretable models: while existing works refer to GP as a technique that produces interpretable models, this is only partially true. Tree-based GP models of limited depth are interpretable, but what about large GP models that may rely on different representations? Are they interpretable by a domain expert? Personally, I have some concerns about considering such models human-interpretable.
In this sense, GP should exploit methods emerging from different areas (not limited to machine learning) to answer the quest for interpretable models, which nowadays represents a turning point for the adoption of machine learning (ML) models in specific areas. For GP, the notion of interpretability has not even been formalized yet. Thus, the foundational question of what it means to interpret a GP model must be addressed. At present, it is not clear how to select or evaluate methods for producing interpretations of GP models. It would therefore be of fundamental interest to formally define these concepts, with the long-term objective of defining a unifying framework for interpretable evolutionary models, in which interpretability is considered beyond the mere representation of the solutions.
Once we reach this milestone, it will be possible to include interpretability constraints in the evolutionary process, enabling the creation of GP solutions that satisfy a given level of interpretability. In a recent study, Mei and coauthors [1] provided a comprehensive review of work on genetic programming that can potentially improve model interpretability, both explicitly and implicitly as a byproduct. In particular, they considered both intrinsic interpretability, which aims to directly evolve more interpretable models with genetic programming, and post-hoc interpretability, which uses genetic programming either to explain other black-box machine learning models or to approximate the evolved models with simpler ones, such as linear models. While the study showed the strong potential of genetic programming for improving the interpretability of machine learning models, it also pointed out that this field of study is still in its infancy, and more effort is needed before interpretable models can be evolved reliably.
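To make the idea of interpretability constraints in the evolutionary loop more concrete, the following minimal Python sketch treats structural size (node count) as a crude proxy for interpretability and combines it with prediction error during fitness evaluation, rejecting outright any tree that exceeds a size budget. The tree encoding, the `alpha` weight, and the `max_size` budget are illustrative assumptions of mine, not constructs taken from [1].

```python
import math

# A GP individual represented as a nested tuple: (operator, child, ...) or a leaf.
# Example: ("add", ("mul", "x", "x"), ("sin", "x")) encodes x*x + sin(x).
OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "sin": lambda a: math.sin(a),
}

def evaluate(tree, x):
    """Recursively evaluate a tree on a single input value x."""
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, *children = tree
    return OPS[op](*(evaluate(c, x) for c in children))

def size(tree):
    """Node count: a crude structural proxy for interpretability."""
    if not isinstance(tree, tuple):
        return 1
    return 1 + sum(size(c) for c in tree[1:])

def fitness(tree, data, alpha=0.01, max_size=25):
    """Mean squared error plus a parsimony penalty; trees over the size
    budget are rejected outright (a hard interpretability constraint)."""
    if size(tree) > max_size:
        return float("inf")
    mse = sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)
    return mse + alpha * size(tree)

# Toy usage: the target function is x*x + sin(x).
data = [(x / 10.0, (x / 10.0) ** 2 + math.sin(x / 10.0)) for x in range(-20, 21)]
candidate = ("add", ("mul", "x", "x"), ("sin", "x"))
print("size:", size(candidate), " fitness:", fitness(candidate, data))
```

Whether node count is an adequate proxy for interpretability is precisely the kind of question that the formal definition of interpretability discussed above would have to settle.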
All in all, pursuing the study of GP by considering different areas of research can play a fundamental role: from the formal definition of interpretable models to the evaluation of such models and the definition of methods for interpreting GP (and, in general, ML) solutions, results and ideas coming from different research areas will be fundamental to reaching the objective of automatically evolving interpretable solutions.
Another relevant point highlighted by Bill Langdon in his paper “Jaws 30” is that GP is nowadays an empirical method guided by experiments more than by theory. I believe this is a general property of younger disciplines (if we consider GP a young method). The same issue characterizes the field of deep neural networks, in which the design of network topologies is guided by experience more than by formal principles. However, the lack of a strong theory for the design of neural networks is not a justification to stop investigating the theoretical aspects of GP. On the contrary, moving the field from an empirical endeavor to a more theory-guided discipline is fundamental to popularizing the use of GP and strengthening the results achieved.
Nowadays, GP systems are developed by trying different options and selecting what seems to work. As in biological systems, if we consider a complex GP system (e.g., one combining GP with other heuristics or with complex genetic operators), it is often possible to remove some of that complexity without experiencing a performance loss. This phenomenon (the creation of unnecessarily complex systems) is today pushed by the pressure to publish: by building a GP system with unnecessary complexity, the system looks novel with respect to existing contributions, thus maximizing the chance of passing the review process. This “complexization” has emerged clearly in recent years, and GP papers (and ML papers more generally) do not always report sufficient detail to ensure the reliability of the results. Mathematics is typically not used in GP papers to formalize concepts, corroborate findings, or even justify the adoption of a new component in the evolutionary process. To overcome this situation and to strengthen GP’s findings, it is essential to design formal principles to be applied consistently across studies, so that we can understand whether added complexity is truly needed to reach better results.
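One simple principle of this kind, applied routinely, would be a systematic ablation protocol: rerun the system with and without the extra component over independent seeds and test whether the difference is statistically meaningful. The sketch below is illustrative only; the runner functions, the simulated results, and the choice of the Mann–Whitney U test are my own assumptions, not something prescribed by Langdon’s paper.

```python
import random
from statistics import median
from scipy.stats import mannwhitneyu  # standard nonparametric test

def ablation_study(run_full, run_ablated, n_runs=30, alpha=0.05):
    """Run the full GP system and an ablated variant over independent seeds,
    then test whether the removed component actually matters.

    run_full and run_ablated are placeholders: callables that take a random
    seed and return a final test error (RMSE, best fitness, ...)."""
    full = [run_full(seed) for seed in range(n_runs)]
    ablated = [run_ablated(seed) for seed in range(n_runs)]
    p = mannwhitneyu(full, ablated, alternative="two-sided").pvalue
    verdict = ("difference is significant" if p < alpha
               else "no evidence the extra complexity helps")
    print(f"full median: {median(full):.3f}  "
          f"ablated median: {median(ablated):.3f}  p={p:.4f}  -> {verdict}")

# Toy demo with simulated runs standing in for real GP executions.
def fake_full(seed):
    random.seed(seed)
    return random.gauss(0.50, 0.05)   # test error of the "complex" system

def fake_ablated(seed):
    random.seed(seed + 1000)
    return random.gauss(0.51, 0.05)   # nearly identical error without the extra part

ablation_study(fake_full, fake_ablated)
```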
Reference
[1] Y. Mei, Q. Chen, A. Lensen, B. Xue, M. Zhang, Explainable artificial intelligence by genetic programming: a survey. IEEE Trans. Evol. Comput. (2022). https://doi.org/10.1109/TEVC.2022.3225509
Funding
Open access funding provided by FCT|FCCN (b-on).