Variable Transformations in Estimation of Distribution Algorithms

  • Davide Cucci
  • Luigi Malagò
  • Matteo Matteucci
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7491)

Abstract

In this paper we address model selection in Estimation of Distribution Algorithms (EDAs) based on transformations of the variables. Instead of the classic approach, based on the choice of a statistical model able to represent the interactions among the variables of the problem, we propose to learn a transformation of the variables before estimating the parameters of a fixed model in the transformed space. The choice of a proper transformation corresponds to the identification of a model for the selected sample that is able to implicitly capture higher-order correlations. We apply this paradigm to EDAs and present the novel Function Composition Algorithms (FCAs), based on the composition of transformation functions, namely I-FCA and Chain-FCA, which make use of fixed low-dimensional models in the transformed space, yet are able to recover higher-order interactions.
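The abstract describes the paradigm only at a high level. The following is a minimal, hypothetical Python sketch of the general idea of estimating a fixed low-dimensional (independence) model in a transformed space and mapping the sampled candidates back to the original variables. The names (xor_transform, fca_like_step) and the pairwise-XOR map are illustrative assumptions, not the transformations proposed by the authors.

```python
import numpy as np

def xor_transform(X):
    """Map x -> y with y[0] = x[0], y[i] = x[i] XOR x[i-1]; invertible on {0,1}^n."""
    Y = X.copy()
    Y[:, 1:] = np.bitwise_xor(X[:, 1:], X[:, :-1])
    return Y

def xor_inverse(Y):
    """Invert the transform: x[0] = y[0], x[i] = y[i] XOR x[i-1]."""
    X = Y.copy()
    for i in range(1, Y.shape[1]):
        X[:, i] = np.bitwise_xor(Y[:, i], X[:, i - 1])
    return X

def fca_like_step(selected, pop_size, rng):
    """One generation: fit a fixed univariate model in the transformed space and sample."""
    Y = xor_transform(selected)
    p = Y.mean(axis=0)                                      # independence-model parameters
    Y_new = (rng.random((pop_size, Y.shape[1])) < p).astype(int)
    return xor_inverse(Y_new)                               # candidates in the original variables

rng = np.random.default_rng(0)
selected = rng.integers(0, 2, size=(50, 16))                # stand-in for the selected sample
offspring = fca_like_step(selected, pop_size=100, rng=rng)
print(offspring.shape)
```

Even with a univariate model in the transformed space, the inverse map can reintroduce dependencies among the original variables, which is the intuition behind recovering higher-order interactions with fixed low-dimensional models.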

Keywords

Function Composition Algorithm · Transformation of Variables · Minimization of Mutual Information · Chain Model



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Davide Cucci¹
  • Luigi Malagò¹
  • Matteo Matteucci¹
  1. Department of Electronics and Information, Politecnico di Milano, Milano, Italy
