
Optimization by ℓ1-Constrained Markov Fitness Modelling

  • Conference paper

Learning and Intelligent Optimization (LION 2012)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7219)


Abstract

When the function to be optimized is characterized by a limited and unknown number of interactions among variables, a context that applies to many real-world scenarios, it is possible to design optimization algorithms that exploit such information. Estimation of Distribution Algorithms learn a set of interactions from a sample of points and encode them in a probabilistic model, which is then used to sample new instances. In this paper, we propose a novel approach to estimating the Markov Fitness Model used in DEUM. We combine model selection and model fitting by solving an ℓ1-constrained linear regression problem. Since the number of candidate interactions grows exponentially with the size of the problem, we first reduce this set with a preliminary coarse selection criterion based on Mutual Information. We then employ ℓ1-regularization to further enforce sparsity in the model while estimating its parameters at the same time. Our proposal is evaluated on the 3D Ising Spin Glass function, a problem known to be NP-hard, and it outperforms other popular black-box meta-heuristics.
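To make the two-stage estimation concrete, the following is a minimal sketch, not the authors' implementation: it assumes spin configurations encoded in {-1, +1}, uses scikit-learn's `mutual_info_score` for the coarse Mutual Information screening and `Lasso` for the ℓ1-constrained regression, and the function name `screen_and_fit` and the parameters `keep_fraction` and `alpha` are illustrative choices rather than values taken from the paper.

```python
# Illustrative two-stage sketch: screen candidate pairwise interactions by
# mutual information, then fit an l1-regularized linear model of the fitness
# on the surviving interaction terms. Names and parameters are hypothetical.
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score
from sklearn.linear_model import Lasso

def screen_and_fit(X, f, keep_fraction=0.1, alpha=0.01):
    """X: (n_samples, n_vars) spins in {-1, +1}; f: fitness of each sample."""
    n_samples, n_vars = X.shape
    pairs = list(combinations(range(n_vars), 2))
    # Coarse selection: empirical mutual information between each pair of
    # variables, computed from the sample of evaluated points.
    mi = np.array([mutual_info_score(X[:, i], X[:, j]) for i, j in pairs])
    n_keep = max(1, int(keep_fraction * len(pairs)))
    selected = [pairs[k] for k in np.argsort(mi)[-n_keep:]]
    # Design matrix: univariate terms plus the surviving pairwise interactions.
    Z = np.hstack([X] + [(X[:, i] * X[:, j]).reshape(-1, 1) for i, j in selected])
    # l1-regularized regression selects a sparse subset of interactions and
    # estimates their coefficients in the same step (here on the raw fitness;
    # a DEUM-style model would typically regress a log-transformed fitness).
    model = Lasso(alpha=alpha).fit(Z, f)
    return model, selected
```

In an EDA loop, the fitted coefficients would then parameterize the probabilistic model from which new candidate solutions are sampled (e.g. by Gibbs sampling); that sampling step is outside the scope of this sketch.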






Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Valentini, G., Malagò, L., Matteucci, M. (2012). Optimization by ℓ1-Constrained Markov Fitness Modelling. In: Hamadi, Y., Schoenauer, M. (eds) Learning and Intelligent Optimization. LION 2012. Lecture Notes in Computer Science, vol 7219. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34413-8_18


  • DOI: https://doi.org/10.1007/978-3-642-34413-8_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34412-1

  • Online ISBN: 978-3-642-34413-8

  • eBook Packages: Computer Science, Computer Science (R0)
