Identifying Key Algorithm Parameters and Instance Features Using Forward Selection

  • Frank Hutter
  • Holger H. Hoos
  • Kevin Leyton-Brown
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7997)


Most state-of-the-art algorithms for large-scale optimization problems expose free parameters, giving rise to combinatorial spaces of possible configurations. Typically, these spaces are hard for humans to understand. In this work, we study a model-based approach for identifying a small set of both algorithm parameters and instance features that suffices for predicting empirical algorithm performance well. Our empirical analyses on a wide variety of hard combinatorial problem benchmarks (spanning SAT, MIP, and TSP) show that—for parameter configurations sampled uniformly at random—very good performance predictions can typically be obtained based on just two key parameters, and that similarly, few instance features and algorithm parameters suffice to predict the most salient algorithm performance characteristics in the combined configuration/feature space. We also use these models to identify settings of these key parameters that are predicted to achieve the best overall performance, both on average across instances and in an instance-specific way. This serves as a further way of evaluating model quality and also provides a tool for further understanding the parameter space. We provide software for carrying out this analysis on arbitrary problem domains and hope that it will help algorithm developers gain insights into the key parameters of their algorithms, the key features of their instances, and their interactions.
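The core idea described above, greedy forward selection of the few algorithm parameters or instance features that best predict performance under a random-forest model, can be sketched as follows. This is an illustrative toy reconstruction, not the authors' implementation: the synthetic data, column count, and scikit-learn model choice are all assumptions made for the example.

```python
# Sketch of forward selection for a performance model: greedily add the
# input (parameter or feature) whose inclusion most reduces validation RMSE
# of a random-forest regressor. Synthetic data: 6 inputs, only 2 matter.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = rng.uniform(size=(n, 6))
# "Performance" depends only on columns 0 and 1, plus small noise.
y = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def rmse(cols):
    """Validation RMSE of a random forest trained on the given columns."""
    m = RandomForestRegressor(n_estimators=50, random_state=0)
    m.fit(X_tr[:, cols], y_tr)
    return float(np.sqrt(np.mean((m.predict(X_va[:, cols]) - y_va) ** 2)))

selected, remaining = [], list(range(X.shape[1]))
for _ in range(2):  # pick the two most predictive inputs, as in the paper
    best = min(remaining, key=lambda j: rmse(selected + [j]))
    selected.append(best)
    remaining.remove(best)

print(selected)  # expected to recover the two informative columns, 0 and 1
```

On real data, `X` would hold sampled parameter configurations concatenated with instance features, and `y` the measured runtimes; the loop then surfaces the handful of inputs that suffice for accurate prediction.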


Keywords: root mean square error, random forest, problem instance, configuration space, traveling salesman problem



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Frank Hutter (1)
  • Holger H. Hoos (1)
  • Kevin Leyton-Brown (1)
  1. University of British Columbia, Vancouver, BC, Canada
