
CAVE: Configuration Assessment, Visualization and Evaluation

  • André Biedenkapp
  • Joshua Marben
  • Marius Lindauer
  • Frank Hutter
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11353)

Abstract

To achieve peak performance of an algorithm (in particular for problems in AI), algorithm configuration is often necessary to determine a well-performing parameter configuration. So far, most studies in algorithm configuration have focused on proposing better configuration procedures or on improving a particular algorithm’s performance. In contrast, we use all the empirical performance data collected during algorithm configuration runs to generate extensive insights into an algorithm, the given problem instances, and the configurator used. To this end, we provide a tool, called CAVE, that automatically generates comprehensive reports and insightful figures from all available empirical data. CAVE aims to help algorithm and configurator developers better understand their experimental setup in an automated fashion. We showcase its use by thoroughly analyzing the well-studied SAT solver spear on a benchmark of software verification instances and by empirically verifying two long-standing assumptions in algorithm configuration and parameter importance: (i) parameter importance changes depending on the instance set at hand, and (ii) local and global parameter importance analyses do not necessarily agree with each other.
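As a rough illustration of the kind of analysis described here, the following minimal Python sketch estimates a coarse global parameter-importance signal from collected configurator run data using a random-forest surrogate. This is not CAVE's implementation; the data layout (a DataFrame with one column per parameter plus a numeric "cost" column) and the file name configurator_runhistory.csv are assumptions made only for this example.

    # Illustrative sketch: coarse global parameter importance from
    # (configuration, performance) pairs via a random-forest surrogate.
    # Not CAVE's actual implementation.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    def global_parameter_importance(runs: pd.DataFrame) -> pd.Series:
        # One column per parameter, plus a numeric "cost" column (e.g. PAR10).
        X = runs.drop(columns=["cost"])
        # One-hot encode categorical parameters; fill values of inactive
        # conditional parameters with a sentinel so the forest can handle them.
        X = pd.get_dummies(X).fillna(-1)
        y = runs["cost"]
        forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
        importance = pd.Series(forest.feature_importances_, index=X.columns)
        return importance.sort_values(ascending=False)

    # Hypothetical usage:
    # runs = pd.read_csv("configurator_runhistory.csv")
    # print(global_parameter_importance(runs).head(10))

Local analyses (e.g. ablation between the default and the optimized configuration) can rank parameters differently than such a global surrogate-based view, which is exactly the discrepancy the paper examines.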


Acknowledgments

The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no. INST 39/963-1 FUGG and the Emmy Noether grant HU 1900/2-1.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • André Biedenkapp (1)
  • Joshua Marben (1)
  • Marius Lindauer (1)
  • Frank Hutter (1)
  1. University of Freiburg, Freiburg, Germany
