Machine Learning, Volume 107, Issue 8–10, pp 1495–1515

ML-Plan: Automated machine learning via hierarchical planning

  • Felix Mohr
  • Marcel Wever
  • Eyke Hüllermeier
Part of the following topical collections:
  1. Special Issue of the ECML PKDD 2018 Journal Track


Abstract

Automated machine learning (AutoML) seeks to automatically select, compose, and parametrize machine learning algorithms, so as to achieve optimal performance on a given task (dataset). Although current approaches to AutoML have already produced impressive results, the field is far from mature, and new techniques are still being developed. In this paper, we present ML-Plan, a new approach to AutoML based on hierarchical planning. To highlight the potential of this approach, we compare ML-Plan to the state-of-the-art frameworks Auto-WEKA, auto-sklearn, and TPOT. In an extensive series of experiments, we show that ML-Plan is highly competitive and often outperforms existing approaches.
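The framing of AutoML as hierarchical planning can be illustrated with a toy sketch. This is not ML-Plan itself, and all task names and decomposition methods below are hypothetical: an abstract pipeline task is recursively refined, HTN-style, into concrete components, and each complete refinement is one candidate pipeline that an AutoML system could then evaluate on the dataset.

```python
# Toy sketch of AutoML as hierarchical task-network (HTN) planning.
# Abstract tasks are refined by "methods" into subtasks until only
# concrete components remain; each complete refinement is a pipeline.
# All task and component names here are hypothetical placeholders.

# Methods: each abstract task maps to alternative decompositions.
METHODS = {
    "Pipeline":   [["Preprocess", "Classify"], ["Classify"]],
    "Preprocess": [["scale"], ["select_features"]],
    "Classify":   [["decision_tree"], ["knn"]],
}

def refine(tasks):
    """Recursively expand abstract tasks into all concrete pipelines."""
    if not tasks:
        return [[]]
    head, rest = tasks[0], tasks[1:]
    if head not in METHODS:                 # concrete component: keep it
        return [[head] + tail for tail in refine(rest)]
    plans = []
    for decomposition in METHODS[head]:     # try every alternative method
        plans += refine(list(decomposition) + rest)
    return plans

pipelines = refine(["Pipeline"])
# e.g. ["scale", "decision_tree"], ["select_features", "knn"], ["knn"], ...
```

A real system would not enumerate exhaustively as this sketch does; the search space induced by such decompositions (plus hyperparameters) is huge, which is why guided search over the refinement tree, as in ML-Plan, is needed.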


Keywords: Automated machine learning · Automated planning · Algorithm selection · Algorithm configuration · Heuristic search



Acknowledgements

This work was supported by the German Research Foundation (DFG) within the Collaborative Research Center “On-The-Fly Computing” (SFB 901).


References

  1. Björnsson, Y., & Finnsson, H. (2009). CadiaPlayer: A simulation-based general game player. IEEE Transactions on Computational Intelligence and AI in Games, 1(1), 4–15.
  2. Browne, C., Powley, E. J., Whitehouse, D., Lucas, S. M., Cowling, P. I., Rohlfshagen, P., et al. (2012). A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1), 1–43.
  3. de Sá, A. G., Pinto, W. J. G., Oliveira, L. O. V., & Pappa, G. L. (2017). RECIPE: A grammar-based framework for automatically evolving classification pipelines. In European Conference on Genetic Programming (pp. 246–261). Springer.
  4. Erol, K., Hendler, J. A., & Nau, D. S. (1994). UMCP: A sound and complete procedure for hierarchical task-network planning. In Proceedings of the Second International Conference on Artificial Intelligence Planning Systems, University of Chicago, Chicago, Illinois, USA, June 13–15, 1994 (pp. 249–254).
  5. Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., & Hutter, F. (2015). Efficient and robust automated machine learning. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, & R. Garnett (Eds.), Advances in neural information processing systems (pp. 2962–2970). Curran Associates, Inc.
  6. Ghallab, M., Nau, D. S., & Traverso, P. (2004). Automated planning: Theory and practice. New York: Elsevier.
  7. Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2011). Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization (LION 5) (pp. 507–523).
  8. Kietz, J., Serban, F., Bernstein, A., & Fischer, S. (2009). Towards cooperative planning of data mining workflows. In Proceedings of the Third Generation Data Mining Workshop at the 2009 European Conference on Machine Learning (pp. 1–12). Citeseer.
  9. Kietz, J. U., Serban, F., Bernstein, A., & Fischer, S. (2012). Designing KDD-workflows via HTN-planning for intelligent discovery assistance. In 5th Planning to Learn Workshop WS28 at ECAI 2012 (p. 10).
  10. Kocsis, L., Szepesvári, C., & Willemson, J. (2006). Improved Monte-Carlo search. Technical report 1, University of Tartu, Estonia.
  11. Komer, B., Bergstra, J., & Eliasmith, C. (2014). Hyperopt-sklearn: Automatic hyperparameter configuration for scikit-learn. In ICML Workshop on AutoML.
  12. Kotthoff, L., Thornton, C., Hoos, H. H., Hutter, F., & Leyton-Brown, K. (2017). Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA. The Journal of Machine Learning Research, 18(1), 826–830.
  13. Lloyd, J. R., Duvenaud, D. K., Grosse, R. B., Tenenbaum, J. B., & Ghahramani, Z. (2014). Automatic construction and natural-language description of nonparametric regression models. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec City, Québec, Canada (pp. 1242–1250).
  14. Mohr, F., Wever, M., Hüllermeier, E., & Faez, A. (2018). Towards the automated composition of machine learning services. In Proceedings of the IEEE International Conference on Services Computing (SCC).
  15. Nau, D. S., Au, T., Ilghami, O., Kuter, U., Murdock, J. W., Wu, D., et al. (2003). SHOP2: An HTN planning system. Journal of Artificial Intelligence Research (JAIR), 20, 379–404.
  16. Nguyen, P., Hilario, M., & Kalousis, A. (2014). Using meta-mining to support data mining workflow planning and optimization. Journal of Artificial Intelligence Research, 51, 605–644.
  17. Nguyen, P., Kalousis, A., & Hilario, M. (2011). A meta-mining infrastructure to support KD workflow optimization. In Proceedings of the PlanSoKD-11 Workshop at ECML/PKDD (pp. 1–10).
  18. Nguyen, P., Kalousis, A., & Hilario, M. (2012). Experimental evaluation of the e-LICO meta-miner. In 5th Planning to Learn Workshop WS28 at ECAI (pp. 18–19).
  19. Olson, R. S., & Moore, J. H. (2016). TPOT: A tree-based pipeline optimization tool for automating machine learning. In Workshop on Automatic Machine Learning (pp. 66–74).
  20. Schadd, M. P. D., Winands, M. H. M., van den Herik, H. J., Chaslot, G. M. J. B., & Uiterwijk, J. W. H. M. (2008). Single-player Monte-Carlo tree search. In H. J. van den Herik, X. Xu, Z. Ma, & M. H. M. Winands (Eds.), Computers and games. Berlin: Springer.
  21. Thornton, C., Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2013, Chicago, IL, USA (pp. 847–855).
  22. Vanschoren, J., van Rijn, J. N., Bischl, B., & Torgo, L. (2013). OpenML: Networked science in machine learning. SIGKDD Explorations, 15(2), 49–60.

Copyright information

© The Author(s) 2018

Authors and Affiliations

  1. Paderborn University, Paderborn, Germany
