Deep Learning as a Competitive Feature-Free Approach for Automated Algorithm Selection on the Traveling Salesperson Problem

  • Moritz Seiler
  • Janina Pohl
  • Jakob Bossek
  • Pascal Kerschke
  • Heike Trautmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12269)


In this work, we focus on the well-known Euclidean Traveling Salesperson Problem (TSP) and two highly competitive inexact heuristic TSP solvers, EAX and LKH, in the context of per-instance algorithm selection (AS). We evolve instances with 1,000 nodes on which the two solvers show strongly different performance profiles. These instances serve as the basis for an exploratory study on the identification of well-discriminating problem characteristics (features). Our results in a nutshell: even though (1) promising features exist, (2) these are in line with previous results from the literature, and (3) models trained with these features are more accurate than models adopting sophisticated feature selection methods, the resulting selectors do not come close to the virtual best solver in terms of penalized average runtime, and the performance gain over the single best solver is likewise limited. However, we show that a feature-free approach based on deep neural networks, operating solely on a visual representation of the instances, already matches the results of classical AS models and thus shows huge potential for future studies.
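To make the baselines in the abstract concrete, the following sketch (not from the paper; solver names, runtimes, and the cutoff are hypothetical) illustrates how penalized average runtime (PAR10) and the two standard reference points of algorithm selection, the single best solver (SBS) and the virtual best solver (VBS), are typically computed from a per-instance runtime matrix:

```python
CUTOFF = 3600.0  # assumed time limit in seconds; timeouts are penalized with 10x the cutoff

def par10(runtimes, cutoff=CUTOFF):
    """Mean runtime with unsolved instances counted as 10 * cutoff (PAR10)."""
    return sum(t if t < cutoff else 10 * cutoff for t in runtimes) / len(runtimes)

# Hypothetical runtimes of two solvers on four instances (3600.0 = timeout).
runs = {
    "EAX": [12.0, 3600.0, 40.0, 5.0],
    "LKH": [300.0, 20.0, 3600.0, 8.0],
}

# Single best solver (SBS): the one solver with the best PAR10 across all instances.
sbs = min(par10(r) for r in runs.values())

# Virtual best solver (VBS): an oracle picking the per-instance best solver.
per_instance_best = [min(col) for col in zip(*runs.values())]
vbs = par10(per_instance_best)

print(f"SBS PAR10: {sbs:.2f}, VBS PAR10: {vbs:.2f}")
```

The large gap between SBS and VBS in such toy data is exactly the potential that a per-instance selector tries to capture; the abstract's point is that feature-based selectors close only a small part of this gap.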


Keywords: Automated algorithm selection · Traveling Salesperson Problem · Feature-based approaches · Deep learning



The authors acknowledge support by the European Research Center for Information Systems (ERCIS).



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Moritz Seiler (1)
  • Janina Pohl (1)
  • Jakob Bossek (2)
  • Pascal Kerschke (1)
  • Heike Trautmann (1)
  1. Statistics and Optimization Group, University of Münster, Münster, Germany
  2. Optimisation and Logistics, The University of Adelaide, Adelaide, Australia
