Annals of Mathematics and Artificial Intelligence, Volume 61, Issue 2, pp 87–104

Discovering the suitability of optimisation algorithms by learning from evolved instances

Abstract

The suitability of an optimisation algorithm selected from within an algorithm portfolio depends upon the features of the particular instance to be solved. Understanding the relative strengths and weaknesses of the different algorithms in the portfolio is crucial for effective performance prediction and automated algorithm selection, and for generating knowledge about the ideal conditions under which each algorithm performs well, which in turn can inform better algorithm design. Relying on well-studied benchmark instances, or randomly generated instances, limits our ability to truly challenge each of the algorithms in a portfolio and determine these ideal conditions. Instead, we use an evolutionary algorithm to evolve instances that are uniquely easy or hard for each algorithm, providing a more direct method for studying their relative strengths and weaknesses. The proposed methodology ensures that the meta-data is sufficient for learning the features of the instances that uniquely characterise the ideal conditions for each algorithm. A case study is presented based on a comprehensive study of the performance of two heuristics on the Travelling Salesman Problem. The results show that both the search effort and the best-performing algorithm for a given instance can be predicted with high accuracy.
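
As an illustration of the approach described in the abstract, the following is a minimal sketch, not the paper's implementation, of evolving a TSP instance on which one heuristic performs markedly worse than another. It assumes two simple stand-in heuristics (nearest-neighbour construction and a crude randomised 2-opt local search) rather than the heuristics studied in the case study, and the function names, fitness definition, and mutation scheme are illustrative assumptions only.

    import random
    import math

    def tour_length(cities, tour):
        """Total length of a closed tour over the given city coordinates."""
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def nearest_neighbour(cities):
        """Simple constructive heuristic: always visit the closest unvisited city."""
        unvisited = set(range(1, len(cities)))
        tour = [0]
        while unvisited:
            nxt = min(unvisited, key=lambda j: math.dist(cities[tour[-1]], cities[j]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def two_opt(cities, tour, iters=150):
        """Crude randomised 2-opt local search, a stand-in for a stronger heuristic."""
        best = tour[:]
        for _ in range(iters):
            i, j = sorted(random.sample(range(len(best)), 2))
            cand = best[:i] + best[i:j][::-1] + best[j:]
            if tour_length(cities, cand) < tour_length(cities, best):
                best = cand
        return best

    def hardness_gap(cities):
        """Fitness of an instance: how much worse heuristic A (nearest neighbour)
        does than heuristic B (2-opt started from A's tour). Larger = harder for A."""
        tour_a = nearest_neighbour(cities)
        tour_b = two_opt(cities, tour_a)
        return tour_length(cities, tour_a) / tour_length(cities, tour_b)

    def mutate(cities, sigma=0.05):
        """Perturb a few city coordinates, keeping them inside the unit square."""
        new = [list(c) for c in cities]
        for _ in range(max(1, len(new) // 10)):
            k = random.randrange(len(new))
            new[k] = [min(1.0, max(0.0, new[k][d] + random.gauss(0, sigma))) for d in (0, 1)]
        return [tuple(c) for c in new]

    def evolve_instance(n_cities=30, pop_size=10, generations=30):
        """(mu + lambda)-style EA over instances: keep the instances on which the
        performance gap between the two heuristics is largest."""
        pop = [[(random.random(), random.random()) for _ in range(n_cities)]
               for _ in range(pop_size)]
        for _ in range(generations):
            children = [mutate(random.choice(pop)) for _ in range(pop_size)]
            pop = sorted(pop + children, key=hardness_gap, reverse=True)[:pop_size]
        return pop[0]

    if __name__ == "__main__":
        hard_for_nn = evolve_instance()
        print("evolved performance gap:", round(hardness_gap(hard_for_nn), 3))

In the paper's methodology the fitness would instead reward instances that are uniquely easy or hard for a target algorithm relative to the rest of the portfolio, and the evolved instances then supply the meta-data from which the characteristic instance features are learned.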

Keywords

Algorithm selection · Combinatorial optimization · Travelling salesman problem · Hardness prediction · Phase transition · Instance difficulty

Mathematics Subject Classifications (2010)

49-04 · 68Q25 · 68Q87 · 68T05 · 68T20 · 90B99

Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  1. School of Mathematical Sciences, Monash University, Victoria, Australia
  2. School of Informatics, University of Edinburgh, Edinburgh, UK