Impact of Censored Sampling on the Performance of Restart Strategies

  • Matteo Gagliolo
  • Jürgen Schmidhuber
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4204)

Abstract

Algorithm selection, algorithm portfolios, and randomized restarts can all profit from a probabilistic model of algorithm runtime, estimated from data gathered by solving a set of training problems. Censored sampling offers a principled way of reducing this initial training time. We study the trade-off between training time and model precision by varying the censoring threshold and analyzing the resulting impact on the performance of an optimal restart strategy based on an estimated model of the runtime distribution.
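
A minimal sketch of the two building blocks this abstract refers to, in Python (not the authors' code; `km_survival` and `best_cutoff` are illustrative names): estimating the runtime distribution from censored training runs with the Kaplan-Meier estimator, and then choosing the fixed restart cutoff that minimizes the expected time to a solution, in the spirit of Luby, Sinclair, and Zuckerman's analysis of uniform restart strategies.

```python
import numpy as np

def km_survival(times, solved):
    """Kaplan-Meier survival estimate of the runtime distribution.

    times[i]  : observed runtime of training run i
    solved[i] : False if run i was censored (stopped at the training threshold)
    Ties between solved and censored runs are handled naively (sketch only).
    """
    times = np.asarray(times, dtype=float)
    solved = np.asarray(solved, dtype=bool)
    order = np.argsort(times)
    times, solved = times[order], solved[order]
    n_at_risk = len(times)
    grid, surv, s = [], [], 1.0
    for t, d in zip(times, solved):
        if d:                          # an uncensored (solved) run
            s *= 1.0 - 1.0 / n_at_risk
        n_at_risk -= 1                 # censored runs only shrink the risk set
        grid.append(t)
        surv.append(s)
    return np.array(grid), np.array(surv)

def best_cutoff(grid, surv):
    """Fixed restart cutoff T minimizing E[min(X, T)] / F(T), the expected
    total time of a strategy that restarts the solver whenever T is exceeded."""
    F = 1.0 - surv
    # E[min(X, T)] = integral of the survival function S up to T
    dt = np.diff(np.concatenate(([0.0], grid)))
    S_left = np.concatenate(([1.0], surv[:-1]))
    e_min = np.cumsum(S_left * dt)
    with np.errstate(divide="ignore", invalid="ignore"):
        expected = np.where(F > 0, e_min / F, np.inf)
    i = int(np.argmin(expected))
    return grid[i], expected[i]
```

With runs censored at a threshold c, one would call `km_survival(runtimes, runtimes < c)`; tightening c shortens training but coarsens the estimated distribution, which is exactly the trade-off studied here.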

We present experiments with a SAT solver on a graph-coloring benchmark. Due to the “heavy-tailed” runtime distribution, even a modest amount of censoring already reduces training time by a few orders of magnitude. The nature of the optimization process underlying the restart strategy makes its performance surprisingly robust, even to more aggressive censoring.
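
To see why heavy tails make censoring so effective, the following toy simulation (entirely illustrative; the Pareto tail index and thresholds are arbitrary assumptions, not values from the paper) compares the total training cost of running every instance to completion against stopping each run at a threshold c:

```python
# Toy illustration (not the paper's experiment): with heavy-tailed runtimes,
# the total training cost is dominated by a few very long runs, so stopping
# each run at a modest threshold c removes much of the cost while still
# leaving a large fraction of the runs uncensored.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic heavy-tailed runtimes (Lomax/Pareto-like, arbitrary tail index 1.1)
runtimes = rng.pareto(1.1, size=1000) + 1.0

full_cost = runtimes.sum()  # train by running every instance to completion
for c in (10.0, 100.0, 1000.0):
    censored_cost = np.minimum(runtimes, c).sum()  # each run stopped at c
    solved_frac = (runtimes <= c).mean()           # runs that finish before c
    print(f"c = {c:6.0f}: cost fraction {censored_cost / full_cost:.3f}, "
          f"uncensored runs {solved_frac:.2f}")
```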

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Matteo Gagliolo (1, 2)
  • Jürgen Schmidhuber (1, 3)
  1. IDSIA, Manno (Lugano), Switzerland
  2. Faculty of Informatics, University of Lugano, Lugano, Switzerland
  3. TU Munich, Garching (München), Germany
