
Restarting Algorithms: Sometimes There Is Free Lunch

  • Conference paper
  • First Online:
Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2020)

Abstract

In this overview article we consider the deliberate restarting of algorithms, a meta-technique for improving an algorithm’s performance, e.g., its convergence rate or approximation guarantee. A major advantage is that restarts are relatively black box: they require no (significant) changes to the base algorithm that is restarted or to the underlying argument, while leading to potentially significant improvements, e.g., from sublinear to linear rates of convergence. Restarts are widely used across different fields and have become a powerful tool for leveraging additional information that has not been directly incorporated into the base algorithm or argument. We review restarts in various settings from continuous optimization, discrete optimization, and submodular function maximization, where they have delivered impressive results.
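To make the restart meta-technique concrete, the following is a minimal sketch in Python (illustrative only, not an algorithm from this paper): a Nesterov-style accelerated gradient method whose momentum is reset every fixed number of iterations, leaving the base update untouched. The quadratic test problem, the function names, and the fixed restart schedule are assumptions made purely for illustration.

import numpy as np

def accelerated_gd(grad, x0, step, iters, restart_every=None):
    """Nesterov-style accelerated gradient descent with optional fixed restarts."""
    x = x0.copy()   # current iterate
    y = x0.copy()   # extrapolated point at which the gradient is evaluated
    t = 1.0         # momentum parameter
    for k in range(1, iters + 1):
        x_next = y - step * grad(y)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
        if restart_every is not None and k % restart_every == 0:
            # Restart: keep the current iterate, discard the momentum state.
            y, t = x.copy(), 1.0
    return x

if __name__ == "__main__":
    # Toy strongly convex quadratic f(x) = 0.5 * x^T Q x with minimizer 0 (assumed test problem).
    rng = np.random.default_rng(0)
    M = rng.standard_normal((30, 30))
    Q = M.T @ M + np.eye(30)
    grad = lambda x: Q @ x
    step = 1.0 / np.linalg.norm(Q, 2)   # 1/L step size
    x0 = rng.standard_normal(30)
    plain = accelerated_gd(grad, x0, step, iters=300)
    with_restarts = accelerated_gd(grad, x0, step, iters=300, restart_every=50)
    print("||x|| without restarts:", np.linalg.norm(plain))
    print("||x|| with restarts   :", np.linalg.norm(with_restarts))

Adaptive variants trigger the reset based on observed progress (e.g., the function-value restart of O’Donoghue and Candès) rather than a fixed schedule, but the black-box character is the same: the base algorithm is rerun from its current iterate without modifying its update rule.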



Acknowledgement

We would like to thank Gábor Braun and Marc Pfetsch for helpful comments and feedback on an earlier version of this article.

Author information


Correspondence to Sebastian Pokutta.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Pokutta, S. (2020). Restarting Algorithms: Sometimes There Is Free Lunch. In: Hebrard, E., Musliu, N. (eds.) Integration of Constraint Programming, Artificial Intelligence, and Operations Research. CPAIOR 2020. Lecture Notes in Computer Science, vol. 12296. Springer, Cham. https://doi.org/10.1007/978-3-030-58942-4_2


  • DOI: https://doi.org/10.1007/978-3-030-58942-4_2

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58941-7

  • Online ISBN: 978-3-030-58942-4

  • eBook Packages: Computer Science, Computer Science (R0)
