Parameter Selection and Preconditioning for a Graph Form Solver

Chapter in Emerging Applications of Control and Systems Theory

Abstract

In the chapter “Block splitting for distributed optimization”, Parikh and Boyd describe a method for solving a convex optimization problem in which each iteration involves evaluating a proximal operator and a projection onto a subspace. In this chapter, we address the critical practical issues of how to select the proximal parameter in each iteration and how to scale the original problem variables, so as to achieve reliable practical performance. The resulting method has been implemented as an open-source software package called POGS (Proximal Graph Solver), which targets multi-core and GPU-based systems and has been tested on a wide variety of practical problems. Numerical results show that POGS can solve very large problems (with, say, a billion coefficients in the data) to modest accuracy in a few tens of seconds, whereas similar problems take many hours using interior-point methods.
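
To make the structure of such an iteration concrete, the sketch below shows a plain graph-form ADMM loop of the kind the abstract describes: each pass evaluates the proximal operators of the two objective terms and then projects onto the graph subspace {(x, y) : y = Ax}. This is a minimal illustrative sketch in NumPy, not the POGS implementation or its API; the function name, the prox interface, and the dense Cholesky-based projection are assumptions chosen for brevity, and the proximal parameter rho is simply held fixed rather than selected adaptively as the chapter discusses.

    import numpy as np

    def graph_form_admm(A, prox_f, prox_g, rho=1.0, iters=100):
        # Illustrative sketch (not the POGS API): solve
        #   minimize f(y) + g(x)  subject to  y = A x,
        # given prox_f(v, rho) and prox_g(v, rho), the proximal operators
        # of f and g, i.e. argmin_u f(u) + (rho/2)*||u - v||^2.
        m, n = A.shape
        x, y = np.zeros(n), np.zeros(m)        # primal iterates
        xt, yt = np.zeros(n), np.zeros(m)      # scaled dual iterates

        # Projection onto the graph {(x, y) : y = A x} reduces to a
        # linear solve with I + A^T A; factor it once up front.
        L = np.linalg.cholesky(np.eye(n) + A.T @ A)

        for _ in range(iters):
            # Proximal step on each block.
            x_half = prox_g(x - xt, rho)
            y_half = prox_f(y - yt, rho)
            # Projection step onto the graph subspace.
            c, d = x_half + xt, y_half + yt
            x = np.linalg.solve(L.T, np.linalg.solve(L, c + A.T @ d))
            y = A @ x
            # Dual (running residual) update.
            xt += x_half - x
            yt += y_half - y
        return x, y

For example, a lasso problem, minimize (1/2)||Ax - b||^2 + lambda*||x||_1, fits this form with f(y) = (1/2)||y - b||^2 and g(x) = lambda*||x||_1; under the prox convention above, prox_f(v, rho) = (b + rho*v)/(1 + rho) and prox_g is soft-thresholding at lambda/rho.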

References

  1. Briceño-Arias, L.M., Combettes, P.L.: A monotone + skew splitting model for composite monotone inclusions in duality. SIAM J. Optim. 21(4), 1230–1250 (2011)
  2. Briceño-Arias, L.M., Combettes, P.L., Pesquet, J.C., Pustelnik, N.: Proximal algorithms for multicomponent image recovery problems. J. Math. Imaging Vis. 41(1–2), 3–22 (2011)
  3. Boyd, S., El Ghaoui, L., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in System and Control Theory, vol. 15. SIAM (1994)
  4. Boyd, S., Mueller, M., O’Donoghue, B., Wang, Y.: Performance bounds and suboptimal policies for multi-period investment. Found. Trends Optim. 1(1), 1–69 (2013)
  5. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
  6. Bradley, A.M.: Algorithms for the equilibration of matrices and their application to limited-memory quasi-Newton methods. Ph.D. thesis, Stanford University (2010)
  7. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, vol. 2. SIAM (2001)
  8. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press (2004)
  9. Chen, Y., Davis, T.A., Hager, W.W., Rajamanickam, S.: Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate. ACM Trans. Math. Softw. 35(3), 22 (2008)
  10. Boyd, S., Fougner, C.: Parameter selection and preconditioning for a graph form solver (2015). www.stanford.edu/~boyd/papers/pogs.html
  11. Coates, A., Huval, B., Wang, T., Wu, D., Catanzaro, B., Ng, A.Y.: Deep learning with COTS HPC systems. In: Proceedings of the 30th International Conference on Machine Learning, pp. 1337–1345 (2013)
  12. Calamai, P.H., Moré, J.J.: Projected gradient methods for linearly constrained problems. Math. Program. 39(1), 93–116 (1987)
  13. Chu, E., O’Donoghue, B., Parikh, N., Boyd, S.: A primal-dual operator splitting method for conic optimization (2013)
  14. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
  15. Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer (2011)
  16. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)
  17. Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 421–439 (1956)
  18. Davis, D., Yin, W.: Convergence rate analysis of several splitting schemes (2014). arXiv:1406.4834
  19. Eckstein, J., Bertsekas, D.P.: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)
  20. Eckstein, J., Svaiter, B.F.: A family of projective splitting methods for the sum of two maximal monotone operators. Math. Program. 111(1–2), 173–199 (2008)
  21. Giselsson, P., Boyd, S.: Diagonal scaling in Douglas-Rachford splitting and ADMM. In: 53rd IEEE Conference on Decision and Control (2014)
  22. Giselsson, P., Boyd, S.: Metric selection in Douglas-Rachford splitting and ADMM (2014). arXiv:1410.8479
  23. Giselsson, P., Boyd, S.: Preconditioning in fast dual gradient methods. In: 53rd IEEE Conference on Decision and Control (2014)
  24. Giselsson, P.: Tight linear convergence rate bounds for Douglas-Rachford splitting and ADMM (2015). arXiv:1503.00887
  25. Glowinski, R., Marroco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires. Math. Model. Numer. Anal. 9(R2), 41–76 (1975)
  26. Goldstein, T., O’Donoghue, B., Setzer, S., Baraniuk, R.: Fast alternating direction optimization methods. SIAM J. Imaging Sci. 7(3), 1588–1623 (2014)
  27. Ghadimi, E., Teixeira, A., Shames, I., Johansson, M.: Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems. IEEE Trans. Autom. Control 60, 644–658 (2013)
  28. Hestenes, M.R., Stiefel, E.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49(6), 409–436 (1952)
  29. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer (2009)
  30. He, B.S., Yang, H., Wang, S.L.: Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. J. Optim. Theory Appl. 106(2), 337–356 (2000)
  31. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 1097–1105 (2012)
  32. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)
  33. Ngiam, J., Coates, A., Lahiri, A., Prochnow, B., Le, Q.V., Ng, A.Y.: On optimization methods for deep learning. In: Proceedings of the 28th International Conference on Machine Learning, pp. 265–272 (2011)
  34. Nishihara, R., Lessard, L., Recht, B., Packard, A., Jordan, M.I.: A general analysis of the convergence of ADMM (2015). arXiv:1502.02009
  35. Nocedal, J., Wright, S.: Numerical Optimization, vol. 2. Springer (1999)
  36. O’Donoghue, B., Stathopoulos, G., Boyd, S.: A splitting method for optimal control. IEEE Trans. Control Syst. Technol. 21(6), 2432–2442 (2013)
  37. O’Connor, D., Vandenberghe, L.: Primal-dual decomposition by operator splitting and applications to image deblurring. SIAM J. Imaging Sci. 7(3), 1724–1754 (2014)
  38. Olafsson, A., Wright, S.: Efficient schemes for robust IMRT treatment planning. Phys. Med. Biol. 51(21), 5621–5642 (2006)
  39. Parikh, N., Boyd, S.: Block splitting for distributed optimization. Math. Program. Comput. 1–26 (2013)
  40. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1(3), 123–231 (2013)
  41. Pock, T., Chambolle, A.: Diagonal preconditioning for first order primal-dual algorithms in convex optimization. In: IEEE International Conference on Computer Vision, pp. 1762–1769 (2011)
  42. Polyak, B.: Introduction to Optimization. Optimization Software Inc., Publications Division, New York (1987)
  43. Pesquet, J.C., Pustelnik, N.: A parallel inertial proximal optimization method. Pac. J. Optim. 8(2), 273–305 (2012)
  44. Paige, C.C., Saunders, M.A.: Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal. 12(4), 617–629 (1975)
  45. Paige, C.C., Saunders, M.A.: LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 8(1), 43–71 (1982)
  46. Ruiz, D.: A scaling algorithm to equilibrate both rows and columns norms in matrices. Technical Report RAL-TR-2001-034, Rutherford Appleton Laboratory (2001)
  47. Shor, N.Z.: Nondifferentiable Optimization and Polynomial Problems. Kluwer Academic Publishers (1998)
  48. Sinkhorn, R., Knopp, P.: Concerning nonnegative matrices and doubly stochastic matrices. Pac. J. Math. 21(2), 343–348 (1967)
  49. Spingarn, J.E.: Applications of the method of partial inverses to convex programming: decomposition. Math. Program. 32(2), 199–223 (1985)
  50. Toh, K., Todd, M.J., Tütüncü, R.H.: SDPT3 - a MATLAB software package for semidefinite programming, version 1.3. Optim. Methods Softw. 11(1–4), 545–581 (1999)
  51. Vanderbei, R.J.: Symmetric quasidefinite matrices. SIAM J. Optim. 5(1), 100–113 (1995)
  52. Wang, H., Banerjee, A.: Bregman alternating direction method of multipliers. Adv. Neural Inf. Process. Syst. 2816–2824 (2014)

Author information

Correspondence to Stephen Boyd.


Copyright information

© 2018 Springer International Publishing AG

About this chapter

Cite this chapter

Fougner, C., Boyd, S. (2018). Parameter Selection and Preconditioning for a Graph Form Solver. In: Tempo, R., Yurkovich, S., Misra, P. (eds) Emerging Applications of Control and Systems Theory. Lecture Notes in Control and Information Sciences - Proceedings. Springer, Cham. https://doi.org/10.1007/978-3-319-67068-3_4

  • DOI: https://doi.org/10.1007/978-3-319-67068-3_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67067-6

  • Online ISBN: 978-3-319-67068-3

  • eBook Packages: Engineering (R0)
