
Computational Optimization and Applications, Volume 74, Issue 3, pp. 627–643

Solution refinement at regular points of conic problems

  • Enzo Busseti
  • Walaa M. Moursi
  • Stephen Boyd

Abstract

Many numerical methods for conic problems use the homogeneous primal–dual embedding, which yields a primal–dual solution or a certificate establishing primal or dual infeasibility. Following Themelis and Patrinos (IEEE Trans Autom Control, 2019), we express the embedding as the problem of finding a zero of a mapping containing a skew-symmetric linear function and projections onto cones and their duals. We focus on the special case in which this mapping is regular, i.e., differentiable with nonsingular derivative matrix, at a solution point. While this is not always the case, it occurs very commonly in practice. In this paper we do not aim for new theoretical results; rather, we propose a simple method that uses LSQR, a variant of conjugate gradients for least squares problems, together with the derivative of the residual mapping, to refine an approximate solution, i.e., to increase its accuracy. LSQR is a matrix-free method: it requires only the evaluation of the derivative mapping and its adjoint, and so avoids forming or storing large matrices. This makes it efficient even for cone problems in which the data matrices are given and dense, and also allows the method to extend to cone programs in which the data are given as abstract linear operators. Numerical examples show that the method improves an approximate solution of a conic program, often dramatically, at a computational cost that is typically small compared to the cost of obtaining the original approximate solution. For completeness we describe methods for computing the derivative of the projection onto the cones commonly used in practice: nonnegative, second-order, semidefinite, and exponential cones. The paper is accompanied by an open source implementation.
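The refinement idea in the abstract can be sketched with SciPy's matrix-free LSQR: each step solves the least-squares subproblem minimize ||R(z) + DR(z) dz|| using only matrix-vector products with DR(z) and its adjoint, then updates z. The residual map below is a toy stand-in with a nonsingular derivative at its zero (a "regular" point), not the paper's homogeneous embedding residual; the names `refine`, `jmv`, and `jrmv` are illustrative, not from the paper's implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def refine(z, residual, jac_mv, jac_rmv, n_steps=3):
    """Gauss-Newton refinement: at each step, solve
    minimize ||R(z) + DR(z) dz||_2 with matrix-free LSQR
    (only products with DR(z) and its adjoint are needed),
    then update z <- z + dz."""
    for _ in range(n_steps):
        r = residual(z)
        D = LinearOperator((r.size, z.size), dtype=float,
                           matvec=lambda dz: jac_mv(z, dz),    # DR(z) @ dz
                           rmatvec=lambda w: jac_rmv(z, w))    # DR(z).T @ w
        dz = lsqr(D, -r, atol=1e-14, btol=1e-14)[0]
        z = z + dz
    return z

# Toy residual map with a nonsingular derivative at its zero;
# NOT the paper's embedding residual, just a self-contained stand-in.
M = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([1.0, 2.0])
R = lambda z: M @ z + 0.1 * z**3 - b           # residual map R(z)
jmv = lambda z, dz: M @ dz + 0.3 * z**2 * dz   # DR(z) @ dz
jrmv = lambda z, w: M.T @ w + 0.3 * z**2 * w   # DR(z).T @ w

z0 = np.zeros(2)                 # crude "approximate solution"
z_ref = refine(z0, R, jmv, jrmv)
print(np.linalg.norm(R(z0)), np.linalg.norm(R(z_ref)))
```

Because LSQR touches the derivative only through `matvec`/`rmatvec`, the same sketch applies when the derivative is available only as an abstract linear operator, which is the regime the abstract highlights.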

Keywords

Conic programming · Homogeneous self-dual embedding · Projection operator · Residual map

Acknowledgements

The authors thank Yinyu Ye, Michael Saunders, Nicholas Moehle, and Steven Diamond for useful discussions.

References

  1. Ali, A., Wong, E., Kolter, J.: A semismooth Newton method for fast, generic convex programming. In: Proceedings of the 34th International Conference on Machine Learning, pp. 272–279 (2017)
  2. Boyd, S., Busseti, E., Diamond, S., Kahn, R., Koh, K., Nystrup, P., Speth, J.: Multi-period trading via convex optimization. Found. Trends Optim. 3(1), 1–76 (2017)
  3. Bauschke, H., Combettes, P.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, Berlin (2017)
  4. Busseti, E., Ryu, E., Boyd, S.: Risk-constrained Kelly gambling. J. Invest. 25(3), 118–134 (2016)
  5. Browder, F.: Convergence theorems for sequences of nonlinear operators in Banach spaces. Math. Z. 100(3), 201–225 (1967)
  6. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization. SIAM, Philadelphia (2001)
  7. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
  8. Boyd, S., Vandenberghe, L.: Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares. Cambridge University Press, Cambridge (2018)
  9. Chen, X., Qi, H.D., Tseng, P.: Analysis of nonsmooth symmetric-matrix-valued functions with applications to semidefinite complementarity problems. SIAM J. Optim. 13(4), 960–985 (2003)
  10. Diamond, S., Boyd, S.: CVXPY: a Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. 16(83), 1–5 (2016)
  11. Domahidi, A., Chu, E., Boyd, S.: ECOS: an SOCP solver for embedded systems. In: 2013 European Control Conference, pp. 3071–3076. IEEE (2013)
  12. Evans, L., Gariepy, R.: Measure Theory and Fine Properties of Functions. CRC Press, Boca Raton (1992)
  13. El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18(4), 1035–1064 (1997)
  14. Fu, A., Narasimhan, B., Boyd, S.: CVXR: an R package for disciplined convex optimization. J. Stat. Softw. (2019) (to appear)
  15. Grant, M., Boyd, S.: Graph implementations for nonsmooth convex programs. In: Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences, pp. 95–110. Springer (2008)
  16. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx (2014)
  17. Gardiner, J., Laub, A., Amato, J., Moler, C.: Solution of the Sylvester matrix equation \(AXB^{T}+CXD^{T}=E\). ACM Trans. Math. Softw. 18(2), 223–231 (1992)
  18. Jiang, H.: Global convergence analysis of the generalized Newton and Gauss–Newton methods of the Fischer–Burmeister equation for the complementarity problem. Math. Oper. Res. 24(3), 529–543 (1999)
  19. Jones, E., Oliphant, T., Peterson, P., et al.: SciPy: open source scientific tools for Python. http://www.scipy.org/ (2001). Accessed 4 Mar 2019
  20. Kanzow, C., Ferenczi, I., Fukushima, M.: On the local convergence of semismooth Newton methods for linear and nonlinear second-order cone programs without strict complementarity. SIAM J. Optim. 20(1), 297–320 (2009)
  21. Löfberg, J.: YALMIP: a toolbox for modeling and optimization in MATLAB. In: Proceedings of the IEEE International Symposium on Computer Aided Control Systems Design, pp. 284–289 (2004)
  22. Lasdon, L., Mitter, S., Waren, A.: The conjugate gradient method for optimal control problems. IEEE Trans. Autom. Control 12(2), 132–138 (1967)
  23. Moreau, J.-J.: Décomposition orthogonale d’un espace hilbertien selon deux cônes mutuellement polaires. Bulletin de la Société Mathématique de France 93, 273–299 (1965)
  24. MOSEK ApS: The MOSEK optimization toolbox for MATLAB manual, version 8.0 (revision 57) (2017)
  25. Malick, J., Sendov, H.: Clarke generalized Jacobian of the projection onto the cone of positive semidefinite matrices. Set-Valued Anal. 14(3), 273–293 (2006)
  26. Nash, S.: A survey of truncated-Newton methods. J. Comput. Appl. Math. 124(1–2), 45–59 (2000)
  27. Nocedal, J., Wright, S.: Numerical Optimization. Springer Series in Operations Research and Financial Engineering, 2nd edn. Springer, Berlin (2006)
  28. Numba Development Team: Numba. http://numba.pydata.org (2015). Accessed 4 Mar 2019
  29. O’Donoghue, B., Chu, E., Parikh, N., Boyd, S.: Conic optimization via operator splitting and homogeneous self-dual embedding. J. Optim. Theory Appl. 169(3), 1042–1068 (2016)
  30. Oliphant, T.: A Guide to NumPy, vol. 1. Trelgol Publishing, Spanish Fork (2006)
  31. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1(3), 123–231 (2014)
  32. Permenter, F., Friberg, H.A., Andersen, E.D.: Solving conic optimization problems via self-dual embedding and facial reduction: a unified approach. SIAM J. Optim. 27(3), 1257–1282 (2017)
  33. Paige, C., Saunders, M.: LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 8(1), 43–71 (1982)
  34. Qi, L., Sun, J.: A nonsmooth version of Newton’s method. Math. Program. 58(3, Ser. A), 353–367 (1993)
  35. Qi, L., Sun, D.: A survey of some nonsmooth equations and smoothing Newton methods. In: Progress in Optimization, Applied Optimization, vol. 30, pp. 121–146. Kluwer (1999)
  36. Rockafellar, R.: Convex Analysis. Princeton University Press, Princeton (1970)
  37. Rockafellar, R., Wets, R.: Variational Analysis. Springer, Berlin (1998)
  38. Stellato, B., Banjac, G., Goulart, P., Bemporad, A., Boyd, S.: OSQP: an operator splitting solver for quadratic programs. ArXiv e-prints (2017)
  39. SCS: splitting conic solver, version 1.1.0. https://github.com/cvxgrp/scs (2015)
  40. Sun, D., Sun, J.: Löwner’s operator and spectral functions in Euclidean Jordan algebras. Math. Oper. Res. 33(2), 421–445 (2008)
  41. Sturm, J.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11(1–4), 625–653 (1999)
  42. Sylvester, J.: Sur l’équation linéaire trinôme en matrices d’un ordre quelconque. Comptes Rendus de l’Académie des Sciences 99, 527–529 (1884)
  43. Taylor, J.: Convex Optimization of Power Systems. Cambridge University Press, Cambridge (2015)
  44. Themelis, A., Patrinos, P.: SuperMann: a superlinearly convergent algorithm for finding fixed points of nonexpansive operators. IEEE Trans. Autom. Control (2019). https://doi.org/10.1109/TAC.2019.2906393
  45. Udell, M., Mohan, K., Zeng, D., Hong, J., Diamond, S., Boyd, S.: Convex optimization in Julia. In: SC14 Workshop on High Performance Technical Computing in Dynamic Languages (2014)
  46. Wright, S., Holt, J.: An inexact Levenberg–Marquardt method for large sparse nonlinear least squares. ANZIAM J. 26(4), 387–403 (1985)
  47. Ye, Y., Todd, M., Mizuno, S.: An \({O}(\sqrt{n}{L})\)-iteration homogeneous and self-dual linear programming algorithm. Math. Oper. Res. 19(1), 53–67 (1994)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Electrical Engineering, Stanford University, Stanford, USA
  2. Mathematics Department, Faculty of Science, Mansoura University, Mansoura, Egypt
