Abstract
First-order primal-dual methods are appealing for their low memory overhead, fast iterations, and effective parallelization. However, they are often slow at finding high accuracy solutions, which creates a barrier to their use in traditional linear programming (LP) applications. This paper exploits the sharpness of primal-dual formulations of LP instances to achieve linear convergence using restarts in a general setting that applies to alternating direction method of multipliers (ADMM), primal-dual hybrid gradient method (PDHG) and extragradient method (EGM). In the special case of PDHG, without restarts we show an iteration count lower bound of \(\Omega (\kappa ^2 \log (1/\epsilon ))\), while with restarts we show an iteration count upper bound of \(O(\kappa \log (1/\epsilon ))\), where \(\kappa \) is a condition number and \(\epsilon \) is the desired accuracy. Moreover, the upper bound is optimal for a wide class of primal-dual methods, and applies to the strictly more general class of sharp primal-dual problems. We develop an adaptive restart scheme and verify that restarts significantly improve the ability of PDHG, EGM, and ADMM to find high accuracy solutions to LP problems.
Notes
A robust algorithm for LP would need to detect violations of this assumption, i.e., when the problem is infeasible or unbounded. We refer readers to [6] to understand the behavior of primal-dual methods when applied to infeasible or unbounded LP problems.
The RHS of (4b) is well-defined because lim sup always exists [76, Section 5.3]. Later we show (Proposition 5) that \(\rho _r(z)\) is monotonically non-increasing in \(r \in (0,\infty )\) for fixed z which means that \(\rho _{0}(z) = \limsup _{r \rightarrow 0^{+}} \rho _r(z) = \lim _{r \rightarrow 0^{+}} \rho _{r}(z) < \infty \).
PDHG is often presented in a form with different primal and dual step sizes [14, 15]. Here, we use the same primal and dual step size for notational consistency with the other primal-dual algorithms; our results extend easily to different step sizes by rescaling. In particular, set \(\eta = \sqrt{\sigma \tau }\), where \(\tau \in (0,\infty )\) is the desired primal step size and \(\sigma \in (0,\infty )\) is the desired dual step size, and define a rescaled space \(( \hat{x}, \hat{y}) = ( x \sqrt{\eta / \tau }, y \sqrt{\eta / \sigma })\). Applying (9) in this rescaled space, i.e., replacing f(x) with \(\hat{f}(\hat{x}) = f(\hat{x} / \sqrt{\eta / \tau })\), g(y) with \(\hat{g}(\hat{y}) = g(\hat{y} / \sqrt{\eta / \sigma })\), X with \(\hat{X} = \{ x \sqrt{\eta / \tau }: x \in X \}\), Y with \(\hat{Y} = \{ y \sqrt{\eta / \sigma }: y \in Y \}\), \(\Vert x - x^t \Vert \) with \(\Vert \hat{x} - \hat{x}^t \Vert \), and \(\Vert y - y^t \Vert \) with \(\Vert \hat{y} - \hat{y}^t \Vert \), then substituting back \((\hat{x},\hat{y}) = ( x \sqrt{\eta / \tau }, y \sqrt{\eta / \sigma })\) and \((\hat{x}^{t},\hat{y}^{t}) = ( x^t \sqrt{\eta / \tau }, y^t \sqrt{\eta / \sigma })\) yields the classic PDHG update:
$$\begin{aligned} x^{t+1}&\in {\mathop {\textrm{argmin}}\limits _{x\in X}} f(x)+(y^t)^{\top } Ax+\frac{1}{2 \tau } \Vert x-x^t\Vert _2^2 \\ y^{t+1}&\in {\mathop {\textrm{argmin}}\limits _{y\in Y}} g(y)-y^{\top } A(2 x^{t+1}-x^t) +\frac{1}{2 \sigma } \Vert y-y^t\Vert _2^2 \ . \end{aligned}$$

A function f grows \(\mu \)-quadratically if \({f(z) - f^{\star }} \ge {\mu }\,\mathbf{dist{}}(z, Z^{\star })^2\).
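As a concrete illustration (hypothetical sketch, not the paper's implementation), the classic PDHG update can be written in a few lines for a standard-form LP \(\min \{c^{\top } x : Ax = b,\ x \ge 0\}\) via the saddle-point form \(\min _{x \ge 0} \max _{y} c^{\top } x + y^{\top }(b - Ax)\). The function name and the choice to return ergodic averages are ours; on bilinear problems the last PDHG iterate can cycle, which is part of the motivation for averaging and restarting.

```python
import numpy as np

def pdhg_lp(c, A, b, tau, sigma, iters):
    """Hypothetical PDHG sketch for min c^T x s.t. Ax = b, x >= 0,
    via the saddle point min_{x>=0} max_y c^T x + y^T (b - A x).
    Assumes tau * sigma * sigma_max(A)^2 < 1."""
    x, y = np.zeros(A.shape[1]), np.zeros(A.shape[0])
    x_avg, y_avg = x.copy(), y.copy()
    for t in range(iters):
        x_next = np.maximum(x - tau * (c - A.T @ y), 0.0)  # projected primal step
        y = y + sigma * (b - A @ (2 * x_next - x))          # dual step at extrapolated point
        x = x_next
        x_avg += (x - x_avg) / (t + 1)                      # running ergodic averages
        y_avg += (y - y_avg) / (t + 1)
    return x_avg, y_avg
```

For instance, on \(\min \{x_1 + 2x_2 : x_1 + x_2 = 1,\ x \ge 0\}\) the averaged iterates approach the unique primal-dual solution \(x^{\star } = (1, 0)\), \(y^{\star } = 1\).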
The active variables are variables not at their bounds.
In the LP case (19), non-degeneracy means that the algorithm converges to a primal-dual solution that satisfies strict complementarity.
The restart lengths were ordered descending by iterations to find a normalized duality gap below \(10^{-7}\), and then by the normalized duality gap at the maximum number of iterations. The top three restart lengths were then selected for display.
References
Alacaoglu, A., Fercoq, O., Cevher, V.: On the convergence of stochastic primal-dual hybrid gradient, arXiv preprint arXiv:1911.00799 (2019)
Alamo, T., Limon, D., Krupa, P.: Restart FISTA with global linear convergence. In: 2019 18th European Control Conference (ECC), pp. 1969–1974. IEEE (2019)
Andersen, E.D., Andersen, K.D.: The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. In: High Performance Optimization, pp. 197–232. Springer (2000)
Anderson, R.I., Fok, R., Scott, J.: Hotel industry efficiency: an advanced linear programming examination. Am. Bus. Rev. 18(1), 40 (2000)
Applegate, D., Díaz, M., Hinder, O., Lu, H., Lubin, M., O’Donoghue, B., Schudy, W.: Practical large-scale linear programming using primal-dual hybrid gradient. In: Advances in Neural Information Processing Systems, vol. 34 (2021)
Applegate, D., Díaz, M., Lu, H., Lubin, M.: Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming, arXiv preprint arXiv:2102.04592 (2021)
Basu, K., Ghoting, A., Mazumder, R., Pan, Y.: ECLIPSE: an extreme-scale linear program solver for web-applications. In: Daumé III, H., Singh, A. (eds.) Proceedings of the 37th International Conference on Machine Learning (Virtual), Proceedings of Machine Learning Research, PMLR, vol. 119, pp. 704–714 (2020)
Blum, M., Floyd, R.W., Pratt, V., Rivest, R.L., Tarjan, R.E.: Time bounds for selection. J. Comput. Syst. Sci. 7(4), 448–461 (1973)
Bowman, E.H.: Production scheduling by the transportation method of linear programming. Oper. Res. 4(1), 100–103 (1956)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J., et al.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends® Mach. Learn. 3(1), 1–122 (2011)
Burke, J.V., Ferris, M.C.: Weak sharp minima in mathematical programming. SIAM J. Control Optim. 31(5), 1340–1359 (1993)
Burke, J.V., Ferris, M.C.: A Gauss–Newton method for convex composite optimization. Math. Program. 71(2), 179–194 (1995)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
Chambolle, A., Pock, T.: On the ergodic convergence rates of a first-order primal-dual algorithm. Math. Program. 159(1–2), 253–287 (2016)
Charnes, A., Cooper, W.W.: The stepping stone method of explaining linear programming calculations in transportation problems. Manag. Sci. 1(1), 49–69 (1954)
Condat, L., Malinovsky, G., Richtárik, P.: Distributed proximal splitting algorithms with rates and acceleration. Front. Signal Process. 12 (2022)
Dantzig, G.B.: Linear Programming and Extensions, vol. 48. Princeton University Press (1998)
Daskalakis, C., Ilyas, A., Syrgkanis, V., Zeng, H.: Training GANs with optimism. In: International Conference on Learning Representations (2018)
Davis, D., Drusvyatskiy, D., MacPhee, K.J., Paquette, C.: Subgradient methods for sharp weakly convex functions. J. Optim. Theory Appl. 179(3), 962–982 (2018)
Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82(2), 421–439 (1956)
Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(1–3), 293–318 (1992)
Eckstein, J., Bertsekas, D.P.: An alternating direction method for linear programming (1990)
Fercoq, O.: Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient (2021)
Fercoq, O., Zheng, Q.: Adaptive restart of accelerated gradient methods under local quadratic growth condition. IMA J. Numer. Anal. 39(4), 2069–2095 (2019)
Ferris, M.C.: Finite termination of the proximal point algorithm. Math. Program. 50(1–3), 359–366 (1991)
Freund, R.M., Lu, H.: New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure. Math. Program. 170(2), 445–477 (2018)
Galabova, I.L., Hall, J.A.J.: The ‘idiot’ crash quadratic penalty algorithm for linear programming and its application to linearizations of quadratic assignment problems. Optim. Methods Softw. 35(3), 488–501 (2020)
Gilpin, A., Pena, J., Sandholm, T.: First-order algorithm with \(\cal{O} (\ln (1/\epsilon ))\)-convergence for \(\epsilon \)-equilibrium in two-person zero-sum games. Math. Program. 133(1), 279–298 (2012)
Giselsson, P., Boyd, S.: Monotonicity and restart in fast gradient methods. In: 53rd IEEE Conference on Decision and Control, pp. 5058–5063. IEEE (2014)
Goldstein, T., Li, M., Yuan, X.: Adaptive primal-dual splitting methods for statistical learning and image processing. In: Advances in Neural Information Processing Systems, pp. 2089–2097 (2015)
Gondzio, J.: Interior point methods 25 years later. Eur. J. Oper. Res. 218(3), 587–601 (2012)
Güler, O., Hoffman, A.J., Rothblum, U.G.: Approximations to solutions to systems of linear inequalities. SIAM J. Matrix Anal. Appl. 16(2), 688–696 (1995)
Gutman, D.H., Peña, J.F.: The condition number of a function relative to a set. Math. Program. (2020), to appear
Hanssmann, F., Hess, S.W.: A linear programming approach to production and employment scheduling. Manag. Sci. 1, 46–51 (1960)
Harker, P.T., Pang, J.-S.: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 48(1–3), 161–220 (1990)
He, B., Yuan, X.: On the \({O}(1/n)\) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50(2), 700–709 (2012)
Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Natl. Bur. Stand. 49, 263–265 (1952)
Hunter, J.K., Nachtergaele, B.: Applied Analysis. World Scientific Publishing Company (2001)
Johnson, R., Zhang, T.: Accelerating stochastic gradient descent using predictive variance reduction. Adv. Neural. Inf. Process. Syst. 26, 315–323 (2013)
Karmarkar, N.: A new polynomial-time algorithm for linear programming. In: Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, pp. 302–311 (1984)
Klatte, D., Thiere, G.: Error bounds for solutions of linear equations and inequalities. Z. Oper. Res. 41(2), 191–214 (1995)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Lewis, A.S., Liang, J.: Partial smoothness and constant rank, arXiv preprint arXiv:1807.03134 (2018)
Li, X., Sun, D., Toh, K.-C.: An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for linear programming. SIAM J. Optim. 30(3), 2410–2440 (2020)
Liang, J., Fadili, J., Peyré, G.: Local linear convergence analysis of primal-dual splitting methods. Optimization 67(6), 821–853 (2018)
Lin, H., Mairal, J., Harchaoui, Z.: A universal catalyst for first-order optimization. In: Advances in Neural Information Processing Systems, pp. 3384–3392 (2015)
Lin, Q., Xiao, L.: An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization. In: International Conference on Machine Learning, pp. 73–81 (2014)
Lin, T., Ma, S., Ye, Y., Zhang, S.: An ADMM-based interior-point method for large-scale linear programming. Optim. Methods Softw. 36(2–3), 389–424 (2021)
Liu, Q., Van Ryzin, G.: On the choice-based linear programming model for network revenue management. Manuf. Serv. Oper. Manag. 10(2), 288–310 (2008)
Lu, H.: An \({O}(s^r)\)-resolution ODE framework for discrete-time optimization algorithms and applications to convex-concave saddle-point problems, arXiv preprint arXiv:2001.08826 (2020)
Luo, Z.-Q., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Oper. Res. 46(1), 157–178 (1993)
Manne, A.S.: Linear programming and sequential decisions. Manag. Sci. 6(3), 259–267 (1960)
Marcotte, P., Zhu, D.: Weak sharp solutions of variational inequalities. SIAM J. Optim. 9(1), 179–189 (1998)
Mittelmann, H.D.: Benchmark of simplex LP solvers (2020). http://plato.asu.edu/ftp/lpsimp.html
Mokhtari, A., Ozdaglar, A., Pattathil, S.: A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: proximal point approach. In: International Conference on Artificial Intelligence and Statistics (2020)
Nemirovski, A.: Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex–concave saddle point problems. SIAM J. Optim. 15(1), 229–251 (2004)
Nesterov, Yu.: Subgradient methods for huge-scale optimization problems. Math. Program. 146(1), 275–297 (2014)
Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005)
Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. 140(1), 125–161 (2013)
Nesterov, Y.: Introductory Lectures on Convex Optimization: a basic course, vol. 87. Springer (2013)
Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate \({O} (1/k^2)\). Soviet Math. Doklady 27, 372–376 (1983)
He, N.: Mirror-prox algorithm. Lecture notes, Fall 2016. http://niaohe.ise.illinois.edu/IE598_2016/pdf/IE598-lecture18-mirror%20prox%20algorithm%20for%20saddle%20point%20problems.pdf
O’Donoghue, B., Candes, E.: Adaptive restart for accelerated gradient schemes. Found. Comput. Math. 15(3), 715–732 (2015)
O’Donoghue, B., Chu, E., Parikh, N., Boyd, S.: Conic optimization via operator splitting and homogeneous self-dual embedding. J. Optim. Theory Appl. 169(3), 1042–1068 (2016)
Peña, J., Vera, J.C., Zuluaga, L.F.: New characterizations of Hoffman constants for systems of linear constraints. Math. Program. (2020), to appear
Pokutta, S.: Restarting algorithms: sometimes there is free lunch. In: International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pp. 22–38. Springer (2020)
Polyak, B.: Sharp minima. In: Proceedings of the IIASA Workshop on Generalized Lagrangians and Their Applications, Laxenburg, Austria. Institute of Control Sciences Lecture Notes, Moscow (1979)
Polyak, B.: Introduction to Optimization. Optimization Software Inc, New York (1987)
Ramakrishnan, K.G., Resende, M.G.C., Ramachandran, B., Pekny, J.F.: Tight QAP Bounds via Linear Programming, pp. 297–303. World Scientific Publishing Co. (2002)
Renegar, J.: Incorporating condition measures into the complexity theory of linear programming. SIAM J. Optim. 5(3), 506–524 (1995)
Renegar, J.: Linear programming, complexity theory and elementary functional analysis. Math. Program. 70(1–3), 279–351 (1995)
Roulet, V., d’Aspremont, A.: Sharpness, restart, and acceleration. SIAM J. Optim. 30(1), 262–289 (2020)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)
Tang, J., Golbabaee, M., Bach, F. et al.: Rest-katyusha: exploiting the solution’s structure via scheduled restart schemes. In: Advances in Neural Information Processing Systems, pp. 429–440 (2018)
Thomson, B.S., Bruckner, J.B., Bruckner, A.M.: Elementary Real Analysis, vol. 1. ClassicalRealAnalysis.com (2008)
Tseng, P.: On linear convergence of iterative methods for the variational inequality problem. J. Comput. Appl. Math. 60(1–2), 237–252 (1995)
Yang, T., Lin, Q.: RSG: beating subgradient method without smoothness and strong convexity. J. Mach. Learn. Res. 19(1), 236–268 (2018)
Acknowledgements
We thank Brendan O’Donoghue, Vasilis Charisopoulos, and Warren Schudy for useful discussions and feedback on this work.
Appendices
Proof of Lemma 2
Proof
Let \(U \Sigma V^{\top }\) be a singular value decomposition of H. Recall U and V are orthogonal matrices (\(U^{\top } U = U U^{\top } = {{\textbf {I}}}\) and \(\Vert U z \Vert = \Vert z \Vert \)), and \(\Sigma \) is diagonal. First, by assumption there exists some \(\bar{z}\) such that \(H \bar{z} = h\). This implies that for \(\bar{p} = V^{\top } \bar{z}\) we have \(U^{\top } h = U^{\top } (U \Sigma V^{\top }) \bar{z} = \Sigma V^{\top } \bar{z} = \Sigma \bar{p}\). Consider some \(z \in {\mathbb {R}}^{n}\). Define \(p = V^{\top } z\),
and \(z^{\star }= V p^{*}\). If \(\Sigma _{ii} \ne 0\) then by definition of \(p^{*}\), \((\Sigma p^{*})_i = (U^{\top } h)_i\). If \(\Sigma _{ii} = 0\) then \((U^{\top } h)_i = (\Sigma \bar{p})_i = \Sigma _{ii} \bar{p}_i = 0 = \Sigma _{ii} p^{*}_i = (\Sigma p^{*})_i\). Hence \(\Sigma p^{*} = U^{\top } h\). Therefore,
\(\square \)
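The construction in this proof can be checked numerically. The sketch below is hypothetical (variable names are ours), and it assumes the natural reading of the omitted definition of \(p^{*}\): \(p^{*}_i = (U^{\top } h)_i / \Sigma _{ii}\) when \(\Sigma _{ii} \ne 0\), and \(p^{*}_i = p_i\) on the components with \(\Sigma _{ii} = 0\), so that \(z^{\star } = V p^{*}\) is the nearest solution of \(Hz = h\) to z.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # rank-deficient H
z_bar = rng.standard_normal(5)
h = H @ z_bar                      # guarantees H z = h is consistent
z = rng.standard_normal(5)         # an arbitrary point

U, s, Vt = np.linalg.svd(H)        # full SVD: U is 4x4, Vt is 5x5, s has 4 entries
p = Vt @ z                         # p = V^T z
p_star = p.copy()
nz = s > 1e-10                     # components with Sigma_ii != 0
p_star[:len(s)][nz] = (U.T @ h)[nz] / s[nz]
z_star = Vt.T @ p_star             # z* = V p*
```

Under this construction \(z^{\star }\) satisfies \(H z^{\star } = h\) and is at least as close to z as any other solution, such as \(\bar{z}\).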
Proof of Proposition 3 for EGM
Proof
Let \(z^{\star }\in {\mathop {\textrm{argmin}}\limits _{z \in Z^{\star }}} \Vert z^{t-1} - z \Vert \). By definition of \(\hat{z}^t\) and that F is \(L\)-Lipschitz, we have
Note that
Substituting (58) into the LHS of (57) and cancelling terms yields
Combining the above yields (60), where the first inequality uses \(F(z^{\star })^{\top } (z^{\star }- z^{t-1}) \le F(z^{\star })^{\top } (\hat{z}^t - z^{t-1})\), the second inequality uses (59), the third inequality uses that F is \(L\)-Lipschitz, and the final inequality uses the triangle inequality.
Notice that \(\Vert \hat{z}^t - z^{t-1} \Vert +\Vert z^{\star }- z^{t-1} \Vert > 0\) otherwise the result trivially holds, thus by (60) we get \(\frac{1}{2\eta } \Vert \hat{z}^t - z^{t-1} \Vert - \left( \frac{1}{2\eta }+L\right) \Vert z^{\star }- z^{t-1} \Vert \le 0\), rearranging yields
where the last inequality uses that \(\eta \le 1 / L\) from the premise of Proposition 1. \(\square \)
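For reference, the prediction/correction structure of EGM analyzed in this proof can be sketched as follows (a hypothetical minimal implementation, names ours). On the bilinear saddle point \(\min _x \max _y xy\), where plain gradient descent-ascent diverges, the extragradient iterates contract toward the solution.

```python
import numpy as np

def extragradient(F, proj, z0, eta, iters):
    """EGM sketch: z_hat^t = proj(z^{t-1} - eta * F(z^{t-1})),
                   z^t     = proj(z^{t-1} - eta * F(z_hat^t))."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_hat = proj(z - eta * F(z))   # prediction step
        z = proj(z - eta * F(z_hat))   # correction uses F at the predicted point
    return z
```

For \(\min _x \max _y xy\) the operator is \(F(x, y) = (y, -x)\), and with \(\eta = 0.5\) each iteration shrinks the distance to the saddle point \((0,0)\) by a constant factor.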
Proof of Lemma 6
Proof
Recall for ADMM the norm is \(\Vert z \Vert = \sqrt{z^{\top } M z}\) with
where we utilize \(V=I\) in (28). Suppose that \(\eta = 1\). Consider \(z \in Z\), \(r \in (0, 2R]\). Let
Let \(z_1 = z + r \frac{v}{\Vert v \Vert }\), then \(z_1\in Z\) by the definition of Z. It follows from (21) that
Consider any optimal solution \(z^{\star }= (x^{\star }_{V}, x^{\star }_{V}, y^{\star })\), define \(\hat{z} = (x^{\star }_{V}, x^{\star }_{V}, y) \in Z^{\star }\). Let \(z_2= z - \mu (z - \hat{z})\) for \(\mu = \min \left\{ \frac{r}{\Vert z - \hat{z} \Vert }, 1 \right\} \), then \(\Vert z_2-z\Vert \le \frac{r}{\Vert z - \hat{z} \Vert }\Vert z - \hat{z} \Vert \le r\). Meanwhile, we have \(z_2 = (1- \mu ) z + \mu \hat{z}\), thus \(z_2\in Z\) by convexity. We conclude that \(z_2 \in W_{r}(z)\). Therefore, it follows from (21) that
Substituting \(r \in (0, 2R]\) and \(\Vert z - \hat{z} \Vert \le \Vert z - z^{\star }\Vert \in [0,2R]\) into (62) and noticing \(\rho _r(z)\ge 0\) yields
Combining (61) and (63), we deduce there exists \(K'\) and \(h'\) such that
where the last inequality is from duality, Hoffman condition (Equation (26)), and the fact that \(\Vert z \Vert _2 \ge \Vert z \Vert \). Taking the square root obtains the result for \(\eta = 1\).
Next, we consider the case \(\eta \ne 1\). Let us denote the corresponding normalized duality gap as \(\rho _r^{\eta }\) and distance as \(\mathbf{dist{}}^{\eta }\) then with \(\theta = \max \{ \eta , 1/\eta \}\) we get
\(\square \)
Proof of Proposition 10
The next lemma is used in the proof of Proposition 10.
Lemma 7
Consider the sequences \(\{ z^{n, 0} \}_{n=0}^{\infty }\) and \(\{ \tau ^{n} \}_{n=1}^{\infty }\) generated by Algorithm 1 with the adaptive restart scheme and \(\beta <\frac{1}{q+3}\). Suppose (6) satisfies Property 3, and there exists \(a \in (0,\infty )\) s.t. \(\rho _r(z) \ge \frac{a}{1 + \Vert z \Vert } \mathbf{dist{}}(z, Z^{\star })\). Then \(z^{n,0}\) stays in a bounded region; in particular, \(z^{n,0} \in W_{R}(z^{0,0})\) for all \(n \in {\mathbb {N}}\) with \(R = \frac{2C (q + 3)^3}{\beta a (1 - \beta (q+3))} \mathbf{dist{}}( z^{0, 0}, Z^{\star }) (1 + \Vert z^{0, 0} \Vert + \mathbf{dist{}}(z^{0,0}, Z^{\star }))\).
Proof
First, notice that it holds for any \(n>0\) that \(\mathbf{dist{}}(z^{n, 0}, Z^{\star }) \le \mathbf{dist{}}(z^{n-1, 0}, Z^{\star }) + \Vert z^{n, 0} - z^{n-1,0} \Vert \le (q + 3) \mathbf{dist{}}(z^{n-1, 0}, Z^{\star })\) where the first inequality uses the triangle inequality, and the second inequality is from Property ii. Recursively applying this inequality yields
Thus, we have
where the first inequality uses the triangle inequality, the second inequality is from Property ii., the third inequality utilizes (64) for \(n=0,\ldots ,N-1\), the equality uses the formula for the sum of a geometric series, and the last inequality uses \(q\ge 0\).
Furthermore, we have
where the first inequality recursively uses (30), the second inequality is from Property i. and \(\tau ^{0} \ge 1\), and the last inequality uses Property ii.
Therefore, it holds that
where the first inequality uses Property ii., the second inequality is from that \(\rho _r(z) \ge \frac{a}{1 + \Vert z \Vert } \mathbf{dist{}}(z, Z^{\star })\), the third inequality uses (66), the fourth inequality uses the triangle inequality, the fifth inequality uses (65), and the last inequality uses \(q+3 \ge 2\).
Therefore, we have
where the first inequality is the triangle inequality, the second inequality uses (67), and the last inequality is from the bound on the sum of a geometric series by noticing \(\beta (q+3)<1\). This finishes the proof. \(\square \)
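For intuition, the restart logic analyzed in this lemma can be sketched generically. The sketch below is hypothetical (names ours): `step` performs one iteration of the base method and `gap` evaluates the normalized duality gap; the paper's scheme restarts at averaged iterates, which we omit here for brevity.

```python
def adaptive_restart(step, gap, z0, beta, outer_iters, max_inner):
    """Sketch of adaptive restarting: run the base method until the
    normalized duality gap has shrunk by a factor beta, then restart
    from the current point (averaging of inner iterates omitted)."""
    z = z0
    for _ in range(outer_iters):
        g0 = gap(z)
        for _ in range(max_inner):
            z = step(z)
            if gap(z) <= beta * g0:   # sufficient decay triggers a restart
                break
    return z
```

With a linearly contracting base method, each restart epoch multiplies the gap by roughly \(\beta \), so the outer loop converges linearly.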
Proof of Proposition 10 for EGM and ADMM applied to LP. Recall that the Lagrangian form of an LP instance is \(\alpha \)-sharp on \(S(R)=\{z \in Z: \Vert z \Vert \le R\}\) with \(\alpha =\frac{1}{H(K)\sqrt{1+4R^2}}\) (see Lemma 5) and the ADMM form of an LP instance is \(\alpha \)-sharp on \(S(R)=\{z \in Z: \Vert z \Vert \le R\}\) with \(\alpha =\frac{1}{ \max \{ \eta ^2, 1/\eta ^2 \} H(K')\sqrt{1+4R^2}}\) (see Lemma 6). Thus the sharpness constant \(\alpha \) satisfies the condition in Lemma 7 with \(a=\frac{1}{2H(K)}\) for standard form LP (Lemma 5) and \(a=\frac{1}{2 \max \{ \eta ^2, 1/\eta ^2 \} H(K')}\) for ADMM form LP. We finish the proof by setting \(R = \frac{2C (q + 3)^3}{\beta a (1 - \beta (q+3))} \mathbf{dist{}}( z^{0, 0}, Z^{\star }) (1 + \Vert z^{0, 0} \Vert + \mathbf{dist{}}(z^{0,0}, Z^{\star }))\) as stated in Lemma 7. \(\square \)
Proof of Corollary 2
Proof
Let \(\Vert z \Vert _M = \sqrt{\Vert x \Vert ^2 - 2 \eta y^{\top } A x + \Vert y \Vert ^2}\). Recall that Property 3 holds for \(\Vert \cdot \Vert _M\) with \(q = 0\) and \(C = 2 / \eta \) (refer to Table 1). Define \(\hat{B}_{r}(z) = \{ z' \in Z: \Vert z' - z \Vert _{M} \le r \}\); then with \(\hat{r} = r \sqrt{\frac{1}{1 - \eta {\sigma _{\max }}{(A)}}}\) for any \(z \in Z\) we get
where the first inequality uses that \(W_{r}(z) \subseteq \hat{B}_{\hat{r}}(z)\) by Proposition 7, and the second inequality uses Property i. This establishes \(C = \frac{2}{\eta (1 - \eta {\sigma _{\max }}{(A)})}\). Moreover, with \(z^{\star }\in {\mathop {\textrm{argmin}}\limits _{z \in Z^{\star }}} \Vert z^{0} - z \Vert _M\) we have
where the first inequality uses Proposition 7, the second inequality uses Property ii., and the third inequality uses Proposition 7. This establishes \(q = 4 \frac{1 + \eta {\sigma _{\max }}{(A)}}{1 - \eta {\sigma _{\max }}{(A)}} - 2\). \(\square \)
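The norm equivalence underlying this argument can be checked numerically. The hypothetical script below (an arbitrary random matrix, names ours) verifies that \(\Vert z \Vert _M^2 = \Vert x \Vert ^2 - 2\eta y^{\top } A x + \Vert y \Vert ^2\) corresponds to the block matrix \(M = \bigl [{\begin{matrix} I &{} -\eta A^{\top } \\ -\eta A &{} I \end{matrix}}\bigr ]\), whose eigenvalues are \(1 \pm \eta \sigma _i(A)\) (and 1), so \((1 - \eta \sigma _{\max }(A)) \Vert z \Vert _2^2 \le \Vert z \Vert _M^2 \le (1 + \eta \sigma _{\max }(A)) \Vert z \Vert _2^2\).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
eta = 0.5 / np.linalg.svd(A, compute_uv=False)[0]  # so that eta * sigma_max(A) = 0.5

# z^T M z = ||x||^2 - 2 eta y^T A x + ||y||^2 for z = (x, y)
M = np.block([[np.eye(4), -eta * A.T],
              [-eta * A, np.eye(3)]])
eigs = np.linalg.eigvalsh(M)
```

With \(\eta \sigma _{\max }(A) = 0.5\) the extreme eigenvalues are exactly 0.5 and 1.5, and in particular M is positive definite whenever \(\eta \sigma _{\max }(A) < 1\).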
Linear time algorithm for linear trust region problems
Theorem 4
Algorithm 2 exactly solves (49), the trust region problem with linear objective, Euclidean norm, and bounds.
Proof
If the algorithm returns at line 1, then (51) is unbounded and the returned point is \(\hat{z}(\infty )\). Otherwise, in each iteration of the while loop (line 5), the algorithm maintains the invariants
-
\(\mathcal {I} = \{i:\;\lambda _{\text {lo}}< \hat{\lambda }_i < \lambda _{\text {hi}}\}\)
-
For \(\lambda _{\text {lo}}< \lambda < \lambda _{\text {hi}}\), \(\Vert \hat{z}(\lambda ) - z \Vert ^2 = f_{\text {lo}}+ \lambda ^2 f_{\text {hi}}+ \sum _{i \in \mathcal {I}} (\hat{z}(\lambda )_i - z_i)^2\)
-
\(\Vert \hat{z}(\lambda _{\text {lo}}) - z\Vert ^2 \le r^2 \le \Vert \hat{z}(\lambda _{\text {hi}}) - z\Vert ^2\)
The while loop (line 5) is finite, since on each iteration \(|\mathcal {I}|\) is reduced by at least a factor of 2. When the while loop exits (with \(\mathcal {I} = \emptyset \)), these invariants mean that the final \(\lambda _{\text {mid}}\) computed on line 18 optimizes (51) and the returned \(\hat{z}(\lambda _{\text {mid}})\) solves (49). \(\square \)
Theorem 5
Algorithm 2 runs in \(O(m+n)\) time.
Proof
The work outside the while loop (line 5) is clearly \(O(m+n)\). In each pass through the while loop, the median (line 6) can be found in \(O(|\mathcal {I}|)\) time [9], and the rest of the loop also takes \(O(|\mathcal {I}|)\) time. Since at least half of the elements of \(\mathcal {I}\) are removed in each iteration, and initially \(|\mathcal {I}| \le m+n\), the total time in the while loop is at most \(O(\sum _{i=0}^\infty \frac{m+n}{2^i}) = O(m+n)\) by the formula for the sum of an infinite geometric series. \(\square \)
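To make the structure of (49) concrete, here is a hypothetical simplified solver for the same subproblem (names ours) that replaces Algorithm 2's median-selection inner loop with plain bisection on \(\lambda \), trading the \(O(m+n)\) bound for \(O((m+n)\log (1/\epsilon ))\) work: the solution is \(\hat{z}(\lambda ) = \text {clip}(z - \lambda c)\) for the \(\lambda \ge 0\) at which the ball constraint becomes tight, and \(\Vert \hat{z}(\lambda ) - z \Vert \) is nondecreasing in \(\lambda \).

```python
import numpy as np

def linear_trust_region(c, z, lo, hi, r, bisection_iters=100):
    """Simplified sketch of min c^T w  s.t.  lo <= w <= hi, ||w - z||_2 <= r,
    parametrized by w(lam) = clip(z - lam * c, lo, hi) and solved by
    bisection on lam (Algorithm 2 instead uses median selection)."""
    def w(lam):
        return np.clip(z - lam * c, lo, hi)
    w_inf = np.where(c > 0, lo, np.where(c < 0, hi, z))  # limit of w(lam) as lam -> inf
    if np.linalg.norm(w_inf - z) <= r:                   # ball constraint inactive
        return w_inf
    lam_lo, lam_hi = 0.0, 1.0
    while np.linalg.norm(w(lam_hi) - z) < r:             # bracket the tight lam
        lam_hi *= 2.0
    for _ in range(bisection_iters):                     # ||w(lam) - z|| is monotone
        lam = 0.5 * (lam_lo + lam_hi)
        if np.linalg.norm(w(lam) - z) < r:
            lam_lo = lam
        else:
            lam_hi = lam
    return w(0.5 * (lam_lo + lam_hi))
```

For example, with \(c = (1,1)\), \(z = 0\), generous bounds, and \(r = 1\), the solution is \(-c/\Vert c \Vert = (-1/\sqrt{2}, -1/\sqrt{2})\); when a bound is hit before the ball, the box-constrained limit point is returned directly.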
Numerical results for restarted EGM and ADMM
In this section, we repeat the numerical experiments of Sect. 7 with EGM and ADMM instead of PDHG. We tune the primal weight for EGM and the step size \(\eta \) for ADMM using the same procedure as with PDHG (see Sect. 7).
Figure 3 (EGM) is almost identical to Fig. 2 (PDHG), which verifies again the improved performance of restarted algorithms. Furthermore, note that one iteration of EGM is about twice as expensive as a PDHG iteration, because EGM requires four matrix–vector multiplications per iteration, while PDHG requires two matrix–vector multiplications.
Indeed, to see the difference between EGM and PDHG requires a close examination of Table 4. Recall that the termination criteria are only checked every 30 iterations, which causes the iteration counts to be identical on several instances.
Figure 4 shows the performance of restarted ADMM on the same problems, which again verifies the improved performance of restarted algorithms. Restarts do appear to slow ADMM on nug08-3rd, but this is a very easy problem: the number of iterations for all methods is very low.
Applegate, D., Hinder, O., Lu, H. et al. Faster first-order primal-dual methods for linear programming using restarts and sharpness. Math. Program. 201, 133–184 (2023). https://doi.org/10.1007/s10107-022-01901-9