Abstract
Various first-order approaches have been proposed in the literature to solve Linear Programming (LP) problems, recently leading to practically efficient solvers for large-scale LPs. From a theoretical perspective, linear convergence rates have been established for first-order LP algorithms, despite the fact that the underlying formulations are not strongly convex. However, the convergence rate typically depends on the Hoffman constant of a large matrix that contains the constraint matrix, as well as the right-hand-side, cost, and capacity vectors.
We introduce a first-order approach for LP optimization with a convergence rate depending polynomially on the circuit imbalance measure, which is a geometric parameter of the constraint matrix, and depending logarithmically on the right-hand-side, capacity, and cost vectors. This provides much stronger convergence guarantees. For example, if the constraint matrix is totally unimodular, we obtain polynomial-time algorithms, whereas the convergence guarantees for approaches based on primal-dual formulations may have arbitrarily slow convergence rates for this class. Our approach is based on a fast gradient method due to Necoara, Nesterov, and Glineur (Math. Program. 2019); this algorithm is called repeatedly within a framework that gradually fixes variables to the boundary. This technique is based on a new approximate version of Tardos’s method, which was used to obtain a strongly polynomial algorithm for combinatorial LPs (Oper. Res. 1986).
The full version is available at arXiv:2311.01959.
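To illustrate the kind of restarting scheme the abstract refers to, the following is a minimal sketch (not the authors’ algorithm) of Nesterov’s fast gradient method with periodic restarts, applied to a rank-deficient least-squares problem. Such an objective is convex but not strongly convex, yet it satisfies a quadratic-growth (Hoffman-type error bound) condition, under which restarted acceleration is known to converge linearly. All function names, matrix sizes, and restart parameters below are illustrative choices, not taken from the paper.

```python
import numpy as np

def accel_grad(grad, L, x0, iters):
    """Nesterov's accelerated gradient method for an L-smooth convex function."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

def restarted_fgm(grad, L, x0, restarts, inner_iters):
    """Restart the fast gradient method every `inner_iters` steps.

    Resetting the momentum (t = 1) at each restart is what turns the
    sublinear O(1/k^2) rate into a linear rate under quadratic growth,
    even though the objective is not strongly convex."""
    x = x0
    for _ in range(restarts):
        x = accel_grad(grad, L, x, inner_iters)
    return x

# Rank-deficient least squares: f(x) = 0.5 * ||Ax - b||^2 with a wide A,
# so the Hessian A^T A is singular and f is not strongly convex.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
b = rng.standard_normal(30)
L = np.linalg.norm(A, 2) ** 2            # smoothness constant = sigma_max(A)^2

grad = lambda x: A.T @ (A @ x - b)
x = restarted_fgm(grad, L, np.zeros(50), restarts=20, inner_iters=50)
print(np.linalg.norm(A @ x - b))          # residual driven toward zero
```

The paper’s contribution lies in what happens around such inner solves: calling the fast gradient method repeatedly while gradually fixing variables to the boundary, with guarantees parameterized by the circuit imbalance measure rather than a Hoffman constant of the full data.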
References
Applegate, D., et al.: Practical large-scale linear programming using primal-dual hybrid gradient. Adv. Neural Inf. Process. Syst. 34, 20243–20257 (2021)
Applegate, D., Hinder, O., Lu, H., Lubin, M.: Faster first-order primal-dual methods for linear programming using restarts and sharpness. Math. Program. 201(1), 133–184 (2023)
Dadush, D., Huiberts, S., Natura, B., Végh, L.A.: A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix. Math. Program. (2023, in press)
Dadush, D., Natura, B., Végh, L.A.: Revisiting Tardos’s framework for linear programming: faster exact solutions using approximate solvers. In: Proceedings of the 61st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 931–942 (2020)
Eckstein, J., Bertsekas, D.P.: An alternating direction method for linear programming. Technical report LIDS-P-1967 (1990)
Ekbatani, F., Natura, B., Végh, L.A.: Circuit imbalance measures and linear programming. In: Surveys in Combinatorics 2022. London Mathematical Society Lecture Note Series, pp. 64–114. Cambridge University Press (2022)
Fujishige, S., Kitahara, T., Végh, L.A.: An update-and-stabilize framework for the minimum-norm-point problem. In: Del Pia, A., Kaibel, V. (eds.) IPCO 2023. LNCS, vol. 13904, pp. 142–156. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-32726-1_11
Gilpin, A., Peña, J., Sandholm, T.: First-order algorithm with O(ln(1/ε)) convergence for ε-equilibrium in two-person zero-sum games. Math. Program. 133(1–2), 279–298 (2012)
Hinder, O.: Worst-case analysis of restarted primal-dual hybrid gradient on totally unimodular linear programs. arXiv preprint arXiv:2309.03988 (2023)
Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Natl. Bur. Stand. 49(4), 263–265 (1952)
Karmarkar, N.: A new polynomial-time algorithm for linear programming. In: Proceedings of the 16th Annual ACM Symposium on Theory of Computing (STOC), pp. 302–311 (1984)
Khachiyan, L.G.: A polynomial algorithm in linear programming. In: Doklady Akademii Nauk SSSR, vol. 244, pp. 1093–1096 (1979)
Necoara, I., Nesterov, Y., Glineur, F.: Linear convergence of first order methods for non-strongly convex optimization. Math. Program. 175, 69–107 (2019)
Smale, S.: Mathematical problems for the next century. Math. Intell. 20, 7–15 (1998)
Tardos, É.: A strongly polynomial minimum cost circulation algorithm. Combinatorica 5(3), 247–255 (1985)
Tardos, É.: A strongly polynomial algorithm to solve combinatorial linear programs. Oper. Res. 34, 250–256 (1986)
Tunçel, L.: Approximating the complexity measure of Vavasis-Ye algorithm is NP-hard. Math. Program. 86(1), 219–223 (1999)
Vavasis, S.A., Ye, Y.: A primal-dual interior point method whose running time depends only on the constraint matrix. Math. Program. 74(1), 79–120 (1996)
Wang, S., Shroff, N.: A new alternating direction method for linear programming. Adv. Neural Inf. Process. Syst. 30 (2017)
Yang, T., Lin, Q.: RSG: beating subgradient method without smoothness and strong convexity. J. Mach. Learn. Res. 19(1), 236–268 (2018)
Acknowledgements
This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. ScaleOpt–757481; for C. Hertrich additionally via grant agreement no. ForEFront–615640). Y. Tao also acknowledges Grant 2023110522 from SUFE, National Key R&D Program of China (2023YFA1009500), and NSFC grant 61932002. Part of the work was done while L. Végh was visiting the Corvinus Institute for Advanced Studies, Corvinus University, Budapest, Hungary, and while C. Hertrich was affiliated with London School of Economics, UK, and with Goethe-Universität Frankfurt, Germany.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cole, R., Hertrich, C., Tao, Y., Végh, L.A. (2024). A First Order Method for Linear Programming Parameterized by Circuit Imbalance. In: Vygen, J., Byrka, J. (eds) Integer Programming and Combinatorial Optimization. IPCO 2024. Lecture Notes in Computer Science, vol 14679. Springer, Cham. https://doi.org/10.1007/978-3-031-59835-7_5
DOI: https://doi.org/10.1007/978-3-031-59835-7_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-59834-0
Online ISBN: 978-3-031-59835-7
eBook Packages: Computer Science (R0)