Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem

  • Md Sarowar Morshed
  • Md Saiful Islam
  • Md. Noor-E-Alam

Abstract

The Sampling Kaczmarz Motzkin (SKM) algorithm is a generalized method for solving large-scale systems of linear inequalities. With its roots in the relaxation method of Agmon, Schoenberg, and Motzkin and in the randomized Kaczmarz method, SKM outperforms state-of-the-art methods on large-scale Linear Feasibility (LF) problems. Motivated by SKM's success, in this work we propose an Accelerated Sampling Kaczmarz Motzkin (ASKM) algorithm that achieves better convergence than the standard SKM algorithm on ill-conditioned problems. We provide a thorough convergence analysis of the proposed accelerated algorithm and validate the results with various numerical experiments. We compare the performance and effectiveness of the ASKM algorithm with SKM, the Interior Point Method (IPM), and the Active Set Method (ASM) on randomly generated instances as well as Netlib LPs. In most of the test instances, the proposed ASKM algorithm outperforms the other state-of-the-art methods.
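The abstract gives no pseudocode, so the following is only a minimal sketch of what one SKM iteration with a Nesterov-style momentum term might look like for the feasibility problem Ax ≤ b: sample a block of rows, project onto the most violated constraint in the block, then extrapolate. All parameter names here (beta = sample size, delta = relaxation parameter, gamma = momentum weight) are illustrative, and a fixed gamma is a simplification of the acceleration scheme analyzed in the paper.

```python
import numpy as np

def askm(A, b, beta=10, delta=1.0, gamma=0.3, max_iter=5000, tol=1e-6):
    """Illustrative sketch (not the paper's exact scheme) of an accelerated
    Sampling Kaczmarz Motzkin iteration for the LF problem Ax <= b."""
    m, n = A.shape
    row_norms = np.linalg.norm(A, axis=1) ** 2  # squared row norms for projections
    x = np.zeros(n)
    y = x.copy()  # auxiliary (extrapolated) iterate
    for _ in range(max_iter):
        # SKM step on y: sample beta rows uniformly, take the most violated one
        S = np.random.choice(m, size=min(beta, m), replace=False)
        i = S[np.argmax(A[S] @ y - b[S])]
        t = max(A[i] @ y - b[i], 0.0)  # positive part of the violation
        x_new = y - delta * (t / row_norms[i]) * A[i]
        # momentum combination of the last two iterates
        y = x_new + gamma * (x_new - x)
        x = x_new
        if np.max(A @ x - b) <= tol:  # stop once (approximately) feasible
            break
    return x
```

With gamma = 0, the loop reduces to plain SKM; beta = 1 recovers a randomized Kaczmarz step and beta = m a Motzkin (maximum-violation) step.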

Keywords

Kaczmarz method · Nesterov's acceleration · Motzkin method · Sampling Kaczmarz Motzkin algorithm

Mathematics Subject Classification

90C05 · 65F10 · 90C25 · 15A39 · 68W20

Acknowledgements

The authors are truly grateful to the anonymous referees and the editor for their valuable comments and suggestions in the revision process which have greatly improved the quality of this paper.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mechanical and Industrial Engineering, Northeastern University, Boston, USA
