Unified frameworks for high order Newton-Schulz and Richardson iterations: a computationally efficient toolkit for convergence rate improvement

Abstract

Improved convergence rate and robustness, together with reduced computational complexity, are required for solving the system of linear equations \(A \theta_* = b\) in many applications such as system identification, signal and image processing, network analysis, and machine learning. Two unified frameworks are proposed: (1) for improving the convergence rate of high order Newton-Schulz matrix inversion algorithms, and (2) for combining Richardson iterations with iterative matrix inversion algorithms of improved convergence rate for the estimation of \(\theta_*\). Recursive and computationally efficient versions of the new algorithms are developed for implementation on parallel computational units. In addition to a unified description of the algorithms, the frameworks include explicit transient models of the estimation errors and a convergence analysis. Simulation results confirm significant performance improvement of the proposed algorithms in comparison with existing methods.
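For readers less familiar with the two building blocks named in the abstract, the following minimal Python sketch illustrates the classical second-order Newton-Schulz matrix inversion iteration (Schulz, 1933) and the stationary Richardson iteration (Richardson, 1910) on which the proposed high order frameworks build. It is not the paper's algorithm: the function names, the initialization \(G_0 = A^{\top}/(\|A\|_1 \|A\|_\infty)\), the step size, and the test matrix are illustrative assumptions.

# Minimal sketch (not the paper's algorithms): classical second-order
# Newton-Schulz inversion and stationary Richardson iteration for A theta = b.
# Function names, initialization, step size and test matrix are illustrative
# assumptions, not taken from the paper.
import numpy as np

def newton_schulz(A, iters=30):
    """Second-order Newton-Schulz: G_{k+1} = G_k (2I - A G_k).

    Converges to A^{-1} when the spectral radius of (I - A G_0) is below 1;
    G_0 = A^T / (||A||_1 ||A||_inf) is a common safe initialization.
    """
    n = A.shape[0]
    I = np.eye(n)
    G = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        G = G @ (2.0 * I - A @ G)
    return G

def richardson(A, b, alpha, iters=200):
    """Stationary Richardson iteration: theta_{k+1} = theta_k + alpha (b - A theta_k).

    For symmetric positive definite A it converges for 0 < alpha < 2 / lambda_max(A).
    """
    theta = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        theta = theta + alpha * (b - A @ theta)
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T + 5.0 * np.eye(5)        # symmetric positive definite test matrix
    b = rng.standard_normal(5)

    G = newton_schulz(A)
    print("Newton-Schulz inverse error:", np.linalg.norm(G @ A - np.eye(5)))

    alpha = 1.0 / np.linalg.norm(A, 2)   # conservative step size, alpha < 2/lambda_max
    theta = richardson(A, b, alpha)
    print("Richardson residual:", np.linalg.norm(A @ theta - b))

The paper's contribution concerns higher order versions of such recursions with improved convergence rates and parallel-friendly factorizations; the sketch above only fixes the basic notation.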

Author information

Correspondence to Alexander Stotsky.

About this article

Cite this article

Stotsky, A. Unified frameworks for high order Newton-Schulz and Richardson iterations: a computationally efficient toolkit for convergence rate improvement. J. Appl. Math. Comput. 60, 605–623 (2019). https://doi.org/10.1007/s12190-018-01229-8

Keywords

  • Richardson iteration
  • Neumann series
  • High order Newton-Schulz algorithm
  • Least squares estimation
  • Harmonic regressor
  • Strictly diagonally dominant matrix
  • Symmetric positive definite matrix
  • Ill-conditioned matrix
  • Polynomial preconditioning
  • Matrix power series factorization
  • Computationally efficient matrix inversion algorithm
  • Simultaneous calculations