On Monte Carlo and Quasi-Monte Carlo for Matrix Computations

  • Vassil Alexandrov
  • Diego Davila
  • Oscar Esquivel-Flores
  • Aneta Karaivanova
  • Todor Gurov
  • Emanouil Atanassov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10665)


This paper focuses on further minimizing the communication in Monte Carlo methods for linear algebra and thus improving the overall performance. The key idea is to produce a small set of covering Markov chains that are much longer than those usually generated. This approach enables a very efficient communication pattern for transmitting the sampled portion of the matrix in the parallel case. The approach is then extended to quasi-Monte Carlo. We compare the efficiency of the new approach for sparse approximate matrix inversion and for hybrid Monte Carlo and quasi-Monte Carlo methods for solving systems of linear algebraic equations. Experimental results demonstrating the efficiency of our approach on a set of test matrices are presented. The numerical experiments were executed on the MareNostrum III supercomputer at the Barcelona Supercomputing Center (BSC) and on the Avitohol supercomputer at the Institute of Information and Communication Technologies (IICT).
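The abstract refers to Markov chains for Monte Carlo linear algebra without spelling out the underlying estimator. As background only, here is a minimal sketch of the classical von Neumann–Ulam random-walk scheme for solving x = Lx + b (whose Neumann series sum_k L^k b is assumed to converge, e.g. ||L|| < 1). This is generic textbook machinery, not the communication-avoiding covering-chain variant of the paper; the function name `mc_solve` and its parameters are our own.

```python
import numpy as np

def mc_solve(L, b, n_chains=4000, chain_len=30, rng=None):
    """Monte Carlo estimate of the solution of x = L x + b.

    Each component x[i] is estimated by averaging random walks that start
    at index i, move with transition probabilities proportional to |L|,
    and accumulate the multiplicative weights W_k = prod L[s, s'] / p[s, s'].
    """
    rng = np.random.default_rng(rng)
    n = L.shape[0]
    absL = np.abs(L)
    row_sums = absL.sum(axis=1)      # row normalisation for the probabilities
    x = np.empty(n)
    for i in range(n):
        acc = 0.0
        for _ in range(n_chains):
            state, w = i, 1.0
            total = b[state]         # k = 0 term of the Neumann series
            for _ in range(chain_len):
                if row_sums[state] == 0.0:
                    break            # absorbing row: the walk terminates
                p = absL[state] / row_sums[state]
                nxt = rng.choice(n, p=p)
                w *= L[state, nxt] / p[nxt]
                state = nxt
                total += w * b[state]  # contribution of the k-th term
            acc += total
        x[i] = acc / n_chains
    return x
```

In this plain form each walk contributes one sample per step; the paper's contribution is precisely to reorganize such chains (few, long, covering) so that the sampled matrix entries can be communicated efficiently in the parallel setting.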


Keywords: Monte Carlo for linear algebra · Quasi-Monte Carlo for linear algebra · Hybrid methods



The work of the authors V.A., D.D., and O.E.-F. is supported by the Severo Ochoa programme of excellence, Spain. The work of the authors A.K. and T.G. is supported by the National Science Fund of Bulgaria under Grant DFNI-I02/8.



Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Vassil Alexandrov (1, 2, 4)
  • Diego Davila (2)
  • Oscar Esquivel-Flores (4)
  • Aneta Karaivanova (3)
  • Todor Gurov (3)
  • Emanouil Atanassov (3)

  1. ICREA - Catalan Institution for Advanced Research Studies, Barcelona, Spain
  2. Barcelona Supercomputing Center, Barcelona, Spain
  3. IICT, Bulgarian Academy of Sciences, Sofia, Bulgaria
  4. Inst. Tech. y de Estudios Superiores de Monterrey, Monterrey, Mexico
