Dynamic Programming Approaches to Optimizing the Blocking Strategy for Basic Matrix Decompositions

Abstract

In this chapter, we survey several approaches to optimizing the blocking strategy for basic matrix decompositions such as LU, Cholesky, and QR. Conventional blocking strategies, such as fixed-size blocking and recursive blocking, are widely used to optimize the performance of these decompositions. However, these strategies have only a small number of tunable parameters, such as the block size or the level of recursion, and are therefore not flexible enough to fully exploit modern high-performance architectures. Accordingly, several attempts have been made to define a much larger class of blocking strategies and to choose the best strategy among them according to the target machine and the matrix size. The number of candidate strategies is usually exponential in the matrix size, but with dynamic programming, the cost of the optimization can be reduced to a practical level. As representatives of such approaches, we survey variable-size blocking, generalized recursive blocking, and the combination of variable-size blocking with the TSQR algorithm. Directions for future research are also discussed.
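
To make the idea concrete, the following sketch (our own illustrative Python, not code from the chapter) shows how dynamic programming can select a variable-size blocking strategy for a one-sided factorization. The cost model step_cost, its constants, and the helper names are hypothetical placeholders; an actual auto-tuner would replace them with a model fitted to measurements on the target machine.

    # A minimal sketch, assuming a hypothetical cost model, of how dynamic
    # programming can pick a variable-size blocking strategy.

    def step_cost(m, b):
        """Hypothetical predicted time to factor a width-b panel of an m x m
        trailing matrix and to update the remaining m - b columns."""
        panel = 1.0e-9 * m * b * b            # BLAS-2-like panel factorization
        update = 0.5e-9 * m * (m - b) * b     # BLAS-3-like trailing update
        overhead = 2.0e-6                     # fixed per-block overhead
        return panel + update + overhead

    def optimal_blocking(n, max_block=64):
        """Return (predicted_time, block_sizes) minimizing the modeled total time.

        The number of block-size sequences is exponential in n, but the recurrence
        T(m) = min_b [ step_cost(m, b) + T(m - b) ] is solved in O(n * max_block) time.
        """
        best = [0.0] * (n + 1)     # best[m]: minimal predicted time for an m x m trailing matrix
        choice = [0] * (n + 1)     # block size chosen in state m
        for m in range(1, n + 1):
            best[m] = float("inf")
            for b in range(1, min(max_block, m) + 1):
                t = step_cost(m, b) + best[m - b]
                if t < best[m]:
                    best[m], choice[m] = t, b
        # Recover the block-size sequence from the stored choices.
        blocks, m = [], n
        while m > 0:
            blocks.append(choice[m])
            m -= choice[m]
        return best[n], blocks

    if __name__ == "__main__":
        time_pred, blocks = optimal_blocking(2000)
        print("predicted time: %.4f s, first blocks: %s" % (time_pred, blocks[:5]))

This illustrates the general mechanism by which dynamic programming reduces the exponential set of candidate strategies to a polynomial-time search; the variable-size blocking, generalized recursive blocking, and TSQR-based approaches surveyed in the chapter refine this idea with richer state spaces and machine-specific cost models.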

Acknowledgments

We would like to express our sincere gratitude to the anonymous reviewer, whose comments helped us to greatly improve the quality of this chapter. We are also grateful to the members of the Auto-Tuning Research Group for engaging in valuable discussions and to Prof. Shao-Liang Zhang, Prof. Tomohiro Sogabe, and other members of the Zhang laboratory for their continuous support. This study was supported in part by the Ministry of Education, Science, Sports, and Culture through a Grant-in-Aid for Scientific Research on Priority Areas, “i-explosion” (No. 21013014), a Grant-in-Aid for Scientific Research (B) (No. 21300013), a Grant-in-Aid for Scientific Research (C) (No. 21560065), and a Grant-in-Aid for Scientific Research (A) (No. 20246027).

Author information

Corresponding author

Correspondence to Yusaku Yamamoto.

Copyright information

© 2011 Springer New York

About this chapter

Cite this chapter

Yamamoto, Y., Fukaya, T. (2011). Dynamic Programming Approaches to Optimizing the Blocking Strategy for Basic Matrix Decompositions. In: Naono, K., Teranishi, K., Cavazos, J., Suda, R. (eds) Software Automatic Tuning. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-6935-4_5

  • DOI: https://doi.org/10.1007/978-1-4419-6935-4_5

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4419-6934-7

  • Online ISBN: 978-1-4419-6935-4
