Abstract
Sparse matrix computations arise in many scientific and engineering applications, but their performance is limited by the growing gap between processor and memory speeds. In this paper, we present a case study of an important sparse matrix triple product problem that commonly arises in primal-dual optimization methods.
Instead of a generic two-phase algorithm, we devise and implement a single-pass algorithm that exploits the block-diagonal structure of the matrix. Our algorithm uses fewer floating-point operations and roughly half the memory of the two-phase algorithm. The speed-up of the one-phase scheme over the two-phase scheme is 2.04 on a 900 MHz Intel Itanium-2, 1.63 on a 1 GHz Power-4, and 1.99 on a 900 MHz Sun Ultra-3.
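The contrast between the two schemes can be sketched in Python with SciPy. This is only an illustrative reconstruction under assumed interfaces, not the paper's tuned kernels: the function names and the representation of the block-diagonal matrix `D` as a list of dense blocks with column offsets are assumptions, and the paper's single-pass implementation fuses the work at a much lower level than shown here.

```python
import numpy as np
import scipy.sparse as sp

def triple_product_two_phase(A, D):
    # Generic two-phase approach: materialise the intermediate
    # product T = D * A^T, then multiply A * T.
    T = D @ A.T
    return A @ T

def triple_product_one_pass(A, blocks, offsets):
    # Exploit the block-diagonal structure of D: each diagonal block
    # D_k touches only a contiguous group of columns of A, so the
    # triple product is the sum of small products A_k D_k A_k^T,
    # accumulated without storing the full intermediate D * A^T.
    m = A.shape[0]
    result = sp.csr_matrix((m, m))
    for D_k, start in zip(blocks, offsets):
        n_k = D_k.shape[0]
        A_k = A[:, start:start + n_k]   # columns of A matching block k
        result = result + A_k @ sp.csr_matrix(D_k) @ A_k.T
    return result
```

As a usage check, building the explicit block-diagonal matrix with `scipy.sparse.block_diag` and comparing both routines on a random sparse `A` should give the same result up to floating-point round-off.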
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Im, EJ., Bustany, I., Ashcraft, C., Demmel, J.W., Yelick, K.A. (2006). Performance Tuning of Matrix Triple Products Based on Matrix Structure. In: Dongarra, J., Madsen, K., Waśniewski, J. (eds) Applied Parallel Computing. State of the Art in Scientific Computing. PARA 2004. Lecture Notes in Computer Science, vol 3732. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11558958_89
DOI: https://doi.org/10.1007/11558958_89
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-29067-4
Online ISBN: 978-3-540-33498-9