CSPR: Column Only SPARSE Matrix Representation for Performance Improvement on GPU Architecture

  • B. Neelima
  • Prakash S. Raghavendra
Part of the Communications in Computer and Information Science book series (CCIS, volume 203)

Abstract

General-purpose computation on the graphics processing unit (GPU) is prominent in today's high performance computing. Porting data-parallel applications to the GPU yields a baseline performance improvement simply because of the increased number of computational units, but better performance can be obtained when the application is fine-tuned for the specific architecture under consideration. One such widely used, computation-intensive kernel is sparse matrix vector multiplication (SPMV), which lies at the core of sparse-matrix-based applications. Most existing sparse matrix storage formats were developed with the central processing unit (CPU) or multi-core processors in mind. This paper presents a new sparse matrix representation designed for the graphics processor architecture. For the class of applications that fit the proposed format, it gives a 2x to 5x performance improvement over CSR (compressed sparse row) format, a 2x to 54x improvement over COO (coordinate) format, and a 3x to 10x improvement over the CSR-vector format. It also gives 10% to 133% improvement in CPU-to-GPU memory transfer of the sparse matrix access (index) information. The paper describes the new format and the need for it, together with complete experimental details and comparison results.
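For context, CSR, the main baseline mentioned above, stores a matrix as a row-pointer array, a column-index array, and a value array. Below is a minimal sketch in CUDA of the standard scalar CSR SPMV kernel (one thread per row); the array names (row_ptr, col_idx, vals) and launch configuration are illustrative assumptions following the common formulation from the literature, not code from the paper itself.

```c
// Illustrative sketch of the scalar CSR SPMV baseline: one thread per row.
// Array names are assumptions for illustration, not taken from the paper.
__global__ void spmv_csr_scalar(int num_rows,
                                const int   *row_ptr,  // size num_rows + 1
                                const int   *col_idx,  // size nnz
                                const float *vals,     // size nnz
                                const float *x,        // dense input vector
                                float       *y)        // dense output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float dot = 0.0f;
        // Accumulate the dot product of one sparse row with x.
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            dot += vals[j] * x[col_idx[j]];
        y[row] = dot;
    }
}
```

In this scalar formulation, neighboring threads walk disjoint segments of col_idx and vals, so their loads are not contiguous in memory; the resulting uncoalesced accesses are the usual motivation for alternative kernels such as CSR-vector and for GPU-oriented storage formats like the one proposed here.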

Keywords

GPU · CPU · SPMV · CSR · COO · CSR-vector



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • B. Neelima (1)
  • Prakash S. Raghavendra (1)

  1. Department of Information Technology, NITK Surathkal, Mangalore, India
