Performance and Numerical Accuracy Evaluation of Heterogeneous Multicore Systems for Krylov Orthogonal Basis Computation

  • Jérôme Dubois
  • Christophe Calvin
  • Serge Petiton
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6449)


We study the numerical behavior of heterogeneous systems, such as CPUs combined with GPUs or IBM Cell processors, for several orthogonalization processes. We focus on the influence of these accelerators' differing floating-point arithmetic on Gram-Schmidt orthogonalization in single and double precision. For dense matrices we observe a loss of at worst 1 digit of accuracy for CUDA-enabled GPUs, with a 20x speed-up, and a loss of 2 digits for the Cell processor, with a 7x speed-up. For sparse matrices, the GPU results are very close to the CPU's, with a 10x speed-up. We conclude that the Cell processor is a good accelerator in double precision thanks to its full IEEE compliance in that mode, but insufficient for single-precision applications. The GPU speed-up is better than the Cell's, and its decent IEEE support delivers results close to the CPU's in both precisions.
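The kind of experiment the abstract describes can be illustrated with a minimal NumPy sketch (not the authors' code): classical Gram-Schmidt run on the same matrix in single and double precision, measuring the loss of orthogonality via the common metric ‖QᵀQ − I‖. The matrix size and random test data here are arbitrary assumptions for illustration.

```python
import numpy as np

def classical_gram_schmidt(A):
    """Orthonormalize the columns of A with classical Gram-Schmidt (CGS)."""
    n, m = A.shape
    Q = np.zeros_like(A)
    for j in range(m):
        v = A[:, j].copy()
        # Subtract the projections onto all previously computed basis vectors.
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(0)
A64 = rng.standard_normal((1000, 50))  # arbitrary dense test matrix

for dtype in (np.float32, np.float64):
    Q = classical_gram_schmidt(A64.astype(dtype))
    # Loss of orthogonality: distance of Q^T Q from the identity.
    err = np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1], dtype=dtype))
    print(dtype.__name__, err)
```

On a well-conditioned matrix like this one, the single-precision error sits several orders of magnitude above the double-precision one, which is the gap the paper's "digits lost" comparison quantifies across CPU, GPU, and Cell arithmetic.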


Keywords: parallel and distributed computing · numerical algorithms for CS&E · performance analysis





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Jérôme Dubois (1, 2)
  • Christophe Calvin (1)
  • Serge Petiton (2)
  1. Commissariat à l'Énergie Atomique, CEA-Saclay/DEN/DANS/DM2S/SERMA/LLPR, France
  2. Laboratoire d'Informatique Fondamentale de Lille, Université de Lille 1, France
