Finding Enclosures for Linear Systems Using Interval Matrix Multiplication in CUDA

  • Alexander Dallmann
  • Philip-Daniel Beck
  • Jürgen Wolff von Gudenberg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8385)

Abstract

In this paper we present CUDA kernels that compute an interval matrix product. Starting from a naive implementation, we investigate possible speedups using well-known techniques from standard matrix multiplication. We also evaluate the speedup achieved when our kernels are used to accelerate a variant of an existing algorithm that finds an enclosure for the solution of a linear system. Moreover, the quality of the computed enclosure is discussed.

Keywords

GPGPU · Interval arithmetic · Linear algebra · Parallel computing

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Alexander Dallmann (1)
  • Philip-Daniel Beck (1)
  • Jürgen Wolff von Gudenberg (1)

  1. Chair of Computer Science II, University of Würzburg, Würzburg, Germany