Hardware Acceleration of Matrix Multiplication over Small Prime Finite Fields

  • Shane T. Fleming
  • David B. Thomas
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7806)

Abstract

Dense matrix-matrix multiplication over small finite fields is a common operation in many application domains, such as cryptography, random number generation, and error-correcting codes. This paper shows that FPGAs have the potential to greatly accelerate this time-consuming operation, and in particular that systolic-array-based approaches are both practical and efficient when using large modern devices. A number of finite-field-specific architectural optimisations are introduced, allowing n×n matrices to be processed in O(n) cycles for matrix sizes up to n = 350. Comparison with optimised software implementations on a single-core CPU shows that an FPGA accelerator on a Virtex-7 XC7V2000T can achieve between 80× and 700× speed-up for GF(2^k), while for GF(3) and larger finite fields it can provide practical speed-ups of 1000× or more.
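
As a point of reference for the operation being accelerated, the sketch below gives a minimal software model of dense matrix-matrix multiplication over GF(3) in plain C. It is an illustrative assumption rather than the paper's FPGA design: the matrix size N, the data layout, and the triple-loop schedule are chosen for clarity, whereas the hardware version distributes the same multiply-accumulate cells across a systolic array so that an n×n product completes in O(n) cycles.

    /* Minimal software sketch: C = A * B over GF(P) for a small prime P.
     * Illustrative only; N and the loop schedule are assumptions, not the
     * paper's architecture. Each iteration of the inner loop corresponds
     * to the multiply-accumulate performed by one systolic processing
     * element per cycle in the FPGA design. */
    #include <stdio.h>
    #include <stdint.h>

    #define N 4   /* small example size; the paper scales up to n = 350 */
    #define P 3   /* field modulus: GF(3) */

    static void matmul_gfp(const uint8_t A[N][N], const uint8_t B[N][N],
                           uint8_t C[N][N])
    {
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                unsigned acc = 0;
                for (int k = 0; k < N; k++)
                    acc += A[i][k] * B[k][j];  /* multiply-accumulate */
                C[i][j] = (uint8_t)(acc % P); /* reduce into GF(P) */
            }
        }
    }

    int main(void)
    {
        const uint8_t A[N][N] = {{1,2,0,1},{0,1,2,2},{2,0,1,0},{1,1,1,2}};
        const uint8_t B[N][N] = {{2,1,0,0},{1,0,2,1},{0,2,1,2},{2,2,0,1}};
        uint8_t C[N][N];

        matmul_gfp(A, B, C);

        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++)
                printf("%u ", (unsigned)C[i][j]);
            printf("\n");
        }
        return 0;
    }

Replacing P with 2 recovers the GF(2) case, where multiplication and addition reduce to bitwise AND and XOR and software can pack many elements per machine word; this is consistent with the smaller relative speed-ups reported above for GF(2^k) compared with GF(3) and larger fields.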

Keywords

Galois Fields · Matrix Multiplication · Finite Fields · FPGA · Hardware Acceleration · Systolic Arrays



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Shane T. Fleming (1)
  • David B. Thomas (1)

  1. Imperial College London, London, United Kingdom
