A Vector Caching Scheme for Streaming FPGA SpMV Accelerators
- Cite this paper as:
- Umuroglu Y., Jahre M. (2015) A Vector Caching Scheme for Streaming FPGA SpMV Accelerators. In: Sano K., Soudris D., Hübner M., Diniz P. (eds) Applied Reconfigurable Computing. ARC 2015. Lecture Notes in Computer Science, vol 9040. Springer, Cham
The sparse matrix-vector multiplication (SpMV) kernel is important for many scientific computing applications. Implementing SpMV in a way that best utilizes hardware resources is challenging due to its input-dependent memory access patterns. FPGA-based accelerators that buffer the entire irregularly accessed vector in on-chip memory enable highly efficient SpMV implementations, but are limited to smaller matrices by on-chip memory capacity. Conversely, conventional caches can work with large matrices, but cache misses can cause many stalls that decrease efficiency. In this paper, we explore the intersection between these approaches and attempt to combine the strengths of each. We propose a hardware-software caching scheme that exploits preprocessing to enable performant and area-effective SpMV acceleration. Our experiments with a set of large sparse matrices indicate that our scheme can achieve nearly stall-free execution, with an average stall time of 1.1%, while using 70% less on-chip memory than buffering the entire vector. The preprocessing step enables our scheme to offer up to 40% higher performance than a conventional cache of the same size by eliminating cold-miss penalties.
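To make the irregular access pattern concrete, here is a minimal sketch of SpMV over a matrix in compressed sparse row (CSR) format. This is an illustrative example, not the authors' accelerator design: the reads of `x[col_idx[k]]` are driven by the matrix's sparsity pattern, which is exactly the input-dependent, irregularly accessed vector traffic that the proposed vector caching scheme targets.

```python
# Minimal CSR sparse matrix-vector multiply (y = A * x).
# Illustrative sketch only; names and structure are not from the paper.

def spmv_csr(values, col_idx, row_ptr, x):
    """values/col_idx/row_ptr are the standard CSR arrays of an n-row matrix."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for row in range(n):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            # col_idx[k] depends on the sparsity pattern, so x is read
            # in an irregular, input-dependent order -- the source of
            # cache misses that the paper's scheme aims to eliminate.
            acc += values[k] * x[col_idx[k]]
        y[row] = acc
    return y

# 3x3 example matrix: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Note that the matrix data (`values`, `col_idx`, `row_ptr`) streams sequentially, while only `x` is accessed irregularly, which is why buffering or caching the vector is the key design decision in streaming SpMV accelerators.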