Application of Memory-Based Computing

  • Somnath Paul
  • Swarup Bhunia
Chapter

Abstract

Memories are typically associated with data storage in computer systems. They either store data temporarily (in volatile memories such as SRAM and DRAM) or permanently (in non-volatile memories such as Flash and magnetic disks). In this chapter we first outline how embedded memory may also be used for computation in a time-multiplexed, hardware-reconfigurable system. In particular, this chapter makes the following key contributions:
  • We propose a hardware architecture that utilizes a dense 2-D memory array for reconfigurable computing. The main idea is to map multiple multi-input, multi-output LUTs to the embedded memory array of each compute block and evaluate them over multiple clock cycles. The proposed framework is therefore spatio-temporal, unlike conventional hardware-reconfigurable frameworks, which are fully spatial. In the proposed architecture, multiple computing elements communicate with one another over a time-multiplexed programmable interconnect (see the first sketch after this list).

  • Along with the hardware architecture, we outline an effective software framework that maps an input application onto the proposed hardware-reconfigurable framework. The software flow partitions the input application, maps the partitions to multiple compute blocks, and finally places and routes the mapped design (see the second sketch after this list).

  • We describe the application domains of the proposed memory-based computing model. We identify that the model can be applied to realize a stand-alone reconfigurable framework for mapping random logic, and that it can also be used for hardware acceleration of algorithmic tasks.
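
To make the time-multiplexed LUT evaluation concrete, the following Python sketch models a single embedded memory array that holds the truth tables of several multi-input, multi-output LUTs and reads one of them per clock cycle. The array geometry, LUT sizes, addressing scheme, and evaluation schedule are illustrative assumptions, not the chapter's actual design.

```python
# Minimal sketch (assumptions, not the chapter's design): several multi-input,
# multi-output LUTs share one 2-D memory array and are evaluated over
# successive clock cycles.

NUM_INPUTS = 4    # inputs per LUT -> 2**4 truth-table rows per LUT
NUM_OUTPUTS = 2   # output bits per LUT (multi-output)
NUM_LUTS = 3      # LUTs time-multiplexed onto the same array

# One row per (LUT, input combination); each row stores NUM_OUTPUTS bits.
memory = [[0] * NUM_OUTPUTS for _ in range(NUM_LUTS * 2**NUM_INPUTS)]

def program_lut(lut_id, truth_table):
    """Load a LUT's truth table (2**NUM_INPUTS tuples of output bits) into the array."""
    base = lut_id * 2**NUM_INPUTS
    for offset, out_bits in enumerate(truth_table):
        memory[base + offset] = list(out_bits)

def evaluate_cycle(lut_id, input_bits):
    """One clock cycle: a single memory read addressed by (LUT id, input bits)."""
    addr = lut_id * 2**NUM_INPUTS + int("".join(map(str, input_bits)), 2)
    return memory[addr]

# LUT 0: output 0 = 4-input AND, output 1 = 4-input OR.
program_lut(0, [(int(v == 0b1111), int(v != 0)) for v in range(2**NUM_INPUTS)])

# Temporal evaluation: different reads of the same array in different cycles.
# LUT 1 is left unprogrammed here and simply reads back all zeros.
schedule = [(0, [1, 1, 1, 1]), (0, [0, 1, 0, 1]), (1, [1, 0, 0, 0])]
for cycle, (lut, bits) in enumerate(schedule):
    print(f"cycle {cycle}: LUT {lut} inputs={bits} -> {evaluate_cycle(lut, bits)}")
```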

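As a structural illustration of such a mapping flow, the hypothetical Python sketch below partitions a toy dataflow graph into compute blocks and then lists the inter-block nets that would be routed over the time-multiplexed interconnect. The graph, block capacity, and greedy heuristics are assumptions for illustration, not the chapter's actual algorithms.

```python
# Hypothetical sketch of the flow's structure: partition -> map to compute
# blocks -> place and route. All heuristics below are illustrative assumptions.

from collections import defaultdict

BLOCK_CAPACITY = 2  # max operations per compute block (illustrative)

# Toy dataflow graph: node -> list of predecessor nodes.
app_graph = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"], "e": ["c", "d"]}

def partition(graph, capacity):
    """Greedy partitioning: fill compute blocks in topological order."""
    order, placed = [], set()
    while len(order) < len(graph):            # simple topological sweep
        for node, preds in graph.items():
            if node not in placed and all(p in placed for p in preds):
                order.append(node)
                placed.add(node)
    return [order[i:i + capacity] for i in range(0, len(order), capacity)]

def place_and_route(blocks, graph):
    """Assign each block a slot and collect the inter-block nets that must be
    carried by the time-multiplexed programmable interconnect."""
    slot_of = {n: i for i, blk in enumerate(blocks) for n in blk}
    nets = defaultdict(list)
    for node, preds in graph.items():
        for p in preds:
            if slot_of[p] != slot_of[node]:
                nets[(slot_of[p], slot_of[node])].append((p, node))
    return slot_of, dict(nets)

blocks = partition(app_graph, BLOCK_CAPACITY)
slots, nets = place_and_route(blocks, app_graph)
print("compute blocks:", blocks)
print("inter-block nets:", nets)
```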

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Somnath Paul (1)
  • Swarup Bhunia (2)
  1. Intel Labs, Hillsboro, USA
  2. Department of EECS, Case Western Reserve University, Cleveland, USA