
A Dynamic Cache Architecture for Efficient Memory Resource Allocation in Many-Core Systems

  • Conference paper

Applied Reconfigurable Computing (ARC 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9625)

Abstract

Today’s computing systems still mostly consist of homogeneous multi-core processors with statically allocated computing resources. In the future, these systems will evolve into heterogeneous processing systems with more diverse processing units and new requirements. With multiple applications running concurrently on such many-core platforms, the applications compete for computational resources and thus for processing power. However, not all applications can make efficient use of all available resources at all times, which leads to the challenge of efficiently allocating tasks to computational resources at run-time. This issue is especially crucial for cache resources, where the bandwidth and the available capacity strongly bound computation time. For example, streaming-based algorithms run concurrently with block-based computations, which leads to an inefficient allocation of cache resources.

In this paper, we propose a dynamic cache architecture that enables the parameterization and reallocation of cache memory resources between cores at run-time. The reallocation incurs only little overhead, so that each algorithm class can be executed more efficiently on the many-core platform. We contribute a cache architecture that is, for the first time, prototyped on programmable hardware to demonstrate the feasibility of the proposed approach. Finally, we evaluate the overhead introduced by the increased flexibility of the hardware architecture.
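To make the idea concrete, the following Python sketch models a way-partitioned shared cache whose ways can be reassigned between cores at run-time. It is an illustration of the general technique only, not the authors' hardware design: the class, method names, and the naive fill/flush policy are all hypothetical.

```python
# Hypothetical sketch of run-time cache-way reallocation between cores.
# Not the paper's architecture; a minimal functional model for illustration.
class WayPartitionedCache:
    def __init__(self, num_sets, num_ways, num_cores):
        self.num_sets = num_sets
        # tags[set][way] holds the cached tag, or None if the line is empty.
        self.tags = [[None] * num_ways for _ in range(num_sets)]
        # Initially split the ways evenly between the cores.
        per_core = num_ways // num_cores
        self.ways_of = {c: list(range(c * per_core, (c + 1) * per_core))
                        for c in range(num_cores)}

    def access(self, core, addr):
        """Return True on hit; on miss, fill one of the core's ways."""
        s, tag = addr % self.num_sets, addr // self.num_sets
        ways = self.ways_of[core]
        for w in ways:
            if self.tags[s][w] == tag:
                return True
        # Miss: naive fill policy, always evict the core's first assigned way.
        self.tags[s][ways[0]] = tag
        return False

    def reallocate(self, src, dst, n):
        """Move n ways from core src to core dst, flushing their contents."""
        moved, self.ways_of[src] = self.ways_of[src][:n], self.ways_of[src][n:]
        for w in moved:                      # invalidate lines in moved ways
            for s in range(self.num_sets):
                self.tags[s][w] = None
        self.ways_of[dst].extend(moved)
```

In this toy model, reallocation flushes the moved ways, so the donating core pays a one-time miss penalty afterwards; the paper's evaluation of reallocation overhead addresses exactly this kind of cost in hardware.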



Acknowledgment

This research work is supported by the German Research Foundation (DFG) within the Transregio SFB Invasive Computing (DFG SFB/TRR89).

Author information


Corresponding author

Correspondence to Carsten Tradowsky.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Tradowsky, C., Cordero, E., Orsinger, C., Vesper, M., Becker, J. (2016). A Dynamic Cache Architecture for Efficient Memory Resource Allocation in Many-Core Systems. In: Bonato, V., Bouganis, C., Gorgon, M. (eds) Applied Reconfigurable Computing. ARC 2016. Lecture Notes in Computer Science, vol 9625. Springer, Cham. https://doi.org/10.1007/978-3-319-30481-6_29

  • DOI: https://doi.org/10.1007/978-3-319-30481-6_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-30480-9

  • Online ISBN: 978-3-319-30481-6

  • eBook Packages: Computer Science (R0)
