Systematic Placement of Dynamic Objects Across Heterogeneous Memory Hierarchies

  • Chapter in: Dynamic Memory Management for Embedded Systems

Abstract

In the previous chapters we explained how to improve different aspects of the memory subsystem when dynamic memory is used in an embedded system. In particular, we showed how to design efficient custom dynamic memory managers to serve the dynamic memory requests of the applications.

Notes

  1. It is possible that the individual instances of a single DDT have very different access characteristics from one another. A mechanism to differentiate between them would be an interesting basis for future work.

  2. In extreme cases of platforms with many small memory blocks, this adjustment can lead to an unbounded increase of the pool size. To avoid this effect, the designer can use a smaller adjustment value, or the algorithm can be changed to increase the size only the first time that the pool is split (see the sketch after these notes).

  3. It would still be possible to split the pool of packet bodies into two areas, one cacheable and the other non-cacheable, allocating space from the first one for as long as possible. However, how big should the cacheable pool be? Answering that question would require an analysis very similar to the one we propose! Moreover, given the difficulty of predicting the run-time behavior of cache hierarchies, some unwanted interactions could still occur.

  4. Energy consumption may increase in these cases because the cost of writing the input data into the scratchpad and reading it back, and then of writing the results into the scratchpad and reading them back before posting them to the DRAM, is added to the cost of accessing the data directly in the DRAM. As a result, regardless of whether the transfers are performed by the processor or by the DMA, a net overhead of two writes and two reads to the scratchpad is incurred without any reuse to pay for it (the accounting after these notes makes this explicit).
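
A minimal sketch of the one-time pool enlargement mentioned in note 2, assuming a simple model in which memory blocks are visited in order of preference. All names here (mem_block_t, place_pool, OVERSIZE_NUM/OVERSIZE_DEN) are hypothetical and not part of the book's tool flow:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        size_t free_bytes;            /* space still available in this block */
    } mem_block_t;

    #define OVERSIZE_NUM 11           /* one-time +10% enlargement, as an example */
    #define OVERSIZE_DEN 10

    /* Distribute a pool of pool_size bytes over the platform's memory blocks.
     * Returns the number of bytes that could not be placed (0 on success). */
    size_t place_pool(size_t pool_size, mem_block_t *blocks, size_t n_blocks)
    {
        bool grown = false;           /* has the one-time enlargement been applied? */
        size_t remaining = pool_size;

        for (size_t i = 0; i < n_blocks && remaining > 0; ++i) {
            if (blocks[i].free_bytes == 0)
                continue;

            if (blocks[i].free_bytes < remaining && !grown) {
                /* The pool must be split: enlarge it once, and never again,
                 * so that many small blocks cannot cause unbounded growth. */
                remaining = remaining * OVERSIZE_NUM / OVERSIZE_DEN;
                grown = true;
            }

            size_t chunk = (blocks[i].free_bytes < remaining)
                               ? blocks[i].free_bytes : remaining;
            blocks[i].free_bytes -= chunk;    /* assign this fragment of the pool */
            remaining -= chunk;
        }
        return remaining;
    }

With this change, a platform with many small blocks pays the enlargement at most once, so the placed pool size is bounded by OVERSIZE_NUM/OVERSIZE_DEN times the requested size.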
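
The overhead mentioned in note 4 can be written out per datum that is used only once. The symbols below are illustrative notation rather than the book's cost model: E^r_X and E^w_X denote the energy of one read or write to memory X (DRAM or scratchpad, SPM).

    \[
    \begin{aligned}
    E_{\text{direct}}   &= E^{r}_{\text{DRAM}} + E^{w}_{\text{DRAM}} \\
    E_{\text{buffered}} &= \bigl(E^{r}_{\text{DRAM}} + E^{w}_{\text{SPM}}\bigr)
                         + E^{r}_{\text{SPM}} + E^{w}_{\text{SPM}}
                         + \bigl(E^{r}_{\text{SPM}} + E^{w}_{\text{DRAM}}\bigr) \\
    E_{\text{overhead}} &= E_{\text{buffered}} - E_{\text{direct}}
                         = 2\,E^{w}_{\text{SPM}} + 2\,E^{r}_{\text{SPM}} > 0
    \end{aligned}
    \]

The first parenthesis is the copy of the input into the scratchpad and the second is the copy of the result back to the DRAM; since the direct scheme already pays one DRAM read and one DRAM write, the difference is exactly the two scratchpad writes and two scratchpad reads cited in the note.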

Author information

Corresponding author: David Atienza Alonso.

Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Atienza Alonso, D. et al. (2015). Systematic Placement of Dynamic Objects Across Heterogeneous Memory Hierarchies. In: Dynamic Memory Management for Embedded Systems. Springer, Cham. https://doi.org/10.1007/978-3-319-10572-7_7

  • DOI: https://doi.org/10.1007/978-3-319-10572-7_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-10571-0

  • Online ISBN: 978-3-319-10572-7

  • eBook Packages: Engineering, Engineering (R0)
