Encyclopedia of Database Systems

2018 Edition | Editors: Ling Liu, M. Tamer Özsu

Memory Hierarchy

  • Stefan Manegold
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_657

Synonyms

Hierarchical memory system

Definition

A Hierarchical Memory System – or Memory Hierarchy for short – is an economical solution for providing computer programs with (virtually) unlimited fast memory, taking advantage of locality in program behavior and of the cost-performance trade-off of memory technology. Computer storage and memory hardware – from disk drives to DRAM main memory to SRAM CPU caches – share the limitation that the faster a technology is, the more expensive it becomes per unit of capacity, and thus the smaller the amount that can be provided. Consequently, memory hierarchies are organized into several levels, ranging from huge, inexpensive, but slow disk systems over DRAM main memory and SRAM CPU caches (both off- and on-chip) to the registers in the CPU core. Each level closer to the CPU is faster but smaller than the next level one step down in the hierarchy. Memory hierarchies exploit the principle of locality, i.e., the property that computer programs do not access all their code and data uniformly, but rather focus, at any given time, on referencing only small portions of their address space.
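
The effect of the hierarchy can be made visible with a small experiment. The following C sketch is not part of the original entry; the array size, the assumed 64-byte cache line, and all names are illustrative. It sums the same large array twice: once sequentially, exploiting spatial locality, and once in STRIDE interleaved passes that cause every cache line to be fetched from memory repeatedly, even though both variants perform exactly the same number of additions.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N      (64L * 1024 * 1024)  /* 64 Mi ints (~256 MB), far larger than any CPU cache */
    #define STRIDE 16                   /* 16 ints * 4 B = 64 B, one (assumed) cache line */

    int main(void) {
        int *a = malloc(N * sizeof(int));
        if (a == NULL) return 1;
        for (long i = 0; i < N; i++) a[i] = (int)i;   /* initialize; faults in all pages */

        long sum = 0;

        clock_t t0 = clock();
        for (long i = 0; i < N; i++)                  /* sequential scan: each cache line */
            sum += a[i];                              /* is fetched from memory only once */
        clock_t t1 = clock();

        for (long s = 0; s < STRIDE; s++)             /* strided scans: same number of adds, */
            for (long i = s; i < N; i += STRIDE)      /* but every cache line is fetched */
                sum += a[i];                          /* once per pass, i.e., STRIDE times */
        clock_t t2 = clock();

        printf("sequential: %.2f s   strided: %.2f s   (sum = %ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        free(a);
        return 0;
    }

Compiled with, e.g., cc -O2 and run on a commodity machine, the strided variant typically takes several times longer than the sequential one; the gap stems entirely from the additional cache-line and DRAM traffic, which is why cache-conscious database algorithms strive for sequential access patterns.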

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. CWI, Amsterdam, The Netherlands

Section editors and affiliations

  • Anastasia Ailamaki
  1. Informatique et Communications, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland