Encyclopedia of Database Systems

Editors: LING LIU, M. TAMER ÖZSU

Memory Hierarchy

  • Stefan Manegold
Reference work entry
DOI: https://doi.org/10.1007/978-0-387-39940-9_657

Definition

A Hierarchical Memory System – or Memory Hierarchy for short – is an economical solution for providing computer programs with (virtually) unlimited fast memory, taking advantage of locality of reference and of the cost-performance trade-offs of memory technology. Computer storage and memory hardware – from disk drives to DRAM main memory to SRAM CPU caches – share the limitation that the faster they become, the more expensive (per unit of capacity), and thus the smaller, they are. Consequently, memory hierarchies are organized into several levels, ranging from huge, inexpensive, but slow disk systems over DRAM main memory and SRAM CPU caches (both off- and on-chip) to the registers in the CPU core. Each level closer to the CPU is faster but smaller than the next level one step down in the hierarchy. Memory hierarchies exploit the principle of locality, i.e., the property that computer programs do not access all their code and data uniformly, but rather focus on referencing only small...


Recommended Reading

  1. Ailamaki A., Boncz P.A., and Manegold S. (eds.). Proc. Workshop on Data Management on New Hardware, 2005.
  2. Ailamaki A., Boncz P.A., and Manegold S. (eds.). Proc. Workshop on Data Management on New Hardware, 2006.
  3. Ailamaki A. and Luo Q. (eds.). Proc. Workshop on Data Management on New Hardware, 2007.
  4. Ailamaki A.G., DeWitt D.J., Hill M.D., and Wood D.A. DBMSs on a modern processor: where does time go? In Proc. 25th Int. Conf. on Very Large Data Bases, 1999, pp. 266–277.
  5. Boncz P.A., Manegold S., and Kersten M.L. Database architecture optimized for the new bottleneck: memory access. In Proc. 25th Int. Conf. on Very Large Data Bases, 1999, pp. 54–65.
  6. Denning P.J. The working set model for program behaviour. Commun. ACM, 11(5):323–333, 1968.
  7. Denning P.J. The locality principle. Commun. ACM, 48(7):19–24, 2005.
  8. Hennessy J.L. and Patterson D.A. Computer Architecture – A Quantitative Approach, 3rd edn. Morgan Kaufmann, San Mateo, CA, USA, 2003.
  9. Hill M.D. and Smith A.J. Evaluating associativity in CPU caches. IEEE Trans. Comput., 38(12):1612–1630, December 1989.
  10. Kilburn T., Edwards D.B.C., Lanigan M.J., and Sumner F.H. One-level storage system. IRE Trans. Electronic Comput., EC-11(2):223–235, April 1962.
  11. Manegold S. Understanding, Modeling, and Improving Main-Memory Database Performance. PhD thesis, Universiteit van Amsterdam, Amsterdam, The Netherlands, December 2002.
  12. Moore G.E. Cramming more components onto integrated circuits. Electronics, 38(8):114–117, April 1965.
  13. Ross K. and Luo Q. (eds.). Proc. Workshop on Data Management on New Hardware, 2007.
  14. Shatdal A., Kant C., and Naughton J. Cache conscious algorithms for relational query processing. In Proc. 20th Int. Conf. on Very Large Data Bases, 1994, pp. 510–512.

Copyright information

© Springer Science+Business Media, LLC 2009
