Abstract
In this chapter, we address the question of how data is organized in memory. Relational database tables have a two-dimensional structure, but main memory is organized one-dimensionally, providing memory addresses that start at zero and increase serially up to the highest available location. The database storage layer therefore has to decide how to map the two-dimensional table structures onto the linear memory address space.
Self Test Questions
1. When DRAM can be accessed randomly at the same cost, why are consecutive accesses usually faster than strided accesses?
(a) With consecutive memory locations, the probability that the next requested location has already been loaded into a cache line is higher than with randomized or strided access. Furthermore, the memory page for consecutive accesses is probably already in the TLB.
(b) The bigger the stride, the higher the probability that two values are both in one cache line.
(c) Loading consecutive locations is not faster, since the CPU performs better on prefetching random locations than on prefetching consecutive locations.
(d) With modern CPU technologies such as TLBs, caches, and prefetching, all three access methods expose the same performance.

Correct answer: (a)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this chapter
Plattner, H. (2013). Data Layout in Main Memory. In: A Course in In-Memory Data Management. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36524-9_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-36523-2
Online ISBN: 978-3-642-36524-9