A Unified Analysis of Paging and Caching
- E. Torng
Paging (caching) is the problem of managing a two-level memory hierarchy so as to minimize the time required to process a sequence of memory accesses. To measure this quantity, which we refer to as the total memory access time, we define the system parameter miss penalty to represent the extra time required to access slow memory. We also introduce the system parameter page size. In the context of paging, the miss penalty is quite large, so most previous studies of on-line paging have implicitly set miss penalty = ∞ in order to simplify the model. We show that this seemingly insignificant simplification substantially alters the precision of derived results; for example, previous studies have essentially ignored page size. Consequently, we reintroduce the miss penalty and page size parameters to the paging problem and present a more accurate analysis of on-line paging (and caching). We validate this more accurate model by deriving intuitively appealing results for the paging problem that cannot be derived using the simplified model.
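The cost model described above can be sketched as a small simulation. This is a hypothetical illustration, not code from the paper: the cache size k, the unit hit cost, and the use of LRU replacement are all assumptions of the sketch. Each access costs one unit of fast-memory time, and each miss additionally incurs the miss penalty.

```python
from collections import OrderedDict

def total_access_time(accesses, k, miss_penalty, hit_time=1):
    """Simulate an LRU-managed fast memory holding k pages and return
    the total memory access time: every access costs hit_time, and a
    miss additionally costs miss_penalty (the extra time to reach
    slow memory)."""
    cache = OrderedDict()              # page -> None, in LRU order
    total = 0
    for page in accesses:
        total += hit_time
        if page in cache:
            cache.move_to_end(page)    # refresh recency on a hit
        else:
            total += miss_penalty      # extra cost of a miss
            if len(cache) >= k:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = None
    return total
```

With miss_penalty effectively infinite, only the miss count matters and the hit term vanishes from the comparison; a finite penalty is what lets the hits, and hence page size and locality, influence the analysis.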
First, we present a natural, quantifiable definition of the amount of locality of reference in any access sequence. We also point out that the amount of locality of reference in an access sequence should depend on page size, among other factors. We then show that deterministic and randomized marking algorithms, such as the popular least recently used (LRU) algorithm, achieve constant competitive ratios when processing typical access sequences that exhibit significant locality of reference; this is the first competitive analysis result that (partially) explains why LRU performs as well as it does in practice.
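To see concretely what a competitive ratio measures, one can compare LRU's miss count against the optimal offline eviction rule (Belady's farthest-in-future rule) on a given sequence. The sketch below is a hypothetical illustration of that comparison, not the paper's analysis:

```python
from collections import OrderedDict

def lru_misses(accesses, k):
    """Count misses for LRU with k fast-memory pages."""
    cache, misses = OrderedDict(), 0
    for p in accesses:
        if p in cache:
            cache.move_to_end(p)             # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= k:
                cache.popitem(last=False)    # evict least recently used
            cache[p] = None
    return misses

def opt_misses(accesses, k):
    """Count misses for the optimal offline rule: on a miss, evict the
    cached page whose next use lies farthest in the future."""
    cache, misses = set(), 0
    for i, p in enumerate(accesses):
        if p in cache:
            continue
        misses += 1
        if len(cache) >= k:
            def next_use(q):
                for j in range(i + 1, len(accesses)):
                    if accesses[j] == q:
                        return j
                return float('inf')          # never requested again
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return misses
```

On a cyclic sequence with no locality (e.g., 1, 2, 3 repeated with k = 2) LRU misses on every access while the offline optimum misses on only two of every three, so the ratio is bounded away from 1; sequences with strong locality of reference drive LRU's ratio toward 1.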
Next, we show that finite lookahead can be used to obtain algorithms with improved competitive ratios. In particular, we prove that modified marking algorithms with sufficient lookahead achieve competitive ratios of 2. This stands in stark contrast to the simplified model, where lookahead cannot improve competitive ratios.
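The benefit of lookahead can be illustrated with a simplified eviction rule, a hypothetical stand-in for the paper's modified marking algorithms: on a miss, consult the next l requests and evict the cached page needed latest (or not at all) within that window, breaking ties toward the least recently used page. With l = 0 this degenerates to plain LRU.

```python
from collections import OrderedDict

def lookahead_misses(accesses, k, l):
    """Count misses for an LRU variant with lookahead l: on a miss,
    evict the cached page whose first use in the next l requests is
    latest (infinity if it does not appear), ties broken toward the
    least recently used page."""
    cache, misses = OrderedDict(), 0
    for i, p in enumerate(accesses):
        if p in cache:
            cache.move_to_end(p)             # refresh recency on a hit
            continue
        misses += 1
        if len(cache) >= k:
            window = accesses[i + 1:i + 1 + l]
            def first_use(q):
                return window.index(q) if q in window else float('inf')
            # cache iterates LRU-first, so max() keeps the least
            # recently used page among ties
            del cache[max(cache, key=first_use)]
        cache[p] = None
    return misses
```

On the cyclic worst case for LRU (1, 2, 3 repeated with k = 2), even a lookahead of 2 cuts the miss count from 9 to 6 in this toy model.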
We conclude by using competitive analysis to evaluate the benefits of increasing associativity in caches. We accomplish this by fixing an algorithm and varying the system configuration, rather than following the usual process of fixing the system configuration and varying the algorithm.
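The "fix the algorithm, vary the configuration" idea can be mimicked in a simple simulation: hold the total cache capacity fixed, run LRU replacement within each set, and vary the associativity from direct-mapped to fully associative. The modulo index mapping below is an assumption of this sketch, not a detail from the paper.

```python
from collections import OrderedDict

def set_assoc_misses(accesses, num_sets, ways):
    """Count misses for a set-associative cache of num_sets * ways
    lines.  Block b maps to set b % num_sets; LRU replacement is used
    within each set."""
    sets = [OrderedDict() for _ in range(num_sets)]
    misses = 0
    for b in accesses:
        s = sets[b % num_sets]
        if b in s:
            s.move_to_end(b)            # refresh recency on a hit
        else:
            misses += 1
            if len(s) >= ways:
                s.popitem(last=False)   # evict LRU line in this set
            s[b] = None
    return misses
```

With capacity held at 4 lines, alternating between two blocks that collide in the same set (e.g., blocks 0 and 4) makes a direct-mapped cache miss on every access, while a fully associative cache of the same capacity suffers only the two cold misses; this is the kind of configuration comparison the competitive framework makes precise.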
Volume 20, Issue 2, pp. 175-200
- Key words: Paging, Caching, Competitive analysis, Locality of reference, Miss rate, Miss penalty, Memory access time, Cache associativity, Page size
- E. Torng, Department of Computer Science, 3115 Engineering Building, Michigan State University, East Lansing, MI 48824-1027, USA. email@example.com