Algorithmica, Volume 3, Issue 1, pp. 79-119

Competitive snoopy caching

  • Anna R. Karlin (Computer Science Department, Stanford University)
  • Mark S. Manasse (DEC Systems Research Center)
  • Larry Rudolph (Computer Science Department, Hebrew University)
  • Daniel D. Sleator (Computer Science Department, Carnegie-Mellon University)


In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.
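The competitive guarantee described above has the same flavor as the classic rent-vs-buy ("ski rental") trade-off: a cache can keep paying a small per-operation bus cost, or pay a one-time larger cost (e.g., rereading or dropping a block) to stop the recurring charges. The sketch below is an illustration of that trade-off in its simplest form, not the paper's actual block-retention algorithms; the function names and the cost model (rent costs 1 per step, buy costs a fixed price) are hypothetical and chosen only to make the 2-competitive bound concrete.

```python
def online_cost(num_steps, buy_price):
    """Rent (cost 1 per step) until total rent paid equals buy_price,
    then buy. This online policy never knows num_steps in advance."""
    rent_steps = min(num_steps, buy_price)
    cost = rent_steps
    if num_steps > rent_steps:
        # Recurring charges would continue, so pay the one-time cost.
        cost += buy_price
    return cost

def optimal_cost(num_steps, buy_price):
    """An offline adversary that knows num_steps picks the cheaper option."""
    return min(num_steps, buy_price)

# For every request-sequence length, the online cost is within a
# factor of 2 of the offline optimum -- a constant competitive ratio.
for steps in range(1, 50):
    assert online_cost(steps, 10) <= 2 * optimal_cost(steps, 10)
```

The same reasoning pattern, applied per cached block with bus traffic as the "rent," is what lets on-line caching decisions be charged against the offline optimum via amortized analysis and potential functions.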

Key words

Shared-bus multiprocessors · Amortized analysis · Potential functions · Page replacement · Shared memory · Cache coherence