Algorithmica, Volume 3, Issue 1, pp 79–119

Competitive snoopy caching

Authors

  • Anna R. Karlin
    • Computer Science Department, Stanford University
  • Mark S. Manasse
    • DEC Systems Research Center
  • Larry Rudolph
    • Computer Science Department, Hebrew University
  • Daniel D. Sleator
    • Computer Science Department, Carnegie-Mellon University
Article

DOI: 10.1007/BF01762111

Cite this article as:
Karlin, A.R., Manasse, M.S., Rudolph, L. et al. Algorithmica (1988) 3: 79. doi:10.1007/BF01762111

Abstract

In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.
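The abstract does not spell out the algorithms themselves, but the flavor of a counter-based block-retention policy in this setting can be illustrated with a small sketch. The sketch below is an illustration under stated assumptions, not the paper's actual formulation: the class name, the event model, and the parameter k (taken here as the bus cost, in word transfers, of re-fetching a block) are all hypothetical.

```python
# Illustrative sketch only: a counter-based block-retention rule in the
# spirit of competitive snoopy caching. The names, event model, and the
# parameter k (bus cost of re-fetching a block, in word-transfer units)
# are assumptions for exposition, not the paper's exact algorithm.

class SnoopyCacheBlock:
    def __init__(self, k):
        self.k = k            # cost to re-fetch the block over the bus
        self.counter = k      # remaining "credit" before we drop the block
        self.resident = True  # whether this cache still holds the block

    def local_access(self):
        """Our own processor reads or writes the block: reset the credit."""
        if not self.resident:
            self.resident = True  # re-fetch over the bus (cost k)
        self.counter = self.k

    def remote_write(self):
        """Another cache writes the block (observed by snooping on the bus).

        While we stay resident, each remote write costs one bus word to
        update our copy; we pay down the counter and drop the block once
        the cumulative snooping cost equals the re-fetch cost k.
        """
        if not self.resident:
            return
        self.counter -= 1
        if self.counter == 0:
            self.resident = False  # drop the block; stop paying per write
```

The design choice behind such a rule is the standard rent-or-buy balance: by the time the block is dropped, the total paid for snooped updates equals the cost of one reload, so for any request sequence the on-line cost stays within a constant factor of the best off-line choice of when to drop, which is how competitive bounds of the kind claimed in the abstract can arise.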

Key words

  • Shared-bus multiprocessors
  • Amortized analysis
  • Potential functions
  • Page replacement
  • Shared memory
  • Cache coherence

Copyright information

© Springer-Verlag New York Inc. 1988