Algorithmica, Volume 3, Issue 1, pp 79–119

Competitive snoopy caching

  • Anna R. Karlin
  • Mark S. Manasse
  • Larry Rudolph
  • Daniel D. Sleator

DOI: 10.1007/BF01762111

Cite this article as:
Karlin, A.R., Manasse, M.S., Rudolph, L. et al. Algorithmica (1988) 3: 79. doi:10.1007/BF01762111


In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.
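To illustrate the flavor of such a constant-competitive decision rule, here is a minimal sketch of a counter-based "rent or buy" drop policy. This is an illustration in the spirit of the abstract, not the paper's exact algorithm; the names (`CachedBlock`, `on_remote_write`) and the unit costs (1 bus cycle per remote write serviced, `fetch_cost` cycles to re-fetch a dropped block) are assumptions for the example.

```python
# Sketch (assumption: not the paper's exact algorithm): a counter-based
# drop rule for a snoopy cache. Servicing a remote write to a block we
# hold is assumed to cost 1 bus cycle; re-fetching a dropped block is
# assumed to cost fetch_cost cycles.

class CachedBlock:
    def __init__(self, block_id, fetch_cost):
        self.block_id = block_id
        self.counter = fetch_cost  # budget spent servicing writes before dropping

def on_remote_write(cache, block_id):
    """Another processor wrote this block; decide whether to keep our copy."""
    blk = cache.get(block_id)
    if blk is None:
        return                 # we don't hold the block; nothing to pay
    blk.counter -= 1           # paid 1 bus cycle to service the write
    if blk.counter == 0:
        del cache[block_id]    # paid as much as one re-fetch: drop the block

# Example: a block with fetch cost 3 is dropped after 3 remote writes.
cache = {7: CachedBlock(7, fetch_cost=3)}
for _ in range(3):
    on_remote_write(cache, 7)
assert 7 not in cache
```

The intuition behind the constant factor: by the time the counter reaches zero, the cache has spent exactly one re-fetch's worth of bus cycles keeping the block, so whichever choice an omniscient adversary made, this rule pays at most a constant multiple of it.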

Key words

Shared-bus multiprocessors · Amortized analysis · Potential functions · Page replacement · Shared memory · Cache coherence

Copyright information

© Springer-Verlag New York Inc. 1988

Authors and Affiliations

  • Anna R. Karlin (1)
  • Mark S. Manasse (3)
  • Larry Rudolph (4)
  • Daniel D. Sleator (5)

  1. Computer Science Department, Stanford University, Stanford, USA
  2. Computer Science Department, Princeton University, Princeton, USA
  3. DEC Systems Research Center, Palo Alto, USA
  4. Computer Science Department, Hebrew University, Jerusalem, Israel
  5. Computer Science Department, Carnegie-Mellon University, Pittsburgh, USA