Algorithmica, Volume 3, Issue 1–4, pp 79–119

Competitive snoopy caching

  • Anna R. Karlin
  • Mark S. Manasse
  • Larry Rudolph
  • Daniel D. Sleator

Abstract

In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.
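The flavor of these on-line rules can be sketched with the simplest case: a cache holding a block that other processors keep writing must decide when to drop it. Updating the local copy costs one bus cycle per remote write; dropping the block and re-reading it later costs w cycles, where w is the block size. A counter rule that pays for updates until their total reaches w, then invalidates, is within a factor of two of any omniscient strategy, by a ski-rental argument. The sketch below is illustrative only (the function names and cost model are assumptions, not taken from the paper):

```python
# Hedged sketch of a counter-based invalidation rule in the spirit of
# competitive snoopy caching. Costs are in bus cycles: 1 per remote
# write that updates a retained block, w to re-read a dropped block.

def online_cost(remote_writes: int, w: int) -> int:
    """Bus cost of the counter rule over one idle period: update the
    block for up to w remote writes, then drop it and pay w to re-read
    it at the next local access."""
    if remote_writes < w:
        return remote_writes        # kept the block, updated every time
    return w + w                    # w updates, then a re-read costing w

def offline_cost(remote_writes: int, w: int) -> int:
    """An omniscient cache either drops the block immediately (cost w
    to re-read) or updates it throughout, whichever is cheaper."""
    return min(remote_writes, w)

# The on-line rule never pays more than twice the optimum:
for w in (1, 4, 16):
    for k in range(100):
        assert online_cost(k, w) <= 2 * offline_cost(k, w)
```

When remote writes stop early (fewer than w of them), both strategies pay the same; when they continue past w, the on-line rule pays 2w against the optimum's w, so the ratio two is tight for this rule.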

Key words

Shared-bus multiprocessors, amortized analysis, potential functions, page replacement, shared memory, cache coherence



Copyright information

© Springer-Verlag New York Inc. 1988

Authors and Affiliations

  • Anna R. Karlin (1)
  • Mark S. Manasse (3)
  • Larry Rudolph (4)
  • Daniel D. Sleator (5)
  1. Computer Science Department, Stanford University, Stanford, USA
  2. Computer Science Department, Princeton University, Princeton, USA
  3. DEC Systems Research Center, Palo Alto, USA
  4. Computer Science Department, Hebrew University, Jerusalem, Israel
  5. Computer Science Department, Carnegie-Mellon University, Pittsburgh, USA
