The parallel hierarchical memory model

  • Ben H. H. Juurlink
  • Harry A. G. Wijshoff
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 824)

Abstract

Modern computer systems usually have a complex memory system consisting of a hierarchy of increasingly larger and slower memories. Traditional models of computation such as the Random Access Machine (RAM) have no notion of a memory hierarchy, making them inappropriate for an accurate complexity analysis of algorithms on these types of architectures.

Aggarwal et al. introduced the Hierarchical Memory Model (HMM), in which an access to memory location x takes time f(x) instead of constant time. In a second paper they proposed an extension of the HMM called the Hierarchical Memory Model with Block Transfer (HMBT), in which a block of consecutive locations can be copied in unit time per element after the initial access latency.
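The difference between the two cost models can be illustrated with a small accounting sketch. The concrete latency function f(x) = ⌈log₂(x+2)⌉ and the convention of charging a block transfer one latency at its farthest location are illustrative assumptions; the models themselves are parameterised over an arbitrary cost function f.

```python
import math

def f(x):
    # Illustrative latency function; the HMM/HMBT are parameterised
    # over an arbitrary non-decreasing f, of which a logarithmic
    # latency is one natural instance.
    return math.ceil(math.log2(x + 2))

def hmm_cost(addresses):
    # HMM: every access to memory location x costs f(x).
    return sum(f(x) for x in addresses)

def hmbt_block_cost(start, length):
    # HMBT: a block of `length` consecutive locations is copied in
    # unit time per element after one initial access latency
    # (charged here at the block's farthest location, an
    # illustrative convention).
    return f(start + length - 1) + length

# Reading 1024 consecutive words one at a time vs. as one block:
n, base = 1024, 1024
print(hmm_cost(range(base, base + n)))   # element-wise accesses
print(hmbt_block_cost(base, n))          # single block transfer
```

Under these assumptions the block transfer pays the access latency only once, which is exactly the locality advantage the HMBT is designed to capture.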

This paper introduces two extensions of the HMBT model: the Parallel Hierarchical Memory Model with Block Transfer (P-HMBT), and the pipelined P-HMBT (PP-HMBT). Both models are intended to model memory systems in which data transfers between memory levels may proceed concurrently.

Tight bounds are given for several problems including dot product, matrix transposition and prefix sums. Also, the relationship between the models is examined. It is shown that the HMBT and P-HMBT are both strictly less powerful than the PP-HMBT. It is also shown that the HMBT and P-HMBT are incomparable in strength.

Index terms

Hierarchical memory, data locality, algorithms


References

  1. A. Aggarwal, B. Alpern, A.K. Chandra, and M. Snir. A Model for Hierarchical Memory. In Proc. of the 19th Annual ACM Symposium on Theory of Computing, pages 305–314, May 1987.
  2. A. Aggarwal, A.K. Chandra, and M. Snir. Hierarchical Memory with Block Transfer. In Proc. of the 28th Symposium on Foundations of Computer Science, pages 204–216, October 1987.
  3. A.V. Aho, J.E. Hopcroft, and J.D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974.
  4. A.V. Aho, J.E. Hopcroft, and J.D. Ullman. Data Structures and Algorithms. Addison-Wesley, 1983.
  5. A. Chin. Complexity Models for All-Purpose Parallel Computation. In A. Gibbons and P. Spirakis, editors, Lectures on Parallel Computation, chapter 14. Cambridge University Press, 1993.
  6. J.W. Hong and H.T. Kung. I/O Complexity: The Red-Blue Pebble Game. In Proc. of the 13th Annual ACM Symposium on Theory of Computing, pages 326–333, May 1981.
  7. B.H.H. Juurlink and H.A.G. Wijshoff. Experiences with a Model for Parallel Computation. In Proc. of the 12th Annual ACM Symposium on Principles of Distributed Computing, pages 87–96, August 1993.
  8. B.H.H. Juurlink and H.A.G. Wijshoff. The Parallel Hierarchical Memory Model. Technical Report 93-33, Leiden University, 1993.
  9. C.P. Kruskal, L. Rudolph, and M. Snir. The Power of Parallel Prefix. IEEE Transactions on Computers, C-34(10), October 1985.
  10. A.J. Smith. Cache Memories. ACM Computing Surveys, 14(3), September 1982.

Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Ben H. H. Juurlink (1)
  • Harry A. G. Wijshoff (1)
  1. High Performance Computing Division, Department of Computer Science, Leiden University, RA Leiden, The Netherlands