PRAM's towards realistic parallelism: BRAM's

  • Rolf Niedermeier
  • Peter Rossmanith
Part of the Lecture Notes in Computer Science book series (LNCS, volume 965)

Abstract

Due to its many idealizing assumptions, the well-known parallel random access machine (PRAM) is not a very practical model of parallel computation.

As a more realistic model we suggest the BRAM. Here each of the p processors gets a piece of length n of the input, which thus has size pn in total. Access to global memory must be data-independent and block-wise, and must obey the owner restriction. Assuming different global memory sizes, BRAM's are suitable for modeling various parallel computers, ranging from bounded-degree networks to completely connected parallel machines, while abstracting from architectural details.

We present optimal BRAM algorithms, requiring different global memory sizes and different numbers of block communications, for the longest common subsequence problem and for sorting.
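To make the model description above concrete, the following is a minimal, purely illustrative sketch (in Python, not taken from the paper) of a toy BRAM-style machine: p processors each own a local block of n input items, global memory is accessed only through whole-block transfers, the communication schedule is fixed in advance (data-independent), and each global block is written only by its owning processor. All names here (Bram, block_write, block_read) are hypothetical.

class Bram:
    def __init__(self, p, n, global_blocks):
        self.p, self.n = p, n
        self.local = [[None] * n for _ in range(p)]   # private block of each processor
        self.globalmem = [None] * global_blocks       # global memory, one slot per block
        self.owner = {}                               # slot -> the unique processor allowed to write it

    def load_input(self, data):
        # Distribute the input of size p*n: processor i receives items i*n .. (i+1)*n - 1.
        assert len(data) == self.p * self.n
        for i in range(self.p):
            self.local[i] = list(data[i * self.n:(i + 1) * self.n])

    def block_write(self, proc, slot):
        # Owner restriction: a global block may only ever be written by one fixed processor.
        owner = self.owner.setdefault(slot, proc)
        assert owner == proc, "owner restriction violated"
        self.globalmem[slot] = list(self.local[proc])  # whole-block (not word-wise) transfer

    def block_read(self, proc, slot):
        self.local[proc] = list(self.globalmem[slot])  # whole-block transfer into local memory

# Data-independent schedule: every processor publishes its own block, then processor i
# reads the block of processor (i + 1) mod p -- the pattern never depends on the data.
if __name__ == "__main__":
    p, n = 4, 3
    bram = Bram(p, n, global_blocks=p)
    bram.load_input(list(range(p * n)))
    for i in range(p):
        bram.block_write(i, i)
    for i in range(p):
        bram.block_read(i, (i + 1) % p)
    print(bram.local)   # processor i now holds the block originally given to processor i+1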

Keywords

Parallel machine, global memory, sequential algorithm, longest common subsequence, neighboring processor

References

  1. A. Aggarwal, A. K. Chandra, and M. Snir. On communication latency in PRAM computations. In Proc. of 1st SPAA, pages 11–21, 1989.
  2. A. Aggarwal, A. K. Chandra, and M. Snir. Communication Complexity of PRAMs. Theoretical Comput. Sci., 71:3–28, 1990.
  3. M. Ajtai, J. Komlós, and E. Szemerédi. Sorting in c log n parallel steps. Combinatorica, 3:1–19, 1983.
  4. A. Chin. Complexity models for all-purpose parallel computation. In Gibbons and Spirakis [8], chapter 14, pages 393–404.
  5. R. Cole. Parallel merge sort. SIAM J. Comput., 17(4):770–785, Aug. 1988.
  6. D. Culler et al. LogP: Towards a realistic model of parallel computation. In 4th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 1–12, May 1993.
  7. P. W. Dymond and W. L. Ruzzo. Parallel RAMs with owned global memory and deterministic language recognition. In Proc. of 13th ICALP, number 226 in LNCS, pages 95–104. Springer-Verlag, 1986.
  8. A. Gibbons and P. Spirakis, editors. Lectures on Parallel Computation. Cambridge International Series on Parallel Computation. Cambridge University Press, 1993.
  9. D. Gomm, M. Heckner, K.-J. Lange, and G. Riedle. On the design of parallel programs for machines with distributed memory. In A. Bode, editor, Proc. of 2d EDMCC, number 487 in LNCS, pages 381–391, Munich, Federal Republic of Germany, Apr. 1991. Springer-Verlag.
  10. A. Gottlieb and C. P. Kruskal. Complexity results for permuting data and other computations on parallel processors. J. ACM, 31(2):193–209, Apr. 1984.
  11. T. Heywood and C. Leopold. Models of parallelism. Technical Report CSR-28-93, Department of Computer Science, The University of Edinburgh, July 1993.
  12. C. P. Kruskal, L. Rudolph, and M. Snir. A complexity theory of efficient parallel algorithms. Theoretical Comput. Sci., 71:95–132, 1990.
  13. M. Kunde. Block gossiping on grids and tori: Sorting and routing match the bisection bound deterministically. In T. Lengauer, editor, Proc. of 1st ESA, number 726 in LNCS, pages 272–283, Bad Honnef, Federal Republic of Germany, Sept. 1993. Springer-Verlag.
  14. M. Kunde, R. Niedermeier, K. Reinhardt, and P. Rossmanith. Optimal Average Case Sorting on Arrays. In E. W. Mayr and C. Puech, editors, Proc. of 12th STACS, number 900 in LNCS, pages 503–514. Springer-Verlag, 1995.
  15. K.-J. Lange and R. Niedermeier. Data-independences of parallel random access machines. In R. K. Shyamasundar, editor, Proc. of 13th FST&TCS, number 761 in LNCS, pages 104–113, Bombay, India, Dec. 1993. Springer-Verlag.
  16. W. F. McColl. General purpose parallel computing. In Gibbons and Spirakis [8], chapter 13, pages 337–391.
  17. P. Rossmanith. The Owner Concept for PRAMs. In C. Choffrut and M. Jantzen, editors, Proc. of 8th STACS, number 480 in LNCS, pages 172–183, Hamburg, Federal Republic of Germany, Feb. 1991. Springer-Verlag.
  18. L. G. Valiant. General purpose parallel architectures. In van Leeuwen [20], chapter 18, pages 943–971.
  19. P. van Emde Boas. Machine models and simulations. In van Leeuwen [20], chapter 1, pages 1–66.
  20. J. van Leeuwen, editor. Algorithms and Complexity, volume A of Handbook of Theoretical Computer Science. Elsevier, 1990.
  21. U. Vishkin. Workshop on “Suggesting computer science agenda(s) for high-performance computing” (preliminary announcement). Announced via electronic mail on “TheoryNet”, Jan. 1994.
  22. P. M. B. Vitányi. Locality, Communication, and Interconnect Length in Multicomputers. SIAM J. Comput., 17(4):659–672, Aug. 1988.

Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Rolf Niedermeier (1)
  • Peter Rossmanith (2)

  1. Wilhelm-Schickard-Institut für Informatik, Universität Tübingen, Tübingen, Fed. Rep. of Germany
  2. Fakultät für Informatik, Technische Universität München, München, Fed. Rep. of Germany
