Optimal Tree Contraction in the EREW Model

  • Hillel Gazit
  • Gary L. Miller
  • Shang-Hua Teng


A deterministic parallel algorithm for parallel tree contraction is presented in this paper. The algorithm takes T = O(n/P) time and uses P (P ≤ n/log n) processors, where n is the number of vertices in the tree, on an Exclusive Read Exclusive Write (EREW) Parallel Random Access Machine (PRAM). This improves the results of Miller and Reif [MR85, MR87], who obtained the same time and processor bounds on a randomized CRCW PRAM. The algorithm is optimal in the sense that the product P · T is O(n), the input size, and it yields an O(log n) time algorithm when P = n/log n. Since the algorithm requires only O(n) space, the size of the input, it is optimal in space as well. Techniques for prudent parallel tree contraction are also discussed, as well as implementation techniques for fixed-connection machines.
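To illustrate the operation the paper parallelizes, the following is a minimal sequential sketch of tree contraction by repeated raking of leaves. It is purely illustrative: the data layout and function names are invented here, and it does not capture the EREW algorithm's compress operation or its O(n/P)-time scheduling.

```python
def contract(children, root):
    """Repeatedly rake (remove) leaves until only the root remains.

    `children` maps each vertex to the list of its children.
    Returns the number of rake rounds used.
    """
    rounds = 0
    alive = set(children)
    while len(alive) > 1:
        # Rake: every current leaf (other than the root) is removed.
        leaves = [v for v in alive if not children[v] and v != root]
        for v in leaves:
            alive.discard(v)
        # Detach raked leaves from their parents' child lists.
        for v in alive:
            children[v] = [c for c in children[v] if c in alive]
        rounds += 1
    return rounds

# A star contracts in one round, but a path of n vertices needs n - 1
# rake rounds -- which is why compress (pointer jumping along chains)
# is also needed to reach O(log n) rounds.
star = {0: [1, 2, 3, 4], 1: [], 2: [], 3: [], 4: []}
path = {0: [1], 1: [2], 2: [3], 3: []}
print(contract(star, 0))  # 1
print(contract(path, 0))  # 3
```

The path example makes the need for compress concrete: rake alone gives only one unit of progress per round on long chains.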


Keywords: Contraction Phase, List Ranking, Free Vertex, Parallel Random Access Machine, Compress Operation




  1. [AM87] Richard Anderson and Gary L. Miller. Optimal Parallel Algorithms for List Ranking. Technical Report, USC, Los Angeles, 1987.
  2. [BV85] I. Bar-On and U. Vishkin. Optimal parallel generation of a computation tree form. ACM Transactions on Programming Languages and Systems, 7(2):348–357, April 1985.
  3. [CV86a] R. Cole and U. Vishkin. Approximate and exact parallel scheduling with applications to list, tree, and graph problems. In 27th Annual Symposium on Foundations of Computer Science, pages 478–491, IEEE, Toronto, Oct 1986.
  4. [CV86b] Richard Cole and Uzi Vishkin. Deterministic coin tossing with applications to optimal list ranking. Information and Control, 70(1):32–53, 1986.
  5. [DNP86] Eliezer Dekel, Simeon Ntafos, and Shie-Tung Peng. Parallel Tree Techniques and Code Optimization, pages 205–216. Volume 227 of Lecture Notes in Computer Science, Springer-Verlag, 1986.
  6. [KU86] Anna Karlin and Eli Upfal. Parallel hashing—an efficient implementation of shared memory. In Proceedings of the 18th Annual ACM Symposium on Theory of Computing, pages 160–168, ACM, Berkeley, May 1986.
  7. [MR] Gary L. Miller and John H. Reif. Parallel tree contraction part 2: further applications. SIAM J. Comput., submitted.
  8. [MR85] Gary L. Miller and John H. Reif. Parallel tree contraction and its applications. In 26th Symposium on Foundations of Computer Science, pages 478–489, IEEE, Portland, Oregon, 1985.
  9. [MR87] Gary L. Miller and John H. Reif. Parallel Tree Contraction Part 1: Fundamentals. Volume 5, JAI Press, 1987. To appear.
  10. [MT87] Gary L. Miller and Shang-Hua Teng. Systematic methods for tree based parallel algorithm development. In Second International Conference on Supercomputing, pages 392–403, Santa Clara, May 1987.
  11. [Ran87] A. Ranade. How to emulate shared memory. In 28th Annual Symposium on Foundations of Computer Science, pages 185–194, IEEE, Los Angeles, Oct 1987.
  12. [TV85] R. E. Tarjan and U. Vishkin. An efficient parallel biconnectivity algorithm. SIAM J. Comput., 14(4):862–874, November 1985.
  13. [Wyl79] J. C. Wyllie. The Complexity of Parallel Computation. Technical Report TR 79-387, Department of Computer Science, Cornell University, Ithaca, New York, 1979.

Copyright information

© Plenum Press, New York 1988

Authors and Affiliations

  • Hillel Gazit
  • Gary L. Miller
  • Shang-Hua Teng

  1. University of Southern California, Los Angeles, USA
