The probabilistic and deterministic parallel complexity of symmetric functions

  • Ming Li
  • Yaacov Yesha
Parallel and Distributed Computing
Part of the Lecture Notes in Computer Science book series (LNCS, volume 267)

Abstract

In this paper we prove lower bounds on the computational complexity of symmetric functions on parallel machines. The parallel models considered are PRIORITY and ARBITRARY, both widely used versions of the Parallel Random Access Machine (PRAM). We first consider PRIORITY. We prove that a PRIORITY PRAM with one shared memory cell and O(n) processors requires Ω(g(n)) time to decide whether a binary vector of length n has at least g(n) 1's, in the case g(n) = o(n^{1/4}). This is the decision problem for the threshold language L_g. Our lower bound is tight. For PRIORITY with m shared memory cells we prove an Ω(g(n)/m) lower bound. The limitation on the number of processors is essential, since with enough processors L_g can be decided by the same model in O(√g(n)) time. The limitation on g is also important: for instance, for g(n) = n/2, L_g can be decided in time O(√g(n)), which is optimal [VW].
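For concreteness, the following is a minimal sequential sketch of the threshold decision problem for L_g. The particular threshold function g(n) = ⌈log₂ n⌉ and the sample inputs are illustrative choices only (any g(n) = o(n^{1/4}) fits the regime of the lower bound); the sketch is unrelated to the PRAM algorithms and lower-bound arguments of the paper.

```python
import math

# Minimal sketch of the threshold-language decision problem.  A binary vector
# x of length n belongs to L_g iff it contains at least g(n) ones.  The choice
# g(n) = ceil(log2 n) is purely illustrative (it lies in the regime
# g(n) = o(n^{1/4}) covered by the lower bound); it is not a function from the
# paper, and the sketch says nothing about the PRAM lower-bound arguments.

def g(n: int) -> int:
    return max(1, math.ceil(math.log2(n)))

def in_threshold_language(x: list[int]) -> bool:
    """Decide whether x has at least g(len(x)) ones, i.e. whether x is in L_g."""
    return sum(x) >= g(len(x))

if __name__ == "__main__":
    print(in_threshold_language([1, 0, 1, 1] + [0] * 12))  # 3 ones < g(16) = 4 -> False
    print(in_threshold_language([1] * 8 + [0] * 8))        # 8 ones >= g(16) = 4 -> True
```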

Our new technique for threshold languages enables us to show that, for a wide range of processor counts, PRIORITY and ARBITRARY with one shared memory cell require the same time to compute large classes of symmetric functions. These results are of interest since PRIORITY is known to be more powerful than ARBITRARY (see [FMRW], for instance).
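The two models differ only in how concurrent writes to the same shared cell are resolved. The toy simulation below illustrates those conflict-resolution rules for a single cell; the processor numbering and the random choice used to model ARBITRARY are assumptions made for illustration, not part of the paper's machine model or proofs.

```python
import random

# Toy illustration of the write-conflict rules that distinguish the two models.
# When several processors write to the same shared cell in one step, PRIORITY
# lets the lowest-numbered writer succeed, while ARBITRARY lets some writer
# succeed without specifying which one (modelled here by a random choice).

def resolve_priority(attempts: dict[int, int]) -> int:
    """attempts maps processor number -> value it tries to write; lowest number wins."""
    return attempts[min(attempts)]

def resolve_arbitrary(attempts: dict[int, int]) -> int:
    """Some attempting processor wins; which one is left unspecified by the model."""
    return attempts[random.choice(list(attempts))]

if __name__ == "__main__":
    attempts = {3: 30, 1: 10, 7: 70}
    print(resolve_priority(attempts))   # always 10 (processor 1 wins)
    print(resolve_arbitrary(attempts))  # any of 10, 30, 70
```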

Next we show that a probabilistic PRIORITY with m = n^ε (0 ≤ ε < 1) shared memory cells and any finite number of processors requires expected time more than ((1−ε)/4) log n to compute PARITY of n bits. (Note that with enough processors and shared memory cells, PRIORITY can compute PARITY in constant time even deterministically.) For probabilistic PRIORITY without ROM we prove a tight Ω(n/m) lower bound. The probabilistic case is considerably more involved than the deterministic case, and we develop new techniques for proving probabilistic lower bounds.
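PARITY of n bits is simply their sum modulo 2. The sketch below computes it by a pairwise XOR reduction of logarithmic depth, a generic parallel pattern offered only to fix the function under discussion; it is not the constant-time PRIORITY algorithm mentioned above, nor does it reflect the probabilistic lower-bound techniques of the paper.

```python
# PARITY of n bits is their sum modulo 2.  The pairwise reduction below uses
# ceil(log2 n) rounds; with one processor assigned to each pair, every round
# could be executed in parallel.  This is a generic illustration only.

def parity(bits: list[int]) -> int:
    layer = list(bits)
    while len(layer) > 1:
        # XOR disjoint pairs; an odd element at the end is carried forward
        layer = [layer[i] ^ layer[i + 1] if i + 1 < len(layer) else layer[i]
                 for i in range(0, len(layer), 2)]
    return layer[0]

if __name__ == "__main__":
    print(parity([1, 0, 1, 1, 0]))  # 3 ones -> 1
    print(parity([1, 1, 0, 0]))     # 2 ones -> 0
```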



6. References

  1. [A]
L. Adleman, Two theorems on random polynomial time, Proc. 19th IEEE Symp. on Foundations of Computer Science, 1978, pp. 75–83.
  2. [AS]
B. Awerbuch and Y. Shiloach, New connectivity and MSF algorithms for ultracomputers and PRAM, Proc. IEEE Conf. on Parallel Processing, 1983, pp. 175–179.
  3. [B]
P. Beame, Limits on the power of concurrent-write parallel machines, Proc. 18th ACM Symp. on Theory of Computing, Berkeley, 1986, pp. 169–176.
  4. [CD]
S. A. Cook and C. Dwork, Bounds on the time for parallel RAMs to compute simple functions, Proc. 14th ACM Symp. on Theory of Computing, 1982, pp. 231–233.
  5. [FMRW]
F. Fich, F. Meyer auf der Heide, P. Ragde, and A. Wigderson, One, two, three, ..., infinity: lower bounds for parallel computation, Proc. 17th ACM Symp. on Theory of Computing, 1985, pp. 48–58.
  6. [FRW]
F. Fich, P. Ragde, and A. Wigderson, Relations between concurrent-write models of parallel computation, Proc. 3rd ACM Symp. on Principles of Distributed Computing, 1984, pp. 179–184.
  7. [Ga]
Z. Galil, Optimal parallel algorithms for string matching, Proc. 16th ACM Symp. on Theory of Computing, 1984, p. 240.
  8. [GGKMRS]
A. Gottlieb, R. Grishman, C. Kruskal, K. McAuliffe, L. Rudolph, and M. Snir, The NYU Ultracomputer: designing a MIMD shared memory parallel machine, IEEE Trans. Comput. C-32, 1983, pp. 175–189.
  9. [HCS]
D. Hirschberg, A. Chandra, and D. Sarwate, Computing connected components on parallel computers, Comm. ACM 22, 8, 1979, pp. 461–464.
  10. [K]
D. Kuck, A survey of parallel machine organization and programming, Computing Surveys 9, 1977, pp. 29–52.
  11. [KUW]
R. M. Karp, E. Upfal, and A. Wigderson, Constructing a perfect matching is in random NC, Proc. 17th ACM Symp. on Theory of Computing, 1985.
  12. [LY]
M. Li and Y. Yesha, Separation and lower bounds for ROM and nondeterministic models of parallel computation, Ohio State University Tech. Report CISRC-TR-86-7, 1985 (to appear in Information and Control).
  13. [LY1]
M. Li and Y. Yesha, New lower bounds for parallel computation, Proc. 18th ACM Symp. on Theory of Computing, Berkeley, 1986, pp. 177–187.
  14. [MW]
F. Meyer auf der Heide and A. Wigderson, The complexity of parallel sorting, Proc. 17th ACM Symp. on Theory of Computing, 1985, pp. 532–540.
  15. [P]
F. P. Preparata, New parallel-sorting schemes, IEEE Trans. Comput. C-27, 1978, p. 669.
  16. [SV]
Y. Shiloach and U. Vishkin, Finding the maximum, merging and sorting on parallel models of computation, J. of Algorithms 2, 1981, pp. 88–102.
  17. [SV1]
Y. Shiloach and U. Vishkin, An O(log n) parallel connectivity algorithm, J. of Algorithms 3, 1982, pp. 57–67.
  18. [V]
U. Vishkin, Parallel-design space, distributed-implementation space (PDDI) general purpose computer, Tech. Report RC 9541, IBM T. J. Watson Research Center, Yorktown Heights, NY.
  19. [VW]
U. Vishkin and A. Wigderson, Trade-offs between depth and width in parallel computation, SIAM J. Computing 14, 2, 1985, pp. 303–314.

Copyright information

© Springer-Verlag Berlin Heidelberg 1987

Authors and Affiliations

  • Ming Li (1)
  • Yaacov Yesha (1)
  1. Department of Computer and Information Science, The Ohio State University, Columbus, USA
