# Input sensitive, optimal parallel randomized algorithms for addition and identification

## Abstract

Although many sophisticated parallel algorithms now exist, it is not at all clear whether any of them is sensitive to properties of the input which can be determined only at run-time. For example, in the case of parallel addition in shared memory models, we intuitively understand that we should not add those inputs whose value is zero. A technique which exploits this idea may beat the general lower bound for addition if the count of nonzero operands is much smaller than the total number of values to be added. In this paper, we devise such algorithms for two fundamental problems of parallel computation. Our model of computation is the CRCW PRAM. We first provide a randomized algorithm for parallel addition which never errs and computes the result in O(log m) expected parallel time, where m is the count of nonzero entries among the n numbers to be added. This algorithm uses O(m) shared space. We then use this result to solve the following problem of processor identification: n processors are given, each keeping either a 0 or a 1. We want each processor, at the end, to know which are the processors with the 1's. Our solution is randomized and sensitive to the number m of the 1's. It takes O(min{m, n log m / log n}) expected parallel time and only O(m) shared memory, capable of holding only O(n)-size numbers. Combinatorial techniques of Erdős and Rényi were helpful to a part of this second result.
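The input-sensitivity idea can be illustrated with a minimal sequential sketch (this is not the paper's randomized CRCW PRAM algorithm): compact away the zero operands first, then add the m survivors along a balanced pairwise tree. The "parallel time" is modeled as the tree depth, ⌈log₂ m⌉ rounds, which depends on m rather than on n. As a side effect the sketch also reports m itself, mirroring property (3) below.

```python
def input_sensitive_add(xs):
    """Return (sum, m, rounds): drop zeros, then add disjoint pairs per round.

    `rounds` models parallel time: it is ceil(log2 m), independent of len(xs).
    The compaction step is where the paper's algorithm uses randomization;
    here it is just a sequential filter, for illustration only.
    """
    vals = [x for x in xs if x != 0]   # compaction: keep the m nonzero operands
    m = len(vals)                      # the count m is learned for free
    rounds = 0
    while len(vals) > 1:               # one simulated parallel round
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    total = vals[0] if vals else 0
    return total, m, rounds

# n = 1008 operands, but only m = 8 are nonzero:
xs = [0] * 1000 + [3, 1, 4, 1, 5, 9, 2, 6]
print(input_sensitive_add(xs))   # (31, 8, 3): 3 = log2(8) rounds, not log2(1008)
```

The point of the toy is the round count: doubling n by padding with zeros leaves `rounds` unchanged, whereas doubling m adds one round.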

All our techniques enjoy the following properties: (1) they never produce an erroneous answer; (2) if T is the actual parallel time and E(T) its expected value, then Prob{T > k·E(T)} ⩽ n^{−c}, where k is arbitrary and c > 1 is linear in k and can be specified by the algorithm implementer; (3) m is initially unknown to our algorithms, and they produce an accurate estimate of it.
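Property (2) is a concentration guarantee: the running time exceeds k times its expectation only with polynomially (in fact exponentially in k) small probability. A toy Monte Carlo check of this flavor of bound, using a geometric running time with E[T] = 2 in place of the paper's actual process, looks as follows:

```python
import random

random.seed(0)  # fixed seed so the experiment is reproducible

def trial():
    """Rounds until the first success of a fair coin: T is geometric, E[T] = 2."""
    t = 1
    while random.random() >= 0.5:   # each round fails independently w.p. 1/2
        t += 1
    return t

N = 100_000
samples = [trial() for _ in range(N)]
mean = sum(samples) / N                       # empirically close to E[T] = 2
k = 3
tail = sum(t > k * 2 for t in samples) / N    # Prob{T > k*E[T]}; exactly 2**-6
print(round(mean, 2), tail)
```

For this geometric T the tail at k = 3 is 2⁻⁶ ≈ 0.016, and raising k shrinks it exponentially, which is the qualitative behavior the bound Prob{T > k·E(T)} ⩽ n^{−c} asserts for the algorithms in the paper.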


## References

- (CLC, 83). Chin, F., J. Lam and I. Chen, "Optimal Parallel Algorithms for the Connected Components Problem," CACM, 1983.
- (C, 80). Cook, S., "Towards a Complexity Theory of Synchronous Parallel Computations," Specker Symp. on Logic and Algorithms, Zurich, Feb. 5–11, 1980.
- (DNS, 81). Dekel, E., D. Nassimi and S. Sahni, "Parallel Matrix and Graph Algorithms," SIAM J. Comp. 10 (4), 1981.
- (ER, 60). Erdős, P. and A. Rényi, "On the Evolution of Random Graphs," in The Art of Counting, J. Spencer, Editor, MIT Press, 1973.
- (ER, 63). Erdős, P. and A. Rényi, "On Two Problems of Information Theory," Magyar Tud. Akad. Mat. Kut. Int. Közl. 8 (1963); also in The Art of Counting, J. Spencer, Editor, MIT Press, 1973.
- (G, 78). Goldschlager, L., "A Unified Approach to Models of Synchronous Parallel Machines," Proc. 10th STOC, May 1978.
- (G, 77). Goldschlager, L., "Synchronous Parallel Computation," Ph.D. thesis, Univ. of Toronto, C.S. Dept., 1977.
- (GLR, 80). Gottlieb, A., B. Lubachevsky and L. Rudolph, "Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors," Courant Inst. TR No. 028, Dec. 1980.
- (HCS, 79). Hirschberg, D., A. Chandra and D. Sarwate, "Computing Connected Components on Parallel Computers," CACM 22 (8), Aug. 1979.
- (K, 82). Kučera, L., "Parallel Computation and Conflicts in Memory Access," Info. Processing Letters 14, April 1982.
- (MV, 83). Mehlhorn, K. and U. Vishkin, "Randomized and Deterministic Simulation of PRAMs by Parallel Machines with Restricted Granularity of Parallel Memories," 9th Workshop on Graph-Theoretic Concepts in Computer Science, Univ. Osnabrück, June 1983.
- (R, 82). Reif, J., "Symmetric Complementation," 14th STOC, San Francisco, CA, May 1982.
- (R, 82b). Reif, J., "On the Power of Probabilistic Choice in Synchronous Parallel Computations," 9th ICALP, Aarhus, Denmark, July 1982.
- (Ru, 79). Ruzzo, W., "On Uniform Circuit Complexity," Proc. 20th FOCS, Oct. 1979.
- (SJ, 81). Savage, C. and J. Ja'Ja', "Fast, Efficient Parallel Algorithms for Some Graph Problems," SIAM J. Comp. 10 (4), Nov. 1981.
- (SV, 80). Shiloach, Y. and U. Vishkin, "Finding the Maximum, Merging and Sorting in a Parallel Computation Model," Tech. Rep., Technion Israel, Comp. Sci., March 1980.
- (SV, 82). Shiloach, Y. and U. Vishkin, "An O(log n) Parallel Connectivity Algorithm," J. of Algorithms, 1982.
- (S, 80). Schwartz, J.T., "Ultracomputers," ACM TOPLAS, 1980, pp. 484–521.
- (UW, 84). Upfal, E. and A. Wigderson, "How to Share Memory in a Distributed System," 25th FOCS, Proceedings, October 1984.
- (V, 83a). Vishkin, U., "A Parallel-Design, Distributed-Implementation General-Purpose Parallel Computer," to appear, J. TCS.
- (V, 83b). Vishkin, U., "Randomized Speed-ups in Parallel Computation," 16th STOC, April 1984, Proceedings.
- (U, 84). Upfal, E., "A Probabilistic Relation between Desirable and Feasible Models of Parallel Computation," 16th ACM STOC, 1984, Proceedings.
- (W, 79). Wyllie, J., "The Complexity of Parallel Computation," Ph.D. Thesis, Cornell University, 1979.