Are parallel machines always faster than sequential machines?
We demonstrate that parallel machines are always faster than sequential machines for a wide range of machine models, including tree Turing machines (TMs), multidimensional TMs, log-cost random access machines (RAMs), and unit-cost RAMs. More precisely, we show that every sequential machine M of one of these types that runs in time T can be sped up by a parallel version M′ of M that runs in time o(T). All previous speedup results either impose a severe limitation on the storage structure of M (e.g., M is a TM with linear tapes) or require that M′ have a more versatile storage structure than M (e.g., M′ is a parallel RAM (PRAM) while M is a TM with linear tapes). It has therefore been unclear whether it is the parallelism, the restriction on the storage structures, or the combination of both that yields such speedup. We remove all of these restrictions on storage structures. We present speedup theorems in which M and M′ use the same kind of storage medium, and that medium is not linear tapes. Thus, we prove conclusively that parallelism alone suffices to achieve the speedup.
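The central claim can be stated schematically as follows (this is a paraphrase of the abstract; the notation T_M, T_{M′} for running times is introduced here for illustration and is not taken from the paper itself):

```latex
% For every sequential machine M of one of the listed kinds
% (tree TM, multidimensional TM, log-cost RAM, unit-cost RAM)
% running in time T_M(n), there is a parallel machine M'
% using the same kind of storage medium such that
\forall M \;\, \exists M' : \quad T_{M'}(n) \,=\, o\bigl(T_M(n)\bigr).
% That is, the parallel running time is asymptotically
% strictly smaller than the sequential running time.
```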