Random competition: A simple but efficient method for parallelizing inference systems
We present a very simple parallel execution model suitable for inference systems with nondeterministic choices (OR-branching points). All parallel processors solve the same task without any communication; their programs differ only in the initialization of the random number generator used for branch selection during depth-first backtracking search. This model, called random competition, allows the parallel performance to be calculated analytically for an arbitrary number of processors, exactly and without any experiments on a parallel machine. Finally, due to their simplicity, competition architectures are easy, and therefore inexpensive, to build.
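The competition model described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the OR-tree representation (nested lists with boolean leaves) and the node-count run-time measure are assumptions made for the example. Each simulated processor runs the identical depth-first backtracking search and differs only in its random seed; the winner determines the parallel run time.

```python
import random

def depth_first_search(tree, rng):
    """Depth-first backtracking search over a toy OR-tree.

    `tree` is a nested list; a leaf is True (solution) or False (dead end).
    The RNG decides only the order in which OR-branches are explored.
    Returns the number of nodes visited until the first solution,
    used here as a proxy for the sequential run time.
    """
    visited = 0
    stack = [tree]
    while stack:
        node = stack.pop()
        visited += 1
        if node is True:
            return visited            # solution found
        if node is False:
            continue                  # dead end: backtrack
        children = list(node)
        rng.shuffle(children)         # random branch selection
        stack.extend(children)
    return visited                    # search space exhausted

def random_competition(tree, num_processors):
    """Simulate num_processors independent searchers, each with its own
    seed and no communication; the parallel run time is the minimum of
    their sequential run times."""
    runtimes = [depth_first_search(tree, random.Random(seed))
                for seed in range(num_processors)]
    return min(runtimes)
```

Because the competitors are fully independent, the parallel run time is simply the minimum of independent identically distributed sequential run times, which is what makes the analytical performance calculation possible.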
As an application of this systematic approach we compute speedup expressions for specific problem classes defined by their run-time distributions. The results range from a speedup of 1 for linearly degenerate search trees up to clearly superlinear speedup for strongly imbalanced search trees. Moreover, we are able to estimate the potential degree of OR-parallelism inherent in the different problem classes. Such estimates are important for the design of parallel inference machines, since speedups depend strongly on the application domain.
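As a toy instance of such a speedup calculation, the expected speedup S(k) = E[T] / E[min(T_1, ..., T_k)] can be worked out in closed form for an i.i.d. two-point run-time distribution. This distribution and the function name are illustrative assumptions, not one of the paper's problem classes: a degenerate (constant) run time yields speedup 1, while a strongly imbalanced one yields superlinear speedup.

```python
def expected_speedup_two_point(short, long_, p_short, k):
    """Expected speedup S(k) = E[T] / E[min(T_1, ..., T_k)] for k
    independent competitors whose run time is `short` with probability
    `p_short` and `long_` otherwise (i.i.d. two-point distribution).

    Illustrative sketch: the minimum of k draws equals `long_` only
    when every draw is long, which happens with probability
    (1 - p_short)**k.
    """
    p_long = 1.0 - p_short
    e_t = p_short * short + p_long * long_
    e_min = long_ * p_long**k + short * (1.0 - p_long**k)
    return e_t / e_min
```

For example, with run times 1 and 10^6 each occurring with probability 1/2, ten competing processors achieve an expected speedup of roughly 500, far above the processor count, whereas a constant run time gives speedup 1 for any k.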