Predicting the speedup of parallel Ada programs

  • Lars Lundberg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 603)

Abstract

A method for predicting the speedup of parallel Ada programs has been developed. For this purpose the states Active and Blocked are used to characterize a task during program execution. Of the active tasks, some may be waiting for a processor to be available. Transitions between the two states may be caused only by certain tasking constructs that can be statically identified in the source code. The execution of a task forms a list of Active and Blocked time segments. Segments in different tasks may depend on each other through task synchronizations, thus forming a dependency graph.
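The task model described here — per-task lists of Active and Blocked segments linked into a dependency graph by synchronizations — can be sketched as a small data structure. This is an illustrative sketch only; the class and field names are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    BLOCKED = "blocked"

@dataclass(frozen=True)
class Segment:
    task: str        # task name
    index: int       # position in that task's segment list
    state: State
    duration: float  # segment length, as measured on one processor

@dataclass
class DependencyGraph:
    segments: list[Segment] = field(default_factory=list)
    # edges pair segments in different tasks that are linked by a
    # task synchronization (e.g. an Ada rendezvous)
    edges: list[tuple[Segment, Segment]] = field(default_factory=list)

# Example: task A runs, then blocks until task B reaches a
# synchronization point, so B's active segment precedes A's blocked one.
a0 = Segment("A", 0, State.ACTIVE, 3.0)
a1 = Segment("A", 1, State.BLOCKED, 2.0)
b0 = Segment("B", 0, State.ACTIVE, 5.0)
graph = DependencyGraph([a0, a1, b0], [(b0, a1)])
```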

Using this graph and certain assumptions about the way tasks are scheduled, one can determine how the number of active tasks varies during the execution. Disregarding hardware and system overheads, speedup is limited either by the number of processors or by the number of active tasks. That is, dependency graphs make it possible to compare the speedup of different programs solving the same problem. This method can also be used for selecting a multiprocessor system with a suitable number of processors for a certain program.
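The bound above — speedup limited by either the processor count or the number of active tasks — can be illustrated with a minimal calculation. This is not the paper's algorithm, only a sketch under its stated assumption that hardware and system overheads are ignored: if the execution decomposes into intervals during which the active-task count is constant, each interval's work runs at rate `min(processors, active)`.

```python
def predicted_speedup(intervals, processors):
    """Estimate speedup from (duration, active_tasks) intervals.

    Each interval contributes duration * active_tasks task-seconds of
    work, executed at rate min(processors, active_tasks); overheads
    are disregarded.
    """
    t1 = sum(d * a for d, a in intervals)  # single-processor time
    tp = sum(d * a / min(processors, a) for d, a in intervals)
    return t1 / tp

# 10 s with 4 active tasks, then 5 s with 1 active task
intervals = [(10.0, 4), (5.0, 1)]
print(predicted_speedup(intervals, 2))  # limited by the 2 processors
print(predicted_speedup(intervals, 8))  # limited by the active tasks
```

With 2 processors the first interval is processor-bound; with 8 the speedup stops improving because no more than 4 tasks are ever active, which is how dependency graphs let programs solving the same problem be compared.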

By inserting probes at certain tasking constructs and executing the program on a single processor, we are able to record the dependency graph. This method has been used for predicting the speedup of a parallel Ada program containing 80 tasks.

Keywords

Prime Number, Parallel Program, Dependency Graph, Active Task, Task Type

Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Lars Lundberg
  1. Department of Computer Engineering, Lund University, Lund, Sweden