A graphical tool for automatic parallelization and scheduling of programs on multiprocessors

  • Yu-Kwong Kwok
  • Ishfaq Ahmad
  • Min-You Wu
  • Wei Shu
Workshop 03: Automatic Parallelization and High-Performance Compilers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1300)

Abstract

In this paper, we introduce an experimental software tool called CASCH (Computer Aided SCHeduling) for automatic parallelization and scheduling of applications onto parallel processors. CASCH transforms a sequential program into a parallel program through automatic task graph generation, scheduling, mapping, and insertion of communication and synchronization primitives. The major strength of CASCH is its extensive library of state-of-the-art scheduling and mapping algorithms reported in the recent literature. Using these algorithms, a practitioner can choose the one most suitable for generating the shortest schedule for the application at hand. Furthermore, the scheduling algorithms can be interactively analyzed, tested, and compared using real data on a common platform with various performance objectives. With its graphical interface, CASCH is useful to both novice and expert programmers of parallel machines, and can serve as a teaching and learning aid for understanding scheduling and mapping algorithms.
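
The scheduling step is the heart of this pipeline. As a rough illustration of the list-scheduling family of heuristics that a library like CASCH's draws on (a hypothetical sketch, not CASCH's code; the Task class, priority function, and example graph are invented for illustration), the following Python orders tasks by static bottom level and greedily places each on the processor offering the earliest start time, charging a communication delay whenever a predecessor was mapped to a different processor.

    # Hypothetical sketch of list scheduling (not CASCH's actual code):
    # tasks in a weighted DAG are ordered by static bottom level, then each
    # is placed on the processor allowing the earliest start time, paying a
    # communication cost when a predecessor ran on a different processor.
    from dataclasses import dataclass, field

    @dataclass(eq=False)
    class Task:
        name: str
        weight: int                                # computation cost
        succ: dict = field(default_factory=dict)   # successor Task -> comm cost
        pred: dict = field(default_factory=dict)   # predecessor Task -> comm cost

    def blevel(t, memo):
        # Static bottom level: longest computation + communication path
        # from this task to an exit task of the graph.
        if t.name not in memo:
            memo[t.name] = t.weight + max(
                (c + blevel(s, memo) for s, c in t.succ.items()), default=0)
        return memo[t.name]

    def list_schedule(tasks, num_procs):
        memo = {}
        order = sorted(tasks, key=lambda t: -blevel(t, memo))  # highest priority first
        free = [0] * num_procs          # time at which each processor becomes idle
        finish, proc = {}, {}           # task name -> finish time / processor
        for t in order:
            best = None
            for p in range(num_procs):
                # Data from a predecessor on another processor arrives after
                # its edge's communication delay; same-processor data is free.
                ready = max((finish[u.name] + (0 if proc[u.name] == p else c)
                             for u, c in t.pred.items()), default=0)
                start = max(free[p], ready)
                if best is None or start < best[1]:
                    best = (p, start)
            p, start = best
            free[p] = finish[t.name] = start + t.weight
            proc[t.name] = p
        return max(finish.values()), proc   # schedule length and task placement

    # A four-task fork-join graph scheduled on two processors.
    a, b, c, d = Task("a", 2), Task("b", 3), Task("c", 3), Task("d", 2)
    for m in (b, c):
        a.succ[m] = m.pred[a] = 1       # edges a->b, a->c with comm cost 1
        m.succ[d] = d.pred[m] = 1       # edges b->d, c->d with comm cost 1
    length, placement = list_schedule([a, b, c, d], 2)
    print(length, placement)            # e.g. 8 and a task -> processor map

The per-processor start and finish times this produces are exactly what a Gantt chart visualizes; substituting a different priority function or placement rule yields the different algorithms such a tool lets a user compare on the same task graph.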

Keywords

Schedule Algorithm · Parallel Program · Task Graph · Parallel Code · Gantt Chart



Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Yu-Kwong Kwok (1)
  • Ishfaq Ahmad (1)
  • Min-You Wu (2)
  • Wei Shu (2)
  1. Department of Computer Science, The Hong Kong University of Science and Technology, Hong Kong
  2. Department of Computer Science, State University of New York at Buffalo, New York, USA
