Effective Selection of Partition Sizes for Moldable Scheduling of Parallel Jobs

  • Srividya Srinivasan
  • Vijay Subramani
  • Rajkumar Kettimuthu
  • Praveen Holenarsipur
  • P. Sadayappan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2552)

Abstract

Although the current practice in parallel job scheduling requires each job to specify a particular number of processors, most parallel jobs are moldable, i.e., the number of processors they require is flexible. This paper addresses the effective selection of processor partition sizes for moldable jobs. The proposed scheduling strategy is shown to provide significant benefits over a rigid scheduling model and to be considerably better than a previously proposed approach to moldable job scheduling.
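The abstract does not spell out the selection procedure itself. As a rough, hypothetical illustration of the general idea behind moldable partition-size selection, the Python sketch below chooses, from a job's candidate request sizes, the one that minimizes estimated turnaround time (queue wait plus run time). The Amdahl-style speedup model, the `estimated_start_time` query, and all parameter values are assumptions for illustration only, not the authors' algorithm.

```python
# Minimal sketch (not the paper's method): pick a partition size for a moldable
# job by minimizing estimated turnaround time. The speedup model and the
# scheduler wait-time query are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class MoldableJob:
    base_runtime: float       # estimated runtime on one processor (seconds)
    max_procs: int            # largest partition the job can exploit
    parallel_fraction: float  # assumed Amdahl-style parallelizable fraction


def estimated_runtime(job: MoldableJob, procs: int) -> float:
    """Amdahl's-law runtime estimate (illustrative speedup model)."""
    serial = 1.0 - job.parallel_fraction
    return job.base_runtime * (serial + job.parallel_fraction / procs)


def choose_partition_size(job: MoldableJob, estimated_start_time) -> int:
    """Return the candidate size minimizing estimated wait + run time.

    `estimated_start_time(procs, runtime)` is a hypothetical scheduler query
    returning the expected queue wait for a request of that shape.
    """
    best_procs, best_turnaround = 1, float("inf")
    for procs in range(1, job.max_procs + 1):
        run = estimated_runtime(job, procs)
        wait = estimated_start_time(procs, run)
        if wait + run < best_turnaround:
            best_procs, best_turnaround = procs, wait + run
    return best_procs


if __name__ == "__main__":
    # Toy queue model: larger requests wait longer (purely illustrative).
    def toy_wait(procs: int, runtime: float) -> float:
        return 30.0 * procs

    job = MoldableJob(base_runtime=3600.0, max_procs=32, parallel_fraction=0.95)
    print("chosen partition size:", choose_partition_size(job, toy_wait))
```

In practice, a scheduler would derive the wait-time estimate from the current queue and backfilling state, and would use measured or trace-based speedup data rather than an analytic model.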



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Srividya Srinivasan¹
  • Vijay Subramani¹
  • Rajkumar Kettimuthu¹
  • Praveen Holenarsipur¹
  • P. Sadayappan¹

  1. Ohio State University, Columbus, USA
