Improving first-come-first-serve job scheduling by gang scheduling

  • Uwe Schwiegelshohn
  • Ramin Yahyapour
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1459)

Abstract

We present a new scheduling method for batch jobs on massively parallel processor architectures. The method is based on the first-come-first-serve strategy and emphasizes fairness. Severe fragmentation is prevented by gang scheduling, which is initiated only by highly parallel jobs. Good worst-case behavior of the approach has already been proven by theoretical analysis. In this paper we show by simulation with real workload data that the algorithm is also suitable for use on real parallel computers. This holds for several different scheduling criteria, such as the makespan or the sum of flow times. Simulation is also used to determine the best parameter set for the new method.
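The abstract describes the approach only at a high level. The following Python sketch illustrates the general idea of an FCFS queue in which gang scheduling is initiated only by highly parallel jobs; the machine size, the width threshold, and all names are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

MACHINE_SIZE = 128      # total number of processors (assumed value)
GANG_THRESHOLD = 0.5    # a job counts as "highly parallel" if it needs more
                        # than this fraction of the machine (assumed value)

@dataclass
class Job:
    job_id: int
    width: int          # number of processors requested
    runtime: float      # estimated execution time

@dataclass
class Machine:
    free: int = MACHINE_SIZE
    running: List[Job] = field(default_factory=list)
    gangs: List[Job] = field(default_factory=list)

def start_gang(job: Job, machine: Machine) -> None:
    # Placeholder: a real gang scheduler would coordinate time slices so that
    # all processes of each gang run simultaneously on their processors.
    machine.gangs.append(job)

def schedule(queue: List[Job], machine: Machine) -> None:
    """Admit jobs strictly in arrival order (first-come-first-serve).

    A job that fits into the free processors is started immediately.
    A highly parallel job that does not fit triggers gang scheduling:
    it time-shares the machine with the jobs already running instead of
    forcing the machine to drain (and fragment) while it waits.
    Any other job that does not fit blocks the queue, preserving FCFS order.
    """
    while queue:
        job = queue[0]
        if job.width <= machine.free:
            machine.free -= job.width          # ordinary FCFS start
            machine.running.append(job)
            queue.pop(0)
        elif job.width > GANG_THRESHOLD * MACHINE_SIZE:
            start_gang(job, machine)           # gang scheduling, initiated
            queue.pop(0)                       # only by highly parallel jobs
        else:
            break                              # FCFS: do not skip the head job
```

For the criteria named above, the makespan is the completion time of the last job, and the flow time of a job is the span between its submission and its completion.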



Copyright information

© Springer-Verlag 1998

Authors and Affiliations

  • Uwe Schwiegelshohn ¹
  • Ramin Yahyapour ¹
  1. Computer Engineering Institute, University Dortmund, Dortmund, Germany
