Moldable Parallel Job Scheduling Using Job Efficiency: An Iterative Approach

  • Gerald Sabin
  • Matthew Lang
  • P. Sadayappan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4376)


Current job schedulers require "rigid" job submissions: the user must specify a particular number of processors for each parallel job. Most parallel jobs can run on a range of processor partition sizes, but there is often a trade-off between wait time and run time: requesting many processors reduces run time but may entail a protracted wait. With moldable scheduling, the scheduler chooses each job's partition size, using information about job scalability characteristics.

We explore the role of job efficiency in moldable scheduling through the development of a scheduling scheme that utilizes job efficiency information. The algorithm improves average turnaround time, but requires parameter tuning. Using this exploration as motivation, we then develop an iterative scheme that avoids the need for any parameter tuning. The iterative scheme performs an intelligent, heuristic-based search for a schedule that minimizes average turnaround time. When evaluated with different workloads, it outperforms other recently proposed moldable job scheduling schemes, achieving good response times for both small and large jobs.
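The trade-off the abstract describes can be illustrated with a toy model: under an Amdahl-style speedup curve, requesting more processors shortens run time but lowers efficiency, so a moldable scheduler must weigh run time against expected queue time. The speedup model, the linear queue-time estimate, and all numbers below are illustrative assumptions for this sketch, not the authors' algorithm or data.

```python
# Toy illustration of the run-time / wait-time trade-off in moldable scheduling.
# Assumptions (not from the paper): Amdahl-style speedup with serial fraction
# serial_frac, and a queue-time estimate that grows linearly with the request.

def run_time(work, procs, serial_frac=0.1):
    """Amdahl-style run time: serial part plus perfectly parallel remainder."""
    return work * (serial_frac + (1.0 - serial_frac) / procs)

def efficiency(work, procs, serial_frac=0.1):
    """Speedup relative to one processor, divided by processor count."""
    speedup = run_time(work, 1, serial_frac) / run_time(work, procs, serial_frac)
    return speedup / procs

def expected_wait(procs, wait_per_proc=2.0):
    """Crude queue-time estimate: larger requests wait longer."""
    return wait_per_proc * procs

def best_partition(work, candidate_sizes):
    """Pick the partition size minimizing estimated turnaround (wait + run)."""
    return min(candidate_sizes,
               key=lambda p: expected_wait(p) + run_time(work, p))

if __name__ == "__main__":
    sizes = [1, 2, 4, 8, 16, 32]
    for p in sizes:
        turnaround = expected_wait(p) + run_time(100.0, p)
        print(f"p={p:2d}  run={run_time(100.0, p):6.1f}  "
              f"eff={efficiency(100.0, p):.2f}  turnaround={turnaround:6.1f}")
    print("best partition:", best_partition(100.0, sizes))
```

Under these assumed parameters the minimum-turnaround request is an intermediate size: small requests suffer long run times, large ones long waits and poor efficiency, which is precisely the tension a moldable scheduler resolves per job.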


Keywords: Iterative Scheme · Scheduling Scheme · Average Response Time · Partition Size · Queue Time





Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

Gerald Sabin, Matthew Lang, and P. Sadayappan
Dept. of Computer Science and Engineering, The Ohio State University, Columbus, OH 43201, USA
