Coarse-Grain Time Slicing with Resource-Share Control in Parallel-Job Scheduling

  • Bryan Esbaugh
  • Angela C. Sodan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4782)


We present a parallel job scheduling approach for coarse-grain timesharing that preempts jobs to disk and thereby avoids any additional memory pressure. The approach provides control over the resource shares allocated to different job classes. We demonstrate that it significantly improves response times for short and medium jobs, and that it allows the resource shares of different job classes to be adjusted at different times of day according to site policies.
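The share-control idea described above can be illustrated with a deficit-based slice allocator: at each coarse-grain time-slice boundary, the scheduler grants the next slice to the job class whose achieved share lags furthest behind its target share (at that boundary, the running class would be preempted to disk and the chosen class resumed). The following minimal Python sketch is our own illustration under assumed names and target shares; it is not the authors' implementation.

```python
# Illustrative sketch of coarse-grain time slicing with resource-share
# control. Class names, target shares, and the largest-deficit rule are
# assumptions for illustration, not the paper's actual scheduler.
from dataclasses import dataclass


@dataclass
class JobClass:
    name: str
    target_share: float   # desired fraction of machine time (site policy)
    used_slices: int = 0  # coarse-grain time slices granted so far


def pick_class(classes, total_slices):
    """Grant the next slice to the class with the largest share deficit,
    i.e. whose achieved share lags furthest behind its target share."""
    def deficit(c):
        achieved = c.used_slices / total_slices if total_slices else 0.0
        return c.target_share - achieved
    return max(classes, key=deficit)


def run(classes, n_slices):
    """Simulate n_slices slice boundaries; at each boundary the running
    class would be preempted to disk and the chosen class resumed."""
    for t in range(n_slices):
        pick_class(classes, t).used_slices += 1
    return {c.name: c.used_slices / n_slices for c in classes}


# Example site policy: favor short jobs during the day.
shares = run([JobClass("short", 0.5),
              JobClass("medium", 0.3),
              JobClass("long", 0.2)], 100)
```

Over 100 slices the achieved shares converge to the targets (0.5/0.3/0.2); switching the `target_share` values at a time-of-day boundary would shift the allocation accordingly, which is the kind of policy control the paper evaluates.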


Keywords: Time Slice · Interarrival Time · Average Response Time · Target Share · Site Policy





Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Bryan Esbaugh (1)
  • Angela C. Sodan (1)

  1. University of Windsor, Computer Science, 401 Sunset Ave., Windsor, Ontario, N9B 3P4, Canada
