Adaptive Job Scheduling Via Predictive Job Resource Allocation

  • Lawrence Barsanti
  • Angela C. Sodan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4376)

Abstract

Standard job scheduling uses static job sizes, which offers little flexibility with respect to changing system load and fragmentation handling. Adaptive resource allocation is known to provide the flexibility needed to obtain better response times under such conditions. We present a scheduling approach (SCOJO-P) that decides resource allocation, i.e. the number of processors, at job start time and then keeps the allocation fixed throughout execution (i.e. molds the jobs). SCOJO-P uses a heuristic to predict the average system load over the runtime of a job and uses that prediction to determine how many processors to allocate to the job. In making this decision, the algorithm balances the interests of the job against those of the jobs currently waiting in the system and of jobs expected to arrive in the near future. We compare our approach with traditional fixed-size scheduling and with the Cirne-Berman approach, which decides job sizes at job submission time by simulating the scheduling of the jobs currently running or waiting. Our results show that SCOJO-P improves mean response times by approximately 70% vs. traditional fixed-size scheduling, whereas the Cirne-Berman approach improves them by only about 30%; i.e. SCOJO-P improves mean response time by about 59% relative to Cirne-Berman.
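
The molding step described above (predict the average system load over a job's runtime, then fix a processor allocation at start time) can be illustrated with a minimal sketch. The Python below is only an illustration of the general idea under assumed models; the Amdahl-style speedup function, the load-based cap, the diminishing-returns threshold, and all names (choose_allocation, predicted_avg_load, etc.) are hypothetical and are not taken from the SCOJO-P paper.

    # Minimal sketch of a molding decision at job start time (assumed models,
    # not the paper's algorithm).

    def speedup(job, p):
        """Amdahl-style speedup model (assumed): job['serial_fraction'] in [0, 1]."""
        f = job['serial_fraction']
        return 1.0 / (f + (1.0 - f) / p)

    def choose_allocation(job, free_procs, predicted_avg_load, max_procs):
        """Pick a processor count for a moldable job at start time.

        predicted_avg_load: heuristic estimate (0..1) of the average system load
        over the job's expected runtime; a higher predicted load keeps the
        allocation smaller to leave room for waiting and soon-arriving jobs.
        """
        # Cap the allocation by what is free and by how loaded the system is
        # expected to be while the job runs (simple linear discount, assumed).
        cap = max(1, int(free_procs * (1.0 - predicted_avg_load)))
        cap = min(cap, job['max_parallelism'], max_procs)

        # Within the cap, stop growing the allocation once the marginal gain
        # in speedup falls below a threshold (efficiency-style cutoff, assumed).
        best = 1
        for p in range(2, cap + 1):
            gain = speedup(job, p) - speedup(job, p - 1)
            if gain < 0.05:          # diminishing-returns threshold (assumed)
                break
            best = p
        return best

    # Example: a mostly parallel job on a 64-processor machine that is
    # predicted to be 50% loaded while the job runs.
    job = {'serial_fraction': 0.05, 'max_parallelism': 32}
    print(choose_allocation(job, free_procs=48, predicted_avg_load=0.5, max_procs=64))

The design point this mirrors is the trade-off stated in the abstract: a higher predicted load shrinks the allocation so that waiting and future jobs can still be started, while the efficiency cutoff keeps the job from claiming processors that yield little additional speedup.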

Keywords

adaptive job scheduling, molding, prediction


References

  1. Barsanti, L.: An Alternative Approach to Adaptive Space Sharing. Honors Thesis, University of Windsor, Computer Science (August 2005)
  2. Chiang, S.-H., Mansharamani, R.K., Vernon, M.K.: Use of Application Characteristics and Limited Preemption for Run-to-Completion Parallel Processor Scheduling Policies. In: Proc. ACM SIGMETRICS Conf. on Measurement and Modeling of Computer Systems, ACM Press, New York (1994)
  3. Chiang, S.-H., Vernon, M.K.: Dynamic vs. Static Quantum-Based Parallel Processor Allocation. In: Feitelson, D.G., Rudolph, L. (eds.) Job Scheduling Strategies for Parallel Processing. LNCS, vol. 1162, pp. 200–223. Springer, Heidelberg (1996)
  4. Cirne, W., Berman, F.: A Model for Moldable Supercomputer Jobs. In: Proc. IPDPS Int'l Parallel and Distributed Processing Symposium (April 2001)
  5. Cirne, W., Berman, F.: When the Herd is Smart: Aggregate Behavior in the Selection of Job Request. IEEE Trans. on Par. and Distr. Systems 14(2) (2003)
  6. Corbalan, J., Martorell, X., Labarta, J.: Improving Gang Scheduling through Job Performance Analysis and Malleability. In: Proc. ICS (June 2001)
  7. Downey, A.: A Model for Speedup of Parallel Programs. Technical Report CSD-97-933, Univ. of California Berkeley (Jan. 1997)
  8. Dutot, P.-F., Trystram, D.: Scheduling on Hierarchical Clusters Using Malleable Tasks. In: Proc. SPAA Symp. on Parallel Algorithms and Architectures (July 2001)
  9. Feitelson, D.G., et al.: Theory and Practice in Parallel Job Scheduling. In: Feitelson, D.G., Rudolph, L. (eds.) Job Scheduling Strategies for Parallel Processing. LNCS, vol. 1291, Springer, Heidelberg (1997)
  10. Franke, H., et al.: An Evaluation of Parallel Job Scheduling for ASCI Blue-Pacific. In: Proc. IEEE/ACM SC Supercomputing Conference, ACM Press, New York (1999)
  11. Gibbons, R.: A Historical Application Profiler for Use by Parallel Schedulers. In: Feitelson, D.G., Rudolph, L. (eds.) Job Scheduling Strategies for Parallel Processing. LNCS, vol. 1291, Springer, Heidelberg (1997)
  12. Ghosal, D., Serazzi, G., Tripathi, S.K.: The Processor Working Set and Its Use in Scheduling Multiprocessor Systems. IEEE Trans. Software Engineering 17(5), 443–453 (1991)
  13. Lublin, U., Feitelson, D.G.: The Workload on Parallel Supercomputers: Modeling the Characteristics of Rigid Jobs. Journal of Parallel and Distributed Computing 63(11), 1105–1122 (2003)
  14. McCann, C., Zahorjan, J.: Processor Allocation Policies for Message Passing Parallel Computers. In: Proc. SIGMETRICS Conf. on Measurement and Modeling of Computer Systems, May 1994, pp. 208–219 (1994)
  15. Naik, V.K., Setia, S.K., Squillante, M.S.: Processor Allocation in Multiprogrammed Distributed-Memory Parallel Computer Systems. Journal of Parallel and Distributed Computing 46(1), 28–47 (1997)
  16. Padhye, J.D., Dowdy, L.: Dynamic Versus Adaptive Processor Allocation Policies for Message Passing Parallel Computers: An Empirical Comparison. In: Feitelson, D.G., Rudolph, L. (eds.) Job Scheduling Strategies for Parallel Processing. LNCS, vol. 1162, pp. 224–243. Springer, Heidelberg (1996)
  17. Parsons, E.W., Sevcik, K.C.: Implementing Multiprocessor Scheduling Disciplines. In: Feitelson, D.G., Rudolph, L. (eds.) Job Scheduling Strategies for Parallel Processing. LNCS, vol. 1291, Springer, Heidelberg (1997)
  18. Rosti, E., et al.: Analysis of Non-Work-Conserving Processor Partitioning Policies. In: Feitelson, D.G., Rudolph, L. (eds.) Job Scheduling Strategies for Parallel Processing. LNCS, vol. 949, Springer, Heidelberg (1995)
  19. Sevcik, K.C.: Characterization of Parallelism in Applications and Their Use in Scheduling. Performance Evaluation Review 17, 171–180 (1989)
  20. Sodan, A.C., Huang, X.: SCOJO: Share-Based Job Coscheduling with Integrated Dynamic Resource Directory in Support of Grid Scheduling. In: Proc. HPCS Ann. Int. Symposium on High Performance Computing Systems, May 2003, pp. 213–221 (2003)
  21. Sodan, A.C., Huang, X.: Adaptive Time/Space Scheduling with SCOJO. In: Proc. HPCS, Winnipeg (May 2004)
  22. Sodan, A.C., Han, L.: ATOP: Space and Time Adaptation for Parallel and Grid Applications via Flexible Data Partitioning. In: Proc. 3rd ACM/IFIP/USENIX Workshop on Reflective and Adaptive Middleware, Oct. 2004, ACM, New York (2004)
  23. Sodan, A.C.: Loosely Coordinated Coscheduling in the Context of Other Dynamic Approaches for Job Scheduling: A Survey. Concurrency & Computation: Practice & Experience 17(15), 1725–1781 (2005)
  24. Sodan, A.C., et al.: Gang Scheduling and Adaptive Resource Allocation to Mitigate Advance Reservation Impact. In: Proc. IEEE CCGrid, Singapore, May 2006, IEEE Computer Society Press, Los Alamitos (2006)
  25. Srinivasan, S., et al.: Effective Selection of Partition Sizes for Moldable Scheduling of Parallel Jobs. In: Sahni, S.K., Prasanna, V.K., Shukla, U. (eds.) HiPC 2002. LNCS, vol. 2552, Springer, Heidelberg (2002)
  26. Tsafrir, D., Etsion, Y., Feitelson, D.G.: Modeling User Runtime Estimates. In: Feitelson, D.G., et al. (eds.) JSSPP 2005. LNCS, vol. 3834, pp. 1–35. Springer, Heidelberg (2005)
  27. Turek, J., et al.: Scheduling Parallelizable Tasks: Putting it All on the Shelf. SIGMETRICS Performance Evaluation Review, Proc. SIGMETRICS Conf. on Measurement and Modeling of Computer Systems 20(1) (1992)

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Lawrence Barsanti (1)
  • Angela C. Sodan (1)
  1. University of Windsor, Windsor, ON N9B 3P4, Canada
