Provably Efficient Two-Level Adaptive Scheduling

  • Yuxiong He
  • Wen-Jing Hsu
  • Charles E. Leiserson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4376)

Abstract

Multiprocessor scheduling in a shared multiprogramming environment can be structured in two levels, where a kernel-level job scheduler allots processors to jobs and a user-level thread scheduler maps the ready threads of a job onto the allotted processors. This paper presents two-level scheduling schemes for scheduling “adaptive” multithreaded jobs whose parallelism can change during execution. The AGDEQ algorithm uses dynamic equipartitioning (DEQ) as its job-scheduling policy and an adaptive greedy algorithm (A-Greedy) as the thread scheduler. The ASDEQ algorithm uses DEQ for job scheduling and an adaptive work-stealing algorithm (A-Steal) as the thread scheduler. AGDEQ is suitable for centralized scheduling environments, and ASDEQ for more decentralized settings. Both two-level schedulers achieve O(1)-competitiveness with respect to makespan for any set of multithreaded jobs with arbitrary release times. They are also O(1)-competitive with respect to mean response time for any set of batched jobs. Moreover, because the length of the scheduling quantum can be adjusted to amortize the cost of context switching during processor reallocation, our schedulers provide control over the scheduling overhead and ensure effective utilization of processors.
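The two levels interact once per scheduling quantum: each job's thread scheduler reports a processor desire, the job scheduler partitions the machine according to those desires, and between quanta each thread scheduler revises its desire based on how well the previous allotment was used. The Python sketch below illustrates this interaction. It is a minimal illustration of the scheme, not the authors' pseudocode; the function names and default parameter values are ours, with delta playing the role of A-Greedy's utilization parameter and rho its responsiveness parameter.

    def deq(desires, total_procs):
        """Dynamic equipartitioning (DEQ) sketch: no job is allotted more
        than its desire, and processors declined by modest jobs are
        redistributed equally among the jobs that still want more."""
        alloc = {job: 0 for job in desires}
        active = dict(desires)            # jobs whose desire is not yet met
        remaining = total_procs
        while active and remaining >= len(active):
            share = remaining // len(active)
            modest = [j for j, d in active.items() if d <= share]
            if not modest:
                # Every remaining job wants more than the fair share:
                # equipartition what is left and stop.
                for j in active:
                    alloc[j] = share
                break
            for j in modest:              # satisfy modest jobs outright
                alloc[j] = active.pop(j)
                remaining -= alloc[j]
        return alloc  # any leftover (< number of jobs) idles in this integer sketch

    def agreedy_desire(desire, allotment, work_done, quantum_len, delta=0.8, rho=2):
        """A-Greedy's between-quantum parallelism feedback (sketch):
        work_done is the number of processor-steps of work the job
        completed during the quantum just ended."""
        efficient = work_done >= delta * allotment * quantum_len
        satisfied = allotment >= desire
        if not efficient:
            return max(1, desire // rho)  # inefficient quantum: shrink desire
        if satisfied:
            return desire * rho           # efficient and satisfied: grow desire
        return desire                     # efficient but deprived: hold steady

Because DEQ never allots a job more than it asks for, a job that overestimates its parallelism wastes at most part of one quantum before the multiplicative decrease throttles its desire; this feedback loop is what allows a longer quantum to amortize the context-switching cost of each reallocation, as the abstract notes.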

Keywords

Competitive Ratio · Total Response Time · Utilization Parameter · Work Stealing · Total Allotment

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Yuxiong He (1)
  • Wen-Jing Hsu (1)
  • Charles E. Leiserson (2)
  1. Nanyang Technological University, Nanyang Avenue, Singapore 639798
  2. Massachusetts Institute of Technology, Cambridge, MA 02139, USA
