Probabilistic Self-Scheduling

  • Milind Girkar
  • Arun Kejariwal
  • Xinmin Tian
  • Hideki Saito
  • Alexandru Nicolau
  • Alexander Veidenbaum
  • Constantine Polychronopoulos
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4128)


Scheduling for large parallel systems such as clusters and grids presents new challenges due to multiprogramming/polyprocessing [1]. In such systems, several jobs (each consisting of a number of parallel tasks) belonging to multiple users may run at the same time. Processors are allocated to the different jobs either statically or dynamically; furthermore, a processor may be taken away from a task of one job and reassigned to a task of another job. Thus, the number of processors available to a job varies with time. Although several approaches have been proposed in the past for scheduling tasks on multiprocessors, they assume dedicated availability of processors; consequently, the existing scheduling approaches are not suitable for multiprogrammed systems. In this paper, we present a novel probabilistic approach for scheduling parallel tasks on multiprogrammed parallel systems. The key characteristic of the proposed scheme is its self-adaptive nature, i.e., it is responsive to systemic parameters such as the number of available processors. Self-adaptation achieves better load balance across processors and reduces synchronization overhead (the number of allocation points). Experimental results show the effectiveness of our technique.
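The abstract's core idea, chunk sizes that adapt to the number of processors currently available, can be illustrated with a small sketch in the style of guided self-scheduling [5]. The function below and its parameter k are illustrative assumptions made for this page, not the authors' actual probabilistic scheme:

```python
def self_schedule(total_iters, get_available_procs, k=2):
    """Illustrative self-scheduling sketch (not the paper's exact scheme).

    Each time a processor requests work, it takes a chunk equal to the
    remaining iterations divided by k times the number of processors
    currently available, so chunks shrink as the loop drains and grow
    when fewer processors compete for work.
    """
    remaining = total_iters
    chunks = []
    while remaining > 0:
        p = max(1, get_available_procs())     # availability varies over time
        chunk = max(1, remaining // (k * p))  # adaptive chunk size
        chunks.append(chunk)
        remaining -= chunk
    return chunks

# With steady availability this degenerates to classic decreasing-chunk
# self-scheduling; len(chunks) counts the synchronization (allocation)
# points the abstract refers to.
steady = self_schedule(1000, lambda: 8)
print(len(steady), steady[:4])
```

Under fluctuating availability (e.g., a `get_available_procs` that returns different values over time), the chunk sizes track the current processor count, which is the self-adaptive behavior the paper targets.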


Keywords: Parallel Task, Chunk Size, Parallel Loop, Probabilistic Schedule, Synchronization Point


  1. Polychronopoulos, C.D.: Multiprocessing vs multiprogramming. In: Proceedings of the 1989 International Conference on Parallel Processing, pp. II-223–II-230 (August 1989)
  2. Lundstrom, S., Barnes, G.: A controllable MIMD architecture. In: Proceedings of the 1980 International Conference on Parallel Processing, St. Charles, IL (August 1980)
  3. Kejariwal, A., Nicolau, A.: Reading list of self-scheduling of parallel loops,
  4. Polychronopoulos, C.D.: Towards autoscheduling compilers. Journal of Supercomputing 2(3), 297–330 (1988)
  5. Polychronopoulos, C.D., Kuck, D.J.: Guided self-scheduling: A practical scheduling scheme for parallel supercomputers. IEEE Transactions on Computers 36(12), 1425–1439 (1987)
  6. Hummel, S.F., Schonberg, E., Flynn, L.E.: Factoring: a method for scheduling parallel loops. Communications of the ACM 35(8), 90–101 (1992)
  7. Meyer, P.L.: Introductory Probability and Statistical Applications. Reading, MA (1970)
  8. Downey, A.B., Feitelson, D.G.: The elusive goal of workload characterization. SIGMETRICS Performance Evaluation Review 26(4), 14–29 (1999)
  9. Kejariwal, A., Nicolau, A., Polychronopoulos, C.D.: Feedback-based guided self-scheduling. In: Proceedings of the 12th SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, CA (February 2006)
  10. Polychronopoulos, C.: Loop coalescing: A compiler transformation for parallel machines. In: Proceedings of the 1987 International Conference on Parallel Processing, pp. 235–242 (August 1987)
  11.
  12.
  13.
  14. Kejariwal, A., Nicolau, A., Polychronopoulos, C.D.: An efficient approach for self-scheduling parallel loops on multiprogrammed parallel computers. In: Proceedings of the 18th International Workshop on Languages and Compilers for Parallel Computing, Hawthorne, NY (October 2005)
  15. Zhang, Y., Burcea, M., Cheng, V., Ho, R., Voss, M.: An adaptive OpenMP loop scheduler for hyperthreaded SMPs. In: Proceedings of the 17th International Conference for Parallel and Distributed Computing Systems, San Francisco, CA (2004)
  16. Bigben: Pittsburgh Supercomputing Center.
  17.
  18. Browne, J.C., Chandy, K.M., Hogarth, J., Lee, C.: The effect on throughput in multi-processing in a multi-programming environment. IEEE Transactions on Computers C-22(8), 728–735 (1973)
  19. Sauer, C.H., Chandy, K.M.: The impact of distributions and disciplines on multiple processor systems. Communications of the ACM 22(1), 25–34 (1979)
  20. Ousterhout, J.: Scheduling techniques for concurrent systems. In: Proceedings of the Conference on Distributed Computing Systems, pp. 22–30 (1982)
  21. Rommel, C.G., Towsley, D., Stankovic, J.A.: Analysis of fork-join jobs using processor-sharing. Technical Report UM-CS-1987-052, University of Massachusetts (1987)
  22. Leuze, M.R., Dowdy, L.W., Park, K.H.: Multiprogramming a distributed-memory multiprocessor. Concurrency: Practice and Experience 1(1), 19–33 (1989)
  23. Setia, S.K., Squillante, M.S., Tripathi, S.K.: Processor scheduling on multiprogrammed, distributed memory parallel computers. SIGMETRICS Performance Evaluation Review 21(1), 158–170 (1993)
  24. McCann, C., Vaswani, R., Zahorjan, J.: A dynamic processor allocation policy for multiprogrammed shared-memory multiprocessors. ACM Transactions on Computer Systems 11(2), 146–178 (1993)
  25. Sevcik, K.C.: Application scheduling and processor allocation in multiprogrammed parallel processing systems. Performance Evaluation 19(2-3), 107–140 (1994)
  26. Miller, A.R.: Nonpreemptive run-time scheduling issues on a multitasked, multiprogrammed multiprocessor with dependencies, bidimensional tasks, folding and dynamic graphs. PhD thesis, Department of Computer Science, University of Illinois at Urbana-Champaign (1987)
  27. Majumdar, S., Eager, D.L., Bunt, R.B.: Scheduling in multiprogrammed parallel systems. In: Proceedings of the 1988 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, Santa Fe, NM, pp. 104–113 (1988)
  28. Leutenegger, S.T., Vernon, M.K.: The performance of multiprogrammed multiprocessor scheduling algorithms. In: Proceedings of the 1990 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, Boulder, CO, pp. 226–236 (1990)
  29. Chandy, K.M., Reynolds, P.F.: Scheduling partially ordered tasks with probabilistic execution times. In: Proceedings of the Fifth Symposium on Operating Systems Principles, Austin, TX, pp. 169–177 (1975)
  30. Bruno, J., Downey, P.: Probabilistic bounds on the performance of list scheduling. SIAM Journal of Computing 15(2), 409–417 (1986)
  31. Tongsima, S., Chantrapornchai, C., Sha, E.H.-M., Passos, N.L.: Scheduling with confidence for probabilistic data-flow graphs. In: Proceedings of the 7th Great Lakes Symposium on VLSI, Urbana, IL, pp. 150–155 (1997)
  32. Som, T.K., Sargent, R.G.: A probabilistic event scheduling policy for optimistic parallel discrete event simulation. In: Proceedings of the 12th Workshop on Parallel and Distributed Simulation, Banff, Alberta, Canada, pp. 56–63 (May 1998)
  33. Burns, A., Punnekkat, S., Strigini, L., Wright, D.R.: Probabilistic scheduling guarantees for fault-tolerant real-time systems. In: Proceedings of the Seventh IFIP International Working Conference on Dependable Computing for Critical Applications, San Jose, CA, pp. 361–378 (January 1999)
  34. Fujita, S., Zhou, H.: Multiprocessor scheduling problem with probabilistic execution costs. In: Proceedings of the International Symposium on Parallel Architectures, Algorithms and Networks, Dallas/Richardson, TX, pp. 121–126 (December 2000)
  35. Li, K., Pan, Y.: Probabilistic analysis of scheduling precedence constrained parallel tasks on multicomputers with contiguous processor allocation. IEEE Transactions on Computers 49(10), 1021–1030 (2000)
  36. Moulin, H.: Split-proof probabilistic scheduling. In: New Trends in Co-operative Game Theory (January 2005)
  37. Özsoy, H.: Coordinated splitting in probabilistic scheduling. In: Public Economic Theory, Marseille, France (2005)
  38. Glatard, T., Montagnat, J., Pennec, X.: Probabilistic and dynamic optimization of job partitioning on a grid infrastructure. In: Proceedings of the 14th Euromicro Conference on Parallel, Distributed and Network-based Processing, Montbéliard-Sochaux, France (February 2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Milind Girkar (1)
  • Arun Kejariwal (2)
  • Xinmin Tian (1)
  • Hideki Saito (1)
  • Alexandru Nicolau (2)
  • Alexander Veidenbaum (2)
  • Constantine Polychronopoulos (3)

  1. Intel Corporation, Santa Clara, USA
  2. Center for Embedded Computer Systems, University of California at Irvine, Irvine, USA
  3. Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign, Urbana, USA
