Scheduling at Twilight the Easy Way

  • Hannah Bast
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2285)


We investigate particularly simple algorithms for optimizing the tradeoff between load imbalance and assignment overheads in dynamic multiprocessor scheduling scenarios in which the information available about a task's processing time, before the task completes, is vague. We describe a simple and elegant generic algorithm that, in a very general model, always comes surprisingly close to the theoretical optimum, and whose performance we can analyze exactly with respect to constant factors. In contrast, we prove that algorithms that assign tasks in equal-sized portions perform far from optimally in general. In fact, we give evidence that the performance of our generic scheme cannot be improved by any constant factor without sacrificing the simplicity of the algorithm. We also give lower bounds on the performance of the various decreasing-size heuristics that have typically been used so far in concrete applications.
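The abstract does not spell out the algorithms themselves, but the decreasing-size heuristics it refers to include guided self-scheduling (Polychronopoulos and Kuck), in which each idle processor grabs a chunk of ceil(R/p) tasks, where R is the number of unassigned tasks and p the number of processors. As an illustrative sketch of that chunk-size rule (not of the paper's own generic algorithm), one might write:

```python
import math

def gss_chunks(total_tasks: int, num_processors: int):
    """Yield successive chunk sizes under guided self-scheduling:
    each request receives ceil(remaining / p) tasks, so chunks shrink
    geometrically from total/p down to 1. Large early chunks keep
    assignment overhead low; small late chunks limit load imbalance."""
    remaining = total_tasks
    while remaining > 0:
        chunk = math.ceil(remaining / num_processors)
        yield chunk
        remaining -= chunk

# Example: 100 tasks on 4 processors.
chunks = list(gss_chunks(100, 4))
# chunks start at 25 and decrease monotonically to 1, summing to 100
```

With 100 tasks and 4 processors this produces 14 chunks instead of 100 single-task assignments, which is exactly the overhead-versus-imbalance tradeoff the paper studies: fewer, larger chunks mean fewer scheduling operations, but a large final chunk could leave processors idle.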





Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Hannah Bast
  1. Max-Planck-Institut für Informatik, Saarbrücken, Germany
