Arabian Journal for Science and Engineering, Volume 41, Issue 8, pp 3279–3295

Scheduling of Parallel Tasks with Proportionate Priorities

  • Muhammad Khurram Bhatti (corresponding author)
  • Isil Oz
  • Konstantin Popov
  • Mats Brorsson
  • Umer Farooq
Research Article - Computer Engineering and Computer Science

Abstract

Parallel computing systems promise higher performance for computationally intensive applications. Since programmes for parallel systems consist of tasks that can be executed simultaneously, task scheduling becomes crucial for the performance of these applications. Given the dependence constraints between tasks, their arbitrary sizes, and the bounded resources available for execution, optimal task scheduling is an NP-hard problem; proposed scheduling algorithms are therefore based on heuristics. This paper presents a novel list scheduling heuristic, called the Noodle heuristic. Noodle is a simple yet effective scheduling heuristic that differs from existing list scheduling techniques in the way it assigns task priorities. The priority mechanism of Noodle maintains proportionate fairness among all ready tasks belonging to all paths within a task graph. We conduct an extensive experimental evaluation of the Noodle heuristic with task graphs taken from the Standard Task Graph set. Our experimental study includes results for task graphs comprising 50, 100, and 300 tasks per graph and execution scenarios on 2-, 4-, 8-, and 16-core systems. We report the average Schedule Length Ratio (SLR) obtained by varying the Communication-to-Computation cost Ratio (CCR), and we analyse the results for different degrees of parallelism and numbers of edges in the task graphs. Our results demonstrate that Noodle produces schedules that are, in the worst case, within 12 % of the optimal schedule for 2-, 4-, and 8-core systems. We also compare Noodle with existing scheduling heuristics and analyse its performance relative to them; Noodle outperforms the existing heuristics in terms of average SLR.
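
To make the list-scheduling setting in the abstract concrete, the sketch below shows a generic list scheduler for a DAG of tasks on identical cores. It is not the Noodle heuristic itself (the paper body is not part of this preview): the priority callable, the task and predecessor dictionaries, and the omission of communication costs are assumptions made purely for illustration, with the priority function standing in for Noodle's proportionate-fairness priorities.

# A minimal, illustrative list scheduler for a DAG of tasks on identical cores.
# This is NOT the Noodle heuristic; `priority` is a hypothetical stand-in for
# Noodle's proportionate-fairness priorities, and communication costs are ignored.
import heapq
from collections import defaultdict


def list_schedule(tasks, preds, cost, num_cores, priority):
    """tasks: iterable of task ids; preds: dict task -> set of predecessors;
    cost: dict task -> execution time; priority: task -> number (higher first).
    Returns (makespan, start_times)."""
    succs = defaultdict(set)
    indeg = {t: len(preds.get(t, ())) for t in tasks}
    for t, ps in preds.items():
        for p in ps:
            succs[p].add(t)

    # Ready list ordered by priority (max-heap via negated keys).
    ready = [(-priority(t), t) for t in tasks if indeg[t] == 0]
    heapq.heapify(ready)

    core_free = [0.0] * num_cores   # earliest time each core becomes free
    finish, start = {}, {}
    while ready:
        _, t = heapq.heappop(ready)
        # Earliest start: all predecessors finished and the chosen core is free.
        est = max((finish[p] for p in preds.get(t, ())), default=0.0)
        c = min(range(num_cores), key=core_free.__getitem__)
        start[t] = max(est, core_free[c])
        finish[t] = start[t] + cost[t]
        core_free[c] = finish[t]
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                heapq.heappush(ready, (-priority(s), s))
    return (max(finish.values(), default=0.0), start)


# Example: a 4-task fork-join graph scheduled on 2 cores with a hypothetical
# priority that simply favours cheaper tasks (any callable would do here).
if __name__ == "__main__":
    tasks = ["a", "b", "c", "d"]
    preds = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
    cost = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
    makespan, start = list_schedule(tasks, preds, cost, 2, lambda t: -cost[t])
    print(makespan, start)

For context, the Schedule Length Ratio reported in the abstract is, as commonly defined in the list-scheduling literature, the makespan of the produced schedule divided by the sum of the computation costs of the tasks on the critical path; 1.0 is therefore a lower bound, and lower values indicate better schedules.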

Keywords

List scheduling · Static task scheduling · Directed acyclic graph (DAG) · Multiprocessor · Multicore · Parallel computing

Copyright information

© King Fahd University of Petroleum & Minerals 2016

Authors and Affiliations

  • Muhammad Khurram Bhatti (1), corresponding author
  • Isil Oz (2)
  • Konstantin Popov (3)
  • Mats Brorsson (4)
  • Umer Farooq (5)
  1. Information Technology University, Lahore, Pakistan
  2. Marmara University, Istanbul, Turkey
  3. SICS ICT, Kista, Sweden
  4. KTH Royal Institute of Technology, Stockholm, Sweden
  5. COMSATS Institute of Information Technology (CIIT), Lahore, Pakistan
