Computing, Volume 100, Issue 6, pp 557–595

Locality-aware task scheduling for homogeneous parallel computing systems

  • Muhammad Khurram Bhatti
  • Isil Oz
  • Sarah Amin
  • Maria Mushtaq
  • Umer Farooq
  • Konstantin Popov
  • Mats Brorsson

Abstract

In systems with a complex many-core cache hierarchy, exploiting data locality can significantly reduce the execution time and energy consumption of parallel applications. Locality can be exploited at various hardware and software layers. For instance, recent hardware designs are already optimised for locality by implementing private and shared caches in a multi-level fashion. These optimisations are wasted, however, if software scheduling does not cast the execution in a manner that exploits the locality available in the programs themselves. Since programs for parallel systems consist of tasks executed simultaneously, task scheduling is crucial for performance on multi-level cache architectures. This paper presents a heuristic algorithm for homogeneous multi-core systems called locality-aware task scheduling (LeTS). LeTS is a work-conserving algorithm that takes both locality and load balancing into account in order to reduce the execution time of target applications. It operates in two distinct phases: a working task group formation phase (WTG-FP) and a working task group ordering phase (WTG-OP). The WTG-FP forms groups of tasks in order to capture data reuse across tasks, while the WTG-OP determines an execution order for task groups that minimizes the reuse distance of data shared between tasks. We have performed experiments using randomly generated task graphs, varying three major performance parameters: (1) the communication-to-computation ratio (CCR), between 0.1 and 1.0; (2) the application size, i.e., task graphs comprising 50, 100, and 300 tasks per graph; and (3) the number of cores, with 2-, 4-, 8-, and 16-core execution scenarios. We have also performed experiments using selected real-world applications. LeTS reduces the overall execution time of applications by exploiting inter-task data locality. Results show that LeTS outperforms state-of-the-art algorithms in amortizing inter-task communication cost.
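The two-phase structure described above can be illustrated with a toy sketch: phase one greedily groups tasks that touch overlapping data items, and phase two orders the groups so that consecutive groups share as much data as possible. This is only a minimal illustration of the grouping/ordering idea, not the authors' LeTS implementation; the `Task` class and the function names `wtg_formation` and `wtg_ordering` are hypothetical stand-ins, and real task graphs would also carry dependence edges, computation costs, and per-core placement.

```python
class Task:
    """Hypothetical task model: an id, a computation cost, and the
    set of data items the task reads or writes."""
    def __init__(self, tid, cost, data):
        self.tid = tid
        self.cost = cost
        self.data = set(data)

def wtg_formation(tasks):
    """Phase 1 (WTG-FP analogue): greedily place each task into the
    first group with which it shares a data item, so data reuse
    across tasks stays within one group."""
    groups = []
    for t in tasks:
        for g in groups:
            if any(t.data & other.data for other in g):
                g.append(t)
                break
        else:
            groups.append([t])  # no overlap found: start a new group
    return groups

def wtg_ordering(groups):
    """Phase 2 (WTG-OP analogue): order groups so that each group has
    maximal data overlap with its predecessor, keeping the reuse
    distance of shared data small."""
    if not groups:
        return []
    remaining = list(groups)
    ordered = [remaining.pop(0)]
    while remaining:
        last_data = set().union(*(t.data for t in ordered[-1]))
        # pick the remaining group sharing the most data with the last one
        best = max(remaining,
                   key=lambda g: len(last_data & set().union(*(t.data for t in g))))
        remaining.remove(best)
        ordered.append(best)
    return ordered
```

For example, tasks over data items `{x, y}`, `{y, z}`, and `{p}` would yield two groups (the first two tasks share `y`), which are then scheduled back to back so the shared items are likely still cached when the second task runs.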

Keywords

Runtime resource management · Parallel computing · Multicore scheduling · Homogeneous systems · Directed acyclic graph (DAG) · Embedded systems

Mathematics Subject Classification

68U01 


Copyright information

© Springer-Verlag GmbH Austria 2017

Authors and Affiliations

  1. Embedded Computing Lab, Information Technology University (ITU), Lahore, Pakistan
  2. Computer Engineering Department, Izmir Institute of Technology, Izmir, Turkey
  3. Department of Electrical and Computer Engineering, Dhofar University, Salalah, Oman
  4. SICS, Kista, Sweden
  5. KTH Royal Institute of Technology, Kista, Sweden