Scheduling of Parallel Tasks with Proportionate Priorities
Parallel computing systems promise higher performance for computationally intensive applications. Since programs for parallel systems consist of tasks that can be executed simultaneously, task scheduling becomes crucial to the performance of these applications. Given dependence constraints between tasks, their arbitrary sizes, and the bounded resources available for execution, optimal task scheduling is considered NP-hard. Proposed scheduling algorithms are therefore based on heuristics. This paper presents a novel list scheduling heuristic, called the Noodle heuristic. Noodle is a simple yet effective scheduling heuristic that differs from existing list scheduling techniques in the way it assigns task priorities. The priority mechanism of Noodle maintains proportionate fairness among all ready tasks belonging to all paths within a task graph. We conduct an extensive experimental evaluation of the Noodle heuristic with task graphs taken from the Standard Task Graph set. Our experimental study includes results for task graphs comprising 50, 100, and 300 tasks per graph, and execution scenarios with 2-, 4-, 8-, and 16-core systems. We report average Schedule Length Ratio (SLR) results obtained by varying the Communication to Computation cost Ratio (CCR). We also analyse results for different degrees of parallelism and numbers of edges in the task graphs. Our results demonstrate that Noodle produces schedules that are within a maximum of 12% (worst case) of the optimal schedule for 2-, 4-, and 8-core systems. We also compare Noodle with existing scheduling heuristics and perform a comparative analysis of its performance. Noodle outperforms existing heuristics in terms of average SLR.
Keywords: List scheduling · Static task scheduling · Directed acyclic graph (DAG) · Multiprocessor · Multicore · Parallel computing
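The abstract does not spell out Noodle's proportionate-fairness priority formula, so the sketch below shows only the generic list-scheduling skeleton that such heuristics build on: ready tasks of a DAG are ordered by a pluggable priority key and assigned to the earliest-available core, honouring predecessor finish times. The example task graph, the cost-based priority function, and the omission of communication costs are all illustrative assumptions, not the paper's method.

```python
def list_schedule(tasks, deps, num_cores, priority):
    """Generic list scheduler (sketch, not Noodle itself).

    tasks:    dict mapping task id -> computation cost
    deps:     dict mapping task id -> set of predecessor ids
    priority: callable mapping a ready task id to a sort key
              (smaller key = scheduled first)
    Communication costs are ignored in this simplified sketch.
    """
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succs = {t: [] for t in tasks}
    for t, ps in deps.items():
        for p in ps:
            succs[p].append(t)

    finish = {}                      # task id -> finish time
    core_free = [0.0] * num_cores    # next free time of each core
    ready = [t for t in tasks if indeg[t] == 0]
    schedule = []                    # (task, core, start, finish)

    while ready:
        ready.sort(key=priority)     # the heuristic lives in this key
        t = ready.pop(0)
        # earliest start: all predecessors must have finished
        est = max((finish[p] for p in deps.get(t, ())), default=0.0)
        # pick the core that lets the task start soonest
        core = min(range(num_cores), key=lambda c: max(core_free[c], est))
        start = max(core_free[core], est)
        end = start + tasks[t]
        core_free[core] = end
        finish[t] = end
        schedule.append((t, core, start, end))
        for s in succs[t]:           # release newly ready successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return schedule


# Hypothetical diamond-shaped task graph on a 2-core system,
# with a simple largest-cost-first priority as a stand-in.
tasks = {"A": 2, "B": 3, "C": 1, "D": 2}
deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
sched = list_schedule(tasks, deps, 2, priority=lambda t: -tasks[t])
# makespan here is 7.0: A(0-2), B(2-5) and C(2-3) in parallel, D(5-7)
```

Noodle's contribution, per the abstract, is the priority key itself: it keeps a proportionate balance among ready tasks drawn from all paths of the graph rather than favouring a single critical path.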
- 2. Grama, A.; Gupta, A.; Karypis, G.; Kumar, V.: Introduction to Parallel Computing, 2nd edn. Pearson Addison-Wesley, Boston (2003)
- 3. Sinnen, O.: Task Scheduling for Parallel Systems. Wiley, New York (2007). ISBN 978-0-471-73576-2
- 8. Darte, A.; Robert, Y.; Vivien, F.: Scheduling and Automatic Parallelization. Birkhäuser, New York (2002). ISBN 0-8176-4149-1
- 10. Suter, F.; Desprez, F.; Casanova, H.: From heterogeneous task scheduling to heterogeneous mixed parallel scheduling. In: Euro-Par 2004 Parallel Processing, pp. 230–237 (2004)
- 18. Iverson, M.A.; Ozguner, F.; Follen, G.J.: Parallelizing existing applications in a distributed heterogeneous environment. In: HCW '95, pp. 93–100 (1995)
- 21. Deelman, E.; Singh, G.; Su, M.-H.; Blythe, J.; Gil, Y.; Kesselman, C.; Mehta, G.; Vahi, K.; Berriman, G.B.; Good, J.; Laity, A.; Jacob, J.C.; Katz, D.S.: Pegasus: a framework for mapping complex scientific workflows onto distributed systems. Sci. Prog. 13(3), 219–237 (2005)
- 23. Standard Task Graph (STG) Set: http://www.kasahara.elec.waseda.ac.jp/schedule
- 24. Orsila, H.; Kangas, T.; Salminen, E.; Hamalainen, T.D.; Hannikainen, M.: Automated memory-aware application distribution for multi-processor system-on-chips. J. Syst. Archit. 53(11), 795–815 (2007)