Frontiers of Computer Science, Volume 13, Issue 1, pp 73–85

FunctionFlow: coordinating parallel tasks

  • Xuepeng Fan
  • Xiaofei Liao
  • Hai Jin
Research Article

Abstract

Despite the growing popularity of task-based parallel programming, today's task-parallel programming libraries and languages still offer limited support for coordinating parallel tasks. This limitation forces programmers to coordinate parallel tasks with additional independent components, either third-party libraries or extra components of the same programming library or language. Moreover, mixing tasks with coordination components increases the difficulty of task-based programming and blinds schedulers to the dependencies among tasks.

In this paper, we propose a task-based parallel programming library, FunctionFlow, which coordinates tasks without resorting to additional independent coordination components. First, we use dependency expressions to represent task termination in a uniform way. The key idea behind dependency expressions is to use && to wait for the termination of both tasks and || to wait for the termination of either task, and to allow dependency expressions to be combined. Second, as runtime support, we use a lightweight representation for dependency expressions, and we use a suspended-task queue to hold tasks whose prerequisites have not yet been satisfied.
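
To make the idea concrete, the following is a minimal C++ sketch of how such dependency expressions could be built with overloaded && and || operators. This is an illustrative assumption, not FunctionFlow's actual API: the names Dep, spawn, and the polling wait are all hypothetical, and a real runtime would park waiting tasks in a suspended-task queue rather than busy-wait.

    #include <chrono>
    #include <functional>
    #include <future>
    #include <memory>
    #include <thread>
    #include <vector>

    // A dependency expression is a tree: leaves wrap running tasks,
    // inner nodes combine children with ALL (&&) or ANY (||) semantics.
    struct Dep {
        enum Kind { LEAF, ALL, ANY } kind;
        std::shared_future<void> task;           // set when kind == LEAF
        std::vector<std::shared_ptr<Dep>> kids;  // set for ALL / ANY nodes

        // Poll whether this dependency expression is satisfied.
        bool ready() const {
            if (kind == LEAF)
                return task.wait_for(std::chrono::seconds(0)) ==
                       std::future_status::ready;
            bool any = false, all = true;
            for (const auto& k : kids) {
                bool r = k->ready();
                any = any || r;
                all = all && r;
            }
            return kind == ALL ? all : any;
        }

        // Block until satisfied (polling, for brevity only).
        void wait() const {
            while (!ready()) std::this_thread::yield();
        }
    };

    using DepPtr = std::shared_ptr<Dep>;

    // Spawn a task; its leaf dependency is satisfied on termination.
    DepPtr spawn(std::function<void()> fn) {
        auto d = std::make_shared<Dep>();
        d->kind = Dep::LEAF;
        d->task = std::async(std::launch::async, std::move(fn)).share();
        return d;
    }

    // && is satisfied when BOTH operands have terminated.
    DepPtr operator&&(DepPtr a, DepPtr b) {
        auto d = std::make_shared<Dep>();
        d->kind = Dep::ALL;
        d->kids = {std::move(a), std::move(b)};
        return d;
    }

    // || is satisfied when EITHER operand has terminated.
    DepPtr operator||(DepPtr a, DepPtr b) {
        auto d = std::make_shared<Dep>();
        d->kind = Dep::ANY;
        d->kids = {std::move(a), std::move(b)};
        return d;
    }

    int main() {
        auto a = spawn([] { /* work A */ });
        auto b = spawn([] { /* work B */ });
        auto c = spawn([] { /* work C */ });
        // Run the dependent step once A and B finish, or once C finishes.
        ((a && b) || c)->wait();
        // ... the dependent task would be released here ...
    }

In this sketch wait() simply polls. Per the abstract, FunctionFlow's runtime instead keeps a lightweight representation of each expression and parks not-yet-runnable tasks in a suspended-task queue, releasing them once their prerequisites terminate, which avoids spinning.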

Finally, we demonstrate FunctionFlow's effectiveness in two ways: a case study implementing popular parallel patterns with FunctionFlow, and a performance comparison with the state-of-the-art practice, TBB. Our results show that FunctionFlow can coordinate parallel tasks in general without involving additional components, while achieving performance comparable to TBB.

Keywords

task parallel programming, task dependency, FunctionFlow, coordination patterns

Acknowledgements

This work was supported by the National High-Tech Research and Development Program of China (2015AA015303) and the National Natural Science Foundation of China (Grant No. 61732010).

Supplementary material

11704_2016_6286_MOESM1_ESM.ppt (PPT, approximately 465 KB)

Copyright information

© Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Services Computing Technology and System Lab (SCTS) & Cluster and Grid Computing Lab (CGCL), School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
