Task Parallelism and Data Distribution: An Overview of Explicit Parallel Programming Languages

  • Dounia Khaldi
  • Pierre Jouvelot
  • Corinne Ancourt
  • François Irigoin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7760)

Abstract

Efficiently programming parallel computers would ideally require a language that provides high-level programming constructs to avoid the programming errors that frequently occur when expressing parallelism. Since task parallelism is considered more error-prone than data parallelism, we survey six popular parallel language designs that tackle this difficult issue: Cilk, Chapel, X10, Habanero-Java, OpenMP and OpenCL. Using the parallel computation of the Mandelbrot set as a running example, this paper describes how the fundamentals of task parallel programming are dealt with in these languages. Our study suggests that, even though these languages introduce many keywords and notions, they boil down, as far as control issues are concerned, to three key task concepts: creation, synchronization and atomicity. These languages adopt one of three memory models: shared memory, message passing and Partitioned Global Address Space (PGAS). The paper is designed to give users, language designers and compiler designers an up-to-date comparative overview of current parallel languages.

Keywords

Parallel language · Task parallelism · Mandelbrot set · Cilk · Chapel · X10 · Habanero-Java · OpenMP · OpenCL



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Dounia Khaldi (1)
  • Pierre Jouvelot (1)
  • Corinne Ancourt (1)
  • François Irigoin (1)
  1. CRI, Mathématiques et systèmes, MINES ParisTech, Fontainebleau, France
