Performance and Productivity of New Programming Languages

  • Iris Christadler
  • Giovanni Erbacci
  • Alan D. Simpson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7174)

Abstract

Will HPC programmers (have to) adapt to new programming languages and parallelization concepts? Many different languages are currently being discussed as complements or successors to the traditional HPC programming paradigm (Fortran/C + MPI). These include both languages designed specifically for the HPC community (e.g. the partitioned global address space (PGAS) languages UPC, CAF, X10 and Chapel) and languages that target hardware accelerators (e.g. Cn for ClearSpeed accelerator boards, CellSs for the IBM Cell processor, and GPGPU approaches such as CUDA, OpenCL, CAPS HMPP and RapidMind).
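
As a concrete illustration of the accelerator-side programming models mentioned above, the sketch below shows a minimal CUDA SAXPY kernel together with its host-side launch. It is purely illustrative and not one of the benchmarks from the study; the problem size, launch configuration and use of managed memory are arbitrary choices made to keep the example short.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Device kernel: one thread per element computes y[i] = a * x[i] + y[i].
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            y[i] = a * x[i] + y[i];
        }
    }

    int main() {
        const int n = 1 << 20;  // 2^20 elements, an arbitrary size for this sketch
        float *x, *y;
        // Managed (unified) memory keeps the sketch short; explicit
        // cudaMemcpy transfers between host and device would otherwise be needed.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expected: 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

Even in this toy example the code separates host orchestration from the device kernel; restructuring existing Fortran/C codes along such lines is the kind of effort that performance and productivity assessments of these languages attempt to quantify.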

During the project “Partnership for Advanced Computing in Europe – Preparatory Phase” (PRACE-PP), developers across Europe ported three benchmarks to more than 12 different programming languages and assessed both performance and productivity. Their results will help scientific groups choose the combination of language and hardware best suited to their scientific problems. This paper describes the framework used for this assessment and the results gathered during the study, together with guidelines for their interpretation.

Keywords

High Performance Computing · Benchmark Suite · Hardware Accelerator · High Performance Computing Application · High Performance Fortran


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Iris Christadler, Leibniz Supercomputing Centre, Garching, Germany
  • Giovanni Erbacci, CINECA Supercomputing Centre, Bologna, Italy
  • Alan D. Simpson, EPCC, The University of Edinburgh, United Kingdom
