Programming Support for Future Parallel Architectures

  • Siegfried Benkner
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9548)

Abstract

Due to physical constraints, the performance of single processors has reached its limits, and all major hardware vendors have switched to multi-core architectures. In addition, there is a trend towards heterogeneous parallel systems comprising conventional multi-core CPUs, GPUs, and other types of accelerators. As a consequence, developing applications that can exploit the potential of emerging parallel architectures while remaining portable between different types of systems is becoming increasingly challenging. In this paper we discuss recent research efforts of the European PEPPHER project on software development for future parallel architectures. We present a high-level compositional approach to parallel software development in concert with an intelligent task-based runtime system. Such an approach can significantly enhance the programmability of future parallel systems while ensuring efficiency and facilitating performance portability across a range of different architectures.
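
To make the compositional idea concrete, the following minimal C++ sketch shows a component with several implementation variants, each annotated with the execution unit it targets, and a toy selection step that picks a variant at run time. All names here (Variant, choose_variant, scale_cpu, scale_gpu_stub) are invented for illustration and are not the actual PEPPHER or StarPU APIs; a real task-based runtime would use performance models and current resource availability rather than the fixed size threshold assumed below.

    // Illustrative sketch only: a toy multi-variant "component" in the
    // spirit of PEPPHER-style composition. Names are hypothetical and do
    // not correspond to the actual PEPPHER or StarPU interfaces.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // A component (here: vector scaling) with several implementation
    // variants, each tagged with the execution unit it targets.
    struct Variant {
        std::string target;                                  // e.g. "cpu", "gpu"
        std::function<void(std::vector<float>&, float)> run; // implementation
    };

    // CPU variant: plain sequential loop.
    void scale_cpu(std::vector<float>& v, float a) {
        for (float& x : v) x *= a;
    }

    // Stand-in for an accelerator variant; a real system would dispatch
    // to an OpenCL/CUDA kernel here.
    void scale_gpu_stub(std::vector<float>& v, float a) {
        scale_cpu(v, a); // fallback so the sketch stays self-contained
    }

    // Toy "runtime" decision: prefer the GPU variant for large inputs,
    // the CPU variant otherwise (a fixed threshold, purely for illustration).
    const Variant& choose_variant(const std::vector<Variant>& vs, size_t n) {
        const std::string want = (n > (1u << 20)) ? "gpu" : "cpu";
        for (const Variant& v : vs)
            if (v.target == want) return v;
        return vs.front();
    }

    int main() {
        std::vector<Variant> variants = {
            {"cpu", scale_cpu},
            {"gpu", scale_gpu_stub},
        };
        std::vector<float> data(1000, 1.0f);
        const Variant& v = choose_variant(variants, data.size());
        v.run(data, 2.0f);
        std::cout << "used " << v.target
                  << " variant, data[0] = " << data[0] << "\n";
    }

The point of the sketch is the separation of concerns the paper argues for: the application declares what variants exist and the runtime decides which one to execute where, which is what enables performance portability across different target architectures.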

Keywords

Runtime System · Performance Portability · Target Architecture · Execution Unit · Transformation Tool

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Research Group Scientific Computing, University of Vienna, Vienna, Austria