Abstract
Existing programming models tend to tightly interleave algorithm and optimization concerns in HPC simulation codes. This forces scientists to become experts in both the simulated domain and the optimization process, and it makes the code difficult to maintain and to port to new architectures. In this paper, we propose the InKS programming model, which decouples these concerns with two distinct languages: InKS_pia to express the simulation algorithm and InKS_pso for optimizations. We define InKS_pia and evaluate the feasibility of defining InKS_pso with three test languages: InKS_o/C++, InKS_o/loop and InKS_o/XMP. We evaluate the approach on synthetic benchmarks (NAS and the heat equation) as well as on a more complex example (a 6D Vlasov–Poisson solver). Our evaluation demonstrates the soundness of the approach: it improves the separation of algorithmic and optimization concerns at no performance cost. We also identify a set of guidelines for a later, complete definition of the InKS_pso language.
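To make the separation of concerns concrete, the sketch below illustrates the underlying idea in plain C++ rather than in InKS syntax: the algorithmic side states what each value of a 1D heat-equation stencil is, written once, while two interchangeable drivers embody different optimization choices (sequential vs. OpenMP-parallel traversal). This is only an illustration under assumed names (heat_update, step_sequential, step_openmp); it is not InKS_pia or InKS_pso code, which are separate languages described in the paper.

```cpp
// Illustrative sketch only: NOT InKS syntax. It mimics the algorithm/optimization
// split in plain C++ with hypothetical function names.
#include <cstddef>
#include <vector>

// "Algorithm" side: the value of point i at the next step, independent of any
// execution order or parallelization choice.
inline double heat_update(const std::vector<double>& u, std::size_t i) {
    return 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1];
}

// "Optimization" side, variant 1: simple sequential traversal.
// Assumes v.size() == u.size(); boundary points are left untouched.
void step_sequential(const std::vector<double>& u, std::vector<double>& v) {
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
        v[i] = heat_update(u, i);
}

// "Optimization" side, variant 2: same algorithm, OpenMP-parallel traversal.
// The pragma is ignored if the code is compiled without OpenMP support.
void step_openmp(const std::vector<double>& u, std::vector<double>& v) {
    const long n = static_cast<long>(u.size());
    #pragma omp parallel for
    for (long i = 1; i + 1 < n; ++i)
        v[i] = heat_update(u, static_cast<std::size_t>(i));
}
```

Switching between the two drivers changes only the optimization choice; the stencil definition stays untouched, which is the property the InKS model aims to provide at the language level.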




Cite this article
Ejjaaouani, K., Aumage, O., Bigot, J. et al. InKS: a programming model to decouple algorithm from optimization in HPC codes. J Supercomput 76, 4666–4681 (2020). https://doi.org/10.1007/s11227-019-02950-2
Keywords
- Programming model
- Separation of concerns
- HPC
- DSL