International Journal of Parallel Programming, Volume 43, Issue 5, pp. 939–960

TuCCompi: A Multi-layer Model for Distributed Heterogeneous Computing with Tuning Capabilities

  • Hector Ortega-Arranz
  • Yuri Torres
  • Arturo Gonzalez-Escribano
  • Diego R. Llanos

Abstract

During the last decade, parallel processing architectures have become a powerful tool for tackling massively-parallel problems that require high performance computing (HPC). The latest trend in HPC is the use of heterogeneous environments that combine different computational processing devices, such as CPU cores and graphics processing units (GPUs). Maximizing the performance of any GPU parallel implementation of an algorithm requires in-depth knowledge of the underlying GPU architecture, making it a tedious manual effort suited only for experienced programmers. In this paper, we present TuCCompi, a multi-layer abstract model that simplifies programming on heterogeneous systems with hardware accelerators by hiding the details of synchronization, deployment, and tuning. TuCCompi chooses optimal values for its configuration parameters using a kernel characterization provided by the programmer. The model is well suited to problems composed of independent, high computational-load tasks, such as embarrassingly-parallel problems. We have evaluated TuCCompi in different real-world heterogeneous environments, using the all-pair shortest-path problem as a case study.

Keywords

Abstract parallel model · Auto-tuning · CUDA · GPU · Heterogeneous system · HPC framework · MPI · OpenMP

References

  1. Foster, I.: Designing and building parallel programs: concepts and tools for parallel software engineering. Addison-Wesley Longman Publishing Co., Inc., Boston (1995)
  2. Hoelzle, U., Barroso, L.A.: The datacenter as a computer: an introduction to the design of warehouse-scale machines, 1st edn. Morgan and Claypool Publishers, San Rafael (2009)
  3. Cirne, W., Paranhos, D., Costa, L., Santos-Neto, E., Brasileiro, F., Sauve, J., Silva, F.A.B., Barros, C., Silveira, C.: Running bag-of-tasks applications on computational grids: the MyGrid approach. In: Proceedings of the international conference on parallel processing (ICPP 2003), pp. 407–416 (2003)
  4. Mangharam, R., Saba, A.A.: Anytime algorithms for GPU architectures. In: Proceedings of the 2011 IEEE 32nd real-time systems symposium (RTSS '11), pp. 47–56. IEEE Computer Society, Washington, DC (2011)
  5. Taylor, M.: Bitcoin and the age of bespoke silicon. In: Proceedings of the 2013 international conference on compilers, architecture and synthesis for embedded systems (CASES), pp. 1–10 (2013)
  6. Brodtkorb, A.R., Dyken, C., Hagen, T.R., Hjelmervik, J.M., Storaasli, O.O.: State-of-the-art in heterogeneous computing. Sci. Program. 18(1), 1–33 (2010)
  7. Reyes, R., de Sande, F.: Optimization strategies in different CUDA architectures using llCoMP. Microprocess. Microsyst. 36(2), 78–87 (2012)
  8. Liang, T., Li, H., Chiu, J.: Enabling mixed OpenMP/MPI programming on hybrid CPU/GPU computing architecture. In: Proceedings of the 2012 IEEE 26th international parallel and distributed processing symposium workshops & PhD forum (IPDPSW), pp. 2369–2377. IEEE, Shanghai (2012)
  9. Torres, Y., Gonzalez-Escribano, A., Llanos, D.: Using Fermi architecture knowledge to speed up CUDA and OpenCL programs. In: Proceedings of the 2012 IEEE 10th international symposium on parallel and distributed processing with applications (ISPA), pp. 617–624 (2012)
  10. Torres, Y., Gonzalez-Escribano, A., Llanos, D.R.: uBench: exposing the impact of CUDA block geometry in terms of performance. J. Supercomput. 65(3), 1150–1163 (2013)
  11. Yang, C., Huang, C., Lin, C.: Hybrid CUDA, OpenMP, and MPI parallel programming on multicore GPU clusters. Comput. Phys. Commun. 182, 266–269 (2011)
  12. Howison, M., Bethel, E., Childs, H.: Hybrid parallelism for volume rendering on large-, multi-, and many-core systems. IEEE Trans. Vis. Comput. Graph. 18(1), 17–29 (2012)
  13. Steuwer, M., Gorlatch, S.: SkelCL: enhancing OpenCL for high-level programming of multi-GPU systems. In: Malyshkin, V. (ed.) Parallel computing technologies, LNCS, pp. 258–272. Springer, Berlin (2013)
  14. Hugo, A.-E., Guermouche, A., Wacrenier, P.-A., Namyst, R.: Composing multiple StarPU applications over heterogeneous machines: a supervised approach. In: Proceedings of the IEEE 27th IPDPSW '13, pp. 1050–1059. IEEE, Washington (2013)
  15. Dastgeer, U., Enmyren, J., Kessler, C.W.: Auto-tuning SkePU: a multi-backend skeleton programming framework for multi-GPU systems. In: Proceedings of the 4th IWMSE, pp. 25–32. ACM, New York (2011)
  16. Reyes, R., López-Rodríguez, I., Fumero, J.J., de Sande, F.: accULL: an OpenACC implementation with CUDA and OpenCL support. In: Proceedings of the 18th international conference on parallel processing (Euro-Par '12), pp. 871–882. Springer, Berlin (2012)
  17. Farooqui, N., Kerr, A., Diamos, G.F., Yalamanchili, S., Schwan, K.: A framework for dynamically instrumenting GPU compute applications within GPU Ocelot. In: Proceedings of the 4th workshop on GPGPU, CA, USA, 5 Mar 2011, p. 9. ACM (2011)
  18. Pai, S., Thazhuthaveetil, M.J., Govindarajan, R.: Improving GPGPU concurrency with elastic kernels. SIGPLAN Not. 48(4), 407–418 (2013)
  19. NVIDIA: NVIDIA CUDA programming guide 6.0 (2014)
  20. Kirk, D.B., Hwu, W.W.: Programming massively parallel processors: a hands-on approach, 1st edn. Morgan Kaufmann Publishers Inc., San Francisco (2010)
  21. Dijkstra, E.W.: A note on two problems in connexion with graphs. Numerische Mathematik 1, 269–271 (1959)
  22. Crauser, A., Mehlhorn, K., Meyer, U., Sanders, P.: A parallelization of Dijkstra's shortest path algorithm. In: Brim, L., Gruska, J., Zlatuška, J. (eds.) Mathematical foundations of computer science 1998, LNCS, pp. 722–731. Springer, Berlin (1998)
  23. Ortega-Arranz, H., Torres, Y., Llanos, D.R., Gonzalez-Escribano, A.: A new GPU-based approach to the shortest path problem. In: Proceedings of the 2013 international conference on high performance computing and simulation (HPCS), pp. 505–512 (2013)
  24. Martín, P., Torres, R., Gavilanes, A.: CUDA solutions for the SSSP problem. In: Allen, G., Nabrzyski, J., Seidel, E., van Albada, G., Dongarra, J., Sloot, P. (eds.) Computational science: ICCS 2009, pp. 904–913. Springer, Berlin (2009)

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Hector Ortega-Arranz (1)
  • Yuri Torres (1)
  • Arturo Gonzalez-Escribano (1)
  • Diego R. Llanos (1)

  1. Departamento de Informática, Universidad de Valladolid, Valladolid, Spain
