Teaching Parallel Programming in Interdisciplinary Studies

  • Eduardo Cesar
  • Ana Cortés
  • Antonio Espinosa
  • Tomàs Margalef
  • Juan Carlos Moure
  • Anna Sikora
  • Remo Suppi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9523)

Abstract

Nowadays, many fields of science and engineering evolve through the joint contribution of complementary disciplines. Computer science, and especially high performance computing, has become a key factor in the development of many research fields, establishing a new paradigm called computational science. Researchers and professionals from many different fields require knowledge of high performance computing, including parallel programming, to carry out fruitful work in their particular area. For this reason, the Universitat Autònoma de Barcelona started an interdisciplinary master's degree in Modelling for Science and Engineering five years ago, providing graduates from different fields (Mathematics, Physics, Chemistry, Engineering, Geology, etc.) with in-depth knowledge of the application of modelling and simulation. In this master's programme, Parallel Programming is a compulsory subject because it is a key topic for these students. The concepts learned in Parallel Programming must then be applied to real applications, so a subject on Applied Modelling and Simulation has also been included. In this paper, the experience of teaching parallel programming in such an interdisciplinary master's programme is presented.

Keywords

Parallel programming · Message passing · Shared memory · GPUs · MPI · OpenMP · CUDA


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Eduardo Cesar (1)
  • Ana Cortés (1)
  • Antonio Espinosa (1)
  • Tomàs Margalef (1)
  • Juan Carlos Moure (1)
  • Anna Sikora (1)
  • Remo Suppi (1)

  1. Computer Architecture and Operating Systems Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain