Scaling Performance Tool MPI Communicator Management

  • Markus Geimer
  • Marc-André Hermanns
  • Christian Siebert
  • Felix Wolf
  • Brian J. N. Wylie
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6960)

Abstract

The Scalasca toolset has successfully demonstrated measurement and analysis scalability on the largest computer systems; however, applications of growing complexity place increasing demands on performance tools. One such application is the PFLOTRAN code for simulating multiphase subsurface flow and reactive transport. While PFLOTRAN itself and Scalasca runtime summarization both scale well, MPI communicator management becomes critical for trace collection with tens of thousands of processes. The re-design and re-engineering of key components of the Scalasca measurement system are presented, encompassing the representation of communicators, the tracking and unification of communicator definitions, and the translation of ranks recorded in event traces.
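The abstract names the translation of ranks recorded in event traces as one of the re-engineered components. As a point of reference for that problem, the following minimal C sketch shows the straightforward way to map a rank that is local to an arbitrary communicator back to its rank in MPI_COMM_WORLD, using only standard MPI group operations. The helper local_to_global_rank is a hypothetical name introduced here for illustration; this is not the paper's implementation, merely a baseline statement of the translation task.

```c
/* Sketch: map a rank recorded relative to some communicator to the
 * corresponding global rank in MPI_COMM_WORLD. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical helper: translate 'local_rank' in 'comm' to its
 * MPI_COMM_WORLD rank via MPI group operations. */
static int local_to_global_rank(MPI_Comm comm, int local_rank)
{
    MPI_Group comm_group, world_group;
    int global_rank;

    MPI_Comm_group(comm, &comm_group);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_translate_ranks(comm_group, 1, &local_rank,
                              world_group, &global_rank);
    MPI_Group_free(&comm_group);
    MPI_Group_free(&world_group);
    return global_rank;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split the world into two halves: rank 0 of each sub-communicator
     * corresponds to a different global rank. */
    MPI_Comm half;
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &half);

    printf("local rank 0 of my sub-communicator is global rank %d\n",
           local_to_global_rank(half, 0));

    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}
```

Per-event translation of this kind is exactly what becomes expensive with tens of thousands of processes and many communicators, which motivates the compact communicator representation and unification schemes the paper describes.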

Keywords

MPI communicators · performance measurement tools · scalability


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Markus Geimer (1)
  • Marc-André Hermanns (2)
  • Christian Siebert (2)
  • Felix Wolf (1, 2, 3)
  • Brian J. N. Wylie (1)

  1. Jülich Supercomputing Centre, Forschungszentrum Jülich, Germany
  2. German Research School for Simulation Sciences, Aachen, Germany
  3. RWTH Aachen University, Aachen, Germany
