Implementation and Usage of the PERUSE-Interface in Open MPI

  • Rainer Keller
  • George Bosilca
  • Graham Fagg
  • Michael Resch
  • Jack J. Dongarra
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4192)

Abstract

This paper describes the implementation, usage, and experience with the MPI performance revealing extension interface (Peruse) in Open MPI. While the PMPI interface allows timing MPI functions through wrappers, it cannot provide MPI-internal information on MPI states and lower-level network performance. We introduce the general design criteria of the interface implementation and analyze the overhead generated by this functionality. For the performance evaluation of large-scale applications, visualization tools are imperative; we therefore extend the tracing library of the Paraver toolkit to support tracing of Peruse events and show how this helps detect performance bottlenecks. A test suite and a real-world application are traced and visualized using Paraver.
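
The PMPI mechanism mentioned in the abstract can be sketched as follows: the MPI standard guarantees that every MPI function is also reachable under a shifted `PMPI_` name, so a profiling library can define its own `MPI_Send`, time the call, and forward to the real implementation. This is a minimal illustrative sketch, not the tooling described in the paper; it requires an MPI installation to compile, and the output format is an assumption for illustration.

```c
#include <mpi.h>
#include <stdio.h>

/* Profiling wrapper: the linker resolves the application's calls to
 * MPI_Send to this function, which measures the call duration and
 * forwards to the standard-mandated PMPI_Send entry point. */
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    double t0 = MPI_Wtime();   /* wall-clock time before the call */
    int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
    double t1 = MPI_Wtime();   /* wall-clock time after the call  */
    fprintf(stderr, "MPI_Send: %.6f s\n", t1 - t0);
    return rc;
}
```

As the abstract points out, such a wrapper only observes the time spent inside the MPI call; it cannot see MPI-internal states, such as when a message transfer actually begins or completes inside the library, which is precisely the information Peruse events expose.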

References

  1. Paraver Homepage. WWW (May 2006), http://www.cepba.upc.es/paraver
  2. Peruse specification. WWW (May 2006), http://www.mpi-peruse.org
  3. Brunst, H., Winkler, M., Nagel, W.E., Hoppe, H.-C.: Performance Optimization for Large Scale Computing: The Scalable VAMPIR Approach. In: Alexandrov, V.N., Dongarra, J., Juliano, B.A., Renner, R.S., Tan, C.J.K. (eds.) ICCS 2001. LNCS, vol. 2074, pp. 751–760. Springer, Heidelberg (2001)
  4. Gabriel, E., Fagg, G.E., Bosilca, G., Angskun, T., Dongarra, J.J., Squyres, J.M.: Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation. In: Kranzlmüller, D., Kacsuk, P., Dongarra, J. (eds.) EuroPVM/MPI 2004. LNCS, vol. 3241, pp. 97–104. Springer, Heidelberg (2004)
  5. Jost, G., Jin, H., Labarta, J., Gimenez, J., Caubet, J.: Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures. In: International Parallel and Distributed Processing Symposium (IPDPS 2003), Nice, France, April 2003, p. 80b (2003)
  6. Keller, R., Gabriel, E., Krammer, B., Müller, M.S., Resch, M.M.: Towards Efficient Execution of MPI Applications on the Grid: Porting and Optimization Issues. Journal of Grid Computing 1(2), 133–149 (2003)
  7. Message Passing Interface Forum: MPI: A Message-Passing Interface Standard (June 1995), http://www.mpi-forum.org
  8. Message Passing Interface Forum: MPI-2: Extensions to the Message-Passing Interface (July 1997), http://www.mpi-forum.org
  9. Shende, S., Malony, A.D.: The TAU Parallel Performance System (2005)
  10. Woodall, T.S., Graham, R.L., Castain, R.H., Daniel, D.J., Sukalski, M.W., Fagg, G.E., Gabriel, E., Bosilca, G., Angskun, T., Dongarra, J.J., Squyres, J.M., Sahay, V., Kambadur, P., Barrett, B., Lumsdaine, A.: Open MPI's TEG Point-to-Point Communications Methodology: Comparison to Existing Implementations. In: Kranzlmüller, D., Kacsuk, P., Dongarra, J. (eds.) EuroPVM/MPI 2004. LNCS, vol. 3241, pp. 105–111. Springer, Heidelberg (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Rainer Keller (1)
  • George Bosilca (2)
  • Graham Fagg (2)
  • Michael Resch (1)
  • Jack J. Dongarra (2)

  1. High-Performance Computing Center, University of Stuttgart
  2. Innovative Computing Laboratory, University of Tennessee