Analysis of the MPI-IO Optimization Levels with the PIOViz Jumpshot Enhancement

  • Thomas Ludwig
  • Stephan Krempel
  • Michael Kuhn
  • Julian Kunkel
  • Christian Lohse
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4757)

Abstract

MPI-IO offers various alternatives for programming file I/O, and overall program performance depends on many different factors. A new trace analysis environment provides deeper insight into client/server behavior and visualizes events of both process types. We investigate the influence of independent vs. collective calls, combined with access to contiguous and non-contiguous data regions, in our MPI-IO programs. Combined client and server traces reveal the reasons for the observed I/O performance.

Keywords

Parallel I/O · MPI-IO · Performance Visualization · Trace-Based Tools

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Thomas Ludwig (1)
  • Stephan Krempel (1)
  • Michael Kuhn (1)
  • Julian Kunkel (1)
  • Christian Lohse (1)

  1. Ruprecht-Karls-Universität Heidelberg, Im Neuenheimer Feld 348, 69120 Heidelberg, Germany
