Capturing and Visualizing Event Flow Graphs of MPI Applications

  • Karl Fürlinger
  • David Skinner
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6043)


A high-level understanding of how an application executes and which performance characteristics it exhibits is essential in many areas of high performance computing, such as application optimization, hardware development, and system procurement.

Tools are needed to help users uncover application characteristics, but current approaches are unsuited to developing a structured understanding of program execution akin to flow charts. Profiling tools are efficient in terms of overhead, but their way of recording performance data discards temporal information. Tracing preserves all temporal information, but distilling the essential high-level structures, such as initialization and iteration phases, can be challenging and cumbersome.

We present a technique that extends an existing profiling tool to capture event flow graphs of MPI applications. Event flow graphs try to strike a balance between the abundance of data contained in full traces and the concise information profiling tools can deliver with low overheads.
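To make the idea concrete, the sketch below shows one way such a per-process graph could be recorded: nodes are MPI events, and directed edges carry counts of how often one event immediately followed another. This is a hypothetical illustration of the concept, not the authors' actual tool or data structures.

```python
# Hypothetical sketch: an event flow graph records, per process, which MPI
# event follows which, with edge counts -- exactly the temporal structure
# a flat profile discards. Names and structure here are illustrative only.
from collections import defaultdict

class EventFlowGraph:
    def __init__(self):
        self.edges = defaultdict(int)   # (prev_event, next_event) -> count
        self.prev = "START"

    def record(self, event):
        """Would be called from an MPI wrapper on each intercepted call."""
        self.edges[(self.prev, event)] += 1
        self.prev = event

g = EventFlowGraph()
for _ in range(3):                      # a repeated iteration phase
    g.record("MPI_Isend")
    g.record("MPI_Recv")
    g.record("MPI_Wait")
g.record("MPI_Finalize")
```

The repeated iteration shows up as edges with count 3, whereas a plain profile would only report per-call totals with no ordering.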

We describe our technique for efficiently gathering an event flow graph for each process of an MPI application and for combining these graphs into a single application-level flow graph. We explore ways to reduce the complexity of the graphs by collapsing nodes in a step-by-step fashion and present techniques to explore flow graphs interactively.
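One step of such a node-collapsing reduction could look like the sketch below: a group of nodes (say, the body of an iteration phase) is merged into a single supernode, with incident edges redirected and summed. This is an assumed, simplified illustration, not the paper's exact algorithm.

```python
# Illustrative sketch (not the authors' exact algorithm): collapse a set of
# graph nodes into one supernode, redirecting edges and summing their counts.
from collections import defaultdict

def collapse(edges, group, label):
    """edges: {(src, dst): count}; every node in `group` becomes `label`."""
    merged = defaultdict(int)
    for (src, dst), n in edges.items():
        s = label if src in group else src
        d = label if dst in group else dst
        if s == d == label:
            continue                    # drop edges internal to the group
        merged[(s, d)] += n
    return dict(merged)

edges = {("MPI_Init", "MPI_Isend"): 1,
         ("MPI_Isend", "MPI_Recv"): 100,
         ("MPI_Recv", "MPI_Wait"): 100,
         ("MPI_Wait", "MPI_Isend"): 99,
         ("MPI_Wait", "MPI_Finalize"): 1}

small = collapse(edges, {"MPI_Isend", "MPI_Recv", "MPI_Wait"}, "iteration")
# The three-node loop body is now a single "iteration" node between
# MPI_Init and MPI_Finalize.
```

Applying this step-by-step to larger and larger groups yields progressively coarser views of the same execution, which is what makes interactive exploration of big flow graphs tractable.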


Keywords: Temporal Information · Hash Table · High Performance Computing · Iteration Phase · Workload Characterization





Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Karl Fürlinger: Computer Science Division, EECS Department, University of California at Berkeley, Berkeley, U.S.A.
  • David Skinner: Lawrence Berkeley National Lab, Berkeley
