Towards I/O analysis of HPC systems and a generic architecture to collect access patterns

  • Marc C. Wiedemann
  • Julian M. Kunkel
  • Michaela Zimmer
  • Thomas Ludwig
  • Michael Resch
  • Thomas Bönisch
  • Xuan Wang
  • Andriy Chut
  • Alvaro Aguilera
  • Wolfgang E. Nagel
  • Michael Kluge
  • Holger Mickler
Special Issue Paper

Abstract

In high-performance computing applications, a high-level I/O call triggers activities on a multitude of hardware components. Such applications run on massively parallel systems backed by huge storage systems and many internal software layers. Their complex interplay currently makes it impossible to identify the causes and locations of I/O bottlenecks. Existing tools indicate when a bottleneck occurs but provide little guidance in identifying its cause or improving the situation.

We have thus initiated the Scalable I/O for Extreme Performance (SIOX) project to address this problem. Within SIOX, we will build a system to record access information on all layers and components, to recognize access patterns, and to characterize the I/O system. The system will ultimately be able to identify the causes of I/O bottlenecks and to propose optimizations for the I/O middleware that improve I/O performance metrics such as throughput and latency. Furthermore, the SIOX system will support decision making when planning new I/O systems.

In this paper, we introduce the SIOX system and describe its current status: we first outline our approach for collecting the required access information, then present the architectural concept, the methods for reconstructing the I/O path, and an excerpt of the interface for data collection. This paper focuses especially on the architecture, which collects and combines the relevant access information along the I/O path and which is responsible for the efficient transfer of this information. An abstract modelling approach allows us to better understand the complexity of analysing I/O activities on parallel computing systems, and an abstract interface allows us to adapt the SIOX system to various HPC file systems.
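The reconstruction of the I/O path described above can be illustrated with a small sketch: each software layer reports the activities it observes, tagged with the identifier of the higher-level activity that caused them, and the collected records are then linked into a causality tree. The record fields and the `build_causality_tree` helper below are illustrative assumptions for this sketch, not the actual SIOX interface.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical activity record: each layer (application, MPI-IO, POSIX, ...)
# reports an activity with a unique id and the id of the causing activity
# one layer up; top-level activities have no parent.
@dataclass
class Activity:
    aid: int                 # unique activity id
    layer: str               # reporting layer, e.g. "MPI-IO"
    call: str                # operation name, e.g. "MPI_File_write_all"
    parent: Optional[int]    # id of the causing activity, None at the top
    children: List["Activity"] = field(default_factory=list)

def build_causality_tree(records):
    """Link each activity to its cause and return the roots of the tree."""
    by_id = {r.aid: r for r in records}
    roots = []
    for r in records:
        if r.parent is None:
            roots.append(r)
        else:
            by_id[r.parent].children.append(r)
    return roots

# Example: one high-level write fans out into lower-layer activities.
records = [
    Activity(1, "Application", "write_checkpoint", None),
    Activity(2, "MPI-IO", "MPI_File_write_all", 1),
    Activity(3, "POSIX", "pwrite", 2),
    Activity(4, "POSIX", "pwrite", 2),
]
roots = build_causality_tree(records)
```

Walking such a tree from a root down to the leaves reproduces the I/O path of one high-level call, which is the information the architecture needs in order to attribute observed bottlenecks to their causes.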

Keywords

I/O analysis, I/O path, Causality tree


Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  • Marc C. Wiedemann (1, 2)
  • Julian M. Kunkel (2)
  • Michaela Zimmer (2)
  • Thomas Ludwig (2)
  • Michael Resch (3)
  • Thomas Bönisch (3)
  • Xuan Wang (3)
  • Andriy Chut (3)
  • Alvaro Aguilera (4)
  • Wolfgang E. Nagel (4)
  • Michael Kluge (4)
  • Holger Mickler (4)
  1. Hamburg, Germany
  2. Universität Hamburg—Deutsches Klimarechenzentrum GmbH, Hamburg, Germany
  3. High Performance Computing Center Stuttgart (HLRS), Universität Stuttgart, Stuttgart, Germany
  4. Zentrum für Informationsdienste und Hochleistungsrechnen, Technische Universität Dresden, Dresden, Germany