The HOPSA Workflow and Tools

  • Bernd Mohr
  • Vladimir Voevodin
  • Judit Giménez
  • Erik Hagersten
  • Andreas Knüpfer
  • Dmitry A. Nikitenko
  • Mats Nilsson
  • Harald Servat
  • Aamer Shah
  • Frank Winkler
  • Felix Wolf
  • Ilya Zhukov
Conference paper

Abstract

To maximise the scientific output of a high-performance computing system, different stakeholders pursue different strategies. While individual application developers are trying to shorten the time to solution by optimising their codes, system administrators are tuning the configuration of the overall system to increase its throughput. Yet, the complexity of today’s machines with their strong interrelationship between application and system performance presents serious challenges to achieving these goals. The HOPSA project (HOlistic Performance System Analysis) therefore sets out to create an integrated diagnostic infrastructure for combined application and system-level tuning – with the former provided by the EU and the latter by the Russian project partners. Starting from system-wide basic performance screening of individual jobs, an automated workflow routes findings on potential bottlenecks either to application developers or system administrators with recommendations on how to identify their root cause using more powerful diagnostic tools. Developers can choose from a variety of mature performance-analysis tools developed by our consortium. Within this project, the tools will be further integrated and enhanced with respect to scalability, depth of analysis, and support for asynchronous tasking, a node-level paradigm playing an increasingly important role in hybrid programs on emerging hierarchical and heterogeneous systems.
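The routing step described above can be sketched in code. The following Python fragment is a purely illustrative sketch of how an automated workflow might classify screening findings and direct them to the right stakeholder; all function names, metric names, and thresholds are assumptions made for illustration, not HOPSA's actual rules or interfaces.

```python
# Hypothetical sketch of a HOPSA-style routing step. Metric names and
# thresholds below are illustrative assumptions, not the project's rules.

# Assumed per-metric alert thresholds, split by responsible stakeholder.
APP_METRICS = {"mpi_wait_fraction": 0.25, "cache_miss_rate": 0.10}
SYS_METRICS = {"node_load_imbalance": 0.30, "io_stall_fraction": 0.20}

def route_findings(job_metrics):
    """Classify one job's screening metrics and route each potential
    bottleneck either to application developers or to administrators,
    together with a recommendation for deeper analysis."""
    recommendations = []
    for metric, value in job_metrics.items():
        if metric in APP_METRICS and value > APP_METRICS[metric]:
            recommendations.append(
                ("developer", metric,
                 "analyse root cause with an application-level tool"))
        elif metric in SYS_METRICS and value > SYS_METRICS[metric]:
            recommendations.append(
                ("administrator", metric,
                 "inspect node and system configuration"))
    return recommendations
```

For example, a job whose screening shows `{"mpi_wait_fraction": 0.4}` would yield a single recommendation addressed to the developer, while `{"io_stall_fraction": 0.5}` would be routed to the administrator.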


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Bernd Mohr (1)
  • Vladimir Voevodin (2)
  • Judit Giménez (3)
  • Erik Hagersten (5)
  • Andreas Knüpfer (6)
  • Dmitry A. Nikitenko (2)
  • Mats Nilsson (5)
  • Harald Servat (3)
  • Aamer Shah (4)
  • Frank Winkler (6)
  • Felix Wolf (4)
  • Ilya Zhukov (1)
  1. Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Jülich, Germany
  2. Moscow State University, RCC, Moscow, Russia
  3. Barcelona Supercomputing Centre, Barcelona, Spain
  4. German Research School for Simulation Sciences GmbH / RWTH Aachen University, Aachen, Germany
  5. Rogue Wave Software AB, Uppsala, Sweden
  6. Technical University Dresden, Dresden, Germany
