
International Journal of Parallel Programming, Volume 31, Issue 1, pp. 35–53

Performance Analysis Integration in the Uintah Software Development Cycle

  • J. Davison de St. Germain
  • Alan Morris
  • Steven G. Parker
  • Allen D. Malony
  • Sameer Shende

Abstract

The increasing complexity of high-performance computing environments and programming methodologies presents challenges for empirical performance evaluation. Evolving parallel and distributed systems require performance technology that can be flexibly configured to observe different events and the associated performance data of interest. It must also be possible to integrate performance evaluation techniques with the programming paradigms and software engineering methods in use. This is particularly important for tracking performance on parallel software projects involving many code teams across many stages of development. This paper describes the integration of the TAU and XPARE tools in the Uintah Computational Framework (UCF). We discuss the use of performance mapping techniques to associate low-level performance data with higher levels of abstraction in the UCF, and the use of performance regression testing to provide a historical portfolio of the evolution of application performance. A scalability study shows the benefits of integrating performance technology into the development of large-scale parallel applications.

Keywords: parallel performance; software; applications
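
As a concrete illustration of the source-level instrumentation the paper discusses, the sketch below shows how TAU's C++ measurement API can time a task-structured code. The class and method names are hypothetical stand-ins for UCF task code; only the standard TAU_PROFILE macro is assumed.

    #include <TAU.h>  // TAU measurement API; requires a TAU-instrumented build

    // Hypothetical UCF-style task, illustrative only. TAU_PROFILE creates a
    // scoped timer: it starts when the macro executes and stops when the
    // enclosing scope exits, attributing the elapsed time to the named region.
    class DiffusionTask {
    public:
      void execute() {
        TAU_PROFILE("DiffusionTask::execute", "void ()", TAU_USER);
        computeFlux();
      }

    private:
      void computeFlux() {
        TAU_PROFILE("DiffusionTask::computeFlux", "void ()", TAU_USER);
        // ... numerical kernel measured under this timer ...
      }
    };

Profiles collected this way are keyed to task-level names rather than raw function addresses; this is the kind of data that performance mapping lifts to higher levels of abstraction and that regression testing can compare across code revisions.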



Copyright information

© Plenum Publishing Corporation 2003

Authors and Affiliations

  • J. Davison de St. Germain (1)
  • Alan Morris (1)
  • Steven G. Parker (1)
  • Allen D. Malony (2)
  • Sameer Shende (2)

  1. School of Computing, University of Utah, Salt Lake City
  2. Department of Computer and Information Science, University of Oregon, Eugene
