Software Development in the Grid: The DAMIEN Tool-Set

  • Edgar Gabriel
  • Rainer Keller
  • Peggy Lindner
  • Matthias S. Müller
  • Michael M. Resch
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2657)


The development of applications for Grid environments currently lacks the kind of tool support that end-users know from their regular working environments. This paper analyzes the requirements for developing, porting, and optimizing scientific applications for Grid environments. It then presents a toolbox, designed and implemented within the DAMIEN project, that closes some of these gaps and supports the end-user both during application development and in the application's day-to-day use in Grid environments.


Keywords: High Performance Computing, Communication Library, Parallel Virtual Machine, Medium Size Problem, Heterogeneous Grid



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Edgar Gabriel (1, 2)
  • Rainer Keller (1)
  • Peggy Lindner (1)
  • Matthias S. Müller (1)
  • Michael M. Resch (1)
  1. High Performance Computing Center Stuttgart, Stuttgart, Germany
  2. Innovative Computing Laboratories, Computer Science Department, University of Tennessee, Knoxville, USA
