
Efficient Distributed File I/O for Visualization in Grid Environments

  • Werner Benger
  • Hans-Christian Hege
  • André Merzky
  • Thomas Radke
  • Edward Seidel
Part of the Lecture Notes in Computational Science and Engineering book series (LNCSE, volume 13)

Abstract

Large-scale simulations running in metacomputing environments face the problem of efficient file I/O. For efficiency it is desirable to write data locally, distributed across the computing environment, and then to minimize data transfer, that is, to reduce remote file access. Both aspects require I/O approaches that differ from existing paradigms. For the data output of distributed simulations, one wants to use fast local parallel I/O on all participating nodes, producing a single distributed logical file, while keeping changes to the simulation code as small as possible. For reading the data file, as in postprocessing and file-based visualization, one wants efficient partial access to remote and distributed files, using a global naming scheme and efficient data caching, again with only small changes to the postprocessing code. However, the available software solutions either require all data to be staged locally (possibly involving data recombination and conversion) or suffer from the performance problems of remote or distributed file systems. In this paper we show how to interface the HDF5 I/O library, via its flexible Virtual File Driver layer, to the Globus Data Grid. Combining these two toolkits in a suitable way yields a new I/O framework that allows efficient, secure, distributed, and parallel file I/O in a metacomputing environment.
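The abstract describes plugging a Globus-backed driver into HDF5 through its Virtual File Driver (VFD) layer, so that application code keeps issuing ordinary HDF5 calls while the driver handles remote, distributed access. The C sketch below illustrates only the standard VFD selection mechanism (H5Pcreate, H5Pset_driver, H5Fopen); the initializer H5FD_gass_init() and the remote URL are hypothetical stand-ins for the driver that the paper itself develops.

```c
#include <stdio.h>
#include <hdf5.h>

/* Hypothetical initializer for a GASS-backed virtual file driver.
 * Such a driver is what the paper describes; it is not part of a stock
 * HDF5 installation.  It is assumed to fill in an H5FD_class_t with the
 * remote read/write callbacks and register it via H5FDregister(). */
extern hid_t H5FD_gass_init(void);

int main(void)
{
    /* The driver is selected through a file-access property list;
     * H5Pcreate and H5Pset_driver are standard HDF5 calls. */
    hid_t fapl   = H5Pcreate(H5P_FILE_ACCESS);
    hid_t driver = H5FD_gass_init();              /* hypothetical */
    H5Pset_driver(fapl, driver, NULL);

    /* Illustrative remote URL -- the actual naming scheme is defined by
     * the driver and the Globus Data Grid, not by HDF5 itself. */
    hid_t file = H5Fopen("x-gass://data.host.example/run42.h5",
                         H5F_ACC_RDONLY, fapl);
    if (file < 0) {
        fprintf(stderr, "remote open failed\n");
        return 1;
    }

    /* From here on the application uses ordinary HDF5 calls; partial
     * access (hyperslab selections on remote datasets) requires no
     * changes to the postprocessing code. */

    H5Fclose(file);
    H5Pclose(fapl);
    return 0;
}
```

Because the driver is chosen only through the file-access property list, the same simulation or visualization code can switch between local parallel I/O and remote Data Grid access without modification.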

Keywords

File System · Data Grid · Grid Environment · File Access · Parallel File System

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Werner Benger (1, 2)
  • Hans-Christian Hege (1)
  • André Merzky (1)
  • Thomas Radke (2)
  • Edward Seidel (2, 3)

  1. Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB), Germany
  2. Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut, AEI), Potsdam/Golm, Germany
  3. National Center for Supercomputing Applications (NCSA), Champaign, USA
