Remote Parallel I/O in Grid Environments

  • Rudolf Berrendorf
  • Marc-André Hermanns
  • Jan Seidel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3911)

Abstract

Although processor speed, memory bandwidth, and the capacity of persistent storage devices have evolved rapidly in recent years, the bandwidth between memory and persistent storage has not kept pace. As current scientific applications tend to process enormous amounts of data at runtime, access to a local disk can become the main performance bottleneck.

The communication infrastructure for local area networks has also evolved rapidly, so modern file systems for supercomputing use storage area network (SAN) solutions to distribute the load of application I/O across several special-purpose I/O nodes. These SAN solutions, however, are often bound to a specific organizational structure, such as different locations of the same company or several institutes of a university. This often implies a common user base and shared accounting information at each site. In a highly heterogeneous grid environment, these demands can be hard to meet.

This paper describes the definition of two ADIO devices for ROMIO, a publicly available MPI-IO implementation, that provide transparent access to remote parallel I/O and, additionally, to remote I/O on files held in the main memory of a remote server. The architecture of these devices allows remote servers to be defined on a per-job basis and configured by the user before runtime.
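
ROMIO's ADIO layer dispatches each file operation to a concrete device implementation, conventionally selected by a prefix in the file name passed to MPI_File_open (the ROMIO users guide documents prefixes such as "ufs:" and "nfs:"). The following minimal sketch shows how an application might address such a remote device and pass per-job configuration; the "memfs:" prefix, the "remote_host" hint key, and the server name are hypothetical placeholders for illustration, not the interface defined in this paper.

    /*
     * Minimal sketch (C, MPI): selecting a ROMIO ADIO device for
     * remote I/O. The "memfs:" prefix, the "remote_host" hint key,
     * and the host name are hypothetical placeholders.
     */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info;
        char buf[] = "hello, remote i/o";

        MPI_Init(&argc, &argv);

        /* Per-job configuration could be passed as MPI_Info hints
           before the file is opened (hint key is an assumption). */
        MPI_Info_create(&info);
        MPI_Info_set(&info, "remote_host", "io-server.example.org");

        /* The file-name prefix routes the open call to the chosen
           ADIO device; the rest of the string is the remote path. */
        MPI_File_open(MPI_COMM_WORLD, "memfs:/scratch/data.out",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

        MPI_File_write(fh, buf, (int)sizeof(buf), MPI_CHAR,
                       MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }

Because the device is chosen at open time, the same application code can be pointed at local or remote storage simply by changing the file name and hints, which is what makes the remote access transparent to the application.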

Keywords

Main Memory · Message Passing Interface · Grid Environment · Remote Server · Persistent Storage



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Rudolf Berrendorf (1)
  • Marc-André Hermanns (2)
  • Jan Seidel (1)

  1. Department of Computer Science, University of Applied Sciences Bonn-Rhein-Sieg, St. Augustin, Germany
  2. Central Institute for Applied Mathematics, Research Centre Jülich, Jülich, Germany
