Flexible I/O Support for Reconfigurable Grid Environments

  • Marc-André Hermanns
  • Rudolf Berrendorf
  • Marcel Birkner
  • Jan Seidel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4128)


With the growing computational power of current supercomputers, scientific computing applications can work on larger problems. The corresponding increase in dataset size is often accompanied by an increase in the storage needed for results. Current storage area networks (SANs) balance I/O load across multiple disks using high-speed networks, but they are integrated at the operating-system level, demanding administrative intervention whenever the usage topology changes. While this is practical for single sites or fairly static grid environments, it is hard to extend to a user-defined per-job basis. Reconfigurable grid environments, where computing and storage resources are coupled on a per-job basis, need a more flexible approach to parallel I/O on remote locations.

This paper gives a detailed overview of the transparent remote access provided by TUNNELFS, a part of the VIOLA parallel I/O project. We show how TUNNELFS manages flexible and transparent access to remote I/O resources in a reconfigurable grid environment, supporting the definition of the amount and location of persistent storage services on a per-job basis.
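Since the paper targets the MPI-IO interface, the transparent access it describes would surface to an application roughly as an ordinary MPI file operation. The following minimal sketch (not taken from the paper) illustrates the idea of addressing a remote file through an MPI-IO device prefix and an info hint; the "tunnelfs:" prefix, the hint key "tunnelfs_server", and the host name are illustrative assumptions, not the project's documented interface.

/* Sketch: writing to a remote file via a hypothetical MPI-IO device
 * prefix, in the style of ROMIO's file-system prefixes. The prefix,
 * hint key, and server name below are assumptions for illustration. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Info info;
    int rank;
    char buf[] = "result data";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Info_create(&info);
    /* Hypothetical hint selecting the remote I/O server for this job. */
    MPI_Info_set(info, "tunnelfs_server", "io-node.example.org");

    /* The prefix routes the open to the remote-I/O device instead of a
     * locally mounted file system; no SAN reconfiguration is needed. */
    MPI_File_open(MPI_COMM_WORLD, "tunnelfs:/scratch/results.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* Each process writes its block at a rank-dependent offset. */
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(buf), buf,
                      (int)sizeof(buf), MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}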







Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Marc-André Hermanns (1)
  • Rudolf Berrendorf (2)
  • Marcel Birkner (2)
  • Jan Seidel (2)
  1. Central Institute for Applied Mathematics, Research Centre Jülich, Jülich, Germany
  2. Department of Computer Science, University of Applied Sciences Bonn-Rhein-Sieg, St. Augustin, Germany
