
High-Bandwidth Remote Parallel I/O with the Distributed Memory Filesystem MEMFS

  • Jan Seidel
  • Rudolf Berrendorf
  • Marcel Birkner
  • Marc-André Hermanns
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4192)

Abstract

The enormous advance in the computational power of supercomputers enables scientific applications to process problems of increasing size. This is often correlated with an increasing amount of data stored in (parallel) filesystems. As the increase in bandwidth of common disk-based I/O devices cannot keep up with the evolution of computational power, access to this data becomes the bottleneck in many applications. MEMFS takes the approach of distributing I/O data among multiple dedicated remote servers at user level. It stores files in the accumulated main memory of these I/O nodes and is able to deliver this data with high bandwidth.
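To make the idea concrete, the following minimal C sketch shows one way such a distribution could work: file data is split into fixed-size stripes placed round-robin over the I/O servers, so accesses can be served from several servers' memory in parallel. The server count, stripe size, and function names are illustrative assumptions, not the MEMFS implementation.

    /* Hypothetical sketch: mapping a global file offset onto one of
     * NUM_SERVERS memory servers with round-robin striping. All
     * constants and names are assumptions for illustration only. */
    #include <stdio.h>
    #include <stdint.h>

    #define NUM_SERVERS 4                 /* assumed number of I/O nodes */
    #define STRIPE_SIZE (64 * 1024)       /* assumed stripe size (bytes) */

    /* Compute which server holds a byte and at which local offset. */
    static void map_offset(uint64_t global, int *server, uint64_t *local)
    {
        uint64_t stripe = global / STRIPE_SIZE;          /* stripe index */
        *server = (int)(stripe % NUM_SERVERS);           /* round-robin  */
        *local  = (stripe / NUM_SERVERS) * STRIPE_SIZE   /* full stripes */
                + global % STRIPE_SIZE;                  /* intra-stripe */
    }

    int main(void)
    {
        for (uint64_t ofs = 0; ofs < 6 * STRIPE_SIZE; ofs += STRIPE_SIZE) {
            int s; uint64_t l;
            map_offset(ofs, &s, &l);
            printf("offset %8llu -> server %d, local %llu\n",
                   (unsigned long long)ofs, s, (unsigned long long)l);
        }
        return 0;
    }

With four servers, consecutive 64 KiB stripes land on servers 0, 1, 2, 3, 0, ..., so a large sequential read draws on the memory bandwidth of all servers at once.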

We describe how MEMFS manages a memory-based distributed filesystem, how it distributes data among the participating I/O servers, and how it assigns servers to application clients. Results are given for its use in a grid project with high-bandwidth WAN connections.
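As a companion sketch, here is one simple way application clients could be assigned to I/O servers: clients are divided into contiguous blocks of roughly equal size, one block per server. This block scheme is an assumption for illustration; the paper itself details MEMFS's actual assignment strategy.

    /* Hypothetical sketch: spreading num_clients application clients
     * over num_servers I/O servers in contiguous blocks. Illustrative
     * only; not necessarily the assignment scheme used by MEMFS. */
    #include <stdio.h>

    static int assign_server(int client_rank, int num_clients, int num_servers)
    {
        int per_server = (num_clients + num_servers - 1) / num_servers;
        return client_rank / per_server;   /* neighbouring clients share */
    }

    int main(void)
    {
        const int clients = 8, servers = 3;
        for (int c = 0; c < clients; c++)
            printf("client %d -> server %d\n",
                   c, assign_server(c, clients, servers));
        return 0;
    }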

Keywords

Parallel I/O · Memory filesystem



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jan Seidel (1)
  • Rudolf Berrendorf (1)
  • Marcel Birkner (1)
  • Marc-André Hermanns (2)

  1. Department of Computer Science, University of Applied Sciences Bonn-Rhein-Sieg, St. Augustin, Germany
  2. Central Institute for Applied Mathematics, Research Centre Jülich, Jülich, Germany
