The Journal of Supercomputing, Volume 47, Issue 1, pp 53–75

A collective I/O implementation based on inspector–executor paradigm

  • David E. Singh
  • Florin Isaila
  • Juan C. Pichel
  • Jesús Carretero

Abstract

In this paper, we present a novel multiple-phase collective I/O technique for generic block-cyclic distributions. The technique is divided into two stages: inspector and executor. During the inspector stage, the communication pattern is computed and the required datatypes are automatically generated. This information is then used during the executor stage to perform the communication and file accesses. The two stages are decoupled, so that for repetitive file access patterns the computations of the inspector stage can be performed once and reused several times by the executor. This strategy allows the inspector cost to be amortized over several I/O operations. In this paper, we evaluate the performance of the multiple-phase collective I/O technique and compare it with other state-of-the-art approaches. Experimental results show that, for small access granularities, our method outperforms other parallel I/O optimization techniques in the large majority of cases.
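
As a concrete illustration of the two decoupled stages described above, the following is a minimal sketch, not the authors' implementation, of an inspector–executor split built from standard MPI-IO calls: the inspector derives an MPI datatype describing one rank's pieces of a 1-D block-cyclic distribution and caches it as a plan; the executor reuses that plan for every collective write. The plan structure, function names, sizes, and file name are illustrative assumptions.

```c
/*
 * A minimal sketch of the inspector-executor split for collective I/O,
 * using standard MPI-IO calls. NOT the authors' implementation: the
 * plan structure, names, sizes, and file name are illustrative
 * assumptions for a 1-D block-cyclic distribution.
 */
#include <mpi.h>
#include <stdlib.h>

typedef struct {             /* "plan" computed once by the inspector */
    MPI_Datatype filetype;   /* file view matching the distribution   */
    MPI_Offset   disp;       /* this rank's displacement in the file  */
    MPI_Offset   step_bytes; /* bytes per repetition of the pattern   */
} io_plan_t;

/* Inspector: derive the datatype describing this rank's block-cyclic
 * pieces of the global array, and cache it for later reuse. */
static void inspector(int rank, int nprocs, int block, int nblocks,
                      io_plan_t *p)
{
    MPI_Type_vector(nblocks, block, nprocs * block, MPI_INT, &p->filetype);
    MPI_Type_commit(&p->filetype);
    p->disp = (MPI_Offset)rank * block * sizeof(int);
    p->step_bytes = (MPI_Offset)nprocs * block * nblocks * sizeof(int);
}

/* Executor: reuse the cached plan for each collective write; only the
 * displacement moves between repetitions (e.g., time steps). */
static void executor(MPI_File fh, const io_plan_t *p, int step,
                     const int *buf, int count)
{
    MPI_File_set_view(fh, p->disp + step * p->step_bytes, MPI_INT,
                      p->filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, buf, count, MPI_INT, MPI_STATUS_IGNORE);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int block = 4, nblocks = 8;        /* illustrative sizes      */
    const int n = block * nblocks;           /* local elements per step */
    int *buf = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) buf[i] = rank;

    io_plan_t plan;
    inspector(rank, nprocs, block, nblocks, &plan);  /* pay cost once... */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    for (int step = 0; step < 3; step++)             /* ...reuse many times */
        executor(fh, &plan, step, buf, n);
    MPI_File_close(&fh);

    MPI_Type_free(&plan.filetype);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

The sketch captures only the datatype-generation and caching aspect: in the paper's scheme, the inspector additionally computes the inter-process communication pattern used to reorganize data among processes before the file access.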

Keywords

Parallel computing · Parallel file systems · Performance evaluation · Parallel I/O · Parallel programming


Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • David E. Singh
  • Florin Isaila
  • Juan C. Pichel
  • Jesús Carretero

  1. Computer Science Department, Universidad Carlos III de Madrid, Leganés, Spain
