
Semi-automatic Code Modernization for Optimal Parallel I/O

  • Ritu Arora
  • Trung Nguyen Ba
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 964)

Abstract

As we progress from the multi-petascale era to the exascale era, performing parallel I/O optimally will become increasingly important, not only for the performance of individual applications but also for the health of the underlying systems on which those applications run. Manually reengineering existing applications to do I/O optimally can be challenging for many parallel programmers despite the availability of the MPI I/O interface and of high-level APIs and libraries such as parallel HDF5 and parallel NetCDF. In this paper, we present an interactive, high-level tool for reengineering existing applications so that they perform parallel I/O optimally. The tool frees developers from the effort of manually reengineering their applications to take advantage of the MPI libraries and the Lustre filesystem, and hence enhances their productivity. Parallelizing I/O with our tool does not involve any external high-level I/O library, is purely MPI based, makes optimal use of the Lustre filesystem when it is available, and results in code that is portable across systems supporting MPI libraries. For the selected test cases, the performance of the code generated with our tool is comparable to that of the best available manually written versions.
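
To make the abstract's claim of "purely MPI based" I/O concrete, the sketch below shows the style of collective MPI-IO code such a tool targets: each rank writes its block of a distributed array to a shared file with a collective write, and Lustre striping hints are passed through an MPI_Info object. This example is not taken from the paper; the file name, buffer size, and stripe settings are illustrative assumptions only.

    /* Minimal sketch (illustrative, not from the paper): collective MPI-IO
     * write of a block-distributed array, with Lustre striping hints. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const int n = 1 << 20;                     /* doubles owned by each rank */
        double *buf = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++) buf[i] = rank + i * 1e-6;

        /* ROMIO hints that map to Lustre stripe settings (values are assumptions). */
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "8");     /* stripe count */
        MPI_Info_set(info, "striping_unit", "1048576"); /* 1 MiB stripe size */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

        /* Each rank writes at its own byte offset; the call is collective. */
        MPI_Offset offset = (MPI_Offset)rank * n * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, n, MPI_DOUBLE, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Info_free(&info);
        free(buf);
        MPI_Finalize();
        return 0;
    }

Writing collectively to a single shared file, rather than one file per process, is what allows the MPI-IO layer to aggregate requests and align them with the filesystem's stripes; the stripe hints shown here are only honored when the file lives on a Lustre filesystem.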


Acknowledgement

The work presented in this paper was made possible through National Science Foundation (NSF) award number 1642396. We are very grateful to NSF for this support.


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Texas Advanced Computing Center, University of Texas at Austin, Austin, USA
  2. University of Massachusetts at Amherst, Amherst, USA
