Periodic I/O Scheduling for Super-Computers

  • Guillaume Aupy
  • Ana Gainaru
  • Valentin Le Fèvre
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10724)

Abstract

With the ever-growing need for data in HPC applications, congestion at the I/O level becomes critical in super-computers. Architectural enhancements such as burst buffers and pre-fetching have been added to machines, but they are not sufficient to prevent congestion. Recent online I/O scheduling strategies have been put in place, but they add an additional congestion point and overhead to the computation of applications.

In this work, we show how to take advantage of the periodic nature of HPC applications in order to develop efficient periodic scheduling strategies for their I/O transfers. Our strategy computes, once during the job scheduling phase, a pattern that defines the I/O behavior of each application; the applications then run independently, transferring their I/O at the specified times. Our strategy limits the amount of I/O congestion at the I/O node level and can be easily integrated into current job schedulers. We validate this model through extensive simulations and experiments, comparing it to state-of-the-art online solutions.
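To make the idea concrete, here is a minimal illustrative sketch (not the paper's actual algorithm) of what a statically computed periodic pattern can look like: each application is assigned a dedicated I/O window inside a shared period, so the I/O nodes serve at most one application at a time and congestion is avoided by construction. The application names, volumes, and bandwidth below are invented for illustration.

```python
# Sketch of a static periodic I/O pattern: serialize each application's
# I/O phase within one period so the shared bandwidth B is never
# oversubscribed. This is a simplification of the paper's approach.

def build_pattern(apps, bandwidth):
    """apps: list of (name, io_volume) pairs sharing one pattern period.

    Returns (windows, period): back-to-back (name, start, end) I/O windows
    and the minimum period length for congestion-free I/O.
    """
    windows, t = [], 0.0
    for name, volume in apps:
        duration = volume / bandwidth  # time to flush this app's I/O at full B
        windows.append((name, t, t + duration))
        t += duration
    return windows, t

windows, period = build_pattern([("A", 100.0), ("B", 50.0)], bandwidth=10.0)
# windows == [("A", 0.0, 10.0), ("B", 10.0, 15.0)], period == 15.0
```

Because the pattern is computed once, at job-scheduling time, each application only needs its own (start, end) offsets within the period; no runtime coordination with a central scheduler is required, which is what makes the approach decentralized.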

Specifically, we show that not only does our scheduler have the advantage of being decentralized, thus avoiding the overhead of online schedulers, but also that on Mira one can expect an average dilation improvement of 22% with an average throughput improvement of 32%. Finally, we show that these improvements should grow on the next generation of platforms, where the imbalance between compute and I/O bandwidth increases.

Acknowledgement

This work was supported in part by the ANR Dash project. Part of this work was done while Guillaume Aupy and Valentin Le Fèvre were at Vanderbilt University. The authors would like to thank Anne Benoit and Yves Robert for helpful discussions.


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Guillaume Aupy (1)
  • Ana Gainaru (2)
  • Valentin Le Fèvre (1, 3)
  1. Inria and University of Bordeaux, Talence, France
  2. Vanderbilt University, Nashville, USA
  3. École Normale Supérieure de Lyon, Lyon, France