
Argo

  • Swann Perarnau
  • Brian C. Van Essen
  • Roberto Gioiosa
  • Kamil Iskra
  • Maya B. Gokhale
  • Kazutomo Yoshii
  • Pete Beckman
Chapter
Part of the High-Performance Computing Series book series (HPC, volume 1)

Abstract

Argo is an ongoing project to improve Linux for exascale machines. Targeting emerging production workloads such as workflows and coupled codes, we focus on providing missing features and building new resource management facilities. Our work is unified under compute containers, a containerization approach aimed at giving modern HPC applications dynamic control over a wide range of kernel interfaces.
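To make the compute-container idea concrete, the following is a minimal, hypothetical sketch in C. It is not the Argo NodeOS implementation; it only illustrates the kind of kernel interface such containers build on, using the stock Linux cgroup v1 cpuset controller to partition a node: create a cgroup, restrict it to a set of cores and a NUMA node, and move the calling process into it. The cgroup path and the CPU/memory values are assumptions chosen for the example.

/*
 * Illustrative sketch only -- not the Argo NodeOS code. Requires root and a
 * cpuset hierarchy mounted at /sys/fs/cgroup/cpuset (cgroup v1).
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Write a value into a cgroup control file, exiting on failure. */
static void write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f || fputs(value, f) == EOF) {
        perror(path);
        exit(EXIT_FAILURE);
    }
    fclose(f);
}

int main(void)
{
    const char *cg = "/sys/fs/cgroup/cpuset/compute_container";
    char path[256], pid[32];

    /* Create the container's cpuset cgroup (ignore it if it already exists). */
    if (mkdir(cg, 0755) != 0 && errno != EEXIST) {
        perror("mkdir");
        return EXIT_FAILURE;
    }

    /* Confine the container to cores 1-7 and NUMA node 0 (example values),
     * leaving core 0 free for system services. */
    snprintf(path, sizeof(path), "%s/cpuset.cpus", cg);
    write_file(path, "1-7");
    snprintf(path, sizeof(path), "%s/cpuset.mems", cg);
    write_file(path, "0");

    /* Move the calling process into the container; children inherit the
     * placement. */
    snprintf(path, sizeof(path), "%s/tasks", cg);
    snprintf(pid, sizeof(pid), "%d", (int)getpid());
    write_file(path, pid);

    /* An application launched from here (e.g. via exec) would now run with
     * the CPU and memory placement selected above. */
    return EXIT_SUCCESS;
}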


Acknowledgements

Results presented in this chapter were obtained using the Chameleon testbed supported by the National Science Foundation. Argonne National Laboratory's work was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Swann Perarnau (1), corresponding author
  • Brian C. Van Essen (2)
  • Roberto Gioiosa (3)
  • Kamil Iskra (1)
  • Maya B. Gokhale (2)
  • Kazutomo Yoshii (1)
  • Pete Beckman (1)

  1. Argonne National Laboratory, Lemont, USA
  2. Lawrence Livermore National Laboratory, Livermore, USA
  3. Oak Ridge National Laboratory, Oak Ridge, USA
