Massively Parallel Simulations with DOE's ASCI Supercomputers: An Overview of the Los Alamos Crestone Project

  • Conference paper
Adaptive Mesh Refinement - Theory and Applications

Part of the book series: Lecture Notes in Computational Science and Engineering (LNCSE, volume 41)

Summary

The Los Alamos Crestone Project is part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative, or ASCI Program. The main goal of this software development project is to investigate the use of continuous adaptive mesh refinement (CAMR) techniques for problems of interest to the Laboratory. The Crestone Project encompasses many code development efforts, both unclassified and classified. In this overview I will discuss the unclassified SAGE and RAGE codes. SAGE (SAIC Adaptive Grid Eulerian) is a one-, two-, and three-dimensional, multimaterial, Eulerian, massively parallel hydrodynamics code for solving a variety of high-deformation flow problems. The RAGE CAMR code is built from SAGE by adding various radiation packages, improved setup utilities, and graphics packages; it is used for problems in which radiation transport of energy is important. The goal of these massively parallel versions of SAGE and RAGE is to run extremely large problems in a reasonable amount of calendar time. Our target is scalable performance to ∼10,000 processors on a one-billion-cell CAMR problem that requires hundreds of variables per cell, multiple physics packages (e.g., radiation and hydrodynamics), and implicit matrix solves each cycle. A general description of the RAGE code has been published in [1], [2], [3], and [4].
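
The cell-by-cell refine/coarsen decision that CAMR makes each cycle can be illustrated with a short sketch. The code below is not the SAGE/RAGE implementation: it is a minimal, one-dimensional Python example under assumed names (Cell, needs_refinement) and an assumed gradient threshold, showing how cells that straddle a steep feature are split while smooth regions stay coarse.

    # Minimal sketch of a cell-by-cell refinement decision, in the spirit of
    # continuous adaptive mesh refinement (CAMR): every cycle, each cell is
    # tested against a gradient criterion and either split in two or left
    # alone (the symmetric coarsening/merge step is omitted for brevity).
    # This is NOT the SAGE/RAGE implementation; names and thresholds are
    # illustrative assumptions.
    from dataclasses import dataclass
    import math

    @dataclass
    class Cell:
        x: float      # cell-center coordinate
        dx: float     # cell width
        rho: float    # a representative field value (e.g., density)
        level: int    # refinement level (0 = coarsest)

    def needs_refinement(left, cell, right, threshold=0.1, max_level=5):
        """Flag a cell whose relative jump to either neighbor exceeds threshold."""
        if cell.level >= max_level:
            return False
        jump = max(abs(cell.rho - left.rho), abs(cell.rho - right.rho))
        return jump > threshold * max(abs(cell.rho), 1e-12)

    def refine_pass(cells, rho_of_x):
        """One adaptation sweep: split each flagged cell into two daughters."""
        new_cells = []
        for i, c in enumerate(cells):
            left = cells[max(i - 1, 0)]
            right = cells[min(i + 1, len(cells) - 1)]
            if needs_refinement(left, c, right):
                for sign in (-0.25, +0.25):
                    xd = c.x + sign * c.dx
                    new_cells.append(Cell(xd, c.dx / 2, rho_of_x(xd), c.level + 1))
            else:
                new_cells.append(c)
        return new_cells

    if __name__ == "__main__":
        # A smoothed density step: refinement should cluster near x = 0.5.
        rho = lambda x: 1.0 + 9.0 / (1.0 + math.exp(-200.0 * (x - 0.5)))
        n0 = 32
        cells = [Cell((i + 0.5) / n0, 1.0 / n0, rho((i + 0.5) / n0), 0) for i in range(n0)]
        for _ in range(4):  # a few cycles of adaptation
            cells = refine_pass(cells, rho)
        print(f"{len(cells)} cells after refinement (started with {n0})")

In the production codes the analogous test is applied in one, two, or three dimensions, cells are coarsened as well as refined, and each cell carries hundreds of variables rather than a single field value.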

Currently, the largest simulations we perform are three-dimensional, using around 500 million computational cells and running for literally months of calendar time on ∼2000 processors. Current ASCI platforms range from several 3-teraOPS supercomputers to the 12-teraOPS White machine at Lawrence Livermore National Laboratory and the 20-teraOPS Q machine installed at Los Alamos. Each machine is a system of many component parts that must perform in unison for these simulations to run successfully. Key features of any massively parallel system include the processors, the disks, the interconnect between processors, the operating system, libraries for message passing and parallel I/O, and other fundamental units of the system.
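
Below is a hedged sketch of the message-passing pattern such a system must support: the mesh is decomposed across processors, each rank exchanges ghost-cell data with its neighbors every cycle, and global quantities are formed with reductions. It assumes the mpi4py and NumPy packages and illustrates the pattern only; it is not the Crestone communication layer.

    # Illustrative halo-exchange and reduction pattern (assumes mpi4py, numpy).
    # Not the SAGE/RAGE communication layer; a minimal sketch of the message
    # passing that any massively parallel mesh code depends on.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank owns a slab of a 1-D field plus one ghost cell on each side.
    n_local = 1000
    u = np.full(n_local + 2, float(rank))   # interior is u[1:-1]

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Ghost-cell exchange: send my edge values, receive the neighbors' edges.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

    # A global reduction, e.g. a total over the whole mesh.
    local_sum = u[1:-1].sum()
    total = comm.allreduce(local_sum, op=MPI.SUM)

    if rank == 0:
        print(f"{size} ranks, global sum = {total}")

Launched with, for example, mpirun -n 4 python halo_sketch.py (the script name is arbitrary), these send/receive and reduction primitives, together with parallel I/O to the disks, are what the message-passing and I/O libraries mentioned above must provide at the scale of thousands of processors.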

We will give an overview of the current status of the Crestone Project codes SAGE and RAGE. These codes are intended for general applications without tuning of algorithms or parameters. We have run a wide variety of physical applications, from millimeter-scale laboratory laser experiments, to multikilometer-scale asteroid impacts into the Pacific Ocean, to parsec-scale galaxy formation. Examples of these simulations will be shown. The goal of our effort is to avoid ad hoc models and to rely on first-principles physics. In addition to the large effort on developing parallel code physics packages, a substantial part of the project is devoted to improving the computer science and software quality engineering (SQE) of the Project codes, as well as to the verification and validation (V&V) of the resulting codes. Examples of these efforts for our project will be discussed.
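
One standard verification exercise behind such V&V work is a grid-convergence study of the kind reported in [30]: run the same problem at several resolutions and estimate the observed order of accuracy from the measured errors. The sketch below uses made-up placeholder error values, not RAGE results, and assumes a fixed refinement ratio of two between grids.

    # Observed order of accuracy from errors on successively refined grids,
    # as in a typical code-verification (convergence) study.  The error
    # values are illustrative placeholders, not RAGE data.
    import math

    def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
        """Return p such that error ~ h**p, estimated from two grid levels."""
        return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

    # Placeholder L1 errors at grid spacings h, h/2, h/4.
    errors = {1.0: 4.0e-2, 0.5: 1.1e-2, 0.25: 2.9e-3}
    hs = sorted(errors, reverse=True)
    for h_coarse, h_fine in zip(hs[:-1], hs[1:]):
        p = observed_order(errors[h_coarse], errors[h_fine], h_coarse / h_fine)
        print(f"h={h_coarse} -> h={h_fine}: observed order ~ {p:.2f}")

An observed order that approaches the scheme's formal order as the grid is refined is the kind of quantitative evidence a verification study collects; validation then compares calculations against experimental data, as in the laser-driven instability experiments cited in the references.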


References

  1. “The SAGE Code Multimaterial Equation of State Methods”, M.L. Gittings, in Numerical Methods Symposium, 28–30 April 1992; “The RAGE Code”, R.N. Byrne, T. Betlach, and M.L. Gittings, in Numerical Methods Symposium, 28–30 April 1992. Copies may be ordered from the Defense Nuclear Agency (currently the Defense Threat Reduction Agency), 56801 Telegraph Road, Alexandria, VA 22310-3398.

  2. “2D and 3D simulations of RM instability growth with RAGE: a continuous adaptive mesh refinement code”, R. Weaver, M.L. Gittings, R.M. Baltrusaitis, Q. Zhang, and S. Sohn, in Proceedings of the 21st International Symposium on Shock Waves (ISSW21), Australia, July 1997, paper 8271.

  3. “The parallel implementation of RAGE: a 3D continuous adaptive mesh refinement radiation-hydrodynamics code”, R. Weaver, M.L. Gittings, and M.L. Clover, in Proceedings of the 22nd International Symposium on Shock Waves (ISSW22), London, July 1999, paper 3560.

  4. “The Simulation of Shock-Generated Instabilities”, R.M. Baltrusaitis, M.L. Gittings, R. Weaver, R. Benjamin, and J. Budzinski, Physics of Fluids, 8(9), p. 2471, 1996 (also as LA-UR-95-4000).

  5. “Predictive Performance and Scalability Modeling of a Large-Scale Application”, Darren J. Kerbyson, Hank J. Alme, Adolfy Hoisie, Fabrizio Petrini, Harvey J. Wasserman, and Michael Gittings, in Proc. of IEEE/ACM SC2001, Denver, November 2001.

  6. “Verifying Large-Scale System Performance During Installation using Modeling”, Darren J. Kerbyson, Adolfy Hoisie, and Harvey J. Wasserman, in Hardware/Software Support for Parallel and Distributed Scientific and Engineering Computing, L.T. Yang (ed.), Kluwer, September 2003.

  7. “Code Validation Experiments — A Key to Predictive Science”, Brian Fishbine, Los Alamos Research Quarterly, Fall 2002, LALP-02-194.

  8. “Two and Three Dimensional Calculations of a NOVA Hydro Instability Experiment”, B.H. Wilde, F.J. Swenson, R.P. Weaver, M.L. Gittings, and W.J. Powers, in Proceedings of the 6th International Workshop on Compressible Turbulent Mixing, France, July 1997.

  9. “Hydrothermal Pressure Instabilities related to Magmatic Steam Injection and Reflected in Long-Period Seismicity”, B.A. Chouet, M.M. Morrissey, M.L. Gittings, and R. Weaver, EOS, Transactions, American Geophysical Union, 1997 Fall Meeting, Vol. 78, p. F764, December 1997.

  10. “ICF Targets with Joints and Gaps”, S.R. Goldman, M.D. Wilke, D.C. Wilson, S.E. Caldwell, W.W. Hsing, and R.P. Weaver, Bull. Am. Phys. Soc. 41, 1353 (1996).

  11. “Inertial-Confinement Fusion at the Los Alamos National Laboratory: Current Trends in International Fusion Research”, Erick Lindman, D. Baker, C. Barnes, B. Bauer, J.B. Beck, R. Berggren, B. Bezzerides, P. Bradley, R.E. Chrien, M. Clover, J. Cobble, C.A. Coverdale, M. Cray, N. Delamater, D. DuBois, B.H. Failor, J.C. Fernandez, L. Foreman, R. Gibson, P. Gobby, S.R. Goldman, D. Harris, A. Hauer, J. Hoffer, N. Hoffman, W.W. Hsing, R. Johnson, K. Klare, R. Kopp, W. Krauser, G. Kyrala, G. Magelssen, R. Mason, D. Montgomery, T.J. Murphy, J. Oertel, G. Pollak, H. Rose, K. Schoenberg, D.P. Smitherman, M.S. Sorem, F. Swenson, D. Tubbs, W. Varnum, H. Vu, J. Wallace, R. Watt, R. Weaver, B. Wilde, M. Wilke, D. Wilson, and W.M. Wood, in Review and Assessment, Proceedings of the 2nd Symposium, Washington, DC, Plenum Press (March 10–14, 1997).

  12. “Two and Three Dimensional RAGE Calculations of a NOVA Instability Experiment”, B.H. Wilde, F.J. Swenson, R.P. Weaver, M.L. Gittings, and W.J. Powers, presented at JOWOG 32m, Lawrence Livermore National Laboratory, August 13, 1997.

  13. “Two and Three Dimensional Calculations of a NOVA Hydro Instability Experiment”, B.H. Wilde, F.J. Swenson, R.P. Weaver, M.L. Gittings, and W.J. Powers, in Proceedings of the 6th International Workshop on Compressible Turbulent Mixing, France, July 1997.

  14. “Shock Propagation in the Nonlinear Regime”, W.W. Hsing, S. Caldwell, S.R. Goldman, and R.P. Weaver, Bull. Am. Phys. Soc. 42, 1952 (1997).

  15. “Shock Structuring Due to Fabrication Joints in ICF Targets”, S.R. Goldman, S.E. Caldwell, G.A. Kyrala, M.D. Wilke, D.C. Wilson, C.W. Barnes, W.W. Hsing, N.D. Delamater, J. Grove, J.M. Wallace, R.P. Weaver, A.M. Dunne, M.J. Edwards, P. Graham, and B.R. Thomas, Bull. Am. Phys. Soc. 43, 1667 (1998).

  16. “Simulations of Richtmyer-Meshkov Instability in Two and Three Dimensions”, Richard L. Holmes, Cindy A. Zoldi, Robert P. Weaver, Michael Gittings, and Michael Clover, in Proceedings of the 22nd International Symposium on Shock Waves (ISSW22), London, July 1999, paper 4480.

  17. “Shock Structuring Due to Fabrication Joints in Targets”, S.R. Goldman, S.E. Caldwell, M.D. Wilke, D.C. Wilson, C.W. Barnes, W.W. Hsing, N.D. Delamater, G.T. Schappert, J.W. Grove, J.M. Wallace, Robert P. Weaver, A.M. Dunne, M.J. Edwards, P. Graham, and B.R. Thomas, Physics of Plasmas, 6(8), pp. 3327–3336, 1999 (also as LA-UR-98-4250).

  18. “Richtmyer-Meshkov Instability Growth: Experiment, Simulation and Theory”, Richard L. Holmes, Guy Dimonte, Bruce Fryxell, Michael L. Gittings, John W. Grove, Marilyn Schneider, David H. Sharp, Alexander L. Velikovich, Robert P. Weaver, and Qiang Zhang, Journal of Fluid Mechanics 39, pp. 55–79 (also as LA-UR-97-2606).

  19. “Planar, Ablative Rayleigh-Taylor Instability Growth in Copper Foils”, G.T. Schappert, W.W. Hsing, D.E. Hollowell, R.P. Weaver, and B.A. Remington, 38th Annual Meeting of the Division of Plasma Physics, 11–15 November 1996, Denver, CO.

  20. “Rayleigh-Taylor Instability Growth in Copper Foils Driven by NOVA Hohlraum Radiation”, G.T. Schappert, W.W. Hsing, D.E. Hollowell, R.P. Weaver, and B.A. Remington, 37th Annual Meeting of the Division of Plasma Physics, November 6–10, 1995, Louisville, KY.

  21. “Ablative Rayleigh-Taylor Instability Modeling”, D. Hollowell, G. Schappert, S. Caldwell, W. Hsing, and R. Weaver, Anomalous Absorption Conference, Vancouver, Canada, June 1997.

  22. “2-D Rayleigh-Taylor Experiments and Modeling”, David Hollowell, Gottfried Schappert, Robert Weaver, Warren Hsing, Rodney Mason, and Steve Caldwell, LA-UR-98-1828, and 25th European Conference on Laser Interaction With Matter, Formia, Italy, May 1998.

  23. “Production of Enhanced Pressure Regions due to Inhomogeneities in Inertial Confinement Fusion”, S.R. Goldman, C.W. Barnes, S.E. Caldwell, D.C. Wilson, S.H. Batha, J.W. Grove, M.L. Gittings, W.W. Hsing, R.J. Kares, K.A. Klare, G.A. Kyrala, R.W. Margevicius, R.P. Weaver, M.D. Wilke, A.M. Dunne, M.J. Edwards, P. Graham, and B.R. Thomas, Physics of Plasmas, 7(5), pp. 2007–2013, 2000.

  24. “Supersonic Jet and Shock Interactions”, J.M. Foster, B.H. Wilde, P.A. Rosen, T.S. Perry, M. Fell, M.J. Edwards, B.F. Lasinski, R.E. Turner, and M.L. Gittings, Physics of Plasmas, 9(5) Part 2, pp. 2251–2263, 2002.

  25. “On 3D, Automated, Self-Contained Grid Generation Within the RAGE CAMR Hydrocode”, W.R. Oakes, P.J. Henning, M.L. Gittings, and R.P. Weaver, in Proceedings of the 7th International Conference on Numerical Grid Generation in Computational Field Simulations, p. 973, Chateau Whistler Resort, September 2000.

  26. “Two- and Three-Dimensional Simulations of Asteroid Ocean Impacts”, Galen Gisler, Robert Weaver, Charles Mader, and M.L. Gittings, Los Alamos internal report LA-UR-02-30, 2002.

  27. “Two- and Three-Dimensional Simulations of Asteroid Impacts”, Galen Gisler, Robert Weaver, M.L. Gittings, and Charles Mader, Los Alamos internal report, accepted for publication in Computers in Science and Engineering Journal, 2004.

  28. “The Piecewise Parabolic Method (PPM) for Gas-Dynamical Simulations”, P. Colella and P.R. Woodward, J. of Computational Phys. 54(1), pp. 174–201, 1984.

  29. “Local adaptive mesh refinement for shock hydrodynamics”, M.J. Berger and P. Colella, J. Comp. Phys. 82, pp. 64–84, 1989.

  30. “2-D Convergence Analysis of the RAGE Hydrocode”, J.R. Kamm and W.J. Rider, Los Alamos National Laboratory internal report LA-UR-98-3872.

  31. “Comparing the Fidelity of Numerical Simulations of Shock-Generated Instabilities”, J.R. Kamm, W.J. Rider, P.V. Vorobieff, P.M. Rightley, R.F. Benjamin, and K.P. Prestridge, in Proceedings of the 22nd International Symposium on Shock Waves (ISSW22), London, July 18–23, 1999, paper 4259.

  32. “A Numerical and Experimental Study of a Shock-Accelerated Heavy Gas Cylinder”, C. Zoldi, Ph.D. thesis, State University of New York at Stony Brook, May 2002.


Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Weaver, R.P., Gittings, M.L. (2005). Massively Parallel Simulations with DOE's ASCI Supercomputers: An Overview of the Los Alamos Crestone Project. In: Plewa, T., Linde, T., Gregory Weirs, V. (eds) Adaptive Mesh Refinement - Theory and Applications. Lecture Notes in Computational Science and Engineering, vol 41. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-27039-6_2
