Extreme Event Analysis in Next Generation Simulation Architectures

  • Stephen Hamilton
  • Randal Burns
  • Charles Meneveau
  • Perry Johnson
  • Peter Lindstrom
  • John Patchett
  • Alexander S. Szalay
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10266)


Numerical simulations present challenges because they generate petabyte-scale data that must be extracted and reduced during the simulation. We demonstrate a seamless integration of feature extraction into a simulation of turbulent fluid dynamics that produces on the order of 6 TB per timestep. To analyze and store these data, we extract velocity data from a dilated volume of the strong vortical regions and also store a lossy compressed representation of the data. Both techniques reduce the data by one or more orders of magnitude. We extract data from user checkpoints in transit, while they reside on temporary burst-buffer SSD stores; the analysis and compression algorithms are designed to meet specific time constraints so that they do not interfere with the simulation's computations. Our results demonstrate that we can perform feature extraction on a world-class direct numerical simulation of turbulence while it is running and gather meaningful scientific data for archiving and post-analysis.
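The extraction step described above (thresholding on vorticity magnitude, then dilating the resulting mask before pulling out velocity data) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the NumPy/SciPy layout, and the mean-plus-two-standard-deviations default threshold are all assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def extract_vortical_regions(velocity, spacing=1.0, threshold=None, dilation=2):
    """Extract velocity data from a dilated mask of strong vortical regions.

    velocity: array of shape (3, nx, ny, nz) holding components (u, v, w).
    spacing: grid spacing used by the finite-difference gradients.
    threshold: vorticity-magnitude cutoff; defaults to mean + 2*std (illustrative).
    dilation: number of voxels by which to dilate the thresholded mask.
    """
    u, v, w = velocity
    # Vorticity omega = curl(velocity), via central differences.
    dwdy = np.gradient(w, spacing, axis=1)
    dvdz = np.gradient(v, spacing, axis=2)
    dudz = np.gradient(u, spacing, axis=2)
    dwdx = np.gradient(w, spacing, axis=0)
    dvdx = np.gradient(v, spacing, axis=0)
    dudy = np.gradient(u, spacing, axis=1)
    omega_mag = np.sqrt((dwdy - dvdz)**2 + (dudz - dwdx)**2 + (dvdx - dudy)**2)

    if threshold is None:
        threshold = omega_mag.mean() + 2.0 * omega_mag.std()

    # Dilate the high-vorticity mask so the stored volume includes a margin
    # of surrounding flow, then zero the velocity everywhere else.
    mask = ndimage.binary_dilation(omega_mag >= threshold, iterations=dilation)
    return np.where(mask, velocity, 0.0), mask
```

Storing only the masked voxels (e.g. in a sparse layout) is what yields the order-of-magnitude reduction: strong vortical regions occupy a small fraction of the domain.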


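The second reduction path, the lossy compressed representation, trades a bounded loss of precision for a large reduction in size. The toy function below illustrates the idea with simple mantissa truncation; it is an assumed stand-in for a real fixed-precision floating-point compressor such as zfp, not the compressor used in the paper.

```python
import numpy as np

def truncate_mantissa(data, keep_bits):
    """Zero the low-order mantissa bits of float32 values.

    Keeping `keep_bits` of the 23 mantissa bits bounds the relative error
    by 2**-keep_bits and makes the array far more compressible by any
    downstream lossless coder, since the dropped bits become all zeros.
    """
    a = np.ascontiguousarray(data, dtype=np.float32)
    bits = a.view(np.uint32)
    # Mask that preserves sign, exponent, and the top `keep_bits` mantissa bits.
    mask = np.uint32((0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF)
    return (bits & mask).view(np.float32)
```

A production compressor additionally entropy-codes or packs the retained bits at a fixed rate, which is what allows the analysis pipeline to meet hard time and space budgets on the burst buffer.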



The authors would like to thank Los Alamos National Laboratory for providing compute resources. Specifically, we would like to thank Ryan Braithwaite, who configured the Darwin cluster and set up our reservation times to run our experiments. This work is supported in part by the National Science Foundation under Grants CMMI-0941530, OCI-108849, ACI-1261715, OCI-1244820, and AST-0939767, Johns Hopkins University's Institute for Data Intensive Engineering & Science, and Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was partially supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration, under the auspices of the U.S. Department of Energy.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science, Johns Hopkins University, Baltimore, USA
  2. Department of Mechanical Engineering, Johns Hopkins University, Baltimore, USA
  3. Lawrence Livermore National Laboratory, Livermore, USA
  4. Los Alamos National Laboratory, Los Alamos, USA
  5. Department of Physics and Astronomy, Johns Hopkins University, Baltimore, USA
