The current availability of thousands of processors at many high-performance computing centers has made it feasible to carry out, in near real time, interactive visualization of 3D mantle convection temperature fields on grid configurations with 10–100 million unknowns. We describe the technical details of this endeavor, using the facilities of the Laboratory of Computational Science and Engineering (LCSE) at the University of Minnesota. These details involve the modification of a parallel mantle convection program, ACuTEMan; client–server socket-based programs that transfer upwards of a terabyte of time-series scientific model data over a local network; a multi-node rendering system; a high-resolution PowerWall display; and the interactive visualization software DSCVR. We have found that working in an interactive visualization mode allows fast and efficient analysis of mantle convection results.
Brandt A (1977) Multi-level adaptive solutions to boundary-value problems. Math Comput 31:333–390
Chin-Purcell K (1993) Brick of bytes. Minnesota Supercomputing Center, Inc.
Chorin AJ (1967) A numerical method for solving incompressible viscous flow problems. J Comput Phys 2:12–26
Cohen RE (2005) High-performance computing requirements for the computational solid earth sciences. http://www.geo-prose.com/computational_SES.html
Kameyama M (2005) ACuTEMan: a multigrid-based mantle convection simulation code and its optimization to the Earth Simulator. J Earth Simul 4:2–10
Kameyama M, Kageyama A, Sato T (2005) Multigrid iterative algorithm using pseudo-compressibility for three-dimensional mantle convection with strongly variable viscosity. J Comput Phys 206(1):162–181
Margetts L, Smethurst C, Ford R (2005) Interactive finite element analysis. In: NAFEMS World Congress
McKenzie DP, Roberts J, Weiss NO (1973) Convection in the Earth’s mantle. Tectonophysics 19:89–103
Quenette S, Moresi L, Sunter PD, Appelbe BF (2007) Explaining StGermain: an aspect oriented environment for building extensible computational mechanics modeling software. In: 21st IEEE International Parallel and Distributed Processing Symposium
Stacey FD (1992) Physics of the Earth. Brookfield Press, Brisbane, p 513
Torrance KE, Turcotte DL (1971) Thermal convection with large viscosity variations. J Fluid Mech 47:113–125
Trottenberg U, Oosterlee C, Schuller A (2001) Multigrid. Academic, New York, p 631
Turcotte DL, Schubert G (1982) Geodynamics: applications of continuum physics to geological problems. Wiley, New York, p 450
Wesseling P (1992) An introduction to Multigrid methods. Wiley, New York, p 284
Woodward PR, Porter DH, Greensky J, Larson AJ, Knox M, Hanson J, Ravindran N, Fuchs T (2007) Interactive volume visualization of fluid flow simulation data. http://www.lcse.umn.edu/mri/PARA06-vis-paper-1-20-07-w-citation.pdf
Zhou S, Oloso A, Damon M, Clune T (2006) Controlled parallel asynchronous IO, poster. In: Proceedings of SC2006. ACM/IEEE Supercomputing
We thank Professor Paul R. Woodward for his always sage advice, and the NSF Middleware and ITR programs for support. The equipment used in this research was supported by an NSF MRI grant awarded to Paul Woodward.
Previous studies have shown that taking advantage of network speeds decreases the overall computation time for scientific models (Zhou et al. 2006). A natural consequence of interactive visualization is a reduction in wall-clock time. In our case, visualizing the ACuTEMan results immediately takes less wall-clock time because we have reduced the amount of data written to local disk on the compute nodes. We find it sufficient to store only partial data sets at specified intervals, rather than archiving the output of an entire simulation. Most important are the final conditions, the state of the model at the last time step. Using these as initial conditions, we can restart the model and rerun a partial case if the data later becomes of interest. Saving the final conditions thus lets us start a simulation from various times of interest when re-analysis becomes necessary, and saves us from storing large data sets at every time increment.
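The checkpoint-and-restart strategy described above can be sketched as follows. This is a minimal illustration in Python: the helper names (`save_checkpoint`, `load_checkpoint`), the pickle file format, and the toy "model" loop are our own stand-ins, not part of ACuTEMan, which writes its own binary restart files.

```python
import os
import pickle
import tempfile

def save_checkpoint(path, step, field):
    """Write the model state (time step and temperature field) to disk."""
    with open(path, "wb") as f:
        pickle.dump({"step": step, "field": field}, f)

def load_checkpoint(path):
    """Restore a saved state so a run can resume from it."""
    with open(path, "rb") as f:
        state = pickle.load(f)
    return state["step"], state["field"]

# Toy "model": checkpoint only at specified intervals, keeping the last state.
checkpoint_interval = 10
field = [0.0] * 8                      # stand-in for a 3D temperature grid
path = os.path.join(tempfile.gettempdir(), "restart_state.pkl")

for step in range(1, 31):
    field = [t + 0.1 for t in field]   # stand-in for one convection time step
    if step % checkpoint_interval == 0:
        save_checkpoint(path, step, field)

# Later: restart from the final conditions instead of replaying the whole run.
restart_step, restart_field = load_checkpoint(path)
print(restart_step)  # 30
```

The point of the pattern is that only the most recent state (a few checkpoints at most) ever resides on disk, rather than the full time series of the simulation.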
The decrease in computation time results from both minimizing the frequency of local data writes and using the local network for data output. In interactive mode, data is transferred at close to the theoretical maximum rate of the network rather than at the rate of the local disk. Data moves from the local RAM on the nodes of the Blade system, through the InfiniBand interface, to the RAM disk on the LCSE rendering system. Once the RAM disk on the LCSE system fills with data, this speedup would no longer be observed: if the grid were increased to a large enough size, the capacity of the LCSE's RAM disk would be reached, filling memory and slowing overall visualization and computation performance. We therefore seek a balance between real-time visualization and overtaxing the resources currently in place at the LCSE. The data throughput rate can be improved by adding more nodes to the LCSE system to accept incoming transfers from multiple nodes on the Blade system.
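The client–server socket transfer underlying this pipeline can be illustrated with a minimal sketch. This is a localhost TCP example with a simple length-prefixed framing of our own devising; the actual LCSE transfer programs, the InfiniBand transport, and the RAM-disk sink are not shown, and names such as `send_block` are hypothetical.

```python
import socket
import threading

def recv_exact(conn, n):
    """Read exactly n bytes from a socket, looping over partial reads."""
    buf = bytearray()
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before full block arrived")
        buf += chunk
    return bytes(buf)

def receiver(server_sock, sink):
    """Renderer side: accept one connection and read one framed data block."""
    conn, _ = server_sock.accept()
    with conn:
        size = int.from_bytes(recv_exact(conn, 8), "big")  # 8-byte length header
        sink.append(recv_exact(conn, size))

def send_block(addr, payload):
    """Compute-node side: push one block of in-memory field data."""
    with socket.create_connection(addr) as s:
        s.sendall(len(payload).to_bytes(8, "big"))
        s.sendall(payload)

# Wire the two ends together on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port; real setup uses InfiniBand
server.listen(1)
addr = server.getsockname()

received = []
t = threading.Thread(target=receiver, args=(server, received))
t.start()

data = b"\x00" * (1 << 20)             # 1 MiB stand-in for a temperature-field slab
send_block(addr, data)
t.join()
server.close()
print(len(received[0]) == len(data))   # True
```

Because the payload goes straight from the sender's memory to the receiver's, the transfer rate is bounded by the network rather than by local disk, which is the effect described above.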
Damon, M., Kameyama, M.C., Knox, M. et al. Interactive visualization of 3D mantle convection. Vis Geosci 13, 49–57 (2008). https://doi.org/10.1007/s10069-007-0008-1
- 3D volume visualization
- Mantle convection
- Interactive visualization
- Parallel computing