Abstract
This paper summarizes a set of executions performed on the supercomputers Stampede at TACC (USA), Helios at IFERC (Japan), and Eagle at PSNC (Poland) with the molecular dynamics solver LAMMPS, compiled for CPUs. A communication-intensive benchmark dominated by long-range interactions, which LAMMPS resolves with the Fast Fourier Transform (FFT) operator, was selected to test the code's sensitivity to rather different MPI-task placement patterns and thus to identify the best way to run further simulations for this family of problems. Weak-scaling tests show that the execution time of LAMMPS is closely linked to the cluster topology, as revealed by the variation in execution time observed when scaling up to thousands of MPI tasks. Notably, two of the clusters exhibit time savings of up to 61% within the parallelization range when the MPI-task mapping follows a concentration pattern over as few nodes as possible. Besides being useful from the user's standpoint, this result may also help improve cluster throughput, for instance by adding live-migration decisions to the scheduling policies when communication-intensive behaviour is detected in characterization tests. It likewise opens the way to a more efficient usage of the cluster from the energy-consumption point of view.
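As an illustration of the two placement patterns compared above, the sketch below builds the corresponding `srun` command lines. It is a minimal, hypothetical sketch: the core count, task count, and input deck (`in.rhodo`, the standard LAMMPS Rhodopsin benchmark, which exercises long-range PPPM/FFT interactions) are assumptions for illustration, not the exact configuration used in the study; only the generic Slurm options `--nodes`, `--ntasks`, and `--ntasks-per-node` are relied on.

```python
# Hypothetical sketch: generate srun command lines for the two MPI-task
# placement patterns (concentrated vs. spread). Core/task counts and the
# LAMMPS input deck are illustrative, not values from the paper.

CORES_PER_NODE = 16   # assumed node width; the three clusters differ
TOTAL_TASKS = 1024    # one point of a weak-scaling sweep

def srun_cmd(tasks: int, tasks_per_node: int, deck: str = "in.rhodo") -> str:
    """Build an srun invocation that places `tasks` MPI ranks with the
    requested per-node concentration."""
    nodes = -(-tasks // tasks_per_node)  # ceiling division
    return (f"srun --nodes={nodes} --ntasks={tasks} "
            f"--ntasks-per-node={tasks_per_node} lmp -in {deck}")

# Concentrated mapping: pack ranks onto as few nodes as possible.
print(srun_cmd(TOTAL_TASKS, CORES_PER_NODE))
# Spread mapping: half-populate each node, doubling the node count.
print(srun_cmd(TOTAL_TASKS, CORES_PER_NODE // 2))
```

A companion sketch after the keyword list shows the arithmetic behind the quoted time saving.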
Keywords
- Cluster throughput
- LAMMPS benchmarking
- MPI application performance
- Weak scaling
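For reference, weak scaling holds the per-task problem size fixed as the task count grows, so under ideal scaling the wall time stays constant; the saving quoted in the abstract compares two placements at the same scaling point. A minimal sketch of that arithmetic, with placeholder timings rather than measurements from the paper:

```python
# Placeholder wall-clock times (seconds) at one weak-scaling point;
# not measurements from the paper.
t_spread = 100.0        # MPI tasks spread across many nodes
t_concentrated = 39.0   # MPI tasks packed onto as few nodes as possible

saving = 100.0 * (t_spread - t_concentrated) / t_spread
print(f"time saving: {saving:.0f}%")  # 61% with these placeholder times
```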
Acknowledgment
This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness through project CODEC2 (TIN2015-63562-R), with support from the European Regional Development Fund (ERDF), and was carried out on computing facilities provided by the CYTED Network RICAP (517RT0529) and the Poznań Supercomputing and Networking Center. The support of Marcin Pospieszny, system administrator at PSNC, is gratefully acknowledged.
© 2019 Springer Nature Switzerland AG
Cite this paper
Moríñigo, J.A., García-Muller, P., Rubio-Montero, A.J., Gómez-Iglesias, A., Meyer, N., Mayo-García, R. (2019). Benchmarking LAMMPS: Sensitivity to Task Location Under CPU-Based Weak-Scaling. In: Meneses, E., Castro, H., Barrios Hernández, C., Ramos-Pollan, R. (eds.) High Performance Computing. CARLA 2018. Communications in Computer and Information Science, vol. 979. Springer, Cham. https://doi.org/10.1007/978-3-030-16205-4_17