
Benchmarking LAMMPS: Sensitivity to Task Location Under CPU-Based Weak-Scaling

Part of the Communications in Computer and Information Science book series (CCIS, volume 979)


This investigation summarizes a set of executions carried out on the supercomputers Stampede at TACC (USA), Helios at IFERC (Japan), and Eagle at PSNC (Poland) with the molecular dynamics code LAMMPS, compiled for CPUs. A communication-intensive benchmark dominated by long-range interactions, computed with the Fast Fourier Transform operator, was selected to test its sensitivity to markedly different patterns of task location, and thus to identify the best way to run further simulations of this family of problems. Weak-scaling tests show that the execution time of LAMMPS is closely linked to the cluster topology, as revealed by the varying execution times observed when scaling up to thousands of MPI tasks. Notably, two of the clusters exhibit time savings (up to 61% within the parallelization range) when the MPI-task mapping follows a concentration pattern over as few nodes as possible. Besides being useful from the user's standpoint, this result may also help to improve cluster throughput by, for instance, adding live-migration decisions to scheduling policies when communication-intensive behaviour is detected in characterization tests. It likewise opens a path to a more efficient usage of the cluster from the energy-consumption point of view.


  • Cluster throughput
  • LAMMPS benchmarking
  • MPI application performance
  • Weak scaling
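The two placement patterns compared in the abstract, concentrating MPI tasks on as few nodes as possible versus spreading them across the allocation, correspond to the block and cyclic task distributions that batch schedulers commonly expose (e.g. Slurm's `--distribution=block` vs `--distribution=cyclic`). The sketch below is illustrative only; the function names and core counts are assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): rank-to-node assignment under
# the two placement policies discussed, for a given number of MPI ranks.

def block_placement(n_ranks, cores_per_node):
    """Pack ranks onto as few nodes as possible (concentration pattern)."""
    return [rank // cores_per_node for rank in range(n_ranks)]

def cyclic_placement(n_ranks, n_nodes):
    """Spread ranks round-robin across all allocated nodes."""
    return [rank % n_nodes for rank in range(n_ranks)]

if __name__ == "__main__":
    # 8 ranks on 2 nodes with 4 cores each (hypothetical figures):
    print(block_placement(8, 4))   # ranks 0-3 on node 0, ranks 4-7 on node 1
    print(cyclic_placement(8, 2))  # ranks alternate between node 0 and node 1
```

Packing ranks keeps more of the communication intra-node (shared memory) rather than crossing the interconnect, which is consistent with the time savings the paper reports for the concentration pattern on two of the three clusters.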

[Figs. 1–10 omitted]





This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness project CODEC2 (TIN2015-63562-R) with European Regional Development Fund (ERDF) as well as carried out on computing facilities provided by the CYTED Network RICAP (517RT0529) and Poznań Supercomputing and Networking Center. The support of Marcin Pospieszny, system administrator at PSNC, is gratefully acknowledged.


Correspondence to José A. Moríñigo.


Copyright information

© 2019 Springer Nature Switzerland AG


Cite this paper

Moríñigo, J.A., García-Muller, P., Rubio-Montero, A.J., Gómez-Iglesias, A., Meyer, N., Mayo-García, R. (2019). Benchmarking LAMMPS: Sensitivity to Task Location Under CPU-Based Weak-Scaling. In: Meneses, E., Castro, H., Barrios Hernández, C., Ramos-Pollan, R. (eds) High Performance Computing. CARLA 2018. Communications in Computer and Information Science, vol 979. Springer, Cham.

  • DOI: 10.1007/978-3-030-16205-4_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16204-7

  • Online ISBN: 978-3-030-16205-4
