Performance drop at executing communication-intensive parallel algorithms



This work summarizes the results of a set of executions carried out on three fat-tree network supercomputers: Stampede at TACC (USA), Helios at IFERC (Japan) and Eagle at PSNC (Poland). Three MPI-based, communication-intensive scientific applications compiled for CPUs were executed under weak-scaling tests: the molecular dynamics solver LAMMPS; the finite element-based mini-kernel miniFE of NERSC (USA); and the three-dimensional fast Fourier transform mini-kernel bigFFT of LLNL (USA). The experiments were designed to probe the sensitivity of the applications to markedly different patterns of task location and to assess its impact on cluster performance. The weak-scaling tests stress the effect of the MPI task mapping (concentrated vs. distributed placement of MPI tasks over the nodes) on the cluster. Results reveal that highly distributed task patterns may incur a much larger execution time at scale, when several hundreds or thousands of MPI tasks are involved. Such a characterization helps users to carry out further, more efficient executions; researchers may also use these experiments to improve their scalability simulators. In addition, the results are useful from the cluster administration standpoint, since task mapping has an impact on cluster throughput.
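The contrast between concentrated and distributed task placements can be sketched in a few lines. This is a hypothetical illustration, not the paper's experiment code: under a 1-D nearest-neighbour communication pattern, block placement keeps most neighbour pairs on the same node, while cyclic placement pushes every pair onto the network.

```python
# Illustrative sketch (not from the paper): block ("concentrated") vs.
# cyclic ("distributed") mapping of MPI ranks onto nodes, and the number
# of 1-D nearest-neighbour rank pairs that end up crossing the network.

def block_mapping(n_ranks, ranks_per_node):
    # Consecutive ranks fill a node before spilling to the next one.
    return [r // ranks_per_node for r in range(n_ranks)]

def cyclic_mapping(n_ranks, n_nodes):
    # Consecutive ranks are dealt round-robin across the nodes.
    return [r % n_nodes for r in range(n_ranks)]

def inter_node_links(node_of_rank):
    # Count neighbouring rank pairs (r, r+1) placed on different nodes;
    # these are the messages that must traverse the fat-tree.
    return sum(a != b for a, b in zip(node_of_rank, node_of_rank[1:]))

if __name__ == "__main__":
    n_ranks, ranks_per_node = 16, 4
    n_nodes = n_ranks // ranks_per_node
    print(inter_node_links(block_mapping(n_ranks, ranks_per_node)))  # 3
    print(inter_node_links(cyclic_mapping(n_ranks, n_nodes)))        # 15
```

With 16 ranks on 4 nodes, the block mapping sends only 3 of the 15 neighbour messages over the network, whereas the cyclic mapping sends all 15, which is the qualitative mechanism behind the performance drop the experiments quantify.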






Acknowledgements

This work was partially funded by the Spanish Ministry of Economy and Competitiveness CODEC-OSE project (RTI2018-096006-B-I00) and the Comunidad de Madrid CABAHLA project (S2018/TCS-4423), both with European Regional Development Funds (ERDF). It also benefited from the H2020 co-funded projects Energy oriented Centre of Excellence for Computing Applications II (EoCoE-II, No. 824158) and Supercomputing and Energy in Mexico (Enerxico, No. 828947). Access to resources of the CYTED Network RICAP (517RT0529) and the Poznan Supercomputing and Networking Center, in particular the support of Marcin Pospieszny, system administrator at PSNC, is gratefully acknowledged.

Author information

Corresponding author

Correspondence to José A. Moríñigo.



Cite this article

Moríñigo, J.A., García-Muller, P., Rubio-Montero, A.J. et al. Performance drop at executing communication-intensive parallel algorithms. J Supercomput 76, 6834–6859 (2020).



Keywords

  • Cluster throughput
  • Communication-intensive algorithms
  • MPI application
  • Weak scaling