1 Introduction

As computing power keeps increasing, the number of critical components affecting the scalability and efficiency of a particular application is also growing rapidly. Of particular importance are the increasingly dynamic nature of high-performance computing (HPC) systems [1], the necessity of a scalable yet robust implementation of the target problem using modern parallelization paradigms [2], the achievement of cache-optimal performance at the single-node level [3] and a straightforward way to accurately monitor and analyse the extent to which individual system/software components condition the overall system. It is important to raise awareness of these critical issues at the non-specialist user level as well, where a great number of people nowadays make routine use of HPC resources to gain new insights and drive forward exciting activities.

Assessment of parallel performance and overhead has been studied extensively in the past [4,5,6,7,8,9,10,11,12]. Starting with the LogP model [4, 5], a given algorithm could be analysed in terms of four parameters: latency, overhead, gap (reciprocal communication bandwidth) and number of processors. A key design goal was to find a balance between overly simplistic and overly specific models. The application to MPI [6] has been described, as have several extensions accounting for large messages [6, 7] and contention effects [8]. A more abstract framework with tuneable complexity but still practical timing requirements was provided with PERC [9]. More recent trends in hybrid MPI/OpenMP programming were addressed by combining application signatures with system profiles [10]. Along similar lines, application-centric performance modelling [11, 13] was described, based on characteristics of the application and the target computing platform, with the objective of successful large-scale extrapolation. Similar predictions could also be made with the help of run-time functions within the SUIF infrastructure [12].

Recently, the cost of computation has become cheap relative to communication [14]; thus, in order to make an algorithm scalable, the overhead due to communication must be reduced to a minimum [15]. While several powerful tools for quantifying communication overhead have been developed in the past [16,17,18,19,20], their routine use by the general HPC practitioner is still far from standard practice. Consequently, it would be desirable to have a quick and simple method for estimating the extent of communication overhead without the need for additional interference with the software/system layer (e.g. without recompiling, switching on profiling flags or linking to additional libraries). Ideally, such a method should be easy to adopt for any HPC user interested in the subject. In the following we outline the basics and practical details of exactly such an approach.

Table 1 HPL: Exe-Times, \(t_n\), and MPI-Overhead, \(\tau _n^{MPI}\)
Table 2 GROMACS: Exe-Times, \(t_n\), and MPI-Overhead, \(\tau _n^{MPI}\)
Table 3 AMBER: Exe-Times, \(t_n\), and MPI-Overhead, \(\tau _n^{MPI}\)
Table 4 InHouseDev: Exe-Times, \(t_n\), and MPI-Overhead, \(\tau _n^{MPI}\)
Table 5 VASP: Exe-Times, \(t_n\), and MPI-Overhead, \(\tau _n^{MPI}\)
Table 6 QUANTUM ESPRESSO: Exe-Times, \(t_n\), and MPI-Overhead, \(\tau _n^{MPI}\)
Table 7 LAMMPS: Exe-Times, \(t_n\), and MPI-Overhead, \(\tau _n^{MPI}\)

2 Basic model

We begin our investigation with the selection of a set of scientific applications frequently used on HPC platforms. They are,

  • HPL [21],

  • GROMACS [22],

  • AMBER [23],

  • VASP [24],

  • QUANTUM ESPRESSO [25],

  • LAMMPS [26]

  • and an in-house developed quantum chemistry code [27, 28].

Realistic problems are defined and computed in parallel on increasing numbers of cores using MPI [2] as the communication protocol. Only strong scaling is considered, i.e. a constant problem size computed in shorter times with increasing numbers of processing elements (cores). Times to solution, \(t_n\), are recorded as a function of the number of involved cores, n, and results are summarized in Tables 1, 2, 3, 4, 5, 6 and 7 (columns 1, 2). In addition, the time spent in MPI calls, \(\tau _n^{MPI}\), is recorded and included in Tables 1, 2, 3, 4, 5, 6 and 7 (column 3). Two different tools are used to measure MPI times, namely mpiP [17] and allinea/MAP [18]. The time records obtained from both tools are largely identical, as demonstrated by the example of AMBER (see Table 3). \(\tau _n^{MPI}\) assessment based on mpiP analysis (Tables 1, 2, 3, 4) yields individual MPI timings on a per-task basis; hence, averages need to be formed over the n different tasks of a particular sample run. Because individual MPI times vary considerably, it was also of interest to compute the variance of \(\tau _n^{MPI}\) and its corresponding standard deviation, \(\pm \Delta \tau _n^{MPI}\) (see Tables 1, 2, 3, 4, column 4). Given the diversity of the applications and their markedly different characteristics in terms of parallel scalability, common trends in the introduced parallel overhead are not readily apparent. What does appear to be a rather general signature of all applications, however, is the smooth evolution of the quotient between parallel overhead and total run-time, \(\tau _n^{MPI}/t_n\), which is graphically illustrated in Fig. 1 (also see the final column in Tables 1, 2, 3, 4, 5, 6, 7). All data sets can be approximated by the following simple expression in two adjustable parameters, b and c,

$$\begin{aligned} \frac{\tau _n^{MPI}}{t_n} = \frac{b}{c + 1} - \frac{b}{c + n} \end{aligned}$$
(1)

and the resulting fits are also included in Fig. 1 (solid curves). While primarily an empirical relation, Eq. (1) should still satisfy the limiting conditions \(\tau _1=0\) and \(\frac{\tau _\infty }{t_\infty } < 1\).
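Both conditions are readily verified from Eq. (1): setting \(n=1\) makes the two terms cancel, while letting n grow without bound removes the second term,

$$\begin{aligned} \left. \frac{\tau _n^{MPI}}{t_n}\right| _{n=1} = \frac{b}{c + 1} - \frac{b}{c + 1} = 0, \qquad \lim _{n\rightarrow \infty } \frac{\tau _n^{MPI}}{t_n} = \frac{b}{c + 1}, \end{aligned}$$

so \(\tau _1 = 0\) holds identically and the second condition is met whenever \(b < c + 1\).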

2.1 Generalization

So far we have been rather imprecise in the use of the term “parallel overhead” and have frequently replaced it with “communication overhead”, \(\tau _n^{MPI}\), etc. In general, we consider as parallel overhead every incremental time fragment emerging within a parallel algorithm that is in excess of the serial algorithm required to solve exactly the same type of problem. Typically this will include [15],

  • time to exchange data

  • time to synchronize individual parallel tasks

  • extra computing time due to code sections arising only in the parallel algorithm

  • computing time penalties due to load balancing issues

  • computing time penalties due to inhomogeneous conditions between individual components of the parallel machine [1]

Measuring parallel overhead is not a trivial matter [29,30,31]. A conventional view is that it is, to a large extent, covered by communication overhead. In fact, if we review the above list we see that task-level recording of individual MPI times (as done here) will either explicitly or implicitly include almost all of the incurred parallel overhead. Moreover, since our primary interest is in providing an approximate estimate, we shall consider \(\tau _n^{MPI}\) to be a sufficiently accurate measure of the total parallel overhead and adopt the notation \(\tau _n\) for the latter throughout the remainder of this article. Estimating \(\tau _n\) will now help to (i) raise awareness that a particular application may be significantly affected by parallel overhead, (ii) facilitate a posteriori assessment of various applications reporting times to solution, \(t_n\), as a function of the number of cores, n, and (iii) identify optimal run-time conditions on a given parallel architecture.

2.2 Solving for \(\tau _n\)

In the following we build on the model established in Eq. (1) and isolate from it a closed-form expression for the parallel overhead, \(\tau _n\). Starting with

$$\begin{aligned} \tau _n = t_n \left( \frac{b}{c + 1} - \frac{b}{c + n} \right) \end{aligned}$$
(2)

we can formally decompose the time to solution,

$$\begin{aligned} t_n = t_n^{\hbox {Amdahl}} + \tau _n \end{aligned}$$
(3)

into an ideal time to solution, \(t_n^{\hbox {Amdahl}}\), and an associated parallel overhead, \(\tau _n\). As already implied by the superscript, the first term is given by the classic Amdahl relation [32,33,34,35],

$$\begin{aligned} t_n^{\hbox {Amdahl}} = f_s t_1 + \frac{(1 - f_s)t_1}{n} \end{aligned}$$
(4)

where \(t_1\) denotes the single-core execution time and \(f_s\) the serial fraction that cannot be run in parallel. It follows from Eqs. (2) and (3) that we can isolate an expression for the parallel overhead, namely

$$\begin{aligned} \tau _n = \frac{t_n^{\hbox {Amdahl}} b(n - 1)}{(1 + c -b)n + (b + c +c^2)} \end{aligned}$$
(5)

and thus, again using Eq. (3), express the total time to solution,

$$\begin{aligned} t_n = t_n^{\hbox {Amdahl}} \left[ 1 + \frac{b(n - 1)}{(1 + c -b)n + (b + c +c^2)} \right] \end{aligned}$$
(6)

as a multiplicative extension to the original proposal of Amdahl [32,33,34,35].
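For completeness, the intermediate algebra is as follows. Writing \(R_n = \frac{b}{c+1} - \frac{b}{c+n}\) for the ratio of Eq. (1), Eqs. (2) and (3) give \(\tau _n = t_n^{\hbox {Amdahl}}\,R_n/(1-R_n)\), and bringing \(R_n\) over a common denominator yields

$$\begin{aligned} R_n = \frac{b(n-1)}{(c+1)(c+n)}, \qquad \frac{R_n}{1-R_n} = \frac{b(n-1)}{(c+1)(c+n) - b(n-1)} = \frac{b(n-1)}{(1 + c - b)n + (b + c + c^2)}, \end{aligned}$$

which is exactly the factor appearing in Eqs. (5) and (6).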

Fig. 1

Ratio of parallel overhead, i.e. the time spent in MPI communication, \(\tau _n^{MPI}\), to total execution time, \(t_n\), for a selected set of scientific applications frequently used on HPC platforms (also see Tables 1, 2, 3, 4, 5, 6, 7). Error bars indicate the resulting uncertainty when standard deviations of the average values of \(\tau _n^{MPI}\) from mpiP analysis [17] are taken into account. allinea/MAP evaluations [18] deliver mean values for \(\tau _n^{MPI}\) by default. Individual data sets can be fit nicely by the two-parameter model detailed in Eq. (1) (solid curves)

3 Results

3.1 HPC systems used

All test applications examined here (except the in-house developed code) were run on the Vienna Scientific Cluster, version 3 (VSC-3) [36]. VSC-3 consists of 2020 compute nodes, each equipped with dual-socket 8-core Intel Xeon CPUs (E5-2650v2, 2.6 GHz, Ivy Bridge) and interconnected by a dual-rail Infiniband QDR-80 network. Standard node memory is 64 GB; nodes with 128 or 256 GB are optionally available. The system features a rather unconventional cooling infrastructure, i.e. Liquid Immersion Cooling [37], in which hardware components are fully immersed in mineral oil.

The in-house developed code was run on VSC-2, another HPC installation consisting of 1314 compute nodes, each with two AMD Opteron 6132 HE CPUs (2.2 GHz, 8-core), again interconnected via an Infiniband QDR fabric. Standard nodes on VSC-2 provide 32 GB RAM.

3.2 Parallel overhead determined from run-time records

The simplest type of performance analysis for a particular application is to record execution times for increasing numbers of cores operating in parallel. This is also the most relevant type of analysis because it is based on exactly the executable that will later be used in the production stage. Thus, no alterations to the binary have to be made for the purpose of analysing the code, for example the introduction of internal timers, instrumentation for profiling/debugging, inclusion of event counters or library wrappers; and all observed execution times directly reflect the natural run-time behaviour of the application under consideration.

Fig. 2

Recorded times to solution, \(t_n\), (brown triangles, also see Table 1, columns 1–2) as a function of numbers of cores, n, operating in parallel for application HPL [21]. Very large initial times corresponding to very small core counts have been truncated for better graphical comparison. Best fitting the data by Eq. (6) yields parameters b and c (implicitly also \(f_s\)) where the original data are reasonably well approximated (solid line in cyan). In addition, an estimate can be provided for the parallel overhead using Eq. (5) (solid orange line). The estimate matches the mpiP-derived [17] mean parallel overhead rather well (compare orange line to the brown triangles with error bars, respectively, Table 1, columns 3–4). Significant deviation from Amdahl’s Law is seen already for small core counts (compare cyan to grey line) (Color figure online)

Fig. 3

Recorded times to solution, \(t_n\), (blue triangles, also see Table 2, columns 1–2) as a function of numbers of cores, n, operating in parallel for application GROMACS [22]. Very large initial times corresponding to very small core counts have been truncated for better graphical comparison. Best fitting the data by Eq. (6) yields parameters b and c (implicitly also \(f_s\)) where the original data are reasonably well approximated (solid line in cyan). In addition, an estimate can be provided for the parallel overhead using Eq. (5) (solid orange line). The estimate matches the mpiP-derived [17] mean parallel overhead rather well (compare orange line to the blue triangles with error bars, respectively, Table 2, columns 3–4). Significant deviation from Amdahl’s Law is seen already for small core counts (compare cyan to grey line) (Color figure online)

Fig. 4

Recorded times to solution, \(t_n\), (green dots, also see Table 3, columns 1–2) as a function of numbers of cores, n, operating in parallel for application AMBER [23]. Very large initial times corresponding to very small core counts have been truncated for better graphical comparison. Best fitting the data by Eq. (6) yields parameters b and c (implicitly also \(f_s\)) where the original data are reasonably well approximated (solid line in cyan). In addition, an estimate can be provided for the parallel overhead using Eq. (5) (solid orange line). The estimate matches the mpiP-derived [17] mean parallel overhead rather well (compare orange line to the green dots with error bars, respectively, Table 3, columns 3–4). Significant deviation from Amdahl’s Law is seen already for small core counts (compare cyan to grey line) (Color figure online)

Fig. 5

Recorded times to solution, \(t_n\), (red squares, also see Table 4, columns 1–2) as a function of numbers of cores, n, operating in parallel for an application developed in-house [27, 28]. Very large initial times corresponding to very small core counts have been truncated for better graphical comparison. Best fitting the data by Eq. (6) yields parameters b and c (implicitly also \(f_s\)) where the original data are reasonably well approximated (solid line in cyan). In addition, an estimate can be provided for the parallel overhead using Eq. (5) (solid orange line). The estimate matches the mpiP-derived [17] mean parallel overhead fairly well for larger core counts (compare orange line to the red squares with error bars, respectively, Table 4, columns 3–4). Significant deviation from Amdahl’s Law is seen already for small core counts (compare cyan to grey line) (Color figure online)

Fig. 6

Recorded times to solution, \(t_n\), (bright green pentagons, also see Table 5, columns 1–2) as a function of numbers of cores, n, operating in parallel for application VASP [24]. Very large initial times corresponding to very small core counts have been truncated for better graphical comparison. Best fitting the data by Eq. (6) yields parameters b and c (implicitly also \(f_s\)) where the original data are reasonably well approximated (solid line in cyan). In addition, an estimate can be provided for the parallel overhead using Eq. (5) (solid orange line). The estimate matches the allinea/MAP-derived [18] parallel overhead fairly well (compare orange line to the open pentagons in bright green, respectively, Table 5, column 3). Significant deviation from Amdahl’s Law is seen already for small core counts (compare cyan to grey line) (Color figure online)

Fig. 7

Recorded times to solution, \(t_n\), (golden diamonds, also see Table 6, columns 1–2) as a function of numbers of cores, n, operating in parallel for application QUANTUM ESPRESSO [25]. Very large initial times corresponding to very small core counts have been truncated for better graphical comparison. Best fitting the data by Eq. (6) yields parameters b and c (implicitly also \(f_s\)) where the original data are reasonably well approximated (solid line in cyan). In addition, an estimate can be provided for the parallel overhead using Eq. (5) (solid orange line). The estimate matches the allinea/MAP-derived [18] parallel overhead fairly well (compare orange line to the open diamonds in gold, respectively, Table 6, column 3). Significant deviation from Amdahl’s Law is seen already for small core counts (compare cyan to grey line) (Color figure online)

Fig. 8

Recorded times to solution, \(t_n\), (3/4 filled discs in bright blue, also see Table 7, columns 1–2) as a function of numbers of cores, n, operating in parallel for application LAMMPS [26]. Very large initial times corresponding to very small core counts have been truncated for better graphical comparison. Best fitting the data by Eq. (6) yields parameters b and c (implicitly also \(f_s\)) where the original data are reasonably well approximated (solid line in cyan). In addition, an estimate can be provided for the parallel overhead using Eq. (5) (solid orange line). The estimate matches the allinea/MAP-derived [18] parallel overhead fairly well (compare orange line to the 1/4 filled discs in bright blue, respectively, Table 7, column 3). Significant deviation from Amdahl’s Law is seen already for small core counts (compare cyan to grey line) (Color figure online)

Applying Eq. (6) to exactly such a simple record of execution times, \(t_n\), for varying numbers of cores, n, yields the parameters b and c, which in turn can be plugged into Eq. (5) to provide approximate estimates of the corresponding parallel overhead, \(\tau _n\). The latter is of fundamental interest, both for further development and for practical deployment under optimal run-time conditions. An example of such an approach is given in Fig. 2. The application considered was HPL [21], and the underlying data are collected in columns 1–2 of Table 1. Experimental run-times (brown triangles) are reproduced fairly well by a fit using Eq. (6). The resulting curve is shown as the cyan line in Fig. 2. Parameters b and c obtained from the fit are then applied in Eq. (5) to determine an estimate of the parallel overhead, and the corresponding graph is shown as the orange line in Fig. 2. Since in this particular case experimentally derived values for \(\tau _n\) are available (Table 1, columns 3–4), a direct comparison can be made between calculated and measured results (compare the brown triangles with error bars to the orange curve in Fig. 2). Apart from an initial region of general uncertainty (note the size of the error bars at small numbers of cores), the agreement between predicted and experimental values is rather good. A consequence of all of this is a significant deviation from Amdahl’s Law [32,33,34,35] starting already at modest numbers of cores (compare the grey line with the cyan curve in Fig. 2).

Additional tests were carried out for the remaining applications, and the corresponding results are graphically summarized in Figs. 3, 4, 5, 6, 7, and 8. It should be noted that the scale on both axes had to be changed considerably between different applications in order to emphasize their specific characteristics in terms of scaling and overhead times. From this it also becomes clear that the approach is rather general and can be applied to a wide range of diverse applications in identical fashion. As can be seen from Figs. 3, 4, 5, 6, 7, and 8, the general picture remains the same for all applications considered. However, notable specific differences arise upon closer examination. For example, GROMACS [22] exhibits a zig-zag-like pattern in its execution times that is closely paralleled by the overhead times (see Fig. 3). This may indicate a restricted ability to split the problem into parallel tasks of arbitrary size. Obviously, fitting such a data set can only deliver a best compromise for \(\tau _n\). In contrast, the general evolution of the AMBER sample [23] appears to be smooth (Fig. 4). As in all the other examples, it is interesting to see how quickly \(\tau _n\) becomes the dominant factor and how steadily the standard deviations of \(\tau _n\) decrease with increasing numbers of cores.

The immediate impression of the in-house developed code [27, 28] is that it is certainly the least optimized application considered here (Fig. 5). However, it is still interesting to observe that the proposed method for predicting parallel overhead remains applicable even in such cases. Here a saturation domain is reached quickly because of a strongly rising parallel overhead. Standard deviations of \(\tau _n\) are remarkably small. Owing to the implemented primary/secondary task communication model, standard deviations become meaningful only for runs involving more than two tasks (see Table 4, fourth column). Moreover, averages and related properties comprise only the group of secondary tasks of size \(n-1\).

Smooth trends are seen in the VASP sample [24], with \(\tau _n\) again quickly becoming the determining factor (Fig. 6). In contrast, a rather pronounced inversion in the evolution of \(t_n\) is observed in both of the final two samples, QUANTUM ESPRESSO [25] and LAMMPS [26] (Figs. 7, 8). Interestingly, the fitted curves still lead to reasonably good approximations of \(\tau _n\), demonstrating the versatility and broad applicability of the approach.

3.3 Fitting with GNUPLOT

Care must be taken when fitting the data sets, and the following remarks may prove useful when reproducing our results. All our fits have been obtained with the help of the package GNUPLOT [38]. Two cases may be distinguished depending on whether or not the serial fraction, \(f_s\), is known in detail. In the majority of cases \(f_s\) is not known and cannot be determined accurately in a quick and straightforward way. However, treating it as another entirely free parameter is also discouraged because over-fitting may rapidly drive it to a negative value. A working procedure includes the following steps,

  • define an explicit value for \(f_s\) (either known or guessed)

  • fit the data using Eq. (6) and derive parameters b and c

  • graphically check the quality of the fit and aim at asymptotic standard errors in the range of 10–30%

  • make sure that \(c>b\) and try to have \(b+\Delta b \approx c\), where \(\Delta b\) is the reported asymptotic standard error

  • incrementally decrease \(f_s\) and repeat the above steps until an optimal fit is obtained

In so doing, the formerly unknown value of \(f_s\) can be obtained as a by-product. It should be pointed out that it was occasionally necessary to drop a couple of very large initial data points at small values of n in order to obtain a reasonable approximation in the limit of large n. Graphical control was the most important guiding principle throughout the fitting process.
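As a minimal sketch of this procedure, the following GNUPLOT script illustrates one iteration of the above steps; the data file name (times.dat), the single-core time, the starting value of \(f_s\) and the initial guesses for b and c are hypothetical placeholders that have to be adapted to the data set at hand.

  # one iteration of the fitting procedure (placeholder values throughout)
  t1 = 3600.0                                   # single-core execution time from the record
  fs = 0.02                                     # assumed serial fraction, kept fixed during the fit
  amdahl(n) = fs*t1 + (1.0 - fs)*t1/n           # Eq. (4)
  model(n)  = amdahl(n)*(1.0 + b*(n - 1.0)/((1.0 + c - b)*n + (b + c + c**2)))   # Eq. (6)
  tau(n)    = amdahl(n)*b*(n - 1.0)/((1.0 + c - b)*n + (b + c + c**2))           # Eq. (5)
  b = 0.5; c = 1.0                              # initial guesses for the two free parameters
  fit model(x) 'times.dat' using 1:2 via b, c   # prints asymptotic standard errors;
                                                # require c > b and b plus its error close to c
  plot 'times.dat' using 1:2 title 'measured t_n', model(x) title 'fit, Eq. (6)', \
       tau(x) title 'estimated overhead, Eq. (5)', amdahl(x) title 'Amdahl, Eq. (4)'
  # decrease fs and repeat the fit (e.g. with 'refit') until an optimal fit is obtained

The asymptotic standard errors reported by the fit command serve as the acceptance criterion described above, while the plot command provides the graphical control over the quality of the fit.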

4 Conclusion

A simple procedure is presented that allows an approximate estimation of parallel overhead based solely on run-time records. The method exhibits a broad range of applicability, covering well-optimized applications as well as less advanced implementations where code optimization is still in progress (compare, for example, Fig. 2 with Fig. 5). Asymptotic limits show a rather smooth trend and thus facilitate reasonable approximations in the limit of large n. The specifics of a particular HPC installation do not seem to play a significant role, since two entirely different systems were employed here and led to similar conclusions.