1 Introduction

Optimisation problems with conflicting objectives are encountered in many practical applications, e.g. in engineering, and multi-criteria optimisation (MCO) is an appropriate way to deal with them. In MCO, the Pareto set (PS) is determined, which is the set of the best possible compromises between the objectives (Pareto 2014). For any point on the PS, an improvement in one objective is only possible at the cost of a decline in at least one other objective; hence, the PS represents the optimal solutions of the MCO problem. For a detailed mathematical discussion, the reader is referred to Ehrgott (2005). In practical MCO problems, typically only one solution can be realised, which should be chosen from the PS. In MCO, not only the PS is determined, but also its picture in the parameter space (PSPS). Having selected a point on the PS, the PSPS shows which parameters are needed to reach that point. Determining the PS is far from trivial, as a brute-force enumeration of the parameter space is usually infeasible in practice. Hence, dedicated methods for determining the PS have been developed, see e.g. Logist et al. (2010), Hernandez (2012), which, however, usually still require a large number of evaluations of the objective functions. The present work deals with the topology of low-dimensional PS, as they are encountered in many practical problems. The results enable swift estimations of the PS in these cases.

MCO has been used in many fields of engineering, e.g. the design of energy systems (Shirazi et al. 2014; Najafi et al. 2014), semiconductors (Ganesan et al. 2015) and chemical processes (Clark and Westerberg 1983; Bhaskar et al. 2000; Rangaiah and Petriciolet 2013; Höller et al. 2019; von Kurnatowski et al. 2017; Bortz et al. 2014). In the field of thermodynamics, to give just one example, MCO has been used before for modelling pure compounds and mixtures (Kulkarni et al. 2020; Stöbener et al. 2014, 2016; Werth et al. 2015; Kohns et al. 2016) with equations of state (Rehner and Gross 2020; Graham 2020; Forte et al. 2017) and excess Gibbs energy models (Forte et al. 2020). In such applications, both the number of objectives N and the number of parameters M are usually low, with typical values below five for both. However, if the evaluation of the objective functions is costly, such as in the development of molecular models, the determination of the PS may pose a severe problem even for such low-dimensional MCO problems.

In our previous work on MCO in thermodynamics (Kulkarni et al. 2020; Stöbener et al. 2014; Kohns et al. 2016), we have observed characteristic topologies of the PSPS and the PS, which depend on N and M, but not on the specific problem that is investigated. A sketch of these topologies for some cases is shown in Fig. 1.

Fig. 1

Sketches of typical topologies of the Pareto set (PS, right panels) in the objective space and their pictures in the parameter space (PSPS, left panels) for low dimensional MCO problems with N objectives and M parameters as observed in previous studies of our group (Kulkarni et al. 2020; Stöbener et al. 2014; Kohns et al. 2016; Forte et al. 2020). The stars indicate the extreme compromises, i.e. the global minima of the N individual objective functions

For \(N=2\), the PS as well as the PSPS show two distinct branches, which are both almost straight lines. The region near the intersection of the branches is the so-called Pareto knee region, where the values of both objectives are typically low, while the branches far from the Pareto knee region have a high value of one objective and a low value of the other. For \(N=3\), the PSPS shows triangles in the centre, with needle-like extensions from the vertices, which are, however, not always observed. In this case, the central triangular region is the Pareto knee.

It has been shown previously (Kulkarni et al. 2020; Stöbener et al. 2014; Kohns et al. 2016) that these characteristic topologies are closely related to properties of the individual objective functions of the MCO problem. However, the inner workings of that relation have not been unravelled yet, and the conditions under which these characteristic topologies occur are still unknown. The starting point of the present work was therefore the wish to clarify these issues. The central hypothesis behind the approach was that the characteristic topological features are observed if the topologies of the individual objective functions meet certain requirements, and the aim was to elucidate the corresponding conditions.

The results published by Augusto et al. (2014) turned out to be a key for tackling this challenge. Augusto et al. (2014) have derived analytical expressions for the PSPS for arbitrary M and N for the case that all objective functions are quadratic. The topologies sketched in Fig. 1 can be obtained from their results if certain conditions are met. Basically, the lines in Fig. 1 correspond to valleys in the contours of the individual objective functions. Mathematically, the valleys can be related to eigenvalues and eigenvectors of the Hessians of the individual objective functions at their minima (stars in Fig. 1). From this, mathematical conditions can be derived which, if met, lead to the occurrence of the characteristic topologies.

Such valley-like topologies are found in the objective functions of many practical MCO problems. For example, we have observed such topologies in all of our previous studies on MCO in the field of thermodynamics, independent of the type of thermodynamic model that was used, i.e. for molecular models (Kulkarni et al. 2020; Stöbener et al. 2014; Kohns et al. 2016), equations of state (Forte et al. 2018), as well as for models of the Gibbs excess energy (Forte et al. 2020). Also, in MCO problems from various other fields, e.g. process design (Bortz et al. 2014, 2017; Burger et al. 2014), energy systems engineering (Chiu et al. 2019), and quantitative spectroscopy (Matviychuk et al. 2020), PS were found that indicate valley-like structures of the objective functions. We refer here to the existence of so-called “Pareto knees” in the PS (Branke et al. 2004), which may be interpreted as the result of valleys in the topology that intersect each other. If these valleys are steep, the resulting Pareto knee will be sharp.

Hence, there is good reason to believe that what we are studying in the present work is not just some special case, but one which is of high practical interest. The insights gained in our study explain not only empirical observations from the literature (e.g. regarding the Pareto knees), they can also be used for designing new methods for determining PS and the PSPS in practical problems. We also specify the conditions for which the interesting and practically relevant type of behaviour studied here has to be expected.

To illustrate our findings, we discuss a practical MCO problem: the development of molecular models of water, where the aim is to describe different thermodynamic properties well, viz. the vapour pressure and the saturated liquid density. The objectives are conflicting, as the considered class of water models (which is the most common one) is fairly simple, so that not all properties of water can be modelled simultaneously with high accuracy. Therefore, compromises have to be made, and it is highly desirable to know the PS, which is the set of the best compromises. We show that for this MCO problem, the valley-like topologies occur and the mathematical conditions that characterise them are met. We also use the example to discuss how the insights from the present work can be used for estimating the Pareto set of the MCO problem, based on an analysis of the individual objective functions. We note that the MCO problems studied here contain no constraints, neither for the objectives nor for the parameters. Considering the influence of adding constraints was out of the scope of the present study, especially as it would have required individual considerations for different types of constraints (e.g. constraints on the objectives and on the parameters, respectively).

This paper is organised as follows: Sect. 2 presents a mathematical analysis of properties of Pareto sets in the case of quadratic objective functions, including a discussion of constructing estimates of these sets based on properties of the individual objective functions. In Sect. 3, we discuss how these findings can be applied to systems for which the objective functions are not quadratic, but can be approximated by quadratic functions. The results are then illustrated by two types of examples in Sect. 4: the first is a study on systems with synthetic quadratic objective functions and the second is a practical MCO problem from thermodynamics, the optimisation of a molecular model of water. Finally, conclusions are drawn in Sect. 5.

2 MCO with quadratic objective functions

2.1 Quadratic objective functions

In general, a quadratic objective function \(f_i\) in an M-dimensional parameter space can be written as

$$\begin{aligned} f_{i}({\mathbf {x}})&=({\mathbf {x}}-{\mathbf {x}}_{i})^\intercal {\mathbf {H}}_i({\mathbf {x}}-{\mathbf {x}}_i)\\ {\mathbf {H}}_i&=\begin{pmatrix} h_{11,i} & \ldots & h_{1M,i}\\ \vdots & \ddots & \vdots \\ h_{M1,i} & \ldots & h_{MM,i} \end{pmatrix} \end{aligned}$$
(1)

where \(i=1,\ldots ,N\), and N is the dimensionality of the objective space. The vector \({\mathbf {x}}\in {\mathbb {R}}^M\) is a point in the parameter space, \({\mathbf {x}}_i\in {\mathbb {R}}^M\) indicates the coordinates of the minimum of \(f_{i}\), and \({\mathbf {H}}_i\) is the Hessian matrix of \(f_{i}\). The Hessian matrix \({\mathbf {H}}_i\) is a symmetric \(M\times M\) matrix, i.e. \(h_{jk,i}=h_{kj,i}\) for all \(j\ne k\).
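For concreteness, Eq. (1) can be evaluated directly in code. The following sketch uses hypothetical values for \({\mathbf {H}}_i\) and \({\mathbf {x}}_i\) (not taken from any example in this paper):

```python
import numpy as np

def quadratic_objective(x, x_min, H):
    """Evaluate f_i(x) = (x - x_i)^T H_i (x - x_i) as in Eq. (1)."""
    d = np.asarray(x, dtype=float) - np.asarray(x_min, dtype=float)
    return float(d @ H @ d)

# Hypothetical example with M = 2: minimum at the origin,
# diagonal Hessian with entries 1 and 10.
H = np.diag([1.0, 10.0])
print(quadratic_objective([2.0, 0.0], [0.0, 0.0], H))  # 4.0
print(quadratic_objective([0.0, 0.0], [0.0, 0.0], H))  # 0.0 at the minimum
```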

2.2 Method of Augusto et al.

Augusto et al. (2014) have described an analytical method for determining the PS for an MCO problem containing an arbitrary number of quadratic objective functions of the form of Eq. (1). The full method, which works for any M and N, is described in the Appendix. Here, we introduce it using the simple but vivid case \(M=N=2\) for illustration.

Let the two objective functions be \(f_\text {A}({\mathbf {x}})\) and \(f_\text {B}({\mathbf {x}})\), their Hessian matrices \({\mathbf {A}}\) and \({\mathbf {B}}\), and their minima \({\mathbf {x}}_\text {A}\) and \({\mathbf {x}}_\text {B}\). \(f_\text {A}\) can be written as

$$\begin{aligned} f_\text {A}({\mathbf {x}})&=({\mathbf {x}}-{\mathbf {x}}_\text {A})^\intercal {\mathbf {A}}({\mathbf {x}}-{\mathbf {x}}_\text {A})\\ {\mathbf {A}}&=\begin{pmatrix} a_{11} & a_\text {nd}\\ a_\text {nd} & a_{22} \end{pmatrix} \end{aligned}$$
(2)

where \({\mathbf {x}},{\mathbf {x}}_\text {A}\in {\mathbb {R}}^2\). The Hessian matrix \({\mathbf {A}}\) is symmetric, and hence the two non-diagonal elements are equal: \(a_{12}=a_{21}=a_\text {nd}\).

In principle, the Hessian \({\mathbf {A}}\) as well as the corresponding minimum \({\mathbf {x}}_\text {A}\) are arbitrary. However, without loss of generality, a coordinate transformation can be applied such that (i) \({\mathbf {x}}_\text {A}\) lies at the origin, and (ii) the eigenvectors of \({\mathbf {A}}\) are parallel to the coordinate axes, so that the non-diagonal elements of \({\mathbf {A}}\) vanish. Additionally, for the purpose of the present discussion, one of the eigenvalues of \({\mathbf {A}}\) is scaled to 1, which does not impact the topology of the objective function. Denoting the other eigenvalue by \(\xi\), \({\mathbf {A}}\) can then be written as

$$\begin{aligned} {\mathbf {A}}=\begin{pmatrix} 1 & 0\\ 0 & \xi \end{pmatrix} \end{aligned}$$
(3)

Varying the parameter \(\xi\) yields different ratios of eigenvalues. A valley-like topology of \(f_\text {A}\) is obtained if either \(\xi \gg 1\) or \(\xi \ll 1\). We will discuss the former case, for which the valley is oriented in the \(x_1\)-direction.

For two quadratic objective functions, following Augusto et al. (2014) (Eq. (27) in their paper), any point \({\mathbf {x}}^*\) on the PSPS can be calculated as follows:

$$\begin{aligned}&{\mathbf {x}}^*=(w_\text {A}{\mathbf {A}} +w_\text {B}{\mathbf {B}})^{-1}(w_\text {A}{\mathbf {A}}{\mathbf {x}}_\text {A} +w_\text {B}{\mathbf {B}}{\mathbf {x}}_\text {B}) \end{aligned}$$
(4)
$$\begin{aligned}&w_\text {A}+w_\text {B}=1 \end{aligned}$$
(5)

Here, \(w_\text {A}\) and \(w_\text {B}\) are the weights corresponding to the objective functions \(f_\text {A}({\mathbf {x}})\) and \(f_\text {B}({\mathbf {x}})\) respectively. Each choice of the weights \((w_\text {A},w_\text {B})\) gives a unique point \({\mathbf {x}}^*\) of the PSPS, so that the evaluation of Eq. (4) as a function of \(w_\text {A}\) yields the entire PSPS. The PS is then found by evaluating the objective functions \(f_\text {A}\) and \(f_\text {B}\) for all values of \({\mathbf {x}}^*\). When \(w_\text {A}\approx 1\), the point \({\mathbf {x}}^*\) is close to \({\mathbf {x}}_\text {A}\) (the coordinates of the minimum of \(f_\text {A}({\mathbf {x}})\)), and when \(w_\text {B}\approx 1\), the point \({\mathbf {x}}^*\) is close to \({\mathbf {x}}_\text {B}\) (the coordinates of the minimum of \(f_\text {B}({\mathbf {x}})\)).
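Eqs. (4) and (5) are straightforward to implement. The sketch below uses hypothetical positive definite Hessians \({\mathbf {A}}\), \({\mathbf {B}}\) and minima \({\mathbf {x}}_\text {A}\), \({\mathbf {x}}_\text {B}\) chosen for illustration only:

```python
import numpy as np

def psps_point(w_A, A, B, x_A, x_B):
    """Eq. (4): the PSPS point for weights (w_A, w_B), with w_B = 1 - w_A
    from Eq. (5)."""
    w_B = 1.0 - w_A
    return np.linalg.solve(w_A * A + w_B * B, w_A * A @ x_A + w_B * B @ x_B)

# Hypothetical positive definite Hessians and minima (M = N = 2).
A = np.diag([1.0, 10.0])
x_A = np.array([0.0, 0.0])
B = np.array([[5.0, 2.0],
              [2.0, 6.0]])
x_B = np.array([10.0, -5.0])

# Sweeping w_A over (0, 1) traces the entire PSPS; the endpoints
# recover the minima x_A (w_A -> 1) and x_B (w_A -> 0).
psps = np.array([psps_point(w, A, B, x_A, x_B)
                 for w in np.linspace(1e-6, 1.0 - 1e-6, 201)])
```

Evaluating \(f_\text {A}\) and \(f_\text {B}\) at each row of `psps` then yields the PS.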

2.3 Analysis

In the present section, the results of Augusto et al. (2014) are used for an analysis of the topology of the PSPS, and, as a consequence, also of the PS. For simplicity and clarity, we continue to discuss the case \(M=N=2\), but emphasise that the argument can be generalised, as shown in the Appendix.

Equation (4) can be rearranged to give:

$$\begin{aligned} {\mathbf {x}}^*=\bigg [\Big ({\mathbf {I}} +\frac{w_\text {B}}{w_\text {A}}{\mathbf {A}}^{-1}{\mathbf {B}}\Big )^{-1} (w_\text {A}{\mathbf {A}})^{-1}\bigg ]w_\text {B}{\mathbf {B}}{\mathbf {x}}_\text {B} \end{aligned}$$
(6)

where \({\mathbf {I}}\) is the unit matrix of dimension \(M\times M\).

Let us consider a point \({\mathbf {x}}^*\) in the vicinity of \({\mathbf {x}}_\text {A}\), so that \(w_\text {A}\gg w_\text {B}\). Applying a Taylor series expansion in \(\frac{w_\text {B}}{w_\text {A}}\) to the first term in brackets in Eq. (6) yields the following approximation:

$$\begin{aligned} {\mathbf {x}}^*=\bigg (\frac{w_\text {B}}{w_\text {A}}\bigg ) {\mathbf {A}}^{-1}{\mathbf {B}}{\mathbf {x}}_\text {B}-\bigg (\frac{w_\text {B}}{w_\text {A}}\bigg )^2 ({\mathbf {A}}^{-1}{\mathbf {B}})^2{\mathbf {x}}_\text {B} +{\mathcal {O}}\Bigg [\bigg (\frac{w_\text {B}}{w_\text {A}}\bigg )^3\Bigg ] \end{aligned}$$
(7)

In the following, we only keep the leading term, i.e.,

$$\begin{aligned} {\mathbf {x}}^*=\frac{w_\text {B}}{w_\text {A}}{\mathbf {A}}^{-1}{\mathbf {B}}{\mathbf {x}}_\text {B} +{\mathcal {O}}\Bigg [\bigg (\frac{w_\text {B}}{w_\text {A}}\bigg )^2\Bigg ] \end{aligned}$$
(8)

Substituting the values for \({\mathbf {x}}_\text {B}=[x_\text {1,B},x_\text {2,B}]^\intercal\) and the matrices \({\mathbf {B}}\) and \({\mathbf {A}}^{-1}\) yields:

$$\begin{aligned} {\mathbf {x}}^*=\begin{pmatrix} \frac{w_\text {B}}{w_\text {A}}(b_{11}x_{\text {1,B}}+b_\text {nd}x_{\text {2,B}})\\ \frac{w_\text {B}}{w_\text {A}}\frac{1}{\xi }(b_\text {nd}x_{\text {1,B}}+b_{22}x_{\text {2,B}}) \end{pmatrix} \end{aligned}$$
(9)

Note that the coordinate transformation was carried out such that the minimum of \(f_\text {A}\) is shifted to the origin and \({\mathbf {A}}\) becomes a diagonal matrix, while the minimum of \(f_\text {B}\) is generally at some other position \({\mathbf {x}}_\text {B}=(x_\text {1,B},x_\text {2,B})^\intercal\) and \({\mathbf {B}}\) is generally not a diagonal matrix, i.e. \(b_\text {nd}\ne 0\). Equation (9) shows directly that for \(\frac{w_\text {B}}{w_\text {A}}\ll 1\), the PSPS approaches the origin, i.e. the coordinates of the minimum of \(f_\text {A}\).

If \(\xi \gg 1\), Eq. (9) can be approximated by:

$$\begin{aligned} {\mathbf {x}}^*=\begin{pmatrix} \frac{w_\text {B}}{w_\text {A}}(b_{11}x_{\text {1,B}}+b_\text {nd}x_{\text {2,B}})\\ {\mathcal {O}}(\frac{w_\text {B}}{w_\text {A}}\frac{1}{\xi }) \end{pmatrix} \end{aligned}$$
(10)

Hence, for \(\xi \gg 1\), the \(x_2\)-component of \({\mathbf {x}}^*\) is small, i.e. the PSPS near the minimum of \(f_\text {A}\) lies close to the \(x_1\)-axis.

The eigenvalues of \({\mathbf {A}}\) are 1 and \(\xi\), and the eigenvector belonging to the eigenvalue 1 points in the \(x_1\)-direction. Hence, Eq. (10) can be interpreted as follows: for \(\xi \gg 1\), the minimum of \(f_\text {A}\) together with the eigenvector corresponding to the smaller eigenvalue of \({\mathbf {A}}\) define a linear approximation of the PSPS in the vicinity of the minimum of \(f_\text {A}\). The geometric interpretation of this finding in the \(f_\text {A}(x_1,x_2)\) space is that for \(\xi \gg 1\), the minimum of \(f_\text {A}\) lies at the bottom of a valley which runs in the \(x_1\)-direction (the direction of the eigenvector corresponding to the eigenvalue 1). The deviations of the PSPS from that linear approximation increase with increasing distance from the minimum of \(f_\text {A}\) (i.e. with increasing \(\frac{w_\text {B}}{w_\text {A}}\)) and with lower ratios of the eigenvalues \(\xi\). The same arguments can be applied to \(f_\text {B}\).
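The quality of the leading-order term in Eq. (8) relative to the exact Eq. (4) (with \({\mathbf {x}}_\text {A}\) at the origin) can be checked numerically; the matrices below are hypothetical illustrations, not values from this paper:

```python
import numpy as np

A = np.diag([1.0, 100.0])                # f_A after the transformation
B = np.array([[5.0, 2.0], [2.0, 6.0]])   # hypothetical Hessian of f_B
x_B = np.array([10.0, -5.0])

def x_exact(w_A):
    """Eq. (4) with x_A = 0."""
    w_B = 1.0 - w_A
    return np.linalg.solve(w_A * A + w_B * B, w_B * B @ x_B)

def x_leading(w_A):
    """Leading term of Eq. (8)."""
    w_B = 1.0 - w_A
    return (w_B / w_A) * np.linalg.solve(A, B @ x_B)

# The error of the leading-order term shrinks roughly like (w_B/w_A)^2
# as w_A approaches 1.
errors = [np.linalg.norm(x_exact(w) - x_leading(w)) for w in (0.9, 0.99, 0.999)]
```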

The findings that were discussed above for the case \(M=N=2\) can be generalised to arbitrary values of M and N. For each objective \(f_i\text { }(i=1,\ldots ,N)\), an analysis like the one described above can be carried out. If one of the eigenvalues of the \(M\times M\) Hessian matrix \({\mathbf {H}}_i\) defining \(f_i\) (see Eq. (1)) is much smaller than all the other eigenvalues, then the minimum of \(f_i\) together with the eigenvector corresponding to that eigenvalue define a linear approximation of the PSPS in the vicinity of the minimum. The corresponding equations for the general case are given in the Appendix.

2.4 Estimation of the Pareto set

We now consider the case of quadratic objective functions and assume that for each objective function, one eigenvalue of the Hessian matrix is much smaller than the others, i.e. the objective functions have valley-like topologies. It then follows from Sect. 2.3 that estimates for the Pareto set of the MCO problem in the vicinity of the minima can be obtained by considering only the valleys of the individual objective functions \(f_i\text { }(i=1,\ldots ,N)\). We describe two alternative ways for achieving this.

(a) Linear approximation

The linear approximation consists of the following three steps, to be carried out for each of the \(i=1,\ldots ,N\) objectives:

1. Find the minimum \({\mathbf {x}}_i\) of \(f_i\).

2. Find the eigenvector \({\mathbf {u}}_i\) belonging to the smallest eigenvalue of the matrix \({\mathbf {H}}_i\).

3. Combine the results from (1) and (2) to obtain a linear approximation of the PSPS near \({\mathbf {x}}_i\).

This process yields N straight lines that can be combined to get an estimate of the PSPS. From this, the PS can be found by evaluating the objective functions of the MCO problem at each point on the PSPS. The way these individual lines have to be combined to form the estimate of the PSPS depends on M and N.
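The three steps can be sketched in code. The eigendecomposition is done with `numpy.linalg.eigh`, and the minimum is assumed to be known already (for a quadratic objective it is simply \({\mathbf {x}}_i\)); the Hessian below is a hypothetical example:

```python
import numpy as np

def linear_psps_approximation(x_min, H, t_values):
    """Steps (1)-(3): a line through the minimum x_i along the eigenvector
    u_i of H_i belonging to the smallest eigenvalue."""
    eigenvalues, eigenvectors = np.linalg.eigh(H)  # eigh sorts ascending
    u = eigenvectors[:, 0]                         # smallest-eigenvalue direction
    return x_min + np.outer(t_values, u)           # points x_i + t * u_i

# Hypothetical valley along the x1-axis: eigenvalues 1 and 100.
H = np.diag([1.0, 100.0])
line = linear_psps_approximation(np.zeros(2), H, np.linspace(-5.0, 5.0, 11))
# All points of this line lie on the x1-axis, as predicted by Eq. (10).
```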

We will discuss this by referring to Fig. 1. Let us consider the first case, \(M=N=2\): there are two lines, which will in general intersect. The intersection point in the objective space is known as the Pareto knee. The arguments given here explain why very sharp Pareto knees are often found. Not the entire lines belong to the PSPS, but only their respective sections between the minima and the Pareto knee.

For \(M=2\) and \(N=3\), there are three lines in the two-dimensional parameter space. Not only these three lines belong to the PSPS, but also the triangular region between these lines (see Fig. 1). The picture of this triangular region in the objective space corresponds to the Pareto knee. We will refer to this area connecting the lines (both in the PS as well as the PSPS) as the Pareto knee region.

For the case considered here (\(M=2\), \(N=3\)), the area between the lines in the PSPS is in general triangular and easy to construct. For other cases, defining this area is not so straightforward; e.g. for \(M=3\) and \(N=3\), there are three lines which, however, will in general not intersect. It was not in the scope of the present work to establish a general theory on how to construct the Pareto knee region from the knowledge of the lines. Intuitively, the Pareto knee region is expected to be found where the lines are close to each other, and how to construct it will depend on the quality of the linear approximation of the PSPS in the regions where the lines are close.

The linear approximation of the PSPS will only be good near the minima \({\mathbf {x}}_i\), while deviations will occur far from the minima. The size of these deviations depends on the details of the problem, including the distance of the different minima \({\mathbf {x}}_i\text { }(i=1,\ldots ,N)\) from each other. It would be possible to get better predictions by taking into account higher order terms in the Taylor series expansion mentioned above, but this would require the knowledge of derivatives of the objective function up to an order higher than two, which may be difficult to obtain in practical problems. The idea of the valley-like topology of the objective functions enables devising a different strategy instead, which is called valley approximation in the following.

(b) Valley approximation

The valley approximation is a modification of the linear approximation, in which step 2 of the scheme shown above is replaced by a different scheme for finding the lines that start at the minima \({\mathbf {x}}_i\) of the objective functions \(f_i\). The geometrical picture of an objective function for which one of the eigenvalues is much smaller than the others is a valley-like topology. The lowest point of the valley is the minimum of \(f_i\), located at \({\mathbf {x}}_i\). In the vicinity of \({\mathbf {x}}_i\), the valley is well approximated by the linear approximation described above under (a). However, with increasing distance from \({\mathbf {x}}_i\), deviations between the valley of \(f_i\) and the linear approximation will occur. We therefore suggest using the bottom line of the valley as an approximation of the PSPS, rather than extending the linear approximation to large distances from \({\mathbf {x}}_i\). We refrain from trying to give a formal proof that this is an improvement over the linear approximation, but will discuss some examples below that confirm this expectation.
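One simple way to trace the bottom line of a valley numerically, assuming the valley runs roughly along the \(x_1\)-axis, is to minimise the objective over the transverse coordinate for a sequence of cross sections. The sketch below uses a plain ternary search as the 1-D minimiser, a hypothetical choice made only for illustration:

```python
import numpy as np

def minimise_1d(g, lo=-50.0, hi=50.0, iters=200):
    """Ternary search for the minimiser of a unimodal 1-D function g."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def valley_bottom(f, x1_values):
    """For each cross section x1 = const, minimise f over x2; the
    resulting points approximate the bottom line of the valley."""
    return np.array([[x1, minimise_1d(lambda x2: f(np.array([x1, x2])))]
                     for x1 in x1_values])

# For a quadratic valley the bottom line is straight and coincides with the
# linear approximation; for a non-quadratic objective it may bend.
H = np.diag([1.0, 100.0])  # hypothetical steep valley along the x1-axis
bottom = valley_bottom(lambda x: x @ H @ x, np.linspace(-5.0, 5.0, 11))
```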

3 Application to non-quadratic MCO problems

The topologies shown in Fig. 1 were observed in practical MCO problems from chemical engineering, and they are fully in line with the theory developed in the previous section for the quadratic objective functions. In particular, the theory explains the occurrence of the different cases shown in Fig. 1. For \(N>2\), needle-like extensions of the PSPS are only found if the minima of the objective functions \(f_i\) lie outside the Pareto knee region.

This suggests that the following holds for the objective functions \(f_i\) in the underlying MCO problem:

(i) The \(f_i\) can be approximated reasonably well by quadratic functions.

(ii) One of the eigenvalues of the Hessian matrix of \(f_i\), evaluated at the minimum of \(f_i\), is much smaller than the other eigenvalues.

If these conditions are met, it is expected that (a) the linear approximation as well as, alternatively, (b) the valley approximation can be used for obtaining estimates of the PSPS and, accordingly, of the PS, as described in Sect. 2.4. Furthermore, for real MCO problems, there is a further option:

(c) Quadratic approximation

In the first step, for each objective function \(f_i\), a quadratic approximation around its minimum is constructed. Then, the method of Augusto et al. (2014) is applied to these quadratic functions, so that an estimate of the PSPS is obtained.

Near the minimum, the quadratic approximation yields the same results as the linear approximation, but in the region of the Pareto knee, it may deviate strongly from the linear approximation. Furthermore, the linear approximation as well as the valley approximation yield only curves, while the quadratic approximation can yield objects of higher dimensions, which is important in the Pareto knee region.
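If only function values of the objective are available, the Hessian at the minimum can be estimated by central finite differences. The sketch below is a generic implementation; the step size and the test function are hypothetical. Note that for \(f=({\mathbf {x}}-{\mathbf {x}}_0)^\intercal {\mathbf {A}}({\mathbf {x}}-{\mathbf {x}}_0)\) the true Hessian is \(2{\mathbf {A}}\); the factor 2 changes neither the eigenvectors nor the eigenvalue ratios that matter for the present analysis.

```python
import numpy as np

def numerical_hessian(f, x0, h=1e-4):
    """Central-difference estimate of the Hessian of f at x0."""
    M = len(x0)
    H = np.zeros((M, M))
    E = np.eye(M) * h
    for j in range(M):
        for k in range(M):
            H[j, k] = (f(x0 + E[j] + E[k]) - f(x0 + E[j] - E[k])
                       - f(x0 - E[j] + E[k]) + f(x0 - E[j] - E[k])) / (4.0 * h * h)
    return 0.5 * (H + H.T)  # symmetrise against round-off

A = np.array([[1.0, 0.3],
              [0.3, 10.0]])  # hypothetical quadratic objective
H_est = numerical_hessian(lambda x: x @ A @ x, np.zeros(2))  # close to 2 * A
```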

4 Examples

4.1 Synthetic quadratic objective functions

In this section, we consider synthetic quadratic objective functions as defined in Eq. (2) in two-dimensional parameter space, i.e. \(M=2\). Three different quadratic objective functions, \(f_\text {A}\), \(f_\text {B}\) and \(f_\text {C}\), are defined in the following. We will consider an example with a two-dimensional objective space, i.e. \(N=2\), using only the two objectives \(f_\text {A}\) and \(f_\text {B}\), as well as a second example with a three-dimensional objective space, i.e. \(N=3\), using all three objectives \(f_\text {A}\), \(f_\text {B}\) and \(f_\text {C}\).

As in Sect. 2.2, the objective function \(f_\text {A}({\mathbf {x}})\) was chosen to have its minimum at the origin, its eigenvectors parallel to the coordinate axes, and the two eigenvalues 1 and \(\xi\). As a consequence, \({\mathbf {A}}\) is a diagonal matrix with diagonal entries 1 and \(\xi\). The other two objective functions, \(f_\text {B}({\mathbf {x}})\) and \(f_\text {C}({\mathbf {x}})\), are obtained by rotating \(f_\text {A}({\mathbf {x}})\) about the origin and translating it by different amounts, such that the minima are shifted. \(f_\text {B}({\mathbf {x}})\) is obtained by rotating \(f_\text {A}({\mathbf {x}})\) by \(-70^\circ\) and positioning its minimum at \({\mathbf {x}}_\text {B}=[10,-5]^\intercal\), while \(f_\text {C}({\mathbf {x}})\) is obtained by rotating \(f_\text {A}({\mathbf {x}})\) by \(45^\circ\) and positioning its minimum at \({\mathbf {x}}_\text {C}=[10,5]^\intercal\). Thus, the ratio of eigenvalues \(\xi\) is the same for all three objectives. Choosing the same ratio of eigenvalues for all objective functions is not a requirement for the analysis to be valid; it was only done for convenience.
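The construction of \(f_\text {B}\) and \(f_\text {C}\) by rotation and translation of \(f_\text {A}\) can be written compactly: if \({\mathbf {R}}\) is the rotation matrix, the rotated Hessian is \({\mathbf {R}}\,\mathrm {diag}(1,\xi )\,{\mathbf {R}}^\intercal\). A minimal sketch:

```python
import numpy as np

def rotated_quadratic(xi, angle_deg, x_min):
    """Hessian and minimum of a copy of f_A (eigenvalues 1 and xi),
    rotated about the origin by angle_deg and shifted to x_min."""
    t = np.radians(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ np.diag([1.0, xi]) @ R.T, np.asarray(x_min, dtype=float)

B, x_B = rotated_quadratic(10.0, -70.0, [10.0, -5.0])  # f_B
C, x_C = rotated_quadratic(10.0,  45.0, [10.0,  5.0])  # f_C
# The rotation preserves the eigenvalues, so the eigenvalue ratio xi
# is indeed the same for all three objectives.
```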

Two different ratios of eigenvalues, viz. 10 and 100, are used to assess the accuracy of the linear approximation and the valley approximation of the PSPS and the PS (see Sect. 2.4). The exact solution was calculated using the method of Augusto et al. (2014). For \(\xi =100\), the valley-like landscape is “steeper” than for \(\xi =10\). Figure 2 illustrates the three objective functions \(f_\text {A}\), \(f_\text {B}\), and \(f_\text {C}\) for \(\xi =10\).

Fig. 2

Contour plots of the three synthetic quadratic objective functions considered in the present work. The white stars indicate the global minima of the functions, while the white dashed lines indicate the valleys (paths of lowest ascent) that pass through the minima

Figure 3 presents the Pareto sets for the MCO when the two objectives \(f_\text {A}({\mathbf {x}})\) and \(f_\text {B}({\mathbf {x}})\) are considered. The figures show the PSPS on the left and the PS on the right. Additionally, in the parameter space, the valleys of the individual functions as well as their minima (analogous to Fig. 2) are shown for reference.

Fig. 3

Pareto sets for the MCO with the two synthetic quadratic objective functions \(f_\text {A}\) and \(f_\text {B}\). Top row: \(\xi = 10\), bottom row: \(\xi = 100\). Left: parameter space (PSPS), right: objective space (PS). Blue lines: exact solution, red lines: valley approximation, green dotted lines: linear approximation (see Sect. 2.4), black dotted lines: valleys of the individual functions. The stars in the plots indicate the global minima of the individual objective functions. The valley approximation and the linear approximation are strictly identical here

For the synthetic quadratic functions, the valleys are straight lines along the eigenvectors corresponding to the smaller eigenvalue of the Hessians of the objective functions. As a consequence, the valley approximations (red lines) and the linear approximations (green dotted lines) coincide with each other.

For \(\xi =10\), the PSPS is accurately approximated, except for the region of the Pareto knee. However, even the differences in that region hardly show up in the PS, which is very well approximated. As expected, the approximation is even better for \(\xi =100\), where the differences between the approximation and the exact solution are basically negligible both for the PSPS and the PS.

Figure 4 presents the Pareto sets for the synthetic quadratic objective functions when all three objectives \(f_\text {A}({\mathbf {x}})\), \(f_\text {B}({\mathbf {x}})\), and \(f_\text {C}({\mathbf {x}})\) are considered. Again, the two ratios of eigenvalues \(\xi =10\) and \(\xi =100\) are studied. As in the case with two objectives discussed above, the PS is approximated very well for both \(\xi =10\) and \(\xi =100\). Therefore, for brevity, we only show the results for the PSPS in Fig. 4.

Fig. 4

PSPS for the MCO with the three synthetic quadratic objective functions \(f_\text {A}\), \(f_\text {B}\) and \(f_\text {C}\). Left: \(\xi = 10\), right: \(\xi = 100\). Blue: exact solution, red: valley approximation, green: linear approximation, black dotted lines: valleys of the individual functions. The stars in the plots indicate global minima of the individual objective functions. The valley approximation and the linear approximation are strictly identical here

For \(\xi =10\), the approximations work very well near the minima, while in the region of the Pareto knee, deviations are observed. For \(\xi =100\), these deviations basically vanish, leading to an almost perfect approximation of the PSPS.

4.2 Parametrisation of a molecular model of water

As a practical example for applying the concepts for estimating Pareto sets developed in the present work, we now discuss an MCO problem from thermodynamics: the parametrisation of a molecular model of water. As a starting point, we chose a popular water model, the so-called SPC/E model (Berendsen et al. 1987). The problem with applying MCO to the development of molecular models of fluids is that the evaluations of the objective functions are computationally extremely costly, as they usually require molecular simulations at many state points in each iteration step. We have circumvented this problem here by using the so-called reduced units method, which is explained below, together with simulation data for the SPC/E model from a previous study of our group (Kulkarni et al. 2020). This enabled us to carry out the analysis without additional molecular simulations. The MCO problem we study has three objective functions, which measure the differences between results from the molecular model (Kulkarni et al. 2020) and experimental data (Lemmon et al. 2018; Wagner and Pruß 2002) for three important thermodynamic properties of water: the saturated liquid density (\(\rho ^\text {liq}\)), the vapour pressure (\(p^\text {s}\)) and the enthalpy of vaporisation (\(\Delta h^\text {vap}\)). The definitions of the corresponding objective functions \(\delta \rho ^\text {liq}\), \(\delta p^\text {s}\) and \(\delta (\Delta h^\text {vap})\) were adopted from our previous work (Kulkarni et al. 2020) and are included in the Appendix.

The reduced units method (Merker et al. 2012) is used here to obtain the relation between the parameters and the objectives. While the SPC/E model has five parameters, only two of them were used as variables (\(M=2\)), in order to be able to apply the reduced units method. These are the Lennard-Jones size parameter \(\sigma\) and the Lennard-Jones energy parameter \(\varepsilon\). Upon varying \(\sigma\) and \(\varepsilon\), the other model parameters vary in a prescribed way. Details are given in Merker et al. (2012) and are not important for the present discussion. We mention this here only to emphasise that further improvements would be possible if all parameters were used in the MCO, which we have not done here for computational reasons and because \(M=2\) is also convenient for the discussion. If simulation results for the original model, which is SPC/E here, are available, the reduced units method yields analytical expressions for the relation between the simulation results and the parameters \(\sigma\) and \(\varepsilon\). From these, the objectives of the MCO problem, \(\delta p^\text {s}\), \(\delta \rho ^\text {liq}\) and \(\delta (\Delta h^\text {vap})\), can be calculated. We refer the reader to the original Ref. Merker et al. (2012) for more details on the reduced units method.

Contour plots for the three objectives \(\delta \rho ^\text {liq}\), \(\delta p^\text {s}\) and \(\delta (\Delta h^\text {vap})\) as functions of the parameters \(\sigma\) and \(\varepsilon\) are shown in Fig. 5. Both \(\delta \rho ^\text {liq}\) and \(\delta p^\text {s}\) show global minima as well as valleys passing through the minima. Here, the valleys were estimated by selecting the lowest values of the individual objectives along their cross sections parallel to the coordinate axes. Note that \(\delta (\Delta h^\text {vap})\) does not depend on \(\sigma\) here. As a consequence, the contour lines of this objective are parallel to the \(\sigma\)-axis. Thus, there is no distinct minimum for this objective; rather, the minimum is “spread” across an entire line parallel to the \(\sigma\)-axis, i.e. a line of constant \(\varepsilon\).
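The cross-section procedure for estimating the valley bottom lines can be sketched as follows; this is a minimal illustration on a synthetic objective with a deep, tilted valley, not the actual water-model data:

```python
import numpy as np

def valley_points(F, sig, eps):
    """Estimate valley bottom points of a gridded objective F[i, j] at
    (sig[i], eps[j]): on each grid line parallel to a coordinate axis,
    take the location of the lowest function value."""
    pts = []
    for i in range(len(sig)):        # cross sections at fixed sigma
        j = np.argmin(F[i, :])
        pts.append((sig[i], eps[j]))
    for j in range(len(eps)):        # cross sections at fixed eps
        i = np.argmin(F[:, j])
        pts.append((sig[i], eps[j]))
    return pts

# synthetic objective: deep valley along eps = 0.3 * sig, shallow along it
sig = np.linspace(-1.0, 1.0, 51)
eps = np.linspace(-1.0, 1.0, 51)
S, E = np.meshgrid(sig, eps, indexing="ij")
F = 100.0 * (E - 0.3 * S)**2 + 0.1 * S**2
pts = valley_points(F, sig, eps)     # traces the valley bottom line
```

The returned point set traces the bottom line of the valley through the minimum, which for the synthetic example lies at the origin.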

Fig. 5

Contour plots of the three objective functions for the optimisation of the SPC/E water model. The objectives \(\delta \rho ^\text {liq}\), \(\delta p^\text {s}\) and \(\delta (\Delta h^\text {vap})\) indicate the deviations between simulation results for SPC/E and the experimental data for saturated liquid density, vapour pressure and enthalpy of vaporisation, respectively. The white stars indicate the minima of the functions, while the white dashed lines indicate the valleys (paths of lowest ascent) that pass through the minima. There is no distinct minimum for \(\delta (\Delta h^\text {vap})\) as this objective does not depend on \(\sigma\)

Since analytical relations between the objectives and the parameters are available, it is possible to calculate their second derivatives at the respective minima. From these, the Hessian matrices and thereby the quadratic approximations of the objective functions can be obtained. Also, the eigenvalues of the Hessians at the minima of the individual objectives were computed; they are shown in Table 1. The ratios of the eigenvalues are significantly larger than those considered for the synthetic quadratic functions, cf. Sect. 4.1.
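As an illustration of this step, the sketch below computes the Hessian at a minimum by central finite differences and extracts its eigenvalues; the objective used is a hypothetical stand-in with a sharp, deep valley (eigenvalue ratio 1000), whereas the actual analysis uses the analytical second derivatives:

```python
import numpy as np

def hessian(f, x, h=1e-5):
    """Central finite-difference Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4.0 * h * h)
    return H

# hypothetical objective: quadratic with a shallow and a steep direction
f = lambda x: 0.1 * x[0]**2 + 100.0 * x[1]**2

H = hessian(f, np.array([0.0, 0.0]))   # Hessian at the minimum
lam = np.linalg.eigvalsh(H)            # eigenvalues in ascending order
ratio = lam[1] / lam[0]                # large ratio -> sharp, deep valley
```

With the Hessian \(H\) in hand, the quadratic approximation of an objective near its minimum \(x^*\) is simply \(f(x) \approx f(x^*) + \tfrac{1}{2}(x - x^*)^\top H (x - x^*)\).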

Table 1 Eigenvalues of the Hessian matrices for the simulation results of the three objective functions considered in the MCO of the molecular model of water

For the MCO in this case, as for the synthetic quadratic objective functions, first only two objectives, viz. \(\delta \rho ^\text {liq}\) and \(\delta p^\text {s}\), were considered, so that \(N=2\). In a second step, three objectives, \(\delta \rho ^\text {liq}\), \(\delta p^\text {s}\) and \(\delta (\Delta h^\text {vap})\), were considered, so that \(N=3\). For both cases, the PSPS and the PS obtained by the three methods presented in Sect. 3 (the linear approximation, the valley approximation and the quadratic approximation) were compared to the exact Pareto set. The latter was calculated by a brute-force evaluation of the objective functions on a narrowly spaced parameter grid, followed by a selection of the non-dominated points.
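The non-dominated selection used for the brute-force reference can be sketched as follows; the objective vectors are a small hypothetical example, and minimisation of all objectives is assumed:

```python
import numpy as np

def pareto_filter(obj):
    """obj: (n_points, n_objectives) array of objective values (to be
    minimised). Returns a boolean mask marking the non-dominated points:
    a point is dominated if some other point is at least as good in all
    objectives and strictly better in at least one."""
    n = obj.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated_by = (np.all(obj <= obj[i], axis=1)
                        & np.any(obj < obj[i], axis=1))
        if dominated_by.any():
            mask[i] = False
    return mask

# hypothetical objective vectors: the last point is dominated by (2, 2)
obj = np.array([[1.0, 4.0],
                [2.0, 2.0],
                [4.0, 1.0],
                [3.0, 3.0]])
mask = pareto_filter(obj)
```

On a narrowly spaced grid this \(O(n^2)\) filter becomes expensive, which is why it serves only as the exact reference against which the cheap approximations are benchmarked.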

Figure 6 presents the results obtained for the PSPS (left) and the PS (right) for the case \(N=2\). Also, a zoom into the Pareto knee region of the PS is provided. The three approximations are shown together with the exact Pareto set. Additionally, the bottom lines of the valleys in the two objective functions and the corresponding minima are shown.

Fig. 6

Pareto set for the MCO of the SPC/E water model with the two objective functions \(\delta \rho ^\text {liq}\) and \(\delta p^\text {s}\). Left: parameter space (PSPS) and right: objective space (PS), along with a magnified view of the PS around the Pareto knee region. Blue: exact solution, red: valley approximation, purple: quadratic approximation and green: linear approximation of the Pareto set. Additionally, the valleys (see Fig. 5) are shown as black dashed lines for reference. Stars indicate the global minima of the individual objective functions. The exact solution and the valley approximation of the Pareto set coincide, as do the quadratic and linear approximations

The valley approximation matches the exact Pareto set almost perfectly. As the objective functions are no longer strictly quadratic, the linear approximation deviates from the bottom lines of the valleys, which are slightly non-linear. The linear approximation and the quadratic approximation show almost no differences, even in the region of the Pareto knee, which can be understood as a consequence of the topology of the individual objective functions with their sharp and deep valleys, and, hence, as a result of the wide spread of the eigenvalues. However, these two coinciding approximations, while still giving fair results, are clearly less accurate than the (almost perfect) valley approximation.

Figure 7 presents the results obtained for the PSPS (left) and the PS (right) for the case \(N=3\). As in Fig. 6, the results of the three approximations are compared to the exact solution. However, now there are three objective functions and, hence, three valleys, only two of which have distinct minima, for the reasons discussed above.

Fig. 7

PSPS for the MCO of the SPC/E water model with the three objective functions \(\delta \rho ^\text {liq}\), \(\delta p^\text {s}\) and \(\delta (\Delta h^\text {vap})\). Blue: exact solution, red: valley approximation, purple: quadratic approximation, and green: linear approximation. Stars indicate the global minima of the individual objective functions. Additionally, the valleys (see Fig. 5) are shown as black dashed lines for reference

In this case, a triangular Pareto knee region with two extensions is expected, and this is indeed what is found. The two extensions end at the two minima, which lie outside the Pareto knee region. This topology is found not only in the brute-force evaluation of the Pareto set but also by all considered approximation methods. The quadratic approximation yields the triangular surface and the extensions, while the other two methods yield only three intersecting lines, from which it is, however, easy to construct the triangular Pareto knee region including the two extensions. As for \(N=2\), the predictions of the valley method are almost perfect, and the differences between the results of the quadratic approximation and the linear approximation are negligible. Again, as for \(N=2\), these predictions are less accurate than those of the valley approximation.

Altogether, the results from the presented examples suggest that the valley approximation is an accurate and useful one for practical MCO problems.

5 Conclusions

In previous studies of low-dimensional MCO problems, the Pareto sets, as well as their pictures in the parameter space, were found to have peculiar topologies that are closely related to the topologies of the individual objective functions, which often have a minimum lying at the lowest point of a deep and sharp valley. Such structures are characterised mathematically by the fact that one of the eigenvalues of the Hessian of the objective function at its minimum is much smaller than the others, which seems to be the case in many engineering problems. This finding can be used for estimating the Pareto set of the MCO problem: the minima of the individual objective functions are extreme compromises of the MCO problem and, as such, belong to the Pareto set. In the case described above, starting from a minimum, more points of the Pareto set can be found by following the valleys. We call this the valley approximation. The so-called Pareto knee region, which is usually the region where the most interesting compromises are located, is found where the valleys meet or at least approach each other. This is discussed in the present work first for MCO problems with quadratic objective functions, for which the Pareto set can be determined analytically using the method of Augusto et al. (2014). Then, the analysis is extended to MCO problems with non-quadratic objective functions that, however, are required to have the topology described above. Three schemes to determine the Pareto set of these MCO problems are discussed, of which the valley approximation gave the best results. Hence, the results not only explain the peculiar topologies of the Pareto sets observed earlier, but can also be used to construct approximations of Pareto sets.

Questions left open by the present work include: (a) an analysis of more examples of low-dimensional practical MCO problems. The hypothesis is that many will be found which show the peculiar topologies mentioned above. It would also be interesting to make generic statements on why this type of problem is so common. (b) The present method for estimating the Pareto set is based on finding branches of the Pareto set that start in the extreme compromises. We have basically left open the question of how to find out in which way these branches are linked in the Pareto set when this does not follow from simple geometric considerations such as intersection points. Here, the quadratic approximation method presented in the present work, in which all objective functions are approximated by quadratic functions around their respective minima, could be useful, as it yields not only the branches but also their connections. Finally, the new methods should be implemented in a robust and efficient program package.