1 Introduction

Despite rapid technological advancements in software and hardware performance, many problems featuring numerous evaluation criteria or objective functions (Wiedemann 1978), multiple constraints of different natures and hundreds to thousands of decision variables remain challenging to solve (Carrizosa et al. 2019). Several methods have been developed to solve multi-objective linear programming problems, each with its own strengths and weaknesses (Sylva and Crema 2007). Among these, the ε-constraint method, along with its variants, has been used in many systems and applications (e.g. Liu and Papageorgiou 2013; Paul et al. 2017; Zhou et al. 2018; Jenkins et al. 2019), has been reported as a powerful way to solve multi-objective linear programming problems, and is often preferred over competing techniques (Kadziński et al. 2017a, b; Jabbarzadeh et al. 2019).

In this study, we focus on one of the most widely used improvements of ε-constraint, the AUGMECON method (Mavrotas 2009), as subsequently improved in AUGMECON 2 (Mavrotas and Florios 2013). By examining its main novelties, we identify and discuss its core weaknesses in detail, which serve as motivation for developing a new method that effectively overcomes them. These weaknesses, although dependent on various characteristics and processes of the method, can be summarised as the ineffective handling of the true nadir points of the objective functions of a problem and, most notably, the significant amount of time required to apply the method as more objective functions are added to a problem, which can render a problem practically unsolvable. Drawing on these, we introduce AUGMECON-R, a powerful and robust improvement that addresses these weaknesses, and compare it against its predecessor on both a set of reference problems from the literature and a series of significantly more complex problems of four to six objective functions.

The rest of the paper is organised as follows. Section 2 provides a brief overview of the ε-constraint, AUGMECON and AUGMECON 2 methods, highlighting the characteristics most relevant to the introduction of AUGMECON-R in Sect. 3. Section 4 performs a comparative analysis between AUGMECON-R and its predecessor (AUGMECON 2). Finally, Sect. 5 draws conclusions and outlines prospects for future work.

2 A brief overview of the augmented ε-constraint method

According to Hwang et al. (1980), Multiple-Objective Mathematical Programming (MOMP) solving algorithms can be organised into three groups: a priori methods, in which the decision makers express their preferences or objective function weights prior to solving the problem; interactive methods, which feature an ongoing dialogue between analysts and decision makers, eventually leading to preferences converging with solutions; and a posteriori methods, in which the problem is solved and the efficient Pareto solutions are found, allowing the decision makers to select among these based on their preferences. Given that decision makers can rarely know and quantify their preference model in advance, as a priori methods require, and that interactive methods demand a complete overview of (an approximation of) the Pareto front on the part of the decision makers, this paper focuses on a posteriori methods for solving a MOMP problem of the form:

$$ \begin{aligned} & \max \left\{ f_{1}(x), f_{2}(x), \ldots, f_{p}(x) \right\} \\ & \text{s.t.:}\; x \in S \end{aligned} $$

where \( x \) is the vector of decision variables, \( f_{1}(x), f_{2}(x), \ldots, f_{p}(x) \) are the \( p \) objective functions, and \( S \) is the feasible region.

Among these methods, the ε-constraint algorithm aims at optimising one objective function, while considering all other objective functions as constraints. The model, widely applied for multi-objective linear programming problems (Mavrotas et al. 2011; Sakar and Koksalan 2013), is thus transformed to:

$$ \begin{aligned} & \max \; f_{1}(x) \\ & \text{s.t.:} \\ & f_{2}(x) \ge e_{2} \\ & f_{3}(x) \ge e_{3} \\ & \quad \vdots \\ & f_{p}(x) \ge e_{p} \\ & x \in S \end{aligned} $$

By changing the right-hand side of the constrained objective functions (\( e_{i} \)), efficient solutions are obtained. The problem is solved on a step-by-step basis on an \( N_{2} \times N_{3} \times \ldots \times N_{p} \) grid, where \( N_{i} \) is the integer range of the objective function \( f_{i} \).

One of the method’s main advantages is that the number of efficient solutions can be controlled by appropriately adjusting the number of grid points, on which each optimisation is solved, along the range of each objective function. However, this range must first be calculated; it cannot be guaranteed that the solutions obtained are efficient rather than weakly efficient; and solving any problem with more than two objective functions is very time-consuming.
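For illustration, the following is a minimal, self-contained Python sketch of the ε-constraint grid scan described above, on a toy three-objective knapsack instance; all data, ranges and names (e.g. the brute-force solve, which stands in for an actual MIP solver) are hypothetical and not drawn from this paper:

```python
from itertools import product

# Toy data for a three-objective knapsack instance (illustrative values).
C = [[4, 2, 5, 3],   # coefficients of f1
     [1, 5, 2, 4],   # coefficients of f2
     [3, 3, 1, 5]]   # coefficients of f3
A = [[2, 3, 1, 4]]   # constraint coefficients
B = [6]              # right-hand sides

def solve(e2, e3):
    """Maximise f1 subject to f2 >= e2 and f3 >= e3; None if infeasible."""
    best = None
    for x in product([0, 1], repeat=len(C[0])):
        if any(sum(a * xi for a, xi in zip(row, x)) > b for row, b in zip(A, B)):
            continue
        f = tuple(sum(c * xi for c, xi in zip(row, x)) for row in C)
        if f[1] >= e2 and f[2] >= e3 and (best is None or f[0] > best[0]):
            best = f
    return best

# One optimisation per grid point over the (assumed) integer ranges of f2, f3;
# note that, without an augmented term, some results may be weakly efficient.
solutions = set()
for e2 in range(15):
    for e3 in range(13):
        s = solve(e2, e3)
        if s is not None:
            solutions.add(s)
print(sorted(solutions))
```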

These weaknesses motivated the development of augmented ε-constraint or AUGMECON (Mavrotas 2009), which transforms the problem into the following:

$$ \begin{aligned} & \max \left\{ f_{1}(x) + eps \times (s_{2} + s_{3} + \cdots + s_{p}) \right\}, \quad eps \in (10^{-6}, 10^{-3}) \\ & \text{s.t.:} \\ & f_{2}(x) - s_{2} = e_{2} \\ & f_{3}(x) - s_{3} = e_{3} \\ & \quad \vdots \\ & f_{p}(x) - s_{p} = e_{p} \\ & x \in S \end{aligned} $$

In essence, AUGMECON introduces the following modifications to the original ε-constraint method, to ensure that only efficient Pareto solutions are obtained: (i) the constraints corresponding to the \( p - 1 \) constrained objective functions are transformed into equalities; and (ii) slack (or surplus) variables are introduced both to the primary objective function and to the constrained ones.

Another significant novelty of AUGMECON is that it exploits cases where the problem is infeasible, enabling an early exit from the nested loops of the step-increase process: the algorithm initially sets lower bounds on the constrained objective functions, which gradually become stricter; if the problem becomes infeasible, i.e. the model cannot be solved for the given constraint on an objective function after a specific grid point increase, there is no point in strengthening the constraints further, so the algorithm exits the innermost loop and continues to the next grid point of said objective function. This way, AUGMECON contributes to faster model solution, especially when the problem features more than two objective functions.
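A minimal control-flow sketch of this early exit follows; solve_augmecon is a hypothetical stand-in for the actual optimisation, returning a solution or None when the model is infeasible:

```python
# Sketch of AUGMECON's early exit on infeasibility (hypothetical solver).
def scan(grid2, grid3, solve_augmecon):
    pareto = set()
    for e3 in grid3:              # outer loop: constraint on f3
        for e2 in grid2:          # innermost loop: constraint on f2, ascending
            result = solve_augmecon(e2, e3)
            if result is None:
                # Tightening e2 further can only remain infeasible, so exit
                # the innermost loop and move to the next grid point of f3.
                break
            pareto.add(result)
    return pareto
```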

AUGMECON has been employed in various applications and systems, including supply chain management (Torabi et al. 2013; Bootaki et al. 2014, 2016; Canales-Bustos et al. 2017; Musavi and Bozorgi-Amiri 2017; Ravat et al. 2017; Vieira et al. 2017; Sazvar et al. 2018; Ehrenstein et al. 2019; Qiu et al. 2019; Shekarjan et al. 2019; Xin et al. 2019), energy planning (Hombach and Walther 2015; Tartibu et al. 2015; Arancibia et al. 2016; Cambero and Sowlati 2016; Cambero et al. 2016; Mohammadkhani et al. 2018; Rabbani et al. 2018; Sedighizadeh et al. 2018; Razm et al. 2019), waste management (Mavrotas et al. 2013; Mavrotas et al. 2015a, b; Inghels et al. 2016), portfolio analysis (Xidonas et al. 2010, 2011; Khalili-Damghani et al. 2012), transportation (Resat and Turkay 2015; Babakeik et al. 2018) and others (Khalili-Damghani and Amiri 2012; Aras and Yurdakul 2016; Yu et al. 2018; Behmanesh and Zandieh 2019; Zhang et al. 2019; Xiong et al. 2019); and has been combined with or compared against evolutionary algorithms (Khalili-Damghani et al. 2013; Dabiri et al. 2017; Wang et al. 2018; Mohammadi et al. 2019).

Mavrotas and Florios (2013) further extended this algorithm in AUGMECON 2, by introducing a bypass coefficient as well as a type of lexicographic optimisation to all objective functions, the order of which was insignificant in AUGMECON:

$$ \max \left\{ f_{1}(x) + eps \times \left( \frac{s_{2}}{r_{2}} + 10^{-1}\,\frac{s_{3}}{r_{3}} + \cdots + 10^{-(p-2)}\,\frac{s_{p}}{r_{p}} \right) \right\} $$

By means of the bypass coefficient, AUGMECON 2 makes use of the information provided by the slack/surplus variables of the constrained objective functions to avoid unnecessary iterations and accelerate solution. The jumps made in the innermost loop accelerate grid scanning, allowing for a smaller step and therefore more grid points; by doing so, the exact Pareto set can be identified. For this to work, however, (a) the objective function coefficients must be integer and (b) the nadir points of the Pareto set must be known.
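Before turning to these two requirements, the bypass mechanics can be sketched as follows; solve, its return values and the grids are hypothetical stand-ins for the actual model:

```python
# Sketch of AUGMECON 2's innermost-loop bypass; solve(e2, e3) is assumed to
# return (solution, s2), with s2 the surplus variable of the innermost
# constrained objective, or None if the model is infeasible.
def scan_augmecon2(grid2, grid3, step2, solve):
    pareto = set()
    for e3 in grid3:
        k = 0
        while k < len(grid2):
            result = solve(grid2[k], e3)
            if result is None:
                break                    # early exit inherited from AUGMECON
            solution, s2 = result
            pareto.add(solution)
            b = int(s2 // step2)         # bypass coefficient
            k += b + 1                   # skip b redundant grid points
    return pareto
```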

To deal with the first limitation, non-integer coefficients can be multiplied by the appropriate power of 10, as necessary, which can however significantly expand the grid and increase the number of grid points, leading to very large solution times. Regarding the second limitation, adding steps to accurately calculate the nadir points of the Pareto set would also increase the algorithm’s complexity, so the AUGMECON 2 algorithm only uses an underestimation (overestimation), i.e. a lower (upper) bound, in the case of maximisation (minimisation) objectives.

Despite its weaknesses, which are analysed in detail below, AUGMECON 2 performs significantly better than AUGMECON, which is why it has also been applied in a wide range of problem domains since its introduction, including supply chain management (Gavranis and Kozanidis 2017; Bal and Satoglu 2018; Attia et al. 2019; Habibi et al. 2019; Resat and Unsal 2019; Roshan et al. 2019; Saedinia et al. 2019; Vafaeenezhad et al. 2019; Mohammed and Duffuaa 2020), project selection (Mavrotas et al. 2015a, b; Schaeffer and Cruz-Reyes 2016), and network optimisation and planning (Florios and Mavrotas 2014; Oke and Siddiqui 2015; Mousazadeh et al. 2018; Rahimi et al. 2019), as well as in policy-related problems, such as energy and climate action (Forouli et al. 2019a, b; Van de Ven et al. 2019; Doukas and Nikas 2020).

3 AUGMECON-R

3.1 Motivation

Although AUGMECON 2 constitutes a significant upgrade to AUGMECON and a powerful algorithm for solving multi-objective integer programming (MOIP) problems and finding the exact Pareto set of a problem, it features certain weaknesses, the need to overcome which has motivated the development of AUGMECON-R.

First, the solution time for large-scale problems of more than two objective functions is still high, since jumps only occur in the innermost loop and not across the grid and for all nested loops, which represent the constrained objective functions: a problem of \( m \) objective functions of average range \( n \) would have an AUGMECON 2 complexity of \( O\left( {n^{m - 1} } \right) \), which is relatively large for programs running in environments like GAMS. For example, a 6kpY problem (a knapsack problem of 6 objective functions, 6 constraints and Y decision variables), with an average integer range of 1000 for each objective function, would feature a complexity of \( O\left( {10^{15} } \right) \), or slightly less given the iterations avoided due to the bypass coefficient of the innermost loop. The more objective functions a problem has, the more time-consuming AUGMECON 2 becomes for solving said problem.

Second, AUGMECON 2 requires that objective function coefficients be integer. If this is not the case, non-integer coefficients are multiplied by the appropriate power of 10, thereby also increasing the complexity accordingly: a problem of \( m \) objective functions of average range \( n \) and an average number of decimals \( k \) would have an AUGMECON 2 complexity of \( O\left( {n^{m - 1} \times 10^{{k \times \left( {m - 1} \right)}} } \right). \)

Third, implementing AUGMECON 2 requires a priori knowledge of the nadir points of the objective functions. Nadir point calculation algorithms are usually complex, hard to program and could require more code than AUGMECON 2 itself; they are generally capable of solving problems of up to three objective functions; and their running time is comparable to that of AUGMECON 2 (Alves and Costa 2009). This is why AUGMECON 2 opts for an underestimation of the nadir points, i.e. the use of lower bounds of the objective functions, thereby only slightly increasing computation time. In AUGMECON 2, this approximation takes place in the problem’s payoff table, where the lowest values, which in theory are equal to or greater than the nadir points, are multiplied by an arbitrary coefficient (e.g. 90%), resulting in what is hopefully an underestimation of the actual nadir points. One heuristic workaround would be to calculate all payoff tables, considering all possible orders of the constrained objective functions; this would be expected to give a closer approximation to the actual nadir points, allowing the arbitrary coefficient to be tightened, e.g. to 95%, while hopefully still ensuring that the nadir points are included in the new, smaller grid. However, this approach would merely improve computation time, without avoiding either the arbitrary or the hopeful nature of the approximation process.
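As a toy illustration of this payoff-table underestimation for a maximisation problem, with example values and the indicative 90% coefficient (none of which are taken from an actual problem in this paper):

```python
# Illustrative payoff table: row i holds the objective values obtained when
# optimising f_i; column minima are, in theory, >= the true nadir points.
payoff = [
    [100, 80, 60],
    [70, 120, 90],
    [65, 85, 110],
]
coef = 0.9   # arbitrary loosening coefficient (e.g. 90%)
lower_bounds = [coef * min(row[i] for row in payoff) for i in range(3)]
# Column minima (65, 80, 60) yield loosened bounds (58.5, 72.0, 54.0), which
# are hopefully underestimations of the actual nadir points.
print(lower_bounds)
```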

Fourth, the dependence of computation time on the order of the constrained objective functions across loops is a weakness in itself: AUGMECON 2 features a bypass coefficient only for the innermost loop of the process, eliminating only those unnecessary grid point checks that can be avoided within that loop. To maximise the number of unnecessary iterations avoided, the algorithm should, after calculating the payoff table, be able to reorder the constrained objective functions so that the objective function with the largest range is placed in the innermost loop.

These four limitations of AUGMECON 2 constitute the motivation for developing a new algorithm that can significantly improve computation time and efficiency, and allow for easily solving problems that have so far been hard or practically impossible to solve.

3.2 An improved search algorithm

Reading through Mavrotas and Florios (2013) and the performance recorded for AUGMECON 2, there appears to be a large deviation between the number of models solved and the number of solutions included in the Pareto front. For example, in the case of the 3kp100 problem, i.e. a knapsack problem of three objective functions, three constraints and a hundred decision variables, 103,049 models are solved, approximately sixteen times the number of solutions included in the Pareto front (6,500). However, given that this is a MOIP problem and AUGMECON 2 calculates the exact Pareto set by using a unity step to explore all possible integer values of the objective functions across the grid, one would expect the number of models solved to be equal, or at least close, to the number of Pareto front solutions, which is not the case.

This large number of unnecessary optimisations can be attributed to the use of only one bypass coefficient, in the innermost loop, as well as to the large number of infeasibilities that could otherwise have been, to some extent, foreseen and avoided.

In this direction, AUGMECON-R introduces a novelty largely based on the existing notion of the bypass coefficient, incorporating into the model one bypass coefficient per constrained objective function, of the form:

$$ \begin{aligned} b_{2} &= int(s_{2}/step_{2}) \\ b_{3} &= int(s_{3}/step_{3}) \\ &\;\;\vdots \\ b_{p} &= int(s_{p}/step_{p}) \end{aligned} $$

where \( int() \) is the function that returns the integer part of a real number, \( s_{i} \) is the slack/surplus variable of objective function \( i \), and \( step_{i} \) is the discretisation step for this objective function:

$$ step_{i} = r_{i} /q_{i} $$

where \( r_{i} \) is the range of objective function \( i \), and \( q_{i} \) is the number of equal intervals into which the range is divided to formulate the grid, so that the latter comprises \( q_{i} + 1 \) grid points.

This way, instead of a single bypass coefficient acting on the innermost loop, as in AUGMECON 2, AUGMECON-R features an active bypass coefficient in each of the outer loops as well. In every iteration, the bypass coefficients \( b_{i} = int(s_{i}/step_{i}) \) are calculated. When \( s_{i} > step_{i} \), the next iteration, with \( b_{i}' \) corresponding to \( e_{i}' = e_{i} + step_{i} \), would again lead to the same solution, with \( s_{i}' = s_{i} - step_{i} \), making that iteration unnecessary. The bypass coefficient \( b_{i} \) indicates how many iterations should be bypassed, provided that these iterations concern the \( i \)th objective function and that the right-hand sides of all other constrained objective functions remain constant. The new process introduced in the proposed algorithm can be illustrated with a simple example. Assume a four-objective problem with the payoff table shown in Table 1 (all objective functions to be maximised):

Table 1 Payoff table of example problem

From the payoff table, we have \( r_{2} = r_{3} = r_{4} = 10 \), each divided into ten equal intervals with a unity step of \( step_{2} = step_{3} = step_{4} = 1 \). AUGMECON-R includes the following process:

figure a

The objective function \( f_{2}(x) \) is represented in the innermost loop (counter \( k \)). Assume we are currently at the 2nd iteration of the innermost loop (\( k = 1 \)), the 4th iteration of the middle loop (\( j = 3 \)) and the 3rd iteration of the outermost loop (\( i = 2 \)), with \( e_{2} = 103 \), \( e_{3} = 80 \) and \( e_{4} = 48 \), as displayed in brackets and bold in Table 2.

Table 2 Grid points of the example problem

After the optimisation, we obtain \( s_{2} = 4 \), \( s_{3} = 3 \), and \( s_{4} = 4 \), meaning that \( f_{2} = 103 + 4 = 107 \), \( f_{3} = 80 + 3 = 83 \), and \( f_{4} = 48 + 4 = 52 \) (and, for the sake of completeness, \( f_{1} = 97 \)). Hence, \( b_{2} = 4 \), \( b_{3} = 3 \), and \( b_{4} = 4 \). While AUGMECON 2 would consider unnecessary only the four next iterations of the innermost loop, AUGMECON-R acknowledges that any combination of \( k \in \left[ {1,5} \right] \), \( j \in \left[ {3,6} \right] \), \( i \in \left[ {2,6} \right] \) would return the same solution. In this problem, AUGMECON-R would avoid 19 unnecessary iterations that AUGMECON 2 would not.
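To make the arithmetic of the example explicit, a short Python sketch (unity steps and the surpluses obtained above):

```python
# Worked example in numbers: per-loop bypass coefficients and the count of
# extra iterations AUGMECON-R avoids relative to AUGMECON 2 (which only
# applies b2 in the innermost loop).
step = {2: 1, 3: 1, 4: 1}
s = {2: 4, 3: 3, 4: 4}                    # surpluses after the optimisation
b = {i: int(s[i] // step[i]) for i in s}  # b2 = 4, b3 = 3, b4 = 4

# Every (j, i) combination inside the bypass window, other than the current
# one, is an optimisation AUGMECON 2 would still perform:
extra = (b[3] + 1) * (b[4] + 1) - 1
print(b, extra)                           # {2: 4, 3: 3, 4: 4} 19
```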

Assuming we have the capacity to store the values of the bypass coefficients \( b_{i} \) of each optimisation \( h \), and defining as pure any optimisation that leads to a solution different from the one resulting from a unity decrease of any of the right-hand-side parameters \( e_{i} \) for a specific iteration drawn from the grid points of the objective functions, AUGMECON-R can avoid:

$$ \sum_{h \in D} b_{3,h}, \quad \text{if } p = 3 $$
$$ \sum_{h \in D} \left[ b_{3,h} + b_{4,h}(b_{3,h} + 1) \right], \quad \text{if } p = 4 $$
$$ \sum_{h \in D} \left[ b_{3,h} + b_{4,h}(b_{3,h} + 1) + b_{5,h}(b_{4,h} + 1)(b_{3,h} + 1) \right], \quad \text{if } p = 5 $$
$$ \sum_{h \in D} \left[ b_{3,h} + b_{4,h}(b_{3,h} + 1) + b_{5,h}(b_{4,h} + 1)(b_{3,h} + 1) + \cdots + b_{p,h}(b_{p-1,h} + 1)(b_{p-2,h} + 1) \cdots (b_{3,h} + 1) \right], \quad \text{if } p \ge 6 $$

iterations compared to AUGMECON 2, where \( h \) is a pure optimisation and \( D \) is the set of all pure optimisations.
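For illustration, a small helper implementing the general pattern of these sums reproduces the count of the example above:

```python
# Given the bypass coefficients [b3, b4, ..., bp] of one pure optimisation h,
# return the number of extra iterations avoided relative to AUGMECON 2.
def extra_avoided(b):
    total, window = 0, 1
    for b_j in b:                 # j = 3, 4, ..., p
        total += b_j * window     # b_j times the product of (b_l + 1), l < j
        window *= b_j + 1
    return total

assert extra_avoided([3]) == 3                     # p = 3
assert extra_avoided([3, 4]) == 3 + 4 * (3 + 1)    # p = 4; 19, as in the example
```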

In order to achieve this for any problem of \( p \) objective functions, as suggested above, the AUGMECON-R algorithm requires the introduction of a \( (p - 1) \)-dimensional array storing integer flag values, \( flag\left[ (N_{2} + 1) \times (N_{3} + 1) \times \cdots \times (N_{p} + 1) \right] \), where \( N_{i} \) is the integer range of objective function \( f_{i} \). The array is initialised with zeros and, prior to any optimisation, the algorithm examines whether the corresponding value of the array is zero; if it is, the optimisation is performed; otherwise, the algorithm jumps in the innermost loop as many steps as the array value indicates.

By introducing the flag array and the notion of pure optimisations, AUGMECON-R can at the same time avoid any unnecessary optimisations due to infeasibilities: if, for any values \( e_{2}^{*}, e_{3}^{*}, \ldots, e_{p}^{*} \) of the right-hand sides of the constrained objective functions, the problem is infeasible, then increasing any of \( e_{2}, e_{3}, \ldots, e_{p} \), with all others equal to or greater than \( e_{2}^{*}, e_{3}^{*}, \ldots, e_{p}^{*} \), will also lead to an infeasibility. Therefore, for \( \delta_{i} \in \mathbb{N} \), any combination \( \{ e_{i}^{*} + \delta_{i} \} \) on the right-hand side of the constrained objective functions will return an infeasibility; while AUGMECON 2 would only avoid the infeasibilities for \( \{ e_{2}^{*} + \delta_{2}, e_{3}^{*}, \ldots, e_{p}^{*} \}, \delta_{2} > 0 \), AUGMECON-R avoids all infeasibilities for \( \{ e_{2}^{*} + \delta_{2}, e_{3}^{*} + \delta_{3}, \ldots, e_{p}^{*} + \delta_{p} \} \).

The proposed algorithm can, therefore, avoid all iterations, for which all right-hand sides of the constrained objective functions are equal to or greater than the \( e_{i}^{*} \) values that led to an infeasibility.
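To make these mechanics concrete, the following is a minimal Python sketch of the flag array and both of its uses, i.e. the pure-optimisation bypass and the infeasibility propagation, for \( p = 3 \) with unity steps; solve is a hypothetical stand-in that maps grid indices to the outcome of the corresponding optimisation:

```python
import numpy as np

# Minimal AUGMECON-R sketch for p = 3 (two constrained objectives); solve(k, j)
# is assumed to map grid indices to right-hand sides and return
# (solution, s2, s3), or None on infeasibility. Unity steps: b_i = int(s_i).
def augmecon_r(N2, N3, solve):
    flag = np.zeros((N2 + 1, N3 + 1), dtype=int)   # 0 = not yet examined
    pareto = set()
    for j in range(N3 + 1):
        k = 0
        while k <= N2:
            if flag[k, j] > 0:
                k += flag[k, j]                    # jump as the flag dictates
                continue
            result = solve(k, j)
            if result is None:
                # Every grid point with right-hand sides >= the current ones
                # is also infeasible: set flags to jump past the inner loop.
                for jj in range(j, N3 + 1):
                    flag[k, jj] = max(flag[k, jj], N2 + 1 - k)
                break
            solution, s2, s3 = result
            pareto.add(solution)
            b2, b3 = int(s2), int(s3)
            # Mark the bypass window so that revisits from outer-loop
            # iterations jump over the same redundant grid points.
            for jj in range(j, min(j + b3, N3) + 1):
                flag[k, jj] = max(flag[k, jj], b2 + 1)
            k += b2 + 1
    return pareto
```

The same bookkeeping generalises to the \( (p - 1) \)-dimensional case described above.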

The flow chart of AUGMECON-R is shown in Fig. 1.

Fig. 1 Flowchart of the AUGMECON-R algorithm

3.3 Source code

The code for AUGMECON-R, customised for a representative model of a 4kp40 problem and freely available on GitHub (see Footnote 1), has largely been based on the AUGMECON 2 source code and is presented in Appendix 1.

4 Comparative analysis and discussion

In this section, AUGMECON-R is employed on numerous problems and its performance is compared against that of AUGMECON 2. Initially, the benchmark problems presented by Mavrotas and Florios (2013) are solved, acting as a reference, followed by a series of random problems that are more challenging in terms of number of objective functions and density; for the latter, AUGMECON 2 was also run by the authors, to enable a comparative analysis.

It must be noted that all problems presented in this section were solved in GAMS version 23.5, using CPLEX 12.2, on a 64-bit Windows 10 operating system with a 2.7 GHz i5-6400 processor and 8 GB of RAM.

4.1 Reference benchmark problems

Here, given that AUGMECON-R was designed as an upgrade to AUGMECON 2, we use as reference the 3kpY problems that Mavrotas and Florios (2013) used to establish AUGMECON 2 by comparing its performance against AUGMECON; these include a 3kp100, a 3kp50 and a 3kp40 problem, i.e. selected knapsack problems of three objective functions, three constraints, and 100, 50 and 40 decision variables respectively. The 2kpY problems used in the same study were disregarded since, based on the proposed model outlined in Sect. 3.2, AUGMECON-R is identical to AUGMECON 2 when dealing with only two objective functions.

It should also be noted that, in their study, Mavrotas and Florios (2013) do not use their originally proposed AUGMECON 2 algorithm, but a programming modification of it that arbitrarily avoids certain optimisations at the initial stages, both in the innermost and in the outer loop. This is noteworthy because, although the use of this modification does not significantly change the order of the resulting difference (cf. the performance reported in Mavrotas and Florios 2013), here the performance of AUGMECON-R is compared against the original AUGMECON 2 algorithm, and not against the ad hoc modified one. Table 3 summarises the performance of the two algorithms, in terms of CPU time, grid points per objective function, total models solved, infeasibilities found, number of models solved multiple times (‘duplicate solutions’), dominated solutions, and solutions found in the Pareto front.

Table 3 Performance comparison between AUGMECON 2 (AUGM 2) and AUGMECON-R (AUGM-R) for the 3kpY problems

Our findings suggest that, for the same number of solutions in the Pareto front, AUGMECON-R is multiple times faster and solves significantly fewer models, leading to significantly fewer infeasibilities and duplicate solutions (Table 4). To account for potential randomness in CPU times due to different levels of CPU core availability, the CPU times presented are averages over a series of model runs, so that the comparison can be considered unbiased and representative. This is also why the number of models solved is highlighted as a comparison metric, indicating similar ratios. The differences between the CPU time ratios and the models solved ratios can be attributed to the time needed by AUGMECON-R to perform the bypass condition checks. Furthermore, the differences in ratios among the three problems can be attributed to their different density, i.e. the ratio of the number of solutions included in the Pareto front to the number of models solved: the denser the problem, the smaller the time difference between the two algorithms, as fewer iterations are avoided in the loops outside the innermost loop.

Table 4 Comparison ratios of performance of AUGMECON 2 over AUGMECON-R for the 3kpY problems

To further highlight the enhanced performance of AUGMECON-R over AUGMECON 2, we loosen the arbitrary selection of the lower bounds, to maximise the probability of including the actual nadir points in the analysis and ensure that no solution is missed. So, instead of multiplying the nadir values of the payoff tables by 95%, as was the case in the problems above, we repeat our analysis of these three problems multiplying the nadir values by 5%, leading to an emphatically larger grid, in order to evaluate how this impacts the performance of the two algorithms (Table 5).

Table 5 Performance comparison between AUGMECON 2 (AUGM 2) and AUGMECON-R (AUGM-R) for the 3kpY problems with lower bounds

Although the difference between the two algorithms is now even more evident in the case of significantly lower bounds, comparing Tables 3 and 5 shows that this problem modification led to a CPU time increase of 2.24%, 6.00% and 6.40% for AUGMECON-R, compared to a CPU time increase of 170.00%, 103.50% and 209.52% for AUGMECON 2, for 3kp100, 3kp50 and 3kp40 respectively. Similar findings can be observed for all other relevant metrics; for example, while the additional models solved by AUGMECON-R are negligible in all three problems (41, 2 and 2), the same cannot be said for AUGMECON 2 (314,157, 36,197 and 28,550 respectively).

4.2 Complex benchmark problems

Here, we implement both algorithms to evaluate AUGMECON-R on problems of more than three objective functions. Before doing so, however, we distinguish between uncorrelated and weakly correlated problems (Martello and Monaci 2020; Shah and Reed 2011): uncorrelated problems assume no correlation between elements of the objective function coefficient matrix and those of the constraint coefficient matrix, while weakly correlated problems assume a weak correlation between these elements. This weak correlation makes their solution significantly more difficult, as the solver requires more time; given the time AUGMECON 2 requires to solve such problems, only uncorrelated problems are considered in this study.

We define the following problems:

  • A 4kp40 problem, with 155, 119 and 121 being the true nadir points of the three constrained objective functions and 123, 127 and 140 being their ranges respectively.

  • A 4kp40* problem, which is identical to the 4kp40 problem but without a priori knowledge of the nadir points, hence with the consideration of significantly lower bounds: 15, 11 and 13 being the lower bounds and 263, 235 and 248 the ranges respectively.

  • A 4kp50 binary problem, with objective function coefficients resulting from a uniform distribution \( U\left[ {0,1} \right] \) and constraint coefficients from a uniform distribution \( U\left[ {50,70} \right] \), with 718, 735 and 713 being the true nadir points, and 51, 35 and 44 being the ranges respectively.

  • A 4kp50* binary problem, which is identical with the 4kp50 binary problem but after extending the range of the objective functions by assigning new lower bounds at 70, 69 and 57.

  • A 5kp40 problem, with objective function coefficients resulting from a uniform distribution \( U\left[ {40,50} \right] \) and constraint coefficients from a uniform distribution \( U\left[ {2,10} \right] \), with 29, 32, 27 and 27 being the true nadir points of the four constrained objective functions, and 21, 21, 27 and 25 being the ranges respectively.

  • A 5kp40* problem, which is identical with the 5kp40 problem but after extending all ranges to 45, to make sure we include the now unknown true nadir points, with 5, 8, 9 and 7 being the new lower bounds.

  • A 6kp50 binary problem, with objective function coefficients resulting from a uniform distribution \( U\left[ {0,1} \right] \) and constraint coefficients from a uniform distribution \( U\left[ {0,5} \right] \), with 38, 37, 31, 27 and 30 being the true nadir points of the five constrained objective functions, and 21, 24, 26, 30 and 22 being the ranges respectively.

  • A 6kp50* problem, which is identical with the 6kp50 binary problem but after extending all ranges to 50, to make sure we include the now unknown true nadir points, with 9, 11, 7, 7 and 2 being the new lower bounds.

The matrices of the objective function and constraint coefficients are provided in Appendix 2, for all of the above pairs of problems, i.e. for 4kp40 – 4kp40*, 4kp50 – 4kp50*, 5kp40 – 5kp40* and 6kp50 – 6kp50*.

Table 6 summarises the performance differences between AUGMECON 2 and AUGMECON-R, for the problems 4kp40 and 4kp40*, while Fig. 2 visualises the Pareto front of the problem.

Table 6 Performance comparison between AUGMECON 2 and AUGMECON-R for the 4kp40 problem, with the true nadir points (4kp40) and with lower bounds (4kp40*)
Fig. 2 The Pareto front of the 4kp40 problem

It is evident from Table 6 that AUGMECON-R is almost 21 times faster than its predecessor, with the latter solving almost 26 times more models, in the problem where the actual nadir points are known a priori; this ratio difference is, as discussed above, due to the number of checks made by AUGMECON-R in its \( flag \) array. When the true nadir points are unknown and the grid is thus extended to secure that they are included in the analysis and that no solution is missed, AUGMECON-R outperforms AUGMECON 2 by solving about 131 times fewer models, more than 85 times faster. One odd finding is that AUGMECON-R now solves even fewer models than before; given how small the lower bounds are, the surplus variables are significantly larger, and this circumstantially leads to fewer models. This, however, does not negatively affect the accuracy of the algorithm, as it encounters equally as many infeasibilities.

Similarly, Table 7 summarises the performance differences between AUGMECON 2 and AUGMECON-R, for the binary problems 4kp50 and 4kp50*, while Fig. 3 visualises the Pareto front of the problem.

Table 7 Performance comparison between AUGMECON 2 and AUGMECON-R for the 4kp50 problem, with the true nadir points (4kp50) and with lower bounds (4kp50*)
Fig. 3 The Pareto front of the 4kp50 problem

Again, AUGMECON-R solves the 4kp50 problem almost 32 times faster, having solved about 35 times fewer models. What is strikingly interesting, however, is that for the 4kp50* problem AUGMECON 2 required 161 hours and solved more than four million models. These findings clearly indicate that AUGMECON-R has the capacity to solve, in a timely manner, complex problems that are otherwise non-viable in terms of time, ensuring accuracy and that no solution is missed.

Moving onto a five-objective problem, Table 8 summarises the performance differences between AUGMECON 2 and AUGMECON-R, for the problems 5kp40 and 5kp40*, while Fig. 4 visualises the Pareto front of the problem.

Table 8 Performance comparison between AUGMECON 2 and AUGMECON-R for the 5kp40 problem, with the true nadir points (5kp40) and with lower bounds (5kp40*)
Fig. 4 The Pareto front of the 5kp40 problem

AUGMECON-R solved the 5kp40 problem almost 68 times faster, having solved about 83 times fewer models, while for the 5kp40* problem, in which a priori knowledge of the nadir points is lacking and lower bounds are considered, these differences surge to 505 and 737 times respectively. In both cases, it is evident that AUGMECON-R outperforms AUGMECON 2 even more than in the previous two sets of four-objective problems; as previously discussed, the larger the number of objective functions, the larger this outperformance. This is highlighted in the final problem of six objective functions, as shown in Table 9 and Fig. 5.

Table 9 Performance comparison between AUGMECON 2 and AUGMECON-R for the 6kp50 problem, with the true nadir points (6kp50) and with lower bounds (6kp50*)
Fig. 5 The Pareto front of the 6kp50 problem

As with the 4kp50 binary problem, the 6kp50 binary problem, solved by both AUGMECON 2 and AUGMECON-R, emphasises the performance difference between the two methods. Given the significantly higher complexity that a sixth objective function adds to the problem, the CPU time and models solved ratios are even higher in this case, with AUGMECON 2 solving 175 times more models, 155 times slower. When the grid is extended to ensure that the a priori unknown true nadir points are included in the analysis, AUGMECON 2 cannot solve the problem in a reasonable amount of time: it took the algorithm 47 hours to cross the grid once.

5 Conclusions

In this study, an improved version of the augmented ε-constraint method, AUGMECON-R, is introduced, allowing robust and timely optimisation of complex systems. Drawing on the weaknesses associated with its predecessor (AUGMECON 2), the concept and mathematical model of the proposed method are presented in detail and its code is provided in Appendix 1, before both methods are implemented in comparison. The problems solved in Sects. 4.1 and 4.2 suggest that the proposed method, AUGMECON-R, greatly outperforms its predecessor, AUGMECON 2, by solving significantly fewer models in emphatically less time, allowing us to easily solve, in a timely manner, problems of multiple objective functions that are otherwise hard or even practically impossible in terms of time and processing requirements. AUGMECON-R furthermore addresses the problem of unknown nadir points, by using very low or zero-value lower bounds without increasing the time requirements.

As with the ε-constraint method (e.g. Ehrgott and Ryan 2002; Laumanns et al. 2006), there exist in the literature a few other attempts to identify weaknesses of the AUGMECON 2 algorithm and improve it accordingly (e.g. Domínguez-Ríos et al. 2019). These, however, tend to perform a posteriori and numerous checks that potentially increase complexity and time requirements; one example is Zhang and Reimann (2014), which is additionally developed in Visual Studio Express instead of GAMS, the usual implementation platform for operational research problems.

One limitation of the proposed method lies in the introduction of the \( flag \) array, the size of which is directly linked to the ranges of the objective functions and can therefore occupy a large memory space that may be unavailable. To overcome this, AUGMECON-R could in the future be developed in an object-oriented language like C++, instead of GAMS, which would allow for dynamic memory allocation. This would enable using virtual memory, avoiding the initialisation of the \( flag \) array with zero values, and releasing memory space whenever a counter moves across the grid.

Given that even the slightest uncertainty in the data can render system optimality meaningless from a practical point of view (Bertsimas and Sim 2004), and given recent advancements in robust linear optimisation (Bertsimas and Brown 2009), other prospects for future research include the all-in-one integration of AUGMECON-R with uncertainty and robustness analysis methods (Van de Ven et al. 2019; Mastorakis and Siskos 2016; Ben-Tal et al. 2010; Kadziński et al. 2017a, b; Witting et al. 2013), thereby avoiding the use of numerous methods, code scripts, or even implementation platforms.