Abstract
This paper deals with an interesting facility location problem known as the bi-objective p-Median and p-Dispersion problem (BpMD problem). The BpMD problem seeks to locate p facilities to service a set of n demand points, and the goal is to minimize the total distance between facilities and demand points and, simultaneously, maximize the minimum distance between all pairs of hosted facilities. The problem is addressed with a novel path relinking approach, called reactive path relinking, which hybridizes two of the most extended path relinking variants: interior path relinking and exterior path relinking. Additionally, the proposal is adapted to a multi-objective perspective for finding a good approximation of the Pareto front. Computational results prove the superiority of the proposed algorithm over the best procedures found in the literature.
1 Introduction
Facility location problems (FLP) are a well-studied family of combinatorial optimization problems that consist in finding the best locations to site a certain number of facilities so as to efficiently serve all demand points. As described in López-Sánchez et al. (2020), the definition of "best" in the facility location literature varies, since it depends on the idiosyncrasies of each particular problem and its solution requirements. Briefly, in the p-Median problem, first studied in the mid-sixties (Hakimi 1964, 1965), the objective function seeks to minimize the total distance between demand points and their nearest facility; in the p-Center problem, first solved in Minieka (1970), the objective function seeks to minimize the maximum distance between demand points and their nearest facilities; in the maximal covering location problem, first introduced in Church and Velle (1974), the objective consists in maximizing the total number of demand points covered by the facilities within a specific distance; and in the p-Dispersion Problem (Erkut 1990), the objective selects facilities as dispersed as possible. Many other variants of FLP optimize different objectives. Traditionally, most FLPs have been widely studied both theoretically and practically, and many solution methods, both exact and approximate, have been successfully proposed.
FLPs become even more difficult when more than one objective function must be optimized at a time. This situation arises when companies, either public or private, deal with real-world FLPs, since these are inherently multi-objective in nature. Indeed, besides the typical objective function, expressed as a sum of various component expenses (most simply, transportation and fixed costs), multi-objective approaches allow optimizing resource utilization and customer responsiveness in addition to the standard economic objectives. Recent concerns regarding climate change and other environmental objectives have also been considered. See Farahani et al. (2019) for a thorough review.
The optimization of such problems is hard, since the objectives to be optimized are usually in conflict with each other (Deb and Deb 2014). Indeed, when this happens, it is impossible to improve one objective without deteriorating another, which calls for multi-objective optimization (Sánchez-Oro et al. 2020). Most of these problems appear when solving real-world challenges, because more than one perspective needs to be tackled simultaneously, such as the profit of the company, the benefits of the workers, or the satisfaction of the customers, among others (Franca et al. 2010; Sagrado et al. 2015; Pinto-Varela et al. 2011). Considering more than one perspective at the same time naturally gives rise to conflicts between the different objectives.
In the last few years, the number of scientific papers addressing FLPs from a multi-objective point of view has increased considerably. For instance, the p-Center problem and the p-Dispersion Problem are studied from a bi-objective optimization perspective in Tutunchi and Fathi (2019) and Pérez-Peló et al. (2019). Moreover, Karatas and Yakıcı (2018) and López-Sánchez et al. (2020) address a multi-objective FLP that considers three problems: the p-Center problem, the p-Median problem, and the maximal coverage location problem. Different objectives were simultaneously considered in Karatas (2017), where the coverage of the demand points is maximized while the total distance and the workload balance of the facilities are minimized. Similarly, in Wang et al. (2018), the total coverage is maximized while the total distance is simultaneously minimized. Many other FLP variants exist, depending on the objectives to optimize. For a recent survey of this family of problems, see Boonmee et al. (2017).
This paper is focused on the Bi-objective p-Median and p-Dispersion problem (BpMD problem), a multi-objective facility location problem which was first addressed in Sayyady (2012). Both objectives have been proven to be in conflict in Sayyady (2012) and Sayyady et al. (2015). Additionally, for a thorough analysis of the relationship between several facility location problems from a multi-objective perspective, see Erkut and Neuman (1989).
As stated in Sayyady et al. (2015), the location of traffic sensors in highway networks motivated the simultaneous consideration of the p-Median and p-Dispersion objectives. Nevertheless, this is not the only realistic problem that fits this bi-objective model. For instance, consider locating the outlets of a profitable chain of businesses across a region, all of them identical: the goal would be to locate each business near the customers but, at the same time, as far as possible from the others so that they do not compete with one another; herein lies the importance of solving the BpMD problem. Sayyady et al. (2015) formulated the BpMD problem as an integer programming model, which they solved using the \(\epsilon \)-Constraint (\(\epsilon \)-C) method. Furthermore, they proposed an alternative Integer Programming (IP) model and an iterative algorithm named the Incremental Algorithm. They also used a Lagrangian heuristic procedure to solve the IP model for larger instances. Later on, Colmenar et al. (2018) proposed the first metaheuristic for the BpMD problem, a Scatter Search algorithm with three improvement methods: a local search based on dominance, a local search that alternates both objectives, and a local search that tries to minimize the normalized distance to an ideal point. The algorithms providing the best results for the BpMD problem in the literature were proposed by Sayyady et al. (2015) and Colmenar et al. (2018); therefore, both algorithms have been considered in the competitive testing of Sect. 4. Furthermore, the comparison includes the well-known multi-objective evolutionary algorithms MOEA/D, Multi-Objective Evolutionary Algorithm based on Decomposition (Zhang and Li 2007); NSGA-II, elitist Non-dominated Sorting Genetic Algorithm (Deb et al. 2002); and SPEA2, Strength Pareto Evolutionary Algorithm (Zitzler et al. 2001).
In this paper, a novel approach based on the path relinking (PR) methodology is proposed, named reactive path relinking (RPR), which combines the two most extended path relinking strategies, Interior and Exterior PR (IPR and EPR, respectively), to solve the BpMD problem. The EPR strategy was first introduced by Duarte et al. (2015), where it was compared against the IPR strategy. The authors solved another interesting FLP known as the Minimum Differential Dispersion Problem, whose objective is to minimize the difference between the sum of the maximum and the sum of the minimum distances between the demand points and the selected facilities. The novelty of the current proposal lies in the combination of both path relinking strategies (IPR and EPR) into a single one, called reactive path relinking, which decides which strategy should be used depending on the pair of solutions to combine. Computational results prove the superiority of the proposed algorithm over the state-of-the-art procedures on the set of instances previously used in the related literature. Specifically, widely accepted multi-objective metrics have been considered to compare the procedures, with RPR emerging as the most competitive algorithm in both quality and computing time.
The main contributions of this work are the following:
- A novel metaheuristic algorithm based on the path relinking methodology, the reactive path relinking (RPR) algorithm, is proposed.
- The RPR algorithm is adapted to address a multi-objective facility location problem: the BpMD problem.
- A complete analysis of the performance of the novel reactive path relinking approach is included.
- A competitive testing is performed comparing the RPR algorithm with the state-of-the-art algorithms.
- The proposed algorithm and the obtained results are publicly available to ease further comparisons.
The rest of this paper is organized as follows. Section 2 describes the BpMD problem and includes some definitions related to the problem. Section 3 details the algorithm implemented to solve the problem under consideration. Section 4 includes the performance metrics used to test the quality of the algorithm and the computational experiments to check the validity of our proposal. Finally, Sect. 5 summarizes the paper and discusses future work.
2 Problem definition
Let N and F be two sets of locations, with \(|N| = n\) representing the demand points and \(|F|=m\) representing the candidate facilities. Without loss of generality, in this paper \(N=F\), since our instances consider that demand points are also candidates to host facilities. It is worth mentioning that this assumption is not always made, since in some cases N and F are disjoint sets. Let us define \(d_{ij} \ge 0\) as the distance between i and j, with \(i,j \in N\). This distance satisfies the triangle inequality \(d_{ij}\le d_{ik}+d_{kj}, \forall i,j,k \in N\), and \(d_{ii}=0\). The goal of a facility location problem is to select a subset of nodes to host facilities, \(S \subset N\) with \(|S|=p\), in order to optimize one or more objective functions.
The p-Median problem (pMP) selects a subset of facilities \(S \subset N\), with \(|S| = p\), with the aim of minimizing the total distance between demand points and their closest facility. Given a solution S, the objective function of the p-Median problem is evaluated as:
\(f_{pM}(S)=\sum _{i\in N}\min _{j\in S} d_{ij}\)
Then, the objective of the p-Median problem is to find a solution \(S^\star _{pM}\) with the minimum \(f_{pM}\) value. More formally,
\(S^\star _{pM}=\arg \min _{S\in {\mathbb {P}}} f_{pM}(S)\)
where \({\mathbb {P}}\) is the set of all possible combinations of facilities that can be formed by selecting p elements from the set of available locations N.
The p-Dispersion problem (pDP) seeks to maximize the minimum distance between the p selected facilities, with the aim of distributing the facilities throughout the available space and avoiding placing facilities close to each other. Given a solution S, the objective function of the p-Dispersion problem is evaluated as:
\(f_{pD}(S)=\min _{i,j\in S,\, i\ne j} d_{ij}\)
The p-Dispersion problem then consists in finding a solution \(S^\star _{pD}\) with the maximum \(f_{pD}\) value. Specifically,
\(S^\star _{pD}=\arg \max _{S\in {\mathbb {P}}} f_{pD}(S)\)
Therefore, the Bi-objective p-Median and p-Dispersion problem, known as the BpMD problem, is focused on solving both problems at the same time, the pMP and the pDP. It must be emphasized that the objective functions of the pMP and the pDP are in conflict and, consequently, the improvement of one objective function leads to the deterioration of the other. It is worth mentioning that a detailed integer programming model for the BpMD is included in Sayyady et al. (2015).
Next, an example with \(N = \{1,2,3,4,5\}\) (with \(|N| = n = 5\)) and the aim of locating \(p = 3\) facilities is included in order to show the conflict between the two objective functions. Table 1 shows all possible solutions (10 combinations): the first column (named Facilities) shows the x and y coordinates of the chosen facilities, and columns 2 and 3 report the values of the objective functions of the two considered problems, pMP and pDP, respectively. In this example, the distance between any two points is the Euclidean distance. The best objective function value for each problem in isolation is highlighted in bold font.
Figures 1 and 2 represent the optimal location for the p-Median problem and the p-Dispersion problem, respectively. Facilities are represented with squares and demand points with circles. Considering the Euclidean distance, the optimal solution value for the p-Median problem is 2.41 units and the facilities must be located at points (1, 4), (2, 2), and (4, 4). On the contrary, the value of the objective function of the p-Dispersion problem is 2.24 units, when considering the same facilities. Analogously, the optimal solution value for the p-Dispersion problem is 3.00 units and the facilities should be located at points (1, 1), (1, 4), and (4, 4), while the evaluation of the objective function for the p-Median problem yields 3.65 units.
As this toy example shows, both objectives are in conflict; that is, it might not be possible to find a single solution with the optimum value for \(f_{pM}\) and \(f_{pD}\) simultaneously, making it impossible to improve one objective without deteriorating the other.
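To make the example concrete, the following Java sketch evaluates both objectives for the two highlighted solutions. It is a minimal illustration, not the paper's actual code; the five coordinates (1,1), (1,4), (2,2), (3,2), (4,4) are an assumption chosen to be consistent with the values reported above.

```java
import java.util.Set;

// Minimal sketch of the two objective evaluations on the toy example.
// The coordinates below are assumed (consistent with the reported values).
public class ToyExample {
    static final double[][] PTS = {{1, 1}, {1, 4}, {2, 2}, {3, 2}, {4, 4}};

    static double dist(int i, int j) {
        double dx = PTS[i][0] - PTS[j][0], dy = PTS[i][1] - PTS[j][1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    // f_pM: total distance from every demand point to its closest facility
    static double fpM(Set<Integer> facilities) {
        double total = 0;
        for (int i = 0; i < PTS.length; i++) {
            double nearest = Double.MAX_VALUE;
            for (int j : facilities) nearest = Math.min(nearest, dist(i, j));
            total += nearest;
        }
        return total;
    }

    // f_pD: minimum distance between any two selected facilities
    static double fpD(Set<Integer> facilities) {
        double min = Double.MAX_VALUE;
        for (int i : facilities)
            for (int j : facilities)
                if (i < j) min = Math.min(min, dist(i, j));
        return min;
    }

    public static void main(String[] args) {
        Set<Integer> medOpt = Set.of(1, 2, 4);  // (1,4), (2,2), (4,4)
        Set<Integer> dispOpt = Set.of(0, 1, 4); // (1,1), (1,4), (4,4)
        System.out.printf("pM-optimal: fpM=%.2f, fpD=%.2f%n", fpM(medOpt), fpD(medOpt));   // 2.41, 2.24
        System.out.printf("pD-optimal: fpM=%.2f, fpD=%.2f%n", fpM(dispOpt), fpD(dispOpt)); // 3.65, 3.00
    }
}
```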
Next, without loss of generality, the BpMD problem is formally defined as a bi-objective combinatorial minimization problem:
\(\min _{S\in {\mathbb {P}}} \left( f_{pM}(S),\, -f_{pD}(S)\right) \)
where \({\mathbb {P}}\) is called the feasible set and \(f_{pM}:{\mathbb {P}}\rightarrow {\mathbb {R}}\) and \(f_{pD}:{\mathbb {P}}\rightarrow {\mathbb {R}}\) are the two considered objective functions. The image of the feasible set, \(\{(f_{pM}(S),-f_{pD}(S)):S\in {\mathbb {P}}\}\), is named the objective space.
Let us briefly introduce the basic concepts of multi-objective definitions. Given two solutions \(S_1\) and \(S_2\) in \({\mathbb {P}}\):
- \(S_1\) weakly dominates \(S_2\), denoted as \(S_1\preceq S_2\), if and only if \(f_{pM}(S_1)\le f_{pM}(S_2)\) and \(-f_{pD}(S_1)\le -f_{pD}(S_2)\).
- \(S_1\) dominates \(S_2\), denoted as \(S_1 \prec S_2\), if and only if \(S_1\preceq S_2\) and at least one objective function is strictly better, that is, \(f_{pM}(S_1) < f_{pM}(S_2)\) and \(-f_{pD}(S_1)\le -f_{pD}(S_2)\), or \(f_{pM}(S_1) \le f_{pM}(S_2)\) and \(-f_{pD}(S_1) < -f_{pD}(S_2)\).
- \(S_1\) strictly dominates \(S_2\), denoted as \(S_1\prec \prec S_2\), if and only if \(f_{pM}(S_1) < f_{pM}(S_2)\) and \(-f_{pD}(S_1) < -f_{pD}(S_2)\).
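These relations translate directly into code. The following Java sketch (names are illustrative, not taken from the authors' implementation) represents each solution by its objective vector in minimization form, \((f_{pM}(S), -f_{pD}(S))\):

```java
// Objective vector of a solution, with both components in minimization form.
final class ObjVector {
    final double fpM;    // f_pM(S)
    final double negFpD; // -f_pD(S)
    ObjVector(double fpM, double negFpD) { this.fpM = fpM; this.negFpD = negFpD; }

    // S1 weakly dominates S2: no worse in either objective
    static boolean weaklyDominates(ObjVector a, ObjVector b) {
        return a.fpM <= b.fpM && a.negFpD <= b.negFpD;
    }

    // S1 dominates S2: weakly dominates with at least one strict inequality
    static boolean dominates(ObjVector a, ObjVector b) {
        return weaklyDominates(a, b) && (a.fpM < b.fpM || a.negFpD < b.negFpD);
    }

    // S1 strictly dominates S2: strictly better in both objectives
    static boolean strictlyDominates(ObjVector a, ObjVector b) {
        return a.fpM < b.fpM && a.negFpD < b.negFpD;
    }
}
```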
The goal of the BpMD problem is to find a high-quality approximation of the set of non-dominated solutions, also known as efficient solutions or Pareto set. It should be borne in mind that the Pareto set contains only those solutions that are not dominated by any other solution; the image of the Pareto set is named the Pareto front. In order to achieve this goal, a metaheuristic procedure able to solve the BpMD problem is proposed. The metaheuristic finds a good approximation of the Pareto front with high-quality solutions in a reasonable amount of time, even for problems of considerable size.
3 Algorithmic proposal
The methodology used to solve the BpMD problem is based on a simple but efficient algorithm which has been successfully applied to a wide variety of combinatorial optimization problems: the path relinking (PR) metaheuristic (see Resende et al. (2010); Pérez-Peló et al. (2020); Campos et al. (2014) for relevant research in PR). This metaheuristic was first proposed in the framework of tabu search to integrate intensification and diversification strategies (Glover and Laguna 1998). The success of PR relies on exploring trajectories that connect pairs of high-quality solutions (hereinafter, the initiating and guiding solutions, \(S_i\) and \(S_g\), respectively). The procedure generates new intermediate solutions that are hopefully better than the pair of connected solutions. Specifically, PR links \(S_i\) to \(S_g\) by building a path of solutions that share attributes from both endpoints. Without loss of generality, combining two solutions \(S_i\) and \(S_g\) consists in iteratively including attributes of \(S_g\) in \(S_i\) until \(S_i = S_g\).
In this paper, two different PR strategies, Interior and Exterior PR (IPR and EPR, respectively), are considered. Additionally, a novel approach based on the combination of these two path relinking strategies is proposed. Both strategies were deeply studied, although in an isolated way, by Duarte et al. (2015). The novelty of our proposal lies in the combination of both, IPR and EPR, into a single strategy which autonomously decides which of the two should be used by analyzing the pair of solutions to combine.
3.1 Constructive method
As previously explained, the PR methodology requires a set of efficient solutions to combine. Although they can be constructed at random, a greedy constructive procedure is proposed in order to obtain promising points of the search space. Of course, another way to obtain a good set of efficient solutions would be to use multi-start algorithms such as GRASP (Greedy Randomized Adaptive Search Procedure); see Feo and Resende (1995). The constructive method is responsible for obtaining the initial set of efficient solutions. It uses a greedy criterion to generate solutions from scratch by means of an iterative algorithm. At each iteration, the algorithm performs the best choice with respect to the objective function to optimize.
The BpMD problem is a bi-objective optimization problem, so two conflicting objective functions need to be optimized. Therefore, to consider both objectives at the same time in the constructive procedure, the bi-objective optimization problem is turned into a single-objective one. There are several ways of combining two or more objectives in an optimization procedure; for example, Marti et al. (2015) present an approach for switching between different objectives. In this research, an aggregation of the considered objectives is proposed, resulting in a simple yet effective mechanism. More precisely, given a partial solution \(S'\) with \(|S'| < p\) and a node \(v \in N {\setminus } S'\), the proposed function is defined as \(\beta \cdot f_{pM}(S'\cup \{v\})-(1-\beta ) \cdot f_{pD}(S' \cup \{v\})\), where \(\beta \in [0,1]\) is a weight reflecting the relative importance of the pMP objective.
In order to normalize both objective functions in the range [0, 1], the following intervals are defined: \(f_{pM}(S' \cup \{v\})\in [\min _{pM};\max _{pM}]=[0; \max d_{ij}: \ \forall i,j \in N \wedge i\ne j ] \) and \(f_{pD}(S' \cup \{v\})\in [\min _{pD};\max _{pD}]=[0;\max d_{ij}, \ \forall i,j \in N \wedge i\ne j ]\), where \(\min _{pM}\) and \(\max _{pM}\) represent the minimum and the maximum values of \(f_{pM}\), respectively, and similarly, \(\min _{pD}\) and \(\max _{pD}\) are the minimum and the maximum values of the \(f_{pD}\), respectively. For the sake of convenience, the objective function of the pDP is transformed into a minimization problem by simply switching the sign, i.e., \(-f_{pD}(S' \cup \{v\})\in [-\max _{pD};-\min _{pD}]\).
Therefore, given a partial solution \(S'\), a candidate \(v \in N {\setminus } S'\), and a weight \(\beta \), the considered greedy function is defined as follows:
\(f_r(S' \cup \{v\}) = \beta \cdot \frac{f_{pM}(S' \cup \{v\})-\min _{pM}}{\max _{pM}-\min _{pM}} - (1-\beta ) \cdot \frac{f_{pD}(S' \cup \{v\})-\min _{pD}}{\max _{pD}-\min _{pD}}\)   (1)
Algorithm 1 shows the pseudocode of the greedy construction used in this paper, which receives as input parameters the set of candidate elements able to host a facility and the step used to update \(\beta \). It is worth mentioning that, since a multi-objective optimization problem is being addressed, the output of the constructive procedure is a set of efficient solutions.
The algorithm starts with an initial set of efficient solutions, ES, that is empty and the parameter \(\beta \) is initialized to zero (steps 1 and 2). With the aim of increasing the diversity of the set of constructed solutions, the values of \(\beta \) are uniformly selected in the interval [0, 1]. Specifically, \(\beta \) is set from 0.0 to 1.0, where each iteration increases its value by adding the input parameter step (see steps 3 to 15).
Greedy algorithms are mainly deterministic procedures, since at each step they select the best element to be included in the partial solution under construction. In order to further improve diversification, several solutions are constructed, each one initialized with a different element. In particular, if there are n elements, at most n different solutions can be constructed (steps 4 to 13).
The method selects the first facility, v, to be added to the solution \(S'\) (step 5), creating the list of candidate nodes, C, with all nodes except the one already included in the solution (step 6). The algorithm then includes one node at each iteration until p nodes have been included in \(S'\) (steps 7–11). In each iteration, the most promising node \(v'\) is selected from the list of candidates by using the aforementioned greedy function (see Eq. 1). Then, the selected node is included in the solution (step 9) and removed from the candidate list (step 10).
Since a bi-objective optimization problem is being addressed, once a feasible solution is found, it must be checked whether it can be admitted into the set of efficient solutions (step 12). Specifically, the function \( Insert \& Update (S',ES)\) determines whether the solution \(S'\) is an efficient solution and, if so, inserts it into the set of efficient solutions, removing all solutions of the set that are dominated by \(S'\). Finally, before starting a new iteration, the value of \(\beta \) is updated in step 14. The greedy algorithm ends by returning the set of efficient solutions (step 16).
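A sketch of this construction in Java is shown below, assuming the usual java.util imports. It also assumes a precomputed distance matrix d, objective evaluators fpM and fpD analogous to those of the toy example in Sect. 2 (generalized to the matrix), and an insertAndUpdate helper playing the role of Insert&Update; all names are illustrative rather than the authors' code.

```java
// Sketch of Algorithm 1: greedy construction driven by the weighted
// function of Eq. 1, sweeping beta from 0 to 1 and trying every start node.
static List<Set<Integer>> construct(double[][] d, int n, int p, double step) {
    List<Set<Integer>> es = new ArrayList<>(); // set of efficient solutions
    double dMax = 0;                           // max pairwise distance, used to normalize
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) dMax = Math.max(dMax, d[i][j]);
    for (double beta = 0.0; beta <= 1.0 + 1e-9; beta += step) {
        for (int first = 0; first < n; first++) { // one construction per starting node
            Set<Integer> sol = new HashSet<>();
            sol.add(first);
            while (sol.size() < p) {
                int best = -1;
                double bestVal = Double.MAX_VALUE;
                for (int v = 0; v < n; v++) {     // pick the candidate minimizing f_r (Eq. 1)
                    if (sol.contains(v)) continue;
                    sol.add(v);
                    double fr = beta * fpM(sol, d) / dMax - (1 - beta) * fpD(sol, d) / dMax;
                    sol.remove(v);
                    if (fr < bestVal) { bestVal = fr; best = v; }
                }
                sol.add(best);
            }
            insertAndUpdate(sol, es);             // keep sol only if non-dominated
        }
    }
    return es;
}
```

The normalization by dMax follows the intervals defined above, where both objectives are scaled by the maximum pairwise distance.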
3.2 Local search method
Once an initial set of efficient solutions is attained, a second phase is considered, which consists in applying a local search to every efficient solution obtained. As a facility location problem is being solved, the neighborhood of a solution S, denoted as \({\mathcal {N}}(S)\), for the BpMD problem is defined as the set of solutions that can be obtained by exchanging a selected facility with any non-selected facility. Specifically,
\({\mathcal {N}}(S)=\{(S{\setminus }\{s\})\cup \{v\} \ : \ s\in S,\ v\in N{\setminus } S\}\)
The proposed local search follows a first-improvement strategy. In particular, the method traverses \({\mathcal {N}}(S)\), performing the first move that leads to a better solution with respect to the considered objective. If such a move exists, the search is restarted from the neighborhood of the updated solution; otherwise, the method stops since no better solution can be found. It is important to bear in mind that two objective functions are being addressed, so there are different variants to consider (see Mladenović et al. 2007; Erkut 1990). The weighted sum function previously defined in Eq. 1 is used again to guide the search. Specifically, the procedure starts with \(\beta = 0\), which means that only \(f_{pD}\) is improved (ignoring the other objective). Notice that after each move, the corresponding neighboring solution is tested for admission into the set of efficient solutions. This strategy is maintained until \(\beta = 1.0\), which means that only \(f_{pM}\) is improved, ignoring \(f_{pD}\).
Algorithm 2 shows the pseudocode of the proposed improvement strategy. It receives three input parameters: an initial solution, S; the set of efficient solutions, ES; and the \(\beta \) parameter. As customary, the local search ends when no improvement is found (steps 2 to 11). In each iteration, the method explores the aforementioned neighborhood (steps 4 to 10) following the first-improvement strategy; that is, the exploration is halted as soon as a better neighboring solution is found (steps 5 to 8). The comparison between S and \(S'\) is done by considering Eq. 1 (step 5). Notice that every explored solution must be tested for admission into ES (step 9). Finally, the local search algorithm returns the set of efficient solutions (step 12).
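The following Java sketch mirrors Algorithm 2 under the same assumptions as the construction sketch above (fR evaluates Eq. 1, insertAndUpdate plays the role of Insert&Update); it is an illustration, not the authors' code.

```java
// Sketch of Algorithm 2: first-improvement swap local search.
static void localSearch(Set<Integer> s, List<Set<Integer>> es,
                        double beta, double[][] d, int n) {
    boolean improved = true;
    while (improved) {
        improved = false;
        search:
        for (int out : new ArrayList<>(s)) {   // selected facility to drop
            for (int in = 0; in < n; in++) {   // non-selected facility to add
                if (s.contains(in)) continue;
                Set<Integer> neighbor = new HashSet<>(s);
                neighbor.remove(out);
                neighbor.add(in);
                insertAndUpdate(neighbor, es); // every neighbor may enter ES
                if (fR(neighbor, beta, d) < fR(s, beta, d)) {
                    s.clear();                 // first improvement: move and restart
                    s.addAll(neighbor);
                    improved = true;
                    break search;
                }
            }
        }
    }
}
```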
So far, our proposal is quite similar to GRASP. However, our way of constructing solutions is always deterministic, the only diversification being the change of the initial node added to the partial solution under construction; therefore, n different solutions are obtained in the constructive phase. Once a feasible solution is obtained, the local search is applied as is done in GRASP.
3.3 Interior path relinking
Interior PR (IPR) creates a path that connects two high-quality feasible solutions, exploring new solutions while traversing the path. The rationale behind this is that if the initiating and guiding solutions are promising, then there is a high probability of finding good solutions in the path that connects them.
The path between \(S_i\) and \(S_g\) is created by including in \(S_i\) elements from \(S_g \setminus S_i\), exchanging them with elements in \(S_i \setminus S_g\). In other words, it consists in iteratively modifying \(S_i\) to become more similar to \(S_g\) at every step, until reaching \(S_g\).
Figure 3 illustrates the IPR with an example. The example considers a set of \(|N| = 10\) nodes labelled from 1 to 10 and \(p=5\). Let \(S_i=\{1,2,3,4,5\}\) and \(S_g=\{1,2,6,7,8\}\) be the initiating and guiding solutions, respectively. In the first step of the IPR, element 3 is replaced by element 6, obtaining the intermediate solution \(S_1=\{1,2,6,4,5\}\); in the second step, element 4 is exchanged with element 7, resulting in another intermediate solution, \(S_2=\{1,2,6,7,5\}\); and in the last step, element 5 is substituted by element 8, thus reaching the guiding solution, \(S_g\). Similarly, to transform \(S_g\) into \(S_i\), the first step of the IPR replaces element 6 by element 3, obtaining the intermediate solution \(S'_1=\{1,2,3,7,8\}\); the second step exchanges element 7 with element 4, resulting in the solution \(S'_2=\{1,2,3,4,8\}\); and the last step substitutes element 8 by element 5, thus reaching the initiating solution, \(S_i\).
Notice that every solution found along the path is considered for entering the set of efficient solutions.
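As an illustration (not the authors' code, and assuming the usual java.util imports), the next Java sketch builds one interior path and returns every intermediate solution so that it can be tested against the efficient set:

```java
// Sketch of one interior path from si to sg: swap, one by one, the elements
// of si \ sg for those of sg \ si, recording each intermediate solution.
static List<Set<Integer>> interiorPath(Set<Integer> si, Set<Integer> sg) {
    List<Set<Integer>> path = new ArrayList<>();
    Set<Integer> current = new HashSet<>(si);
    Deque<Integer> toRemove = new ArrayDeque<>(diff(si, sg)); // in si, not in sg
    Deque<Integer> toAdd = new ArrayDeque<>(diff(sg, si));    // in sg, not in si
    while (!toRemove.isEmpty()) {
        current.remove(toRemove.poll());
        current.add(toAdd.poll());
        path.add(new HashSet<>(current)); // candidate for the efficient set
    }
    return path; // the last entry equals sg
}

// Set difference a \ b
static Set<Integer> diff(Set<Integer> a, Set<Integer> b) {
    Set<Integer> r = new HashSet<>(a);
    r.removeAll(b);
    return r;
}
```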
3.4 Exterior path relinking
Exterior PR (EPR) (Duarte et al. 2015) follows the opposite idea to IPR. If the solutions in the efficient set are similar to one another, the path created with IPR will be rather short, thus leading to a small number of newly explored solutions. This behavior results in maintaining the similarity among solutions in the efficient set. It then becomes necessary to introduce some diversity in the search, in order to explore different regions of the search space.
Given an initiating solution \(S_i\) and a guiding solution \(S_g\), EPR iteratively includes in \(S_i\) elements that are not in \(S_g\), with the aim of reaching new solutions which are diverse with respect to both \(S_i\) and \(S_g\). This strategy will eventually lead the algorithm to explore a wider portion of the search space to increase the quality of the efficient set.
Figure 4 illustrates the EPR with an example. Again, a set of \(|N| = 10\) nodes is considered, labelled from 1 to 10, and \(p=5\). Let \(S_i=\{1,2,3,4,5\}\) and \(S_g=\{1,2,3,6,7\}\) be the initiating and guiding solutions. The task is to replace the elements in \(S_i\cap S_g=\{1,2,3\}\) with elements from the set \(N{\setminus }(S_i\cup S_g)=\{8,9,10\}\). In the first step of the EPR, element 1 is replaced by element 8, giving the intermediate solution \(S_{1}=\{8,2,3,4,5\}\); in the second step, element 2 is exchanged with element 9, obtaining \(S_{2}=\{8,9,3,4,5\}\); and in the third step, element 3 is substituted by element 10, resulting in the intermediate solution \(S_{3}=\{8,9,10,4,5\}\). Similarly, if the path starts from \(S_g=\{1,2,3,6,7\}\) toward \(S_i\), the first step of the EPR exchanges element 1 with element 8, resulting in the intermediate solution \(S'_{1}=\{8,2,3,6,7\}\); the second step replaces element 2 by element 9, giving the solution \(S'_2=\{8,9,3,6,7\}\); and, finally, the last step substitutes element 3 by element 10, obtaining the intermediate solution \(S'_3=\{8,9,10,6,7\}\).
3.5 Reactive path relinking
One of the main contributions of this work is the proposal of a novel path relinking approach that selects IPR or EPR analyzing the initiating and guiding solutions. This algorithm, named reactive path relinking (RPR), combines the two strategies of the PR previously described, IPR and EPR.
RPR analyzes the similarity between the initiating \(S_i\) and guiding \(S_g\) solutions. If the similarity between them does not exceed a certain threshold k, then it is assumed that \(S_i\) and \(S_g\) are different enough to apply IPR with the aim of intensifying the search. Otherwise, EPR is applied to diversify the search. The combination of both strategies balances intensification and diversification with the aim of finding a high-quality set of efficient solutions. The value of k is an input parameter of the algorithm (see Sect. 4 for a detailed experimentation to select the best value) which represents the minimum number of facilities that \(S_i\) and \(S_g\) must have in common to be considered similar solutions. Algorithm 3 shows the pseudocode of the considered RPR algorithm.
The algorithm receives three input parameters: the set of efficient solutions, \( ES \); the threshold k used to decide whether two solutions are similar; and the weight \(\beta \). RPR starts by initializing the backup copy of the efficient set (step 1) and the best solution found during the complete execution of RPR (step 2). Additionally, the set T, used to store the pairs of solutions already combined, is initialized (step 3). Then, the main body of the algorithm is repeated while the generation of paths finds non-dominated intermediate solutions (steps 4 to 32). In each iteration, the elements of the efficient set are first shuffled to avoid prioritizing any search direction (step 5). Then, in order to increase the efficiency of the method, those pairs of solutions for which a path was generated in previous iterations are discarded (step 8). After that, for each non-explored pair of solutions, their similarity is evaluated. If they are different enough (step 9), the IPR strategy is applied: the set of nodes to remove, denoted by R, contains those nodes which are in \(S_i\) but not in \(S_g\) (step 10), and the set of candidate nodes to add, denoted by A, contains those which are in \(S_g\) but not in \(S_i\) (step 11). Otherwise, the EPR strategy is applied (step 12), with R containing the nodes that \(S_i\) and \(S_g\) have in common (step 13) and A containing those that are in neither \(S_i\) nor \(S_g\) (step 14). Then, the solution used to traverse the path, S, is initialized with the initiating solution \(S_i\) (step 16).
The method iterates while there are nodes to be removed (steps 17–27). In each iteration, the nodes that will be removed and added to the solution are selected at random from R and A, respectively (steps 18–19). Then, the solution is updated (step 20), as well as the nodes to be removed (step 21) and the candidates to be added (step 22).
Considering that a multi-objective optimization problem is being solved, it must be checked whether any constructed solution can be admitted into the efficient set (step 23). Moreover, it is tested whether any of the solutions found during the generation of a path is the best one according to \(f_{r}\) (steps 24 to 26).
Once a specific path between \(S_i\) and \(S_g\) has been completed, the corresponding pair is stored in T to avoid exploring the same path in future iterations (step 28). At the end of RPR, the improvement method is applied with the aim of further improving the generated solutions, updating, if required, the efficient set (step 33). Finally, the algorithm returns the updated \( ES \).
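The core of this reactive decision can be sketched as follows (illustrative Java, reusing the diff helper from Sect. 3.3 and the assumed insertAndUpdate; the random removal and addition order corresponds to steps 18–19 of Algorithm 3):

```java
// Sketch of the reactive choice in Algorithm 3: IPR for dissimilar pairs,
// EPR for similar ones, followed by the traversal of the resulting path.
static void reactivePath(Set<Integer> si, Set<Integer> sg, int k, int n,
                         List<Set<Integer>> es) {
    Set<Integer> common = new HashSet<>(si);
    common.retainAll(sg);                      // facilities shared by si and sg
    List<Integer> r, a;
    if (common.size() < k) {                   // different enough: interior PR
        r = new ArrayList<>(diff(si, sg));     // in si but not in sg
        a = new ArrayList<>(diff(sg, si));     // in sg but not in si
    } else {                                   // too similar: exterior PR
        r = new ArrayList<>(common);           // shared facilities are removed
        a = new ArrayList<>();                 // candidates outside both solutions
        for (int v = 0; v < n; v++)
            if (!si.contains(v) && !sg.contains(v)) a.add(v);
    }
    Collections.shuffle(r);                    // random removal order
    Collections.shuffle(a);                    // random addition order
    Set<Integer> current = new HashSet<>(si);
    for (int step = 0; step < r.size() && step < a.size(); step++) {
        current.remove(r.get(step));
        current.add(a.get(step));
        insertAndUpdate(new HashSet<>(current), es); // test every intermediate solution
    }
}
```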
3.6 Computational complexity
This section is devoted to analyzing the computational complexity of the proposed algorithm. In particular, the complexity of each component of the algorithm is evaluated and, finally, the complexity of the complete algorithm is presented.
The first component of the proposed algorithm is the greedy constructive procedure. This method constructs a solution starting from each node \(i \in N\) and, in each construction, the best node is selected for inclusion at each iteration, resulting in a complexity of \(O(n\cdot p)\).
Then, for each constructed solution, the local search method is applied. Following a first-improvement approach, it requires traversing the complete neighborhood in the worst case, swapping each selected element \(s \in S\) with every available node in \(N{\setminus } S\), resulting in a complexity of \(O(n\cdot p)\).
These two phases, constructive and local search, are repeated for the \(1/\textit{step}\) considered values of \(\beta \), thus yielding a computational complexity of \(O(n \cdot p / \textit{step})\).
Once the initial set of non-dominated solutions, ES, is constructed, the RPR is applied. The RPR is executed for each pair of solutions in ES, avoiding repeated pairs (i.e., the combination of \(S_2\) and \(S_1\) is not performed if the combination of \(S_1\) and \(S_2\) has already been executed), which amounts to \(O(|ES|^2)\) combinations of cost O(p) each; in the worst case, each combination performs p iterations (when the combined solutions are completely different in IPR, or identical in EPR). Since the local search is applied to the solutions generated along each path, RPR presents a total complexity of \(O(|ES|^2 \cdot p^2 \cdot n)\).
Finally, the computational complexity of the complete algorithm is the maximum between the complexity of generating the set of non-dominated solutions and the complexity of executing the RPR over that set, that is, the maximum between \(O(n \cdot p / \textit{step})\) and \(O(|ES|^2 \cdot p^2 \cdot n)\), respectively.
4 Computational results
This section presents the computational results obtained with the proposed metaheuristic and a comparison against several of the best-known multi-objective algorithms. All experiments have been performed on an Intel Core i7-7700HQ (4 x 2.8 GHz) with 8 GB RAM, and the algorithms were implemented in Java 9. The source code has also been made publicly available.
The results of this section are divided into two parts: preliminary and final experiments. In the former, the influence of each part of the proposed algorithm is analyzed, as well as the corresponding parameter settings, while the latter shows the performance of the best configuration of the proposed algorithm over the complete set of instances. The quality of the obtained results is measured by comparing them with the best approximation of the Pareto front found (hereafter B), since the optimal Pareto front of the BpMD problem is unknown.
To evaluate the performance of the proposal, three sets of instances have been considered, denoted as the pmed set, the D set, and the kmedian set. The first one consists of 40 instances from the well-known OR-library (Beasley 1990). The number of nodes ranges from 100 to 900, while the number of facilities to select lies in the range [5, 200], giving a large variety of combinations. The second set consists of 9 medium-sized instances with \(n = 250\) and \(p = 25\), named D250, and 10 instances with \(n = 350\) and \(p = 35\), named D350, previously solved by Colmenar et al. (2018) and Sayyady et al. (2015). The last set consists of 15 large-sized instances: the smallest one contains \(n = 1000\) nodes, and subsequent instances grow by increments of 1000 nodes until reaching \(n = 5000\); each instance is solved with 3 different p values (\(n\cdot 0.05\), \(n\cdot 0.15\), \(n\cdot 0.25\)). They have been downloaded from UFLLib.
Table 2 summarizes the main characteristics of each instance. To prevent the well-known over-fitting effect, a representative subset of instances (approx. 25%) with different characteristics was randomly selected for the preliminary experiments.
Before presenting the computational results, it is necessary to briefly describe how the quality of the algorithms is measured when dealing with multi-objective optimization problems. As is well known, multi-objective optimization algorithms cannot be compared through single objective function values; instead, Pareto fronts must be compared, since multi-objective optimization yields a set of efficient solutions rather than a unique solution as in single-objective optimization. Therefore, to compare different algorithms it is necessary to use metrics that evaluate the quality of each obtained Pareto front, essentially measuring the cardinality of the front, the proximity of the obtained solutions to the best/optimal Pareto front, and the diversity of the solutions. In this work, several of the most extended multi-objective metrics have been considered (see Durillo and Nebro 2011 or Li and Yao 2019): the number of efficient solutions, coverage, spread, hypervolume, the \(\epsilon \)-indicator, generational distance, and inverted generational distance.
The number of efficient solutions in the Pareto front (|A|) counts the efficient solutions found by the considered algorithm. The decision-maker usually prefers a larger number of efficient solutions. However, this metric does not take into account the quality of the solutions.
The coverage metric, C(A, B), evaluates the proportion of solutions of the Pareto front obtained by algorithm A that are weakly dominated by solutions of the best/optimal Pareto front B. It is computed as:
\(C(A,B)=\frac{|\{a\in A \ : \ \exists \, b\in B,\ b\preceq a\}|}{|A|}\)
Therefore, the smaller the value of C(A, B), the better algorithm A is. Note that \(0\le C(A,B)\le 1\): \(C(A,B)=0\) means that no solution obtained by algorithm A is weakly dominated by solutions of the best/optimal Pareto front, whereas \(C(A,B)=1\) means that all the solutions obtained by algorithm A are weakly dominated by solutions of the best/optimal Pareto front.
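Under this convention, the coverage can be computed with the dominance check of Sect. 2, as the following illustrative Java sketch shows:

```java
// Sketch of the coverage metric: fraction of solutions of front A that are
// weakly dominated by some solution of the reference front B (smaller is
// better for A). ObjVector and weaklyDominates are the sketch from Sect. 2.
static double coverage(List<ObjVector> frontA, List<ObjVector> frontB) {
    long covered = frontA.stream()
            .filter(pa -> frontB.stream()
                    .anyMatch(pb -> ObjVector.weaklyDominates(pb, pa)))
            .count();
    return (double) covered / frontA.size();
}
```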
The spread (\(\Delta \)) measures the extent of spread achieved by the set of solutions in the Pareto front obtained by algorithm A:
\(\Delta =\frac{d_f+d_l+\sum _{i=1}^{|A|-1}|d_i-{\hat{d}}|}{d_f+d_l+(|A|-1){\hat{d}}}\)
where \(d_i\) is the Euclidean distance between consecutive solutions in the Pareto front obtained with algorithm A, \({\hat{d}}\) is the mean of these distances, and \(d_f\) and \(d_l\) are, respectively, the Euclidean distances to the extreme solutions of the best/optimal Pareto front in the objective space. The best possible value is \(\Delta =0\) which indicates a perfect spread of the solutions in the Pareto front obtained with the considered algorithm.
The hypervolume (HV) evaluates the volume of the objective space covered by the Pareto front obtained by algorithm A. To calculate the HV, for each solution \(a\in A\), a hypercube \(v_a\) is constructed with a reference point W (a vector of worst objective function values) and the solution a as the diagonal corners of the hypercube. Then, the hypervolume is the volume of the union of all these hypercubes:
\(HV=\text {volume}\left( \bigcup _{a\in A} v_a\right) \)
Then, the larger the HV value, the better the set of efficient solutions obtained by the algorithm.
The \(\epsilon \)-indicator (\(\epsilon \)) measures the smallest distance (the smallest \(\epsilon \) value) needed to transform every point of the Pareto front obtained by algorithm A into the closest point of the best/optimal Pareto front.
Therefore, the smaller the \(\epsilon \)-indicator, the better.
The generational distance (GD) measures how far the solutions of the Pareto front obtained by algorithm A are from the solutions of the best/optimal Pareto front (B):
\(GD=\frac{1}{|A|}\sqrt{\sum _{i=1}^{|A|} d_i^2}\)
where \(d_i\) is the Euclidean distance between each of these solutions and the nearest solution of the best/optimal Pareto front. \(GD = 0\) indicates that all the solutions belong to the best Pareto front.
The inverted generational distance (IGD) is an inversion of the generational distance metric that measures the distance from the best/optimal Pareto front to the set of efficient solutions obtained by algorithm A:
\(IGD=\frac{1}{|B|}\sqrt{\sum _{i=1}^{|B|} d_i^2}\)
where \(d_i\) is here the Euclidean distance between each solution of B and the nearest solution produced by algorithm A.
The smaller the value of IGD, the better. Note that the only difference between GD and IGD is that, by measuring from the reference front, IGD does not miss any part of the best/optimal Pareto front.
Finally, the computing time (CPU) required by each algorithm is included.
As the considered problem is a multi-objective optimization problem, each algorithm generates a set of non-dominated solutions (which typically contains more than one solution). The results presented in Tables 3 to 7 report the average values across the set of instances used in each experiment, obtained by performing a single execution of the compared procedures.
4.1 Algorithm parameter tuning
This section is oriented toward both demonstrating the effectiveness of the proposed strategies and tuning the parameters. The aim of the first preliminary experiment is to determine the number of constructions used to compute the initial set of efficient solutions. As Sect. 3.1 describes, the number of solutions built depends on the step value. Specifically, \( step \in \{1/50, 1/100, 1/150, 1/200\}\) has been considered, which results in 51, 101, 151, and 201 considered values of \(\beta \), respectively. Table 3 shows the results obtained in this experiment, where the metrics described above are reported.
Analyzing the results of Table 3, as expected, the larger the number of solutions built, the better the performance in all metrics. However, the average computing time also increases considerably. Note that between the two intermediate values of the step column, 1/100 and 1/150, there are no significant differences in any of the metrics, while the computing time can be substantially reduced. For this reason, a compromise between the quality of the performance metrics and the computational time is selected: step is set to 1/100.
As described in Sect. 3.1, it is desirable to construct a rich and diverse set of efficient solutions. To this end, \(\beta \) takes the values \(0.00,0.01,0.02,\ldots ,0.98,0.99,1.00\), as follows from the choice step = 1/100. Note that \(\beta =0.0\) optimizes \(f_{pD}\), while \(\beta =1.0\) optimizes \(f_{pM}\); of course, \(\beta =0.50\) considers both objectives, \(f_{pM}\) and \(f_{pD}\), as equally important (see Eq. 1).
Once the number of constructions is set, the next experiment analyzes the results obtained with the reactive path relinking when considering different values of k. Specifically, the k values are set to \(0.25\times p\), \(0.50\times p\), and \(0.75\times p\), with p being the number of facilities. Furthermore, IPR and EPR are included in the comparison in order to evaluate the actual contribution of the reactive mechanism. The associated results are summarized in Table 4, where the same metrics as in the previous table are reported.
According to the results shown in Table 4, it is difficult to single out the most appropriate value of k for the best RPR variant. Nevertheless, RPR with \(k=0.75 \times p\) achieves the best values for the coverage (0.38), the hypervolume (0.64), and the IGD (6680.34). Additionally, the differences between the best values obtained by the other strategies and those of RPR with \(k=0.75 \times p\) are not significant: the number of efficient solutions is practically equal (36.68 vs 36.06), as are the spread (0.95 vs 0.95), the \(\epsilon \)-indicator (0.03 vs 0.03), and the GD (1648.78 vs 1674.72). Furthermore, the computing time of RPR with \(k=0.75 \times p\) is slightly lower (1014.91 vs 1067.66). Therefore, this variant is selected as the most competitive one.
Finally, with the aim of analyzing the impact of each part of the proposed algorithm, an additional experiment is conducted which compares the results obtained by the constructive procedure alone (C), the constructive procedure coupled with the local search method (C+LS), and the complete reactive path relinking (RPR). Table 5 shows the results obtained in this experiment.
Analyzing the coverage, the most relevant part of the algorithm is the RPR, which is able to reduce the coverage to almost 0; in this metric, the local search procedure clearly outperforms the constructive procedure (0.77 versus 0.87). Regarding the hypervolume, C+LS considerably improves the results of the constructive procedure, while RPR provides the final improvement that results in a highly competitive algorithm for the BpMD problem. Notice that for the \(\epsilon \)-indicator metric, the local search procedure obtains the most remarkable improvement. Additionally, the number of non-dominated solutions increases considerably when including the local search strategy. Therefore, it can be concluded that each part of the proposed algorithm is essential to the final RPR algorithm, each one providing different advantages that increase the effectiveness of the algorithm.
4.2 Comparison to other algorithms
The most appropriate parameter values for the RPR algorithm were chosen from the results obtained in the previous section: step = 1/100 and \(k=0.75 \times p\). The performance of the proposed algorithm is compared with three of the most competitive evolutionary algorithms, MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition), NSGA-II (elitist Non-dominated Sorting Genetic Algorithm), and SPEA2 (Strength Pareto Evolutionary Algorithm), with the Scatter Search algorithm proposed by Colmenar et al. (2018), and with the \(\epsilon \)-C method of Sayyady et al. (2015). In all the remaining tables, the number in parentheses next to the name of an instance set indicates the number of instances in that set. Before presenting the results, all algorithms are briefly described.
MOEA/D, first proposed in Zhang and Li (2007), decomposes the multi-objective optimization problem into single-objective optimization subproblems using a set of evenly spread weight vectors. All these subproblems are simultaneously solved in a single run to approximate the set of efficient solutions. In MOEA/D, the neighborhood relations among these subproblems are defined based on the distances between their weight vectors; the optimal solutions of two neighboring subproblems should be very similar, and each subproblem is optimized using information only from its neighboring subproblems.
NSGA-II, proposed by Deb et al. (2002), starts by generating an initial population of solutions that is sorted based on non-domination. To sort the solutions, a rank (fitness) value is assigned to each one based on the front to which it belongs. Then, a crowding distance is assigned to each solution in its front; the crowding distance is a measure related to the density of solutions around a given solution, i.e., how close a solution is to its neighbors. Next, individual solutions are selected by using binary tournament selection with the crowded-comparison operator. The selected population generates offspring through the crossover and mutation operators, the union of the current population and the offspring is sorted again based on non-domination, and only the best N individuals are kept, with N being the population size. The selection is based on rank and, on the last front, on crowding distance.
SPEA2, proposed by Zitzler et al. (2001), is an improved version of SPEA that addresses its weaknesses in fitness assignment, density estimation, and archive truncation. SPEA2 uses a regular population and an external archive to keep the elite solutions. The algorithm starts by generating an initial population of solutions and an empty archive. Next, all non-dominated population solutions are copied to the archive, and any dominated solutions or duplicates are removed from it. If the size of the archive exceeds a predefined limit, further archive members are deleted by a clustering technique which preserves the characteristics of the non-dominated front. Afterward, fitness values are assigned to both archive and population members, and again individual solutions are selected by binary tournament selection. The selected population generates offspring through the crossover and mutation operators.
Note that the solution representation for these three evolutionary algorithms consists of an integer array of p elements, where each element indicates a selected facility (notice that the order of the selected facilities is not relevant). These algorithms require one main input parameter, the population size. This parameter has been tuned with the aim of having a fair comparison in terms of computing time, resulting in a population size of 100 solutions. The stopping criterion for these three evolutionary algorithms is set to 3600 s. Finally, the remaining parameters have been set to the default values recommended by the MOEA Framework library used to implement these algorithms (Hadka 2015).
The Scatter Search (SS) procedure reported in Colmenar et al. (2018) is another evolutionary algorithm that starts by randomly constructing a set of solutions to generate the initial population. These solutions are then improved with three different local search methods, all based on the first-improvement strategy. The reference set update method maintains a set of the best solutions not only in terms of the objective function values but also according to their diversity. The subset generation method and the combination method of this Scatter Search follow a typical implementation.
Finally, an exact procedure based on the \(\epsilon \)-C strategy has been included, with the aim of providing a near-optimal approximation of the Pareto front. Since this algorithm is very computationally demanding, the effort required to generate the optimal Pareto front makes it unsuitable for this kind of problem, but it can be used to analyze how far from the best Pareto front the proposed algorithms are within a short computing time. The \(\epsilon \)-C method used in this work has been adapted from Sayyady et al. (2015) by including a time limit of 7200 s; therefore, not all instances are solved to optimality.
The results obtained by the algorithms are shown in Table 6, which summarizes the average results when solving the whole set of instances of the OR-library; in particular, the instances are grouped by the number of nodes in steps of 100. The summary results presented in the last row prove the superiority of our proposal (RPR) over the five analyzed algorithms (MOEA/D, NSGA-II, SPEA2, SS, and \(\epsilon \)-C). It can be observed that RPR is the algorithm that reaches the largest number of efficient solutions on average, almost 32, followed by \(\epsilon \)-C, which finds more than 17 efficient solutions on average. The coverage indicates that the percentage of solutions of the front obtained by RPR that are dominated by the best Pareto front, B, is only 13%, whereas that percentage is more than 95% for MOEA/D, NSGA-II, and SPEA2, more than 97% for the Scatter Search, and more than 75% for the \(\epsilon \)-C method, which demonstrates that most of the solutions of the RPR algorithm are part of the best Pareto front B. Regarding the extent of spread achieved by the Pareto front approximation of each algorithm, the Scatter Search finds the best value, 0.99, while RPR obtains 0.96, both values being very similar. On the other hand, the average size of the space covered by RPR is 70.08%. Regarding the average \(\epsilon \)-indicator, the smallest distance needed to transform every solution of the front obtained by RPR into the closest solution of the best Pareto front is 0.03, and the average values for the remaining algorithms are much larger. Finally, RPR obtains the best values for GD and IGD, which indicate how far the solutions are from the best Pareto front.
If we now analyze the results depending on the instance size, we can clearly see that the time required by RPR scales better with size, while the other algorithms quickly reach the time limit. Regarding the other metrics, the behavior is similar regardless of the instance size, with RPR emerging as the most competitive algorithm.
Furthermore, in order to facilitate future comparisons, all Pareto fronts of D250 and D350 instances are available in Fig. 5 and Fig. 6, and all individual results per instance are detailed in Tables 10–17 of “Appendix A”.
The second set of instances considered in this paper comprises those reported in Colmenar et al. (2018), that is, sets D250 and D350 with 9 and 10 instances, respectively. The associated average results are shown in Table 7. These instances seem to be easier for the exact procedure \(\epsilon \)-C, so they can be used to illustrate how far from the best approximation of the Pareto front the heuristic algorithms in this comparison are. Although the \(\epsilon \)-C algorithm consistently obtains better coverage and hypervolume values, it is worth mentioning that the number of solutions in the non-dominated set obtained by RPR is considerably larger than those obtained by the other algorithms, including \(\epsilon \)-C. Furthermore, according to the coverage metric, half of the solutions provided by RPR are included in the best approximation of the Pareto front. Additionally, the hypervolume obtained by RPR is very similar to the one obtained by \(\epsilon \)-C, which is another indicator of the quality of the front obtained by RPR. Regarding the metrics that measure the distance to the closest solution in the best approximation front, the \(\epsilon \)-indicator, GD, and IGD, RPR presents the best values, indicating that, in those cases in which its solutions are covered by the best approximation front, they are still really close to the best solutions. Indeed, RPR presents better results than \(\epsilon \)-C in these metrics, since when \(\epsilon \)-C is not able to find the best solution, it remains far from it due to the limit on the computational time. Finally, the computing time required by RPR is an order of magnitude lower than the one required by \(\epsilon \)-C, highlighting the suitability of RPR for providing fast and high-quality solutions for the tackled problem. A behavior similar to that observed in the previous table can be emphasized: the scalability of RPR in terms of computing time. In the case of the exact procedure, the increase in computing time makes it unaffordable for solving larger instances.
With the aim of testing the algorithm when dealing with a more challenging set of instances, we have included the kmedian testbed, where the number of nodes ranges from 1000 to 5000. This experiment is divided into two different tables. On the one hand, Table 8 includes all the instances for which the \(\epsilon \)-C algorithm is able to finish, even without guaranteeing optimality, since the time limit has been set to 3600 s. On the other hand, Table 9 shows the results obtained in those instances in which the exact procedure \(\epsilon \)-C is not even able to start since the memory requirements are too high.
Analyzing the results in Table 8, the \(\epsilon \)-C method is only able to provide solutions for 6 out of the 15 kmedian instances. These instances seem to be harder to solve, since the number of solutions |A| is close to 5, with RPR being able to generate a more populated set of non-dominated solutions. Additionally, RPR reaches the smallest coverage value, indicating that it obtains the largest number of solutions belonging to the reference set of non-dominated solutions. In terms of hypervolume, RPR also provides competitive results, as it does in the remaining metrics, within the considered time horizon of 3600 s.
Table 9 shows the results where \(\epsilon \)-C is not considered, since it is not able to generate even a single feasible solution. As can be seen, the results are rather similar to the previous ones. However, it is important to remark that, in this case, RPR obtains the best results in more metrics, highlighting the scalability of the proposal. In particular, RPR generates, on average, approximately 6 solutions in each non-dominated set, while the remaining methods do not reach 4 solutions. The value of 0.16 in the coverage metric indicates that most of the solutions provided by RPR belong to the reference set, and the values of the \(\epsilon \)-indicator and GD, for instance, suggest that the non-dominated set of solutions of RPR is better distributed along the objective space. Analyzing the results per instance size, RPR maintains a similar performance in all the metrics, even improving the \(\epsilon \)-indicator as the size of the instance increases.
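The distance-based indicators referred to in this discussion can also be stated compactly. The sketch below follows the standard definitions of generational distance (GD) and the additive \(\epsilon \)-indicator under minimization of both objectives; it is a plain reimplementation of the textbook formulas, not the tooling used for the reported tables (frameworks such as jMetal or the MOEA Framework, cited below, provide equivalent implementations).

```python
# Generational distance and additive epsilon-indicator against a
# reference front B, both objectives minimized. Standard definitions,
# reimplemented here only for illustration.
import math
from typing import Sequence, Tuple

Point = Tuple[float, float]

def gd(A: Sequence[Point], B: Sequence[Point]) -> float:
    """Average Euclidean distance from each point of A to its nearest point in B."""
    dists = [min(math.dist(a, b) for b in B) for a in A]
    return sum(dists) / len(dists)
    # Note: IGD is simply gd(B, A), i.e., the distance from the reference
    # front to the approximation front.

def additive_epsilon(A: Sequence[Point], B: Sequence[Point]) -> float:
    """Smallest eps so that every b in B is weakly dominated by some a in A shifted by eps."""
    return max(min(max(a[i] - b[i] for i in range(2)) for a in A) for b in B)
```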
5 Conclusions
Nowadays, most real-world optimization problems are modeled considering more than one objective at a time, and those objectives are usually in conflict with one another, making it impossible to improve one objective without deteriorating at least one of the others. Facility location problems are interesting combinatorial optimization problems that have recently been addressed from a multi-objective perspective. This manuscript focuses on the bi-objective p-Median and p-Dispersion problem (BpMD problem), which seeks to minimize the total distance between facilities and demand points and, at the same time, maximize the minimum distance between all pairs of facilities.
To address the BpMD problem, a new path relinking strategy has been proposed. The algorithm hybridizes two different approaches, interior and exterior path relinking, and the resulting reactive path relinking is designed to efficiently decide which variant to apply depending on the pair of solutions to be combined (the sketch below illustrates this dispatching idea). The algorithm is able to find high-quality approximations of the Pareto front in short computing times, making it competitive with the state of the art. Furthermore, the computational experiments show the effectiveness of the proposed algorithm in solving relatively large instances of this bi-objective optimization problem, obtaining better Pareto fronts than the other considered algorithms: MOEA/D, NSGA-II, SPEA2, Scatter Search, and the \(\epsilon \)-C method.
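As an illustration of that dispatching idea, the sketch below measures the similarity of two solutions by the overlap of their facility sets and routes the pair to interior or exterior path relinking accordingly. The threshold value and the direction of the rule (interior for dissimilar pairs, exterior for similar ones) are plausible placeholders; the precise rule is the one defined in the paper, and the two relinking variants are left as callables.

```python
# Reactive dispatch between interior and exterior path relinking,
# sketched under assumed conventions (threshold and rule direction are
# illustrative placeholders, not the paper's exact rule).
from typing import Callable, FrozenSet, List

Solution = FrozenSet[int]  # a solution = the set of p hosted facilities

def similarity(s1: Solution, s2: Solution) -> float:
    """Fraction of facilities shared by the two solutions (1.0 = identical)."""
    return len(s1 & s2) / len(s1)

def reactive_path_relinking(
    s1: Solution,
    s2: Solution,
    interior_pr: Callable[[Solution, Solution], List[Solution]],
    exterior_pr: Callable[[Solution, Solution], List[Solution]],
    threshold: float = 0.5,  # illustrative value
) -> List[Solution]:
    if similarity(s1, s2) <= threshold:
        # Dissimilar pair: the interior path between s1 and s2 visits
        # enough intermediate solutions to be worth exploring.
        return interior_pr(s1, s2)
    # Similar pair: extrapolate beyond s2, away from s1, to diversify.
    return exterior_pr(s1, s2)
```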
Therefore, our results establish the first benchmarks not only for this problem but also for the application of this novel methodology, reactive path relinking, to other multi-objective combinatorial optimization problems.
A future line of research is to adapt the reactive path relinking algorithm to other combinatorial optimization problems, whether of single-objective or multi-objective nature. A further strategy to explore would be an adaptive reactive path relinking that allows the algorithm to decide the parameter k used to combine every pair of solutions according to their similarity or dissimilarity. Furthermore, it would be interesting to test other constructive metaheuristics, such as GRASP, which introduces randomization by selecting a node from a restricted candidate list containing the most promising nodes (a sketch of this constructive step is given below). Another interesting methodology to test would be the Fixed Set Search (FSS) algorithm, see Jovanovic et al. (2022), where a population of solutions is generated with GRASP and the elements appearing most frequently in a subset of the most promising solutions are fixed for the next generation of solutions. As its authors hold, FSS can be viewed as a metaheuristic that adds a learning mechanism to GRASP.
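As a reference for the GRASP idea mentioned above, the following sketch shows a generic greedy randomized construction with a restricted candidate list. The greedy function g and the parameter alpha are assumptions of this illustration; for the BpMD problem, g could score a candidate facility by its effect on either objective.

```python
# Generic GRASP construction with a restricted candidate list (RCL).
# The greedy function g and alpha are illustrative assumptions.
import random
from typing import Callable, Set

def grasp_construct(candidates: Set[int],
                    p: int,
                    g: Callable[[int, Set[int]], float],
                    alpha: float = 0.3) -> Set[int]:
    """Build a solution with p facilities; lower g means more promising."""
    solution: Set[int] = set()
    while len(solution) < p:
        remaining = candidates - solution
        costs = {c: g(c, solution) for c in remaining}
        g_min, g_max = min(costs.values()), max(costs.values())
        # RCL: candidates whose greedy value is within alpha of the best.
        rcl = [c for c, v in costs.items()
               if v <= g_min + alpha * (g_max - g_min)]
        solution.add(random.choice(rcl))  # randomized selection from the RCL
    return solution
```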
Data availability
Enquiries about data availability should be directed to the authors. The instances and results are publicly available at https://grafo.etsii.urjc.es/BpMpD to ease further comparisons.
Notes
The best approximation of the Pareto front, denoted B, is estimated as the set of non-dominated solutions resulting from merging the solutions found by all the algorithms under comparison.
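A minimal sketch of how such a reference front can be built is given below: merge the fronts produced by all algorithms and keep only the non-dominated points, again treating both objectives as minimized. This mirrors the construction described in the note, although the authors' own tooling may differ.

```python
# Build a reference front B by merging several fronts and filtering
# dominated points (both objectives minimized). Illustrative sketch.
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

def merge_reference_front(fronts: Sequence[Sequence[Point]]) -> List[Point]:
    merged = {p for front in fronts for p in front}  # de-duplicate first
    # Keep p unless some distinct q is at least as good in both objectives.
    return [p for p in merged
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in merged)]
```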
References
Beasley JE (1990) OR-Library: distributing test problems by electronic mail. J Oper Res Soc 41:1069–1072. https://doi.org/10.2307/2582903
Boonmee C, Arimura M, Asada T (2017) Facility location optimization model for emergency humanitarian logistics. Int J Disaster Risk Reduct 24:485–498. https://doi.org/10.1016/j.ijdrr.2017.01.017
Campos V, Martí R, Sánchez-Oro J, Duarte A (2014) GRASP with path relinking for the orienteering problem. J Oper Res Soc 65:1800–1813. https://doi.org/10.1057/jors.2013.156
Church R, Velle CR (1974) The maximal covering location problem. Papers Reg Sci Assoc 32:101–118. https://doi.org/10.1111/j.1435-5597.1974.tb00902.x
Colmenar JM, Hoff A, Martí R, Duarte A (2018) Scatter search for the bi-criteria p-median p-dispersion problem. Prog Artif Intell 7:31–40. https://doi.org/10.1007/s13748-017-0132-6
Deb K (2014) Multi-objective optimization. In: Burke EK, Kendall G (eds) Search methodologies: introductory tutorials in optimization and decision support techniques, pp 403–449. Springer, Boston, MA. https://doi.org/10.1201/9781315183176-4
Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6:182–197. https://doi.org/10.1109/4235.996017
Duarte A, Sánchez-Oro J, Resende MG, Glover F, Martí R (2015) Greedy randomized adaptive search procedure with exterior path relinking for differential dispersion minimization. Inf Sci 296:46–60. https://doi.org/10.1016/j.ins.2014.10.010
Durillo JJ, Nebro AJ (2011) jMetal: a Java framework for multi-objective optimization. Adv Eng Softw 42:760–771. https://doi.org/10.1016/j.advengsoft.2011.05.014
Erkut E (1990) The discrete p-dispersion problem. Eur J Oper Res 46:48–60. https://doi.org/10.1016/0377-2217(90)90297-o
Erkut E, Neuman S (1989) Analytical models for locating undesirable facilities. Eur J Oper Res 40:275–291. https://doi.org/10.1016/0377-2217(89)90420-7
Farahani RZ, Fallah S, Ruiz R, Hosseini S, Asgari N (2019) OR models in urban service facility location: a critical review of applications and future developments. Eur J Oper Res 276:1–27. https://doi.org/10.1016/j.ejor.2018.07.036
Feo TA, Resende MGC (1995) Greedy randomized adaptive search procedures. J Global Optim 6:109–133. https://doi.org/10.1007/bf01096763
Franca RB, Jones EC, Richards CN, Carlson JP (2010) Multi-objective stochastic supply chain modeling to evaluate tradeoffs between profit and quality. Int J Prod Econ 127:292–299. https://doi.org/10.1016/j.ijpe.2009.09.005
Glover F, Laguna M (1998) Tabu search. In: Du D-Z, Pardalos PM (eds) Handbook of combinatorial optimization, vol 1–3, pp 2093–2229. Springer, Boston. https://doi.org/10.1007/978-1-4419-7997-1_17
Hadka D (2015) MOEA Framework: a free and open source Java framework for multiobjective optimization, version 2.11. http://www.moeaframework.org
Hakimi SL (1964) Optimum locations of switching centers and the absolute centers and medians of a graph. Oper Res 12:450–459. https://doi.org/10.1287/opre.12.3.450
Hakimi SL (1965) Optimum distribution of switching centers in a communication network and some related graph theoretic problems. Oper Res 13:462–475. https://doi.org/10.1287/opre.13.3.462
Jovanovic R, Sanfilippo AP, Voß S (2022) Fixed set search applied to the multi-objective minimum weighted vertex cover problem. J Heuristics 28:481–508. https://doi.org/10.1007/s10732-022-09499-z
Karatas M (2017) A multi-objective facility location problem in the presence of variable gradual coverage performance and cooperative cover. Eur J Oper Res 262:1040–1051. https://doi.org/10.1016/j.ejor.2017.04.001
Karatas M, Yakıcı E (2018) An iterative solution approach to a multi-objective facility location problem. Appl Soft Comput 62:272–287. https://doi.org/10.1016/j.asoc.2017.10.035
Li M, Yao X (2019) Quality evaluation of solution sets in multiobjective optimisation: a survey. ACM Comput Surv. https://doi.org/10.1145/3300148
López-Sánchez AD, Sánchez-Oro J, Laguna M (2020) A new scatter search design for multiobjective combinatorial optimization with an application to facility location. INFORMS J Comput. https://doi.org/10.1287/ijoc.2020.0966
Martí R, Campos V, Resende MG, Duarte A (2015) Multiobjective GRASP with path relinking. Eur J Oper Res 240:54–71. https://doi.org/10.1016/j.ejor.2014.06.042
Minieka E (1970) The m-center problem. SIAM Rev 12:138–139
Mladenović N, Brimberg J, Hansen P, Moreno-Pérez JA (2007) The p-median problem: a survey of metaheuristic approaches. Eur J Oper Res 179:927–939. https://doi.org/10.1016/j.ejor.2005.05.034
Pérez-Peló S, Sánchez-Oro J, Duarte A (2020) Finding weaknesses in networks using greedy randomized adaptive search procedure and path relinking. Expert Syst 37:e12540. https://doi.org/10.1111/exsy.12540
Pérez-Peló S, Sánchez-Oro J, López-Sánchez AD, Duarte A (2019) A multi-objective parallel iterated greedy for solving the p-center and p-dispersion problem. Electronics 8:1440. https://doi.org/10.3390/electronics8121440
Pinto-Varela T, Barbosa-Póvoa APF, Novais AQ (2011) Bi-objective optimization approach to the design and planning of supply chains: economic versus environmental performances. Comput Chem Eng 35:1454–1468. https://doi.org/10.1016/j.compchemeng.2011.03.009
Resende M, Martí R, Gallego M, Duarte A (2010) GRASP and path relinking for the max-min diversity problem. Comput Oper Res 37:498–508. https://doi.org/10.1016/j.cor.2008.05.011
Sagrado J, Águila IM, Orellana FJ (2015) Multi-objective ant colony optimization for requirements selection. Empir Softw Eng 20:577–610. https://doi.org/10.1007/s10664-013-9287-3
Sánchez-Oro J, López-Sánchez AD, Colmenar JM (2020) A general variable neighborhood search for solving the multi-objective open vehicle routing problem. J Heuristics 26:423–452. https://doi.org/10.1007/s10732-017-9363-8
Sayyady F (2012) Mathematical models and algorithms for the location of sensors on a traffic network. Ph.D. thesis. http://www.lib.ncsu.edu/resolver/1840.16/8131
Sayyady F, Tutunchi GK, Fathi Y (2015) p-median and p-dispersion problems: a bi-criteria analysis. Comput Oper Res 61:46–55. https://doi.org/10.1016/j.cor.2015.02.007
Tutunchi GK, Fathi Y (2019) Effective methods for solving the bi-criteria p-center and p-dispersion problem. Comput Oper Res 101:43–54. https://doi.org/10.1016/j.cor.2018.08.009
Wang S-C, Lin C-C, Chen T-C, Hsiao HC (2018) Multi-objective competitive location problem with distance-based attractiveness for two facilities. Comput Electr Eng 71:237–250. https://doi.org/10.1016/j.compeleceng.2018.07.013
Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11:712–731. https://doi.org/10.1109/tevc.2007.892759
Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. ETH Zurich. https://doi.org/10.3929/ethz-a-004284029
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. J. Sánchez-Oro, A. Duarte and I. Lozano-Osorio acknowledge support from the Spanish Ministry of “Ciencia, Innovación y Universidades” (MCIN/AEI/10.13039/501100011033/FEDER, UE) under grants PID2021-126605NB-I00 and PID2021-125709OA-C22, and from the “Comunidad de Madrid” and “Fondos Estructurales” of the European Union under grant S2018/TCS-4566. A.D. López-Sánchez acknowledges support from the Spanish Ministry of “Economía, Industria y Competitividad” through project PID2019-104263RB-C41, and from Junta de Andalucía, FEDER-UPO Research & Development Call, reference UPO-1263769.
Ethics declarations
Conflict of interest
The authors have not disclosed any competing interests.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Human and animal rights
This article does not contain any studies with human participants performed by any of the authors.
Informed consent
Informed consent was obtained from all individual participants included in the study.