The Optimal Solution Set of the Multi-source Weber Problem
Abstract
This paper considers the classical multi-source Weber problem (MWP): locate M new facilities with respect to N customers so as to minimize the sum of transportation costs between the facilities and the customers. We propose a modified algorithm in the spirit of Cooper’s work for solving the MWP, consisting of a location phase and an allocation phase. The task of the location phase is to find the optimal solution sets of several single-source Weber problems (SWPs), which are produced by the nearest-center reclassification heuristic applied to the customers in the preceding allocation phase. Several examples are given to clarify the proposed algorithms. Moreover, we present an \( O (d\log d) \)-time algorithm for finding the optimal solution set of the SWP in the collinear case, where d is the number of customers.
Keywords
Multi-source Weber problem · Location · Subdifferential
Mathematics Subject Classification
Primary 90B06 · Secondary 46N10 · 49J52
1 Introduction
1. It can find the optimal solution set (or the set of all optimal solutions) of the SWP.
2. It is simple and easy to implement.
3. It requires little computational time.
(1) \(a_j\in {\mathbb {R}}^{n}\) is the location of the jth customer, \(j=1,2,\ldots ,N\);
(2) \(x_i \in {\mathbb {R}}^{n}\) is the location of the ith facility to be determined, \(i=1,2,\ldots ,M\);
(3) \(s_j\geqslant 0\) is the given demand of the jth customer;
(4) \(w_{ij}\geqslant 0\) denotes the unknown allocation from the ith facility to the jth customer; and
(5) \(\Vert \cdot \Vert \) is the Euclidean norm on \({\mathbb {R}}^{n}\).
In this paper, we present a modified algorithm in the spirit of Cooper’s work for solving the MWP. Its key features are that it works with the optimal solution set and improves the desirability of the solutions produced by Cooper’s algorithm. Since the facilities are uncapacitated, it is easy to prove that in an optimal solution of the MWP each customer is served by its nearest facility. In the location phase of the modified algorithm, we find the optimal solution set instead of the single optimal solution used in Cooper’s algorithm. Then, in the allocation phase, we require that each customer be assigned to its nearest facility over all optimal solutions from the location phase, which leads to solutions close to the best solutions of the MWP.
The paper is organized as follows. In the next section, we discuss Cooper’s algorithm, since it serves as the basis for the design of our algorithm. Section 3 provides some preliminaries that are used in the sequel. In Sect. 4, we consider the SWP in the collinear case and present an algorithm for solving it. In Sect. 5, a modified algorithm in the spirit of Cooper’s work is developed. In Sect. 6, an application illustrating the importance of finding the optimal solution set of the MWP is presented.
2 A Review of Literature
3 Preliminaries
In the following theorem, we summarize some results, which are used in what follows:
Theorem 3.1
(1) For any scalar \( \lambda > 0 \), \(\partial (\lambda \phi )( x)=\lambda \partial \phi ( x)\).
(2) The point \( \hat{x} \) is a (global) minimizer of \( \phi \) if and only if \( 0\in \partial \phi (\hat{x}) \).
(3) Let \( \psi \) be differentiable at \( \hat{x} \). Then \( 0\in \partial (\phi +\psi )(\hat{x}) \) if and only if $$\begin{aligned} -\nabla \psi (\hat{x})\in \partial \phi (\hat{x}). \end{aligned}$$
(4) \( \partial ( \Vert (\cdot )-a\Vert )(a)= {\mathbb {B}} \).
4 The Nonsmooth Approach to SWP
The objective function of the SWP is nonsmooth, with points of non-differentiability \( a_{1},a_{2},\ldots ,a_{d} \), which makes the problem hard to solve. This section considers the case in which the data points are collinear. Using the following well-known lemma, we can recognize the collinearity of the given points \( a_{1},a_{2},\ldots ,a_{d} \).
Lemma 4.1
Suppose that \( a_{2}-a_{1},a_{3}-a_{1},\ldots ,a_{d}-a_{1} \) are the columns of the matrix \( \mathcal {A} \). Then \( a_{1},a_{2},\ldots ,a_{d} \) are collinear if and only if \( \mathcal {A} \) has rank 1 or less.
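The rank test of Lemma 4.1 can be checked mechanically. The following sketch (the helper name `are_collinear` is ours, not from the paper) forms the difference vectors and verifies that every \(2\times 2\) minor against the first nonzero difference vanishes, which is equivalent to \( \mathcal {A} \) having rank at most 1:

```python
def are_collinear(points, tol=1e-9):
    """Test whether the given points are collinear (Lemma 4.1 sketch).

    Forms the differences a_j - a_1 and checks that each is a scalar
    multiple of the first nonzero difference, i.e. the matrix of
    differences has rank at most 1.
    """
    a1 = points[0]
    diffs = [[x - y for x, y in zip(p, a1)] for p in points[1:]]
    base = next((v for v in diffs if any(abs(c) > tol for c in v)), None)
    if base is None:          # all points coincide: rank 0
        return True
    for v in diffs:
        # v is parallel to base iff all 2x2 minors of (base, v) vanish
        for i in range(len(v)):
            for j in range(i + 1, len(v)):
                if abs(base[i] * v[j] - base[j] * v[i]) > tol:
                    return False
    return True
```

For instance, the points of Example 4.9, which lie on the line \( y=2x \), pass the test, while a triangle does not.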
Definition 4.2
We say that the list \( a_{1},a_{2},\ldots ,a_{d} \) is regular if \( [a_{i},a_{j}]\subseteq [a_{p},a_{q}] \) whenever \( 1 \leqslant p \leqslant i \leqslant j \leqslant q \leqslant d \).
In the following theorem, we give a rearrangement of points to get a list of regular points. In this section, we suppose that \( a_{1},a_{2},\ldots ,a _{d} \) are distinct and collinear.
Theorem 4.3
Proof
The proof is trivial. \(\square \)
Sort Algorithm
Input: A list \(a_{1},a_{2},\ldots ,a_{d}\) of customers.
Output: A permutation \( \pi :\{1,2,\ldots ,d\}\longrightarrow \{1,2,\ldots ,d\}\) such that \( a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (d)} \) is regular.
Step 1. Find \( k\in \{1,2,\ldots ,n\} \) such that \( a_{k,1}\ne a_{k,2} \).
Step 2. Sort \( a_{k,1},a_{k,2},\ldots ,a_{k,d}\), i.e., find a permutation \( \pi :\{1,2,\ldots ,d\}\longrightarrow \{1,2,\ldots ,d\}\) such that \( a_{k,\pi (i)}<a_{k,\pi (i+1)} \) for all \( i=1,2,\ldots ,d-1 \).
Step 3. The list \(a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (d)}\) is regular.
Note that the list \( a_{k,1},a_{k,2},\ldots ,a_{k,d}\) in Step 2 can be sorted in \( O (d \log d) \) time using the merge sort algorithm in [16]. Therefore, the overall running time (or the time complexity) of the sort algorithm is \( O (n)+ O (d\log d) \). In view of the sort algorithm, we may henceforth assume that the list \(a_{1},a_{2},\ldots ,a_{d}\) is regular.
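The sort algorithm above can be sketched in two lines. The function name below is illustrative; Python's built-in sort is a merge-sort variant, so Step 2 indeed runs in \( O(d\log d) \) time. The sketch returns 0-based indices \( \pi \) such that the permuted list is regular:

```python
def sort_permutation(points):
    """Sketch of the Sort Algorithm for distinct collinear points.

    Step 1: find a coordinate k in which the first two points differ.
    Step 2: sort the indices by that coordinate (merge-sort based,
    O(d log d)).  The returned 0-based permutation pi makes
    points[pi[0]], ..., points[pi[d-1]] a regular list.
    """
    k = next(i for i, (u, v) in enumerate(zip(points[0], points[1])) if u != v)
    return sorted(range(len(points)), key=lambda j: points[j][k])
```

For example, the shuffled collinear points \((3,6),(1,2),(4,8),(2,4)\) yield the permutation \([1,3,0,2]\), i.e., the regular order \((1,2),(2,4),(3,6),(4,8)\).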
In the following theorem, we discuss the convexity and closedness of the optimal solution set of SWP.
Theorem 4.4
The set \( \Omega \) is a closed and convex set. In particular, \( \Omega \) is a subset of \( [a_{1},a_{d}]\).
Proof
In the following theorem, a necessary and sufficient condition for computing the optimal solution set of SWP is presented.
Theorem 4.5
There exist \( p,q \in \{1,2,\ldots ,d\} \) such that \( \Omega =[a_{p},a_{q}] \).
Proof
Since f is a convex function, it follows that \( a_{p} \in \Omega \), which is a contradiction. Similarly, there exists \( q\in \{1,2,\ldots ,d\} \) such that \( y^{*}=a_{q} \). \(\square \)
Backward Algorithm
Input: The number of customers d and positive multipliers \( s_{1},s_{2},\ldots ,s_{d} \).
Output: The optimal solution set of the SWP.
Step 1. Pick an arbitrary \( r\in \{2,3,\ldots ,d,d+1\} \) such that \( T_{r}<-s_{r} \).
Step 2. Set \( l=0 \).
Step 3. For \( k=r-1 \) down to 1 do:
  \( T_{k}=T_{k+1}+s_{k}+s_{k+1} \);
  if \( |T_{k}|\leqslant s_{k} \), then set \( l=l+1 \) and \( P(l)=k \);
  else, if \( T_{k}>s_{k} \), then stop.
Step 4. \( [a_{P(1)},a_{P(l)}]\) is the optimal solution set of the SWP.
Forward Algorithm
Input: The number of customers d and positive multipliers \( s_{1},s_{2},\ldots ,s_{d} \).
Output: The optimal solution set of the SWP.
Step 1. Pick an arbitrary \( r\in \{0,1,\ldots ,d-1\} \) such that \( T_{r}>s_{r} \).
Step 2. Set \( l=0 \).
Step 3. For \( k=r+1 \) to d do:
  \( T_{k}=T_{k-1}-s_{k}-s_{k-1} \);
  if \( |T_{k}|\leqslant s_{k} \), then set \( l=l+1 \) and \( P(l)=k \);
  else, if \( T_{k}<-s_{k} \), then stop.
Step 4. \( [a_{P(1)},a_{P(l)}]\) is the optimal solution set of the SWP.
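The forward sweep can be sketched in a few lines. The code below is a hypothetical reconstruction: it assumes the (elsewhere-defined) quantity \( T_k \) equals \( \sum _{j>k}s_{j}-\sum _{j<k}s_{j} \), which is consistent with the recurrence \( T_{k}=T_{k-1}-s_{k}-s_{k-1} \) of Step 3 under the convention \( s_{0}=0 \), so the sweep may always start from \( r=0 \):

```python
def forward_algorithm(s):
    """Sketch of the forward algorithm (reconstructed, not verbatim).

    Assumes T_k = sum_{j>k} s_j - sum_{j<k} s_j, so that
    T_k = T_{k-1} - s_k - s_{k-1} with s_0 = 0, and T_0 = sum(s) > s_0.
    Returns the 1-based indices (p, q): the optimal set is [a_p, a_q].
    """
    d = len(s)
    s = [0.0] + list(s)          # 1-indexed; s[0] = 0 is the convention
    T = sum(s)                   # T_0: start the sweep at r = 0
    P = []
    for k in range(1, d + 1):    # Step 3: sweep k = r+1, ..., d
        T = T - s[k] - s[k - 1]  # recurrence T_k = T_{k-1} - s_k - s_{k-1}
        if abs(T) <= s[k]:
            P.append(k)          # a_k satisfies the optimality condition
        elif T < -s[k]:
            break                # we have passed the optimal set; stop
    return P[0], P[-1]           # Step 4: [a_{P(1)}, a_{P(l)}]
```

On the data of Example 4.9, \( s=(1,2,2,1) \), it returns \( (2,3) \), i.e., \( \Omega =[a_{2},a_{3}] \); for three equal weights it returns the median customer.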
To make these algorithms clear, the following example is provided.
Example 4.6
\( a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (d)}\) and \( s_{\pi (1)},s_{\pi (2)},\ldots ,s_{\pi (d)} \), respectively. Now, we apply the forward algorithm for solving SWP with positive multipliers
Theorem 4.7
The backward (forward) algorithm works correctly.
Proof
The proof of Theorem 4.7 leads directly to the following result.
Proposition 4.8
Proof
One can apply Proposition 4.8 to determine the optimal solution set of SWP.
Proposition 4.8 provides a necessary and sufficient optimality condition for the SWP, whereas Condition (4.2) is only a sufficient optimality condition for the SWP in the collinear case. The following example shows the difference between Condition (4.2) and the necessary and sufficient condition of Proposition 4.8.
Example 4.9
Consider the SWP with \( a_{1}=(1,2)^T,~a_{2}=(2,4)^T,~a_{3}=(3,6)^T,\) \(a_{4}=(4,8)^T,~s_{1}=1,~ s_{2}=2,~s_{3}=2 \), and \( s_{4}=1 \). Let us check the optimality condition at the points \( x_{1}=(2,4)^T,~x_{2}=(2.25,4.5)^T,~x_{3}=(3,6)^T,~x_{4}=(4,8)^T \).
Now consider \( x_{2}=(2.25,4.5)^T \). Since there is no \( a_{j}~(j=1,2,\ldots ,4) \) with \( x_{2}=a_{j} \), Condition (4.2) gives no information about optimality at this point. On the other hand, since \( x_{2} \in [a_{2},a_{3}] \), by Proposition 4.8 it is necessary and sufficient that \( T_{2}=w_{2} \) for optimality at \( x_{2} \). Since \( T_{2}=2 \), it follows that \( x_{2} \) is an optimal solution of the SWP.
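The conclusion of Example 4.9 can also be verified numerically. The helper `swp_objective` below is ours, not from the paper; it evaluates the SWP objective \( f(x)=\sum _{j}s_{j}\Vert x-a_{j}\Vert \) at the four test points and shows that \( x_{1},x_{2},x_{3} \) attain the same (optimal) value while \( x_{4} \) does not:

```python
import math

def swp_objective(x, points, s):
    """SWP objective: weighted sum of Euclidean distances to the customers."""
    return sum(w * math.dist(x, a) for w, a in zip(s, points))

A = [(1, 2), (2, 4), (3, 6), (4, 8)]  # customer locations a_1, ..., a_4
S = [1, 2, 2, 1]                      # multipliers s_1, ..., s_4

# x_1, x_2, x_3 all attain the minimal value 5*sqrt(5); x_4 is worse
for x in [(2, 4), (2.25, 4.5), (3, 6), (4, 8)]:
    print(x, swp_objective(x, A, S))
```

Every point of the segment \( [a_{2},a_{3}] \) attains the value \( 5\sqrt{5} \), in agreement with \( \Omega =[a_{2},a_{3}] \).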
Now let us analyze the running time of the backward (forward) algorithm.
Theorem 4.10
The backward (forward) algorithm terminates after at most d iterations and can be implemented to run in \( O (d) \) time.
Proof
In the next section, we present an example in which Cooper’s algorithm converges to an undesirable solution. To overcome this difficulty, we propose a modified Cooper’s algorithm, which improves Cooper’s algorithm with respect to the desirability of solutions.
5 Modified Cooper’s Algorithm
In the following example, we verify the behavior of Cooper’s algorithm applied to MWP.
Example 5.1
Modified Cooper’s Algorithm
Input: The number of customers N, the positive multipliers \( s_{1},s_{2},\ldots ,s_{N} \), the locations of the customers \( a_{1},a_{2},\ldots ,a_{N} \), the number of facilities M, the partitions \( A_{i}^{k}~(i=1,2,\ldots ,M) \), and the optimal solution sets \( \Omega _{i}^{k}~(i=1,2,\ldots ,M) \) of the SWPs involved in the kth iteration.
Output: The desirable solution set of the MWP.
Step 1. Set \( t_{r}:=0 \) for \(r=1,2,\ldots , N \).
Step 2. For \(r=1\) to N set:
$$\begin{aligned} \bar{x}_{i,r}^{k}=\left\{ \begin{array}{ll} P_{\Omega _{i}^{k}}(a_{r}) &{} \text {if}~a_{r}\not \in A_{i}^{k},\\ {{\mathrm{argmax}}}_{x\in \Omega _{i}^{k}}\Vert x-a_{r}\Vert &{} \text {otherwise}, \end{array}\right. \quad i=1,2,\ldots ,M. \qquad (5.2) \end{aligned}$$
Step 3. For \(r=1\) to N do:
  for \(j=1\) to N do:
    \( d_{i,j}^{r}=\Vert \bar{x}_{i,r}^{k}-a_{j}\Vert \) for \( i=1,2,\ldots ,M \);
    if \( a_{j}\in A_{h}^{k} \) and \( d_{l,j}^{r}=\min _{i=1,2,\ldots ,M,~i\ne h}\{d_{i,j}^{r}\}<d_{h,j}^{r} \),
    then \( A_{h,r}^{k+1}=A_{h}^{k}{\setminus }\{a_{j}\} \), \( A_{l,r}^{k+1}=A_{l}^{k}\cup \{a_{j}\} \), and \( t_{r}=t_{r}+1 \) (reassign \( a_{j} \)).
Step 4. If \( t_{r}=0 \) for all \( r \in { \mathcal {N}} \), then \(\{\Omega _{1}^{k},\Omega _{2}^{k},\ldots ,\Omega _{M}^{k}\}\) is the desirable solution set of the MWP, the customers in \( A_{i}^{k}~(i=1,2,\ldots ,M) \) should be served from a point \( x_{i}^{k}\in \Omega _{i}^{k}~(i=1,2,\ldots ,M) \), and stop.
Step 5. For \(r=1\) to N do: compute \( \Omega _{i,r}^{k+1} \) for \(i=1,2,\ldots , M \), where \( \Omega _{i,r}^{k+1} \) is the optimal solution set for the cluster \( A_{i,r}^{k+1} \), defined similarly to (5.1).
Step 6. Let
$$\begin{aligned} \bar{r}=\mathop {{{\mathrm{argmin}}}}\limits _{r\in {\mathcal {N}}}\sum _{i=1}^{M}\sum _{\{j\in {\mathcal {N}}:\,a_{j}\in A_{i,r}^{k+1}\}}s_{j}\Vert x_{i,r}^{k+1}-a_{j}\Vert , \end{aligned}$$
where \( x_{i,r}^{k+1} \in \Omega _{i,r}^{k+1} \) for \( i=1,2,\ldots ,M \). Then set \( A_{i}^{k+1}=A_{i,\bar{r}}^{k+1} \), \( \Omega _{i}^{k+1}=\Omega _{i,\bar{r}}^{k+1} ~(i=1,2,\ldots ,M)\), \( k:=k+1 \), and go to Step 1.
Note that the modified Cooper’s algorithm is essentially equivalent to the original Cooper’s algorithm if, in each iteration, the optimal solution sets of the SWPs involved are singletons. In particular, if \( t_{r}=0 \) for some \( r\in {\mathcal {N}} \) in the \(( k+1) \)th iteration, then \( \Omega _{i,r}^{k+1}=\Omega _{i,r}^{k} \). To simplify Step 2 of the modified Cooper’s algorithm, we present the following proposition, whose proof is easy.
Proposition 5.2
(1) If \( a \not \in [b,c] \), then
$$\begin{aligned} P_{[b,c]}(a)=\left\{ \begin{array}{ll} b&{}~\mathrm{{if}} ~~\bar{t}<0,\\ b+\bar{t}(c-b)&{}~\mathrm{{if}} ~~0\leqslant \bar{t} \leqslant 1,\\ c&{}~\mathrm{{if}}~~\bar{t}>1, \end{array}\right. \end{aligned}$$
where
$$\begin{aligned} \bar{t}=\dfrac{(b-a)^{T}(b-c)}{\Vert b-c\Vert ^2}. \end{aligned}$$
(2) Thus,
$$\begin{aligned} \max _{x\in [b,c]}\Vert x-a\Vert =\max \{\Vert b-a\Vert ,\Vert c-a\Vert \}. \end{aligned}$$
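Proposition 5.2 translates directly into code. The sketch below (helper names are ours) clamps \( \bar{t} \) to \( [0,1] \), which reproduces the three cases of part (1), and picks the farther endpoint for part (2):

```python
import math

def project_to_segment(a, b, c):
    """Projection of point a onto the segment [b, c] (Proposition 5.2 (1))."""
    num = sum((bi - ai) * (bi - ci) for ai, bi, ci in zip(a, b, c))
    den = sum((bi - ci) ** 2 for bi, ci in zip(b, c))
    t = num / den                      # \bar{t} from the proposition
    t = min(max(t, 0.0), 1.0)          # clamping covers the three cases
    return tuple(bi + t * (ci - bi) for bi, ci in zip(b, c))

def farthest_in_segment(a, b, c):
    """max over x in [b, c] of ||x - a|| is attained at an endpoint (part (2))."""
    return b if math.dist(a, b) >= math.dist(a, c) else c
```

For example, projecting \( a=(2,5)^T \) onto the segment from \( b=(1,0)^T \) to \( c=(3,0)^T \) gives \( \bar{t}=0.5 \) and the foot \( (2,0)^T \).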
One can apply Proposition 5.2 to solve (5.2) exactly, which sharply decreases the computational time of the modified Cooper’s algorithm.
6 Application
(1) y is the location of the wholesale market to be determined;
(2) \( W_{i} \) is the corresponding weight of the ith facility.
where \(d_{i}(y)=\displaystyle \min _{x_{i}\in \Omega _{i}} \Vert x_{i}-y\Vert \) for \( i=1,2,\ldots ,M \). To make this application clear, the following example is provided.
Example 6.1
7 Conclusion
In this paper, we first proposed an algorithm for sorting points in \( {\mathbb {R}}^n \) in the collinear case. Then, using this result, we presented an efficient algorithm for solving the SWP in the collinear case. Moreover, we modified Cooper’s algorithm so that it takes advantage of the collinear case. A numerical comparison shows the gain in efficiency and effectiveness from considering the optimal solution sets of the SWPs. The numerical results show that the developed algorithms are suitable for solving MWPs of reasonable size in the collinear case. An interesting topic for future research is whether the results of Sect. 4 can be extended to the constrained SWP in the collinear case.
References
1. Beck, A., Sabach, S.: Weiszfeld’s method: old and new results. J. Optim. Theory Appl. 164(1), 1–40 (2015)
2. Blum, M., Floyd, R.W., Pratt, V.R., Rivest, R.L., Tarjan, R.E.: Time bounds for selection. J. Comput. Syst. Sci. 7(4), 448–461 (1972)
3. Bose, P., Maheshwari, A., Morin, P.: Fast approximations for sums of distances, clustering and the Fermat–Weber problem. Comput. Geom. 24(3), 135–146 (2003)
4. Brimberg, J., Hansen, P., Mladenovic, N., Taillard, E.D.: Improvements and comparison of heuristics for solving the uncapacitated multi-source Weber problem. Oper. Res. 48(3), 444–460 (2000)
5. Brimberg, J., Wesolowsky, G.O.: Locating facilities by minimax relative to closest points of demand areas. Comput. Oper. Res. 29(6), 625–636 (2002)
6. Chandrasekaran, R., Tamir, A.: Open questions concerning Weiszfeld’s algorithm for the Fermat–Weber location problem. Math. Program. 44(1), 293–295 (1989)
7. Chandrasekaran, R., Tamir, A.: Algebraic optimization: the Fermat–Weber location problem. Math. Program. 46(2), 219–224 (1990)
8. Chatelon, J.A., Hearn, D.W., Lowe, T.J.: A subgradient algorithm for certain minimax and minisum problems. Math. Program. 15(1), 130–145 (1978)
9. Cooper, L.: Heuristic methods for location-allocation problems. SIAM Rev. 6(1), 37–53 (1964)
10. Eyster, J.W., White, J.A., Wierwille, W.W.: On solving multifacility location problems using a hyperboloid approximation procedure. AIIE Trans. 5(1), 1–6 (1973)
11. Gamal, M.D.H., Salhi, S.: A cellular type heuristic for the multisource Weber problem. Comput. Oper. Res. 30(11), 1609–1624 (2003)
12. Görner, S., Kanzow, C.: On Newton’s method for the Fermat–Weber location problem. J. Optim. Theory Appl. 170(1), 107–118 (2016)
13. Hansen, P., Mladenovic, N., Taillard, E.: Heuristic solution of the multisource Weber problem as a p-median problem. Oper. Res. Lett. 22(2), 55–62 (1998)
14. Jiang, J.L., Yuan, X.M.: A heuristic algorithm for constrained multi-source Weber problem—the variational inequality approach. Eur. J. Oper. Res. 187(2), 357–370 (2008)
15. Katz, I.N.: Local convergence in Fermat’s problem. Math. Program. 6(1), 89–104 (1974)
16. Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms. Springer, Berlin (2002)
17. Kuhn, H.W.: A note on Fermat’s problem. Math. Program. 4(1), 98–107 (1973)
18. Levin, Y., Ben-Israel, A.: A heuristic method for large-scale multi-facility location problems. Comput. Oper. Res. 31(2), 257–272 (2004)
19. Li, Y.: A Newton acceleration of the Weiszfeld algorithm for minimizing the sum of Euclidean distances. Comput. Optim. Appl. 10(3), 219–242 (1998)
20. Luis, M., Said, S., Nagy, G.: A guided reactive GRASP for the capacitated multi-source Weber problem. Comput. Oper. Res. 38(7), 1014–1024 (2011)
21. Ostresh, L.M.: On the convergence of a class of iterative methods for solving the Weber location problem. Oper. Res. 26(4), 597–609 (1978)
22. Overton, M.L.: A quadratically convergent method for minimizing a sum of Euclidean norms. Math. Program. 27(1), 34–63 (1983)
23. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
24. Rosing, K.E.: An optimal method for solving the (generalized) multi-Weber problem. Eur. J. Oper. Res. 58(3), 414–426 (1992)
25. Sherali, H.D., Nordai, F.L.: NP-hard, capacitated, balanced p-median problems on a chain graph with a continuum of link demands. Math. Oper. Res. 13(1), 32–49 (1988)
26. Vardi, Y., Zhang, C.-H.: A modified Weiszfeld algorithm for the Fermat–Weber location problem. Math. Program. 90(3), 559–566 (2001)
27. Weiszfeld, E.: Sur le point pour lequel la somme des distances de \(n\) points donnés est minimum. Tohoku Math. J. 43, 355–386 (1937)
28. Weiszfeld, E., Plastria, F.: On the point for which the sum of the distances to \( n \) given points is minimum. Ann. Oper. Res. 167(1), 7–41 (2009)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.