# The Optimal Solution Set of the Multi-source Weber Problem

• A. Raeisi Dehkordi
Open Access
Original Paper

## Abstract

This paper considers the classical multi-source Weber problem (MWP): locating M new facilities with respect to N customers so as to minimize the sum of transportation costs between these facilities and the customers. We propose a modified algorithm, in the spirit of Cooper’s work, for solving the MWP; each iteration consists of a location phase and an allocation phase. The task of the location phase is to find the optimal solution sets of several single-source Weber problems (SWPs), which are produced by the nearest-center-reclassification heuristic applied to the customers in the preceding allocation phase. Several examples are given to clarify the proposed algorithms. Moreover, we present an algorithm with $$O (d\log d)$$ time for finding the optimal solution set of SWP in the collinear case, where d is the number of customers.

## Keywords

Multi-source Weber problem · Location · Subdifferential

## Mathematics Subject Classification

Primary 90B06 · Secondary 46N10 · 49J52

## 1 Introduction

The single-source Weber problem (SWP), also known as the Fermat–Weber location problem, is probably the first facility location problem historically and was studied as early as the 17th century. It is the simplest continuous facility location model; nevertheless, some questions about it were resolved only recently. The Fermat–Weber location problem seeks a point in $${\mathbb {R}}^n$$ that minimizes the sum of weighted Euclidean distances to the d given points:
\begin{aligned} \min _{x\in {\mathbb {R}}^n} f(x):=\sum _{j=1}^{d}s_{j}\Vert x-a_{j}\Vert , \end{aligned}
where $$a_{j}\in \mathbb {R}^n~(j=1,2,\ldots ,d)$$ is the location of the jth customer, and $$s_{j}>0~(j=1,2,\ldots ,d)$$ is the weight corresponding to the customer $$a_{j}$$. It is well known that if the data points are not collinear, then f is strictly convex and therefore has a unique optimal solution. In the collinear case, SWP reduces to a selection problem (see [2, 16, 21]). Among the several schemes for solving SWP [3, 7, 8, 10, 14, 18, 22, 28], one of the most popular methods was presented in [27] and has been discussed and developed widely in [1, 6, 12, 15, 17, 19, 26]. In [3] the objective function of SWP is approximated by a simpler function, and an $$\epsilon$$-approximation algorithm is presented for solving the resulting problem. In [18] the Newton-Bracketing (NB) method for convex minimization is applied to SWP. This paper presents a fast algorithm for solving SWP in the collinear case. The advantages of this algorithm are threefold:
1. It can find the optimal solution set (i.e., the set of all optimal solutions) of SWP.

2. It is simple and easy to implement.

3. It requires little computational time.
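For concreteness, the objective f is straightforward to evaluate. The following minimal Python sketch uses the customers and weights of Example 4.9 in Sect. 4 as illustrative data; the function name `swp_objective` is ours.

```python
import math

def swp_objective(x, customers, weights):
    """f(x) = sum_j s_j * ||x - a_j|| for the single-source Weber problem."""
    return sum(s * math.dist(x, a) for a, s in zip(customers, weights))

# Illustrative collinear data (the points of Example 4.9, on the line y = 2x).
customers = [(1, 2), (2, 4), (3, 6), (4, 8)]
weights = [1, 2, 2, 1]
value = swp_objective((2, 4), customers, weights)   # f evaluated at a_2
```

Evaluating f at $$a_{2}$$ and at $$a_{3}$$ gives the same value $$5\sqrt{5}$$, consistent with both points being optimal in that example.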

There are several algorithms (see, for instance, [16]) for sorting points in $$\mathbb {R}$$ in nondecreasing order; however, to the best of our knowledge, there exists no algorithm for sorting collinear points in $${\mathbb {R}}^n$$. In our algorithm for solving SWP, we will need to sort such points in $${\mathbb {R}}^n$$. For this purpose, we present an algorithm that runs in $${ O } (d\log d)$$ time.
Here, we consider the classical multi-source Weber problem (MWP), which plays an important role in operations research and management science. The classical MWP is to find the locations of M new facilities that minimize the sum of the transportation costs between these facilities and the N customers with known locations. We assume that the transportation costs are proportional to the corresponding distances. More specifically, the mathematical model of MWP is as follows:
\begin{aligned} \text {MWP}:&\min F(x_{1},x_{2},\ldots ,x_{M}):=\sum _{i=1}^M\sum _{j=1}^N w_{ij}\Vert x_i-a_j\Vert ,\nonumber \\&\displaystyle \sum _{i=1}^M w_{ij}=s_j,\quad j=1,2,\ldots ,N,\nonumber \\&x_i \in {\mathbb {R}}^{n},\quad i=1,2,\ldots ,M, \end{aligned}
(1.1)
where,
1. $$a_j\in {\mathbb {R}}^{n}$$ is the location of the jth customer, $$j=1,2,\ldots ,N$$;

2. $$x_i \in {\mathbb {R}}^{n}$$ is the location of the ith facility to be determined, $$i=1,2,\ldots ,M$$;

3. $$s_j\geqslant 0$$ is the given demand required by the jth customer;

4. $$w_{ij}\geqslant 0$$ denotes the unknown allocation from the ith facility to the jth customer; and

5. $$\Vert \cdot \Vert$$ is the Euclidean norm in $${\mathbb {R}}^{n}$$.

It is well known that the objective function of problem (1.1) is neither concave nor convex and may have several local optimal solutions; see [9]. Furthermore, [25] demonstrates that the problem is NP-hard, even if all of the demand points lie on a straight line. Among the existing effective numerical algorithms for solving MWP is Cooper’s algorithm, which was presented originally in [9]. Its attractive characteristic is that each iteration consists of a location phase and an allocation phase. Moreover, there are similar algorithms [14, 18] based on Cooper’s algorithm for solving MWP. According to [9], Cooper’s algorithm produces a monotone nonincreasing convergent sequence of objective function values. Nevertheless, it is not guaranteed to converge to the global minimum. As Cooper’s algorithm is swift, straightforward, and efficient, it has been widely used by other researchers either to generate efficient initial solutions or to improve the obtained results even further [4, 11, 20].

In this paper, we present a modified algorithm in the spirit of Cooper’s work for solving MWP; its important characteristics are that it considers the optimal solution set and improves the desirability of the solutions of Cooper’s algorithm. Since the facilities are uncapacitated, it can easily be proved that in an optimal solution of MWP each customer is served by its nearest facility. In the location phase of the modified algorithm, we find the optimal solution set instead of a single optimal solution as in Cooper’s algorithm. Then, in the allocation phase of the modified algorithm, we ensure that each customer is assigned to its nearest facility among all optimal solutions of the location phase, which leads to solutions near the best solutions of MWP.

The paper is organized as follows. In the next section, we discuss Cooper’s algorithm, as it will be used as a basis in the design of our algorithm. Section 3 provides some preliminaries which will be used in the sequel. In Sect. 4, we consider SWP in the collinear case and present an algorithm for solving this problem. In Sect. 5, a modified algorithm in the spirit of Cooper’s work is developed. In Sect. 6, an application demonstrating the importance of finding the optimal solution set of MWP is presented.

## 2 A Review of Literature

In [9] an iterative heuristic method, known as the alternate location-allocation algorithm, is presented for solving MWP. This algorithm is efficient in terms of solution quality and computational effort. The basis of Cooper’s algorithm is to decrease the objective function value of MWP in each iteration. Let $${\mathcal {N}}=\{1,2,\ldots ,N \}$$ and $$A=\{a_j|~ j \in {\mathcal {N}} \}$$ denote the set of locations of all customers, and $$A^{k}=\{A_1^{k},A_2^{k},\ldots ,A_M^{k} \}$$ with $$\bigcup _{i=1}^M A_{i}^k=A$$ and $$A_i^k\cap A_j^k=\emptyset$$ (for $$i\ne j$$) denote the disjoint partition of A at the kth iteration. At the $$(k+1)$$th iteration of Cooper’s algorithm, the location phase finds the candidate locations of the facilities by solving the following SWPs:
\begin{aligned}&\mathrm {SWP:}~x_i^{k+1} \in \mathop {\text {argmin}}\limits _{x \in {\mathbb {R}}^{n}}\left\{ F_i(x):=\displaystyle \sum _{\{j\in {\mathcal {N}}, ~a_j \in A_{i}^k \}}s_j \Vert x-a_{j}\Vert \right\} ,\nonumber \\&\quad =\left\{ y\in {\mathbb {R}}^{n}~|~F_{i}(y)\leqslant F_{i}(x)\quad \forall x \in {\mathbb {R}}^{n}\right\} \quad i=1,2,\ldots ,M, \end{aligned}
(2.1)
where $$A_{i}^k$$ for $$i=1,2,\ldots ,M$$ are produced by the allocation phase. Then, the allocation phase produces an allocation, or a partition, which depends on the $$x_{i}^{k+1}~(i=1,2,\ldots ,M)$$ generated by solving (2.1). More specifically, if for all $$i \in \{1,2,\ldots ,M \}$$, $$x_{i}^{k+1}$$ is the nearest facility among all facilities for each customer in $$A_{i}^k$$, then $$A_{i}^k~(i=1,2,\ldots ,M)$$ are the desirable partitions. Therefore, it is reasonable to serve the customers from the facility $$x_{i}^{k+1}$$ to minimize the total sum of transportation costs. Otherwise, the set of customers should be partitioned again according to the heuristic of the nearest center reclassification (NCR), i.e., the new partition $$A=\{A_1^{k+1},A_2^{k+1},\ldots ,A_M^{k+1} \}$$ is generated so that $$x_{i}^{k+1}$$ is the nearest facility for each customer in $$A_i^{k+1}$$. Note that with the presupposition that each facility to be determined is capable of providing sufficient services for the targeted customers, the heuristic characteristic shared by all Cooper-type algorithms is that, finally, each customer ($$a_j$$) is served by exactly one facility (the nearest one). This observation explains the disappearance of $$w_{ij}$$ in (2.1). The MWP has been studied extensively in the literature as a widely used optimization problem. A branch-and-bound algorithm was developed in [24] to solve the problem. Exact methods are not capable of dealing with large instances in reasonable computational time; hence heuristic methods appeared to be the best way forward. In [13], MWP is first solved as a p-median problem, and then Cooper’s algorithm is applied to find proper locations for the facilities. In [4] a solution approach combining variable neighborhood search and Cooper’s algorithm is proposed. A two-phase heuristic method, known as a cellular heuristic, is investigated in [11].
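To make the alternate location-allocation scheme concrete, here is a minimal Python sketch. The function names (`weiszfeld`, `cooper`), the use of the Weiszfeld iteration as the SWP solver in the location phase, and the fixed iteration counts are our own illustrative choices, not part of [9].

```python
import math

def weiszfeld(customers, weights, x0, iters=200):
    """Location phase sketch: the classical Weiszfeld iteration for one SWP.
    As a simplification, it returns a customer location outright whenever an
    iterate lands on it exactly."""
    x = tuple(x0)
    for _ in range(iters):
        num = [0.0] * len(x)
        den = 0.0
        for a, s in zip(customers, weights):
            dist = math.dist(x, a)
            if dist == 0.0:
                return tuple(a)
            for i, ai in enumerate(a):
                num[i] += s * ai / dist
            den += s / dist
        x = tuple(ni / den for ni in num)
    return x

def cooper(customers, weights, centers, iters=50):
    """Alternate location-allocation heuristic: assign each customer to its
    nearest facility (NCR), then re-solve the SWP of each group."""
    centers = [tuple(c) for c in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for j, a in enumerate(customers):          # allocation phase (NCR)
            i = min(range(len(centers)),
                    key=lambda i: math.dist(centers[i], a))
            groups[i].append(j)
        for i, g in enumerate(groups):             # location phase
            if g:
                centers[i] = weiszfeld([customers[j] for j in g],
                                       [weights[j] for j in g], centers[i])
    return centers
```

Since the location phase here returns a single point per group, this sketch can stop at a stationary but non-desirable configuration, which is exactly the behavior the modified algorithm of Sect. 5 addresses.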

## 3 Preliminaries

In this section, we present some notation and auxiliary results that will be needed in what follows. Let $$a,b \in {\mathbb {R}}^n$$. We denote the line segment between a and b by [a, b]; i.e.,
\begin{aligned}{}[a,b]=\{ta+(1-t)b~|~0\leqslant t \leqslant 1 \}. \end{aligned}
The closed unit ball in $${\mathbb {R}}^n$$ is denoted by $${\mathbb {B}}$$. The optimal solution set of SWP is denoted by
\begin{aligned} \Omega :=\{x^{*}\in {\mathbb {R}}^n~|~f(x^{*})\leqslant f(x)~\forall x \in {\mathbb {R}}^n\}. \end{aligned}
Let $$x \in \mathbb {R}$$. We denote the smallest integer greater than or equal to x (the ceiling function at x) by $$\lceil {x}\rceil$$. Let X be a non-empty closed set in $$\mathbb {R}^{n}$$. A point $$y \in X$$ that is closest to x is called a projection of x onto X. The set of points in X closest to x is denoted by
\begin{aligned} P_{X}(x):=\mathop {{{\mathrm{argmin}}}}\limits _{s\in X}\Vert x-s\Vert =\{y\in X~|~\Vert x-y\Vert \leqslant \Vert x-s\Vert \quad \forall s\in X\}. \end{aligned}
We recall the following results from [23]. Let $$\hat{x} \in {\mathbb {R}}^n$$ and $$\phi : {\mathbb {R}}^n \longrightarrow {\mathbb {R}}$$ be a convex and Lipschitz function. In place of the gradient, we consider subgradients, those elements $$\xi$$ of $${\mathbb {R}}^n$$ satisfying:
\begin{aligned} \phi (x)-\phi (\hat{x}) \geqslant \langle \xi , x-\hat{x} \rangle \quad \forall x \in {\mathbb {R}}^n . \end{aligned}
The set of subgradients (called the subdifferential) is denoted by $$\partial \phi (\hat{x})$$.

In the following theorem, we summarize some results, which are used in what follows:

### Theorem 3.1

[23] Let $$\phi : {\mathbb {R}}^n \longrightarrow \mathbb {R}$$ and $$\psi : {\mathbb {R}}^n \longrightarrow \mathbb {R}$$ be Lipschitz and convex functions. Then, the following assertions hold:
1. For any scalar $$\lambda$$, $$\partial \lambda \phi ( x)=\lambda \partial \phi ( x)$$.

2. The point $$\hat{x}$$ is a (global) minimizer of $$\phi$$ if and only if $$0\in \partial \phi (\hat{x})$$.

3. Let $$\psi$$ be differentiable at $$\hat{x}$$. Then $$0\in \partial (\phi +\psi )(\hat{x})$$ if and only if:
\begin{aligned} -\nabla \psi (\hat{x})\in \partial \phi (\hat{x}). \end{aligned}

4. $$\partial ( \Vert (\cdot )-a\Vert )(a)= {\mathbb {B}}$$.
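Assertion (4) can be spot-checked numerically: by the Cauchy–Schwarz inequality, every $$\xi$$ in the closed unit ball satisfies the subgradient inequality for $$\phi (x)=\Vert x-a\Vert$$ at a. A small Python sketch, with an arbitrary illustrative point a:

```python
import math
import random

# Spot-check Theorem 3.1(4): every xi with ||xi|| <= 1 is a subgradient of
# phi(x) = ||x - a|| at a, i.e. phi(x) - phi(a) >= <xi, x - a> for all x.
a = (1.0, -2.0)                      # an arbitrary illustrative point
random.seed(0)

def holds(xi, x):
    lhs = math.dist(x, a)            # phi(x) - phi(a), since phi(a) = 0
    rhs = xi[0] * (x[0] - a[0]) + xi[1] * (x[1] - a[1])
    return lhs >= rhs - 1e-12

checks = []
for _ in range(1000):
    xi = (random.uniform(-1, 1), random.uniform(-1, 1))
    norm = math.hypot(*xi)
    if norm > 1.0:                   # scale xi into the closed unit ball
        xi = (xi[0] / norm, xi[1] / norm)
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    checks.append(holds(xi, x))
```

Every sampled pair satisfies the inequality, as the Cauchy–Schwarz argument guarantees.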

## 4 The Nonsmooth Approach to SWP

The objective function of SWP is nonsmooth, with non-differentiable points $$a_{1},a_{2},\ldots ,a_{d}$$, which makes the problem hard to solve. This section considers the case in which the data points are collinear. The following well-known lemma enables us to recognize the collinearity of the given points $$a_{1},a_{2},\ldots ,a_{d}$$.

### Lemma 4.1

Suppose that $$a_{2}-a_{1},a_{3}-a_{1},\ldots ,a_{d}-a_{1}$$ are the columns of the matrix $$\mathcal {A}$$. Then $$a_{1},a_{2},\ldots ,a_{d}$$ are collinear if and only if $$\mathcal {A}$$ has rank 1 or less.

### Definition 4.2

We say that the list $$a_{1},a_{2},\ldots ,a_{d}$$ is regular if $$[a_{i},a_{j}]\subseteq [a_{p},a_{q}]$$ for all $$1 \leqslant p \leqslant i \leqslant j \leqslant q \leqslant d$$.

Throughout this section, we suppose that $$a_{1},a_{2},\ldots ,a _{d}$$ are distinct and collinear. In the following theorem, we give a rearrangement of the points that yields a regular list.

### Theorem 4.3

The given points $$a_{1},a_{2},\ldots ,a_{d}$$ can be rearranged into $$a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (d)}$$ so that there exists $$k \in \{1,2,\ldots ,n\}$$ such that
\begin{aligned} a_{k ,\pi (1)}<a_{k ,\pi (2)}<\cdots < a_{k, \pi (d)}, \end{aligned}
where $$\pi :\{1,2,\ldots ,d\}\longrightarrow \{1,2,\ldots ,d\}$$ is a permutation and $$a_{k ,\pi (j)}~ (j=1,2,\ldots ,d)$$ is the kth coordinate of $$a_{\pi (j)}$$. Moreover, if $$a_{1},a_{2},\ldots ,a_{d}$$ are rearranged as $$a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (d)}$$, then $$\{a_{k, \pi (j)}\}_{j=1}^{d}$$ is a constant or strictly monotone sequence for each $$k \in \{1,2,\ldots ,n\}$$. In particular, the list $$a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (d)}$$ is regular.

### Proof

The proof is trivial. $$\square$$

Let $$\mathbb {A} =\left( a_{1},a_{2},\ldots ,a_{d}\right)$$. To regularize $$a_{1},a_{2},\ldots ,a_{d}$$, Theorem 4.3 suggests a simple algorithm, which requires sorting the elements $$a_{k,1},a_{k,2},\ldots ,a_{k,d}$$ of a row k of $$\mathbb {A}$$ such that $$a_{k,1}\ne a_{k,2}$$.

Sort Algorithm

Input: A list $$a_{1},a_{2},\ldots ,a_{d}$$ of customers.

Output: A permutation $$\pi :\{1,2,\ldots ,d\}\longrightarrow \{1,2,\ldots ,d\}$$ such that $$a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (d)}$$ is regular.

Step 1. Find $$k\in \{1,2,\ldots ,n\}$$ such that $$a_{k,1}\ne a_{k,2}$$.

Step 2. Sort $$a_{k,1},a_{k,2},\ldots ,a_{k,d}$$, i.e., find a permutation $$\pi :\{1,2,\ldots ,d\}\longrightarrow \{1,2,\ldots ,d\}$$ such that $$a_{k,\pi (i)}<a_{k,\pi (i+1)}$$ for all $$i=1,2,\ldots ,d-1$$.

Step 3. Return the regular list $$a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (d)}$$.

Note that the list $$a_{k,1},a_{k,2},\ldots ,a_{k,d}$$ in Step 2 can be sorted in $$O (d \log d)$$ time using the merge sort algorithm in [16]. Therefore, the overall running time (or time complexity) of the sort algorithm is $$O (n)+ O (d\log d)$$. In view of the sort algorithm, from now on we assume that the list $$a_{1},a_{2},\ldots ,a_{d}$$ is regular.
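The sort algorithm admits a short Python sketch (the function name `regularize` and the 0-based indexing are ours):

```python
def regularize(points):
    """Sort algorithm: points is a list of d >= 2 distinct collinear points
    in R^n, given as tuples. Returns a permutation of 0-based indices such
    that the reordered list is regular."""
    # Step 1: find a coordinate k in which the first two points differ.
    k = next(i for i, (u, v) in enumerate(zip(points[0], points[1])) if u != v)
    # Step 2: sort the indices by the k-th coordinate (O(d log d)).
    return sorted(range(len(points)), key=lambda j: points[j][k])
```

For the customer matrix of Example 4.6 below, this returns the permutation $$[0,2,5,1,4,3]$$ (0-based), i.e., the regular order $$a_{1},a_{3},a_{6},a_{2},a_{5},a_{4}$$.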

In the following theorem, we discuss the convexity and closedness of the optimal solution set of SWP.

### Theorem 4.4

The set $$\Omega$$ is a closed and convex set. In particular, $$\Omega$$ is a subset of $$[a_{1},a_{d}]$$.

### Proof

It is easy to see that $$\Omega$$ is a closed and convex set. Now we denote by h the straight line passing through $$a_{1},a_{2},\ldots ,a_{d}$$,
\begin{aligned} h=\{a_{1}+t(a_{d}-a_{1}),~t\in \mathbb {R}\}. \end{aligned}
Since the list $$a_{1},a_{2},\ldots ,a_{d}$$ is distinct and regular, there exists $$k\in \{1,2,\ldots ,n\}$$ such that
\begin{aligned} a_{k,1}<a_{k,2}<\cdots <a_{k,d}. \end{aligned}
By contradiction, suppose that $$x^{*}\in \Omega$$ and $$x^{*}\not \in [a_{1},a_{d}]$$. If $$x^{*}\not \in h$$, then $$P_{h}(x^{*})\ne x^{*}$$ and
\begin{aligned} \Vert x^{*}-a_{j}\Vert ^{2}=\Vert x^{*}-P_{h}(x^{*})\Vert ^{2}+\Vert P_{h}(x^{*}) -a_{j}\Vert ^{2},\quad j=1,2,\ldots ,d. \end{aligned}
Therefore,
\begin{aligned} \Vert P_{h}(x^{*})-a_{j}\Vert \leqslant \Vert x^{*}-a_{j}\Vert ,\quad j=1,2,\ldots ,d, \end{aligned}
with at least one strict inequality. Multiplying the above inequalities by $$s_{j}$$ and adding them, we get
\begin{aligned} \sum _{j=1}^{d}s_{j}\Vert P_{h}(x^{*})-a_{j}\Vert <\sum _{j=1}^{d}s_{j}\Vert x^{*} -a_{j}\Vert . \end{aligned}
This contradicts the optimality of $$x^{*}$$, so $$x^{*}$$ cannot be an optimal solution of SWP. Now suppose that $$x^{*} \in h$$ and $$x^{*}\not \in [a_{1},a_{d}]$$. To proceed, consider the two possible cases: (a) $$x_{k}^{*}>a_{k,d}$$, and (b) $$x_{k}^{*}<a_{k,1}$$. In case (a), $$P_{[a_{1},a_{d}]}(x^{*})=a_{d}$$ and
\begin{aligned} \Vert x^{*}-a_{j}\Vert =\Vert x^{*}-a_{d}\Vert +\Vert a_{d}-a_{j}\Vert ,\quad j=1,2,\ldots ,d-1. \end{aligned}
Therefore,
\begin{aligned} \Vert a_{d}-a_{j}\Vert <\Vert x^{*}-a_{j}\Vert ,\quad j=1,2,\ldots ,d-1. \end{aligned}
Multiplying the above inequalities by $$s_{j}$$ and adding them, we obtain
\begin{aligned} \sum _{j=1}^{d}s_{j}\Vert a_{d}-a_{j}\Vert <\sum _{j=1}^{d}s_{j}\Vert x^{*}-a_{j}\Vert . \end{aligned}
This is a contradiction. Case (b) is handled similarly to case (a). $$\square$$

In the following theorem, we show that the optimal solution set of SWP is a segment whose endpoints are customer locations.

### Theorem 4.5

There exist $$p,q \in \{1,2,\ldots ,d\}$$ such that $$\Omega =[a_{p},a_{q}]$$.

### Proof

According to Theorem 4.4, we have $$\Omega =[x^{*},y^{*}]$$, where $$x^{*},y^{*} \in [a_{1},a_{d}]$$. By contradiction, suppose that $$x^{*}\not \in \{a_{1},a_{2},\ldots ,a_{d}\}$$. Since $$x^{*}$$ and $$a_{1},a_{2},\ldots ,a_{d}$$ are distinct and collinear, it follows from Theorem 4.3 that there exist $$p\in \{1,2,\ldots ,d\}$$ and $$k\in \{1,2,\ldots ,n\}$$ such that $$a_{k,p}< x_{k}^{*}< a_{k,p+1}$$. Since $$x^{*}$$ is an optimal solution of SWP, we have $$0\in \partial f(x^{*})$$. By differentiability of f at $$x^{*}$$,
\begin{aligned} \sum _{j=1}^{d}s_{j}\dfrac{x^{*}-a_{j}}{\Vert x^{*}-a_{j}\Vert }=0. \end{aligned}
We prove that $$0\in \partial f(a_{p})$$. For this purpose, we first show that
\begin{aligned} \dfrac{x^{*}-a_{j}}{\Vert x^{*}-a_{j}\Vert }=\dfrac{a_{p}-a_{j}}{\Vert a_{p}-a_{j}\Vert }, \quad j=1,\ldots ,d,\quad j\ne p. \end{aligned}
If $$j \in \{1,\ldots ,p-1\}$$, then there exists $$t_{j}>1$$ such that $$x^{*}=a_{j}+t_{j}(a_{p}-a_{j})$$. Therefore,
\begin{aligned} \dfrac{x^{*}-a_{j}}{\Vert x^{*}-a_{j}\Vert }=\dfrac{a_{j}+t_{j}(a_{p}-a_{j})-a_{j}}{\Vert a_{j}+t_{j}(a_{p}-a_{j})-a_{j}\Vert }=\dfrac{a_{p}-a_{j}}{\Vert a_{p}-a_{j}\Vert }, \quad j=1,\ldots ,p-1. \end{aligned}
If $$j \in \{p+1,\ldots ,d\}$$, then there exists $$t_{j} \in (0,1)$$ such that $$x^{*}=a_{p}+t_{j}(a_{j}-a_{p}).$$ Therefore,
\begin{aligned} \dfrac{x^{*}-a_{j}}{\Vert x^{*}-a_{j}\Vert }=\dfrac{a_{p}+t_{j}(a_{j}-a_{p})-a_{j}}{\Vert a_{p}+t_{j}(a_{j}-a_{p})-a_{j}\Vert }=\dfrac{a_{p}-a_{j}}{\Vert a_{p}-a_{j}\Vert },\quad j=p+1,\ldots ,d. \end{aligned}
It implies that
\begin{aligned} \left\| -\sum _{\begin{array}{c} j=1\\ j\ne p \end{array}}^{d}s_{j}\dfrac{a_{p}-a_{j}}{\Vert a_{p}-a_{j}\Vert } \right\| =\left\| - \sum _{\begin{array}{c} j=1\\ j\ne p \end{array}}^{d}s_{j}\dfrac{x^{*}-a_{j}}{\Vert x^{*}-a_{j}\Vert }\right\| =\left\| s_{p} \dfrac{x^{*}-a_{p}}{\Vert x^{*}-a_{p}\Vert }\right\| =s_{p}. \end{aligned}
Hence,
\begin{aligned} -\sum _{\begin{array}{c} j=1\\ j\ne p \end{array}}^{d}s_{j}\dfrac{a_{p}-a_{j}}{\Vert a_{p}-a_{j}\Vert }\in s_{p}\,\partial \left( \Vert (\cdot )-a_{p}\Vert \right) (a_{p}). \end{aligned}
Therefore, $$0\in \partial f(a_{p})$$.

Since f is a convex function, it follows that $$a_{p} \in \Omega$$, which is a contradiction. Similarly, there exists $$q\in \{1,2,\ldots ,d\}$$ such that $$y^{*}=a_{q}$$. $$\square$$

For convenience, we choose $$a_{0}, a_{d+1}\in {\mathbb {R}}^{n}$$ such that $$a_{0},a_{1},\ldots ,a_{d+1}$$ are distinct, collinear, and the list $$a_{0},a_{1},\ldots ,a_{d+1}$$ is regular. We set $$s_{0}:=0$$, $$s_{d+1}:=0$$, $$T_{0}:= \sum _{j=1}^{d}s_{j}$$,
\begin{aligned} T_{k}:=\sum _{j=k+1}^{d+1}s_{j}-\sum _{j=0}^{k-1}s_{j},\quad k=1,2,\ldots ,d, \end{aligned}
and $$T_{d+1}:=-\sum _{j=1}^{d}s_{j}$$. Observe that
\begin{aligned} T_{k}=T_{k-1}-s_{k}-s_{k-1},\quad k=1,2,\ldots ,d, \end{aligned}
and
\begin{aligned} T_{k}=T_{k+1}+s_{k}+s_{k+1},\quad k=1,2,\ldots ,d. \end{aligned}
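The quantities $$T_{k}$$ and the two recurrences can be checked directly, as in the following sketch (the weights are illustrative, and the function name `T` is ours):

```python
def T(k, s):
    """T_k for 0 <= k <= d+1, where s = [s_1, ..., s_d] and s_0 = s_{d+1} = 0."""
    w = [0] + list(s) + [0]            # w[j] = s_j, with the zero padding
    return sum(w[k + 1:]) - sum(w[:k])

s = [3, 1, 10, 8, 2, 4]                # illustrative weights
w = [0] + s + [0]
# Verify T_k = T_{k-1} - s_k - s_{k-1} and T_k = T_{k+1} + s_k + s_{k+1}.
ok_down = all(T(k, s) == T(k - 1, s) - w[k] - w[k - 1] for k in range(1, len(s) + 1))
ok_up = all(T(k, s) == T(k + 1, s) + w[k] + w[k + 1] for k in range(1, len(s) + 1))
```

The recurrences let the backward and forward algorithms below update $$T_{k}$$ in constant time per step instead of re-summing the weights.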
We propose the following algorithms for solving SWP in the collinear case.

Backward algorithm

Input: The number of customers d and positive multipliers $$s_{1},s_{2},\ldots ,s_{d}$$.

Output: The optimal solution set of SWP.

Step 1. Pick an arbitrary $$r\in \{2,3,\ldots ,d,d+1\}$$ such that $$T_{r}<-s_{r}$$.

Step 2. Set $$l=0$$.

Step 3. For $$k=r-1$$ down to 1 do:

$$T_{k}=T_{k+1}+s_{k}+s_{k+1};$$

if $$|T_{k}|\leqslant s_{k}$$, then set $$l=l+1$$ and $$P(l)=k$$;

else, if $$T_{k}>s_{k}$$, then stop.

Step 4. $$[a_{P(1)},a_{P(l)}]$$ is the optimal solution set of SWP.

The backward algorithm can be replaced by the forward algorithm.

Forward algorithm

Input: The number of customers d and positive multipliers $$s_{1},s_{2},\ldots ,s_{d}$$.

Output: The optimal solution set of SWP.

Step 1. Pick an arbitrary $$r\in \{0,1,\ldots ,d-1\}$$ such that $$T_{r}>s_{r}$$.

Step 2. Set $$l=0$$.

Step 3. For $$k=r+1$$ to d do:

$$T_{k}=T_{k-1}-s_{k}-s_{k-1};$$

if $$|T_{k}|\leqslant s_{k}$$, then set $$l=l+1$$ and $$P(l)=k$$;

else, if $$T_{k}<-s_{k}$$, then stop.

Step 4. $$[a_{P(1)},a_{P(l)}]$$ is the optimal solution set of SWP.
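A direct Python sketch of the forward algorithm (starting from $$r=0$$, which always satisfies $$T_{0}>s_{0}=0$$; the function name `forward` is ours). It returns the 1-based indices p, q with $$\Omega =[a_{p},a_{q}]$$:

```python
def forward(s):
    """Forward algorithm for SWP with a regular list of collinear customers.
    s = [s_1, ..., s_d] are the positive weights; returns (p, q), 1-based,
    such that [a_p, a_q] is the optimal solution set."""
    d = len(s)
    w = [0] + list(s) + [0]        # pad s_0 = s_{d+1} = 0
    T = sum(s)                     # T_0
    opt = []                       # indices k with 0 in the subdifferential at a_k
    for k in range(1, d + 1):      # Step 3 with r = 0
        T -= w[k] + w[k - 1]       # T_k = T_{k-1} - s_k - s_{k-1}
        if abs(T) <= w[k]:
            opt.append(k)
        elif T < -w[k]:
            break
    return opt[0], opt[-1]
```

On the sorted weights of Example 4.6, $$s=(3,1,10,8,2,4)$$, this returns $$(3,4)$$, in agreement with $$\Omega =[a_{\pi (3)},a_{\pi (4)}]$$.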

To make these algorithms clear, the following example is provided.

### Example 4.6

Consider six customers whose locations denoted by $$a_{j}~(j=1,2,\ldots ,6)$$ are given by the columns of the following matrix:
\begin{aligned} \mathbb {A}=\left( \begin{array}{rrrrrr} -5&0&-3&4&2&-1\\ 10&0&6&-8&-4&2 \end{array}\right) , \end{aligned}
and the weights $$s_{j}~ (j=1,2,\ldots ,6)$$ are given by the entries of the row matrix:
\begin{aligned} S=\left( \begin{array}{rrrrrr} 3&8&1&4&2&10 \end{array}\right) . \end{aligned}
We first use the sort algorithm for regularizing $$a_{1},a_{2},\ldots ,a_{6}$$. Then, we have
\begin{aligned} \mathbb {A}_{\pi }&:=\left( \begin{array}{rrrrrr} -5&-3&-1&0&2&4\\ 10&6&2&0&-4&-8 \end{array}\right) ,\\ S_{\pi }&:=\left( \begin{array}{rrrrrr} 3&1&10&8&2&4 \end{array}\right) , \end{aligned}
whose columns list $$a_{\pi (1)},a_{\pi (2)},\ldots ,a_{\pi (6)}$$ and $$s_{\pi (1)},s_{\pi (2)},\ldots ,s_{\pi (6)}$$, respectively. Now, we apply the forward algorithm for solving SWP with positive multipliers $$s_{\pi (1)},s_{\pi (2)},\ldots ,s_{\pi (6)}$$. By setting $$r=0$$, we obtain
\begin{aligned}&r=0,\quad l=0,\quad T_{0}=28,\\&k=1,\quad T_{k}=25,\\&k=2,\quad T_{k}=21,\\&k=3,\quad T_{k}=10,\quad l=1,\quad P(1)=\pi (3),\\&k=4,\quad T_{k}=-8,\quad l=2,\quad P(2)=\pi (4),\\&k=5,\quad T_{k}=-18.\\ \end{aligned}
Hence, the optimal solution set is $$[a_{\pi (3)},a_{\pi (4)} ]=[a_6 ,a_2]$$.

### Theorem 4.7

The backward (forward) algorithm works correctly.

### Proof

Let $$k \in \{1,2,\ldots ,d\}$$. It is easy to see that the following equalities hold.
\begin{aligned} \dfrac{a_{k}-a_{0}}{\Vert a_{k}-a_{0}\Vert }&= \dfrac{a_{d+1}-a_{0}}{\Vert a_{d+1}-a_{0}\Vert },\\ \dfrac{a_{k}-a_{d+1}}{\Vert a_{k}-a_{d+1}\Vert }&=- \dfrac{a_{d+1}-a_{0}}{\Vert a_{d+1}-a_{0}\Vert },\\ \dfrac{a_{k}-a_{j}}{\Vert a_{k}-a_{j}\Vert }&=\dfrac{a_{k}-a_{0}}{\Vert a_{k}-a_{0}\Vert }\quad (j=1,2,\ldots ,k-1),\\ \dfrac{a_{k}-a_{j}}{\Vert a_{k}-a_{j}\Vert }&=\dfrac{a_{k}-a_{d+1}}{\Vert a_{k}-a_{d+1}\Vert }\quad (j=k+1,k+2,\ldots ,d). \end{aligned}
For $$k=1,2,\ldots ,d$$, by the above equalities and Theorem 3.1, the following equivalences hold:
\begin{aligned} |T_{k}|\leqslant s_{k} \Longleftrightarrow&\left\| \sum _{j=k+1}^{d+1}s_{j}\dfrac{a_{d+1}-a_{0}}{\Vert a_{d+1}-a_{0}\Vert }- \sum _{j=0}^{k-1}s_{j}\dfrac{a_{d+1}-a_{0}}{\Vert a_{d+1}-a_{0}\Vert } \right\| \leqslant s_{k}\nonumber \\ \Longleftrightarrow&\left\| -\sum _{j=k+1}^{d+1}s_{j}\dfrac{a_{k}-a_{d+1}}{\Vert a_{k}-a_{d+1}\Vert }- \sum _{j=0}^{k-1}s_{j}\dfrac{a_{k}-a_{0}}{\Vert a_{k}-a_{0}\Vert } \right\| \leqslant s_{k}\nonumber \\ \Longleftrightarrow&\left\| -\sum _{j=k+1}^{d+1}s_{j}\dfrac{a_{k}-a_{j}}{\Vert a_{k}-a_{j}\Vert }- \sum _{j=0}^{k-1}s_{j}\dfrac{a_{k}-a_{j}}{\Vert a_{k}-a_{j}\Vert } \right\| \leqslant s_{k}\nonumber \\ \Longleftrightarrow&-\sum _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{d}s_{j}\dfrac{a_{k}-a_{j}}{\Vert a_{k}-a_{j}\Vert }\in s_{k} \partial \left( \Vert (\cdot )-a_{k}\Vert \right) (a_{k}) \nonumber \\ \Longleftrightarrow&-\sum _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{d}s_{j}\dfrac{a_{k}-a_{j}}{\Vert a_{k}-a_{j}\Vert }\in \partial \left( s_{k}\Vert (\cdot )-a_{k}\Vert \right) (a_{k})\nonumber \\ \Longleftrightarrow&0\in \partial f(a_{k}). \end{aligned}
(4.1)
Since the objective function is convex, the proof is complete. $$\square$$

The proof of Theorem 4.7 leads directly to the following result.

### Proposition 4.8

The optimal solution set of SWP is either $$\{a_{p}\}$$ or $$[a_{p},a_{p+1}]$$ for some $$p \in \{1,2,\ldots ,d\}$$. In particular, if there exists $$p \in \{1,2,\ldots ,d\}$$ such that the optimal solution set of SWP is $$[a_{p},a_{p+1}]$$, then
\begin{aligned}&\sum _{j=p+1}^{d+1}s_{j}-\sum _{j=0}^{p-1}s_{j}=s_{p},\\&\sum _{j=p+2}^{d+1}s_{j}-\sum _{j=0}^{p}s_{j}=-s_{p+1}. \end{aligned}

### Proof

We first prove the first assertion. By contradiction, suppose that there exist $$p \in \{1,2,\ldots ,d\}$$ and $$r \in \{2,3,\ldots ,d-p\}$$ such that $$[a_{p},a_{p+r}]$$ is the optimal solution set of SWP. Then $$0\in \partial f(a_{i})$$ for $$i=p,p+1,\ldots ,p+r$$. Based on (4.1), we have
\begin{aligned} -s_{i} \leqslant \sum _{j=i+1}^{d+1}s_{j}-\sum _{j=0}^{i-1}s_{j}\leqslant s_{i},\quad i=p,p+1, \ldots ,p+r. \end{aligned}
We deduce that
\begin{aligned} s_{p}&\geqslant \sum _{j=p+1}^{d+1}s_{j}-\sum _{j=0}^{p-1}s_{j} \\&\geqslant \sum _{\begin{array}{c} j=p+1\\ j\ne p+r \end{array}}^{d+1}s_{j}-\sum _{j=0}^{p-1}s_{j} -\sum _{j=p+r+1}^{d+1}s_{j}+\sum _{j=0}^{p+r-1}s_{j}\\&\geqslant s_{p}+2 \sum _{j=p+1}^{p+r-1}s_{j}\\&>s_{p}. \end{aligned}
This is a contradiction, which proves the first assertion.
Now we prove the second assertion. By contradiction, suppose that
\begin{aligned} \sum _{j=p+1}^{d+1}s_{j}-\sum _{j=0}^{p-1}s_{j}\ne s_{p}. \end{aligned}
By (4.1), we have
\begin{aligned}&-s_{p} \leqslant \sum _{j=p+1}^{d+1}s_{j}-\sum _{j=0}^{p-1}s_{j}< s_{p},\\&-s_{p+1} \leqslant \sum _{j=p+2}^{d+1}s_{j}-\sum _{j=0}^{p}s_{j}\leqslant s_{p+1}. \end{aligned}
Therefore,
\begin{aligned} s_{p}&> \sum _{j=p+1}^{d+1}s_{j}-\sum _{j=0}^{p-1}s_{j} \\&\geqslant \sum _{j=p+2}^{d+1}s_{j}-\sum _{j=0}^{p-1}s_{j} -\sum _{j=p+2}^{d+1}s_{j}+\sum _{j=0}^{p}s_{j}\\&=s_{p}, \end{aligned}
which is again a contradiction; hence $$\sum _{j=p+1}^{d+1}s_{j}-\sum _{j=0}^{p-1}s_{j}= s_{p}$$. A similar argument shows that
\begin{aligned} \sum _{j=p+2}^{d+1}s_{j}-\sum _{j=0}^{p}s_{j}=- s_{p+1}. \end{aligned}
$$\square$$

One can apply Proposition 4.8 to determine the optimal solution set of SWP.

Let the list $$a_{1},a_{2},\ldots ,a_{d}$$ be regular. The point $$a_{i}$$ is a median of the list $$a_{1},a_{2},\ldots ,a_{d}$$ if index i satisfies:
\begin{aligned} \sum _{j=1}^{i-1}s_{j}<\sum _{j=1}^{d}\frac{s_{j}}{2}\quad \text {and}\quad \sum _{j=1}^{i}s_{j}\geqslant \sum _{j=1}^{d}\frac{s_{j}}{2}. \end{aligned}
(4.2)
In the collinear case, it is well known that if $$a_{i}$$ is a median of the list $$a_{1},a_{2},\ldots ,a_{d}$$, then $$a_{i}$$ is an optimal solution of SWP (see [2, 16, 21]).
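A median index satisfying (4.2) can be found by a single prefix-sum scan, as in the following minimal sketch (the function name `median_index` is ours):

```python
def median_index(s):
    """Return the smallest 1-based index i with
    s_1 + ... + s_{i-1} < S/2 and s_1 + ... + s_i >= S/2,
    where S is the total weight, i.e. the index of Condition (4.2)."""
    half = sum(s) / 2.0
    acc = 0.0
    for i, weight in enumerate(s, start=1):
        acc += weight
        if acc >= half:          # first index reaching half the total weight
            return i
```

For the weights of Example 4.9, $$s=(1,2,2,1)$$, this returns $$i=2$$, so $$a_{2}$$ is a median and hence an optimal solution.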

Proposition 4.8 presents a necessary and sufficient condition of optimality for SWP, whereas Condition (4.2) is only a sufficient condition of optimality for SWP in the collinear case. The following example illustrates the difference between Condition (4.2) and the necessary and sufficient condition of Proposition 4.8.

### Example 4.9

Consider SWP with $$a_{1}=(1,2)^T,~a_{2}=(2,4)^T,~a_{3}=(3,6)^T,$$ $$a_{4}=(4,8)^T,~s_{1}=1,~ s_{2}=2,~s_{3}=2,$$ and $$s_{4}=1$$. Let us check the optimality conditions at the points $$x_{1}=(2,4)^T,~x_{2}=(2.25,4.5)^T,~x_{3}=(3,6)^T,~x_{4}=(4,8)^T$$.

Consider $$x_{1}=(2,4)^T$$; then $$x_{1}=a_{2}$$. For Condition (4.2), we obtain $$i=2$$ and
\begin{aligned} \sum _{j=1}^{1}s_{j}=1<\sum _{j=1}^{4}\dfrac{s_{j}}{2}=3\quad \text {and}\quad \sum _{j=1}^{2}s_{j}=3 \geqslant \sum _{j=1}^{4}\dfrac{s_{j}}{2}=3. \end{aligned}
Therefore, $$x_{1}=a_{2}$$ satisfies Condition (4.2), and $$x_{1}$$ is an optimal solution of SWP. Turning to Proposition 4.8, we have
\begin{aligned} T_{2}=\sum _{j=3}^{5}s_{j}-\sum _{j=0}^{1}s_{j}=2. \end{aligned}
Therefore, $$|T_{2}|\leqslant s_{2}=2$$ and $$x_{1}$$ is an optimal solution of SWP.

Now consider $$x_{2}=(2.25,4.5)^T$$. Since there exists no $$a_{j}~(j=1,2,\ldots ,4)$$ such that $$x_{2}=a_{j}$$, Condition (4.2) gives no information about optimality at this point. On the other hand, since $$x_{2} \in [a_{2},a_{3}]$$, by Proposition 4.8 it is necessary and sufficient to verify that $$T_{2}=s_{2}$$ for optimality at $$x_{2}$$. Since $$T_{2}=2=s_{2}$$, it follows that $$x_{2}$$ is an optimal solution of SWP.

Now consider $$x_{3}=(3,6)^T$$. Since $$x_{3}=a_{3}$$, it is easy to see that $$i=3$$. Since
\begin{aligned} \sum _{j=1}^{2}s_{j}=3 \not < \sum _{j=1}^{4}\dfrac{s_{j}}{2}=3, \end{aligned}
it follows that $$x_{3}$$ does not satisfy Condition (4.2). However, $$|T_{3}|=2 \leqslant s_{3}=2$$. Therefore, $$x_{3}$$ satisfies the optimality condition of Proposition 4.8. This observation shows that Condition (4.2) is only a sufficient condition of optimality for SWP.
Finally, consider $$x_{4}=(4,8)^T$$. Since $$|T_{4}|=5\not \leqslant s_{4}=1$$, it follows that $$x_{4}$$ is not an optimal solution of SWP. On the other hand,
\begin{aligned} \sum _{j=1}^{3}s_{j}=5 \not < \sum _{j=1}^{4}\dfrac{s_{j}}{2}=3. \end{aligned}
Hence $$x_{4}$$ does not satisfy Condition (4.2). But since Condition (4.2) is only a sufficient condition of optimality, this alone does not allow us to conclude that $$x_{4}$$ is not an optimal solution of SWP.

Now let us analyze the running time of the backward (forward) algorithm.

### Theorem 4.10

The backward (forward) algorithm terminates after at most d iterations, and can be implemented to run in $$O (d)$$ time.

### Proof

In the worst case, $$[a_{1},a_{2}]$$ is the optimal solution set and we apply the backward algorithm beginning with $$r=d+1$$. Then the backward algorithm terminates with $$k=1$$, i.e., after d iterations. Let us denote the worst-case running time of the backward algorithm for d customers by $$g_{b} (d)$$. We obtain
\begin{aligned} g_{b}(d)=(d+1)+2d+2=3d+3. \end{aligned}
Similarly, in the worst case the forward algorithm terminates after d iterations, and its worst-case running time for d customers (denoted by $$g_{f} (d)$$) is
\begin{aligned} g_{f}(d)=d+2d+2=3d+2. \end{aligned}
$$\square$$
We can decrease the overall running time by choosing a suitable initial iterate. To this end, by Proposition 4.8, we suggest the following approach:
\begin{aligned} \left\{ \begin{array}{ll} &{}\text {Use the forward algorithm with}~r=\lceil {\frac{d}{2}}\rceil ~\text {if}~T_{\lceil {\frac{d}{2}}\rceil }>s_{\lceil {\frac{d}{2}}\rceil }.\\ &{}\text {Set}~ [a_{\lceil {\frac{d}{2}}\rceil },a_{\lceil {\frac{d}{2}}\rceil +1}] ~\text {is the optimal solution set}~\text {if}~T_{\lceil {\frac{d}{2}}\rceil }=s_{\lceil {\frac{d}{2}}\rceil }.\\ &{}\text {Set}~ \{a_{\lceil {\frac{d}{2}}\rceil }\} ~\text {is the optimal solution set}~\text {if}~ -s_{\lceil {\frac{d}{2}}\rceil }<T_{\lceil {\frac{d}{2}}\rceil }<s_{\lceil {\frac{d}{2}}\rceil }.\\ &{}\text {Set}~ [a_{\lceil {\frac{d}{2}}\rceil -1},a_{\lceil {\frac{d}{2}}\rceil }] ~\text {is the optimal solution set}~\text {if}~T_{\lceil {\frac{d}{2}}\rceil }=-s_{\lceil {\frac{d}{2}}\rceil }.\\ &{}\text {Use the backward algorithm with}~r=\lceil {\frac{d}{2}}\rceil ~\text {if}~ T_{\lceil {\frac{d}{2}}\rceil }<-s_{\lceil {\frac{d}{2}}\rceil }. \end{array} \right. \end{aligned}
The worst case of the above approach occurs when d is odd, $$T_{\lceil {\frac{d}{2}}\rceil }>s_{\lceil {\frac{d}{2}}\rceil }$$, and $$\Omega = [a_{d-1},a_{d}]$$. Denoting the worst-case running time of the above approach by g(d), and using $$d-\lceil {\frac{d}{2}}\rceil =\frac{d-1}{2}$$ for odd d, we obtain
\begin{aligned} g(d)=2+d+2\left( d-\lceil {\frac{d}{2}}\rceil \right) +2= 2+d+(d-1)+2=2d+3. \end{aligned}
Note that computing f(y) for any $$y \in {\mathbb {R}}^n$$ requires $$(3n+2)d-1$$ operations, which exceeds the operation count of the forward (backward) algorithm for $$n\geqslant 2$$ or $$d\geqslant 2$$.
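The selection rule above admits a direct implementation. The following is a minimal Python sketch (a left-to-right scan rather than the paper's version, which starts at $$r=\lceil \frac{d}{2}\rceil$$); it assumes the convention $$T_{r}=\sum _{j>r}s_{j}-\sum _{j<r}s_{j}$$, which is consistent with the case analysis above, and the function name and return encoding are illustrative:

```python
def collinear_swp_opt_set(s):
    """Optimal solution set of a collinear SWP with ordered positive
    weights s (s[r] is the weight of the (r+1)th customer on the line).

    Returns ('point', r) if {a_{r+1}} is the unique optimum, or
    ('segment', r, r+1) if the segment [a_{r+1}, a_{r+2}] is optimal
    (0-based indices).  Exact comparisons assume integer or rational
    weights.
    """
    total = sum(s)
    left = 0  # sum of s_j for j < r
    for r, w in enumerate(s):
        right = total - left - w      # sum of s_j for j > r
        T = right - left              # assumed convention for T_r
        if -w < T < w:
            return ('point', r)       # a_{r+1} is the unique optimum
        if T == w:
            return ('segment', r, r + 1)
        if T == -w:
            return ('segment', r - 1, r)
        left += w
```

For example, with six unit weights the returned segment is the one between the third and fourth customers, in agreement with the situation of Example 5.1 below.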

In the next section, we present an example in which Cooper’s algorithm converges to an undesirable solution. To overcome this difficulty, we propose a modified Cooper’s algorithm, which improves Cooper’s algorithm with respect to the desirability of solutions.

## 5 Modified Cooper’s Algorithm

In the following example, we verify the behavior of Cooper’s algorithm applied to MWP.

### Example 5.1

Assume that there are 12 customers whose locations $$a_{j}~(j=1,2,\ldots ,12)$$ are given by the columns of the following matrix:
\begin{aligned} A=\left( \begin{array}{r r r r r r r r r r r r} 0\;&{}2\;&{}4\;&{}6\;&{}8\;&{}10\;&{}11\;&{}13\;&{}15\;&{}17\;&{}19\;&{}21\\ 0\;&{}0\;&{}0\;&{}0\;&{}0\;&{}0\;&{}0\;&{}0\;&{}0\;&{}0\;&{}0\;&{}0 \end{array}\right) , \end{aligned}
and all $$s_{j}~(j=1,2,\ldots ,12)$$ are 1. Suppose that $$M=2$$, and that at the kth iteration $$A_{1}^{k}=\{a_{1},a_{2},\ldots ,a_{6}\}$$ and $$A_{2}^{k}=\{a_{7},a_{8},\ldots ,a_{12}\}$$. At the $$(k+1)$$th iteration of Cooper’s algorithm, the task of the location phase is to solve the involved SWPs. A poor choice is $${x_{1}^{k+1}}^{T}=(4,0)$$ and $${x_{2}^{k+1}}^{T}=(17,0)$$. If we now apply the NCR, we obtain $$A_{i}^{k+1}=A_{i}^{k}~(i=1,2)$$, and Cooper’s algorithm stops with
\begin{aligned} F(x_{1}^{k+1},x_{2}^{k+1})&=\sum _{i=1}^{2}\sum _{j=1}^{12}w_{ij}\Vert x_{i}^{k+1}-a_{j}\Vert \\&=\sum _{j=1}^{6}s_{j}\Vert x_{1}^{k+1}-a_{j}\Vert +\sum _{j=7}^{12}s_{j}\Vert x_{2}^{k+1}-a_{j}\Vert =36. \end{aligned}
In contrast, a correct choice makes the partitions change at the $$(k+1)$$th iteration: the NCR generates new partitions $$A_{i}^{k+1}~(i=1,2)$$ with $$A_{i}^{k+1}\ne A_{i}^{k}$$ for some $$i\in \{1,2\}$$, and the location phase then finds new facility locations for these partitions, so that F decreases strictly. If we set $${x_{1}^{k+1}}^{T}=(6,0)$$ and $${x_{2}^{k+1}}^{T}=(17,0)$$, then $$A_{1}^{k+1}=\{a_{1},a_{2},\ldots ,a_{7}\}$$ and $$A_{2}^{k+1}=\{a_{8},a_{9},\ldots ,a_{12}\}$$. Applying the location phase, we obtain $${x_{1}^{k+2}}^{T}=(6,0)$$ and $${x_{2}^{k+2}}^{T}=(17,0)$$. Hence,
\begin{aligned} F(x_{1}^{k+2},x_{2}^{k+2})&=\sum _{i=1}^{2}\sum _{j=1}^{12}w_{ij}\Vert x_{i}^{k+2}-a_{j}\Vert \\&=\sum _{j=1}^{7}s_{j}\Vert x_{1}^{k+2}-a_{j}\Vert +\sum _{j=8}^{12}s_{j}\Vert x_{2}^{k+2}-a_{j}\Vert =35. \end{aligned}
We see that $$F(x_{1}^{k+2},x_{2}^{k+2})<F(x_{1}^{k+1},x_{2}^{k+1})$$. Indeed, $$x_{1}^{k+2}$$ and $$x_{2}^{k+2}$$ are more desirable than $$x_{1}^{k+1}$$ and $$x_{2}^{k+1}$$.
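The two objective values in Example 5.1 can be checked numerically. A short Python sketch (variable names are illustrative):

```python
import math

# Customer locations from Example 5.1 (all weights are 1)
A = [(x, 0.0) for x in (0, 2, 4, 6, 8, 10, 11, 13, 15, 17, 19, 21)]

def F(x1, x2, split):
    """Total cost when customers a_1..a_split are served from x1 and
    a_{split+1}..a_12 from x2."""
    return (sum(math.dist(x1, a) for a in A[:split])
            + sum(math.dist(x2, a) for a in A[split:]))

F((4.0, 0.0), (17.0, 0.0), 6)   # poor choice: 36.0
F((6.0, 0.0), (17.0, 0.0), 7)   # better choice: 35.0
```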
For simplicity, we denote by $$\Omega _{i}^{k} ~(i=1,2,\ldots ,M)$$, the optimal solution set of the involved SWP in the location phase for the cluster $$A_{i}^{k}~(i=1,2,\ldots ,M)$$, i.e.,
\begin{aligned} \Omega _{i}^{k}&=\left\{ x^{*}\in {\mathbb {R}^{n}}:F_{i}(x^{*})\leqslant F_{i}(x),~\forall x \in {\mathbb {R}^{n}}\right\} , i=1,2,\ldots ,M. \end{aligned}
(5.1)
In the location phase of Cooper’s algorithm, the facility locations are found by the algorithms in each iteration; this may lead to a poor choice of facility locations, which is unavoidable when only a single optimal point is returned. Therefore, we need to compute the optimal solution sets of the SWPs involved in the location phase. To this end, if the customers in $$A_{i}^{k}~(i=1,2,\ldots ,M)$$ are collinear, we use our scheme for computing $$\Omega _{i}^{k}$$; otherwise, the objective function of the SWP for the cluster $$A_{i}^{k}$$ is strictly convex, so $$\Omega _{i}^{k}$$ is a singleton, which can be found by one of the known algorithms for solving SWP [1, 12, 14, 18, 19, 27]. Suppose that Cooper’s algorithm terminates with $$A_{i}^{k}~(i=1,2,\ldots ,M)$$; our task is then to compute $$\Omega _{i}^{k}$$ and find $$x_{i}^{k}\in \Omega _{i}^{k}$$ such that $$A_{l}^{k+1}\ne A_{l}^{k}$$ for some $$l \in \{1,2,\ldots ,M\}$$, which leads to a decrease in the objective function value.
Moreover, the modified Cooper’s algorithm can be invoked whenever Cooper’s algorithm stops with $$A_{i}^{k}~(i=1,2,\ldots ,M)$$.

Modified Cooper’s algorithm

Input:

The number of customers N, the positive multipliers

$$s_{1},s_{2},\ldots ,s_{N}$$, the location of customers $$a_{1},a_{2},\ldots ,a_{N}$$,

the number of facilities M, the partitions $$A_{i}^{k}~(i=1,2,\ldots ,M)$$,

and the optimal solution sets of involved SWP in the kth

iteration $$\Omega _{i}^{k}~(i=1,2,\ldots ,M)$$.

Output:

The desirable solution set of MWP.

Step 1.

Set $$t_{r}:=0$$, $$r=1,2,\ldots , N$$.

Step 2.

For $$r=1$$ to N set:

\begin{aligned} \bar{x}_{i,r}^{k}=\left\{ \begin{array}{ll} P_{\Omega _{i}^{k}}(a_{r})&{}~\text {if}~a_{r}\not \in A_{i}^{k},\\ {{{\mathrm{argmax}}}}_{x\in \Omega _{i}^{k}}\Vert x-a_{r}\Vert &{}~\text {otherwise}, \end{array}\right. \quad \text {for}~ i=1,2,\ldots ,M. \qquad \qquad (5.2) \end{aligned}

Step 3.

For $$r=1$$ to N do:

for $$j=1$$ to N do:

$$d_{i,j}^{r}=\Vert \bar{x}_{i,r}^{k}-a_{j}\Vert$$ for $$i=1,2,\ldots ,M$$;

if $$a_{j}\in A_{h}^{k}$$ and $$d_{l,j}^{r}=\min _{i=1,2,\ldots ,M,~i\ne h}\{d_{i,j}^{r}\}<d_{h,j}^{r}$$,

then $$A_{h,r}^{k+1}=A_{h}^{k}{\setminus }\{a_{j}\}$$, $$A_{l,r}^{k+1}=A_{l}^{k}\cup \{a_{j}\}$$, and $$t_{r}=t_{r}+1$$

(reassign $$a_{j}$$).

Step 4.

If $$t_{r}=0$$ for all $$r \in { \mathcal {N}}$$, then $$\{\Omega _{1}^{k},\Omega _{2}^{k},\ldots ,\Omega _{M}^{k}\}$$ is the desirable

solution set of MWP, the customers in $$A_{i}^{k}~(i=1,2,\ldots ,M)$$

should be served from some $$x_{i}^{k}\in \Omega _{i}^{k}~(i=1,2,\ldots ,M)$$, and stop.

Step 5.

For $$r=1$$ to N do:

compute $$\Omega _{i,r}^{k+1}$$ for $$i=1,2,\ldots , M$$, where $$\Omega _{i,r}^{k+1}$$ is the optimal

solution set for the cluster $$A_{i,r}^{k+1}$$, defined similarly to (5.1).

Step 6.

Let:

$$\bar{r}=\mathop {{{\mathrm{argmin}}}}\limits _{r\in {\mathcal {N}}}\sum _{i=1}^{M}\sum _{\{j\in {\mathcal {N}},a_{j}\in A_{i,r}^{k}\}}s_{j}\Vert x_{i,r}^{k+1}-a_{j}\Vert ,$$

where, $$x_{i,r}^{k+1} \in \Omega _{i,r}^{k+1}$$ for $$i=1,2,\ldots ,M$$.

Then set $$A_{i}^{k+1}=A_{i,\bar{r}}^{k+1}$$, $$\Omega _{i}^{k+1}=\Omega _{i,\bar{r}}^{k+1} ~(i=1,2,\ldots ,M)$$, $$k:=k+1$$, and

go to Step 1.
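The reassignment test of Step 3 can be sketched in Python as a single NCR pass. This minimal version assumes the trial points $$\bar{x}_{i,r}^{k}$$ have already been computed in Step 2 and are passed as concrete points; `clusters` maps a customer's index to its cluster index, and all names are illustrative:

```python
import numpy as np

def nearest_center_reclassify(xbars, clusters, customers):
    """One NCR pass (Step 3, sketched): move each customer a_j from its
    current cluster h to a strictly closer trial point, and count the
    number t of reassignments."""
    t = 0
    new_clusters = dict(clusters)
    for j, a in enumerate(customers):
        h = clusters[j]
        dists = [np.linalg.norm(np.subtract(x, a)) for x in xbars]
        l = int(np.argmin(dists))
        if l != h and dists[l] < dists[h]:   # strict improvement only
            new_clusters[j] = l
            t += 1
    return new_clusters, t
```

On the data of Example 5.1 with trial points $$(6,0)$$ and $$(17,0)$$ and the initial six-six split, exactly one customer ($$a_{7}$$) is reassigned.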

Note that the modified Cooper’s algorithm is essentially equivalent to the original Cooper’s algorithm if, in each iteration, the optimal solution sets of the involved SWPs are singletons. In particular, if $$t_{r}=0$$ for some $$r\in {\mathcal {N}}$$ in the $$(k+1)$$th iteration, then $$\Omega _{i,r}^{k+1}=\Omega _{i}^{k}$$. To simplify Step 2 of the modified Cooper’s algorithm, we present the following proposition, whose proof is straightforward.

### Proposition 5.2

Let $$a ,b, c\in {\mathbb {R}}^{n}$$ and let the list b, c be regular (i.e., $$b\ne c$$). Then, the following assertions hold:
1. (1)
If $$a \not \in [b,c]$$, then
\begin{aligned} P_{[b,c]}(a)=\left\{ \begin{array}{ll} b&{}~\mathrm{{if}} ~~\bar{t}<0,\\ b+\bar{t}(c-b)&{}~\mathrm{{if}} ~~0\leqslant \bar{t} \leqslant 1,\\ c&{}~\mathrm{{if}}~~\bar{t}>1, \end{array}\right. \end{aligned}
where,
\begin{aligned} \bar{t}=\dfrac{(b-a)^{T}(b-c)}{\Vert b-c\Vert ^2}. \end{aligned}

2. (2)
Moreover,
\begin{aligned} \max _{x\in [b,c]}\Vert x-a\Vert =\max \{\Vert b-a\Vert ,\Vert c-a\Vert \}. \end{aligned}

One can apply Proposition 5.2 to solve (5.2) exactly, which sharply decreases the computational time of the modified Cooper’s algorithm.
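Both parts of Proposition 5.2 translate directly into code. A small NumPy sketch (function names are illustrative; clamping $$\bar{t}$$ to [0, 1] reproduces the three cases of part (1)):

```python
import numpy as np

def project_to_segment(a, b, c):
    """P_[b,c](a): Euclidean projection of a onto the segment [b, c],
    assuming b != c (the list b, c is regular)."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    t = np.dot(b - a, b - c) / np.dot(b - c, b - c)   # t-bar of Prop. 5.2
    t = min(max(t, 0.0), 1.0)   # t < 0 -> b,  t > 1 -> c
    return b + t * (c - b)

def farthest_in_segment(a, b, c):
    """argmax_{x in [b,c]} ||x - a||, attained at an endpoint (part (2))."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    return b if np.linalg.norm(b - a) >= np.linalg.norm(c - a) else c
```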

## 6 Application

We now present an application that illustrates the importance of finding the desirable solution set of MWP. Consider the following bi-level facility location problem:
\begin{aligned}&\min _{y\in \mathbb {R}^n}\sum _{i=1}^{M}W_{i}\left\| y-x_{i}\right\| ,\end{aligned}
(6.1)
\begin{aligned}&(x_{1},x_{2},\ldots ,x_{M})\in \mathop {{{\mathrm{argmin}}}}\limits _{x_{i}\in \mathbb {R}^n}\left\{ F(x_{1},x_{2},\ldots ,x_{M})~|~ \displaystyle \sum _{i=1}^M w_{ij}=s_j,~\forall j\in \mathcal {N}\right\} ,\nonumber \\ \end{aligned}
(6.2)
where,
1. (1)

y is the location of the wholesale market to be determined;

2. (2)

$$W_{i}$$ is the corresponding weight for the ith facility.

The aim of this model is to find the main source $$y^*$$ that supplies the retail markets $$x_{1},x_{2},\ldots ,x_{M}$$, under the hypothesis that the sum of transportation costs between these retail markets and the customers is minimized. It is worth mentioning that a desirable solution of MWP (6.2) is needed in order to solve Problem (6.1). Suppose that $$\{\Omega _{1},\Omega _{2},\ldots ,\Omega _{M}\}$$ is the optimal solution set of MWP (6.2); then Problem (6.1) reduces to the following minisum location problem with closest distances (see [5]):
\begin{aligned} \min _{y\in \mathbb {R}^n}\sum _{i=1}^{M}W_{i}d_{i}(y), \end{aligned}

where, $$d_{i}(y)=\displaystyle \min _{x_{i}\in \Omega _{i}} \Vert x_{i}-y\Vert$$ for $$i=1,2,\ldots ,M$$. To make this application clear, the following example is provided.

### Example 6.1

Suppose that the locations of customers $$a_{j}~(j=1,2,\ldots ,8)$$ are given by the columns of the following matrix:
\begin{aligned} A=\left( \begin{array}{r r r r r r r r} 0&{}0&{}0&{}0&{}7&{}7&{}10&{}10\\ -3&{}-2&{}2&{}3&{}-3&{}3&{}-3&{}3 \end{array}\right) , \end{aligned}
and all $$s_{j}=1$$. Suppose that $$M=2$$, $$W_{1}=2$$, and $$W_{2}=1$$. We first find the optimal solution of the involved MWP (6.2) by the modified Cooper’s algorithm. We obtain $$\Omega _{1}=\{x~|~x_{1}=0~\text {and}~-2\leqslant x_{2} \leqslant 2\}$$ and $$\Omega _{2}=\{(8.5,0)^{T}\}$$ with $$A_{1}=\{a_{1},a_{2},a_{3},a_{4}\}$$ and $$A_{2}=\{a_{5},a_{6},a_{7},a_{8}\}$$. Hence, Problem (6.1) reduces to the following problem:
\begin{aligned} \min _{y \in \mathbb {R}^n} 2d_{1}(y)+d_{2}(y). \end{aligned}
(6.3)
It is easy to see that $$y^{*}=(0,0)^T$$ is the optimal solution of Problem (6.3). Moreover, $$x_{1}^{*}=(0,0)^T$$ and $$x_{2}^{*}=(8.5,0)^T$$ are the optimal retail markets. An illustration of the solution method for Example 6.1 is presented in Fig. 1.
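The reduction in Example 6.1 is easy to verify numerically. A sketch, assuming the optimal sets $$\Omega _{1}=\{0\}\times [-2,2]$$ and $$\Omega _{2}=\{(8.5,0)^{T}\}$$ computed above (function names are illustrative):

```python
import numpy as np

def dist_to_segment(y, b, c):
    """d(y, [b, c]) = min_{x in [b, c]} ||x - y||."""
    y, b, c = (np.asarray(v, dtype=float) for v in (y, b, c))
    t = np.clip(np.dot(y - b, c - b) / np.dot(c - b, c - b), 0.0, 1.0)
    return np.linalg.norm(b + t * (c - b) - y)

def objective(y):
    # 2*d_1(y) + d_2(y), with Omega_1 = {0} x [-2, 2], Omega_2 = {(8.5, 0)}
    d1 = dist_to_segment(y, (0.0, -2.0), (0.0, 2.0))
    d2 = np.linalg.norm(np.asarray(y, dtype=float) - np.array([8.5, 0.0]))
    return 2 * d1 + d2

objective((0.0, 0.0))   # 8.5, the optimal value at y* = (0, 0)
objective((8.5, 0.0))   # 17.0
```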

## 7 Conclusion

In this paper, we first proposed an algorithm for sorting collinear points in $${\mathbb {R}}^n$$. Then, using this result, we presented an efficient algorithm for solving SWP in the collinear case. Moreover, we modified Cooper’s algorithm, which is advantageous in the collinear case. A numerical comparison shows the superiority, in efficiency and effectiveness, of considering the optimal solution sets of SWPs, and the numerical results show that the developed algorithms are suitable for solving MWPs of reasonable size in the collinear case. An interesting future research topic is whether the results of Sect. 4 can be extended to the constrained SWP in the collinear case.

## References

1. 1.
Beck, A., Sabach, S.: Weiszfeld’s method: old and new results. J. Optim. Theory Appl. 164(1), 1–40 (2015)
2. 2.
Blum, M., Floyd, R.W., Pratt, V.R., Rivest, R.L., Tarjan, R.E.: Time bounds for selection. J. Comput. Syst. Sci. 7(4), 448–461 (1972)
3. 3.
Bose, P., Maheshwari, A., Morin, P.: Fast approximations for sums of distances, clustering and the Fermat–Weber problem. Comput. Geom. 24(3), 135–146 (2003)
4. 4.
Brimberg, J., Hansen, P., Mladenovic, N., Taillard, E.D.: Improvements and comparison of heuristics for solving the uncapacitated multi-source Weber problem. Oper. Res. 48(3), 444–460 (2000)
5. 5.
Brimberg, J., Wesolowsky, G.O.: Locating facilities by minimax relative to closest points of demand areas. Comput. Oper. Res. 29(6), 625–636 (2002)
6. 6.
Chandrasekaran, R., Tamir, A.: Open questions concerning Weiszfeld’s algorithm for the Fermat–Weber location problem. Math. Program. 44(1), 293–295 (1989)
7. 7.
Chandrasekaran, R., Tamir, A.: Algebraic optimization: the Fermat–Weber location problem. Math. Program. 46(2), 219–224 (1990)
8. 8.
Chatelon, J.A., Hearn, D.W., Lowe, T.J.: A subgradient algorithm for certain minimax and minisum problems. Math. Program. 15(1), 130–145 (1978)
9. 9.
Cooper, L.: Heuristic methods for location-allocation problems. SIAM Rev. 6(1), 37–53 (1964)
10. 10.
Eyster, J.W., White, J.A., Wierwille, W.W.: On solving multifacility location problems using a hyperboloid approximation procedure. AIIE Trans. 5(1), 1–6 (1973)
11. 11.
Gamal, M.D.H., Salhi, S.: A cellular type heuristic for the multisource Weber problem. Comput. Oper. Res. 30(11), 1609–1624 (2003)
12. 12.
Görner, S., Kanzow, C.: On Newton’s method for the Fermat–Weber location problem. J. Optim. Theory Appl. 170(1), 107–118 (2016)
13. 13.
Hansen, P., Mladenovic, N., Taillard, E.: Heuristic solution of the multisource Weber problem as a p-median problem. Oper. Res. Lett. 22(2), 55–62 (1998)
14. 14.
Jiang, J.L., Yuan, X.M.: A heuristic algorithm for constrained multi-source Weber problem—the variational inequality approach. Eur. J. Oper. Res. 187(2), 357–370 (2008)
15. 15.
Katz, I.N.: Local convergence in Fermat’s problem. Math. Program. 6(1), 89–104 (1974)
16. 16.
Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms. Springer, Berlin (2002)
17. 17.
Kuhn, H.W.: A note on Fermat’s problem. Math. Program. 4(1), 98–107 (1973)
18. 18.
Levin, Y., Ben-Israel, A.: A heuristic method for large-scale multi-facility location problems. Comput. Oper. Res. 31(2), 257–272 (2004)
19. 19.
Li, Y.: A Newton acceleration of the Weiszfeld algorithm for minimizing the sum of Euclidean distances. Comput. Optim. Appl. 10(3), 219–242 (1998)
20. 20.
Luis, M., Said, S., Nagy, G.: A guided reactive GRASP for the capacitated multi-source Weber problem. Comput. Oper. Res. 38(7), 1014–1024 (2011)
21. 21.
Ostresh, L.M.: On the convergence of a class of iterative methods for solving the Weber location problem. Oper. Res. 26(4), 597–609 (1978)
22. 22.
Overton, M.L.: A quadratically convergent method for minimizing a sum of Euclidean norms. Math. Program. 27(1), 34–63 (1983)
23. 23.
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
24. 24.
Rosing, K.E.: An optimal method for solving the (generalized) multi-Weber problem. Eur. J. Oper. Res. 58(3), 414–426 (1992)
25. 25.
Sherali, H.D., Nordai, F.L.: NP-hard, capacitated, balanced p-median problems on a chain graph with a continuum of link demands. Math. Oper. Res. 13(1), 32–49 (1988)
26. 26.
Vardi, Y., Zhang, C.-H.: A modified Weiszfeld algorithm for the Fermat–Weber location problem. Math. Program. 90(3), 559–566 (2001)
27. 27.
Weiszfeld, E.: Sur le point pour lequel la somme des distances de $$n$$ points donnés est minimum. Tohoku Math. J. 43, 355–386 (1937)
28. 28.
Weiszfeld, E., Plastria, F.: On the point for which the sum of the distances to $$n$$ given points is minimum. Ann. Oper. Res. 167(1), 7–41 (2009)