1 Introduction

Firms dedicated to similar activities tend to settle their plants in the same region because proximity provides mutual benefits. These situations are called agglomeration economies, and they have been widely studied in the economic literature. Marshall (1920) provides the first analysis of this phenomenon. He states that the benefits firms obtain from agglomeration come from three main sources: input sharing, labor market pooling, and knowledge spillovers. There is also a large body of empirical literature measuring the relative importance of different agglomeration theories. See, for instance, Rosenthal and Strange (2003) and Ellison et al. (2010).

In this paper, we consider that a firm is planning to open a new plant in a country divided into different regions where the plant could be opened. There are also a finite number of firms already located in these regions. The new plant generates agglomeration economies for all of them. This means that the firms in the region where the plant is opened receive a positive externality.

In a decentralized mechanism, the plant would be located in the most profitable region for the new firm, say k. However, if the new firm is located in a different region, say \(k^{*}\), the aggregate utility of all firms in region \(k^{*}\) and the new firm could be greater than the aggregate utility of all firms in region k and the new firm. This is because the new firm could create more positive externalities in region \(k^{*}\) than in region k. Then, it makes sense for firms in region \(k^{*}\) to transfer something to the new firm in order to incentivize it to locate its plant in region \(k^{*}\) instead of region k. The question is what transfers should be made or, equivalently, how the aggregate utility generated when the new plant is located in region \(k^{*}\) should be divided up.

An indirect approach to answering this question is the following: First, associate a cooperative game with transferable utility with the situation. Second, compute a solution of the associated cooperative game. There are several reasonable ways of associating a cooperative game with a given arbitrary problem. For instance, in minimum cost spanning tree problems, Bird (1976) gives one way and Bergantiños and Vidal-Puga (2007) give several options. In this paper we associate a cooperative game with each agglomeration problem as follows: In some countries, e.g., Spain, regional governments can incentivize firms to locate their plants in their territories. The usual way is to offer a subsidy to the firm for opening a new plant. Of course, such subsidies come from the budget of the regional government, which in turn comes from the taxes paid by the economic agents in the region (including other firms in the region). Thus, there is some transfer between the firms in the region and the firm opening the plant there. We think that the best way to model this situation is to consider that the coalitions that can share their benefits are those formed by the firm opening the plant and all firms in a given region.

We study several solution concepts for the cooperative game. The core is non-empty and can be described as follows: the new firm receives at least the aggregate utility (of all existing firms plus the new firm) when the new firm locates in the second best region. The difference between the aggregate utility of the best region and the second best region is divided among the new firm and the firms in the best region. Firms outside the best region receive 0.

The \(\tau\)-value, the nucleolus and the per capita nucleolus coincide. We call this the egalitarian optimal location rule. The Shapley value also coincides with this rule in a subset of agglomeration problems. The egalitarian optimal location rule is the core allocation where the difference between the aggregate utility of the best region and that of the second best region is divided equally among the new firm and the firms of the best region.

We also consider a rule, called the weighted optimal location rule, which is not defined through the associated cooperative game. This rule is defined as follows: the new firm receives the aggregate utility (of all existing firms plus the new firm) when the new firm locates in the second best region. The difference between the aggregate utility of the best and second best regions is divided among the firms located in the best region proportionally to the individual benefits generated for the firms by the location of the new firm. Firms outside the best region receive 0.

Finally, we provide axiomatic characterizations of both rules. The egalitarian optimal location rule is characterized by core selection (the rule should select core allocations) and equal treatment inside optimal regions (if the aggregate utility of the best region increases, then all firms in that region and the new firm should improve by the same quantity).

We characterize the set of rules satisfying core selection and merging-splitting proofness (if one firm splits, the allocation to the rest of the firms does not change). This set of rules can be described as follows: The new firm receives the aggregate utility of the second best region for sure. Moreover, the new firm also receives a transfer from firms in the best region. The transfer from each firm located in the best region is proportional to the benefits of the firm. The weighted optimal location rule is the rule satisfying the two properties above where the transfer is zero.

The rest of the paper is organized as follows. Section 2 formally introduces the problem and the rules. Section 3 associates a cooperative game with any agglomeration problem. Section 4 examines several cooperative solutions. Section 5 presents axiomatic characterizations of the rules, and Sect. 6 concludes.

2 The agglomeration problem

We introduce the formal model for studying the situations described in the introduction.

2.1 The model

Let \({\mathbb {N}}\) be the set of all potential firms and let \(\mathcal {N}\) be the family of all finite (non-empty) subsets of \(\mathbb {N}\). An element \(N \in \mathcal {N}\) describes a finite set of firms. We usually take \(N=\{1,...,n\}\).

An agglomeration problem (or simply, a “problem”) is a tuple \(\mathcal {A}=(0,N,P,b)\) where

  • 0 is the firm which will open a plant in the country.

  • \(N \in \mathcal {N}\) is the set of firms already located in the country. We denote by \(N_{0}=N\cup \{0\}\).

  • \(P=(P_{k})_{k \in R}\) with \(\bigcup \nolimits _{k \in R}P_{k}=N\) is an indexed collection of pairwise disjoint subsets of N, where \(R=\{1,...,r\}\) is the set of regions in the country. \(P_{k}\) denotes the set of firms located in region k.

  • \(b=\left( b_{i}^{k}:i\in N_{0}\text { and }k\in R \right)\). \(b_{i}^{k}\ge 0\) denotes the benefit obtained by firm i when 0 locates its plant in region k.

We assume that if an existing firm is outside the region where the new plant is located it does not obtain any (significant) benefit. Thus, for all \(k\in R\) and all \(i\in N \backslash P_{k}\), \(b_{i}^{k}=0\). There are no further assumptions about P. So there may be a region with no firms located in it, i.e., \(P_{k}=\emptyset\).

We now introduce some concepts and notation used throughout the paper.

In a decentralized mechanism, firm 0 would locate its new plant in a region where the firm optimizes its individual benefit. Namely in

$$\begin{aligned} \arg \max _{k\in R}\left\{ b_{0}^{k}\right\} . \end{aligned}$$

Nevertheless, the aggregate benefit could be greater under a different location, so it makes sense to locate the new plant so as to maximize the total benefit and then apply a compensation scheme. Thus, firm 0 gets more than in the decentralized mechanism and the other firms are no worse off.

For any \(S \subseteq N_{0}\) and every \(k \in R\), let \(b^{k}(S)\) denote the aggregate benefit of the firms in S if 0 is located in region k, that is,

$$\begin{aligned} b^{k}(S)=\sum _{i \in S}b_{i}^{k}. \end{aligned}$$

We define the global benefit of any problem \(\mathcal {A}\) as

$$\begin{aligned} g(\mathcal {A})=\max _{k\in R}\left\{ b^{k}(N_{0})\right\} . \end{aligned}$$

For each region \(k\in R\), the benefit obtained by all firms in \(N_{0}\) when the plant is located in region k is given by

$$\begin{aligned} b^{k}(N_{0})=b^{k}(P_{k}\cup \{0\}). \end{aligned}$$

For each \(i\in N\), \(k(i)\in R\) denotes the region where firm i is located, i.e., \(i \in P_{k(i)}\).

Given a problem \(\mathcal {A}\) we say that \(k^{*} \in R\) is an optimal region if locating firm 0 in region \(k^{*}\) yields the global benefit. Namely, for each \(k\in R\),

$$\begin{aligned} b^{k^{*}}(P_{k^{*}}\cup \{0\})\ge b^{k}(P_{k}\cup \{0\}). \end{aligned}$$

Obviously, \(k^{*}\) may not be unique and \(g(\mathcal {A})=b^{k^{*}}(P_{k^{*}}\cup \{0\})\) for each optimal region \(k^{*}\).

For every problem \(\mathcal {A}\), we now define \(s(\mathcal {A})\) as the aggregate benefit obtained by all firms in \(N_{0}\) when the plant is located in the second best region. Formally, given an optimal region \(k^{*}\)

$$\begin{aligned} s(\mathcal {A})=\max _{k\in R\backslash \{k^{*}\}} \left\{ b^{k}(P_{k}\cup \{0\})\right\} . \end{aligned}$$

The number \(s(\mathcal {A})\) is well-defined because it does not depend on the chosen \(k^{*}\). When there are several optimal regions, \(s(\mathcal {A})=g(\mathcal {A})\). Otherwise, \(s(\mathcal {A})<g(\mathcal {A})\). If \(r=1\), \(s(\mathcal {A})=0\).

We now define \(I_{0}(\mathcal {A})\) as the maximum between the individual benefit of firm 0 when it locates in an optimal region \(k^{*}\) and the benefit obtained by all firms when 0 locates in the second best region. Then,

$$\begin{aligned} I_{0}(\mathcal {A})=\max _{k\in R}\left\{ b^{k}(N_{0}\backslash P_{k^{*}})\right\} =\max \left\{ b_{0}^{k^{*}},s(\mathcal {A})\right\} . \end{aligned}$$

Notice that \(I_{0}(\mathcal {A})\) is the maximum utility that can be obtained without cooperating with firms in region \(k^{*}\).

When there are several optimal regions, \(s(\mathcal {A})=g(\mathcal {A})\) and hence \(I_{0}(\mathcal {A})=g(\mathcal {A})\). Thus, \(I_{0}(\mathcal {A})\) does not depend on the \(k^{*}\) chosen.
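These quantities are easy to compute directly from the data of a problem. The following sketch (our own illustrative encoding, not part of the model: \(P\) and \(b\) are stored as dictionaries) computes \(g(\mathcal {A})\), an optimal region, \(s(\mathcal {A})\) and \(I_{0}(\mathcal {A})\) for the data used in Example 1 below.

```python
# Hypothetical encoding of a problem: P maps region -> firms located there,
# b maps (firm, region) -> benefit; omitted pairs have b_i^k = 0.
P = {1: {1}, 2: {2, 3}}
b = {(0, 1): 6, (1, 1): 2, (0, 2): 5, (2, 2): 1, (3, 2): 8}

def region_total(k):
    """b^k(P_k ∪ {0}): aggregate benefit when firm 0 locates in region k."""
    return sum(b.get((i, k), 0) for i in P[k] | {0})

k_star = max(P, key=region_total)                                  # an optimal region
g = region_total(k_star)                                           # g(A)
s = max([region_total(k) for k in P if k != k_star], default=0)    # s(A)
I0 = max(b.get((0, k_star), 0), s)                                 # I_0(A)
print(g, k_star, s, I0)   # -> 14 2 8 8
```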

2.2 Rules

In this section we propose two rules: the egalitarian optimal location rule and the weighted optimal location rule.

A rule is a way of dividing the global benefit among the set of all firms, i.e., a function f assigning to each problem \(\mathcal {A}\) a vector in \(\mathbb {R}^{n+1}\) that satisfies

$$\begin{aligned} \sum _{i\in N_{0}}f_{i}(\mathcal {A})=g(\mathcal {A}). \end{aligned}$$

For each problem \(\mathcal {A}\), let \(k^{*}\) be an optimal region and let \(|P_{k^{*}}|\) denote the number of firms in region \(k^{*}\). The egalitarian optimal location rule (EOL), for each \(i\in N_{0}\), is defined as

$$\begin{aligned} EOL_{i}(\mathcal {A})=\left\{ \begin{array}{lc} I_{0}(\mathcal {A})+\dfrac{g(\mathcal {A})-I_{0}\left( \mathcal {A}\right) }{|P_{k^{*}}|+1}, &{}\hbox { if } i=0 \\ \dfrac{g(\mathcal {A})-I_{0}\left( \mathcal {A}\right) }{|P_{k^{*}}|+1}, &{}\hbox { if } i\in P_{k^{*}} \\ 0, &{}\hbox { otherwise}. \end{array}\right. \end{aligned}$$

This rule has a nice interpretation. Firm 0 receives \(I_{0}(\mathcal {A})\) for sure. Moreover, the surplus generated (with respect to \(I_{0}(\mathcal {A})\)) by \(P_{k^{*}}\) and 0 is divided equally among all firms generating that surplus. Firms outside the optimal region get zero. Below, we prove that the EOL rule can be obtained as a solution of a cooperative game.

Notice that when \(\mathcal {A}\) has several optimal regions, \(EOL_{0}(\mathcal {A})=g(\mathcal {A})\) and \(EOL_{i}(\mathcal {A})=0\) for all \(i\in N\).

Consider the following example.

Example 1

Let \(N=\{1,2,3\}\), \(P=\{\{1\},\{2,3\}\}\), \(b_{0}^{1}=6\), \(b_{1}^{1}=2\), \(b_{0}^{2}=5\), \(b_{2}^{2}=1\) and \(b_{3}^{2}=8\).

The optimal region is \(P_{2}=\{2,3\}\), \(g(\mathcal {A})=14\) and \(I_{0}(\mathcal {A})=8\). Now \(EOL(\mathcal {A})=(10,0,2,2)\).

The interpretation of this allocation is the following. To attract firm 0 to region 2, the firms in that region must transfer something to firm 0, so it seems reasonable for firm 0 to get 10. However, firm 2 gets 2, more than its individual benefit (\(b_{2}^{2}=1\)) when firm 0 locates in region 2. Instead of transferring something to firm 0, firm 2 receives a transfer of 1 unit from firm 3.
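The EOL formula can be sketched in a few lines (the dictionary encoding of \(P\) and \(b\) is our own convention); it reproduces the allocation (10, 0, 2, 2) of Example 1.

```python
# EOL rule on the data of Example 1 (hypothetical encoding, not from the paper).
P = {1: {1}, 2: {2, 3}}
b = {(0, 1): 6, (1, 1): 2, (0, 2): 5, (2, 2): 1, (3, 2): 8}
N0 = [0, 1, 2, 3]

def region_total(k):
    return sum(b.get((i, k), 0) for i in P[k] | {0})

k_star = max(P, key=region_total)
g = region_total(k_star)
s = max([region_total(k) for k in P if k != k_star], default=0)
I0 = max(b.get((0, k_star), 0), s)
share = (g - I0) / (len(P[k_star]) + 1)     # equal split of the surplus g(A) - I_0(A)

EOL = {i: (I0 + share if i == 0 else share if i in P[k_star] else 0.0)
       for i in N0}
print(EOL)   # -> {0: 10.0, 1: 0.0, 2: 2.0, 3: 2.0}
```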

We now introduce a rule called the weighted optimal location rule (WOL). The idea is simple: Firm 0 receives \(I_{0}(\mathcal {A})\) for locating in \(k^{*}\), since no region can offer firm 0 more than \(I_{0}(\mathcal {A})\). The surplus generated when firm 0 locates in region \(k^{*}\), \(g(\mathcal {A})-I_{0}(\mathcal {A})\), is divided among the firms in region \(k^{*}\) proportionally to their individual benefits \(b_{i}^{k^{*}}\).

Formally, for each \(i\in N_{0}\),

$$\begin{aligned} WOL_{i}(\mathcal {A})=\left\{ \begin{array}{lc} I_{0}(\mathcal {A}), &{} \hbox { if } i=0 \\ \dfrac{b_{i}^{k^{*}}}{b^{k^{*}}(P_{k^{*}})} (g(\mathcal {A})-I_{0}(\mathcal {A})), &{}\hbox { if } i\in P_{k^{*}} \\ 0, &{}\hbox { otherwise}. \end{array} \right. \end{aligned}$$

Notice that there are two main differences between the egalitarian optimal location rule and the weighted optimal location rule: in EOL, \(g(\mathcal {A})-I_{0}(\mathcal {A})\) is divided equally among all firms in \(P_{k^{*}}\) and firm 0, whereas in WOL it is divided only among the firms in \(P_{k^{*}}\), and proportionally to \(b\) rather than equally.

In Example 1, \(WOL(\mathcal {A})=(8,0,0.67,5.33)\). In this case both firms in the optimal region transfer something to firm 0.
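A sketch of WOL under the same hypothetical encoding as before; it assumes \(b^{k^{*}}(P_{k^{*}})>0\), which holds in Example 1.

```python
# WOL rule on the data of Example 1 (hypothetical encoding, not from the paper).
P = {1: {1}, 2: {2, 3}}
b = {(0, 1): 6, (1, 1): 2, (0, 2): 5, (2, 2): 1, (3, 2): 8}
N0 = [0, 1, 2, 3]

def region_total(k):
    return sum(b.get((i, k), 0) for i in P[k] | {0})

k_star = max(P, key=region_total)
g = region_total(k_star)
s = max([region_total(k) for k in P if k != k_star], default=0)
I0 = max(b.get((0, k_star), 0), s)
w = sum(b.get((i, k_star), 0) for i in P[k_star])   # b^{k*}(P_{k*}); assumed > 0

WOL = {i: (I0 if i == 0
           else b.get((i, k_star), 0) / w * (g - I0) if i in P[k_star]
           else 0.0)
       for i in N0}
print(WOL)   # firm 2 gets 1/9 of the surplus 6, firm 3 gets 8/9
```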

3 The cooperative game

In this section we approach the agglomeration problem as a cooperative game with transferable utility and study some properties of that game.

We first review some well-known definitions from cooperative game theory. Later, we associate a cooperative game with any agglomeration problem.

3.1 Basic notions of cooperative games

We now introduce cooperative games with transferable utility and some solutions such as the core, the nucleolus, the \(\tau\)-value, and the Shapley value.

A cooperative game with transferable utility (TU game) is a pair \((N,v)\) where N is the non-empty, finite set of players and \(v:2^{N}\rightarrow \mathbb {R}\) with \(v(\emptyset )=0\) is the characteristic function. For any coalition \(S\subseteq N\), v(S) represents the amount that the members of coalition S can obtain if they cooperate. When no confusion arises, we refer to v as a game.

An allocation \(x\in \mathbb {R}^{n}\) is an imputation in v if \(\sum \nolimits _{i\in N}x_{i}=v(N)\) and \(x_{i}\ge v(\{i\})\), for all \(i\in N\). The set of all imputations for a game v is denoted by I(v).

The core of v is

$$\begin{aligned} C(v)=\left\{ x\in I(v): \sum _{i\in S}x_{i}\ge v(S),\text { for all } S\subseteq N\right\} . \end{aligned}$$

We now present four single-value solutions for TU games: the \(\tau\)-value (Tijs 1981), the nucleolus (Schmeidler 1969), the per capita nucleolus (Grotte 1970), and the Shapley value (Shapley 1953).

For any game v and every \(i\in N\), let \(M_{i}(v)\) be player i’s marginal contribution to the grand coalition, i.e.,

$$\begin{aligned} M_{i}(v)=v(N)-v(N\backslash \{i\}). \end{aligned}$$

The vector \(M(v)=(M_{i}(v))_{i\in N}\) is called the utopia vector of v.

The minimum right vector is \(m(v)=(m_{i}(v))_{i\in N}\), where \(m_{i}(v)\) is the greatest possible remainder for player i of v(S) after every other player in the coalition obtains their utopia payoff. Formally, for all \(i\in N\)

$$\begin{aligned} m_{i}(v)=\max \limits _{S\subseteq N:i\in S}\left\{ v(S)-\sum _{j\in S\backslash \{i\}}M_{j}(v)\right\} . \end{aligned}$$

The core cover of v consists of the allocations that give each player at least their minimum right and at most their utopia payoff. Namely,

$$\begin{aligned} CC(v)=\left\{ x\in \mathbb {R}^{n}:\sum _{i\in N}x_{i}=v(N),m(v)\le x\le M(v)\right\} . \end{aligned}$$

When the core cover is non-empty, the \(\tau\)-value is defined as

$$\begin{aligned} \tau (v)=\alpha M(v)+(1-\alpha )m(v) \end{aligned}$$

with \(\alpha \in [0,1]\) such that \(\sum \nolimits _{i\in N}\tau _{i}(v)=v(N)\).
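Since the utopia vector and the minimum right vector are maxima over finitely many coalitions, the \(\tau\)-value can be brute-forced for small games. A minimal sketch on a hypothetical 3-player game (our own example, not taken from the text); it assumes \(\sum _{i}M_{i}(v)\ne \sum _{i}m_{i}(v)\) so that \(\alpha\) is determined.

```python
from itertools import combinations

# Hypothetical 3-player game; v is keyed by frozenset coalitions.
N = [1, 2, 3]
v = {frozenset(S): w for S, w in [((), 0), ((1,), 0), ((2,), 0), ((3,), 0),
                                  ((1, 2), 20), ((1, 3), 20), ((2, 3), 0),
                                  ((1, 2, 3), 40)]}

def coalitions(players):
    for r in range(len(players) + 1):
        yield from (frozenset(c) for c in combinations(players, r))

grand = frozenset(N)
M = {i: v[grand] - v[grand - {i}] for i in N}            # utopia vector M(v)
m = {i: max(v[S] - sum(M[j] for j in S - {i})            # minimum rights m(v)
            for S in coalitions(N) if i in S) for i in N}

# alpha in [0,1] chosen so the tau-value is efficient (assumes sum(M) != sum(m))
alpha = (v[grand] - sum(m.values())) / (sum(M.values()) - sum(m.values()))
tau = {i: alpha * M[i] + (1 - alpha) * m[i] for i in N}
print(tau)   # -> {1: 20.0, 2: 10.0, 3: 10.0}
```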

Let \(t \in \mathbb {N}\). For two vectors \(x,y \in \mathbb {R}^{t}\), we say that x is lexicographically smaller than y (and write \(x \le _{L} y\)) if either \(x=y\) or there is an integer \(j \in \{1,...,t\}\) such that \(x_{i}=y_{i}\) for \(1\le i<j\) and \(x_{j}<y_{j}\).

The excess of \(S\subseteq N\) with respect to any \(x\in I(v)\) is defined as

$$\begin{aligned} e(S,x)=v(S)-\sum _{i\in S}x_{i}. \end{aligned}$$

This number can be interpreted as the degree of dissatisfaction of coalition S when imputation x is realized.

For each \(x\in I(v)\), the excess vector \(\theta (x)\in \mathbb {R}^{2^{n}}\) is the vector of all excesses \(e(S,x)\) arranged in non-increasing order, i.e., \(\theta _{i}(x)\ge \theta _{i+1}(x)\) for all \(i\in \{1,...,2^{n}-1\}\).
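As an illustration, the excess vector can be computed directly from the definitions. A sketch on a hypothetical 3-player game and imputation (our own toy data):

```python
from itertools import combinations

# Hypothetical 3-player game and an imputation x (efficient, individually rational).
N = [1, 2, 3]
v = {frozenset(S): w for S, w in [((), 0), ((1,), 0), ((2,), 0), ((3,), 0),
                                  ((1, 2), 4), ((1, 3), 4), ((2, 3), 2),
                                  ((1, 2, 3), 6)]}
x = {1: 3, 2: 2, 3: 1}

def excess(S):
    """e(S, x) = v(S) - sum of payoffs of S's members."""
    return v[S] - sum(x[i] for i in S)

subsets = [frozenset(c) for r in range(4) for c in combinations(N, r)]
theta = sorted((excess(S) for S in subsets), reverse=True)   # theta(x)
print(theta)   # -> [0, 0, 0, -1, -1, -1, -2, -3]
```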

The nucleolus of v is the set

$$\begin{aligned} \eta (v)=\{x\in I(v): \theta (x) \le _{L} \theta (y),\forall y\in I(v)\}. \end{aligned}$$

The nucleolus recursively minimizes the dissatisfaction of the worst treated coalitions.

For any game v, the per capita excess of \(S\subseteq N\) with respect to \(x\in I(v)\) is

$$\begin{aligned} e^{pc}(S,x)=\dfrac{e(S,x)}{|S|}. \end{aligned}$$

Let \(\theta ^{pc}(x)\in \mathbb {R}^{2^{n}}\) be the per capita excess vector. This vector contains all per capita excesses arranged in non-increasing order.

The per capita nucleolus of v is defined as

$$\begin{aligned} \eta ^{pc}(v)=\{x\in I(v): \theta ^{pc}(x) \le _{L} \theta ^{pc}(y),\forall y\in I(v)\}. \end{aligned}$$

While the nucleolus is based on the dissatisfaction of coalitions, the per capita nucleolus is based on the dissatisfaction per player in each coalition.

If \(I(v) \ne \emptyset\), the nucleolus and the per capita nucleolus are non-empty and each contains a unique allocation. Furthermore, if the game has a non-empty core, \(\eta\) and \(\eta ^{pc}\) belong to the core.

Let \(\Pi _{N}\) be the set of all permutations of the finite set \(N\subset {\mathbb {N}}\). Given \(\pi \in \Pi _{N}\), let \(Pre(i,\pi )\) denote the set of elements of N which come before i in the order given by \(\pi\), i.e. \(Pre(i,\pi )=\{j\in N | \pi (j)<\pi (i)\}\).

The Shapley value of v is defined for all \(i\in N\) as the average of the marginal contribution of agent i over the set of all permutations. Namely,

$$\begin{aligned} Sh_{i}(v)=\dfrac{1}{|N|!}\sum _{\pi \in \Pi _{N}}(v(Pre(i,\pi ) \cup \{i\}) -v(Pre(i,\pi ))). \end{aligned}$$
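For small games this average over permutations can be evaluated directly. A brute-force sketch on a hypothetical 2-player game (our own example): \(v(\{1\})=1\), \(v(\{2\})=2\), \(v(\{1,2\})=4\).

```python
from itertools import permutations

# Hypothetical 2-player game keyed by frozenset coalitions.
v = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 2, frozenset({1, 2}): 4}
N = [1, 2]

def shapley(N, v):
    """Average of each player's marginal contributions over all orderings."""
    sh = {i: 0.0 for i in N}
    perms = list(permutations(N))
    for pi in perms:
        before = set()                   # Pre(i, pi): players preceding i
        for i in pi:
            sh[i] += v[frozenset(before | {i})] - v[frozenset(before)]
            before.add(i)
    return {i: s / len(perms) for i, s in sh.items()}

print(shapley(N, v))   # -> {1: 1.5, 2: 2.5}
```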

Finally, we introduce some standard properties of TU games. We say that v is:

  • Monotone if \(v(S)\le v(T)\) whenever \(S\subseteq T\), for all \(S,T\subseteq N\).

  • Superadditive if for \(S,T\subseteq N\) with \(S\cap T=\emptyset\), \(v(S\cup T)\ge v(S)+v(T)\).

  • Convex if \(v(S\cup \{i\})-v(S)\ge v(T\cup \{i\})-v(T)\), for all \(T\subseteq S\subseteq N\backslash \{i\}\) and all \(i\in N\).

Monotonicity states that the worth of a coalition does not decrease as more players join it. Superadditivity says that it is more profitable for two disjoint coalitions to merge. In a convex game, the marginal contribution of a player is non-decreasing with respect to the coalition (in the sense of inclusion) that the player joins.

3.2 The agglomeration game

We associate a cooperative game \(v^{\mathcal {A}}\) with each agglomeration problem \(\mathcal {A}\). We also study some basic properties of that cooperative game.

We now define the game \(v^{\mathcal {A}}\) under the assumption that benefits can only be shared between firm 0 and all firms in region k. For any \(S \subseteq N_{0}\), if \(0 \notin S\), \(v^{\mathcal {A}}(S)=0\). If \(0 \in S\), then firm 0 could cooperate with any region \(k \in R\) fully contained in S (\(P_{k} \subset S\)) obtaining together \(b^{k}(P_{k} \cup \{0\})\), or could locate in a region \(k \in R\) not fully contained in S obtaining its individual benefit \(b_{0}^{k}\).

Formally, given a problem \(\mathcal {A}\) and \(S\subseteq N_{0}\), we define, for every \(k \in R\)

$$\begin{aligned} P_{k}^{S}=\left\{ \begin{array}{lc} P_{k}, &{}\hbox { if } P_{k} \subset S \\ \emptyset , &{} \hbox { otherwise}. \end{array} \right. \end{aligned}$$

The agglomeration game \(v^{\mathcal {A}}\) is defined as

$$\begin{aligned} v^{\mathcal {A}}(S)=\left\{ \begin{array}{lc} \max \limits _{k \in R}\left\{ b^{k}(P_{k}^{S} \cup \{0\}) \right\} , &{}\hbox { if } 0 \in S \\ 0, &{} \hbox { otherwise}. \end{array}\right. \end{aligned}$$

Notice that \(v^{\mathcal {A}}(N_{0})=g(\mathcal {A})\). We compute \(v^{\mathcal {A}}\) in Example 1.

Example 2

(Continuation of Example 1) The worths of the coalitions \(S\subseteq N_{0}\) with \(0\in S\) according to \(v^{\mathcal {A}}\) are: \(v^{\mathcal {A}}(\{0\})=6\), \(v^{\mathcal {A}}(\{0,1\})=8\), \(v^{\mathcal {A}}(\{0,2\})=v^{\mathcal {A}}(\{0,3\})=6\), \(v^{\mathcal {A}}(\{0,1,2\})=v^{\mathcal {A}}(\{0,1,3\})=8\), and \(v^{\mathcal {A}}(\{0,2,3\})=v^{\mathcal {A}}(N_{0})=14\).

Notice that, for example, the worth of coalition \(\{0,3\}\) is 6, although the aggregate benefit is \(b_{0}^{2}+b_{3}^{2}=13\). Since firm 2 is also located in region 2 and is not in the coalition, firm 0 would locate its new plant in region 1, the most profitable region for 0.
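These worths can be recomputed mechanically from the definition of \(v^{\mathcal {A}}\). A sketch under our hypothetical dictionary encoding of Example 1:

```python
from itertools import combinations

# Agglomeration game v^A for the data of Example 1 (encoding is our convention).
P = {1: {1}, 2: {2, 3}}
b = {(0, 1): 6, (1, 1): 2, (0, 2): 5, (2, 2): 1, (3, 2): 8}

def v(S):
    """v^A(S): zero without firm 0; otherwise firm 0 cooperates with any
    region fully contained in S, or falls back on its individual benefit."""
    S = set(S)
    if 0 not in S:
        return 0
    return max(sum(b.get((i, k), 0) for i in ((Pk | {0}) if Pk <= S else {0}))
               for k, Pk in P.items())

worths = {tuple(sorted({0} | set(c))): v({0} | set(c))
          for r in range(4) for c in combinations([1, 2, 3], r)}
print(worths)
# -> {(0,): 6, (0, 1): 8, (0, 2): 6, (0, 3): 6,
#     (0, 1, 2): 8, (0, 1, 3): 8, (0, 2, 3): 14, (0, 1, 2, 3): 14}
```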

In the proposition below we discuss the properties satisfied by \(v^{\mathcal {A}}\).

Proposition 1

\(v^{\mathcal {A}}\) is monotone and superadditive.

Proof

Let \(S\subseteq T\subseteq N_{0}\). Since \(P_{k}^{S} \subseteq P_{k}^{T}\), for every \(k \in R\), it can be deduced that \(v^{\mathcal {A}}(S)\le v^{\mathcal {A}}(T)\) and hence \(v^{\mathcal {A}}\) is monotone.

Let \(S,T\subseteq N_{0}\) with \(S\cap T=\emptyset\). Consider two cases. First, \(0\notin S\cup T\). Then \(v^{\mathcal {A}}(S)=v^{\mathcal {A}}(T)=v^{\mathcal {A}}(S\cup T)=0\) and \(v^{\mathcal {A}}(S\cup T)\ge v^{\mathcal {A}}(S)+v^{\mathcal {A}}(T)\). Second, \(0\in S\cup T\). Assume that \(0\in S\) (the case \(0\in T\) is similar so we omit it). Since \(v^{\mathcal {A}}\) is monotone, \(v^{\mathcal {A}}(S)\le v^{\mathcal {A}}(S\cup T)\). Since \(0\notin T\), \(v^{\mathcal {A}}(T)=0\). Then \(v^{\mathcal {A}}(S\cup T)\ge v^{\mathcal {A}}(S)+v^{\mathcal {A}}(T)\). Hence \(v^{\mathcal {A}}\) is superadditive. \(\square\)

Notice that \(v^\mathcal {A}\) may not be convex. Take \(i=1\), \(S=\{0,2\}\) and \(T=\{0,2,3\}\) in Example 1. Since \(v^{\mathcal {A}}(S\cup \{i\})-v^{\mathcal {A}}(S)=8-6=2>v^{\mathcal {A}}(T\cup \{i\})-v^{\mathcal {A}}(T)=14-14=0\), it can be deduced that \(v^{\mathcal {A}}\) is not convex.

In the next claim we state some obvious links between \(I_{0}(\mathcal {A})\) and \(v^{\mathcal {A}}\) that will be used in the proofs of our results.

Claim 1

For any problem \(\mathcal {A}\), any optimal region \(k^{*}\) and each \(S\subseteq N_{0}\) with \(0\in S\), the following statements hold:

  1. \(I_{0}(\mathcal {A}) \ge v^{\mathcal {A}}(S)\) when \(P_{k^{*}}\nsubseteq S\).

  2. \(I_{0}(\mathcal {A}) \le v^{\mathcal {A}}(S)\) when \(P_{k^{*}}\subseteq S\).

  3. \(I_{0}(\mathcal {A})=v^{\mathcal {A}}(N_{0}\backslash \{i\})\), for any \(i\in P_{k^{*}}\).

  4. \(I_{0}(\mathcal {A})=\max \limits _{k\in R\backslash \{k^{*}\}}\{v^{\mathcal {A}}(P_{k}\cup \{0\})\}\).

Now we discuss some links between \(v^{\mathcal {A}}\) and other classes of games in the literature. Big boss games were introduced in Muto et al. (1988). A game v is a big boss game with a powerful player \(i^{*}\in N\) if it satisfies the following three conditions: (B1) v is monotone; (B2) \(v(S)=0\) if \(i^{*}\notin S\); and (B3) \(v(N)-v(S)\ge \sum \limits _{i\in N\backslash S}[v(N)-v(N\backslash \{i\})]\) if \(i^{*}\in S\). Bahel (2016) extends the family of big boss games considering all games that satisfy (B1) and (B2) but not (B3) and calls this family generalized big boss games or veto games.

In big boss games, the \(\tau\)-value coincides with the nucleolus. This is not always true for generalized big boss games.

It is easy to see that for a problem \(\mathcal {A}\) and an optimal region \(k^{*}\) with \(|P_{k^{*}}|=1\), \(v^{\mathcal {A}}\) is a big boss game. This is also true if the problem has multiple optimal regions. In general, \(v^{\mathcal {A}}\) is a generalized big boss game but not a big boss game. Consider Example 1 and \(S=\{0,1\}\). Then \(v^{\mathcal {A}}(N_{0})-v^{\mathcal {A}}(S)=6\), whereas \(\sum \nolimits _{i\in N_{0}\backslash S}[v^{\mathcal {A}}(N_{0})-v^{\mathcal {A}}(N_{0}\backslash \{i\})]=6+6=12>6\). Hence \(v^{\mathcal {A}}\) does not satisfy (B3).

4 Solutions of the agglomeration game

In this section we study the core, the \(\tau\)-value, the nucleolus, the per capita nucleolus, and the Shapley value of the agglomeration game \(v^{\mathcal {A}}\).

4.1 The core

We prove that the core of \(v^{\mathcal {A}}\) is always non-empty. It can be described as follows: Firm 0 receives something between \(I_{0}(\mathcal {A})\) and \(g(\mathcal {A})\). Firms in the optimal region \(k^{*}\) receive something between zero and \(g(\mathcal {A})-I_{0}(\mathcal {A})\). Firms in other regions receive zero.

Theorem 1

Given a problem \(\mathcal {A}\) and an optimal region \(k^{*}\), the core of the game \(v^{\mathcal {A}}\) is non-empty and is given by

$$\begin{aligned} C(v^{\mathcal {A}})=\left\{ x\in \mathbb {R}^{n+1}:\sum _{i\in N_{0}}x_{i}=g(\mathcal {A}), \begin{array}{l} I_{0}(\mathcal {A}) \le x_{0}\le g(\mathcal {A}), \\ 0\le x_{i}\le g(\mathcal {A})-I_{0}(\mathcal {A}),\forall i\in P_{k^{*}}, \\ x_{i}=0,\forall i\in N\backslash P_{k^{*}} \end{array} \right\} . \end{aligned}$$

Proof

First, we prove \(``\subseteq ''\). Let \(x \in C(v^{\mathcal {A}})\). Then, \(\sum \nolimits _{i\in N_{0}}x_{i}=v^{\mathcal {A}}(N_{0})=g(\mathcal {A})\).

Take \(i \in N\). Since \(v^{\mathcal {A}}(\{i\})=0\), \(x_{i} \ge 0\) holds. Moreover, since \(\sum \nolimits _{j \in N_{0} \backslash \{i\}}x_{j} \ge v^{\mathcal {A}}(N_{0} \backslash \{i\})\),

$$\begin{aligned} x_{i}=v^{\mathcal {A}}(N_{0})-\sum _{j \in N_{0} \backslash \{i\}}x_{j} \le g(\mathcal {A})-v^{\mathcal {A}}(N_{0} \backslash \{i\}). \end{aligned}$$

Consider two cases:

  • \(i \in N \backslash P_{k^{*}}\). Since \(P_{k^{*}}^{N_{0} \backslash \{i\}}=P_{k^{*}}\),

    $$\begin{aligned}v^{\mathcal {A}}(N_{0} \backslash \{i\})=b^{k^{*}}(P_{k^{*}}\cup \{0\})=g(\mathcal {A}).\end{aligned}$$

    Hence, \(x_{i}=0\).

  • \(i \in P_{k^{*}}\). By Claim 1.3, \(v^{\mathcal {A}}(N_{0} \backslash \{i\})=I_{0}(\mathcal {A})\). Then, \(0 \le x_{i} \le g(\mathcal {A})-I_{0}(\mathcal {A})\).

We now prove that \(x_{0} \ge I_{0}(\mathcal {A})\). Assume first that \(I_{0}(\mathcal {A})=b_{0}^{k^{*}}\). We prove that \(v^{\mathcal {A}}(\{ 0\})=b_{0}^{k^{*}}\). Let \(k\in R\backslash \{k^{*}\}\). Then,

$$\begin{aligned} b_{0}^{k}\le b^{k}(P_{k}\cup \{0\})\le s(\mathcal {A}) \le I_{0}(\mathcal {A})=b_{0}^{k^{*}}. \end{aligned}$$

Let \(S=\{0\}\). Since \(P_{k}^{S}=\emptyset\) for all \(k\in R\),

$$\begin{aligned} v^{\mathcal {A}}(\{0\})=\max _{k\in R}\{b_{0}^{k}\}=b_{0}^{k^{*}}. \end{aligned}$$

Then, \(x_{0} \ge v^{\mathcal {A}}(\{0\})=b_{0}^{k^{*}}=I_{0}(\mathcal {A})\).

Assume now that \(I_{0}(\mathcal {A})=s(\mathcal {A})\). Then there exists \(\ell \in R\backslash \{k^{*}\}\) such that

$$\begin{aligned} x_{0}=x_{0}+\sum _{i \in P_{\ell }}x_{i} \ge v^{\mathcal {A}}(\{0\} \cup P_{\ell })=b_{0}^{\ell }+b^{\ell }(P_{\ell })=I_{0}(\mathcal {A}). \end{aligned}$$

Finally, we prove that \(x_{0} \le g(\mathcal {A})\).

$$\begin{aligned} x_{0}=v^{\mathcal {A}}(N_{0})-\sum _{i \in N}x_{i} \le g(\mathcal {A})-v^{\mathcal {A}}(N)=g(\mathcal {A}). \end{aligned}$$

We now prove \(``\supseteq ''\). It suffices to prove that \(\sum \nolimits _{i \in S}x_{i} \ge v^{\mathcal {A}}(S)\), for all \(S \subseteq N_{0}\).

If \(0\notin S\), \(\sum \nolimits _{i \in S}x_{i} \ge 0=v^{\mathcal {A}}(S)\). Now, assume that \(0 \in S\). Since \(x_{i}=0\) when \(i \in N \backslash P_{k^{*}}\), we have that

$$\begin{aligned} \sum _{i \in S}x_{i}=x_{0}+\sum _{i \in S \cap P_{k^{*}}}x_{i}. \end{aligned}$$

Again, we face two cases:

  • \(P_{k^{*}} \subseteq S\). Then,

    $$\begin{aligned} \sum _{i \in S}x_{i}=v^{\mathcal {A}}(N_{0})=v^{\mathcal {A}}(S). \end{aligned}$$
  • \(P_{k^{*}} \nsubseteq S\). Then,

    $$\begin{aligned} \sum _{i \in S}x_{i} \ge I_{0}(\mathcal {A})+\sum _{i \in S \cap P_{k^{*}}}x_{i} \ge I_{0}(\mathcal {A}). \end{aligned}$$

    By Claim 1.1, \(I_{0}(\mathcal {A}) \ge v^{\mathcal {A}}(S).\)

\(\square\)

As a consequence of this theorem, if \(k^{*}\) is not unique, the core consists of a single element in which firm 0 gets the total worth of the grand coalition and all other firms get zero. The same happens when \(P_{k^{*}}=\emptyset\) or if \(b_{i}^{k^{*}}=0\) for all \(i\in P_{k^{*}}\).

Taking into account the expression of the core, it is straightforward to check that both EOL and WOL belong to the core of \(v^{\mathcal {A}}\).
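As a numerical sanity check (a sketch under our hypothetical dictionary encoding of Example 1), one can verify both allocations coalition by coalition:

```python
from itertools import combinations

# Check core membership of EOL(A) = (10, 0, 2, 2) and WOL(A) = (8, 0, 2/3, 16/3).
P = {1: {1}, 2: {2, 3}}
b = {(0, 1): 6, (1, 1): 2, (0, 2): 5, (2, 2): 1, (3, 2): 8}
N0 = [0, 1, 2, 3]

def v(S):
    S = set(S)
    if 0 not in S:
        return 0
    return max(sum(b.get((i, k), 0) for i in ((Pk | {0}) if Pk <= S else {0}))
               for k, Pk in P.items())

def in_core(x):
    if abs(sum(x.values()) - v(N0)) > 1e-9:           # efficiency
        return False
    return all(sum(x[i] for i in S) >= v(S) - 1e-9    # coalitional rationality
               for r in range(1, 5) for S in combinations(N0, r))

EOL = {0: 10, 1: 0, 2: 2, 3: 2}
WOL = {0: 8, 1: 0, 2: 2 / 3, 3: 16 / 3}
print(in_core(EOL), in_core(WOL))   # -> True True
```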

4.2 The \(\tau\)-value

We now prove that the \(\tau\)-value of the game \(v^{\mathcal {A}}\) coincides with the egalitarian optimal location rule.

Theorem 2

For each problem \(\mathcal {A}\), \(\tau (v^{\mathcal {A}})=EOL(\mathcal {A})\).

Proof

The idea of the proof is simple. We consider several cases depending on \(\mathcal {A}\). For each case we compute \(\tau (v^{\mathcal {A}})\) and we prove that it coincides with \(EOL(\mathcal {A})\).

Let \(k^{*}\) be an optimal region. Assume first that \(k^{*}\) is not unique. Since \(v^{\mathcal {A}}(N_{0}\backslash \{0\})=0\) and \(v^{\mathcal {A}}(N_{0}\backslash \{i\})=g(\mathcal {A})\) for all \(i \in N\), \(M_{0}(v^\mathcal {A})=g(\mathcal {A})\) and \(M_{i}(v^\mathcal {A})=0\), \(\forall i \in N\). Moreover \(m_{0}(v^{\mathcal {A}})=g(\mathcal {A})\) and \(m_{i}(v^\mathcal {A})=0\), for all \(i \in N\). Hence, \(\tau (v^{\mathcal {A}})=EOL(\mathcal {A})\).

We now consider the case when \(k^{*}\) is unique. We first compute \(M(v^{\mathcal {A}})\). Since for all \(S \supseteq P_{k^{*}}\cup \{0\}\), \(v^{\mathcal {A}}(S)=v^{\mathcal {A}}(N_{0})\), \(v^{\mathcal {A}}(N_{0}\backslash \{0\})=0\), and for all \(i \in P_{k^{*}}\) \(v^{\mathcal {A}}(N_{0}\backslash \{i\})=I_{0}(\mathcal {A})\), it can be deduced that

$$\begin{aligned} M_{i}(v^{\mathcal {A}})=\left\{ \begin{array}{ll} g(\mathcal {A}), &{}\hbox { if } i=0 \\ g(\mathcal {A})-I_{0}(\mathcal {A}), &{} \hbox { if } i\in P_{k^{*}} \\ 0, &{} \hbox { otherwise}. \end{array} \right. \end{aligned}$$

We now compute \(m(v^{\mathcal {A}})\). Let \(i\in N\) and \(S\subseteq N_{0}\) with \(i\in S\). If \(0 \notin S\), then \(v^{\mathcal {A}}(S)=0\). Since \(M_{j}(v^{\mathcal {A}}) \ge 0\) for all \(j \in N_{0}\),

$$\begin{aligned} v^{\mathcal {A}}(S)-\sum _{j\in S\backslash \{i\}}M_{j}(v^{\mathcal {A}}) \le 0. \end{aligned}$$

Assume that \(0 \in S\). Since \(M_{0}(v^{\mathcal {A}})=g(\mathcal {A})\) and \(g(\mathcal {A})=v^{\mathcal {A}}(N_{0}) \ge v^{\mathcal {A}}(S)\),

$$\begin{aligned} v^{\mathcal {A}}(S)-\sum _{j\in S\backslash \{i\}}M_{j}(v^{\mathcal {A}}) \le g(\mathcal {A})-g(\mathcal {A})-\sum _{j\in S\backslash \{i,0\}}M_{j}(v^{\mathcal {A}}) \le 0. \end{aligned}$$

Moreover, for \(S=\{i\}\),

$$\begin{aligned} v^{\mathcal {A}}(S)-\sum \limits _{j\in S\backslash \{i\}}M_{j}(v^{\mathcal {A}})=0. \end{aligned}$$

Therefore, \(m_{i}(v^{\mathcal {A}})=0\), for all \(i \in N\).

We now compute \(m_0(v^{\mathcal {A}})\). Let \(S\subseteq N_{0}\) with \(0\in S\). If \(P_{k^{*}}\nsubseteq S\), by Claim 1.1,

$$\begin{aligned} v^{\mathcal {A}}(S)-\sum _{j\in S\backslash \{0\}}M_{j}(v^{\mathcal {A}}) \le v^{\mathcal {A}}(S) \le I_{0}(\mathcal {A}). \end{aligned}$$

Now, assume that \(P_{k^{*}}\subseteq S\). Thus,

$$\begin{aligned} \begin{aligned} v^{\mathcal {A}}(S)-\sum _{j\in S\backslash \{0\}}M_{j}(v^{\mathcal {A}})=&v^{\mathcal {A}}(N_{0})-\sum _{j\in P_{k^{*}}}M_{j}(v^{\mathcal {A}})\\ =&g(\mathcal {A})-|P_{k^{*}}|(g(\mathcal {A})-I_{0}(\mathcal {A}))\\ \le&g(\mathcal {A})-(g(\mathcal {A})-I_{0}(\mathcal {A}))=I_{0}(\mathcal {A}), \end{aligned} \end{aligned}$$

where the last inequality holds when \(|P_{k^{*}}| \ge 1\). Notice that if \(|P_{k^{*}}|=0\) then \(M_{0}(v^{\mathcal {A}})=g(\mathcal {A})\), \(M_{i}(v^{\mathcal {A}})=m_{i}(v^{\mathcal {A}})=0\) for all \(i \in N\) and \(m_{0}(v^{\mathcal {A}})=g(\mathcal {A})\) because \(v^{\mathcal {A}}(\{0\})=g(\mathcal {A})\). Thus, \(\tau (v^{\mathcal {A}})=EOL(\mathcal {A})\).

We now show that this bound is attained. If \(I_{0}(\mathcal {A})=b_{0}^{k^{*}}\), take \(S=\{0\}\). We have argued in the proof of Theorem 1 that \(v^{\mathcal {A}}(\{0\})=b_{0}^{k^{*}}\). Then,

$$\begin{aligned} v^{\mathcal {A}}(S)-\sum _{j\in S\backslash \{0\}}M_{j}(v^{\mathcal {A}})=b_{0}^{k^{*}}=I_{0}(\mathcal {A}). \end{aligned}$$

If \(I_{0}(\mathcal {A}) \ne b_{0}^{k^{*}}\), take \(S=P_{\ell } \cup \{0\}\) where \(\ell \in R \backslash \{k^{*}\}\) is such that \(I_{0}(\mathcal {A})=v^{\mathcal {A}}(P_{\ell }\cup \{0\})\). Then,

$$\begin{aligned} v^{\mathcal {A}}(S)-\sum _{j\in S\backslash \{0\}}M_{j}(v^{\mathcal {A}})=v^{\mathcal {A}}(P_{\ell }\cup \{0\})=I_{0}(\mathcal {A}). \end{aligned}$$

Then, \(m_{0}(v^{\mathcal {A}})=I_{0}(\mathcal {A})\).

We know that \(\tau (v^{\mathcal {A}})=\alpha M(v^{\mathcal {A}})+(1-\alpha )m(v^{\mathcal {A}})\) where \(\alpha \in [0,1]\) and \(\sum \nolimits _{i\in N_{0}}\tau _{i}(v^{\mathcal {A}})=v^{\mathcal {A}}(N_{0})=g(\mathcal {A})\).

Thus,

$$\begin{aligned} \begin{aligned} g(\mathcal {A})=&\alpha \sum _{i\in N_{0}}M_{i}(v^{\mathcal {A}})+(1-\alpha )\sum _{i\in N_{0}}m_{i}(v^{\mathcal {A}})\\ =&\alpha (g(\mathcal {A})+|P_{k^{*}}|(g(\mathcal {A})-I_{0}(\mathcal {A})))+(1-\alpha ) I_{0}(\mathcal {A})\\ =&\alpha (g(\mathcal {A})+|P_{k^{*}}|g(\mathcal {A})-|P_{k^{*}}|I_{0}(\mathcal {A})-I_{0}(\mathcal {A}))+I_{0}(\mathcal {A})\\ =&\alpha (|P_{k^{*}}|+1)(g(\mathcal {A})-I_{0}(\mathcal {A}))+I_{0}(\mathcal {A}). \end{aligned} \end{aligned}$$

Therefore,

$$\begin{aligned} \alpha (|P_{k^{*}}|+1)(g(\mathcal {A})-I_{0}(\mathcal {A})) =g(\mathcal {A})-I_{0}(\mathcal {A}). \end{aligned}$$

Two cases are possible. First, \(g(\mathcal {A}) = I_{0}(\mathcal {A})\). Then, \(M_{0}(v^{\mathcal {A}}) = m_{0}(v^{\mathcal {A}}) = g(\mathcal {A})\) and \(M_{i}(v^{\mathcal {A}})= m_{i}(v^{\mathcal {A}}) = 0\) for all \(i \in N\). Thus, \(\tau (v^{\mathcal {A}})=EOL(\mathcal {A})\).

Second, \(g(\mathcal {A}) \ne I_{0}(\mathcal {A})\). Then, \(\alpha =\dfrac{1}{|P_{k^{*}}|+1}.\) Let \(i\in N\). If \(i \notin P_{k^{*}}\), then \(m_{i}(v^{\mathcal {A}})=M_{i}(v^{\mathcal {A}})=0\) and \(\tau _{i}(v^{\mathcal {A}})=0\). If \(i\in P_{k^{*}}\),

$$\begin{aligned} \tau _{i}(v^{\mathcal {A}})=\alpha M_{i}(v^{\mathcal {A}})+(1-\alpha )m_{i}(v^{\mathcal {A}})= \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}. \end{aligned}$$

Moreover,

$$\begin{aligned} \begin{aligned} \tau _{0}(v^{\mathcal {A}})=&\alpha M_{0}(v^{\mathcal {A}})+(1-\alpha ) m_{0}(v^{\mathcal {A}})=\dfrac{g(\mathcal {A})}{|P_{k^{*}}|+1}+ \dfrac{|P_{k^{*}}|}{|P_{k^{*}}|+1}I_{0}(\mathcal {A})\\ =&\dfrac{|P_{k^{*}}|I_{0}(\mathcal {A})+I_{0}(\mathcal {A})+ g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}=I_{0} (\mathcal {A})+\dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}. \end{aligned} \end{aligned}$$

Thus, \(\tau (v^{\mathcal {A}})=EOL(\mathcal {A})\). \(\square\)

In general, the computation of the \(\tau\)-value is NP-hard. As a consequence of Theorem 2, in agglomeration games \(\tau\) can be computed in polynomial time.
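As a numerical check, the closed form of Theorem 2 can be compared with a brute-force computation of the \(\tau\)-value. The values below (\(g(\mathcal {A})=10\), \(I_{0}(\mathcal {A})=4\), \(P_{k^{*}}=\{1,2\}\)) are hypothetical, not taken from the paper's examples; the sketch only assumes the coalition function displayed in the proofs for a unique optimal region.

```python
from itertools import combinations

# Hypothetical game: unique optimal region with P_{k*} = {1, 2},
# g(A) = 10 and I_0(A) = 4 (illustrative values only).
N0 = (0, 1, 2)
g, I0 = 10.0, 4.0

def v(S):
    """Characteristic function v^A when k* is unique."""
    S = set(S)
    if 0 not in S:
        return 0.0
    return g if {1, 2} <= S else I0

# Utopia payoffs: M_i = v(N0) - v(N0 \ {i}).
M = {i: v(N0) - v(set(N0) - {i}) for i in N0}

# Minimal rights: m_i = max over S containing i of v(S) - sum_{j in S\{i}} M_j.
m = {}
for i in N0:
    others = [j for j in N0 if j != i]
    m[i] = max(
        v(set(T) | {i}) - sum(M[j] for j in T)
        for r in range(len(others) + 1)
        for T in combinations(others, r)
    )

# tau = alpha*M + (1 - alpha)*m, with alpha fixed by efficiency.
alpha = (v(N0) - sum(m.values())) / (sum(M.values()) - sum(m.values()))
tau = {i: alpha * M[i] + (1 - alpha) * m[i] for i in N0}
```

The brute-force values match \(EOL(\mathcal {A})\): firm 0 gets \(I_{0}(\mathcal {A})+(g(\mathcal {A})-I_{0}(\mathcal {A}))/3=6\) and each firm in \(P_{k^{*}}\) gets 2.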

4.3 The nucleolus

We now prove that the nucleolus of the game \(v^{\mathcal {A}}\) coincides with the egalitarian optimal location rule.

Theorem 3

For each problem \(\mathcal {A}\), \(\eta (v^{\mathcal {A}})=EOL(\mathcal {A})\).

Proof

The idea of the proof is simple. We consider several cases depending on \(\mathcal {A}\). For each case we compute \(\eta (v^{\mathcal {A}})\) and we prove that it coincides with \(EOL(\mathcal {A})\).

Let \(k^{*}\) be an optimal region. Assume first that \(k^{*}\) is not unique. Then \(EOL_{0}(\mathcal {A})=g(\mathcal {A})\) and \(EOL_{i}(\mathcal {A})=0\), for all \(i \in N\). Since the core consists of a single element and the nucleolus always belongs to the core, the nucleolus coincides with that element. Therefore, \(\eta (v^{\mathcal {A}})=EOL(\mathcal {A})\).

We now assume that \(k^{*}\) is unique. Take \(S\subseteq N_{0}\). We compute \(e(S,x)\), where \(x=EOL(\mathcal {A})\). We consider several cases:

  (i) \(0 \notin S\) and \(S\cap P_{k^{*}}=\emptyset\). Then,

    $$\begin{aligned}e(S,x)=v^{\mathcal {A}}(S)-\sum _{i\in S}x_{i}=0-0=0.\end{aligned}$$
  (ii) \(0 \notin S\) and \(S\cap P_{k^{*}} \ne \emptyset\). Then,

    $$\begin{aligned} \begin{aligned} e(S,x)=&v^{\mathcal {A}}(S)-\sum _{i\in S}x_{i}=0-|S\cap P_{k^{*}}|\dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1} \\ \le&-\left( \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) . \end{aligned} \end{aligned}$$
  (iii) \(0\in S\) and \(S\cap P_{k^{*}}=P_{k^{*}}\). Then,

    $$\begin{aligned}e(S,x)=v^{\mathcal {A}}(S)-\sum _{i\in S}x_{i}=g(\mathcal {A})-x_{0}-\sum _{i\in P_{k^{*}}}x_{i}=g(\mathcal {A})-g(\mathcal {A})=0.\end{aligned}$$
  (iv) \(0\in S\) and \(S\cap P_{k^{*}} \ne P_{k^{*}}\). By Claim 1.1, \(v^{\mathcal {A}}(S) \le I_{0}(\mathcal {A})\). Then,

    $$\begin{aligned} \begin{aligned} e(S,x) & =v^{\mathcal {A}}(S)-\sum _{i\in S}x_{i}=v^{\mathcal {A}}(S)-x_{0}-\sum _{i\in S\cap P_{k^{*}}}x_{i} \\ & \le I_{0}(\mathcal {A})-\left( I_{0}(\mathcal {A})+\dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) -|S\cap P_{k^{*}}|\left( \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) \\ & = -(|S\cap P_{k^{*}}|+1)\left( \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) \le -\left( \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) . \end{aligned} \end{aligned}$$

Thus, \(\theta (x)\) can be expressed as

$$\begin{aligned} \theta (x)=(0,...,0,e(S^{1},x),e(S^{2},x),...) \end{aligned}$$

where the entries 0, ..., 0 correspond to the coalitions in cases (i) and (iii) and \(e(S^{1},x),e(S^{2},x),...\) correspond to the coalitions in cases (ii) and (iv). It has already been shown above that for all \(S^{h}\),

$$\begin{aligned} e(S^{h},x) \le -\left( \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) . \end{aligned}$$

Let \(y\in C(v^{\mathcal {A}})\) be such that \(y \ne x\). It is easy to see that \(e(S,y)=0\) for the coalitions in cases (i) and (iii). Thus, \(\theta (y)\) can be expressed as

$$\begin{aligned} \theta (y)=(0,...,0,e(T^{1},y),e(T^{2},y),...) \end{aligned}$$

where the entries 0, ..., 0 correspond to the coalitions in cases (i) and (iii) and \(e(T^{1},y),e(T^{2},y),...\) correspond to the coalitions in cases (ii) and (iv). Now, it suffices to prove that there is an S in cases (ii) or (iv) satisfying

$$\begin{aligned} e(S,y)>-\left( \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) . \end{aligned}$$

Consider two cases:

  • \(y_{0}<x_{0}\). Let \(S=N_{0}\backslash P_{k^{*}}\). Then,

    $$\begin{aligned} e(S,y)=v^{\mathcal {A}}(N_{0}\backslash P_{k^{*}})-\sum _{i\in N_{0}\backslash P_{k^{*}}}y_{i}=I_{0}(\mathcal {A})-y_{0}>I_{0}(\mathcal {A}) -x_{0}=-\left( \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) . \end{aligned}$$
  • \(y_{0}\ge x_{0}\). Then, there is \(i\in P_{k^{*}}\) such that \(y_{i}<x_{i}\). If we take \(S=\{i\}\),

    $$\begin{aligned} e(S,y)=v^{\mathcal {A}}(\{i\})-y_{i}>0-x_{i}= -\left( \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}\right) . \end{aligned}$$

Therefore, \(\theta (x) \le _{L} \theta (y)\) and \(\eta (v^{\mathcal {A}})=EOL(\mathcal {A})\). \(\square\)

In general, the computation of the nucleolus is NP-hard. As a consequence of Theorem 3, in agglomeration games \(\eta\) can be computed in polynomial time.
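The excess pattern used in the proof of Theorem 3 can be verified by enumeration on a small hypothetical instance (a unique optimal region with \(P_{k^{*}}=\{1,2\}\), one firm outside it, \(g(\mathcal {A})=10\), \(I_{0}(\mathcal {A})=4\); invented values): every proper non-empty coalition has non-positive excess at \(x=EOL(\mathcal {A})\), the zero-excess coalitions are exactly those of cases (i) and (iii), and all remaining excesses are at most \(-(g(\mathcal {A})-I_{0}(\mathcal {A}))/(|P_{k^{*}}|+1)\).

```python
from itertools import combinations

# Hypothetical data (not the paper's Example 1): unique optimal region,
# P_{k*} = {1, 2}, one firm (3) outside it, g(A) = 10, I_0(A) = 4.
g, I0 = 10.0, 4.0
N0, P = (0, 1, 2, 3), {1, 2}
share = (g - I0) / (len(P) + 1)                   # (g - I0)/(|P_{k*}|+1)
x = {0: I0 + share, 1: share, 2: share, 3: 0.0}   # x = EOL(A)

def v(S):
    """v^A for a unique optimal region (case analysis in the text)."""
    if 0 not in S:
        return 0.0
    return g if P <= set(S) else I0

excess = {}
for r in range(1, len(N0)):                       # proper non-empty coalitions
    for S in combinations(N0, r):
        excess[S] = v(S) - sum(x[i] for i in S)   # excess e(S, x)

zero = sorted(S for S, e in excess.items() if abs(e) < 1e-9)
# Cases (i) and (iii) give zero excess; all other excesses are <= -share.
print(zero)  # [(0, 1, 2), (3,)]
```

Coalition \(\{3\}\) is the case (i) instance and \(P_{k^{*}}\cup \{0\}\) the case (iii) instance; every other coalition falls in case (ii) or (iv).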

4.4 The per capita nucleolus

We now prove that the per capita nucleolus of the game \(v^{\mathcal {A}}\) also coincides with the egalitarian optimal location rule.

Theorem 4

For each problem \(\mathcal {A}\), \(\eta ^{pc}(v^{\mathcal {A}})=EOL(\mathcal {A})\).

Proof

The proof is similar to that of Theorem 3, so we omit it. \(\square\)

In general, the computation of the per capita nucleolus is NP-hard. As a consequence of Theorem 4, in agglomeration games \(\eta ^{pc}\) can be computed in polynomial time.

4.5 The Shapley value

The Shapley value is the most popular single-valued solution for cooperative games. It is well known that, in non-convex games, the Shapley value can lie outside the core. Moreover, the computation of this allocation is NP-hard.

In Example 1, the Shapley value is (9.17, 0.5, 2.17, 2.17). Note that this allocation is outside the core because firm 1, which is not located in the optimal region, gets a positive amount. Moreover, firm 2 receives more than \(b_{2}^{2}\).

Since the Shapley value can lie outside the core, firms in the optimal region may end up transferring money to firms outside the optimal region, which is quite difficult to justify in the situation that we are considering. Furthermore, we do not have a closed-form expression for the Shapley value in general agglomeration problems.

Nevertheless, in agglomeration problems where \(I_{0}(\mathcal {A})=b_{0}^{k^{*}}\), the Shapley value coincides with the egalitarian optimal location rule. The next theorem formally states this result.

Theorem 5

For any problem \(\mathcal {A}\) such that \(I_{0}(\mathcal {A})=b_{0}^{k^{*}}\), \(Sh(v^{\mathcal {A}})=EOL(\mathcal {A})\).

Proof

The idea of the proof is simple. We consider several cases depending on \(\mathcal {A}\). For each case we compute \(Sh(v^{\mathcal {A}})\) and we prove that it coincides with \(EOL(\mathcal {A})\).

Let \(k^{*}\) be an optimal region. Assume that \(k^{*}\) is not unique. It is known that \(EOL_{0}(\mathcal {A})=g(\mathcal {A})\) and \(EOL_{i}(\mathcal {A})=0, \forall i \in N\).

Since \(I_{0}(\mathcal {A})=b_{0}^{k^{*}}\) and \(k^{*}\) is not unique, it can be deduced that \(I_{0}(\mathcal {A})=g(\mathcal {A})\). Then, for all \(S\subseteq N_{0}\)

$$\begin{aligned} v^{\mathcal {A}}(S)=\left\{ \begin{array}{ll} g(\mathcal {A}), &{} \hbox { if } 0 \in S \\ 0, &{} \hbox { otherwise}. \end{array}\right. \end{aligned}$$

For any \(S \subseteq N_{0}\) such that \(0 \notin S\), \(v^{\mathcal {A}}(S \cup \{0\})-v^{\mathcal {A}}(S)=g(\mathcal {A})\). Then, \(Sh_{0}(v^{\mathcal {A}})=g(\mathcal {A})\).

For any \(i \in N\) and any \(S \subseteq N_{0}\) such that \(i \notin S\), \(v^{\mathcal {A}}(S \cup \{i\})=v^{\mathcal {A}}(S)\). Then, \(Sh_{i} (v^{\mathcal {A}})=0\) for any \(i \in N\). Therefore, \(Sh(v^{\mathcal {A}})=EOL(\mathcal {A})\).

We now consider that \(k^{*}\) is unique. Then, for all \(S\subseteq N_{0}\)

$$\begin{aligned} v^{\mathcal {A}}(S)=\left\{ \begin{array}{ll} g(\mathcal {A}), &{} \hbox { if } 0 \in S \hbox { and } P_{k^{*}} \subseteq S \\ I_{0}(\mathcal {A}), &{}\hbox { if } 0 \in S \hbox { and } P_{k^{*}} \nsubseteq S \\ 0, &{}\hbox { otherwise}. \end{array}\right. \end{aligned}$$

Take \(i \in N \backslash P_{k^{*}}\) and \(S \subseteq N_{0}\) with \(i \notin S\). Then \(v^{\mathcal {A}}(S \cup \{i\})=v^{\mathcal {A}}(S)\). Therefore, \(Sh_{i}(v^{\mathcal {A}})=0\), \(\forall i \in N \backslash P_{k^{*}}\).

Take \(i \in P_{k^{*}}\) and let \(S \subseteq N_{0}\) with \(i \notin S\).

$$\begin{aligned} v^{\mathcal {A}}(S \cup \{i\})-v^{\mathcal {A}}(S)=\left\{ \begin{array}{ll} g(\mathcal {A})-I_{0}(\mathcal {A}), &{}\hbox { if } P_{k^{*}} \cup \{0\} \subseteq S \cup \{i\} \\ 0, &{} \hbox { otherwise}. \end{array}\right. \end{aligned}$$

Let \(\Pi '\) be the subset of \(\Pi _{N_0}\) given by the permutations \(\pi\) where i is the last element of \(P_{k^{*}} \cup \{0\}\) in \(\pi\). Thus,

$$\begin{aligned} \begin{aligned} Sh_{i}(v^{\mathcal {A}}) &= \dfrac{1}{|N_0|!}\sum _{\pi \in \Pi _{N_0}}(v^{\mathcal {A}}(Pre(i,\pi ) \cup \{i\})-v^{\mathcal {A}}(Pre(i,\pi ))) \\ & = \dfrac{1}{|N_0|!}\sum _{\pi \in \Pi '}(g(\mathcal {A})-I_{0}(\mathcal {A})) \\ & = \dfrac{|\Pi '|}{|N_0|!}(g(\mathcal {A})-I_{0}(\mathcal {A})). \end{aligned} \end{aligned}$$

Since one in every \(|P_{k^{*}}|+1\) permutations in \(\Pi _{N_0}\) belongs to \(\Pi '\), it can be deduced that

$$\begin{aligned} Sh_{i}(v^{\mathcal {A}})=\dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}, \quad \forall i \in P_{k^{*}}. \end{aligned}$$

Finally,

$$\begin{aligned} \begin{aligned} Sh_{0}(v^{\mathcal {A}}) & = g(\mathcal {A})-\sum _{i\in N} Sh_{i}(v^{\mathcal {A}}) \\ & = g(\mathcal {A})-\dfrac{|P_{k^{*}}|(g(\mathcal {A})-I_{0}(\mathcal {A}))}{|P_{k^{*}}|+1} \\ & = I_{0}(\mathcal {A}) + \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}. \end{aligned} \end{aligned}$$

Therefore, \(Sh(v^{\mathcal {A}})=EOL(\mathcal {A})\). \(\square\)
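The formulas in the proof of Theorem 5 can be checked against the permutation definition of the Shapley value on a hypothetical game of the displayed form (unique optimal region, \(P_{k^{*}}=\{1,2\}\), \(g(\mathcal {A})=10\), \(I_{0}(\mathcal {A})=4\); invented values):

```python
from itertools import permutations
from math import factorial

# Hypothetical game of the displayed form: unique optimal region,
# P_{k*} = {1, 2}, g(A) = 10, I_0(A) = 4 (illustrative values only).
N0 = (0, 1, 2)
g, I0 = 10.0, 4.0

def v(S):
    if 0 not in S:
        return 0.0
    return g if {1, 2} <= set(S) else I0

# Shapley value as the average marginal contribution over all orders.
sh = {i: 0.0 for i in N0}
for pi in permutations(N0):
    pre = set()
    for i in pi:
        sh[i] += v(pre | {i}) - v(pre)   # marginal contribution of i
        pre.add(i)
sh = {i: s / factorial(len(N0)) for i, s in sh.items()}
```

Both coordinates match the closed form: \(Sh_{0}=I_{0}(\mathcal {A})+(g(\mathcal {A})-I_{0}(\mathcal {A}))/3=6\) and \(Sh_{i}=(g(\mathcal {A})-I_{0}(\mathcal {A}))/3=2\) for \(i\in P_{k^{*}}\).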

5 Axiomatic characterizations

In this section we introduce some properties of rules. We analyze which of those properties are fulfilled by the egalitarian optimal location rule and the weighted optimal location rule. Finally, we present axiomatic characterizations for both rules.

Core selection says that the rule should select core allocations.

Core selection (CS): For any problem \(\mathcal {A}\), \(f(\mathcal {A})\in C(v^{\mathcal {A}})\).

Monotonicity says that if the individual benefit of a firm increases (and the rest of the problem remains the same), then that firm should not end up worse off.

Monotonicity (M): Given two problems \(\mathcal {A}=(0,N,P,b)\) and \(\mathcal {A}'=(0,N,P,b')\) and \(i\in N_{0}\) such that \(b_{i}'^{k}>b_{i}^{k}\) for some \(k\in R\) and \(b_{j}'^{\ell }=b_{j}^{\ell }\) otherwise, \(f_{i}(\mathcal {A}')\ge f_{i}(\mathcal {A})\).

Consider two firms belonging to the same region that obtain the same benefit when 0 opens the new plant in their region. Symmetry says that they should receive the same amount.

Symmetry (SYM): For any problem \(\mathcal {A}\) and each pair of firms \(i,j\in P_{k}\) such that \(b_{i}^{k}=b_{j}^{k}\), \(f_{i}(\mathcal {A})=f_{j}(\mathcal {A})\).

Assume that firm 0 locates in region \(k^{*}\). It would then be desirable to have rules under which firms in \(P_{k^{*}}\) transfer something to firm 0 in order to incentivize firm 0 to locate in region \(k^{*}\).

Let \(k^{*}\) be an optimal region. For each \(i\in N_{0}\), define \(t^{f,k^{*}}(\mathcal {A})\), the transfer vector associated with f and \(k^{*}\), as \(t_{i}^{f,k^{*}}(\mathcal {A})=b_{i}^{k^{*}}-f_{i}(\mathcal {A})\).

The next property says that firms in \(P_{k^{*}}\) should transfer something, but they cannot receive transfers.

No transfer to local firms (NTLF): For any problem \(\mathcal {A}\), each optimal region \(k^{*}\) and each \(i\in P_{k^{*}}\) we have that \(t_{i}^{f,k^{*}}(\mathcal {A})\ge 0\).

Equal treatment inside optimal regions says that if the value of an optimal region increases (and the rest of the problem remains the same), then all firms in that region and firm 0 are affected in the same amount.

Equal treatment inside optimal regions (ETOR): Given two problems \(\mathcal {A}=(0,N,P,b)\) and \(\mathcal {A}'=(0,N,P,b')\) and an optimal region \(k^{*}\) for \(\mathcal {A}\) such that \(b'^{k^{*}}(P_{k^{*}})>b^{k^{*}}(P_{k^{*}})\) and \(b_{j}'^{\ell }=b_{j}^{\ell }\) for each \(j \in N_{0}\) and each \(\ell \in R \backslash \{k^{*}\}\). Then, for each \(i,j\in P_{k^{*}}\cup \{0\}\),

$$\begin{aligned} f_{i}(\mathcal {A}')-f_{i}(\mathcal {A})=f_{j}(\mathcal {A}')-f_{j}(\mathcal {A}). \end{aligned}$$

Consider a situation in which a firm in N splits (for instance, each of its plants comes to be considered as an independent firm). The next property says that the amount obtained by each of the other pre-existing firms does not change.

Merging-splitting proofness (MSP): Let \(\mathcal {A}=(0,N,P,b)\) and \(\mathcal {A}'=(0,N',P',b')\) be two problems such that, for some \(i \in N\)

  • \(N'=(N \backslash \{i\}) \cup \{i^{1},...,i^{m}\}\), with \((N \backslash \{i\}) \cap \{i^{1},...,i^{m}\}=\emptyset\).

  • \(P'=\{P_{1}',...,P_{r}'\}\) where \(P_{k}'=P_{k}\) for all \(k\ne k(i)\) and \(P_{k(i)}'=(P_{k(i)}\backslash \{i\}) \cup \{i^{1},...,i^{m}\}\).

  • \(b_{j}'^{k}=b_{j}^{k}\) for all \(j\in N_{0}\backslash \{i\}\) and \(k\in R\) and \(b_{i}^{k(i)}=\sum \nolimits _{\ell =1}^{m}b_{i^{\ell }}'^{k(i)}\).

Then, \(f_{j}(\mathcal {A})=f_{j}(\mathcal {A}')\) for all \(j\in N_{0}\backslash \{i\}\).

Instead of splitting (as above), this property can also be interpreted in terms of merging: a subset of firms in the same region merges into a single firm.

In the propositions below we discuss what properties are satisfied by each rule.

Proposition 2

  (a) The egalitarian optimal location rule satisfies core selection, monotonicity, symmetry, and equal treatment inside optimal regions.

  (b) The egalitarian optimal location rule does not satisfy no transfer to local firms or merging-splitting proofness.

Proof

(a) By Theorem 3, EOL coincides with the nucleolus of \(v^{\mathcal {A}}\) and the nucleolus is always in the core of \(v^{\mathcal {A}}\). Thus, EOL satisfies CS. It is straightforward to check that EOL satisfies SYM.

We prove that EOL satisfies M. Let \(\mathcal {A}'=(0,N,P,b')\) be a problem such that \(b_{i}'^{k}>b_{i}^{k}\) for some \(i \in N_{0}\) and \(k \in R\), and \(b_{j}'^{\ell }=b_{j}^{\ell }\) otherwise.

Assume that \(i \in N\). Notice that necessarily \(k=k(i)\). There are three possibilities for k(i):

  • k(i) is an optimal region for \(\mathcal {A}\). Then, k(i) is the unique optimal region for \(\mathcal {A}'\). Therefore, \(g(\mathcal {A}')>g(\mathcal {A})\) and \(I_{0}(\mathcal {A}')=I_{0}(\mathcal {A})\). Then, \(EOL_{i}(\mathcal {A}')>EOL_{i}(\mathcal {A})\).

  • k(i) is not an optimal region for either \(\mathcal {A}\) or \(\mathcal {A}'\). Then, \(EOL_{i}(\mathcal {A}')=0=EOL_{i}(\mathcal {A})\).

  • k(i) is not an optimal region for \(\mathcal {A}\) but it is for \(\mathcal {A}'\). Then, \(EOL_{i}(\mathcal {A}') \ge 0=EOL_{i}(\mathcal {A})\).

If \(i=0\), \(g(\mathcal {A}') \ge g(\mathcal {A})\) and \(I_{0}(\mathcal {A}') \ge I_{0}(\mathcal {A})\). Then,

$$\begin{aligned} \begin{aligned} EOL_{0}(\mathcal {A}') &= I_{0}(\mathcal {A}')+\dfrac{g(\mathcal {A}')-I_{0}(\mathcal {A}')}{|P_{k}|+1} \\ &= I_{0}(\mathcal {A})+(I_{0}(\mathcal {A}')-I_{0}(\mathcal {A}))+\dfrac{g(\mathcal {A}')-g(\mathcal {A})}{|P_{k}|+1}+\dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{|P_{k}|+1}-\dfrac{I_{0}(\mathcal {A}')-I_{0}(\mathcal {A})}{|P_{k}|+1} \\ &= EOL_{0}(\mathcal {A})+\dfrac{|P_{k}|(I_{0}(\mathcal {A}')-I_{0}(\mathcal {A}))}{|P_{k}|+1}+\dfrac{g(\mathcal {A}')-g(\mathcal {A})}{|P_{k}|+1} \\ &\ge EOL_{0}(\mathcal {A}). \end{aligned} \end{aligned}$$

Thus, EOL satisfies M.

Finally, we prove that EOL satisfies ETOR. Let \(\mathcal {A}\) and \(\mathcal {A}'\) be two problems fulfilling the conditions in ETOR. Then \(k^{*}\) is also an optimal region for \(\mathcal {A}'\), \(g(\mathcal {A}')>g(\mathcal {A})\) and \(I_{0}(\mathcal {A}')=I_{0}(\mathcal {A})\). Therefore, for all \(i \in P_{k^{*}} \cup \{0\}\),

$$\begin{aligned} EOL_{i}(\mathcal {A}')-EOL_{i}(\mathcal {A})=\dfrac{g(\mathcal {A}') -g(\mathcal {A})}{|P_{k^{*}}|+1}. \end{aligned}$$

Thus, EOL satisfies ETOR.

(b) Consider Example 1. Clearly, EOL does not fulfill NTLF since

$$\begin{aligned} t_{2}^{EOL}(\mathcal {A})=b_{2}^{2}-EOL_{2}(\mathcal {A})=1-2=-1<0. \end{aligned}$$

Let \(\mathcal {A}\) be as in Example 1. Let \(\mathcal {A}'=(0,N',P',b')\) be a problem such that \(N'=\{1,2,3^{1},3^{2}\}\), \(P'=\{\{1\},\{2,3^{1},3^{2}\}\}\), \(b_{3^{1}}'^{2}=b_{3^{2}}'^{2}=4\), and \(b_{i}'^{k}=b_{i}^{k}\), otherwise. Then,

$$\begin{aligned} EOL_{2}(\mathcal {A}')=\dfrac{14-8}{4}=1.5<2=EOL_{2}(\mathcal {A}). \end{aligned}$$

Thus, EOL does not satisfy MSP. \(\square\)
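The failure of MSP can also be seen directly from the closed form of EOL: with the values from Example 1 above (\(g(\mathcal {A})=14\) and \(I_{0}(\mathcal {A})=8\)), splitting one firm in the optimal region increases \(|P_{k^{*}}|\) and therefore shrinks every equal share.

```python
def eol_share(g, I0, n_local):
    """Equal share under EOL of each firm in the optimal region (and the
    part of firm 0's payoff above I_0): (g - I0) / (n_local + 1)."""
    return (g - I0) / (n_local + 1)

g, I0 = 14.0, 8.0                       # values from Example 1
before = eol_share(g, I0, n_local=2)    # region k* has two firms
after = eol_share(g, I0, n_local=3)     # one of them splits in two
assert after < before                   # firm 2's payoff drops: MSP fails
```

This reproduces the drop from 2 to 1.5 in firm 2's payoff computed above.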

Proposition 3

  (a) The weighted optimal location rule satisfies core selection, monotonicity, symmetry, no transfer to local firms, and merging-splitting proofness.

  (b) The weighted optimal location rule does not satisfy equal treatment inside optimal regions.

Proof

(a) WOL satisfies CS by Theorem 1. It is straightforward to prove that WOL also satisfies SYM.

We now prove that WOL satisfies M. Let \(\mathcal {A}'=(0,N,P,b')\) be a problem such that \(b_{i}'^{k}>b_{i}^{k}\) for some \(i \in N_{0}\) and \(k \in R\), and \(b_{j}'^{\ell }=b_{j}^{\ell }\) otherwise.

Let \(i \in N\). Notice that necessarily \(k=k(i)\). There are three possibilities for k(i):

  • k(i) is an optimal region for \(\mathcal {A}\). Then, k(i) is the unique optimal region for \(\mathcal {A}'\). Therefore, \(g(\mathcal {A}')>g(\mathcal {A})\) and \(I_{0}(\mathcal {A}')=I_{0}(\mathcal {A})\). Then,

    $$\begin{aligned}WOL_{i}(\mathcal {A}')=\dfrac{b_{i}'^{k^{*}}}{b'^{k^{*}}(P_{k^{*}})}(g(\mathcal {A}')-I_{0}(\mathcal {A}')) \ge \dfrac{b_{i}^{k^{*}}}{ b^{k^{*}}(P_{k^{*}})}(g(\mathcal {A})-I_{0}(\mathcal {A}))= WOL_{i}(\mathcal {A}).\end{aligned}$$
  • k(i) is not an optimal region for either \(\mathcal {A}\) or \(\mathcal {A}'\). Then, \(WOL_{i}(\mathcal {A}')=0=WOL_{i}(\mathcal {A})\).

  • k(i) is not an optimal region for \(\mathcal {A}\) but it is for \(\mathcal {A}'\). Then, \(WOL_{i}(\mathcal {A}') \ge 0=WOL_{i}(\mathcal {A})\).

If \(i=0\), \(WOL_{0}(\mathcal {A}')=I_{0}(\mathcal {A}') \ge I_{0}(\mathcal {A})=WOL_{0}(\mathcal {A})\). Therefore, WOL satisfies M.

We prove that WOL satisfies NTLF. Let \(k^{*}\) be an optimal region. If \(k^{*}\) is not unique, then \(WOL_{i}(\mathcal {A})=0\), for all \(i \in N\). Hence \(t_{i}^{WOL}(\mathcal {A}) = b_i^{k^{*}} \ge 0\) for all \(i \in N\).

Now consider that \(k^{*}\) is unique. For all \(i \in P_{k^{*}}\),

$$\begin{aligned} t_{i}^{WOL}(\mathcal {A})=b_{i}^{k^{*}}-\dfrac{b_{i}^{k^{*}}}{b^{k^{*}}(P_{k^{*}})}(g(\mathcal {A})-I_{0}(\mathcal {A}))= b_{i}^{k^{*}}\left[ 1-\dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{b^{k^{*}}(P_{k^{*}})}\right] . \end{aligned}$$

Let \(i \in P_{k^{*}}\). If \(b_{i}^{k^{*}}=0\), then \(t_{i}^{WOL}(\mathcal {A})=0\). Assume that \(b_{i}^{k^{*}}>0\). Thus,

$$\begin{aligned} \begin{aligned} t_{i}^{WOL}(\mathcal {A}) \ge 0 \Leftrightarrow&1-\dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})}{b^{k^{*}}(P_{k^{*}})} \ge 0 \\ \Leftrightarrow&b^{k^{*}}(P_{k^{*}}) \ge g(\mathcal {A})-I_{0}(\mathcal {A}) \\ \Leftrightarrow&I_{0}(\mathcal {A}) \ge g(\mathcal {A})-b^{k^{*}}(P_{k^{*}})=b_{0}^{k^{*}}, \end{aligned} \end{aligned}$$

which always holds. Thus, WOL satisfies NTLF.

We now prove that WOL satisfies MSP. Let \(i \in N\) and let \(\mathcal {A}'\) fulfill the conditions in MSP. Let \(k^{*}\) be an optimal region for \(\mathcal {A}\). If \(k^{*}\) is not unique, \(WOL_{0}(\mathcal {A}')=g(\mathcal {A}')=g(\mathcal {A})=WOL_{0}(\mathcal {A})\) and \(WOL_{j}(\mathcal {A})=0=WOL_{j}(\mathcal {A}')\), for all \(j \in N \backslash \{i\}\).

Now assume that \(k^{*}\) is unique. Notice that \(k^{*}\) is the only optimal region for \(\mathcal {A}'\). Moreover, \(g(\mathcal {A}')=g(\mathcal {A})\) and \(I_{0}(\mathcal {A})=I_{0}(\mathcal {A}')\). Then, for all \(j \in P_{k^{*}} \backslash \{i\}\),

$$\begin{aligned} WOL_{j}(\mathcal {A}')=\dfrac{b_{j}'^{k^{*}}}{b'^{k^{*}}( P'_{k^{*}})}(g(\mathcal {A}')-I_{0}(\mathcal {A}'))= \dfrac{b_{j}^{k^{*}}}{b^{k^{*}}(P_{k^{*}})}(g(\mathcal {A}) -I_{0}(\mathcal {A}))=WOL_{j}(\mathcal {A}). \end{aligned}$$

Since \(WOL_{0}(\mathcal {A}')=I_{0}(\mathcal {A}')=I_{0}(\mathcal {A})=WOL_{0}(\mathcal {A})\) and \(WOL_{j}(\mathcal {A}')=0=WOL_{j}(\mathcal {A})\), for all \(j \in N \backslash P_{k^{*}}\), WOL satisfies MSP.

(b) Again, consider the problem introduced in Example 1. Now let \(\mathcal {A}'=(0,N,P,b')\) be a problem such that \(b'^{2}_{2}=4\) and \(b_{j}'^{\ell }=b_{j}^{\ell }\) otherwise. Note that \(\mathcal {A}\) and \(\mathcal {A}'\) fulfill the conditions of ETOR. Since \(WOL(\mathcal {A})=(8,0,2/3,16/3)\) and \(WOL(\mathcal {A}')=(8,0,3,6)\), the firms in \(P_{k^{*}}\cup \{0\}\) are not all affected in the same amount, so WOL does not satisfy ETOR. \(\square\)
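The computations behind NTLF can be illustrated on hypothetical data (a unique optimal region with two firms, \(b_{1}^{k^{*}}=3\), \(b_{2}^{k^{*}}=9\), \(b_{0}^{k^{*}}=2\), \(I_{0}(\mathcal {A})=4\); all numbers invented for this sketch): every local firm transfers a non-negative amount because \(I_{0}(\mathcal {A}) \ge b_{0}^{k^{*}}\).

```python
# Hypothetical data: unique optimal region k* with firms 1 and 2,
# b_1 = 3, b_2 = 9, b_0^{k*} = 2, and outside value I_0 = 4
# (so g = 2 + 3 + 9 = 14). All numbers are invented for this sketch.
b = {1: 3.0, 2: 9.0}
b0, I0 = 2.0, 4.0
g = b0 + sum(b.values())

# WOL: firm 0 gets I_0; local firms share g - I_0 proportionally to b_i.
wol = {0: I0}
for i, bi in b.items():
    wol[i] = bi / sum(b.values()) * (g - I0)

# Transfers of the local firms: t_i = b_i^{k*} - WOL_i >= 0 (NTLF),
# which holds because I_0 >= b_0^{k*} by the definition of I_0.
t = {i: bi - wol[i] for i, bi in b.items()}
assert all(ti >= 0 for ti in t.values())
assert abs(sum(wol.values()) - g) < 1e-9   # efficiency
```

Here the local firms give up 0.5 and 1.5 respectively, exactly the surplus that cannot be sustained in the core.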

We now provide an axiomatic characterization of the EOL rule based on core selection and equal treatment inside optimal regions.

Theorem 6

The egalitarian optimal location rule is the only rule that satisfies core selection and equal treatment inside optimal regions.

Proof

By Proposition 2, EOL satisfies CS and ETOR.

We now prove uniqueness. Let f be a rule satisfying both properties. Assume that \(\mathcal {A}\) has several optimal regions. Since the core has a single element and f satisfies CS, it is obvious that f coincides with EOL.

Assume that \(\mathcal {A}\) has a unique optimal region \(k^{*}\). We consider two cases.

  • \(I_{0}(\mathcal {A})=s(\mathcal {A})\). There exists \(\ell \in R\backslash \{k^{*}\}\) such that \(I_{0}(\mathcal {A})=b^{\ell }(P_{\ell }\cup \{0\})\). Define \(\mathcal {A}^{1}=(0,N,P,b^{1})\) such that \(b^{1k^{*}}(P_{k^{*}})=s(\mathcal {A})-b^{k^{*}}_{0}\) and \(b^{1k}_{j}=b^{k}_{j}\), otherwise.

    In \(\mathcal {A}^{1}\) there are at least two optimal regions: \(k^{*}\) and \(\ell\). Moreover, \(g(\mathcal {A}^{1})=I_{0}(\mathcal {A})\). By Theorem 1, the core of \(v^{\mathcal {A}^{1}}\) has a single element, \((I_{0}(\mathcal {A}),0,...,0)\). By CS, \(f(\mathcal {A}^{1})=(I_{0}(\mathcal {A}),0,...,0)\).

    Since \(\mathcal {A}\) and \(\mathcal {A}^{1}\) fulfill the conditions of ETOR, we have that for each \(i,j\in P_{k^{*}}\cup \{0\}\),

    $$\begin{aligned} f_{i}(\mathcal {A})-f_{i}(\mathcal {A}^{1})=f_{j}(\mathcal {A})-f_{j}(\mathcal {A}^{1}). \end{aligned}$$

    Fix \(i\in P_{k^{*}}\cup \{0\}\), then

    $$\begin{aligned}&\begin{aligned} (|P_{k^{*}}|+1)(f_{i}(\mathcal {A})-f_{i}(\mathcal {A}^{1})) &= \sum _{j\in P_{k^{*}}\cup \{0\}}(f_{j}(\mathcal {A})-f_{j}(\mathcal {A}^{1})) \\ &= g(\mathcal {A}) - g(\mathcal {A}^{1}) \\ &= g(\mathcal {A})-I_{0}(\mathcal {A}) \end{aligned}\\&\Rightarrow f_{i}(\mathcal {A})=f_{i}(\mathcal {A}^{1})+\dfrac{g(\mathcal {A})- I_{0}(\mathcal {A})}{|P_{k^{*}}|+1}. \end{aligned}$$

    Then f coincides with EOL on \(P_{k^{*}} \cup \{0\}\). Since f satisfies CS, it follows from Theorem 1 that \(f_{i}(\mathcal {A})=0\) for all \(i\in N\backslash P_{k^{*}}\). Hence f coincides with EOL on \(N\backslash P_{k^{*}}\).

  • \(I_{0}(\mathcal {A})=b^{k^{*}}_{0} > s(\mathcal {A})\). Let \(\mathcal {A}^{2}=(0,N,P,b^{2})\) be a problem such that \(b^{2k^{*}}_{i}=0\), for all \(i \in P_{k^{*}}\) and \(b^{2k}_{j}=b^{k}_{j}\), otherwise. Note that \(k^{*}\) is also the unique optimal region for \(\mathcal {A}^{2}\) and \(g(\mathcal {A}^{2})=I_{0}(\mathcal {A})\). By Theorem 1 and CS, \(f(\mathcal {A}^{2})=(I_{0}(\mathcal {A}),0,...,0)\).

    Since \(\mathcal {A}\) and \(\mathcal {A}^{2}\) also fulfill the conditions of ETOR, it can be concluded using arguments similar to those used in the previous case that f coincides with EOL.

\(\square\)

Remark 1

The properties used in Theorem 6 are independent.

The WOL rule satisfies CS but not ETOR.

The rule f given by \(f_{i}(\mathcal {A})=\dfrac{g(\mathcal {A})}{|N|+1}\), \(\forall i \in N_{0}\) satisfies ETOR but not CS.

The property of merging-splitting proofness has been adapted to several classes of problems and used, in combination with other properties, to characterize interesting rules. We give some examples. In bankruptcy problems, O’Neill (1982) characterizes the proportional rule. In minimum cost spanning tree problems, Gómez-Rúa and Vidal-Puga (2011) characterize Bird’s rule (Bird 1976). In museum pass problems, Bergantiños and Moreno-Ternero (2015) characterize a rule based on the Shapley value.

In the next theorem we characterize the rules that satisfy core selection and merging-splitting proofness. These rules have a nice interpretation. Firm 0 receives \(I_{0}(\mathcal {A})\) for sure. Moreover, firm 0 receives a transfer \(x_{0}(\mathcal {A})\) from the firms in the optimal region \(k^{*}\). The transfer made by each firm located in region \(k^{*}\) is proportional to its benefit \(b_{i}^{k^{*}}\).
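This family of rules can be sketched as follows, on hypothetical data (\(b_{0}^{k^{*}}=2\), \(b_{1}^{k^{*}}=3\), \(b_{2}^{k^{*}}=9\), \(I_{0}(\mathcal {A})=4\); invented numbers): each rule in the family is determined by the extra transfer \(x_{0}\) to firm 0, and \(x_{0}=0\) recovers the WOL rule.

```python
def family_rule(b0, b, I0, x0):
    """Rules of the family characterized below. Here b = {i: b_i^{k*}}
    collects the benefits of the firms in the unique optimal region,
    b0 = b_0^{k*}, and x0 in [0, g - I0] is the transfer to firm 0."""
    g = b0 + sum(b.values())
    assert 0.0 <= x0 <= g - I0
    f = {0: I0 + x0}
    for i, bi in b.items():
        # Local firms share the remaining surplus proportionally to b_i.
        f[i] = bi / sum(b.values()) * (g - I0 - x0)
    return f

# Hypothetical data: b_0^{k*} = 2, b_1 = 3, b_2 = 9, I_0 = 4, so g = 14.
f = family_rule(2.0, {1: 3.0, 2: 9.0}, 4.0, x0=2.0)
assert abs(sum(f.values()) - 14.0) < 1e-9   # efficiency: payoffs sum to g
```

Setting `x0=0.0` in the call above yields the WOL payoffs for the same data.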

We now introduce a definition, which will be used in the statement of the theorem. We say that \(\mathcal {A}=(0,N,P,b)\) and \(\mathcal {A}'=(0,N',P',b')\) are equivalent for firm 0 when the set of regions is the same and for each region two conditions hold. First, the benefit for firm 0 of opening a plant is the same in both problems. Second, the aggregate benefit in each region is the same for both problems. Formally, for each \(k\in R\), \(b_{0}^{k}=b'^{k}_{0}\) and \(b^{k}(P_{k})=b'^{k}(P'_{k})\).

Theorem 7

A rule f satisfies core selection and merging-splitting proofness if and only if for each \(\mathcal {A}\) there is \(x_{0}(\mathcal {A}) \in [0,g(\mathcal {A})-I_{0}(\mathcal {A})]\) such that

$$\begin{aligned} f_{i}(\mathcal {A})=\left\{ \begin{array}{ll} I_{0}(\mathcal {A})+x_{0}(\mathcal {A}), &{}\hbox { if } i=0 \\ \dfrac{b_{i}^{k^{*}}}{b^{k^{*}}(P_{k^{*}})}(g(\mathcal {A}) -I_{0}(\mathcal {A})-x_{0}(\mathcal {A})), &{}\hbox { if } i\in P_{k^{*}} \\ 0, &{} \hbox { otherwise}, \end{array} \right. \end{aligned}$$

with \(x_{0}(\mathcal {A})=x_{0}(\mathcal {A}')\) when \(\mathcal {A}\) and \(\mathcal {A}'\) are equivalent for firm 0.

Proof

Let f be as in the statement of the theorem. By Theorem 1, f satisfies CS. Let \(\mathcal {A}\) and \(\mathcal {A}'\) be as in the definition of MSP. Thus, \(\mathcal {A}\) and \(\mathcal {A}'\) are equivalent for firm 0. Hence, \(x_{0}(\mathcal {A})=x_{0}(\mathcal {A}')\). Then, f satisfies MSP.

Conversely, let f be a rule satisfying CS and MSP. We first give an intuitive idea of the proof. We start by proving that f also satisfies SYM (this result will be used several times in the rest of the proof). Later on, we define \(x_{0}(\mathcal {A})=f_{0}(\mathcal {A})-I_{0}(\mathcal {A})\) and prove that \(x_{0}(\mathcal {A})\) satisfies the conditions of the statement. Finally, we prove that f is as desired by considering several cases. Some of them are easy, but one is quite complicated (the idea of the proof in that case will be explained later).

We prove that f also satisfies SYM. Let \(i,j \in P_{k}\) as in the definition of SYM. Consider the following problems:

  • \(\mathcal {A}^{ip}\) obtained from \(\mathcal {A}\) by splitting firm i into two, \(p_{1}\) and \(p_{2}\), with benefit \(b^{k}_{p_{1}}=b^{k}_{p_{2}}=\dfrac{b^{k}_{i}}{2}\).

  • \(\mathcal {A}^{ip,jq}\) obtained from \(\mathcal {A}^{ip}\) by splitting firm j into two, \(q_{1}\) and \(q_{2}\) with benefit \(b^{k}_{q_{1}}=b^{k}_{q_{2}}=\dfrac{b^{k}_{i}}{2}\).

  • \(\mathcal {A}^{jp}\) obtained from \(\mathcal {A}\) by splitting firm j into two, \(p_{1}\) and \(p_{2}\), with benefit \(b^{k}_{p_{1}}=b^{k}_{p_{2}}=\dfrac{b^{k}_{i}}{2}\).

  • \(\mathcal {A}^{jp,iq}\) obtained from \(\mathcal {A}^{jp}\) by splitting firm i into two, \(q_{1}\) and \(q_{2}\) with benefit \(b^{k}_{q_{1}}=b^{k}_{q_{2}}=\dfrac{b^{k}_{i}}{2}\).

Since the sequence of problems \(\mathcal {A}\), \(\mathcal {A}^{ip}\) and \(\mathcal {A}^{ip,jq}\) is under the hypothesis of MSP, it follows that

$$\begin{aligned} f_{i}(\mathcal {A})=f_{p_{1}}(\mathcal {A}^{ip})+f_{p_{2}} (\mathcal {A}^{ip})=f_{p_{1}}(\mathcal {A}^{ip,jq})+f_{p_{2}}(\mathcal {A}^{ip,jq}). \end{aligned}$$

Since the sequence of problems \(\mathcal {A}\), \(\mathcal {A}^{jp}\), and \(\mathcal {A}^{jp,iq}\) is under the hypothesis of MSP it follows that

$$\begin{aligned} f_{j}(\mathcal {A})=f_{p_{1}}(\mathcal {A}^{jp})+f_{p_{2}} (\mathcal {A}^{jp})=f_{p_{1}}(\mathcal {A}^{jp,iq})+f_{p_{2}}(\mathcal {A}^{jp,iq}). \end{aligned}$$

Since the problems \(\mathcal {A}^{ip,jq}\) and \(\mathcal {A}^{jp,iq}\) coincide, it follows that \(f_{i}(\mathcal {A})=f_{j}(\mathcal {A})\). Hence, f satisfies SYM.

Assume that \(\mathcal {A}\) has several optimal regions. Since f satisfies CS, it can be deduced from Theorem 1 that \(f_{0}(\mathcal {A})=g(\mathcal {A})\) and \(f_{i}(\mathcal {A})=0\), for all \(i \in N\). Thus, \(x_{0}(\mathcal {A})=0\).

Now assume that \(\mathcal {A}\) has a unique optimal region \(k^{*}\). If \(P_{k^{*}}= \emptyset\), then \(g(\mathcal {A})=I_{0}(\mathcal {A})=b_{0}^{k^{*}}\). Since f satisfies CS, Theorem 1 yields \(f_{0}(\mathcal {A})=g(\mathcal {A})\), \(f_{i}(\mathcal {A})=0, \forall i \in N\), and \(x_{0}(\mathcal {A})=0\).

Suppose that \(P_{k^{*}} \ne \emptyset\). We assume, without loss of generality, that \(P_{k^{*}}=\{1,...,p\}\). Since f satisfies CS, by Theorem 1 it can be deduced that

$$\begin{aligned} f(\mathcal {A})=(I_{0}(\mathcal {A})+x_{0},x_{1},...,x_{p},0,...,0) \end{aligned}$$

where \(0\le x_{i}\le g(\mathcal {A})-I_{0}(\mathcal {A})\) for each \(i=0,1,...,p\). Notice that \(x_{0}=f_{0}(\mathcal {A})-I_{0}(\mathcal {A})\).

Define \(x_{0}(\mathcal {A})=f_{0}(\mathcal {A})-I_{0}(\mathcal {A})\). We now prove that \(x_{0}(\mathcal {A})\) satisfies the conditions of the statement of the theorem. We have argued above that \(x_{0}(\mathcal {A}) \in [0,g(\mathcal {A})-I_{0}(\mathcal {A})]\). Suppose that \(\mathcal {A}\) and \(\mathcal {A}'\) are equivalent for firm 0. Thus, for each \(k \in R\),

$$\begin{aligned} b^{k}(P_{k}\cup \{0\})=b'^{k}(P'_{k}\cup \{0\}). \end{aligned}$$

Since \(k^{*}\) is the unique optimal region in \(\mathcal {A}\), \(k^{*}\) is the unique optimal region in \(\mathcal {A}'\). Then,

$$\begin{aligned} s(\mathcal {A})=\max _{k\in R\backslash \{k^{*}\}} \left\{ b^{k}(P_{k}\cup \{0\})\right\} =\max _{k\in R\backslash \{k^{*}\}} \left\{ b'^{k}(P'_{k}\cup \{0\})\right\} =s(\mathcal {A}'). \end{aligned}$$

Hence,

$$\begin{aligned} I_{0}(\mathcal {A})=\max \left\{ b_{0}^{k^{*}},s(\mathcal {A})\right\} =\max \left\{ b'^{k^{*}}_{0},s(\mathcal {A}')\right\} =I_{0}(\mathcal {A}'). \end{aligned}$$

Notice that if \(\mathcal {A}\) and \(\mathcal {A}'\) are equivalent for firm 0, then \(\mathcal {A}\) can be obtained from \(\mathcal {A}'\) via a sequence of merging-splitting steps (\(2r\) steps, in fact). In the first r steps we merge all the firms of each region given by \(\mathcal {A}'\) into a single firm. In the last r steps we split the unique firm of each region into the firms of that region given by \(\mathcal {A}\). Since f satisfies MSP, \(f_{0}(\mathcal {A})=f_{0}(\mathcal {A}')\).

Thus, \(x_{0}(\mathcal {A})=f_{0}(\mathcal {A})-I_{0}(\mathcal {A})=f_{0} (\mathcal {A}')-I_{0}(\mathcal {A}')=x_{0}(\mathcal {A}')\).

It only remains to prove that for each \(i \in P_{k^{*}}\), \(x_{i}\) is as in the statement of the theorem. We first give an intuitive explanation of this proof using a problem \(\mathcal {A}\) where \(k^{*}= 1\), \(P_{1}=\{ 1,2\}\), \(b_{1}^{1}=1.1\), \(b_{2}^{1}=2.2\) and \(\varepsilon =0.5\).

  1. We consider the problem \(\mathcal {A}^{1}\) obtained from \(\mathcal {A}\) by splitting firm 1 into three firms: two with benefit 0.5 and one with benefit 0.1. We then consider the problem \(\mathcal {A}^{2}\) obtained from \(\mathcal {A}^{1}\) by splitting firm 2 into five firms: four with benefit 0.5 and one with benefit 0.2.

  2. Since f satisfies SYM, we can define \(x^{\varepsilon }\) as the allocation given by f to each of the six firms with benefit 0.5 in \(\mathcal {A}^{2}\). Moreover, \(x_{1}^{\varepsilon }\) denotes the allocation given by f to the firm with benefit 0.1 in \(\mathcal {A}^{2}\), and \(x_{2}^{\varepsilon }\) denotes the allocation given by f to the firm with benefit 0.2 in \(\mathcal {A}^{2}\).

  3. By MSP, \(f_{1}(\mathcal {A})=2x^{\varepsilon }+x_{1}^{\varepsilon }\) and \(f_{2}(\mathcal {A})=4x^{\varepsilon }+x_{2}^{\varepsilon }\).

  4. \(0 \le x_{1}^{\varepsilon } \le x^{\varepsilon }\) and \(0 \le x_{2}^{\varepsilon } \le x^{\varepsilon }\).

  5. For each \(i=1,2\), we find lower and upper bounds for \(f_{i}(\mathcal {A})\): \(L_{i}^{\varepsilon }(\mathcal {A}) \le f_{i}(\mathcal {A}) \le U_{i}^{\varepsilon }(\mathcal {A})\).

  6. Finally,

    $$\begin{aligned} \lim _{\varepsilon \rightarrow 0} L_{i}^{\varepsilon }(\mathcal {A})=\lim _{\varepsilon \rightarrow 0} U_{i}^{\varepsilon }(\mathcal {A})=\dfrac{b_{i}^{k^{*}}}{b^{k^{*}} (P_{k^{*}})}(g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})). \end{aligned}$$
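The arithmetic of this intuitive example can be checked numerically. The following is a minimal Python sketch, not part of the formal argument: the surplus \(S=g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})\) is a hypothetical placeholder normalized to 1, and the helper names `split` and `bounds` are ours. It reproduces the splitting of steps 1-5 and shows the bounds \(L_{i}^{\varepsilon }\) and \(U_{i}^{\varepsilon }\) closing in on the weighted share \(b_{i}^{k^{*}}/b^{k^{*}}(P_{k^{*}})\) as \(\varepsilon \rightarrow 0\):

```python
# Sketch of the splitting argument for the example above: region k* = 1,
# P_1 = {1, 2}, b_1 = 1.1, b_2 = 2.2. The surplus S = g(A) - I0(A) - x0(A)
# is a hypothetical placeholder, normalized here to 1.

def split(b, eps):
    """Split a benefit b into n pieces of size eps plus a remainder in [0, eps]."""
    n = int(b / eps)
    return n, b - n * eps

def bounds(i, eps, benefits, S=1.0):
    """Lower/upper bounds L_i, U_i on f_i(A), as derived from (1) and (2)."""
    p = len(benefits)
    rems = sum(split(b, eps)[1] for b in benefits.values())
    total = sum(benefits.values())              # b^{k*}(P_{k*})
    n_i, r_i = split(benefits[i], eps)
    lower = (benefits[i] - r_i) / (p * eps + total - rems) * S
    upper = (benefits[i] - r_i + eps) / (total - rems) * S
    return lower, upper

benefits = {1: 1.1, 2: 2.2}
assert split(1.1, 0.5)[0] == 2                  # two firms of 0.5 (plus one of 0.1)
assert split(2.2, 0.5)[0] == 4                  # four firms of 0.5 (plus one of 0.2)

# Both bounds approach b_1 / (b_1 + b_2) = 1/3 of the surplus as eps -> 0.
for eps in (0.5, 0.05, 0.005):
    print(eps, bounds(1, eps, benefits))
```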

We now start the formal proof. For any \(\varepsilon >0\), it is possible to find \(\{n_{i}^{\varepsilon }\}_{i\in P_{k^{*}}}\), \(b^{\varepsilon }\), and \(\{b_{i}^{\varepsilon }\}_{i\in P_{k^{*}}}\) such that

  • \(b^{\varepsilon }\in \mathbb {R}\) with \(0<b^{\varepsilon } \le \varepsilon\).

  • For each \(i\in P_{k^{*}}\), \(n_{i}^{\varepsilon }\in \mathbb {N}\), \(0\le b_{i}^{\varepsilon }\le b^{\varepsilon }\) and \(b_{i}^{k^{*}}=n_{i}^{\varepsilon }b^{\varepsilon }+b_{i}^{\varepsilon }\).

For every \(h=1,...,p\), let \(\mathcal {A}^{h}\) be the problem obtained from \(\mathcal {A}\) by splitting each firm \(i=1,...,h\) into \(n_{i}^{\varepsilon }+1\) firms: the first \(n_{i}^{\varepsilon }\) firms with benefit \(b_{i^{\ell }}^{hk^{*}}=b^{\varepsilon }\), and firm \(n_{i}^{\varepsilon }+1\) with benefit \(b_{i^{n_{i}^{\varepsilon }+1}}^{hk^{*}}=b_{i}^{\varepsilon }\).

Formally, for each \(h=1,...,p\), let \(\mathcal {A}^{h}=(0,N^{h},P^{h},b^{h})\) be a problem such that

  • \(N^{h}=(N \backslash \{1,...,h\}) \cup \left( \cup _{i=1}^{h} \{i^{1},...,i^{n_{i}^{\varepsilon }+1}\} \right)\).

  • \(P^{h}=\{P^{h}_{1},...,P^{h}_{r}\}\) where \(P^{h}_{k^{*}}=(P_{k^{*}} \backslash \{1,...,h\}) \cup \left( \cup _{i=1}^{h} \{i^{1},...,i^{n_{i}^{\varepsilon }+1}\} \right)\) and \(P^{h}_{k}=P_{k}\) for all \(k \ne k^{*}\).

  • \(b_{j}^{hk}=b_{j}^{k}\), for all \(j \in N \backslash \{1,...,h\}\) and \(k \in R\). Given \(i \in \{1,...,h\}\) and \(\ell =1,...,n_{i}^{\varepsilon }+1\), \(b_{i^{\ell }}^{hk}=0\) if \(k \ne k^{*}\), \(b_{i^{\ell }}^{hk^{*}}=b^{\varepsilon }\) if \(\ell =1,...,n_{i}^{\varepsilon }\) and \(b_{i^{\ell }}^{hk^{*}}=b_{i}^{\varepsilon }\) if \(\ell =n_{i}^{\varepsilon }+1\).

Notice that in the sequence of problems \(\mathcal {A} \rightarrow \mathcal {A}^{1} \rightarrow \cdots \rightarrow \mathcal {A}^{p}\), MSP can be applied to each pair of consecutive problems.

For \(\mathcal {A}\) and \(\mathcal {A}^{1}\),

  • \(f_{i}(\mathcal {A})=f_{i}(\mathcal {A}^{1})\), \(\forall i \in N_{0} \backslash \{1\}\) and

  • \(f_{1}(\mathcal {A})=\displaystyle \sum _{\ell =1}^{n_{1}^{\varepsilon }+1}f_{1^{\ell }}(\mathcal {A}^{1})\).

For \(\mathcal {A}^{1}\) and \(\mathcal {A}^{2}\),

  • \(f_{i}(\mathcal {A}^{1})=f_{i}(\mathcal {A}^{2})\), \(\forall i \in N_{0} \backslash \{1,2\}\),

  • \(f_{1^{\ell }}(\mathcal {A}^{1})=f_{1^{\ell }}(\mathcal {A}^{2})\), \(\forall \ell =1,...,n_{1}^{\varepsilon }+1\) and

  • \(f_{2}(\mathcal {A}^{1})=\displaystyle \sum _{\ell =1}^{n_{2}^{\varepsilon }+1}f_{2^{\ell }}(\mathcal {A}^{2})\).

Iterating the previous argument, the following is obtained for \(\mathcal {A}^{p-1}\) and \(\mathcal {A}^{p}\),

  • \(f_{i}(\mathcal {A}^{p-1})=f_{i}(\mathcal {A}^{p})\), \(\forall i \in N_{0} \backslash P_{k^{*}}\),

  • \(f_{i^{\ell }}(\mathcal {A}^{p-1})=f_{i^{\ell }}(\mathcal {A}^{p})\), \(\forall i=1,...,p-1\), \(\ell =1,...,n_{i}^{\varepsilon }+1\) and

  • \(f_{p}(\mathcal {A}^{p-1})=\displaystyle \sum _{\ell =1}^{n_{p}^{\varepsilon }+1}f_{p^{\ell }}(\mathcal {A}^{p})\).

Therefore,

  • \(f_{1}(\mathcal {A})=\displaystyle \sum _{\ell =1}^{n_{1}^{\varepsilon }+1}f_{1^{\ell }}(\mathcal {A}^{1})=\displaystyle \sum _{\ell =1}^{n_{1}^{\varepsilon }+1}f_{1^{\ell }}(\mathcal {A}^{2})=\cdots =\displaystyle \sum _{\ell =1}^{n_{1}^{\varepsilon }+1}f_{1^{\ell }}(\mathcal {A}^{p})\),

  • \(f_{2}(\mathcal {A})=f_{2}(\mathcal {A}^{1})=\displaystyle \sum _{\ell =1}^{n_{2}^{\varepsilon }+1}f_{2^{\ell }}(\mathcal {A}^{2})=\cdots =\displaystyle \sum _{\ell =1}^{n_{2}^{\varepsilon }+1}f_{2^{\ell }}(\mathcal {A}^{p})\),

    \(\vdots\)

  • \(f_{p}(\mathcal {A})=f_{p}(\mathcal {A}^{1})=\cdots =f_{p}(\mathcal {A}^{p-1})=\displaystyle \sum _{\ell =1}^{n_{p}^{\varepsilon }+1}f_{p^{\ell }}(\mathcal {A}^{p})\).

From these equations and SYM, it can be concluded that, for any \(i \in P_{k^{*}}\),

$$\begin{aligned} f_{i}(\mathcal {A})=\sum _{\ell =1}^{n_{i}^{\varepsilon }+1}f_{i^{\ell }} (\mathcal {A}^{p})=\sum _{\ell =1}^{n_{i}^{\varepsilon }}f_{i^{\ell }} (\mathcal {A}^{p})+f_{i^{n_{i}^{\varepsilon }+1}}(\mathcal {A}^{p})= n_{i}^{\varepsilon }f_{i^{1}}(\mathcal {A}^{p})+f_{i^{n_{i}^{\varepsilon }+1}} (\mathcal {A}^{p}). \end{aligned}$$

Notice also that, by SYM, \(f_{i^{1}}(\mathcal {A}^{p})=f_{j^{1}}(\mathcal {A}^{p})\) for all \(i,j \in P_{k^{*}}\). Denote \(x^{\varepsilon }=f_{i^{1}}(\mathcal {A}^{p})\) and \(x_{i}^{\varepsilon }=f_{i^{n_{i}^{\varepsilon }+1}}(\mathcal {A}^{p})\). Thus, for all \(i\in P_{k^{*}}\),

$$\begin{aligned} f_{i}(\mathcal {A})=n^{\varepsilon }_{i}x^{\varepsilon }+x_{i}^{\varepsilon }. \end{aligned}$$

Fix \(i \in P_{k^{*}}\). Let \(\mathcal {A}^{\alpha }\) be the problem obtained from \(\mathcal {A}^{p}\) by splitting firm \(i^{1}\) into two firms: \(\alpha ^{1}\) with \(b_{\alpha ^{1}}^{\alpha k^{*}}=b_{i}^{\varepsilon }\) and \(\alpha ^{2}\) with \(b_{\alpha ^{2}}^{\alpha k^{*}}=b^{\varepsilon }-b_{i}^{\varepsilon }\). By MSP and because f is non-negative (by CS),

$$\begin{aligned} f_{\alpha ^{1}}(\mathcal {A}^{\alpha }) \le f_{\alpha ^{1}}(\mathcal {A}^{\alpha })+f_{\alpha ^{2}} (\mathcal {A}^{\alpha })=f_{i^{1}}(\mathcal {A}^{p}) =x^{\varepsilon }. \end{aligned}$$

Moreover, \(x_{i}^{\varepsilon }=f_{i^{n_{i}^{\varepsilon }+1}}(\mathcal {A}^{p}) =f_{i^{n_{i}^{\varepsilon }+1}}(\mathcal {A}^{\alpha })\) by MSP and, since both firms have benefit \(b_{i}^{\varepsilon }\), \(f_{i^{n_{i}^{\varepsilon }+1}}(\mathcal {A}^{\alpha })=f_{\alpha ^{1}} (\mathcal {A}^{\alpha })\) by SYM. Hence, \(0 \le x_{i}^{\varepsilon } \le x^{\varepsilon }\) for all \(i \in P_{k^{*}}\).

Since

$$\begin{aligned} g(\mathcal {A})=f_{0}(\mathcal {A})+\sum _{j\in P_{k^{*}}}f_{j}(\mathcal {A})=I_{0}(\mathcal {A})+x_{0}(\mathcal {A}) + \sum _{j\in P_{k^{*}}}n_{j}^{\varepsilon }x^{\varepsilon } + \sum _{j\in P_{k^{*}}} x_{j}^{\varepsilon } \end{aligned}$$

and \(0 \le x_{j}^{\varepsilon } \le x^{\varepsilon }\) for all \(j \in P_{k^{*}}\), it can be deduced that

$$\begin{aligned} g(\mathcal {A}) \ge I_{0}(\mathcal {A})+x_{0}(\mathcal {A})+\sum _{j\in P_{k^{*}}}n_{j}^{\varepsilon }x^{\varepsilon } \end{aligned}$$
(1)

and

$$\begin{aligned} g(\mathcal {A}) \le I_{0}(\mathcal {A})+x_{0}(\mathcal {A})+\sum _{j\in P_{k^{*}}}(n_{j}^{\varepsilon }+1)x^{\varepsilon }. \end{aligned}$$
(2)

By (1) and since \(\sum \nolimits _{j\in P_{k^{*}}}n_{j}^{\varepsilon }=\left( b^{k^{*}}(P_{k^{*}})-\sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}\right) /b^{\varepsilon }\),

$$\begin{aligned} x^{\varepsilon } \le \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})}{\sum \nolimits _{j\in P_{k^{*}}}n_{j}^{\varepsilon }}=\dfrac{(g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0} (\mathcal {A}))b^{\varepsilon }}{b^{k^{*}}(P_{k^{*}})-\sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}}. \end{aligned}$$

Therefore, for all \(i \in P_{k^{*}}\),

$$\begin{aligned} \begin{aligned} f_{i}(\mathcal {A})=&n_{i}^{\varepsilon }x^{\varepsilon }+x_{i}^{\varepsilon } \le (n_{i}^{\varepsilon }+1)x^{\varepsilon } \\ \le&(n_{i}^{\varepsilon }+1)\left( \dfrac{(g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A}))b^{\varepsilon }}{b^{k^{*}}(P_{k^{*}})-\sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}}\right) \\ =&\dfrac{b^{k^{*}}_{i}-b_{i}^{\varepsilon }+b^{\varepsilon }}{b^{k^{*}}(P_{k^{*}})-\sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}}(g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})). \end{aligned} \end{aligned}$$

By (2),

$$\begin{aligned} x^{\varepsilon } \ge \dfrac{g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})}{\sum \nolimits _{j\in P_{k^{*}}}(n_{j}^{\varepsilon }+1)}=\dfrac{(g(\mathcal {A})-I_{0}(\mathcal {A})- x_{0}(\mathcal {A}))b^{\varepsilon }}{pb^{\varepsilon }+b^{k^{*}}(P_{k^{*}})- \sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}}. \end{aligned}$$

Therefore, for all \(i \in P_{k^{*}}\),

$$\begin{aligned} \begin{aligned} f_{i}(\mathcal {A}) & = n_{i}^{\varepsilon }x^{\varepsilon }+x_{i}^{\varepsilon } \ge n_{i}^{\varepsilon }x^{\varepsilon } \\ & \ge n_{i}^{\varepsilon }\left( \dfrac{(g(\mathcal {A})-I_{0} (\mathcal {A})-x_{0}(\mathcal {A}))b^{\varepsilon }}{pb^{\varepsilon } +b^{k^{*}}(P_{k^{*}})-\sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}}\right) \\ & = \dfrac{b^{k^{*}}_{i}-b_{i}^{\varepsilon }}{pb^{\varepsilon }+b^{k^{*}} (P_{k^{*}})-\sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}}(g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})). \end{aligned} \end{aligned}$$

Then, for all \(i \in P_{k^{*}}\),

$$\begin{aligned} \begin{aligned}&\dfrac{b^{k^{*}}_{i}-b_{i}^{\varepsilon }}{pb^{\varepsilon }+ b^{k^{*}}(P_{k^{*}})-\sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}} (g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})) \\&\le f_{i}(\mathcal {A}) \le \dfrac{b^{k^{*}}_{i}-b_{i}^{\varepsilon }+b^{\varepsilon }}{b^{k^{*}} (P_{k^{*}})-\sum \nolimits _{j \in P_{k^{*}}} b^{\varepsilon }_{j}}(g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})). \end{aligned} \end{aligned}$$

Taking the limit as \(\varepsilon\) tends to zero in the previous inequality, we obtain, for all \(i \in P_{k^{*}}\),

$$\begin{aligned} \dfrac{b^{k^{*}}_{i}}{b^{k^{*}}( P_{k^{*}})}(g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})) \le f_{i}(\mathcal {A}) \le \dfrac{b^{k^{*}}_{i}}{b^{k^{*}}(P_{k^{*}})}(g(\mathcal {A})-I_{0} (\mathcal {A})-x_{0}(\mathcal {A})). \end{aligned}$$

Hence,

$$\begin{aligned} f_{i}(\mathcal {A})=\dfrac{b^{k^{*}}_{i}}{b^{k^{*}} (P_{k^{*}})}(g(\mathcal {A})-I_{0}(\mathcal {A})-x_{0}(\mathcal {A})) \end{aligned}$$

and the desired expression is obtained. \(\square\)
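To illustrate the characterized family of rules, here is a minimal Python sketch for a hypothetical two-region problem (all region names and benefit values below are illustrative, not from the text): it computes \(g(\mathcal {A})\), \(I_{0}(\mathcal {A})\), and the allocation of Theorem 7 for a given transfer \(x_{0} \in [0,g(\mathcal {A})-I_{0}(\mathcal {A})]\), and checks that the allocation is efficient. Setting \(x_{0}=0\) yields the WOL rule.

```python
# Hypothetical agglomeration problem: b[k][i] is the benefit of firm i in
# region k if firm 0 opens its plant in region k; b0[k] is firm 0's own benefit.
b0 = {"A": 3.0, "B": 2.0}
b = {"A": {1: 1.1, 2: 2.2}, "B": {3: 0.5}}

# Aggregate benefit of opening in each region, and the optimal value g(A).
agg = {k: b0[k] + sum(b[k].values()) for k in b0}
k_star = max(agg, key=agg.get)          # unique optimal region here: "A"
g = agg[k_star]

# I_0(A) = max of firm 0's stand-alone benefit in k* and the second-best
# aggregate s(A), as in the proof above.
s = max(v for k, v in agg.items() if k != k_star)
I0 = max(b0[k_star], s)

def rule(x0):
    """Allocation of Theorem 7 for a transfer x0 in [0, g - I0]:
    firm 0 gets I0 + x0; firms in k* share the rest proportionally."""
    surplus = g - I0 - x0
    total = sum(b[k_star].values())     # b^{k*}(P_{k*})
    f = {0: I0 + x0}
    f.update({i: bi / total * surplus for i, bi in b[k_star].items()})
    return f

f = rule(x0=0.0)                        # x0 = 0 gives the WOL rule
assert abs(sum(f.values()) - g) < 1e-9  # efficiency: allocations sum to g(A)
```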

Remark 2

The properties used in Theorem 7 are independent.

The EOL rule satisfies CS but not MSP.

The rule f given by

$$\begin{aligned} f_{i}(\mathcal {A})=\left\{ \begin{array}{ll} I_{0}(\mathcal {A}), &{} \hbox { if } i=0 \\ \dfrac{b_{i}^{k(i)}}{\sum \nolimits _{j \in N}b_{j}^{k(j)}}(g(\mathcal {A})-I_{0}(\mathcal {A})), &{}\hbox { if } i \in N \end{array} \right. \end{aligned}$$

satisfies MSP but not CS.

The next corollary of Theorem 7 provides a characterization of the WOL rule.

Corollary 1

Of all the rules satisfying core selection and merging-splitting proofness, the weighted optimal location rule is the one in which the transfer received by firm 0 is zero.

The proof of this corollary is immediate: WOL coincides with the rule characterized in Theorem 7 when \(x_{0}(\mathcal {A})=0\) for all \(\mathcal {A}\).

6 Concluding remarks

We introduce a new type of location problem that incorporates a widely studied economic phenomenon: agglomeration economies. Once the optimal region has been determined, the main issue is to provide an appropriate compensation scheme for the firms involved in the problem. We analyze this problem using cooperative game theory. We first prove that the core is non-empty and characterize all the allocations in the core. We then consider a rule, called the egalitarian optimal location rule, which always selects an element of the core. We also prove that this rule can be obtained as the \(\tau\)-value, the nucleolus, or the per capita nucleolus of the cooperative game, and we provide an axiomatic characterization of it. Finally, we propose a rule called the weighted optimal location rule, which is also characterized.

A possible variant of the model is to consider the case of negative externalities, namely to allow \(b_{i}^{k}<0\). Our results cannot be extended to this more general model. For instance, the core of the associated game may be empty.