
The price of fairness with the extended Perles–Maschler solution


Abstract

In the Nash bargaining problem, due to players' fairness concerns, an implementable solution should satisfy certain axioms or characterizations rather than simply maximize the sum of the players' utilities. Such a solution can result in the so-called price of fairness, because of the reduction in the sum of utilities of all players. An important issue is to quantify the loss of system efficiency under axiomatic solutions through the price of fairness. Based on the Perles–Maschler solution of the two-player Nash bargaining problem, this paper studies the extended Perles–Maschler solution of the multi-player Nash bargaining problem. We give lower bounds on three measures of system efficiency for this solution and show that these lower bounds are asymptotically tight.



Acknowledgments

This research has been supported by the National Natural Science Foundation of China (Projects 71031005 and 71210002).


Corresponding author

Correspondence to Jinxing Xie.

Appendix

Proof of Theorem 4

For a given \(\mathbf {x}\) satisfying the condition in the theorem, it is clear that \(g^i(\mathbf {x}_{-i}|A)>x_i\) because \(\mathbf {x}\in A\) and \(\mathbf {x} \notin OP (A)\). Consider the following matrix

$$\begin{aligned} \mathbf {J_*}=diag\left\{ g^1(\mathbf {x}_{-1}|A)-x_1, \ldots , g^n(\mathbf {x}_{-n}|A)-x_n\right\} . \end{aligned}$$
(47)

Then define function \(\bar{\mathbf {g}}(t,\mathbf {k})=(\bar{g}^1(t,\mathbf {k}),\ldots ,\bar{g}^n(t, \mathbf {k}))^T=J_*^{-1}(\mathbf {g}(\mathbf {x}+tJ_*\mathbf {k}|A)-\mathbf {x})\) for \(t\in [0,+\infty )\) and \(\mathbf {k}\in \mathbb {R}_+^n\). (Note that \(\bar{\mathbf {g}}(t,\mathbf {k})\) is well defined for \(\mathbf {x}+tJ_*\mathbf {k}\in A\).)

Since \(\mathbf {d}^\mathbf {x}(\mathbf {h}^*(\mathbf {x}|A)|A)=\lambda ^*(\mathbf {x}|A) \cdot \mathbf {h}^*(\mathbf {x}|A)\), we have

$$\begin{aligned} \begin{aligned}&\lambda ^*(\mathbf {x}|A)\cdot \mathbf {h}^*(\mathbf {x}|A)=\lim _{t>0,t\rightarrow 0} \frac{\mathbf {g}(\mathbf {x}+t\mathbf {h}^* (\mathbf {x}|A)|A)-\mathbf {g}(\mathbf {x}|A)}{t}. \end{aligned} \end{aligned}$$
(48)

Hence, it holds that

$$\begin{aligned} \begin{aligned}&\lambda ^*(\mathbf {x}|A) \cdot J_*^{-1}\cdot \mathbf {h}^*(\mathbf {x}|A)=\lim _{t>0,t\rightarrow 0} \frac{J_*^{-1}\mathbf {g}(\mathbf {x}+tJ_*J_*^{-1} \mathbf {h}^*(\mathbf {x}|A)|A)-J_*^{-1}\mathbf {g}(\mathbf {x}|A)}{t}, \end{aligned} \end{aligned}$$

which leads to

$$\begin{aligned} \begin{aligned}&\lambda ^*(\mathbf {x}|A)\cdot \mathbf {k}^*= \lim _{t>0,t\rightarrow 0}\frac{\bar{\mathbf {g}}(t,\mathbf {k}^*) -\bar{\mathbf {g}}(0,\mathbf {k}^*)}{t}, \end{aligned} \end{aligned}$$
(49)

where \(\mathbf {k}^*=(k^*_1,\ldots ,k^*_n)^T=J_*^{-1}\cdot \mathbf {h}^*(\mathbf {x}|A)\).

From Theorem 1, \(g^i(\cdot |A)\) is concave on \(A^i\), so \(\mathbf {e}^T\cdot \mathbf {g}(\mathbf {x}|A)\) is concave in \(\mathbf {x}\) on \(A\). Therefore, \(\mathbf {e}^T\cdot \bar{\mathbf {g}}(t,\mathbf {k})\) is concave in \(\mathbf {k}\) on the set of \(\mathbf {k}\in \mathbb {R}_+^n\) with \(\mathbf {x}+tJ_*\mathbf {k}\in A\) (i.e., where \(\bar{\mathbf {g}}(t,\mathbf {k})\) is well defined), and it is also concave in \(t\) on the set of \(t\in \mathbb {R}_+\) with \(\mathbf {x}+tJ_*\mathbf {k}\in A\).

Since \(\mathbf {x} \notin OP (A)\), for sufficiently small \(t\) we have \(\mathbf {x}+tJ_*\mathbf {k}^*\in A\) and \(\mathbf {x}+t(\mathbf {e}^TJ_*\mathbf {k}^*)\cdot \mathbf {e}^i\in A\) for all \(i=1,2,\ldots ,n\). Hence, from (49), together with the concavity of \(\mathbf {e}^T\cdot \bar{\mathbf {g}}(t,\mathbf {k})\) in \(\mathbf {k}\) wherever it is well defined, we have

$$\begin{aligned} \begin{aligned}&\lambda ^*(\mathbf {x}|A)\cdot \mathbf {e}^T\cdot \mathbf {k}^*\\&=\lim _{t>0,t\rightarrow 0}\frac{\mathbf {e}^T\cdot \bar{\mathbf {g}} (t,\mathbf {k}^*)-\mathbf {e}^T\cdot \bar{\mathbf {g}}(0,\mathbf {k}^*)}{t}\\&\ge \lim _{t>0,t\rightarrow 0}\frac{\sum _{i=1}^n \frac{k_i^*}{\mathbf {e}^T\cdot \mathbf {k}^*}\left( \mathbf {e}^T\cdot \bar{\mathbf {g}} (t,(\mathbf {e}^T\cdot \mathbf {k}^*)\mathbf {e}^i)-\mathbf {e}^T\cdot \bar{\mathbf {g}}(0,\mathbf {k}^*)\right) }{t}. \end{aligned} \end{aligned}$$
(50)

Since the last expression in (50) is well defined for all \(t\in (0,(\mathbf {e}^T\cdot \mathbf {k}^*)^{-1}]\), \(\bar{\mathbf {g}}(t,(\mathbf {e}^T\cdot \mathbf {k}^*)\mathbf {e}^i)\) is concave in \(t\) on this interval. Hence, from (50) we have

$$\begin{aligned} \begin{aligned}&\lambda ^*(\mathbf {x}|A)\cdot \mathbf {e}^T\cdot \mathbf {k}^*\\&\ge \lim _{t>0,t\rightarrow 0}\frac{\sum _{i=1}^n \frac{k_i^*}{\mathbf {e}^T\cdot \mathbf {k}^* }\left( \mathbf {e}^T\cdot \bar{\mathbf {g}}(t,(\mathbf {e}^T\cdot \mathbf {k}^*)\mathbf {e}^i)-\mathbf {e}^T\cdot \bar{\mathbf {g}}(0,\mathbf {k}^*)\right) }{t}\\&\ge \frac{\sum _{i=1}^n \frac{k_i^*}{\mathbf {e}^T\cdot \mathbf {k}^* }\left( \mathbf {e}^T\cdot \bar{\mathbf {g}}((\mathbf {e}^T\cdot \mathbf {k}^*)^{-1},(\mathbf {e}^T\cdot \mathbf {k}^*)\mathbf {e}^i)-\mathbf {e}^T\cdot \bar{\mathbf {g}}(0,\mathbf {k}^*)\right) }{(\mathbf {e}^T\cdot \mathbf {k}^*)^{-1}}\\&=\sum _{i=1}^n k_i^*\left( \mathbf {e}^T\cdot \bar{\mathbf {g}}(1,\mathbf {e}^i)-\mathbf {e}^T\cdot \bar{\mathbf {g}}(0,\mathbf {k}^*)\right) \\&=\sum _{i=1}^n k_i^*\left( \mathbf {e}^T\cdot J_*^{-1}(\mathbf {g}(\mathbf {x}+J_*\mathbf {e}^i|A)-\mathbf {x})-\mathbf {e}^T\cdot J_*^{-1}(\mathbf {g}(\mathbf {x}|A)-\mathbf {x})\right) \\&=\sum _{i=1}^n k_i^*(1-n). \end{aligned} \end{aligned}$$
(51)

The last equality holds because \(\mathbf {x}+J_*\mathbf {e}^i\in OP (A)\) (hence \(\mathbf {g}(\mathbf {x}+J_*\mathbf {e}^i|A)=\mathbf {x}+J_*\mathbf {e}^i\)) and \(J_*^{-1}(\mathbf {g}(\mathbf {x}|A)-\mathbf {x})=\mathbf {e}\). Therefore, we have obtained

$$\begin{aligned} \begin{aligned} \lambda ^*(\mathbf {x}|A)\ge -(n-1). \end{aligned} \end{aligned}$$
(52)

Since \(\mathbf {g}(\mathbf {x}|A)\) is decreasing, from (49), it is easy to see \(\lambda ^*(\mathbf {x}|A)\le 0\). Hence, we have proved that \(|\lambda ^*(\mathbf {x}|A)|\le n-1\). \(\square \)

Proof of Theorem 5

Here, we consider the same function \(\mathbf {W}(u|A )\) as in Theorem 2. Recall that the EPM solution of \(A\) in Theorem 2 is \(\mathbf {f}_{PM}(A)=\mathbf {\eta }(u_{PM}|A)\), so we already know that it suffices to show that \(\mathbf {W}(u|A )\) is non-decreasing. However, we cannot simply prove \(\mathbf {W}^\prime (u|A )\ge \mathbf {0}^n\) because \(\mathbf {W}(u|A )\) may not be differentiable. Instead, we prove that the right-sided derivative of \(\mathbf {W}(u|A )\) is non-negative, i.e., that

$$\begin{aligned} \begin{aligned} \mathbf {W}_+^\prime (u|A )\doteq \lim _{t>0,t\rightarrow 0} \frac{\mathbf {W}(u+t|A )-\mathbf {W}(u|A )}{t}\ge \mathbf {0}^n, \end{aligned} \end{aligned}$$
(53)

for all \(u\in [0,u_{PM})\). (In Lemma 1 following this proof, we show that this condition guarantees \(\mathbf {W}(u|A )\) is non-decreasing.)

From Theorem 2, we have

$$\begin{aligned} \begin{aligned} \mathbf {W}_+^\prime (u|A )=(n-1)\mathbf {\eta }_+^\prime (u|A)+\lim _{t>0,t\rightarrow 0} \frac{\mathbf {g}(\mathbf {\eta }(u+t)|A)-\mathbf {g}(\mathbf {\eta }(u)|A)}{t}. \end{aligned} \end{aligned}$$
(54)

Note that \(\mathbf {\eta }(u|A)\) is strictly increasing, hence \(\mathbf {\eta }_+^\prime (u|A)=\mathbf {h}^*(\mathbf {\eta }(u|A)|A)\ge \mathbf {0}^n\). From \(\mathbf {\eta }_+^\prime (u|A)\ge \mathbf {0}^n\) and \(\mathbf {\eta }_+^\prime (u|A)\ne \mathbf {0}^n\), it follows that

$$\begin{aligned} \begin{aligned} \lim _{t>0,t\rightarrow 0}&\frac{\mathbf {g}(\mathbf {\eta }(u+t)|A)-\mathbf {g}(\mathbf {\eta }(u)|A)}{t}\\&=\mathbf {d}^{\mathbf {\eta }(u|A)}(\mathbf {\eta }_+^\prime (u|A)|A) =\mathbf {d}^{\mathbf {\eta }(u|A)}(\mathbf {h}^*(\mathbf {\eta }(u|A)|A)|A)\\&=\lambda ^*(\mathbf {\eta }(u|A)|A)\mathbf {h}^*(\mathbf {\eta }(u|A)|A). \end{aligned} \end{aligned}$$
(55)

Substituting (55) into (54) leads to

$$\begin{aligned} \begin{aligned} \mathbf {W}_+^\prime (u|A )=(n-1+\lambda ^*(\mathbf {\eta }(u|A)|A))\mathbf {h}^*(\mathbf {\eta }(u|A)|A). \end{aligned} \end{aligned}$$
(56)

Then, from Theorem 4, it is known that \(\mathbf {W}_+^\prime (u|A )\ge \mathbf {0}^n\), which completes the proof. \(\square \)

Lemma 1

If \(f:[a,b)\rightarrow \mathbb {R}\) is a continuous function and \(f_+^\prime (x)\ge 0\) for all \(x\in [a,b)\), then \(f\) is non-decreasing on \([a,b)\).

Proof

Consider \(\overline{f}(x)=f(x)+\varepsilon x\) for \(x\in [a,b)\) and \(\varepsilon >0\). Then \(\overline{f}_+^\prime (x)\ge \varepsilon \). We first show that \(\overline{f}\) is non-decreasing on \([a,b)\).

If \(\overline{f}\) is not non-decreasing on \([a,b)\), then there exist \(c\in [a,b)\) and \(d\in [a,b)\) such that \(c<d\) and \(\overline{f}(c)>\overline{f}(d)\). The continuity of \(\overline{f}\) on \([c,d]\) implies that there exists \(\theta \in [c,d]\) such that \(\overline{f}(\theta )=\sup _{x\in [c,d]}\overline{f}(x)\). Since \(\overline{f}(c)>\overline{f}(d)\), we have \(\theta \ne d\), hence \(\varepsilon \le \overline{f}_+^\prime (\theta )=\lim _{t>0,t\rightarrow 0}\frac{\overline{f}(\theta +t)-\overline{f}(\theta )}{t}\le 0\), contradicting \(\varepsilon >0\). This means that \(\overline{f}\) is non-decreasing on \([a,b)\).

Since \(\overline{f}\) is non-decreasing on \([a,b)\), for all \(a\le c\le d<b\) we have \(\overline{f}(d)\ge \overline{f}(c)\), i.e., \(f(d)\ge f(c)+\varepsilon (c-d)\). Letting \(\varepsilon \rightarrow 0\), it follows that \(f(d)\ge f(c)\). This means that \(f\) is non-decreasing on \([a,b)\). \(\square \)

1.1 Calculation of the examples

We need to show that \(A(t,s)\in P_0^*\), i.e., that \(A(t,s)\) satisfies Assumptions 1 and 2 and that \( OP (A(t,s))\) is the union of pieces of hyper-planes. It is easy to see from the definition of \(A(t,s)\) that \(A(t,s)\) is convex, compact and comprehensive and that \( OP (A(t,s))\) is the union of pieces of hyper-planes. Hence, we only need to verify that \(A(t,s)\) satisfies Assumption 2. According to the definition of \(A(t,s)\), for all \(\mathbf {x}\in A(t,s)\), we have

$$\begin{aligned} \begin{aligned} g^1(\mathbf {x}_{-1}|A(t,s))=\min \left\{ q^1-t\sum _{i\ne 1}\frac{q^1x_i}{q^i}, \min _{j\ne 1}\left\{ \frac{q^1-q^1x_j}{s}-\sum _{i\ne 1,j} \frac{q^1x_i}{q^i}\right\} \right\} . \end{aligned} \end{aligned}$$

And \(\forall k=2,\ldots ,n\),

$$\begin{aligned} \begin{aligned}&g^k(\mathbf {x}_{-k}|A(t,s))=\\&\min \left\{ q^k\!-\!s\sum _{i\ne k}\frac{q^kx_i}{q^i}, \frac{q^k-q^kx_1}{t}\!-\!\sum _{i\ne k,1}\frac{q^kx_i}{q^i}, \min _{j\ne 1,k}\left\{ \frac{q^k-q^kx_j}{s}\!-\!\sum _{i\ne k,j}\frac{q^kx_i}{q^i}\right\} \right\} . \end{aligned} \end{aligned}$$

It is easy to see that \(g^k(\mathbf {0}^{n-1}|A(t,s))=q^k\) and \(g^k(\cdot |A(t,s))\) is continuous and strictly decreasing, \(\forall k=1,\ldots ,n\). For any given \(k\in \{1,\ldots ,n\}\), we show that \(\mathbf {x}\in OP (A(t,s))\Longleftrightarrow x_k=g^k (\mathbf {x}_{-k}|A(t,s) )\).

If \(\mathbf {x}\in OP (A(t,s))\), then \(p_1 (\mathbf {x}|t)\le 1\) and \(p_i (\mathbf {x}|s)\le 1, i=2,\ldots ,n\), with at least one of them holding with equality, which implies \(x_k=g^k (\mathbf {x}_{-k}|A(t,s) )\).

If \(x_k=g^k (\mathbf {x}_{-k}|A(t,s) )\), then \(p_1 (\mathbf {x}|t)\le 1\) and \(p_i (\mathbf {x}|s)\le 1, i=2,\ldots ,n\), with at least one of them holding with equality. Assume \(p_l (\mathbf {x}|r)=1\), where \(1\le l\le n\) and \(r\in \{s,t\} \). If \(\mathbf {x} \notin OP (A(t,s))\), then there exists \(\mathbf {y}\in A(t,s)\) such that \(\mathbf {y}\ge \mathbf {x}\) and \(\mathbf {y}\ne \mathbf {x}\). Since each coefficient in \(p_l (\cdot |r)\) is strictly positive, we have \(p_l (\mathbf {y}|r)>1\), contradicting \(\mathbf {y}\in A(t,s)\). Hence, \(\mathbf {x}\in OP (A(t,s))\).

Therefore, \(A(t,s)\) satisfies Assumption 2, which implies \(A(t,s)\in P_0^*\). Since \(g^k(\mathbf {0}^{n-1}|A(t,s))=q^k\) for \(k=1,\ldots ,n\), it is known that the maximum achievable profit for the \(k\)th player is \(q^k\).
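For concreteness, the two formulas for \(g^1\) and \(g^k\) above are straightforward to evaluate numerically. The following Python sketch (ours, not part of the paper; the function name g_k, the 0-based player indices and the parameter values are illustrative assumptions) implements them and checks the two properties just used: \(g^k(\mathbf {0}^{n-1}|A(t,s))=q^k\) and strict monotonicity.

```python
# A numerical sketch (not from the paper) of g^k(x_{-k}|A(t,s)) as displayed above.
# Player indices are 0-based here; q, t, s are illustrative parameter choices.

def g_k(k, x, q, t, s):
    """Evaluate g^k(x_{-k}|A(t,s)); the k-th entry of x is ignored."""
    n = len(q)
    others = [i for i in range(n) if i != k]
    if k == 0:
        cands = [q[0] - t * sum(q[0] * x[i] / q[i] for i in others)]
        cands += [(q[0] - q[0] * x[j]) / s
                  - sum(q[0] * x[i] / q[i] for i in others if i != j)
                  for j in others]
    else:
        cands = [q[k] - s * sum(q[k] * x[i] / q[i] for i in others)]
        cands += [(q[k] - q[k] * x[0]) / t
                  - sum(q[k] * x[i] / q[i] for i in others if i != 0)]
        cands += [(q[k] - q[k] * x[j]) / s
                  - sum(q[k] * x[i] / q[i] for i in others if i != j)
                  for j in others if j != 0]
    return min(cands)

# g^k(0^{n-1}|A(t,s)) = q^k, and g^k decreases in the other coordinates.
q, t, s = [4.0, 2.0, 1.0], 0.3, 0.2
assert all(abs(g_k(k, [0.0] * 3, q, t, s) - q[k]) < 1e-12 for k in range(3))
assert g_k(0, [0.0, 0.1, 0.1], q, t, s) < g_k(0, [0.0] * 3, q, t, s)
```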

A two-dimensional example of \(A(t,s)\) is shown in Fig. 3. Since \( OP (A(t,s))\) is the union of pieces of hyper-planes, \(g^i(\cdot |A(t,s))\) is piecewise linear in \(A^i(t,s)\). Therefore, \(\mathbf {G}(\mathbf {x}|A(t,s))=\partial \mathbf {g}(\mathbf {x}|A(t,s))/\partial \mathbf {x}\) is a piecewise constant matrix in \(A(t,s)\) (i.e., \(\mathbf {G}(\mathbf {x}|A(t,s))\) is a constant matrix on each piece of \(A(t,s)\)). Hence, the tangent vector \(\mathbf {h}^*(\mathbf {x}|A(t,s))\) of \(C(A(t,s))\) is piecewise constant, which means that \(C(A(t,s))\) is piecewise linear. Because \(C(A(t,s))\) is piecewise linear, it suffices to find the points at which the PM path of \(A(t,s)\) is not differentiable in order to determine the PM path \(C(A(t,s))\).

Fig. 3  A two-dimensional example of \(A(t,s)\)

Since \(C(A(t,s))\) starts from \(\mathbf {0}^n\), we first calculate \(\mathbf {G}(\mathbf {0}^n|A(t,s) )\) and \(\mathbf {h}(\mathbf {0}^n|A(t,s) )\). Note that in a small neighborhood of \(\mathbf {0}^n\), we have

$$\begin{aligned} \begin{aligned}&g^1(\mathbf {x}_{-1}|A(t,s))=q^1-t\sum _{j\ne 1}\frac{q^1x_j}{q^j},\\&g^i(\mathbf {x}_{-i}|A(t,s))=q^i-s\sum _{j\ne i}\frac{q^ix_j}{q^j}, \quad 2\le i\le n. \end{aligned} \end{aligned}$$
(57)

Hence

$$\begin{aligned} \begin{aligned}&\mathbf {G}(\mathbf {0}^n|A(t,s) )=\left[ \begin{array}{cccc}0&\frac{-tq^1}{q^2}&\ldots &\frac{-tq^1}{q^n}\\ \frac{-sq^2}{q^1}&0&\ldots &\frac{-sq^2}{q^n}\\ \vdots &\vdots &\ddots &\vdots \\ \frac{-sq^n}{q^1}&\frac{-sq^n}{q^2}&\ldots &0\end{array}\right] ,\\&\mathbf {h}(\mathbf {0}^n|A(t,s) )=\left( 1,\frac{q^2\epsilon }{q^1},\ldots ,\frac{q^n\epsilon }{q^1}\right) ^T, \end{aligned} \end{aligned}$$
(58)

where

$$\begin{aligned} \epsilon =\frac{(n-2)s+\sqrt{(n-2)^2 s^2+4(n-1)st}}{2(n-1)t}. \end{aligned}$$
(59)
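As stated in (48), the tangent direction satisfies \(\mathbf {d}^\mathbf {x}(\mathbf {h}^*(\mathbf {x}|A)|A)=\lambda ^*(\mathbf {x}|A)\cdot \mathbf {h}^*(\mathbf {x}|A)\), i.e., \(\mathbf {h}\) is an eigenvector of the Jacobian \(\mathbf {G}\) wherever \(\mathbf {g}\) is differentiable. The following Python sketch (ours; the helper name check_direction and the test values of \(n\), \(t\), \(s\), \(q\) are assumptions) checks numerically that, with \(\epsilon \) as in (59), \(\mathbf {h}(\mathbf {0}^n|A(t,s))\) from (58) is indeed an eigenvector of \(\mathbf {G}(\mathbf {0}^n|A(t,s))\).

```python
# A numerical check (not from the paper) that h(0^n|A(t,s)) in (58), with epsilon
# from (59), is an eigenvector of G(0^n|A(t,s)).
import numpy as np

def check_direction(q, t, s, tol=1e-10):
    n = len(q)
    eps = ((n - 2) * s + np.sqrt((n - 2) ** 2 * s ** 2 + 4 * (n - 1) * s * t)) \
          / (2 * (n - 1) * t)                                   # (59)
    G = np.array([[0.0 if i == j else
                   (-t * q[0] / q[j] if i == 0 else -s * q[i] / q[j])
                   for j in range(n)] for i in range(n)])       # G(0^n|A(t,s)), (58)
    h = np.array([1.0] + [q[i] * eps / q[0] for i in range(1, n)])  # h(0^n|A(t,s)), (58)
    lam = (G @ h)[0]          # candidate eigenvalue read off the first component (h[0] = 1)
    return np.max(np.abs(G @ h - lam * h)) < tol

assert check_direction(q=[3.0, 1.0, 2.0, 5.0], t=0.4, s=0.25)
```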

The PM path \(C(A(t,s))\) starts from \(\mathbf {0}^n\) and moves along the direction \(\mathbf {h}(\mathbf {0}^n|A(t,s) )\) until it reaches the point \(\mathbf {y}\) at which \(g^i(\cdot |A(t,s) )\) is not differentiable at \(\mathbf {y}_{-i}\) for \(i\in \{2,\ldots ,n\}\). That is, \(\mathbf {y}\) satisfies that, for all \(i\in \{2,\ldots ,n\}\), the point \((\mathbf {y}_{-i}: g^i (\mathbf {y}_{-i}|A(t,s) ))\) lies in the intersection of the two hyper-planes \(p_1 (\mathbf {x}|t)=1\) and \(p_i (\mathbf {x}|s)=1\). Consequently, we have

$$\begin{aligned} \mathbf {y}=bq^1\mathbf {h}(\mathbf {0}^n|A(t,s) )=(bq^1,b\epsilon q^2,\ldots ,b\epsilon q^n)^T, \end{aligned}$$
(60)

where

$$\begin{aligned} b=\frac{1-t}{1-st+t(1-s)(n-2)\epsilon }. \end{aligned}$$
(61)

For any \(\mathbf {x}\in A(t,s)\) and \(\mathbf {x}>\mathbf {y}\), we have

$$\begin{aligned} \begin{aligned}&g^1(\mathbf {x}_{-1}|A(t,s))=q^1-t\sum _{j\ne 1}\frac{q^1x_j}{q^j},\\&g^i(\mathbf {x}_{-i}|A(t,s))=\frac{q^i}{t}-\sum _{j\ne 1,i}\frac{q^ix_j}{q^j}-\frac{q^ix_1}{tq^1}, 2\le i\le n. \end{aligned} \end{aligned}$$
(62)

The above leads to

$$\begin{aligned} \begin{aligned}&\mathbf {G}(\mathbf {x}|A(t,s) )=\left[ \begin{array}{cccc}0&\frac{-tq^1}{q^2}&\ldots &\frac{-tq^1}{q^n}\\ \frac{-q^2}{tq^1}&0&\ldots &\frac{-q^2}{q^n}\\ \vdots &\vdots &\ddots &\vdots \\ \frac{-q^n}{tq^1}&\frac{-q^n}{q^2}&\ldots &0\end{array}\right] ,\\&\mathbf {h}(\mathbf {x}|A(t,s) )=\left( 1,\frac{q^2}{q^1t},\ldots ,\frac{q^n}{q^1t}\right) ^T. \end{aligned} \end{aligned}$$
(63)

The PM path then continues from \(\mathbf {y}\) along the direction \(\mathbf {h}(\mathbf {x}|A(t,s) )\) until it reaches the EPM solution \(\mathbf {f_{PM}}(A(t,s))\) on the hyper-plane \(p_1 (\mathbf {x}|t)=1\). Hence,

$$\begin{aligned} \mathbf {f_{PM}} (A(t,s) )=\mathbf {y}+cq^1 \mathbf {h}(\mathbf {x}|A(t,s) ), \end{aligned}$$
(64)

where

$$\begin{aligned} c=\frac{1-b-t\epsilon b(n-1)}{n}. \end{aligned}$$
(65)
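As a sanity check (ours, not part of the paper), the following Python sketch assembles \(\mathbf {f_{PM}}(A(t,s))\) from (58)–(65) for one illustrative choice of \(q\), \(t\), \(s\) and verifies that it lies on the hyper-plane \(p_1 (\mathbf {x}|t)=1\); here we take \(p_1(\mathbf {x}|t)=x_1/q^1+t\sum _{i\ne 1}x_i/q^i\), the facet underlying the first branch of \(g^1\) above, and the helper name epm_solution is our own.

```python
# A sketch (not from the paper) assembling f_PM(A(t,s)) from (58)-(65); 0-based indices.
import numpy as np

def epm_solution(q, t, s):
    q = np.asarray(q, dtype=float)
    n = len(q)
    eps = ((n - 2) * s + np.sqrt((n - 2) ** 2 * s ** 2 + 4 * (n - 1) * s * t)) \
          / (2 * (n - 1) * t)                                   # (59)
    b = (1 - t) / (1 - s * t + t * (1 - s) * (n - 2) * eps)     # (61)
    c = (1 - b - t * eps * b * (n - 1)) / n                     # (65)
    h0 = np.array([1.0] + [q[i] * eps / q[0] for i in range(1, n)])   # h(0^n|A(t,s)), (58)
    h1 = np.array([1.0] + [q[i] / (q[0] * t) for i in range(1, n)])   # h(x|A(t,s)),   (63)
    y = b * q[0] * h0                                           # kink point y, (60)
    return y + c * q[0] * h1                                    # f_PM, (64)

q, t, s = [3.0, 1.0, 2.0, 5.0], 0.4, 0.25
f = epm_solution(q, t, s)
# f_PM must lie on p_1(x|t) = x_1/q^1 + t*sum_{i != 1} x_i/q^i = 1.
p1 = f[0] / q[0] + t * sum(f[i] / q[i] for i in range(1, len(q)))
assert abs(p1 - 1.0) < 1e-12
```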

So far, we have found the EPM solution \(\mathbf {f_{PM}}(A(t,s))\). Next, we estimate the system efficiency.

Let \(\mathbf {z}\) be the intersection point of the hyper-planes \(p_1 (\mathbf {x}|t)=1\) and \(p_i (\mathbf {x}|s)=1\), \(i=2,\ldots ,n\). Then, we have

$$\begin{aligned} \mathbf {z}=\frac{\left( (1+(n-2)s-(n-1)t)q^1, (1-s) q^2, \ldots , (1-s) q^n\right) ^T}{1+(n-2)s-(n-1)st}. \end{aligned}$$
(66)
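The closed form (66) can be verified symbolically. The sketch below (ours; it reads \(p_1(\mathbf {x}|t)=x_1/q^1+t\sum _{j\ne 1}x_j/q^j\) and \(p_i(\mathbf {x}|s)=x_i/q^i+s\sum _{j\ne i}x_j/q^j\) off the branches of \(g^1\) and \(g^k\) above, and fixes \(n=4\) for illustration) checks that \(\mathbf {z}\) satisfies all \(n\) hyper-plane equations.

```python
# A symbolic check (not from the paper) that z in (66) lies on p_1(x|t)=1 and p_i(x|s)=1.
import sympy as sp

n = 4                                             # illustrative choice
t, s = sp.symbols('t s', positive=True)
q = list(sp.symbols('q1:%d' % (n + 1), positive=True))

D = 1 + (n - 2) * s - (n - 1) * s * t
z = [(1 + (n - 2) * s - (n - 1) * t) * q[0] / D] + \
    [(1 - s) * q[i] / D for i in range(1, n)]     # (66)

p1 = z[0] / q[0] + t * sum(z[j] / q[j] for j in range(1, n))
assert sp.simplify(p1 - 1) == 0
for i in range(1, n):
    pi = z[i] / q[i] + s * sum(z[j] / q[j] for j in range(n) if j != i)
    assert sp.simplify(pi - 1) == 0
```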

Letting \(s\rightarrow 0\), we have \(\epsilon \rightarrow 0\), \(b\rightarrow 1-t\) and \(c\rightarrow t/n \); then letting \(t\rightarrow 0\), we have \(\mathbf {f_{PM}}(A(t,s))\rightarrow (q^1,q^2/n,\ldots ,q^n/n)\) and \(\mathbf {z}\rightarrow (q^1,q^2,\ldots ,q^n)\).

Therefore, we have the following inequalities

$$\begin{aligned} \lim _{t\rightarrow 0}\lim _{s\rightarrow 0}E_{A(t,s)}^e&\le \lim _{t\rightarrow 0} \lim _{s\rightarrow 0}\frac{\mathbf {e}^T\cdot \mathbf {f_{PM}} (A(t,s))}{\mathbf {e}^T\cdot \mathbf {z}}=\frac{1}{n}+\frac{n-1}{n}\cdot \frac{q^1}{\sum _{i=1}^nq^i}, \end{aligned}$$
(67)
$$\begin{aligned} \lim _{t\rightarrow 0}\lim _{s\rightarrow 0}E_{A(t,s)}^g&\le \lim _{t\rightarrow 0} \lim _{s\rightarrow 0}\frac{\sum _{i=1}^n (q^i)^{-1}\cdot f^i(A)}{\sum _{i=1}^n (q^i)^{-1} \cdot z_i}=\frac{2n-1}{n^2} \end{aligned}$$
(68)

and

$$\begin{aligned} \lim _{t\rightarrow 0}\lim _{s\rightarrow 0}E_{A(t,s)}^f\le \lim _{t\rightarrow 0}\lim _{s\rightarrow 0}\frac{n}{\sum _{i=1}^n [f^i(A)]^{-1} \cdot z_i}=\frac{n}{n^2-n+1}. \end{aligned}$$
(69)
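As a final numerical cross-check (ours, not part of the paper), the sketch below evaluates the right-hand-side ratios of (67)–(69) at the limiting values \(\mathbf {f_{PM}}\rightarrow (q^1,q^2/n,\ldots ,q^n/n)\) and \(\mathbf {z}\rightarrow (q^1,\ldots ,q^n)\) derived above and confirms that they match the stated closed forms.

```python
# A numerical check (not from the paper) of the limiting ratios in (67)-(69).
import numpy as np

def limiting_ratios(q):
    """Ratios of (67)-(69) at f_PM -> (q^1, q^2/n, ..., q^n/n) and z -> q."""
    q = np.asarray(q, dtype=float)
    n = len(q)
    f = np.concatenate(([q[0]], q[1:] / n))
    z = q
    Ee = f.sum() / z.sum()
    Eg = np.sum(f / q) / np.sum(z / q)
    Ef = n / np.sum(z / f)
    return Ee, Eg, Ef

rng = np.random.default_rng(0)
for n in (2, 3, 5, 8):
    q = rng.uniform(0.5, 5.0, size=n)
    Ee, Eg, Ef = limiting_ratios(q)
    assert np.isclose(Ee, 1 / n + (n - 1) / n * q[0] / q.sum())
    assert np.isclose(Eg, (2 * n - 1) / n ** 2)
    assert np.isclose(Ef, n / (n ** 2 - n + 1))
```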

Together with Theorem 6, we know that the above inequalities become equalities, hence the bounds given in Theorem 6 are asymptotically tight.


Cite this article

Zhong, F., Xie, J. & Zhao, X. The price of fairness with the extended Perles–Maschler solution. Math Meth Oper Res 80, 193–212 (2014). https://doi.org/10.1007/s00186-014-0475-8
