When partner knows best: asymmetric expertise in partnerships

  • Original Paper
  • International Journal of Game Theory

Abstract

This paper analyzes the problem of a principal (she) designing a contract for an agent (he) to form a short-lived partnership to exploit an asset before reselling, as in asset flipping. The agent possesses higher expertise than the principal in the sense that he can form a more-accurate assessment of the resale value of the asset before negotiating the dissolution of the partnership. By dissolving the partnership through a Texas shootout with the agent as proposer, the principal can “neutralize” her partner’s informational advantage and have him disclose the resale value for free. Thus, in the optimal contract, the agent’s superior expertise does not distort the structure of the partnership (i.e., the allocation of shares). The partners attain a higher aggregate surplus ex-ante if the principal commits to giving the asset away to the agent upon dissolving the partnership: She earns a lower revenue but lets all types of the agent enjoy a higher surplus. Thus, at the ex-ante stage, the agent could “bribe” the principal to implement this alternative. However, the additional surplus for lower types is insufficient to compensate the principal, so this higher ex-ante aggregate surplus is unattainable at the interim stage.

Data availability statement

Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.

Notes

  1. Ours is a common-value partnership, so efficiency is a moot issue.

  2. A similar analysis applies to partnerships that last a fixed, finite number of periods, or when there is an exogenous probability of dissolution. The substance behind our assumption is that the dissolution is triggered exogenously. If the decision whether to dissolve the partnership now or wait is made conditional on the agent’s report of the current resale value, we have a stopping problem embedded in the partnership contracting problem; see the discussion in Sect. 5.5.

  3. Alternative distributional assumptions that lead to structure (1) are provided in the examples below. Section 3.4 analyzes the case where \(\phi (0)>0,\) so the principal has a positive outside option, while Sect. 5.2 discusses non-linear expectations.

  4. Alternatively, we could model the agent’s expertise by having him observe a private signal s about the state of the market, so that his assessment of the resale value is E(w | ev, s). The substance behind Assumption 2 is that the agent is better informed about w than the principal.

  5. Section 5.1 exemplifies alternative cost functions.

  6. Thus, under a Texas shootout, dissolution negotiations never “fail”; the partners always reach an agreement where one partner buys out the other one.

  7. For the principal, with no uncertainty remaining when her information set is reached, her risk attitude is moot. For the agent, while he observes the dissolution value, he faces the uncertainty stemming from the principal’s randomization. Nonetheless, his risk attitude in this regard does not change the fact that his best response to the principal’s mixing is straightforward pricing.

  8. While the weaker assumption that \(\hat{\lambda }(v)\) is non-decreasing suffices, the stronger Assumption 3 allows for a simpler characterization of some of the contracts below.

  9. More generally, the principal could commit ex-ante to a pricing rule p(v),  provided that the rule preserves incentive compatibility. The analysis at the end of this section extends to pricing rules that can be ranked in terms of the effort functions they induce.

  10. A detailed analysis of this case is available in the Appendix.

References

  • Andersson T, Gudmundsson J, Talman D, Yang Z (2013) A competitive partnership formation process. Tilburg University Discussion Paper No. 2013-008

  • Bloch F, Dutta B, Manea M (2019) Efficient partnership formation in networks. Theor Econ 14:779–811

  • Brooks R, Landeo C, Spier K (2010) Trigger happy or gun shy? Dissolving common-value partnerships with Texas shootouts. RAND J Econ 41:649–673

  • Cao V (2018) Constrained-efficient profit division in a dynamic partnership. Working Paper

  • Cetemen D (2018) Achieving efficiency in repeated partnerships via information design. Working Paper

  • Che YK, Iossa E, Rey P (2017) Prizes versus contracts as incentives for innovation. Working Paper

  • Cramton P, Gibbons R, Klemperer P (1987) Dissolving a partnership efficiently. Econometrica 55:615–632

  • De-Frutos MA, Kittsteiner T (2008) Efficient partnership dissolution under buy-sell clauses. RAND J Econ 39(1):184–198

  • Francetich A (2015) Endogenous winner’s curse in dynamic mechanisms. AEJ Microecon 7(2):45–76

  • Gudmundsson J (2011) On symmetry in the formation of stable partnerships. Lund University, Department of Economics Working Paper 2011:29

  • Gudmundsson J (2013) Cycles and third-party payments in the partnership formation problem. Lund University, Department of Economics Working Paper 2013:16

  • Jehiel P, Pauzner A (2006) Partnership dissolution with interdependent values. RAND J Econ 37(1):1–22

  • Khoroshilov Y (2018) Partnership dissolution: information and efficiency. Decis Anal 15(3):133–194

  • Kittsteiner T (2003) Partnerships and double auctions with interdependent valuations. Games Econ Behav 44(1):54–76

  • Li J, Xue Y, Wu W (2013) Partnership dissolution and proprietary information. Soc Choice Welf 40:495–527

  • Loertscher S, Wasser C (2019) Optimal structure and dissolution of partnerships. Theor Econ 14:1063–1114

  • McAfee P (1992) Amicable divorce: dissolving a partnership with simple mechanisms. J Econ Theory 56:266–293

  • Moldovanu B (2002) How to dissolve a partnership. J Inst Theor Econ 158(1):66–80

  • Myerson R (1981) Optimal auction design. Math Oper Res 6(1):58–73

  • Ornelas E, Turner J (2007) Efficient dissolution of partnerships and the structure of control. Games Econ Behav 60(1):187–199

  • Talman D, Yang Z (2011) A model of partnership formation. J Math Econ 47(2):206–212

  • Toikka J (2011) Ironing without control. J Econ Theory 146(6):2510–2526

  • Wilson R (1969) Competitive bidding with disparate information. Manag Sci 15(7):446–448

Author information

Correspondence to Alejandro Francetich.

Additional information

This paper is an extensive revision of a paper that previously circulated under the title “Profiting From Experts’ ‘Tyranny’ in Partnerships.” The idea for this article was conceived during my time at the Decision Sciences Department at Bocconi University as a postdoctoral fellow. I am deeply indebted to David Kreps for continuously enlightening me. The paper has also benefited from comments by Camelia Bejan, Juan Camilo Gomez, Steve Holland, attendees of the 28th International Game Theory Conference at Stony Brook, the 2018 NASM at UC Davis, seminar participants at the Economics Department at UW and at the UWB School of Business, and anonymous referees. Any remaining errors and omissions are all mine.

Supplementary Information

Supplementary file 1 (pdf 39 KB)

Appendix

1.1 Main proofs

Proof of Proposition 1

In a dissolution mechanism, the principal’s payoff is bounded above by her first-best payoff, \(\kappa _{P}w+\left( \kappa _{A}-\theta \right) w=(1-\theta )w.\) This upper bound is attained in the second-best environment in the Texas-shootout equilibrium with the agent as the proposer. The result is established in Proposition 1 of Brooks et al. (2010); a proof is included here for the sake of completeness.

Principal. Let \(p\in \left[ 0,\overline{w}\right]\) be the price the agent calls, and let w(p) be the principal’s updated belief about w based on p. If she sells her shares, her expected payoff is \(p(1-\theta )\); if she buys the agent’s shares, she makes \(w(p)-p\theta .\) Thus, her best response is to sell if \(p>w(p),\) buy if \(p<w(p),\) and she is indifferent between buying and selling if \(p=w(p).\)

Agent. Let \(\alpha \in [0,1]\) be the probability that the principal buys the agent’s \(\theta\) shares. If the agent calls a price \(p\in \left[ 0,\overline{w}\right]\) when the dissolution value is w,  his expected payoff is:

$$\begin{aligned} U(p,\alpha ;\theta ,w)&=\alpha p\theta +(1-\alpha )[w-p(1-\theta )] \\ &=\alpha p\theta +(1-\alpha )w-p(1-\alpha )(1-\theta ) \\ &=(1-\alpha )w-p[1-\alpha -\theta ]. \end{aligned}$$

If the principal chooses \(\alpha =1-\theta ,\) the agent’s payoff becomes constant in p:

$$\begin{aligned} U(p,1-\theta ;\theta ,w)=\theta w. \end{aligned}$$

Thus, \(p^{*}=w\) is a best response. Moreover, \(w(p^{*})=w,\) so the principal is willing to randomize. Equilibrium payoffs are \(\theta w\) for the agent and \((1-\theta )w\) for the principal, the first-best payoffs. \(\square\)
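
To fix ideas, the following sketch (mine, not the paper’s; the parameter values are illustrative) checks the equilibrium numerically: when the principal buys with probability \(\alpha =1-\theta ,\) the agent’s payoff is flat in the called price, and truthful pricing \(p^{*}=w\) delivers the first-best split \(\theta w\) and \((1-\theta )w.\)

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper): the agent holds
# theta shares, the realized dissolution value is w, prices live in [0, w_bar].
theta, w, w_bar = 0.3, 5.0, 10.0

def agent_payoff(p, alpha):
    """Agent's expected payoff when he calls price p and the principal buys
    his theta shares with probability alpha (and sells her shares otherwise)."""
    return alpha * p * theta + (1 - alpha) * (w - p * (1 - theta))

alpha = 1 - theta                        # principal's mixing probability
prices = np.linspace(0.0, w_bar, 101)
payoffs = agent_payoff(prices, alpha)

assert np.allclose(payoffs, theta * w)   # flat in p, equal to theta*w

p_star = w                               # truthful pricing is a best response
print("agent:", theta * w)
print("principal if she buys: ", w - p_star * theta)      # = (1 - theta)*w
print("principal if she sells:", p_star * (1 - theta))    # = (1 - theta)*w
```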

Proof of Proposition 2

In any incentive-compatible mechanism, the feasible payoff for the principal is:

$$\begin{aligned} u_{P}(\theta ,t)=\frac{1+\delta \alpha }{2}\int _{0}^{\overline{v}} \left[ 2\theta (v)v-\left( v+\frac{1}{\lambda (v)}\right) \theta (v)^{2}\right] f(v){\mathrm {d}}v-U_{A}(0). \end{aligned}$$

In the optimal contract, \(U_{A}(0)=0.\) Ignoring the monotonicity constraint, we search for the optimal share allocation rule by pointwise maximization. We look for the maximizer of the following strictly-concave parametric function:

$$\begin{aligned} J(\theta ;v)\,{:}{=}\,2v\theta -\left( v+\frac{1}{\lambda (v)}\right) \theta ^{2}=2v\theta -\left( \frac{v\lambda (v)+1}{\lambda (v)}\right) \theta ^{2} \end{aligned}$$
(5)

subject to the constraint \(0\le \theta \le 1.\) For each \(v\in [0,\overline{v}],\) the maximizer of (5) is \(\theta ^{*}(v)\) in (3). Under Assumption 3, \(\theta ^{*}(v)\) is non-decreasing. Therefore, the transfer rule \(t^{*}(v)\) ensures incentive compatibility and individual rationality. \(\square\)
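
For a concrete illustration of the pointwise maximization, the sketch below assumes v is uniform on [0, 1]—so \(\lambda (v)=1/(1-v)\) and \(\hat{\lambda }(v)=v/(1-v)\)—and checks numerically that the maximizer of (5) on [0, 1] coincides with the candidate closed form \(\hat{\lambda }(v)/(\hat{\lambda }(v)+1)\) and is non-decreasing. The distributional assumption and the code are illustrative, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumption for illustration: v ~ Uniform[0, 1], so lambda(v) = 1/(1 - v)
# and lambda_hat(v) = v * lambda(v) = v / (1 - v).
def lam(v):      return 1.0 / (1.0 - v)
def lam_hat(v):  return v * lam(v)

def J(theta, v):
    """Parametric objective (5): 2*v*theta - (v + 1/lambda(v)) * theta**2."""
    return 2 * v * theta - (v + 1 / lam(v)) * theta**2

vs = np.linspace(0.01, 0.95, 50)
theta_numeric = np.array([
    minimize_scalar(lambda t: -J(t, v), bounds=(0.0, 1.0),
                    method="bounded", options={"xatol": 1e-8}).x
    for v in vs
])
theta_closed = lam_hat(vs) / (lam_hat(vs) + 1)   # candidate closed form for theta*(v)

assert np.allclose(theta_numeric, theta_closed, atol=1e-5)
assert np.all(np.diff(theta_closed) >= 0)        # non-decreasing, hence implementable
```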

Proof of Proposition 3

In any incentive-compatible mechanism with \(U_{A}(0)=0,\) the feasible payoff for the principal is now:

$$\begin{aligned} u_{P}(q,\theta ,t)&=\beta +\int _{0}^{\overline{v}}q(v) \left\{ \frac{1+\delta \alpha }{2}\left[ 2\theta (v)v -\left( v+\frac{1}{\lambda (v)}\right) \theta (v)^{2}\right] -(1-\delta )\beta \right\} f(v){\mathrm {d}}v\\ &=\beta +\int _{0}^{\overline{v}}q(v)\left\{ \frac{1+\delta \alpha }{2} J(\theta (v);v)-(1-\delta )\beta \right\} f(v){\mathrm {d}}v. \end{aligned}$$

Taking \(\theta ^{*}(v)\) from (3), we have:

$$\begin{aligned} u_{P}(q,\theta ,t)\le \beta +\int _{0}^{\overline{v}}\max \left\{ \frac{1+\delta \alpha }{2}v\theta ^{*}(v) -(1-\delta )\beta ,0\right\} f(v){\mathrm {d}}v. \end{aligned}$$

Under Assumption 3, \(\theta ^{*}(v)\) is continuous and strictly increasing, and so is \(\psi (v)\,{:}{=}\,v\theta ^{*}(v).\) Thus, \(\psi (v)\) is invertible. Moreover, under the assumption on \(\beta ,\) \(\frac{2(1-\delta )\beta }{1+\delta \alpha }\) is in the range of \(\psi (v),\) so we can define \(\underline{v}\,{:}{=}\,\psi ^{-1}\left( 2(1-\delta )\beta /(1+\delta \alpha )\right) .\) This gives:

$$\begin{aligned} u_{P}(q,\theta ,t)\le \beta +\int _{\underline{v}}^{\overline{v}} \left[ \frac{1+\delta \alpha }{2}v\theta ^{*}(v) -(1-\delta )\beta \right] f(v){\mathrm {d}}v. \end{aligned}$$

This upper bound is attained by \(q^{*}(v)=I(v\ge \underline{v}),\) which is non-decreasing. Thus, \(\hat{\theta }^{*}(v)=q^{*}(v)\theta ^{*}(v)\) is also non-decreasing under Assumption 3. The transfer rule \(t^{*}(v)\) from Proposition 2 for \(v\ge \underline{v}\) guarantees implementability. \(\square\)
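
Continuing the uniform illustration from the previous sketch (an assumption, not the paper’s specification), the cutoff \(\underline{v}\) can be computed by inverting \(\psi (v)=v\theta ^{*}(v)\) numerically:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative assumptions: v ~ Uniform[0, 1], so theta*(v) = v (previous sketch)
# and psi(v) = v * theta*(v) = v**2; delta, alpha, beta are made-up values chosen
# so that 2*(1 - delta)*beta/(1 + delta*alpha) lies in the range of psi.
delta, alpha, beta = 0.9, 0.5, 0.4

def theta_star(v): return v
def psi(v):        return v * theta_star(v)

target = 2 * (1 - delta) * beta / (1 + delta * alpha)
v_low = brentq(lambda v: psi(v) - target, 0.0, 1.0)   # the cutoff type
print("cutoff:", v_low)                               # here simply sqrt(target)

# The optimal participation rule keeps only types above the cutoff.
q_star = lambda v: float(v >= v_low)
assert q_star(v_low + 0.01) == 1.0 and q_star(v_low - 0.01) == 0.0
```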

Proof of Proposition 4

In an incentive-compatible mechanism with \(U_{A}(0)=0,\)

$$\begin{aligned} u_{P}(\theta ,t)=E\left[ (\theta (v)+\delta \alpha )v-\frac{1}{1+\delta \alpha }\left( v+\frac{1}{\lambda (v)}\right) \frac{(\theta (v)+\delta \alpha )^2}{2}\right] . \end{aligned}$$

We now look for the maximizer of the following strictly-concave parametric function:

$$\begin{aligned} K(\theta ;v)=(\theta +\delta \alpha ) v-\frac{1}{1+\delta \alpha } \left( v+\frac{1}{\lambda (v)}\right) \frac{(\theta +\delta \alpha )^{2}}{2}. \end{aligned}$$
(6)

This function is maximized at \(\theta ^{*}_0(v)\) in (4), which is continuous and non-decreasing under Assumption 3. The corresponding transfer rule in this contract is \({\tau }^{*}(v)\,{:}{=}\,\frac{1}{2(1+\delta \alpha )}\left[ (\theta _0^{*}(v)+\delta \alpha )^{2}v -\int _{0}^{v}(\theta _0^{*}(\epsilon )+\delta \alpha )^{2}{\mathrm {d}}\epsilon \right] .\) For all v such that \(\hat{\lambda }(v)\le \delta \alpha ,\) we have \(\theta _0^{*}(v)=0\) and \(\tau ^{*}(v)=\frac{1}{2(1+\delta \alpha )}\left[ (\delta \alpha )^{2}v-\int _{0}^{v}(\delta \alpha )^{2}{\mathrm {d}}\epsilon \right] =0.\) For all \(v\ge \underline{v}\,{:}{=}\,\hat{\lambda }^{-1}(\delta \alpha ),\) if any, we have:

$$\begin{aligned} \tau ^{*}(v)&=\frac{1}{2(1+\delta \alpha )}\left[ (\theta _0^{*}(v)+\delta \alpha )^{2}v-\int _{0}^{\underline{v}} (\theta _0^{*}(\epsilon )+\delta \alpha )^{2}{\mathrm {d}}\epsilon -\int _{\underline{v}}^{v}(\theta _0^{*}(\epsilon ) +\delta \alpha )^{2}{\mathrm {d}}\epsilon \right] \\ &=\frac{1}{2(1+\delta \alpha )}\left[ (\theta _0^{*}(v)+\delta \alpha )^{2}v-(\delta \alpha )^2\underline{v} -\int _{\underline{v}}^{v}(\theta _0^{*}(\epsilon )+\delta \alpha )^{2}{\mathrm {d}}\epsilon \right] =:t^{*}_0(v). \end{aligned}$$

This establishes the proposition. \(\square\)
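
Again under the illustrative uniform assumption, the sketch below reconstructs the maximizer of (6) from its first-order condition, clipped to the feasible interval [0, 1] (this closed form is a reconstruction, not a quotation of (4)), checks it against a numerical maximization, and confirms that types with \(\hat{\lambda }(v)\le \delta \alpha\) receive zero shares.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative assumptions (not from the paper): v ~ Uniform[0, 1], so
# lambda_hat(v) = v/(1 - v), and some parameter values delta, alpha.
delta, alpha = 0.8, 0.6
da = delta * alpha
def lam(v):      return 1.0 / (1.0 - v)
def lam_hat(v):  return v * lam(v)

def K(theta, v):
    """Parametric objective (6)."""
    return (theta + da) * v - (v + 1 / lam(v)) * (theta + da)**2 / (2 * (1 + da))

def theta0_star(v):
    # Reconstructed from the first-order condition of (6), clipped to [0, 1].
    return float(np.clip((1 + da) * lam_hat(v) / (lam_hat(v) + 1) - da, 0.0, 1.0))

for v in np.linspace(0.01, 0.95, 50):
    numeric = minimize_scalar(lambda t: -K(t, v), bounds=(0.0, 1.0),
                              method="bounded").x
    assert K(theta0_star(v), v) >= K(numeric, v) - 1e-9  # closed form attains the max
    if lam_hat(v) <= da:
        assert theta0_star(v) == 0.0                     # low types get no shares
```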

Proof of Corollary 1

For the principal,

$$\begin{aligned} u_{P}(\theta ^{*}_0,t^{*}_0)&=\frac{1+\delta \alpha }{2}E\left[ I(v\le \underline{v})J \left( \frac{\delta \alpha }{\delta \alpha +1};v\right) +I(v>\underline{v})J \left( \frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1};v\right) \right] \\ &\le \frac{1+\delta \alpha }{2}E\left[ J\left( \frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1};v\right) \right] =u_{P}(\theta ^{*},t^{*}); \end{aligned}$$

the inequality follows from the fact that \(\frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1}\) maximizes \(J(\theta ;v).\) For the agent of type \(v\le \underline{v},\) we have:

$$\begin{aligned} U^*_{A0}(v)=\frac{1+\delta \alpha }{2}\int _{0}^{v} \left( \frac{\delta \alpha }{1+\delta \alpha }\right) ^{2}{\mathrm {d}}\epsilon \ge \frac{1+\delta \alpha }{2}\int _{0}^{v}\left( \frac{\hat{\lambda }(\epsilon )}{\hat{\lambda } (\epsilon )+1}\right) ^{2}{\mathrm {d}}\epsilon =U_{A}^{*}(v). \end{aligned}$$

Finally, for \(v>\underline{v}\) (if any),

$$\begin{aligned} U^*_{A0}(v)&=\frac{1+\delta \alpha }{2}\int _{0}^{\underline{v}}\left( \frac{\delta \alpha }{1+\delta \alpha }\right) ^{2}{\mathrm {d}}\epsilon +\frac{1+\delta \alpha }{2} \int _{\underline{v}}^{v}\left( \frac{\theta ^{*}_0(\epsilon )+\delta \alpha }{1+\delta \alpha }\right) ^{2}{\mathrm {d}}\epsilon \\ &=\frac{1+\delta \alpha }{2}\int _{0}^{\underline{v}}\left( \frac{\delta \alpha }{1+\delta \alpha }\right) ^{2} {\mathrm {d}}\epsilon +\frac{1+\delta \alpha }{2}\int _{\underline{v}}^{v}\left( \frac{\hat{\lambda } (\epsilon )}{\hat{\lambda }(\epsilon )+1}\right) ^{2}{\mathrm {d}}\epsilon \\ &\ge \frac{1+\delta \alpha }{2}\int _{0}^{v}\left( \frac{\hat{\lambda }(\epsilon )}{\hat{\lambda } (\epsilon )+1}\right) ^{2}{\mathrm {d}}\epsilon =U_{A}^{*}(v). \end{aligned}$$

This establishes the result. \(\square\)

Proof of Corollary 2

We can write the ex-ante aggregate surplus under a contract with effort choice \(\widetilde{e}(\theta ,v)\) and share allocation rule \(\widetilde{\theta }(v),\) denoted \(\widetilde{{\mathcal {S}}},\) as:

$$\begin{aligned} \widetilde{{\mathcal {S}}}=\frac{1+\delta \alpha }{2}\int _0^{\overline{v}}\left[ 2\widetilde{e} \left( \widetilde{\theta }(v),v\right) -\widetilde{e} \left( \widetilde{\theta }(v),v\right) ^2\right] vf(v) {\mathrm {d}}v. \end{aligned}$$
(7)

For the contract in Proposition 2, (7) becomes:

$$\begin{aligned} {\mathcal {S}}=\frac{1+\delta \alpha }{2}\int _0^{\overline{v}} \left[ \frac{2\hat{\lambda }(v)}{\hat{\lambda }(v)+1} -\left( \frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1}\right) ^2\right] vf(v){\mathrm {d}}v. \end{aligned}$$

Compare this surplus with the surplus under Proposition 4:

$$\begin{aligned}&\int _0^{\underline{v}}\left[ \left( \frac{2\delta \alpha }{\delta \alpha +1}\right) -\left( \frac{\delta \alpha }{\delta \alpha +1}\right) ^2\right] vf(v){\mathrm {d}}v +\int _{\underline{v}}^{\overline{v}}\left[ \frac{2\hat{\lambda }(v)}{\hat{\lambda }(v)+1} -\left( \frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1}\right) ^{2}\right] vf(v){\mathrm {d}}v\\ &\quad \ge \int _{0}^{\overline{v}}\left[ \frac{2\hat{\lambda }(v)}{\hat{\lambda }(v)+1} -\left( \frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1}\right) ^{2}\right] vf(v){\mathrm {d}}v, \end{aligned}$$

where the inequality follows from the fact that \(\hat{\lambda }(v)/(\hat{\lambda }(v)+1)\le \delta \alpha /(\delta \alpha +1)\) for \(v\le \underline{v}\) and that the function \(h(x)=2x-x^2\) is strictly increasing on [0, 1). It follows that \({\mathcal {S}}_{0}\ge {\mathcal {S}}.\) \(\square\)
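
The inequality can be illustrated numerically. The sketch below again assumes v uniform on [0, 1] and illustrative \(\delta ,\alpha\) (none of which comes from the paper), builds the effort functions induced by the contracts in Propositions 2 and 4, and confirms \({\mathcal {S}}_{0}\ge {\mathcal {S}}.\)

```python
import numpy as np
from scipy.integrate import quad

# Illustrative assumptions: v ~ Uniform[0, 1] (f = 1), lambda_hat(v) = v/(1 - v),
# and parameters delta, alpha. h(x) = 2x - x**2 is the effort term in (7).
delta, alpha = 0.8, 0.6
da = delta * alpha
lam_hat = lambda v: v / (1.0 - v)
h = lambda x: 2 * x - x**2

# Effort induced by the two contracts. Under Proposition 4 (p = 0), effort is
# delta*alpha/(delta*alpha + 1) below the cutoff and lambda_hat/(lambda_hat + 1)
# above it, which is exactly the pointwise maximum of the two expressions.
e_prop2 = lambda v: lam_hat(v) / (lam_hat(v) + 1)
e_prop4 = lambda v: max(lam_hat(v) / (lam_hat(v) + 1), da / (da + 1))

pref = (1 + da) / 2
S,  _ = quad(lambda v: pref * h(e_prop2(v)) * v, 0, 1)
S0, _ = quad(lambda v: pref * h(e_prop4(v)) * v, 0, 1)
print(f"S = {S:.4f}, S0 = {S0:.4f}")
assert S0 >= S   # Corollary 2: giving the asset away raises ex-ante aggregate surplus
```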

Proof of Proposition 5

We prove this proposition by showing that the effort exerted within the partnership, \(e^{**}(v,p)=e^{*}(\theta ^{*}(v,p),v,p),\) is non-increasing in p; the result then follows from the fact that the integrand in (7), \(2e-e^2,\) is strictly increasing on [0, 1). The proof for part (i) is a straightforward extension of Example 6; details are available from the author upon request. Here, we focus on part (ii).

Since \(w\le \overline{w},\) the principal can focus on \(p\le \overline{w}\); furthermore, we can rule out \(p=\overline{w},\) as it leads to a lower ex-ante aggregate surplus than in Proposition 2. Thus, fix \(p\in [0,\overline{w}).\) If the exploitation value is null, the agent might as well exert no effort. Otherwise, for every \(v\in (0,\overline{v}),\) the agent chooses effort to maximize the function:

$$\begin{aligned} {\mathrm {u}}_{A}(e,p,\theta ,t,v)=\theta e v+\delta \left[ \overline{w}-(1-\theta ) p -\int _{p}^{\overline{w}}G(w|ev){\mathrm {d}}w\right] -t-(1+\delta \alpha )v\frac{e^{2}}{2}. \end{aligned}$$

Under the additional assumptions, this function is strictly concave (as a function of e), so it has a unique maximizer \(e^{*}.\) For each triple \((\theta ,v,p)\) with \(0\le p<\overline{w}\) and \(0<v<\overline{v},\) \(e^{*}\) solves the first-order condition:

$$\begin{aligned} \theta -\delta \int _{p}^{\overline{w}}\frac{\partial G(w|e^{*}v)}{\partial \nu }{\mathrm {d}}w-(1+\delta \alpha )e^{*}=0. \end{aligned}$$
(8)

Thus, it has the following properties:

$$\begin{aligned} \frac{\partial e^{*}(\theta ,v,p)}{\partial \theta }&=\frac{1}{\delta v\int _{p}^{\overline{w}} \frac{\partial ^2 G(w|ev)}{\partial \nu ^2}{\mathrm {d}}w+(1+\delta \alpha )}>0; \end{aligned}$$
(9)
$$\begin{aligned} \frac{\partial e^{*}(\theta ,v,p)}{\partial v}&=-\frac{\delta e^{*}(\theta ,v,p)\int _{p}^{\overline{w}} \frac{\partial ^2 G(w|e^{*}(\theta ,v,p)v)}{\partial \nu ^2}{\mathrm {d}}w}{\delta v\int _{p}^{\overline{w}} \frac{\partial ^2 G(w|ev)}{\partial \nu ^2}{\mathrm {d}}w+(1+\delta \alpha )}\le 0; \end{aligned}$$
(10)
$$\begin{aligned} \frac{\partial e^{*}(\theta ,v,p)}{\partial p}&=\frac{\delta \frac{\partial G(p|e^{*}(\theta ,v,p)v)}{\partial \nu }}{\delta v\int _{p}^{\overline{w}}\frac{\partial ^2 G(w|ev)}{\partial \nu ^2}{\mathrm {d}}w +(1+\delta \alpha )}\le 0. \end{aligned}$$
(11)

We have:

$$\begin{aligned} \frac{\partial \hat{{\mathrm {u}}}_{A}(\theta ,t,v,p)}{\partial v}&=e^{*}(\theta ,v,p)\left[ \theta -\delta \int _{p}^{\overline{w}}\frac{\partial G(w|e^*v)}{\partial \nu }{\mathrm {d}}w -\frac{1+\delta \alpha }{2}e^{*}(\theta ,v,p)\right] \\ &=\frac{1+\delta \alpha }{2} e^{*}(\theta ,v,p)^{2}, \end{aligned}$$

where the last equality follows from (8). Hence, truthful surplus in a contract with \(U_{A}(0)=0\) is \(U_{A}(v)=\frac{1+\delta \alpha }{2}\int _{0}^{v} e^{*}(\theta (\epsilon ,p),\epsilon ,p)^{2}{\mathrm {d}}\epsilon\); from the point of view of the ex-ante stage, this is:

$$\begin{aligned} E[U_{A}(v)]=\frac{1+\delta \alpha }{2}\int _{0}^{\overline{v}} \frac{e^{*}(\theta (v,p),v,p)^{2}}{\lambda (v)}f(v){\mathrm {d}}v. \end{aligned}$$

The principal’s payoff in an incentive-compatible contract satisfies:

$$\begin{aligned} u_P(\theta ,t,p)\propto E\left\{ \frac{1}{\lambda (v)}\left[ 2e^{*}(\theta (v,p),v,p) \hat{\lambda }(v) -\left( \hat{\lambda }(v)+1\right) e^{*}(\theta (v,p),v,p)^2\right] \right\} . \end{aligned}$$

Consider the function:

$$\begin{aligned} R(\theta ,v,p)\,{:}{=}\,2e^{*}(\theta ,v,p)\hat{\lambda }(v)- \left( \hat{\lambda }(v)+1\right) e^{*}(\theta ,v,p)^2. \end{aligned}$$

Ignoring the monotonicity constraint, the principal determines the allocation rule \(\theta ^{*}(v,p)\) by maximizing \(R(\theta ,v,p)\) with respect to \(\theta .\) Under Assumption 3, (9) and (10) imply that \(\theta ^{*}(v,p)\) is non-decreasing in v,  hence it is implementable. Now, recall \(e^{**}(v,p)=e^{*}(\theta ^{*}(v,p),v,p).\) For types v such that \(\theta (v,p)=0,\) we have that \(e^{**}(v,p)=e^{*}(0,v,p),\) which is non-increasing in p. The same is true for all v such that \(\theta (v,p)=1.\) Finally, for all v such that \(\theta ^{*}(v,p)\in (0,1),\) we have \(e^{**}(v,p)=\frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1}\); here, p does not affect the equilibrium effort whatsoever. It follows that \(e^{**}(v,p)\) is non-increasing in p for every v. \(\square\)
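
The fixed point defined by (8) can also be computed numerically. The sketch below is not from the paper: it assumes a particular conditional distribution \(G(w|\nu )=(w/\overline{w})^{1+\nu }\)—decreasing and convex in \(\nu ,\) consistent with the properties used in (9)–(11)—and illustrative parameter values, solves (8) by bisection, and checks that effort is non-increasing in the dissolution price p, the monotonicity used above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative assumptions (not from the paper): w_bar, delta, alpha, and a
# conditional resale-value cdf G(w|nu) = (w/w_bar)**(1 + nu), which is decreasing
# in nu (more effort shifts w up) and convex in nu.
w_bar, delta, alpha = 10.0, 0.8, 0.6

def dG_dnu(w, nu):
    return (w / w_bar) ** (1 + nu) * np.log(w / w_bar)

def effort(theta, v, p):
    """Solve the first-order condition (8) for e* by bisection on [0, 1]."""
    def foc(e):
        integral, _ = quad(lambda w: dG_dnu(w, e * v), p, w_bar)
        return theta - delta * integral - (1 + delta * alpha) * e
    return brentq(foc, 0.0, 1.0)

theta, v = 0.5, 2.0
prices = np.linspace(0.0, 0.9 * w_bar, 10)
efforts = [effort(theta, v, p) for p in prices]
print(np.round(efforts, 4))
# Key step in the proof: e* is non-increasing in the dissolution price p.
assert all(e1 >= e2 - 1e-9 for e1, e2 in zip(efforts, efforts[1:]))
```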

Proof of Corollary 3

With the principal committed to \(p=0,\) the agent’s choice of effort is \(e^{*}(\theta )=\frac{\theta +\delta \alpha }{1+\delta \alpha },\) and the ex-ante aggregate surplus is:

$$\begin{aligned} {\mathcal {S}}_0(\theta )\,{:}{=}\,\frac{1+\delta \alpha }{2}\int _0^{\overline{v}}\left[ 2\frac{\theta (v)+\delta \alpha }{1+\delta \alpha } -\left( \frac{\theta (v)+\delta \alpha }{1+\delta \alpha }\right) ^2\right] vf(v){\mathrm {d}}v. \end{aligned}$$

The term in square brackets is bounded above by 1. This bound is attained by setting \(\theta (v)=1\) for all \(v\in [0,\overline{v}].\) To ensure ex-ante participation, the payment must be no greater than \(\frac{1+\delta \alpha }{2}E(v).\) \(\square\)

Proof of Proposition 6

A partnership-constitution mechanism is now given by the tuple \((q,\theta _{1},\theta _{2},t_{1},t_{2}),\) where \(q:\left[ 0,\overline{v}\right] ^{2}\rightarrow \{0,1\}\) is the partner-selection function—with \(q(v_{1},v_{2})=1\) denoting that agent 1 is selected and \(q(v_{1},v_{2})=0\) denoting that agent 2 is selected instead—and \((\theta _{i},t_{i})\) is the contract if agent \(i=1,2\) is chosen. Once an agent is selected, the problem is the same as before; thus, incentive compatibility for the agents leads to:

$$\begin{aligned} U_{Ai}(v_{i},v_{-i})=U_{Ai}(0,v_{-i})+\frac{1+\delta \alpha }{2} \int _{0}^{v_{i}}\theta _{i}(\epsilon ,v_{-i})^{2}{\mathrm {d}}\epsilon . \end{aligned}$$

In any incentive-compatible mechanism where the lowest-type agents are held to their outside option, denoting the profile of agents’ types by \(\mathbf{v}=(v_{1},v_{2}),\) we have:

$$\begin{aligned}&u_{P}(\theta ,t) \\ &\quad =\frac{1+\delta \alpha }{2}\int _{0}^{\overline{v}}\int _{0}^{\overline{v}}\left[ q(\mathbf{v})J (\theta _{1}(\mathbf{v});v_{1})+(1-q(\mathbf{v}))J(\theta _{2}(\mathbf{v});v_{2})\right] f(v_{1})f(v_{2}) {\mathrm {d}}v_{1}{\mathrm {d}}v_{2} \\ &\quad \le \frac{1+\delta \alpha }{2}\int _{0}^{\overline{v}}\int _{0}^{\overline{v}}\left[ q(\mathbf{v}) J(\theta ^{*}(v_{1});v_{1})+(1-q(\mathbf{v}))J(\theta ^{*}(v_{2});v_{2})\right] f(v_{1})f(v_{2}){\mathrm {d}}v_{1}{\mathrm {d}}v_{2}\\ &\quad \le \frac{1+\delta \alpha }{2}\int _{0}^{\overline{v}}\int _{0}^{\overline{v}} \max \left\{ J(\theta ^{*}(v_{1});v_{1}),J(\theta ^{*}(v_{2});v_{2})\right\} f(v_{1})f(v_{2}){\mathrm {d}}v_{1}{\mathrm {d}}v_{2}\\ &\quad =\frac{1+\delta \alpha }{2}\int _{0}^{\overline{v}}\int _{0}^{\overline{v}}\max \left\{ \theta ^{*}(v_{1})v_{1},\theta ^{*}(v_{2})v_{2}\right\} f(v_{1})f(v_{2}){\mathrm {d}}v_{1}{\mathrm {d}}v_{2} \end{aligned}$$

where \(\theta ^{*}(v_{i})\) is as in (3). The last upper bound is attained by setting:

$$\begin{aligned} q^{*}(v_{1},v_{2})=I\left( \theta ^{*}(v_{1})v_{1}\ge \theta ^{*}(v_{2})v_{2}\right) , \end{aligned}$$

which—under Assumption 3—equals \(q^{*}(v_{1},v_{2})=I\left( v_{1}\ge v_{2}\right) .\) The transfer rules \(t_{i}^{*}(\mathbf{v})\) ensure incentive compatibility and individual rationality. \(\square\)

1.2 Additional computations for Example 4

In Example 4, the cdf and hazard rate for v are:

$$\begin{aligned} F(v)=\left\{ \begin{array}{ll} \frac{3}{8}v &{}\quad 0\le v<2, \\ \frac{2}{3}+\frac{v}{24} &{}\quad 2\le v\le 8; \\ \end{array}\right. \quad \lambda (v)=\left\{ \begin{array}{ll} \frac{3}{8-3v} &{}\quad 0\le v<2, \\ \frac{1}{8-v} &{}\quad 2\le v\le 8; \end{array}\right. \end{aligned}$$

and the function \(J(\theta ;v)\) from (5) is:

$$\begin{aligned} J(\theta ;v)=\left\{ \begin{array}{ll} 2v\theta -\frac{8}{3}\theta ^2 &{}\quad 0\le v<2, \\ 2v\theta -8\theta ^2 &{}\quad 2\le v\le 8. \end{array}\right. \end{aligned}$$

Following Toikka (2011), we perform a change of variable on v to obtain a uniformly distributed type. Take the quantile function \(v=F^{-1}(p),\) which is given by:

$$\begin{aligned} F^{-1}(p)=\left\{ \begin{array}{ll} \frac{8}{3}p &{}\quad 0\le p<\frac{3}{4}, \\ 24p-16 &{}\quad \frac{3}{4}\le p\le 1, \end{array}\right. \end{aligned}$$

and define \(\widetilde{J}(\theta ,p)\,{:}{=}\,J(\theta ,F^{-1}(p))\); we have:

$$\begin{aligned} \widetilde{J}(\theta ,p)=\left\{ \begin{array}{ll} \frac{16}{3}p\theta -\frac{8}{3}\theta ^2 &{}\quad 0\le p<\frac{3}{4}, \\ 16(3p-2)\theta -8\theta ^2 &{}\quad \frac{3}{4}\le p\le 1. \end{array}\right. \end{aligned}$$

First, we take the (piecewise) partial derivative of \(\widetilde{J}(\theta ,p)\) with respect to \(\theta\):

$$\begin{aligned} \widetilde{J}'_{\theta }(\theta ,p)=\left\{ \begin{array}{ll} \frac{16}{3}(p-\theta ) &{}\quad 0<p<\frac{3}{4}, \\ 48p-32-16\theta &{}\quad \frac{3}{4}<p\le 1. \end{array}\right. \end{aligned}$$

Next, compute \(H(\theta ,p)\,{:}{=}\,\int _{0}^{p}\widetilde{J}'_{\theta }(\theta ,r){\mathrm {d}}r\):

$$\begin{aligned} H(\theta ,p)=\left\{ \begin{array}{ll} \frac{8}{3}p^{2}-\frac{16}{3}\theta p &{}\quad 0\le p<\frac{3}{4}, \\ 12+8\theta -16(2+\theta )p+24p^2 &{}\quad \frac{3}{4}\le p\le 1. \end{array}\right. \end{aligned}$$

For each fixed \(\theta ,\) both pieces of \(H(\theta ,p)\) are parabolas in p. Thus, its convex hull, \(\overline{H}(\theta ,p),\) is obtained by “patching” \(H(\theta ,p)\) with a linear (possibly flat) function; there are two values \(p_{1}<\frac{3}{4}\) and \(p_{2}\ge \frac{3}{4}\) for p and two parameters \(a,b\in {\mathbb {R}}\) such that:

$$\begin{aligned} \overline{H}(\theta ,p)=\left\{ \begin{array}{ll} \frac{8}{3}p^{2}-\frac{16}{3}\theta p &{}\quad 0\le p<p_{1}, \\ ap+b &{}\quad p_{1}\le p<p_{2}, \\ 12+8\theta -16(2+\theta )p+24p^2 &{}\quad p_{2}\le p\le 1. \end{array}\right. \end{aligned}$$

Now, take the (piecewise) partial derivative of \(\overline{H}(\theta ,p)\) with respect to p; call it \(\overline{h}(\theta ,p)\):

$$\begin{aligned} \overline{h}(\theta ,p)\,{:}{=}\,\overline{H}'_{p}(\theta ,p)=\left\{ \begin{array}{ll} \frac{16}{3}(p-\theta ) &{}\quad 0<p<p_{1}, \\ a &{}\quad p_{1}\le p\le p_{2}, \\ 24p-16(2+\theta ) &{}\quad p_{2}<p\le 1. \end{array}\right. \end{aligned}$$

Integrate \(\overline{h}(\theta ,p)\) with respect to \(\theta\) to obtain the “ironed” objective function for the principal, \(\overline{J}(\theta ,p)\):

$$\begin{aligned} \overline{J}(\theta ,p)\,{:}{=}\,\int _{0}^{\theta }\overline{h}(s,p) {\mathrm {d}}s=\left\{ \begin{array}{ll} \frac{16}{3}p\theta -\frac{8}{3}\theta ^2 &{}\quad 0\le p<p_{1}, \\ a\theta &{}\quad p_{1}\le p<p_{2}, \\ 16(3p-2)\theta -8\theta ^2 &{}\quad p_{2}\le p\le 1. \end{array}\right. \end{aligned}$$

In order to identify \(a, b, p_{1},\) and \(p_{2},\) notice that the linear “patch” on \(H(\theta ,p)\) must paste smoothly with its two pieces:

$$\begin{aligned}&\frac{16}{3}(p_{1}-\theta )=a=48p_{2}-16(2+\theta );\quad (\text{smooth pasting}) \end{aligned}$$
(12)
$$\begin{aligned}&\frac{8}{3}p^2_{1}-\frac{16}{3}\theta p_{1}=ap_{1}+b; \quad (\text{value matching at}\ p_{1}) \end{aligned}$$
(13)
$$\begin{aligned}&24p_{2}^{2}-16(2+\theta )p_{2}+4(3+2\theta )=ap_{2}+b. \quad (\text{value matching at}\ p_{2}). \end{aligned}$$
(14)

From (12), we get \(p_{1}=9p_{2}-6-2\theta .\) From (12) and (14), \(b=-24p_{2}^{2}+4(3+2\theta ).\) Thus, (13) becomes \(\frac{8}{3}p^2_{1}=24p_{2}^{2}-4(3+2\theta ).\) Combining the latter equation with (12) yields the quadratic equation \(144 p_{2}^2-72(3+\theta )p_{2}+81+54\theta +8\theta ^2=0.\) At the same time, we maximize \(\overline{J}(\theta ,p)\) with respect to \(\theta\) piece by piece. (Notice that the boundaries for each piece, \(p_{1}\) and \(p_{2},\) may in principle depend on \(\theta\) itself.) For \(p\in (0,p_{1}),\) the maximizer is \(\theta (p)=p\); for \(p\in [p_{2},1),\) we have \(\theta (p)=3p-2\)—recall that \(p_{2}\ge \frac{3}{4}>\frac{2}{3}.\) Plugging the latter expression evaluated at \(p_{2}\) into the quadratic equation above yields:

$$\begin{aligned} 144 p_{2}^2-72(3p_{2}+1)p_{2}+81+54(3p_{2}-2)+8(3p_{2}-2)^2=0. \end{aligned}$$

Expanding the binomial squared and regrouping terms, this equation reduces to the linear equation \(-6p_{2}+5=0,\) so we get \(p_{2}=\frac{5}{6}\); thus, \(\theta (p_{2})=\frac{1}{2}\) and, from (12), \(p_{1}=\frac{1}{2}.\) The optimal ironed allocation rule in terms of p is then:

$$\begin{aligned} \overline{\theta }^{*}(p)=\left\{ \begin{array}{ll} p &{}\quad p<\frac{1}{2}, \\ \frac{1}{2} &{}\quad \frac{1}{2}\le p<\frac{5}{6}, \\ 3p-2 &{}\quad p\ge \frac{5}{6}. \end{array}\right. \end{aligned}$$

Changing variables back to v by setting \(p=F(v)\) yields \(\overline{\theta }^{*}(v)\) in Example 4.
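
As a sanity check (not part of the derivation above), the following sketch verifies numerically that \(\theta =\frac{1}{2},\) \(p_{1}=\frac{1}{2},\) and \(p_{2}=\frac{5}{6}\) satisfy the smooth-pasting and value-matching conditions (12)–(14), and that the resulting ironed allocation rule is non-decreasing in p.

```python
import numpy as np

# Candidate solution of the ironing construction for Example 4.
theta, p1, p2 = 0.5, 0.5, 5.0 / 6.0
a = 16.0 / 3.0 * (p1 - theta)                  # slope of the linear patch
b = -24.0 * p2**2 + 4.0 * (3 + 2 * theta)      # intercept of the linear patch

assert np.isclose(a, 48 * p2 - 16 * (2 + theta))                   # (12) smooth pasting
assert np.isclose(8/3 * p1**2 - 16/3 * theta * p1, a * p1 + b)     # (13) value matching at p1
assert np.isclose(24 * p2**2 - 16 * (2 + theta) * p2 + 4 * (3 + 2 * theta),
                  a * p2 + b)                                      # (14) value matching at p2

# The resulting ironed allocation rule in the quantile variable p.
def theta_bar(p):
    return np.where(p < 0.5, p, np.where(p < 5/6, 0.5, 3 * p - 2))

ps = np.linspace(0.0, 1.0, 201)
assert np.all(np.diff(theta_bar(ps)) >= -1e-12)   # non-decreasing, hence implementable
```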

1.3 Commitment to \(p=\overline{w}\)

If the principal commits to \(p=\overline{w},\) the agent’s production payoff is \({\mathrm {u}}_{A}(e;\theta ,t,v)=\theta e v+\delta \theta \overline{w}-t-(1+\delta \alpha )v\frac{e^{2}}{2},\) which is maximized at \(e^{*}(\theta )=\frac{\theta }{1+\delta \alpha }\); since the agent has a guaranteed dissolution payoff, he will exert less effort: \(e^{*}(\theta )<\theta\) for all \(\theta >0.\) Expected payoffs in a constitution contract are now:

$$\begin{aligned} u_{A}(\widetilde{v};v)&=\frac{\theta (\widetilde{v})^{2}}{2(1+\delta \alpha )}v+ \delta \theta (\widetilde{v})\overline{w}-t(\widetilde{v});\\ u_{P}(\theta ,t)&=E\left\{ (1-\theta (v))e^{*}(\theta (v))v+ \delta \left[ \alpha e^{*}(\theta (v))v -\theta (v)\overline{w}\right] \right\} +E[t(v)]. \end{aligned}$$

Proposition A1

(Price commitment—\(p=\overline{w}\)) Under Assumption 3, the optimal contract when the principal commits ex-ante to dissolution price \(p=\overline{w}\) is as follows. The optimal share allocation rule is:

$$\begin{aligned} \theta ^{*}_1(v)\,{:}{=}\,\min \left\{ (1+\delta \alpha )\frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1},1\right\} . \end{aligned}$$

Define \(\underline{v}\) as \(\hat{\lambda }^{-1}\left( \frac{1}{\delta \alpha }\right)\) if \(\max \{\hat{\lambda }(v):v\in [0,\overline{v}]\}>\frac{1}{\delta \alpha }\) and as \(\overline{v}\) otherwise. An agent of type \(v\le \underline{v}\) pays \(t_1^{*}(v)\,{:}{=}\,\frac{1}{2(1+\delta \alpha )}\left[ \theta _1^{*}(v)^{2}v-\int _{0}^{v}\theta _1^{*}(\epsilon )^{2}{\mathrm {d}}\epsilon \right] +\delta \overline{w}\theta ^{*}_1(v),\) while an agent of type \(v>\underline{v},\) if any, is awarded full ownership for \(t_{1f}^{*}(v)\,{:}{=}\,\frac{1}{2(1+\delta \alpha )}\left[ \underline{v}-\int _{0}^{\underline{v}}\theta _1^{*}(\epsilon )^{2}{\mathrm {d}}\epsilon \right] +\delta \overline{w}.\) When the partnership is dissolved, the agent sells his shares back to the principal.

Proof

In an incentive-compatible mechanism, we have:

$$\begin{aligned} t(v)=\frac{\theta (v)^{2}}{2(1+\delta \alpha )}v+\delta \overline{w}\theta (v)- \frac{1}{2(1+\delta \alpha )}\int _0^v\theta (\epsilon )^{2}{\mathrm {d}}\epsilon . \end{aligned}$$

The principal’s payoff is:

$$\begin{aligned} u_{P}(\theta ,t)=\frac{1}{1+\delta \alpha }E\left[ (1+\delta \alpha )\theta (v)v -\left( v+\frac{1}{\lambda (v)}\right) \frac{\theta (v)^2}{2}\right] . \end{aligned}$$

So, we look for the maximizer of the strictly-concave parametric function:

$$\begin{aligned} L(\theta ;v)=(1+\delta \alpha ) v\theta -\left( v+\frac{1}{\lambda (v)}\right) \frac{\theta ^{2}}{2}. \end{aligned}$$

The maximizer is \(\theta ^{*}_1(v),\) which is non-decreasing under Assumption 3; the transfer function is \(\tau ^{*}(v)\,{:}{=}\,\frac{1}{2(1+\delta \alpha )}\left[ \theta _1^{*}(v)^{2}v-\int _{0}^{v}\theta _1^{*} (\epsilon )^{2}{\mathrm {d}}\epsilon \right] +\delta \overline{w}\theta ^{*}_1(v).\) For \(v\le \underline{v},\) we have:

$$\begin{aligned} \tau ^{*}(v)=\frac{1}{2(1+\delta \alpha )}\left[ \theta _1^{*}(v)^{2}v-\int _{0}^{v}\theta _1^{*} (\epsilon )^{2}{\mathrm {d}}\epsilon \right] +\delta \overline{w}\theta ^{*}_1(v)=:t^{*}_1(v); \end{aligned}$$

for \(v>\underline{v},\) if any such v exists,

$$\begin{aligned} \tau ^{*}(v)&=\frac{1}{2(1+\delta \alpha )}\left[ v-\int _{0}^{\underline{v}}\theta _1^{*} (\epsilon )^{2}{\mathrm {d}}\epsilon -\int _{\underline{v}}^{v}{\mathrm {d}}\epsilon \right] +\delta \overline{w}\\ &=\frac{1}{2(1+\delta \alpha )}\left[ \underline{v}-\int _{0}^{\underline{v}}\theta _1^{*} (\epsilon )^{2}{\mathrm {d}}\epsilon \right] +\delta \overline{w}=:t^{*}_{1f}(v). \end{aligned}$$

This establishes the proposition. \(\square\)
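
As with the earlier contracts, the characterization can be checked numerically. The sketch below (again under the illustrative uniform assumption, which is not the paper’s) verifies that \(\min \{(1+\delta \alpha )\hat{\lambda }(v)/(\hat{\lambda }(v)+1),1\}\) attains the maximum of \(L(\theta ;v)\) on [0, 1].

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative assumptions: v ~ Uniform[0, 1], lambda(v) = 1/(1 - v),
# lambda_hat(v) = v/(1 - v), and parameters delta, alpha.
delta, alpha = 0.8, 0.6
da = delta * alpha
def lam(v):      return 1.0 / (1.0 - v)
def lam_hat(v):  return v * lam(v)

def L(theta, v):
    """Parametric objective from the proof of Proposition A1."""
    return (1 + da) * v * theta - (v + 1 / lam(v)) * theta**2 / 2

def theta1_star(v):
    return min((1 + da) * lam_hat(v) / (lam_hat(v) + 1), 1.0)

for v in np.linspace(0.01, 0.95, 50):
    numeric = minimize_scalar(lambda t: -L(t, v), bounds=(0.0, 1.0),
                              method="bounded").x
    # The closed form should do at least as well as the numerical maximizer.
    assert L(theta1_star(v), v) >= L(numeric, v) - 1e-9
```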

Corollary A1

(The value of commitment, II) Denote by \(u_{P}(\theta ^{*}_1,t^{*}_1)\) and \(U^*_{A1}(v)\) the payoffs for the principal and a type-v agent, respectively, under the contract in the proposition above. We have \(u_{P}(\theta ^{*},t^{*})\ge u_{P}(\theta ^{*}_1,t^{*}_1)\) and \(U^*_{A}(v)\ge U^*_{A1}(v)\) for all v.

Proof

For the principal, we have:

$$\begin{aligned} u_{P}(\theta ^{*}_1,t^{*}_1)&=(1+\delta \alpha )E\left\{ I(v<\underline{v})\left[ \frac{v\hat{\lambda }(v)}{\hat{\lambda }(v)+1}-\left( v+\frac{1}{\lambda (v)}\right) \frac{1}{2}\left( \frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1}\right) ^2\right] \right\} \\ &\quad +\frac{1}{1+\delta \alpha }E\left\{ I(v>\underline{v})\left[ (1+\delta \alpha )v-\left( v+\frac{1}{\lambda (v)}\right) \frac{1}{2}\right] \right\} \\ &=(1+\delta \alpha )E\left\{ I(v<\underline{v})\left[ \frac{v\hat{\lambda }(v)}{\hat{\lambda }(v)+1}-\left( v+\frac{1}{\lambda (v)}\right) \frac{1}{2}\left( \frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1} \right) ^2\right] \right\} \\ &\quad +(1+\delta \alpha )E\left\{ I(v>\underline{v})\left[ \frac{1}{1+\delta \alpha }v-\left( v+\frac{1}{\lambda (v)}\right) \frac{1}{2}\left( \frac{1}{1+\delta \alpha }\right) ^2\right] \right\} \\ &=\frac{1+\delta \alpha }{2}E\left[ I(v<\underline{v})J\left( \frac{\hat{\lambda }(v)}{\hat{\lambda }(v)+1}; v\right) +I(v>\underline{v})J\left( \frac{1}{1+\delta \alpha };v\right) \right] \\ &\le \frac{1+\delta \alpha }{2}E\left[ J\left( \theta ^{*}(v);v\right) \right] =u_{P}(\theta ^{*},t^{*}), \end{aligned}$$

where \(J(\theta ;v)\) is the function in (5). For the agent of type \(v\le \underline{v},\)

$$\begin{aligned} U^*_{A1}(v)&=\frac{1}{2(1+\delta \alpha )}\int _{0}^{v}\left( (1+\delta \alpha )\frac{\hat{\lambda }(\epsilon )}{\hat{\lambda }(\epsilon )+1}\right) ^2{\mathrm {d}}\epsilon \\ &=\frac{1+\delta \alpha }{2}\int _{0}^{v}\left( \frac{\hat{\lambda }(\epsilon )}{\hat{\lambda }(\epsilon )+1} \right) ^2{\mathrm {d}}\epsilon =U_{A}^{*}(v). \end{aligned}$$

Finally, for the agent of type \(v>\underline{v},\) if any,

$$\begin{aligned} U^*_{A1}(v)&=\frac{1}{2(1+\delta \alpha )}\left[ \int _{0}^{\underline{v}}\left( (1+\delta \alpha ) \frac{\hat{\lambda }(\epsilon )}{\hat{\lambda }(\epsilon )+1}\right) ^2{\mathrm {d}}\epsilon +\int _{\underline{v}}^v{\mathrm {d}}\epsilon \right] \\ &=\frac{1+\delta \alpha }{2}\left[ \int _{0}^{\underline{v}}\left( \frac{\hat{\lambda }(\epsilon )}{\hat{\lambda }(\epsilon )+1}\right) ^2{\mathrm {d}}\epsilon +\left( \frac{1}{1+\delta \alpha }\right) ^2(v-\underline{v})\right] \\ &\le \frac{1+\delta \alpha }{2}\left[ \int _{0}^{\underline{v}}\left( \frac{\hat{\lambda }(\epsilon )}{\hat{\lambda }(\epsilon )+1}\right) ^2{\mathrm {d}}\epsilon +\int _{\underline{v}}^{v} \left( \frac{\hat{\lambda }(\epsilon )}{\hat{\lambda }(\epsilon )+1}\right) ^2{\mathrm {d}}\epsilon \right] =U^*_{A}(v). \end{aligned}$$

This establishes the result. \(\square\)

Cite this article

Francetich, A. When partner knows best: asymmetric expertise in partnerships. Int J Game Theory 52, 363–399 (2023). https://doi.org/10.1007/s00182-022-00821-4
