
Voting over selfishly optimal income tax schedules with tax-driven migrations

Original Paper
Social Choice and Welfare

Abstract

We study majority voting over selfishly optimal nonlinear income tax schedules proposed by a continuum of workers who can migrate between two competing jurisdictions. Both skill level and migration cost are the private information of each worker, and each worker proposes the allocation schedule that maximizes the utility of her own type. We identify reasonable scenarios in which the first-order approach applies, so that the second-order sufficient condition for incentive compatibility is fulfilled; otherwise, we apply the ironing procedure developed by Brett and Weymark (Games Econ Behav 101:172–188, 2017). Under quasilinear-in-consumption preferences, we show that the tax schedule proposed by the median skill type is the Condorcet winner, and we provide a complete characterization of this schedule. It features negative marginal tax rates for low-skilled workers and positive rates for high-skilled workers with small migration elasticities; the marginal tax rates at the bottom and top skill levels cannot be unambiguously signed. Moreover, we detail the conditions under which migration induces equilibrium marginal tax rates for both low- and high-skilled workers that are uniformly higher or lower than their autarky counterparts, leading us to conclude that geographic mobility does not always limit the government’s ability to redistribute income via tax-transfer systems.


Notes

  1. In a sense, the equity-efficiency tradeoff governing the design of a socially optimal tax-transfer system still applies to our study. On the one hand, the international mobility of high-skilled workers in the context of globalization makes the efficiency concern more relevant, while on the other hand the rising income inequality induced by globalization calls for more public transfers and subsidies to low-skilled workers (or the socially and economically disadvantaged) — especially when the majority rule under the one-person-one-vote principle is used to select tax schedules. In practice, both the United States (e.g., Piketty 2014) and China (e.g., Huang et al. 2020, 2021) face an intensifying equity-efficiency tradeoff—i.e., maintaining grassroots political support while guaranteeing economic efficiency.

  2. Political factors are likely to considerably affect the structure of tax and transfer systems in Western democracies in the coming years. Two manifestations, for instance, are (1) the rise of extreme right parties in the EU against the backdrop of labor migration and concerns about the implications for the sustainability of the welfare state, and (2) the debate on the desirable marginal tax rates levied on top earners during the 2020 US presidential election (e.g., the proposals of Democratic Party candidates such as Senators Bernie Sanders and Elizabeth Warren). Some trends favor the wealthy, perhaps due to political lobbying and the role of certain die-hard constituencies. Because our theoretical paper must adopt a political economy model that remains tractable, we leave some of these issues unexplored.

  3. There are some exceptions, such as the US and Israel, where citizens pay domestic income tax based on their global income.

  4. There are other conceptual frameworks for income redistribution under electoral competition besides the citizen-candidate model adopted here, such as the conventional Downsian model and its basic variants (e.g., vote-share maximizing politicians, a winner-take-all system, competition among politicians who differ in a quality dimension, policy-motivated candidates with commitment problems, and endogenous entry of politicians). In these frameworks, politicians propose optimal income tax schedules and citizens (voters) then choose among the proposals according to their preferences. By addressing distributive politics, our paper complements those analyses (e.g., Laslier and Picard 2002; Bierbrauer and Boyer 2013, 2016).

  5. Nonetheless, as pointed out by a referee, one cannot exclude the possibility that other factors may determine individual preferences regarding redistribution, such as political affinity with left- or right-wing policies, beliefs and representations about social mobility, fairness concerns, and natives’ attitudes towards (and perceptions of) immigration, as shown, for example, empirically by Alesina et al. (2018) and Alesina et al. (2019).

  6. In a population whose majority consists of “poor” individuals, Höchtl et al. (2012) experimentally find that redistribution outcomes appear as if all voters were exclusively motivated by self-interest. We therefore argue that it is somewhat reasonable to focus on selfishly optimal income taxes in our political economy setting.

  7. This paper thus provides theoretical support for the empirical finding of Jacobs et al. (2017) that all Dutch political parties give greater political weight to middle incomes than to the poor and rich.

  8. Given that income taxation in the United States is based on citizenship rather than residence, the migration elasticities of high incomes may not be that large. This prediction helps explain why effective marginal tax rates in the United States are negative for low incomes and positive for high incomes (see Congressional Budget Office 2012).

  9. Meltzer and Richard (1981) and Hindriks and De Donder (2003) have also studied voting over selfishly optimal tax schedules, but the former focuses on linear taxes and the latter on quadratic tax schedules.

  10. Morelli et al. (2012) and Bierbrauer et al. (2013) employ a simpler assumption, namely that migration costs and skill levels are independently distributed. Assuming that migration costs are distributed identically and independently across skill levels, Blumkin et al. (2015) also show that the migration elasticity increases with an individual’s skill level, which seems to be consistent with the empirical finding of Docquier and Marfouk (2006) and Simula and Trannoy (2010) that higher-skilled individuals are more likely to migrate.

  11. This assumption not only simplifies the theoretical derivation; it also seems empirically reasonable because it eliminates the income effect on taxable income (e.g., Gruber and Saez 2002). Under risk-neutral preferences, c could be interpreted as a nonnegative wealth transfer from the government (or the mechanism designer).

  12. This principle states that there is an equivalence between admissible allocations and those that are decentralizable via an income tax system.

  13. If individual skills are drawn independently, Bierbrauer (2011) proves that the optimal sophisticated mechanism associated with strategic interdependence is a simple mechanism as long as individuals exhibit decreasing risk aversion.

  14. Throughout, the superscripts “R” and “M” stand for Rawlsian (i.e., maxi-min) and maxi-max, respectively.

  15. We use the superscripts “\(M*\)” and “\(R*\)” to differentiate the complete solution from that obtained under the first-order approach.

  16. For example, if a worker proposes a schedule that causes only some types to relocate to the other jurisdiction, then those types do not have the chance to vote on this proposal. Moreover, if the schedules proposed by two different individuals result in different sets of types being residents, then it is more difficult to determine which types should be allowed to vote.

  17. In some developed countries, immigrants, especially newcomers, cannot participate in the political process; only fully fledged citizens can participate in collective decision-making. Many countries adopt a delayed rather than immediate assimilation policy, with a minimum residency period as a prerequisite for citizenship, ranging from, e.g., 3 years (the Netherlands, Australia, and Canada) to 10 years (Switzerland).

  18. These conditions relate to the following four indexes: (1) whether the ex post measure of workers of all skill levels is greater than, equal to, or smaller than the ex ante one; (2) whether the net labor inflow of skill levels below the ex ante median skill level is positive or not; (3) whether the net labor inflow of skill levels above the ex ante median skill level is positive or not; and (4) the relative magnitude of these two net labor inflows.

  19. We use the superscript “MR” because this threshold of migration elasticity is associated with the comparison of maxi-max (i.e., “M”) and Rawlsian (i.e., “R”) marginal tax rates.

  20. In what follows, the superscripts “I”, “O”, and “NI” stand for inflow, outflow, and net inflow, respectively.

References

  • Agranov M, Palfrey TR (2015) Equilibrium tax rates and income redistribution: a laboratory study. J Public Econ 130:45–58

  • Akcigit U, Baslandze S, Stantcheva S (2016) Taxation and the international mobility of inventors. Am Econ Rev 106(10):2930–2981

  • Alesina A, Miano A, Stantcheva S (2018) Immigration and redistribution. NBER Working Paper No. 24733

  • Alesina A, Murard E, Rapoport H (2019) Immigration and preferences for redistribution in Europe. NBER Working Paper No. 25562

  • Bierbrauer FJ (2011) On the optimality of optimal income taxation. J Econ Theory 146(5):2105–2116

  • Bierbrauer FJ, Boyer PC (2013) Political competition and Mirrleesian income taxation: a first pass. J Public Econ 103:1–14

  • Bierbrauer FJ, Boyer PC (2016) Efficiency, welfare, and political competition. Quart J Econ 131(1):461–518

  • Bierbrauer FJ, Brett C, Weymark JA (2013) Strategic nonlinear income tax competition with perfect labor mobility. Games Econ Behav 82:292–311

  • Black D (1948) On the rationale of group decision-making. J Polit Econ 56(1):23–34

  • Blumkin T, Sadka E, Shem-Tov Y (2015) International tax competition: zero tax rate at the top re-established. Int Tax Public Financ 22(5):760–776

  • Bohn H, Stuart C (2013) Revenue extraction by median voters. Unpublished manuscript, Department of Economics, University of California, Santa Barbara

  • Brett C (2016) Probabilistic voting over nonlinear income taxes with international migration. Unpublished manuscript, Department of Economics, Mount Allison University

  • Brett C, Weymark JA (2011) How optimal nonlinear income taxes change when the distribution of the population changes. J Public Econ 95(11–12):1239–1247

  • Brett C, Weymark JA (2016) Voting over selfishly optimal nonlinear income tax schedules with a minimum-utility constraint. J Math Econ 67:18–31

  • Brett C, Weymark JA (2017) Voting over selfishly optimal nonlinear income tax schedules. Games Econ Behav 101:172–188

  • Brett C, Weymark JA (2020) Majority rule and selfishly optimal nonlinear income tax schedules with discrete skill levels. Soc Choice Welf 54:337–362

  • Congressional Budget Office (2012) Effective marginal tax rates for low- and moderate-income workers. Publication No. 4149, Congressional Budget Office, Congress of the United States

  • Corneo G, Neher F (2015) Democratic redistribution and rule of the majority. Eur J Polit Econ 40(Part A):96–109

  • Cremer H, Pestieau P (1998) Social insurance, majority voting and labor mobility. J Public Econ 68(3):397–420

  • Dai D (2020) Voting over selfishly optimal tax schedules: can Pigouvian tax redistribute income? J Public Econ Theory 22(5):1660–1686

  • Dai D, Gao W, Tian G (2020) Relativity, mobility, and optimal nonlinear income taxation in an open economy. J Econ Behav Organ 172(C):57–82

  • Diamond P (1998) Optimal income taxation: an example with a U-shaped pattern of optimal marginal tax rates. Am Econ Rev 88(1):83–95

  • Docquier F, Marfouk A (2006) International migration by educational attainment (1990–2000). In: Ozden C, Schiff M (eds) International migration, remittances and development, chap 5. Palgrave Macmillan, New York

  • Gans JS, Smart M (1996) Majority voting with single-crossing preferences. J Public Econ 59(2):219–237

  • Gruber J, Saez E (2002) The elasticity of taxable income: evidence and implications. J Public Econ 84(1):1–32

  • Gründler K, Köllner S (2017) Determinants of governmental redistribution: income distribution, development levels, and the role of perceptions. J Comp Econ 45(4):930–962

  • Guesnerie R (1995) A contribution to the pure theory of taxation. Cambridge University Press, Cambridge

  • Hamilton J, Pestieau P (2005) Optimal income taxation and the ability distribution: implications for migration equilibria. Int Tax Public Financ 12(1):29–45

  • Hammond PJ (1979) Straightforward individual incentive compatibility in large economies. Rev Econ Stud 46(2):263–282

  • Hindriks J (2001) Mobility and redistributive politics. J Public Econ Theory 3(1):95–120

  • Hindriks J, De Donder P (2003) The politics of progressive income taxation with incentive effects. J Public Econ 87(11):2491–2505

  • Höchtl W, Sausgruber R, Tyran J-R (2012) Inequality aversion and voting on redistribution. Eur Econ Rev 56(7):1406–1421

  • Huang KXD, Liu Z, Tian G (2020) Promote competitive neutrality to facilitate China's economic development: outlook, policy simulations, and reform implementation – a summary of the annual SUFE macroeconomic report (2019–2020). Front Econ China 15(1):1–24

  • Huang KXD, Li S, Tian G (2021) Chinese economy under the new “Dual Circulation” strategy: challenges and opportunities – a summary of the annual SUFE macroeconomic report (2020–2021). Front Econ China 16(1):1–29

  • Jacobs B, Jongen ELW, Zoutman FT (2017) Revealed social preferences of Dutch political parties. J Public Econ 156(C):81–100

  • Kleven HJ, Landais C, Saez E (2013) Taxation and international migration of superstars: evidence from the European football market. Am Econ Rev 103(5):1892–1924

  • Kleven HJ, Landais C, Saez E, Schultz E (2014) Migration and wage effects of taxing top earners: evidence from the foreigners’ tax scheme in Denmark. Quart J Econ 129(1):333–378

  • Laslier J, Picard N (2002) Distributive politics and electoral competition. J Econ Theory 103(1):106–130

  • Lehmann E, Simula L, Trannoy A (2014) Tax me if you can! Optimal nonlinear income tax between competing governments. Quart J Econ 129(4):1995–2030

  • Leite-Monteiro M (1997) Redistributive policy with labour mobility across countries. J Public Econ 65(2):229–244

  • Meltzer AH, Richard SF (1981) A rational theory of the size of government. J Polit Econ 89(5):914–927

  • Mirrlees JA (1971) An exploration in the theory of optimum income taxation. Rev Econ Stud 38(2):175–208

  • Mirrlees JA (1982) Migration and optimal income taxes. J Public Econ 18(3):319–341

  • Morelli M, Yang H, Ye L (2012) Competitive nonlinear taxation and constitutional choice. Am Econ J: Microecon 4(1):142–175

  • Piketty T (2014) Capital in the twenty-first century. Belknap Press, Cambridge, MA

  • Piketty T, Saez E (2013) Optimal labor income taxation. In: Handbook of public economics, vol 5, chap 7, pp 391–474

  • Roberts KWS (1977) Voting over income tax schedules. J Public Econ 8(3):329–340

  • Röell AA (2012) Voting over nonlinear income tax schedules. Unpublished manuscript, School of International and Public Affairs, Columbia University

  • Simula L, Trannoy A (2010) Optimal income tax under the threat of migration by top-income earners. J Public Econ 94(1–2):163–173

  • Simula L, Trannoy A (2012) Shall we keep the highly skilled at home? The optimal income tax perspective. Soc Choice Welf 39(4):751–782

  • Stigler GJ (1957) The tenable range of functions of local government. In: Federal expenditure policy for economic growth and stability: papers submitted by panelists appearing before the Subcommittee on Fiscal Policy. Joint Economic Committee, Congress of the United States, Washington

  • Stigler GJ (1970) Director’s law of public income redistribution. J Law Econ 13(1):1–10

  • Topkis DM (1978) Minimizing a submodular function on a lattice. Oper Res 26(2):305–321


Author information

Correspondence to Darong Dai.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This paper is a revised chapter of the Ph.D. thesis of Darong Dai submitted to Texas A&M University in May 2018. We are particularly indebted to John Weymark for his constructive comments and generous encouragement. Helpful comments and suggestions from an associate editor and a referee are gratefully acknowledged. We are also grateful for the helpful feedback and comments from Yonghong An, Pedro Bento, Laurent Bouton, Klaus Desmet, Simona Fabrizi, Andrew T. Foerster, Andrew Glover, Dennis W. Jansen, Claus Thustrup Kreiner, Quan Li, Jason Lindo, Liqun Liu, Chen-Yu Pan, Tatevik Sekhposyan, Sang-Chul Suh, Kei-Mu Yi, Yuzhe Zhang, Sarah Zubairy, and the participants of the Macro Research Group and the Macro Student Conference at TAMU (October 2017), the Midwest Economic Theory Conference at SMU (November 2017), the 14th Meeting of the Society for Social Choice and Welfare (June 2018), the 2018 and 2021 Asian Meetings of the Econometric Society, the 20th Annual Meeting of the Association for Public Economic Theory (July 2019), and the Joint Congress of the European Economic Association and the Econometric Society (August 2019). Financial support from the National Natural Science Foundation of China (NSFC-72003115) and the Key Laboratory of Mathematical Economics (SUFE) at the Ministry of Education of China is gratefully acknowledged by Darong Dai. The usual disclaimer applies.

Appendices

Appendix: Proofs

Proof of Lemma 3.1

By using (6), we have

$$\begin{aligned} U(w)=U({\underline{w}})+\int _{{\underline{w}}}^{w}\frac{y(t)}{t^{2}}h'\left( \frac{y(t)}{t} \right) dt. \end{aligned}$$
(38)

Integrating over the ex post support of the skill distribution yields

$$\begin{aligned} \int _{{\underline{w}}}^{{\overline{w}}}U(w){\tilde{f}}(w)dw =U({\underline{w}})\int _{{\underline{w}}}^{{\overline{w}}}{\tilde{f}}(w)dw+ \int _{{\underline{w}}}^{{\overline{w}}}\left[ \int _{{\underline{w}}}^{w}\frac{y(t)}{t^{2}}h'\left( \frac{y(t)}{t} \right) dt \right] {\tilde{f}}(w)dw. \end{aligned}$$
(39)

Reversing the order of integration in (39) gives rise to

$$\begin{aligned} \int _{{\underline{w}}}^{{\overline{w}}}U(w){\tilde{f}}(w)dw =U({\underline{w}})\int _{{\underline{w}}}^{{\overline{w}}}{\tilde{f}}(w)dw+ \int _{{\underline{w}}}^{{\overline{w}}}\frac{y(t)}{t^{2}}h'\left( \frac{y(t)}{t} \right) \left[ \int _{t}^{{\overline{w}}}{\tilde{f}}(w) dw\right] dt. \end{aligned}$$
(40)
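The exchange of the order of integration is a routine Fubini-type step over the triangular region \(\{(w,t): {\underline{w}}\le t\le w\le {\overline{w}}\}\); written out, it is simply

$$\begin{aligned} \int _{{\underline{w}}}^{{\overline{w}}}\left[ \int _{{\underline{w}}}^{w}\frac{y(t)}{t^{2}}h'\left( \frac{y(t)}{t} \right) dt \right] {\tilde{f}}(w)dw = \int _{{\underline{w}}}^{{\overline{w}}}\frac{y(t)}{t^{2}}h'\left( \frac{y(t)}{t} \right) \left[ \int _{t}^{{\overline{w}}}{\tilde{f}}(w) dw\right] dt. \end{aligned}$$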

Also, it follows from (5) that

$$\begin{aligned} \int _{{\underline{w}}}^{{\overline{w}}}U(w){\tilde{f}}(w)dw =\int _{{\underline{w}}}^{{\overline{w}}}c(w){\tilde{f}}(w)dw-\int _{{\underline{w}}}^{{\overline{w}}}h\left( \frac{y(w)}{w} \right) {\tilde{f}}(w)dw. \end{aligned}$$
(41)

Applying the equality form of (11) to (41) shows that

$$\begin{aligned} \int _{{\underline{w}}}^{{\overline{w}}}U(w){\tilde{f}}(w)dw =\int _{{\underline{w}}}^{{\overline{w}}}y(w){\tilde{f}}(w)dw-\int _{{\underline{w}}}^{{\overline{w}}}h\left( \frac{y(w)}{w} \right) {\tilde{f}}(w)dw. \end{aligned}$$
(42)

Combining (40) and (42) leads us to

$$\begin{aligned} \begin{aligned} U({\underline{w}})\int _{{\underline{w}}}^{{\overline{w}}}{\tilde{f}}(w)dw=&\int _{{\underline{w}}}^{{\overline{w}}}y(w){\tilde{f}}(w)dw-\int _{{\underline{w}}}^{{\overline{w}}}h\left( \frac{y(w)}{w} \right) {\tilde{f}}(w)dw\\&-\int _{{\underline{w}}}^{{\overline{w}}}\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \left[ \int _{w}^{{\overline{w}}}{\tilde{f}}(t) dt\right] dw. \end{aligned} \end{aligned}$$
(43)

Applying (10), we can rewrite (43) as

$$\begin{aligned} U({\underline{w}})=\frac{1}{\Gamma ({\underline{w}},{\overline{w}})} \int _{{\underline{w}}}^{{\overline{w}}}\left\{ \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] {\tilde{f}}(w)-\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \Gamma (w,{\overline{w}})\right\} dw. \end{aligned}$$
(44)

Substituting (44) into (38) and setting \(w=k\), the maximand in (14) is established. \(\square\)

Proof of Lemma 3.2

By setting \(k={\underline{w}}\), the maximand of problem (14) is given by (44). The corresponding maximization problem can be solved point-wise. Setting \(\partial U({\underline{w}})/ \partial y(w)=0\) and rearranging terms, we obtain

$$\begin{aligned} \begin{aligned}&\left[ 1-\frac{1}{w}h'\left( \frac{y(w)}{w} \right) \right] {\tilde{f}}(w) +\left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{\partial {\tilde{f}}(w)}{\partial y(w)}-\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \frac{\partial \Gamma (w,{\overline{w}})}{\partial y(w)} \\&\quad = \ U({\underline{w}})\frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial y(w)} + \left[ \frac{1}{w^{2}}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{3}}h''\left( \frac{y(w)}{w} \right) \right] \Gamma (w,{\overline{w}}). \end{aligned}\end{aligned}$$
(45)

Since it is obvious from (10) that

$$\begin{aligned} \frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial y(w)} = \frac{\partial \Gamma (w,{\overline{w}})}{\partial y(w)}=\frac{\partial {\tilde{f}}(w)}{\partial y(w)}, \end{aligned}$$
(46)

applying (46) to (45) and rearranging terms establishes the desired (15). Setting \(k=w\) in the maximand of problem (14), for all \(w\in ({\underline{w}},{\overline{w}})\), (17) is immediate by evaluating

$$\begin{aligned} \frac{\partial U(w)}{\partial y(w)}= \frac{\partial U({\underline{w}})}{\partial y(w)} + \frac{\partial }{\partial y(w)}\int _{{\underline{w}}}^{w}\frac{y(t)}{t^{2}}h'\left( \frac{y(t)}{t} \right) dt =\frac{\partial }{\partial y(w)}\int _{{\underline{w}}}^{w}\frac{y(t)}{t^{2}}h'\left( \frac{y(t)}{t} \right) dt \end{aligned}$$

at the maxi-min income schedule. Also, by using (8), (16) is immediate. \(\square\)
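As a sanity check on this first-order condition (a special case, not needed for the argument), suppose the migration response is shut down so that \(\partial {\tilde{f}}(w)/\partial y(w)=0\) for all w; by (46), all of the \(\Gamma\)-derivatives then vanish as well, and (45) collapses to

$$\begin{aligned} \left[ 1-\frac{1}{w}h'\left( \frac{y(w)}{w} \right) \right] {\tilde{f}}(w) = \left[ \frac{1}{w^{2}}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{3}}h''\left( \frac{y(w)}{w} \right) \right] \Gamma (w,{\overline{w}}), \end{aligned}$$

which has the same form as the familiar closed-economy maxi-min (Rawlsian) first-order condition, with the ex post density \({\tilde{f}}\) in place of the exogenous one; given \(h'>0\), \(h''\ge 0\), and a positive mass of types above w, it implies positive marginal tax rates at interior skill levels.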

Proof of Lemma 3.3

By setting \(k={\overline{w}}\), the maximand of problem (14) can be written as

$$\begin{aligned} \begin{aligned} U({\overline{w}})&=\frac{1}{\Gamma ({\underline{w}},{\overline{w}})} \int _{{\underline{w}}}^{{\overline{w}}}\left\{ \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] {\tilde{f}}(w)-\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \Gamma (w,{\overline{w}})\right\} dw \\&\quad +\int _{{\underline{w}}}^{{\overline{w}}}\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) dw. \end{aligned} \end{aligned}$$
(47)

The maximization problem can be solved point-wise. Applying \(\partial U({\overline{w}})/ \partial y(w)=0\) and (46) to (47) and rearranging terms, we obtain

$$\begin{aligned} \begin{aligned}&\left[ 1-\frac{1}{w}h'\left( \frac{y(w)}{w} \right) \right] {\tilde{f}}(w) +\left[ y(w)-h\left( \frac{y(w)}{w} \right) -\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) -U({\underline{w}}) \right] \frac{\partial {\tilde{f}}(w)}{\partial y(w)} \\&\quad = \left[ \frac{1}{w^{2}}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{3}}h''\left( \frac{y(w)}{w} \right) \right] [\Gamma (w,{\overline{w}})-\Gamma ({\underline{w}},{\overline{w}})]. \end{aligned} \end{aligned}$$
(48)

Note from (38) that \(\partial U(w)/ \partial y(w)=\partial U({\overline{w}})/ \partial y(w)=0\) for all \(w\in ({\underline{w}},{\overline{w}})\) whenever evaluated at the maxi-max income schedule. Applying this to (16) reveals that \(\partial {\tilde{f}}(w)/\partial y(w)=0\) for all \(w\in ({\underline{w}},{\overline{w}})\), so (18) follows immediately from (48). Moreover, it follows from (47) that the first-order condition can be expressed as:

$$\begin{aligned} \frac{\partial U({\overline{w}})}{\partial y(w)}= \frac{\partial U({\underline{w}})}{\partial y(w)} + \frac{\partial }{\partial y(w)}\int _{{\underline{w}}}^{{\overline{w}}}\frac{y(t)}{t^{2}}h'\left( \frac{y(t)}{t} \right) dt=0, \end{aligned}$$

evaluating which at \(y(w)=y({\underline{w}})\) immediately gives (21). Evaluating (16) at \(w={\underline{w}}\) yields (20). Then, we obtain (19) by evaluating (48) at \(w={\underline{w}}\). \(\square\)
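A sign observation that is consistent with the discussion of Proposition 3.3 below (it relies only on the maintained assumptions \(h'>0\), \(h''\ge 0\), and a strictly positive ex post density): since \(\partial {\tilde{f}}(w)/\partial y(w)=0\) at the maxi-max income schedule, (48) reduces to

$$\begin{aligned} \left[ 1-\frac{1}{w}h'\left( \frac{y(w)}{w} \right) \right] {\tilde{f}}(w) = \left[ \frac{1}{w^{2}}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{3}}h''\left( \frac{y(w)}{w} \right) \right] \left[ \Gamma (w,{\overline{w}})-\Gamma ({\underline{w}},{\overline{w}})\right] , \end{aligned}$$

and because \(\Gamma (w,{\overline{w}})-\Gamma ({\underline{w}},{\overline{w}})=-\Gamma ({\underline{w}},w)<0\) for interior w while the bracketed term on the right is positive, we must have \(h'(y(w)/w)/w>1\). In other words, the maxi-max program distorts the incomes of interior types upward, with a negative implied marginal tax rate \(1-h'(y(w)/w)/w\).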

Proof of Proposition 3.1

Using the maximization problem (14) stated in Lemma 3.1, it is easy to show that

$$\begin{aligned} \frac{\partial U(k)}{\partial y(w)} = \frac{\partial U({\overline{w}})}{\partial y(w)} \ \ \text {for} \ \forall w\in [{\underline{w}},k) \end{aligned}$$

and

$$\begin{aligned} \frac{\partial U(k)}{\partial y(w)} = \frac{\partial U({\underline{w}})}{\partial y(w)} \ \ \text {for} \ \forall w\in (k,{\overline{w}}], \end{aligned}$$

for all \(k\in ({\underline{w}},{\overline{w}})\). Therefore, the desired income schedule (22) follows from a direct application of Lemmas 3.2 and 3.3. \(\square\)
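Schematically (our shorthand; the precise statement is (22)), writing \(y^{M}(\cdot )\) and \(y^{R}(\cdot )\) for the first-order maxi-max and maxi-min income schedules of Lemmas 3.3 and 3.2, the selfishly optimal income schedule of proposer k under the first-order approach takes the form

$$\begin{aligned} y^{k}(w)= {\left\{ \begin{array}{ll} y^{M}(w) &{} \hbox { for}\ w\in [{\underline{w}},k),\\ y^{R}(w) &{} \hbox { for}\ w\in (k,{\overline{w}}], \end{array}\right. } \end{aligned}$$

that is, types below the proposer are treated as the maxi-max program treats them, and types above as the maxi-min program does.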

Proof of Proposition 3.2

First, (27) in part (i) is immediate by applying (20)–(21) to the tax formula in (24). Part (ii) is also immediate by (25). Using the chain rule together with (8) and (9), we have

$$\begin{aligned} \frac{\partial {\tilde{f}}(w)}{\partial y(w)} \frac{1}{{\tilde{f}}(w)} =\frac{\partial {\tilde{f}}(w)}{\partial \Delta (w)} \frac{c(w)}{{\tilde{f}}(w)}\frac{\partial U(w)}{\partial y(w)} \frac{1}{c(w)} = {\tilde{\theta }}(w)\frac{\partial U(w)}{\partial y(w)} \frac{1}{c(w)}. \end{aligned}$$
(49)

Then we get from (49), (17), (26), (2), (5)–(6) and the condition

$$\begin{aligned}&y(w)-h\left( \frac{y(w)}{w} \right) - \frac{y(w)}{w^2}h'\left( \frac{y(w)}{w} \right) \nonumber \\&\quad =T^{R}(y(w))+U(w)-U'(w)>U({\underline{w}}) \end{aligned}$$
(50)

that

$$\begin{aligned} \tau ^{R}(w)=\underset{>0}{\underbrace{\frac{\partial U(w)}{\partial y(w)}}} \left\{ \underset{>0}{\underbrace{\frac{\Gamma (w,{\overline{w}})}{{\tilde{f}}(w)}}} \ - \ \underset{>0}{\underbrace{\frac{{\tilde{\theta }}(w)}{c(w)} \left[ y(w)-h\left( \frac{y(w)}{w} \right) - \frac{y(w)}{w^2}h'\left( \frac{y(w)}{w} \right) - U({\underline{w}})\right] }} \right\} , \end{aligned}$$

by which assertion (28) is immediate. It follows from (6) that \(U({\overline{w}})>U({\underline{w}})\). Making use of (50) again shows the desired assertion (29). \(\square\)
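Since \(\partial U(w)/\partial y(w)>0\) and, by (50), the term in square brackets is positive, the displayed expression for \(\tau ^{R}(w)\) above also gives a convenient sign condition in terms of the migration elasticity (a restatement of how assertion (28) is obtained, not an additional result):

$$\begin{aligned} \tau ^{R}(w)>0 \ \Longleftrightarrow \ {\tilde{\theta }}(w)<\frac{c(w)\Gamma (w,{\overline{w}})}{\left[ y(w)-h( y(w)/w )- (y(w)/w^2)h'( y(w)/w)- U({\underline{w}})\right] {\tilde{f}}(w)}, \end{aligned}$$

with the analogous equivalences for \(\tau ^{R}(w)=0\) and \(\tau ^{R}(w)<0\).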

Proof of Proposition 3.3

We complete the proof in three steps.

\({\underline{\mathrm{Step 1.}}}\) It follows from (25) and (26) that

$$\begin{aligned} \begin{aligned} \tau ^{M}(w)-\tau ^{R}(w)= \&\frac{\partial {\tilde{f}}(w)}{\partial y(w)} \frac{1}{{\tilde{f}}(w)}\left[ y(w)-h\left( \frac{y(w)}{w} \right) - \frac{y(w)}{w^2}h'\left( \frac{y(w)}{w} \right) -U({\underline{w}})\right] \\&-\frac{\Gamma ({\underline{w}},{\overline{w}})}{w{\tilde{f}}(w)}\left[ \frac{1}{w}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{2}}h''\left( \frac{y(w)}{w} \right) \right] . \end{aligned} \end{aligned}$$
(51)

Applying (50) to (51), we immediately get \(\tau ^{M}(w)<\tau ^{R}(w)\) for \(T^{R}(y(w))\le U'(w)+U({\underline{w}})-U(w)\). Then we have either \(\tau ^{M}(w)<0<\tau ^{R}(w)\) or \(\tau ^{M}(w)<\tau ^{R}(w)\le 0\). If \(\tau ^{M}(w)<0<\tau ^{R}(w)\), then under tax schedule \(\tau ^{M}(\cdot )\) each type-w worker has her income distorted upward compared to the full-information solution, whereas her income is distorted downward compared to the full-information solution under tax schedule \(\tau ^{R}(\cdot )\). If \(\tau ^{M}(w)<\tau ^{R}(w)\le 0\), then each type-w worker has her income distorted upward compared to the full-information solution under both tax schedules, but the magnitude of distortion is greater under tax schedule \(\tau ^{M}(\cdot )\). Thus, regardless of which case we consider, we see an upward discontinuity of the income schedule, as desired in part (i-a).

\({\underline{\mathrm{Step 2.}}}\) If, however, \(T^{R}(y(w))> U'(w)+U({\underline{w}})-U(w)\), then applying (49) and (50) to (51) yields

$$\begin{aligned}&\tau ^{M}(w)-\tau ^{R}(w)\\&\quad = \underset{>0}{\underbrace{\frac{\partial U(w)}{\partial y(w)}}} \left\{ \underset{>0}{\underbrace{\frac{{\tilde{\theta }}(w)}{c(w)} \left[ y(w)-h\left( \frac{y(w)}{w} \right) - \frac{y(w)}{w^2}h'\left( \frac{y(w)}{w} \right) - U({\underline{w}})\right] }} \ - \ \underset{>0}{\underbrace{\frac{\Gamma ({\underline{w}},{\overline{w}})}{{\tilde{f}}(w)}}} \right\} , \end{aligned}$$

by which we arrive at the following result:

$$\begin{aligned} \tau ^{M}(w) {\left\{ \begin{array}{ll}<\tau ^{R}(w) &{} \hbox { for}\ {\tilde{\theta }}(w)<{\tilde{\theta }}^{*}(w),\\ =\tau ^{R}(w) &{} \hbox { for}\ {\tilde{\theta }}(w)={\tilde{\theta }}^{*}(w),\\>\tau ^{R}(w) &{} \hbox { for}\ {\tilde{\theta }}(w)>{\tilde{\theta }}^{*}(w), \end{array}\right. } \end{aligned}$$
(52)

in which the critical value of migration elasticity is given by

$$\begin{aligned} {\tilde{\theta }}^{*}(w)=\frac{c(w)\Gamma ({\underline{w}},{\overline{w}})}{\left[ y(w)-h( y(w)/w )- (y(w)/w^2)h'( y(w)/w)- U({\underline{w}})\right] {\tilde{f}}(w)}. \end{aligned}$$
(53)

Using (28), (52)–(53) and \({\tilde{\theta }}^{*}(w) >{\tilde{\theta }}_{\star }(w)\), we have the following results: \(0>\tau ^{M}(w)>\tau ^{R}(w)\) if \({\tilde{\theta }}(w)>{\tilde{\theta }}^{*}(w)\), \(0>\tau ^{M}(w)=\tau ^{R}(w)\) if \({\tilde{\theta }}(w)={\tilde{\theta }}^{*}(w)\), \(\tau ^{M}(w) <0 \le \tau ^{R}(w)\) if \({\tilde{\theta }}(w)\le {\tilde{\theta }}_{\star }(w)<{\tilde{\theta }}^{*}(w)\), and \(\tau ^{M}(w)<\tau ^{R}(w)<0\) if \({\tilde{\theta }}_{\star }(w)<{\tilde{\theta }}(w)<{\tilde{\theta }}^{*}(w)\). By applying the same reasoning used to prove part (i-a), the desired assertions in parts (i-b) and (ii) follow.

\({\underline{\mathrm{Step 3.}}}\) As w approaches \({\underline{w}}\) from above, we get from Eq. (25) and the continuity of \(\tau ^{M}(w)\) over the interval \(({\underline{w}},k)\) that \(\tau ^{M}({\underline{w}})=0\). As a result, under the marginal tax rate \(\tau ^{M}({\underline{w}})\) each type-\({\underline{w}}\) worker has her income undistorted as in the full-information solution. Thus according to Eqs. (24), (20), (21), (2), (5) and (6) we have

$$\begin{aligned} {\underline{\tau }}^{M}({\underline{w}})-\tau ^{M}({\underline{w}})=\underset{>0}{\underbrace{-\frac{\partial {\tilde{f}}({\underline{w}})}{\partial y({\underline{w}})} \frac{1}{{\tilde{f}}({\underline{w}})}}}\cdot \left[ T^{M}(y({\underline{w}}))- U'({\underline{w}})\right] . \end{aligned}$$

If \(T^{M}(y({\underline{w}}))> U'({\underline{w}})\), then we immediately have \({\underline{\tau }}^{M}({\underline{w}})>\tau ^{M}({\underline{w}})=0\), which means that under the marginal tax rate \({\underline{\tau }}^{M}({\underline{w}})\), each type-\({\underline{w}}\) worker has her income distorted downward compared to the full-information solution. We thus see an upward discontinuity of the income schedule at the bottom skill level, as desired in claim (iv). If, by contrast, \(T^{M}(y({\underline{w}}))< U'({\underline{w}})\), then we have \({\underline{\tau }}^{M}({\underline{w}})<\tau ^{M}({\underline{w}})=0\), yielding a downward discontinuity of the income schedule at the bottom skill level, as desired in claim (iii). \(\square\)

Proof of Theorem 3.2

We complete the proof in four steps.

\({\underline{\mathrm{Step 1.}}}\) Under the conditions given in Theorem 3.2, Proposition 3.3 suggests that there will be two downward discontinuities in the income schedule. Hence we need to build two bridges such that the resulting income schedule satisfies the SOIC condition. Obviously, the left endpoint of the first bridge is just \({\underline{w}}\). For now, fix the remaining bridge endpoints \(w_{\eta }\), \(w_{\alpha }\), and \(w_{\beta }\), which will be determined endogenously, and let \(y^{*}({\underline{w}}, w_{\eta })\) and \(y^{*}(w_{\alpha }, w_{\beta })\) denote the optimal before-tax income levels on the bridges over the skill intervals \([{\underline{w}}, w_{\eta }]\) and \([w_{\alpha }, w_{\beta }]\), respectively. Without loss of generality, we assume that the middle bridge neither begins in the interior of a bunching interval of the maxi-max schedule \(y^{M*}(\cdot )\) nor ends in the interior of a bunching interval of the maxi-min schedule \(y^{R*}(\cdot )\). In what follows, let \({\mathcal {B}}^{M}\) and \({\mathcal {B}}^{R}\) denote the types that are bunched with some other types in the complete solution to the maxi-max and maxi-min problems, respectively. Also, whenever w is bunched, we let the interval \([w_{-}, w_{+}]\) denote the set of types bunched with w.

\({\underline{\mathrm{Step 2.}}}\) We now equivalently rewrite the maximand of problem (14) as follows:

$$\begin{aligned} \begin{aligned} U(k) =&\int _{{\underline{w}}}^{k}\left\{ \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{{\tilde{f}}(w)}{\Gamma ({\underline{w}},{\overline{w}})}+\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \left[ 1-\frac{\Gamma (w,{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right] \right\} dw \\&+\int _{k}^{{\overline{w}}}\left\{ \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{{\tilde{f}}(w)}{\Gamma ({\underline{w}},{\overline{w}})}-\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \frac{\Gamma (w,{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right\} dw. \end{aligned} \end{aligned}$$
(54)

Taking into account the bunching possibility, (54) should be modified as follows:

$$\begin{aligned} \begin{aligned} U^{*}(k) =&\int _{{\underline{w}}}^{k}{\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w|w\notin {\mathcal {B}}^{M} \right\} }dw \\&+\int _{{\underline{w}}}^{k} {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (w_{-},w_{+}),\Gamma (w_{-},{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w|w\in {\mathcal {B}}^{M} \right\} } dw \\&+\int _{k}^{{\overline{w}}}{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w|w\notin {\mathcal {B}}^{R} \right\} }dw \\&+\int _{k}^{{\overline{w}}} {\tilde{\Phi }}^{R*}(w, y(w),\Gamma (w_{-},w_{+}),\Gamma (w_{+},{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w|w\in {\mathcal {B}}^{R} \right\} } dw, \end{aligned} \end{aligned}$$
(55)

in which

$$\begin{aligned} \begin{aligned}&{\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}})) \\&\quad \equiv \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{{\tilde{f}}(w)}{\Gamma ({\underline{w}},{\overline{w}})}+\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \left[ 1-\frac{\Gamma (w,{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right] ;\\&\quad {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (w_{-},w_{+}),\Gamma (w_{-},{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\\&\quad \equiv \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{\Gamma (w_{-},w_{+})}{\Gamma ({\underline{w}},{\overline{w}})}+\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \left[ 1-\frac{\Gamma (w_{-},{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right] \end{aligned} \end{aligned}$$
(56)

and

$$\begin{aligned} \begin{aligned}&{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\\&\quad \equiv \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{{\tilde{f}}(w)}{\Gamma ({\underline{w}},{\overline{w}})}-\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \frac{\Gamma (w,{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})};\\&\quad {\tilde{\Phi }}^{R*}(w, y(w),\Gamma (w_{-},w_{+}),\Gamma (w_{+},{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\\&\quad \equiv \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{\Gamma (w_{-},w_{+})}{\Gamma ({\underline{w}},{\overline{w}})}-\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \frac{\Gamma (w_{+},{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \end{aligned} \end{aligned}$$
(57)

with \({\mathbb {I}}\) being a standard indicator function.

As ironing does not affect the solution outside a bunching region, no modifications to the integrands in (55) are needed for types that are not bunched. In contrast to the first-order approach, if an extra unit of consumption is given to type-w workers, it must be given to all workers who are bunched with them, whose mass is \(\Gamma (w_{-},w_{+})\). Also, if w is bunched, then in the maxi-max case some of this extra consumption can be reclaimed from workers of lower types than those bunched with w, whose mass is \(\Gamma ({\underline{w}},{\overline{w}})-\Gamma (w_{-},{\overline{w}})\). The corresponding workers in the maxi-min case are those of higher types than those bunched with w, whose mass is \(\Gamma (w_{+},{\overline{w}})\).
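In the \(\Gamma\)-notation (recalling from the passage from (43) to (44) that \(\Gamma (a,b)=\int _{a}^{b}{\tilde{f}}(t)dt\)), the two reclaimed masses are simply

$$\begin{aligned} \Gamma ({\underline{w}},{\overline{w}})-\Gamma (w_{-},{\overline{w}})=\Gamma ({\underline{w}},w_{-}) \qquad \text {and} \qquad \Gamma (w_{+},{\overline{w}})=\int _{w_{+}}^{{\overline{w}}}{\tilde{f}}(t)dt, \end{aligned}$$

that is, the mass of types strictly below the bunching interval and the mass of types strictly above it, respectively.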

\({\underline{\mathrm{Step 3.}}}\) Now, the selfishly optimal income schedule of proposer \(k\in ({\underline{w}},{\overline{w}})\) is obtained by solving the following problem:

$$\begin{aligned} \begin{aligned}&\max _{y(\cdot )} \left\{ \int _{{\underline{w}}}^{w_{\eta }} {\tilde{\Phi }}^{M*}(w, y(w),\Gamma ({\underline{w}},w_{\eta }),\Gamma ({\underline{w}},{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\eta }> {\underline{w}} \right\} } dw \right. \\&\quad + \int _{w_{\eta }}^{w_{\alpha }}{\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\eta }\le w_{\alpha } \right\} }dw\\&\quad +\int _{w_{\alpha }}^{k} {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (w_{\alpha },w_{\beta }),\Gamma (w_{\alpha },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} } dw \\&\quad +\int _{k}^{w_{\beta }} {\tilde{\Phi }}^{R*}(w, y(w),\Gamma (w_{\alpha },w_{\beta }),\Gamma (w_{\beta },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} } dw\\&\quad +\left. \int _{w_{\beta }}^{{\overline{w}}}{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ {\overline{w}}\ge w_{\beta } \right\} }dw \right\} , \end{aligned} \end{aligned}$$
(58)

subject to

$$\begin{aligned} y(w)= {\left\{ \begin{array}{ll} y^{*}\left( {\underline{w}},w_{\eta }\right) &{} \hbox { for}\ w\in \left[ {\underline{w}},w_{\eta }\right] ,\\ y^{*}\left( w_{\alpha },w_{\beta }\right) &{} \hbox { for}\ w\in \left[ w_{\alpha },w_{\beta }\right] . \end{array}\right. } \end{aligned}$$

Thus, problem (58) can be simplified to the following unconstrained maximization problem:

$$\begin{aligned}&\max _{y(\cdot )} \left\{ \int _{w_{\eta }}^{w_{\alpha }}{\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\eta }\le w_{\alpha } \right\} }dw \right. \\&\quad +\left. \int _{w_{\beta }}^{{\overline{w}}}{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ {\overline{w}}\ge w_{\beta } \right\} }dw \right\} . \end{aligned}$$

This problem can obviously be solved point-wise, and the solutions are implicitly determined by these first-order conditions:

$$\begin{aligned} \begin{aligned}&\frac{\partial {\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial y(w)}\\&\quad +\frac{\partial {\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial {\tilde{f}}(w)}\frac{\partial {\tilde{f}}(w)}{\partial y(w)}\\&\quad +\frac{\partial {\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma (w,{\overline{w}})}\frac{\partial \Gamma (w,{\overline{w}})}{\partial y(w)}\\&\quad +\frac{\partial {\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma ({\underline{w}},{\overline{w}})}\frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial y(w)}=0 \ \ \text {for} \ \forall w\in (w_{\eta }, w_{\alpha }), \end{aligned} \end{aligned}$$
(59)

and

$$\begin{aligned} \begin{aligned}&\frac{\partial {\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial y(w)}\\&\quad +\frac{\partial {\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial {\tilde{f}}(w)}\frac{\partial {\tilde{f}}(w)}{\partial y(w)}\\&\quad +\frac{\partial {\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma (w,{\overline{w}})}\frac{\partial \Gamma (w,{\overline{w}})}{\partial y(w)}\\&\quad +\frac{\partial {\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma ({\underline{w}},{\overline{w}})}\frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial y(w)}=0 \ \ \text {for} \ \forall w\in (w_{\beta }, {\overline{w}}]. \end{aligned} \end{aligned}$$
(60)

As above, we denote the resulting solutions as \(y^{M*}(\cdot )\) and \(y^{R*}(\cdot )\), respectively.

\({\underline{\mathrm{Step 4.}}}\) We now show that \(y^{*}(w_{\alpha },w_{\beta })=y^{M*}(w_{\alpha })\) if \(w_{\alpha }>{\underline{w}}\) and \(y^{*}(w_{\alpha },w_{\beta })=y^{R*}(w_{\beta })\) if \(w_{\beta }<{\overline{w}}\). Based on the above ironing procedure, we can apply the same reasoning used to prove Proposition 3 of Brett and Weymark (2017) to show that \(y^{*}(\cdot )\) is continuous on \([{\underline{w}}, {\overline{w}}]\). Suppose that there exists a type \(k'>k\) for which \(y^{*}(k')\) is not the maxi-min income, formally \(y^{*}(k')\ne y^{R*}(k')\). The SOIC condition (7) must bind at \(k'\), which implies that the slope of \(y^{*}(\cdot )\) is zero at \(k'\). Since \(y^{*}(\cdot )\) is continuous, we obtain that there exists a \(w_{\beta }>k'\) such that \(y^{*}(\cdot )\) is constant on \([k, w_{\beta }]\) and coincides with the maxi-min income schedule \(y^{R*}(\cdot )\) on \([w_{\beta }, {\overline{w}}]\). Similarly, if there exists a type \(k'<k\) for which \(y^{*}(k')\) is not the maxi-max income, formally \(y^{*}(k')\ne y^{M*}(k')\), we can use the same argument to show that there exists a \(w_{\alpha }<k'\) such that \(y^{*}(\cdot )\) is constant on \([w_{\alpha },k]\) and coincides with the maxi-max income schedule \(y^{M*}(\cdot )\) on \([w_{\eta }, w_{\alpha }]\). By further setting \(y^{*}({\underline{w}},w_{\eta })\equiv {\underline{y}}^{M*}({\underline{w}})\), the desired income schedule given by (30) is therefore established. \(\square\)

Proof of Lemma 3.4

We get from (55), (58), (20) and (21) that

$$\begin{aligned} \frac{\partial ^2 U^{*}(k)}{\partial y({\underline{w}}) \partial k} = \&\frac{\partial {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (w_{\alpha },w_{\beta }),\Gamma (w_{\alpha },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma ({\underline{w}},{\overline{w}})} \frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial y({\underline{w}})} \cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ w=k \right\} }\\&- \frac{\partial {\tilde{\Phi }}^{R*}(w, y(w),\Gamma (w_{\alpha },w_{\beta }),\Gamma (w_{\beta },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma ({\underline{w}},{\overline{w}})} \frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial y({\underline{w}})} \cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ w=k \right\} }\\ = \&\frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial y({\underline{w}})} \cdot \frac{y(k)}{k^{2}} h'\left( \frac{y(k)}{k} \right) \frac{1}{\Gamma ({\underline{w}},{\overline{w}})^{2}}\cdot \left[ \Gamma (w_{\alpha },{\overline{w}})-\Gamma (w_{\beta },{\overline{w}}) \right] \cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ w_{\beta }<{\overline{w}} \right\} }\\ = \&\underset{<0}{\underbrace{\frac{\partial {\tilde{f}}({\underline{w}})}{\partial y({\underline{w}})} }}\cdot \frac{y(k)}{k^{2}} h'\left( \frac{y(k)}{k} \right) \frac{1}{\Gamma ({\underline{w}},{\overline{w}})^{2}}\cdot \underset{>0}{\underbrace{\left[ \Gamma (w_{\alpha },{\overline{w}})-\Gamma (w_{\beta },{\overline{w}}) \right] }} \cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ w_{\beta }<{\overline{w}} \right\} }\\ < \&0, \end{aligned}$$

which implies that \(U^{*}(k)\) is a submodular function, and an application of the Topkis Theorem (see Topkis 1978) implies that \(y({\underline{w}})\) is decreasing in k. Since the endpoint \(w_{\eta }(k)\) is completely determined by the value of \(y({\underline{w}})\), we get that \(w_{\eta }(k)\) is decreasing in k. \(\square\)
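For completeness, the comparative-statics step being invoked is the standard Topkis argument: decreasing differences of \(U^{*}\) in \((y({\underline{w}}),k)\) imply that the maximizer is nonincreasing in the parameter. Writing \(y({\underline{w}};k)\) for proposer k's optimal choice of \(y({\underline{w}})\) (our notation for this remark only),

$$\begin{aligned} \frac{\partial ^2 U^{*}(k)}{\partial y({\underline{w}}) \partial k}<0 \ \Longrightarrow \ y({\underline{w}};k')\le y({\underline{w}};k) \ \text { for all } k'>k. \end{aligned}$$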

Proof of Lemma 3.5

We complete the proof in five steps.

\({\underline{\mathrm{Step 1.}}}\) Our proof employs the procedure developed by Brett and Weymark (2017). Suppose \(w_{\beta }<{\overline{w}}\) holds. By the continuity of the income schedule \(y^{*}(\cdot )\), we get from Theorem 3.2 that \(y^{*}(w_{\beta })=y^{R*}(w_{\beta })\). Also, \(y^{*}(w_{\beta })=y^{*}(w_\alpha )\) because income is constant on the bridge. If we also have \(w_{\alpha }>{\underline{w}}\), then by continuity again, \(y^{*}(w_\alpha )=y^{M*}(w_\alpha )\). Define

$$\begin{aligned} \psi (w_{\beta })\equiv {\left\{ \begin{array}{ll} (y^{M*})^{-1}(y^{R*}(w_{\beta })) &{} \hbox { if}\ w_{\alpha }>{\underline{w}},\\ w_{\alpha } &{} \hbox { if}\ w_{\alpha }={\underline{w}}. \end{array}\right. } \end{aligned}$$
(61)

We can therefore write proposer k's objective function for choosing \(w_{\beta }\) as follows:

$$\begin{aligned} \begin{aligned}&\Xi (w_{\beta };k) \\&\quad \equiv \int _{w_{\eta }}^{\psi (w_{\beta })}{\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\eta }\le w_{\alpha } \right\} \cap \left\{ y(w)=y^{M*}(w) \right\} }dw\\&\qquad + \int _{\psi (w_{\beta })}^{k} {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (\psi (w_{\beta }),{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} } dw \\&\qquad +\int _{k}^{w_{\beta }} {\tilde{\Phi }}^{R*}(w, y(w),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (w_{\beta },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} } dw\\&\qquad +\int _{w_{\beta }}^{{\overline{w}}}{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ {\overline{w}}\ge w_{\beta } \right\} \cap \left\{ y(w)=y^{R*}(w) \right\} }dw. \end{aligned} \end{aligned}$$
(62)

Using (62), the first-order condition with respect to \(w_{\beta }\) can be derived as

$$\begin{aligned} \Psi _{1}+\Psi _{2}(k)+\Psi _{3}(k)+\Psi _{4}=0, \end{aligned}$$
(63)

in which

$$\begin{aligned}&\begin{aligned}&\Psi _{1} = \\&\quad \frac{\mathrm {d} \psi (w_{\beta })}{\mathrm {d} w_{\beta }} \left\{ {\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\eta }\le w_{\alpha } \right\} \cap \left\{ y(w)=y^{M*}(w) \right\} \cap \left\{ w=\psi (w_{\beta }) \right\} } \right. \\&\quad -\left. {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (\psi (w_{\beta }),{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} \cap \left\{ w=\psi (w_{\beta }) \right\} } \right\} , \end{aligned} \end{aligned}$$
(64)
$$\begin{aligned}&\begin{aligned}&\Psi _{2}(k) = \\&\quad \int _{\psi (w_{\beta })}^{k} \frac{\partial {\tilde{\Phi }}^{M*}(w, y^{R*}(w_{\beta }),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (\psi (w_{\beta }),{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial y^{R*}(w_{\beta })} \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} } dw \\&\qquad +\int _{\psi (w_{\beta })}^{k} \frac{\partial {\tilde{\Phi }}^{M*}(w, y^{R*}(w_{\beta }),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (\psi (w_{\beta }),{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma (\psi (w_{\beta }),w_{\beta })} \frac{\partial \Gamma (\psi (w_{\beta }),w_{\beta })}{\partial w_{\beta }}\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} } dw\\&\qquad +\int _{\psi (w_{\beta })}^{k} \frac{\partial {\tilde{\Phi }}^{M*}(w, y^{R*}(w_{\beta }),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (\psi (w_{\beta }),{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma (\psi (w_{\beta }),{\overline{w}})} \frac{\partial \Gamma (\psi (w_{\beta }),{\overline{w}})}{\partial w_{\beta }}\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} } dw\\&\qquad +\int _{\psi (w_{\beta })}^{k} \frac{\partial {\tilde{\Phi }}^{M*}(w, y^{R*}(w_{\beta }),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (\psi (w_{\beta }),{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma ({\underline{w}},{\overline{w}})} \frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial w_{\beta }}\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} } dw\\&\quad \equiv \int _{\psi (w_{\beta })}^{k} \left[ \Psi _{21}(w)+\Psi _{22}(w)+\Psi _{23}(w)+\Psi _{24}(w) \right] \cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} } dw, \end{aligned} \end{aligned}$$
(65)
$$\begin{aligned}&\begin{aligned}&\Psi _{3}(k) = \\&\quad \int _{k}^{w_{\beta }} \frac{\partial {\tilde{\Phi }}^{R*}(w, y^{R*}(w_{\beta }),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (w_{\beta },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial y^{R*}(w_{\beta })}\frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} } dw \\&\qquad +\int _{k}^{w_{\beta }} \frac{\partial {\tilde{\Phi }}^{R*}(w, y^{R*}(w_{\beta }),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (w_{\beta },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma (\psi (w_{\beta }),w_{\beta })}\frac{\partial \Gamma (\psi (w_{\beta }),w_{\beta })}{\partial w_{\beta }}\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} } dw\\&\qquad +\int _{k}^{w_{\beta }} \frac{\partial {\tilde{\Phi }}^{R*}(w, y^{R*}(w_{\beta }),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (w_{\beta },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma (w_{\beta },{\overline{w}})}\frac{\partial \Gamma (w_{\beta },{\overline{w}})}{\partial w_{\beta }}\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} } dw\\&\qquad +\int _{k}^{w_{\beta }} \frac{\partial {\tilde{\Phi }}^{R*}(w, y^{R*}(w_{\beta }),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (w_{\beta },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))}{\partial \Gamma ({\underline{w}},{\overline{w}})}\frac{\partial \Gamma ({\underline{w}},{\overline{w}})}{\partial w_{\beta }}\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} } dw\\&\quad \equiv \int _{k}^{w_{\beta }} \left[ \Psi _{31}(w)+\Psi _{32}(w)+\Psi _{33}(w)+\Psi _{34}(w) \right] \cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} } dw, \end{aligned} \end{aligned}$$
(66)

and

$$\begin{aligned} \begin{aligned} \Psi _{4}= \&{\tilde{\Phi }}^{R*}(w, y(w),\Gamma (\psi (w_{\beta }),w_{\beta }),\Gamma (w_{\beta },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} \cap \left\{ w=w_{\beta } \right\} }\\&-{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ {\overline{w}} \ge w_{\beta } \right\} \cap \left\{ y(w)=y^{R*}(w) \right\} \cap \left\{ w=w_{\beta } \right\} }. \end{aligned} \end{aligned}$$
(67)

\({\underline{\mathrm{Step 2.}}}\) Using (56) and (57) gives us

$$\begin{aligned} \begin{aligned}&{\tilde{\Phi }}^{M*}(w,y(w),\Gamma (w_{\alpha },w_{\beta }), \Gamma (w_{\alpha },{\overline{w}}), \Gamma ({\underline{w}},{\overline{w}}))\\&\quad \equiv \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{\Gamma (w_{\alpha },w_{\beta })}{\Gamma ({\underline{w}},{\overline{w}})}+\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \left[ 1-\frac{\Gamma (w_{\alpha },{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right] ,\\&\quad {\tilde{\Phi }}^{R*}(w,y(w),\Gamma (w_{\alpha },w_{\beta }), \Gamma (w_{\beta },{\overline{w}}), \Gamma ({\underline{w}},{\overline{w}}))\\&\quad \equiv \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{\Gamma (w_{\alpha },w_{\beta })}{\Gamma ({\underline{w}},{\overline{w}})}-\frac{y(w)}{w^{2}}h'\left( \frac{y(w)}{w} \right) \frac{\Gamma (w_{\beta },{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})}, \end{aligned} \end{aligned}$$
(68)

for all \(w \in [w_{\alpha },w_{\beta }]\).

By using (61), (68), (56) and (57), we can rewrite (64) as

$$\begin{aligned} \begin{aligned} \Psi _{1} =&\frac{\mathrm {d} \psi (w_{\beta })}{\mathrm {d} w_{\beta }} \left\{ {\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\eta } \le w_{\alpha } \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} \cap \left\{ w=\psi (w_{\beta }) \right\} } \right. \\&-\left. {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (w,w_{\beta }),\Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} \cap \left\{ w=\psi (w_{\beta }) \right\} } \right\} \\ =&\frac{\mathrm {d} \psi (w_{\beta })}{\mathrm {d} w_{\beta }} \left\{ {\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\right. \\&-\left. {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (w,w_{\beta }),\Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}})) \right\} \cdot {\mathbb {I}}_{\left\{ w_{\eta } \le w_{\alpha } \right\} \cap \left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} \cap \left\{ w=\psi (w_{\beta }) \right\} }\\ =&\frac{\mathrm {d} \psi (w_{\beta })}{\mathrm {d} w_{\beta }} \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{{\tilde{f}}(w)-\Gamma (w,w_{\beta })}{\Gamma ({\underline{w}},{\overline{w}})}\cdot {\mathbb {I}}_{ \left\{ w_{\alpha }\ge w_{\eta } \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} \cap \left\{ w=\psi (w_{\beta }) \right\} }\\ =&\frac{\mathrm {d} \psi (w_{\beta })}{\mathrm {d} w_{\beta }} \left[ y^{R*}(w_{\beta })-h\left( \frac{y^{R*}(w_{\beta })}{\psi (w_{\beta })} \right) \right] \frac{{\tilde{f}}(\psi (w_{\beta }))-\Gamma (\psi (w_{\beta }),w_{\beta })}{\Gamma ({\underline{w}},{\overline{w}})}\cdot {\mathbb {I}}_{\left\{ w_{\alpha }\ge w_{\eta } \right\} }. \end{aligned} \end{aligned}$$
(69)

Using (65) and (68), we have

$$\begin{aligned}&\begin{aligned} \Psi _{21}(w) =&\left\{ \left[ 1-\frac{1}{w}h'\left( \frac{y^{R*}(w_{\beta })}{w} \right) \right] \frac{\Gamma (\psi \left( w_{\beta } \right) ,w_{\beta })}{\Gamma ({\underline{w}},{\overline{w}})} \right. \\&+ \left. \left[ \frac{1}{w^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{w} \right) + \frac{y^{R*}(w_{\beta })}{w^{3}}h''\left( \frac{y^{R*}(w_{\beta })}{w} \right) \right] \left[ 1-\frac{\Gamma (\psi \left( w_{\beta } \right) ,{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right] \right\} \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}, \end{aligned} \end{aligned}$$
(70)
$$\begin{aligned}&\begin{aligned} \Psi _{22}(w) =&\left[ y^{R*}(w_{\beta })-h\left( \frac{y^{R*}(w_{\beta })}{w} \right) \right] \frac{1}{\Gamma ({\underline{w}},{\overline{w}})} \\&\times \left\{ {\tilde{f}}(w_{\beta })-{\tilde{f}}\left( \psi (w_{\beta }) \right) \frac{\mathrm {d} \psi (w_{\beta })}{\mathrm {d} w_{\beta }}+\left[ \int _{\psi (w_{\beta })}^{w_{\beta }}\frac{\partial {\tilde{f}}(w)}{\partial y^{R*}(w_{\beta })} \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}dw \right] \right\} , \end{aligned} \end{aligned}$$
(71)
$$\begin{aligned}&\begin{aligned} \Psi _{23}(w) = \&\frac{y^{R*}(w_{\beta })}{w^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{w} \right) \frac{1}{\Gamma ({\underline{w}},{\overline{w}})} \\&\times \left[ {\tilde{f}}\left( \psi (w_{\beta }) \right) \frac{\mathrm {d} \psi (w_{\beta })}{\mathrm {d} w_{\beta }}-\int _{\psi (w_{\beta })}^{w_{\beta }}\frac{\partial {\tilde{f}}(w)}{\partial y^{R*}(w_{\beta })} \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}dw \right] , \end{aligned} \end{aligned}$$
(72)

and

$$\begin{aligned} \begin{aligned} \Psi _{24}(w) = \&\left\{ \frac{y^{R*}(w_{\beta })}{w^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{w} \right) \Gamma (\psi (w_{\beta }),{\overline{w}}) - \left[ y^{R*}(w_{\beta })-h\left( \frac{y^{R*}(w_{\beta })}{w} \right) \right] \Gamma (\psi (w_{\beta }),w_{\beta }) \right\} \\&\times \frac{1}{\Gamma ({\underline{w}},{\overline{w}})^{2}} \int _{\psi (w_{\beta })}^{w_{\beta }}\frac{\partial {\tilde{f}}(w)}{\partial y^{R*}(w_{\beta })} \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}dw. \end{aligned} \end{aligned}$$
(73)

Similarly, using (66) and (68) gives us

$$\begin{aligned}&\begin{aligned} \Psi _{31}(w) =&\left\{ \left[ 1-\frac{1}{w}h'\left( \frac{y^{R*}(w_{\beta })}{w} \right) \right] \frac{\Gamma (\psi \left( w_{\beta } \right) ,w_{\beta })}{\Gamma ({\underline{w}},{\overline{w}})} \right. \\&- \left. \left[ \frac{1}{w^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{w} \right) + \frac{y^{R*}(w_{\beta })}{w^{3}}h''\left( \frac{y^{R*}(w_{\beta })}{w} \right) \right] \frac{\Gamma (w_{\beta },{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right\} \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}, \end{aligned} \end{aligned}$$
(74)
$$\begin{aligned}&\begin{aligned} \Psi _{32}(w) =&\left[ y^{R*}(w_{\beta })-h\left( \frac{y^{R*}(w_{\beta })}{w} \right) \right] \frac{1}{\Gamma ({\underline{w}},{\overline{w}})} \\&\times \left\{ {\tilde{f}}(w_{\beta })-{\tilde{f}}\left( \psi (w_{\beta }) \right) \frac{\mathrm {d} \psi (w_{\beta })}{\mathrm {d} w_{\beta }}+\left[ \int _{\psi (w_{\beta })}^{w_{\beta }}\frac{\partial {\tilde{f}}(w)}{\partial y^{R*}(w_{\beta })}\frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }} dw \right] \right\} , \end{aligned} \end{aligned}$$
(75)
$$\begin{aligned}&\Psi _{33}(w) = \frac{y^{R*}(w_{\beta })}{w^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{w} \right) \frac{{\tilde{f}}\left( w_{\beta } \right) }{\Gamma ({\underline{w}},{\overline{w}})}, \end{aligned}$$
(76)

and

$$\begin{aligned} \begin{aligned} \Psi _{34}(w) = \&\left\{ \frac{y^{R*}(w_{\beta })}{w^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{w} \right) \Gamma (w_{\beta },{\overline{w}}) - \left[ y^{R*}(w_{\beta })-h\left( \frac{y^{R*}(w_{\beta })}{w} \right) \right] \Gamma (\psi (w_{\beta }),w_{\beta }) \right\} \\&\times \frac{1}{\Gamma ({\underline{w}},{\overline{w}})^{2}} \int _{\psi (w_{\beta })}^{w_{\beta }}\frac{\partial {\tilde{f}}(w)}{\partial y^{R*}(w_{\beta })} \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}dw. \end{aligned} \end{aligned}$$
(77)

Finally, by using (68), we can rewrite (67) as

$$\begin{aligned} \begin{aligned} \Psi _{4}= \&{\tilde{\Phi }}^{R*}(w, y(w),\Gamma (\psi (w),w),\Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w_{\beta }) \right\} \cap \left\{ w=w_{\beta } \right\} }\\&-{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ {\overline{w}}\ge w_{\beta } \right\} \cap \left\{ y(w)=y^{R*}(w) \right\} \cap \left\{ w=w_{\beta } \right\} }\\ =\&\left[ {\tilde{\Phi }}^{R*}(w, y(w),\Gamma (\psi (w),w),\Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))- {\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}})) \right] \\&\times {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w) \right\} \cap \left\{ w=w_{\beta } \right\} }\\ =\&\left\{ \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{\Gamma \left( \psi (w),w \right) -{\tilde{f}}(w)}{\Gamma \left( {\underline{w}},{\overline{w}} \right) } \right\} {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ y(w)=y^{R*}(w) \right\} \cap \left\{ w=w_{\beta } \right\} }. \end{aligned} \end{aligned}$$
(78)

\({\underline{\mathrm{Step 3.}}}\) Suppose \(w_{\alpha }>{\underline{w}}\) holds. By continuity of the income schedule \(y^{*}(\cdot )\), we get from Theorem 3.2 that \(y^{*}(w_\alpha )=y^{M*}(w_\alpha )\). Moreover, \(y^{*}(w_{\beta })=y^{*}(w_\alpha )\) because income is constant on the bridge. If, in addition, \(w_{\beta }<{\overline{w}}\), then continuity again gives \(y^{*}(w_\beta )=y^{R*}(w_\beta )\). Define

$$\begin{aligned} \varphi (w_{\alpha })\equiv {\left\{ \begin{array}{ll} (y^{R*})^{-1}(y^{M*}(w_{\alpha })) &{} \hbox { if}\ w_{\beta }<{\overline{w}},\\ w_{\beta } &{} \hbox { if}\ w_{\beta }={\overline{w}}. \end{array}\right. } \end{aligned}$$
(79)
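
Operationally, (79) only requires inverting the maxi-min income schedule at the bridge income \(y^{M*}(w_{\alpha })\), capped at \({\overline{w}}\). A minimal numerical sketch, assuming hypothetical strictly increasing schedules \(y^{M*}\) and \(y^{R*}\) chosen only for illustration (they are not the schedules derived in the paper):

```python
from scipy.optimize import brentq

# Hypothetical strictly increasing income schedules, used only to
# illustrate the inversion in (79); they are NOT the schedules of the paper.
w_lo, w_hi = 1.0, 3.0              # skill support
y_M = lambda w: 1.2 * w**2         # assumed maxi-max schedule (lies above y_R)
y_R = lambda w: 0.8 * w**2         # assumed maxi-min schedule

def phi(w_alpha):
    """Upper bridge endpoint as defined in (79)."""
    target = y_M(w_alpha)          # income level carried along the bridge
    if target >= y_R(w_hi):        # bridge reaches the top skill type
        return w_hi
    # otherwise solve y_R(w_beta) = y_M(w_alpha) for the interior endpoint
    return brentq(lambda w: y_R(w) - target, w_alpha, w_hi)

print(phi(1.5))   # interior bridge endpoint, roughly 1.84 here
print(phi(2.9))   # bridge pinned at w_hi = 3.0
```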

We can therefore write proposer \(k\)'s objective function for the choice of \(w_{\alpha }\) as follows:

$$\begin{aligned} \begin{aligned}&\Xi (w_{\alpha };k) \\&\quad \equiv \int _{w_{\eta }}^{w_{\alpha }}{\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\eta }\le w_{\alpha } \right\} \cap \left\{ y(w)=y^{M*}(w) \right\} }dw\\&\qquad + \int _{w_{\alpha }}^{k} {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (w_{\alpha },\varphi (w_{\alpha })),\Gamma (w_{\alpha },{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ y(w)=y^{M*}(w_{\alpha }) \right\} } dw \\&\qquad +\int _{k}^{\varphi (w_{\alpha })} {\tilde{\Phi }}^{R*}(w, y(w),\Gamma (w_{\alpha },\varphi (w_{\alpha })),\Gamma (\varphi (w_{\alpha }),{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ y(w)=y^{M*}(w_{\alpha }) \right\} } dw\\&\qquad +\int _{\varphi (w_{\alpha })}^{{\overline{w}}}{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ {\overline{w}} \ge w_{\beta } \right\} \cap \left\{ y(w)=y^{R*}(w) \right\} }dw. \end{aligned} \end{aligned}$$
(80)

Using (80), the first-order condition with respect to \(w_{\alpha }\) can be derived as

$$\begin{aligned} \Lambda _{1}+\Lambda _{2}(k)+\Lambda _{3}(k)+\Lambda _{4}=0, \end{aligned}$$
(81)

in which

$$\begin{aligned}&\begin{aligned} \Lambda _{1}= \&\left[ {\tilde{\Phi }}^{M*}(w, y(w),{\tilde{f}}(w),\Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))- {\tilde{\Phi }}^{M*}(w, y(w),\Gamma (w,\varphi (w)), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}})) \right] \\&\times {\mathbb {I}}_{\left\{ w_{\eta } \le w_{\alpha } \right\} \cap \left\{ w_{\alpha }>{\underline{w}} \right\} \cap \left\{ y(w)=y^{M*}(w) \right\} \cap \left\{ w=w_{\alpha } \right\} }\\ =\&\left\{ \left[ y(w)-h\left( \frac{y(w)}{w} \right) \right] \frac{{\tilde{f}}(w)-\Gamma (w,\varphi (w))}{\Gamma \left( {\underline{w}},{\overline{w}} \right) } \right\} {\mathbb {I}}_{\left\{ w_{\alpha }\ge w_{\eta } \right\} \cap \left\{ y(w)=y^{M*}(w) \right\} \cap \left\{ w=w_{\alpha } \right\} }, \end{aligned} \end{aligned}$$
(82)
$$\begin{aligned}&\Lambda _{2}(k)=\int _{w_{\alpha }}^{k}\left[ \Lambda _{21}(w)+\Lambda _{22}(w)+\Lambda _{23}(w)+\Lambda _{24}(w) \right] \cdot {\mathbb {I}}_{\left\{ w_{\alpha }>{\underline{w}} \right\} } dw, \end{aligned}$$
(83)

with

$$\begin{aligned}&\begin{aligned} \Lambda _{21}(w) =&\left\{ \left[ 1-\frac{1}{w}h'\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \right] \frac{\Gamma (w_{\alpha },\varphi (w_{\alpha }))}{\Gamma ({\underline{w}},{\overline{w}})} \right. \\&+ \left. \left[ \frac{1}{w^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{w} \right) + \frac{y^{M*}(w_{\alpha })}{w^{3}}h''\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \right] \left[ 1-\frac{\Gamma (w_{\alpha },{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right] \right\} \frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}, \end{aligned} \end{aligned}$$
(84)
$$\begin{aligned}&\begin{aligned} \Lambda _{22}(w) =&\left[ y^{M*}(w_{\alpha })-h\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \right] \frac{1}{\Gamma ({\underline{w}},{\overline{w}})} \\&\times \left\{ {\tilde{f}}\left( \varphi (w_{\alpha }) \right) \frac{\mathrm {d} \varphi (w_{\alpha })}{\mathrm {d} w_{\alpha }}-{\tilde{f}}(w_{\alpha })+\left[ \int _{w_{\alpha }}^{\varphi (w_{\alpha })}\frac{\partial {\tilde{f}}(w)}{\partial y^{M*}(w_{\alpha })}\frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}dw \right] \right\} , \end{aligned} \end{aligned}$$
(85)
$$\begin{aligned}&\begin{aligned} \Lambda _{23}(w) = &\frac{y^{M*}(w_{\alpha })}{w^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \frac{1}{\Gamma ({\underline{w}},{\overline{w}})}\\&\times \left\{ {\tilde{f}}(w_{\alpha }) - \left[ \int _{w_{\alpha }}^{\varphi (w_{\alpha })}\frac{\partial {\tilde{f}}(w)}{\partial y^{M*}(w_{\alpha })}\frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}dw \right] \right\} , \end{aligned} \end{aligned}$$
(86)

and

$$\begin{aligned}&\begin{aligned}&\Lambda _{24}(w) \\&\quad = \left\{ \frac{y^{M*}(w_{\alpha })}{w^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \Gamma (w_{\alpha },{\overline{w}}) - \left[ y^{M*}(w_{\alpha })-h\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \right] \Gamma (w_{\alpha },\varphi (w_{\alpha })) \right\} \\&\qquad \times \frac{1}{\Gamma ({\underline{w}},{\overline{w}})^{2}} \int _{w_{\alpha }}^{\varphi (w_{\alpha })}\frac{\partial {\tilde{f}}(w)}{\partial y^{M*}(w_{\alpha })} \frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}dw; \end{aligned} \end{aligned}$$
(87)
$$\begin{aligned}&\Lambda _{3}(k)=\int _{k}^{\varphi (w_{\alpha })}\left[ \Lambda _{31}(w)+\Lambda _{32}(w)+\Lambda _{33}(w)+\Lambda _{34}(w) \right] \cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} } dw, \end{aligned}$$
(88)

with

$$\begin{aligned}&\begin{aligned} \Lambda _{31}(w) =&\left\{ \left[ 1-\frac{1}{w}h'\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \right] \frac{\Gamma (w_{\alpha },\varphi (w_{\alpha }))}{\Gamma ({\underline{w}},{\overline{w}})} \right. \\&- \left. \left[ \frac{1}{w^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{w} \right) + \frac{y^{M*}(w_{\alpha })}{w^{3}}h''\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \right] \frac{\Gamma (\varphi (w_{\alpha }),{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right\} \frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}, \end{aligned} \end{aligned}$$
(89)
$$\begin{aligned}&\begin{aligned} \Lambda _{32}(w) =&\left[ y^{M*}(w_{\alpha })-h\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \right] \frac{1}{\Gamma ({\underline{w}},{\overline{w}})} \\&\times \left\{ {\tilde{f}}\left( \varphi (w_{\alpha }) \right) \frac{\mathrm {d} \varphi (w_{\alpha })}{\mathrm {d} w_{\alpha }}-{\tilde{f}}(w_{\alpha })+\left[ \int _{w_{\alpha }}^{\varphi (w_{\alpha })}\frac{\partial {\tilde{f}}(w)}{\partial y^{M*}(w_{\alpha })}\frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}dw \right] \right\} , \end{aligned} \end{aligned}$$
(90)
$$\begin{aligned}&\Lambda _{33}(w) = \frac{y^{M*}(w_{\alpha })}{w^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \frac{{\tilde{f}}\left( \varphi (w_{\alpha })\right) }{\Gamma ({\underline{w}},{\overline{w}})}\frac{\mathrm {d} \varphi (w_{\alpha })}{\mathrm {d} w_{\alpha }}, \end{aligned}$$
(91)

and

$$\begin{aligned} \begin{aligned}&\Lambda _{34}(w) \\&\quad = \left\{ \frac{y^{M*}(w_{\alpha })}{w^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \Gamma (\varphi (w_{\alpha }),{\overline{w}}) - \left[ y^{M*}(w_{\alpha })-h\left( \frac{y^{M*}(w_{\alpha })}{w} \right) \right] \Gamma (w_{\alpha }, \varphi (w_{\alpha })) \right\} \\&\qquad \times \frac{1}{\Gamma ({\underline{w}},{\overline{w}})^{2}} \int _{w_{\alpha }}^{\varphi (w_{\alpha })}\frac{\partial {\tilde{f}}(w)}{\partial y^{M*}(w_{\alpha })} \frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}dw; \end{aligned} \end{aligned}$$
(92)

and finally

$$\begin{aligned} \begin{aligned} \Lambda _{4} =&\frac{\mathrm {d} \varphi (w_{\alpha })}{\mathrm {d} w_{\alpha }} \\&\quad \times [{\tilde{\Phi }}^{R*}(w, y(w),\Gamma (w_{\alpha },w),\Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ y(w)=y^{M*}(w_{\alpha }) \right\} \cap \left\{ w=\varphi (w_{\alpha }) \right\} } \\&-{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))\cdot {\mathbb {I}}_{\left\{ {\overline{w}}\ge w_{\beta } \right\} \cap \left\{ y(w)=y^{M*}(w_{\alpha }) \right\} \cap \left\{ w=\varphi (w_{\alpha }) \right\} }]\\ =&\left[ {\tilde{\Phi }}^{R*}(w, y(w),\Gamma (w_{\alpha },w),\Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}}))-{\tilde{\Phi }}^{R*}(w, y(w),{\tilde{f}}(w), \Gamma (w,{\overline{w}}),\Gamma ({\underline{w}},{\overline{w}})) \right] \\&\times {\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} \cap \left\{ y(w)=y^{M*}(w_{\alpha }) \right\} \cap \left\{ w=\varphi (w_{\alpha }) \right\} } \cdot \frac{\mathrm {d} \varphi (w_{\alpha })}{\mathrm {d} w_{\alpha }}\\ =&\left\{ \left[ y^{M*}(w_{\alpha })-h\left( \frac{y^{M*}(w_{\alpha })}{\varphi (w_{\alpha })} \right) \right] \frac{\Gamma \left( w_{\alpha },\varphi (w_{\alpha }) \right) -{\tilde{f}}(\varphi (w_{\alpha }))}{\Gamma \left( {\underline{w}},{\overline{w}} \right) } \right\} \frac{\mathrm {d} \varphi (w_{\alpha })}{\mathrm {d} w_{\alpha }}{\mathbb {I}}_{\left\{ w_{\beta }<{\overline{w}} \right\} }. \end{aligned} \end{aligned}$$
(93)

\(\underline{\mathrm{Step 4.}}\) We first consider the case with \(w_{\beta }<{\overline{w}}\). It follows from (62) and (63) that

$$\begin{aligned} \frac{\partial ^2 \Xi (w_{\beta };k)}{\partial w_{\beta } \partial k}=\frac{\mathrm {d} \Psi _{2}(k) }{\mathrm {d} k}+\frac{\mathrm {d} \Psi _{3}(k) }{\mathrm {d} k}. \end{aligned}$$
(94)

Using equations (70)-(77), (94) can be explicitly expressed as

$$\begin{aligned} \begin{aligned}&\frac{\mathrm {d} \Psi _{2}(k) }{\mathrm {d} k}+\frac{\mathrm {d} \Psi _{3}(k) }{\mathrm {d} k}\\&\quad = \left[ \Psi _{21}(k)+\Psi _{22}(k)+\Psi _{23}(k) +\Psi _{24}(k) \right] -\left[ \Psi _{31}(k)+\Psi _{32}(k)+\Psi _{33}(k)+\Psi _{34}(k) \right] \\&\quad = \left[ \Psi _{21}(k)-\Psi _{31}(k) \right] + \underset{=0}{\underbrace{\left[ \Psi _{22}(k)-\Psi _{32}(k) \right] }} + \left[ \Psi _{23}(k)-\Psi _{33}(k) \right] + \left[ \Psi _{24}(k)-\Psi _{34}(k) \right] \\&\quad = \left[ \frac{1}{k^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{k} \right) + \frac{y^{R*}(w_{\beta })}{k^{3}}h''\left( \frac{y^{R*}(w_{\beta })}{k} \right) \right] \left[ 1-\frac{\Gamma (\psi (w_{\beta }),{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} +\frac{\Gamma (w_{\beta },{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right] \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}\\&\qquad + \frac{y^{R*}(w_{\beta })}{k^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{k} \right) \frac{1}{\Gamma ({\underline{w}},{\overline{w}})}\left[ -\frac{\mathrm {d} \Gamma (\psi (w_{\beta }),w_{\beta })}{\mathrm {d} w_{\beta }}\right] \\&\qquad + \frac{y^{R*}(w_{\beta })}{k^{2}}h'\left( \frac{y^{R*}(w_{\beta })}{k} \right) \frac{\Gamma (\psi (w_{\beta }),{\overline{w}})-\Gamma (w_{\beta },{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})^{2}}\int _{\psi (w_{\beta })}^{w_{\beta }}\frac{\partial {\tilde{f}}(w)}{\partial y^{R*}(w_{\beta })} \frac{\mathrm {d} y^{R*}(w_{\beta })}{\mathrm {d} w_{\beta }}dw. \end{aligned} \end{aligned}$$
(95)

Since \(\mathrm {d} y^{R*}(w_{\beta })/\mathrm {d} w_{\beta }>0\) by assumption and \(\Gamma (\psi (w_{\beta }),{\overline{w}})>\Gamma (w_{\beta },{\overline{w}})\), we have

$$\begin{aligned} \frac{\partial ^2 \Xi (w_{\beta };k)}{\partial w_{\beta } \partial k}=\frac{\mathrm {d} \Psi _{2}(k) }{\mathrm {d} k}+\frac{\mathrm {d} \Psi _{3}(k) }{\mathrm {d} k}>0 \end{aligned}$$
(96)

whenever

$$\begin{aligned} \frac{\mathrm {d} \Gamma (\psi (w_{\beta }),w_{\beta })}{\mathrm {d} w_{\beta }}\le 0, \end{aligned}$$

as desired. In particular, \(\mathrm {d} \psi (w_{\beta })/\mathrm {d} w_{\beta }>0\) by the construction of \(\psi (\cdot )\) and the monotonicity of the income schedule. In the case of (96), \(\Xi (w_{\beta };k)\) is supermodular, and an application of Topkis's theorem (see Theorem 6.1 of Topkis (1978)) implies that \(w_{\beta }(k)\) is nondecreasing in \(k\).
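
The role of supermodularity here is purely one of monotone comparative statics: a positive cross-partial in \((w_{\beta },k)\) forces the maximizer to be nondecreasing in \(k\). The toy check below uses an arbitrary supermodular objective, not the \(\Xi\) of the text, simply to illustrate the mechanism behind Topkis's theorem:

```python
import numpy as np

# Toy illustration of the Topkis argument: a positive cross-partial in
# (w_beta, k) implies that the maximizer w_beta(k) is nondecreasing in k.
# Xi_toy is an arbitrary supermodular function, NOT the objective (62).
def Xi_toy(w_beta, k):
    return -(w_beta - 1.0)**2 + 0.5 * w_beta * k   # cross-partial = 0.5 > 0

w_grid = np.linspace(0.0, 4.0, 4001)
for k in [0.0, 1.0, 2.0, 3.0]:
    w_star = w_grid[np.argmax(Xi_toy(w_grid, k))]
    print(f"k = {k:.1f}  ->  argmax w_beta(k) = {w_star:.3f}")
# The maximizer equals 1 + k/4 here and increases with k, as Topkis's
# theorem predicts for supermodular objectives.
```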

\({\underline{\mathrm{Step 5.}}}\) We now consider the case with \(w_{\alpha }>{\underline{w}}\). It follows from (80) and (81) that

$$\begin{aligned} \frac{\partial ^2 \Xi (w_{\alpha };k)}{\partial w_{\alpha } \partial k}=\frac{\mathrm {d} \Lambda _{2}(k) }{\mathrm {d} k}+\frac{\mathrm {d} \Lambda _{3}(k) }{\mathrm {d} k}. \end{aligned}$$
(97)

Using equations (83)-(92), (97) can be explicitly expressed as

$$\begin{aligned} \begin{aligned}&\frac{\mathrm {d} \Lambda _{2}(k) }{\mathrm {d} k}+\frac{\mathrm {d} \Lambda _{3}(k) }{\mathrm {d} k}\\&\quad = \left[ \Lambda _{21}(k)+\Lambda _{22}(k)+\Lambda _{23}(k) +\Lambda _{24}(k) \right] -\left[ \Lambda _{31}(k)+\Lambda _{32}(k)+\Lambda _{33}(k) +\Lambda _{34}(k)\right] \\&\quad = \left[ \Lambda _{21}(k)-\Lambda _{31}(k) \right] + \underset{=0}{\underbrace{\left[ \Lambda _{22}(k)-\Lambda _{32}(k) \right] }} + \left[ \Lambda _{23}(k)-\Lambda _{33}(k) \right] + \left[ \Lambda _{24}(k)-\Lambda _{34}(k) \right] \\&\quad = \left[ \frac{1}{k^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{k} \right) + \frac{y^{M*}(w_{\alpha })}{k^{3}}h''\left( \frac{y^{M*}(w_{\alpha })}{k} \right) \right] \left[ 1-\frac{\Gamma ( w_{\alpha },{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} +\frac{\Gamma (\varphi (w_{\alpha }),{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})} \right] \frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}\\&\qquad + \frac{y^{M*}(w_{\alpha })}{k^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{k} \right) \frac{1}{\Gamma ({\underline{w}},{\overline{w}})}\left[ -\frac{\mathrm {d} \Gamma (w_{\alpha }, \varphi (w_{\alpha }))}{\mathrm {d} w_{\alpha }} \right] \\&\qquad + \frac{y^{M*}(w_{\alpha })}{k^{2}}h'\left( \frac{y^{M*}(w_{\alpha })}{k} \right) \frac{\Gamma ( w_{\alpha },{\overline{w}})- \Gamma (\varphi (w_{\alpha }),{\overline{w}})}{\Gamma ({\underline{w}},{\overline{w}})^{2}} \int _{w_{\alpha }}^{\varphi (w_{\alpha })}\frac{\partial {\tilde{f}}(w)}{\partial y^{M*}(w_{\alpha })} \frac{\mathrm {d} y^{M*}(w_{\alpha })}{\mathrm {d} w_{\alpha }}dw. \end{aligned} \end{aligned}$$
(98)

Since \(\mathrm {d} y^{M*}(w_{\alpha })/\mathrm {d} w_{\alpha }>0\) by assumption, we know according to (98) that

$$\begin{aligned} \frac{\partial ^2 \Xi (w_{\alpha };k)}{\partial w_{\alpha } \partial k}=\frac{\mathrm {d} \Lambda _{2}(k) }{\mathrm {d} k}+\frac{\mathrm {d} \Lambda _{3}(k) }{\mathrm {d} k}>0 \end{aligned}$$
(99)

whenever

$$\begin{aligned} \frac{\mathrm {d} \Gamma (w_{\alpha }, \varphi (w_{\alpha }))}{\mathrm {d} w_{\alpha }} \le 0, \end{aligned}$$

as desired. In the case of (99), \(\Xi (w_{\alpha };k)\) is supermodular, and an application of Topkis's theorem (see Theorem 6.1 of Topkis (1978)) again implies that \(w_{\alpha }(k)\) is nondecreasing in \(k\). \(\square\)

Proof of Theorem 4.1

We complete the proof in four steps.

\({\underline{\mathrm{Step 1.}}}\) Based on Proposition 3.3, we first consider the case with a downward discontinuity at the proposer's type, namely income schedules for which the SOIC condition is violated. Consider two alternative proposers of types \(k_{1}\) and \(k_{2}\) with \(k_{1}<k_{2}\). The schedule each proposes coincides with the maxi-max schedule for types below her own type and with the maxi-min schedule for types above it, and the maxi-max income schedule lies everywhere above the maxi-min income schedule; hence, the higher the proposer's type, the larger the set of workers below the proposer who are allocated the maxi-max incomes. Precisely, if the proposer changes from type \(k_{1}\) to type \(k_{2}\), all workers with types in \([k_{1}, k_{2}]\) are strictly better off in terms of pre-tax income, while workers of all remaining types are unaffected by this change. We hence have \(y(w,k_{1})\le y(w,k_{2})\) for all \(w, k_{1}, k_{2}\in [{\underline{w}}, {\overline{w}}]\) with \(k_{1}<k_{2}\), and \(y(w,k_{1})< y(w,k_{2})\) for all \(w\in [k_{1}, k_{2}]\) (see Fig. 7). In addition, as all proposers face the same government budget and incentive constraints, each proposer must weakly prefer what she obtains under her own schedule to what any other proposer's schedule assigns to her. Formally,

$$\begin{aligned} U(w,w)\ \ge \ U(w,k) \ \ \ \text {for} \ \forall w,k\in \left[ {\underline{w}}, {\overline{w}}\right] . \end{aligned}$$
(100)
Fig. 7. The type of proposer changes from \(k_{1}\) to \(k_{2}\) when SOIC is violated

We next show that a worker of any type \(w\) has a weakly single-peaked preference over the set of types. To this end, we consider two cases, with the proof procedure directly borrowed from Brett and Weymark (2017).

\({\underline{\mathrm{Step 2.}}}\) We first consider types to the right of \(w\). Arbitrarily pick three types \(w, k_{1}, k_{2}\) satisfying \(w<k_{1}<k_{2}\). Using (6) and (100), we have

$$\begin{aligned} \begin{aligned} U\left( w,k_{1}\right) \&= \ U(k_{1},k_{1})-\int _{w}^{k_{1}}h'\left( \frac{y(t,k_{1})}{t} \right) \frac{y(t,k_{1})}{t^{2}}dt\\&\ge \ U\left( k_{1},k_{2}\right) -\int _{w}^{k_{1}}h'\left( \frac{y(t,k_{1})}{t} \right) \frac{y(t,k_{1})}{t^{2}}dt. \end{aligned} \end{aligned}$$
(101)

Similarly, according to (6),

$$\begin{aligned} U(w,k_{2})\ = \ U(k_{1},k_{2})-\int _{w}^{k_{1}}h'\left( \frac{y(t,k_{2})}{t} \right) \frac{y(t,k_{2})}{t^{2}}dt. \end{aligned}$$
(102)

Solving for \(U(k_{1},k_{2})\) from (102) and inserting it into (101) produces

$$\begin{aligned} U(w,k_{1})-U(w,k_{2})\ \ge \ \int _{w}^{k_{1}}\left[ h'\left( \frac{y(t,k_{2})}{t} \right) \frac{y(t,k_{2})}{t^{2}}-h'\left( \frac{y(t,k_{1})}{t} \right) \frac{y(t,k_{1})}{t^{2}} \right] dt. \end{aligned}$$
(103)

Since \(h\) is strictly increasing and convex, the map \(y\mapsto h'\left( y/t \right) y/t^{2}\) is increasing in \(y\); together with \(y(t,k_{2})\ge y(t,k_{1})\) on \([w,k_{1}]\) from Step 1, the right-hand side of (103) is nonnegative, so \(U(w,k_{1})\ge U(w,k_{2})\). Combined with (100), this reveals that

$$\begin{aligned} U(w,w)\ \ge \ U(w,k_{1})\ \ge \ U(w,k_{2}), \ \ \ \forall w<k_{1}<k_{2}. \end{aligned}$$
(104)

\({\underline{\mathrm{Step 3.}}}\) We next consider types to the left of \(w\). For the case with \(w>k_{1}>k_{2}\), using (6) and (100) yields

$$\begin{aligned} \begin{aligned} U(w,k_{1})\&= \ U(k_{1},k_{1})+\int _{k_{1}}^{w}h'\left( \frac{y(t,k_{1})}{t} \right) \frac{y(t,k_{1})}{t^{2}}dt\\&\ge \ U(k_{1},k_{2})+\int _{k_{1}}^{w}h'\left( \frac{y(t,k_{1})}{t} \right) \frac{y(t,k_{1})}{t^{2}}dt. \end{aligned} \end{aligned}$$
(105)

Similarly,

$$\begin{aligned} U(w,k_{2})\ = \ U(k_{1},k_{2})+\int _{k_{1}}^{w}h'\left( \frac{y(t,k_{2})}{t} \right) \frac{y(t,k_{2})}{t^{2}}dt. \end{aligned}$$
(106)

Making use of (105) and (106) gives rise to

$$\begin{aligned} U(w,k_{1})-U(w,k_{2})\ \ge \ \int _{k_{1}}^{w}\left[ h'\left( \frac{y(t,k_{1})}{t} \right) \frac{y(t,k_{1})}{t^{2}}-h'\left( \frac{y(t,k_{2})}{t} \right) \frac{y(t,k_{2})}{t^{2}} \right] dt. \end{aligned}$$
(107)

Applying the same reasoning as in Step 2 to (107), we arrive at

$$\begin{aligned} U(w,w)\ \ge \ U(w,k_{1})\ \ge \ U(w,k_{2}), \ \ \ \forall w>k_{1}>k_{2}. \end{aligned}$$
(108)

Accordingly, (104) combined with (108) reveals that the preference of any given type of worker is indeed (weakly) single-peaked over the set of types. Applying Black's median voter theorem (see Black, 1948), the desired assertion follows. After ironing the downward discontinuity by building a bridge, we obtain the income schedule that satisfies the SOIC condition in Theorem 3.2. Using Lemma 3.5, the reasoning above applies to this case as well, and hence the desired assertion holds for the complete solution of the tax design problem.

\({\underline{\mathrm{Step 4.}}}\) We now move to the case in which the SOIC condition is met under the first-order approach. As shown in Fig. 8, if the proposer changes from type \(k_{1}\) to type \(k_{2}\), all workers with types in \([k_{1}, k_{2}]\) are strictly worse off in terms of pre-tax income, while workers of all remaining types are unaffected by this change. We hence have \(y(w,k_{1})\ge y(w,k_{2})\) for all \(w, k_{1}, k_{2}\in [{\underline{w}}, {\overline{w}}]\) with \(k_{1}<k_{2}\), and \(y(w,k_{1})> y(w,k_{2})\) for all \(w\in [k_{1}, k_{2}]\). In addition, condition (100) still applies here. It is easy to show that we still have (103) for \(w<k_{1}<k_{2}\) and (107) for \(w>k_{1}>k_{2}\). Since the right-hand sides of inequalities (103) and (107) are equal to zero in this case, the assertion established above continues to hold. \(\square\)

Fig. 8. The type of proposer changes from \(k_{1}\) to \(k_{2}\) when SOIC is met

Proof of Lemma 5.1

In the case of autarky considered by Brett and Weymark (2017), the marginal tax rates are obtained as follows:

$$\begin{aligned} \begin{aligned} {\hat{\tau }}^{M}(w)&\ = \ -\frac{F(w)}{wf(w)}\left[ \frac{1}{w}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{2}}h''\left( \frac{y(w)}{w} \right) \right] ,\\ {\hat{\tau }}^{R}(w)&\ = \ \frac{1-F(w)}{wf(w)}\left[ \frac{1}{w}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{2}}h''\left( \frac{y(w)}{w} \right) \right] . \end{aligned} \end{aligned}$$
(109)
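
As a concrete reading of (109), the sketch below evaluates both autarky marginal tax rates under assumed primitives, namely quadratic disutility \(h(l)=l^{2}/2\), skills uniform on \([1,2]\), and a placeholder income schedule \(y(w)=w^{2}\); these functional forms are illustrative only and are not the equilibrium objects of the model.

```python
# Numerical illustration of the autarky formulas (109) under assumed
# primitives: h(l) = l^2/2 (so h' = l, h'' = 1), skills uniform on [1, 2],
# and a placeholder income schedule y(w) = w^2.  Illustrative only.
w_lo, w_hi = 1.0, 2.0
f = lambda w: 1.0 / (w_hi - w_lo)            # uniform skill density
F = lambda w: (w - w_lo) / (w_hi - w_lo)     # its cdf
y = lambda w: w**2                           # assumed income schedule
hp = lambda l: l                             # h'
hpp = lambda l: 1.0                          # h''

def bracket(w):
    """Common bracketed term appearing in both lines of (109)."""
    return hp(y(w) / w) / w + (y(w) / w**2) * hpp(y(w) / w)

tau_M_hat = lambda w: -F(w) / (w * f(w)) * bracket(w)         # maxi-max rate
tau_R_hat = lambda w: (1.0 - F(w)) / (w * f(w)) * bracket(w)  # maxi-min rate

w = 1.5
print(tau_M_hat(w), tau_R_hat(w))   # negative and positive, respectively
```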

Using (25) and (109), we obtain

$$\begin{aligned} {\hat{\tau }}^{M}(w) -\tau ^{M}(w)=- \left[ \frac{1}{w^2}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{3}}h''\left( \frac{y(w)}{w} \right) \right] \left[ \frac{F(w)}{f(w)}- \frac{\Gamma ({\underline{w}},w)}{{\tilde{f}}(w)} \right] , \end{aligned}$$

from which assertion (i) immediately follows.

Using (26), (109) and (17), we obtain

$$\begin{aligned} {\hat{\tau }}^{R}(w) -\tau ^{R}(w)= \&\frac{\partial U(w)}{\partial y(w)}\left[ \frac{1-F(w)}{f(w)} - \frac{\Gamma ({\underline{w}},{\overline{w}})-\Gamma ({\underline{w}},w)}{{\tilde{f}}(w)} \right] \\&+ \frac{\partial U(w)}{\partial y(w)}\frac{{\tilde{\theta }}(w)}{c(w)} \left[ T^{R}(y(w))+U(w)-U'(w)-U({\underline{w}}) \right] , \end{aligned}$$

from which assertion (ii) follows.

Applying (109) and (26) reveals that

$$\begin{aligned}&{\hat{\tau }}^{M}(w) -\tau ^{R}(w)\\&\quad =-\frac{\partial U(w)}{\partial y(w)} \left\{ \frac{\Gamma ({\underline{w}},{\overline{w}})-\Gamma ({\underline{w}},w)}{{\tilde{f}}(w)}+\frac{F(w)}{f(w)} - \frac{{\tilde{\theta }}(w)}{c(w)} \left[ T^{R}(y(w))+U(w)-U'(w)-U({\underline{w}}) \right] \right\} , \end{aligned}$$

by which we can establish assertion (iii).

Applying (25) and (109) reveals that

$$\begin{aligned} {\hat{\tau }}^{R}(w) -\tau ^{M}(w) = \left[ \frac{1}{w^2}h'\left( \frac{y(w)}{w} \right) +\frac{y(w)}{w^{3}}h''\left( \frac{y(w)}{w} \right) \right] \cdot \left[ \frac{\Gamma ({\underline{w}},w)}{{\tilde{f}}(w)}+\frac{1-F(w)}{f(w)}\right] , \end{aligned}$$

by which assertion (iv) is immediate. \(\square\)

Proof of Proposition 5.1

First, it is easy to see from Propositions 3.2 and 3.3, (35) and (37) that \(\Theta ^{R}(w)<{\tilde{\theta }}_{\star }(w)<\min \{\Theta ^{MR}(w), {\tilde{\theta }}^{*}(w)\}\). The relative magnitude of \(\Theta ^{MR}(w)\) and \({\tilde{\theta }}^{*}(w)\) is generally ambiguous. Using claim (i) of Lemma 5.1 and Proposition 3.3(iii), we get \({\hat{\tau }}^{M}(w)>\tau ^{M}(w)\) whenever \(T^{M}(y({\underline{w}}))< U'({\underline{w}})\) and \(F(w)/f(w)<\Gamma ({\underline{w}},w)/{\tilde{f}}(w)\) for all \(w\in ({\underline{w}},w_{m}]\), as desired in all three conditions. Furthermore, using (109), we have \(0>{\hat{\tau }}^{M}(w)>\tau ^{M}(w)\). Using (34), the remaining requirements of condition (a) lead to \({\hat{\tau }}^{R}(w)>\tau ^{R}(w)\). Using claim (ii) of Proposition 3.2, the requirement of \(T^{R}(y(w))< U'(w)+U({\underline{w}})-U(w)\) in condition (a) leads to \(\tau ^{R}(w)>0\), and hence condition (a) implies that \({\hat{\tau }}^{R}(w)>\tau ^{R}(w)>0>{\hat{\tau }}^{M}(w)>\tau ^{M}(w)\), as desired in claim (i).

Using claim (ii) of Lemma 5.1, the requirements of \(T^{R}(y(w))> U'(w)+U({\underline{w}})-U(w)\) and \([1-F(w)]/f(w)\ge [\Gamma ({\underline{w}},{\overline{w}})-\Gamma ({\underline{w}},w)]/{\tilde{f}}(w)\) in condition (b) imply that \({\hat{\tau }}^{R}(w)>\tau ^{R}(w)\). If \({\tilde{\theta }}(w) \le {\tilde{\theta }}_{\star }(w)\), then we get from claim (ii) of Proposition 3.2 that \(\tau ^{R}(w)>0\), and hence condition (b) also implies that \({\hat{\tau }}^{R}(w)>\tau ^{R}(w)>0>{\hat{\tau }}^{M}(w)>\tau ^{M}(w)\), as desired in claim (i). Departing from condition (a), the migration elasticity is allowed to be greater than the threshold \({\tilde{\theta }}_{\star }(w)\), which means that it is possible that \(\tau ^{R}(w)<0\) based on claim (ii) of Proposition 3.2. However, using claim (iii) of Lemma 5.1, it is still guaranteed that \(0>\tau ^{R}(w)>{\hat{\tau }}^{M}(w)\) for \({\tilde{\theta }}_{\star }(w)<{\tilde{\theta }}(w)<\min \{\Theta ^{MR}(w), {\tilde{\theta }}^{*}(w)\}\). Using (109), we have in this case that \({\hat{\tau }}^{R}(w)>0>\tau ^{R}(w)>{\hat{\tau }}^{M}(w)>\tau ^{M}(w)\). Once again, claim (i) follows.

Using claim (ii) of Lemma 5.1, the requirements of \(T^{R}(y(w))>U'(w)+U({\underline{w}})-U(w)\), \([1-F(w)]/f(w)< [\Gamma ({\underline{w}},{\overline{w}})-\Gamma ({\underline{w}},w)]/{\tilde{f}}(w)\) and \(\Theta ^{R}(w)<{\tilde{\theta }}(w)\) for all \(w\in (w_{m},{\overline{w}}]\) in condition (c) imply that \({\hat{\tau }}^{R}(w)>\tau ^{R}(w)\). If \(\Theta ^{R}(w)<{\tilde{\theta }}(w) \le {\tilde{\theta }}_{\star }(w)\), then we get from claim (ii) of Proposition 3.2 that \(\tau ^{R}(w)>0\), and hence condition (c) also implies that \({\hat{\tau }}^{R}(w)>\tau ^{R}(w)>0>{\hat{\tau }}^{M}(w)>\tau ^{M}(w)\), as desired in claim (i). Similar to condition (b), here the migration elasticity is allowed to be greater than the threshold \({\tilde{\theta }}_{\star }(w)\), which leads to the possible prediction of \(\tau ^{R}(w)<0\) based on claim (ii) of Proposition 3.2. However, using claim (iii) of Lemma 5.1, it is still guaranteed that \(0>\tau ^{R}(w)>{\hat{\tau }}^{M}(w)\) for \({\tilde{\theta }}_{\star }(w)<{\tilde{\theta }}(w)<\min \{\Theta ^{MR}(w), {\tilde{\theta }}^{*}(w)\}\). Using (109), we have in this case that \({\hat{\tau }}^{R}(w)>0>\tau ^{R}(w)>{\hat{\tau }}^{M}(w)>\tau ^{M}(w)\). Once again, claim (i) follows. Finally, applying claim (i) and Fig. 3, claims (ii)-(iii) of this proposition are immediate. \(\square\)

Proof of Proposition 5.2

The proof is quite similar to that of the previous proposition, and hence the details are omitted to save space. Basically, using Lemma 5.1, Proposition 3.2 and (109), we get \(\tau ^{R}(w)>{\hat{\tau }}^{R}(w)>0>\tau ^{M}(w)>{\hat{\tau }}^{M}(w)\) under any one of the four conditions, so claim (i) holds true. Using claim (i) and Fig. 4, claims (ii)-(iii) are immediate. \(\square\)

Proof of Corollary 5.1

According to the proof of Theorem 3.2 and Proposition 5.1, the relationship between the income schedule under migration and that in autarky is illustrated by Fig. 5, in which the thick black curve represents the income schedule derived by Brett and Weymark (2017) after ironing the downward discontinuity at the ex ante median skill level, with the bridge endpoints denoted by \({\hat{w}}_{\alpha }\) and \({\hat{w}}_{\beta }\). Also, \(w_{\alpha }, w_{\beta }\) denote the bridge endpoints endogenously determined in the proof of Lemma 3.5. Result (i) then follows immediately from the left panel of Fig. 5, and result (ii) from the right panel. \(\square\)

Proof of Proposition 5.3

Given that the maxi-max income schedule lies above the maxi-min income schedule in the autarky equilibrium but not in the migration equilibrium, claim (i) follows from applying part (i) of Lemma 5.1 and part (i) of Proposition 3.2. Thus, in order for all skill types to face higher tax rates under migration than in autarky, the maxi-min tax rates in the migration equilibrium would have to exceed those in the autarky equilibrium; by part (ii) of Lemma 5.1, this cannot hold under the assumption that \(T^{R}(y(w))> U'(w)+U({\underline{w}})-U(w)\) and \({\tilde{\theta }}(w)\ge {\tilde{\theta }}^{*}(w)\) for all \(w \in ({\tilde{w}}_{m},{\overline{w}}]\). As such, the maxi-min tax rates must be higher in the autarky equilibrium than in the migration equilibrium. Applying the fact noted at the beginning once more, claim (iii) is immediate from part (i) of Lemma 5.1 and part (i) of Proposition 3.2. \(\square\)

Appendix B: The relative magnitude of ex ante and ex post median skill levels

After taking the migration decisions into account, the ex post measure of workers is given by

$$\begin{aligned} \Gamma ({\underline{w}},{\overline{w}})=\int _{{\underline{w}}}^{{\overline{w}}}{\tilde{f}}(w)dw \ = \ \int _{{\underline{w}}}^{w_{m}}{\tilde{f}}(w)dw + \int _{w_{m}}^{{\overline{w}}}{\tilde{f}}(w)dw. \end{aligned}$$
(110)

By using (8), we can write the two terms on the right-hand side of (110) as

$$\begin{aligned} \int _{{\underline{w}}}^{w_{m}}{\tilde{f}}(w)dw = \frac{1}{2} \ + \ \underset{L^{NI}([{\underline{w}},w_{m}])\ =\ \text {net labor inflow}}{\underbrace{L^{I}([{\underline{w}},w_{m}])-L^{O}([{\underline{w}},w_{m}])}} \end{aligned}$$
(111)

and

$$\begin{aligned} \int _{w_{m}}^{{\overline{w}}}{\tilde{f}}(w)dw = \frac{1}{2} \ + \ \underset{L^{NI}([w_{m},{\overline{w}}])\ =\ \text {net labor inflow}}{\underbrace{L^{I}([w_{m},{\overline{w}}])-L^{O}([w_{m},{\overline{w}}])}}, \end{aligned}$$
(112)

in which the measures of labor inflows are defined as

$$\begin{aligned} \begin{aligned} L^{I}([{\underline{w}},w_{m}])&\equiv \int _{\left\{ w\in [{\underline{w}},w_{m}]| \Delta (w)\ge 0 \right\} }G_{-}(\Delta (w)|w)f_{-}(w)n_{-}dw, \\ L^{I}([w_{m},{\overline{w}}])&\equiv \int _{\left\{ w\in [w_{m},{\overline{w}}]| \Delta (w)\ge 0 \right\} }G_{-}(\Delta (w)|w)f_{-}(w)n_{-}dw, \end{aligned} \end{aligned}$$
(113)

and the measures of labor outflows are defined as

$$\begin{aligned} \begin{aligned} L^{O}([{\underline{w}},w_{m}])&\equiv \int _{\left\{ w\in [{\underline{w}},w_{m}]| \Delta (w)\le 0 \right\} }G(-\Delta (w)|w)f(w)dw, \\ L^{O}([w_{m},{\overline{w}}])&\equiv \int _{\left\{ w\in [w_{m},{\overline{w}}]| \Delta (w)\le 0 \right\} }G(-\Delta (w)|w)f(w)dw. \end{aligned} \end{aligned}$$
(114)

Using (110)–(114), we can identify the relationship of the ex post median skill level \({\tilde{w}}_{m}\) with the ex ante median skill level \(w_{m}\) and summarize the results as three propositions.

Proposition 6.1

Suppose \(\Gamma ({\underline{w}},{\overline{w}})=1\). We have: (a) If \(L^{NI}([{\underline{w}},w_{m}])=L^{NI}([w_{m},{\overline{w}}])=0\), then \({\tilde{w}}_{m}=w_{m}\); (b) If \(L^{NI}([{\underline{w}},w_{m}])>0\) and \(L^{NI}([w_{m},{\overline{w}}])<0\), then \({\tilde{w}}_{m}<w_{m}\); (c) If \(L^{NI}([{\underline{w}},w_{m}])<0\) and \(L^{NI}([w_{m},{\overline{w}}])>0\), then \({\tilde{w}}_{m}>w_{m}\).

Proposition 6.2

Suppose \(\Gamma ({\underline{w}},{\overline{w}})>1\). We have: (a) If \(L^{NI}([{\underline{w}},w_{m}])=0\) and \(L^{NI}([w_{m},{\overline{w}}])>0\), then \({\tilde{w}}_{m}>w_{m}\); (b) If \(L^{NI}([{\underline{w}},w_{m}])>0\) and \(L^{NI}([w_{m},{\overline{w}}])=0\), then \({\tilde{w}}_{m}<w_{m}\); (c) If \(L^{NI}([{\underline{w}},w_{m}])>0\) and \(L^{NI}([w_{m},{\overline{w}}])>0\), then \({\tilde{w}}_{m}<w_{m}\) for \(L^{NI}([{\underline{w}},w_{m}])>L^{NI}([w_{m},{\overline{w}}])\), \({\tilde{w}}_{m}=w_{m}\) for \(L^{NI}([{\underline{w}},w_{m}])=L^{NI}([w_{m},{\overline{w}}])\), and \({\tilde{w}}_{m}>w_{m}\) for \(L^{NI}([{\underline{w}},w_{m}])<L^{NI}([w_{m},{\overline{w}}])\); (d) If \(L^{NI}([{\underline{w}},w_{m}])>0\) and \(L^{NI}([w_{m},{\overline{w}}])<0\), then \({\tilde{w}}_{m}<w_{m}\); (e) If \(L^{NI}([{\underline{w}},w_{m}])<0\) and \(L^{NI}([w_{m},{\overline{w}}])>0\), then \({\tilde{w}}_{m}>w_{m}\).

Proposition 6.3

Suppose \(\Gamma ({\underline{w}},{\overline{w}})<1\). We have: (a) If \(L^{NI}([{\underline{w}},w_{m}])=0\) and \(L^{NI}([w_{m},{\overline{w}}])<0\), then \({\tilde{w}}_{m}<w_{m}\); (b) If \(L^{NI}([{\underline{w}},w_{m}])<0\) and \(L^{NI}([w_{m},{\overline{w}}])=0\), then \({\tilde{w}}_{m}>w_{m}\); (c) If \(L^{NI}([{\underline{w}},w_{m}])<0\) and \(L^{NI}([w_{m},{\overline{w}}])<0\), then \({\tilde{w}}_{m}<w_{m}\) for \(L^{NI}([{\underline{w}},w_{m}])>L^{NI}([w_{m},{\overline{w}}])\), \({\tilde{w}}_{m}=w_{m}\) for \(L^{NI}([{\underline{w}},w_{m}])=L^{NI}([w_{m},{\overline{w}}])\), and \({\tilde{w}}_{m}>w_{m}\) for \(L^{NI}([{\underline{w}},w_{m}])<L^{NI}([w_{m},{\overline{w}}])\); (d) If \(L^{NI}([{\underline{w}},w_{m}])>0\) and \(L^{NI}([w_{m},{\overline{w}}])<0\), then \({\tilde{w}}_{m}<w_{m}\); (e) If \(L^{NI}([{\underline{w}},w_{m}])<0\) and \(L^{NI}([w_{m},{\overline{w}}])>0\), then \({\tilde{w}}_{m}>w_{m}\).

To identify the relationship between the ex ante and ex post median skill levels, we divide the ex post population of workers into two groups: the first group has skill levels below the ex ante median skill level and the second group has skill levels above it. Propositions 6.1–6.3 consider three possible cases corresponding to the three possible ex post measures of workers of all skill levels.

Proposition 6.1 considers the case in which migration does not change the total measure of workers. This proposition yields three possible subcases. In subcase (a), labor inflows and outflows cancel out for both groups, and hence the ex post median skill level coincides with the ex ante one. In subcase (b), the first group experiences a positive net labor inflow while the second group experiences a net labor outflow; with the total measure unchanged, the ex post median skill level therefore shifts to the left and is smaller than the ex ante one. In subcase (c), the first group experiences a net labor outflow while the second group experiences a positive net labor inflow; the ex post median skill level therefore shifts to the right and is larger than the ex ante one. Propositions 6.2 and 6.3 can be analyzed in the same way.
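
The median-shift logic of these propositions is easy to verify numerically. The sketch below fixes an assumed ex post density with total measure one that tilts mass toward low skills (an illustration of subcase (b) of Proposition 6.1, with the ex ante skill distribution taken to be uniform on \([1,2]\)), computes the net inflows on the two halves of the support, and locates the ex post median:

```python
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustration of Proposition 6.1(b): with total ex post measure one, a net
# inflow of low-skilled workers and a net outflow of high-skilled workers
# push the ex post median below the ex ante median.  The ex post density
# below is an assumption chosen only for this illustration.
w_lo, w_hi, w_m = 1.0, 2.0, 1.5                 # ex ante: skills uniform on [1, 2]
f_tilde = lambda w: 1.6 - 1.2 * (w - 1.0)       # assumed ex post density, total mass 1

mass_low, _ = quad(f_tilde, w_lo, w_m)           # ex post mass below w_m
mass_high, _ = quad(f_tilde, w_m, w_hi)          # ex post mass above w_m
print("net inflow on [w_lo, w_m]:", mass_low - 0.5)    # positive
print("net inflow on [w_m, w_hi]:", mass_high - 0.5)   # negative

# The ex post median m solves  integral_{w_lo}^{m} f_tilde(w) dw = 1/2.
median_post = brentq(lambda m: quad(f_tilde, w_lo, m)[0] - 0.5, w_lo, w_hi)
print("ex post median:", median_post, " (smaller than the ex ante median 1.5)")
```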
