
Optimal multi-unit allocation with costly verification

Original Paper, published in Social Choice and Welfare

Abstract

A principal has n homogeneous objects to allocate to \(I> n\) agents. The principal can allocate at most one good to an agent, and each agent values the good. Agents have private information about the principal’s payoff from allocating the goods. There are no monetary transfers, but the principal may check any agent’s value at a cost. In this setting, we propose a direct mechanism, called the n-ascending mechanism, which balances the benefit of efficient allocation and the cost of checking agents. While such a mechanism itself is not obviously strategy-proof, we show that its outcome is easily implementable by an extensive game which has an equilibrium in obviously dominant strategies. When \(n = 2,\) we show that the 2-ascending mechanism is essentially the unique optimal mechanism that maximizes the principal’s expected net payoff.



Data availability

Not applicable.

Notes

  1. The agents \(i = 1,\ldots , n\) may not be the default first n agents; we may relabel agents according to their value distributions and checking costs. See Sect. 3.1.

  2. In an earlier version Chua et al. (2019), we also prove the optimality of the n-ascending mechanism for general n.

  3. See also Gale and Hellwig (1985), Border and Sobel (1987), and Mookherjee and Png (1989). Our work is also related to, among others, Glazer and Rubinstein (2004, 2006), Green and Laffont (1986), Bull and Watson (2007), Deneckere and Severinov (2008), Ben-Porath and Lipman (2012), Kartik and Tercieux (2012), Sher and Vohra (2015) and Doval (2018) in the same sense as Ben-Porath et al. (2014).

  4. See also Lipman (2015) and Erlanson and Kleiner (2019) for related technical results.

  5. See Dye (1985) and Jung and Kwon (1988) for earlier work on evidence models.

  6. See Section 3.2 and Appendix C of Ben-Porath et al. (2019) for how their approach works when there are only finitely many types. Our approach to the continuous-type setting differs significantly from theirs. See Appendix B.

  7. See Remark (iv) in Sect. 3.4 for more detailed discussions.

  8. See also Li (2021) for the allocation of goods among financially constrained agents.

  9. See part 1 of the online appendix of BDL for more detailed arguments.

  10. The word “essential” means “up to measure-zero subsets of \({\mathbf{T}}.\)” This applies throughout the paper.

  11. BDL defines a critical value \(t_{i}^*\) implicitly by \({\mathbf{E}}\left( t_{i}\right) ={\mathbf{E}}\left( \max \left\{ t_{i},t_{i}^*\right\} \right) -c_{i}.\) It is straightforward to see that \(v_{i}^* = t_{i}^*-c_{i}.\)

  12. See Lemma 8 in Appendix B for a case where the reallocation occurs between agent i and the agent who has the second highest reported net value among \({\mathcal{I}}{\setminus } \{i\}.\) For other reallocation rules that are of interest, see Step (ii) of the proof of Proposition 4 in Chua et al. (2019).

  13. Although the choice variable has been simplified from \(({\mathbf{p}},{\mathbf{q}})\) to \({\mathbf{p}},\) we sometimes still specify the underlying checking policy to make the intuition clear.

  14. The mechanism design literature has explored different limiting arguments for different purposes, which are often problem specific; see, e.g., Mierendorff (2011, 2016).

  15. See their Example 2 and Appendix C for details.

  16. Due to the equivalence, OSP implementation of the n-ASM also serves as an OSP implementation of the optimal mechanism in the discrete model.

  17. The optimality of the n-ASM for general n can be found in the working paper version Chua et al. (2019). The proof there is more involved, but the key insights are the same as those delivered in the case of \(n = 2.\)

  18. One can apply the argument in Footnote 10 of BDL to make agent i strictly prefer truth-telling to lying, without changing the allocation and checking cost on the equilibrium path. The same comment applies to the second case.

  19. Each slice here is a compact subset of \(\Phi .\) The same footnote applies to slices of \(\Phi _{j}\) for all \(j\in {\mathcal{I}}.\)

  20. We allow the mechanism to withhold some good in tied cases to avoid discussions on zero-measure sets.

  21. To have a more transparent comparison, one can specify the ex-post allocation \({\mathbf{p}}({\mathbf{t}})\) for all \({\mathbf{t}}\in {\mathbf{T}}\) according to Definition 4. Since this is simply restating the definition and the proof of Lemma 6, we do not repeat the details here.

  22. For example, the checking policies after subproblem (16) and subproblem (17) constitute such an eligible \({\mathbf{q}}.\)

References

  • Aliprantis CD, Border KC (2006) Infinite dimensional analysis: a Hitchhiker’s guide. Springer, Berlin


  • Ben-Porath E, Lipman BL (2012) Implementation with partial provability. J Econ Theory 147:1689–1724


  • Ben-Porath E, Dekel E, Lipman BL (2014) Optimal allocation with costly verification. Am Econ Rev 104:3779–3813


  • Ben-Porath E, Dekel E, Lipman BL (2019) Mechanisms with evidence: commitment and robustness. Econometrica 87:529–566


  • Border KC, Sobel J (1987) Samurai accountant: a theory of auditing and plunder. Rev Econ Stud 54:525–540


  • Bull J, Watson J (2007) Hard evidence and mechanism design. Games Econ Behav 58:75–93


  • Chua GA, Hu G, Liu F (2019) Optimal multi-unit allocation with costly verification. Available at SSRN. https://ssrn.com/abstract=3407031. Accessed 18 Oct 2022.

  • Deneckere R, Severinov S (2008) Mechanism design with partial state verifiability. Games Econ Behav 64:487–513


  • Doval L (2018) Whether or not to open Pandora’s box. J Econ Theory 175:127–158


  • Dye RA (1985) Disclosure of nonproprietary information. J Account Res 23:123–145

  • Epitropou M, Vohra R (2019) Optimal on-line allocation rules with verification. In: International symposium on algorithmic game theory. Springer, Berlin, pp 3–17

  • Erlanson A, Kleiner A (2019) A note on optimal allocation with costly verification. J Math Econ 84:56–62


  • Erlanson A, Kleiner A (2020) Costly verification in collective decisions. Theor Econ 15:923–954


  • Gale D, Hellwig M (1985) Incentive-compatible debt contracts: the one-period problem. Rev Econ Stud 52:647–663


  • Glazer J, Rubinstein A (2004) On optimal rules of persuasion. Econometrica 72:1715–1736


  • Glazer J, Rubinstein A (2006) A study in the pragmatics of persuasion: a game theoretical approach. Theor Econ 1:395–410


  • Green JR, Laffont J-J (1986) Partially verifiable information and mechanism design. Rev Econ Stud 53:447–456


  • Jung W-O, Kwon YK (1988) Disclosure when the market is unsure of information endowment of managers. J Account Res 26:146–153

  • Kartik N, Tercieux O (2012) Implementation with evidence. Theor Econ 7:323–355


  • Li S (2017) Obviously strategy-proof mechanisms. Am Econ Rev 107:3257–3287


  • Li Y (2020) Mechanism design with costly verification and limited punishments. J Econ Theory 186:105000

  • Li Y (2021) Mechanism design with financially constrained agents and costly verification. Theor Econ 16:1139–1194

  • Lipman B (2015) An elementary proof of the optimality of threshold mechanisms. Working Paper

  • Mierendorff K (2011) Optimal dynamic mechanism design with deadlines. Unpublished manuscript, University of Zurich

  • Mierendorff K (2016) Optimal dynamic mechanism design with deadlines. J Econ Theory 161:190–222


  • Mookherjee D, Png I (1989) Optimal auditing, insurance, and redistribution. Q J Econ 104:399–415


  • Mylovanov T, Zapechelnyuk A (2017) Optimal allocation with ex post verification and limited penalties. Am Econ Rev 107:2666–94


  • Osborne MJ, Rubinstein A (1994) A course in game theory. MIT Press, Cambridge


  • Sher I, Vohra R (2015) Price discrimination through communication. Theor Econ 10:597–648


  • Townsend RM (1979) Optimal contracts and competitive markets with costly state verification. J Econ Theory 21:265–293



Acknowledgements

We are indebted to the editors and two anonymous referees for their generous guidance and detailed comments which significantly improved the paper. We are also grateful to Yi-Chun Chen, Jiangtao Li, Bart Lipman, Joel Sobel, Satoru Takahashi, Xi Weng, Yongchao Zhang and participants at various conferences for helpful comments, and Xu Cheng for excellent research assistance. Chua acknowledges the financial support by Singapore Ministry of Education Academic Research Fund MOE2015-T2-2-046. Hu acknowledges the financial support by the National Natural Science Foundation of China (No. 72003121 and No. 72033004) and Shanghai Pujiang Program (No. 2020PJC052). Liu acknowledges the financial support by the Major Program of the National Natural Science Foundation of China (No. 72192843) and National Natural Science Foundation of China (No. 72071198). All remaining errors are our own. Due to the length consideration, this paper is a substantial shortening of an earlier version Chua et al. (2019), which proves the optimality of the n-ascending mechanism for all \(n \ge 2\) and contains additional discussions.

Author information


Corresponding author

Correspondence to Gaoji Hu.


Supplementary Information

The online appendix is available as electronic supplementary material (PDF 12710 KB).

Appendices

Appendix A: Proof of Theorem 1

Suppose agent i is called to play. Regardless of the history \(h\in I_i(t_i),\) we could have \(p_i = 1\) only if \(i = m_k\) for some k, \(q_i = 1,\) and \(t_i' = t_i.\) Therefore, whenever \(t_i'' \ne t_i,\) we have \(p_i (h, t_i'', {\mathbf{t}}_{-i}'',t_i) \equiv 0\) regardless of h and \({\mathbf{t}}_{-i}'',\) which means that

$$\begin{aligned} \sup _{h\in I_i(t_i),\ {\mathbf{t}}_{-i}''\in {\mathbf{T}}_{-i}} p_i (h, t_i'', {\mathbf{t}}_{-i}'',t_i) = \inf _{h\in I_i(t_i),\ {\mathbf{t}}_{-i}''\in {\mathbf{T}}_{-i}} p_i (h, t_i'', {\mathbf{t}}_{-i}'',t_i) = 0. \end{aligned}$$

For the truthful strategy \(t_i' = t_i,\) we consider three cases. First, any agent \(i > n\) with \(t_i - c_i \le v_n^*\) is indifferent among all strategies because \(p_i\) will be zero anyway.\(^{18}\) Thus, \(t_i' = t_i\) is trivially an obviously dominant strategy. Second, any agent \(i = 1,\ldots , n\) with \(t_i - c_i \le v_i^*\) is indifferent among all strategies, because his utility is purely determined by \({\mathbf{t}}_{-i}'\) and \({\mathbf{t}}_{-i}\) (zero if challenged and one if not). Again, \(t_i' = t_i\) is trivially an obviously dominant strategy. Finally, in all other scenarios, there are \(h\in I_i(t_i)\) and \({\mathbf{t}}_{-i}''\) such that \(p_i (h, t_i, {\mathbf{t}}_{-i}'',t_i) =1,\) as well as \(h\in I_i(t_i)\) and \({\mathbf{t}}_{-i}''\) such that \(p_i (h, t_i, {\mathbf{t}}_{-i}'',t_i) = 0,\) as the following example with two goods and three agents shows (where \(i = 1,2,3\) and \({\mathbf{t}}_{-i}'' = {\mathbf{t}}_{-i}\)):

$$\begin{aligned}&v_2^*<v_1^*< t_3-c_3< t_2 - c_2< t_1 - c_1\\ &v_2^*<v_1^*< t_2 - c_2< t_1 - c_1< t_3-c_3 \\ &v_2^*<v_1^*< t_1 - c_1< t_3-c_3 < t_2 - c_2. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \sup _{h\in I_i(t_i),\ {\mathbf{t}}_{-i}''\in {\mathbf{T}}_{-i}} p_i (h, t_i, {\mathbf{t}}_{-i}'',t_i) = 1 \quad \text{and} \quad \inf _{h\in I_i(t_i),\ {\mathbf{t}}_{-i}''\in {\mathbf{T}}_{-i}} p_i (h, t_i, {\mathbf{t}}_{-i}'',t_i) = 0. \end{aligned}$$

As a result, \(t_i' = t_i\) is an obviously dominant strategy for agent i, and no other strategy is.
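The dominance criterion used throughout this proof can be stated abstractly: a strategy is obviously dominant if its worst payoff (over histories and opponents' reports) is at least the best payoff of any deviation. The following sketch is illustrative only, with a hypothetical finite payoff table; the paper's mechanism itself is not implemented here.

```python
# Illustrative sketch (assumption: finite payoff tables, one entry per
# (history, opponents'-report) pair). Checks the obvious-dominance
# condition used in the proof: the infimum payoff from truth-telling
# must be at least the supremum payoff from every misreport.

def obviously_dominant(p_truth, p_lies):
    """p_truth: list of payoffs from reporting the truth.
    p_lies: dict mapping each misreport to its list of payoffs.
    Returns True iff worst truth >= best payoff of every lie."""
    worst_truth = min(p_truth)
    return all(worst_truth >= max(p_lie) for p_lie in p_lies.values())

# Mirrors the proof: every lie yields p_i = 0 (sup = inf = 0), while
# truth-telling yields payoffs ranging over {0, 1}; worst truth (0)
# still weakly beats the best lie (0), so truth is obviously dominant.
assert obviously_dominant([0, 1, 1], {"lie": [0, 0, 0]})
# A failing case: the best lie (1) beats the worst truth (0).
assert not obviously_dominant([0, 1], {"lie": [0, 1]})
```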

Appendix B: Proof of Theorem 2: a roadmap

We present only the major steps, with formal statements, in this appendix. Technical details omitted here can be found in Online Appendix D.

In Appendix B.1, we identify a set of binding constraints such that the checking policy \({\mathbf{q}}\) is pinned down (in reduced form) by the allocation rule \({\mathbf{p}},\) which means that the choice variable can be simplified from \(({\mathbf{p}},{\mathbf{q}})\) to \({\mathbf{p}}.\) In Appendices B.2 and B.3, we classify the set of feasible mechanisms into convex and compact subsets and introduce the threshold mechanisms for each subset. Appendix B.4 shows that to solve the principal’s problem, it suffices to restrict attention to the threshold mechanisms, which can also be classified into convex and compact subsets. Finally, in Appendix B.5, we examine the extreme points of these convex and compact subsets, and characterize the optimal mechanisms.

1.1 B.1 Simplification of the principal’s problem

In this subsection, we simplify the principal’s problem (1)–(5). The goal is to eliminate \({\mathbf{q}}\) from the choice variable, so the principal only needs to choose \({\mathbf{p}}.\) This part is largely adapted from BDL.

Recall that \(\hat{p}_i\left( t_i\right)\) \(={\mathbf{E}}_{{\mathbf{t}}_{-i}} p_i\left( t_i, {\mathbf{t}}_{-i}\right)\) and \(\hat{q}_i\left( t_i\right) ={\mathbf{E}}_{{\mathbf{t}}_{-i}} q_i\left( t_i, {\mathbf{t}}_{-i}\right) .\) We can write the incentive compatibility constraint as

$$\begin{aligned} \hat{p}_i\left( t_i^{\prime }\right) \ge \hat{p}_i\left( t_i\right) -\hat{q}_i\left( t_i\right) , \quad \forall t_i, t_i^{\prime } \in T_i. \end{aligned}$$

This incentive compatibility constraint has the property that the payoff to falsely claiming to be type \(t_i\) does not depend on the true type \(t_i^{\prime }.\) Because the payoff to lying does not depend on the truth, we can rearrange the incentive constraint to say that the worst truth is better than any lie. In other words, a mechanism \(({\mathbf{p}},{\mathbf{q}})\) is incentive compatible if and only if it satisfies

$$\begin{aligned} \inf _{t_i^{\prime } \in T_i} \hat{p}_i\left( t_i^{\prime }\right) \ge \hat{p}_i\left( t_i\right) -\hat{q}_i\left( t_i\right) , \quad \forall t_i \in T_i. \end{aligned}$$

Let \(\varphi _i^{\mathbf{p}}\,{:}{=}\,\inf _{t_i^{\prime } \in T_i} \hat{p}_i\left( t_i^{\prime }\right)\) be the worst interim assignment probability under the truth that is induced by \({\mathbf{p}}.\) We can rewrite the incentive compatibility constraint as

$$\begin{aligned} \hat{q}_i\left( t_i\right) \ge \hat{p}_i\left( t_i\right) -\varphi _i, \quad \forall t_i \in T_i. \end{aligned}$$

Because the objective function (1) is strictly decreasing in \(\hat{q}_i\left( t_i\right) ,\) this constraint must bind, so

$$\begin{aligned} \hat{q}_i\left( t_i\right) =\hat{p}_i\left( t_i\right) -\varphi _i, \quad \forall t_i \in T_i. \end{aligned}$$
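The binding relation above pins down the checking policy mechanically from the interim allocation. A minimal sketch, assuming a hypothetical interim allocation rule on a finite type grid (the paper's types are continuous; the discretization is for illustration only):

```python
# Illustrative sketch (assumption: a hypothetical interim allocation
# rule p_hat on a finite type grid). Given p_hat, the binding checking
# policy is q_hat_i(t_i) = p_hat_i(t_i) - phi_i, where phi_i is the
# worst interim allocation probability.

def binding_check_policy(p_hat):
    """p_hat: dict {type: interim allocation probability}.
    Returns (phi, q_hat) with phi = min p_hat and q_hat = p_hat - phi."""
    phi = min(p_hat.values())
    q_hat = {t: p - phi for t, p in p_hat.items()}
    return phi, q_hat

# A toy increasing allocation rule on four types.
p_hat = {0.2: 0.25, 0.4: 0.25, 0.6: 0.60, 0.8: 1.00}
phi, q_hat = binding_check_policy(p_hat)
# phi = 0.25; the two lowest types are never checked (q_hat = 0),
# and q_hat rises one-for-one with p_hat above that floor.
```

Types receiving exactly the floor probability are never checked, which is the checking-cost saving \(\varphi_i c_i\) that appears in the objective below.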

We can substitute this result into the objective function and rewrite the principal’s objective function as

$$\begin{aligned} {\mathbf{E}}_{{\mathbf{t}}}\left[ \sum _{i\in {\mathcal{I}}} p_i({\mathbf{t}}) t_i-\sum _{i\in {\mathcal{I}}} c_i q_i({\mathbf{t}})\right]&=\sum _{i\in {\mathcal{I}}} {\mathbf{E}}_{t_i}\left[ \hat{p}_i\left( t_i\right) t_i-c_i \hat{q}_i\left( t_i\right) \right] \\&=\sum _{i\in {\mathcal{I}}} {\mathbf{E}}_{t_i}\left[ \hat{p}_i\left( t_i\right) \left( t_i-c_i\right) +\varphi _i c_i\right] \\&={\mathbf{E}}_{{\mathbf{t}}}\left[ \sum _{i\in {\mathcal{I}}}\left[ p_i({\mathbf{t}})\left( t_i-c_i\right) +\varphi _i c_i\right] \right] . \end{aligned}$$
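The rewriting above can be verified numerically on a toy discrete example. This is only a sanity check under assumptions not in the paper (two agents, two equally likely types each, a random feasible allocation rule); the identity itself holds because the expectation of the ex-post allocation equals that of the interim allocation.

```python
import itertools
import random

# Numerical sanity check (illustrative, not from the paper): after
# substituting the binding checking policy q_hat_i = p_hat_i - phi_i,
# E[sum_i p_i t_i - c_i q_i] equals E[sum_i p_i (t_i - c_i) + phi_i c_i].
# Assumptions: two agents, two equally likely types, random allocation p.

random.seed(0)
types = [0.3, 0.9]                 # common type grid, uniform
c = [0.1, 0.2]                     # checking costs
profiles = list(itertools.product(types, types))
p = {tt: [random.random(), random.random()] for tt in profiles}

def p_hat(i, ti):                  # interim allocation probability
    return sum(p[tt][i] for tt in profiles if tt[i] == ti) / len(types)

phi = [min(p_hat(i, ti) for ti in types) for i in range(2)]
w = 1.0 / len(profiles)            # probability of each type profile

lhs = sum(w * (p[tt][i] * tt[i] - c[i] * (p_hat(i, tt[i]) - phi[i]))
          for tt in profiles for i in range(2))
rhs = sum(w * (p[tt][i] * (tt[i] - c[i]) + phi[i] * c[i])
          for tt in profiles for i in range(2))

assert abs(lhs - rhs) < 1e-12      # the two objectives coincide
```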

Then the principal’s problem becomes

$$\begin{aligned} \max _{\mathbf{p}} \quad&{\mathbf{E}}_{{\mathbf{t}}} \left[ \sum _{i\in {\mathcal{I}}} \left[ p_i({\mathbf{t}})(t_i-c_i) + \varphi _ic_i \right] \right] \end{aligned}$$
(8)
$$\begin{aligned} \text{ subject to} \quad&p_i({\mathbf{t}}) \in [0,1], \quad \forall {\mathbf{t}}\in {\mathbf{T}}, \ \forall i\in {\mathcal{I}}, \\&\sum _{i\in {\mathcal{I}}} p_i({\mathbf{t}})\le 2, \quad \forall {\mathbf{t}}\in {\mathbf{T}}, \quad \text{and} \\&\varphi _i=\inf _{t_i^{\prime } \in T_i} \hat{p}_i\left( t_i^{\prime }\right) , \quad \forall i\in {\mathcal{I}}, \end{aligned}$$
(9)

which is independent of \({\mathbf{q}}.\) From now on, we also refer to a mechanism as \({\mathbf{p}}.\)

We say a mechanism \({\mathbf{p}}\) is feasible if \(p_i({\mathbf{t}}) \in [0,1]\) for all \({\mathbf{t}}\) and all i,  and \(\sum _{i\in {\mathcal{I}}} p_i({\mathbf{t}})\le 2\) for all \({\mathbf{t}}.\) Let P denote the set of feasible mechanisms, i.e.,

$$\begin{aligned} P\,{:}{=}\, \left\{ {\mathbf{p}}: \quad p_i({\mathbf{t}})\in [0,1],\forall {\mathbf{t}}\in {\mathbf{T}},\forall i\in {\mathcal{I}}\quad \text{and} \quad \sum _{i\in {\mathcal{I}}} p_i({\mathbf{t}})\le 2,\forall {\mathbf{t}}\in {\mathbf{T}} \right\} . \end{aligned}$$

A vector \({\boldsymbol{\varphi }}= (\varphi _1,\ldots ,\varphi _I) \in {\mathbb{R}}^I_+\) is said to be feasible if there exists a feasible mechanism \({\mathbf{p}}\) such that (9) holds, i.e., \({\boldsymbol{\varphi }}= {\boldsymbol{\varphi }}^{\mathbf{p}}.\) The following lemma characterizes the set of feasible \({\boldsymbol{\varphi }}\)’s.

Lemma 2

\({\boldsymbol{\varphi }}\) is feasible if and only if \(\varphi _{i}\in [0,1]\) for all \(i\in {\mathcal{I}}\) and \(\sum _{i\in {\mathcal{I}}}\varphi _{i}\le 2.\)
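Lemma 2's characterization is easy to check directly. A minimal sketch (the bound of 2 reflects the two-good case treated in this appendix; `tol` is an illustrative numerical tolerance):

```python
# Direct check of Lemma 2's characterization (illustrative sketch):
# phi is feasible iff every entry lies in [0, 1] and the entries sum
# to at most the number of goods (two in this appendix).

def phi_feasible(phi, n_goods=2, tol=1e-12):
    return (all(-tol <= x <= 1 + tol for x in phi)
            and sum(phi) <= n_goods + tol)

assert phi_feasible([0.5, 0.7, 0.8])        # sum = 2.0, entries in [0, 1]
assert not phi_feasible([0.9, 0.9, 0.9])    # sum = 2.7 > 2
assert not phi_feasible([1.2, 0.1, 0.0])    # entry above 1
```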

Let \(\Phi\) denote the set of feasible \({\boldsymbol{\varphi }}\)’s, i.e.,

$$\begin{aligned} \Phi \,{:}{=}\,\left\{ {\boldsymbol{\varphi }}: \quad \varphi _{i}\in [0,1], \forall i \in {\mathcal{I}}\quad \text{and} \quad \sum _{i\in {\mathcal{I}}}\varphi _{i}\le 2\right\} . \end{aligned}$$

For example, Fig. 2 below illustrates the set \(\Phi\) for the allocation problem of two goods among three agents. Two remarks are in order. First, although \(\Phi\) and \({\mathbf{T}}\) are both subsets of \({\mathbb{R}}^I,\) \({\boldsymbol{\varphi }}\in \Phi\) is interpreted as a probability profile whereas \({\mathbf{t}}\in {\mathbf{T}}\) is a type profile. Second, there may be multiple feasible mechanisms that correspond to the same vector \({\boldsymbol{\varphi }}.\)

Fig. 2: An illustration of \(\Phi\)
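For the two-good, three-agent case of Fig. 2, the geometry of \(\Phi\) is the unit cube truncated by the hyperplane \(\varphi_1+\varphi_2+\varphi_3 \le 2.\) A small enumeration (illustrative only) confirms its vertex structure: the cutting plane passes through the cube vertices with exactly two ones and removes only \((1,1,1)\):

```python
import itertools

# Illustrative check of the polytope in Fig. 2 (two goods, three
# agents): Phi is [0,1]^3 cut by phi_1 + phi_2 + phi_3 <= 2. Since the
# cutting plane passes through (1,1,0), (1,0,1), (0,1,1), the vertices
# are exactly the 0/1 vectors with at most two ones.

vertices = [v for v in itertools.product([0, 1], repeat=3) if sum(v) <= 2]
assert len(vertices) == 7           # all cube vertices except (1,1,1)
assert (1, 1, 1) not in vertices
```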

In what follows, we solve the principal’s problem (8)–(9) by first characterizing the optimal \({\mathbf{p}}\) for given vectors \({\boldsymbol{\varphi }}\) (or subsets thereof, corresponding to subsets of mechanisms \({\mathbf{p}}\)) and then solving for the overall optimal \({\mathbf{p}}.\) This approach is reflected in the lemma below.

Lemma 3

Solving the principal’s problem (8)–(9) is equivalent to solving

$$\begin{aligned} \max _{{\begin{matrix} \varphi _i\in [0,1], \forall i \\ \sum _i\varphi _i\le 2. \end{matrix}}}\quad \max _{{\begin{matrix} p_i({\mathbf{t}})\in [0,1], \forall {\mathbf{t}},\forall i\\ \sum _i p_i({\mathbf{t}})\le 2, \forall {\mathbf{t}}. \end{matrix}}} \quad {\mathbf{E}}_{{\mathbf{t}}} \left[ \sum _{i\in {\mathcal{I}}} \left[ p_i({\mathbf{t}})(t_i-c_i) + \varphi _ic_i \right] \right] \end{aligned}$$
(10)
$$\begin{aligned} \text{ subject to } \quad{\mathbf{E}}_{{\mathbf{t}}_{-i}}\left[ p_i(t_i,{\mathbf{t}}_{-i})\right] \ge \varphi _i, \quad \forall t_i, \quad \forall i. \end{aligned}$$
(11)

Particularly, any solution \(({\boldsymbol{\varphi }},{\mathbf{p}})\) to (10)–(11) satisfies \({\boldsymbol{\varphi }}= {\boldsymbol{\varphi }}^{\mathbf{p}}.\)

We refer to the inner maximization of (10)–(11) for a given \({\boldsymbol{\varphi }}\) as the \({\boldsymbol{\varphi }}\)-relaxed problem.

1.2 B.2 Classification of \(\Phi\) and P

We classify \(\Phi\) into \(I+1\) subsets and denote by \(\Phi _{j},\) \(j\in {\mathcal{I}}\cup \{\emptyset \},\) the j-th class. A short intuition for \({\boldsymbol{\varphi }}\in \Phi _{j},\) \(j\in {\mathcal{I}},\) is that the j-th entry \(\varphi _{j}\) is “relatively larger” than \(\varphi _{i},\) \(i\ne j.\) Similarly, for \({\boldsymbol{\varphi }}\in \Phi _{\emptyset },\) \(\varphi _{i}\)’s are “relatively even.” Each class will be further divided into slices which are convex and compact subsets of \(\Phi .\)

To build detailed intuition for those subsets, we introduce some notation for different parts of \({\mathbf{T}}.\) Let \(\underline{v}=\min _{i\in {\mathcal{I}}}\{\underline{t}_{i}-c_{i}\}\) and \(\bar{v}=\min _{i\in {\mathcal{I}}}\{\bar{t}_{i}-c_{i}\}.\) For each \(v\in [\underline{v},\bar{v}]\) and each \(j\in {\mathcal{I}}\cup \{\emptyset \},\) we define \(K^{j}(v) \subseteq {\mathbf{T}}\) as the subset of type profiles at which agent j (nobody when \(j= \emptyset\)) has a net value of at least v and all other agents have net values of at most v, i.e.,

$$\begin{aligned} K^{j}(v)\,{:}{=}\, \left\{ {\mathbf{t}}\in {\mathbf{T}}: \quad t_{j}-c_{j}\ge v\ge t_{l}-c_{l},\quad \forall l\ne j\right\} . \end{aligned}$$

We call \(K^{\emptyset }(v)\) the palm and \(K^{j}(v)\) the j-th finger to facilitate our verbal description. Define the claw as \(K(v)\,{:}{=}\, K^{\emptyset }(v) \bigcup \left[ \bigcup _{j\in {\mathcal{I}}} K^{j}(v)\right] .\) These sets of value profiles are illustrated in Example 3.

Since the classification in this subsection is a preparation for introducing threshold mechanisms in the next, we provide a brief intuition for the threshold mechanisms and, thus, for the classification. Note that the principal’s objective function in (10) represents a trade-off between allocation efficiency and checking-cost saving, where \(\sum _i\varphi _ic_i\) is the saved checking cost. When goods are allocated without checking agents’ values, agents with low values may receive the goods in the presence of high-valued agents, i.e., there may be efficiency loss. The idea of our threshold mechanism is to allow for such inefficiency at “low-value profiles” in the sense that either (a) every net value \(t_{i}-c_{i}\) is low or (b) the number of agents who have high net values is small. Particularly, the first case corresponds to the palm \(K^{\emptyset }(v),\) where \(t_{i}-c_{i}<v\) for all \(i\in {\mathcal{I}}\); and the second case corresponds to the fingers \(K^{j}(v),\) where \(t_{j}-c_{j}>v\) and all other agents have low net values. Here is why we seek to save checking costs and allow for inefficiency: For value profiles in \(K^{\emptyset }(v),\) agents’ values are too low to be worth distinguishing, in which case we would allocate the two goods without checking anyone. For value profiles in \(K^{j}(v),\) intuitively, agent j deserves a good. However, the other agents’ values are too low to be worth distinguishing, in which case we would allocate the remaining good to some \(i\ne j\) without checking anyone.

Fix a feasible \({\boldsymbol{\varphi }}\) and consider the \({\boldsymbol{\varphi }}\)-relaxed problem. We proceed to find a “proper” region of low-value profiles where allocation efficiency could be sacrificed to justify the to-be-saved checking cost \(\sum _i\varphi _ic_i.\) Beyond that region, we pursue efficient allocation. Intuitively, we want as little inefficiency as possible (as small a low-value profile region as possible), but we have to guarantee that the interim allocation probability for each i is at least \(\varphi _i\) in the sense of (11) (which necessitates inefficiency and requires that the low-value profile region be “large enough”). The interplay of these two forces suggests that the low-value profile region should have a “proper size” such that inefficient allocation within it satisfies the requirement (11) exactly, i.e., makes each agent i’s interim allocation probability exactly \(\varphi _i\) and nothing more (for low-value \(t_i\)’s in the low-value profile region). For example, if for some v, types satisfying \(t_i - c_i \le v\) are deemed low-value, then the probability that agent i both has a low value and receives a good should be \(\varphi _{i}F_{i}(v+c_{i}).\)

Roughly speaking, to satisfy the requirement (11) exactly within low-value regions, we need to put probability masses \(\varphi _{i}F_{i}(v+c_{i}),\) \(i\in {\mathcal{I}},\) into \(K^{\emptyset }(v)\) and \(K^{j}(v)\)’s, and respect the “capacity constraints” such that each type profile in the palm \(K^{\emptyset }(v)\) has a “capacity” of two and each type profile in the finger \(K^{j}(v)\) has a capacity of one (the other good goes to agent j). More precisely, we want a threshold v to satisfy

$$\begin{aligned} \sum _{i\in {\mathcal{I}}}\varphi _{i}F_{i}(v+c_{i}) = 2 \int _{K^{\emptyset }(v)}dF + \sum _{i\in {\mathcal{I}}} \int _{K^i(v)}dF = \int _{K(v)}dF+ \int _{K^{\emptyset }(v)}dF. \end{aligned}$$

Moreover, the capacity in finger \(K^{j}(v)\) (where \(t_j - c_j > v\)) cannot be used to accommodate \(\varphi _jF_j(v + c_j)\) (which is for \(t_j - c_j \le v\)); another reason is that agent j cannot get more than one good at any type profile. Therefore, we also want to have

$$\begin{aligned} \varphi _{i}F_{i}(v+c_{i}) \le \int _{K(v){\setminus } K^{i}(v)}dF, \quad \forall i\in {\mathcal{I}}. \end{aligned}$$

When \(\varphi _{i}\)’s are “relatively even,” we can find such a threshold v so that the claw K(v) does serve as the desired region of low-value profiles. In other words, K(v) can accommodate “relatively even” \({\boldsymbol{\varphi }}\)’s.
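The mass-balance equation above can be solved numerically for the threshold v. A sketch under assumptions not in the paper (three agents with types uniform on [0, 1], so \(F_i(x)=\min\{\max\{x,0\},1\}\), and a hypothetical "relatively even" profile \({\boldsymbol{\varphi}}\)); it handles only the equality, not the accompanying inequality constraints that membership in \(\Phi_{\emptyset}(v)\) also requires:

```python
# Illustrative numerical sketch (assumptions: three agents, uniform
# types on [0, 1], hypothetical phi and costs c): solve the
# mass-balance equation
#   sum_i phi_i F_i(v + c_i) = 2 * palm(v) + sum_j finger_j(v)
# for the threshold v by bisection.

def F(x):                                   # uniform [0, 1] cdf
    return min(max(x, 0.0), 1.0)

c = [0.05, 0.10, 0.15]                      # checking costs
phi = [0.5, 0.4, 0.3]                       # "relatively even" profile

def palm(v):                                # mass of the palm K^empty(v)
    out = 1.0
    for ci in c:
        out *= F(v + ci)
    return out

def finger(j, v):                           # mass of the j-th finger K^j(v)
    out = 1.0 - F(v + c[j])
    for k, ck in enumerate(c):
        if k != j:
            out *= F(v + ck)
    return out

def balance(v):                             # lhs minus rhs of the equation
    lhs = sum(p * F(v + ci) for p, ci in zip(phi, c))
    return lhs - (2 * palm(v) + sum(finger(j, v) for j in range(len(c))))

lo, hi = 0.1, 0.5                           # balance(lo) > 0 > balance(hi)
for _ in range(60):                         # bisection
    mid = 0.5 * (lo + hi)
    if balance(lo) * balance(mid) <= 0:
        hi = mid
    else:
        lo = mid
v_star = 0.5 * (lo + hi)
assert abs(balance(v_star)) < 1e-9
```

Intuitively, as v grows the claw's capacity (right-hand side) grows faster than the mass \(\sum_i \varphi_i F_i(v+c_i)\) to be accommodated, so the bisection finds the threshold where the two exactly balance.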

Formally, the class \(\Phi _{\emptyset }\) of “relatively even” \({\boldsymbol{\varphi }}\)’s consists of slices indexed by \(v\in [\underline{v},\bar{v}],\) i.e., \(\Phi _{\emptyset }\,{:}{=}\, \bigcup _{v\in [\underline{v},\bar{v}]} \Phi _{\emptyset }(v).\)\(^{19}\) To define \(\Phi _{\emptyset }(v)\) as the subset of \({\boldsymbol{\varphi }}\)’s that K(v) can accommodate, we consider two cases. For each \(v\in (\underline{v},\bar{v}],\) let

$$\begin{aligned} \Phi _{\emptyset }(v)\,{:}{=}\, \left\{ {\boldsymbol{\varphi }}\in \Phi : \quad \begin{matrix} \sum _{i\in {\mathcal{I}}}\varphi _{i}F_{i}(v+c_{i}) = \int _{K(v)}dF+ \int _{K^{\emptyset }(v)}dF, \quad \text{ and }\\ \varphi _{i}F_{i}(v+c_{i}) \le \int _{K(v){\setminus } K^{i}(v)}dF, \quad \forall i\in {\mathcal{I}}\end{matrix} \right\} . \end{aligned}$$

Since each \({\boldsymbol{\varphi }}\in \Phi _{\emptyset }(v)\) satisfies a linear equation, a slice \(\Phi _{\emptyset }(v)\) is a subset of an \((I-1)\)-dimensional hyperplane in \({\mathbb{R}}^{I};\) that is why we call \(\Phi _{\emptyset }(v)\) a slice. Further intuition for the equation and inequalities will become clear when we introduce threshold mechanisms in Appendix B.3. For \(v=\underline{v},\) the “corner slice” is a singleton defined as \(\Phi _{\emptyset }(\underline{v})\,{:}{=}\, \{{\mathbf{0}}\},\) i.e., the measure-zero region \(K(\underline{v})\) can only accommodate \({\boldsymbol{\varphi }}= {\mathbf{0}}.\)

However, when the \(\varphi _{i}\)’s are not “relatively even,” the “proper” region of low-value profiles combines two pieces taken from two claws. In particular, suppose \(\varphi _j\) is “relatively larger” than the other \(\varphi _i\)’s. Then the “proper” region of low-value profiles consists of two parts: (a) \(K^{\emptyset }(v') {\setminus } K^{j}(v')\) from a claw \(K(v')\) and (b) \(K^{\emptyset }(v) \cup K^{j}(v)\) from a claw \(K(v),\) where the former claw is larger than the latter in that \(v' > v.\)

Formally, the class \(\Phi _j\) for each \(j\in {\mathcal{I}}\) consists of slices indexed by a pair \((v,v')\) such that \(v \in [\underline{v},\bar{v}]\) and \(v'\in [v,\bar{v}].\) For notational convenience, we write \((v,v') \in [\underline{v},\bar{v}] \times [v,\bar{v}].\) Then, \(\Phi _{j}\,{:}{=}\, \bigcup _{(v,v') \in [\underline{v},\bar{v}] \times [v,\bar{v}]}\Phi _{j}(v,v') .\) To define \(\Phi _j(v,v')\) as the subset of \({\boldsymbol{\varphi }}\)’s that the combined region \(\left[ K^{\emptyset }(v') {{\setminus }} K^{j}(v')\right] \cup \left[ K^{\emptyset }(v) \cup K^{j}(v) \right]\) can appropriately accommodate, we consider three cases. First, for each \(j\in {\mathcal{I}}\) and each pair \((v,v') \in (\underline{v},\bar{v}] \times [v,\bar{v}],\) let

$$\begin{aligned} \Phi _{j}(v,v') \,{:}{=}\,&\left\{ {\boldsymbol{\varphi }}\in \Phi : \begin{array}{l} \varphi _{j} = \prod _{k\in {\mathcal{I}}{\setminus }\{j\}} F_{k}(v'+c_{k}) + \sum _{i\ne j} \left[ 1-F_{i}(v'+c_{i})\right] \prod _{k\ne i,j} F_{k}(v'+c_{k}); \quad \text{ and }\\ \begin{array}{ll} \sum _{i\in {\mathcal{I}}{\setminus } \{j\}}\varphi _{i}F_{i}(v+c_{i}) = \int _{K^{\emptyset }(v) \cup K^{j}(v)}dF, &{} \quad \text{if } |i\in {\mathcal{I}}{\setminus } \{j\}: \varphi _{i}>0|\ge 2,\\ \varphi _{i} = \prod _{k\ne i,j} F_{k}(v+c_{k}), &{} \quad \text{if } \varphi _{i}>0 \text{ and }\varphi _{k}=0, \forall k\ne i,j \end{array} \end{array}\right\} , \end{aligned}$$

where the first equation implies the following but is slightly more general:

$$\begin{aligned} \varphi _{j} F_j(v' + c_j) = \int _{K^{\emptyset }(v') {\setminus } K^{j}(v')} dF. \end{aligned}$$

Since for each \({\boldsymbol{\varphi }}\in \Phi _{j}(v,v'),\) the j-th entry is fixed, and the other \(I-1\) entries generally satisfy a linear equation, we know that a slice \(\Phi _{j}(v,v')\) is a subset of an \((I-2)\)-dimensional hyperplane in \({\mathbb{R}}^{I}.\)

Again, the detailed intuition for the equation and inequalities will become clear in Appendix B.3. But the close connection between \(\Phi _{\emptyset }(v)\) and \(\Phi _{j}(v,v')\) needs to be clarified immediately. Note that when \(F_{j}(v'+c_{j}) > 0,\) the first equation is equivalent to

$$\begin{aligned} \varphi _{j} F_j(v' + c_j) = \int _{K^{\emptyset }(v') {\setminus } K^{j}(v')} dF. \end{aligned}$$

And when \(F_{i}(v+c_{i}) > 0,\)

$$\begin{aligned} \prod _{k\ne i,j} F_{k}(v+c_{k}) = \frac{1}{F_{i}(v+c_{i})}\int _{K^{\emptyset }(v)\cup K^{j}(v)}dF, \end{aligned}$$

in which case the last equation implies

$$\begin{aligned} \varphi _{i}F_{i}(v+c_i) = \int _{K^{\emptyset }(v)\cup K^{j}(v)}dF \le \int _{K(v){\setminus } K^{i}(v)}dF. \end{aligned}$$

Therefore, when \(v' = v,\) it is straightforward to verify that \(\Phi _{j}(v,v) \subseteq \Phi _{\emptyset }(v).\) As a corollary, for each \(j\in {\mathcal{I}},\) the class \(\Phi _{j}\) has a nonempty intersection with \(\Phi _{\emptyset }.\)

Now we consider the second case. For each \(j\in {\mathcal{I}},\) \(v = \underline{v}\) and each \(v' \in (\underline{v},\bar{v}],\) we define the boundary slice

$$\begin{aligned} \Phi _{j}(\underline{v},v')\,{:}{=}\, \left\{ {\boldsymbol{\varphi }}\in \Phi : \begin{array}{c} \varphi _{j} = \prod _{k\in {\mathcal{I}}{\setminus } \{j\}} F_{k}(v'+c_{k}) + \sum _{i\ne j} \left[ 1-F_{i}(v'+c_{i})\right] \prod _{k\ne i,j} F_{k}(v'+c_{k}); \quad \text{and} \\ \varphi _{i} = 0, \quad \forall i\ne j \end{array}\right\} . \end{aligned}$$

We define \(\Phi _{j}(v,v')\) and \(\Phi _{j}(\underline{v},v')\) separately to make sure that each of them is convex. Finally, for \(v = v' = \underline{v},\) the “corner slice” is a singleton defined as \(\Phi _{j}(\underline{v},\underline{v}): = \{{\mathbf{0}}\}.\)

The following lemma says that any \({\boldsymbol{\varphi }}\in \Phi\) must belong to one of the slices in \(\Phi _{j}\) for some j,  i.e.,

$$\begin{aligned} \Phi = \left[ \bigcup _{v\in [\underline{v},\bar{v}]}\Phi _{\emptyset }(v) \right] \bigcup \left[ \bigcup _{j\in {\mathcal{I}}, (v,v') \in [\underline{v},\bar{v}] \times [v,\bar{v}]}\Phi _{j}(v,v') \right] . \end{aligned}$$

Lemma 4

For any vector \({\boldsymbol{\varphi }}\in \Phi ,\) there either exists a net value \(v\in [\underline{v},\bar{v}]\) such that \({\boldsymbol{\varphi }}\in \Phi _{\emptyset }(v),\) or there exist \(j\in {\mathcal{I}}\) and a pair of net values \((v,v') \in [\underline{v},\bar{v}] \times [v,\bar{v}]\) such that \({\boldsymbol{\varphi }}\in \Phi _{j}(v,v').\)

These slices are compact and convex by definition. The following lemma characterizes the extreme points of \(\Phi _{\emptyset }(v)\) and \(\Phi _{j}(v,v^{\prime })\) when they are not singletons.

Lemma 5

For each \(v\in (\underline{v},\bar{v}],\) the set of extreme points of \(\Phi _{\emptyset }(v)\) is given by

$$\begin{aligned} \Phi _{\emptyset }^{ex}(v)\,{:}{=}\,\left\{ {\boldsymbol{\varphi }}\in \Phi : \quad \exists \text{ distinct }i,j\in {\mathcal{I}}\quad \text{ s.t. } \begin{array}{l} \varphi _{k}=0, \quad \forall k\in {\mathcal{I}}{\setminus } \{i,j\},\\ \varphi _{j}F_{j}(v+c_{j}) = \int _{K(v){\setminus } K^{j}(v)}dF, \quad \text{and}\\ \varphi _{i}F_{i}(v+c_{i}) = \int _{K^{\emptyset }(v) \cup K^{j}(v)}dF. \end{array}\right\} \end{aligned}$$

For each \(j\in {\mathcal{I}}\) and each pair \((v,v') \in (\underline{v},\bar{v}] \times [v,\bar{v}],\) the set of extreme points of \(\Phi _{j}(v,v^{\prime })\) is given by

$$\begin{aligned} \Phi _{j}^{ex}(v,v^{\prime })\,{:}{=}\,\left\{ {\boldsymbol{\varphi }}\in \Phi : \quad \exists i\in {\mathcal{I}}{\setminus } \{j\} \quad \text{ s.t. } \begin{array}{l} \varphi _{k}=0, \quad \forall k\in {\mathcal{I}}{\setminus } \{i,j\},\\ \varphi _{j}F_{j}(v'+c_{j}) = \int _{K(v'){\setminus } K^{j}(v')}dF, \quad \text{and}\\ \varphi _{i}F_{i}(v+c_{i}) = \int _{K^{\emptyset }(v) \cup K^{j}(v)}dF. \end{array}\right\} \end{aligned}$$

We close this subsection by classifying P. By Lemma 2, we know that a classification of \(\Phi\) induces a classification of P. To be precise, for each \(j\in {\mathcal{I}}\cup \{\emptyset \},\) the set of mechanisms that corresponds to \(\Phi _{j}\) is defined as follows:

$$\begin{aligned} P_{j}\,{:}{=}\,\left\{ {\mathbf{p}}\in P: \quad \exists {\boldsymbol{\varphi }}\in \Phi _{j} \quad \text{s.t.}\quad \varphi _i=\inf _{t_i^{\prime }\in T_i} {\mathbf{E}}_{{\mathbf{t}}_{-i}}\left[ p_i(t_i^{\prime },{\mathbf{t}}_{-i})\right] , \quad \forall i \in {\mathcal{I}}\right\} . \end{aligned}$$

The slices of \(P_{j},\) i.e., \(P_{\emptyset }(v)\)’s and \(P_{j}(v,v')\)’s, are defined analogously.

B.3 Threshold mechanisms: definitions

In this subsection, we introduce a threshold mechanism within each slice of each class of P. Let \({\boldsymbol{\varphi }}^{\mathbf{p}}\in \Phi\) be the vector that is induced from \({\mathbf{p}}\in P.\) The advantage of threshold mechanisms is that the infinite-dimensional choice variable \({\mathbf{p}}\) is largely pinned down by an I-dimensional vector \({\boldsymbol{\varphi }}^{\mathbf{p}}.\)

Below, we first define threshold mechanisms for \(j=\emptyset\) and then for \(j\in {\mathcal{I}}.\) They are (partially) illustrated in Examples 4 and 5, respectively.

Definition 3

A feasible mechanism \({\mathbf{p}}\in P_{\emptyset }(v)\) is called a \(\emptyset\)-threshold mechanism if the following three conditions hold:

  1.

    For each \({\mathbf{t}}\notin K(v),\) two agents with the highest net values obtain the goods, i.e., for each \(i\in {\mathcal{I}},\)Footnote 20

    $$\begin{aligned} p_{i}(t_{i},{\mathbf{t}}_{-i})= \left\{ \begin{array}{ll} 1, &{} \quad \text{if } |\{j\in {\mathcal{I}}: t_{j}-c_{j}\ge t_{i}-c_{i}\}|\le 2;\\ 0, &{} \quad \text{otherwise}. \end{array}\right. \end{aligned}$$
  2.

    For each \(i\in {\mathcal{I}}\) and each \({\mathbf{t}}\in K^{i}(v),\) agent i obtains one good for sure, i.e., \(p_{i}({\mathbf{t}})=1.\)

  3.

    For each \(i\in {\mathcal{I}}\) and each \(t_{i}\) with \(\underline{t}_{i}-c_{i}\le t_{i}-c_{i}\le v,\) \({\mathbf{E}}_{{\mathbf{t}}_{-i}}\left[ p_{i}(t_{i},{\mathbf{t}}_{-i})\right] =\varphi _{i}^{\mathbf{p}}.\)

Condition 1 pins down the ex-post allocation, efficiently, outside the low-value profile region K(v). Condition 2 partially specifies the ex-post allocation, also efficiently, on \(K^i(v),\) where agent i has a high net value whereas all other agents have low net values. Finally, condition 3 only imposes a restriction on the interim (reduced-form) allocation, which exactly meets requirement (11) for low-value \(t_i\)’s.
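Condition 1 can be illustrated with a minimal sketch: outside K(v), the two goods go to the two agents with the highest net values. The function name and the example numbers below are ours, purely for illustration, and are not part of the mechanism's formal definition.

```python
# A sketch of condition 1: outside K(v), the two agents with the
# highest net values t_i - c_i each receive one good (n = 2 goods).
def efficient_allocation(t, c, n=2):
    """p[i] = 1 iff at most n agents (including i itself) have a net
    value weakly above agent i's net value t[i] - c[i]."""
    net = [ti - ci for ti, ci in zip(t, c)]
    return [1 if sum(1 for w in net if w >= net[i]) <= n else 0
            for i in range(len(net))]

# Three agents with net values 5, 3, 2: the first two agents
# receive the two goods.
print(efficient_allocation(t=[6, 4, 4], c=[1, 1, 2]))  # [1, 1, 0]
```

As in the definition, ties are not resolved by this rule alone: if three or more agents share the top net values, no agent's count is at most two and the allocation among them is left unspecified.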

Example 4

Consider the allocation of two goods among three agents. The entire domain \({\mathbf{T}}\) (in net values) is included in Fig. 3a for completeness. Let F denote the net-value profile \((v,v,v).\) In the subdomain indicated by Fig. 3b, agent 1 receives one good for sure according to rule 1 and rule 2 (since he has the highest net value). Figure 3c further illustrates rule 1 (fixing \(t_{1}-c_{1}=\bar{v}\)): outside K(v), the other good goes to the agent with the second highest net value. In particular, agent 2 receives the other good in region \(H_{12}C\) and agent 3 receives it in region \(H_{13}C.\) The allocation in the claw K(v) is not completely specified by Definition 3.

Fig. 3
A 3-dimensional illustration of \({\mathbf{p}}({\mathbf{t}})\) for \(\emptyset\)-threshold mechanisms. a The domain \({\mathbf{T}}.\) b \({\mathbf{t}}: t_{1}-c_{1}>\max \left\{ v^*,t_{2}-c_{2},t_{3}-c_{3}\right\} .\) c Allocation between agents 2 and 3

Definition 4

For each \(j\in {\mathcal{I}},\) a feasible mechanism \({\mathbf{p}}\in P_{j}(v,v')\) is called a j-threshold mechanism if the following three conditions hold:

  1.

    For each \({\mathbf{t}}\notin K(v'),\) two agents with the highest net values obtain the goods, i.e., for each \(i\in {\mathcal{I}},\)

    $$\begin{aligned} p_{i}({\mathbf{t}})= \left\{ \begin{array}{ll} 1, &{} \quad \text{if } |\{k\in {\mathcal{I}}: t_{k}-c_{k}\ge t_{i}-c_{i}\}|\le 2;\\ 0, &{} \quad \text{otherwise}. \end{array}\right. \end{aligned}$$
  2.

    Consider three cases within \(K(v').\) For each \(i\in {\mathcal{I}}{{\setminus }} \{j\}\) and each \({\mathbf{t}}\in K^{i}(v'),\) agent i and agent j each obtains a good for sure, i.e., \(p_{i}({\mathbf{t}}) = p_{j}({\mathbf{t}}) =1.\) For each \({\mathbf{t}}\in \left[ K^{\emptyset }(v')\cup K^{j}(v')\right] {{\setminus }} \left[ K^{\emptyset }(v)\cup K^{j}(v)\right] ,\) agent j receives one good, and the agent who has the highest net value among agents in \({\mathcal{I}}{\setminus } \{j\}\) receives the remaining good, i.e., \(p_j({\mathbf{t}}) = 1\) and for each \(i\in {\mathcal{I}}{\setminus } \{j\},\)

    $$\begin{aligned} p_{i}({\mathbf{t}})= \left\{ \begin{array}{ll} 1, &{} \quad \text{if } t_{i}-c_{i}>t_{k}-c_{k} \text{ for all }k\ne i,j;\\ 0, &{} \quad \text{otherwise}. \end{array}\right. \end{aligned}$$

    Finally, for each \({\mathbf{t}}\in \left[ K^{\emptyset }(v)\cup K^{j}(v)\right] ,\) agent j receives one good, i.e., \(p_j({\mathbf{t}}) =1.\)

  3.

    For each \(i\in {\mathcal{I}}{{\setminus }} \{j\}\) and each \(t_{i}\) with \(\underline{t}_{i}-c_{i}\le t_{i}-c_{i}\le v,\) we have \({\mathbf{E}}_{{\mathbf{t}}_{-i}}\left[ p_{i}(t_{i},{\mathbf{t}}_{-i})\right] =\varphi _{i}^{\mathbf{p}}.\)

Condition 1 pins down the ex-post allocation, efficiently, outside the low-value profile region \(K(v').\) Condition 2 specifies the ex-post allocation in \(K(v')\) except for \({\mathbf{t}}\in K^{\emptyset }(v)\cup K^{j}(v),\) and the specified allocation to agents other than j is efficient. However, the allocation to agent j may be inefficient, so that requirement (11) for j can be exactly satisfied. Finally, condition 3 only imposes a restriction on the interim (reduced-form) allocation for all \(i\ne j,\) which exactly meets requirement (11) for low-value \(t_i\)’s. Notably, for each \(t_{j}\) with \(\underline{t}_{j}-c_{j}\le t_{j}-c_{j}\le v',\) conditions 1–2 imply that

$$\begin{aligned} {\mathbf{E}}_{{\mathbf{t}}_{-j}}\left[ p_{j}(t_{j},{\mathbf{t}}_{-j})\right] = \prod _{k\in {\mathcal{I}}{\setminus }\{j\}} F_{k}(v'+c_{k}) + \sum _{i\ne j} \left[ 1-F_{i}(v'+c_{i})\right] \prod _{k\ne i,j} F_{k}(v'+c_{k}), \end{aligned}$$

which is consistent in form with rule 3, i.e., \({\mathbf{E}}_{{\mathbf{t}}_{-j}}\left[ p_{j}(t_{j},{\mathbf{t}}_{-j})\right] = \varphi _j^{\mathbf{p}},\) since \({\mathbf{p}}\in P_j(v,v').\)
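The right-hand side of the display above is exactly the probability that at most one agent other than j has net value above \(v'.\) Here is a small numerical sanity check of that reading; the values of \(q_k = F_k(v'+c_k)\) are illustrative, not taken from the paper.

```python
from itertools import product
from math import prod

def formula(q):
    # prod_{k != j} q_k + sum_{i != j} (1 - q_i) * prod_{k != i,j} q_k,
    # with q[k] standing in for F_k(v' + c_k), k != j.
    return prod(q) + sum(
        (1 - q[i]) * prod(q[k] for k in range(len(q)) if k != i)
        for i in range(len(q)))

def at_most_one_above(q):
    # Direct enumeration: probability that at most one of the independent
    # events "agent k's net value exceeds v'" (probability 1 - q[k]) occurs.
    total = 0.0
    for above in product([0, 1], repeat=len(q)):
        p = prod((1 - q[k]) if a else q[k] for k, a in enumerate(above))
        if sum(above) <= 1:
            total += p
    return total

q = [0.7, 0.4, 0.9]  # illustrative values of F_k(v' + c_k), k != j
assert abs(formula(q) - at_most_one_above(q)) < 1e-12
```

The two computations agree because the first term of the formula is the probability that no such agent exceeds \(v'\) and the sum collects the disjoint events in which exactly one does.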

Example 5

Consider the allocation of two goods among three agents. Let F denote the net-value profile \((v,v,v)\) and H the net-value profile \((v',v',v').\) Let \(j = 1.\) Two key features, beyond those already present in Definition 3, are as follows. First, for every \({\mathbf{t}}\notin K^{\emptyset }(v)\cup K^{j}(v),\) the ex-post allocation is completely specified. In particular, for each \({\mathbf{t}}\in \left[ K^{\emptyset }(v')\cup K^{j}(v')\right] {{\setminus }} \left[ K^{\emptyset }(v)\cup K^{j}(v)\right] ,\) i.e., the subdomain in the right panel of Fig. 4, the remaining good goes to the agent with the highest net value among agents in \({\mathcal{I}}{\setminus } \{j\}.\) Second, the allocation in the subdomain \(K^{\emptyset }(v)\cup K^{j}(v)\) is not completely specified.

Fig. 4
A 3-dimensional illustration of \({\mathbf{p}}({\mathbf{t}})\) for j-threshold mechanisms

The following lemma guarantees that for every feasible mechanism \({\mathbf{p}}\in P,\) there exists a corresponding threshold mechanism.

Lemma 6

For each \(v\in [\underline{v},\bar{v}]\) and each \({\mathbf{p}}\in P_{\emptyset }(v),\) there exists a \(\emptyset\)-threshold mechanism \({\mathbf{p}}^{\prime }\) in \(P_{\emptyset }(v)\) such that \({\boldsymbol{\varphi }}^{\mathbf{p}^{\prime }}={\boldsymbol{\varphi }}^{\mathbf{p}}.\) For each \(j\in {\mathcal{I}},\) each \((v,v')\in [\underline{v},\bar{v}]\times [v,\bar{v}]\) and each \({\mathbf{p}}\in P_{j}(v,v'),\) there exists a j-threshold mechanism \({\mathbf{p}}^{\prime }\) in \(P_{j}(v,v')\) such that \({\boldsymbol{\varphi }}^{\mathbf{p}^{\prime }}={\boldsymbol{\varphi }}^{\mathbf{p}}.\)

B.4 Threshold mechanisms: necessity

In this subsection, we show that without loss of generality, we can restrict our attention to threshold mechanisms, particularly the extreme threshold mechanisms. We call a threshold mechanism \({\mathbf{p}}\) extreme if the induced \({\boldsymbol{\varphi }}^{\mathbf{p}}\) is an extreme point either in \(\Phi _{\emptyset }(v)\) for some v or in \(\Phi _{j}(v,v')\) for some \((v,v').\)

Definition 5

For each \(v\in [\underline{v},\bar{v}],\) an \(\emptyset\)-threshold mechanism \({\mathbf{p}}\in P_{\emptyset }(v)\) is an extreme \(\emptyset\)-threshold mechanism if \({\boldsymbol{\varphi }}^{\mathbf{p}}\in \Phi ^{ex}_{\emptyset }(v).\) For each \(j\in {\mathcal{I}}\) and each pair \((v,v') \in [\underline{v},\bar{v}] \times [v,\bar{v}],\) a j-threshold mechanism \({\mathbf{p}}\in P_{j}(v,v')\) is an extreme j-threshold mechanism if \({\boldsymbol{\varphi }}^{\mathbf{p}}\in \Phi ^{ex}_{j}(v,v').\)

An optimal mechanism is necessarily a randomization over extreme threshold mechanisms:

Proposition 1

A mechanism is optimal only if it is essentially a randomization over the optimal extreme threshold mechanisms.

A key step to prove the proposition is to show that any optimal mechanism is equivalent to its corresponding threshold mechanism as specified in Lemma 6.

By Lemma 5, we know that any extreme \(\emptyset\)-threshold mechanism \({\mathbf{p}}\in P_{\emptyset }(v)\) must also be an extreme j-threshold mechanism with thresholds \((v,v)\) for some \(j\in {\mathcal{I}}.\) Then, the following proposition is an immediate corollary of Proposition 1.

Proposition 1'

A mechanism is optimal only if it is essentially a randomization over the optimal extreme threshold mechanisms within \(\cup _{j\in {\mathcal{I}}} P_{j}.\)

B.5 Optimal (extreme threshold) mechanisms

In this subsection, we solve for the optimal extreme threshold mechanisms within \(\cup _{j\in {\mathcal{I}}} P_{j}.\) It is clear from the proof of Lemma 6 that when \({{\boldsymbol{\varphi }}}\) is extreme, the threshold mechanism \({\mathbf{p}}\) such that \({{\boldsymbol{\varphi }}}^{\mathbf{p}} = {{\boldsymbol{\varphi }}}\) is unique. Therefore, there is a bijection between extreme threshold mechanisms in \(\cup _{j\in {\mathcal{I}}} P_{j}\) and extreme \({{\boldsymbol{\varphi }}}\)'s in \(\cup _{j\in {\mathcal{I}}} \Phi _{j},\) so the principal's problem is simply to choose an optimal extreme \({{\boldsymbol{\varphi }}}\) in \(\cup _{j\in {\mathcal{I}}} \Phi _{j}.\) By Lemma 5, choosing an extreme \({\boldsymbol{\varphi }}\) is equivalent to choosing two distinct agents i and j together with two thresholds v and \(v'\) such that \(v\le v'.\) The principal's problem is thus greatly simplified.

For notational convenience, we let \((t-c)|^{(1)}_{{\mathcal{I}}}\) be the highest net value reported by agents in \({\mathcal{I}},\) let \((t-c)|^{(2)}_{{\mathcal{I}}}\) be the second highest net value reported by agents in \({\mathcal{I}},\) and let \((t-c)|^{(1)}_{{\mathcal{I}}{\setminus } \{j\}}\) be the highest net value reported by agents in \({\mathcal{I}}{\setminus } \{j\}.\) The principal’s problem is written as follows according to the rules of extreme j-threshold mechanisms:

$$\begin{aligned} \max _{ i,j,v, v': i\ne j,v\le v' }&\ \varphi _{i}c_{i} + \varphi _{j}c_{j} + \int _{{\mathbf{T}}{\setminus } K(v')}(t-c)|^{(1)}_{{\mathcal{I}}}+(t-c)|^{(2)}_{{\mathcal{I}}}dF \\&\quad + \int _{K(v') {\setminus } \left[ K^{\emptyset }(v')\cup K^{j}(v')\right] } (t_{j}-c_{j}) + (t-c)|^{(1)}_{{\mathcal{I}}}dF \\&\quad + \int _{\left[ K^{\emptyset }(v')\cup K^{j}(v')\right] {\setminus } \left[ K^{\emptyset }(v)\cup K^{j}(v)\right] } (t_{j}-c_{j}) + (t-c)|^{(1)}_{{\mathcal{I}}{\setminus } \{j\}}dF \\&\quad + \int _{K^{\emptyset }(v)\cup K^{j}(v)} (t_{j}-c_{j}) + (t_{i}-c_{i}) dF \end{aligned}$$
(12)
$$\begin{aligned} \text{subject to }\varphi _{j}F_{j}(v'+c_{j}) = \int _{K(v'){\setminus } K^{j}(v')}dF \quad \text{and} \end{aligned}$$
(13)
$$\begin{aligned}\varphi _{i}F_{i}(v+c_{i}) = \int _{K^{\emptyset }(v) \cup K^{j}(v)}dF. \end{aligned}$$
(14)

The objective function (12) follows from Definition 4 and Lemma 5, and the constraints (13) and (14) indicate that we are indeed choosing an extreme \({{\boldsymbol{\varphi }}}\) in \(\cup _{j\in {\mathcal{I}}} \Phi _{j}.\)
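The order-statistic notation \((t-c)|^{(1)}_{{\mathcal{I}}}\), \((t-c)|^{(2)}_{{\mathcal{I}}}\) and \((t-c)|^{(1)}_{{\mathcal{I}}{\setminus } \{j\}}\) can be made concrete with a small sketch; the helper name and the numbers are illustrative only.

```python
def top_net_values(t, c, agents):
    """Net values t[i] - c[i] over `agents`, sorted highest first."""
    return sorted((t[i] - c[i] for i in agents), reverse=True)

t, c = [6, 4, 3], [1, 2, 1]      # illustrative types and checking costs
w = top_net_values(t, c, range(3))
print(w[0], w[1])                # (t-c)|^(1) = 5 and (t-c)|^(2) = 2 over all agents
w_no_j = top_net_values(t, c, [i for i in range(3) if i != 0])
print(w_no_j[0])                 # highest net value excluding j = agent 0: 2
```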

Let \(T_{{\mathcal{I}}}^{j}\) be the set of type profiles such that the net value of agent j is the largest among agents in \({\mathcal{I}},\) i.e., \(T_{{\mathcal{I}}}^{j}\,{:}{=}\,\left\{ {\mathbf{t}}\in {\mathbf{T}}: t_{j}-c_{j} \ge t_{k}-c_{k},\forall k\in {\mathcal{I}}{{\setminus }} \{j\}\right\} .\) A geometric illustration is the quadrangular pyramid O-\(A_{1}B_{12}CB_{31}\) in Fig. 4. The principal’s objective function (12) can be rearranged into the following one:

$$\begin{aligned} \max _{ i,j,v, v': i\ne j,v\le v' }\ \&\varphi _{j}c_{j}+ \int _{{\mathbf{T}}{\setminus } \left[ K(v') \cup T_{{\mathcal{I}}}^{j} \right] } (t-c)|^{(2)}_{{\mathcal{I}}}dF +\int _{K(v') \cup T_{{\mathcal{I}}}^{j}} (t_{j}-c_{j})dF \\&\quad + \varphi _{i}c_{i} + \int _{{\mathbf{T}}{\setminus } \left[ K^{\emptyset }(v)\cup K^{j}(v)\right] } (t-c)|^{(1)}_{{\mathcal{I}}{\setminus } \{j\}}dF + \int _{K^{\emptyset }(v)\cup K^{j}(v)} (t_{i}-c_{i}) dF. \end{aligned}$$
(15)

Hence, the choice of v and i is actually a subproblem given \(v'\) and j,  and vice versa; the only constraints are \(i \ne j\) and \(v\le v'.\) In what follows, we first drop the constraint \(v\le v'\) and analyze the two relaxed subproblems separately. Then we aggregate the solutions for the relaxed subproblems to obtain the solution for the original problem (12)–(14).

(Relaxed) optimal v and i.    We first solve for the optimal v and i by dropping \(v\le v'.\)

$$\begin{aligned} \max _{i,v: i\ne j}\quad \varphi _{i}c_{i} + \int _{{\mathbf{T}}{\setminus } \left[ K^{\emptyset }(v)\cup K^{j}(v)\right] } (t-c)|^{(1)}_{{\mathcal{I}}{\setminus } \{j\}}dF + \int _{K^{\emptyset }(v)\cup K^{j}(v)} (t_{i}-c_{i}) dF \quad \text{subject to}\quad (14). \end{aligned}$$
(16)

It is straightforward to see that the objective function is independent of the type of agent j. Thus, the subproblem is a single-good allocation problem, where the set of agents is \({\mathcal{I}}{\setminus } \{j\},\) and the allocation rule is as follows: agent i obtains the good if \({\mathbf{t}}\in K^\emptyset (v)\cup K^j(v)\); otherwise, the agent who has the highest net value (other than j) obtains the good. As for the checking policy, the saved checking cost \(\varphi _{i}c_{i}\) and (14) amount to saying that for each \({\mathbf{t}}\) such that \(\max _{k\ne i,j} \{ t_k-c_k \} \le v,\) agent i obtains the good without being checked. The rest of the objective function says that for all other type profiles, the agent who gets the good is always checked. By the main result of BDL, we know that the subproblem (16) has the following solution.

Lemma 7

The relaxed subproblem (16) has the following unique solution up to equivalence-relabeling-randomization:

  1.

    When \(j=1,\) the optimal \(i=2\) and the optimal \(v=v^*_{2}.\)

  2.

    When \(j\ne 1,\) the optimal \(i=1\) and the optimal \(v=v^*_{1}.\)

(Relaxed) optimal \(v'\) and j. Now we investigate the optimal \(v'\) and j by dropping \(v\le v'.\)

$$\begin{aligned} \max _{ j,v':j\ne i} \quad \varphi _{j}c_{j}+ \int _{{\mathbf{T}}{\setminus } \left[ K(v') \cup T_{{\mathcal{I}}}^{j} \right] } (t-c)|^{(2)}_{{\mathcal{I}}}dF +\int _{K(v') \cup T_{{\mathcal{I}}}^{j}} (t_{j}-c_{j}) dF \quad \text{ subject to }\quad (13). \end{aligned}$$
(17)

This is again a single-good allocation problem, where the allocation rule is as follows: within the region \(K(v') \cup T_{{\mathcal{I}}}^{j},\) the good is allocated to agent j; otherwise, the good is allocated to the agent who has the second highest net value. As for the checking policy, the saved checking cost \(\varphi _{j}c_{j}\) and constraint (13) amount to saying the following: for all type profiles such that at most one agent has a net value that is above \(v',\) i.e., \(|\left\{ k\ne j: t_{k}-c_{k}> v'\right\} | \le 1,\) agent j receives the good without being checked. The rest of the objective function says that for all other type profiles, the agent who gets the good is always checked.

This observation facilitates the comparison of the principal’s payoffs under different choices of \(v'\)’s, which in turn gives us a unique optimal \(v'\) for each j,  as claimed in the following lemma.

Lemma 8

For any fixed j in the relaxed subproblem (17), the unique optimal choice of \(v'\) is \(v_{j}^*.\)

(Relaxed) small index first. Collect the choice variables of (12)–(14) in a tuple \((j,v';i,v).\) We proceed to compare the principal’s payoffs under two tuples \((1,v_{1}^*;j,v_{j}^*)\) and \((j,v_{j}^*;1,v_{1}^*)\) when \(j\ne 1.\) The former represents an extreme 1-threshold mechanism. Although the latter may not satisfy the constraint \(v'\ge v,\) we still refer to it as a j-threshold mechanism for convenience; whether it is really a j-threshold mechanism does not affect our argument. The former mechanism delivers a higher payoff to the principal than the latter:

Lemma 9

For any agent \(j \ne 1,\) the 1-threshold mechanism \((1,v_{1}^*;j,v_{j}^*)\) delivers a weakly higher payoff to the principal than the j-threshold mechanism \((j,v_{j}^*;1,v_{1}^*),\) where the comparison is strict if \(v_{1}^*> v_{j}^*.\)

Optimality and uniqueness. The following proposition states the optimality of the extreme 1-threshold mechanism \((1,v_{1}^*;2,v_{2}^*),\) with which we can prove Theorem 2.

Proposition 2

The extreme 1-threshold mechanism \((1,v_{1}^*;2,v_{2}^*)\) is an optimal choice of \((j,v';i,v)\) for (12)–(14). Moreover, it is the unique optimal mechanism up to equivalence-relabeling-randomization.

Proof

Suppose \(j\ne 1\) at optimum. By Lemma 7, we know that \(i=1\) and \(v = v_{1}^*\) in subproblem (16). By Lemma 8, we know that \(v' = v_{j}^*\) in subproblem (17). Thus, the solution for the relaxed problem is \((j,v_{j}^*;1,v_{1}^*)\) for some \(j\ne 1,\) which must deliver a weakly higher payoff than the optimal mechanism in the original problem (12)–(14). Suppose \(v_{1}^*>v_{j}^*.\) Lemma 9 tells us that in the relaxed problem, \((1,v_{1}^*;j,v_{j}^*)\) performs strictly better than \((j,v_{j}^*;1,v_{1}^*).\) Then, \((1,v_{1}^*;j,v_{j}^*)\) performs strictly better than the solution to the original problem (12)–(14). Since \((1,v_{1}^*;j,v_{j}^*)\) is a feasible mechanism satisfying \(v'>v,\) we have arrived at a contradiction. Therefore, we must have either (a) \(j=1\) or (b) \(j\ne 1\) but \(v_{j}^* = v_{1}^*,\) where in the latter case we can relabel the agents to have \(j=1.\) Thus, without loss of generality, we have \(j=1.\) Again, by Lemma 7, we know that \(i = 2\) with \(v = v_{2}^*.\) By Lemma 8, for \(j=1,\) the optimal threshold is \(v_{1}^*.\) Hence, \((1,v_{1}^*;2,v_{2}^*)\) is an optimal mechanism for the relaxed problem. Since \((1,v_{1}^*;2,v_{2}^*)\) satisfies the constraint \(v' \ge v,\) it is optimal for the original problem. \(\square\)

Proof of Theorem 2

We prove the sufficiency part first. Let \(({\mathbf{p}}^{ASM},{\mathbf{q}}^{ASM})\) denote the 2-ASM in Definition 1. Let \({\mathbf{p}}\) denote the allocation rule of \((1,v_{1}^*;2,v_{2}^*).\) By comparing Definitions 1 and 4, it is straightforward to verify that \({\mathbf{p}}\) is equivalent to \({\mathbf{p}}^{ASM}.\)Footnote 21 Next, we retrieve the checking policy of the mechanism \((1,v_{1}^*;2,v_{2}^*),\) denoted by \({\mathbf{q}},\) by requiring that \({\mathbf{E}}_{{\mathbf{t}}_{-i}}[q_{i}(t_{i},{\mathbf{t}}_{-i})] = {\mathbf{E}}_{{\mathbf{t}}_{-i}}[p_{i}(t_{i},{\mathbf{t}}_{-i})] -\varphi _{i}^{\mathbf{p}}\) for all \(i\in {\mathcal{I}}\) and all \(t_{i}\in T_{i}.\) Again, it is straightforward to verify that any eligible \({\mathbf{q}}\) that satisfies the above requirement must be equivalent to \({\mathbf{q}}^{ASM} .\)Footnote 22 Thus, \(({\mathbf{p}},{\mathbf{q}})\) is equivalent to the 2-ASM. Since the mechanism \((1,v_{1}^*;2,v_{2}^*),\) or \(({\mathbf{p}},{\mathbf{q}}),\) is optimal, we know that the 2-ASM is optimal. As a result, essential randomizations of 2-ASMs are also optimal.

Now suppose a mechanism is optimal. By Proposition 1 or 1’, it is essentially one (or a randomization) of the extreme threshold mechanisms. Since \((1,v_{1}^*;2,v_{2}^*),\) or the 2-ASM, is the unique optimal extreme threshold mechanism up to equivalence-relabeling-randomization (Proposition 2), we know that the optimal mechanism must be essentially one (or a randomization) of the 2-ASMs. \(\square\)

Appendix C: Proof of Lemma 1

The “only-if” part. Suppose \(p_i({\mathbf{t}}) = 1.\) According to the Ascending Algorithm, there are two possibilities for agent i to obtain a good. First, \(i \le n\) and he is not challenged. Then within the group \(\{i+1,\ldots ,I\},\) only agents who are removed with a good may have a net value that is larger than \(v^*_i.\) Therefore, \(|\{j\in \{i+1,\ldots ,I\}: t_j - c_j > v^*_i\}| \le n-i.\) Since \(v_j^* \le v_i^*\) for all \(j\in \{i+1,\ldots ,I\},\) we have

$$\begin{aligned} |\{j\in \{i+1,\ldots ,I\}: w_j > w_i\}| \le n-i. \end{aligned}$$
(18)

Obviously,

$$\begin{aligned} |\{j\in \{1,\ldots ,i-1\}: w_j \ge w_i \}| \le i - 1. \end{aligned}$$
(19)

Note that

$$\begin{aligned}&|\{j\in {\mathcal{I}}: w_j> w_i\}| + |\{j \in \{1,\ldots ,i-1\}: w_j = w_i\} | \\&\quad = |\{j\in \{i+1,\ldots ,I\}: w_j > w_i\}| + |\{j\in \{1,\ldots ,i-1\}: w_j \ge w_i \}|. \end{aligned}$$
(20)

Hence, the desired inequality (7) is derived by summing up (18) and (19).

Second, i obtains a good by successfully challenging some k with \(k \le n\) and \(k < i.\) Then \(t_i - c_i > \max \{v_k^*,t_k-c_k\}.\) Moreover, within the group \(\{k+1,\ldots ,I\},\) only agents who are removed with a good may have a net value that is larger than \(t_i - c_i\) (or equal to \(t_i - c_i\) in the tied case); otherwise, i cannot be the successful challenger to k. Hence,

$$\begin{aligned} |\{j\in \{k+1,\ldots ,I\}: t_j - c_j > t_i - c_i\}| + |\{j\in \{k+1,\ldots ,i-1\}: t_j - c_j = t_i - c_i\}| \le n-k. \end{aligned}$$

Since \(v_j^* \le v_k^* < t_i - c_i\) for all \(j\in \{k+1,\ldots ,I\},\) we have

$$\begin{aligned} |\{j\in \{k+1,\ldots ,I\}: w_j > w_i\}| + |\{j\in \{k+1,\ldots ,i-1\}: w_j = w_i\}|\le n-k. \end{aligned}$$
(21)

Obviously,

$$\begin{aligned} |\{j\in \{1,\ldots ,k-1\}: w_j \ge w_i \}| \le k - 1. \end{aligned}$$
(22)

Note that \(t_i - c_i > w_k = \max \{v_k^*,t_k-c_k\}\) implies \(w_i > w_k.\) Then summing up (21) and (22), together with (20), leads to the desired inequality (7).
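The counting identity (20), used in both cases above, can be verified by brute force over small profiles. The sketch below uses 0-based indices while the text uses 1-based labels.

```python
from itertools import product

def lhs(w, i):
    # |{j in I : w_j > w_i}| + |{j < i : w_j = w_i}|
    return (sum(1 for j in range(len(w)) if w[j] > w[i])
            + sum(1 for j in range(i) if w[j] == w[i]))

def rhs(w, i):
    # |{j > i : w_j > w_i}| + |{j < i : w_j >= w_i}|
    return (sum(1 for j in range(i + 1, len(w)) if w[j] > w[i])
            + sum(1 for j in range(i) if w[j] >= w[i]))

# Exhaustive check over all profiles of four agents with values in {0,1,2}.
assert all(lhs(w, i) == rhs(w, i)
           for w in product(range(3), repeat=4) for i in range(4))
```

The identity holds because on both sides, agents \(j < i\) with \(w_j > w_i\) are counted once, as are agents \(j < i\) with \(w_j = w_i\) and agents \(j > i\) with \(w_j > w_i.\)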

The “if” part. Now suppose (7) holds for agent i under \({\mathbf{t}}.\) If \(w_i = v_i^*,\) i.e., \(v_i^* \ge t_i - c_i,\) then \(i \le n.\) Obviously, we have \(v_j^* \ge v_i^*\) for all \(j \in \{1,\ldots , i -1\},\) which implies \(w_j \ge w_i.\) Therefore, \(|\{j\in \{1,\ldots ,i-1\}: w_j \ge w_i \}| = i - 1.\) By (7) and (20), we have \(|\{j\in \{i+1,\ldots ,I\}: w_j > w_i\}| \le n - i.\) This in turn implies \(|\{j\in \{i+1,\ldots ,I\}: t_j - c_j > v^*_i\}| \le n - i,\) i.e., agent i is not challenged. Hence, \(p_i({\mathbf{t}}) = 1.\)

If \(w_i = t_i - c_i,\) i.e., \(t_i - c_i > v_i^*,\) we need to consider three cases. When \(i \le n\) and i is not successfully challenged, we are done as agent i gets a good.

The second case is that \(i \le n\) but i is successfully challenged. Since the net value of the successful challenger is decreasing as the Ascending Algorithm proceeds, we know that all successful challengers in \(\{i+1,\ldots ,I\}\) have net values larger than \(t_i-c_i.\) That is,

$$\begin{aligned}&|\{j \in \{i+1,\ldots ,I\}: t_j-c_j> t_i-c_i\}| \ge n-(i-1) \quad \text{ or } \\&|\{j \in \{i+1,\ldots ,I\}: w_j > w_i\}| \ge n-(i-1), \end{aligned}$$

which implies by (7) and (20) that \(|\{j\in \{1,\ldots ,i-1\}: w_j \ge w_i\}| \le i-2.\) Therefore, \(w_i > w_j\) and thus \(t_i-c_i > w_j\) for some \(j\in \{1,\ldots ,i-1\},\) which makes i a successful challenger. Hence, \(p_i({\mathbf{t}}) = 1.\)

When \(i > n,\) by (7) and (20), we know that

$$\begin{aligned}&|\{j\in \{1,\ldots ,n\}: w_j \ge t_i - c_i \}| + |\{j\in \{n+1,\ldots ,i-1\}: w_j \ge t_i - c_i\}| \\&\qquad + |\{j\in \{i+1,\ldots ,I\}: w_j> t_i - c_i\}| \\&\quad = |\{j\in \{1,\ldots ,i-1\}: w_j \ge w_i \}| + |\{j\in \{i+1,\ldots ,I\}: w_j > w_i\}| \\&\quad \le n-1, \end{aligned}$$

which says that agent i fails at most \(n-1\) times in challenging favored agents. Hence, i succeeds in some challenge and \(p_i({\mathbf{t}}) = 1.\)
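Combining (18)–(20), inequality (7) amounts to saying that agent i ranks among the top n under the adjusted values \(w,\) with ties broken in favor of smaller indices. Inequality (7) itself is displayed earlier in the paper, so the reading below is our reconstruction; under it, a brute-force check confirms that the criterion selects exactly n agents on every profile.

```python
from itertools import product

def gets_good(w, i, n):
    # Agent i's 0-based rank under the order "w descending, index
    # ascending"; i gets a good iff this rank is at most n - 1,
    # i.e., |{j : w_j > w_i}| + |{j < i : w_j = w_i}| <= n - 1.
    rank = (sum(1 for j in range(len(w)) if w[j] > w[i])
            + sum(1 for j in range(i) if w[j] == w[i]))
    return rank <= n - 1

# With n = 2 goods and I = 4 agents, exactly n agents pass the criterion
# on every profile, ties included.
n, I = 2, 4
assert all(sum(gets_good(w, i, n) for i in range(I)) == n
           for w in product(range(3), repeat=I))
```

The check works because the rank is computed under a total order (ties are strictly resolved by index), so exactly n agents have rank at most \(n-1\) whenever \(I \ge n.\)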


Cite this article

Chua, G.A., Hu, G. & Liu, F. Optimal multi-unit allocation with costly verification. Soc Choice Welf 61, 455–488 (2023). https://doi.org/10.1007/s00355-023-01463-5
