1 Introduction

Algorithmic mechanism design is the art of designing and implementing the rules of a game to achieve a desired outcome from a set of possible outcomes. Each player (agent) has a valuation that assigns a value to each possible outcome. The desired outcome is the one that maximizes the sum of the valuations; this sum is usually called social welfare. The players are assumed to be selfish: they report valuations to the mechanism, and the reported valuations may differ from the true ones. Players may lie about their valuations in order to steer the mechanism to an outcome favorable to them. The mechanism computes an outcome and payments for the players. The utility of a player is her/his value of the outcome computed by the mechanism minus the payment charged by the mechanism. The agents are interested in optimizing their personal utility. Social welfare and personal utilities are determined with respect to the true valuations of the players, although these are not public knowledge. The purpose of the payments is to incentivize the players to report their true valuations. A mechanism is truthful if reporting the truth is a best strategy for each player irrespective of the inputs provided by the other players. A mechanism is efficient if the outcome and the payments can be computed in polynomial time. The underlying optimization problem is the computation of an outcome maximizing social welfare given the valuations of the players.

If the underlying optimization problem can be efficiently solved to optimality, the celebrated VCG mechanism (see, e.g., [20]) achieves truthfulness, social welfare optimization, and polynomial running time. Computing the outcome and the payments requires solving the underlying optimization problem to optimality.
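Concretely, if \(x^{*}\) maximizes \({\sum }_{i} v_{i}(x)\) over all outcomes, the VCG mechanism charges each player the externality she imposes on the others,

$$p_{i} = \underset{x}{\max} \sum\limits_{j \ne i} v_{j}(x) - \sum\limits_{j \ne i} v_{j}(x^{*}),$$

so computing the outcome and the n payments amounts to n + 1 exact optimizations over the outcome space.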

Many optimization problems are NP-hard and hence unlikely to have an exact algorithm with polynomial running time. However, it might still be possible to solve the problem approximately in polynomial time.

An example is the combinatorial auction problem. There is a set of m items to be sold to a set of n players. The (reported) value of a set S of items to the i-th player is \(v_{i}(S)\), with \(v_{i}(\emptyset) = 0\) and \(v_{i}(S) \le v_{i}(T)\) whenever \(S \subseteq T\). Let \(x_{i,S}\) be a 0-1 variable indicating that set S is given to player i. Then \({\sum }_{S} x_{i,S} \le 1\) for every player i, as at most one set can be given to i, and \({\sum }_{i} {\sum }_{S; j \in S} x_{i,S} \le 1\) for every item j, as any item can be given away only once. The social welfare is \({\sum }_{i,S} v_{i}(S) x_{i,S}\). The linear programming relaxation is obtained by replacing the integrality constraints for \(x_{i,S}\) by \(0 \le x_{i,S} \le 1\). Note that the number d of variables is exponential in the number of items, namely \(d = n2^{m}\). The linear program is of the packing type, i.e., if x is feasible and \(y \le x\), then y is feasible. For the combinatorial auction problem, \(O(\sqrt {n})\)-approximation algorithms exist, and these algorithms also provide the corresponding integrality-gap-verifier (the definition is given below) with \(\alpha = 1/\sqrt {n}\) [3, 16, 22].
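Collecting the constraints, the LP relaxation of the combinatorial auction problem reads

$$\max \sum\limits_{i,S} v_{i}(S)\, x_{i,S} \quad\text{s.t.}\quad \sum\limits_{S} x_{i,S} \le 1 \text{ for all } i, \qquad \sum\limits_{i} \sum\limits_{S : j \in S} x_{i,S} \le 1 \text{ for all } j, \qquad x_{i,S} \ge 0.$$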

For many integer linear programming problems, approximation algorithms are known that first solve the corresponding linear programming relaxation and then construct an integral solution either by rounding or by primal-dual methods. Lavi and Swamy [18, 19] showed that certain linear programming based approximation algorithms for the social welfare problem can be turned into randomized mechanisms that are truthful-in-expectation, i.e., reporting the truth maximizes the expected utility of a player. The LS-mechanism is powerful (see [4, 12, 18, 19] for applications), but unlikely to be efficient in practice because of its use of the Ellipsoid method. We show how to use the multiplicative weights update method instead. This results in simpler algorithms at the cost of somewhat weaker approximation and truthfulness guarantees.

We next review the LS-mechanism. It applies to integer linear programming problems of the packing type for which the linear programming relaxation can be solved exactly and for which an α-integrality gap verifier is available. More precisely:

  1.

    Let \(\mathcal {Q} \subseteq \mathbb {R}_{\ge 0}^{d}\) be a packing polytope, i.e., \(\mathcal {Q}\) is a bounded convex polytope contained in the non-negative orthant of d-dimensional space with the property that if \(y \in \mathcal {Q}\) and \(x \le y\) then \(x \in \mathcal {Q}\). The linear programming problem for \(\mathcal {Q}\) asks to find for a given d-dimensional vector v a point \(x^{*} = \text {argmax}_{x \in \mathcal {Q}} v^{T} x\).

  2.

    We use \(\mathcal {Q}_{\mathcal {I}} := \mathcal {Q} \cap \mathbb {Z}^{d}\) for the set of integral points in \(\mathcal {Q}\). The integer linear programming problem for \(\mathcal {Q}_{\mathcal {I}}\) asks to find for a given d-dimensional vector v a point \(x^{*} = \text {argmax}_{x \in \mathcal {Q}_{\mathcal {I}}} v^{T} x\). We use \(x^{1}, x^{2}, \ldots, x^{j}, \ldots\) to denote the elements of \(\mathcal {Q}_{\mathcal {I}}\) and \(\mathcal {N}\) for the index set of all elements in \(\mathcal {Q}_{\mathcal {I}}\).

  3.

    An α-integrality-gap-verifier for \(\mathcal {Q}_{\mathcal {I}}\) for some α ∈ (0,1] is an efficient algorithm that on input \(v \in \mathbb {R}^{d}\) and \(x^{*} \in \mathcal {Q}\), returns an \(x \in \mathcal {Q}_{\mathcal {I}}\) such that

    $$ v^{T} x \ge \alpha v^{T} x^{*}.$$
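Since \(\mathcal {Q}_{\mathcal {I}}\) is accessed only through this verifier, it is convenient to think of it as an abstract interface. A minimal sketch of the assumed interface in Python (the class and method names are ours, not from any particular library):

```python
from typing import Protocol, Sequence


class IntegralityGapVerifier(Protocol):
    """Assumed interface of an alpha-integrality-gap-verifier for Q_I."""

    alpha: float  # guarantee factor, alpha in (0, 1]

    def verify(self, v: Sequence[float], x_star: Sequence[float]) -> list[float]:
        """Given v in R^d and x_star in Q, return an integral point x in Q_I
        with v^T x >= alpha * v^T x_star."""
        ...
```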

The mechanism consists of three main steps:

  1.

    Let \(v_{i} \in \mathbb {R}_{\ge 0}^{d}, 1\le i\le n,\) be the reported valuation of the i-th player and let \(v = {\sum }_{i} v_{i}\) be the accumulated reported valuation. Solve the LP-relaxation, i.e., find a maximizer \(x^{*} = \text{argmax}_{x \in \mathcal{Q}} v^{T} x\) for the social welfare of the fractional problem, and determine the VCG prices \(p_{1}, \ldots, p_{n}\). The allocation \(x^{*}\) and the VCG prices are a truthful mechanism for the fractional problem.

  2.

    Write \(\alpha \cdot x^{*}\) as a convex combination of integral solutions in \(\mathcal {Q}\), i.e., \(\alpha \cdot x^{*} = {\sum }_{j \in \mathcal {N}} \lambda _{j} x^{j}\), \(\lambda _{j} \ge 0\), \({\sum }_{j \in \mathcal {N}} \lambda _{j} = 1\), and \(x^{j} \in \mathcal {Q}_{\mathcal {I}}\). This step requires the α-integrality-gap-verifier.

  3.

    Pick the integral solution \(x^{j}\) with probability \(\lambda_{j}\), and charge the i-th player the price \(p_{i} \cdot ({v_{i}^{T}} x^{j}/ {v_{i}^{T}} x^{*})\); if \({v_{i}^{T}} x^{*} = 0\), charge zero.

The LS-mechanism approximates social welfare with factor α (is α-socially efficient) and guarantees truthfulness-in-expectation, i.e., it converts a truthful fractional mechanism into an α-approximate truthful-in-expectation integral mechanism. With respect to practical applicability, Steps 1 and 2 are the two major bottlenecks. Step 1 requires solving n + 1 linear programs, one for the fractional solution and one for each price; an exact solution requires the use of the Ellipsoid method (see, e.g., [11]) if the dimension is exponential. Furthermore, until recently, the only method known to perform the decomposition in Step 2 was via the Ellipsoid method. An alternative method avoiding the use of the Ellipsoid method was recently given by Kraft, Fadaei, and Bichler [14]. We comment on their result in the next section.
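Step 3, by contrast, is straightforward once the decomposition is available. A minimal sketch, with data layout and names of our own choosing:

```python
import random
from typing import Sequence


def ls_step3(lambdas: Sequence[float],
             integral_points: Sequence[Sequence[float]],
             x_star: Sequence[float],
             valuations: Sequence[Sequence[float]],   # v_i as vectors in R^d
             fractional_prices: Sequence[float]):
    """Sample x^j with probability lambda_j and scale each fractional VCG
    price p_i by v_i^T x^j / v_i^T x^*, charging zero when v_i^T x^* = 0."""
    j = random.choices(range(len(lambdas)), weights=lambdas, k=1)[0]
    x_j = integral_points[j]

    def dot(u, w):
        return sum(a * b for a, b in zip(u, w))

    prices = []
    for v_i, p_i in zip(valuations, fractional_prices):
        denom = dot(v_i, x_star)
        prices.append(p_i * dot(v_i, x_j) / denom if denom > 0 else 0.0)
    return x_j, prices
```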

1.1 Our Results

Our result concerns the design and analysis of a practical algorithm for the LS-scheme. We first consider the case where the LP-relaxation of SWM (social welfare maximization) in Step 1 of the LS-scheme can be solved exactly and efficiently; then our problem reduces to the design of a practical algorithm for Step 2. Afterwards, we consider the more general problem where only an FPTAS for the LP-relaxation is available.

Convex Decomposition

Over the past 15 years, simple and fast methods have been developed for solving packing and covering linear programs [2, 9, 10, 15, 17, 21, 24] within an arbitrarily small error guarantee ε. These methods are based on the multiplicative weights update (MWU) method [1], in which a very simple update rule is repeatedly performed until a near-optimal solution is obtained. We show how to replace the use of the Ellipsoid method in Step 2 by an approximation algorithm for covering linear programs. This result is the topic of Section 2.

Theorem 1

Let ε > 0 be arbitrary. Given a fractional point \(x^{*} \in \mathcal {Q}\) and an α-integrality-gap verifier for \(\mathcal {Q}_{\mathcal {I}}\), we can find a convex decomposition

$$ \frac{\alpha}{1+4\varepsilon}\cdot x^{*}=\sum\limits_{j\in\mathcal{N}}\lambda_{j}x^{j}. $$

The convex decomposition has size (= number of nonzero \(\lambda_{j}\)) at most \(s(1+\lceil \varepsilon^{-2} \ln s\rceil)\), where s is the size of the support of \(x^{*}\) (= number of nonzero components). The algorithm makes at most \(s\lceil \varepsilon^{-2} \ln s\rceil\) calls to the integrality-gap-verifier.

Kraft, Fadaei, and Bichler [14] obtained a related result independently. However, their construction is less efficient in two respects. First, it requires \(O(s^{2} \varepsilon^{-2})\) calls of the integrality-gap-verifier. Second, the size of their convex decomposition might be as large as \(O(s^{3} \varepsilon^{-2})\). In the combinatorial auction problem, s = n + m. Theorem 1 together with Steps 1 and 3 of the LS scheme implies a mechanism that is truthful-in-expectation and has \((\alpha/(1 + 4\varepsilon))\)-social efficiency.

We leave it as an open problem whether the quadratic dependency of the size of the decomposition on 1/ε can be improved.

Approximately Truthful-in-Expectation Mechanism

We drop the assumption that the fractional SWM-problem can be solved optimally and assume instead that we have an FPTAS for it. We assume further that the problem is separable, which means that the variables can be partitioned into disjoint groups, one for each player, such that the value of an allocation for a player depends only on the variables in his group, i.e.,

$$v_{i}(x)=v_{i}(x_{i}),$$

where \(x_{i}\) is the set of variables associated with player i. Formally, any outcome \(x \in \mathcal {Q}\subseteq \mathbb {R}^{d}\) can be written as \(x = (x_{1}, \ldots, x_{n})\) where \(x_{i} \in \mathbb {R}^{d_{i}}\) and \(d = d_{1}+\cdots+d_{n}\). We further assume that for each player i ∈ [n], there is a dominating allocation \(u^{i} \in \mathcal {Q}\) that maximizes his value for every valuation \(v_{i}\), i.e.,

$$ v_{i}(u^{i}) = \underset{z \in\mathcal{Q}}{\max}~v_{i}(z), $$
(1)

for every \(v_{i}\in \mathcal {V}_{i}\), where \(\mathcal {V}_{i}\) denotes the possible valuations of player i. For the case of a combinatorial auction, the allocation u i allocates all items to player i.

Theorem 2

Let \(\varepsilon_{0} \in(0,1/2]\). Define \(\varepsilon ={\Theta }(\frac {{\varepsilon _{0}^{5}}}{n^{4}})\). Assuming that the fractional SWM-problem has an FPTAS, is separable, and has a dominating allocation for every player i, and that there is an α-integrality gap verifier for \(\mathcal {Q}_{\mathcal {I}}\), there is a polynomial time randomized integral mechanism with the following properties:

  (C1)

    No positive transfer, i.e., prices are nonnegative.

  (C2)

    Individually rational with probability \(1-\varepsilon_{0}\), i.e., the utility of any truth-telling player is non-negative with probability at least \(1-\varepsilon_{0}\).

  (C3)

    (1 − ε 0 )-truthful-in-expectation, i.e., reporting the truth maximizes the expected utility of a player up to a factor 1 − ε 0.

  (C4)

    γ-socially efficient, where \(\gamma = \alpha(1 - \varepsilon)(1 - \varepsilon_{0})/(1 + 4\varepsilon)\).

Our mechanism is based on constructing a randomized fractional mechanism with properties (C1) to (C3) that is \((1 - \varepsilon)(1 - \varepsilon_{0})\)-socially efficient and then converting it into an integral mechanism with the properties above. The conversion is simple. Let x be a fractional allocation obtained from the fractional mechanism. We apply our convex decomposition technique and Step 3 of the Lavi-Swamy mechanism to obtain an integral randomized mechanism that satisfies (C1) to (C4). We show this result in Section 3.

Our fractional mechanism refines the one given in [6], where the dependency of ε on n and \(\varepsilon_{0}\) is \(\varepsilon = {\Theta}(\varepsilon_{0}/n^{9})\). A recent experimental study of our mechanism on Display Ad Auctions [7] shows the applicability of our techniques in practice.

We leave it as an open problem whether the dependency of ε on ε 0 and n can be improved.

On the Existence of an FPTAS for the Fractional SWM-Problem

We close the survey of our results with a comment on the existence of an FPTAS for the fractional SWM-problem. Consider a packing linear program

$$\max c^{T}x\quad\text{subject to}\quad Ax\leq b,~~~ x\geq 0, $$

where \(A\in \mathbb {R}_{\ge 0}^{m\times n}\) is an m × n matrix with non-negative entries and \(c\in \mathbb {R}_{>0}^{n},\) \(b\in \mathbb {R}_{> 0}^{m}\) are positive vectors. We may assume that each column of A contains a non-zero entry as otherwise the problem is trivially unbounded. For every κ ≥ 1 and weight vector \(z\in \mathbb {R}^{m}_{\ge 0}\) , let \(\mathcal {O}_{\kappa }(z)\) denote a κ-approximation oracle that returns a j such that

$$\frac{1}{c_{j}}\sum\limits_{i=1}^{m}\frac{z_{i}a_{ij}}{b_{i}} \le \kappa \cdot \min_{j^{\prime}\in[n]}\frac{1}{c_{j^{\prime}}}\sum\limits_{i=1}^{m}\frac{z_{i}a_{ij^{\prime}}}{b_{i}}.$$

Garg and Könemann [10] presented an algorithm that uses the oracle \(\mathcal {O}_{\kappa }\) to construct an approximation with a factor arbitrarily close to 1/κ. For κ = 1, their algorithm is an FPTAS.

What is the approximation oracle in the case of the combinatorial auction problem? In this problem, we have one constraint for each player and one constraint for each item. Let \(y_{i} \ge 0\) be the weight for agent i and \(z_{j} \ge 0\) be the weight for item j. Then oracle \(\mathcal {O}_{1}(y,z)\) has to find the pair

$$(i,S) := \text{argmin}_{(k,T)} \frac{1}{v_{k}(T)} \left( y_{k} + \sum\limits_{j \in T} z_{j}\right).$$

In other words, for each k, one needs to find the set T which minimizes \((y_{k} + {\sum }_{j \in T} z_{j})/v_{k}(T)\). If \(y_{k}\) is interpreted as a fixed cost incurred by agent k and \(z_{j}\) as the cost of item j, then T is the set that minimizes the ratio of cost to value. For a simple-minded bidder who is interested in the items in a subset \(T_{0}\) and in no other item, i.e., \(v_{k}(T) = v_{k}(T_{0})\) if \(T_{0} \subseteq T\) and \(v_{k}(T) = 0\) otherwise, \(T_{0}\) is the minimizer. Next consider additive valuations, i.e., \(v_{k}(T) = {\sum }_{j \in T} {a_{j}^{k}}\), where \({a_{j}^{k}} \ge 0\) is the value of item j for agent k. In this situation, \(\frac {1}{v_{k}(T)} \left (y_{k} + {\sum }_{j \in T} z_{j}\right ) \le \beta \) for a set T and a positive real β if and only if \({\sum }_{j \in T} (\beta {a_{j}^{k}} - z_{j}) \ge y_{k}\), and hence the minimal β for which such a set T exists is readily determined by binary search on β; for fixed β, the left-hand side is maximized by taking exactly the items with \(\beta {a_{j}^{k}} - z_{j} > 0\).
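A sketch of this binary search for the additive case; the function name, signature, and iteration cap are our own choices:

```python
def best_set_additive(a: list[float], z: list[float], y: float,
                      iters: int = 60) -> list[int]:
    """For an additive bidder (item values a_j >= 0, item weights z_j >= 0,
    fixed cost y >= 0), approximately find the set T minimizing
    (y + sum_{j in T} z_j) / (sum_{j in T} a_j), via binary search on the
    ratio beta as described in the text."""
    items = [j for j in range(len(a)) if a[j] > 0]
    assert items, "at least one item must have positive value"
    # Taking every valued item achieves some ratio, which upper-bounds the optimum.
    lo, hi = 0.0, (y + sum(z[j] for j in items)) / sum(a[j] for j in items)
    best = items
    for _ in range(iters):
        beta = (lo + hi) / 2
        # ratio <= beta is achievable iff max_T sum_{j in T}(beta*a_j - z_j) >= y,
        # and the inner maximum takes exactly the items with positive margin.
        T = [j for j in items if beta * a[j] - z[j] > 0]
        if T and sum(beta * a[j] - z[j] for j in T) >= y:
            best, hi = T, beta
        else:
            lo = beta
    return best
```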

2 A Fast Algorithm for Convex Decompositions

Let \(x^{*} \in \mathcal {Q}\) be arbitrary. Carr and Vempala [5] showed how to construct a convex combination of points in \(\mathcal {Q}_{\mathcal {I}}\) dominating \(\alpha x^{*}\) using a polynomial number of calls to an α-integrality-gap-verifier for \(\mathcal {Q}_{\mathcal {I}}\). Lavi and Swamy [19] modified the construction to get an exact convex decomposition \(\alpha x^{*} = {\sum }_{i \in \mathcal {N}}\lambda _{i} x^{i}\) for the case of packing linear programs. The construction uses the Ellipsoid method. We show an approximate version that replaces the use of the Ellipsoid method by the multiplicative weights update (MWU) method. For any ε > 0, we show how to obtain a convex decomposition of \(\alpha x^{*}/(1 + 4\varepsilon)\). Let s be the number of non-zero components of \(x^{*}\). The size of the decomposition and the number of calls to the α-integrality-gap verifier are \(O(s \varepsilon^{-2} \ln s)\).

This section is structured as follows. We first review Khandekar’s FPTAS for covering linear programs (Section 2.1). We then use it and the α-integrality gap verifier to construct, on input \(x^{*} \in \mathcal {Q}\), a dominating convex combination for \(\alpha x^{*}/(1 + 4\varepsilon)\) (Section 2.2). In Section 2.3, we show how to convert a dominating convex combination into an exact convex decomposition. Finally, in Section 2.4, we put the pieces together.

2.1 Khandekar’s Algorithm for Covering Linear Programs

Consider a covering linear program:

$$ \min c^{T}x \quad\text{subject to}\qquad Ax\geq b,~~ x\ge 0, $$
(2)

where \(A\in \mathbb {R}_{\ge 0}^{m\times n}\) is an m × n matrix with non-negative entries and \(c\in \mathbb {R}_{\ge 0}^{n}\) and \(b\in \mathbb {R}_{\ge 0}^{m}\) are non-negative vectors. We assume the availability of a κ-approximation oracle for some κ ∈ (0, 1].

\(\mathcal {O}_{\kappa }(z)\):

Given \(z\in \mathbb {R}^{m}_{\ge 0}\), the oracle finds a column j of A that maximizes \(\frac {1}{c_{j}}{\sum }_{i=1}^{m}\frac {z_{i}a_{ij}}{b_{i}}\) within a factor of κ:

$$\frac{1}{c_{j}}\sum\limits_{i=1}^{m}\frac{z_{i}a_{ij}}{b_{i}} \ge \kappa \cdot \underset{j^{\prime}\in[n]}{\max}~\frac{1}{c_{j^{\prime}}}\sum\limits_{i=1}^{m}\frac{z_{i}a_{ij^{\prime}}}{b_{i}} $$

For an exact oracle (κ = 1), Khandekar [15] gave an algorithm which computes a feasible solution \(\hat {x}\) to (2) such that \(c^{T}\hat {x}\leq (1+4\varepsilon )z^{*}\), where \(z^{*}\) is the value of an optimal solution. The algorithm makes \(O(m \varepsilon^{-2} \log m)\) calls to the oracle, where m is the number of rows of A. There are algorithms predating Khandekar’s work; see, for example, [13, Chapter 4].
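Algorithm 3 itself is reproduced in the Appendix; the following sketch conveys the spirit of such MWU covering solvers (it is not a literal transcription of Algorithm 3, and the helper names are ours):

```python
import math
from typing import Callable


def covering_mwu(oracle: Callable[[list[float]], tuple[int, list[float]]],
                 m: int, eps: float) -> dict[int, float]:
    """Multiplicative-weights solver for  min c^T x  s.t.  Ax >= 1, x >= 0
    (rows pre-normalized so that b = 1).  `oracle(z)` plays the role of
    O_kappa: given row weights z, it returns (j, A[:, j]) for a column that
    (approximately) maximizes (1/c_j) * sum_i z_i * a_ij; the cost c_j enters
    only through this choice."""
    target = math.ceil(math.log(m) / eps ** 2)    # coverage each row must reach
    cover = [0.0] * m                             # current value of (Ax)_i
    x: dict[int, float] = {}
    while min(cover) < target:
        # weights decay geometrically in the coverage; satisfied rows get weight 0
        z = [(1 - eps) ** c if c < target else 0.0 for c in cover]
        j, col = oracle(z)
        # step so that the best-covered row among col's positive entries gains 1
        step = 1.0 / max(a for a in col if a > 0)  # assumes col has a positive entry
        x[j] = x.get(j, 0.0) + step
        for i, a in enumerate(col):
            cover[i] += step * a
    return {j: v / target for j, v in x.items()}   # scaled feasible solution
```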

Theorem 3

(Generalization of Khandekar’s algorithm to arbitrary κ ≤ 1) Let \(\varepsilon \in (0,\frac {1}{2}]\) and let \(z^{*}\) be the value of an optimum solution to (2). Procedure Covering\((\mathcal {O}_{\kappa })\) (see Algorithm 3 in the Appendix) terminates in at most \(m\lceil \varepsilon^{-2} \ln m\rceil\) iterations with a feasible solution \(\hat x\) of (2) with at most \(m\lceil \varepsilon^{-2} \ln m\rceil\) positive components. At termination, it holds that

$$ c^{T}\hat x\le\frac{(1+4\varepsilon)}{\kappa}z^{*}. $$
(3)

For completeness, we give a proof of Khandekar’s result in the Appendix. The proof of Theorem 3 can be modified to give (see the Appendix):

Corollary 1

Suppose b = 1, c = 1, and we use the following oracle \(\mathcal {O}^{\prime }\) instead of \(\mathcal {O}\) in Algorithm 3: given \(\tilde{z}\in \mathbb {R}^{m}_{\ge 0}\) with \(\boldsymbol{1}^{T}\tilde{z} = 1\), oracle \(\mathcal {O}^{\prime }\) returns a column i of A such that \(\tilde{z}^{T}A\boldsymbol{1}_{i}\ge 1\).

Then the algorithm terminates in at most m⌈ε −2 lnm⌉ iterations with a feasible solution \(\hat x\) having at most m⌈ε −2 lnm⌉ positive components, such that \(\boldsymbol {1}^{T}\hat x\le 1+4\varepsilon \).

2.2 Finding a Dominating Convex Combination

Recall that we use \(\mathcal {N}\) to index the elements in \(\mathcal {Q}_{\mathcal {I}}\). We assume the availability of an α-integrality-gap-verifier \(\mathcal {F}\) for \(\mathcal {Q}_{\mathcal {I}}\). We will use the results of the preceding section and show how to obtain for any \(x^{*}\in \mathcal {Q}\) and any positive ε a convex combination of points in \(\mathcal {Q}_{\mathcal {I}}\) that covers \(\alpha x^{*}/(1 + 4\varepsilon)\). Our algorithm requires \(O(s \varepsilon^{-2} \ln s)\) calls to the oracle, where s is the size of the support of \(x^{*}\).

Theorem 4

Let ε > 0 be arbitrary. Given a fractional point \(x^{*} \in \mathcal {Q}\) and an α-integrality-gap verifier \(\mathcal {F}\) for \(\mathcal {Q}_{\mathcal {I}}\), we can find a convex combination \(\bar {x}\) of integral points in \(\mathcal {Q}_{\mathcal {I}}\) such that

$$\frac{\alpha}{1+4\varepsilon}\cdot x^{*} \le \bar{x} = \sum\limits_{i\in\mathcal{N}}\lambda_{i}x^{i} .$$

The convex decomposition has size at most \(s\lceil \varepsilon^{-2} \ln s\rceil\), where s is the number of positive entries of \(x^{*}\). The algorithm makes at most \(s\lceil \varepsilon^{-2} \ln s\rceil\) calls to the integrality-gap verifier.

Proof

The task of finding the multipliers λ i is naturally formulated as a covering LP ([5]), namely,

$$\begin{array}{@{}rcl@{}} \min \quad & &\sum\limits_{i\in\mathcal{N}} \lambda_{i}\\ \text{s.t.} && \sum\limits_{i\in\mathcal{N}} \lambda_{i}{x^{i}_{j}} \geq \alpha\cdot x_{j}^{*} \quad \text{for all } j,\\ && \sum\limits_{i\in\mathcal{N}} \lambda_{i} \geq 1,\\ && \lambda_{i} \geq 0 \quad \text{for all } i\in\mathcal{N}. \end{array} $$
(4)

Clearly, we can restrict our attention to the \(j \in S^{+} := \{ j : x^{*}_{j} > 0 \}\) and rewrite the constraint for jS + as \( {\sum }_{i\in \mathcal {N}} \lambda _{i}{x^{i}_{j}}/ (\alpha \cdot x_{j}^{*}) \ge 1\). For simplicity of notation, we assume S +=[1..s]. We thus have a covering linear program as in (2) with m: = s+1 constraints, \(n := \left |\mathcal {N}\right |\) variables λ i , right-hand side b: = 1, cost vector c: = 1, and constraint matrix A = (a j, i ) (note that we use j for the row index and i for the column index), where

$$a_{j,i} := \left\{ \begin{array}{l l} {x^{i}_{j}}/(\alpha x^{*}_{j}) & \quad 1\leq j\leq s, i\in\mathcal{N}\\ 1 & \quad j=s+1, i\in\mathcal{N} \end{array} \right. $$

Thus we can apply Corollary 1 of Section 2.1, provided we can efficiently implement the required oracle \(\mathcal {O}^{\prime }\). We do so using \({\mathcal {F}}\).

Oracle \(\mathcal {O}^{\prime }\) is given a \(\tilde {z}\) such that \(\boldsymbol{1}^{T}\tilde {z}=1\). Let us conveniently write \(\tilde {z}=(w,z)\), where \(w\in \mathbb {R}_{\ge 0}^{s}\), \(z\in \mathbb {R}_{\ge 0}\), and \({\sum }_{j=1}^{s}w_{j}+z=1\). Oracle \(\mathcal {O}^{\prime }\) needs to find a column i such that \(\tilde {z}^{T}A\boldsymbol {1}_{i}\geq 1\). In our case \(\tilde {z}^{T}A\boldsymbol {1}_{i}={\sum }_{j=1}^{s} w_{j} {x^{i}_{j}}/(\alpha x^{*}_{j})+z\), and we need to find a column i for which this expression is at least one. Since z does not depend on i, we concentrate on the first term. Define

$$V_{j}:= \left\{ \begin{array}{l l} \frac{w_{j}}{\alpha x^{*}_{j}}& \quad \text{for \(j\in S^{+}\)}\\ 0 & \quad \text{otherwise}. \end{array} \right. $$

Call algorithm \({\mathcal {F}}\) with \(x^{*}\in \mathcal {Q}\) and \(V := (V_{1}, \ldots, V_{d})\). \({\mathcal {F}}\) returns an integer solution \(x^{i}\in \mathcal {Q}_{\mathcal {I}}\) such that

$$\sum\limits_{j\in S^{+}}\frac{w_{j}}{\alpha x_{j}^{*}}{x^{i}_{j}} = V^{T} x^{i} \ge \alpha \cdot V^{T}x^{*}=\sum\limits_{j\in S^{+}} w_{j},$$

and hence,

$$\sum\limits_{j\in S^{+}} \frac{w_{j}}{\alpha x_{j}^{*}}{x^{i}_{j}}+z \ge\sum\limits_{j\in S^{+}}w_{j}+z=1.$$

Thus i is the desired column of A.

It follows by Corollary 1 that Algorithm 3 finds a feasible solution \(\lambda ^{\prime } \in \mathbb {R}_{\ge 0}^{|\mathcal {N}|}\) to the covering LP (4), and a set \(\mathcal {Q}_{\mathcal {I}}^{\prime }\subseteq \mathcal {Q}_{\mathcal {I}}\) of vectors (returned by \(\mathcal {F}\)), such that \(\lambda ^{\prime }_{i}>0\) only for \(i\in \mathcal {N}^{\prime }\), where \(\mathcal {N}^{\prime }\) is the index set returned by oracle \(\mathcal {O}^{\prime }\) and \(|\mathcal {N}^{\prime } |\le s \lceil \varepsilon ^{-2} \ln s \rceil \). Also \({\Lambda } := {\sum }_{i\in \mathcal {N}^{\prime }}\lambda ^{\prime }_{i}\le (1+4\varepsilon )\). Scaling \(\lambda ^{\prime }_{i}\) by Λ, we obtain a set of multipliers \(\{\lambda _{i}=\lambda _{i}^{\prime }/{\Lambda }:~ i\in \mathcal {N}^{\prime }\}\), such that \({\sum }_{i\in \mathcal {N}^{\prime }}\lambda _{i} =1\) and

$$\sum\limits_{i\in\mathcal{N}^{\prime}}\lambda_{i} x^{i}\geq \frac{\alpha}{1+4\varepsilon} x^{*}. $$

We may assume \({x^{i}_{j}}=0\) for all jS + whenever λ i > 0; otherwise simply replace x i by a vector in which all components not in S + are set to zero. By the packing property this is possible. □
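To make the reduction concrete, the covering LP (4) can be wired into the covering_mwu sketch from Section 2.1; each oracle call translates into exactly one call to the verifier \(\mathcal {F}\), as in the proof above. All names are hypothetical:

```python
def dominating_decomposition(x_star: list[float], alpha: float,
                             verifier, eps: float):
    """Build the dominating convex combination of Theorem 4: multipliers and
    integral points with  sum_i lambda_i x^i >= alpha/(1+4 eps) x*.  Rows of
    the covering LP (4): one per j in S+, plus the row sum_i lambda_i >= 1."""
    S_plus = [j for j, v in enumerate(x_star) if v > 0]
    m = len(S_plus) + 1
    points: list[list[float]] = []          # integral points produced so far

    def oracle(z: list[float]):
        w = z[:-1]                          # z~ = (w, z), as in the proof
        V = [0.0] * len(x_star)
        for row, j in enumerate(S_plus):
            V[j] = w[row] / (alpha * x_star[j])
        x_i = verifier.verify(V, x_star)    # one call to F per oracle call
        points.append(x_i)
        col = [x_i[j] / (alpha * x_star[j]) for j in S_plus] + [1.0]
        return len(points) - 1, col         # all costs c_i are 1

    lam = covering_mwu(oracle, m, eps)
    total = sum(lam.values())               # Lambda <= 1 + 4*eps
    return [(v / total, points[i]) for i, v in lam.items()]
```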

2.3 From Dominating Convex Combination to Exact Convex Decomposition

We will show how to turn a dominating convex combination into an exact decomposition. The construction is general and uses only the packing property. Such a construction seems to have been observed in [18], but was not made explicit. Kraft, Fadaei, and Bichler [14] describe an alternative construction. Their construction may increase the size of the convex decomposition (= number of non-zero \(\lambda_{i}\)) by a multiplicative factor s and an additive factor \(s^{2}\). In contrast, our construction increases the size only by an additive factor s.

Algorithm 1 (pseudocode figure not reproduced here; the procedure is described in the proof of Theorem 5).

Theorem 5

Let \(x^{*} \in \mathcal {Q}\) be dominated by a convex combination \({\sum }_{i \in \mathcal {N}} \lambda _{i} x^{i}\) of integral points in \(\mathcal {Q}_{\mathcal {I}}\) , i.e.,

$$ \sum\limits_{i \in \mathcal{N}} \lambda_{i} x^{i} \ge x^{*}. $$
(5)

Then Algorithm 1 achieves equality in (5). It increases the size of the convex combination by at most s, where s is the number of positive components of \(x^{*}\).

Proof

Let \(S^{+} = \{ j : x^{*}_{j} > 0 \}\). We may assume \({x^{i}_{j}} = 0\) for all jS + and all \(i \in {\mathcal N}\) with λ i > 0.

For jS +, let \({\Delta }_{j} = {\sum }_{i\in \mathcal {N}}\lambda _{i} {x^{i}_{j}} - x^{*}_{j} \) be the gap in the j-th component. If Δ j = 0 for all jS +, we are done. Otherwise, choose j and \(i \in \mathcal {N}\) such that Δ j > 0 and \(\lambda _{i} {x^{i}_{j}} > 0\).

Let \(\boldsymbol{1}_{j}\) be the j-th unit vector. If, for some j with \({x^{i}_{j}} > 0\) and \({\Delta }_{j} > 0\), replacing \(x^{i}\) by \(x^{i}-\boldsymbol{1}_{j}\) maintains feasibility, i.e., satisfies constraint (5), we perform this replacement. Since \(x^{i}\) is an integer vector in \(\mathcal {Q}_{\mathcal {I}}\), the vector \(x^{i}-\boldsymbol{1}_{j}\) is nonnegative and, by the packing property, in \(\mathcal {Q}_{\mathcal {I}}\). The replacement decreases \({\Delta }_{j}\) by \(\lambda_{i}\) and does not increase the number of nonzero \(\lambda_{i}\).

Otherwise, Δ j <λ i for all j with Δ j > 0 and \({x^{i}_{j}} > 0\). Since x i is integral, we also have \({\Delta }_{j} \le \lambda _{i} {x^{i}_{j}}\) for all such j. Among the indices j with Δ j > 0 and \({x^{i}_{j}} > 0\), let k minimize \({\Delta }_{k}/{x^{i}_{k}}\). Let y be such that \(y_{j} = {x^{i}_{j}}\) if Δ j = 0 and y j = 0 if Δ j > 0. Then \(y \in \mathcal {Q}_{\mathcal I}\) since \(\mathcal {Q}\) is a packing polytope. In the convex combination, replace

$$\lambda_{i} x^{i} \quad\text{by}\quad \left( \lambda_{i} - \frac{{\Delta}_{k}}{{x^{i}_{k}}}\right)\cdot x^{i} + \frac{{\Delta}_{k}}{{x^{i}_{k}}} \cdot y.$$

Notice that \(\lambda _{i} - \frac {{\Delta }_{k}}{{x^{i}_{k}}} \ge 0\). Let \({\Delta }_{j}^{\prime }\) be the new gaps. Then clearly \({\Delta }_{j}^{\prime } = {\Delta }_{j}\), if Δ j = 0. Consider any j with Δ j > 0. Then

$${\Delta}^{\prime}_{j} = {\Delta}_{j} - \frac{{\Delta}_{k}}{{x^{i}_{k}}} \cdot {x^{i}_{j}} \left\{\begin{array}{ll} = 0 & \text{if }j = k,\\ \ge {\Delta}_{j} - \frac{{\Delta}_{j}}{{x^{i}_{j}}} \cdot {x^{i}_{j}} = 0 &\text{if }j \not= k. \end{array}\right.$$

The inequality in the second case holds since \({\Delta }_{k}/{x^{i}_{k}} \le {\Delta }_{j}/{x^{i}_{j}}\). We have decreased the number of nonzero \({\Delta }_{j}\) by one at the cost of one additional nonzero \(\lambda_{i}\). Thus the total number of vectors added to the convex decomposition is at most s. □
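A direct transcription of this gap-closing procedure (i.e., Algorithm 1) as a sketch, ignoring floating-point tolerance issues:

```python
def close_gaps(decomp: list[tuple[float, list[float]]], x_star: list[float]):
    """Turn a dominating convex combination  sum_i lambda_i x^i >= x*  into an
    exact decomposition, as in the proof of Theorem 5.  `decomp` holds
    (lambda_i, x^i) pairs; entries are mutated or split in place."""
    d = len(x_star)
    while True:
        gaps = [sum(lam * x[j] for lam, x in decomp) - x_star[j] for j in range(d)]
        open_js = [j for j in range(d) if gaps[j] > 0]
        if not open_js:
            return decomp
        # pick a term that contributes to some open gap
        i = next(i for i, (lam, x) in enumerate(decomp)
                 if lam > 0 and any(x[j] > 0 for j in open_js))
        lam, x = decomp[i]
        easy = [j for j in open_js if x[j] > 0 and gaps[j] >= lam]
        if easy:
            x[easy[0]] -= 1            # x^i -> x^i - 1_j stays in Q_I (packing)
        else:
            hit = [j for j in open_js if x[j] > 0]
            k = min(hit, key=lambda j: gaps[j] / x[j])
            t = gaps[k] / x[k]         # split weight t off onto y
            y = [0 if gaps[j] > 0 else x[j] for j in range(d)]
            decomp[i] = (lam - t, x)
            decomp.append((t, y))
```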

2.4 Fast Convex Decomposition

We are now ready to prove Theorem 1.

Proof of Theorem 1

Theorem 4 yields a convex combination of integer points of \(\mathcal {Q}_{\mathcal{I}}\) dominating \(\alpha x^{*}/(1+4\varepsilon)\). The convex decomposition has size at most \(s\lceil \varepsilon^{-2} \ln s\rceil\), where s is the number of positive entries of \(x^{*}\). The algorithm makes at most \(s\lceil \varepsilon^{-2} \ln s\rceil\) calls to the integrality-gap verifier. Theorem 5 turns this dominating convex combination into an exact combination. It adds up to s additional vectors to the convex combination. □

3 Approximately Truthful-in-Expectation Mechanisms

The goal of this section is to derive an approximate VCG-mechanism. We no longer assume that the fractional SWM-problem can be solved exactly; instead we assume that we have an FPTAS for it. We will first design a randomized fractional mechanism (Theorem 6 in Section 3.1) and then convert the fractional mechanism into an integral mechanism and prove Theorem 2 in Section 3.2.

3.1 Approximately Truthful-in-Expectation Fractional Mechanisms

Theorem 6

Let \(\varepsilon_{0} \in(0,1/2]\). Define \(\varepsilon ={\Theta }(\frac {{\varepsilon _{0}^{5}}}{n^{4}})\). Assuming that the fractional SWM-problem has an FPTAS, is separable, and has a dominating allocation for every player i, there is a polynomial time randomized fractional mechanism (Algorithm 2) with the following properties:

  (D1)

    No positive transfer, i.e., prices are nonnegative.

  (D2)

    Individually rational with probability 1 − ε 0 , i.e., the utility of any truth-telling player is non-negative with probability at least 1−ε 0.

  (D3)

    (1 − ε 0 )-truthful-in-expectation, i.e., reporting the truth maximizes the expected utility of a player up to a factor 1 − ε 0.

  (D4)

    γ-socially efficient, where γ = (1 − ε)(1 − ε 0).

In order to present Algorithm 2 and prove Theorem 6, we introduce some notation and prove some preliminary lemmas. Let

$$ L_{i} := \sum\limits_{j \ne i} v_{j}(u^{j}) \quad\text{and}\quad \beta_{i} := \varepsilon L_{i}. $$
(6)

Note that \(L_{i}\) does not depend on the valuation of player i. Let \(\mathcal {A}\) be an ε-approximation algorithm for the LP relaxation of SWM. Note that \(\mathcal {A}\) runs in polynomial time: the running time of an FPTAS is polynomial in the input size and \(\frac {1}{\varepsilon }\), and \(\frac {1}{\varepsilon }\) is polynomial in n and \(\frac{1}{\varepsilon_{0}}\). We use \(\mathcal {A}(v)\) to denote the outcome of \(\mathcal {A}\) on input v; \(\mathcal {A}(v)\) is a fractional allocation in \(\mathcal{Q}\). In the following, we will apply \(\mathcal {A}\) to different valuations which we denote by \(v = (v_{i}, v_{-i})\), \(\bar {v} = (\bar {v}_{i}, v_{-i})\), and \(v^{\prime} =(\mathbf{0}, v_{-i})\). Here \(v_{i}\) is the reported valuation of player i, \(\bar {v}_{i}\) is his true valuation, and \(v^{\prime }_{i}=\mathbf {0}\). We denote the allocation returned by \(\mathcal {A}\) on input v (resp., \(\bar {v}\), \(v^{\prime}\)) by x (resp., \(\bar {x}\), \(x^{\prime}\)). Note that x, \(\bar {x}\), \(x^{\prime}\) are fractional allocations.

We first bound the maximal change in social welfare induced by a change of the valuation of the i-th player.

Lemma 1

Let ε ≥ 0 and let \(\mathcal {A}\) be an ε-approximation algorithm which returns allocation x on input vector v. Let \(\hat {x}\in \mathcal {Q}\) be an arbitrary point. Then

$$ v(x) \ge v(\hat{x})-\beta_{i} -\varepsilon\cdot v_{i}(\hat{x}) $$
(7)

for every i.

Proof

We have

$$\begin{array}{@{}rcl@{}} v(x)&\ge&(1-\varepsilon)\underset{y \in \mathcal{Q}}{\max}~ v(y) \\ &\ge& (1-\varepsilon)v(\hat{x}) \\ &=&v(\hat{x}) - \varepsilon\cdot \sum\limits_{j\neq i}v_{j}(\hat{x})-\varepsilon\cdot v_{i}(\hat{x}) \\ &\ge& v(\hat{x})-\beta_{i}-\varepsilon\cdot v_{i}(\hat{x}), \end{array} $$

where the first inequality follows from the fact that \(\mathcal {A}\) is an ε-approximation algorithm, and the last inequality follows from \(\varepsilon {\sum }_{j\neq i}v_{j}(\hat {x}) \leq \varepsilon {\sum }_{j\neq i} v_{j}(u^{j})=\beta _{i}\). □

We use the following payment rule:

$$ p_{i}(v):= \max\{p_{i}^{\mathit{VCG}}(v)-\beta_{i},0\} $$
(8)

where

$$p_{i}^{\mathit{VCG}}(v):=v_{-i}(x^{\prime})-v_{-i}(x).$$

where \(v_{-i}(x)={\sum }_{j\neq i}v_{j}(x)\), \(x ={\mathcal {A}}(v)\), and \(x^{\prime } ={\mathcal {A}}(0,v_{-i})\). Observe the similarity of the definition of \(p_{i}^{\mathit {VCG}}(v)\) to the VCG payment rule. In both cases, the payment is defined as the difference of the total value of two allocations to the players different from i. The first allocation ignores the influence of player i (\(x^{\prime } ={\mathcal {A}}(0,v_{-i})\)) and the second allocation takes it into account (\(x ={\mathcal {A}}(v)\)). The difference from the VCG rule is that x and \(x^{\prime}\) are not true maximizers but are computed by an ε-approximation algorithm.
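A sketch of payment rule (8); the representation of valuations as callables and all names are our own choices:

```python
def payment(i: int, v: list, A, beta: list[float]) -> float:
    """Payment rule (8): p_i(v) = max(p_i^VCG(v) - beta_i, 0), with both
    allocations computed by the eps-approximation algorithm A.  `v` is the
    list of reported valuation functions."""
    x = A(v)                                        # x = A(v)
    v_prime = list(v)
    v_prime[i] = lambda alloc: 0.0                  # v' = (0, v_{-i})
    x_prime = A(v_prime)                            # x' = A(0, v_{-i})

    def v_minus_i(alloc):
        return sum(v[j](alloc) for j in range(len(v)) if j != i)

    p_vcg = v_minus_i(x_prime) - v_minus_i(x)
    return max(p_vcg - beta[i], 0.0)
```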

Algorithm 2 (pseudocode figure not reproduced here; its steps are described in the text below).

Let \(U_{i}(v)=\bar {v}_{i}(x)-p_{i}(v)\) be the utility of player i for bid vector v. Note that the value of the allocation \(x=\mathcal {A}(v)\) is evaluated with the true valuation \(\bar {v}_{i}\) of player i. Let \(U_{i}(\bar {v}) = \bar {v}_{i}(\bar {x})-p_{i}(\bar {v})\) be the utility of player i for valuation vector \(\bar {v}=(\bar {v}_{i}, v_{-i})\).

Lemma 2

Let ε ≥ 0 and let \(\mathcal {A}\) be an ε-approximation algorithm. Let \(M_{0}\) be the mechanism with allocation function \(\mathcal {A}(v)\) and the payment rule (8). Then \(M_{0}\) is an individually rational mechanism with no positive transfer such that for all i,

$$ U_{i}(\bar{v})\ge U_{i}(v)-\varepsilon\cdot\bar v_{i}(x)-3\beta_{i}. $$
(9)

Proof

By definition, p i (v)≥0 for all v and all x; so the mechanism has no positive transfer. We next address individual rationality. Assume \(p_{i}(\bar v)=p^{\mathit {VCG}}_{i}(\bar {v})-\beta _{i}>0\), as otherwise \(U_{i}(\bar v)\ge 0\). We have

$$\begin{array}{@{}rcl@{}} U_{i}(\bar{v})&=&\bar{v}_{i}(\bar{x})-p_{i}(\bar{v}) \\ &=&\bar{v}_{i}(\bar{x})- p^{\mathit{VCG}}_{i}(\bar{v})+\beta_{i}\\ &=&\bar{v}_{i}(\bar{x}) + \bar{v}_{-i}(\bar{x}) - \bar{v}_{-i}(x^{\prime}) +\beta_{i} \\ &=&\bar{v}(\bar{x}) - \bar{v}(x^{\prime}) + \bar{v}_{i}(x^{\prime}) +\beta_{i} \\ &\ge&(1-\varepsilon)\bar{v}_{i}(x^{\prime})\ge 0, \end{array} $$

where the first inequality follows from Lemma 1 with \(v = \bar {v}\) and \(\hat {x} = x^{\prime }\).

Finally, we prove (9). We have \(v^{\prime}(x^{\prime}) = v_{-i}(x^{\prime})\), \(v^{\prime}(x) = v_{-i}(x)\), and \(v^{\prime}_{i}(x)=0\). Thus,

$$\begin{array}{@{}rcl@{}} p^{\mathit{VCG}}_{i}(v) =v_{-i}(x^{\prime})-v_{-i}(x)=v^{\prime}(x^{\prime})-v^{\prime}(x)+\varepsilon\cdot v^{\prime}_{i}(x) \end{array} $$

Applying Lemma 1 with \(v = v^{\prime}\) and \(\hat {x} = x\), we obtain

$$\begin{array}{@{}rcl@{}} v^{\prime}(x^{\prime})-v^{\prime}(x)+\varepsilon\cdot v^{\prime}_{i}(x) \ge - \beta_{i} \end{array} $$

Therefore,

$$\begin{array}{@{}rcl@{}} p^{\mathit{VCG}}_{i}(v)+\beta_{i}\ge 0. \end{array} $$
(10)

To see (9), we consider two cases:

  • Case 1:    \(p_{i}(\bar{v}) = 0\). Then, using (10) applied to \(\bar{v}\),

    $$\begin{array}{@{}rcl@{}} U_{i}(\bar{v}) = \bar{v}_{i}(\bar{x})-0\ge \bar{v}_{i}(\bar{x})-p^{\mathit{VCG}}_{i}(\bar{v})-\beta_{i}. \end{array} $$
  • Case 2:    \(p_{i}(\bar{v}) = p^{\mathit {VCG}}_{i}(\bar{v})-\beta _{i}\). Then

    $$\begin{array}{@{}rcl@{}} U_{i}(\bar{v}) = \bar{v}_{i}(\bar{x})-p_{i}(\bar{v})=\bar{v}_{i}(\bar{x})-p^{\mathit{VCG}}_{i}(\bar{v})+\beta_{i}\ge \bar{v}_{i}(\bar{x})-p^{\mathit{VCG}}_{i}(\bar{v})-\beta_{i}, \end{array} $$

    where the last inequality follows from β i ≥ 0. Therefore, in both cases we have:

    $$U_{i}(\bar{v}) \ge\bar{v}_{i}(\bar{x})-p^{\mathit{VCG}}_{i}(\bar{v})-\beta_{i}.$$

Now by using the definition of \(p^{\mathit {VCG}}_{i}\) and Lemma 1, we get

$$\begin{array}{@{}rcl@{}} U_{i}(\bar{v})& \ge&\bar{v}_{i}(\bar{x})-p^{\mathit{VCG}}_{i}(\bar{v})-\beta_{i}\\ &=&\bar{v}_{i}(\bar{x})+\bar{v}_{-i}(\bar{x})-\bar{v}_{-i}(x^{\prime})-\beta_{i}\\ &=&\bar{v}(\bar{x})-\bar{v}_{-i}(x^{\prime})-\beta_{i}\\ &\ge&\bar{v}(x)-\beta_{i}-\varepsilon\bar{v}_{i}(x)-\bar{v}_{-i}(x^{\prime})-\beta_{i}\\ &=&\bar{v}_{i}(x)-p^{\mathit{VCG}}_{i}(v)-\varepsilon \bar{v}_{i}(x)-2\beta_{i}\\ &\ge& \bar{v}_{i}(x)-p_{i}(v)-\beta_{i}-\varepsilon \bar{v}_{i}(x)-2\beta_{i}\\ &=&U_{i}(v)-\varepsilon \bar{v}_{i}(x)-3\beta_{i}. \end{array} $$

□

In what follows we prove Theorem 6.

Proof of Theorem 6

Define \(q_{0} = (1 - \frac {\varepsilon _{0}}{n})^{n}\), \(\bar {\varepsilon } = \varepsilon _{0}/2\), and \(q_{j} = (1 - q_{0})/n\) for 1 ≤ j ≤ n. Let \(\eta = \bar {\varepsilon }(1 - q_{0})^{2}/n^{3}\), \(\eta^{\prime} = \eta/q_{j}\), and \(\varepsilon = \eta \bar {\varepsilon } (1 - q_{0})/(8 n)\). Then using \(q_{0}= (1 -\frac {\varepsilon _{0}}{n})^{n}\ge 1-\varepsilon _{0}\) and \(q_{0}= (1 -\frac {\varepsilon _{0}}{n})^{n}\le 1-\varepsilon _{0}/2\), we get

$$\frac{ {\varepsilon_{0}^{5}}}{256 n^{4}}=\frac{ \bar{\varepsilon}^{2} (\varepsilon_{0}/2)^{3}}{8 n^{4}}\le\varepsilon = \eta \bar{\varepsilon} (1 - q_{0})/(8 n)=\frac{ \bar{\varepsilon}^{2} (1 - q_{0})^{3}}{8 n^{4}} \le \frac{ \bar{\varepsilon}^{2} {\varepsilon_{0}^{3}}}{8 n^{4}}=\frac{{\varepsilon_{0}^{5}}}{32 n^{4}},$$

as stated in the Theorem. Let \(U_{i}(v) = \bar {v}_{i}(x)-p_{i}(v)\) be the utility of player i obtained by the mechanism M 0 of Lemma 2. Let further \(\widehat U_{i}(v) = v_{i}(x)-p_{i}(v)\). Following [6], we call player i active if the following two conditions hold:

$$\begin{array}{@{}rcl@{}} \widehat U_{i}(v)+\frac{\bar{\varepsilon} q_{i}}{q_{0}}v_{i}(u^{i})&\ge&\frac{q_{i}}{q_{0}} \eta^{\prime} L_{i}, \end{array} $$
(11)
$$\begin{array}{@{}rcl@{}} v_{i}(u^{i})&\ge& \eta L_{i}. \end{array} $$
(12)

Note that these conditions do not depend on the true valuation \(\bar v\). We denote by T = T(v) the set of active players when the valuation is \(v = (v_{1}, \ldots, v_{n})\). Note that \(L_{i}\) does not depend on \(v_{i}\). Thus when we refer to conditions (11) and (12) for \(\bar {v}\), we replace v and x by \(\bar {v}\) and \(\bar {x}\) on the left side and keep the right side unchanged. Non-negativity of payments is immediate from the definition of the mechanism and Lemma 2. Moreover, the utility of a truth-telling bidder i can be negative only if he/she is allocated in step 5 of Algorithm 2, i.e., at most with probability \(q_{i}\). It follows that the mechanism is individually rational with probability at least \(1 - {\sum }_{i=1}^{n} q_{i} = q_{0}= (1 -\frac {\varepsilon _{0}}{n})^{n}\ge 1-\varepsilon _{0}\).

Now we address truthfulness. Let us denote by \(\mathbb {E}[U^{\prime }_{i}(v)]\) the expected utility of player i obtained from the mechanism in Algorithm 2 on input \(v\in \mathcal {V}\). Assume j = 0 in Algorithm 2. We run the ε-approximation algorithm \(\mathcal {A}\) on v to compute the allocation \(x = (x_{1}, \ldots, x_{n})\). Then we change \(x_{i}\) and \(p_{i}\) to zero for all inactive i. Let \(\widetilde x\) be the allocation obtained in this way. The value for player i is \(v_{i}(\widetilde x)\). When the i-th player is active, this value equals \(v_{i}(x)\) because \(v_{i}\) depends only on the valuation in the i-th group (separability property). Therefore, in this case his utility is \(U_{i}(v)\). So we have

$$\begin{array}{@{}rcl@{}} \mathbb{E}[U^{\prime}_{i}(v)]=\left\{ \begin{array}{ll} q_{0} \cdot U_{i}(v) + q_{i} (\bar{v}_{i}(u^{i})-\eta^{\prime} L_{i}) &\text{ if }i\in T(v),\\ q_{i} \bar{v}_{i}(u^{i}) & \text{ if }i \not\in T(v). \end{array} \right. \end{array} $$
(13)
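For concreteness, the following sketch assembles Algorithm 2 from the description above and (13); the data layout and names are ours, and `price` implements rule (8) (cf. the sketch after that rule, e.g. via functools.partial):

```python
import random


def fractional_mechanism(v, A, u, L, q0, q, eta, eta_prime, eps_bar, price):
    """Sketch of the fractional mechanism (Algorithm 2).  `v[i]` are reported
    valuation functions, `u[i]` the dominating allocations, `price(i, v)` the
    payment rule (8); allocations are lists of per-player blocks (separability)."""
    n = len(v)
    x = A(v)
    p = [price(i, v) for i in range(n)]

    def active(i):
        u_hat = v[i](x) - p[i]                     # hat U_i(v)
        c11 = u_hat + (eps_bar * q[i] / q0) * v[i](u[i]) >= (q[i] / q0) * eta_prime * L[i]
        c12 = v[i](u[i]) >= eta * L[i]
        return c11 and c12                         # conditions (11) and (12)

    T = [i for i in range(n) if active(i)]         # active players w.r.t. x
    j = random.choices(range(n + 1), weights=[q0] + list(q), k=1)[0]
    if j == 0:
        for i in range(n):
            if i not in T:                         # zero out inactive players
                x[i] = [0.0] * len(x[i])
                p[i] = 0.0
        return x, p
    i = j - 1
    alloc = [[0.0] * len(block) for block in u[i]]  # only block i of u^i matters
    alloc[i] = u[i][i]
    prices = [0.0] * n
    if i in T:                                      # per (13): only active i pays
        prices[i] = eta_prime * L[i]
    return alloc, prices
```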

We first observe

$$ \mathbb{E}[U^{\prime}_{i}(\bar{v})]\ge(1-\bar{\varepsilon})q_{i} \cdot \bar{v}_{i}(u^{i}). $$
(14)

Indeed, the inequality is trivially satisfied if \(i\not \in T(\bar {v})\). On the other hand, if \(i\in T(\bar {v})\), then (11) implies \(U_{i}(\bar {v})=\widehat U_{i}(\bar v)\ge \frac {q_{i}}{q_{0}}\left (\eta ^{\prime } L_{i}-\bar {\varepsilon }\bar {v}_{i}(u^{i})\right )\), therefore

$$\begin{array}{@{}rcl@{}} \mathbb{E}[U^{\prime}_{i}(\bar{v})] &=&q_{0} \cdot U_{i}(\bar{v}) + q_{i} (\bar{v}_{i}(u^{i})-\eta^{\prime} L_{i})\\ &\ge& q_{0} \cdot \frac{q_{i}}{q_{0}}\left( \eta^{\prime} L_{i}-\bar{\varepsilon}\bar{v}_{i}(u^{i})\right)+ q_{i} (\bar{v}_{i}(u^{i})-\eta^{\prime} L_{i}) \\ &=&(1-\bar{\varepsilon})q_{i} \cdot \bar{v}_{i}(u^{i}). \end{array} $$

We now consider four cases:

  • Case 1:    \(i\in T(\bar v)\cap T(v)\). Note that (12) for \(\bar {v}\) implies \(\beta _{i} = \varepsilon L_{i} \le \frac {\varepsilon \bar {v}_{i}(u^{i})}{\eta }\). Thus, by Lemma 2, and using assumption (1) that \(\bar {v}_{i}(x)\le \bar {v}_{i}(u^{i})\), we have

    $$ U_{i}(\bar{v})\ge U_{i}(v)-\varepsilon(1+\frac{3}{\eta})\bar{v}_{i}(u^{i}) \ge U_{i}(v)- \frac{4 \varepsilon }{\eta}\bar{v}_{i}(u^{i}). $$
    (15)

    Hence by using (13) and (15), we have

    $$\begin{array}{@{}rcl@{}} \mathbb{E}[U^{\prime}_{i}(v)]&=& q_{0} \cdot U_{i}(v) + q_{i} (\bar{v}_{i}(u^{i})-\eta^{\prime} L_{i})\\ &\le& q_{0} (U_{i}(\bar{v}) + \frac{4 \varepsilon}{\eta} \bar{v}_{i}(u^{i}) ) + q_{i} (\bar{v}_{i}(u^{i})-\eta^{\prime} L_{i})\\ &=& \underbrace{q_{0} U_{i}(\bar{v})+q_{i} (\bar{v}_{i}(u^{i})-\eta^{\prime} L_{i})}_{\mathbb{E}[U^{\prime}_{i}(\bar{v})]}+ q_{0} \frac{4 \varepsilon}{\eta} \bar{v}_{i}(u^{i})\\ &=& \mathbb{E}[U^{\prime}_{i}(\bar{v})]+q_{0} \frac{4 \varepsilon}{\eta}\bar{v}_{i}(u^{i}). \end{array} $$

    Now applying (14) in the above inequality, we get

    $$\begin{array}{@{}rcl@{}} \mathbb{E}[U^{\prime}_{i}(v)] &\le&\mathbb{E}[U^{\prime}_{i}(\bar{v})]+q_{0} \frac{4 \varepsilon}{\eta}\bar{v}_{i}(u^{i}) \\ &\le&\left( 1+\frac{q_{0}}{(1-\bar{\varepsilon})q_{i}} \frac{4 \varepsilon}{\eta}\right)\mathbb{E}[U^{\prime}_{i}(\bar{v})]\\ &\le& (1+\bar{\varepsilon})\mathbb{E}[U^{\prime}_{i}(\bar{v})], \end{array} $$

    the last inequality follows from the definition of ε: since \(q_{0} \le 1\) and \(\bar {\varepsilon } \le 1/2\),

    $$\varepsilon \frac{q_{0}}{(1-\bar{\varepsilon})q_{i}}\frac{4}{\eta} \le \varepsilon \frac{1}{(1-\bar{\varepsilon})q_{i}}\frac{4}{\eta} \le \frac{\eta \bar{\varepsilon} (1 - q_{0})}{8 n} \frac{8}{q_{i} \eta} = \bar{\varepsilon}. $$
  • Case 2:    \(i\notin T(v)\). By (14), we have

    $$\mathbb{E}[U^{\prime}_{i}(v)] = q_{i} \bar v_{i}(u^{i})\le\frac{1}{1-\bar{\varepsilon}}\mathbb{E}[U^{\prime}_{i}(\bar{v})] \le (1 + \varepsilon_{0})\mathbb{E}[U^{\prime}_{i}(\bar{v})] . $$

    where we used \(\frac {1}{1 - \bar {\varepsilon }} = 1 + \bar {\varepsilon }(1 + \bar {\varepsilon } + \bar {\varepsilon }^{2} + \cdots ) \le 1 + 2 \bar {\varepsilon } = 1 +\varepsilon _{0}\).

  • Case 3: \(i\in T(v)\setminus T(\bar v)\) and (12) does not hold for \(\bar {v}\). Since \(U_{i}(v) \le \bar {v}_{i}(u^{i})\), we have

    $$\begin{array}{@{}rcl@{}} \mathbb{E}[U^{\prime}_{i}(v)] &=&q_{0} \cdot U_{i}(v) + q_{i} (\bar{v}_{i}(u^{i})-\eta^{\prime} L_{i})\\&\le& (q_{0} +q_{i} )\bar v_{i}(u^{i})- q_{i} \eta^{\prime} L_{i} <(q_{0} + q_{i} - 1)\bar v_{i}(u^{i}) \\ &\le& 0 \le q_{i} \bar v_{i}(u^{i}) =\mathbb{E}[U^{\prime}_{i}(\bar{v})], \end{array} $$

    where the second inequality holds because (12) does not hold for \(\bar {v}\) and \(q_{i} \eta ^{\prime }/\eta = 1\).

  • Case 4:    \(i\in T(v)\setminus T(\bar v)\) and (12) holds for \(\bar {v}\). Then (11) does not hold for \(\bar {v}\) and hence

    $$ U_{i}(\bar{v}) =\widehat U_{i}(\bar{v}) <\frac{q_{i}}{q_{0}}\left( \eta^{\prime} L_{i}-\bar{\varepsilon}\bar v_{i}(u^{i})\right). $$
    (16)

    Since (12) holds for \(\bar {v}\), we have (15). Hence by (14), (15) and (16) we have

    $$\begin{array}{@{}rcl@{}} \mathbb{E}[U^{\prime}_{i}(v)] &=&q_{0} \cdot U_{i}(v) + q_{i} (\bar{v}_{i}(u^{i})-\eta^{\prime} L_{i})\\ &\le& q_{0} \left( U_{i}(\bar{v}) + \frac{4 \varepsilon} {\eta}\bar v_{i}(u^{i})\right) + q_{i} (\bar v_{i}(u^{i})-\eta^{\prime} L_{i}) \\ &\le& q_{i} \eta^{\prime} L_{i}- q_{i} \bar{\varepsilon}\bar v_{i}(u^{i}) + \frac{4 \varepsilon} {\eta} \bar{v}_{i}(u^{i})+q_{i} (\bar v_{i}(u^{i})-\eta^{\prime} L_{i}) \\ &=&(1-\bar{\varepsilon}) q_{i} \cdot\bar v_{i}(u^{i})+ \frac{4 \varepsilon}{\eta} \bar{v}_{i}(u^{i})\\ &\le&\left( 1 +\frac{1}{(1-\bar{\varepsilon})q_{i} } \frac{4 \varepsilon} {\eta}\right)\mathbb{E}[U^{\prime}_{i}(\bar{v})] \\ &\le& (1 + \bar{\varepsilon}) \mathbb{E}[U^{\prime}_{i}(\bar{v})], \end{array} $$

    where the last inequality follows from the definition of ε (see Case 1).

We finally argue about the approximation ratio. Note that for i∉T(v), at least one of the inequalities (11) and (12) does not hold. Also, \(\widehat U_{i}(v) = v_{i}(x) \ge 0\) in this case since \(p_{i} = 0\), and hence \(v_{i}(u^{i})<\max \{\eta , \eta ^{\prime }/{\bar {\varepsilon }}\} L_{i} = \eta ^{\prime } L_{i}/\bar {\varepsilon } \). Since \(\mathcal {A}\) returns an allocation x with \((1 - \varepsilon)\)-social efficiency and \(q_{i} - q_{0} n\frac {\eta ^{\prime }}{\bar {\varepsilon }}\ge 0\), it follows that for any \(v\in \mathcal {V}\) (recall \(x=\mathcal {A}(v)\))

$$\begin{array}{@{}rcl@{}} \mathbb{E}[v(\widetilde x)]&=&q_{0} \sum\limits_{i\in T(v)}v_{i}(x)+ \sum\limits_{i\in[n]}q_{i} v_{i}(u^{i})\\ &=& q_{0}\sum\limits_{i\in[n]}v_{i}(x)- q_{0}\sum\limits_{i\notin T(v)}v_{i}(x) + \sum\limits_{i\in[n]} q_{i} v_{i}(u^{i})\\ &\ge& q_{0}v(x)- q_{0}\sum\limits_{i\notin T(v)}v_{i}(u^{i}) + \sum\limits_{i\in[n]} q_{i} v_{i}(u^{i})\\ &>& q_{0}v(x)- q_{0} \frac{\eta^{\prime}}{\bar{\varepsilon}}\sum\limits_{i\not\in T(v)}L_{i} + \sum\limits_{i\in[n]} q_{i} v_{i}(u^{i})\\ &=& q_{0}v(x)- q_{0} \frac{\eta^{\prime}}{\bar{\varepsilon}}\sum\limits_{i\not\in T(v)}\sum\limits_{j\neq i}v_{j}(u^{j}) + \sum\limits_{i\in[n]} q_{i} v_{i}(u^{i})\\ &\ge& q_{0}v(x)- q_{0} \frac{\eta^{\prime}}{\bar{\varepsilon}}n\sum\limits_{{j\in[n]}}v_{j}(u^{j}) + \sum\limits_{i\in[n]} q_{i} v_{i}(u^{i})\\ &=& q_{0} v(x)+\sum\limits_{i\in[n]}\left( q_{i} - q_{0} n\frac{\eta^{\prime}}{\bar{\varepsilon}}\right)v_{i}(u^{i})\\ &\ge& q_{0} (1 - \varepsilon) \cdot \underset{z \in \mathcal{Q}}{\max}~ v(z)\\ &\ge& (1-\varepsilon_{0}) (1 - \varepsilon) \cdot \underset{z \in \mathcal{Q}}{\max} ~v(z). \end{array} $$

□

3.2 Approximately Truthful-in-Expectation Integral Mechanisms

In this subsection, we derive a randomized mechanism which returns an integral allocation. Let ε > 0 be arbitrary. First run Algorithm 2 to obtain x and p(v). Then compute a convex decomposition of \(\frac {\alpha }{1 +4 \varepsilon }x\), namely \( \frac {\alpha }{1 +4\varepsilon } x = {\sum }_{j \in \mathcal {N}} {\lambda _{j}^{x}} x^{j}\) (we use the superscript x to distinguish the convex decompositions of different x). Finally, with probability \({\lambda _{j}^{x}}\) return the allocation \(x^{j}\) and charge the i-th player the price \(p_{i}(v) \frac {v_{i}(x^{j})}{v_{i}(x)}\) if \(v_{i}(x) > 0\), and zero otherwise. We now prove Theorem 2.

Proof of Theorem 2

Let M be the fractional randomized mechanism obtained in Theorem 6 and let \(M_{\mathcal{I}}\) denote the integral mechanism just described. Since M has no positive transfer, neither does \(M_{\mathcal{I}}\). M is individually rational with probability \(1 - \varepsilon_{0}\); therefore, for the allocation \(\bar {x}\) it returns on input \(\bar{v}\), we have \(\bar {v}_{i}(\bar {x}) - p_{i}(\bar {v}) \ge 0\) with probability \(1 - \varepsilon_{0}\). So

$$\bar{v}_{i}(x^{l}) - p_{i}(\bar{v}) \frac{\bar{v}_{i}(x^{l})}{\bar{v}_{i}(\bar{x})} = \left( \bar{v}_{i}(\bar{x}) - p_{i}(\bar{v})\right) \frac{\bar{v}_{i}(x^{l})}{\bar{v}_{i}(\bar{x})} \ge 0, $$

hence \(M_{\mathcal{I}}\) is individually rational with probability \(1 - \varepsilon_{0}\). Now we prove truthfulness. Let \(\mathbb {E}[U^{\prime \prime }_{i}(\bar {v})]\) be the expected utility of player i under \(M_{\mathcal{I}}\) when she reports her true valuation and let \(\mathbb {E}[U^{\prime \prime }_{i}(v)]\) be her expected utility when she reports \(v_{i}\). Then, by definition of \(\mathbb {E}[U^{\prime \prime }_{i}(\bar {v})]\), we have

$$\begin{array}{@{}rcl@{}} \mathbb{E}[U^{\prime\prime}_{i}(\bar{v})]&=& \mathbb{E}_{\bar{x} \sim M(\bar{v})}\left[\sum\limits_{l \in \mathcal{N}} \lambda^{\bar{x}}_{l} \left( \bar{v}_{i}(x^{l}) - p_{i}(\bar{v}) \frac{\bar{v}_{i}(x^{l})}{\bar{v}_{i}(\bar{x})}\right)\right]\\ &=&\mathbb{E}_{\bar{x} \sim M(\bar{v})}\left[ \bar{v}_{i}\left( \sum\limits_{l \in \mathcal{N}} \lambda^{\bar{x}}_{l} x^{l}\right) - p_{i}(\bar{v}) \frac{\bar{v}_{i}({\sum}_{l \in \mathcal{N}} \lambda^{\bar{x}}_{l} x^{l})}{\bar{v}_{i}(\bar{x})}\right]\\ &=& \mathbb{E}_{\bar{x} \sim M(\bar{v})}\left[ \frac{\alpha}{1 +4 \varepsilon} \bar{v}_{i}(\bar{x}) - \frac{\alpha}{1 +4 \varepsilon} p_{i}(\bar{v}) \right] \\ &=& \frac{\alpha}{1 +4 \varepsilon} \mathbb{E}_{\bar{x} \sim M(\bar{v})}[ \bar{v}_{i}(\bar{x}) - p_{i}(\bar{v}) ] \\ &=& \frac{\alpha}{1 +4 \varepsilon} \mathbb{E}[U^{\prime}_{i}(\bar{v})] \\ &\ge& (1 - \varepsilon_{0}) \frac{\alpha}{1 +4 \varepsilon}\mathbb{E}[U^{\prime}_{i}(v)] \\ &=& (1 - \varepsilon_{0}) \frac{\alpha}{1 +4 \varepsilon} \mathbb{E}_{x\sim M(v)}[\bar{v}_{i}(x) - p_{i}(v)]\\ &=& (1 - \varepsilon_{0}) \mathbb{E}_{x\sim M(v)}\left[\frac{\alpha}{1 +4 \varepsilon} \bar{v}_{i}(x) - p_{i}(v)\frac{\alpha}{1 +4 \varepsilon}\cdot \frac{v_{i}(x)}{v_{i}(x)} \right]\\ &=& (1 - \varepsilon_{0}) \mathbb{E}_{x \sim M(v)}\left[\sum\limits_{l \in \mathcal{N}} {\lambda^{x}_{l}} \left( \bar{v}_{i}(x^{l}) - p_{i}(v) \frac{v_{i}(x^{l})}{v_{i}(x)}\right)\right]\\ &=& (1 - \varepsilon_{0}) \mathbb{E}[U^{\prime\prime}_{i}(v)]. \end{array} $$

Finally, taking the expectation of the welfare of the integral allocation over the convex decomposition and over \(x \sim M(v)\) shows that \(M_{\mathcal{I}}\) is \(\frac {\alpha (1 - \varepsilon _{0})(1 - \varepsilon )}{1+4\varepsilon }\)-socially efficient:

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x \sim M(v)}\left[\sum\limits_{l \in \mathcal{N}} {\lambda^{x}_{l}}v(x^{l})\right] &=&\mathbb{E}_{x \sim M(v)}\left[v\left( \sum\limits_{l \in \mathcal{N}} {\lambda^{x}_{l}}x^{l}\right)\right] =\mathbb{E}_{x \sim M(v)}\left[v\left( \frac{\alpha}{1+4\varepsilon}x\right)\right]\\ &=&\frac{\alpha}{1+4\varepsilon}\mathbb{E}_{x \sim M(v)}[v(x)] \ge \frac{\alpha}{1+4\varepsilon}(1 - \varepsilon_{0})(1 - \varepsilon) \underset{z \in \mathcal{Q}}{\max}~ v(z). \end{array} $$

This completes the proof of Theorem 2. □