Packing under Convex Quadratic Constraints

We consider a general class of binary packing problems with a convex quadratic knapsack constraint. We prove that these problems are APX-hard and present constant-factor approximation algorithms based upon three different algorithmic techniques: (1) a rounding technique tailored to a convex relaxation in conjunction with a non-convex relaxation, whose approximation ratio equals the inverse golden ratio; (2) a greedy strategy; (3) a randomized rounding scheme leading to an approximation algorithm for the more general case with multiple convex quadratic constraints. We further show that a combination of the first two strategies can be used to build a monotone algorithm, leading to a strategyproof mechanism for a game-theoretic variant of the problem. Finally, we present a computational study of the empirical approximation ratios of the three algorithms on problem instances arising in the context of real-world gas transport networks.


Introduction
We consider packing problems with a convex quadratic knapsack constraint of the form

maximize pᵀx subject to xᵀW x ≤ c, x ∈ {0, 1}ⁿ, (P)

where W ∈ Q^{n×n}_{≥0} is a symmetric positive semi-definite (psd) matrix with non-negative entries, p ∈ Qⁿ_{≥0} is a non-negative profit vector, and c ∈ Q_{≥0} is a non-negative budget. Such convex and quadratically constrained packing problems are clearly NP-hard since they contain the classical NP-hard (linearly constrained) knapsack problem [14] as a special case when W is a diagonal matrix. In this paper, we therefore focus on the development of approximation algorithms. For some ρ ∈ [0, 1], an algorithm is a ρ-approximation algorithm if its runtime is polynomial in the input size and, for every instance, it computes a solution with objective value at least ρ times that of an optimal solution. The value ρ is then called the approximation ratio of the algorithm. We note that the assumption that W is psd is necessary in order to allow for sensible approximation. To see this, observe that when W is the adjacency matrix of an undirected graph and c = 0, (P) encodes the problem of finding an independent set of maximal weight, which is NP-hard to approximate within a factor better than n^{−(1−ε)} for any ε > 0, even in the unweighted case [10]. (We acknowledge funding through the DFG CRC/TRR 154, Subproject A007. arXiv:1912.00468v2 [math.OC], 17 Dec 2019.)
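To make the problem format concrete, here is a minimal brute-force sketch (our own illustration, not from the paper; practical only for tiny n) that enumerates all of {0, 1}ⁿ. With a diagonal W, the quadratic weight of a set equals the sum of the chosen diagonal entries, which recovers the classical 0-1 knapsack behavior mentioned above.

```python
from itertools import product

def solve_P_bruteforce(p, W, c):
    """Exhaustively solve max p^T x  s.t.  x^T W x <= c, x in {0,1}^n."""
    n = len(p)
    best_val, best_x = 0, [0] * n
    for x in product([0, 1], repeat=n):
        weight = sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        profit = sum(p[i] * x[i] for i in range(n))
        if weight <= c and profit > best_val:
            best_val, best_x = profit, list(x)
    return best_val, best_x

# Diagonal W: item weights 4, 3, 2 on the diagonal, budget 5.
p = [10, 7, 4]
W = [[4, 0, 0], [0, 3, 0], [0, 0, 2]]
val, x = solve_P_bruteforce(p, W, 5)
```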
The packing problems that we consider also have a natural interpretation in terms of mechanism design. Consider a situation where a set of n selfish agents demands a service, and the subsets of agents that can be served simultaneously are modeled by a convex quadratic packing constraint. Each agent j has private information p_j about its willingness to pay for receiving the service. In this context, a (direct revelation) mechanism takes as input the matrix W and the budget c. It then elicits the private value p_j from agent j. Each agent j may report a value p̃_j instead of their true value p_j if this is to their benefit. The mechanism then computes a solution x ∈ {0, 1}ⁿ to (P) as well as a payment vector g ∈ Qⁿ_{≥0}. A mechanism is strategyproof if no agent has an incentive to misreport p_j, no matter what the other agents report.
Before we present our results on approximation ratios and mechanisms for non-negative, convex, and quadratically constrained packing problems, we give two real-world examples that fall into this category.
Example 1 (Welfare maximization in gas supply networks). Consider a gas pipeline modeled by a directed graph G = (V, E) with different entry and exit nodes. There is a set of n transportation requests (s_j, t_j, q_j, p_j), j ∈ [n] := {1, …, n}, each specifying an entry node s_j ∈ V, an exit node t_j ∈ V, the amount of gas to be transported q_j ∈ Q_{≥0}, and an economic value p_j ∈ Q_{≥0}. One model for gas flows in pipe networks is given by the Weymouth equations [28] of the form

β_e q_e |q_e| = π_u − π_v for all e = (u, v) ∈ E.
Here, the parameter β_e ∈ Q_{>0} is a pipe-specific value that depends on physical properties of the pipe segment modeled by the edge, such as length, diameter, and roughness. Positive flow values q_e > 0 denote flow from u to v, while a negative q_e indicates flow in the opposite direction. The value π_u denotes the square of the pressure at node u ∈ V. In real-life gas networks, there is typically a bound c ∈ Q_{≥0} on the maximal difference of the squared pressures in the network. For the operation of gas networks, it is a natural problem to find a welfare-maximal subset of transportation requests that can be satisfied simultaneously while respecting the pressure constraint.
To illustrate this problem, we consider the particular case in which the network has a path topology similar to the one depicted in Figure 1. We assume that for each request the entry node is left of the exit node. Thus, the pressure in the pipe is decreasing from left to right. For j ∈ [n], let E_j ⊆ E denote the set of edges on the unique (s_j, t_j)-path in G. Indexing the vertices v_0, …, v_k and edges e_1, …, e_k from left to right, the maximal squared pressure difference in the pipe is given by

π_{v_0} − π_{v_k} = Σ_{e∈E} β_e ( Σ_{j∈[n]: e∈E_j} x_j q_j )²,

where x_j ∈ {0, 1} indicates whether transportation request j ∈ [n] is being served. For the matrix W = (w_ij)_{i,j∈[n]} defined by w_ij = Σ_{e∈E_i∩E_j} β_e q_i q_j, the pressure constraint can be formulated as xᵀW x ≤ c. To see that the matrix W is positive semi-definite, we write W = Σ_{e∈E} β_e q^e (q^e)ᵀ, where q^e ∈ Qⁿ_{≥0} is defined by q^e_i = q_i if e ∈ E_i, and q^e_i = 0 otherwise. Gas networks are particularly interesting from a mechanism design perspective, since several countries employ or plan to employ auctions to allocate gas network capacities [21], but theoretical and experimental work uses only linear flow models [17,24], thus ignoring the physics of the gas flow.
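To make the construction concrete, the following sketch (a minimal illustration with made-up numbers; `build_W`, the three-edge instance, and all helper names are ours, not from the paper) assembles W from the rank-one terms β_e q^e (q^e)ᵀ and checks that xᵀW x coincides with the summed squared pressure drops:

```python
def build_W(beta, paths, q):
    """Assemble w_ij = sum over shared edges e of beta_e * q_i * q_j by
    summing the rank-one matrices beta_e * q^e (q^e)^T edge by edge."""
    n, m = len(q), len(beta)
    W = [[0.0] * n for _ in range(n)]
    for e in range(m):
        users = [j for j in range(n) if e in paths[j]]  # requests with e in E_j
        for i in users:
            for j in users:
                W[i][j] += beta[e] * q[i] * q[j]
    return W

beta = [1.0, 2.0, 1.0]          # three pipe segments, left to right
paths = [{0, 1}, {1, 2}, {2}]   # edge sets E_j of three requests
q = [3.0, 1.0, 2.0]
W = build_W(beta, paths, q)

def pressure_drop(x):
    """Direct evaluation: sum over edges of beta_e * (total flow on e)^2."""
    return sum(beta[e] * sum(q[j] * x[j] for j in range(len(q)) if e in paths[j]) ** 2
               for e in range(len(beta)))

x = [1, 1, 0]
quad = sum(W[i][j] * x[i] * x[j] for i in range(3) for j in range(3))
```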
Example 2 (Processor speed scaling). Consider a mobile device with battery capacity c and k compute cores. Further, there is a set of n tasks (q_j, p_j), each specifying a load q_j ∈ Q^k_{≥0} for the k cores and a profit p_j. The computations start at time 0 and all computations have to be finished at time 1. In order to adapt to varying workloads, the compute cores can run at different speeds. In the speed scaling literature, it is a common assumption that the energy consumption of core i when running at speed s is equal to β_i s², where β_i ∈ Q_{>0} is a core-specific parameter [2,13,29]. The goal is to select a profit-maximal subset of tasks that can be scheduled in the available time with the available battery capacity. Given a subset of tasks, it is without loss of generality to assume that each core runs at the minimal speed such that it finishes at time 1, i.e., every core i runs at speed Σ_{j∈[n]} x_j q_j^i, so that the total energy consumption is Σ_{i∈[k]} β_i (Σ_{j∈[n]} x_j q_j^i)². The energy constraint can thus be formulated as a convex quadratic constraint.
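As a quick illustration of the energy constraint (again with our own toy numbers, not taken from the paper):

```python
def total_energy(beta, loads, x):
    """Energy when core i runs at the minimal constant speed that finishes
    the selected load by time 1: speed_i = sum_j x_j * q_j^i, and the
    energy of core i is beta_i * speed_i^2."""
    k = len(beta)
    return sum(beta[i] * sum(qj[i] * xj for qj, xj in zip(loads, x)) ** 2
               for i in range(k))

beta = [1.0, 2.0]                               # two cores
loads = [[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]]    # load vectors q_j of three tasks
energy = total_energy(beta, loads, [1, 1, 0])   # serve tasks 1 and 2 only
```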
Mechanism design problems for processor speed scaling are interesting when the tasks are controlled by selfish agents and access to computation on the energy-constrained device is determined via an auction.

Our Results
In Section 3 we derive a φ-approximation algorithm for packing problems with convex quadratic constraints, where φ = (√5 − 1)/2 ≈ 0.618 is the inverse golden ratio. The algorithm first solves a convex relaxation and scales the solution by φ, which turns it into a feasible solution to a second, non-convex relaxation. The latter relaxation has the property that any solution can be transformed into a solution with at most one fractional component without decreasing the objective value. In the end, the algorithm returns the integral part of the transformed solution. Combining this procedure with a partial enumeration scheme yields a φ-approximation; see Theorem 1. In Section 4 we prove that the greedy algorithm, when combined with partial enumeration, is a constant-factor approximation algorithm with an approximation ratio between 1 − √3/e ≈ 0.363 and φ; see Theorem 2 and Theorem 3. In Section 5, we show that a combination of the results from the previous sections allows us to derive a strategyproof mechanism with a constant approximation ratio. In Section 6 we derive a randomized constant-factor approximation algorithm for the more general problem with a constant number r of convex quadratic packing constraints. The algorithm solves a convex relaxation, scales the solution, and performs randomized rounding based on that scaled solution. Combining this algorithm with partial enumeration yields a constant-factor approximation; see Theorem 5. In Section 7 we show that packing problems with convex quadratic constraints of type (P) are APX-hard; see Theorem 6. Finally, in Section 8, we apply the three algorithms to several instances of the problem type described in Example 1, based on real-world data from the GasLib library [26].

Related Work
When W is a non-negative diagonal matrix, the quadratic constraint in (P) becomes linear and the problem is then equivalent to the 0-1 knapsack problem, which admits a fully polynomial-time approximation scheme (FPTAS) [12]. Another interesting special case is when W is completely positive, i.e., it can be written as W = UUᵀ for some matrix U ∈ Q^{n×k}_{≥0} with non-negative entries. The minimal k for which W can be expressed in this way is called the cp-rank of W; see [3] for an overview of completely positive matrices. The quadratic constraint in (P) can then be expressed as ‖Uᵀx‖₂ ≤ √c. For the case that U ∈ Q^{n×2}_{≥0}, this problem is known as the 2-weighted knapsack problem, for which Woeginger [30] showed that it does not admit an FPTAS, unless P = NP. Chau et al. [5] settled the complexity of this problem by showing that it admits a polynomial-time approximation scheme (PTAS). Elbassioni et al. [6] generalized this result to matrices with constant cp-rank.
Exchanging constraints and objective in (P) leads to knapsack problems with a quadratic objective function and a linear constraint, first studied by Gallo [8]. These problems have a natural graph-theoretic interpretation where nodes and edges have profits, the nodes have weights, and the task is to choose a subset of nodes so as to maximize the total profit of the induced subgraph. Rader and Woeginger [23] give an FPTAS for the case that the graph is edge series-parallel. Pferschy and Schauer [22] generalize this result to graphs of bounded treewidth. They also give a PTAS for graphs excluding a fixed minor, a class which includes planar graphs.
Mechanism design problems with a knapsack constraint are contained as a special case when W is a diagonal matrix. For this special case, Mu'alem and Nisan [18] give a mechanism that is strategyproof and yields a 1/2-approximation. Briest et al. [4] give a general framework that allows one to construct a mechanism that is an FPTAS for the objective function. Aggarwal and Hartline [1] study knapsack auctions with the objective of maximizing the sum of the payments to the mechanism.

Preliminaries
For ease of exposition, we assume that all matrices and vectors are integral. Let [n] := {1, …, n} and let W = (w_ij)_{i,j∈[n]} ∈ N^{n×n} be a symmetric psd matrix. Furthermore, let p ∈ Nⁿ be a profit vector and let c ∈ N be a budget. We consider problems of the form (P), i.e., max {pᵀx : xᵀW x ≤ c, x ∈ {0, 1}ⁿ}. Throughout the paper, we denote the characteristic vector of a subset S ⊆ [n] by χ_S ∈ {0, 1}ⁿ, i.e., (χ_S)_i = 1 if i ∈ S and (χ_S)_i = 0, otherwise.
We first state the intuitive result that after fixing x_i = 1 for i ∈ N_1 ⊆ [n] and fixing x_i = 0 for i ∈ N_0 (with N_0 ∩ N_1 = ∅), we again obtain a packing problem with a convex quadratic packing constraint.
Lemma 1. Let W ∈ N^{n×n} be symmetric psd, p ∈ Nⁿ, and c ∈ N. Further, let N_0, N_1 ∈ 2^{[n]} with N_0 ∩ N_1 = ∅ and N_0 ∪ N_1 ⊊ [n] be arbitrary. Then, there exist ñ ∈ N, a symmetric psd matrix W̃ ∈ N^{ñ×ñ}, p̃ ∈ N^ñ, and c̃ ∈ N such that solving (P) under the additional constraints x_i = 1 for all i ∈ N_1 and x_i = 0 for all i ∈ N_0 is equivalent to solving max {p̃ᵀx̃ : x̃ᵀW̃ x̃ ≤ c̃, x̃ ∈ {0, 1}^ñ}, up to the additive constant Σ_{i∈N_1} p_i in the objective.

Proof. Set ñ := n − |N_0| − |N_1| and identify [ñ] with [n] \ (N_0 ∪ N_1). The matrix W̃ is obtained from W by taking the principal submatrix indexed by [ñ] and adding the diagonal matrix with the non-negative entries 2(W χ_{N_1})_i, i ∈ [ñ]; as the sum of a principal submatrix of a psd matrix and a non-negative diagonal matrix, W̃ is again positive semi-definite. Let c̃ := c − χ_{N_1}ᵀ W χ_{N_1}. With a slight abuse of notation, for a set S ⊆ [ñ], let χ̃_S denote its characteristic vector in {0, 1}^ñ and χ_S its characteristic vector in {0, 1}ⁿ. We then obtain for all S ⊆ [ñ] the equality

χ̃_Sᵀ W̃ χ̃_S = χ_Sᵀ W χ_S + 2 χ_Sᵀ W χ_{N_1} = χ_{S∪N_1}ᵀ W χ_{S∪N_1} − χ_{N_1}ᵀ W χ_{N_1}.

Thus, we have χ̃_Sᵀ W̃ χ̃_S ≤ c̃ if and only if χ_{S∪N_1}ᵀ W χ_{S∪N_1} ≤ c. Defining p̃ ∈ N^ñ with p̃_i = p_i for all i ∈ [ñ] then establishes the claimed result.
By Lemma 1, the following assumptions are without loss of generality.
Lemma 2. It is without loss of generality to assume that 0 < w_ii ≤ c and p_i > 0 for all i ∈ [n].

Proof. If w_ii > c for some i ∈ [n], then x_i = 0 in every feasible solution x. If w_ii = 0, then the positive semi-definiteness of W implies w_ij = w_ji = 0 for every j ∈ [n]. Hence, the value of x_i does not influence the value of xᵀW x, and it is without loss of generality to assume that x_i = 1. Furthermore, if p_i = 0, then the value of x_i does not influence the value of pᵀx, and it is without loss of generality to assume that x_i = 0. In all cases, Lemma 1 yields the claimed result.
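The reductions of Lemma 2 translate into a simple preprocessing routine. The following sketch (our own illustration, with hypothetical helper names) only classifies the items; the budget and profit updates of Lemma 1 are left to the caller:

```python
def preprocess(p, W, c):
    """Classify items according to Lemma 2: items with w_ii > c can never be
    packed, items with p_i = 0 may as well be left out, and items with
    w_ii = 0 (whole row zero by psd-ness) can always be packed."""
    n = len(p)
    forced_out = [i for i in range(n) if W[i][i] > c or p[i] == 0]
    forced_in = [i for i in range(n) if W[i][i] == 0 and p[i] > 0]
    kept = [i for i in range(n) if i not in forced_out and i not in forced_in]
    return kept, forced_in, forced_out

p = [5, 0, 3, 2]
W = [[10, 0, 0, 0], [0, 2, 0, 0], [0, 0, 0, 0], [0, 0, 0, 3]]
kept, forced_in, forced_out = preprocess(p, W, 6)
```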

A Golden Ratio Approximation Algorithm
In this section, we derive a φ-approximation algorithm for packing problems with convex quadratic constraints of type (P), where φ = (√5 − 1)/2 ≈ 0.618 is the inverse golden ratio. To this end, we first solve a convex relaxation of the problem. We then use the resulting solution to compute a feasible solution to another, non-convex relaxation of the problem. The second relaxation has the property that any solution can be transformed so that it has at most one fractional value, and the transformation does not decrease the objective value. Together with a partial enumeration scheme in the spirit of Sahni [25], this yields a φ-approximation.
Denote by d ∈ Nⁿ the diagonal of W ∈ N^{n×n} and let D := diag(d) ∈ N^{n×n} be the corresponding diagonal matrix. For a vector x ∈ {0, 1}ⁿ we have x_i² = x_i for all i ∈ [n] and, since all entries of W are non-negative, we obtain dᵀx = xᵀD x ≤ xᵀW x. We arrive at the following relaxation of (P):

maximize pᵀx subject to xᵀW x ≤ c, dᵀx ≤ c, x ∈ [0, 1]ⁿ. (R_1)

The following lemma shows that we can compute an exact optimal solution to (R_1) in polynomial time.
Lemma 3. The relaxation (R 1 ) can be solved exactly in polynomial time.
Proof. For every x ∈ {0, 1}ⁿ, we have pᵀx ∈ P := {0, …, Σ_{i∈[n]} p_i}. For fixed q ∈ P, consider the mathematical program

(D_q): minimize xᵀW x subject to pᵀx ≥ q, dᵀx ≤ c, x ∈ [0, 1]ⁿ,

with optimal value c(q). Since (D_q) is quadratic and convex with linear constraints, it can be solved exactly in polynomial time; see Kozlov et al. [16]. If c(q) > c, we conclude that the maximal value of (R_1) is strictly smaller than q. If c(q) ≤ c, the corresponding solution x solves (R_1) with an objective value of q. With binary search over P, we can compute the maximal value q* ∈ P such that (D_q) has an optimal value of at most c. The thus computed value q* is the maximal objective of (R_1), and the corresponding optimal solution x of (D_{q*}) is an optimal solution of (R_1).
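The binary search of Lemma 3 can be sketched as follows; `min_weight` is a hypothetical stand-in for an exact solver of the convex program (D_q) (not part of the paper), injected here as a parameter so that the search logic is testable on its own:

```python
def solve_R1(p, c, min_weight):
    """Binary search of Lemma 3 over q in {0, ..., sum(p)}: find the largest
    integral profit target q whose minimal weight c(q) stays within budget.
    min_weight(q) stands in for an exact convex-QP solver for (D_q)."""
    lo, hi = 0, sum(p)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if min_weight(mid) <= c:
            lo = mid          # profit mid is attainable within the budget
        else:
            hi = mid - 1
    return lo

# Toy oracle: pretend the minimal weight for profit target q is q^2.
best_q = solve_R1([4, 4, 4], 30, lambda q: q * q)   # largest q with q^2 <= 30
```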
We proceed to propose a second relaxation of (P). To this end, note that for every x ∈ {0, 1}ⁿ we have xᵀW x = xᵀ(W − D)x + dᵀx. Relaxing the integrality condition yields the following relaxation of (P):

maximize pᵀx subject to xᵀ(W − D)x + dᵀx ≤ c, x ∈ [0, 1]ⁿ. (R_2)

Note that since the trace of W − D is zero, W − D has a negative eigenvalue unless all eigenvalues are zero. Hence, W − D is not positive semi-definite unless W is a diagonal matrix. Therefore, the relaxation (R_2) is in general not convex.
We proceed to show that (R_2) always has an optimal solution for which at most one variable is fractional. For x ∈ [0, 1]ⁿ, denote by N_f(x) := {i ∈ [n] : x_i ∈ (0, 1)} the set of fractional components of x.

Lemma 4. For any feasible solution x of (R_2), one can construct a feasible solution x̃ with |N_f(x̃)| ≤ 1 and pᵀx̃ ≥ pᵀx in linear time.
Proof. Let x be a feasible solution of (R_2) and write v(x) := xᵀ(W − D)x + dᵀx for the left-hand side of its constraint. Assume |N_f(x)| ≥ 2, and consider i, j ∈ N_f(x) with i ≠ j; in particular, x_i, x_j ∈ (0, 1). We proceed to construct a feasible solution x̃ with |N_f(x̃)| < |N_f(x)| and pᵀx̃ ≥ pᵀx. For k ∈ [n], let ν_k(x) := 2 Σ_{b≠k} w_kb x_b + d_k and r_k(x) := p_k / ν_k(x). By Lemma 2 it is without loss of generality to assume that w_kk > 0 and thus ν_k(x) > 0. Note that ν_k(x) does not depend on x_k and, therefore, for all x ∈ Rⁿ and t ∈ R, we have v(x + tχ_k) = v(x) + t ν_k(x), where χ_k ∈ {0, 1}ⁿ denotes the k-th unit vector. Without loss of generality, assume that r_i(x) ≥ r_j(x), and define ε := min{x_j, ε̄} and δ := ε ν_j(x)/ν_i(x), where ε̄ is the value of ε for which δ = 1 − x_i. Consider the vector x̃ = x − εχ_j + δχ_i. By the definition of ε, we have x̃_j = x_j − ε ≥ 0. We further obtain v(x̃) ≤ v(x) − ε ν_j(x) + δ ν_i(x) = v(x). Note that x̃_j = 0 if ε = x_j, and x̃_i = 1 if ε = ε̄, so that at least one of the inequalities x̃_j ≥ 0 and x̃_i ≤ 1 is tight. We conclude that x̃ ∈ [0, 1]ⁿ and v(x̃) ≤ c.

Algorithm 1: Golden ratio algorithm

Thus, x̃ is a feasible solution of (R_2). Moreover, we have pᵀx̃ − pᵀx = δ p_i − ε p_j = ε ν_j(x) (r_i(x) − r_j(x)) ≥ 0.

Applying this construction iteratively at most |N_f(x)| − 1 times yields a feasible solution with at most one fractional variable and with at least the same objective value, which proves the claim.
Remark 1. The algorithm in the proof of Lemma 4 can be improved by choosing δ such that v(x̃) = v(x) holds with equality. In this way, we increase the objective value at least as much as in the proof of Lemma 4 while still ensuring that x̃ is feasible for (R_2).

We proceed to devise a φ-approximation algorithm. The algorithm iterates over all sets H ⊆ [n] with |H| ≤ 3. For each set H, it computes an optimal solution y^H to the convex relaxation (R_1) with the additional constraints x_i = 1 for all i ∈ H and x_i = 0 for all items i whose profit exceeds min_{h∈H} p_h. Then, we scale down y^H by a factor of φ and show that φy^H is a feasible solution to the non-convex relaxation (R_2). By Lemma 4, we can transform this solution into another solution z^H with at most one fractional variable. The integral part of z^H is our candidate solution for the starting set H. In the end, we return the best candidate computed in this way over all possible sets H; see Algorithm 1.

Theorem 1. Algorithm 1 is a φ-approximation algorithm for (P).
Proof. Fix an optimal solution x* of (P) and define S* := {i ∈ [n] : x*_i = 1}. Since the algorithm iterates over all candidate solutions with at most three items, it is without loss of generality for the following arguments to assume that |S*| ≥ 4. Let H* ⊂ S* with |H*| = 3 be chosen such that p_i ≤ min_{h∈H*} p_h for all i ∈ S* \ H*, and consider the run of the algorithm when starting with H = H*. Consider the packing problem where, as additional constraints, we have x_i = 1 for all i ∈ H* and x_i = 0 for all items i with p_i > min_{h∈H*} p_h. By Lemma 1, this packing problem can be written as (P̃): max {p̃ᵀx̃ : x̃ᵀW̃ x̃ ≤ c̃, x̃ ∈ {0, 1}^ñ}, where W̃ is a symmetric and positive semi-definite matrix. We then have pᵀx* = Σ_{h∈H*} p_h + p̃ᵀx̃* for an optimal solution x̃* of (P̃).
Let y be an optimal solution to the convex relaxation (R_1) of (P̃). Since (R_1) is a relaxation of (P̃), we have p̃ᵀy ≥ p̃ᵀx̃*. We proceed to show that φy is feasible for the non-convex relaxation (R_2) of (P̃). To this end, we calculate

(φy)ᵀ(W̃ − D̃)(φy) + d̃ᵀ(φy) ≤ φ² yᵀW̃ y + φ d̃ᵀy ≤ φ²c̃ + φc̃ = (φ² + φ) c̃ = c̃,

where the first inequality holds since yᵀD̃ y ≥ 0, the second uses that y is feasible for the convex relaxation and, thus, yᵀW̃ y ≤ c̃ and d̃ᵀy ≤ c̃, and the last equality uses that the inverse golden ratio satisfies φ² + φ = 1. By Lemma 4, we can transform φy into a solution z such that p̃ᵀz ≥ φ p̃ᵀy and z has at most one fractional variable z_ℓ with ℓ ∈ [ñ]. Let S consist of H* together with the items i ∈ [ñ] with z_i = 1, and consider the solution χ_S. Since rounding the fractional variable z_ℓ down to zero does not increase the left-hand side of the constraint in (R_2), and the constraints of (R_2) and (P̃) coincide on integral vectors, χ_S is feasible for (P). Moreover, we obtain

pᵀχ_S ≥ Σ_{h∈H*} p_h − p̃_ℓ + p̃ᵀz ≥ Σ_{h∈H*} p_h − p̃_ℓ + φ p̃ᵀx̃* ≥ φ (Σ_{h∈H*} p_h + p̃ᵀx̃*) = φ pᵀx*,

where the last inequality uses p̃_ℓ ≤ min_{h∈H*} p_h ≤ (1/3) Σ_{h∈H*} p_h and 1/3 ≤ 1 − φ. As a result of Theorem 1, we can derive an upper bound on the optimal value of (R_1). This will turn out to be useful when constructing a monotone greedy algorithm in the next section.
Proof. Since y* is feasible for (R_1), we have

The Greedy Algorithm
In this section we analyze the greedy algorithm and show that, when combined with a partial enumeration scheme in the spirit of Sahni [25], it is at least a (1 − √3/e)-approximation for packing problems with quadratic constraints of type (P). We further show that its approximation ratio can be bounded from above by the inverse golden ratio φ. Even though this approximation ratio is thus not better than the one guaranteed by the golden ratio algorithm (Theorem 1), the greedy algorithm is worth analyzing for several reasons. Firstly, it is simple to understand as well as to implement, and it turns out to have a much better running time in practice than the golden ratio algorithm; see the computational results in Section 8. Secondly, the greedy algorithm serves as a main building block for devising a strategyproof mechanism with a constant welfare guarantee; see Section 5.
For a set S ⊆ [n], we write w(S) := χ_Sᵀ W χ_S. The core idea of the greedy algorithm is as follows. Assume that we have an initial solution S ⊂ [n]. Amongst all remaining items in [n] \ S, we pick an item i that maximizes the ratio between profit gain and weight gain, i.e.,

i ∈ argmax_{j∈[n]\S} p_j / (w(S ∪ {j}) − w(S)).

If adding i to the solution set would make it infeasible, i.e., w(S ∪ {i}) > c, then we delete i from [n]. Otherwise, we add i to S. We repeat this process until [n] \ S is empty.
It is known from the knapsack problem that, when the greedy algorithm as described above is started with the empty set as initial set, the produced solution can be arbitrarily bad compared to an optimal solution. However, the greedy algorithm can be turned into a constant-factor approximation by using partial enumeration: for all feasible subsets U ⊆ [n] with |U| ≤ 2, we run the greedy algorithm starting with U as initial set. In the end, we return the best solution set found in this process; see Algorithm 2.
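A minimal self-contained sketch of Algorithm 2 (our own rendering, not the paper's pseudocode; it assumes the preprocessing of Lemma 2, in particular w_ii > 0, so that the ratio below is always well defined):

```python
from itertools import combinations

def weight(S, W):
    """w(S) = chi_S^T W chi_S."""
    return sum(W[i][j] for i in S for j in S)

def greedy_from(U, p, W, c):
    """One greedy pass: repeatedly pick the remaining item with the best
    ratio of profit to weight increase; discard it if it would exceed c."""
    S, remaining = set(U), set(range(len(p))) - set(U)
    while remaining:
        i = max(remaining,
                key=lambda j: p[j] / (weight(S | {j}, W) - weight(S, W)))
        remaining.discard(i)
        if weight(S | {i}, W) <= c:
            S.add(i)
    return S

def greedy_partial_enumeration(p, W, c):
    """Algorithm 2: run the greedy from every feasible start set U, |U| <= 2,
    and keep the most profitable solution found."""
    n = len(p)
    starts = [()] + [u for r in (1, 2) for u in combinations(range(n), r)]
    best = set()
    for U in starts:
        if weight(set(U), W) <= c:
            S = greedy_from(U, p, W, c)
            if sum(p[i] for i in S) > sum(p[i] for i in best):
                best = S
    return best

p = [10, 7, 4]
W = [[4, 0, 0], [0, 3, 0], [0, 0, 2]]
best = greedy_partial_enumeration(p, W, 5)
```

On this diagonal (knapsack) instance, the plain greedy from the empty set stops at the single item of profit 10, while the enumeration over start sets recovers the optimum {2, 3} of profit 11.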
Algorithm 2: Greedy algorithm with partial enumeration

The analysis of the algorithm follows a similar approach as the analysis of Sviridenko [27] for the greedy algorithm for maximizing a submodular function under a linear knapsack constraint. The non-linearity of the constraint in our case makes the analysis more complicated, though. In order to prove the approximation ratio of the greedy algorithm, we need the following two technical lemmas.
Lemma 5. Let m ∈ N and consider the sequence (θ_t)_{t∈N} defined by the following recursive formula.

Proof. Consider the initial value problem whose right-hand side is Lipschitz-continuous in s; by the Picard–Lindelöf theorem, this problem has a unique solution ψ. Since its first derivative is monotonically decreasing, it follows that ψ is concave. Define z_t := Σ_{i=1}^t θ_i for t ∈ {0, …, m}. We claim that inequality (2) holds for every t ∈ {0, …, m}. Note that (2) implies the result. To finish the proof, we prove (2) by induction. We have z_0 = 0 = ψ(0). Now assume that (2) holds for some arbitrary but fixed t ∈ {0, …, m − 1}. By the recursive definition of (θ_t)_{t∈N} and the concavity of ψ, it then follows that (2) also holds for t + 1.

Proof. We first show the statement for sequences 0 = w_0 < w_1 < ⋯ < w_m with the additional property that w_i − w_{i−1} = 1 for all i ∈ [m]. For this case, it suffices to show that the optimal value of the following optimization problem (3) is at least the claimed bound. We claim that every optimal solution to (3) satisfies all inequalities with equality. For a contradiction, fix an optimal solution θ*_1, …, θ*_m and suppose there is s ∈ {0, …, m − 1} whose corresponding inequality is not tight. Choosing the minimal s with this property, we have θ*_{s+1} > 0. Choose δ > 0 accordingly, and consider the modified solution θ̃_1, …, θ̃_m. We first check that the solution θ̃_1, …, θ̃_m is feasible. For the inequalities for t = 0, …, s − 1, there is nothing to show since the involved variables are not altered. For t = s, the inequality is satisfied by the choice of δ. For t > s, feasibility follows since θ*_1, …, θ*_m is feasible. Finally, we note that the modified solution attains a strictly larger objective value, contradicting the optimality of θ*_1, …, θ*_m. We conclude that every optimal solution of (3) satisfies all inequalities with equality. The result then follows from Lemma 5.

It is left to show that the statement holds for arbitrary finite sequences 0 = w_0 < w_1 < ⋯ < w_m. Fix such a sequence, let m̃ := w_m, and let θ̃_1, …, θ̃_m̃ be the sequence consisting of w_1 − w_0 copies of θ_1, then w_2 − w_1 copies of θ_2, and so on. Applying the unit-increment case to this sequence yields the result.
We can now prove the approximation ratio of the greedy algorithm.
Theorem 2. The greedy algorithm with partial enumeration (Algorithm 2) is an approximation algorithm with approximation ratio 1 − √3/e for (P).
Proof. Let x* be an optimal solution of (P) and set S* := {i ∈ [n] : x*_i = 1}. Since the algorithm enumerates all solutions with at most two items, it is without loss of generality to assume that |S*| ≥ 3. Consider the run of the greedy algorithm with U = {i*_1, i*_2}. Without loss of generality, we assume that i*_1 = n − 1 and i*_2 = n. Set S_0 := U, and for t = 1, 2, …, denote by S_t and i_t the values of S and i after the t-th pass of the while loop.
By Lemma 1, we can treat the problem after fixing x_{i*_1} = x_{i*_2} = 1 as a new problem of the same form with matrix W̃ ∈ N^{(n−2)×(n−2)}, profit vector p̃, and budget c̃. In the following, for a set S ⊆ [n] \ U we write w̃(S) := χ_Sᵀ W̃ χ_S. Note that w̃ is supermodular, i.e., for any two sets S, S' ⊆ [n] \ U we have

Σ_{i∈S'\S} (w̃(S ∪ {i}) − w̃(S)) ≤ w̃(S ∪ S') − w̃(S).

By Lemma 2, we can assume without loss of generality that w̃(S ∪ {i}) − w̃(S) > 0.

Let t* be the first step of the greedy algorithm in which i_{t*} ∈ S* but the algorithm does not add i_{t*} to its solution set. It is without loss of generality to assume that in all previous iterations t ∈ {1, …, t* − 1} we had S_t = S_{t−1} ∪ {i_t}, as otherwise item i_t would be contained neither in the optimal solution nor in the solution computed by the greedy algorithm; thus, removing it from the instance would not change the analysis. Since i_{t*} is not included in the solution, we have w̃(S_{t*−1} ∪ {i_{t*}} \ U) > c̃. In the following, we write S_{t*} := S_{t*−1} ∪ {i_{t*}}, S̃* := S* \ U, and, for t ∈ {0, …, t*}, S̃_t := S_t \ U.

For all t ∈ {0, …, t* − 1}, we obtain a bound on the profit gain in step t + 1, where we use the supermodularity of w̃. By the Cauchy–Schwarz inequality and Lemma 6, this leads to a lower bound on the total profit collected by the greedy algorithm. Since the greedy algorithm with starting solution U obtains a profit of at least Σ_{i∈S_{t*−1}} p_i, this implies the claimed result.
We proceed to show that the approximation ratio of the greedy algorithm can be bounded from above by the inverse golden ratio.

Theorem 3. The approximation ratio of the greedy algorithm with partial enumeration is at most φ = (√5 − 1)/2, even if we allow partial enumeration over an arbitrary but fixed number of items.
Proof. Consider the following instance. Let m, ℓ, k ∈ N with ℓ < k and denote by χ_i the i-th unit vector in R^m. Let there be two types of items: m items of type 1, each with profit p^(1) = 1 and a corresponding weight vector, and mℓ items of type 2, each with profit p^(2) = (1 + 2ℓ)/(k² + 2kℓ) and a corresponding weight vector; see Figure 3 for an illustration.
We wish to maximize the total profit subject to the weight constraint w(S_1, S_2) ≤ mk². Setting Y to be the matrix whose columns are the weight vectors of all items, this optimization problem can be reformulated as in (P) with weight matrix W = YᵀY, which is clearly non-negative and positive semi-definite.
Hence, the greedy algorithm will always include a type 2 item j ∈ T_i before the corresponding type 1 item i in its solution.

Assume that for a partial solution S = (S_1, S_2) we have i ∈ S_1, i' ∉ S_1, and j ∉ S_2 for some type 1 items i, i' ∈ [m] and a type 2 item j ∈ T_{i'}. Since |S_2 ∩ T_{i'}| ≤ ℓ, we obtain a corresponding bound on the greedy ratios.

Consequently, the greedy algorithm will always add type 1 item i' before type 2 item j ∈ T_{i'} to its solution, given that type 1 item i is already included. Thus, the greedy algorithm starts with some initial solution S⁰ = (S⁰_1, S⁰_2). Afterwards, it includes all type 2 items in the complement of ∪_{i∈S⁰_1} T_i (Step 1). Finally, it adds type 1 items until the capacity bound of mk² is reached (Step 2). Let s := |S⁰_1|. The weight of the partial solution after Step 1 is given by sk² + (m − s)ℓ². Adding any type 1 item in Step 2 increases the weight of the solution by k² + 2kℓ. Hence, in Step 2, r := (mk² − sk² − (m − s)ℓ²)/(k² + 2kℓ) type 1 items are added until the capacity is reached. (It is without loss of generality to assume that r ∈ Z, since otherwise, after adding ⌊r⌋ type 1 items, the remaining capacity would be filled with type 2 items and the resulting approximation ratio would be even lower.) Thus, the profit of the solution Ŝ produced by the greedy algorithm is given by p(Ŝ) = (s + r)p^(1) + (m − s)ℓ p^(2), where q := ℓ/k. On the other hand, consider the solution S* = (S*_1, S*_2) with S*_1 = [m] and S*_2 = ∅. It fulfills p(S*) = m and w(S*) = mk². Thus, we obtain a ratio ρ(q) = p(Ŝ)/p(S*). Under the constraint q ∈ (0, 1), the ratio ρ(q) attains its minimum at q = φ with value ρ(φ) = φ, where φ = (√5 − 1)/2 is the inverse golden ratio.

Monotone Algorithms
To illustrate the need for monotone algorithms, reconsider the situation described in Example 1 with a set of n selfish agents requesting permission to send gas through a pipeline. Each agent j has a private value p_j expressing the monetary gain from being allowed to send the gas. A natural objective of a system provider is to maximize social welfare, i.e., to solve (P). Since the true value p_j is the private information of agent j, the system designer has to employ a mechanism that incentivizes the agents to report their true values p_j. It is without loss of generality [9,19] to assume the following form of a direct revelation mechanism.

Next, we prove the monotonicity of the algorithm. To this end, let p, p̃ ∈ Nⁿ be two declared profit vectors such that there is i ∈ [n] with p̃_i = p_i + 1 and p̃_j = p_j for all j ≠ i. Let x and x̃ be the corresponding solutions computed by Algorithm 3, and assume that x_i = 1. It is to show that x̃_i = 1. Let q* and q̃* be the optimal values of (R_1) with respect to p and p̃. Then q* ≤ q̃* ≤ q* + 1. Let H := {j ∈ [n] : p_j ≥ αq*} and Ĥ := {j ∈ [n] : p̃_j ≥ αq̃*}.

Next, assume that H = ∅. Then either Ĥ = {i}, and thus x̃_i = 1, or Ĥ = ∅. In the latter case, Algorithm 3 executes the greedy algorithm for both p and p̃. But since the greedy ratio of item i only increases while the ratio of every other item j ∈ [n] \ {i} remains unchanged for every set S, if the greedy algorithm adds item i to its solution after k iterations for p, then it also adds i to its solution after at most k iterations for p̃. The critical payments can be computed with binary search.

Constantly Many Packing Constraints
In this section we generalize problem (P) by allowing a constant number of convex quadratic constraints, and we derive a constant-factor approximation algorithm using randomized rounding combined with partial enumeration. To this end, let r ∈ N be a constant natural number and, for every k ∈ [r], let W^k ∈ N^{n×n} be a symmetric psd matrix with non-negative entries. Furthermore, let p ∈ Nⁿ and c_k ∈ N, k ∈ [r]. We consider packing problems with r convex quadratic knapsack constraints of the form

maximize pᵀx subject to xᵀW^k x ≤ c_k for all k ∈ [r], x ∈ {0, 1}ⁿ. (P_k)
where the last inequality follows from the monotonicity of the functions x ↦ xᵀW^k x with respect to the natural partial order on Rⁿ_{≥0} and the FKG inequality; see [7]. In the following, we show that inequality (5) holds for every ℓ ∈ [n] and k ∈ [r]. Combining (4) and (5), and using that pᵀy ≥ (1 − ε)p*, we then obtain the claimed bound, and we are finished. We proceed as follows. Let ℓ ∈ [n] and k ∈ [r]. Define z ∈ Rⁿ by z_i = y_i for all i ∈ [n] \ {ℓ} and z_ℓ = 1. We derive an upper bound on zᵀW^k z and then use this bound to prove that E[XᵀW^k X | X_ℓ = 1] ≤ g(α, δ) c_k. Using Markov's inequality then yields (5). For the sake of readability, throughout the rest of the proof we omit the superscripts and simply write W = W^k and c = c_k.
Claim. We have zᵀW z ≤ (1 + δ^{1/3})³ c. (6)

Proof (of the claim). Let W = UᵀU, with U = (u_ij)_{i,j∈[n]}, be the Cholesky decomposition of W, and denote by u_i ∈ Rⁿ, i ∈ [n], the rows of U. It follows that zᵀW z = Σ_{i∈[n]} (u_iᵀz)². Let γ ∈ (0, 1) and i ∈ [n]. There are two possible cases. Either |u_iᵀy| ≥ γ|u_iᵀz|, and thus |u_iᵀz| ≤ (1/γ)|u_iᵀy|; or we have |u_iᵀy| < γ|u_iᵀz|. But then, since y and z differ only in the ℓ-th coordinate, (1 − γ)|u_iᵀz| ≤ |u_iℓ|. It follows that in any of the two cases we have (u_iᵀz)² ≤ (1/γ²)(u_iᵀy)² + (1/(1 − γ)²) u_iℓ². Using that yᵀW y ≤ c, Σ_{i∈[n]} u_iℓ² = w_ℓℓ, and w_ℓℓ ≤ δc, we conclude that zᵀW z ≤ (1/γ² + δ/(1 − γ)²) c. Applying standard calculus, we see that the function γ ↦ δ/(1 − γ)² + 1/γ² attains its minimal value (1 + δ^{1/3})³ at γ = 1/(1 + δ^{1/3}). This completes the proof of the claim.

Using the bound (6), we proceed to derive an upper bound on the expected value of XᵀW X conditioned on X_ℓ = 1. To that end, note that (7) holds, where the inequality follows from α ∈ (0, 1). Since y is a feasible solution of (R_k), we have (8). Plugging (6), (8), and w_ℓℓ ≤ δc into (7) yields E[XᵀW X | X_ℓ = 1] ≤ g(α, δ) c. Therefore, by Markov's inequality, (5) holds. This establishes inequality (5) and completes the proof.
Proof. The fact that the function (0, 1) → R, α ↦ f(α, δ), attains its maximum value at α_δ can be verified using standard calculus. The statement then follows by the continuity of f and of δ ↦ α_δ, which completes the proof.
We proceed to show that, for this α, the probability that the random vector X = Ber(αy) produced by Algorithm 4 is infeasible for (P_k) can be bounded from above by 1/2.

Lemma 10. Let y be an optimal solution to (R_k), α ∈ (0, 1), and X = Ber(αy). Then P[X infeasible for (P_k)] ≤ r(α² + α). In particular, if α = α_δ, then P[X infeasible for (P_k)] ≤ 1/2.

Proof. Since X_i and X_j are stochastically independent for every i, j ∈ [n] with i ≠ j, it holds for every k ∈ [r] that E[XᵀW^k X] ≤ (α² + α) c_k, where the inequality follows from the fact that y is a feasible solution of (R_k). Thus, Markov's inequality and a union bound over the r constraints imply the claimed bound. Finally, for every δ ∈ (0, 1), one verifies that r(α_δ² + α_δ) ≤ 1/2, as required.

Algorithm 5: Randomized rounding with partial enumeration
To finish the proof, we show that for any constant δ ∈ (0, 1), any optimal solution to (P_k) contains only a constant number of δ-heavy items.

Lemma 11. Let x* be an optimal solution to problem (P_k), let δ ∈ (0, 1), and let H_δ denote the set of δ-heavy items. Then |{i ∈ H_δ : x*_i = 1}| ≤ r/δ.

Proof. Since all entries of the constraint matrices are non-negative, for every k ∈ [r] we have Σ_{i : x*_i = 1} w^k_{ii} ≤ (x*)^T W_k x* ≤ c_k. As every item i that is δ-heavy with respect to constraint k satisfies w^k_{ii} > δ c_k, the solution x* contains at most 1/δ items that are δ-heavy with respect to constraint k, and hence at most r/δ δ-heavy items in total, which completes the proof.
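The counting argument behind Lemma 11 can be illustrated on a toy instance. The sketch below (helper names are ours; single constraint, so r = 1) checks that a feasible binary solution cannot select more than 1/δ items with diagonal entry above δc:

```python
def quad(x, W):
    """Evaluate x^T W x."""
    n = len(x)
    return sum(x[i] * W[i][j] * x[j] for i in range(n) for j in range(n))

def heavy_count(x, W, c, delta):
    """Number of delta-heavy items (w_ii > delta * c) selected by x."""
    return sum(1 for i in range(len(x)) if x[i] == 1 and W[i][i] > delta * c)

# With non-negative entries, the sum of the selected diagonal entries is at
# most x^T W x <= c, so at most 1/delta selected items can have w_ii > delta*c.
W = [[3.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 0.5]]
c, delta = 10.0, 0.25          # heavy threshold: w_ii > 2.5
x = [1, 1, 1]                  # feasible: x^T W x = 6.5 <= 10
assert quad(x, W) <= c
assert heavy_count(x, W, c, delta) <= int(1 / delta)
```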
We are now in a position to devise a randomized constant-factor approximation algorithm for problem (P_k). The algorithm first enumerates all feasible solutions using only heavy items, then computes a solution by randomized rounding involving only the light items, and returns the better of the two solutions; see Algorithm 5.
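The overall strategy of Algorithm 5 can be sketched as follows. This is a simplified single-constraint sketch under our own naming: the fractional point y_i = 1/2 used in the rounding step is a placeholder for an optimal solution of the relaxation (obtaining it would require a convex-programming solver), and the rounding is run once without the scaling analysis of Lemma 10:

```python
import itertools
import random

def quad(x, W):
    """Evaluate x^T W x."""
    n = len(x)
    return sum(x[i] * W[i][j] * x[j] for i in range(n) for j in range(n))

def enumerate_heavy(p, W, c, heavy, delta):
    """Best feasible solution using delta-heavy items only.  By Lemma 11 an
    optimal solution selects at most O(1/delta) of them, so enumerating all
    subsets up to that size is polynomial for constant delta."""
    n = len(p)
    best, best_val = [0] * n, 0.0
    max_size = min(len(heavy), int(1 / delta))
    for k in range(max_size + 1):
        for subset in itertools.combinations(heavy, k):
            x = [0] * n
            for i in subset:
                x[i] = 1
            val = sum(p[i] for i in subset)
            if quad(x, W) <= c and val > best_val:
                best, best_val = x, val
    return best, best_val

def round_light(p, W, c, light, alpha, rng):
    """Randomized rounding restricted to the light items, using the
    stand-in fractional point y_i = 1/2 (hypothetical)."""
    x = [0] * len(p)
    for i in light:
        if rng.random() < alpha * 0.5:
            x[i] = 1
    if quad(x, W) > c:  # infeasible sample: fall back to the empty solution
        x = [0] * len(p)
    return x, sum(p[i] * x[i] for i in range(len(p)))

def algorithm5(p, W, c, delta=0.25, alpha=0.3, seed=0):
    """Partial enumeration over heavy items combined with randomized
    rounding over light items; returns the better of the two solutions."""
    n = len(p)
    heavy = [i for i in range(n) if W[i][i] > delta * c]
    light = [i for i in range(n) if W[i][i] <= delta * c]
    rng = random.Random(seed)
    x_h, v_h = enumerate_heavy(p, W, c, heavy, delta)
    x_l, v_l = round_light(p, W, c, light, alpha, rng)
    return x_h if v_h >= v_l else x_l
```

By construction, both candidate solutions are feasible, so the returned vector always satisfies the quadratic constraint.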
Proof. Let ε > 0 and δ > 0 be arbitrary. We claim that Algorithm 5 yields a ρ_{ε,δ}-approximation. Let x* be an optimal solution of (P_k). We distinguish two cases. First case: Σ_{i ∈ H_δ} p_i x*_i ≥ ρ_{ε,δ} p^T x*. Then, by Lemma 11, the partial enumeration over heavy items finds a solution of value at least Σ_{i ∈ H_δ} p_i x*_i ≥ ρ_{ε,δ} p^T x*. Finally, by the continuity of f and α_δ we obtain the limit of ρ_{ε,δ} as ε, δ → 0, which completes the proof.

Approximation Hardness
In this section, we show that packing problems with convex quadratic constraints of type (P) are APX-hard.

Theorem 6. It is NP-hard to approximate packing problems with convex quadratic constraints by a factor of 91/92 + ε, for any ε > 0.

Proof. We reduce from the 6-set packing problem, which is NP-hard to approximate by a factor of 22/23 + ε for all ε > 0; see Hazan et al. [11]. An instance of 6-set packing is given by a ground set [m] and a family S ⊆ 2^[m] of subsets of [m] such that |S| = 6 for all S ∈ S. A subfamily S* ⊆ S is a feasible solution to the 6-set packing problem if S ∩ T = ∅ for all distinct S, T ∈ S*. For a given instance of 6-set packing and a value k ∈ N, the gap problem is the decision problem to decide whether:
Yes: there is a solution to the 6-set packing problem of size at least k, or
No: every solution has size strictly smaller than (22/23)k.
For optimal sizes in the interval [(22/23)k, k), any answer is admissible. The approximation hardness of 6-set packing implies that the gap problem is an NP-hard decision problem.
Let n := |S| and number the sets S = {S_1, S_2, . . ., S_n}. Let A = (a_{ij})_{i,j} ∈ {0, 1}^{m×n} be defined by a_{ij} = 1 if and only if i ∈ S_j, and let W = A^T A. Consider the problem

maximize 1^T x
subject to x^T W x ≤ 6k, (9)
x ∈ {0, 1}^n. (SP)

Suppose we have a Yes-instance for the gap problem, and let S* be a subfamily of pairwise disjoint sets of cardinality k. Then a feasible solution for (SP) is given by x* defined by x*_j = 1 if S_j ∈ S*, and x*_j = 0 otherwise. Since every set S_j, j ∈ [n], contributes at least 6 to the left-hand side of the knapsack constraint (9), this solution is also optimal for (SP) and has an objective value of k.
Next, consider a No-instance for the gap problem, let x* be a corresponding optimal solution of (SP), and let k' be its objective value. Since for a No-instance every solution of the 6-set packing problem has size strictly less than (22/23)k, every set that is picked beyond the first (22/23)k sets intersects at least one of the first (22/23)k sets. Thus, the first (22/23)k sets each contribute at least 6 to (9), and each of the further k' − (22/23)k sets contributes at least 5 + (2^2 − 1^2) = 8 to (9), since at least one of its elements is covered twice. We obtain 6k ≥ (x*)^T W x* ≥ 6 · (22/23)k + 8(k' − (22/23)k), implying k' ≤ (91/92)k. We conclude that for a Yes-instance the objective value of (SP) is at least k, while for a No-instance it is strictly less than (91/92)k. Therefore, the problem is NP-hard to approximate by a factor of 91/92 + ε for any ε > 0.
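The accounting in the reduction — a chosen 6-set contributes exactly 6 to x^T W x when disjoint from the other chosen sets, and at least 8 when it re-covers one element — can be verified on a toy family (helper names are ours):

```python
def build_W(sets, m):
    """W = A^T A for the incidence matrix A of a set family over ground set [m]."""
    n = len(sets)
    A = [[1 if i in sets[j] else 0 for j in range(n)] for i in range(m)]
    return [[sum(A[i][j] * A[i][k] for i in range(m)) for k in range(n)]
            for j in range(n)]

def quad(x, W):
    """Evaluate x^T W x."""
    n = len(x)
    return sum(x[j] * W[j][k] * x[k] for j in range(n) for k in range(n))

# Two disjoint 6-sets: each contributes exactly 6, so x^T W x = 12.
disjoint = [set(range(0, 6)), set(range(6, 12))]
W = build_W(disjoint, m=12)
assert quad([1, 1], W) == 12

# A 6-set sharing one element with an already chosen set contributes
# 5 + (2^2 - 1^2) = 8 instead of 6.
overlapping = [set(range(0, 6)), set(range(5, 11))]
W2 = build_W(overlapping, m=11)
assert quad([1, 1], W2) - quad([1, 0], W2) == 8
```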

Computational results
We apply our algorithms to a problem of the type described in Example 1. Specifically, we solve the welfare maximization problem for instance 134 of the GasLib library [26]; see Figure 4 for an illustration of the network G = (V, E).
The instance contains upper and lower pressure bounds for every node v ∈ V as well as all physical properties needed to compute the pipe resistances β_e, e ∈ E. Sources and sinks are denoted by S and T, respectively. Every sink t ∈ T requests the transportation of q_t units of gas to t. To ensure the robustness of the network in the sense of [15], we assume that all sinks between s_1 and s_2 are (possibly) supplied by s_1, all sinks between s_3 and t_45 by s_3, and all other sinks by s_2. Denote the set of all sinks that are (possibly) supplied by s_i by T_i, i = 1, 2, 3. For simplicity, we assume that the economic welfare is proportional to the amount of transported gas. That is, there is a constant θ > 0 such that for every sink t ∈ T the economic welfare p_t of transporting q_t units of gas to t equals θ q_t.
Our goal is to choose a welfare-maximal subset of transportations that can be satisfied simultaneously while the pressures at the first source s_1 and the last sink t_45 are within their feasible intervals. To that end, let Ē denote the path from s_1 to t_45, and for every t ∈ T_i denote by E_t the set of edges on the unique path from s_i to t, i = 1, 2, 3. Let p = (p_t)_{t∈T} and W = (w_{t,t'})_{t,t'∈T} with w_{t,t'} = Σ_{e ∈ Ē ∩ E_t ∩ E_{t'}} β_e q_t q_{t'}, and let c = π̄_{s_1} − π̲_{t_45}, where for a node v ∈ V, π̄_v and π̲_v denote the upper and lower bound on the squared pressure at v, respectively. Finally, let x = (x_t)_{t∈T} ∈ {0, 1}^T, where x_t = 1 if and only if sink t is supplied. Then the welfare-maximization problem can be formulated as (P); see Example 1.
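The construction of p and W from the network data can be sketched as follows. This is a toy illustration under our own naming: paths[t] plays the role of the precomputed intersection Ē ∩ E_t, and all edge labels, demands, and coefficients below are made up:

```python
def build_instance(paths, beta, q, theta):
    """Build p, W for the welfare-maximization problem (P).

    paths[t] : set of edges of the reference path used to supply sink t
    beta[e]  : pressure-loss coefficient of edge e
    q[t]     : demand of sink t
    theta    : welfare per unit of transported gas
    """
    sinks = sorted(paths)
    p = [theta * q[t] for t in sinks]
    # w_{t,t'} = sum of beta_e over shared edges, times q_t * q_{t'}
    W = [[sum(beta[e] for e in paths[t] & paths[u]) * q[t] * q[u]
          for u in sinks] for t in sinks]
    return p, W

# Toy network: two sinks sharing edge 'e1' of the reference path.
paths = {'t1': {'e1'}, 't2': {'e1', 'e2'}}
beta = {'e1': 2.0, 'e2': 1.0}
q = {'t1': 1.0, 't2': 2.0}
p, W = build_instance(paths, beta, q, theta=10.0)
assert p == [10.0, 20.0]
assert W[0][0] == 2.0             # beta_e1 * q_t1^2
assert W[1][1] == 12.0            # (beta_e1 + beta_e2) * q_t2^2
assert W[0][1] == W[1][0] == 4.0  # beta_e1 * q_t1 * q_t2
```

By construction W is a Gram-type matrix with non-negative entries, matching the assumptions of (P).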
The GasLib-134 instance contains 1234 different scenarios, where for each scenario demands q̄_t are given for every sink t ∈ T. In order to make the optimization problem non-trivial, we increase the node demands by setting q_t = γ q̄_t for γ ∈ Γ := {5, 10, 50, 100}. We apply the golden ratio algorithm, the greedy algorithm, and randomized rounding to the first 100 scenarios. For each scenario, we consider every γ ∈ Γ. Each of the three algorithms is executed in three dif-

Fig. 1: Gas network with feed-in and feed-out nodes.
Therefore, φ y* is feasible for (R_2). By Lemma 4, we can transform φ y* into a vector z with p^T z ≥ p^T (φ y*) = φ p^T y* and |N_f(z)| ≤ 1. The integral part ⌊z⌋ of z is feasible for (P), and thus, p^T z ≤ p^T ⌊z⌋ + max_{i∈[n]} p_i ≤ 2 p^T x*. We conclude that p^T y* ≤ (1/φ) p^T z ≤ (2/φ) p^T x*.