Improved Bounds in Stochastic Matching and Optimization

Algorithmica 80, 3225–3252 (2018). https://doi.org/10.1007/s00453-017-0383-4

Abstract

Real-world problems often have parameters that are uncertain during the optimization phase; stochastic optimization or stochastic programming is a key approach introduced by Beale and by Dantzig in the 1950s to address such uncertainty. Matching is a classical problem in combinatorial optimization. Modern stochastic versions of this problem model problems in kidney exchange, for instance. We improve upon the current-best approximation bound of 3.709 for stochastic matching due to Adamczyk et al. (in: Algorithms-ESA 2015, Springer, Berlin, 2015) to 3.224; we also present improvements on Bansal et al. (Algorithmica 63(4):733–762, 2012) for hypergraph matching and for relaxed versions of the problem. These results are obtained by improved analyses and/or algorithms for rounding linear-programming relaxations of these problems.

References

  1. Abolhassani, M., Esfandiari, H., Hajiaghayi, M., Mahini, H., Malec, D., Srinivasan, A.: Selling tomorrow’s bargains today. In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems, pp. 337–345 (2015)

  2. Adamczyk, M., Grandoni, F., Mukherjee, J.: Improved approximation algorithms for stochastic matching. In: Bansal, N., Finocchi, I. (eds.) Algorithms—ESA 2015. Lecture Notes in Computer Science, vol. 9294. Springer, Berlin, Heidelberg (2015)

  3. Alon, N., Spencer, J.H.: The Probabilistic Method. Wiley-Interscience Series in Discrete Mathematics and Optimization, pp. 85–96. Wiley (2008)

  4. Bansal, N., Gupta, A., Nagarajan, V., Rudra, A.: When LP is the cure for your matching woes: approximating stochastic matchings. arXiv preprint arXiv:1003.0167v1 [cs.DS] (2010)

  5. Bansal, N., Gupta, A., Li, J., Mestre, J., Nagarajan, V., Rudra, A.: When LP is the cure for your matching woes: improved bounds for stochastic matchings. Algorithmica 63(4), 733–762 (2012)

  6. Beale, E.M.: On minimizing a convex function subject to linear inequalities. J. Roy. Stat. Soc. B Met. 17, 173–184 (1955)

  7. Birge, J.R., Louveaux, F.: Introduction to Stochastic Programming. Springer, New York (1997)

  8. Chen, N., Immorlica, N., Karlin, A.R., Mahdian, M., Rudra, A.: Approximating matches made in heaven. In: Albers, S., Marchetti-Spaccamela, A., Matias, Y., Nikoletseas, S., Thomas, W. (eds.) Automata, Languages and Programming. ICALP 2009. Lecture Notes in Computer Science, vol. 5555. Springer, Berlin, Heidelberg (2009)

  9. Dantzig, G.B.: Linear programming under uncertainty. Manag. Sci. 1(3–4), 197–206 (1955)

  10. Dean, B.C., Goemans, M.X., Vondrák, J.: Approximating the stochastic knapsack problem: the benefit of adaptivity. Math. Oper. Res. 33(4), 945–964 (2008)

  11. Fortuin, C.M., Kasteleyn, P.W., Ginibre, J.: Correlation inequalities on some partially ordered sets. Commun. Math. Phys. 22, 89–103 (1971)

  12. Füredi, Z., Kahn, J., Seymour, P.D.: On the fractional matching polytope of a hypergraph. Combinatorica 13(2), 167–180 (1993)

  13. Gandhi, R., Khuller, S., Parthasarathy, S., Srinivasan, A.: Dependent rounding and its applications to approximation algorithms. J. ACM (JACM) 53(3), 324–360 (2006)

  14. Garg, N., Gupta, A., Leonardi, S., Sankowski, P.: Stochastic analyses for online combinatorial optimization problems. In: Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, pp. 942–951 (2008)

  15. Gupta, A., Kumar, A.: A constant-factor approximation for stochastic Steiner forest. In: Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, pp. 659–668. ACM (2009)

  16. Gupta, A., Ravi, R., Sinha, A.: LP rounding approximation algorithms for stochastic network design. Math. Oper. Res. 32(2), 345–364 (2007)

  17. Immorlica, N., Karger, D., Minkoff, M., Mirrokni, V.S.: On the costs and benefits of procrastination: approximation algorithms for stochastic combinatorial optimization problems. In: Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, pp. 691–700 (2004)

  18. Levi, R., Pál, M., Roundy, R.O., Shmoys, D.B.: Approximation algorithms for stochastic inventory control models. Math. Oper. Res. 32(2), 284–302 (2007)

  19. Ravi, R., Sinha, A.: Hedging uncertainty: approximation algorithms for stochastic optimization problems. Math. Program. 108(1), 97–114 (2006)

  20. Ruszczynski, A.P., Shapiro, A.: Stochastic Programming, vol. 10. Elsevier, Amsterdam (2003)

  21. Shachnai, H., Srinivasan, A.: Finding large independent sets in graphs and hypergraphs. SIAM J. Discret. Math. 18, 488–500 (2004)

  22. Shapiro, A., Dentcheva, D., Ruszczynski, A.: Lectures on Stochastic Programming—Modeling and Theory, vol. 16, 2nd edn. SIAM, Philadelphia (2014)

  23. Shmoys, D.B., Swamy, C.: An approximation scheme for stochastic linear programming and its application to stochastic integer programs. J. ACM (JACM) 53(6), 978–1012 (2006)

  24. Srinivasan, A.: Approximation algorithms for stochastic and risk-averse optimization. In: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1305–1313. Society for Industrial and Applied Mathematics (2007)

  25. Vazirani, V.V.: Approximation Algorithms. Springer, Berlin (2013)

  26. Williamson, D.P., Shmoys, D.B.: The Design of Approximation Algorithms. Cambridge University Press, Cambridge (2011)

Acknowledgements

A preliminary version of this paper appears as part of a paper in the Proc. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2015. We thank R. Ravi and the referees for their valuable comments regarding the details as well as the context of this work. The research of the first author is supported in part by grants from the British Council's UKIERI program and the United States Department of Transportation's UTRC Region II consortium (RF # 49198-25-26). The research of the fourth author is supported in part by NSF Awards CNS-1010789, CCF-1422569 and CCF-1749864, and by research awards from Adobe, Inc. The research of the last author is supported in part by NSF Awards CNS-1010789 and CCF-1422569.

Author information

Corresponding author

Correspondence to Pan Xu.

Appendices

Appendix A: Proofs for Sect. 4

1.1 Proof of Lemma 3

Proof

Let \(B=A+y_a \le 2\) be an arbitrary but fixed feasible value; we now investigate how A and \(y_a\) are arranged in WS. A moment’s reflection tells us that in WS we will have

$$\begin{aligned} \sum _{f \in E(u)} y_{f} p_{f}=A+y_{a}p_{a}=1 \Rightarrow ~B=A+y_{a}\ge 1, ~ y_{a}(1-p_{a})=B-1 \end{aligned}$$

Recall that the update expression of \(P_u\) as shown in Equation (7) is as follows:

$$\begin{aligned} P_u = (1-xA)(1-xy_a)\Pr [Z_{u} \le t_u-1] + (1-x A)x y_a(1-p_a)\Pr [Z_{u} \le t_u-2] \end{aligned}$$

Note that in WS, the values of \(\Pr [Z_{u} \le t_{u}-1]\) and \(\Pr [Z_{u} \le t_{u}-2]\) depend only on B (since in WS, \(\mathbb {E}[Z_{u}]=x(t_{u}-B)\)), so they can be treated as constants here. For the rest of the expression, we have

$$\begin{aligned}&(1-xA)(1-xy_a) \ge (1-xB +x^{2} (B-1)) \text { and } \\&(1-xA) x y_{a}(1-p_{a}) \ge (1-x)x(B-1) \end{aligned}$$

The two terms are together minimized when \(A=1, y_a=B-1\) and \(p_a=0\). Note that in this configuration, \(\sum _{f \in E(u) } y_{f}p_{f}=A+ y_{a}p_{a}=1\), and thus the matching constraint is maintained. Since \(B=A+y_{a} \) is fixed, the patience constraint is maintained as well. Therefore, for any fixed value B, \(P_{u}\) will be minimized at the following feasible configuration: \(A=1, y_a=B-1\) and \(p_a=0\). This completes our proof.

\(\square \)
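
The two displayed inequalities can also be spot-checked numerically. The following Python sketch is ours (not part of the paper's Mathematica verification); the grid resolution and tolerance are arbitrary choices. It enumerates configurations satisfying the WS conditions \(A + y_a p_a = 1\) and \(A + y_a = B\) with \(1 \le B \le 2\) and confirms both lower bounds.

```python
import numpy as np

# Spot-check of the two inequalities in the proof of Lemma 3 (our own sketch).
# Feasible WS configurations satisfy A + y_a * p_a = 1 and A + y_a = B, so
# p_a = 1 - (B - 1) / y_a with y_a in [B - 1, 1]; we check
#   (1 - x*A)(1 - x*y_a)            >= 1 - x*B + x^2*(B - 1)
#   (1 - x*A) * x * y_a * (1 - p_a) >= (1 - x) * x * (B - 1)
for x in np.linspace(0.01, 1.0, 25):
    for B in np.linspace(1.0, 2.0, 25):
        for y_a in np.linspace(max(B - 1.0, 1e-6), 1.0, 25):
            p_a = 1.0 - (B - 1.0) / y_a
            A = B - y_a                       # equals 1 - y_a * p_a
            lhs1 = (1 - x * A) * (1 - x * y_a)
            rhs1 = 1 - x * B + x ** 2 * (B - 1)
            lhs2 = (1 - x * A) * x * y_a * (1 - p_a)
            rhs2 = (1 - x) * x * (B - 1)
            assert lhs1 >= rhs1 - 1e-9 and lhs2 >= rhs2 - 1e-9
print("both inequalities hold on the sampled grid")
```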

1.2 Statement of Lemma 11 and its Proof

Lemma 11

Let Z be the sum of a finite collection of independent Bernoulli random variables with \(\mathbb {E}[Z] = \mu \). For any \(A > \mu , A \in \mathbb {Z}\), we have \(\Pr [Z \le A] \ge \Pr [Y \le A]\), where \(Y \sim \mathrm {Pois}(\mu )\).

Lemma 11 follows directly from Propositions 2 and 3 below. The proofs of both propositions invoke Proposition 1, which we show first.

Notation We let \(\mathrm {B}(N, \mu /N)\) denote the Binomial distribution with parameters \((N, \mu /N)\).

Proposition 1

Let \(Z_x \sim \mathrm {B}(N, \mu /N)\) where \( \ell \le N, \ell \in \mathbb {Z}\) and \(\mu <\frac{N}{N+1}(\ell +1) \). Then we have \(\Pr [Z_x=\ell ]>\Pr [Z_x=\ell +1]\).

Proof

The result becomes trivial when \(\ell =N\). We assume \(\ell \le N-1\).

$$\begin{aligned} \Pr [Z_x=\ell ]= & {} {N\atopwithdelims ()\ell } \left( \frac{\mu }{N}\right) ^{\ell } \left( 1-\frac{\mu }{N}\right) ^{N-\ell }\\ \Pr [Z_x=\ell +1]= & {} {N\atopwithdelims ()\ell +1} \left( \frac{\mu }{N}\right) ^{\ell +1} \left( 1-\frac{\mu }{N}\right) ^{N-\ell -1} \end{aligned}$$

We get that

$$\begin{aligned} \frac{\Pr [Z_x=\ell ]}{\Pr [Z_x=\ell +1]}>1\Leftrightarrow & {} \frac{(\ell +1)(N-\mu )}{(N-\ell )\mu } >1\\\Leftrightarrow & {} \mu <\frac{N}{N+1} (\ell +1) \end{aligned}$$

\(\square \)
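
As a quick numerical sanity check of Proposition 1 (our own sketch; the parameter grid below is arbitrary), one can compare the two Binomial probabilities directly:

```python
from math import comb

def binom_pmf(k, n, p):
    # probability mass function of B(n, p)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Check that Pr[Z_x = l] > Pr[Z_x = l + 1] whenever mu < (N / (N + 1)) * (l + 1),
# for Z_x ~ B(N, mu / N) and l <= N - 1 (the case l = N is trivial).
for N in range(2, 40):
    for l in range(0, N):
        threshold = N / (N + 1) * (l + 1)
        for mu in (0.1, 0.5, 0.9 * threshold):
            if 0 < mu < threshold:
                p = mu / N
                assert binom_pmf(l, N, p) > binom_pmf(l + 1, N, p)
print("Proposition 1 verified on the sampled grid")
```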

Proposition 2 considers the case when Z is a sum of at most N independent Bernoulli random variables, each having a mean value that lies in (0, 1]. Subject to this “at most N” restriction and the constraint that \(\mathbb {E}[Z] = \mu \) for some given \(\mu \), it is easy to see that the problem of minimizing \(\Pr [Z \le A]\), where A is a positive integer that is at most \(N-1\), is that of minimizing a continuous function over a closed set (which in fact is a polytope); thus, this problem has a minimum (as opposed to an infimum). In the following paragraphs, we will repeatedly use the term “optimal configuration”, which refers to any configuration of \(Z_{i}\)s under which \(\Pr [Z \le A]\) achieves its minimum value; also recall that we refer to a value \(z \in [0,1]\) as “floating” if \(z \in (0,1)\).

Proposition 2

For any given positive integers A and \(N \ge A+1\), let Z be the sum of at most N independent Bernoulli random variables \(Z_i\) with \(\mathbb {E}[Z] = \mu \), where \(\mu < A\). Then there exists an optimal configuration where each Bernoulli random variable \(Z_i\) has the same mean value, which, furthermore, is floating.

Proof

We first show that there exists an optimal configuration where for some (possibly empty) subset \(S\subseteq \{1, 2, \ldots , N\}\), (i) all \(Z_i\) with \(i \in S\) have mean value 1 each, and (ii) all \(Z_i\) with \(i \not \in S\) have the same floating mean value.

Consider an optimal configuration where there are two of our Bernoulli random variables, say \(Z_{1}\) and \(Z_{2}\), with different floating means. Let \(\mathbb {E}[Z_{1}]=z_{1},\mathbb {E}[Z_{2}]=z_{2}\) and \(Z_x\) be the sum of all the \(Z_i\)’s excluding \(Z_1\) and \(Z_2\). Assume \(0<z_{1}< z_{2} <1\). Notice that

$$\begin{aligned} \Pr [Z \le A]= & {} \Pr [Z_{x} \le A-2] +\Pr [Z_{x}=A-1](1-z_{1}z_{2}) \\&+\Pr [Z_{x}=A](1-z_{1})(1-z_{2}), \end{aligned}$$

and observe that the coefficient of \(z_{1}z_{2}\) is \(\Pr [Z_{x}=A]-\Pr [Z_{x}=A-1]\). We consider the following two cases:

  • \(\Pr [Z_{x}=A]-\Pr [Z_{x}=A-1]>0\). Then the value \(\Pr [Z \le A]\) can be strictly reduced by the perturbation: \(z_{1}\leftarrow z_{1}-\epsilon , z_{2}\leftarrow z_{2}+\epsilon \).

  • \(\Pr [Z_{x}=A]-\Pr [Z_{x}=A-1]<0\). Then the value \(\Pr [Z \le A]\) can be strictly reduced by the perturbation: \(z_{1}\leftarrow z_{1}+\epsilon , z_{2}\leftarrow z_{2}-\epsilon \).

Each of the above two cases will lead to a contradiction and thus we conclude \(\Pr [Z_{x}=A]-\Pr [Z_{x}=A-1]=0\) in the original optimal configuration. Since the coefficient of the nonlinear term \(z_1 z_2\) in the expression of \(\Pr [Z\le A]\) is zero, we see that our configuration remains optimal after resetting \(z'_{1} =z_{1}+z_{2}, z'_{2}=0 \) if \(z_{1}+z_{2} \le 1\) or \(z'_{1} =1, z'_{2}=z_{1}+z_{2}-1\) if \(z_{1}+z_{2} > 1\). After this change, we can successfully reduce the number of summands with a floating mean value; applying this strategy repeatedly, we reach a scenario where all floating means are the same.

Now we show that S must be empty in any optimal configuration obtained from the above routine. Assume w.l.o.g. that \(|S|=1\) (if \(|S| > 1\), just iterate the argument for \(|S| = 1\)). Say \(Z_1=1\) deterministically and all other \(Z_i\) have a floating mean value \(0<p<1\). We arbitrarily select one random variable with floating mean, say \(Z_2\), and let \(Z_x\) be the sum of all the other \(Z_i\) (i.e., all \(Z_i\) other than \(Z_1\) and \(Z_2\)). Note that

$$\begin{aligned} \Pr [Z \le A]=\Pr [Z_x+Z_2 \le A-1]=\Pr [Z_x\le A-2]+\Pr [Z_x=A-1] (1-p) \end{aligned}$$
(15)

where \(\mu _x=\mathbb {E}[Z_x]=\mu -1-p=N^\prime p\) with \(N^\prime \) being the number of variables in \(Z_x\).

Consider the following perturbation to \(Z_1\) and \(Z_2\): replace \(Z_1\) and \(Z_2\) by two i.i.d. Bernoulli random variables \(Z_0, Z_0'\) such that \(\mathbb {E}[Z_0]=(1+p)/2=q\). After this perturbation, we get a replacement \(Z^\prime \) for Z such that

$$\begin{aligned}&\Pr [Z^\prime \le A] = \Pr [Z_0+Z_0'+Z_x \le A]\end{aligned}$$
(16)
$$\begin{aligned}&\quad = \Pr [Z_x \le A-2]+(1-q^2) \Pr [Z_x =A-1]+(1-q)^2\Pr [Z_x=A] \end{aligned}$$
(17)

To apply Proposition 1 for \(Z_x\), we set \(\ell =A-1\). Note that for \(Z_x \sim \mathrm {B}(N^\prime , \mu _x/N^\prime )\), we have

$$\begin{aligned} \mu _x=N^\prime p=\frac{N^\prime }{N^\prime +1} (\mu -1) <\frac{N^\prime }{N^\prime +1} (\ell +1) \end{aligned}$$

Thus we get \(\Pr [Z_x=A-1]> \Pr [Z_x=A]\); plugging this into (17) yields

$$\begin{aligned} \Pr [Z^\prime \le A]&< \Pr [Z_x \le A-2]+((1-q^2)+(1-q)^2) \Pr [Z_x =A-1]\\ &= \Pr [Z \le A], \end{aligned}$$

where the final equality follows from (15), since \((1-q^2)+(1-q)^2 = 2(1-q) = 1-p\). This contradicts the assumption that the original configuration is optimal; thus, S must be empty. \(\square \)
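
The perturbation step above can be checked numerically as well. The sketch below (ours; the grid is arbitrary and kept small so that the probabilities compared are not vanishingly close) evaluates (15) and (17) and confirms that replacing \(Z_1 = 1\) and \(Z_2 \sim \mathrm {Bern}(p)\) by two i.i.d. \(\mathrm {Bern}(q)\) variables with \(q = (1+p)/2\) strictly decreases \(\Pr [Z \le A]\) whenever \(\mu < A\).

```python
from scipy.stats import binom

# Numerical check of the perturbation step in Proposition 2 (our own sketch).
# Z  = 1 + Bern(p) + Z_x with Z_x ~ B(Nprime, p), so mu = 1 + (Nprime + 1) * p;
# Z' = Bern(q) + Bern(q) + Z_x with q = (1 + p) / 2.
for Nprime in range(2, 16):
    for A in range(2, 7):
        for p in (0.1, 0.2, 0.3):
            mu = 1 + (Nprime + 1) * p
            if mu >= A or A - 1 > Nprime:     # need mu < A; Prop. 1 needs l <= N
                continue
            Zx = binom(Nprime, p)
            q = (1 + p) / 2
            pr_Z = Zx.cdf(A - 2) + Zx.pmf(A - 1) * (1 - p)              # Eq. (15)
            pr_Zprime = (Zx.cdf(A - 2) + (1 - q ** 2) * Zx.pmf(A - 1)
                         + (1 - q) ** 2 * Zx.pmf(A))                    # Eq. (17)
            assert pr_Zprime < pr_Z
print("the perturbation strictly decreases Pr[Z <= A] on the sampled grid")
```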

Let \(\Pr (A, \mu , N)\) be the minimum value of \(\Pr [Z \le A]\) under the restriction that the number of Bernoulli random variables with positive mean is at most N.

Proposition 3

For any \(N \ge A+1\), we have \(\Pr (A, \mu , N)>\Pr (A, \mu , N+1)\).

Proof

From Proposition 2, we know \(\Pr (A, \mu , N)\) can be achieved when Z follows a Binomial distribution with some parameters \(N^\prime \le N\) and \(\mu /N^\prime \). Arbitrarily choose a random variable, \(Z_1\), from Z. Let \(\mathbb {E}[Z_1]=z=\mu /N^\prime \) and \(Z_x=\sum _{i=2}^{N^\prime } Z_i\). Notice that \(\mu _x=\mathbb {E}[Z_x]=\frac{N^\prime -1}{N^\prime }\mu \).

Consider perturbing the current configuration of Z as follows: replace \(Z_1\) with \(Z_{1a}\) and \(Z_{1b}\) where \(\mathbb {E}[Z_{1a}]=\mathbb {E}[Z_{1b}]=z/2\). Now consider \(\Pr [Z^\prime \le A]\) where \(Z^\prime =Z_x+Z_{1a}+Z_{1b}\). The new value is

$$\begin{aligned} \Pr [Z^\prime \le A]&= \Pr [Z_x \le A-2]+\left( 1-\frac{z^2}{4}\right) \Pr [Z_x =A-1]\\ &\quad +\,(1-z/2)^2\Pr [Z_x=A] \end{aligned}$$

Notice that \(\Pr [Z \le A]=\Pr [Z_x \le A-1]+\Pr [Z_x=A](1-z)\). Therefore we have

$$\begin{aligned} \Pr [Z \le A]-\Pr [Z^\prime \le A]=\frac{1}{4}z^2(\Pr [Z_x = A-1]-\Pr [Z_x =A] ) \end{aligned}$$

To apply Proposition 1 on \(Z_x\), set \(\ell =A-1\). Note that we have

$$\begin{aligned} \mu _x=\frac{N^\prime -1}{N^\prime } \mu < \frac{N^\prime -1}{N^\prime } A=\frac{N^\prime -1}{N^\prime } (\ell +1) \end{aligned}$$

Thus we conclude that \(\Pr [Z_x = A-1] >\Pr [Z_x =A]\), which implies \(\Pr [Z \le A]> \Pr [Z^\prime \le A]\). Notice that after the perturbation, the number of random variables with positive mean will be at most \(N^\prime +1\le N+1\). Thus \(\Pr (A, \mu , N)=\Pr [Z \le A]>\Pr [Z^\prime \le A] \ge \Pr (A, \mu , N+1)\). \(\square \)

Lemma 11 follows from the preceding two propositions: together they imply that \(\Pr (A, \mu , N)\) is attained by the Binomial distribution \(\mathrm {B}(N, \mu /N)\) and is strictly decreasing in N; since \(\mathrm {B}(N, \mu /N)\) converges in distribution to \(\mathrm {Pois}(\mu )\) as \(N \rightarrow \infty \), any sum Z of finitely many independent Bernoulli random variables with \(\mathbb {E}[Z]=\mu \) satisfies \(\Pr [Z \le A] \ge \lim _{N\rightarrow \infty }\Pr (A,\mu ,N)=\Pr [Y \le A]\).
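
As an illustration of Lemma 11 (our own numerical sketch; the random instances, seed, and tolerance are arbitrary), one can compute the exact distribution of a sum of independent Bernoulli variables by convolution and compare its CDF at the smallest integer above \(\mu \) with the \(\mathrm {Pois}(\mu )\) CDF:

```python
import numpy as np
from scipy.stats import poisson

def bernoulli_sum_cdf(means, A):
    # exact distribution of a sum of independent Bernoulli(means[i]) via convolution
    dist = np.array([1.0])
    for p in means:
        dist = np.convolve(dist, [1.0 - p, p])
    return dist[:A + 1].sum()

rng = np.random.default_rng(0)
# Lemma 11: for any integer A > mu, Pr[Z <= A] >= Pr[Pois(mu) <= A].
for _ in range(200):
    n = int(rng.integers(1, 30))
    means = rng.uniform(0.0, 1.0, size=n)
    mu = means.sum()
    A = int(np.floor(mu)) + 1                 # smallest integer strictly above mu
    assert bernoulli_sum_cdf(means, A) >= poisson.cdf(A, mu) - 1e-12
print("Lemma 11 holds on all sampled instances")
```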

Appendix B: Stochastic Matching with Relaxed Patience

1.1 Proof of Lemma 5

Lemma 5 mainly addresses the issue of the configuration of E(u) in the WS, subject to the constraints: (1) \(\sum _{f \in E(u)} y_{f}p_{f} \le 1- y_{e}p_{e}\), (2) \(\sum _{f \in E(u)} y_{f} \le t_u- y_{e}\) with \(t_{u} \ge 2\) and (3) \(0 \le y_f \le 1\) for each \(f \in E(u)\). Notice that for any given pair \((y_{e}, p_{e})\), part of the result shown in Lemma 2 still applies here: i.e., at most one edge in E(u) takes a floating \(p_{f}\) value. Recalling our previous notation from Sect. 4: (1) \(E_1(u)\) and \(E_0(u)\) are the set of edges in WS which have \(p_f=1\) and \(p_f=0\) respectively; (2) \((y_a, p_{a})\) is the unique potential edge that takes a floating \(0<p_a<1\) value; and (3) \(A=\sum _{f \in E_1(u)} y_f\), \(Z_{u}=\sum _{f \in E_{0}(u)} Z_{f}\), where each \(Z_{f}\) is a Bernoulli random variable with mean \(x \cdot y_{f}\) and all the \(Z_f\)’s are independent.

Lemma 5 consists of the following three propositions; we assume \(t_{u} \ge 2, 1>h \ge 1/2\).

Proposition 4

Suppose e is a small edge and there is no large edge in E(u). Then WS can be characterized as \(Q_2=\Big (A=1/2, y_a=B-1/2, p_a=1/(2y_a), Z_u \sim \mathrm {Pois}(x(t_u-B)) \Big )\) for some \(1 \le B \le 3/2\).

Proposition 5

Suppose e is a small edge and there is a large edge in E(u). Then WS can be characterized as \(Q_1=\Big (A=1, y_a=0, Z_u \sim \mathrm {Pois}(x(t_u-1)) \Big ) \).

Proposition 6

Suppose e is a large edge. Then WS can be characterized as \(A=0, y_a=B, p_a=1/(2B), Z_u \sim \mathrm {Pois}(x(t_u-B-1/2))\) for some \(1/2 \le B \le 1\).

To prove the three propositions above, we will repeatedly apply local perturbation techniques, similar to the one we used in Lemma 2.

Proof of Proposition 4

Proof

A moment’s reflection shows that in WS the matching constraint will be tight, i.e., \(A+y_a p_a=1\). Thus we have \(A \ge 1/2\) since \(y_a p_a \le 1/2 \). As a result, we know for \(E_1(u)\) in WS, there will be one edge with \(p=1, y=1/2\) and another edge with \(p=1, y=A-1/2\). Therefore the lower bound \(P_u\) of \(\mathcal {P}_u\) in WS can be updated as follows:

$$\begin{aligned} P_u= & {} \left( 1-\frac{1}{2}x\right) \left( 1-\left( A-\frac{1}{2}\right) x\right) (1-x y_a) \Pr [Z_u \le t_u] \\&+ \left( 1-\frac{1}{2}x\right) \left( 1-\left( A-\frac{1}{2}\right) x\right) xy_a(1-p_a) \Pr [Z_u \le t_u-1] \end{aligned}$$

Let \(B=A+y_a\) be fixed. Substituting \(A=1-y_a p_a\) into B, we have \(y_a (1-p_a)=B-1\), implying \(B \ge 1\) and \(y_a \ge B-1\). By applying the local perturbation argument, we get that for any given \(B \ge 1\), in WS, \((A, y_{a})\) will take one of the following two (boundary) values: \(Q_1=(y_a=B-1, p_a=0, A=1)\) where \(y_a\) reaches the lower bound and \(Q_2=(y_a=B-1/2, p_a=\frac{1}{2 y_a}=\frac{1}{2B-1}, A=1/2)\) if \(B \le 3/2\) and \(Q_2=(y_a=1, p_a=2-B, A=B-1)\) if \(B \ge 3/2\) where \(y_a\) reaches the upper bound.

Note that \(Q_{1}\) essentially states that in WS, there are two edges \(y_{1}=y_{2}=1/2, p_{1}=p_{2}=1\) while no edge takes a floating \(p_{f}\) value. It can be viewed as a special case of \(Q_{2}\) with \(B=1\) and thus can be ignored.

Now consider \(Q_{2}\) with \(3/2 \le B \le 2\). Assume WS does not fall at a boundary value of B, i.e., \(3/2<B<2\). Then we perturb \((A, y_b) \rightarrow (A+\epsilon , y_b-\epsilon )\), where \(y_{b}\) is an arbitrary edge in \(E_0(u)\). The terms involving \(\epsilon ^2\) in the expression of \(P_{u}\) after the perturbation sum to

$$\begin{aligned} H(\epsilon ^2)= & {} (-x^2 \epsilon ^2) (1-x)\Pr [Z_{u}^\prime = t_u] \\&+\,(-x^2 \epsilon ^2) \Pr [Z_{u}^\prime \le t_u-2] + \epsilon ^2 x^3\left( \frac{1}{2}+y_b-2A\right) \end{aligned}$$

where \(Z_{u}^{\prime }=Z_u-Z_{b}\) and \(Z_{b}\) is a Bernoulli random variable associated with \(y_{b}\). Notice that \(y_b<1/2\) and \(A=B-1>1/2\). Thus we get that \(H(\epsilon ^2)<0\), implying that in WS, \(B=2\) or \(B=3/2\). Again the case \(Q_{2}\) with \(B=2\) can be ignored since it is a special case of \(Q_{2}\) with \(B=1\). Therefore the WS can only fall in \(Q_{2}\) with some \(1 \le B \le 3/2\). \(\square \)

Proof of Proposition 5

Proof

We consider the following two cases.

  • Consider the first case \(A >1/2\). Notice that in WS, the matching constraint will be tight, i.e., \(A+y_{a}p_{a}=1\). Thus \(E_1(u)\) must include the large edge since \(y_a p_a <1/2\). For each A, the infimum value of \(\prod _{f \in E_1 (u)} (1-x y_f)\) happens at a configuration where \(E_1(u)\) consists of a large edge \(y_1\) and at most one other small edge. Thus we can rewrite \(\prod _{f \in E_1(u)} (1-x y_f)\) as \((1-x y_1 h) (1-(A-y_1)x)\) where \(1/2 <y_1 \le A\). Further, we observe that in WS, either \(y_1=A\) or \(y_1=1/2+\epsilon \). The latter reduces to the case when all edges in E(u) are small, since the adversary will set \(y_1=1/2\) and \(y_1\) will not be attenuated. Therefore we can update \(P_u\) as follows:

    $$\begin{aligned} P_u=(1-x A h) (1\!-\!x y_a) \Pr [Z_u \le t_u]+ (1-x A h)xy_a(1-p_a) \Pr [Z_u \le t_u\!-\!1] \end{aligned}$$

    Let \(B=A+y_a\) be fixed with some value \(1 \le B \le 2\). Applying a similar analysis as in Proposition 4, we get that in WS, \((A, y_a)\) take one of the two (boundary) values, either \(Q_1=(A=1, y_a=B-1, p_a=0)\) or \(Q_2=(A=1/2+\epsilon , y_a=B-1/2-\epsilon , p_a=(1/2-\epsilon )/y_a)\) if \(B \le 3/2\) or \(Q_2=(A=B-1, y_a=1, p_a=2-B)\) if \(B >3/2\). For \(Q_1\), the expression of \(P_u\) can be updated as

    $$\begin{aligned} P_u = (1-xh) \Pr [Z_u \le t_u] \end{aligned}$$

    where \(Z_u \sim \mathrm {Pois}(x(t_u-1))\). For \(Q_2\) with \(B \le 3/2\), it can be reduced to the case when no large edge is in E(u). For \(Q_2\) with \(B \ge 3/2\), the expression of \(P_u\) can be updated as

    $$\begin{aligned} P_u = (1-xAh) (1-x) \Pr [Z_u \le t_u] + (1-xAh)xA \Pr [Z_u \le t_u-1] \end{aligned}$$

    We know that for each \(B \ge 3/2\), \(Z_u\) should follow a Poisson distribution with mean \(x(t_u-B)\). For simplicity, we assume each edge in \(E_0\) has a value \(y_f\) that can be arbitrarily small. Select an edge, say \(y_b\) in \(E_0(u)\), and perturb as \(A \leftarrow A+\epsilon , y_b \leftarrow y_b-\epsilon \). The terms involving \(\epsilon ^{2}\) in the final expression of \(P_{u}\) after the perturbation sum to

    $$\begin{aligned} H(\epsilon ^2)= & {} -x^2h \epsilon ^2 (1-x) \Pr [Z_u^\prime = t_u] - x^2h \epsilon ^2 \Pr [Z_u^\prime \le t_u-2]\\&+ \,x^2 \epsilon ^2 (1 - h - 2xAh + xhy_b) \Pr [Z_u^\prime = t_u-1] \end{aligned}$$

    where \(Z_u^{\prime }=Z_u-Z_{b}\) and \(Z_{b}\) is a Bernoulli random variable associated with \(y_{b}\). Notice that \(\mathbb {E}[Z_u^\prime ] \le x(t_u-B) < t_u-1\), from which we get \(\Pr [Z_u^\prime = t_u-1] < \Pr [Z_u^\prime \le t_u-2] \). Thus we have that for any \(h \ge 1/2\),

    $$\begin{aligned} H(\epsilon ^2) \le x^2 \epsilon ^2 (1 - 2h- 2xAh + xhy_b) \Pr [Z_u^\prime = t_u-1] < 0 \end{aligned}$$

    Therefore we claim that in WS, A should arrive at a boundary value, i.e., either \(A=1\) or \(A=1/2+\epsilon \). Both of these two cases have been analyzed before.

  • Consider the second case \(A \le 1/2\). Then \(y_a p_a \ge 1/2\), so a must be a large edge. We know that in WS, \(E_1(u)\) should consist of a single edge and \(P_u\) has the form:

    $$\begin{aligned} P_u = (1-xA) (1-xhy_a) \Pr [Z_u \le t_u] + (1 - xA) xhy_a (1-p_a) \Pr [Z_u \le t_u-1] \end{aligned}$$

    When \(B=A+y_a\) is fixed at some value \(1\le B <3/2\), we know \((A, y_a)\) must take some (boundary) value in WS: either \(Q_1=(A=1/2-\epsilon , y_a=B-1/2+\epsilon , p_a=(1/2+\epsilon )/y_a)\) or \(Q_2=(A=B-1, y_a=1, p_a=2-B)\). Similarly, we see that \(Q_1\) can be ignored since it can be reduced to the case when \(y_a p_a=1/2\) such that it will not be attenuated. Now we focus on the analysis of \(Q_2=(A=B-1, y_a=1, p_a=2-B)\) where \( 1 \le B <3/2 \). The value of \(P_u\) can be updated as:

    $$\begin{aligned} P_u=(1-xA) (1-xh) \Pr [Z_u \le t_u] + (1-xA) xAh \Pr [Z_u \le t_u-1] \end{aligned}$$

    Applying the same perturbation analysis as before, we get that in WS either \(A=0\) or \(A=1/2-\epsilon \). The instance of \(A=0\) is just the case of \(Q_{1}\) while the instance of \(A=1/2-\epsilon \) can be reduced to the situation without attenuation.

\(\square \)

Proof of Proposition 6

Proof

In this case, we consider a large edge e with \(y_{e}p_{e}>1/2\). Recall that in WS, the adversary will try to minimize \(P_u\) subject to (1) \(\sum _{f \in E(u)} y_f p_f \le 1/2\), (2) \(\sum _{f \in E(u)} y_{f} \le t_u-1/2 \) and (3) \(0 \le y_{f} \le 1\) for each \(f\in E(u)\). In our context, we have in WS, \(A+y_{a}p_{a} = 1/2\) and \(A+y_{a}+\sum _{f \in E_{0}(u)} y_f = t_u-1/2\).

Let \(A+y_a=B\) be some fixed value at \( 1/2 \le B \le 3/2 \). As before, we observe that in the WS, \((A, y_a)\) should arrive at boundary points, either \(Q_1=(A=1/2, y_a=B-1/2, p_a=0)\) or \(Q_2=(A=0, y_a=B, p_a=1/(2B))\) if \(B \le 1\) and \(Q_2=(A=B-1, y_a=1, p_a=3/2-B)\) if \(B>1\). Observe that \(Q_{1}\) is a special case of \(Q_{2}\) with \(B=1/2\) and thus can be ignored.

For the instance \(Q_2=(A=B-1, y_a=1, p_a=3/2-B)\) with \(B\ge 1\), \(P_u\) can be updated as

$$\begin{aligned} P_u=(1-Ax) (1-x) \Pr [Z_u \le t_u]+ (1-Ax)x (A+1/2) \Pr [Z_u \le t_u-1] \end{aligned}$$

Notice that \(\mathbb {E}[Z_u]=x(t_u-B-1/2) \le t_u-1\), implying that \(Z_u \sim \mathrm {Pois}(x(t_u-B-1/2))\). Perturb in the same way as before: \(A \leftarrow A+\epsilon \) and \(y_b \leftarrow y_b-\epsilon \), where \(y_{b}\) is an arbitrary edge in \(E_{0}(u)\). The terms involving \(\epsilon ^{2}\) sum to the following, where \(Z_{u}^{\prime }=Z_u-Z_{b}\) and \(Z_{b}\) is the Bernoulli random variable associated with \(y_{b}\):

$$\begin{aligned} H(\epsilon ^{2}) \le x^3 \epsilon ^{2} \left( y_b-\frac{1}{2}-2A\right) \Pr [Z_u^\prime =t_u-1]<0 \end{aligned}$$

Thus we claim that if WS arrives at \(Q_2=(A=B-1, y_a=1, p_a=3/2-B)\) with some \( 1\le B \le 3/2\), then B must be at a boundary point, either \(B=1\) or \(B=3/2\). Both can be viewed as special instances of \(Q_{2}\) with \(1/2 \le B \le 1\), and thus can be ignored. \(\square \)

1.2 Proof of Lemma 6

Proof

We split our discussion into the following two cases.

  • Consider the first case when e is small with \(y_e p_e=0\). Note that in WS, we have \(A+y_{a}+\sum _{f \in E_0(u)} y_f=1\). Thus we can set \(p_{a}=1\), since this does not violate the matching constraint and potentially decreases the value of \(P_{u}\). This means we can assume in WS there is no floating edge.

    After an analysis similar to that in Lemma 3, we find that in WS, either \(A=1\) or \(A=1/2\). When \(A=1\), \(P_u=(1-x h)\), which is greater than or equal to its value at \(Q_1\) when \(t_u \ge 2\), just as shown in Lemma 5.

    When \(A=1/2\),

    $$\begin{aligned} P_u = \left( 1-\frac{1}{2}x\right) \Pr [Z_u \le 1] = \left( 1-\frac{1}{2}x\right) \left( 1+\frac{1}{2}x\right) , Z_u \sim \mathrm {Pois}\left( \frac{1}{2}x\right) \end{aligned}$$

    Consider the \(P_{u}\) in WS at \(Q_2\) with \(B=1\): just as shown in Lemma 5, we have

    $$\begin{aligned} P_u \le \left( 1-\frac{1}{2}x\right) ^2 \Pr [Z_u \le t_u] \le \left( 1-\frac{1}{2}x\right) \left( 1+\frac{1}{2}x\right) \end{aligned}$$

    Thus we claim that WS cannot satisfy \(t_{u}=1\) when \(y_ep_e=0\).

  • Consider the second case when e is large with \(y_ep_e>1/2\). Similarly, we may assume there is no floating edge in WS and that \(A=1/2\). Therefore we have \(P_u = (1-1/2x)\). Notice that when \(t_u \ge 2\), in WS the bound on \(P_u\) in case \(Q_2\) shown in Lemma 5 is

    $$\begin{aligned} P_u \le (1-1/2x) \Pr [Z_u \le t_u] \le (1-1/2x) \end{aligned}$$

    since \(B \ge 1/2\). Thus we claim that WS cannot satisfy \(t_{u}=1\) when \(y_ep_e>1/2\).

\(\square \)

1.3 Numerical Verification Details in the Proof of Theorem 2

The following numerical verifications are similar to those shown in the proof of Theorem 1. All our numerical computations were done in Mathematica 10 with precision at least up to the fourth digit after the decimal point; a short Python sketch reproducing a few of the integrals is given after the list below.

  1.

    Consider a small edge e with \(y_ep_e=0\) where both E(u) and E(v) have WS at \(Q_2\) just as shown in Lemma 5 with \(B=B_{u}\) and \(B=B_{v}\) respectively. In this case, the Chernoff-Hoeffding bound is

    $$\begin{aligned} P_u(\mathbf{Small }_2) \ge \mathbf{L }_{a}(t_u)= \left( 1-\frac{1}{2}x\right) ^2 \left[ 1-\exp \left( \frac{ -\epsilon ^2 }{2+\epsilon }x\left( t_u-\frac{3}{2}\right) \right) \right] \end{aligned}$$

    where \(\epsilon =\frac{1}{x}-1\). We verify that:

    • When \(t_u, t_v \ge 150\), we see

      $$\begin{aligned} \int _{0}^{1}P_{u}(\mathbf{Small }_2) P_{v}(\mathbf{Small }_2) dx \ge \int _{0}^{1} \mathbf{L }_{a}^{2}(150) dx= 0.374 \end{aligned}$$
    • When \(2 \le t_u, t_v \le 150\), the integral \(\int _{0}^{1}P_{u}(\mathbf{Small }_2) P_{v}(\mathbf{Small }_2) dx\) gets its minimum value of 0.373799 at \(B_u=B_v=1.4984, t_u=t_v=5\).

    • When \(2 \le t_{u} \le 150\) while \(t_{v} \ge 150\), we see that

      $$\begin{aligned} \int _{0}^{1}P_{u}(\mathbf{Small }_2) P_{v}(\mathbf{Small }_2) dx \ge \int _{0}^{1}P_{u}(\mathbf{Small }_2) \mathbf L _a(150) dx, \end{aligned}$$

      and that the latter integral gets its minimum value of 0.373899 at \(B_{u}=1.48529, t_u=5\).

  2.

    Consider a large edge with \(y_e p_e=1/2+\epsilon \). In this case, the Chernoff-Hoeffding bound is

    $$\begin{aligned} P_u(\mathbf{Large }) \ge \mathbf L _b(t_u)= \left( 1-\frac{1}{2}x\right) \left[ 1-\exp \left( \frac{ -\epsilon ^2 }{2+\epsilon }x\left( t_u-\frac{3}{2}\right) \right) \right] \end{aligned}$$

    where \(\epsilon =\frac{1}{x}-1\). We verify that:

    • When \(t_u, t_v \ge 110\), we see

      $$\begin{aligned} \int _{0}^{1} P_{u}(\mathbf{Large }) P_{v}(\mathbf{Large }) dx \ge \int _{0}^{1} \mathbf{L }_{b}^{2}(110) dx= 0.539476 \end{aligned}$$
    • When \(1 \le t_u, t_v \le 110\), the integral \(\int _{0}^{1}P_{u}(\mathbf{Large }) P_{v}(\mathbf{Large }) dx\) gets its minimum value of 0.54563 at \(t_u=t_v=6, B_{u}=B_{v}=1\).

    • When \(2 \le t_{u} \le 110, t_{v} \ge 110\), we see that

      $$\begin{aligned} \int _{0}^{1} P_{u}(\mathbf{Large }) P_{v}(\mathbf{Large }) dx \ge \int _{0}^{1}P_{u}(\mathbf{Large }) \mathbf L _b(110) dx \end{aligned}$$

      and the latter integral gets its minimum value of 0.536973 at \(t_u=5, B_{u}=1\).

    Thus to reach an approximation ratio of 0.373799, it suffices to set \(h \ge 0.6961\).

  3.

    Consider a small edge with \(y_e p_e=0\) where both E(u) and E(v) have WS at \(Q_1\) just as shown in Lemma 5 with \(h=0.7\).

    The Chernoff-Hoeffding bound is

    $$\begin{aligned} P_u(\mathbf{Small }_{1}) \ge \mathbf{L }_{c}(t_u)= (1- h x) \left[ 1-\exp \left( \frac{ -\epsilon ^2 }{2+\epsilon }x(t_u-1) \right) \right] \end{aligned}$$

    with \( h=0.7,\epsilon =\frac{1}{x}-1\). We verify that:

    • When \(t_u, t_v \ge 100\),

      $$\begin{aligned} \int _{0}^{1}P_{u}(\mathbf{Small }_{1}) P_{v}(\mathbf{Small }_{1}) dx \ge \int _{0}^{1} \mathbf{L }_{c}^{2}(100) dx= 0.442734 \end{aligned}$$
    • When \(2 \le t_u, t_v \le 100\), the integral \(\int _{0}^{1}P_{u}(\mathbf{Small }_{1}) P_{v}(\mathbf{Small }_{1})dx\) gets its minimum value of 0.445811 at \(t_u=t_v=6\).

    • When \(2 \le t_{u} \le 100\) and \(t_{v} \ge 100\), we see that

      $$\begin{aligned} \int _{0}^{1}P_{u}(\mathbf{Small }_{1}) P_{v}(\mathbf{Small }_{1}) dx \ge \int _{0}^{1}P_{u}(\mathbf{Small }_{1}) \mathbf{L }_{c}(100) dx \end{aligned}$$

      and the latter integral gets its minimum value of 0.441362 at \(t_u=5\).

  4.

    Now consider a small edge with \(y_e p_e=0\) where E(u) has WS at \(Q_1 \) with \(h=0.7\) while E(v) has WS at \(Q_2\) with some \(B_{v}\).

    We verify that:

    • When \(t_u, t_v \ge 30\),

      $$\begin{aligned} \int _{0}^{1}P_{u}(\mathbf{Small }_{1}) P_{v}(\mathbf{Small }_{2}) dx \ge \int _{0}^{1} \mathbf{L }_{c}(30) \mathbf{L }_{a}(30) dx= 0.383453 \end{aligned}$$
    • When \(1 \le t_u, t_v \le 30\), the integral \(\int _{0}^{1}P_{u}(\mathbf{Small }_{1}) P_{v}(\mathbf{Small }_{2}) dx\) gets its minimum value of 0.40739 at \( t_u= 6, t_v=6, B_{v}= 1.49814 \).

    • When \(2 \le t_{u} \le 30\) while \(t_{v} \ge 30\),

      $$\begin{aligned} \int _{0}^{1}P_{u}(\mathbf{Small }_{1}) P_{v}(\mathbf{Small }_{2}) dx \ge \int _{0}^{1}P_{u}(\mathbf{Small }_{1}) \mathbf{L }_{a}(30) dx \end{aligned}$$

      and the latter integral gets its minimum value of 0.389957 at \( t_u= 4\).

    • When \(t_{u} \ge 30\) while \(2 \le t_{v} \le 30\),

      $$\begin{aligned} \int _{0}^{1}P_{u}(\mathbf{Small }_{1}) P_{v}(\mathbf{Small }_{2}) dx \ge \int _{0}^{1} \mathbf{L }_{c}(30) P_{v}(\mathbf{Small }_{2}) dx \end{aligned}$$

      and the latter integral gets its minimum value of 0.404117 at \(t_v=5, B=1.47987\).

    Thus we conclude that the bottleneck configuration is \(y_e p_e =0\), with both E(u) and E(v) having WS at \(Q_2\) with \(t_u=t_v=5, B_{u}=B_{v}=1.4984\). The resultant approximation ratio is 0.373799.
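
The integrals above can be reproduced with any standard numerical package. The following Python sketch is ours (the paper's verification used Mathematica); it evaluates a few representative integrals with scipy, using the bounds \(\mathbf {L}_{a}\), \(\mathbf {L}_{b}\) and \(\mathbf {L}_{c}\) defined in this subsection, with \(\epsilon =\frac{1}{x}-1\). The lower integration limit is nudged slightly off 0 only to avoid evaluating \(1/x\) at \(x=0\); this does not affect the fourth decimal place.

```python
import numpy as np
from scipy.integrate import quad

def eps(x):
    return 1.0 / x - 1.0

def L_a(x, t):                         # small edge, WS at Q_2
    return (1 - x / 2) ** 2 * (1 - np.exp(-eps(x) ** 2 / (2 + eps(x)) * x * (t - 1.5)))

def L_b(x, t):                         # large edge
    return (1 - x / 2) * (1 - np.exp(-eps(x) ** 2 / (2 + eps(x)) * x * (t - 1.5)))

def L_c(x, t, h=0.7):                  # small edge, WS at Q_1
    return (1 - h * x) * (1 - np.exp(-eps(x) ** 2 / (2 + eps(x)) * x * (t - 1)))

lo = 1e-12
print(quad(lambda x: L_a(x, 150) ** 2, lo, 1)[0])         # reported above as 0.374
print(quad(lambda x: L_b(x, 110) ** 2, lo, 1)[0])         # reported above as 0.539476
print(quad(lambda x: L_c(x, 100) ** 2, lo, 1)[0])         # reported above as 0.442734
print(quad(lambda x: L_c(x, 30) * L_a(x, 30), lo, 1)[0])  # reported above as 0.383453
```

The printed values should agree with the corresponding values reported above to roughly the stated precision.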
