
Pure characteristics demand models and distributionally robust mathematical programs with stochastic complementarity constraints

Abstract

We formulate pure characteristics demand models under uncertainty about the underlying probability distributions as distributionally robust mathematical programs with stochastic complementarity constraints (DRMP-SCC). For any fixed first-stage variable and random realization, the second-stage problem of DRMP-SCC is a monotone linear complementarity problem (LCP). To deal with the ambiguity of the probability distributions of the random variables involved in the stochastic LCP, we adopt the distributionally robust approach. Moreover, we propose an approximation of DRMP-SCC based on regularization and discretization, which yields a two-stage nonconvex-nonconcave minimax optimization problem. We prove convergence of the approximation problem to DRMP-SCC with respect to optimal solution sets, optimal values and stationary points as the regularization parameter goes to zero and the sample size goes to infinity. Finally, preliminary numerical results investigating the distributional robustness of pure characteristics demand models are reported to illustrate the effectiveness and efficiency of our approaches.


Fig. 1, Fig. 2, Fig. 3 (figures not reproduced)

Notes

  1.

    We can generally assume that \(P(\mathrm {d}\xi )=p(\xi )\mathbb {Q}(\mathrm {d}\xi )\) for some nominal probability distribution \(\mathbb {Q}\). We know from the Radon–Nikodym theorem (see e.g. [34, Theorem 7.32]) that such a density function \(p(\xi )\) exists if and only if P is absolutely continuous w.r.t. \(\mathbb {Q}\). Here we suppress \(\mathbb {Q}\) to simplify the notation.

References

  1. Aumann, R.J.: Integrals of set-valued functions. J. Math. Anal. Appl. 12, 1–12 (1965)
  2. Berry, S., Pakes, A.: The pure characteristics demand model. Internat. Econom. Rev. 48, 1193–1225 (2007)
  3. Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2013)
  4. Chen, X., Sun, H., Wets, R.J.B.: Regularized mathematical programs with stochastic equilibrium constraints: Estimating structural demand models. SIAM J. Optim. 25, 53–75 (2015)
  5. Chen, X., Xiang, S.: Perturbation bounds of P-matrix linear complementarity problems. SIAM J. Optim. 18, 1250–1265 (2008)
  6. Chen, X., Ye, J.: A class of quadratic programs with linear complementarity constraints. Set-Valued Var. Anal. 17, 113–133 (2009)
  7. Cottle, R.W., Pang, J.S., Stone, R.E.: The Linear Complementarity Problem. SIAM, Philadelphia (1992)
  8. Debreu, G.: Saddle point existence theorems. Cowles Commission Discussion Paper: Mathematics No. 412 (1952)
  9. Delage, E., Ye, Y.: Distributionally robust optimization under moment uncertainty with application to data-driven problems. Oper. Res. 58, 595–612 (2010)
  10. Esfahani, P.M., Kuhn, D.: Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. Math. Program. 171, 115–166 (2018)
  11. Guo, L., Lin, G.H.: Notes on some constraint qualifications for mathematical programs with equilibrium constraints. J. Optim. Theory Appl. 156, 600–616 (2013)
  12. Hanasusanto, G.A., Roitch, V., Kuhn, D., Wiesemann, W.: Ambiguous joint chance constraints under mean and dispersion information. Oper. Res. 65, 751–767 (2017)
  13. Izmailov, A.F., Solodov, M.V.: An active-set Newton method for mathematical programs with complementarity constraints. SIAM J. Optim. 19, 1003–1027 (2008)
  14. Jin, C., Netrapalli, P., Jordan, M.: What is local optimality in nonconvex-nonconcave minimax optimization? In: International Conference on Machine Learning, pp. 4880–4889. PMLR (2020)
  15. Kantorovich, L.V., Rubinshtein, S.: On a space of totally additive functions. Vestnik of the St. Petersburg University: Mathematics 13, 52–59 (1958)
  16. Lin, G.H., Chen, X., Fukushima, M.: Solving stochastic mathematical programs with equilibrium constraints via approximation and smoothing implicit programming with penalization. Math. Program. 116, 343–368 (2009)
  17. Lin, G.H., Fukushima, M.: Stochastic equilibrium problems and stochastic mathematical programs with equilibrium constraints: A survey. Pac. J. Optim. 6, 455–482 (2010)
  18. Liu, Y., Pichler, A., Xu, H.: Discrete approximation and quantification in distributionally robust optimization. Math. Oper. Res. 44, 19–37 (2019)
  19. Luna, J.P., Sagastizábal, C., Solodov, M.: An approximation scheme for a class of risk-averse stochastic equilibrium problems. Math. Program. 157, 451–481 (2016)
  20. Luo, Z.Q., Pang, J.S., Ralph, D.: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)
  21. Mangasarian, O.L.: Error bounds for nondegenerate monotone linear complementarity problems. Math. Program. 48, 437–445 (1990)
  22. Mangasarian, O.L.: Global error bounds for monotone affine variational inequality problems. Linear Algebra Appl. 174, 153–163 (1992)
  23. Mangasarian, O.L., Fromovitz, S.: The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. J. Math. Anal. Appl. 17, 37–47 (1967)
  24. Mangasarian, O.L., Pang, J.S.: Exact penalty for mathematical programs with linear complementarity constraints. Optimization 42, 1–8 (1997)
  25. Mangasarian, O.L., Ren, J.: New improved error bounds for the linear complementarity problem. Math. Program. 66, 241–255 (1994)
  26. Mangasarian, O.L., Shiau, T.H.: Error bounds for monotone linear complementarity problems. Math. Program. 36, 81–89 (1986)
  27. Nouiehed, M., Sanjabi, M., Huang, T., Lee, J.D., Razaviyayn, M.: Solving a class of non-convex min-max games using iterative first order methods. In: Advances in Neural Information Processing Systems, pp. 14934–14942 (2019)
  28. Pang, J.S., Scutari, G.: Nonconvex games with side constraints. SIAM J. Optim. 21, 1491–1522 (2011)
  29. Pang, J.S., Su, C.L., Lee, Y.C.: A constructive approach to estimating pure characteristics demand models with pricing. Oper. Res. 63, 639–659 (2015)
  30. Pflug, G.C., Pichler, A.: Multistage Stochastic Optimization. Springer, Cham (2014)
  31. Razaviyayn, M., Huang, T., Lu, S., Nouiehed, M., Sanjabi, M., Hong, M.: Nonconvex min-max optimization: Applications, challenges, and recent theoretical advances. IEEE Signal Process. Mag. 37, 55–66 (2020)
  32. Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer-Verlag, Berlin (2009)
  33. Scheel, H., Scholtes, S.: Mathematical programs with complementarity constraints: Stationarity, optimality, and sensitivity. Math. Oper. Res. 25, 1–22 (2000)
  34. Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia (2014)
  35. Shapiro, A., Xu, H.: Stochastic mathematical programs with equilibrium constraints, modelling and sample average approximation. Optimization 57, 395–418 (2008)
  36. Su, C.L., Judd, K.L.: Constrained optimization approaches to estimation of structural models. Econometrica 80, 2213–2230 (2012)
  37. Villani, C.: Topics in Optimal Transportation. American Mathematical Society, Providence (2003)
  38. Xie, W.: On distributionally robust chance constrained programs with Wasserstein distance. Math. Program. 186, 115–155 (2021)
  39. Xie, W., Ahmed, S.: On deterministic reformulations of distributionally robust joint chance constrained optimization problems. SIAM J. Optim. 28, 1151–1182 (2018)
  40. Xu, H., Liu, Y., Sun, H.: Distributionally robust optimization with matrix moment constraints: Lagrange duality and cutting plane methods. Math. Program. 169, 489–529 (2018)
  41. Xu, H., Ye, J.: Approximating stationary points of stochastic mathematical programs with equilibrium constraints via sample averaging. Set-Valued Var. Anal. 19, 283–309 (2011)
  42. Xu, Y., Yin, W.: A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imaging Sci. 6, 1758–1789 (2013)
  43. Ye, J., Zhu, D., Zhu, Q.J.: Exact penalization and necessary optimality conditions for generalized bilevel programming problems. SIAM J. Optim. 7, 481–507 (1997)
  44. Ye, Y.: On affine scaling algorithms for nonconvex quadratic programming. Math. Program. 56, 285–300 (1992)
  45. Zhang, J., Xu, H., Zhang, L.: Quantitative stability analysis for distributionally robust optimization with moment constraints. SIAM J. Optim. 26, 1855–1882 (2016)


Acknowledgements

The authors would like to thank the editor and two anonymous referees for their very helpful comments.

Author information


Corresponding author

Correspondence to Xiaojun Chen.

Additional information


This paper is dedicated to the memory of Olvi L. Mangasarian. His contributions to linear complementarity problems have impacted greatly on our research on distributionally robust mathematical programs with stochastic complementarity constraints. Jie Jiang’s work was partly supported by China Postdoctoral Science Foundation (Grant No. 2020M673117) and CAS AMSS-PolyU Joint Laboratory of Applied Mathematics. Xiaojun Chen’s work was partly supported by Hong Kong Research Grant Council PolyU15300219.

Appendix

Appendix

Proof

(The proof of Proposition 1) Denote \(\bar{p}_i=P_0(\varXi _i)\) for \(i=1,\cdots ,k\). We verify below that \(\bar{p}=(\bar{p}_1,\cdots ,\bar{p}_k)^\top \in \mathcal {P}_k\) for all sufficiently large k. Since \(\varPsi \) is continuous, we know from the mean value theorem for integrals that

$$\begin{aligned} \mathbb {E}_{P_0}[\varPsi (\xi )]=\sum _{i=1}^k \int _{\varXi _i} \varPsi (\xi ) \,P_0(d\xi )=\sum _{i=1}^k \varPsi (\tilde{\xi }^i)P_0(\varXi _i) \end{aligned}$$

for some \(\tilde{\xi }^i\in \varXi _i\), \(i=1,\cdots ,k\). Then

$$\begin{aligned} \left\| \mathbb {E}_{P_0}[\varPsi (\xi )] - \sum _{i=1}^k \bar{p}_i \varPsi (\xi ^i) \right\| \le \sum _{i=1}^k \bar{p}_i \left\| \varPsi (\xi ^i) - \varPsi (\tilde{\xi }^i) \right\| . \end{aligned}$$
(45)

We first consider the case that \(\varXi \) is bounded. For \(\alpha >0\), there exists \(\delta >0\) such that if \(\max _{1\le i\le k}\mathrm {diam}(\varXi _i) <\delta \), then

$$\begin{aligned} \max _{1\le i\le k}\left\| \varPsi (\xi ^i) - \varPsi (\tilde{\xi }^i) \right\| \le \alpha . \end{aligned}$$

Since \(\varXi \) is bounded, we can find a sequence \(\{\xi ^k\}_{k=1}^\infty \) such that the corresponding Voronoi tessellation \(\varXi _1,\cdots ,\varXi _k,\cdots \) satisfies

$$\begin{aligned} \lim _{k\rightarrow \infty }\max _{1\le i\le k}\mathrm {diam}(\varXi _i)=0. \end{aligned}$$

Hence there is \(\bar{k}>0\) such that \(\max _{1\le i\le k}\mathrm {diam}(\varXi _i)<\delta \) for any \(k\ge \bar{k}\).

Then it follows from (45) that

$$\begin{aligned} \left\| \mathbb {E}_{P_0}[\varPsi (\xi )] - \sum _{i=1}^k \bar{p}_i \varPsi (\xi ^i) \right\| \le \alpha . \end{aligned}$$

This, together with Assumption 1, indicates that \(\sum _{i=1}^k \bar{p}_i \varPsi (\xi ^i) \in \varGamma ,\) which implies the nonemptiness of \(\mathcal {P}_k\).
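The bounded-case argument above — partition \(\varXi \) into Voronoi cells around the points \(\xi ^i\), weight each \(\varPsi (\xi ^i)\) by the cell probability \(\bar{p}_i=P_0(\varXi _i)\), and let the cell diameters shrink — can be illustrated numerically. The following sketch is not the paper's procedure but a minimal Monte Carlo illustration under assumed choices (\(P_0\) uniform on \([0,1]^2\) and \(\varPsi (\xi )=\xi \)): it estimates the cell probabilities by nearest-center assignment and compares the discrete sum with \(\mathbb {E}_{P_0}[\varPsi (\xi )]\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed nominal distribution P0 = uniform on [0, 1]^2 and Psi(xi) = xi,
# purely for illustration.
samples = rng.uniform(0.0, 1.0, size=(100_000, 2))

# Cell centers xi^i defining the Voronoi tessellation Xi_1, ..., Xi_k.
k = 16
centers = rng.uniform(0.0, 1.0, size=(k, 2))

# Assign each sample to its nearest center, i.e. to its Voronoi cell.
dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
cell = dists.argmin(axis=1)

# Estimated cell probabilities p_bar_i = P0(Xi_i).
p_bar = np.bincount(cell, minlength=k) / len(samples)

# Discrete approximation sum_i p_bar_i Psi(xi^i) vs. the sample mean of Psi.
approx = (p_bar[:, None] * centers).sum(axis=0)
exact = samples.mean(axis=0)
print(np.linalg.norm(approx - exact))  # small; shrinks as the cells refine
```

Refining the tessellation (larger k with well-spread centers) drives the discrepancy toward zero, matching the diameter condition \(\max _{1\le i\le k}\mathrm {diam}(\varXi _i)<\delta \) in the proof.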

Now we consider the case that \(\varXi \) is unbounded. Let \(\varXi _b:=\{\xi \in \varXi :\left\| \xi \right\| \le b\}\) for \(b>0\). Denote a probability distribution \(\bar{P}_0\) supported on \(\varXi _b\) by

$$\begin{aligned} \bar{P}_0(\varXi _a)=\frac{P_0(\varXi _a\cap \varXi _b)}{P_0(\varXi _b)} \end{aligned}$$

for any measurable \(\varXi _a\subseteq \varXi \), where \(P_0\) is defined in Assumption 1. Note that

$$\begin{aligned} \lim _{b\rightarrow \infty }\frac{1}{P_0(\varXi _b)}=1~\text {and}~\lim _{b\rightarrow \infty } \int _{\varXi _b} \varPsi (\xi ) P_0(d\xi ) = \int _{\varXi } \varPsi (\xi ) P_0(d\xi )=\mathbb {E}_{P_0}[\varPsi (\xi )]. \end{aligned}$$

We have

$$\begin{aligned} \lim _{b\rightarrow \infty }\int _{\varXi _b} \varPsi (\xi ) \bar{P}_0(d\xi ) = \lim _{b\rightarrow \infty } \frac{1}{P_0(\varXi _b)}\int _{\varXi _b} \varPsi (\xi ) P_0(d\xi ) = \mathbb {E}_{P_0}[\varPsi (\xi )]. \end{aligned}$$

Therefore, there exists \(b_0>0\) such that, for any \(b\ge b_0\),

$$\begin{aligned} \left\| \mathbb {E}_{\bar{P}_0}[\varPsi (\xi )] - \mathbb {E}_{P_0}[\varPsi (\xi )]\right\| \le \frac{\alpha }{2}. \end{aligned}$$

From Assumption 1, we obtain

$$\begin{aligned} \mathbb {E}_{\bar{P}_0}[\varPsi (\xi )] + \frac{\alpha }{2}\mathbb {B}\subseteq \varGamma . \end{aligned}$$
(46)

Due to the boundedness of \(\varXi _b\) and (46), by the same proof for the case that \(\varXi \) is bounded, there exists a \(\bar{k}>0\) such that \(\mathcal {P}_k\) is nonempty for \(k\ge \bar{k}\). \(\square \)
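The truncation step for unbounded \(\varXi \) — restricting \(P_0\) to \(\varXi _b=\{\xi \in \varXi :\Vert \xi \Vert \le b\}\) and renormalizing by \(P_0(\varXi _b)\) — can likewise be checked numerically. A minimal sketch under assumed choices (\(P_0\) standard normal on \(\mathbb {R}\) and \(\varPsi (\xi )=\xi ^2\), so that \(\mathbb {E}_{P_0}[\varPsi (\xi )]=1\)):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed nominal P0 = standard normal on R (unbounded support) and
# Psi(xi) = xi^2, so E_{P0}[Psi(xi)] = 1; both are illustrative choices.
xi = rng.standard_normal(1_000_000)

for b in (1.0, 2.0, 4.0, 8.0):
    mask = np.abs(xi) <= b                          # truncated support Xi_b
    p_b = mask.mean()                               # P0(Xi_b) -> 1 as b grows
    cond_mean = (xi[mask] ** 2).sum() / mask.sum()  # E_{P0-bar}[Psi(xi)]
    print(b, p_b, cond_mean)
```

As b grows, \(P_0(\varXi _b)\rightarrow 1\) and the conditional (truncated) expectation approaches \(\mathbb {E}_{P_0}[\varPsi (\xi )]\), which is exactly the limit used to obtain (46).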

Proof

(The proof of Theorem 7) Since \((x^{k*},\mathbf {y}^{k*},p^{k*})\) is an accumulation point of \((x_\epsilon ^k,\mathbf {y}_\epsilon ^k, p_\epsilon ^k)\) as \(\epsilon \downarrow 0\), there exists a sequence \(\{\epsilon _j\}_{j= 1}^\infty \) with \(\epsilon _j\downarrow 0\) as \(j\rightarrow \infty \), such that \((x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k)\rightarrow (x^{k*},\mathbf {y}^{k*},p^{k*})\) as \(j\rightarrow \infty \). Based on (26), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} 0\in \nabla _x G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) + \mathcal {N}_X(x_{\epsilon _j}^k),\\ \mathbf {y}_{\epsilon _j}^k=\hat{\mathbf {y}}_{\epsilon _j}(x_{\epsilon _j}^k),\\ 0\in -\nabla _{p}G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) + \mathcal {N}_{\mathcal {P}_k}(p_{\epsilon _j}^k). \end{array}\right. } \end{aligned}$$

We know from Definitions 3, 4 and Lemma 3 that

$$\begin{aligned} {\left\{ \begin{array}{ll} 0\in \nabla _x G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) + \mathcal {N}_X(x_{\epsilon _j}^k),\\ {\left\{ \begin{array}{ll} \nabla _\mathbf {y} G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) - \lambda ^{k,j} - (\mathbf {M}^\top +\epsilon _j I) \mu ^{k,j} = 0,\\ \lambda _i^{k,j} =0 ~\text {for}~ i\in \mathcal {I}_{+0}^{k,j}(\mathbf {y}_{\epsilon _j}^k),~ \mu _i^{k,j} =0 ~\text {for}~ i\in \mathcal {I}_{0+}^{k,j}(\mathbf {y}_{\epsilon _j}^k),\\ \lambda _i^{k,j}\ge 0,~\mu _i^{k,j}\ge 0~\text {for}~ i\in \mathcal {I}_{00}^{k,j}(\mathbf {y}_{\epsilon _j}^k), \end{array}\right. }\\ 0\in -\nabla _{p}G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) + \mathcal {N}_{\mathcal {P}_k}(p_{\epsilon _j}^k), \end{array}\right. } \end{aligned}$$

where \(\{\lambda ^{k,j}\}_{j=1}^\infty \) and \(\{\mu ^{k,j}\}_{j=1}^\infty \) are two sequences of multipliers, and

$$\begin{aligned} \mathcal {I}_{+0}^{k,j}(\mathbf {y}_{\epsilon _j}^k)&=\{i:(\mathbf {y}_{\epsilon _j}^k)_i>0, ((\mathbf {M} + \epsilon _j I) \mathbf {y}_{\epsilon _j}^k + \mathbf {q}(x_{\epsilon _j}^k))_i=0\},\\ \mathcal {I}_{0+}^{k,j}(\mathbf {y}_{\epsilon _j}^k)&=\{i:(\mathbf {y}_{\epsilon _j}^k)_i=0, ((\mathbf {M} + \epsilon _j I) \mathbf {y}_{\epsilon _j}^k + \mathbf {q}(x_{\epsilon _j}^k))_i>0\},\\ \mathcal {I}_{00}^{k,j}(\mathbf {y}_{\epsilon _j}^k)&=\{i:(\mathbf {y}_{\epsilon _j}^k)_i=0, ((\mathbf {M} + \epsilon _j I) \mathbf {y}_{\epsilon _j}^k + \mathbf {q}(x_{\epsilon _j}^k))_i=0\}. \end{aligned}$$

Thus, for sufficiently large j, we have

$$\begin{aligned} {\left\{ \begin{array}{ll} 0\in \nabla _x G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) + \mathcal {N}_X(x_{\epsilon _j}^k),\\ {\left\{ \begin{array}{ll} \nabla _\mathbf {y} G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) - \lambda ^{k,j} - (\mathbf {M}^\top +\epsilon _j I) \mu ^{k,j} = 0,\\ \lambda _i^{k,j} =0 ~\text {for}~ i\in \mathcal {I}_{+0}^k(\mathbf {y}^{k*}),~ \mu _i^{k,j} =0 ~\text {for}~ i\in \mathcal {I}_{0+}^k(\mathbf {y}^{k*}), \end{array}\right. }\\ 0\in -\nabla _{p}G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) + \mathcal {N}_{\mathcal {P}_k}(p_{\epsilon _j}^k), \end{array}\right. } \end{aligned}$$
(47)

where

$$\begin{aligned} \mathcal {I}_{+0}^k(\mathbf {y}^{k*})&=\{i:\mathbf {y}^{k*}_i>0, (\mathbf {M} \mathbf {y}^{k*} + \mathbf {q}(x^{k*}))_i=0\},\\ \mathcal {I}_{0+}^k(\mathbf {y}^{k*})&=\{i:\mathbf {y}^{k*}_i=0, (\mathbf {M} \mathbf {y}^{k*} + \mathbf {q}(x^{k*}))_i>0\}. \end{aligned}$$

Next, we verify the boundedness of the sequences \(\{\lambda ^{k,j}\}_{j=1}^\infty \) and \(\{\mu ^{k,j}\}_{j=1}^\infty \). Notice that if \(\{\mu ^{k,j}\}_{j=1}^\infty \) is bounded, then the boundedness of \(\{(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k)\}_{j=1}^\infty \) and the continuous differentiability of G imply that \(\{\lambda ^{k,j}\}_{j=1}^\infty \) is bounded. Now assume that \(\{\mu ^{k,j}\}_{j=1}^\infty \) is unbounded. Dividing by \(\left\| \mu ^{k,j}\right\| \), we obtain

$$\begin{aligned} \frac{ \nabla _\mathbf {y} G(x_{\epsilon _j}^k,\mathbf {y}_{\epsilon _j}^k,p_{\epsilon _j}^k) }{\left\| \mu ^{k,j}\right\| } - \frac{\lambda ^{k,j}}{\left\| \mu ^{k,j}\right\| } - (\mathbf {M}^\top +\epsilon _j I) \frac{\mu ^{k,j}}{\left\| \mu ^{k,j}\right\| } = 0, \end{aligned}$$

which, since \(\left\| \mu ^{k,j}\right\| \rightarrow \infty \) as \(j\rightarrow \infty \), implies that

$$\begin{aligned} \frac{\lambda ^{k,j}}{\left\| \mu ^{k,j}\right\| } + \mathbf {M}^\top \frac{\mu ^{k,j}}{\left\| \mu ^{k,j}\right\| } \rightarrow 0 \end{aligned}$$
(48)

as \(j\rightarrow \infty \). Since \(\lambda _i^{k,j} =0\) for \(i\in \mathcal {I}_{+0}^k(\mathbf {y}^{k*})\) and \(\mu _i^{k,j} =0\) for \(i\in \mathcal {I}_{0+}^k(\mathbf {y}^{k*})\), we rewrite (48) as

$$\begin{aligned} \sum _{i\in \mathcal {I}_{0+}^k(\mathbf {y}^{k*})\cup \mathcal {I}_{00}^k(\mathbf {y}^{k*})}\frac{\lambda _i^{k,j}}{\left\| \mu ^{k,j}\right\| } \mathbf {e}_i + \sum _{i\in \mathcal {I}_{+0}^k(\mathbf {y}^{k*})\cup \mathcal {I}_{00}^k(\mathbf {y}^{k*})} \frac{\mu _i^{k,j}}{\left\| \mu ^{k,j}\right\| } \mathbf {M}^\top _i \rightarrow 0, \end{aligned}$$

where \(\mathcal {I}_{00}^k(\mathbf {y}^{k*})=\{i:\mathbf {y}^{k*}_i=0, (\mathbf {M} \mathbf {y}^{k*} + \mathbf {q}(x^{k*}))_i=0\}.\)

Then, by MPEC-LICQ at \(\mathbf {y}^{k*}\) for problem (23) with \((\bar{x},\bar{p})=(x^{k*},p^{k*})\) and \(\epsilon =0\), we obtain

$$\begin{aligned} \frac{\lambda _i^{k,j}}{\left\| \mu ^{k,j}\right\| }\rightarrow 0, i\in \mathcal {I}_{0+}^k(\mathbf {y}^{k*})\cup \mathcal {I}_{00}^k(\mathbf {y}^{k*}) ~\text {and}~ \frac{\mu _i^{k,j}}{\left\| \mu ^{k,j}\right\| }\rightarrow 0, i\in \mathcal {I}_{+0}^k(\mathbf {y}^{k*})\cup \mathcal {I}_{00}^k(\mathbf {y}^{k*}) \end{aligned}$$

as \(j\rightarrow \infty \). This contradicts the fact that \(\mu ^{k,j}/\left\| \mu ^{k,j}\right\| \) has unit norm, which, since \(\mu _i^{k,j}=0\) for \(i\in \mathcal {I}_{0+}^k(\mathbf {y}^{k*})\), forces \(\frac{\mu _i^{k,j}}{\left\| \mu ^{k,j}\right\| }\nrightarrow 0\) for some \(i\in \mathcal {I}_{+0}^k(\mathbf {y}^{k*})\cup \mathcal {I}_{00}^k(\mathbf {y}^{k*})\). Therefore, both \(\{\lambda ^{k,j}\}_{j=1}^\infty \) and \(\{\mu ^{k,j}\}_{j=1}^\infty \) are bounded. Without loss of generality, we assume that \(\lambda ^{k,j}\rightarrow \lambda ^{k*}\) and \(\mu ^{k,j}\rightarrow \mu ^{k*}\) as \(j\rightarrow \infty \). Therefore, by letting \(j\rightarrow \infty \), we have from (47) that

$$\begin{aligned} {\left\{ \begin{array}{ll} 0\in \nabla _x G(x^{k*},\mathbf {y}^{k*},p^{k*}) + \mathcal {N}_X(x^{k*}),\\ {\left\{ \begin{array}{ll} \nabla _\mathbf {y} G(x^{k*},\mathbf {y}^{k*},p^{k*}) - \lambda ^{k*} - \mathbf {M}^\top \mu ^{k*} = 0,\\ \lambda _i^{k*} =0 ~\text {for}~ i\in \mathcal {I}_{+0}^k(\mathbf {y}^{k*}),~ \mu _i^{k*} =0 ~\text {for}~ i\in \mathcal {I}_{0+}^k(\mathbf {y}^{k*}), \end{array}\right. }\\ 0\in -\nabla _{p}G(x^{k*},\mathbf {y}^{k*},p^{k*}) + \mathcal {N}_{\mathcal {P}_k}(p^{k*}). \end{array}\right. } \end{aligned}$$
(49)

Moreover, since \(\lambda _i^{k,j}\mu _i^{k,j}\ge 0\) for \(i=1,\ldots , mk\), we have \(\lambda _i^{k*}\mu _i^{k*}\ge 0\) for \(i\in \mathcal {I}_{00}^k(\mathbf {y}^{k*})\). This, together with (49), means that \((x^{k*},\mathbf {y}^{k*},p^{k*})\) is a block coordinatewise C-stationary point of problem (\(\mathrm {P}_k\)). \(\square \)
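The index sets \(\mathcal {I}_{+0}\), \(\mathcal {I}_{0+}\), \(\mathcal {I}_{00}\) used throughout the proof classify the components of an LCP solution according to which member of the complementary pair \(\mathbf {y}_i\), \((\mathbf {M}\mathbf {y}+\mathbf {q})_i\) vanishes. A minimal sketch of this classification (the matrix M, vector q and solution y below are illustrative assumptions, not data from the paper):

```python
import numpy as np

# A small monotone LCP: find y >= 0 with My + q >= 0 and y^T (My + q) = 0.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])   # symmetric positive definite => monotone
q = np.array([-3.0, 4.0, 0.0])

# y = (1.5, 0, 0) solves this LCP: My + q = (0, 5.5, 0) >= 0, y >= 0,
# and complementarity holds componentwise.
y = np.array([1.5, 0.0, 0.0])
w = M @ y + q

tol = 1e-12
I_plus0 = [i for i in range(3) if y[i] > tol and abs(w[i]) <= tol]        # y_i > 0, (My+q)_i = 0
I_0plus = [i for i in range(3) if abs(y[i]) <= tol and w[i] > tol]        # y_i = 0, (My+q)_i > 0
I_00 = [i for i in range(3) if abs(y[i]) <= tol and abs(w[i]) <= tol]     # biactive (degenerate)
print(I_plus0, I_0plus, I_00)  # -> [0] [1] [2]
```

The biactive set \(\mathcal {I}_{00}\) is where degeneracy occurs; as the last step of the proof shows, C-stationarity imposes the sign condition \(\lambda _i^{k*}\mu _i^{k*}\ge 0\) precisely on this set.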


About this article


Cite this article

Jiang, J., Chen, X. Pure characteristics demand models and distributionally robust mathematical programs with stochastic complementarity constraints. Math. Program. (2021). https://doi.org/10.1007/s10107-021-01720-4


Keywords

  • Distributionally robust
  • Stochastic equilibrium
  • Regularization
  • Discrete approximation
  • Pure characteristics demand

Mathematics Subject Classification

  • 90C15
  • 90C33
  • 90C26