Routing optimization with time windows under uncertainty

Mathematical Programming

Abstract

We study an a priori Traveling Salesman Problem with Time Windows (TSPTW) in which the travel times along the arcs are uncertain and the goal is to determine, within a budget constraint, a route for the service vehicle that arrives at the customers' locations within their stipulated time windows as well as possible. In particular, service at a customer's location cannot commence before the beginning of the time window, and any arrival after the end of the time window is considered late and constitutes poor customer service. To articulate the service level of the TSPTW under uncertainty, we propose a new decision criterion, called the essential riskiness index, whose computationally attractive feature of convexity enables us to formulate and solve the problem more effectively. As a decision criterion for articulating service levels, it takes into account both the probability of lateness and its magnitude, and it can be applied in contexts where the distributional information of the uncertain travel times is either fully or partially known. We propose a new formulation of the TSPTW in which we explicitly express the service starting time at each customer's location as a convex piecewise affine function of the travel times, which enables us to obtain a tractable formulation of the corresponding distributionally robust problem. We also show how to optimize the essential riskiness index via Benders decomposition and present cases where we can obtain closed-form solutions to the subproblems. Our numerical studies illustrate that this approach scales well with the number of samples used in the sample average approximation. The approach can be extended to more general settings, including the Vehicle Routing Problem with Time Windows with uncertain travel times and customers' demands.

References

  1. Adulyasak, Y., Jaillet, P.: Models and algorithms for stochastic and robust vehicle routing with deadlines. Transp. Sci. 50(2), 608–626 (2015)

  2. Agra, A., Christiansen, M., Figueiredo, R., Hvattum, L.M., Poss, M., Requejo, C.: Layered formulation for the robust vehicle routing problem with time windows. In: Lecture Notes in Computer Science. Springer, pp. 249–260 (2012)

  3. Agra, A., Christiansen, M., Figueiredo, R., Hvattum, L.M., Poss, M., Requejo, C.: The robust vehicle routing problem with time windows. Comput. Oper. Res. 40(3), 856–866 (2013)

  4. Ascheuer, N., Fischetti, M., Grötschel, M.: Solving the asymmetric travelling salesman problem with time windows by branch-and-cut. Math. Program. 90(3), 475–506 (2001)

  5. Aumann, R.J., Serrano, R.: An economic index of riskiness. J. Polit. Econ. 116(5), 810–836 (2008)

  6. Baldacci, R., Bartolini, E., Mingozzi, A., Roberti, R.: An exact solution framework for a broad class of vehicle routing problems. Comput. Manag. Sci. 7(3), 229–268 (2010)

  7. Baldacci, R., Mingozzi, A., Roberti, R.: Recent exact algorithms for solving the vehicle routing problem under capacity and time window constraints. Eur. J. Oper. Res. 218(1), 1–6 (2012)

  8. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, vol. 2. SIAM, Philadelphia (2001)

  9. Bertsimas, D., Sim, M.: Robust discrete optimization and network flows. Math. Program. 98(1–3), 49–71 (2003)

  10. Bertsimas, D., Sim, M., Zhang, M.: A practically efficient approach for solving adaptive distributionally robust linear optimization problems. Manag. Sci. (2017, Forthcoming)

  11. Brown, D.B., Sim, M.: Satisficing measures for analysis of risky positions. Manag. Sci. 55(1), 71–84 (2009)

  12. Charnes, A., Cooper, W.W.: Chance-constrained programming. Manag. Sci. 6(1), 73–79 (1959)

  13. Claus, A.: A new formulation for the travelling salesman problem. SIAM J. Algebr. Discrete Methods 5(1), 21–25 (1984)

  14. Dantzig, G.B., Ramser, J.H.: The truck dispatching problem. Manag. Sci. 6(1), 80–91 (1959)

  15. Dash, S., Günlük, O., Lodi, A., Tramontani, A.: A time bucket formulation for the traveling salesman problem with time windows. INFORMS J. Comput. 24(1), 132–147 (2012)

  16. Desaulniers, G., Madsen, O.B., Ropke, S.: Chapter 5: The vehicle routing problem with time windows. In: Vehicle Routing: Problems, Methods, and Applications, 2nd edn. Society for Industrial & Applied Mathematics (SIAM), pp. 119–159 (2014)

  17. Errico, F., Desaulniers, G., Gendreau, M., Rei, W., Rousseau, L.M.: A priori optimization with recourse for the vehicle routing problem with hard time windows and stochastic service times. Eur. J. Oper. Res. 249(1), 55–66 (2016a)

  18. Errico, F., Desaulniers, G., Gendreau, M., Rei, W., Rousseau, L.M.: The vehicle routing problem with hard time windows and stochastic service times. Eur. J. Transp. Logist. 1–29 (2016)

  19. Fisher, M.L., Jörnsten, K.O., Madsen, O.B.: Vehicle routing with time windows: two optimization algorithms. Oper. Res. 45(3), 488–492 (1997)

  20. Gendreau, M., Jabali, O., Rei, W.: Chapter 8: Stochastic vehicle routing problems. In: MOS-SIAM Series on Optimization, Society for Industrial and Applied Mathematics, pp. 213–239 (2014)

  21. Gounaris, C.E., Wiesemann, W., Floudas, C.A.: The robust capacitated vehicle routing problem under demand uncertainty. Oper. Res. 61(3), 677–693 (2013)

  22. Gupta, D., Denton, B.: Appointment scheduling in health care: challenges and opportunities. IIE Trans. 40(9), 800–819 (2008)

  23. Isii, K.: On sharpness of tchebycheff-type inequalities. Ann. Inst. Stat. Math. 14(1), 185–197 (1962)

  24. Jaillet, P., Qi, J., Sim, M.: Routing optimization under uncertainty. Oper. Res. 64(1), 186–200 (2016)

  25. Kao, E.P.C.: A preference order dynamic program for a stochastic traveling salesman problem. Oper. Res. 26(6), 1033–1045 (1978)

  26. Kenyon, A.S., Morton, D.P.: Stochastic vehicle routing with random travel times. Transp. Sci. 37(1), 69–82 (2003)

  27. Laporte, G.: Fifty years of vehicle routing. Transp. Sci. 43(4), 408–416 (2009)

  28. Laporte, G., Louveaux, F., Mercure, H.: The vehicle routing problem with stochastic travel times. Transp. Sci. 26(3), 161–170 (1992)

  29. Lee, C., Lee, K., Park, S.: Robust vehicle routing problem with deadlines and travel time/demand uncertainty. J. Oper. Res. Soc. 63(9), 1294–1306 (2011)

  30. Nemirovski, A., Shapiro, A.: Convex approximations of chance constrained programs. SIAM J. Optim. 17(4), 969–996 (2006)

  31. Pecin, D., Contardo, C., Desaulniers, G., Uchoa, E.: New enhancements for the exact solution of the vehicle routing problem with time windows. INFORMS J. Comput. 29(3), 489–502 (2017a)

  32. Pecin, D., Pessoa, A., Poggi, M., Uchoa, E.: Improved branch-cut-and-price for capacitated vehicle routing. Math. Program. Comput. 9(1), 61–100 (2017b)

  33. Popescu, I.: Robust mean-covariance solutions for stochastic optimization. Oper. Res. 55(1), 98–112 (2007)

  34. Qi, J.: Mitigating delays and unfairness in appointment systems. Manag. Sci. 63(2), 566–583 (2016)

  35. Rockafellar, R.T., Uryasev, S.: Optimization of conditional value-at-risk. J. Risk 2, 21–42 (2000)

  36. Scarf, H., Arrow, K., Karlin, S.: A min–max solution of an inventory problem. In: Studies in the Mathematical Theory of Inventory and Production, vol. 10, pp. 201–209 (1958)

  37. See, C.T., Sim, M.: Robust approximation to multiperiod inventory management. Oper. Res. 58(3), 583–594 (2010)

  38. Sniedovich, M.: Technical note—analysis of a preference order traveling salesman problem. Oper. Res. 29(6), 1234–1237 (1981)

  39. Taş, D., Gendreau, M., Dellaert, N., van Woensel, T., de Kok, A.: Vehicle routing with soft time windows and stochastic travel times: a column generation and branch-and-price solution approach. Eur. J. Oper. Res. 236(3), 789–799 (2014)

  40. Toth, P., Vigo, D. (eds.): Vehicle Routing: Problems, Methods, and Applications, 2nd edn. Society for Industrial & Applied Mathematics (SIAM), Philadelphia (2014)

  41. Vanderbeck, F., Wolsey, L.A.: Reformulation and decomposition of integer programs. In: 50 Years of Integer Programming 1958–2008, pp. 431–502 (2010)

  42. Verweij, B., Ahmed, S., Kleywegt, A.J., Nemhauser, G., Shapiro, A.: The sample average approximation method applied to stochastic routing problems: a computational study. Comput. Optim. Appl. 24(2–3), 289–333 (2003)

  43. Wiesemann, W., Kuhn, D., Sim, M.: Distributionally robust convex optimization. Oper. Res. 62(6), 1358–1376 (2014)

  44. Xin, L., Goldberg, D.A., Shapiro, A.: Time (in) consistency of multistage distributionally robust inventory models with moment constraints. arXiv preprint arXiv:1304.3074 (2013)

  45. Zhen, J., den Hertog, D., Sim, M.: Adjustable robust optimization via Fourier–Motzkin elimination. Oper. Res. (2017, Forthcoming)

Acknowledgements

The authors would like to thank the anonymous reviewers and the associate editor for their extremely helpful suggestions and very thorough review of the paper. The first author is financially supported by the China Scholarship Council, the National Research Foundation Singapore (NRF-RSS2016-004), and the National Natural Science Foundation of China (71725001). The third author is financially supported by a Singapore Ministry of Education Social Science Research Thematic Grant (MOE2016-SSRTG-059). The last author is funded by the National Natural Science Foundation of China (71420107028, 91747105).

Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Singapore Ministry of Education or the Singapore Government.

Author information

Corresponding author

Correspondence to Roberto Baldacci.

Proofs of analytical results

1.1 Proof of Proposition 2

It has been established that the variable \(\varvec{s}^l\), \(l \in \mathcal {N}\), represents the partial route from node 1 to node l along the route determined by \(\varvec{x}\), say \(\{(1, i_2), (i_2, i_3)\), \(\ldots , (i_{\check{\kappa }-1}, \underbrace{i_{\check{\kappa }}}_{=l})\), \(\ldots , (i_{n - 1},\underbrace{ i_n}_{ = n})\}\) so that

$$\begin{aligned} s^{l}_a =\left\{ \begin{array}{ll} 1, &{} \qquad \text{ if }\, a \in \{ (1, i_2), (i_2, i_3), \ldots , (i_{\check{\kappa }-1},l)\}, \\ 0, &{} \qquad \text{ otherwise } \end{array} \right. \end{aligned}$$
(33)

(see, for instance, [24]). We have from Eq. (1) that the service commencement time at node l can be determined recursively as follows,

$$\begin{aligned} \begin{array}{rcl} t_1 &{}=&{} \underline{\tau }_{1} = 0, \\ t_{i_2} &{}=&{} \max \{ t_1 + z_{1 i_2}, \underline{\tau }_{i_2} \}, \\ t_{i_3} &{}=&{} \max \{ t_{i_2} + z_{i_2 i_3}, \underline{\tau }_{i_3} \}, \\ &{} \vdots &{} \\ t_l &{}=&{} \max \{ t_{i_{\check{\kappa }- 1}} + z_{i_{\check{\kappa }- 1} l}, \underline{\tau }_l \}. \\ \end{array} \end{aligned}$$

Hence, we have

$$\begin{aligned} \begin{array}{rcl} t_l &{}=&{} \displaystyle \max \left\{ \max \left\{ \cdots \max \left\{ \max \left\{ t_1 + z_{1 i_2}, \underline{\tau }_{i_2} \right\} + z_{i_2 i_3}, \underline{\tau }_{i_3} \right\} \cdots , \underline{\tau }_{i_{\check{\kappa }- 1}} \right\} + z_{i_{\check{\kappa }- 1} l}, \underline{\tau }_l \right\} \\ &{}=&{} \displaystyle \max _{k \in \{1, i_2, i_3, \ldots , i_{\check{\kappa }- 1}, l\}} \left\{ \underline{\tau }_k + \sum _{a \in \{ (k, i_{\hat{\kappa }+ 1}), (i_{\hat{\kappa }+ 1}, i_{\hat{\kappa }+ 2}), \ldots , (i_{\check{\kappa }- 1}, l) \}} z_a \right\} , \end{array} \end{aligned}$$
(34)

where \(k=i_{\hat{\kappa }}\) represents a node along the partial route from node 1 to node l.

Next we show that the service commencement time at node l can also be determined by Eq. (2). For notational convenience, we define

$$\begin{aligned} \displaystyle t^k_l = \sum _{a \in \delta ^-(k)} s^l_a \underline{\tau }_k + \varvec{z}'\left( \varvec{s}^l - \varvec{s}^k \right) , \end{aligned}$$

for all \(k \in \underline{\mathcal {N}}\cup \{1\}\) and consider three cases as follows:

  1. i)

    When \(k = 1\), we have \(\underline{\tau }_1 = 0\) and \(\varvec{s}^1 = \varvec{0}\) and hence, we have

    $$\begin{aligned} t^1_l = \displaystyle \sum _{a \in \delta ^-(1)} s^{l}_a \underline{\tau }_1 + \varvec{z}'(\varvec{s}^{l} - \varvec{s}^{1}) = \displaystyle \underline{\tau }_1 + \sum _{a \in \{ (1, i_2), (i_2, i_3), \ldots , (i_{\check{\kappa }- 1}, l) \}} z_a. \end{aligned}$$

    Moreover, \(t^1_l \ge 0\) because \(\varvec{z} \ge \varvec{0}\).

  2. ii)

    When \(k \in \{i_2, i_3, \ldots , i_{\check{\kappa }- 1}, l\} \cap \underline{\mathcal {N}}\), we have \(\sum _{a \in \delta ^-(k)} s^{l}_a = 1\) and

    $$\begin{aligned} t^k_l = \displaystyle \sum _{a \in \delta ^-(k)} s^{l}_a \underline{\tau }_k + \varvec{z}'(\varvec{s}^{l} - \varvec{s}^{k}) = \underline{\tau }_k + \sum _{a \in \{ (k, i_{\hat{\kappa }+ 1}), (i_{\hat{\kappa }+ 1}, i_{\hat{\kappa }+ 2}), \ldots , (i_{\check{\kappa }- 1}, l) \}} z_a. \end{aligned}$$
  3. iii)

    When \(k \in \{i_{\check{\kappa }+ 1}, i_{\check{\kappa }+ 2}, \ldots , n\} \cap \underline{\mathcal {N}}\), we have \(\sum _{a \in \delta ^-(k)} s^{l}_a = 0\) and \((\varvec{s}^{l} - \varvec{s}^{k}) \le \varvec{0}\). Since \(\varvec{z} \ge \varvec{0}\), we have

    $$\begin{aligned} t^k_l = \sum _{a \in \delta ^-(k)} s^{l}_a \underline{\tau }_k + \varvec{z}'(\varvec{s}^{l} - \varvec{s}^{k}) \le 0 \le t^1_l. \end{aligned}$$

    Hence, \(\max \{ t^1_l,t^k_l \} = t^1_l\) for all \(k \in \{i_{\check{\kappa }+ 1}, i_{\check{\kappa }+ 2}, \ldots , n\} \cap \underline{\mathcal {N}}\).

Observe that we can express Eq. (2) as

$$\begin{aligned} \displaystyle t_l = \max _{k \in \underline{\mathcal {N}}\cup \{1\}} ~ t^k_l = \max _{k \in \{1\} \cup (\{i_2, i_3, \ldots , i_{\check{\kappa }- 1}, l\} \cap \underline{\mathcal {N}})} \left\{ \underline{\tau }_k + \sum _{a \in \{ (k, i_{\hat{\kappa }+ 1}), (i_{\hat{\kappa }+ 1}, i_{\hat{\kappa }+ 2}), \ldots , (i_{\check{\kappa }- 1}, l) \}} z_a \right\} . \end{aligned}$$
(35)

We note that for all \(k \in \{i_2, i_3, \ldots , i_{\check{\kappa }- 1}, l\} \backslash \underline{\mathcal {N}}\) we have \(\underline{\tau }_k = 0\) and, since \(\varvec{z} \ge \varvec{0}\), we also have

$$\begin{aligned} \underline{\tau }_k + \sum _{a \in \{ (k, i_{\hat{\kappa }+ 1}), (i_{\hat{\kappa }+ 1}, i_{\hat{\kappa }+ 2}), \ldots , (i_{\check{\kappa }- 1}, l) \}} z_a \le \underline{\tau }_1 + \sum _{a \in \{ (1, i_2), (i_2, i_3), \ldots , (k, i_{\hat{\kappa }+ 1}), \ldots , (i_{\check{\kappa }- 1}, l) \}} z_a = t^1_l. \end{aligned}$$

Hence, \(\max \{ t^1_l,t^k_l \} = t^1_l\) for all \(k \in \{i_2, i_3, \ldots , i_{\check{\kappa }- 1}, l\} \backslash \underline{\mathcal {N}}\). Therefore, taking these conditions into account, we have shown the equivalence of Eqs. (34), (35) and (2).
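
The equivalence established above is easy to check numerically. Below is a minimal Python sketch (illustrative only; the data are synthetic and the nodes of a single fixed route are relabelled 1 to n): it computes the service start times both by the recursion preceding Eq. (34) and by the max-over-prefix expression of Eq. (34), and confirms that the two coincide.

```python
# Minimal numerical check of Proposition 2 on synthetic data: the recursion
# t_l = max{t_{l-1} + z_{l-1,l}, tau_lo_l} agrees with the closed form (34).
import numpy as np

rng = np.random.default_rng(1)
n = 6                                   # nodes 1, ..., n of a fixed route (0-indexed below)
z = rng.uniform(0.5, 2.0, n - 1)        # nonnegative travel times z_{l, l+1}
tau_lo = np.concatenate(([0.0], rng.uniform(0.0, 5.0, n - 1)))   # tau_lo_1 = 0

# recursion from Eq. (1)
t_rec = np.zeros(n)
for l in range(1, n):
    t_rec[l] = max(t_rec[l - 1] + z[l - 1], tau_lo[l])

# closed form (34): maximum over all prefix nodes k of tau_lo_k plus the travel
# time accumulated along the sub-path from k to l
t_max = np.zeros(n)
for l in range(1, n):
    t_max[l] = max(tau_lo[k] + z[k:l].sum() for k in range(l + 1))

assert np.allclose(t_rec, t_max)
print(t_rec)
```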

1.2 Proof of Proposition 3

  1. i)

    Satisficing: If \(\mathbb {P}\left( \tilde{\xi } \le 0 \right) =1\), then \(\displaystyle \mathbb {E}_{\mathbb {P}}\left( \max \{\tilde{\xi }, -\alpha \}\right) \le 0\) for all \(\alpha \ge 0\) and \(\rho _E(\tilde{\xi }) = 0\). Conversely, if \(\rho _E(\tilde{\xi }) = 0\), we have \(\displaystyle \mathbb {E}_{\mathbb {P}}\left( \max \{\tilde{\xi }, 0\}\right) \le 0\), which implies \(\mathbb {P}\left( \tilde{\xi } \le 0 \right) =1\).

  2. ii)

    Infeasibility: For any \(\alpha \in [0, \infty )\), if \(\displaystyle \mathbb {E}_{\mathbb {P}} \left( \tilde{\xi } \right) > 0\), then \(\displaystyle \mathbb {E}_{\mathbb {P}}\left( \max \{\tilde{\xi }, -\alpha \} \right) > 0\). However, the definition requires

    $$\begin{aligned} \displaystyle \mathbb {E}_{\mathbb {P}}\left( \max \{\tilde{\xi }, -\alpha \} \right) \le 0. \end{aligned}$$

    We conclude that \(\rho _E(\tilde{\xi }) = \min \emptyset = \infty \).

  3. iii)

    Convexity: We denote \(\alpha _1^* = \rho _{E}\left( \tilde{\xi }_1\right) \) and \(\alpha _2^* = \rho _{E}\left( \tilde{\xi }_2\right) \). Hence, \(\mathbb {E}_\mathbb {P}\left( \max \{\tilde{\xi }_1, -\alpha _1^*\}\right) \le 0\) and \(\mathbb {E}_\mathbb {P}\left( \max \{\tilde{\xi }_2, -\alpha _2^*\}\right) \le 0\). Since the function \(\mathbb {E}_\mathbb {P}\left( \max \{\tilde{\xi }, -\alpha \}\right) \) is jointly convex in \(\tilde{\xi }\) and \(\alpha \), we have for \(\lambda \in [0,1]\),

    $$\begin{aligned} \begin{array}{rl} &{} \mathbb {E}_\mathbb {P}\left( \max \{\lambda \tilde{\xi }_1+(1-\lambda )\tilde{\xi }_2, -\left( \lambda \alpha _1^* + (1 - \lambda )\alpha _2^*\right) \}\right) \\ &{}\quad \le \lambda \mathbb {E}_\mathbb {P}\left( \max \{\tilde{\xi }_1, -\alpha _1^*\}\right) + (1 - \lambda ) \mathbb {E}_\mathbb {P}\left( \max \{\tilde{\xi }_2, -\alpha _2^*\}\right) \\ &{}\quad \le 0. \end{array} \end{aligned}$$

    Hence, \(\alpha _\lambda = \lambda \alpha _1^* + (1 - \lambda )\alpha _2^*\) satisfies \(\mathbb {E}_\mathbb {P}\left( \max \{\lambda \tilde{\xi }_1+(1-\lambda )\tilde{\xi }_2, -\alpha _\lambda \}\right) \le 0\) and \(\rho _{E}\left( \lambda \tilde{\xi }_1+(1-\lambda )\tilde{\xi }_2\right) \le \alpha _\lambda = \lambda \rho _{E}\left( \tilde{\xi }_1\right) +(1-\lambda )\rho _{E}\left( \tilde{\xi }_2\right) \).

  4. iv)

    Delay bounds: The bound is true for \(\rho _E\left( \tilde{\xi }\right) = \infty \) and \(\rho _E\left( \tilde{\xi }\right) = 0\), since the latter would imply \(\mathbb {P}\left( \tilde{\xi } > 0 \right) =0\). For \(\rho _E\left( \tilde{\xi }\right) \in (0,\infty )\), we let \(\alpha ^* = \rho _E\left( \tilde{\xi }\right) \) and hence we have

    $$\begin{aligned} \begin{array}{rcl} \mathbb {P}\left( \tilde{\xi }>\alpha ^* \theta \right) &{}=&{} \displaystyle \mathbb {P}\left( \tilde{\xi } + \alpha ^*> \alpha ^*\theta + \alpha ^* \right) \\ &{}\le &{} \displaystyle \mathbb {P}\left( \left( \tilde{\xi } + \alpha ^* \right) ^+ > \alpha ^* (1 + \theta ) \right) \\ &{}\le &{} \displaystyle \frac{\mathbb {E}\left( (\tilde{\xi } + \alpha ^*)^+\right) }{ \alpha ^* (1 + \theta ) } \\ &{}\le &{} \displaystyle \frac{\alpha ^*}{\alpha ^* (1 + \theta )} \\ &{} = &{} \displaystyle \frac{1}{1+\theta }. \end{array} \end{aligned}$$

    The second inequality holds by the Markov inequality; the third inequality is due to \(\mathbb {E}_{\mathbb {P}}\left( \max \{ \tilde{\xi } + \alpha ^*, 0 \} \right) \le \alpha ^*\), which is implied by the definition.
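
The delay bound in part iv) can also be checked empirically. The following Python sketch (illustrative only; the lateness samples are synthetic and the helper name essential_riskiness is ours) estimates \(\rho _E(\tilde{\xi })\) from samples by bisection on \(\alpha \), exploiting the fact that \(\mathbb {E}_{\mathbb {P}}\left( \max \{\tilde{\xi }, -\alpha \}\right) \) is nonincreasing in \(\alpha \), and then compares the empirical lateness probabilities with the bound \(1/(1+\theta )\).

```python
# Empirical check of the bound P(xi > rho * theta) <= 1 / (1 + theta), where
# rho = min{alpha >= 0 : E[max(xi, -alpha)] <= 0} is the essential riskiness index.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(loc=-1.0, scale=2.0, size=200_000)   # synthetic lateness samples

def essential_riskiness(samples, tol=1e-8):
    """Smallest alpha >= 0 with mean(max(samples, -alpha)) <= 0; infinity if none exists."""
    def g(a):                                        # nonincreasing in a
        return np.maximum(samples, -a).mean()
    if g(0.0) <= 0.0:
        return 0.0                                   # property i): all samples <= 0
    if samples.mean() > 0.0:
        return np.inf                                # property ii): no feasible alpha
    lo, hi = 0.0, 1.0
    while g(hi) > 0.0:                               # find an upper bracket
        hi *= 2.0
    while hi - lo > tol:                             # bisection on the bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return hi

rho = essential_riskiness(xi)
for theta in (0.5, 1.0, 2.0, 5.0):
    print(theta, (xi > rho * theta).mean(), "<=", 1.0 / (1.0 + theta))
```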

1.3 Proof of Theorem 1

We denote \(\tilde{{\varvec{u}}} = \tilde{{\varvec{z}}} - \varvec{\mu }\) and rewrite the ambiguity set as

$$\begin{aligned} \mathbb {G}= \left\{ \mathbb {Q}\in \mathcal {P} ~\left| ~ \begin{array}{l} \displaystyle \mathbb {E}_{\mathbb {Q}}(\tilde{{\varvec{u}}})=\varvec{0} \\ \displaystyle \mathbb {E}_{\mathbb {Q}}\left( \tilde{{\varvec{u}}} \tilde{{\varvec{u}}}'\right) = \varvec{\varSigma } \\ \end{array}\right. \right\} . \end{aligned}$$

For each \(l \in \overline{\mathcal {N}}\), we determine the worst-case expectation

$$\begin{aligned} \sup _{\mathbb {Q}\in \mathbb {G}}\mathbb {E}_{\mathbb {Q}} \left( \max \left\{ \max _{k \in \underline{\mathcal {N}}\cup \{1\}} \left\{ \sum _{a \in \delta ^-(k)} s^{l}_a \underline{\tau }_k + (\tilde{{\varvec{u}}}+\varvec{\mu })^\prime (\varvec{s}^{l} - \varvec{s}^{k}) \right\} - \overline{\tau }_l + \alpha _l, 0 \right\} \right) \end{aligned}$$

by formulating the following optimization problem:

$$\begin{aligned} \begin{array}{rl} \sup &{} \displaystyle \int _{\mathbb {R}^{|\mathcal {A}|}} \left( \max \left\{ \max _{k \in \underline{\mathcal {N}}\cup \{1\}} \left\{ \sum _{a \in \delta ^-(k)} s^{l}_a \underline{\tau }_k + \left( \varvec{u} + \varvec{\mu } \right) ^\prime \left( \varvec{s}^{l} - \varvec{s}^{k}\right) \right\} - \overline{\tau }_l + \alpha _l, 0 \right\} \right) d\mathbb {Q}\\ \text{ s.t. } &{} \displaystyle \int _{\mathbb {R}^{|\mathcal {A}|}}~ \varvec{u}~d\mathbb {Q}= \varvec{0}, \\ &{} \displaystyle \int _{\mathbb {R}^{|\mathcal {A}|}}~\varvec{u} \varvec{u}'~ d\mathbb {Q}= \varvec{\varSigma }, \\ &{} \displaystyle \int _{\mathbb {R}^{|\mathcal {A}|}} ~d\mathbb {Q}= 1, \\ &{} \displaystyle d\mathbb {Q}\ge 0. \end{array} \end{aligned}$$

Its dual problem is given as

$$\begin{aligned} \begin{array}{rll} \inf &{} \displaystyle v_{l0} + \mathrm{tr}(\varvec{\varSigma }\varvec{V}_l) \\ \text{ s.t. } &{} \displaystyle v_{l0} + \varvec{u}' \varvec{v}_l + \varvec{u}' \varvec{V}_l \varvec{u} \ge 0, &{} \forall \varvec{u} \in \mathbb {R}^{|\mathcal {A}|}, \\ &{} \displaystyle v_{l0} + \varvec{u}' \varvec{v}_l + \varvec{u}' \varvec{V}_l \varvec{u} \ge \sum _{a \in \delta ^-(k)} s^{l}_a \underline{\tau }_k + (\varvec{u} + \varvec{\mu })^\prime (\varvec{s}^{l} - \varvec{s}^{k}) - \overline{\tau }_l + \alpha _l, &{} \forall k \in \underline{\mathcal {N}}\cup \{1\}, \\ &{}&{} \varvec{u} \in \mathbb {R}^{|\mathcal {A}|}, \\ \end{array} \end{aligned}$$
(36)

for which strong duality holds and their objectives coincide (see, for instance, [23]). We express the first constraint in Problem (36) equivalently as

$$\begin{aligned} \begin{bmatrix} 1 \\ \varvec{u} \end{bmatrix}' \begin{bmatrix} v_{l0}&\displaystyle \quad \frac{\varvec{v}_l^{\prime }}{2} \\ \displaystyle \frac{\varvec{v}_l}{2}&\quad \varvec{V}_l \end{bmatrix} \begin{bmatrix} 1 \\ \varvec{u} \end{bmatrix} \ge 0,\quad \forall \varvec{u} \in \mathbb {R}^{|\mathcal {A}|}, \end{aligned}$$

or further as

$$\begin{aligned} \displaystyle \begin{bmatrix} v_{l0}&\displaystyle \quad \frac{\varvec{v}_l^{\prime }}{2} \\ \displaystyle \frac{\varvec{v}_l}{2}&\quad \varvec{V}_l \end{bmatrix} \in \mathbb {S}_+^{|\mathcal {A}| + 1}. \end{aligned}$$

Similarly, we express the second constraint in Problem (36) equivalently as

$$\begin{aligned} \begin{bmatrix} \displaystyle v_{l0} - \sum _{a \in \delta ^-(k)} s^{l}_a \underline{\tau }_k - \varvec{\mu }'(\varvec{s}^l - \varvec{s}^k) + \overline{\tau }_l - \alpha _l&\displaystyle \quad \frac{(\varvec{v}_l - \varvec{s}^l + \varvec{s}^k)'}{2} \\ \displaystyle \frac{\varvec{v}_l - \varvec{s}^l + \varvec{s}^k}{2}&\quad \varvec{V}_l \end{bmatrix} \in \mathbb {S}_+^{|\mathcal {A}| + 1}. \end{aligned}$$

Hence, the first constraint of Problem (10) is satisfied if and only if there exists \(v_{l0} \in \mathbb {R}\), \(\varvec{v}_l \in \mathbb {R}^{|\mathcal {A}|}\) and \(\varvec{V}_l \in \mathbb {R}^{|\mathcal {A}| \times |\mathcal {A}|}\) for \(l \in \overline{\mathcal {N}}\) that are feasible in the following system of conic constraints,

$$\begin{aligned} \begin{array}{ll} \displaystyle v_{l0} + \mathrm{tr}(\varvec{\varSigma }\varvec{V}_l)\le \alpha _l, &{} \forall l \in \overline{\mathcal {N}}, \\ \displaystyle \begin{bmatrix} v_{l0} &{} \displaystyle \quad \frac{\varvec{v}_l^{\prime }}{2} \\ \displaystyle \frac{\varvec{v}_l}{2} &{} \quad \varvec{V}_l \end{bmatrix} \in \mathbb {S}_+^{|\mathcal {A}| + 1}, &{} \forall l \in \overline{\mathcal {N}}, \\ \begin{bmatrix} \displaystyle v_{l0} - \sum _{a \in \delta ^-(k)} s^{l}_a \underline{\tau }_k - \varvec{\mu }'(\varvec{s}^l - \varvec{s}^k) + \overline{\tau }_l - \alpha _l &{} \displaystyle \quad \frac{(\varvec{v}_l - \varvec{s}^l + \varvec{s}^k)'}{2} \\ \displaystyle \frac{\varvec{v}_l - \varvec{s}^l + \varvec{s}^k}{2} &{} \quad \varvec{V}_l \end{bmatrix} \in \mathbb {S}_+^{|\mathcal {A}| + 1}, &{} \forall l \in \overline{\mathcal {N}}, \\ &{}k \in \underline{\mathcal {N}}\cup \{1\}. \end{array} \end{aligned}$$

Substituting them in Problem (10), we obtain Problem (11).
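
For concreteness, the sketch below (Python; it assumes the cvxpy package with an SDP-capable solver such as SCS, and uses toy values for \(\varvec{\mu }\), \(\varvec{\varSigma }\), the indicators \(\varvec{s}^l, \varvec{s}^k\) and the time-window data, none of which come from the paper's instances) solves the dual SDP (36) for a single node l. Its optimal value is the worst-case expectation above, so the corresponding constraint of Problem (10) holds if and only if this value is at most \(\alpha _l\).

```python
# Illustrative sketch: solve the dual SDP (36) for one node l with toy data.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 3                                          # |A|, number of arcs (toy size)
mu = rng.uniform(1.0, 2.0, n)                  # mean travel times
B = rng.normal(size=(n, n))
Sigma = B @ B.T + np.eye(n)                    # positive definite covariance

s_l = np.array([1.0, 1.0, 0.0])                # partial-route indicator s^l (toy)
# for each k: (constant sum_{a in delta^-(k)} s^l_a * tau_lo_k, indicator s^k)
nodes = [(0.0, np.zeros(n)),                   # k = 1: tau_lo_1 = 0, s^1 = 0
         (1.5, np.array([1.0, 0.0, 0.0]))]     # an intermediate node on the route
tau_up_l, alpha_l = 6.0, 0.5

v0 = cp.Variable()
v = cp.Variable(n)
V = cp.Variable((n, n), symmetric=True)
constraints = []

def psd_block(t, w):
    """Constrain the block matrix [[t, w'/2], [w/2, V]] to be positive semidefinite."""
    M = cp.Variable((n + 1, n + 1), PSD=True)
    constraints.extend([M[0, 0] == t, M[0, 1:] == w / 2, M[1:, 1:] == V])

psd_block(v0, v)                               # first constraint family of (36)
for c_k, s_k in nodes:                         # second constraint family of (36)
    psd_block(v0 - (c_k + mu @ (s_l - s_k) - tau_up_l + alpha_l), v - (s_l - s_k))

prob = cp.Problem(cp.Minimize(v0 + cp.trace(Sigma @ V)), constraints)
prob.solve(solver=cp.SCS)
print("worst-case expectation:", prob.value, " <= alpha_l?", prob.value <= alpha_l)
```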

1.4 Proof of Theorem 2

It suffices to show that for a given \(s_0 \in \mathbb {R}, \varvec{s} \in \mathbb {R}^{|\mathcal {A}|}, \varvec{s} \ne \varvec{0}, \alpha \ge 0\), the following constraints are feasible

$$\begin{aligned} \begin{array}{l} \displaystyle v_{0} + \mathrm{tr}(\varvec{\varSigma }\varvec{V}) \le \alpha \\ \begin{bmatrix} \displaystyle v_{0} - s_0 &{} \displaystyle \quad \frac{(\varvec{v} - \varvec{s})'}{2} \\ \displaystyle \frac{\varvec{v} - \varvec{s}}{2} &{} \quad \varvec{V} \end{bmatrix} \in \mathbb {S}_+^{|\mathcal {A}| + 1}, \\ \begin{bmatrix} v_{0} &{} \displaystyle \quad \frac{\varvec{v}^{\prime }}{2} \\ \displaystyle \frac{\varvec{v}}{2} &{} \quad \varvec{V} \end{bmatrix} \in \mathbb {S}_+^{|\mathcal {A}| + 1}, \\ \end{array} \end{aligned}$$
(37)

for some \(v_0 \in \mathbb {R}, \varvec{v} \in \mathbb {R}^{|\mathcal {A}|}, \varvec{V} \in \mathbb {S}_+^{|\mathcal {A}| }\) if and only if

$$\begin{aligned} \varvec{s}^{\prime }\varvec{\varSigma }\varvec{s} \le 4\alpha ( \alpha -s_0). \end{aligned}$$
(38)

For the “if” direction, suppose that constraint (38) holds. Since \(\varvec{s}'\varvec{\varSigma }\varvec{s}>0\), we have \(\alpha >0\) and \(\alpha - s_0> 0\). Let

$$\begin{aligned} \begin{array}{rcl} v_{0} &{}=&{} \displaystyle \frac{\left( s_{0} + \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) ^2}{4 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}}, \\ \varvec{v} &{}=&{} \displaystyle \frac{\left( s_{0} + \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) \varvec{s}}{2 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}}, \\ \varvec{V} &{}=&{} \displaystyle \frac{\varvec{s} \varvec{s}^{\prime }}{4 \sqrt{ s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s} } }. \\ \end{array} \end{aligned}$$

Observe that for all \(\varvec{u} \in \mathbb {R}^{|\mathcal {A}|}\),

$$\begin{aligned} \begin{array}{rcll} &{}&{}\begin{bmatrix} 1 \\ \varvec{u} \end{bmatrix}^\prime \begin{bmatrix} v_{0} &{} \displaystyle \quad \frac{\varvec{v}^{\prime }}{2} \\ \displaystyle \frac{\varvec{v}}{2} &{} \quad \varvec{V} \end{bmatrix} \begin{bmatrix} 1 \\ \varvec{u} \end{bmatrix} \\ &{}&{}\quad = \displaystyle \frac{\left( s_{0} + \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) ^2}{4 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} + \frac{\left( s_{0} + \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) }{2 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} \varvec{s}'\varvec{u}+ \frac{(\varvec{s}'\varvec{u})^2}{4 \sqrt{ s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s} } }\\ &{}&{}\quad = \displaystyle \frac{\left( s_{0} + \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}} + \varvec{s}'\varvec{u}\right) ^2}{4 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} \ge 0, \end{array} \end{aligned}$$

and

$$\begin{aligned}&\begin{bmatrix} 1 \\ \varvec{u} \end{bmatrix}^\prime \begin{bmatrix} v_{0} -s_0&\displaystyle \quad \frac{(\varvec{v}-\varvec{s})^{\prime }}{2} \\ \displaystyle \frac{(\varvec{v}-\varvec{s})}{2}&\quad \varvec{V} \end{bmatrix} \begin{bmatrix} 1 \\ \varvec{u} \end{bmatrix} \\&\quad = \displaystyle \frac{\left( s_{0} + \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) ^2}{4 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} -s_0+ \left( \frac{\left( s_{0} + \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) }{2 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} -1\right) \varvec{s}'\varvec{u} + \frac{(\varvec{s}'\varvec{u})^2}{4 \sqrt{ s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s} } }\\&\quad = \displaystyle \frac{\left( s_{0} - \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) ^2}{4 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} + \frac{\left( s_{0} - \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) }{2 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} \varvec{s}'\varvec{u} + \frac{(\varvec{s}'\varvec{u})^2}{4 \sqrt{ s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s} } }\\&\quad = \displaystyle \frac{\left( s_{0} - \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}} + \varvec{s}'\varvec{u}\right) ^2}{4 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} \ge 0. \end{aligned}$$

Moreover,

$$\begin{aligned} \begin{array}{rcll} &{}&{} \displaystyle \varvec{s}^{\prime }\varvec{\varSigma }\varvec{s} \le 4\alpha ( \alpha -s_0) \\ &{}&{}\quad \Rightarrow \displaystyle s_0^2 + \varvec{s}^{\prime }\varvec{\varSigma }\varvec{s} \le ( 2\alpha -s_0)^2 \\ &{}&{}\quad \Rightarrow \displaystyle \sqrt{s_0^2 + \varvec{s}^{\prime }\varvec{\varSigma }\varvec{s}} \le 2\alpha -s_0 &{} \text{ since } \,2\alpha -s_0> \alpha -s_0>0\\ &{}&{}\quad \Rightarrow \displaystyle \frac{1}{2}s_0 + \frac{1}{2}\sqrt{s_0^2 + \varvec{s}'\varvec{\varSigma }\varvec{s}} \le \alpha \\ &{}&{}\quad \Rightarrow \displaystyle \frac{2s_0 \sqrt{s_0^2 + \varvec{s}^{\prime }\varvec{\varSigma }\varvec{s}} +2(s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s})}{4 \sqrt{s_0^2 + \varvec{s}^{\prime }\varvec{\varSigma }\varvec{s}} } \le \alpha &{} \text{ since } \,\varvec{s}'\varvec{\varSigma }\varvec{s}>0\\ &{}&{}\quad \Rightarrow \displaystyle \frac{\left( s_{0} + \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}\right) ^2}{4 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}} + \frac{\varvec{s}^\prime \varvec{\varSigma } \varvec{s}}{4 \sqrt{ s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s} }} \le \alpha \\ &{}&{}\quad \Rightarrow \displaystyle v_{0} + \mathrm{tr}(\varvec{\varSigma }\varvec{V}) \le \alpha . \end{array} \end{aligned}$$

Hence, the constraints in (37) are also feasible.

Conversely, suppose the constraints in (37) are feasible for some \(v_0 \in \mathbb {R}, \varvec{v} \in \mathbb {R}^{|\mathcal {A}|}, \varvec{V} \in \mathbb {S}_+^{|\mathcal {A}| }\). Let

$$\begin{aligned} \begin{array}{rcl} r_{0} &{}=&{} \displaystyle \frac{1}{2} + \frac{s_{0}}{2\sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}}, \\ \varvec{r} &{}=&{} \displaystyle \frac{\varvec{\varSigma } \varvec{s}}{2 \sqrt{s_{0}^2 + \varvec{s}^{\prime } \varvec{\varSigma }\varvec{s}}}. \\ \end{array} \end{aligned}$$

Note that since \(\varvec{s}'\varvec{\varSigma }\varvec{s}>0\), we have \(r_0 \in (0,1)\). Observe that

$$\begin{aligned} \begin{bmatrix} r_{0}&\quad \varvec{r}^{\prime } \\ \varvec{r}&\quad \frac{1}{r_0} \varvec{r}\varvec{r}' \end{bmatrix} \in \mathbb {S}^{|\mathcal {A}|+1}_+ \quad \text{ and } \quad \begin{bmatrix} 1-r_{0}&\quad -\varvec{r}^{\prime } \\ -\varvec{r}&\quad \varvec{\varSigma }-\frac{1}{r_0} \varvec{r}\varvec{r}' \end{bmatrix} \in \mathbb {S}^{|\mathcal {A}|+1}_+ \end{aligned}$$

since for all \(\varvec{w} \in \mathbb {R}^{|\mathcal {A}|}\),

$$\begin{aligned} \begin{bmatrix} 1 \\ \varvec{w} \end{bmatrix}^\prime \begin{bmatrix} r_{0}&\quad \varvec{r}^{\prime } \\ \varvec{r}&\quad \frac{1}{r_0} \varvec{r}\varvec{r}' \end{bmatrix} \begin{bmatrix} 1 \\ \varvec{w} \end{bmatrix} = r_0 + 2\varvec{r}'\varvec{w} + \frac{1}{r_0}(\varvec{r}'\varvec{w})^2 = \left( \sqrt{r_0} + \frac{1}{\sqrt{r_0}} \varvec{r}'\varvec{w}\right) ^2\ge 0 \end{aligned}$$

and

$$\begin{aligned} \begin{array}{rcll} &{}&{}\begin{bmatrix} 1 \\ \varvec{w} \end{bmatrix}^\prime \begin{bmatrix} 1-r_{0} &{} \quad -\varvec{r}^{\prime } \\ -\varvec{r} &{} \quad \varvec{\varSigma }- \frac{1}{r_0} \varvec{r}\varvec{r}' \end{bmatrix} \begin{bmatrix} 1 \\ \varvec{w} \end{bmatrix}\\ &{}&{}\quad = \displaystyle 1-r_0 - 2\varvec{r}'\varvec{w} - \frac{1}{r_0}(\varvec{r}'\varvec{w})^2 + \varvec{w}'\varvec{\varSigma }\varvec{w} \\ &{}&{}\quad = \displaystyle \left( \sqrt{1-r_0} - \frac{1}{\sqrt{1-r_0}}\varvec{r}'\varvec{w} \right) ^2 -\left( \frac{1}{r_0} + \frac{1}{1-r_0} \right) (\varvec{r}'\varvec{w})^2 + \varvec{w}'\varvec{\varSigma }\varvec{w} \\ &{}&{}\quad \ge \displaystyle - \frac{1}{r_0(1-r_0)} (\varvec{r}'\varvec{w})^2 + \varvec{w}'\varvec{\varSigma }\varvec{w} \\ &{}&{}\quad = \displaystyle - \frac{4\left( s_0^2 + \varvec{s}'\varvec{\varSigma }\varvec{s}\right) }{\varvec{s}'\varvec{\varSigma }\varvec{s}} (\varvec{r}'\varvec{w})^2 + \varvec{w}'\varvec{\varSigma }\varvec{w} \\ &{}&{}\quad = \displaystyle - \frac{1 }{\varvec{s}'\varvec{\varSigma }\varvec{s}} (\varvec{w}'\varvec{\varSigma }\varvec{s})^2 + \varvec{w}'\varvec{\varSigma }\varvec{w} \\ &{}&{}\quad = \displaystyle - \frac{1 }{\varvec{s}'\varvec{\varSigma }\varvec{s}} ((\varvec{\varSigma }^{1/2}\varvec{w})'\varvec{\varSigma }^{1/2}\varvec{s})^2 + \varvec{w}'\varvec{\varSigma }\varvec{w} \\ &{}&{}\quad \ge \displaystyle - \frac{1 }{\varvec{s}'\varvec{\varSigma }\varvec{s}} (\varvec{s}'\varvec{\varSigma }\varvec{s})(\varvec{w}'\varvec{\varSigma }\varvec{w}) + \varvec{w}'\varvec{\varSigma }\varvec{w}\qquad \qquad \qquad \qquad \text{ Cauchy--Schwarz } \text{ inequality } &{} \\ &{}&{}\quad = 0. \end{array} \end{aligned}$$

Therefore,

$$\begin{aligned}&\mathrm{tr} \left( \begin{bmatrix} v_{0}&\displaystyle \quad \frac{\varvec{v}^{\prime }}{2} \\ \displaystyle \frac{\varvec{v}}{2}&\quad \varvec{V} \end{bmatrix} \begin{bmatrix} 1-r_{0}&\quad -\varvec{r}^{\prime } \\ -\varvec{r}&\quad \varvec{\varSigma }-\frac{1}{r_0} \varvec{r}\varvec{r}' \end{bmatrix} \right) \\&\quad +\, \mathrm{tr} \left( \begin{bmatrix} \displaystyle v_{0} - s_0&\displaystyle \quad \frac{(\varvec{v} - \varvec{s})'}{2} \\ \displaystyle \frac{\varvec{v} - \varvec{s}}{2}&\quad \varvec{V} \end{bmatrix} \begin{bmatrix} r_{0}&\quad \varvec{r}^{\prime } \\ \varvec{r}&\quad \frac{1}{r_0} \varvec{r}\varvec{r}' \end{bmatrix} \right) \ge 0 \end{aligned}$$

or equivalently

$$\begin{aligned} v_0+ \mathrm{tr}(\varvec{\varSigma }\varvec{V} ) \ge s_0 r_0 +\varvec{r}'\varvec{s} = \frac{1}{2}s_0 + \frac{1}{2}\sqrt{s^2_0 + \varvec{s}'\varvec{\varSigma }\varvec{s}}. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{1}{2} s_0+ \frac{1}{2}\sqrt{s^2_0 + \varvec{s}'\varvec{\varSigma }\varvec{s}}\le v_0+ \mathrm{tr}(\varvec{\varSigma }\varvec{V} ) \le \alpha , \end{aligned}$$

which implies the feasibility of the constraint (38).
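
The explicit certificate constructed in the “if” direction can be verified numerically. The Python sketch below (illustrative data only) forms \((v_0, \varvec{v}, \varvec{V})\) as above at the smallest \(\alpha \) satisfying (38), namely \(\alpha = \frac{1}{2}\left( s_0 + \sqrt{s_0^2 + \varvec{s}'\varvec{\varSigma }\varvec{s}}\right) \), and checks that both block matrices are positive semidefinite and that \(v_0 + \mathrm{tr}(\varvec{\varSigma }\varvec{V}) \le \alpha \).

```python
# Numerical check of the certificate in the proof of Theorem 2 (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.normal(size=(n, n))
Sigma = B @ B.T + np.eye(n)            # positive definite covariance
s = rng.normal(size=n)                 # s != 0
s0 = 0.7
q = np.sqrt(s0**2 + s @ Sigma @ s)     # sqrt(s_0^2 + s' Sigma s)
alpha = 0.5 * (s0 + q)                 # smallest alpha with s' Sigma s <= 4 alpha (alpha - s0)

v0 = (s0 + q) ** 2 / (4 * q)
v = (s0 + q) * s / (2 * q)
V = np.outer(s, s) / (4 * q)

def is_psd(M, tol=1e-9):
    return np.min(np.linalg.eigvalsh(M)) >= -tol

M1 = np.block([[np.array([[v0]]), (v / 2)[None, :]], [(v / 2)[:, None], V]])
M2 = np.block([[np.array([[v0 - s0]]), ((v - s) / 2)[None, :]], [((v - s) / 2)[:, None], V]])

print(is_psd(M1), is_psd(M2))                               # both True
print(v0 + np.trace(Sigma @ V) <= alpha + 1e-9)             # True (equality here)
print(np.isclose(s @ Sigma @ s, 4 * alpha * (alpha - s0)))  # (38) holds with equality
```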

1.5 Proof of Proposition 4

Denote the optimal solutions of Problem (14) with \(\varvec{s}\) being \(\varvec{\check{s}}\) and \(\varvec{\hat{s}}\) by \(\check{\alpha }\) and \(\hat{\alpha }\), respectively, so that \(F_l(\varvec{\check{s}}) = \check{\alpha }\) and \(F_l(\varvec{\hat{s}}) = \hat{\alpha }\). To prove the convexity of \(F_l(\varvec{s})\), we show that \(F_l(\lambda \varvec{\check{s}} + (1-\lambda ) \varvec{\hat{s}}) \le \lambda F_l(\varvec{\check{s}}) + (1-\lambda ) F_l(\varvec{\hat{s}})\) for any \(\lambda \in [0,1]\).

Observe that the left-hand side function of the first constraint in Problem (14), \(g(\varvec{s},\alpha _l) {=} \sup _{\mathbb {P}\in \mathbb {F}}\mathbb {E}_{\mathbb {P}} \left( \max \left\{ \max _{k \in \underline{\mathcal {N}}\cup \{1\}} \left\{ \sum _{a \in \delta ^-(k)} s^{l}_a \underline{\tau }_k + \tilde{{\varvec{z}}}^\prime (\varvec{s}^{l} {-} \varvec{s}^{k}) \right\} - \overline{\tau }_l, - \alpha _l \right\} \right) \), is convex piecewise affine in \((\varvec{s},\alpha _l)\). We then have, for any \(\lambda \in [0,1]\),

$$\begin{aligned} \begin{array}{rcl} g\left( \lambda \varvec{\check{s}} + (1-\lambda ) \varvec{\hat{s}}, \lambda \check{\alpha } + (1-\lambda ) \hat{\alpha } \right) &{}\le &{} \lambda g(\varvec{\check{s}}, \check{\alpha }) + (1-\lambda ) g(\varvec{\hat{s}},\hat{\alpha }) \\ &{} \le &{} \lambda \cdot 0 + (1 - \lambda ) \cdot 0 \\ &{} = &{} 0. \end{array} \end{aligned}$$

Also note that \(\lambda \check{\alpha } + (1-\lambda ) \hat{\alpha } \ge 0\). Hence, \(\lambda \check{\alpha } + (1-\lambda ) \hat{\alpha }\) is a feasible solution to Problem (14) with \(\varvec{s}\) being \(\lambda \varvec{\check{s}} + (1-\lambda ) \varvec{\hat{s}}\). As a result,

$$\begin{aligned} F_l(\lambda \varvec{\check{s}} + (1-\lambda ) \varvec{\hat{s}}) \le \lambda \check{\alpha } + (1-\lambda ) \hat{\alpha } = \lambda F_l(\varvec{\check{s}}) + (1-\lambda ) F_l(\varvec{\hat{s}}). \end{aligned}$$

This completes the proof.

1.6 Proof of Theorem 3

We specify the dual problem of Problem (17) as follows,

$$\begin{aligned} \begin{array}{rll} \hat{F}_l(\varvec{s}) = \max &{} \displaystyle \sum _{\omega \in \varOmega } \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} \xi _{lk}^{\omega }(\varvec{s}) r^{\omega }_{k} \\ \text{ s.t. } &{} \displaystyle -\psi + \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r_k^{\omega } + q^\omega = 0, &{} \forall \omega \in \varOmega , \\ &{} \displaystyle \sum _{\omega \in \varOmega } q^\omega \le 1, \\ &{} \psi \ge 0, \\ &{} r_k^{\omega } \ge 0, &{} k \in \underline{\mathcal {N}}\cup \{1\}, \omega \in \varOmega , \\ &{} q^\omega \ge 0, &{} \forall \omega \in \varOmega , \end{array} \end{aligned}$$
(39)

which we claim is equivalent to the following problem,

$$\begin{aligned} \begin{array}{rll} \displaystyle \check{F}_{l}(\varvec{s}) = \max &{} \displaystyle \sum _{\omega \in \varOmega } \xi _l^\omega (\varvec{s}) p^{\omega } \\ \text{ s.t. } &{} \displaystyle |\varOmega | p^{\omega } - \sum _{i \in \varOmega } p^{i} \le 1, &{} \forall \omega \in \varOmega , \\ &{} p^{\omega } \ge 0, &{} \forall \omega \in \varOmega . \end{array} \end{aligned}$$
(40)

We next prove the claim. For an optimal solution \((\psi , \varvec{r}, \varvec{q})\) to Problem (39), we infer from the first and the fifth constraints that \(\psi \ge \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r^{\omega }_k\), \(\forall \omega \in \varOmega \), and therefore, \(q^{i} = \psi - \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r^{i}_k \ge \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r^{\omega }_k - \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r^{i}_k\), \(\forall i, \omega \in \varOmega \). This inequality and the second constraint in Problem (39) imply that \(|\varOmega | \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r_k^{\omega } - \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} \sum _{i \in \varOmega } r_k^{i} \le 1, \forall \omega \in \varOmega \). Hence, the solution \(p^\omega := \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r_k^{\omega }\), \(\forall \omega \in \varOmega \), is clearly feasible in Problem (40). Since, by definition, \(\xi _l^\omega (\varvec{s}) = \max _{k \in \underline{\mathcal {N}}\cup \{1\}} \xi _{lk}^{\omega }(\varvec{s})\), it follows that \(\check{F}_l(\varvec{s}) \ge \hat{F}_l(\varvec{s})\). Conversely, for an optimal solution \(\varvec{p}\) to Problem (40), we construct a solution to Problem (39) as follows. Let \(r_{\kappa ^*(\omega )}^\omega := p^{\omega }\) for all \(\omega \in \varOmega \), where \(\kappa ^*(\omega ) \in {{\mathrm{arg\,max}}}_{k \in \underline{\mathcal {N}}\cup \{1\}} \xi _{lk}^{\omega }(\varvec{s})\), \(r_k^\omega := 0\) for all \(\omega \in \varOmega \) and \(k \in \underline{\mathcal {N}}\cup \{1\}\backslash \{\kappa ^*(\omega )\}\), \(\psi := \max _{\omega \in \varOmega } p^{\omega }\), and \(q^\omega := \max _{i \in \varOmega } p^{i} - p^{\omega }\) for all \(\omega \in \varOmega \). We can verify that this solution is feasible and that its objective value equals \(\check{F}_l(\varvec{s})\). In particular, the constraint \(\sum _{\omega \in \varOmega }q^\omega \le 1\) is satisfied since \(|\varOmega | p^{\omega } - \sum _{i \in \varOmega } p^{i} \le 1\) for all \(\omega \in {{\mathrm{arg\,max}}}_{i \in \varOmega } p^i\). Hence, \(\hat{F}_l(\varvec{s}) \ge \check{F}_l(\varvec{s})\). We then conclude that \(\hat{F}_l(\varvec{s}) = \check{F}_l(\varvec{s})\) and the two problems are equivalent. Observe that \(\varvec{0}\) is a feasible solution to Problem (40); the problem is therefore either bounded with an attained optimum or unbounded. Since strong duality holds for linear programming problems, we also have \(\hat{F}_l(\varvec{s}) = F_l(\varvec{s})\).

We next solve Problem (40). The problem is unbounded if and only if there exists a recession direction \(\varvec{p} \not = \varvec{0}\), with \(|\varOmega | p^{\omega } - \sum _{i \in \varOmega } p^{i} \le 0\) and \(p^{\omega } \ge 0\) for \(\omega \in \varOmega \), such that \(\sum _{\omega \in \varOmega }\xi _l^{\omega }(\varvec{s}) p^{\omega } > 0\). Any such recession direction must satisfy \(p^1 = p^2 = \cdots = p^{|\varOmega |} > 0\); otherwise the constraint \(|\varOmega | p^{\omega } - \sum _{i \in \varOmega } p^{i} \le 0\) would be violated for all \(\omega \in {{\mathrm{arg\,max}}}_{i \in \varOmega } p^i\). Hence, the condition \(\sum _{\omega \in \varOmega }\xi _l^{\omega }(\varvec{s}) p^{\omega } > 0\) is equivalent to \(\sum _{\omega \in \varOmega } \xi _l^{\omega } (\varvec{s})> 0\), in which case the problem is unbounded and \(F_l(\varvec{s})=\infty \).

If the problem is bounded, there exists an optimal extreme point solution, which is determined by selecting \(|\varOmega |\) binding constraints in Problem (40). Observe that for a given \(\omega \in \varOmega \) the constraints \(p^\omega \ge 0\) and \(|\varOmega | p^{\omega } - \sum _{i \in \varOmega } p^{i} \le 1\) cannot be simultaneously binding because this would imply that \(- \sum _{i \in \varOmega \backslash \{\omega \}} p^{i} = 1\), which contradicts the nonnegativity of \(\varvec{p}\). Hence, we partition \(\varOmega \) into two subsets \(\varOmega _1\) and \(\varOmega _2\), such that \(|\varOmega | p^{\omega } - \sum _{i \in \varOmega } p^{i} = 1\), \(\forall \omega \in \varOmega _1\) and \(p^{\omega } = 0\), \(\forall \omega \in \varOmega _2\). When \(\varOmega _1 = \emptyset \), the extreme point is \(\varvec{p} = \varvec{0}\), with objective value zero. We also note that \(|\varOmega _1| = |\varOmega |\) is inadmissible because we would have \(\sum _{\omega \in \varOmega } (|\varOmega | p^{\omega } - \sum _{i \in \varOmega } p^{i}) = \sum _{\omega \in \varOmega } 1\), which is a contradiction unless \( |\varOmega |=0\). When \(1 \le |\varOmega _1| \le |\varOmega | - 1\), we solve the system of linear equations \(|\varOmega | p^{\omega } - \sum _{i \in \varOmega } p^{i} = 1\), \(\forall \omega \in \varOmega _1\), or equivalently, \((|\varOmega | \varvec{I} - \varvec{e} \varvec{e}')\varvec{p} = \varvec{e}\), where \(\varvec{I}\) is the identity matrix in \(\mathbb {R}^{|\varOmega _1| \times |\varOmega _1|}\), \(\varvec{p} = (p^{\omega })'_{\omega \in \varOmega _1}\), and \(\varvec{e}\) is the vector of ones in \(\mathbb {R}^{|\varOmega _1|}\). Observe that \( \varvec{p} = \frac{1}{|\varOmega | - |\varOmega _1|} \varvec{e} \) is a feasible solution. Moreover, it is the unique solution because \((|\varOmega | \varvec{I} - \varvec{e} \varvec{e}')\) is invertible and we can verify that \((|\varOmega | \varvec{I} - \varvec{e} \varvec{e}') (\frac{1}{|\varOmega |} \varvec{I} + \frac{1}{|\varOmega | (|\varOmega | - |\varOmega _1|)} \varvec{e} \varvec{e}') = \varvec{I}\). Hence, \(p^{\omega } = \frac{1}{|\varOmega | - |\varOmega _1|}\), \(\forall \omega \in \varOmega _1\) and \(p^\omega = 0\), \(\forall \omega \in \varOmega _2\) is an extreme point solution. Observe that at optimality, \(\varOmega _1\) would correspond to the set \(\{\nu (1), \nu (2), \ldots , \nu (|\varOmega _1|) \}\) and the objective value would be \( \sum _{\omega =1}^{|\varOmega _1|} \frac{\xi _l^{\nu (\omega )}(\varvec{s})}{|\varOmega | - |\varOmega _1|}. \) Finally, we conclude that

$$\begin{aligned} \check{F}_l(\varvec{s})= \max \left\{ \max _{i \in \{1, 2, \ldots , |\varOmega | - 1\}}\left\{ \sum _{\omega =1}^i \frac{\xi _l^{\nu (\omega )}(\varvec{s})}{|\varOmega | - i}\right\} ,0\right\} , \end{aligned}$$

and the result follows.
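
The closed-form expression can be cross-checked against the linear program (40). The Python sketch below (illustrative; it assumes \(\nu (\cdot )\) orders the values \(\xi _l^{\omega }(\varvec{s})\) in nonincreasing order, as in the statement of Theorem 3, uses synthetic values in place of \(\xi _l^{\omega }(\varvec{s})\), and relies on scipy's linprog) compares the two in the bounded case \(\sum _{\omega \in \varOmega } \xi _l^{\omega }(\varvec{s}) < 0\).

```python
# Cross-check of the closed form for F_l(s) against the LP (40) on synthetic data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m = 8                                       # |Omega|, number of samples
xi = rng.normal(scale=1.5, size=m)          # stands in for xi_l^omega(s), omega in Omega
xi -= xi.mean() + 1.0                       # shift so that sum(xi) = -m < 0 (bounded case)

# LP (40): max xi'p  s.t.  |Omega| p_w - sum_i p_i <= 1 for all w,  p >= 0
A_ub = m * np.eye(m) - np.ones((m, m))
res = linprog(c=-xi, A_ub=A_ub, b_ub=np.ones(m), bounds=(0, None), method="highs")
lp_value = -res.fun

# closed form: max over i = 1, ..., m-1 of (sum of the i largest xi) / (m - i), and 0
xi_sorted = np.sort(xi)[::-1]               # nu(.) sorts in nonincreasing order
partial = np.cumsum(xi_sorted[:-1])
closed_form = max(float(np.max(partial / (m - np.arange(1, m)))), 0.0)

print(lp_value, closed_form)                # the two values agree
assert np.isclose(lp_value, closed_form, atol=1e-6)
```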

1.7 Proof of Theorem 4

Let \(\beta \), \( \begin{bmatrix} \gamma _0,&\varvec{\gamma }' \\ \varvec{\gamma },&\varvec{\varGamma } \end{bmatrix}, \) and \( \begin{bmatrix} r_{k0},&\varvec{r}_k' \\ \varvec{r}_k,&\varvec{R}_k \end{bmatrix} \), \(k \in \underline{\mathcal {N}}\cup \{1\}\), be the dual variables corresponding to the first three constraints in Problem (19), respectively, where \(\varvec{\gamma } \in \mathbb {R}^{|\mathcal {A}|}\), \(\varvec{\varGamma } \in \mathbb {R}^{|\mathcal {A}| \times |\mathcal {A}|}\), \(\varvec{r}_k \in \mathbb {R}^{|\mathcal {A}|}\), \(\varvec{R}_k \in \mathbb {R}^{|\mathcal {A}| \times |\mathcal {A}|}\). Based on the theory of conic duality (see, for instance, [8]), we obtain its dual problem (20). Moreover, the two objective values coincide because Slater's condition holds; i.e., there exists a strictly feasible point for Problem (20), given by \(\beta = 1\), \(\gamma _0 = r_{k0} = \frac{1}{|\underline{\mathcal {N}}| + 2}\), \(\varvec{\gamma } = \varvec{r}_k = \varvec{0}\), and \(\varvec{\varGamma } = \varvec{R}_k = \frac{\varvec{\varSigma }}{|\underline{\mathcal {N}}| + 2}\), \(k \in \underline{\mathcal {N}}\cup \{1\}\), such that

$$\begin{aligned} \begin{array}{ll} \displaystyle \beta - \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r_{k0} < 1, \\ \displaystyle -\beta + \gamma _0 + \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} r_{k0} = 0, \\ \displaystyle \varvec{\gamma } + \sum _{k \in \underline{\mathcal {N}}\cup \{1\}} \varvec{r}_k = \varvec{0}, \\ \displaystyle -\varvec{\varSigma } \beta + \varvec{\varGamma } + \sum _{k \in \underline{\mathcal {N}}\cup \{1\}}\varvec{R}_k = \varvec{0}, \\ \beta > 0, \\ \displaystyle \begin{bmatrix} \gamma _0 &{} \varvec{\gamma }^{\prime } \\ \varvec{\gamma } &{} \varvec{\varGamma } \end{bmatrix} \in \mathbb {S}_{++}^{|\mathcal {A}| + 1}, \\ \displaystyle \begin{bmatrix} r_{k0} &{} \varvec{r}_{k}^{\prime } \\ \varvec{r}_{k} &{} \varvec{R}_{k} \end{bmatrix} \in \mathbb {S}_{++}^{|\mathcal {A}| + 1}, &{} \forall k \in \underline{\mathcal {N}}\cup \{1\}, \\ \end{array} \end{aligned}$$

where \(\mathbb {S}_{++}^{|\mathcal {A}| + 1}\) represents the set of symmetric positive definite matrices in \(\mathbb {R}^{(|\mathcal {A}|+1) \times (|\mathcal {A}|+1)}\). The last two constraints hold because

$$\begin{aligned} \begin{array}{rcl} \displaystyle \varvec{\varSigma } \in \mathbb {S}_{++}^{|\mathcal {A}|} &{} \Leftrightarrow &{} \displaystyle \begin{bmatrix} \displaystyle \frac{1}{|\underline{\mathcal {N}}| + 2} &{} \varvec{0}' \\ \varvec{0} &{} \displaystyle \frac{\varvec{\varSigma }}{|\underline{\mathcal {N}}| + 2} \end{bmatrix} \in \mathbb {S}_{++}^{|\mathcal {A}| + 1}. \end{array} \end{aligned}$$

1.8 Proof of Theorem 5

Given a restricted master problem's solution \({\varvec{s}}^*\), we denote the objective values of Problems (21) and (22) by \(\hat{F}(\varvec{s}^*)\) and \(\check{F}(\varvec{s}^*)\), respectively. When \({\varvec{s}^l}^*=\varvec{0}\), we observe that \(\hat{F}(\varvec{s}^*) = \check{F}(\varvec{s}^*) = 0\). Otherwise, if \(\overline{\tau }_l - \varvec{\mu }'{\varvec{s}^l}^* >0\), it is easy to see that

$$\begin{aligned} \hat{F}(\varvec{s}^*) = \frac{{{\varvec{s}^l}^*}'\varvec{\varSigma }{\varvec{s}^l}^*}{4 (\overline{\tau }_l - \varvec{\mu }'{\varvec{s}^l}^*)}. \end{aligned}$$

Based on the first-order condition, the optimal solution of Problem (22) is

$$\begin{aligned} \varvec{r} = \frac{\varvec{\varSigma }^{\frac{1}{2}}{\varvec{s}^l}^*}{2\left( \varvec{\mu }'{\varvec{s}^l}^* - \overline{\tau }_l\right) } \end{aligned}$$

and correspondingly,

$$\begin{aligned} \check{F}(\varvec{s}^*) = \frac{{{\varvec{s}^l}^*}'\varvec{\varSigma }{\varvec{s}^l}^*}{4 (\overline{\tau }_l - \varvec{\mu }'{\varvec{s}^l}^*)} = \hat{F}(\varvec{s}^*). \end{aligned}$$

If \(\overline{\tau }_l - \varvec{\mu }'{\varvec{s}^l}^* \le 0\), since \(\varvec{\varSigma }\) is positive definite, we observe that Problem (21) is infeasible and \(\hat{F}(\varvec{s}^*) = \infty \). Observe also that

$$\begin{aligned} \bar{\varvec{r}} = -{\varvec{\varSigma }^{\frac{1}{2}}{\varvec{s}^l}^*} \end{aligned}$$

is a recession direction of Problem (22) such that \(\check{F}(\varvec{s}^*) = \infty = \hat{F}(\varvec{s}^*)\). Hence, the result follows.

Cite this article

Zhang, Y., Baldacci, R., Sim, M. et al. Routing optimization with time windows under uncertainty. Math. Program. 175, 263–305 (2019). https://doi.org/10.1007/s10107-018-1243-y
