
Regulation of a single-server queue with customers who dynamically choose their service durations


Abstract

In recent years, there has been growing attention to queueing models in which customers choose their own service durations. The model assumptions in the existing literature imply that every customer knows his service demand when he enters the service position. Clearly, this property is inconsistent with some real-life situations. Motivated by this issue, the current work studies a single-server queueing model with customers who dynamically choose their service durations. In this setup, the main result is the existence of a quadratic price function which (1) implies an optimal resource allocation from a social point of view and (2) internalizes the externalities in the system. In addition, it is explained how to compute its parameters efficiently.


Notes

  1. Oz [23] was presented at the 20th INFORMS Applied Probability Society Conference, July 3–5, 2019, Brisbane, Australia.

  2. In fact, smoothness is not the issue here: once the right-derivative of \(p(\cdot )\) is right-continuous and nondecreasing, a straightforward equivalent condition for (12) can be phrased.

References

  1. Agranov, M., Ortoleva, P.: Stochastic choice and preferences for randomization. J. Polit. Econ. 125(1), 40–68 (2017)

  2. Alili, L., Kyprianou, A.E.: Some remarks on first passage of Lévy processes, the American put and pasting principles. Ann. Appl. Probab. 15(3), 2062–2080 (2005)

  3. Asmussen, S., Kella, O.: A multi-dimensional martingale for Markov additive processes and its applications. Adv. Appl. Probab. 32(2), 376–393 (2000)

  4. Ballinger, T.P., Wilcox, N.T.: Decisions, error and heterogeneity. Econ. J. 107(443), 1090–1105 (1997)

  5. Barron, Y.: A fluid EOQ model with Markovian environment. J. Appl. Probab. 52(2), 473–489 (2015)

  6. Bekker, R., Boxma, O.J., Kella, O.: Queues with delays in two-state strategies and Lévy input. J. Appl. Probab. 45(2), 314–332 (2008)

  7. Debo, L., Li, C.: Design and pricing of discretionary service lines. Manag. Sci. 67(4), 2251–2271 (2021)

  8. Edelson, N.M., Hilderbrand, D.K.: Congestion tolls for Poisson queuing processes. Econom. J. Econom. Soc. 43, 81–92 (1975)

  9. Feldman, P., Segev, E.: Managing congestion when customers choose their service times: the important role of time limits. Available at SSRN 3424317 (2019)

  10. Gossen, H.H.: The Laws of Human Relations and the Rules of Human Action Derived Therefrom. MIT Press (1983)

  11. Hardin, G.: The tragedy of the commons. Science 162(3859), 1243–1248 (1968)

  12. Hassin, R.: Rational Queueing. Chapman and Hall/CRC (2016)

  13. Hassin, R., Haviv, M.: To Queue or not to Queue: Equilibrium Behavior in Queueing Systems, vol. 59. Springer (2003)

  14. Haviv, M., Ritov, Y.A.: Externalities, tangible externalities, and queue disciplines. Manag. Sci. 44(6), 850–858 (1998)

  15. Hey, J.D.: Experimental investigations of errors in decision making under risk. Eur. Econ. Rev. 39(3–4), 633–640 (1995)

  16. Hopp, W.J., Iravani, S.M., Yuen, G.Y.: Operations systems with discretionary task completion. Manag. Sci. 53(1), 61–77 (2007)

  17. Jacobovic, R., Kella, O.: Minimizing a stochastic convex function subject to stochastic constraints and some applications. Stoch. Processes Appl. 130(11), 7004–7018 (2020)

  18. Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Springer (1988)

  19. Kella, O., Whitt, W.: Useful martingales for stochastic storage processes with Lévy input. J. Appl. Probab. 29(2), 396–403 (1992)

  20. Leeman, W.A.: Letter to the editor: the reduction of queues through the use of price. Oper. Res. 12(5), 783–785 (1964)

  21. Mas-Colell, A., Whinston, M.D., Green, J.R.: Microeconomic Theory, vol. 1. Oxford University Press, New York (1995)

  22. Naor, P.: The regulation of queue size by levying tolls. Econom. J. Econom. Soc. 37, 15–24 (1969)

  23. Oz, B.: Regulating service length demand in a single server queue. Unpublished manuscript (2019)

  24. Parzen, E.: Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory. Technical Report TR-B-3, Institute of Statistics, Texas A&M University (1980)

  25. Peskir, G., Shiryaev, A.: Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel (2006)

  26. Rubinstein, A.: Lecture Notes in Microeconomic Theory: The Economic Agent, 2nd edn. Princeton University Press (2012)

  27. Tong, C., Rajagopalan, S.: Pricing and operational performance in discretionary services. Prod. Oper. Manag. 23(4), 689–703 (2014)

  28. Tversky, A.: Intransitivity of preferences. Psychol. Rev. 76(1), 31 (1969)

  29. Yang, T., Templeton, J.G.C.: A survey on retrial queues. Queueing Syst. 2(3), 201–233 (1987)

  30. Zacks, S.: Sample Path Analysis and Distributions of Boundary Crossing Times, vol. 2203. Springer, New York (2017)


Acknowledgements

The author is extremely grateful to Binyamin Oz for valuable discussions as well as for sharing his unpublished manuscript. In addition, the author would like to thank Offer Kella, Moshe Haviv and Refael Hassin for their comments before the submission. Finally, the author would like to thank the anonymous referees for their comments which significantly helped in improving the presentation of the contents in this paper.

Author information


Corresponding author

Correspondence to Royi Jacobovic.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices


A1. Proof of Proposition 1

Assume that \(ES_1>0\). For every \(k\ge 1\), let \(N_k\) be the number of customers who receive service during the k'th busy period. Recall that \(C_1,C_2,\ldots \) are independent of all other random quantities in this model. Therefore, by conditioning and unconditioning with respect to this sequence, a known result regarding the long-run average queue length in an M/G/1 queue implies that the long-run average loss due to waiting time equals

$$\begin{aligned} \frac{\gamma \lambda ^2 ES_1^2}{2\left( 1-\lambda ES_1\right) } . \end{aligned}$$
(63)

In addition, \(V(\cdot )\) is a nonincreasing process such that \(V(0)>0\). Hence, the assumptions \(ES_1^2<\infty \) and \(EV^2(0)<\infty \) imply that

$$\begin{aligned} E\left[ \int _0^{S_1}V_1(s)\mathrm{d}s\right] ^+\le E\int _0^{S_1}V_1^+(s)\mathrm{d}s\le ES_1V_1(0)<\infty . \end{aligned}$$

Thus, \(E\int _0^{S_1}V_1(s)\mathrm{d}s\) is well-defined. Moreover, for every \(i\ge 1\), \(S_i\) is determined by \(V_i(\cdot )\) as a solution of (4). Thus, since \(V_1(\cdot ),V_2(\cdot ),V_3(\cdot ),\ldots \) is an iid sequence, then \(\left( S_1,V_1(\cdot )\right) ,\left( S_2,V_2(\cdot )\right) ,\left( S_3,V_3(\cdot )\right) ,\ldots \) is an iid sequence. Hence

$$\begin{aligned} \int _0^{S_1}V_1(s)\mathrm{d}s,\int _0^{S_2}V_2(s)\mathrm{d}s,\int _0^{S_3}V_3(s)\mathrm{d}s,\ldots \end{aligned}$$
(64)

is also an iid sequence and the rest follows by some standard renewal-reward arguments. \(\square \)
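To make (63) concrete, here is a minimal numerical sketch; the function name and the example parameters are illustrative assumptions, not part of the paper.

```python
# A minimal sketch of (63); the names and example values are illustrative.

def average_waiting_loss(lam: float, gamma: float, es: float, es2: float) -> float:
    """gamma * lam^2 * E[S^2] / (2 * (1 - lam * E[S])), the long-run average waiting loss."""
    rho = lam * es
    if not 0.0 <= rho < 1.0:
        raise ValueError("stability requires 0 <= lam * E[S] < 1")
    return gamma * lam ** 2 * es2 / (2.0 * (1.0 - rho))

# Example: lam = 0.5, gamma = 1 and S ~ Exp(1), so E[S] = 1 and E[S^2] = 2.
print(average_waiting_loss(0.5, 1.0, 1.0, 2.0))  # 0.5
```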

A2. Auxiliary lemmata

The following auxiliary Lemmas 1 and 2 will be used later on in the proof of Proposition 2.

Lemma 1

Let \(v:\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) be a nonincreasing right-continuous function. In addition, for every \(n\ge 1 \), define \(v_n(s)\equiv v*g_n(s),\forall s\ge 0\), where

$$\begin{aligned} g_n(u)\equiv n1_{\left[ 0,\frac{1}{n}\right] }(-u) ,\ \ \forall u\in \mathbb {R} . \end{aligned}$$
(65)

Then,

  1. 1.

    For every \(n\ge 1 \), \(v_n(\cdot )\) is a continuous, nonnegative and nonincreasing function on \(\left[ 0,\infty \right) \) such that \(0\le v_n(0)\le v(0)\).

  2. 2.

    For every \(s\in \left[ 0,\infty \right) \), \(v_n(s)\uparrow v(s)\) as \(n\rightarrow \infty \).

Proof

  1. 1.

    Let U be a random variable which is distributed uniformly on \(\left[ 0,1\right] \) and for every \(n\ge 1 \) denote \(U_n\equiv \frac{U}{n}\). Fix \(n\ge 1 \) and notice that

    $$\begin{aligned} v_n(s)=Ev\left( s+U_n\right) , \quad \forall s\ge 0 . \end{aligned}$$
    (66)

    Recall that \(v(\cdot )\) is a nonnegative and nonincreasing function; hence, \(v_n(\cdot )\) also shares these properties. In addition, since \(U_n\) is nonnegative, then \(v_n(0)\le v(0)\). To show that \(v_n(\cdot )\) is continuous on \(\left[ 0,\infty \right) \), pick an arbitrary \(s\in \left[ 0,\infty \right) \) and let \((s_k)_{k=1}^\infty \) be a sequence such that \(s_k\rightarrow s\) as \(k\rightarrow \infty \). Then, since \(v(\cdot )\) is nonincreasing, \(s+U_n\) is P-a.s. a continuity point of \(v(\cdot )\), i.e. \(v\left( s_k+U_n\right) \rightarrow v\left( s+U_n\right) \) as \(k\rightarrow \infty \), where the convergence holds P-a.s. In addition, \(0\le v\left( s_k+U_n\right) \le v(0)<\infty \) for every \(k\ge 1\) and hence the dominated convergence theorem implies that

    $$\begin{aligned} \lim _{k\rightarrow \infty }v_n(s_k)=E\lim _{k\rightarrow \infty } v\left( s_k+U_n\right) =Ev\left( s+U_n\right) =v_n(s) \end{aligned}$$
    (67)

    and the result follows.

  2. 2.

    Fix \(s\in \left[ 0,\infty \right) \) and observe that \(v(\cdot )\) is nonnegative, nonincreasing and right-continuous. Thus, since \(U_n\downarrow 0\) as \(n\rightarrow \infty \), the result follows by the monotone convergence theorem.

\(\square \)
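The following toy computation illustrates Lemma 1 numerically; the particular step function v and the grid-based averaging are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# v is an arbitrary right-continuous nonincreasing step function (chosen only
# for illustration); v_n(s) = E v(s + U/n) with U ~ Uniform[0, 1] is evaluated
# by averaging over a fine grid of quadrature nodes.

def v(s):
    return np.where(s < 1.0, 3.0, np.where(s < 2.0, 1.0, 0.0))

def v_n(s, n, m=10_000):
    u = (np.arange(m) + 0.5) / (m * n)   # midpoint nodes for U/n on [0, 1/n]
    return float(np.mean(v(s + u)))

for n in (1, 10, 100):
    print(n, v_n(0.95, n))               # 1.1, 2.0, 3.0: increases to v(0.95) = 3.0
```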

Lemma 2

Let \(\left( \mathbf{X} ,\mathcal {X},\mu \right) \) be a general measure space and let \(\xi :\mathbf{X} \times [0,\infty )\rightarrow \mathbb {R}\) be such that

  1. (i)

    For every \(t\ge 0\), \(x\mapsto \xi (x,t)\) is \(\mathcal {X}\)-measurable.

  2. (ii)

    For every \(x\in \mathbf {X}\), \(t\mapsto \xi (x,t)\) is right-continuous on \([0,\infty )\).

In addition, let \(\beta (\cdot )\in L_1(\mu )\) and define

$$\begin{aligned}&\zeta (x,t)=\beta (x)+\int _0^t\xi (x,s)\mathrm{d}s , \ \ \forall (x,t)\in \mathbf {X}\times [0,\infty ) , \\&\varphi (t):=\int _{\mathbf {X}}\zeta (x,t)\mu (\mathrm{d}x) , \ \ \forall t\in [0,\infty ) . \end{aligned}$$

If at least one of the following conditions holds:

  1. C1:

    For every \(x\in \mathbf {X}\), \(t\mapsto \xi (x,t)\) is nonnegative and nonincreasing.

  2. C2:

    There exists \(\psi (\cdot )\in L_1(\mu )\) such that \(|\xi (x,t)|\le |\psi (x)|\) for every \((x,t)\in \mathbf{X} \times [0,\infty )\).

Then, \(\varphi (\cdot )\) is right-differentiable on \([0,\infty )\) such that

$$\begin{aligned} \partial _+\varphi (t)=\int _\mathbf{X }\xi (x,t)\mu (\mathrm{d}x) , \ \ \forall t\in [0,\infty ) . \end{aligned}$$
(68)

Moreover, if C2 holds and \(t\mapsto \xi (x,t)\) is continuous on \((0,\infty )\), then \(\varphi (\cdot )\) is differentiable on \((0,\infty )\) such that

$$\begin{aligned} \frac{d}{d t}\varphi (t)=\int _\mathbf{X }\xi (x,t)\mu (d x) , \ \ \forall t\in (0,\infty ) . \end{aligned}$$
(69)

Proof

For simplicity and without loss of generality the proof is given for the case where \(\beta \) is identically zero. In addition, let \(\mathcal {B}[0,\infty )\) be the Borel \(\sigma \)-field which is associated with \([0,\infty )\). Notice that assumptions (i) and (ii) imply that \((x,t)\mapsto \xi (x,t)\) is \(\mathcal {X}\otimes \mathcal {B}[0,\infty )\)-measurable. For details see, for example, Remark 1.4 on page 5 of [18].

Now, observe that under either C1 or C2, Fubini’s theorem may be applied in order to deduce that

$$\begin{aligned} \eta (t):=\int _{\mathbf {X}}\xi (x,t)\mu (\mathrm{d}x) , \ \ \forall t\in [0,\infty ), \end{aligned}$$

is \(\mathcal {B}[0,\infty )\)-measurable and satisfies

$$\begin{aligned} \varphi (t)&=\int _{\mathbf {X}}\zeta (x,t)\mu (\mathrm{d}x)\\ {}&=\int _{\mathbf {X}}\int _0^t\xi (x,s)d s\mu (\mathrm{d}x)=\int _0^t\eta (s)d s , \quad \forall t\in [0,\infty ) . \end{aligned}$$

Thus, in order to show (68), it is enough to prove that \(\eta (\cdot )\) is right-continuous on \([0,\infty )\). To this end, by (ii), under C1 (C2), the monotone (dominated) convergence theorem implies that

$$\begin{aligned} \lim _{s\downarrow t}\eta (s)=\int _\mathbf{X} \lim _{s\downarrow t}\xi (x,s)\mu (\mathrm{d}x)=\int _\mathbf{X} \xi (x,t)\mu (\mathrm{d}x)=\eta (t) , \quad \forall t\in [0,\infty ), \end{aligned}$$

and the result follows. Finally, observe that in order to show (69), it is enough to prove that \(\eta (\cdot )\) is continuous on \((0,\infty )\). This can be done by using similar arguments. \(\square \)

Finally, note that once \(\left( \mathbf{X} ,\mathcal {X},\mu \right) \) is complete, the conclusion of Lemma 2 remains valid even when the requirements which appear in (ii), C1 and C2 are satisfied \(\mu \)-a.s. (instead of pointwise).
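A toy check of Lemma 2, under assumptions chosen only for simplicity: \(\mathbf{X}\) is a three-point space, \(\mu \) the counting measure, and \(\xi (x,t)=e^{-xt}\), so C2 holds with \(\psi \equiv 1\).

```python
import numpy as np

# Toy check of (68)/(69): X = {0.5, 1.0, 2.0} with counting measure mu and
# xi(x, t) = exp(-x * t), bounded by 1, so C2 holds. Here
# phi(t) = sum_x int_0^t exp(-x s) ds = sum_x (1 - exp(-x t)) / x.

xs = np.array([0.5, 1.0, 2.0])

def phi(t: float) -> float:
    return float(np.sum((1.0 - np.exp(-xs * t)) / xs))

t0, h = 1.0, 1e-5
numeric = (phi(t0 + h) - phi(t0 - h)) / (2.0 * h)   # central difference for phi'(t0)
exact = float(np.exp(-xs * t0).sum())               # the integral of xi(., t0) against mu
print(numeric, exact)                               # the two agree closely
```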

A3. Proof of Proposition 2

The proof of Proposition 2 is given by the subsequent lemmata:

Lemma 3

Assume that \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) such that \(x_\alpha <0\). Then, there exists \(\tilde{\alpha }\in \left[ 0,\lambda ^{-1}\right) \) such that \(x_{\tilde{\alpha }}\ge 0\) and \(g\left( \alpha \right) \le g\left( \tilde{\alpha }\right) \).

Proof

Let \(\alpha \) be such that \(x_\alpha <0\) and define

$$\begin{aligned} \tilde{S}\equiv \inf \left\{ s\ge 0;V(s)-\frac{\gamma \lambda }{1-\lambda \alpha }s\le 0\right\} . \end{aligned}$$
(70)

Notice that \(\tilde{S}\le S_\alpha \) and hence \(\tilde{\alpha }\equiv E\tilde{S}\le \alpha \). This implies that

$$\begin{aligned} g\left( \tilde{\alpha }\right)&\ge f\left( \tilde{S}\right) \nonumber \\&=E\int _0^{\tilde{S}}\left[ V(s)-\frac{\gamma \lambda }{1-\lambda \tilde{\alpha }}s\right] \mathrm{d}s\nonumber \\&\ge E\int _0^{\tilde{S}}\left[ V(s)-\frac{\gamma \lambda }{1-\lambda \alpha }s\right] \mathrm{d}s\nonumber \\&\ge E\int _0^{S_{\alpha }}\left[ V(s)-\frac{\gamma \lambda }{1-\lambda \alpha }s\right] \mathrm{d}s=g\left( \alpha \right) . \end{aligned}$$
(71)

In addition, notice that \(\tilde{\alpha }\le \alpha \), \(\tilde{\alpha }=E\tilde{S}=ES_{\tilde{\alpha }}\) and

$$\begin{aligned} S_{\tilde{\alpha }}=\inf \left\{ s\ge 0;V(s)-\frac{\gamma \lambda }{1-\lambda \tilde{\alpha }}s\le x_{\tilde{\alpha }}\right\} . \end{aligned}$$
(72)

Thus, since \(\alpha \mapsto x_\alpha \) is nonincreasing, \(x_{\tilde{\alpha }}\) is nonnegative and the result follows. \(\square \)

Lemma 4

Consider some \(\alpha \in \left[ 0,\lambda ^{-1}\right) \).

  1. 1.

    If \(x_\alpha \ge 0\), then \(0\le S_\alpha \le \frac{V(0)}{\gamma \lambda }\) and \(g(\alpha )\ge 0\).

  2. 2.

    For every \(\alpha \in \left[ 0,\lambda ^{-1}\right) ,\)

    $$\begin{aligned} g\left( \alpha \right) \le \frac{EV^2(0)}{\gamma \lambda }<\infty . \end{aligned}$$
    (73)

Proof

Let \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) such that \(x_\alpha \ge 0\) and notice that

$$\begin{aligned} S_\alpha&=\inf \left\{ s\ge 0;V(s)-\frac{\gamma \lambda }{1-\lambda \alpha }s\le x_\alpha \right\} \end{aligned}$$
(74)
$$\begin{aligned}&\le \inf \left\{ s\ge 0;V(s)-\gamma \lambda s\le 0\right\} . \end{aligned}$$
(75)

Therefore, since \(V(\cdot )\) is nonincreasing, we deduce that

$$\begin{aligned} 0\le S_\alpha \le \frac{V(0)}{\gamma \lambda } \end{aligned}$$
(76)

and hence

$$\begin{aligned} g\left( \alpha \right)&=E\int _0^{S_\alpha }\left[ V(s)-\frac{\gamma \lambda }{1-\lambda \alpha }s\right] \mathrm{d}s\nonumber \\&\le EV(0)S_\alpha \le \frac{EV^2(0)}{\gamma \lambda }<\infty . \end{aligned}$$
(77)

Then, use Lemma 3 in order to show that this upper bound holds for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \). Finally, to show that \(g(\alpha )\ge 0\) for every \(\alpha \) for which \(x_\alpha \ge 0\) observe that in such a case, \(g(\alpha )\) is defined as an expectation of an integral with an integrand which is nonnegative on the integration domain. \(\square \)

Lemma 5

For every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \), \(S_\alpha \) is square-integrable.

Proof

Let \(\alpha \in \left[ 0,\lambda ^{-1}\right) \). If \(x_\alpha \ge 0\), then the result is a consequence of Lemma 4 and hence it is left to consider the case when \(x_\alpha \in (-\infty ,0)\). To this end, define \(\tilde{V}(s)=V(s)-x_\alpha ,\forall s\ge 0\), and observe that, for every \(S\in \mathcal {S}\) such that \(ES=\alpha \),

$$\begin{aligned} E\int _0^S\left[ \tilde{V}(s)-\frac{\gamma \lambda }{1-\lambda \alpha }s\right] \mathrm{d}s=-\alpha x_\alpha +f\left( S\right) . \end{aligned}$$

This means that \(S_\alpha \) is also a solution of

$$\begin{aligned} \begin{aligned}&\max _{S\in \mathcal {F}}:&E\int _0^S\left[ \tilde{V}(s)-\frac{ \gamma \lambda }{1- \lambda \alpha }s\right] \mathrm{d}s \\&\ \text {s.t:}&0\le S \ , \ P \text {-a.s.} , \\&&ES=\alpha . \end{aligned} \end{aligned}$$
(78)

In addition, note that \(\tilde{V}(\cdot )\) is a nonincreasing right-continuous process such that \(\tilde{V}(0)=V(0)-x_\alpha \) is a square-integrable positive random variable. Consequently, Lemma 4 implies the result because, by definition,

$$\begin{aligned} S_\alpha =\inf \left\{ s\ge 0;\tilde{V}(s)-\frac{\gamma \lambda }{1-\lambda \alpha }s\le 0\right\} . \end{aligned}$$
(79)

\(\square \)

Note that, for every square-integrable \(S\ge 0\),

$$\begin{aligned} f(S)=E\int _0^SV(s)\mathrm{d}s-\frac{\gamma \lambda ES^2}{2\left( 1-\lambda ES\right) } . \end{aligned}$$
(80)

Lemma 6

\(f(\cdot )\) is concave on \(\mathcal {D}\) and \(g(\cdot )\) is concave on \(\left[ 0,\lambda ^{-1}\right) \).

Proof

Define

$$\begin{aligned} \mathcal {S}_0\equiv \mathcal {D}\times \left[ 0,\lambda ^{-1}\right) \end{aligned}$$
(81)

and for every \((S,\alpha )\in \mathcal {S}_0\) denote

$$\begin{aligned} h(S,\alpha )\equiv \int _0^SV(s)\mathrm{d}s-\frac{\gamma \lambda S^2}{2\left( 1-\lambda \alpha \right) } . \end{aligned}$$
(82)

In particular, observe that \(V(\cdot )\) is a nonincreasing right-continuous process which implies that \(s\mapsto \int _0^sV(t)\mathrm{d}t\) is concave on \([0,\infty )\). Therefore, since \((t,s)\mapsto \frac{t^2}{s}\) is convex on \(\mathbb {R}\times (0,\infty )\), then \(h(S,\alpha )\) is concave on \(\mathcal {S}_0\). Thus, since an expectation is a linear operator,

$$\begin{aligned} H(S,\alpha )\equiv Eh(S,\alpha )=E\int _0^SV(s)\mathrm{d}s-\frac{\gamma \lambda ES^2}{2\left( 1-\lambda \alpha \right) }\ , \ \forall (S,\alpha )\in \mathcal {S}_0, \end{aligned}$$
(83)

is a concave functional on \(\mathcal {S}_0\). In particular, notice that \((S,\alpha )\in \mathcal {S}_0\) implies that S is square-integrable and hence (80) can be used in order to justify the last equality. Now, consider \(S_1,S_2\in \mathcal {D}\) and, for every \(i=1,2\), denote \(\alpha _i\equiv ES_i\). Thus, observe that the concavity of \(H(\cdot )\) implies that, for every \(\mu \in (0,1)\),

$$\begin{aligned} f\left[ S_2+\mu (S_1-S_2)\right]&=H\left[ S_2+\mu (S_1-S_2),\alpha _2+\mu (\alpha _1-\alpha _2)\right] \\ {}&\ge \mu H\left( S_1,\alpha _1\right) +(1-\mu )H\left( S_2,\alpha _2\right) \\ {}&=\mu f(S_1)+(1-\mu )f(S_2) \end{aligned}$$

and hence the concavity of \(f(\cdot )\) follows by definition.

In order to prove the concavity of \(g(\cdot )\), recall Lemma 5 which implies that, for every \(\alpha \in [0,\lambda ^{-1})\),

$$\begin{aligned} g(\alpha )=\sup \bigg \{H(S,\alpha );S\in \mathcal {D}\ , \ ES=\alpha \bigg \} \end{aligned}$$
(84)

and hence the result follows because \(g(\cdot )\) equals a supremum of a concave functional over a convex set which is not empty (take, for example, the deterministic \(S\equiv \alpha \)). \(\square \)

Lemma 7

\(\lim _{\alpha \downarrow 0}g(\alpha )=g(0)=0\).

Proof

Denote

$$\begin{aligned} S^0\equiv \inf \left\{ s\ge 0;V(s)\le 0\right\} \end{aligned}$$
(85)

and \(\alpha _0\equiv E\left( S^0\wedge \frac{1}{2\lambda }\right) \). Observe that positiveness of \(S^0\) implies that \(\alpha _0\in \left( 0,\frac{1}{2\lambda }\right] \). Then, for every \(\alpha \in [0,\alpha _0)\), define \(\hat{S}_\alpha \equiv \frac{\alpha }{\alpha _0}\left( S^0\wedge \frac{1}{2\lambda }\right) \) which is a square-integrable nonnegative random variable such that \(E\hat{S}_\alpha =\alpha \). Thus, by the definition of \(g(\cdot )\) and using (80), we deduce that

$$\begin{aligned} g(\alpha )\ge E\int _0^{\hat{S}_\alpha } V(s)\mathrm{d}s-\frac{ \alpha ^2 \gamma \lambda }{2\alpha _0^2\left( 1- \lambda \alpha \right) }E\left( S^0\wedge \frac{1}{2\lambda }\right) ^2 . \end{aligned}$$
(86)

In particular, the expectation in the second term is finite and hence this term tends to zero as \(\alpha \downarrow 0\). In addition, for every \(\alpha \in \left[ 0,\alpha _0\right) \),

$$\begin{aligned} 0\le \int _0^{\hat{S}_\alpha } V(s)\mathrm{d}s\le \int _0^{S^0\wedge \frac{1}{2\lambda }}V(0)\mathrm{d}s\le \frac{V(0)}{2\lambda }. \end{aligned}$$

Thus, since \(EV(0)<\infty \), the dominated convergence implies that the first term in (86) tends to zero as \(\alpha \downarrow 0\). To provide an upper bound which tends to zero, note that, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \),

$$\begin{aligned} g(\alpha )&=E\int _0^{S_\alpha }\left[ V(s)-s\frac{\gamma \lambda }{1-\lambda \alpha }\right] \mathrm{d}s\nonumber \\ {}&\le E\int _0^{S_\alpha }\left[ V(s)-s\gamma \lambda \right] \mathrm{d}s\nonumber \\ {}&\le E\int _0^{\tilde{S}_\alpha }\left[ V(s)-s\gamma \lambda \right] \mathrm{d}s, \end{aligned}$$
(87)

where \(\tilde{S}_\alpha \) is the solution of

$$\begin{aligned} \begin{aligned}&\max _{S\in \mathcal {F}}:&E\int _0^S\left[ V(s)-s\gamma \lambda \right] \mathrm{d}s \\&\ \text {s.t:}&0\le S \ , \ P \text {-a.s.} , \\&&ES=\alpha \end{aligned} \end{aligned}$$
(88)

which is specified by Theorem 1 of [17]. In particular, notice that this optimization is well-defined due to (21). In addition, note that existence of this solution is justified by the same kind of argument which was provided in order to justify that \(S_\alpha \) is a solution of (24). Now, let

$$\begin{aligned} \tilde{S}\equiv \inf \left\{ s\ge 0;\ V(s)-s\gamma \lambda \le 0\right\} \end{aligned}$$
(89)

and notice that

$$\begin{aligned} E\int _0^{\tilde{S}}\left[ \ V(s)-s\gamma \lambda \right] ^-\mathrm{d}s=0<\infty . \end{aligned}$$
(90)

Therefore, the pre-conditions of Proposition 1 of [17] are satisfied, i.e.

$$\begin{aligned} \exists \lim _{\alpha \downarrow 0}E\int _0^{\tilde{S}_\alpha }\left[ V(s)-s\gamma \lambda \right] \mathrm{d}s=0 \end{aligned}$$
(91)

and the proof is completed. \(\square \)

Lemma 8

Let \(\alpha '\equiv \inf \{\alpha \in [0,\lambda ^{-1});x_\alpha <0\}\). Then, \(\alpha '<\lambda ^{-1}\).

Proof

Assume, by contradiction, that \(\alpha '=\lambda ^{-1}\), which means that \(x_\alpha \ge 0\) for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \). In addition, observe that Lemma 5 and (80) imply that, for every \(\alpha \in \left[ 0,\lambda ^ {-1}\right) \),

$$\begin{aligned} g\left( \alpha \right) =E\int _0^{S_{\alpha }}V(s)\mathrm{d}s-\frac{\gamma \lambda ES_{\alpha }^2}{2\left( 1-\lambda \alpha \right) } . \end{aligned}$$

In addition, \(x_{\alpha }\ge 0,\forall \alpha \in \left[ 0,\lambda ^{-1}\right) \), and hence Lemma 4 implies that \(0\le S_{\alpha }\le \frac{V(0)}{\gamma \lambda },\forall \alpha \in \left[ 0,\lambda ^{-1}\right) \). Therefore, since \(V(\cdot )\) is nonincreasing, we deduce that, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \),

$$\begin{aligned} E\int _0^{S_{\alpha }}V(s)\mathrm{d}s\le EV(0)S_{\alpha }\le \frac{EV^2(0)}{\gamma \lambda } . \end{aligned}$$
(92)

These results imply that, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \),

$$\begin{aligned} g\left( \alpha \right) \le \frac{EV^2(0)}{\gamma \lambda }-\frac{\gamma \lambda ES^2_{\alpha }}{2\left( 1-\lambda \alpha \right) } . \end{aligned}$$
(93)

Now, if

$$\begin{aligned} \liminf _{\alpha \uparrow \lambda ^{-1}} ES_\alpha ^2>0 , \end{aligned}$$
(94)

then (93) implies that \(g(\alpha )\) tends to \(-\infty \) as \(\alpha \uparrow \lambda ^{-1}\). Thus, in such a case there exists \(\alpha \in (0,\lambda ^{-1})\) such that \(g(\alpha )<0\). On the other hand, recall that \(x_{\alpha }\ge 0\) and

$$\begin{aligned} S_{\alpha }=\inf \left\{ s\ge 0;V(s)-\frac{\lambda \gamma s}{1-\lambda \alpha }\le x_{\alpha }\right\} . \end{aligned}$$
(95)

Thus, since \(V(\cdot )\) is a nonincreasing right-continuous process, this implies that

$$\begin{aligned} g(\alpha )=f(S_{\alpha })=E\int _0^{S_{\alpha }}\left[ V(s)-\frac{\gamma \lambda s}{1-\lambda \alpha }\right] \mathrm{d}s\ge 0 \end{aligned}$$
(96)

which implies a contradiction.

Hence, we deduce that

$$\begin{aligned} \liminf _{\alpha \uparrow \lambda ^{-1}} ES_\alpha ^2=0 . \end{aligned}$$
(97)

Then, since, for every \(\alpha \in [0,\lambda ^{-1})\), \(S_\alpha \) is square-integrable, the Cauchy–Schwarz inequality leads to the following contradiction:

$$\begin{aligned} 0<\lambda ^{-2}&=\liminf _{\alpha \uparrow \lambda ^{-1}}\alpha ^2 \end{aligned}$$
(98)
$$\begin{aligned}&=\liminf _{\alpha \uparrow \lambda ^{-1}}\left( ES_\alpha \right) ^2\nonumber \\ {}&\le \liminf _{\alpha \uparrow \lambda ^{-1}} ES_\alpha ^2=0 . \end{aligned}$$
(99)

\(\square \)

Lemma 9

There exists \(\alpha ^*\in \left[ 0,\frac{EV(0)}{\gamma \lambda }\right] \cap \left[ 0,\lambda ^{-1}\right) \) which is a maximizer of \(g(\cdot )\) on \([0,\lambda ^{-1})\) such that \(x_{\alpha ^*}\ge 0\) and \(S^*\equiv S_{\alpha ^*}\) is an optimal solution of (20) which is square-integrable.

Proof

By Lemma 8, \(\alpha '<\lambda ^{-1}\) and hence Lemma 3 implies that Phase II is reduced to maximization of \(g(\cdot )\) on the closed interval \(\left[ 0,\alpha '\right] \). By Lemmas 6 and 7 we deduce that \(g(\cdot )\) is continuous on \([0,\alpha ']\) and hence that there exists \(\alpha ^*\in [0,\alpha ']\) which maximizes the value of \(g(\cdot )\) over \([0,\lambda ^{-1})\). In addition, given the maximizer \(\alpha ^*\), square integrability of \(S^*\) is a direct consequence of Lemma 5. Finally, the upper bound and the fact that \(x_{\alpha ^*}\ge 0\) stem immediately from Lemmas 3 and 4. \(\square \)

Lemma 10

\(f(S^*)=0\) if and only if \(\alpha ^*=0\).

Proof

Assume that \(0=\alpha ^*=ES^*\). Since \(S^*\) is a nonnegative random variable, \(S^*=0\), P-a.s. and \(f(S^*)=0\) follows immediately. To show the other direction assume that

$$\begin{aligned} 0=f(S^*)=E\int _0^{S^*}\left[ V(s)-\frac{\gamma \lambda s}{1-\lambda \alpha ^*}\right] \mathrm{d}s \end{aligned}$$
(100)

and recall that

$$\begin{aligned} S^*=\inf \left\{ s\ge 0;V(s)-\frac{\gamma \lambda s}{1-\lambda \alpha ^*}\le x^*\right\} . \end{aligned}$$
(101)

Therefore, since \(V(\cdot )\) is a nonincreasing right-continuous process, then

$$\begin{aligned} 0\ge E\int _0^{S^*}x^*\mathrm{d}s=\alpha ^*x^* \end{aligned}$$

and hence \(x^*\ge 0\) implies that \(\alpha ^*=0\) (remember that \(\alpha ^*\in [0,\lambda ^{-1})\)). \(\square \)

Observe that Lemma 9 implies that there exists \(\alpha ^*\in \left[ 0,\lambda ^{-1}\right) \) and \(x^*= x_{\alpha ^*}\ge 0\) for which \(S^*=S_{\alpha ^*}=S_{\alpha ^*}\left( x^*\right) \) is an optimum of (20). Therefore, since \(V(\cdot )\) is nonincreasing and right-continuous, its left limit at \(S^*\) is nonnegative and hence \(S^*\) is also an optimum (with the same objective value) of the analogous optimization with \(V^+(\cdot )\) replacing \(V(\cdot )\). Thus, without loss of generality, from now on assume that \(V(\cdot )\) is nonnegative.

Lemma 11

\(f(S^*)>0\) (and hence \(\alpha ^*>0\)).

Proof

In order to prove that \(f(S^*)>0\), it is enough to find a random variable \(S_0\in \mathcal {S}\) for which \(f(S_0)>0\). To this end, for every \(\alpha \in [0,\infty )\) define a function

$$\begin{aligned} \upsilon (\alpha )\equiv f(\alpha )=E\int _0^\alpha V(s)\mathrm{d}s-\frac{\gamma \lambda \alpha ^2}{2(1-\lambda \alpha )} . \end{aligned}$$
(102)

Since \(V(\cdot )\) is a nonnegative nonincreasing right-continuous process, Lemma 2 implies that \(\upsilon (\cdot )\) is right-differentiable with a right-derivative at zero which equals

$$\begin{aligned} \partial _+\upsilon (0)=EV(0)>0 . \end{aligned}$$
(103)

This means that there exists \(\alpha _0>0\) such that \(\upsilon (\alpha _0)>\upsilon (0)=0\) and the result follows. \(\square \)

Lemma 12

If \(V(\cdot )\) is continuous, then (16) holds.

Proof

Assume that \(V(\cdot )\) is a continuous process and denote \(x^*=x_{\alpha ^*}\). Then, on the event \(\{S^*>0\}\), continuity of \(V(\cdot )\) implies that

$$\begin{aligned} V\left( S^*\right) =\frac{\gamma \lambda }{1-\lambda \alpha ^*}S^*+x^* . \end{aligned}$$

Therefore, by multiplying both sides by \(S^*\) and taking expectations we deduce that

$$\begin{aligned} ES^*V\left( S^*\right) =\frac{\gamma \lambda }{1-\lambda \alpha ^*}E\left( S^*\right) ^2+x^*ES^*=\frac{\gamma \lambda }{1-\lambda \alpha ^*}E\left( S^*\right) ^2+x^*\alpha ^* . \end{aligned}$$
(104)

In addition, for every \(u>0\) define a function \(\upsilon (u)\equiv f\left( uS^*\right) \). It is known that \(S^*\) is square-integrable and hence, for every \(u>0\), \(uS^*\) is also square-integrable. Thus, using (80) we deduce that

$$\begin{aligned} \upsilon (u)=E\int _0^{uS^*}V(s)\mathrm{d}s-\frac{\gamma \lambda u^2E\left( S^*\right) ^2}{2\left( 1-\lambda u\alpha ^*\right) } . \end{aligned}$$
(105)

Recall that \(V(\cdot )\) is continuous on \(\left[ 0,\infty \right) \) and hence the fundamental theorem of calculus implies that, for every \(u>0\),

$$\begin{aligned} \frac{d}{\mathrm{d}u}\int _0^{uS^*}V(s)\mathrm{d}s=S^*V(uS^*) . \end{aligned}$$
(106)

Observe that this derivative is nonnegative and dominated from above by \(S^*V(0)\). Therefore, since \(S^*\) is dominated by a linear function of V(0) (see Lemma 4) and \(EV^2(0)<\infty \), Lemma 2 allows interchanging expectation and differentiation. Namely, \(\upsilon (\cdot )\) is differentiable in some neighbourhood of \(u=1\) with derivative

$$\begin{aligned} \frac{\mathrm{d}\upsilon (u)}{\mathrm{d}u}\bigg |_{u=1}=ES^*V\left( S^*\right) -\gamma \left[ \frac{\lambda E\left( S^*\right) ^2}{\left( 1-\lambda \alpha ^*\right) }+\frac{\lambda ^2\alpha ^*E\left( S^*\right) ^2}{2\left( 1-\lambda \alpha ^*\right) ^2}\right] . \end{aligned}$$
(107)

It has already been shown that \(u=1\) is a global maximum of \(\upsilon (\cdot )\). Therefore, the first-order condition at \(u=1\) together with an insertion of (104) leads to the conclusion that

$$\begin{aligned} 0=\frac{\mathrm{d}\upsilon (u)}{\mathrm{d}u}\bigg |_{u=1}=\alpha ^*\left[ x^*-\gamma \frac{\lambda ^2E\left( S^*\right) ^2}{2\left( 1-\lambda \alpha ^*\right) ^2}\right] . \end{aligned}$$
(108)

Note that \(f\left( S^*\right) >0\) implies that \(\alpha ^*>0\) and hence the result follows. \(\square \)

The final step in the proof of Proposition 2 is to extend the result of Lemma 12 to a general \(V(\cdot )\).

Lemma 13

$$\begin{aligned} x^*\equiv x_{\alpha ^*}= \gamma \frac{\lambda ^2E\left( S^*\right) ^2}{2\left( 1-\lambda E S^*\right) ^2}\in (0,\infty ) . \end{aligned}$$
(109)

Proof

Consider \(V(\cdot )\) which might have jumps and, for every \(n\ge 1 \), define \(V_n(s)\equiv V*g_n(s),\forall s\ge 0\), where \(g_n\) is given in the statement of Lemma 1. Note that, due to this lemma, for each \(n\ge 1 \), \(V_n\) satisfies the assumptions of Lemma 12. In addition, by Lemma 1, it is known that, for every \(s\ge 0\), \(V_n(s)\uparrow V(s)\) as \(n\rightarrow \infty \). In addition, for every \(n\ge 1 \) consider the optimization

$$\begin{aligned} \begin{aligned}&\max _{S\in \mathcal {F}}:&f_n(S)\equiv E\int _0^S\left[ V_n(s)-s\frac{\gamma \lambda }{1-\lambda ES}\right] \mathrm{d}s \\&\ \text {s.t:}&0\le S \ , \ P \text {-a.s.} , \\&&ES<\lambda ^{-1} \end{aligned} \end{aligned}$$
(110)

and denote its objective functional by \(f_n(\cdot )\). Note that, for each \(n\ge 1\), (110) has a solution \(S_n\) such that

$$\begin{aligned} S_n=\inf \left\{ s\ge 0;V_n(s)-\frac{\lambda \gamma }{1-\lambda \alpha _n}s\le x_n\right\} \end{aligned}$$
(111)

for some \(\alpha _n\in \left( 0,\lambda ^{-1}\right) \) and \(x_n\ge 0\). Clearly, the sequence \(ES_1,ES_2,\ldots \) lies in \(\left[ 0,\lambda ^{-1}\right) \) and hence is bounded. In addition, recall that, for every \(n\ge 1 \), \(S_n\) is bounded by a linear function of \(V_n(0)\in \left[ 0,V(0)\right] \) (see also Lemma 4). Therefore, since \(EV^2(0)<\infty \), the sequence \(ES_1^2,ES_2^2,\ldots \) is bounded. Consequently, there exists \(\left\{ n_k \right\} _{k=1}^\infty \subseteq \left\{ 1,2,\ldots \right\} \) such that

$$\begin{aligned} \exists \lim _{k\rightarrow \infty }ES_{n_k }\equiv \alpha , \ \ \exists \lim _{k\rightarrow \infty }ES_{n_k }^2\equiv \sigma . \end{aligned}$$
(112)

For every \(k\ge 1\), \(V_{n_k }(\cdot )\) is a process which satisfies the assumptions of Lemma 12 and hence

$$\begin{aligned} x_{n_k }= \gamma \frac{ \lambda ^2ES_{n_k }^2}{2\left[ 1- \lambda ES_{n_k }\right] ^2}\ge 0 ,\ \ \forall k\ge 1 . \end{aligned}$$
(113)

This means that

$$\begin{aligned} \exists \lim _{k\rightarrow \infty }x_{n_k}=\gamma \frac{\lambda ^2\sigma }{2\left( 1-\lambda \alpha \right) ^2}\equiv x \end{aligned}$$
(114)

and note that \(x\ge 0\). Moreover, by construction, for every \(s\ge 0\), \(V_{n_k }(s)\uparrow V(s)\) as \(k\rightarrow \infty \), and \(\alpha _{n_k}\rightarrow \alpha \) as \(k\rightarrow \infty \). Therefore, if, for every \(k\ge 1\),

$$\begin{aligned} \zeta _k(s)\equiv x_{n_k }+\frac{\gamma \lambda }{1-\lambda \alpha _{n_k }}s- V_{n_k }(s) , \ \ \forall s\ge 0, \end{aligned}$$
(115)

and

$$\begin{aligned} \zeta (s)\equiv x+\frac{\gamma \lambda }{1-\lambda \alpha }s- V(s) , \ \ \forall s\ge 0 , \end{aligned}$$
(116)

then for every \(s\ge 0\), \(\zeta _k(s)\rightarrow \zeta (s)\) as \(k\rightarrow \infty \). Now, for every \(k\ge 1\), define

$$\begin{aligned} \zeta ^{-1}_k(u)\equiv \inf \left\{ s\ge 0;\zeta _k(s)\ge u\right\} , \ \ \forall u\in \mathbb {R}, \end{aligned}$$
(117)

and

$$\begin{aligned} \zeta ^{-1}(u)\equiv \inf \left\{ s\ge 0;\zeta (s)\ge u\right\} , \ \ \forall u\in \mathbb {R} . \end{aligned}$$
(118)

Furthermore, observe that \(\zeta (\cdot ),\zeta _1(\cdot ),\zeta _2(\cdot ),\ldots \) are all strictly increasing processes tending to infinity as \(s\rightarrow \infty \). Therefore, it can be deduced that \(\zeta ^{-1}(\cdot ),\zeta ^{-1}_1(\cdot ),\zeta ^{-1}_2(\cdot ),\ldots \) are finite-valued continuous processes (with time index u). Now, using exactly the same arguments as in the proof of Theorem 2A in [24], we deduce that, for every \(u\in \mathbb {R}\), \(\zeta ^{-1}_k(u)\rightarrow \zeta ^{-1}(u)\) as \(k\rightarrow \infty \). In particular, this is true for \(u=0\), i.e., for every sample-space realization,

$$\begin{aligned} \exists \lim _{k\rightarrow \infty } S_{n_k }=\inf \left\{ s\ge 0;V(s)-\frac{\lambda \gamma }{1-\lambda \alpha }s\le x\right\} \equiv S' . \end{aligned}$$
(119)

It is left to show that \(S'\) is a solution of (20). To this end, notice that, for every \(k\ge 1\), \(S_{n_k }\) is nonnegative and bounded from above by a linear function of \(V_{n_k }(0)\le V(0)\). Therefore, since V(0) is square-integrable, dominated convergence implies that

$$\begin{aligned} \alpha =ES' , \ \ \sigma =E\left( S'\right) ^2<\infty . \end{aligned}$$
(120)

To prove optimality of \(S'\), observe that, for every \(k\ge 1\),

$$\begin{aligned} V_{n_k }(s)\le V(s) ,\ \ \forall s\ge 0, \end{aligned}$$
(121)

and hence

$$\begin{aligned} f_{n_k }(S)\le f(S) ,\ \ \forall S\in \mathcal {S} . \end{aligned}$$
(122)

Thus, the optimality of \(S_{n_k }\) (for each \(k\ge 1\)) implies that

$$\begin{aligned} f_{n_k }\left( S^*\right) \le f_{n_k }\left( S_{n_k }\right) \le f\left( S_{n_k }\right) \le f\left( S^*\right) , \ \ \forall k\ge 1 . \end{aligned}$$
(123)

Moreover, it has already been shown that \(S^*\) is square-integrable and hence it is possible to use (80). Thus, since, for every \(s\ge 0\), \(0\le V_{n_k }(s)\uparrow V(s)\) as \(k\rightarrow \infty \), monotone convergence implies that

$$\begin{aligned} f_{n_k }\left( S^*\right)&=E\int _0^{S^*}V_{n_k }(s)\mathrm{d}s-\frac{\gamma \lambda }{2\left( 1-\lambda \alpha ^*\right) }E\left( S^*\right) ^2\\&\xrightarrow {k\rightarrow \infty }E\int _0^{S^*}V(s)\mathrm{d}s-\frac{\gamma \lambda }{2\left( 1-\lambda \alpha ^*\right) }E\left( S^*\right) ^2=f\left( S^*\right) .\nonumber \end{aligned}$$
(124)

In addition it is known that, for every \(k\ge 1\), \(ES_{n_k }^2<\infty \). Thus, it is possible to use (80) once again with a squeezing theorem in order to deduce that

$$\begin{aligned} f\left( S^*\right)&=\lim _{k\rightarrow \infty } f\left( S_{n_k }\right) \nonumber \\ {}&=\lim _{k\rightarrow \infty } E\int _0^{S_{n_k }}V(s)\mathrm{d}s-\frac{\gamma \lambda \lim _{k\rightarrow \infty } E\left( S_{n_k }\right) ^2}{2\left( 1-\lambda \lim _{k\rightarrow \infty } \alpha _{n_k }\right) }\nonumber \\ {}&=E\int _0^{S'}V(s)\mathrm{d}s-\frac{\gamma \lambda E\left( S'\right) ^2}{2\left( 1-\lambda ES'\right) }\nonumber \\ {}&=f\left( S'\right) . \end{aligned}$$
(125)

In particular, notice that, for every \(k\ge 1\), \(S_{n_k }\) is bounded from above by \(\frac{V(0)}{\gamma \lambda }\). Hence, for every \(k\ge 1\), \(\int _0^{S_{n_k }}V(s)\mathrm{d}s\) is bounded from above by \(\frac{V^2(0)}{\gamma \lambda }\), which is an integrable random variable. Thus, the dominated convergence theorem justifies the third equality of (125). \(\square \)

A4. Proof of Theorem 2

Observe that \(S^{**}\) and \(V_{t_{\max }}(0)\) are square-integrable and hence

$$\begin{aligned} 0<v(t)&\le P_tE\int _0^{S^{**}}V_{t_{\max }}(0)\mathrm{d}s=P_tES^{**}V_{t_{\max }}(0)\xrightarrow {t\uparrow t_{\text {max}}}0 . \end{aligned}$$
(126)

Since \(t\mapsto P_t\) is positive and continuous on \(\left[ t_{\min },t_{\max }\right) \), it is enough to show that \(r(t)\equiv v(t)/P_t\) is continuous on \(\left[ t_{\text {min}},t_{\text {max}}\right) \). To this end, fix \(t\in \left[ t_{\text {min}},t_{\text {max}}\right) \) and let \(\{t_n\}_{n=1}^\infty \subseteq \left[ t_{\text {min}},t+\epsilon \right] \) be a sequence such that \(t_n\rightarrow t\) as \(n\rightarrow \infty \), where \(\epsilon \in (0,t_{\max }-t)\) is an arbitrary constant. Thus, since \((t,s)\mapsto \tilde{V}(t,s)\) is nondecreasing in its first coordinate, by Lemma 4 we deduce that, for every \(n\ge 1\),

$$\begin{aligned} 0\le S_{t_n}\le \frac{V_{t_n}(0)}{\gamma \lambda P_{t_n}}\le \frac{V_{t_{\max }}(0)}{\gamma \lambda P_{t+\epsilon }} . \end{aligned}$$
(127)

Since the upper bound is square-integrable and uniform in n, we deduce that

$$\begin{aligned} \alpha _n\equiv ES_{t_n}\ \ , \ \ \sigma _n\equiv ES_{t_n}^2 , \ \ \forall n\ge 1, \end{aligned}$$
(128)

are two bounded sequences. Hence, there exists a subsequence \(\left\{ n(k)\right\} _{k=1}^\infty \subseteq \mathbb {N}\) such that

$$\begin{aligned} \exists \lim _{k\rightarrow \infty }\alpha _{n(k)}\equiv \alpha , \ \ \exists \lim _{k\rightarrow \infty }\sigma _{n(k)}\equiv \sigma . \end{aligned}$$
(129)

In addition, Proposition 2 implies that, for every \(k\ge 1\),

$$\begin{aligned} S_{t_{n(k)}}=\inf \left\{ s\ge 0;V_{t_{n(k)}}(s)-s\frac{\gamma \lambda P_{t_{n(k)}}}{1-\lambda \alpha _{n(k)}P_{t_{n(k)}}}\le x_{n(k)}\right\} \end{aligned}$$
(130)

such that

$$\begin{aligned} 0<x_{n(k)}=\frac{\gamma \left( \lambda P_{t_{n(k)}}\right) ^2\sigma _{n(k)}}{2\left( 1-\lambda \alpha _{n(k)} P_{t_{n(k)}}\right) ^2}\xrightarrow {k\rightarrow \infty }\frac{\gamma \left( \lambda P_t\right) ^2\sigma }{2\left( 1-\lambda \alpha P_t\right) ^2}\equiv x . \end{aligned}$$
(131)

In particular, note that x is nonnegative.

Now, let \(U=F(T)\sim U(0,1)\) which is independent from \(\tilde{V}\). In addition, observe that, for every \(t\in \left[ t_{\min },t_{\max }\right) \) and \(u\in \mathbb {R}\),

$$\begin{aligned} P\left( T>u|T>t\right) =\frac{1-F(u\vee t)}{1-F(t)} . \end{aligned}$$
(132)

Therefore, for every \(p\in (0,1)\) and \(t\in \left[ t_{\min },t_{\max }\right) \), the p’th quantile of T given \(\{T>t\}\) equals

$$\begin{aligned} q_t(p)=F^{-1}\left[ 1-(1-p)\left[ 1-F(t)\right] \right] . \end{aligned}$$
(133)
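As a small illustration of (133), consider \(T\sim \text {Exp}(1)\) (an assumed example distribution, not taken from the paper); by memorylessness, \(q_t(p)\) should equal \(t+F^{-1}(p)\).

```python
import math

# Sketch of (133) for T ~ Exp(1): F(t) = 1 - exp(-t) and F^{-1}(p) = -log(1 - p).
def F(t: float) -> float:
    return 1.0 - math.exp(-t)

def F_inv(p: float) -> float:
    return -math.log(1.0 - p)

def q(t: float, p: float) -> float:
    return F_inv(1.0 - (1.0 - p) * (1.0 - F(t)))   # p'th quantile of T given {T > t}

print(q(2.0, 0.5), 2.0 + F_inv(0.5))               # equal, by memorylessness
```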

Since \(F(\cdot )\) is continuous and increasing on \(\left[ t_{\min },t_{\max }\right) \), \(q_t(p)\) is also continuous and increasing in t. In addition, without loss of generality, assume that \(\tau _t=q_t(U)\). Consequently, since \((t,s)\mapsto \tilde{V}(t,s)\) is continuous and nondecreasing in t, then \((t,s)\mapsto V_t(s)\) is also continuous and nondecreasing in t. Thus, by applying the same technique which appears in the proof of Lemma 13, we deduce that

$$\begin{aligned} \exists \lim _{k\rightarrow \infty }S_{t_{n(k)}}=\inf \left\{ s\ge 0;V_{t}(s)-s\frac{\gamma \lambda P_t}{1-\lambda \alpha P_t}\le x\right\} \equiv S'_t . \end{aligned}$$
(134)

In addition, since \(S_t\le S^{**}\) for every \(t\in \left[ t_{\min },t_{\max }\right) \) and \(S^{**}\) is square-integrable, then dominated convergence implies that

$$\begin{aligned} \alpha = ES'_t , \ \ \sigma =E\left( S'_t\right) ^2 . \end{aligned}$$
(135)

For every \(\nu _1,\nu _2\in \left[ t_{\min },t_{\max }\right) \) define

$$\begin{aligned} w_{\nu _1}(\nu _2)\equiv E\int _0^{S_{\nu _2}}V_{\nu _1}(s)\mathrm{d}s-\frac{\gamma \lambda P_{\nu _1} ES_{\nu _2}^2}{2\left( 1-\lambda P_{\nu _1} ES_{\nu _2}\right) }, \end{aligned}$$
(136)

which is nondecreasing in \(\nu _1\) and such that \(r(\nu _1)= w_{\nu _1}(\nu _1)\). In addition, notice that

$$\begin{aligned} w_{t_{n(k)}}(t)\le r\left( t_{n(k)}\right) \le w_{t+\epsilon }\left[ t_{n(k)}\right] , \ \forall k\ge 1 . \end{aligned}$$
(137)

Moreover, since, for every \(k\ge 1\),

$$\begin{aligned} 0\le V_{t_{n(k)}}(s)1_{\left[ 0,S_t\right] }(s)\le V_{t_{\max }}(0)1_{\left[ 0,S^{**}\right] }(s) , \end{aligned}$$
(138)

dominated convergence implies that

$$\begin{aligned} w_{t_{n(k)}}(t)&=E\int _0^{S_{t}}V_{t_{n(k)}}(s)\mathrm{d}s-\frac{\gamma \lambda P_{t_{n(k)}} ES_{t}^2}{2\left[ 1-\lambda P_{t_{n(k)}} ES_{t}\right] } \end{aligned}$$
(139)
$$\begin{aligned}&\xrightarrow {k\rightarrow \infty }E\int _0^{S_{t}}V_{t}(s)\mathrm{d}s-\frac{\gamma \lambda P_t ES_{t}^2}{2\left( 1-\lambda P_t ES_{t}\right) }=r(t) . \end{aligned}$$
(140)

Similarly, since, for every \(k\ge 1\),

$$\begin{aligned} 0\le \int _0^{S_{t_{n(k)}}}V_{t+\epsilon }(s)\mathrm{d}s\le S^{**}V_{t_{\max }}(0) , \end{aligned}$$
(141)

dominated convergence might be used once again in order to derive the limit

$$\begin{aligned} w_{t+\epsilon }\left[ t_{n(k)}\right]&=E\int _0^{S_{t_{n(k)}}}V_{t+\epsilon }(s)\mathrm{d}s-\frac{\gamma \lambda P_{t+\epsilon } ES_{t_{n(k)}}^2}{2\left( 1-\lambda P_{t+\epsilon } ES_{t_{n(k)}}\right) }\nonumber \\&\xrightarrow {k\rightarrow \infty }E\int _0^{S'_{t}}V_{t+\epsilon }(s)\mathrm{d}s-\frac{\gamma \lambda P_{t+\epsilon } E\left( S_t'\right) ^2}{2\left( 1-\lambda P_{t+\epsilon } ES'_{t}\right) }=\frac{\phi _{t+\epsilon }(S'_t)}{P_{t+\epsilon }}. \end{aligned}$$
(142)

Therefore, deduce that

$$\begin{aligned} r(t)\le \liminf _{k\rightarrow \infty } r\left( t_{n(k)}\right) \le \limsup _{k\rightarrow \infty } r\left( t_{n(k)}\right) \le \frac{\phi _{t+\epsilon }(S'_t)}{P_{t+\epsilon }} . \end{aligned}$$
(143)

Note that this inequality is valid even when \(\epsilon \) is replaced by some \(\epsilon '\in (0,\epsilon )\). This is true because, up to a finite prefix, the sequence \(\{t_{n(k)}; k\ge 1\}\) belongs to \((t_{\min },t+\epsilon ')\). Thus, the next step is to take a limit of the upper bound as \(\epsilon \downarrow 0\). In practice, the same kind of arguments which were made in the previous limit calculations imply that

$$\begin{aligned} \frac{\phi _{t+\epsilon }(S'_t)}{P_{t+\epsilon }}&=E\int _0^{S'_{t}}V_{t+\epsilon }(s)\mathrm{d}s-\frac{\gamma \lambda P(T>t+\epsilon ) E\left( S_t'\right) ^2}{2\left[ 1-\lambda P\left( T>t+\epsilon \right) ES'_{t}\right] }\nonumber \\ {}&\xrightarrow {\epsilon \downarrow 0}E\int _0^{S'_{t}}V_{t}(s)\mathrm{d}s-\frac{\gamma \lambda P(T>t) E\left( S_t'\right) ^2}{2\left[ 1-\lambda P\left( T>t\right) ES'_{t}\right] }=r(t) . \end{aligned}$$
(144)

This shows that \(r(t_{n(k)})\rightarrow r(t)\) as \(k\rightarrow \infty \).

Now, note that \(t\mapsto P_t\) is decreasing on \([t_{\min },t_{\max }]\) and, for every \(0<p_1<p_2\le 1\), \(\mathcal {D}_{p_2}\subset \mathcal {D}_{p_1}\). Therefore, since, for every \(s\ge 0\), \(t\mapsto V_t(s)\) is nondecreasing on \([t_{\min },t_{\max }]\), then

$$\begin{aligned} r(t)=\max _{S\in \mathcal {D}_{P_t}}\left\{ E\int _0^{S}V_{t}(s)\mathrm{d}s-\frac{\gamma \lambda P(T>t) ES^2}{2\left[ 1-\lambda P\left( T>t\right) ES\right] }\right\} \end{aligned}$$
(145)

is increasing in t on \((t_{\min },t_{\max })\). Furthermore, for every \(t\in (t_{\min },t_{\max })\),

$$\begin{aligned} 0<r(t)\le E\int _0^{S^{**}}V_{t_{\max }}(0)\mathrm{d}s=ES^{**}V_{t_{\max }}(0)<\infty . \end{aligned}$$
(146)

Hence, if either \(t_n\uparrow t\) as \(n\rightarrow \infty \) or \(t_n\downarrow t\) as \(n\rightarrow \infty \), then \(\{r(t_n);n\ge 1\}\) is a bounded monotone sequence. This means that in both cases

$$\begin{aligned} \exists \lim _{n\rightarrow \infty }r(t_n)=\lim _{k\rightarrow \infty }r(t_{n(k)})=r(t) . \end{aligned}$$
(147)

Hence, by the generality of \(\{t_n\}_{n=1}^\infty \), deduce that

$$\begin{aligned} \lim _{u\uparrow t}r(u)=\lim _{u\downarrow t}r(u)=r(t), \end{aligned}$$
(148)

which yields the required result.

A5. V(s) which is a constant minus a subordinator

Let \(\mathbb {F}\) be some filtration of \(\mathcal {F}\) which is augmented and right-continuous. Then, assume that \(\left\{ J(s);s\ge 0\right\} \) is a subordinator, i.e. a nondecreasing, right-continuous process with stationary and independent increments with respect to \(\mathbb {F}\) such that \(J(0)=0\), P-a.s. It is known that \(Ee^{-J(s)t}=e^{-\eta (t)s}\) for every \(t,s\ge 0\), where

$$\begin{aligned} \eta (t)\equiv ct+\int _{(0,\infty )}\left( 1-e^{-t x}\right) \nu (dx)\ \ , \ \ \forall t\ge 0 , \end{aligned}$$
(149)

\(c\ge 0\) and \(\nu \) is the associated Lévy measure which satisfies

$$\begin{aligned} \int _{(0,\infty )}\left( x\wedge 1\right) \nu (dx)<\infty . \end{aligned}$$
(150)

In particular, \(\eta (\cdot )\) is referred to as the exponent of \(J(\cdot )\). In addition, denote

$$\begin{aligned} \rho \equiv EJ(1)=\eta '(0)=c+\int _{(0,\infty )}x\nu (dx)=c+\int _0^\infty \nu \left[ (x,\infty )\right] dx \end{aligned}$$
(151)

and assume that \(\rho \in (0,\infty )\).

Let \(\kappa \in (0,\infty )\) be some constant and consider a process \(V(s)=\kappa -J(s)\) for every \(s\ge 0\). In particular, this is a nonincreasing jump process with a nonpositive drift.

Now, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) and \(s\ge 0\), define \(J_\alpha (s)\equiv J(s)+\frac{\gamma \lambda }{1-\lambda \alpha }s\), which is a subordinator with Lévy measure \(\nu \), parameter \(c_\alpha \equiv c+\frac{\gamma \lambda }{1-\lambda \alpha }\) and exponent \(\eta _\alpha (\cdot )\). Then, given \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) and \(x\in \left[ 0,\kappa \right] \), observe that

$$\begin{aligned} S_\alpha (x)&=\inf \left\{ s\ge 0;J(s)\ge \kappa -x-\frac{ \gamma \lambda }{1-\lambda \alpha }s\right\} \nonumber \\&=\inf \left\{ s\ge 0;J_\alpha (s)\ge \kappa -x\right\} \nonumber \\&=\inf \left\{ s\ge 0;J_\alpha (s)>\kappa -x\right\} , \end{aligned}$$
(152)

where the last equality holds because \(J_\alpha (\cdot )\) is an increasing process. To derive a relevant formula in terms of potential measures, recall (see, for example, Equation (8) in [2]) that

$$\begin{aligned} Ee^{-t J_\alpha \left[ S_\alpha (x)\right] }=\eta _\alpha (t)\int _{\kappa -x}^\infty e^{-t z}U_\alpha (\mathrm{d}z) , \ \ \forall t\ge 0, \end{aligned}$$
(153)

where \(U_\alpha (\cdot )\) is a potential measure which is defined via

$$\begin{aligned} \int _0^\infty e^{-t z}U_\alpha (\mathrm{d}z)=\frac{1}{\eta _\alpha (t)} , \ \ \forall t\ge 0 . \end{aligned}$$
(154)

By these equations, differentiating (153) w.r.t. t and taking \(t\downarrow 0\), we deduce that

$$\begin{aligned} EJ_\alpha \left[ S_\alpha (x)\right] =\eta '_\alpha (0)\int _0^{\kappa -x} U_\alpha (\mathrm{d}z) . \end{aligned}$$
(155)

Thus, by plugging this result into Equation 3.7 of [6], we deduce that

$$\begin{aligned} ES_\alpha (x)=\frac{EJ_\alpha \left[ S_\alpha (x)\right] }{\eta _\alpha '(0)}=\int _0^{\kappa -x} U_\alpha (\mathrm{d}z) . \end{aligned}$$
(156)

This formula might be used in order to derive \(x_\alpha \) by a standard line-search procedure on \(\left[ 0,\kappa \right] \), as sketched below.
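A hedged sketch of this line search follows; es_alpha is an assumed callable evaluating (156), e.g. by numerical integration of the potential measure, and is not defined in the paper.

```python
# Line search on [0, kappa]: es_alpha(x) evaluates (156) (an assumed callable);
# since E S_alpha(x) is nonincreasing in x, bisection recovers x_alpha with
# E S_alpha(x_alpha) = alpha.

def find_x_alpha(es_alpha, alpha: float, kappa: float, tol: float = 1e-10) -> float:
    lo, hi = 0.0, kappa
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if es_alpha(mid) > alpha:
            lo = mid        # mean service time still too large: increase x
        else:
            hi = mid
    return 0.5 * (lo + hi)
```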

Then, it is left to develop a formula for \(g(\alpha )\). To this end, let \(S_\alpha =S_\alpha \left( x_\alpha \right) \) and notice that

$$\begin{aligned} EJ^2_\alpha \left( S_\alpha \right) =2\eta _\alpha '(0)\int _0^{\kappa -x_\alpha }zU_\alpha (\mathrm{d}z)-\eta _\alpha ''(0)\int _0^{\kappa -x_\alpha }U_\alpha (\mathrm{d}z) \end{aligned}$$
(157)

can be derived in a similar fashion to (155). In addition, for every \(t\ge 0\), the Kella–Whitt martingale (see Theorem 2 of [19]) which is associated with \(J_\alpha (\cdot )\) is given by

$$\begin{aligned} M_\alpha (s;t)\equiv -\eta _\alpha (t)\int _0^se^{-t J_\alpha (u)}\mathrm{d}u+1-e^{-t J_\alpha (s)} , \ \ \forall s\ge 0 . \end{aligned}$$
(158)

It is known that this is a zero-mean martingale. Thus, by applying Doob’s optional stopping theorem w.r.t. \(S_\alpha \wedge s\) for some \(s>0\) and then taking \(s\rightarrow \infty \) using the monotone and bounded convergence theorems, we deduce that

$$\begin{aligned} E\int _0^{S_\alpha }e^{-t J_\alpha (s)}\mathrm{d}s=\frac{1-Ee^{-t J_\alpha (S_\alpha )}}{\eta _\alpha (t)} , \ \ \forall t>0 . \end{aligned}$$
(159)

Now, by differentiating both sides w.r.t. t, for every \(t>0\), we obtain

$$\begin{aligned} E\int _0^{S_\alpha }J_\alpha (s)e^{-t J_\alpha (s)}\mathrm{d}s=\frac{\eta _\alpha '(t)\left[ 1-Ee^{-t J_\alpha (S_\alpha )}\right] -\eta _\alpha (t)EJ_\alpha (S_\alpha )e^{-t J_\alpha (S_\alpha )}}{\eta _\alpha ^2(t)} . \end{aligned}$$
(160)

Thus, by taking the limit \(t\downarrow 0\) using monotone convergence, with the help of L’Hopital’s rule (twice), we deduce that

$$\begin{aligned} E\int _0^{S_\alpha }J_\alpha (s)\mathrm{d}s=\frac{\eta _\alpha '(0)EJ_\alpha ^2(S_\alpha )+\eta _\alpha ''(0)EJ_\alpha (S_\alpha )}{2\left[ \eta _\alpha '(0)\right] ^2} . \end{aligned}$$
(161)

Now, observe that this result can be plugged into the objective function of Phase II, i.e.

$$\begin{aligned} g(\alpha )&=E\int _0^{S_\alpha }\left[ \kappa -J_\alpha (s)\right] \mathrm{d}s\nonumber \\&=\kappa \alpha -\frac{\eta _\alpha '(0)EJ_\alpha ^2(S_\alpha )+\eta _\alpha ''(0)EJ_\alpha (S_\alpha )}{2\left[ \eta _\alpha '(0)\right] ^2} . \end{aligned}$$
(162)

Thus, by an insertion of (155) and (157) into (162), we derive an expression of \(g(\alpha )\) in terms of integrals with respect to \(U_\alpha (\cdot )\). Finally, by Proposition 2, \(g(\cdot )\) is concave on \(\left[ 0,\lambda ^{-1}\right) \) and hence standard numerical techniques might be applied in order to maximize it.
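For instance, since \(g(\cdot )\) is concave, a derivative-free golden-section search is one such standard technique; in the sketch below, g is an assumed callable assembled from (155), (157) and (162), and hi must be taken strictly below \(\lambda ^{-1}\).

```python
import math

# Golden-section search for the maximizer of a concave g on [lo, hi]; one of
# the "standard numerical techniques" mentioned above. g is an assumed
# callable, and hi should be chosen strictly below 1 / lambda.

def golden_max(g, lo: float, hi: float, tol: float = 1e-8) -> float:
    ratio = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - ratio * (b - a), a + ratio * (b - a)
    while b - a > tol:
        if g(c) >= g(d):
            b, d = d, c                 # maximizer lies in [a, d]
            c = b - ratio * (b - a)
        else:
            a, c = c, d                 # maximizer lies in [c, b]
            d = a + ratio * (b - a)
    return 0.5 * (a + b)
```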

A5.1 When \(J(\cdot )\) is a Poisson process

Assume that \(J(\cdot )\) is a Poisson process with rate \(q\in (0,\infty )\). Let \(\alpha \in \left[ 0,\lambda ^{-1}\right) \), \(x\in \left[ 0,\kappa \right] \), and, for every \(j=0,1,\ldots ,\lfloor \kappa -x\rfloor \), denote

$$\begin{aligned} s_j\equiv \frac{(1-\lambda \alpha )(\kappa -x-j)}{\gamma \lambda } . \end{aligned}$$
(163)

In addition, let \(s_{\lfloor \kappa -x\rfloor +1}\equiv 0\). In particular, notice that

$$\begin{aligned} \kappa -x-\frac{ \gamma \lambda }{1-\lambda \alpha }s_j=j , \ \ \forall j=0,1,\ldots ,\lfloor \kappa -x\rfloor , \end{aligned}$$
(164)

and, for every \(s\ge 0\), define

$$\begin{aligned} \delta (s)\equiv \left\lfloor \kappa -x-\frac{ \gamma \lambda }{1-\lambda \alpha }s\right\rfloor . \end{aligned}$$
(165)

Then, observe that, for every \(x\in \left[ 0,\kappa \right] \) and \(s\in \left[ 0,\infty \right) \setminus \left\{ s_0,s_1,\ldots ,s_{\lfloor \kappa -x\rfloor +1}\right\} \),

$$\begin{aligned} P\left[ S_\alpha (x)>s\right]&=P\left[ J(s)<\kappa -x-\frac{ \gamma \lambda }{1-\lambda \alpha }s\right] \end{aligned}$$
(166)
$$\begin{aligned}&=1_{[0,\infty )}\left( \kappa -x-\frac{ \gamma \lambda }{1-\lambda \alpha }s\right) \sum _{n=0}^{\delta (s)}e^{-qs}\frac{(qs)^n}{n!}. \end{aligned}$$
(167)

Thus,

$$\begin{aligned} ES_\alpha (x)&=\int _0^\infty 1_{[0,\infty )}\left( \kappa -x-\frac{ \gamma \lambda }{1-\lambda \alpha }s\right) \sum _{n=0}^{\delta (s)}e^{-qs}\frac{(qs)^n}{n!}\mathrm{d}s\\ {}&\nonumber =q^{-1}\sum _{j=0}^{\lfloor \kappa -x\rfloor }\sum _{n=0}^j\int _{s_{j+1}}^{s_j}e^{-qs}\frac{q^{n+1}s^n}{n!}\mathrm{d}s\\ {}&=\nonumber \sum _{j=0}^{\lfloor \kappa -x\rfloor }\sum _{n=0}^j\sum _{m=0}^n\frac{q^{m-1}}{m!}\left( e^{-q s_{j+1}}s_{j+1}^m-e^{-q s_j}s_j^m\right) \end{aligned}$$
(168)

is a formula that can be used in order to find \(x_\alpha \) (a numerical sketch appears at the end of this subsection). In a similar fashion, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) and \(s\ge 0\),

$$\begin{aligned} P\left( S_\alpha ^2>s\right) =P\left[ S_\alpha (x_\alpha )>\sqrt{s}\right] \end{aligned}$$
(169)

and hence

$$\begin{aligned} ES_\alpha ^2&=\int _0^\infty P\left[ S_\alpha (x_\alpha )>\sqrt{s}\right] \mathrm{d}s \end{aligned}$$
(170)
$$\begin{aligned}&=\sum _{j=0}^{\lfloor \kappa -x_\alpha \rfloor }\sum _{n=0}^j\int _{s_{j+1}}^{s_j}e^{-q\sqrt{s}} \frac{q^n\left( \sqrt{s}\right) ^n}{n!}\mathrm{d}s \end{aligned}$$
(171)
$$\begin{aligned}&=\sum _{j=0}^{\lfloor \kappa -x_\alpha \rfloor }\sum _{n=0}^j\frac{2(n+1)}{q^2}\int _{\sqrt{s_{j+1}}}^{\sqrt{s_j}} e^{-qy}\frac{q^{n+2}y^{n+1}}{(n+1)!}dy \end{aligned}$$
(172)
$$\begin{aligned}&=\sum _{j=0}^{\lfloor \kappa -x_\alpha \rfloor }\sum _{n=0}^j 2(n+1)\sum _{m=0}^{n+1}\frac{q^{m-2}}{m!}\left( e^{-q \sqrt{s_{j+1}}}s_{j+1}^{\frac{m}{2}}-e^{-q \sqrt{s_j}}s_j^\frac{m}{2}\right) . \end{aligned}$$
(173)

Now, note that (161) remains valid when \(J_\alpha (\cdot )\) and \(\eta _\alpha (\cdot )\) are replaced by any other subordinator and its exponent. In particular, applying it to \(J(\cdot )\) itself, whose exponent \(\eta (t)=q\left( 1-e^{-t}\right) \) satisfies \(\eta '(0)=q\) and \(\eta ''(0)=-q\), implies that

$$\begin{aligned} g(\alpha )&=\kappa \alpha -E\int _0^{S_\alpha }J(s)\mathrm{d}s-\frac{\gamma \lambda ES_\alpha ^2}{2\left( 1-\lambda \alpha \right) }\nonumber \\&=\kappa \alpha -\frac{EJ^2(S_\alpha )-EJ(S_\alpha )}{2q}-\frac{\gamma \lambda ES_\alpha ^2}{2\left( 1-\lambda \alpha \right) } . \end{aligned}$$
(174)

Now, observe that \(J(S_\alpha )\) is a discrete random variable with support \(\mathcal {N}\equiv \left\{ 0,1,\ldots ,\lfloor \kappa -x_\alpha \rfloor +1\right\} \). Thus, since \(J(\cdot )\) has independent increments, for every \(n\in \mathcal {N}\),

$$\begin{aligned} P\left[ J(S_\alpha )=n\right]&=P\left[ J(s_n)=n\right] \nonumber \\&\quad +\sum _{i=0}^{n-1}P\left[ J(s_n)=i\right] P\left[ J(s_{n-1})-J(s_n)\ge n-i\right] . \end{aligned}$$
(175)

Note that \(J(s_n)\sim \text {Poi}\left( qs_n\right) \) and \(J(s_{n-1})-J(s_n)\sim \text {Poi}\left( q\frac{1-\lambda \alpha }{ \gamma \lambda }\right) \). Therefore, all of these probabilities have closed-form expressions and so do \(EJ(S_\alpha )\) and \(EJ^2(S_\alpha )\).

Finally, note that this example is closely related to the crossing time of a Poisson process by a decreasing linear boundary. For more information regarding this issue and some other related topics, see [30].
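For completeness, a direct, hedged transcription of (168) into code is given below; combined with the bisection routine sketched after (156), it can be used to locate \(x_\alpha \) numerically. All names are illustrative.

```python
import math

# Direct transcription of the closed form (168) for E S_alpha(x) when J is a
# Poisson process with rate q; variable names mirror the text above.

def es_alpha_poisson(x: float, alpha: float, kappa: float,
                     lam: float, gamma: float, q: float) -> float:
    jmax = math.floor(kappa - x)
    # s_j = (1 - lam * alpha) * (kappa - x - j) / (gamma * lam), with s_{jmax+1} = 0
    s = [(1.0 - lam * alpha) * (kappa - x - j) / (gamma * lam) for j in range(jmax + 1)]
    s.append(0.0)
    total = 0.0
    for j in range(jmax + 1):
        for n in range(j + 1):
            for m in range(n + 1):
                coef = q ** (m - 1) / math.factorial(m)
                total += coef * (math.exp(-q * s[j + 1]) * s[j + 1] ** m
                                 - math.exp(-q * s[j]) * s[j] ** m)
    return total
```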

A6. Expected externalities in a retrial M/G/1 queue

As mentioned in Sect. 4, the externalities in the standard M/G/1 queue are well studied in the existing literature. However, to the best of the author's knowledge, there is no existing literature regarding expected externalities in an M/G/1 retrial queue with no waiting room, infinite orbit capacity and exponential retrial times with a constant rate (for the exact model setup see, for example, Sections 2 and 3 of [29]). Accordingly, the purpose of this section is to make a conjecture regarding the exact expressions for the expected externalities in an M/G/1 retrial queue with the above-mentioned features. To this end, the first step is to analyse a regulation problem in a retrial queue which is the analogue of the model described in Sect. 2 (with no balking). Then, the conjectured expression for the expected externalities will stem naturally from the optimal price function which should internalize the expected externalities.

A6.1 Analogous model with customers' retrials

Consider the same model as described in Sect. 2 with the following modifications: Assume that now there is no waiting room. Instead, there is an orbit with an infinite capacity such that any customer who finds the server busy at his arrival time joins the orbit. Every customer in the orbit conducts retrials until he finds the server idle and then he starts receiving service. In addition, the retrial times constitute an iid sequence of exponentially distributed random variables with rate \(\theta \in (0,\infty )\) which is independent of all other random elements in this model. In particular, just like in the original model, the service discipline is nonpreemptive such that customers are those who decide on their service durations. Assume that \(\left( C,D\right) ,\left( C_1,D_1\right) ,\left( C_2,D_2\right) ,\ldots \) is an iid sequence of nonnegative random variables which is independent of all other random elements in this model such that \(ED=\delta \in \left( 0,\infty \right) \).

Now, for every \(i\ge 1\), the total utility of the i’th customer from orbiting \(w\ge 0\) minutes, conducting r retrials and receiving a service of \(s\ge 0\) minutes is given by

$$\begin{aligned} U_i \left( s,w,r;p\right) \equiv X_i(s)-p(s)-C_iw-D_ir . \end{aligned}$$
(176)

This means that now, besides the original assumptions, there is an additional assumption that the i’th customer suffers a constant loss of \(D_i\) monetary units for each retrial.

Given this model, the purpose is to find an optimal price function in the sense of Sect. 3. To this end, the same approach which is described in Sect. 4 can be carried out. To start with, using known results regarding the M/G/1 retrial queue with exponential retrial times (see, for example, Equations (3.15) and (3.16) in [29]), the optimization to be solved is given by

$$\begin{aligned} \max _{S\in \mathcal {D}}: \ E\int _0^S V(s)\mathrm{d}s-\left( \gamma +\theta \delta \right) \lambda \left[ \frac{ES^2}{2\left( 1- \lambda ES\right) }+\frac{ES}{\theta \left( 1-\lambda ES\right) }\right] . \end{aligned}$$
(177)

In particular, notice that, just like in Sect. 4, the objective functional can be extended (denote the extension by \(\tilde{f}(\cdot )\)) and then maximized on \(\mathcal {S}\). This can be done by a two-phase method.

A6.2 Phase I

For every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) the optimization of Phase I is

$$\begin{aligned} \begin{aligned}&\max _{S\in \mathcal {F}}:&E\int _0^S \left[ V(s)-\frac{\lambda \left( \gamma +\theta \delta \right) }{1-\lambda \alpha }s\right] \mathrm{d}s \\&\ \text {s.t:}&0\le S \ , \ P \text {-a.s.} , \\&&ES=\alpha . \end{aligned} \end{aligned}$$
(178)

This optimization is identical to (24) up to re-parameterization of \(\gamma \), and hence the same results hold for this case. In particular, there exists \(x_\alpha \) such that

$$\begin{aligned} T_\alpha \left( x_\alpha \right) \equiv T_\alpha \equiv \inf \left\{ s\ge 0;V(s)-\frac{\lambda \left( \gamma +\theta \delta \right) }{1-\lambda \alpha }s\le x_\alpha \right\} \end{aligned}$$
(179)

is an optimal solution of (178).

A6.3 Phase II

For every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \), denote the objective value of Phase II by

$$\begin{aligned} \tilde{g}(\alpha )&\equiv \tilde{f}\left( T_\alpha \right) \nonumber \\&=E\int _0^{T_\alpha } \left[ V(s)-s\frac{\lambda \left( \gamma +\theta \delta \right) }{1-\lambda \alpha }\right] \mathrm{d}s-\frac{\lambda \left( \gamma +\theta \delta \right) \alpha }{\theta \left( 1-\lambda \alpha \right) } . \end{aligned}$$
(180)

This phase can be analysed by the same method which was applied to Phase II of the original model.

Theorem 3

There are \(\alpha ^*\in \left( 0,\lambda ^{-1}\right) \) and \(x^*\in (0,\infty )\) such that \(T^*=T_{\alpha ^*}\left( x^*\right) \) is a solution of (178). In addition, for every constant \(\pi \), an optimal price function is given by

$$\begin{aligned} \tilde{p}_\pi ^*(s)\equiv \pi +s x^*+s^2\frac{\lambda \left( \gamma +\theta \delta \right) }{2\left( 1- \lambda \alpha ^*\right) }+\int _0^s\xi (t)\mathrm{d}t , \ \ \forall s\ge 0 . \end{aligned}$$
(181)

Furthermore, for every \(s\ge 0\) one has

$$\begin{aligned} \tilde{p}^*_0(s)=\lambda \left( \gamma +\theta \delta \right) z(s)+\int _0^s\xi (t)\mathrm{d}t, \end{aligned}$$
(182)

where

$$\begin{aligned} z(s)=\frac{s}{1-\lambda \alpha ^*}\left[ \frac{\lambda \left( \frac{E(T^*)^2}{2}+\frac{\alpha ^*}{\theta }\right) }{1-\lambda \alpha ^*}+\theta ^{-1}\right] +s^2\frac{\lambda }{2\left( 1-\lambda \alpha ^*\right) } . \end{aligned}$$
(183)

Proof

The proof follows by the same arguments as in the proof of Theorem 1. \(\square \)
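A small evaluator for \(z(s)\) in (183) is sketched below; \(\alpha ^*\), \(E(T^*)^2\) and the model primitives are assumed to have been computed already (e.g. by the two-phase method above), so the arguments are assumptions for illustration.

```python
# Evaluates z(s) from (183); alpha_star = E T*, et2 = E (T*)^2, and lam, theta
# are the model primitives (assumed precomputed inputs).

def z(s: float, alpha_star: float, et2: float, lam: float, theta: float) -> float:
    denom = 1.0 - lam * alpha_star
    linear = (s / denom) * (lam * (et2 / 2.0 + alpha_star / theta) / denom + 1.0 / theta)
    quadratic = s * s * lam / (2.0 * denom)
    return linear + quadratic
```

The price \(\tilde{p}^*_0(s)\) in (182) is then \(\lambda \left( \gamma +\theta \delta \right) z(s)\) plus the term \(\int _0^s\xi (t)\mathrm{d}t\), which depends on the function \(\xi \) defined in the main text.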

A6.4 Conjecture

A general observation regarding externalities in retrial queues is that, unlike in the regular M/G/1 queue, the externalities caused by a tagged customer decompose into two parts:

  1. 1.

    Waiting externalities: The total waiting time that could be saved for customers if the tagged customer reduced his service demand to zero.

  2. 2.

    Retrial externalities: The number of retrials that could be saved for customers if the tagged customer reduced his service demand to zero.

Because this model is similar to the original one, it makes sense that \(\tilde{p}^*_0(\cdot )\) internalizes the externalities in this model just like \(p^*_0(\cdot )\) does in the original one. Thus, due to the interpretations of \(\gamma \) and \(\delta \), it is plausible that the expected waiting and retrial externalities which are caused by a tagged customer with a service demand of \(s\ge 0\) minutes are given, respectively, by \(\lambda z(s)\) and \(\lambda \theta z(s)\). Since this is not the focus of this work, the proof of this conjecture is left for future research.


Cite this article

Jacobovic, R. Regulation of a single-server queue with customers who dynamically choose their service durations. Queueing Syst 101, 245–290 (2022). https://doi.org/10.1007/s11134-021-09722-x

