Abstract
In recent years, there has been growing attention to queueing models in which customers choose their own service durations. The model assumptions in the existing literature imply that every customer knows his service demand when he enters the service position. Clearly, this property is not consistent with some real-life situations. Motivated by this issue, the current work studies a single-server queueing model with customers who dynamically choose their service durations. In this setup, the main result is the existence of a quadratic price function which (1) implies an optimal resource allocation from a social point of view and (2) internalizes the externalities in the system. In addition, it is explained how to compute its parameters efficiently.
Notes
Oz [23] was presented at the 20th INFORMS Applied Probability Society Conference, July 3–5, 2019, Brisbane, Australia.
In fact, smoothness is not the issue here: once the right-derivative of \(p(\cdot )\) is right-continuous and nondecreasing, a straightforward equivalent condition for (12) can be phrased.
References
Agranov, M., Ortoleva, P.: Stochastic choice and preferences for randomization. J. Polit. Econ. 125(1), 40–68 (2017)
Alili, L., Kyprianou, A.E.: Some remarks on first passage of Lévy processes, the American put and pasting principles. Ann. Appl. Probab. 15(3), 2062–2080 (2005)
Asmussen, S., Kella, O.: A multi-dimensional martingale for Markov additive processes and its applications. Adv. Appl. Probab. 32(2), 376–393 (2000)
Ballinger, T.P., Wilcox, N.T.: Decisions, error and heterogeneity. Econ. J. 107(443), 1090–1105 (1997)
Barron, Y.: A fluid EOQ model with Markovian environment. J. Appl. Probab. 52(2), 473–489 (2015)
Bekker, R., Boxma, O.J., Kella, O.: Queues with delays in two-state strategies and Lévy input. J. Appl. Probab. 45(2), 314–332 (2008)
Debo, L., Li, C.: Design and pricing of discretionary service lines. Manag. Sci. 67(4), 2251–2271 (2021)
Edelson, N.M., Hilderbrand, D.K.: Congestion tolls for Poisson queuing processes. Econom. J. Econom. Soc. 43, 81–92 (1975)
Feldman, P., Segev, E.: Managing congestion when customers choose their service times: the important role of time limits. Available at SSRN 3424317 (2019)
Gossen, H.H.: The Laws of Human Relations and the Rules of Human Action Derived Therefrom. MIT Press (1983)
Hardin, G.: The tragedy of the commons. Science 162(3859), 1243–1248 (1968)
Hassin, R.: Rational Queueing. Chapman and Hall/CRC (2016)
Hassin, R., Haviv, M.: To Queue or not to Queue: Equilibrium Behavior in Queueing Systems, vol. 59. Springer (2003)
Haviv, M., Ritov, Y.A.: Externalities, tangible externalities, and queue disciplines. Manag. Sci. 44(6), 850–858 (1998)
Hey, J.D.: Experimental investigations of errors in decision making under risk. Eur. Econ. Rev. 39(3–4), 633–640 (1995)
Hopp, W.J., Iravani, S.M., Yuen, G.Y.: Operations systems with discretionary task completion. Manag. Sci. 53(1), 61–77 (2007)
Jacobovic, R., Kella, O.: Minimizing a stochastic convex function subject to stochastic constraints and some applications. Stoch. Process. Appl. 130(11), 7004–7018 (2020)
Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Springer (1988)
Kella, O., Whitt, W.: Useful martingales for stochastic storage processes with Lévy input. J. Appl. Probab. 29(2), 396–403 (1992)
Leeman, W.A.: Letter to the editor: The reduction of queues through the use of price. Oper. Res. 12(5), 783–785 (1964)
Mas-Colell, A., Whinston, M.D., Green, J.R.: Microeconomic Theory, vol. 1. Oxford University Press, New York (1995)
Naor, P.: The regulation of queue size by levying tolls. Econom. J. Econom. Soc. 37, 15–24 (1969)
Oz, B.: Regulating service length demand in a single server queue. Unpublished Manuscript (2019)
Parzen, E.: Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory. Technical Report TR-B-3, Institute of Statistics, Texas A&M University, College Station (1980)
Peskir, G., Shiryaev, A.: Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel (2006)
Rubinstein, A.: Lecture Notes in Microeconomic Theory: The Economic Agent, 2nd edn. Princeton University Press (2012)
Tong, C., Rajagopalan, S.: Pricing and operational performance in discretionary services. Prod. Oper. Manag. 23(4), 689–703 (2014)
Tversky, A.: Intransitivity of preferences. Psychol. Rev. 76(1), 31 (1969)
Yang, T., Templeton, J.G.C.: A survey on retrial queues. Queueing Syst. 2(3), 201–233 (1987)
Zacks, S.: Sample Path Analysis and Distributions of Boundary Crossing Times, vol. 2203. Springer, New York (2017)
Acknowledgements
The author is extremely grateful to Binyamin Oz for valuable discussions as well as for sharing his unpublished manuscript. In addition, the author would like to thank Offer Kella, Moshe Haviv and Refael Hassin for their comments before the submission. Finally, the author would like to thank the anonymous referees for their comments which significantly helped in improving the presentation of the contents in this paper.
Appendices
A1. Proof of Proposition 1
Assume that \(ES_1>0\). For every \(k\ge 1\), let \(N_k\) be the number of customers who receive service during the k’th busy period. \(C_1,C_2,\ldots \) are independent of all other random quantities in this model. Therefore, by conditioning and un-conditioning with respect to this sequence, a known result regarding the long-run average queue length in an M/G/1 queue implies that the long-run average loss due to waiting time equals
In addition, \(V(\cdot )\) is a nonincreasing process such that \(V(0)>0\). Hence, the assumptions \(ES_1^2<\infty \) and \(EV^2(0)<\infty \) imply that
Thus, \(E\int _0^{S_1}V_1(s)\mathrm{d}s\) is well-defined. Moreover, for every \(i\ge 1\), \(S_i\) is determined by \(V_i(\cdot )\) as a solution of (4). Thus, since \(V_1(\cdot ),V_2(\cdot ),V_3(\cdot ),\ldots \) is an iid sequence, so is \(\left( S_1,V_1(\cdot )\right) ,\left( S_2,V_2(\cdot )\right) ,\left( S_3,V_3(\cdot )\right) ,\ldots \). Hence
is also an iid sequence and the rest follows by some standard renewal-reward arguments. \(\square \)
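For the reader’s convenience, the known M/G/1 result alluded to above is presumably the Pollaczek–Khinchine formula: with arrival rate \(\lambda \) and a generic service time \(S_1\) satisfying \(\lambda ES_1<1\), the long-run average number of customers waiting in queue equals
$$\begin{aligned} \bar{L}_q=\frac{\lambda ^2ES_1^2}{2\left( 1-\lambda ES_1\right) } . \end{aligned}$$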
A2. Auxiliary lemmata
The following auxiliary Lemmas 1 and 2 will be used later on in the proof of Proposition 2.
Lemma 1
Let \(v:\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) be a nonincreasing right-continuous function. In addition, for every \(n\ge 1 \), define \(v_n(s)\equiv v*g_n(s),\forall s\ge 0\), where
Then,
1. For every \(n\ge 1 \), \(v_n(\cdot )\) is a continuous, nonnegative and nonincreasing function on \(\left[ 0,\infty \right) \) such that \(0\le v_n(0)\le v(0)\).

2. For every \(s\in \left[ 0,\infty \right) \), \(v_n(s)\uparrow v(s)\) as \(n\rightarrow \infty \).
Proof
1. Let U be a random variable which is distributed uniformly on \(\left[ 0,1\right] \) and, for every \(n\ge 1 \), denote \(U_n\equiv \frac{U}{n}\). Fix \(n\ge 1 \) and notice that
$$\begin{aligned} v_n(s)=Ev\left( s+U_n\right) , \quad \forall s\ge 0 . \end{aligned}$$
(66)
Recall that \(v(\cdot )\) is a nonnegative and nonincreasing function; hence, \(v_n(\cdot )\) also shares these properties. In addition, since \(U_n\) is nonnegative, \(v_n(0)\le v(0)\). To show that \(v_n(\cdot )\) is continuous on \(\left[ 0,\infty \right) \), pick an arbitrary \(s\in \left[ 0,\infty \right) \) and let \((s_k)_{k=1}^\infty \) be a sequence such that \(s_k\rightarrow s\) as \(k\rightarrow \infty \). Then, since \(v(\cdot )\) is nonincreasing, \(s+U_n\) is P-a.s. a continuity point of \(v(\cdot )\), i.e. \(v\left( s_k+U_n\right) \rightarrow v\left( s+U_n\right) \) as \(k\rightarrow \infty \), where the convergence holds P-a.s. In addition, \(0\le v\left( s_k+U_n\right) \le v(0)<\infty \) for every \(k\ge 1\) and hence the dominated convergence theorem implies that
$$\begin{aligned} \lim _{k\rightarrow \infty }v_n(s_k)=E\lim _{k\rightarrow \infty } v\left( s_k+U_n\right) =Ev\left( s+U_n\right) =v_n(s) \end{aligned}$$
(67)
and the result follows.
2. Fix \(s\in \left[ 0,\infty \right) \) and observe that \(v(\cdot )\) is nonnegative, nonincreasing and right-continuous. Thus, since \(U_n\downarrow 0\) as \(n\rightarrow \infty \), the result follows by the monotone convergence theorem.
\(\square \)
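To illustrate Lemma 1 numerically, consider the following minimal Python sketch (not part of the proof). It uses the fact, shown above, that \(v_n(s)=Ev\left( s+U_n\right) \) with \(U_n\) uniform on \(\left[ 0,1/n\right] \); the step function `v` below is a hypothetical example of a nonincreasing right-continuous function.

```python
import numpy as np

def smoothed(v, s, n, grid=10_000):
    # Approximate v_n(s) = E[v(s + U_n)], U_n ~ Uniform(0, 1/n), i.e. the
    # convolution of v with the uniform density g_n on [0, 1/n].
    u = np.linspace(0.0, 1.0 / n, grid)
    return np.mean(v(s + u))

# Hypothetical example: v(s) = 2 on [0, 1) and 1 on [1, infinity).
v = lambda s: np.where(s < 1.0, 2.0, 1.0)

for n in (1, 10, 100, 1000):
    print(n, smoothed(v, s=0.999, n=n))  # increases towards v(0.999) = 2
```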
Lemma 2
Let \(\left( \mathbf{X} ,\mathcal {X},\mu \right) \) be a general measure space and let \(\alpha :\mathbf{X} \rightarrow \mathbb {R}\) and \(\xi :\mathbf{X} \times [0,\infty )\rightarrow \mathbb {R}\) be such that
(i) For every \(t\ge 0\), \(x\mapsto \xi (x,t)\) is \(\mathcal {X}\)-measurable.

(ii) For every \(x\in \mathbf {X}\), \(t\mapsto \xi (x,t)\) is right-continuous on \([0,\infty )\).
In addition, let \(\beta (\cdot )\in L_1(\mu )\) and define
If at least one of the following conditions holds:
C1: For every \(x\in \mathbf {X}\), \(t\mapsto \xi (x,t)\) is nonnegative and nonincreasing.

C2: There exists \(\psi (\cdot )\in L_1(\mu )\) such that \(|\xi (x,t)|\le |\psi (x)|\) for every \((x,t)\in \mathbf{X} \times [0,\infty )\).
Then, \(\varphi (\cdot )\) is right-differentiable on \([0,\infty )\) such that
Moreover, if C2 holds and \(t\mapsto \xi (x,t)\) is continuous on \((0,\infty )\), then \(\varphi (\cdot )\) is differentiable on \((0,\infty )\) such that
Proof
For simplicity and without loss of generality the proof is given for the case where \(\beta \) is identically zero. In addition, let \(\mathcal {B}[0,\infty )\) be the Borel \(\sigma \)-field which is associated with \([0,\infty )\). Notice that assumptions (i) and (ii) imply that \((x,t)\mapsto \xi (x,t)\) is \(\mathcal {X}\otimes \mathcal {B}[0,\infty )\)-measurable. For details see, for example, Remark 1.4 on page 5 of [18].
Now, observe that under either C1 or C2, Fubini’s theorem may be applied in order to deduce that
is \(\mathcal {B}[0,\infty )\)-measurable and satisfies
Thus, in order to show (68), it is enough to prove that \(\eta (\cdot )\) is right-continuous on \([0,\infty )\). To this end, by (ii), under C1 (C2), the monotone (dominated) convergence theorem implies that
and the result follows. Finally, observe that in order to show (69), it is enough to prove that \(\eta (\cdot )\) is continuous on \((0,\infty )\). This can be done by using similar arguments. \(\square \)
Finally, note that once \(\left( \mathbf{X} ,\mathcal {X},\mu \right) \) is complete, the conclusion of Lemma 2 remains valid even when the requirements which appear in (ii), C1 and C2 are satisfied \(\mu \)-a.s. (instead of pointwise).
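As a numerical sanity check of Lemma 2 (not needed for any of the proofs), the following Python sketch compares a finite-difference derivative of \(\varphi (t)=\int \xi (x,t)\mu (\mathrm{d}x)\) with the integral of the pointwise derivative for a hypothetical dominated family: \(\mu \) is the standard normal law and \(\xi (x,t)=e^{-tx^2}\), so that C2 holds with \(\psi \equiv 1\).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)    # sample from mu = standard normal law

xi = lambda t: np.exp(-t * x**2)      # xi(x, t); dominated by psi = 1 in L1(mu)
phi = lambda t: np.mean(xi(t))        # Monte Carlo estimate of phi(t)

t, h = 1.0, 1e-4
fd = (phi(t + h) - phi(t)) / h        # finite difference (same sample reused,
                                      # so the Monte Carlo noise mostly cancels)
exchanged = np.mean(-x**2 * xi(t))    # integral of the pointwise t-derivative
exact = -(1.0 + 2.0 * t) ** -1.5      # closed form, since phi(t) = (1+2t)^(-1/2)
print(fd, exchanged, exact)           # all three approximately equal
```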
A3. Proof of Proposition 2
The proof of Proposition 2 is given by the subsequent lemmata:
Lemma 3
Assume that \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) is such that \(x_\alpha <0\). Then, there exists \(\tilde{\alpha }\in \left[ 0,\lambda ^{-1}\right) \) such that \(x_{\tilde{\alpha }}\ge 0\) and \(g\left( \alpha \right) \le g\left( \tilde{\alpha }\right) \).
Proof
Let \(\alpha \) be such that \(x_\alpha <0\) and define
Notice that \(\tilde{S}\le S_\alpha \) and hence \(\tilde{\alpha }\equiv E\tilde{S}\le \alpha \). This implies that
In addition, notice that \(\tilde{\alpha }\le \alpha \), \(\tilde{\alpha }=E\tilde{S}=ES_{\tilde{\alpha }}\) and
Thus, since \(\alpha \mapsto x_\alpha \) is nonincreasing, \(x_{\tilde{\alpha }}\) is nonnegative and the result follows. \(\square \)
Lemma 4
Consider some \(\alpha \in \left[ 0,\lambda ^{-1}\right) \).
1. If \(x_\alpha \ge 0\), then \(0\le S_\alpha \le \frac{V(0)}{\gamma \lambda }\) and \(g(\alpha )\ge 0\).

2. For every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \),
$$\begin{aligned} g\left( \alpha \right) \le \frac{EV^2(0)}{\gamma \lambda }<\infty . \end{aligned}$$
(73)
Proof
Let \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) be such that \(x_\alpha \ge 0\) and notice that
Therefore, since \(V(\cdot )\) is nonincreasing, we deduce that
and hence
Then, Lemma 3 can be used in order to show that this upper bound holds for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \). Finally, to show that \(g(\alpha )\ge 0\) for every \(\alpha \) for which \(x_\alpha \ge 0\), observe that in such a case \(g(\alpha )\) is defined as the expectation of an integral whose integrand is nonnegative on the integration domain. \(\square \)
Lemma 5
For every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \), \(S_\alpha \) is square-integrable.
Proof
Let \(\alpha \in \left[ 0,\lambda ^{-1}\right) \). If \(x_\alpha \ge 0\), then the result is a consequence of Lemma 4, and hence it remains to consider the case \(x_\alpha \in (-\infty ,0)\). To this end, define \(\tilde{V}(s)=V(s)-x_\alpha ,\forall s\ge 0\), and observe that, for every \(S\in \mathcal {S}\) such that \(ES=\alpha \),
This means that \(S_\alpha \) is also a solution of
In addition, note that \(\tilde{V}(\cdot )\) is a nonincreasing right-continuous process such that \(\tilde{V}(0)=V(0)-x_\alpha \) is a square-integrable positive random variable. Consequently, Lemma 4 implies the result because, by definition,
\(\square \)
Note that, for every \(S\ge 0\) which is square integrable,
Lemma 6
\(f(\cdot )\) is concave on \(\mathcal {D}\) and \(g(\cdot )\) is concave on \(\left[ 0,\lambda ^{-1}\right) \).
Proof
Define
and for every \((S,\alpha )\in \mathcal {S}_0\) denote
In particular, observe that \(V(\cdot )\) is a nonincreasing right-continuous process, which implies that \(s\mapsto \int _0^sV(t)\mathrm{d}t\) is concave on \([0,\infty )\). Therefore, since \((t,s)\mapsto \frac{t^2}{s}\) is convex on \(\mathbb {R}\times (0,\infty )\), \(h(S,\alpha )\) is concave on \(\mathcal {S}_0\). Thus, since expectation is a linear operator,
is a concave functional on \(\mathcal {S}_0\). In particular, notice that \((S,\alpha )\in \mathcal {S}_0\) implies that S is square-integrable and hence (80) can be used in order to justify the last equality. Now, consider \(S_1,S_2\in \mathcal {D}\) and, for every \(i=1,2\), denote \(\alpha _i\equiv ES_i\). Then, observe that the concavity of \(H(\cdot )\) implies that, for every \(\mu \in (0,1)\),
and hence the concavity of \(f(\cdot )\) follows by definition.
In order to prove the concavity of \(g(\cdot )\), recall Lemma 5 which implies that, for every \(\alpha \in [0,\lambda ^{-1})\),
and hence the result follows because \(g(\cdot )\) equals a supremum of a concave functional on a convex set which is not empty (take, for example, \((\alpha ,\alpha )\)). \(\square \)
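For completeness, the partial-maximization step at the end of the proof can be spelled out as follows: given \(\mu \in (0,1)\), \(\alpha _1,\alpha _2\in \left[ 0,\lambda ^{-1}\right) \) and \(\varepsilon >0\), pick \(S_i\) which is \(\varepsilon \)-optimal for \(g\left( \alpha _i\right) \), \(i=1,2\). Then, the concavity of \(H(\cdot )\) yields
$$\begin{aligned} g\left( \mu \alpha _1+\left( 1-\mu \right) \alpha _2\right) \ge H\left( \mu S_1+\left( 1-\mu \right) S_2,\mu \alpha _1+\left( 1-\mu \right) \alpha _2\right) \ge \mu g\left( \alpha _1\right) +\left( 1-\mu \right) g\left( \alpha _2\right) -\varepsilon , \end{aligned}$$
and letting \(\varepsilon \downarrow 0\) gives the concavity of \(g(\cdot )\).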
Lemma 7
\(\lim _{\alpha \downarrow 0}g(\alpha )=g(0)=0\).
Proof
Denote
and \(\alpha _0\equiv E\left( S^0\wedge \frac{1}{2\lambda }\right) \). Observe that the positivity of \(S^0\) implies that \(\alpha _0\in \left( 0,\frac{1}{2\lambda }\right] \). Then, for every \(\alpha \in [0,\alpha _0)\), define \(\hat{S}_\alpha \equiv \frac{\alpha }{\alpha _0}\left( S^0\wedge \frac{1}{2\lambda }\right) \), which is a square-integrable nonnegative random variable such that \(E\hat{S}_\alpha =\alpha \). Thus, by the definition of \(g(\cdot )\) and using (80), we deduce that
In particular, the expectation in the second term is finite and hence this term tends to zero as \(\alpha \downarrow 0\). In addition, for every \(\alpha \in \left[ 0,\alpha _0\right) \),
Thus, since \(EV(0)<\infty \), the dominated convergence theorem implies that the first term in (86) tends to zero as \(\alpha \downarrow 0\). To provide an upper bound which tends to zero, note that, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \),
where \(\tilde{S}_\alpha \) is the solution of
which is specified by Theorem 1 of [17]. In particular, notice that this optimization is well-defined due to (21). In addition, the existence of this solution is justified by the same kind of argument used to justify that \(S_\alpha \) is a solution of (24). Now, let
and notice that
Therefore, the pre-conditions of Proposition 1 of [17] are satisfied, i.e.
and the proof is completed. \(\square \)
Lemma 8
Let \(\alpha '\equiv \inf \{\alpha \in [0,\lambda ^{-1});x_\alpha <0\}\). Then, \(\alpha '<\lambda ^{-1}\).
Proof
Assume by contradiction that \(\alpha '=\lambda ^{-1}\), which means that \(x_\alpha \ge 0\) for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \). In addition, observe that Lemma 5 and (80) imply that, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \),
In addition, \(x_{\alpha }\ge 0,\forall \alpha \in \left[ 0,\lambda ^{-1}\right) \), and hence Lemma 4 implies that \(0\le S_{\alpha }\le \frac{V(0)}{\gamma \lambda },\forall \alpha \in \left[ 0,\lambda ^{-1}\right) \). Therefore, since \(V(\cdot )\) is nonincreasing, we deduce that, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \),
These results imply that, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \),
Now, if
then (93) implies that \(g(\alpha )\) tends to \(-\infty \) as \(\alpha \uparrow \lambda ^{-1}\). Thus, in such a case there exists \(\alpha \in (0,\lambda ^{-1})\) such that \(g(\alpha )<0\). On the other hand, recall that \(x_{\alpha }\ge 0\) and
Thus, since \(V(\cdot )\) is a nonincreasing right-continuous process, this implies that
which implies a contradiction.
Hence, we deduce that
Then, since, for every \(\alpha \in [0,\lambda ^{-1})\), \(S_\alpha \) is square-integrable, the Cauchy–Schwarz inequality leads to the following contradiction:
\(\square \)
Lemma 9
There exists \(\alpha ^*\in \left[ 0,\frac{EV(0)}{\gamma \lambda }\right] \cap \left[ 0,\lambda ^{-1}\right) \) which is a maximizer of \(g(\cdot )\) on \([0,\lambda ^{-1})\) such that \(x_{\alpha ^*}\ge 0\) and \(S^*\equiv S_{\alpha ^*}\) is an optimal solution of (20) which is square-integrable.
Proof
By Lemma 8, \(\alpha '<\lambda ^{-1}\) and hence Lemma 3 implies that Phase II is reduced to maximization of \(g(\cdot )\) on the closed interval \(\left[ 0,\alpha '\right] \). By Lemmas 6 and 7, we deduce that \(g(\cdot )\) is continuous on \([0,\alpha ']\) and hence that there exists \(\alpha ^*\in [0,\alpha ']\) which maximizes the value of \(g(\cdot )\) over \([0,\lambda ^{-1})\). In addition, given the maximizer \(\alpha ^*\), the square integrability of \(S^*\) is a direct consequence of Lemma 5. Finally, the upper bound and the fact that \(x_{\alpha ^*}\ge 0\) stem immediately from Lemmas 3 and 4. \(\square \)
Lemma 10
\(f(S^*)=0\) if and only if \(\alpha ^*=0\).
Proof
Assume that \(0=\alpha ^*=ES^*\). Since \(S^*\) is a nonnegative random variable, \(S^*=0\), P-a.s., and \(f(S^*)=0\) follows immediately. To show the other direction, assume that
and recall that
Therefore, since \(V(\cdot )\) is a nonincreasing right-continuous process, then
and hence \(x^*\ge 0\) implies that \(\alpha ^*=0\) (remember that \(\alpha ^*\in [0,\lambda ^{-1})\)). \(\square \)
Observe that Lemma 9 implies that there exists \(\alpha ^*\in \left[ 0,\lambda ^{-1}\right) \) and \(x^*= x_{\alpha ^*}\ge 0\) for which \(S^*=S_{\alpha ^*}=S_{\alpha ^*}\left( x^*\right) \) is an optimum of (20). Therefore, since \(V(\cdot )\) is nonincreasing and right-continuous, its left limit at \(S^*\) is nonnegative and hence \(S^*\) is also an optimum (with the same objective value) of the analogous optimization with \(V^+(\cdot )\) replacing \(V(\cdot )\). Thus, without loss of generality, from now on assume that \(V(\cdot )\) is nonnegative.
Lemma 11
\(f(S^*)>0\) (and hence \(\alpha ^*>0\)).
Proof
In order to prove that \(f(S^*)>0\), it is enough to find a random variable \(S_0\in \mathcal {S}\) for which \(f(S_0)>0\). To this end, for every \(\alpha \in [0,\infty )\) define a function
Since \(V(\cdot )\) is a nonnegative nonincreasing right-continuous process, Lemma 2 implies that \(\upsilon (\cdot )\) is right-differentiable with a right-derivative at zero which equals
This means that there exists \(\alpha _0>0\) such that \(\upsilon (\alpha _0)>\upsilon (0)=0\) and the result follows. \(\square \)
Lemma 12
If \(V(\cdot )\) is continuous, then (16) holds.
Proof
Assume that \(V(\cdot )\) is a continuous process and denote \(x^*=x_{\alpha ^*}\). Then, since \(V(\cdot )\) is continuous, whenever \(S^*>0\),
Therefore, by multiplying both sides by \(S^*\) and taking expectations we deduce that
In addition, for every \(u>0\) define a function \(\upsilon (u)\equiv f\left( uS^*\right) \). It is known that \(S^*\) is square-integrable and hence, for every \(u>0\), \(uS^*\) is also square-integrable. Thus, using (80) we deduce that
Recall that \(V(\cdot )\) is continuous on \(\left[ 0,\infty \right) \) and hence the fundamental theorem of calculus implies that, for every \(u>0\),
Observe that this derivative is nonnegative and dominated from above by \(S^*V(0)\). Therefore, since \(S^*\) is dominated by a linear function of V(0) (see Lemma 4) and \(EV^2(0)<\infty \), Lemma 2 allows interchanging expectation and differentiation. Namely, \(\upsilon (\cdot )\) is differentiable in some neighbourhood of \(u=1\) with a derivative
It has already been shown that \(u=1\) is a global maximum of \(\upsilon (\cdot )\). Therefore, combining the first-order condition at \(u=1\) with (104) leads to the conclusion that
Note that \(f\left( S^*\right) >0\) implies that \(\alpha ^*>0\) and hence the result follows. \(\square \)
The final step in the proof of Proposition 2 is to extend the result of Lemma 12 to a general \(V(\cdot )\).
Lemma 13

(16) holds (without assuming that \(V(\cdot )\) is continuous).
Proof
Consider \(V(\cdot )\) which might have jumps and, for every \(n\ge 1 \), define \(V_n(s)\equiv V*g_n(s),\forall s\ge 0\), where \(g_n\) is given by the statement of Lemma 1. Note that, due to this lemma, for each \(n\ge 1 \), \(V_n\) satisfies the assumptions of Lemma 12. In addition, by Lemma 1, it is known that, for every \(s\ge 0\), \(V_n(s)\uparrow V(s)\) as \(n\rightarrow \infty \). Moreover, for every \(n\ge 1 \), consider the optimization
and denote its objective functional by \(f_n(\cdot )\). Note that, for each \(n\ge 1\), (110) has a solution \(S_n\) such that
for some \(\alpha _n\in \left( 0,\lambda ^{-1}\right) \) and \(x_n\ge 0\). Clearly, the sequence \(ES_1,ES_2,\ldots \) takes values in \(\left[ 0,\lambda ^{-1}\right) \) and is hence bounded. In addition, recall that, for every \(n\ge 1 \), \(S_n\) is bounded by a linear function of \(V_n(0)\in \left[ 0,V(0)\right] \) (see also Lemma 4). Therefore, since \(EV^2(0)<\infty \), the sequence \(ES_1^2,ES_2^2,\ldots \) is bounded. Consequently, there exists \(\left\{ n_k \right\} _{k=1}^\infty \subseteq \left\{ 1,2,\ldots \right\} \) such that
For every \(k\ge 1\), \(V_{n_k }(\cdot )\) is a process which satisfies the assumptions of Lemma 12 and hence
This means that
such that \(x^*\ge 0\). Moreover, by construction, for every \(s\ge 0\), \(V_{n_k }(s)\uparrow V(s)\) as \(k\rightarrow \infty \) and observe that \(\alpha _{n_k}\rightarrow \alpha \) as \(k\rightarrow \infty \). Therefore, if, for every \(k\ge 1\),
and
then for every \(s\ge 0\), \(\zeta _k(s)\rightarrow \zeta (s)\) as \(k\rightarrow \infty \). Now, for every \(k\ge 1\), define
and
Furthermore, observe that \(\zeta (\cdot ),\zeta _1(\cdot ),\zeta _2(\cdot ),\ldots \) are all (strictly) increasing continuous processes tending to infinity as \(s\rightarrow \infty \). Therefore, it can be deduced that \(\zeta ^{-1}(\cdot ),\zeta ^{-1}_1(\cdot ),\zeta ^{-1}_2(\cdot ),\ldots \) are finite-valued continuous processes (with a time index u). Now, using exactly the same arguments as in the proof of Theorem 2A in [24], we deduce that, for every \(u\in \mathbb {R}\), \(\zeta ^{-1}_k(u)\rightarrow \zeta ^{-1}(u)\) as \(k\rightarrow \infty \). In particular, this is true for \(u=0\), i.e., for every sample-space realization,
It is left to show that \(S'\) is a solution of (20). To this end, notice that, for every \(k\ge 1\), \(S_{n_k }\) is nonnegative and bounded from above by a linear function of \(V_{n_k }(0)\le V(0)\). Therefore, since V(0) is square-integrable, dominated convergence implies that
To prove optimality of \(S'\), observe that, for every \(k\ge 1\),
and hence
Thus, the optimality of \(S_{n_k }\) (for each \(k\ge 1\)) implies that
Moreover, it has already been shown that \(S^*\) is square-integrable and hence it is possible to use (80). Thus, since, for every \(s\ge 0\), \(0\le V_{n_k }(s)\uparrow V(s)\) as \(k\rightarrow \infty \), monotone convergence implies that
In addition, it is known that, for every \(k\ge 1\), \(ES_{n_k }^2<\infty \). Thus, it is possible to use (80) once again with a squeezing theorem in order to deduce that
In particular, notice that, for every \(k\ge 1\), \(S_{n_k }\) is bounded from above by \(\frac{V(0)}{\gamma \lambda }\). Hence, for every \(k\ge 1\), \(\int _0^{S_{n_k }}V(s)\mathrm{d}s\) is bounded from above by \(\frac{V^2(0)}{\gamma \lambda }\), which is an integrable random variable. Thus, the dominated convergence theorem justifies the third equality of (125). \(\square \)
A4. Proof of Theorem 2
Observe that \(S^{**}\) and \(V_{t_{\max }}(0)\) are square-integrable and hence
Since \(t\mapsto P_t\) is positive and continuous on \(\left[ t_{\min },t_{\max }\right) \), it is enough to show that \(r(t)\equiv v(t)/P_t\) is continuous on \(\left[ t_{\min },t_{\max }\right) \). To this end, fix \(t\in \left[ t_{\min },t_{\max }\right) \) and let \(\{t_n\}_{n=1}^\infty \subseteq \left[ t_{\min },t+\epsilon \right] \) be a sequence such that \(t_n\rightarrow t\) as \(n\rightarrow \infty \), where \(\epsilon \in (0,t_{\max }-t)\) is an arbitrary constant. Thus, since \((t,s)\mapsto \tilde{V}(t,s)\) is nondecreasing in its first coordinate, by Lemma 4 we deduce that, for every \(n\ge 1\),
Since the upper bound is square-integrable and uniform in n, we deduce that
are two bounded sequences. Hence, there exists a subsequence \(\left\{ n(k)\right\} _{k=1}^\infty \subseteq \mathbb {N}\) such that
In addition, Proposition 2 implies that, for every \(k\ge 1\),
such that
In particular, note that x is nonnegative.
Now, let \(U=F(T)\sim U(0,1)\), which is independent of \(\tilde{V}\). In addition, observe that, for every \(t\in \left[ t_{\min },t_{\max }\right) \) and \(u\in \mathbb {R}\),
Therefore, for every \(p\in (0,1)\) and \(t\in \left[ t_{\min },t_{\max }\right) \), the p’th quantile of T given \(\{T>t\}\) equals
$$\begin{aligned} q_t(p)=F^{-1}\left( F(t)+p\left( 1-F(t)\right) \right) . \end{aligned}$$
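The following minimal Python sketch illustrates this conditional-quantile representation; the exponential distribution below is a hypothetical example, and any continuous increasing \(F\) works the same way.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: T ~ Exp(1), so F(t) = 1 - exp(-t), F_inv(p) = -log(1 - p).
F = lambda t: 1.0 - np.exp(-t)
F_inv = lambda p: -np.log1p(-p)

def sample_conditional(t, size):
    # Sample from the law of T given {T > t} via tau_t = q_t(U), where
    # q_t(u) = F_inv(F(t) + u * (1 - F(t))) and U ~ Uniform(0, 1).
    u = rng.uniform(size=size)
    return F_inv(F(t) + u * (1.0 - F(t)))

s = sample_conditional(t=2.0, size=1_000_000)
print(s.min(), s.mean())  # min > 2; mean close to 3 by memorylessness of Exp(1)
```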
Since \(F(\cdot )\) is continuous and increasing on \(\left[ t_{\min },t_{\max }\right) \), \(q_t(p)\) is also continuous and increasing in t. In addition, without loss of generality, assume that \(\tau _t=q_t(U)\). Consequently, since \((t,s)\mapsto \tilde{V}(t,s)\) is continuous and nondecreasing in t, \((t,s)\mapsto V_t(s)\) is also continuous and nondecreasing in t. Thus, by applying the same technique which appears in the proof of Lemma 13, we deduce that
In addition, since \(S_t\le S^{**}\) for every \(t\in \left[ t_{\min },t_{\max }\right) \) and \(S^{**}\) is square-integrable, then dominated convergence implies that
For every \(\nu _1,\nu _2\in \left[ t_{\min },t_{\max }\right) \) define
which is nondecreasing in \(\nu _1\) and such that \(r(\nu _1)= w_{\nu _1}(\nu _1)\). In addition, notice that
Moreover, since, for every \(k\ge 1\),
dominated convergence implies that
Similarly, since, for every \(k\ge 1\),
dominated convergence might be used once again in order to derive the limit
Therefore, we deduce that
Note that this inequality remains valid when \(\epsilon \) is replaced by any \(\epsilon '\in (0,\epsilon )\). This is true because, up to a finite prefix, the sequence \(\{t_{n(k)}; k\ge 1\}\) belongs to \((t_{\min },t+\epsilon ')\). Thus, the next step is to take the limit of the upper bound as \(\epsilon \downarrow 0\). In practice, the same kind of arguments which were made in the previous limit calculations imply that
This shows that \(r(t_{n(k)})\rightarrow r(t)\) as \(k\rightarrow \infty \).
Now, note that \(t\mapsto P_t\) is decreasing on \([t_{\min },t_{\max }]\) and, for every \(0<p_1<p_2\le 1\), \(\mathcal {D}_{p_2}\subset \mathcal {D}_{p_1}\). Therefore, since, for every \(s\ge 0\), \(t\mapsto V_t(s)\) is nondecreasing on \([t_{\min },t_{\max }]\), then
is increasing in t on \((t_{\min },t_{\max })\). Furthermore, for every \(t\in (t_{\min },t_{\max })\),
Hence, if either \(t_n\uparrow t\) as \(n\rightarrow \infty \) or \(t_n\downarrow t\) as \(n\rightarrow \infty \), then \(\{r(t_n);n\ge 1\}\) is a bounded monotone sequence. This means that in both cases
Hence, by the generality of \(\{t_n\}_{n=1}^\infty \), we deduce that
which yields the required result.
A5. V(s) which is a constant minus a subordinator
Let \(\mathbb {F}\) be some filtration of \(\mathcal {F}\) which is augmented and right-continuous. Then, assume that \(\left\{ J(s);s\ge 0\right\} \) is a subordinator, i.e. a nondecreasing, right-continuous process with stationary and independent increments with respect to \(\mathbb {F}\) such that \(J(0)=0\), P-a.s. It is known that \(Ee^{-J(s)t}=e^{-\eta (t)s}\) for every \(t,s\ge 0\), where
$$\begin{aligned} \eta (t)=ct+\int _{(0,\infty )}\left( 1-e^{-tu}\right) \nu (\mathrm{d}u) , \end{aligned}$$
\(c\ge 0\) and \(\nu \) is the associated Lévy measure which satisfies
$$\begin{aligned} \int _{(0,\infty )}\left( 1\wedge u\right) \nu (\mathrm{d}u)<\infty . \end{aligned}$$
In particular, \(\eta (\cdot )\) is referred to as the exponent of \(J(\cdot )\). In addition, denote
and assume that \(\rho \in (0,\infty )\).
Let \(\kappa \in (0,\infty )\) be some constant and consider a process \(V(s)=\kappa -J(s)\) for every \(s\ge 0\). In particular, this is a nonincreasing jump process with a nonpositive drift.
Now, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) and \(s\ge 0\), define \(J_\alpha (s)\equiv J(s)+\frac{\gamma \lambda }{1-\lambda \alpha }s\), which is a subordinator with Lévy measure \(\nu \), parameter \(c_\alpha \equiv c+\frac{\gamma \lambda }{1-\lambda \alpha }\) and exponent \(\eta _\alpha (\cdot )\). Then, given \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) and \(x\in \left[ 0,\kappa \right] \), observe that
where the last equality holds because \(J_\alpha (\cdot )\) is an increasing process. To derive a relevant formula in terms of potential measures, it is known (see, for example, Equation (8) in [2]) that
where \(U_\alpha (\cdot )\) is a potential measure which is defined via
By these equations, differentiating (153) w.r.t. t and taking \(t\downarrow 0\), we deduce that
Thus, by plugging this result into Equation 3.7 of [6], we deduce that
This formula might be used in order to derive \(x_{\alpha '}\) by a standard line-search procedure on \(\left[ 0,\kappa \right] \).
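As an illustration, a minimal bisection sketch for this line search is given below. Here, `expected_service(x, alpha)` is a hypothetical callable which evaluates \(ES_\alpha (x)\) via (155), and the sketch assumes, as the crossing-time representation suggests, that this expectation is nonincreasing in x.

```python
def find_x(expected_service, alpha, kappa, tol=1e-8):
    # Bisection on [0, kappa] for the root of ES_alpha(x) = alpha, assuming that
    # x -> expected_service(x, alpha) is nonincreasing (a higher level x is
    # reached earlier, so the corresponding service time is shorter).
    lo, hi = 0.0, kappa
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_service(mid, alpha) > alpha:
            lo = mid   # expected service still too long: raise the level
        else:
            hi = mid
    return 0.5 * (lo + hi)
```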
It remains to develop a formula for \(g(\alpha )\). To this end, let \(S_\alpha =S_\alpha \left( x_\alpha \right) \) and notice that
can be derived by a similar fashion to (155). In addition, for every \(t\ge 0\), the Kella-Whitt martingale (see Theorem 2 of [19]) which is associated with \(J_\alpha (\cdot )\) is given by
It is known that this is a zero-mean martingale. Thus, by applying Doob’s optional stopping theorem w.r.t. \(S_\alpha \wedge s\) for some \(s>0\) and then taking \(s\rightarrow \infty \) using the monotone and bounded convergence theorems, we deduce that
Now, by differentiating both sides w.r.t. t, for every \(t>0\), we obtain
Thus, by taking the limit \(t\downarrow 0\) using monotone convergence, with the help of L'Hôpital's rule (twice), we deduce that
Now, observe that this result can be plugged into the objective function of Phase II, i.e.
Thus, by inserting (155) and (157) into (162), we derive an expression for \(g(\alpha )\) in terms of integrals with respect to \(U_\alpha (\cdot )\). Finally, by Proposition 2, \(g(\cdot )\) is concave on \(\left[ 0,\lambda ^{-1}\right) \) and hence standard numerical techniques might be applied in order to maximize it.
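Since \(g(\cdot )\) is concave, any unimodal line search applies; a golden-section sketch is given below, where `g` is a hypothetical callable which evaluates (162) for a given \(\alpha \).

```python
import math

def maximize_concave(g, lo, hi, tol=1e-8):
    # Golden-section search for a maximizer of a concave function g on [lo, hi].
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0  # reciprocal of the golden ratio
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if g(c) >= g(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# Usage sketch: alpha_star = maximize_concave(g, 0.0, 1.0 / lam - 1e-9), where the
# small offset keeps the search inside the half-open interval [0, 1/lam).
```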
1.1 When \(J(\cdot )\) is a Poisson process
Assume that \(J(\cdot )\) is a Poisson process with rate \(q\in (0,\infty )\). Let \(\alpha \in \left[ 0,\lambda ^{-1}\right) \), \(x\in \left[ 0,\kappa \right] \), and, for every \(j=0,1,\ldots ,\lfloor \kappa -x\rfloor \), denote
In addition, let \(s_{\lfloor \kappa -x\rfloor +1}\equiv 0\). In particular, notice that
and, for every \(s\ge 0\), define
Then, observe that, for every \(x\in \left[ 0,\kappa \right] \) and \(s\in \left[ 0,\infty \right) \setminus \left\{ s_0,s_1,\ldots ,s_{\lfloor \kappa -x\rfloor +1}\right\} \),
Thus,
is a formula that can be used in order to find \(x_\alpha \). In a similar fashion, for every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) and \(s\ge 0\),
and hence
Therefore, with the help of (161), note that this equation remains valid when \(J_\alpha (\cdot )\) and \(\eta _\alpha (\cdot )\) are replaced by any other subordinator and its exponent. This implies that
Now, observe that \(J(S_\alpha )\) is a discrete random variable with support \(\mathcal {N}\equiv \left\{ 0,1,\ldots ,\lfloor \kappa -x_\alpha \rfloor +1\right\} \). Thus, since \(J(\cdot )\) has independent increments, for every \(n\in \mathcal {N}\),
Note that \(J(s_n)\sim \text {Poi}\left( qs_n\right) \) and \(J(s_n)-J(s_{n-1})\sim \text {Poi}\left( q\frac{1-\lambda \alpha }{ \gamma \lambda }\right) \). Therefore, all of these probabilities have closed form expressions and so do \(EJ(S_\alpha )\) and \(EJ^2(S_\alpha )\).
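As a sanity check of these closed-form expressions, \(EJ(S_\alpha )\) and \(EJ^2(S_\alpha )\) can also be estimated by simulation. The Python sketch below uses hypothetical parameter values and assumes, as the derivation above suggests, that \(S_\alpha \) is the first time at which \(J_\alpha (s)=J(s)+\frac{\gamma \lambda }{1-\lambda \alpha }s\) reaches the level \(\kappa -x_\alpha \).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameter values; x plays the role of x_alpha.
kappa, q, gam, lam, alpha, x = 5.0, 1.0, 1.0, 0.5, 0.4, 0.7
drift = gam * lam / (1.0 - lam * alpha)
level = kappa - x

def j_at_crossing(n_paths=100_000):
    # For each path, run the rate-q Poisson process J until J(s) + drift * s
    # reaches the level, and record J at the crossing time S_alpha.
    out = np.zeros(n_paths)
    for i in range(n_paths):
        s, j = 0.0, 0
        while True:
            gap = rng.exponential(1.0 / q)     # waiting time until the next jump
            if j + drift * (s + gap) >= level:
                break                          # the linear drift crosses first
            s, j = s + gap, j + 1
            if j + drift * s >= level:
                break                          # the jump itself triggers the crossing
        out[i] = j
    return out

js = j_at_crossing()
print(js.mean(), np.mean(js**2))  # estimates of E[J(S_alpha)] and E[J^2(S_alpha)]
```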
Finally, note that this example is closely related to the crossing time of a Poisson process by a decreasing linear boundary. For more information regarding this issue and some other related topics, see [30].
A6. Expected externalities in a retrial M/G/1 queue
As mentioned in Sect. 4, the externalities in the standard M/G/1 queue are well-studied in the existing literature. However, to the best of the author’s knowledge, there is no existing literature regarding expected externalities in an M/G/1 retrial queue with no waiting room, infinite orbit capacity and exponential retrial times with a constant rate (for the exact model setup see, for example, Sections 2 and 3 of [29]). Accordingly, the purpose of this section is to make a conjecture regarding the exact expressions for the expected externalities in an M/G/1 retrial queue with the above-mentioned features. To this end, the first step is to analyse a regulation problem in a retrial queue which is the analogue of the model described in Sect. 2 (with no balking). Then, the conjectured expression for the expected externalities will stem naturally from the optimal price function which should internalize the expected externalities.
1.1 Analogous model with customer’s retrials
Consider the same model as described in Sect. 2 with the following modifications: assume that there is no waiting room. Instead, there is an orbit with infinite capacity such that any customer who finds the server busy at his arrival time joins the orbit. Every customer in the orbit conducts retrials until he finds the server idle, at which point he starts receiving service. In addition, the retrial times constitute an iid sequence of exponentially distributed random variables with rate \(\theta \in (0,\infty )\) which is independent of all other random elements in this model. In particular, just like in the original model, the service discipline is nonpreemptive and the customers are the ones who decide on their service durations. Assume that \(\left( C,D\right) ,\left( C_1,D_1\right) ,\left( C_2,D_2\right) ,\ldots \) is an iid sequence of nonnegative random variables which is independent of all other random elements in this model such that \(ED=\delta \in \left( 0,\infty \right) \).
Now, for every \(i\ge 1\), the total utility of the i’th customer from orbiting \(w\ge 0\) minutes, conducting r retrials and receiving a service of \(s\ge 0\) minutes is given by
This means that now, besides the original assumptions, there is an additional assumption that the i’th customer suffers a constant loss of \(D_i\) monetary units for each retrial.
Given this model, the purpose is to find an optimal price function in the sense of Sect. 3. To this end, the same approach which is described in Sect. 4 can be carried out. To start with, using known results regarding the M/G/1 retrial queue with exponential retrial times (see, for example, Equations (3.15) and (3.16) in [29]), the optimization to be solved is given by
In particular, notice that, just like in Sect. 4, the objective functional can be extended (denote the extension by \(\tilde{f}(\cdot )\)) and then maximized on \(\mathcal {S}\). This can be done by a two-phase method.
1.2 Phase I:
For every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) the optimization of Phase I is
This optimization is identical to (24) up to re-parameterization of \(\gamma \), and hence the same results hold for this case. In particular, there exists \(x_\alpha \) such that
is an optimal solution of (178).
1.3 Phase II:
For every \(\alpha \in \left[ 0,\lambda ^{-1}\right) \) denote the objective value of Phase II
This phase can be analysed by the same method which was applied to Phase II of the original model.
Theorem 3
There are \(\alpha ^*\in \left( 0,\lambda ^{-1}\right) \) and \(x^*\in (0,\infty )\) such that \(T^*=T_{\alpha ^*}\left( x^*\right) \) is a solution of (178). In addition, for every constant \(\pi \), an optimal price function is given by
Furthermore, for every \(s\ge 0\) one has
where
Proof
The proof follows by the same arguments as in the proof of Theorem 1. \(\square \)
1.4 Conjecture
A general observation regarding externalities in retrial queues is that, unlike in the regular M/G/1 queue, the externalities caused by a tagged customer are decomposed into two parts:
1. Waiting externalities: the total waiting time that could be saved for customers if the tagged customer reduced his service demand to zero.

2. Retrial externalities: the number of retrials that could be saved for customers if the tagged customer reduced his service demand to zero.
Because this model is similar to the original one, it makes sense that \(\tilde{p}^*_0(\cdot )\) internalizes the externalities in this model just as \(p^*_0(\cdot )\) does in the original one. Thus, due to the interpretations of \(\gamma \) and \(\delta \), it is plausible that the expected waiting and retrial externalities which are caused by a tagged customer with a service demand of \(s\ge 0\) minutes are given, respectively, by \(\lambda z(s)\) and \(\lambda \theta z(s)\). Since this is not the focus of this work, the proof of this conjecture is left for future research.