Abstract
We analyze decisions for a car rental firm that uses a common pool of cars to serve both rental and purchasing customers. The rental customers arrive successively, and rent out cars for random durations while putting random incremental mileage on them. This stochastic rental behavior makes the decision of when to sell a rental car a crucial one for the firm because it involves risk. On the one hand, selling a car when its mileage is low proactively avoids a steep decline in the car’s residual market value (even though it may also cause the firm to forfeit income from future rental customers who would rent that car for long durations while driving it sparingly). On the other hand, delaying the sale is equally risky because the early rental customers whom that car serves may keep it only for short durations while adding significantly to its mileage. Such opportunistic customers would drastically diminish the car’s market value without providing sufficient rental income in compensation. We use optimal stopping theory to derive an algorithm for the firm’s optimal selling policy, which unfortunately comes with a practical implementation challenge. To address this issue, we propose three heuristic selling policies, one of which is based on the restart-in formulation introduced by Katehakis and Veinott (Math Oper Res 12(2):262–268, 1987). Numerical experiments using real and artificial parameter settings (1) reveal conditions under which the proposed heuristic policies outperform the firm’s current (suboptimal) policy, and (2) demonstrate how all four suboptimal policies compare to the optimal policy.
References
Bassamboo, A., Kumar, S., & Randhawa, R. S. (2009). Dynamics of new product introduction in closed rental systems. Operations Research, 57(6), 1347–1359.
Carmona, R., & Touzi, N. (2008). Optimal multiple stopping and valuation of swing options. Mathematical Finance, 18(2), 239–268.
Chemla, D., Meunier, F., & Calvo, R. W. (2013). Bike sharing systems: Solving the static rebalancing problem. Discrete Optimization, 10(2), 120–146.
Davis, G. A., & Cairns, R. D. (2012). Good timing: The economics of optimal stopping. Journal of Economic Dynamics and Control, 36(2), 255–265.
Dayanik, S., & Sezer, S. O. (2012). Multisource Bayesian sequential binary hypothesis testing problem. Annals of Operations Research, 201(1), 99–130.
Fang, H. A. (2008). A discrete-continuous model of households’ vehicle choice and usage, with an application to the effects of residential density. Transportation Research Part B: Methodological, 42(9), 736–758.
Fricker, C., & Gast, N. (2014). Incentives and redistribution in homogeneous bike-sharing systems with stations of finite capacity. EURO Journal on Transportation and Logistics 1–31. doi:10.1007/s13676-014-0053-5.
Gans, N., & Savin, S. (2007). Pricing and capacity rationing for rentals with uncertain durations. Management Science, 53(3), 390–407.
George, D. K., & Xia, C. H. (2011). Fleet-sizing and service availability for a vehicle rental system via closed queueing networks. European Journal of Operational Research, 211(1), 198–207.
Jacka, S. D. (1991). Optimal stopping and the American put. Mathematical Finance, 1(2), 1–14.
Katehakis, M. N., & Veinott, A. F., Jr. (1987). The multi-armed bandit problem: Decomposition and computation. Mathematics of Operations Research, 12(2), 262–268.
Kolomvatsos, K., Anagnostopoulos, C., & Hadjiefthymiades, S. (2014). An efficient recommendation system based on the optimal stopping theory. Expert Systems with Applications, 41(15), 6796–6806.
KPMG. (2013). Global automotive retail market. https://www.kpmg.com/tr/en/IssuesAndInsights/ArticlesPublications/Documents/global-automotive-retail-market-study.pdf. Accessed 2 June 2016.
Morali, N., & Soyer, R. (2003). Optimal stopping in software testing. Naval Research Logistics, 50, 88–104.
Nair, R., Miller-Hooks, E., Hampshire, R. C., & Busic, A. (2013). Large-scale vehicle sharing systems: Analysis of Velib’. International Journal of Sustainable Transportation, 7(1), 85–106.
Papier, F., & Thonemann, U. W. (2010). Capacity rationing in stochastic rental systems with advance demand information. Operations Research, 58(2), 274–288.
Papier, F., & Thonemann, U. W. (2011). Capacity rationing in rental systems with two customer classes and batch arrivals. Omega, 39(1), 73–85.
Pazour, J. A., & Roy, D. (2015). Analyzing rental vehicle threshold policies that consider expected waiting times for two customer classes. Computers & Industrial Engineering, 80, 80–96.
Ross, K. W., & Tsang, D. H. K. (1989). The stochastic knapsack problem. IEEE Transactions on Communications, 37(7), 740–747.
Savin, S. V., Cohen, M. A., Gans, N., & Katalan, Z. (2005). Capacity management in rental businesses with two customer bases. Operations Research, 53(4), 617–631.
Tainiter, M. (1964). Some stochastic inventory models for rental situations. Management Science, 11(2), 316–326.
The Economist. (2015). The future of work: There’s an app for that. The Economist (January 3rd–9th, 2015), pp. 17–20.
Transparency Market Research. (2014). Car Rental Market—Global and U.S. industry analysis, size, share, growth, trends and forecast, 2013–2019.
Wang, R., & Liu, M. (2005). A realistic cellular automata model to simulate traffic flow at urban roundabouts. Computational Science ICCS. Berlin, Heidelberg: Springer.
Waserhole, A., & Jost, V. (2014). Pricing in vehicle sharing systems: Optimization in queuing networks with product forms. EURO Journal on Transportation and Logistics 1–28. doi:10.1007/s13676-014-0054-4.
Whisler, W. D. (1967). A stochastic inventory model for rented equipment. Management Science, 13(9), 640–647.
Zachariah, B. (2015). Optimal stopping time in software testing based on failure size approach. Annals of Operations Research, 235(1), 771–784.
Acknowledgments
The author would like to thank the guest editor Professor Michael N. Katehakis, and the anonymous referee for their constructive and insightful feedback which significantly improved the quality of this paper. This research was partially supported by the University of Mary Washington Faculty Development Fund.
Appendices
Appendix 1: The optimal policy
1.1 Proof of Theorem 1
For \(n_{f} \in \{n+1,\ldots ,N\}\) and \(n \in \{1,\ldots ,N-1\}\), the moment-generating function (MGF) of a sum of \(n_{f} - n\) independent and identically distributed \((\mu _{X}, \sigma _{X}, 0, \infty )\) truncated normal random variables is
where \(\theta \in \mathbb {R}\). Substituting \(\theta = -r\) we obtain \(\mathbb {E}\left[ \exp \left( -r\sum _{i=n}^{n_{f}-1}X_{i}\right) \right] \) \(= \exp \left( -r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \) \(\left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n}\). Hence, by Eq. (2),
Applying \(v_{n} \triangleq p_{M}\exp \left( -ry_{n-1}\right) + p_{L}u_{n-1}\) and \(\mathbb {E}\left[ V_{n+1}^{\pi ^{*}}\right] = \max _{n_{f} \in \{n+1,\ldots ,N\}} g(n, n_{f}) + p_{L}u_{n-1}\) to Eq. (4), we obtain after some algebra
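As a sanity check on the discount factor derived above, the sketch below compares the closed form of \(\mathbb {E}\left[ \exp \left( -r\sum _{i=n}^{n_{f}-1}X_{i}\right) \right] \) with a Monte Carlo estimate. All parameter values are hypothetical, and \(S(\cdot )\) is taken to be the standard normal survival function:

```python
import numpy as np
from scipy.stats import truncnorm, norm

# Hypothetical parameter values, chosen only for illustration.
mu_X, sigma_X, r = 100.0, 30.0, 0.002
n, n_f = 3, 8  # so the sum has n_f - n = 5 terms

S = norm.sf  # survival function of the standard normal

def discount_factor(mu, sigma, r, k):
    """Closed-form E[exp(-r * sum of k iid (mu, sigma, 0, inf)
    truncated normal random variables)], as in the proof of Theorem 1."""
    ratio = S(-mu / sigma + r * sigma) / S(-mu / sigma)
    return np.exp(-r * k * mu + 0.5 * r**2 * k * sigma**2) * ratio**k

# Monte Carlo check of the closed form.
k = n_f - n
a, b = (0 - mu_X) / sigma_X, np.inf  # truncnorm's standardized bounds
samples = truncnorm.rvs(a, b, loc=mu_X, scale=sigma_X,
                        size=(200_000, k), random_state=0)
mc = np.exp(-r * samples.sum(axis=1)).mean()
exact = discount_factor(mu_X, sigma_X, r, k)
print(exact, mc)  # the two agree to roughly three decimal places
```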
1.2 Proof of Theorem 2
From the perspective of a car at stage \(n=1\), the quantity \(Y_{1, n-1} \triangleq \sum _{i=1}^{n-1}X_{i}\) represents the car’s random mileage at stage \(n \in \{1,\ldots ,N\}\) in the future, where \(Y_{1, 0} \triangleq 0\). By Definition 1 and Theorem 1, the firm will sell the car at some random stage \(n > 1\) in the future where \(g(n,n_{f}) = p_{M}\exp \left( -rY_{1, n-1}-r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n} + (n_{f}-n)p_{L}\mu _{T}\) and
However, obtaining \(\Pr \left( a_{n}^{\pi ^{*}} = 1\right) \) for \(n \in \{1,\ldots ,N\}\) is difficult because \(Y_{1, n-1}\) appears on both sides of \(Y_{1, n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{ \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) }\right) \). We therefore resort to the approximation
which is accurate when \(\frac{\sigma _{X}}{\mu _{X}}\) is negligible. Hence,
Recall that the rental distances (\(X_{i}\) where \(i \in \mathbb {N}^{+}\)) are assumed to be independent and identically distributed \((\mu _{X}, \sigma _{X}, 0, \infty )\) truncated normal random variables. Such random variables have mean \(\mathbb {E}\left[ X_{i}\right] = \mu _{X} + \sigma _{X} \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) \) and variance \(\hbox {Var}\left[ X_{i}\right] = \sigma _{X}^{2}\left( 1 - \left( \frac{\mu _{X}}{\sigma _{X}}\right) \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) - \left( \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) \right) ^{2}\right) \), where \(\lambda (\cdot ) = \frac{\phi (\cdot )}{S(\cdot )}\) and \(\phi (\cdot )\) is the density function of the standard normal distribution. Consequently,
Being compound Poisson mixed distributions, truncated normal distributions are stable under convolution. Hence, \(Y_{1, n-1}\) also follows a truncated normal distribution on the support \((0,\infty )\). Hence,
Note that from the perspective of a car at stage \(n=1\), the function g(1, n) is the value function of any arbitrary policy that will deterministically sell that car at any stage \(n \in \{1,\ldots ,N\}\). If the selling stage is allowed to be random, and \(\Pr \left( a_{1}^{\pi ^{*}} = 1\right) \triangleq 0\) (so as not to trivialize the problem by allowing the possibility that the rental firm also sells brand new cars) then the optimal policy’s value function is \(V_{1}^{\pi ^{*}} = \sum \nolimits _{n=2}^{N} \Pr \left( a_{n}^{\pi ^{*}} = 1\right) g(1,n)\). In other words,
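The truncated normal mean and variance formulas used in this proof can be cross-checked numerically. The sketch below uses hypothetical parameters, deliberately chosen so that the truncation at zero is non-negligible:

```python
import numpy as np
from scipy.stats import truncnorm, norm

# Hypothetical parameters, for illustration only.
mu_X, sigma_X = 100.0, 60.0  # large sigma so the truncation at 0 matters

lam = lambda z: norm.pdf(z) / norm.sf(z)  # lambda(.) = phi(.)/S(.)

z = -mu_X / sigma_X
mean_formula = mu_X + sigma_X * lam(z)
var_formula = sigma_X**2 * (1 - (mu_X / sigma_X) * lam(z) - lam(z)**2)

# Cross-check against scipy's truncated normal on (0, inf).
a, b = (0 - mu_X) / sigma_X, np.inf
mean_scipy, var_scipy = truncnorm.stats(a, b, loc=mu_X, scale=sigma_X,
                                        moments='mv')
print(mean_formula, float(mean_scipy))
print(var_formula, float(var_scipy))
```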
Appendix 2: The special case optimal policy
1.1 Proof of Lemma 1
Since \(r\sigma _{X} > 0\), it follows that \(0< S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) < S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) \) which also means that \(\frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) } < 1\). Moreover, \(n_{f} \in \{n+1,\ldots ,N\} \Longrightarrow n_{f} > n\). Therefore, for any \(n \in \{1,\ldots ,N-1\}\), the function \(\left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n}\) is convex, nonincreasing, and strictly positive in \(n_{f}\). Now, observe that
This means that when \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\), the function \(p_{M}\exp \left( -ry_{n-1}-r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \) reduces to an exponential decay function, which is also convex, nonincreasing, and strictly positive in \(n_{f}\) for any \(n \in \{1,\ldots ,N-1\}\). Since the product of two convex, nonincreasing, and strictly positive functions is convex, the first term of \(g\left( n, n_{f}\right) \), namely \(p_{M}\exp \left( -ry_{n-1}-r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n}\), is also convex if \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\). Moreover, for \(1 \le n < N\), the function \((n_{f}-n)p_{L}\mu _{T}\) is linear, hence convex, in \(n_{f}\). Since \(g\left( n, n_{f}\right) \) is the sum of two convex functions, it is itself convex. Therefore, for any discrete \(n \in \{1,\ldots ,N-1\}\), the function \(g\left( n, n_{f}\right) \) is convex in \(n_{f}\) if \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\). Revisiting the optimal policy from Theorem 1,
we see that for any \(n \in \{1,\ldots ,N-1\}\), if \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\), then by the convexity of \(g\left( n, n_{f}\right) \) in \(n_{f}\) over the domain \(\{n+1,\ldots ,N\}\), we obtain either \( \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = n+1\), or \( \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = N\). This is because the maximum of a convex function over a discrete or continuous convex domain lies at one of the domain’s extreme points. Next, we show that \(\frac{p_{M}}{p_{L}} = c(n)\) necessarily implies \(n = N-1\). Using the composite definition of c(n) (see Definition 1), we obtain the following for \(\frac{p_{M}}{p_{L}} = c(n)\). Note that
For \(1 \le n < N-1\), consider the two different possibilities for maximizing the convex function \(g\left( n, n_{f}\right) \) over the discrete convex set \(\{n+1,\ldots , N\}\):
1.1.1 Possibility 1: \(\displaystyle \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = g\left( n, n+1\right) \) i.e., \(\displaystyle \mathop {\hbox {arg max}}\limits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = n+1\)
We know that if \(g\left( n, n_{f}\right) \) is convex over \(\{n+1,\ldots , N\}\), and if \(g\left( n, n+1\right) > g\left( n, N\right) \), then \(\mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = n+1\). We now show that \(\frac{p_{M}}{p_{L}} > c(n)\) implies \(g\left( n, n+1\right) > g\left( n, N\right) \) which by extension also implies that \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = n+1\). The statement \(\frac{p_{M}}{p_{L}} > c(n)\) implies that
1.1.2 Possibility 2: \(\displaystyle \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = g\left( n, N\right) \) i.e., \(\displaystyle \mathop {\hbox {arg max}}\limits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = N\)
We know that if \(g\left( n, n_{f}\right) \) is convex over \(\{n+1,\ldots , N\}\), and if \(g\left( n, n+1\right) < g\left( n, N\right) \), then \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = N\). The next step is to show that \(\frac{p_{M}}{p_{L}} < c(n)\) implies \(g\left( n, n+1\right) < g\left( n, N\right) \) which by extension also implies that \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = N\). The statement \(\frac{p_{M}}{p_{L}} < c(n)\) implies that
Invoking Theorem 1, and combining Possibilities 1 and 2 with the result that \(\frac{p_{M}}{p_{L}} = c(n) \Longrightarrow n = N-1\), we obtain the following special optimal policy \(\pi _{sp}^{*}\) under the necessary restriction that \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\)
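The endpoint property established in this proof (the maximizer of \(g\left( n, n_{f}\right) \) over \(\{n+1,\ldots ,N\}\) is either \(n+1\) or \(N\)) can be illustrated numerically. All parameter values below are hypothetical and are chosen only so that the condition \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\) of Lemma 1 holds:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parameters, for illustration only.
mu_X, sigma_X, r = 100.0, 30.0, 0.002
p_M, p_L, mu_T = 20_000.0, 40.0, 3.0
N, n, y_prev = 40, 5, 450.0  # y_prev plays the role of y_{n-1}

S = norm.sf  # survival function of the standard normal
ratio = S(-mu_X / sigma_X + r * sigma_X) / S(-mu_X / sigma_X)

def g(n, n_f):
    """Value of committing at stage n to sell at stage n_f (Theorem 1)."""
    k = n_f - n
    resale = p_M * np.exp(-r * y_prev - r * k * mu_X
                          + 0.5 * r**2 * k * sigma_X**2) * ratio**k
    return resale + k * p_L * mu_T

# Lemma 1's condition, under which g(n, .) is convex in n_f.
assert sigma_X / mu_X <= np.sqrt(2 / (r * mu_X))

candidates = range(n + 1, N + 1)
best = max(candidates, key=lambda n_f: g(n, n_f))
print(best)  # by convexity, the maximizer is an endpoint: n+1 or N
```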
1.2 Proof of Theorem 3
Recall \(h_{X} \triangleq \frac{1}{2}r\sigma _{X}^{2} + \frac{1}{r}\left( \ln \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) + \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\right) \). We begin by showing that the conditions \(\mu _{X} \ge h_{X}\), and \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\) imply a strictly positive upper bound on the maximum of the function \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) over the domain \(n \in \{1,\ldots ,N-2\}\). From \(\mu _{X} \ge h_{X}\), we obtain
Furthermore, since \(0< \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) } < 1\) it follows that \(\ln \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) < 0\). Hence, \(w_{X} > 0\). We now claim that \(\frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\) represents the maximum of \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) for \(n \in \{1,\ldots ,N-2\}\). In other words, we claim that \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} = \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\). To see the veracity of this claim, assume without loss of generality that n is continuous, and observe that \(\frac{\partial \alpha (n)}{\partial n} = \alpha (n)w_{X}\). Hence,
Since \(\alpha (n)> 0, w_{X} > 0\), and \(N - n - 1 > 0\) for \(n \in \{1,\ldots ,N-2\}\) it follows that \(\alpha (n)w_{X} (N-n-1) - \alpha (n) > 0 \Longrightarrow \frac{\partial }{\partial n}\left( \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right) < 0\) for \(n \in \{1,\ldots ,N-2\}\). Because the gradient \(\frac{\partial }{\partial n}\left( \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right) < 0\) over the domain \(n \in \{1,\ldots ,N-2\}\), we infer that \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) is strictly decreasing in n, and that \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n \in \{1,\ldots ,N-2\}} \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)} = 1\). Hence, \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} = \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\). Combining this result with Inequality (13) shows that the maximum of \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) over \(n \in \{1,\ldots ,N-2\}\) is bounded from above by the strictly positive quantity \(w_{X}\), i.e., \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} = \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)} \le w_{X}\). Next, we demonstrate that if \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X}\), then the function c(n) defined for \(n \in \{1,\ldots ,N-2\}\) attains its maximum at \(n = N-2\). First note that,
We recognize \(\frac{\partial \alpha (n) }{\partial n} = \alpha (n)w_{X}\). Therefore, substituting \(\alpha (n)w_{X}\) into Eq. (14) we obtain
If \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X}\), then it implies that for \(n \in \{1,\ldots ,N-2\}\),
Since \(\frac{\partial c(n)}{\partial n} \ge 0\) for \(n \in \{1,\ldots ,N-2\}\), the function c(n) is nondecreasing in n. This implies that \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = N-2\). Hence, \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X} \Longrightarrow \displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = N-2 \Longrightarrow \displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(N-2)\). To summarize: \(\mu _{X} \ge h_{X}\) and \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\) imply that \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X}\), which in turn implies that c(n) is nondecreasing in n, so that \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(N-2)\). Now, since \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\) we can invoke the special case optimal policy (see Lemma 1), and if \(\frac{p_{M}}{p_{L}} > \displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(N-2)\), then the special case optimal policy reduces to
Unfortunately, one can never guarantee that \(\frac{p_{M}}{p_{L}}\) will exceed \(c(N-2)\), because \(c(N-2)\) is itself a quantity that remains random until stage \(n = N-2\). To see why, consider that \(c(N-2) = \frac{\mu _{T}}{\exp (-ry_{N-3})\left( \alpha (N-1) - \alpha (N-2) \right) } = \frac {\exp (ry_{N-3})\mu _{T}}{\left( \alpha (N-1) - \alpha (N-2) \right) }\). Since we never observe \(y_{N-3}\) until stage \(N-2\), it follows that for \(1 \le n < N-2\) the quantity \(c(N-2) = \frac{\exp (rY_{n, N-3})\mu _{T}}{\left( \alpha (N-1) - \alpha (N-2) \right) }\) where \(Y_{n, N-3} = \sum _{i=n}^{N-3}X_{i}\) is the sum of \(N-n-2\) independent and identically distributed \((\mu _{X}, \sigma _{X}, 0, \infty )\) truncated normal random variables. Because \(Y_{n, N-3}\) can be arbitrarily large, \(c(N-2)\) is unbounded. Therefore, as \(\frac{p_{M}}{p_{L}} \rightarrow \infty \), the probability \(\Pr \left( \frac{p_{M}}{p_{L}} > c(N-2) \right) \rightarrow 1\). Hence,
1.3 Proof of Theorem 4
We showed in the proof of Theorem 3 that if \(\mu _{X} \ge h_{X}\) and \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\), then \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X}\), which further implies that c(n) is nondecreasing in n. If c(n) is nondecreasing in n, then \(\displaystyle \min \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(1)\). Since \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\) we can invoke the special case optimal policy (see Lemma 1), and if \(\frac{p_{M}}{p_{L}} < \displaystyle \min \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(1)\), then the special case optimal policy reduces to
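Both this proof and the proof of Theorem 3 rest on the ratio \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) being strictly decreasing in n. The sketch below checks this numerically, assuming \(\alpha (n) = \alpha (1)\exp \left( w_{X}(n-1)\right) \) (the solution implied by \(\frac{\partial \alpha (n)}{\partial n} = \alpha (n)w_{X}\)) and using hypothetical values for \(\alpha (1)\), \(w_{X}\), and N:

```python
import numpy as np

# Hypothetical values, for illustration only; alpha(n) = alpha(1)*exp(w_X*(n-1))
# solves d(alpha)/dn = alpha*w_X, as used in the proof of Theorem 3.
w_X, alpha_1, N = 0.05, 2.0, 40

def alpha(n):
    return alpha_1 * np.exp(w_X * (n - 1))

def ratio(n):
    return (alpha(N - 1) - alpha(n)) / ((N - n - 1) * alpha(n))

ns = np.arange(1, N - 1)  # n in {1, ..., N-2}
vals = ratio(ns)
print(bool(np.all(np.diff(vals) < 0)))  # True: strictly decreasing in n
```

Equivalently, with \(m = N-1-n\) the ratio equals \((e^{w_{X}m}-1)/m\), which is increasing in m and hence decreasing in n, so the maximum sits at \(n = 1\).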
Fontem, B. An optimal stopping policy for car rental businesses with purchasing customers. Ann Oper Res 317, 47–76 (2022). https://doi.org/10.1007/s10479-016-2240-2