
An optimal stopping policy for car rental businesses with purchasing customers

  • S.I.: Avi-Itzhak-Sobel: Probability
  • Published in Annals of Operations Research

Abstract

We analyze decisions for a car rental firm that uses a common pool of cars to serve both rental and purchasing customers. Rental customers arrive successively and rent cars for random durations, accruing random incremental mileage. This stochastic rental behavior makes the timing of each car’s sale a crucial decision for the firm because it carries risk in both directions. On one hand, selling a car while its mileage is low proactively avoids a steep decline in the car’s residual market value, even though it also forfeits income from future rental customers who would keep the car for long durations while driving it sparingly. On the other hand, delaying the sale is equally risky because the sample of early rental customers whom the car serves may keep it only for short durations while adding substantially to its mileage; such opportunistic customers drastically diminish the car’s market value without providing sufficient rental income to compensate. We use optimal stopping theory to derive the firm’s optimal selling policy, which unfortunately comes with a practical implementation challenge. To address this issue, we propose three heuristic selling policies, one of which is based on the restart-in formulation introduced by Katehakis and Veinott (Math Oper Res 12(2):262–268, 1987). Numerical experiments using real and artificial parameter settings (1) reveal conditions under which the proposed heuristic policies outperform the firm’s current (suboptimal) policy, and (2) demonstrate how all four suboptimal policies compare to the optimal policy.


References

  • Bassamboo, A., Kumar, S., & Randhawa, R. S. (2009). Dynamics of new product introduction in closed rental systems. Operations Research, 57(6), 1347–1359.

  • Carmona, R., & Touzi, N. (2008). Optimal multiple stopping and valuation of swing options. Mathematical Finance, 18(2), 239–268.

  • Chemla, D., Meunier, F., & Calvo, R. W. (2013). Bike sharing systems: Solving the static rebalancing problem. Discrete Optimization, 10(2), 120–146.

  • Davis, G. A., & Cairns, R. D. (2012). Good timing: The economics of optimal stopping. Journal of Economic Dynamics and Control, 36(2), 255–265.

  • Dayanik, S., & Sezer, S. O. (2012). Multisource Bayesian sequential binary hypothesis testing problem. Annals of Operations Research, 201(1), 99–130.

  • Fang, H. A. (2008). A discrete-continuous model of households’ vehicle choice and usage, with an application to the effects of residential density. Transportation Research Part B: Methodological, 42(9), 736–758.

  • Fricker, C., & Gast, N. (2014). Incentives and redistribution in homogeneous bike-sharing systems with stations of finite capacity. EURO Journal on Transportation and Logistics, 1–31. doi:10.1007/s13676-014-0053-5.

  • Gans, N., & Savin, S. (2007). Pricing and capacity rationing for rentals with uncertain durations. Management Science, 53(3), 390–407.

  • George, D. K., & Xia, C. H. (2011). Fleet-sizing and service availability for a vehicle rental system via closed queueing networks. European Journal of Operational Research, 211(1), 198–207.

  • Jacka, S. D. (1991). Optimal stopping and the American put. Mathematical Finance, 1(2), 1–14.

  • Katehakis, M. N., & Veinott, A. F., Jr. (1987). The multi-armed bandit problem: Decomposition and computation. Mathematics of Operations Research, 12(2), 262–268.

  • Kolomvatsos, K., Anagnostopoulos, C., & Hadjiefthymiades, S. (2014). An efficient recommendation system based on the optimal stopping theory. Expert Systems with Applications, 41(15), 6796–6806.

  • KPMG. (2013). Global automotive retail market. https://www.kpmg.com/tr/en/IssuesAndInsights/ArticlesPublications/Documents/global-automotive-retail-market-study.pdf. Accessed 2 June 2016.

  • Morali, N., & Soyer, R. (2003). Optimal stopping in software testing. Naval Research Logistics, 50, 88–104.

  • Nair, R., Miller-Hooks, E., Hampshire, R. C., & Busic, A. (2013). Large-scale vehicle sharing systems: Analysis of Velib’. International Journal of Sustainable Transportation, 7(1), 85–106.

  • Papier, F., & Thonemann, U. W. (2010). Capacity rationing in stochastic rental systems with advance demand information. Operations Research, 58(2), 274–288.

  • Papier, F., & Thonemann, U. W. (2011). Capacity rationing in rental systems with two customer classes and batch arrivals. Omega, 39(1), 73–85.

  • Pazour, J. A., & Roy, D. (2015). Analyzing rental vehicle threshold policies that consider expected waiting times for two customer classes. Computers & Industrial Engineering, 80, 80–96.

  • Ross, K. W., & Tsang, D. H. K. (1989). The stochastic knapsack problem. IEEE Transactions on Communications, 37(7), 740–747.

  • Savin, S. V., Cohen, M. A., Gans, N., & Katalan, Z. (2005). Capacity management in rental businesses with two customer bases. Operations Research, 53(4), 617–631.

  • Tainiter, M. (1964). Some stochastic inventory models for rental situations. Management Science, 11(2), 316–326.

  • The Economist. (2015). The future of work: There’s an app for that. The Economist (January 3–9, 2015), pp. 17–20.

  • Transparency Market Research. (2014). Car rental market: Global and U.S. industry analysis, size, share, growth, trends and forecast, 2013–2019.

  • Wang, R., & Liu, M. (2005). A realistic cellular automata model to simulate traffic flow at urban roundabouts. Computational Science ICCS. Berlin, Heidelberg: Springer.

  • Waserhole, A., & Jost, V. (2014). Pricing in vehicle sharing systems: Optimization in queuing networks with product forms. EURO Journal on Transportation and Logistics, 1–28. doi:10.1007/s13676-014-0054-4.

  • Whisler, W. D. (1967). A stochastic inventory model for rented equipment. Management Science, 13(9), 640–647.

  • Zachariah, B. (2015). Optimal stopping time in software testing based on failure size approach. Annals of Operations Research, 235(1), 771–784.


Acknowledgments

The author would like to thank the guest editor, Professor Michael N. Katehakis, and the anonymous referee for their constructive and insightful feedback, which significantly improved the quality of this paper. This research was partially supported by the University of Mary Washington Faculty Development Fund.

Author information

Correspondence to Belleh Fontem.

Appendices

Appendix 1: The optimal policy

1.1 Proof of Theorem 1

For \(n_{f} \in \{n+1,\ldots ,N\}\) and \(n \in \{1,\ldots ,N-1\}\), the moment-generating function (MGF) of the sum of \(n_{f} - n\) independent and identically distributed \((\mu _{X}, \sigma _{X}, 0, \infty )\) truncated normal random variables is

$$\begin{aligned} \mathbb {E}\left[ \exp \left( \theta \sum _{i=n}^{n_{f}-1}X_{i}\right) \right] = \exp \left( \theta (n_{f}-n)\mu _{X} + \frac{1}{2}\theta ^{2}(n_{f}-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}} - \theta \sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n} \end{aligned}$$

where \(\theta \in \mathbb {R}\). Substituting \(\theta = -r\) we obtain \(\mathbb {E}\left[ \exp \left( -r\sum _{i=n}^{n_{f}-1}X_{i}\right) \right] \) \(= \exp \left( -r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \) \(\left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n}\). Hence, by Eq. (2),

$$\begin{aligned}&\mathbb {E}\left[ V_{n+1}^{\pi ^{*}}\right] = \max _{n_{f} \in \{n+1,\ldots ,N\}} \left\{ \mathbb {E}\left[ p_{M}\exp \left( -r\left( y_{n-1} + \sum _{i=n}^{n_{f}-1}X_{i}\right) \right) + p_{L}\left( u_{n-1} + \sum _{i=n}^{n_{f}-1}T_{i}\right) \right] \right\} \\&\quad = \max _{n_{f} \in \{n+1,\ldots ,N\}} p_{M} \exp \left( -ry_{n-1} -r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n}\\&\qquad + p_{L}u_{n-1} + (n_{f} - n)p_{L}\mu _{T}\\&\quad = \max _{n_{f} \in \{n+1,\ldots ,N\}} g(n, n_{f}) + p_{L}u_{n-1} \end{aligned}$$
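As a numerical sanity check of the truncated-normal MGF expression used above, the following self-contained Python sketch (all parameter values are illustrative, not from the paper) compares the closed form at \(\theta = -r\) against a Monte Carlo estimate, sampling the \((\mu _{X}, \sigma _{X}, 0, \infty )\) truncated normal by rejection:

```python
import math
import random

def S(x):
    """Survival function of the standard normal, S(x) = 1 - Phi(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mgf_truncnorm_sum(theta, k, mu, sigma):
    """Closed-form MGF of the sum of k i.i.d. (mu, sigma, 0, inf)
    truncated normals, per the displayed expression."""
    return (math.exp(theta * k * mu + 0.5 * theta**2 * k * sigma**2)
            * (S(-mu / sigma - theta * sigma) / S(-mu / sigma)) ** k)

def sample_truncnorm(mu, sigma, rng):
    """Rejection sampling from a normal truncated to (0, inf)."""
    while True:
        x = rng.gauss(mu, sigma)
        if x > 0:
            return x

# Illustrative values: mean/st.dev. of one rental's mileage, discount
# rate, and k = n_f - n rentals ahead.
mu, sigma, r, k = 2.0, 0.5, 0.1, 3
rng = random.Random(42)
closed_form = mgf_truncnorm_sum(-r, k, mu, sigma)
mc = sum(math.exp(-r * sum(sample_truncnorm(mu, sigma, rng) for _ in range(k)))
         for _ in range(200_000)) / 200_000
```

With these values the two estimates agree to within Monte Carlo error, supporting the substitution \(\theta = -r\) performed above.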

Applying \(v_{n} \triangleq p_{M}\exp \left( -ry_{n-1}\right) + p_{L}u_{n-1}\) and \(\mathbb {E}\left[ V_{n+1}^{\pi ^{*}}\right] = \max _{n_{f} \in \{n+1,\ldots ,N\}} g(n, n_{f}) + p_{L}u_{n-1}\) to Eq. (4), we obtain after some algebra

$$\begin{aligned} a_{n}^{\pi ^{*}} = {\left\{ \begin{array}{ll} 1 &{}\quad \text { if } y_{n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{\displaystyle \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) }\right) \text { and } n \in \{1,\ldots ,N-1\} \\ 1 &{}\quad \text { if } n = N \\ 0 &{}\quad \text { otherwise } \end{array}\right. } \end{aligned}$$
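The threshold rule above can be sketched in a few lines of Python. In this sketch, \(g(n, n_{f})\) mirrors the maximand derived in the proof, and the selling test uses the equivalent form \(p_{M}e^{-ry_{n-1}} \ge \max _{n_{f}} g(n, n_{f})\); every numerical parameter in the usage example is hypothetical, not taken from the paper:

```python
import math

def S(x):
    """Survival function of the standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def g(n, n_f, y, p_M, p_L, mu_T, mu_X, sigma_X, r):
    """Expected discounted payoff, at stage n with mileage y, of
    committing to sell at the future stage n_f (the maximand above)."""
    k = n_f - n
    ratio = S(-mu_X / sigma_X + r * sigma_X) / S(-mu_X / sigma_X)
    return (p_M * math.exp(-r * y - r * k * mu_X + 0.5 * r**2 * k * sigma_X**2)
            * ratio**k + k * p_L * mu_T)

def sell_now(n, N, y, p_M, p_L, mu_T, mu_X, sigma_X, r):
    """Theorem 1 rule: a_n = 1 iff the immediate sale value p_M*exp(-r*y)
    beats the best committed continuation, i.e. iff
    y <= (1/r)*ln(p_M / max_{n_f} g(n, n_f)); always sell at stage N."""
    if n == N:
        return True
    best = max(g(n, n_f, y, p_M, p_L, mu_T, mu_X, sigma_X, r)
               for n_f in range(n + 1, N + 1))
    return p_M * math.exp(-r * y) >= best

# Hypothetical parameters: resale price scale, rental rate, mean rental
# duration, mean/st.dev. of per-rental mileage, discount rate per mile.
params = dict(p_M=20000.0, p_L=50.0, mu_T=5.0,
              mu_X=1000.0, sigma_X=200.0, r=1e-4)
```

With these made-up numbers the rule sells a low-mileage car (its resale value is still high) and keeps renting a high-mileage one, reflecting the trade-off described in the abstract.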

1.2 Proof of Theorem 2

From the perspective of a car at stage \(n=1\), the quantity \(Y_{1, n-1} \triangleq \sum _{i=1}^{n-1}X_{i}\) represents the car’s random mileage at stage \(n \in \{1,\ldots ,N\}\) in the future, where \(Y_{1, 0} \triangleq 0\). By Definition 1 and Theorem 1, the firm will sell the car at some random stage \(n > 1\) in the future, where \(g(n,n_{f}) = p_{M}\exp \left( -rY_{1, n-1}-r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n} + (n_{f}-n)p_{L}\mu _{T}\) and

$$\begin{aligned} a_{n}^{\pi ^{*}} = {\left\{ \begin{array}{ll} 1 &{}\quad \text { if } Y_{1, n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{\displaystyle \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) }\right) \text { and } n \in \{1,\ldots ,N-1\} \\ 1 &{}\quad \text { if } n = N \\ 0 &{}\quad \text { otherwise } \end{array}\right. } \end{aligned}$$

However, obtaining \(\Pr \left( a_{n}^{\pi ^{*}} = 1\right) \) for \(n \in \{1,\ldots ,N\}\) is difficult because \(Y_{1, n-1}\) appears on both sides of \(Y_{1, n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{ \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) }\right) \). We therefore resort to the approximation

$$\begin{aligned} p_{M}\exp \left( -rY_{1, n-1}-r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n} \approx 0 \end{aligned}$$

which holds true if \(\frac{\sigma _{X}}{\mu _{X}}\) is negligible. Hence,

$$\begin{aligned} \Pr \left( a_{n}^{\pi ^{*}} = 1\right)&\approx \Pr \left( Y_{1, n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{\displaystyle \max _{n_{f} \in \{n+1,\ldots ,N\}} \left\{ (n_{f}-n)p_{L}\mu _{T} \right\} }\right) \right) \\&= \Pr \left( Y_{1, n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{(N-n)p_{L}\mu _{T}}\right) \right) \end{aligned}$$

Recall that the rental distances \(X_{i}\), \(i \in \mathbb {N}^{+}\), are assumed to be independent and identically distributed \((\mu _{X}, \sigma _{X}, 0, \infty )\) truncated normal random variables. Random variables of this type have mean \(\mathbb {E}\left[ X_{i}\right] = \mu _{X} + \sigma _{X} \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) \) and variance \(\hbox {Var}\left[ X_{i}\right] = \sigma _{X}^{2}\left( 1 - \left( \frac{\mu _{X}}{\sigma _{X}}\right) \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) - \left( \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) \right) ^{2}\right) \) where \(\lambda (\cdot ) = \frac{\phi (\cdot )}{S(\cdot )}\), and \(\phi (\cdot )\) is the density function of the standard normal distribution. Consequently,

$$\begin{aligned} \mathbb {E}[Y_{1, n-1}]&= \left( n-1\right) \left( \mu _{X} + \sigma _{X} \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) \right) \\ \hbox {Var}[Y_{1, n-1}]&= \left( n-1\right) \sigma _{X}^{2}\left( 1 - \left( \frac{\mu _{X}}{\sigma _{X}}\right) \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) - \left( \lambda \left( \frac{-\mu _{X}}{\sigma _{X}}\right) \right) ^{2}\right) \end{aligned}$$

Although truncated normal distributions are not, in general, stable under convolution, we approximate \(Y_{1, n-1}\) by a truncated normal distribution on the support \((0,\infty )\) with the above mean and variance. Hence,

$$\begin{aligned} \Pr \left( a_{n}^{\pi ^{*}} = 1\right)&\approx \Pr \left( Y_{1, n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{(N-n)p_{L}\mu _{T}}\right) \right) \\&= \frac{ {\varPhi }\left( \frac{\frac{1}{r} \ln \left( \frac{p_{M}}{(N-n)p_{L}\mu _{T}}\right) - \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) - {\varPhi }\left( \frac{- \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) }{1 - {\varPhi }\left( \frac{- \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) } \\&= \frac{ {\varPhi }\left( \frac{\frac{1}{r} \ln \left( \frac{p_{M}}{(N-n)p_{L}\mu _{T}}\right) - \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) - {\varPhi }\left( \frac{- \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) }{S\left( \frac{- \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) } \end{aligned}$$

Note that from the perspective of a car at stage \(n=1\), the function g(1, n) is the value function of any arbitrary policy that will deterministically sell that car at any stage \(n \in \{1,\ldots ,N\}\). If the selling stage is allowed to be random, and \(\Pr \left( a_{1}^{\pi ^{*}} = 1\right) \triangleq 0\) (so as not to trivialize the problem by allowing the possibility that the rental firm also sells brand new cars) then the optimal policy’s value function is \(V_{1}^{\pi ^{*}} = \sum \nolimits _{n=2}^{N} \Pr \left( a_{n}^{\pi ^{*}} = 1\right) g(1,n)\). In other words,

$$\begin{aligned}&V_{1}^{\pi ^{*}} \approx \sum _{n=2}^{N} \left( \frac{ {\varPhi }\left( \frac{\frac{1}{r} \ln \left( \frac{p_{M}}{(N-n)p_{L}\mu _{T}}\right) - \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) - {\varPhi }\left( \frac{- \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) }{S\left( \frac{- \mathbb {E}[Y_{1, n-1}]}{\sqrt{\hbox {Var}[Y_{1, n-1}]}}\right) }\right) g(1,n). \end{aligned}$$
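Under the same approximations, the selling probabilities and \(V_{1}^{\pi ^{*}}\) are straightforward to evaluate numerically. The Python sketch below uses hypothetical parameter values, adopts the convention \(\Pr (a_{N}^{\pi ^{*}} = 1) = 1\) (the car is always sold at the horizon), and clamps the truncated-normal probability to \([0,1]\) since the log threshold can be negative:

```python
import math

def S(x):
    """Standard normal survival function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def lam(x):
    """Inverse Mills ratio lambda(x) = phi(x) / S(x)."""
    return math.exp(-0.5 * x * x) / (math.sqrt(2 * math.pi) * S(x))

def sell_probability(n, N, p_M, p_L, mu_T, mu_X, sigma_X, r):
    """Approximate Pr(a_n = 1) via the truncated-normal approximation
    of Y_{1,n-1}; at the horizon n = N the car is sold with certainty."""
    if n == N:
        return 1.0
    z = -mu_X / sigma_X
    mean_Y = (n - 1) * (mu_X + sigma_X * lam(z))
    var_Y = (n - 1) * sigma_X**2 * (1 - (mu_X / sigma_X) * lam(z) - lam(z)**2)
    sd_Y = math.sqrt(var_Y)
    threshold = (1 / r) * math.log(p_M / ((N - n) * p_L * mu_T))
    num = Phi((threshold - mean_Y) / sd_Y) - Phi(-mean_Y / sd_Y)
    return min(1.0, max(0.0, num / S(-mean_Y / sd_Y)))

def g(n, n_f, y, p_M, p_L, mu_T, mu_X, sigma_X, r):
    """Maximand from Theorem 1 (evaluated at y = 0 for a new car)."""
    k = n_f - n
    ratio = S(-mu_X / sigma_X + r * sigma_X) / S(-mu_X / sigma_X)
    return (p_M * math.exp(-r * y - r * k * mu_X + 0.5 * r**2 * k * sigma_X**2)
            * ratio**k + k * p_L * mu_T)

# Hypothetical parameters (not from the paper).
params = dict(p_M=20000.0, p_L=50.0, mu_T=5.0,
              mu_X=1000.0, sigma_X=200.0, r=1e-4)
N = 10
probs = [sell_probability(n, N, **params) for n in range(2, N + 1)]
V1 = sum(p * g(1, n, 0.0, **params) for p, n in zip(probs, range(2, N + 1)))
```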

Appendix 2: The special case optimal policy

1.1 Proof of Lemma 1

Since \(r\sigma _{X} > 0\), it follows that \(0< S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) < S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) \) which also means that \(\frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) } < 1\). Moreover, \(n_{f} \in \{n+1,\ldots ,N\} \Longrightarrow n_{f} > n\). Therefore, for any \(n \in \{1,\ldots ,N-1\}\), the function \(\left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n}\) is convex, nonincreasing, and strictly positive in \(n_{f}\). Now, observe that

$$\begin{aligned}&\frac{\sigma _{X}}{\mu _{X}} \le \frac{1}{\mu _{X}}\sqrt{\frac{2\mu _{X}}{r}} \Longleftrightarrow -r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2} \le 0 \end{aligned}$$

This means that when \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\), the function \(p_{M}\exp \left( -ry_{n-1}-r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \) reduces to an exponential decay function, which is also convex, nonincreasing, and strictly positive in \(n_{f}\) for any \(n \in \{1,\ldots ,N-1\}\). Since the product of two convex, nonincreasing, and strictly positive functions is also convex, the first term of \(g\left( n, n_{f}\right) \), namely \(p_{M}\exp \left( -ry_{n-1}-r(n_{f}-n)\mu _{X} + \frac{1}{2}r^{2}(n_{f}-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{n_{f} - n}\), is convex whenever \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\). Moreover, for \(1 \le n < N\), the function \((n_{f}-n)p_{L}\mu _{T}\) is linear, hence convex, in \(n_{f}\). Since \(g\left( n, n_{f}\right) \) is the sum of two convex functions, it is convex. Therefore, for any discrete \(n \in \{1,\ldots ,N-1\}\), the function \(g\left( n, n_{f}\right) \) is convex in \(n_{f}\) if \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\). Revisiting the optimal policy from Theorem 1,

$$\begin{aligned} a_{n}^{\pi ^{*}} = {\left\{ \begin{array}{ll} 1 &{}\quad \text { if } y_{n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{\displaystyle \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) }\right) \text { and } n \in \{1,\ldots ,N-1\} \\ 1 &{}\quad \text { if } n = N \\ 0 &{}\quad \text { otherwise } \end{array}\right. } \end{aligned}$$

we see that for any \(n \in \{1,\ldots ,N-1\}\), if \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\), then by the convexity of \(g\left( n, n_{f}\right) \) in \(n_{f}\) over the domain \(\{n+1,\ldots ,N\}\), we obtain either \( \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = n+1\) or \( \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = N\). This follows because the maximum of a convex function over a discrete or continuous convex domain lies at one of the domain’s extreme points. Next, we show that \(\frac{p_{M}}{p_{L}} = c(n)\) necessarily implies \(n = N-1\). Using the composite definition of c(n) (see Definition 1), the equality \(\frac{p_{M}}{p_{L}} = c(n)\) expands to

$$\begin{aligned}&\frac{p_{M}}{p_{L}} = \textstyle \frac{\mu _{T}(N-n - 1)}{\exp \left( -ry_{n-1}-r\mu _{X} + \frac{1}{2}r^{2}\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) - \exp \left( -ry_{n-1}-r(N-n)\mu _{X} + \frac{1}{2}r^{2} (N-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{N - n}} \\&\quad \Longrightarrow p_{M}\exp \left( -ry_{n-1}-r\mu _{X} + \frac{1}{2}r^{2}\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) + p_{L}\mu _{T} \\&\quad = p_{M}\exp \left( -ry_{n-1}-r(N-n)\mu _{X} + \frac{1}{2}r^{2} (N-n)\sigma _{X}^{2}\right) \textstyle \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{N - n} + (N-n)p_{L}\mu _{T} \\&\quad \Longrightarrow g\left( n, n+1\right) = g\left( n, N\right) \Longrightarrow n = N - 1 \end{aligned}$$

For \(1 \le n < N-1\), consider the two different possibilities for maximizing the convex function \(g\left( n, n_{f}\right) \) over the discrete convex set \(\{n+1,\ldots , N\}\):

1.1.1 Possibility 1: \(\displaystyle \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = g\left( n, n+1\right) \) i.e., \(\displaystyle \mathop {\hbox {arg max}}\limits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = n+1\)

We know that if \(g\left( n, n_{f}\right) \) is convex over \(\{n+1,\ldots , N\}\), and if \(g\left( n, n+1\right) > g\left( n, N\right) \), then \(\mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = n+1\). We now show that \(\frac{p_{M}}{p_{L}} > c(n)\) implies \(g\left( n, n+1\right) > g\left( n, N\right) \) which by extension also implies that \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = n+1\). The statement \(\frac{p_{M}}{p_{L}} > c(n)\) implies that

$$\begin{aligned}&\frac{p_{M}}{p_{L}}> \textstyle \frac{\mu _{T}(N-n - 1)}{\exp \left( -ry_{n-1}-r\mu _{X} + \frac{1}{2}r^{2}\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) - \exp \left( -ry_{n-1}-r(N-n)\mu _{X} + \frac{1}{2}r^{2} (N-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{N - n}} \\&\quad \Longrightarrow p_{M}\exp \left( -ry_{n-1}-r\mu _{X} + \frac{1}{2}r^{2}\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) + p_{L}\mu _{T} \\&{>} p_{M}\exp \left( -ry_{n-1}-r(N-n)\mu _{X} + \frac{1}{2}r^{2} (N-n)\sigma _{X}^{2}\right) \textstyle \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{N - n} + (N-n)p_{L}\mu _{T} \\&\Longrightarrow g\left( n, n+1\right) > g\left( n, N\right) \Longrightarrow \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = g\left( n, n+1\right) \end{aligned}$$

1.1.2 Possibility 2: \(\displaystyle \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = g\left( n, N\right) \) i.e., \(\displaystyle \mathop {\hbox {arg max}}\limits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = N\)

We know that if \(g\left( n, n_{f}\right) \) is convex over \(\{n+1,\ldots , N\}\), and if \(g\left( n, n+1\right) < g\left( n, N\right) \), then \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = N\). The next step is to show that \(\frac{p_{M}}{p_{L}} < c(n)\) implies \(g\left( n, n+1\right) < g\left( n, N\right) \) which by extension also implies that \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = N\). The statement \(\frac{p_{M}}{p_{L}} < c(n)\) implies that

$$\begin{aligned}&\frac{p_{M}}{p_{L}}< \textstyle \frac{\mu _{T}(N-n - 1)}{\exp \left( -ry_{n-1}-r\mu _{X} + \frac{1}{2}r^{2}\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) - \exp \left( -ry_{n-1}-r(N-n)\mu _{X} + \frac{1}{2}r^{2} (N-n)\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{N - n}} \\&\quad \Longrightarrow p_{M}\exp \left( -ry_{n-1}-r\mu _{X} + \frac{1}{2}r^{2}\sigma _{X}^{2}\right) \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) + p_{L}\mu _{T} \\&\quad {<} p_{M}\exp \left( -ry_{n-1}-r(N-n)\mu _{X} + \frac{1}{2}r^{2} (N-n)\sigma _{X}^{2}\right) \textstyle \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) ^{N - n} + (N-n)p_{L}\mu _{T} \\&\quad \Longrightarrow g\left( n, n+1\right) < g\left( n, N\right) \Longrightarrow \max _{n_{f} \in \{n+1,\ldots ,N\}} g\left( n, n_{f}\right) = g\left( n, N\right) \end{aligned}$$

Invoking Theorem 1 and combining Possibilities 1 and 2 with the result that \(\frac{p_{M}}{p_{L}} = c(n) \Longrightarrow n = N-1\), we obtain the following special optimal policy \(\pi _{sp}^{*}\) under the necessary restriction that \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\):

$$\begin{aligned} a_{n}^{\pi _{sp}^{*}} = {\left\{ \begin{array}{ll} 1 &{}\quad \text { if } y_{n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{g(n, n+1)}\right) \text { and } \frac{p_{M}}{p_{L}} > c\left( n\right) \text { where } n \in \{1,\ldots ,N-2\} \\ 1&{}\quad \text { if } y_{n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{g(n, N)}\right) \text{ and } \frac{p_{M}}{p_{L}} < c\left( n\right) \text { where } n \in \{1,\ldots ,N-2\} \\ 1 &{}\quad \text { if } y_{N-2} \le \frac{1}{r} \ln \left( \frac{p_{M}}{g(N-1, N)}\right) \text { and } n = N-1 \\ 1 &{}\quad \text { if } n = N \\ 0 &{}\quad \text { otherwise } \end{array}\right. } \end{aligned}$$
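The endpoint property underlying this special policy is easy to verify numerically. The following self-contained Python sketch (all parameter values are hypothetical and chosen so that \(\sigma _{X}/\mu _{X} \le \sqrt{2/(r\mu _{X})}\) holds) brute-forces the argmax of \(g(n, \cdot )\) over \(\{n+1,\ldots ,N\}\) and checks that it always falls on one of the two extreme points:

```python
import math

def S(x):
    """Standard normal survival function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def g(n, n_f, y, p_M, p_L, mu_T, mu_X, sigma_X, r):
    """Maximand from Theorem 1."""
    k = n_f - n
    ratio = S(-mu_X / sigma_X + r * sigma_X) / S(-mu_X / sigma_X)
    return (p_M * math.exp(-r * y - r * k * mu_X + 0.5 * r**2 * k * sigma_X**2)
            * ratio**k + k * p_L * mu_T)

# Hypothetical parameters satisfying the coefficient-of-variation condition.
params = dict(p_M=20000.0, p_L=50.0, mu_T=5.0,
              mu_X=1000.0, sigma_X=200.0, r=1e-4)
N = 10
assert (params["sigma_X"] / params["mu_X"]
        <= math.sqrt(2 / (params["r"] * params["mu_X"])))

def best_stage(n, y):
    """Brute-force arg max of g(n, .) over {n+1, ..., N}."""
    return max(range(n + 1, N + 1), key=lambda n_f: g(n, n_f, y, **params))

# Lemma 1: for every stage and mileage level tried, the maximizer is an
# endpoint of {n+1, ..., N}.
endpoints_only = all(best_stage(n, y) in (n + 1, N)
                     for n in range(1, N) for y in (0.0, 5000.0, 80000.0))
```

At low mileage the exponential resale term dominates and the maximizer is \(n+1\); at high mileage the linear rental term dominates and the maximizer shifts to \(N\), exactly the two cases distinguished by \(\frac{p_{M}}{p_{L}}\) versus \(c(n)\).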

1.2 Proof of Theorem 3

Recall \(h_{X} \triangleq \frac{1}{2}r\sigma _{X}^{2} + \frac{1}{r}\left( \ln \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) + \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\right) \). We begin by showing that the conditions \(\mu _{X} \ge h_{X}\), and \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\) imply a strictly positive upper bound on the maximum of the function \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) over the domain \(n \in \{1,\ldots ,N-2\}\). From \(\mu _{X} \ge h_{X}\), we obtain

$$\begin{aligned}&\mu _{X} \ge \frac{1}{2}r\sigma _{X}^{2} + \frac{1}{r}\left( \ln \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) + \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\right) \nonumber \\&\Longrightarrow r\mu _{X} - \frac{1}{2}r^{2}\sigma _{X}^{2} - \ln \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) \ge \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)} \nonumber \\&\text {Defining } w_{X} \triangleq r\mu _{X} - \frac{1}{2}r^{2}\sigma _{X}^{2} - \ln \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) , \text { we obtain } \nonumber \\&w_{X} \ge \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)} \Longrightarrow \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)} \le w_{X} \nonumber \\&\text {But } \frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}} \Longrightarrow \frac{\sigma _{X}}{\mu _{X}} \le \frac{1}{\mu _{X}}\sqrt{\frac{2\mu _{X}}{r}} \Longrightarrow \sqrt{\frac{2\mu _{X}}{r}} \ge \sigma _{X} \Longrightarrow r\mu _{X} \ge \frac{1}{2}r^{2}\sigma _{X}^{2} \end{aligned}$$
(13)

Furthermore, since \(0< \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) } < 1\) it follows that \(\ln \left( \frac{S\left( \frac{- \mu _{X}}{\sigma _{X}}+r\sigma _{X}\right) }{S\left( \frac{- \mu _{X}}{\sigma _{X}}\right) }\right) < 0\). Hence, \(w_{X} > 0\). We now claim that \(\frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\) is the maximum of \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) over \(n \in \{1,\ldots ,N-2\}\), i.e., that \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} = \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\). To verify this claim, treat n as a continuous variable, and observe that \(\frac{\partial \alpha (n)}{\partial n} = \alpha (n)w_{X}\). Hence,

$$\begin{aligned} \frac{\partial }{\partial n}\left( \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right)&= \frac{\partial }{\partial n}\left( \frac{\alpha (N-1)}{(N-n-1)\alpha (n)} - \frac{1}{N-n-1}\right) \\&= \frac{-\alpha (N-1)\left( \alpha (n)w_{X} (N-n-1) - \alpha (n) \right) - (\alpha (n))^{2}}{(N-n-1)^{2}(\alpha (n))^{2}} \end{aligned}$$

Since \(\alpha (n)> 0, w_{X} > 0\), and \(N - n - 1 > 0\) for \(n \in \{1,\ldots ,N-2\}\), it follows that \(\alpha (n)w_{X} (N-n-1) - \alpha (n) > 0 \Longrightarrow \frac{\partial }{\partial n}\left( \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right) < 0\) for \(n \in \{1,\ldots ,N-2\}\). Because this gradient is negative over the domain \(n \in \{1,\ldots ,N-2\}\), the function \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) is strictly decreasing in n, so \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n \in \{1,\ldots ,N-2\}} \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)} = 1\) and \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} = \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)}\). Combining this result with Inequality (13) shows that the maximum of \(\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\) over \(n \in \{1,\ldots ,N-2\}\) is bounded from above by \(w_{X}\), i.e., \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} = \frac{\alpha (N-1) - \alpha (1)}{(N-2)\alpha (1)} \le w_{X}\), where \(w_{X} > 0\). Next, we demonstrate that if \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X}\), then the function c(n) defined for \(n \in \{1,\ldots ,N-2\}\) attains its maximum at \(n = N-2\). First, note that

$$\begin{aligned} \frac{\partial c\left( n\right) }{\partial n}&= \frac{-\mu _{T}\left( \exp (-ry_{n-1})\left( \alpha (N-1) - \alpha (n) \right) \right) - \mu _{T}(N-n - 1)\left( \frac{\partial \left( \exp (-ry_{n-1})\left( \alpha (N-1) - \alpha (n) \right) \right) }{\partial n}\right) }{\left( \exp (-ry_{n-1})\left( \alpha (N-1) - \alpha (n) \right) \right) ^{2}} \nonumber \\&=\frac{-\mu _{T}\left( \exp (-ry_{n-1})\left( \alpha (N-1) - \alpha (n) \right) \right) - \mu _{T}(N-n - 1)\left( \frac{\exp (-ry_{n-1}) \partial \left( \alpha (N-1) - \alpha (n) \right) }{\partial n}\right) }{\exp (-2ry_{n-1})\left( \alpha (N-1) - \alpha (n) \right) ^{2}} \nonumber \\&=\frac{\left( \mu _{T}\exp (ry_{n-1})\right) \left( (N-n - 1)\left( \frac{\partial \alpha (n) }{\partial n}\right) - \left( \alpha (N-1) - \alpha (n) \right) \right) }{\left( \alpha (N-1) - \alpha (n) \right) ^{2}} \end{aligned}$$
(14)

We recognize \(\frac{\partial \alpha (n) }{\partial n} = \alpha (n)w_{X}\). Therefore, substituting \(\alpha (n)w_{X}\) into Eq. (14) we obtain

$$\begin{aligned} \frac{\partial c\left( n\right) }{\partial n} = \frac{\left( \mu _{T}\exp (ry_{n-1})\right) \left( (N-n - 1)\alpha (n)w_{X} - \left( \alpha (N-1) - \alpha (n) \right) \right) }{\left( \alpha (N-1) - \alpha (n) \right) ^{2}} \end{aligned}$$

If \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X}\), then it implies that for \(n \in \{1,\ldots ,N-2\}\),

$$\begin{aligned}&\frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)} \le w_{X} \Longrightarrow w_{X} \ge \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)} \\&\quad \Longrightarrow (N-n - 1)\alpha (n) w_{X} \ge \alpha (N-1) - \alpha (n)\\&\quad \Longrightarrow (N-n - 1)\alpha (n) w_{X} - (\alpha (N-1) - \alpha (n)) \ge 0 \\&\quad \textstyle \Longrightarrow \frac{\partial c\left( n\right) }{\partial n} = \frac{\left( \mu _{T}\exp (ry_{n-1})\right) \left( (N-n - 1)\alpha (n)w_{X} - \left( \alpha (N-1) - \alpha (n) \right) \right) }{\left( \alpha (N-1) - \alpha (n) \right) ^{2}} \ge 0 \text { for } n \in \{1,\ldots ,N-2\} \end{aligned}$$

Since \(\frac{\partial c(n)}{\partial n} \ge 0\) for \(n \in \{1,\ldots ,N-2\}\), c(n) is nondecreasing in n, so \(\displaystyle \mathop {\hbox {arg max}}\limits \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = N-2\) and \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(N-2)\). To summarize: \(\mu _{X} \ge h_{X}\) and \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\) together imply \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X}\), which in turn implies that c(n) is nondecreasing in n and hence that \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(N-2)\). Now, since \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\), we can invoke the special case optimal policy (see Lemma 1); if in addition \(\frac{p_{M}}{p_{L}} > c(N-2)\), that policy reduces to

$$\begin{aligned} a_{n}^{\pi _{sp}^{*}} = {\left\{ \begin{array}{ll} 1 &{}\quad \text { if } y_{n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{g(n, n+1)}\right) \text { and } \mu _{X} \ge h_{X}, \frac{p_{M}}{p_{L}} > c(N-2) \\ &{}\qquad \text { where } n \in \{1,\ldots ,N-2\} \\ 1 &{}\quad \text { if } y_{N-2} \le \frac{1}{r} \ln \left( \frac{p_{M}}{g(N-1, N)}\right) \text { where } n = N - 1 \\ 1 &{}\quad \text { if } n = N \\ 0 &{}\quad \text { otherwise } \end{array}\right. } \end{aligned}$$

Unfortunately, one can never guarantee that \(\frac{p_{M}}{p_{L}}\) will exceed \(c(N-2)\), because \(c(N-2)\) remains random until stage \(n = N-2\). To see why, note that \(c(N-2) = \frac{\mu _{T}}{\exp (-ry_{N-3})\left( \alpha (N-1) - \alpha (N-2) \right) } = \frac{\exp (ry_{N-3})\mu _{T}}{\alpha (N-1) - \alpha (N-2)}\). Since \(y_{N-3}\) is not observed until stage \(N-2\), it follows that for \(1 \le n < N-2\) we have \(c(N-2) = \frac{\exp (rY_{n, N-3})\mu _{T}}{\alpha (N-1) - \alpha (N-2)}\), where \(Y_{n, N-3} = \sum _{i=n}^{N-3}X_{i}\) is the sum of \(N-n-2\) independent \((\mu _{X}, \sigma _{X}, 0, \infty )\) truncated normal random variables. Because \(Y_{n, N-3}\) is unbounded above, \(c(N-2)\) can be arbitrarily large. Nevertheless, as \(\frac{p_{M}}{p_{L}} \rightarrow \infty \), the probability \(\Pr \left( \frac{p_{M}}{p_{L}} > c(N-2) \right) \rightarrow 1\). Hence,

$$\begin{aligned} \lim _{\frac{p_{M}}{p_{L}} \rightarrow \infty } a_{n}^{\pi _{sp}^{*}} = {\left\{ \begin{array}{ll} 1 &{}\quad \text { if } y_{n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{g(n, n+1)}\right) \text { and } \mu _{X} \ge h_{X} \text { where } n \in \{1,\ldots ,N-1\} \\ 1 &{}\quad \text { if } n = N \\ 0 &{}\quad \text { otherwise } \end{array}\right. } \end{aligned}$$
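The limiting claim above can be illustrated with a small Monte Carlo sketch: simulate \(Y_{n,N-3}\) as a sum of \((\mu_X,\sigma_X,0,\infty)\) truncated normals via rejection sampling and estimate \(\Pr(p_M/p_L > c(N-2))\) for growing ratios. All parameter values, including the stand-in constant for \(\alpha(N-1)-\alpha(N-2)\), are hypothetical.

```python
import math, random

random.seed(7)

# Illustrative (hypothetical) parameters: mileage mean/sd, discount rate,
# rental-duration mean, and a stand-in for alpha(N-1) - alpha(N-2)
mu_X, sigma_X, r, mu_T = 50.0, 20.0, 0.001, 5.0
alpha_gap = 0.8
k = 5                                  # number of summands, N - n - 2

def trunc_normal():
    # (mu_X, sigma_X, 0, inf) truncated normal via rejection sampling
    while True:
        x = random.gauss(mu_X, sigma_X)
        if x >= 0:
            return x

def c_N_minus_2():
    y = sum(trunc_normal() for _ in range(k))   # a draw of Y_{n, N-3}
    return math.exp(r * y) * mu_T / alpha_gap

samples = [c_N_minus_2() for _ in range(20000)]
for ratio in (5.0, 10.0, 50.0):                 # increasing p_M / p_L
    p = sum(s < ratio for s in samples) / len(samples)
    print(f"p_M/p_L = {ratio:5.1f}: estimated Pr(p_M/p_L > c(N-2)) = {p:.3f}")
```

The estimated probability is nondecreasing in \(p_M/p_L\) and approaches 1, consistent with the limit.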

1.3 Proof of Theorem 4

We showed in the proof of Theorem 3 that if \(\mu _{X} \ge h_{X}\) and \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\), then \(\displaystyle \max \nolimits _{n \in \{1,\ldots ,N-2\}} \left\{ \frac{\alpha (N-1) - \alpha (n)}{(N-n - 1)\alpha (n)}\right\} \le w_{X}\), which implies that c(n) is nondecreasing in n. If c(n) is nondecreasing in n, then \(\displaystyle \min \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(1)\). Since \(\frac{\sigma _{X}}{\mu _{X}} \le \sqrt{\frac{2}{r\mu _{X}}}\), we can again invoke the special case optimal policy (see Lemma 1); if \(\frac{p_{M}}{p_{L}} < \displaystyle \min \nolimits _{n \in \{1,\ldots ,N-2\}} c(n) = c(1)\), that policy reduces to

$$\begin{aligned} a_{n}^{\pi _{sp}^{*}} = {\left\{ \begin{array}{ll} 1 &{}\text { if } y_{n-1} \le \frac{1}{r} \ln \left( \frac{p_{M}}{g(n, N)}\right) \text { and } \mu _{X} \ge h_{X}, \frac{p_{M}}{p_{L}} < c(1) \text { where } n \in \{1,\ldots ,N-1\} \\ 1 &{}\text { if } n = N \\ 0 &{}\text { otherwise } \end{array}\right. } \end{aligned}$$
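For concreteness, the piecewise selling rule above can be transcribed directly. Since \(g(n, N)\) and the conditions \(\mu_X \ge h_X\), \(\frac{p_M}{p_L} < c(1)\) are defined elsewhere in the paper, they are left here as caller-supplied inputs; this is a sketch, not the paper's implementation.

```python
import math

def special_case_action(n, y_prev, N, r, p_M, g, cond_holds):
    """Action a_n under the special-case policy of Theorem 4 (sketch).

    y_prev is the observed cumulative mileage y_{n-1}; g(n, N) is the paper's
    income term (unspecified here); cond_holds flags mu_X >= h_X together with
    p_M / p_L < c(1). All names are illustrative.
    """
    if n == N:
        return 1                                      # always sell at the horizon
    if 1 <= n <= N - 1 and cond_holds:
        # sell iff mileage is below the threshold (1/r) ln(p_M / g(n, N))
        return 1 if y_prev <= math.log(p_M / g(n, N)) / r else 0
    return 0
```

For example, with a constant \(g \equiv 2\), \(p_M = 10\), and \(r = 0.1\), the mileage threshold is \(\frac{1}{0.1}\ln 5 \approx 16.1\), so a car with \(y_{n-1} = 10\) is sold while one with \(y_{n-1} = 20\) is kept.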

Fontem, B. An optimal stopping policy for car rental businesses with purchasing customers. Ann Oper Res 317, 47–76 (2022). https://doi.org/10.1007/s10479-016-2240-2
