1 Introduction

The online increasing subsequence problems are stochastic optimisation problems concerned with non-anticipating policies aimed at selecting an increasing subsequence from a sequence of random items \(X_{1}, X_{2},\ldots \) with known continuous distribution \(F\), which, without loss of generality, will be assumed uniform on \([0,1]\). The online constraint requires one to accept or reject \(X_{i}\) at time \(i\), when the item is observed, and the decision is immediately terminal. Formally, an online policy for selecting an increasing subsequence is a collection of stopping times \(\boldsymbol{\tau } = (\tau _{1}, \tau _{2}, \ldots )\) adapted to the sequence of sigma-fields \(\mathcal{F}_{i} = \sigma \{X_{1}, X_{2}, \ldots , X_{i}\}\), \(1\leq i < \infty \), and satisfying

  (i) \(\tau _{1} < \tau _{2} < \cdots \),

  (ii) \(X_{\tau _{1}}< X_{\tau _{2}}< \cdots \).

We denote the space of all online policies by \({\mathcal{T}}\).

In the first problem, introduced by Samuels and Steele [20], the objective is to maximise the expected length of an increasing subsequence selected from the first \(n\) items. The performance of an online policy is measured by \(\mathbb{E}L_{n}(\boldsymbol{\tau })\), where

$$\begin{aligned} L_{n}(\boldsymbol{\tau }) := \max \{j: 1\leq \tau _{1} < \cdots < \tau _{j} \leq n \ \mbox{such that}\ X_{\tau _{1}} < X_{\tau _{2}} < \cdots < X_{\tau _{j}} \} \end{aligned}$$

is the length of the selected increasing subsequence.

Samuels and Steele proved that the maximal expected length, \(v_{n} = \sup _{\boldsymbol{\tau }\in {\mathcal{T}}} \mathbb{E}L_{n}( \boldsymbol{\tau })\), satisfies

$$\begin{aligned} v_{n} \sim \sqrt{2n},~~~ \mathrm{as}~ n \to \infty . \end{aligned}$$
(1)

It was later observed that the Samuels-Steele problem is equivalent to a special case of an online bin-packing problem [11, 18] with uniformly distributed weights being packed into a unit-sized bin. This connection was useful to show that \(\sqrt{2n}\) is, in fact, an upper bound [9] (also see [13] for an alternative approach).

For comparison, a clairvoyant decision-maker with a complete overview of the data could choose the longest increasing subsequence. The study of the statistical properties of its length \(l(n)\) is known as the Ulam-Hammersley problem. After years of exciting development, Baik et al. [5] showed that

$$ \mathbb{E}l(n) = 2\sqrt{n} + \widehat{c} n^{1/6} + o(n^{1/6}), ~~~n \to \infty , $$

with \(\widehat{c} = -1.758\ldots \), and proved that the distribution of \((l(n) - 2\sqrt{n}) / n^{1/6}\) converges to the Tracy-Widom distribution from random matrix theory. See Romik [19] for an excellent account.

To prove the existence of the limit of \(v_{n}/\sqrt{n}\), Samuels and Steele also introduced an analogous problem with arrivals by a Poisson process, and with the objective of completing selections within a given time horizon \([0,n]\). The connection with the poissonised version has been useful for obtaining asymptotic results in the fixed-\(n\) setting. Arlotto et al. [2] proved a central limit theorem analogous to the Bruss-Delbaen [8] result in the poissonised setting:

$$ \frac{\sqrt{3}(L_{n}(\boldsymbol{\tau ^{*}}) - \sqrt{2n})}{\sqrt{2n}} \overset{d}{\to } \mathcal{N}(0,1), ~~~n\to \infty , $$

where \(\boldsymbol{\tau ^{*}}\) is the optimal selection policy.

The tightest known bounds on \(v_{n}\) are

$$\begin{aligned} \sqrt{2n} - 2 \log {n} -2 \leq v_{n} < \sqrt{2n}, ~~~ n \to \infty . \end{aligned}$$
(2)

The lower bound was shown recently in [4]. By assessing a suboptimal selection policy with an acceptance window which depends on both the size of the last selection and the number of items yet to be observed, Arlotto et al. suggested that the optimality gap in (2) can be further tightened. They also obtained numerical evidence that the performance of the employed policy is within \(O(1)\) of the optimum.

In a dual problem, studied recently by Arlotto et al. [3], the objective is to minimise \(\mathbb{E}\tau _{k}\), the expected time to select an increasing subsequence of fixed length \(k\) from an infinitely long series of observations. For the optimal expected time

$$ \beta _{k} = \inf _{\boldsymbol{\tau } \in \mathcal{T}} \mathbb{E} \tau _{k} $$

Arlotto et al. proved the bounds

$$\begin{aligned} \frac{k^{2}}{2} \leq \beta _{k} \leq \frac{k^{2}}{2} + O(k \log {k}),~~~ \text{for all } k\geq 2. \end{aligned}$$
(3)

The bounds were obtained by an analytical investigation of the optimality recursion.

The quickest selection problem of Arlotto et al. [3] is equivalent to a special case of the sum constraint problem of Chen et al. [10] (see Example 2 on p. 541 for the \(k^{2}/2\) asymptotics and Mallows et al. [15] for a multidimensional extension). So the principal asymptotics \(k^{2}/2\) can be read off from this earlier work. Coffman et al. [11] in Sect. 6 showed that the same asymptotics \(k^{2}/2\) also occur in the offline quickest selection problem.

In the present paper we adapt an asymptotic comparison method used before in the poissonised setting [6, 14] to approximate solutions to the optimality equations. In fact, this method can also be used to estimate the performance of a certain class of suboptimal policies. We refine the cited results as follows. For the longest increasing subsequence problem, we prove that

$$\begin{aligned} v_{n} = \sqrt{2n} - \frac{1}{12} \log {n} + O(1), ~~~\text{as } n \to \infty . \end{aligned}$$
(4)

A similar expansion with the second term \((\log {n})/6\) was obtained in the related problem of online selection from a random permutation of \(n\) integers by Peng and Steele [17]. The difference in the logarithmic terms can be interpreted as the advantage of a better-informed decision-maker, who knows the values of the order statistics of \(\{X_{1},\dots ,X_{n}\}\) but not the order in which the items are revealed in the course of observation.

Furthermore, we settle the conjecture from [4] by showing that the performance of the policy used there is indeed within \(O(1)\) from the maximum \(v_{n}\).

For the quickest selection problem, we prove that

$$\begin{aligned} \beta _{k} = \frac{k^{2}}{2} + \frac{k \log {k}}{6} + O(k), ~~~ k \to \infty . \end{aligned}$$
(5)

Given the natural duality of the two problems, one may suspect that the functions \(n \mapsto v_{n}\) and \(k \mapsto \beta _{k}\) are asymptotic inverses of one another. In the principal asymptotics, this is evident from (1) and (3). However, the refined expansions (4) and (5) show a more intimate connection between the problems: inverting \(v_{n}\) gives a two-term expansion of \(\beta _{k}\).
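To see the inversion concretely, write \(k = v_{n}\) in (4) and solve for \(n\), carrying the \(O(1)\)-terms along:

$$\begin{aligned} \sqrt{2n} = k + \frac{1}{12} \log {n} + O(1), ~~~~ 2n = k^{2} + \frac{k}{6} \log {n} + O(k); \end{aligned}$$

since \(n \asymp k^{2}\) entails \(\log {n} = 2\log {k} + O(1)\), this heuristic inversion gives \(n = k^{2}/2 + (k \log {k})/6 + O(k)\), in agreement with (5).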

For the quickest selection problem, we also introduce selection policies that are asymptotically optimal. The first is a variation of the constant window policy, resembling the one from [20], which achieves the principal asymptotics \(k^{2}/2\). The second is a more complex policy, with optimality gap \(O(k)\), which is in fact the best accuracy one can achieve asymptotically, leaving aside small \(k\).

2 The Longest Subsequence Problem

In the Samuels-Steele problem, \(n\) is the model parameter representing the total size of a random sample. After the first item is observed, the problem reduces to a selection out of \(n-1\) random items. Furthermore, if an item of size \(z \in [0,1]\) has been selected, future observations that fall below \(z\) are automatically discarded. We denote by \(v_{m}(z)\), \(m=1, \ldots , n\), the maximal expected length of an increasing subsequence, when the remaining sample size is \(m\) and the last selected item is of size \(z\). The functions \(v_{m}:[0,1] \to \mathbb{R}^{+}\) are called the optimal value functions in the longest subsequence selection.

In addition, we define a subclass of online selection policies that have a variable acceptance window. That is, at each step of selection, there is a threshold function \(h_{m}:[0,1] \to [0,1]\), \(0\leq h_{m}(z) \leq 1-z\), that shapes the decision of whether to accept or reject the current observation: the corresponding policy accepts the observation of size \(x\) if and only if it falls into the acceptance window \([z, z+h_{m}(z)]\). Setting \(\tau _{0} = 0\) and \(X_{\tau _{0}}=0\), such a policy corresponds to a sequence of stopping times \(\boldsymbol{\tau } = (\tau _{1}, \tau _{2}, \ldots )\), where

$$ \tau _{j} = \min \left \{ \tau _{j-1} < i \leq n: X_{i} \in \left [X_{ \tau _{j-1}},\, X_{\tau _{j-1}} +h_{n-i+1}(X_{\tau _{j-1}})\right ] \right \} $$

with the convention that \(\min \emptyset = \infty \).

2.1 Optimality Equation for the Value Function

The optimality equation is a well-known recursion [2, 4, 20]

$$\begin{aligned} v_{n+1}(z) = z \, v_{n}(z) + \int _{z}^{1} \max \left \{ v_{n}(x)+1, v_{n}(z) \right \} \mathrm{d}x,~~~n\in {\mathbb{N}}, \end{aligned}$$
(6)

with \(v_{0}(z) = 0\). Note that \(v_{n}(z)+c\) also satisfies (6) for any constant \(c\). We provide here an intuition behind the optimality equation (6). Assume we are at the selection stage with \(n+1\) observations to inspect and the last selection size is \(z\); this corresponds to the value function \(v_{n+1}(z)\) on the left-hand side. With probability \(z\) the next observation is below \(z\), leaving us with the expected length \(v_{n}(z)\), which explains the first term on the right-hand side. Should the next observation \(x\) be admissible, the dynamic programming principle prescribes us to choose \(x\) if and only if \(v_{n}(x)+1 \geq v_{n}(z)\). Hence, the optimal decision provides \(\max \{v_{n}(x)+1, v_{n}(z)\}\). Averaging over the uniformly distributed \(x\) gives (6).
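For a concrete feel of (6), the recursion is easy to iterate numerically. The following is a minimal sketch, assuming a uniform grid on \([0,1]\) and trapezoidal quadrature; the grid size and the horizon are illustrative choices, not taken from the paper.

```python
import numpy as np

# Iterate the optimality recursion (6) on a uniform grid of z-values.
# v[i] approximates v_n(z_i); the integral over [z_i, 1] is evaluated
# with the trapezoidal rule.
M = 801
z = np.linspace(0.0, 1.0, M)
dx = z[1] - z[0]
v = np.zeros(M)                                   # v_0(z) = 0

N = 200
for n in range(1, N + 1):
    v_new = np.empty(M)
    for i in range(M):
        integrand = np.maximum(v[i:] + 1.0, v[i])    # max{v_n(x)+1, v_n(z_i)}
        integral = dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
        v_new[i] = z[i] * v[i] + integral
    v = v_new

# v_N(0) against the two-term expansion (4): the gap stays bounded in N
print(v[0], np.sqrt(2 * N) - np.log(N) / 12)
```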

Observe that \(v_{n}(z)\) is the maximum expected length of an increasing subsequence chosen from \(N\) items, with \(N\overset{d}{=}\mathrm{Bin}(n, 1-z)\) (see [13], p. 945, and [20], p. 1083).

For each \(z\), let \(h_{n}(z) \in [0, 1-z]\) be the solution in \(x\) to

$$\begin{aligned} v_{n}(z+x) + 1 = v_{n}(z), \end{aligned}$$
(7)

if \(v_{n}(z)>1\), and \(h_{n}(z)=1-z\) otherwise. Then, the monotonicity of \(v_{n}(z)\) in \(z\) implies that the integrand in (6) is equal to \(v_{n}(x)+1\) on the interval \([z, z+h_{n}(z)]\). On the remaining interval \([z+h_{n}(z),1]\) the integrand assumes value \(v_{n}(z)\).

This provides the form of the optimal selection policy: accept the observation \(x\) if it falls into the acceptance window \([z, z+ h_{n}(z)]\). From (7) it can be seen that the acceptance window is updated dynamically with every observation. Thus, the optimal policy indeed belongs to the class of policies with a variable acceptance window, and we call the functions \(h_{n}(z)\) the optimal threshold functions. Note that equation (7) has a solution only when \(v_{n}(z)>1\). This has a logical interpretation: when \(v_{n}(z)\leq 1\), the decision-maker should select every successive record, as this provides the largest expected payoff.

In the sequel we work directly with the optimality equation to refine the asymptotics of the value functions. The comparison method we employ hinges on certain monotonicity properties of (6). Let \(G_{n}\) be an operator acting on continuous bounded functions \(f:[0,1]\to \mathbb{R}_{+}\) and possessing the following properties:

  (i) Shift-invariance: \(G_{n+1}(f + c)(z) = G_{n+1}(f)(z) + c\) for any constant \(c\),

  (ii) Monotonicity: if \(\widehat{f}(z) \geq f(z)\), then \(G_{n+1}(\widehat{f})(z) \geq G_{n+1}(f)(z)\).

To determine the range of limit regimes for \((n,z)\), we introduce a size function \(g_{n}(z) := n(1-z)\). We say that a sequence \((f_{n})\) is locally bounded from above if

$$ \underset{(n,z):g_{n}(z) \leq c}{\sup } f_{n}(z) < \infty ,~~~{\mathrm{for~every}}~ c>0, $$

locally bounded from below if \((-f_{n})\) is locally bounded from above, and locally bounded if \((|f_{n}|)\) is locally bounded from above. With this in mind, we present the following key lemma.

Lemma 1

Asymptotic comparison

Let a sequence \((f_{n})\) solve \(f_{n+1}(z) = G_{n+1}(f_{n})(z)\).

If \((f_{n})\) is locally bounded from above, while a sequence \((\widehat{f}_{n})\) is locally bounded from below and \(\widehat{f}_{n+1}(z) \geq G_{n+1}(\widehat{f}_{n})(z)\) when \(g_{n}(z)\) is sufficiently large, then the difference \(f_{n}(z) - \widehat{f}_{n}(z)\) is bounded from above uniformly for all \(n\) and \(z\).

Similarly, if \((f_{n})\) is locally bounded from below, while \((\widehat{f}_{n})\) is locally bounded from above and \(\widehat{f}_{n+1}(z) \leq G_{n+1}(\widehat{f}_{n})(z)\) when \(g_{n}(z)\) is sufficiently large, then \(f_{n}(z) -\widehat{f}_{n}(z)\) is bounded from below uniformly for all \(n\) and \(z\).

Proof

Adding a constant if necessary and using the shift-invariance property (i) of the operator \(G_{n}\), we can reduce to the case \(\widehat{f}_{n}(z)>0\).

If the first claim is not true, then for every constant \(c>0\) we can find \(n_{0}\), \(z_{0}\) such that \(f_{n_{0}+1}(z_{0}) - \widehat{f}_{n_{0}+1}(z_{0}) > c\). Choose the minimal such \(n_{0} = n_{0}(c)\), then

$$ f_{j}(z) \leq \widehat{f}_{j}(z) +c~~~{\mathrm{for}}~ j\leq n_{0}. $$

Observe that

$$ f_{n_{0}+1}(z_{0}) > \widehat{f}_{n_{0}+1}(z_{0}) + c > c. $$

Therefore, from the local boundedness of \((f_{n})\), we have \(n_{0}(c) \to \infty \) as \(c\to \infty \).

Moreover, since \((f_{n})\) is locally bounded from above, \(g_{n_{0}+1}(z_{0})\) must be large whenever \(f_{n_{0}+1}(z_{0}) > c\) with \(c\) large, so we can choose \(c\) large enough to achieve \(G_{n_{0}+1}(\widehat{f}_{n_{0}})(z_{0}) \leq \widehat{f}_{n_{0}+1}(z_{0})\). Therefore, appealing to the shift-invariance (i), we obtain

$$ G_{n_{0}+1}(\widehat{f}_{n_{0}}+c)(z_{0}) = G_{n_{0}+1}(\widehat{f}_{n_{0}})(z_{0}) + c\leq \widehat{f}_{n_{0}+1}(z_{0}) + c < f_{n_{0}+1}(z_{0}). $$

However, by the choice of \(n_{0}\), we have \(f_{n_{0}}(z_{0}) < \widehat{f}_{n_{0}}(z_{0}) + c\). Thus, from the monotonicity property (ii), it follows that

$$ f_{n_{0}+1}(z_{0}) = G_{n_{0}+1}(f_{n_{0}})(z_{0}) \leq G_{n_{0}+1}( \widehat{f}_{n_{0}}+c)(z_{0}), $$

which is a contradiction.

Now, to prove the second part of the lemma, assume to the contrary that the difference \(f_{n}(z) - \widehat{f}_{n}(z)\) is unbounded from below. Then, for every constant \(c>0\) one can find \(n_{1}\), \(z_{1}\) such that \(f_{n_{1}+1}(z_{1}) - \widehat{f}_{n_{1}+1}(z_{1}) < -c\). Choosing the minimal such \(n_{1} = n_{1}(c)\), we have

$$\begin{aligned} f_{j}(z) \geq \widehat{f}_{j}(z) - c, ~~~j\leq n_{1}. \end{aligned}$$
(8)

From \(g_{n_{1}+1}(z_{1}) \geq \widehat{f}_{n_{1}+1}(z_{1})> f_{n_{1}+1}(z_{1}) + c > c \), it follows that \(n_{1}(c) \to \infty \) as \(c\to \infty \). Moreover, choosing \(c\) large enough, we can achieve

$$ G_{n_{1}+1}(\widehat{f}_{n_{1}}-c)(z_{1}) = G_{n_{1}+1}(\widehat{f}_{n_{1}})(z_{1}) - c \geq \widehat{f}_{n_{1}+1}(z_{1}) - c > f_{n_{1}+1}(z_{1}). $$

However, this contradicts

$$ f_{n_{1}+1}(z_{1}) = G_{n_{1}+1}(f_{n_{1}})(z_{1})\geq G_{n_{1}+1} ( \widehat{f}_{n_{1}}-c)(z_{1}), $$

which follows from (8). This concludes our proof. □

2.2 Asymptotic Expansion of \(v_{n}\)

We utilise Lemma 1 to compare \(v_{n}(z)\) with a sequence of carefully chosen test functions. With each iteration of the method, we obtain a finer asymptotic expansion of \(v_{n}(z)\). Since we do not employ the initial condition \(v_{n}(0)=0\), the maximal accuracy of the comparison method is bounded by the \(O(1)\)-term. Having obtained the desired expansion of \(v_{n}(z)\), we specialise it to the case \(z=0\), thus deriving (4). The whole procedure is reminiscent of the familiar method of successive approximations for solutions of differential equations (see, for example, [12], Sect. 9.1).

Introduce operators \(\Delta \) and \(\varPsi \) acting as

$$ \Delta f_{n}(z) := f_{n+1}(z) - f_{n}(z), ~~~\varPsi f_{n}(z) := \int _{z}^{1} (f_{n}(x)+1 - f_{n}(z))_{+} \mathrm{d}x. $$

With this notation, the optimality equation (6) assumes the form

$$\begin{aligned} \Delta v_{n}(z) = \varPsi v_{n}(z), ~~~ v_{0}(z) =0. \end{aligned}$$

Applying Lemma 1 with \(G_{n+1}(f_{n})(z) = f_{n}(z) + \varPsi f_{n}(z)\) yields the following result.

Corollary 1

If, for \(n(1-z)\) large enough, \(\Delta \widehat{v}_{n}(z) > \varPsi \widehat{v}_{n}(z)\), then the difference \(v_{n}(z) - \widehat{v}_{n}(z)\) is bounded from the above uniformly in \(n\) and \(z\); likewise, if \(\Delta \widehat{v}_{n}(z) < \varPsi \widehat{v}_{n}(z)\), then \(v_{n}(z) - \widehat{v}_{n}(z)\) is bounded from the below uniformly for all \(n\) and \(z\).

To obtain the principal asymptotics consider the test function

$$ v^{(0)}_{n}(z): = \gamma _{0} \sqrt{n(1-z)}, $$

where \(\gamma _{0}>0\) is a parameter. Introducing for convenience \(\widehat{n}:=n(1-z)\) and expanding for large \(\widehat{n}\) we obtain

$$\begin{aligned} \Delta v^{(0)}_{n}(z) \sim \gamma _{0} \frac{1-z}{2\sqrt{\widehat{n}}}. \end{aligned}$$
(9)

Furthermore, using the change of variable \(y:={(x-z)}/{(1-z)}\), we can write the integral as

$$\begin{aligned} \varPsi v^{(0)}_{n}(z) =& (1-z) \int _{0}^{1} \left (\gamma _{0} \sqrt{ \widehat{n}-\widehat{n}y} - \gamma _{0} \sqrt{\widehat{n}} + 1\right )_{+} \mathrm{d}y \\ =& (1-z) \int _{0}^{h_{n}^{(0)}(z)}\left (\gamma _{0} \sqrt{ \widehat{n}-\widehat{n}y} - \gamma _{0} \sqrt{\widehat{n}} + 1\right ) \mathrm{d}y, \end{aligned}$$

where \(h_{n}^{(0)}(z)\) is the solution to

$$\begin{aligned} \gamma _{0} \sqrt{\widehat{n}(1-y)} - \gamma _{0} \sqrt{\widehat{n}} + 1= 0. \end{aligned}$$

We have

$$\begin{aligned} h_{n}^{(0)}(z) \sim \frac{2}{\gamma _{0}\sqrt{\widehat{n}}}, ~~~ \widehat{n} \to \infty . \end{aligned}$$
(10)

Using Taylor expansion of the integrand in \(y\) around 0 yields

$$\begin{aligned} 1 -\gamma _{0} \sqrt{\widehat{n}} \,\frac{y}{2} + O(\widehat{n}^{-1/2}), ~~~\widehat{n}\to \infty ; \end{aligned}$$

hence, integrating and using (10) yields

$$\begin{aligned} \varPsi v^{(0)}_{n}(z) \sim \frac{1-z}{\gamma _{0}\sqrt{\widehat{n}}}, ~~~ \widehat{n}\to \infty . \end{aligned}$$
(11)

The match between (9) and (11) occurs for \(\gamma _{0} = \sqrt{2}\). Therefore, we have, for \(n(1-z)\) large enough,

$$\begin{aligned} \begin{aligned} \Delta v^{(0)}_{n}(z) &> \varPsi v^{(0)}_{n}(z), ~~~ \text{when } \gamma _{0} > \sqrt{2}, \\ \Delta v^{(0)}_{n}(z) &< \varPsi v^{(0)}_{n}(z), ~~~ \text{when } \gamma _{0} < \sqrt{2}. \end{aligned} \end{aligned}$$

Applying Lemma 1 we see that \(\underset{n(1-z)\to \infty }{\limsup }(v_{n}(z) - \gamma _{0} \sqrt{n(1-z)})< \infty \). In light of this, \(\underset{n(1-z)\to \infty }{\limsup }(v_{n}(z)/\sqrt{n(1-z)}) \leq \gamma _{0}\), for \(\gamma _{0}>\sqrt{2}\). Consequently,

$$\begin{aligned} \limsup _{n(1-z) \to \infty } \frac{v_{n}(z)}{\sqrt{n(1-z)}} \leq \sqrt{2}. \end{aligned}$$
(12)

A parallel argument with \(\gamma _{0} < \sqrt{2}\) yields

$$\begin{aligned} \liminf _{n(1-z) \to \infty } \frac{v_{n}(z)}{\sqrt{n(1-z)}} \geq \sqrt{2}. \end{aligned}$$
(13)

Combining (12) with (13), we obtain

$$\begin{aligned} v_{n}(z) \sim \sqrt{2n(1-z)}, ~~~{\mathrm{for}}~ n(1-z) \to \infty . \end{aligned}$$

For a better approximation we consider the test functions

$$ v^{(1)}_{n}(z) = \sqrt{2n(1-z)} + \gamma _{1} \log {(n(1-z)+1)} $$

with \(\gamma _{1}\in {\mathbb{R}}\). We choose \(\log {(n(1-z)+1)}\) over \(\log {(n(1-z))}\) to avoid the singularity at \(n(1-z)=0\). The forward difference becomes

$$ \Delta v^{(1)}_{n}(z)= \sqrt{2n(1-z)} \left (\left (1+\frac{1}{{n}} \right )^{1/2} - 1\right ) + \gamma _{1} \log {\left (1 + \frac{1-z}{n(1-z)+1}\right )}. $$

Using Taylor expansion with a remainder yields

$$\begin{aligned} \Delta v^{(1)}_{n}(z) = \frac{1-z}{\sqrt{2\widehat{n}}} + \gamma _{1} \frac{1-z}{\widehat{n}+1} + O(\widehat{n}^{-{3}/{2}}), ~~~\widehat{n} \to \infty . \end{aligned}$$
(14)

On the other hand, using substitution \(y ={(x-z)}/{(1-z)}\) it can be deduced that

$$\begin{aligned} \varPsi v^{(1)}_{n}(z) =& (1-z) \int _{0}^{1} \Big( \sqrt{2\widehat{n}} \left ((1-y)^{1/2}-1\right ) + \gamma _{1} \log {(\widehat{n}(1-y)+1)} \\ &- \gamma _{1} \log {(\widehat{n}+1)} +1 \Big)_{+} \mathrm{d}y \\ =& (1-z) \int _{0}^{h_{n}^{(1)}(z)} \Big( \sqrt{2\widehat{n}}\left ((1-y)^{1/2}-1\right ) + \gamma _{1} \log {(\widehat{n}(1-y)+1)} \\ &- \gamma _{1} \log {(\widehat{n}+1)} +1 \Big)\, \mathrm{d}y , \end{aligned}$$
(15)

where \(h_{n}^{(1)}(z)\) solves

$$ \sqrt{2\widehat{n}}\left ((1-y)^{1/2}-1\right ) + \gamma _{1} \log {( \widehat{n}(1-y)+1)} - \gamma _{1} \log {(\widehat{n}+1)} +1 = 0. $$
(16)

For \(\widehat{n}\to \infty \),

$$\begin{aligned} h_{n}^{(1)}(z) = \sqrt{\frac{2}{\widehat{n}}} - \left (\frac{1}{2} + 2 \gamma _{1} \right ) \frac{1}{\widehat{n}+1} + O(\widehat{n}^{-{3}/{2}}). \end{aligned}$$
(17)

Actually, we only need the first term of (17) to obtain the expansion of \(\varPsi v^{(1)}_{n}(z)\) up to the desired order. This is because the \(O(\widehat{n}^{-1})\)-term in (17) contributes only \(O(\widehat{n}^{-3/2})\) to \(\varPsi v^{(1)}_{n}(z)\). Indeed, keeping \(\widehat{n}\) as a parameter, let us view the integral on the third line of (15) as a function of its upper limit

$$\begin{aligned} I(h) := (1-z)\int _{0}^{h} \left ( \sqrt{2\widehat{n}}\left ((1-y)^{1/2}-1 \right ) + \gamma _{1} \log {\left ( \frac{\widehat{n}(1-y)+1}{\widehat{n}+1}\right )} +1 \right ) \mathrm{d}y. \end{aligned}$$

In view of (16), the integrand vanishes at \(h_{1}:=h_{n}^{(1)}(z)\), so \(h_{1}\) is a stationary point of \(I\). Expanding at \(h_{1}\) with a remainder we get, for some \(\zeta \in [0,1]\),

$$\begin{aligned} I(h_{1}+\varepsilon ) - I(h_{1}) =& I'(h_{1}) \varepsilon + I''(h_{1}+ \zeta \varepsilon ) \frac{\varepsilon ^{2}}{2} \\ =& -(1-z)\left ( \frac{\sqrt{2\widehat{n}}}{2\sqrt{1-(h_{1}+\zeta \varepsilon )}}+ \frac{\gamma _{1}\widehat{n}}{\widehat{n}(1-(h_{1}+\zeta \varepsilon ))+1}\right ) \frac{\varepsilon ^{2}}{2}, \end{aligned}$$

where \(I'(h_{1})=0\).

Now letting \(\widehat{n} \to \infty \) and \(\varepsilon = O(\widehat{n}^{-1})\) we obtain

$$\begin{aligned} I(h_{1}+\varepsilon ) - I(h_{1}) = O(\widehat{n}^{-3/2}), \end{aligned}$$

as claimed.

In light of this, integrating and expanding we obtain

$$\begin{aligned} \varPsi v^{(1)}_{n}(z) \sim \frac{1-z}{\sqrt{2\widehat{n}}} - \left ( \gamma _{1} +\frac{1}{6} \right ) \frac{1-z}{\widehat{n}+1}, ~~~ \widehat{n} \to \infty . \end{aligned}$$
(18)

Expansions (14) and (18) match at \(\gamma _{1}=-\frac{1}{12}\). Thus, another application of Lemma 1 gives us

$$\begin{aligned} v_{n}(z) \sim \sqrt{2n(1-z)} - \frac{1}{12} \log {(n(1-z))},~~~n(1-z) \to \infty . \end{aligned}$$
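The matching of (14) and (18) at \(\gamma _{1} = -1/12\) can also be inspected numerically. The sketch below, assuming SciPy for root-finding and quadrature, evaluates \(\Delta v^{(1)}_{n}\) and \(\varPsi v^{(1)}_{n}\) at \(z=0\), where \(\widehat{n}=n\).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

g1 = -1.0 / 12.0                                  # the matching value of gamma_1

def delta_v(n):
    # forward difference of the test function v1(n) = sqrt(2n) + g1*log(n+1)
    v = lambda m: np.sqrt(2.0 * m) + g1 * np.log(m + 1.0)
    return v(n + 1) - v(n)

def psi_v(n):
    # integrand of (15) at z = 0; its zero is the acceptance window of (16)
    f = lambda y: (np.sqrt(2.0 * n) * (np.sqrt(1.0 - y) - 1.0)
                   + g1 * np.log((n * (1.0 - y) + 1.0) / (n + 1.0)) + 1.0)
    h = brentq(f, 1e-12, 1.0 - 1e-12)
    return quad(f, 0.0, h)[0]

for n in (10**3, 10**4, 10**5):
    print(n, delta_v(n), psi_v(n))                # the two agree to order n**(-3/2)
```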

We need one more iteration to bound the remainder. Consider the test functions

$$\begin{aligned} v^{(2)}_{n}(z) = \sqrt{2n(1-z)} - \frac{1}{12} \log {(n(1-z)+1)} + \gamma _{2} \frac{1}{\sqrt{n(1-z)+1}},~~~\gamma _{2}\in {\mathbb{R}}. \end{aligned}$$

For \(\widehat{n}\to \infty \), we obtain the expansion

$$\begin{aligned} \Delta v^{(2)}_{n}(z) \sim \frac{1-z}{\sqrt{2\widehat{n}}} - \frac{1-z}{12(\widehat{n}+1)} + \frac{1-z}{(\widehat{n}+1)^{{3}/{2}}} \left (-\frac{\gamma _{2}}{2} - \frac{(1-z)\sqrt{2}}{8} \right ), \end{aligned}$$
(19)

uniformly in \(z\in [0,1)\), and with some more effort for the integral

$$\begin{aligned} \varPsi v^{(2)}_{n}(z) \sim \frac{1-z}{\sqrt{2\widehat{n}}} - \frac{1-z}{12(\widehat{n}+1)}+ \frac{1-z}{(\widehat{n}+1)^{3/2}} \left (\frac{\gamma _{2}}{2} + \frac{35\sqrt{2}}{144}- \frac{\sqrt{2}}{4}\right ),~~~ \widehat{n}\to \infty . \end{aligned}$$
(20)

Since \(z\in [0,1)\), we have

$$\begin{aligned} -\frac{\gamma _{2}}{2} - \frac{\sqrt{2}}{8} \leq - \frac{\gamma _{2}}{2} - \frac{(1-z)\sqrt{2}}{8} < - \frac{\gamma _{2}}{2}. \end{aligned}$$
(21)

Appealing to (19), (20) and the first inequality in (21), we conclude that, for large \(n(1-z)\),

$$\begin{aligned} \Delta v^{(2)}_{n}(z) > \varPsi v^{(2)}_{n}(z), ~~~\text{for } \gamma _{2} \leq \frac{\sqrt{2}}{144}-\frac{\sqrt{2}}{8}; \end{aligned}$$

hence, by Lemma 1, \(v_{n}(z) - v^{(2)}_{n}(z)\) for such \(\gamma _{2}\) is bounded from above. On the other hand, exploiting the second inequality in (21), we derive that for large \(n(1-z)\)

$$\begin{aligned} \Delta v^{(2)}_{n}(z) < \varPsi v^{(2)}_{n}(z), ~~~ \text{for } \gamma _{2} \geq \frac{\sqrt{2}}{144}; \end{aligned}$$

thus, by Lemma 1, \(v_{n}(z) - v^{(2)}_{n}(z)\) for such \(\gamma _{2}\) is bounded from below. However, since the last term of \(v^{(2)}_{n}(z)\) is already bounded, it follows readily that

$$\begin{aligned} v_{n}(z) = \sqrt{2n(1-z)} - \frac{1}{12} \log {(n(1-z))} + O(1), ~~~n(1-z) \to \infty . \end{aligned}$$
(22)

Our main result is the special case of (22) with \(z=0\).

Theorem 1

The maximum expected length of an increasing subsequence selected in an online regime \(v_{n}\) satisfies

$$\begin{aligned} v_{n} = \sqrt{2n} - \frac{1}{12} \log {n} + O(1), ~~~n\to \infty . \end{aligned}$$
(23)

2.3 Asymptotically Optimal Policies

Recall our definition of a policy with a variable acceptance window. For \(m\in {\mathbb{N}}\), \(m\leq n\), let \(\widetilde{h}_{m}:[0,1]\to [0,1]\) be threshold functions which define a policy via the acceptance window

$$\begin{aligned} z< x\leq z+\widetilde{h}_{m}(z), \end{aligned}$$

where \(m\) is the number of remaining observations. The threshold functions \(\widetilde{h}_{m}(z)\) depend on both the size of the last selected item and the number of remaining observations. As was shown in the previous section, when \(v_{m}(z) > 1\), the optimal policy has threshold functions \(h_{m}(z)\) solving

$$\begin{aligned} v_{m}(z+x) +1 = v_{m}(z). \end{aligned}$$

However, there are good policies that can be defined more simply. For example, the stationary policy of Samuels and Steele [20] has constant threshold functions independent of the remaining sample size. It accepts every observation that exceeds the last selection by no more than \(\sqrt{2/n}\). Setting threshold functions \(\widehat{h}_{m}(z) :=\min \{ \sqrt{2/n}, 1-z\}\) for all \(m=1, 2, \ldots , n\) describes this strategy completely. Remarkably, this uncomplicated policy achieves asymptotic optimality up to the leading order term of the expected performance. The intuition behind the choice of this threshold function lies in the derivation of the familiar mean-constraint bound on \(v_{n}\) (see, for example, [9]).
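A quick Monte Carlo sketch of the stationary policy (the sample size and replication count below are illustrative choices):

```python
import math
import random

def stationary_selection_length(n):
    # Accept an observation iff it exceeds the last selection by at most sqrt(2/n).
    window = math.sqrt(2.0 / n)
    last, length = 0.0, 0
    for _ in range(n):
        x = random.random()
        if last < x <= last + window:
            last, length = x, length + 1
    return length

n, reps = 10_000, 500
mean_len = sum(stationary_selection_length(n) for _ in range(reps)) / reps
print(mean_len, math.sqrt(2 * n))   # agreement to the leading order sqrt(2n)
```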

A more sophisticated policy was introduced by Arlotto et al. [4]. In contrast to the stationary policy of Samuels and Steele, the acceptance window here is variable. The acceptance criterion for this policy is

$$ z< x\leq z + \widetilde{h}_{m}(z), $$

where

$$\begin{aligned} \widetilde{h}_{m}(z)=\min \left \{ \sqrt{\frac{2(1-z)}{m}},1 \right \} ~~~ \text{for } m=1, 2, \ldots , n. \end{aligned}$$
(24)

Observe that normalising (24) leads to the acceptance condition

$$\begin{aligned} 0< \frac{x-z}{1-z} < \sqrt{\frac{2}{m(1-z)}}, \end{aligned}$$

i.e. the threshold functions are similar to Samuels and Steele’s with one exception: Arlotto et al. used the expected number of remaining admissible observations in the calculation of the threshold function.

The value functions corresponding to Arlotto et al.’s policy satisfy the recursion

$$ \widetilde{v}_{n+1}(z) = (1-\widetilde{h}_{n}(z))\,\widetilde{v}_{n}(z) + \int _{z}^{z+\widetilde{h}_{n}(z)} \left (\widetilde{v}_{n}(x)+1 \right )\mathrm{d}x, ~~~ \widetilde{v}_{0}(z)=0. $$
(25)

Equation (25) can be subjected to the comparison method too. Applying Lemma 1 with the functional

$$ G_{n+1}(f_{n})(z)= (1- \widetilde{h}_{n}(z)) \,f_{n}(z) + \int _{z}^{z+ \widetilde{h}_{n}(z)} (f_{n}(x)+1)\, {\mathrm{d}}x, $$

one can obtain an analogous result. Choosing the approximating function of the form

$$ \widetilde{v}^{(0)}_{n}(z) := \alpha _{0} \sqrt{n(1-z)} + \alpha _{1} \log {(n(1-z)+1)} + \frac{\alpha _{2}}{\sqrt{n(1-z)}}, ~~~ \alpha _{0} \in \mathbb{R}_{+}, \,\alpha _{1}, \alpha _{2} \in \mathbb{R}, $$

and successively working out the values of \(\alpha _{0}\), \(\alpha _{1}\), \(\alpha _{2}\) leads to the same asymptotic expansion as in (22)

$$\begin{aligned} \widetilde{v}_{n}(z) = \sqrt{2n(1-z)} - \frac{1}{12} \log {n(1-z)} + O(1). \end{aligned}$$

Taken together with (22) this settles the conjecture in [4].

Theorem 2

The policy with threshold functions (24) has the expected performance \(\widetilde{v}_{n}=\widetilde{v}_{n}(0)\) satisfying

$$ |v_{n} - \widetilde{v}_{n}| = O(1), ~~~ n \to \infty . $$
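Recursion (25) is as easy to iterate on a grid as (6). In the sketch below the window is capped at \(1-z\) and indexed by the current remaining sample size — implementation details assumed here, which do not affect the asymptotics.

```python
import numpy as np

# Iterate recursion (25) with the threshold functions (24) on a uniform grid.
M = 801
z = np.linspace(0.0, 1.0, M)
dx = z[1] - z[0]
vt = np.zeros(M)                                  # tilde v_0(z) = 0

N = 200
for n in range(1, N + 1):
    vt_new = np.empty(M)
    for i in range(M):
        h = min(np.sqrt(2.0 * (1.0 - z[i]) / n), 1.0 - z[i])   # capped window
        j = min(M - 1, i + int(round(h / dx)))                 # grid index of z_i + h
        integrand = vt[i:j + 1] + 1.0
        integral = dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
        vt_new[i] = (1.0 - h) * vt[i] + integral
    vt = vt_new

print(vt[0], np.sqrt(2 * N) - np.log(N) / 12)     # tilde v_N(0) vs expansion (22)
```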

3 The Quickest Selection Problem

We now turn to study the quickest selection problem introduced in [3]. In contrast to the original problem, the question is now: what is the minimum expected time \(\beta _{k}\) needed to choose an increasing subsequence of length \(k\) from an infinite sequence of random variables in an online fashion? Recall that, formally,

$$\begin{aligned} \beta _{k} = \beta _{k}(0) = \inf _{\boldsymbol{\tau } \in \mathcal{T}} \mathbb{E}\tau _{k}. \end{aligned}$$

3.1 Optimality Recursion in the Quickest Selection Problem

The quickest selection problem is a decision problem with an infinite horizon. Still, a version of the comparison method turns out to be useful here too.

The value function \(\beta _{k}(z)\) in the quickest selection problem depends on the running maximum \(z\) (the size of the last selected item) and on the number of selections yet to be made. The first-step decomposition yields the dynamic programming equation

$$ \beta _{k+1}(z)=1+z\beta _{k+1}(z)+\int _{z}^{1}\min (\beta _{k}(x), \,\beta _{k+1}(z))\,{\mathrm{d}}x. $$

The observations exceeding \(z\) arrive with \({\mathrm{Geom}}(1-z)\) interarrival times, hence

$$ \beta _{k}(z)=\frac{\beta _{k}}{1-z}, $$

where \({\beta _{k}}:={\beta _{k}}(0)\). Substituting this into the dynamic programming equation and changing the variable \(x\to (1-z) \,x + z\), we arrive at

$$\begin{aligned} \beta _{k+1}=1+\int _{0}^{1}\min \left (\frac{\beta _{k}}{1-x},\, \beta _{k+1}\right )\,{\mathrm{d}}x. \end{aligned}$$
(26)

Lemma 2

The optimal value function \(\beta _{k}\) satisfies the implicit recursion

$$\begin{aligned} \beta _{k+1}-\beta _{k}-\beta _{k}\log \left ( \frac{\beta _{k+1}}{\beta _{k}} \right )=1, \end{aligned}$$
(27)

initialised with \(\beta _{1} = 1\).

Proof

We have \(\beta _{k}/(1-x) \leq \beta _{k+1}\) if and only if \(x \leq 1 - \beta _{k}/\beta _{k+1} =: h_{k}\). Thus, it is optimal to choose the observation \(X\) only when \(X\leq h_{k}\). From this we can rewrite (26) as

$$ \beta _{k+1} = 1 + (1 - h_{k}) \, \beta _{k+1} + \int _{0}^{h_{k}} \frac{\beta _{k}}{1-x} \, {\mathrm{d}} x, ~~~\beta _{1}=1. $$

Computing the integral as \(-\beta _{k}\log (1-h_{k}) = \beta _{k}\log ({\beta _{k+1}}/{\beta _{k}})\) and rearranging yields (27). □

The optimal strategy amounts to the following rule. If at stage \(j\) some \(k\) items are yet to be chosen, the last selection was \(z\), and the observed item is \(X_{j}=x\) then the item should be selected if and only if

$$ 0< \frac{x-z}{1-z}< h_{k}. $$

For an arbitrary sequence of constants \(\widetilde{h}_{k} \in [0,1]\), we call a strategy defined in this way self-similar.

Arlotto et al. derived (3) by analysing the optimality recursion in the following form ([3], Lemma 3)

$$\begin{aligned} \beta _{k+1} = \min _{t\in [0,1]} \left ( \frac{1}{t} - \frac{\beta _{k}}{t} \log {(1-t)}\right ), ~~~\beta _{1}=1. \end{aligned}$$
(28)

The recursion (27) is, in fact, equivalent to (28). Indeed, we find that the minimising value of \(t = t^{*}\) satisfies

$$\begin{aligned} \log {(1-t^{*})} = \frac{1}{\beta _{k}}-\frac{t^{*}}{1-t^{*}}. \end{aligned}$$
(29)

Substituting (29) into (28) yields the optimal solution \(t^{*} = 1 - \beta _{k} / \beta _{k+1}\). Plugging the optimal solution into (28) and rearranging gives (27).

Arlotto et al. also proved that the function \(k \to \beta _{k}\) is convex and showed the optimal policy to be self-similar with the optimal threshold functions satisfying

$$\begin{aligned} h_{k} = \frac{2}{k} + O(k^{-2}\log {k}), ~~~ k \to \infty . \end{aligned}$$
(30)
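Lemma 2 also makes \(\beta _{k}\) straightforward to compute numerically. Below is a minimal sketch that solves \(G(\beta _{k}, y)=1\) for \(y = \beta _{k+1}\) by Newton's method; the starting point \(2b+2\) lies to the right of the root, so the iteration decreases monotonically to it (the horizon is an illustrative choice).

```python
import math

def next_beta(b, tol=1e-12):
    # Solve y - b - b*log(y/b) = 1 for y > b by Newton's method; f'(y) = 1 - b/y.
    y = 2.0 * b + 2.0
    while True:
        f = y - b - b * math.log(y / b) - 1.0
        y_new = y - f / (1.0 - b / y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new

beta, K = 1.0, 1000                               # beta_1 = 1
for _ in range(1, K):
    beta = next_beta(beta)

print(beta, K**2 / 2 + K * math.log(K) / 6)       # compare with (5); the gap is O(K)
```

The first step returns \(\beta _{2} = 3.146\ldots \), a number that reappears in Sect. 3.2 below.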

3.2 Preparation for the Asymptotic Analysis

Recursion (27) possesses several useful analytical properties. To begin with, define a function \(G:\mathbb{R}^{2}_{+} \to \mathbb{R}\) as

$$\begin{aligned} G(x,y) := y - x - x\log {\frac{y}{x}}. \end{aligned}$$

In terms of \(G\), (27) becomes

$$\begin{aligned} G(\beta _{k}, \beta _{k+1}) = 1, ~~~\beta _{1}=1. \end{aligned}$$
(31)

Recursion (31) taken together with the condition

$$\begin{aligned} 1 = \beta _{1} < \beta _{2} < \cdots \end{aligned}$$

defines the sequence \(\beta _{k}\) uniquely, as seen from the next lemma.

For \(x\geq 0\) define \(g(x)\) as a solution to

$$\begin{aligned} G(x, g(x))=1. \end{aligned}$$

This function \(g(x)\) has two branches, and we are interested in the upper branch.

Lemma 3

The function \(g\) has a branch that lies entirely in the domain \(\mathcal{D}=\{(x,y):x+1 < y, \, x>0\}\).

Proof

Calculating the partial derivatives

$$\begin{aligned} \frac{\partial G}{\partial x} = \log {\frac{x}{y}}, ~~~ \frac{\partial G}{\partial y} = 1 - \frac{x}{y}, \end{aligned}$$

we see that, if \(G(x_{0}, y_{0}) = 1\), then, by the Implicit Function Theorem, in the vicinity of \((x_{0}, y_{0})\) there is a uniquely defined function \(g(x)\) with

$$\begin{aligned} g'(x) = -\frac{\partial G}{\partial x} / \frac{\partial G}{\partial y} = \frac{-\log {(x/y)}}{1-x/y} \end{aligned}$$

provided \(x_{0} \neq y_{0}\). If, furthermore, \(0< x_{0}< y_{0}\), then this function has derivative \(g'(x)>1\), since

$$\begin{aligned} -\log {z} > 1-z, ~~~ \text{for } 0< z< 1, \end{aligned}$$

where \(z = x/y\). Thus, if there is one such point \((x_{0}, y_{0}) \in \mathcal{D}\), then there is a branch \(g(x): \mathbb{R}_{+} \to \mathbb{R}_{+}\) with \((x, g(x)) \in \mathcal{D}\). □

In particular, we can pick \((x_{0}, y_{0}) = (1, y_{0})\), where \(y_{0} =3.146\cdots \) solves

$$\begin{aligned} y - \log {y} = 2. \end{aligned}$$

Note that \(g(0+) = 1\), but \(g'(0+)=\infty \).

From now on we only consider the branch of \(g(x)\) defined in Lemma 3. In these terms

$$\begin{aligned} \beta _{k+1} = g(\beta _{k}), ~~~ \beta _{1}=1. \end{aligned}$$

That is, the sequence of optimal values \(\beta _{k}\) is obtained by iterating \(g\), starting with \(\beta _{1} = 1\). So \(\beta _{2} = g(1)\), \(\beta _{3} = g(g(1))\), etc. We now wish to find the asymptotic behaviour of \(g\) for large \(x\).

Lemma 4

Function \(g(x)\) possesses the following asymptotic expansion

$$\begin{aligned} g(x) = x + \sqrt{2x} + \frac{2}{3} + \frac{\sqrt{2}}{18{\sqrt{x}}}+O(x^{-1}), ~~~ x\to \infty . \end{aligned}$$
(32)

Proof

Dividing both sides of \(G(x,y)=1\) by \(x\) yields

$$\begin{aligned} \frac{y}{x} - 1 - \log {\frac{y}{x}} = \frac{1}{x}. \end{aligned}$$

Note that, as \(x \to \infty \), any limit point \(a\) of the ratio \(y/x\), where \(y=g(x)\), must satisfy

$$\begin{aligned} a = 1 + \log {a}. \end{aligned}$$

Performing a change of variables

$$\begin{aligned} z = \frac{y}{x}-1, ~~~ w=\frac{1}{x}, \end{aligned}$$

we arrive at

$$\begin{aligned} z - \log {(1+z)} = w. \end{aligned}$$
(33)

Because \(\limsup y/x<\infty \) and \(a=1\) is the unique solution to \(a=1+\log a\), we may conclude that

$$ \frac{y}{x} \to 1. $$

In light of this, we may investigate (33) in the vicinity of \(z=w=0\). The function \(w(z)\) is analytic in the unit disc. Thus, expanding the logarithm yields a series representation of \(w(z)\)

$$\begin{aligned} w(z) = \frac{z^{2}}{2}-\frac{z^{3}}{3}+\frac{z^{4}}{4}-\cdots . \end{aligned}$$
(34)

Since \(w'(0)=0\), the inverse function has an algebraic branch point at 0 of order 1 (see [16] for definition). The inverse \(z(w)\) is representable as Puiseux series in powers of \(w^{1/2}\), with coefficients that can be calculated recursively. From the first two terms of series (34) we obtain

$$\begin{aligned} z(w) = \sqrt{2}w^{1/2} + O(w). \end{aligned}$$

Plugging \(z(w) = \sqrt{2}w^{1/2} + a_{0} w + o(w)\), where \(a_{0}\) is a constant coefficient, into (34) yields

$$\begin{aligned} \sqrt{2} a_{0} w^{3/2}-\frac{2\sqrt{2}}{3} w^{3/2} + O(w^{2}) = 0, \end{aligned}$$

which provides us with a refinement

$$\begin{aligned} z(w) = \sqrt{2} w^{1/2} + \frac{2}{3}w + O(w^{3/2}). \end{aligned}$$

Another iteration of the method with \(z(w) = \sqrt{2} w^{1/2} + \frac{2}{3}w + a_{1} w^{3/2}+o(w^{3/2})\), \(a_{1}\) constant, results in the expansion

$$\begin{aligned} z(w) = \sqrt{2} w^{1/2} + \frac{2}{3} w + \frac{\sqrt{2}}{18} w^{3/2} + O(w^{2}). \end{aligned}$$

Translating this back in terms of variables \(x\), \(y\), we obtain the desired asymptotic expansion

$$\begin{aligned} y = g(x) = x + \sqrt{2x} + \frac{2}{3} + \frac{\sqrt{2}}{18\sqrt{x}} + O(x^{-1}), ~~~ x\to \infty . \end{aligned}$$

 □
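The series reversion used in the proof can be double-checked symbolically; a short sketch assuming SymPy:

```python
import sympy as sp

t, z, a1, a2 = sp.symbols('t z a1 a2')

# w(z) = z - log(1+z) expanded as in (34), truncated at z**5
w_of_z = sp.series(z - sp.log(1 + z), z, 0, 6).removeO()

# Puiseux ansatz z = sqrt(2)*t + a1*t**2 + a2*t**3 with t = w**(1/2)
ansatz = sp.sqrt(2) * t + a1 * t**2 + a2 * t**3
expr = sp.expand(w_of_z.subs(z, ansatz)) - t**2

# match the coefficients of t**3 and t**4 to zero
sol = sp.solve([expr.coeff(t, 3), expr.coeff(t, 4)], [a1, a2])
print(sol)    # expect a1 = 2/3 and a2 = sqrt(2)/18, as in the proof
```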

Suppose now that \((x_{k})\) is a sequence of iterations

$$ x_{k+1} = g(x_{k}), ~~~ k=1,2,\ldots $$

with some \(x_{1}>0\). Since \(x_{k+1}>x_{k}+1\), we have \(x_{k+1}>x_{1} +k\), and so \(x_{k} \to \infty \), as \(k \to \infty \). Thus, by Lemma 4,

$$\begin{aligned} x_{k+1} = x_{k} + \sqrt{2x_{k}} + \frac{2}{3} + o(1), ~~~ k \to \infty . \end{aligned}$$
(35)

To derive the leading asymptotic term from (35) we only need

$$\begin{aligned} x_{k+1} - x_{k} \sim \sqrt{2x_{k}}, ~~~ \text{as } k \to \infty . \end{aligned}$$
(36)

The idea is to compare \(x_{k}\) with a solution of the analogous differential equation

$$ f'(t) = \sqrt{2f(t)}, $$

which satisfies

$$\begin{aligned} \int _{f(t)}^{f(t+1)} \frac{\mathrm{d}u}{\sqrt{2u}}=1. \end{aligned}$$
(37)

Equation (37), in turn, yields

$$ \sqrt{2f(t+1)}-\sqrt{2f(t)} = 1. $$

An application of the mean value theorem leads to

$$ \sqrt{2x_{k+1}}-\sqrt{2x_{k}} = \frac{x_{k+1}-x_{k}}{\sqrt{2\widetilde{x}_{k}}}, $$

where \(x_{k}<\widetilde{x}_{k} < x_{k+1}\). Hence,

$$ \frac{x_{k+1}-x_{k}}{\sqrt{2x_{k+1}}} \leq \sqrt{2x_{k+1}}-\sqrt{2x_{k}} \leq \frac{x_{k+1}-x_{k}}{\sqrt{2x_{k}}}. $$

Recalling that \(\lim _{k \to \infty } x_{k+1}/x_{k} =1\) and the asymptotics (36), we obtain

$$ \sqrt{2x_{k}} \sim k, ~~~ k \to \infty , $$

and, therefore,

$$\begin{aligned} x_{k} \sim \frac{k^{2}}{2}. \end{aligned}$$
(38)

The recursion \(x_{k+1}=g(x_{k})\) is homogeneous, meaning that each subsequent term \(x_{k+1}\) of the sequence is a function of \(x_{k}\) only, independent of \(k\). Thus, we are interested in how a shift in the initial condition affects the sequence for large \(k\).

Lemma 5

For any sequence \((x_{k})\) solving the recursion \(G(x_{k}, x_{k+1})=1\), it holds that

$$ \lvert x_{k}-\beta _{k}\rvert =O(k), ~~~k \to \infty . $$

Proof

If \(x_{1}=\beta _{1}\) the sequences are identical and the assertion trivial. We shall first examine the case \(x_{1} > \beta _{1}\).

Given the monotonicity of \(g(x)\), we have that \(x_{k} > \beta _{k}\), for all \(k\in \mathbb{N}\). Thus, it suffices to prove that there exists a positive constant \(c\) such that \(x_{k}-\beta _{k} \leq ck\), for all \(k\).

Since sequence \((\beta _{k})\) is unbounded and increasing, we can find a finite \(k_{0}\) such that \(\beta _{k_{0}}> x_{1}\). Having identified the point \(k_{0}\) of the inequality direction change, we know that, by monotonicity of \(g(x)\), the elements \(\beta _{k_{0}+1}, \beta _{k_{0}+2}, \ldots \) dominate \(x_{2}, x_{3}, \ldots \) respectively. Observe that

$$ \beta _{k_{0}}=\beta _{1} + \sum _{i=1}^{k_{0}-1} \Delta \beta _{i}, ~~~{\mathrm{where}}~ \Delta \beta _{i} := \beta _{i+1}-\beta _{i}. $$
(39)

Hence, comparing \(x_{k}\) to the shifted sequence yields

$$ x_{k} - \left (\beta _{k} + \sum _{i=k}^{k+k_{0}} \Delta \beta _{i} \right ) < 0, ~~~ \text{for all }k. $$

Whence the upper bound

$$ x_{k} - \beta _{k} < \sum _{i=k}^{k+k_{0}} \Delta \beta _{i}. $$

The asymptotic expansion (35) together with (38) implies \(\Delta \beta _{k} = O(k)\); choosing \(M\) such that \(\Delta \beta _{k} \leq M k\) for all \(k\in \mathbb{N}\), the sum above does not exceed \(M(k_{0}+1)(k+k_{0})\), which allows us to choose \(c\) as a suitable multiple of \(k_{0} M\).

Now we turn to the case when \(x_{1} < \beta _{1}\). From the monotonicity of \(g(x)\), it is enough to show that there exists a positive constant \(c_{1}\) satisfying \(x_{k} - \beta _{k} \geq -c_{1}k\) for all \(k\).

The sequence \((x_{k})\) is unbounded and increasing; therefore, one can find a finite \(k_{1}\) such that \(x_{k_{1}}>\beta _{1}\). Choosing the smallest such \(k_{1}\), we have that \(x_{k_{1}+1} > \beta _{2}\), \(x_{k_{1}+2} > \beta _{3}\), …. Appealing to (39), we have

$$ x_{k} - \left (\beta _{k} - \sum _{i=k-k_{1}}^{k} \Delta \beta _{i}\right ) > 0, ~~~{\mathrm{for}}~ k>k_{1}. $$

Whence,

$$ x_{k} - \beta _{k} > - \sum _{i=k-k_{1}}^{k} \Delta \beta _{i}. $$

Choosing \(c_{1}\) as a suitable multiple of \(k_{1} M\) concludes the proof. □

Before stating an analogue of Lemma 1, we need to highlight the following monotonicity property of \(G\).

Lemma 6

Let \(0< u< v\). If \(G(u,v)>1\) and \(u>x\), then \(v>g(x)\). Analogously, if \(G(u,v)<1\) and \(u< x\), then \(v< g(x)\).

Proof

Since \(g'>0\), as shown in the proof of Lemma 3, \(u>x\) implies \(g(u)>g(x)\). Then, \(G(u,g(u))=1\) and \(G(u,v)>1\) imply \(v>g(u)\) by the monotonicity of \(G\) in the second argument, which follows from

$$\begin{aligned} \frac{\partial G}{\partial v} = 1 - \frac{u}{v} > 0. \end{aligned}$$

Hence \(v>g(x)\).

Analogously, \(u< x\) implies \(g(u)< g(x)\). By \(G(u,v)<1\) and the monotonicity of \(G\), we have \(v< g(u)< g(x)\), which completes the proof. □

Now, we state and prove the analogue of Lemma 1 in the quickest selection problem.

Lemma 7

Let \((x_{k})\) be an increasing sequence such that \(G(x_{k},x_{k+1})>1\) (or, equivalently, \(x_{k+1}>g(x_{k})\)) for all sufficiently large \(k\). Then for some constant \(c>0\)

$$ \beta _{k}-x_{k}< ck, ~~~k \in {\mathbb{N}}. $$

Similarly, if \(G(x_{k},x_{k+1})<1\) (or, equivalently, \(x_{k+1}< g(x_{k})\)) for all sufficiently large \(k\), then for some \(c>0\)

$$ x_{k}-\beta _{k}< ck,~~~k\in {\mathbb{N}}. $$

Proof

Assume to the contrary that for arbitrarily large \(c_{0}\in \mathbb{R}^{+}\) there exists \(k_{0} \in \mathbb{N}\) such that

$$\begin{aligned} \begin{aligned} \beta _{k} - x_{k} &< c_{0} k, ~~~ \text{for } k < k_{0}, \\ \beta _{k}- x_{k} &\geq c_{0}k, ~~~ \text{for } k \geq k_{0}. \end{aligned} \end{aligned}$$
(40)

Choosing \(c_{0}\) large ensures that \(k_{0}\) is large enough to have \(x_{k+1}>g(x_{k})\) for \(k \geq k_{0}\).

Now, it is easy to see that \(\beta _{k_{0}}< x_{k_{0}}\) leads to a contradiction with the second inequality in (40); thus, we only consider the case \(\beta _{k_{0}}>x_{k_{0}}\). Introducing a sequence \((y_{k})\) that satisfies \(G(y_{k}, y_{k+1}) =1 \) and \(y_{k_{0}} = x_{k_{0}}\), we have

$$\begin{aligned} x_{k} > y_{k}, ~~~{\mathrm{for~}} k\geq k_{0}+1. \end{aligned}$$
(41)

Moreover, by Lemma 5, there exists a positive constant \(c_{1}\) such that

$$\begin{aligned} \beta _{k} - y_{k} < c_{1}k, ~~~{\mathrm{for~}} k\in \mathbb{N}. \end{aligned}$$

Let \(c_{2} := c_{0} \vee c_{1}\). Then we can find a \(k_{1} \geq k_{0}\), \(k_{1} \in \mathbb{N}\) such that

$$\begin{aligned} \beta _{k} - x_{k} \geq c_{2} k, ~~~{\mathrm{for~}} k\geq k_{1}, \end{aligned}$$
(42)

and

$$\begin{aligned} \beta _{k} - y_{k} < c_{2}k, ~~~{\mathrm{for~}} k\in \mathbb{N}. \end{aligned}$$

Recalling (41) yields

$$\begin{aligned} x_{k} > \beta _{k} - c_{2}k, ~~~{\mathrm{for~}} k \geq k_{0}+1, \end{aligned}$$

which contradicts (42).

For the second part of the lemma, assume \(G(x_{k}, x_{k+1}) < 1\) for large \(k\), but for an arbitrarily large constant \(\widetilde{c}_{0}\), one can find \(k_{2}\) such that

$$\begin{aligned} \begin{aligned} x_{k} - \beta _{k} &< \widetilde{c}_{0} k, ~~~ \text{for } k < k_{2}, \\ x_{k} - \beta _{k} &\geq \widetilde{c}_{0} k, ~~~ \text{for } k \geq k_{2}. \end{aligned} \end{aligned}$$
(43)

When \(\beta _{k_{2}} > x_{k_{2}}\), this leads to a contradiction with (43) immediately. Hence, we only consider the case \(\beta _{k_{2}} < x_{k_{2}}\). For a sequence \((z_{k})\) satisfying \(G(z_{k}, z_{k+1}) = 1\) and \(z_{k_{2}} = x_{k_{2}}\), we have

$$ x_{k}< z_{k}, ~~~{\mathrm{for~}} k\geq k_{2}+1. $$
(44)

By virtue of Lemma 5, we can find a positive constant \(\widetilde{c}_{1}\) such that

$$ \beta _{k} - z_{k} > -\widetilde{c}_{1} k, ~~~{\mathrm{for~all~}}k. $$

With \(\widetilde{c}_{2}:=\widetilde{c}_{0} \vee \widetilde{c}_{1}\), we can find \(k_{3} \geq k_{2}\), \(k_{3} \in \mathbb{N}\) to have

$$ x_{k} - \beta _{k} \geq \widetilde{c}_{2} k, ~~~{\mathrm{for ~}} k \geq k_{3}. $$
(45)

Since \(\widetilde{c}_{2} \geq \widetilde{c}_{1}\), one has \(\beta _{k} - z_{k} > -\widetilde{c_{2}} k\), for all \(k\). Hence, by (44),

$$ x_{k} < \beta _{k} + \widetilde{c}_{2} k, ~~~{\mathrm{for ~}}k\geq k_{3}. $$

However, this contradicts (45), which finalises the proof of the lemma. □

With this result in our toolbox, we are fully equipped to refine the asymptotic expansion of \(\beta _{k}\).

3.3 Asymptotic Expansion of \(\beta _{k}\)

The order of the next term of expansion of \(\beta _{k}\) is readily suggested by the upper bound in (3). However, to strengthen the hypothesis, we provide a heuristic argument based on the natural duality between this problem and the original problem of selecting the longest increasing subsequence.

We are taking a step further in exploring the connection between the two problems. Obtaining the asymptotic inverse of (23) suggests that

$$\begin{aligned} \beta _{k} = \frac{k^{2}}{2} + \frac{k \log {k}}{6} + O(k),~~~ \text{as } k \to \infty . \end{aligned}$$

Thus, heuristics hint at the second term of order \(O(k \log {k})\). In view of this, we choose the first approximating sequence \((x^{(0)}_{k})\)

$$\begin{aligned} x^{(0)}_{k} := \frac{k^{2}}{2} + \omega _{0} k \log {k}, \end{aligned}$$

where \(\omega _{0}\) is a parameter. Recalling the expansion (32), we obtain, as \(k\to \infty \),

$$\begin{aligned} g(x^{(0)}_{k}) = x^{(0)}_{k} + \sqrt{2x^{(0)}_{k}} + \frac{2}{3} + O(k^{-1}) =\frac{k^{2}}{2} + \omega _{0} k \log {k} + k + \omega _{0} \log {k} + \frac{2}{3} + o(1). \end{aligned}$$

Therefore,

$$\begin{aligned} x^{(0)}_{k+1} - g(x^{(0)}_{k}) = -\frac{1}{6} + \omega _{0} + o(1), ~~~ k\to \infty . \end{aligned}$$

Straightforwardly it follows that, for \(k\) large enough,

$$\begin{aligned} \begin{aligned} x^{(0)}_{k+1} &> g(x^{(0)}_{k}), ~~~ \text{when } \omega _{0} > \frac{1}{6} \\ x^{(0)}_{k+1} &< g(x^{(0)}_{k}), ~~~ \text{when } \omega _{0} < \frac{1}{6}. \end{aligned} \end{aligned}$$
(46)

Combining the inequalities (46) with Lemma 7 produces the following result

Corollary 2

As \(k \to \infty \),

$$\begin{aligned} \beta _{k} \sim \frac{k^{2}}{2} + \frac{k \log {k}}{6}. \end{aligned}$$
(47)

To bound the remainder, we need yet another successive approximation. Choose a test function of the form

$$\begin{aligned} x^{(1)}_{k} = \frac{k^{2}}{2} + \frac{k \log {k}}{6} + \omega _{1} ( \log {k})^{2}, \end{aligned}$$

where \(\omega _{1}\) is a constant. On the one hand, we have, as \(k \to \infty \),

$$\begin{aligned} x^{(1)}_{k+1} = \frac{k^{2}}{2} + \frac{k \log {k}}{6} + k + \omega _{1} (\log {k})^{2}+ \frac{\log {k}}{6}+ \frac{2}{3} + \frac{2\omega _{1} \log {k}}{k}+ O(k^{-1}). \end{aligned}$$

On the other hand, taking all four terms of expansion (32),

$$\begin{aligned} g(x^{(1)}_{k}) \sim \frac{k^{2}}{2} + \frac{k \log {k}}{6} + k + \omega _{1} (\log {k})^{2} + \frac{\log {k}}{6}+\frac{2}{3} + \frac{\omega _{1} (\log {k})^{2}}{k} - \frac{(\log {k})^{2}}{72k}. \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned} x^{(1)}_{k+1} - g(x^{(1)}_{k}) &> 0, ~~~ \text{when } \omega _{1} < \frac{1}{72}, \\ x^{(1)}_{k+1} - g(x^{(1)}_{k}) &< 0, ~~~ \text{when } \omega _{1} > \frac{1}{72}. \end{aligned} \end{aligned}$$

Recall that a shift in the initial condition of the optimality recursion (27) results in a change of order \(O(k)\) to the solution; since the comparison with the approximating sequence \(x^{(1)}_{k}\) provides a refinement of smaller order, we may bound the remainder in the expansion (47).

Theorem 3

The minimum expected time required to select an increasing sequence of length \(k\) satisfies the following asymptotic expansion

$$\begin{aligned} \beta _{k} = \frac{k^{2}}{2} + \frac{k \log {k}}{6} + O(k), ~~~{\mathrm{as}}~k \to \infty . \end{aligned}$$
(48)

Corollary 3

The optimal threshold \(h_{k}\) satisfies the following refined asymptotic expansion

$$\begin{aligned} h_{k} = \frac{2}{k} - \frac{\log {k}}{3k^{2}} + O(k^{-2}), ~~~ k \to \infty . \end{aligned}$$

Proof

Unfortunately, the direct computation of \(h_{k} = 1 - \beta _{k}/\beta _{k+1}\) from (48) does not yield any meaningful results due to the \(O(k)\) remainder. However, Lemma 7 of [3] provides an asymptotic approximation to the optimal threshold functions in terms of the optimal value functions

$$ h_{k} = \sqrt{\frac{2}{\beta _{k}}}(1+O(k^{-1})), ~~~ k \to \infty . $$

Using the one-term asymptotic expansion \(\beta _{k} \sim k^{2}/2\), Arlotto et al. computed that \(h_{k} = 2/k + O(k^{-2} \log {k})\), \(k\to \infty \). Plugging in the refined asymptotics (48) instead leads to the desired result. □

3.4 A Quasi-Stationary Policy

In this section, we construct a simple quasi-stationary policy that, as \(k\) grows large, has the expected time of selection matching \(\beta _{k}\) up to the leading term of the expansion. We call it quasi-stationary because it has a second, more conservative selection mode with a narrower acceptance window. Our quasi-stationary policy has threshold functions independent of the remaining number of elements to be chosen, analogously to the stationary policy of Samuels and Steele [20].

We define our policy by choosing the threshold functions \(\widehat{h}_{i}(z)\), \(i=1, \ldots , k\),

$$\begin{aligned} \widehat{h}_{i}(z) = \begin{cases} \eta , & \mbox{if } z< 1 - a(k)-\eta , \\ \frac{a(k)}{k}, & \mbox{if } z \geq 1-a(k)-\eta , \end{cases} \end{aligned}$$

where

$$\begin{aligned} \eta = \frac{2(1-a(k))}{k(1+a(k))} \end{aligned}$$

and the function \(a:\mathbb{R}_{+} \to [0,1]\) is decreasing in \(k\); we fix the precise form of \(a(k)\) in the sequel.

The policy acts in two regimes. Firstly, we accept every consecutive observation within \(\eta \) above the last selected item. Secondly, when the last selection size gets above \(1-a(k)-\eta \), we abandon the initial rule and accept all admissible elements within an acceptance window of size \(a(k)/k\).

The choice of \(\eta \) is inspired by the asymptotics of the optimal threshold (30). However, choosing \(2/k\) exactly leads to a problem: with high probability the selection process will cross the \(1-a(k)-\eta \) barrier, while there are \(O(k^{1/2})\) elements yet to choose. Loosely speaking, as \(k\) gets large, the selection process with a constant window is governed by the central limit theorem. Although the expectation of the sum of \(k\) random variables distributed uniformly on \([0, 2/k] \) is 1, it has a standard deviation of \(O(k^{-1/2})\). A way to overcome this issue is to decrease the threshold size so that the probability of reaching the barrier is low, but keep it large enough so that the expected time of selection remains unchanged up to the terms of a lower order. The task narrows down to choosing a suitable \(a(k)\).
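A Monte Carlo sketch of the policy is below, with \(a(k) = k^{-1/2+\varepsilon }\) and \(\varepsilon = 0.1\) — one admissible choice, anticipating the proof of Theorem 4. For moderate \(k\) the finite-\(k\) prediction \(k/\eta \) still lies noticeably above \(k^{2}/2\), since \(a(k)\) decays slowly.

```python
import math
import random

def selection_time(k, eps=0.1):
    a = k ** (-0.5 + eps)
    eta = 2.0 * (1.0 - a) / (k * (1.0 + a))
    barrier = 1.0 - a - eta
    last, chosen, t = 0.0, 0, 0
    while chosen < k:
        t += 1
        x = random.random()
        window = eta if last < barrier else a / k   # the two selection modes
        if last < x <= last + window:
            last, chosen = x, chosen + 1
    return t

k, reps = 100, 500
mean_t = sum(selection_time(k) for _ in range(reps)) / reps
a = k ** (-0.4)
print(mean_t, k * k * (1 + a) / (2 * (1 - a)), k * k / 2)
```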

In the rest of this section, \(\widehat{\beta }_{k}\) denotes the expected performance of the quasi-stationary policy.

Theorem 4

The quasi-stationary policy with threshold functions \(\widehat{h}_{i}(z)\) is asymptotically optimal, i.e.

$$\begin{aligned} \widehat{\beta }_{k} \sim \frac{k^{2}}{2}, ~~~ \textit{as } k \to \infty . \end{aligned}$$

Proof

Let \((Z_{j})_{j\in \mathbb{N}}\) denote the last selection process of the quasi-stationary policy. Introduce a hitting time \(\xi \) of the barrier \(1-a(k)-\eta \)

$$\begin{aligned} \xi := \inf \{j: Z_{j} > 1-a(k)-\eta \}, \end{aligned}$$

where we follow the convention \(\inf \emptyset = \infty \). Moreover, let the stopping time \(\rho \) be defined as

$$\begin{aligned} \rho :=\xi \land k. \end{aligned}$$

In this notation, we can write \(\widehat{\beta }_{k}\) out as follows

$$\begin{aligned} \widehat{\beta }_{k} = \mathbb{E}\tau _{\rho }+ \mathbb{E}(\tau _{k} - \tau _{\rho })^{+}. \end{aligned}$$
(49)

Before the barrier is hit, the inter-selection times are independent and distributed identically as \(\mathrm{Geom}(\eta )\), hence

$$\begin{aligned} \mathbb{E}\tau _{\rho }= \frac{\mathbb{E}\rho }{\eta }; \end{aligned}$$

moreover, by Wald’s identity

$$\begin{aligned} \mathbb{E}\rho < \frac{1-a(k)-\eta }{\eta /2}. \end{aligned}$$

Consequently,

$$\begin{aligned} \mathbb{E}\tau _{\rho }< \frac{k^{2}(1+a(k))^{2}}{2(1-a(k))}- \frac{k(1+a(k))}{1-a(k)}. \end{aligned}$$
(50)

The second expectation in (49) is bounded by the expected time of selection in case the barrier is hit. Thus,

$$\begin{aligned} \mathbb{E}(\tau _{k} - \tau _{\rho })^{+} \leq \mathbb{P}(\rho < k)\ \mathbb{E}(\tau _{k} | \rho < k). \end{aligned}$$

A rough upper-bound on \(\mathbb{E}(\tau _{k} | \rho < k)\) suffices for our purposes

$$\begin{aligned} \mathbb{E}(\tau _{k} | \rho < k)< \frac{k^{2}}{a(k)}; \end{aligned}$$
(51)

it follows from computing an expected time to select all \(k\) elements with a constant window \(a(k)/k\). To get a grip on \(\mathbb{P}(\rho < k)\) we first notice that

$$\begin{aligned} \mathbb{P}(\rho < k) = \mathbb{P} \left (Z_{k} > 1-a(k)-\eta \right ). \end{aligned}$$

Introduce a renewal sequence \((S_{j})\) with inter-arrival times distributed uniformly on \([0,\eta ]\). For \(j<\rho \), these inter-arrival times are equal in distribution to the gaps \(Z_{j+1}-Z_{j}\) between consecutive selections. In light of this, we can write

$$\begin{aligned} \mathbb{P}(\rho < k) = \mathbb{P}\left (\sum _{j=1}^{k} S_{j} \geq 1-a(k)- \eta \right ). \end{aligned}$$
(52)

Since we have

$$\begin{aligned} \mu = \mathbb{E}\left (\sum _{j=1}^{k} S_{j}\right ) = \frac{k\eta }{2}, \end{aligned}$$

we can write the probability on the right-hand side of (52) in terms of \(\mu \) as

$$\begin{aligned} \mathbb{P}\left (\sum _{j=1}^{k} S_{j} \geq 1-a(k)-\eta \right ) = \mathbb{P}\left (\sum _{j=1}^{k} S_{j} \geq (1+\epsilon )\mu \right ), \end{aligned}$$

where \(\epsilon = a(k) - 2/k\). The probability in focus can be estimated from above by applying the Chernoff-Hoeffding inequality (see, for example, [7] for details)

$$\begin{aligned} \mathbb{P} \left (\sum _{j=1}^{k} S_{j} > (1+\epsilon ) \mu \right ) \leq \exp {\left (-\frac{k\epsilon ^{2}}{2}\right )}. \end{aligned}$$
(53)

Thus, choosing \(a(k):=k^{-1/2+\varepsilon }\), \(0<\varepsilon <1/2\), we ensure that the probability in (53) has a rapidly decaying upper bound

$$\begin{aligned} \mathbb{P} \left (\sum _{j=1}^{k} S_{j} > (1+\epsilon ) \mu \right ) \leq \exp {\left (-\frac{k^{2\varepsilon }}{2}\,(1+o(1))\right )}. \end{aligned}$$
(54)

With \(a(k)\) finally fixed, from an upper bound (50) we have

$$\begin{aligned} \mathbb{E}\tau _{\rho }< \frac{k^{2}}{2} + O(k^{3/2+\varepsilon }). \end{aligned}$$
(55)

Taking together (51), (54) and (55) yields

$$\begin{aligned} \widehat{\beta }_{k} < \frac{k^{2}}{2} + O(k^{3/2+\varepsilon }). \end{aligned}$$
(56)

A sufficient lower bound on \(\widehat{\beta }_{k}\) follows by noting that every inter-selection time stochastically dominates a \(\mathrm{Geom}(\eta )\) variable, since the second-mode window \(a(k)/k\) is eventually smaller than \(\eta \); hence

$$\begin{aligned} \widehat{\beta }_{k} = \mathbb{E}\tau _{k} \geq \frac{k}{\eta } = \frac{k^{2}(1+a(k))}{2(1-a(k))}. \end{aligned}$$

Plugging in the expression for \(a(k)\) yields

$$\begin{aligned} \widehat{\beta }_{k} \geq \frac{k^{2}}{2} + O(k^{3/2+\varepsilon }). \end{aligned}$$
(57)

At last, combining (56) with (57) leads to

$$\begin{aligned} \widehat{\beta }_{k} = \frac{k^{2}}{2} + O(k^{3/2+\varepsilon }), \end{aligned}$$

and the result of Theorem 4 follows immediately. □

3.5 A Self-Similar Policy

We shall next construct a self-similar policy that comes closer to optimality. Recall that a selection policy is self-similar if it chooses the observation of size \(x\) if and only if

$$ 0< \frac{x-z}{1-z}< \widetilde{h}_{k}. $$

Let \(\widetilde{\beta }_{k}\) be the value functions of such a strategy; then, decomposing by the first arrival yields

$$ \widetilde{\beta }_{k+1} = 1 + \mathbb{E}\left ( \widetilde{\beta }_{k+1} \,\mathbf{1}(X > \widetilde{h}_{k+1}) + \frac{\widetilde{\beta }_{k}}{1-X} \,\mathbf{1}(X \leq \widetilde{h}_{k+1}) \right ). $$

Computing the integral and rearranging, we obtain

$$ \widetilde{\beta }_{k+1} \widetilde{h}_{k+1}+\widetilde{\beta }_{k} \log (1-\widetilde{h}_{k+1})=1. $$

This is an inhomogeneous linear recursion, which can be solved explicitly in terms of \(\widetilde{h}_{k}\)’s by the method of variation of constants.

Introduce a self-similar suboptimal selection policy with thresholds

$$\begin{aligned} \widetilde{h}_{k}:=\frac{2}{k+1},~~k\in {\mathbb{N}}. \end{aligned}$$
(58)

Note that \(\widetilde{h}_{k}<1\) for \(k>1\), thus \(\widetilde{\beta }_{k}<\infty \) for all \(k\). The recursion defining the value functions \(\widetilde{\beta }_{k}\) becomes

$$\begin{aligned} \widetilde{\beta }_{k+1}=a_{k} \widetilde{\beta }_{k}+b_{k}, ~~~ \widetilde{\beta }_{1}=1, \end{aligned}$$
(59)

where

$$ a_{k}= \left (\frac{k}{2}+1\right ) \log \left (1+\frac{2}{k}\right ), ~~~b_{k}=\frac{k}{2}+1. $$

The homogeneous equation (59) has the general solution of the form

$$ y_{k+1}=a_{1}\cdots a_{k} y_{1}. $$

Taking two terms in the expansion of the logarithm we get

$$ a_{k}=1+\frac{1}{k}+O\left ( \frac{1}{k^{2}}\right ), $$

which readily implies

$$ y_{k}\sim c y_{1} k, $$

for some \(c>0\). We see that \(y_{k}\) grows linearly in \(k\) and depends linearly on the initial value \(y_{1}\). Likewise, because the general solution is the sum of a particular solution and the general solution to the homogeneous equation, if we replace the initial value \(\widetilde{\beta }_{1}=1\) in the inhomogeneous equation by \(\widetilde{\beta }_{1}+\theta \), the corresponding solution will change by roughly \(\theta c k\).

On the other hand, equation (59) has the monotonicity properties needed for the asymptotic comparison method (since \(a_{k}>0\)). Checking that a test sequence satisfies the appropriate inequality for \(k>k_{0}\), we adjust the initial value at this \(k_{0}\) (resulting in an \(O(k)\) deflection) and apply the comparison in the already familiar way.

The comparison lemma adapted for recursion (59) states that if a sequence \((y_{k})\) is such that \(y_{k+1} > a_{k} y_{k} + b_{k}\), then \(\widetilde{\beta }_{k}-y_{k}< ck\), and vice versa.

Following the usual procedure, we choose test functions of the form

$$ y_{k} = d_{0} k^{2} + d_{1} k \log {k} + d_{2} \log {k}, ~~~ d_{0}, \, d_{1}, \, d_{2} \in \mathbb{R}. $$

The computation consists of three successive refinements; we omit the explicit calculations. Matching coefficients and observing that the last term of \(y_{k}\) is of order \(o(k)\), we obtain

$$\begin{aligned} \widetilde{\beta }_{k} = \frac{k^{2}}{2} + \frac{k \log {k}}{6} + O(k). \end{aligned}$$

This result, together with the expansion (48), allows us to obtain the following theorem, which is the final accord of this paper.

Theorem 5

The self-similar strategy with threshold functions (58) has the value function \(\widetilde{\beta }_{k}\) satisfying

$$ \lvert \beta _{k} - \widetilde{\beta }_{k}\rvert = O(k), ~~~ k \to \infty . $$
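Numerically, recursion (59) is immediate to iterate; combined with the earlier sketch for (27), this makes the \(O(k)\) gap of Theorem 5 directly observable (the horizon is an illustrative choice).

```python
import math

def tilde_beta(kmax):
    # Iterate (59): tilde_beta_{k+1} = a_k * tilde_beta_k + b_k, tilde_beta_1 = 1.
    bt = 1.0
    for k in range(1, kmax):
        a_k = (k / 2.0 + 1.0) * math.log(1.0 + 2.0 / k)
        b_k = k / 2.0 + 1.0
        bt = a_k * bt + b_k
    return bt

K = 1000
print(tilde_beta(K), K**2 / 2 + K * math.log(K) / 6)   # the gap grows like O(K)
```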

4 Concluding Remarks

In this paper, we studied two classical sequential selection problems initiated by Samuels and Steele [20] and Arlotto et al. [3]. To refine the asymptotic expansions of the respective value functions, we developed a method of approximating solutions to difference equations satisfying certain monotonicity criteria. This ‘asymptotic comparison’ method, as we called it, allowed us to methodically obtain finer asymptotics of the solution to the optimality equation by bounding it from above and below with suitable test functions. In fact, we believe this method to be applicable to a wider class of value function recursions. In particular, the method could be adapted to improve the value function asymptotics in the closely related online bin-packing problem [11], where only the principal term is currently known. Theorem 1 holds for the special case of the bin-packing problem with uniform weights and a unit-sized bin. This suggests a logarithmic term in the expansion for the general case too, which ties in nicely with the logarithmic regret bound derived by Arlotto and Xie [1].

Although this paper achieves high precision in estimating the mean length \(v_{n}\) in the longest increasing subsequence problem, there are several possibilities for improving the result. For example, one may prove the convergence of the \(O(1)\)-term in the expansion (4) to a constant. With this settled, it may be possible to derive a refined expansion of the variance \({\mathrm{Var}}\, L_{n}(\boldsymbol{\tau ^{*}})\), as was demonstrated by Gnedin and Seksenbayev [14] for the poissonised variant of the problem. Moreover, building a direct bridge connecting the results in the classical and the poissonised problems remains an open problem.