1 Introduction and notation

In the full information best-choice problem of Gilbert and Mosteller (1966) we deal with a discrete time stochastic process \((X_1, \ldots , X_n)\), where \(X_1, \ldots , X_n\) are i.i.d. random variables with a known continuous cumulative distribution function F. We observe the elements of \((X_1, \ldots , X_n)\) one by one, and our goal is to choose on-line the largest element of \(X_1, \ldots , X_n\), which is not known a priori. Stopping the process at a given moment means choosing the object observed at this moment, on the basis of the knowledge obtained from the observations made so far. The best-choice problem consists of finding a stopping strategy that maximizes the probability \(\mathbf P \left[ X_{\tau }=\max \left\{ X_1,\ldots ,X_n\right\} \right] \) over all stopping times \(\tau \le n\) [see Gnedin (1996)].

Let us recall basic results for this case.

Numbers \(d_k\), called decision numbers, are defined implicitly by the following equalities: \(d_0=0\) and, for \(k=1,2,\ldots \),

$$\begin{aligned} \sum _{j=1}^k \binom{k}{j}(d_k^{-1}-1)^j j^{-1}=1, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \sum _{j=1}^k (d_k^{-j}-1)j^{-1}=1. \end{aligned}$$
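For \(k\ge 2\) these equations have no closed-form solution, but since the left-hand side of the second form is strictly decreasing in \(d_k\), the decision numbers are easy to compute numerically. A minimal sketch (ours, not part of the original development; the function name is illustrative):

```python
def decision_number(k: int, tol: float = 1e-12) -> float:
    """Solve sum_{j=1}^k (d^{-j} - 1)/j = 1 for d in (0, 1) by bisection.

    The left-hand side is strictly decreasing in d, so bisection applies.
    """
    if k == 0:
        return 0.0  # d_0 = 0 by definition
    def lhs(d):
        return sum((d ** -j - 1) / j for j in range(1, k + 1))
    lo, hi = 0.25, 1.0  # the root satisfies d_k >= d_1 = 1/2, so this brackets it
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if lhs(mid) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For \(k=1\) the equation reads \(d_1^{-1}-1=1\), so \(d_1=1/2\); for \(k=2\) one obtains \(d_2=(1+\sqrt{6})/5\approx 0.6899\).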

The optimal stopping time is given by the following formula:

$$\begin{aligned} \tau ^*_n=\min \left\{ t: X_t=\max \left\{ X_1,\ldots , X_{t}\right\} ,\ 1\le t\le n \text{ and } F(X_t)\ge d_{n-t} \right\} , \end{aligned}$$

if the set under minimum is nonempty, otherwise \(\tau ^*_n=n\).

It is known [see Samuels (1991)] that the sequence \((d_k)\) is increasing in k, \(\lim \limits _{k\rightarrow \infty }d_k=1\) and \(\lim \limits _{k\rightarrow \infty }k(1-d_k)=c\), where \(c=0.804352\ldots \) is the solution of the following equation:

$$\begin{aligned} \sum ^{\infty }_{j=1}\frac{c^j}{j!j}=\int _0^c x^{-1}(e^x-1)dx=1. \end{aligned}$$

The maximal probability (using the optimal stopping time)

$$\begin{aligned} v_n=\mathbf P \left[ X_{\tau ^*_n}=\max \left\{ X_1,\ldots ,X_n\right\} \right] \end{aligned}$$

does not depend on F, is strictly decreasing in n and

$$\begin{aligned} v_{\infty }:=\lim \limits _{n\rightarrow \infty }v_n=e^{-c}+\left( e^{c}-c-1\right) \int ^{\infty }_1 x^{-1}e^{-cx}dx \end{aligned}$$
(1)

(\(v_{\infty }=0.580164\ldots \)) [see Samuels (1991)].
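Both constants can be reproduced numerically; a sketch (ours), using the standard series \(\int _1^{\infty }x^{-1}e^{-cx}dx=E_1(c)=-\gamma -\ln c-\sum _{k\ge 1}(-c)^k/(k\cdot k!)\), where \(\gamma \) is Euler's constant:

```python
from math import exp, log, factorial

EULER_GAMMA = 0.5772156649015329

def exp1(y: float, terms: int = 60) -> float:
    # E_1(y) = int_1^inf x^{-1} e^{-yx} dx = -gamma - ln y - sum_{k>=1} (-y)^k/(k*k!)
    return -EULER_GAMMA - log(y) - sum((-y) ** k / (k * factorial(k))
                                       for k in range(1, terms))

def solve_c(tol: float = 1e-12) -> float:
    # Solve sum_{j>=1} c^j/(j!*j) = 1; the left-hand side is increasing in c.
    def lhs(c):
        return sum(c ** j / (factorial(j) * j) for j in range(1, 60))
    lo, hi = 0.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if lhs(mid) < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = solve_c()                                  # c = 0.804352...
v_inf = exp(-c) + (exp(c) - c - 1) * exp1(c)   # v_inf = 0.580164...
```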

In this paper we consider a modification of the classical full information best-choice problem. Namely, we consider m consecutive classical full information searches. Having only one choice at our disposal, our aim is to choose the largest element of one of the searches. Our goal is to find a strategy that maximizes the probability of achieving this aim.

The problem considered here is related to real life situations where contests may be repeated several times, but once the choice is made in one of them, the procedure ends. Usually the selectors know how many times they can repeat the contest and, intuitively, the more contests lie ahead, the more selective they can be.

The solution of the no information version for the repeated contest problem was presented in Kuchta and Morayne (2014a).

Here is a formal description of the problem considered.

Let \(n_1,\ldots , n_m \in \mathbb {N}\) and let \(\left( X^{(m)}, X^{(m-1)}, \dots , X^{(1)}\right) \) be a sequence of m consecutive searches: for \(i=0,\ldots ,m-1,\)

$$\begin{aligned} X^{(m-i)}=\left( X_{1+\sum _{k=0}^{i-1}n_{m-k}},X_{2+\sum _{k=0}^{i-1}n_{m-k}}, \ldots , X_{\sum _{k=0}^{i}n_{m-k}} \right) , \end{aligned}$$

where \(X_{1+\sum _{k=0}^{i-1}n_{m-k}},X_{2+\sum _{k=0}^{i-1}n_{m-k}}, \ldots , X_{\sum _{k=0}^{i}n_{m-k}}\) are independent random variables with a known continuous cumulative distribution function \(F_{m-i}\). (The inverse numbering simplifies some technicalities in the proof and is adjusted to the recursion we will use.) Since the largest measurement in a sample remains the largest under all increasing transformations of its variable, we lose no generality by assuming that \(F_{m-i}\) is the standard uniform distribution for all searches: \(F_{m-i}(x)=x\) for \(0\le x \le 1\).

Let for \(1\le i\le m\)

$$\begin{aligned} Y^{(i)}\equiv \left( X^{(i)}, X^{(i-1)}, \dots , X^{(1)}\right) \end{aligned}$$

and

$$\begin{aligned} Max(Y^{(i)})=\left\{ \max \left( X^{(i)}\right) ,\ldots , \max \left( X^{(1)}\right) \right\} , \end{aligned}$$

where \(\max (X^{(i)})\) is the largest element of the search \(X^{(i)}\), for \(i=1,\ldots ,m\).

Let t be an integer, \(1\le t \le \sum ^m_{i=1}n_i\). At the time \(t=\sum ^{i-1}_{k=0}n_{m-k}+j\), where \(0\le i \le m-1\) and \(1\le j\le n_{m-i}\), the selector has seen the whole searches \(X^{(m)}, X^{(m-1)}, \dots , X^{(m-i+1)}\) and the first j values of the search \(X^{(m-i)}\). The goal of the selector is to stop at a time t maximizing the probability that \(X_t\in Max \left( Y^{(m)}\right) .\) Formally, for \(t\ge 1\), let \(\mathcal {F}_t\) be the \(\sigma \)-algebra generated by \(X_1,\ldots ,X_t\), \(\mathcal {F}_t=\sigma (X_1,\ldots ,X_t)\). Our aim is to find a stopping time \(\tau _m\) with respect to the filtration \((\mathcal {F}_t)\) maximizing the probability \(\mathbf P \left[ X_{\tau _m}\in Max\left( Y^{(m)}\right) \right] .\)

2 Optimal stopping time

Let us recall the Monotone Case Theorem [see Chow et al. (1971)], which is often a very useful tool when looking for an optimal stopping time.

Theorem 2.1

If \(\left\{ (Z_i, \mathcal {F}_i):i\le n\right\} \) is a stochastic process such that the inequality \(Z_i\ge E(Z_{i+1}|\mathcal {F}_i)\) implies the inequality \(Z_{i+1}\ge E(Z_{i+2}|\mathcal {F}_{i+1})\) for each \(i\le n-2\), then the stopping time

$$\begin{aligned} \bar{\rho }=\min \left\{ i:Z_i\ge E(Z_{i+1}|\mathcal {F}_i)\right\} \end{aligned}$$

is optimal for maximizing \(E(Z_{\tau })\) over all \((\mathcal {F}_{i})\)-stopping times \(\tau \).

We apply this theorem to determine an optimal stopping time for m searches, i.e. for \(Y^{(m)}\).

Let \(\gamma _{m-1}\) be the probability of success using an optimal stopping time for \(Y^{(m-1)}\).

For \(0\le k\le n_m-1\) we define the sequence of multiple search decision numbers \(\hat{d}_k\) in the following way:

$$\begin{aligned} \hat{d}_0=0, \end{aligned}$$

and if \(1\le k\le n_m-1\),

$$\begin{aligned} \sum ^k_{j=1}\frac{1}{j}\left( \hat{d}_k^{-j}-1\right) =1-\gamma _{m-1}, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \sum ^k_{j=1}\binom{k}{j}\frac{1}{j}\left( \hat{d}_k^{-1}-1\right) ^j=1-\gamma _{m-1}. \end{aligned}$$
(2)

Notice that the numbers \(\hat{d}_k\) are to be used only in the first search \(X^{(m)}\). For \(m=1\), \(\hat{d}_k=d_k\).
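Given \(\gamma _{m-1}\), the numbers \(\hat{d}_k\) can be computed exactly as in the single-search case, with the right-hand side 1 replaced by \(1-\gamma _{m-1}\). A bisection sketch (ours; the function name is illustrative):

```python
def multi_decision_number(k: int, gamma_prev: float, tol: float = 1e-12) -> float:
    """Solve sum_{j=1}^k (d^{-j} - 1)/j = 1 - gamma_prev for d in (0, 1).

    gamma_prev is the success probability for the remaining m-1 searches;
    gamma_prev = 0 recovers the classical decision numbers d_k.
    """
    if k == 0:
        return 0.0  # \hat{d}_0 = 0
    target = 1.0 - gamma_prev
    def lhs(d):
        return sum((d ** -j - 1) / j for j in range(1, k + 1))
    lo, hi = 0.25, 1.0  # lhs is strictly decreasing on (0, 1); the root is >= 1/2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if lhs(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With \(\gamma _{m-1}=0\) this returns \(d_k\); a larger \(\gamma _{m-1}\) raises the thresholds, reflecting that a selector with more searches ahead can afford to be more selective.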

Now let us define the following stopping times \(\tau _m\) for \(Y^{(m)}\).

For \(m=1\):

$$\begin{aligned} \tau _1=\min \left\{ t: X_t=\max \left\{ X_{1},\ldots , X_{t}\right\} ,1\le t\le n_1 \text{ and } X_t\ge d_{n_1-t} \right\} \end{aligned}$$

if the set under minimum is nonempty, otherwise \(\tau _1 = n_1\),

and, for \(m>1\):

$$\begin{aligned} \tau _m=\left\{ \begin{array}{ll} \min \lbrace t: X_t=\max \lbrace X_1,\ldots, X_t\rbrace ,\ 1\le t\le n_m,\ X_t\ge \hat{d}_{n_m-t} \rbrace &{}\text{if the set under}\\ &{}\text{minimum is nonempty,}\\ n_m+\tau _{m-1} &{}\text{otherwise.} \end{array} \right. \end{aligned}$$

In the first case of the definition above we choose from the first search \(X^{(m)}=\left( X_1,\ldots , X_{n_m}\right) \). In the second case we choose from among the remaining \(m-1\) searches: \(Y^{(m-1)}\), so the recursion is used.

Theorem 2.2

The stopping time \(\tau _m\) is optimal for \(Y^{(m)}\). When using \(\tau _m\) the probability of success equals

$$\begin{aligned} \gamma _m&=\frac{1}{n_m}\left( 1-(\hat{d}_{n_m-1})^{n_m}\right) \\&\quad +\sum _{t=1}^{n_m-1}\left( \frac{1}{n_m-t}\sum ^t_{i=1}\left( \frac{(\hat{d}_{n_m-i})^{t}}{t}-\frac{(\hat{d}_{n_m-i})^{n_m}}{n_m}\right) -\frac{(\hat{d}_{n_m-t-1})^{n_m}}{n_m}\right) \\&\quad + \frac{1}{n_m}\sum ^{n_m-1}_{t=1}(\hat{d}_t)^{n_m}\cdot \gamma _{m-1}, \end{aligned}$$
(3)

where we set \(\gamma _0=0\).

Proof

In the proof we use recursion with respect to m. If \(\tau _{m-1}\) is an optimal stopping time for the case \(Y^{(m-1)}\), then when looking for an optimal stopping time \(\tau _m\) for m searches, i.e. for \(Y^{(m)}\), the only stopping times that should be considered are the times of relative records for \(X^{(m)}\) and the optimal stopping time \(\tau _{m-1}\) in the remaining \(m-1\) searches \(Y^{(m-1)}\). Namely,

let \(\rho _1=1\) and, for \(2\le i\le \sum _{j=1}^m n_j\), if \(\rho _{i-1}\le n_m\), then

$$\begin{aligned} \rho _i=\left\{ \begin{array}{ll} \min \lbrace j> \rho _{i-1}:X_j=\max \lbrace X_1,\ldots, X_j\rbrace ,\ j\le n_m\rbrace &{}\text{if the set under}\\ &{}\text{minimum is nonempty,}\\ n_m+\tau _{m-1} &{}\text{otherwise.} \end{array} \right. \end{aligned}$$

(Note that if \(\rho _{i}\le n_m\) then \(\rho _{i}\) is the time of the i-th relative record. If \(\rho _{i-1}>n_m\) then \(\rho _i=\rho _{i-1}\).)

Let

$$\begin{aligned} W_i=\left\{ \begin{array}{ll} 1 &{} \text{if } X_{\rho _i}\in Max(Y^{(m)}),\\ 0 &{} \text{if } X_{\rho _i}\notin Max(Y^{(m)}), \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} Z_i=E(W_i| \mathcal {F}_{\rho _i}). \end{aligned}$$

Let us notice that in our notation the probability of stopping on a maximal element is equal to

$$\begin{aligned} \mathbf P [X_{\rho _i}\in Max(Y^{(m)})]=\mathbf P [W_i=1]=E(Z_{i}). \end{aligned}$$

Thus, our aim is to maximize \(E(Z_{\tau })\) over all \(\left( \mathcal {F}_{\rho _i}\right) _i\)-stopping times \({\tau }\). By Theorems 2.3, 4.1 and Proposition 5.2 of Kuchta and Morayne (2014b) the process Z satisfies the hypothesis of the Monotone Case Theorem.

Suppose we have seen the t-th element x from the first search and this element is maximal so far. Thus \(t=\rho _j\) for some j. There are still \(n_m-t\) elements to come in \(X^{(m)}\). The probability that the next \(n_m-t\) elements are not bigger than x is equal to \(x^{n_m-t}\) and this is the probability of winning if we stop now. The probability of winning in the time of the next relative record in \(X^{(m)}\) is equal to

$$\begin{aligned} \sum _{i=1}^{n_m-t} \binom{n_m-t}{i}\frac{1}{i}x^{n_m-t-i}(1-x)^i, \end{aligned}$$

where the i-th summand is the probability that exactly i elements among the remaining \(n_m-t\) ones are larger than x, and the maximum of those i elements appears first. Choosing the times when they come corresponds to the factor \(\binom{n_m-t}{i}\), the probability that exactly these elements are bigger than x is equal to \(x^{n_m-t-i}(1-x)^i\), and the probability that the largest element of this group comes before the other ones is equal to \(\frac{1}{i}\).

If there is no relative record after x till the time \(n_m\), i.e., within the first search \(X^{(m)}\), we use the optimal strategy for the remaining part which consists of \(m-1\) searches \(X^{(m-1)},\ldots , X^{(1)}\). The probability that this happens and that we succeed is equal to \(x^{n_m-t}\gamma _{m-1}.\) Thus, by the Monotone Case Theorem, we decide to stop at the t-th moment if it is the first moment of the relative record when

$$\begin{aligned} Z_j=x^{n_m-t}\ge E(Z_{j+1}|\mathcal {F}_{\rho _j})=\sum _{i=1}^{n_m-t} \binom{n_m-t}{i}\frac{1}{i}x^{n_m-t-i}(1-x)^i +x^{n_m-t}\gamma _{m-1}. \end{aligned}$$
(4)

Since

$$\begin{aligned} \sum _{i=1}^{n_m-t} \binom{n_m-t}{i}\frac{1}{i}\left( x^{-1}-1\right) ^i=\sum _{i=1}^{n_m-t}\frac{1}{i}\left( x^{-i}-1\right) , \end{aligned}$$

(4) is equivalent to

$$\begin{aligned} 1-\gamma _{m-1}\ge \sum _{i=1}^{n_m-t}\frac{1}{i}\left( x^{-i}-1\right) . \end{aligned}$$
(5)
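The combinatorial identity behind this equivalence can be spot-checked numerically; a short sketch (ours):

```python
from math import comb

def lhs(n: int, x: float) -> float:
    # sum_{i=1}^{n} C(n, i) * (1/i) * (1/x - 1)^i
    return sum(comb(n, i) * (1 / x - 1) ** i / i for i in range(1, n + 1))

def rhs(n: int, x: float) -> float:
    # sum_{i=1}^{n} (1/i) * (x^{-i} - 1)
    return sum((x ** -i - 1) / i for i in range(1, n + 1))

# the two sides agree for every n >= 1 and every 0 < x < 1
print(lhs(7, 0.6), rhs(7, 0.6))
```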

The function \(f(x)=\sum _{i=1}^{n_m-t}\frac{1}{i}\left( x^{-i}-1\right) \) is decreasing in x on \(0<x<1\), for every \(1\le t\le n_m-1\). Thus the smallest x satisfying (5) is the solution of the equation

$$\begin{aligned} 1-\gamma _{m-1}= \sum _{i=1}^{n_m-t}\frac{1}{i}\left( x^{-i}-1\right) . \end{aligned}$$

Hence \(\tau _m\) is an optimal stopping time.

The probability that we choose from \(X^{(m)}\) and we are successful is equal to

$$\begin{aligned} \mathbf P \left[ X_{\tau _m}=\max (X^{(m)})\right] =\sum _{t=1}^{n_m}p_t, \end{aligned}$$
(6)

where \(p_t\) is the probability that we stop at time t and the choice is successful, i.e. \(X_t=\max \left\{ X_1,\ldots , X_{n_m}\right\} \).

Thus

$$\begin{aligned} p_1=\int _{\hat{d}_{n_m-1}}^1x^{n_m-1}dx=\frac{1}{n_m}\left( 1-(\hat{d}_{n_m-1})^{n_m}\right) . \end{aligned}$$
(7)

Let \(1\le t\le n_m-1\). The probability that no element among the first t is chosen and that the largest element of \(X^{(m)}\) is \(X_{t+1}\) is equal to (see the explanation below)

$$\begin{aligned}&\frac{1}{n_m-t}\sum ^t_{i=1}\left( \int ^{\hat{d}_{n_m-i}}_0 x^{t-1}dx-\int ^{\hat{d}_{n_m-i}}_0 x^{n_m-1}dx\right) \nonumber \\&\quad =\frac{1}{n_m-t}\sum ^t_{i=1}\left( \frac{(\hat{d}_{n_m-i})^{t}}{t}-\frac{(\hat{d}_{n_m-i})^{n_m}}{n_m}\right) , \end{aligned}$$
(8)

where the first integral is the probability that the i-th element is below \(\hat{d}_{n_m-i}\) and it is the biggest among the first t elements, the second integral is the probability that the i-th element is below \(\hat{d}_{n_m-i}\) and it is the absolute maximum, and the factor \(\frac{1}{n_m-t}\) is the probability that the best element among the remaining \({n_m-t}\) elements is exactly the \((t+1)\)-th one.

The probability that \(X_{t+1}\) is the largest in \(X^{(m)}\) but does not pass the threshold is equal to

$$\begin{aligned} \int ^{\hat{d}_{n_m-t-1}}_0 x^{n_m-1}dx=\frac{(\hat{d}_{n_m-t-1})^{n_m}}{n_m}. \end{aligned}$$
(9)

Note that if the last event (whose probability is given by (9)) happens then also the previous one (whose probability is given by (8)) does, because the thresholds \(\hat{d}_{n_m-i}\) are decreasing with i. Thus by (8) and (9), for \(1\le t\le n_m-1\),

$$\begin{aligned} p_{t+1}= \frac{1}{n_m-t}\sum ^t_{i=1}\left( \frac{(\hat{d}_{n_m-i})^{t}}{t}-\frac{(\hat{d}_{n_m-i})^{n_m}}{n_m}\right) -\frac{(\hat{d}_{n_m-t-1})^{n_m}}{n_m}. \end{aligned}$$
(10)

We do not stop in the first search \(X^{(m)}\) if and only if \(X_t< \hat{d}_{n_m-t}\) whenever \(X_t=\max \lbrace X_1,\ldots, X_t\rbrace \), \(1\le t\le n_m-1\). Thus the probability that using \(\tau _m\) we do not stop in the first search \(X^{(m)}\) is equal to

$$\begin{aligned} \mathbf P \left[ \tau _m>n_m\right] =\sum _{t=1}^{n_m-1}\int _0^{\hat{d}_{n_m-t}}x^{n_m-1}dx=\sum _{t=1}^{n_m-1}\frac{(\hat{d}_t)^{n_m}}{n_m}. \end{aligned}$$

The above equality, (7), (10) and (6) yield (3). \(\square \)
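The recursion of Theorem 2.2 can be evaluated directly: for each search one computes the thresholds from (2) by bisection and then applies (3). A self-contained sketch (ours; names are illustrative):

```python
def threshold(k: int, gamma_prev: float, tol: float = 1e-12) -> float:
    """Solve equation (2): sum_{j=1}^k (d^{-j} - 1)/j = 1 - gamma_prev."""
    if k == 0:
        return 0.0  # \hat{d}_0 = 0
    lo, hi = 0.25, 1.0  # the left-hand side is strictly decreasing in d
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum((mid ** -j - 1) / j for j in range(1, k + 1)) > 1 - gamma_prev:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2


def success_probability(lengths) -> float:
    """gamma_m from formula (3) for searches of lengths n_1, ..., n_m,
    given in that order (the search of length n_m is played first)."""
    g = 0.0  # gamma_0 = 0
    for n in lengths:  # builds gamma_1, gamma_2, ..., gamma_m
        d = [threshold(k, g) for k in range(n)]  # \hat{d}_0, ..., \hat{d}_{n-1}
        total = (1 - d[n - 1] ** n) / n  # p_1
        for t in range(1, n):  # p_{t+1}, t = 1, ..., n-1
            inner = sum(d[n - i] ** t / t - d[n - i] ** n / n
                        for i in range(1, t + 1))
            total += inner / (n - t) - d[n - t - 1] ** n / n
        # probability of skipping this search, times the later success probability
        total += sum(d[t] ** n for t in range(1, n)) / n * g
        g = total
    return g
```

For a single search of length 2 this gives \(\gamma _1=3/4\) (the classical \(v_2\)); for two searches of length 2 it gives \(\gamma _2=0.9\), with \(\hat{d}_1=0.8\) in the first search.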

3 Asymptotics

In this section we examine the asymptotic behavior of the probability of success \(\gamma _m\) and the multiple search decision numbers \(\hat{d}_k\) as \(n_i\longrightarrow \infty \) for every \(i\in \left\{ 1,\dots , m\right\} \).

Let us define the following sequence recursively: \(r_0=0\) and, for \(i\ge 1\),

$$\begin{aligned} r_i=e^{-c_i}+\left( e^{c_i}-c_i-1\right) \int ^{\infty }_1 x^{-1}e^{-c_ix}dx, \end{aligned}$$
(11)

where \(c_i\) satisfies the following equation (\(i\ge 1\)):

$$\begin{aligned} \int ^{c_i}_0x^{-1}\left( e^x-1\right) dx=1-r_{i-1}, \end{aligned}$$
(12)

or, equivalently,

$$\begin{aligned} \sum ^{\infty }_{j=1}\frac{c_i^j}{j!j}=1-r_{i-1}. \end{aligned}$$
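The recursion (11)–(12) is easy to iterate numerically; a sketch (ours), using the standard series \(\int _1^{\infty }x^{-1}e^{-cx}dx=E_1(c)=-\gamma -\ln c-\sum _{k\ge 1}(-c)^k/(k\cdot k!)\):

```python
from math import exp, log, factorial

EULER_GAMMA = 0.5772156649015329

def exp1(y: float, terms: int = 60) -> float:
    # E_1(y) = -gamma - ln y - sum_{k>=1} (-y)^k / (k * k!)
    return -EULER_GAMMA - log(y) - sum((-y) ** k / (k * factorial(k))
                                       for k in range(1, terms))

def limits(m: int):
    """Return ([r_1, ..., r_m], [c_1, ..., c_m]) from recursion (11)-(12)."""
    rs, cs = [], []
    r_prev = 0.0                       # r_0 = 0
    for _ in range(m):
        # solve sum_j c^j/(j!*j) = 1 - r_prev, i.e. equation (12), by bisection
        lo, hi = 1e-9, 2.0
        while hi - lo > 1e-13:
            mid = (lo + hi) / 2
            if sum(mid ** j / (factorial(j) * j) for j in range(1, 60)) < 1 - r_prev:
                lo = mid
            else:
                hi = mid
        c = (lo + hi) / 2
        r_prev = exp(-c) + (exp(c) - c - 1) * exp1(c)   # equation (11)
        cs.append(c)
        rs.append(r_prev)
    return rs, cs
```

The first values are \(c_1\approx 0.8044\), \(r_1\approx 0.5802\) and \(r_2\approx 0.7443\).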

Let \(n_i^*=\min \lbrace n_1,\ldots , n_i\rbrace \) for \(i=1,\ldots , m\) and let, for \(1\le k \le n_m-1\),

$$\begin{aligned} \alpha _k=k\left( \hat{d}_k^{-1}-1\right) . \end{aligned}$$
(13)

Theorem 3.1

\(\gamma _m\longrightarrow r_m\) as \(n_m^*\longrightarrow \infty \).

Proof

We prove this theorem by induction with respect to m.

For \(m=1\) we have only one search and

$$\begin{aligned} \lim \limits _{n_{1}^*\rightarrow \infty }\gamma _{1}=v_{\infty }=r_1. \end{aligned}$$

Of course, this is the asymptotic solution (1) of the classical full information best-choice problem.

Let \(m\ge 2\) and assume that \(\lim \limits _{n_{m-1}^*\rightarrow \infty }\gamma _{m-1}=r_{m-1}\).

Note that \(\alpha _k\) is, in fact, a function of m variables: \(k,n_{m-1},\ldots ,n_1\); \(\alpha _k=\alpha _k(n_{m-1},\ldots ,n_1)\).

Claim 1

\(\alpha _k\longrightarrow c_m\) as \((k,n_m^*)\longrightarrow (\infty , \infty )\) and \(k \le n_m-1\).

Proof of Claim 1

For \(1\le k \le n_m-1\), (2) and (13) yield

$$\begin{aligned} 1-\gamma _{m-1}=\sum ^k_{j=1}\binom{k}{j}\frac{1}{j}\left( \frac{\alpha _k}{k}\right) ^j. \end{aligned}$$

Thus

$$\begin{aligned}&1-\gamma _{m-1}=\sum ^k_{j=1}\binom{k}{j}\frac{1}{k^j}\int ^{\alpha _k}_0 x^{j-1}dx=\int ^{\alpha _k}_0\frac{1}{x}\left( \left( 1+\frac{x}{k}\right) ^k-1\right) dx\\&\quad =\int _{\mathbb {R}}\mathbf {1}_{[0,\alpha _k]}\frac{1}{x}\left( \left( 1+\frac{x}{k}\right) ^k-1\right) dx. \end{aligned}$$

Since the function \(\frac{1}{x}\left( \left( 1+\frac{x}{k}\right) ^k-1\right) \) is increasing in k for \(x>0\) and, for \(k=1\), \(\int ^{\alpha _1}_0 1dx=\alpha _1=1-\gamma _{m-1}\le 1\), we always have \(0\le \alpha _k<1\) for \(1< k \le n_m-1\).

Let \(U=\limsup _{(k,n_m^*)\rightarrow (\infty ,\infty )}\alpha _k\) and \(L=\liminf _{(k,n_m^*)\rightarrow (\infty ,\infty )}\alpha _k\).

By Lebesgue’s bounded convergence theorem

$$\begin{aligned} 1-r_{m-1}=\int _{\mathbb {R}}\mathbf {1}_{[0,U]} x^{-1}(e^x-1)dx=\int ^{U}_0 x^{-1}(e^x-1)dx \end{aligned}$$

and

$$\begin{aligned} 1-r_{m-1}=\int _{\mathbb {R}}\mathbf {1}_{[0,L]} x^{-1}(e^x-1)dx=\int ^{L}_0 x^{-1}(e^x-1)dx, \end{aligned}$$

which implies \(L=U\). Hence the limit from the statement of the claim exists. In view of the definition of \(c_m\) we also obtain \(\lim _{(k,n_m^*)\rightarrow (\infty ,\infty )}\alpha _k=c_m.\) \(\square \)

Further we follow the method used by Samuels (1982) [see also Samuels (1991)].

Consider the first search \(X^{(m)}\). Let \(M_t=\max \left\{ X_1,\ldots , X_t\right\} \), \(1\le t \le n_m\), and let \(M_0=0\). Let \(\sigma _{n_m}\) be the arrival time of the largest element in \(X^{(m)}\) and \(\hat{\sigma }_{n_m}\) be the arrival time of the largest element in \(X^{(m)}\) before the time \(\sigma _{n_m}\), i.e.

$$\begin{aligned} X_{\sigma _{n_m}}=M_{n_m} \text{ and } X_{\hat{\sigma }_{n_m}}=M_{\sigma _{n_m}-1}. \end{aligned}$$

Because \(\hat{d}_{n_m-t}\) is decreasing and \(M_t\) is increasing in t for \(1\le t\le n_m\), the probability that we choose from \(X^{(m)}\) and we are successful is equal to

$$ \begin{aligned} \mathbf P \left[ X_{\tau _m}=M_{n_m}\right] = \mathbf P \left[ M_{n_m}\ge \hat{d}_{n_m-\sigma _{n_m}}\;\; \& \;\;M_{\sigma _{n_m}-1}<\hat{d}_{n_m-\hat{\sigma }_{{n_m}}}\right] \end{aligned}$$

and the probability that we do not choose from \(X^{(m)}\) is equal to

$$\begin{aligned} \mathbf P \left[ \tau _m>n_m\right] =\mathbf P \left[ M_{n_m}< \hat{d}_{n_m-\sigma _{n_m}}\right] . \end{aligned}$$

Thus,

$$\begin{aligned} \lim \limits _{n_{m}^*\rightarrow \infty }\gamma _{m}=\lim \limits _{n_{m}^*\rightarrow \infty }{} \mathbf P \left[ X_{\tau _m}=M_{n_m}\right] + \lim \limits _{n_{m}^*\rightarrow \infty }{} \mathbf P \left[ M_{n_m}< \hat{d}_{n_m-\sigma _{n_m}}\right] \cdot \lim \limits _{n_{m-1}^*\rightarrow \infty }\gamma _{m-1}. \end{aligned}$$
(14)

Claim 2

$$\begin{aligned} (i)\quad \lim \limits _{n_m^*\rightarrow \infty }\mathbf P \left[ X_{\tau _m}=M_{n_m}\right]&= e^{-c_m}+\left( e^{c_m}-c_m-1\right) \int ^{\infty }_1x^{-1}e^{-c_mx}dx\\&\quad -r_{m-1}\left( e^{-c_m}-c_m\int ^{\infty }_1x^{-1}e^{-c_mx}dx\right) \end{aligned}$$
(15)

and

$$\begin{aligned} (ii)\;\;\; \lim \limits _{n_m^*\rightarrow \infty }{} \mathbf P \left[ M_{n_m}< \hat{d}_{n_m-\sigma _{n_m}}\right] =e^{-c_m}-c_m\int ^{\infty }_1x^{-1}e^{-c_mx}dx. \end{aligned}$$
(16)

Proof of Claim 2

We change variables:

$$\begin{aligned} S_{n_m}&=n_m\left( 1-M_{n_m}\right) ; \quad T_{n_m}=\frac{\sigma _{n_m}}{n_m}; \\ \hat{S}_{n_m}&=\left( \sigma _{n_m}-1\right) \left( 1-\frac{M_{\sigma _{n_m}-1}}{M_{n_m}}\right) ; \quad \hat{T}_{n_m}=\frac{\hat{\sigma }_{n_m}}{\sigma _{n_m}-1}. \end{aligned}$$

Then \(n_m-\sigma _{n_m}=n_m\left( 1-T_{n_m}\right) \) and \(n_m-\hat{\sigma }_{n_m}=n_m\left( 1-T_{n_m}\hat{T}_{n_m}\right) +\hat{T}_{n_m}\). Thus, applying (13),

$$\begin{aligned} M_{n_m} \ge \hat{d}_{n_m-\sigma _{n_m}}\Leftrightarrow S_{n_m}\left( 1-T_{n_m}+\frac{\alpha _{n_m\left( 1-T_{n_m}\right) }}{n_m}\right) \le \alpha _{n_m\left( 1-T_{n_m}\right) } \end{aligned}$$

and

$$\begin{aligned}&M_{\sigma _{n_m}-1}<\hat{d}_{n_m-\hat{\sigma }_{{n_m}}}\Leftrightarrow \\&\alpha _{n_m\left( 1-T_{n_m}\hat{T}_{n_m}\right) +\hat{T}_{n_m}}<\left( S_{n_m}+L_{n_m}\right) \left[ 1-T_{n_m}\hat{T}_{n_m}+\frac{1}{n_m}\left( \alpha _{n_m\left( 1-T_{n_m}\hat{T}_{n_m}\right) +\hat{T}_{n_m}}+\hat{T}_{n_m}\right) \right] , \end{aligned}$$

where

$$\begin{aligned} L_{n_m}=\left( T_{n_m}-\frac{1}{n_m}\right) ^{-1} \left( 1-\frac{S_{n_m}}{n_m}\right) \hat{S}_{n_m}. \end{aligned}$$

Let us define the following events (depending on \(n_m\)):

$$\begin{aligned} A= & {} \left[ S_{n_m}\left( 1-T_{n_m}+\frac{\alpha _{n_m\left( 1-T_{n_m}\right) }}{n_m}\right) \le \alpha _{n_m\left( 1-T_{n_m}\right) }\right] ,\\ B= & {} \left[ \alpha _{K_{n_m}}<\left( S_{n_m}+L_{n_m}\right) \left( 1-T_{n_m}\hat{T}_{n_m}+\frac{1}{n_m}\left( \alpha _{K_{n_m}}+\hat{T}_{n_m}\right) \right) \right] , \end{aligned}$$

where

$$\begin{aligned} K_{n_m}=n_m\left( 1-T_{n_m}\hat{T}_{n_m}\right) +\hat{T}_{n_m}. \end{aligned}$$

Then

$$ \begin{aligned} \mathbf P \left[ X_{\tau _m}=M_{n_m}\right] =\mathbf P \left[ M_{n_m}\ge \hat{d}_{n_m-\sigma _{n_m}}\;\; \& \;\;M_{\sigma _{n_m}-1}<\hat{d}_{n_m-\hat{\sigma }_{{n_m}}}\right] =\mathbf P \left[ A\cap B\right] \end{aligned}$$

and

$$\begin{aligned} \mathbf P \left[ M_{n_m}< \hat{d}_{n_m-\sigma _{n_m}}\right] =1-\mathbf P \left[ A\right] . \end{aligned}$$

By the properties of the uniform distribution,

$$\begin{aligned} S_{n_m}\longrightarrow S, \;\; \hat{S}_{n_m}\longrightarrow \hat{S},\;\; T_{n_m}\longrightarrow T,\;\;\hat{T}_{n_m}\longrightarrow \hat{T}, \end{aligned}$$

weakly as \(n_m^*\longrightarrow \infty \), where \(S, \hat{S}, T, \hat{T}\) are mutually independent random variables, \(S\) and \(\hat{S}\) have the exponential distribution with parameter 1, and \(T\) and \(\hat{T}\) have the uniform distribution on \(\left[ 0,1\right] \).

We have

$$\begin{aligned} \lim \limits _{n_m^*\rightarrow \infty }\mathbf P \left[ X_{\tau _m}=M_{n_m}\right]&= \lim \limits _{n_m^*\rightarrow \infty }\mathbf P \left[ A\cap B\right] \\&= \mathbf P \left[ S\left( 1-T\right) \le c_m\le \left( S+\frac{\hat{S}}{T}\right) \left( 1-T\hat{T}\right) \right] \end{aligned}$$

and

$$\begin{aligned} \lim \limits _{n_m^*\rightarrow \infty }{} \mathbf P \left[ M_{n_m}< \hat{d}_{n_m-\sigma _{n_m}}\right] =\lim \limits _{n_m^*\rightarrow \infty }\left( 1-\mathbf P \left[ A\right] \right) =\mathbf P \left[ S\left( 1-T\right) > c_m\right] . \end{aligned}$$

The conditional probability

$$\begin{aligned} \mathbf P \left[ S\left( 1-T\right) \le c_m \le \left( S+\frac{\hat{S}}{T}\right) \left( 1-T\hat{T}\right) \mid S=s, T=t, \hat{T}=\hat{t}\right] \end{aligned}$$

is equal to

$$\begin{aligned} \left\{ \begin{array}{ll} 1 &{} \text{ if } c_m/\left( 1-t\hat{t}\right) -s<0 \text{ and } s \le c_m/\left( 1-t\right) , \\ &{}\\ 0 &{} \text{ if } c_m/\left( 1-t\hat{t}\right) -s<0 \text{ and } s > c_m/\left( 1-t\right) , \\ &{}\\ e^{-t\left( c_m/\left( 1-t\hat{t}\right) -s\right) } &{}\text{ otherwise }. \end{array} \right. \end{aligned}$$

Now we integrate this probability multiplied by the exponential density of S and we obtain the conditional probability for given \(T=t\) and \(\hat{T}=\hat{t}\):

$$\begin{aligned}&\mathbf P \left[ S\left( 1-T\right) \le c_m \le \left( S+\frac{\hat{S}}{T}\right) \left( 1-T\hat{T}\right) \mid T=t, \hat{T}=\hat{t}\right] \\&\quad =\int _0^{c_m/\left( 1-t\hat{t}\right) }e^{-t c_m/\left( 1-t\hat{t}\right) }e^{-s\left( 1-t\right) }ds+\int ^{c_m/\left( 1-t\right) }_{c_m/\left( 1-t\hat{t}\right) }e^{-s}ds\\&\quad =(1-t)^{-1}e^{-t c_m/\left( 1-t\hat{t}\right) }\left( 1-e^{-c_m\left( 1-t\right) /\left( 1-t\hat{t}\right) }\right) +e^{-c_m/(1-t\hat{t})}-e^{-c_m/\left( 1-t\right) }. \end{aligned}$$

In the next step we integrate this expression over the unit square:

$$\begin{aligned}&\int ^1_0\int ^1_0 \mathbf P \left[ S\left( 1-T\right) \le c_m \le \left( S+\frac{\hat{S}}{T}\right) \left( 1-T\hat{T}\right) \mid T=t, \hat{T}=\hat{t}\right] dt d\hat{t}\nonumber \\&=\int ^1_0\int ^1_0\left( (1-t)^{-1}e^{-t c_m/\left( 1-t\hat{t}\right) }\left( 1-e^{-c_m\left( 1-t\right) /\left( 1-t\hat{t}\right) }\right) +e^{-c_m/(1-t\hat{t})}\right) dtd\hat{t}\nonumber \\&\quad -\int ^1_0\int ^1_0 e^{-c_m/\left( 1-t\right) }dtd\hat{t}. \end{aligned}$$
(17)

It is easy to see that

$$\begin{aligned} \int ^1_0\int ^1_0e^{-c_m/\left( 1-t\right) }dtd\hat{t} =e^{-c_m}-c_m\int ^{\infty }_1x^{-1}e^{-c_mx}dx. \end{aligned}$$
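Indeed, substituting \(x=1/(1-t)\) gives \(\int _0^1 e^{-c_m/(1-t)}dt=\int _1^{\infty }x^{-2}e^{-c_mx}dx\), and integration by parts yields the right-hand side. A numerical spot-check (ours; the value of c below is an arbitrary placeholder for \(c_m\)):

```python
from math import exp, log, factorial

EULER_GAMMA = 0.5772156649015329

def exp1(y: float, terms: int = 60) -> float:
    # E_1(y) = int_1^inf x^{-1} e^{-yx} dx (standard series form)
    return -EULER_GAMMA - log(y) - sum((-y) ** k / (k * factorial(k))
                                       for k in range(1, terms))

c = 0.8  # any c > 0 works; in the proof this would be c_m
# midpoint rule for int_1^inf x^{-2} e^{-cx} dx, truncated at x = 60
N, upper = 100000, 60.0
h = (upper - 1.0) / N
numeric = sum(exp(-c * (1 + (i + 0.5) * h)) / (1 + (i + 0.5) * h) ** 2
              for i in range(N)) * h
closed = exp(-c) - c * exp1(c)  # right-hand side of the identity above
print(numeric, closed)          # the two values agree to several decimal places
```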

Making the following change of variables in the first integral of (17)

$$\begin{aligned} u=\frac{1-t}{1-t\hat{t}}, v=\frac{1}{1-t\hat{t}}, \end{aligned}$$

we obtain

$$\begin{aligned}&\int ^1_0\int ^1_0\left( (1-t)^{-1}e^{-c_mt/\left( 1-t\hat{t}\right) }\left( 1-e^{-c_m\left( 1-t\right) /\left( 1-t\hat{t}\right) }\right) +e^{-c_m/\left( 1-t\hat{t}\right) }\right) dtd\hat{t}\\&=\int ^{\infty }_1\int ^1_0v^{-2}\left( v-u\right) ^{-1}e^{-c_m \left( v-u\right) }dudv\\&\qquad +\left( e^{-c_m}-c_m\int ^{\infty }_1x^{-1}e^{-c_mx}dx\right) \int ^{c_m}_0x^{-1}\left( e^x-1\right) dx. \end{aligned}$$

We set \(w=v-u\). Interchanging the order of integration we obtain

$$\begin{aligned} \int ^{\infty }_1\int ^1_0v^{-2}\left( v-u\right) ^{-1}e^{-c_m\left( v-u\right) }dudv=e^{-c_m}+\left( e^{c_m}-c_m-1\right) \int ^{\infty }_1x^{-1}e^{-c_mx}dx. \end{aligned}$$

By (12) we obtain (15).

The conditional probability of \(\left\{ S\ge c_m/\left( 1-t\right) \right\} \) for \(T=t\) and \(\hat{T}=\hat{t}\) is equal to

$$\begin{aligned} \mathbf P \left[ S\ge c_m/\left( 1-t\right) \mid T=t, \hat{T}=\hat{t}\right] = \int ^{\infty }_{c_m/\left( 1-t\right) }e^{-s}ds=e^{-c_m/\left( 1-t\right) }. \end{aligned}$$

Analogously,

$$\begin{aligned} \lim \limits _{n_m^*\rightarrow \infty }{} \mathbf P \left[ M_{n_m}< \hat{d}_{n_m-\sigma _{n_m}}\right] = \int ^1_0\int ^1_0e^{-c_m/\left( 1-t\right) }dtd\hat{t}=e^{-c_m}-c_m\int ^{\infty }_1x^{-1}e^{-c_mx}dx. \end{aligned}$$

This completes the proof of the claim. \(\square \)

By (14), (15) and (16) and the induction hypothesis, we get

$$\begin{aligned} \lim \limits _{n_m^*\rightarrow \infty }\gamma _m= e^{-c_m}+\left( e^{c_m}-c_m-1\right) \int ^{\infty }_1x^{-1}e^{-c_mx}dx=r_m. \end{aligned}$$
(18)

This completes the proof. \(\square \)

The following proposition describes the asymptotic behavior of the sequences \((c_m)\) and \((r_m)\) as \(m\longrightarrow \infty \).

Proposition 3.2

The sequence \((c_m)\) is decreasing and \(\lim \limits _{m\rightarrow \infty }c_m=0\). The sequence \((r_m)\) is increasing and \(\lim \limits _{m\rightarrow \infty }r_m = 1\).

Proof

The function

$$\begin{aligned} g(y)=e^{-y}+\left( e^{y}-y-1\right) \int ^{\infty }_y x^{-1}e^{-x}dx,\quad 0<y<1, \end{aligned}$$

is decreasing in y (its derivative is negative: \(\frac{dg}{dy}=(e^y-1)\left( \int ^{\infty }_ye^{-x}x^{-1}dx -e^{-y}y^{-1}\right) <0\)). By (11), (12) and \(r_0=0\), we have \(c_1\approx 0.804\). It is easy to see that the sequence \((r_m)\) is increasing, the sequence \((c_m)\) is decreasing, and both are bounded by 0 and 1. Thus both sequences are convergent. Let \(\beta =\lim \limits _{m\rightarrow \infty }c_m\). By (11) and (12) the sequence \((c_m)\) satisfies the following recurrence

$$\begin{aligned} \int ^{c_{m+1}}_0x^{-1}\left( e^{x}-1\right) dx=1-e^{-c_m}-\left( e^{c_m}-c_m-1\right) \int ^{\infty }_1x^{-1}e^{-c_mx}dx. \end{aligned}$$

Thus, \(\beta \) is the solution of the following equation

$$\begin{aligned} \int ^{\beta }_0x^{-1}\left( e^{x}-1\right) dx=1-e^{-{\beta }}-\left( e^{\beta }-\beta -1\right) \int ^{\infty }_1x^{-1}e^{-{\beta }x}dx. \end{aligned}$$

It is easy to check that the only \(\beta \) satisfying this equation is \(\beta =0\).

Table 1 The values of \(r_m\), \(c_m\) and \(a_m\) for \(m=1,\ldots ,10\)

By (18), it is now easy to check that \(\lim \limits _{m\rightarrow \infty }r_m=1\).

This completes the proof. \(\square \)

Approximations of the first ten elements of the sequences \((r_m)\) and \((c_m)\) are given in Table 1, see also Fig. 1. For comparison, the first column gives the corresponding probability of success \((a_m)\) for the iterated no information version (the classical secretary problem) [see also Kuchta and Morayne (2014a)].

Fig. 1 The graph of \(r_i\), \(i=1,\ldots ,10\)

Table 2 Approximations of the probabilities of success for two searches of the same length: \(n=2,3,\ldots ,12\) and \(n=20\)

The following proposition describes the asymptotic behavior of the decision numbers \(\hat{d}_k\) when \(k\longrightarrow \infty \) and \(n_m^*\longrightarrow \infty \).

Proposition 3.3

\(\hat{d}_k \longrightarrow 1\) as \((k, n_m^*)\longrightarrow (\infty ,\infty )\) and \(k \le n_m-1\).

Proof

By (13) we have \(\hat{d}_k= \left( \frac{\alpha _k}{k}+1\right) ^{-1}\). By Claim 1, \(\alpha _k\longrightarrow c_m\) as \((k, n_m^*)\longrightarrow (\infty ,\infty )\) with \(k \le n_m-1\). Since \(0<c_m<1\), it follows that \(\hat{d}_k \longrightarrow 1\) as \((k, n_m^*)\longrightarrow (\infty ,\infty )\) with \(k \le n_m-1\). \(\square \)

Example

Let us consider the case of two searches of the same length n. Applying the optimal strategy with the multiple search decision numbers \(\hat{d}_k\) determined by (2), for \(n=2,3,4,5,6,7,8,9,10,11,12,20\) we obtain the approximations of \(\gamma _2\) given in Table 2 (see Fig. 2). (\(\gamma _2\longrightarrow 0.7443\ldots \) as \(n\longrightarrow \infty \).)

Fig. 2 The graph of \(\gamma _2\) for two searches of the same length: \(n=2,3,\ldots ,12\) and \(n=20\)