1 Introduction

This work focuses on various stochastic recursions of autoregressive type, such as:

$$\begin{aligned} W_{n+1}= & {} [V_{n}W_{n}+B_{n}-A_{n}]^{+},\,n=0,1,\ldots , \end{aligned}$$
(1)
$$\begin{aligned} W_{n+1}= & {} \left\{ \begin{array}{ll} \left[ V^{(1)}_{n}W_{n}+B_{n}-A^{(1)}_{n}\right] ^{+},&{}B_{n}\le T_{n},\\ \left[ V^{(2)}_{n}W_{n}+T_{n}-A^{(2)}_{n}\right] ^{+},&{} B_{n}>T_{n}, \end{array}\right. \end{aligned}$$
(2)
$$\begin{aligned} W_{n+1}= & {} \left\{ \begin{array}{lll} \left[ W_{n}+B_{n}-A^{(0)}_{n}\right] ^{+},&{}\text { with probability (w.p.) }p,\\ \left\{ \begin{array}{ll} \left[ V^{(1)}_{n}W_{n}+\widehat{B}_{n}-A^{(1)}_{n}\right] ^{+},&{}\widehat{B}_{n}\le T_{n},\\ \left[ V^{(2)}_{n}W_{n}+T_{n}-A^{(2)}_{n}\right] ^{+},&{} \widehat{B}_{n}>T_{n}, \end{array}\right. ,&\text { with probability (w.p.) }q:=1-p. \end{array}\right. \end{aligned}$$
(3)

Note that in (3), with probability \(q:=1-p\), we have \(W_{n+1}=[V^{(1)}_{n}W_{n}+\widehat{B}_{n}-A^{(1)}_{n}]^{+}\) when \(\widehat{B}_{n}\le T_{n}\), and \(W_{n+1}=[V^{(2)}_{n}W_{n}+T_{n}-A^{(2)}_{n}]^{+}\) when \(\widehat{B}_{n}>T_{n}\). Moreover, we also focus on:

$$\begin{aligned} W_{n+1}=\left\{ \begin{array}{lll} \left[ V^{(0)}_{n}W_{n}+B_{n}-A_{n}\right] ^{+}, &{}\text { w.p. }p_{1},\,V^{(0)}_{n}\in \{a_{1},\ldots ,a_{M}\}, \\ &{}a_{k}\in (0,1),k=1,\ldots ,M,n\in \mathbb {N}_{0},\\ \left[ V^{(1)}_{n}W_{n}+B_{n}-A_{n}\right] ^{+},&{}\text { w.p. }p_{2},\,V^{(1)}_{n}\in [0,1),\,n\in \mathbb {N}_{0},\\ \left[ V^{(2)}_{n}W_{n}+B_{n}-A_{n}\right] ^{+},&{}\text { w.p. }1-p_{1}-p_{2},\,\,V^{(2)}_{n}<0,\,n\in \mathbb {N}_{0}, \end{array}\right. \end{aligned}$$
(4)

with \(\mathbb {N}_{0}:=\mathbb {N}\cup \{0\}\). Finally, we also consider the integer-valued counterpart,

$$\begin{aligned} X_{n+1}=\left\{ \begin{array}{ll} \sum _{k=1}^{X_{n}}U_{k,n}+Z_{n}-Q_{n+1},&{}X_{n}>0, \\ Y_{n}-\tilde{Q}_{n+1},&{}X_{n}=0, \end{array}\right. \end{aligned}$$
(5)

and a two-dimensional generalization of it, where \(x^{+} = \max (0, x)\), \(x^{-} = \min (0, x)\). Moreover, \(\{V_{n}\}_{n\in \mathbb {N}_{0}}\) and \(\{B_{n}-A_{n}\}_{n\in \mathbb {N}_{0}}\) (similarly \(\{\widehat{B}_{n}-A_{n}^{(1)}\}_{n\in \mathbb {N}_{0}}\), \(\{T_{n}-A_{n}^{(2)}\}_{n\in \mathbb {N}_{0}}\)) are sequences of independent and identically distributed (i.i.d.) random variables. For the recursion (2), the thresholds \(T_n\) are assumed to be i.i.d. random variables with cumulative distribution function (cdf) \(T(\cdot )\) and Laplace–Stieltjes transform (LST) \(\tau (\cdot )\). Moreover, \(B_{n}\) are i.i.d. random variables with cdf \(F_{B} (\cdot )\) and LST \(\phi _{B}(\cdot )\).

The ultimate goal of this work is to investigate classes of reflected autoregressive processes described by recursions of the type given above, in which various independence assumptions on \(\{B_{n}\}_{n\in \mathbb {N}_{0}}\), \(\{A_{n}\}_{n\in \mathbb {N}_{0}}\) are lifted and for which a detailed exact analysis can also be provided.

The stochastic recursion (1) with \(V_{n}=a\) a.s. (almost surely) for every n, where \(a\in (0,1)\), and with \(\{B_{n}\}_{n\in \mathbb {N}_{0}}\), \(\{A_{n}\}_{n\in \mathbb {N}_{0}}\) i.i.d. sequences that are also independent of \(\{W_{n}\}_{n\in \mathbb {N}_{0}}\), has been treated in [8]; i.e. the case where \(W_{n+1}=[aW_{n}+B_{n}-A_{n}]^{+}\), \(n=0,1,\ldots \), with \(a\in (0,1)\). The case \(a=1\) corresponds to the classical Lindley recursion describing the waiting time of the classical G/G/1 queue [2, 11], while the case \(a=-1\) is covered in [15]. Further progress has been made in [6], where additional models described by recursion (1) have been investigated. The work in [6, Section 3] is the closest to our case: the authors investigated a recursion where V is either a positive constant with probability p, or a random variable taking negative values with probability \(1-p\). The fact that V is negative simplified the analysis considerably.
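For intuition, the recursion \(W_{n+1}=[aW_{n}+B_{n}-A_{n}]^{+}\) is straightforward to simulate. The sketch below estimates the stationary mean by long-run averaging; the exponential choices for \(B_{n}\) and \(A_{n}\) and all parameter values are illustrative assumptions, not taken from [8].

```python
import random

def simulate_mean(a=0.5, mu=1.0, lam=0.8, n=200_000, burn=1_000, seed=1):
    """Estimate E(W) for W_{n+1} = [a*W_n + B_n - A_n]^+,
    with B_n ~ exp(mu), A_n ~ exp(lam) (illustrative choices)."""
    rng = random.Random(seed)
    w, total, count = 0.0, 0.0, 0
    for k in range(n):
        b = rng.expovariate(mu)    # service time B_n
        t = rng.expovariate(lam)   # interarrival time A_n
        w = max(0.0, a * w + b - t)
        if k >= burn:              # discard burn-in before averaging
            total += w
            count += 1
    return total / count

mean_w = simulate_mean()
```

Since \(a\in (0,1)\) contracts the workload at every step, the chain mixes quickly and a short burn-in suffices.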

In [5], the authors considered the case where \(V_{n}W_{n}\) in (1) was replaced by \(S(W_{n})\), where \(\{S(t)\}_{t\ge 0}\) is a Lévy subordinator (recovering also the case in [8], where \(S(t)=at\)). Note that in [5, 6, 8] the sequences \(\{B_{n}\}_{n\in \mathbb {N}_{0}}\), \(\{A_{n}\}_{n\in \mathbb {N}_{0}}\) are assumed to be independent. Recently, in [7], the authors have considered Lindley-type recursions that arise in queueing and insurance risk models, where the sequences \(\{B_{n}\}_{n\in \mathbb {N}_{0}}\), \(\{A_{n}\}_{n\in \mathbb {N}_{0}}\) obey a semi-linear dependence. These recursions can also be viewed as being of autoregressive type. That work is the closest to ours. Moreover, in [1], the authors developed a method to study functional equations that arise in a wide range of queueing, autoregressive and branching processes. Finally, the author in [12] considered a generalized version of the model in [6], by assuming \(V_{n}\) to take values in \((-\infty ,1]\). In particular, in [12], the author investigated the recursion (4) for \(M=1\), \(a_{1}=1\).

The main contribution of this paper is to investigate the transient as well as the stationary behaviour of a wide range of autoregressive processes described in (1)-(5), by lifting various independence assumptions on the sequences \(\{B_{n}\}_{n\in \mathbb {N}_{0}}\), \(\{A_{n}\}_{n\in \mathbb {N}_{0}}\). This is accomplished by using Liouville’s theorem [14, Theorem 10.52], and by stating and solving a Wiener–Hopf boundary value problem [10], or by solving an integral equation, depending on the nature of \(\{V_{n}\}_{n\in \mathbb {N}_{0}}\). We point out that, to the best of our knowledge, autoregressive recursions of the form (2)-(5) have not been considered in the literature so far. We also investigate the stationary analysis of \(\{X_{n}\}_{n\in \mathbb {N}_{0}}\) in (5), which represents a novel retrial queueing model. An extension to a two-dimensional case that describes a retrial queue with priorities is also considered.

2 M/G/1-type autoregressive queues with interarrival times randomly proportional to service/system times

In the following, we cope with some autoregressive M/G/1-type queueing systems where the interarrival time between the nth and the \((n+1)\)th job, say \(A_{n}\), depends on the service time of the nth job, or on the system time after the arrival of the nth job.

2.1 Interarrival times randomly proportional to service times

Consider the following variant of the standard autoregressive M/G/1 queue: When the service time equals \(x\ge 0\), then the next interarrival time equals \(\beta _{i}x\) (with probability \(p_{i}\), \(i=1,\ldots ,N+M\)) increased by an independent additive delay \(J_{n}\). In the following, we consider the recursion (1), where \(P(V_{n}=a)=1\), \(a\in (0,1)\).

Let \(W_{n}\) be the workload in the queue just before the nth customer arrival. The interarrival time between the nth and the \((n+1)\)th customer, say \(A_{n}\), satisfies \(A_{n}=G_{n}B_{n}+J_{n}\), where \(B_{n}\) is the service time of the nth customer and \(J_{n}\) an additive delay or random jitter. The random variable \(G_n\) has finite support. Let \(\beta _i\) denote its ith value and let \(p_i = P(G_{n}=\beta _{i})\) denote the corresponding probability, \(i=1,\ldots ,N+M\) (\(M,N\ge 1\)), with \(\sum _{i=1}^{N+M}p_{i}=1\). We further assume that the service times and jitter are exponentially distributed: \(B_n \sim \exp (\mu )\) and \(J_n \sim \exp (\delta )\). Extensions to the case where \(J_{n}\) has a rational transform will be also discussed. Thus, the sequence \(\{W_{n}\}_{n\in \mathbb {N}_{0}}\) obeys the following recursion:

$$\begin{aligned} W_{n+1}=[aW_{n}+(1-G_{n})B_{n}-J_{n}]^{+}, \end{aligned}$$
(6)

where \(a\in (0,1)\). Without loss of generality and in order to avoid trivial solutions, assume that \(1<\beta _{1}<\beta _{2}<\ldots <\beta _{N}\), and \(\beta _{N+1}<\beta _{N+2}<\ldots<\beta _{N+M}<1\).

2.1.1 Transient analysis

We first focus on the transient distribution, and following the lines in [8], let for \(|r|<1\),

$$\begin{aligned} Z_{w}(r,s)=\sum _{n=0}^{\infty }r^{n}E(e^{-sW_{n+1}}|W_{0}=w)\,,\quad U^{-}_{w}(r,s)=\sum _{n=0}^{\infty }r^{n}E(e^{-sU^{-}_{n}}|W_{0}=w), \end{aligned}$$

where \(U^{-}_{n}:=[aW_{n}+(1-G_{n})B_{n}-J_{n}]^{-}\). Then, using the identity \(1+e^{x}=e^{[x]^{+}}+e^{[x]^{-}}\), (6) leads to

$$\begin{aligned}&E(e^{-sW_{n+1}}|W_{0}=w)\\&\quad = E(e^{-s(aW_{n}+(1-G_{n})B_{n}-J_{n})}|W_{0}=w)+1-E(e^{-sU^{-}_{n}}|W_{0}=w) \\&\quad = E(e^{-saW_{n}}|W_{0}=w)E(e^{sJ_{n}})\sum _{i=1}^{N+M}p_{i}E(e^{-s(1-\beta _{i})B_{n}})+1-E(e^{-sU^{-}_{n}}|W_{0}=w)\\&\quad = E(e^{-saW_{n}}|W_{0}=w)\frac{\delta }{\delta -s}\sum _{i=1}^{N+M}p_{i}\phi _{B}(\bar{\beta }_{i}s)+1-E(e^{-sU^{-}_{n}}|W_{0}=w), \end{aligned}$$

where \(\bar{\beta }_{i}=1-\beta _{i}\), \(i=1,\ldots ,N+M\), and \(\phi _{B}(s)\) is the LST of B. Multiplying by \(r^{n}\) and summing from \(n=0\) to infinity yields

$$\begin{aligned} Z_{w}(r,s)-e^{-sw}=r\frac{\delta }{\delta -s}Z_{w}(r,as)\sum _{i=1}^{N+M}p_{i}\phi _{B}(\bar{\beta }_{i}s)+\frac{r}{1-r}-rU^{-}_{w}(r,s). \end{aligned}$$
(7)

Assume hereon that \(B\sim \exp (\mu )\). Then, \(\phi _{B}(\bar{\beta }_{i}s)=\frac{\mu }{\mu +\bar{\beta }_{i}s}=\frac{1}{1-\gamma _{i}s}\), where \(\gamma _{i}=\frac{\beta _{i}-1}{\mu }\), \(i=1,\ldots ,N+M\). Simple calculations imply that

$$\begin{aligned} \sum _{i=1}^{N+M}\frac{p_{i}}{1-\gamma _{i}s}=\frac{\sum _{i=1}^{N+M}p_{i}\prod _{j\ne i}(1-\gamma _{j}s)}{\prod _{i=1}^{N+M}(1-\gamma _{i}s)}:=\frac{f(s)}{g(s)}. \end{aligned}$$

Note that \(g(s)=0\) has \(N+M\) distinct real roots \(\gamma _{i}^{-1}\), \(i=1,\ldots ,N+M,\) of which N are positive and M are negative. In particular, let \(s_{j}^{+}=\gamma _{j}^{-1}=\frac{\mu }{\beta _{j}-1}\), \(j=1,\ldots ,N,\) be the positive roots, and \(s_{k}^{-}=\gamma _{k}^{-1}=\frac{\mu }{\beta _{k}-1}\), \(k=N+1,\ldots ,N+M,\) the negative roots of \(g(s)=0\). Note that

$$\begin{aligned} g(s)= & {} \prod _{i=1}^{N+M}(1-\gamma _{i}s)=\prod _{i=1}^{N+M}\gamma _{i}(\gamma _{i}^{-1}-s)\\= & {} \prod _{i=1}^{N+M}(-\gamma _{i})\prod _{j=1}^{N}(s-s_{j}^{+})\prod _{k=N+1}^{N+M}(s-s_{k}^{-}):=g^{+}(s)g^{-}(s), \end{aligned}$$

where \(g^{+}(s):=\prod _{j=1}^{N}(s-s_{j}^{+})\), \(g^{-}(s):=\prod _{i=1}^{N+M}(-\gamma _{i})\prod _{k=N+1}^{N+M}(s-s_{k}^{-})\). Now (7) becomes for \(Re(s)=0\):
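The decomposition above is easy to check numerically. The snippet below, with hypothetical values \(N=M=1\), \(\beta _{1}=2\), \(\beta _{2}=0.5\), verifies both the partial-fraction identity \(\sum _{i}p_{i}/(1-\gamma _{i}s)=f(s)/g(s)\) and the factorization \(g(s)=g^{+}(s)g^{-}(s)\):

```python
from math import isclose

mu = 1.0
betas = [2.0, 0.5]   # beta_1 > 1 gives the positive root, beta_2 < 1 the negative one
ps = [0.4, 0.6]      # hypothetical mixing probabilities
gammas = [(b - 1.0) / mu for b in betas]

def mixture(s):
    # left-hand side: sum_i p_i / (1 - gamma_i s)
    return sum(p / (1.0 - gm * s) for p, gm in zip(ps, gammas))

def f(s):
    # numerator: sum_i p_i * prod_{j != i} (1 - gamma_j s)
    total = 0.0
    for i, p in enumerate(ps):
        term = p
        for j, gm in enumerate(gammas):
            if j != i:
                term *= 1.0 - gm * s
        total += term
    return total

def g(s):
    out = 1.0
    for gm in gammas:
        out *= 1.0 - gm * s
    return out

# root split for N = M = 1: s^+ = 1/gamma_1 > 0, s^- = 1/gamma_2 < 0
g_plus = lambda s: s - 1.0 / gammas[0]
g_minus = lambda s: (-gammas[0]) * (-gammas[1]) * (s - 1.0 / gammas[1])

s = 0.3
assert isclose(mixture(s), f(s) / g(s))
assert isclose(g(s), g_plus(s) * g_minus(s))
```

The hard-coded `g_plus`/`g_minus` are only valid for this two-point example; for general N, M one collects the positive and negative roots as in the display above.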

$$\begin{aligned}&(\delta -s)g^{+}(s)[ Z_{w}(r,s)-e^{-sw}]\nonumber \\&\quad -r\delta \frac{f(s)}{g^{-}(s)}Z_{w}(r,as)=(\delta -s)g^{+}(s)\left[ \frac{r}{1-r}-rU^{-}_{w}(r,s)\right] . \end{aligned}$$
(8)

Now we make the following observations:

  • The left-hand side is analytic in \(Re(s)>0\), and continuous in \(Re(s)\ge 0\).

  • The right-hand side is analytic in \(Re(s)<0\), and continuous in \(Re(s)\le 0\).

  • \(Z_{w}(r,s)\) (resp. \(U^{-}_{w}(r,s)\)) is, for \(Re(s)\ge 0\) (resp. \(Re(s)\le 0\)), bounded by \((1-r)^{-1}\).

Thus, the two sides of (8) together define an entire function. Since both sides are \(O(s^{N+1})\) for large s in their respective half-planes, the generalized Liouville theorem [14, Theorem 10.52] implies that this entire function is a polynomial of degree at most \(N+1\) in s, i.e.

$$\begin{aligned}&(\delta -s)g^{+}(s)[ Z_{w}(r,s)-e^{-sw}]-r\delta \frac{f(s)}{g^{-}(s)}Z_{w}(r,as)\nonumber \\&\quad =\sum _{i=0}^{N+1}s^{i}C_{i,w}(r),\quad Re(s)\ge 0. \end{aligned}$$
(9)

Note that for \(s=0\) (9) yields

$$\begin{aligned} \delta \prod _{i=1}^{N}(-s_{i}^{+})(\frac{1}{1-r}-1)-r\delta \frac{f(0)}{g^{-}(0)}\frac{1}{1-r}=C_{0,w}(r). \end{aligned}$$

Having in mind that \(\frac{f(0)}{g(0)}=1\), so that \(\frac{f(0)}{g^{-}(0)}=\prod _{j=1}^{N}(-s_{j}^{+})\), we easily realize that \(C_{0,w}(r)=0\). Moreover, setting \(s=\delta \), and \(s=s^{+}_{j}\), \(j=1,\ldots ,N,\) we obtain the following system of equations for the remaining unknown coefficients \(C_{i,w}(r)\), \(i=1,\ldots ,N+1\):

$$\begin{aligned} -r\delta \frac{f(s^{+}_{j})}{g^{-}(s^{+}_{j})}Z_{w}(r,as^{+}_{j})= & {} \sum _{i=1}^{N+1}(s^{+}_{j})^{i}C_{i,w}(r),\,j=1,\ldots ,N, \nonumber \\ -r \delta \frac{f(\delta )}{g^{-}(\delta )}Z_{w}(r,a\delta )= & {} \sum _{i=1}^{N+1}\delta ^{i}C_{i,w}(r). \end{aligned}$$
(10)

It remains to obtain \(Z_{w}(r,as^{+}_{j})\), \(j=1,\ldots ,N\), and \(Z_{w}(r,a\delta )\). These terms are derived as follows: Expression (9) is now written as

$$\begin{aligned} Z_{w}(r,s)=K_{w}(r,s)Z_{w}(r,as)+L_{w}(r,s), \end{aligned}$$
(11)

where

$$\begin{aligned} K_{w}(r,s):= r\frac{\delta }{\delta -s}\frac{f(s)}{g(s)},&L_{w}(r,s):=\frac{\sum _{i=1}^{N+1}s^{i}C_{i,w}(r)}{(\delta -s)g^{+}(s)}+e^{-sw}. \end{aligned}$$

Iterating (11) yields

$$\begin{aligned} Z_{w}(r,s)=\sum _{n=0}^{\infty }L_{w}(r,a^{n}s)\prod _{m=0}^{n-1}K_{w}(r,a^{m}s), \end{aligned}$$
(12)

with the convention that an empty product is defined to be 1. Setting \(s=a\delta \), and \(s=as^{+}_{j}\) in (12), we obtain expressions for \(Z_{w}(r,a\delta )\), \(Z_{w}(r,as^{+}_{j})\), \(j=1,\ldots ,N\), respectively. Substituting back in (10), we obtain a system of \(N+1\) equations for the unknown coefficients \(C_{i,w}(r)\), \(i=1,\ldots ,N+1\).

Remark 1

It is easily realized from (12) that \(Z_{w}(r,s)\) appears to have singularities at \(s=\delta /a^{m}\) and \(s=s_{j}^{+}/a^{m}\), \(j=1,\ldots ,N\), \(m=0,1,\ldots \). We can show that these singularities are removable. Let us show this for \(s=\delta \) and \(s=s_{j}^{+}\). We write (12) as follows to isolate the singularities at \(s=\delta \) and \(s=s_{j}^{+}\):

$$\begin{aligned} Z_{w}(r,s)=&\frac{\sum _{i=1}^{N+1}s^{i}C_{i,w}(r)}{(\delta -s)g^{+}(s)}+e^{-sw} +\sum _{n=1}^{\infty }(\frac{\sum _{i=1}^{N+1}(a^{n}s)^{i}C_{i,w}(r)}{(\delta -a^{n}s)g^{+}(a^{n}s)}\\&+\, e^{-sa^{n}w})r^{n}\frac{\delta }{\delta -s}\frac{f(s)}{g(s)}\prod _{m=1}^{n-1}\frac{\delta }{\delta -a^{m}s}\frac{f(a^{m}s)}{g(a^{m}s)}\\ =&e^{-sw}+\frac{1}{(\delta -s)g^{+}(s)}\left[ \sum _{i=1}^{N+1}s^{i}C_{i,w}(r)+r\delta \frac{f(s)}{g^{-}(s)}\sum _{n=1}^{\infty }(\frac{\sum _{i=1}^{N+1}(a^{n}s)^{i}C_{i,w}(r)}{(\delta -a^{n}s)g^{+}(a^{n}s)}\right. \\&+\, \left. e^{-sa^{n}w})r^{n-1}\prod _{m=1}^{n-1}\frac{\delta }{\delta -a^{m}s}\frac{f(a^{m}s)}{g(a^{m}s)}\right] \\ =&e^{-sw}+\frac{1}{(\delta -s)g^{+}(s)}\left[ \sum _{i=1}^{N+1}s^{i}C_{i,w}(r)+r\delta \frac{f(s)}{g^{-}(s)}Z_{w}(r,as)\right] . \end{aligned}$$

It is easily realized by using (10) that the term inside the brackets in the last line vanishes for \(s=\delta \), and \(s=s_{j}^{+}\), confirming that \(s=\delta \), and \(s=s_{j}^{+}\), \(j=1,\ldots ,N\) are not poles of \(Z_{w}(r,s)\). Similarly, we can show using (9) that \(Z_{w}(r,s)\) has no singularity at \(s=\delta /a\), \(s=s_{j}^{+}/a\), and so on.

2.1.2 Stationary analysis

We now focus on the steady-state counterpart of \(W_{n}\), say W. By applying Abel’s theorem on power series to (12), or by considering the relation \(W=[aW+(1-G)B-J]^{+}\) (i.e. by focusing directly on the limiting random variable W), and setting \(Z(s):=E(e^{-sW})\), we obtain after some algebra:

$$\begin{aligned} Z(s)=Z(as)\frac{\delta }{\delta -s}\frac{f(s)}{g(s)}+1-U^{-}(s). \end{aligned}$$
(13)

Since \(a\in (0,1)\), stability is ensured as long as \(E(\log (1+(1-G)B))<\infty \); see also [6, 16].

Note also that

$$\begin{aligned}{}[aW+(1-G)B-J]^{-}=\left\{ \begin{array}{ll} aW+(1-G)B-J,&{}aW+(1-G)B-J<0, \\ 0,&{} aW+(1-G)B-J\ge 0, \end{array}\right. \end{aligned}$$

thus,

$$\begin{aligned} U^{-}(s)&= E(e^{-s(aW+(1-G)B-J)}|aW+(1-G)B-J<0)\\&\qquad P(aW+(1-G)B-J<0)+P(aW+(1-G)B-J\ge 0) \\&=\frac{\delta }{\delta -s}P(aW+(1-G)B-J<0)+P(aW+(1-G)B-J\ge 0)\\&=1+\frac{s}{\delta -s}P(aW+(1-G)B-J<0), \end{aligned}$$

where we used the fact that \(E(e^{-s(aW+(1-G)B-J)}|aW+(1-G)B-J<0)\) is the LST of the probability distribution characterized by:

$$\begin{aligned}&P(aW+(1-G)B-J\le x|aW+(1-G)B-J<0)\\&\quad =P(J\ge aW+(1-G)B-x|J>aW+(1-G)B) \\&\quad =P(J\ge -x)=P(-J\le x), \end{aligned}$$

and thus, \(E(e^{-s(aW+(1-G)B-J)}|aW+(1-G)B-J<0)=\frac{\delta }{\delta -s}\). Let \(P:=P(aW+(1-G)B-J<0)\). Then, (13) is now written as

$$\begin{aligned} Z(s)&= Z(as)\frac{\delta }{\delta -s}\frac{f(s)}{g(s)}-\frac{s}{\delta -s}P \nonumber \\&= -\frac{Ps}{\delta -s}+\frac{\delta }{\delta -s}\frac{f(s)}{g(s)}\left[ -\frac{Pas}{\delta -as}+\frac{\delta }{\delta -as}\frac{f(as)}{g(as)}Z(a^{2}s)\right] \nonumber \\&= \ldots \nonumber \\&= -\sum _{n=0}^{\infty }\frac{Pa^{n}s}{\delta -a^{n}s}\prod _{j=0}^{n-1}\frac{f(a^{j}s)\delta }{g(a^{j}s)(\delta -a^{j}s)}+\lim _{n\rightarrow \infty }Z(a^{n}s)\prod _{j=0}^{n-1}\frac{f(a^{j}s)\delta }{g(a^{j}s)(\delta -a^{j}s)} \nonumber \\&= -\sum _{n=0}^{\infty }\frac{Pa^{n}s}{\delta -a^{n}s}\prod _{j=0}^{n-1}\frac{f(a^{j}s)\delta }{g(a^{j}s)(\delta -a^{j}s)}+\prod _{j=0}^{\infty }\frac{f(a^{j}s)\delta }{g(a^{j}s)(\delta -a^{j}s)}, \end{aligned}$$
(14)

since \(\lim _{n\rightarrow \infty }Z(a^{n}s)=Z(0)=1\). Note that \(P=P(W=0)\). Then, P can be derived by multiplying (14) by \(\delta -s\) (i.e. the functional equation before the iterations), and setting \(s=\delta \), so that

$$\begin{aligned} P=Z(a\delta )\frac{f(\delta )}{g(\delta )}. \end{aligned}$$

Setting \(s=a\delta \) in (14) (so as to obtain \(Z(a\delta )\)) and substituting back yields

$$\begin{aligned} P=\frac{\frac{f(\delta )}{g(\delta )}\prod _{j=0}^{\infty }\frac{f(a^{j+1}\delta )}{g(a^{j+1}\delta )(1-a^{j+1})}}{1+\frac{f(\delta )}{g(\delta )}\sum _{n=0}^{\infty }\frac{a^{n+1}}{1-a^{n+1}}\prod _{j=0}^{n-1}\frac{f(a^{j+1}\delta )}{g(a^{j+1}\delta )(1-a^{j+1})}}. \end{aligned}$$

Differentiating the expression in the first line in (14) with respect to s and setting \(s=0\) yields after some algebra,

$$\begin{aligned} E(W):=-\frac{d}{ds}Z(s)|_{s=0}=\frac{\frac{1}{\mu }\sum _{i=1}^{N+M}p_{i}\bar{\beta }_{i}-\frac{1}{\delta }(1-P)}{1-a}, \end{aligned}$$

where P is given above.
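As a quick numerical sanity check, the sketch below simulates recursion (6) and compares the simulated mean with the balance relation \(E(W)=\big (\frac{1}{\mu }\sum _{i}p_{i}\bar{\beta }_{i}-\frac{1}{\delta }(1-P)\big )/(1-a)\), estimating P by the empirical fraction of zero workloads. All parameter values are illustrative, and (for illustration only) both \(\beta _{i}\) are taken in \((0,1)\), so that \(aW_{n}+(1-G_{n})B_{n}\ge 0\) and the memoryless argument above applies directly.

```python
import random

def simulate(a=0.5, mu=1.0, delta=1.0, betas=(0.3, 0.7), probs=(0.5, 0.5),
             n=400_000, burn=2_000, seed=7):
    """Simulate W_{n+1} = [a*W_n + (1-G_n)*B_n - J_n]^+ with
    B ~ exp(mu), J ~ exp(delta), P(G = beta_i) = p_i (illustrative values)."""
    rng = random.Random(seed)
    w, tot_w, zeros, count = 0.0, 0.0, 0, 0
    for k in range(n):
        g = rng.choices(betas, weights=probs)[0]
        b = rng.expovariate(mu)
        j = rng.expovariate(delta)
        w = max(0.0, a * w + (1.0 - g) * b - j)
        if k >= burn:
            tot_w += w
            zeros += (w == 0.0)
            count += 1
    return tot_w / count, zeros / count

a, mu, delta = 0.5, 1.0, 1.0
betas, probs = (0.3, 0.7), (0.5, 0.5)
mean_w, p_zero = simulate(a, mu, delta, betas, probs)
s_bar = sum(p * (1.0 - b) for p, b in zip(probs, betas))   # E(1 - G)
mean_formula = (s_bar / mu - (1.0 - p_zero) / delta) / (1.0 - a)
```

Both sides are estimated from the same trajectory, so the agreement tests the internal consistency of the mean formula rather than the closed-form expression for P itself.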

Remark 2

Note that the analysis can readily be adapted to the case where the random variables \(J_{n}\) follow a hyperexponential distribution with L phases, i.e. with density function \(f_{J}(x):=\sum _{j=1}^{L}q_{j}\delta _{j}e^{-\delta _{j}x}\), \(x\ge 0\), \(\sum _{j=1}^{L}q_{j}=1\), as well as to the case where the service times are arbitrarily distributed with density function \(f_{B}(\cdot )\), and LST \(\phi _{B}(\cdot )\). For convenience, and in order to make the analysis simpler, assume that \(\beta _{i}\in (0,1)\), \(i=1,\ldots ,K\), with \(K=N+M\) (so that \(Re(s\bar{\beta }_{i})\ge 0\) for \(Re(s)\ge 0\)). In such a case, following similar arguments as above, we come up with the following functional equation:

$$\begin{aligned} Z(s)&=Z(as)\sum _{j=1}^{L}q_{j}\frac{\delta _{j}}{\delta _{j}-s}\sum _{i=1}^{K}p_{i}\phi _{B}(s\bar{\beta }_{i})\\&\quad -P(aW+(1-G)B-J<0)\left( 1-\sum _{j=1}^{L}q_{j}\frac{\delta _{j}}{\delta _{j}-s}\right) , \end{aligned}$$

where

$$\begin{aligned}&P(aW+(1-G)B-J<0)\\&\quad = \sum _{i=1}^{K}p_{i}\int _{0}^{\infty }f_{W}(w)\textrm{d}w\int _{0}^{\infty }f_{B}(x)\textrm{d}x\int _{aw+\bar{\beta }_{i}x}^{\infty }\sum _{j=1}^{L}q_{j}\delta _{j}e^{-\delta _{j}y}\textrm{d}y\\&\quad = \sum _{i=1}^{K}p_{i}\sum _{j=1}^{L}q_{j}\phi _{B}(\delta _{j}\bar{\beta }_{i})Z(a\delta _{j}), \end{aligned}$$

so that

$$\begin{aligned} Z(s)&= Z(as)V(s)\sum _{i=1}^{K}p_{i}\phi _{B}(s\bar{\beta }_{i})\nonumber \\&\quad -\sum _{i=1}^{K}p_{i}\sum _{j=1}^{L}q_{j}\phi _{B}(\delta _{j}\bar{\beta }_{i})Z(a\delta _{j})(1-V(s)), \end{aligned}$$
(15)

or equivalently,

$$\begin{aligned} \prod _{j=1}^{L}(\delta _{j}-s)Z(s)&= Z(as)\sum _{j=1}^{L}q_{j}\delta _{j}\prod _{m\ne j}(\delta _{m}-s)\sum _{i=1}^{K}p_{i}\phi _{B}(s\bar{\beta }_{i}) \nonumber \\&\quad -\sum _{i=1}^{K}p_{i}\sum _{j=1}^{L}q_{j}\phi _{B}(\delta _{j}\bar{\beta }_{i})Z(a\delta _{j})\nonumber \\&\quad \times \left[ \prod _{l=1}^{L}(\delta _{l}-s)-\sum _{l=1}^{L}q_{l}\delta _{l}\prod _{m\ne l}(\delta _{m}-s)\right] , \end{aligned}$$
(16)

where

$$\begin{aligned} V(s):=\frac{\sum _{j=1}^{L}q_{j}\delta _{j}\prod _{m\ne j}(\delta _{m}-s)}{\prod _{j=1}^{L}(\delta _{j}-s)}. \end{aligned}$$

Note that we first have to derive expressions for \(Z(a\delta _{j})\), \(j=1,\ldots ,L\). Iterating (15) yields

$$\begin{aligned} Z(s)=\sum _{i=1}^{K}p_{i}\sum _{j=1}^{L}q_{j}\phi _{B}(\delta _{j}\bar{\beta }_{i})Z(a\delta _{j})\sum _{n=0}^{\infty }\Psi (a^{n}s)\prod _{l=0}^{n-1}\Phi (a^{l}s)+\prod _{l=0}^{\infty }\Phi (a^{l}s), \end{aligned}$$
(17)

where

$$\begin{aligned} \Phi (s):= \sum _{i=1}^{K}p_{i}\phi _{B}(s\bar{\beta }_{i})V(s),&\Psi (s):=V(s)-1. \end{aligned}$$

Setting \(s=a\delta _{p}\), \(p=1,\ldots ,L,\) in (17), we obtain a system of L equations for the unknown terms \(Z(a\delta _{p})\), \(p=1,\ldots ,L\):

$$\begin{aligned}&Z(a\delta _{p})\left( 1-\sum _{i=1}^{K}p_{i}q_{p}\phi _{B}(\delta _{p}\bar{\beta }_{i})\sum _{n=0}^{\infty }\Psi (a^{n+1}\delta _{p})\prod _{l=0}^{n-1}\Phi (a^{l+1}\delta _{p})\right) \\&\quad -\sum _{i=1}^{K}p_{i}\sum _{j\ne p}q_{j}\phi _{B}(\delta _{j}\bar{\beta }_{i})Z(a\delta _{j})\sum _{n=0}^{\infty }\Psi (a^{n+1}\delta _{p})\prod _{l=0}^{n-1}\Phi (a^{l+1}\delta _{p})=\prod _{l=0}^{\infty }\Phi (a^{l+1}\delta _{p}). \end{aligned}$$

Remark 3

Consider the case of a reflected autoregressive M/G/1-type queue where the interarrival times are deterministically proportional to the service times, with an additive delay. We consider the case where \(A_{n}=bB_{n}+J_{n}\), where \(b\in (0,1)\) and \(J_{n}\sim \exp (\delta )\). The sequence \(\{W_{n}\}_{n\in \mathbb {N}_{0}}\) obeys the following recursion:

$$\begin{aligned} W_{n+1}=[aW_{n}+(1-b)B_{n}-J_{n}]^{+}, \end{aligned}$$
(18)

where \(a\in (0,1)\). Note that for \(a=1-b\), the recursion (18) was investigated in [7, Section 2]. Here we cope with the general case (\(a\ne 1-b\)), although the analysis follows the lines in [8].

2.2 Proportional dependency with additive and subtractive delay

We now focus on the case where the interarrival times are such that \(A_{n}=[cB_{n}+J_{n}]^{+}\), with

$$\begin{aligned} J_{n}:=\left\{ \begin{array}{ll} \tilde{J}_{n}&{},\text { with probability }p, \\ -\widehat{J}_{n}&{}, \text { with probability }q:=1-p, \end{array}\right. \end{aligned}$$
(19)

where \(\tilde{J}_{n}\sim \exp (\delta )\), \(\widehat{J}_{n}\sim \exp (\nu )\). Now the sequence \(\{W_{n}\}_{n\in \mathbb {N}_{0}}\) obeys \(W_{n+1}=[aW_{n}+B_{n}-[cB_{n}+J_{n}]^{+}]^{+}\). With probability p, \(J_{n}=\tilde{J}_{n}\), and thus, \([cB_{n}+J_{n}]^{+}=cB_{n}+\tilde{J}_{n}\), while with probability q, \(J_{n}=-\widehat{J}_{n}\), and thus, \([cB_{n}+J_{n}]^{+}=[cB_{n}-\widehat{J}_{n}]^{+}\). Therefore,

$$\begin{aligned} E(e^{-sW_{n+1}})=pE(e^{-s[aW_{n}+\bar{c}B_{n}-\tilde{J}_{n}]^{+}})+q E(e^{-s[aW_{n}+B_{n}-[cB_{n}-\widehat{J}_{n}]^{+}]}), \end{aligned}$$
(20)

where \(\bar{c}:=1-c\). By focusing on the limiting random variable W with density function \(f_{W}(.)\), and LST \(Z(s)=E(e^{-sW})\), we can obtain:

$$\begin{aligned}&E(e^{-s[aW_{n}+\bar{c}B_{n}-\tilde{J}_{n}]^{+}})=\int _{w=0}^{\infty }f_{W}(w)\textrm{d}w\int _{x=0}^{\infty }f_{B}(x)\textrm{d}x\\&\quad \times \left\{ \int _{y=0}^{aw+\bar{c}x}\delta e^{-\delta y}e^{-s(aw+\bar{c}x-y)}\textrm{d}y+\int _{y=aw+\bar{c}x}^{\infty }\delta e^{-\delta y}\textrm{d}y\right\} \\&\quad =\int _{w=0}^{\infty }f_{W}(w)\int _{x=0}^{\infty }f_{B}(x)\frac{\delta e^{-s(aw+\bar{c}x)}-se^{-\delta (aw+\bar{c}x)}}{\delta -s}\textrm{d}x\textrm{d}w\\&\quad = \frac{\delta }{\delta -s}Z(as)\phi _{B}(s\bar{c})-\frac{s}{\delta -s}Z(a\delta )\phi _{B}(\delta \bar{c}). \end{aligned}$$

Now

$$\begin{aligned} E(e^{-s[aW_{n}+B_{n}-[cB_{n}-\widehat{J}_{n}]^{+}]})&= Z(as)E(e^{-s[B_{n}-[cB_{n}-\widehat{J}_{n}]^{+}]}) \\&= Z(as)\int _{x=0}^{\infty }f_{B}(x)\left[ \int _{y=0}^{cx}e^{-s(\bar{c}x+y)}\nu e^{-\nu y}\textrm{d}y\right. \\&\quad +\left. \int _{y=cx}^{\infty }e^{-sx}\nu e^{-\nu y}\textrm{d}y\right] \textrm{d}x\\&= Z(as)\left[ \frac{\nu }{\nu +s}(\phi _{B}(s\bar{c})-\phi _{B}(s+\nu c))+\phi _{B}(s+\nu c)\right] \\&= Z(as)\left( \frac{\nu \phi _{B}(s\bar{c})+s\phi _{B}(s+\nu c)}{\nu +s}\right) . \end{aligned}$$

Thus, (20) reads

$$\begin{aligned} Z(s)=H(s)Z(as)+L(s), \end{aligned}$$

where

$$\begin{aligned} H(s)&= \phi _{B}(s\bar{c})\left( \frac{p\delta }{\delta -s}+\frac{q\nu }{\nu +s}\right) + \frac{qs}{\nu +s}\phi _{B}(s+\nu c), \\ L(s)&= -\frac{s}{\delta -s}pZ(a\delta )\phi _{B}(\delta \bar{c}):=-\frac{s}{\delta -s}P. \end{aligned}$$

Iterating as in Sect. 2.1.2, and having in mind that \(\lim _{n\rightarrow \infty }Z(a^{n}s)=1\), we arrive at

$$\begin{aligned} Z(s)=-P\sum _{n=0}^{\infty }\frac{a^{n}s}{\delta -a^{n}s}\prod _{j=0}^{n-1}H(a^{j}s)+\prod _{j=0}^{\infty }H(a^{j}s). \end{aligned}$$

Setting \(s=a\delta \), and substituting back, we obtain

$$\begin{aligned} P=\frac{p\phi _{B}(\delta \bar{c})\prod _{j=0}^{\infty }H(a^{j+1}\delta )}{1+p\phi _{B}(\delta \bar{c})\sum _{n=0}^{\infty }\frac{a^{n+1}}{1-a^{n+1}}\prod _{j=0}^{n-1}H(a^{j+1}\delta )}. \end{aligned}$$
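As a sanity check on P, one can evaluate the infinite products and sums above by truncation and compare against a direct simulation of \(W_{n+1}=[aW_{n}+B_{n}-[cB_{n}+J_{n}]^{+}]^{+}\). All parameter values below are illustrative assumptions; since \(a\in (0,1)\), truncating at 60 terms is more than enough.

```python
import random

# illustrative parameters (not from the paper)
a, c, p, mu, delta, nu = 0.5, 0.4, 0.6, 1.0, 1.0, 2.0
q, cbar = 1.0 - p, 1.0 - c
phi_B = lambda s: mu / (mu + s)          # LST of B ~ exp(mu)

def H(s):
    return (phi_B(s * cbar) * (p * delta / (delta - s) + q * nu / (nu + s))
            + q * s / (nu + s) * phi_B(s + nu * c))

# closed-form P, with products/sums truncated (a^n is negligible beyond n = 60)
N = 60
prod_all = 1.0
for j in range(N):
    prod_all *= H(a ** (j + 1) * delta)
acc, run = 0.0, 1.0                       # run = prod_{j=0}^{n-1} H(a^{j+1} delta)
for n in range(N):
    acc += a ** (n + 1) / (1.0 - a ** (n + 1)) * run
    run *= H(a ** (n + 1) * delta)
P_formula = (p * phi_B(delta * cbar) * prod_all
             / (1.0 + p * phi_B(delta * cbar) * acc))

# direct simulation of the recursion, estimating P = P(W = 0)
rng = random.Random(3)
w, zeros, count = 0.0, 0, 0
for k in range(400_000):
    b = rng.expovariate(mu)
    j = rng.expovariate(delta) if rng.random() < p else -rng.expovariate(nu)
    w = max(0.0, a * w + b - max(0.0, c * b + j))
    if k >= 2_000:
        zeros += (w == 0.0)
        count += 1
P_sim = zeros / count
```

Note that \(W_{n+1}=0\) can only occur in the \(\tilde{J}_{n}\)-branch, which is why \(P=pZ(a\delta )\phi _{B}(\delta \bar{c})\) coincides with \(P(W=0)\).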

Remark 4

One may also consider the case where the interarrival times are related to the previous service time as follows: \(A_{n}=G_{n}B_{n}+J_{n}\), where \(J_{n}\) is as given in (19), and \(G_{n}\) are i.i.d. random variables with probability mass function given by \(P(G_{n}=c_{k})=p_{k}\), \(c_{k}\in (0,1)\), \(k=1,\ldots ,N\), \(\sum _{k=1}^{N}p_{k}=1\). In particular, (20) now becomes

$$\begin{aligned} E(e^{-sW_{n+1}})=pE(e^{-s[aW_{n}+(1-G_{n})B_{n}-\tilde{J}_{n}]^{+}})+q E(e^{-s[aW_{n}+B_{n}-(G_{n}B_{n}-\widehat{J}_{n})^{+}]}), \end{aligned}$$
(21)

and following the same arguments as above, we again have

$$\begin{aligned} Z(s)=H(s)Z(as)+L(s), \end{aligned}$$

where now

$$\begin{aligned} H(s)&:= \sum _{k=1}^{N}p_{k}\left[ \phi _{B}(s\bar{c}_{k})\left( \frac{p\delta }{\delta -s}+\frac{q\nu }{\nu +s}\right) + \frac{qs}{\nu +s}\phi _{B}(s+\nu c_{k})\right] , \\ L(s)&:= -\frac{s}{\delta -s}pZ(a\delta )\sum _{k=1}^{N}p_{k}\phi _{B}(\delta \bar{c}_{k}):=-\frac{s}{\delta -s}P. \end{aligned}$$

Following the lines in Sect. 2.1.2, and having in mind that \(\lim _{n\rightarrow \infty }Z(a^{n}s)=1\), we obtain the desired expression for Z(s).

Remark 5

The case where \(\tilde{J}_{n}\), \(\widehat{J}_{n}\) are i.i.d. random variables following a distribution with rational LST can also be treated similarly. In particular, assume that \(\tilde{J}_{n}\), \(\widehat{J}_{n}\) follow hyperexponential distributions, i.e. their density functions are \(f_{\tilde{J}}(x):=\sum _{j=1}^{L}q_{j}\delta _{j}e^{-\delta _{j}x}\), and \(f_{\widehat{J}}(x):=\sum _{m=1}^{M}h_{m}\nu _{m}e^{-\nu _{m}x}\), respectively, with \(\sum _{j=1}^{L}q_{j}=1\), \(\sum _{m=1}^{M}h_{m}=1\). Then, following similar arguments as above, and assuming \(A_{n}=G_{n}B_{n}+J_{n}\), where \(J_{n}\) is as given in (19), we obtain after lengthy computations:

$$\begin{aligned} Z(s)=H(s)Z(as)+L(s), \end{aligned}$$
(22)

where now

$$\begin{aligned} H(s)&:= \sum _{k=1}^{N}p_{k}\left[ \phi _{B}(s\bar{c}_{k})\left( p\sum _{j=1}^{L}\frac{\delta _{j}q_{j}}{\delta _{j}-s}+ q\sum _{m=1}^{M}\frac{\nu _{m}h_{m}}{\nu _{m}+s}\right) \right. \\&\quad \left. + qs\sum _{m=1}^{M}\frac{h_{m}}{\nu _{m}+s}\phi _{B}(s+\nu _{m} c_{k})\right] , \\ L(s)&:= -sp \sum _{k=1}^{N}p_{k}\sum _{j=1}^{L}\frac{q_{j}}{\delta _{j}-s}Z(a\delta _{j})\phi _{B}(\delta _{j}\bar{c}_{k}). \end{aligned}$$

Iterating (22) as in Sect. 2.1.2, and having in mind that \(\lim _{n\rightarrow \infty }Z(a^{n}s)=1\), we obtain the desired expression for Z(s).

2.3 Interarrival times randomly proportional to system time

Consider the following variant of the standard M/G/1 queue: When the workload just after the nth arrival equals \(x\ge 0\), then the next interarrival time equals \(\beta _{i}x\) (with probability \(p_{i}\)) increased by a random jitter \(J_{n}\sim \exp (\delta )\). Thus, \(A_{n}=G_{n}(W_{n}+B_{n})+J_{n}\), where \(P(G_{n}=\beta _{i})=p_{i}\), \(i=1,\ldots ,K\), \(\beta _{i}\in (0,1)\). Note that our model generalizes the one in [7, Section 2], in which \(P(G_{n}=c)=1\), i.e. \(\beta _{1}=c\in (0,1)\), \(\beta _{i}=0\), \(i\ne 1\). Then,

$$\begin{aligned} W_{n+1}=[(1-G_{n})W_{n}+(1-G_{n})B_{n}-J_{n}]^{+}. \end{aligned}$$
(23)

Note that the recursion (23) is a special case of the recursion (1) with \(V_{n}:=1-G_{n}\). By focusing on the limiting random variable W, we have,

$$\begin{aligned} Z(s)&:=E(e^{-sW})= \sum _{i=1}^{K}p_{i}\int _{0}^{\infty }\int _{0}^{\infty }f_{B}(x)\textrm{d}x\left[ \int _{0}^{\bar{\beta }_{i}(w+x)}e^{-s(\bar{\beta }_{i}(w+x)-y)}\delta e^{-\delta y}\textrm{d}y\right. \\ {}&\qquad +\left. \int _{\bar{\beta }_{i}(w+x)}^{\infty }\delta e^{-\delta y}\textrm{d}y\right] \textrm{d}P(W<w) \\&= \frac{\delta }{\delta -s}\sum _{i=1}^{K}p_{i}\phi _{B}(s\bar{\beta }_{i})Z(s\bar{\beta }_{i})-\frac{s}{\delta -s}\sum _{i=1}^{K}p_{i}\phi _{B}(\delta \bar{\beta }_{i})Z(\delta \bar{\beta }_{i}). \end{aligned}$$

It is easy to show that \(P(J>\bar{\beta }_{i}(W+B))=\phi _{B}(\delta \bar{\beta }_{i})Z(\delta \bar{\beta }_{i})\). Thus, \(P(W=0)=\sum _{i=1}^{K}p_{i}P(J>\bar{\beta }_{i}(W+B))\), and therefore,

$$\begin{aligned} Z(s)=\frac{\delta }{\delta -s}\sum _{i=1}^{K}p_{i}\phi _{B}(s\bar{\beta }_{i})Z(s\bar{\beta }_{i})-\frac{s}{\delta -s}P(W=0). \end{aligned}$$
(24)
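The boundary relation \(P(W=0)=\sum _{i=1}^{K}p_{i}\phi _{B}(\delta \bar{\beta }_{i})Z(\delta \bar{\beta }_{i})\) in (24) can be checked by simulating (23) and estimating both sides from the same trajectory. The parameter values below are illustrative assumptions:

```python
import random
from math import exp

def simulate_check(delta, mu, betas, probs, n=400_000, burn=2_000, seed=11):
    """Simulate W_{n+1} = [(1-G_n)(W_n+B_n) - J_n]^+ and estimate
    P(W=0) and Z(delta*(1-beta_i)) = E(e^{-delta*(1-beta_i)*W}) for each i."""
    rng = random.Random(seed)
    w, zeros, count = 0.0, 0, 0
    lst = [0.0] * len(betas)              # accumulators for Z(delta*(1-beta_i))
    for k in range(n):
        g = rng.choices(betas, weights=probs)[0]
        b = rng.expovariate(mu)
        j = rng.expovariate(delta)
        w = max(0.0, (1.0 - g) * (w + b) - j)
        if k >= burn:
            zeros += (w == 0.0)
            for i, beta in enumerate(betas):
                lst[i] += exp(-delta * (1.0 - beta) * w)
            count += 1
    return zeros / count, [v / count for v in lst]

delta, mu = 1.0, 1.0
betas, probs = (0.3, 0.6), (0.5, 0.5)     # illustrative choices, beta_i in (0,1)
p_zero, z_hat = simulate_check(delta, mu, betas, probs)
phi_B = lambda s: mu / (mu + s)           # LST of B ~ exp(mu)
rhs = sum(p * phi_B(delta * (1.0 - b)) * z for p, b, z in zip(probs, betas, z_hat))
```

Since \(\bar{\beta }_{i}(W+B)\ge 0\), the overshoot of the exponential jitter is again memoryless, which is exactly what the identity exploits.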

Following [1], we can obtain

$$\begin{aligned} Z(s)&=\sum _{k=0}^{\infty }\sum _{i_{1}+\ldots +i_{K}=k}p_{1}^{i_{1}}\ldots p_{K}^{i_{K}}L_{i_{1},\ldots ,i_{K}}(s)K(\bar{\beta }_{1}^{i_{1}}\ldots \bar{\beta }_{K}^{i_{K}}s)\\&\quad +\lim _{k\rightarrow \infty }\sum _{i_{1}+\ldots +i_{K}=k}p_{1}^{i_{1}}\ldots p_{K}^{i_{K}}L_{i_{1},\ldots ,i_{K}}(s), \end{aligned}$$

where \(K(s):=-\frac{s}{\delta -s}P(W=0)\), \(L_{0,0,\ldots ,0,1,0,\ldots ,0}(s):=\phi _{B}(\bar{\beta }_{k}s)\), with 1 in position k, \(k=1,\ldots , K\), and

$$\begin{aligned} L_{i_{1},\ldots ,i_{K}}(s):=\phi _{B}(\bar{\beta }_{1}^{i_{1}}\ldots \bar{\beta }_{K}^{i_{K}}s)\sum _{j=1}^{K}L_{i_{1},\ldots ,i_{j}-1,\ldots ,i_{K}}(s). \end{aligned}$$

Remark 6

A similar analysis can be applied in order to investigate recursions of the form \(W_{n+1}=[V_{n}W_{n}+(1-G_{n})B_{n}-J_{n}]^{+}\), where \(V_{n}\) are i.i.d. random variables with \(P(V_{n}=\gamma _{i})=q_{i}\), \(\gamma _{i}\in (0,1)\), \(i=1,\ldots ,K\).

2.3.1 Asymptotic expansions

In the following, we focus on deriving asymptotic expansions of the basic performance metrics \(P(W=0)\), \(E(W^{l})\), \(l=1,2,\ldots \), by perturbing the \(\beta _{i}\)s, i.e. by replacing \(\beta _{i}\) in (24) with \(\beta _{i}\epsilon \), where \(\epsilon \) is very small. Then, (24) is written as:

$$\begin{aligned} (\delta -s)Z(s)=\delta \sum _{i=1}^{K}p_{i}\phi _{B}(s(1-\beta _{i}\epsilon ))Z(s(1-\beta _{i}\epsilon ))-sP(W=0). \end{aligned}$$

Note that for \(\epsilon =0\), the above equation provides the LST of the waiting time (say \(\tilde{W}\)) of the classical M/G/1 queue where arrivals occur according to a Poisson process with rate \(\delta \). So, when \(\epsilon \rightarrow 0\), there is a weak dependence between the sojourn time and the subsequent interarrival time. Following [7, subsection 2.3], consider the Taylor series expansion of \(P(W=0)\), \(E(W^{l})\), \(l=1,\ldots ,L,\) up to \(\epsilon ^{m}\) terms for \(m\in \mathbb {N}\). Thus, for \(\epsilon \rightarrow 0\):

$$\begin{aligned} P(W=0)=&P(\tilde{W}=0)+\sum _{h=1}^{m}R_{0,h}\epsilon ^{h}+o(\epsilon ^{m}), \nonumber \\ E(W^{l}) =&E(\tilde{W}^{l})+\sum _{h=1}^{m}R_{l,h}\epsilon ^{h}+o(\epsilon ^{m}). \end{aligned}$$
(25)

Differentiating the functional equation with respect to s and setting \(s=0\) yields, with \(\rho =\delta E(B)\),

$$\begin{aligned} E(W)=\frac{P(W=0)-(1-\rho )-\rho \epsilon \sum _{i=1}^{K}p_{i}\beta _{i}}{\delta \epsilon \sum _{i=1}^{K}p_{i}\beta _{i}}. \end{aligned}$$

Simple calculations imply that

$$\begin{aligned}&R_{0,1}= (\delta E(\tilde{W}) +\rho )\sum _{i=1}^{K}p_{i}\beta _{i}, \\&\delta \sum _{i=1}^{K}p_{i}\beta _{i}R_{1,h-1}=R_{0,h},\,h=2,3,\ldots . \end{aligned}$$

Assuming that the first L moments of W are well defined, we subsequently differentiate the above functional equation \(l=2,\ldots ,L\) times with respect to s, and set \(s=0\). Then, for \(l=2,3,\ldots ,L,\) we have:

$$\begin{aligned}{} & {} \delta (1-\sum _{i=1}^{K}p_{i}(1-\beta _{i}\epsilon )^{l})E(W^{l})\nonumber \\{} & {} \quad =-lE(W^{l-1})+\delta \sum _{i=1}^{K}p_{i}(1-\beta _{i}\epsilon )^{l}\sum _{j=0}^{l-1}\left( {\begin{array}{c}l\\ j\end{array}}\right) E(W^{j})E(B^{l-j}). \end{aligned}$$
(26)

Setting \(\epsilon =0\), and having in mind that \(\sum _{i=1}^{K}p_{i}=1\), we recover the recursive formula to obtain the moments of the standard M/G/1 queue:

$$\begin{aligned} 0=-lE(\tilde{W}^{l-1})+\delta \sum _{j=0}^{l-1}\left( {\begin{array}{c}l\\ j\end{array}}\right) E(\tilde{W}^{j})E(B^{l-j}),\,l=2,3,\ldots ,L. \end{aligned}$$
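Solving this relation for its highest moment (the \(j=l-1\) term contains \(E(\tilde{W}^{l-1})\) with coefficient \(\delta l E(B)\)) gives \(E(\tilde{W}^{l-1})=\frac{\delta }{l(1-\rho )}\sum _{j=0}^{l-2}\left( {\begin{array}{c}l\\ j\end{array}}\right) E(\tilde{W}^{j})E(B^{l-j})\), which is easy to implement. A sketch (Python; exponential service is an illustrative assumption), whose first output reproduces the Pollaczek–Khinchine mean \(\delta E(B^{2})/(2(1-\rho ))\):

```python
from math import comb, factorial

def mg1_waiting_moments(delta, mom_B, L):
    """E(W~^l), l = 0..L, for the M/G/1 queue via the recursion
    l*(1-rho)*E(W~^{l-1}) = delta * sum_{j=0}^{l-2} C(l,j) E(W~^j) E(B^{l-j}),
    i.e. the displayed relation solved for its highest moment.
    mom_B[k] must hold E(B^k) for k = 0..L+1."""
    rho = delta * mom_B[1]
    assert 0 < rho < 1, "stability condition"
    m = [1.0]                        # E(W~^0) = 1
    for l in range(2, L + 2):        # iteration l produces E(W~^{l-1})
        s = sum(comb(l, j) * m[j] * mom_B[l - j] for j in range(l - 1))
        m.append(delta * s / (l * (1.0 - rho)))
    return m

# Example: M/M/1 with delta = 0.5, mu = 1 (so rho = 0.5)
delta, mu = 0.5, 1.0
mom_B = [factorial(k) / mu**k for k in range(5)]
m = mg1_waiting_moments(delta, mom_B, 3)
```

For the M/M/1 example, \(E(\tilde{W})=\rho /(\mu (1-\rho ))\) and \(E(\tilde{W}^{2})=2\rho /(\mu ^{2}(1-\rho )^{2})\), which the recursion recovers.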

Then, substituting (25) in (26) we have

$$\begin{aligned}{} & {} \delta (1-\sum _{i=1}^{K}p_{i}(1-\beta _{i}\epsilon )^{l})\sum _{h=1}^{m}R_{l,h}\epsilon ^{h} \nonumber \\{} & {} \quad = -l\sum _{h=1}^{m}R_{l-1,h}\epsilon ^{h}+\delta \sum _{i=1}^{K}p_{i}(1-\beta _{i}\epsilon )^{l}\sum _{j=0}^{l-1}\left( {\begin{array}{c}l\\ j\end{array}}\right) E(B^{l-j})\sum _{h=1}^{m}R_{j,h}\epsilon ^{h} \nonumber \\{} & {} \qquad +\delta (\sum _{i=1}^{K}p_{i}(1-\beta _{i}\epsilon )^{l}-1)\sum _{j=0}^{l}\left( {\begin{array}{c}l\\ j\end{array}}\right) E(\tilde{W}^{j})E(B^{l-j}). \end{aligned}$$
(27)

Equating \(\epsilon \) factors on both sides, we obtain \(R_{l-1,1}\) in terms of \(R_{l-2,1},\ldots ,R_{0,1}\), as well as in terms of \(E(\tilde{W}^{n})\) obtained above. Since \(R_{0,1}\) is known, all \(R_{l,1}\) can be derived by:

$$\begin{aligned} R_{l-1,1}=\frac{1}{1-\delta E(B)}[\frac{\delta }{l}\sum _{n=0}^{l-2}\left( {\begin{array}{c}l\\ n\end{array}}\right) E(B^{l-n})R_{n,1}-\delta \sum _{i=1}^{K}p_{i}\beta _{i}\sum _{n=0}^{l}\left( {\begin{array}{c}l\\ n\end{array}}\right) E(\tilde{W}^{n})E(B^{l-n})]. \end{aligned}$$

Similarly, for \(h=2\),

$$\begin{aligned} R_{l-1,2}=&\frac{1}{1-\delta E(B)}[ \frac{\delta }{l}\sum _{n=0}^{l-2}\left( {\begin{array}{c}l\\ n\end{array}}\right) E(B^{l-n})R_{n,2} -\delta \sum _{i=1}^{K}p_{i}\beta _{i}\sum _{n=0}^{l}\left( {\begin{array}{c}l\\ n\end{array}}\right) E(B^{l-n})R_{n,1}\\&+\delta \frac{l-1}{2}\sum _{i=1}^{K}p_{i}\beta _{i}^{2} \sum _{n=0}^{l}\left( {\begin{array}{c}l\\ n\end{array}}\right) E(\tilde{W}^{n})E(B^{l-n})]. \end{aligned}$$

Similarly, we can obtain \(R_{k-1,h}\) in terms of \(R_{k,h-1}\) and of the lower-order terms \(R_{n,j}\) with \(n+j\le k-2+h\). The procedure we follow to recursively obtain \(R_{l,h}\) is the same as the one given in [7, subsection 2.3], so further details are omitted.

3 The single-server queue with service time randomly dependent on waiting time

Consider now the following variant of the M/M/1 queue. Customers arrive according to a Poisson process with rate \(\lambda \), and if the waiting time of the nth arriving customer equals \(W_n\), then her service time equals \([B_n - \Omega _{n} W_{n}]^{+}\), with \(P(\Omega _{n}=a_{l})=g_{l}\), \(a_{l}\in (0,1)\), \(l=1,\ldots ,K\). Moreover, \(\{B_n\}_{n\in \mathbb {N}_{0}}\) is a sequence of independent, exponentially distributed random variables with rate \(\mu \), independent of everything else. Note that when the waiting time is very large, the service requirement tends to zero, which can be interpreted as a form of abandonment.

We focus on the limiting random variable W, let \(Z(s):=E(e^{-sW})\), and assume that the \(A_{n}\) are i.i.d. random variables with \(A_{n}\sim exp(\lambda )\). Then,

$$\begin{aligned} \begin{array}{rl} Z(s):= &{}E(e^{-sW})=E(e^{-s[W+[B - \Omega W]^{+}-A]^{+}}) \\ = &{} E(e^{-s[W+[B - \Omega W]^{+}-A]}) +1-E(e^{-s[W+[B - \Omega W]^{+}-A]^{-}})\\ =&{}\sum _{l=1}^{K}g_{l}E(e^{sA})E(e^{-s[W+[B - a_{l} W]^{+}]})+1-E(e^{-sU}), \end{array} \end{aligned}$$
(28)

where \(U:=[W+[B- \Omega W]^{+}-A]^{-}\). Note that,

$$\begin{aligned} E&(e^{-s[W+[B - a_{l} W]^{+}]})\\&\quad = \int _{w=0}^{\infty }\left[ \int _{x=0}^{a_{l}w}\mu e^{-\mu x}e^{-sw}\textrm{d}x+\int _{x=a_{l}w}^{\infty }e^{-s(x+(1-a_{l})w)}\mu e^{-\mu x}\textrm{d}x\right] \textrm{d}P(W<w) \\&\quad = \int _{w=0}^{\infty }(e^{-sw}-e^{-(a_{l}\mu +s)w})\textrm{d}P(W<w)+\frac{\mu }{\mu +s}\int _{w=0}^{\infty }e^{-w(s+a_{l}\mu )}\textrm{d}P(W<w) \\&\quad = Z(s)-\frac{s}{\mu +s}Z(s+a_{l}\mu ). \end{aligned}$$
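The last identity uses only \(B\sim exp(\mu )\) and independence, so it holds for any distribution of W and can be checked by simulation with an arbitrary test law, say \(W\sim exp(\nu )\). A sketch (Python; all parameter values, and the choice of test law, are illustrative assumptions):

```python
import math
import random

random.seed(1)
mu, a, s = 1.5, 0.6, 0.8      # service rate, scaling factor, transform variable
nu = 2.0                      # any test law for W; here W ~ exp(nu)
Z = lambda u: nu / (nu + u)   # its LST

# Monte Carlo estimate of E[exp(-s(W + max(B - aW, 0)))]
n = 200_000
acc = 0.0
for _ in range(n):
    w = random.expovariate(nu)
    b = random.expovariate(mu)
    acc += math.exp(-s * (w + max(b - a * w, 0.0)))
mc = acc / n

# the identity derived above: Z(s) - s/(mu+s) * Z(s + a*mu)
exact = Z(s) - s / (mu + s) * Z(s + a * mu)
```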

Moreover, since

$$\begin{aligned} {[}W+(B-a_{l}W)-A]^{-}=\left\{ \begin{array}{ll} W+(B-a_{l}W)-A,&{}W+(B-a_{l}W)-A<0, \\ 0,&{}W+(B-a_{l}W)-A\ge 0, \end{array}\right. \end{aligned}$$

we have,

$$\begin{aligned} E(e^{-sU})=&E(e^{-s[W+[B - a_{l} W]^{+}-A]}|A>W+[B - a_{l} W]^{+})P(A>W+[B - a_{l} W]^{+})\\&+P(A\le W+[B- a_{l} W]^{+}) \\ =&\frac{\lambda }{\lambda -s}P(A>W+[B - a_{l} W]^{+}) +P(A\le W+[B - a_{l} W]^{+}) \\ =&1+\frac{s}{\lambda -s}P(A> W+[B - a_{l} W]^{+}). \end{aligned}$$

Note that,

$$\begin{aligned} P(A> W+[B - a_{l} W]^{+})=&\int _{w=0}^{\infty }\left( \int _{x=0}^{a_{l}w}\mu e^{-\mu x}\textrm{d}x\int _{y=w}^{\infty }\lambda e^{-\lambda y}\textrm{d}y\textrm{d}x+\right. \\&\left. \int _{x=a_{l}w}^{\infty }\mu e^{-\mu x}\textrm{d}x\int _{y=x+(1-a_{l})w}^{\infty }\lambda e^{-\lambda y}\textrm{d}y\textrm{d}x\right) \textrm{d}P(W<w) \\ =&\int _{w=0}^{\infty }\left( e^{-\lambda w}(1-e^{-\mu a_{l}w})+\frac{\mu }{\mu +\lambda }e^{-(\lambda +\mu a_{l})w}\right) \textrm{d}P(W<w)\\ =&Z(\lambda )-\frac{\lambda }{\mu +\lambda }Z(\lambda +\mu a_{l}). \end{aligned}$$

Remark 7

Note that \(P(A> W+[B - \Omega W]^{+})=P(W=0):=\pi _{0}\).

Thus, substituting the last expression back in (28) we arrive after simple calculations at:

$$\begin{aligned} Z(s)=\frac{\lambda }{\mu +s}\sum _{l=1}^{K}g_{l}Z(s+a_{l}\mu )+C, \end{aligned}$$
(29)

where \(C:=Z(\lambda )-\frac{\lambda }{\mu +\lambda }\sum _{l=1}^{K}g_{l}Z(\lambda +\mu a_{l})=\pi _{0}\). For \(s=0\), (29) yields \(\sum _{l=1}^{K}g_{l}Z(\mu a_{l})=\frac{\mu }{\lambda }(1-\pi _{0})\). Note also that \(Z(\mu a_{l})=P(B>a_{l}W)\), and \(\sum _{l=1}^{K}g_{l}Z(\mu a_{l})=P(B>\Omega W)\).

To solve (29), we need to iterate it, keeping in mind the behaviour of \(Z(s)\) as \(s\rightarrow \infty \) (which needs some justification). Note that recursions of this kind were treated in [1]; the commutativity of \(\zeta _{l}(s):=s+a_{l}\mu \) and \(\zeta _{m}(s):=s+a_{m}\mu \), i.e. \(\zeta _{l}(\zeta _{m}(s))=\zeta _{m}(\zeta _{l}(s))\), makes the recursion (29) relatively easy to handle, although in each iteration every term gives rise to K new terms; see also [7, Remark 5.3]. Extensions to the case where the service time distribution has a rational LST, e.g. a hyperexponential distribution, are relatively easy to handle.
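Numerically, the iteration of (29) is convenient to organize around this commutativity: instead of tracking the \(K^{n}\) terms of the nth iterate, one can memoize on the multi-index counting how often each map \(\zeta _{l}\) has been applied; the remainder term dies out because the prefactors \(\lambda /(\mu +s)\) vanish as the shifted argument grows. Writing \(Z(s)=\pi _{0}F(s)\), linearity gives \(F(s)=1+\frac{\lambda }{\mu +s}\sum _{l}g_{l}F(s+a_{l}\mu )\), and \(\pi _{0}=1/F(0)\) follows from \(Z(0)=1\). A sketch (Python; parameter values are illustrative assumptions):

```python
from functools import lru_cache

lam, mu = 1.0, 2.0        # assumed arrival and service rates
a = (0.3, 0.6)            # assumed support points of Omega
g = (0.5, 0.5)            # P(Omega = a_l)
DEPTH = 40                # truncation depth of the iteration

def F(s0):
    """Solve F(s) = 1 + lam/(mu+s) * sum_l g_l F(s + a_l*mu) starting at s0.
    Commutativity of the shifts lets us memoize on the multi-index of
    applied shifts instead of on the K^n individual terms."""
    @lru_cache(maxsize=None)
    def rec(counts):
        if sum(counts) > DEPTH:
            return 1.0            # deep in the tree F -> 1, remainder negligible
        s = s0 + mu * sum(n * al for n, al in zip(counts, a))
        kids = sum(gl * rec(counts[:l] + (counts[l] + 1,) + counts[l + 1:])
                   for l, gl in enumerate(g))
        return 1.0 + lam / (mu + s) * kids
    return rec((0,) * len(a))

pi0 = 1.0 / F(0.0)            # from Z(0) = 1 and Z = pi0 * F
Z = lambda s: pi0 * F(s)
```

As an internal check, the solution should satisfy the relation \(\sum _{l}g_{l}Z(\mu a_{l})=\frac{\mu }{\lambda }(1-\pi _{0})\) obtained from (29) at \(s=0\).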

4 Threshold-type dependence among interarrival and service times

4.1 The simple case

Customers arrive with a service request at a single server. Service requests of successive customers are i.i.d. random variables \(B_n\), \(n=1,2,\ldots \) with cdf \(F_{B}(.)\), and LST \(\phi _{B}(.)\). Upon arrival, the service request is registered. If the service request \(B_n\) is less than a threshold \(T_n\), then the next interarrival interval, say \(J_{n}^{(0)}\), is exponentially distributed with rate \(\lambda _{0}\); otherwise, the service time becomes exactly equal to \(T_n\) (is cut off at \(T_n\)), and the next interarrival interval, say \(J_{n}^{(1)}\), is exponentially distributed with rate \(\lambda _{1}\). We assume that an arrival makes obsolete a fixed fraction \(1-a_{0}\) (resp. \(1-a_{1}\)) of the work that is already present, with \(a_{k}\in (0,1)\), \(k=0,1\). We assume the thresholds \(T_n\) to be i.i.d. random variables with cdf \(T(\cdot )\), with LST \(\tau (\cdot )\). Let also for \(Re(s)\ge 0\)

$$\begin{aligned} \chi (s):=&E(e^{-sB}1(B<T))=\int _{0}^{\infty }e^{-sx}(1-T(x))\textrm{d}F_{B}(x), \\ \psi (s):=&E(e^{-sT}1(B\ge T))=\int _{0}^{\infty }e^{-sx}(1-F_{B}(x))\textrm{d}T(x), \end{aligned}$$

with

$$\begin{aligned} \chi (s)+\psi (s)=E(e^{-s\min (B,T)}). \end{aligned}$$
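For instance, if both B and T are exponential (an illustrative assumption, \(B\sim exp(\mu )\), \(T\sim exp(\theta )\)), then \(\chi (s)=\mu /(s+\mu +\theta )\), \(\psi (s)=\theta /(s+\mu +\theta )\), and the identity above is the statement that \(\min (B,T)\sim exp(\mu +\theta )\):

```python
mu, theta = 1.0, 0.5                      # assumed rates: B ~ exp(mu), T ~ exp(theta)

chi = lambda s: mu / (s + mu + theta)     # E[e^{-sB} 1(B < T)]
psi = lambda s: theta / (s + mu + theta)  # E[e^{-sT} 1(B >= T)]
lst_min = lambda s: (mu + theta) / (s + mu + theta)   # LST of min(B, T)
```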

Let \(W_{n}\) be the waiting time of the nth arriving customer, \(n=1,2,\ldots \). Then,

$$\begin{aligned} W_{n+1}=\left\{ \begin{array}{ll} \left[ a_{0}W_{n}+B_{n}-J^{(0)}_{n}\right] ^{+},&{}\,B_{n}<T_{n},\\ \left[ a_{1}W_{n}+T_{n}-J^{(1)}_{n}\right] ^{+},&{}\,B_{n}\ge T_{n}, \end{array}\right. \end{aligned}$$
(30)

with \(J^{(k)}_{n}\sim exp(\lambda _{k})\), \(k=0,1\). Assume that \(W_{0}=w\), and let \(E_{w}(e^{-sW_{n}}):=E(e^{-sW_{n}}|W_{0}=w)\). Then,

$$\begin{aligned} E_{w}(e^{-sW_{n+1}}) =&E_{w}(e^{-s[a_{0}W_{n}+B_{n}-J^{(0)}_{n}]^{+}}1(B_{n}<T_{n}))+ E_{w}(e^{-s[a_{1}W_{n}+T_{n}-J^{(1)}_{n}]^{+}}1(B_{n}\ge T_{n})) \nonumber \\ =&E_{w}(e^{-s[a_{0}W_{n}+B_{n}-J^{(0)}_{n}]}1(B_{n}<T_{n}))+ E_{w}(e^{-s[a_{1}W_{n}+T_{n}-J^{(1)}_{n}]}1(B_{n}\ge T_{n}))+1\nonumber \\&- E_{w}(e^{-s[a_{0}W_{n}+B_{n}-J^{(0)}_{n}]^{-}}1(B_{n}<T_{n}))-E_{w}(e^{-s[a_{1}W_{n}+T_{n}-J^{(1)}_{n}]^{-}}1(B_{n}\ge T_{n}))\nonumber \\ =&E_{w}(e^{-sa_{0} W_{n}})E(e^{sJ^{(0)}_{n}})E(e^{-sB_{n}}1(B_{n}<T_{n}))\nonumber \\&+E_{w}(e^{-sa_{1} W_{n}})E(e^{sJ^{(1)}_{n}})E(e^{-sT_{n}}1(B_{n}\ge T_{n}))+1-U^{-}_{w,n}(s), \end{aligned}$$
(31)

where \(U^{-}_{w,n}(s):=E_{w}(e^{-s[a_{0}W_{n}+B_{n}-J^{(0)}_{n}]^{-}}1(B_{n}<T_{n}))+E_{w}(e^{-s[a_{1}W_{n}+T_{n}-J^{(1)}_{n}]^{-}}1 (B_{n}\ge T_{n}))\). Note that \(U^{-}_{w,n}(s)\) is analytic in \(Re(s)<0\) and continuous in \(Re(s)\le 0\). Let

$$\begin{aligned} Z_{w}(r,s):=&\sum _{n=0}^{\infty }r^{n}E_{w}(e^{-sW_{n}}),\,Re(s)\ge 0, \\ M_{w}(r,s):=&\sum _{n=0}^{\infty }r^{n}U^{-}_{w,n}(s),\,Re(s)\le 0. \end{aligned}$$

Then, (31) leads for \(Re(s)=0\) to:

$$\begin{aligned} Z_{w}(r,s)-e^{-sw}= & {} r\frac{\lambda _{0}}{\lambda _{0}-s}\chi (s)Z_{w}(r,a_{0}s)+r\frac{\lambda _{1}}{\lambda _{1}-s}\psi (s)Z_{w}(r,a_{1}s)\nonumber \\{} & {} +\frac{r}{1-r}-rM_{w}(r,s). \end{aligned}$$
(32)

Multiplying (32) by \(\prod _{k=0}^{1}(\lambda _{k}-s)\), we obtain

$$\begin{aligned}{} & {} \prod _{k=0}^{1}(\lambda _{k}-s)(Z_{w}(r,s)-e^{-sw})-r(\lambda _{0}(\lambda _{1}-s)\chi (s)Z_{w}(r,a_{0}s)\nonumber \\{} & {} \qquad +\lambda _{1}(\lambda _{0}-s)\psi (s)Z_{w}(r,a_{1}s)) \nonumber \\{} & {} \quad =\prod _{k=0}^{1}(\lambda _{k}-s)\left( \frac{r}{1-r}-rM_{w}(r,s)\right) . \end{aligned}$$
(33)

Our objective is to obtain \(Z_{w}(r,s)\), and \(M_{w}(r,s)\) by formulating and solving a Wiener–Hopf boundary value problem. A few observations:

  • The LHS in (33) is analytic in \(Re(s) > 0\) and continuous in \(Re(s)\ge 0\).

  • The RHS in (33) is analytic in \(Re(s)<0\) and continuous in \(Re(s)\le 0\).

  • \(Z_{w}(r,s)\) is for \(Re(s)\ge 0\) bounded by \(|\frac{1}{1-r}|\), so by the generalized Liouville’s theorem [14, Theorem 10.52], the LHS is at most a quadratic polynomial in s (dependent on r) for large s, \(Re(s)>0\).

  • \(M_{w}(r,s)\) is for \(Re(s)\le 0\) bounded by \(|\frac{1}{1-r}|\), so by the generalized Liouville’s theorem [14, Theorem 10.52], the RHS is at most a quadratic polynomial in s (dependent on r) for large s, \(Re(s) < 0\).

Thus,

$$\begin{aligned}{} & {} \prod _{k=0}^{1}(\lambda _{k}-s)(Z_{w}(r,s)-e^{-sw})-r(\lambda _{0}(\lambda _{1}-s)\chi (s)Z_{w}(r,a_{0}s)\nonumber \\{} & {} \qquad +\lambda _{1}(\lambda _{0}-s)\psi (s)Z_{w}(r,a_{1}s))\nonumber \\{} & {} \quad = C_{0,w}(r)+sC_{1,w}(r)+s^{2}C_{2,w}(r),\,Re(s)\ge 0, \end{aligned}$$
(34)
$$\begin{aligned}{} & {} \prod _{k=0}^{1}(\lambda _{k}-s)\left( \frac{r}{1-r}-rM_{w}(r,s)\right) \nonumber \\{} & {} \quad = C_{0,w}(r)+sC_{1,w}(r)+s^{2}C_{2,w}(r),\,Re(s)\le 0, \end{aligned}$$
(35)

with \(C_{i,w}(r)\), \(i=0,1,2,\) functions of r to be determined.

Taking \(s=0\) in (34) yields

$$\begin{aligned} \lambda _{0}\lambda _{1}\left( \frac{1}{1-r}-1\right) -r(\chi (0)+\psi (0))\lambda _{0}\lambda _{1}\frac{1}{1-r}=C_{0,w}(r), \end{aligned}$$

and having in mind that \(\chi (0)+\psi (0)=1\), we obtain \(C_{0,w}(r)=0\). Substituting \(s=\lambda _{0}\) in (34) leads to

$$\begin{aligned} -r(\lambda _{1}-\lambda _{0})\chi (\lambda _{0})Z_{w}(r,a_{0}\lambda _{0})=C_{1,w}(r)+\lambda _{0}C_{2,w}(r). \end{aligned}$$
(36)

Similarly, for \(s=\lambda _{1}\),

$$\begin{aligned} -r(\lambda _{0}-\lambda _{1})\psi (\lambda _{1})Z_{w}(r,a_{1}\lambda _{1})=C_{1,w}(r)+\lambda _{1}C_{2,w}(r). \end{aligned}$$
(37)

To obtain \(C_{1,w}(r)\), \(C_{2,w}(r)\), we still need to derive expressions for \(Z_{w}(r,a_{k}\lambda _{k})\), \(k=0,1\). We accomplish this task by first obtaining \(Z_{w}(r,s)\) through successive iterations of (34). Note that (34) can be written as

$$\begin{aligned} Z_{w}(r,s)=r\sum _{k=0}^{1}h_{k}(s)Z_{w}(r,a_{k}s)+L_{w}(r,s), \end{aligned}$$
(38)

where

$$\begin{aligned} L_{w}(r,s)= & {} \frac{sC_{1,w}(r)+s^{2}C_{2,w}(r)}{(\lambda _{0}-s)(\lambda _{1}-s)}+e^{-sw}, \nonumber \\ h_{k}(s)= & {} \frac{\lambda _{k}}{\lambda _{k}-s}(\chi (s)1_{\{k=0\}}+\psi (s)1_{\{k=1\}}),\,k=0,1. \end{aligned}$$
(39)

After \(n-1\) iterations, we obtain

$$\begin{aligned} Z_{w}(r,s)= & {} r^{n}\sum _{k=0}^{n}K_{k,n-k}(s)Z_{w}(r,a_{0}^{k}a_{1}^{n-k}s)\nonumber \\{} & {} +\sum _{i=0}^{n-1}r^{i} \sum _{k=0}^{i}K_{k,i-k}(s)L_{w}(r,a_{0}^{k}a_{1}^{i-k}s), \end{aligned}$$
(40)

where \(K_{k,n-k}(s)\) are recursively defined as follows: \(K_{0,0}(s)=1\), \(K_{.,-1}(s)=0=K_{-1,.}(s)\), \(K_{1,0}(s)=h_{0}(s)\), \(K_{0,1}(s)=h_{1}(s)\) and

$$\begin{aligned} K_{k+1,n-k}(s)=&K_{k,n-k}(s)h_{0}(a_{0}^{k}a_{1}^{n-k}s)+K_{k+1,n-k-1}(s)h_{1}(a_{0}^{k+1}a_{1}^{n-k-1}s),\,n-k\ge k+1, \\ K_{k,n-k+1}(s)=&K_{k,n-k}(s)h_{1}(a_{0}^{k}a_{1}^{n-k}s)+K_{k-1,n-k+1}(s)h_{0}(a_{0}^{k-1}a_{1}^{n-k+1}s),\,n-k\le k-1. \end{aligned}$$

Therefore,

$$\begin{aligned} Z_{w}(r,s)=\sum _{i=0}^{\infty }r^{i} \sum _{k=0}^{i}K_{k,i-k}(s)L_{w}(r,a_{0}^{k}a_{1}^{i-k}s)+\lim _{n\rightarrow \infty }r^{n}\sum _{k=0}^{n}K_{k,n-k}(s)Z_{w}(r,a_{0}^{k}a_{1}^{n-k}s). \nonumber \\ \end{aligned}$$
(41)

The second term in the RHS of (41) converges to zero due to the fact that \(|r|<1\); thus,

$$\begin{aligned} Z_{w}(r,s)=\sum _{i=0}^{\infty }r^{i} \sum _{k=0}^{i}K_{k,i-k}(s)L_{w}(r,a_{0}^{k}a_{1}^{i-k}s). \end{aligned}$$
(42)

Setting in (42) \(s=a_{k}\lambda _{k}\) we obtain expressions for the \(Z_{w}(r,a_{k}\lambda _{k})\), \(k=0,1\). Note that these expressions are given in terms of the unknowns \(C_{l,w}(r)\), \(l=1,2\). Substituting back in (36), (37), we obtain a linear system of two equations with two unknowns \(C_{l,w}(r)\), \(l=1,2\).
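The coefficients \(K_{k,m}(s)\) appearing above are easy to tabulate from their recursion: each one aggregates all interleavings of k applications of \(h_{0}\) and m applications of \(h_{1}\), the last applied map contributing a factor evaluated at the argument scaled by the earlier shifts. A sketch (Python; the rational stand-ins for \(h_{0},h_{1}\) and the values of \(a_{0},a_{1}\) are illustrative assumptions, not the actual ingredients from (39)):

```python
from functools import lru_cache

a0, a1 = 0.4, 0.7                  # assumed contraction factors
h0 = lambda s: 1.0 / (1.0 + s)     # toy stand-ins for h_0, h_1 in (39)
h1 = lambda s: 1.0 / (2.0 + s)

def K(k, m, s):
    """K_{k,m}(s): K_{0,0} = 1, any negative index gives 0, and the two
    branches correspond to whether the last applied map was h_0 or h_1."""
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i < 0 or j < 0:
            return 0.0
        if i == 0 and j == 0:
            return 1.0
        return (rec(i - 1, j) * h0(a0 ** (i - 1) * a1 ** j * s)
                + rec(i, j - 1) * h1(a0 ** i * a1 ** (j - 1) * s))
    return rec(k, m)
```

For example, \(K_{1,1}(s)=h_{0}(s)h_{1}(a_{0}s)+h_{1}(s)h_{0}(a_{1}s)\), corresponding to the two possible orders of applying the maps.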

4.1.1 Stationary analysis

Using Abel’s theorem, or considering directly the limiting random variable W, which satisfies the relation \(W=\left\{ \begin{array}{ll} \left[ a_{0}W+B-J^{(0)}\right] ^{+},&{}\,B<T,\\ \left[ a_{1}W+T-J^{(1)}\right] ^{+},&{}\,B\ge T,\end{array}\right. \) leads for \(Re(s)=0\) to

$$\begin{aligned} \begin{array}{rl} E(e^{-sW})=&\frac{\lambda _{0}}{\lambda _{0}-s}\chi (s)E(e^{-sa_{0}W})+\frac{\lambda _{1}}{\lambda _{1}-s}\psi (s)E(e^{-sa_{1}W})+1-M(s), \end{array} \end{aligned}$$
(43)

where

$$\begin{aligned} M(s):=E(e^{-s[a_{0}W+B-J^{(0)}]^{-}}1(B<T))+E(e^{-s[a_{1}W+T-J^{(1)}]^{-}}1(B\ge T)). \end{aligned}$$

Setting \(Z(s)=E(e^{-sW})\), and following similar arguments as above, we obtain,

$$\begin{aligned} Z(s)=\sum _{k=0}^{1}h_{k}(s)Z(a_{k}s)+L(s), \end{aligned}$$
(44)

where \(h_{k}(s)\), \(k=0,1\), are as above and \( L(s)=\frac{sC_{1}+s^{2}C_{2}}{(\lambda _{0}-s)(\lambda _{1}-s)}\).

Note that \(L(0)=0\), and [1, Theorem 2] applies. Thus, iterating (44), we have

$$\begin{aligned} Z(s)= \lim _{n\rightarrow \infty }\sum _{k=0}^{n}K_{k,n-k}(s)+\sum _{i=0}^{\infty } \sum _{k=0}^{i}K_{k,i-k}(s)L(a_{0}^{k}a_{1}^{i-k}s), \end{aligned}$$
(45)

where the \(K_{k,n-k}(s)\) are as above. The coefficients \(C_{1}\), \(C_{2}\) can be obtained by first deriving expressions for the terms \(Z(a_{k}\lambda _{k})\), setting \(s=a_{k}\lambda _{k}\), \(k=0,1\), in (45):

$$\begin{aligned} \begin{array}{rl} Z(a_{0}\lambda _{0})= \lim _{n\rightarrow \infty }\sum _{k=0}^{n}K_{k,n-k}(a_{0}\lambda _{0})+\sum _{i=0}^{\infty } \sum _{k=0}^{i}K_{k,i-k}(a_{0}\lambda _{0})L(a_{0}^{k+1}a_{1}^{i-k}\lambda _{0}), \\ Z(a_{1}\lambda _{1})= \lim _{n\rightarrow \infty }\sum _{k=0}^{n}K_{k,n-k}(a_{1}\lambda _{1})+\sum _{i=0}^{\infty } \sum _{k=0}^{i}K_{k,i-k}(a_{1}\lambda _{1})L(a_{0}^{k}a_{1}^{i-k+1}\lambda _{1}). \end{array} \nonumber \\ \end{aligned}$$
(46)

Then, by substituting these expressions in the following equations (that are derived similarly as those in (36), (37)):

$$\begin{aligned} -(\lambda _{1}-\lambda _{0})\chi (\lambda _{0})Z(a_{0}\lambda _{0})=C_{1}+\lambda _{0}C_{2}, \end{aligned}$$
(47)
$$\begin{aligned} -(\lambda _{0}-\lambda _{1})\psi (\lambda _{1})Z(a_{1}\lambda _{1})=C_{1}+\lambda _{1}C_{2}, \end{aligned}$$
(48)

we derive a linear system of equations to obtain the unknown coefficients \(C_{1}\), \(C_{2}\).

Remark 8

It would be interesting to consider the performance measures \(P(W = 0)\) and \(E(W^{l})\), \(l = 1, 2,\ldots ,\) in the regime that \(a_{k}\rightarrow 1\), \(k=0,1\) (see also [7, Section 2.3]), i.e. a perturbation of the model in [9].

Differentiating (44) with respect to s and letting \(s=0\) yields, after some algebra,

$$\begin{aligned} E(W):=-\frac{d}{ds}Z(s)|_{s=0}=-\frac{\chi ^{\prime }(0)+\psi ^{\prime }(0)+\frac{\chi (0)}{\lambda _{0}}+\frac{\psi (0)}{\lambda _{1}}+\frac{C_{1}}{\lambda _{0}\lambda _{1}}}{1-a_{0}\chi (0)-a_{1}\psi (0)}, \end{aligned}$$

where \(f^{\prime }(.)\) denotes the derivative of a function f(.) and \(C_{1}\), \(C_{2}\) are derived as shown above.

4.1.2 The case \(a_{0}\in (0,1)\), \(a_{1}=1\)

We now consider the stationary version of the special case where \(a_{1}=1\), i.e. we assume that when \(B_{n}\ge T_{n}\), the next arrival does not make obsolete a fixed fraction of the work already present. This may be seen as natural if we recall that in such a case the service time is cut off, since it exceeds the threshold \(T_{n}\). Following similar arguments as above, we obtain

$$\begin{aligned} \begin{array}{l} Z(s)= \frac{\lambda _{0}}{\lambda _{0}-s}\chi (s)Z(a_{0}s)+\frac{\lambda _{1}}{\lambda _{1}-s}\psi (s)Z(s)+M^{-}(s)\Leftrightarrow \\ (\lambda _{0}-s)Z(s)-\lambda _{0}\beta (s)Z(a_{0}s)=(\lambda _{0}-s)\frac{\beta (s)}{\chi (s)}M^{-}(s), \end{array} \end{aligned}$$
(49)

where \(\beta (s):=\frac{\chi (s)}{1-\frac{\lambda _{1}\psi (s)}{\lambda _{1}-s}}\), \(M^{-}(s):=1-E(e^{-s[a_{0}W+B-J^{(0)}]^{-}}1(B<T))-E(e^{-s[W+T-J^{(1)}]^{-}}1(B\ge T))\). Note that \(\beta (s)\) is the LST of the distribution of the random variable \(\tilde{B}\), which is the time elapsed from the epoch a service request arrives until the epoch the registered service is of threshold type:

$$\begin{aligned} \beta (s)&=E(e^{-s B}1(B\le T))+E(e^{-s(T-J^{(1)})}1(B\ge T))\beta (s)\Leftrightarrow \beta (s)\\&=\frac{E(e^{-s B}1(B\le T))}{1-E(e^{-s(T-J^{(1)})}1(B\ge T))}. \end{aligned}$$

Thus, following the lines in [8], Liouville’s theorem [14, Theorem 10.52] implies that

$$\begin{aligned} (\lambda _{0}-s)Z(s)-\lambda _{0}\beta (s)Z(a_{0}s)=C_{0}+sC_{1}. \end{aligned}$$
(50)

For \(s=0\), (50) implies that \(C_{0}=0\). Thus,

$$\begin{aligned} Z(s)=\frac{\lambda _{0}}{\lambda _{0}-s}\beta (s)Z(a_{0}s)+\frac{sC_{1}}{\lambda _{0}-s}, \end{aligned}$$
(51)

which has a solution similar to the one in [8, Theorem 2.2], so further details are omitted.

4.1.3 The case \(a_{0}=a_{1}:=a\in (0,1)\)

Now consider the case where the fraction of work that becomes obsolete because of an arrival is independent of whether \(B<T\) or \(B\ge T\). In such a scenario, for \(Re(s)=0\),

$$\begin{aligned}{} & {} \prod _{k=0}^{1}(\lambda _{k}-s)Z(s)-[\lambda _{0}(\lambda _{1}-s)\chi (s)+\lambda _{1}(\lambda _{0}-s)\psi (s)]Z(as)\nonumber \\{} & {} \quad =\prod _{k=0}^{1}(\lambda _{k}-s)(1-M(s)). \end{aligned}$$
(52)

Now we have:

  • The LHS of (52) is analytic in \(Re(s)>0\) and continuous in \(Re(s)\ge 0\).

  • The RHS of (52) is analytic in \(Re(s)<0\) and continuous in \(Re(s)\le 0\).

  • Z(s) is for \(Re(s)\ge 0\) bounded by 1, and hence, the LHS of (52) behaves at most as a quadratic polynomial in s for large s, with \(Re(s)> 0\).

  • M(s) is for \(Re(s)\le 0\) bounded by 1, and hence, the RHS of (52) behaves at most as a quadratic polynomial in s for large s, with \(Re(s)< 0\).

Liouville’s theorem [14, Theorem 10.52] implies that both sides in (52) are equal to the same quadratic polynomial in s, in their respective half-planes. Therefore, for \(Re(s)\ge 0\),

$$\begin{aligned}{} & {} \prod _{k=0}^{1}(\lambda _{k}-s)Z(s)-[\lambda _{0}(\lambda _{1}-s)\chi (s)+\lambda _{1}(\lambda _{0}-s)\psi (s)]Z(as)=C_{0}+sC_{1}+s^{2}C_{2}.\nonumber \\ \end{aligned}$$
(53)

Setting \(s=0\) in (53), and having in mind that \(\chi (0)+\psi (0)=1\), we obtain \(C_{0}=0\). Setting \(s=\lambda _{i}\), \(i=0,1\), we obtain

$$\begin{aligned} -(\lambda _{1}-\lambda _{0})\chi (\lambda _{0})Z(a\lambda _{0})=&C_{1}+\lambda _{0}C_{2}, \nonumber \\ -(\lambda _{0}-\lambda _{1})\psi (\lambda _{1})Z(a\lambda _{1})=&C_{1}+\lambda _{1}C_{2}. \end{aligned}$$
(54)

We further need to obtain \(Z(a\lambda _{i})\), \(i=0,1\). Note that \(Z(a\lambda _{i})=P(J^{(i)}>a W)\), \(i=0,1\). Now (53) is rewritten as

$$\begin{aligned} Z(s)=H(s)Z(as)+L(s), \end{aligned}$$
(55)

where \(H(s):=\frac{\lambda _{0}}{\lambda _{0}-s}\chi (s)+\frac{\lambda _{1}}{\lambda _{1}-s}\psi (s)\). Iterating (55) and having in mind that \(Z(a^{n}s)\rightarrow 1\), as \(n\rightarrow \infty \), we obtain,

$$\begin{aligned} Z(s)=\prod _{n=0}^{\infty }H(a^{n}s)+\sum _{n=0}^{\infty }L(a^{n}s)\prod _{j=0}^{n-1}H(a^{j}s). \end{aligned}$$
(56)

Note that in (56), Z(s) appears to have singularities at \(s=\lambda _{k}/a^{j}\), \(j=0,1,\ldots \), \(k=0,1\), but following [8, Remark 2.5], it can be seen that these singularities are removable.

Setting \(s=a\lambda _{0}\),

$$\begin{aligned} Z(a\lambda _{0})= & {} \prod _{n=0}^{\infty }\frac{(\lambda _{1}-a^{n+1}\lambda _{0})\chi (a^{n+1}\lambda _{0})+\lambda _{1}(1-a^{n+1})\psi (a^{n+1}\lambda _{0})}{(\lambda _{1}-a^{n+1}\lambda _{0})(1-a^{n+1})} \nonumber \\{} & {} +\sum _{n=0}^{\infty }\frac{a^{n+1}(C_{1}+C_{2}\lambda _{0}a^{n+1})}{(\lambda _{1}-a^{n+1}\lambda _{0})(1-a^{n+1})}\nonumber \\{} & {} \prod _{j=0}^{n-1}\frac{(\lambda _{1}-a^{j+1}\lambda _{0})\chi (a^{j+1}\lambda _{0})+\lambda _{1}(1-a^{j+1})\psi (a^{j+1}\lambda _{0})}{(\lambda _{1}-a^{j+1}\lambda _{0})(1-a^{j+1})}. \end{aligned}$$
(57)

Similarly, for \(s=a\lambda _{1}\),

$$\begin{aligned} Z(a\lambda _{1})= & {} \prod _{n=0}^{\infty }\frac{(\lambda _{0}-a^{n+1}\lambda _{1})\psi (a^{n+1}\lambda _{1})+\lambda _{0}(1-a^{n+1})\chi (a^{n+1}\lambda _{1})}{(\lambda _{0}-a^{n+1}\lambda _{1})(1-a^{n+1})}\nonumber \\{} & {} +\sum _{n=0}^{\infty }\frac{a^{n+1}(C_{1}+C_{2}\lambda _{1}a^{n+1})}{(\lambda _{0}-a^{n+1}\lambda _{1})(1-a^{n+1})}\nonumber \\{} & {} \prod _{j=0}^{n-1}\frac{(\lambda _{0}-a^{j+1}\lambda _{1})\psi (a^{j+1}\lambda _{1})+\lambda _{0}(1-a^{j+1})\chi (a^{j+1}\lambda _{1})}{(\lambda _{0}-a^{j+1}\lambda _{1})(1-a^{j+1})}. \end{aligned}$$
(58)

Substituting (57), (58) in (54), we obtain a linear system of equations for the unknown coefficients \(C_{1}\), \(C_{2}\).
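In the exponential case (\(B\sim exp(\mu )\), \(T\sim exp(\theta )\); an illustrative assumption), this program is easy to carry out numerically: by (56), \(Z(a\lambda _{k})\) is affine in \((C_{1},C_{2})\), so three evaluations of the truncated series extract the affine coefficients, and (54) reduces to a \(2\times 2\) linear system. A sketch under these assumptions (the products converge fast since \(H(a^{n}s)\rightarrow 1\) geometrically):

```python
lam0, lam1, a = 2.0, 3.0, 0.5     # assumed rates and common factor a
mu, theta = 1.0, 0.5              # assumed rates: B ~ exp(mu), T ~ exp(theta)
N = 80                            # truncation of the infinite products/sums

chi = lambda s: mu / (s + mu + theta)
psi = lambda s: theta / (s + mu + theta)
H = lambda s: lam0 / (lam0 - s) * chi(s) + lam1 / (lam1 - s) * psi(s)

def Z(s, C1, C2):
    """Truncated (56): prod_n H(a^n s) + sum_n L(a^n s) prod_{j<n} H(a^j s)."""
    L = lambda u: (u * C1 + u * u * C2) / ((lam0 - u) * (lam1 - u))
    total, prod = 0.0, 1.0
    for n in range(N):
        u = a ** n * s
        total += L(u) * prod
        prod *= H(u)
    return prod + total

def residuals(C1, C2):
    # equations (54); both vanish at the correct (C1, C2)
    r1 = -(lam1 - lam0) * chi(lam0) * Z(a * lam0, C1, C2) - (C1 + lam0 * C2)
    r2 = -(lam0 - lam1) * psi(lam1) * Z(a * lam1, C1, C2) - (C1 + lam1 * C2)
    return r1, r2

# residuals are affine in (C1, C2): extract coefficients, solve by Cramer's rule
r0 = residuals(0.0, 0.0)
rA = residuals(1.0, 0.0)
rB = residuals(0.0, 1.0)
m11, m21 = rA[0] - r0[0], rA[1] - r0[1]
m12, m22 = rB[0] - r0[0], rB[1] - r0[1]
det = m11 * m22 - m12 * m21
C1 = (-r0[0] * m22 + r0[1] * m12) / det
C2 = (-r0[1] * m11 + r0[0] * m21) / det
```

Since \(L(0)=0\) and \(H(0)=\chi (0)+\psi (0)=1\), the resulting transform automatically satisfies \(Z(0)=1\).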

Remark 9

Assume now that the interarrival times are deterministically proportional to the service times. More precisely, let \(J^{(k)}_{n}=c_{k}U^{(k)}_{n}+X^{(k)}_{n}\), \(c_{k}\in (0,1)\), \(k=0,1\), where \(U^{(0)}_{n}:=B_{n}\), \(U^{(1)}_{n}:=T_{n}\), and \(X^{(k)}_{n}\sim exp(\delta _{k})\). Thus,

$$\begin{aligned} W_{n+1}=\left\{ \begin{array}{ll} \left[ a_{0}W_{n}+(1-c_{0})B_{n}-X^{(0)}_{n}\right] ^{+},&{}\,B_{n}<T_{n},\\ \left[ a_{1}W_{n}+(1-c_{1})T_{n}-X^{(1)}_{n}\right] ^{+},&{}\,B_{n}\ge T_{n}. \end{array}\right. \end{aligned}$$

Following similar arguments as in the previous section, we arrive, for \(Re(s)=0\), at,

$$\begin{aligned} Z_{w}(r,s)-e^{-sw}&=r\frac{\delta _{0}}{\delta _{0}-s}\chi (s(1-c_{0}))Z_{w}(r,a_{0}s)\\&\quad +r\frac{\delta _{1}}{\delta _{1}-s}\psi (s(1-c_{1}))Z_{w}(r,a_{1}s)+\frac{r}{1-r}-rM_{w}(r,s), \end{aligned}$$

where now \(M_{w}(r,s)=\sum _{n=0}^{\infty }r^{n}U^{-}_{w,n}(s)\) with

$$\begin{aligned} U^{-}_{w,n}(s)&:=E_{w}(e^{-s[a_{0}W_{n}+(1-c_{0})B_{n}-X^{(0)}_{n}]^{-}}1(B_{n}<T_{n}))\\&\qquad +E_{w}(e^{-s[a_{1}W_{n}+(1-c_{1})T_{n}-X^{(1)}_{n}]^{-}}1(B_{n}\ge T_{n})). \end{aligned}$$

Using similar arguments as above, Liouville’s theorem [14, Theorem 10.52] implies that

$$\begin{aligned}&\prod _{k=0}^{1}(\delta _{k}-s)(Z_{w}(r,s)-e^{-sw})-r(\delta _{0}(\delta _{1}-s)\chi (s(1-c_{0}))Z_{w}(r,a_{0}s)\\&\qquad +\delta _{1}(\delta _{0}-s)\psi (s(1-c_{1}))Z_{w}(r,a_{1}s))\\&\quad =C_{0,w}(r)+sC_{1,w}(r)+s^{2}C_{2,w}(r),\,Re(s)\ge 0. \end{aligned}$$

The rest of the analysis follows as in the previous section; similar steps also cover the stationary analysis, so further details are omitted.

4.2 Interarrival times randomly proportionally dependent on service times

Assume that \(J^{(k)}_{n}=G^{(k)}_{n}U^{(k)}_{n}+X^{(k)}_{n}\), \(k=0,1\), where \(U^{(0)}_{n}:=B_{n}\), \(U^{(1)}_{n}:=T_{n}\), and the \(X_{n}^{(k)}\) are i.i.d. random variables whose distributions have rational LSTs:

$$\begin{aligned} \phi _{X_{k}}(s)=\frac{N_{k}(s)}{D_{k}(s)},\,k=0,1, \end{aligned}$$

where \(D_{k}(s):=\prod _{i=1}^{L_{k}}(s+t_{i}^{(k)})\) and \(N_{k}(s)\) is a polynomial of degree at most \(L_{k}-1\), not sharing zeros with \(D_{k}(s)\), \(k=0,1\). Moreover, assume that \(Re(t_{i}^{(k)})>0\), \(i=1,\ldots ,L_{k}\). Thus,

$$\begin{aligned} W_{n+1}=\left\{ \begin{array}{ll} \left[ a_{0}W_{n}+(1-G^{(0)}_{n})B_{n}-X^{(0)}_{n}\right] ^{+},&{}\,B_{n}<T_{n},\\ \left[ a_{1}W_{n}+(1-G^{(1)}_{n})T_{n}-X^{(1)}_{n}\right] ^{+},&{}\,B_{n}\ge T_{n}, \end{array}\right. \end{aligned}$$
(59)

where \(P(G_{n}^{(0)}=\beta _{i})=p_{i}\), \(i=1,\ldots ,K\), \(P(G_{n}^{(1)}=\gamma _{i})=q_{i}\), \(i=1,\ldots ,M\). Assume that \(\beta _{i}\in (0,1)\), \(i=1,\ldots ,K\), \(\gamma _{i}\in (0,1)\), \(i=1,\ldots ,M\). Following similar arguments as in the previous section, we arrive for \(Re(s)=0\), at

$$\begin{aligned} Z_{w}(r,s)-e^{-sw}= & {} r\frac{N_{0}(-s)}{D_{0}(-s)}\sum _{i=1}^{K}p_{i}\chi (s(1-\beta _{i}))Z_{w}(r,a_{0}s)\nonumber \\{} & {} +r\frac{N_{1}(-s)}{D_{1}(-s)}\sum _{i=1}^{M}q_{i}\psi (s(1-\gamma _{i}))Z_{w}(r,a_{1}s)\nonumber \\{} & {} +\frac{r}{1-r}-rM_{w}(r,s), \end{aligned}$$
(60)

where now \(M_{w}(r,s)=\sum _{n=0}^{\infty }r^{n}U^{-}_{w,n}(s)\) with

$$\begin{aligned} U^{-}_{w,n}(s)&:=E_{w}(e^{-s[a_{0}W_{n}+(1-G_{n}^{(0)})B_{n}-X^{(0)}_{n}]^{-}}1(B_{n}<T_{n}))\\&\quad +E_{w}(e^{-s[a_{1}W_{n}+(1-G_{n}^{(1)})T_{n}-X^{(1)}_{n}]^{-}}1(B_{n}\ge T_{n})). \end{aligned}$$

Then, for \(Re(s)=0\),

$$\begin{aligned} \begin{array}{l} D_{0}(-s)D_{1}(-s)[ Z_{w}(r,s)-e^{-sw}]-rN_{0}(-s)D_{1}(-s)\sum _{i=1}^{K}p_{i}\chi (s(1-\beta _{i}))Z_{w}(r,a_{0}s)\\ -rN_{1}(-s)D_{0}(-s)\sum _{i=1}^{M}q_{i}\psi (s(1-\gamma _{i}))Z_{w}(r,a_{1}s)=D_{0}(-s)D_{1}(-s)[\frac{r}{1-r}-rM_{w}(r,s)]. \end{array} \nonumber \\ \end{aligned}$$
(61)

Now we have:

  • The LHS of (61) is analytic in \(Re(s)>0\) and continuous in \(Re(s)\ge 0\).

  • The RHS of (61) is analytic in \(Re(s)<0\) and continuous in \(Re(s)\le 0\).

  • For large s, both sides in (61) are \(O(s^{L_{0}+L_{1}})\) in their respective half-planes.

Thus, Liouville’s theorem [14, Theorem 10.52] implies that for \(Re(s)\ge 0\),

$$\begin{aligned} \begin{array}{l} D_{0}(-s)D_{1}(-s)[ Z_{w}(r,s)-e^{-sw}]-rN_{0}(-s)D_{1}(-s)\displaystyle \sum _{i=1}^{K}p_{i}\chi (s(1-\beta _{i}))Z_{w}(r,a_{0}s)\\ -rN_{1}(-s)D_{0}(-s)\displaystyle \sum _{i=1}^{M}q_{i}\psi (s(1-\gamma _{i}))Z_{w}(r,a_{1}s)=\sum _{l=0}^{L_{0}+L_{1}}C_{l}(r)s^{l}, \end{array} \end{aligned}$$
(62)

and for \(Re(s)\le 0\),

$$\begin{aligned} D_{0}(-s)D_{1}(-s)\left[ \frac{r}{1-r}-rM_{w}(r,s)\right] =\sum _{l=0}^{L_{0}+L_{1}}C_{l}(r)s^{l}. \end{aligned}$$

For \(s=0\), (62) implies after simple computations that \(C_{0}(r)=0\). For \(s=t_{j}^{(0)}\), \(j=1,\ldots ,L_{0}\), (62) implies that

$$\begin{aligned} -rN_{0}(-t_{j}^{(0)})D_{1}(-t_{j}^{(0)})\sum _{i=1}^{K}p_{i}\chi (t_{j}^{(0)}(1-\beta _{i}))Z_{w}(r,a_{0}t_{j}^{(0)})=\sum _{l=1}^{L_{0}+L_{1}}C_{l}(r)(t_{j}^{(0)})^{l}. \nonumber \\ \end{aligned}$$
(63)

Similarly, for \(s=t_{j}^{(1)}\), \(j=1,\ldots ,L_{1}\), we have,

$$\begin{aligned} -rN_{1}(-t_{j}^{(1)})D_{0}(-t_{j}^{(1)})\sum _{i=1}^{M}q_{i}\psi (t_{j}^{(1)}(1-\gamma _{i}))Z_{w}(r,a_{1}t_{j}^{(1)})=\sum _{l=1}^{L_{0}+L_{1}}C_{l}(r)(t_{j}^{(1)})^{l}. \nonumber \\ \end{aligned}$$
(64)

Note that \(N_{k}(-t_{j}^{(k)})\ne 0\), \(j=1,\ldots ,L_{k},\) \(k=0,1\). Then, (63), (64) constitute a system of equations for the remaining coefficients \(C_{l}(r)\), \(l=1,\ldots ,L_{0}+L_{1}\). However, we still need to obtain \(Z_{w}(r,a_{k}t_{j}^{(k)})\), \(k=0,1\), \(j=1,\ldots ,L_{k}\). Note that (62) has the same form as (38) but now,

$$\begin{aligned} \begin{array}{rl} L_{w}(r,s)=&{}\dfrac{\sum _{l=1}^{L_{0}+L_{1}}s^{l}C_{l}(r)}{D_{0}(-s)D_{1}(-s)}+e^{-sw},\\ h_{k}(s)=&{}\dfrac{N_{k}(-s)}{D_{k}(-s)}(\sum _{i=1}^{K}p_{i}\chi (s(1-\beta _{i}))1_{\{k=0\}}+\sum _{i=1}^{M}q_{i}\psi (s(1-\gamma _{i}))1_{\{k=1\}}),\,k=0,1. \end{array}\nonumber \\ \end{aligned}$$
(65)

Thus, the expression for \(Z_{w}(r,s)\) is the same as in (42), where the expressions \(K_{k,i-k}(s)\), \(L_{w}(r,a_{0}^{k}a_{1}^{i-k}s)\) are obtained analogously using (65). Having this expression, we can obtain \(Z_{w}(r,a_{k}t_{j}^{(k)})\), \(j=1,\ldots ,L_{k}\), \(k=0,1\). Substituting back in (63), (64), we can derive the remaining coefficients \(C_{l}(r)\), \(l=1,\ldots ,L_{0}+L_{1}\).

4.2.1 A more general case

We now consider the case where the interarrival times also depend on the system time. More precisely, we assume that \(J^{(k)}_{n}=G_{n}^{(k)}(U^{(k)}_{n}+W_{n})+X^{(k)}_{n}\), \(k=0,1\). Then,

$$\begin{aligned} W_{n+1}=\left\{ \begin{array}{ll} \left[ (1-G_{n}^{(0)})W_{n}+(1-G_{n}^{(0)})B_{n}-X^{(0)}_{n}\right] ^{+},&{}\,B_{n}<T_{n},\\ \left[ (1-G_{n}^{(1)})W_{n}+(1-G_{n}^{(1)})T_{n}-X^{(1)}_{n}\right] ^{+},&{}\,B_{n}\ge T_{n}. \end{array}\right. \end{aligned}$$
(66)

Thus,

$$\begin{aligned} \begin{array}{rl} E_{w}(e^{-sW_{n+1}}) = &{} E_{w}(e^{-s[(1-G_{n}^{(0)})W_{n}+(1-G_{n}^{(0)})B_{n}-X^{(0)}_{n}]^{+}}1(B_{n}<T_{n}))\\ &{}+ E_{w}(e^{-s[(1-G_{n}^{(1)})W_{n}+(1-G_{n}^{(1)})T_{n}-X^{(1)}_{n}]^{+}}1(B_{n}\ge T_{n}))\\ =&{}E(e^{sJ^{(0)}_{n}})\sum _{i=1}^{K}p_{i}E_{w}(e^{-s\bar{\beta }_{i}W_{n}})E(e^{-s\bar{\beta }_{i}B_{n}}1(B_{n}<T_{n}))\\ {} &{}+E(e^{sJ^{(1)}_{n}})\sum _{i=1}^{M}q_{i}E_{w}(e^{-s\bar{\gamma }_{i}W_{n}})E(e^{-s\bar{\gamma }_{i}T_{n}}1(B_{n}\ge T_{n}))+1-U^{-}_{w,n}(s), \end{array} \nonumber \\ \end{aligned}$$
(67)

Then, by using (67), and similar arguments as above, we obtain for \(Re(s)=0\),

$$\begin{aligned} Z_{w}(r,s)-e^{-sw}&=r\phi _{X_{0}}(-s)\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i})Z_{w}(r,s\bar{\beta }_{i})\\&\qquad +r\phi _{X_{1}}(-s)\sum _{i=1}^{M}q_{i}\psi (s\bar{\gamma }_{i})Z_{w}(r,s\bar{\gamma }_{i})\\&\qquad +\frac{r}{1-r}-rM_{w}(r,s), \end{aligned}$$

or equivalently,

$$\begin{aligned} \begin{array}{l} D_{0}(-s)D_{1}(-s)[ Z_{w}(r,s)-e^{-sw}]-rN_{0}(-s)D_{1}(-s)\displaystyle \sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i})Z_{w}(r,s\bar{\beta }_{i})\\ -rN_{1}(-s)D_{0}(-s)\displaystyle \sum _{i=1}^{M}q_{i}\psi (s\bar{\gamma }_{i})Z_{w}(r,s\bar{\gamma }_{i})=D_{0}(-s)D_{1}(-s)[\frac{r}{1-r}-rM_{w}(r,s)]. \end{array} \nonumber \\ \end{aligned}$$
(68)

Now we have:

  • The LHS of (68) is analytic in \(Re(s)>0\) and continuous in \(Re(s)\ge 0\).

  • The RHS of (68) is analytic in \(Re(s)<0\) and continuous in \(Re(s)\le 0\).

  • For large s, both sides in (68) are \(O(s^{L_{0}+L_{1}})\) in their respective half-planes.

Thus, Liouville’s theorem [14, Theorem 10.52] implies that for \(Re(s)\ge 0\),

$$\begin{aligned}{} & {} D_{0}(-s)D_{1}(-s)[ Z_{w}(r,s)-e^{-sw}]-rN_{0}(-s)D_{1}(-s)\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i})Z_{w}(r,s\bar{\beta }_{i})\nonumber \\{} & {} \quad -rN_{1}(-s)D_{0}(-s)\sum _{i=1}^{M}q_{i}\psi (s\bar{\gamma }_{i})Z_{w}(r,s\bar{\gamma }_{i})=\sum _{l=0}^{L_{0}+L_{1}}C_{l}(r)s^{l}. \end{aligned}$$
(69)

For \(s=0\), \(C_{0}(r)=0\). For convenience, set \(\bar{\beta }_{i}=a_{i}\), \(i=1,\ldots ,K\), \(\bar{\gamma }_{i}=a_{K+i}\), \(q_{i}=p_{K+i}\), \(i=1,\ldots ,M\), and

$$\begin{aligned} f(a_{i}s):=\left\{ \begin{array}{ll} \phi _{X_{0}}(-s)\chi (sa_{i}),&{}i=1,\ldots ,K, \\ \phi _{X_{1}}(-s)\psi (sa_{i}),&{}i=K+1,\ldots ,K+M. \end{array}\right. \end{aligned}$$

Then, (69) can be written as

$$\begin{aligned} Z_{w}(r,s)=r\sum _{i=1}^{K+M}p_{i}f(a_{i}s)Z_{w}(r,a_{i}s)+L_{w}(r,s), \end{aligned}$$
(70)

where \(L_{w}(r,s):=\frac{\sum _{l=1}^{L_{0}+L_{1}}s^{l}C_{l}(r)}{D_{0}(-s)D_{1}(-s)}+e^{-sw}\). Therefore,

$$\begin{aligned} Z_{w}(r,s)= & {} \sum _{i=0}^{\infty }r^{i} \sum _{i_{1}+\ldots +i_{K+M}=i}p_{1}^{i_{1}}\ldots p_{K+M}^{i_{K+M}}L_{i_{1},\ldots ,i_{K+M}}(s)+\lim _{n\rightarrow \infty }r^{n}\sum _{i_{1}+\ldots +i_{K+M}=n}\nonumber \\{} & {} \times p_{1}^{i_{1}}\ldots p_{K+M}^{i_{K+M}}L_{i_{1},\ldots ,i_{K+M}}(s)Z_{w}(r,a_{1}^{i_{1}}\ldots a_{K+M}^{i_{K+M}}s), \end{aligned}$$
(71)

where \(L_{0,0,\ldots ,0,1,0,\ldots ,0}(s):=f(a_{k}s)\), with 1 in position k, and \(k=1,\ldots , K+M\),

$$\begin{aligned} L_{i_{1},\ldots ,i_{K+M}}(s)=f(a_{1}^{i_{1}}\ldots a_{K+M}^{i_{K+M}}s)\sum _{j=1}^{K+M}L_{i_{1},\ldots ,i_{j}-1,\ldots ,i_{K+M}}(s). \end{aligned}$$

The second term in the RHS of (71) converges to zero due to the fact that \(|r|<1\); thus,

$$\begin{aligned} Z_{w}(r,s)=\sum _{i=0}^{\infty }r^{i} \sum _{i_{1}+\ldots +i_{K+M}=i}p_{1}^{i_{1}}\ldots p_{K+M}^{i_{K+M}}L_{i_{1},\ldots ,i_{K+M}}(s). \end{aligned}$$
(72)
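The recursion for the coefficients \(L_{i_{1},\ldots ,i_{K+M}}(s)\) lends itself to memoized evaluation. The following is a minimal sketch under assumed ingredients: the contraction factors \(a_{k}\) and the kernel f below are made-up placeholders, not the paper's actual transforms.

```python
from functools import lru_cache

# Made-up data: K + M = 3 factors a_k in (0, 1) and a placeholder kernel f;
# the paper's f involves phi_{X_k}, chi and psi, which we do not model here.
a = (0.5, 0.7, 0.8)

def f(x):
    return 1.0 / (1.0 + x)  # placeholder for the factor f(a_k s)

@lru_cache(maxsize=None)
def L_coeff(idx, s):
    """L_{i_1,...,i_{K+M}}(s): unit vectors give f(a_k s); otherwise apply
    the recursion L_i(s) = f(prod_j a_j^{i_j} * s) * sum_j L_{i - e_j}(s)."""
    if sum(idx) == 1:
        return f(a[idx.index(1)] * s)
    scale = 1.0
    for aj, ij in zip(a, idx):
        scale *= aj ** ij
    total = sum(
        L_coeff(idx[:j] + (idx[j] - 1,) + idx[j + 1:], s)
        for j in range(len(idx))
        if idx[j] > 0
    )
    return f(scale * s) * total
```

By construction, \(L_{(1,1,0)}(s)=f(a_{1}a_{2}s)(f(a_{1}s)+f(a_{2}s))\), which serves as a quick consistency check.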

Setting \(s=t_{j}^{(k)}\), \(j=1,\ldots ,L_{k}\), \(k=0,1,\) in (69) we obtain a system of equations for the remaining coefficients \(C_{l}(r)\), \(l=1,\ldots ,L_{0}+L_{1}\). Specifically for \(s=t_{j}^{(0)}\), \(j=1,\ldots ,L_{0}\),

$$\begin{aligned} -rN_{0}(-t_{j}^{(0)})D_{1}(-t_{j}^{(0)})\sum _{i=1}^{K}p_{i}\chi (t_{j}^{(0)}\bar{\beta }_{i})Z_{w}(r,\bar{\beta }_{i}t_{j}^{(0)})=\sum _{l=1}^{L_{0}+L_{1}}C_{l}(r)(t_{j}^{(0)})^{l}, \nonumber \\ \end{aligned}$$
(73)

and for \(s=t_{j}^{(1)}\), \(j=1,\ldots ,L_{1}\), we have,

$$\begin{aligned} -rN_{1}(-t_{j}^{(1)})D_{0}(-t_{j}^{(1)})\sum _{i=1}^{M}q_{i}\psi (t_{j}^{(1)}\bar{\gamma }_{i})Z_{w}(r,\bar{\gamma }_{i}t_{j}^{(1)})=\sum _{l=1}^{L_{0}+L_{1}}C_{l}(r)(t_{j}^{(1)})^{l},\nonumber \\ \end{aligned}$$
(74)

where we have further used the expression in (72).

4.3 A mixed case

Consider the following recursion:

$$\begin{aligned} W_{n+1}=\left\{ \begin{array}{ll} \left[ aW_{n}+(1-G^{(0)}_{n})B_{n}-X^{(0)}_{n}\right] ^{+},&{}\,B_{n}<T_{n},\\ \left[ V_{n}W_{n}+(1-G^{(1)}_{n})T_{n}-X^{(1)}_{n}\right] ^{+},&{}\,B_{n}\ge T_{n}, \end{array}\right. \end{aligned}$$
(75)

where \(V_{n}<0\), \(a\in (0,1)\), and \(\beta _{i}\in (0,1)\), \(i=1,\ldots ,K\), \(\gamma _{i}>1\), \(i=1,\ldots ,M\). Then, following a similar procedure as above, we obtain for \(Re(s)=0\),

$$\begin{aligned} Z_{w}(r,s)-e^{-sw}&=r\frac{\delta _{0}}{\delta _{0}-s}\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i})Z_{w}(r,as) +r\frac{\delta _{1}}{\delta _{1}-s}\sum _{i=1}^{M}q_{i}\psi (s\bar{\gamma }_{i})\\&\quad \times \int _{-\infty }^{0}Z_{w}(r,sy)P(V\in \textrm{d}y)+\frac{r}{1-r}-rM_{w}(r,s), \end{aligned}$$

where \(M_{w}(r,s)=\sum _{n=0}^{\infty }r^{n}U^{-}_{w,n}(s)\) with

$$\begin{aligned} U^{-}_{w,n}(s)&:=E_{w}(e^{-s[aW_{n}+(1-G_{n}^{(0)})B_{n}-X^{(0)}_{n}]^{-}}1(B_{n}<T_{n}))\\&\quad +E_{w}(e^{-s[V_{n}W_{n}+(1-G_{n}^{(1)})T_{n}-X^{(1)}_{n}]^{-}}1(B_{n}\ge T_{n})). \end{aligned}$$

Equivalently, we have

$$\begin{aligned}{} & {} \prod _{j=0}^{1}(\delta _{j}-s)(Z_{w}(r,s)-e^{-sw})-Z_{w}(r,as)r\delta _{0}(\delta _{1}-s)\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i}) \nonumber \\{} & {} \quad =\int _{-\infty }^{0}Z_{w}(r,sy)P(V\in \textrm{d}y)r\delta _{1}(\delta _{0}-s)\sum _{i=1}^{M}q_{i}\psi (s\bar{\gamma }_{i})+\prod _{j=0}^{1}(\delta _{j}-s)(\frac{r}{1-r}-rM_{w}(r,s)). \nonumber \\ \end{aligned}$$
(76)

Clearly,

  • the LHS of (76) is analytic in \(Re(s)>0\) and continuous in \(Re(s)\ge 0\),

  • the RHS of (76) is analytic in \(Re(s)<0\) and continuous in \(Re(s)\le 0\),

  • for large s, both sides are \(O(s^{2})\) in their respective half-planes.

Thus, Liouville’s theorem [14, Theorem 10.52] now states that

$$\begin{aligned}&\prod _{j=0}^{1}(\delta _{j}-s)(Z_{w}(r,s)-e^{-sw})-Z_{w}(r,as)r\delta _{0}(\delta _{1}-s)\\&\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i}) =C_{0}+C_{1}s+C_{2}s^{2},\,Re(s)\ge 0. \end{aligned}$$

For \(s=0\), we have \(C_{0}=\frac{r\delta _{0}\delta _{1}}{1-r}(1-\chi (0))\). Setting \(s=\delta _{1}\) and \(s=\delta _{0}\), respectively, we obtain the following linear system of equations:

$$\begin{aligned} C_{2}\delta _{1}^{2}+C_{1}\delta _{1}=&-C_{0}, \\ C_{2}\delta _{0}^{2}+C_{1}\delta _{0}=&-C_{0}-r\delta _{0}(\delta _{1}-\delta _{0})\chi (\delta _{0})Z_{w}(r,a\delta _{0}), \end{aligned}$$

from which

$$\begin{aligned} \begin{array}{rl} C_{1}=&{}-\frac{r}{1-r}((\delta _{0}+\delta _{1})(1-\chi (0))+\delta _{1}\chi (\delta _{0})(1-r)Z_{w}(r,a\delta _{0})), \\ C_{2}=&{}\frac{r}{1-r}(1-\chi (0)+\chi (\delta _{0})(1-r)Z_{w}(r,a\delta _{0})). \end{array} \end{aligned}$$
(77)

It remains to find \(Z_{w}(r,a\delta _{0})\). This can be done by iteratively solving

$$\begin{aligned} Z_{w}(r,s)=r\frac{\delta _{0}}{\delta _{0}-s}\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i})Z_{w}(r,as)+\frac{C_{0}+sC_{1}+s^{2}C_{2}}{ \prod _{j=0}^{1}(\delta _{j}-s)}+e^{-sw}. \end{aligned}$$

In particular,

$$\begin{aligned} Z_{w}(r,s)=\sum _{n=0}^{\infty }L_{w}(r,a^{n}s)\prod _{j=0}^{n-1}K_{w}(r,a^{j}s), \end{aligned}$$
(78)

where \(L_{w}(r,s):=\frac{C_{0}+sC_{1}+s^{2}C_{2}}{ \prod _{j=0}^{1}(\delta _{j}-s)}+e^{-sw}\), \(K_{w}(r,s):=r\frac{\delta _{0}}{\delta _{0}-s}\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i})\). Thus,

$$\begin{aligned} Z_{w}(r,a\delta _{0})=\sum _{n=0}^{\infty }L_{w}(r,a^{n+1}\delta _{0})\prod _{j=0}^{n-1}K_{w}(r,a^{j+1}\delta _{0}). \end{aligned}$$
(79)

Substituting (79) in (77) we obtain a linear system of equations for the unknown coefficients \(C_{1}\), \(C_{2}\).
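Numerically, the series (78) can be evaluated by truncation: along \(a^{n}s\rightarrow 0\) the kernel tends to \(r\chi (0)\), with modulus smaller than one, so the partial products decay geometrically. A hedged sketch with placeholder ingredients (single mixture component, exponential-type transform for \(\chi \); all of these are assumptions, not the paper's model):

```python
import math

# Made-up parameters: a in (0, 1), delta0 > 0, discount r in (0, 1), W_0 = w.
a, delta0, r, w = 0.5, 1.0, 0.5, 0.0

def chi(s):
    return 1.0 / (1.0 + s)  # placeholder transform

def K_w(s):
    # kernel r * delta0 / (delta0 - s) * sum_i p_i chi(s * beta_bar_i),
    # here with a single mixture component (p_1 = 1, beta_bar_1 = 0.3)
    return r * delta0 / (delta0 - s) * chi(0.3 * s)

def L_w(s):
    # placeholder inhomogeneous term; the real one contains C_0, C_1, C_2
    return math.exp(-s * w) / (1.0 + s)

def Z_series(s, n_terms):
    # truncation of (78): sum_n L_w(a^n s) * prod_{j<n} K_w(a^j s)
    total, prod = 0.0, 1.0
    for n in range(n_terms):
        total += L_w(a ** n * s) * prod
        prod *= K_w(a ** n * s)
    return total
```

Since the tail is bounded by a geometric series with ratio roughly \(r\chi(0)\), a few dozen terms already give machine-level accuracy here.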

4.3.1 A more general case

Consider now the case where the Laplace–Stieltjes transforms of the distributions of \(X_{n}^{(k)}\), \(k=0,1,\) are rational:

$$\begin{aligned} A_{0}(s)=\frac{\widehat{A}_{0}(s)}{\prod _{j=1}^{L_{0}}(s+\delta _{j})},\,\,A_{1}(s)=\frac{\widehat{A}_{1}(s)}{\prod _{l=1}^{L_{1}}(s+\zeta _{l})}, \end{aligned}$$

where \(\widehat{A}_{k}(s)\) is a polynomial of degree at most \(L_{k}-1\) that does not share zeros with the corresponding denominator of \(A_{k}(s)\), \(k=0,1\). Moreover, assume that \(Re(\delta _{j})>0\), \(j=1,\ldots ,L_{0}\), and \(Re(\zeta _{l})<0\), \(l=1,\ldots ,L_{1}\). We further assume that \(\beta _{i}\in (0,1)\), \(i=1,\ldots ,K\), and \(\gamma _{i}>1\), \(i=1,\ldots ,M\). Then, for \(Re(s)=0\), (76) becomes:

$$\begin{aligned}{} & {} \prod _{j=1}^{L_{0}}(\delta _{j}-s)\prod _{l=1}^{L_{1}}(\zeta _{l}-s)(Z_{w}(r,s)-e^{-sw})\nonumber \\{} & {} \qquad -Z_{w}(r,as)r\widehat{A}_{0}(-s)\prod _{j=1}^{L_{1}}(\zeta _{j}-s)\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i})\nonumber \\{} & {} \quad =\int _{-\infty }^{0}Z_{w}(r,sy)P(V\in \textrm{d}y)r\widehat{A}_{1}(-s)\prod _{j=1}^{L_{0}}(\delta _{j}-s)\nonumber \\{} & {} \qquad \sum _{i=1}^{M}q_{i}\psi (s\bar{\gamma }_{i})+\prod _{j=1}^{L_{0}}(\delta _{j}-s)\nonumber \\{} & {} \qquad \prod _{l=1}^{L_{1}}\left( \zeta _{l}-s\right) \left( \frac{r}{1-r}-rM_{w}(r,s)\right) . \end{aligned}$$
(80)

Again, we have that

  • the LHS of (80) is analytic in \(Re(s)>0\) and continuous in \(Re(s)\ge 0\),

  • the RHS of (80) is analytic in \(Re(s)<0\) and continuous in \(Re(s)\le 0\),

  • for large s, both sides are \(O(s^{L_{0}+L_{1}})\) in their respective half-planes.

Thus, Liouville’s theorem [14, Theorem 10.52] states that for \(Re(s)\ge 0\),

$$\begin{aligned}{} & {} \prod _{j=1}^{L_{0}}(\delta _{j}-s)\prod _{l=1}^{L_{1}}(\zeta _{l}-s)(Z_{w}(r,s)-e^{-sw})-Z_{w}(r,as)r\widehat{A}_{0}(-s)\prod _{l=1}^{L_{1}}(\zeta _{l}-s)\nonumber \\{} & {} \sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i})=\sum _{k=0}^{L_{0}+L_{1}}C_{k}(r)s^{k}, \end{aligned}$$
(81)

and for \(Re(s)\le 0\),

$$\begin{aligned}{} & {} \int _{-\infty }^{0}Z_{w}(r,sy)P(V\in \textrm{d}y)r\widehat{A}_{1}(-s)\prod _{j=1}^{L_{0}}(\delta _{j}-s)\sum _{i=1}^{M}q_{i}\psi (s\bar{\gamma }_{i})\nonumber \\{} & {} \qquad +\prod _{j=1}^{L_{0}}(\delta _{j}-s)\prod _{l=1}^{L_{1}}(\zeta _{l}-s)\left( \frac{r}{1-r}-rM_{w}(r,s)\right) \nonumber \\{} & {} \quad =\sum _{k=0}^{L_{0}+L_{1}}C_{k}(r)s^{k}. \end{aligned}$$
(82)

Setting \(s=0\), and using either (81), or (82), we get after straightforward computations that

$$\begin{aligned} C_{0}(r)=\frac{r(1-\chi (0))}{1-r}\prod _{j=1}^{L_{0}}\delta _{j}\prod _{l=1}^{L_{1}}\zeta _{l}. \end{aligned}$$

For \(s=\delta _{j}\), \(j=1,\ldots ,L_{0}\), (81) gives,

$$\begin{aligned} \sum _{k=1}^{L_{0}+L_{1}}C_{k}(r)\delta _{j}^{k}=-r\widehat{A}_{0}(-\delta _{j})\prod _{l=1}^{L_{1}}(\zeta _{l}-\delta _{j})\sum _{i=1}^{K}p_{i}\chi (\delta _{j}\bar{\beta }_{i})Z_{w}(r,a\delta _{j}). \end{aligned}$$
(83)

We need \(L_{1}\) further equations to obtain all the coefficients \(C_{k}(r)\). Note that for \(s=\zeta _{l}\), \(l=1,\ldots ,L_{1}\), the expression (82) gives:

$$\begin{aligned} \sum _{k=1}^{L_{0}+L_{1}}C_{k}(r)\zeta _{l}^{k}=-r\widehat{A}_{1}(-\zeta _{l})\prod _{j=1}^{L_{0}}(\delta _{j}-\zeta _{l})\sum _{i=1}^{M}q_{i}\psi (\zeta _{l}\bar{\gamma }_{i})\int _{-\infty }^{0}Z_{w}(r,\zeta _{l}y)P(V\in \textrm{d}y). \nonumber \\ \end{aligned}$$
(84)

It is readily seen that (81) can be rewritten as

$$\begin{aligned} Z_{w}(r,s)=K(r,s)Z_{w}(r,as)+L_{w}(r,s), \end{aligned}$$
(85)

with

$$\begin{aligned} K(r,s)&=rA_{0}(-s)\sum _{i=1}^{K}p_{i}\chi (s\bar{\beta }_{i}),\\ L_{w}(r,s)&=\frac{\sum _{k=0}^{L_{0}+L_{1}}C_{k}(r)s^{k}}{\prod _{l=1}^{L_{1}}(\zeta _{l}-s)\prod _{j=1}^{L_{0}}(\delta _{j}-s)}+e^{-sw}. \end{aligned}$$

Iterating (85) implies that

$$\begin{aligned} Z_{w}(r,s)=\sum _{n=0}^{\infty }L_{w}(r,a^{n}s)\prod _{m=0}^{n-1}K(r,a^{m}s), \end{aligned}$$
(86)

where the convergence of the infinite sum can be proved with the aid of D’Alembert’s test, since \(a\in (0,1)\), and

$$\begin{aligned} \lim _{n\rightarrow \infty }\left| \frac{L_{w}(r,a^{n}s)}{L_{w}(r,a^{n+1}s)K(r,a^{n}s)}\right| =\left| \frac{1}{r\chi (0)}\right| >1. \end{aligned}$$

Setting \(s=a\delta _{j}\), \(j=1,\ldots ,L_{0}\), in (86), we obtain \(Z_{w}(r,a\delta _{j})\), which can be used in (83). Moreover, expression (86) can be used in (84). Thus, we can construct a system of \(L_{0}+L_{1}\) equations for the unknown coefficients \(C_{k}(r)\), \(k=1,\ldots ,L_{0}+L_{1}\).
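Given numerical values for the right-hand sides, the resulting equations form a Vandermonde-type linear system \(\sum _{k=1}^{L_{0}+L_{1}}C_{k}(r)x_{m}^{k}=b_{m}\) in the coefficients. A sketch with made-up evaluation points and right-hand sides (in practice the \(b_{m}\) come from (83) and (84) and involve \(Z_{w}\) itself):

```python
import numpy as np

L0, L1 = 2, 2
# made-up zeros: delta_1, delta_2 (positive real parts), zeta_1, zeta_2 (negative)
x = np.array([1.0, 2.0, -0.5, -1.5])
b = np.array([0.3, -0.1, 0.2, 0.05])   # placeholder right-hand sides

# coefficient matrix M[m, k-1] = x_m^k for k = 1, ..., L0 + L1
M = np.column_stack([x ** k for k in range(1, L0 + L1 + 1)])
C = np.linalg.solve(M, b)
```

The matrix is nonsingular whenever the evaluation points are distinct and nonzero, since it is a generalized Vandermonde matrix.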

5 The uniform proportional case with dependence

In the following, we consider recursions of the form

$$\begin{aligned} W_{n+1}=[V_{n}W_{n}+B_{n}-A_{n}]^{+}, \end{aligned}$$
(87)

with \(V_{n}\sim U(0,1)\), and dependence among the sequences \(\{B_{n}\}_{n\in \mathbb {N}_{0}}\), \(\{A_{n}\}_{n\in \mathbb {N}_{0}}\). The case of independent \(\{A_{n}\}_{n\in \mathbb {N}_{0}}\), \(\{B_{n}\}_{n\in \mathbb {N}_{0}}\) was treated in [6].

5.1 Deterministic proportional dependency with additive and subtracting delay

We consider the case where

$$\begin{aligned} W_{n+1}=[V_{n}W_{n}+B_{n}-A_{n}]^{+}, \end{aligned}$$

with \(V_{n}\sim U(0,1)\) and for \(c_{0},c_{1}\in (0,1)\), \(\tilde{J}_{n}\sim exp(\delta )\), \(\widehat{J}_{n}\sim exp(\nu )\):

$$\begin{aligned} A_{n}=\left\{ \begin{array}{ll} A_{n}^{(0)}:=c_{0}B_{n}+\tilde{J}_{n},&{} \text { w.p. }p, \\ A_{n}^{(1)}:=[c_{1}B_{n}-\widehat{J}_{n}]^{+}, &{} \text { w.p. }q:=1-p. \end{array}\right. \end{aligned}$$

Stability is ensured when \(E(\log |V|)<0\); see [16]. Note that

$$\begin{aligned} E(e^{-sA_{n}^{(0)}}|B_{n}=t)=&\frac{\delta }{\delta +s}e^{-sc_{0}t}, \\ E(e^{-sA_{n}^{(1)}}|B_{n}=t)=&\frac{\nu e^{-sc_{1}t}-se^{-\nu c_{1}t}}{\nu -s}, \end{aligned}$$

and thus,

$$\begin{aligned} E(e^{-sA_{n}^{(0)}-z B_{n}})=&\frac{\delta }{\delta +s}\phi _{B}(z+sc_{0}),\,Re(z+sc_{0})>0, \\ E(e^{-sA_{n}^{(1)}-zB_{n}})=&\frac{\nu \phi _{B}(z+sc_{1})-s\phi _{B}(z+\nu c_{1})}{\nu -s},\,Re(z+sc_{1})>0. \end{aligned}$$

Then,

$$\begin{aligned} Z_{n+1}(s)=&E(e^{-sW_{n+1}})=E(e^{-s[V_{n}W_{n}+B_{n}-A_{n}]^{+}}) \\ =&pE(e^{-s[V_{n}W_{n}+B_{n}-A_{n}^{(0)}]^{+}})+qE(e^{-s[V_{n}W_{n}+B_{n}-A_{n}^{(1)}]^{+}}). \end{aligned}$$

Note that, since \([c_{1}B_{n}-\widehat{J}_{n}]^{+}\le c_{1}B_{n}\) and \(c_{1}\in (0,1)\), we have \(V_{n}W_{n}+B_{n}-[c_{1}B_{n}-\widehat{J}_{n}]^{+}\ge V_{n}W_{n}+\bar{c}_{1}B_{n}\ge 0\), so that

$$\begin{aligned} {[}V_{n}W_{n}+B_{n}-A_{n}^{(1)}]^{+}&=[V_{n}W_{n}+B_{n}-[c_{1}B_{n}-\widehat{J}_{n}]^{+}]^{+}\\ {}&=V_{n}W_{n}+B_{n}-[c_{1}B_{n}-\widehat{J}_{n}]^{+}. \end{aligned}$$

Therefore, for \(n\in \mathbb {N}\):

$$\begin{aligned} Z_{n+1}(s)=&p\left( E(e^{-sV_{n}W_{n}})E(e^{-sB_{n}+sA_{n}^{(0)}})+1-E(e^{-s[V_{n}W_{n}+B_{n}-A_{n}^{(0)}]^{-}})\right) \\&+q E(e^{-sV_{n}W_{n}}) E(e^{-sB_{n}+sA_{n}^{(1)}})\\ =&E(e^{-sV_{n}W_{n}})\left( p\frac{\delta }{\delta -s}\phi _{B}(s\bar{c}_{0})+q\frac{\nu \phi _{B}(s\bar{c}_{1})+s\phi _{B}(s+\nu c_{1})}{\nu +s}\right) \\&+p\left( 1-\left[ P(V_{n}W_{n}+B_{n}-A_{n}^{(0)}\ge 0)\right. \right. \\&\left. \left. +P(V_{n}W_{n}+B_{n}-A_{n}^{(0)}< 0)\frac{\delta }{\delta -s}\right] \right) \\ =&\frac{1}{s}\int _{0}^{s}Z_{n}(y)\textrm{d}y\left( p\frac{\delta }{\delta -s}\phi _{B}(s\bar{c}_{0})+q\frac{\nu \phi _{B}(s\bar{c}_{1})+s\phi _{B}(s+\nu c_{1})}{\nu +s}\right) \\&-\frac{spd_{n+1}}{\delta -s}, \end{aligned}$$

where \(d_{n}:=P(W_{n}=0)\) and we have used the fact that:

$$\begin{aligned} E(e^{-sV_{n}W_{n}})=&\int _{0}^{1}E(e^{-svW_{n}})\textrm{d}v=\frac{1}{s}\int _{0}^{s}E(e^{-yW_{n}})\textrm{d}y=\frac{1}{s}\int _{0}^{s}Z_{n}(y)\textrm{d}y. \end{aligned}$$
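This mixing identity is easy to sanity-check by simulation. A sketch assuming, for illustration only, that \(W_{n}\sim \exp (1)\), so that \(E(e^{-yW_{n}})=1/(1+y)\) and the right-hand side equals \(\ln (1+s)/s\):

```python
import math, random

random.seed(1)
s, n = 2.0, 200_000
acc = 0.0
for _ in range(n):
    v = random.random()              # V ~ U(0, 1)
    wv = random.expovariate(1.0)     # W ~ exp(1), independent of V (assumption)
    acc += math.exp(-s * v * wv)
mc = acc / n                         # Monte Carlo estimate of E(e^{-sVW})
exact = math.log(1.0 + s) / s        # (1/s) * int_0^s dy / (1 + y)
```

With this sample size the Monte Carlo estimate agrees with the closed form to about three decimal places.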

If \(W_{0}=w\), then \(E(e^{-sW_{0}})=e^{-sw}\), and the last expression allows us to determine recursively all the transforms \(Z_{n}(s)\), \(n\in \mathbb {N}\). Multiplying by \(\delta -s\) and setting \(s=\delta \):

$$\begin{aligned} d_{n+1}=\frac{\phi _{B}(\delta \bar{c}_{0})}{\delta }\int _{0}^{\delta }Z_{n}(y)\textrm{d}y. \end{aligned}$$

Let \(U_{W}(r,s):=\sum _{n=0}^{\infty }r^{n}Z_{n}(s)\), \(|r|<1\), then:

$$\begin{aligned} U_{W}(r,s)=r\frac{\Psi (s)}{s(\delta -s)}\int _{0}^{s}U_{W}(r,y)\textrm{d}y+K(s), \end{aligned}$$
(88)

where

$$\begin{aligned} \Psi (s)=&p\delta \phi _{B}(s\bar{c}_{0})+q(\delta -s)\frac{\nu \phi _{B}(s\bar{c}_{1})+s\phi _{B}(s+\nu c_{1})}{\nu +s}, \\ K(s)=&e^{-sw}-\frac{sp}{\delta -s}(U_{W}(r,\infty )-d_{0}). \end{aligned}$$

Letting \(I(s)=\int _{0}^{s}U_{W}(r,y)\textrm{d}y\), (88) becomes:

$$\begin{aligned} I^{\prime }(s)=r\frac{\Psi (s)}{s(\delta -s)}I(s)+K(s). \end{aligned}$$

A first-order differential equation of this kind can be solved by following the lines of [6, Section 5]; note, however, that the singularities at \(s=0\) and \(s=\delta \) require careful treatment.

5.2 Randomly proportional dependency with additive delay

In the following, we consider the case where \(A_{n}=G_{n}B_{n}+J_{n}\), with \(P(G_{n}=\beta _{i})=p_{i}\), \(i=1,\ldots ,K\), and \(J_{n}\) are i.i.d. random variables that follow a hyperexponential distribution with density function \(f(x)=\sum _{j=1}^{L}q_{j}\delta _{j}e^{-\delta _{j}x}\). (The analysis can be further generalized to the case of a distribution with a rational Laplace transform.) Then,

$$\begin{aligned} Z_{n+1}(s)&=E(e^{-sW_{n+1}})=E(e^{-s[V_{n}W_{n}+(1-G_{n})B_{n}-J_{n}]^{+}}) \\&=\sum _{j=1}^{L}q_{j}\sum _{l=1}^{K}p_{l}\int _{v=0}^{1}\int _{w=0}^{\infty }\int _{x=0}^{\infty }f_{B}(x)\\&\quad \left[ \int _{y=0}^{vw+\bar{\beta }_{l}x}e^{-s(vw+\bar{\beta }_{l}x-y)}\delta _{j}e^{-\delta _{j}y}\textrm{d}y+\int _{y=vw+\bar{\beta }_{l}x}^{\infty }\delta _{j}e^{-\delta _{j}y}\textrm{d}y\right] \textrm{d}x\textrm{d}P(W_{n}<w)\textrm{d}v\\&=\sum _{j=1}^{L}q_{j}\sum _{l=1}^{K}p_{l}\int _{v=0}^{1}\int _{w=0}^{\infty }\int _{x=0}^{\infty }f_{B}(x)\\&\quad \left[ \frac{\delta _{j}e^{-s(vw+\bar{\beta }_{l}x)}-se^{-\delta _{j}(vw+\bar{\beta }_{l}x)}}{\delta _{j}-s}\right] \textrm{d}x\textrm{d}P(W_{n}<w)\textrm{d}v\\&=\sum _{j=1}^{L}q_{j}(\frac{\delta _{j}}{\delta _{j}-s})\sum _{l=1}^{K}p_{l}\phi _{B}(s\bar{\beta }_{l})\frac{1}{s}\int _{0}^{s}Z_{n}(y)\textrm{d}y\\&\quad -s\sum _{j=1}^{L}(\frac{q_{j}}{\delta _{j}-s})\sum _{l=1}^{K}p_{l}\phi _{B}(\delta _{j}\bar{\beta }_{l})\frac{1}{\delta _{j}}\int _{0}^{\delta _{j}}Z_{n}(y)\textrm{d}y\\&=\frac{\sum _{j=1}^{L}q_{j}\delta _{j}\prod _{m\ne j}(\delta _{m}-s)\sum _{l=1}^{K}p_{l}\phi _{B}(s\bar{\beta }_{l})}{s\prod _{m=1}^{L}(\delta _{m}-s)}\int _{0}^{s}Z_{n}(y)\textrm{d}y\\&\quad -s\sum _{j=1}^{L}\frac{q_{j}}{\delta _{j}-s}c_{j,n+1}, \end{aligned}$$

where \(\bar{\beta }_{l}:=1-\beta _{l}\), \(l=1,\ldots ,K\), and

$$\begin{aligned} c_{j,n+1}:=\frac{\sum _{l=1}^{K}p_{l}\phi _{B}(\delta _{j}\bar{\beta }_{l})}{\delta _{j}}\int _{0}^{\delta _{j}}Z_{n}(y)\textrm{d}y=P(W_{n+1}=0|Q=j),\,j=1,\ldots ,L, \end{aligned}$$

where Q denotes the type of the arrival process. Then, multiplying by \(r^{n+1}\) and summing over n (with \(W_{0}=w\)) results in

$$\begin{aligned} U_{W}(r,s)&=r\frac{N(s)}{sD(s)}\sum _{l=1}^{K}p_{l}\phi _{B}(s\bar{\beta }_{l})\int _{0}^{s}U_{W}(r,y)\textrm{d}y+e^{-sw}\\&\quad -s\sum _{j=1}^{L}\frac{q_{j}}{\delta _{j}-s}[U^{(j)}_{W}(r,\infty )-c_{j,0}], \end{aligned}$$

where \(U_{W}(r,s):=\sum _{n=0}^{\infty }r^{n}Z_{n}(s)\), \(N(s):=\sum _{j=1}^{L}q_{j}\delta _{j}\prod _{m\ne j}(\delta _{m}-s)\), \(D(s):=\prod _{j=1}^{L}(\delta _{j}-s)\), and \(U^{(j)}_{W}(r,s):=\sum _{n=0}^{\infty }r^{n}E(e^{-sW_{n}}|Q=j)\), \(j=1,\ldots ,L\). Letting \(I(s)=\int _{0}^{s}U_{W}(r,y)\textrm{d}y\), we have,

$$\begin{aligned} I^{\prime }(s)=r\frac{N(s)}{sD(s)}\sum _{l=1}^{K}p_{l}\phi _{B}(s\bar{\beta }_{l})I(s)+K(r,s), \end{aligned}$$
(89)

where

$$\begin{aligned} K(r,s):=e^{-sw}-s\sum _{j=1}^{L}\frac{q_{j}}{\delta _{j}-s}[U^{(j)}_{W}(r,\infty )-c_{j,0}]. \end{aligned}$$

The form of (89) is the same as the one in [6, Section 5, eq. (50)], and the analysis can be performed similarly, although it is somewhat more involved due to the zeros of D(s).

5.3 Interarrival times proportionally dependent on system time

We now consider the case where \(A_{n}=c(W_{n}+B_{n})+J_{n}\), \(c\in (0,1)\). We assume that \(\{(B_{n},J_{n})\}_{n\in \mathbb {N}_{0}}\) is an i.i.d. sequence of random vectors; thus, the quantities \(\bar{c}B_{n}-J_{n}\) are i.i.d. random variables, although within a pair \(B_{n}\) and \(J_{n}\) may be dependent. Here, we assume that the non-negative random vector \((B,J)\) has a bivariate matrix-exponential distribution with LST \(E(e^{-sB-zJ}):=\frac{G(s,z)}{D(s,z)}\), where \(G(s,z)\) and \(D(s,z)\) are polynomial functions in s and z. A consequence of this definition is that the LST of the distribution of \(Y:=\bar{c}B-J\) is also a rational function; the distribution of Y is called a bilateral matrix-exponential distribution [3, Theorem 3.1]. This class of distributions, under which we model the dependence structure, belongs to the class of multivariate matrix-exponential distributions, which was introduced in [4]. For ease of notation, let \(E(e^{-sY}):=h(s)=\frac{f(s)}{g(s)}\), and assume that g(s) has L zeros, say \(t_{j}\), such that \(Re(t_{j})>0\), \(j=1,\ldots ,L\), and M zeros, say \(\zeta _{m}\), such that \(Re(\zeta _{m})<0\), \(m=1,\ldots ,M\), whereas f(s) is a polynomial of degree at most \(M+L-1\), not sharing the same zeros with g(s).

Then, the recursion (87) becomes

$$\begin{aligned} W_{n+1}=[(V_{n}-c)W_{n}+\bar{c}B_{n}-J_{n}]^{+}, \end{aligned}$$

so that \(V_{n}-c\sim U(-c,\bar{c})\). For \(H_{n}=[(V_{n}-c)W_{n}+\bar{c}B_{n}-J_{n}]^{-}\), and \(Re(s)=0\) we have,

$$\begin{aligned} E(e^{-sW_{n+1}}|W_{0}=w)=&\frac{f(s)}{g(s)}\left[ \int _{-c}^{0}E(e^{-svW_{n}}|W_{0}=w)\textrm{d}v\right. \\&+\left. \int _{0}^{\bar{c}}E(e^{-svW_{n}}|W_{0}=w)\textrm{d}v\right] +1-E(e^{-sH_{n}}|W_{0}=w). \end{aligned}$$

Multiplying by \(r^{n+1}\) (\(0<r<1\)) and summing from \(n=0\) to infinity, we obtain

$$\begin{aligned}{} & {} g(s)(Z_{w}(r,s)-e^{-sw})-rf(s)\int _{0}^{\bar{c}} Z_{w}(r,sy_{1})\textrm{d}y_{1}\nonumber \\{} & {} \quad = rf(s)\int _{-c}^{0} Z_{w}(r,sy_{1})\textrm{d}y_{1}+rg(s)(\frac{1}{1-r}-H(r,s)), \end{aligned}$$
(90)

where \(Z_{w}(r,s):=\sum _{n=0}^{\infty }r^{n}E(e^{-sW_{n}}|W_{0}=w)\), \(H(r,s):=\sum _{n=0}^{\infty }r^{n}E(e^{-sH_{n}}|W_{0}=w)\). We now have that

  • the LHS of (90) is analytic in \(Re(s)>0\) and continuous in \(Re(s)\ge 0\),

  • the RHS of (90) is analytic in \(Re(s)<0\) and continuous in \(Re(s)\le 0\),

  • for large s, both sides are \(O(s^{M+L})\) in their respective half-planes.

Thus, Liouville’s theorem [14, Theorem 10.52] states that for \(Re(s)\ge 0\),

$$\begin{aligned} g(s)(Z_{w}(r,s)-e^{-sw})-rf(s)\int _{0}^{\bar{c}} Z_{w}(r,sy_{1})\textrm{d}y_{1}=\sum _{l=0}^{M+L}C_{l}(r)s^{l}, \end{aligned}$$
(91)

and for \(Re(s)\le 0\),

$$\begin{aligned} rf(s)\int _{-c}^{0} Z_{w}(r,sy_{1})\textrm{d}y_{1}+rg(s)(\frac{1}{1-r}-H(r,s))=\sum _{l=0}^{M+L}C_{l}(r)s^{l}. \end{aligned}$$
(92)

For \(s=0\), (91) yields

$$\begin{aligned} C_{0}(r)=g(0)(\frac{1}{1-r}-1)-rf(0)\int _{0}^{\bar{c}}\frac{\textrm{d}y_{1}}{1-r}=\frac{rc}{1-r}g(0), \end{aligned}$$

where we have taken into account that \(f(0)=g(0)\). The same value for \(C_{0}(r)\) can be derived from (92) by setting \(s=0\). We can also obtain L other equations for the remaining coefficients. Setting \(s=t_{j}\), \(j=1,\ldots ,L,\) in (91), we obtain:

$$\begin{aligned} -rf(t_{j})\int _{0}^{\bar{c}} Z_{w}(r,t_{j}y_{1})\textrm{d}y_{1}=\sum _{l=0}^{M+L}C_{l}(r)t_{j}^{l}. \end{aligned}$$
(93)

Proceeding similarly as in [6, 12],

$$\begin{aligned}{} & {} Z_{w}(r,s)=r\frac{f(s)}{g(s)}\int _{0}^{\bar{c}}Z_{w}(r,sy_{1})\textrm{d}y_{1}+L(r,s),\, Re(s)\ge 0, \end{aligned}$$
(94)

where \(L(r,s):=e^{-sw}+\frac{\sum _{l=0}^{M+L}C_{l}(r)s^{l}}{g(s)}\). Next, we follow the lines in [12]. Note that for \(r\in [0,1)\), \(|K(r,s)|:=|r\frac{f(s)}{g(s)}|\le r<1\) as \(s\rightarrow 0\). Iterating (94) n times, we obtain

$$\begin{aligned} Z_{w}(r,s)=&\int \ldots \int _{[0,\bar{c}]^{n+1}}K(r,s)\prod _{h=1}^{n}K(r,sy_{1}\ldots y_{h})Z_{w}(r,sy_{1}\ldots y_{n+1})\textrm{d}y_{1}\ldots \textrm{d}y_{n+1}\\&+L(r,s)+\sum _{j=1}^{n}\int \ldots \int _{[0,\bar{c}]^{j}}K(r,s)\prod _{h=1}^{j-1}K(r,sy_{1}\ldots y_{h})L(r,sy_{1}\ldots y_{j})\textrm{d}y_{1}\ldots \textrm{d}y_{j}. \end{aligned}$$

Since we will let n tend to \(\infty \), we are interested in investigating the convergence of the summation in the previous expression, as well as in obtaining the limit of the first term in the right-hand side of the previous expression. Since the expressions of K(rs), L(rs) share the same properties as those in [12], we can show that

$$\begin{aligned} Z_{w}(r,s)= & {} L(r,s)+\sum _{n=1}^{\infty }\int \ldots \int _{[0,\bar{c}]^{n}}K(r,s)\prod _{j=1}^{n-1}K\nonumber \\{} & {} \times (r,sy_{1}\ldots y_{j})L(r,sy_{1}\ldots y_{n})\textrm{d}y_{1}\ldots \textrm{d}y_{n}. \end{aligned}$$
(95)

We still need M more equations to obtain a system of equations for the coefficients \(C_{l}(r)\). Substituting \(s=\zeta _{m}\), \(m=1,\ldots ,M,\) in (92) and using (95), we obtain

$$\begin{aligned} rf(\zeta _{m})\int _{-c}^{0} Z_{w}(r,\zeta _{m}y_{1})\textrm{d}y_{1}=\sum _{l=0}^{M+L}C_{l}(r)\zeta _{m}^{l}. \end{aligned}$$
(96)

Finally, by using (93), and (96), we can derive the remaining coefficients \(C_{l}(r)\), \(l=1,\ldots ,L+M\).

Remark 10

An alternative way to solve (94) is by performing the transformation \(v_{1}=sy_{1}\), so that (94) becomes:

$$\begin{aligned} Z_{w}(r,s)=r\int _{0}^{\bar{c}s}h(s)Z_{w}(r,v_{1})\textrm{d}v_{1}+L(r,s),\, Re(s)\ge 0. \end{aligned}$$
(97)

Note that (97) is a Fredholm equation [13]; therefore, a natural way to proceed is by successive substitutions. Define now iteratively the function

$$\begin{aligned} L^{i^*}(r,s):=r\int _{0}^{\bar{c}s}h(s)L^{(i-1)^*}(r,v)\textrm{d}v,\,i\ge 1, \end{aligned}$$

with \(L^{0^*}(r,s):=L(r,s)\). Then, after n iterations we have that

$$\begin{aligned} Z_{w}(r,s)&=\sum _{i=0}^{n+1}L^{i^*}(r,s)+r^{n+1}\int _{v_{1}=0}^{\bar{c}s}\\&\quad \int _{v_{2}=0}^{\bar{c}v_{1}}\ldots \int _{v_{n+1}=0}^{\bar{c}v_{n}}h(s)\prod _{j=1}^{n}h(v_{j})Z_{w}(r,v_{n+1})\textrm{d}v_{n+1}\ldots \textrm{d}v_{2}\textrm{d}v_{1}. \end{aligned}$$

Note that

$$\begin{aligned}&\lim _{n\rightarrow \infty }r^{n+1}\int _{v_{1}=0}^{\bar{c}s}\int _{v_{2}=0}^{\bar{c}v_{1}}\ldots \\&\int _{v_{n+1}=0}^{\bar{c}v_{n}}h(s)\prod _{j=1}^{n}h(v_{j})Z_{w}(r,v_{n+1})\textrm{d}v_{n+1}\ldots \textrm{d}v_{2}\textrm{d}v_{1}=0. \end{aligned}$$

To see this, observe that

$$\begin{aligned}&|h(v_{n})\int _{v_{n+1}=0}^{\bar{c}v_{n}}Z_{w}(r,v_{n+1})\textrm{d}v_{n+1}|<\\&|\int _{v_{n+1}=0}^{1}Z_{w}(r,v_{n+1})\textrm{d}v_{n+1}|\le \frac{1}{1-r}. \end{aligned}$$

Thus, the above limit is less than or equal to

$$\begin{aligned} \lim _{n\rightarrow \infty }r^{n+1}\frac{1}{1-r}=0. \end{aligned}$$

Therefore,

$$\begin{aligned} Z_{w}(r,s)=\sum _{i=0}^{\infty }L^{i^*}(r,s). \end{aligned}$$
(98)

Now for \(Re(s)\ge 0\), we have \(M_{2}(r,s)=\max _{v\in [0,\bar{c}s]}|L(r,v)|<\infty \). Then,

$$\begin{aligned} |L^{i^*}(r,s)|<\left| \int _{0}^{\bar{c}s}L^{(i-1)^*}(r,v)\textrm{d}v\right| \le \bar{c}s \max _{v\in [0,\bar{c}s]}|L(r,v)|=\bar{c}sM_{2}(r,s)<\infty , \end{aligned}$$

which ensures the convergence of the infinite sum in (98).
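The successive-substitution scheme can also be carried out numerically on a grid. A minimal sketch for an equation of the form (97), with placeholder h and L (the paper's h is a bilateral matrix-exponential LST, not modeled here; all parameter choices below are assumptions):

```python
import numpy as np

r, cbar = 0.8, 0.6
S, m = 2.0, 2001
s = np.linspace(0.0, S, m)
h = 1.0 / (1.0 + s)       # placeholder kernel h(s)
Lfun = np.exp(-s)         # placeholder inhomogeneous term L(r, s)

def apply_K(Z):
    # r * h(s) * int_0^{cbar*s} Z(v) dv, via a cumulative trapezoid rule
    dx = s[1] - s[0]
    cum = np.concatenate(([0.0], np.cumsum((Z[1:] + Z[:-1]) / 2.0) * dx))
    return r * h * np.interp(cbar * s, s, cum)

# successive substitutions: Z <- K Z + L until the fixed point is reached
Z = Lfun.copy()
for _ in range(60):
    Z = apply_K(Z) + Lfun
```

Here the operator is a contraction on the grid (its norm is bounded by \(r\bar{c}<1\) for this choice of h), so the iteration converges geometrically.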

5.4 A Bernoulli dependent structure

Consider the following (simpler) case of the recursion (2), where \(V_{n}^{(1)}<0\) a.s., and \(V_{n}^{(2)}=U_{n}^{1/a}\), with \(U_{n}\sim U(0,1)\), \(a\ge 2\):

$$\begin{aligned} W_{n+1}=\left\{ \begin{array}{ll} \left[ V^{(1)}_{n}W_{n}+B_{n}-A^{(1)}_{n}\right] ^{+},&{}\text {w.p. } p,\\ \left[ U_{n}^{1/a}W_{n}+T_{n}-A^{(2)}_{n}\right] ^{+},&{} \text {w.p. } q:=1-p, \end{array}\right. \end{aligned}$$
(99)

where the LST of \(B_{n}\), say \(\phi _{B}(s):=\frac{N_{B}(s)}{D_{B}(s)}\), is rational with poles \(s_{1},\ldots ,s_{l}\), where \(Re(s_{j})<0\), \(j=1,\ldots ,l\). Then, for \(Re(s)=0\),

$$\begin{aligned}&E(e^{-sW_{n+1}}|W_{0}=w)= p E(e^{-sV_{n}^{(1)}W_{n}}|W_{0}=w)\phi _{B}(s)\phi _{A_{1}}(-s)\\&\quad +qE(e^{-sU^{1/a}_{n}W_{n}}|W_{0}=w)\phi _{T}(s)\phi _{A_{2}}(-s) +1-J_{n}(s), \end{aligned}$$

where for \(n=0,1,\ldots \),

$$\begin{aligned} J_{n}(s):=pE(e^{-s\left[ V^{(1)}_{n}W_{n}+B_{n}-A^{(1)}_{n}\right] ^{-}}|W_{0}=w)+qE(e^{-s\left[ U_{n}^{1/a}W_{n}+T_{n}-A^{(2)}_{n}\right] ^{-}}|W_{0}=w). \end{aligned}$$

Note that for \(u=sv^{1/a}\), we have,

$$\begin{aligned}&E(e^{-sU^{1/a}_{n}W_{n}}|W_{0}=w)\\&\quad =\int _{0}^{1}E(e^{-sv^{1/a}W_{n}}|W_{0}=w)\textrm{d}v=\frac{a}{s^{a}}\int _{0}^{s}u^{a-1}E(e^{-uW_{n}}|W_{0}=w)\textrm{d}u. \end{aligned}$$

Setting \(Z_{w}^{(a)}(r,s):=s^{a-1}Z_{w}(r,s)\) and proceeding as in [6], we obtain,

$$\begin{aligned} \begin{array}{l} D_{B}(s)[Z_{w}^{(a)}(r,s)-s^{a-1}e^{-sw}-rq\frac{a}{s}\phi _{T}(s)\phi _{A_{2}}(-s)\int _{0}^{s}Z^{(a)}_{w}(r,u)\textrm{d}u]\\ =rs^{a-1}[pN_{B}(s)\phi _{A_{1}}(-s)\int _{-\infty }^{0}Z_{w}(r,sy)P(V^{(1)}\in \textrm{d}y)+D_{B}(s)(\frac{1}{1-r}-M_{w}(r,s))], \end{array} \nonumber \\ \end{aligned}$$
(100)

where \(M_{w}(r,s):=\sum _{n=0}^{\infty }r^{n}J_{n}(s)\). Note that:

  • The LHS in (100) is analytic for \(Re(s)>0\) and continuous for \(Re(s)\ge 0\).

  • The RHS in (100) is analytic for \(Re(s)<0\) and continuous for \(Re(s)\le 0\).

  • For large s, both sides are \(O(s^{l})\) in their respective half-planes.

It follows by Liouville’s theorem [14, Theorem 10.52] that

$$\begin{aligned}{} & {} D_{B}(s)[Z_{w}^{(a)}(r,s)-s^{a-1}e^{-sw}-rq\frac{a}{s}\phi _{T}(s)\phi _{A_{2}}(-s)\int _{0}^{s}Z^{(a)}_{w}(r,u)\textrm{d}u]\nonumber \\{} & {} \quad = \sum _{k=0}^{l}C_{k}(r)s^{k},\,\,Re(s)\ge 0, \end{aligned}$$
(101)
$$\begin{aligned}{} & {} rs^{a-1}[pN_{B}(s)\phi _{A_{1}}(-s)\int _{-\infty }^{0}Z_{w}(r,sy)P(V^{(1)}\in \textrm{d}y)+D_{B}(s)(\frac{1}{1-r}-M_{w}(r,s))]\nonumber \\{} & {} \quad = \sum _{k=0}^{l}C_{k}(r)s^{k},\,\,Re(s)\le 0. \end{aligned}$$
(102)

Setting \(s=0\) in either (101) or (102) yields \(C_{0}(r)=0\). Note that for \(s=s_{j}\), we have \(D_{B}(s_{j})=0\), \(j=1,\ldots , l\). Substituting in (102) yields

$$\begin{aligned} rs_{j}^{a-1}pN_{B}(s_{j})\phi _{A_{1}}(-s_{j})\int _{-\infty }^{0}Z_{w}(r,s_{j}y)P(V^{(1)}\in \textrm{d}y)= \sum _{k=1}^{l}C_{k}(r)s_{j}^{k}. \end{aligned}$$
(103)

Note that from (101)

$$\begin{aligned} Z_{w}^{(a)}(r,s)=rq\frac{a}{s}\phi _{T}(s)\phi _{A_{2}}(-s)\int _{0}^{s}Z^{(a)}_{w}(r,u)\textrm{d}u+s^{a-1}e^{-sw}+\frac{\sum _{k=1}^{l}C_{k}(r)s^{k}}{D_{B}(s)}, \end{aligned}$$

or equivalently, if \(I^{(a)}(s):=\int _{0}^{s}Z_{w}^{(a)}(r,u)\textrm{d}u\), we have

$$\begin{aligned} I^{(a)\prime }(s)=rq\frac{a}{s}\phi _{T}(s)\phi _{A_{2}}(-s)I^{(a)}(s)+\frac{\sum _{k=1}^{l}C_{k}(r)s^{k}}{D_{B}(s)}+s^{a-1}e^{-sw}. \end{aligned}$$
(104)

Thus, following standard techniques from the theory of ordinary differential equations, we have for a positive number c, such that \(c\le s\),

$$\begin{aligned}&I^{(a)}(s)=e^{\int _{c}^{s}rq\frac{a}{u}\phi _{T}(u)\phi _{A_{2}}(-u)\textrm{d}u}\\&\left( I^{(a)}(c)+\int _{c}^{s}e^{-\int _{c}^{t}rq\frac{a}{u}\phi _{T}(u)\phi _{A_{2}}(-u)\textrm{d}u}\left( \frac{\sum _{k=1}^{l}C_{k}(r)t^{k}}{D_{B}(t)}+t^{a-1}e^{-tw}\right) \textrm{d}t\right) . \end{aligned}$$

Note that

$$\begin{aligned} \int _{c}^{s}rq\frac{a}{u}\phi _{T}(u)\phi _{A_{2}}(-u)\textrm{d}u=-(1+o(1))rqa\ln (c),\text { as }c\rightarrow 0. \end{aligned}$$

Since \(I^{(a)\prime }(s)=s^{a-1}Z_{w}(r,s)\), we have \(I^{(a)\prime }(0)=0\), and thus,

$$\begin{aligned} I^{(a)}(s)=\int _{0}^{s}e^{\int _{t}^{s}rq\frac{a}{u}\phi _{T}(u)\phi _{A_{2}}(-u)\textrm{d}u}\left( \frac{\sum _{k=1}^{l}C_{k}(r)t^{k}}{D_{B}(t)}+t^{a-1}e^{-tw}\right) \textrm{d}t. \end{aligned}$$

Combining the above with (104), and having in mind that \(I^{(a)\prime }(s)=Z_{w}^{(a)}(r,s)=s^{a-1}Z_{w}(r,s)\), we have that

$$\begin{aligned} Z_{w}(r,s)&=\frac{\sum _{k=1}^{l}C_{k}(r)s^{k}}{s^{a-1}D_{B}(s)}+e^{-sw} +rq\frac{a}{s^{a}}\phi _{T}(s)\phi _{A_{2}}(-s)\\&\quad \int _{0}^{s}e^{\int _{t}^{s}rq\frac{a}{u}\phi _{T}(u)\phi _{A_{2}}(-u)\textrm{d}u}\left( \frac{\sum _{k=1}^{l}C_{k}(r)t^{k}}{D_{B}(t)}+t^{a-1}e^{-tw}\right) \textrm{d}t. \end{aligned}$$

By substituting the derived expression for \(Z_{w}(r,s)\) in (102), we can derive a system of equations for the remaining unknown coefficients \(C_{k}(r)\), \(k=1,\ldots ,l\).
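The variation-of-constants representation used above can be sanity-checked on a toy equation of the same singular type as (104): for \(I'(s)=\frac{b}{s}I(s)+s\) with \(I(0)=0\) and \(b<2\), the formula gives \(I(s)=\int _{0}^{s}(s/t)^{b}\,t\,\textrm{d}t=s^{2}/(2-b)\). A numerical check (b and s are arbitrary choices):

```python
# Toy check of the variation-of-constants formula for a singular ODE:
# I'(s) = (b/s) I(s) + s, I(0) = 0, has closed-form solution s^2 / (2 - b).
b, s = 0.5, 1.3
n = 20_000
h = s / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h      # midpoint rule; avoids the integrable singularity at t = 0
    total += (s / t) ** b * t * h
closed_form = s * s / (2 - b)
```

The midpoint rule handles the integrable singularity at \(t=0\) without special treatment, which mirrors why the lower limit \(c\rightarrow 0\) causes no difficulty above.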

5.5 Another generalization

We now consider the case where

$$\begin{aligned} W_{n+1}=[V_{n}W_{n}+B_{n}-A_{n}]^{+}, \end{aligned}$$

with \(V_{n}\sim U(0,1)\), and \(E(e^{-s A_{n}}|B_{n}=t)=\chi (s)e^{-\psi (s)t}\), and \(B_{n}\sim exp(\mu )\). Thus, the interarrival times depend on the service time of the previous customer, so that

$$\begin{aligned} E(e^{-sA_{n}-zB_{n}})=\int _{0}^{\infty }\mu e^{-\mu t}e^{-zt}\chi (s)e^{-\psi (s)t}\textrm{d}t=\frac{\mu \chi (s)}{\mu +\psi (s)+z}, \end{aligned}$$

when \(Re(\mu +\psi (s)+z)>0\). Since \(E(e^{-s A_{n}}|B_{n}=t)\) must equal one at \(s=0\), we implicitly assume that \(\psi (0)=0\) and \(\chi (0)=1\). Therefore, denoting \(Z_{n}(s)=E(e^{-s W_{n}})\), we have:

$$\begin{aligned} Z_{n+1}(s):=E(e^{-s W_{n+1}}) =&E(e^{-s (V_{n}W_{n}+B_{n}-A_{n})})+1-E(e^{-s [V_{n}W_{n}+B_{n}-A_{n}]^{-}}) \\ =&E(e^{-s V_{n}W_{n}})E(e^{-s(B_{n}-A_{n})})+1-U^{-}_{V_{n}W_{n}}(s)\\ =&E(e^{-s V_{n}W_{n}})\frac{\mu \chi (-s)}{\mu +\psi (-s)+s}+1-U^{-}_{V_{n}W_{n}}(s), \end{aligned}$$

where \(U^{-}_{V_{n}W_{n}}(s):=E(e^{-s [V_{n}W_{n}+B_{n}-A_{n}]^{-}})\). Clearly, under the transformation \(v=su\), we have:

$$\begin{aligned} E(e^{-s V_{n}W_{n}})=\int _{0}^{1}E(e^{-s uW_{n}})\textrm{d}u=\frac{1}{s}\int _{0}^{s}Z_{n}(v)\textrm{d}v. \end{aligned}$$

Thus, assuming that \(\chi (s):=\frac{P_{1}(s)}{Q_{1}(s)}\), \(\psi (s):=\frac{P_{2}(s)}{Q_{2}(s)}\), with \(P_{2}(s),Q_{1}(s),Q_{2}(s)\) polynomials of degrees L, M, and N, respectively:

$$\begin{aligned} Z_{n+1}(s)=&\frac{P_{1}(-s)}{sQ_{1}(-s)}\frac{\mu Q_{2}(-s)}{(\mu +s)Q_{2}(-s)+P_{2}(-s)}\int _{0}^{s}Z_{n}(v)\textrm{d}v+1-U^{-}_{V_{n}W_{n}}(s)\\ =&\frac{\mu N_{Y}(s)}{s D_{Y}(s)}\int _{0}^{s}Z_{n}(v)\textrm{d}v+1-U^{-}_{V_{n}W_{n}}(s). \end{aligned}$$

Multiplying by \(r^{n+1}\) (recalling that \(W_{0}=w\)) and summing from \(n=0\) to infinity, we obtain

$$\begin{aligned} s D_{Y}(s)[Z_{w}(r,s)-e^{-sw}]-r\mu N_{Y}(s)\int _{0}^{s}Z_{w}(r,v)\textrm{d}v=rs D_{Y}(s)\left( \frac{1}{1-r}-M_{w}(r,s)\right) , \end{aligned}$$

where \(M_{w}(r,s):=\sum _{n=0}^{\infty }r^{n}U^{-}_{V_{n}W_{n}}(s)\). Note that \(D_{Y}(s):=Q_{1}(-s)((\mu +s)Q_{2}(-s)+P_{2}(-s))\) is a polynomial of degree \(M+N+1\). Thus, following similar arguments and Liouville’s theorem [14, Theorem 10.52], we have

$$\begin{aligned}&s D_{Y}(s)[Z_{w}(r,s)-e^{-sw}]-r\mu N_{Y}(s)\int _{0}^{s}Z_{w}(r,v)\textrm{d}v\\&\quad = \sum _{l=0}^{M+N+L+2}C_{l}(r)s^{l},\,Re(s)\ge 0, \\&rs D_{Y}(s)\left( \frac{1}{1-r}-M_{w}(r,s)\right) = \sum _{l=0}^{M+N+L+2}C_{l}(r)s^{l},\,Re(s)\le 0. \end{aligned}$$

For \(s=0\), we can easily derive \(C_{0}(r)=0\). Assuming that all the zeros of \(D_{Y}(s)\), say \(t_{j}\), \(j=1,\ldots ,M+N+1\), lie in the positive half-plane, we can derive a system of equations for the remaining coefficients \(C_{l}(r)\):

$$\begin{aligned} -r\mu N_{Y}(t_{j})\int _{0}^{t_{j}}Z_{w}(r,v)\textrm{d}v=\sum _{l=1}^{M+N+L+2}C_{l}(r)t_{j}^{l},\,j=1,\ldots ,M+N+1. \end{aligned}$$

Now for \(Re(s)\ge 0\), we have,

$$\begin{aligned} Z_{w}(r,s)=r\mu \frac{N_{Y}(s)}{D_{Y}(s)}\int _{0}^{s}Z_{w}(r,v)\textrm{d}v+e^{-sw}-\frac{\sum _{l=1}^{M+N+L+2}C_{l}(r)s^{l}}{D_{Y}(s)}. \end{aligned}$$

The form of the above equation is the same as in [6, eq. (48), p. 239], so it can be solved similarly.
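Although the transform solution above is exact, a direct Monte Carlo run of the recursion is a useful sanity check. The sketch below assumes an illustrative dependence structure of the admissible form, namely \(A_{n}=cB_{n}+J_{n}\) with \(J_{n}\sim exp(\delta )\), which gives \(\chi (s)=\delta /(\delta +s)\) and \(\psi (s)=cs\); the parameter values are illustrative only:

```python
import random

def simulate(mu=1.0, c=0.5, delta=2.0, n=200000, burn=20000, seed=1):
    """Monte Carlo for W_{n+1} = [V_n W_n + B_n - A_n]^+ with V_n ~ U(0,1),
    B_n ~ exp(mu), and the illustrative dependence A_n = c*B_n + J_n,
    J_n ~ exp(delta), so that E(e^{-sA}|B=t) = (delta/(delta+s)) e^{-cst}."""
    rng = random.Random(seed)
    w, acc, cnt = 0.0, 0.0, 0
    for i in range(n):
        b = rng.expovariate(mu)
        a = c * b + rng.expovariate(delta)
        w = max(0.0, rng.random() * w + b - a)   # V_n ~ U(0,1)
        if i >= burn:
            acc += w
            cnt += 1
    return acc / cnt

est = simulate()
print("estimated stationary E[W] ~", est)
```

Since \(V_{n}<1\) almost surely, the multiplicative part is contracting and the simulated workload settles quickly into steady state.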

6 On modified versions of a multiplicative Lindley recursion with dependencies

In the following, we focus on the recursion (3), which generalizes the model in [12]. More precisely, we assume that \(V_{n}^{(1)}\) are such that \(P(V_{n}^{(1)}\in [0,1))=1\), and \(V_{n}^{(2)}\) are such that \(P(V_{n}^{(2)}<0)=1\). We further use \(\mu \) to denote the probability measure of \(V_{n}^{(1)}\) on [0, 1), i.e. \(\mu (A):=P(V_{n}^{(1)}\in A)\) for every Borel set A of [0, 1).

Assume also that \(\{Y_{n}^{(0)}:=B_{n}-A_{n}^{(0)}\}_{n\in \mathbb {N}_{0}}\) are i.i.d. random variables and their LST, say \(\phi _{Y_{0}}(s):=E(e^{-sY_{n}^{(0)}}):=\frac{N_{0}(s)}{D_{0}(s)}\), with \(D_{0}(s):=\prod _{i=1}^{L}(s+m_{i})\prod _{j=1}^{K_{0}}(s+t_{j_{0}}^{(0)})\), with \(Re(m_{i})<0\), \(i=1,\ldots ,L\), \(Re(t_{j_{0}}^{(0)})>0\), \(j_{0}=1,\ldots ,K_{0}\). Assume also that \(\{A_{n}^{(k)}\}_{n\in \mathbb {N}_{0}}\), \(k=1,2,\) are independent sequences of i.i.d. random variables with LST \(\phi _{A_{k}}(s):=E(e^{-sA_{n}^{(k)}}):=\frac{N_{k}(s)}{D_{k}(s)}\), \(D_{k}(s):=\prod _{j_{k}=1}^{K_{k}}(s+t_{j_{k}}^{(k)})\), with \(N_{k}(s)\) polynomial of degree at most \(K_{k}-1\), not sharing same zeros with \(D_{k}(s)\), and \(Re(t_{j_{1}}^{(1)})>0\), \(j_{1}=1,\ldots ,K_{1}\), \(Re(t_{j_{2}}^{(2)})<0\), \(j_{2}=1,\ldots ,K_{2}\). We assume that \(W_{0}=w\). Then,

$$\begin{aligned} E(e^{-s W_{n+1}})&= pE(e^{-s [W_{n}+B_{n}-A_{n}^{(0)}]^{+}})+q\Big [E(e^{-s [V^{(1)}_{n}W_{n}+\widehat{B}_{n}-A_{n}^{(1)}]}1(\widehat{B}_{n}\le T_{n}))\\&\quad +E(e^{-s [V^{(2)}_{n}W_{n}+T_{n}-A_{n}^{(2)}]}1(\widehat{B}_{n}> T_{n})) +1-E(e^{-s [V^{(1)}_{n}W_{n}+\widehat{B}_{n}-A_{n}^{(1)}]^{-}}1\\&\quad \times (\widehat{B}_{n}\le T_{n}))-E(e^{-s [V^{(2)}_{n}W_{n}+T_{n}-A_{n}^{(2)}]^{-}}1(\widehat{B}_{n}> T_{n}))\Big ] \\&= pE(e^{-s W_{n}})\frac{N_{0}(s)}{D_{0}(s)}+qE(e^{-sV_{n}^{(1)} W_{n}})\chi (s)\frac{N_{1}(-s)}{D_{1}(-s)}\\&\quad +qE(e^{-sV_{n}^{(2)} W_{n}})\psi (s) \frac{N_{2}(-s)}{D_{2}(-s)}+1-J_{n}^{-}(s), \end{aligned}$$

where

$$\begin{aligned} J_{n}^{-}(s):=&pE(e^{-s [W_{n}+B_{n}-A_{n}^{(0)}]^{-}})+q[E(e^{-s [V^{(1)}_{n}W_{n}+\widehat{B}_{n}-A_{n}^{(1)}]^{-}}1(\widehat{B}_{n}\le T_{n}))\\&+\, E(e^{-s [V^{(2)}_{n}W_{n}+T_{n}-A_{n}^{(2)}]^{-}}1(\widehat{B}_{n}> T_{n}))],\\ \chi (s):=&E(e^{-s\widehat{B}_{n}}1(\widehat{B}_{n}\le T_{n}))=\int _{0}^{\infty }e^{-sx}(1-T(x))d\widehat{B}(x), \\ \psi (s):=&E(e^{-sT_{n}}1(\widehat{B}_{n}> T_{n}))=\int _{0}^{\infty }e^{-sx}(1-\widehat{B}(x))\textrm{d}T(x). \end{aligned}$$

Letting \(Z_{w}(r,s):=\sum _{n=0}^{\infty }r^{n}E(e^{-s W_{n}})\), \(r\in [0,1)\), we have for \(Re(s)=0\) that

$$\begin{aligned}{} & {} D_{1}(-s)D_{2}(-s)\left[ Z_{w}(r,s)(D_{0}(s)-rpN_{0}(s))-D_{0}(s)e^{-sw}\right] \nonumber \\{} & {} \quad -rq\chi (s)D_{0}(s)D_{2}(-s)N_{1}(-s)\int _{[0,1)}Z_{w}(r,sy)P(V^{(1)}\in \textrm{d}y) \nonumber \\{} & {} \quad =D_{0}(s)\Bigg [rq\psi (s)N_{2}(-s)\int _{(-\infty ,0)}Z_{w}(r,sy)P(V^{(2)}\in \textrm{d}y)\nonumber \\{} & {} \qquad +rD_{1}(-s)D_{2}(-s)\left( \frac{1}{1-r}-J^{-}(r,s)\right) \Bigg ], \end{aligned}$$
(105)

where \(J^{-}(r,s)=\sum _{n=0}^{\infty }r^{n}J_{n}^{-}(s)\). It is readily seen that:

  • The LHS in (105) is analytic for \(Re(s)>0\) and continuous for \(Re(s)\ge 0\).

  • The RHS in (105) is analytic for \(Re(s)<0\) and continuous for \(Re(s)\le 0\).

  • For large s, both sides are \(O(s^{L+K_{0}+K_{1}+K_{2}})\) in their respective half-planes.

Thus, Liouville’s theorem [14, Theorem 10.52] implies that

$$\begin{aligned}&D_{1}(-s)D_{2}(-s)\left[ Z_{w}(r,s)(D_{0}(s)-rpN_{0}(s))-D_{0}(s)e^{-sw}\right] \\&\qquad -rq\chi (s)D_{0}(s)D_{2}(-s)N_{1}(-s)\int _{[0,1)}Z_{w}(r,sy)P(V^{(1)}\in \textrm{d}y)\\&\quad = \sum _{l=0}^{L+K_{0}+K_{1}+K_{2}}C_{l}(r)s^{l},\,Re(s)\ge 0,\\&D_{0}(s)D_{1}(-s)[rq\psi (s)N_{2}(-s)\int _{(-\infty ,0)}Z_{w}(r,sy)P(V^{(2)}\in \textrm{d}y)\\&\qquad +rD_{2}(-s)(\frac{1}{1-r}-J^{-}(r,s))]\\&\quad = \sum _{l=0}^{L+K_{0}+K_{1}+K_{2}}C_{l}(r)s^{l},\,Re(s)\le 0, \end{aligned}$$

where \(C_{l}(r)\), \(l=0,1,\ldots ,L+K_{0}+K_{1}+K_{2}\), are unknown coefficients to be derived. For \(s=0\), simple computations imply that

$$\begin{aligned} C_{0}(r)=\frac{r}{1-r}(1-p-q\chi (0))\prod _{j=1}^{L}m_{j}\prod _{j_{0}=1}^{K_{0}}t_{j_{0}}^{(0)}\prod _{j_{1}=1}^{K_{1}}t_{j_{1}}^{(1)}\prod _{j_{2}=1}^{K_{2}}t_{j_{2}}^{(2)}. \end{aligned}$$

Thus, for \(Re(s)\ge 0\), we have

$$\begin{aligned} Z_{w}(r,s)=K(r,s)\int _{[0,1)}Z_{w}(r,sy_{1})\mu (\textrm{d}y_{1})+L(r,s), \end{aligned}$$
(106)

where

$$\begin{aligned}&K(r,s):=\frac{rq\chi (s)D_{0}(s)N_{1}(-s)}{D_{1}(-s)(D_{0}(s)-rpN_{0}(s))},\\&L(r,s):=\frac{D_{0}(s)e^{-sw}+\sum _{l=0}^{L+K_{0}+K_{1}+K_{2}}C_{l}(r)s^{l}}{D_{0}(s)-rpN_{0}(s)}. \end{aligned}$$

The functional equation in (106) has the same form as the one in [12, eq. (13)] and can be treated similarly. Note that in our case, for \(r\in [0,1)\),

$$\begin{aligned} |K(r,s)|\le \frac{rq|\chi (s)||D_{0}(s)||N_{1}(-s)|}{|D_{1}(-s)|(|D_{0}(s)|-rp|N_{0}(s)|)}\rightarrow \frac{rq\chi (0)}{1-rp}<\frac{rq}{1-rp}\le r<1, \end{aligned}$$

as \(s\rightarrow 0\). Thus, there is a positive constant \(\epsilon \) such that for s satisfying \(|s|\le \epsilon \), we have \(|K(r,s)|\le \bar{r}:=\frac{1+r}{2}\). Note that \(K(r,s)\), \(L(r,s)\) satisfy the same properties as those in [12]; thus, proceeding similarly and iterating (106) n times, we obtain

$$\begin{aligned} Z_{w}(r,s){} & {} =L(r,s)+\sum _{j=1}^{n}\int \ldots \int _{[0,1)^{j}}K(r,s)\prod _{h=1}^{j-1}\nonumber \\{} & {} \quad K(r,sy_{1}\ldots y_{h})L(r,sy_{1}\ldots y_{j})\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{j})\nonumber \\{} & {} \quad +\int \ldots \int _{[0,1)^{n+1}}K(r,s)\prod _{h=1}^{n}K(r,sy_{1}\ldots y_{h})\nonumber \\{} & {} \quad Z(r,sy_{1}\ldots y_{n+1})\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{n+1}). \end{aligned}$$
(107)

We will let \(n\rightarrow \infty \) to obtain \(Z_{w}(r,s)\), so we need to verify the convergence of the summation in the second term in (107), as well as to obtain the limit of the third term in (107). Following the lines in [12, pp. 9-10], we can finally obtain,

$$\begin{aligned}{} & {} Z_{w}(r,s)=L(r,s)+\sum _{j=1}^{\infty }\int \ldots \int _{[0,1)^{j}}K(r,s)\nonumber \\{} & {} \quad \prod _{h=1}^{j-1}K(r,sy_{1}\ldots y_{h})L(r,sy_{1}\ldots y_{j})\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{j}). \end{aligned}$$
(108)
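The contraction property \(|K(r,s)|\le \bar{r}<1\) is what makes the series (108) converge. The same mechanism can be illustrated by iterating a toy scalar analogue of (106) on a grid; everything below (the choice of K, L, and a uniform measure \(\mu \) on [0, 1)) is an illustrative assumption, not the model's actual kernels:

```python
import math
import random

def solve_fixed_point(K, L, n_grid=80, s_max=10.0, n_mc=400, iters=40, seed=2):
    """Iterate the toy analogue of (106): Z(s) = K(s) * E[Z(sY)] + L(s),
    Y ~ U(0,1).  Since sup_s |K(s)| < 1, the map is a contraction and the
    sup-norm differences between successive iterates decay geometrically."""
    rng = random.Random(seed)
    ys = [rng.random() for _ in range(n_mc)]              # fixed sample of Y
    grid = [s_max * i / (n_grid - 1) for i in range(n_grid)]

    def interp(z, s):
        # piecewise-linear interpolation of the grid function z at s
        x = s / s_max * (n_grid - 1)
        i = min(int(x), n_grid - 2)
        frac = x - i
        return z[i] * (1 - frac) + z[i + 1] * frac

    z, diffs = [0.0] * n_grid, []
    for _ in range(iters):
        z_new = [K(s) * sum(interp(z, s * y) for y in ys) / n_mc + L(s)
                 for s in grid]
        diffs.append(max(abs(a - b) for a, b in zip(z_new, z)))
        z = z_new
    return z, diffs

# illustrative kernels with sup|K| = 0.6 < 1
z, diffs = solve_fixed_point(K=lambda s: 0.6 * math.exp(-s),
                             L=lambda s: 1.0 / (1.0 + s))
print(diffs[0], diffs[-1])
```

The successive differences shrink by at least the contraction factor per iteration, mirroring the geometric bound used in the text to justify letting \(n\rightarrow \infty \).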

We still need to derive the remaining coefficients \(C_{l}(r)\), \(l=1,\ldots ,L+\sum _{k=0}^{2}K_{k}\): First, by using Rouché’s theorem [14, Theorem 3.42, p. 116], we can show that \(D_{0}(s)-rpN_{0}(s)=0\) has \(K_{0}\) roots, say \(\delta _{1}(r),\ldots ,\delta _{K_{0}}(r)\), with \(Re(\delta _{j}(r))\ge 0\), \(j=1,\ldots ,K_{0}\). Thus, we can obtain \(K_{0}\) equations:

$$\begin{aligned}&-rq\chi (\delta _{j}(r))D_{0}(\delta _{j}(r))D_{2}(-\delta _{j}(r))N_{1}(-\delta _{j}(r))\\&\qquad \int _{[0,1)}Z_{w}(r,\delta _{j}(r)y)P(V^{(1)}\in \textrm{d}y)\\&\quad = D_{1}(-\delta _{j}(r))D_{2}(-\delta _{j}(r))D_{0}(\delta _{j}(r))e^{-\delta _{j}(r)w} \\&\qquad +\sum _{l=0}^{L+K_{0}+K_{1}+K_{2}}C_{l}(r)(\delta _{j}(r))^{l}. \end{aligned}$$

Similarly, for \(s=t_{j_{1}}^{(1)}\), \(j_{1}=1,\ldots ,K_{1}\),

$$\begin{aligned}&-rq\chi (t_{j_{1}}^{(1)})D_{0}(t_{j_{1}}^{(1)})D_{2}(-t_{j_{1}}^{(1)})N_{1}(-t_{j_{1}}^{(1)}) \int _{[0,1)}Z_{w}(r,t_{j_{1}}^{(1)}y)P(V^{(1)}\in \textrm{d}y)\\&\quad = \sum _{l=0}^{L+K_{0}+K_{1}+K_{2}}C_{l}(r)(t_{j_{1}}^{(1)})^{l}. \end{aligned}$$

For \(s=t_{j_{2}}^{(2)}\), \(j_{2}=1,\ldots ,K_{2}\),

$$\begin{aligned}&-rq\psi (t_{j_{2}}^{(2)})D_{0}(t_{j_{2}}^{(2)})D_{1}(-t_{j_{2}}^{(2)})N_{2}(-t_{j_{2}}^{(2)})\int _{(-\infty ,0)}Z_{w}(r,t_{j_{2}}^{(2)}y)P(V^{(2)}\in \textrm{d}y)\\&\quad = \sum _{l=0}^{L+K_{0}+K_{1}+K_{2}}C_{l}(r)(t_{j_{2}}^{(2)})^{l}, \end{aligned}$$

while for \(s=m_{j}\), \(j=1,\ldots ,L\),

$$\begin{aligned} \sum _{l=0}^{L+K_{0}+K_{1}+K_{2}}C_{l}(r)(m_{j})^{l}=0. \end{aligned}$$

By inserting the expression in (108) where needed, we obtain a system of \(L+K_{0}+K_{1}+K_{2}\) equations for the coefficients \(C_{l}(r)\), \(l=1,\ldots ,L+K_{0}+K_{1}+K_{2}\).

6.1 A mixed-autoregressive case

Consider first a simple version of the recursion (4), i.e. \(W_{n+1}=[V_{n}W_{n}+B_{n}-A_{n}]^{+}\), where now \(P(V_{n}=a)=p_{1}\), \(P(V_{n}\in [0,1))=p_{2}\), and \(P(V_{n}<0)=1-p_{1}-p_{2}\), with \(a\in (0,1)\), \(0\le p_{1}\le 1\), \(0\le p_{2}\le 1\), \(p_{1}+p_{2}\le 1\). (The general version of (4) will be considered in Remark 11.) Note that the case \(a=1\) was analysed in [12]. In the following, we fill this gap in the literature by analysing the case \(a\in (0,1)\), which we call mixed-autoregressive, in the sense that the resulting functional equation contains both the term \(Z_{w}(r,as)\) and the term \(\int _{[0,1)}Z_{w}(r,sy)P(V\in \textrm{d}y)\). Let \(V^{+}{\mathop {=}\limits ^\textrm{def}}(V|V\in [0,1))\) and \(V^{-}{\mathop {=}\limits ^\textrm{def}}(V|V<0)\). Then, for \(Re(s)=0\), \(r\in [0,1)\), we have

$$\begin{aligned} Z_{w}(r,s)-e^{-sw}= & {} rp_{1}\phi _{Y}(s)Z_{w}(r,as)+rp_{2}\phi _{Y}(s)\int _{[0,1)}Z_{w}(r,sy)P(V^{+}\in \textrm{d}y) \nonumber \\{} & {} +r(1-p_{1}-p_{2})\phi _{Y}(s)\int _{(-\infty ,0)}Z_{w}(r,sy)P(V^{-}\in \textrm{d}y)\nonumber \\{} & {} +r\left( \frac{1}{1-r}-J^{-}(r,s)\right) , \end{aligned}$$
(109)

where \(\{Y_{n}=B_{n}-A_{n}\}_{n\in \mathbb {N}_{0}}\) are i.i.d. random variables with LST \(\phi _{Y}(s):=\frac{N_{Y}(s)}{D_{Y}(s)}\), with \(D_{Y}(s):=\prod _{i=1}^{L}(s-t_{i})\prod _{j=1}^{M}(s-s_{j})\). Without loss of generality, we assume that \(Re(t_{i})>0\), \(i=1,\ldots ,L\), \(Re(s_{j})<0\), \(j=1,\ldots ,M\). Thus, (109) becomes

$$\begin{aligned} \begin{array}{l} D_{Y}(s)(Z_{w}(r,s)-e^{-sw})-rp_{1}N_{Y}(s)Z_{w}(r,as)-rp_{2}N_{Y}(s)\int _{[0,1)}Z_{w}(r,sy)P(V^{+}\in \textrm{d}y)\\ =r(1-p_{1}-p_{2})N_{Y}(s)\int _{(-\infty ,0)}Z_{w}(r,sy)P(V^{-}\in \textrm{d}y)+rD_{Y}(s)\left( \frac{1}{1-r}-J^{-}(r,s)\right) .\end{array} \nonumber \\ \end{aligned}$$
(110)

It is readily seen that:

  • The LHS in (110) is analytic for \(Re(s)>0\) and continuous for \(Re(s)\ge 0\).

  • The RHS in (110) is analytic for \(Re(s)<0\) and continuous for \(Re(s)\le 0\).

  • For large s, both sides are \(O(s^{L+M})\) in their respective half-planes.

Thus, Liouville’s theorem [14, Theorem 10.52] implies that for \(Re(s)\ge 0\),

$$\begin{aligned}{} & {} D_{Y}(s)(Z_{w}(r,s)-e^{-sw})-rp_{1}N_{Y}(s)Z_{w}(r,as)-rp_{2}N_{Y}(s)\int _{[0,1)}\nonumber \\{} & {} Z_{w}(r,sy)P(V^{+}\in \textrm{d}y)=\sum _{l=0}^{M+L}C_{l}(r)s^{l}, \end{aligned}$$
(111)

and for \(Re(s)\le 0\),

$$\begin{aligned}{} & {} r(1-p_{1}-p_{2})N(s)\int _{(-\infty ,0)}Z_{w}(r,sy)P(V^{-}\in \textrm{d}y)+rD_{Y}(s)(\frac{1}{1-r}-J^{-}(r,s))\nonumber \\{} & {} \quad =\sum _{l=0}^{M+L}C_{l}(r)s^{l}. \end{aligned}$$
(112)

By using either (111) or (112) for \(s=0\), we obtain,

$$\begin{aligned} C_{0}(r)=\frac{r(1-p_{1}-p_{2})}{1-r}\prod _{i=1}^{L}t_{i}\prod _{j=1}^{M}s_{j}. \end{aligned}$$

Denoting by \(\mu \) the probability measure on [0, 1) induced by \(V^{+}\), expression (111) can be written as

$$\begin{aligned} Z_{w}(r,s)=p_{1}K(r,s)Z_{w}(r,as)+p_{2}K(r,s) \int _{[0,1)}Z_{w}(r,sy_{1})\mu (\textrm{d}y_{1})+L_{w}(r,s), \nonumber \\ \end{aligned}$$
(113)

where

$$\begin{aligned} K(r,s):=r\phi _{Y}(s),\,\,L_{w}(r,s):=e^{-sw}+\frac{\sum _{l=0}^{M+L}C_{l}(r)s^{l}}{D_{Y}(s)}. \end{aligned}$$

Our aim is to solve (113), which combines the model in [8] with those in [5, 12]: in the functional equation, the unknown function \(Z_{w}(r,s)\) appears both as \(Z_{w}(r,as)\) and inside \(\int _{[0,1)}Z_{w}(r,sy)\mu (\textrm{d}y)\). Let, for \(i,j=0,1,\ldots ,\)

$$\begin{aligned} f_{i,j}(s):=&a^{i}\prod _{k=1}^{j}y_{k}s,\,y_{k}\in [0,1),k=1,\ldots ,j, \\ F(r,f_{i,j}(s)):=&\left\{ \begin{array}{ll} Z_{w}(r,a^{i}s),&{}j=0, \\ \int \ldots \int _{[0,1)^{j}}Z_{w}(r,f_{i,j}(s))\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{j}),&{}j\ge 1, \end{array}\right. \end{aligned}$$

where \(f_{i,0}(s)=a^{i}s\) (i.e. \(\prod _{k=1}^{0}y_{k}:=1\)). Moreover, \(f_{i,j}(f_{k,l}(s))=f_{i+k,j+l}(s)=f_{k,l}(f_{i,j}(s))\). Then, (113) becomes

$$\begin{aligned} F(r,s)=p_{1}K(r,s)F(r,f_{1,0}(s))+p_{2}K(r,s)F(r,f_{0,1}(s))+L_{w}(r,s), \end{aligned}$$
(114)

where \(F(r,s)=F(r,f_{0,0}(s))=Z_{w}(r,s)\). Iterating (114) \(n-1\) times yields,

$$\begin{aligned} F(r,s)= & {} \sum _{k=0}^{n}p_{1}^{k}p_{2}^{n-k}G_{k,n-k}(s)F(r,f_{k,n-k}(s))\nonumber \\{} & {} \quad +\sum _{k=0}^{n-1}\sum _{m=0}^{k}p_{1}^{m}p_{2}^{k-m}G_{m,k-m}(s)\tilde{L}(r,f_{m,k-m}(s)), \end{aligned}$$
(115)

where the \(G_{k,n-k}(s)\) are defined recursively as follows (with \(G_{-1,.}(s) = G_{.,-1}(s) \equiv 0\), \(G_{0,0}(s)=1\)):

$$\begin{aligned} G_{1,0}(s):=&K(r,s),\,\,\,\, G_{0,1}(s):=K(r,s), \\ G_{k+1,n-k}(s)=&G_{k,n-k}(s) \tilde{K}(r,f_{k,n-k}(s))+G_{k+1,n-1-k}(s) \tilde{K}(r,f_{k+1,n-1-k}(s)),\\ G_{k,n+1-k}(s)=&G_{k-1,n+1-k}(s) \tilde{K}(r,f_{k-1,n+1-k}(s))+G_{k,n-k}(s) \tilde{K}(r,f_{k,n-k}(s)), \end{aligned}$$

where also

$$\begin{aligned} \tilde{K}(r,f_{i,j}(s)):=&\left\{ \begin{array}{ll} K(r,a^{i}s),&{}j=0, \\ \int \ldots \int _{[0,1)^{j}}K(r,f_{i,j}(s))\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{j}),&{}j\ge 1, \end{array}\right. \end{aligned}$$

and

$$\begin{aligned} \begin{array}{rl} \tilde{L}(r,f_{i,j}(s)):=&{}\left\{ \begin{array}{ll} L_{w}(r,a^{i}s),&{}j=0, \\ \int \ldots \int _{[0,1)^{j}}L_{w}(r,f_{i,j}(s))\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{j}),&{}j\ge 1. \end{array}\right. \end{array} \end{aligned}$$

It can easily be verified that \(G_{k,n-k}(s)\) is a sum of \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) terms, each of which is a product of n values of \(\tilde{K}(r,f_{.,.}(.))\), which are related to the LST \(\phi _{Y}(.)\). We mention that our framework is related to the one developed in [1], with the difference that the functions \(f_{i,j}(s)\) (for \(j>0\)) are more complicated than the corresponding \(a_{i}(z)\) in [1], which introduces additional difficulties in solving (114).
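The combinatorial structure of \(G_{k,n-k}(s)\) can be made concrete by computing path weights recursively. The sketch below uses a generic scalar weight in place of \(\tilde{K}(r,f_{.,.}(.))\); with all weights equal to one, it reduces to counting the \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) lattice paths:

```python
from math import comb

def path_weights(n, ktilde):
    """Scalar skeleton of the recursion for G_{k,n-k}: the total weight
    G[k][m] of all lattice paths from (0,0) to (k,m) using unit steps
    (1,0), (0,1), where each step is weighted by ktilde evaluated at the
    point it leaves (ktilde is a generic placeholder for K-tilde)."""
    G = [[0.0] * (n + 1) for _ in range(n + 1)]
    G[0][0] = 1.0
    for total in range(1, n + 1):
        for k in range(total + 1):
            m = total - k
            w = 0.0
            if k > 0:
                w += G[k - 1][m] * ktilde(k - 1, m)   # horizontal step into (k,m)
            if m > 0:
                w += G[k][m - 1] * ktilde(k, m - 1)   # vertical step into (k,m)
            G[k][m] = w
    return G

# with all weights equal to one, G[k][m] counts the binom(k+m, k) paths
G = path_weights(8, lambda k, m: 1.0)
print(G[3][5])  # 56.0 = comb(8, 3)
```

Bounding each step weight, as done in the text, immediately bounds every entry of this table, which is the key to the convergence argument below.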

In what follows, we let \(n\rightarrow \infty \) in (115) so as to obtain an expression for F(r, s). In doing so, we have to verify the convergence of the summation in the second term on the right-hand side of (115), as well as to estimate the limit of the corresponding first term. The key ingredient is to show that \(G_{k,n-k}(s)\) is bounded. Similarly to [1, p. 8], \(G_{k,n-k}(s)\) can be interpreted as the total weight of all \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) paths from (0, 0) to \((k,n-k)\). Let \(C_{k,n-k}\) be the set of all paths leading from (0, 0) to \((k,n-k)\), where a path from (0, 0) to \((k,n-k)\) is defined as a sequence of grid points starting from (0, 0) and ending at \((k,n-k)\), taking only unit steps (1, 0), (0, 1). Then, a typical term (one of the \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) terms) of \(G_{k,n-k}(s)\) has the following form:

$$\begin{aligned} \int \ldots \int _{[0,1)^{m}}\prod _{(l,m)\in C_{k,n-k}}\tilde{K}(r,a^{l}y_{1}\ldots y_{m}s)\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{n-k}), \end{aligned}$$

for \(m=0,\ldots , n-k\), and \(l=0,\ldots ,k\) with \((l,m)\ne (k,n-k)\). For \(Re(s)\ge 0\), \(M_{1}(r,s):=\sup _{y\in [0,1]}|K(r,sy)|<\infty \), \(M_{2}(r,s):=\sup _{y\in [0,1]}|L_{w}(r,sy)|<\infty \), and \(|K(r,s)|\le r<1\). Then, for \(a\in (0,1)\), \(M_{l}(r,a^{i}s)<M_{l}(r,s)\), \(i\ge 1\), \(l=1,2\). Following [12],

$$\begin{aligned}&\left| \int \ldots \int _{[0,1)^{m}}\prod _{(l,m)\in C_{k,n-k}}\tilde{K}(r,a^{l}y_{1}\ldots y_{m}s)\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{n-k})\right| \\&\quad \le E\left[ \prod _{(l,m)\in C_{k,n-k}}|\tilde{K}(r,a^{l}Z_{1}\ldots Z_{m}s)|\right] , \end{aligned}$$

where \(Z_{1},Z_{2},\ldots \) is a sequence of i.i.d. random variables with the same distribution as \(V^{+}\). Following the same procedure as in [12, pp. 8-9], we can show that each path weight is bounded, implying that \(G_{k,n-k}(s)\) is also bounded. As \(n\rightarrow \infty \), this implies that the first term on the right-hand side of (115) vanishes. Thus,

$$\begin{aligned} F(r,s)=\sum _{k=0}^{\infty }\sum _{m=0}^{k}p_{1}^{m}p_{2}^{k-m}G_{m,k-m}(s)\tilde{L}(r,f_{m,k-m}(s)). \end{aligned}$$
(116)

We are now ready to obtain the coefficients \(C_{l}(r)\), \(l=1,\ldots ,M+L\). For \(s=t_{i}\), \(i=1,\ldots ,L\), in (111), we have

$$\begin{aligned} -rp_{1}N_{Y}(t_{i})Z_{w}(r,at_{i})-rp_{2}N_{Y}(t_{i})\int _{[0,1)}Z_{w}(r,t_{i}y)\mu (\textrm{d}y)=\sum _{l=0}^{M+L}C_{l}(r)t_{i}^{l}. \nonumber \\ \end{aligned}$$
(117)

Setting \(s=s_{j}\), \(j=1,\ldots ,M\), in (112) yields

$$\begin{aligned} r(1-p_{1}-p_{2})N(s_{j})\int _{(-\infty ,0)}Z_{w}(r,s_{j}y)P(V^{-}\in \textrm{d}y)=\sum _{l=0}^{M+L}C_{l}(r)s_{j}^{l}. \end{aligned}$$
(118)

Equations (117), (118) constitute a system of equations to obtain the unknown coefficients \(C_{l}(r)\), \(l=1,\ldots ,M+L\).
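As a sanity check on the mixed-autoregressive model, one can simulate the recursion directly. The distributional choices below (exponential \(B_{n}\), \(A_{n}\) taken independent, and a uniform negative part for \(V_{n}\)) are illustrative assumptions only:

```python
import random

def simulate_mixed(a=0.7, p1=0.3, p2=0.5, lam=1.5, mu=1.0,
                   n=200000, burn=20000, seed=3):
    """Monte Carlo for the mixed-autoregressive recursion
    W_{n+1} = [V_n W_n + B_n - A_n]^+ : V_n = a w.p. p1, V_n ~ U(0,1) w.p. p2,
    and V_n = -U(0,1) w.p. 1 - p1 - p2 (an illustrative negative part);
    B_n ~ exp(mu) and A_n ~ exp(lam), independent, as in Sect. 6.1."""
    rng = random.Random(seed)
    w, acc, cnt = 0.0, 0.0, 0
    for i in range(n):
        u = rng.random()
        if u < p1:
            v = a                  # autoregressive branch
        elif u < p1 + p2:
            v = rng.random()       # V^+ branch
        else:
            v = -rng.random()      # V^- branch
        w = max(0.0, v * w + rng.expovariate(mu) - rng.expovariate(lam))
        if i >= burn:
            acc += w
            cnt += 1
    return acc / cnt

est = simulate_mixed()
print("estimated stationary E[W] ~", est)
```

Since \(|V_{n}|<1\) almost surely in all three branches, the process is stable and the long-run average converges.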

Remark 11

We now return to the general case of recursion (4). The analysis is still applicable when we assume that with probability \(p_{1}\), \(V^{(0)}_{n}\in \{a_{1},\ldots ,a_{M}\}\), with \(a_{k}\in (0,1)\), and \(P(V^{(0)}_{n}=a_{k})=q_{k}\), \(k=1,\ldots ,M\). Then, (113) takes the following form

$$\begin{aligned} Z_{w}(r,s)=p_{1}K(r,s)\sum _{k=1}^{M}q_{k}Z_{w}(r,a_{k}s)+p_{2}K(r,s) \int _{[0,1)}Z_{w}(r,sy_{1})\mu (\textrm{d}y_{1})+L_{w}(r,s). \nonumber \\ \end{aligned}$$
(119)

Then, by setting \(h_{j}:=p_{1}q_{j}\), \(j=1,\ldots ,M\), \(h_{M+1}:=p_{2}\), \(f_{i_{1},\ldots ,i_{M},i_{M+1}}(s):=a_{1}^{i_{1}}\ldots a_{M}^{i_{M}}\prod _{j=1}^{i_{M+1}}y_{j}s\), and \(e_{j}^{(M+1)}\) an \(1\times (M+1)\) row vector with 1 at the jth position and all the other entries equal to zero, (119) becomes

$$\begin{aligned} F(r,s)=K(r,s)\sum _{j=1}^{M+1}h_{j}F(r,f_{e_{j}^{(M+1)}}(s))+L_{w}(r,s). \end{aligned}$$
(120)

Note that (120) has the same form as the functional equations treated in [1, eq. (2)]. After n iterations (120) becomes

$$\begin{aligned} F(r,s)=\sum _{i_{1}+\ldots +i_{M}+i_{M+1}=n+1}h_{1}^{i_{1}}\ldots h_{M}^{i_{M}}h_{M+1}^{i_{M+1}}G_{i_{1},\ldots ,i_{M},i_{M+1}}(s)F(r,f_{i_{1},\ldots ,i_{M},i_{M+1}}(s))\\ +\sum _{k=0}^{n}\sum _{i_{1}+\ldots +i_{M}+i_{M+1}=k}h_{1}^{i_{1}}\ldots h_{M}^{i_{M}}h_{M+1}^{i_{M+1}}G_{i_{1},\ldots ,i_{M},i_{M+1}}(s)\tilde{L}(r,f_{i_{1},\ldots ,i_{M},i_{M+1}}(s)), \end{aligned}$$

where now

$$\begin{aligned} G_{i_{1},\ldots ,i_{M},i_{M+1}}(s)=&\sum _{j=1}^{M+1}\tilde{K}(r,f_{i_{1},\ldots ,i_{j}-1,\ldots ,i_{M+1}}(s))G_{i_{1},\ldots ,i_{j}-1,\ldots ,i_{M+1}}(s),\\ \tilde{K}(r,f_{i_{1},\ldots ,i_{M+1}}(s)):=&\left\{ \begin{array}{ll} K(r,a_{1}^{i_{1}}\ldots a_{M}^{i_{M}}s),&{}i_{M+1}=0, \\ \int \ldots \int _{[0,1)^{i_{M+1}}}K(r,f_{i_{1},\ldots ,i_{M+1}}(s))\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{i_{M+1}}),&{}i_{M+1}\ge 1, \end{array}\right. \\ \tilde{L}(r,f_{i_{1},\ldots ,i_{M+1}}(s)):=&\left\{ \begin{array}{ll} L_{w}(r,a_{1}^{i_{1}}\ldots a_{M}^{i_{M}}s),&{}i_{M+1}=0, \\ \int \ldots \int _{[0,1)^{i_{M+1}}}L_{w}(r,f_{i_{1},\ldots ,i_{M+1}}(s))\mu (\textrm{d}y_{1})\ldots \mu (\textrm{d}y_{i_{M+1}}),&{}i_{M+1}\ge 1, \end{array}\right. \end{aligned}$$

with \(G_{0,\ldots ,0,0}(s):=1\), \(G_{i_{1},\ldots ,i_{M},i_{M+1}}(s)=0\), in case one of the indices becomes \(-1\). Following the above approach and having in mind that the functions \(f_{i_{1},\ldots ,i_{M+1}}(s)\) are commutative contraction mappings on \(\{s\in \mathbb {C};Re(s)\ge 0\}\), \(F(r,s):=Z_{w}(r,s)\) can be derived by using [1, Theorem 3].

Remark 12

Note that in this subsection we have not considered any dependence framework among \(B_{n}\), \(A_{n}\), since our main focus was on introducing the mixed-autoregressive concept and on generalizing the work in [12] by assuming \(a\in (0,1)\) instead of \(a=1\). However, the analysis still applies even when we lift the independence assumption. For example, assume the simple scenario where, with probability \(p_{1}\), \(A_{n}=cB_{n}+J_{n}\), i.e. the interarrival time depends linearly on the service time of the previous customer, with \(c\in (0,1)\), \(J_{n}\sim exp(\delta )\). Then, (113) becomes

$$\begin{aligned} Z_{w}(r,s)= & {} p_{1}K_{1}(r,s)Z_{w}(r,as)\nonumber \\{} & {} +p_{2}K(r,s) \int _{[0,1)}Z_{w}(r,sy_{1})\mu (\textrm{d}y_{1})+L_{w}(r,s), \end{aligned}$$
(121)

where now \(K_{1}(r,s):=r\frac{\delta }{\delta -s}\phi _{B}(\bar{c}s)\), with \(\bar{c}:=1-c\). The rest of the analysis proceeds as above. Clearly, the analysis still applies if \(J_{n}\) has a distribution with rational LST, or under a more general dependence structure, e.g. \(A_{n}=G_{n}(W_{n}+B_{n})+J_{n}\), \(P(G_{n}=\beta _{i})=q_{i}\), \(i=1,\ldots ,M\), or the (random) threshold dependence structure analysed in Sect. 4. The same steps also apply when lifting independence assumptions in the general case analysed in Remark 11.

7 A more general dependence framework

In the following, we consider a more general dependence structure among \(\{B_{n}\}_{n\in \mathbb {N}_{0}}\), \(\{A_{n}\}_{n\in \mathbb {N}_{0}}\). More precisely, assume that

$$\begin{aligned} E(e^{-s A_{n}}|B_{n}=t)=\chi (s)\sum _{i=1}^{N}p_{i}e^{-\psi _{i}(s)t}, \end{aligned}$$
(122)

thus, the interarrival times depend on the service time of the previous customer, so that

$$\begin{aligned} E(e^{-sA_{n}-zB_{n}})=\int _{0}^{\infty }e^{-zt}\chi (s)\sum _{i=1}^{N}p_{i}e^{-\psi _{i}(s)t}\textrm{d}F_{B}(t)=\chi (s)\sum _{i=1}^{N}p_{i}\phi _{B}(z+\psi _{i}(s)), \end{aligned}$$

with \(Re(\psi _{i}(s)+z)>0\). Clearly, \(\chi (0)=1\) and \(\psi _{i}(0)=0\), \(i=1,\ldots ,N\). The component \(e^{-\psi _{i}(s)t}\) depends on the previous service time, while the component \(\chi (s)\) does not.

Note that the above framework recovers some of the cases analysed above. In particular, the case \(A_{n}=c B_{n}+J_{n}\) corresponds to \(N=1\) (so that \(p_{1}=1\)):

$$\begin{aligned}\begin{array}{rl} E(e^{-s A_{n}}|B_{n}=t)=&E(e^{-s (c B_{n}+J_{n})}|B_{n}=t)=E(e^{-sJ_{n}})e^{-cst}, \end{array} \end{aligned}$$

with \(\chi (s):=E(e^{-sJ_{n}})\), \(\psi (s)=cs\).

Similarly, consider the case \(A_{n}=G_{n}B_{n}+J_{n}\), with \(P(G_{n}=\beta _{i})=p_{i}\), \(i=1,\ldots ,N\). Then:

$$\begin{aligned} \chi (s)=E(e^{-sJ_{n}}),\,\,\psi _{i}(s)=\beta _{i}s,\,i=1,\ldots ,N. \end{aligned}$$

Another interesting scenario is the following: given \(B=t\), \(A=\sum _{k=1}^{N_{i}(t)}H_{i,k}\) with probability \(p_{i}\), where \(N_{i}(t)\sim Poisson(\gamma _{i}t)\), \(i=1,\ldots ,N\), and \(\{H_{i,k}\}\) are sequences of i.i.d. random variables with rational LSTs, each distributed like \(H_{i}\). Then,

$$\begin{aligned}\begin{array}{rl} E(e^{-s A_{n}}|B_{n}=t)=&{} \sum _{i=1}^{N}p_{i}E(e^{-s \sum _{k=1}^{N_{i}(t)}H_{i,k}}|B_{n}=t)\\ =&{}\sum _{i=1}^{N}p_{i}\sum _{l_{i}=0}^{\infty }E(e^{-s \sum _{k=1}^{l_{i}}H_{i,k}})\frac{e^{-\gamma _{i}t}(\gamma _{i}t)^{l_{i}}}{l_{i}!}\\ =&{}\sum _{i=1}^{N}p_{i}e^{-\gamma _{i}t(1-E(e^{-sH_{i}}))}, \end{array} \end{aligned}$$

and thus, \(\chi (s)=1\), \(\psi _{i}(s)=\gamma _{i}(1-E(e^{-sH_{i}}))\), \(i=1,\ldots ,N\).
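The compound-Poisson identity used here, \(E(e^{-sA_{n}}|B_{n}=t)=\sum _{i}p_{i}e^{-\gamma _{i}t(1-E(e^{-sH_{i}}))}\), can be checked by simulation; the sketch below takes a single scheme with \(H\sim exp(\eta )\) as an illustrative choice:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mc_lst(s, t, gamma, eta, n=200000, seed=4):
    """Monte Carlo estimate of E(e^{-sA} | B=t) for the compound-Poisson
    scenario A = sum_{k=1}^{N(t)} H_k, N(t) ~ Poisson(gamma*t),
    H_k ~ exp(eta) (the exponential H is an illustrative choice)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        a = sum(rng.expovariate(eta) for _ in range(poisson(rng, gamma * t)))
        acc += math.exp(-s * a)
    return acc / n

s, t, gamma, eta = 0.8, 1.3, 2.0, 1.5
h = eta / (eta + s)                          # E(e^{-sH}) for H ~ exp(eta)
exact = math.exp(-gamma * t * (1.0 - h))     # e^{-gamma t (1 - E e^{-sH})}
mc = mc_lst(s, t, gamma, eta)
print(mc, exact)
```

The Monte Carlo estimate agrees with the closed form up to sampling error, confirming \(\psi (s)=\gamma (1-E(e^{-sH}))\) for this scheme.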

Returning to the stochastic recursion in (1) with \(V_{n}\equiv a\), i.e. \(W_{n+1}=[aW_{n}+B_{n}-A_{n}]^{+}\), where the interarrival times depend on the service time of the previous customer through (122), we have:

$$\begin{aligned} \begin{array}{rl} E(e^{-sW_{n+1}}) =&{}E(e^{-s(aW_{n}+B_{n}-A_{n})})+1-E(e^{-s[aW_{n}+B_{n}-A_{n}]^{-}}) \\ =&{}E(e^{-saW_{n}})E(e^{-s(B_{n}-A_{n})})+1-U_{n}(s)\\ = &{} E(e^{-saW_{n}})\chi (-s)\sum _{i=1}^{N}p_{i}\phi _{B}(s+\psi _{i}(-s))+1-U_{n}(s). \end{array} \end{aligned}$$

Assuming that the limit as \(n\rightarrow \infty \) exists, by focusing on the limiting random variable W, and setting \(Z(s)=E(e^{-sW})\), we come up with the following functional equation:

$$\begin{aligned}\begin{array}{c} Z(s)=Z(as)\chi (-s)\sum _{i=1}^{N}p_{i}\phi _{B}(s+\psi _{i}(-s))+1-U(s).\end{array} \end{aligned}$$

Let \(\chi (s):=\frac{A_{1}(s)}{\prod _{k=1}^{K}(s+\lambda _{k})}\), \(\psi _{i}(s):=\frac{B_{i}(s)}{\prod _{l=1}^{L_{i}}(s+\nu _{l})}\), with \(A_{1}(s)\) a polynomial of degree at most \(K-1\), not sharing the same zeros with the denominator of \(\chi (s)\), and similarly, \(B_{i}(s)\) polynomial of degree at most \(L_{i}-1\), not sharing the same zeros with the denominator of \(\psi _{i}(s)\), for \(i=1,\ldots , N\). Then, for \(Re(s)=0\),

$$\begin{aligned}\begin{array}{c} \prod _{k=1}^{K}(\lambda _{k}-s)Z(s)-A_{1}(-s)Z(as)\sum _{i=1}^{N}p_{i}\phi _{B}(s+\psi _{i}(-s))=1-U(s).\end{array} \end{aligned}$$

By using similar arguments as above, Liouville’s theorem [14, Theorem 10.52] implies that

$$\begin{aligned} \prod _{k=1}^{K}(\lambda _{k}-s)Z(s)-A_{1}(-s)Z(as)\sum _{i=1}^{N}p_{i}\phi _{B}(s+\psi _{i}(-s))=\sum _{j=0}^{K}C_{j}s^{j},\,Re(s)\ge 0. \nonumber \\ \end{aligned}$$
(123)

Setting \(s=0\) yields \(C_{0}=0\). The remaining \(C_{j}\) are found by using the K zeros \(s=\lambda _{k}\), \(k=1,\ldots ,K\). Indeed, setting \(s=\lambda _{k}\), \(k=1,\ldots ,K\), in (123) yields the following system:

$$\begin{aligned} -A_{1}(-\lambda _{k})Z(a\lambda _{k})\sum _{i=1}^{N}p_{i}\phi _{B}(\lambda _{k}+\psi _{i}(-\lambda _{k}))=\sum _{j=1}^{K}C_{j}\lambda _{k}^{j}. \end{aligned}$$
(124)

However, we still need to find \(Z(a\lambda _{k})\), \(k=1,\ldots ,K\). This can be done by iterating

$$\begin{aligned} Z(s)=Z(as)A_{1}(-s)\sum _{i=1}^{N}p_{i}\phi _{B}(s+\psi _{i}(-s))+\frac{\sum _{j=1}^{K}C_{j}s^{j}}{\prod _{k=1}^{K}(\lambda _{k}-s)}. \end{aligned}$$

This results in expressions containing infinite products of the form \(\prod _{m=0}^{\infty }A_{1}(-a^{m}s)\sum _{i=1}^{N}p_{i}\phi _{B}(a^{m}s+\psi _{i}(-a^{m}s))\). Indeed, after the iterations, we get:

$$\begin{aligned} Z(s)= & {} \sum _{j=1}^{K}\sum _{n=0}^{\infty }\frac{C_{j}(a^{n}s)^{j}}{\prod _{k=1}^{K}(\lambda _{k}-a^{n}s)}\prod _{m=0}^{n-1}\left[ A_{1}(-a^{m}s)\sum _{i=1}^{N}p_{i}\phi _{B}(a^{m}s+\psi _{i}(-a^{m}s))\right] \nonumber \\{} & {} +\prod _{m=0}^{\infty }\left[ A_{1}(-a^{m}s)\sum _{i=1}^{N}p_{i}\phi _{B}(a^{m}s+\psi _{i}(-a^{m}s))\right] . \end{aligned}$$
(125)

Note that for large m, \(\phi _{B}(a^{m}s+\psi _{i}(-a^{m}s))\) approaches 1, since \(a^{m}s+\psi _{i}(-a^{m}s)\rightarrow 0\).

Substituting \(s=a\lambda _{k}\), \(k=1,\ldots ,K\) in (125), we obtain \(Z(a\lambda _{k})\). Finally, by substituting the derived expression in (124), we get a system of equations for the unknown coefficients \(C_{j}\), \(j=1,\ldots ,K\).

Remark 13

Note that in the independent case, i.e. \(\psi (s)=0\), the analysis simplifies considerably. The linearly dependent case, i.e. \(A_{n}=\beta _{i}B_{n}+J_{n}\), \(\psi _{i}(s)=\beta _{i}s\), is also easy to handle. If we additionally assume that \(J_{n}\sim exp(\delta )\), then we are interested in the convergence of \(\prod _{m=0}^{\infty }\frac{\delta \phi _{B}(a^{m}\bar{\beta }_{i}s)}{\delta -a^{m}s}\), with \(\bar{\beta }_{i}:=1-\beta _{i}\), which is also straightforward.
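The convergence of such infinite products is easy to observe numerically, since the factors tend to one geometrically fast. A sketch with illustrative parameter values, taking \(B\sim exp(\mu )\) so that \(\phi _{B}(x)=\mu /(\mu +x)\), and \(s<\delta \) so that every factor is positive:

```python
def truncated_product(s, a=0.5, delta=3.0, mu=1.0, beta_bar=0.4, terms=60):
    """Partial products of prod_{m>=0} delta*phi_B(a^m beta_bar s)/(delta - a^m s)
    with phi_B(x) = mu/(mu + x) (B ~ exp(mu)); all parameter values are
    illustrative assumptions."""
    p, partials = 1.0, []
    for m in range(terms):
        x = a ** m * s
        p *= delta * (mu / (mu + beta_bar * x)) / (delta - x)
        partials.append(p)
    return partials

ps = truncated_product(1.0)
print(ps[5], ps[-1])
```

Because the m-th factor differs from 1 by \(O(a^{m})\), a few dozen factors already determine the product to machine precision.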

7.1 Interarrival times dependent on system time

Assume now that

$$\begin{aligned} E(e^{-s A_{n}}|W_{n}+B_{n}=t)=\chi (s)e^{-\psi (s)t}, \end{aligned}$$

and thus, the interarrival time depends on the workload present after the arrival of the previous customer. Therefore,

$$\begin{aligned} E(e^{-sA_{n}-z(W_{n}+B_{n})})=\chi (s)\phi _{B}(z+\psi (s))Z(z+\psi (s)), \end{aligned}$$

with \(Re(z+\psi (s))>0\). Then, for \(Re(s)=0\), the functional equation becomes

$$\begin{aligned} Z(s)-\chi (-s)\phi _{B}(s+\psi (-s))Z(s+\psi (-s))=1-U(s). \end{aligned}$$
(126)

Note that the case where \(A_{n}=c(W_{n}+B_{n})+J_{n}\), \(c\in (0,1)\), \(J_{n}\sim exp(\lambda )\) was recently treated in [7, Section 2]. In that case, \(\chi (s)=\frac{\lambda }{\lambda +s}\) and \(\psi (s)=cs\). For the general case, the functional equation (126) can be treated by following the lines in [1], when \(g(s):=s+\psi (-s)\) is a contraction mapping on the closed positive half-plane.

A more interesting case arises when we assume that the next interarrival time randomly depends on the workload present right after the arrival of the previous customer. More precisely,

$$\begin{aligned} E(e^{-s A_{n}}|W_{n}+B_{n}=t)=\chi (s)\sum _{i=1}^{N}p_{i}e^{-\psi _{i}(s)t}. \end{aligned}$$
(127)

In such a case,

$$\begin{aligned} E(e^{-sA_{n}-z(W_{n}+B_{n})})=\chi (s)\sum _{i=1}^{N}p_{i}\phi _{B}(z+\psi _{i}(s))Z(z+\psi _{i}(s)), \end{aligned}$$

with \(Re(z+\psi _{i}(s))>0\), \(i=1,\ldots ,N\). Then, for \(Re(s)\ge 0\), we have

$$\begin{aligned} \prod _{k=1}^{K}(\lambda _{k}-s) Z(s)-A_{1}(-s)\sum _{i=1}^{N}p_{i}\phi _{B}(s+\psi _{i}(-s))Z(s+\psi _{i}(-s))=\sum _{j=0}^{K}C_{j}s^{j}. \nonumber \\ \end{aligned}$$
(128)

A special case of the dependence relation (127) arises when \(A_{n}=G_{n}(W_{n}+B_{n})+J_{n}\), \(P(G_{n}=\beta _{i})=p_{i}\), \(i=1,\ldots ,N\); see Sect. 2.3. For such a case, \(\chi (s)=\frac{\delta }{\delta +s}\), \(\psi _{i}(s)=\beta _{i}s\), \(i=1,\ldots ,N\). In general, if \(g_{i}(s)=s+\psi _{i}(-s)\), \(i=1,\ldots ,N,\) are commutative contraction mappings on the closed positive half-plane, then following the lines in [1], the functional equation (128) can be handled.

8 An integer-valued reflected autoregressive process and a novel retrial queueing system with dependencies

In this section, we consider the following integer-valued stochastic process \(\{X_{n};n=0,1,\ldots \}\) that is determined by the recursion (5):

$$\begin{aligned} X_{n+1}=\left\{ \begin{array}{ll} \sum _{k=1}^{X_{n}}U_{k,n}+Z_{n}-Q_{n+1},&{}X_{n}>0, \\ Y_{n}-\tilde{Q}_{n+1},&{}X_{n}=0, \end{array}\right. \end{aligned}$$
(129)

where \(Z_{1},Z_{2},\ldots \) and \(Y_{1},Y_{2},\ldots \) are i.i.d. non-negative integer-valued random variables with probability generating functions (pgfs) C(z) and G(z), respectively, and \(Q_{n}\), \(\tilde{Q}_{n}\) are i.i.d. random variables such that

$$\begin{aligned}&P(Q_{n}=0|\sum _{k=1}^{X_{n}}U_{k,n}+Z_{n}=l,X_{n}>0):= \frac{\lambda _{1}}{\lambda _{1}+\alpha _{1}(1-\delta _{0,l})}, \\&P(Q_{n}=1|\sum _{k=1}^{X_{n}}U_{k,n}+Z_{n}=l,X_{n}>0):= \frac{\alpha _{1}(1-\delta _{0,l})}{\lambda _{1}+\alpha _{1}(1-\delta _{0,l})},\\&P(\tilde{Q}_{n}=0|Y_{n}=l,X_{n}=0):= \frac{\lambda _{0}}{\lambda _{0}+\alpha _{0}(1-\delta _{0,l})}, \\&P(\tilde{Q}_{n}=1|Y_{n}=l,X_{n}=0):= \frac{\alpha _{0}(1-\delta _{0,l})}{\lambda _{0}+\alpha _{0}(1-\delta _{0,l})}, \end{aligned}$$

where \(\delta _{0,l}\) denotes the Kronecker delta, i.e. \(\delta _{0,l}=1\) if \(l=0\), and \(\delta _{0,l}=0\) otherwise. Moreover, \(U_{k,n}\) are i.i.d. Bernoulli random variables with parameter \(\xi _{n}\), i.e. \(P(U_{k,n}=1)=\xi _{n}\), \(P(U_{k,n}=0)=1-\xi _{n}\). The \(\xi _{n}\) are themselves i.i.d. random variables with \(P(\xi _{n}=a_{i})=p_{i}\), \(i=1,\ldots ,M\), and \(\sum _{i=1}^{M}p_{i}=1\). As usual, it is assumed that for all n, \(Z_{n}\), \(Y_{n}\), \(U_{k,n}\), \(Q_{n}\), \(\tilde{Q}_{n}\) are independent of each other and of all preceding \(X_{r}\).

Note that (129) can be interpreted as follows: Let \(X_{n}\) be the number of waiting customers in an orbit queue just after the beginning of the nth service, and let \(Q_{n+1}\) (resp. \(\tilde{Q}_{n+1}\)) be the number of orbiting customers that initiate the \((n+1)\)th service when \(X_{n}>0\) (resp. \(X_{n}=0\)). When \(X_{n}>0\) (resp. \(X_{n}=0\)), the first primary customer arrives according to a Poisson process with rate \(\lambda _{1}\) (resp. \(\lambda _{0}\)), and \(Z_{n}\) (resp. \(Y_{n}\)) denotes the number of customers arriving during the nth service, with pgf \(E(z^{Z_{n}}):=C(z)\) (resp. \(E(z^{Y_{n}}):=G(z)\)). The orbiting customers become impatient during the nth service. In particular, there are M schemes that model the impatience behaviour of the customers in orbit during a service time, and with probability \(p_{i}\), \(i=1,\ldots ,M\), the ith scheme is assigned at the beginning of a service. Under the ith impatience scheme, each orbiting customer independently becomes impatient and leaves the system with probability \(1-a_{i}\), \(i=1,\ldots ,M\). Therefore, with probability \(p_{i}\), \(i=1,\ldots ,M\), \(U_{k,n}\) equals 1 with probability \(a_{i}\), and 0 with probability \(1-a_{i}\).
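The derivation below rests on the Bernoulli-thinning identity \(E(z^{\sum _{k=1}^{x}U_{k,n}})=\sum _{i=1}^{M}p_{i}(\bar{a}_{i}+a_{i}z)^{x}\), with \(\bar{a}_{i}:=1-a_{i}\), obtained by conditioning on the scheme and applying the binomial theorem. A quick numerical check (the values of x, z, \(p_{i}\), \(a_{i}\) are arbitrary):

```python
from math import comb

def thinned_pgf(x, z, p, a):
    """E[z^(sum of x Bernoulli(xi) variables)] with xi = a_i w.p. p_i,
    computed by conditioning on the scheme and on the binomial count."""
    return sum(pi * sum(comb(x, j) * ai**j * (1 - ai)**(x - j) * z**j
                        for j in range(x + 1))
               for pi, ai in zip(p, a))

p, a = (0.5, 0.5), (0.3, 0.6)
x, z = 7, 0.4
lhs = thinned_pgf(x, z, p, a)
rhs = sum(pi * (1 - ai + ai * z)**x for pi, ai in zip(p, a))
```

Averaging over the stationary distribution of \(X_{n}\) turns the right-hand side into \(\sum _{i=1}^{M}p_{i}f(\bar{a}_{i}+a_{i}z)\), which is exactly the term appearing in (130).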

Under such a setting, the service time and/or the rate of the Poisson arrival process of customers that join the orbit queue during a service time depend on the orbit size at the beginning of the service. Moreover, the retrieval times also depend on whether the orbit queue was empty or not at the beginning of the last service (i.e. they are exponentially distributed with rate \(\alpha _{0}\) (resp. \(\alpha _{1}\)) when \(X_{n}=0\) (resp. \(X_{n}>0\))). To the best of our knowledge, this is the first time such a retrial model has been considered in the related literature.

Then,

$$\begin{aligned} E(z^{X_{n+1}})=&E(z^{\sum _{k=1}^{X_{n}}U_{k,n}+Z_{n}-Q_{n+1}}1(X_{n}>0))+E(z^{Y_{n}-\tilde{Q}_{n+1}}1(X_{n}=0)) \\ =&E(z^{Z_{n}})\left( \frac{\alpha _{1}}{z(\lambda _{1}+\alpha _{1})}+\frac{\lambda _{1}}{\lambda _{1}+\alpha _{1}}\right) E(z^{\sum _{k=1}^{X_{n}}U_{k,n}}(1-1(X_{n}=0))) \\&+E(z^{Y_{n}-\tilde{Q}_{n+1}}(1(X_{n}=0,Y_{n}>0)+1(X_{n}=0,Y_{n}=0))) \\ =&E(z^{Z_{n}})\frac{\alpha _{1}+z\lambda _{1}}{z(\lambda _{1}+\alpha _{1})}[E(z^{\sum _{k=1}^{X_{n}}U_{k,n}})-E(1(X_{n}=0))]\\&+E(z^{Y_{n}}1(Y_{n}>0))E(1(X_{n}=0))\left[ \frac{\alpha _{0}}{z(\lambda _{0}+\alpha _{0})}+\frac{\lambda _{0}}{\lambda _{0}+\alpha _{0}}\right] \\&+\, E(1(Y_{n}=0))E(1(X_{n}=0))\\ =&E(z^{Z_{n}})\frac{\alpha _{1}+z\lambda _{1}}{z(\lambda _{1}+\alpha _{1})}[E(z^{\sum _{k=1}^{X_{n}}U_{k,n}})-E(1(X_{n}=0))]\\&+E(1(X_{n}=0))\left[ \frac{\alpha _{0}+z\lambda _{0}}{z(\lambda _{0}+\alpha _{0})}E(z^{Y_{n}})+\frac{\alpha _{0}(z-1)}{z(\lambda _{0}+\alpha _{0})}E(1(Y_{n}=0))\right] , \end{aligned}$$

where in the third equality we used the fact that when \(Y_{n}=0\), then \(\tilde{Q}_{n+1}=0\) with certainty. Letting f(z) denote the pgf of the steady-state distribution of \(\{X_{n}\}_{n\in \mathbb {N}_{0}}\), and using \(E(z^{\sum _{k=1}^{X_{n}}U_{k,n}})=\sum _{i=1}^{M}p_{i}f(\bar{a}_{i}+a_{i}z)\) with \(\bar{a}_{i}:=1-a_{i}\), we obtain after some algebra,

$$\begin{aligned} f(z)=\frac{\widehat{C}(z)}{z}\sum _{i=1}^{M}p_{i}f(\bar{a}_{i}+a_{i}z)+\frac{f(0)}{z}\left[ G(0)\frac{\alpha _{0}(z-1)}{\alpha _{0}+\lambda _{0}}+\widehat{G}(z)-\widehat{C}(z)\right] , \nonumber \\ \end{aligned}$$
(130)

where \(\widehat{C}(z)=C(z)\frac{\alpha _{1}+\lambda _{1}z}{\lambda _{1}+\alpha _{1}}\), \(\widehat{G}(z)=G(z)\frac{\alpha _{0}+\lambda _{0}z}{\lambda _{0}+\alpha _{0}}\). After multiplying (130) by z and letting \(z=0\), we obtain

$$\begin{aligned} f(0)=\sum _{i=1}^{M}p_{i}f(\bar{a}_{i}). \end{aligned}$$
(131)

Set \(g(z)=\frac{\widehat{C}(z)}{z}\), \(K(z)=\frac{f(0)}{z}[G(0)\frac{\alpha _{0}(z-1)}{\alpha _{0}+\lambda _{0}}+\widehat{G}(z)-\widehat{C}(z)]\), so that (130) is now written as

$$\begin{aligned} f(z)=g(z)\sum _{i=1}^{M}p_{i}f(\bar{a}_{i}+a_{i}z)+K(z), \end{aligned}$$

which has the same form as the equation in [1, Section 5, p. 19]. Note that \(g(1)=1\) and \(K(1)=0\); thus, the functional equation (130) can be solved by following [1, Theorem 2]:

Theorem 14

The generating function f(z) is given by

$$\begin{aligned} \begin{array}{rl} f(z)=&{}\lim _{n\rightarrow \infty }\sum _{i_{1}+\ldots +i_{M}=n+1}p_{1}^{i_{1}}\ldots p_{M}^{i_{M}}L_{i_{1},\ldots ,i_{M}}(z) \\ &{}+ \sum _{k=0}^{\infty }\sum _{i_{1}+\ldots +i_{M}=k}p_{1}^{i_{1}}\ldots p_{M}^{i_{M}}L_{i_{1},\ldots ,i_{M}}(z)K(1-a_{1}^{i_{1}}\ldots a_{M}^{i_{M}}(1-z)), \end{array} \nonumber \\ \end{aligned}$$
(132)

where \(L_{i_{1},\ldots ,i_{M}}(z)\) are recursively obtained by the relation (5) in [1]. The term f(0) is determined by substituting \(\bar{a}_{i}\), \(i=1,\ldots ,M\), in (132), multiplying both sides by \(p_{i}\), summing over i, and using (131).
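To make Theorem 14 concrete, the iteration underlying (132) can be evaluated numerically. The sketch below assumes, purely for illustration, Poisson arrivals during services (\(C(z)=G(z)=e^{\rho (z-1)}\)) and symmetric rates; it writes \(f(z)=V(z)+f(0)W(z)\) (the equation is linear in the unknown constant \(f(0)\)), iterates (130) to a fixed depth (the maps \(z\mapsto \bar{a}_{i}+a_{i}z\) contract to 1, where \(f(1)=1\)), and then pins down \(f(0)\) through the \(z\rightarrow 0\) limit of (130), i.e. \(f(0)=\sum _{i}p_{i}f(\bar{a}_{i})\):

```python
import math

# Illustrative (hypothetical) parameters: Poisson(rho) arrivals per service,
# symmetric rates lam0 = lam1, alpha0 = alpha1, and M = 2 impatience schemes.
lam1, alpha1, lam0, alpha0, rho = 1.0, 2.0, 1.0, 2.0, 0.5
p, a = (0.5, 0.5), (0.3, 0.6)

C = lambda z: math.exp(rho * (z - 1.0))                       # pgf of Z_n
G = C                                                          # pgf of Y_n
Chat = lambda z: C(z) * (alpha1 + lam1 * z) / (lam1 + alpha1)
Ghat = lambda z: G(z) * (alpha0 + lam0 * z) / (lam0 + alpha0)
g = lambda z: Chat(z) / z
# K(z) = f(0) * Kt(z): the bracket of (130) divided by z
Kt = lambda z: (G(0.0) * alpha0 * (z - 1.0) / (alpha0 + lam0)
                + Ghat(z) - Chat(z)) / z

def VW(z, depth=14):
    """f(z) = V(z) + f(0) W(z): iterate (130) `depth` times, starting
    from f ~ 1 (valid near z = 1, to which the arguments contract)."""
    if depth == 0:
        return 1.0, 0.0
    v = w = 0.0
    for pi, ai in zip(p, a):
        vi, wi = VW(1.0 - ai * (1.0 - z), depth - 1)
        v, w = v + pi * vi, w + pi * wi
    return g(z) * v, g(z) * w + Kt(z)

# Normalisation: f(0) = sum_i p_i f(1 - a_i), the z -> 0 limit of (130).
vals = [VW(1.0 - ai) for ai in a]
num = sum(pi * v for pi, (v, _) in zip(p, vals))
den = sum(pi * w for pi, (_, w) in zip(p, vals))
c = num / (1.0 - den)          # c = f(0)

def f(z):
    v, w = VW(z)
    return v + c * w
```

Substituting the computed f back into (130) leaves only a small residual, which serves as a consistency check on the truncation depth.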

Remark 15

Note that \(\widehat{C}(z)\) (resp. \(\widehat{G}(z)\)) refers to the pgf of the number of primary customers that arrive between successive service initiations when \(X_{n}>0\) (resp. \(X_{n}=0\)). Moreover, we can further assume class-dependent service times, i.e. when an orbiting (resp. primary) customer is the one that occupies the server, the pgf of the number of arriving customers during his/her service time equals \(C_{o}(z)\) (resp. \(C_{p}(z)\)). In such a case, \(\widehat{C}(z)=\frac{\alpha _{1}C_{o}(z)+\lambda _{1}zC_{p}(z)}{\lambda _{1}+\alpha _{1}}\) when \(X_{n}>0\). Similarly, \(\widehat{G}(z)=\frac{\alpha _{0}G_{o}(z)+\lambda _{0}zG_{p}(z)}{\lambda _{0}+\alpha _{0}}\) when \(X_{n}=0\).

Remark 16

Moreover, some very interesting special cases may be deduced from (130). In particular, when \(\alpha _{k}\rightarrow \infty \), \(k=0,1\), then \(\widehat{C}(z)=C(z)\) and \(\widehat{G}(z)=G(z)\), since \(\frac{\alpha _{k}+\lambda _{k}z}{\lambda _{k}+\alpha _{k}}\rightarrow 1\) as \(\alpha _{k}\rightarrow \infty \). Thus, (130) reduces to the functional equation that corresponds to the standard M/G/1 queue generalization in [1, Section 5]. Moreover, one can further assume that only one of the \(\alpha _{k}\) tends to infinity, e.g. \(\alpha _{0}\rightarrow \infty \) with \(\alpha _{1}\) finite. In such a scenario, the server has the flexibility to treat the orbit queue as a typical queue whenever the orbit queue was empty at the beginning of the last service.

8.1 An extension to a two-dimensional case: a priority retrial queue

In the following, we go one step further towards a multidimensional case. In particular, we consider the two-dimensional discrete-time process \(\{(X_{1,n},X_{2,n});n=0,1,\ldots \}\), and assume that only the component \(\{X_{2,n};n=0,1,\ldots \}\) is subject to the autoregressive concept, i.e. we generalize the previous model to incorporate two classes of customers (primary and orbiting customers) and priorities, where orbiting customers are impatient.

Primary customers arrive according to a Poisson process with rate \(\lambda _{1}\) and, if they find the server busy, form a queue while waiting to be served. Retrial customers arrive according to a Poisson process with rate \(\lambda _{2}\) and, upon finding a busy server, join an infinite-capacity orbit queue, from where they retry according to the constant retrial policy, i.e. only the first customer in the orbit queue attempts to connect with the server, after an exponentially distributed time with rate \(\alpha \).

Let \(X_{i,n}\) be the number of customers in queue i (i.e. type i customers) just after the beginning of the nth service, where with \(i=1\) (resp. \(i=2\)) we refer to the primary (resp. orbit) queue. As usual, the server becomes available to the orbiting customers only when there are no customers at the primary queue upon a service completion. We further assume that orbiting customers become impatient during the service of an orbiting customer, according to the machinery described above.

Let also \(A_{i,n}\), \(i=1,2,\) be the number of customers of type i that join the system during the nth service, with pgf \(A(z_{1},z_{2})\), and set \(\lambda :=\lambda _{1}+\lambda _{2}\). Then \(X_{n}:=\{(X_{1,n},X_{2,n});n=0,1,\ldots \}\) satisfies the following recursions:

$$\begin{aligned}&\left\{ \begin{array}{rl} X_{1,n+1}=&{}X_{1,n}+A_{1,n}-1,\,\text { if }X_{1,n}>0,\,A_{1,n}\ge 0, \\ X_{2,n+1}=&{}X_{2,n}+A_{2,n},\,\text { if }X_{2,n}\ge 0,\,A_{2,n}\ge 0, \end{array}\right. \\&\left\{ \begin{array}{rl} X_{1,n+1}=&{}A_{1,n}-1,\,\text { if }X_{1,n}=0,\,A_{1,n}>0, \\ X_{2,n+1}=&{}X_{2,n}+A_{2,n},\,\text { if }X_{2,n}\ge 0, \end{array}\right. \\&\left\{ \begin{array}{rlr} X_{1,n+1}=&{}0,\,\text { if }X_{1,n}=A_{1,n}=0,&{} \\ &{}&{}\text { with probability }\frac{\lambda }{\lambda +\alpha },\\ X_{2,n+1}=&{} X_{2,n}+A_{2,n},\,\text { if }X_{2,n}>0,\,A_{2,n}\ge 0,&{}\\ X_{1,n+1}=&{}0,\,\text { if }X_{1,n}=A_{1,n}=0,&{} \\ &{}&{}\text { with probability }\frac{\alpha }{\lambda +\alpha },\\ X_{2,n+1}=&{} \sum _{k=1}^{X_{2,n}}Y_{k,n}+A_{2,n}-1,\,\text { if }X_{2,n}>0,\,A_{2,n}\ge 0, \end{array}\right. \end{aligned}$$

More precisely, under the ith impatience scheme the abandonment probability equals \(\bar{a}_{i}:=1-a_{i}\), and the ith scheme is chosen with probability \(p_{i}\), \(i=1,\ldots ,M\), i.e. \(P(\xi _{n}=a_{i})=p_{i}\), and \(P(Y_{k,n}=1)=\xi _{n}\), \(P(Y_{k,n}=0)=1-\xi _{n}\). Moreover,

$$\begin{aligned} \left\{ \begin{array}{rlr} X_{1,n+1}=&{}0,\,\text { if }X_{1,n}=A_{1,n}=0,&{} \\ &{}&{}\text { with probability }\frac{\lambda }{\lambda +\alpha },\\ X_{2,n+1}=&{} X_{2,n}+A_{2,n},\,\text { if }X_{2,n}=0,\,A_{2,n}> 0,&{}\\ X_{1,n+1}=&{}0,\,\text { if }X_{1,n}=A_{1,n}=0,&{} \\ &{}&{}\text { with probability }\frac{\alpha }{\lambda +\alpha },\\ X_{2,n+1}=&{} A_{2,n}-1,\,\text { if }X_{2,n}=0,\,A_{2,n}> 0, \end{array}\right. \end{aligned}$$
$$\begin{aligned} \left\{ \begin{array}{rl} X_{1,n+1}=&{}0,\,\text { if }X_{1,n}=A_{1,n}=0, \\ X_{2,n+1}=&{} 0,\,\text { if }X_{2,n}=A_{2,n}= 0. \end{array}\right. \end{aligned}$$

To the best of our knowledge, this is the first time such a priority retrial model has been considered in the related literature.

Let \(F(z_{1},z_{2}):=E(z_{1}^{X_{1,n}}z_{2}^{X_{2,n}})\). Then, using the above recursions and after lengthy but straightforward calculations, we arrive at the following functional equation:

$$\begin{aligned} F(z_{1},z_{2})[z_{1}-A(z_{1},z_{2})]= & {} \frac{\alpha A(0,z_{2})z_{1}}{z_{2}(\lambda +\alpha )}\sum _{i=1}^{M}p_{i}F(0,\bar{a}_{i}+a_{i}z_{2})\nonumber \\{} & {} -\frac{F(0,z_{2})A(0,z_{2})(\alpha +\lambda (1-z_{1}))}{\lambda +\alpha }\nonumber \\{} & {} +\frac{F(0,0)A(0,0)\alpha (z_{2}-1)z_{1}}{z_{2}(\lambda +\alpha )}. \end{aligned}$$
(133)

Then, it is readily seen, by using Rouché’s theorem [14, Theorem 3.42, p. 116], that \(z_{1}-A(z_{1},z_{2})\) has, for fixed \(|z_{2}|\le 1\), exactly one zero, say \(z_{1}=q(z_{2})\), in \(|z_{1}|<1\). Substituting \(z_{1}=q(z_{2})\) in (133), we obtain:

$$\begin{aligned}&F(0,z_{2})\frac{A(0,z_{2})(\alpha +\lambda (1-q(z_{2})))}{\lambda +\alpha }\\&\quad =\frac{\alpha A(0,z_{2})q(z_{2})}{z_{2}(\lambda +\alpha )}\sum _{i=1}^{M}p_{i}F(0,\bar{a}_{i}+a_{i}z_{2})+\frac{F(0,0)A(0,0)\alpha (z_{2}-1)q(z_{2})}{z_{2}(\lambda +\alpha )}, \end{aligned}$$

or equivalently, by setting \(\tilde{F}(z_{2}):=F(0,z_{2})\), \(g(z_{2}):=\frac{\alpha q(z_{2})}{z_{2}(\alpha +\lambda (1-q(z_{2})))}\), \(l(z_{2}):=\frac{A(0,0)\alpha (z_{2}-1)q(z_{2})}{A(0,z_{2})(\alpha +\lambda (1-q(z_{2})))z_{2}}\),

$$\begin{aligned} \tilde{F}(z_{2})=g(z_{2})\sum _{i=1}^{M}p_{i}\tilde{F}(\bar{a}_{i}+a_{i}z_{2})+l(z_{2}). \end{aligned}$$
(134)

Note that (134) has the same form as the equation in [1, Section 5, p. 19], with \(g(1)=1\), \(l(1)=0\). Thus, from [1, Theorem 2], or equivalently by using Theorem 14, we can solve (134) and obtain an expression for \(\tilde{F}(z_{2}):=F(0,z_{2})\). Using that expression in (133), we can finally obtain \(F(z_{1},z_{2})\). Note also that from (134), for \(z_{2}=0\),

$$\begin{aligned} F(0,0)=\sum _{i=1}^{M}p_{i}F(0,\bar{a}_{i}). \end{aligned}$$

By substituting \(z_{2}=\bar{a}_{i}\), \(i=1,\ldots ,M\), in the derived expression for \(F(0,z_{2})\) (i.e. the expression that is obtained by using Theorem 14), we can finally get F(0, 0). Then, by setting \(\bar{a}_{i}+a_{i}z_{2}\) instead of \(z_{2}\), in the expression for \(F(0,z_{2})\), the function \(F(z_{1},z_{2})\) is derived through (133).
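For a concrete \(A(z_{1},z_{2})\), the Rouché root \(q(z_{2})\) can be computed by successive substitution \(z_{1}\leftarrow A(z_{1},z_{2})\) starting from 0, since for fixed \(z_{2}\in [0,1]\) the map is increasing in \(z_{1}\) and the iterates increase monotonically to the smallest non-negative fixed point. A sketch under the hypothetical assumption of exponential service times with rate \(\mu \), so that \(A(z_{1},z_{2})=\mu /(\mu +\lambda _{1}(1-z_{1})+\lambda _{2}(1-z_{2}))\) (all parameter values illustrative):

```python
def q(z2, lam1=0.5, lam2=0.3, mu=2.0, tol=1e-12, max_iter=10_000):
    """Smallest root z1 = q(z2) of z1 = A(z1, z2) in [0, 1], found by
    successive substitution from 0 (monotone convergence)."""
    A = lambda z1: mu / (mu + lam1 * (1.0 - z1) + lam2 * (1.0 - z2))
    x_new = x = 0.0
    for _ in range(max_iter):
        x_new = A(x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new
```

For a general joint pgf \(A(z_{1},z_{2})\) the same iteration applies; only the lambda defining A changes.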

9 Conclusion

In this work, we investigated the transient and/or stationary behaviour of various reflected autoregressive processes. These processes are described by stochastic recursions in which various independence assumptions among the sequences of random variables involved are lifted, and for which a detailed exact analysis can still be provided. This is accomplished by using Liouville’s theorem [14, Theorem 10.52], as well as by stating and solving a Wiener–Hopf boundary value problem [10], or an integral equation. Various options for follow-up research arise. One of them is to consider multivariate extensions of the processes that we introduced; such vector-valued counterparts are anticipated to be highly challenging. In Sect. 8.1, we treated a simple two-dimensional case; however, the autoregressive concept was used only in one component. Another possible line of research concerns scaling limits and asymptotics: one anticipates that, under certain appropriate scalings, a diffusion analysis similar to the one presented in [8] can be applied.