1 Introduction

In fields like probability, operations research, management science and industrial engineering, there are phenomena that are well described by infinite-server systems. Many authors in the literature assume that in most queueing systems the arrival process of customers is a Poisson process and the service times have an exponential distribution. This implies that the future evolution of the system depends only on its present state.

However, in real-world problems, the future evolution of the arrival process and/or the service process is often influenced by past events. This is observed, for instance, during earthquakes, crime waves and riots, in social media trends (Daw and Pender [8] and Rizoiu et al. [20]), and in financial markets, which incorporate some kind of contagion effect (Aït-Sahalia et al. [1]). Recently, the same phenomenon has been used to describe the temporal growth and migration of COVID-19 (see Chiang et al. [6], Escobar [12], Garetto et al. [13]). The work by Eick et al. [9], and the recent works by Daw and Pender [8] and Koops et al. [16], motivated us to consider the service process as a state-dependent Hawkes (sdHawkes) process.

This article studies an infinite-server system wherein the dynamics of the arrival process and the service process are governed by Hawkes processes. The self-exciting property of the Hawkes process is well known for its efficiency in reproducing jump clusters, owing to the dependence of the conditional event arrival rate, or intensity, on the counting process itself. This type of infinite-server system has the potential to represent, for instance, the evolution of the number of people visiting a shopping centre or the number of clients visiting a website. In both cases, the intensity function of the arrival process cannot be considered constant, and the rate of new arrivals increases with the occurrence of each event. Similarly, the intensity function of the service process may be state-dependent, and every new service completion further excites the service process.

Errais et al. [11] derived the Markov property for the two-dimensional process consisting of a Hawkes process and its intensity with an exponential kernel. We generalize their approach by investigating the Markov property of the three-dimensional process consisting of the number of customers in the system, the arrival intensity and the service intensity.

To achieve this, a preliminary problem must be solved. On a filtered probability space, \((\varOmega , {\mathscr {F}},\{{{\mathscr {F}}}_t\}_{t \ge 0},P)\), we consider the filtration generated by the arrival process, \(\{{{\mathscr {F}}}^M_t\}_{t \ge 0}\), the filtration generated by the service process, \(\{{{\mathscr {F}}}^S_t\}_{t \ge 0}\), and the filtration generated by the number of customers in the system, \(\{{{\mathscr {F}}}^N_t\}_{t \ge 0}\). We note that \({{\mathscr {F}}}^N_t \subseteq {{\mathscr {F}}}^M_t \vee {{\mathscr {F}}}^S_t\), which implies that every \({{\mathscr {F}}}^M_t \vee {{\mathscr {F}}}^S_t\)-martingale is an \({{\mathscr {F}}}^N_t\)-martingale. The converse, however, does not hold in general: an \({{\mathscr {F}}}^N_t\)-martingale need not be an \({{\mathscr {F}}}^M_t \vee {{\mathscr {F}}}^S_t\)-martingale. This is the classical problem of the enlargement of filtrations. A standard assumption in this framework is the Immersion property, which ensures that any \({{\mathscr {F}}}^N_t\)-martingale is also an \({{\mathscr {F}}}^M_t \vee {{\mathscr {F}}}^S_t\)-martingale (see Bielecki et al. [4], Jeanblanc et al. [15], and Tardelli [21]). Consequently, following the existing literature, the Immersion property is assumed in the present paper.

Generalizing some results of Daw and Pender [8] and Koops et al. [16], we derive the transient or time-dependent behaviour of the infinite-server system with Hawkes arrivals and sdHawkes services. This is the main contribution of the present paper.

The paper is organized as follows. In Sect. 2, we introduce Hawkes processes and their properties. In Sect. 3, we describe an infinite-server system with Hawkes arrivals and sdHawkes services, discuss the moments of the arrival process and its intensity, and derive the Markov property of the process describing the number of customers in the system using the Immersion property. In Sect. 4, we obtain the joint transient, or time-dependent, distribution of the system size, the arrival intensity and the service intensity for the Hawkes/sdHawkes/\(\infty \) system. Afterwards, we deduce the corresponding time-dependent results for an M/sdHawkes/\(\infty \) system. Finally, in Sect. 5, we present concluding remarks.

2 Hawkes processes

Hawkes processes constitute a particular class of multivariate point processes having numerous applications throughout science and engineering. These are processes characterized by a particular form of stochastic intensity vector, that is, the intensity depends on the sample path of the point process.

Let \((\varOmega , {\mathscr {F}},\{{{\mathscr {F}}}_t\}_{t \ge 0},P)\) be a filtered probability space, where \(\{{{\mathscr {F}}}_t\}_{t \ge 0}\) is a given filtration satisfying the usual conditions. On this space, a counting process \(M = \{M_t, t \ge 0 \}\) is defined, and \(\{{{\mathscr {F}}}^M_t\}_{t \ge 0}\), where \({\mathscr {F}}^M_t = \sigma \{M_u, u \le t\}\), is its associated filtration, and stands for the information available up to time t.

Definition 1

The conditional law of M is defined, for \(\Delta t \rightarrow 0\), as

$$\begin{aligned} P[M_{t+\Delta t} - M_t = m | {{{\mathscr {F}}}}^M_t] = \left\{ \begin{array}{l} \lambda _t \Delta t + o(\Delta t) \\ o(\Delta t) \\ 1 - \lambda _t \Delta t + o(\Delta t) \end{array} \quad \begin{array}{l} m = 1 \\ m > 1\\ m = 0. \end{array} \right. \end{aligned}$$

For a Hawkes process, the intensity \(\lambda _t\) is a function of the past jumps of the process itself, and, in general, assumes the representation

$$\begin{aligned} \lambda _t = \lambda _0 +\int _0^t \varphi (t-u)\mathrm{d}M_u. \end{aligned}$$
(1)

The function \(\varphi (\cdot )\), called the excitation function, is such that \(\varphi (t) \ge 0\) for \(t \ge 0\). It represents the size of the impact of past jumps and belongs to the space of \(L^1\)-integrable functions.

When \(\varphi (t)=0\), M is a counting process with constant intensity. This means that a Poisson process is obtained as a special case of a Hawkes process.

Following the existing literature (see Daw and Pender [8]), we restrict our attention to a function \(\varphi (\cdot )\) defined by an exponential decay kernel. To this end, let the arrival intensity be governed by the dynamics

$$\begin{aligned} \mathrm{d}\lambda _t = r (\lambda ^* - \lambda _t) \mathrm{d}t + B_t \mathrm{d}M_t, \end{aligned}$$
(2)

where \(\lambda ^*\) represents an underlying stationary arrival rate, called baseline intensity. The constant \(r > 0\) describes the decay of the intensity as time passes after an arrival, and \(B_t\), for each \(t \ge 0\), is a positive random variable representing the size of the jump in the intensity upon an arrival. The solution to Eq. (2), given \(\lambda _0\), which is the initial value of \(\lambda _t\), is obtained as

$$\begin{aligned} \lambda _t = \lambda ^* + e^{-rt} (\lambda _0 - \lambda ^*) + \int _0^t B_s e^{-r (t - s)} \mathrm{d}M_s. \end{aligned}$$
(3)

Taking \(\{t_i\}_{i >0}\) as the sequence of jump times of M, if the self-exciting term is such that \(\varphi (t - t_i) \equiv B_{t_i} e^{-r(t-t_i)} \), then Eq. (1) coincides with Eq. (3).
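For readers who wish to experiment with the exponential-kernel dynamics, the following minimal Python sketch simulates the pair \((M, \lambda )\) of Eqs. (2)–(3) by Ogata-style thinning. The parameter values and the choice of i.i.d. exponential jump sizes \(B_t\) are illustrative assumptions, not part of the model above.

```python
import numpy as np

def simulate_hawkes_exp(lam0, lam_star, r, jump_sampler, T, rng):
    """Simulate a Hawkes process with exponential kernel on [0, T] by thinning.
    Returns the jump times and the intensity just after each jump."""
    s, lam_s = 0.0, lam0                         # current time and current intensity
    times, lam_after = [], []
    while True:
        lam_bar = max(lam_s, lam_star)           # dominates lambda_t on the next interval
        w = rng.exponential(1.0 / lam_bar)       # candidate inter-event time
        t = s + w
        if t > T:
            break
        lam_t = lam_star + (lam_s - lam_star) * np.exp(-r * w)   # decayed intensity at t
        if rng.uniform() < lam_t / lam_bar:      # accept candidate with prob lambda_t / lam_bar
            lam_t += jump_sampler(rng)           # self-excitation: add the jump size B_t
            times.append(t)
            lam_after.append(lam_t)
        s, lam_s = t, lam_t                      # move the clock forward in either case
    return np.array(times), np.array(lam_after)

# illustrative run: B_t ~ Exp(mean 0.5), so E[B_t] = 0.5 < r (no explosion, cf. Remark 1)
rng = np.random.default_rng(0)
times, lams = simulate_hawkes_exp(lam0=1.0, lam_star=1.0, r=2.0,
                                  jump_sampler=lambda g: g.exponential(0.5),
                                  T=10.0, rng=rng)
print(len(times), "arrivals on [0, 10]")
```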

Remark 1

Note that the Hawkes process M itself does not have the Markov property. However, assuming that the excitation function \(\varphi (\cdot )\) has an exponential decay kernel, it is possible to prove that the bi-dimensional process \((\lambda , M)\) is a Markov process (Errais et al. [11]). Furthermore, explosion is avoided under the condition \({\mathbb {E}}[B_t] < r\) (Daw and Pender [8]).

3 The model: Hawkes/sdHawkes/\(\infty \) system

The arrival process of the system is the Hawkes process M acting as an input process to an infinite-server system, having arrival intensity given by Eq. (3), and \(\{t_i\}_{i \ge 0}\) as the sequence of the arrival times.

For the service requirement, we consider \(S = \{S_t, t \ge 0\}\), another Hawkes process with service intensity \(\mu _t\), initial intensity \(\mu _0 > 0\), baseline intensity \(\mu ^*\) and exponential excitation function. Taking \(N_t\) as the number of customers in the system at time t, for a constant \(s > 0\) and for \(t > 0\),

$$\begin{aligned} \mu _t = N_t \left( \mu ^* + e^{-st} (\mu _0 - \mu ^*) + \int _0^t C_u e^{-s (t - u)} \mathrm{d}S_u \right) . \end{aligned}$$
(4)

For \(i>0\), \(\tau _i\) denotes the time epoch of the ith customer departure upon service completion, and \(C_{\tau _i}\) is a positive random variable representing the size of that jump. The number of customers whose service is completed on, or before, time t is given by \(S_t\), and \(N_t = M_t - S_t\).

This form of the service intensity \(\mu _t\) allows us to take into account both the crowding of the system, that is, the number of customers in the system at time t, and the experience gained by the service process, which is captured by the Hawkes structure. Such a process is called a state-dependent Hawkes (sdHawkes) process (Li and Cui [17] and Morariu-Patrichi and Pakkanen [18]).

To the best of our knowledge, this is the first time in the literature that Hawkes processes are introduced to model the experience in the arrivals and services of an infinite-server system. Furthermore, we note that, at time t, the sdHawkes process for service starts anew whenever the number of customers in the system becomes one, i.e. \(N_t =1\). Therefore, we denote this queueing model by Hawkes/sdHawkes/\(\infty \) in Kendall's notation.

Since the service process is state-dependent, only the stability of the arrival process is needed for the stability of the infinite-server system (Daw and Pender [8]); that is, \({\mathbb {E}}[B_t] < r\) for all t.

3.1 Moments of \(M_t\) and \(\lambda _t\)

This subsection is devoted to deriving results regarding the moments of the process M and its intensity \(\lambda \). These results can be proved by taking into account the results of Dassios and Zhao [7] and generalizing those obtained in Daw and Pender [8], Section 2.

Proposition 1

Given a Hawkes process \((M_t, \lambda _t)\), with dynamics given by Eq. (3) and \({\mathbb {E}}[B_t] < r\), for all \(t \ge 0\),

$$\begin{aligned} {\mathbb {E}}[\lambda _t]= & {} \frac{r \lambda ^*}{r - {\mathbb {E}}[B_t]} + \left( \lambda _0 - \frac{r \lambda ^*}{r - {\mathbb {E}}[B_t]} \right) e^{ - t \left( r - {\mathbb {E}}[B_t]\right) }, \end{aligned}$$
(5)
$$\begin{aligned} \mathrm{Var}[\lambda _t]= & {} \frac{({\mathbb {E}}[B_t])^2}{r - {\mathbb {E}}[B_t]} \left[ \left( \frac{r \lambda ^*}{2 \left( r - {\mathbb {E}}[B_t] \right) } - \lambda _0\right) e^{ - 2 t \left( r - {\mathbb {E}}[B_t]\right) } \right. \nonumber \\&\quad - \left. \left( \frac{r \lambda ^*}{r - {\mathbb {E}}[B_t]} - \lambda _0 \right) e^{ - t \left( r - {\mathbb {E}}[B_t]\right) } + \frac{r \lambda ^*}{2 \left( r - {\mathbb {E}}[B_t] \right) } \right] , \end{aligned}$$
(6)
$$\begin{aligned} {\mathbb {E}}[M_t]= & {} \frac{1}{r - {\mathbb {E}}[B_t]} \left[ r \lambda ^* t + \left( \lambda _0 - \frac{r \lambda ^*}{r - {\mathbb {E}}[B_t]} \right) \left( 1 - e^{ - t \left( r - {\mathbb {E}}[B_t]\right) } \right) \right] , \end{aligned}$$
(7)
$$\begin{aligned} \mathrm{Var}[M_t]= & {} \frac{1}{(r - {\mathbb {E}}[B_t])^3} \left\{ r^3 \lambda ^* t + r^2 \left( \lambda _0 - \frac{r \lambda ^*}{2 \left( r - {\mathbb {E}}[B_t] \right) } \right) \left( 1 - e^{ - 2t \left( r - {\mathbb {E}}[B_t]\right) } \right) \right. \nonumber \\&- 2r {\mathbb {E}}[B_t] (r - {\mathbb {E}}[B_t]) \left( \lambda _0 - \frac{r \lambda ^*}{r - {\mathbb {E}}[B_t]} \right) t e^{ - t \left( r - {\mathbb {E}}[B_t]\right) } \nonumber \\&\left. + \left( \left( r^2 - ({\mathbb {E}}[B_t] )^2 \right) \lambda _0 - r \lambda ^* \frac{r^2 - ({\mathbb {E}}[B_t] )^2+ 2r {\mathbb {E}}[B_t]}{\left( r - {\mathbb {E}}[B_t] \right) } \right) \right. \nonumber \\&\times \left. \left( 1 - e^{ - t \left( r - {\mathbb {E}}[B_t]\right) } \right) \right\} . \end{aligned}$$
(8)
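As a numerical illustration, the closed forms (5) and (7) can be transcribed directly; a minimal sketch, assuming a constant jump-size mean \({\mathbb {E}}[B_t] \equiv {\mathbb {E}}[B]\) and illustrative parameter values, is given below. The commented lines indicate how the thinning simulator sketched in Sect. 2 could be used as a Monte Carlo cross-check.

```python
import numpy as np

def mean_lambda(t, lam0, lam_star, r, EB):
    """Eq. (5): E[lambda_t] for a constant jump-size mean EB = E[B_t] < r."""
    lam_inf = r * lam_star / (r - EB)
    return lam_inf + (lam0 - lam_inf) * np.exp(-t * (r - EB))

def mean_M(t, lam0, lam_star, r, EB):
    """Eq. (7): E[M_t] under the same assumption."""
    lam_inf = r * lam_star / (r - EB)
    return (r * lam_star * t + (lam0 - lam_inf) * (1.0 - np.exp(-t * (r - EB)))) / (r - EB)

print(mean_lambda(5.0, 1.0, 1.0, 2.0, 0.5), mean_M(5.0, 1.0, 1.0, 2.0, 0.5))

# Monte Carlo cross-check with the thinning sketch of Sect. 2 (optional):
# counts = [len(simulate_hawkes_exp(1.0, 1.0, 2.0, lambda g: g.exponential(0.5),
#                                   5.0, np.random.default_rng(i))[0]) for i in range(2000)]
# print(np.mean(counts))   # should be close to mean_M(5.0, 1.0, 1.0, 2.0, 0.5)
```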

Remark 2

In general, for the moments of \(M_t\) and of \(\lambda _t\) of order n, we have to solve recursively the following system of differential equations

$$\begin{aligned} \frac{\hbox {d} {\mathbb {E}}[M^n_t]}{\hbox {d}t}= & {} \sum _{i=0}^{n-1} {n \atopwithdelims ()i} {\mathbb {E}}[\lambda _t M^i_t],\\ \frac{\hbox {d} {\mathbb {E}}[\lambda ^n_t]}{\hbox {d}t}= & {} n r \lambda ^* {\mathbb {E}}[\lambda ^{n-1}_t] - n r {\mathbb {E}}[\lambda ^n_t] + \sum _{i=0}^{n-1} {n \atopwithdelims ()i} ({\mathbb {E}}[B_t] )^{n-i} {\mathbb {E}}[\lambda ^{i+1}_t], \\ \frac{\hbox {d} {\mathbb {E}}[\lambda ^n_t M^k_t]}{\hbox {d}t}= & {} n r \lambda ^* {\mathbb {E}}[\lambda ^{n-1}_t M^k_t] - n r {\mathbb {E}}[\lambda ^n_t M^k_t] + \sum _{(i,j) \in {{\mathscr {S}}}} {n \atopwithdelims ()i} {k \atopwithdelims ()j} \\&\quad \times ({\mathbb {E}}[B_t] )^{n-i} {\mathbb {E}}[\lambda ^{i+1}_t M^j_t], \end{aligned}$$

for positive integer values of n and k and with \({{{\mathscr {S}}}} := \{(i, j): i = 0, \ldots , n, j = 0, \ldots , k, (i,j) \ne (n,k)\}\).
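For instance, for \(n = 1\) the recursion reduces to \(\mathrm{d} {\mathbb {E}}[M_t]/\mathrm{d}t = {\mathbb {E}}[\lambda _t]\) and \(\mathrm{d} {\mathbb {E}}[\lambda _t]/\mathrm{d}t = r \lambda ^* - (r - {\mathbb {E}}[B_t]) {\mathbb {E}}[\lambda _t]\). A minimal numerical sketch of this first-order case, assuming a constant \({\mathbb {E}}[B_t]\) and illustrative parameter values, is the following; its output should agree with Eqs. (5) and (7).

```python
import numpy as np
from scipy.integrate import solve_ivp

r, lam_star, lam0, EB = 2.0, 1.0, 1.0, 0.5      # illustrative values, EB = E[B_t]

def moment_odes(t, y):
    EM, Elam = y                                 # y = (E[M_t], E[lambda_t])
    return [Elam, r * lam_star - (r - EB) * Elam]

sol = solve_ivp(moment_odes, (0.0, 5.0), [0.0, lam0], rtol=1e-8)
print("E[M_5] ~", sol.y[0, -1], "  E[lambda_5] ~", sol.y[1, -1])
```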

Corollary 1

If \({\mathbb {E}}[B_t] = r\), the differential equations given in Daw and Pender [8] imply that

$$\begin{aligned} {\mathbb {E}}[\lambda _t] = \lambda _0 + r \lambda ^* t, \qquad \mathrm{Var}[\lambda _t] = r^2 \left( \lambda _0 t + \frac{r \lambda ^*}{2} t^2 \right) , \qquad {\mathbb {E}}[M_t] = \lambda _0 t + \frac{r \lambda ^*}{2} t^2,\\ \mathrm{Var}[M_t]= \lambda _0 t + r \left( \lambda _0 + \frac{\lambda ^*}{2} \right) t^2 + \frac{r^2}{3} \left( \lambda _0 + \lambda ^*\right) t^3 + \frac{r^2 \lambda ^*}{6} \left( r + 3 \lambda ^*\right) t^4. \end{aligned}$$

Note that, by taking the limit as \({\mathbb {E}}[B_t] \rightarrow r\) in Eqs. (5), (6), (7) and (8), the same results as in Corollary 1 are obtained.

Proposition 2

Assuming that \({\mathbb {E}}[B_t] < r\), and taking \(\displaystyle \lim _{t \rightarrow \infty } {\mathbb {E}}[B_t] = {\hat{B}}_1\) and \(\displaystyle \lim _{t \rightarrow \infty } ({\mathbb {E}}[B_t] )^2 = {\hat{B}}_2\), we have

$$\begin{aligned} \lim _{t \rightarrow \infty } {\mathbb {E}} [\lambda _t] = \frac{r \lambda ^*}{r - {\hat{B}}_1} = \lambda _\infty , \qquad \lim _{t \rightarrow \infty } \mathrm{Var}[\lambda _t] = \frac{r \lambda ^* {\hat{B}}_2}{2 \left( r - {\hat{B}}_1 \right) ^2}, \end{aligned}$$

and \({\mathbb {E}}[M_t] \rightarrow \infty \) as \(t \rightarrow \infty \).

3.2 Markov property of the Hawkes/sdHawkes/\(\infty \) system

Recall that, if the excitation function of a Hawkes process is exponential, then the process jointly with its intensity is a Markov process. In order to characterize \({{{\mathscr {L}}}}\left( N_t, \lambda _t, \mu _t \right) \), the joint law of \(N_t\), the Hawkes arrival intensity \(\lambda _t\) and the state-dependent Hawkes service intensity \(\mu _t\), we have to prove the Markov property of the Hawkes/sdHawkes/\(\infty \) system. To this end, we need some preliminaries. Let the processes M, S and N be defined on the same filtered probability space \((\varOmega , {\mathscr {F}},\{{{\mathscr {F}}}_t\}_{t \ge 0},P)\). Given the sub-\(\sigma \)-algebras of \({\mathscr {F}}_t\)

$$\begin{aligned} {\mathscr {F}}^M_t := \sigma \{M_u, u \le t\}, \qquad {\mathscr {F}}^S_t := \sigma \{S_u, u \le t\}, \qquad \text{ and } \qquad {\mathscr {F}}^{M, S}_t := {\mathscr {F}}^M_t \vee {\mathscr {F}}^S_t, \end{aligned}$$

we observe that \({{{\mathscr {F}}}}^{M, S}_t\) also contains all the information related to the process \(N = \{N_t, t \ge 0\}\) up to time t.

Note that \({\mathscr {F}}^M_t \subseteq {\mathscr {F}}^{M,S}_t\) and \({\mathscr {F}}^S_t \subseteq {\mathscr {F}}^{M,S}_t\). Consequently, every \({\mathscr {F}}^{M,S}_t\)-martingale is an \({\mathscr {F}}^M_t\)-martingale and, at the same time, every \({\mathscr {F}}^{M,S}_t\)-martingale is an \({\mathscr {F}}^S_t\)-martingale. However, it is not true in general that an \({\mathscr {F}}^M_t\)-martingale is an \({\mathscr {F}}^{M,S}_t\)-martingale, nor that an \({\mathscr {F}}^S_t\)-martingale is an \({\mathscr {F}}^{M,S}_t\)-martingale. This is a classical topic in this context, the so-called problem of the enlargement of filtrations; a striking example is Azéma's martingale, see Subsection 4.3.8 in Jeanblanc et al. [15]. Hence, to overcome this difficulty, we assume a property for the class of martingales, namely the Immersion property given below.

Definition 2

The filtration \({\mathscr {F}}^M := \{{{\mathscr {F}}}^M_t\}_{t \ge 0}\) is said to be immersed in the filtration \({\mathscr {F}}^{M, S} := \{{{\mathscr {F}}}^{M, S}_t\}_{t \ge 0}\) if any \({\mathscr {F}}^M_t\)-martingale is an \({\mathscr {F}}^{M,S}_t\)-martingale. In this case, we say that \({\mathscr {F}}^M\) satisfies the Immersion property with respect to the filtration \({\mathscr {F}}^{M, S}\). Similarly, let the filtration \({\mathscr {F}}^S := \{{{\mathscr {F}}}^S_t\}_{t \ge 0}\) satisfy the Immersion property with respect to \({\mathscr {F}}^{M, S}\).

Remark 3

Given the arrival process M and the service process S, we have that the sub \(\sigma \)-algebras \({\mathscr {F}}^M_t\) and \({\mathscr {F}}^S_t\) generated by these processes, respectively, are such that \({\mathscr {F}}^M_t \subseteq {\mathscr {F}}^{M,S}_t\) and \({\mathscr {F}}^S_t \subseteq {\mathscr {F}}^{M, S}_t\).

By Eq. (4), we are able to deduce that the service process S can be represented as a function of M driven by another Hawkes process R, which is independent of M. Since the processes M and R are independent, any \({\mathscr {F}}^M_t\)-martingale is an \({\mathscr {F}}^M_t \vee {\mathscr {F}}^R_t\)-martingale and any \({\mathscr {F}}^S_t\)-martingale is an \({\mathscr {F}}^M_t \vee {\mathscr {F}}^R_t\)-martingale. Taking into account that \({\mathscr {F}}^{M, S}_t \subseteq {\mathscr {F}}^M_t \vee {\mathscr {F}}^R_t\), we get that

$$\begin{aligned} {\mathscr {F}}^M_t \subseteq {\mathscr {F}}^{M,S}_t \subseteq {\mathscr {F}}^M_t \vee {\mathscr {F}}^R_t \qquad \text{ and } \qquad {\mathscr {F}}^S_t \subseteq {\mathscr {F}}^{M, S}_t \subseteq {\mathscr {F}}^M_t \vee {\mathscr {F}}^R_t, \end{aligned}$$

which implies that any \({\mathscr {F}}^M_t \vee {\mathscr {F}}^R_t\)-martingale is an \({\mathscr {F}}^{M, S}_t\)-martingale. As a conclusion, all the \({\mathscr {F}}^M_t\)-martingales and all the \({\mathscr {F}}^S_t\)-martingales are \({\mathscr {F}}^{M, S}_t\)-martingales. More details on this topic can be found in Aksamit and Jeanblanc [2], Jeanblanc et al. [15] and, more recently, in Calzolari and Torti [5].

Lemma 1

By Eq. (3), for any positive value of \(\Delta t\), we get that

$$\begin{aligned} \lambda _{t+ \Delta t} - \lambda _t= & {} (e^{-r \Delta t} - 1) e^{- r t} (\lambda _0 - \lambda ^*) + (e^{-r \Delta t} - 1) \int _0^{t} B_s e^{-r (t - s)} \mathrm{d}M_s\\&\quad + \int _t^{t+\Delta t} B_s e^{-r (t +\Delta t - s)} \mathrm{d}M_s \\= & {} (e^{-r \Delta t} - 1) (\lambda _t - \lambda ^*) + \int _t^{t+ \Delta t} B_s e^{-r (t + \Delta t - s)} \mathrm{d}M_s. \end{aligned}$$

By Eq. (4), if \(N_t\) remains constant between t and \(t + \Delta t\), we have that

$$\begin{aligned} \mu _{t+ \Delta t} - \mu _t= & {} (e^{- s \Delta t}-1) N_t \left( e^{- s t} (\mu _0 - \mu ^*) + \int _0^t C_u e^{-s (t - u)} \mathrm{d}S_u \right) \\&\quad + N_t\int _t^{t+\Delta t} C_u e^{-s (t + \Delta t - u)} \mathrm{d}S_u \\= & {} (e^{- s \Delta t}-1) ( \mu _t - \mu ^* N_t ) + N_t \int _t^{t+\Delta t} C_u e^{-s (t + \Delta t - u)} \mathrm{d}S_u. \end{aligned}$$

Moreover, for a small value of \(\Delta t\), if there are no new arrivals and no service completions between t and \(t+\Delta t\), we get that

$$\begin{aligned} \lambda _{t+ \Delta t} - \lambda _t \approx r \Delta t (\lambda ^* - \lambda _t), \qquad \qquad \mu _{t+ \Delta t} - \mu _t \approx s \Delta t (\mu ^* N_t - \mu _t). \end{aligned}$$

Recalling that the self-exciting terms of the intensities \(\lambda _t\) and \(\mu _t\) are defined by exponential decay kernels, even though the processes M and S do not have the Markov property themselves, we are able to show that \((M, S, \lambda , \mu )\) is a Markov process (Errais et al. [11]). To get the Markov property of \((M, S, \lambda , \mu )\), we derive its Dynkin formula in the next proposition, taking into account that this formula is a direct consequence of the strong Markov property and, hence, builds a bridge between differential equations and Markov processes (see, for instance, Øksendal [19], Sect. 7.4).

Proposition 3

Let \({\mathscr {A}}\) be an operator acting on a suitable function f, with continuous partial derivatives with respect to \(\lambda \) and \(\mu \), such that

$$\begin{aligned} {\mathscr {A}}f(M, S, \lambda , \mu ):= & {} r (\lambda ^* - \lambda ) \frac{\partial f(M, S, \lambda , \mu )}{\partial \lambda } + s (\mu ^* - \mu ) \frac{\partial f(M, S, \lambda , \mu )}{\partial \mu } \nonumber \\&+ \lambda {\mathscr {A}}^\lambda f(M, S, \lambda , \mu ) + \mu {\mathscr {A}}^\mu f(M, S, \lambda , \mu ), \end{aligned}$$
(9)

where

$$\begin{aligned} {\mathscr {A}}^\lambda f(M, S, \lambda , \mu ) = \int _0^\infty \left[ f\left( M+1, S, \lambda + x, \mu \right) - f(M, S, \lambda , \mu ) \right] \mathrm{d}P[B \le x], \\ {\mathscr {A}}^\mu f(M, S, \lambda , \mu ) = \int _0^\infty \left[ f\left( M, S+ 1, \lambda , \mu + y \right) - f(M, S, \lambda , \mu ) \right] \mathrm{d}P[C \le y], \end{aligned}$$

and B and C are random variables such that \(P(B > 0) = 1\) and \(P(C > 0) = 1\). If, for \(t \le T\),

$$\begin{aligned}&{\mathbb {E}}\left[ \int _0^t \left| {\mathscr {A}} f(M_u, S_u, \lambda _u, \mu _u) - r (\lambda ^* - \lambda _u) \frac{\partial f(M_u, S_u, \lambda _u, \mu _u)}{\partial \lambda } \right. \right. \\&\quad \left. \left. - s (\mu ^* - \mu _u) \frac{\partial f(M_u, S_u, \lambda _u, \mu _u)}{\partial \mu } \right| \ \mathrm{d}u \right] < \infty , \end{aligned}$$

the following Dynkin formula holds

$$\begin{aligned}&{\mathbb {E}} \left[ f (M_T, S_T, \lambda _T, \mu _T) \Big | {\mathscr {F}}^{M, S}_t \right] \nonumber \\&\quad = f (M_t, S_t, \lambda _t, \mu _t) + {\mathbb {E}} \left[ \int _t^T {\mathscr {A}} f (M_u, S_u, \lambda _u, \mu _u) \ \mathrm{d}u \Big | {\mathscr {F}}^{M, S}_t \right] . \end{aligned}$$
(10)

Proof

For the sake of completeness, we prove this result along similar lines as in Errais et al. [11]. The process \((M, S, \lambda , \mu )\) has right-continuous paths of finite variation on every interval [0, t]. Hence,

$$\begin{aligned} f (M_t, S_t, \lambda _t, \mu _t) - f (M_0, S_0, \lambda _0, \mu _0)= & {} \int _0^t r (\lambda ^* - \lambda _u) \frac{\partial f(M_u, S_u, \lambda _u, \mu _u)}{\partial \lambda } \ \mathrm{d}u \nonumber \\&+ \int _0^t s (\mu ^* - \mu _u) \frac{\partial f(M_u, S_u, \lambda _u, \mu _u)}{\partial \mu }\ \mathrm{d}u \nonumber \\&+ \sum _{0 < u \le t} \left[ f(M_u, S_u, \lambda _u, \mu _u) \right. \nonumber \\&\left. - f(M_{u-}, S_{u-}, \lambda _{u-}, \mu _{u-}) \right] . \end{aligned}$$
(11)

Note that \(m^\lambda _t = M_t - \int _0^t \lambda _v \mathrm{d}v\) is an \({\mathscr {F}}^M_t\)-martingale and, by the Immersion property, \(m^\lambda _t\) is also an \({\mathscr {F}}^{M, S}_t\)-martingale; moreover, \(m^\mu _t = S_t - \int _0^t \mu _v \mathrm{d}v\) is an \({\mathscr {F}}^{M, S}_t\)-martingale. Hence, we are able to rewrite the summation in Eq. (11) as

$$\begin{aligned}&\int _0^t {\mathscr {A}}^\lambda f(M_v, S_v, \lambda _v, \mu _v) \left( dm^\lambda _v + \lambda _v \mathrm{d}v \right) \\&\quad + \int _0^t {\mathscr {A}}^\mu f(M_v, S_v, \lambda _v, \mu _v) \left( dm^\mu _v + \mu _v \mathrm{d}v \right) . \end{aligned}$$

The integrability condition on the predictable integrand guarantees that

$$\begin{aligned} \int _0^t {\mathscr {A}}^\lambda f(M_v, S_v, \lambda _v, \mu _v) \ \mathrm{d}m^\lambda _v + \int _0^t {\mathscr {A}}^\mu f(M_v, S_v, \lambda _v, \mu _v) \ \mathrm{d}m^\mu _v \end{aligned}$$

is a martingale (Theorem 8, Chapter II of Bremaud [3]). As a conclusion, we get that \(f(M, S, \lambda , \mu )\) is a semi-martingale having a unique decomposition given by a sum of a predictable process with finite variation and a martingale, and we are able to write that

$$\begin{aligned} f (M_t, S_t, \lambda _t, \mu _t) - f (M_0, S_0, \lambda _0, \mu _0) - \int _0^t {\mathscr {A}} f (M_v, S_v, \lambda _v, \mu _v) \mathrm{d}v\\ = \int _0^t {\mathscr {A}}^\lambda f(M_v, S_v, \lambda _v, \mu _v) \mathrm{d}m^\lambda _v + \int _0^t {\mathscr {A}}^\mu f(M_v, S_v, \lambda _v, \mu _v) \mathrm{d}m^\mu _v. \end{aligned}$$

Since the right-hand side is a martingale, so is the process defined by the left-hand side, which yields the formula given by Eq. (10). \(\square \)

Under all the assumptions made so far, the infinitesimal generator of the process \((M, S, \lambda , \mu )\) acting on a function \(f(M, S, \lambda , \mu )\) is given by Eq. (9).
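A minimal numerical sketch of this generator is given below: the expectations over B and C in Eq. (9) are approximated by sample averages and the partial derivatives by central finite differences, both of which are implementation choices rather than part of the model. The final two lines compare the output, for the test function \(f = \lambda \), with the exact value \(r(\lambda ^* - \lambda ) + \lambda {\mathbb {E}}[B]\).

```python
import numpy as np

def generator_A(f, M, S, lam, mu, r, s, lam_star, mu_star,
                B_samples, C_samples, eps=1e-6):
    """Numerical evaluation of the operator in Eq. (9) at a single point."""
    df_dlam = (f(M, S, lam + eps, mu) - f(M, S, lam - eps, mu)) / (2.0 * eps)
    df_dmu = (f(M, S, lam, mu + eps) - f(M, S, lam, mu - eps)) / (2.0 * eps)
    jump_arr = np.mean([f(M + 1, S, lam + b, mu) - f(M, S, lam, mu) for b in B_samples])
    jump_srv = np.mean([f(M, S + 1, lam, mu + c) - f(M, S, lam, mu) for c in C_samples])
    return (r * (lam_star - lam) * df_dlam + s * (mu_star - mu) * df_dmu
            + lam * jump_arr + mu * jump_srv)

rng = np.random.default_rng(1)
B = rng.exponential(0.5, size=20000)             # assumed jump-size samples for B
C = rng.exponential(0.4, size=20000)             # assumed jump-size samples for C
approx = generator_A(lambda M, S, lam, mu: lam, 3, 1, 1.2, 0.8,
                     r=2.0, s=1.5, lam_star=1.0, mu_star=0.7,
                     B_samples=B, C_samples=C)
print(approx, 2.0 * (1.0 - 1.2) + 1.2 * 0.5)     # Monte Carlo value vs exact value 0.2
```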

Remark 4

Since \((M, S, \lambda , \mu )\) is a Markov process and \(N = M - S\), we deduce that \((N, \lambda , \mu )\) is also a Markov process.

4 Characterization of the law of the infinite-server systems

The joint transient distribution of \(\left( N, \lambda , \mu \right) \) is uniquely determined by the transform

$$\begin{aligned} \zeta (t, z, u, v) = {\mathbb {E}} \left[ z^{N_t} e^{- u \lambda _t} e^{- v \mu _t} \right] , \end{aligned}$$
(12)

where \(t \ge 0\), \(0 \le z \le 1\), \(u \ge 0\) and \(v \ge 0\). In this section, we characterize \(\zeta \) in terms of the solution of a system of ordinary differential equations (ODEs).

Theorem 1

Let the arrival process be a Hawkes process and let the service process be a sdHawkes process. Given the random variables B and C as defined in Proposition 3, let \(\beta (u) := {\mathbb {E}} [e^{- u B}]\) and let \(\gamma (v) := {\mathbb {E}} [e^{- v C}]\).

If \((N_t, \lambda _t, \mu _t)|_{t=0} = (0, \lambda _0, 0)\), then the couple \(\left( u(\cdot ), v(\cdot ) \right) \) solves the system of ODEs

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle u'(t) + r u(t) -1 + z \beta \left( u(t) \right) e^{ - s \mu ^* \int _0^t v(x) \ \mathrm{d}x } = 0, \\ \displaystyle v'(t) + s v(t) - 1 + \frac{\gamma \left( v(t)\right) }{z} e^{ s \mu ^* \int _0^t v(x) \ \mathrm{d}x } = 0, \end{array} \right. \end{aligned}$$
(13)

with boundary conditions \(u(0) = u\) and \(v(0) = v\). Furthermore,

$$\begin{aligned} \zeta (t, z, u, v) = e^{- u(0) \lambda _0 } \exp {\left\{ - \lambda ^* r \int _0^t u(x) \ \mathrm{d}x \right\} }, \end{aligned}$$
(14)

where \(\zeta \) depends on v through the coupling with u given by the system of ODEs (13).

In order to prove Theorem 1, we make use of Propositions 4 and 5 given below, whose proofs are in “Appendix”.

Proposition 4

The joint distribution of \(\left( N, \lambda , \mu \right) \), for \(t > 0\),

$$\begin{aligned} F(t, k, \lambda , \mu ) = P\left[ N_t = k, \lambda _t \le \lambda , \mu _t \le \mu \right] \end{aligned}$$

is such that

$$\begin{aligned}&\frac{\partial ^3 F(t, k, \lambda , \mu )}{\partial \mu \partial \lambda \partial t} + (\lambda + \mu ) \frac{\partial ^2 F(t, k, \lambda , \mu )}{\partial \mu \partial \lambda } \nonumber \\&\quad = \int _0^\lambda x \frac{\partial ^2 F(t, k-1, x, \mu )}{\partial \mu \partial \lambda } \ \mathrm{d}P[B \le \lambda - x] - \frac{\partial }{\partial \lambda } \left[ r \left( \lambda ^* - \lambda \right) \frac{\partial ^2 F(t, k, \lambda , \mu )}{\partial \mu \partial \lambda } \right] \nonumber \\&\qquad + \int _0^\mu y \frac{\partial ^2 F(t, k+1, \lambda , y)}{\partial \lambda \partial \mu } \ \mathrm{d}P[C \le \mu - y] - \frac{\partial }{\partial \mu } \left[ s \left( k \mu ^* - \mu \right) \frac{\partial ^2 F(t, k, \lambda , \mu )}{\partial \mu \partial \lambda } \right] . \end{aligned}$$
(15)

Proposition 5

Taking

$$\begin{aligned} \xi (t, k, u, v) = \int _0^\infty \int _0^\infty e^{- u \lambda } e^{- v \mu } \ \frac{\partial ^2 F(t, k, \lambda , \mu )}{\partial \lambda \partial \mu } \ \mathrm{d}\lambda \ \mathrm{d}\mu , \end{aligned}$$

we have that

$$\begin{aligned}&\frac{\partial \xi (t, k, u, v)}{\partial t} + (u r - 1) \frac{\partial \xi (t, k, u, v)}{\partial u} + \beta (u) \frac{\partial \xi (t, k-1, u, v) }{\partial u}+ (v s - 1) \nonumber \\&\frac{\partial \xi (t, k, u, v)}{\partial v} + \gamma (v) \frac{\partial \xi (t, k+1, u, v)}{\partial v} = - ( u r \lambda ^* + v s k \mu ^* ) \xi (t, k, u, v). \end{aligned}$$
(16)

Proof of Theorem 1

Recalling the definition of \(\zeta \) given in Eq. (12), which in turn implies that

$$\begin{aligned} \zeta (t, z, u, v) = \sum _{k=0}^\infty z^k \xi (t, k, u, v), \end{aligned}$$

and taking into account that \(F(t, -1, \lambda , \mu ) = 0\), note that

$$\begin{aligned} \sum _{k=0}^\infty z^k \frac{\partial \xi (t, k-1, u, v)}{\partial u}= & {} \frac{\partial \xi (t, -1, u, v)}{\partial u} + z \sum _{k=1}^\infty z^{k-1} \frac{\partial \xi (t, k-1, u, v)}{\partial u} \\= & {} z \frac{\partial }{\partial u} \sum _{k=0}^\infty z^k \xi (t, k, u, v) = z \frac{\partial \zeta (t, z, u, v)}{\partial u}. \end{aligned}$$

Moreover,

$$\begin{aligned} \sum _{k=0}^\infty z^k \frac{\partial \xi (t, k+1, u, v)}{\partial v}= & {} \frac{1}{z} \frac{\partial }{\partial v} \sum _{k=0}^\infty z^{k+1} \xi (t, k+1, u, v) \\= & {} \frac{1}{z} \frac{\partial }{\partial v} \int \left[ \sum _{k=0}^\infty k z^{k-1} \xi (t, k, u, v) \right] \ \mathrm{d}z = \frac{1}{z} \frac{\partial \zeta (t, z, u, v)}{\partial v}, \end{aligned}$$

and

$$\begin{aligned} \sum _{k=0}^\infty k z^k \xi (t, k, u, v) = z \frac{\partial \zeta (t, z, u, v)}{\partial z}. \end{aligned}$$

Substituting all these in Eq. (16), we obtain the partial differential equation (PDE) satisfied by \(\zeta \) as

$$\begin{aligned}&\frac{\partial \zeta (t, z, u, v)}{\partial t} + v s \mu ^* z \frac{\partial \zeta (t, z, u, v)}{\partial z} + \left( u r + z \beta (u) - 1 \right) \frac{\partial \zeta (t, z, u, v)}{\partial u} \nonumber \\&\quad + \left( v s - 1 + \frac{1}{z} \gamma (v) \right) \frac{\partial \zeta (t, z, u, v)}{\partial v} = - u r \lambda ^* \zeta (t, z, u, v). \end{aligned}$$
(17)

As usual, applying the method of characteristics, we reduce the PDE to a system of ODEs, along which the solution is integrated from initial data given on a suitable hypersurface.

To this end, let z, u and v be parameterized by w, \(0< w < t\), and with the boundary conditions \(z(t) = z\), \(u(t) = u\), \(v(t) = v\). A comparison with Eq. (17), taking into account the chain rule, gives us

$$\begin{aligned} \frac{\hbox {d} \zeta \left( t, z(w), u(w), v(w) \right) }{\hbox {d}w}= & {} \frac{\partial \zeta }{\partial t} \frac{\hbox {d}t}{\hbox {d}w} + \frac{\partial \zeta }{\partial z} \frac{\hbox {d}z}{\hbox {d}w} + \frac{\partial \zeta }{\partial u} \frac{\hbox {d}u}{\hbox {d}w} + \frac{\partial \zeta }{\partial v} \frac{\hbox {d}v}{\hbox {d}w} \nonumber \\= & {} - u(w) r \lambda ^* \zeta \left( t, z(w), u(w), v(w)\right) . \end{aligned}$$
(18)

This in turn implies that, for a real constant c, we are able to deduce that

$$\begin{aligned} \zeta \left( t, z(w), u(w), v(w)\right) = c \exp {\left\{ - r \lambda ^* \int _0^w u(x) \ \mathrm{d}x \right\} }, \end{aligned}$$

where \(c = \zeta \left( 0, z(0), u(0), v(0)\right) = \zeta \left( 0, z, u, v \right) = e^{- u(0) \lambda _0}\), and

$$\begin{aligned} \zeta \left( t, z(t), u(t), v(t)\right) = c \exp {\left\{ - r \lambda ^* \int _0^t u(x) \ \mathrm{d}x \right\} }, \end{aligned}$$

which implies Eq. (14).

Furthermore, by Eq. (17) and the chain rule written in Eq. (18), we deduce that

$$\begin{aligned} \frac{\hbox {d} z(w)}{\hbox {d}w} = s \mu ^* v(w) z(w). \end{aligned}$$

Recalling that \(0< w < t\), and that \(z(t) = z\),

$$\begin{aligned} z(w) = C \exp { \left\{ s \mu ^* \int _0^w v(x) \ \mathrm{d}x \right\} }, \end{aligned}$$

for a real constant \(C = z \exp { \left\{ - s \mu ^* \int _0^t v(x) \ \mathrm{d}x \right\} }\), which allows us to write that

$$\begin{aligned} z(w) = z \exp { \left\{ - s \mu ^* \int _w^t v(x) \ \mathrm{d}x \right\} }. \end{aligned}$$

Similarly, by Eq. (17) and the chain rule written in Eq. (18), and substituting z(w), we get that

$$\begin{aligned} \frac{\hbox {d} u(w)}{\hbox {d}w}= & {} r u(w) - 1 + z(w) \beta (u(w)) \\= & {} r u(w) - 1 + z \beta (u(w)) \exp { \left\{ - s \mu ^* \int _w^t v(x) \ \mathrm{d}x \right\} }, \end{aligned}$$

and

$$\begin{aligned} \frac{\hbox {d} v(w)}{\hbox {d}w}= & {} s v(w) - 1 + \frac{1}{z(w)} \gamma (v(w)) \\= & {} s v(w) - 1 + \frac{1}{z} \gamma (v(w)) \exp { \left\{ s \mu ^* \int _w^t v(x) \ \mathrm{d}x \right\} }. \end{aligned}$$

By the change of variable \(w \mapsto t-w\), which changes the sign of the left-hand sides of both \(\frac{\hbox {d} u(w)}{\hbox {d}w}\) and \(\frac{\hbox {d} v(w)}{\hbox {d}w}\), we obtain Eq. (13), with boundary conditions \(u(0) = u\) and \(v(0) = v\). \(\square \)

Proposition 6

Let \(w(t) = \int _0^t v(x) \,\mathrm{d}x\), and set \(y_1(t) = w(t)\) and \(y_2(t) = w'(t) = v(t)\); then the system of ODEs given in Eq. (13) turns out to be a dynamical system such that

$$\begin{aligned} u'(t)= & {} - r u(t) + 1 - z \beta \left( u(t) \right) e^{ - s \mu ^* y_1(t)}, \end{aligned}$$
(19)
$$\begin{aligned} y'_1(t)= & {} y_2(t), \end{aligned}$$
(20)
$$\begin{aligned} y'_2(t)= & {} 1 - s y_2(t) - \frac{\gamma \left( y_2(t)\right) }{z} e^{ s \mu ^* y_1(t) }. \end{aligned}$$
(21)

Proof

In Theorem 1, for the Hawkes/sdHawkes/\(\infty \) system, we derive \(\zeta (t, z, u, v)\) by Eq. (14) where the couple \(\left( u(\cdot ), v(\cdot ) \right) \) solves the system of ODEs as given in Eq. (13). Now, let

$$\begin{aligned} \int _0^t v(x) \mathrm{d}x = w(t). \end{aligned}$$

Differentiating both sides, we have \(v(t) = w'(t)\), which implies that \(v'(t) = w''(t)\). Putting these values in Eq. (13), we get

$$\begin{aligned}&u'(t) + r u(t) -1 + z \beta \left( u(t) \right) e^{ - s \mu ^* w(t)} = 0, \end{aligned}$$
(22)
$$\begin{aligned}&w''(t) + s w'(t) - 1 + \frac{\gamma \left( w'(t)\right) }{z} e^{ s \mu ^* w(t) } = 0. \end{aligned}$$
(23)

To convert Eq. (23) into a dynamical system, let

$$\begin{aligned} y_1(t) = w(t), \qquad y_2(t) = w'(t), \qquad y'_1(t) = w'(t) = y_2(t), \qquad y'_2(t) = w''(t) \end{aligned}$$

which implies Eqs. (19), (20) and (21). \(\square \)

Proposition 7

If the random variables B and C follow exponential distributions with parameters a and b, respectively, then Eqs. (19), (20) and (21) become

$$\begin{aligned} u'(t)= & {} - r u(t) + 1 - \frac{z a}{a+u} e^{ - s \mu ^* y_1(t)}, \end{aligned}$$
(24)
$$\begin{aligned} y'_1(t)= & {} y_2(t), \end{aligned}$$
(25)
$$\begin{aligned} y'_2(t)= & {} 1 - s y_2(t) - \frac{b}{z (b + y_2)} e^{ s \mu ^* y_1(t) }. \end{aligned}$$
(26)

Furthermore, let

$$\begin{aligned} u(t) = y_1(t), \qquad v(t) = y_2(t), \qquad z(t) = y_3(t), \qquad \zeta (t) = y_4(t), \end{aligned}$$
(27)

then we are able to write the dynamical system

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle y_1'(t) = 1 - r y_1(t) - \frac{a y_3(t) }{a + y_1(t)}, \\ \\ \displaystyle y_2'(t) = 1 - s y_2(t) - \frac{b}{y_3(t) \left( b + y_2(t) \right) }, \\ \\ \displaystyle y_3'(t) = s \mu ^* y_2(t) y_3(t), \\ \\ \displaystyle y'_4(t) = (- \lambda ^* r y_1(t)) y_4(t). \end{array} \right. \end{aligned}$$
(28)

Proof

Taking into account the assumptions made on the random variables B and C, we get that

$$\begin{aligned} \beta (u(t)) := {\mathbb {E}} [e^{- u(t) B}] = \frac{a}{a+u(t)}, \quad \text{ and } \quad \gamma (v(t)) := {\mathbb {E}} [e^{- y_2(t) C}] = \frac{b}{b+y_2(t)}. \end{aligned}$$
(29)

Substituting these values in Eqs. (19), (20) and (21), we get Eqs. (24), (25) and (26). Moreover, taking

$$\begin{aligned} z(t) = - z e^{- s \mu ^* y_1(t)} \end{aligned}$$

the system given by Eqs. (24), (25) and (26) turns out to be

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle z'(t) = s \mu ^* v(t) z(t), \\ \\ \displaystyle u'(t) + r u(t) -1 + \frac{z(t) a}{a + u(t)} = 0, \\ \\ \displaystyle v'(t) + s v(t) - 1 + \frac{b}{z(t) \left( b + v(t) \right) } = 0, \\ \\ \displaystyle \zeta ' = (- \lambda ^* r u(t)) \zeta , \end{array} \right. , \end{aligned}$$

with boundary conditions

$$\begin{aligned} z(0) = z, \qquad \qquad u(0) = u, \qquad \text{ and } \qquad v(0) = v. \end{aligned}$$
(30)

Finally, by Eq. (27), we are able to write the dynamical system as given in Eq. (28). \(\square \)
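As an illustration, Eqs. (24)–(26) can be integrated numerically together with the equation \(\zeta ' = -\lambda ^* r\, u(t)\, \zeta \) implied by Eq. (14). The sketch below uses \(u(0) = u\), \(v(0) = v\), \(y_1(0) = 0\) (since \(y_1(t) = \int _0^t v(x)\,\mathrm{d}x\)) and \(\zeta (0) = e^{-u \lambda _0}\), which is Eq. (14) evaluated at \(t = 0\); the parameter values and the exponential jump-size rates a and b are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, s, lam_star, mu_star, lam0 = 2.0, 1.5, 1.0, 0.8, 1.0
a, b = 2.0, 3.0                      # rates of the exponential jump sizes B and C
z, u0, v0 = 0.9, 0.2, 0.1            # arguments of zeta(t, z, u, v)

def rhs(t, y):
    u, y1, y2, zeta = y              # y1 = int_0^t v, y2 = v, as in Proposition 6
    du = -r * u + 1.0 - (z * a / (a + u)) * np.exp(-s * mu_star * y1)        # Eq. (24)
    dy1 = y2                                                                 # Eq. (25)
    dy2 = 1.0 - s * y2 - (b / (z * (b + y2))) * np.exp(s * mu_star * y1)     # Eq. (26)
    dzeta = -lam_star * r * u * zeta                                         # from Eq. (14)
    return [du, dy1, dy2, dzeta]

sol = solve_ivp(rhs, (0.0, 1.0), [u0, 0.0, v0, np.exp(-u0 * lam0)], rtol=1e-8)
print("zeta(1, z, u, v) ~", sol.y[3, -1])
```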

Next, we are going to deduce the results for the M/sdHawkes/\(\infty \) system.

Theorem 2

For an M/sdHawkes/\(\infty \) system, let

$$\begin{aligned} \zeta (t, z, v) = {\mathbb {E}} \left[ z^{N_t} e^{-v \mu _t} \right] \end{aligned}$$
(31)

where \(t \ge 0\), \(0 \le z \le 1\) and \(v \ge 0\). Given the constant intensity \(\lambda _t = \lambda \) and the initial values \((N_t, \mu _t)|_{t=0} = (0, 0)\), we get that

$$\begin{aligned} \zeta \left( t, z(t), v(t)\right) = \exp { \left\{ - \lambda \int _0^t \left( z(x) + 1\right) \ \mathrm{d}x \right\} }, \end{aligned}$$
(32)

where

$$\begin{aligned} z(w) = z \exp { \left\{ - s \mu ^* \int _w^t v(x) \ \mathrm{d}x \right\} }, \end{aligned}$$

and, given the boundary condition \(v(0) = v\), \(v(\cdot )\) solves the ODE

$$\begin{aligned} v'(t) + s v(t) - 1 + \frac{\gamma \left( v(t)\right) }{z} e^{ s \mu ^* \int _0^t v(x) \ \mathrm{d}x } = 0. \end{aligned}$$
(33)

The proof of Theorem 2 is obtained in three steps: Propositions 8 and 9 given below, whose proofs are in the “Appendix”, and the main part, which is also proved in the “Appendix”.

Proposition 8

The joint distribution of \(\left( N, \mu \right) \), for \(t > 0\) and for \(\lambda _t = \lambda \), a positive constant,

$$\begin{aligned} F(t, k, \lambda , \mu ) = F(t, k, \mu ) {\mathbb {I}}_{\lambda _t = \lambda } = P\left[ N_t = k, \mu _t \le \mu \right] {\mathbb {I}}_{\lambda _t = \lambda } \end{aligned}$$

is such that

$$\begin{aligned}&\frac{\partial ^2 F(t, k, \mu )}{\partial \mu \partial t} + \frac{\partial }{\partial \mu } \left[ s \left( k \mu ^* - \mu \right) \frac{\partial F(t, k, \mu )}{\partial \mu } \right] \nonumber \\&\quad = \lambda \frac{\partial F(t, k - 1, \mu )}{\partial \mu } - (\lambda + \mu ) \frac{\partial F(t, k, \mu )}{\partial \mu } + \int _0^\mu y \frac{\partial F(t, k+1, y)}{\partial \mu } \mathrm{d}P[C \nonumber \\&\quad \le \mu - y]. \end{aligned}$$
(34)

Proposition 9

Taking

$$\begin{aligned} \xi (t, k, v) = \int _0^\infty e^{- v \mu } \ \frac{\partial F(t, k, \mu )}{\partial \mu } \ \mathrm{d}\mu , \end{aligned}$$

we have that

$$\begin{aligned}&\frac{\partial \xi (t, k, v)}{\partial t} + (v s - 1) \frac{\partial \xi (t, k, v)}{\partial v} + \gamma (v) \frac{\partial \xi (t, k+1, v)}{\partial v} \nonumber \\&\quad = - \lambda \xi (t, k-1, v) - (\lambda + v s k \mu ^*) \xi (t, k, v). \end{aligned}$$
(35)

Remark 5

Note that, in queueing systems, the situation in which the service time follows a general distribution is more general than the situation in which the service time follows a state-dependent Hawkes process. This observation suggests relating the results obtained in Theorem 2 for the M/sdHawkes/\(\infty \) system to the analogous results for an M/G/\(\infty \) system (see Eick et al. [9]). This is an interesting open research problem which could be studied in detail.

Remark 6

Koops et al. [16] have studied a Hawkes/M/\(\infty \) system. More precisely, they obtained that

$$\begin{aligned} \zeta (t, z, u) = {\mathbb {E}} \left[ z^{N_t} e^{-u \lambda _t} \right] = e^{-u(t) \lambda _0} e^{-\lambda _0 r \int _0^t u(w) \ \mathrm{d}w}, \end{aligned}$$

where \(u(\cdot )\) solves the ODE \(u'(w) = - r u(w) - (1 + (z-1) e^{-\mu ^* w}) \beta (u(w)) + 1\). In Theorem 1, these known results have been extended by taking into account state-dependent Hawkes service times.
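A numerical sketch of this Hawkes/M/\(\infty \) result is the following: the ODE for \(u(\cdot )\) is integrated together with \(I(t) = \int _0^t u(w)\,\mathrm{d}w\), assuming exponential jump sizes (so that \(\beta (u) = a/(a+u)\)) and the initial condition \(u(0) = u\), as in Theorem 1; these choices and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, mu_star, lam0, a = 2.0, 1.0, 1.0, 2.0
z, u0, T = 0.9, 0.3, 2.0

def rhs(w, y):
    u, I = y                                   # I(w) = int_0^w u(x) dx
    beta = a / (a + u)                         # beta(u) for B ~ Exp(a)
    du = -r * u - (1.0 + (z - 1.0) * np.exp(-mu_star * w)) * beta + 1.0
    return [du, u]

sol = solve_ivp(rhs, (0.0, T), [u0, 0.0], rtol=1e-8)
uT, IT = sol.y[:, -1]
print("zeta(T, z, u) ~", np.exp(-uT * lam0 - lam0 * r * IT))   # formula of Remark 6
```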

Remark 7

In Theorem 1, if \(r = s= 0\) and \({\mathbb {E}}[B_t] = {\mathbb {E}}[C_t] = 0\), Eqs. (3) and (4) simplify to \(\lambda _t = \lambda _0 \) and \(\mu _t = N_t \mu _0\). This means that the arrival process \(M_t\) turns out to be a Poisson process with parameter \(\lambda _0\) and, if there are \(N_t\) customers in the system at time t, the time to the next service completion has an exponential distribution with parameter \(N_t \mu _0\), i.e. this is an \(M/M/\infty \) system. Therefore, as usual, the joint distribution is

$$\begin{aligned} F(t, k, \lambda , \mu ) = P[N_t = k, \lambda _t = \lambda _0, \mu _t = N_t \mu _0] = P[N_t = k] {\mathbb {I}}_{\lambda _t = \lambda _0} {\mathbb {I}}_{\mu _t = N_t \mu _0}, \end{aligned}$$

and taking \(F(t, k, \lambda , \mu ) = F(t, k) = P[N_t = k]\)

$$\begin{aligned} \frac{\hbox {d}F(t, 0)}{\hbox {d}t} = \lim _{\Delta t \rightarrow 0} \frac{F(t + \Delta t, 0) - F(t, 0)}{\Delta t} = - \lambda _0 F(t, 0) + \mu _0 F(t, 1), \end{aligned}$$

while, for \(k = 1, 2, \ldots \)

$$\begin{aligned} \frac{\hbox {d}F(t, k)}{\hbox {d}t}= & {} \lambda _0 F(t, k-1) - (\lambda _0 + k \mu _0) F(t, k) + (k+1) \mu _0 F(t, k+1). \end{aligned}$$

Solving the above system of difference-differential equations, we obtain the distribution of \(N_t\), as

$$\begin{aligned} F(t, k) = \frac{1}{k!} \left( \frac{\lambda _0}{\mu _0} (1 - e^{-\mu _0 t})\right) ^k \exp {\left\{ - \frac{\lambda _0}{\mu _0} (1 - e^{-\mu _0 t}) \right\} }, \end{aligned}$$

and by noting that

$$\begin{aligned} \zeta (t, z, u, v) = e^{-u \lambda _0} {\mathbb {E}}\left[ (z e^{-v \mu _0})^{N_t} \right] = e^{-u \lambda _0} \sum _{k=0}^\infty (z e^{-v \mu _0})^k P[N_t = k], \end{aligned}$$

we have that

$$\begin{aligned} \zeta (t, z, u, v) = e^{-u \lambda _0} \exp \left\{ - \frac{\lambda _0}{\mu _0} (1 - e^{-\mu _0 t}) (1 - z e^{-v \mu _0}) \right\} . \end{aligned}$$

Furthermore, note that \(F(t, k)\), for \(t \rightarrow \infty \), turns out to be

$$\begin{aligned} \displaystyle P[N_\infty = k] = \frac{1}{k!} \left( \frac{\lambda _0}{\mu _0} \right) ^k e^{- \lambda _0 / \mu _0},~~ \text{ for }~~ k = 0, 1, 2, \ldots . \end{aligned}$$

These results for the distribution of \(N_t\) and for its limiting distribution coincide with the results presented in Gross et al. [14].
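These classical formulas are easy to check numerically; a minimal sketch with illustrative parameters is given below, verifying that \(N_t\) is Poisson with mean \((\lambda _0/\mu _0)(1 - e^{-\mu _0 t})\) and evaluating the transform of Remark 7 both in closed form and by summing the series.

```python
import numpy as np
from scipy.stats import poisson

lam0, mu0, t = 3.0, 1.5, 2.0
m_t = (lam0 / mu0) * (1.0 - np.exp(-mu0 * t))   # transient mean of N_t

k = np.arange(0, 40)
F_tk = poisson.pmf(k, m_t)                      # F(t, k) = P[N_t = k]
print(F_tk.sum(), (k * F_tk).sum(), m_t)        # mass ~1, mean matches m_t

z, u, v = 0.9, 0.2, 0.1
zeta_closed = np.exp(-u * lam0) * np.exp(-m_t * (1.0 - z * np.exp(-v * mu0)))
zeta_series = np.exp(-u * lam0) * ((z * np.exp(-v * mu0)) ** k * F_tk).sum()
print(zeta_closed, zeta_series)                 # the two values should agree
```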

5 Concluding remarks and open problems

The main contribution of this paper is the derivation of the joint time-dependent distribution of the vector process \((N, \lambda , \mu )\), given by the system size, the arrival intensity and the service intensity, for the Hawkes/sdHawkes/\(\infty \) and M/sdHawkes/\(\infty \) systems. To achieve this, we proved the Markov property of the vector process \((N, \lambda , \mu )\). The idea is then to characterize the law of the infinite-server system in terms of the solution of a corresponding system of ODEs. The methodology used is inspired by Koops et al. [16], where the authors achieved analogous results for a Hawkes/M/\(\infty \) system. For an M/M/\(\infty \) system, the above results are deduced directly following a classical procedure.

However, we have to take into account that, in a queueing system, the situation in which the service time follows a general distribution is more general than that in which the service time follows a state-dependent Hawkes process. Hence, an interesting research problem, which could be studied in detail, is to connect the results obtained for the M/sdHawkes/\(\infty \) system with the analogous results for the M/G/\(\infty \) system. More precisely, the idea is to investigate the relations between the transient behaviour of the M/G/\(\infty \) system and that of the M/sdHawkes/\(\infty \) system.

Taking into account that Koops et al. [16] obtained time-dependent results for the Hawkes/M/\(\infty \) system, an open problem is to study whether these results could be extended to a Hawkes/G/\(\infty \) system. Instead of obtaining a solution by solving ODEs (as in Theorem 1), several methods are possible, for instance, a fixed-point equation in the transform domain or concepts based on branching processes. Other future work could be to investigate the asymptotic behaviour of the distributions and the moments of the Hawkes/sdHawkes/\(\infty \) system.