1 Introduction

Every year about 11 billion tons of goods are transported by ship. Large volumes of products such as grain, oil, chemicals and manufactured goods are shipped by sea, so that around \(90\%\) of international trade is handled by bulk ports worldwide. Bulk ports must be able to accommodate the vessels and to load or unload materials within a reasonable time. They provide berth facilities for ships, but berth capacity is limited and costly, so terminal operators seek to use it as efficiently as possible. Queueing theory has been employed since the early 1960s as a tool for optimizing port handling service systems. The use of queueing systems to model port dynamics relies on interpreting the ships as customers, the berths as servers and the time spent by a ship at the berth as the service time. Over time, many authors have provided solutions for different types of systems. In certain cases it has been possible to obtain an exact solution (for instance, Frankel (1987) and Tsinker (2004)); other researchers have proposed approximations (see Artalejo et al. (2008), de la Peña-Zarzuelo et al. (2020)), statistical models or computational techniques (see Dahal et al. (2003), Wadhwa (1992)). In some recent papers, attention has been devoted to the management of customer waiting times: the clients of a queueing system are regarded as active entities able to join or to leave the queue under particular conditions. The customers’ decisions about waiting for service may be influenced by observation of the queue length or by knowledge of crucial information about the system, such as the arrival and service rates (cf. Bountali and Economou (2019) and Dimitrakopoulos et al. (2021)). Furthermore, customers may also be characterized by an impatience time, which describes their intolerance to delay: if the waiting time reaches the impatience time, the customer leaves the system without receiving any service. Examples of this kind of queueing system can be found in Bar-Lev et al. (2013) or in Inoue et al. (2018).

The random nature of vessel arrivals and of terminal service processes has a significant impact on port performance; indeed, it often leads to significant handling delays. Hence, many arrival planning strategies have been proposed in order to mitigate such undesirable effects (see, for instance, Lang and Veenstra (2010) and van Asperen et al. (2003)). In many studies, the arrival of a vessel in a certain area is assumed to follow a stationary Poisson process, which is generally not a realistic assumption, since during certain days or months of the year ships may be expected to arrive in greater numbers than during other periods, cf. Goerlandt and Kujala (2011). Besides the exponential distribution, other distributions, such as the Erlang-2 distribution, have been considered to describe the ships' interarrival times, cf. van Vianen et al. (2014). However, as suggested by Altiok (2000) and Jagerman and Altiok (2003), vessels arrive on a somewhat scheduled basis, with one vessel expected to arrive in each fixed scheduling period. Unfortunately, due to bad weather conditions, natural phenomena or unexpected failures, vessels do not generally arrive at their scheduled times, so each vessel is assigned a constant lay period during which it is supposed to arrive at the port, uniformly at random. Some examples of pre-scheduled random arrival processes, used to describe public and private transportation systems, can be found in Guadagni et al. (2011) and Lancia et al. (2018).

Stimulated by the above-mentioned research, in this paper we investigate a stochastic model describing the vessel arrivals and the port dynamics. We assume that the vessels are scheduled to arrive at multiples of a given period \(a>0\), whereas the actual arrival time is obtained by adding a delay which is uniformly distributed over the lay period. Differently from a previous model in which the lay period is uniform over (0, a) (cf. Jagerman and Altiok (2003)), in the present investigation we consider the more general case in which it is uniform over (0, ka), \(k\in \mathbb {N}\).

We mainly focus on the determination of explicit forms of the conditional probability distribution and the mean of the counting process describing the vessel arrivals. It should be noted that the analytical results available in the literature in this area are quite scarce and fragmentary. Nevertheless, we are able to find several explicit results on the main probability distributions of interest. In particular, in some special cases we determine (i) the unconditional probability distribution of the vessel arrival process, (ii) the probability distribution of the stationary process which represents the number of vessel arrivals when the initial time is an arbitrarily chosen instant, (iii) the distribution, the variance and the autocorrelation of the interarrival times between consecutive vessel arrivals. We remark that the finding of such explicit analytical results represents a strength of the paper, compared to other articles in this area where only approximations are obtained by means of simulation techniques.

In addition, with reference to a vessel queueing system with infinite servers, whose service times are exponentially distributed, we propose an application of the main results to determine the mean number of vessels in the queue seen at the arrival of a given ship. This is helpful to deal with resource allocation problems in the port.

1.1 Plan of the Paper

We assume that the actual arrival time of the vessel scheduled to arrive at time \(\varepsilon _i:=i a\), \(a>0\), \(i\in \mathbb {Z}\), is given by \(A_i= i a + Y_i-y_0\), \(i\in \mathbb {Z}\), where the random variables \(Y_i\) are independent and identically distributed, with uniform distribution on (0, ka), \(k\in \mathbb {N}\), and where \(y_0\) denotes the realization of \(Y_0\). Section 2 is devoted to the study of the counting process N(t), which represents the number of scheduled vessels arriving during the time interval (0, t], \(t\ge 0\), where 0 is an arrival instant. We provide the explicit expression of the probability generating function of N(t), its probability distribution and expected value. Furthermore, we develop some cases of interest based on specific choices of k, i.e. \(k=1\) and \(k=2\).

For the same choices of k, in Section 3 we obtain the probability law of a stationary counting process, say \(N_e(t)\). The latter process provides the number of arrivals in a time interval of length t, when the initial time is an arbitrarily chosen instant. The knowledge of the distribution of \(N_e(t)\) allows us to obtain some useful results concerning the variance and the correlations of the random variables \(X_i\), representing the actual interarrival time between the \((i-1)\)-th and the i-th vessel arrival. In particular, for \(k=1\) we find that the correlations \(\rho _h =\textrm{Corr}(X_i,X_{i+h})\), \(i\in {\mathbb Z}\), \(h\in {\mathbb N}\), vanish for \(h\ge 2\). Moreover, in the case \(k=2\) they vanish for \(h\ge 4\), and are negative and increasing for \(h=1,2,3\).

Finally, in Section 4 we consider an application of the previous results to a queueing system, say SHIP/M/\(\infty\), in which the service times have i.i.d. exponential distribution with parameter \(\mu > 0\), and there is an infinite number of servers. Denoting by \(Q_n\) the number of customers at the port seen by the arrival of the n-th customer, the knowledge of \(\mathbb {E}\left[ N(t)\right]\) allows us to obtain the explicit expression of the average number of customers \(\mathbb {E}(Q_n)\), together with an upper and a lower bound for the probability of an empty queue. From the stationarity of the interarrival times \(X_i\) we obtain that \(\mathbb {E}(Q_n)\) does not depend on n; moreover, it depends on the parameters a and \(\mu\) only through their product, it is decreasing in \(\mu a>0\) and it is increasing in \(k\in \mathbb {N}\).

2 The Model

Consider a system able to receive an infinite sequence of units, such as a stochastic flow of vessel arrivals occurring at epochs in the time domain \((-\infty ,\infty )\). Specifically, for a fixed \(a>0\), the i-th vessel is scheduled to arrive at the port at the time

$$\begin{aligned} \varepsilon _i=i a, \qquad i\in \mathbb {Z}, \end{aligned}$$
(1)

and it is expected to show up during a lay period having length \(\omega >0\). The lay period is assumed to be the same for every ship. Each vessel arrives at the port, independently of the others, at a random time which is uniformly distributed within the lay period. The assumption of uniformity of vessel arrivals within the lay period is justified by recalling that captains are only given the arrival window and not a scheduled arrival time (see Jagerman and Altiok (2003)).

In the sequel, we denote by \(A_i\), \(i\in \mathbb {Z}\), the actual (rescaled) arrival time of the i-th vessel, which is scheduled to arrive at time (1). Specifically, we assume that (cf. Jagerman and Altiok (2003))

$$\begin{aligned} A_i=i a+Y_i- y_0, \qquad i\in \mathbb {Z},\;\; a>0. \end{aligned}$$
(2)

As specified, the r.v.’s \(Y_i\), \(i \in \mathbb {Z}\), representing the elapsed arrival time of the i-th vessel, are independent and identically uniformly distributed over \((0,\omega )\), with distribution function \(F_Y(\cdot )\). The random variable \(Y_0\) describes the elapsed arrival time of the 0-th vessel, and \(y_0\) denotes its realization used for the rescaling in Eq. (2). Hence, we have \(A_0=0\), so that the arrival of the vessel scheduled at time \(-y_0\) actually occurs at time \(t=0\).

Note that, since the lay periods of different vessels may overlap (namely, when \(\omega >a\)), the order of the actual arrivals may differ from the scheduled one. Hence, whereas the vessels are scheduled to arrive in increasing order, their actual arrival times are not necessarily ordered. Let \(X_i\), \(i \in \mathbb {Z}\), be the actual interarrival time between the \((i-1)\)-th and the i-th vessel arrival, with \(X_1\) denoting the length of time between \(t=0\) and the first subsequent arrival. It is known that \(\{X_i\}\) is a stationary sequence of identically distributed random variables. Moreover, the corresponding counting process N(t), which represents the number of scheduled vessels arriving during the time interval (0, t], with 0 an arrival point, is a non-stationary process in continuous time (see Cox and Lewis (1966)).
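To illustrate the arrival mechanism, the following minimal Python sketch (ours, purely illustrative and not part of the analysis) generates the rescaled arrival times (2) for a uniform lay period of length \(\omega\) and counts the arrivals in (0, t]; all function and parameter names are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_arrivals(a=1.0, omega=2.0, y0=0.7, j_min=-50, j_max=50):
    """Rescaled arrival times A_j = j*a + Y_j - y0, j != 0, with Y_j ~ U(0, omega)."""
    js = np.array([j for j in range(j_min, j_max + 1) if j != 0])
    Y = rng.uniform(0.0, omega, size=js.size)
    return js * a + Y - y0

def count_N(t, arrivals):
    """N(t): number of arrivals falling in (0, t]."""
    return int(((arrivals > 0) & (arrivals <= t)).sum())

A = sample_arrivals()
print(np.sort(A[(A > 0) & (A <= 5.0)]))   # actual arrival times in (0, 5]
print(count_N(5.0, A))                    # N(5)
```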

2.1 Probability Generating Function

In this section we determine the probability generating function (pgf) of N(t), \(t>0\), conditional on \(Y_0=y_0\). Recalling that the arrival events are independent, we have

$$\begin{aligned} G_N(t,z|y_0):=\mathbb {E}\left[ z^{N(t)}|Y_0=y_0\right] =\prod _{j\mathop{=}-\infty ,\, j\mathop{\ne} 0}^{+\infty }(1-p_j+p_j z),\qquad z\in (0,1), \; y_0\in (0,\omega ), \end{aligned}$$
(3)

where, due to Eq. (2),

$$\begin{aligned} p_j:=\mathbb {P}(0<A_j<t)=\mathbb {P}(y_0-j a<Y_j<t+y_0-j a)=F_Y(t+y_0-j a)-F_Y(y_0-j a). \end{aligned}$$
(4)

From Eqs. (3) and (4), for \(t>0\), \(z\in (0,1)\), \(y_0\in (0,\omega )\), the pgf of N(t) can be expressed as

$$\begin{aligned} G_N(t,z|y_0)=\prod _{j\mathop{=}-\infty ,\, j\mathop{\ne} 0}^{+\infty }\left[ 1-(1-z)\left( F_Y(t+y_0-ja)-F_Y(y_0-ja)\right) \right] . \end{aligned}$$
(5)

In the sequel we shall assume that \(\omega =k a\), with \(k\in \mathbb {N}\), \(a>0\), so that \(y_0\in (0,ka)\). Under such assumption, it is possible to obtain the explicit expression of \(G_N(t,z|y_0)\), for different ranges of values of t.

Theorem 2.1

Assuming that \(\omega =ka\), for \((n-1)a<y_0<na\), with \(n=1,\dots ,k\), the pgf \(G_N(t,z|y_0)\) given in (5) can be expressed in the following way:

  (i)

    for \(0 \le t<k a-y_0\),

    $$\begin{aligned} G_N(t,z|y_0)=(-1)^{s\mathop{-}n}\left[ \frac{1-z}{k}\right] ^{2s\mathop{-}2n} \left( n-\frac{y_0}{a}-\frac{k}{1-z}\right) _{{s\mathop{-}n}} \left( n-\frac{t+y_0}{a}+\frac{k}{1-z}\right) _{{s\mathop{-}n}} \\ \times \left[ 1-(1-z)\frac{t}{ka}\right] ^{k\mathop{-}s\mathop{+}n\mathop{-}1}, \end{aligned}$$
  (ii)

    for \(k a-y_0\le t<(k+n-1) a-y_0\),

    $$\begin{aligned} G_N(t,z|y_0)=(-1)^{-n\mathop{+}s}\left[ \frac{1-z}{k}\right] ^{2 s\mathop{-}1\mathop{-}2 n} \left( n-\frac{y_0}{a}-\frac{k}{1-z}\right) _{s\mathop{-}n} \left( n-\frac{t+y_0}{a}+\frac{k}{1-z}\right) _{s\mathop{-}n}\\ \times \frac{\left[ 1-(1-z)\frac{t}{ka}\right] ^{n\mathop{-}s\mathop{+}k}}{\left( \frac{y_0}{a}+\frac{k z}{1-z}\right) }, \end{aligned}$$
  (iii)

    for \(t\ge (k+n-1)a-y_0\),

    $$G_N(t,z|y_0)=\left[ \frac{1-z}{k}\right] ^{2k\mathop{-}1} \frac{z^{s\mathop{-}k\mathop{-}n}}{\left( \frac{y_0}{a}+\frac{k z}{1\mathop{-}z}\right) } \left( n-\frac{y_0}{a}-\frac{k}{1-z}\right) _{k} \left( -s+1+\frac{t+y_0}{a}-\frac{k}{1-z}\right) _{k},$$

where \(s=\lfloor (t+y_0)/a \rfloor +1\), with \(\lfloor \cdot \rfloor\) denoting the integer part function, \((x)_n:=\frac{\Gamma (x+n)}{\Gamma (x)}\) is the Pochhammer symbol, and \(\Gamma (\cdot )\) is the Gamma function.

Proof

Starting from Eq. (5), the pgf of N(t) can be rewritten as

$$G_N(t,z|y_0)=\prod _{j\mathop{=}1}^{k\mathop{-}1}\left[ 1-(1-z)\phi _j^-(t){\textbf{1}}_{\{y_0\mathop{<}(k\mathop{-}j)a\}}\right] \!\prod _{j\mathop{=}1}^{\lfloor y_0/a \rfloor }\left[ 1-(1-z)\phi _j^+(t)\right] \!\!\prod _{j\mathop{=}1\mathop{+}\lfloor y_0/a \rfloor }^{+\infty }\left[ 1-(1-z)F_{W_j}(t)\right] ,$$

where \(F_{W_j}(t)\) is the distribution function of a random variable \(W_j\) uniformly distributed over the interval \(\left[ ja-y_0, (k+j)a-y_0\right]\), \(\textbf{1}_A\) is the indicator function of the set A and we assume

$$\begin{aligned} \phi _j^-(t):={\left\{ \begin{array}{ll} \frac{t}{ka} \; &{} 0\le t<(k-j)a-y_0,\\ \frac{(k-j)a-y_0}{ka} \; &{}t\ge (k-j)a-y_0, \end{array}\right. }\quad \phi _j^+(t):={\left\{ \begin{array}{ll} \frac{t}{ka} \; &{} 0\le t<(k+j)a-y_0,\\ \frac{(k+j)a-y_0}{ka} \; &{}t\ge (k+j)a-y_0. \end{array}\right. } \end{aligned}$$

Hence, the proof follows by considering different ranges of values of t. \(\square\)

The knowledge of the pgf of N(t) is useful for determining suitable upper bounds on the probability that the number of scheduled vessels arriving during a time interval (0, t] exceeds a given threshold. Indeed, from the well-known Chernoff bound we have

$$\begin{aligned} \mathbb {P}\left[ N(t) \ge m\,|\,Y(0)=y_0\right] \le \inf _{\theta \mathop{\ge} 0} e^{-\theta m} G_N(t,e^\theta |y_0) =: B_m(t\,|\,y_0). \end{aligned}$$
(6)

An example of the upper bound \(B_m(t\,|\,y_0)\) is given in Fig. 1, where it is shown to be increasing in t, as expected. Finally, we remark that in the case (iii) of Theorem 2.1 it is not hard to see that \(B_{m\mathop{+}1}(t+a\,|\,y_0)=B_m(t\,|\,y_0)\).

Fig. 1

Bound \(B_m(t\,|\,y_0)\), given in Eq. (6), as a function of t, for \(k=5\), \(y_0=2\), \(a=1\), \(n=3\) and \(m=10\), with some exact values of \(\mathbb {P}\left[ N(t) \ge m\,|\,Y(0)=y_0\right]\) shown below   
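The bound in Fig. 1 can be reproduced, for instance, with the following Python sketch (ours). Under the assumption \(\omega =ka\), it evaluates \(G_N(t,z|y_0)\) through the finitely many non-trivial factors of the product (5) and then minimizes over a grid of values of \(\theta\); any grid yields a valid bound, since (6) holds for every \(\theta \ge 0\).

```python
import numpy as np

def F_Y(x, a, k):
    """CDF of the uniform lay period on (0, k*a)."""
    return min(max(x / (k * a), 0.0), 1.0)

def pgf(t, z, y0, a=1.0, k=5):
    """G_N(t, z | y0) from Eq. (5); only finitely many factors differ from 1."""
    g = 1.0
    for j in range(int(np.floor((y0 - k * a) / a)), int(np.floor((t + y0) / a)) + 1):
        if j != 0:
            p_j = F_Y(t + y0 - j * a, a, k) - F_Y(y0 - j * a, a, k)
            g *= 1.0 - (1.0 - z) * p_j
    return g

def chernoff_bound(t, m, y0, a=1.0, k=5, thetas=np.linspace(0.0, 8.0, 2001)):
    """B_m(t | y0) of Eq. (6), approximated by minimizing over a theta grid."""
    return min(np.exp(-th * m) * pgf(t, np.exp(th), y0, a, k) for th in thetas)

# parameters of Fig. 1: k = 5, y0 = 2, a = 1, m = 10
for t in (2.0, 6.0, 10.0):
    print(t, chernoff_bound(t, m=10, y0=2.0))
```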

The model (2) of the vessels’ arrival times is based on the assumption that the r.v.’s \(Y_i\), representing the elapsed arrival time of the i-th vessel, \(i \in \mathbb {Z}\), are uniformly distributed over \((0, \omega )\). It would be desirable to modify the latter assumption by considering other distributions for \(Y_i\), in order to obtain more flexible models for the vessel arrivals. For instance, as an alternative model one can assume that \(Y_i\) has a decreasing triangular probability density function (pdf), describing arrivals that are more likely close to the scheduled arrival times. Unfortunately, even in this case the determination of the pgf becomes considerably more difficult, as shown in the following remark.

Remark 2.1

If the random variables \(Y_i\) are independent and identically distributed, with decreasing triangular pdf on (0, ka), \(k\in \mathbb {N}\), such that

$$F_Y(x)=\frac{2x}{ka}\left( 1-\frac{x}{2ka}\right) , \qquad 0\le x\le ka,$$

then, due to Eq. (5), the pgf of N(t), \(t>0\), can be expressed as

$$G_N(t,z\mid y_0) =\prod _{j\mathop{=}\left\lfloor {y_0}/{a}\mathop{-}k \right\rfloor }^{\left\lfloor {y_0}/{a} \right\rfloor } \left[ 1-(1-z)\phi _{1j}(t)\right] \prod _{j\mathop{=}\left\lfloor {y_0}/{a} \right\rfloor +1}^{+\infty }\left[ 1-(1-z)\phi _{2j}(t)\right] ,$$

where

$$\phi _{1j}(t)={\left\{ \begin{array}{ll} \frac{2t}{ka}\left[ 1-\frac{1}{2ka}(t+2(y_0-ja))\right] ,\quad 0\le t <(k+j)a-y_0,\\ 1-\frac{2}{ka}(y_0-ja)\left( 1-\frac{y_0-ja}{2ka}\right) , \quad t\ge (k+j)a-y_0, \end{array}\right. }$$

and

$$\phi _{2j}(t)={\left\{ \begin{array}{ll} 0, &{}\quad 0\le t< ja-y_0\\ \frac{2}{ka}(t+y_0-ja)\left( 1-\frac{t+y_0-ja}{2ka}\right) ,&{}\quad ja-y_0\le t <(k+j)a-y_0,\\ 1, &{}\quad t\ge (k+j)a-y_0. \end{array}\right. }$$

In this case, the determination of the explicit form of the pgf is quite difficult and leads to heavy calculations since the functions \(\phi _{ij}(t)\), \(i=1,2\), have a quadratic dependence on t.

2.2 Probability Distribution

Fig. 2

Probabilities \(p_{y_0}(h,t)\) for \(k=4\), \(y_0=1.5\), \(a=1\), \(n=2\) with \(s=3\) and \(t=1\) (left-hand side) and \(s=4\) and \(t=2\) (right-hand side)

Fig. 3

Probabilities \(p_{y_0}(h,t)\) for \(k=4\), \(y_0=1.5\), \(a=1\), \(n=2\), \(s=4\) and \(t=3\)

For the model (2), with the r.v.’s \(Y_i\) uniformly distributed over \((0, \omega )\), in this section we obtain the conditional probability of N(t) given that \(Y_0=y_0\), denoted by

$$\begin{aligned} p_{y_0}(n,t):=\mathbb {P}\left( N(t)=n\,|\,Y_0=y_0\right) ,\qquad n\in \mathbb {N}_0=\{0,1,2,\ldots \}. \end{aligned}$$
(7)

Even though the expressions for \(p_{y_0}(n,t)\) are given in a quite cumbersome form, they involve finite sums and hypergeometric functions that can be computed by use of any scientific software. In particular, we recall that the series of the Gauss hypergeometric function \({}_{2}F_{1}(a,b; \, c; \, z)\), defined in Eq. (25), terminates if either a or b is a nonpositive integer, in which case the function reduces to a polynomial.

The following proposition provides the explicit expressions of the conditional probability distribution of N(t).

Fig. 4

Probabilities \(p_{y_0}(h,t)\) for \(k=4\), \(y_0=1.5\), \(a=1\), \(n=2\) with \(s=5\) and \(t=4\) (left-hand side) and \(s=6\) and \(t=5\) (right-hand side)

Proposition 2.1

Let us assume \((n-1)a<y_0<na\), with \(n=1,\dots ,k\), and set \(s=\lfloor (t+y_0)/a \rfloor +1\). Then we have

  (i)

    for \(0\le t<k a-y_0\),

    $$\begin{aligned} p_{y_0}(0,t)=\frac{(-1)^{s\mathop{-}n}}{k^{2s\mathop{-}2n}}\,\frac{\Gamma \left( 1+k-n+\frac{y_0}{a}\right) }{\Gamma \left( 1+k-s+\frac{y_0}{a}\right) }\,\frac{\Gamma \left( 1-k-n+\frac{t+y_0}{a}\right) }{\Gamma \left( 1-k-s+\frac{t+y_0}{a}\right) }\, \left( 1-\frac{t}{ka}\right) ^{k\mathop{+}n\mathop{-}s\mathop{-}1}, \end{aligned}$$
  (ii)

    for \(k a -y_0 \le t<(k+n-1) a-y_0\),

    $$\begin{aligned} p_{y_0}(0,t)=\frac{(-1)^{s\mathop{-}n} }{k^{2 s\mathop{-}1\mathop{-}2 n}}\,\frac{a}{y_0}\,\left( 1-\frac{t}{ka}\right) ^{n\mathop{-}s\mathop{+}k}\frac{\Gamma \left( 1-k-n+\frac{t+y_0}{a}\right) \Gamma \left( 1+k-n+\frac{y_0}{a}\right) }{\Gamma \left( -k-s+1+\frac{t+y_0}{a}\right) \Gamma \left( -s+1+k+\frac{y_0}{a}\right) }, \end{aligned}$$
  (iii)

    for \(t\ge (k+n-1) a -y_0\),

    $$p_{y_0}(0,t)=0.$$

Moreover, for \(h\ge 1\), we have

  (i)

    for \(0\le t<k a-y_0\),

    $$p_{y_0}(h,t)={\mathcal P}_1(h,t,n,a,y_0),$$
  (ii)

    for \(k a -y_0 \le t<(k+n-1) a-y_0\),

    $$p_{y_0}(h,t)={\mathcal P}_2(h,t,n,a,y_0),$$
  (iii)

    for \(t\ge (k+n-1) a -y_0\),

    $$p_{y_0}(h,t)={\mathcal P}_3(h,t,n,a,y_0),$$

where, for brevity, the expressions of \({\mathcal P}_i(h,t,n,a,y_0)\), \(i=1,2,3\), and the proofs are provided in Appendix A.

Figures 2, 3 and 4 show some plots of the probability \(p_{y_0}(h,t)\) for different choices of the involved parameters, evaluated by means of the explicit expressions obtained in Proposition 2.1. Similarly, the exact values of \(\mathbb {P}\left[ N(t) \ge m\,|\,Y(0)=y_0\right]\) shown in Fig. 1 have been evaluated by means of Proposition 2.1.

Remark 2.2

Tables 1, 2, 3 and 4 provide a comparison between the exact expression of the state probabilities \(p_{y_0}(h,t)\) obtained in Proposition 2.1 and the probabilities \(\hat{p}_{y_0}(h,t)\) evaluated via numerical inversion of the pgf \(G_N(t,z|y_0)\) thanks to Theorem 1 of Abate and Whitt (1992). The results show that the numerical inversion provides a good approximation of the exact distribution for small values of h.

Table 1 Probabilities \(p_{y_0}(h,t)\) and \(\hat{p}_{y_0}(h,t)\), with the absolute and relative errors, for \(k=4\), \(y_0=1.5\), \(a=1\), \(n=2\), \(s=3\) and \(t=1\) with \(r=0.001\)
Table 2 Same as Table 1, for \(k=4\), \(y_0=1.5\), \(a=1\), \(n=2\), \(s=4\) and \(t=2\) with \(r=0.001\)
Table 3 Same as Table 1, for \(k=4\), \(y_0=1.5\), \(a=1\), \(n=2\), \(s=4\) and \(t=3\) with \(r=0.01\)
Table 4 Same as Table 1, for \(k=4\), \(y_0=1.5\), \(a=1\), \(n=2\) with \(s=5\) and \(t=4\) with \(r=0.01\)
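A possible implementation of the numerical inversion used in Remark 2.2 is sketched below (ours). It evaluates \(G_N(t,z|y_0)\) through the finitely many non-trivial factors of the product (5) and applies the trapezoidal-rule inversion formula of Theorem 1 in Abate and Whitt (1992); the parameter values mimic those of Table 1.

```python
import numpy as np

def F_Y(x, a, k):
    """CDF of the uniform lay period on (0, k*a)."""
    return min(max(x / (k * a), 0.0), 1.0)

def pgf(t, z, y0, a, k):
    """G_N(t, z | y0), Eq. (5); z may be complex."""
    g = 1.0 + 0.0j
    for j in range(int(np.floor((y0 - k * a) / a)), int(np.floor((t + y0) / a)) + 1):
        if j != 0:
            p_j = F_Y(t + y0 - j * a, a, k) - F_Y(y0 - j * a, a, k)
            g *= 1.0 - (1.0 - z) * p_j
    return g

def invert_pgf(h, t, y0, a=1.0, k=4, r=0.001):
    """Abate-Whitt trapezoidal inversion of the pgf: approximates p_{y0}(h, t)."""
    if h == 0:
        return pgf(t, 0.0, y0, a, k).real       # p_{y0}(0, t) = G_N(t, 0 | y0)
    s = sum((-1) ** j * pgf(t, r * np.exp(1j * np.pi * j / h), y0, a, k)
            for j in range(2 * h))
    # dividing by r**h means round-off dominates for larger h (cf. Remark 2.2)
    return s.real / (2 * h * r ** h)

# k = 4, y0 = 1.5, a = 1, t = 1, r = 0.001, as in Table 1
print([round(invert_pgf(h, t=1.0, y0=1.5), 6) for h in range(5)])
```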

In the next theorem, the limiting behavior of \(G_N(t,z|y_0)\) is analyzed for \(t\rightarrow 0^{+}\) and \(t\rightarrow +\infty\).

Theorem 2.2

Assuming \(\omega =k a\), with \(k\in \mathbb {N}\) and \(a>0\), for \((n-1)a<y_0<na\), with \(n=1,\dots ,k\), the pgf of N(t) exhibits the following limiting behavior

$$\begin{aligned} G_N(t,z|y_0)\sim {\left\{ \begin{array}{ll} \left[ 1-t\,\, \frac{1-z}{k a} \right] ^{k\mathop{-}1}\quad \text {as}\,\, t\rightarrow 0^{+} \\ a_k\cdot z^{\frac{t\mathop{+}y_0}{a}\mathop{-}k\mathop{-}n}\quad \text {as}\,\, t\rightarrow +\infty . \end{array}\right. } \end{aligned}$$
(8)

where

$$\begin{aligned} a_k=a_k(k,n;y_0,a,z):=\left[ \frac{1-z}{k}\right] ^{2k\mathop{-}1} \frac{\left( n-\frac{y_0}{a}-\frac{k}{1-z}\right) _{k} \left( 1-\frac{k}{1-z}\right) _{k}}{\left( -k+\frac{y_0}{a}+\frac{k}{1-z}\right) }. \end{aligned}$$
(9)

Proof

It follows from Theorem 2.1.

By inspection of (9), we see that the pgf \(G_N(t,z|y_0)\) exhibits no singularity for large t.

Remark 2.3

Note that, in agreement with Eq. (A.13) of Jagerman and Altiok (2003), under the assumptions of Theorem 2.2, from (8) one has the large deviation limit

$$\begin{aligned} \lim _{t\mathop{\rightarrow} \mathop{+}\infty } \frac{1}{t}\log G_N(t,z|y_0)=\frac{\log z}{a}, \qquad z\in (0,1). \end{aligned}$$

2.3 Mean Values

Making use of Theorem 2.1, we can now determine the mean of N(t), conditional on \(Y_0=y_0\).

Proposition 2.2

For \(\omega =ka\), \(a>0\), \(k\in {\mathbb N}\), the conditional mean of N(t), given that \(Y_0=y_0\), is given by

$$\begin{aligned} \mathbb {E}\left[ N(t)|y_0\right] = \left\{ \begin{array}{ll} \frac{t(k-1)}{ak}\quad &{} \text {for}\;\; 0 \le t<k a-y_0, \\ \frac{kt+y_0-ak}{ak}\quad &{} \text {for}\;\; t\ge k a-y_0. \end{array} \right. \end{aligned}$$
(10)

Proof

Recalling Theorem 2.1, we consider the case \(0\le t<k a -y_0\) and \(s\ge n\). We have

$$\begin{aligned} \mathbb {E}\left[ N(t)|y_0\right] &=\lim _{z\mathop{\rightarrow} 1} \frac{\partial }{\partial z} G_N(t,z|y_0) \\ &=\lim _{z\mathop{\rightarrow} 1} \frac{(-1)^{1\mathop{+}s\mathop{-}n}ak}{\left[ ak+t(z-1)\right] ^2}\left( 1+\frac{t(z-1)}{ak}\right) ^{k\mathop{+}n\mathop{-}s }A_1(z)\cdot \left( \frac{A_2(z)+A_3(z)}{(z-1)^2}\right) , \end{aligned}$$

where

$$\begin{aligned} A_1(z):&=\left( \frac{1-z}{k}\right) ^{2s\mathop{-}2n}\left( n-\frac{t+y_0}{a}-\frac{k}{z-1}\right) _{s\mathop{-}n}\left( n-\frac{y_0}{a}+\frac{k}{z-1}\right) _{s\mathop{-}n}, \\ A_2(z):&=\left[ 2ak(n-s)-(-1+k-n+s)t(z-1)\right] (z-1),\\ A_3(z):&=k\left[ ak+t(z-1)\right] \left[ \psi ^{(0)}\left( n-\frac{t+y_0}{a}-\frac{k}{z-1}\right) -\psi ^{(0)}\left( s-\frac{t+y_0}{a}-\frac{k}{z-1}\right) \right. \\ &\quad\left. + \, \psi ^{(0)}\left( s-\frac{y_0}{a}+\frac{k}{z-1}\right) -\psi ^{(0)}\left( n-\frac{y_0}{a}+\frac{k}{z-1}\right) \right] , \end{aligned}$$

with \(\psi ^{(0)}(z)\) denoting the digamma function (see, for instance, Eq. (6.3.1) of Abramowitz and Stegun (1992)). By considering Eq. (6.3.6) of Abramowitz and Stegun (1992), the differences of the polygamma functions in \(A_3(z)\) can be expressed as

$$\begin{aligned} \psi ^{(0)}\left( n-\frac{t+y_0}{a}-\frac{k}{z-1}\right) -\psi ^{(0)}\left( s-\frac{t+y_0}{a}-\frac{k}{z-1}\right) =-\sum _{j\mathop{=}0}^{s\mathop{-}n\mathop{-}1}\frac{1}{n-\frac{t+y_0}{a}-\frac{k}{z-1}+j}, \end{aligned}$$

and

$$\begin{aligned} \psi ^{(0)}\left( s-\frac{y_0}{a}+\frac{k}{z-1}\right) -\psi ^{(0)}\left( n-\frac{y_0}{a}+\frac{k}{z-1}\right) =\sum _{j\mathop{=}0}^{s\mathop{-}n\mathop{-}1}\frac{1}{n-\frac{y_0}{a}+\frac{k}{z-1}+j}, \end{aligned}$$

so that, by means of L’Hôpital’s rule, we have

$$\begin{aligned} \lim _{z\mathop{\rightarrow }1} \frac{A_2(z)+A_3(z)}{(z-1)^2}=t(k-1). \end{aligned}$$

Finally, since \(\lim _{z\rightarrow 1} A_1(z)=(-1)^{s-n}\), the thesis follows. The other cases can be treated similarly.

Equation (10) shows that \(\mathbb {E}\left[ N(t)|y_0\right]\) is a piecewise linear, continuous, increasing and convex function of \(t \ge 0\). The unconditional expected value of N(t) is provided in the following proposition.

Proposition 2.3

For \(\omega =ka\), \(a>0\), \(k\in {\mathbb N}\), the expected value of N(t) is given by

$$\begin{aligned} \mathbb {E}\left[ N(t)\right] = \left\{ \begin{array}{ll} \frac{t \left[ 2 a k(k\mathop{-}1) + t\right] }{2 (a k)^2}\quad &{} \text {for}\;\; 0 \le t < k a,\\ \frac{2 t\mathop{-}a}{2 a}\quad &{} \text {for} \;\; t\ge k a. \end{array} \right. \end{aligned}$$
(11)

Proof

The proof follows from Eq. (10) and recalling that \(Y_0\sim U(0,k a)\).

From Eq. (11) we note that \(\mathbb {E}\left[ N(t)\right]\) is an increasing convex continuous function of t, and that

$$\begin{aligned} \mathbb {E}\left[ N(t)\right] \sim \frac{t}{a} \qquad \text {as } \; t\rightarrow +\infty , \end{aligned}$$
(12)

in agreement with a well-known result on point processes (see Cox and Lewis (1966)).
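As a quick sanity check of Propositions 2.2 and 2.3, the following Monte Carlo sketch (ours; sample sizes and parameter values are arbitrary) estimates \(\mathbb {E}\left[ N(t)\right]\) by simulating the arrival times (2) and compares it with Eq. (11); fixing \(y_0\) instead of sampling it gives an estimate of the conditional mean (10).

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_mean_N(t, a=1.0, k=2, y0=None, reps=200_000):
    """Monte Carlo estimate of E[N(t)] (y0=None) or of E[N(t) | Y_0 = y0]."""
    # indices j for which A_j = j*a + Y_j - y0 may fall in (0, t]
    js = np.array([j for j in range(-k, int(np.floor(t / a)) + k + 1) if j != 0])
    y0s = rng.uniform(0, k * a, reps) if y0 is None else np.full(reps, float(y0))
    Y = rng.uniform(0, k * a, size=(reps, js.size))
    A = js * a + Y - y0s[:, None]
    return ((A > 0) & (A <= t)).sum(axis=1).mean()

def mean_N(t, a=1.0, k=2):
    """E[N(t)] from Eq. (11)."""
    if t < k * a:
        return t * (2 * a * k * (k - 1) + t) / (2 * (a * k) ** 2)
    return (2 * t - a) / (2 * a)

for t in (0.5, 1.5, 4.0):
    print(t, round(mc_mean_N(t), 4), mean_N(t))
```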

2.4 Some Cases of Interest

In order to give a more detailed description of the considered model, hereafter we provide the explicit expressions of the probability generating function (5) and of the state probabilities of N(t) (given in Proposition 2.1) in two cases: (i) \(k=1\) and (ii) \(k=2\). These cases correspond to \(Y_j\) being uniformly distributed over (0, a) and (0, 2a), respectively.

Note that for \(k=1\) the vessels’ arrival times are uniformly distributed in non-overlapping time intervals, so that the actual arrivals keep the scheduled order and \(X_i=a+Y_i-Y_{i-1}\) is distributed as \(a\) plus the difference of two independent uniform random variables on (0, a).

Proposition 2.4

Under the assumptions of Theorem 2.1, in case \(k=1\), \(a>0\), we have

$$\begin{aligned} G_N(t,z|y_0)= {\left\{ \begin{array}{ll} 1, &{} \ 0\le t < a-y_0\\ z^{n\mathop{-}1}\left[ 1-(1-z)\frac{t-na+y_0}{a}\right] , &{} \ na-y_0\le t < (n+1)a-y_0,\; \; n\ge 1. \end{array}\right. } \end{aligned}$$

Moreover,

$$\begin{aligned} p_{y_0}(0,t)= {\left\{ \begin{array}{ll} 1,\quad &{} 0\le t<a-y_0\\ 1-\frac{t-a+y_0}{a},\quad &{} a-y_0\le t<2a-y_0\\ 0, &{}t\ge 2a-y_0, \end{array}\right. } \end{aligned}$$
(13)

and, for \(n\ge 1\)

$$\begin{aligned} p_{y_0}(n,t)= {\left\{ \begin{array}{ll} 0,\quad &{} 0\le t<na-y_0 \\ \frac{t-na+y_0}{a},\quad &{} na-y_0\le t<(n+1)a-y_0 \\ 1-\frac{t-(n+1)a+y_0}{a},\quad &{} (n+1)a-y_0\le t<(n+2)a-y_0 \\ 0,&{}t\ge (n+2)a-y_0. \end{array}\right. } \end{aligned}$$
(14)

Proof

The proof follows from Theorem 2.1 and Proposition 2.1 for \(k=1\).

The case \(k=2\) can be analyzed under two different scenarios: \(y_0\in (0,a)\) and \(y_0\in (a,2 a)\).

Proposition 2.5

Under the assumptions of Theorem 2.1, in case \(k=2\), \(a>0\) and assuming \(y_0\in (0,a)\), one has

$$\begin{aligned} G_N(t,z|y_0)={\left\{ \begin{array}{ll} \left[ 1-(1-z)\frac{t}{2a}\right] ,\quad &{} 0\le t < a-y_0 \vspace{1mm} \\ \left[ 1-(1-z)\frac{a-y_0}{2a}\right] \left[ 1-(1-z)\frac{t-a+y_0}{2a}\right] ,\quad &{} a-y_0\le t < 2a-y_0 \\ z^{n\mathop{-}2}\left[ 1-(1-z)\frac{a-y_0}{2a}\right] \\ \times \left[ 1-(1-z)\frac{t-(n-1)a+y_0}{2a}\right] \left[ 1-(1-z)\frac{t-na+y_0}{2a}\right] , \quad &{} na-y_0\le t < (n+1)a-y_0,\; n\ge 2. \end{array}\right. } \end{aligned}$$

The state probabilities given in Proposition 2.1 have the following expressions

$$\begin{aligned} p_{y_0}(0,t)= {\left\{ \begin{array}{ll} \left[ 1-\frac{t}{2a}\right] ,\quad &{}{} 0\le t< a-y_0 \vspace{1mm} \\ \left[ 1-\frac{a-y_0}{2a}\right] \left[ 1-\frac{t-a+y_0}{2a}\right] ,\quad &{}{} a-y_0\le t< 2a-y_0 \vspace{1mm} \\ \left[ 1-\frac{a-y_0}{2a}\right] \left[ 1-\frac{t-a+y_0}{2a}\right] \left[ 1-\frac{t-2a+y_0}{2a}\right] ,\quad &{}{} 2a-y_0\le t < 3a-y_0 \\ 0, &{}{}t\ge 3a-y_0, \end{array}\right. } \end{aligned}$$
$$\begin{aligned} p_{y_0}(1,t)= {\left\{ \begin{array}{ll} \frac{t}{2a},\quad &{} 0\le t<a-y_0\\ \frac{a-y_0}{2a}\left[ 1-\frac{t-a+y_0}{2a}\right] + \left[ 1-\frac{a-y_0}{2a}\right] \frac{t-a+y_0}{2a},\quad &{} a-y_0\le t<2a-y_0 \vspace{1mm} \\ \frac{a-y_0}{2a}\left[ 1-\frac{t-a+y_0}{2a}\right] \left[ 1-\frac{t-2a+y_0}{2a}\right] + \left[ 1-\frac{a-y_0}{2a}\right] \frac{t-a+y_0}{2a} \vspace{1mm} \\ \times \left[ 1-\frac{t-2a+y_0}{2a}\right] + \left[ 1-\frac{a-y_0}{2a}\right] \left[ 1-\frac{t-a+y_0}{2a}\right] \frac{t-2a+y_0}{2a}, \quad &{} 2a-y_0\le t<3a-y_0 \vspace{1mm} \\ \left[ 1-\frac{a-y_0}{2a}\right] \left[ 1-\frac{t-2a+y_0}{2a}\right] \left[ 1-\frac{t-3a+y_0}{2a}\right] , &{} 3a-y_0\le t<4a-y_0 \\ 0,&{}t\ge 4a-y_0, \end{array}\right. } \end{aligned}$$

and, for \(n\ge 2\),

$$p_{y_0}(h,t)={\mathcal N}_1(h,t,a,y_0),$$

where, for brevity, the expression of \({\mathcal N}_1(h,t,a,y_0)\) is provided in Appendix B.

If \(k=2\), \(a>0\) and \(y_0\in (a,2 a)\), we have

$$\begin{aligned} G_N(t,z|y_0)={\left\{ \begin{array}{ll} \left[ 1-(1-z)\frac{t}{2a}\right] ,\; &{} 0\le t<2a-y_0 \vspace{1mm} \\ \left[ 1-(1-z)\frac{t}{2a}\right] \left[ 1-(1-z)\frac{t-2a+y_0}{2a}\right] ,\; &{} 2a-y_0\le t<3a-y_0 \vspace{1mm} \\ z^{n\mathop{-}3} \left[ 1-(1-z)\frac{3a-y_0}{2a}\right] \left[ 1-(1-z)\frac{t-(n-1)a+y_0}{2a}\right] \left[ 1-(1-z)\frac{t-na+y_0}{2a}\right] ,\; &{} \\ &{} na-y_0\le t<(n+1)a-y_0, \; n\ge 3.\\ \end{array}\right. } \end{aligned}$$

Moreover, one has

$$\begin{aligned} p_{y_0}(0,t)= {\left\{ \begin{array}{ll} 1-\frac{t}{2a}, \; &{} 0\le t<2a-y_0 \\ \left[ 1-\frac{t}{2a}\right] \left[ 1-\frac{t-2a+y_0}{2a}\right] ,\; &{} 2a-y_0\le t<3a-y_0 \vspace{1mm} \\ \left[ 1-\frac{3a-y_0}{2a}\right] \left[ 1-\frac{t-2a+y_0}{2a}\right] \left[ 1-\frac{t-3a+y_0}{2a}\right] ,\; &{} 3a-y_0\le t<4a-y_0 \\ 0,\; &{}t\ge 4 a-y_0,\\ \end{array}\right. } \end{aligned}$$
$$\begin{aligned} p_{y_0}(1,t)= {\left\{ \begin{array}{ll} \frac{t}{2a},\; &{} 0\le t<2a-y_0 \\ \frac{t}{2a}\left[ 1-\frac{t-2a+y_0}{2a}\right] +\frac{t-2a+y_0}{2a}\left[ 1-\frac{t}{2a}\right] ,\; &{} 2a-y_0\le t<3a-y_0 \vspace{1mm} \\ \frac{3a-y_0}{2a}\left[ 1-\frac{t-2a+y_0}{2a}\right] \left[ 1-\frac{t-3a+y_0}{2a}\right] \vspace{1mm} \\ +\left[ 1-\frac{3a-y_0}{2a}\right] \frac{t-2a+y_0}{2a}\left[ 1-\frac{t-3a+y_0}{2a}\right] \vspace{1mm} \\ +\left[ 1-\frac{3a-y_0}{2a}\right] \left[ 1-\frac{t-2a+y_0}{2a}\right] \frac{t-3a+y_0}{2a}, \; &{} 3a-y_0\le t<4a-y_0 \vspace{1mm} \\ \left[ 1-\frac{3a-y_0}{2a}\right] \left[ 1-\frac{t-3a+y_0}{2a}\right] \left[ 1-\frac{t-4a+y_0}{2a}\right] ,\; &{} 4a-y_0\le t<5a-y_0 \\ 0,\; &{} t\ge 5a-y_0,\\ \end{array}\right. } \end{aligned}$$

and for \(n\ge 2\)

$$p_{y_0}(h,t)={\mathcal N}_2(h,t,a,y_0),$$

where the expression of \({\mathcal N}_2(h,t,a,y_0)\) is given in Appendix B.

Proof

The proof follows from Theorem 2.1 and Proposition 2.1 for \(k=2\).

The probabilities \(p_{y_0}(h,t)\), obtained in Proposition 2.5, are plotted in Fig. 5 for \(a=1\) and two different choices of \(y_0\). For \(h=0\) the probabilities \(p_{y_0}(h,t)\) are decreasing in t. Moreover, for \(h\ge 1\), the plots are continuous and made up of straight and curved pieces. This shape is partially inherited from the triangular distribution arising in the case \(k=1\), in which the times between consecutive vessel arrivals are distributed as \(a\) plus the difference of two uniform random variables.

Fig. 5

Probabilities \(p_{y_0}(n,at)\) obtained in Proposition 2.5 for \(a=1\), \(n=0,1,2\) with \(y_0=0.5\) (upper cases) and \(y_0=1.5\) (lower cases)

3 The Stationary Arrival Counting Process

In the previous section we obtained the probability distribution of the counting process that describes the vessel arrivals, conditional on an arrival occurring at time 0. Unfortunately, due to the difficulties in handling the expressions mentioned in Proposition 2.1, the unconditional distribution is very hard to obtain in closed form in the general setting. However, considering the relevance of the unconditional distribution in applications, hereafter we devote our efforts to deriving it in some special cases. We are confident that these results are useful to give a first insight into the properties and behavior of the processes under consideration.

Differently from Section 2.2, where the interest was in the conditional probability (7), in this section we deal with the (unconditional) distribution

$$\begin{aligned} p(n,t):=\mathbb {P}\left( N(t)=n\right) ,\quad n\in {\mathbb {N}_0}. \end{aligned}$$
(15)

In general, this distribution can easily be obtained in a formal way, but its explicit form requires lengthy calculations due to the complicated expressions obtained in Proposition 2.1. Furthermore, we also focus on the analysis of the stationary arrival counting process, namely \(N_e(t)\), which represents the number of arrivals in a time interval of length t when the initial time is an arbitrarily chosen instant. This process is useful since it allows us to determine some relevant information on the interarrival times between consecutive vessel arrivals. Let us denote by

$$\begin{aligned} \widetilde{p}(n,t):=\mathbb {P}\left( N_e(t)=n\right) ,\quad n\in {\mathbb {N}_0}, \end{aligned}$$
(16)

the probability law of \(N_e(t)\). The probability distributions of the counting processes N(t) and \(N_e(t)\) are linked by means of the following relations (see Chapter 4 of Cox and Lewis (1966)):

$$\begin{aligned} \begin{aligned}&\widetilde{p}(0,t)=1-\frac{1}{\mathbb {E}(X)}\int _0^t p(0,u)\,\textrm{d}u, \\&\widetilde{p}(n,t)=\frac{1}{\mathbb {E}(X)} \left[ \int _0^t p(n-1,\tau )\,\textrm{d}\tau -\int _0^t p(n,\tau )\,\textrm{d}\tau \right] , \qquad n\in \mathbb {N}, \end{aligned} \end{aligned}$$
(17)

where X is distributed as the interarrival times \(X_i\). Note that, due to (12), for all \(i\in {\mathbb Z}\) one has

$$\begin{aligned} \mathbb {E}(X_i)=a. \end{aligned}$$
(18)

Hence, from (17) one has \(\int _0^{\infty }p(n,t) \,\textrm{d}t =a\) for all \(n\in \mathbb {N}_0\), and

$$\begin{aligned} \mathbb {P}(\widetilde{T}_{n\mathop{+}1}> t)= 1-\frac{1}{a} \int _0^t p(n,\tau )\,\textrm{d}\tau , \qquad t\ge 0, \;\; n\in \mathbb {N}_0, \end{aligned}$$
(19)

where \(\widetilde{T}_{n\mathop{+}1}\) is the first-passage-time of \(N_e(t)\) through state \(n+1\).

Let us now obtain some results concerning the correlations of the random variables \(X_i\). The heaviness of the available expressions for the distribution of N(t) forces us to consider only a few instances. Hence, for both cases \(k=1\) and \(k=2\), hereafter we focus on the expressions of (i) the (unconditional) probability law of N(t), (ii) the probability density function of the interarrival times \(X_i\), (iii) the probability law of \(N_e(t)\), and (iv) the variance and the autocorrelation coefficients of \(X_i\).

3.1 Case \(k=1\)

Proposition 3.1

In the case \(k=1\), for \(a>0\), the probability law of N(t), defined in Eq. (15), is given by

$$\begin{aligned} p(0,t)={\left\{ \begin{array}{ll} 1-\frac{t^2}{2a^2},\qquad &{} 0\le t< a\\ \frac{(2 a-t)^2}{2a^2},\qquad &{}a\le t<2a\\ 0, \qquad &{}t\ge 2a, \end{array}\right. } \end{aligned}$$

and, for \(n\in \mathbb {N},\)

$$\begin{aligned} p(n,t)={\left\{ \begin{array}{ll} \frac{\left[ t-a(n-1)\right] ^2}{2a^2}, \qquad &{} (n-1)a\le t< na \\ 1-\frac{(t-na)^2}{2a^2}-\frac{\left[ a(n+1)-t\right] ^2}{2a^2}, \qquad &{}na\le t<(n+1)a\\ \frac{\left[ a(n+2)-t\right] ^2}{2a^2},\qquad &{}(n+1)a\le t< (n+2)a\\ 0, \qquad &{}t\ge (n+2)a. \end{array}\right. } \end{aligned}$$

Proof

The thesis follows from Eqs. (13) and (14) and recalling that \(Y_0\sim U(0,a)\).

Remark 3.1

From Proposition 3.1, we have that the mode of p(n, t), \(t\ge 0\), is given by

$$Mode \, (p(0,t ))= 0, \qquad Mode \, (p(n,t))= \frac{a}{2}(2n +1)\quad \text {for }n\in \mathbb {N}.$$

As an immediate consequence of Proposition 3.1, we obtain the following results about the density of the interarrival times.

Corollary 3.1

Under the assumptions of Proposition 3.1, the probability density function of the interarrival times \(X_i\), for \(i\in \mathbb {Z}\), is given by

$$f_X(t)={\left\{ \begin{array}{ll} \frac{t}{a^2},\quad &{} 0\le t< a\\ \frac{2a-t}{a^2},\quad &{}a\le t<2a\\ 0,\quad &{} t\ge 2a. \end{array}\right. }$$

Proof

The thesis follows noting that \(\mathbb {P}(X_i>t)=\mathbb {P}(X_1>t)=p(0,t)\), for all \(i\in \mathbb {Z}\).
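For \(k=1\) the density of Corollary 3.1 can be checked quickly by simulation. The sketch below (ours) exploits the fact that for \(k=1\) the arrivals keep the scheduled order, so that \(X_i=a+Y_i-Y_{i-1}\) (cf. Section 2.4), and compares the empirical distribution of the interarrival times with the one induced by \(f_X\).

```python
import numpy as np

rng = np.random.default_rng(2)
a, n = 1.0, 200_000

Y = rng.uniform(0, a, n + 1)              # lay-period offsets Y_i, case k = 1
X = np.diff(a * np.arange(n + 1) + Y)     # interarrival times X_i = a + Y_i - Y_{i-1}

print(round(float(X.mean()), 4))          # ~ a, cf. Eq. (18)
for t in (0.25, 0.75, 1.25, 1.75):        # empirical CDF vs the CDF of f_X
    F_th = t**2 / (2 * a**2) if t < a else 1 - (2 * a - t)**2 / (2 * a**2)
    print(t, round(float((X <= t).mean()), 4), round(F_th, 4))
```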

The following proposition provides, for \(n\in {\mathbb {N}_0}\), the explicit expression of the probabilities \(\widetilde{p}(n,t)\) introduced in Eq. (16).

Proposition 3.2

For \(k=1\), \(a>0\), the probability law of \(N_e(t)\) is given by

$$\begin{aligned} \widetilde{p}(0,t)={\left\{ \begin{array}{ll} 1-\frac{t}{a}\left[ 1-\frac{t^2}{6a^2}\right] ,\qquad &{} 0\le t<a\\ \frac{(2a-t)^3}{6a^3},\qquad &{}a\le t<2a\\ 0, \qquad &{}t\ge 2a, \end{array}\right. } \end{aligned}$$
(20)
$$\begin{aligned} \widetilde{p}(1,t)={\left\{ \begin{array}{ll} \frac{t}{a}\left[ 1-\frac{t^2}{3a^2}\right] , \qquad &{} 0\le t<a\\ 1-\frac{(2a-t)^3}{3a^3}-\frac{t-a}{a}\left[ 1-\frac{(t-a)^2}{6a^2}\right] ,\qquad &{}a\le t<2a\\ \frac{(3a-t)^3}{6a^3}, \qquad &{}2a\le t<3a\\ 0, \qquad &{}t\ge 3a, \end{array}\right. } \end{aligned}$$

and, for \(n\in \mathbb {N}\),

$$\begin{aligned} \widetilde{p}(n+1,t)={\left\{ \begin{array}{ll} \frac{\left[ t-a(n-1)\right] ^3}{6a^3},\qquad &{}(n-1)a\le t<na\\ \frac{(t-an)}{a}\left[ 1-\frac{(t-na)^2}{3a^2}\right] +\frac{\left[ a(n+1)-t\right] ^3}{6a^3},\qquad &{}na\le t<(n+1)a\\ 1-\frac{t-a(n+1)}{a}\left[ 1-\frac{\left[ t-a(n+1)\right] ^2}{6a^2}\right] -\frac{\left[ a(n+2)-t\right] ^3}{3a^3}, \qquad &{}(n+1)a\le t<(n+2)a\\ \frac{\left[ a(n+3)-t\right] ^3}{6a^3},\qquad &{}(n+2)a\le t <(n+3)a\\ 0, \qquad &{}t\ge (n+3)a. \end{array}\right. } \end{aligned}$$

Proof

The result follows from Proposition 3.1 and Eq. (18), due to Eq. (17).

Figure 6 shows some plots of the probabilities \(\widetilde{p}(n,t)\) obtained in Proposition 3.2 for \(k=1\) and different values of n.

Fig. 6

Probabilities \(\widetilde{p}(h,t)\) for \(a=1\), \(k=1\) and \(h=0,1,2\) (from left to right)
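Relation (17) can also be verified numerically in this case: integrating p(0, u) from Proposition 3.1 reproduces \(\widetilde{p}(0,t)\) in Eq. (20). A small sketch (ours, using scipy only for the quadrature):

```python
from scipy.integrate import quad

a = 1.0

def p0(t):        # p(0, t) for k = 1, Proposition 3.1
    if t < a:
        return 1 - t**2 / (2 * a**2)
    return (2 * a - t)**2 / (2 * a**2) if t < 2 * a else 0.0

def p0_tilde(t):  # tilde p(0, t) for k = 1, Eq. (20)
    if t < a:
        return 1 - (t / a) * (1 - t**2 / (6 * a**2))
    return (2 * a - t)**3 / (6 * a**3) if t < 2 * a else 0.0

for t in (0.5, 1.0, 1.5, 2.5):
    via_17 = 1 - quad(p0, 0, t)[0] / a    # right-hand side of (17), with E(X) = a
    print(t, round(p0_tilde(t), 6), round(via_17, 6))
```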

Let us now provide the mean first-passage-time of \(N_e(t)\) through level \(n+1\), \(n\in \mathbb {N}_0\), which is asymptotically linear in n.

Corollary 3.2

Under the assumptions of Proposition 3.1, from (19) one has

$$\mathbb {E}(\widetilde{T}_{1})= \frac{7}{12}\,a, \qquad \mathbb {E}(\widetilde{T}_{n\mathop{+}1}) = \frac{a}{2}(2n+1), \quad n\in \mathbb {N}.$$

With reference to the interarrival times \(X_i\), \(i\in {\mathbb Z}\), the following proposition provides the variance and the autocorrelation coefficients

$$\begin{aligned} \rho _h =\textrm{Corr}(X_i,X_{i\mathop{+}h}), \qquad i\in {\mathbb Z}, \;\; h\in {\mathbb N}. \end{aligned}$$
(21)

Proposition 3.3

In the case \(k=1\), \(a>0\), the variance of \(X_i\), \(i\in {\mathbb Z}\), is

$$Var\left( X\right) =\frac{a^2}{6},$$

and the autocorrelation coefficients (21) are given by

$$\rho _1=-\frac{1}{2}, \qquad \rho _{h\mathop{+}1}=0, \quad h\in {\mathbb N}.$$

Proof

Due to Eq. 8 of Jagerman and Altiok (2003), and recalling Eq. (20), we have

$$Var \, \left( X\right) =2a \int _{0}^{+\infty } \widetilde{p}(0,t)\textrm{d}t - a^2=\frac{a^2}{6}.$$

From this result, and by means of Eq. 9 of Jagerman and Altiok (2003), we obtain

$$\rho _1=a\left\{ \frac{\int _0^{+\infty } \widetilde{p}(1,t)\textrm{d}t-a}{Var \, \left( X\right) }\right\} =-\frac{1}{2}, \qquad \rho _{h+1}=a\left\{ \frac{\int _0^{+\infty } \widetilde{p}(h+1,t)\textrm{d}t-a}{Var \, \left( X\right) }\right\} =0, \quad h\in \mathbb {N}.$$

The thesis then follows.

3.2 Case \(k=2\)

Proposition 3.4

In the case \(k=2\), \(a>0\), the probability law of N(t), defined in Eq. (15), is given by

$$\begin{aligned} p(0,t)= {\left\{ \begin{array}{ll} 1-\frac{t}{2a}\left( 1+\frac{t}{4a}-\frac{t^2}{6a^2}\right) ,\qquad &{} 0\le t<a\\ \frac{3a-t}{3a}\left( 1-\frac{t}{2a}\cdot \frac{6a-t}{4a}\cdot \frac{a+t}{4a}\right) , \quad &{} a\le t<2a\\ \frac{(3a-t)^2}{6a^2}\cdot \frac{4a-t}{4a}\cdot \frac{6a-t}{4a},\qquad &{} 2a\le t<3a\\ 0,\qquad &{}t\ge 3 a. \end{array}\right. } \end{aligned}$$
$$\begin{aligned} p(1,t)= {\left\{ \begin{array}{ll} \frac{t(12a^2+3at-4t^2)}{24a^3},\qquad &{} 0\le t<a\\ \frac{t(3a-t)(18a^2+7at-3t^2)}{96a^4}, \quad &{} a\le t<2a\\ \frac{1}{24}-\frac{(4a-t)(5a-t)(5a^2-8at+2t^2)}{48a^4},\qquad &{} 2a\le t<3a\\ \frac{(4a-t)^2(5a-t)(7a-t)}{96a^4}, &{} 3a\le t<4a\\ 0,\qquad &{} t\ge 4 a. \end{array}\right. } \end{aligned}$$
$$\begin{aligned} p(2,t)= {\left\{ \begin{array}{ll} \frac{t^3}{12a^3},\qquad &{} 0\le t<a\\ \frac{t(-6a^3+9a^2t+8at^2-3t^3)}{96a^4}, \quad &{} a\le t<2a\\ \frac{1}{24}-\frac{(t-a)(4a-t)(3a^2-5at+t^2)}{16a^4},\qquad &{} 2a\le t<3a\\ \frac{1}{24}-\frac{(5a-t)(6a-t)(15a^2-12at+2t^2)}{48a^4},\qquad &{} 3a\le t<4a\\ \frac{(5a-t)^2(6a-t)(8a-t)}{96a^4},\qquad &{} 4a\le t<5a\\ 0,\qquad &{} t\ge 5 a. \end{array}\right. } \end{aligned}$$

and for \(n\ge 3\)

$$\begin{aligned} p(n,t)={\left\{ \begin{array}{ll} \frac{((n-5)a-t)((n-3)a-t)((n-2)a-t)^2}{96a^4},\qquad &{} (n-2)a\le t<(n-1)a\\ \frac{1}{24}-\frac{((n-3)a-t)((n-2)a-t)(-3a^2+2(an-t)^2)}{48a^4}, \qquad &{} (n-1)a\le t<na\\ \frac{1}{24}+ \frac{((n-1)a-t)((n+2)a-t)\left[ a(-3a+an-t)+(an-t)^2\right] }{16a^4},\qquad &{} na\le t<(n+1)a\\ \frac{1}{24}-\frac{((n+3)a-t)((n+4)a-t)\left[ a(-a+4an-4t)+2(an-t)^2\right] }{48a^4},\qquad &{} (n+1)a\le t<(n+2)a\\ \frac{\left[ (n+3)a-t\right] ^2\left[ (n+4)a-t\right] \left[ (n+6)a-t\right] }{96a^4},\qquad &{} (n+2)a\le t<(n+3)a\\ 0,\qquad &{}t\ge (n+3)a. \end{array}\right. } \end{aligned}$$

Proof

The proof follows from Proposition 2.5 recalling that \(Y_0\sim U(0,2a)\).

Remark 3.2

Due to Proposition 3.4, the mode of p(n, t), \(t\ge 0\), is given by

$$Mode(p(0,t ))= 0, \qquad Mode(p(1,t))\approx 1.4216\, a, \qquad Mode(p(n,t))= \frac{a}{2}(2n +1) \quad \text {for }n\ge 2.$$

The following result thus holds.

Corollary 3.3

Under the assumptions of Proposition 3.4, the probability density function of the interarrival times \(X_i\) for \(i\in \mathbb {Z}\) is given by

$$f_X(t)={\left\{ \begin{array}{ll} \frac{(2a-t)(a+t)}{4a^3},\quad &{} 0\le t< a\\ \frac{25a^3+9a^2t-12at^2+2t^3}{48a^4},\quad &{}a\le t<2a\\ \frac{(3a-t)(39a^2-18at+2t^2)}{48a^4},\quad &{}2a\le t< 3a\\ 0,\quad &{}t\ge 3a. \end{array}\right. }$$

The next proposition provides the explicit expression of the probabilities \(\widetilde{p}(n,t)\), defined in Eq. (16).

Proposition 3.5

For \(k=2\), \(a>0\), the probability law of \(N_e(t)\) is given by

$$\begin{aligned} \widetilde{p}(0,t)={\left\{ \begin{array}{ll} \frac{(2a-t)(24a^3-12a^2t+t^3)}{48a^4},\qquad &{} 0\le t<a\\ \frac{1}{240}+\frac{(3a-t)^2(53a^3-18a^2t-4at^2+t^3)}{480a^5},\qquad &{} a\le t<2a\\ \frac{(3a-t)^2(29a^2-11at+t^2)}{480a^5},\qquad &{} 2a\le t<3a\\ 0,\qquad &{} t\ge 3a. \end{array}\right. } \end{aligned}$$
$$\begin{aligned} \widetilde{p}(1,t)={\left\{ \begin{array}{ll} \frac{t}{a}\left( 1-\frac{t(24a^2+4at-3t^2)}{48a^4}\right) ,\qquad &{} 0\le t<a\\ \frac{1}{120}+\frac{t}{a}\left( 1-\frac{t(130a^3+10a^2t-15at^2+2t^3)}{240a^4}\right) ,\qquad &{} a\le t<2a\\ \frac{1}{480}+\frac{(t-5a)^5}{96a^5}+\frac{(4a-t)(745a^3-492a^2t+109at^2-8t^3)}{96a^4},\qquad &{} 2a\le t<3a\\ \frac{(4a-t)^3(41a^2-13at+t^2)}{480a^5},\qquad &{} 3a\le t<4a\\ 0,\qquad &{} t\ge 4a. \end{array}\right. } \end{aligned}$$
$$\begin{aligned} \widetilde{p}(2,t)={\left\{ \begin{array}{ll} \frac{t^2(12a^2+2at-3t^2)}{48a^4},\qquad &{} 0\le t<a\\ \frac{113}{240}-\frac{(2a-t)^2(29a^3+29a^2t+3at^2-3t^3)}{240a^5},\qquad &{} a\le t<2a\\ \frac{13}{240}+\frac{(4a-t)^2(3a^3-7a^2t+6at^2-t^3)}{48a^5},\qquad &{} 2a\le t<3a\\ \frac{29}{120}-\frac{(3a-t)(7a-t)(49a^3-46a^2t+12at^2-t^3)}{96a^5},\qquad &{} 3a\le t<4a\\ \frac{(5a-t)^3(55a^2-15at+t^2)}{480a^5},\qquad &{}4a\le t<5a\\ 0,\qquad &{}t\ge 5a, \end{array}\right. } \end{aligned}$$

and for \(n\ge 3\)

$$\begin{aligned} \widetilde{p}(n,t)={\left\{ \begin{array}{ll} {\textbf{1}}_{\{n\mathop{=}3\}} \left( \frac{t^4}{48 a^4}\right) + {\textbf{1}}_{\{n\ge 4\}} \left( \frac{\left[ t-a(n-3)\right] ^3 (t^2+a t (11-2 n)+a^2 n^2-11 a^2 n+29 a^2)}{480 a^5}\right) ,\quad &{}(n-3)a\le t < (n-2)a\\ {\textbf{1}}_{\{n\mathop{=}3\}} \left( \frac{2 a^5-10 a^3 t^2+10 a^2 t^3+5 a t^4-2 t^5}{240 a^5}\right) + {\textbf{1}}_{\{n\ge 4\}} \left( \frac{11}{480}\right. \quad &{} (n-2)a\le t < (n-1)a \\ \left. +\frac{(n-2)(-18-25n+31n^2-10n^3+n^4)}{96}-\frac{(32-174n+153n^2-48n^3+5n^4)t}{96a}\right. \\ \left. +\frac{(-87+153n-72n^2+10n^3)t^2}{96a^2}-\frac{(51-48n+10n^2)t^3}{96a^3}+\frac{(5n-12)t^4}{96a^4}-\frac{t^5}{96a^5}\right) , \\ \frac{29}{120}+\frac{11-15n^2+n^3+4n^4-n^5}{48}+\frac{n(30-3n-16n^2+5n^3)t}{48a} \quad &{} (n-1)a\le t < na \\ -\frac{(15-3n-24n^2+10n^3)t^2}{48a^2}+\frac{(-1-16n+10n^2)t^3}{48a^3}-\frac{(5n-4)t^4}{48a^4}+\frac{t^5}{48a^5},\\ \frac{113}{240}+\frac{n^2(-15-n+4n^2+n^3)}{48}-\frac{n(-30-3n+16n^2+5n^3)t}{48a} \quad &{} na\le t < (n+1)a \\ +\frac{(-15-3n+24n^2+10n^3)t^2}{48a^2}-\frac{(-1+16n+10n^2)t^3}{48a^3}+\frac{(4+5n)t^4}{48a^4}-\frac{t^5}{48a^5}, \\ \frac{29}{120}-\frac{(n+1)(n+5)(-3+10n+6n^2+n^3)}{96}+\frac{(32+174n+153n^2+48n^3+5n^4)t}{96a} \quad &{} (n+1)a\le t < (n+2)a\\ -\frac{(87+153n+72n^2+10n^3)t^2}{96a^2}+\frac{(51+48n+10n^2)t^3}{96a^3}-\frac{(5n+12)t^4}{96a^4}+\frac{t^5}{96a^5},\\ \frac{\left[ a(n+3)-t\right] ^3 (t^2-a t (11+2 n)+a^2 n^2+11 a^2 n+29 a^2)}{480 a^5},\quad &{} (n+2)a\le t < (n+3)a\\ 0,\quad &{}t\ge (n+3)a. \end{array}\right. } \end{aligned}$$

Proof

The result follows from Proposition 3.4, recalling Eq. (17).

The plots of some probabilities of \(N_e(t)\) given in Proposition 3.5 are shown in Fig. 7.

Fig. 7

Probabilities \(\widetilde{p}(n,t)\) for \(a=1\), \(k=2\) and \(n=0,1,2,3\)

Hereafter we show the mean first-passage-time of \(N_e(t)\) through \(n+1\), which is identical to that of case \(k=1\) for \(n\ge 3\).

Corollary 3.4

Under the assumptions of Proposition 3.4, from (19) one has

$$\mathbb {E}(\widetilde{T}_{1})= \frac{409}{576}\,a, \qquad \mathbb {E}(\widetilde{T}_{2})= \frac{443}{288}\,a, \qquad \mathbb {E}(\widetilde{T}_{3})= \frac{1441}{576}\,a, \qquad \mathbb {E}(\widetilde{T}_{n\mathop{+}1}) = \frac{a}{2}(2n+1), \quad n\ge 3.$$

Also in this case, similarly to Proposition 3.3, we are able to provide the variance and the autocorrelation coefficients \(\rho _h\) of the interarrival times.

Proposition 3.6

For \(k=2\), \(a>0\), the variance of the interarrival times is given by

$$Var(X)=\frac{121}{288}\,a^2,$$

and the autocorrelation coefficients (21) are given by

$$\rho _1=-\frac{9}{22},\qquad \rho _2=-\frac{21}{242},\qquad \rho _3=-\frac{1}{242},\qquad \rho _h=0,\;\; h\ge 4.$$

Proof

Due to Eq. 8 of Jagerman and Altiok (2003), and recalling Proposition 3.5, we have

$$\begin{aligned} Var(X)=2a\int _0^{+\infty }\widetilde{p}(0,t)\,\textrm{d}t-a^2 =\frac{121}{288}\,a^2. \end{aligned}$$

The expression of the autocorrelation coefficients \(\rho _i\), \(i=1,2,3\), follows from Eq. (9) of Jagerman and Altiok (2003) and noting that

$$\int _0^{+\infty }\widetilde{p}(1,t)\,\textrm{d}t=\frac{53}{64}\,a, \quad \int _0^{+\infty }\widetilde{p}(2,t)\,\textrm{d}t=\frac{185}{192}\,a, \quad \int _0^{+\infty }\widetilde{p}(3,t)\,\textrm{d}t=\frac{575}{576}\,a.$$

Moreover, for \(h\ge 4\), one has

$$\begin{aligned} \int _0^{+\infty }\widetilde{p}(h,t)\,\textrm{d}t=a, \end{aligned}$$

so that

$$\begin{aligned} \rho _{h}=a\left\{ \frac{\int _0^{+\infty }\widetilde{p}(h,t)\,\textrm{d}t-a}{Var(X)}\right\} =0. \end{aligned}$$

Remark 3.3

Note that for both cases \(k=1\) and \(k=2\), we have that \(\sum _{h=1}^{+\infty }\rho _h=-\frac{1}{2}\), in agreement with a well-known result concerning stationary time series, cf. Hassani (2009).
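The variances and autocorrelations of Propositions 3.3 and 3.6 can also be checked by simulating a long stretch of arrivals, sorting them, and estimating the sample autocorrelation of the gaps; a rough sketch (ours, with arbitrary sample sizes) follows.

```python
import numpy as np

rng = np.random.default_rng(3)

def interarrivals(k, a=1.0, n=400_000):
    """Gaps between consecutive actual arrivals A_j = j*a + Y_j, Y_j ~ U(0, k*a)."""
    A = np.sort(a * np.arange(n) + rng.uniform(0, k * a, n))  # order may swap when k >= 2
    return np.diff(A)[2 * k:-2 * k]                           # drop gaps near the boundary

def acf(x, hmax):
    x = x - x.mean()
    return [float((x[:-h] * x[h:]).mean() / (x * x).mean()) for h in range(1, hmax + 1)]

for k in (1, 2):
    X = interarrivals(k)
    print(k, round(float(X.var()), 4), [round(r, 4) for r in acf(X, 4)])
# expected: Var = a^2/6 ~ 0.1667,       rho ~ (-0.5, 0, 0, 0)                 for k = 1
#           Var = 121 a^2/288 ~ 0.4201, rho ~ (-0.4091, -0.0868, -0.0041, 0)  for k = 2
```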

4 Application to the SHIP/M/\(\infty\) Queue

This section provides an application of the previous results, in which we assume that the counting process N(t) introduced in Section 2 is the arrival process of a queueing system having an infinite number of servers. Furthermore, we assume that the service times are independent and identically distributed, having exponential distribution with parameter \(\mu >0\). Henceforth, the resulting system is called the SHIP/M/\(\infty\) queue. The assumption of an infinite number of servers is a suitable approximation of a real situation in which the port facilities are wide enough to ensure service even for a large number of arriving vessels. Clearly, a more realistic model would include a finite number of servers rather than an infinite one. Hence, the following analysis of the SHIP/M/\(\infty\) queue can be viewed as a first step toward the investigation of the bulk port queueing mechanism.

Hereafter, we study the number \(Q_n\) of customers at the port seen by the arrival of the n-th customer. The main result is based on Eq. (2.1) of Brandt (1987) which, upon a correction, reads

$$\begin{aligned} Q_n=\sum _{i\mathop{=}1}^{\infty } {\textbf{1}}{\left( \sum _{j\mathop{=}n\mathop{-}i}^{n\mathop{-}1} X_j < S_{n-i}\right) }, \qquad n\in \mathbb {N}, \end{aligned}$$
(22)

where \(S_{n-i}\) denotes the service time of the \((n-i)\)-th customer. The expected value of \(Q_n\) is obtained in the following proposition. It is worth mentioning that \(\mathbb {E}(Q_n)\) does not depend on n, since by assumption the interarrival times \(X_i\) are stationary and the service times \(S_j\) are i.i.d.

Proposition 4.1

In the case of the SHIP/M/\(\infty\) queue, for \(k\in \mathbb {N}\) and \(\mu >0\), the expected number of customers at the port seen by the arrival of the n-th customer is

$$\begin{aligned} \mathbb {E}(Q_n)=\frac{1-\textrm{e}^{-\mu a k }+\mu a k(k-1)}{(\mu a k)^2}, \qquad n\in \mathbb {N}. \end{aligned}$$
(23)

Proof

Due to Eq. (22), one has

$$\mathbb {E}(Q_n)=\sum _{i\mathop{=}1}^{\infty }\mathbb {P}\left( \sum _{j\mathop{=}n\mathop{-}i}^{n\mathop{-}1} X_j < S_{n\mathop{-}i}\right) .$$

Hence, recalling that \(\left\{ X_i\right\}\) is a stationary sequence and that \(S_n\), \(n\in {\mathbb Z}\), is a sequence of independent random variables exponentially distributed with parameter \(\mu\), we have

$$\mathbb {E}(Q_n) =\sum _{i\mathop{=}1}^{\infty } {\mathbb P}\left( \sum _{j\mathop{=}1}^{i} X_j < S_{n\mathop{-}i}\right) =\sum _{i\mathop{=}1}^{\infty } \int _{0}^{+\infty } \mu e^{-\mu t} {\mathbb P} (N(t)\ge i) \textrm{d}t =\int _{0}^{+\infty } \mu e^{-\mu t}{ \mathbb E}\left[ N(t)\right] \textrm{d}t,$$

so that the thesis follows from Eq. (11).

Note that the mean number of customers at the port seen by the arrival of a customer depends on the parameters \(\mu\) and a only through their product. Moreover, \(\mathbb {E}(Q_n)\) is decreasing in \(\mu a\): it tends to 0 as \(\mu a\rightarrow +\infty\) and to \(+\infty\) as \(\mu a\rightarrow 0\). Furthermore, \(\mathbb {E}(Q_n)\) is increasing in \(k\in \mathbb {N}\) and tends to \(1/(\mu a )\) as \(k\rightarrow \infty\).

The knowledge of \(\mathbb {E}(Q_n)\) allows us to determine the parameters such that the expected number of customers equals a given value, say q. Indeed, thanks to (23) we are able to find numerically the solutions of the equation \(\mathbb {E}(Q_n)=q\), for \(q>0\). Fig. 8 shows an example of the solution in terms of \(\mu a\). We can see that the solution is decreasing in q and increasing in k. This means that a small number of customers is expected when k is small and \(\mu a\) is large, i.e. when the mean service time is small compared with the scheduling period and overlaps in the vessel arrivals are rare.

Fig. 8

Values of \(\mu a\) such that \(\mathbb {E}(Q_n)=q\), for \(0.1<q<10\), for \(k=1\) (solid curve) and \(k=10\) (dashed curve)
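For instance, the curves of Fig. 8 can be reproduced by solving \(\mathbb {E}(Q_n)=q\) numerically; a sketch based on Eq. (23) and a standard root finder (our choice) is given below.

```python
import numpy as np
from scipy.optimize import brentq

def mean_Q(mu_a, k):
    """E(Q_n) from Eq. (23), as a function of mu*a and k."""
    x = mu_a * k
    return (1 - np.exp(-x) + mu_a * k * (k - 1)) / x**2

def mu_a_for_target(q, k):
    """Value of mu*a such that E(Q_n) = q (E(Q_n) is decreasing in mu*a)."""
    return brentq(lambda x: mean_Q(x, k) - q, 1e-8, 1e8)

for k in (1, 10):
    print(k, [round(mu_a_for_target(q, k), 4) for q in (0.5, 1.0, 2.0, 5.0)])
```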

In the following proposition we provide upper and lower bounds for the probability of an empty queue.

Proposition 4.2

In the case of the SHIP/M/\(\infty\) queue, for \(k\in \mathbb {N}\) and \(\mu >0\), the probability of an empty queue satisfies

$$\begin{aligned} \phi _1(k,a,\mu ):=\max \{0, 1-{\mathbb {E}(Q_n)}\}\le {\mathbb P}(Q_n=0)\le \int _{0}^{+\infty } \mathbb {P}(N(t)=0) \mu \textrm{e}^{-\mu t} \textrm{d}t:=\phi _2(k,a,\mu ) \end{aligned}$$
(24)

where \(\phi _1\) is given in terms of (23) and, for brevity, the expression of \(\phi _2\) is provided in Appendix C.

Proof

The thesis follows from Eq. (22).

Fig. 9

\(\phi _1(k,a,\mu )\) (dashed line) and \(\phi _2(k,a,\mu )\) (solid line) for \(a=1\) with \(k=1\) (left-hand side), \(k=2\) (center) and \(k=3\) (right-hand side)

Remark 4.1

From Proposition 4.2 we have

$$\lim _{a\rightarrow +\infty } \phi _1(k,a,\mu ) =\lim _{a\rightarrow +\infty } \phi _2(k,a,\mu ) =1, \qquad \lim _{\mu \rightarrow +\infty } \phi _1(k,a,\mu ) =\lim _{\mu \rightarrow +\infty } \phi _2(k,a,\mu ) =1,$$

suggesting that, for large values of a or \(\mu\), one has \({\mathbb P}(Q_n=0)\approx 1\).

Proposition 4.3

In the case of the SHIP/M/\(\infty\) queue, for \(k=1\) and \(\mu >0\), one has

$$\begin{aligned} \max {\left\{ 0,1-\frac{1-e^{-a\mu }}{(a\mu )^2}\right\} }\le \mathbb {P}(Q_n=0) \le 1 - \frac{(1- e^{-a\mu })^2}{(a\mu )^2}, \end{aligned}$$

and, for \(k=2\),

$$\begin{aligned} \max {\left\{ 0,1-\frac{1-e^{-2a\mu }+2a\mu }{(2a \mu )^2} \right\} }\le \mathbb {P}(Q_n=0) \le \frac{e^{-3a\mu }\,g(a,\mu )}{(2a\mu )^4}. \end{aligned}$$

where \(g(a,\mu )=\left[ 4a \mu \,e^{3a\mu }(4a^3\mu ^3-2a^2\mu ^2-a\mu +2) -e^{2a\mu }(a\mu +2)^2 +2e^{a\mu }(a\mu -2)^2 -(a\mu -2)^2\right]\).

Proof

The thesis is a consequence of Proposition 4.2.

In Fig. 9, we provide some plots of the bounds \(\phi _1(k,a,\mu )\) and \(\phi _2(k,a,\mu )\) for different choices of the parameters, by considering the expressions obtained in Propositions 4.2 and 4.3. From the plots one can note that the bounds approach 1 more slowly as k increases.
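For the case \(k=1\), for example, the bounds of Proposition 4.3 are elementary to evaluate; a minimal sketch (ours) is the following.

```python
import numpy as np

def bounds_empty_queue_k1(mu_a):
    """Lower and upper bounds of Prop. 4.3 for k = 1, as functions of mu*a."""
    x = mu_a
    lower = max(0.0, 1 - (1 - np.exp(-x)) / x**2)
    upper = 1 - (1 - np.exp(-x))**2 / x**2
    return lower, upper

for x in (0.5, 1.0, 2.0, 5.0, 10.0):
    lo, up = bounds_empty_queue_k1(x)
    print(x, round(lo, 4), round(up, 4))
```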

5 Concluding Remarks

In this paper we have studied the model of port dynamics proposed in Jagerman and Altiok (2003), in which each vessel has a scheduled arrival time together with a constant lay period during which, uniformly at random, it can arrive at the port. In particular, we have considered the counting process N(t), which represents the number of scheduled vessels arriving during the time interval (0, t], \(t>0\), and we have obtained the explicit expression of its probability generating function. The latter allowed us to obtain the probability distribution of N(t) and its expected value. Differently from other papers in which approximations or simulation techniques are used, we have been able to provide explicit analytical results. Starting from the probability law of N(t), standard computations have allowed us to obtain the probability law of the stationary counting process related to N(t), as well as various results concerning the autocorrelations of the random variables \(\{X_i\}\), \(i\in {\mathbb Z}\), giving the actual interarrival time between the \((i-1)\)-th and the i-th vessel arrival.

The final part of the paper has been dedicated to the queueing system characterized by stationary interarrival times \(\{X_i\}\), exponential service times and an infinite number of servers. The latter assumption is a suitable formal approximation of a real situation in which the port facilities are wide enough to ensure service even for a large number of arriving vessels. Some results on the average number of customers at the port and on the probability of an empty queue have been obtained.

The given results can be further exploited for the analysis both of the more general case in which the lay period \(\omega\) is an arbitrary positive real number (not necessarily a multiple of the period a) and of a more realistic SHIP/M/N queue. This will be the object of a future investigation.