Introduction

Especially since the start of the COVID-19 pandemic, numerous papers have been published on mathematical models for the spread of epidemics; see, for instance, [19] and [17]. These models can be quite complex, with systems of seven or more nonlinear differential equations. The authors are generally interested in the stability analysis of their models and/or in computing the all-important basic reproduction number \(R_0\). The authors of [4], among others, proposed a SIS-type (for Susceptible, Infected, Susceptible) model and studied its stability. In this paper, we consider a basic SIR model (R standing for Recovered or Removed), to which random perturbations will be added to make it more realistic. The deterministic SIR model is a particular case of the SIRS (see [20]), SEIR (E standing for Exposed; see [18] or [5]) and SEIRS (see [2]) models.

Let X(t) denote the number of individuals who are susceptible to a certain disease at time t, Y(t) the number who are infected, and Z(t) the number who are either vaccinated or cured of the disease, which is assumed to be non-lethal. Moreover, the size of the population is assumed to be constant. For example, the population could consist of the people on a cruise ship, and the disease could be an outbreak of gastroenteritis.

We begin with the following deterministic model:

$$\begin{aligned} \textrm{d}X(t) = -a_1 X(t) \textrm{d}t - a_2 X(t) Y(t) \textrm{d}t, \end{aligned}$$
(1)
$$\begin{aligned} \textrm{d}Y(t) = -b_1 Y(t) \textrm{d}t + a_2 X(t) Y(t) \textrm{d}t, \end{aligned}$$
(2)
$$\begin{aligned} \textrm{d}Z(t) = a_1 X(t) \textrm{d}t + b_1 Y(t) \textrm{d}t, \end{aligned}$$
(3)

where \(a_1\), \(a_2\) and \(b_1\) are positive constants. The constant \(a_1\) is the vaccination rate, while \(b_1\) is the recovery rate. We thus assume that susceptible individuals become infected at a rate which is proportional to the product \(X(t) Y(t)\).
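Before adding noise, the qualitative behaviour of (1)–(3) is easy to confirm numerically. The following forward-Euler sketch (with purely illustrative parameter values and initial conditions, not taken from the paper) shows that the population size is conserved and that \(X(t)+Y(t) \rightarrow 0\):

```python
# Forward-Euler sketch of the deterministic model (1)-(3).
# The rates a1, a2, b1 and the initial state are illustrative only.
a1, a2, b1 = 0.05, 0.002, 0.1   # vaccination, infection, recovery rates
x, y, z = 90.0, 10.0, 0.0        # X(0), Y(0), Z(0); population size N = 100
dt = 0.01
for _ in range(100_000):         # integrate up to t = 1000
    dx = (-a1 * x - a2 * x * y) * dt
    dy = (-b1 * y + a2 * x * y) * dt
    dz = (a1 * x + b1 * y) * dt
    x, y, z = x + dx, y + dy, z + dz

# dX + dY + dZ = 0 at every step, so the total population is conserved,
# and eventually everybody is vaccinated or recovered.
assert abs((x + y + z) - 100.0) < 1e-8
assert x + y < 1e-3
```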

Next, we introduce random perturbations in the form of diffusion processes:

$$\begin{aligned} \textrm{d}X(t) = -a_1 X(t) \textrm{d}t - a_2 X(t) Y(t) \textrm{d}t + m_1[X(t)] \textrm{d}t + \left\{ v_1[X(t)]\right\} ^{1/2} \textrm{d}B_1(t), \end{aligned}$$
(4)
$$\begin{aligned} \textrm{d}Y(t) = -b_1 Y(t) \textrm{d}t + a_2 X(t) Y(t) \textrm{d}t + m_2[Y(t)] \textrm{d}t + \left\{ v_2[Y(t)]\right\} ^{1/2} \textrm{d}B_2(t), \end{aligned}$$
(5)
$$\begin{aligned} \textrm{d}Z(t) ={} & a_1 X(t) \textrm{d}t + b_1 Y(t) \textrm{d}t - m_1[X(t)] \textrm{d}t - \left\{ v_1[X(t)]\right\} ^{1/2} \textrm{d}B_1(t) \\ & - m_2[Y(t)] \textrm{d}t - \left\{ v_2[Y(t)]\right\} ^{1/2} \textrm{d}B_2(t), \end{aligned}$$
(6)

where \(m_i(\cdot )\) is a real-valued function, \(v_i(\cdot )\) is a positive function, and \(\{B_i(t), t \ge 0\}\) is a standard Brownian motion, for \(i=1,2\). The two Brownian motions are assumed to be independent.

Notice that \(\textrm{d}X(t) + \textrm{d}Y(t) + \textrm{d}Z(t) = 0\) for both the deterministic and the stochastic model, so that the size of the population is a constant, which we denote by N.

Now, we define the first-passage time

$$\begin{aligned} \tau (x,y) = \inf \{t > 0: X(t) + Y(t) = 0 \mid X(0)=x, Y(0)=y\}, \end{aligned}$$
(7)

with \(x+y>0\). That is, the random variable \(\tau (x,y)\) is the first time at which all members of the population are either vaccinated or cured of the disease. We assume that vaccinated or cured individuals are immunized against the disease either permanently or for a time period of length \(T > \tau (x,y)\), where T is large enough that we can neglect the probability \(P[\tau (x,y) > T]\). Then, we only need to consider the two-dimensional process (X(t), Y(t)).

Let \(M(x,y;\alpha )\) denote the moment-generating function of \(\tau (x,y)\):

$$\begin{aligned} M(x,y;\alpha ) := E\left[ e^{-\alpha \tau (x,y)}\right] , \end{aligned}$$
(8)

where \(\alpha \) is a positive constant. The function M satisfies the following Kolmogorov partial differential equation:

$$\begin{aligned} \alpha M(x,y;\alpha ) ={} & \frac{1}{2} v_1(x) M_{xx}(x,y;\alpha ) + \frac{1}{2} v_2(y) M_{yy}(x,y;\alpha ) \\ & + \left[ -a_1 x - a_2 x y + m_1(x)\right] M_{x}(x,y;\alpha ) \\ & + \left[ -b_1 y + a_2 x y + m_2(y)\right] M_{y}(x,y;\alpha ), \end{aligned}$$
(9)

in which \(M_x = \partial M / \partial x\), etc. The boundary condition is

$$\begin{aligned} M(x,y;\alpha ) = 1 \quad \text { if }x+y = 0. \end{aligned}$$
(10)

Indeed, we can write (see, for instance, [8], p. 282) that the conditional transition density function

$$\begin{aligned} p(x,x_0,y,y_0;t,t_0) := f_{(X(t),Y(t)) \mid (X(t_0),Y(t_0))}(x,y \mid x_0,y_0) \end{aligned}$$
(11)

satisfies the Kolmogorov backward equation

$$\begin{aligned} -p_{t_0} ={} & \frac{1}{2} v_1(x) p_{xx} + \frac{1}{2} v_2(y) p_{yy} + \left[ -a_1 x - a_2 x y + m_1(x)\right] p_{x} \\ & + \left[ -b_1 y + a_2 x y + m_2(y)\right] p_{y}. \end{aligned}$$
(12)

Furthermore, because the functions \(m_i\) and \(v_i\) do not depend on t, for \(i=1,2\), the function \(p(x,x_0,y,y_0;t,t_0)\) can be written as \(p(x,x_0,y,y_0;t-t_0)\). It follows that \(p_{t_0} = -p_t\).

Next, we can also state that the probability density function \(\rho (t;x,y)\) of the random variable \(\tau (x,y)\) satisfies the same backward equation as in (12):

$$\begin{aligned} \rho _{t} ={} & \frac{1}{2} v_1(x) \rho _{xx} + \frac{1}{2} v_2(y) \rho _{yy} + \left[ -a_1 x - a_2 x y + m_1(x)\right] \rho _{x} \\ & + \left[ -b_1 y + a_2 x y + m_2(y)\right] \rho _{y}. \end{aligned}$$
(13)

Finally, Eq. (9) is obtained by multiplying both sides of the above equation by \(e^{-\alpha t}\) and integrating from 0 to \(\infty \). The boundary condition in (10) follows from the fact that \(\tau (x,y) = 0\) if \(x+y=0\).
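Written out, with \(M(x,y;\alpha ) = \int _0^{\infty } e^{-\alpha t} \rho (t;x,y)\, \textrm{d}t\), the integration by parts in t reads:

```latex
\int_0^\infty e^{-\alpha t}\,\rho_t(t;x,y)\,\mathrm{d}t
  = \Bigl[e^{-\alpha t}\,\rho(t;x,y)\Bigr]_{t=0}^{\infty}
    + \alpha \int_0^\infty e^{-\alpha t}\,\rho(t;x,y)\,\mathrm{d}t
  = \alpha\, M(x,y;\alpha),
```

where we used \(\rho (0;x,y)=0\) for \(x+y>0\) and \(\rho \rightarrow 0\) as \(t \rightarrow \infty \); the right-hand side of (13) transforms term by term into that of (9), because its coefficients do not depend on t.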

Similarly, the function \(m(x,y) := E[\tau (x,y)]\) (if it exists) satisfies

$$\begin{aligned} -1 ={} & \frac{1}{2} v_1(x) m_{xx}(x,y) + \frac{1}{2} v_2(y) m_{yy}(x,y) \\ & + \left[ -a_1 x - a_2 x y + m_1(x)\right] m_{x}(x,y) \\ & + \left[ -b_1 y + a_2 x y + m_2(y)\right] m_{y}(x,y), \end{aligned}$$
(14)

subject to the boundary condition

$$\begin{aligned} m(x,y) = 0 \quad \text { if }x+y = 0. \end{aligned}$$
(15)

To prove this result, we use the series expansion of the exponential function (assuming that all moments of \(\tau (x,y)\) exist):

$$\begin{aligned} M(x,y; \alpha ) := E\left[ e^{-\alpha \tau (x,y)}\right] = 1 - \alpha E[\tau (x,y)] + \frac{\alpha ^2}{2} E[\tau ^2(x,y)] - \cdots , \end{aligned}$$
(16)

which we substitute into (9). Equation (14) is obtained by equating the terms in \(\alpha \) on each side, and Eq. (15) is due to the fact that \(\tau (x,y) = 0\) if \(x+y=0\).
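To spell out the term-matching, let \(\mathcal {L}\) denote the differential operator on the right-hand side of (9), so that (9) reads \(\alpha M = \mathcal {L} M\). Substituting the expansion (16) and using \(\mathcal {L}\, 1 = 0\),

```latex
\alpha \left( 1-\alpha\, m(x,y)+\cdots \right)
  = \mathcal{L}\left( 1-\alpha\, m(x,y)+\cdots \right)
  = -\alpha\, \mathcal{L}\, m(x,y) + O(\alpha^{2}),
```

and equating the terms of order \(\alpha \) gives \(\mathcal {L}\, m(x,y) = -1\), which is precisely Eq. (14).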

Remark

In theory, we can obtain \(E[\tau (x,y)]\) by differentiating the moment-generating function \(M(x,y;\alpha )\) with respect to \(\alpha \) and then taking the limit as \(\alpha \) decreases to zero. However, because \(M(x,y;\alpha )\) is often expressed in terms of special functions, it may be easier to solve Eq. (14) instead.

Finally, let us define the random variable

$$\begin{aligned} \tau _{0,N}(x,y) = \inf \{t > 0: X(t) + Y(t) = 0 \text { or } N \mid X(0)=x, Y(0)=y\}. \end{aligned}$$
(17)

We assume that \(X(0) < N\). Therefore, when the expected value of \(\dot{X}(t)\) is negative (which will be the case in Sect. 3), we can state that if \(X(\tau _{0,N}) + Y(\tau _{0,N}) = N\), then it is likely that every member of the population is infected. We are interested in computing

$$\begin{aligned} p(x,y) := P[X(\tau _{0,N}) + Y(\tau _{0,N}) = 0]; \end{aligned}$$
(18)

that is, the probability that the epidemic will be over at time \(\tau _{0,N}\). The function p satisfies

$$\begin{aligned} 0 ={} & \frac{1}{2} v_1(x) p_{xx}(x,y) + \frac{1}{2} v_2(y) p_{yy}(x,y) \\ & + \left[ -a_1 x - a_2 x y + m_1(x)\right] p_{x}(x,y) \\ & + \left[ -b_1 y + a_2 x y + m_2(y)\right] p_{y}(x,y). \end{aligned}$$
(19)

The boundary conditions are

$$\begin{aligned} p(x,y) = \left\{ \begin{array}{cl} 1 &{} \quad \text { if }x+y=0, \\ 0 &{} \quad \text { if }x+y=N. \end{array} \right. \end{aligned}$$
(20)

Obtaining explicit solutions to first-passage problems for diffusion processes in two or more dimensions is usually very difficult, because one needs to solve partial differential equations subject to the appropriate boundary conditions. See [9] and the references therein. Sometimes, using symmetry, it is possible to reduce the partial differential equations to ordinary ones.

One class of two-dimensional diffusion processes for which explicit and exact solutions to first-passage problems have been obtained is that of integrated diffusion processes, that is, processes of the form \(\{(X_1(t),X_2(t)), t \ge 0\}\), where \(\{X_2(t), t \ge 0\}\) is a diffusion process and

$$\begin{aligned} X_1(t) = X_1(0) + \int _0^t X_2(s) \textrm{d}s. \end{aligned}$$
(21)

The author has considered such problems for integrated Wiener processes [13], Ornstein-Uhlenbeck processes [11] and geometric Brownian motions [12]. See also [6, 10] and [16].

In the next section, we will find exact solutions to the above partial differential equations in important particular cases. The method of similarity solutions will be used.

Explicit Solutions

Since the boundary condition in Eq. (10) depends only on the sum \(x+y\), we can look for a solution of (9), (10) of the form \(L(w;\alpha ) = M(x,y;\alpha )\), where \(w := x+y\). This technique is known as the method of similarity solutions, and w is the similarity variable.

In order for the method to work, we must be able to express both the differential equation and the boundary condition in terms of w. Equation (9) is transformed into (dropping the dependence of L on the constant \(\alpha \))

$$\begin{aligned} \alpha L(w) = \frac{1}{2} [v_1(x)+v_2(y)] L''(w) + [m_1(x) + m_2(y) - a_1 x - b_1 y] L'(w) \end{aligned}$$
(22)

and the boundary condition is \(L(0) = 1\). Moreover, because \(X(t)+Y(t) \le N\) \(\forall t \ge 0\), we can state that N is a reflecting boundary for the sum \(X(t)+Y(t)\); it follows (see [3], p. 233, or [14], p. 221) that \(L'(N) = 0\).

Proposition 2.1

If both expressions between square brackets in Eq. (22) can be expressed in terms of \(w:=x+y\), then the problem (9), (10) can be solved by assuming that \(M(x,y;\alpha ) = L(w;\alpha )\).

Remark

If the problem (9), (10) is solvable by the method of similarity solutions, then so are the problems (14), (15) and (19), (20). Moreover, we assume that the functions \(v_1(x)+v_2(y)\) and \(m_1(x) + m_2(y)\) are smooth enough to guarantee the existence and uniqueness of the solutions to the equations, which will be the case in the particular problems considered below.

Particular cases. I) The simplest particular case is the one in which \(v_1(x) + v_2(y) \equiv v_0 > 0\), \(m_1(x) = a_1 x\) and \(m_2(y) = b_1 y\). Equation (22) then reduces to

$$\begin{aligned} \alpha L(w) = \frac{1}{2} v_0 L''(w). \end{aligned}$$
(23)

The general solution of the above equation is easily found to be

$$\begin{aligned} L(w) = c_1 \exp \left( \sqrt{\frac{2 \alpha }{v_0}} w\right) + c_2 \exp \left( -\sqrt{\frac{2 \alpha }{v_0}} w\right) . \end{aligned}$$
(24)

With the conditions \(L(0)=1\) and \(L'(N)=0\), we can determine the value of the constants \(c_1\) and \(c_2\). We find that

$$\begin{aligned} L(w) = \frac{\cosh \left[ \sqrt{\frac{2 \alpha }{v_0}}\, (N-w)\right] }{\cosh \left[ \sqrt{\frac{2 \alpha }{v_0}}\, N\right] } \quad \text { for }0 \le w \le N. \end{aligned}$$
(25)

It follows that

$$\begin{aligned} M(x,y;\alpha ) = \frac{\cosh \left[ \sqrt{\frac{2 \alpha }{v_0}}\, (N-x-y)\right] }{\cosh \left[ \sqrt{\frac{2 \alpha }{v_0}}\, N\right] } \quad \text { for }0 \le x+y \le N \end{aligned}$$
(26)

and

$$\begin{aligned} m(x,y) = -\lim _{\alpha \downarrow 0} \frac{\partial }{\partial \alpha } M(x,y;\alpha ) = \frac{2 N (x+y) - (x+y)^2}{v_0}. \end{aligned}$$
(27)
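These closed-form expressions are easy to sanity-check numerically. The sketch below (illustrative values of \(v_0\), N and the starting point, not taken from the paper) verifies the boundary conditions \(L(0)=1\) and \(L'(N)=0\), the ODE (23), and that \(-\partial M/\partial \alpha \) at \(\alpha \downarrow 0\) recovers the mean first-passage time:

```python
import math

v0, N = 1.0, 100.0                       # illustrative values
k = lambda a: math.sqrt(2 * a / v0)      # sqrt(2*alpha/v0)
L = lambda w, a: math.cosh(k(a) * (N - w)) / math.cosh(k(a) * N)

a, w, h = 0.01, 30.0, 1e-4
# Boundary conditions: L(0) = 1, and L'(N) = 0 (reflecting boundary at N).
assert abs(L(0.0, a) - 1.0) < 1e-12
assert abs((L(N + h, a) - L(N - h, a)) / (2 * h)) < 1e-6
# ODE check, Eq. (23): alpha * L = (v0/2) * L''.
Lpp = (L(w + h, a) - 2 * L(w, a) + L(w - h, a)) / h**2
assert abs(a * L(w, a) - 0.5 * v0 * Lpp) < 1e-6
# E[tau] from the MGF: -dM/dalpha at alpha -> 0+ is (2*N*w - w**2)/v0 > 0.
eps = 1e-7
m_num = (1.0 - L(w, eps)) / eps          # forward difference at alpha = 0
assert abs(m_num - (2 * N * w - w**2) / v0) < 20.0
```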

Remark

The random perturbations, with the above infinitesimal parameters, are particular (generalized) Ornstein-Uhlenbeck processes. Moreover, letting \(W(t):=X(t)+Y(t)\), \(\{W(t), t \ge 0\}\) is a Wiener process with zero mean and variance parameter \(v_0\).

We can retrieve the formula for \(m(x,y)\) by solving Eq. (14), subject to the appropriate boundary conditions. First, we can assume that \(m(x,y) = n(w)\), so that Eq. (14), under the above assumptions, simplifies to

$$\begin{aligned} -1 = \frac{1}{2} v_0 n''(w). \end{aligned}$$
(28)

In addition to the boundary condition \(n(0) = 0\), we can write that \(n'(N) = 0\). Indeed, we have

$$\begin{aligned} L(w) := E\left[ e^{-\alpha \tau }\right] & = 1 - \alpha E[\tau ] + \frac{1}{2} \alpha ^2 E[\tau ^2] - \cdots \\ & = 1 - \alpha n(w) + \frac{1}{2} \alpha ^2 E[\tau ^2] - \cdots , \end{aligned}$$
(29)

so that, differentiating with respect to w and dividing by \(\alpha \),

$$\begin{aligned} \lim _{\alpha \downarrow 0} \frac{1}{\alpha }\, L'(w) = -n'(w). \end{aligned}$$
(30)

Since \(L'(N) = 0\) for any \(\alpha \), we deduce that \(n'(N) = 0\) as well. It is then straightforward to find that

$$\begin{aligned} n(w) = \frac{2 N w - w^2}{v_0} \quad \Longrightarrow \quad m(x,y) = \frac{2 N (x+y) - (x+y)^2}{v_0}. \end{aligned}$$
(31)

Next, with \(r(w) := p(x,y)\), Eq. (19) becomes

$$\begin{aligned} \frac{1}{2} v_0 r''(w) = 0 \quad \Longrightarrow \quad r(w) = d_1 w + d_2. \end{aligned}$$
(32)

The solution that satisfies the boundary conditions \(r(0)=1\) and \(r(N)=0\) is

$$\begin{aligned} r(w) = 1 - \frac{w}{N} \quad \Longrightarrow \quad p(x,y) = 1 - \frac{x+y}{N} \quad \text { for }0 \le x+y \le N. \end{aligned}$$
(33)
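In this case, \(W(t)=X(t)+Y(t)\) is a Wiener process absorbed at 0 and N, so Eq. (33) can also be checked by simulation. The Monte Carlo sketch below uses illustrative values of \(v_0\), N and the starting point; the time step and seed are arbitrary:

```python
import math, random

random.seed(42)
v0, N = 1.0, 4.0
w0 = 2.0          # start in the middle, so p(w0) = 1 - w0/N = 1/2
dt = 0.01
sigma = math.sqrt(v0 * dt)

trials, hits_zero = 2000, 0
for _ in range(trials):
    w = w0
    while 0.0 < w < N:                    # run until 0 or N is reached
        w += sigma * random.gauss(0.0, 1.0)
    if w <= 0.0:
        hits_zero += 1

p_hat = hits_zero / trials
assert abs(p_hat - (1.0 - w0 / N)) < 0.05
```

Starting at \(w_0 = N/2\) makes the discretization bias at the two boundaries cancel by symmetry.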

Case II. Assume that \(m_1(x) = a_1 x\) and \(m_2(y) = b_1 y\), as in Case I, but that \(v_1(x) = v_0 x\) and \(v_2(y) = v_0 y\). Equation (22) is then

$$\begin{aligned} \alpha L(w) = \frac{1}{2} v_0 w L''(w). \end{aligned}$$
(34)

The general solution of Eq. (34) is

$$\begin{aligned} L(w) = c_1 \sqrt{w}\, I_1\left( 2\sqrt{\frac{2 \alpha w}{v_0}}\right) + c_2 \sqrt{w}\, K_1\left( 2\sqrt{\frac{2 \alpha w}{v_0}}\right) , \end{aligned}$$
(35)

where \(I_{1}(\cdot )\) and \(K_{1}(\cdot )\) are modified Bessel functions (see [1], p. 374) of order \(\nu = 1\). Making use of the boundary conditions \(L(0)=1\) and \(L'(N)=0\), we find that

$$\begin{aligned} L(w) = \frac{\gamma \sqrt{w}}{I_0(\gamma \sqrt{N})} \left[ K_0(\gamma \sqrt{N}) I_1(\gamma \sqrt{w}) + I_0(\gamma \sqrt{N}) K_1(\gamma \sqrt{w})\right] \end{aligned}$$
(36)

for \(0 \le w \le N\), where

$$\begin{aligned} \gamma := \frac{2 \sqrt{2} \sqrt{\alpha }}{\sqrt{v_0}}. \end{aligned}$$
(37)

The formula for \(M(x,y;\alpha )\) is then obtained by replacing w by \(x+y\) in the above equation. Moreover, we compute

$$\begin{aligned} m(x,y) = -\lim _{\alpha \downarrow 0} \frac{\partial }{\partial \alpha } M(x,y;\alpha ) = \frac{2 (x+y)}{v_0} \left[ 1+\ln (N)-\ln (x+y)\right] . \end{aligned}$$
(38)

As in Case I, \(m(x,y)\) can also be found by solving the differential equation satisfied by n(w), subject to \(n(0)=n'(N)=0\).
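Carrying this out numerically (with illustrative values of \(v_0\) and N), one can confirm that \(n(w) = (2w/v_0)\left[ 1+\ln N-\ln w\right] \), the function appearing in (38), satisfies \(-1 = (v_0/2)\, w\, n''(w)\) together with \(n'(N)=0\):

```python
import math

v0, N = 2.0, 50.0                                   # illustrative values
n = lambda w: (2 * w / v0) * (1 + math.log(N) - math.log(w))

h = 1e-3
for w in (1.0, 10.0, 40.0):
    npp = (n(w + h) - 2 * n(w) + n(w - h)) / h**2
    # Eq. (14) reduces here to -1 = (v0/2) * w * n''(w).
    assert abs(0.5 * v0 * w * npp + 1.0) < 1e-5
# Reflecting boundary at N: n'(N) = 0.
nprime_N = (n(N + h) - n(N - h)) / (2 * h)
assert abs(nprime_N) < 1e-6
```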

Finally, Eq. (19) reduces to

$$\begin{aligned} \frac{1}{2} v_0 w r''(w) = 0, \end{aligned}$$
(39)

and we obtain the same solution (given in Eq. (33)) as in Case I for the function \(p(x,y)\).

Remark

This time, the random perturbations and \(\{W(t), t \ge 0\}\) are limiting cases of a Cox-Ingersoll-Ross (CIR) process.

Case III. Let \(m_1(x) \equiv 0\), \(m_2(y) \equiv 0\), \(a_1=b_1:=a\), and \(v_1(x) + v_2(y) \equiv v_0 > 0\), as in Case I. We must then solve

$$\begin{aligned} \alpha L(w) = \frac{1}{2} v_0 L''(w) - a w L'(w). \end{aligned}$$
(40)

Making use of a mathematical software program, we find that

$$\begin{aligned} L(w) ={} & -\frac{\frac{1}{2} \sqrt{\frac{a}{v_0}}\, \Gamma (\kappa )\, w}{a (\sqrt{\pi } \alpha +1) M\left( \kappa +1,\frac{3}{2},\frac{a N^2}{v_0}\right) - \alpha M\left( \kappa ,\frac{3}{2},\frac{a N^2}{v_0}\right) } \\ & \times \bigg \{ \alpha (\alpha +a) M\left( \kappa ,\frac{3}{2},\frac{a w^2}{v_0}\right) U\left( \kappa +1,\frac{3}{2},\frac{a N^2}{v_0}\right) \\ & \quad - 2 \alpha a M\left( \kappa ,\frac{3}{2},\frac{a w^2}{v_0}\right) U\left( \kappa ,\frac{3}{2},\frac{a N^2}{v_0}\right) \\ & \quad - 2 a (a+\alpha ) U\left( \kappa ,\frac{3}{2},\frac{a w^2}{v_0}\right) M\left( \kappa +1,\frac{3}{2},\frac{a N^2}{v_0}\right) \\ & \quad + 2 a \alpha \, U\left( \kappa ,\frac{3}{2},\frac{a w^2}{v_0}\right) M\left( \kappa ,\frac{3}{2},\frac{a N^2}{v_0}\right) \bigg \} \end{aligned}$$
(41)

for \(0 \le w \le N\), in which

$$\begin{aligned} \kappa := \frac{1}{2} + \frac{\alpha }{2 a} \end{aligned}$$
(42)

and \(M(\cdot ,\cdot ,\cdot )\) (Kummer’s function) and \(U(\cdot ,\cdot ,\cdot )\) (Tricomi’s function) are confluent hypergeometric functions (see [1], p. 504).

This time, it is clearly easier to obtain the function \(m(x,y)\) by solving Eq. (14) rather than by differentiating L(w) with respect to \(\alpha \) and taking the limit as \(\alpha \) decreases to zero. The unique solution to the differential equation

$$\begin{aligned} -1 = \frac{1}{2} v_0 n''(w) - a w n'(w) \end{aligned}$$
(43)

that satisfies the boundary conditions \(n(0)=0\) and \(n'(N)=0\) can be expressed as follows:

$$\begin{aligned} n(w) = -\sqrt{\frac{\pi }{a v_0}} \int _0^w e^{a u^2/v_0} \left\{ \textrm{erf}\left( \sqrt{a/v_0} u\right) - \textrm{erf}\left( \sqrt{a/v_0} N\right) \right\} \textrm{d}u \end{aligned}$$
(44)

for \(0 \le w \le N\), where “erf” is the error function:

$$\begin{aligned} \textrm{erf}(z) := \frac{2}{\sqrt{\pi }} \int _0^z e^{-t^2} \textrm{d}t. \end{aligned}$$
(45)
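The claimed solution (44) can be verified by differentiation under the integral sign: the integrand of (44) is precisely \(n'(w)\). The sketch below (illustrative parameter values, using the standard-library error function) checks Eq. (43) by finite differences, together with \(n'(N)=0\):

```python
import math

a, v0, N = 0.5, 1.0, 5.0                 # illustrative values
c = math.sqrt(a / v0)

def nprime(w):
    # Integrand of Eq. (44):
    # n'(w) = -sqrt(pi/(a*v0)) * exp(a*w^2/v0) * (erf(c*w) - erf(c*N)).
    return -math.sqrt(math.pi / (a * v0)) * math.exp(a * w * w / v0) \
           * (math.erf(c * w) - math.erf(c * N))

h = 1e-5
for w in (0.5, 2.0, 4.0):
    npp = (nprime(w + h) - nprime(w - h)) / (2 * h)
    # Eq. (43): -1 = (v0/2) * n''(w) - a * w * n'(w).
    assert abs(0.5 * v0 * npp - a * w * nprime(w) + 1.0) < 1e-4
assert abs(nprime(N)) < 1e-12            # reflecting boundary: n'(N) = 0
```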

Finally, the function \(r(w) = p(x,y)\) satisfies

$$\begin{aligned} 0 = \frac{1}{2} v_0 r''(w) - a w r'(w). \end{aligned}$$
(46)

The unique solution such that \(r(0)=1\) and \(r(N)=0\) is

$$\begin{aligned} r(w) = 1- \frac{\textrm{erfi}\left( \sqrt{a/v_0} w\right) }{\textrm{erfi}\left( \sqrt{a/v_0} N\right) } \end{aligned}$$
(47)

for \(0 \le w \le N\), where “erfi” is the imaginary error function defined by

$$\begin{aligned} \textrm{erfi}(z) = \frac{2}{\sqrt{\pi }} \int _0^z e^{t^2} \textrm{d}t. \end{aligned}$$
(48)
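As a numerical check of (47) (a sketch with illustrative parameter values; erfi is evaluated here by Simpson's rule, since it is not a standard-library function):

```python
import math

def erfi(z, steps=2000):
    # Imaginary error function: (2/sqrt(pi)) * integral of exp(t^2), 0..z,
    # computed with the composite Simpson rule (steps must be even).
    h = z / steps
    s = math.exp(0.0) + math.exp(z * z)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * math.exp((i * h) ** 2)
    return (2 / math.sqrt(math.pi)) * s * h / 3

a, v0, N = 0.5, 1.0, 3.0                 # illustrative values
c = math.sqrt(a / v0)
r = lambda w: 1 - erfi(c * w) / erfi(c * N)

# Boundary conditions r(0) = 1 and r(N) = 0.
assert abs(r(0.0) - 1.0) < 1e-9
assert abs(r(N)) < 1e-9
# Eq. (46): 0 = (v0/2) * r''(w) - a * w * r'(w).
h, w = 1e-3, 1.5
rpp = (r(w + h) - 2 * r(w) + r(w - h)) / h**2
rp = (r(w + h) - r(w - h)) / (2 * h)
assert abs(0.5 * v0 * rpp - a * w * rp) < 1e-3
```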

Remark

The random perturbations are Wiener processes with zero means and \(\{W(t), t \ge 0\}\) is an Ornstein-Uhlenbeck process.

Case IV. Lastly, let \(m_1(x) \equiv 0\), \(m_2(y) \equiv 0\), \(a_1=b_1:=a\), as in Case III, and \(v_1(x) = v_0 x\) and \(v_2(y) = v_0 y\), as in Case II. The differential equation satisfied by the function L(w) is then

$$\begin{aligned} \alpha L(w) = \frac{1}{2} v_0 w L''(w) - a w L'(w). \end{aligned}$$
(49)

We find that the solution that we are looking for is

$$\begin{aligned} L(w) & = {} \frac{2 a}{v_0} \Gamma (2 \kappa ) w U(2 \kappa , 2, 2 a w /v_0) - \frac{2 \alpha }{v_0} \Gamma (2 \kappa ) \nonumber \\{} &\quad {} \times \left[ \frac{(a + \alpha ) U(2 \kappa +1, 2, 2 a N /v_0) - a U(2 \kappa , 2, 2 a N /v_0)}{(a + \alpha ) M(2 \kappa +1, 2, 2 a N /v_0) - \alpha M(2 \kappa , 2, 2 a N /v_0)} \right] \nonumber \\{} &\quad {} \times \; w M(2 \kappa , 2, 2 a w /v_0) \end{aligned}$$
(50)

for \(0 \le w \le N\).

As in Case III, we will calculate the expected value of \(\tau (x,y)\) by solving

$$\begin{aligned} -1 = \frac{1}{2} v_0 w n''(w) - a w n'(w). \end{aligned}$$
(51)

We can express the solution for which \(n(0)=0\) and \(n'(N)=0\) as follows:

$$\begin{aligned} n(w) = \frac{2}{v_0} \int _0^w e^{2 a u/v_0} \left\{ \textrm{Ei}\left( 1,(2a/v_0) u\right) - \textrm{Ei}\left( 1,(2a/v_0) N\right) \right\} \textrm{d}u \end{aligned}$$
(52)

for \(0 \le w \le N\), where Ei\((\cdot ,\cdot )\) is a particular exponential integral, which is defined by

$$\begin{aligned} \textrm{Ei}(\lambda ,z) = \int _1^{\infty } e^{-t z} t^{-\lambda } \textrm{d}t. \end{aligned}$$
(53)
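As a check of (52), note that its integrand is \(n'(w)\). The sketch below (illustrative parameter values) evaluates \(\textrm{Ei}(1,z) = \int _z^{\infty } e^{-u} u^{-1}\, \textrm{d}u\) by quadrature and verifies Eq. (51) together with \(n'(N)=0\):

```python
import math

def E1(z, steps=20000, cutoff=40.0):
    # Ei(1, z) = integral of exp(-u)/u from z to infinity, truncated at
    # u = z + cutoff (tail < exp(-40)) and evaluated by Simpson's rule.
    h = cutoff / steps
    f = lambda u: math.exp(-u) / u
    s = f(z) + f(z + cutoff)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(z + i * h)
    return s * h / 3

a, v0, N = 0.5, 1.0, 4.0                 # illustrative values
k = 2 * a / v0

def nprime(w):
    # Integrand of Eq. (52): n'(w) = (2/v0)*exp(k*w)*(E1(k*w) - E1(k*N)).
    return (2 / v0) * math.exp(k * w) * (E1(k * w) - E1(k * N))

h = 1e-4
for w in (0.5, 2.0, 3.5):
    npp = (nprime(w + h) - nprime(w - h)) / (2 * h)
    # Eq. (51): -1 = (v0/2) * w * n''(w) - a * w * n'(w).
    assert abs(0.5 * v0 * w * npp - a * w * nprime(w) + 1.0) < 1e-3
assert abs(nprime(N)) < 1e-12            # n'(N) = 0
```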

To conclude, the function \(r(w) = p(x,y)\) is such that

$$\begin{aligned} 0 = \frac{1}{2} v_0 w r''(w) - a w r'(w), \end{aligned}$$
(54)

with \(r(0)=1\) and \(r(N)=0\). We easily find that

$$\begin{aligned} r(w) = \frac{e^{(2 a /v_0) N}-e^{(2 a /v_0) w}}{e^{(2 a /v_0) N}-1} \quad \text { for }0 \le w \le N. \end{aligned}$$
(55)
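Equation (55) can be checked directly (illustrative parameter values):

```python
import math

a, v0, N = 0.5, 1.0, 4.0                 # illustrative values
k = 2 * a / v0
r = lambda w: (math.exp(k * N) - math.exp(k * w)) / (math.exp(k * N) - 1)

# Boundary conditions r(0) = 1 and r(N) = 0.
assert abs(r(0.0) - 1.0) < 1e-12
assert abs(r(N)) < 1e-12
# Eq. (54): 0 = (v0/2) * w * r''(w) - a * w * r'(w).
h = 1e-4
for w in (1.0, 3.0):
    rpp = (r(w + h) - 2 * r(w) + r(w - h)) / h**2
    rp = (r(w + h) - r(w - h)) / (2 * h)
    assert abs(0.5 * v0 * w * rpp - a * w * rp) < 1e-3
```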

Remark

As in Case II, the random perturbations and \(\{W(t), t \ge 0\}\) are limiting cases of a Cox-Ingersoll-Ross (CIR) process.

Concluding Remarks

In this paper, we computed various explicit and exact solutions to first-passage problems for a particular stochastic three-dimensional SIR model. Because the size of the population was assumed to be constant, it was possible to reduce the first-passage problems to two-dimensional ones. Moreover, thanks to the symmetry in the model, we were then able to make use of the method of similarity solutions to solve the partial differential equations of interest, subject to the appropriate boundary conditions.

We were mainly interested in computing two quantities: the expected time needed for the epidemic to end, and the probability that every member of the population will become infected, rather than everybody becoming immunized or cured of the disease. We saw that, even though one is able to find the moment-generating function of the first-passage time \(\tau (x,y)\) explicitly, it is often much easier to obtain the expected value \(m(x,y)\) of \(\tau (x,y)\) by solving the differential equation it satisfies, instead of differentiating the moment-generating function and then taking the appropriate limit. Indeed, the moment-generating function is generally expressed in terms of special functions, so that computing the required derivative can be very difficult.

When the method of similarity solutions cannot be used, we could at least try to solve the differential equations numerically in any particular case. However, the aim of this paper was to obtain analytical solutions to the first-passage problems.

Next, we could try to compute the expected duration of an epidemic when the model is more general than the one considered in this paper. We could also discretize the deterministic model and add Gaussian random variables as perturbations.

Finally, the problem of optimally controlling the system of stochastic differential equations is obviously an important one; see [7] and [15].