Abstract
We study an ecosystem of interacting species that are influenced by random environmental fluctuations. At any point in time, we can either harvest or seed (repopulate) species. Harvesting brings an economic gain while seeding incurs a cost. The problem is to find the optimal harvesting-seeding strategy that maximizes the expected total income from harvesting minus the cost one has to pay for the seeding of various species. In Hening et al. (J Math Biol 79(2):533–570, 2019b) we considered this problem when one has absolute control of the population (infinite harvesting and seeding rates are possible). In many cases these infinite-rate approximations do not make biological sense, and one must consider what happens when one, or both, of the seeding and harvesting rates are bounded. The focus of this paper is the analysis of these three novel settings: bounded seeding and infinite harvesting, bounded seeding and bounded harvesting, and infinite seeding and bounded harvesting. Even one-dimensional harvesting problems can be hard to tackle, and once one looks at an ecosystem with more than one species, analytical results usually become intractable. In order to gain information about the qualitative behavior of the system, we develop rigorous numerical approximation methods. This is done by approximating the continuous-time dynamics by Markov chains and then showing that the approximations converge to the correct optimal strategy as the mesh size goes to zero. By implementing these numerical approximations, we are able to gain qualitative information about how to best harvest and seed species in specific key examples. We are able to show through numerical experiments that in the single-species setting the optimal seeding-harvesting strategy is always of threshold type.
This means there are thresholds \(0<L_1<L_2<\infty \) such that: (1) if the population size is ‘low’, so that it lies in \((0, L_1]\), there is seeding at the maximal seeding rate; (2) if the population size is ‘moderate’, so that it lies in \((L_1,L_2)\), there is no harvesting or seeding; (3) if the population size is ‘high’, so that it lies in the interval \([L_2, \infty )\), there is harvesting at the maximal harvesting rate. Once we have a system with at least two species, numerical experiments show that constant threshold strategies are no longer optimal. Suppose there are two competing species and we are only allowed to harvest or seed species 1. The optimal seeding and harvesting strategy then involves lower and upper thresholds \(L_1(x_2)<L_2(x_2)\) which depend on the density \(x_2\) of species 2.
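The single-species threshold policy described above can be illustrated with a small Euler–Maruyama simulation. The sketch below is purely illustrative: the logistic drift \(x(1-x)\), the noise intensity, the thresholds, the maximal rates, the discount rate, and the unit harvesting/seeding prices are all hypothetical choices, not parameters from the paper.

```python
import numpy as np

def simulate_threshold_policy(x0=0.5, L1=0.3, L2=1.2, seed_rate=0.4,
                              harvest_rate=0.6, T=50.0, dt=1e-3, rng_seed=1):
    """Euler-Maruyama simulation of dX = X(1 - X) dt + 0.2 X dW under a
    threshold policy: seed at the maximal rate when X <= L1, do nothing
    on (L1, L2), and harvest at the maximal rate when X >= L2.
    Returns the final state and the discounted harvest income minus
    seeding cost (unit prices assumed)."""
    rng = np.random.default_rng(rng_seed)
    n = int(T / dt)
    delta = 0.05                         # hypothetical discount rate
    x, income = x0, 0.0
    for k in range(n):
        control = 0.0
        if x <= L1:
            control = seed_rate          # seed at the maximal rate
            income -= np.exp(-delta * k * dt) * seed_rate * dt
        elif x >= L2:
            control = -harvest_rate      # harvest at the maximal rate
            income += np.exp(-delta * k * dt) * harvest_rate * dt
        noise = 0.2 * x * np.sqrt(dt) * rng.standard_normal()
        x = max(x + (x * (1.0 - x) + control) * dt + noise, 0.0)
    return x, income
```

With these illustrative parameters the process is pushed up from low densities and trimmed above \(L_2\), so its path stays in a band around the carrying capacity.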
References
Albon S, Clutton-Brock T, Guinness F (1987) Early development and population dynamics in red deer. II. Density-independent effects and cohort variation. J Anim Ecol 56:69–81
Alvarez LHR (2000) Singular stochastic control in the presence of a state-dependent yield structure. Stoch Process Appl 86:323–343
Alvarez LHR, Shepp LA (1998) Optimal harvesting of stochastically fluctuating populations. J Math Biol 37(2):155–177
Alvarez LH, Hening A (2019) Optimal sustainable harvesting of populations in random environments. Stoch Process Appl. https://www.sciencedirect.com/science/article/abs/pii/S030441491830348X
Bass RF (1998) Diffusions and elliptic operators. Springer, New York
Beddington J, May R (1980) Maximum sustainable yields in systems subject to harvesting at more than one trophic level. Math Biosci 51(3–4):261–281
Benaïm M (2018) Stochastic persistence. arXiv preprint arXiv:1806.08450
Benaïm M, Schreiber SJ (2019) Persistence and extinction for stochastic ecological difference equations with feedbacks. J Math Biol 79(1):393–431
Billingsley P (1968) Convergence of probability measures. Wiley, New York
Bouchard B, Touzi N (2011) Weak dynamic programming principle for viscosity solutions. SIAM J Control Optim 49(3):948–962
Braumann CA (2002) Variable effort harvesting models in random environments: generalization to density-dependent noise intensities. Math Biosci 177/178:229–245
Budhiraja A, Ross K (2007) Convergent numerical scheme for singular stochastic control with state constraints in a portfolio selection problem. SIAM J Control Optim 45(6):2169–2206
Chesson PL (1982) The stabilizing effect of a random environment. J Math Biol 15(1):1–36
Chesson P (1994) Multispecies competition in variable environments. Theor Popul Biol 45(3):227–276
Chesson PL, Warner RR (1981) Environmental variability promotes coexistence in lottery competitive systems. Am Nat 117(6):923–943
Chesson P, Huntly N (1997) The roles of harsh and fluctuating conditions in the dynamics of ecological communities. Am Nat 150(5):519–553
Chesson P, Hening A, Nguyen D (2019) A general theory of coexistence and extinction for stochastic ecological communities. Preprint
Du NH, Nguyen NH, Yin G (2016) Conditions for permanence and ergodicity of certain stochastic predator–prey models. J Appl Probab 53(1):187–202
Evans SN, Ralph PL, Schreiber SJ, Sen A (2013) Stochastic population growth in spatially heterogeneous environments. J Math Biol 66(3):423–476
Evans SN, Hening A, Schreiber SJ (2015) Protected polymorphisms and evolutionary stability of patch-selection strategies in stochastic environments. J Math Biol 71(2):325–359
Freidlin MI (2016) Functional integration and partial differential equations, vol 109. Princeton University Press, Princeton
Gard TC (1984) Persistence in stochastic food web models. Bull Math Biol 46(3):357–370
Gard TC (1988) Introduction to stochastic differential equations. Dekker, New York
Hening A, Nguyen D (2018) Coexistence and extinction for stochastic Kolmogorov systems. Ann Appl Probab 28(3):1893–1942
Hening A, Nguyen DH, Ungureanu SC, Wong TK (2019a) Asymptotic harvesting of populations in random environments. J Math Biol 78(1–2):293–329
Hening A, Tran K, Phan T, Yin G (2019b) Harvesting of interacting stochastic populations. J Math Biol 79(2):533–570
Hofbauer J (1981) A general cooperation theorem for hypercycles. Monatshefte für Mathematik 91(3):233–240
Hofbauer J, Sigmund K (1998) Evolutionary games and population dynamics. Cambridge University Press, Cambridge
Hofbauer J, So JW-H (1989) Uniform persistence and repellors for maps. Proc Am Math Soc 107(4):1137–1142
Hutson V (1984) A theorem on average Liapunov functions. Monatshefte für Mathematik 98(4):267–275
Jin Z, Yang H, Yin G (2013) Numerical methods for optimal dividend payment and investment strategies of regime-switching jump diffusion models with capital injections. Automatica 49(8):2317–2329
Kesten H, Ogura Y (1981) Recurrence properties of Lotka–Volterra models with random fluctuations. J Math Soc Jpn 33(2):335–366
Krylov NV (2008) Controlled diffusion processes, vol 14. Springer, New York
Kushner HJ (1984) Approximation and weak convergence methods for random processes, with applications to stochastic systems theory. MIT Press, Cambridge
Kushner HJ (1990) Numerical methods for stochastic control problems in continuous time. SIAM J Control Optim 28(5):999–1048
Kushner HJ, Dupuis PG (1992) Numerical methods for stochastic control problems in continuous time. Springer, New York
Kushner HJ, Martins LF (1991) Numerical methods for stochastic singular control problems. SIAM J Control Optim 29(6):1443–1475
Lande R, Engen S, Sæther B-E (1995) Optimal harvesting of fluctuating populations with a risk of extinction. Am Nat 145(5):728–745
Lande R, Sæther B-E, Engen S (1997) Threshold harvesting for sustainability of fluctuating resources. Ecology 78(5):1341–1350
Lande R, Engen S, Sæther BE (2003) Stochastic population dynamics in ecology and conservation. Oxford University Press, Oxford
Li X, Mao X (2009) Population dynamical behavior of non-autonomous Lotka–Volterra competitive system with random perturbation. Discret Contin Dyn Syst Ser A 24(2):523–545
Lions P-L, Sznitman A-S (1984) Stochastic differential equations with reflecting boundary conditions. Commun Pure Appl Math 37(4):511–537
Lungu EM, Øksendal B (1997) Optimal harvesting from a population in a stochastic crowded environment. Math Biosci 145(1):47–75
Lungu EM, Øksendal B (2001) Optimal harvesting from interacting populations in a stochastic environment. Bernoulli 7(3):527–539
Mao X, Yuan C (2006) Stochastic differential equations with Markovian switching. Imperial College Press, London
May RM, Beddington J, Horwood J, Shepherd J (1978) Exploiting natural populations in an uncertain world. Math Biosci 42(3–4):219–252
May RM, Beddington JR, Clark CW, Holt SJ, Laws RM (1979) Management of multispecies fisheries. Science 205(4403):267–277
Schreiber SJ, Benaïm M, Atchadé KAS (2011) Persistence in fluctuating environments. J Math Biol 62(5):655–683
Smith HL, Thieme HR (2011) Dynamical systems and population persistence, vol 118. American Mathematical Society, Providence
Song Q, Stockbridge RH, Zhu C (2011) On optimal harvesting problems in random environments. SIAM J Control Optim 49(2):859–889
Tran K, Yin G (2017) Optimal harvesting strategies for stochastic ecosystems. IET Control Theory Appl 11(15):2521–2530
Turelli M (1977) Random environments and stochastic calculus. Theoret Popul Biol 12(2):140–178
Turelli M, Gillespie JH (1980) Conditions for the existence of stationary densities for some two-dimensional diffusion processes with applications in population biology. Theoret Popul Biol 17(2):167–189
Acknowledgements
Alexandru Hening has been supported by the NSF through the grant DMS-1853463. We thank two anonymous referees for feedback which led to the improvement of the paper.
Appendices
Appendix A: Properties of the value function
Proposition A.1
Assume we are in the setting of bounded seeding and unbounded harvesting rates. Suppose that there exists a number \(U>0\) such that
Then there exists \(x^*\in [0, U]^d\) such that
Moreover,
Proof
Fix some \(x \in \overline{S}\setminus [0, U]^d\) and \((Y, C)\in {\mathcal {A}}_{x}\), and let X denote the corresponding harvested process. Let \(x_i^*= \min \{x_i, U\}\) for \(i=1, \ldots , d\) and \(x^*=(x_1^*, \ldots , x_d^*)'\).
Let \(\varepsilon \in (0, 1)\) be a constant and define
We can extend \(\Phi _\varepsilon (\cdot )\) to the entire \(\overline{S}\) so that \(\Phi _\varepsilon (\cdot )\) is twice continuously differentiable, \(\Phi _\varepsilon (y)\ge 0\) and \(f\le \nabla \Phi _\varepsilon (y)\) for all \(y\in \overline{S}\). By assumption, we can check that
Choose N sufficiently large so that \(|x|< N\). For
we have \(T_N \rightarrow \gamma _0\) with probability one as \(N \rightarrow \infty \). By Dynkin’s formula,
where \(Y^c(\cdot )\) is the continuous part of \(Y(\cdot )\). Let \(\Delta Y(s)= Y(s)-Y(s-)\). Since \(\nabla \Phi _\varepsilon (X(s))=f\) and \(\Phi _\varepsilon \left( X(s)\right) -\Phi _\varepsilon \left( X(s-)\right) =-f\cdot \Delta Y(s)\), we obtain
Since \(\Phi _\varepsilon (y)\ge 0\) and \(f<g(y)\) for any \(y\in \overline{S}\), it follows from (A.2) that
Letting \(N\rightarrow \infty \), by the bounded convergence theorem, we obtain
As a result
The above implies
Letting \(\varepsilon \rightarrow 0\) in (A.3)
where \(\Phi _0(\cdot )\) is also defined by (A.1) at \(\varepsilon =0\). Note that if \(\mathbb {P}(\gamma _0=0)<1\), then (A.3) is a strict inequality. On the other hand, it is obvious (by harvesting instantaneously \(x-x^*\) at time \(t=0\)) that
In view of (A.4) and (A.5), if \(x \in \overline{S} \setminus [0, U]^d\), \(V(x)=V(x^*) + f\cdot (x-x^*)\). Moreover, it is optimal to instantaneously harvest an amount of \(x-x^*\) to drive the population to the state \(x^*\) on the boundary of \([0, U]^d\), and then apply an optimal or near-optimal harvesting-seeding policy in \({\mathcal {A}}_{x^*}\). Therefore, if the initial population \(x\in [0, U]^d\), it is optimal to apply a harvesting-seeding policy so that the population process stays in \([0, U]^d\) forever. This completes the proof. \(\square \)
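The instantaneous harvest \(x-x^*\) used at the end of the proof is a coordinatewise projection onto \([0, U]^d\). A minimal sketch of that projection follows; the function name and the numerical values are hypothetical, chosen only to illustrate \(x_i^*=\min \{x_i, U\}\).

```python
import numpy as np

def instant_harvest(x, U):
    """Project an initial state x outside [0, U]^d onto [0, U]^d by
    instantaneously harvesting the excess x - x* in each coordinate,
    where x_i* = min(x_i, U)."""
    x = np.asarray(x, dtype=float)
    x_star = np.minimum(x, U)     # projected state x*
    harvest = x - x_star          # amount harvested at time t = 0
    return x_star, harvest
```

For example, starting at \((2.0, 0.5)\) with \(U=1\), species 1 is harvested by 1.0 and species 2 is untouched, consistent with \(V(x)=V(x^*)+f\cdot (x-x^*)\).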
Proposition A.2
Suppose we are in the setting of bounded seeding and harvesting rates, and that Assumption 2.1 is satisfied.
(a) The value function V is finite and continuous on \({\overline{S}}\).

(b) The value function V is a viscosity subsolution of (2.19); that is, for any \(x^0\in S\) and any function \(\phi \in C^2(S)\) satisfying
$$\begin{aligned} (V-\phi )(x)\ge (V-\phi )(x^0)=0 \end{aligned}$$
for all x in a neighborhood of \(x^0\), we have
$$\begin{aligned} ({\mathcal {L}}-\delta ) \phi (x^0) + \max \limits _{\xi \in [-\lambda , \mu ]}\Big [\xi ^-\cdot \big (f-\nabla \phi \big )\left( x^0\right) - \xi ^+ \cdot \big (g-\nabla \phi \big )\left( x^0\right) \Big ]\le 0. \end{aligned}$$(A.6)

(c) The value function V is a viscosity supersolution of (2.19); that is, for any \(x^0\in S\) and any function \(\varphi \in C^2(S)\) satisfying
$$\begin{aligned} (V-\varphi )(x)\le (V-\varphi )(x^0)=0 \end{aligned}$$(A.7)
for all x in a neighborhood of \(x^0\), we have
$$\begin{aligned} ({\mathcal {L}}-\delta ) \varphi (x^0) + \max \limits _{\xi \in [-\lambda , \mu ]}\Big [\xi ^-\cdot \big (f-\nabla \varphi \big )\left( x^0\right) - \xi ^+ \cdot \big (g-\nabla \varphi \big )\left( x^0\right) \Big ]\ge 0. \end{aligned}$$(A.8)

(d) The value function V is a viscosity solution of (2.19).
In the proof, we use the following notation and definitions. For a point \(x^0\in S\) and a strategy \(Q\in {\mathcal {A}}_{x^0}\), let X be the corresponding process with harvesting and seeding. Let \(B_\varepsilon (x^0)=\{x\in S: |x-x^0|<\varepsilon \}\), where \(\varepsilon >0\) is sufficiently small so that \(\overline{B_\varepsilon (x^0)}\subset S\). Let \(\theta =\inf \{t\ge 0: {X}(t)\notin B_\varepsilon (x^0) \}\). For a constant \(r>0\), we define \(\theta _r=\theta \wedge r\).
Proof
(a) Since the functions \(f(\cdot )\), \(g(\cdot )\) and the rates \(C(\cdot )\), \(R(\cdot )\) are bounded, the value function is also bounded. The conclusion then follows by (Krylov 2008, Chapter 3, Theorem 5).
(b) For \(x^0\in S\), consider a \(C^2\) function \(\phi (\cdot )\) satisfying \(\phi (x^0)=V(x^0)\) and \(\phi (x)\le V(x)\) for all x in a neighborhood of \(x^0\). Let \(\varepsilon >0\) be sufficiently small so that \(\overline{B_\varepsilon (x^0)}\subset S\) and \(\phi (x)\le V(x)\) for all \(x\in \overline{B_\varepsilon (x^0)}\), where \(\overline{B_\varepsilon (x^0)}=\{x\in S: |x-x^0|\le \varepsilon \}\) is the closure of \(B_\varepsilon (x^0)\).
Let \(\xi \in [-\lambda , \mu ]\) and define \(Q\in {\mathcal {A}}_{x^0}\) to satisfy \(Q(t)=\xi \) for all \(t\in [0, r]\) for a positive constant r. We denote by X the corresponding harvested process with initial condition \(x^0\). Then \({X}(t)\in \overline{B_\varepsilon (x^0)}\) for all \(0\le t\le \theta \). By virtue of the dynamic programming principle, we have
By the Dynkin formula, we obtain
A combination of (A.9) and (A.10) leads to
which in turn implies
By the continuity of \(X(\cdot )\) and the definition of \(Q(\cdot )\), we obtain
This completes the proof of (b).
(c) Let \(x^0\in S\) and suppose \(\varphi (\cdot )\in C^2(S)\) satisfies (A.7) for all x in a neighborhood of \(x^0\). We argue by contradiction. Suppose that (A.8) does not hold. Then there exists a constant \(A>0\) such that
Let \(\varepsilon >0\) be small enough so that \(\overline{B_\varepsilon (x^0)}\subset S\) and for any \(x\in \overline{B_\varepsilon (x^0 )}\), \(\varphi (x)\ge V(x)\) and
Let \(Q\in {\mathcal {A}}_{x^0}\) and \({X}(\cdot )\) be the corresponding process. Recall that \(\theta =\inf \{t\ge 0: {X}(t)\notin B_\varepsilon (x^0) \}\) and \(\theta _r=\theta \wedge r\) for any \(r>0\). It follows from the Dynkin formula that
Equations (A.13) and (A.14) show that
Therefore
Letting \(r\rightarrow \infty \), we have
Set \(\kappa _0 = A \mathbb {E}\int _0^{\theta } e^{-\delta s} ds>0\). Taking the supremum over \(Q\in {\mathcal {A}}_{x^0}\) we arrive at
In view of the dynamic programming principle, the preceding inequality can be rewritten as \(V(x^0)\ge V(x^0)+\kappa _0>V(x^0)\), which is a contradiction. This implies that (A.8) has to hold, and the conclusion follows.
Part (d) follows from (b) and (c). \(\square \)
Appendix B: Numerical algorithm
We will present the detailed convergence analysis for Theorem 2.6, which is closely based on the Markov chain approximation method developed by Kushner and Dupuis (1992) and Kushner and Martins (1991). Theorems 2.8 and 2.10 can be derived using similar techniques, and we therefore omit the details.
B.1 Transition probabilities for bounded seeding and unbounded harvesting rates
For simplicity, we make use of one more assumption below, which will be used to ensure that the transition probabilities \(p^h(x, y|u)\) are well defined. This assumption is not essential: there are several alternatives for handling the cases when Assumption B.1 fails, and we refer the reader to (Kushner 1990, page 1013) for a detailed discussion. For any \(x\in \overline{S}\), define the covariance matrix \(a(x)= \sigma (x)\sigma '(x)\).
Assumption B.1
For any \(i=1, \ldots , d\) and \(x\in \overline{S}\),
We define the difference \(\Delta X_n^h = X_{n+1}^h-X_{n}^h.\) Denote by \(\Delta Y^h_n\) the harvesting amount for the chain at step n. If \(\pi ^h_n=i\), we let \(\Delta Y^h_n=h\mathbf{e_i}\) and then \(\Delta X^h_n=-h\mathbf{e_i}\). If \(\pi ^h_n=0\), we set \(\Delta Y^h_n=0\). Define
For definiteness, if \(X^{h}_{n, i}\) is the ith component of the vector \(X^h_n\) and \(\{j: X^{h}_{n, j}=U\}\) is non-empty, then step n is a harvesting step on species \(\min \{j: X_{n, j}^{h}=U\}\). Recall that \(u^h_n= (\pi ^h_n, C^h_n)\) for \(n\in {\mathbb {Z}}_{\ge 0}\) and \(u^h=\{u^h_n\}_n\equiv \{Y^h_n, C^h_n\}_n\) is a sequence of controls. It should be noted that \(\pi ^h_n = 0\) includes the case when we seed nothing; that is, \(C^h_n = 0\). Denote by \({\mathcal {F}}^h_n=\sigma \{X^h_m,u^h_m, m\le n\}\) the \(\sigma \)-algebra containing the information from the processes \(X^h_m\) and \(u^h_m\) between the times 0 and n.
The sequence \(u^h= (\pi ^h, C^h)\equiv \{Y^h_n, C^h_n\}_n\) is said to be admissible if it satisfies the following conditions:
(a) \(u^h_n\) is \(\sigma \{X^h_0, \ldots , X^h_{n},u^h_0, \ldots , u^h_{n-1}\}-\text {adapted}\);

(b) for any \(x\in S_h\), we have
$$\begin{aligned} \mathbb {P}\{ X^h_{n+1} = x | {\mathcal {F}}^h_n\}= \mathbb {P}\{ X^h_{n+1} = x | X^h_n, u^h_n\} = p^h( X^h_n, x| u^h_n); \end{aligned}$$

(c) denoting by \(X^{h}_{n, i}\) the ith component of the vector \(X^h_n\),
$$\begin{aligned} \mathbb {P}\big ( \pi ^h_{n}=\min \{j: X^{h}_{n, j} = U\} | X^{h}_{n, j} = U \text { for some } j\in \{1, \ldots , d \}, {\mathcal {F}}^h_n\big )=1; \end{aligned}$$

(d) \(X^h_n\in S_h\) for all \(n\in {\mathbb {Z}}_{\ge 0}\).
The class of all admissible control sequences \(u^h\) having the initial state x will be denoted by \({\mathcal {A}}^h_{x}\).
For each \((x, u)\in S_h\times {\mathcal {U}}\), we define a family of interpolation intervals \(\Delta t^h (x, u)\). The values of \(\Delta t^h (x, u)\) will be specified later. Then we define
Let \(\mathbb {E}^{h, u}_{x, n}\), \({\mathbb {Cov}}^{h, u}_{x, n}\) denote the conditional expectation and covariance given by
respectively. Our objective is to define transition probabilities \(p^h (x, y | u)\) so that the controlled Markov chain \(\{X^h_n\}\) is locally consistent with respect to the controlled diffusion (2.7) in the sense that the following conditions hold at seeding steps, i.e., for \(u=(0, c)\)
Following the procedure of Kushner (1990), for \((x, u)\in S_h\times {\mathcal {U}}\) with \(u=(0, c)\), define
Set \(p^h \left( x, y|u=(0, c)\right) =0\) for all unlisted values of \(y\in S^h\). Assumption B.1 guarantees that the transition probabilities in (B.3) are well-defined. At the harvesting steps, we define
Thus, \(p^h \left( x, y|u=(i, c)\right) =0\) for all unlisted values of \(y\in S^h\). Using the above transition probabilities, one can check that the local consistency conditions for \(\{X^h_n\}\) in (B.2) are satisfied.
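The construction behind (B.3) can be made concrete in one dimension. The following sketch builds the usual upwind, locally consistent transition probabilities at a seeding step \(u=(0,c)\) in the style of Kushner (1990); the function name, the reduction to \(d=1\), and the form of the normalizer are our assumptions for illustration, not the paper's exact formulas. (At a harvesting step the chain would move deterministically to \(x-h\) with zero interpolation time.)

```python
def transition_probs_1d(x, b, sigma, c, h):
    """Locally consistent transition probabilities at a seeding step
    u = (0, c) for dX = (b(X) + c) dt + sigma(X) dW on a grid of
    spacing h (standard upwind construction; a sketch, not the
    paper's exact (B.3))."""
    drift = b(x) + c
    a = sigma(x) ** 2                       # scalar covariance a(x)
    Qh = a + h * abs(drift)                 # normalizing factor
    p_up = (a / 2.0 + h * max(drift, 0.0)) / Qh    # p^h(x, x + h | u)
    p_down = (a / 2.0 + h * max(-drift, 0.0)) / Qh  # p^h(x, x - h | u)
    dt = h * h / Qh                         # interpolation interval
    return p_up, p_down, dt
```

One checks directly that the conditional mean increment is \(h(p_{+}-p_{-}) = (b(x)+c)\,\Delta t^h\) and the conditional second moment is \(a(x)\,\Delta t^h + o(\Delta t^h)\), which is exactly the local consistency required in (B.2).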
B.2 Continuous-time interpolation and time rescaling
The convergence result is based on a continuous-time interpolation of the chain, which will be constructed to be piecewise constant on the time interval \([t^h_n, t^h_{n+1}), n\ge 0\). We define \(n^h(t)=\max \{n: t^h_n\le t\}, t\ge 0\). We first define discrete time processes associated with the controlled Markov chain as follows. Let \(B^h_0=M^h_0=0\) and define for \(n\ge 1\),
The piecewise constant interpolation processes, denoted by \((X^h(\cdot ), Y^h(\cdot ), B^h(\cdot ), M^h(\cdot ), C^h(\cdot ))\) are naturally defined as
Define \({\mathcal {F}}^h(t)=\sigma \{X^h(s), Y^h(s), C^h(s): s\le t\}\). At each step n, we can write
Thus, we obtain
This implies
Recall that \(\Delta t^h_m = h^2/Q_h(X^h_m, u^h_m)\) if \(\pi ^h_m=0\) and \(\Delta t^h_m = 0\) if \(\pi ^h_m\ge 1\). It follows that
with \(\{\varepsilon _1^h(\cdot )\}\) being an \({\mathcal {F}}^h(t)\)-adapted process satisfying
We now attempt to represent \(M^h(\cdot )\) in a form similar to the diffusion term in (2.7). Factor
where \(P(\cdot )\) is an orthogonal matrix and \(D(\cdot )=\mathrm{diag}\{r_1(\cdot ), \ldots , r_d (\cdot )\}\). Without loss of generality, we suppose that \(\inf \limits _{x}r_i(x)>0\) for all \(i=1, \ldots , d\). Define \(D_0(\cdot )=\mathrm{diag}\{1/r_1(\cdot ), \ldots , 1/r_d (\cdot )\}\).
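Numerically, this factorization can be obtained from a symmetric eigendecomposition; below is a sketch assuming the factorization \(a=PDP'\) with \(D\) the diagonal matrix of eigenvalues \(r_i\) (the function name is ours, and nondegeneracy of \(a\) is assumed, as in the text).

```python
import numpy as np

def factor_covariance(a):
    """Orthogonal diagonalization a = P D P' of a symmetric positive
    definite covariance matrix, with D = diag(r_1, ..., r_d), together
    with D_0 = diag(1/r_1, ..., 1/r_d) used to normalize the noise."""
    r, P = np.linalg.eigh(a)        # eigenvalues r_i, orthogonal P
    D = np.diag(r)
    D0 = np.diag(1.0 / r)           # requires inf_x r_i(x) > 0
    return P, D, D0
```

The matrix \(D_0 P'\) is then applied to the martingale increments to produce a process whose limit is a standard Brownian motion.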
Remark B.2
In the argument above, for simplicity, we assume that the diffusion matrix a(x) is nondegenerate. If this is not the case, we can use the trick from (Kushner and Dupuis 1992, pp. 288–289) to establish equation (B.12).
Define \(W^h(\cdot )\) by
Then we can write
with \(\{\varepsilon _2^h(\cdot )\}\) being an \({\mathcal {F}}^h(t)\)-adapted process satisfying
Using (B.10) and (B.12), we can write (B.9) as
where \(\varepsilon ^h(\cdot )\) is an \({\mathcal {F}}^h(t)\)-adapted process satisfying
The objective function from (2.12) can be rewritten as
Time rescaling. Next we introduce a “stretched-out” time scale, similar to the approach previously used by Kushner and Martins (1991) and Budhiraja and Ross (2007) for singular control problems. Using the new time scale, we can overcome the possible lack of tightness of the family of processes \(\{Y^h(\cdot )\}\).
Define the rescaled time increments \(\{\Delta \widehat{t}_n^h: n\in {\mathbb {Z}}_{\ge 0}\}\) by
Definition B.3
The rescaled time process \(\widehat{T}^h(\cdot )\) is the unique continuous nondecreasing process satisfying the following:
(a) \(\widehat{T}^h(0)=0\);

(b) the derivative of \(\widehat{T}^h(\cdot )\) is 1 on \((\widehat{t}^h_n, \widehat{t}^h_{n+1})\) if \(\pi ^h_n=0\), i.e., n is a seeding step;

(c) the derivative of \(\widehat{T}^h(\cdot )\) is 0 on \((\widehat{t}^h_n, \widehat{t}^h_{n+1})\) if \(\pi ^h_n\ge 1\), i.e., n is a harvesting step.
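Definition B.3 can be assembled concretely from the step types: the rescaled clock advances at unit speed on seeding intervals and is frozen on harvesting intervals. In the sketch below, the rescaled increments \(\Delta \widehat{t}^h_n\) and the step types are treated as given inputs; the values used in the example are hypothetical.

```python
import numpy as np

def rescaled_clock(dt_hat, is_seeding):
    """Piecewise-linear clock T^hat with T^hat(0) = 0 whose slope on the
    n-th rescaled interval (of length dt_hat[n]) is 1 at a seeding step
    and 0 at a harvesting step, as in Definition B.3.  Returns the knot
    times t^hat_n and the clock values T^hat(t^hat_n)."""
    dt_hat = np.asarray(dt_hat, dtype=float)
    slopes = np.where(np.asarray(is_seeding), 1.0, 0.0)
    t_hat = np.concatenate([[0.0], np.cumsum(dt_hat)])
    T_hat = np.concatenate([[0.0], np.cumsum(slopes * dt_hat)])
    return t_hat, T_hat
```

For instance, with increments \((0.5, 0.2, 0.3)\) and step types (seed, harvest, seed), the clock is flat on the middle interval, so the harvesting jump is spread over an interval of the stretched-out time axis.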
Define the rescaled and interpolated process \(\widehat{X}^h(t)= X^h(\widehat{T}^h(t))\) and likewise define \(\widehat{Y}^h(\cdot )\), \(\widehat{C}^h(\cdot )\), \(\widehat{B}^h(\cdot )\), \(\widehat{M}^h(\cdot )\), and the filtration \(\widehat{{\mathcal {F}}}^h(\cdot )\) similarly. It follows from (B.9) that
Using the same argument as for (B.13), we obtain
where \(\widehat{\varepsilon }^h(\cdot )\) is an \(\widehat{{\mathcal {F}}}^h(\cdot )\)-adapted process satisfying
Define
B.3 Convergence
Using weak convergence methods, we can obtain the convergence of the algorithms. Let \(D[0, \infty )\) denote the space of functions that are right continuous and have left-hand limits, endowed with the Skorokhod topology. All the weak analysis will be on this space or its k-fold products \(D^k[0, \infty )\) for appropriate k.
Theorem B.4
Suppose Assumptions 2.1 and B.1 hold. Let the chain \(\{X^h_n \}\) be constructed with the transition probabilities defined in (B.3)–(B.4), and let \(X^h(\cdot )\), \(W^h(\cdot )\), \(Y^h(\cdot )\), and \(A^h(\cdot )\) be the continuous-time interpolations defined in (B.5)–(B.6), (B.11), and (B.19). Let \(\widehat{X}^h(\cdot )\), \(\widehat{W}^h(\cdot )\), \(\widehat{Y}^h(\cdot )\), \(\widehat{A}^h(\cdot )\) be the corresponding rescaled processes, let \(\widehat{T}^h(\cdot )\) be the process from Definition B.3, and denote
Then the family of processes \((\widehat{H}^h)_{h>0}\) is tight. As a result, \((\widehat{H}^h)_{h>0}\) has a weakly convergent subsequence with limit
Proof
We use the tightness criterion of (Kushner 1984, p. 47). Specifically, a sufficient condition for the tightness of a sequence of processes \(\zeta ^h(\cdot )\) with paths in \(D^k[0, \infty )\) is that for any constants \(T_0, \rho \in (0, \infty )\),
The proof of the tightness of \(\widehat{W}^h(\cdot )\) is standard; see for example Kushner and Martins (1991) and Jin et al. (2013). We show the tightness of \(\widehat{Y}^h(\cdot )\) to demonstrate the role of the time rescaling. Following the definition of the “stretched-out” time scale, for any constants \(T_0, \rho \in (0, \infty )\), \(s\in [0, \rho ]\) and \(t\le T_0\),
Thus \(\{\widehat{Y}^h(\cdot )\}\) is tight. The tightness of \(\{\widehat{T}^h(\cdot )\}\) follows from the fact that
Since \(|\widehat{A}^h(t+s)-\widehat{A}^h(t)|\le |\widehat{T}^h(t+s)-\widehat{T}^h(t)|\sum _{i=1}^d \lambda _i\), it follows that \(\{\widehat{A}^h(\cdot )\}\) is tight. The tightness of \(\{\widehat{X}^h(\cdot )\}\) follows from (B.16) and (B.20). Hence \(\{\widehat{X}^h(\cdot ), \widehat{W}^h(\cdot ), \widehat{Y}^h(\cdot ), \widehat{A}^h(\cdot ), \widehat{T}^h(\cdot )\}\) is tight. By virtue of Prohorov's theorem, \(\widehat{H}^h(\cdot )\) has a weakly convergent subsequence with limit \(\widehat{H}(\cdot )\). This completes the proof.
\(\square \)
We proceed to characterize the limit process.
Theorem B.5
Under conditions of Theorem B.4, let \(\widehat{{\mathcal {F}}}(t)\) be the \(\sigma \)-algebra generated by
Then the following assertions hold.
(a) \(\widehat{X}(\cdot )\), \(\widehat{W}(\cdot )\), \(\widehat{Y}(\cdot )\), \(\widehat{A}(\cdot )\), and \(\widehat{T}(\cdot )\) have continuous paths with probability one, and \(\widehat{Y}(\cdot )\) and \(\widehat{T}(\cdot )\) are nondecreasing and nonnegative. Moreover, \(\widehat{T}(\cdot )\) is Lipschitz continuous with Lipschitz constant 1.

(b) There exists an \(\{\widehat{{\mathcal {F}}}(\cdot )\}\)-adapted process \(\widehat{C}(\cdot )\) with \(\widehat{C}(t)\in [0, \lambda ]\) for any \(t\ge 0\), such that \(\widehat{A}(t)=\int _0^t \widehat{C}(s)d\widehat{T}(s)\) for any \(t\ge 0\).

(c) \(\widehat{W}(t)\) is an \(\widehat{{\mathcal {F}}}(t)\)-martingale with quadratic variation process \(\widehat{T}(t)I_d\), where \(I_d\) is the \(d\times d\) identity matrix.

(d) The limit processes satisfy
$$\begin{aligned} \widehat{X}(t) = x + \int _0^t \big [ b (\widehat{X}(s)) + \widehat{C}(s) \big ]d\widehat{T}(s) + \int _0^t \sigma (\widehat{X}(s)) d \widehat{W}(s) - \widehat{Y}(t). \end{aligned}$$(B.21)
Proof
(a) Since the sizes of the jumps of \(\widehat{X}^h(\cdot )\), \(\widehat{W}^h(\cdot )\), \(\widehat{Y}^h(\cdot )\), \(\widehat{A}^h(\cdot )\), and \(\widehat{T}^h(\cdot )\) go to 0 as \(h\rightarrow 0\), the limits of these processes have continuous paths with probability one (see (Kushner 1990, p. 1007)). Moreover, \(\widehat{Y}^h(\cdot )\) (resp. \(\widehat{T}^h(\cdot )\)) converges uniformly to \(\widehat{Y}(\cdot )\) (resp. \(\widehat{T}(\cdot )\)) on bounded time intervals. This, together with the monotonicity and nonnegativity of \(\widehat{Y}^h(\cdot )\) and \(\widehat{T}^h(\cdot )\), implies that the processes \(\widehat{Y}(\cdot )\) and \(\widehat{T}(\cdot )\) are nondecreasing and nonnegative.
(b) Since \(|\widehat{A}^h_i(t+s)-\widehat{A}^h_i(t)|\le \lambda _i |\widehat{T}^h(t+s)-\widehat{T}^h(t)|\) for any \(t\ge 0, s\ge 0, h>0, i=1, 2,\ldots , d\) and by virtue of the Skorokhod representation, \(|\widehat{A}_i(t+s)-\widehat{A}_i(t)|\le \lambda _i |\widehat{T}(t+s)-\widehat{T}(t)|\) for any \(t\ge 0, s\ge 0, i=1, 2,\ldots , d\); that is, each \(\widehat{A}_i\) is absolutely continuous with respect to \(\widehat{T}\). Therefore, there exists a \([0,\lambda _i]\)-valued \(\{\widehat{{\mathcal {F}}}(t)\}\)-adapted process \(\widehat{C}_i(\cdot )\) such that \(\widehat{A}_i(t)=\int _0^t \widehat{C}_i(s)d\widehat{T}(s)\) for any \(t\ge 0\). Then \(\widehat{C}(\cdot )=(\widehat{C}_1(\cdot ), \ldots , \widehat{C}_d(\cdot ))'\) is the desired process.
(c) Let \(\widehat{\mathbb {E}}_t^h\) denote the expectation conditioned on \(\widehat{{\mathcal {F}}}^h(t)={\mathcal {F}}^h(\widehat{T}^h(t))\). Recall that \(W^h(\cdot )\) is an \({\mathcal {F}}^h(\cdot )\)-martingale and, by the definition of \(\widehat{W}^h(\cdot )\), for any \(\rho >0\),
where \(\mathbb {E}|\widehat{\varepsilon }^h(\rho )|\rightarrow 0\) as \(h\rightarrow 0\). To characterize \(\widehat{W}(\cdot )\), let q be an arbitrary integer, \(t>0\), \(\rho >0\) and \(\{t_k: k\le q\}\) be such that \(t_k\le t<t+\rho \) for each k. Let \(\Psi (\cdot )\) be a real-valued and continuous function with compact support. Then in view of (B.22), we have
and
By the Skorokhod representation and the dominated convergence theorems, letting \(h\rightarrow 0\) in (B.23), we obtain
Since \(\widehat{W}(\cdot )\) has continuous paths with probability one, (B.25) implies that \(\widehat{W}(\cdot )\) is a continuous \(\widehat{{\mathcal {F}}}(\cdot )\)-martingale. Moreover, (B.24) gives us that
This implies part (c).
(d) The proof of this part is motivated by that of (Kushner and Dupuis 1992, Theorem 10.4.1). By virtue of the Skorokhod representation,
as \(h\rightarrow 0\) uniformly in t on any bounded time interval with probability one.
For each positive constant \(\rho \) and a process \(\widehat{\nu }(\cdot )\), define the piecewise constant process \(\widehat{\nu }^\rho (\cdot )\) by \(\widehat{\nu }^\rho (t)=\widehat{\nu }(k\rho )\) for \(t\in [k\rho , k\rho +\rho ), k\in {\mathbb {Z}}_{\ge 0}\). Then, by the tightness of \((\widehat{X}^h(\cdot ))\), (B.17) can be rewritten as
where \(\lim \limits _{\rho \rightarrow 0}\limsup \limits _{h\rightarrow 0} \mathbb {E}|\widehat{\varepsilon }^{h, \rho }(t)|=0.\) Owing to the fact that \(\widehat{X}^{h, \rho }\) takes constant values on the intervals \([k\rho , k\rho +\rho )\), we have
which are well defined with probability one since they can be written as finite sums. Combining (B.27)–(B.29), we have
where \(\lim \limits _{\rho \rightarrow 0}\mathbb {E}|\widehat{\varepsilon }^{\rho }(t)|=0.\) Taking the limit \(\rho \rightarrow 0\) in the above equation yields the result. \(\square \)
For \(t<\infty \), define the inverse \({\overline{T}}(t)= \inf \{s: \widehat{T}(s)>t\}\). For any process \(\widehat{\nu }(\cdot )\), define the time-rescaled process \(\overline{\nu }(\cdot )\) by \(\overline{\nu }(t)= \widehat{\nu }({\overline{T}}(t))\) for \(t\ge 0\). Let \({\mathcal {\overline{F}}}(t)\) be the \(\sigma \)-algebra generated by \(\{\overline{X}(s), {\overline{W}}(s), {\overline{Y}}(s), {\overline{C}}(s), {\overline{T}}(s): s\le t\}\). Let \(V^h(x)\) and \(V^U(x)\) be the value functions defined in (2.13) and (2.9), respectively.
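On a discrete sample of the clock, the inverse time change \({\overline{T}}(t)= \inf \{s: \widehat{T}(s)>t\}\) can be evaluated with a sorted lookup. The sketch below is a discretized illustration (the grid, the sampled clock values, and the function name are hypothetical): it returns the first grid point where the sampled clock strictly exceeds t.

```python
import numpy as np

def inverse_clock(s_grid, T_hat_vals, t):
    """Evaluate T_bar(t) = inf{s : T_hat(s) > t} for a nondecreasing
    clock T_hat sampled at the points s_grid: the first grid point
    where the sampled clock strictly exceeds t (a discretized sketch)."""
    idx = np.searchsorted(T_hat_vals, t, side='right')
    return s_grid[idx] if idx < len(s_grid) else np.inf
```

Note that `side='right'` skips over flat stretches of the clock, which is exactly why \({\overline{T}}\) is right-continuous rather than continuous when \(\widehat{T}\) has constant pieces.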
Theorem B.6
Under conditions of Theorem B.4, the following assertions are true.
(a) \(\overline{T}\) is right continuous, nondecreasing, and \(\overline{T}(t)\rightarrow \infty \) as \(t \rightarrow \infty \) with probability one.

(b) The processes \(\overline{Y}(t)\) and \(\overline{C}(t)\) are \(\mathcal {\overline{F}}(t)\)-adapted. Moreover, \(\overline{Y}(t)\) is right-continuous, nondecreasing, and nonnegative, and \(\overline{C}(t)\in [0, \lambda ]\) for any \(t\ge 0\).

(c) \(\overline{W}(\cdot )\) is an \(\mathcal {\overline{F}}(t)\)-adapted standard Brownian motion, and
$$\begin{aligned} {\overline{X}}(t)=x +\int _0^t \Big [b(\overline{X}(s)) +\overline{C}(s)\Big ] ds+\int _0^t \sigma (\overline{X}(s))d\overline{W}(s)-\overline{Y}(t), \quad t\ge 0. \end{aligned}$$(B.31)
Proof
(a) We will argue via contradiction that \(\widehat{T}(t)\rightarrow \infty \) as \(t\rightarrow \infty \) with probability one. Suppose \(\mathbb {P}[\sup _{t\ge 0}\widehat{T}(t)<\infty ]>0\). Then there exist positive constants \(\varepsilon \) and \(T_0\) such that
We first observe that
Since \(\widehat{T}^h(\cdot )\) is nondecreasing and \(\widehat{T}^h(\widehat{t}^h_n)=t^h_n\),
The last inequality above is a consequence of the inequalities \(t^h_{n^h(t)}\le t< t^h_{n^h(t)+1}=t^h_{n^h(t)}+\Delta t^h_{n^h(t)+1} <t^h_{n^h(t)}+1\).
It follows from (B.9) that for each fixed \(t\ge 0\), \(\sup \limits _{h}\mathbb {E}\big (|Y^h(t)|\big )<\infty .\) Thus, for a sufficiently large K,
In view of (B.33) and (B.34), we obtain
Since \(\widehat{T}^h\) converges weakly to \(\widehat{T}\), it follows from (B.35) that \(\liminf \limits _{h\rightarrow 0} \mathbb {P}\big [\widehat{T}^h(T_0+2K) <T_0-1 \big ]\le \varepsilon /2\). This contradicts (B.32) (see (Billingsley 1968, Theorem 1.2.1)). Hence \(\widehat{T}(t)\rightarrow \infty \) as \(t\rightarrow \infty \) with probability one. Thus \({\overline{T}}(t)<\infty \) for all t and \({\overline{T}}(t)\rightarrow \infty \) as \(t\rightarrow \infty \). Since \(\widehat{T}(\cdot )\) is nondecreasing and continuous, \({\overline{T}}(\cdot )\) is nondecreasing and right-continuous.
(b) The properties of \(\overline{Y}(\cdot )\) follow from the fact that \(\widehat{Y}(\cdot )\) is continuous, nondecreasing, nonnegative, and \({\overline{T}}(\cdot )\) is right-continuous. The properties of \(\overline{C}(\cdot )\) follow from those of \(\widehat{C}(\cdot )\).
(c) Note that although \({\overline{T}}(\cdot )\) might fail to be continuous, \(\overline{W}(\cdot )=\widehat{W}({\overline{T}}(\cdot ))\) has continuous paths with probability one. Indeed, consider the tight sequence \(\big ({W}^h(\cdot ), \widehat{W}^h(\cdot ), \widehat{T}^h(\cdot )\big )\) with the weak limit \(\big (\widetilde{W}(\cdot ), \widehat{W}(\cdot ), \widehat{T}(\cdot )\big )\). Since \(\widehat{W}^h(\cdot )=W^h(\widehat{T}^h(\cdot ))\), we must have that \(\widehat{W}(\cdot )=\widetilde{W}(\widehat{T}(\cdot ))\). It follows from the definition of \({\overline{T}}(\cdot )\) that for each \(t\ge 0\), we have \(\widehat{T}({\overline{T}}(t))=t\). Hence \(\overline{W}(t)=\widehat{W}({\overline{T}}(t))=\widetilde{W}\big (\widehat{T}({\overline{T}}(t))\big )=\widetilde{W}(t)\). Since the sizes of the jumps of \(W^h(\cdot )\) go to 0 as \(h\rightarrow 0\), \(\widetilde{W}(\cdot )\) also has continuous paths with probability 1. This shows that \(\overline{W}(\cdot )=\widehat{W}({\overline{T}}(\cdot ))\) has continuous paths with probability 1. Before characterizing \(\overline{W}(\cdot )\), we note that for \(t\ge 0\), \(\{{\overline{T}}(s)\le t\}=\{\widehat{T}(t)\ge s\}\in \widehat{{\mathcal {F}}}(t)\) since \(\widehat{T}(t)\) is \(\widehat{{\mathcal {F}}}(t)\)-measurable. Thus \({\overline{T}}(s)\) is an \(\widehat{{\mathcal {F}}}(t)\)-stopping time for each \(s\ge 0\). Since \(\widehat{W}(t)\) is an \(\widehat{{\mathcal {F}}}(t)\)-martingale with quadratic variation process \(\widehat{T}(t) I_d\),
and \(\widehat{T}({\overline{T}}(t)\wedge n)\le \widehat{T}({\overline{T}}(t))=t\). Hence for each fixed \(t\ge 0\), the family \(\{\widehat{W}({\overline{T}}(t)\wedge n), n\ge 1\}\) is uniformly integrable. By this uniform integrability, we obtain from (B.36) that \(\mathbb {E}\big [\widehat{W}({\overline{T}}(t))| \widehat{{\mathcal {F}}}({\overline{T}}(s))\big ]=\widehat{W}({\overline{T}}(s))\), that is, \(\mathbb {E}\big [\overline{W}(t)| \overline{{\mathcal {F}}}(s)\big ]=\overline{W}(s)\). This proves that \(\overline{W}(\cdot )\) is a continuous \(\overline{{\mathcal {F}}}(\cdot )\)-martingale. We next consider its quadratic variation. By the Burkholder–Davis–Gundy inequality, there exists a positive constant K independent of \(n=1, 2,\ldots \) such that
Thus the families \(\{\widehat{W}({\overline{T}}(t)\wedge n), n\ge 1\}\) and \(\{\widehat{T}({\overline{T}}(t)\wedge n), n\ge 1\}\) are uniformly integrable for each fixed \(t\ge 0\). Combining this with the fact that \(\widehat{W}(\cdot )\), \(\widehat{T}(\cdot )\) have continuous paths, for nonnegative constants \(s\le t\), we have
Note that the first equation in (B.37) follows from the martingale property of \(\widehat{W}(\cdot )\widehat{W}(\cdot )'-\widehat{T}(\cdot )I_d\) with respect to \(\widehat{{\mathcal {F}}}(t).\) Letting \(n\rightarrow \infty \) in (B.37), we arrive at
Therefore, \(\overline{W}(\cdot )\) is an \(\overline{{\mathcal {F}}}(t)\)-adapted standard Brownian motion. A rescaling of (B.21) yields
The proof is complete. \(\square \)
Theorem B.7
Under the conditions of Theorem B.4, let \(V^h(x)\) and \(V^U(x)\) be the value functions defined in (2.13) and (2.9), respectively. Then \(V^h(x)\rightarrow V^U(x)\) for \(x\in [0,U]^d\) as \(h\rightarrow 0\). If (2.10) holds, then \(V^h(x)\rightarrow V(x)\) for \(x\in [0,U]^d\) as \(h\rightarrow 0\).
Proof
We first show that as \(h\rightarrow 0\),
where \(u^h=(\pi ^h, C^h)\). Indeed, for an admissible strategy \(u^h=\{(\pi ^h_n, C^h_n)\}_n\), we have
By a small modification of the proof in Theorem B.6 (a), we have \(\widehat{T}^h(t)\rightarrow \infty \) as \(t\rightarrow \infty \) with probability 1. It also follows from the representation (B.9) and estimates on \(B^h(\cdot )\) and \(M^h(\cdot )\) that \(\{Y^h(n+1)-Y^h(n): n, h\}\) is uniformly integrable. Thus, by the definition of \(\widehat{T}^h(\cdot )\),
uniformly in h as \(T_0\rightarrow \infty \). In the above argument, we have used that \(\widehat{T}^h(T_0)\le T_0\). Then by the weak convergence, the Skorokhod representation, and uniform integrability, we have for any \(T_0>0\) that
Therefore, we obtain
Similarly,
On inversion of the timescale, we have
Thus, \(J^h(x, u^h)\rightarrow J(x, \overline{Y}(\cdot ), \overline{C}(\cdot ))\) as \(h\rightarrow 0\).
Next, we prove that
For any small positive constant \(\varepsilon \), let \(\{\widetilde{u}^h\}\) be an \(\varepsilon \)-optimal harvesting strategy for the chain \(\{X^h_n\}\); that is,
Choose a subsequence \(\{\widetilde{h}\}\) of \(\{h\}\) such that
Without loss of generality (passing to an additional subsequence if needed), we may assume that
converges weakly to
and \(\overline{Y}(\cdot )=\widehat{Y}(\overline{T}(\cdot ))\), \(\overline{A}(\cdot )=\widehat{A}(\overline{T}(\cdot ))\), \(\overline{C}(\cdot )=\widehat{C}(\overline{T}(\cdot ))\). It follows from our claim at the beginning of the proof that
where \(J(x, \overline{Y}(\cdot ), \overline{C}(\cdot ))\le V^U(x)\) since \(V^U(x)\) is the maximizing performance function. Since \(\varepsilon \) is arbitrarily small, (B.40) follows from (B.41) and (B.42).
To prove the reverse inequality \(\liminf \limits _{h} V^h(x)\ge V^U(x)\), for any small positive constant \(\varepsilon \), we choose a particular \(\varepsilon \)-optimal harvesting strategy for (2.7) such that the approximation can be applied to the chain \(\{X^h_n\}\) and the associated reward compared with \(V^h(x)\). By an adaptation of the method used by Kushner and Martins (1991) for singular control problems, for given \(\varepsilon >0\), there is an \(\varepsilon \)-optimal harvesting strategy \(({Y}(\cdot ), {C}(\cdot ))\) for (2.7) in \({\mathcal {A}}_x^U\) with the following properties: there are \(T_\varepsilon <\infty \), \(\rho >0\), and \(\lambda >0\) such that \(({Y}(\cdot ), {C}(\cdot ))\) are constant on the intervals \([n\lambda , n\lambda + \lambda )\); only one of the components of \(Y(\cdot )\) can jump at a time and the jumps take values in the discrete set \(\{k\rho : k=1, 2, \ldots \}\); \({Y}(\cdot )\) is bounded and is constant on \([T_\varepsilon , \infty )\); and \(C(\cdot )\) takes only finitely many values.
We adapt this strategy to the chain \(\{X^h_n\}\) by a sequence of controls \(u^h\equiv (Y^h,C^h)\) using the same method as in (Kushner and Martins 1991, p. 1459). Suppose that we wish to apply a harvesting action of “impulsive” magnitude \(\Delta y_i\) (that is, for species i) to the chain at some interpolated time \(t_0\). Define \(n_h=\min \{k: t^h_k\ge t_0\}\), where \(t^h_k\) is defined in (B.1). Then, starting at step \(n_h\), apply \([\Delta y_i/h]\) successive harvesting steps on species i. Let \(Y^h(\cdot )\) denote the piecewise interpolation of the harvesting strategy just defined. With the observation above, let \(({Y}^h, {C}^h)\) denote the interpolated form of the adaptation. By a weak convergence argument analogous to that of the preceding theorems, we obtain the weak convergence
where \(A(t)=\int _0^t C(s)ds\), and the limit solves (2.7). It follows that
By the optimality of \(V^h(x)\) and the above weak convergence,
It follows that \(\liminf \limits _{h\rightarrow 0}V^h(x)\ge V^U(x) -\varepsilon \). Since \(\varepsilon \) is arbitrarily small, \(\liminf \limits _{h\rightarrow 0}V^h(x)\ge V^U(x)\). Therefore, \(V^h(x)\rightarrow V^U(x)\) as \(h\rightarrow 0\). If (2.10) holds, by Proposition 2.4 we have \(V^U(x)=V(x)\) which finishes the proof. \(\square \)
1.4 Transition probabilities for bounded harvesting and seeding rates
In this case, recall that \(u^h_n= (\pi ^h_n, Q^h_n)\) for each n and let \(u^h=\{u^h_n\}_n\) be a sequence of controls. Note that \(\pi ^h_n = 0\) includes the case in which we harvest nothing and also seed nothing, that is, \(Q^h_n=0\). Note also that \({\mathcal {F}}^h_n=\sigma \{X^h_m, u^h_m, m\le n\}\).
The sequence \(u^h= (\pi ^h, Q^h)\) is said to be admissible if it satisfies the following conditions:
(a) \(u^h_n\) is \(\sigma \{X^h_0, X^h_1,\ldots , X^h_{n}, u^h_0, u^h_1,\ldots , u^h_{n-1}\}-\text {adapted},\)
(b) For any \(x\in S_{h+}\), we have
$$\begin{aligned} \mathbb {P}\{ X^h_{n+1} = x | {\mathcal {F}}^h_n\}= \mathbb {P}\{ X^h_{n+1} = x | X^h_n, u^h_n\} = p^h( X^h_n, x| u^h_n), \end{aligned}$$
(c)
Let \(X^{h}_{n, j}\) be the j th component of the vector \(X^h_n\) for \(j=1, 2, \ldots , d\). Then
$$\begin{aligned} \mathbb {P}\big ( \pi ^h_{n}=\min \{j: X^{h}_{n, j} = U+h\} | X^{h}_{n, j} = U+h \text { for some } j\in \{1, \ldots , d \}, {\mathcal {F}}^h_n\big )=1. \end{aligned}$$
(d)
\(X^h_n\in S_{h+}\) for all \(n\in {\mathbb {Z}}_{\ge 0}\).
Now we proceed to define transition probabilities \(p^h (x, y | u)\) so that the controlled Markov chain \(\{X^h_n\}\) is locally consistent with respect to the controlled diffusion \(X(\cdot )\). For \((x, u)\in S_{h+}\times {\mathcal {U}}\) with \(u=(0, q)\), we define
Set \(p^h \left( x, y|u=(0, q)\right) =0\) for all unlisted values of \(y\in S_{h+}\). Assumption B.1 guarantees that the transition probabilities in (B.43) are well-defined. At the reflection steps, we define
Thus, \(p^h \left( x, y|u=(i, q)\right) =0\) for all unlisted values of \(y\in S_{h+}\).
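Since the displayed formulas (B.43)–(B.44) are not reproduced above, the following one-dimensional sketch illustrates how local consistency can be verified numerically, under the assumption that the intended construction is the standard upwind scheme of Kushner–Dupuis type. The logistic drift, the noise intensity, and the bounded net rate `q` are hypothetical choices for illustration only.

```python
def transition_probs(x, h, b, sigma, q):
    """Transition probabilities p^h(x, x +/- h | u=(0, q)) and the interpolation
    interval dt^h for the controlled diffusion dX = (b(X) + q) dt + sigma(X) dW
    (one-dimensional upwind sketch; assumed form, not the paper's display)."""
    drift = b(x) + q                        # net controlled drift
    Q = sigma(x) ** 2 + h * abs(drift)      # normalizing factor
    p_up = (sigma(x) ** 2 / 2 + h * max(drift, 0.0)) / Q
    p_dn = (sigma(x) ** 2 / 2 + h * max(-drift, 0.0)) / Q
    return p_up, p_dn, h ** 2 / Q

# Local consistency: the one-step conditional mean and variance of the chain
# match the drift and diffusion coefficients of X(.) up to o(dt).
b = lambda x: x * (1.0 - x)     # hypothetical logistic drift
sigma = lambda x: 0.3 * x       # hypothetical noise intensity
h, x, q = 1e-3, 0.5, 0.2
p_up, p_dn, dt = transition_probs(x, h, b, sigma, q)
mean_step = h * (p_up - p_dn)
var_step = h ** 2 * (p_up + p_dn) - mean_step ** 2

assert abs(p_up + p_dn - 1.0) < 1e-12          # probabilities sum to one
assert abs(mean_step - (b(x) + q) * dt) < 1e-12  # mean matches drift * dt
assert abs(var_step - sigma(x) ** 2 * dt) < h * dt  # variance matches up to o(dt)
```

The upwind split of the drift into its positive and negative parts is what keeps all probabilities nonnegative, which is the role Assumption B.1 plays for the paper's own construction.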
1.5 Transition probabilities for unbounded seeding and bounded harvesting rates
In this case, recall that \(u^h_n= (\pi ^h_n, R^h_n)\) for each n and let \(u^h=\{u^h_n\}_n\) be a sequence of controls. Note that \(\pi ^h_n = 0\) includes the case in which we harvest nothing, that is, \(R^h_n=0\). Note also that \({\mathcal {F}}^h_n=\sigma \{X^h_m, u^h_m, m\le n\}\).
The sequence \(u^h= (\pi ^h, R^h)\) is said to be admissible if it satisfies the following conditions:
(a) \(u^h_n\) is \(\sigma \{X^h_0, X^h_1,\ldots , X^h_{n}, u^h_0, u^h_1,\ldots , u^h_{n-1}\}-\text {adapted},\)
(b) For any \(x\in S_{h+}\), we have
$$\begin{aligned} \mathbb {P}\{ X^h_{n+1} = x | {\mathcal {F}}^h_n\}= \mathbb {P}\{ X^h_{n+1} = x | X^h_n, u^h_n\} = p^h( X^h_n, x| u^h_n), \end{aligned}$$
(c)
Let \(X^{h}_{n, j}\) be the j th component of the vector \(X^h_n\) for \(j=1, 2, \ldots , d\). Then
$$\begin{aligned} \mathbb {P}\big ( \pi ^h_{n}=\min \{j: X^{h}_{n, j} = U+h\} | X^{h}_{n, j} = U+h \text { for some } j\in \{1, \ldots , d \}, {\mathcal {F}}^h_n\big )=1. \end{aligned}$$
(d)
\(X^h_n\in S_{h+}\) for all \(n\in {\mathbb {Z}}_{\ge 0}\).
Now we proceed to define transition probabilities \(p^h (x, y | u)\) so that the controlled Markov chain \(\{X^h_n\}\) is locally consistent with respect to the controlled diffusion \(X(\cdot )\). We use the same notation as in the preceding case. For \((x, u)\in S_{h+}\times {\mathcal {U}}\) with \(u=(0, r)\), we define
Set \(p^h \left( x, y|u=(0, r)\right) =0\) for all unlisted values of \(y\in S_{h+}\). Assumption B.1 guarantees that the transition probabilities in (B.45) are well-defined. At the reflection steps, we define
As a result, \(p^h \left( x, y|u=(i, r)\right) =0\) for all unlisted values of \(y\in S_{h+}\). At the seeding steps, we define
Thus, \(p^h \left( x, y|u=(-i, r)\right) =0\) for all unlisted values of \(y\in S_{h+}\).
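As the displays (B.45) and the subsequent formulas are not reproduced above, the sketch below illustrates, in a hedged single-species form, how the resulting chain can be solved numerically in this regime: diffusion steps with a bounded harvesting rate, plus impulsive (instantaneous) seeding steps that move the state up one grid point at a fixed per-unit cost. The upwind transition probabilities, the logistic drift, the noise intensity, and every numerical parameter are hypothetical choices, not the paper's; with such parameters the computed policy is typically of the threshold type described in the paper's numerical experiments.

```python
import numpy as np

# Hypothetical model: logistic drift, multiplicative noise, discount delta,
# bounded harvesting rate r_max (unit price), seeding cost per unit seeded.
h, U = 0.05, 2.0
xs = np.arange(h, U + h / 2, h)        # grid on (0, U]
b = lambda x: x * (1.0 - x)
sig = lambda x: 0.4 * x
delta, r_max, cost = 0.5, 0.5, 2.0

V = np.zeros_like(xs)
for _ in range(20000):
    cands = []
    for r in (0.0, r_max):             # diffusion steps: idle or harvest at r_max
        drift = b(xs) - r
        Q = sig(xs) ** 2 + h * np.abs(drift)
        dt = h ** 2 / Q                # interpolation interval
        p_up = (sig(xs) ** 2 / 2 + h * np.maximum(drift, 0.0)) / Q
        V_up = np.append(V[1:], V[-1])      # reflection at the upper boundary
        V_dn = np.insert(V[:-1], 0, V[0])   # reflection at the lower boundary
        cands.append(r * dt + np.exp(-delta * dt) * (p_up * V_up + (1 - p_up) * V_dn))
    # impulsive seeding step: jump x -> x + h at cost `cost * h`, no time elapses
    cands.append(np.append(V[1:], V[-1]) - cost * h)
    V_new = np.max(cands, axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = np.argmax(cands, axis=0)      # 0 = idle, 1 = harvest, 2 = seed
```

Because the seeding step consumes no time, its candidate value carries no discount factor; convergence of the iteration still holds here since each seeding step incurs a strictly positive cost, so chains of consecutive seedings cannot be profitable indefinitely.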
Hening, A., Tran, K.Q. Harvesting and seeding of stochastic populations: analysis and numerical approximation. J. Math. Biol. 81, 65–112 (2020). https://doi.org/10.1007/s00285-020-01502-0