
Modeling illegal logging in Brazil

Abstract

Deforestation is a major threat to global environmental health, and illegal logging is among its leading causes. Recently, there has been increased effort to model environmental crime, with the goal of assisting law enforcement agencies in deterring these activities. We present a continuous model for illegal logging applicable to arbitrary domains. We model the behavior of criminals under the influence of law enforcement agencies using tools from multiobjective optimal control theory, and we consider non-instantaneous logging events and load-dependent travel velocity. We calibrate our model using real deforestation data from the Brazilian rainforest and demonstrate the importance of geographically targeted patrol strategies.

Introduction

Deforestation, and illegal logging in particular, causes some of the most damaging effects to the world’s forests. Modeling and quantifying deforestation has recently become an area of study for ecologists, political scientists, and applied mathematicians. Pfaff [29] has validated the correlation between certain parameters and deforestation in tropical regions such as Brazil. Pfaff and other authors [3, 21] identify three dominant categories of parameters: accessibility, population and climate. One effort to control deforestation in Brazil, while exploiting timber in a sustainable way, is to allow legal concessions for industrial timber harvest in public forests [4]. Companies operating in Brazil under such concessions fell an average of only one tree per acre instead of clear-cutting, allowing trees to regrow [37]. However, as reported in [37], legal timber companies have pulled out of concessions as uncontrolled wildcat loggers invaded their land, illegally toppling and stealing trees. The government’s failure to detect and punish illegal loggers leads to even more rampant organized crime and more severe deforestation. Devising effective tactics to combat illegal loggers is therefore an essential and urgent task. Slough and Urpelainen [36] have studied the influence of geographically targeted deterrence on deforestation. Efficient and effective deployment of law enforcement to threatened areas is the best deterrent for these crimes and has been modeled in the continuum setting [1, 2, 7]. Effective deterrence can also lead to spatial spillovers, as loggers move away from areas with heavy monitoring. Assessing loggers’ responses to policies is important when designing an effective system that minimizes deforestation across forested areas. In this work, we build a game-theoretic model to predict interactions between illegal loggers and law enforcement agencies.

Deforestation in the State of Roraima

In this paper, we focus on the PRODES (Amazon Deforestation Monitoring Program) [24] dataset, the official dataset the Brazilian Government uses to compile annual deforestation statistics. PRODES uses a mixture of computer and human expert analysis to delineate deforestation regions in the Brazilian Amazon annually, with a minimum patch size of 6.25 hectares (ha). In particular, we focus on the state of Roraima, the northernmost state in Brazil. We extract annual deforestation event data and tree coverage data from PRODES. In Fig. 1a, we plot the deforestation events from 2001 to 2015 as well as the transportation system (including highways and waterways) for Roraima; many of the deforestation events occur in the vicinity of roads and rivers. The tree coverage data for 2015 is shown in Fig. 1b, where yellow indicates land covered by trees and blue refers to cleared land. There are fifteen municipalities in this state. In this paper, we assume that loggers originate from and return to these fifteen municipalities.

Fig. 1

An overview of the data. a Shows the deforestation events between year 2001 and 2015 in Roraima on top of the transportation system. Dark blue are rivers, white are major highways and red dots correspond to deforestation events. b Exhibits the binary tree coverage for year 2015 data where yellow represents regions covered by trees and blue represents uncovered land

Previous work

The first continuous game theoretic model of deforestation is attributed to Albers [1], who modeled deforestation events in a circular area with radially symmetric benefit and patrol functions. Criminals enter from the boundary of the area and want to maximize their profit

$$\begin{aligned} P(d) = (1 - \varPhi (d))B(d) - C(d), \end{aligned}$$

where B is the benefit to the attacker, C is the cost of traveling to depth d, \(\varPhi \) is the cumulative patrol function and \((1 - \varPhi (d))\) represents the probability of not being captured.

Fig. 2

Illustration of previous work. a Albers’ [1] model assumes a circular region and radially symmetric functions, so that attackers only move along the radius. b Arnold et al. [2] generalize the model to arbitrary terrain and apply it to Yosemite National Park. In both figures, the white area is pristine while the grey area is affected by criminals

Johnson et al. [17] worked on optimal patrol strategies in the framework of Albers’ model [1]. Kamra et al. [18] extended the model by removing the assumption that trees are homogeneously distributed but maintained the circular area. They considered the game between law enforcement and extractors and applied machine learning techniques to find the optimal or near-optimal patrol strategies. All of these works considered a circular region with the assumptions that extractors come from the boundary of the region and move toward the center. The radial symmetry of the region and the functions is a major restriction.

Arnold et al. [2] generalized Albers’ model [1] to any closed, simple region in \(\mathbb {R}^2\). The primary tool employed in the model is the level set method [26]. In their model, the cost represents the effort expended in extracting at any point in the protected area, evaluated by the optimal travel time, where the velocity is allowed to depend on terrain data. They model the impact of patrol by including capture probability in the formulation of a heuristic modified velocity. The validity of this model has not been tested against real-world data, but the model has been modified and improved by Cartee and Vladimirsky [7], who constructed two models based on whether the authorities use ground patrol, where confiscation takes place as soon as perpetrators are detected, or aerial patrol, where illegal goods are not confiscated until perpetrators exit the protected area.

Meanwhile, models for illegal extraction by discrete methods have been developed by Fang et al. [11, 12] and Kar et al. [19, 20]. Both Fang et al. [12] and Kar et al. [20] deployed their models in Queen Elizabeth National Park (QENP), Uganda. Fang et al. [12] developed the PAWS algorithm and described the protected region as nodes connected by edges which are natural pathways such as rivers or roads. Kar et al. [20] used machine learning techniques to predict attacks from extractors. One advantage of such discrete methods is that they can easily incorporate realistic concerns such as detailed terrain information, different types of environmental crime (including animal poaching), or different types of patrol teams [22]. However, they have the disadvantage that they do not track the actual movement of the environmental criminals, and results can be difficult to interpret due to the “black box” nature of parameter estimation methods.

Our contribution

Our work builds upon earlier works by Arnold et al. [2] and Cartee and Vladimirsky [7]: we use optimal control theory to model and solve the path planning problem faced by illegal loggers as they balance benefit, travel cost and capture risk. We assume the authorities deploy remote patrols, related to model A in [7], where confiscation of illegal goods is delayed. We introduce several significant improvements to arrive at a more realistic model. We consider non-instantaneous logging activities and a positive capture risk while logging on site, so that loggers can decide the optimal logging time to maximize profit. We also incorporate load-dependent velocity as loggers return from the forest with illegal goods. We work directly with real-world data from Brazil to calibrate the model. Providing a more transparent interpretation of “pristine area” and evaluation metrics for patrol efficiency, we simulate and conduct a side-by-side comparison of the predicted outcomes of several geographically targeted and data-driven patrol strategies.

The remainder of the paper is organized as follows. Section 2 describes the model formulation, optimization problem solver and numerical method. Section 3 covers experimental results following a detailed description of the experiment setup. We summarize our conclusions in Sect. 4.

Illegal logging model

Our model, based on the work of Albers [1] and Arnold et al. [2], is a more realistic representation of loggers’ decision-making process. The model can be applied to an arbitrary domain as in [2], but balances travel time and capture risk in a more judicious manner by appealing to optimal control theory. We also account for logging time, which was ignored by previous models but makes a great difference to loggers’ profit, as shown in Sect. 3. In the remainder of this section, we first construct the model; the optimal control problem is then posed as a static Hamilton–Jacobi equation and solved using a fast-sweeping method [6, 27, 38, 40, 42].

Model construction

Given an arbitrary domain \(\varOmega \subset \mathbb {R}^2\), our goal is to construct an expected profit function \(P(x): \varOmega \rightarrow \mathbb {R}\) from loggers’ perspective. We adopt the basic idea from Albers [1], where \(P = (1-\varPhi ) \mathcal{B} - \mathcal C\). Here, \(\mathcal B\) is the benefit that describes the value of the timber that loggers will obtain if they are not captured throughout the entire trip. The variable \(\mathcal C\) represents the cost and is measured by the travel cost of both going in and out of the forest. The term \(1 - \varPhi \) describes the probability of not being captured, which depends on patrollers’ detection ability and loggers’ trajectories. We present details of these three components in the following paragraphs. We follow the Stackelberg game model and assume loggers have perfect information about patrol.

The benefit \(\mathcal B\) depends on the value and amount of trees that perpetrators decide to log. We assume each location x in the domain \(\varOmega \) has a fixed amount of timber, but with a different total value B(x), depending on the category and quality of the timber. Departing from previous models where extraction happens instantaneously, we introduce the notion of logging time \(t_{\text {log}}\), which is comparable with travel time. We assume that loggers have a constant production rate 1/T, where T is a global constant representing the time to clear all the trees in one location. The actual benefit, ignoring the existence of patrollers, is then given by \(\frac{t_{\text {log}}}{T}B(x)\). We always assume that \(t_{\text {log}}\le T\), so the loggers only extract from one spot x in one trip and will return from the forest when the chosen region is cleared.

We assume loggers can be detected when they are logging or on their path back while returning with their illegal goods. We define the capture intensity as \(\psi : \varOmega \rightarrow {\mathbb {R}}\), which we assume to be known to loggers and dependent on patrol resources and strategies. In particular, it satisfies a budget constraint modeled as

$$\begin{aligned} \int _{\varOmega } \psi (x)(1 + \mu d(x))^2 \mathrm{d}x \le E. \end{aligned}$$
(1)

Here, E represents the budget, d(x) is the Euclidean distance from location x to the major highways and \(\mu \) is an adjustable weight parameter. The term \((1 + \mu d(x))^2\) models a scenario wherein it is more expensive to patrol deeper into the forest. We test different capture intensity functions in the experiments in Sect. 3, which exhibit interesting patterns. Following the derivation in [7], the probability of not being captured while logging at x for time \(t_{\text {log}}\) is \(e^{-\psi (x) t_{\text {log}}}\). A longer logging time means a larger benefit, but also a larger risk of being detected. The probability of not being captured while walking back along a path X(s) is then given by \(e^{-\int _0^{\tau }\psi (X(s))\mathrm{d}s}\), where \(\tau \) is the travel time. Here we assume the loggers are only detected when they actually have timber in their possession. In this case, they lose all of the benefit but incur no extra penalty (see the discussion at the end of this subsection). The expected benefit of logging at location x for time \(t_{\text {log}}\) and returning along path X(s) is then

$$\begin{aligned} B(x)\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}e^{-\int _0^{\tau }\psi (X(s))\mathrm{d}s}. \end{aligned}$$
(2)
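For completeness, we record the standard hazard-rate argument behind these exponential survival probabilities (our paraphrase of the derivation in [7]). If detections arrive at rate \(\psi (x)\) while logging at a fixed location x, the probability S(t) of remaining undetected satisfies

$$\begin{aligned} S'(t) = -\psi (x)S(t), \quad S(0) = 1 \quad \Longrightarrow \quad S(t) = e^{-\psi (x)t}, \end{aligned}$$

and replacing the constant rate with the time-varying rate \(\psi (X(s))\) along a moving path gives \(S(\tau ) = \exp \left( -\int _0^{\tau }\psi (X(s))\,\mathrm{d}s\right) \).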

The cost—represented by the travel time—is easy to calculate given a path and the associated velocity. We assume that loggers embark from and return to one of the fifteen municipalities, and use \(X_{\text {in}}\) and \(X_{\text {out}}\) to represent the paths to and from the logging location, respectively. We assume that loggers may return to any municipality—not necessarily the one they embarked from—which leads to a different initial value, as discussed further in Sect. 2.2. In our model, we first define the inward velocity field \(v: \varOmega \rightarrow \mathbb {R}\) following the transportation system, so that loggers travel with the highest velocity when they are on major highways and more slowly when they are on water or secondary highways. When loggers are off highways or waterways, their velocity is scaled according to terrain slope following Arnold et al. [2]. When they are returning, we assume their velocity is slower because of the loaded cars or boats. In this case, we set the returning velocity \(v_\text {out}=v(x)/(1 + c(t_{\text {log}}/T)^\gamma )\). Here, \(t_{\text {log}}/T\) measures the amount of trees loggers carry back. The parameters c and \(\gamma \) model the effect of carrying the trees on the speed of motion. The increased travel cost may be another reason for loggers to spend less than the maximal logging time.

The previous analysis leads to a more realistic way of calculating profit for logging at position x for time \(t_{\text {log}}\) and following paths \(X_{\text {in}}\), \(X_{\text {out}}\). The profit function is

$$\begin{aligned} P(x, t_{\text {log}}, X_{\text {in}}, X_{\text {out}})= & {} B(x)\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}e^{-\int _0^{\tau _{\text {out}}}\psi (X_{\text {out}}(s))\mathrm{d}s} \nonumber \\&-\int _0^{\tau _{\text {out}}}\alpha (X_{\text {out}}(s)) \mathrm{d}s - \int _0^{\tau _{\text {in}}}\alpha (X_{\text {in}}(s))\mathrm{d}s, \end{aligned}$$
(3)

where \(\tau _{\text {in}}\) and \(\tau _{\text {out}}\) are the travel times, and \(\alpha \) is a dimensional function that converts time to monetary value. As in [7], \(\alpha \) may be constant or may vary with location. This can model loggers’ preference for certain areas or represent spatial variation in the unit-time travel cost. For example, one may expect the unit-time cost of traveling on waterways to be smaller than that of traveling on highways. Rational loggers will then try to solve the optimization problem

$$\begin{aligned} P_{\text {opt}}(x) =&\max _{t_{\text {log}}, X_{\text {in}}, X_{\text {out}}} P(x, t_{\text {log}}, X_{\text {in}}, X_{\text {out}}) \end{aligned}$$
(4)

and may go to the spots with positive profit. It is worth mentioning that the optimal paths into and out of the forest can differ because of patrols. In reality, a mixture of different patrol methods is deployed in Brazil, including ground patrol, using boats and motor vehicles, and remote patrol, using helicopters, planes and drones. As pointed out in [7], ground patrol leads to immediate confiscation, and it would be more reasonable for loggers to switch to the minimal-time path thereafter. In this paper, we always assume that the government deploys remote patrols. Since loggers are unaware of being detected, they will choose optimal return paths that balance capture risk and travel time.

Multiobjective approach and Eikonal equations

We adapt the multiobjective optimal control approach of Cartee and Vladimirsky [7] to our model, and describe its use in solving the optimization problem (4). We consider a trajectory X(s) following the dynamics

$$\begin{aligned} \begin{aligned}&{\dot{X}}(t) = {\mathfrak {a}}(t)v(X(t)),\ t\in [0,S],\\&X(0) = x,\\&X(S) \in {\mathbb {X}}_{0}. \end{aligned} \end{aligned}$$
(5)

Here, \({\mathbb {X}}_{0}\) denotes the set of possible destinations (the fifteen municipalities). The map \({\mathfrak {a}}\) is the control plan, taken from the set of valid control functions

$$\begin{aligned} \mathcal {A} = \{{\mathfrak {a}} : [0,S] \rightarrow {\mathbb {R}}^2 \,\,\, \vert \,\,\, {\mathfrak {a}} \text { measurable}, \,\, |{\mathfrak {a}}(t) |= 1, \,\, \forall t \in [0,S]\}. \end{aligned}$$

Define \(J_1(x,{\mathfrak {a}},v) = \int _0^{\tau }\psi (X(s))\mathrm{d}s\) and \(J_2(x,{\mathfrak {a}},v) = \int _0^{\tau }\alpha (X(s))\mathrm{d}s\). According to (3), the profit

$$\begin{aligned} \begin{aligned} P(x, t_{\text {log}}, X_{\text {in}}, X_{\text {out}}) =&B(x)\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}e^{-J_1(x,{\mathfrak {a}},v_{\text {out}})} -J_2(x,{\mathfrak {a}},v_{\text {out}}) - R(x), \end{aligned} \end{aligned}$$
(6)

where R(x) is the minimal cost traveling from \({\mathbb {X}}_{0}\) to x. In fact, since our velocity is isotropic, R(x) is the unique viscosity solution of the Eikonal equation

$$\begin{aligned} \begin{aligned} v(x)|\nabla R(x)| = \alpha (x),&\quad x\in \varOmega \setminus {\mathbb {X}}_{0},\\ R(x) = 0,&\quad x\in {\mathbb {X}}_{0}. \end{aligned} \end{aligned}$$
(7)

Recall that along the inward path, loggers do not need to worry about the patrol and travel with velocity v(x). Along the outward paths, their velocity is decreased if they are carrying more timber, and the amount of timber they are carrying is proportional to the logging time. In our model, we set \(v_{\text {out}}= v/(1+c(t_{\text {log}}/T)^\gamma )\). Since uniformly slowing the speed by the factor \(1+c(t_{\text {log}}/T)^\gamma \) rescales the travel time by the same factor, we have \(J_i(x,{\mathfrak {a}},v_{\text {out}}) = (1+c(t_{\text {log}}/T)^\gamma )J_i(x,{\mathfrak {a}},v)\) for \(i=1,2\). Plugging this into equation (6) yields

$$\begin{aligned} \begin{aligned} P(x, t_{\text {log}}, X_{\text {in}}, X_{\text {out}}) =&B(x)\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}e^{-J_1(x,{\mathfrak {a}},v)(1+c(t_{\text {log}}/T)^\gamma )}\\&-J_2(x,{\mathfrak {a}},v)(1+c(t_{\text {log}}/T)^\gamma ) - R(x). \end{aligned} \end{aligned}$$
(8)

We can resolve the optimal profit value using a multiobjective control formulation as in [7]. For any \(\lambda \in [0,1]\), let \(K^\lambda (x) = \lambda \psi (x) + (1-\lambda )\alpha (x)\). Then, the value function \(u^\lambda (x)\) defined by

$$\begin{aligned} u^\lambda (x) = \inf _{{\mathfrak {a}} \in \mathcal {A}}\left\{ \lambda J_1(x,{\mathfrak {a}},v) + (1-\lambda )J_2(x,{\mathfrak {a}},v) \right\} \end{aligned}$$
(9)

is the unique viscosity solution [5, 9] of the Eikonal equation

$$\begin{aligned} \begin{aligned} v(x)|\nabla u^\lambda (x)| = K^\lambda (x),&\quad x\in \varOmega \setminus {\mathbb {X}}_{0},\\ u^\lambda (x)=0,&\quad x\in {\mathbb {X}}_{0}. \end{aligned} \end{aligned}$$
(10)

While we do not expect \(u^\lambda \) to be smooth, under mild conditions on v and K, it will be Lipschitz continuous, hence differentiable almost everywhere, and thus the \(\lambda \)-optimal control

$$\begin{aligned} \mathcal {A}^\lambda _x = \mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{{\mathfrak {a}} \in \mathcal {A}}\left\{ \lambda J_1(x,{\mathfrak {a}},v) + (1-\lambda )J_2(x,{\mathfrak {a}},v) \right\} \end{aligned}$$

is uniquely determined for almost every starting point \(x\in \varOmega \setminus {\mathbb {X}}_{0}\). The value functions corresponding to \(\lambda \)-optimal controls are defined by

$$\begin{aligned} \begin{aligned}&u_1^\lambda (x) = \inf _{{\mathfrak {a}} \in \mathcal {A}^\lambda _x}\{J_1(x,{\mathfrak {a}},v)\},\\&u_2^\lambda (x) = \inf _{{\mathfrak {a}} \in \mathcal {A}^\lambda _x}\{J_2(x,{\mathfrak {a}},v)\}, \end{aligned} \end{aligned}$$
(11)

and given \(u^\lambda \), we can resolve \(u_1^\lambda \) and \(u_2^\lambda \) by solving

$$\begin{aligned} \begin{aligned} \nabla u^\lambda (x)\cdot \nabla u_1^\lambda (x) = \frac{\psi (x)K^\lambda (x)}{v^2(x)},&\ x\in \varOmega \setminus {\mathbb {X}}_{0},\\ \nabla u^\lambda (x)\cdot \nabla u_2^\lambda (x) = \frac{\alpha (x) K^\lambda (x)}{v^2(x)},&\ x\in \varOmega \setminus {\mathbb {X}}_{0}, \end{aligned} \end{aligned}$$
(12)

with boundary conditions \(u_1^\lambda (x) = u_2^\lambda (x) = 0\) for \(x\in {\mathbb {X}}_{0}\) [7, 23].

The optimal profit can be calculated by

$$\begin{aligned} \begin{aligned} P_{\text {opt}}(x) =&\max _{t_{\text {log}}, X_{\text {in}}, X_{\text {out}}} P(x, t_{\text {log}}, X_{\text {in}}, X_{\text {out}})\\ =&\max _{t_{\text {log}}\in [0,T]}\left\{ \sup _{{\mathfrak {a}} \in \mathcal {A}}\left[ B(x)\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}e^{-J_1(x,{\mathfrak {a}},v)\left( 1+c\left( \frac{t_{\text {log}}}{T}\right) ^\gamma \right) }\right. \right. \\&\left. \left. \quad -J_2(x,{\mathfrak {a}},v)\left( 1+c\left( \frac{t_{\text {log}}}{T}\right) ^\gamma \right) \right] \right\} -R(x)\\ =&\max _{t_{\text {log}}\in [0,T]}\left\{ \max _{\lambda \in [0,1]}\left[ B(x)\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}e^{-u^\lambda _1(x)\left( 1+c\left( \frac{t_{\text {log}}}{T}\right) ^\gamma \right) } \right. \right. \\&\left. \left. \quad -u_2^\lambda (x)\left( 1+c\left( \frac{t_{\text {log}}}{T}\right) ^\gamma \right) \right] \right\} - R(x)\\ =&\max _{\lambda \in [0,1]}\left\{ \max _{t_{\text {log}}\in [0,T]}\left[ B(x)\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}e^{-u^\lambda _1(x)\left( 1+c\left( \frac{t_{\text {log}}}{T}\right) ^\gamma \right) } \right. \right. \\&\left. \left. \quad -u_2^\lambda (x)\left( 1+c\left( \frac{t_{\text {log}}}{T}\right) ^\gamma \right) \right] \right\} - R(x). \end{aligned} \end{aligned}$$
(13)

For the inner maximum,

$$\begin{aligned} B(x)\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}e^{-u^\lambda _1(x)\left( 1+c\left( \frac{t_{\text {log}}}{T}\right) ^\gamma \right) } -u_2^\lambda (x)\left( 1+c\left( \frac{t_{\text {log}}}{T}\right) ^\gamma \right) \end{aligned}$$
(14)

is a function of \(t_{\text {log}}\), and it is not easy to find the explicit maximum. In practice, we discretize \([0,T]\times [0,1]\) into finitely many points \((t_i,\lambda _j)\) and simply choose the maximum among these points.
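To make the discretized maximization concrete, the following Python sketch evaluates the bracketed expression in (13) at every grid point for each pair \((t_i,\lambda _j)\) and keeps the running maximum. It assumes the arrays u1 and u2 (one two-dimensional slice per \(\lambda \) level) and R have already been produced by the PDE solvers of Sect. 2.3; all variable names are our own.

```python
import numpy as np

def optimal_profit(B, psi, R, u1, u2, T, c=0.5, gamma=1.0, n_t=101):
    """Grid search for P_opt(x) in Eq. (13).

    B, psi, R : (ny, nx) arrays of benefit, capture intensity,
                and minimal inward travel cost.
    u1, u2    : (n_lambda, ny, nx) arrays of the lambda-optimal
                value functions from Eqs. (11)-(12).
    Returns P_opt and the maximizing logging-time fraction t_log/T.
    """
    t_frac = np.linspace(0.0, 1.0, n_t)           # t_log / T in [0, 1]
    best = np.full(B.shape, -np.inf)
    best_t = np.zeros(B.shape)
    for u1_lam, u2_lam in zip(u1, u2):            # loop over lambda levels
        for tf in t_frac:
            load = 1.0 + c * tf**gamma            # velocity penalty factor
            profit = (B * tf * np.exp(-psi * tf * T)
                      * np.exp(-u1_lam * load)
                      - u2_lam * load)
            improved = profit > best
            best = np.where(improved, profit, best)
            best_t = np.where(improved, tf, best_t)
    return best - R, best_t
```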

We conclude the discussion of our model with a pair of remarks regarding implementation.

Remark 1: Choice of \({{\mathbb {X}}}_0\). In our model, we need to solve the Eikonal equation (7) for the minimal travel time, where \({{\mathbb {X}}}_0\) is chosen to be the set of municipalities from which the loggers depart. We also need to solve (10) for the \(\lambda \)-optimal value function \(u^\lambda \), where \({{\mathbb {X}}}_0\) represents the set of municipalities that loggers transport timber to. As is briefly mentioned in Sect. 2.1, we do not require loggers to return to the same municipality from which they start. From the patrol perspective, it may not be clear which municipality the loggers will choose. Accordingly, we let \({{\mathbb {X}}}_0\) be the set of all fifteen municipalities in both (7) and (10). The resulting optimal profit for loggers reflects their freedom to choose their starting and terminal municipalities. In the case that the loggers are required to return to their starting point, one could perform 15 rounds of calculation, each of which takes \({{\mathbb {X}}}_0\) to be a singleton corresponding to one municipality in both Eqs. (7) and (10). In all our simulations, we use the former setup, which always gives profit no less than the latter one, and therefore helps the government to prepare for the worst case scenario.

Remark 2: Optimal paths. We find the optimal paths from \(x\in \varOmega \) to the set \({{\mathbb {X}}}_0\) by following the negative gradient directions of the different value functions. That is, to find the minimum cost path (optimal inward path) from x to \({{\mathbb {X}}}_0\) in the absence of patrols, we integrate \( {\dot{\mathbf{x}}} = -v(\mathbf{x})\frac{\nabla R(\mathbf{x})}{|\nabla R(\mathbf{x})|}\). To find the optimal path (optimal outward path) when loggers carry timber and travel outward, we integrate \({\dot{\mathbf{x}}} = -v(\mathbf{x})\frac{\nabla u^\lambda (\mathbf{x})}{|\nabla u^\lambda (\mathbf{x})|}\), where \(\lambda \) is the optimal value for the extraction point x.
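As an illustration of this gradient-descent path extraction, the sketch below integrates \({\dot{\mathbf{x}}} = -v\,\nabla u/|\nabla u|\) with forward Euler on a bilinearly interpolated gradient. The step size, stopping test, and function names are illustrative choices of ours, not the exact implementation used for the figures.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def trace_path(u, v, x0, xs, ys, dt=0.5, max_steps=20000, tol=1e-3):
    """Follow the negative gradient of a value function u (e.g. R or
    u^lambda) from x0 until u is numerically zero, i.e. we reach X_0.

    u, v : (ny, nx) arrays on the grid given by 1-D coordinates xs, ys.
    x0   : starting point (x, y).
    """
    gy, gx = np.gradient(u, ys, xs)               # du/dy, du/dx on the grid
    interp = {name: RegularGridInterpolator((ys, xs), f, bounds_error=False,
                                            fill_value=None)
              for name, f in [("u", u), ("v", v), ("gx", gx), ("gy", gy)]}
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(max_steps):
        p = (x[1], x[0])                          # interpolators take (y, x)
        if interp["u"](p).item() < tol:           # reached the target set
            break
        g = np.array([interp["gx"](p).item(), interp["gy"](p).item()])
        norm = np.linalg.norm(g)
        if norm == 0.0:                           # flat spot; stop
            break
        x = x - dt * interp["v"](p).item() * g / norm   # Euler step
        path.append(x.copy())
    return np.array(path)
```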

In our model, the optimal logging time differs between locations. However, we note that the optimal path is the same regardless of the logging time, because the logging time (and the amount of timber carried) influences the velocity uniformly. Thus, the path is traversed more slowly when a larger logging time is used, but the spatial location of the path is the same.

Finally, one of the basic assumptions of the model is that the velocity is isotropic, meaning that it depends only on position, not on the direction of motion. Because of this, the optimal path between two points can be determined regardless of which is the starting point and which is the ending point. If we chose an anisotropic velocity—for example, if the downstream and upstream velocities on a river were different—this would no longer be true, and we would need to compute incoming and outgoing paths separately, which would require additional PDEs similar to (7) and (10), but formulated with reversed orientation.

Numerical methods

We need to solve two kinds of PDE in our model, namely the standard Eikonal equations (7) and (10), and the auxiliary PDEs (12). In this paper, the region \(\varOmega \) is the state of Roraima, which is irregularly shaped. We use a uniform Cartesian grid to discretize a rectangular region in \(\mathbb {R}^2\) containing \(\varOmega \). As mentioned in Sect. 2.2, we choose \({{\mathbb {X}}}_0\) to be the set of all 15 municipalities in the state of Roraima. This applies to all three equations (7), (10) and (12). To mark the boundary of \(\varOmega \), we set the velocity to zero outside of \(\varOmega \), which makes it impossible for paths to leave the region.

To approximate the equations, one can apply standard numerical methods for static Hamilton–Jacobi equations [10]. Two of the most popular methods are fast-marching and fast-sweeping schemes. Fast-marching methods are based on the idea of following characteristic flow and updating values at grid nodes monotonically based on the values at neighboring nodes [33,34,35, 39]. With the proper choice for the order of node updates, the fast-marching method can approximate the value function at N grid points with the computational cost of \(O(N\log N)\). By contrast, the philosophy of fast-sweeping methods is to account for all possible directions of characteristic flow, and sweep through the grid nodes in alternating directions updating values at nodes in a Gauss–Seidel manner. Each sweep captures the correct characteristic flow for some subset of the nodes, and this process is iterated until convergence [6, 27, 38, 40, 42].

We opted for the basic fast-sweeping method presented in [27]. While the fast-marching method may be more efficient, the standard fast-sweeping scheme is sufficient for our purposes and is very easy to implement. If efficiency is a concern, and one still prefers fast-sweeping methods, one may be able to parallelize the computation as in [41], though we did not do this. Alternatively, there are some novel fast methods to solve Eikonal equations such as [8], which uses a hybrid fast-marching and fast-sweeping approach.
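For concreteness, here is a minimal Python implementation of the basic first-order fast-sweeping update for an Eikonal equation written as \(|\nabla u| = f\) with \(f = K/v\); it is a sketch of the standard scheme [40], not the rotating-grid variant of [27]. To approximate (7), one would take \(f = \alpha /v\) inside \(\varOmega \) (with a very large value where \(v = 0\)) and let source_mask mark the municipalities.

```python
import numpy as np

def fast_sweep_eikonal(f, h, source_mask, n_sweeps=50, tol=1e-8):
    """Solve |grad u| = f with u = 0 on source_mask by fast sweeping.

    f           : (ny, nx) array of right-hand-side values (= K/v).
    h           : grid spacing.
    source_mask : (ny, nx) boolean array marking the set X_0.
    """
    ny, nx = f.shape
    big = 1e10
    u = np.full((ny, nx), big)
    u[source_mask] = 0.0
    sweeps = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        change = 0.0
        for rows, cols in sweeps:                 # 4 alternating orderings
            for i in rows:
                for j in cols:
                    if source_mask[i, j]:
                        continue
                    a = min(u[i - 1, j] if i > 0 else big,
                            u[i + 1, j] if i < ny - 1 else big)
                    b = min(u[i, j - 1] if j > 0 else big,
                            u[i, j + 1] if j < nx - 1 else big)
                    fh = f[i, j] * h
                    if abs(a - b) >= fh:          # update from one side only
                        u_new = min(a, b) + fh
                    else:                         # two-sided Godunov update
                        u_new = 0.5 * (a + b + np.sqrt(2.0 * fh**2
                                                       - (a - b)**2))
                    if u_new < u[i, j]:
                        change = max(change, u[i, j] - u_new)
                        u[i, j] = u_new
        if change < tol:                          # converged
            break
    return u
```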

Implementation and results

In this section, we apply the optimization solver to our model as described in Sect. 2.2. We start with a detailed description of the benefit function, velocity function and evaluation metrics in Sect. 3.1 and follow with an analysis of the numerical results in Sect. 3.2.

Experimental setup

High-fidelity inference of the benefit requires domain knowledge of the Brazilian forest, and in this paper, we simply construct the benefit function based on the PRODES dataset [24]. We make the assumption that deforestation for agricultural land clearance only takes place within 50 kilometers of the major highways and treat all other deforestation events as the result of logging; these are marked using red circles in Fig. 3a. We then design the benefit based on the further assumption that high benefit gives rise to high event frequency within a region. Specifically, we use the same technique as in kernel density estimation [28] by assigning a two-dimensional Gaussian to each event. From the PRODES data, we also construct a binary indicator function of the tree coverage of the region. We then generate the logging benefit by linearly combining the generated density function and the binary indicator function, as shown in Fig. 3c. Our approach may give a reasonable but not fully accurate evaluation of the true benefit, as features like distance to municipalities and patrols are not incorporated. However, the simpler benefit model allows us to focus on exploring loggers’ behavior under the influence of other factors. The inverse problem of recovering the benefit function from the deforestation event data, travel distance and capture risk may be of interest in its own right. Moreover, as shown in Fig. 3a, many logging events lie in the peripheral areas of the state of Roraima, providing further evidence that these regions have benefit high enough to be worth the long-distance travel. In practice, local governments may be able to design a more realistic benefit function by incorporating more granular data involving the types of trees and other vegetation in specific areas.
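A minimal version of this construction might look as follows. Summing an isotropic Gaussian at every event pixel is equivalent to Gaussian-blurring the event indicator, so we use a filter; the normalization to a maximum benefit of 10 follows the description above, while the array names and the mixing weight are placeholders of ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_benefit(event_mask, tree_cover, sigma=20.0, weight=0.5, b_max=10.0):
    """Combine a KDE of logging events with binary tree coverage.

    event_mask : (ny, nx) array, 1 at logging-event pixels, else 0.
    tree_cover : (ny, nx) binary indicator of tree coverage.
    """
    # KDE with a 2-D Gaussian of standard deviation sigma at each event.
    density = gaussian_filter(event_mask.astype(float), sigma=sigma)
    density /= density.max()                      # scale density to [0, 1]
    B = weight * density + (1.0 - weight) * tree_cover   # linear combination
    return b_max * B / B.max()                    # normalize max benefit to 10
```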

Fig. 3

Panels show a red circles marking logging events (years 2001–2015) and yellow dots marking the 15 municipalities; b the binary indicator function of tree coverage (yellow) from 2015 PRODES data and c the constructed benefit. The benefit is constructed by combining a density function and a binary indicator function, normalized to have maximum benefit 10. The density function is constructed using kernel density estimation with a two-dimensional Gaussian with standard deviation 20 for the logging events in panel (a)

Next, since the logging model is quite sensitive to the transportation system, we design a velocity field, shown in Fig. 4, to accurately capture the movement of loggers throughout the region. Using the highway and waterway map from OpenStreetMap [25], we assign the velocities 1, 0.7 and 0.4 to major highways, secondary highways and waterways, respectively. This reflects the assumption that loggers use trucks and cargo ships to transport timber. Outside of these regions, we use a velocity model based on the local slope of the terrain. Specifically, we use elevation data from the Shuttle Radar Topography Mission [13], and set the slope \(S(x,y) = |\nabla \mathcal{E}(x,y)|\), where \(\mathcal{E}(x,y)\) is the elevation map of the region. The velocity is then given as 0.2 times a function of local slope as described by Arnold et al. [2], who based their velocity function on that of Irmischer and Clarke [15]. Note that in reality, the velocity on waterways may be anisotropic, diverging from the isotropic assumption used in our model. One may generalize the model to incorporate this more realistic scenario and arrive at anisotropic Hamilton–Jacobi equations similar to Eqs. (7) and (10), which can be solved similarly.
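The assembly of this field can be sketched as below. The exact slope-to-speed profile of Irmischer and Clarke [15], as adapted in [2], is not reproduced here; the off-road expression is a hedged stand-in with the right qualitative shape (fastest on flat ground, decaying with slope, peak off-road speed 0.2).

```python
import numpy as np

def build_velocity(elev, major_hwy, secondary_hwy, waterway, h=1.0):
    """Assemble the isotropic speed field of Fig. 4.

    elev : (ny, nx) elevation map; the *_hwy / waterway inputs are binary
    masks rasterized from the OpenStreetMap road and river layers.
    """
    gy, gx = np.gradient(elev, h)                 # finite-difference gradient
    slope = np.hypot(gx, gy)                      # S(x, y) = |grad E(x, y)|
    # Placeholder off-road profile (stand-in for the Irmischer-Clarke-based
    # function of [2]): peak 0.2 on flat ground, decaying with slope.
    v = 0.2 * np.exp(-(slope / 0.5) ** 2)
    v = np.where(waterway, 0.4, v)                # waterways
    v = np.where(secondary_hwy, 0.7, v)           # secondary highways
    v = np.where(major_hwy, 1.0, v)               # major highways
    return v
```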

Fig. 4

Velocity field in Roraima. Velocity on major highways, secondary highways and waterways is assigned to be 1, 0.7 and 0.4 respectively. Velocity in off highway and off water areas depends on change of elevation

The model may help in the design or evaluation of geographically targeted patrol strategies. Many governments, including the Brazilian government, endeavor to combat deforestation by designating geographic areas as protected areas or priority areas for monitoring and enforcement. However, identifying where protection should be targeted presents challenges for policymakers. Our model evaluates the efficiency with which targeted patrol strategies reduce deforestation using three metrics (a short computational sketch follows the list):

1. Pristine area ratio PA: we define the regions with non-positive profit as pristine area. PA is the ratio of the pristine area to the area of the state, \(PA = \frac{ \int _{\varOmega }{\mathbf {1}}_{\{P(x) \le 0\}}\mathrm{d}x}{ \int _{\varOmega }1 \mathrm{d}x}\).

2. Pristine benefit ratio PB: this metric weighs the pristine area by benefit, \(PB =\frac{ \int _{\varOmega }B(x){\mathbf {1}}_{\{P(x) \le 0\}}\mathrm{d}x}{ \int _{\varOmega }B(x) \mathrm{d}x}\), and represents the ratio of the benefit within the pristine area to the total benefit.

3. Weighted profit WP: we interpret the positive part of the profit as the probability density for loggers to choose the logging location. We then define WP as the expected profit, \(WP = \frac{\int _{\varOmega } P_+(x)^2\mathrm{d}x}{\int _{\varOmega }P_+(x)\mathrm{d}x}\), where \(P_+(x) = P(x){\mathbf {1}}_{\{P(x)\ge 0\}}\) is the non-negative part of the profit.
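On the uniform grid, the three metrics reduce to a few array reductions (the cell area cancels in each ratio); the following sketch, with naming of our own, computes them from the profit and benefit arrays.

```python
import numpy as np

def patrol_metrics(P, B, domain_mask):
    """Compute PA, PB, WP from profit P and benefit B on a uniform grid.

    domain_mask : boolean array selecting grid cells inside the state.
    """
    pristine = (P <= 0) & domain_mask             # non-positive profit cells
    PA = pristine.sum() / domain_mask.sum()       # pristine area ratio
    PB = B[pristine].sum() / B[domain_mask].sum() # pristine benefit ratio
    P_plus = np.where(domain_mask, np.maximum(P, 0.0), 0.0)
    WP = (P_plus**2).sum() / P_plus.sum()         # weighted (expected) profit
    return PA, PB, WP
```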

We run the model on a \(600\times 600\) grid. We discretize \(\lambda \) and \(t_{\text {log}}\) into 101 levels and set \(\mu = \frac{2}{(5\max _{x\in \varOmega }d(x))} \approx 7.33\times 10^{-7}\), \( T = 2{,}000{,}000\), and set \(\alpha \) to be \(\mu \) on the highways and \(0.7\mu \) otherwise.

Results

We test our model with different patrol budgets and patrol strategies. We also explore the influence of logging time and changing the velocity when traveling with goods.

Example 1: No patrol

We first consider the case with no patrol. Recall that the returning velocity is modified to be \(v(x)/(1 + c(t_{\text {log}}/T)^\gamma )\) to account for the influence of carrying timber. When there is no patrol, i.e., \(\psi (x) = 0\), the optimal paths traveling in and traveling out are the same. When, additionally, the amount of timber has no influence on velocity, i.e., \(c = 0\), the loggers will always use the maximal logging time T. The former statement generally fails when patrol is present, and the latter when \(c\ne 0\). The resulting nonnegative profit \(P_+\) is shown in Fig. 5a. We then test \(c=0.5\), \(\gamma = 1\) and \(c=1\), \(\gamma = 1.5\). As shown in Fig. 5, larger c gives a harsher velocity penalty and thus leads to smaller profit. In all of the following experiments, we fix \(c = 0.5,\,\gamma =1\).

Fig. 5

Expected nonnegative profit \(P_+\) when no patrol is imposed. Returning velocity depends on trees obtained and is defined to be \(v(x)/(1 + c(t_{\text {log}}/T)^\gamma )\). The weighted profit is a 2.4091, b 2.3499, c 2.2622

Example 2: Comparison of different budgets

Recall that we impose a budget constraint for patrol following Eq. (1). We assume that patrol uses up all of the resources available and hence we impose equality in Eq. (1) for all our simulations. In this example, we set

$$\begin{aligned} \psi (x) =\frac{ E}{ (1 + \mu d(x))^5 \int _{\varOmega } (1 + \mu d(x))^{-3}\mathrm{d}x}. \end{aligned}$$
(15)

Recall that d(x) is the Euclidean distance to the major highways; the above patrol simply puts more effort on regions closer to major highways, with the exponents chosen so that (1) holds with equality (this is Eq. (16) below with \(r = 5\)). The capture intensity is plotted in Fig. 9c and will be discussed further in Sect. 3.2.4. We consider values of E set to 0.001, 0.003 and 0.005. The resulting nonnegative profit is plotted in Fig. 6. As expected, a higher budget yields lower profit. In all of the following experiments, we fix E at 0.003.
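Each of the patrol strategies in this section is an unnormalized spatial profile rescaled so that the budget constraint (1) holds with equality. A generic normalization sketch (names are ours):

```python
import numpy as np

def normalize_to_budget(raw_psi, d, E, mu, cell_area):
    """Rescale a raw intensity profile so that Eq. (1) holds with equality.

    raw_psi : unnormalized capture intensity on the grid.
    d       : Euclidean distance to major highways at each grid cell.
    """
    weight = (1.0 + mu * d) ** 2                  # per-cell patrol cost factor
    spent = (raw_psi * weight).sum() * cell_area  # current budget usage
    return raw_psi * (E / spent)

# Example: raw_psi = (1 + mu * d) ** (-r) reproduces Eq. (16).
```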

Fig. 6

Expected nonnegative profit \(P_+\) with different budget E while fixing the other parameters

Example 3: Influence of patrol on logging time

We use the same experimental setup as in the previous example, and we fix \(E=0.003\). Recall that we discretize the logging time and search for the best logging time by a parameter sweep; in all of the experiments, we use 101 levels. We plot the optimal logging time in Fig. 7a, where the values represent the proportion of the maximal time T used. We then sample four points in this region that achieve optimal logging times of \(50\%,\,60\%,\,70\%\) and \(80\%\) of T, respectively; they are marked as red points in Fig. 7a. Figure 7b shows the profit as a function of logging time at each point, and we see that each point attains a different optimal logging time. Note that when \(c=0\), i.e., when the timber carried has no influence on the travel velocity, the optimal time depends only on the capture intensity and equals \(\min \{1/\psi (x),\, T\}\). When c is nonzero, the optimal logging time is more complicated, as both benefit and travel time now play a role in addition to patrol.
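The closed form for \(c=0\) follows from a single differentiation: in that case the only \(t_{\text {log}}\)-dependent factor in (13) is \(\frac{t_{\text {log}}}{T}e^{-\psi (x)t_{\text {log}}}\), and

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\left( \frac{t}{T}e^{-\psi (x)t}\right) = \frac{1}{T}\left( 1 - \psi (x)t\right) e^{-\psi (x)t} = 0 \quad \Longleftrightarrow \quad t = \frac{1}{\psi (x)}, \end{aligned}$$

so the unconstrained maximizer is \(1/\psi (x)\), which is then clipped to the admissible interval [0, T], giving \(\min \{1/\psi (x),\,T\}\).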

Fig. 7

a Optimal logging time (as a fraction of T, shown via false color) for regions with positive benefit. Red points mark sampled points with optimal logging times of \(80\%, 70\%, 60\%, 50\%\). b Profit vs. logging time at each of the sampled points from panel (a)

Example 4: Comparison of different patrol strategies

The enforcement strategy in Brazil during 2003–2012 involved a combination of satellite and ground patrols, and achieved a reduction in deforestation. Patrols were active in areas with significant deforestation, the so-called “priority” municipalities. Patrols were also sent where the satellite system revealed suspiciously high deforestation [16]. In this example, we compare the patrolling efficiency of different capture intensity functions \(\psi (x)\), all of which exhaust the budget E (fixed at 0.003) in Eq. (1). We plot the corresponding capture intensity function, profit \(P_+\) and optimal time for each experiment and summarize the evaluation based on the aforementioned metrics in Table 1.

First, we consider a patrol only based on distance to roads by setting

$$\begin{aligned} \psi (x) =\frac{ E}{ (1 + \mu d(x))^r \int _{\varOmega } (1 + \mu d(x))^{2-r}\mathrm{d}x}, \end{aligned}$$
(16)

where r is chosen from the values 1, 5, 15. We focus on regions that are close to roads, as logging and patrol costs are low in these regions. Larger r means the patrol is more concentrated near the highways, while smaller r leads to more uniformly distributed patrol. Figures 8, 9 and 10 exhibit the corresponding capture intensity, profit \(P_+\) and optimal time. When \(r = 1\), the optimal logging time is more uniformly distributed than for larger values of r. In high benefit regions (away from major highways), the optimal logging time and the profit markedly increase with increasing r, as loggers are less likely to be captured away from major roads. Since all three profit plots in Figs. 8, 9 and 10 show that high benefit regions are high profit regions, we are motivated to incorporate the benefit into the patrol strategy.

Fig. 8

a Capture intensity is based on distance only, with \(r = 1\) in Eq. (16). b Expected nonnegative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Fig. 9

a Capture intensity is based on distance only, with \(r = 5\) in Eq. (16). b Expected nonnegative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Fig. 10

a Capture intensity is based on distance only, with \(r = 15\) in Eq. (16). b Expected non-negative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Next, we set

$$\begin{aligned} \psi (x) =\frac{B(x)^w E}{\int _{\varOmega } (1 + \mu d(x))^2 B(x)^w \mathrm{d}x}, \end{aligned}$$
(17)

where w is chosen to be \(1,\,0.5,\) or 0.2, so that the high benefit regions are targeted. As in the previous example, larger w represents more concentrated patrol. The results are shown in Figs. 11, 12 and 13. Clearly, the intense patrol in high profit regions makes those regions less vulnerable. Meanwhile, profitable regions now cluster around highways, where both the initial benefit and the travel cost are relatively low.

Fig. 11

a Capture intensity is based on benefit only, following equation (17), where \(w=1\). b Expected nonnegative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Fig. 12

a Capture intensity is based on benefit only, following equation (17), where \(w=0.5\). b Expected non-negative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Fig. 13

a Capture intensity is based on benefit only, following equation (17), where \(w=0.2\). b Expected non-negative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Previous experiments inform us that we need to balance benefit and distance when designing a patrol strategy. Here, we set the patrol to be

$$\begin{aligned} \psi (x) =\frac{B(x)^w E}{(1 + \mu d(x))^r\int _{\varOmega } (1 + \mu d(x))^{2-r} B(x)^w\mathrm{d}x}. \end{aligned}$$
(18)

We test this patrol with \(w = 0.2\), \(r = 1,\,5,\,15\), plotted in Figs. 14, 15 and 16. The statistics in Table 1 confirm that forests are better protected when patrol strategies take both distance and benefit into account (see (7)–(9) compared with (1)–(6) in Table 1). Comparing the patrol results based on distance only (results (1)–(3)) to those based on strategy (18) (results (7)–(9)) in Table 1, it is clear that the “optimal” attention we should give to small-distance regions, reflected by r, may vary with w, i.e., the attention that high benefit regions receive. When we ignore the benefit, the weighted profit (WP) and pristine benefit (PB) metrics indicate that smaller r is better. When benefit is taken into consideration and \(w=0.2\), the statistics suggest that a moderately large concentration along highways is more appropriate. Moreover, the three metrics are not necessarily positively or negatively correlated. For example, both the pristine area (PA) and the weighted profit (WP) of Fig. 16 are larger than those of Figs. 14 and 15, though these two metrics move in opposite directions in many cases. This feature adds to the complexity of finding the “optimal” patrol strategy.

Fig. 14

a Capture intensity is based on benefit and distance, with \(w=0.2, r=1\) in Eq. (18). b Expected non-negative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Fig. 15

a Capture intensity is based on benefit and distance, with \(w=0.2, r=5\) in Eq. (18). b Expected non-negative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Fig. 16

a Capture intensity is based on benefit and distance, with \(w=0.2, r=15\) in Eq. (18). b Expected nonnegative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Finally, we consider a patrol strategy that, in addition to targeting highways and high benefit regions, puts more effort along specific waterways than all previous strategies. The selected waterways are marked in Fig. 17a with blue curves; the next experiment shows them to be popular routes included in many optimal paths. We define a new distance function \({{\hat{d}}}(x)\) which calculates the minimum Euclidean distance to the major highways and the selected waterways. We modify the patrol defined in Eq. (18) to obtain the following strategy:

$$\begin{aligned} \psi (x) =\frac{B(x)^w E}{(1 + \mu {\hat{d}}(x))^r\int _{\varOmega } B(x)^w(1 + \mu d(x))^{2}(1 + \mu {\hat{d}}(x))^{-r}\mathrm{d}x}. \end{aligned}$$
(19)

In Fig. 17, we plot the results when \(w = 0.2,\, r = 15\). As loggers have to reroute to avoid heavy patrol (see Fig. 18d), the profit is less than that of the previous strategies.

Fig. 17

a Capture intensity is based on benefit and distance to both highways and waterways, with \(w=0.2, r=15\) in Eq. (19). b Expected nonnegative profit \(P_+\) over the entire region. c Optimal logging time plotted on regions with positive benefit

Table 1 Experimental results with different patrol strategies. Evaluation metrics WP, PA, PB are defined in Sect. 3.1

All numerical experiments show that both distance and potential benefit are important factors for patrol allocation. For now, we do not have a method to find optimal patrol strategies, but our model can be applied to evaluate and compare different strategies.

Example 5: Optimal paths

Finally, we calculate and compare optimal paths for illegal loggers in the state of Roraima. We randomly sample 500 target locations with probability proportional to the expected profit shown in Fig. 8b and plot the optimal path going to each of these points (Fig. 18a). We also plot the optimal paths returning from those points under different patrol strategies (Fig. 18b–d). As previously discussed, the optimal paths going to the targets are the same regardless of the patrol. To highlight the differences, we plot the paths leaving the target points (blue curves) on top of those going to the target points (red curves), as shown in panels (b–d). Figure 18b shows that most of the optimal paths going in and returning are quite similar under the patrol plotted in Fig. 13. As we increase the variance of the capture intensity by focusing more on the northwest corner, as shown in Fig. 11, a major change is observed in Fig. 18c, where loggers choose a different route to avoid patrollers and return to a different municipality than the one from which they started. Still, we see that many optimal paths that go deeper into the forest in the northwest corner cluster into one trajectory; one reason is that the capture intensity is much more uniform than the velocity field due to the presence of rivers, so the fast travel along the river outweighs the risk of being captured. With this in mind, we design the patrol exhibited in Fig. 17, where the waterways that attract loggers are targeted. The corresponding optimal paths plotted in Fig. 18d unsurprisingly demonstrate substantial differences, which lead to the decrease of WP and the increase of PB in Table 1 and indicate the importance of a spatially targeted patrol.
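Sampling targets with probability proportional to profit can be done directly on the grid; a short sketch (indexing conventions are ours):

```python
import numpy as np

def sample_targets(P_plus, n=500, rng=None):
    """Sample grid cells with probability proportional to positive profit."""
    rng = np.random.default_rng() if rng is None else rng
    p = P_plus.ravel().astype(float)
    idx = rng.choice(p.size, size=n, replace=True, p=p / p.sum())
    return np.column_stack(np.unravel_index(idx, P_plus.shape))  # (row, col)
```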

Fig. 18

We sample 500 target points in the region and plot the optimal paths going into the forest (in a) and going out of the forest (in (b)–(d)), with the deployed patrol marked in the sub-title. We plot the leaving paths (blue) on top of the entering paths (red) in (b)–(d). Yellow dots mark the 15 municipalities

Conclusion

We have presented a control theoretic model to predict the practice of illegal loggers, including their travel paths and target locations. We consider logging events that are sufficiently lucrative for the criminals that they are willing to incur some risk of being caught. The criminals balance the risk against the benefit in order to find an optimal logging time. Our model quantifies the intensity of a logging event, as opposed to a binary model where deforestation events clear all the trees from an area. We believe this better approximates reality, since the PRODES dataset [24] indicates the occurrence of past deforestation events in many locations where trees are still present. We detailed the underlying mathematical formalism, including numerical schemes which can be used to simulate the model. Finally, we tested the model with different values for the parameters and made observations comparing different patrol strategies.

We discuss a few directions for future work on this model. First, one of the basic assumptions is that the patrol strategy is known precisely by the loggers. This is a standard game-theoretic simplification, but is likely false in reality. Allowing for imperfect knowledge (perhaps using stochastic effects) may more accurately describe the differential game between the criminals and patrol. Some work of relevance exists on surveillance uncertainty in reach-avoid games [14]. Second, while the model can evaluate a suggested patrol strategy, in its current form it does not resolve the optimal patrol strategy. Designing a model that can resolve the optimal strategy, or even suggest a constructive method for improving a given suboptimal strategy would be a large step forward. Finally, the model described here is static. One could envision a time-series model wherein this is one stage in an on-going game, and the patrol strategy could change at discrete times. Describing this scenario in a realistic manner would likely require some qualitative changes to the model. Studying the long-time behavior could provide additional insight to the expected amount of deforestation over long stretches of time.

Our model is premised on the idea that efficient patrols against deforestation should be spatially targeted rather than uniformly applied across a territory. This assumption comports with the targeted nature of the deforestation enforcement policies used by many countries. However, the most efficient patrols we recover in our experiments suggest more precise spatial targeting of enforcement than that specified by most existing public policies. Such policies typically target administrative units (e.g., municipalities in Brazil) or other large swaths of forest. There are clear trade-offs between precise and blunt targeting, including challenges in patrol strategy implementation; communication of control strategies such that logging can be deterred; and the political costs of targeting. The tools developed in this article may be used to help researchers and policymakers study these trade-offs in order to improve the efficacy of deforestation control policy.

References

1. Albers, H.J.: Spatial modeling of extraction and enforcement in developing country protected areas. Resour. Energy Econ. (2010)

2. Arnold, D., Fernandez, D., Jia, R., Parkinson, C., Tonne, D., Yaniv, Y., Bertozzi, A.L., Osher, S.J.: Modeling environmental crime in protected areas using the level set method. SIAM J. Appl. Math. 79(3), 802–821 (2019). https://doi.org/10.1137/18M1205339

3. Assunção, J.J., Gandour, C., Rocha, R.: Deforestation slowdown in the Brazilian Amazon: prices or policies? (2012)

4. Azevedo-Ramos, C., Silva, J.N.M., Merry, F.: The evolution of Brazilian forest concessions. Elem. Sci. Anth. 3 (2015)

5. Bardi, M., Capuzzo-Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Springer, New York (2008)

6. Boué, M., Dupuis, P.: Markov chain approximations for deterministic control problems with affine dynamics and quadratic cost in the control. SIAM J. Numer. Anal. 36(3), 667–695 (1999)

7. Cartee, E., Vladimirsky, A.: Control-theoretic models of environmental crime. SIAM J. Appl. Math. 80(3), 1441–1466 (2020)

8. Chacon, A., Vladimirsky, A.: Fast two-scale methods for Eikonal equations. SIAM J. Sci. Comput. 34(2), A547–A578 (2012)

9. Crandall, M.G., Lions, P.L.: Viscosity solutions of Hamilton-Jacobi equations. Trans. Am. Math. Soc. 277(1), 1–42 (1983)

10. Falcone, M., Ferretti, R.: Chapter 23 - Numerical methods for Hamilton-Jacobi type equations. In: Abgrall, R., Shu, C.W. (eds.) Handbook of Numerical Methods for Hyperbolic Problems, Handbook of Numerical Analysis, vol. 17, pp. 603–626. Elsevier, Amsterdam (2016)

11. Fang, F., Jiang, A.X., Tambe, M.: Optimal patrol strategy for protecting moving targets with multiple mobile resources. In: Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems, pp. 957–964. International Foundation for Autonomous Agents and Multiagent Systems (2013)

12. Fang, F., Nguyen, T.H., Pickles, R., Lam, W.Y., Clements, G.R., An, B., Singh, A., Schwedock, B.C., Tambe, M., Lemieux, A.: PAWS–A deployed game-theoretic application to combat poaching. AI Mag. 38(1), 23–36 (2017). https://doi.org/10.1609/aimag.v38i1.2710

13. Farr, T.G., Rosen, P.A., Caro, E., Crippen, R., Duren, R., Hensley, S., Kobrick, M., Paller, M., Rodriguez, E., Roth, L., Seal, D., Shaffer, S., Shimada, J., Umland, J., Werner, M., Oskin, M., Burbank, D., Alsdorf, D.: The Shuttle Radar Topography Mission. Rev. Geophys. (2007). https://doi.org/10.1029/2005RG000183

14. Gilles, M.A., Vladimirsky, A.: Evasive path planning under surveillance uncertainty. Dyn. Games Appl. 10(2), 391–416 (2020)

15. Irmischer, I.J., Clarke, K.C.: Measuring and modeling the speed of human navigation. Cartogr. Geogr. Inf. Sci. 45(2), 177–186 (2017)

16. Jackson, R.: A credible commitment: reducing deforestation in the Brazilian Amazon, 2003–2012. Princeton University: Innovations for Successful Societies (2016). http://www.princeton.edu/successfulsocieties

17. Johnson, M.P., Fang, F., Tambe, M.: Patrol strategies to maximize pristine forest area. In: AAAI (2012)

18. Kamra, N., Gupta, U., Fang, F., Liu, Y., Tambe, M.: Policy learning for continuous space security games using neural networks. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence (2018)

19. Kar, D., Fang, F., Delle Fave, F., Sintov, N., Tambe, M.: A game of thrones: when human behavior models compete in repeated Stackelberg security games. In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pp. 1381–1390. International Foundation for Autonomous Agents and Multiagent Systems (2015)

20. Kar, D., Ford, B.J., Gholami, S., Fang, F., Plumptre, A.J., Tambe, M., Driciru, M., Wanyama, F., Rwetsiba, A., Nsubaga, M., Mabonga, J.: Cloudy with a chance of poaching: adversary behavior modeling and forecasting with real-world poaching data. In: AAMAS (2017)

21. Laurance, W.F., Albernaz, A.K.M., Schroth, G., Fearnside, P.M., Bergen, S., Venticinque, E.M., Da Costa, C.: Predictors of deforestation in the Brazilian Amazon. J. Biogeogr. 29(5–6), 737–748 (2002). https://doi.org/10.1046/j.1365-2699.2002.00721.x

22. McCarthy, S., Tambe, M., Kiekintveld, C., Gore, M.L., Killion, A.: Preventing illegal logging: simultaneous optimization of resource teams and tactics for security. In: AAAI Conference on Artificial Intelligence (2016)

23. Mitchell, I.M., Sastry, S.: Continuous path planning with multiple constraints. In: 42nd IEEE International Conference on Decision and Control (IEEE Cat. No. 03CH37475), vol. 5, pp. 5502–5507. IEEE (2003)

24. National Institute of Space Research (INPE): PRODES deforestation. Accessed through Global Forest Watch in 07/2019. www.globalforestwatch.org

25. OpenStreetMap contributors: Planet dump retrieved from https://planet.osm.org. https://www.openstreetmap.org (2017)

26. Osher, S., Sethian, J.: Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 79, 12–49 (1988)

27. Parkinson, C.: A rotating-grid upwind fast sweeping scheme for a class of Hamilton-Jacobi equations. arXiv preprint arXiv:2005.02962 (2020)

28. Parzen, E.: On estimation of a probability density function and mode. Ann. Math. Stat. 33(3), 1065–1076 (1962)

29. Pfaff, A.S.P.: What drives deforestation in the Brazilian Amazon? Evidence from satellite and socioeconomic data. J. Environ. Econ. Manag. 37(1), 26–43 (1999). https://doi.org/10.1006/jeem.1998.1056

30. Quantum GIS Development Team: Quantum GIS geographic information system (2017). http://qgis.osgeo.org

31. Schwanghart, W., Kuhn, N.: TopoToolbox: a set of Matlab functions for topographic analysis. Environ. Model. Softw. 25(6), 770–781 (2010)

32. Schwanghart, W., Scherler, D.: TopoToolbox 2: MATLAB-based software for topographic analysis and modeling in Earth surface sciences. Earth Surf. Dyn. 2(1), 1–7 (2014)

33. Sethian, J.A.: Fast marching methods. SIAM Rev. 41(2), 199–235 (1999)

34. Sethian, J.A., Vladimirsky, A.: Fast methods for the Eikonal and related Hamilton-Jacobi equations on unstructured meshes. Proc. Natl. Acad. Sci. 97(11), 5699–5703 (2000)

35. Sethian, J.A., Vladimirsky, A.: Ordered upwind methods for static Hamilton-Jacobi equations: theory and algorithms. SIAM J. Numer. Anal. 41(1), 325–363 (2003)

36. Slough, T., Urpelainen, J.: Public policy under limited state capacity: evidence from deforestation control in the Brazilian Amazon. Tech. rep., mimeo (2018)

37. Trevisani, P., Forero, J.: Illegal loggers undercut Brazil forest efforts. Wall Street J. (2020)

38. Tsai, Y.H.R., Cheng, L.T., Osher, S., Zhao, H.K.: Fast sweeping algorithms for a class of Hamilton-Jacobi equations. SIAM J. Numer. Anal. 41(2), 673–694 (2003)

39. Tsitsiklis, J.N.: Efficient algorithms for globally optimal trajectories. IEEE Trans. Autom. Control 40(9), 1528–1538 (1995)

40. Zhao, H.: A fast sweeping method for Eikonal equations. Math. Comput. 74(250), 603–627 (2005)

41. Zhao, H.: Parallel implementations of the fast sweeping method. J. Comput. Math. 25(4), 421–429 (2007)

42. Zhao, H.K., Osher, S., Merriman, B., Kang, M.: Implicit and nonparametric shape reconstruction from unorganized data using a variational level set method. Comput. Vis. Image Underst. 80(3), 295–314 (2000)


Acknowledgements

This research is funded by NSF grant DMS-1737770 and an academic grant from the National Geospatial-Intelligence Agency (Award No. # HM0210-14-1-0003, Project Title: “Sparsity models for spatiotemporal analysis and modeling of human activity and social networks in a geographic context”), approved for public release, 20-578. The authors thank Raymond Chu, Yixuan (Sheryl) He, and Joseph McGuire for valuable discussions and for performing work related to modeling of Brazilian deforestation during the UCLA Mathematical Modeling REU in the summer of 2019. The authors would like to thank Alexander Vladimirsky for very helpful and thorough comments which improved the methodology and the exposition of the manuscript. The elevation data was downloaded from the Shuttle Radar Topography Mission [13]. This data was processed using the Quantum Geographic Information System [30], and imported to MATLAB using TopoToolbox [31, 32]. On behalf of all authors, the corresponding author states that there is no conflict of interest.

Author information

Corresponding author: Andrea L. Bertozzi.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Chen, B., Peng, K., Parkinson, C. et al. Modeling illegal logging in Brazil. Res Math Sci 8, 29 (2021). https://doi.org/10.1007/s40687-021-00263-6


Keywords

  • Illegal logging
  • Optimal path planning
  • Hamilton–Jacobi–Bellman equation
  • Optimal control
  • Fast sweeping method