Abstract
Resource assignment and scheduling models provide an automatic and fast decision-support system for wildfire suppression logistics. However, this process generates challenging optimization problems in many real-world cases, and the computational time becomes a critical issue, especially for realistic-size instances. Thus, to overcome that limitation, this work studies and applies a set of decomposition techniques, such as augmented Lagrangian, branch and price, and Benders decomposition, to a wildfire suppression model. Moreover, a reformulation strategy, inspired by Benders decomposition, is also introduced and demonstrated. Finally, a numerical study comparing the behavior of the proposals on different problem sizes is conducted.
1 Introduction
Many critical problems in disaster management and logistics can be formulated and solved using the Operations Research (OR) framework (Caunhye et al. 2012). Wildfires are a type of catastrophe with a high impact on today's world in humanitarian, economic, and, above all, ecological terms (European Commission 2020). As their frequency and magnitude are growing at an alarming rate, it is necessary to develop efficient methods to improve prevention, detection, and planning in the logistics related to wildfire suppression.
In Galicia (Northwest Spain), forest fires are a severe problem faced every year, caused by deforestation, arson, or the removal of local flora and fauna. With a surface area of just 29,574 \(\text {km}^2\), this region had more than 3550 fires per year between 2011 and 2015, with 42,392.17 hectares burned in 2011 alone. Typically, wildfires in Galicia remain active between 1 and 7 days, requiring more than 70 brigades, 50 engines, and 20 aircraft for the largest fires. The Regional Government of Galicia has to periodically manage (especially in summer) a large number of wildfire-fighting resources (Xunta de Galicia 2017). For instance, it deployed 7000 people, 360 motor pumps, and 30 aerial resources (25 of them helicopters) in the 2017 campaign. Consequently, finding a good resource assignment and schedule avoids overallocation, which is very important since resources are limited.
Formulating the problem above as a scheduling optimization model will help reduce the impact of wildfires through optimal resource planning. Although several research topics in the optimization of forest fire management have been proposed in recent years (Minas et al. 2012; Miller and Ager 2013; Martell 2015), analyzing wildfire extinction from the perspective of cost minimization is a long-studied topic, already introduced in works such as Headley (1916) and Sparhawk (1925). An overall theoretical framework that pursues resource management based on minimizing costs is C+NVC (Cost plus Net Value Change), proposed in Gorte and Gorte (1979). This model combines the goal of minimizing the cost of resources with the costs generated by burned land, loss of materials, and regeneration tasks. More recently, Donovan and Rideout (2003) complemented the C+NVC scheme with a deterministic programming model, which can establish the optimal planning regarding the number and type of resources needed to extinguish a forest fire from its initial detection.
Inspired by the work above and motivated by the peculiarities of the Galician fire-extinguishing system, Rodríguez-Veiga et al. (2018) proposed a Mixed-Integer Linear Programming (MILP) model to plan the resources involved in a single wildfire extinction. The model was designed to solve problems in the near future, where the uncertainty of the wildfire evolution is lower, for example, over a timeline of 8 h. This novel model included new features to manage resources, such as rest policies and initial conditions, and it can easily be applied to other similar scheduling problems.
On the other hand, in the real world, the resource planning associated with wildfire suppression is managed by a resource coordinator who oversees the schedules of all logistic operations. Mathematical optimization models help to perform these tasks by providing high-quality solutions that minimize the impacts caused by a wildfire. This implies that time is a critical issue: an optimization run that takes hours to reach optimality makes no sense while a wildfire is occurring. The resource coordinator must react quickly and cannot afford to wait a long time.
In this context, mathematical decomposition methods (Conejo et al. 2006) offer a good alternative to simplify the models and reduce the computational time. These techniques split a problem into more manageable subproblems based on its complicating variables or constraints (non-decomposable elements). However, solving these subproblems is not an easy task, and it is necessary to design tailor-made algorithms.
Therefore, this manuscript studies several reformulations, based in most cases on decomposition techniques, to improve the MILP model presented in Rodríguez-Veiga et al. (2018). In addition, we compare our proposals with directly solving the MILP problem above in terms of solution optimality and computation time. Concerning our contribution, we improve the solution process without adding intractable complexity to the mathematical model or the optimization procedure. Besides, our reformulation procedures can easily be exported to other optimization models related to optimal resource planning.
The organization of this document is as follows. Section 1.1 covers the related work, while Sect. 1.2 presents a brief revision of the optimization model to be addressed: a logistic scheduling model for wildfire suppression. Section 2 describes different ways to apply decomposition strategies to the previous model, and Sect. 3 proposes a customized method based on the Benders decomposition. In Sect. 4, the performance of our proposals is evaluated using simulated data of the wildfire model and solving instances of different sizes. Finally, Sect. 5 summarizes the main conclusions of our study.
1.1 Related work
Good planning of extinction resources generally reduces the costs and damages associated with forest fires. Accordingly, one of the most frequent topics in the wildfire literature is the scheduling problem. Such is the case presented in Wei et al. (2011), where a mixed-integer optimization model considers multiple fires, in addition to spatial and time data. Likewise, Ntaimo et al. (2012) proposed a stochastic integer programming (IP) model for the initial attack on as many fires as possible. In this manner, the authors determine the best resource allocation at each firefighting base to contain several forest fires as quickly as possible.
However, in our particular case, Rodríguez-Veiga et al. (2018) proposed a different approach to fire extinction. Our model considers all scheduling management (breaks, use policies) for all the resources involved in just one large-scale wildfire via a deterministic mixed-integer model. In this way, it is in charge of assigning the resources throughout the fire.
As stated above, applying decomposition techniques can improve the solving of optimization problems (Conejo et al. 2006). There are many examples of these techniques used in scheduling problems that contemplate recurring elements in the context of a wildfire.
Works such as Cordeau et al. (2001) or Papadakos (2009) have combined Benders decomposition and column generation methods to deal with optimal scheduling in air traffic. Although these works are interesting due to their similarities with our model (fleet assignments over time), their authors considered features, namely maintenance routing and crew pairing, that do not exist from our perspective.
Rios and Ross (2010) presented a decomposition scheme based on the Dantzig-Wolfe method to ease huge traffic flow scheduling problems. It involves both planning and routing decisions, contrary to our model, where we just consider optimal resource scheduling. This strategy yields one flight per subproblem, making it particularly suitable for parallel computing, and solves instances with a planning horizon of up to 3 h. In comparison, our proposal needs to handle instances with a longer planning horizon, though with fewer resources to allocate.
In addition, Romanski and Hentenryck (2016) considered prescriptive evacuation planning and route design for a region threatened by a natural disaster, such as a flood, forest fire, or hurricane. They proposed a Benders decomposition that generalizes the two-step approach proposed in previous works for converging evacuation plans. In contrast to our model, they need to deal with flow scheduling problems. Nevertheless, this work is a good example of applying a decomposition strategy to improve a model related to disaster management.
As far as we know, there is no work dedicated to implementing mathematical decomposition techniques in optimization models for resource planning in wildfires. We hope to contribute to simplifying the application of these effective strategies to different forestry optimization problems.
1.2 Problem statement: a wildfire suppression model
We begin from the wildfire extinguishment model presented in Rodríguez-Veiga et al. (2018). As in Donovan and Rideout (2003), we make the following assumptions:

Some characteristics of the firefighting resources are predefined, such as the fireline production rate, arrival time, or operating costs.

A resource will start a rest when it reaches its maximum working time. Also, resources can leave a specific wildfire just once.

The extinction time is discretized into 10-min periods, considered enough time for a resource to perform a task. This period length is the greatest common divisor of the rest periods (40 min), the activity periods (120 min), and the estimated time to travel to a rest point (10 min) according to Spanish regulation.

The model is designed to work over a single wildfire with a maximum planning horizon of 10 h, where predictions of its growth use smaller time intervals (e.g., 10 min). Currently, simulation tools such as FARSITE (Finney 1998) or Xeocode2 (Xunta de Galicia 2017) can create a predictive 10-h model through GIS information about the wildfire evolution. Thus, an advantage of our model is that it can easily be rerun and adapted to changes in the forecast, following a rolling-horizon approach (Sethi and Sorger 1991).

There is no direct interaction between the fire perimeter and the fireline construction. This assumption is valid because it constitutes a pessimistic (conservative) approach to wildfire containment.
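The period length used in the discretization assumption above can be verified as the greatest common divisor of the regulated durations (a trivial check; the durations are those quoted from the Spanish regulation):

```python
import math

# Regulated durations in minutes, as quoted in the assumptions above:
rest_period = 40       # length of a rest break
activity_period = 120  # maximum continuous activity
travel_to_rest = 10    # estimated travel time to a rest point

# The period length is the greatest common divisor of the three durations,
# so every regulated interval is a whole number of periods.
period = math.gcd(math.gcd(rest_period, activity_period), travel_to_rest)
print(period)  # -> 10 (minutes per period)
```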
A short description of this model is shown in the following paragraphs. Table 1 shows a definition of the different sets and parameters used in the model, and Table 2 describes the decision and auxiliary variables. The objective function and the constraints of the model are formulated as follows:
subject to
The objective function (1) minimizes the sum of the costs involved in the wildfire contention: the first term represents the variable costs for the use of the selected resources, the second term represents their selection costs, the third term represents the costs associated with the hectares of affected land, and the last term is included to penalize the breach of the minimum number of group resources in each period.
Regarding the constraints, (2) imposes that the total perimeter covered by the resources must be greater than the wildfire perimeter in the contained period, (3) establishes the conditions under which the wildfire can be contained or not in each period, (4)–(6) are related to the coherent beginning of resource activities, and (7) indicates when a resource ends its job with enough time to return to its operational base. Moreover, (8)–(13) correspond to making task assignation feasible according to legal regulations, and (14)–(15) determine the minimum and the maximum number of resources in the wildfire while the fire is not contained, respectively. Finally, (16)–(19) determine logical conditions that the variables must satisfy.
2 Applying decomposition methods to the problem
The following subsections explain in detail how to apply three popular decomposition techniques to the wildfire suppression model described above, hereinafter called the Original Problem.
2.1 Augmented Lagrangian decomposition
The Augmented Lagrangian (AL) is a framework of penalty methods extensively studied in recent decades (Hamdi and Mishra 2011). It simplifies the Original Problem by moving the complicating constraints to the objective function, multiplied by Lagrange multipliers and penalty constants. Furthermore, the iterative calibration of these penalties allows good-quality feasible solutions to be reached.
An AL problem can be decomposed for given values of these Lagrange multipliers, a procedure known as Augmented Lagrangian Decomposition (ALD). We adjusted this procedure, creating a set of different subproblems. In this way, each subproblem maps to a particular resource (aircraft, brigade, or machine) and considers the other resources' values as constants. Consequently, there are as many subproblems as resources in the model. The details of the implementation can be seen in "Appendix A".
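To make the ALD mechanics concrete, the following sketch applies the procedure to a deliberately small toy problem in which two blocks \(x\) and \(y\) play the role of two resources coupled by one complicating constraint \(x+y=2\). The objective, the constraint, and the penalty value are illustrative and unrelated to the wildfire model; each block is minimized in closed form with the other block held constant, mirroring the per-resource subproblems described above.

```python
# Toy problem:  min (x-1)^2 + (y-2)^2  subject to  x + y = 2.
# Augmented Lagrangian:
#   L(x, y, lam) = (x-1)^2 + (y-2)^2 + lam*(x+y-2) + (rho/2)*(x+y-2)^2
rho, lam = 10.0, 0.0   # penalty constant and Lagrange multiplier
x, y = 0.0, 0.0        # starting point

for _ in range(500):
    # Each block minimizes L with the other block fixed (closed form,
    # since the blocks are quadratic): this is the per-block subproblem.
    x = (2 - lam - rho * (y - 2)) / (2 + rho)
    y = (4 - lam - rho * (x - 2)) / (2 + rho)
    # Multiplier update on the residual of the complicating constraint.
    lam += rho * (x + y - 2)
    if abs(x + y - 2) < 1e-10:
        break

print(round(x, 4), round(y, 4))  # -> 0.5 1.5, the constrained optimum
```

The iterative calibration of `lam` drives the constraint residual to zero, recovering the optimum \((0.5, 1.5)\) without ever solving the coupled problem directly.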
2.2 Branch and price decomposition
Branch and price (Barnhart et al. 1998; Vanderbeck and Wolsey 1996; Vanderbeck and Savelsbergh 2006) is a method specifically conceived to handle large-scale integer programming problems. It is based on the well-known branch and bound algorithm: it explores feasible solutions, solving at each node a linear programming (LP) relaxation of the problem and branching according to the result. In the case of branch and price, only a subset of columns, usually related to the basic variables, remains in the LP relaxation. The other columns are omitted because they are too numerous and typically have an associated decision variable equal to zero in the optimal solution. When this method is used in conjunction with Dantzig-Wolfe Decomposition (DWD), we refer to it as a whole as Branch and Price Decomposition (BPD), a good alternative to decrease the complexity of large-scale MILP problems.
The DWD is a column generation procedure for continuous problems: instead of considering all the variables of the problem at once, at each step it solves a master problem containing only some active columns. Furthermore, subproblems are solved iteratively to determine the columns that must be added to the master problem to improve the objective function.
The BPD is suitable when the problem presents complicating constraints that prevent a distributed solution over several subproblems with a more tractable combinatorial structure, and it is specially designed to handle integer programming. In this way, the BPD algorithm applies the DWD procedure at each node of the branch and bound tree.
In our case, we want to decompose the Original Problem into several subproblems, each depending only on one resource \(i\in \mathcal {I}\), so that the resulting subproblems are more tractable. It is easy to identify that the complicating constraints preventing this decomposable structure are (2), (3), (14) and (15). Therefore, these constraints must be relaxed and appear penalized in the objective function of the subproblems.
BPD Master Problem
Suppose that we know p solutions of the problem, denoted \(SOL_{k}\) for \(k=1,\ldots ,p\). Then,
The BPD master problem can be formulated as follows:
subject to
where \(f^{(k)}\) is the value of the objective function (1) associated with \(SOL_{k}\),
\(\varvec{\alpha }_{k}\) is the binary variable determining whether solution k is selected, and \(h^{(k)(2)}\), \(h^{(k)(3)}_{t}\), \(h^{(k)(14)}_{gt}\) and \(h^{(k)(15)}_{gt}\) are the values of the complicating constraints (2), (3), (14) and (15) associated with solution \(SOL_{k}\), respectively.
BPD Subproblem
Suppose that we solve the LP relaxation of the BPD master problem,^{Footnote 1} and let us denote \(\lambda ^{(2)}\), \(\lambda _{t}^{(3)}\), \(\lambda _{gt}^{(14)}\), \(\lambda _{gt}^{(15)}\) and \(\sigma \) as the dual solution values associated with constraints (20), (21), (22), (23) and (24), respectively.
Then, the formulation of the (aggregated) BPD subproblem can be expressed as:
subject to
First, it is important to note that this BPD subproblem is decomposable by resource \(i\in \mathcal {I}\), yielding \(\vert \mathcal {I}\vert \) subproblems that can be solved independently. Once the subproblem is solved, depending on the value of the objective function associated with the new solution, one can determine whether to include this tentative new basic solution (associated with the variable \(\varvec{\alpha }\), which represents the weight given to each solution added to the master problem) in the BPD master problem. If the solution improves the objective function, then it is added to the BPD master problem, which is solved again, giving new dual solution values to update the BPD subproblem. This procedure is repeated until no more columns can be added. Note that the scheme we have just described must be applied at every node of the branch and bound procedure.
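To make the pricing step concrete, the following sketch computes the reduced cost that decides whether a candidate column enters the master problem, using the quantities defined above: the objective value \(f^{(k)}\), the complicating-constraint values \(h^{(k)}\), the dual values \(\lambda \), and the convexity dual \(\sigma \). The constraint labels and all numerical values are illustrative, not taken from the model.

```python
def reduced_cost(f_k, h_k, duals, sigma):
    """Reduced cost of candidate column SOL_k in the BPD master LP.

    f_k   : objective value f^(k) of the candidate solution
    h_k   : values h^(k) of the complicating constraints under SOL_k,
            keyed by a constraint label (plain strings here, for illustration)
    duals : dual values (lambda) of the corresponding master constraints
    sigma : dual value of the convexity constraint
    """
    return f_k - sum(duals[c] * h_k[c] for c in h_k) - sigma

# One pricing step: the column enters the master only if its reduced cost
# is negative (illustrative numbers).
candidate_f = 10.0
candidate_h = {"(2)": 3.0, "(3)_1": 1.0}
duals = {"(2)": 2.0, "(3)_1": 1.0}
enters = reduced_cost(candidate_f, candidate_h, duals, sigma=2.0) < 0
print(enters)  # 10 - (2*3 + 1*1) - 2 = 1 >= 0, so the column is rejected
```

In the full algorithm this test is performed for the best solution of each per-resource subproblem; columns with negative reduced cost are appended to the master, which is then re-solved to obtain fresh duals.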
2.3 Benders decomposition
Contrary to the previous methods, Benders decomposition (BD) (Benders 1962; Rahmaniani et al. 2017) divides the optimization problem based on its complicating variables: those variables which, when fixed, make the remaining problem relatively easy to optimize.
Applying BD to our wildfire suppression model, we split the problem into a master problem and a subproblem. On the one hand, the master problem selects the resources that are able to contain the wildfire (complicating variables). On the other hand, the subproblem obtains the best feasible solutions according to the rest policies and the resources' states at every moment.
In this way, we have adapted the BD method by adding a set of transformations to the Original Problem based on the following premise: if we know when a resource starts to work on the wildfire, we can determine its future states over the time horizon. That is, we can recalculate when this resource is flying towards the fire, in which periods it will work on the fire front, and during which periods it will need to rest according to break policies. Consequently, to take advantage of this idea, it is necessary to modify the Original Problem to facilitate separability. This new reformulation considers how resources must perform rest and travel periods from the start period until the last period (period m). Furthermore, this information is used to establish when resources must rest, travel, and work while they are in use, that is, until variable \(e_{it}\) takes the value 1.
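The premise above can be illustrated with a small sketch that recomputes a resource's rest-related states from its start period. It is a simplification under two assumptions not stated in the model: travel periods count toward the maximum working time \(WP_i\), and a travel-to-rest leg is launched as soon as the accumulated use plus \(TRP_i\) would reach \(WP_i\). The parameter values mirror those of Example 1 below.

```python
def rest_schedule(start, m, WP, RP, TRP):
    """Sketch of the hat-variable pattern (work / travel-to-rest / rest)
    for a resource starting at period `start`, over periods start..m.

    Assumes travel periods count against the working-time limit WP."""
    states = {}
    use = 0  # consecutive periods counted against WP
    t = start
    while t <= m:
        if use + TRP >= WP:           # must leave now to reach rest in time
            for _ in range(TRP):      # travel to the rest point
                if t > m:
                    break
                states[t] = "travel_rest"; t += 1
            for _ in range(RP):       # mandatory rest
                if t > m:
                    break
                states[t] = "rest"; t += 1
            for _ in range(TRP):      # travel back to the fire front
                if t > m:
                    break
                states[t] = "travel_rest"; t += 1
            use = TRP                 # the return leg counts as use
        else:
            states[t] = "work"; use += 1; t += 1
    return states

# Instance with WP=4, RP=1, TRP=1 over 9 periods, start in period 1:
sched = rest_schedule(start=1, m=9, WP=4, RP=1, TRP=1)
print([sched[t] for t in sorted(sched)])
```

With these parameters the sketch reproduces the rest pattern of Example 1: work in periods 1-3, travel to rest in period 4, rest in period 5, travel back in period 6, and work again from period 7.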
Reformulated Problem
The transformations performed on the Original Problem can be classified into two groups. First, three new kinds of variables have been added, associated with the traveling times of the resources: when a resource travels, it can be moving to the wildfire front, going to rest at the base, or leaving at the end of its work (see Table 3). Second, as stated before, if we know when a resource starts to work on the wildfire, we can determine how it should act over the timeline. To ease the handling of these computed periods, we have introduced a new group of variables, denoted by a hat: traveling associated with rest periods (\({\hat{\varvec{tr}}^{r}}_{it}\)), resting (\({\hat{\varvec{r}}}_{it}\)), ending break (\({\hat{\varvec{er}}}_{it}\)) and ending (\({\hat{\varvec{e}}}_{it}\)). Auxiliary variables \({\hat{\varvec{u}}}_{it}\), \({\hat{\varvec{w}}}_{it}\) and \({\hat{\varvec{cr}}}_{it}\) are also defined using the expressions of the Original Problem but replacing \(\varvec{tr}_{it}\), \(\varvec{r}_{it}\), \(\varvec{er}_{it}\) and \(\varvec{e}_{it}\) with \({\hat{\varvec{tr}}^{r}}_{it}\), \({\hat{\varvec{r}}}_{it}\), \({\hat{\varvec{er}}}_{it}\) and \({\hat{\varvec{e}}}_{it}\), respectively.
As a result of applying these transformations, the reformulated problem can be expressed as:
Moreover, the following constraints are introduced to establish a relation between the original variables and the new variables:
In order to explain the reformulated problem, we illustrate the performed transformations with an example.
Example 1
Let us consider a problem instance with a single aerial resource (\(\mathcal {I}= \{1\}\)) and a time horizon of 9 periods (\(\mathcal {T}= \{1, \ldots , 9 \}\)). Suppose that the resource starts without initial conditions (\(CWP_1\) \(=\) \(CRP_1\) \(=\) \(CUP_1\) \(=\) \(ITW_1\) \(=\) \(IOW_1\) \(=\) 0) and it requires 1 period to reach the wildfire from its starting location (\( A_1 = 1 \)). The resource performance will be 1 for all periods (\( PR_{1t} = 1 \) for all \( t \in \mathcal {T}\)). In addition, the maximum number of allowed periods without breaks is 4 (\( WP_1 = 4 \)), the duration of the break is limited to 1 (\( RP_1 = 1 \)), and the resource needs 1 travel period (\( TRP_1 = 1\)) to move between the fire front and the rest location.
In this context, suppose that the wildfire has an initial perimeter of 2 km and it grows 0.1 km per period (\( PER_1 = 2 \) and \( PER_t = 0.1 \) for all \( t \in \{2, \ldots , 9 \} \)).
Figure 1 shows the solution (active variables for each period) of the given instance to represent the main idea of the reformulated problem. The figure illustrates how the new variables split the model into two parts to satisfy the rest policy.

The new variables (those above the edges), denoted with a hat, represent how the resource must perform the rest periods, including the travel due to rest periods.

The original variables (those below the edges) represent how the resource works on the wildfire to contain it, ensuring that the travel and rest periods established by the new variables are performed.
Figure 1 shows how the resource starts to be used on the wildfire in the first period. At this moment, although variable \({\hat{\varvec{w}}}\) indicates that the resource could work, constraints (r4) and (29) force the resource to perform a starting flight before beginning to work. In periods 2, 3, and 7, the resource works on the wildfire, with variable \({\hat{\varvec{w}}} = 1\); constraint (29) ensures that the resource cannot fly during this time. During periods 4 and 6, the resource moves to take a break due to constraint (30). Something similar happens in period 5 with constraints (27) and (28). Finally, the resource contains the wildfire in period 7 and performs the ending flight in the next period (allowed for the same reasons as the starting flight).
It is important to note that the starting period is the same in both cases, since the auxiliary variables \({\hat{\varvec{u}}}\) are defined using \(\varvec{s}\).
Table 4 represents the evolution of the wildfire and the performance of the resource over the periods. Note that the resource contains the wildfire in period 7, so from this period on, the wildfire perimeter will be 0. In period 8, the resource could keep working, since that would satisfy the rest policy (\({\hat{\varvec{w}}}=1\)), but the wildfire is contained, so the resource must leave it (\(\varvec{tr}^e=1\)). To simplify the notation, in Table 4, we denote by \(FirePer_t\) the perimeter of the wildfire and by \(ResoPer_{t}\) the perimeter covered by the resources in each period, i.e., for all \(t\in \mathcal {T}\),
As shown, the reformulated problem is harder to solve than the Original Problem, since it combines integer variables and nonlinear constraints. However, the purpose is not to solve this problem directly but to improve its decomposability. The reformulated problem is constructed so that, when Benders decomposition is applied, the nonlinear constraints (27)–(30) are linearized by fixing the variables of the Original Problem.
Similar to the notation of a solution in Sect. 2.2, we define a solution for the Original Problem as:
In the case of the reformulated problem, a solution is defined similarly by adding values of the new variables at the end of the original solution vector:
To show the equivalence of the Original Problem and the reformulated problem, the following remark is introduced.
Remark 2.1
The auxiliary variables \(w_{it}\) are nonnegative for all \(i\in \mathcal {I}\) and \(t\in \mathcal {T}\).
Proof
This remark can be demonstrated by contradiction. First, from constraint (r18), we know that
Now, for the sake of contradiction, let us suppose that \(\varvec{w}_{it} < 0\). Then,
\(\square \)
The following proposition proves the equivalence between both problems, the Original Problem and the reformulated problem.
Proposition 2.2
Let \(SOL^{*}\) be a feasible solution of the Original Problem; then, there exists an associated feasible solution of the reformulated problem, \({\hat{SOL}}^{*}=(SOL^*, SOL_{R}^*)\). Furthermore, let \({\hat{SOL}}^{*}=(SOL^*, SOL_{R}^*)\) be a feasible solution of the reformulated problem; then, \(SOL^{*}\) is a feasible solution of the Original Problem.
Proof
Given a feasible solution of the Original Problem, \(SOL^*\), it is trivial to prove that it has an associated feasible solution in the reformulated problem by setting, for each \(i\in \mathcal {I}\) and \(t\in \mathcal {T}\), \(\varvec{tr}^{s}_{it}=\varvec{tr}^{e}_{it}=\varvec{tr}^{r}_{it}={\hat{\varvec{tr}}}^{r}_{it}=\varvec{tr}^{*}_{it}\), \({\hat{\varvec{r}}}_{it}=\varvec{r}^{*}_{it}\), \({\hat{\varvec{er}}}_{it}=\varvec{er}^{*}_{it}\) and \({\hat{\varvec{e}}}_{it}=\varvec{e}^{*}_{it}\).
In addition, we will prove that a feasible solution of the reformulated problem has an associated feasible solution in the Original Problem. Let us start by proving that constraint (r4) is equivalent to constraint (4). Due to constraint (31), for each \(i\in \mathcal {I}\) and for each \(t\in \mathcal {T}\),
The equivalence between constraints (r7) and (7) is also trivial.
In order to prove the equivalence between constraint (r8) and constraint (8), let us consider the case where \(ITW_{i} = 0\) and \(IOW_{i} = 0\) (the other case is analogous). Then, for each \(i\in \mathcal {I}\) and for each \(t\in \mathcal {T}\), we have
Now, if period \(t^{*}\in \mathcal {T}\) where \(\varvec{e}_{it^{*}} = 1\) is considered, for all \(t\le t^{*}\),
where the second equation is because of constraints (27) and (28). The third equality is due to the following:

If \({\hat{\varvec{r}}}_{it}=1\), by definition of \({\hat{\varvec{w}}}_{it}\) and Remark 2.1, it is clear that the variable \({\hat{\varvec{u}}}_{it}=1\). Otherwise, if \({\hat{\varvec{r}}}_{it}=0\), the equality is trivial.

If \({\hat{\varvec{er}}}_{it}=1\), then by constraint (r10) (if \(t \ge RP_{i}\)) or by constraint (r11) (if \(t < RP_{i}\)), it can be deduced that \({\hat{\varvec{r}}}_{it}=1\). Then, applying the previous item, we know that \({\hat{\varvec{u}}}_{it}=1\). Otherwise, if \({\hat{\varvec{er}}}_{it}=0\), the equality is trivial.
In the cases where \({\hat{\varvec{u}}}_{it} = 1\), considering the definitions of the auxiliary variables \(\varvec{u}_{it}\) and \({\hat{\varvec{u}}}_{it}\), we have that
and therefore^{Footnote 2}
proving that if \({\hat{\varvec{r}}}_{it}=1\) or \({\hat{\varvec{er}}}_{it}=1\), then \(\varvec{u}_{it}=1\) for all \(t\le t^*\).
Hence, we have proved that constraint (8) holds for all \(t\le t^*\),
Otherwise, if \(t>t^{*}\), the proof is similar since the expression of \(\varvec{cr}_{it^{\prime }}\) takes the same value for periods \(t^{\prime } > t^{*}\). This is because \(u_{it^{\prime }} = 0\) for all \(t^{\prime } > t^{*}\),
In the second equality, we use that \(e_{it^{\prime }} = 0\) for all \(t^{\prime } \ne t^{*}\) and constraints (27) and (28). The third equality can be proven using a procedure similar to that used for the case \(t\le t^*\). The fourth equality is because of the definition of \(\varvec{u}_{it}\) and the fact that \({\hat{\varvec{u}}}_{it} \ge 0\) by constraint (18), which implies that \(\sum _{t^{\prime } \in \mathcal {T}^{t}} \varvec{s}_{it^{\prime }} = 1\). Finally, the fifth equality results from the fact that \({\hat{\varvec{r}}}_{it}=0\) and \({\hat{\varvec{er}}}_{it}=0\) for all \(t>t^*\).
Hence, we have proven that constraint (8) also holds for all \(t> t^*\),
The proof of equivalence related to constraints (r9)–(r12) is similar to those already proved, but the following considerations are important:

1.
The equivalence between constraint (r9) and (9) can be proven by analyzing three different situations: \(t \in (-\infty , t^*- RP_i + 1]\), \(t \in (t^*- RP_i + 1, t^*]\) and \(t \in (t^*, \infty )\). For the proof related to constraints (r10) and (r11), one must distinguish two cases: \(t \in (-\infty , t^*]\) and \(t \in (t^*, \infty )\). Finally, for constraint (r12), the proof must be done differentiating between the following cases: \(t \in (-\infty , t^*- TRP_i]\), \(t\in (t^*- TRP_i, t^*+ TRP_i]\) and \(t \in (t^*+ TRP_i, \infty )\).

2.
Furthermore, for constraints (r9) and (r12), it is necessary to consider the optimality of the solution to demonstrate the cases in which \(t \le t^*\) and \(t \le t^*+ TRP_i\), respectively.
Finally, the equivalence between constraints (r18) and (18) is trivial using Remark 2.1:
Once the equivalence between the Original Problem and its reformulation has been demonstrated, we proceed to apply the BD approach to the reformulated problem.
Benders Master Problem
The master problem seeks the containment of the forest fire without taking into account the rest periods of the resources:
The variables of the Benders master problem are the original variables: \(\varvec{s}_{it}\), \(\varvec{r}_{it}\), \(\varvec{er}_{it}\), \(\varvec{w}_{it}\), \(\varvec{tr^{s}}_{it}\), \(\varvec{tr^{r}}_{it}\) and \(\varvec{tr^{e}}_{it}\). Moreover, \(\mathcal {S}^{*}\) is the set of all tuples \((i, t) \in \mathcal {I}\times \mathcal {T}\) such that resource i starts in period t at some iteration of the algorithm, i.e.,
being
Using the definition of \(\mathcal {S}^*\), cuts (32), (33) and (34) impose, for a given \((i,t)\in \mathcal {S}^*\) (i is a resource and t is a starting period), when the resource cannot work, rest, and travel due to rest periods, respectively. This is accomplished using the definition of the following sets:
where \({\hat{w}}_{it^{\prime }}(t)\), \({\hat{tr}}^{r}_{it^{\prime }}(t)\) and \({\hat{r}}_{it^{\prime }}(t)\) are the values of working, travel due to rest, and rest variables if resource i starts in period t, respectively.
It is important to note that the needed information to build cuts (32), (33) and (34) is obtained by solving the Benders subproblem defined below.
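As an illustration of how that subproblem information translates into cuts, the following sketch derives the period sets from a hypothetical schedule of one resource; the state names and values are illustrative (in the spirit of Example 1), not part of the model:

```python
# `schedule` maps each period t' to the state of resource i if it starts at
# period t = 1, as returned by a Benders subproblem (illustrative values):
schedule = {1: "work", 2: "work", 3: "work", 4: "travel_rest",
            5: "rest", 6: "travel_rest", 7: "work", 8: "work"}

# Periods in which the resource works / travels due to rest / rests:
W = {t for t, s in schedule.items() if s == "work"}
TR = {t for t, s in schedule.items() if s == "travel_rest"}
R = {t for t, s in schedule.items() if s == "rest"}

# The cuts fix the variables to 0 in the complementary periods: for this
# (i, t) pair, cut (32) forbids work outside W, and cuts (33)-(34) forbid
# travel due to rest outside TR and rest outside R.
horizon = set(schedule)
no_work, no_travel_rest, no_rest = horizon - W, horizon - TR, horizon - R
print(sorted(no_work), sorted(no_travel_rest), sorted(no_rest))
```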
Benders Subproblem
The subproblem seeks to establish a correct break policy considering when resources begin to work. As previously stated, knowing when each resource starts allows the break periods to be computed easily. Thus, the problem is greatly simplified:
In the subproblem we employ the previous definition of the variables \({\hat{\varvec{e}}}_{it}\), \({\hat{\varvec{r}}}_{it}\), \({\hat{\varvec{tr}}^{r}}_{it}\) and \({\hat{\varvec{er}}}_{it}\), and the expressions defining \({\hat{\varvec{u}}}_{it}\), \({\hat{\varvec{w}}}_{it}\) and \({\hat{\varvec{cr}}}_{it}\). Additionally, the variables associated with the master problem (complicating variables) are fixed: \(\varvec{s}_{it} = s^{*}_{it}\), \(\varvec{r}_{it} = r^{*}_{it}\), \(\varvec{er}_{it} = er^{*}_{it}\), \(\varvec{w}_{it} = w^{*}_{it}\), \(\varvec{tr^{s}}_{it} = tr^{s*}_{it}\), \(\varvec{tr^{r}}_{it} = tr^{r*}_{it}\) and \(\varvec{tr^{e}}_{it} = tr^{e*}_{it}\). Therefore, as these variables are fixed, constraints (27)–(30) become linear.
Remark 2.3
Since the subproblem only accounts for the feasibility of the solution given by the master problem regarding rest policies, we assume that every resource will work on the wildfire containment for the maximum number of periods allowed. This is necessary for the proper performance of the resources in the subproblem.
Remark 2.4
As its structure shows, the subproblem decomposes trivially by resources, because no constraint or objective term couples more than one resource. Thus, instead of solving the full Benders subproblem, we can solve a smaller problem for each resource i selected in the solution of the Benders master problem. In addition, we can remove constraints (27)–(30) from these per-resource problems and still obtain the information needed about the configuration of the rest periods. By doing so, the subproblem associated with resource i, which we name Benders subproblem \({\varvec{i}}\), determines the maximum performance of resource i given its start period and the restrictions imposed by rest-period legislation.
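The separability behind Remark 2.4 can be illustrated with a toy example (all names and data below are illustrative, not part of the paper's model): when no constraint or objective term couples two resources, solving one small problem per resource recovers the same optimum as a joint model would.

```python
def best_schedule(perf, max_work):
    # one per-resource subproblem: pick the max_work most productive periods
    chosen = sorted(range(len(perf)), key=lambda t: -perf[t])[:max_work]
    return sorted(chosen), sum(perf[t] for t in chosen)

perf = {  # illustrative per-period performance of each resource
    "brigade": [3, 1, 4, 1, 5],
    "aircraft": [2, 7, 1, 8, 2],
}
# solving one small problem per resource...
parts = {i: best_schedule(p, max_work=3) for i, p in perf.items()}
# ...recovers the joint optimum, since nothing couples the resources
total = sum(value for _, value in parts.values())
```

The same principle is what allows the Benders subproblem to be solved resource by resource instead of as one monolithic model.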
Benders Algorithm
The pseudocode presented in Algorithm 1 defines the steps in the Benders algorithm in the context of our decomposition of the problem.
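The master/subproblem interplay of Algorithm 1 can be sketched as follows. This is a deliberately minimal, hypothetical stand-in: the "master" simply proposes the latest remaining start period and the "subproblem" is a toy rest-policy check, rather than the actual MILPs and cuts (32)–(34).

```python
def benders(candidates, rest_feasible):
    """Iterate master proposals; cut off those the subproblem rejects."""
    cuts = set()                       # start periods excluded by cuts
    while True:
        allowed = [t for t in candidates if t not in cuts]
        if not allowed:
            return None, cuts          # master became infeasible
        t = max(allowed)               # "master": latest possible start
        if rest_feasible(t):           # "subproblem i": rest-policy check
            return t, cuts
        cuts.add(t)                    # feasibility cut removes the proposal

# toy rest policy: 2 work periods plus 1 rest period must fit in a
# 5-period horizon, so a resource may start no later than period 2
start, cuts = benders(candidates=range(5), rest_feasible=lambda t: t + 3 <= 5)
```

Each iteration either accepts the master's proposal or adds a cut excluding it, mirroring how the feasibility cuts progressively tighten the Benders master problem.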
It is important to note that three new types of cuts, (32), (33) and (34), are created to improve the convergence of the algorithm. The following remark will discuss the validity of the proposed cuts.
Remark 2.5
Constraints (32)–(34) are valid Benders cuts that provide the Benders master problem with information about how a resource must act once its start period is known.
For a given \(i \in \mathcal {I}\) and \(t^{s}\in \mathcal {T}\) where \(s_{it^{s}}=1\), Benders subproblem i gives the configuration of work, travel associated with rest, and rest periods of resource i from its start period (\(t^s\)) until the last period (m). The configuration of work, travel and rest periods guarantees the feasibility of the Benders subproblem i (and consequently, the feasibility of the Benders subproblem).
Examining the Benders master problem, constraint (32) fixes the working periods to 0 according to the solution of Benders subproblem i with \(\varvec{s}_{it^s}=1\). In a similar way, we can impose periods of no travel due to rest and periods of no rest using constraints (33) and (34), respectively.
If a solution proposed by the Benders master problem is infeasible for the reformulated problem, then the cuts generated in the next iteration will remove the solution as each Benders subproblem i will compute a feasible configuration for the resource i to construct the new cuts. Otherwise, if the solution is feasible for the reformulated problem, then it will be feasible for each Benders subproblem i and the associated cuts will admit it.
3 Fixed activity Benders problem
Inspired by the idea of BD about precomputing the periods in which resources must rest, we propose a new formulation of the problem that guarantees convergence to the global optimum.
Suppose the period in which a resource begins to work on the wildfire and its initial conditions are known. In that case, it is possible to calculate its work and rest periods according to the pertinent legislation. Therefore, we can simplify the BD method by computing this information in advance, without solving the Benders subproblems.
As described above, in the proposed BD method the Benders subproblems share their primal solutions with the Benders master problem in order to iteratively define new cuts (32)–(34) that remove solutions violating the policy of breaks.
In this way, we propose a new problem formulation, equivalent to the Benders master problem, in which all the feasibility cuts are defined a priori without solving any subproblem. In order to precompute these feasibility cuts efficiently, a tailor-made algorithm (see Algorithm 2) has been designed that takes advantage of the specific features of the problem. Note that this new formulation requires introducing all the feasibility cuts at once, so the resulting problem is bigger than those arising in the decomposition procedures. Although this could lead to a significant computational burden when solving the problem, the computational results presented later show that the method behaves well in practice, outperforming the other decomposition approaches.
The new problem formulation can be stated as follows:
Fixed Activity Benders Problem
Therefore, before creating the Fixed Activity Benders Problem, we need to compute the sets \(W_{it}\), \(TR_{it}\) and \(R_{it}\) (see Algorithm 2 for more details) used to create constraints (r32)–(r34), respectively.
Algorithm 2 initializes a counter of work periods (WorkCounter) for a given resource and start period. Suppose the resource was previously working in any wildfire (\(ITW_i=1\) or \(IOW_i=1\)). In that case, two situations may occur: if the resource starts to work in the first period, the counter is initialized with its initial information; and if the resource is resting, it cannot be selected during the wildfire's extinguishing.
Once all variables have been initialized, the counter is increased iteratively. When it reaches \(WP_i - TRP_i\) working periods, the resource performs the periods needed to rest. Once the resting phase finishes, the counter is reset, and the procedure described above is repeated until the last period.
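The counter logic described above might be sketched as follows. This is a simplified, hypothetical version of Algorithm 2: `WP`, `TRP` and `RP` stand for the maximum work, travel-to-rest and rest periods, the rest threshold is assumed to be \(WP_i - TRP_i\), and the handling of initial conditions is reduced to a single `prev_work` counter.

```python
def precompute_activity(start, m, WP, TRP, RP, prev_work=0):
    """Periods in which a resource starting at `start` must work (W),
    travel due to rest (TR) and rest (R), up to the last period m."""
    W, TR, R = [], [], []
    counter = prev_work               # work periods since the last rest
    t = start
    while t <= m:
        if counter < WP - TRP:        # still allowed to keep working
            W.append(t)
            counter += 1
            t += 1
        else:                         # travel to the rest, then rest
            for _ in range(TRP):
                if t <= m:
                    TR.append(t)
                    t += 1
            for _ in range(RP):
                if t <= m:
                    R.append(t)
                    t += 1
            counter = 0               # counter resets after the break
    return W, TR, R

# a resource starting in period 1 of a 10-period horizon, resting for 2
# periods after 3 effective work periods (WP=4, TRP=1)
W, TR, R = precompute_activity(start=1, m=10, WP=4, TRP=1, RP=2)
```

Running the sets forward once per resource and start period is what makes the a-priori construction of constraints (r32)–(r34) cheap.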
In addition, it is important to note that this new method allows incorporating on-demand rest policies, facilitating its adaptation to more complex contexts.
4 Computational results
This section studies the impact of the decomposition techniques explained above, using a representative set of simulated instances of the original wildfire extinction model. To simplify the understanding of this work, we use the following nomenclature:

OP: Original Problem (Sect. 1.2).

AL: A strategy based on Augmented Lagrangian decomposition (Sect. 2.1).

BP: Branch and Price decomposition provided by SCIP (Sect. 2.2).

BD: Benders Decomposition (Sect. 2.3).

FA: Fixed Activity Benders problem (Sect. 3).
The experiments were conducted on the Finisterrae II supercomputer, provided by the Galicia Supercomputing Centre (CESGA), which consists of 306 nodes, each powered by two deca-core Intel Haswell 2680v3 CPUs with 128 GB of RAM, connected through an InfiniBand FDR network. The implementations were programmed in Python version 3.6.8. All the MILP models were solved using Gurobi (version 8.1.0, Gurobi Optimization (2020)), except in the case of BP, where we employed the SCIP framework (version 6.0.1, Gleixner et al. (2018)).
In order to compare the algorithms, we generated two sets of instances (see “Appendix B” for details about the instance generator). The first, used in Sect. 4.1, corresponds to small instances where all the algorithms can provide solutions for most cases. The second, used in Sect. 4.2, contains larger instances based on realistic case studies. More details about the sizes, composition and periods in instances are in the tables of “Appendix C”.
Regarding the experimental settings, all optimization runs used a stopping criterion based on a predefined computational time of 10 min. The goal is to compare the solutions obtained, in terms of the objective function, by the different techniques under the same time threshold. We selected this time considering the requirements of wildfire containment services in real situations. Regarding the configuration of the methods, AL sets the parameters \(\beta \) and \(\lambda \) to 0.3 and 1000, respectively.
Due to the high number of experiments conducted, we have decided to summarize the results over classes of instances with the same problem size. In detail, we present boxplot graphs for the relative objective functions and the relative computational times. Namely, for each instance, let \({x}^{\prime }\) be the best solution found; the relative objective function associated with a solution x is the quotient \(f(x)/f(x^{\prime })\) (and analogously for the computational time). Thus, values closer to 1 represent better performance.
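The relative metric can be computed as follows (the objective values below are illustrative, not taken from the experiments):

```python
def relative(objectives):
    """Objective of each method divided by the best found (min is best)."""
    best = min(objectives.values())
    return {method: value / best for method, value in objectives.items()}

# illustrative objective values of the four methods on one instance
obj = {"OP": 105.0, "AL": 120.0, "BD": 105.0, "FA": 100.0}
rel = relative(obj)   # FA maps to 1.0 (best); the others to values above 1
```

Since the problem is a minimisation, all ratios are at least 1, and the boxplots aggregate these ratios over the instances of each class.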
In addition, we show different tables with a summary of statistical results (minimum, mean, median, maximum, standard deviation and percentage of solved instances). For readers more familiar with performance profile graphs (Dolan and Moré 2002), we also provide them for the objective function and for the computational time in “Appendix D”.
The code and instances required to reproduce the results are available at:
4.1 Performance analysis in small instances
In order to analyze the proposed solution techniques, we defined 16 different groups of small instances by combining resource groups (brigades, aircraft and machines) in different ways: using 2 or 4 members per group and setting the number of time periods to 10 or 15. Furthermore, for each group, we randomly generated 100 instances, which gave us a total of 1600 instances. The details of the size (numbers of variables and constraints) of the different groups of instances are shown in Table 9.
As stated before, all the instances were solved with our techniques in a maximum time of 10 min. Thus, for each run, we have collected the objective function obtained during the optimization and the time required to reach this solution within the time threshold.
Figure 2 is a boxplot where we compare our methods in different classes that include various groups of instances. It shows on its x-axis the number of resources and periods of the problem, separated by a vertical bar (|), and on its y-axis the relative objective function. The results reveal that BP exhibits the worst performance in terms of the objective function.^{Footnote 3} The other methodologies behave similarly, always reaching the best solution, except AL, which fails to do so in some specific instances. These conclusions are corroborated in Table 5, where, interestingly, it can be seen that all techniques except BP provide feasible solutions for all the instances.
Figure 3 and Table 6 present the results for the relative computational time.^{Footnote 4} We can see that FA, BD and OP are the fastest, by a wide margin over AL and BP. Further, the zoom in the figure (and the measures in Table 6) shows that FA outperforms BD and OP. Additionally, boxplots of the absolute computational times for each group of instances are shown in Fig. 10.
In conclusion, for small instances, FA performs best. Although OP and BD exhibit good behaviour in terms of optimal values, their computational times are slightly higher. Further, AL and BP have much longer computational times, and BP does not converge in many cases.
4.2 Performance analysis in realistic-scale instances
In this case, we have defined 48 different groups of realistic-scale instances (see Table 10 for details), generating 100 random instances for each group, which gives us a total of 4800 instances.
Based on the results for the small instances, where OP and FA exhibit the best performance, we only analyze the behaviour of these two methods over these larger instances. In fact, as the time threshold is set again to 10 min, we have checked that the other algorithms were not able to reach a feasible solution for most of these instances.
Figure 4 is analogous to the previous ones. In this case, we have split the instances into 24 classes according to the number of resources and periods. The results obtained for the relative objective function demonstrate that FA finds the best solution for all instances, while OP only reaches it for the smaller ones, those with fewer than 60 periods and 30 resources.^{Footnote 5} The same conclusions can be clearly drawn from the summaries presented in Table 7.
Regarding the relative computational time of the algorithms, Fig. 5 shows that FA again exhibits the best performance, being faster than OP.^{Footnote 6} In more detail, Table 8 shows that FA is at least 2.5 times faster than OP in 50% of the instances (the median relative time of OP is 2.4890). Additionally, boxplots of the absolute computational times for each group of instances are shown in Fig. 11.
5 Conclusions
In this study, we have proposed different decomposition techniques to efficiently tackle a wildfire suppression model to manage the operations of resources.
Implementing decomposition techniques is not usually a straightforward task since it requires adapting the technique to the specific features of the problem. Thus, we have proposed a modification of the augmented Lagrangian decomposition to achieve convergence of our specific problem. Moreover, we have applied Benders and branch and price decompositions to our problem. In the case of Benders, to improve convergence to an optimal point, we have modified the decomposition and proposed new cuts specific to this problem.
Furthermore, the analysis of decomposition methods has encouraged us to propose the socalled Fixed Activity Benders Problem. It is a reformulation of the original model, where some information about resources (such as rest, travel and working time) can be precomputed.
An extensive computational study comparing the different proposed alternatives using a set of randomly generated instances of the wildfire extinguishment model has been conducted. In the results, we observe that the Fixed Activity Benders Problem clearly outperforms the rest of the techniques in terms of optimality and computational times.
As future work, it could be interesting to seek alternative decomposition of the problem that allows us to apply the same or different decomposition techniques more efficiently. Furthermore, it might be promising to study how the Fixed Activity Benders Problem can be embedded in such a decomposition framework.
Notes
It corresponds to relaxing the integrality conditions over the \(\alpha _{k}\) variables.
Note that \(\sum _{t^\prime \in \mathcal {T}^{t-1}} \varvec{e}_{it^\prime } = 0\) for all \(t\le t^{*}\) since \(\varvec{e}_{it^{*}} = 1\) by assumption.
The performance profile for the objective function can be seen in Fig. 6.
The performance profile for the computational time can be seen in Fig. 7.
The performance profile for the objective function can be seen in Fig. 8.
The performance profile for the computational time can be seen in Fig. 9.
References
Barnhart C, Johnson EL, Nemhauser GL, Savelsbergh MWP, Vance PH (1998) Branch-and-price: column generation for solving huge integer programs. Oper Res 46:316–329
Benders J (1962) Partitioning procedures for solving mixed-variables programming problems. Numer Math 4:238–252
Caunhye AM, Nie X, Pokharel S (2012) Optimization models in emergency logistics: a literature review. Socioecon Plann Sci 46:4–13
European Commission (2020) Forest fires in Europe, Middle East and North Africa 2019. Technical report, JRC technical reports
Conejo A, Castillo E, Mínguez R, GarcíaBertrand R (2006) Decomposition techniques in mathematical programming: engineering and science applications. Springer
Cordeau JF, Stojković G, Soumis F, Desrosiers J (2001) Benders decomposition for simultaneous aircraft routing and crew scheduling. Transp Sci INFORMS 35:375–388
Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91:201–213
Donovan GH, Rideout DB (2003) An integer programming model to optimize resource allocation for wildfire containment. For Sci 49:331–335
Finney MA (1998) FARSITE: fire area simulator—model development and evaluation, vol 3. United States Department of Agriculture, Forest Service, Rocky Mountain Research Station
Fisher ML (2004) The Lagrangian relaxation method for solving integer programming problems. Manag Sci 50:1861–1871
Gleixner A, Bastubbe M, Eifler L, Gally T, Gamrath G, Gottwald RL, Hendel G, Hojny C, Koch T, Lübbecke ME, Maher SJ, Miltenberger M, Müller B, Pfetsch ME, Puchert C, Rehfeldt D, Schlösser F, Schubert C, Serrano F, Shinano Y, Viernickel JM, Walter M, Wegscheider F, Witt JT, Witzig J (2018) The SCIP optimization suite 6.0. Technical report. Optimization online. http://www.optimization-online.org/DB_HTML/2018/07/6692.html
Gorte JK, Gorte RW (1979) Application of economic techniques to fire management—a status review and evaluation. Gen. Tech. Rep. INTGTR53. US Department of Agriculture, Forest Service, Intermountain Research Station, Ogden, vol 26, p 53
Gurobi Optimization L (2020) Gurobi optimizer reference manual. http://www.gurobi.com
Hamdi A, Mishra SK (2011) Decomposition methods based on augmented Lagrangians: a survey. In: Topics in nonconvex optimization. Springer, pp 175–203
Headley R (1916) Fire suppression, district 5. USDAForest Service, pp 1–57
Martell DL (2015) A review of recent forest and wildland fire management decision support systems research. Curr For Rep 1:128–137
Miller C, Ager AA (2013) A review of recent advances in risk analysis for wildfire management. Int J Wildland Fire 22:1–14
Minas JP, Hearne JW, Handmer JW (2012) A review of operations research methods applicable to wildfire management. Int J Wildland Fire 21:189–196
Ntaimo L, Arrubla JAG, Stripling C, Young J, Spencer T (2012) A stochastic programming standard response model for wildfire initial attack planning. Can J For Res 42:987–1001
Papadakos N (2009) Integrated airline scheduling. Comput Oper Res 36:176–195
Rahmaniani R, Crainic TG, Gendreau M, Rei W (2017) The Benders decomposition algorithm: a literature review. Eur J Oper Res 259:801–817. https://doi.org/10.1016/j.ejor.2016.12.005
Rios J, Ross K (2010) Massively parallel Dantzig–Wolfe decomposition applied to traffic flow scheduling. J Aerosp Comput Inf Commun 7:32–45
RodríguezVeiga J, GinzoVillamayor MJ, CasasMéndez B (2018) An integer linear programming model to select and temporally allocate resources for fighting forest fires. Forests 9:583
Romanski J, Hentenryck PV (2016) Benders decomposition for large-scale prescriptive evacuations. In: AAAI'16: proceedings of the thirtieth AAAI conference on artificial intelligence, pp 3894–3900
Sethi S, Sorger G (1991) A theory of rolling horizon decision making. Ann Oper Res 29:387–415
Sparhawk WN (1925) The use of liability ratings in planning forest fire protection. National Emergency Training Center, Emmitsburg
Vanderbeck F, Savelsbergh MW (2006) A generic view of Dantzig–Wolfe decomposition in mixed integer programming. Oper Res Lett 34:296–306
Vanderbeck F, Wolsey LA (1996) An exact algorithm for IP column generation. Oper Res Lett 19:151–159
Wei Y, Rideout DB, Hall TB (2011) Toward efficient management of large fires: a mixed integer programming model and two iterative approaches. For Sci 57:435–447
Xunta de Galicia (2017) PLADIGA: Memoria. http://mediorural.xunta.gal/fileadmin/arquivos/forestal/pladiga/2017/2_MEMORIA.pdf. Accessed 19 Dec 2022 17:41:37
Acknowledgements
This research work is supported by the R+D+I project grants PID2020-116587GB-I00 and PID2021-124030NB (C31 and C32), funded by MCIN/AEI/10.13039/501100011033/ and by "ERDF A way of making Europe"/EU. The second author's research is funded by the Xunta de Galicia (postdoctoral contract 2019-2022). We acknowledge the computational resources provided by CESGA. The third author acknowledges support from the Xunta de Galicia through the ERDF (ED431C 2020/14 and ED431G 2019/01) and "CITIC".
Funding
Open Access funding provided thanks to the CRUECSIC agreement with Springer Nature.
Appendices
A. Augmented Lagrangian implementations details
Given a resource i, the associated Augmented Lagrangian Subproblem \({\varvec{i}}\) is formulated as follows:
subject to
being
where \(\bar{\lambda }\) corresponds to the fixed Lagrange multipliers, \(\bar{\beta }\) is a fixed penalty parameter large enough to ensure local convexity, and \(\bar{y}\) is the contained period of the wildfire. Since each subproblem needs the variable y to be coherent to guarantee separability, we fix it as a constant and explore all the possibilities of y in each iteration of the ALD method. Moreover, the expressions \(penSet_1\) and \(penSet_2\) represent inequality constraints (2) and (3), respectively, modified to appear penalized in the objective function; v is a set of auxiliary variables that transform the inequalities \(penSet_1\) and \(penSet_2\) into equality constraints. As can be observed, the ALD method adds quadratic terms to the penalties to confer good convexity properties.
Regarding the formulation of \(penSet_1\) and \(penSet_2\), we highlight that \({w}_{it}\) are primary variables of subproblem i that represent the working time of resource i, and \({\bar{w}}_{i^{\prime }t}\) are constant values that correspond to the values of w in other resources. Thus, the value of each resource \(i^{\prime }\) will be addressed in its respective subproblem, consequently facilitating the decomposition of the Original Problem.
Theoretically, constraints (14) and (15) could also be penalized in the objective function, but we observed empirically that doing so worsens convergence, making it more complex to coordinate their penalties in the resulting subproblems.
As stated above, the ALD method is an iterative algorithm that seeks to calibrate its penalties to obtain reasonable feasible solutions. It requires an initial solution, which can be obtained, for example, using a constructive heuristic. Then the algorithm repeats the following steps until a stopping criterion is fulfilled: it decomposes the Original Problem, iterates over each period \(t\in \mathcal {T}\), fixes \(y_{t}=0\) to consider it as the possible contained period, and solves the corresponding subproblem with specific values of \(\lambda \) and \(\beta \). These parameters are updated with the results provided by the subproblem in each iteration. This calibration is conducted using a subgradient method, a strategy widely used in the AL literature. Although the subgradient method causes the loss of the global optimality guarantee in MILP problems (Fisher 2004), the algorithm exhibited good performance in our experiments. The following expressions illustrate how these multipliers and constants are updated in iteration k for penalty \(PenSet_1\):
\(PenSet_2\) also has an analogous procedure. Moreover, the method updates the fixed resources’ value in the subproblems at the end of each iteration, but only when a new feasible solution is reached. Finally, ALD will stop when it has converged or has obtained an infeasible problem for each possible y.
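The paper's exact update expressions are not reproduced here; as a hedged sketch, a textbook subgradient-style rule for the multiplier and penalty of one penalised constraint might look as follows (`beta_growth` and `tol` are illustrative parameters, not values from the paper; `g_k` denotes the constraint violation at iteration k):

```python
def update_penalty(lam, beta, g_k, beta_growth=1.5, tol=1e-6):
    """One subgradient-style update of the multiplier and penalty."""
    lam_next = lam + beta * g_k                 # multiplier step
    # grow the penalty only while the constraint is still violated
    beta_next = beta * beta_growth if abs(g_k) > tol else beta
    return lam_next, beta_next

lam, beta = 1000.0, 0.3          # the initial values used in Sect. 4
lam, beta = update_penalty(lam, beta, g_k=2.0)
```

Once the violation vanishes, the multiplier stabilises and the penalty stops growing, which is the calibration behaviour the ALD iterations rely on.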
B. Description of the instance generator
We have implemented an instance generator intending to create a test set to analyze the decomposition methods described throughout the work. The following paragraphs describe the values and intervals of parameters to simulate instances in the wildfire extinction model.
First, each wildfire extinguishing period considered in all instances lasts 10 min, and the resources are classified into three groups: brigades, aircraft and machines. The minimum number of resources per group lies between 1 and a quarter of the number of resources of that type, and the maximum between three-quarters of that number and the full number.
Moreover, each type of resource has unique characteristics, such as performance, where brigades have values between 2–10 km/h, while aircraft and machines are between 4–15 km/h. In terms of costs, all have a maximum fixed cost of 1000 €. The variable costs depend on the type of resource: 200–500 €/h for brigades; 1000–3000 €/h for aircraft; and 500–1000 €/h in the case of machines.
Likewise, parameters according to the maximum work and rest periods depend on legislation. In these cases, we have taken into consideration Spanish law regulations. It indicates that all resources have a maximum daily time of 480 min (8 h) and a time between breaks of 10 min. Furthermore, there is a maximum working time without breaks of 120 min in the aircraft case, with a rest time of 40 min.
All resources have a low probability of already operating in the wildfire, a medium probability of working in another wildfire, and a high probability of being idle. If they are not in the wildfire, they must travel there, with arrival times of 30–120 min for brigades, 10–40 min for aircraft, and 60–180 min for machines. Additionally, if a resource has been active in the same wildfire, its previous working time must be considered; the generator then selects a value between 0 and 470 min.
On the other hand, the wildfire perimeter increases by 1–3 km every 10 min. As the cost depends on the fire's spread, it is calculated by multiplying a factor, chosen within the interval 300–500, by the perimeter increment. Further, to ensure feasibility, the generator takes the resources' performance into account when calculating the perimeter.
Last, the instance generator always sets the efficiency of the resources to one in all periods.
The characterization above describes the general behaviour of the instance generator function. To see details, we recommend examining the GitHub repository.
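As an illustration, sampling one resource inside the intervals stated above might look like the following sketch (field names and structure are hypothetical; the actual generator is the one in the GitHub repository):

```python
import random

# parameter intervals from the description above
PERF = {"brigade": (2, 10), "aircraft": (4, 15), "machine": (4, 15)}          # km/h
VAR_COST = {"brigade": (200, 500), "aircraft": (1000, 3000),
            "machine": (500, 1000)}                                           # EUR/h
ARRIVAL = {"brigade": (30, 120), "aircraft": (10, 40), "machine": (60, 180)}  # min

def sample_resource(kind, rng):
    """Draw one resource of the given type inside the stated intervals."""
    return {
        "kind": kind,
        "performance": rng.uniform(*PERF[kind]),
        "fixed_cost": rng.uniform(0, 1000),        # maximum fixed cost 1000
        "variable_cost": rng.uniform(*VAR_COST[kind]),
        "arrival_time": rng.uniform(*ARRIVAL[kind]),
    }

rng = random.Random(42)                            # seeded, so reproducible
instance = [sample_resource(k, rng) for k in ("brigade", "aircraft", "machine")]
```

Seeding the generator makes each of the 100 instances per group reproducible, which matters when comparing methods on identical inputs.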
C. Dimensions of the instances
D. Auxiliary figures
Cite this article
RodríguezVeiga, J., Penas, D.R., GonzálezRueda, Á.M. et al. Application of decomposition techniques in a wildfire suppression optimization model. Optim Eng (2023). https://doi.org/10.1007/s11081022097838
Keywords
 Integer programming
 Assignment problems
 Wildfire management
 Decomposition techniques
 Benders decomposition