1 Introduction

In recent years, the increasing intensity and duration of natural disasters have made it imperative for decision-makers, stakeholders and communities to enhance the safety and security of critical infrastructure systems during and after disruptions. Road networks, typically disaster-prone critical infrastructure systems themselves, spatially connect one location to another and enable disaster relief by providing accessibility to shelters, hospitals, emergency management centres and so on. These networks are complex, and rebuilding their functionality to acceptable levels is a challenge of utmost importance for decision-makers in disaster response.

Previous events have shown the consequences of natural disasters on road networks. In the aftermath of the 1994 Northridge earthquake, four freeway routes were closed due to failures. It is estimated that one fifth of the $6.5 billion total loss in regional economic activity caused by the earthquake could be attributed to transportation disruptions alone (Gordon et al. 1998). In the 1995 Hyogoken-Nanbu earthquake, significant damage to the Hanshin Expressway reduced traffic volumes to 30–55% of pre-earthquake levels, severely impacting disaster relief activities (Chang and Nojima 2001). Following the Haiti earthquake in 2010, relief distribution proved almost impossible, despite the abundance of supplies, due to road infrastructure damage (Çelik 2016). More recently, in the 2015 Gorkha earthquake in Nepal, the main shock and a major aftershock triggered landslides in the steep, mountainous regions of the country that blocked important road segments, preventing relief supplies from reaching communities (Collins and Jibson 2015).

Due to these potential far-reaching consequences, route restoration is considered a task of foremost priority in disaster relief. The route restoration problem (RRP) is the problem of choosing which roads to repair, restore or clear, and thereby make traversable, following a disaster so that responders can gain access to demand points. In its Public Assistance Debris Management Guide, FEMA lists debris clearance for emergency route restoration as the first step in disaster response and recovery (Federal Emergency Management Agency 2007). Indeed, adequately planned route restoration can be the precursor of a successful disaster response effort. In the aftermath of the 2011 Tohoku earthquake in Japan, the Tohoku Regional Development Bureau proceeded with urgency to restore roads with a strategy referred to as the “teeth of a comb strategy”: first open an inland route along the backbone of the Tohoku Region and then restore routes to the disaster-stricken areas along the coast (Kazama and Noda 2012). The Tohoku Regional Development Bureau reports the success of prioritizing route restoration in their Earthquake Memorial Museum (see Tohoku Regional Bureau 2014). With this strategy, by the day following the earthquake, 11 routes were cleared, allowing the flow of emergency traffic, including ambulances, the police, and medical teams.

The optimization of post-disaster route restoration faces three main challenges. The first challenge is the modelling of uncertainties. It is well established that fitting underlying probabilities to post-disaster damage scenarios is one of the most challenging tasks in disaster management. Disasters interact and are often the result of dynamic historical processes involving multiple stress-triggering events. For example, a main seismic shock can cause aftershocks and induce stresses on nearby fault lines, which leads to increased earthquake probabilities in nearby regions, a phenomenon called earthquake clustering. Successive ruptures along the North Anatolian Fault resulted in a cluster of large earthquakes in the 1900s. The dynamic interactions that complicate the estimation of earthquake probabilities are well known (Parsons et al. 2000), and such complications extend beyond seismic events, as in the Tohoku disaster, where a tsunami caused a nuclear meltdown. The second challenge in the optimization of post-disaster route restoration is model tractability. Optimization models for post-disaster route restoration often involve huge networks of damaged road segments, which can lead to intractable models. In the aftermath of the great Gorkha earthquake in Nepal, planning route restoration for a small rural district north of the Araniko highway required constructing a network of 457 nodes and 555 edges, 66 of which were damaged (Aydin et al. 2018). The third challenge in post-disaster route restoration is the interpretability of solutions. Disasters are chaotic events and no amount of modelling precision will ever capture the true extent of the damages and perfectly reflect on-site realities. Building overly complex models can turn optimal decision generation into a black box, limiting the ability of disaster responders to understand the decision process and react accordingly when unexpected events arise.

In this paper, we tackle these three challenges by proposing a robust optimization approach to optimize the route restoration strategy under uncertain restoration times. Restoration time uncertainty originates from uncertainty in damage or blockage estimation. Studies that consider road damage or blockage uncertainty in disasters abound outside the optimal route restoration context (see, for example, Caunhye and Nie 2018; Aydin et al. 2018). Robust optimization works by immunizing decision-making against all uncertain data realizations within a deterministic uncertainty set. It is attractive in that it only needs moderate information about the underlying uncertainty (specifically, the range of restoration times) and it is nonparametric, meaning that it does not require any probability specification. Recent years have seen tremendous growth in the application of robust optimization for decision-making under uncertainty (Zhen et al. 2018), mainly due to its computational viability and its distribution-free characteristic. In robust optimization, probability distributions are replaced with uncertainty sets (typically conic representable), omitting any information about the distribution except for its support. The method is especially enticing for modelling extreme events and situations for which parametric probability estimates are hard to obtain. Several seminal papers underpin the robust optimization literature (see El Ghaoui and Lebret 1997; El Ghaoui et al. 1998; Ben-Tal and Nemirovski 1998; Bertsimas and Sim 2004). Robust optimization models are generally semi-infinite problems that are solved with approximations known as decision rules. A multitude of decision rules, such as the linear (Goh and Sim 2010; Chen and Zhang 2009; Caunhye and Cardin 2018), binary (Bertsimas and Georghiou 2015, 2018) and finite adaptability (Hanasusanto et al. 2015; Caunhye and Cardin 2017; Bertsimas and Caramanis 2010) decision rules, have been proposed to approximate these models with solvable forms. Even though decision rules are known to be optimal for a few problems, such as some variants of vehicle routing problems (see Gounaris et al. 2013), they potentially sacrifice a significant amount of optimality (Bertsimas and Goyal 2012). In addition, they often require large numbers of additional variables and constraints, which can lead to intractability in big post-disaster route restoration models. In this paper, we propose a novel decision rule, based on restoration time ordering, that is optimal for the RRP. We also show that this decision rule can be implemented with little sacrifice to tractability in several variants of the RRP. Furthermore, owing to its simplicity, this decision rule is easily interpretable and offers a practical rule of thumb for route restoration.

2 Literature review

In this section, we provide a broad review of the literature on route restoration, starting with general network science methodologies to assess restoration strategies and ending with a specific review of optimization methodologies to plan restoration strategies. Network science methods to tackle the assessment of route restoration strategies typically focus on identifying critical road segments with connectivity measures. The work in Aydin et al. (2019) develops an origin-destination betweenness metric to evaluate post-disaster road performance to assist in decision-making during the recovery process. A different evaluation strategy is proposed in Zhou et al. (2019), where a percolation-based connectivity model is used to assess post-earthquake recovery of road networks. In addition to the giant connected component to represent global connectivity, the authors also introduce a metric called local connectivity, which is quantified using the number of neighbouring nodes. GIS-based accessibility modelling is also used to assess post-disaster route restoration planning, where the idea is to view route restoration as a means to restore access to services. In Kim et al. (2018a), a scenario-based system dynamics approach is proposed to evaluate the performances of post-disaster debris clearance strategies. The works of Ertugay et al. (2016) and Toma-Danila (2018) also assess the performance of route restoration strategies using accessibility modelling, with the novelty lying in the consideration of road closure probabilities.

In optimization modelling for post-disaster route restoration, the focus is on choosing the best restoration strategies. Numerous deterministic optimization models have been proposed to this effect. In Moreno et al. (2019), a mixed integer programming problem is proposed to integrate crew scheduling and routing decisions for route restoration. Crew scheduling is also the underlying problem tackled in Kim et al. (2018b), where dynamic damages are addressed and an ant colony heuristic is proposed to solve the resulting mixed integer programming model. In Shin et al. (2019), a similar modelling and solution paradigm is followed, but with the addition of goods delivery to the crew scheduling problem for road repairs. An adaptation of the ant colony heuristic is proposed in Yan and Shih (2012) for the time-space network formulation of the crew scheduling and route restoration problem. The work of Aksu and Ozdamar (2014) approaches route restoration from the perspective of a path-based network optimization problem with the aim of restoring access in the network. A decomposition approach is used to improve the tractability of the model. The network-based approach is also used in Akbari and Salman (2017b) and Akbari and Salman (2017a), where mixed integer programming models are used to restore subsets of road segments and reconnect the network. In Akbari and Salman (2017b), the focus is on (1) restoring network connectivity in the shortest possible time and (2) tractability enhancement, via the proposition of a metaheuristic based on relaxation and local search. The focus of Akbari and Salman (2017a) is on an effective decomposition-based solution method for the RRP. In Perrier et al. (2008), the problem is approached as a vehicle routing problem for snow plowing equipment in urban areas to clear roads. A general routing problem is considered for debris removal in Sahin et al. (2016), and a constructive heuristic based on Dijkstra’s shortest-path algorithm is proposed to close optimality gaps. The work in Yan and Shih (2009) takes a multi-objective approach to optimize route restoration. Two objectives are considered, namely, minimizing the time spent on restoration and on relief distribution, and a heuristic is also proposed as a solution method. In Ajam et al. (2019), latency, defined as the time elapsed until a node is visited, is minimized for a debris clearing problem. A meta-heuristic combining a greedy randomized adaptive search procedure and variable neighbourhood search is proposed to solve the problem. Novel objectives are also proposed in Kasaei and Salman (2016), where two problems are used to plan debris cleaning operations. The first minimizes the shutdown time of the road system, while the second focuses on increasing the overall benefits of reconnecting the network components in a timely manner.

Given the widely recognized intractability of models involving route restoration, as evidenced by the number of reviewed works that propose heuristic solution methods, uncertainty has only rarely been factored into consideration. The work proposed in Çelik et al. (2015) adopts a stochastic approach, via a partially observable Markov decision process model, to sequence road clearance. The authors recognize the difficulty of incorporating uncertainty considerations and devise a heuristic solution approach as well.

3 Methodology

This section develops a comprehensive framework, comprising six mathematical programming models, to optimize post-disaster route restoration. The first model is a deterministic model that we use as a baseline for comparisons, especially regarding the impact of uncertainty on route restoration decisions. In the second model, we add restoration time uncertainties to the route restoration problem via a two-stage robust optimization framework with a polyhedral uncertainty set. The first stage, which makes decisions prior to uncertainty realizations, selects the routes to be restored; the second stage sequences route restoration after uncertainty realizations. The two-stage framework generates route selection decisions that are robust to uncertainty and restoration sequencing decisions that are dependent on, and therefore flexible to, uncertainty realizations. The polyhedral uncertainty set has the advantages of (1) being distribution-free and (2) permitting a level of control over the conservatism of our solutions, which is appropriate for disaster settings where probability distributions are hard to estimate and over- and under-conservatism can yield sub-optimal planning. Our third model is a single-stage solvable equivalent of the robust optimization model, which is produced using a novel decision rule based on restoration time ordering.

The first three models implicitly assume that routes can be restored concurrently. Concurrent route restorations require the availability of multiple restoration crews and equipment, and in situations where resources are severely limited, this may be impractical. Our fourth model relaxes the concurrent restoration assumption by considering sequential job starts. Whilst this would traditionally be achieved by adding constraints, doing so in our case would introduce further complexity in solving the robust counterpart. To ensure that our decision rule remains optimal for the model with sequential job starts, and hence that tractability and solvability are not impeded, we propose a novel method of relaxing the concurrent restoration assumption using time period enlargement/contraction. Another assumption that may hamper the practical validity of our models is a fundamental one in two-stage robust optimization: that restoration time realizations are available prior to route sequencing. In practice, having such restoration time information requires precise post-disaster damage assessment, which is resource-intensive and may be unrealistic in some cases. Our fifth model relaxes this fundamental assumption by optimizing sequencing decisions without a priori restoration time knowledge. Our sixth model is a solvable form of the fifth model under our restoration time ordering decision rule. Altogether, our six models share a common methodological underpinning, the restoration time ordering decision rule, and propose tractable formulations of the robust route restoration problem for a variety of disaster situations.

3.1 Route restoration problem

In the RRP, a disaster has affected a network, resulting in blocked edges and a pressing need for supplies to be delivered to specific locations. Let \(G = (N,E)\) be an undirected graph that models the affected network. The set N of nodes consists of a supply node, denoted as 0, a set \(N_d\) of demand nodes, and a set \(N\setminus (\{0\}\cup N_d)\) of transshipment nodes. The set E of edges contains a set B of blocked edges, which are edges not traversable post-disaster, and a set U of edges that are still traversable after the disaster.

In our RRP, repair teams are dispatched to restore access from the supply node to the demand nodes. The RRP is the problem of sequencing route restorations over a finite time horizon of T time periods so as to minimize the makespan of access restoration from the supply node to all demand nodes. In this paper, the restoration of access from the supply node to all demand nodes is referred to as “network restoration.” Note that the single supply node does not restrict where supplies can originate: in the case of multiple supply points, they can be represented as nodes connected via dummy edges to a source node 0.
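For intuition, the network restoration condition can be checked combinatorially: every demand node must be reachable from the supply node through edges that are either undamaged or already restored. A minimal sketch (the helper name and toy data are hypothetical, not part of the models in this paper):

```python
from collections import deque

def network_restored(nodes, undamaged, restored, demand, supply=0):
    """Return True if every demand node is reachable from the supply
    node using the undamaged edges plus the restored blocked edges.
    Edges are undirected (i, j) pairs."""
    adj = {n: [] for n in nodes}
    for i, j in list(undamaged) + list(restored):
        adj[i].append(j)
        adj[j].append(i)
    # Breadth-first search from the supply node.
    seen = {supply}
    queue = deque([supply])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return all(d in seen for d in demand)

# Tiny example: nodes 0..3, demand node 3, blocked edge (2, 3).
undamaged = [(0, 1), (1, 2)]
print(network_restored(range(4), undamaged, [], {3}))        # False
print(network_restored(range(4), undamaged, [(2, 3)], {3}))  # True
```

The flow-based constraints introduced below enforce exactly this reachability, with \(|N_d|\) units of dummy flow leaving the supply node.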

Let [T] be the set of running indices from 1 to T, \(|\mathcal {S}|\) be the cardinality of set \(\mathcal {S}\), and \(A_i\) be the set of nodes connected to node i. Denoting the number of time periods (an integer) needed to restore a blocked edge (i, j) as \(r_{ij}\), we can define our RRP as follows:


Decision variables

\(x^t_{ij}\):

Is 1 if restoration of edge (i, j) is started at the beginning of period t, 0 otherwise

\(f^t_{ij}\):

Dummy flow variable from node i to node j in period t

\(z_t\):

Is 1 if the network is fully restored at time t, 0 otherwise.

$$\begin{aligned}&(P1)\quad&\min \sum _{t\in [T]}t z_t + \epsilon \sum _{(i,j)\in B}\sum _{t\in [T]}x^t_{ij}&\nonumber \\&\text {s.t. }&\sum _{(i,j)\in B}x^t_{ij} \le 1&\quad \forall t\in [T] \end{aligned}$$
(1)
$$\begin{aligned}&\sum _{t\in [T]}x^t_{ij} \le 1&\quad \forall (i,j)\in B \end{aligned}$$
(2)
$$\begin{aligned}&|f^{t}_{ij}| \le |N_d| \sum _{\tau \in [t - r_{ij}]}x^{\tau }_{ij}&\quad \forall (i,j)\in B, t\in [T]\setminus [r_{ij}] \end{aligned}$$
(3)
$$\begin{aligned}&f^{t}_{ij} = 0&\quad \forall (i,j)\in B, t\in [r_{ij}] \end{aligned}$$
(4)
$$\begin{aligned}&\sum _{j\in A_i}f^{t}_{ji} - \sum _{j\in A_i}f^{t}_{ij}= F_{it}&\quad \forall i\in N, t\in [T] \end{aligned}$$
(5)
$$\begin{aligned}&\sum _{t\in [T]} z_t = 1&\\&\text {All } x^t_{ij}, z_t\in \{0,1\}, f^t_{ij}\in \mathbb {R},&\nonumber \end{aligned}$$
(6)

where

$$\begin{aligned} F_{it}=\left\{ \begin{array}{ll} z_t &{} \text {if } i\in N_d\\ -|N_d|z_t &{} \text {if } i\in \{0\}\\ 0 &{} \text {Otherwise } \end{array} \right. \end{aligned}$$

and \(A_i\) is the set of adjacent nodes to node i.

The objective of Model (P1) is to minimize the makespan of network restoration, with the small value \(\epsilon\) added to reduce degeneracy. Constraint (1) allows only one restoration to be started in a time period, for simplicity. Constraint (2) ensures that a blocked edge may only be unblocked once. Note that we use the terms unblock and restore interchangeably here. Constraint (3) allows flow along an edge if and only if the edge has been unblocked. In this model, flow variables are unrestricted in sign so as to reflect the undirectedness of the graph more efficiently. Edges are only defined once, with the sign of the flow variable indicating the direction of the flow. For example, edge (1, 2) with a flow of +1 indicates a flow from node 1 to node 2. The same edge with a flow of -1 indicates a flow from node 2 to node 1. If non-negativity restrictions were imposed on flow variables, we would need to define both edge (1, 2) and edge (2, 1) in the model. Constraint (4) indicates that there is no flow on an edge before its restoration is complete. Constraint (5) enforces flow conservation. It also specifies the conditions for the completion of network restoration, which are that (1) all demand nodes have unit net inflows, (2) there is a net outflow of \(|N_d|\) units from the supply node, and (3) all transshipment nodes have zero net inflows and outflows, meaning that there is a path from the supply node to every demand node. Constraint (6) stipulates that the network must be restored by the end of the planning horizon.

Note on Constraint (1): While the constraint allows for a single restoration job to start in any time period, it does not restrict the number of concurrent restoration jobs that can be carried out.

Proposition 1

Given a feasible set of restored edges, \(B^*\), sequencing edge restorations in descending order of restoration time is optimal.

Proof

Let \(\bigg ((i_n,j_n)\bigg )_{n\in [|B^*|]}\) be a sequence of edge restorations given set \(B^*\), where \((i_n,j_n)\) is the edge chosen for restoration at time n, implying that \(x^{n}_{i_nj_n} = 1\). The makespan of full restoration, \(\displaystyle \sum \nolimits_{t\in [T]}t z_t\), for this sequence can be calculated as \(\displaystyle \sum \nolimits_{t\in [T]}t z_t = \max _{n\in [|B^*|]}\{n+r_{i_nj_n}\}\). If \(\mathcal {S}\) is the set of all possible sequences given set \(B^*\), the optimal makespan is \(\displaystyle \min _{((i_n,j_n))_{n\in [|B^*|]}\in \mathcal {S}}\max _{n\in [|B^*|]}\{n+r_{i_nj_n}\}\). Consider a sequence in which \(r_{i_1j_1}\ge r_{i_2j_2}\ge \dots \ge r_{i_{|B^*|}j_{|B^*|}}\), and let \(\max _{n\in [|B^*|]}\{n+r_{i_nj_n}\} = n^* + r_{i_{n^*}j_{n^*}}\). Any other sequence in which edge \((i_{n^*},j_{n^*})\) is chosen for restoration at time \(n^*\) has a makespan greater than or equal to \(n^* + r_{i_{n^*}j_{n^*}}\).

If edge \((i_{n^*},j_{n^*})\) is not chosen for restoration at time \(n^*\), but is rather interchanged, in the sequence, with edge \((i_{m},j_{m})\):

1. For \(m > n^*\), the makespan increases since \(m + r_{i_{n^*}j_{n^*}} > n^* + r_{i_{n^*}j_{n^*}}\).

2. For \(m < n^*\), the makespan either remains the same or increases since \(n^* + r_{i_{m}j_{m}} \ge n^* + r_{i_{n^*}j_{n^*}}\), from the knowledge that \(r_{i_{m}j_{m}} \ge r_{i_{n^*}j_{n^*}}\).

This shows that for any possible other sequence, the makespan will either remain the same or increase, meaning that a sequence in descending order of restoration time gives the minimum makespan. \(\square\)
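The exchange argument above can be checked numerically on a small instance: under Constraint (1), the edge started in period n (one start per period) completes at \(n + r\), so the makespan of a sequence is \(\max_n\{n + r_n\}\), and brute force over all orderings confirms that descending order attains the minimum. A sketch with hypothetical restoration times:

```python
from itertools import permutations

def makespan(seq):
    """Makespan of a restoration sequence: the edge started in period n
    (1-indexed, one start per period) completes at n + r."""
    return max(n + r for n, r in enumerate(seq, start=1))

# Hypothetical restoration times for a feasible edge set B*.
times = [4, 1, 3, 2, 6]
best = min(makespan(p) for p in permutations(times))
descending = makespan(sorted(times, reverse=True))
print(best, descending)  # 7 7
```

Both values coincide: the longest job (6 periods) cannot start before period 1, so no ordering can finish before period 7, and descending order achieves that bound.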

An interesting note about Proposition 1 is that it remains valid even if vector \(\varvec{r}\) contains real-valued, instead of integer, restoration times. Proposition 1 mainly indicates that there is a simple rule governing the optimal sequencing of edge restorations. It also tells us that if we know the optimal selection of edges to restore, we can sequence edge restorations optimally without running a model.

3.2 Illustrative example: baseline case

To showcase the application of Model (P1), an illustrative network is used. Suppose a planner is given a planning horizon of 50 h to restore the network of 13 nodes and 20 edges, of which 14 are blocked, pictured in Fig. 1. The demand nodes are shaded and the number on a traversable edge represents the edge serial number. On blocked edges, the notation a : bh expresses a as the serial number and b as the number of hours required to restore the edge. The value of \(\epsilon\) is chosen to be \(1\times 10^{-5}\). Model (P1) is solved for the illustrative example using ILOG CPLEX. The solution time is 0.09 seconds and the optimal results are shown in Table 1.

Fig. 1

Baseline network for illustrative example

Table 1 Optimal results

The table can be read as follows: at the beginning of hour 1 (first column), the restoration of edge (0, 2) (second column) is started. Since the restoration time for this edge is 2 h (third column), restoration is completed at the beginning of hour 3 (fourth column). By restoring edges (0, 2), (9, 12), (1, 11), (0, 5), and (10, 11), access is made possible from the supply node to all demand nodes. Since the first restoration starts at the beginning of hour 1 and network restoration is completed at the beginning of hour 6, the optimal makespan obtained from CPLEX is \(6-1=5\) h. To test Proposition 1, we sequence the edges in descending order of restoration times and observe from Table 1 that the same makespan is obtained, confirming the optimality of the rule.

4 Robust counterpart

In practice, restoration times are uncertain. We model the uncertainty in edge restoration times \(\varvec{r}\in \mathbb {R}_+^{|B|}\) using the polyhedral uncertainty set

$$\begin{aligned} \mathcal {R} = \{\varvec{r}\in \mathbb {R}_+^{|B|}: \underline{r}_{ij}\le r_{ij} \le \bar{r}_{ij}, \forall (i,j)\in B, \sum _{(i,j)\in B}r_{ij}\le \varGamma \sum _{(i,j)\in B}\bar{r}_{ij}\}, \end{aligned}$$

where \(\varGamma\), which varies in the range \([\dfrac{\sum _{(i,j)\in B}\underline{r}_{ij}}{\sum _{(i,j)\in B}\bar{r}_{ij}}, 1]\), is a parameter, called the “budget of uncertainty”, used to control the size of the uncertainty set. When \(\varGamma =\dfrac{\sum _{(i,j)\in B}\underline{r}_{ij}}{\sum _{(i,j)\in B}\bar{r}_{ij}}\), we have \(\sum _{(i,j)\in B}r_{ij}\le \sum _{(i,j)\in B}\underline{r}_{ij}\) and, since \(\varvec{r}\in [\underline{\varvec{r}},\bar{\varvec{r}}]\), \(\mathcal {R}\) becomes the singleton \(\mathcal {R}=\{\underline{\varvec{r}}\}\). As \(\varGamma\) increases, the uncertainty set grows; larger total variations in restoration times are allowed, resulting in higher degrees of uncertainty. Notice that we have relaxed the integrality requirement on \(\varvec{r}\) to allow more flexibility in defining uncertainty ranges. Our uncertainty set is general in that it allows the planner to set \(\underline{r}_{ij}=0\) when doubts exist over whether edge (i, j) has been affected at all.
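As a small numerical illustration, with hypothetical lower and upper bounds for three blocked edges, the admissible range of \(\varGamma\) and membership in \(\mathcal {R}\) can be computed directly:

```python
def gamma_range(r_lo, r_hi):
    """Valid range for the budget of uncertainty: at the lower end the
    set collapses to the singleton {r_lo}; at 1 the budget constraint
    is redundant and the set is the full box [r_lo, r_hi]."""
    return sum(r_lo.values()) / sum(r_hi.values()), 1.0

def in_uncertainty_set(r, r_lo, r_hi, gamma):
    """Membership test for the polyhedral uncertainty set R."""
    box = all(r_lo[e] <= r[e] <= r_hi[e] for e in r)
    budget = sum(r.values()) <= gamma * sum(r_hi.values())
    return box and budget

# Hypothetical bounds for three blocked edges labelled 'a', 'b', 'c'.
r_lo = {'a': 1.0, 'b': 2.0, 'c': 1.0}
r_hi = {'a': 3.0, 'b': 5.0, 'c': 2.0}
print(gamma_range(r_lo, r_hi))  # (0.4, 1.0)
print(in_uncertainty_set(r_hi, r_lo, r_hi, gamma=1.0))  # True
print(in_uncertainty_set(r_hi, r_lo, r_hi, gamma=0.5))  # False
```

With \(\varGamma = 1\) the worst case \(\bar{\varvec{r}}\) is admissible, while \(\varGamma = 0.5\) excludes it by capping the total restoration time.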

The robust counterpart of Model (P1) under uncertain restoration times is formulated using a two-stage robust optimization approach as follows:


Decision variables

\(a_{ij}\):

Is 1 if edge (i, j) is selected for restoration, 0 otherwise

\(g_{ij}\):

Dummy integer variable to model flow from node i to node j

G:

Completion time of network restoration

\(x^t_{ij}\):

Is 1 if restoration of edge (i, j) is started at the beginning of period t, 0 otherwise.

$$\begin{aligned}&(P2)\quad \min \text { } \epsilon \sum _{(i,j)\in B}a_{ij} + \max _{\varvec{r}\in \mathcal {R}}Q(\varvec{a},\varvec{r})&\nonumber \\&\text {s.t. } |g_{ij}| \le |N_d| a_{ij}&\quad \forall (i,j)\in B \end{aligned}$$
(7)
$$\begin{aligned}&\sum _{j\in A_i}g_{ji} - \sum _{j\in A_i}g_{ij}= \hat{F}_i&\quad \forall i\in N \end{aligned}$$
(8)
$$\begin{aligned}&\text {All } a_{ij}\in \{0,1\}, g_{ij}\in \mathbb {Z},&\nonumber \\&\text {where } Q(\varvec{a},\varvec{r}) = \min G&\nonumber \\&G \ge t\sum _{(i,j)\in B}x^t_{ij} + \sum _{(i,j)\in B}r_{ij}x^t_{ij}&\quad \forall t\in [T] \end{aligned}$$
(9)
$$\begin{aligned}&\sum _{t\in [T]}x^t_{ij} = a_{ij}&\quad \forall (i,j)\in B \end{aligned}$$
(10)
$$\begin{aligned}&\sum _{(i,j)\in B}x^t_{ij} \le 1&\quad \forall t\in [T]\\&\text {All } x^t_{ij}\in \{0,1\}, G\in \mathbb {R}_+,&\nonumber \end{aligned}$$
(11)

where

$$\begin{aligned} \hat{F}_i=\left\{ \begin{array}{ll} 1 &{} \text {if } i\in N_d\\ -|N_d| &{} \text {if } i\in \{0\}\\ 0 &{} \text {Otherwise. } \end{array} \right. \end{aligned}$$

In the first stage, proactive edge selection is made: a fixed set of edges is selected for restoration after the disaster has struck, before edge restoration time realizations. This allows repair teams to start preparations such as gathering necessary equipment, manpower, and other resources. In the second stage, sequencing of edge restorations is done reactively, that is, subject to restoration time realizations. This means that the planner waits for information on restoration time realizations to decide the sequence in which edge restorations are to be conducted. The robust counterpart selects edges for restoration in such a way that they yield the best worst-case sequence. The term \(\epsilon \sum _{(i,j)\in B}a_{ij}\) is employed for degeneracy reduction.

Constraint (7) ensures that flow is possible on an edge only if it has been chosen for restoration. Constraint (8) ensures that if all the edges in the first-stage selection are restored, the network is restored, meaning that the edge selection is a feasible one. Constraint (9) is necessary to calculate the makespan of an edge restoration sequence. Constraint (10) makes it possible to sequence an edge if and only if it has been selected for restoration in the first stage. Constraint (11) allows only one restoration to be started in a time period, consistent with the deterministic model. Notice that the model has complete recourse, meaning that its second stage is always feasible.

In the robust optimization literature, two-stage models are typically solved with decision rules, which are constructs that approximate the solution space for solvability. When second-stage decisions are real-valued, decision rules in affine or polynomial forms are generally employed. When these decisions are integer-valued, finite adaptability or binary decision rules are usually used. Whilst decision rules enable solvability, they are, nonetheless, approximations that generate sub-optimal solutions in most cases. Interestingly, for our case, the two-stage model can be solved exactly, as shown in the following proposition.

Proposition 2

Model (P2) is equivalent to mixed integer programming model

$$\begin{aligned}&(P3)\text { } \min \text { }\epsilon \sum _{(i,j)\in B}a_{ij} + G&\\&\text {s.t. } |g_{ij}| \le |N_d| a_{ij}&\quad \forall (i,j)\in B\\&\sum _{j\in A_i}g_{ji} - \sum _{j\in A_i}g_{ij}= \hat{F}_i&\quad \forall i\in N\\&\sum _{t\in [T]}x^t_{ij} = a_{ij}&\quad \forall (i,j)\in B \\&\sum _{(i,j)\in B}x^t_{ij} \le 1&\quad \forall t\in [T]\\&G \ge t\sum _{(i,j)\in B}x^t_{ij} + \sum _{(i,j)\in B}(\bar{r}_{ij}\pi ^A_{ijt} - \underline{r}_{ij}\pi ^B_{ijt}) + \varGamma \sum _{(i,j)\in B}\bar{r}_{ij}\pi ^C_t&\quad \forall t\in [T]\\&\pi ^A_{ijt} - \pi ^B_{ijt} + \pi ^C_t \ge x^{t}_{ij}&\quad \forall (i,j)\in B, t\in [T]\\&\sum _{(i,j)\in B}(\bar{r}_{ij}(\pi ^A_{ijt}- \pi ^A_{ijt+1}) - \underline{r}_{ij}(\pi ^B_{ijt}-\pi ^B_{ijt+1}))&\\&+ \varGamma \sum _{(i,j)\in B}\bar{r}_{ij}(\pi ^C_t - \pi ^C_{t+1}) \ge 0&\quad \forall t\in [T-1]\\&\text {All } a_{ij},x^t_{ij}\in \{0,1\}, \pi ^A_{ijt}, \pi ^B_{ijt}, \pi ^C_t\in \mathbb {R}_+, g_{ij}\in \mathbb {Z}.&\end{aligned}$$

Proof

From Proposition 1, we know that the minimum makespan, given a restoration time realization and a feasible set of restored edges, is achieved when the restoration sequence is in decreasing order of restoration times. By adding constraints to enforce that decreasing-order condition, we can therefore convert the optimization model \(Q(\varvec{a},\varvec{r})\) into a feasibility problem. The constraints that model the decreasing-order condition are

$$\begin{aligned} \sum _{(i,j)\in B} r_{ij} x^t_{ij} \ge \sum _{(i,j)\in B}r_{ij} x^{t+1}_{ij}&\quad \forall t\in [T-1]. \end{aligned}$$

Therefore \(\displaystyle \max _{\varvec{r}\in \mathcal {R}}Q(\varvec{a},\varvec{r})\) becomes the model

$$\begin{aligned}&\max _{\varvec{r}\in \mathcal {R}} G&\\&\text {s.t. } \sum _{t\in [T]}x^t_{ij} = a_{ij}&\quad \forall (i,j)\in B \\&\sum _{(i,j)\in B}x^t_{ij} \le 1&\quad \forall t\in [T]\\&G \ge t\sum _{(i,j)\in B}x^t_{ij} + \sum _{(i,j)\in B}r_{ij} x^t_{ij}&\quad \forall t\in [T]\\&G\le t\sum _{(i,j)\in B}x^t_{ij} + \sum _{(i,j)\in B}r_{ij} x^t_{ij} + T(1-d_t)&\quad \forall t\in [T] \\&\sum _{t\in [T]}d_t = 1&\\&\sum _{(i,j)\in B} r_{ij} x^t_{ij} \ge \sum _{(i,j)\in B}r_{ij} x^{t+1}_{ij}&\quad \forall t\in [T-1], \end{aligned}$$

where \(d_t\in \{0,1\}\) is a binary variable indicating whether network restoration is complete at the beginning of time t or not. The binary variable helps establish the definition of G, the completion time of network restoration. Indeed, \(G=t\sum _{(i,j)\in B}x^t_{ij} + \sum _{(i,j)\in B}r_{ij} x^t_{ij}\) if and only if \(d_t =1\). It is clear that the above model can also be expressed as

$$\begin{aligned}&\min G&\\&\text {s.t. } \sum _{t\in [T]}x^t_{ij} = a_{ij}&\quad \forall (i,j)\in B \\&\sum _{(i,j)\in B}x^t_{ij} \le 1&\quad \forall t\in [T]\\&G \ge t\sum _{(i,j)\in B}x^t_{ij} + \max _{\begin{array}{c} \{\varvec{r}\in \mathcal {R}:\sum _{(i,j)\in B} r_{ij} x^t_{ij} \ge \\ \sum _{(i,j)\in B}r_{ij} x^{t+\tau }_{ij},\forall \tau \in [T-t],\\ \sum _{(i,j)\in B} r_{ij} x^t_{ij} \le \\ \sum _{(i,j)\in B}r_{ij} x^{t-\tau }_{ij},\forall \tau \in [t-1]\} \end{array}}\sum _{(i,j)\in B}r_{ij} x^t_{ij}&\quad \forall t\in [T]. \end{aligned}$$

The classical way to proceed from this point in robust optimization is to dualize the inner maximization problem and infer the final mixed integer model from strong duality. However, direct dualization yields a non-linear model that requires linearization with additional variables. In our case, a dualization requiring fewer additional variables is possible because the decreasing-order condition is expressed as constraints involving the objective function of the inner maximization problem. Thus, the model can be re-written as

$$\begin{aligned}&\min G&\\&\text {s.t. } \sum _{t\in [T]}x^t_{ij} = a_{ij}&\quad \forall (i,j)\in B\\&\sum _{(i,j)\in B}x^t_{ij} \le 1&\quad \forall t\in [T]\\&G \ge t\sum _{(i,j)\in B}x^t_{ij} + \rho ^t&\quad \forall t\in [T]\\&\rho ^t \ge \max _{\varvec{r}\in \mathcal {R}} \sum _{(i,j)\in B}r_{ij} x^t_{ij}&\quad \forall t\in [T]\\&\rho ^t \ge \rho ^{t+1}&\quad \forall t\in [T-1]. \end{aligned}$$

Dualizing the inner maximization problem, and because of strong duality, the above model becomes

$$\begin{aligned}&\min G&\\&\text {s.t. } \sum _{t\in [T]}x^t_{ij} = a_{ij}&\quad \forall (i,j)\in B \\&\sum _{(i,j)\in B}x^t_{ij} \le 1&\quad \forall t\in [T]\\&G \ge t\sum _{(i,j)\in B}x^t_{ij} + \rho ^t&\quad \forall t\in [T]\\&\rho ^t \ge \sum _{(i,j)\in B}(\bar{r}_{ij}\pi ^A_{ijt} - \underline{r}_{ij}\pi ^B_{ijt}) + \varGamma \sum _{(i,j)\in B}\bar{r}_{ij}\pi ^C_t&\quad \forall t\in [T]\\&\pi ^A_{ijt} - \pi ^B_{ijt} + \pi ^C_t \ge x^{t}_{ij}&\quad \forall t\in [T]\\&\rho ^t \ge \rho ^{t+1}&\quad \forall t\in [T-1]. \end{aligned}$$

Combining with the first-stage model, we obtain the model in the proposition. \(\square\)

4.1 Illustrative example with uncertain restoration times

Suppose that for the network shown in Fig. 1, the restoration times are uncertain, with the uncertainty expressed as ranges in hours. The ranges are chosen randomly in such a way that they contain the nominal values used for the deterministic network. A value of \(\varGamma = 0.6\) is used.

In the optimal robust solution, edges (5,11), (8,9), (9,12), (7,10), (1,11), (0,2) are chosen for restoration. Table 2 shows the worst-case makespan of optimal edge selections from the robust optimization model, as compared to that of the edge selection from the deterministic model (P1).

Table 2 Comparison of worst-case results for edge selections from the deterministic model against the robust optimization model

The worst-case makespan obtained from the robust optimization model is \(14.1-1=13.1\) h, whereas that obtained from the deterministic model is \(21.2-1=20.2\) h. Even though the deterministic model yields a smaller number of routes to restore, its worst-case network completion time from optimal sequencing is around 7 h longer than that of the robust optimization model. One can easily check that both edge selections make all demand points fully accessible.

4.2 Discussions on concurrent restoration jobs

In the models described so far, restoration jobs can be conducted in parallel, meaning that the restoration of the next edge can be started before the completion of the restoration of the previous edge. Table 3 shows the number of concurrent jobs in every time period in our worst-case makespan solutions.

Table 3 Comparison of the number of concurrent jobs, \(N_c\), in the worst-case makespan solutions

As discussed before, the term \(\epsilon \sum _{(i,j)\in B}a_{ij}\) in Model (P3) ensures that, among multiple optimal solutions, the one with the smallest number of restored routes is chosen. Starting a restoration job involves equipment and crew setups, which can be avoided by choosing fewer routes. However, as Table 3 shows, fewer routes may also mean more concurrent jobs, and the number of jobs is limited by the crews and equipment available. By varying \(\epsilon\), the planner can vary the number of routes restored and, as a result, the number of concurrent jobs implied by the solution. For instance, a small negative \(\epsilon\) would maximize the number of routes restored without sacrificing the makespan, if multiple optimal solutions exist. This method, however, is arbitrary in terms of how many concurrent jobs will be carried out: the planner cannot specify the number of concurrent jobs that his/her resources permit. Such a specification would require additional constraints, which would invalidate the optimality of the restoration time ordering decision rule and thus impact the tractability and solvability of the robust counterpart. In addition, a pre-defined number of concurrent jobs is hard to calculate, given that it depends on the amount of equipment available, the manpower at the planner's disposal, the distance of crews and equipment from the disaster area, the contribution of resources from agencies and third parties, and so on.

4.3 Sequential restoration jobs

In numerous cases, though, simultaneous restorations are impossible to conduct: for example, when one road can only be accessed after another has been restored, when there is a single restoration team at work, or when only one set of equipment is available. Interestingly, in that case, the model can be reformulated with the same level of tractability using time period enlargement/contraction. The underlying principle is to redefine time periods so that every job started in a time period is completed by the end of it.

Mathematically, this means that \(\bar{\varvec{r}}\le \varvec{1}\). If this condition is not met in the original problem description, such as in the network used for our illustrative example, it can be enforced by altering the actual time window that a time period represents. Any RRP can be converted into one with sequential restoration jobs by defining time period t to be a window wherein any job that starts can be completed. In this section, it is helpful to understand \(r_{ij}\) as the number of time periods necessary to complete the restoration of edge (ij). A conservative way to alter the time period definition would be to define it (taken to be in hours for clarity here) as a \(\max _{(i,j)\in B}\{\bar{r}_{ij}\}\)-hour time window. The number of time periods necessary to complete the restoration of edge (ij) then becomes \(\dfrac{r_{ij}}{\max _{(i,j)\in B}\{\bar{r}_{ij}\}}\), which is always less than or equal to 1, \(\forall (i,j)\in B\).
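As a minimal sketch of this rescaling (with hypothetical upper bounds in hours), the conservative window choice and the resulting per-period restoration times can be computed as:

```python
def rescale_to_sequential(r_bar_hours, window=None):
    # Re-express restoration times in time-period units so that every job
    # completes within one period (r'_ij <= 1). The conservative default
    # window is max_{(i,j) in B} r_bar_ij.
    if window is None:
        window = max(r_bar_hours.values())
    return {e: r / window for e, r in r_bar_hours.items()}, window

# Hypothetical upper bounds (hours) for three blocked edges.
r_bar = {(0, 2): 24.7, (1, 11): 12.3, (9, 12): 6.5}
scaled, rho = rescale_to_sequential(r_bar)
assert rho == 24.7 and all(v <= 1 for v in scaled.values())
```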

From a model formulation viewpoint, the robust optimization model (P3) with sequential restoration jobs has an alternative formulation with fewer decision variables and constraints. This simplification is especially useful when networks with large numbers of nodes and edges (especially of blocked edges) are considered. The alternative formulation is derived in the following proposition.

Proposition 3

If \(\bar{r}_{ij}\le 1\), \(\forall (i,j)\in B\) in uncertainty set \(\mathcal {R}\), problem (P2) is equivalent to

$$\begin{aligned}&(P4)\text { } \min \sum _{(i,j)\in B}\big (a_{ij} + \bar{r}_{ij}\pi ^A_{ij} - \underline{r}_{ij} \pi ^B_{ij} + \varGamma \bar{r}_{ij}\pi ^C + \bar{r}_{ij}\pi ^E_{ij}\big )\\&\text {s.t. } |g_{ij}| \le |N_d| a_{ij}&\quad \forall (i,j)\in B\\&\sum _{j\in A_i}g_{ji} - \sum _{j\in A_i}g_{ij}= \hat{F}_i&\quad \forall i\in N\\&\sum _{(i,j)\in B}\pi ^D_{ij} \ge 1 \\&\pi ^A_{ij} - \pi ^B_{ij} + \pi ^C - \pi ^D_{ij} \ge 0&\quad \forall (i,j)\in B\\&\pi ^E_{ij} \le \pi ^D_{ij}&\quad \forall (i,j)\in B\\&\pi ^E_{ij} \ge \pi ^D_{ij} - Ma_{ij}&\quad \forall (i,j)\in B\\&\pi ^E_{ij} \le M(1-a_{ij})&\quad \forall (i,j)\in B\\&\text {All } a_{ij}\in \{0,1\}, g_{ij}\in \mathbb {Z}, \pi ^A_{ij},&\pi ^B_{ij}, \pi ^C, \pi ^D_{ij}, \pi ^E_{ij}\in \mathbb {R}_+, \end{aligned}$$

where M is a big number.

Proof

Let \(\mathcal {D}=\{\big ((i_t,j_t)\big )_{t\in [|B^*|]}\in \mathcal {S}:r_{i_1j_1}\ge r_{i_2j_2}\ge \dots \ge r_{i_{|B^*|}j_{|B^*|}}\}\) be a sequence in descending order of restoration times for a feasible set of restored edges \(B^*\) and a set \(\mathcal {S}\) of all possible edge restoration sequences in \(B^*\). If \(r_{i_tj_t} \le 1\), \(\forall t\in [|B^*|]\), then \(r_{i_tj_t} - r_{i_{|B^*|}j_{|B^*|}} \le 1 \le |B^*| - t\), \(\forall t\in [|B^*| - 1]\). Therefore \(|B^*| + r_{i_{|B^*|}j_{|B^*|}} \ge t + r_{i_tj_t}\), \(\forall t\in [|B^*| - 1]\) and \(\displaystyle \max _{t\in [|B^*|]}\{t+r_{i_tj_t}\} = |B^*| + r_{i_{|B^*|}j_{|B^*|}}\). As such, Model (P2) can be expressed as

$$\begin{aligned}&\min&\sum _{(i,j)\in B}a_{ij} + \max _{\varvec{r}\in \mathcal {R}}Q(\varvec{a},\varvec{r})&\\&\text {s.t. } |g_{ij}| \le |N_d| a_{ij}&\quad \forall (i,j)\in B\\&\sum _{j\in A_i}g_{ji} - \sum _{j\in A_i}g_{ij}= \hat{F}_i&\quad \forall i\in N\\&\text {All } a_{ij}\in \{0,1\}, g_{ij}\in \mathbb {Z},&\\&\text {where }&Q(\varvec{a},\varvec{r}) = \max \underline{R}&\\&\underline{R} \le r_{ij} + \bar{r}_{ij}(1-a_{ij})&\quad \forall (i,j)\in B \\&\underline{R}\in \mathbb {R}_+.&\end{aligned}$$

Dualizing the maximization problem \(\displaystyle \max _{\varvec{r}\in \mathcal {R}}Q(\varvec{a},\varvec{r})\), we obtain

$$\begin{aligned}&\min \sum _{(i,j)\in B}\big (\bar{r}_{ij}\pi ^A_{ij} - \underline{r}_{ij}\pi ^B_{ij} + \varGamma \bar{r}_{ij}\pi ^C + \bar{r}_{ij}(1-a_{ij})\pi ^D_{ij}\big ) \\&\sum _{(i,j)\in B}\pi ^D_{ij} \ge 1 \\&\pi ^A_{ij} - \pi ^B_{ij} + \pi ^C - \pi ^D_{ij} \ge 0 \quad \forall (i,j)\in B\\&\text {All } \pi ^A_{ij}, \pi ^B_{ij}, \pi ^C, \pi ^D_{ij}\in \mathbb {R}_+. \end{aligned}$$

Since \(\bar{r}_{ij}(1-a_{ij})\pi ^D_{ij}\) is nonlinear, we perform a substitution using \(\pi ^E_{ij}\in \mathbb {R}_+\), where \(\pi ^E_{ij}=(1-a_{ij})\pi ^D_{ij}\), and add the constraints

$$\begin{aligned}&\pi ^E_{ij} \le \pi ^D_{ij}&\quad \forall (i,j)\in B\\&\pi ^E_{ij} \ge \pi ^D_{ij} - Ma_{ij}&\quad \forall (i,j)\in B\\&\pi ^E_{ij} \le M(1-a_{ij})&\quad \forall (i,j)\in B. \end{aligned}$$

The third constraint, together with the non-negativity of \(\pi ^E_{ij}\), ensures that when \(a_{ij}=1\), \(\pi ^E_{ij}=0\), and the first two constraints guarantee that when \(a_{ij}=0\), \(\pi ^E_{ij}=\pi ^D_{ij}\). With the linearized dual, Model (P2) becomes a single-stage mixed integer programming model. \(\square\)
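This case analysis can be sanity-checked numerically. The sketch below is a hypothetical helper, with M an assumed valid upper bound on \(\pi ^D_{ij}\):

```python
def linearization_feasible(a, piD, piE, M=1e3, eps=1e-9):
    # The three added constraints plus piE >= 0; for binary a they force
    # piE = (1 - a) * piD.
    return (piE <= piD + eps and
            piE >= piD - M * a - eps and
            piE <= M * (1 - a) + eps and
            piE >= 0)

# a = 1 forces piE = 0; a = 0 forces piE = piD.
assert linearization_feasible(1, 7.2, 0.0)
assert not linearization_feasible(1, 7.2, 7.2)
assert linearization_feasible(0, 7.2, 7.2)
assert not linearization_feasible(0, 7.2, 3.0)
```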

To apply Model (P4) to the network in Fig. 2, we define our time period as \(\max _{(i,j)\in B}\{\bar{r}_{ij}\}= 24.7\) hours. The optimal edge selection generated is (9,12), (0,5), (3,6), (1,11), (0,2). The worst-case optimal sequencing is \((3,6)\rightarrow (0,5) \rightarrow (0,2)\rightarrow (1,11) \rightarrow (9,12)\), where \(\rightarrow\) represents “followed by”. The worst-case makespan is \((5.132 -1)\) time periods, i.e., \(4.132\times 24.7=102.1\) h. This is significantly larger than the worst-case makespan obtained when simultaneous job starts are allowed. While it is no surprise that sequential restoration jobs lead to a larger makespan than concurrent jobs, the size of the difference is noticeably large. The main reason for this big jump is the amount of idle time between jobs, a direct result of increasing the hours contained per time period from 1 to 24.7. This increase is conservative in that it forces \(r_{ij}\) to be less than or equal to 1, for all \((i,j)\in B\).

Fig. 2

Network for illustrative example with uncertain restoration times

The conservatism can be reduced by decreasing the time window that a time period represents. Let \(\rho\) be the time window that a time period represents. In the above example, \(\rho =24.7\) h and the optimal objective value (representing the worst-case completion time of network restoration) is \(\sum _{(i,j)\in B}a^*_{ij}\) + \(\min _{\{(i,j)\in B: a^*_{ij} = 1\}}\{r^*_{ij}\} = 5 + 0.132\), where \(\varvec{a^*}\) is the optimal solution of \(\varvec{a}\) and \(\varvec{r^*}\) is the worst-case restoration time realization. Model (P4) is also an equivalent formulation of Model (P2) under the looser condition \(\bar{r}_{ij}\le 1\), \(\forall \{(i,j)\in B: a^*_{ij} = 1\}\) (this can be verified from the proof of Proposition 3). Under a less conservative time-window where \((\max _{\{(i,j)\in B: a^*_{ij} = 1\}}\{\bar{r}_{ij}\}\times 24.7)\le \rho <24.7\) h, if it exists, the new restoration time upper bound \(\bar{r}^{'}_{ij}\) for an edge (ij) becomes \(\bar{r}^{'}_{ij}=\dfrac{\bar{r}_{ij} \times 24.7}{\rho }\), where it is easy to see that \(\bar{r}^{'}_{ij}\le 1\), \(\forall \{(i,j)\in B: a^*_{ij} = 1\}\). The worst-case completion time for the same edge selection is therefore \(\sum _{(i,j)\in B}a^*_{ij} + \min _{\{(i,j)\in B: a^*_{ij} = 1\}}\{r^{'*}_{ij}\} = 5 +\dfrac{0.132 \times 24.7}{\rho } \le 6.\)

We can show that \(\varvec{a^*}\) is also optimal under the new time window definition \((\max _{\{(i,j)\in B: a^*_{ij} = 1\}}\{\bar{r}_{ij}\}\times 24.7)\le \rho <24.7\). We first note that the number of edges selected for restoration under the 24.7-hour time window is the smallest possible; if a feasible selection with fewer edges existed, it would have been optimal. Since the worst-case completion time with \(\varvec{a^*}\) is less than or equal to \(\sum _{(i,j)\in B}a^*_{ij} + 1\), a larger number of selected edges is also not possible. It is also clear that no replacement of selected edges can improve \(\min _{\{(i,j)\in B: a^*_{ij} = 1\}}\{r^{'*}_{ij}\}\) and thus, the optimal solution remains the same. For our illustrative example, \(\max _{\{(i,j)\in B: a^*_{ij} = 1\}}\{\bar{r}_{ij}\}\times 24.7=20.2\) h is the smallest time window that gives the same optimal edge selection. The new optimal objective value is \(5 +\dfrac{0.132 \times 24.7}{20.2}=5.161\) time periods, corresponding to a makespan of \((5.161 -1)\) time periods, or \(4.161\times 20.2=84.1\) h, a significant improvement over the 102.1 h obtained with the 24.7-h time window. The time window definition is impactful in practice: the re-definition to 20.2 h means that the restoration team moves to the next job after 20.2 h instead of 24.7 h, with less idle time between jobs.
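The arithmetic of this section can be reproduced directly; a small sketch, where the values 5 and 0.132 are taken from the illustrative example above:

```python
def makespan_hours(n_edges, residual, rho):
    # Worst-case completion time in periods is |B*| + rescaled residual,
    # where the residual (measured under the 24.7-h window) rescales by
    # 24.7 / rho; the makespan drops the first period and converts to hours.
    objective = n_edges + residual * 24.7 / rho
    return (objective - 1) * rho

assert round(makespan_hours(5, 0.132, 24.7), 1) == 102.1
assert round(makespan_hours(5, 0.132, 20.2), 1) == 84.1
```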

4.4 Sequencing without a priori restoration time knowledge

In the previous sections, sequencing is done in response to restoration time realizations, with the objective of optimizing the worst case of the sequencing strategy over all possible realizations. This a priori knowledge of uncertainty realizations may not necessarily be available in some disaster situations. For instance, resources may not be available to conduct precise damage assessments and produce exact restoration time realizations, or the disaster may be unprecedented. In the chaotic post-disaster environment, unexpected events are bound to happen, such as traffic slowing access to routes, aftershocks from an earthquake derailing response strategies and complicating damage assessments, or interacting disasters worsening damages, such as the nuclear meltdown that followed the tsunami during the Tohoku disaster. While the two-stage framework is very helpful in guiding decision-making, it may face practical challenges in more complex situations because of its reliance on restoration time realizations. From a modelling perspective, the lack of a priori restoration time knowledge means that sequencing edge restorations in decreasing order of restoration times is not possible and, therefore, that our decision rule is not implementable. The decision-maker has to envisage the possibility that the chosen restoration sequence is such that the edge with the highest restoration time realization is started last, delaying the completion time as much as possible. The decision-making problem thus becomes

$$\begin{aligned}&(P5)\text { } \min \sum _{(i,j)\in B}a_{ij} + \max _{\varvec{r}\in \mathcal {R}} \{\bar{R}: \bar{R} \ge r_{ij} - \bar{r}_{ij}(1-a_{ij}), \forall (i,j)\in B\} \\&\text {s.t. } |g_{ij}| \le |N_d| a_{ij}&\quad \forall (i,j)\in B\\&\sum _{j\in A_i}g_{ji} - \sum _{j\in A_i}g_{ij}= \hat{F}_i&\quad \forall i\in N\\&\text {All } a_{ij}\in \{0,1\}, g_{ij}\in \mathbb {Z}, \bar{R}\in \mathbb {R}_+.&\end{aligned}$$

Model (P5) finds the best worst-case sequence, with the understanding that the worst makespan achievable by any sequence occurs when the edge with the highest restoration time is started last. In traditional robust optimization methods, the inner maximization problem would be dualized and the resulting single-stage model linearized to obtain the final mixed integer programming problem. However, this augments the complexity of the model with dual variables and additional linearization variables, thus decreasing its scalability. In the following proposition, we prove that there is a more scalable formulation of Model (P5).

Proposition 4

Model (P5) is equivalent to

$$\begin{aligned}&(P6)\text { }\min \sum _{(i,j)\in B}a_{ij} + \bar{R}&\\&\text {s.t. } |g_{ij}| \le |N_d| a_{ij}&\quad \forall (i,j)\in B\\&\sum _{j\in A_i}g_{ji} - \sum _{j\in A_i}g_{ij}= \hat{F}_i&\quad \forall i\in N\\&\bar{R} \ge L_{ij}&\quad \forall (i,j)\in B\\&L_{ij} \ge \bar{r}_{ij}y_{ij1}&\quad \forall (i,j)\in B\\&L_{ij} \ge (\underline{r}_{ij} + \sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}))y_{ij2}&\quad \forall (i,j)\in B\\&y_{ij1} + y_{ij2} = a_{ij}&\quad \forall (i,j)\in B\\&\text {All } a_{ij},y_{ij1},y_{ij2}\in \{0,1\}, g_{ij}\in \mathbb {Z}, \bar{R},L_{ij}\in \mathbb {R}_+.&\end{aligned}$$

Proof

Let \(\mathcal {F}\) be a family of feasible sets of restored edges and \(\mathcal {S}_{B^*}\) be the set of all possible edge restoration sequences for \(B^* \in \mathcal {F}\). Define \(((i_t,j_t))_{t\in [|B^*|]}\in \mathcal {S}_{B^*}\) as an edge restoration sequence, where \((i_t,j_t)\) is the edge whose restoration starts in the \(t^{th}\) period. Model (P5) can be rewritten in a concise manner as

$$\begin{aligned}&\min _{B^* \in \mathcal {F}}\max _{\begin{array}{c} \varvec{r}\in \mathcal {R}\\ ((i_t,j_t))_{t\in [|B^*|]}\in \mathcal {S}_{B^*} \end{array}} \{|B^*| + r_{i_{|B^*|}j_{|B^*|}}\}\\&{\mathop {=}\limits ^{\text{(i) }}}\min _{B^* \in \mathcal {F}}\max _{((i_t,j_t))_{t\in [|B^*|]}\in \mathcal {S}_{B^*}} \{|B^*| + r_{i_{|B^*|}j_{|B^*|}}: \underline{r}_{i_{t}j_{t}}\le r_{i_{t}j_{t}} \le \bar{r}_{i_{t}j_{t}}, \forall t\in [|B|],\\&\qquad \qquad \sum _{t\in [|B|]}r_{i_{t}j_{t}}\le \varGamma \sum _{t\in [|B|]}\bar{r}_{i_{t}j_{t}} \}, \end{aligned}$$

where equality (i) follows from the definition of our uncertainty set and, for simplicity, we denote by \(\{(i_{t},j_{t})\}_{t>|B^*|}\) the set of edges not in \(B^*\). It is clear that \(r_{i_{t}j_{t}} = \underline{r}_{i_{t}j_{t}}\), \(\forall t\in [|B|]\setminus \{|B^*|\}\). Therefore, Model (P5) becomes

$$\begin{aligned}&\min _{B^* \in \mathcal {F}}\max _{((i_t,j_t))_{t\in [|B^*|]}\in \mathcal {S}_{B^*}} \{|B^*| + r_{i_{|B^*|}j_{|B^*|}}: \underline{r}_{i_{|B^*|}j_{|B^*|}}\le r_{i_{|B^*|}j_{|B^*|}} \le \bar{r}_{i_{|B^*|}j_{|B^*|}},\\&\qquad \qquad \qquad \qquad r_{i_{|B^*|}j_{|B^*|}}\le \sum _{t\in [|B|]}(\varGamma \bar{r}_{i_{t}j_{t}} - \underline{r}_{i_{t}j_{t}}) + \underline{r}_{i_{|B^*|}j_{|B^*|}}\}\\&{\mathop {=}\limits ^{\text{(ii) }}}\min _{B^* \in \mathcal {F}}\max _{((i_t,j_t))_{t\in [|B^*|]}\in \mathcal {S}_{B^*}} \{|B^*| + r_{i_{|B^*|}j_{|B^*|}}: \underline{r}_{i_{|B^*|}j_{|B^*|}}\le r_{i_{|B^*|}j_{|B^*|}} \le \min \{\bar{r}_{i_{|B^*|}j_{|B^*|}},\\&\qquad \qquad \qquad \qquad \sum _{t\in [|B|]}(\varGamma \bar{r}_{i_{t}j_{t}} - \underline{r}_{i_{t}j_{t}}) + \underline{r}_{i_{|B^*|}j_{|B^*|}}\} \}\\&{\mathop {=}\limits ^{\text{(iii) }}}\min _{B^* \in \mathcal {F}}\{|B^*| + \max _{((i_t,j_t))_{t\in [|B^*|]}\in \mathcal {S}_{B^*}} \{\min \{\bar{r}_{i_{|B^*|}j_{|B^*|}},\sum _{t\in [|B|]}(\varGamma \bar{r}_{i_{t}j_{t}} - \underline{r}_{i_{t}j_{t}}) + \underline{r}_{i_{|B^*|}j_{|B^*|}}\}\}\}\\&{\mathop {=}\limits ^{\text{(iv) }}}\min _{B^* \in \mathcal {F}}\{|B^*| + \max _{(i,j)\in B^*} \{\min \{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\}\}. \end{aligned}$$

Equality (ii) follows from the fact that \(r_{i_{|B^*|}j_{|B^*|}} \le \bar{r}_{i_{|B^*|}j_{|B^*|}}\) and \(r_{i_{|B^*|}j_{|B^*|}}\le \sum _{t\in [|B|]}(\varGamma \bar{r}_{i_{t}j_{t}} - \underline{r}_{i_{t}j_{t}}) + \underline{r}_{i_{|B^*|}j_{|B^*|}}\). Equality (iii) is valid because the worst makespan possible by any sequence happens when the edge with the highest restoration time is started last. Equality (iv) is a reformulation to our original model notation, instead of the sequence notation. The model obtained after equality (iv) is equivalent to the model in the proposition. \(\square\)
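The closed form obtained after equality (iv) makes the (P5) objective cheap to evaluate for any candidate selection; a sketch on hypothetical data:

```python
def wmwa_objective(selection, r_lo, r_hi, Gamma):
    # |B*| + max over selected edges of min{r_hi_ij, slack + r_lo_ij},
    # where slack = sum over all blocked edges of (Gamma * r_hi - r_lo),
    # matching the expression obtained after equality (iv).
    slack = sum(Gamma * r_hi[e] - r_lo[e] for e in r_hi)
    worst_last = max(min(r_hi[e], slack + r_lo[e]) for e in selection)
    return len(selection) + worst_last

# Hypothetical bounds for three blocked edges, two of them selected.
r_lo = {('a', 'b'): 0.5, ('b', 'c'): 1.0, ('c', 'd'): 0.2}
r_hi = {('a', 'b'): 2.0, ('b', 'c'): 3.0, ('c', 'd'): 1.5}
val = wmwa_objective([('a', 'b'), ('c', 'd')], r_lo, r_hi, Gamma=0.6)
assert abs(val - 4.0) < 1e-9
```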

When Model (P6) is implemented on the network in Fig. 2, the optimal edge selection is (5,11), (8,9), (9,12), (7,10), (1,11), (0,2), which gives a worst-case makespan without a priori restoration time knowledge (WMWA) of 18.3 h. This is the same as the optimal edge selection for Model (P3). The optimal edge selection for the deterministic model (P1) gives a WMWA of 25.2 h. The fact that Models (P6) and (P3) give the same optimal objective value in this illustrative case study hints at some equivalence between the two models under certain conditions. Identifying these conditions would be useful for reducing computational complexity in the robust optimization model. Indeed, the model without a priori restoration time knowledge (P6) has fewer variables and constraints than the original robust optimization model (P3). In the proposition below, we prove the conditions under which these two models are equivalent, which, interestingly, are those for a specific type of network restoration process. We start by defining the term “makespan position” before proceeding to prove the proposition.

Definition

A time \(\tau\) is called a makespan position when the optimal worst-case edge restoration makespan occurs at \(\tau\).

Proposition 5

Let \(B^w=\{(i,j)\in B: a^*_{ij}=1\}\) be the optimal edge selection set for Model (P5), where \(\varvec{a}^*\) is the optimal value of \(\varvec{a}\). If the conditions:

1. the makespan position for Model (P2) is equal to \(|B^w|\);

2. the optimal edge selection set cardinality for Model (P2) is equal to \(|B^w|\);

3. \(\max _{(i,j)\in B^w} \{\min \{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\} = \max _{(i,j)\in B} \{\min \{\bar{r}_{ij}, \sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\}\);

4. \(\min _{(i,j)\in B^w} \{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\} = \min _{(i,j)\in B}\{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\)

are satisfied, then \(B^w\) is also an optimal edge selection set for Model (P2).

Proof

Suppose \(B^a\) is the optimal edge selection set of Model (P2). Define the term \(\bar{W}^S = \max _{(i,j)\in S} \{\min \{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\}\) and the term \(\underline{W}^S = \min _{(i,j)\in S} \{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\). Since Model (P5) can be concisely represented as \(\min _{B^* \in \mathcal {F}}\{|B^*| + \max _{(i,j)\in B^*} \{\min \{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\}\}\) (from the proof of Proposition 4), where \(\mathcal {F}\) is a family of feasible sets of restored edges, \(|B^w| + \bar{W}^{B^w} \le |B^a| + \bar{W}^{B^a}\). From condition 2., \(|B^w|=|B^a|\), implying that \(\bar{W}^{B^w} \le \bar{W}^{B^a}\).

From condition 1., the makespan position for Model (P2) is \(|B^a|\), and knowing from Proposition 1 that sequencing of edge restoration in descending order of the restoration time is optimal, it is clear that Model (P2) is equivalent to \(\min _{B^* \in \mathcal {F}}\{|B^*| + \min _{(i,j)\in B^*} \{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\}\) (the proof follows the same logic as that of Proposition 4). This implies that \(|B^w| + \underline{W}^{B^w} \ge |B^a| + \underline{W}^{B^a}\) and thus, \(\underline{W}^{B^w} \ge \underline{W}^{B^a}\).

From condition 3., \(\bar{W}^{B^w} = \max _{(i,j)\in B} \{\min \{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\}\), implying that \(\bar{W}^{B^w} \ge \bar{W}^{B^a}\). Since it was shown that \(\bar{W}^{B^w} \le \bar{W}^{B^a}\), we can conclude that \(\bar{W}^{B^w} = \bar{W}^{B^a}\). From condition 4., \(\underline{W}^{B^w} = \min _{(i,j)\in B}\{\bar{r}_{ij},\sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\}\), implying that \(\underline{W}^{B^w} \le \underline{W}^{B^a}\), meaning that \(\underline{W}^{B^w} = \underline{W}^{B^a}\). Therefore, \(B^w\) is also an optimal edge selection set for Model (P2). \(\square\)

Conditions 2., 3., and 4. in Proposition 5 are automatically satisfied in the case where the restoration of all network edges is required. While hitherto our models have focused on disaster response, where the priority is to restore access to demand points, the restoration of all edges is a case of longer-term disaster recovery, where the network is restored back to the state it was in before the occurrence of the disaster. Condition 1. is always satisfied in the case of sequential restoration jobs (\(\bar{\varvec{r}}\le \varvec{1}\)). Therefore, optimizing the restoration of all edges of a network in sequential restoration jobs is similar to optimizing the restoration of all edges of the network without a priori restoration time knowledge.

The purpose of Proposition 5 is to show the conditions under which the planner can solve the reduced Model (P6), which is the solvable form of (P5), instead of the less scalable Model (P3), which is the solvable form of (P2). Without concurrent jobs, Model (P3) reduces to a simpler model anyway, as shown in Proposition 3. However, Condition 1. is met in other circumstances as well. It is useful, then, to define a probability bound on the makespan position being at |B| without the restriction \(\bar{\varvec{r}}\le \varvec{1}\).

Proposition 6

Suppose a value of \(\varGamma\) is chosen such that \(\varGamma \ge \max _{(i,j)\in B}\dfrac{\bar{r}_{ij} - \underline{r}_{ij} + \sum _{(k,l)\in B}\underline{r}_{kl}}{\sum _{(k,l)\in B}\bar{r}_{kl}}\). For independent graphs with \(|B| (\ge 2)\) blocked edges and independent restoration times, where the theoretical range of restoration time upper bounds is given by \(\varDelta W\), the probability of the optimal worst-case makespan occurring at position |B| has a lower bound of

$$\begin{aligned}&\prod _{a=2}^{|B|}\bigg (1- e^{-(a/\varDelta W)^2}\bigg ). \end{aligned}$$

Proof

Define \(\bar{\mathfrak {r}}_t\) to be the \(t^{th}\) smallest \(\bar{r}_{ij}\) over all \((i,j)\in B\). For independent graphs with \(|B| (\ge 2)\) blocked edges, \(\bar{\mathfrak {r}}_t\) is a random variable. Since we know that in Model (P3) the optimal edge restoration sequence is in decreasing order of restoration times and the objective is to optimize the worst-case makespan over all uncertainty realizations, |B| is the makespan position (restoration of all |B| edges required) when \(|B| + \bar{\mathfrak {r}}_{|B|} \ge t + \bar{\mathfrak {r}}_t\), \(\forall t\in [|B|-1]\). This can be clearly seen because when \(\varGamma \ge \max _{(i,j)\in B}\dfrac{\bar{r}_{ij} - \underline{r}_{ij} + \sum _{(k,l)\in B}\underline{r}_{kl}}{\sum _{(k,l)\in B}\bar{r}_{kl}}\), we have \(\bar{r}_{ij} \le \sum _{(k,l)\in B}(\varGamma \bar{r}_{kl} - \underline{r}_{kl}) + \underline{r}_{ij}\), \(\forall (i,j)\in B\), meaning that \(\bar{r}_{ij}\) is achievable at the optimal makespan position. We therefore want to find \(\mathbb {P}(|B| + \bar{\mathfrak {r}}_{|B|} \ge t + \bar{\mathfrak {r}}_t, \forall t\in [|B|-1])\). Due to graph independence, and assuming the restoration times are independent, this is equivalent to \(\prod _{t=1}^{|B|-1}\mathbb {P}(|B| + \bar{\mathfrak {r}}_{|B|} \ge t + \bar{\mathfrak {r}}_t)\). We therefore have

$$\begin{aligned}\prod _{t=1}^{|B|-1}\mathbb {P}(|B| + \bar{\mathfrak {r}}_{|B|} \ge t + \bar{\mathfrak {r}}_t)&=\prod _{t=1}^{|B|-1}\mathbb {P}(\bar{\mathfrak {r}}_t - \bar{\mathfrak {r}}_{|B|} \le |B| -t)\\&=\prod _{t=1}^{|B|-1}\bigg (1- \mathbb {P}(\bar{\mathfrak {r}}_t - \bar{\mathfrak {r}}_{|B|} \ge |B| -t + 1)\bigg )\\&{\mathop {\ge }\limits ^{\text{(i) }}} \prod _{t=1}^{|B|-1}\bigg (1- e^{-\dfrac{2(|B| -t + 1)^2}{2(\bar{W}-\underline{W})^2}}\bigg )\\&= \prod _{t=1}^{|B|-1}\bigg (1- e^{-\bigg (\dfrac{|B| -t + 1}{\bar{W}-\underline{W}}\bigg )^2}\bigg )\\&= \prod _{a=2}^{|B|}\bigg (1- e^{-(a/\varDelta W)^2}\bigg ), \end{aligned}$$

where (i) is obtained from Hoeffding’s inequality. \(\square\)
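The bound is inexpensive to evaluate; the sketch below computes \(\prod _{a=2}^{|B|}\big (1- e^{-(a/\varDelta W)^2}\big )\), reproducing for instance the value 0.815 for |B| = 1000 and \(\varDelta W = 1.5\) used later in the discussion:

```python
import math

def makespan_position_bound(n_blocked, delta_w):
    # prod_{a=2}^{|B|} (1 - exp(-(a / Delta W)^2)): lower bound on the
    # probability that the optimal worst-case makespan occurs at position |B|.
    prob = 1.0
    for a in range(2, n_blocked + 1):
        prob *= 1.0 - math.exp(-(a / delta_w) ** 2)
    return prob

# |B| = 1000 blocked edges, restoration times at most 1.5 time periods.
assert round(makespan_position_bound(1000, 1.5), 3) == 0.815
```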

The variations of the lower bound on the probability of the optimal worst-case makespan occurring at position |B| with different |B| and \(\varDelta W\) are illustrated in Fig. 3.

Fig. 3

Variations of the lower bound on the probability of the optimal worst-case makespan occurring at position |B|

We first observe that there is an asymptotic behaviour as |B| increases. The curves become closer to each other because the extra term in \(\prod _{a=2}^{|B|}\bigg (1- e^{-(a/\varDelta W)^2}\bigg )\) tends to 1. Figure 3 provides an interesting guide to choosing time windows. When a time window is chosen such that \(\bar{\varvec{r}}\le \varvec{1}\), we know that Model (P3) reduces to the simpler Model (P4). Figure 3 shows that for the case where network recovery is required, even when a larger time window is chosen, there is still a minimum probability—which may be high—of a simpler model (notably Model (P6)) providing the optimal solution. For instance, suppose a large network with 1000 blocked edges causes intractability in Model (P4). If the simpler Model (P6) is solved under a predefined time window wherein restoration times are at most 1.5 time periods, there is a minimum \(\prod _{a=2}^{1000}\bigg (1- e^{-(a/1.5)^2}\bigg )=0.815\) probability that its solution will be optimal for Model (P3), irrespective of network topology.

Such probability bounds are helpful in guiding decision-makers towards the tractable solution of large models. When the restoration of large networks is required, such as following disasters over large areas, decision-makers are often faced with the prospect of solving intractable combinatorial problems to obtain optimal edge sequencing for robustness. By combining Proposition 5 and Proposition 6, the tractability of the robust optimization model can be significantly improved. The decision-maker first verifies that Conditions 2., 3., and 4. in Proposition 5 are satisfied for the network. Thereafter, (s)he can use restoration time bounds to obtain, from Proposition 6, a lower bound on the probability that Condition 1. is satisfied. Notice that by varying the time period size, (s)he can guarantee an adequately high lower bound on that probability. The bound represents a confidence level with which the decision-maker can assume that the model without a priori restoration time knowledge, which has better tractability and scalability, has the same optimal solution as the robust optimization model with the descending-order decision rule.
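Since the bound of Proposition 6 is monotone decreasing in \(\varDelta W\), the time-window choice can be automated by bisection: the sketch below finds the largest \(\varDelta W\) (and hence the smallest time window) that still guarantees a target confidence. This is a hypothetical helper, not a procedure from the paper:

```python
import math

def prop6_bound(n_blocked, delta_w):
    # Lower bound from Proposition 6 on the probability that the optimal
    # worst-case makespan occurs at position |B|.
    prob = 1.0
    for a in range(2, n_blocked + 1):
        prob *= 1.0 - math.exp(-(a / delta_w) ** 2)
    return prob

def max_delta_w(n_blocked, target, tol=1e-6):
    # Bisect for the largest Delta W whose bound still meets the target
    # confidence; the bound decreases as Delta W grows.
    lo, hi = tol, 100.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prop6_bound(n_blocked, mid) >= target:
            lo = mid
        else:
            hi = mid
    return lo

dw = max_delta_w(1000, 0.8)
assert 1.5 < dw < 1.6  # consistent with the 0.815 bound at Delta W = 1.5
```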

5 Computational experiment: impact of robustness on an Erdős-Rényi random graph

We first investigate the impact of robustifying a random network using our robust optimization model with decision rule (P3) and our robust optimization model without a priori restoration time knowledge (P6). We generate the random network using a G(N, p) Erdős-Rényi graph model, where \(N=10\) is the number of nodes and \(p=0.5\) is the probability that an edge is included in the graph. The network in Fig. 4 is our generated graph, with the pale-shaded nodes representing demand nodes. All edges are considered blocked, and their randomly generated nominal restoration times and restoration time ranges are shown in Table 4. The random generation is carried out as follows: a random number in the range [0, 50] is used as the nominal restoration time; random numbers are then generated from 0 to the nominal value and from the nominal value to 75, to be used as lower and upper bounds, respectively. We take a time period of 1 h, a planning horizon of 100 h, \(\varGamma = 0.8\), and \(\epsilon =1\times 10^{-4}\). Node 0 is the supply node.
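The instance generation just described can be sketched in a few lines of plain Python (the seed and variable names are ours; the original study's random draws are of course not reproducible from this sketch):

```python
import itertools
import random

random.seed(42)  # illustrative seed, not the one used in the experiment

N, p = 10, 0.5  # G(N, p) Erdos-Renyi parameters from the experiment

# Sample the random graph: each of the C(N, 2) possible edges is
# included independently with probability p.
edges = [(i, j) for i, j in itertools.combinations(range(N), 2)
         if random.random() < p]

# For every (blocked) edge, draw a nominal restoration time in [0, 50],
# then a lower bound in [0, nominal] and an upper bound in [nominal, 75].
restoration_times = {}
for e in edges:
    nominal = random.uniform(0, 50)
    lower = random.uniform(0, nominal)
    upper = random.uniform(nominal, 75)
    restoration_times[e] = (lower, nominal, upper)

# Sanity check: every range brackets its nominal value.
assert all(lo <= nom <= up for lo, nom, up in restoration_times.values())
```

Note that drawing the lower and upper bounds uniformly around the nominal value makes the uncertainty intervals asymmetric in general, which is consistent with the ranges reported in Table 4.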

Fig. 4 Our random graph

Table 4 Restoration time statistics

The optimal edge selections from the deterministic model (P1), the robust optimization model (P3), and the robust optimization model without a priori restoration time knowledge (P6) are shown in Fig. 5.

Fig. 5 Optimal edge selection for models (P1) (top left), (P3) (top right), and (P6) (bottom)

Under nominal restoration times, the edge selections from (P1), (P3), and (P6) can be sequenced for restoration with minimum makespans of 25 h, 41 h, and 32 h, respectively. This shows that our robust optimization models sacrifice nominal performance to hedge against worst-case uncertainty realizations. To compare worst-case performances, we consider two situations: (1) when there is a priori knowledge of restoration time realizations, meaning that edges can be sequenced according to the descending-order decision rule, and (2) when there is no a priori knowledge of restoration time realizations and the decision rule is therefore not implementable. When the decision-maker has a priori knowledge of restoration time realizations, the worst-case makespans obtained from the edge selections in (P1), (P3), and (P6) are 62 h, 46 h, and 49 h, respectively. When the decision-maker does not have a priori knowledge of restoration time realizations, the worst-case makespans are 70 h, 53 h, and 53 h, respectively. This shows the significant improvement in worst-case performance brought about by robust optimization. It also shows that, in this particular graph, the optimal edge selection for Model (P3) is also optimal for Model (P6) (the converse is not true).

To test the performance of our models beyond the nominal and worst-case realizations, we randomly generate 1000 scenarios of restoration time realizations and find the optimal edge sequencing makespans given the edge selections in Fig. 5. Descriptive statistics of the makespans are shown in Table 5. The results show significant improvements in worst-case performance for our robust optimization models, achieved by sacrificing best-case performance. Moreover, our robust optimization models produce lower average makespans and lower makespan variances than the deterministic model, showing that by robustifying the network we achieve both better average performance and better performance stability compared to nominal planning.

Table 5 Descriptive statistics on optimal makespans for 1000 random scenarios

6 Case study: Great Gorkha earthquake

The Great Gorkha earthquake of 2015, with a magnitude of 7.8 \((M_w)\), and its aftershocks caused significant loss of human life and property across the entire Kathmandu Valley. The impact on the transportation network was devastating, as help and rescue operations were delayed because of impassable roads. Approximately 9,000 people died and 22,000 people were injured as a result of the earthquake, which also triggered a large number of landslides, avalanches, and rockslides throughout the Kathmandu Valley (Gnyawali and Adhikari 2017). In 2016, on-site surveys were conducted with locals living in the affected region, as well as with the Department of Road Authority in Nepal, which is responsible for debris clearance activities, to identify the times spent clearing closed road segments with respect to landslide intensity levels. These surveys, together with landslide magnitude and intensity maps, were used to assign minimum and maximum restoration times to closed road segments; the full results are reported in Aydin et al. (2018).

This study focuses on the rural district of Sindhupalchok, located to the north of Kathmandu City. It comprises 457 nodes and 555 edges, 66 of which were rendered impassable by landslide debris. Access needs to be restored to 81 demand points, which represent settlements in Sindhupalchok. All resource/recovery teams dispatched to restore road segments must come from the Araniko Highway, the largest highway in the district. The affected region is shown in Fig. 6.

Fig. 6 Affected Sindhupalchok district

In Aydin et al. (2018), restoration time ranges are taken to be \(\pm 2 \sigma\), which yields non-overlapping ranges for five classes of landslides in terms of debris size, namely very small, small, medium, high, and very high. In our study, we relax this assumption by taking restoration time ranges to be \(\pm 3 \sigma\), thereby allowing greater uncertainty as well as overlapping restoration times across the different landslide classes. We also relax the integrality restrictions imposed on the restoration time ranges. The restoration time ranges are shown in Table 6.
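The effect of widening the ranges from \(\pm 2 \sigma\) to \(\pm 3 \sigma\) can be illustrated with a minimal sketch. The means and standard deviations below are hypothetical placeholders chosen only to demonstrate the mechanism; the actual class statistics are those reported in Aydin et al. (2018) and Table 6:

```python
# HYPOTHETICAL (mean, sigma) pairs in hours for two adjacent landslide
# classes; the real values come from Aydin et al. (2018).
classes = {
    "small":  (10.0, 2.0),
    "medium": (20.0, 2.5),
}

def time_range(mean, sigma, k):
    """Restoration time range [mean - k*sigma, mean + k*sigma]."""
    return (mean - k * sigma, mean + k * sigma)

def overlap(r1, r2):
    """True if two closed intervals intersect."""
    return r1[0] <= r2[1] and r2[0] <= r1[1]

small_2s = time_range(*classes["small"], 2)    # (6.0, 14.0)
medium_2s = time_range(*classes["medium"], 2)  # (15.0, 25.0)
small_3s = time_range(*classes["small"], 3)    # (4.0, 16.0)
medium_3s = time_range(*classes["medium"], 3)  # (12.5, 27.5)

print(overlap(small_2s, medium_2s))  # → False: +-2 sigma ranges are disjoint
print(overlap(small_3s, medium_3s))  # → True: +-3 sigma ranges overlap
```

With these (illustrative) statistics, the \(\pm 2 \sigma\) intervals of adjacent classes remain disjoint while the \(\pm 3 \sigma\) intervals intersect, which is precisely the relaxation adopted in our study.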

Table 6 Restoration time data (adapted from Aydin et al. (2018))

Taking a time period of 1 h, a planning horizon of 100 h, \(\varGamma = 0.8\), and \(\epsilon =1\times 10^{-4}\), the robust counterpart (P3) restores access to all demand points by restoring 18 of the 66 blocked roads. The optimal worst-case restoration completion makespan is 31.2 h. Note that the solution times for all models in this section are under 10 s. Figure 7 maps the worst-case sequence of the 18 restorations.

Fig. 7 Worst-case restoration sequence with our robust optimization approach

When restoration jobs are performed sequentially, with a time period representing 76.8 h, Model (P4) also prescribes the restoration of 18 of the 66 blocked roads, but its optimal set of restored edges differs from that of Model (P3). The optimal worst-case makespan is 1386 h, with a significant worst-case theoretical idle time of 1292 h.

Note: Throughout this work, equal time windows are used, meaning that every time period represents the same time interval. This conforms to the conventions of optimization modelling, as variable-length time periods negatively impact optimization models. Because of the equal time windows, the makespan for sequential jobs is much higher than expected. In practice, the restoration crew can move to the next job immediately after finishing the previous one and thereby avoid idle time altogether; in this situation, the makespan becomes 69.3 h (the sum of the restoration times of the restored edges).
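The idle-time effect described in the note is simple bookkeeping and can be sketched as follows, using illustrative job durations rather than the exact case-study data (the variable names and numbers are ours):

```python
# Hypothetical sequential restoration jobs (hours); in the case study
# there are 18 jobs and each time window represents 76.8 h.
restoration_hours = [3, 5, 2, 4]
window = 5  # equal time-window length; must cover the longest job

assert window >= max(restoration_hours)

# Under equal time windows, each sequential job occupies one full
# window, so the modelled makespan is (number of jobs) x (window).
makespan_windows = len(restoration_hours) * window

# In practice the crew starts the next job immediately, so the real
# makespan is just the sum of the restoration times.
makespan_continuous = sum(restoration_hours)

idle_time = makespan_windows - makespan_continuous
print(makespan_windows, makespan_continuous, idle_time)  # → 20 14 6
```

The gap between the two makespans is exactly the theoretical idle time, which explains why the modelled 1386 h figure is far above the practical 69.3 h completion time.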

6.1 Performance on random scenarios

In this sub-section, we investigate the performance of the deterministic model (P1), the robust optimization model (P3), and the model without a priori information (P6) under randomly generated restoration time scenarios. We generate 1000 scenarios of restoration time realizations within our uncertainty set and test each model by computing, for every scenario, the optimal sequencing and makespan (using the descending-order decision rule) given the model's optimal edge selection as input. The deterministic model is run using the mean restoration times to obtain its optimal edge selection. The results of the experiment are shown in Table 7.

Table 7 Descriptive statistics on optimal makespans for random scenarios

The robust optimization models (P3) and (P6) sacrifice average performance for greater performance stability: their standard deviations are smaller than that of the deterministic model. This is because the underlying principle of robust optimization is the search for the best worst-case performance and, indirectly, for a reduction in performance variation. The robust counterpart (P3) plans for the best worst-case performance assuming that restoration time realizations are available. The model without a priori information (P6) does not use this assumption and therefore suffers a further drop in average performance in the tests. However, precisely because restoration time realizations are not available when Model (P6) plans for the best worst-case performance, it generates more conservative plans and therefore offers greater performance stability than all the other models.

7 Conclusions

In this paper, we propose a robust optimization approach to post-disaster route restoration under uncertain restoration times. We present a novel decision rule based on restoration time ordering that yields optimal restoration sequencing. Under this decision rule, the two-stage robust optimization model becomes a single-stage mixed-integer programming problem. We analyze the robust counterpart under two main conditions: (1) restoration can only be performed sequentially, and (2) restoration must be performed without a priori restoration time knowledge. We show that under some conditions, the robust counterpart reduces to the more tractable and scalable model without a priori restoration time knowledge. Interestingly, these are conditions under which (1) restoration of all network edges is required (a network recovery case) and (2) network recovery is complete when the restoration of the last-sequenced edge is complete. We also prove probability bounds on the satisfaction of the second condition. We implement our models in a realistic study of the 2015 Gorkha earthquake in Nepal and show that full access to demand points can be achieved by restoring less than a third of the blocked roads.