1 Introduction

In a 2014 presentation, Dr. Steven Chu, Nobel Laureate and former US Energy Secretary, noted that natural disasters are occurring with increasing frequency worldwide, citing the increase as epidemiological evidence of climate change (Chu, 2014). Historical data clearly indicate that the number of natural disasters per year has climbed dramatically since 1980. In North America alone, the number of annual natural catastrophes has quadrupled, from approximately 50 to 200, since the early 1980s, according to the world's largest reinsurer, Munich Re (Sturdevant, 2013). A closer look at the report reveals that the increase is not uniform across different types of natural disasters. The occurrence of geophysical disasters such as earthquakes and tsunamis remained relatively stable between 1980 and 2011. However, weather-related disasters such as hurricanes; hydrological disasters such as floods; and climatological disasters such as droughts increased steadily during the same period. All three of these increasingly frequent disaster types are cyclical in nature, which provides valuable information for disaster preparedness planning.

Pandemics can also be considered natural disasters. The Black Death pandemic in the fourteenth century resulted in an estimated 50 million deaths and is considered the most fatal pandemic in human history (World Health Organization, 2017). Pandemics occurring in the twenty-first century include COVID-19, Ebola, MERS, the 2009 swine flu, and SARS. The occurrences of pandemics (including their type, timing, and magnitude) are especially difficult to predict. The recent COVID-19 pandemic has presented an enormous global challenge. Hospitals and clinics generally make up the bulk of the humanitarian supply chain for pandemic response. The supplies required for pandemic response include personal protective equipment (PPE), vaccines, ventilators, and other medical equipment. COVID-19 has illustrated the challenges of responding to a global pandemic in which nearly every country in the world is affected and response supplies are extremely limited. The United States government maintains a strategic national stockpile of emergency medical supplies for pandemic response, but it is seriously depleted. It is crucial that these supplies be restocked and that the necessary planning be done so that these critical relief supplies are available to respond to the next natural disaster.

Pre-positioning relief supplies at strategic facility locations is a critical part of disaster preparedness planning: it ensures the availability of emergency relief commodities at the right time and at the right place. Despite recent efforts to establish advance contracts for relief items with partners that can be activated when needed, i.e., “virtual warehouses” (Balcik & Ak, 2014), pre-positioning of life-saving and life-sustaining resources such as water, tarps, and meals that allow for quick response remains the prevailing practice for most disaster relief agencies. As an illustrative example of pre-positioning, ahead of Hurricane Katrina, which devastated New Orleans in 2005, Wal-Mart began pre-positioning high-demand items including bottled water, flashlights, and batteries six days prior to the hurricane’s landfall, based on information about the past purchase behavior of customers in hurricane-prone areas (Shughart, 2011). Compared to the relatively small-scale relief efforts organized by commercial firms, government agencies such as the Federal Emergency Management Agency (FEMA) face a daunting preparedness task at both county and state levels. For Hurricane Katrina, relief supplies included 11,322,000 L of water, 18,960,000 pounds of ice, 5,997,312 meals ready to eat (MREs), and 17 truckloads of tarps. These materials were pre-positioned at key strategic locations prior to Katrina’s landfall, representing the largest preparedness effort in the agency’s history (Davis, 2005). Still, criticism of FEMA’s slow, ineffective response, especially its failure to pre-deploy sufficient supplies, was frequent as the storm led to a confirmed death toll of 1,833. Scott Wells, the Deputy Federal Coordinating Officer in Louisiana, pinpointed one source of the problem in his testimony to Congress: “the response was not robust; it was not enough for the catastrophe at hand” (Townsend, 2006).
The destruction created massive, unexpected demand for emergency supplies, and the damaged infrastructure intensified logistics difficulties. Better coordination among supply chain echelons before and after a disruption, supported by information sharing, is also an important mitigation strategy for such disruptive events. Investing in appropriate technology and quality information sharing improves supply chain visibility, enhances trust and cooperation among supply chain partners, and ultimately leads to a more resilient supply chain (Dubey et al., 2019a, 2019b, 2019c). Further, humanitarian organizations must recognize that information sharing and supply chain visibility are key to swift-trust among humanitarian actors and agility in humanitarian supply chains (Dubey et al., 2020a, 2020b).

Starr and Van Wassenhove (2014), among others, emphasize the urgent need for robust solutions to account for the inherent uncertainty in humanitarian logistics. This uncertainty is traditionally handled by stochastic programming models (e.g., Bozorgi-Amiri et al., 2013; Mete & Zabinsky, 2010; Rawls & Turnquist, 2009; Salmeron & Apte, 2010), which rely on specifying an exact probability distribution for uncertain parameters and then optimizing the expected value of a known objective function. However, as noted by Ergun et al. (2010), Starr and Van Wassenhove (2014), and others, historical data are generally lacking in disaster applications and can be inconsistent due to problems in collection and reporting. Stochastic programming models usually perform poorly when the probability distributions are mis-specified. Robust optimization (RO) is a relatively new method for handling parameter uncertainty in an optimization model (Bertsimas & Thiele, 2006a, 2006b; Bertsimas et al., 2011). Unlike stochastic programming, RO uses a distribution-free uncertainty set to model parameter uncertainty, with risk-averse decision makers hedging against the worst-case scenario. Put differently, distributional information on the uncertain parameters is not required in an RO model; instead, the model optimizes an objective function with respect to the worst-case scenario constrained by an uncertainty set. In short, stochastic programming models identify solutions that perform well on average in the long run, provided the probability distributions are correctly specified. Robust optimization models require no exact probability distributions, but their solutions tend to be over-conservative because they hedge against worst cases that are rare but disastrous.

The main motivation of this paper is to obtain a robust, or resilient, plan for pre-positioning relief items given the extreme uncertainties related to natural disasters. The question we attempt to answer is: how can relief items best be pre-positioned in advance so that we can respond with sufficient resources under uncertainty in the timing, location, and severity of natural disasters?

As noted previously, most existing methods for determining optimal pre-positioning of relief items are either too fragile (e.g., stochastic programming) or too conservative (e.g., robust optimization). This paper aims to overcome the inherent limitations in these existing approaches. Our proposed scenario-robust models ease the difficult, and usually impossible, task of providing exact probability distributions for uncertain parameters in a stochastic programming model with the help of a distribution-free uncertainty set. To be more specific, the research questions of this paper are:

  1. Does a scenario-robust model provide a relief-item pre-positioning plan that is more robust than the stochastic programming model and less conservative than the robust optimization model?

  2. Does a scenario-robust model provide a relief-item pre-positioning plan that outperforms both the stochastic programming and the robust optimization models under certain situations?

Assume an uncertain parameter is defined by an underlying probability distribution \( F\left( {\overline{\zeta }^{s} , \overline{P}^{s} } \right)\), in which \(\overline{\zeta }^{s} \) is the point estimate and \( \overline{P}^{s} \) is its associated probability for scenario \( s\). In a traditional stochastic programming model, \(\overline{\zeta }^{s}\) and \( \overline{P}^{s}\) are assumed to be known exactly; this is an unrealistic assumption, especially in the context of humanitarian logistics, that we aim to relax. Instead, we allow \( \overline{\zeta }^{s} \) or \( \overline{P}^{s}\) to vary in a distribution-free, pre-specified uncertainty set \([\overline{\zeta }^{s} - \hat{\zeta }^{s} , \overline{\zeta }^{s} + \hat{\zeta }^{s} ]\) or \( [\overline{P}^{s} - \hat{P}^{s} , \overline{P}^{s} + \hat{P}^{s} ]\), respectively. Consider the uncertain demand for emergency supplies characterized by the distribution \( F\left( {\overline{\zeta }^{s} , \overline{P}^{s} } \right)\): \(\overline{\zeta }^{s} = \left\{ {10,20,30,40} \right\}\) and \( \overline{P}^{s} = \left\{ {0.1,0.2,0.3,0.4} \right\}\). Using the first scenario \(\left( {s = 1} \right)\) as an example, the probability of a demand for 10 units is 0.1. Instead of assuming a demand of 10 units with an exact probability of 0.1, we allow the demand point estimate to vary inside the range \(\left[ {10 - 2, 10 + 2} \right]\) given \(\hat{\zeta }^{1} = 2\), or the probability point estimate to vary inside the range \(\left[ {0.1 - 0.02, 0.1 + 0.02} \right]\) given \( \hat{P}^{1} = 0.02\). The level of conservatism is controlled first by the maximum deviations allowed, \(\hat{\zeta }^{s}\) and \( \hat{P}^{s}\). We then use parameters \(T^{\zeta }\) and \(T^{P}\) to control the maximum number of parameters that are allowed to vary within their pre-specified ranges (Bertsimas & Sim, 2003).
When well-recorded historical data lead to exact knowledge of \(\overline{\zeta }^{s}\) and \( \overline{P}^{s}\), then \(\hat{\zeta }^{s} = \hat{P}^{s} = 0\) and we recover a traditional stochastic-programming model. When such an ideal condition does not hold, which is common, our model allows decision makers to use ranges that reflect their degree of knowledge of \(\left( {\overline{\zeta }^{s} , \overline{P}^{s} } \right)\).
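The numerical example above can be made concrete with a short brute-force sketch. The Python code below is our own illustration, not part of the model: it enumerates which scenarios' point estimates and probabilities deviate to the top of their ranges, subject to the budgets \(T^{\zeta }\) and \(T^{P}\), and reports the worst-case expected demand. For simplicity it does not re-normalize the perturbed probabilities to sum to one.

```python
from itertools import combinations

# Data from the illustrative example in the text.
zeta_bar = [10, 20, 30, 40]        # nominal demand point estimates
P_bar = [0.1, 0.2, 0.3, 0.4]       # nominal scenario probabilities
zeta_hat = [2, 2, 2, 2]            # maximum deviation of each point estimate
P_hat = [0.02, 0.02, 0.02, 0.02]   # maximum deviation of each probability

def worst_case_expected_demand(T_zeta, T_P):
    """Worst-case expected demand when at most T_zeta point estimates and
    at most T_P probabilities move to the top of their ranges."""
    S = range(len(zeta_bar))
    subsets = lambda T: (set(c) for t in range(T + 1) for c in combinations(S, t))
    best = float('-inf')
    for Uz in subsets(T_zeta):          # which point estimates deviate
        for Up in subsets(T_P):         # which probabilities deviate
            zeta = [zeta_bar[s] + (zeta_hat[s] if s in Uz else 0) for s in S]
            P = [P_bar[s] + (P_hat[s] if s in Up else 0) for s in S]
            best = max(best, sum(P[s] * zeta[s] for s in S))
    return best

print(worst_case_expected_demand(0, 0))  # budgets of zero recover the nominal value, 30.0
print(worst_case_expected_demand(4, 4))  # every parameter at its worst case
```

Under this interpretation, raising either budget can only increase the worst-case expected demand, mirroring how \(T^{\zeta }\) and \(T^{P}\) control the level of conservatism.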

The remainder of this paper is organized as follows. Section 2 reviews the literature on pre-positioning problems with a focus on the robustness of solutions. Section 3 presents an existing stochastic model followed by our scenario-robust versions. Section 4 illustrates the effectiveness of our models through a case study of hurricane preparedness in the Southeastern United States. Section 5 assesses the performance of the robust models through simulation studies. Section 6 provides managerial insights for decision makers. Finally, in Sect. 7, we discuss the contributions of the proposed research and suggest future research directions.

2 Literature review

FEMA defines mitigation, preparedness, response, and recovery as the four key phases of humanitarian logistics and we refer interested readers to Wamba (2020), Banomyong et al. (2019) and Behl and Dutta (2019) for a general survey of the field. The pre-positioning problem, which falls within the category of preparedness, determines the number of relief facilities to open, their locations and capacities, and the quantity of relief supplies stored at each location in anticipation of various sources of uncertainty such as demand and network availability. Variants of this problem have been studied through stochastic optimization. Balcik and Beamon (2008) adopt a maximal covering location approach for modeling the problem with consideration of demand uncertainty through scenarios. The model considers key aspects of the problem including facility location, inventory, budget and capacity but it does not consider network disruptions. Salmeron and Apte (2010) study how to allocate a fixed budget to procure and position relief assets to minimize the expected number of casualties and then the expected number of unmet transfer population under uncertainty associated with the location and severity of a disaster. Rawls and Turnquist (2010) propose a two-stage stochastic mixed-integer model solved using a Lagrangian L-shaped solution method to minimize the total expected cost of pre-positioning three types of relief supplies under four sources of uncertainty. Following a similar problem setup, Noyan (2012) proposes a risk-averse two-stage stochastic programming model with incorporation of risk measures on the total cost. More recently, the impact of risk-aversion measures on the solution produced by a stochastic-programming model is further studied in Alem et al. (2016) and Condeixa et al. (2017). 
The validity of these stochastic-programming models depends on the accuracy of the specified probability distribution, a challenging task for humanitarian logistics problems due to limited access to reliable data (Ergun et al., 2010). Paul and MacDonald (2016) develop a stochastic modeling framework to improve preparedness for disasters with little to no forewarning by optimizing the location and capacities of distribution centers for emergency stockpiles.

Mathematical programming is the main methodology for solving disaster operations-management problems, among which two-stage stochastic-programming models have represented the state of the art over the last decade (Grass & Fischer, 2016; Hoyos et al., 2015). These models rely on specifying an exact probability distribution for uncertain parameters. In contrast to stochastic programming, a robust optimization (RO) approach models parameter uncertainty using distribution-free uncertainty sets. RO models are designed to hedge against the worst-case realizations of uncertain parameters. RO was first proposed by Soyster (1973) for a linear optimization problem in which all uncertain parameters assume their worst-case values within a set. El Ghaoui et al. (1998) and Ben-Tal and Nemirovsky (1998, 1999) develop several RO models that provide less conservative solutions by controlling the set of values that the uncertain parameters may realize. Ben-Tal and Nemirovsky (1998, 1999) show that a robust model is tractable if the uncertainty set is defined as a box or an ellipsoid, which represents a set of linear or quadratic relations among the uncertain parameters, respectively. Bertsimas and Sim (2003) propose a cardinality-based robust optimization framework that models parameter uncertainty using ranges capturing nominal and worst-case values. It uses a budget parameter T to control the number of parameters that are allowed to vary within their pre-specified ranges. A constraint with, for example, four uncertain parameters admits five possible levels of protection, T = 0, 1, 2, 3, 4, where T bounds the cardinality (i.e., the number of elements) of the set of parameters allowed to deviate. When T = 0, none of the four parameters may deviate from its nominal value, resulting in a deterministic model.
When T = 4, all four parameters are allowed to deviate from their nominal values, which yields the most conservative solutions. Thus, the value of T controls the conservativeness of the solutions obtained. Other forms of RO models include penalizing violations of scenario realizations of the uncertainty (Mulvey & Vanderbei, 1995) and minimizing the worst-case objective under criteria such as absolute robustness, robust deviation, and relative robustness (Kouvelis & Yu, 1997).
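The budget mechanism can be sketched in a few lines of Python. The snippet below is our own toy illustration with hypothetical deviation values: for a single constraint, the worst case with at most T deviating parameters simply adds the T largest deviations.

```python
# Hypothetical maximum deviations of four uncertain parameters in one constraint.
deviations = [5.0, 2.0, 8.0, 3.0]

def protection(T):
    """Worst-case extra amount added to the constraint when at most T of the
    parameters move to the edge of their ranges: the sum of the T largest
    deviations (the Bertsimas-Sim budget of uncertainty)."""
    return sum(sorted(deviations, reverse=True)[:T])

for T in range(5):
    print(T, protection(T))  # T = 0 is deterministic; T = 4 is most conservative
```

The protection term grows monotonically in T, which is exactly why T serves as a dial for conservatism.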

RO applications in the general field of supply chain management are relatively new, but interest is growing. In recent years, RO has been successfully implemented to solve problems such as lot-sizing, location-allocation problems under demand uncertainty (Atamtürk & Zhang, 2007), capacitated vehicle routing (Sungur et al., 2008), fixed-charge multi-period facility location problems under network demand uncertainty (Baron et al., 2011), storage assignment (Ang et al., 2012), and hub location under demand uncertainty (Shahabi & Unnikrishnan, 2014). Examples of applications in humanitarian logistics include optimizing the allocation of response equipment (Hassan et al., 2020), mitigating risk in humanitarian relief supply chains (Ben-Tal et al., 2011), transporting disaster relief commodities and injured people in the aftermath of an earthquake (Najafi et al., 2013), locating urgent relief distribution centers (Lu, 2013; Lu & Sheu, 2013), designing a relief network (Tofighi et al., 2016; Yahyaei & Bozorgi-Amiri, 2019), preventing and reducing malaria transmission (de Mattos et al., 2019), and optimizing the USDA one-round food aid bidding system (Paul & Wang, 2015).

Distributionally-robust optimization (DRO) mitigates the over-conservatism of RO by considering partial distributional information such as moment statistics, which can be extracted from available data. As opposed to RO, which protects against the worst-case realization of uncertain parameters in uncertainty sets, DRO aims to hedge against the worst-case realization of the probability distribution through the use of ambiguity sets. The choice of ambiguity set determines the computational tractability of DRO, and various forms have been proposed in the literature. In El Ghaoui et al. (2003), the ambiguity set incorporates all distributions with the exact same first and second moments. Delage and Ye (2010) develop a more generalized ambiguity set that allows for estimation errors in the first two moments. In both cases, tractable reformulation of single-stage problems can be achieved using additional linear inequalities. Two-stage DRO problems with exact first and second moment information are examined in Bertsimas et al. (2010) wherein tractable reformulation through semidefinite programs is developed for uncertainty that only affects the objective function. Assuming knowledge of the support, mean, covariance matrix and directional deviation of uncertain parameters, Goh and Sim (2010) derive tractable reformulations of generic two- and multi-stage DRO problems using affine decision rules. Wiesemann et al. (2014) propose a unifying framework by introducing standardized ambiguity sets that incorporate many existing sets as special cases and conditions under which computational tractability exists. More recently, DRO has been extended to solve dynamic decision-making problems wherein uncertainty unfolds in stages (Bertsimas et al., 2019). Applications of DRO include lot-sizing (Zhang et al., 2016), surgery-block allocation (Wang et al., 2019), blood inventory pre-positioning (Wang & Chen, 2020) and others.

The premise of RO is that no distributional information on the uncertain parameters is available, while DRO incorporates limited distributional information to mitigate the over-conservatism of RO. However, there are many problems, traditionally approached by stochastic programming, wherein a full probability distribution of uncertain parameters can be estimated, albeit inexactly, from historical data (Li et al., 2018). To apply RO or DRO in such situations would underutilize the available information on uncertain parameters and thus cause unnecessary over-conservatism, often manifested as inflated cost. While DRO leverages distributional information to build the ambiguity set, the scenario-robust approach tackles the inexactness by incorporating a distribution-free uncertainty set into the probability distributions of uncertain parameters. Han et al. (2013) replace point estimates of travel times in the classic stochastic vehicle routing problem with range estimates in each scenario to improve robustness. Our work is in line with this stream of research and makes the following contributions to the literature. First, our models allow decision makers to use ranges when specifying distributions for uncertain parameters. Different from Han et al. (2013), which focuses on a single type of item, we address multiple types of relief items. Second, we explore key issues in humanitarian logistics such as network topology, shortages, and supply levels under different configurations of uncertainty. Last, the applicability of our approach is demonstrated via a case study of preparedness for hurricanes in the Southeastern United States. In addition, we conduct simulation studies to show the effectiveness of our approach when conditions deviate from the model assumptions.

3 Problem formulations

This section provides formulations of several models that can be used for the pre-positioning of relief supplies for disaster relief: a traditional Stochastic Programming Model (M1) and Scenario-Robust Programming Models (M2 and M2_Adjusted). In subsequent sections, we compare the solutions generated by these different modeling approaches for a specific case study.

3.1 Traditional stochastic programming model (M1)

We use notation similar to Rawls and Turnquist (2010) as shown in Table 1. The MIP formulation from Rawls and Turnquist (2010) is shown below as \({\text{M}}1\). The objective function (1) minimizes the fixed and acquisition costs plus the expected shipping, unmet demand penalty and holding costs over all scenarios. Constraints (2) are the capacity constraints to ensure that the amount of emergency supplies stored at a location does not exceed its capacity. Constraints (3) ensure that at most one facility of a given size is allowed at each demand location. Constraints (4) and (5) are both scenario dependent, and they ensure that demand is met and that the link capacity is not exceeded, respectively, for each scenario.

Table 1 Model \(M1\) notations

3.1.1 Stochastic programming model (\({\mathbf{M}}1\))

$$ Min \;\xi_{1} = \mathop \sum \limits_{i \in I,l \in L} F_{il} y_{il} + \mathop \sum \limits_{i \in I,k \in K} q_{ik} r_{ik} + \left( {\mathop \sum \limits_{{\left( {i,j} \right) \in A,k \in K,s \in S}} P^{s} c_{ijk}^{s} x_{ijk}^{s} + \mathop \sum \limits_{i \in I,k \in K,s \in S} P^{s} (h_{k} z_{ik}^{s} + p_{k} w_{ik}^{s} )} \right) $$
(1)
$$ s.t. \mathop \sum \limits_{k \in K} b_{k} r_{ik} \le \mathop \sum \limits_{l \in L} M_{l} y_{il} \; \forall i \in I $$
(2)
$$ \mathop \sum \limits_{l \in L} y_{il} \le 1 \; \forall i \in I $$
(3)
$$ \mathop \sum \limits_{j \ne i \in N} x_{jik}^{s} - \mathop \sum \limits_{j \ne i \in N} x_{ijk}^{s} - z_{ik}^{s} + w_{ik}^{s} + \rho_{ik}^{s} r_{ik} \ge d_{ik}^{s} \; \forall i \in I, k \in K,s \in S $$
(4)
$$ \mathop \sum \limits_{k \in K} u_{k} x_{ijk}^{s} \le cap_{ij}^{s} \; \forall \left( {i,j} \right) \in A, s \in S $$
(5)
$$ y_{il} \in \left\{ {0,1} \right\}; \quad r_{ik} , x_{ijk}^{s} , z_{ik}^{s} , w_{ik}^{s} \ge 0 $$
(6)

3.2 Scenario-robust programming model (M2 and M2_Adjusted)

Model \( {\text{M}}1\) assumes that the scenario-dependent parameters \( d_{ik}^{s}\), \( c_{ijk}^{s}\), \(cap_{ij}^{s}\), \(\rho_{ik}^{s}\), and the associated probabilities \(P^{s}\) are known exactly. We develop a scenario-robust model \({\text{M}}2\) to relax this assumption. Model \({\text{M}}2\) targets the uncertainty in the point estimates of \( d_{ik}^{s}\), \( c_{ijk}^{s}\), \(cap_{ij}^{s}\), and \(\rho_{ik}^{s}\). This model can be fully linearized and solved using off-the-shelf solvers such as MOSEK or CPLEX, a feature especially desirable for practical applications such as those considered here.

A critical issue to consider when choosing an appropriate form of the uncertainty set is computational tractability. A simple analytical structure of the uncertainty set is preferred, but we must balance this with the goal of retaining modeling detail (Ben-Tal and Nemirovsky, 1999). We adopt the uncertainty set developed in Bertsimas and Sim (2003, 2004) that is capable of capturing the uncertainty yet is still analytically tractable. For each uncertain parameter (\(c_{ijk}^{s}\),\( d_{ik}^{s}\),\( cap_{ij}^{s} \) and \(\rho_{ik}^{s}\)) in M1, we introduce an uncertainty set following Bertsimas and Sim (2003, 2004). Each uncertainty set is characterized by three parameters: a nominal value, a maximum deviation allowed, and a budget controlling the number of parameters allowed to deviate.

Parameters \( c_{ijk}^{s}\), \( d_{ik}^{s}\), \( cap_{ij}^{s}\), and \(\rho_{ik}^{s} \) are modeled with distribution-free uncertainty sets as follows. First, parameter \(c_{ijk}^{s}\) is allowed to vary inside the range \([\overline{{c_{ijk}^{s} }} , \overline{{c_{ijk}^{s} }} + \widehat{{c_{ijk}^{s} }}]\), where \( \overline{{c_{ijk}^{s} }}\) is the nominal value of \(c_{ijk}^{s}\) and \(\widehat{{c_{ijk}^{s} }}\) is the maximum deviation of \(c_{ijk}^{s}\) from its nominal value \(\overline{{c_{ijk}^{s} }}\). To model a higher level of uncertainty, we can increase \(\widehat{{c_{ijk}^{s} }}\), making the model more conservative; setting \(\widehat{{c_{ijk}^{s} }} = 0\) reverts the model to the traditional stochastic programming model, \({\text{M}}1\). Furthermore, the maximum number of \(c_{ijk}^{s}\) that are allowed to deviate is restricted by a cardinality parameter \( T^{C}\); i.e., \( \left| {U^{c} } \right| \le T^{C}\) where \(U^{c} \subseteq {\text{I }} \times {\text{J}} \times {\text{K}} \times {\text{S}}\). The uncertainty set is expressed as \(\max_{{\left\{ {U^{c} \subseteq {\text{I }} \times {\text{J}} \times {\text{K}} \times {\text{S}},{ }\left| {U^{c} } \right| \le T^{C} } \right\}}} \sum\nolimits_{{\left( {i,j,k,s} \right) \in U^{c} }} {P^{s} \widehat{{c_{ijk}^{s} }}{ } x_{ijk}^{s} }\), representing the total shipping-cost deviation when up to \( T^{C}\) of the estimated \(c_{ijk}^{s}\) vary in \([\overline{{c_{ijk}^{s} }} , \overline{{c_{ijk}^{s} }} + \widehat{{c_{ijk}^{s} }}]\).

The shipping cost component in the objective, \( \sum\nolimits_{{\left( {i,j} \right) \in A,k \in K, s \in S}} {P^{s} c_{ijk}^{s} x_{ijk}^{s} }\), is now expressed as \(\sum\nolimits_{{\left( {i,j} \right) \in A,k \in K, s \in S}} {P^{s} \overline{{c_{ijk}^{s} }} { } x_{ijk}^{s} }\) + \(\max_{{\left\{ {U^{c} \subseteq {\text{I }} \times {\text{J}} \times {\text{K}} \times {\text{S}},{ }\left| {U^{c} } \right| \le T^{C} } \right\}}} \mathop \sum \nolimits_{{\left( {i,j,k,s} \right) \in U^{c} }} P^{s} \widehat{{c_{ijk}^{s} }}{ } x_{ijk}^{s}\), factoring the possible deviation in \(U^{c}\) into the total nominal shipping cost. As shown in model \( {\text{M}}2{\text{a}}\), successively increasing \( T^{C}\) yields more conservative solutions, while setting \( T^{C} = 0\) reduces the model to \( {\text{M}}1\). The difference between objective functions (1) and (7) is that (7) incorporates the deviation \(\widehat{{c_{ijk}^{s} }}\) in the shipping cost and uses the budget \( T^{C}\) to control the number of \(c_{ijk}^{s}\) parameters that are allowed to deviate from their nominal values \(\overline{{c_{ijk}^{s} }}\).

3.2.1 Model with a robust component for shipping cost (\({\mathbf{M}}2{\mathbf{a}}\))

$$ \begin{gathered} Min \xi_{{2^{\prime}}} = \mathop \sum \limits_{i \in I,l \in L} F_{il} y_{il} + \mathop \sum \limits_{i \in I,k \in K} q_{ik} r_{ik} + \left( {\mathop \sum \limits_{{\left( {i,j} \right) \in A,k \in K, s \in S}} P^{s} \overline{{c_{ijk}^{s} }} { } x_{ijk}^{s} + \max_{{\left\{ {U^{c} \subseteq {\text{I }} \times {\text{J}} \times {\text{K}} \times {\text{S}},{ }\left| {U^{c} } \right| \le T^{C} } \right\}}} \mathop \sum \limits_{{\left( {i,j,k,s} \right) \in U}} P^{s} \widehat{{c_{ijk}^{s} }}{ } x_{ijk}^{s} } \right. \hfill \\ \left. { + \mathop \sum \limits_{i \in I,k \in K, s \in S} P^{s} \left( {h_{k} z_{ik}^{s} + p_{k} w_{ik}^{s} } \right)} \right) \hfill \\ \end{gathered} $$
(7)

s.t. (2)–(6).

Using the duality procedures developed in Bertsimas and Sim (2004), the maximum deviation component \(\max_{{\left\{ {U^{c} \subseteq {\text{I }} \times {\text{J}} \times {\text{K}} \times {\text{S}},{ }\left| {U^{c} } \right| \le T^{C} } \right\}}} \sum\nolimits_{{\left( {i,j,k,s} \right) \in U^{C} }} {P^{s} \widehat{{c_{ijk}^{s} }}{ } x_{ijk}^{s} }\) is equivalent to linear programming model \({\text{M}}2{\text{b}}\) where \(g\) and \(H_{ijk}^{s}\) are the dual variables. Reformulating this model using the dual variables allows for much easier solution methods that generate equivalent results.

3.2.2 Linear model with dual variables (\({\mathbf{M}}2{\mathbf{b}}\))

$$ \min { } T^{C} g + \mathop \sum \limits_{i,j,k,s} H_{ijk}^{s} $$
(8)
$$ s.t. g + H_{ijk}^{s} \ge P^{s} \widehat{{c_{ijk}^{s} }} x_{ijk}^{s} \forall i \in I, j \in J,k \in K,s \in S $$
(9)
$$ g , H_{ijk}^{s} \ge 0 \forall i \in I, j \in J,k \in K,s \in S $$
(10)
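The equivalence between the inner maximization and model M2b can be checked numerically on a toy instance. The sketch below is our own illustration; the values in the list stand in for the weighted deviation terms \(P^{s} \widehat{{c_{ijk}^{s} }} x_{ijk}^{s}\). It exploits the fact that the dual objective is piecewise linear in \(g\), so an optimal \(g\) lies at 0 or at one of the deviation values.

```python
def primal_protection(a, T):
    """Inner maximization: max over subsets U with |U| <= T of sum_{i in U} a_i,
    for a_i >= 0. This is simply the sum of the T largest entries of a."""
    return sum(sorted(a, reverse=True)[:T])

def dual_protection(a, T):
    """Model M2b restricted to one constraint: min T*g + sum_i H_i subject to
    g + H_i >= a_i and g, H_i >= 0. At optimality H_i = max(a_i - g, 0), and an
    optimal g sits at a breakpoint of the piecewise-linear objective."""
    candidates = [0.0] + list(a)
    return min(T * g + sum(max(ai - g, 0.0) for ai in a) for g in candidates)

# Hypothetical weighted deviation terms for five (i, j, k, s) combinations.
a = [0.9, 0.35, 2.4, 1.1, 0.05]
for T in range(len(a) + 1):
    print(T, primal_protection(a, T), dual_protection(a, T))
```

Strong linear-programming duality guarantees that the two values coincide for every budget, which is what allows the dual form to be substituted back into the objective.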

Substituting \({\text{M}}2{\text{b}}\) back into model \( {\text{M}}2{\text{a}}\), we obtain a linear model \( {\text{M}}2{\text{c}}\). Objective function (11), along with constraints (9) and (10), forms the robust counterpart of objective function (7).

3.2.3 Linearized model with a robust component for shipping cost (\({\mathbf{M}}2{\mathbf{c}}\))

$$ Min \xi_{{2^{\prime\prime}}} = \mathop \sum \limits_{i \in I,l \in L} F_{il} y_{il} + \mathop \sum \limits_{i \in I,k \in K} q_{ik} r_{ik} + \left( {\mathop \sum \limits_{{\left( {i,j} \right) \in A,k \in K, s \in S}} P^{s} \overline{{c_{ijk}^{s} }} { } x_{ijk}^{s} + T^{C} g + \mathop \sum \limits_{i,j,k,s} H_{ijk}^{s} + \mathop \sum \limits_{i \in I,k \in K, s \in S} P^{s} (h_{k} z_{ik}^{s} + p_{k} w_{ik}^{s} )} \right) $$
(11)

s.t. (2)–(6), (9) and (10).

We follow the same logic to model \( d_{ik}^{s}\), \( cap_{ij}^{s}\), and \( \rho_{ik}^{s}\) in constraints (4) and (5) using uncertainty sets \(\max_{{\left\{ {U^{d} \subseteq {\text{I }} \times {\text{K}} \times {\text{S}},{ }\left| {U^{d} } \right| \le T^{d} } \right\}}} \mathop \sum \limits_{{\left( {i,k,s} \right) \in U^{d} }} \widehat{{d_{ik}^{s} }}\), \(\max_{{\left\{ {U^{cap} \subseteq {\text{I}} \times {\text{ J}} \times {\text{S}},{ }\left| {U^{cap} } \right| \le T^{cap} } \right\}}} \mathop \sum \limits_{{\left( {i,j,s} \right) \in U^{cap} }} \widehat{{cap_{ij}^{s} }}\), and \(\max_{{\left\{ {U^{\rho } \subseteq {\text{I}} \times {\text{K}} \times {\text{S}},{ }\left| {U^{\rho } } \right| \le T^{\rho } } \right\}}} \mathop \sum \limits_{{\left( {i,k,s} \right) \in U^{\rho } }} \widehat{{\rho_{ik}^{s} }}\), respectively. Since each parameter appears only once in its respective constraint (unlike the parameters \( c_{ijk}^{s}\), which are summed over \(\left( {i,j} \right) \in A, k \in K\)), this structure yields the worst possible realization of each parameter within its range: \([\overline{{d_{ik}^{s} }} - \widehat{{d_{ik}^{s} }} , \overline{{d_{ik}^{s} }} + \widehat{{d_{ik}^{s} }}]\), \([\overline{{cap_{ij}^{s} }} - \widehat{{cap_{ij}^{s} }} , \overline{{cap_{ij}^{s} }} + \widehat{{cap_{ij}^{s} }}]\), and \([\overline{{\rho_{ik}^{s} }} - \widehat{{\rho_{ik}^{s} }} , \overline{{\rho_{ik}^{s} }} + \widehat{{\rho_{ik}^{s} }}]\). Constraints (4) and (5) are augmented accordingly to accommodate the range uncertainty. The deviation terms in constraints (13) and (14) in model \({\text{M}}2\) reflect the direction of the constraints. For example, because the sign in constraint (4) is \( \ge \), \(\widehat{{\rho_{ik}^{s} }}r_{ik}\) is subtracted from the left-hand side of constraint (13) while \(\widehat{{d_{ik}^{s} }}\) is added to its right-hand side.
Model \({\text{M}}2{ }\) minimizes the total cost subject to maximum deviation of \( d_{ik}^{s}\),\( c_{ijk}^{s}\),\(cap_{ij}^{s} \) and \(\rho_{ik}^{s}\) constrained by their respective uncertainty sets.

3.2.4 Scenario-robust model (\({\mathbf{M}}2\))

$$ Min \xi_{2} = \mathop \sum \limits_{i \in I,l \in L} F_{il} y_{il} + \mathop \sum \limits_{i \in I,k \in K} q_{ik} r_{ik} + \left( {\mathop \sum \limits_{{\left( {i,j} \right) \in A,k \in K, s \in S}} P^{s} \overline{{c_{ijk}^{s} }} { } x_{ijk}^{s} + T^{C} g + \mathop \sum \limits_{i,j,k,s} H_{ijk}^{s} + \mathop \sum \limits_{i \in I,k \in K, s \in S} P^{s} (h_{k} z_{ik}^{s} + p_{k} w_{ik}^{s} )} \right) $$
(12)

s.t. (2), (3), (6), (9), (10)

$$ \mathop \sum \limits_{j \ne i \in N} x_{jik}^{s} - \mathop \sum \limits_{j \ne i \in N} x_{ijk}^{s} - z_{ik}^{s} + w_{ik}^{s} + \overline{{\rho_{ik}^{s} }} r_{ik} - \widehat{{\rho_{ik}^{s} }}r_{ik} \ge \overline{{d_{ik}^{s} }} + \widehat{{d_{ik}^{s} }} \; \forall i \in I, k \in K,s \in S $$
(13)
$$ \mathop \sum \limits_{k \in K} u_{k} x_{ijk}^{s} \le \overline{{cap_{ij}^{s} }} - \widehat{{cap_{ij}^{s} }} \; \forall \left( {i,j} \right) \in A, s \in S $$
(14)

In model \({\text{M}}2\), parameters \(d_{ik}^{s}\), \(cap_{ij}^{s}\), and \(\rho_{ik}^{s}\) are set to their worst cases, which raises the issue of over-conservatism: when solving model \({\text{M}}2\), these parameters assume their maximum deviations \(\widehat{{d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\), respectively, and solving the model at these maximum deviations results in overly conservative solutions. We employ a method originating in Bertsimas and Thiele (2006a, 2006b), and adapted by Zokaee et al. (2016) and Liu et al. (2018), to mitigate this over-conservatism, yielding the adjusted scenario-robust model (\({\text{M}}2\_{\text{Adjusted}}\)). Using constraint (14) as an example, we introduce a common conservatism parameter \(T^{ijs}\) for \(cap_{ij}^{s}, \forall i \in I, j \in J, s \in S\). The value of \(T^{ijs}\) lies in the range \(\left[ {0, \left| {{\text{I}} \times {\text{J}} \times {\text{S}}} \right|} \right]\) and represents the number of uncertain \(cap_{ij}^{s}\) that may deviate from their nominal values \(\overline{{cap_{ij}^{s} }}\) by \(\widehat{{cap_{ij}^{s} }}\). The robust counterpart of constraint (14) is shown below as (17), using the adjusted (less conservative) upper bound implied by the common uncertainty budget. Constraint (13) is adjusted in a similar fashion, yielding (16). Model \({\text{M}}2\_{\text{Adjusted}}\) retains the structure of \({\text{M}}2\) except that constraints (13) and (14) are replaced by (16) and (17), respectively. These new constraints follow the suggestion of Bertsimas and Thiele (2006a, 2006b) to consider a common conservatism parameter related to all the demand parameters.
This effectively allows the decision maker to control the amount of uncertainty considered in the model based on the level of certainty available from past data and/or subject-matter expertise for a specific application.
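The adjustment can be made concrete with a short sketch. The function below mirrors the adjusted capacity bound of constraint (17); `cap_bar`, `cap_hat`, `T_ijs`, and `n` stand in for \(\overline{cap_{ij}^{s}}\), \(\widehat{cap_{ij}^{s}}\), \(T^{ijs}\), and \(|I \times J \times S|\), and the numeric values are hypothetical.

```python
# Adjusted (less conservative) capacity bound of constraint (17):
#   effective_cap = cap_bar - (T_ijs / n) * cap_hat,  n = |I x J x S|.
# T_ijs = 0 recovers the nominal capacity (as in stochastic model M1);
# T_ijs = n recovers the fully conservative bound of constraint (14) (model M2).

def adjusted_capacity(cap_bar, cap_hat, T_ijs, n):
    """Right-hand side of the adjusted capacity constraint."""
    assert 0 <= T_ijs <= n, "budget must lie in [0, |I x J x S|]"
    return cap_bar - (T_ijs / n) * cap_hat

n = 10                                         # hypothetical |I x J x S|
nominal = adjusted_capacity(500.0, 50.0, 0, n)     # 500.0, no conservatism
worst = adjusted_capacity(500.0, 50.0, n, n)       # 450.0, full conservatism
halfway = adjusted_capacity(500.0, 50.0, n // 2, n)  # 475.0, intermediate
```

Sliding `T_ijs` between the two extremes interpolates linearly between the stochastic and fully robust bounds, which is the sense in which the decision maker "controls" conservatism.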

3.2.5 Adjusted scenario-robust model (\({\mathbf{M}}2\_{\mathbf{Adjusted}}\))

$$ Min \xi_{2} = \mathop \sum \limits_{i \in I,l \in L} F_{il} y_{il} + \mathop \sum \limits_{i \in I,k \in K} q_{ik} r_{ik} + \left( {\mathop \sum \limits_{{\left( {i,j} \right) \in A,k \in K, s \in S}} P^{s} \overline{{c_{ijk}^{s} }} { } x_{ijk}^{s} + T^{C} g + \mathop \sum \limits_{i,j,k,s} H_{ijk}^{s} + \mathop \sum \limits_{i \in I,k \in K, s \in S} P^{s} (h_{k} z_{ik}^{s} + p_{k} w_{ik}^{s} )} \right) $$
(15)

s.t. (2), (3), (6), (9), (10)

$$ \mathop \sum \limits_{j \ne i \in N} x_{jik}^{s} - \mathop \sum \limits_{j \ne i \in N} x_{ijk}^{s} - z_{ik}^{s} + w_{ik}^{s} + \overline{{\rho_{ik}^{s} }} r_{ik} - \frac{{ T^{iks} }}{{\left| {{\text{I}} \times {\text{K}} \times {\text{S}}} \right|}}\widehat{{\rho_{ik}^{s} }}r_{ik} \ge \overline{{d_{ik}^{s} }} + \frac{{ T^{iks} }}{{\left| {{\text{I}} \times {\text{K}} \times {\text{S}}} \right|}}\widehat{{d_{ik}^{s} }} \forall i \in I, k \in K,s \in S $$
(16)
$$ \mathop \sum \limits_{k \in K} u_{k} x_{ijk}^{s} \le \overline{{cap_{ij}^{s} }} - \frac{{ T^{ijs} }}{{\left| {{\text{I}} \times {\text{J}} \times {\text{S}}} \right|}}\widehat{{cap_{ij}^{s} }} \forall \left( {i,j} \right) \in A, s \in S $$
(17)

4 Case study

This section first provides the background of the case study used to illustrate the application of the proposed models. Next, the impacts of deviations in demand, unit shipping cost, capacity, and the proportion of stocked supplies on the results produced by the stochastic programming and scenario-robust programming models are reported. Finally, the impact of cardinality uncertainty is studied for the case study.

4.1 Background

Hurricane season in the Atlantic, Caribbean, and Gulf of Mexico typically lasts from June 1st to November 30th each year (National Hurricane Center, 2018). During this period, tropical cyclones are common and capable of inflicting severe damage to infrastructure and loss of life. Table 2 summarizes the parameters of a case study that focuses on hurricane preparedness in the southeastern region of the United States, a network with 30 nodes and 58 links (Rawls and Turnquist, 2009; Noyan, 2012). Three relief items are considered in this case study: water (in units of 1000 gallons), meals ready to eat (MRE, in units of 1000), and medical kits. Based on historical data, a total of 51 scenarios representing single and multiple hurricane landfalls are provided. (A detailed description of the 51 scenarios can be found in Appendix 1.) Parameter settings are the same as in Rawls and Turnquist (2009), except that we introduce a penalty cost, \(p_{k}\), equal to 100 times the acquisition cost \((p_{k} = 100q_{k})\), reflecting the fact that shortages that are merely costly in commercial settings translate to human suffering and even loss of life in disaster preparedness settings. This penalty cost setting is more in line with the economic value of a statistical fatality, estimated at $6 million, adopted by various agencies such as the US Environmental Protection Agency (Latourrette & Willis, 2007). The three models are coded in IBM Optimization Programming Language and solved by the ILOG CPLEX Optimizer, version 12.7.1, on a computer with a 1.7 GHz processor and 8 GB RAM. All problem instances are solved in under 180 s. Note that model M2 incurs higher costs than model M1 because model M2 incorporates the possible deviations, whereas model M1's result is based on the assumption of no deviation. Our simulation studies in Sect. 5 demonstrate that model M1 usually leads to higher costs when subjected to deviations.

Table 2 Setting of the case parameters from Rawls and Turnquist (2009)

4.2 Impacts of deviations (\(\widehat{{{\varvec{c}}_{{{\varvec{ijk}}}}^{{\varvec{s}}} }}\), \(\widehat{{{\varvec{d}}_{{{\varvec{ik}}}}^{{\varvec{s}}} }}\), \(\widehat{{{\varvec{cap}}_{{{\varvec{ij}}}}^{{\varvec{s}}} }}\), and \(\widehat{{{\varvec{\rho}}_{{{\varvec{ik}}}}^{{\varvec{s}}} }}\))

We first allow full cardinality on \(c_{ijk}^{s}\), permitting it to vary inside its range \([\overline{{c_{ijk}^{s} }}, \overline{{c_{ijk}^{s} }} + \widehat{{c_{ijk}^{s} }}]\), by setting \(T^{C}\) to its maximum value \(\left| U^{c} \right|\), where \(U^{c} \subseteq {\text{I}} \times {\text{J}} \times {\text{K}} \times {\text{S}}\). This means that we allow the maximum number of uncertain parameters to vary. Note that parameters \(d_{ik}^{s}\), \(cap_{ij}^{s}\), and \(\rho_{ik}^{s}\) are fixed at their worst values. In the traditional stochastic programming model \({\text{M}}1\), the nominal values \(\overline{{d_{ik}^{s} }}\), \(\overline{{c_{ijk}^{s} }}\), \(\overline{{cap_{ij}^{s} }}\), and \(\overline{{\rho_{ik}^{s} }}\) are used as inputs. Here we simulate low, medium, and high uncertainty in the point estimates by setting \(\widehat{{c_{ijk}^{s} }}\), \(\widehat{{d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\) to 2.5%, 5%, and 7.5% of their respective nominal values.
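The three uncertainty levels can be generated mechanically from the nominal values. The sketch below is illustrative only; the nominal value of 200.0 and the names `deviation_and_range` and `LEVELS` are hypothetical, not from the paper.

```python
# Low / medium / high deviations as 2.5%, 5%, and 7.5% of a nominal value,
# and the resulting symmetric worst-case range used by model M2.

LEVELS = {"low": 0.025, "medium": 0.05, "high": 0.075}

def deviation_and_range(nominal, level):
    """Return the deviation hat-value and the range [nominal - hat, nominal + hat]."""
    hat = LEVELS[level] * nominal          # e.g. d_hat = 0.05 * d_bar
    return hat, (nominal - hat, nominal + hat)

d_hat, d_range = deviation_and_range(200.0, "medium")
# a demand with nominal 200 may then realize anywhere within d_range
```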

Table 3 first reports the cost breakdown and acquisition quantity produced by the traditional stochastic programming model \({\text{M}}1\), followed by the cost and supply differences between models \({\text{M}}2\) and \({\text{M}}1\), relative to \({\text{M}}1\), at the low, medium, and high uncertainty levels. As expected, model \({\text{M}}1\), which does not factor in the possible point estimate deviations in \(\overline{{d_{ik}^{s} }}\), \(\overline{{c_{ijk}^{s} }}\), \(\overline{{cap_{ij}^{s} }}\), and \(\overline{{\rho_{ik}^{s} }}\), provides the least expensive solution but carries a higher risk of shortages. A cost breakdown further reveals that the increases in shortage costs outpace all other cost components: 5.04%, 11.09%, and 17.45% when the point estimates deviate by 2.5%, 5%, and 7.5%, respectively. This is alarming, and likely unacceptable, because shortages in humanitarian logistics applications often lead to human suffering and even fatalities.

Table 3 Comparison of models \({\text{M}}1\) and \({\text{M}}2\) solutions (varying \(\widehat{{ c_{ijk}^{s} }}\), \(\widehat{{ d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\)) – 100 × multiplier

Table 4 shows the network structures produced by the solutions to models \({\text{M}}1\) and \({\text{M}}2\) at the three uncertainty levels. In response to uncertainty, the solution to model \({\text{M}}2\) adds a third large facility at location 19 at all three uncertainty levels while retaining the two large facilities at locations 15 and 24 that were originally in the solution of model \({\text{M}}1\). This large facility at location 19 was a medium facility in the solution to M1. Interestingly, M2 does not recommend building new facilities at locations absent from the solution to M1; it only recommends increasing the size of some existing facilities. Consider the solution to model \({\text{M}}2\) at the 7.5% uncertainty level as another example: the new medium facility at location 13 was an existing small facility in the solutions at the lower uncertainty levels (2.5% and 5%), as well as in the solution to model \({\text{M}}1\). This observation has direct practical relevance. If existing infrastructure was determined based on solutions from a traditional stochastic programming model, it is reassuring that our scenario-robust model achieves robustness by increasing the size of local facilities without disrupting the overall network structure. One may argue that this systematic buildup pattern is an artifact of the small deviations chosen, but the validation results in Sect. 5 demonstrate the resilience of this finding when tested using larger deviations of 15% and 25%.

Table 4 Network structures produced by model \({\text{M}}2\) (varying \(\widehat{{ c_{ijk}^{s} }}\), \(\widehat{{ d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\)) – 100 × multiplier

4.3 Impact of shortage cost

Holguín-Veras et al. (2013) developed the concept of deprivation costs, the economic valuation of human suffering, to capture the social costs associated with fatalities and related welfare losses. However, due to its complexity, work on integrating such costs into optimization models has been limited, with a few exceptions. Pradhananga et al. (2016) examine the effects of constant and linearly increasing deprivation functions in an integrated preparedness and response framework. Paul and Wang (2015) incorporate deprivation costs into a location-allocation model for an earthquake preparedness problem. Our models are more complex and lack a standard structure to facilitate such integration. Instead, we conduct a thorough analysis of different shortage cost values (Davis et al., 2013) to assess the impact of the shortage cost. Our analysis uses 10 × and 1000 × multipliers, and we compare the results to those of the 100 × multiplier reported in the previous section. Results are summarized in Tables 5, 6, and 7.

Table 5 Comparison of models \({\text{M}}1\) and \({\text{M}}2\) solutions (varying \(\widehat{{ c_{ijk}^{s} }}\), \(\widehat{{ d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\)) – 10 × multiplier
Table 6 Comparison of models \({\text{M}}1\) and \({\text{M}}2\) solutions (varying \(\widehat{{ c_{ijk}^{s} }}\), \(\widehat{{ d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\)) – 1000 × multiplier
Table 7 Network structures produced by model \({\text{M}}2\) (varying \(\widehat{{ c_{ijk}^{s} }}\), \(\widehat{{ d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\)) given 10 ×, 100 × and 1000 × multipliers

In response to increasing shortage cost, we observe that larger quantities of supplies are being pre-positioned, which requires a greater number of facilities overall. The network structure, therefore, requires fewer small facilities but more medium and large facilities across all uncertainty levels. The findings are in line with those of previous studies (e.g., Condeixa et al., 2017; Lodree et al., 2012) conducted in similar disaster-relief settings.

4.4 Impacts of cardinality uncertainty

To examine the effect of the number of uncertain parameters allowed to vary (the cardinality budget), we fix \(\widehat{{ c_{ijk}^{s} }}\), \(\widehat{{ d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\) at their medium deviation of 5% and test the impact of cardinality uncertainty by gradually increasing \(T^{C}\), \(T^{ijs}\), and \(T^{iks}\), which control the maximum number of locations to be impacted in their respective constraints. Starting at 0 (Level 1, equivalent to model \(M1\)), we test the models at two equal intervals (Levels 2 and 3) and at their maximum-allowed values (Level 4, equivalent to model \(M2\)). Table 8 shows the values used for the four levels of the budget parameters.
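The four levels can be sketched as three equal steps from zero to the maximum cardinality. This is a minimal illustration, assuming equally spaced levels as described above; the maximum cardinality of 90 is a hypothetical value, not the case study's.

```python
# Four levels of a cardinality budget: 0 (Level 1, equivalent to model M1),
# two equal intermediate intervals (Levels 2 and 3), and the maximum
# cardinality (Level 4, equivalent to model M2).

def budget_levels(max_cardinality):
    """Return the four budget values [0, 1/3, 2/3, 1] * max_cardinality."""
    step = max_cardinality / 3.0
    return [round(k * step) for k in range(4)]

levels = budget_levels(90)   # hypothetical |I x J x S| = 90
```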

Table 8 Four levels of the budget parameters used in \({\text{M}}2\_{\text{Adjusted}}\)

Table 9 reports the percent cost and supply increases at Levels 2, 3, and 4 relative to Level 1, and Table 10 shows the network structures at the four levels. We observe patterns similar to those in Sect. 4.2: total cost and supply increase gradually as the level of conservatism increases, and robustness is achieved by increasing the size of local facilities while preserving the overall network structure.

Table 9 Comparison of models \({\text{M}}1\) and \({\text{M}}2\_{\text{Adjusted}}\) solutions (varying \(T^{C}\), \(T^{ijs}\), and \(T^{iks}\))
Table 10 Network structures produced by model \({\text{M}}2\_{\text{Adjusted}}\) (varying \(T^{C}\), \(T^{ijs}\), and \(T^{iks}\))

5 Validation of robustness

The results of model \({\text{M}}2\) are driven by the maximum deviations specified. In this section, we conduct Monte Carlo simulation studies to compare the results of models \({\text{M}}2\) and \({\text{M}}1\) under various conditions in which the actual deviations differ. Furthermore, we derive the dominance regions for model \({\text{M}}2\), where it demonstrates superior performance over \({\text{M}}1\). The goal of this analysis is to determine under what conditions model M2 results in solutions superior to those of model M1.

We evaluate the robustness of models M2 and M2_Adjusted with regard to deviations in \(\widehat{{ c_{ijk}^{s} }}\), \(\widehat{{ d_{ik}^{s} }}\), \(\widehat{{cap_{ij}^{s} }}\), and \(\widehat{{\rho_{ik}^{s} }}\). As Tables 3 and 9 show, \({\text{M}}2\) and M2_Adjusted incur higher first-stage costs than model \({\text{M}}1\). We examine whether this additional cost, spent on the higher-level strategic decisions (\(y_{il}\) and \(r_{ik}\)), can be justified under various realized deviations. Note that in the simulation study the first-stage costs remain the same, as they are not directly impacted by uncertainty. This analysis also allows us to examine when model M2_Adjusted outperforms model M2 and how the solutions of these models differ.

Model \({\text{M}}2\) results are driven by the maximum specified deviations of \(2.5\%\), \(5\%\), and \(7.5\%\). The results of model M2_Adjusted are based on the medium deviation of 5% and the pre-specified numbers of locations to be impacted in the respective constraints. The validation study would favor model \({\text{M}}2\) or M2_Adjusted if it tested only those deviations or values in their vicinity. To facilitate fair comparisons, we use five uniformly distributed realized deviation ranges \({\Psi } = \left\{ {\left[ {0, 2.5\% } \right],\left[ {0, 5\% } \right],\left[ {0, 7.5\% } \right],\left[ {0,15\% } \right],\left[ {0,25\% } \right]} \right\}\) in our study. For example, range \(\left[ {0,2.5\% } \right]\) allows \(\widetilde{{d_{ik}^{s} }}\), \(\widetilde{{c_{ijk}^{s} }}\), \(\widetilde{{\rho_{ik}^{s} }}\), and \(\widetilde{{cap_{ij}^{s} }}\) to deviate by any percentage between \(0\) and \(2.5\%\) from their respective nominal values. For realized deviations that are 0 or close to 0, the corresponding locations can be considered barely impacted by the hurricane.

We perform a Monte Carlo simulation using common random numbers to compare the performances of M1 and M2. The input consists of the optimal first-stage decision variables from model \({\text{M}}1\) (\(y_{il}^{M1*}\) and \(r_{ik}^{M1*}\)); from model \({\text{M}}2\) at the uncertainty levels of 2.5% (\(y_{il}^{M2\_2.5\% *}\) and \(r_{ik}^{M2\_2.5\% *}\)), 5% (\(y_{il}^{M2\_5\% *}\) and \(r_{ik}^{M2\_5\% *}\)), and 7.5% (\(y_{il}^{M2\_7.5\% *}\) and \(r_{ik}^{M2\_7.5\% *}\)); and from M2_Adjusted at cardinality Levels 2 (\(y_{il}^{M2\_5\% \_L2*}\) and \(r_{ik}^{M2\_5\% \_L2*}\)) and 3 (\(y_{il}^{M2\_5\% \_L3*}\) and \(r_{ik}^{M2\_5\% \_L3*}\)). In each replication, a disaster scenario is first randomly selected according to the scenario probability distribution \(P^{s}\); second, a random number \(\epsilon\) is drawn uniformly from a deviation range in \({\Psi }\), and the realized values {\(\widetilde{{d_{ik}^{s} }}\), \(\widetilde{{c_{ijk}^{s} }}\), \(\widetilde{{\rho_{ik}^{s} }}\), \(\widetilde{{cap_{ij}^{s} }}\)} are set to {\(\overline{{d_{ik}^{s} }}\left( {1 + \epsilon } \right)\), \(\overline{{c_{ijk}^{s} }}\left( {1 + \epsilon } \right)\), \(\overline{{\rho_{ik}^{s} }}\left( {1 - \epsilon } \right)\), \(\overline{{cap_{ij}^{s} }}\left( {1 - \epsilon } \right)\)}; finally, the second-stage problem is solved to obtain the optimal second-stage costs for M1, M2, and M2_Adjusted, respectively. A total of 200 replications are conducted for each of the five deviation ranges in \({\Psi }\), and the total costs are reported in Table 11. To facilitate comparisons, we also provide the total cost differences; shaded entries indicate the dominance regions of models \({\text{M}}2\) and \({\text{M}}2\_{\text{Adjusted}}\).
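The replication logic can be sketched in Python. This is a toy sketch, not the paper's CPLEX implementation: `second_stage_cost` is a hypothetical placeholder for re-solving the second-stage problem, and the first-stage solutions, probabilities, and costs are illustrative numbers only.

```python
import random

# Monte Carlo comparison with common random numbers: each replication draws
# ONE scenario and ONE deviation, then evaluates every candidate first-stage
# solution on the same realization, so differences reflect the solutions,
# not sampling noise.

def pick_scenario(probs, u):
    """Inverse-transform sampling of a scenario index from probabilities."""
    cum = 0.0
    for s, p in enumerate(probs):
        cum += p
        if u <= cum:
            return s
    return len(probs) - 1

def second_stage_cost(solution, scenario, eps):
    """Placeholder for the second-stage solve; here, a toy cost that grows
    with the realized deviation eps (scenario is ignored in this sketch)."""
    base, sensitivity = solution
    return base + sensitivity * eps

def simulate(solutions, probs, dev_max, reps=200, seed=42):
    rng = random.Random(seed)                    # one stream for all models
    totals = {name: 0.0 for name in solutions}
    for _ in range(reps):
        u, eps = rng.random(), rng.uniform(0.0, dev_max)
        s = pick_scenario(probs, u)              # same scenario for all models
        for name, sol in solutions.items():      # same eps for all models
            totals[name] += second_stage_cost(sol, s, eps)
    return {name: t / reps for name, t in totals.items()}

# Hypothetical first-stage solutions as (fixed cost, deviation sensitivity):
# the robust solution pays more up front but is less sensitive to deviations.
avg = simulate({"M1": (100.0, 400.0), "M2_5%": (120.0, 100.0)},
               probs=[0.5, 0.3, 0.2], dev_max=0.25)
```

With a large deviation range, the less deviation-sensitive solution wins on average despite its higher fixed cost, which mirrors the dominance-region reasoning below.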

Table 11 Average of total costs and cost differences over 200 replications (\({\text{M}}2\_{\text{Adjusted}}\) vs. \({\text{M}}1\))

First, we focus on model \({\text{M}}2{ }\) results in Table 11. Recall that model \({\text{M}}2{ }\) at 2.5% is designed to hedge against 2.5% uncertainty, and it will outperform \({\text{M}}1\) by design when the realized deviation is indeed 2.5%. Under an unfavorable testing condition of \( \left[ {0, 2.5\% } \right]\), it is encouraging to see that the solution to model \({\text{M}}2{ }\) at uncertainty level 2.5% is still slightly better (by $12,576) than the solution to model \({\text{M}}1\). Note that model \({\text{M}}2{ }\) at uncertainty levels of 5% and 7.5% is overly conservative when the realized deviation varies within \(\left[ {0, 2.5\% } \right]\) and fails to offset the additional first-stage cost.

Results for range \(\left[ {0, 5\% } \right]\) may seem perplexing at first, but when examined in conjunction with the results for range \(\left[ {0, 7.5\% } \right]\), they yield useful insights. When the realized deviation is drawn from \(\left[ {0, 5\% } \right]\), model \({\text{M}}2\) at the 2.5% uncertainty level performs best, followed by model \({\text{M}}1\), \({\text{M}}2\) at 5%, and \({\text{M}}2\) at 7.5%, in that order. This, again, points to the nature of model \({\text{M}}2\): when a realized deviation is uniformly drawn from \(\left[ {0, 5\% } \right]\), the mean realized deviation is about 2.5%, so it is unsurprising that \({\text{M}}2\) at 2.5% performs best. Range \(\left[ {0, 7.5\% } \right]\) reveals a similar pattern, in which model \({\text{M}}2\) at both the 2.5% and 5% levels outperforms \({\text{M}}1\).

Model \({\text{M}}2\) shows its dominance over \({\text{M}}1\) when tested under the larger deviation ranges of \(\left[ {0, 15\% } \right]\) and \( \left[ {0, 25\% } \right]\). As compared to Model \( {\text{M}}1\), \({\text{M}}2\) specified at 2.5%, 5% and 7.5% offers additional robustness, even if the realized deviation is beyond the maximum specified deviations of M2. This provides great assurance for the robustness of Model \({\text{M}}2\) under catastrophic situations such as Hurricane Katrina where, for example, demand for supplies was substantially underestimated.

Next, we discuss the model \({\text{M}}2\_{\text{Adjusted}}\) results in Table 11. By design, the solutions of model M2_Adjusted at Levels 2 and 3 should be less conservative than the solution of M2 at 5% (i.e., M2_Adjusted at Level 4), because Levels 2 and 3 limit the number of locations being impacted. The results in Table 11 confirm this expectation. Model M2_Adjusted at Levels 2 and 3 results in a lower total cost than M1 even when the realized maximum deviation is small (2.5% and 5%, respectively), whereas model M2 at 5% requires the realized maximum deviation to be at least 7.5% to perform better than M1. Model M2 at 5% outperforms M2_Adjusted at these two levels when the realized maximum deviation is large, i.e., 25% or more. Table 11 also shows that M2_Adjusted at Level 2 (which is adjusted based on model M2 at 5%) is even less conservative than M2 at 2.5%. In other words, M2_Adjusted at Level 2 can yield greater savings than M2 at 2.5% when the realized deviation is small, but smaller savings when the realized deviation is large. The cardinality level considered in M2_Adjusted thus impacts the robustness of solutions, allowing model M2_Adjusted to generate a greater variety of robust solutions.

6 Managerial implications

The applicability of our approach is demonstrated through a case study of hurricane preparedness in the Southeastern United States. We explore some key issues in humanitarian logistics such as network topology, shortages, and supply levels under various degrees of parameter uncertainty.

Our models allow decision makers to specify uncertainty parameters corresponding to the degree of their knowledge, using distribution-free uncertainty sets in the form of ranges. We test these models using a case study based on hurricane preparedness. Results from our scenario-robust models indicate that, in response to uncertainty in point estimates of demand, shipping costs, capacities, and the proportion of damaged supplies, solution robustness is achieved through a significant increase in the supply of relief items while facilities are consolidated and enlarged at the local level, rather than built at new locations. Pre-positioning disaster relief supplies at the strategic locations identified would enable a quick, robust, and effective response to humanitarian disasters such as tsunamis, SARS, Ebola, and even the recent global COVID-19 pandemic. As evidenced in Sect. 5, our models consistently offer robustness even under unfavorable realized deviations of parameters. These characteristics bear significant relevance to practices such as improving the Strategic National Stockpile (SNS) response.

The SNS is a national emergency program established in 1999 to supplement and replenish state and local relief supplies in the event of a disaster. The SNS 12-h Push Packages initiative aims to reach any location in the US within 12 h of a federal deployment decision. Since its inception, the SNS has responded to numerous large-scale emergencies, including hurricanes (e.g., Hurricane Dorian in 2019; Hurricanes Isaac and Sandy in 2012) and pandemics (COVID-19 in 2020; H1N1 influenza in 2009). In a 2012 program review (Inglesby & Ellis, 2012), the authors stated some important supply chain principles guiding SNS practices. One of them is "To forecast is to err", which recognizes that a considerable amount of uncertainty, such as uncertainty about the time and magnitude of a disaster, is inherent in humanitarian supply chain response. To help mitigate this, our models offer two layers of protection: they first consider multiple scenarios of a disaster and then consider potential deviations within each scenario. Another principle identified by Inglesby and Ellis (2012) is that "Local optimization leads to global disharmony." In contrast, our models offer a globally optimal solution rather than individually optimizing local decisions. Furthermore, the resulting solutions tend to enlarge existing DC locations in response to uncertainty rather than requiring the building of new DCs, which is less disruptive to the overall network structure, as demonstrated in Sect. 4.

7 Research implications

From a methodological standpoint, our paper aims to overcome the inherent limitations of both stochastic programming and robust optimization approaches. Generally speaking, robust optimization models are more appropriate when uncertainty in parameter values cannot be accurately described by a probability distribution, while stochastic programming models are more appropriate when the decision maker can define the exact probability distribution of each random parameter. In most situations, however, there is some, but limited, information about such parameters. This limited information can be used to define possible scenario values, but is usually insufficient to define a complete probability distribution for each random parameter. Our proposed scenario-robust method fills this gap between stochastic programming and robust optimization and is applicable to cases where only limited information about the random model parameters is available. It allows the decision maker to incorporate historical data from past natural disasters while acknowledging that this information is limited and does not define all possible scenarios. It eases the difficult, and usually impossible, task of providing exact probability distributions for the uncertain parameters of a stochastic programming model through the use of distribution-free uncertainty sets. When well-recorded historical data lead to exact knowledge of the uncertain parameters, our approach reduces to a traditional stochastic programming model. If such an ideal condition is not attainable, as expected in practice, our model allows decision makers to use ranges that reflect their degree of knowledge regarding the underlying uncertainty of the random parameters.

Our proposed scenario-robust programming model can be used in applications of pre-positioning relief items for many different types of natural disasters, such as earthquakes, floods, and storms, which share the characteristic of being somewhat cyclical while remaining difficult to predict in exact timing, location, and severity. Our method can also be applied in other settings where parameter uncertainty is common, such as facility location (Melo et al., 2009), vehicle routing (Toth and Vigo, 2014), and other operationally complex problems. Additionally, our methods can help facilitate public–private partnerships for creating humanitarian supply chains, such as Operation Warp Speed, developed in the United States in 2020 to provide vaccines, PPE, and other equipment to affected populations in response to the COVID-19 pandemic.

8 Conclusion

This paper proposes a scenario-robust programming method to create a robust preparation plan to pre-position relief items to respond to natural disasters. Most existing approaches to solving these problems are either too fragile (e.g., stochastic programming) or too conservative (e.g., robust optimization). Our method aims to overcome these inherent limitations by fully utilizing known information while also accommodating the inherent uncertainties related to the pre-positioning of relief items for natural disasters. The resulting preparation plan is robust and cost-effective. Our method generates a greater variety of robust solutions that reflect the degree of known information for the specific natural disaster.

The case study and the simulations in this paper show that the scenario-robust model can generate a relief-item pre-positioning plan that outperforms stochastic programming not only when the deviations of parameters are large, but also when they are small. Based on our case study solutions, we find that, compared to the traditional stochastic programming solution, robustness can be achieved by increasing the size of local facilities without building any new facilities at different locations.

There are several possible paths for future research. First, our method assumes that the probabilities of scenarios can be obtained from historical data, but the actual loss and impact incurred from a disaster are harder to estimate. If the historical data are insufficient to estimate the probability of each scenario, a new scenario-robust model would be needed to simultaneously address uncertainty in the scenario probabilities as well as the parameter uncertainties already covered by our method. One challenge is that such a model would be highly non-linear and thus more difficult to solve; a formulation that addresses uncertainty in the scenario probabilities would require a highly efficient algorithm and is one area of future research. Second, the objective function in our model is a weighted cost function, which is common and widely used in other resource allocation problems. However, such weighted cost functions may not be the best representation of humanitarian logistics applications, because human suffering is a component of this cost and simply using a shortage cost parameter in the objective may not represent it appropriately. Holguín-Veras et al. (2013) develop the concept of deprivation cost, defined as the economic valuation of human suffering, and conclude that considering both logistics and deprivation costs concurrently leads to more equality for all groups in a humanitarian logistics setting. Incorporating deprivation costs in future studies is a worthy endeavor, despite the resulting additional complexity, and is another path for future research related to our work.