Abstract
Maxmin approaches have been widely applied to address equity, an essential consideration in humanitarian operations. These approaches, however, have a significant drawback: they are neutral between solutions with the same minimum value. Such solutions, although equivalent from a maxmin point of view, might differ significantly. We address this problem using the lexicographic maximin approach, a refinement of the classic maxmin approach. We apply this approach to the rapid needs assessment process, which is carried out immediately after the onset of a disaster to investigate the disaster’s impact on the affected community groups through field visits. We construct routes for an assessment plan to cover community groups, each carrying a distinct characteristic, such that the vector of coverage ratios is maximized. We define the leximin selective assessment problem, which considers the bi-objective optimization of total assessment time and coverage ratio vector maximization. We solve the bi-objective problem by a heuristic approach based on the multi-directional local search framework.
Introduction
The maxmin (or minmax) approach is one of the most widely applied measures for addressing equity in humanitarian relief operations, such as maximizing the minimum satisfied demand in relief distribution (e.g., Tzeng et al. 2007; Vitoriano et al. 2011; Ransikarbum and Mason 2016; Cao et al. 2016) or minimizing the maximum arrival time of critical supplies (e.g., Campbell et al. 2008; Nolz et al. 2010). A significant flaw of this approach is that all solutions with the same maxmin value are considered equivalent, even though they might differ substantially. This paper aims to refine this approach by applying the lexicographic maximin approach, which has been applied in various supply chain problems such as facility location and network optimization (Ogryczak et al. 2014). We focus on rapid needs assessment operations, which, in contrast to relief distribution problems, have received little attention in the humanitarian logistics literature (Pamukcu and Balcik 2020). According to de la Torre et al. (2012), the majority of studies addressing relief distribution problems assume that the needs of impacted populations are already known or can be estimated, and needs assessment operations are not specifically addressed.
Rapid needs assessment starts immediately after a disaster strikes to quickly evaluate the disaster’s impact through direct observation of affected sites and interviews with impacted community groups (IFRC 2008; ACAPS 2011; Arii 2013). The assessment teams comprise experts from various areas, ranging from those familiar with the local area to specialists in public health, epidemiology, nutrition, logistics, and shelter (ACAPS 2011; Arii 2013). A successful rapid needs assessment assists humanitarian agencies in effectively satisfying the needs of affected people at the time of great need (Arii 2013). Planning the field visits, and accordingly deciding which sites to visit, is an important decision for a successful assessment. Since time and resources are limited during the rapid needs assessment stage, it is usually not possible to evaluate the entire affected region; therefore, assessment teams select a sample of affected sites to visit using sampling methods (ACAPS 2011). These methods, in general, aim to select a limited number of sites whose visit will provide the opportunity for assessment teams to observe and compare the conditions of various affected community groups, such as displaced persons, host communities, and returnees (IFRC 2008; ACAPS 2011). The goal is to achieve acceptable coverage of the various community groups.
Balcik (2017) proposes the “Selective Assessment Routing Problem” (SARP), a mixed-integer model that simultaneously addresses the site selection and routing decisions of the rapid needs assessment teams. The SARP aims to evaluate the post-disaster conditions of different community groups, each carrying a distinct characteristic. Its objective function maximizes the minimum coverage ratio achieved across the community groups (maxmin approach). The coverage ratio of a characteristic is the number of times that the characteristic is covered in the assessment plan divided by the total number of sites in the network carrying that characteristic (Balcik 2017). The SARP has route duration constraints, which define the assessment deadline.
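As an illustration, the coverage ratio computation described above can be sketched as follows (a minimal Python sketch; the site data and helper names are illustrative, not from the paper):

```python
# Hypothetical sketch of the coverage ratio definition: for each characteristic,
# the number of visited sites carrying it divided by the total number of sites
# in the network carrying it.
def coverage_ratios(site_chars, visited):
    """site_chars: dict site -> set of characteristics; visited: set of sites."""
    all_chars = set().union(*site_chars.values())
    ratios = {}
    for c in sorted(all_chars):
        carrying = [s for s, chars in site_chars.items() if c in chars]
        covered = [s for s in carrying if s in visited]
        ratios[c] = len(covered) / len(carrying)
    return ratios

# Toy network: characteristic 1 appears at sites 10 and 20; visiting only
# site 10 yields a ratio of 0.5 for characteristic 1.
sites = {10: {1, 4}, 20: {1}, 30: {4}}
print(coverage_ratios(sites, visited={10, 30}))  # {1: 0.5, 4: 1.0}
```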
To illustrate the maxmin approach and the problems that may arise when solutions with the same minimum coverage ratio are not discriminated in the SARP, we provide an illustrative example in Fig. 1. In this example, there are three solutions (\(s_1\), \(s_2\), and \(s_3\)) to a SARP instance with three assessment teams and six characteristics. The array of 0s and 1s at each site indicates which characteristics are present at that site; for example, site number 10 carries characteristics 1 and 4. For solution \(s_1\) (see Fig. 1a), the minimum coverage ratio is 0.5, whereas in solution \(s_2\) (see Fig. 1b) it is 0.75. Therefore, from a maxmin point of view, \(s_2\) outperforms \(s_1\) since it has a higher minimum coverage ratio. On the other hand, solutions \(s_2\) and \(s_3\) (see Fig. 1b, c) provide the same minimum coverage value of 0.75 and are therefore considered equal from the maxmin point of view. However, these two solutions may not be the same from a decision maker’s point of view.
In practice, solutions with the same maxmin value often have different desirability due to other criteria. In the case of the SARP, solutions with the same maxmin value can provide different coverage for characteristics that are not the least covered characteristic, which is also important: in rapid needs assessment, visiting such extra sites is desirable since each additional visit improves the coverage ratio associated with at least one characteristic (Balcik 2017). Therefore, we believe that introducing the lexicographic maximin approach can be especially relevant in this context. This approach has already been integrated with a few VRP algorithms (Saliba 2006; Lehuédé et al. 2020).
During rapid needs assessment operations, minimizing the total travel time (having a “faster” assessment) on the one hand and maximizing the reliability level (having a “better” assessment) on the other are two conflicting but equally important goals of decision makers. As Zissman et al. (2014) indicate, in planning a humanitarian needs assessment for Haiti, the decision makers faced a number of design trade-offs, including having a faster or a better assessment: “The key challenge was that the data were required urgently: a better assessment could be produced by spending more time designing its elements and verifying its results, but the data were needed faster to inform urgent actions to provide relief to those in need” (Zissman et al. 2014, p. 13). In order to capture these two conflicting objectives, we propose a bi-objective optimization problem in which we find the set of solutions to the SARP that offer the best trade-off between total route duration (assessment time) and community group coverage ratio maximization using lexicographic ordering.
To solve the bi-objective problem, we apply multi-directional local search, a multi-objective optimization framework that generalizes the concept of local search to multiple objectives (Tricoire 2012). Within this framework, we design Adaptive Large Neighborhood Search (ALNS) operators to guide the search toward solutions with higher coverage ratios. We evaluate and analyze the proposed approaches via several computational experiments.
The remainder of the paper is organized as follows. In Sect. 2, we review the relevant literature. In Sect. 3, we introduce the leximin approach to the SARP and explain how we can represent the problem in a bi-objective fashion. We present the solution method and computational results in Sects. 4 and 5, respectively. Finally, we provide conclusions in Sect. 6.
Related literature
Equity has been considered in numerous application areas of humanitarian supply chains, and there are various approaches for expressing it in the literature. According to Young and Isaac (1995), as cited in Cao et al. (2016), equity measures based on different theories of justice can be categorized into three classes: “(i) Aristotle’s proportional equity principle, where the resources are allocated in proportion to the beneficiaries’ needs, (ii) classical utilitarianism, where the main aim is to maximize the total welfare of the whole system, and (iii) the Rawlsian maximin or difference principle, which refers to maximizing the welfare of the worst-off beneficiaries in the system” (Cao et al. 2016, p. 34). There are also other equity measures in the literature, such as the Gini coefficient adopted from the economics literature (e.g., Gutjahr and Fischer 2018; Eisenhandler and Tzur 2019; Mostajabdaveh et al. 2019), lexicographic goal programming formulations (e.g., Ortuño et al. 2011), and the deprivation cost concept (e.g., Holguín-Veras et al. 2013). See Mostajabdaveh et al. (2019) for a summary of different approaches to measuring equity.
In this paper, we focus on the Rawlsian maximin measures within the context of humanitarian operations. In fact, the focus of this paper is not to compare different equity measures as the definition of an appropriate measure depends on the context.
The minmax approach has been applied in numerous relief distribution problems. Tzeng et al. (2007) construct a relief-distribution model using a multi-objective model for a joint facility location and commodity transportation problem, where the objectives are minimizing total costs, minimizing total travel time, and maximizing the minimal satisfaction during the planning period. Campbell et al. (2008) focus on two objectives of minimizing the maximum arrival time of supplies and the average arrival times of supplies. The authors demonstrate the potential impact of using these objective functions on the trade-off between efficiency and equity within the context of the traveling salesperson and vehicle routing problems. Balcik et al. (2008) address equity by minimizing the maximum percentage of unsatisfied demand over all locations, whereas Ransikarbum and Mason (2016) maximize the minimum percentage of satisfied demand in relief distribution.
Apart from studies that focus on the distribution of relief items, field visit planning of needs assessment processes has been studied in a few papers in the field of optimization. Huang et al. (2013) consider routing of post-disaster assessment teams. In this study, the assessment teams visit all communities in the affected regions, which makes this model appropriate for the detailed assessment stage where time allows visiting all the impacted sites. This, however, is not the case in the rapid needs assessment stage, where it is usually not possible to visit all the sites due to a shortage of time. Constructing routes for rapid needs assessment is studied by Balcik (2017), in which assessment teams need to select a subset of sites to visit and consider the equity among various community groups. Balcik (2017) presents a mixed-integer model for the proposed SARP, which simultaneously addresses site selection and routing decisions and supports the rapid needs assessment process that involves the purposive sampling method, a method that only selects those sites that carry certain characteristics. The objective function is the maximization of the minimum coverage ratio achieved across all community groups. Minimizing total route duration is used as a secondary objective in a lexicographic fashion.
The study of Balcik (2017) has been extended in various directions. Balcik and Yanıkoğlu (2020) consider travel time as an uncertain parameter in post-disaster networks and propose a robust optimization approach to address the uncertainty. Pamukcu and Balcik (2020) assume that assessment coordinators specify a target number of observations for each community group in advance, and their objective is to minimize the total duration required to cover all community groups. Li et al. (2020) present a bi-objective model integrating both the rapid needs assessment stage and the detailed needs assessment stage. Similar to Balcik (2017), Li et al. (2020) maximize the minimum coverage ratio achieved among community groups as the objective of the rapid needs assessment stage. In the detailed needs assessment stage, where the assessment teams need to visit all of the affected sites, Li et al. (2020) minimize the maximum assessment time over all assessment teams. Bruni et al. (2020) address post-disaster assessment operations from a customer-centric point of view by considering a service level constraint that guarantees a specified coverage ratio, with the objective of minimizing the total latency. Hakimifar et al. (2021) present easy-to-implement heuristic algorithms for the site selection and routing decisions of the assessment teams while planning and executing the field visits. They test the performance of the proposed heuristics within a simulation environment that incorporates various uncertain aspects of the post-disaster environment in the field, and compare the deterministic setting of the proposed simple heuristics with that of Balcik (2017).
There are also relevant studies focusing on damage assessment using unmanned aerial vehicles (UAVs) (e.g., Oruc and Kara 2018; Zhu et al. 2019, 2020; Glock and Meyer 2020). These studies focus mainly on settings where UAVs’ high-quality pictures can meet the assessment purposes and there is no possibility or necessity to conduct interviews with the community groups, which represent an essential part of needs assessment (Hakimifar et al. 2021). Both the SARP and UAV-based damage assessment studies belong to the family of Team Orienteering Problems (TOPs) (Butt and Cavalier 1994; Chao et al. 1996). TOPs address site selection and vehicle routing decisions to maximize the benefits collected from the visited nodes while constructing efficient routes. The difference between the TOP and the SARP lies in the concept of benefits: the TOP maximizes total collected profit, which is easy to quantify and unique per location (e.g., revenue), whereas the SARP maximizes the minimum coverage ratio of each characteristic over all visited sites. It is not straightforward to quantify the benefit of visiting a particular site without considering the characteristics of the other sites that might be included in the assessment plan (Balcik 2017). For more information regarding modeling and solution approaches for the TOP, we refer to the surveys by Vansteenwegen et al. (2011), Archetti et al. (2014), and Gunawan et al. (2016).
As an alternative to the classic maxmin approach, the lexicographic minimax approach has been applied in various areas of operations research. According to the survey by Ogryczak et al. (2014), it has been applied to provide equitable or fair solutions in areas such as facility location (Ogryczak 1997), supply chain optimization (Liu and Papageorgiou 2013), air traffic flow management (Bertsimas et al. 2008), and bandwidth allocation and network optimization (e.g., Nace and Orlin 2007; Ogryczak et al. 2005). The lexicographic minimax approach has also been integrated into vehicle routing models. Saliba (2006) presents heuristics for the VRP with a lexicographic max-order objective in a single-objective fashion, where the sorted route lengths are minimized lexicographically. Lehuédé et al. (2020) investigate a lexicographic minimax approach to solve a bi-objective vehicle routing problem with route balancing. To the best of our knowledge, this study is the first to introduce the lexicographic maximin approach to the humanitarian operations context. Furthermore, since the SARP belongs to the family of TOPs, we also extend the TOP literature by introducing the lexicographic maximin approach.
Our contribution is twofold: (i) we consider the bi-objective optimization problem of finding the set of solutions to the SARP that offer the best trade-off between total route duration and coverage ratio vector maximization. The proposed bi-objective problem addresses the two conflicting objectives of faster versus better assessment that decision makers face during rapid needs assessment processes. (ii) We consider the lexicographic maximin approach, a refinement of the classic maxmin approach, which differentiates between solutions with the same maxmin value. This differentiation provides a range of solutions for decision makers, in which more sites are included in the assessment plan. Visiting such extra sites is beneficial since it increases the coverage ratio of at least one characteristic.
A lexicographic maximin approach to the selective assessment routing problem
The lexicographic maximin is a refinement of the maxmin approach (Dubois et al. 1996). In the lexicographic maximin approach, we compare two vectors instead of simply comparing two numbers (the maxmin values). For that purpose, we sort the coverage ratios of a solution from lowest to highest and compare them with the sorted values of the other solution. To illustrate this, let us take solutions \(s_2\) and \(s_3\) from the illustrative example provided in Fig. 1. The vectors of coverage ratios of \(s_2\) and \(s_3\) are shown as bar charts in Fig. 2a, b, respectively. The sorted forms of these vectors are represented in Fig. 2c, d. According to the lexicographic maximin approach, solution \(s_3\) dominates \(s_2\) as it has a higher coverage ratio in the third position of the sorted vectors.
We use the following definition of the leximin ordering, adapted from the leximin definition of Bouveret and Lemaitre (2009):
Definition 1
(Leximin ordering) Let x and y denote two vectors in \({\mathbb {R}}^m\). Let \(x^{\uparrow }\) and \(y^{\uparrow }\) denote these same vectors where each element has been rearranged in a nondecreasing order. According to the leximin ordering:

the vector x leximin-dominates y (written \(x \succ _{leximin} y\)) if and only if \(\exists i \in \{1,\ldots ,m\}\) such that: \(\forall j \in \{1,\ldots ,i-1\}\), \(x_{j}^{\uparrow }\) = \(y_{j}^{\uparrow }\) and \(x_{i}^{\uparrow }\) > \(y_{i}^{\uparrow }\),

x and y are indifferent (written x \(\sim _{leximin}\) y ) if and only if \(x^{\uparrow }\) = \(y^{\uparrow }\).

\(x\succeq _{leximin} y\) is the case where \(x \succ _{leximin} y\) or x \(\sim _{leximin}\) y.
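Definition 1 can be sketched directly in code: sorting both vectors in nondecreasing order and comparing them element by element implements the leximin test (the example vectors below are illustrative, not the exact values of Fig. 2):

```python
# Sketch of Definition 1: strict leximin dominance between two vectors.
def leximin_dominates(x, y):
    """True iff x leximin-dominates y (strictly)."""
    xs, ys = sorted(x), sorted(y)  # nondecreasing rearrangement
    for xi, yi in zip(xs, ys):
        if xi > yi:
            return True
        if xi < yi:
            return False
    return False  # equal sorted vectors: indifferent, not dominant

# Two solutions with the same minimum (0.75): the second dominates the first
# because it is higher at the first position where the sorted vectors differ.
s2 = [0.75, 0.75, 1.0, 1.0, 0.75, 0.75]
s3 = [0.75, 1.0, 1.0, 1.0, 0.75, 0.75]
print(leximin_dominates(s3, s2))  # True
print(leximin_dominates(s2, s3))  # False
```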
As for the bi-objective setting, we consider the problem of finding the set of solutions to the SARP that offer the best trade-off between total route duration (we call it duration in the rest of the paper) and coverage ratio maximization. The lexicographic ordering is used to compare solutions on coverage ratio vector maximization aspects. We call this problem the leximin-SARP. As the leximin-SARP is a bi-objective problem, we define Pareto optimality in the leximin-SARP as follows:
Definition 2
(Dominance and Pareto optimality in the leximin-SARP) Let \(c_s\) be the duration of solution s and \(l^s = (l_{1}^s ,\ldots , l_{m}^s)\) the vector of coverage ratios for solution s.

Let s and \(s^\prime\) represent two solutions of the leximin-SARP. A solution s dominates a solution \(s^\prime\) iff \(c_s \le c_{s^\prime }\) and \(l^s \succeq _{leximin} l^{s^\prime }\), and either \(c_s < c_{s^\prime }\) or \(l^s \succ _{leximin} l^{s^\prime }\).

s is a Pareto optimal solution iff no other solution dominates s.
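A minimal sketch of this dominance test follows, assuming durations are minimized and coverage-ratio vectors are compared via their sorted forms (Python compares lists lexicographically, which matches the leximin ordering for equal-length vectors):

```python
# Sketch of Definition 2: s dominates t iff both objectives are at least as
# good (duration lower or equal, coverage vector leximin-greater or equal)
# and at least one comparison is strict.
def dominates(c_s, l_s, c_t, l_t):
    if not (c_s <= c_t and sorted(l_s) >= sorted(l_t)):
        return False
    return c_s < c_t or sorted(l_s) > sorted(l_t)
```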
To illustrate the concepts of dominance and Pareto optimality, we present an example in Table 1, listing a set of six solutions with their resulting durations and coverage ratios (CRs). In this example, \(s_2\) is dominated by \(s_1\) since \(s_1\) has a lower duration and leximin-dominates \(s_2\). Similarly, \(s_6\) is dominated by \(s_5\). \(s_4\), however, is not dominated by \(s_5\): although \(s_5\) leximin-dominates \(s_4\), \(s_4\) has a lower duration. Figure 3 provides a two-dimensional visualization of the solutions in this example. Note that although in lexicographic ordering we always compare two vectors of values instead of two points, in the two-dimensional visualization we only plot the maxmin value. The information in such a figure should therefore always be interpreted together with the solutions’ information, i.e., the vectors of coverage ratios and the solution durations (e.g., Table 1 and Fig. 3).
Solution method
The SARP, as a variant of the TOP, is an NP-hard problem (Balcik 2017), and consequently so is the leximin-SARP. Since computing the complete set of Pareto optimal solutions is impractical, we develop a bi-objective metaheuristic based on the multi-directional local search (MDLS) framework proposed by Tricoire (2012) to approximate the set of Pareto optimal solutions for the leximin-SARP. We describe the general structure of MDLS in Sect. 4.1. The components of the operators used in the MDLS are explained in Sect. 4.2.
Multidirectional local search
MDLS is a metaheuristic framework for multi-objective optimization, which generalizes the concept of single-objective local search to multiple objectives. To apply MDLS to the leximin-SARP as a bi-objective problem, a specific local search must be defined for each objective (duration and leximin). Therefore, we develop a parametric adaptive large neighborhood search algorithm, called \(ALNS_i\), where the parameter i represents the objective function being optimized: \(ALNS_1\) aims at minimizing duration and \(ALNS_2\) aims at maximizing the leximin value. ALNS is a metaheuristic that iteratively destroys and repairs an incumbent solution and has been successfully applied to various transportation problems (Shaw 1998; Ropke and Pisinger 2006; Pisinger and Ropke 2019). We explain the components of the ALNS algorithm in Sect. 4.2. By developing an ALNS for each objective, we can apply MDLS.
MDLS considers an archive of solutions, F, which is preserved and updated throughout the solution process. MDLS maintains F as a nondominated set by performing nondominated union operations (Tricoire 2012). The nondominated union of two sets performs the union of these two sets while removing the elements that are dominated by at least one element of either set. Algorithm 1 outlines the MDLS steps. At each iteration, three steps are performed: (i) a solution x is randomly selected from F, each solution having equal probability; (ii) for each objective, a single-objective local search is performed on x; and (iii) F is updated with the solutions obtained by the single-objective local searches (Tricoire 2012). We initialize the archive F with the solution of the cheapest insertion heuristic adapted from Pisinger and Ropke (2007).
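The three steps above can be sketched as follows (a simplified sketch; `local_searches` and `dominates` are assumed callables standing in for the ALNS operators and the dominance test, and are not from the paper):

```python
import random

# Minimal sketch of the MDLS loop (Tricoire 2012). local_searches holds one
# single-objective search per objective (here: duration and leximin), and
# dominates(a, b) is a strict dominance test, so dominates(a, a) is False.
def nondominated_union(F, new, dominates):
    pool = []
    for s in F + new:          # drop duplicates first
        if s not in pool:
            pool.append(s)
    return [s for s in pool if not any(dominates(t, s) for t in pool)]

def mdls(initial, local_searches, dominates, iterations=1000):
    F = [initial]                                      # archive of nondominated solutions
    for _ in range(iterations):
        x = random.choice(F)                           # (i) uniform random selection
        candidates = [ls(x) for ls in local_searches]  # (ii) one search per objective
        F = nondominated_union(F, candidates, dominates)  # (iii) archive update
    return F
```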
ALNS components
Within the designed MDLS framework, we define several destroy and repair operators for each objective. Then, at each iteration and for each objective, we randomly choose a destroy and a repair operator from the set of operators for that objective and generate a new solution. Since in the leximin-SARP it is not possible to visit all sites, the sites removed from one solution by the destroy operator are added to the list of unassigned (or not visited) sites. The repair operator then selects and inserts some of the unassigned sites into the partially destroyed solution until no further insertion is possible due to time constraints. Below, we explain the destroy and repair operators for each objective.
Duration objective: destroy and repair operators
Classical LNS destroy and repair operators for the VRP with the single objective of cost minimization are described in detail in the literature, e.g., by Pisinger and Ropke (2007). In this study, for the duration objective, we implement some of these destroy and repair operators. The applied removal operators are random removal, worst removal, and related removal. In the following, we describe each briefly:

Random removal: this operator diversifies the search by randomly selecting sites to be removed from the current solution. We repeat this process until the number of removed sites is equal to the destroy quantity q (Pisinger and Ropke 2007).

Worst removal: in this operator, following Pisinger and Ropke (2007), the q sites with the highest cost contribution to the duration objective are removed in order to “try to insert them at another place in the solution to obtain a better solution value”. Given a site i and a solution s, we define \(f^{\prime }(s,i)\) as the cost of the solution where site i has been removed completely, and \(\Delta f_{i}= f(s) - f^{\prime }(s,i)\). This operator randomizes the selection of sites to avoid situations where the same sites are removed repeatedly. In particular, we sort the sites by descending \(\Delta f_{i}\) and then choose a random number y in the interval [0,1). The site to be removed is \(L[y^{p_{worst}}|L|]\), where L represents the array of sorted sites, |L| its length, and the parameter \(p_{worst}\) controls the degree of randomization (Pisinger and Ropke 2007).

Related removal: this operator selects a random site i and removes it together with its \(q-1\) most related sites, aiming to shuffle these similar sites around and create new, perhaps better, solutions. We define the relatedness R(i, j) of two sites i and j as R(i, j) = \(t_{ij}\), where \(t_{ij}\) is the travel distance between the two sites. The lower R(i, j) is, the more related the two sites are. The randomization works similarly to the worst removal operator, with the parameter \(p_{related}\) controlling the degree of randomization (Pisinger and Ropke 2007).
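The two randomized destroy choices above can be sketched as follows (a hypothetical sketch; the helper names and the travel-time structure are illustrative, with the biased index rule taken from Ropke and Pisinger 2006):

```python
import random

# Sketch of the randomized selections used by the destroy operators. L is
# assumed sorted with the "worst" (or most related) candidate first; picking
# index floor(y^p * |L|) biases the choice toward the front of L, and larger
# p means stronger bias.
def biased_pick(L, p, rng=random.random):
    y = rng()  # y in [0, 1)
    return L[int(y ** p * len(L))]

def related_removal(seed, in_solution, travel_time, q):
    # remove the seed site plus its q-1 most related (closest) sites
    others = sorted((j for j in in_solution if j != seed),
                    key=lambda j: travel_time[seed][j])
    return [seed] + others[:q - 1]
```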
The two repair operators used for the duration objective work as follows:

Cheapest insertion heuristic: this operator calculates the minimum insertion cost for every unassigned site. The insertion cost is determined by subtracting the current solution’s total duration without the inserted site from the total duration with the inserted site. Afterward, we insert the site with the minimum cost difference into the current solution. We continue this process until no further insertion is possible, i.e., until the maximum allowed duration (\(T_{max}\)) is reached (Pisinger and Ropke 2007).

k-regret heuristic: this heuristic aims to improve on the cheapest insertion heuristic by incorporating look-ahead information when selecting the site to insert. The site with the biggest regret value is inserted, i.e., the site with the greatest cost difference between its cheapest insertion and its \(k-1\) next cheapest insertions (Pisinger and Ropke 2007). The selected site is inserted at its minimum-cost position. We consider the 2-regret and 3-regret heuristics as two separate repair operators.
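A single k-regret selection step can be sketched as follows (an illustrative sketch; the per-route insertion costs are assumed to be precomputed, and the names are not from the paper):

```python
# Sketch of a k-regret selection step: for each unassigned site, the regret is
# the summed gap between its cheapest insertion cost and its next k-1 cheapest
# ones; the site with the largest regret is inserted first.
def pick_by_regret(insertion_costs, k):
    """insertion_costs: dict site -> list of best insertion cost per route."""
    def regret(site):
        costs = sorted(insertion_costs[site])
        return sum(c - costs[0] for c in costs[1:k])
    return max(insertion_costs, key=regret)

# Site "b" has a large gap between its best and second-best option, so a
# 2-regret rule inserts it before "a" even though "a" is cheaper overall.
costs = {"a": [1.0, 2.0, 3.0], "b": [4.0, 9.0, 10.0]}
print(pick_by_regret(costs, k=2))  # b
```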
Leximin objective: destroy and repair operators
For the leximin objective, we use the following two destroy operators:

Random removal: this operator is the same as the one used for the duration objective and aims to diversify the search by randomly selecting sites to be removed from the current solution. This process is repeated until the number of removed sites is equal to the destroy quantity q.

Worst min removal: in this operator, the q sites with the worst contribution to the minimum coverage ratio of the solution are removed. That is, this operator starts by removing the sites whose removal decreases the minimum coverage ratio of the current solution the least. The randomization works as for the duration objective, controlled by the parameter \(p_{worst}\).
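The worst min removal ranking can be sketched as follows (a hypothetical sketch; `coverage_ratios` is assumed to be a callable mapping a set of visited sites to the vector of coverage ratios, and is not from the paper):

```python
# Sketch of worst min removal: rank the in-solution sites by how little the
# minimum coverage ratio drops when each is removed. Sites with the smallest
# drop contribute least to the maxmin value and are removed first.
def worst_min_order(visited, coverage_ratios):
    base = min(coverage_ratios(visited))
    drop = {s: base - min(coverage_ratios(visited - {s})) for s in visited}
    return sorted(visited, key=lambda s: drop[s])  # least harmful removal first
```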
Two repair operators have been designed to guide the search toward leximinoptimal solutions:

Highest maxmin insertion: this operator prioritizes inserting the sites with the highest contribution to the minimum coverage ratio of the characteristics first. That is, we first insert (if feasible) the site with the highest contribution to the maxmin value of the current solution. We insert each unassigned site at a feasible position that minimizes the total travel time. In order to discriminate between insertions with the same contribution to the maxmin objective, we provide two versions of this operator:

Highest random maxmin: in case there are ties in the maxmin values while sorting the sites, the candidate site for insertion is randomly selected among the sites with the best maxmin value. We continue this process until no further insertion is possible.

Highest maxmin with priority to the duration: in this method, among the sites with equivalent maxmin values, we select the one that yields the lowest increase to the duration objective function. We continue this process until no further insertion is possible.


Highest Leximin insertion: in this operator, we prioritize the selection of sites with the highest contribution to the leximin objective. More precisely, we compare the respective vectors of coverage ratios resulting from inserting each of the unassigned sites. This is in contrast to the Highest maxmin insertion, which considers only the maxmin value as a proxy for the leximin objective. Comparing full vectors makes this operator computationally expensive. We evaluate its performance in Sect. 5.3.
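The vector comparison underlying this operator can be sketched as follows (illustrative candidate data; sorted-list comparison in Python implements the leximin ordering for equal-length vectors):

```python
# Sketch of the Highest Leximin selection: among feasible candidate insertions,
# pick the site whose resulting coverage-ratio vector is leximin-greatest.
# candidates maps each site to the coverage-ratio vector obtained by inserting it.
def best_leximin_candidate(candidates):
    return max(candidates, key=lambda site: sorted(candidates[site]))

# Both candidates share the same minimum (0.5); the tie is broken by the
# next-smallest ratios, so "u" is selected.
cands = {"u": [0.5, 1.0, 0.75], "v": [0.5, 0.75, 0.75]}
print(best_leximin_candidate(cands))  # u
```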
We follow the adaptive weight adjustment described in Ropke and Pisinger (2006) to automatically adjust the weights of the different operators using statistics from earlier iterations. According to this method, we keep track of the success rate of each operator, which measures how well the operator has performed recently. Within the scope of our study, an operator is successful when using it leads to an update of the nondominated set of solutions. We divide the entire search into a number of segments, each corresponding to the same number of MDLS iterations. After each segment, we calculate new weights from the recorded scores using the formula \(\omega _{i,j+1}=\omega _{i,j}(1-r)+r\frac{\pi _i}{\theta _i}\), where \(\omega _{i,j}\) represents the weight of operator i used in segment j, \(\pi _i\) is the score of operator i obtained during the last segment, and \(\theta _i\) is the number of times we have attempted to use operator i during the last segment. The parameter r (reaction factor) controls how fast the weight adjustment algorithm reacts to changes in the effectiveness of the operators. This is similar to the scheme used by Ropke and Pisinger (2006).
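The update rule can be sketched as follows (a direct transcription of the formula above; the zero-attempt guard is an added assumption for operators unused in a segment):

```python
# Sketch of the adaptive weight update applied after each segment:
# w_{i,j+1} = w_{i,j} * (1 - r) + r * pi_i / theta_i  (Ropke and Pisinger 2006).
def update_weight(weight, score, attempts, r=0.1):
    if attempts == 0:
        return weight  # an unattempted operator keeps its previous weight
    return weight * (1 - r) + r * score / attempts

# An operator scoring 5 in 10 attempts drifts toward its success rate of 0.5.
print(update_weight(1.0, score=5, attempts=10, r=0.1))  # close to 0.95
```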
Computational results
The designed algorithm is implemented in C++. All computational experiments are carried out on a 64bit Windows Server with two 2.3 GHz Intel Core CPUs and 12 GB RAM. We describe our test instances and three different configurations in Sect. 5.1. Parameter settings are presented in Sect. 5.2. The quality of the Pareto front approximation is explained in Sect. 5.3. We analyze the equivalent solutions for the maxmin objective function in Sect. 5.4 and compare our results with the maxmin values reported in Balcik (2017) in Sect. 5.5.
Instance and configuration description
We evaluate our designed MDLS framework on the Balcik (2017) instances with 25, 50, 75, and 100 nodes. These instances are generated by modifying Solomon’s 100-node random (R) and random-clustered (RC) instances (Balcik 2017). Within these instances, there are 12 characteristics. The maximum allowed duration for each team (\(T_{max}\)) is set between two and eight, and the number of assessment teams (K) ranges between two and six. As a result, there are 48 instances. Table 2 shows the characteristics of the SARP instances used in our experiments.
We run MDLS on each instance 10 times with two different run times, using three different configurations called all, leximin, and maxmin. The difference between these configurations is whether or not the costly Highest Leximin insertion operator is used. All three configurations include all removal operators as well as all duration-oriented operators. The all configuration includes both Highest maxmin insertion and Highest Leximin insertion, the leximin configuration includes only Highest Leximin insertion, and the maxmin configuration includes only Highest maxmin insertion.
Parameter settings
To set the parameter values, preliminary tests were performed on instances with 25 and 50 nodes, changing one parameter at a time while keeping the others fixed. After these preliminary computations under different parameter settings, we found the following settings to perform best: the ruin quantity q used in the destroy operators is selected at random between at least one site and at most 30 percent of the number of sites in the current solution. The other parameters are set as follows: (\(p_{related}\), \(p_{worst}\), r) = (5, 3, 0.1). For the purpose of adaptive weight adjustment, each segment lasts 100 MDLS iterations. It is worth mentioning that the designed algorithm was highly robust with respect to these parameters: changing them did not significantly influence the algorithm’s performance.
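The ruin-quantity rule above can be sketched as follows (a minimal illustration under the stated bounds; the function name is ours):

```python
import math
import random

def ruin_quantity(num_sites, rng=random):
    """Draw q uniformly between one site and 30% of the sites
    currently in the solution (always at least one)."""
    upper = max(1, math.floor(0.3 * num_sites))
    return rng.randint(1, upper)
```

For a solution visiting 50 sites, q is drawn uniformly from {1, ..., 15}; for very small solutions, the lower bound of one site applies.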
Quality of the Pareto front approximation
To analyze the impact of using different configurations on the quality of the Pareto front approximation, we follow the approach described in Lehuédé et al. (2020). Accordingly, we provide a reference set for each instance. The reference set for each instance is the nondominated union of the sets returned by the algorithm over 60 runs, including all three MDLS configurations, two different time limits, and ten runs per setting. We compare the quality of each set by looking at the percentage of found solutions from the reference set. In addition, we analyze the percentage of reference solutions found within a 1%, 2%, and 3% distance of a solution from the assessed set. We say solution \(s_1\) is within an \(\alpha\)% distance of another solution \(s_2\) if, when the duration of \(s_2\) is multiplied by \(1 + \alpha /100\) and all its coverage ratios by \(1 - \alpha /100\), \(s_1\) dominates this transformed solution. The boxplots provided in “Appendix 2” (Figs. 9, 10, 11, 12, 13, 14, 15, 16) represent the percentage of reference solutions found and those within a 1%, 2%, and 3% distance of a solution from the reference set. The variability in the boxplots is derived from the results returned by 10 runs for each MDLS configuration, each instance, and each run time.
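Assuming a solution is represented by its duration (to be minimized) and its vector of coverage ratios (to be maximized), the \(\alpha\)% distance test can be written as follows (a sketch of our reading of the criterion, with hypothetical numbers in the usage example):

```python
def dominates(dur1, cov1, dur2, cov2):
    """s1 dominates s2: no worse in duration and in every coverage
    ratio, and strictly better in at least one criterion."""
    no_worse = dur1 <= dur2 and all(a >= b for a, b in zip(cov1, cov2))
    strictly = dur1 < dur2 or any(a > b for a, b in zip(cov1, cov2))
    return no_worse and strictly

def within_alpha(dur1, cov1, dur2, cov2, alpha):
    """s1 is within an alpha% distance of s2 if s1 dominates the
    degraded copy of s2 (longer duration, lower coverage)."""
    dur2_t = dur2 * (1 + alpha / 100)
    cov2_t = [c * (1 - alpha / 100) for c in cov2]
    return dominates(dur1, cov1, dur2_t, cov2_t)

# s1 is slightly longer and slightly less covering than s2,
# but still within a 2% distance of it.
print(within_alpha(10.1, [0.56, 0.70], 10.0, [0.571, 0.71], 2))
```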
We see that the three tested configurations of MDLS are close to the reference set for almost all instances. Moreover, we observe that all three configurations provide relatively similar approximations of the Pareto set in most cases. Therefore, the simple construction heuristics of the maxmin configuration are generally enough to reach an acceptable set. Note that although all and leximin configurations include the computationally expensive operator of Highest Leximin insertion, they do not result in better solution sets in most cases.
To evaluate the computational cost of this operator, we compare the average number of iterations performed by the MDLS algorithm with the various configurations for each instance. Figures 4 and 5 show the average number of iterations performed for each instance. As expected, integrating the Highest Leximin insertion operator significantly reduces the number of performed iterations. By looking at both the boxplots in “Appendix 2” and Figs. 4 and 5, we see that for some instances, more iterations do not necessarily result in better solutions. For example, in instance RC19, even though the maxmin configuration performs almost three times as many iterations as the leximin configuration, using the Highest Leximin insertion heuristic is important to obtain better solutions.
Equivalent solutions for the maxmin objective function
The motivation for studying the leximin-SARP is that solutions with the same maxmin value are not necessarily equivalent, and it is worth exploring them. In fact, the maxmin approach fails to distinguish between solutions with the same minimum value (the minimum coverage ratio in this study). Worse, optimizing maxmin first and duration second in a lexicographic fashion, as is done in Balcik (2017), may bias the search toward solutions that are good for the maxmin objective but comparatively bad for the leximin objective, since removing visits reduces the secondary objective. We call solutions that have the same maxmin value maxmin-equivalent. For example, Table 3 shows 12 solutions with the same maxmin value of 0.571 for the instance RC8. These solutions, however, differ in terms of the leximin objective. Solution s5 in Table 3 is the one reported in Balcik (2017) as achieving the best maxmin value. Table 3 lists solutions that provide a better vector of coverage ratios than s5 while incurring a modest increase in duration. These solutions include more sites in the assessment plan, which is beneficial since each additional visit increases the coverage ratio associated with at least one characteristic. In particular, solutions s12 and s1 provide the same maxmin value but are considerably different in terms of the leximin objective. We also note that our biobjective approach allows us to find solutions that would dominate s5 under the (maxmin, duration) lexicographic objective: s1, s2, s3, s4.
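The leximin comparison underlying this discussion can be illustrated with a short sketch (the coverage vectors below are hypothetical, not the actual Table 3 values): each vector is sorted in nondecreasing order, and the sorted vectors are compared lexicographically.

```python
def leximin_key(coverage):
    """Sorted (nondecreasing) coverage vector used for leximin comparison."""
    return sorted(coverage)

def leximin_better(a, b):
    """True if vector a is leximin-preferred to vector b."""
    return leximin_key(a) > leximin_key(b)  # lists compare lexicographically

# Two maxmin-equivalent vectors: both have minimum 0.571,
# but the second covers the remaining characteristics better.
s_a = [0.571, 0.60, 0.65]
s_b = [0.571, 0.75, 0.80]
assert min(s_a) == min(s_b)      # indistinguishable for maxmin
assert leximin_better(s_b, s_a)  # distinguishable for leximin
```

The maxmin objective sees only min(s_a) = min(s_b) = 0.571, while the leximin comparison proceeds to the second-smallest component and prefers s_b.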
Figure 6 depicts the histogram of the number of maxmin-equivalent solutions in the reference set for the instance RC8. We observe that in this instance, there are 22 distinct solutions with the same minimum coverage ratio of 0.571. We provide similar histograms for other instances in “Appendix 3”. We can see that, for example, instance RC16 has more than 70 distinct solutions with the same minimum coverage ratio of 0.125. We also observe that this trend is not confined to the highest maxmin values; e.g., the instance R11 has more than 9 maxmin-equivalent solutions for different values of the minimum coverage ratio. Note that these histograms include only nondominated solutions in the solution sets found for the leximin-SARP.
Comparison with maxmin values reported in previous approaches
The primary objective function in Balcik (2017) is maximizing the minimum coverage ratio. Minimizing total route duration is considered as a secondary objective in a lexicographic fashion. Our results show that this method can lead to solutions that are dominated by the leximin approach. Figure 7 provides a graphical representation of the nondominated set of solutions obtained by the MDLS algorithm for the instance R8. The solution duration is represented on the horizontal axis, and the minimum coverage ratio on the vertical axis. In this figure, the blue point represents the maxmin solution reported in Balcik (2017), which is dominated by other solutions obtained by MDLS. Moreover, we find new solutions that have higher maxmin values than the previously best known value. These points are represented in red (above the dashed line representing the previously best known maxmin value). Note that we do not have access to the full vector of coverage ratios of the solutions reported in Balcik (2017). Hence, the comparisons are based on the reported minimum value of the coverage ratios and the corresponding durations. For example, Fig. 8 shows a case for which we cannot say that the solutions obtained by the MDLS algorithm dominate the one in Balcik (2017). However, we know that there are two solutions that leximin-dominate it (the two solutions with a higher maxmin coverage ratio). The graphical representations of other instances in which we obtained such results are provided in “Appendix 4”.
We now look at how many distinct solutions we find with the same maxmin value as the one reported in Balcik (2017), counting all solutions from reference sets. Table 4 provides that information for each instance separately.
It is important to mention that in most of the runs, the designed MDLS finds the same maxmin coverage ratio as the previously best known value and, in some cases, an even higher one. In particular, for long run times, the maxmin, all, and leximin configurations find a solution with a maxmin value at least as good as the previously best known one in 87, 85 and 68 percent of the runs, respectively (see “Appendix 5” for the detailed percentages for each instance). This is in addition to providing a set of tradeoff solutions. We also note that the biobjective MDLS algorithm uses comparable CPUs (two 2.0 GHz Intel Xeon versus two 2.3 GHz Intel Core). MDLS produces approximation sets which include the previously best known maxmin values, in most cases within shorter run times, for all but the smallest instances. The reported CPU time for each instance in Balcik (2017) is the time for running the Tabu Search algorithm four times and taking the best value. In our proposed MDLS algorithm, we report the time limit per single run for each instance. “Appendix 5” provides statistics on how many times we found the same maxmin values reported in Balcik (2017) within 10 runs of each MDLS configuration.
Lexicographic maxmin in the context of humanitarian logistics
Equity (often used synonymously with fairness, as noted by Mostajabdaveh et al. (2019)) is an important consideration across a wide range of humanitarian operations. In Sect. 2, we mentioned that there are different approaches to considering equity in optimization models. In this study, we improve the maxmin approach by differentiating between solutions with the same minimum value. This concept can be applied in different areas of humanitarian logistics, where equity is often an important criterion. In the literature on humanitarian logistics, maxmin or minmax approaches are often employed to ensure some level of equity. This guarantees, for example, that all stakeholders receive service at a quality at least as good as the objective value. In the case of rapid needs assessment, it guarantees that each characteristic is covered at least at a certain ratio. However, these maxmin (resp. minmax) approaches do not differentiate between solutions with the same min (resp. max) value, even though doing so could improve the overall humanitarian benefit without degrading the aforementioned guarantee. To illustrate the added value of the lexicographic approach, we look at the solutions depicted in Table 3. Each row provides the coverage ratios of all characteristics for a given solution. In total, 12 solutions are represented, all produced by our solution method and each with the same maxmin value of 0.571. These solutions do not dominate each other, and they are sorted from shortest to longest duration, therefore also from worst to best coverage. When considering the maxmin objective function, any of these solutions can be obtained; for example, the solution algorithm by Balcik (2017) produces solution s5. However, solutions s6 to s12 are in fact better in terms of coverage ratios. Therefore, even when considering only the coverage ratios, the lexicographic approach generally provides better solutions than the maxmin approach.
It is important to note here that the maxmin approach could have produced any of these twelve solutions, as it simply does not discriminate between them, thus ignoring valuable information in the assessment of solution quality. Moreover, if we embed this lexicographic objective in a biobjective approach, we can balance coverage with, e.g., duration. For example, if our time budget is 11.5, then the solution we want is s9: it provides the best coverage under that time budget. These observations apply to other objectives that include measures of equity, which are very common in humanitarian logistics, for instance partial coverage of beneficiaries.
Conclusion
In this study, we investigate how the lexicographic maximin approach can refine the treatment of coverage ratios in the rapid needs assessment routing problem. We consider the biobjective optimization of total assessment time and coverage ratio vector maximization and provide an approximation of the Pareto set for the leximin-SARP using multidirectional local search. We develop classical neighborhood operators for the duration and coverage ratio vector maximization objectives. These operators are used within the multidirectional local search framework. We show that implementing them improves the quality of the produced solution sets.
Computational experiments also support our premise that different solutions with the same maxmin value are not equivalent. We see that for most of the instances, we find several maxmin-equivalent solutions that are in fact different in terms of the lexicographic maximin approach, including solutions that offer a better vector of coverage ratios with a modest increase in duration. As a byproduct of our study, we see that the designed MDLS, in some cases, finds solutions with higher maxmin values than previously known ones, or solutions that dominate the previously best known solutions with regard to the new leximin objective.
Introducing the lexicographic maximin approach to the rapid needs assessment stage is a first step toward showing that there is an alternative to the classic maxmin approach that can express equity in a better way. Future research can focus on using this approach in other contexts of humanitarian logistics, such as last mile distribution problems where fairness to beneficiaries is of vital importance. Future research can also consider uncertainty with respect to travel time. Due to the high level of uncertainty in the condition of the transportation network after the occurrence of a disaster, complete information on travel times may not be available during the rapid needs assessment stage. This uncertainty can be captured using robust optimization or stochastic programming.
Data availability
The data set that supports the findings of this study is provided by Balcik (2017).
References
ACAPS (2011) Technical brief: Purposive sampling and site selection in phase 2. https://www.humanitarianresponse.info/sites/www.humanitarianresponse.info/files/documents/files/Purposive_Sampling_Site_Selection_ACAPS.pdf. Accessed 28 June 2020
Archetti C, Speranza MG, Vigo D (2014) Chapter 10: vehicle routing problems with profits. In: Vehicle routing: problems, methods, and applications, 2nd edn. SIAM, pp 273–297
Arii M (2013) Rapid assessment in disasters. Jpn Med Assoc J 56(1):19–24
Balcik B (2017) Site selection and vehicle routing for postdisaster rapid needs assessment. Transp Res Part E Logist Transp Rev 101:30–58
Balcik B, Beamon BM, Smilowitz K (2008) Last mile distribution in humanitarian relief. J Intell Transp Syst 12(2):51–63
Balcik B, Yanıkoğlu I (2020) A robust optimization approach for humanitarian needs assessment planning under travel time uncertainty. Eur J Oper Res 282(1):40–57
Bertsimas D, Lulli G, Odoni A (2008) The air traffic flow management problem: An integer optimization approach. In: International conference on integer programming and combinatorial optimization. Springer, pp 34–46
Bouveret S, Lemaitre M (2009) Computing leximinoptimal solutions in constraint networks. Artif Intell 173(2):343–364
Bruni M, Khodaparasti S, Beraldi P (2020) The selective minimum latency problem under travel time variability: an application to post-disaster assessment operations. Omega 92:102154
Butt SE, Cavalier TM (1994) A heuristic for the multiple tour maximum collection problem. Comput Oper Res 21(1):101–111
Campbell AM, Vandenbussche D, Hermann W (2008) Routing for relief efforts. Transp Sci 42(2):127–145
Cao W, Çelik M, Ergun Ö, Swann J, Viljoen N (2016) Challenges in service network expansion: an application in donated breastmilk banking in South Africa. Socioecon Plan Sci 53:33–48
Chao IM, Golden BL, Wasil EA (1996) The team orienteering problem. Eur J Oper Res 88(3):464–474
de la Torre LE, Dolinskaya IS, Smilowitz KR (2012) Disaster relief routing: Integrating research and practice. Socioecon Plann Sci 46(1):88–97
Dubois D, Fargier H, Prade H (1996) Refinements of the maximin approach to decisionmaking in a fuzzy environment. Fuzzy Sets Syst 81(1):103–122
Eisenhandler O, Tzur M (2019) The humanitarian pickup and distribution problem. Oper Res 67(1):10–32
Glock K, Meyer A (2020) Mission planning for emergency rapid mapping with drones. Transp Sci 54(2):534–560
Gunawan A, Lau HC, Vansteenwegen P (2016) Orienteering problem: a survey of recent variants, solution approaches and applications. Eur J Oper Res 255(2):315–332
Gutjahr WJ, Fischer S (2018) Equity and deprivation costs in humanitarian logistics. Eur J Oper Res 270(1):185–197
Hakimifar M, Balcik B, Fikar C, Hemmelmayr V, Wakolbinger T (2021) Evaluation of field visit planning heuristics during rapid needs assessment in an uncertain postdisaster environment. Ann Oper Res 1–42
Holguín-Veras J, Pérez N, Jaller M, Van Wassenhove LN, Aros-Vera F (2013) On the appropriate objective function for post-disaster humanitarian logistics models. J Oper Manag 31(5):262–280
Huang M, Smilowitz KR, Balcik B (2013) A continuous approximation approach for assessment routing in disaster relief. Transp Res Part B Methodol 50:20–41
IFRC (2008) Guidelines for assessment in emergencies, Geneva, Switzerland. https://www.icrc.org/en/doc/assets/files/publications/icrc002118009.pdf. Accessed 28 June 2020
Lehuédé F, Péton O, Tricoire F (2020) A lexicographic minimax approach to the vehicle routing problem with route balancing. Eur J Oper Res 282(1):129–147
Li X, Liu X, Ma H, Hu S (2020) Integrated routing optimization for post-disaster rapid-detailed need assessment. Int J Gen Syst 49(5):521–545
Liu S, Papageorgiou LG (2013) Multiobjective optimisation of production, distribution and capacity planning of global supply chains in the process industry. Omega 41(2):369–382
Mostajabdaveh M, Gutjahr WJ, Sibel Salman F (2019) Inequity-averse shelter location for disaster preparedness. IISE Trans 51(8):809–829
Nace D, Orlin JB (2007) Lexicographically minimum and maximum load linear programming problems. Oper Res 55(1):182–187
Nolz PC, Doerner KF, Gutjahr WJ, Hartl RF (2010) A biobjective metaheuristic for disaster relief operation planning. In: Advances in multiobjective nature inspired computing. Springer, pp 167–187
Ogryczak W (1997) On the lexicographic minimax approach to location problems. Eur J Oper Res 100(3):566–585
Ogryczak W, Luss H, Pióro M, Nace D, Tomaszewski A (2014) Fair optimization and networks: a survey. J Appl Math 2014
Ogryczak W, Pióro M, Tomaszewski A (2005) Telecommunications network design and maxmin optimization problem. J Telecommun Inf Technol 43–56
Ortuño MT, Tirado G, Vitoriano B (2011) A lexicographical goal programming based decision support system for logistics of humanitarian aid. TOP 19(2):464–479
Oruc BE, Kara BY (2018) Post-disaster assessment routing problem. Transp Res Part B Methodol 116:76–102
Pamukcu D, Balcik B (2020) A multicover routing problem for planning rapid needs assessment under different informationsharing settings. OR Spectrum 42(1):1–42
Pisinger D, Ropke S (2007) A general heuristic for vehicle routing problems. Comput Oper Res 34(8):2403–2435
Pisinger D, Ropke S (2019) Large neighborhood search. In: Handbook of metaheuristics. Springer, pp 99–127
Ransikarbum K, Mason SJ (2016) Goal programmingbased postdisaster decision making for integrated relief distribution and earlystage network restoration. Int J Prod Econ 182:324–341
Ropke S, Pisinger D (2006) An adaptive large neighborhood search heuristic for the pickup and delivery problem with time windows. Transp Sci 40(4):455–472
Saliba S (2006) Heuristics for the lexicographic max-ordering vehicle routing problem. CEJOR 14(3):313–336
Shaw P (1998) Using constraint programming and local search methods to solve vehicle routing problems. In: International conference on principles and practice of constraint programming. Springer, pp 417–431
Tricoire F (2012) Multidirectional local search. Comput Oper Res 39(12):3089–3101
Tzeng GH, Cheng HJ, Huang TD (2007) Multiobjective optimal planning for designing relief delivery systems. Transp Res Part E Logist Transp Rev 43(6):673–686
Vansteenwegen P, Souffriau W, Van Oudheusden D (2011) The orienteering problem: a survey. Eur J Oper Res 209(1):1–10
Vitoriano B, Ortuño MT, Tirado G, Montero J (2011) A multicriteria optimization model for humanitarian aid distribution. J Global Optim 51(2):189–208
Young HP, Isaac RM (1995) Equity: in theory and practice. J Econ Lit 33(1):210–210
Zhu M, Du X, Zhang X, Luo H, Wang G (2019) Multi-UAV rapid-assessment task-assignment problem in a post-earthquake scenario. IEEE Access 7:74542–74557
Zhu M, Zhang X, Luo H, Wang G, Zhang B (2020) Optimization Dubins path of multiple UAVs for post-earthquake rapid-assessment. Appl Sci 10(4):1388
Zissman M, Evans J, Holcomb K, Jones D, Kercher M, Mineweaser J, Schiff A, Shattuck M, Gralla E, Goentzel J et al (2014) Development and use of a comprehensive humanitarian assessment tool in post-earthquake Haiti. Procedia Eng 78:10–21
Acknowledgements
The authors want to thank Burcu Balcik for providing us with the data set used in her study.
Funding
Open access funding provided by Vienna University of Economics and Business (WU).
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1: Mathematical formulation of the leximin-SARP
The original SARP model was proposed by Balcik (2017). In this section, we provide the mathematical formulation of the leximin-SARP. The following notation, parameters, and decision variables are used to formulate the leximin-SARP model:
Sets/indices

N = set of sites in the affected area, indexed by i, j \(\in\) \(N_0\)

C = set of characteristics indexed by c \(\in\) C

\(N_{0}\) = N \(\cup\) \(\{0\}\) where \(\{0\}\) is the depot

K = set of assessment teams indexed by k \(\in\) K
Parameters

\(\alpha _{ic} =\) takes the value 1 if node \(i \in N\) carries characteristic \(c \in C\) and 0 otherwise

\(\tau _{c} =\) total number of sites that carry characteristic \(c \in C\)

\(t_{ij} =\) travel time between nodes i and j

\(T_{max} =\) available time for each team
Decision Variables

\(x_{ijk} = 1\) if team k travels from site i directly to site j, and 0 otherwise

\(y_{ik} = 1\) if team k visits site i and 0 otherwise

\(u_i =\) sequence in which site i is visited
Additionally, in order to model the lexicographic objective, we use notation similar to what is used by Ogryczak (1997) and describe it similarly, although our objective is to maximize values whereas their objective is to minimize them. Let \(\langle a \rangle = (a_{\langle 1 \rangle }, a_{\langle 2 \rangle },\ldots , a_{\langle C \rangle })\) denote the vector obtained from a by rearranging its components in the nondecreasing order (therefore \(a_{\langle 1 \rangle } \le a_{\langle 2 \rangle } \le \cdots \le a_{\langle C \rangle }\)). Let also \(\theta _j(y) = y_{\langle j \rangle }, \forall j \in C\), for any vector y of length C.
Finally, decision variable vector \(\gamma = (\gamma _1, \gamma _2,\ldots , \gamma _{C})\) represents the vector of coverage ratios of all characteristics in C. In order to perform leximin optimization of vector \(\gamma\) we perform lexicographic maximization on \(\langle \gamma \rangle\), where lexicographic maximization is denoted lexmax. Accordingly, we can write the mathematical formulation of the leximinSARP as follows:
Mathematical formulation
The first objective function (1) maximizes the minimum coverage ratio lexicographically. The second objective function (2) minimizes the total route duration. The coverage ratio of each characteristic is defined through constraints (3). Constraints (4) and (5) guarantee that a vehicle enters and leaves the depot and each selected site. Constraints (6) ensure that each site is visited at most once. Constraints (7) control the number of routes according to the available number of assessment teams. Constraints (8) ensure that each route is finished within the allowed duration. Constraints (9) eliminate subtours. Constraints (10)–(13) define the domains of the variables.
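The equation images for (1)–(13) are not reproduced in this version of the text. Based on the descriptions above and on the SARP model of Balcik (2017), the leximin-SARP can be reconstructed along the following lines; this is a sketch consistent with the stated notation, not the verbatim published formulation (in particular, the duration terms in (2) and (8) may additionally include on-site assessment times):

```latex
\begin{align}
\operatorname{lexmax}\quad & \langle \gamma \rangle = (\gamma_{\langle 1 \rangle}, \ldots, \gamma_{\langle C \rangle}) && (1)\\
\min\quad & \textstyle\sum_{k \in K} \sum_{i \in N_0} \sum_{j \in N_0} t_{ij}\, x_{ijk} && (2)\\
\text{s.t.}\quad & \gamma_c = \tfrac{1}{\tau_c} \textstyle\sum_{k \in K} \sum_{i \in N} \alpha_{ic}\, y_{ik} & \forall c \in C \quad & (3)\\
& \textstyle\sum_{j \in N_0} x_{ijk} = y_{ik} & \forall i \in N,\ k \in K \quad & (4)\\
& \textstyle\sum_{j \in N_0} x_{jik} = y_{ik} & \forall i \in N,\ k \in K \quad & (5)\\
& \textstyle\sum_{k \in K} y_{ik} \le 1 & \forall i \in N \quad & (6)\\
& \textstyle\sum_{j \in N} x_{0jk} \le 1 & \forall k \in K \quad & (7)\\
& \textstyle\sum_{i \in N_0} \sum_{j \in N_0} t_{ij}\, x_{ijk} \le T_{max} & \forall k \in K \quad & (8)\\
& u_i - u_j + 1 \le |N| \bigl(1 - \textstyle\sum_{k \in K} x_{ijk}\bigr) & \forall i, j \in N \quad & (9)\\
& x_{ijk} \in \{0, 1\} & \forall i, j \in N_0,\ k \in K \quad & (10)\\
& y_{ik} \in \{0, 1\} & \forall i \in N,\ k \in K \quad & (11)\\
& u_i \ge 0 & \forall i \in N \quad & (12)\\
& \gamma_c \in [0, 1] & \forall c \in C \quad & (13)
\end{align}
```

Here (9) is a Miller–Tucker–Zemlin-style subtour elimination constraint using the sequence variables \(u_i\).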
Appendix 2: Boxplots
The boxplots provided in this section represent the proportion of solutions in the reference set found within a 0%, 1%, 2%, 3% distance for each MDLS configuration within short and long time limits.
We observe that the three tested configurations of MDLS do not find more than 15% of the reference set, except for small instances with 25 nodes (see Figs. 9, 10, 11, 12, 13, 14, 15, 16, part a). However, for almost all instances and methods, the produced set is close to the reference set. In most cases, more than 70% of the returned sets are within a 1% distance from the reference set (see Figs. 9, 10, 11, 12, 13, 14, 15, 16, part b). This trend clearly increases when considering a 2% and 3% distance from the reference set (Figs. 9, 10, 11, 12, 13, 14, 15, 16, parts c and d).
These boxplots also show that all three configurations provide relatively similar approximations of the Pareto set in most of the cases. Therefore, the results returned by the maxmin configuration, which is the simplest among the three, are generally enough to reach an acceptable set. Nevertheless, in some cases, the Highest Leximin insertion heuristic, used in the leximin and all configurations, helps to find better solutions. For example, for instance RC19 (Fig. 16b–d), the leximin and all configurations perform better. In Highest Leximin insertion, a vector of coverage ratios is compared, while in Highest maxmin insertion, only the maxmin value is evaluated. Therefore, the maxmin configuration is faster and performs more iterations within the same time budget.
During our computational experiments, we observed that, in general, when the average number of node insertions per iteration of the leximin objective is very low (fewer than about four), the leximin configuration performs better than the maxmin configuration. The reason might be that in this situation, there is limited scope for inserting nodes. Therefore, when a new node is inserted under the maxmin configuration and this insertion does not improve the leximin objective, there is limited possibility to make up for it later.
Appendix 3: Distribution analysis of maxmin equivalents
Appendix 4: Union of nondominated solution sets
Appendix 5: Percentage of found best maxmin values within individual runs
See Table 5.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Hakimifar, M., Hemmelmayr, V.C. & Tricoire, F. A lexicographic maximin approach to the selective assessment routing problem. OR Spectrum (2022). https://doi.org/10.1007/s00291-022-00687-8
DOI: https://doi.org/10.1007/s00291-022-00687-8
Keywords
 Equity in humanitarian logistics
 Humanitarian needs assessment
 Multidirectional local search
 Team orienteering