Introduction

The max-min (or min-max) approach is one of the most widely applied measures to address equity in humanitarian relief operations, such as maximizing the minimum satisfied demand in relief distribution (e.g., Tzeng et al. 2007; Vitoriano et al. 2011; Ransikarbum and Mason 2016; Cao et al. 2016) or minimizing the maximum arrival time of critical supplies (e.g., Campbell et al. 2008; Nolz et al. 2010). A major drawback of this approach is that all solutions with the same max-min value are considered equivalent, even though they may differ significantly in other respects. This paper aims to refine this approach by applying the lexicographic max-min approach, which has been applied in various supply chain problems such as facility location and network optimization (Ogryczak et al. 2014). We focus on rapid needs assessment operations, which, in contrast to relief distribution problems, have received little attention in the humanitarian logistics literature (Pamukcu and Balcik 2020). According to de la Torre et al. (2012), the majority of studies addressing relief distribution problems assume that the needs of impacted populations are already known or can be estimated, and needs assessment operations are not specifically addressed.

Rapid needs assessment starts immediately after a disaster strikes to quickly evaluate the disaster’s impact through direct observations of affected sites and interviews with impacted community groups (IFRC 2008; ACAPS 2011; Arii 2013). The assessment teams comprise experts from various areas, ranging from those familiar with the local area to specialists in public health, epidemiology, nutrition, logistics, and shelter (ACAPS 2011; Arii 2013). A successful rapid needs assessment assists humanitarian agencies in effectively satisfying the needs of affected people at the time of greatest need (Arii 2013). Planning the field visits, and accordingly deciding which sites to visit, is an important decision for a successful assessment. Since time and resources are limited during the rapid needs assessment stage, it is usually not possible to evaluate the entire affected region; therefore, assessment teams select a sample of affected sites to visit using sampling methods (ACAPS 2011). These methods, in general, aim to select a limited number of sites whose visits will provide the opportunity for assessment teams to observe and compare the conditions of various affected community groups such as displaced persons, host communities, and returnees (IFRC 2008; ACAPS 2011). The goal here is to achieve acceptable coverage of the various community groups.

Balcik (2017) proposes the “Selective Assessment Routing Problem” (SARP), a mixed-integer model that simultaneously addresses the site selection and routing decisions of the rapid needs assessment teams. Balcik (2017) aims to evaluate the post-disaster conditions of different community groups, each carrying a distinct characteristic. The objective function in the SARP is maximizing the minimum coverage ratio achieved across the community groups (max-min approach). The coverage ratio of a characteristic is calculated as the number of times that the characteristic is covered in the assessment plan divided by the total number of sites in the network carrying that characteristic (Balcik 2017). The SARP has route duration constraints, which define the assessment deadline.
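The coverage ratio calculation described above can be sketched in a few lines. The following is an illustrative helper of our own (not code from Balcik 2017), assuming each site is encoded as a binary vector over the characteristics:

```python
def coverage_ratios(sites, visited):
    """Coverage ratio per characteristic: (# visited sites carrying it)
    divided by (# all sites in the network carrying it).
    sites: dict site_id -> tuple of 0/1 flags; visited: set of site ids."""
    m = len(next(iter(sites.values())))
    totals = [0] * m    # sites in the network carrying each characteristic
    covered = [0] * m   # visited sites carrying each characteristic
    for sid, flags in sites.items():
        for k, f in enumerate(flags):
            if f:
                totals[k] += 1
                if sid in visited:
                    covered[k] += 1
    return [covered[k] / totals[k] if totals[k] else 1.0 for k in range(m)]

# Four sites, two characteristics; sites 2 and 4 are in the assessment plan:
sites = {1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (1, 0)}
print(coverage_ratios(sites, visited={2, 4}))  # → [0.6666666666666666, 0.5]
```

Characteristic 1 is carried by three sites, of which two are visited (ratio 2/3); characteristic 2 is carried by two sites, of which one is visited (ratio 1/2).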

To illustrate the max-min approach and the problems that may arise when not discriminating between solutions with the same minimum coverage ratio in the SARP, we provide an illustrative example in Fig. 1. In this example, there are three solutions (\(s_1\), \(s_2\), and \(s_3\)) to a SARP instance with three assessment teams and six characteristics. The array of 0s and 1s at each site indicates which characteristics are present at that site. For example, site number 10 carries characteristics 1 and 4. For solution \(s_1\) (see Fig. 1a), the minimum coverage ratio is 0.5, whereas in solution \(s_2\) (see Fig. 1b) it is 0.75. Therefore, from a max-min point of view, \(s_2\) outperforms \(s_1\) since it has a higher minimum coverage ratio. On the other hand, solutions \(s_2\) and \(s_3\) (see Fig. 1b, c) provide the same minimum coverage value of 0.75 and are therefore considered equal from the max-min point of view. However, these two solutions may not be equally desirable from a decision maker’s point of view.

In practice, solutions with the same max-min value often differ in desirability due to other criteria. In the case of the SARP, solutions with the same max-min value can provide different coverage for the characteristics other than the least covered one, which is also important: in rapid needs assessment, visiting such extra sites is desirable since each additional visit improves the coverage ratio associated with at least one characteristic (Balcik 2017). Therefore, we believe that introducing the lexicographic maximin approach is especially relevant in this context. This approach has already been integrated with a few VRP algorithms (Saliba 2006; Lehuédé et al. 2020).

Fig. 1
figure 1

Illustrative examples for max-min approach

During rapid needs assessment operations, minimizing the total travel time (having a “faster” assessment) on the one hand and maximizing the reliability level (having a “better” assessment) on the other are two conflicting yet equally important goals of decision makers. As Zissman et al. (2014) indicate, in planning a humanitarian needs assessment for Haiti, the decision makers faced a number of design trade-offs, including having a faster or a better assessment: “The key challenge was that the data were required urgently: a better assessment could be produced by spending more time designing its elements and verifying its results, but the data were needed faster to inform urgent actions to provide relief to those in need” (Zissman et al. 2014, p. 13). To capture these two conflicting objectives, we propose a bi-objective optimization problem in which we find the set of solutions to the SARP that offer the best trade-off between total route duration (assessment time) and the maximization of community group coverage ratios, compared using lexicographic ordering.

To solve the bi-objective problem, we apply a multi-directional local search method which is a multi-objective optimization framework that generalizes the concept of local search to multiple objectives (Tricoire 2012). Within this method, we design Adaptive Large Neighborhood Search (ALNS) operators to guide the search toward solutions with higher coverage ratios. We evaluate and analyze the proposed approaches via several computational experiments.

The remainder of the paper is organized as follows. In Sect. 2, we review the relevant literature. In Sect. 3, we introduce the leximin approach to the SARP and explain how we can represent the problem in a bi-objective fashion. We present a solution method and computational results in Sects. 4 and 5, respectively. Finally, we provide conclusions in Sect. 6.

Related literature

Equity has been considered in numerous application areas of humanitarian supply chains, and there are various approaches for expressing it in the literature. According to Young and Isaac (1995), as cited in Cao et al. (2016), equity measures based on different theories of justice can be categorized into three classes: “(i) Aristotle’s proportional equity principle, where the resources are allocated in proportion to the beneficiaries’ needs, (ii) classical utilitarianism, where the main aim is to maximize the total welfare of the whole system, and (iii) the Rawlsian maximin or difference principle, which refers to maximizing the welfare of the worst-off beneficiaries in the system” (Cao et al. 2016, p. 34). There are also other equity measures in the literature, such as the Gini coefficient adopted from the economics literature (e.g., Gutjahr and Fischer 2018; Eisenhandler and Tzur 2019; Mostajabdaveh et al. 2019), lexicographic goal programming formulations (e.g., Ortuño et al. 2011), and the deprivation cost concept (e.g., Holguín-Veras et al. 2013). See Mostajabdaveh et al. (2019) for a summary of different equity measures.

In this paper, we focus on the Rawlsian maximin measures within the context of humanitarian operations. In fact, the focus of this paper is not to compare different equity measures as the definition of an appropriate measure depends on the context.

The min-max approach has been applied in numerous relief distribution problems. Tzeng et al. (2007) construct a relief-distribution model using a multi-objective model for a joint facility location and commodity transportation problem, where the objectives are minimizing total costs, minimizing total travel time, and maximizing the minimal satisfaction during the planning period. Campbell et al. (2008) focus on two objectives of minimizing the maximum arrival time of supplies and the average arrival times of supplies. The authors demonstrate the potential impact of using these objective functions on the trade-off between efficiency and equity within the context of the traveling salesperson and vehicle routing problems. Balcik et al. (2008) address equity by minimizing the maximum percentage of unsatisfied demand over all locations, whereas Ransikarbum and Mason (2016) maximize the minimum percentage of satisfied demand in relief distribution.

Apart from studies that focus on the distribution of relief items, field visit planning for needs assessment processes has been studied in a few papers in the field of optimization. Huang et al. (2013) consider routing of post-disaster assessment teams. In this study, the assessment teams visit all communities in the affected regions, which makes this model appropriate for the detailed assessment stage, where time allows visiting all the impacted sites. This, however, is not the case in the rapid needs assessment stage, where it is usually not possible to visit all the sites due to a shortage of time. Constructing routes for rapid needs assessment is studied by Balcik (2017), where assessment teams need to select a subset of sites to visit while considering equity among various community groups. Balcik (2017) presents a mixed-integer model for the proposed SARP, which simultaneously addresses site selection and routing decisions and supports a rapid needs assessment process based on purposive sampling, a method that only selects sites carrying certain characteristics. The objective function is the maximization of the minimum coverage ratio achieved across all community groups; minimizing total route duration is used as a secondary objective in a lexicographic fashion.

The study of Balcik (2017) has been extended in various directions. Balcik and Yanıkoğlu (2020) consider travel time as an uncertain parameter in post-disaster networks and propose a robust optimization approach to address the uncertainty. Pamukcu and Balcik (2020) assume that assessment coordinators specify a target number of observations for each community group in advance, and the objective is to minimize the total duration required to cover all community groups. Li et al. (2020) present a bi-objective model integrating both the rapid needs assessment stage and the detailed needs assessment stage. Li et al. (2020), similar to Balcik (2017), maximize the minimum coverage ratio achieved among community groups as the objective of the rapid needs assessment stage. In the detailed needs assessment stage, where the assessment teams need to visit all of the affected sites, Li et al. (2020) minimize the maximum assessment time over all assessment teams. Bruni et al. (2020) address post-disaster assessment operations from a customer-centric point of view by considering a service level constraint that guarantees a specified coverage ratio, with the objective of minimizing the total latency. Hakimifar et al. (2021) present easy-to-implement heuristic algorithms for the site selection and routing decisions of the assessment teams while planning and executing the field visits. They test the performance of the proposed heuristic algorithms within a simulation environment that incorporates various uncertain aspects of the post-disaster environment in the field. The authors compare the deterministic setting of the proposed simple heuristics with that of Balcik (2017).

There are also relevant studies focusing on damage assessment using unmanned aerial vehicles (UAVs) (e.g., Oruc and Kara 2018; Zhu et al. 2019, 2020; Glock and Meyer 2020). These studies focus mainly on settings where the UAVs’ high-quality pictures can meet the assessment purposes and there is no possibility or necessity to conduct interviews with the community groups, which represent an essential part of needs assessment (Hakimifar et al. 2021). Both the SARP and UAV-based damage assessment problems belong to the family of Team Orienteering Problems (TOPs) (Butt and Cavalier 1994; Chao et al. 1996). TOPs address site selection and vehicle routing decisions to maximize the benefits collected from the visited nodes while constructing efficient routes. The difference between the TOP and the SARP lies in the concept of benefits: the TOP maximizes total collected profits, which are easy to quantify and unique per location (e.g., revenue), whereas the SARP maximizes the minimum coverage ratio of each characteristic over all visited sites. It is not straightforward to quantify the benefits of visiting a particular site without considering the characteristics of the other sites that might be included in the assessment plan (Balcik 2017). For more information regarding modeling and solution approaches for the TOP, we refer to the surveys by Vansteenwegen et al. (2011), Archetti et al. (2014), and Gunawan et al. (2016).

As an alternative to the classic max-min approach, the lexicographic minimax approach has been applied in various areas of operations research. According to the survey by Ogryczak et al. (2014), it has been applied to provide equitable or fair solutions in areas such as facility location (Ogryczak 1997), supply chain optimization (Liu and Papageorgiou 2013), air traffic flow management (Bertsimas et al. 2008), and bandwidth allocation and network optimization (e.g., Nace and Orlin 2007; Ogryczak et al. 2005). The lexicographic minimax approach has also been integrated into vehicle routing models. Saliba (2006) presents heuristics for the VRP with a lexicographic max-order objective in a single-objective fashion; the lexicographic max-order objective minimizes the lexicographically sorted route lengths. Lehuédé et al. (2020) investigate a lexicographic minimax approach to solve a bi-objective vehicle routing problem with route balancing. To the best of our knowledge, this study is the first to introduce the lexicographic maximin approach to the humanitarian operations context. Furthermore, since the SARP belongs to the family of TOPs, we also extend the literature on TOPs by introducing the lexicographic maximin approach.

Our contribution is twofold: (i) we consider the bi-objective optimization problem of finding the set of solutions to the SARP that offer the best trade-off between total route duration and coverage ratio vector maximization. The proposed bi-objective problem addresses the two conflicting objectives of faster versus better assessment that decision makers face during rapid needs assessment processes. (ii) We consider the lexicographic maximin approach, a refinement of the classic max-min approach, which differentiates between solutions with the same max-min value. This differentiation provides a range of solutions for decision makers, in which more sites are included in the assessment plan. Visiting such extra sites is beneficial since it increases the coverage ratio of at least one characteristic.

A lexicographic maximin approach to the selective assessment routing problem

The lexicographic maximin is a refinement of the max-min approach (Dubois et al. 1996). In the lexicographic maximin approach, we compare two vectors instead of simply comparing two numbers (the max-min values). For that purpose, we sort the coverage ratios of a solution from lowest to highest and compare them with the sorted values of the other solution. To illustrate this, let us take solutions \(s_2\) and \(s_3\) from the illustrative example provided in Fig. 1. The vectors of coverage ratios of \(s_2\) and \(s_3\) are shown as bar charts in Fig. 2a, b, respectively. The sorted forms of these vectors are represented in Fig. 2c, d. According to the lexicographic maximin approach, solution \(s_3\) dominates \(s_2\) as it has a higher coverage ratio in the third position of the sorted vectors.

Fig. 2
figure 2

Illustrative example for lexicographic maximin approach based on solutions in Fig. 1

We use the following definition of the leximin ordering, adapted from the leximin definition of Bouveret and Lemaitre (2009):

Definition 1

(Leximin ordering) Let x and y denote two vectors in \({\mathbb {R}}^m\). Let \(x^{\uparrow }\) and \(y^{\uparrow }\) denote these same vectors where each element has been rearranged in a non-decreasing order. According to the leximin ordering:

  • the vector x leximin-dominates y (written \(x \succ _{leximin} y\)) if and only if \(\exists i \in \{1,\ldots ,m\}\) such that: \(\forall j \in \{1,\ldots ,i-1\}\), \(x_{j}^{\uparrow }\) = \(y_{j}^{\uparrow }\) and \(x_{i}^{\uparrow }\) > \(y_{i}^{\uparrow }\),

  • x and y are indifferent (written x \(\sim _{leximin}\) y ) if and only if \(x^{\uparrow }\) = \(y^{\uparrow }\).

  • \(x\succeq _{leximin} y\) is the case where \(x \succ _{leximin} y\) or x \(\sim _{leximin}\) y.
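Definition 1 translates directly into a short comparison routine. The following is a minimal sketch with our own helper names, assuming both vectors have the same length m:

```python
def leximin_dominates(x, y):
    """True iff x leximin-dominates y (Definition 1): the ascending-sorted
    vectors agree up to some position, where x's entry is strictly larger."""
    xs, ys = sorted(x), sorted(y)
    for a, b in zip(xs, ys):
        if a > b:
            return True
        if a < b:
            return False
    return False  # identical sorted vectors: indifferent, not dominant

# Two hypothetical coverage-ratio vectors with the same max-min value 0.75;
# the first wins at the third position of the sorted vectors:
print(leximin_dominates([0.75, 1.0, 0.8], [0.9, 0.75, 0.8]))  # True
```

Note that for equal-length vectors this coincides with Python's built-in lexicographic comparison of the sorted lists, i.e., `sorted(x) > sorted(y)`.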

As for the bi-objective setting, we consider the problem of finding the set of solutions to the SARP that offer the best trade-off between total route duration (we call it duration in the rest of the paper) and coverage ratio maximization. The lexicographic ordering is used to compare solutions on coverage ratio vector maximization aspects. We call this problem the leximin-SARP. As the leximin-SARP is a bi-objective problem, we define Pareto optimality in the leximin-SARP as follows:

Definition 2

(Dominance and Pareto optimality in the leximin-SARP) Let \(c_s\) be the duration of solution s and \(l^s = (l_{1}^s ,\ldots , l_{m}^s)\) the vector of coverage ratios for solution s.

  • Let s and \(s^\prime\) represent two solutions of the leximin-SARP. A solution s dominates a solution \(s^\prime\) iff: \(c_s \le c_{s^\prime }\) and \(l^s \succeq _{leximin} l^{s^\prime }\) and either \(c_s < c_{s^\prime }\) or \(l^s \succ _{leximin} l^{s^\prime }\).

  • s is a Pareto optimal solution iff no other solution dominates s.
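Under Definition 2, a dominance check combines the duration comparison with the leximin comparison of the coverage vectors. The sketch below uses our own names; it exploits the fact that, for equal-length vectors, Python's lexicographic comparison of the ascending-sorted lists coincides with the leximin order:

```python
def dominates(dur_s, cov_s, dur_t, cov_t):
    """True iff solution s (duration dur_s, coverage vector cov_s)
    dominates solution t in the leximin-SARP sense of Definition 2."""
    xs, xt = sorted(cov_s), sorted(cov_t)
    if dur_s > dur_t or xs < xt:      # s must be at least as good in both
        return False
    return dur_s < dur_t or xs > xt   # ...and strictly better in one

# A hypothetical pair: s is faster and leximin-better, so it dominates t:
print(dominates(10, [0.5, 0.75], 12, [0.5, 0.5]))  # True
print(dominates(10, [0.5], 10, [0.5]))             # False (indifferent)
```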

To illustrate the concepts of dominance and Pareto optimality, we present an example in Table 1, listing six solutions with their durations and coverage ratios (CRs). In this example, \(s_2\) is dominated by \(s_1\) since \(s_1\) has a lower duration and leximin-dominates \(s_2\). Similarly, \(s_6\) is dominated by \(s_5\). \(s_4\), however, is not dominated by \(s_5\): although \(s_5\) leximin-dominates \(s_4\), \(s_4\) has a lower duration. Figure 3 provides a two-dimensional visualization of the solutions in this example. Note that although in lexicographic ordering we always compare two vectors of values instead of two points, in the two-dimensional visualization we only plot the max-min value. The information in such a figure should therefore always be interpreted alongside the solutions’ full information, i.e., the vectors of coverage ratios and the solution durations (e.g., Table 1 and Fig. 3).

Table 1 Illustrative example for the dominance and Pareto optimality
Fig. 3
figure 3

Two-dimensional visualization of solutions provided in Table 1, in which solutions \(s_2\) and \(s_6\) are dominated by other solutions

Solution method

The SARP, as a variant of the TOP, is an NP-hard problem (Balcik 2017), and consequently so is the leximin-SARP. Since computing the complete set of Pareto optimal solutions is therefore impractical for realistic instances, we develop a bi-objective metaheuristic based on the multi-directional local search (MDLS) framework proposed by Tricoire (2012) to approximate the set of Pareto optimal solutions for the leximin-SARP. We describe the general structure of MDLS in Sect. 4.1. The components of the operators used in the MDLS are explained in Sect. 4.2.

Multi-directional local search

MDLS is a metaheuristic framework for multi-objective optimization that generalizes the concept of single-objective local search to multiple objectives. To apply MDLS to the leximin-SARP as a bi-objective problem, a specific local search must be defined for each objective (duration and leximin). Therefore, we develop a parametric adaptive large neighborhood search algorithm, called \(ALNS_i\), where the parameter i represents the objective function being optimized: \(ALNS_1\) aims at minimizing duration and \(ALNS_2\) aims at maximizing the leximin value. ALNS is a metaheuristic that iteratively destroys and repairs an incumbent solution and has been successfully applied to various transportation problems (Shaw 1998; Ropke and Pisinger 2006; Pisinger and Ropke 2019). We explain the components of the ALNS algorithms in Sect. 4.2. By developing an ALNS for both objectives, we can apply MDLS.

MDLS maintains an archive of solutions, F, which is preserved and updated throughout the solution process. MDLS keeps F a non-dominated set by performing non-dominated union operations (Tricoire 2012). The non-dominated union of two sets is the union of these two sets with the elements that are dominated by at least one element of either set removed. Algorithm 1 outlines the MDLS steps. At each iteration, three steps are performed: (i) a solution x is randomly selected from F, each solution having equal probability; (ii) for each objective, a single-objective local search is performed on x; and (iii) F is updated with the solutions obtained by the single-objective local searches (Tricoire 2012). We initialize the archive F with the solution of the cheapest insertion heuristic adapted from Pisinger and Ropke (2007).
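The three steps above can be condensed as follows. This is a sketch under our own naming, with the two ALNS searches abstracted as arbitrary single-objective local search callables and the archive update as a non-dominated union function; a toy bi-objective problem illustrates the loop:

```python
import random

def mdls(initial, local_searches, nd_union, iterations=200):
    """Skeleton of MDLS: F is the archive of non-dominated solutions."""
    F = [initial]
    for _ in range(iterations):
        x = random.choice(F)                           # (i) pick from F
        candidates = [ls(x) for ls in local_searches]  # (ii) one search per objective
        F = nd_union(F, candidates)                    # (iii) non-dominated union
    return F

# Toy bi-objective minimization of an integer pair, with one "local
# search" improving each coordinate at the expense of the other:
def nd_union(F, new):
    pool = list(dict.fromkeys(F + new))  # drop duplicates, keep order
    return [s for s in pool
            if not any(o[0] <= s[0] and o[1] <= s[1] and o != s for o in pool)]

ls1 = lambda x: (max(0, x[0] - 1), x[1] + 1)   # improve objective 1
ls2 = lambda x: (x[0] + 1, max(0, x[1] - 1))   # improve objective 2
random.seed(1)
front = mdls((5, 5), [ls1, ls2], nd_union)
print(sorted(front))  # a set of mutually non-dominated trade-off points
```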

figure a

ALNS components

Within the designed MDLS framework, we define several destroy and repair operators for each objective. Then, for each objective, at each iteration, we randomly choose a destroy and a repair operator from the set of operators for that objective and generate a new solution. Since it is not possible to visit all sites in the leximin-SARP, the sites removed from a solution by the destroy operator are added to the list of unassigned (i.e., not visited) sites. The repair operator then selects and inserts some of the unassigned sites into the partially destroyed solution until no further insertion is possible due to the time constraints. Below, we explain the destroy and repair operators for each objective.

Duration objective: destroy and repair operators

Classical LNS destroy and repair operators for the VRP with the single objective of cost minimization are described in detail in the literature, e.g., by Pisinger and Ropke (2007). In this study, for the duration objective, we implement some of these destroy and repair operators. The applied removal operators are random removal, worst removal, and related removal, each of which we describe briefly below:

  • Random removal: this operator diversifies the search by randomly selecting sites to be removed from the current solution. We repeat this process until the number of removed sites is equal to the destroy quantity q (Pisinger and Ropke 2007).

  • Worst removal: in this operator, following Pisinger and Ropke (2007), the q sites with the highest contribution to the duration objective value are removed in order to “try to insert them at another place in the solution to obtain a better solution value”. Given a site i and a solution s, we define \(f^{\prime }(s,i)\) as the cost of the solution where site i has been removed completely, and \(\Delta f_{-i}= f(s) - f^{\prime }(s,i)\). This operator randomizes the selected sites to avoid situations where the same sites are removed repeatedly. In particular, we sort the sites by descending \(\Delta f_{-i}\) and then choose a random number y in the interval [0,1). The site to be removed is \(L[y^{p_{worst}}|L|]\), where L represents the array of sorted sites and the parameter \(p_{worst}\) controls the degree of randomization (Pisinger and Ropke 2007).

  • Related removal: this operator selects a random site i and removes it as well as its \(q-1\) most related sites, aiming to shuffle these similar sites around and create new, perhaps better, solutions. We define the relatedness \(R(i,j)\) of two sites i and j by \(R(i,j) = t_{ij}\), where \(t_{ij}\) is the travel distance between the two sites. The lower \(R(i,j)\) is, the more related the two sites are. The randomization works similarly to the worst removal operator, with the parameter \(p_{related}\) controlling the degree of randomization (Pisinger and Ropke 2007).
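The selection rule \(L[y^{p}|L|]\) shared by the worst and related removal operators can be sketched as follows (our own illustrative code; how the candidate array L is sorted is operator-specific and omitted here):

```python
import random

def biased_pick(L, p):
    """Pick from the sorted candidate array L with a bias toward index 0;
    a larger p means a stronger bias (p = 1 gives a uniform pick)."""
    y = random.random()              # y in [0, 1)
    return L[int(y ** p * len(L))]

# With p = 3, the front of the list is chosen far more often than the back:
random.seed(0)
counts = {}
for _ in range(10_000):
    s = biased_pick(list(range(10)), p=3)
    counts[s] = counts.get(s, 0) + 1
print(counts[0] > counts.get(9, 0))  # True
```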

The two repair operators used for the duration objective work as follows:

  • Cheapest insertion heuristic: this operator calculates the minimum insertion cost for every unassigned site. The insertion cost is the difference between the total duration of the solution with the site inserted and the total duration of the current solution without it. We then insert the site with the minimum cost difference into the current solution and continue this process until no further insertion is possible, i.e., the maximum allowed duration (\(T_{max}\)) is reached (Pisinger and Ropke 2007).

  • k-regret heuristic: this heuristic aims to improve on the cheapest insertion heuristic by incorporating look-ahead information when selecting the site to insert. The site with the biggest regret value is inserted, i.e., the site with the highest cost difference between its cheapest insertion and its \(k-1\) next cheapest insertions (Pisinger and Ropke 2007). The selected site is inserted at its minimum cost position. We consider the 2-regret and 3-regret heuristics as two separate repair operators.
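The k-regret selection rule can be sketched in a few lines (our own helper; insertion_costs is assumed to already hold each unassigned site's feasible insertion costs, one per route):

```python
def pick_by_regret(insertion_costs, k):
    """Return the unassigned site with the largest k-regret value: the
    summed gap between its cheapest insertion cost and its k-1 next
    cheapest insertion costs."""
    def regret(costs):
        c = sorted(costs)
        return sum(ci - c[0] for ci in c[1:k])
    return max(insertion_costs, key=lambda site: regret(insertion_costs[site]))

# Site 'b' loses much more if its cheapest slot disappears, so the
# 2-regret rule inserts it before site 'a':
print(pick_by_regret({'a': [4, 5, 6], 'b': [3, 9, 10]}, k=2))  # b
```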

Leximin objective: destroy and repair operators

For the leximin objective, we use the following two destroy operators:

  • Random removal: this operator is the same as the one used for the duration objective and aims to diversify the search by randomly selecting sites to be removed from the current solution. This process is repeated until the number of removed sites is equal to the destroy quantity q.

  • Worst min removal: in this operator, the q sites with the worst contribution to the minimum coverage ratio of the solution are removed. That is, this operator first removes the sites whose removal decreases the minimum coverage ratio of the current solution the least. The randomization works similarly to that of the duration objective, controlled by the parameter \(p_{worst}\).

Two repair operators have been designed to guide the search toward leximin-optimal solutions:

  • Highest max-min insertion: this operator prioritizes inserting first the sites with the highest contribution to the minimum coverage ratio across the characteristics. That is, we first insert (if feasible) the site with the highest contribution to the max-min value of the current solution. Each unassigned site is inserted at the feasible position that minimizes the total travel time. In order to discriminate between insertions with the same contribution to the max-min objective, we provide two versions of this operator:

    • Highest random max-min: in case there are equivalent max-min values while sorting the sites, the candidate site for insertion is randomly selected among those sites with best max-min value. We continue this process until no further insertion is possible.

    • Highest max-min with priority to the duration: in this method, among the sites with equivalent max-min values, we select the one that yields the lowest increase to the duration objective function. We continue this process until no further insertion is possible.

  • Highest Leximin insertion: in this operator, we prioritize the selection of sites with the highest contribution to the leximin objective. More precisely, we compare the respective vectors of coverage ratios resulting from inserting each unassigned site. This is in contrast to the Highest max-min insertion, which considers only the max-min value and thus acts as a proxy for the leximin objective. Comparing full coverage ratio vectors makes this operator computationally expensive; we evaluate its performance in Sect. 5.3.

We follow the adaptive weight adjustment described by Ropke and Pisinger (2006) to automatically adjust the weights of the different operators using statistics from earlier iterations. According to this method, we keep track of the success rate of each operator, which measures how well the operator has performed recently. Within the scope of our study, an operator is successful when using it leads to an update of the non-dominated set of solutions. We divide the entire search into a number of segments, each corresponding to the same number of MDLS iterations. After each segment, we calculate new weights using the recorded scores with the formula \(\omega _{i,j+1}=\omega _{i,j}(1-r)+r\frac{\pi _i}{\theta _i}\), where \(\omega _{i,j}\) represents the weight of operator i used in segment j, \(\pi _i\) is the score of operator i obtained during the last segment, and \(\theta _i\) is the number of times operator i was attempted during the last segment. The parameter r (reaction factor) controls how fast the weight adjustment algorithm reacts to changes in the effectiveness of the operators.
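The update formula can be stated in one line (illustrative code following the notation of the formula; the numbers in the example are made up):

```python
def update_weight(w, score, attempts, r):
    """w_{i,j+1} = w_{i,j} * (1 - r) + r * (pi_i / theta_i)."""
    return w * (1 - r) + r * (score / attempts)

# An operator with weight 1.0 that scored 5 points in 10 attempts,
# with reaction factor r = 0.1, moves toward its recent success rate:
print(update_weight(1.0, 5, 10, r=0.1))  # close to 0.95
```

A small r keeps weights stable; r close to 1 makes them track only the most recent segment.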

Computational results

The designed algorithm is implemented in C++. All computational experiments are carried out on a 64-bit Windows Server with two 2.3 GHz Intel Core CPUs and 12 GB RAM. We describe our test instances and three different configurations in Sect. 5.1. Parameter settings are presented in Sect. 5.2. The quality of the Pareto front approximation is explained in Sect. 5.3. We analyze the equivalent solutions for the max-min objective function in Sect. 5.4 and compare our results with the max-min values reported in Balcik (2017) in Sect. 5.5.

Instance and configuration description

We evaluate our designed MDLS framework on the Balcik (2017) instances with 25, 50, 75, and 100 nodes. These instances are generated by modifying Solomon’s 100-node random (R) and random-clustered (RC) instances (Balcik 2017). Within these instances, there are 12 characteristics. The maximum allowed duration for each team (\(T_{max}\)) is set between two and eight, and the number of assessment teams (K) ranges between two and six. As a result, there are 48 instances. Table 2 shows the characteristics of the SARP instances for our experiments.

We run MDLS on each instance 10 times with two different run times, using three different configurations called all, leximin, and max-min. The difference between these configurations is whether or not the costly Highest Leximin insertion operator is used. All three configurations include all removal operators as well as all duration-oriented operators. The all configuration includes both Highest max-min insertion and Highest Leximin insertion, the leximin configuration includes only Highest Leximin insertion, and the max-min configuration includes only Highest max-min insertion.

Table 2 Characteristics of the SARP instances for our experiments

Parameter settings

To set the parameter values, preliminary tests were performed on instances with 25 and 50 nodes, changing one parameter at a time while keeping the other parameters fixed. We found the following settings to perform best: the ruin quantity q used in the destroy operators is randomly selected between one site and 30 percent of the number of sites in the current solution, and the other parameters are set as (\(p_{related}\), \(p_{worst}\), r) = (5, 3, 0.1). For the purpose of adaptive weight adjustment, each segment lasts 100 MDLS iterations. It is worth mentioning that the designed algorithm was highly robust to its parameters: changing them did not significantly influence the algorithm’s performance.

Quality of the Pareto front approximation

To analyze the impact of the different configurations on the quality of the Pareto front approximation, we follow the approach described in Lehuédé et al. (2020). Accordingly, we build a reference set for each instance: the non-dominated union of the sets returned by the algorithm over 60 runs (three MDLS configurations, two time limits, ten runs per setting). We compare the quality of each set by the percentage of reference solutions it contains. In addition, we analyze the percentage of reference solutions found within a 1%, 2%, and 3% distance of a solution from the assessed set. We say that solution \(s_1\) is within an \(\alpha\)% distance of another solution \(s_2\) if, when the duration and all coverage ratios of \(s_2\) are multiplied by 1 + \(\alpha /100\) and 1 - \(\alpha /100\), respectively, \(s_1\) dominates this transformed solution. The boxplots provided in “Appendix 2” (Figs. 9, 10, 11, 12, 13, 14, 15, 16) represent the percentage of reference solutions found and those within a 1%, 2%, and 3% distance of a solution from the reference set. The variability in the boxplots comes from the 10 runs performed for each MDLS configuration, each instance, and each run time.
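The α% distance test can be made concrete with a short sketch. This is our reading of the definition, not the authors' code: solutions are represented as hypothetical (duration, coverage-ratios) pairs, duration is minimized, coverage ratios are maximized, and we assume dominance over the relaxed solution is checked componentwise on the sorted coverage vectors.

```python
def within_alpha(s1, s2, alpha):
    """Return True if s1 is within an alpha% distance of s2: worsen every
    objective of s2 by alpha%, then test whether s1 weakly dominates the
    relaxed solution. s1, s2 are (duration, coverage_ratios) pairs."""
    d2_relaxed = s2[0] * (1 + alpha / 100)               # longer duration is worse
    c2_relaxed = [c * (1 - alpha / 100) for c in s2[1]]  # lower coverage is worse
    return s1[0] <= d2_relaxed and all(
        a >= b for a, b in zip(sorted(s1[1]), sorted(c2_relaxed))
    )
```

For example, a solution whose duration is 2% longer than a reference solution with identical coverage ratios is within a 3% distance of it, but not within a 1% distance.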

We see that the three tested configurations of MDLS are close to the reference set for almost all instances. Moreover, we observe that all three configurations provide relatively similar approximations of the Pareto set in most cases. Therefore, the simple construction heuristics of the max-min configuration are generally enough to reach an acceptable set. Note that although all and leximin configurations include the computationally expensive operator of Highest Leximin insertion, they do not result in better solution sets in most cases.

To evaluate the computational cost of this operator, we compare the average number of iterations performed by the MDLS algorithm under each configuration for each instance. Figures 4 and 5 show the average number of iterations performed for each instance. As foreseen, integrating the Highest Leximin insertion operator significantly impacts the number of performed iterations. Comparing the boxplots in “Appendix 2” with Figs. 4 and 5, we see that for some instances, more iterations do not necessarily result in better solutions. For example, in instance RC19, even though the max-min configuration performs almost three times as many iterations as the leximin configuration, using the Highest Leximin insertion heuristic is important to obtain better solutions.

Fig. 4 Average number of MDLS iterations for each configuration on each instance

Fig. 5 Average number of MDLS iterations for each configuration on each instance

Equivalent solutions for the max-min objective function

The motivation for studying the leximin-SARP is that solutions with the same max-min value are not necessarily equivalent, and that they are worth exploring. The max-min approach fails to distinguish between solutions with the same minimum value (the coverage ratio in this study). Worse, optimizing max-min first and duration second in a lexicographic fashion, as done in Balcik (2017), may bias the search toward solutions that are good for the max-min objective but comparatively bad for the leximin objective, since removing visits reduces the secondary objective. We call solutions that have the same max-min value max-min-equivalent. For example, Table 3 shows 12 solutions with the same max-min value of 0.571 for instance RC8. These solutions, however, differ in terms of the leximin objective. Solution s5 in Table 3 is the one reported in Balcik (2017) as achieving the best max-min value. Table 3 lists solutions that provide a better vector of coverage ratios than s5 while incurring only a modest increase in duration. These solutions include more sites in the assessment plan, which is beneficial since each additional visit increases the coverage ratio associated with at least one characteristic. In particular, solutions s12 and s1 have the same max-min value but differ considerably in terms of the leximin objective. Our bi-objective approach also allows us to find solutions that would dominate s5 under the (max-min, duration) lexicographic objective: s1, s2, s3, and s4.

Table 3 max-min-equivalent solutions for instance RC8

Figure 6 depicts the histogram of the distribution of the number of max-min-equivalent solutions in the reference set for instance RC8. In this instance, there are 22 distinct solutions with the same minimum coverage ratio of 0.571. We provide similar histograms for the other instances in “Appendix 3”. For example, instance RC16 has more than 70 distinct solutions with the same minimum coverage ratio of 0.125. We also observe that this trend is not confined to the highest max-min values; e.g., instance R11 has more than 9 max-min-equivalent solutions for several values of the minimum coverage ratio. Note that these histograms include only the non-dominated solutions in the solution sets found for the leximin-SARP.

Fig. 6 Distribution analysis for minimum coverage ratio of non-dominated solutions in the reference set for instance RC8

Comparison with max-min values reported in previous approaches

The primary objective function in Balcik (2017) is maximizing the minimum coverage ratio; minimizing total route duration is considered as a secondary objective in a lexicographic fashion. Our results show that this method can lead to solutions that are dominated under the leximin approach. Figure 7 provides a graphical representation of the non-dominated set of solutions obtained by the MDLS algorithm for instance R8. The solution duration is represented on the horizontal axis, and the minimum coverage ratio on the vertical axis. In this figure, the blue point represents the max-min solution reported in Balcik (2017), which is dominated by other solutions obtained by MDLS. Moreover, we find new solutions with higher max-min values than the previously best known value. These points are represented in red (above the dashed line marking the previously best known max-min value). Note that we do not have access to the full vector of coverage ratios of the solutions reported in Balcik (2017); hence, the comparisons are based on the reported minimum coverage ratio and the corresponding duration. For example, Fig. 8 shows a case for which we cannot say that the solutions obtained by the MDLS algorithm dominate the one in Balcik (2017). However, we know that two solutions leximin-dominate it (the two solutions with a higher max-min coverage ratio). Graphical representations of the other instances with such results are provided in “Appendix 4”.

Fig. 7 Union of non-dominated solution sets of different MDLS variants for the instance R8 (left part is the zoomed-in area)

Fig. 8 Union of non-dominated solution sets of different MDLS variants for the instance RC10 (left part is the zoomed-in area)

We now examine how many distinct solutions we find with the same max-min value as the one reported in Balcik (2017), counting all solutions in the reference sets. Table 4 reports this information for each instance.

It is important to mention that in most of the runs, the designed MDLS finds the same max-min coverage ratios as the previously best known values and, in some cases, even higher ones. In particular, for the long run times, the max-min, all, and leximin configurations find a solution with a max-min value at least as good as the previously best known one in 87, 85, and 68 percent of the runs, respectively (see “Appendix 5” for the detailed percentages per instance), in addition to providing a set of trade-off solutions. We also note that the bi-objective MDLS algorithm runs on comparable CPUs (two 2.0 GHz Intel Xeon versus two 2.3 GHz Intel Core). MDLS produces approximation sets that include the previously best known max-min values, in most cases within shorter run times, for all but the smallest instances. The CPU time reported for each instance in Balcik (2017) is the time for running the Tabu Search algorithm four times and taking the best value; for our MDLS algorithm, we report the time limit per single run for each instance. “Appendix 5” provides statistics on how often we found the max-min values reported in Balcik (2017) within the 10 runs of each MDLS configuration.

Table 4 Number of minimum coverage ratios in the reference set that are the same as or higher than those reported in Balcik (2017)

Lexicographic max-min in the context of humanitarian logistics

Equity (often used synonymously with fairness, as noted by Mostajabdaveh et al. (2019)) is an important consideration across a wide range of humanitarian operations. In Sect. 2, we discussed different approaches to incorporating equity in optimization models. In this study, we improve the max-min approach by differentiating between solutions with the same minimum value, a concept that can be applied in other areas of humanitarian logistics where equity is an important criterion. In the humanitarian logistics literature, max-min or min-max approaches are often employed to ensure some level of equity: they guarantee, for example, that all stakeholders receive service at a quality at least as good as the objective value. In the case of rapid needs assessment, this guarantees that each characteristic is covered at least at a certain ratio. However, these max-min (resp. min-max) approaches do not differentiate between solutions with the same min (resp. max) value, even though doing so could improve the overall humanitarian benefit without degrading that guarantee. To illustrate the added value of the lexicographic approach, consider the solutions in Table 3. Each row provides the coverage ratios of all characteristics for a given solution. In total, 12 solutions are represented, all produced by our solution method and each with the same max-min value of 0.571. These solutions do not dominate each other and are sorted from shortest to longest duration, and therefore also from worst to best coverage. Under the max-min objective function, any of these solutions can be obtained; for example, the solution algorithm of Balcik (2017) produces solution s5, even though solutions s6 to s12 are better in terms of coverage ratios. Therefore, even when considering only coverage ratios, the lexicographic approach generally provides better solutions than the max-min approach.
It is important to note that the max-min approach could have produced any of these twelve solutions, as it simply does not discriminate between them, thus ignoring valuable information in the assessment of solution quality. Moreover, embedding this lexicographic objective in a bi-objective approach lets us balance coverage with, e.g., duration: if our time budget is 11.5, the solution we want is s9, which provides the best coverage under that budget. These observations apply to other objectives that include measures of equity, which are very common in humanitarian logistics, for instance partial coverage of beneficiaries.
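The distinction between the two criteria can be summarized in a few lines. This is a generic illustration with made-up coverage vectors, not data from Table 3: leximin compares vectors after sorting them in non-decreasing order, so the worst-off characteristic is compared first, then the second worst, and so on.

```python
def leximin_key(coverage_ratios):
    """Sorting in non-decreasing order turns leximin comparison of
    coverage-ratio vectors into ordinary lexicographic tuple comparison."""
    return tuple(sorted(coverage_ratios))


# Two hypothetical solutions sharing the max-min value 0.571:
a = [0.571, 0.60, 0.65]
b = [0.571, 0.75, 0.80]
assert min(a) == min(b)                 # max-min cannot tell them apart
assert leximin_key(b) > leximin_key(a)  # leximin prefers b
```

Because Python compares tuples lexicographically, `leximin_key` can be used directly to rank or sort max-min-equivalent solutions.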

Conclusion

In this study, we investigate how the lexicographic maximin approach can better address coverage ratio concerns in the rapid needs assessment routing problem. We consider the bi-objective problem of minimizing total assessment time and maximizing the vector of coverage ratios, and provide an approximation of the Pareto set for the leximin-SARP using multi-directional local search. We develop classical neighborhood operators for the duration and coverage ratio objectives and use them within the multi-directional local search framework, showing that they improve the quality of the produced solution sets.

Computational experiments also support our premise that different solutions with the same max-min value are not equivalent. For most of the instances, we find several max-min-equivalent solutions that differ under the lexicographic maximin approach, including solutions that offer a better vector of coverage ratios at a modest increase in duration. As a by-product of our study, the designed MDLS in some cases finds solutions with higher max-min values than previously known, or solutions that dominate the previously best known solutions with respect to the new leximin objective.

Introducing the lexicographic maximin approach to the rapid needs assessment stage is a first step toward showing that there is an alternative to the classic max-min approach that expresses equity in a better way. Future research can focus on using this approach in other contexts of humanitarian logistics, such as last mile distribution problems, where fairness to beneficiaries is of vital importance. Future research can also consider uncertainty with respect to travel times. Due to the high level of uncertainty in the condition of the transportation network after a disaster, complete information on travel times may not be available during the rapid needs assessment stage; this uncertainty can be captured using robust optimization or stochastic programming.