Abstract
Capacity building and monitoring of response capacity are critical to disaster preparedness. Assessing disaster response capacity is a challenging task in India owing to its diverse geo-climatic conditions and exposure to different disasters. This paper addresses the absence of a methodological framework to measure, through indicators, multiple aspects of the disaster response capacity of districts in India. Twenty-six indicators were identified under four factors, namely resources, communication and coordination, budget, and community engagement, anchored on a theoretical framework evolved through a literature survey and key informant interviews. Each factor was modelled as a linear function of indicators based on datasets maintained by district authorities. A Composite Index was constructed as a weighted aggregation of the four factors, using weightings elicited through a Questionnaire Survey among 151 expert respondents. The weightings were derived through an extension of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to balance the variability in the perceptions of respondents. As disasters demand quick response, assessment of response capacity with fewer indicators is desirable. Therefore, a reduced set of critical indicators sensitive to the Indian context was derived through model reduction applying probabilistic and statistical methods, namely \({\mathrm{l}}_{2}\) norm-based sensitivity analysis and the coefficient of variation method. The critical indicators are the numbers of rescue and health service personnel, NGOs, and Self-Help Groups; the efficacy of existing SOPs; literacy; and budget options. The robustness of the Composite Index was checked in terms of sensitivity to weightings and to model reduction. The critical indicators and the Composite Index would sensitize decision-makers to disaster response capacities across districts.
1 Introduction
India, the second-most populous country in the world, is also one of its most disaster-prone regions, largely owing to its physiographic and climatic conditions. Nearly 59% of the landmass is earthquake-prone, 12% is vulnerable to floods, 76% of the coastline is exposed to cyclones and tsunamis, and almost 68% of the cultivable area is prone to drought, with large tracts in hilly regions at risk of landslides (NDMA 2016). The National Disaster Management Authority (NDMA) is the apex body for Disaster Management (DM) in India, set up to create an enabling environment for institutional mechanisms at the state and district levels. District Disaster Management Plans (DDMP) aligned with the Sendai Framework for Disaster Risk Reduction have been created by 80% of the country’s districts (Bahadur et al. 2016).
Despite the collective efforts of government agencies at the state and district levels to plan and prepare for disasters, there is a lack of clarity in the roles and responsibilities for disaster response. The inherent tangles in layers of bureaucracy hinder the process further. Even though the erstwhile relief-centric approach to DM has been replaced by a preparedness-driven approach, there are no well-accepted methods to measure the country’s capacity to respond to disasters. Empirical studies on the determinants of Disaster Preparedness (DP) in developing countries are reported to be very few (Muttarak and Pothisiri 2013; Hoffmann and Muttarak 2017). Capacity building for DP encompasses all aspects of creating and sustaining capacity, such as stockpiling of equipment and supplies; development of information and coordination systems; associated training and field exercises; standard operating procedures (SOP); and institutional and budgetary arrangements (UNISDR 2009, 2015; Hemond and Robert 2012). Capacity building is a linchpin of preparedness, and a metric to evaluate “capacity” provides comparable and meaningful information on DP. A simpler and more comprehensible approach is to disaggregate capacity building into multiple factors measurable through indicators, as indicator-based assessments capture multi-dimensional aspects (Cardona and Carreno 2011).
A Composite Index (CI) composed of several (weighted) individual indicators (OECD 2008) synthesizes “a vast amount of diverse information into a simple, easily usable form” (Davidson and Shah 1997). Each factor, when modelled as an index, would give a clear picture of the status of response capacity related to its constituent attributes, both across different units and through time when evaluated at regular intervals. An index to quantify the response capacity of a region would also enable (i) comparisons among the regions considered; (ii) identification of strengths and weaknesses in the system; and (iii) efficient allocation of scarce resources. Preparedness indices formulated by way of indicators would measure how effectively the government, the civil society, and the bodies responsible for DM anticipate, prepare for, manage, respond to, and mitigate the impact of disasters. State governments hold the primary responsibility for DM in India, and the highest level in the three-tiered decentralized local body administrative system is the district. This paper evolves a theoretical framework to assess capacity building for DP of a district; presents multiple factors which define it; develops a set of indicators under each factor which can be aggregated to model an index measuring the corresponding factor; and constructs a composite index aggregated over the factors. The novelty of the work is that it puts forth a set of critical indicators for disaster response capacity assessment derived through probabilistic methods. Moreover, the CI is developed through multi-level aggregation grounded on solid engagement with primary and secondary data, methodically analysed, and systematically validated.
2 Research methodology and factor identification
The selection of indicators for composite indices is to be grounded on sound theoretical backing to obtain meaningful, reliable results (Freudenberg 2003; Simpson 2006; Eurostat 2014). A conceptual framework was evolved through a comprehensive literature review, wherein capacity development for DP was defined in terms of four factors: Resources, Communication and Coordination, Budget, and Community Engagement. Easy-to-comprehend and contextually relevant indicators for each factor were explored. They were further fine-tuned and disaggregated into measurable attribute variables through key informant interviews. Variable selection was predominantly based on relevance to the aspect measured, availability, comparability, and reliability of data (Cutter et al. 2010; Khazai et al. 2015); and such that most of them, if not all, belonged to a standard set of data routinely compiled and maintained by the district administration, the District Disaster Management Authority, or similar authorities. Each factor was then modelled as a linear function of these indicators. Further, a Questionnaire Survey of Experts (QSE) was administered to elicit the relative weightings of these factors on capacity building related to DP. A purposive sampling approach was adopted to select respondents from different categories of expertise, as detailed in Sect. 3.2. An extension of the MCDM tool Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was implemented to balance out the variability in the perceptions of the expert respondent categories.
For a CI, the most common and transparent method of aggregation is arithmetic averaging, which entails summing the product of each variable and its weight (Salzman 2003; OECD 2008; Greco et al. 2018). Capacity building for DP was thus modelled as a composite index: a weighted aggregation of four factors, with each factor being a linear function of its corresponding indicators. To arrive at a smaller set of measurable, manageable, and actionable critical indicators sensitive to the Indian context, model reduction was performed through \({\mathrm{l}}_{2}\) norm-based sensitivity analysis and the coefficient of variation method on pertinent data. For developing the indices for the factors, the equal weighting method and a data-driven weighting method applying Principal Component Analysis were used. For constructing the CI, weightings elicited by subjective assessment through a QSE, using the Relative Importance Index (RII) method and an extension of the TOPSIS method, were employed. Further, the CI was checked for robustness with respect to sensitivity to the different weighting methods and to model reduction. The general methodology adopted for the study is illustrated in Fig. 1.
The analysis presented in the report is the outcome of this multi-level process.
3 Conceptual framework and literature review
The theoretical framework adopted in this paper is based on an “inductive approach”, whereby one “establishes a set of factors judged to be relevant to response capacity, and then attempts to develop indicators for them” (Brooks et al. 2005). The factors capture physical, economic, social, political, and institutional dimensions of capacity development. The authors chose the inductive approach as it can easily be adapted to different geographic settings, cultures, and environments (Winderl 2014). The rationale adopted is that DP is attributed to physical (critical infrastructure, communication systems), economic (budgetary allocations), social (community engagement), political (DRR plans, implementation of training programmes), and institutional (SOP, response systems) dimensions of disaster response capacities. These dimensions were further disaggregated into easily measurable indicators “to yield information with a reasonable level of veracity” (Simpson 2008; Patrizii et al. 2017). Capacity building is the process “whereby people, organizations and society unleash, strengthen, create, adapt and maintain capacity over time” (UNDRR 2019; Aitsi-Selmi et al. 2015; Hagelsteen and Becker 2013; Hagelsteen and Burke 2016), and an indicator system would be a ‘powerful tool to obtain situational analysis’ even if the level of accuracy may be questionable (Liew et al. 2019). Though the inductive approach is prone to circular logic, this fallacy may be overcome by integrating the subjective assessments of expert stakeholders (Brooks et al. 2005; Simpson 2006; Collymore 2011).
In our search for suitable indicators of capacity building for DP, various UN reports, case studies, manuals, and guidelines on the construction of composite indicators were consulted to identify the parameters that matter. Measurement systems devised to assess DP systems/programmes led by various agencies, specifically in South Asian, African, and Caribbean countries, were also referred to (Cardona 2005, 2007, 2008; Gall 2007; Cutter et al. 2008; Bandura 2008; Cardona and Carreno 2011; Collymore 2011; Oven et al. 2017). A few relevant cases from the literature are overviewed here. The Tsunami-resilient Preparedness Index considers 3 dimensions and 35 aspects, with 21 disaster experts judging its content relevance (Adiyoso and Kanegae 2018). A similar instance, the Tsunami Recovery Impact Assessment and Monitoring System (TRIAMS), combines 51 indicators to track recovery after the 2004 tsunami (Winderl 2014). The World Risk Index (WRI) and the Global Focus Model (GFM) are weighted composite indicators, based mostly on secondary data, used to analyse hazards, vulnerabilities, and response capacity at the country level. The WRI uses 28 indicators on 4 factors related to hazard exposure, susceptibility, coping capacity, and adaptive capacity. The HFA (Hyogo Framework for Action) Monitor of UNISDR tracks goals and priority areas using a self-assessment methodology with 31 capacity indicators, of which 29 are qualitative indicators graded using a five-point assessment tool. The Community-Based Resilience Analysis (CoBRA) of UNDP (UNDP 2013) employs surveys and key informant interviews with numeric as well as qualitative indicators to measure the resilience of physical, human, financial, natural, and social aspects. Patrisina et al. (2018) designed key performance indicators to measure individual DP levels using the Delphi method involving expert respondents, with 14 indicators under three critical factors. Based on all these, the authors zeroed in on four broad categories of indicators (henceforth referred to as “factors”) as depicted in Fig. 2.
The conceptual framework proposed is that each factor is a function of measurable attribute variables which positively contribute to the DP of a district.
3.1 Key informant interviews
Inclusion of perceptions along with quantitative secondary data adds more context-specific elements to disaster resilience measurements (Winderl 2014). Hence, semi-structured interviews were conducted with 39 key informants such as country/regional heads of DM organisations and NGOs, practitioners, policy advisers with extensive experience in DRR, and Emergency Operations personnel at the regional and state levels. Indicators were thus categorised into (i) “input” indicators, which measure the financial, human, administrative, and regulatory resources; (ii) “output” indicators, which measure the consequences of the resources used; (iii) “outcome” indicators, which measure the results at beneficiary levels; and (iv) “impact” indicators, which measure the cumulative effect of capacity building. The four factors and 33 indicators identified were thus corroborated as to their typology and are presented in Table 2.
3.2 Questionnaire survey of experts
Assigning weightings to indicators is critical in the development of composite indicators, weightings being essentially value judgements. Participatory methods incorporating various stakeholders (experts, citizens, and politicians) are extensively used to assign weights that better reflect policy priorities or theoretical factors (OECD 2008; Munda 2003). Moreover, the opinions of practitioners and policy advisers with experience in DRR are decisive in disaster mitigation initiatives (Keur et al. 2016). Barrios et al. (2020) solicited expert opinion for indicator selection and weighting in their evaluation of hospital DP through a multi-criteria decision-making approach. Expert opinion methods for the assessment of disaster risk/preparedness indicators have been shown to yield excellent results (Davidson and Shah 1997, 2001; Freudenberg 2003; Cardona 2011). Expert judgement was elicited not only for the selection of factors and the corresponding indicators but also for assigning their relative weightings, through a QSE among 151 respondents from 7 identified categories of experts drawn from academic and research institutions of repute in India, NGOs, Development Authorities, the general public, affected stakeholders, and related Central and State Government establishments across the states of Kerala, Karnataka, Tamilnadu, and Gujarat. The survey sample was selected using a purposive sampling approach, and its composition is presented in Table 1.
The factor on which each respondent category’s expertise has the most bearing, based on a subjective assessment by the authors, is depicted in Fig. 3.
3.3 Identification of factors and indicators
As the meaningfulness of an indicator depends on its ability to represent the ideas of the conceptual framework (Davidson 1997), the definition of each factor in alignment with the proposed theoretical frame, and the corresponding indicators, are discussed in the sections below. The multiple processes and the large number of stakeholders involved in DM and DRR complicate the selection of apt indicators. The indicators are to provide “the means to monitor and evaluate implemented measures” (Feldmeyer et al. 2020) for DP. Though an attempt was made to identify tangible, easy-to-measure indicators, this was not always possible. Therefore, qualitative indicators, for which data had to be generated through a subjective assessment by the authors (anchored on a solid rationale, as deemed proper), were also included. In order to develop meaningful indicators at the district level, the processes implemented for DRR and DM at district levels were considered, so that the authorities can adapt the methodology of DP assessment to their domain of operation. For replicability of the methodology, the public availability, as well as the prospects for public availability, of indicator data is important. Even though certain selected indicators did not have associated datasets at the time of the study, their compilation was anticipated to be fairly easy. Wherever possible, therefore, the authors adopted tangible, easy-to-measure indicators aligned with the practical aspects of the extant DM mechanism in India. Care was also taken, to the extent possible, to align the indicator set with the legal-institutional-policy framework on disaster management prevalent in India. In order to be relevant and replicable in the contexts of other developing countries, an attempt was also made to align it with global frameworks.
3.3.1 Resource factor
Indicators X1 to X15, as shown in Table 2, were explored to represent C1. Indicators for infrastructure and skilled manpower in health, relief, and rescue systems were identified. “No. of hospitals”—X1 was included due to the availability of comparable data across states, even though hospitals are useful resources only if they remain functional to treat the injured in the aftermath of a disaster; otherwise they turn into a liability. The availability of potential resources (for instance, the possibility of converting existing facilities like hostels and hotels into hospitals to deal with emergencies during disasters) was not explored due to limited data coverage. “No. of nurses”—X4 was considered and then discarded due to the unavailability of comparable data across districts. X10—No. of policemen/1000 population was replaced by “No. of police stations/1000 population”, as reliable data on the number of policemen across districts was too fragmented to compile. X11 and X12 (the numbers of boats and vehicles available, respectively) were combined as X12* as they represented the same resource. The coping capacity of a region related to food security was also explored by including X15—No. of Fair Price Shops (FPS). In India, the central government is responsible for the procurement, storage, transportation, and bulk allocation of food grains, and state governments distribute them through an established network of around 462,000 FPS, one of the biggest such systems in the world. Thirteen indicators were eventually selected, and the discarded ones are marked with ** in Table 2.
3.3.2 Communication and coordination system factor
Factor C2 purports to represent modes of communication; timely information and dissemination of data on disasters for awareness generation as well as for warning; resources for handling media relations; and coordination among different agencies. Indicators X16 to X29, as shown in Table 2, including road density, the proliferation of public transportation nodes (bus stations, railway stations, boat jetties), and communication infrastructure (telephone exchanges, police stations, post offices), were explored. As capacity assessment involves reviewing the capacity of a group against desired goals to identify capacity gaps (UNISDR 2009), an indicator X18—Existence of SOP and pertinent manuals/codes was included to assess the efficacy of communication and coordination systems. This was done through a subjective assessment by the authors of the existence of an Incident Command System in the district (as mandated by NDMA), with binary scores assigned: ‘1’ if it existed and ‘0’ otherwise. Data on X20—No. of cellular phone subscribers and X21—No. of HAM radio operators across districts were too fragmented to be included.
In the present era, Information and Communication Technology (ICT) initiatives are also crucial in enabling the capacities of regions for enhanced disaster preparedness. People seek up-to-date, reliable, and detailed information in disaster scenarios, as it contributes to social inclusion. The network of Common Service Centers (CSC), which deliver ICT to all segments of people through access to information and knowledge, was therefore included as an indicator X22—No. of CSC/1000 population. Media, both print and electronic, plays a vital role in generating public awareness on DP and in communication, through the assimilation and dissemination of information about affected areas among government authorities, NGOs, and the public, and through hazard warnings. The efficacy of mass media campaigns could best be assessed by quantifying the proliferation of print, visual, and social media. Comparable data across districts not being directly available, an indicator X25—the “percentage of literate people” was chosen to suit the context. From a disaster preparedness perspective, or even otherwise, the “percentage of literate people” can be used as an indicator of the reach and efficacy of mass media and awareness campaigns in communication systems.
The efficacy of the operational plans, standards, protocols, and procedures involved was considered to be captured solely by X26 and was evaluated through a subjective assessment by the authors on an ordinal scale: a value of “1” was assigned when the DM plan of the district satisfied at least four of the following conditions, and a value of “2” when it satisfied at least six: (1) identified the risks and vulnerabilities of the district; (2) defined and assigned tasks and responsibilities to all line departments and stakeholders for the pre-disaster and post-disaster phases; (3) developed a standardized mechanism to respond to and manage disasters efficiently; (4) included a response plan for prompt relief, rescue, and search support during disasters; (5) included a revision within the past five years; (6) included an HVCRA (Hazard, Vulnerability, Capacity, and Risk Assessment); and (7) was well integrated across agencies, local authorities, and line departments. Though X18 and X26 represent more or less the same attributes, they are included separately for conceptual clarity, capturing two different perspectives with different scores: X18 refers to the existence of SOPs and X26 to the effectiveness of coordination systems.
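As an illustration only, the scoring rule for X26 may be expressed as a short function; the function name and the assignment of a score of 0 when fewer than four conditions are met are assumptions made for this sketch (the below-four case is not stated in the text).

```python
def score_x26(conditions_met: int) -> int:
    """Ordinal score for indicator X26 (efficacy of the district DM plan).

    conditions_met: how many of the seven listed conditions the DDMP satisfies.
    Returns 2 if at least six conditions are met, 1 if at least four are met,
    and 0 otherwise (the zero case is an assumption, not stated in the text).
    """
    if conditions_met >= 6:
        return 2
    if conditions_met >= 4:
        return 1
    return 0

# Example: a plan satisfying five of the seven conditions scores 1.
assert score_x26(5) == 1
```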
The capacity of a region to perform a set of critical tasks under simulated conditions for different hazards is validated by periodic mock drills, which involve the mobilization of resources, communications, response activities, management initiatives, and post-incident activities of all concerned departments and task forces. Indicators X27—No. of mock drills/simulation exercises per year/1000 population and X28—No. of participants in mock drills per year/1000 population were considered. Documented data on X27 and X28 being unavailable across districts, they had to be discarded. Hence, an indicator X29—Inclusion of procedures for training programmes in the DDMP was included. Of the 14 indicators identified for C2, 10 were selected and are listed in Table 2, the discarded ones being marked with **.
3.3.3 Budget factor
The Budget factor C3 relates to consistent, timely budgetary allocations for institutional capacity building and technical training. It is best represented by the budgetary allocations for DM from the Central, State, and District administrations. In the institutional setup prevalent in India, the State Disaster Response Fund (SDRF) and the National Disaster Response Fund (NDRF), constituted under Sections 48(1)(a) and 46(2) of the DM Act (2005) respectively, are available to all states to facilitate immediate relief in case of severe calamities, though this does not indicate the effectiveness of fund utilisation. Therefore, percentage budget utilisation averaged over 3 consecutive years was identified as an indicator X30. As of now, DM funding has prioritised disaster response over DP, and, though mandated, DM funds at state and district levels are yet to materialise. Flexi-funds under Centrally Sponsored Schemes (following the broad objective of the corresponding Central Sector Scheme), Corporate Social Responsibility (CSR) funds, and similar public–private sector funds are potential sources of funding for increasing disaster resilience. District Planning Funds are also raised in some states from the Members of Parliament Local Area Development Scheme (MPLADS) or the Members of Legislative Assembly Local Area Development Scheme (MLADS) funds received for developmental projects from the central government, and are utilised for preparedness, mitigation, and capacity-building initiatives. Hence, an indicator X31—Budget allocation/financing options for DM per year in INR/1000 population was considered. Again, this too did not indicate the effectiveness of fund utilisation, nor was comparable data available for X30 or X31. Therefore, X31 was modified as “Presence of extra budget allocation/financing options for DM” in the district under scrutiny and was assigned binary scores: ‘1’ if present and ‘0’ otherwise.
3.3.4 Community engagement and technology transfer factor
The involvement of specialised technical agencies and academia in capacity-building initiatives for DM is proposed to be captured by factor C4. Capacity for DP being associated with the knowledge and capacities of local people, community engagement is instrumental in formulating local coping and adaptation strategies, particularly technology-driven initiatives (Allen 2006). The involvement of community organisations in translating technology into real-time benefits for the public was considered, as documented and comparable data on the involvement of specialised technical agencies and academia were not available. NGOs usually have direct and sustained contact with many communities, and an indicator X32—“Number of NGOs active in the region” was identified to suit the context. The most critical component of effective communication on disasters is demonstrated by the appropriate response of communities, which demands reliable formal and informal communication channels, both people-to-people and people-to-government (Mukhtar 2018). The presence of Self-Help Groups (SHGs) is seen to contribute to capacity building for emergency response, the flow of information, and regional and national coordination mechanisms for DP in regions (Collymore 2011). Indicator X33—The number of SHGs per 1000 population was therefore also considered.
3.4 Theory of modelling
As a basic premise to arrive at a metric to gauge DP, each factor of capacity was modelled as an index using its identified indicators (Briguglio 2003; Birkmann 2006). To convey information on capacity building for DP of districts of India, separate composite indices were calculated for the four factors that contribute to DP–C1, C2, C3 and C4. An index being a unitless number, the index measures are to be standardised, scaled and normalised such that the type, scope, depth and appropriateness of the indicators and their measurements are deemed comparable (Munda 2003). All indicators (attribute variables) within a factor were scaled with respect to the mean minus two standard deviations. Each factor Ci for a particular district was evaluated as:
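Consistent with the equal-weighted linear form described above (with \({x}_{j}^{*}\) denoting the scaled value of indicator j, a symbol introduced here for readability), the factor index may be written as:

$$ {C}_{i}=\frac{1}{\mathrm{Card}\left({A}_{i}\right)}\sum_{j\in {A}_{i}}{x}_{j}^{*} $$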
where Ai denotes the set of indicators (attribute variables) for the factor Ci and \(\mathrm{Card }\left({\mathrm{A}}_{i}\right)\) denotes the cardinality of the set Ai. The linear method being one of the most commonly used compensatory aggregation approaches for composite indicators (Greco et al. 2018), the factors were combined to derive a Composite Index using a common, simple, and transparent method: weighted arithmetic aggregation of normalised individual indicators (Cardona 2008; Fritzsche 2014; Freudenberg 2003; OECD 2008).
The proposed Disaster Preparedness Index on Capacity building (\({\mathrm{DPI}}_{\mathrm{C}}\)) for a district was thus evaluated as
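A form consistent with the weighted arithmetic aggregation described above is:

$$ {\mathrm{DPI}}_{\mathrm{C}}=\sum_{i=1}^{n}{a}_{i}{C}_{i} $$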
where ai denotes the weighting of factor Ci for that particular district elicited through expert opinion, and n indicates the number of factors.
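To make the two-level aggregation concrete, a minimal sketch is given below; the scaling rule is an assumed reading of the statement above (each indicator divided by its mean minus two standard deviations), and the function names, array shapes, and numbers are hypothetical rather than the study’s actual computation. In practice, the indicator weights within a factor and the factor weights \({a}_{i}\) would come from the weighting methods described in Sect. 3.5.

```python
import numpy as np

def scale_indicators(X: np.ndarray) -> np.ndarray:
    """Scale each column (indicator) by its mean minus two standard deviations,
    an assumed reading of the scaling rule stated in the text."""
    reference = X.mean(axis=0) - 2.0 * X.std(axis=0)
    return X / reference

def factor_index(X: np.ndarray, weights=None) -> np.ndarray:
    """Factor index per district: weighted (default equal-weighted) mean of scaled indicators."""
    Xs = scale_indicators(X)
    if weights is None:
        weights = np.full(Xs.shape[1], 1.0 / Xs.shape[1])
    return Xs @ weights

def dpi_c(factor_scores: np.ndarray, factor_weights: np.ndarray) -> np.ndarray:
    """Composite index: weighted arithmetic aggregation of the factor indices."""
    return factor_scores @ factor_weights

# Hypothetical example: 10 districts, four factors built from 4, 3, 1, and 2 indicators.
rng = np.random.default_rng(0)
factor_scores = np.column_stack(
    [factor_index(rng.uniform(1.0, 10.0, size=(10, k))) for k in (4, 3, 1, 2)]
)
# Illustrative factor weights (e.g. as elicited from an expert survey).
print(dpi_c(factor_scores, np.array([0.3, 0.3, 0.2, 0.2])))
```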
A total of 26 out of the 33 indicators under the four factors, as presented in Table 2, were selected for modelling. India has a total of 718 districts spread across 28 States and 8 Union Territories (UTs), from which 123 districts spread across six states were chosen to populate the dataset for modelling the factor indices and the DPI. The rationale behind this selection is that these states cover a range of geo-climatic conditions and hence are exposed to varied natural disaster scenarios. UNDRR (2019) classifies natural disasters into five major categories: Geophysical, Hydrological, Meteorological, Climatological, and Biological. The sample states are vulnerable in varying degrees to the first four, and have proven highly susceptible to the last, as evidenced by the COVID-19 pandemic (George and Anilkumar 2021). Statistical data from censuses and district websites were mainly explored to compute the factor indices. Multiple data sources were used to gather data related to the factors within the constraints of data availability. Major sources were the authorities dealing with Disaster Management and Fire and Rescue; the Directorate of Health; the Directorate of Education; and the websites of Highway departments or State Road Development Corporations of the respective states. For districts where such datasets were unavailable, data from the DDMP were compiled. All listed indicators were evaluated and analysed for long-term data availability. The critical issue of missing or erratic values within datasets was dealt with by adopting the OECD guidelines on data outliers and data imputation (OECD 2008). Adequate temporal coverage of the datasets was ensured by setting 2011 (the last census year in India) as the base year and considering the average of 3 consecutive years. Wherever base-year data were missing, data for the immediately preceding available year were used.
3.5 Weighting methods
For developing the indices for the factors, the weighting methods discussed in Sects. 3.5.1 and 3.5.2 were applied; for constructing the \({\mathrm{DPI}}_{\mathrm{C}}\), the methods detailed in Sects. 3.5.3 and 3.5.4 were implemented.
3.5.1 Equal weighting (EW)
All indicators were assumed to contribute equally to the corresponding factor, as per the conceptual framework developed in the study. Despite EW often lacking adequate justification (Greco et al. 2018), it is commonly used for CI development (Bandura 2008; OECD 2008), with the theoretical framework providing the rationale.
3.5.2 Weightings from principal component analysis (PCA)
PCA is a ‘data-driven technique’ which may be used to derive weightings in index construction if the indicators are correlated (Ray 2008; Decancq and Lugo 2013; Greco et al. 2018). The weights derived using data-driven techniques such as PCA emerge from the data themselves under a specific mathematical function (Decancq and Lugo 2013). The Kaiser criterion was used to select the principal components with eigenvalues greater than 1, which account for the maximum variance (OECD 2008). For the qth component with an eigenvalue greater than 1, the weight of each indicator was computed as:
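One common formulation consistent with the symbols defined below scales the squared loading of an indicator by the eigenvalue of the retained component, so that, with loadings whose squares sum to the eigenvalue, the weights within a component sum to one; the exact expression used in the study may differ:

$$ {w}_{j}=\frac{{a}_{jq}^{2}}{{\beta }_{q}} $$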
where \({\mathrm{w}}_{j}\) is the weight of the jth indicator, \({\beta }_{q}\) is the eigenvalue of the qth component, and \({a}_{jq}\) is the loading of the jth indicator on the qth component.
3.5.3 Subjective weighting using QSE—Relative Importance Index (RII)
Responses to the QSE were obtained on a Likert scale of 1 to 7, which cannot be assessed using parametric methods (Siegel and Castellan 1988). Analysis of structured questionnaire responses involving ordinal measurement scales is commonly done using a non-parametric technique, the Relative Importance Index (RII) (Chakrabartty 2019). The RII for each factor was determined using Eq. 4, given by:
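The standard RII expression, consistent with the definitions that follow, is:

$$ \mathrm{RII}=\frac{\sum W}{A\times N} $$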
where W is the Likert rank assigned to each factor by the respondent, A is the highest weight (here, 7), and N, the total number of respondents. RII values were normalised to obtain weighting coefficients of each factor for constructing \(\mathrm{DPIc}\), such that they summed up to 1 and their values ranged from 0 to 1 with 0 not inclusive (Waris et al. 2014).
3.5.4 Technique for order preference by similarity to the ideal solution (TOPSIS)
Assigning weightings to different dimensions of a phenomenon by comparing and evaluating their relative importance may be treated as a multi-criteria decision-making problem. Different MCDA tools may provide the same results (Linkov et al. 2020). Moreover, the authoritativeness of the evaluation of alternatives (indicators, in our case) given by different decision-makers may vary due to differences in their levels of expertise and familiarity with the problem. Therefore, a method was adopted in which the variation in the perceptions of decision-makers is also accounted for: an extension of the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS). The TOPSIS method is considered a relatively easy method with good computational efficiency, offers a clear representation of the logic of human choice (Roszkowska 2013), and is widely used in environmental MCDA (Linkov et al. 2020). TOPSIS is a multi-criteria decision-making tool that chooses the alternative closest to the ideal solution and farthest from the negative ideal alternative, based on information on attributes from the decision-maker and numerical data. Among the numerous extensions of TOPSIS for group decision making (Yang and Chou 2005; Milani et al. 2005; Jahanshahloo et al. 2006; Wang and Lee 2007; Chen 2000), the one proposed by Li et al. (2008) analyses the ordinal preferences of group decision-makers and incorporates the weights of the decision-makers. To integrate the expertise of the respondent categories as deliberated in Sect. 3.2 and illustrated in Fig. 3, this extension of TOPSIS was implemented and the ranking index \({\mathrm{d}}_{\mathrm{n}}\) of alternative \({\mathrm{A}}_{n}\left(n=\mathrm{1,2},\dots ,\mathrm{N}\right)\) was determined as:
where,
where L is the number of groups of decision-makers, \({\mathrm{r}}_{\mathrm{ln}}\left(\in \left\{\mathrm{1,2},\dots ,\mathrm{N}\right\}\right)\) is the comprehensive ranking location of alternative \({\mathrm{A}}_{n}\), and \({\uplambda }_{l}\) is the weight of the lth group of decision-makers. The ranks obtained were then normalised to obtain weighting coefficients summing to 1.
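To illustrate the underlying TOPSIS logic referred to above, a minimal sketch of the standard (single decision matrix) form is given below; it is not the Li et al. (2008) group extension actually used in the study, and the example ratings and criterion weights are hypothetical.

```python
import numpy as np

def topsis_closeness(decision_matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Standard TOPSIS: closeness of each alternative (row) to the ideal solution.

    All criteria (columns) are treated as benefit criteria for simplicity.
    Returns values in [0, 1]; higher means closer to the ideal solution.
    """
    # Vector-normalise columns, then apply criterion weights.
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    weighted = norm * weights
    ideal = weighted.max(axis=0)       # positive ideal solution
    anti_ideal = weighted.min(axis=0)  # negative ideal solution
    d_plus = np.linalg.norm(weighted - ideal, axis=1)
    d_minus = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical example: 4 alternatives (e.g. factors) rated on 3 criteria.
ratings = np.array([[7, 6, 5],
                    [5, 7, 6],
                    [6, 5, 7],
                    [4, 4, 4]], dtype=float)
closeness = topsis_closeness(ratings, weights=np.array([0.5, 0.3, 0.2]))
print(np.argsort(-closeness) + 1)  # ranking of alternatives, best first
```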
The weightings obtained for the indicators and factors applying the methods discussed above are also tabulated in Table 2.
3.6 Model reduction techniques
A Composite Index acquires unwanted complexity when a large number of variables is used as indicators (Freudenberg 2003; Davidson and Shah 1997; Simpson 2006; OECD 2008), as large sets often reflect redundancies (Lind 2010; Otoiu 2014). As disasters demand quick decisions, Composite Indices serve their purpose only if computed as a function of fewer variables. Therefore, to develop a ready-reckoner index with a reduced set of variables, model reduction was performed using a probabilistic method, model distance-based sensitivity analysis (Sobol 1993; Greegar and Manohar 2016), and another method based on the coefficient of variation.
3.6.1 Model distance-based sensitivity analysis using the \({\mathrm{l}}_{2}\) norm
The indicators (attribute variables) selected in the study represent different aspects and relate to different regional contexts. They are therefore uncertain in nature and can be modelled probabilistically as random variables. A factor is a function of indicators, and its uncertainty arises not only from the uncertainties in the individual indicators but also from their combined interactions. For quantifying these uncertainties, Sobol’s analysis based on the analysis of variance (ANOVA), a Global Response Sensitivity Analysis (Sobol 1993; Saltelli et al. 2008), may be used. When the associated random variables are independently distributed, it is equivalent to \({\mathrm{l}}_{2}\) norm-based sensitivity analysis, as shown by Greegar and Manohar (2015, 2016). For performing \({\mathrm{l}}_{2}\) norm-based sensitivity analysis, a pair of models may be considered: one in which the uncertainty in all the variables is included, and another in which a selected uncertain variable is treated as deterministic. Their evaluated proximity explains the effect of the uncertainty in the selected variable on the specified response variable and is a measure of sensitivity with respect to that variable. Consider a model given as:
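Denoting the response as a function \(g\) of the m uncertain variables (the symbols \(g\) and m are introduced here for readability), the reference model may be written as:

$$ \mathrm{Y}=g\left({\mathrm{X}}_{1},{\mathrm{X}}_{2},\dots ,{\mathrm{X}}_{m}\right) $$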
where uncertainties in all the elements of \(\mathrm{X}\) are included; and two altered models: one in which all elements of \(\mathrm{X}\) are uncertain except the ith element (treated as deterministic), and the other in which all elements of \(\mathrm{X}\) are deterministic except the ith element (treated as uncertain), given as
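Consistent with the description above, and with the deterministic values taken as the corresponding means (overbars denote values fixed at their means, a notation introduced here), the altered models may be written as:

$$ {\mathrm{Y}}_{i}=g\left({\mathrm{X}}_{1},\dots ,{\mathrm{X}}_{i-1},{\overline{x}}_{i},{\mathrm{X}}_{i+1},\dots ,{\mathrm{X}}_{m}\right),\qquad {\mathrm{Y}}_{\sim i}=g\left({\overline{x}}_{1},\dots ,{\overline{x}}_{i-1},{\mathrm{X}}_{i},{\overline{x}}_{i+1},\dots ,{\overline{x}}_{m}\right) $$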
Consider a measure of distances between \(\mathrm{Y}\) and the altered models, \({\mathrm{Y}}_{\mathrm{i}}\) and \({\mathrm{Y}}_{\sim \mathrm{i}}\), denoted as \({\mathrm{D}}_{\mathrm{i}}={\text{dist}}\left(\mathrm{Y},{\mathrm{ Y}}_{\mathrm{i}}\right)\text{ and }{\mathrm{D}}_{\sim \mathrm{i}}={\text{dist}}\left(\mathrm{Y},{\mathrm{ Y}}_{\sim \mathrm{i}}\right)\). According to Greegar and Manohar (2015, 2016):
i. \({\mathrm{D}}_{i}\) represents the total effect corresponding to the variable \({\mathrm{X}}_{i}\). Higher values of \({\mathrm{D}}_{i}\) imply that the uncertainty in the ith variable highly reflects on the uncertainty of \(\mathrm{Y}\).

ii. \({\mathrm{D}}_{\sim i}\) represents the main effect corresponding to the variable \({\mathrm{X}}_{i}\). Lower values of \({\mathrm{D}}_{\sim i}\) imply that the uncertainty in the ith variable highly reflects on the uncertainty of \(\mathrm{Y}\).

iii. Variables corresponding to higher sensitivity may be retained as random, and the least sensitive ones may be treated as deterministic.
3.6.2 Coefficient of variation-based method
The coefficient of variation of a variable, denoted by \(\updelta\), is evaluated as the ratio of the standard deviation to the mean. It is a useful statistic for comparing the degree of variation between different data series, even when their means differ considerably from one another. Let \(\eta \text{ and }\sigma\) represent the mean and standard deviation of the reference model \(\mathrm{Y}\), and \({\upeta }_{\sim i}\text{ and }{\upsigma }_{\sim i}\) the mean and standard deviation of the altered model \({\mathrm{Y}}_{\sim \mathrm{i}}\). The quantities \(\updelta\) and \({\updelta }_{\sim i}\) are evaluated as:
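Written out from the definition just given (the ratio of the standard deviation to the mean):

$$ \updelta =\frac{\upsigma }{\upeta },\qquad {\updelta }_{\sim i}=\frac{{\upsigma }_{\sim i}}{{\upeta }_{\sim i}} $$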
where \(\updelta\) and \({\updelta }_{\sim i}\) denote the coefficients of variation of the reference model and of the altered model corresponding to the ith variable, respectively. The normalised coefficient of variation of the ith variable may be considered as a measure of its sensitivity:
Higher values of \({\updelta }_{\sim i}^{*}\) imply that the uncertainty in the ith variable highly reflects on the uncertainty of the response variable, \(\mathrm{Y}\).
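A minimal sketch of the coefficient-of-variation screening on synthetic data is given below; the factor model, the normalisation of \({\updelta }_{\sim i}\) by its sum over the variables, and all numbers are assumptions made for illustration only, not the study’s actual computation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_vars = 5000, 5

# Synthetic indicator data with deliberately unequal spreads (hypothetical).
X = rng.normal(loc=10.0, scale=[0.5, 1.0, 2.0, 4.0, 0.2], size=(n_samples, n_vars))
means = X.mean(axis=0)

def factor_model(X: np.ndarray) -> np.ndarray:
    """Illustrative factor: equal-weighted mean of the indicators."""
    return X.mean(axis=1)

# Reference model: all variables random.
Y = factor_model(X)
delta_ref = Y.std() / Y.mean()

# Altered models Y_~i: only variable i random, the rest fixed at their means.
delta_tilde = np.empty(n_vars)
for i in range(n_vars):
    X_alt = np.tile(means, (n_samples, 1))
    X_alt[:, i] = X[:, i]
    Y_alt = factor_model(X_alt)
    delta_tilde[i] = Y_alt.std() / Y_alt.mean()

# Assumed normalisation: share of each variable in the total coefficient of variation.
sensitivity = delta_tilde / delta_tilde.sum()
ranking = np.argsort(-sensitivity)
print("Reference CoV:", round(delta_ref, 4))
print("Variable ranking (most to least sensitive):", ranking + 1)
```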
3.7 Sensitivity and reliability analysis
CI development involves subjective assessments related to the choice and weighting of indicators and hence requires an assessment of the associated uncertainties. A Sensitivity Analysis (SA) captures (i) the variation in the output due to different sources of variation in the assumptions, and (ii) how the given CI depends upon the information fed into it. SA quantifies the overall uncertainty in district rankings (based on \(\mathrm{DPIc}\)) resulting from uncertainties in the model input. Robustness assessments of composite indicators (Saltelli et al. 2008, 2019), as in the case of the Environmental Sustainability Index, are made by a synergetic application of uncertainty and sensitivity analysis to increase transparency and to validate the assumptions made in the conceptual frame (OECD 2008). The methods used for SA in this study are discussed in the following sections.
3.7.1 Average rank shift
The stability of the \(\mathrm{DPIc}\) computed using different methods, and of the resulting rank of a given district, \(\mathrm{Rank}({\mathrm{DPI}}_{\mathrm{C}})\), indicates the robustness of the estimation (Nardo et al. 2005; Cutter et al. 2003). The average rank shift, \({\mathrm{R}}_{\mathrm{s}}\), is a measure of the uncertainty of each input factor and is computed as:
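A form consistent with the definitions that follow, with the sum running over the m districts (the district index d is introduced here), is:

$$ {\mathrm{R}}_{\mathrm{s}}=\frac{1}{m}\sum_{d=1}^{m}\left|{\mathrm{Rank}}_{d}\left({\mathrm{DPI}}_{\mathrm{C}}\right)-{\mathrm{Rank}}_{\mathrm{ref},d}\left({\mathrm{DPI}}_{\mathrm{C}}\right)\right| $$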
where \({\mathrm{Rank}}_{\mathrm{ref}}({\mathrm{DPI}}_{\mathrm{C}})\) is the median rank of a district over the different methods of computation, and m is the total number of districts. Lower values of \({\mathrm{R}}_{s}\) imply closeness of the computed ranks to the median rank.
3.7.2 Cronbach’s alpha and Spearman’s rank-order correlation
Cronbach’s alpha measures the reliability or consistency of the rankings as it is a function of the number of ranking methods and the average inter-correlation among them (Cronbach 1951). An alpha value greater than 0.9 implies excellent consistency whereas a value below 0.7 may not be acceptable. Spearman’s rank-order correlation was used to test the reliability of the rankings of districts based on different weighting methods. Spearman’s correlation value of + 1 signifies a perfect positive correlation and − 1 signifies a perfect negative relationship between ranks, while 0 indicates no correlation between ranks (Gibbons and Chakraborty 2003).
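As an illustration, the two reliability checks can be computed for a districts-by-methods matrix of ranks as sketched below; the example ranks are hypothetical and the scipy library is assumed to be available.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(ranks: np.ndarray) -> float:
    """Cronbach's alpha for a districts x methods matrix of ranks."""
    k = ranks.shape[1]                     # number of ranking methods
    item_vars = ranks.var(axis=0, ddof=1)  # variance of each method's ranks
    total_var = ranks.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical ranks of 6 districts under 3 ranking methods.
ranks = np.array([[1, 1, 2],
                  [2, 3, 1],
                  [3, 2, 3],
                  [4, 4, 5],
                  [5, 5, 4],
                  [6, 6, 6]], dtype=float)

print("Cronbach's alpha:", round(cronbach_alpha(ranks), 3))
rho, p = spearmanr(ranks[:, 0], ranks[:, 1])
print("Spearman rho (method 1 vs 2):", round(rho, 3), "p =", round(p, 3))
```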
4 Results and discussion
The weightings estimated for the factors and indicators are tabulated in Table 2. The factor indices and the final composite indices developed by the study are tabulated in Table 3. The results of applying model reduction as discussed in Sect. 3.6 are presented in Sect. 4.3. The robustness of the derived indices is discussed in Sect. 4.4. To present a sample analysis, a set of 10 districts from 6 different states of India was considered. The districts represent different geo-climatic scenarios and disaster vulnerabilities, and have moderate to very high proneness to hydrometeorological disasters as per studies conducted by the India Meteorological Department (Mohapatra 2015). Among the 10, Alappuzha, Dakshina Kannada, Nagapattinam, Krishna, Junagadh, and Puri are coastal districts; the remaining 4, namely Kottayam, Sivagangai, Chittoor, and Kheda, are non-coastal. The districts of Kerala were greatly affected by the 2018 deluge; further, they represent different regional contexts, Alappuzha being a coastal district and Kottayam lying in the midlands. Nagapattinam district of Tamilnadu was severely hit by tropical cyclone Gaja in 2018. Krishna district of Andhra Pradesh was hit by heavy rainfall and floods in 2020. Junagadh was affected by floods and a cyclone in 2020, and Kheda suffered floods in 2019 (both districts belong to Gujarat). Puri district of Orissa was massively struck by cyclone Fani in 2019.
4.1 Factor indices
Table 3 presents the factor index scores and ranking scores computed by applying the different weighting methods for the 10 districts chosen as a representative sample. It is observed that the ranks are consistent for 8 districts for C1, and differ by one position (between 4th and 5th) for Kottayam and Nagapattinam. For C2, the rank varies within one position for Kottayam and Sivagangai (between 1st and 2nd), and for Dakshina Kannada and Junagadh (between 4th and 5th). For the Budget factor C3, all the districts considered have the same score and therefore the same rank.
The calculated Cronbach’s alpha values based on the two weighting methods for C1 and C2 were 0.994 and 0.988 respectively, which indicate high reliability. For Community Engagement factor C4, all the districts considered have consistent ranks for both the weighting methods applied.
4.2 Composite index \({\mathrm{DPI}}_{\mathrm{C}}\)
Normalised weighted aggregation was used to aggregate C1, C2, C3, and C4 to compute \(\mathrm{DPIc}\). Table 3 presents the computed \(\mathrm{DPIc}\) scores based on the different weighting methods. The rank ordering is consistent for 4 districts, whereas it shifts by one position for 6 districts.
The Cronbach's alpha value of ranking of districts based on the four computation methods for DPIc was 0.994 which indicates high consistency.
4.3 Parameter reduction
Comparable data sets for the 26 indicators presented in our study may not always be readily available for all districts of India, and its compilation is likely to be time-consuming. A DPI would be a handy tool for practitioners only if it renders a quick assessment. Hence, model reduction was performed on pertinent secondary data sets of 123 districts to reduce the total number of variables to a manageable number of one to two underlying variables per factor. For performing model reduction, the altered models as specified in Eq. (8) were obtained by fixing the corresponding random variables at their mean values. The results of model reduction applied for the factors C1 and C2 (which were otherwise capturing 13 and 10 indicators respectively) are presented in Table 4.
Factor C3 had only one indicator and C4 had only two; and they were retained as such.
The variables with higher sensitivity to the model were those ranked higher, and may be deemed critical. The Resource factor, originally attributed with 13 variables, could thus be estimated with two critical variables: X8—the total number of rescue and relief personnel/10 sq. km, and X5—the number of health service personnel/1000 population. The two variables most sensitive to the model, out of the 10 considered for C2, are X26—Efficacy of existing SOPs, manuals/codes, and X25—Efficacy of mass media campaigns (% literacy).
This does not mean that the remaining 11 variables for C1 and 8 variables for C2 are discarded; it means only that the retained indicators are treated as random, while the least sensitive ones are treated as deterministic by keeping them as constants, which may be contextually selected, for example, as national average values.
4.4 Robustness of the developed composite index
Results of the analyses conducted to check the robustness of DPIc developed using (i) all 26 variables and (ii) 7 critical variables (2 each for C1, C2, and C4; and 1 for C3) are discussed next.
4.4.1 Sensitivity and reliability of \({\mathrm{DPI}}_{\mathrm{C}}\)
The rank ordering of the 10 districts selected for the sample analysis was considered and is tabulated in Table 5, to assess the sensitivity of the CI to the different weighting schemes discussed in Sect. 3.5 and to the model reduction techniques discussed in Sect. 3.6.
Table 6 shows the average shift in district rankings from the median rank. The statistic summarises the relative shift in the positions of all districts in a single number. Lower values of \({\mathrm{R}}_{s}\) indicate greater similarity of the rankings to the median ranking. The RII method for factor weighting, considering all variables, shows the lowest deviation from the median rank. The \({\mathrm{R}}_{s}\) value using the TOPSIS method for factor weighting is higher, as it reflects the variability in the perceptions of the expert respondents.
The average shift in rank using only the critical indicators (with both the RII and TOPSIS methods) is the highest, probably because all the remaining indicators are treated as deterministic constants, which prevents a high value of one indicator from compensating for very low values of the others.
Table 7 presents the Spearman rank-order correlation between the different methods used for computing factor indices and the Composite Index. The correlation coefficient ranges between 0.92 and 1 indicating a very high positive correlation among the different methods used. There is significant agreement among the ranking of the districts on the DPIc since all coefficients are significant at p < 0.01.
Further, Cronbach’s alpha shows that the \(\mathrm{DPIc}\) rankings for the 10 districts considered have an excellent reliability of 0.99. Figure 4 illustrates a comparison of the rankings of the sample districts with respect to their computed Composite Indices. The ranks derived using RII with the Equal Weight method are plotted along with the median ranks and the 95% confidence interval of the mean of the rankings evaluated using the four methods of computation.
There is statistically significant agreement among the ranks despite different weighting techniques for indicators and factors being employed, as shown in Fig. 4.
A major outcome of the study is the set of seven critical attribute variables (derived out of the 26 indicators), belonging to datasets regularly maintained by District Authorities, which would be a handy tool for practitioners in disaster preparedness assessments. The methodology proposed in this study is based on a conceptual framework that is adaptable to different regional contexts. Maintaining the continuity and relevance of an index is an evolving process, and newer, valuable datasets may become available; the developed framework may thus be extended to integrate them. Further, with minimal modifications, the methodology is replicable in the context of other developing countries. Moreover, the method adopted for model reduction, based on global response sensitivity measures, has the feature of retaining the randomness of the critical attribute variables while treating the least sensitive ones as deterministic. This implies that the seven critical variables may be used to compute the Composite Index instead of the 26 attribute variables, which simplifies the computation.
5 Conclusion
The development of indicators and an assessment framework to gauge “capacity development for disaster preparedness” for a vast and complex country like India, with layers of hazards, vulnerabilities, and risks, is not a simple process. The intricacies in the institutional and operational mechanisms of disaster management and the inherent bureaucratic tangles add complexity. The authors attempt to develop a framework and a metric to appraise the coping capacity and levels of preparedness for a regional context in India. Guided by a literature review and key informant interviews, a theoretical framework was developed and four factors contributing to capacity building for DP were identified, namely Resources; Communication and Coordination; Budget; and Community Engagement and Technology Transfer. Corresponding indicators were also identified and further refined based on the availability, comparability, and reliability of data. Each factor was modelled as a linear weighted aggregation of the normalised indicators, applying weightings derived by different techniques from a QSE in which 7 expert categories of respondents participated. It is concluded that a Composite Index, an ensemble of four factors contributing differentially to it and represented by 26 indicators altogether, could be estimated for any regional context, the district being the unit considered in this study. This would serve as a metric to assess DP related to capacity building. Comparable datasets for all the indicators presented in our study are not always readily available for all districts of India, and their compilation is time-intensive; hence, model reduction techniques were applied to data for 123 districts spread across six states of India, and a reduced set of critical variables was derived to render the index a handy tool for practitioners, as disasters demand quick decisions. The study reports that the Resource factor, originally attributed with 13 variables, could be estimated with two critical variables: the total number of rescue and relief personnel/10 sq. km and the number of health service personnel/1000 population. The Communication and Coordination factor, originally attributed with 10 variables, could be estimated with two variables: the efficacy of SOPs and percentage literacy. This does not mean that the remaining variables are discarded; it means only that the critical variables are treated as random and the least sensitive ones as deterministic, by keeping them as constants which may be contextually selected, for example, as national average values. The Budget factor is attributed to one variable: the presence of an extra budget allocation for disaster management annually. The Community Engagement and Technology Transfer factor is attributed to two variables: the number of NGOs active in the region and the number of SHGs in the district.
Evaluation of the Resource factor clearly established the availability of relief and rescue personnel in a district as a crucial attribute of efficient disaster response. The crucial role of Self-Help Groups in supporting community engagement in disaster preparedness, response, and recovery was demonstrated by the evaluation of the Factor Indices. It was inferred that disaster preparedness measures are effective only to the extent that they tackle the unique attributes of the community and engage with the community as a whole. An urgency to set aside budgetary resource allocations specifically for disaster preparedness, over and above what is being done for disaster response, was also sensed.
Though the 26 indicators were selected predominantly based on literature, key informant interviews, and relevance to the theoretical framework developed, data availability was a primary concern, and datasets maintained by District authorities were relied on to a great extent. Hence, the indicator set may not be comprehensive, leaving room for refinement in future research. A statistical internal validation of the developed indices using sensitivity and reliability analyses (essentially a robustness analysis) was implemented in the study to examine how changes in index construction methods affect the index results. Robustness analysis is considered by researchers to enhance overall transparency, though it is not an assurance of the sensibility of a modelled composite index (Saltelli et al. 2019; Douglas-Smith et al. 2020; Zhang et al. 2020). This remains a limitation of the results of this study. Furthermore, empirical validation of a Composite Index is fundamentally important for its proper application for the intended purposes (Bakkensen et al. 2017). For example, an empirical validation against the quantified losses incurred in real disaster scenarios would lend more credibility to the index as an aid to decision-making. However, the conceptual framework and methodology developed provide a baseline for further disaster preparedness assessments at regional levels.
A few of the strategies for applying the results of this study to aid in the Disaster Risk Reduction of districts are:

i. Assessing the coping capacity of a district and thereby identifying the need for intervention at higher levels of administration (state/national level).
ii. Identifying areas where specialised training is needed for first responders.
iii. Benchmarking districts to identify, implement, and support strategies to enhance preparedness.
iv. Supporting policies and resources to improve the disaster preparedness of districts.
However, there are inherent weaknesses associated with Composite Indices. Composite indicators can be misleading, particularly when used to measure aspects of DP, which involve a plethora of complex attributes. As Cardona (2004) postulates, owing to the seemingly ad hoc nature of their computation and the sensitivity of the results to different weighting and aggregation techniques, composite indicators may sometimes produce distorted findings. Despite these purported shortfalls, the comprehensive methodology adopted in this paper to construct the index is robust enough to fairly represent the DP levels of districts in terms of capacity, as shown by the sensitivity analysis with respect to four different weighting schemes and to model reduction. The proposed conceptual framework, together with the factors, their associated indicators, and the identified critical variables, provides a premise for future researchers to build on; its applicability and usefulness need to be probed further. The methodology and findings presented here are envisaged to assist experts, stakeholders, and decision-makers in arriving at rational decisions regarding the identification of areas for action and the anticipation of future developments in disaster preparedness and response in similar contexts. Indicator-based measurement frameworks are very useful because indicators act as a metric to appraise levels of preparedness and thereby help identify areas requiring augmentation. They render focused inputs for the efficient allocation of resources and enable constructive comparisons among different regions.
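As an illustration of the model-reduction idea referred to above, the sketch below screens indicators by their coefficient of variation across districts, flagging low-variability indicators as candidates to be fixed at constants such as national averages. This is only one ingredient of the reduction described in the study; the data, indicator names, and cut-off value are hypothetical.

import numpy as np

def coefficient_of_variation(indicator_matrix):
    # CoV (std/mean) of each indicator across districts.
    # indicator_matrix: (n_districts, n_indicators) array of normalised values.
    mean = indicator_matrix.mean(axis=0)
    std = indicator_matrix.std(axis=0, ddof=1)
    return std / mean

def screen_indicators(indicator_matrix, names, threshold=0.3):
    # Indicators with low variability across districts are candidates to be held
    # constant (e.g. at national averages); the 0.3 cut-off is purely illustrative.
    cov = coefficient_of_variation(indicator_matrix)
    keep_random = [n for n, c in zip(names, cov) if c >= threshold]
    fix_constant = [n for n, c in zip(names, cov) if c < threshold]
    return keep_random, fix_constant

# Illustrative data for three hypothetical indicators across six districts.
X = np.array([
    [0.20, 0.50, 0.30],
    [0.75, 0.51, 0.65],
    [0.40, 0.49, 0.78],
    [0.85, 0.50, 0.50],
    [0.15, 0.52, 0.25],
    [0.60, 0.48, 0.70],
])
print(screen_indicators(X, ["rescue_personnel_density", "sop_efficacy", "literacy_rate"]))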
Data availability
The authors confirm that the data supporting the findings of this study are available within the article and from the corresponding author upon reasonable request.
Code availability
Not applicable.
References
Adiyoso W, Kanegae H (2018) Tsunami-resilient preparedness index (TRPI) as a key step for effective disaster reduction intervention. In: McLellan B (ed) Sustainable future for human security. Springer, pp 369–384. https://doi.org/10.1007/978-981-10-5433-4_25
Aitsi-Selmi A, Egawa S, Sasaki H et al (2015) The Sendai framework for disaster risk reduction: renewing the global commitment to people’s resilience, health, and well-being. Int J Disaster Risk Sci 6(2):164–176
Allen KM (2006) Community-based disaster preparedness and climate adaptation: local capacity-building in the Philippines. Disasters 30(1):81–101
Bahadur A, Lovell E, Pichon F (2016) Strengthening disaster risk management in India: a review of five state disaster management plans. Report of research commissioned by the Climate Development Knowledge Network (CDKN) and carried out by the Overseas Development Institute (ODI), with support from the All India Disaster Mitigation Institute (AIDMI)
Bakkensen LA, Fox-Lent C, Read LK, Linkov I (2017) Validating resilience and vulnerability indices in the context of natural disasters. Risk Anal 37(5):982–1004
Bandura R (2008) A survey of composite indices measuring country performance: 2008 update. (UNDP/ODS Working Paper)
Birkmann J (2006) Measuring vulnerability to promote disaster-resilient societies: conceptual frameworks and definitions. In: Birkmann J (ed) Measuring vulnerability to natural hazards: Towards disaster resilient societies. United Nations University Press, pp 9–54
Briguglio L (2003) Some considerations with regard to the construction of an index of disaster risk with special reference to Islands and Small States. BID/IDEA Programa de Indicadores para la Gestión de Riesgos, Universidad Nacional de Colombia, Manizales. http://idea.unalmzl.edu.co. Accessed 8 Dec 2020
Brooks N, Adger WN, Kelly PM (2005) The determinants of vulnerability and adaptive capacity at the national level and the implications for adaptation. Glob Environ Change 15(2):151–163
Cardona OD, Carreño ML (2011) Updating the indicators of disaster risk and risk management for the Americas. IDRiM Journal 1(1):27–47
Cardona OD (2005) Indicators of disaster risk and risk management: summary report. Inter-American Development Bank
Cardona OD (2007) Indicators of disaster risk and risk management program for Latin America and the Caribbean. Summary report. Updated 2007.
Cardona OD (2008) Indicators of disaster risk and risk management-program for Latin America and the Caribbean: summary report second edition. Updated 2007
Chakrabartty SN (2019) Scoring and analysis of likert scale: few approaches. J Knowl Manag Info Technol 1(2):31–44
Chen C (2000) Extensions to the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst 114(1):1–9. https://doi.org/10.1016/S0165-0114(97)00377-1
Collymore J (2011) Disaster management in the Caribbean: Perspectives on institutional capacity reform and development. Environ Hazards 10(1):6–22
Cronbach LJ (1951) Coefficient alpha and the internal structure of tests. Psychometrika 16(3):297–334. https://doi.org/10.1007/BF02310555
Cutter SL, Boruff BJ, Shirley WL (2003) Social vulnerability to environmental hazards. Soc Sci Q 2(84):242–261. https://doi.org/10.1111/1540-6237.8402002
Cutter SL, Barnes L, Berry M, Burton C, Evans E, Tate E, Webb J (2008) Community and regional resilience: Perspectives from hazards, disasters, and emergency management. Geography 1(7):2301–2306
Cutter SL, Burton CG, Emrich CT (2010) Disaster resilience indicators for benchmarking baseline conditions. J Homel Secur Emerg Manag. https://doi.org/10.2202/1547-7355.1732
Davidson RA, Lambert KB (2001) Comparing the hurricane disaster risk of US coastal counties. Nat Haz Rev 2(3):132–142
Davidson RA, Shah HC (1997) An urban earthquake disaster risk index. The John A. Blume Earthquake Engineering Center, Stanford University
Decancq K, Lugo MA (2013) Weights in multidimensional indices of wellbeing: an overview. Economet Rev 32(1):7–34
DM Act (2005) The gazette of India extraordinary. The Disaster Management Act, 2005, No. 53 of 2005. Ministry of Law and Justice, Government of India
Douglas-Smith D, Iwanaga T, Croke BFW, Jakeman AJ (2020) Certain trends in uncertainty and sensitivity analysis: an overview of software tools and techniques. Environ Model Software. https://doi.org/10.1016/j.envsoft.2019.104588
Eurostat (2014) Towards a harmonised methodology for statistical indicators Part 1: Indicator typologies and terminologies. Publications Office of the European Union, Luxembourg. https://doi.org/10.2785/56118
Feldmeyer D, Wilden D, Jamshed A, Birkmann J (2020) Regional climate resilience index: a novel multimethod comparative approach for indicator development, empirical validation and implementation. Ecol Ind 119:106861
Freudenberg M (2003) Composite indicators of country performance: a critical assessment. OECD Sci Technol Indus. https://doi.org/10.1787/405566708255
Fritzsche K, Schneiderbauer S, Bubeck P et al (2014) The vulnerability sourcebook: concept and guidelines for standardised vulnerability assessments. Available via https://www.adelphi.de/en/publication/vulnerability-sourcebook-concept-and-guidelines-standardised-vulnerability-assessments. Accessed 13 June 2021
Gall M (2007) Indices of social vulnerability to natural hazards: a comparative evaluation. Doctoral dissertation, University of South Carolina
George S, Anilkumar PP (2021) Critical indicators for assessment of capacity development for disaster preparedness in a pandemic context. Int J Disaster Risk Reduct. https://doi.org/10.1016/j.ijdrr.2021.102077
Gibbons JD, Chakraborti S (2003) Nonparametric Statistical Inference. CRC Press
Greco S, Ishizaka A, Tasiou M, Torrisi G (2018) On the methodological framework of composite indices: a review of the issues of weighting, aggregation, and robustness. Soc Indic Res 141(1):61–94. https://doi.org/10.1007/s11205-017-1832-9
Greegar G, Manohar CS (2015) Global response sensitivity analysis using probability distance measures and generalization of Sobol’s analysis. Prob Eng Mech 41:21–33. https://doi.org/10.1016/j.probengmech.2015.04.003
Greegar G, Manohar CS (2016) Global response sensitivity analysis of uncertain structures. Str Saf 58:94–104. https://doi.org/10.1016/j.strusafe.2015.09.006
Hagelsteen M, Becker P (2013) Challenging disparities in capacity development for disaster risk reduction. Int J Disaster Risk Reduct 3:4–13
Hagelsteen M, Burke J (2016) Practical aspects of capacity development in the context of disaster risk reduction. Int J Disaster Risk Reduct 16:43–52. https://doi.org/10.1016/j.ijdrr.2016.01.010
Hémond Y, Robert B (2012) Preparedness: the state of the art and future prospects. Disaster Prev Manag 21(4):404–417. https://doi.org/10.1108/09653561211256125
Hoffmann R, Muttarak R (2017) Learn from the past, prepare for the future: impacts of education and experience on disaster preparedness in the Philippines and Thailand. World Dev 96:32–51. https://doi.org/10.1016/j.worlddev.2017.02.016
Jahanshahloo G, Lotfi F, Izadikhah M (2006) An algorithmic method to extend TOPSIS for decision-making problems with interval data. Appl Math Comp 2(175):1375–1384. https://doi.org/10.1016/j.amc.2005.08.048
Khazai B, Bendimerad F, Cardona OD, Carreno ML et al (2015) A guide to measuring urban risk resilience: principles, tools and practice of urban indicators. Earthquakes and Megacities Initiative (EMI) The Philippines.
Li W, Chen Y, Chen Y (2008) Generalizing TOPSIS for Multi-criteria group decision-making with weighted ordinal preferences. In: proceedings of the 7th World congress on intelligent control and automation, Chongqing, China
Liew DYC, Che Ros F, Harun AN (2019) Developing composite indicators for flood vulnerability assessment: effect of weight and aggregation techniques. Int J Adv Trends Comp Sci Eng. https://doi.org/10.30534/ijatcse/2019/08832019
Lind N (2010) A calibrated index of human development. Soc Indic Res 98(2):301–319. https://doi.org/10.1007/s11205-009-9543-5
Linkov I, Moberg E, Trump BD, Yatsalo B, Keisler JM (2020) Multi-criteria decision analysis: case studies in engineering and the environment. CRC Press
Milani AS, Shanian A, Madoliat R (2005) The effect of normalization norms in multiple attribute decision making models: a case study in gear material selection. Struct Multidiscipl Optim 4(29):312–318
Mohapatra M (2015) Cyclone hazard proneness of districts of India. J Earth Syst Sci 124(3):515–526
Mukhtar R (2018) Review of national multi-hazard early warning system plan of Pakistan in context with sendai framework for disaster risk reduction. Procedia Eng 212:206–213. https://doi.org/10.1016/j.proeng.2018.01.027
Munda G, Nardo M (2003) On the methodological foundations of composite indicators used for ranking countries. Technical Report JRC31473, pp 1–19
Muttarak R, Pothisiri W (2013) The role of education on disaster preparedness: case study of 2012 Indian Ocean earthquakes on Thailand’s Andaman coast. Ecol Soc 18(4):51. https://doi.org/10.5751/ES-06101-180451
Nardo M, Saisana M, Saltelli A, Tarantola S (2005) Tools for composite indicators building. European Commission 15(1):19–20
NDMA (2016) National Disaster Management Plan, 2016. National Disaster Management Authority, New Delhi
OECD (2008) Handbook on Constructing Composite Indicators: Methodology and User Guide. OECD Publishing
Ortiz-Barrios M, Gul M, López-Meza P et al (2020) Evaluation of hospital disaster preparedness by a multi-criteria decision making approach: The case of Turkish hospitals. Int J Disaster Risk Reduct 49:101748. https://doi.org/10.1016/j.ijdrr.2020.101748
Otoiu A, Titan E, Dumitrescu R (2014) Are the variables used in building composite indicators of well-being relevant? Validating composite indexes of well-being. Ecol Indic 46:575–585
Oven KJ, Sigdel S, Rana S et al (2017) Review of the nine minimum characteristics of a disaster resilient community in Nepal. Final Report, Durham University, Durham
Patrisina R, Emetia F, Sirivongpaisal N et al (2018) Key performance indicators of disaster preparedness: a case study of a tsunami disaster. MATEC Web of Conf EDP Sci. https://doi.org/10.1051/matecconf/201822901010
Patrizii V, Pettini A, Resce G (2017) The cost of well-being. Soc Indic Res 133(3):985–1010
Ray AK (2008) Measurement of social development: an international comparison. Soc Indic Res 86(1):1–46
Roszkowska E (2013) Rank ordering criteria weighting methods—A comparative overview. Optimum Studia Ekonomiczne 5(65):14–33
Saltelli A, Ratto M, Andres T et al (2008) Global sensitivity analysis: The primer. Wiley
Saltelli A, Aleksankina K, Becker W, Fennell P, Ferretti F, Holst N, Li S, Wu Q (2019) Why so many published sensitivity analyses are false: a systematic review of sensitivity analysis practices. Environ Model Software 114:29–39
Salzman J (2003) Methodological Choices Encountered in the Construction of Composite Indices of Economic and Social Well-Being. Center for the Study of Living Standards
Siegel S, Castellan JN (1988) Nonparametric statistics for the behavioral sciences. McGraw-Hill, New York, NY
Simpson DM (2008) Disaster preparedness measures: a test case development and application. Disaster Prev Manag Int J 17(5):645–661. https://doi.org/10.1108/09653560810918658
Indicator issues and proposed framework for a disaster preparedness index (DPi). Center for Hazards Research and Policy Development, University of Louisville
Sobol IM (1993) Sensitivity estimates for nonlinear mathematical models. Math Model Comput Exp 1(4):407–414
UNDP (2013) Community Based Resilience Analysis (CoBRA) Conceptual Framework and Methodology. Disaster Risk Reduction Action
UNDRR (2019) Sendai Framework for Disaster Risk Reduction, Disaster Classification. Retrieved from: https://www.desinventar.net/disasterclassification.html
UNISDR (2009) UNISDR Terminology on Disaster Risk Reduction. Retrieved from: www.unisdr.org/publications. Accessed 26 June 2021
UNISDR (2015) Proposed updated terminology on disaster risk reduction: A technical review. Background paper
Van der Keur P, van Bers C, Henriksen HJ et al (2016) Identification and analysis of uncertainty in disaster risk reduction and climate change adaptation in South and Southeast Asia. Int J Disaster Risk Reduct 16:208–214. https://doi.org/10.1016/j.ijdrr.2016.03.002
Wang Y, Lee H (2007) Generalizing TOPSIS for fuzzy multiple-criteria group decision-making. Comput Math Appl 53(11):1762–1772
Waris M, Shahir ML et al (2014) Criteria for the selection of sustainable onsite construction equipment. Int J Sustain Built Environ 3(1):96–110. https://doi.org/10.1016/j.ijsbe.2014.06.002
Winderl T (2014) Disaster resilience measurements: stocktaking of ongoing efforts in developing systems for measuring resilience. United Nations Development Programme (UNDP). Available via https://www.preventionweb.net/go/37916. Accessed 26 Oct 2015
Yang T, Chou P (2005) Solving a multiresponse simulation–optimization problem with discrete variables using a multi-attribute decision-making method. Math Comput Simul 68(1):9–21. https://doi.org/10.1016/j.matcom.2004.09.004
Zhang Y, Spada M, Cinelli M, Kim W, Burgherr P (2020) MCDA index tool: an interactive software to develop indices and rankings – user manual. Future Resilient Systems (FRS) team at Singapore-ETH Centre and Laboratory for Energy Systems Analysis (LEA) at Paul Scherrer Institute, Switzerland. Cluster 2.1: assessing and measuring energy systems resilience. http://www.frs.ethz.ch/research/energy-and-comparative-system/energy-systems-resilience.html. Accessed 18 June 2021
Funding
No funds, grants, or other support was received.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Cite this article
George, S., Kumar, P.P.A. Indicator-based assessment of capacity development for disaster preparedness in the Indian context. Environ Syst Decis 42, 417–435 (2022). https://doi.org/10.1007/s10669-022-09856-0