What would a question, or set of questions, relating to subjective resilience look like in practice? Although the process of asking people questions about their perceived levels of resilience may at first seem straightforward, it is anything but. There is a multitude of ways of asking questions relating to subjective resilience, each with its own methodological challenges and biases. Careful thought is needed in designing and delivering questions to ensure the robustness and utility of subjective information. Below we briefly describe a number of options, and their associated strengths and limitations, in designing questions related to subjective resilience. Given that this is a relatively new area of practice, with few existing tools, we aim simply to provide an introductory guide to the sorts of tradeoffs and decisions that need to be taken into account when seeking to collect subjective resilience information in practice—it is by no means meant to be exhaustive, and many further issues will need to be considered (see OECD 2013).
To begin with, there are many different types of ‘resilience’ referred to in the literature. These include: personal resilience, psychological resilience, livelihood resilience, community resilience, social resilience, economic resilience and disaster resilience, to name but a few. While there are many overlaps between them, each is focused on the characteristics that make their respective systems resilient to particular threats. Each is also applied at a specific geographical scale and unit of analysis. Thus, the characteristics and properties of an individual’s psychological resilience may not be the same as those that make up a country’s economic resilience. The first step in designing an assessment of subjective resilience is therefore to decide on the type and scale of resilience one wishes to investigate.
The example we use in this paper to illustrate the potential for subjective assessments is a subset of disaster resilience. Specifically, we are interested in the resilience of households to respond to weather and climate-related extremes. We define this as the ability of households to manage change by maintaining or transforming living standards in the face of shocks related to weather or climate events—such as droughts, floods or the delayed onset of rainfall seasons—without compromising their long-term prospects (adapted from DFID 2011). This focus on disaster resilience can either relate to a single hazard or an aggregate of multiple hazards. A subjective assessment of any of the different types of resilience listed above is entirely feasible, though it would require a different set of questions and wording.
The assessment of subjective resilience can be undertaken using many different evaluative survey techniques. Given the multifaceted nature of resilience, perhaps the most robust manner of collecting information is through open-ended questions, whereby a series of semi-structured (or structured) questions is administered, allowing people to reflect freely on how resilient they perceive their household or livelihood to be. This method allows for rich qualitative data to be collected without prescribing responses. However, open-ended questions and surveys are often difficult to quantify. They also require considerable human and technical resources to collect relevant data at scale.
The most practical and useful means of collecting information on subjective resilience may therefore be through the delivery of structured surveys. Here, respondents are administered a fixed list of questions with a set of pre-selected answers from which they are asked to choose. The advantage of such an approach is that surveys can be administered quickly, are easier to code and interpret, and can be standardised. Most importantly, they are more readily quantified. Typically, this type of approach is accompanied by either dichotomous (two-point), multiple-choice or scaled questions (such as those reliant on Likert scale responses). However, they can also lend themselves to visual analogue scales or even be combined with open-ended responses.
Before delving into the specifics, it is first important to consider the options for formulating a single closed-ended question relating to subjective resilience. Small differences in the way a question is constructed can have large implications for respondent comprehension, reporting and the comparability of data collected (see Table 1). Questions that are easy to understand, low in ambiguity and do not burden respondents should be sought (OECD 2013). With the assessment of household resilience to weather and climate extremes in mind, one of the first challenges is to specify the threat that is being assessed. Two options exist: a question could either relate to the ability of households to respond to the impacts of a singular stressor, such as drought (see Q1 in Box 1); or it could relate to the collective impact of weather-related extremes (Q2)—this would imply the full range of weather-related extreme events that may affect that particular household, such as floods, droughts and more variable rainfall events.
The former is specific, easier to comprehend and therefore likely to provide answers that are more robust and tailored to a particular threat. While the latter is more vague in its construction and prone to ambiguity—a household may be very resilient to flood events but not at all resilient to drought—its generalisability allows it to be applied across a wider range of contexts and to derive useful information in relation to the many weather-related threats that affect household disaster resilience. This is critical when considering resilience as a wider approach to securing development in the face of a range of shocks and stresses. Choosing between the two approaches is therefore dependent on the research aims and objectives. While there is no right or wrong approach, users should be aware of the merits and limitations of each.
A second, related challenge is deciding on the structure of the question. Precise wording is key, particularly when there are ambiguities with regards to definitions. For example, Q1 in Box 1 presents a simple and direct way of formulating a resilience-related question. However, the term ‘resilience’ means different things to different people. Another option is to omit the word ‘resilience’ in the question and allude to its characteristics. For example, Q4 refers instead to the ability of a household to cope and adapt to climate extremes. However, it is very difficult to cover the multifaceted nature of resilience in a single question without sacrificing the validity and utility of the information gleaned from the question. In addition, any single question that refers to two separate capabilities may elicit different responses and confuse respondents: referring to Q4, for example, a household’s ability to cope with increased flood risk may be different from its ability to completely adapt its livelihood in response to continued flood risk.
There may also be difficulties in translating questions effectively across languages. Issues of translation affect any cross-cultural survey, whether quantitative or qualitative. Yet, subjective surveys are likely to require particular care in ensuring robust translation given the heavy emphasis on intangible properties, capacities and assets. Some reassurance can, however, be taken from past experiences in translation of surveys of subjective well-being, where studies have documented similar scores across language groups and bilingual individuals in a number of country contexts (Diener and Suh 2000).
Another consideration is the time period of assessment. This is particularly relevant to resilience, as it comprises both short-term (e.g. absorptive/coping capacity) and long-term (adaptive capacity) components. Thus, it is important to make reference to the specific time period (and capacity) within the structuring of all relevant questions. For example, questions Q3, Q4 and Q5 each ask respondents to sum up their experiences over a given reference period—either in relation to the present time or in comparison with a stated period. Alternatively, leaving out reference to a specific time period will likely imply that respondents indicate their views at the present moment while drawing on their experiences from the close (and potentially distant) past.
Equally challenging is deciding on the format of response options. Researchers need to consider how many responses to offer, how to label them, and the scale of intervals. More importantly, they have to decide on whether questions regarding subjective resilience should be measured on a bipolar scale (e.g. agree/disagree) or a unipolar scale (e.g. not at all—completely), and whether respondents should be asked for a judgement involving frequency (how often do you feel…?) or intensity (how resilient do you feel…?) (OECD 2013). Examples of different types of response items, and the various pros and cons associated with each, are presented in Box 1.
As with many of the choices described above, each method of designing response options should be tailored to the needs of the user. Some may choose to prioritise concise and short responses (see Q1 and Q4) to limit ambiguity and make cross-country comparison or longitudinal analysis easier. Yet, this will reduce the level of detail that can be extracted from the answers (particularly in the case of binary answers) (Cummins 2003). Note that in the context of subjective resilience, single question answers are likely to be unipolar (running from low resilience to high resilience) rather than bipolar (between two opposing constructs—resilient/not resilient). Others may choose to allow for a greater number of response options to allow for such detail. However, increasing the number of options beyond the optimal length can result in information loss, increased error and reduced motivation (ibid.). Five- and seven-point scales remain the most common options within the context of most life evaluation surveys, though there is an increasing number of surveys using higher point scales (typically 11 points). Choosing meaningful labels that are easily communicable, translatable and adequately reflect each of the gradients on the point scale is an equally important consideration.
Drawing on experiences from related fields, it is likely that questions administered to assess subjective resilience to weather-related extreme events (or any other types of resilience) would follow one of two main delivery options. The first is to have a simple stand-alone single-item question (see Fordyce 1988). This approach has long been used in assessments of subjective well-being (SWB). Examples of stand-alone SWB questions include: “All things considered, how satisfied are you with your life as a whole these days?” or “Taken all together, would you say that you are very happy, pretty happy, or not too happy?” These questions aim to elicit an easily replicable global evaluation of one’s life (Krueger and Schkade 2008). They also seek to be as universally applicable as possible in order to allow comparison (both with other geographic contexts and across time). A similar approach could no doubt be adopted for the assessment of subjective household resilience. The aim would be to design a single question that could, to the best possible extent and recognising the associated limitations, give an accurate account of a person’s perceived level of household resilience. With this in mind, each of the examples presented in Box 2 showcases the types of questions that could be applied as a single question to assess subjective disaster resilience at the household level (note that the design of each question is meant to highlight the strengths and weaknesses of different approaches, and is not a proposition for an effective question).
The weaknesses of a single-question approach quickly become apparent. Primary amongst them is the difficulty in condensing the different components of resilience into a single concise question. To counter some of these methodological challenges, a second approach would be to ask a series of questions related to aspects known to affect disaster resilience (see Box 2). Each question would probe a different aspect of disaster resilience, aiming to provide a more holistic response. We would consider this to be a far more appropriate way of measuring subjective household resilience. For example, a similar approach is taken by the widely used Satisfaction with Life Scale (SWLS), identifying five related questions that are then used as a global measure (Diener et al. 1985). Typically, these questions are then grouped or consolidated to form a composite index. A number of different statistical techniques (such as principal component analysis or various regression-based approaches) can be applied either to identify a small set of questions from a larger subset (that account for much of the variance), or to assign a relevant weighting to each question.
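By way of illustration, the sketch below shows one way in which principal component analysis might be used to derive item weights (or screen items) from Likert-scale responses. The question labels, the simulated data and the choice of PCA over other techniques are assumptions made purely for illustration; they are not drawn from any existing survey instrument.

```python
# Illustrative sketch: deriving item weights for a composite subjective-resilience
# index from Likert-scale responses using principal component analysis (PCA).
# The data and question labels are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Hypothetical 1-5 Likert responses from 200 households to five resilience questions.
questions = ["cope_drought", "adapt_livelihood", "access_credit",
             "social_support", "recover_speed"]
responses = pd.DataFrame(rng.integers(1, 6, size=(200, 5)), columns=questions)

# Standardise items so that no single question dominates purely through its variance.
standardised = (responses - responses.mean()) / responses.std()

# The loadings of the first principal component offer one possible set of item weights.
pca = PCA(n_components=1)
pca.fit(standardised)
weights = np.abs(pca.components_[0])
weights = weights / weights.sum()  # normalise so the weights sum to one

# Composite index per household: weighted average of the standardised items.
composite = standardised.to_numpy() @ weights

print(dict(zip(questions, weights.round(3))))
print("Variance explained by first component:",
      round(pca.explained_variance_ratio_[0], 3))
```

In practice the same loadings (or the explained-variance shares) could be used to drop items that contribute little, rather than to weight them, depending on the aims of the survey.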
As with a single-item question, multiple questions and composite indexes also have their methodological challenges. To begin with, agreeing on which (and how many) questions to include is inevitably difficult and subjective. Indeed, it is possible for numerous different combinations to arise. For example, in the case of psychological resilience, Windle et al. (2011) identify 19 different methods of assessment in the academic literature, each with their own way of questioning, classifying and weighting within their respective resilience scales. One approach would be to start with a clean slate and use bottom-up qualitative research to identify questions that people and communities themselves consider to best represent the characteristics of a resilient household—indeed, questions identified under the first approach may be ‘ground-truthed’ by the latter. This would help avoid expert-led bias, but would require extensive initial pilot surveying in order to develop the subset of question areas.
Another option would be to isolate particular characteristics of resilience and assign a small number of questions that relate to each characteristic. These questions could be drawn from the wider literature and would then be grouped and weighted accordingly. For example, given that resilience is often broken down into three interrelated capacities (Folke et al. 2002)—the capacity to cope; the capacity to adapt; and the capacity to transform—questions could quite easily be identified to suit each. See Q5 and Q6, which probe different capacities associated with resilience. The five livelihood capitals (Scoones 1998) are also closely associated with household resilience in many objective frameworks for resilience assessment (Eakin and Wehbe 2009) and could be used as the basis for understanding and probing subjective assessments of resilience—see questions Q8 and Q9. In addition, resilience is often characterised as being comprised of various different processes and functions, such as iterative learning, accessing knowledge and information, or promoting innovation (Jones and Boyd 2011)—see questions Q10 and Q11.
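As a rough illustration of how such a framework-based grouping might be operationalised, the sketch below maps a set of hypothetical question identifiers onto the three capacities and computes a simple mean sub-score for each. The identifiers, the grouping and the scoring rule are assumptions for the purpose of illustration only.

```python
# Illustrative sketch: grouping subjective-resilience questions by the three
# capacities (coping, adaptive, transformative) and computing a mean sub-score
# per capacity for one household. Identifiers, grouping and responses are hypothetical.
from statistics import mean

# Hypothetical mapping of survey items to resilience capacities.
capacity_groups = {
    "coping": ["cope_1", "cope_2"],                    # short-term absorptive capacity
    "adaptive": ["adapt_1", "adapt_2"],                # longer-term adjustment
    "transformative": ["transform_1", "transform_2"],  # deeper livelihood change
}

# One household's Likert responses (1 = strongly disagree ... 5 = strongly agree).
household_responses = {
    "cope_1": 4, "cope_2": 3,
    "adapt_1": 2, "adapt_2": 3,
    "transform_1": 1, "transform_2": 2,
}

# Sub-score per capacity: here simply the mean of the items in that group.
sub_scores = {
    capacity: mean(household_responses[item] for item in items)
    for capacity, items in capacity_groups.items()
}
print(sub_scores)  # {'coping': 3.5, 'adaptive': 2.5, 'transformative': 1.5}
```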
Each of the different frameworks presents a viable way of assessing subjective disaster resilience at the household level. Part of the problem, however, is that there are so many different existing frameworks, many often tailored to specific contexts (Bahadur et al. 2015). Choosing from amongst them inevitably injects some degree of bias, requiring careful thought and transparency. Indeed, while this method offers a useful way of standardising subjective questions relating to common characteristics, it inevitably draws heavily on expert judgement, attracting similar criticisms to those levelled at traditional objective methods.
It is important to consider that any weighting of the different questions is likely to be subject to various assumptions and methodological weaknesses. Assigning weights can either be done through simplistic and naïve means (such as assuming that each question or category of questions is equally important) or more empirically (such as the use of various statistical analyses to decide on the weighting of each question). A number of studies have also adopted hybrid approaches, such as engaging local communities to identify and rank the characteristics most relevant to their own resilience (often through participatory rural appraisal methods). These are then used to weight subsequent surveys delivered to households within the community and nearby (Choptiany et al. 2015). No approach is perfect, and judgement calls are required in deciding which methods are best suited to the objectives of any research programme.
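To make the contrast concrete, the sketch below compares a naive equal-weighting scheme with weights derived from a hypothetical community ranking exercise when aggregating question scores into a composite index. All scores, ranks and question labels are invented for illustration and do not correspond to any particular study.

```python
# Illustrative sketch: comparing a naive equal-weighting scheme with
# community-derived weights (e.g. from a participatory ranking exercise) when
# aggregating question scores into a composite resilience index.
# All numbers and question labels are hypothetical.

questions = ["coping", "adaptation", "assets", "social_networks", "information"]

# One household's (already normalised) scores on each question, on a 0-1 scale.
scores = {"coping": 0.6, "adaptation": 0.4, "assets": 0.7,
          "social_networks": 0.8, "information": 0.3}

# Scheme 1: naive equal weights.
equal_weights = {q: 1 / len(questions) for q in questions}

# Scheme 2: weights derived from a hypothetical community ranking exercise
# (higher rank = more important), rescaled so they sum to one.
community_ranks = {"coping": 5, "adaptation": 4, "assets": 2,
                   "social_networks": 3, "information": 1}
rank_total = sum(community_ranks.values())
community_weights = {q: r / rank_total for q, r in community_ranks.items()}

def composite(scores, weights):
    """Weighted sum of question scores; weights are assumed to sum to one."""
    return sum(scores[q] * weights[q] for q in scores)

print("Equal-weight index:    ", round(composite(scores, equal_weights), 3))
print("Community-weight index:", round(composite(scores, community_weights), 3))
```

The two schemes can produce noticeably different rankings of households, which is one reason for making the weighting procedure, and the assumptions behind it, explicit and transparent.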
A further consideration relates to context. Self-assessment of an individual’s perceived level of climate risk will inevitably be affected by past experience. Thus, an understanding of climate risk (or even listing responses to flood and drought) in a rural setting, where climate hazards are often felt more directly, will not be the same as in an urban setting, where climate hazards tend to be comparatively indirect and mediated through wider socio-economic factors (Da Silva et al. 2012). Accordingly, subjective questions—particularly with regard to urban contexts—need to be conscious of the interactions between climate and non-climate drivers, and these interactions need to be factored into the design of targeted questions. For example, a focus on the impacts of climate hazards on well-being, or on the importance of critical social safety nets during times of hardship, may provide a useful entry point for communicating and capturing such interactions.
Perhaps the best way of ensuring accurate assessment of subjective resilience is to build on the growing number of approaches and frameworks (see Marshall and Marshall 2007; Marshall 2010; Choptiany et al. 2015; Nguyen and James 2013; Grothmann and Patt 2005; Seara 2014; Lockwood et al. 2015), as well as those from wider related fields, and ensure that the lessons learned from their applications are shared, taken forward and further refined. Above all, maintaining the diversity of methods and approaches that range in complexity, scope and focus will be important in gaining a more holistic understanding of resilience.