1 Introduction

While vulnerability assessments (VA) have become acknowledged state-of-the-art methods, endorsed for example in recent strategy publications at the international level (United Nations 2015), methodological debate and development are ongoing (Ford et al. 2010; Kuhlicke et al. 2011a; Preston et al. 2011; Gallina et al. 2016). Vulnerability assessments have become commonplace, with descriptions of procedures, types, methods, and conceptual backgrounds available in textbooks (Wisner et al. 2004; Birkmann 2013; Fuchs and Thaler 2018) or guidelines (Fritzsche et al. 2014). Assessment methods range from qualitative empirical assessments (Anderson and Woodrow 1998) to semi-quantitative, often spatially explicit, place-based approaches (Cutter et al. 2003). While much has been established, it still appears necessary to critically investigate the opportunities as well as the limitations of VA, especially place-based or mapping approaches (de Sherbinin 2014). A large number of original research papers present singular case study results that are published once but often lack critical reflection on shortcomings, fail to stimulate follow-up studies on long-term developments in vulnerability, or overlook later opportunities for methodological improvement. A number of review papers, however, have already provided overviews and comparisons of the state of the art and of specific methodological traits. These advances are particularly notable for (social) vulnerability in specific hazard contexts such as climate change (Ford et al. 2010; Preston et al. 2011), floods (Rufat et al. 2015; Terti et al. 2015), and multi-risks (Gallina et al. 2016), for validation methodology in general (Tate 2012), and for social capacities (Kuhlicke et al. 2011a).

This article narrows the analysis of VA down to social vulnerability assessments (SVA), selects one case study as a benchmark (Fekete 2009), and reflects upon that baseline study’s main findings and identified gaps. Building on this starting point, the scholarly reception, usage, and shortcomings are analyzed by comparing the original study (Fekete 2009) with the ways other authors have used and critiqued it and the SVA approach it employs. My own reflections on identified shortcomings have already been published (Fekete 2012a), but certain aspects demand further investigation; validation demands and opportunities could also permit further development towards longitudinal monitoring of vulnerability and disaster risk. In order to investigate positive findings as well as constraints that have been identified since 2009, a systematic literature review was conducted. The following research questions have guided this article:

  • How valid is the approach of the original article, as documented by other publications citing it?

  • Which aspects of findings and constraints have been addressed since 2009?

  • What other ideas have further developed social vulnerability indicator approaches, and which expectations about validation can be derived from existing literature, aided also by the author’s own further analysis with recent data?

After completion of the literature review, conceptual considerations on validation criteria, benchmarks, and methodological advancements are used to integrate VA with other concepts, such as criticality assessment or risk management goals. These new potential directions are then briefly outlined.

2 Review of the Development of Social Vulnerability Assessments: A Case Study

The author’s original study (Fekete 2009) applied and adjusted an existing vulnerability index approach following the methodology used in the United States (Cutter 1996; Cutter et al. 2003) and a theoretical framework of vulnerability (Birkmann 2006). The original research question asked whether social vulnerability could also be identified at a national scale and county-unit level in Germany, where this had not been done before. The study combined an inductive approach using factor analysis and principal component analysis (PCA) with the deductive guidance of a conceptual focus on the exposure, susceptibility, and capacity components of vulnerability. The result was a set of indicators of potential vulnerability, developed from an ex ante perspective on disaster risk. While this resembled the state of the art, the Fekete (2009) study went one step further and attempted to statistically validate the hypothetical indicators and their variables against a real flood event, the 2002 river flood in Germany. Over 1600 household interviews, primarily conducted and analyzed by project partners (Kreibich et al. 2005; Thieken et al. 2007), captured damages and losses as well as socioeconomic profiles and reactions, such as temporary abandonment of housing, financial and social capacities, and satisfaction with damage compensation; these data were reanalyzed using logistic regression. The main findings of the study were a methodological procedure to validate vulnerability indicators, and correlations between certain socioeconomic and demographic profiles of affected people, such as age, education, and income, and flood-impact reactions such as temporary evacuation, shelter seeking, and satisfaction with compensation. Shortcomings identified were a lack of knowledge and supporting literature that could advise which variables would serve to validate or benchmark a hypothetical vulnerability indicator. Also unavailable were other approaches against which to compare the validity of the approach employed in the Fekete study. Obvious constraints were a lack of spatial and temporal resolution, which limited the possibilities of up- or downscaling the findings (Fekete 2010; Fekete et al. 2010).
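To make the inductive part of such an approach more concrete, the following minimal Python sketch standardizes county-level variables and reduces them with a PCA; the component loadings indicate which variables group into susceptibility- or capacity-type indicators. All variable names and data are hypothetical stand-ins, not the original German county statistics.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical county-level variables (stand-ins for official statistics)
rng = np.random.default_rng(42)
counties = pd.DataFrame({
    "unemployment_rate": rng.normal(8, 3, 400),
    "basic_education_share": rng.normal(30, 8, 400),
    "elderly_share": rng.normal(20, 4, 400),
    "income_per_capita": rng.normal(21000, 4000, 400),
    "home_ownership_share": rng.normal(45, 12, 400),
})

# Standardize the variables, then reduce them to principal components
z = StandardScaler().fit_transform(counties)
pca = PCA(n_components=3)
scores = pca.fit_transform(z)  # component scores per county, mappable as sub-indicators

# Loadings show which variables cluster on a common component
loadings = pd.DataFrame(
    pca.components_.T,
    index=counties.columns,
    columns=[f"PC{i + 1}" for i in range(3)],
)
print(loadings.round(2))
print("explained variance:", pca.explained_variance_ratio_.round(2))
```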

In order to address the first two research questions, all publications from 2009 to 2016 that cited the Fekete (2009) article were analyzed. Google Scholar was used as the search engine because it allows fellow researchers worldwide to test and update this analysis, without accessibility constraints or the publisher selection focus of other platforms such as Scopus. One constraint of Google Scholar search results is an occasionally erroneous citation count due to double counting of publications. An advantage is the provision of links to PDFs hosted on different sites.

On 19 February 2018, 220 citations of the benchmark article (Fekete 2009) were found on Google Scholar; another 127 were located on Scopus. Since the two samples were not congruent (some publications were listed only in Google Scholar, some only in Scopus), only one source was used for consistency: Google Scholar, as it contains the bigger sample. Only research articles with 10 or more citations (in Google Scholar) were included in the analysis. This resulted in 64 full research papers, peer-reviewed and published in academic journals, with one exception: a book chapter by Torsten Welle and colleagues (Welle et al. 2014). Because this contribution was incorrectly selected by Google Scholar (correct author, but a later publication), it was excluded from further analysis. The rationale for limiting the literature to 63 items is to not overextend the scope of this small literature review within a research paper. The threshold of 10 citations is somewhat arbitrary; it is hypothesized that such papers have already found some acceptance and usage among peers. Figure 1 shows the distribution per year, between 2009 and 2016, of the number of publications on SVA that refer to the 2009 Fekete baseline article.
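The screening step is simple enough to reproduce; the sketch below assumes a hypothetical CSV export of the citing publications (file name and columns are invented for illustration).

```python
import pandas as pd

# Hypothetical export of the citing publications; columns are assumptions
refs = pd.read_csv("citing_publications.csv")  # e.g. title, year, citations, type

# Keep only items with at least 10 Google Scholar citations
sample = refs[refs["citations"] >= 10]

# Citation-screened publications per year (cf. Fig. 1)
print(sample.groupby("year").size())
```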

Fig. 1 Publications with social vulnerability assessments that refer to Fekete (2009), per year

A number of questions were of interest: In which contexts have other publications dealt with social vulnerability since 2009? Context here means, first, in which countries SVA are common, and second, in which hazard context social vulnerability is embedded. What is the scale of the research area, and what is the size of the research unit for the investigation? Which term or terms might be used locally for vulnerability?

These questions were criteria in the literature review, and each was analyzed using spreadsheets with categories. The results (Fig. 1) show a decline in citations of the article (Fekete 2009) beginning in 2012, continuing through 2013, and extending into mid-2014, followed by a modest recovery to a stable level in 2015 and 2016. It would be interesting to investigate whether there was also an overall trend in publications utilizing SVA in 2012/2013. Most articles (56) were research papers; far fewer were review or purely conceptual papers (8). Review papers were relatively highly cited, however, with 2 of 8 having more than 100 citations, while 6 of the 56 research papers had more than 100 citations. The majority of the articles were published in the journal Natural Hazards (17), followed by Natural Hazards and Earth System Sciences (7) and Environmental Science and Policy (4). Sixty articles could be accessed in full text, 3 only as abstracts.

The term “social vulnerability” (SV) is used in 31 publications, half of the publication subset. The term socioeconomic vulnerability has been used 4 times, and, of course, many terms similar to SV have been used as well. But it is interesting that the term “resilience” has been used only 5 times. Given the popularity of the term and the plethora of assessments in similar fields such as community resilience or urban resilience, this might be interpreted to mean that SVA, especially in the spatial and indicator approach followed by the original 2009 article, are distinct from the traditional resilience assessment line. Although this assessment is too brief to be scientifically satisfying, it is suggestive when set against other findings that show resilience was adopted later in certain countries and is used in different, often more conceptual, contexts. In contrast, (spatial) SVA and similar quantitative approaches favor the term “vulnerability assessment” (Fekete et al. 2014). Another interesting aspect is that around half of the publications did not use the term SV and preferred another term such as resilience, risk, or vulnerability. This might be an indication that there are still lingering uncertainties about the definition and scope of the terms.

Regarding hazard context, the majority of the articles, not surprisingly, focus on floods (22), since the original 2009 comparison article deals with flood issues, followed by general natural hazards (12) and climate change (8). Five articles did not define the hazard context, and in several more it was rather difficult to find the hazard context explicitly mentioned. This could point to the conceptual approach of SV, in which the main focus is not on the hazard. But it might also be interesting to consider the debate on hazard-dependent versus hazard-independent vulnerability (Schneiderbauer and Ehrlich 2006). It is noteworthy that studies on technological hazards, health risks, and armed conflict have also used similar approaches.

The countries for which SVA have been conducted are dominated by studies from the United States (9), followed by global approaches (6). But it is striking that SVA approaches similar to the original article have been applied in many countries worldwide. The majority of studies were conducted in Europe (18), followed by North America (11), which is understandable because the context of the original study was Germany, an industrialized country in the Global North. The sample, however, is much too small to allow any interpretation of the state of SV or SVA in general, which applies to all the other criteria analyzed in the literature review.

Analyzing what is commonly termed the “scale” of analysis, which should actually be differentiated into research areas (the whole area that is investigated) and research units (the measurement units within the research area) (Gibson et al. 2000; Fekete et al. 2010), was quite challenging, since many articles were not explicit about the unit of measurement. The urban level was most common (13), closely followed by the municipal level (9), but national (6) and county (6) levels also occurred. Overall, administrative boundaries were most common, with fewer raster or grid approaches (Table 1).

Table 1 Literature analysis with focus on scale and usages of the original (Fekete 2009) article.

Regarding quantitative versus qualitative approaches, almost all publications followed a semi-quantitative approach, establishing indicators or an index based either on statistical socioeconomic, demographic, and spatial data or on interview data. Geographic Information Systems (GIS) were used to combine, compute, and visualize the indicators in 46 of the 63 articles analyzed. Eight articles did not conduct an SVA, but rather analyzed theoretical or methodological aspects.
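A typical workflow of this kind can be sketched in a few lines; the example below, with invented file and column names, joins an indicator table to administrative boundaries with geopandas, normalizes the indicators, and maps a simple additive index. It is one plausible reading of such semi-quantitative approaches, not a reproduction of any reviewed study.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical inputs: county polygons and an indicator table keyed by county id
counties = gpd.read_file("counties.gpkg")      # assumed boundary file
indicators = pd.read_csv("sv_indicators.csv")  # assumed columns: county_id, unemployment, ...

gdf = counties.merge(indicators, on="county_id")

# Min-max normalize each indicator to 0-1 so they are comparable
cols = ["unemployment", "basic_education", "elderly_share"]
for c in cols:
    gdf[c + "_n"] = (gdf[c] - gdf[c].min()) / (gdf[c].max() - gdf[c].min())

# Unweighted additive index, one of several contested aggregation choices
gdf["sv_index"] = gdf[[c + "_n" for c in cols]].mean(axis=1)

# Choropleth map of the index per administrative unit
gdf.plot(column="sv_index", cmap="OrRd", legend=True)
```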

The 63 publications cited the 2009 article in different contexts. The most common references were to the importance or type of validation approach (14). These were followed by references to the selection and justification of variables (9) and of indicators or indices (7) for the SV assessment. The 2009 benchmark article was also often cited simply as a general reference to show that the publication was in line with other studies (7). Critical aspects, such as the lack of non-static approaches to SVA or validation barriers, were taken up surprisingly few times, contrary to what might have been expected from a critical self-reassessment (Fekete 2012a). Overall, none of the publications were critical specifically of the 2009 article, which was somewhat expected given experience with peers and stakeholders (Fekete 2012a), but might also be anticipated given difficulties in review processes or feedback at conferences. Probably, the critical discussion of top-down, quantitative, desktop approaches in SVA happens in other fora (Weichselgartner and Kelman 2014). Not all articles citing the 2009 article have been analyzed; critique might be published in those. Critique is very helpful and necessary, and it must be stressed that not only many aspects of the original study from 2009, but also aspects of my other publications on SVA contain imperfections. Another finding is that, contrary to expectations, no other publication (in this sample) used the 2009 study as a direct data source or comparison study. It is gratifying to see so much advancement in the field since 2009, for example in SVA validation approaches conducted by experts with much greater depth and knowledge than could be engaged in the 2009 study (Tate 2012, 2013; Rufat et al. 2015).

In conclusion, the approach of the 2009 study seems successful enough to continue with, while certain aspects still have to be improved. The advancements should focus on shortcomings of the previous study (Fekete 2009) by (1) analyzing and interpreting single indicators and not only the overall index; (2) analyzing not only a static snapshot, but three 5-year snapshots in dynamic comparison; (3) interpreting spatial and temporal heterogeneity; (4) differentiating cities and rural counties; and (5) conceptually separating national societal vulnerability from community-scale and individual human vulnerability. This will hopefully add new insights to those aspects of the 2009 study most commonly used by other research studies, such as validation, selection of variables, and composition of indicators and indices, when incorporated into dynamic and longitudinal assessments. Not all of that process is captured in this article, as more time is needed to conduct research on a range of SVA variables. As a major constraint, this review is limited by sample design to publications citing the one original 2009 article, which means that extrapolation to SVA in general is not possible. Another constraint is that articles published before 2009 and other relevant publications have not been considered in this review. Also, publications cited fewer than 10 times were not selected, which means that some important deviations from the results or important additions may be missing. More recent publications are also less likely to have already reached 10 citations, which is another bias. But these selections were made on purpose in order to obtain a sample that can be logically justified by its connection to work with a similar conceptual design and methodological content. Therefore, the following section will also stick to the original method, design, and data sources and will expand the discussion, started in the review section, of aspects of interpreting the same type of indicators with recent data.

3 Further Development of the Approach and Demands on Validation

The author’s own 2009 approach includes many shortcomings; more detailed descriptions of the approach, the selection and justification of variables, factor analysis, PCA, and spatial autocorrelation tests have since been published (Fekete 2010), as has a discussion of scale effects, such as the selection of temporal versus spatial scales, up-scaling options, and constraints (Fekete et al. 2010). It was important to summarize the critique received from peers and envisioned “end users” in a separate article (Fekete 2012a). Still, many frustrations of peers with such quantifying, aggregating, “accountant-style” top-down approaches have not yet been answered (Weichselgartner and Kelman 2014). Some of those disappointments are probably triggered by false expectations raised by our publications; for example, usefulness for end users was an explicit aim, and maps per se are easily misunderstood as representing reality. However, since the literature review found validation to be the most frequently mentioned aspect, the following section will focus on what can and needs to be amended and developed in this respect.

3.1 Vulnerability Validation Criteria

Other publications have already addressed further needs and approaches to amend the statistical methodologies and sensitivity analyses (Tate 2012) as well as the selection and common usage of variables in SVA (Rufat et al. 2015). But one major question of the 2009 approach has not been addressed: which types of information could serve as criteria or benchmarks to validate vulnerability, in the sense of logically testing whether a hypothetical vulnerability assumption has proven significant during or after a real crisis or disaster event? In 2009, the following three dependent variables were used in a logistic regression as vulnerability validation criteria: (1) people affected by the flood in that they had to leave their homes; (2) people seeking emergency shelter; and (3) people satisfied with damage compensation.
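In schematic form, this validation step regresses a binary outcome observed after the event on the socioeconomic profile variables; significant coefficients lend support to the corresponding hypothetical vulnerability variable. The sketch below uses statsmodels with synthetic data and invented variable names; it mirrors the type of analysis, not the original survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the household survey (n of about 1600 in the real data)
rng = np.random.default_rng(0)
n = 1600
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "education_years": rng.integers(8, 18, n),
    "income": rng.normal(2000, 600, n),
})

# Binary dependent variable, e.g. "had to leave home during the flood" (0/1),
# simulated here with an arbitrary relationship to age and income
logit_p = 0.02 * df["age"] - 0.001 * df["income"] / 100 - 2
df["left_home"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression of the observed reaction on the profile variables
X = sm.add_constant(df[["age", "education_years", "income"]])
result = sm.Logit(df["left_home"], X).fit(disp=False)
print(result.summary2())  # significant coefficients support the indicator hypothesis
```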

Regarding the range of possible criteria for testing a “revealed” vulnerability, Table 2 includes only a very limited selection. The problem is that vulnerability indications can be read and interpreted quite differently; what one author regards as susceptibility might be regarded by another as exposure or a lack of capacities.

Table 2 Vulnerability validation criteria used in a logistic regression with household survey data after a flood event in 2002.

Exposure, damage, and loss are probably the most straightforward validation criteria for testing assumed vulnerability, especially if the assumed weakness has resulted in disproportionately higher impacts. But even this criterion produces potential uncertainty. Were vulnerability aspects really the primary causal agent, or did other variables drive the observed impacts? Caution is also necessary when time has elapsed between historical and recent damage events, since populations, behaviors, land use patterns, and so on (and therefore vulnerabilities) may have changed. Because exposure is most closely related to hazard aspects, certain studies (Anderson and Woodrow 1998; Davidson and Shah 1997) and even recent UNISDR (United Nations International Strategy for Disaster Reduction) definitions have separated exposure from vulnerability. To validate the core of vulnerability, susceptibilities must be identified and then validated. Susceptibilities (sometimes also termed sensitivities) describe characteristics nested within the object or subject analyzed, through which damage or loss impacts are often intensified. In Table 2, the criterion of people seeking emergency shelter is such an example. Within the group of people who had to leave their homes, a small subgroup ended up in public emergency shelters. Since the 2002 river flood was not a surprise, but rather an event that developed over days and weeks, certain assumptions can be made. Because other data from the household survey identify alternatives to emergency shelters, such as people going to relatives, friends, or an affordable hotel, we assumed that emergency shelters were an indication of those people who had no social networks or other options than to seek refuge in public shelters. This reveals not a physical susceptibility, but an absence of social ties and/or the existence of societal conditions that constitute a type of susceptibility. At the same time, the same susceptibility also indicates a society able to provide capacities such as emergency shelters. In sum, this is not a perfect vulnerability validation criterion, but it meets many assumptions about the complicated nature of susceptibility.

The third validation criterion used in our previous study, people satisfied (or unsatisfied) with damage compensation after the flood, is also not easy to grasp. It indicates societal moods and personal reactions, and the validation found correlations with the socioeconomic indicators of unemployment and education level. It is an interesting validation criterion not so much because of its precision, but because it captures a soft aspect, one of the intangibles: not a physical susceptibility but rather a sociocultural one. It also indicates societal capacities to compensate flood losses through insurance and governmental aid, as well as the challenges of getting the right aid to people in time and in satisfying amounts. Since it represents a social reaction (satisfaction) that would not exist without the underlying capacity (a damage compensation service), it is displayed as an example of “capacities” in the table. But one might rightly argue that it is a susceptibility component as well.

The range of possible and necessary vulnerability validation criteria is much broader. Physical vulnerabilities of human beings could be validated by death tolls, physical injuries, diseases, or health issues. Psychological vulnerabilities could be validated by mental trauma. Other vulnerability characteristics are related to deaths or health impacts only indirectly. For example, poverty, social exclusion, and a lack of capacities do not per se kill people directly during a disaster event. But they provide conditions (physical exposure), a lack of alternatives, and so forth that foster or force vulnerabilities (root causes, dynamics, and so on) to develop into disaster risk pathways.

3.2 Aggregation Aspects and Possible Interpretations of Indicators

If a social vulnerability index (SVI) were composed as a reassessment of the 2009 approach, the individual indicators would deserve more scrutiny regarding their explanatory power. Since the index approach has been much criticized for blurring the individual indicators that compose it, we pass over the index here and instead investigate how the indicators would influence the overall picture of vulnerability. This choice follows from our findings on the acceptance of vulnerability indicators by end users and decision makers. There are hindrances to acceptance, such as mayors not appreciating being labelled as vulnerable (Fekete 2012a) or maps being misunderstood (Fekete et al. 2015). Other constraints are overexpectations on the part of end users who utilize scientific results once they are published. In a recent project (Fekete et al. 2017), end users were more enthusiastic about being involved in the science rather than just being used as interview sources. This is in line with recently promoted participatory approaches that codesign and codevelop research within a more nuanced understanding of knowledge production (Weichselgartner and Kasperson 2010; Weichselgartner and Pigeon 2015). This cooperative approach contradicts the preexisting notion of first “producing policy-relevant information” and then distributing it to end users, which can fail (Fekete 2012a). Many vulnerability models are perceived as black boxes when stakeholders are not involved in designing them. Aggregated indices may add to this black-box perception when the composition of indicators becomes too complex or hidden to grasp immediately (Fekete et al. 2015). One underlying question is: do end users have enough confidence in the resulting indicators to actually use them in their decision making? The publications reviewed in this article do not tackle this issue. The following section cannot answer this question either, but it analyzes the problem conceptually by looking at selected examples of the single indicators displayed in Fig. 2 to identify which aspects could be interpreted by users.

Fig. 2 Indicator maps related to social vulnerability in Germany at county and city administrative levels. Data source: Administrative digital boundaries were retrieved from the Federal Office of Cartography and Geodesy (2017); individual demographic statistics were derived from the Federal Office of Statistics (2017), extracted, categorized, normalized, and visualized in QGIS by the author.

Figure 2 shows five of the six indicators validated in 2009 by using a second data set from a real flood event; the upper row of Fig. 2 shows these indicators recomputed for the year 2015. The visual comparison already reveals a strong spatial correlation among the first three indicators: unemployment rate, basic education, and elderly population. What does this say about explanatory power? It might be read as three indicators reinforcing the assumption that certain areas in Eastern Germany have higher levels of assumed vulnerability. On the other hand, the confounding influence is very high; at the least, unemployment and low education levels are almost intuitively correlated. For a future SVI it must be considered whether fewer indicators are a better approach than collecting as many as feasible, and whether spatial heterogeneity should also guide the selection of indicators for an index. For example, population numbers and one-apartment buildings (an indicator of home ownership) highlight different counties and areas and might be valuable in adding other thematic and spatial aspects to consider. Both spatial and temporal heterogeneity are important factors to consider in method design in order to assess the validity of an indicator or index. The lower row of maps in Fig. 2 displays in green where data changes between 2005 and 2015 result in lower (hypothetical) vulnerability; brown indicates increasing vulnerability. Visual interpretation shows that unemployment, while above average in Eastern Germany, also exhibited an above-average decrease from 2005 to 2015. Basic education and one-apartment buildings changed in almost all counties and cities in a direction that ameliorates vulnerability, while age went up, a reflection of demographic change in an ageing society. The spatial explanatory power of the latter three indicators might therefore be questioned: when they do not reflect spatially different patterns, are they useful (vulnerability) indicators? Of course, the absolute rate of change must also be analyzed, investigating minimum and maximum values and fluctuations per year; when an indicator merely fluctuates up and down around low values of change, its explanatory power might be regarded as low. An exception is when a variable is a (normatively justified) key indicator; then even small changes can be magnified in weight by incorporating other, less important indicators.
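The change maps in the lower row of Fig. 2 rest on a simple computation, which the following sketch reproduces with assumed file and column names: the normalized 2005 value is subtracted from the normalized 2015 value, and the sign of the difference is classified as decreasing or increasing hypothetical vulnerability.

```python
import pandas as pd

# Hypothetical indicator tables for the two snapshot years
df05 = pd.read_csv("indicators_2005.csv", index_col="county_id")
df15 = pd.read_csv("indicators_2015.csv", index_col="county_id")

def minmax(s: pd.Series) -> pd.Series:
    """Scale a series to the 0-1 range."""
    return (s - s.min()) / (s.max() - s.min())

# Normalized change per county; negative = lower hypothetical vulnerability
change = minmax(df15["unemployment"]) - minmax(df05["unemployment"])

# Classify the direction of change (green vs. brown in Fig. 2); the 0.05
# stability band is an arbitrary choice for illustration
classes = pd.cut(
    change,
    bins=[-1.0, -0.05, 0.05, 1.0],
    labels=["decreasing", "stable", "increasing"],
)
print(classes.value_counts())
```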

3.3 Benchmarking Vulnerability: Criticality Steps and Service Target Levels

While it is already a challenge to identify validation criteria for vulnerability, it is even more challenging to identify thresholds or benchmarks (Cutter et al. 2010). Vulnerability typically displays ranges of possible degrees; it rarely exhibits thresholds that determine when to speak of vulnerability or when vulnerability inevitably turns into disaster loss. Of course, GIS maps visualize degrees of vulnerability by standard deviations, natural breaks, quantiles, and so on. But there is rarely a justification behind such breaks, beyond statistical argumentation, that is related to validation with real damage cases. What is the tipping point at which a certain number of people with certain characteristics will inevitably have to leave a place or will be killed?
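The statistical nature of such breaks is easy to demonstrate; the sketch below applies the mapclassify library to a synthetic indicator and shows how quantiles, natural breaks, and standard-deviation breaks each yield different class boundaries, none of which is anchored in observed damage.

```python
import numpy as np
import mapclassify

# Synthetic indicator values for 400 hypothetical counties
rng = np.random.default_rng(1)
values = rng.lognormal(mean=0.0, sigma=0.5, size=400)

# Three common, purely statistical break schemes produce different classes
print(mapclassify.Quantiles(values, k=5))
print(mapclassify.NaturalBreaks(values, k=5))
print(mapclassify.StdMean(values))
```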

There are other terms and lines of research in which such thresholds are investigated, for example, loss and damage research (Wrathall et al. 2015), risk assessments using traditional loss and probability matrices (Federal Office of Civil Protection and Disaster Assistance 2010), or work under the terms criticality or severity. For example, failure mode, effects, and criticality analyses differentiate degrees of criticality according to stages of loss, from loss of human lives, to loss of aircraft (damage that cannot be repaired), down to maintenance issues (US DoD 1980). Such a categorization could serve as an example by which to differentiate vulnerability.

Finding evidence to back up thresholds is difficult. Since the 2009 study, our research in other fields, such as critical infrastructure, civil protection, and risk and crisis management, suggests the need to add, next to criticality, other concepts that complement existing vulnerability frameworks. A demand for a methodology that justifies the often hidden and underlying reasons for prioritizing one value, such as human lives, over economic or ecological loss has been observed in risk and crisis management approaches in civil protection. In order to address this limitation, vulnerability and risk assessments have been embedded in more comprehensive frameworks that incorporate preparation and application phases and include validation and communication structures, called risk management frameworks (IRGC 2012; ISO 2009; Federal Ministry of the Interior 2008). The prioritization of what is “measured” by the risk and vulnerability assessment is termed a “protection goal” or “risk management goal” (Fekete et al. 2012). A methodology is suggested in those studies to order the underlying values that exist in a civil protection agency, for example, to prioritize saving human lives over economic interests. But this methodology is also useful for economic or ecological risk assessments because it makes the value decision explicit: the overall human value is selected, that is, maximize the saving of human lives. Time characteristics such as quickness of effect or duration refine this humanitarian value. Although still underrepresented in VA, time restrictions have been found to be a ubiquitous characteristic that can be applied to almost any indicator or process. For example, population density is generally a good indicator of exposure to flood disaster risk, but in combination with the onset speed of the flood or the daytime population of a city it becomes much more precise. Risk management goals are composed of the value to protect or analyze and thresholds of countermeasures (capacities) that should be achieved. Such thresholds can be zero-death visions in road safety or, for example, the limitation of climate change-related temperature increase to 2 degrees (Fekete 2012b). These thresholds or “goals” could also be useful for VA, and they can either be decided upon as a strategic goal or be based on real cases or measurements. An example is the service time for fire fighters to reach their destination, which varies between 5 and 180 min in Europe (Weber 2013) or the United States (Sa’adah 2004).
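Such a service target level can be operationalized as a simple pass/fail benchmark; the fragment below, with invented district names and numbers, checks modeled fire service response times against an assumed target.

```python
import pandas as pd

# Hypothetical modeled response times (in minutes) per district
response = pd.Series({"district_a": 7.5, "district_b": 14.0, "district_c": 28.0})

TARGET_MIN = 10.0  # assumed strategic target level; real targets span roughly 5 to 180 min

meets_target = response <= TARGET_MIN
print(meets_target)                            # which districts meet the goal
print(f"coverage: {meets_target.mean():.0%}")  # share of districts within the target
```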

4 Conclusion

First, an existing SVA approach was analyzed through a literature review of its usage, in order to identify whether and which aspects of the SVA were used by other studies or regarded as useful. The findings for these research questions show that validation is still often lacking, yet is widely regarded as an important component of an SVA. The findings also show common agreement on certain indicator selections and on the usage of spatial assessments in a great number of countries worldwide. The abundance of urban area assessments is in line with a general focus on cities within disaster risk and resilience research (Fekete and Fiedrich 2018). Guidance on how to conduct validations, the usage of existing studies for cross-validation, and guidance on variable and indicator selection are still wanting.

In the second part, this article has therefore discussed possible validation criteria and benchmarks. Benchmarks in the form of a worldwide database of disaster cases with an explicit focus on revealed vulnerability are still missing. Accounts of human and economic losses still mainly address the exposure component and therefore undervalue the core of vulnerability, termed susceptibility. In the absence of validation criteria or benchmarks, this article has suggested insights from methodologies in related fields such as critical infrastructure or risk and crisis management. Concepts such as criticality steps, risk management goals, and target levels, as well as integrative risk and crisis management frameworks, can help advance vulnerability from a relative degree estimate to a threshold value. While all these approaches have their own limitations and constraints, they may stimulate the further conceptual and applied development of VA.