
4.1 Introduction

As already noted, the 2008 European Air Quality Directive (AQD) (2008/50/EC) encourages the use of models in combination with monitoring in a range of applications. It also requires Member States (MS) to design appropriate air quality plans for zones where air quality does not comply with the AQD limit values and to assess possible emission reduction measures to reduce concentration levels. These emissions reductions then need to be distributed in an optimal and cost effective way through the territory. Obligations resulting from other EU directives (e.g. the National Emission Ceiling Directive) and targeting more specific sectors of activity (e.g. transport, industry, energy, agriculture) must also be considered when designing and assessing local and regional air quality plans (Syri et al. 2002; Coll et al. 2009). In order to cope with these various elements MS have in the last decade developed and applied a wide range of different modelling methods to assess the effects of local and regional emission abatement policy options on air quality and human health (e.g. Cuvelier et al. 2007; Thunis et al. 2007; De Ridder et al. 2008; Carnevale et al. 2011; Lefebvre et al. 2011; Borrego et al. 2012; Mediavilla-Sahagun and ApSimon 2013).

4.2 Available Tools

Table 4.1 summarizes the integrated assessment modelling tools most widely used in European countries. They can be classified in different ways according to which blocks of the DPSIR framework they investigate most deeply, and are based on data collected from a variety of public and tool-specific sources.

Table 4.1 Major IAM tools used in Europe

At the EU level, the state of the art in decision-making tools is GAINS (Greenhouse Gas and Air Pollution Interactions and Synergies), developed at the International Institute for Applied Systems Analysis, Laxenburg, Austria (Amann et al. 2011). The GAINS model considers the co-benefits of simultaneously reducing air pollution and greenhouse gas emissions. It has been widely used in international negotiations (as in the 2012 revision of the Gothenburg Protocol) and is currently applied to support the EU air policy review. Some national systems have been developed starting from the GAINS methodology at EU level. Two well-known implementations are RAINS/GAINS-Italy (D’Elia et al. 2009) and RAINS/GAINS-Netherlands (Van Jaarsveld 2004). Another national-level implementation is the FRES model (Karvosenoja et al. 2007), developed at the Finnish Environment Institute (SYKE) to assess, in a consistent framework, the emissions of air pollutants, their processes and dispersion in the atmosphere, their effects on the environment, and the potential for their control and the related costs. An additional important initiative at national level is the PAREST project, in which emission reference scenarios until 2020 were constructed for PM and for aerosol precursors, for Germany and Europe (Builtjes et al. 2010). The ROSE model (Juda-Rezler 2004) has been developed at Warsaw University of Technology (WUT) for Poland. ROSE is an effect-based IAM comprising a suite of models: an Eulerian grid air pollution model, statistical models for assessing environmental sensitivity to sulphur species, and an optimization model based on modern evolutionary computation techniques.

At the urban/local scale a few integrated assessment models have been developed and applied (e.g. Vlachokostas et al. 2009; Zachary et al. 2011; Mediavilla-Sahagun and ApSimon 2013). The main goal of RIAT (Carnevale et al. 2012) is to compute the most efficient mix of local policies required to reduce secondary pollution exposure, in compliance with air quality regulations, while accounting for the characteristics of the area under consideration. RIAT solves a multi-objective optimization in which an air quality index is minimized subject to a constraint on the emission reduction implementation cost. It is described in more detail in the following chapter. The Luxembourg Energy Air Quality (LEAQ) integrated assessment tool (Zachary et al. 2011) focuses on projected energy policy and the related air quality at the urban and small-nation scale. The tool was initially developed for the Grand Duchy of Luxembourg, but is flexible and could be adapted to any city with sufficient information on energy use and the relevant air quality. The UKIAM model (Oxley et al. 2003) has been developed to explore attainment of UK emission ceilings while meeting other environmental objectives, including urban air quality and human health as well as natural ecosystems. Nested within the European-scale ASAM model (Oxley and ApSimon 2007), UKIAM operates at high resolution, linked to the BRUTAL transport model for the UK road network to provide roadside concentrations and to explore non-technical measures affecting traffic volumes and composition.
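The cost-constrained optimization at the heart of tools such as RIAT can be illustrated with a deliberately small sketch: given a handful of hypothetical abatement measures (all names, costs and concentration reductions below are invented for illustration), find the affordable mix that maximizes the concentration reduction. Real IAMs use formal multi-objective optimization over many measures; exhaustive search is only viable at this toy scale.

```python
from itertools import combinations

# Hypothetical measures: (name, annual cost in M EUR, PM2.5 reduction in ug/m3)
MEASURES = [
    ("low-emission zone",         4.0, 0.8),
    ("domestic wood-burning ban", 2.5, 1.1),
    ("bus fleet renewal",         6.0, 0.9),
    ("industrial scrubbers",      3.0, 0.6),
]

def best_mix(budget):
    """Exhaustively search the measure subsets affordable within `budget`
    and return the subset giving the largest concentration reduction:
    a toy stand-in for RIAT-style cost-constrained optimization."""
    best = ((), 0.0)
    for r in range(len(MEASURES) + 1):
        for subset in combinations(MEASURES, r):
            cost = sum(m[1] for m in subset)
            gain = sum(m[2] for m in subset)
            if cost <= budget and gain > best[1]:
                best = (tuple(m[0] for m in subset), gain)
    return best
```

With a 7.0 M EUR budget, the search selects the low-emission zone plus the wood-burning ban (total cost 6.5, reduction 1.9), illustrating how the cost constraint shapes the optimal mix.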

4.3 Areas for Future Research of DPSIR Blocks

This section identifies limitations of the current assessment methods and proposes key areas to be addressed by research and innovation. It is organized into several sub-sections, each corresponding to a specific building block of the DPSIR scheme.

4.3.1 Drivers (Activities)

Considerable weaknesses were identified for the DRIVERS block, for all activity sectors contributing to local-scale emissions: not only for power plants, road traffic and residential combustion, but also for agriculture, non-road traffic and machinery. An important future research line should be devoted to the integration of activity inventories at different scales. At the moment, inconsistencies exist between local/regional and EU-level data collection methods and tools, and this prevents the implementation of a fully integrated approach connecting the various governance scales. While activity values are usually available at the international/national level, this is not the case at regional/local scales, where only emission inventories (PRESSURES) are compiled.

A further key issue for future research relates to activity evolution. On the one hand, one would certainly like to improve the estimation of how local economic sectors will develop and adapt in the future, taking into account both internal factors, such as economic downturns, and external ones, such as climate change. This means considering new land use policies (activity location) as part of the IAM problem. On the other hand, since a perfect prediction of activity evolution is out of the question, new methods to deal with uncertain predictions (ensemble modelling, risk aversion, …) have to be developed and should possibly become standard.

4.3.2 Pressures (Emissions)

In the IAM database collected by APPRAISAL, 70 % of the respondents identified emission values as the main weakness of their modelling approach. Quantifying the effectiveness of specific abatement measures within a zone presumes that the emission inventory is disaggregated in sufficient detail, both spatially and by category, to properly represent the emission abatement measures. This level of detail is unfortunately lacking in most inventories, leading to uncertain estimates of the effects of measures. The official national and European (EMEP) emission inventories only contain emission totals for the member state as a whole (or, alternatively, gridded data with only SNAP macro-sector detail). Almost all studies focusing on local/urban scales identify, as a major issue, the lack of comprehensive, accurate and up-to-date emission data from bottom-up emission estimation methods. Relevant information on good practice for compiling such local emission inventories can be found in the guidelines of the FAIRMODE workgroup on ‘Urban emissions and Projections’ and in the report on ‘Integrated Urban Emission Inventories’ of the Citeair II INTERREG project (http://www.citeair.eu/).

There is a need for general methodologies for emission inventories that allow:

  • Consistent harmonization of bottom-up and top-down emission inventories, to allow “seamless” integration of measures from local to EU level, and vice versa;

  • Development of approaches to improve the quality of emission inventories, to ‘validate’ them and to assess the emission level uncertainty (inverse modelling, source-apportionment methods, new model chains to describe projections, …);

  • Adaptation of disaggregation coefficients (in space and time) to regional and local scales, especially for CO, PM and NH3 emissions.
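The last point, adapting spatial disaggregation coefficients, can be sketched in a deliberately minimal form: a national emission total is distributed across grid cells proportionally to a proxy variable such as population or road density. The function name and numbers below are hypothetical illustrations, not an established method.

```python
def disaggregate(national_total, proxy):
    """Top-down spatial disaggregation: split a national emission total
    across grid cells proportionally to a proxy variable (e.g. population
    or road density). Mass is conserved by construction."""
    weight_sum = sum(proxy.values())
    return {cell: national_total * w / weight_sum
            for cell, w in proxy.items()}
```

The open research questions above concern precisely the choice and local adaptation of such proxies, since a proxy that works nationally (e.g. population for residential heating) may misplace emissions at the local scale.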

Additionally, emission projections need improved data consistency: for instance, the transport sector still lacks data on the real vehicle fleet composition (especially the split between vehicle age categories and engine types). A finer description of biogenic emissions is also required, making better use of data on land use, meteorology and topography (slopes and orientation) for the species that can effectively be taken into account, particularly in mountainous and coastal areas.

Emission factors are another critical point deserving deeper consideration, in particular to characterize PM components (e.g. BC, metals, UFP, wildfire contributions) and to allow computing the emissions of other gaseous pollutants (VOC, SLCP, reactive nitrogen), of HFCs from refrigeration and air-conditioning equipment, and of NO2 from catalytic converters of cars, as well as emissions resulting from agricultural fertilizers. Another important improvement would be the splitting of aggregated road traffic emission factors to account for the continuous changes and evolution of real vehicle fleets at local, regional and higher levels.

4.3.3 State (Concentration Levels)

Key areas to be addressed by research and innovation in the STATE module refer to both actual measurements and modelling tools.

From the point of view of measurements, we suggest developing a stronger integration of ground-based and remote-sensing monitoring methods, both to assess the “current” AQ situation at a wider scale and to improve the understanding of the composition of the various PM fractions.

As to models, in order to better assess the AQ state (and the associated health impacts), research should be oriented towards representing AQ at a very detailed scale. This could be done either through the use of Computational Fluid Dynamics (CFD) to explicitly represent local and street levels, or by developing sub-grid scale models and parameterizations within Chemical Transport Models. Concerning meteorological models, better use of urban modules in mesoscale models would benefit both regional and more local studies, and help to link models at different scales.

Modelling the urban or local scales requires the inclusion of specific small-scale processes, but also consideration of the influence of larger-scale effects. This remains a challenge, because common practice is still largely based on applying mesoscale models to urban areas without proper urban parameterizations, and on Gaussian models that remain limited even with the latest developments.

The use of CFD models to simulate urban areas, forced by a mesoscale model, is a current research area, still subject to strong limitations because of the high demand for computing time. It is still impossible to simulate a full year with this modelling approach without several simplifying assumptions. In the future these limitations could be overcome, and the development of a proper link between mesoscale and CFD models should therefore be considered a key research area.

This said, there are still some processes that require a better description within the models. In general, air quality models tend to underestimate peak PM concentrations while exceedances for PM are often considered the most meaningful index in terms of health impact. Further research is required to improve modules for describing windblown dust, resuspension and the formation and fate of secondary organic aerosols. Significant scientific uncertainties also remain regarding the relative contributions of the major components of fine PM, especially organic carbon and metals/dust. In particular, substantial uncertainties in gas-phase and aqueous-phase chemistry mechanisms remain, including key inorganic reactions, aromatic and biogenic reactions and aqueous-phase chemistry. Future research might also include stratospheric chemistry as the spatial domain for air quality models increases when climate applications are considered. The exchange processes with the surface should be further improved considering for example surface bidirectional exchange (ammonia, mercury or polyaromatic hydrocarbons) or the interaction with vegetation, and models have to better couple physics (meteorology) and chemistry processes. This is not only relevant for connecting air quality and climate change modelling, but it is also important when moving to smaller scales (<1 km) where the meteorological models start to resolve turbulent eddies.

Measurements contain valuable information that can be used as a complementary input to modelling results. It is striking that in 40 % of the studies reported to APPRAISAL, measurement data were not used at all, not even for model evaluation. This is clearly a point where air quality assessment reports, and more specifically air quality plans, could be improved. Even though they carry an intrinsic imprecision, monitoring data have the clear advantage of capturing field concentration levels with much more accuracy than model results. The main question arising in IA applications is: how can these measurement data be used most appropriately? Most model results in IA studies deal with future projections under certain policy options; by definition, no measurement data are available for such future estimates. A key approach to this problem is to use measurement data in combination with model results at least for the reference case of a recent year, which is most often used as the starting point of the IA exercise. This procedure is referred to as “model calibration” or “data assimilation”.

Discussion arises when this combined information has to be used for the simulation of policy scenarios. The use of data assimilation corrections (or calibration factors) as “relevant” information for scenario runs is generally considered appropriate. However, specific and well-defined methodologies to do so are not at hand. One possible approach is to assess the simulated concentration changes of a set of specific policy options in relation to the reference case/year. The resulting concentration changes (so called “deltas”) can then be applied on top of the calibrated or data assimilated concentration fields of the reference year (see for example Kiesewetter et al. 2013). However, more research is required to pin down appropriate methodologies to combine reference year measurements with modelling results for future policy scenarios.
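The “delta” approach described above can be written down in a few lines: for each grid cell, the modelled concentration change between scenario and reference runs is added on top of the data-assimilated reference field. This is a minimal sketch under the assumption that fields are stored as cell-to-concentration dictionaries; the names are illustrative.

```python
def scenario_field(assimilated_ref, model_ref, model_scn):
    """Apply modelled concentration changes ('deltas') on top of the
    data-assimilated reference field, cell by cell (cf. the approach of
    Kiesewetter et al. 2013 discussed above). Each argument maps a grid
    cell identifier to a concentration."""
    return {cell: assimilated_ref[cell] + (model_scn[cell] - model_ref[cell])
            for cell in assimilated_ref}
```

For example, a cell measured/assimilated at 22 with a modelled reference of 30 and a modelled scenario of 27 yields 22 + (27 − 30) = 19: the model supplies the change, the measurements anchor the level.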

Model evaluation is inherent to all these developments and also to common modelling practice. There are already several reported and applied procedures to evaluate models (including model intercomparison exercises), but with different purposes and focusing on particular types of models and/or applications. There is enough information to provide a standardized evaluation protocol organized according to different modelling needs and characteristics. This protocol would be particularly important for stakeholders who need to understand model results in order to decide and implement air quality improvement measures. FAIRMODE activities are addressing this challenge, but a stronger focus on the urban and local scales is needed.

Optimization problems cannot embed full 3D deterministic multi-phase models describing the nonlinear dynamics linking precursor emissions to air pollutant concentrations, because of their computational requirements. IAMs therefore rely on simplified relationships between emissions and air quality, called “source/receptor (S/R) relationships” (or “surrogate models”). These models can be either linear or nonlinear, and examples of both approaches can be found in the literature. Future research will need to extend surrogate model approaches to properly describe the most important chemical and meteorological processes at the appropriate scale, accounting for potential non-linearity. Moreover, it will need to focus on proper “Design of Experiments” methods, that is, the way in which the CTM simulations used to identify the surrogate models should be planned. Such designs need, on the one hand, to maximize the information used to identify S/R relationships and, on the other hand, to limit the number of CTM simulations required to derive them.
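A linear S/R surrogate and a minimal design of experiments can be sketched as follows: one base CTM run plus one run per precursor, each perturbing a single precursor, suffice to identify the slopes of c = c0 + Σ aᵢ·ΔEᵢ by finite differences. This is a deliberately simple illustration (real surrogates must also cope with non-linearity, as noted above); all data structures and values are hypothetical.

```python
def identify_sr(base_run, perturbed_runs):
    """Identify a linear source/receptor surrogate from a base CTM run and
    one single-precursor-perturbation run per precursor.
    base_run: (emissions dict, receptor concentration); perturbed_runs: list
    of the same shape. Returns base concentration, base emissions, slopes."""
    e0, c0 = base_run
    slopes = {}
    for e, c in perturbed_runs:
        (p,) = [k for k in e if e[k] != e0[k]]   # the perturbed precursor
        slopes[p] = (c - c0) / (e[p] - e0[p])
    return c0, e0, slopes

def predict(c0, e0, slopes, emissions):
    """Evaluate the surrogate at a new emission scenario."""
    return c0 + sum(s * (emissions[p] - e0[p]) for p, s in slopes.items())
```

Once identified, the surrogate replaces the CTM inside the optimization loop, which is exactly why the number and placement of the identification runs (the design of experiments) matters.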

Finally, integrated assessment long-term studies should take into account both air quality and climate change issues. In this framework, it is important to develop the use of future meteorological simulations for running AQ models. A challenge is the development in IAM of online chemical transport models, which allow the study of feedback interactions between meteorological/chemical processes within the atmosphere, and thus take into account AQ/climate change connections.

4.3.4 Impact (Human Health)

Traditionally, modelling tools have addressed air quality assessment issues including dispersion and chemistry, but have rarely also considered exposure or health indicators. However, Health Impact Assessment (HIA) should be part of integrated assessment, as it usually involves a combination of procedures, methods and tools by which an air quality policy can be judged in terms of its societal impact. Quantification of health effects in HIA (Pope and Dockery 2006) is particularly important: knowing the size of an effect helps decision makers distinguish between the details and the main issues that need to be addressed, and facilitates decision making by clarifying the trade-offs that may be entailed. Secondly, adding up all positive and negative health effects using appropriate modelling methods allows for the use of economic instruments such as cost-effectiveness analysis, which further aids decision making.

Exposure-response functions (which quantify the change in population health due to a given exposure) are identified as the main sources of uncertainty in an integrated assessment (Tainio 2009), but it is also important to further explore the “complete individual exposure to air pollution” pathways. “Complete” here means indoor as well as outdoor air pollution over a full 24-hour period; “individual” means monitoring air quality at the person level, possibly using portable, easy-to-wear monitors. These two factors, together with a dynamic view of exposure variations, will result in a more comprehensive view of individual exposure. If this could be combined with human biomonitoring, i.e. measuring the concentration of a certain pollutant or one of its by-products in the human body, it would enrich our current knowledge of the impact of air pollution on human health. This would clearly necessitate dynamic maps of population and pollution (i.e. considering hourly population living/working habits depending on age, gender, activity, … and modelling air quality maps with the same level of detail).
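How an exposure-response function enters the impact calculation can be sketched using the log-linear form commonly found in HIA work; the coefficient, population, baseline rate and concentration change in the usage example below are illustrative values, not authoritative epidemiological inputs.

```python
import math

def attributable_cases(pop, base_rate, beta, delta_c):
    """Sketch of a log-linear exposure-response calculation: beta is the
    concentration-response coefficient (per ug/m3), delta_c the exposure
    change. The attributable fraction 1 - exp(-beta * delta_c) is applied
    to the baseline case count pop * base_rate. All inputs hypothetical."""
    af = 1.0 - math.exp(-beta * delta_c)   # attributable fraction
    return pop * base_rate * af
```

For a population of one million, a baseline rate of 1 %, beta = 0.006 and a 5 µg/m³ exposure change, this yields roughly 300 attributable cases; the sensitivity of such figures to beta is precisely why exposure-response functions dominate IAM uncertainty.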

Most plans and projects focus on long-term exposure, which has a much greater public health impact. However, not all acute effects are included in long-term impacts, and short-term impacts on morbidity and mortality might therefore be underestimated. Mortality and morbidity factors of long-term NO2 and O3 exposure should also be investigated, as well as NO2 exposure effects in particularly polluted environments (e.g. busy roads).

Overall, the most critical element in respect of HIA is the lack of general methods to deal with the multi-pollutant case. In all urban areas, in fact, citizens are exposed to a cocktail of different pollutants, the combined effect of which is largely unknown.

4.3.5 Responses (Methodologies to Design Measures)

The RESPONSES module includes methodologies that can be formalized and implemented to design AQ plans. This relates, on the one hand, to the type of decisions that can be taken at local level and how they can be integrated with other policy domains (decision variables) and, on the other hand, to the methodologies used to select such decisions (decision problem). The two aspects are clearly interrelated: for instance, the definition of the decision variables can affect the formalization of the decision problem.

As to the first aspect, the inclusion of socio-economic aspects in the decision problem formulation (e.g. the public acceptance of different measures) and the land planning aspect should be considered in AQ plans. Such plans should also be tightly connected with other policy areas (e.g. energy, transport, etc.) and related plans.

Possibly the main challenge in this field is the inclusion of “non-technical/efficiency measures” among the planning options. The use of these measures is currently limited to scenario analysis, because it is very difficult to estimate their removal efficiencies and costs, particularly because they impact many other sectors besides air quality. For instance, car sharing has the potential to reduce not only exhaust emissions, but also accidents and noise. How can the overall cost be weighed against benefits spread across such diverse sectors? An additional complexity is related to the use of these measures in an optimization framework; from this point of view, new formal approaches need to be devised.

As to the problem formulation, one major area of investigation for the future is the consideration of the dynamic evolution of the physical, economic and social environment. All current approaches are static, in the sense that they devise a solution to be reached within a given time horizon (say, for instance, by 2020). However, the system we want to control is non-stationary (consider, e.g., the effect of the current economic crisis), and it may therefore be more supportive for decision makers to know where and when to invest with the highest priority in order to follow a certain path to the target, with the ability to adapt decisions over time should the system evolve differently from projections. This implies being able to flexibly incorporate the advent of new technologies into the plans, and to determine the cost of scrapping old plants to substitute them with newer ones. It essentially means designing a new generation of Decision Support Systems, intended more as control dashboards than as planning tools. Related to the dynamic problem is also the issue of how to evaluate the future benefits of air quality investments. While economics has long established how to account for investment costs spread over a given future period, this is more difficult for benefits that are not monetizable or that extend into the future for an unknown period. How can we account for a 20 % improvement of an air quality index ten years from now? What is the benefit of a reduction of PM10 today that will decrease cardiovascular problems in a population sometime in the future?

A more synergic use of Source Apportionment (SA) and optimization approaches should also be fostered. SA could limit the degrees of freedom of cost-effectiveness analysis, constraining the optimal solution to consider only the subset of possible measures previously identified by applying SA. Conversely, optimization approaches can implicitly perform source apportionment by establishing the most cost-effective emission reductions and identifying the source categories associated with these reductions, without the need to monitor and chemically characterize air pollutants.

4.4 Areas for Future Research of IAM Systems

A number of directions for future research have been identified by considering the IAM as a whole, in particular related to the integration of IAM scales and the uncertainty assessment.

One point is certainly the development of methodologies integrating the widely used source-apportionment and modelling approaches, in order to quantify, within a specific domain, the effective potential of regional/local policies and of European/national ones.

Different models are designed and implemented to address different spatial scales (from regional, to local, to street level). Future research should study how to link these scales and build an IAM system able to consistently connect the different “scale-dependent” approaches, modelling policy options from the regional, to the local, to the street scale.

As is already done with CTMs, a research direction could be devoted to developing nesting capabilities for IAMs (both one-way and two-way) to easily manage EU/national constraints at regional level and, at the same time, to provide feedback from the regional to the EU/national scale.

At the moment, national climate change policies simply dictate some constraints on local air quality plans, but it is well known that local air quality policies (e.g. the reduction of aerosols) can also have consequences for climate change. In a “resource-limited” world, maximizing the efficiency of actions (to obtain win-win solutions for AQ and CC) will become extremely important and will require guidelines to integrate climate change policies (normally established at national or even international levels) with air quality plans developed at regional/local level.

Uncertainty estimates are an essential element of integrated assessment, as a whole. Uncertainty information is not intended to directly dispute the validity of the assessment estimates, but to help prioritize efforts to improve the accuracy of those assessments in the future, guiding decisions on methodological choices with respect to the tools that are being used.

In order to assess the total uncertainty and evaluate the performance of an IAM system, the uncertainty related to the different modelling components of the system (meteorological modelling, air quality modelling, exposure modelling, cost-benefit modelling) has to be quantified separately. In the literature, there are very few works concerning the application of uncertainty/sensitivity analysis to IAMs considered as whole systems. The most complete works in this respect are due to Uusitalo et al. (2015), who present a fairly complete methodological review of possible applications of uncertainty and sensitivity analysis in IAM, and to Oxley and ApSimon (2007), who reviewed the issues related to uncertainty in IAM, focusing particularly on space and time resolution and on the problem of uncertainty propagation in integrated systems. More generally, all works, with the possible exception of Freeman et al. (1986), use a numerical approach based on Monte Carlo simulation at different levels of complexity. This is probably due to increasing computational capacity and to the relative newness of the problem in this context, leading scientists to start directly from numerical approaches for both uncertainty and sensitivity analysis.
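The Monte Carlo approach mentioned above can be sketched by propagating input uncertainty through a toy emission → concentration → impact chain. All distributions and magnitudes below are invented for illustration; a real study would sample the actual model components.

```python
import random
import statistics

def mc_uncertainty(n=10000, seed=1):
    """Monte Carlo propagation of input uncertainty through a toy
    emission -> concentration -> impact chain (illustrative magnitudes).
    Returns the mean and standard deviation of the impact indicator."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        emis = rng.gauss(100.0, 15.0)   # uncertain emission total
        sr = rng.gauss(0.05, 0.01)      # uncertain S/R coefficient
        conc = emis * sr                # concentration via linear surrogate
        erf = rng.gauss(0.006, 0.001)   # uncertain exposure-response slope
        out.append(conc * erf)          # impact indicator
    return statistics.mean(out), statistics.stdev(out)
```

Even this toy chain shows why the full problem is hard: the output spread combines three input uncertainties multiplicatively, and a complete IAM would need such sampling across every component simultaneously.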

As the chemical and physical processes involved are non-linear, and some uncertainties may compensate each other (Carnevale et al. 2016), combining all of an IAM's individual uncertainties remains a scientific challenge. Calculating a total uncertainty would require a great number of simulations, accounting for all possible combinations. This complexity does not allow for setting straightforward quality criteria for IAMs, even though IAM is considered an important policy tool.

In more detail, some of the issues still to be investigated on IAM concern:

  • The optimization algorithms. The decision problem is solved by means of optimization algorithms. How does the optimization algorithm bias the determination of effective policies?

  • The planning indicators for human, ecosystem and material exposure. The decision problem determines the abatement measures or other actions that optimize the objectives while complying with the physical, economic and environmental constraints. Objectives and environmental constraints are typically indicators of human, ecosystem and material exposure. How do different sets of indicators impact policy design?

  • The source/receptor relationships. What is the uncertainty of source/receptor relationships? What is the sensitivity of the decision problem solutions to different source/receptor relationships?

  • The emission and climatic conditions. Source/receptor relationships are identified by processing CTM simulations for different reference years, i.e. for specific emission and meteorological scenarios. The overall results of an IAM application are in fact variations with respect to these conditions, which will probably not be exactly replicated in the future, when the decisions are implemented. How do the assumptions about these reference years impact the design of policies?

In general, all these points highlight the need to define a set of indexes and a methodology to measure the sensitivity of the decision problem solutions. It is worth underlining that, while for air quality models sensitivity can be measured by referring in one way or another to field data, for IAMs this is not possible, since an absolute “optimal” policy is not known and most of the time does not even exist. The traditional concept of model accuracy must thus be replaced by notions such as the risk of a certain decision, or the regret of choosing one policy instead of another. Indeed, as long ago as 2002, the “UNECE workshop on uncertainty treatment in integrated assessment modelling” (UNECE 2002) concluded that policy makers are mainly interested in robust strategies. Robustness implies that optimal policies do not change significantly due to changes in the uncertain model elements. Robust strategies should avoid regret investments (no-regret approach) and/or the risk of serious damage (precautionary approach) (Amann et al. 2011).
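The regret notion above can be made concrete with a minimal minimax-regret selection over a hypothetical policy/scenario cost table (all numbers illustrative): a policy's regret in a scenario is its cost minus the best cost achievable in that scenario, and the robust choice is the policy whose worst-case regret is smallest.

```python
def minimax_regret(payoff):
    """Select the policy minimizing maximum regret across uncertain
    scenarios. payoff[policy][scenario] = cost (lower is better).
    A sketch of the 'no-regret' idea, not a full robust-decision method."""
    scenarios = next(iter(payoff.values())).keys()
    # best achievable cost in each scenario, over all policies
    best_in = {s: min(p[s] for p in payoff.values()) for s in scenarios}
    # each policy's worst-case regret across scenarios
    regret = {name: max(p[s] - best_in[s] for s in scenarios)
              for name, p in payoff.items()}
    return min(regret, key=regret.get), regret
```

For instance, with payoffs A = {s1: 10, s2: 30} and B = {s1: 15, s2: 20}, policy A is best in scenario s1 but carries a regret of 10 in s2, while B's worst-case regret is only 5, so the minimax-regret choice is B even though A can be optimal.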