Quantitative Analysis
We carried out exploratory quantitative analyses to identify potential drivers of policy monitoring, drawing on the three perspectives discussed above. We began by regressing the number of climate policies and measures for which member states reported quantified emissions reductions (projections for the year 2020) in 2009, 2011, 2013, 2015, and 2017 on a series of plausible drivers of policy monitoring practice derived from the literature. We interpret the generation of quantified emissions reductions for individual policies, rather than a general aggregate or loose qualitative statements, as a sign of serious commitment to policy monitoring and to the implementation of the monitoring decision/regulation.Footnote 6 Moreover, we compiled data on the punctuality of reporting under the monitoring mechanism, our second dependent variable. Member states must report on their climate policies by 15 March every other year, and we categorised countries that delivered their report in the month of March as on time, thus allowing for minor administrative delays (for the full data, see Table 4 in the online Appendix). Reporting punctuality can be seen as a crude proxy for the implementation of policy monitoring because states that fail to provide information on time probably face difficulties in organising the policy monitoring and/or lack resources for the task.
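To make the coding of the second dependent variable concrete, the following minimal sketch shows how the punctuality dummy could be derived from submission dates; the column names and example dates are our own illustrative assumptions, not the authors' dataset.

```python
import pandas as pd

# Illustrative sketch: deriving the punctuality dummy from submission
# dates. Column names and dates are assumptions, not the original data.
reports = pd.DataFrame({
    "country": ["AT", "BE", "BG"],
    "year": [2017, 2017, 2017],
    "submitted": pd.to_datetime(["2017-03-14", "2017-03-29", "2017-06-02"]),
})

# The legal deadline is 15 March of the reporting year; any submission
# within (or before) March counts as on time, allowing for minor
# administrative delays.
reports["on_time"] = (
    (reports["submitted"].dt.year == reports["year"])
    & (reports["submitted"].dt.month <= 3)
).astype(int)
print(reports)
```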
Our independent variables (i.e., drivers of policy monitoring) were chosen based on the review in Sect. 3 to reflect variation in institutional settings, policy implementation, and efforts to deliver high-quality policy monitoring. To capture institutional effects, we included a variable indicating previous experience with monitoring and evaluating national climate change policies (none, some, or extensive experience) based on data from a European Commission-funded report on quantifying the emissions effects of policies and measures (AEA et al. 2009). A dummy variable indicates the legislative shift from the Monitoring Mechanism Decision (MMD: 2004–2012) to the Monitoring Mechanism Regulation (MMR: 2013–2019). We expect that the number of policies with quantifications initially decreased with the introduction of the MMR (all else being equal), since the MMR explicitly allows the bundling of policy instruments that achieve their effects in combination. We model learning effects through a “trend” variable that increases linearly with each obligatory round of climate change policy reporting from 2009 until 2017. In terms of the capacity to comply with policy monitoring requirements, the regression models include governmental final consumption expenditures as a proxy for the national bureaucracy’s financial capabilities and a measure of political constraints (the expectation being that the more constraints, the less reporting). In terms of willingness to comply, we further consider the seat share of green parties and the degree of climate change concern among the national population (both expected to increase quantitative reporting). Finally, we control for each country’s per capita CO2 emissions, gross domestic product (GDP) per capita, and population size. All time-varying variables were entered with a 1-year lag because most data collection and preparatory work takes place in the year prior to submission. We estimated negative binomial regression models for the number of reported policies and measures with quantified emissions reductions and logit models for the punctuality of reporting. Because our observations within countries are not independent, we used standard errors adjusted for clustering within countries. Table 1 summarises our data, their sources, and the operationalisation of our variables.
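To make the estimation strategy concrete, the sketch below shows how such models could be fitted with statsmodels in Python. It is a sketch under stated assumptions: the column names (n_quantified, on_time, n_policies, and so on), helper functions, and formula strings are ours, not the authors'; only the model families, the 1-year lag, and the country-clustered standard errors follow the description above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Column names, helper functions, and formula strings are illustrative
# assumptions; model families, lag structure, and clustering follow the
# paper's description.

def add_one_year_lags(panel: pd.DataFrame, annual: pd.DataFrame,
                      covariates: list) -> pd.DataFrame:
    """Attach covariate values measured one calendar year before each report.

    `panel` has one row per country and biennial reporting round;
    `annual` holds yearly covariate values per country.
    """
    lagged = annual[["country", "year"] + covariates].copy()
    lagged["year"] += 1  # value from year t-1 lines up with report year t
    lagged = lagged.rename(columns={c: c + "_lag" for c in covariates})
    return panel.merge(lagged, on=["country", "year"], how="left")

def fit_models(panel: pd.DataFrame, covariates: list):
    """Estimate both exploratory models with country-clustered SEs."""
    panel = panel.dropna(subset=[c + "_lag" for c in covariates])
    rhs = ("n_policies + experience + mmr + trend + "
           + " + ".join(c + "_lag" for c in covariates))
    cluster = {"groups": panel["country"]}

    # Negative binomial regression for the count of reported policies
    # with quantified emissions-reduction projections.
    quantifications = smf.negativebinomial(
        "n_quantified ~ " + rhs, data=panel
    ).fit(cov_type="cluster", cov_kwds=cluster)

    # Logit model for punctual submission of the biennial report.
    punctuality = smf.logit(
        "on_time ~ " + rhs, data=panel
    ).fit(cov_type="cluster", cov_kwds=cluster)
    return quantifications, punctuality
```

Fitting both models on the same right-hand side mirrors the paper's design, in which the count model and the punctuality model share the same set of candidate drivers.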
Table 1 Summary statistics and data sources

Our first regression models (see Table 2) consider potential determinants of the number of policies with quantified, projected emissions reductions (for 2020). They reveal that the number of policies with quantified projections increases with the overall number of reported policies. More precisely, for every additional climate policy reported, the number of quantifications increases by 1.3%, or almost 50% for a standard deviation increase in the number of reported policies. Put simply, countries that report more climate policies also quantify more of them than countries that report fewer policies and correspondingly generate fewer quantifications.
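As a quick consistency check of these two figures, note that effects in a negative binomial model compound multiplicatively, so the per-policy incidence-rate ratio and the per-SD effect jointly pin down the standard deviation of the number of reported policies. The SD below is our back-calculation, not a figure reported in the paper.

```python
import math

# In a negative binomial model, a one-SD increase of s policies scales
# the expected count by irr ** s.
irr_per_policy = 1.013  # +1.3% expected quantifications per extra policy

# Solving 1.013 ** s = 1.5 recovers the SD implied by the reported
# "almost 50%" figure (our derivation, not a number from the paper):
s = math.log(1.5) / math.log(irr_per_policy)
print(round(s, 1))  # ~31.4 reported policies per standard deviation
```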
Table 2 Determinants of the number of reported climate policies with reduction projections

Second, greater experience with policy monitoring and evaluation in 2009 has translated into greater levels of reporting across our observation years, pointing to a certain level of institutional path dependency in policy monitoring. In other words, countries that were ahead 10 years ago remain, by and large, ahead today.
Third, the policy monitoring “trend” variable shows a highly significant positive association with the dependent variable, hinting at a learning effect in policy monitoring over time (for further qualitative evidence, see Sect. 4.2). Based on model 3, with every biennial reporting cycle, the number of quantifications increases by around 23% (all other factors held constant). This is plausible: countries that begin to quantify are likely to replicate the approach across policy domains in order to achieve coherent policy monitoring and reporting.
Fourth, the model demonstrates that the introduction of the MMR in 2013 (in effect for the reporting rounds of 2015 and 2017) decreased the expected number of quantified projections by around 60%. In fact, that institutional shift seems to be associated with a reduction both in the total number of reported policies and in the share of policies with quantified projections (Fig. 2). Because we control for the total number of instruments, we interpret this finding as an adjustment effect related to the aforementioned explicit possibility of reporting “bundles” of policies and their emissions reductions under the MMR. All other variables, including government expenditure, political constraints, green seats, climate concern, CO2 emissions, GDP, and population, returned nonsignificant results. The online Appendix features several robustness checks, such as the inclusion of a lagged dependent variable, a government’s environmental policy position, and country fixed effects (see Table 5). These changes do not affect our main findings.
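For readers who want to map the reported percentage effects back onto the log scale of the negative binomial model, a short sketch: the incidence-rate ratios come from the percentages above, and the implied coefficients are our back-calculation.

```python
import math

# Back-calculating log-scale coefficients from the reported effects.
trend_irr = 1.23  # "+23% quantifications per biennial reporting cycle"
mmr_irr = 0.40    # "around 60% fewer quantified projections under the MMR"

print(round(math.log(trend_irr), 2))  # ~0.21: implied "trend" coefficient
print(round(math.log(mmr_irr), 2))    # ~-0.92: implied MMR-dummy coefficient
```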
Our second exploratory analysis models the determinants of punctual reporting (Table 3). The results should be interpreted with caution, given very limited extant knowledge of the determinants of timely reporting. We observe that prior experience with policy evaluation and monitoring, as well as higher levels of government expenditure, are positively and significantly associated with punctual reporting. The latter effect, even though statistically significant only at the 10% level, is quite substantial, with a 1% increase in government expenditure being associated with a 26% increase in the odds of punctual reporting. Furthermore, there is a marginally significant effect in model 1, indicating that the introduction of the MMR is associated with an almost 80% decrease in the odds of punctual reporting. Slower reporting may, at least in part, originate from adjusting to the new policy monitoring system (similar to the arguments on this independent variable above). All the other variables yielded nonsignificant results (see model 3 in Table 3).
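Because odds ratios are easily misread as changes in probabilities, the sketch below translates the two reported odds effects into illustrative probability shifts; the baseline probability of 0.5 is an assumption chosen purely for illustration.

```python
# Translating reported odds ratios into illustrative probability shifts.
MMR_OR = 0.20  # "almost 80% decrease in the odds" of punctual reporting
GOV_OR = 1.26  # +26% odds per 1% increase in government expenditure

def shift_probability(p_baseline: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability."""
    odds = p_baseline / (1.0 - p_baseline) * odds_ratio
    return odds / (1.0 + odds)

print(round(shift_probability(0.5, MMR_OR), 2))  # ~0.17 under the MMR
print(round(shift_probability(0.5, GOV_OR), 2))  # ~0.56 after +1% expenditure
```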
Table 3 Determinants of punctual submission of policy monitoring reports

Qualitative Analysis
We interviewed four staff members of the EEA to better understand the policy monitoring patterns from the perspectives of institutional settings, implementation, and quality. This section presents our findings. The main institutional change at the EU level was the 2013 shift from basing the monitoring mechanism on a decision (the Monitoring Mechanism Decision [MMD]) to basing it on a regulation (the Monitoring Mechanism Regulation [MMR]). Although EEA staff report that the choice of legal act was not a main driver of change,Footnote 7 the new rules do affect the nature of the data. As one EEA staff member put it, “[w]e see the clear improvement in the 2017 reporting in comparison to 2015 reporting cycle. Both quantity and quality of the reported information improved.”Footnote 8 In line with this statement, our quantitative analyses also suggest that the immediate impact of the introduction of the MMR was a decline in quantitative and timely reporting in 2015, with improvements then materialising in 2017 (see Sect. 4.1). We do not have data on institutional changes in the member states, except that the shift from a decision to a regulation removed potential ambiguities in reporting obligations introduced by national laws.
As regards implementation, the Commission opted to implement the monitoring mechanism via a regulation rather than a directive in order to ensure that it was applied throughout the EU as consistently as possible. In theory, the Commission oversees implementation. In practice, there is no precise technical definition of what counts as a “policy” that a member state is expected to report (see Footnote 1 above). Member states can thus make discretionary choices in reporting. One EEA staff member noted the concerted efforts by the EEA to assist the member states in reporting, for example through the provision of webinars, workshops, and other technical assistance. The new regulation has also strengthened the coupling (a “compliance cycle”) between effort sharing and greenhouse gas reporting, whereby a member state’s progress is checked against the targets each year.Footnote 9 European Environment Agency staff members stated that the monitoring system “functions well” and highlighted that its implementation has helped them to detect (increasing) member state difficulties in reaching the 2020 targets.Footnote 10 However, progress checking is done against national aggregate greenhouse gas statistics, not against the disaggregated, policy-specific data on which this article centres.
Finally, the EEA staff identified four key drivers of policy monitoring quality. First, they discussed what demotivates member states. One of the challenges is that “the concrete impact and use of this information, as well as the actual benefits of such reporting, might not always be very explicit from member states’ perspective.”Footnote 11 The EEA has therefore attempted to explain the benefits of climate policy monitoring and reporting to the member states, while also conceding that managerial logics around complying with existing guidelines and legislation continue to play an important role.Footnote 12 The second factor concerns the political willingness to engage in policy monitoring. Ex post reporting may be unattractive (see also Schoenefeld and Jordan 2019, p. 370) because, as one EEA staff member put it,
[…] politicians are not always very interested in ex post evaluation results, because this is just about looking at the past and might possibly identify certain failures. Policy makers prefer focusing on future measures and demonstrating that their plans will succeed, for example in reaching their country’s target.Footnote 13
The third factor affecting policy monitoring quality concerns the willingness and ability to mobilise resources for policy monitoring exercises. In some states, especially in times of austerity, policy monitoring may simply not be a key priority, and it may be subject to budget cuts.Footnote 14 Fourth, in the EEA’s view, the availability of commonly agreed methodologies affects policy monitoring quality. “À la carte” standards and methods, as Schoenefeld and Jordan (2017) have put it, are generally unable to generate consistent data; the EEA staff members stressed that considerable methodological guidance exists, although accessing and using these documents can sometimes challenge the member states.Footnote 15
The Links Between Institutions, Implementation, and Quality
Our quantitative analyses and qualitative findings reveal that institutional settings, implementation, and quality are closely interlinked. This becomes especially apparent in the recent EU efforts to bring together its climate and energy policy in the so-called Energy Union (see Knodt 2019).Footnote 16 The regulation on the governance of the Energy Union and climate action (2018/1999), which entered into force in December 2018, integrates two policy areas whose monitoring efforts had developed rather separately at the European level: climate change and environment have been the competence of the Commission Directorate-General (DG) Climate Action and DG Environment, whereas energy policy has been the competence of DG Energy. As one EEA staff member explained, “The Energy Union […] integrates different pillars or dimensions that had been considered a bit separately until now.”Footnote 17 Another EEA staff member vividly described the challenges of integrating monitoring activities across the two policy areas:
They’d [DG Energy] be organised by the oil sector, the nuclear sector, […] the renewables sector, and they didn’t really talk to each other and […] the different parts of the energy industry would have different inroads into DG Energy. But actually get[ting] them to start reporting on outcomes, or environmental outcomes or climate outcomes, is something that […] has been a big change for them.Footnote 18
These sentiments were echoed by another EEA staff member, who explained that “the idea to them [DG Energy] that a small agency in Copenhagen which has environment in its name would start messing around in the high politics of energy was just not OK for them.”Footnote 19 The person went on to say that “it’s unbelievable how much time and effort is being spent on just having our agency working together with a DG.”Footnote 20 Furthermore, the EEA staff member emphasised that understandings of reporting differed significantly in the energy and the climate sectors: while reporting in the energy sector is often industry specific and sometimes based on data from existing sources, climate policy reporting is based on official country-based data that go through quality assurance procedures. Another EEA staff member also stressed the special nature of data emerging from the MMR, because “when data is officially reported to us, it has a different meaning.”Footnote 21 Directorate-General Energy has, according to the EEA staff, had a different reporting approach, sometimes using consultants in order to harvest existing data.Footnote 22 EEA staff described the institutional integration currently underway in the context of the Energy Union as “intensely political,” with “some terrible fights between ministries at national level.”Footnote 23 For example,
One of the kind of observed discussions was around what would happen to the Climate Change Committee in the Energy Union governance proposal where the Commission proposal was to have one committee at EU level, and the countries wanted to pick it apart again and say, no […] we need an energy committee and we need a climate change committee.Footnote 24
Asked about the potential integration of the Policies and Measures Database with other databases, such as MURE on energy efficiency,Footnote 25 one EEA staff member highlighted the “big institutional challenge in integrating these different reporting streams.”Footnote 26 In sum, institutional integration of policy monitoring across policy sectors and databases can generate significant obstacles, at least in the short term.
There are also key questions around who will actively govern policy monitoring in the future (see Schoenefeld and Jordan 2017). One EEA staff member stressed the important role of public institutions in policy monitoring:
I think that in the next 5 years […] this whole monitoring and data part of our work will go through a very rapid evolution, and I think if public institutions are not at the forefront of that for public purpose, it will be private institutions who take over part of the job based on free data, and they will then start to sell the information to others […] also in the public sphere.Footnote 27
Taken together, our interviews illuminate the relevance of institutional settings, implementation, and quality for policy monitoring. These factors also affect the use of policy monitoring data in evaluations, where quality is highly multidimensional (Widmer 2012, p. 263). The significance of the factors is especially clear when institutions are modified through changes in legislation, such as in the case of the Energy Union.