In this section, we look at (i) measures of the quantity of concessional aid and finance, (ii) what we know about effective development finance, and (iii) what measures we have available on quality.
3.1 Measuring Concessional Development Finance and Aid Quantity
Measuring the quantity of aid—or concessional development finance—requires a common definition. Most efforts, and our starting point here, concentrate on the (net) concessional element of governments’ grants or loans to other countries. Calculating this requires data that are consistent between countries and a common methodology for establishing how concessional the lending is. For the 30 members of the OECD’s Development Assistance Committee (DAC), both of these exist in defining official development assistance (ODA) and “other official flows”, though the methodology remains contentious and arguably does not provide a consistent guide to the concessionality of development finance (OECD-DAC 2018). Still, the original aim of the DAC—to agree on standards in order to enable a fair comparison between countries of different economic sizes in supporting development and provide countries with good incentives to do so (Scott 2015)—remains highly relevant.
Beyond the DAC, there are neither common data nor an agreed methodology for calculating concessional finance. There are valuable efforts to consistently and comprehensively measure all financial flows—in particular, the task force on Total Official Support for Sustainable Development (OECD-DAC, n.d.). In February 2019, the task force published “emerging reporting instructions”—usable measures are therefore some way off, and in any case they would not identify the degree of concessionality in that finance and may not be available for all countries.
To calculate an estimate of government concessional finance provided by each country, there seem to be several possibilities. First, we could use government budgets to calculate the cost—to the taxpayer—of a country’s aid and development finance efforts.4 This would reliably estimate the financial effort a country is making to support development, but it relies on there being publicly available information—suitably disaggregated accounts—on a country’s overseas activities. Second, an important element of concessional finance is the funds provided to international development organisations. This could be limited to those “multilateral” institutions with open, accessible accounts or, ideally, also include bilateral agencies—see McArthur and Rasmussen (2018), for example. Third, countries could be surveyed to provide their own estimates of concessional finance. Fourth, if government development finance, and its terms, could be measured, the degree of concessionality could be calculated or estimated (AidData produces figures for China on this basis). A new paper by my colleagues and me combines these latter three approaches to generate a new measure of “Finance for International Development” which enables more consistent comparisons across countries (Mitchell et al. 2020).
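Where loan terms are known (the fourth approach above), the degree of concessionality can be computed as a grant element: the share of the loan’s face value not covered by the present value of its repayments. A minimal sketch, assuming annual payments, equal principal repayments after a grace period, and the classic DAC convention of a 10 per cent discount rate (the function name and the example loan terms are illustrative):

```python
def grant_element(face_value, interest_rate, maturity_years,
                  grace_years, discount_rate=0.10):
    """Grant element of a loan: 1 minus the present value of its
    debt service as a share of face value. Under the classic DAC
    convention (10% discount rate), a loan counted as concessional
    if the grant element was at least 25%."""
    repay_years = maturity_years - grace_years
    annual_principal = face_value / repay_years
    outstanding = face_value
    pv = 0.0
    for year in range(1, maturity_years + 1):
        interest = outstanding * interest_rate
        principal = annual_principal if year > grace_years else 0.0
        pv += (interest + principal) / (1 + discount_rate) ** year
        outstanding -= principal
    return 1.0 - pv / face_value

# An illustrative soft loan: 1% interest, 30-year maturity, 8-year grace
ge = grant_element(100.0, 0.01, 30, 8)
```

A useful sanity check on the logic: a loan whose interest rate equals the discount rate has a grant element of zero, whatever the repayment schedule.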
On the “quantity” of concessional development finance, then, there are some possibilities for making estimates. These will produce imperfect or incomplete figures, but even imperfect estimates may incentivise countries to do better or, ideally, to come together and agree on common definitions.
3.2 What Does Effective Development Finance Look Like?
It is clear from evaluation evidence that aid and development finance can be transformationally positive or, conversely, completely wasted, or even damaging. Waste can occur in spite of good intentions, where interventions turn out to be less effective at reducing poverty than expected, as was the case with microfinance (Roodman 2012, p. 2), or because effectiveness is subordinated to geopolitical or commercial interests, for example in the famous case of the Pergau Dam (Lankester 2012). Aid might also have unintended effects: there is evidence that American food aid prolongs civil conflict (Nunn and Qian 2014). However, aid can also have enormous positive effects. In just under 20 years, Gavi, the Vaccine Alliance, says it has vaccinated 0.7 billion children and prevented 10 million deaths, generating savings of $18 in healthcare costs, lost wages, and lost productivity for each $1 spent (Gavi, the Vaccine Alliance, n.d.).
Countries discussed “aid effectiveness”—specifically the high-level principles and practices by which aid is allocated—and reached agreements in Rome (2003), Paris (2005), Accra (2008), and Busan (2011) (OECD, n.d.-a). Over the past decade, the context altered significantly with, for example: the agreement of the SDGs; the shift in the world’s major economies and concentrations of extreme poverty; and aid providers changing the instruments they use. The principles (OECD, n.d.-b) agreed at Busan were:
Ownership of development priorities by developing countries: Countries should define the development model that they want to implement.
A focus on results: Having a sustainable impact should be the driving force behind investments and efforts in development policy-making.
Partnerships for development: Development depends on the participation of all actors and recognises the diversity and complementarity of their functions.
Transparency and shared responsibility: Development cooperation must be transparent and accountable to all citizens.
Busan also gave birth to the GPEDC.5 The GPEDC now collects data with indicators6 grouped under these four principles. The GPEDC’s online platform enables comparisons between “development partners”, but for some countries, the data is thin, or non-existent (e.g. in the 2017 results, China scored 100 per cent for indicator 1a—“proportion of new development interventions that draw their objectives from country-led results frameworks”—but this is based on just one country’s response). The GPEDC does not attempt to aggregate the indicators into themes, nor overall scores; nor does it rank agencies or countries. We return to measures that do attempt to compare, combine, and rank below.
What does research say about the impact of development finance and the practices that enhance its effectiveness? Much research focusses on GDP growth as the variable of interest, although it is clear that much aid is not targeted at that outcome—in particular, humanitarian aid, which covers 12.8 per cent of aid for DAC countries (OECD, n.d.-c), is most needed where economies are shrinking. Still, in the case of increased levels of education and health, we would expect some positive impact on GDP, and perhaps regression-based analysis can provide insights on effective aid. Howarth (2017) provides a good overview of this literature and concludes:
[T]here is very little evidence to support the “hard” sceptical view that aid actively harms growth. It is, however, now understood that aid is subject to diminishing returns, and increasing it beyond a certain proportion of a recipient’s GDP may have a harmful effect. Expectations about what aid can achieve have also become more realistic.
We are currently undertaking a literature review on the evidence on the determinants of effective aid that will consider issues such as using (recipient) country systems, recipient ownership, predictability, and transparency as traits of aid associated with higher impacts. Similarly, the evidence is clear that tying aid—that is, requiring aid to be spent only with providers from the donor’s own country—reduces its effectiveness,7 but it still features prominently in aid providers’ commitments (Meeks, n.d.).
Which of these principles or practices can and should be measured? And how important are they? Below we move on to efforts to measure and aggregate measures of quality that attempt to bring these together by donor or agency.
3.3 Data Sources for Measuring Development Finance Effectiveness
The evidence and theory about aid effectiveness is all very well, but what can we actually measure?
There are just three main sources that can be used to measure elements of development finance quality consistently across a wide range of countries (OECD-only measures are discussed below). These are:
GPEDC survey—which collects 10 indicators grouped under four themes8;
Listening to Leaders survey—AidData’s (2018) survey has been conducted twice, in 2014 and 2017, and it provides data on the perceived “helpfulness” and “influence” of providers of development cooperation;
The International Aid Transparency Initiative—which is a standard for open data publishing that is available to all countries as well as non-state donors, and it enables some analysis of a large proportion of aid.
We have already seen that the information on the quantity of concessional finance is limited, and that these sources provide a relatively limited picture of the quality of development finance.
The GPEDC survey measures indicators of development finance effectiveness but, as noted above, the results are dependent on response rates. In the 2016 round, this led to patchy coverage, at least in terms of being able to assess some major countries providing assistance—for example, some of China’s results were based on just one response (we are awaiting the details of the 2018 results). In addition, many Southern providers are averse to efforts to define, monitor, and compare development cooperation measures for both political and technical reasons (Besharati and MacFeely 2019). AidData’s (2018) Listening to Leaders survey measures the views of leaders in low- and middle-income countries, but it is not a direct measure of aid effectiveness. The International Aid Transparency Initiative, which also feeds into the GPEDC’s monitoring framework, improves transparency by hosting a machine-readable database of aid projects, and it gives cross-country comparisons of transparency in its published statistics. Publish What You Fund used this, along with other information, to create the Aid Transparency Index (Publish What You Fund 2018), which gives more detail on transparency for large donors. Transparency is likely to encourage scrutiny and lead to more effective behaviours, though it says little about the effectiveness of development finance more generally.
In addition to these measures, there are assessments being undertaken on the effectiveness of international organisations that receive aid. The Multilateral Organisation Performance Assessment Network provides institutional scores in four areas of organisational effectiveness but also covers “results” (development effectiveness).9 In addition, the Australian government (Australian Government & Australian AID 2012), the Ministry of Foreign Affairs of Denmark (2013), and the United Kingdom government (UK Government & Department for International Development 2011, 2013), among others, have undertaken and published their own reviews of multilateral organisations. Still, even if these reviews produce “ratings”, they largely reflect qualitative analysis. Furthermore, they are limited to multilateral organisations and are not available for countries’ own (“bilateral”) agencies. Nevertheless, to the extent that we know how much countries contribute to these organisations, we have some means of assessing the “quality” of those contributions.
For OECD-DAC countries, and for the multilateral institutions that spend ODA, it is possible to calculate a number of aid effectiveness indicators using the OECD’s Creditor Reporting System (CRS). These indicators can be grounded in theory, evidence, or consensus about what constitutes quality. For these countries, the CRS provides a relatively comprehensive and comparable source of data at the project level to show where aid goes, the level of financial commitments and disbursements, what purposes it serves, and some descriptive information. However, it is up to the user of the data to conceptualise how these variables can be manipulated and analysed to produce aid quality indicators. Notably, many emerging development actors and providers of South-South cooperation do not report to the DAC, so CRS data is not available for them.
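As a minimal sketch of how project-level, CRS-style records might be turned into quality indicators, consider two simple examples: the untied share of a donor’s disbursements, and a Herfindahl measure of concentration across recipients. The records and field names below are hypothetical, not the actual CRS schema:

```python
from collections import defaultdict

# Hypothetical CRS-style project records; field names are illustrative.
projects = [
    {"donor": "A", "recipient": "X", "disbursement": 5.0, "tied": False},
    {"donor": "A", "recipient": "Y", "disbursement": 3.0, "tied": True},
    {"donor": "B", "recipient": "X", "disbursement": 4.0, "tied": False},
    {"donor": "B", "recipient": "Z", "disbursement": 4.0, "tied": False},
]

def untied_share(records, donor):
    """Share of a donor's disbursements that is untied."""
    total = sum(r["disbursement"] for r in records if r["donor"] == donor)
    untied = sum(r["disbursement"] for r in records
                 if r["donor"] == donor and not r["tied"])
    return untied / total

def concentration(records, donor):
    """Herfindahl index of a donor's disbursements across recipients:
    1.0 means all aid goes to one recipient; lower values mean more
    fragmented aid."""
    totals = defaultdict(float)
    for r in records:
        if r["donor"] == donor:
            totals[r["recipient"]] += r["disbursement"]
    grand = sum(totals.values())
    return sum((v / grand) ** 2 for v in totals.values())
```

Whether a high or low concentration score counts as “better” is precisely the kind of conceptual choice, noted above, that is left to the user of the data.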
OECD-DAC countries also undertake systematic peer reviews, and these draw on quantitative measures as well as undertake qualitative assessments of development effectiveness. These are an important source of scrutiny, challenge, and mutual learning, and they often achieve high levels of engagement from ministers. Since the reviews follow a framework, it may be possible to systematically assess the findings across reviews—for example, on the elements of the evaluation framework—and assign a quantitative score to the analysis.
So, for the 30 OECD-DAC countries and the group of around 13 countries that report to the DAC (OECD 2018a), relatively detailed data exists or can be feasibly created with publicly available data, but beyond these countries we have very limited quantitative information on the characteristics of their concessional development finance. Some admirable efforts have been made to use publicly available data to estimate the concessional element of development finance from Southern cooperation providers (United Nations Development Programme 2016) and also to identify six process and six outcome quality assessment guidelines. These are important efforts and, given the likely importance of transparency to effectiveness, Southern actors can surely accelerate progress towards the SDGs by providing common and consistent data on their concessional finance.
3.4 Quantifying Aid Quality
There have been relatively few attempts to quantitatively measure aid quality. Roodman (2013) developed a three-part aid quality measure that took a given quantity of aid and discounted it for tied aid, selectivity for less-poor and poorly governed recipient countries, and project proliferation. The resulting “quality-adjusted aid quantity” was used in the Commitment to Development Index through 2013. Easterly and Pfutze (2008) assessed and ranked 48 agencies quantitatively on aid “best practices” and included their own survey (with limited responses) regarding employment and administrative expenses to calculate overhead costs of agencies. Knack et al. (2010), in a World Bank policy research working paper, use a quantitative measure with 18 indicators using the Paris survey and OECD-DAC data. Birdsall and Kharas (2014) produced “QuODA”, the quality of official development assistance, from 2010 (based on data from 2008), which put together 30 indicators of aid quality and grouped them under four themes that aligned with the Paris principles of aid effectiveness. This enabled agencies to compare their “scores” in each of these four areas. Subsequently, Barder et al. (2016) combined QuODA scores for bilateral and multilateral donors to produce an “Aid Quality Index”, which was used in the Commitment to Development Index from 2014.
McKee and Mitchell (2018) produced an updated QuODA and were able to replicate or replace 24 indicators of aid effectiveness (see Annex A). Several of the original QuODA indicators were altered and replaced with GPEDC measures, which put a stronger emphasis on recipient views of aid (e.g. QuODA originally measured the effectiveness of recipients’ evaluation systems, but in the updated version using GPEDC measures, it checks whether evaluations are planned with recipients). As before, QuODA was fed into the Commitment to Development Index as the measure of “aid quality”, thereby giving even weight to each of the 24 indicators.
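The aggregation step behind QuODA-style indices—standardise each indicator across donors, then average with equal weight per indicator—can be sketched as follows. The donors and scores are invented for illustration:

```python
import statistics

# Invented indicator scores: three donors, three indicators each.
scores = {
    "DonorA": [0.9, 0.4, 0.7],
    "DonorB": [0.5, 0.8, 0.6],
    "DonorC": [0.2, 0.6, 0.9],
}

def equal_weight_index(scores):
    """Standardise each indicator across donors (z-score), then
    average the z-scores with equal weight per indicator."""
    n_ind = len(next(iter(scores.values())))
    zs = {donor: [] for donor in scores}
    for i in range(n_ind):
        col = [v[i] for v in scores.values()]
        mean, sd = statistics.mean(col), statistics.pstdev(col)
        for donor, v in scores.items():
            zs[donor].append((v[i] - mean) / sd if sd else 0.0)
    return {donor: sum(z) / n_ind for donor, z in zs.items()}
```

Because each indicator is standardised across donors, the resulting index is relative by construction: scores sum to zero across donors, so it supports ranking and comparison rather than any absolute judgement of quality.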
There are a number of significant limitations to these measures of aid quality, including: heterogeneity in donor mandates and aid objectives; whether there is any objective “optimum” allocation to aid recipients; and whether fragmentation (allowing for greater competition and choice) or concentration (requiring less administrative burden) of donors is better for recipients (e.g. see Klingebiel et al. 2016). As noted above, there are limited direct links from measures of aid effectiveness to actual development impact. In the case of QuODA, a significant criticism is that it gives credit for aid going to poor and well-governed countries, but this runs counter to the need to give aid to fragile states, which are home to more than 60 per cent of the world’s extreme poor, a figure expected to rise further (OECD 2018b, p. 99). Indeed, this challenge also applies to a number of indicators, including those collected by the GPEDC, which tends to emphasise alignment to, or use of, recipient country systems and frameworks that may be weaker in fragile states.
3.5 Concluding on Measures of Aid Effectiveness
In terms of measuring aid quality, there is a stated international consensus from 2011 on the principles of effective aid, but little sign that these principles are the foundation of aid allocation, and little recent or high-level interest from governments. There are some areas where the evidence gives a clear view that some types of aid are more effective, including avoiding tied aid, giving transparently and predictably, and giving to poorer countries. Still, in other areas there is a lack of evidence, and there are tensions between different priorities. For example, there is a tension between giving to well-governed countries versus fragile states, and there may be a tension between using recipient systems and preventing leakage, particularly in fragile states. There may also be a tension between impacts that are easier to measure precisely and programmes aiming for more systemic change.
There are few individual or aggregate measures of aid quality covering all actors. Nonetheless, there would still be value in producing and, crucially, comparing them. These comparisons would enable providers to see how their own development finance compared to others, prompt questions about differences, and ultimately improve approaches through learning. For DAC donors, much greater detail is possible, and it seems right that countries with high incomes have greater responsibility and should be held to a higher level of accountability.