
‘Outcome’ Assessment: What Exactly Are We Measuring? A Personal Reflection on Measuring the Outcomes from Competition Agencies’ Interventions


Abstract

The UK and Dutch competition agencies were pioneers in Europe in publishing annual assessments of the ‘outcomes’ from their work. These countries had new competition laws to report on and a strong culture of public sector evaluation. Other countries have followed, with different approaches but enough common ground for the OECD to develop a standard methodology. However, the measures are simplistic: they miss many important aspects of ‘outcomes’. I nonetheless argue that these assessments are worth carrying out, but they should be recognised for what they are: a rather sophisticated measure of agency activity, not a simplistic assessment of outcomes.


Notes

  1. In many cases, ‘overall effect’ should be understood to mean ‘effect of all of their activities’. However, competition agencies vary in the scope of their work and most will exclude at least some activities from this measure.

  2. For one of many examples see Kovacic (2009), taking issue with simplistic interpretations of the US agencies’ records during the G. W. Bush administration.

  3. See Avdasheva et al. (2015), discussed further below, regarding the Federal Antimonopoly Service of the Russian Federation.

  4. OECD (2012).

  5. See Duso (2012) for a survey and assessment.

  6. GCR (2017). My understanding is that the GCR ratings represent the judgement of GCR staff based upon the surveyed opinions of competition professionals—there is no mechanical link between the surveyed opinions and the rating.

  7. Schwab and Sala-i-Martín (2017).

  8. Wright and Diveley (2012) is a systematic study of the FTC’s record in this respect, compared to generalist judges.

  9. See Alemani et al. (2012) for an OECD indicator and survey of such measures.

  10. See Symeonidis (2008), for a difference-in-differences study of the effect of prohibiting cartels, for example, or Voigt (2009) for a cross-country comparison.
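The logic of a difference-in-differences study of this kind can be sketched in a few lines. All numbers below are hypothetical, purely for exposition; they do not come from Symeonidis (2008) or any other study.

```python
# Illustrative difference-in-differences calculation. Average prices in a
# market affected by a cartel prohibition ("treated") and in a comparable
# unaffected market ("control"), before and after the law. Hypothetical data.
treated_before, treated_after = 100.0, 95.0
control_before, control_after = 100.0, 99.0

# The DiD estimate nets out the common trend captured by the control group.
did_effect = (treated_after - treated_before) - (control_after - control_before)
print(did_effect)  # -4.0: prices fell 4 points more in the treated market
```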

  11. Other authors use different terminology but adopt a similar framework. Ormosi (2012) identifies evaluation for accountability, ex post evaluation and evaluating the broader impact of competition policy. Hüschelrath and Leheyda (2010) distinguish between “the knowledge function, the control function, the legitimacy function and the dialog function”. They identify ‘legitimacy’ (the closest equivalent of ‘accountability’ in other frameworks) as the main motivation behind evaluation efforts, as do Niels and van Dijk (2008).

  12. See for example Kwoka (2015) for the US and Mariuzzo et al. (2016) for the EU. An annex to OECD (2016) lists 134 papers and other studies containing one or more such evaluations and briefly summarises the scope and methodology of each.

  13. However, ex post studies (and the metastudies drawn from them referred to above) can and should be used for calibrating the assumptions used in outcome assessment. I am aware of two jurisdictions—the UK and Hungary—that commissioned external reviews of their outcome assessment methodology, including assessments of the plausibility of assumptions. The OECD drew upon the UK study in establishing its outcome assessment methodology—see below.

  14. The ‘Performance and Accountability’ report for the FTC and the ‘Annual Performance Report’ for the DoJ.

  15. I was on the economics staff of the UK Competition Commission from 2003 to 2008, as Chief Economist from 2005. In that role, I was responsible for the CC’s assessment programme, including its first published ‘outcome measures’, which it termed ‘quantification of the additional costs that consumers in the UK might be expected to incur were it not for our decisions’. The other UK competition authority, the OFT, referred to ‘Positive Impact’ and the successor CMA uses ‘impact assessment’.

  16. The OECD’s 1996 Recommendation on Regulatory Impact Assessment formalised this process but that was neither the start nor the final word on a more general emphasis on quantifying outcomes in economic policy. See OECD (2009).

  17. The word used in UK outcome (‘impact’) assessment has always been ‘justify’, not ‘outweigh’ or ‘exceed’, to allow for policies for which the measured benefits might not exceed the costs.

  18. See for example Better Regulation Executive (2005).

  19. An approach encapsulated in the ‘Green Book’, Treasury (2011).

  20. It also argued that publication of an assessment during the period in which a legal challenge could be brought would be potentially harmful—a concern that also constrained the timing of publication of its own outcome assessments.

  21. See for example Treasury (2006).

  22. GVH in Hungary, for example, which has also commissioned an external review of its approach: Muraközy and Valentiny (2014).

  23. OECD (2012).

  24. See Lyons (2003).

  25. Quite apart from the practical aspect that this is what competition agencies themselves typically seek to do, there are theoretical arguments for preferring to measure consumer welfare over total welfare, some based on the likely dissipation of ‘monopoly profits’ through wasteful rent-seeking activity (so the rectangle is also a welfare loss), others arguing that long-run total welfare is more likely to be maximised by pursuing consumer welfare. See Hüschelrath (2008) for a discussion centred on Harberger’s (1954) original finding that monopoly deadweight losses appear to be surprisingly small.

  26. Davies (2013): “there is a case for ignoring this [deadweight loss] adjustment even although it is academically uncomfortable to do so”.
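The rectangle/triangle distinction in these two notes can be made concrete with a stylised calculation. The numbers below are hypothetical linear-demand figures, chosen only to illustrate why the deadweight-loss triangle is typically small relative to the transfer rectangle that outcome assessments report.

```python
# Stylised decomposition of consumer harm from an overcharge into the
# transfer 'rectangle' and the deadweight-loss 'triangle'.
# All figures are hypothetical, for exposition only.

overcharge = 2.0          # price rise caused by the conduct, per unit
quantity_sold = 1000.0    # units still bought at the higher price
quantity_lost = 100.0     # units no longer bought because of the price rise

rectangle = overcharge * quantity_sold       # transfer from consumers to firms
triangle = 0.5 * overcharge * quantity_lost  # deadweight loss (linear approx.)

print(rectangle, triangle)  # 2000.0 100.0
# Outcome assessments typically count the rectangle as consumer harm; as the
# notes discuss, the (usually much smaller) triangle is often omitted.
```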

  27. A very early UK Competition Commission statement of ‘outcome’, in a speech by the then-Chairman, included an estimate of consumer savings arising from the CC’s decision to tighten price controls in its role as appeals body for sectoral regulatory decisions. However, the CC’s appellate role can result in price controls being tightened or loosened and it seems wrong (and would lead to perverse incentives) to value only the former outcome. Accordingly, the CC did not continue to estimate consumer benefits arising from its appellate role when it began formally reporting outcomes.

  28. Although some agencies do scale up their direct and immediate estimates to reflect activity deterred through their interventions, as I discuss below.

  29. As a competition agency head noted, “the charm of this third source of benefits—called ‘deterrence effects’ by economists—is that it is delivered by competition agencies even when they are inactive (so long as people think that they might become active).” Geroski (2006). Clearly, such benefits exist and are significant. Clarke and Evenett (2003) find that overcharges from a vitamin cartel were lower in countries with competition laws and competition agencies funded to enforce them, whether or not the cartel was actually caught there. However, as Geroski notes, there must be a realistic possibility that the agency might become active.

  30. I was head of the OECD’s Competition Division at the time, although most of this work was led by Cristiana Vitale and we had external support from Stephen Davies and Peter Ormosi of the University of East Anglia.

  31. OECD (2014).

  32. However, while recognising the force of this objection, one could note that all competition agencies do publish partial and unsophisticated activity measures. The 2016 annual report from the Bundeskartellamt (Bundeskartellamt 2016), for example, prominently notes total number of proceedings, fines, second phase merger investigations and other measures of activity.

  33. Particularly as surveyed in Davies (2010)—no relation—who was commissioned by the UK Office of Fair Trading to review their approach.

  34. It is not really a ‘dynamic effect’, but in one case for which the UK CC reported outcomes—Macquarie/National Grid 2006—customers signed very long-term contracts and so both the case decision and the outcome estimate used a 12-year forecast.

  35. Deloitte (2007) is a study conducted for the UK OFT, finding precisely this 5:1 ratio of undetected versus detected cases. Audretsch (1983) finds an 11:1 or even 16:1 ratio for US mergers. However, it is not at all clear what these surveys measure or why the resulting ratio is the right one to use. What is the population of mergers that could have happened but did not? The Deloitte 5:1 ratio is based on proposed mergers abandoned or modified following external legal advice but, as Deloitte’s report discusses in detail, this is only a partial measure.
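The leverage such ratios give over the headline figure is worth seeing in numbers. The 5:1 ratio is the Deloitte (2007) figure cited in the note; applying it as a simple multiplier, and the savings figure itself, are my illustrative assumptions, not any agency's actual method.

```python
# Sketch of how a deterrence ratio scales a reported outcome figure.
# Hypothetical direct savings; the 5:1 ratio is the survey figure cited above.

direct_consumer_savings = 10.0    # e.g. 10m from blocked/modified mergers
deterred_to_detected_ratio = 5.0  # undetected-to-detected ratio from the survey

# Counting deterred cases alongside detected ones multiplies the total sixfold.
total_with_deterrence = direct_consumer_savings * (1 + deterred_to_detected_ratio)
print(total_with_deterrence)  # 60.0
```

The point of the note survives the arithmetic: the choice of ratio, which is poorly grounded, moves the reported outcome far more than anything the agency actually did in the year.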

  36. But not necessarily even then. Schinkel and Tuinstra, for example, note that an increase in Type I errors (false positive identification of a breach of the law) could increase cartel behaviour, so increased activity that results in lower quality decisions can produce more anti-competitive behaviour, not less.

  37. As discussed in Ormosi (2012), for example. Ormosi has subsequently produced several papers proposing innovative approaches to estimating the total ‘population’ of undetected breaches of competition law (especially cartels).

  38. In my own experience of estimating these measures, the scale of the sector/market investigated overwhelmingly drives the differences in ‘outcome’ measured from 1 year to the next.
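This sensitivity to market size follows directly from the multiplicative structure of a typical outcome estimate, broadly price effect × affected turnover × duration (the general shape of the standard methodology; exact parameters and caps vary by agency, and the figures below are hypothetical).

```python
# Why market size dominates year-to-year 'outcome' figures: a typical
# estimate multiplies an assumed price effect by affected turnover and
# duration. Hypothetical figures, for exposition only.

def outcome_estimate(price_effect, affected_turnover, years):
    """Estimated consumer savings = price effect x turnover x duration."""
    return price_effect * affected_turnover * years

# Identical assumptions, different market sizes (say, in millions):
small_case = outcome_estimate(0.03, 50.0, 2)    # 3% effect, small market
large_case = outcome_estimate(0.03, 5000.0, 2)  # 3% effect, large market
print(small_case, large_case)  # 3.0 300.0: turnover drives the difference
```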

  39. I was the first chief executive of a new competition agency in a small developing country: Mauritius. Even in Mauritius, which is internationally recognised to have good institutional governance, senior officials from the Finance Ministry were quite seriously concerned not to provide a lavish budget for “another agency that does not do anything”. Committing to publish outcome measures of the sort described in this paper was an important part of the negotiations that persuaded the Government to provide adequate funding to the nascent competition agency.

  40. Avdasheva et al. (2015) describe in detail how such incentives might explain what they term the “miracle” of FAS Russia’s very high activity rates. They note that “in 2013 alone, 2635 [abuse of dominance] investigations were opened, and 2212 were cleared”. The authors ascribe this focus to the high importance accorded to complaints in the Russian system (and further note that the high caseload particularly emphasises accusations with high individual damage—often a complaining competitor).


Corresponding author

Correspondence to John Davies.

Additional information

J. Davies: The author is a Senior Vice-President with Compass Lexecon Europe. The opinions expressed in this piece are personal views and should not be taken to reflect the views of Compass Lexecon or any of its other employees. The author was formerly employed by the OECD and two national competition agencies and of course a similar disclaimer applies. I would like to thank Jarig van Sinderen and an anonymous reviewer for helpful comments on an earlier draft of this paper.


Cite this article

Davies, J. ‘Outcome’ Assessment: What Exactly Are We Measuring? A Personal Reflection on Measuring the Outcomes from Competition Agencies’ Interventions. De Economist 166, 7–22 (2018). https://doi.org/10.1007/s10645-017-9307-6
