Abstract
The UK and Dutch competition agencies were pioneers in Europe in publishing annual assessments of the ‘outcomes’ from their work. These countries had new competition laws to report on and a strong culture of public sector evaluation. Other countries have followed, with different approaches but enough common ground for the OECD to develop a standard methodology. However, the measures are simplistic: they miss many important aspects of ‘outcomes’. I nonetheless argue that these assessments are worth carrying out, but they should be recognised for what they are: a rather sophisticated measure of agency activity, rather than a simplistic assessment of outcomes.
Notes
In many cases, ‘overall effect’ should be understood to mean ‘effect of all of their activities’. However, competition agencies vary in the scope of their work and most will exclude at least some activities from this measure.
For one of many examples see Kovacic (2009), taking issue with simplistic interpretations of the US agencies’ records during the G. W. Bush administration.
See Avdasheva et al. (2015), discussed further below, regarding the Federal Antimonopoly Service of the Russian Federation.
OECD (2012).
See Duso (2012) for a survey and assessment.
GCR (2017). My understanding is that the GCR ratings represent the judgement of GCR staff based upon the surveyed opinions of competition professionals—there is no mechanical link between the surveyed opinions and the rating.
Schwab and Sala-i-Martin (2017).
Wright and Diveley (2012) is a systematic study of the FTC’s record in this respect, compared to generalist judges.
See Alemani et al. (2012) for an OECD indicator and survey of such measures.
Other authors use different terminology but adopt a similar framework. Ormosi (2012) identifies evaluation for accountability, ex post evaluation and evaluating the broader impact of competition policy. Huschelrath and Leheyda (2010) distinguish between “the knowledge function, the control function, the legitimacy function and the dialog function”. They identify ‘legitimacy’ (the closest equivalent of ‘accountability’ in other frameworks) as the main motivation behind evaluation efforts, as do Niels and van Dijk (2008).
However, ex post studies (and the metastudies drawn from them referred to above) can and should be used for calibrating the assumptions used in output assessment. I am aware of two jurisdictions—the UK and Hungary—that commissioned external reviews of their outcome assessment methodology including assessments of the plausibility of assumptions. The OECD drew upon the UK study in establishing its outcome assessment methodology—see below.
The ‘Performance and Accountability’ report for the FTC and the ‘Annual Performance Report’ for the DoJ.
I was on the economics staff of the UK Competition Commission from 2003 to 2008, as Chief Economist from 2005. In that role, I was responsible for the CC’s assessment programme, including its first published ‘outcome measures’, which it termed ‘quantification of the additional costs that consumers in the UK might be expected to incur were it not for our decisions’. The other UK competition authority, the OFT, referred to ‘Positive Impact’ and the successor CMA uses ‘impact assessment’.
The OECD’s 1996 Recommendation on Regulatory Impact Assessment formalised this process but that was neither the start nor the final word on a more general emphasis on quantifying outcomes in economic policy. See OECD (2009).
The word used in UK outcome (‘impact’) assessment has always been ‘justify’, not ‘outweigh’ or ‘exceed’, to allow for policies for which the measured benefits might not exceed the costs.
See for example Better Regulation Executive (2005).
An approach encapsulated in the ‘Green Book’, Treasury (2011).
It also argued that publication of an assessment during the period in which a legal challenge could be brought would be potentially harmful—a concern that also constrained the timing of publication of its own outcome assessments.
See for example Treasury (2006).
GVH in Hungary, for example, which has also commissioned an external review of its approach: Murakozy and Valentiny (2014).
OECD (2012).
See Lyons (2003).
Quite apart from the practical point that this is what competition agencies themselves typically seek to do, there are theoretical arguments for preferring consumer welfare to total welfare as the measure: some rest on the likely dissipation of ‘monopoly profits’ through wasteful rent-seeking activity (so the rectangle is also a welfare loss); others argue that long-run total welfare is more likely to be maximised by pursuing consumer welfare. See Huschelrath (2008) for a discussion centred on Harberger’s (1954) original finding that monopoly deadweight losses appear to be surprisingly small.
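As a purely illustrative sketch (all numbers hypothetical, assuming linear demand), the relative size of the transfer ‘rectangle’ and the deadweight-loss ‘triangle’ can be computed as follows:

```python
# Illustrative only: hypothetical linear demand Q = a - b*P.
# Consumer harm from an overcharge splits into a transfer "rectangle"
# (paid by buyers who keep buying) and a deadweight-loss "triangle"
# (surplus lost on trades that no longer happen).

def harm_components(a, b, p_comp, overcharge):
    """Return (transfer_rectangle, deadweight_triangle) for linear demand."""
    p_high = p_comp + overcharge
    q_comp = a - b * p_comp          # quantity at the competitive price
    q_high = a - b * p_high          # quantity at the higher price
    rectangle = overcharge * q_high              # transfer from remaining buyers
    triangle = 0.5 * overcharge * (q_comp - q_high)  # lost surplus on lost trades
    return rectangle, triangle

rect, tri = harm_components(a=100.0, b=1.0, p_comp=40.0, overcharge=10.0)
# With these hypothetical numbers: rectangle = 500, triangle = 50 —
# consistent with Harberger's point that the triangle tends to be small.
```

The example illustrates why outcome assessments that count only the rectangle capture most of the consumer harm while omitting the classic deadweight loss.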
Davies (2013): “there is a case for ignoring this [deadweight loss] adjustment even although it is academically uncomfortable to do so”.
A very early UK Competition Commission statement of ‘outcome’, in a speech by the then-Chairman, included an estimate of consumer savings arising from the CC’s decision to tighten price controls in its role as appeals body for sectoral regulatory decisions. However, the CC’s appellate role can result in price controls being tightened or loosened and it seems wrong (and would lead to perverse incentives) to value only the former outcome. Accordingly, the CC did not continue to estimate consumer benefits arising from its appellate role when it began formally reporting outcomes.
Although some agencies do scale their estimates up to reflect a more direct and immediate estimate of activity deterred through their interventions, as I discuss below.
As a competition agency head noted “the charm of this third source of benefits—called ‘deterrence effects’ by economists—is that it is delivered by competition agencies even when they are inactive (so long as people think that they might become active).” Geroski (2006). Clearly, such benefits exist and are significant. Clarke and Evenett (2003) is a study finding that countries with competition laws and competition agencies funded to enforce them reduced overcharges from a vitamin cartel whether they actually caught it or not. However, as Geroski notes, there must be a realistic possibility that the agency might become active.
I was head of the OECD’s Competition Division at the time, although most of this work was led by Cristiana Vitale and we had external support from Stephen Davies and Peter Ormosi of the University of East Anglia.
OECD (2014).
However, while recognising the force of this objection, one could note that all competition agencies do publish partial and unsophisticated activity measures. The 2016 annual report from the Bundeskartellamt (Bundeskartellamt 2016), for example, prominently notes total number of proceedings, fines, second phase merger investigations and other measures of activity.
Particularly as surveyed in Davies (2010)—no relation—who was commissioned by the UK Office of Fair Trading to review their approach.
It is not really a ‘dynamic effect’, but in one case for which the UK CC reported outcomes—Macquarie/National Grid 2006—customers signed very long-term contracts and so both the case decision and the outcome estimate used a 12-year forecast.
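A minimal sketch of how such a multi-year outcome estimate might be discounted to a present value, assuming a hypothetical constant annual consumer saving and the 3.5% social discount rate of the UK Green Book (both assumptions for illustration only):

```python
# Hypothetical sketch: present value of an assumed constant annual
# consumer saving over a multi-year horizon, discounted at 3.5%
# (the UK Green Book social discount rate for short horizons).

def present_value(annual_saving, years, rate=0.035):
    """Sum of discounted annual savings over `years` periods."""
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

# e.g. an assumed GBP 10m annual saving over a 12-year forecast horizon
pv = present_value(annual_saving=10.0, years=12)
```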
Deloitte (2007) is a study conducted for the UK OFT, finding precisely this 5:1 ratio of undetected to detected cases. Audretsch (1983) finds an 11:1 or even 16:1 ratio for US mergers. However, it is not at all clear what these surveys measure or why any such figure is the right answer. What is the population of mergers that could have happened but did not? The Deloitte 5:1 ratio is based on proposed mergers abandoned or modified following external legal advice, but as Deloitte’s report discusses in detail, this is only a partial measure.
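The arithmetic of scaling up by such a ratio is trivial but worth making explicit, since it shows how sensitive the headline figure is to the assumed multiplier (the numbers below are hypothetical; only the 5:1 ratio comes from the Deloitte survey):

```python
# Hypothetical illustration: scaling measured direct benefits by an
# assumed deterrence ratio, as some agencies do. The default 5:1 is
# the Deloitte (2007) survey figure; whether it is the "right"
# multiplier is exactly the question raised in the text.

def scaled_outcome(direct_benefit, deterrence_ratio=5.0):
    """Measured direct benefit plus an assumed multiple deterred unobserved."""
    return direct_benefit * (1.0 + deterrence_ratio)

total = scaled_outcome(direct_benefit=20.0)  # 20 measured becomes 120 claimed
```

Note that the claimed outcome is six times the measured one under the 5:1 assumption, so the unverifiable deterrence term dominates the verifiable direct term.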
But not necessarily even then. Schinkel and Tuinstra, for example, note that an increase in Type I errors (false positive findings of a breach of the law) could increase cartel behaviour, so increased activity that results in lower-quality decisions can produce more anti-competitive behaviour, not less.
As discussed in Ormosi (2012), for example. Ormosi has subsequently produced several papers proposing innovative approaches to estimating the total ‘population’ of undetected breaches of competition law (especially cartels).
In my own experience of estimating these measures, the scale of the sector/market investigated overwhelmingly drives the differences in ‘outcome’ measured from 1 year to the next.
I was the first chief executive of a new competition agency in a small developing country: Mauritius. Even in Mauritius, which is internationally recognised to have good institutional governance, senior officials from the Finance Ministry were quite seriously concerned not to provide a lavish budget for “another agency that does not do anything”. Committing to publish outcome measures of the sort described in this paper was an important part of the negotiations that persuaded the Government to provide adequate funding to the nascent competition agency.
Avdasheva et al. (2015) describe in detail how such incentives might explain what they term the “miracle” of FAS Russia’s very high activity rates. They note that “in 2013 alone, 2635 [abuse of dominance] investigations were opened, and 2212 were cleared”. The authors ascribe this focus to the high importance accorded to complaints in the Russian system (and further note that the high caseload particularly emphasises accusations with high individual damage—often a complaining competitor).
References
Alemani, E., Klein, C., Koske, I., Vitale, C., & Wanner, I. (2012). New indicators of competition law and policy in 2013 for OECD and non-OECD countries. OECD Economics Department Working Papers No. 1104.
Audretsch, D. B. (1983). The effectiveness of antitrust policy towards horizontal mergers. Ann Arbor: UMI Research Press.
Avdasheva, S., Tsytsulina, D., Golovanova, S., & Sidorova, Y. (2015). Discovering the miracle of large numbers of antitrust investigations in Russia: The role of competition agency incentives. National Research University Higher School of Economics, Working Paper.
Better Regulation Executive. (2005). Measuring administrative costs: UK standard cost model manual. London: Cabinet Office.
Bundeskartellamt. (2016). Annual report 2016. Available at http://www.bundeskartellamt.de/SharedDocs/Publikation/EN/Jahresbericht/Jahresbericht_2016.pdf.
Clarke, J. L., & Evenett, S. J. (2003). The deterrent effect of national anti-cartel laws: Evidence from the international vitamins cartel. The Antitrust Bulletin, 48(3), 689–726.
Davies, S. W. (2010). A review of OFT’s impact estimation methods, Office of Fair Trading, OFT1164.
Davies, S. W. (2013). Assessment of the impact of competition agencies’ activities. Available at http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DAF/COMP/WP2(2013)1&docLanguage=En.
Deloitte. (2007). The deterrent effect of enforcement activity by the Office of Fair Trading. Office of Fair Trading report OFT 962. Available at http://webarchive.nationalarchives.gov.uk/20140402181127/http://www.oft.gov.uk/shared_oft/reports/Evaluating-OFTs-work/oft962.pdf.
Don, H., Kemp, R., & van Sinderen, J. (2008). Measuring the economic effects of competition law enforcement. De Economist, 156, 341–348.
Duso, T. (2012). A decade of ex-post merger policy evaluations: A progress report. In Pros and cons of merger control 2012, Konkurrensverket. Available at http://www.konkurrensverket.se/globalassets/english/research/more-pros-and-cons-of-merger-control.pdf.
Geroski, P. (2006). Essays in competition policy. Available at http://www.regulation.org.uk/library/2006_geroski_essays.pdf.
Global Competition Review. (2017). Rating enforcement 2017. Available at http://globalcompetitionreview.com/benchmarking/rating-enforcement-2017/1144770/introduction.
Harberger, A. (1954). Monopoly and resource allocation. American Economic Review - Papers and Proceedings, 44, 77–87.
Huschelrath, K. (2008). Is it worth all the trouble? The costs and benefits of antitrust enforcement. ZEW Working Paper No. 08-107.
Huschelrath, K., & Leheyda, N. (2010). A methodology for the evaluation of competition policy. ZEW Discussion Paper No. 10-081.
Kovacic, W. E. (2009). Rating the Competition Agencies: What Constitutes Good Performance? George Mason Law Review, 16, 903.
Kwoka, J. (2015). Mergers, merger control, and remedies. Cambridge, MA: MIT Press.
Lyons, B. (2003). Could politicians be more right than economists? A theory of welfare standards. EUI Working Papers 2003/14.
Mariuzzo, F., Ormosi, P., & Havell, R. (2016). What can merger retrospectives tell us? An assessment of European mergers. CCP Working Paper 16-4.
Murakozy, B., & Valentiny, P. (2014). Review of the ex-ante assessment of the welfare gains achieved by the GVH. Published by GVH at http://www.gvh.hu/en//data/cms1030092/GVH_Impact_Assessment_KRTK_ertekeles___final_PUBLIC_Eng.pdf.
Neven, D., & Zenger, H. (2008). Ex post evaluation of enforcement: A principal-agent perspective. De Economist, 156, 477–490.
Niels, G., & van Dijk, R. (2008). Competition policy: What are the costs and benefits of measuring its costs and benefits? De Economist, 156, 249–364.
OECD. (2009). Regulatory impact analysis: A tool for policy coherence. Available at http://www.oecd.org/gov/regulatory-policy/ria-tool-for-policy-coherence.htm.
OECD. (2012). Evaluation of competition enforcement and advocacy activities: The results of an OECD survey. Available at http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DAF/COMP/WP2(2012)7/FINAL&docLanguage=En.
OECD. (2014). Guide for helping competition agencies assess the expected value of their activities. Available at http://www.oecd.org/daf/competition/guide-impact-assessment-competition-activities.htm.
OECD. (2016). Reference guide on ex-post evaluation of competition agencies’ enforcement decisions. Available at http://www.oecd.org/daf/competition/reference-guide-on-ex-post-evaluation-of-enforcement-decisions.htm.
Ormosi, P. (2012). Evaluating the impact of competition law enforcement. OECD. Available at http://www.oecd.org/officialdocuments/displaydocumentpdf?cote=DAF/COMP/WP2%282012%295&doclanguage=en.
Schwab, K., & Sala-i-Martin, X. (2017). The global competitiveness report. World Economic Forum.
Symeonidis, G. (2008). The effect of competition on wages and productivity: Evidence from the United Kingdom. Review of Economics and Statistics, 90(1), 134–146.
Treasury, H. M. (2006). Productivity in the UK 6: Progress and new evidence. London: H.M. Treasury, HMSO.
Treasury, H. M. (2011). The green book: Appraisal and evaluation in central government. London: H.M. Treasury, HMSO.
Voigt, S. (2009). The effects of competition policy on development: Cross country evidence using four new indicators. Journal of Development Studies, 45(8), 1225–1248.
Werden, G. J. (2008). Assessing the effects of antitrust enforcement in the United States. De Economist, 156, 433–451.
Wright, J. D., & Diveley, A. M. (2012). Do expert agencies outperform generalist judges? Some preliminary evidence from the Federal Trade Commission. Journal of Antitrust Enforcement, 1–22.
Additional information
J. Davies: The author is a Senior Vice-President with Compass Lexecon Europe. The opinions expressed in this piece are personal views and should not be taken to reflect the views of Compass Lexecon or any of its other employees. The author was formerly employed by the OECD and two national competition agencies and of course a similar disclaimer applies. I would like to thank Jarig van Sinderen and an anonymous reviewer for helpful comments on an earlier draft of this paper.
Cite this article
Davies, J. ‘Outcome’ Assessment: What Exactly Are We Measuring? A Personal Reflection on Measuring the Outcomes from Competition Agencies’ Interventions. De Economist 166, 7–22 (2018). https://doi.org/10.1007/s10645-017-9307-6