
Evaluating Public Support to the Investment Activities of Business Firms: A Multilevel Meta-Regression Analysis of Italian Studies

Abstract

We conduct an extensive sign-and-significance meta-regression analysis of counterfactual programme evaluations from Italy, considering both published and grey literature on policies supporting firms’ investments. We specify a multilevel model for the probability of finding positive effect estimates, also assessing correlation possibly induced by co-authorship networks. We find that the probability of positive effects is considerable, especially for weaker firms and outcomes that are directly targeted by public programmes. However, these policies are less likely to trigger change in the long run.


Notes

  1.

    Note that existing meta-analyses in this field, which have global coverage, include roughly the same number of studies and estimates, with only one or two studies related to Italy.

  2.

    See the OECD-STIP Compass—Policy analysis and discovery tool for better decision-making, a repository of policies promoting science, technology and innovation, available at https://stip.oecd.org/stip.html (last accessed on 21st June 2021).

  3.

    Italy is characterised by a quasi-federal system in which responsibility for enterprise and innovation policies is largely shared between the Regions and the State according to the principle of vertical subsidiarity (Caloffi and Mariani 2018). As a result, regional-scale initiatives coexist with some programmes of national relevance that are managed by the Italian government.

  4.

    We interviewed our colleagues during the 2016 annual meetings of the SIE-Italian Economic Association, the SIEPI-Italian Society of Industrial Economics and Policy, and the AISRe-Italian Association of Regional Science.

  5.

    Our database construction ended in March 2016, and our analysis was performed on the database available at that time. However, for the sake of completeness, the list of references included in the MRA signals whether each paper has subsequently been published.

  6.

    An extremely limited number of papers also reported estimates obtained in a continuous-treatment framework, for example using generalised propensity scores and dose-response functions. These few estimates were left out of the sample because, for several reasons, they were hardly comparable to the others.

  7.

    For example, if public support reduces the risk of firm exit, the negative sign of the t-statistic must be turned positive; if, instead, it increases exit risk, the positive sign of \(t_{ij}\) must be turned negative. Other options for transforming the value of the estimates are partial correlation coefficients and elasticities (Stanley and Doucouliagos 2012). These options do not seem suitable for our context of analysis, which is characterised by estimates obtained under the classical binary-treatment framework and by widespread use of semi-parametric methods that try to avoid model dependence.
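The sign normalisation described above can be sketched as follows (a hypothetical helper, not the authors' actual code): each t-statistic is flipped whenever a decrease in the raw outcome is desirable, so that a positive value always means the programme improved matters.

```python
def normalise_sign(t_stat, higher_is_better):
    """Return a t-statistic whose positive sign always means 'beneficial effect'.

    For outcomes where a reduction is desirable (e.g. the risk of firm exit),
    the raw sign is flipped; for ordinary outcomes it is kept as-is.
    """
    return t_stat if higher_is_better else -t_stat

# A subsidy that lowers exit risk yields a negative raw t-statistic,
# which becomes positive after normalisation.
print(normalise_sign(-2.1, higher_is_better=False))  # 2.1
```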

  8.

    The left and right approximations of the density at the threshold are carried out independently of each other. Inference relies on a local cubic approximation with a triangular kernel, with bandwidths optimised separately on each side using a local quadratic fit.
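The logic of a manipulation test of this kind can be illustrated with a deliberately crude stand-in. Instead of the local-polynomial density estimator of Cattaneo et al. (2018), the sketch below simply compares the counts of observations falling just below and just above the threshold, using an exact binomial test; all function names are illustrative.

```python
from math import comb

def binom_two_sided(k, n, p=0.5):
    """Exact two-sided binomial p-value, doubling the smaller tail."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    lower = sum(pmf[: k + 1])   # P(X <= k)
    upper = sum(pmf[k:])        # P(X >= k)
    return min(1.0, 2 * min(lower, upper))

def crude_density_test(running_var, cutoff, bandwidth):
    """Compare counts in a symmetric window around the cutoff.

    Absent manipulation of the running variable, each observation in the
    window should fall on either side of the cutoff with probability 1/2.
    """
    below = sum(cutoff - bandwidth <= x < cutoff for x in running_var)
    above = sum(cutoff <= x < cutoff + bandwidth for x in running_var)
    return binom_two_sided(min(below, above), below + above)
```

With balanced counts the p-value is large; heavy bunching on one side of the threshold drives it towards zero.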

  9.

    Since the observed power of a given \(t_{ij}\) is a one-to-one function of its own p-value, \(p_{ij}\) (Hoenig and Heisey 2001), repeating the meta-analysis with a smaller significance threshold is equivalent to seeing what happens if one (as in Ioannidis et al. 2017) is more demanding about the statistical power that each significant estimate should exhibit to deserve consideration. In our study, the positive treatment-effect estimates that are significant at 5% have a median observed power of 81.7%, a minimum of 50.3% and a maximum close to 100%. By selecting from these estimates only those with \(p_{ij}<0.025\), we conduct the analysis on a subset of significant estimates with greater power: here, the median observed power is 87.3% and the minimum is 62.5%.
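The link between an estimate's p-value and its observed power can be made concrete with a small sketch (a normal approximation; the function name is ours): the observed statistic is treated as the true noncentrality, and power is the probability of landing in the rejection region at the chosen threshold.

```python
from statistics import NormalDist

def observed_power(t, alpha=0.05):
    """Post-hoc power of a two-sided test under a normal approximation,
    taking the observed statistic as the true effect (Hoenig and Heisey 2001)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    t = abs(t)
    # Probability that a statistic centred at t falls in the rejection region.
    return (1 - z.cdf(z_crit - t)) + z.cdf(-z_crit - t)
```

An estimate sitting exactly at the 5% threshold has observed power of roughly 50%, consistent with the minimum of 50.3% reported above for the 5%-significant estimates; tightening the threshold mechanically raises the power floor.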

  10.

    The issue of heteroscedasticity is relevant in the presence of a meta-regression model for the effect size, but not with a probability model for sign and significance. In the former case, estimates of the effect size may be characterised by different levels of precision (i.e., different standard errors), which is connected to the sample size of the study they come from. In this sense, the observations of a meta-regression model can be heteroscedastic, which requires the use of a weighted estimator instead of the usual OLS (Stanley and Doucouliagos 2012). In the logit probability model for sign and significance, the conditional distribution of \(y_{i}\) given the covariates \(X_{i}\) is assumed to be Bernoulli with parameter \(\pi\left(X_{i}\right)\), a probability. The variance of this distribution is \(\pi\left(X_{i}\right)\left(1-\pi\left(X_{i}\right)\right)\), a non-constant function of \(X_{i}\). Therefore, heteroscedasticity is automatically built into the model.
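The built-in heteroscedasticity of the logit model can be seen directly in a minimal sketch with made-up coefficients (not estimates from the paper): the conditional variance \(\pi(x)(1-\pi(x))\) changes with the covariate.

```python
import math

# Illustrative logit coefficients, chosen for the example only.
BETA0, BETA1 = -0.5, 1.0

def pi(x):
    """P(y = 1 | x) under the logit model."""
    return 1.0 / (1.0 + math.exp(-(BETA0 + BETA1 * x)))

def bernoulli_variance(x):
    """Var(y | x) = pi(x) * (1 - pi(x)): a non-constant function of x."""
    p = pi(x)
    return p * (1 - p)

# The variance peaks at 0.25 where pi(x) = 0.5 and shrinks as pi(x)
# approaches 0 or 1: heteroscedasticity by construction.
```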

  11.

    In the studies included in our analysis, programmes aimed at R&D may employ the following instruments: subsidies, direct loans and tax credits. Programmes aimed at investments may employ subsidies, direct loans, tax credits and public loan guarantees.

  12.

    Out of 431 estimates of outcomes measured simultaneously with programme participation, 39% refer to outcomes directly targeted by that particular type of programme, whereas 61% refer to outcomes that the programme might affect in a more indirect fashion. Out of 635 estimates of outcomes measured after programme participation, 124 (19.5%) refer to directly targeted outcomes, whereas 80.5% refer to outcomes that might be affected more indirectly.

References

  1. Accetturo A, De Blasio G (2019) Morire di aiuti: I fallimenti delle politiche per il Sud (e come evitarli) [Dying of aid: the failures of policies for the South (and how to avoid them)]. IBL Libri, Torino

  2. Allcott H (2015) Site selection bias in program evaluation. Q J Econ 130(3):1117–1165

  3. Anselin L (1988) Spatial econometrics: methods and models. Kluwer Academic Publishers, Dordrecht

  4. Athey S, Imbens G (2017) The state of applied econometrics: causality and policy evaluation. J Econ Perspect 31(2):3–32

  5. Awaworyi Churchill S, Ugur M, Siew Ling Y (2016) Does government size affect per-capita income growth? A hierarchical meta-regression analysis. Econ Rec 93(300):142–171

  6. Bandiera O, Fischer G, Prat A, Ytsma E (2016) Do women respond less to performance pay? Building evidence from multiple experiments. CEPR Discussion Paper 11724

  7. Beck T, Demirguc-Kunt A (2006) Small and medium-size enterprises: access to finance as a growth constraint. J Bank Financ 30(11):2931–2943

  8. Becker B (2015) Public R&D policies and private R&D investment: a survey of the empirical evidence. J Econ Surv 29(5):917–942

  9. Begg CB, Berlin JA (1988) Publication bias: a problem in interpreting medical data. J R Stat Soc Ser A (Stat Soc) 151(3):419–463

  10. Berger A, Udell G (1998) The economics of small business finance: the role of private equity and debt markets in the financial growth cycle. J Bank Financ 22:613–673

  11. Borgatti SP, Everett MG, Freeman LC (2002) Ucinet for Windows: software for social network analysis. Analytic Technologies, Harvard

  12. Brodeur A, Lé M, Sangnier M, Zylberberg Y (2016) Star wars: the empirics strike back. Am Econ J Appl Econ 8(1):1–32

  13. Bruns SB (2017) Meta-regression models and observational research. Oxf Bull Econ Stat 79(5):637–653

  14. Caloffi A, Mariani M (2018) Regional policy mixes for enterprise and innovation: a fuzzy-set clustering approach. Environ Plan C Polit Space 36(1):28–46

  15. Card D, Kluve J, Weber A (2010) Active labour market policy evaluations: a meta-analysis. Econ J 120(548):F452–F477

  16. Card D, Kluve J, Weber A (2018) What works? A meta-analysis of recent active labor market program evaluations. J Eur Econ Assoc 16(3):894–931

  17. Castellacci F, Mee Lie C (2015) Do the effects of R&D tax credits vary across industries? A meta-regression analysis. Res Policy 44(4):819–832

  18. Cattaneo MD, Jansson M, Ma X (2018) Manipulation testing based on density discontinuity. Stata J 18(1):234–261

  19. Cerqua A, Pellegrini G (2020) Evaluation of the effectiveness of firm subsidies in lagging-behind areas: the Italian job. Sci Region Ital J Region Sci 19(3):477–500

  20. Cerqua A (2014) Developments in the evaluation of capital subsidy policies. PhD thesis in Economic Statistics, Sapienza Università di Roma. https://iris.uniroma1.it/retrieve/handle/11573/917373/326646/Augusto%20Cerqua%20-%20Developments%20in%20the%20Evaluation%20of%20Capital%20Subsidy%20Policies.pdf

  21. Dimos C, Pugh G (2016) The effectiveness of R&D subsidies: a meta-regression analysis of the evaluation literature. Res Policy 45(4):797–815

  22. Dodgson M (2017) Innovation in firms. Oxf Rev Econ Policy 33(1):85–100

  23. García-Quevedo J (2004) Do public subsidies complement business R&D? A meta-analysis of the econometric evidence. Kyklos 57(1):87–102

  24. Gelman A, Imbens G (2019) Why high-order polynomials should not be used in regression discontinuity designs. J Bus Econ Stat 37(3):447–456

  25. Gerber AS, Malhotra N (2008) Publication bias in empirical sociological research: do arbitrary significance levels distort published results? Sociol Methods Res 37(1):3–30

  26. Havránek T, Stanley TD, Doucouliagos H, Bom P, Geyer-Klingeberg J, Iwasaki I, Reed WR, Rost K, Van Aert RCM (2020) Reporting guidelines for meta-analysis in economics. J Econ Surv 34(3):469–475

  27. Hoenig JM, Heisey DM (2001) The abuse of power: the pervasive fallacy of power calculations for data analysis. Am Stat 55(1):19–24

  28. Hopewell S, McDonald S, Clarke MJ, Egger M (2007) Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev 2007(2)

  29. Imbens GW, Rubin DB (2015) Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, New York

  30. Imbens GW, Wooldridge JM (2009) Recent developments in the econometrics of program evaluation. J Econ Lit 47(1):5–86

  31. Ioannidis J, Stanley TD, Doucouliagos H (2017) The power of bias in economics research. Econ J 127:F236–F265

  32. Jaffe A (2002) Building programme evaluation into the design of public research-support programmes. Oxf Rev Econ Policy 18(1):22–34

  33. Kluve J (2010) The effectiveness of European active labor market programs. Labour Econ 17(6):904–918

  34. Mariani M (2019) Regional industrial policy evaluation: introductory remarks. Sci Region Ital J Region Sci 18(2):165–172

  35. Mazzola F (2015) Valutazione ex-post degli incentivi alle imprese nelle economie territoriali: nuovi sviluppi [Ex-post evaluation of business incentives in territorial economies: new developments]. Sci Region Ital J Region Sci 14(3 Suppl):5–12

  36. McCrary J (2008) Manipulation of the running variable in the regression discontinuity design: a density test. J Econom 142(2):698–714

  37. McKelvey M, Saemundsson RJ (2018) An evolutionary model of innovation policy: conceptualizing the growth of knowledge in innovation policy as an evolution of policy alternatives. Ind Corp Chang 27(5):851–865

  38. Mundlak Y (1978) On the pooling of time series and cross section data. Econometrica 46(1):69–85

  39. Mytelka LK, Smith K (2002) Policy learning and innovation theory: an interactive and co-evolving process. Res Policy 31(8–9):1467–1479

  40. Newman ME (2001) The structure of scientific collaboration networks. Proc Natl Acad Sci 98(2):404–409

  41. Olsen R, Bell S, Orr L, Stuart EA (2013) External validity in policy evaluations that choose sites purposively. J Policy Anal Manag 32(1):107–121

  42. Peneder M (2008) The problem of private under-investment in innovation: a policy mind map. Technovation 28(8):518–530

  43. Scott J, Carrington PJ (2011) The SAGE handbook of social network analysis. Sage, London

  44. Skrondal A, Rabe-Hesketh S (2004) Generalized latent variable modeling: multilevel, longitudinal and structural equation models. Chapman & Hall/CRC, Boca Raton, FL

  45. Snijders TAB, Bosker RJ (2012) Multilevel analysis: an introduction to basic and advanced multilevel modeling, 2nd edn. Sage, New York

  46. Stanley TD, Doucouliagos H (2012) Meta-regression analysis in economics and business. Routledge, London

  47. Stanley TD, Doucouliagos H, Giles M, Heckemeyer JH, Johnston RJ, Laroche P, Nelson JP, Paldam M, Poot J, Pugh G, Rosenberger RS, Rost K (2013) Meta-analysis of economics research reporting guidelines. J Econ Surv 27(2):390–394

  48. Storey DJ, Keasey K, Watson R, Wynarczyk P (2016) The performance of small firms: profits, jobs and failures. Routledge, London

  49. Ugur M, Trushin E, Solomon E, Guidi F (2016) R&D and productivity in OECD firms and industries: a hierarchical meta-regression analysis. Res Policy 45(10):2069–2086

  50. Ugur M, Awaworyi Churchill S, Solomon EM (2017) Technological innovation and employment in derived labour demand models: a hierarchical meta-regression analysis. J Econ Surv. https://doi.org/10.1111/joes.12187

  51. Vivalt E (2020) How much can we generalize from impact evaluations? J Eur Econ Assoc 18(6):3045–3089

  52. Wasserman S, Faust K (1994) Social network analysis: methods and applications. Cambridge University Press, Cambridge

  53. Zúñiga-Vicente JÁ, Alonso-Borrego C, Forcadell FJ, Galán JI (2014) Assessing the effect of public subsidies on firm R&D investment: a survey. J Econ Surv 28(1):36–67


Author information

Correspondence to Marco Mariani.


Supplementary Information


Supplementary file 1 (DOCX 41 kb)


Cite this article

Bocci, C., Caloffi, A., Mariani, M. et al. Evaluating Public Support to the Investment Activities of Business Firms: A Multilevel Meta-Regression Analysis of Italian Studies. Ital Econ J (2021). https://doi.org/10.1007/s40797-021-00170-3


Keywords

  • Meta-regression analysis
  • Public incentives to private investments
  • Innovation policies
  • Programme evaluation

JEL Classification

  • H53
  • L52
  • L53