“If it Ain’t Evaluated, Don’t Fix it!”

The Politics of Evaluability in Occupational Health and Safety
  • Peter Dahler-Larsen
  • Anna Sundby

Article

Abstract

While the ideal of ever-more systematic evaluation is cherished in the European Union (EU) and elsewhere, it remains difficult to provide robust evaluation results when evaluability is low. This paper uses evaluability assessment as a theoretical–analytical tool to explore the policy/evaluation interface in a contemporary context characterized by multilevel governance. What is the political function of (low) evaluability? This article presents a case study of a policy instrument, the workplace assessment, which has been carried out for the last 30 years under Danish legislation as a consequence of EU framework directive 89/391 on occupational health and safety. The study includes a review of the most recent EU evaluation of this framework directive, other documents, and preliminary survey results. The official evaluation results are meagre due to seven underlying challenges to evaluability found at various levels of governance. The relatively low political priority given to occupational health and safety as a policy area may help explain why little has been done over three decades to increase the evaluability of the policy instruments prescribed by the framework directive. Low evaluability contributes to keeping a policy area free from potential contestation, especially in a normative context where evaluative evidence is a precondition for rational policy change.

Keywords

Evaluation · Evaluability · Evaluability assessment · Politics of evaluation · Occupational health and safety

No Cure Without a Diagnosis!

The Politics of Evaluability in Occupational Health and Safety

Abstract (German)

While ever-more systematic evaluability assessment (EA) is regarded as particularly important in the EU and beyond, low evaluability makes it difficult to produce robust evaluations. This article uses evaluability assessment as a theoretical–analytical instrument to examine the interface between politics and evaluation in a context of multilevel governance. Against this background, it asks what political function (low) evaluability serves. To answer this question, the article draws on a case study of workplace assessments (WPAs) in Denmark. Workplace assessments have been carried out under Danish law for 30 years, following the implementation of EU framework directive 89/391 on safety and health at work. The analysis includes a review of the most recent EU evaluation of this directive as well as other relevant documents and evaluation results. Because evaluability faces seven fundamental challenges at various levels of governance, the official evaluation is inadequate. The relatively low political priority of occupational health and safety as a policy area may explain why little has been done over the past three decades to improve the evaluability of the policy instruments laid down in the framework directive. Low evaluability helps shield a policy area from potential contestation. This applies in particular where evaluation results are regarded as a precondition for rational policy change.

Keywords (German)

Evaluation · Evaluability · Evaluability assessment · Politics of evaluation · Occupational health and safety

Notes

Funding

The authors have received funding from the Danish Working Environment Research Fund.

Author Contribution

Peter Dahler-Larsen wrote the article. Anna Sundby collected and analyzed the survey data. Both authors approved the final version of the manuscript.


Copyright information

© Deutsche Vereinigung für Politikwissenschaft 2019

Authors and Affiliations

  1. Copenhagen, Denmark