Structure, Process, and Agency in the Evaluation of Risk Governance

Urbano Fra.Paleo

Abstract

Risk governance is an approach adopted to understand and comprehensively handle the complexity of the decision-making processes involved in risk production and reduction. In this chapter, it is argued that the evaluation of risk governance may go beyond the measurement of performance and advancement in disaster risk reduction: it can also serve to increase social learning and risk awareness, to reach consensus over the equitable distribution of risks, and to negotiate a preferred level of risk. Participatory evaluation is therefore proposed to increase the exchange and sharing of the different concerns, knowledges, interests, and values of societal players. Finally, an evaluation framework based on a mixed hierarchical and networked structure of criteria, components, and dimensions of risk governance is advanced to complete the evaluation system. In this framework, criteria should measure not only policy development but also organizational structure, the role of societal actors, and the intervention of formal and informal arrangements.
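
To make the mixed structure more concrete, the following minimal Python sketch shows one way criteria nesting under components and dimensions (the hierarchical part) together with cross-links between components (the networked part) could be represented. Every class name, label, score, and the equal-weight averaging below is a hypothetical illustration, not the chapter's actual instrument.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Criterion:
    name: str
    score: float  # normalized 0..1, e.g. agreed in a participatory session

@dataclass
class Component:
    name: str
    criteria: List[Criterion]  # hierarchical part: criteria nest under components
    links: List["Component"] = field(default_factory=list)  # networked part: cross-links

    def score(self) -> float:
        # Unweighted mean of criterion scores; the weighting scheme is a design choice.
        return sum(c.score for c in self.criteria) / len(self.criteria)

@dataclass
class Dimension:
    name: str
    components: List[Component]

    def score(self) -> float:
        return sum(c.score() for c in self.components) / len(self.components)

# Hypothetical example: two components under one dimension; the cross-link
# records that policy and societal actors interact rather than simply nest.
policy = Component("policy development",
                   [Criterion("plan adopted", 1.0), Criterion("plan enforced", 0.5)])
actors = Component("role of societal actors",
                   [Criterion("breadth of participation", 0.7)])
policy.links.append(actors)

dimension = Dimension("organizational structure", [policy, actors])
print(f"{dimension.name}: {dimension.score():.2f}")

The cross-links turn the structure into a network rather than a pure tree, which reflects the chapter's point that formal and informal arrangements intervene across components instead of sitting neatly inside any single one.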

Keywords

Disaster risk · Public participation · Disaster risk reduction · Participatory evaluation · Risk governance

Acknowledgement

Copy editing of this chapter was made possible by the financial support of the Government of Extremadura and the European Regional Development Fund (ERDF) through grant GR10071, awarded to the research group on Geospatial Engineering and Urban Heritage of the University of Extremadura.

Copyright information

© Springer Science+Business Media Dordrecht 2015

Authors and Affiliations

  1. School of Social Sciences and Humanities, University of Extremadura, Cáceres, Spain
