Managing Algorithmic Accountability: Balancing Reputational Concerns, Engagement Strategies, and the Potential of Rational Discourse

  • Alexander Buhmann
  • Johannes Paßmann
  • Christian Fieseler
Original Paper


While organizations today make extensive use of complex algorithms, the notion of algorithmic accountability remains an elusive ideal due to the opacity and fluidity of algorithms. In this article, we develop a framework for managing algorithmic accountability that highlights three interrelated dimensions: reputational concerns, engagement strategies, and discourse principles. The framework clarifies (a) that accountability processes for algorithms are driven by reputational concerns about the epistemic setup, opacity, and outcomes of algorithms; (b) that the way in which organizations practically engage with emergent expectations about algorithms may be manipulative, adaptive, or moral; and (c) that when accountability relationships are heavily burdened by the opacity and fluidity of complex algorithmic systems, the emphasis of engagement should shift to a rational communication process through which a continuous and tentative assessment of the development, workings, and consequences of algorithms can be achieved over time. The degree to which such engagement is, in fact, rational can be assessed based on four discourse-ethical principles of participation, comprehension, multivocality, and responsiveness. We conclude that the framework may help organizations and their environments to jointly work toward greater accountability for complex algorithms. It may further help organizations in reputational positioning surrounding accountability issues. The discourse-ethical principles introduced in this article are meant to elevate these positioning contests to extend beyond mere adaptation or compliance and help guide organizations to find moral and forward-looking solutions to accountability issues.


Keywords: Reputation · Accountability · Algorithms · Opacity · Discourse ethics



This work was financially supported by the Norwegian Research Council as part of their Fair Labor in the Digitized Economy project (Grant Number 247725/O70).



Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. Department of Communication and Culture, BI Norwegian Business School, Oslo, Norway
  2. Department for Media Studies, University of Siegen, Siegen, Germany
