Artificial Intelligence and Transparency: Opening the Black Box

  • Thomas Wischmeyer

Abstract

The alleged opacity of AI has become a major political issue over the past few years. Opening the black box, so it is argued, is indispensable to identify encroachments on user privacy, to detect biases and to prevent other potential harms. What is less clear, however, is how the call for AI transparency can be translated into reasonable regulation. This chapter argues that designing AI transparency regulation is less difficult than often assumed. Regulators benefit from the fact that the legal system has already gained considerable experience in shedding light on another class of partially opaque decision-making systems: human decisions. This experience provides lawyers with a realistic perspective on the functions of potential AI transparency legislation as well as a set of legal instruments that can be employed to this end.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Faculty of Law, University of Bielefeld, Bielefeld, Germany
