
The Data Protection Impact Assessment as a Tool to Enforce Non-discriminatory AI

  • Conference paper

In: Privacy Technologies and Policy (APF 2020)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12121)

Abstract

This paper argues that the novel tools under the General Data Protection Regulation (GDPR) may provide an effective legally binding mechanism for enforcing non-discriminatory AI systems. Building on relevant guidelines and the broader literature on impact assessments and algorithmic fairness, it proposes a specialized methodological framework for carrying out a Data Protection Impact Assessment (DPIA) that enables controllers to assess and prevent ex ante the risk to the right to non-discrimination, one of the key fundamental rights that the GDPR aims to safeguard.

The author is thankful to Laurens Naudts, who provided very useful comments on the draft. The paper reflects the author’s personal opinion as a researcher and in no way represents the position of the EU institutions on the subject.


Notes

  1. In this paper, the term AI is used to cover primarily machine learning and deep learning techniques that aim to simulate human intelligence and to support or replace human decision-making. Still, the methodology proposed could also be relevant for other AI fields, such as natural language processing and reasoning.

  2. CJEU Opinion 1/15 on the Draft agreement between Canada and the European Union – Transfer of Passenger Name Record data from the European Union to Canada, ECLI:EU:C:2017:592; Case C-524/06, Heinz Huber v Bundesrepublik Deutschland, ECLI:EU:C:2008:724.

  3. CJEU, C-136/17, GC and Others v CNIL, ECLI:EU:C:2019:773, para. 68.

  4. CJEU, C-528/13, Geoffrey Léger v. Ministre des Affaires sociales, para. 54.

  5. E.g., Art. 2(2)(a) of the Race Equality Directive 2000/43/EC.

  6. Article 2(a) and (b), Racial Equality Directive 2000/43/EC.

  7. For example, in C-54/07, Firma Feryn NV, 10 July 2008, the CJEU did not treat discriminatory motive as relevant to deciding if discrimination had occurred. See also ECtHR, Biao v. Denmark [GC], No. 38590/10, 24 May 2016, para. 103; ECtHR, D.H. and Others v. the Czech Republic [GC], No. 57325/00, 13 November 2007, para. 79.

  8. CJEU, C-83/14, CHEZ, para. 128.

  9. ECtHR, Sejdić and Finci v. Bosnia and Herzegovina [GC], Nos. 27996/06 and 34836/06.

  10. CJEU, C-83/14, CHEZ, paras. 99–100.

  11. The CJEU has traditionally required that the differential impact be of a significant quantitative nature, certainly above 60%. See CJEU, C-33/89, Maria Kowalska, 27 June 1990. Still, in C-167/97, Seymour-Smith, para. 61, the CJEU suggested that a lower level of disproportion could be accepted as proof of indirect discrimination ‘if it revealed a persistent and relatively constant disparity over a long period between men and women’. For a purely illustrative quantitative reading of such a threshold, see the sketch after these notes.

  12. See also Article 35(9) GDPR, under which the involvement of the affected data subjects is merely facultative.

  13. CJEU, Case 109/88, Danfoss, EU:C:1989:383, para. 16.

  14. See in this sense also R. Binns [24], who argues that the ‘human-in-the-loop’ may be able to serve the aim of individual justice.

  15. CJEU, C-406/15, Milkova, EU:C:2017:198, para. 66.

  16. CJEU, Case C-450/93, Kalanke, EU:C:1995:322, para. 22.

  17. E.g. C-354/13, FOA (Kaltoft), EU:C:2014:2463; C-122/15, EU:C:2016:391; C-5/12, Betriu Montull, EU:C:2013:571.

  18. ‘Status’ has been defined by the ECtHR as an “identifiable, objective or personal characteristic by which persons or groups are distinguishable from one another”; see Clift v. The United Kingdom, App no 7205/07 (ECtHR, 13 July 2010), para. 55. While the ECtHR has ruled that the characteristic need not be personal in the sense of being ‘inherent and innate’, in its past case law it has excluded objective factors (e.g. location) not linked to a personal characteristic or personal choice (e.g. membership in a trade union) as a potential protected ‘status’ under Article 14 ECHR; see, for example, Magee v. the United Kingdom, App no 28135/95 (ECtHR, 20 June 2000), para. 50; Big Brother Watch and Others v. the United Kingdom, App nos 58170/13, 62322/14 and 24960/15 (ECtHR, 13 September 2018), paras. 516–518.

  19. CJEU, C-443/15, David L. Parris v. Trinity College Dublin and Others, 24 November 2016.
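
To make the quantitative disparity discussed in note 11 concrete, the following Python sketch is offered purely for illustration; it is not part of the paper. The group labels ‘A’ and ‘B’, the invented approval counts, and the 0.6 flagging ratio are assumptions, and the snippet only shows the kind of group-level selection-rate comparison a controller could document in a DPIA before any legal assessment of indirect discrimination.

```python
# Illustrative sketch only (not from the paper): group labels, sample data and
# the 0.6 cut-off are assumptions for illustration, not legal guidance.
from collections import Counter


def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs -> selection rate per group."""
    totals, positives = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}


def disparity_ratio(rates, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return rates[protected] / rates[reference]


if __name__ == "__main__":
    # Hypothetical decisions: (group, was the application approved?)
    decisions = ([("A", True)] * 48 + [("A", False)] * 52 +
                 [("B", True)] * 27 + [("B", False)] * 73)
    rates = selection_rates(decisions)
    ratio = disparity_ratio(rates, protected="B", reference="A")
    print(f"Selection rates: {rates}, ratio B/A: {ratio:.2f}")
    # Purely illustrative threshold; the legal assessment remains contextual.
    if ratio < 0.6:
        print("Flag for closer DPIA scrutiny: possible indirect discrimination.")
```

Such a descriptive statistic would only be one input into the DPIA; whether a given disparity amounts to indirect discrimination remains a contextual legal question.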

References

  1. Article 29 Data Protection Working Party: Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679, 4 October 2017, 17/EN WP 248 (2017)


  2. Article 29 Data Protection Working Party: Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, 17/EN WP 251, 03 October 2017


  3. European Data Protection Board: Draft Guidelines 4/2019 on Article 25 Data Protection by Design and by Default, adopted on 13 November 2019


  4. UK Information Commissioner’s Office: Human Bias and Discrimination in AI systems, 25 June 2019. https://ai-auditingframework.blogspot.com/2019/06/human-bias-and-discrimination-in-ai.html

  5. UK Information Commissioner’s Office: Automated Decision Making: The Role of Meaningful Human Reviews, 12 April 2019. https://ai-auditingframework.blogspot.com/2019/04/automated-decision-making-role-of.html

  6. Norwegian Data Protection Authority: Artificial Intelligence and Privacy (2018)


  7. EU Fundamental Rights Agency: Data Quality and Artificial Intelligence – Mitigating Bias and Error to Protect Fundamental Rights (2019)


  8. High-Level Expert Group on AI established by the European Commission: Ethics Guidelines For Trustworthy Artificial Intelligence, 8 April 2019


  9. Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (T-PD): Guidelines on the protection of individuals with regard to the processing of personal data in a world of Big Data, T-PD(2017)01, Strasbourg (2017)


  10. Kloza, D., van Dijk, N., Casiraghi, S., Vazquez Maymir, S., Roda, S., Tanas, A., Konstantinou, I.: Towards a method for data protection impact assessment: making sense of GDPR requirements. d.pia.lab Policy Brief (2019)


  11. Gellert, R.: We have always managed risks in data protection law. Understanding the similarities and differences between the rights-based and the risk-based approaches to data protection. Eur. Data Protect. Law Rev. 2(4), 481–492 (2016)


  12. Mantelero, A.: AI and Big Data: a blueprint for a human rights, social and ethical impact assessment. Comput. Law Secur. Rev. 34(4), 754–772 (2018)


  13. Kaminski, M.E., Malgieri, G.: Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations. University of Colorado Law Legal Studies Research Paper No. 19-28 (2019)


  14. Hacker, P.: Teaching fairness to artificial intelligence: existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Rev. 55, 1143–1186 (2018)


  15. Barocas, S., Selbst, A.: Big data’s disparate impact. Calif. Law Rev. 104, 671–732; 694–714 (2016)


  16. Kroll, J.A., et al.: Accountable algorithms. Univ. Pennsylvania Law Rev. 165, 633 (2016)


  17. Zarsky, T.: Understanding discrimination in the scored society. Washington Law Rev. 89(4) (2014). SSRN: https://ssrn.com/abstract=2550248

  18. Castelluccia, C., Le Métayer, D.: Understanding algorithmic decision-making: opportunities and challenges. European Parliamentary Research Service, Scientific Foresight Unit (STOA), PE 624.261 (2019)


  19. Žliobaitė, I.: Measuring discrimination in algorithmic decision making. Data Min. Knowl. Discov. 31, 1060–1089 (2017)


  20. Mehrabi, N., et al.: A Survey on Bias and Fairness in Machine Learning (2019)


  21. Veale, M., Binns, R.: Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data. Big Data Soc. 4(2), 1–17 (2017)


  22. Liu, L., Dean, S., Rolf, E., Simchowitz, M., Hardt, M.: Delayed impact of fair machine learning. In: Proceedings of the 35th International Conference on Machine Learning (2018)


  23. Davis, J.: Design methods for ethical persuasive computing. In: 2009 Proceedings of the 4th International Conference on Persuasive Technology, p. 6. ACM (2009)


  24. Binns, R.: On the apparent conflict between individual and group fairness. In: Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM (2020)


  25. Mittelstadt, B., Russell, C., Wachter, S.: Explaining explanations in AI. In: Proceedings of FAT* 2019: Conference on Fairness, Accountability, and Transparency (FAT* 2019) (2019). https://doi.org/10.1145/3287560.3287574. ISBN 978-1-4503-6125-5

  26. Zemel, R., et al.: Learning fair representations. In: Proceedings of the 30th International Conference on Machine Learning, vol. 28, p. 325 (2013)


  27. Kamiran, F., Calders, T.: Data preprocessing techniques for classification without discrimination. Knowl. Inf. Syst. 33(1), 1–33 (2011)


  28. Kamishima, T., et al.: Fairness-aware learning through regularization approach. In: Proceedings of the 3rd IEEE International Workshop on Privacy Aspects of Data Mining, p. 643 (2011)


  29. Gebru, T., et al.: Datasheets for Datasets (2018). arXiv:1803.09010 [cs.DB]

  30. Kilbertus, N., Gascón, A., Kusner, M.J., Veale, M., Gummadi, K.P., Weller, A.: Blind justice: fairness with encrypted sensitive attributes. In: Proceedings of the 35th International Conference on Machine Learning, PMLR 80, 2630–2639 (2018)


  31. Sandvig, C., et al.: Auditing algorithms: research methods for detecting discrimination on internet platforms. Paper presented to “Data and Discrimination: Converting Critical Concerns into Productive Inquiry”, Seattle, WA, USA, 22 May 2014


  32. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harvard J. Law Technol. 31(2) (2017)


  33. Kallus, N., Mao, X., Zhou, A.: Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination, 1 June 2019. https://arxiv.org/abs/1906.00285

  34. Ensign, D., Friedler, S.A., Neville, S., Scheidegger, C., Venkatasubramanian, S.: Runaway feedback loops in predictive policing. In: Conference on Fairness, Accountability, and Transparency. Proceedings of Machine Learning Research 81, 1–12 (2018)


  35. Žliobaitė, I.: Learning under concept drift: an overview (2010)


  36. Wachter, S.: Affinity profiling and discrimination by association in online behavioural advertising. Berkeley Technol. Law J. 35(2) (forthcoming, 2020)


  37. British Standards Institution: BS 8611 Robots and robotic devices – Guide to the ethical design and application of robots and robotic systems (2016)


  38. Institute of Electrical and Electronics Engineers: Standard P7003 – Algorithmic Bias Considerations (under development). https://standards.ieee.org/project/7003.html

  39. Taylor, L., Floridi, L., van der Sloot, B.: Group Privacy: New Challenges of Data Technologies. Springer International Publishing, Cham (2019)


  40. Naudts, L.: How machine learning generates unfair inequalities and how data protection instruments may help in mitigating them. In: Leenes, R., van Brakel, R., Gutwirth, S., De Hert, P. (eds.) Data Protection and Privacy: The Internet of Bodies (CPDP 2019) (2019)


  41. Klare, B.F., Burge, M.J., Klontz, J.C., Bruegge, R.W., Jain, A.K.: Face recognition performance: role of demographic information. IEEE Trans. Inf. Forensics Secur. 7(6), 1789–1801 (2012)


  42. European Commission: White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, COM(2020) 65 final (2020)



Author information


Correspondence to Yordanka Ivanova.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Ivanova, Y. (2020). The Data Protection Impact Assessment as a Tool to Enforce Non-discriminatory AI. In: Antunes, L., Naldi, M., Italiano, G., Rannenberg, K., Drogkaris, P. (eds) Privacy Technologies and Policy. APF 2020. Lecture Notes in Computer Science, vol 12121. Springer, Cham. https://doi.org/10.1007/978-3-030-55196-4_1


  • DOI: https://doi.org/10.1007/978-3-030-55196-4_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-55195-7

  • Online ISBN: 978-3-030-55196-4

  • eBook Packages: Computer Science; Computer Science (R0)
