
Machine Learning and Discrimination: Problems and Solutions

  • Thilo Hagendorff
Main Contributions

Abstract

Between human action and technical artefacts there is a permanent interplay, which manifests itself, among other things, in the form of value transfer processes. In recent years, this issue has received particular attention in the context of digital information and communication systems, especially with regard to the use of machine learning applications. This paper brings together examples of value transfer processes in this context, focusing on the issue of algorithmic discrimination. It describes the causes of this form of discrimination and subsequently outlines concrete measures that can help achieve the goal of a non-discriminatory use of machine learning techniques.
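
To make the notion of algorithmic discrimination more concrete, the following minimal Python sketch (illustrative only, not taken from the paper; all data is synthetic) computes the disparate-impact ratio of a binary classifier's favourable outcomes across a protected and an unprotected group, one common operationalisation of indirect discrimination in the fairness literature.

    # Illustrative sketch: quantifying disparate impact of a binary classifier.
    # All data is synthetic; the 0.8 threshold is the informal "80% rule"
    # convention, not a definition taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic predictions (1 = favourable outcome) and a protected attribute
    # (1 = member of the protected group).
    y_pred = rng.integers(0, 2, size=1000)
    protected = rng.integers(0, 2, size=1000)

    # Rate of favourable outcomes per group.
    rate_protected = y_pred[protected == 1].mean()
    rate_unprotected = y_pred[protected == 0].mean()

    # Disparate-impact ratio: values well below 1 (often below ~0.8) are
    # commonly read as a signal of indirect discrimination.
    ratio = rate_protected / rate_unprotected
    print(f"favourable rate (protected):   {rate_protected:.2f}")
    print(f"favourable rate (unprotected): {rate_unprotected:.2f}")
    print(f"disparate-impact ratio:        {ratio:.2f}")

A ratio rather than a difference is used here because it matches the legal "80% rule" heuristic; preprocessing or model-level interventions of the kind the paper discusses aim to push such measures back towards parity.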

Keywords

Machine learning · Discrimination · Anti-discrimination · Values · Algorithms

Copyright information

© Österreichische Gesellschaft für Soziologie 2019

Authors and Affiliations

  1. Universität Tübingen, Tübingen, Germany
