
Rassistische Maschinen?

Übertragungsprozesse von Wertorientierungen zwischen Gesellschaft und Technik

Chapter in: Maschinenethik

Part of the book series: Ethik in mediatisierten Welten (EMW)

Abstract

The "refugee crisis" dominates public and mass-media discourse, including the forums of digital social networks. This chapter shows how racism, which is first of all a genuinely social phenomenon, flows into the functioning of information-technology systems (algorithms) through various processes of translation and learning. A decisive factor here is the (training) dataset from which the algorithms learn. What is needed, therefore, is on the one hand a critical reflection on how (training) datasets are used; on the other hand, it must be asked to what extent media as a whole should be a mere "mirror" of society, or whether a certain normative steering of media is socially desirable.
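
The learning mechanism summarized above can be made concrete with a small sketch. The following toy example is not taken from the chapter: the comments, the labels, and the choice of a naive Bayes text classifier are purely hypothetical assumptions, chosen only to show how a standard classifier reproduces a bias that exists solely in its training data.

```python
# Illustrative sketch only (hypothetical data, not from the chapter):
# a toy sentiment classifier reproduces the bias baked into its training set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training comments: the group term "refugees" co-occurs
# exclusively with hostile comments in this (deliberately skewed) sample.
comments = [
    "refugees are a threat",       # hostile
    "deport refugees now",         # hostile
    "refugees cause problems",     # hostile
    "what a lovely sunny day",     # neutral
    "the new park is beautiful",   # neutral
    "we enjoyed the concert",      # neutral
]
labels = ["hostile", "hostile", "hostile", "neutral", "neutral", "neutral"]

# Standard bag-of-words pipeline: the model sees only word co-occurrence
# statistics, with no notion of what the words mean.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(comments)
model = MultinomialNB().fit(X, labels)

# A perfectly innocuous sentence is scored as hostile, solely because the
# group term was statistically tied to hostility in the training data.
test = vectorizer.transform(["refugees opened a restaurant"])
print(model.predict(test))  # ['hostile']
```

The point of the sketch matches the chapter's argument: nothing in the learning algorithm itself is "racist"; the discriminatory association is imported wholesale from the (training) data, which is why the abstract locates the need for critical reflection at the level of how datasets are used.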



Author information

Correspondence to Thilo Hagendorff.


Copyright information

© 2019 Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature

About this chapter

Cite this chapter

Hagendorff, T. (2019). Rassistische Maschinen? In: Rath, M., Krotz, F., Karmasin, M. (eds) Maschinenethik. Ethik in mediatisierten Welten. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-21083-0_8

  • DOI: https://doi.org/10.1007/978-3-658-21083-0_8

  • Publisher Name: Springer VS, Wiesbaden

  • Print ISBN: 978-3-658-21082-3

  • Online ISBN: 978-3-658-21083-0

  • eBook Packages: Social Science and Law (German Language)
