
AI & SOCIETY

Volume 34, Issue 2, pp 313–319

Using Dreyfus’ legacy to understand justice in algorithm-based processes

  • David Casacuberta
  • Ariel Guersenzvaig
Original Article

Abstract

As AI becomes linked to more and more aspects of our lives, the need for algorithms that can take decisions that are not only accurate but also fair becomes apparent. This need shows up both in discussions of future trends, such as autonomous vehicles or the issue of superintelligence, and in actual implementations of machine learning used to decide whether a person should be admitted to a certain university or will be able to repay a loan. In this paper, we use Dreyfus’ account of ethical expertise to show that a purely symbolic, conceptual approach is not enough to give an AI some ability to make ethical judgements. We also need the ability to make sense of the surroundings, so as to reframe and define situations dynamically, drawing on multiple perspectives in a pre-reflective way.

Keywords

Expertise · Computer ethics · Autonomous driving · Algorithmic biases · Prereflective knowledge · Autopoiesis · Dreyfus

References

  1. Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  2. Appiah A (2008) Experiments in ethics. Harvard University Press, Cambridge
  3. Arkin RC, Ulam P (2009) An ethical adaptor: behavioral modification derived from moral emotions. In: 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), pp 381–387
  4. Ayer AJ (1937) Language, truth and logic. Dover, New York
  5. Berry-Jester AM et al (2015) The new science of sentencing: should prison sentences be based on crimes that haven’t been committed yet? The Marshall Project, Aug 4, 2015
  6. Bonnefon JF, Shariff A, Rahwan I (2016) The social dilemma of autonomous vehicles. Science 352(6293):1573–1576
  7. Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
  8. Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: The Cambridge handbook of artificial intelligence, pp 316–334
  9. Buolamwini J (2016) How I’m fighting bias in algorithms. TED Talks. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
  10. Chrisley R (1995) Taking embodiment seriously: non-conceptual content and robotics. In: Ford K, Glymour C, Hayes P (eds) Android epistemology. AAAI/MIT Press, Cambridge, pp 141–166
  11. Cowley C (2005) A new rejection of moral expertise. Med Healthcare Philos 8:273–279
  12. Datta A, Tschantz MC (2015) Automated experiments on ad privacy settings. Proc Priv Enhanc Technol 1:92–112
  13. Davis E (2015) Ethical guidelines for a superintelligence. Artif Intell 220:121–124
  14. DeSouza N (2013) Pre-reflective ethical know-how. Ethic Theory Moral Prac 16:279–294
  15. Dreyfus HL, Dreyfus SE (1986) Mind over machine. Free Press, New York
  16. Dreyfus HL, Dreyfus SE (1990) What is moral maturity? A phenomenological account of the development of ethical expertise. In: Universalism vs. communitarianism, pp 237–264
  17. Dreyfus HL, Dreyfus SE (1991) Towards a phenomenology of ethical expertise. Hum Stud 14:229–250
  18. Dreyfus HL, Dreyfus SE (2004) The ethical implications of the five-stage skill-acquisition model. Bull Sci Technol Soc 24:251–264
  19. Foot P (1967) The problem of abortion and the doctrine of double effect. Oxford Review 5:5–15
  20. Freeman K (2016) Algorithmic injustice: how the Wisconsin Supreme Court failed to protect due process rights in State v. Loomis. N C J Law Technol 18:75–106
  21. Gibbard A (1991) Wise choices, apt feelings. Harvard University Press, Cambridge
  22. Goodall NJ (2014) Machine ethics and automated vehicles. In: Road vehicle automation. Springer International Publishing, Berlin, pp 93–102
  23. Hevelke A, Nida-Rümelin J (2015) Responsibility for crashes of autonomous vehicles: an ethical analysis. Sci Eng Ethics 21(3):619–630
  24. Hurley M, Adebayo J (2016) Credit scoring in the era of big data. Yale J Law Technol 18(1):151
  25. Introna L, Wood D (2002) Picturing algorithmic surveillance: the politics of facial recognition systems. Surveill Soc 2:177–198
  26. Koerth-Baker M (2016) The calculus of criminal risk: the justice system has come to rely heavily on quantitative assessments of criminal risk. How well they work is a complicated question. Undark
  27. Lin P (2016) Why ethics matters for autonomous cars. In: Maurer M, Gerdes JC, Lenz B, Winner H (eds) Autonomous driving. Springer, Berlin
  28. Lyon D (2015) Surveillance as social sorting: privacy, risk and automated discrimination. Routledge, New York
  29. Muehlhauser L, Helm L (2012) The singularity and machine ethics. In: Singularity hypotheses. Springer, Berlin Heidelberg, pp 101–126
  30. Newman N (2014) The costs of lost privacy: consumer harm and rising economic inequality in the age of Google. William Mitchell Law Rev 40(2):12
  31. O’Neil C (2016) Weapons of math destruction. Allen Lane, London, pp 84–87
  32. Pinker S (2008) The moral instinct. The New York Times Magazine, p 12
  33. Reuters (2016) New Zealand passport robot tells applicant of Asian descent to open eyes. Technology News
  34. Scofield GR (1993) Ethics consultation: the least dangerous profession? Camb Q Healthc Ethics 2:417–426
  35. Sethi N, Mehta N, Srivastava M (2013) Algorithms in our daily life. Livemint. http://www.livemint.com/Specials/34LMe9rhl7u4fVJPKPtJEN/Algorithms-in-our-daily-life.html. Accessed 27 Aug 2013
  36. Skeem J, Eno Louden J (2007) Assessment of evidence on the quality of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). Unpublished report prepared for the California Department of Corrections and Rehabilitation. https://webfiles.uci.edu/skeem/Downloads.html
  37. Skirpan M, Yeh T (2017) Designing a moral compass for the future of computer vision using speculative analysis. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp 1368–1377
  38. Thomson JJ (1976) Killing, letting die, and the trolley problem. The Monist 59:204–217
  39. Vallverdú J (2011) The Eastern construction of the artificial mind. Enrahonar Quad Filos 47:171–185
  40. Varelius J (2007) Is ethical expertise possible? Med Healthcare Philos. https://doi.org/10.1007/s11019-007-9089-8 (Online First)
  41. Wood D, Konvitz E, Ball K (2003) The constant state of emergency: surveillance after 9/11. In: Ball K, Webster F (eds) The intensification of surveillance: crime, terror and warfare in the information era. Pluto Press, London
  42. Zureik E (2004) Governance, security and technology: the case of biometrics. Stud Polit Econ 73:113–137

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. Philosophy Department, Universitat Autònoma de Barcelona, Barcelona, Spain
  2. ELISAVA, Barcelona School of Design and Engineering, Barcelona, Spain
