Fair, Transparent, and Accountable Algorithmic Decision-making Processes

The Premise, the Proposed Solutions, and the Open Challenges
  • Bruno Lepri
  • Nuria Oliver
  • Emmanuel Letouzé
  • Alex Pentland
  • Patrick Vinck
Research Article

Abstract

The combination of the increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is driving a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective, and thus potentially fairer, decisions than those made by humans, who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we provide an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. We also highlight the critical and urgent need to engage multi-disciplinary teams of researchers, practitioners, policy-makers, and citizens to co-develop, deploy, and evaluate in the real world algorithmic decision-making processes designed to maximize fairness and transparency. In doing so, we describe the Open Algorithms (OPAL) project as a step towards realizing the vision of a world where data and algorithms are used as lenses and levers in support of democracy and development.
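
To make the fairness notions surveyed in the paper concrete, here is a minimal, purely illustrative Python sketch (the function names and data are invented for this example, not taken from the paper) of two widely used group-fairness measures: the demographic parity difference and the disparate impact ratio.

```python
# Illustrative sketch only: hypothetical binary decisions (y_pred) and a
# binary protected attribute (group), with two common group-fairness metrics.

def positive_rate(y_pred, group, value):
    """Fraction of positive decisions received by members of one group."""
    decisions = [p for p, g in zip(y_pred, group) if g == value]
    return sum(decisions) / len(decisions)

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-decision rates between the two groups."""
    return abs(positive_rate(y_pred, group, 0) - positive_rate(y_pred, group, 1))

def disparate_impact_ratio(y_pred, group):
    """Ratio of one group's positive rate to the other's; the legal
    '80% rule' flags ratios below 0.8 as potential disparate impact."""
    return positive_rate(y_pred, group, 1) / positive_rate(y_pred, group, 0)

if __name__ == "__main__":
    # Hypothetical data: decisions for five members of each group.
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    print(demographic_parity_difference(y_pred, group))  # ~0.2
    print(disparate_impact_ratio(y_pred, group))         # ~0.67
```

Fairness-aware learning approaches in the literature reviewed by the paper typically constrain or regularize a classifier so that measures such as these stay within an acceptable threshold.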

Keywords

Algorithmic decision-making · Algorithmic transparency · Fairness · Accountability · Social good

Copyright information

© Springer Science+Business Media B.V. 2017

Authors and Affiliations

  • Bruno Lepri (1)
  • Nuria Oliver (2, 3)
  • Emmanuel Letouzé (3, 4)
  • Alex Pentland (3, 4)
  • Patrick Vinck (3, 5)

  1. Fondazione Bruno Kessler, Trento, Italy
  2. Vodafone Research, London, UK
  3. Data-Pop Alliance, New York, USA
  4. MIT Media Lab, Cambridge, USA
  5. Harvard Humanitarian Initiative, Cambridge, USA