Fair, Transparent, and Accountable Algorithmic Decision-making Processes

The Premise, the Proposed Solutions, and the Open Challenges

  • Research Article
  • Published in Philosophy & Technology

Abstract

The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is driving a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective, and thus potentially fairer, decisions than those made by humans, who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we provide an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. We also highlight the critical and urgent need to engage multi-disciplinary teams of researchers, practitioners, policy-makers, and citizens to co-develop, deploy, and evaluate in real-world settings algorithmic decision-making processes designed to maximize fairness and transparency. In doing so, we describe the Open Algorithms (OPAL) project as a step towards realizing the vision of a world where data and algorithms are used as lenses and levers in support of democracy and development.
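Among the technical solutions the paper surveys are auditing tools that quantify group fairness in a system's decisions. As a minimal, hypothetical sketch (illustrative only, not code from the paper), the following computes per-group selection rates and the disparate impact ratio; the data, group labels, and function names are all assumptions made for the example.

    # Minimal group-fairness audit sketch (illustrative, not from the paper).
    from collections import Counter

    def selection_rates(decisions, groups):
        """Fraction of positive (d == 1) decisions per group."""
        positives = Counter(g for d, g in zip(decisions, groups) if d == 1)
        totals = Counter(groups)
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact(decisions, groups, protected, reference):
        """Ratio of the protected group's selection rate to the reference
        group's; values below 0.8 are often flagged under the
        'four-fifths' rule of thumb used in US employment law."""
        rates = selection_rates(decisions, groups)
        return rates[protected] / rates[reference]

    # Hypothetical model outputs for eight individuals in two groups.
    decisions = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(selection_rates(decisions, groups))             # {'a': 0.75, 'b': 0.25}
    print(disparate_impact(decisions, groups, "b", "a"))  # ~0.33, below 0.8

In this toy example an audit would flag group "b": its selection rate is one third of group "a"'s, well below the four-fifths threshold commonly used as a first screen for disparate impact.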


Notes

  1. https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligence

  2. http://www.datatransparencylab.org/

  3. http://www.darpa.mil/program/explainable-artificial-intelligence

  4. http://www.law.nyu.edu/centers/ili/algorithmsconference

  5. http://www.chicagotribune.com/business/ct-background-check-penalties-1030-biz-20151029-story.html

  6. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)

  7. http://opalproject.org/


Author information

Corresponding author

Correspondence to Bruno Lepri.


About this article

Cite this article

Lepri, B., Oliver, N., Letouzé, E. et al. Fair, Transparent, and Accountable Algorithmic Decision-making Processes. Philos. Technol. 31, 611–627 (2018). https://doi.org/10.1007/s13347-017-0279-x

