
Transparency: Motivations and Challenges

  • Adrian Weller
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11700)

Abstract

Transparency is often deemed critical to enable effective real-world deployment of intelligent systems. Yet the motivations for and benefits of different types of transparency can vary significantly depending on context, and objective measurement criteria are difficult to identify. We provide a brief survey, suggesting challenges and related concerns, particularly when agents have misaligned interests. We highlight and review settings where transparency may cause harm, discussing connections across privacy, multi-agent game theory, economics, fairness and trust.

Keywords

Transparency · Interpretability · Explainable · Social good

Notes

Acknowledgements

This article is an extended version of [71]. The author thanks Frank Kelly for pointing out the Braess’ paradox example and related intuition; and thanks Vasco Carvalho, Stephen Cave, Jon Crowcroft, David Fohrman, Yarin Gal, Adria Gascon, Zoubin Ghahramani, Sanjeev Goyal, Krishna P. Gummadi, Dylan Hadfield-Menell, Bill Janeway, Frank Kelly, Aryeh Kontorovich, Neil Lawrence, Barney Pell and Mark Rowland for helpful discussions; and thanks the anonymous reviewers for helpful comments. The author acknowledges support from the David MacKay Newton research fellowship at Darwin College, The Alan Turing Institute under EPSRC grant EP/N510129/1 & TU/B/000074, and the Leverhulme Trust via the CFI.
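The Braess' paradox example credited above ([10]; see also [32, 56]) is the chapter's concrete case of added options causing harm: opening an attractive new road can leave every driver worse off at the selfish-routing equilibrium. Below is a minimal numerical sketch of the standard four-node instance; the specific link costs and the 4000-driver figure are the usual textbook numbers, assumed here for illustration rather than taken from the chapter.

```python
# Braess' paradox, standard textbook instance (illustrative numbers).
# Network: A -> B via C (edges A-C then C-B) or via D (edges A-D then D-B).
# Congestion-sensitive edges cost t/100 minutes when t drivers use them;
# the other edges cost a fixed 45 minutes.

N = 4000  # drivers travelling from A to B

def congested(t):
    """Travel time on a congestion-sensitive edge carrying t drivers."""
    return t / 100.0

FIXED = 45.0  # travel time on a congestion-insensitive edge

# Without the shortcut: the two routes are symmetric, so at equilibrium
# the flow splits evenly, N/2 per route.
time_without = congested(N / 2) + FIXED          # 20 + 45 = 65 minutes

# With a zero-cost shortcut C -> D: route A-C-D-B costs at most
# 40 + 0 + 40 = 80, which beats switching back to a 45-minute fixed edge
# for every individual driver, so all N drivers take the shortcut.
time_with = congested(N) + 0.0 + congested(N)    # 40 + 0 + 40 = 80 minutes

print(f"Equilibrium travel time without shortcut: {time_without:.0f} min")
print(f"Equilibrium travel time with shortcut:    {time_with:.0f} min")
```

At equilibrium every driver's journey rises from 65 to 80 minutes, even though no driver can do better by unilaterally avoiding the new link: individually rational responses to the extra information make everyone collectively worse off.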

References

1. Adel, T., Ghahramani, Z., Weller, A.: Discovering interpretable representations for both deep generative and discriminative models. In: ICML (2018)
2. Adelberg, A.: Narrative disclosures contained in financial reports: means of communication or manipulation? Acc. Bus. Res. 9(35), 179–190 (1979)
3. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., Mané, D.: Concrete problems in AI safety. arXiv preprint arXiv:1606.06565 (2016)
4. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine bias: there's software used across the country to predict future criminals. And it's biased against blacks. ProPublica, May 2016
5. Arrow, K., Debreu, G.: Existence of an equilibrium for a competitive economy. Econometrica: J. Econometric Soc. 22, 265–290 (1954)
6. Axelrod, R.: The Evolution of Cooperation. Basic Books, New York (2006)
7. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7), e0130140 (2015)
8. Barocas, S., Selbst, A.: Big data's disparate impact. Calif. Law Rev. 104(3), 671–732 (2016)
9. Bernstein, E.: The transparency trap. Harv. Bus. Rev. 92(10), 58–66 (2014)
10. Braess, D.: Über ein Paradoxon aus der Verkehrsplanung. Math. Methods Oper. Res. 12(1), 258–268 (1968)
11. Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P.: InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In: NeurIPS (2016)
12. Coldewey, D.: Bulgaria now requires (some) government software be open source. TechCrunch, July 2016
13. Cole, D.: We kill people based on metadata. The New York Review of Books, May 2014
14. Dabkowski, P., Gal, Y.: Real time image saliency for black box classifiers. In: NeurIPS (2017)
15. Datta, A., Sen, S., Zick, Y.: Algorithmic transparency via quantitative input influence: theory and experiments with learning systems. In: 2016 IEEE Symposium on Security and Privacy (SP), pp. 598–617. IEEE (2016)
16. Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning. CoRR abs/1702.08608 (2017)
17. Dwork, C., Hardt, M., Pitassi, T., Reingold, O.: Fairness through awareness. In: ITCS (2012)
18. Einstein, A.: The common language of science. Adv. Sci. 2(5) (1942). A 1941 recording by Einstein for the British Association for the Advancement of Science is available at https://www.youtube.com/watch?v=e3B5BC4rhAU
19. Etzioni, A.: Is transparency the best disinfectant? J. Polit. Philos. 18(4), 389–404 (2010)
20. Evtimova, K., Drozdov, A., Kiela, D., Cho, K.: Emergent communication in a multi-modal, multi-step referential game. In: ICLR (2018)
21. Feldman, M., Friedler, S.A., Moeller, J., Scheidegger, C., Venkatasubramanian, S.: Certifying and removing disparate impact. In: KDD (2015)
22. Ghani, R.: You say you want transparency and interpretability? (2016). http://www.rayidghani.com/you-say-you-want-transparency-and-interpretability
23. Gilovich, T., Savitsky, K., Medvec, V.: The illusion of transparency: biased assessments of others' ability to read one's emotional states. J. Pers. Soc. Psychol. 75(2), 332 (1998)
24. Grgić-Hlača, N., Zafar, M.B., Gummadi, K.P., Weller, A.: On fairness, diversity and randomness in algorithmic decision making. In: FAT/ML Workshop at KDD (2017)
25. Griffin, D., Ross, L.: Subjective construal, social inference, and human misunderstanding. Adv. Exp. Soc. Psychol. 24, 319–359 (1991)
26. Hammond, R., Axelrod, R.: The evolution of ethnocentrism. J. Conflict Resolut. 50(6), 926–936 (2006). http://www-personal.umich.edu/~axe/vtmovie.htm
27. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: NeurIPS (2016)
28. Havrylov, S., Titov, I.: Emergence of language with multi-agent games: learning to communicate with sequences of symbols. arXiv preprint arXiv:1705.11192 (2017)
29. Higgins, I., et al.: beta-VAE: learning basic visual concepts with a constrained variational framework. In: ICLR (2017)
30. Horling, B., Lesser, V.: A survey of multi-agent organizational paradigms. Knowl. Eng. Rev. 19(4), 281–316 (2004)
31. Joseph, M., Kearns, M., Morgenstern, J., Roth, A.: Fairness in learning: classic and contextual bandits. In: NeurIPS (2016)
32. Kelly, F.: Network routing. Philos. Trans. Royal Soc. Lond. A Math. Phys. Eng. Sci. 337(1647), 343–367 (1991)
33. Kilbertus, N., Gascón, A., Kusner, M.J., Veale, M., Gummadi, K.P., Weller, A.: Blind justice: fairness with encrypted sensitive attributes. In: ICML (2018)
34. Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: ICML (2017)
35. Lakkaraju, H., Bach, S., Leskovec, J.: Interpretable decision sets: a joint framework for description and prediction. In: KDD, pp. 1675–1684. ACM (2016)
36. Lakonishok, J., Shleifer, A., Vishny, R.W., Hart, O., Perry, G.L.: The structure and performance of the money management industry. Brook. Pap. Econ. Act. Microecon. 1992, 339–391 (1992)
37. Langer, E., Blank, A., Chanowitz, B.: The mindlessness of ostensibly thoughtful action: the role of "placebic" information in interpersonal interaction. J. Pers. Soc. Psychol. 36(6), 635–642 (1978)
38. Lawrence, N.: Living together: mind and machine intelligence. CoRR abs/1705.07996 (2017)
39. Leibo, J., Zambaldi, V., Lanctot, M., Marecki, J., Graepel, T.: Multi-agent reinforcement learning in sequential social dilemmas. In: AAMAS (2017). https://deepmind.com/blog/understanding-agent-cooperation/
40. Lerner, J., Tetlock, P.: Accounting for the effects of accountability. Psychol. Bull. 125(2), 255 (1999)
41. Levine, E., Schweitzer, M.: Prosocial lies: when deception breeds trust. Organ. Behav. Hum. Decis. Process. 126, 88–106 (2015)
42. Lipton, Z.: The mythos of model interpretability. In: ICML Workshop on Human Interpretability in Machine Learning, New York, NY, pp. 96–100 (2016)
43. Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., Mordatch, I.: Multi-agent actor-critic for mixed cooperative-competitive environments. In: NeurIPS (2017). https://blog.openai.com/learning-to-cooperate-compete-and-communicate/
44. Lowenstein, L.: Financial transparency and corporate governance: you manage what you measure. Columbia Law Rev. 96(5), 1335–1362 (1996)
45. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: NeurIPS (2017)
46. McAllister, R., et al.: Concrete problems for autonomous vehicle safety: advantages of Bayesian deep learning. In: IJCAI (2017)
47. Mordatch, I., Abbeel, P.: Emergence of grounded compositional language in multi-agent populations. In: AAAI (2018)
48. Mortier, R., Haddadi, H., Henderson, T., McAuley, D., Crowcroft, J.: Human-data interaction: the human face of the data-driven society. Social Science Research Network (2014)
49. O'Neill, O.: What we don't understand about trust. TED talk, June 2013
50. Pandey, G.: Indian Supreme Court in landmark ruling on privacy. BBC News, 24 August 2017. http://www.bbc.co.uk/news/world-asia-india-41033954
51. Peled, A.: When transparency and collaboration collide: the USA open data program. J. Am. Soc. Inf. Sci. Technol. 62(11), 2085–2094 (2011)
52. Peters, J., Janzing, D., Schölkopf, B.: Elements of Causal Inference: Foundations and Learning Algorithms. Adaptive Computation and Machine Learning Series. MIT Press, Cambridge (2017)
53. Prat, A.: The wrong kind of transparency. Am. Econ. Rev. 95(3), 862–877 (2005)
54. Ribeiro, M., Singh, S., Guestrin, C.: Why should I trust you?: Explaining the predictions of any classifier. In: KDD (2016)
55. Ross, S.A.: The economic theory of agency: the principal's problem. Am. Econ. Rev. 63(2), 134–139 (1973)
56. Roughgarden, T.: Selfish routing and the price of anarchy. Technical report, Stanford University Dept. of Computer Science (2006). http://www.dtic.mil/get-tr-doc/pdf?AD=ADA637949
57. Rudin, C.: Please stop explaining black box models for high stakes decisions. In: NeurIPS Workshop on Critiquing and Correcting Trends in Machine Learning (2018)
58. Rudin, C., Ustun, B.: Optimized scoring systems: toward trust in machine learning for healthcare and criminal justice. Interfaces 48(5), 449–466 (2018)
59. Samek, W., Binder, A., Montavon, G., Lapuschkin, S., Müller, K.R.: Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Networks Learn. Syst. 28(11), 2660–2673 (2017)
60. Shapley, L.S.: A value for n-person games. Contrib. Theory Games 2(28), 307–317 (1953)
61. Solove, D.: Privacy and power: computer databases and metaphors for information privacy. Stanford Law Rev. 53(6), 1393–1462 (2001)
62. Solove, D.: Why privacy matters even if you have 'nothing to hide'. Chronicle of Higher Education, 15 May 2011
63. Sridhar, D., Batniji, R.: Misfinancing global health: a case for transparency in disbursements and decision making. Lancet 372(9644), 1185–1191 (2008)
64. Štrumbelj, E., Kononenko, I.: Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst. 41(3), 647–665 (2014)
65. Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: ICML (2017)
66. Tishby, N., Pereira, F., Bialek, W.: The information bottleneck method. In: Allerton Conference on Communication, Control, and Computing (1999)
67. Varshney, K.R., Alemzadeh, H.: On the safety of machine learning: cyber-physical systems, decision sciences, and data products. Big Data 5(3), 246–255 (2017)
68. Varshney, L., Varshney, K.: Decision making with quantized priors leads to discrimination. Proc. IEEE 105(2), 241–255 (2017)
69. Vishwanath, T., Kaufmann, D.: Toward transparency: new approaches and their application to financial markets. World Bank Res. Obs. 16(1), 41–57 (2001)
70. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. J. Law Technol. 31(2), 841 (2018)
71. Weller, A.: Challenges for transparency. In: ICML Workshop on Human Interpretability (2017)
72. Wiggers, K.: New York City announces task force to find biases in algorithms. VentureBeat, May 2018
73. Wiley, R.: The evolution of communication: information and manipulation. Anim. Behav. 2, 156–189 (1983)
74. Zafar, M.B., Valera, I., Rodriguez, M.G., Gummadi, K.P., Weller, A.: From parity to preference-based notions of fairness in classification. In: NeurIPS (2017)
75. Zintgraf, L., Cohen, T., Adel, T., Welling, M.: Visualizing deep neural network decisions: prediction difference analysis. In: ICLR (2017)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Cambridge, Cambridge, UK
  2. The Alan Turing Institute, London, UK
  3. Leverhulme Centre for the Future of Intelligence, Cambridge, UK
