Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI conference on human factors in computing systems (paper 582, pp. 1–18). New York: ACM.
Abràmoff, M. D., Lavin, P. T., Birch, M., Shah, N., & Folk, J. C. (2018). Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. npj Digital Medicine, 1, 39. Retrieved June 29, 2020, from https://www.nature.com/articles/s41746-018-0040-6.
ACM. (2017). Statement on algorithmic transparency and accountability. Retrieved January 10, 2020, from https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf.
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160.
Albu, O. B., & Flyverbom, M. (2019). Organizational transparency: Conceptualizations, conditions, and consequences. Business and Society, 58(2), 268–297.
AlgorithmWatch. (2019). Automating society: Taking stock of automated decision-making in the EU. Retrieved June 29, 2020, from https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf.
Altman, I. (1975). The environment and social behavior: Privacy, personal space, territory, and crowding. Monterey, California: Brooks/Cole Publishing Company.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989.
Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & SOCIETY, 35, 611–623.
Bahner, J. E., Hüper, A. D., & Manzey, D. (2008). Misuse of automated decision aids: Complacency, automation bias and the impact of training experience. International Journal of Human-Computer Studies, 66(9), 688–699.
Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics (5th ed.). Oxford: Oxford University Press.
Ben-Shahar, O., & Schneider, C. E. (2011). The failure of mandated disclosure. University of Pennsylvania Law Review, 159, 647–749.
Ben-Shahar, O., & Schneider, C. E. (2014). More than you wanted to know: The failure of mandated disclosure. Princeton: Princeton University Press.
Berglund, T. (2014). Corporate governance and optimal transparency. In J. Forssbaeck & L. Oxelheim (Eds.), The Oxford handbook of economic and institutional transparency (pp. 359–370). Oxford: Oxford University Press.
Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS Quarterly, 25(3), 351–370.
Bishop, S. (2018). Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm. Convergence, 24(1), 69–84.
Bovens, M. (2007). Analysing and assessing accountability: A conceptual framework. European Law Journal, 13(4), 447–468.
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227.
Brayne, S. (2017). Big data surveillance: The case of policing. American Sociological Review, 82(5), 977–1008.
Brayne, S., & Christin, A. (2020). Technologies of crime prediction: The reception of algorithms in policing and criminal courts. Social Problems. https://doi.org/10.1093/socpro/spaa004.
Brey, P. (2010). Values in technology and disclosive computer ethics. The Cambridge Handbook of Information and Computer Ethics, 4, 41–58.
Büchi, M., Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A., Velidi, S., & Viljoen, S. (2019). The chilling effects of algorithmic profiling: Mapping the issues. Computer Law & Security Review, 36, 105367.
Buhmann, A., Paßmann, J., & Fieseler, C. (2019). Managing algorithmic accountability: Balancing reputational concerns, engagement strategies, and the potential of rational discourse. Journal of Business Ethics. https://doi.org/10.1007/s10551-019-04226-4.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12.
Calo, R. (2011). Against notice skepticism in privacy (and elsewhere). Notre Dame Law Review, 87(3), 1027–1072.
Campolo, A., & Crawford, K. (2020). Enchanted determinism: Power without responsibility in artificial intelligence. Engaging Science, Technology, and Society, 6, 1–19.
Carlsson, B. (2014). Transparency of innovation policy. In J. Forssbaeck & L. Oxelheim (Eds.), The Oxford handbook of economic and institutional transparency (pp. 219–238). Oxford: Oxford University Press.
Casey, B., Farhangi, A., & Vogl, R. (2019). Rethinking explainable machines: The GDPR’s right to explanation debate and the rise of algorithmic audits in enterprise. Berkeley Technology Law Journal, 34(1), 143–188.
Cavoukian, A. (2009). Privacy by design: The 7 foundational principles. Information and Privacy Commissioner of Ontario. Retrieved January 10, 2020.
Cavoukian, A., Shapiro, S., & Cronk, R. J. (2014). Privacy engineering: Proactively embedding privacy, by design. Office of the Information and Privacy Commissioner. Retrieved January 10, 2020, from https://www.ipc.on.ca/wp-content/uploads/resources/pbd-priv-engineering.pdf.
Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051–2068.
d’Aquin, M., Troullinou, P., O’Connor, N. E., Cullen, A., Faller, G., & Holden, L. (2018). Towards an “Ethics by Design” methodology for AI research projects. In Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society (pp. 54–59).
De Laat, P. B. (2018). Algorithmic decision-making based on machine learning from Big Data: Can transparency restore accountability? Philosophy & Technology, 31(4), 525–541.
Dennedy, M., Fox, J., & Finneran, T. (2014). The privacy engineer’s manifesto: Getting from policy to code to QA to value. New York: Apress.
Diakopoulos, N. (2016). Accountability in algorithmic decision-making: A view from computational journalism. Communications of the ACM, 59(2), 56–62.
Dignum, V., Baldoni, M., Baroglio, C., Caon, M., Chatila, R., Dennis, L., Génova, G., Haim, G., Kließ, M. S., Lopez-Sanchez, M., & Micalizio, R. (2018). Ethics by design: Necessity or curse? In Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society (pp. 60–66).
Dourish, P. (2016). Algorithms and their others: Algorithmic culture in context. Big Data & Society, 3(2), 2053951716665128.
Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18–84.
Edwards, L., & Veale, M. (2018). Enslaving the algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”? IEEE Security and Privacy, 16(3), 46–54.
Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14(1), 57–74.
Elia, J. (2009). Transparency rights, technology, and trust. Ethics and Information Technology, 11(2), 145–153.
Eslami, M., Krishna Kumaran, S. R., Sandvig, C., & Karahalios, K. (2018). Communicating algorithmic process in online behavioral advertising. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1–13).
European Commission. (2020). White paper on artificial intelligence: A European approach to excellence and trust. Retrieved August 19, 2020, from https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.
Felzmann, H., Fosch Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2019a). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 1–14.
Felzmann, H., Fosch Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2019b). Robots and transparency: The multiple dimensions of transparency in the context of robot technologies. IEEE Robotics and Automation Magazine, 26(2), 71–78.
Forssbaeck, J., & Oxelheim, L. (2014). The multifaceted concept of transparency. In J. Forssbaeck & L. Oxelheim (Eds.), The Oxford handbook of economic and institutional transparency (pp. 3–30). Oxford: Oxford University Press.
Foster, C., & Frieden, J. (2017). Crisis of trust: Socio-economic determinants of Europeans’ confidence in government. European Union Politics, 18(4), 511–535.
Fox, J. (2007). The uncertain relationship between transparency and accountability. Development in Practice, 17(4–5), 663–671.
Friedman, B., Kahn, P., & Borning, A. (2008). Value sensitive design and information systems. In K. E. Himma & H. T. Tavani (Eds.), The handbook of information and computer ethics (pp. 69–101). Hoboken, NJ: Wiley.
Fule, P., & Roddick, J. F. (2004). Detecting privacy and ethical sensitivity in data mining results. In Proceedings of the 27th Australasian conference on computer science (Vol. 26, pp. 159–166). Australian Computer Society, Inc.
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50–57.
Greiling, D. (2014). Accountability and trust. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford handbook of public accountability (pp. 617–631). Oxford: Oxford University Press.
Grimmelikhuijsen, S., Porumbescu, G., Hong, B., & Im, T. (2013). The effect of transparency on trust in government: A cross-national comparative experiment. Public Administration Review, 73(4), 575–586.
Heald, D. (2006). Varieties of transparency. In C. Hood & D. Heald (Eds.), Transparency: The key to better governance? (pp. 25–43). London: British Academy Scholarship.
Hildebrandt, M. (2013). Profile transparency by design? Re-enabling double contingency. In M. Hildebrandt & K. de Vries (Eds.), Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology (pp. 221–246). London: Routledge.
Hirschman, A. O. (1970). Exit, voice, and loyalty: Responses to decline in firms, organizations, and states (Vol. 25). Cambridge: Harvard University Press.
HLEG AI (High-Level Expert Group on Artificial Intelligence). (2019). Ethics guidelines for trustworthy AI. Retrieved November 11, 2020, from https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines.
Hood, C. (2006). Transparency in historical perspective. In C. Hood & D. Heald (Eds.), Transparency: The key to better governance? (pp. 3–23). London: British Academy Scholarship.
IBM. (2018). Principles for trust and transparency. Retrieved November 11, 2020, from https://www.ibm.com/blogs/policy/wp-content/uploads/2018/05/IBM_Principles_OnePage.pdf.
ICDPPC (International Conference of Data Protection and Privacy Commissioners). (2018). Declaration on ethics and data protection in artificial intelligence. Retrieved November 11, 2020, from https://icdppc.org/wp-content/uploads/2018/10/20180922_ICDPPC-40th_AI-Declaration_ADOPTED.pdf.
ICO. (2020). What is automated individual decision-making and profiling? Retrieved August 19, 2020, from https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/.
IEEE. (2019). Ethically aligned design (version 2). Retrieved November 11, 2020, from https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf.
Iphofen, R., & Kritikos, M. (2019). Regulating artificial intelligence and robotics: Ethics by design in a digital society. Contemporary Social Science. https://doi.org/10.1080/21582041.2018.1563803.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Jones, K. (1996). Trust as an affective attitude. Ethics, 107(1), 4–25.
Kaminski, M. E. (2019). The right to explanation, explained. Berkeley Technology Law Journal, 34(1), 189–218.
Karanasiou, A. P., & Pinotsis, D. A. (2017). A study into the layers of automated decision-making: Emergent normative and legal aspects of deep learning. International Review of Law, Computers & Technology, 31(2), 170–187.
Kemper, J., & Kolkman, D. (2019). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society, 22(14), 2081–2096.
Kitchin, R., & Lauriault, T. P. (2014). Towards critical data studies: Charting and unpacking data assemblages and their work. In The programmable city working paper. Retrieved November 11, 2020, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2474112.
Kolkman, D. (2020). The (in) credibility of algorithmic models to non-experts. Information, Communication & Society. https://doi.org/10.1080/1369118X.2020.1761860.
Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633–705.
Kudina, O., & Verbeek, P. P. (2019). Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy. Science, Technology and Human Values, 44(2), 291–314.
Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I., & Wong, W. K. (2013). Too much, too little, or just right? Ways explanations impact end users’ mental models. In 2013 IEEE Symposium on visual languages and human centric computing (pp. 3–10).
Latonero, M. (2018). Governing artificial intelligence: Upholding human rights & dignity. Retrieved October, 20, 2020, from https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf.
Leetaru, K. (2018). Without transparency, democracy dies in the darkness of social media. Forbes, 25 January 2018. Retrieved October 20, 2020, from https://www.forbes.com/sites/kalevleetaru/2018/01/25/without-transparency-democracy-dies-in-the-darkness-of-social-media/.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611–627.
Mascharka, D., Tran, P., Soklaski, R., & Majumdar, A. (2018). Transparency by design: Closing the gap between performance and interpretability in visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4942–4950).
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
Meijer, A. (2014). Transparency. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford handbook of public accountability (pp. 507–524). Oxford: Oxford University Press.
Merchant, B. (2019). Tech journalism’s ‘on background’ scourge. Columbia Journalism Review, July 17 2019. Retrieved November 11, 2020, from https://www.cjr.org/opinion/tech-journalism-on-background.php.
Microsoft. (2019). Microsoft AI principles. Retrieved November 11, 2020, from https://www.microsoft.com/en-us/ai/our-approach-to-ai.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.
Mulligan, D. K., & King, J. (2011). Bridging the gap between privacy and design. University of Pennsylvania Journal of Constitutional Law, 14, 989–1034.
Neyland, D. (2016). Bearing account-able witness to the ethical algorithmic system. Science, Technology, & Human Values, 41(1), 50–76. https://doi.org/10.1177/0162243915598056.
Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42.
Nissenbaum, H. (2001). How computer systems embody values. Computer, 34(3), 120.
O’Neill, O. (2002). A question of trust: The BBC Reith Lectures 2002. Cambridge: Cambridge University Press.
Paal, B. P., & Pauly, D. A. (Eds.). (2018). Datenschutz-Grundverordnung Bundesdatenschutzgesetz. Munich: CH Beck.
Pasquale, F. (2015). The black box society. Cambridge, MA: Harvard University Press.
Rader, E., Cotter, K., & Cho, J. (2018). Explanations as mechanisms for supporting algorithmic transparency. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1–13). New York: ACM.
Rawlins, B. (2008). Give the emperor a mirror: Toward developing a stakeholder measurement of organizational transparency. Journal of Public Relations Research, 21(1), 71–99.
Ringel, L. (2019). Unpacking the transparency-secrecy nexus: Frontstage and backstage behaviour in a political party. Organization Studies, 40(5), 705–723.
Roberge, J., & Seyfert, R. (2016). What are algorithmic cultures? In R. Seyfert & J. Roberge (Eds.), Algorithmic cultures: Essays on meaning, performance and new technologies (pp. 13–37). London: Routledge.
Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. New Haven: Yale University Press.
Rosenberg, M. (2019). Ad tool Facebook built to fight disinformation doesn’t work as advertised. The New York Times, 25 July 2019. Retrieved November 11, 2020, from https://www.nytimes.com/2019/07/25/technology/facebook-ad-library.htm.
Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393–404.
Santa Clara Principles. (2018). Santa Clara principles on transparency and accountability in content moderation. Retrieved November 11, 2020, from https://newamericadotorg.s3.amazonaws.com/documents/Santa_Clara_Principles.pdf.
Schermer, B. W. (2011). The limits of privacy in automated profiling and data mining. Computer Law & Security Review, 27(1), 45–52.
Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 2053951717738104.
Seaver, N. (2019). Knowing algorithms. In J. Vertesi & D. Ribes (Eds.), Digital STS: A field guide for science and technology studies (pp. 412–422). Princeton: Princeton University Press.
Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–242.
Siles, I., Segura-Castillo, A., Solís, R., & Sancho, M. (2020). Folk theories of algorithmic recommendations on Spotify: Enacting data assemblages in the global South. Big Data & Society, 7(1), 1–15.
Singh, S. (2019). Everything in moderation: An analysis of how Internet platforms are using artificial intelligence to moderate user-generated content. New America, 22 July 2019. Retrieved October 20, 2020, from https://www.newamerica.org/oti/reports/everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user-generated-content/.
Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4–5), 395–412.
Sunstein, C. R. (2018). Output transparency vs. input transparency. In D. E. Pozen & M. Schudson (Eds.), Troubling transparency: The history and future of freedom of information (Chapter 9). New York: Columbia University Press. Retrieved November 11, 2020, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2826009.
Suzor, N. P., West, S. M., Quodling, A., & York, J. (2019). What do we mean when we talk about transparency? Toward meaningful transparency in commercial content moderation. International Journal of Communication, 13, 1526–1543.
Tamò-Larrieux, A. (2018). Designing for privacy and its legal framework. Cham: Springer.
Tielenburg, D. S. (2018). The ‘dark sides’ of transparency: Rethinking information disclosure as a social praxis. Master’s thesis, Utrecht University. Retrieved October 20, 2020, from https://dspace.library.uu.nl/handle/1874/369521.
Tsoukas, H. (1997). The tyranny of light: The temptations and the paradoxes of the information society. Futures, 29(9), 827–843.
Tutt, A. (2017). An FDA for algorithms. Administrative Law Review, 69(1), 83–123.
Van Otterlo, M. (2013). A machine learning view on profiling. In M. Hildebrandt & K. de Vries (Eds.), Privacy, due process and the computational turn: Philosophers of law meet philosophers of technology (pp. 41–64). London: Routledge.
Van Wynsberghe, A. (2013). Designing robots for care: Care centered value-sensitive design. Science and Engineering Ethics, 19(2), 407–433.
Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI conference on human factors in computing systems (paper 440). New York: ACM.
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2), 494–620.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99.
Weller, A. (2017). Challenges for transparency. arXiv preprint arXiv:1708.01870.
Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. In FAT* '20: Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 1–18). https://doi.org/10.1145/3351095.3372833.
Williams, C. C. (2005). Trust diffusion: The effect of interpersonal trust on structure, function, and organizational transparency. Business and Society, 44(3), 357–368.
Zarsky, T. Z. (2013). Transparent predictions. University of Illinois Law Review, 4, 1503–1570.
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy & Technology, 32(4), 661–683.