Abadi M, Chu A, Goodfellow I, McMahan HB, Mironov I, Talwar K, Zhang L (2016) Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp 308–318. Vienna, Austria: ACM. https://doi.org/10.1145/2976749.2978318. Accessed 24 Aug 2020
Abebe R, Barocas S, Kleinberg J, Levy K, Raghavan M, Robinson DG (2020) Roles for computing in social change. https://arxiv.org/pdf/1912.04883.pdf. Accessed 24 Aug 2020
Aggarwal N (2020) The norms of algorithmic credit scoring. SSRN Electron J. https://doi.org/10.2139/ssrn.3569083
Allen A (2011) Unpopular privacy: what must we hide? Oxford University Press, Oxford. https://doi.org/10.1093/acprof:oso/9780195141375.001.0001
Ananny M, Crawford K (2018) Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc 20(3):973–989. https://doi.org/10.1177/1461444816676645
Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 24 Aug 2020
Arnold M, Bellamy RKE, Hind M, Houde S, Mehta S, Mojsilovic A, Nair R et al (2019) FactSheets: increasing trust in AI services through supplier’s declarations of conformity. ArXiv:1808.07261. http://arxiv.org/abs/1808.07261. Accessed 24 Aug 2020
Bambauer J, Zarsky T (2018) The algorithmic game. Notre Dame Law Rev 94(1):1–47
Barocas S, Selbst AD (2016) Big data’s disparate impact. SSRN Electron J. https://doi.org/10.2139/ssrn.2477899
Bauer WA, Dubljević V (2020) AI assistants and the paradox of internal automaticity. Neuroethics 13(3):303–310. https://doi.org/10.1007/s12152-019-09423-6
Baumer EPS (2017) Toward human-centered algorithm design. Big Data Soc 4(2):2053951717718854. https://doi.org/10.1177/2053951717718854
Beer D (2017) The social power of algorithms. Inform Commun Soc 20(1):1–13. https://doi.org/10.1080/1369118X.2016.1216147
Benjamin R (2019) Race after technology: abolitionist tools for the new jim code. Polity, Medford
Benjamin R (2020) 2020 Vision: reimagining the default settings of technology and society. https://iclr.cc/virtual_2020/speaker_3.html. Accessed 24 Aug 2020
Berk R, Heidari H, Jabbari S, Kearns M, Roth A (2018) Fairness in criminal justice risk assessments: the state of the art. Sociol Methods Res. https://doi.org/10.1177/0049124118782533
Binns R (2018) Fairness in machine learning: lessons from political philosophy. ArXiv:1712.03586. http://arxiv.org/abs/1712.03586. Accessed 24 Aug 2020
Blacklaws C (2018) Algorithms: transparency and accountability. Philos Trans R Soc A Math Phys Eng Sci 376(2128):20170351. https://doi.org/10.1098/rsta.2017.0351
Blyth CR (1972) On Simpson’s paradox and the sure-thing principle. J Am Stat Assoc 67(338):364–366. https://doi.org/10.1080/01621459.1972.10482387
Boyd D, Crawford K (2012) Critical questions for big data. Inform Commun Soc 15(5):662–679. https://doi.org/10.1080/1369118X.2012.678878
Buhmann A, Paßmann J, Fieseler C (2019) Managing algorithmic accountability: balancing reputational concerns, engagement strategies, and the potential of rational discourse. J Bus Ethics. https://doi.org/10.1007/s10551-019-04226-4
Burke R (2017) Multisided fairness for recommendation. ArXiv:1707.00093. http://arxiv.org/abs/1707.00093. Accessed 24 Aug 2020
Burrell J (2016) How the machine “thinks”: understanding opacity in machine learning algorithms. Big Data Soc 3(1):2053951715622512. https://doi.org/10.1177/2053951715622512
Brundage M, Avin S, Wang J, Belfield H, Krueger G, Hadfield G, Khlaaf H et al. (2020) Toward trustworthy AI development: mechanisms for supporting verifiable claims. ArXiv:2004.07213 [Cs]. http://arxiv.org/abs/2004.07213. Accessed 24 Aug 2020
Chakraborty A, Patro GK, Ganguly N, Gummadi KP, Loiseau P (2019) Equality of voice: towards fair representation in crowdsourced top-K recommendations. In: Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19, 129–138. Atlanta, GA, USA: ACM Press. https://doi.org/10.1145/3287560.3287570
Cohen J (2000) Examined lives: informational privacy and the subject as object. Georgetown Law Faculty Publications and Other Works, January. https://scholarship.law.georgetown.edu/facpub/810. Accessed 24 Aug 2020
Corbett-Davies S, Goel S (2018) The measure and mismeasure of fairness: a critical review of fair machine learning. ArXiv:1808.00023. http://arxiv.org/abs/1808.00023. Accessed 24 Aug 2020
Cowls J, Tsamados A, Taddeo M, Floridi L (2021) A definition, benchmark and database of AI for social good initiatives. Nat Mach Intell
Crain M (2018) The limits of transparency: data brokers and commodification. New Media Soc 20(1):88–104. https://doi.org/10.1177/1461444816657096
Cummings M (2012) Automation bias in intelligent time critical decision support systems. In: AIAA 1st Intelligent Systems Technical Conference. Chicago, Illinois: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2004-6313
Dahl ES (2018) Appraising Black-boxed technology: the positive prospects. Philos Technol 31(4):571–591. https://doi.org/10.1007/s13347-017-0275-1
Danks D, London AJ (2017) Algorithmic bias in autonomous systems. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 4691–4697. Melbourne, Australia: International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2017/654
Datta A, Tschantz MC, Datta A (2015) Automated experiments on Ad privacy settings. Proc Priv Enhanc Technol 2015(1):92–112. https://doi.org/10.1515/popets-2015-0007
Davis E, Marcus G (2019) Rebooting AI: building artificial intelligence we can trust. Pantheon Books, New York
Diakopoulos N, Koliska M (2017) Algorithmic transparency in the news media. Digit Journal 5(7):809–828. https://doi.org/10.1080/21670811.2016.1208053
Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. ArXiv:1702.08608. http://arxiv.org/abs/1702.08608. Accessed 24 Aug 2020
Edwards L, Veale M (2017) Slave to the algorithm? Why a right to explanation is probably not the remedy you are looking for. SSRN Electron J. https://doi.org/10.2139/ssrn.2972855
Ekstrand M, Levy K (2018) FAT* Network. https://fatconference.org/network. Accessed 24 Aug 2020
Eubanks V (2017) Automating inequality: how high-tech tools profile, police, and punish the poor, 1st edn. St. Martin’s Press, New York
Fink K (2018) Opening the government’s black boxes: freedom of information and algorithmic accountability. Inform Commun Soc 21(10):1453–1471. https://doi.org/10.1080/1369118X.2017.1330418
Floridi L (2012) Distributed morality in an information society. Sci Eng Ethics 19(3):727–743. https://doi.org/10.1007/s11948-012-9413-4
Floridi L (2016) Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions. Philos Trans R Soc A Math Phys Eng Sci 374(2083):20160112. https://doi.org/10.1098/rsta.2016.0112
Floridi L (2017) Infraethics–on the conditions of possibility of morality. Philos Technol 30(4):391–394. https://doi.org/10.1007/s13347-017-0291-1
Floridi L (2019b) Translating principles into practices of digital ethics: five risks of being unethical. Philos Technol 32(2):185–193. https://doi.org/10.1007/s13347-019-00354-x
Floridi L (2019a) What the near future of artificial intelligence could be. Philos Technol 32(1):1–15. https://doi.org/10.1007/s13347-019-00345-y
Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Harvard Data Sci Rev. https://doi.org/10.1162/99608f92.8cd550d1
Floridi L, Taddeo M (2016) What is data ethics? Philos Trans R Soc A: Math Phys Eng Sci 374(2083):20160360. https://doi.org/10.1098/rsta.2016.0360
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C et al (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind Mach 28(4):689–707. https://doi.org/10.1007/s11023-018-9482-5
Floridi L, Cowls J, King TC, Taddeo M (2020) How to design AI for social good: seven essential factors. Sci Eng Ethics 26(3):1771–1796. https://doi.org/10.1007/s11948-020-00213-5
Fuster A, Goldsmith-Pinkham P, Ramadorai T, Walther A (2017) Predictably unequal? The effects of machine learning on credit markets. SSRN Electron J. https://doi.org/10.2139/ssrn.3072038
Gajane P, Pechenizkiy M (2018) On formalizing fairness in prediction with machine learning. ArXiv:1710.03184. http://arxiv.org/abs/1710.03184. Accessed 24 Aug 2020
Gebru T, Morgenstern J, Vecchione B, Vaughan JW, Wallach H, Daumé III H, Crawford K (2020) Datasheets for datasets. ArXiv:1803.09010. http://arxiv.org/abs/1803.09010. Accessed 1 Aug 2020
Gillis TB, Spiess J (2019) Big data and discrimination. Univ Chicago Law Rev 86:459
Grant MJ, Booth A (2009) A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 26(2):91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
Green B, Chen Y (2019) Disparate interactions: an algorithm-in-the-loop analysis of fairness in risk assessments. In: Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT*’19, 90–99. Atlanta, GA, USA: ACM Press. https://doi.org/10.1145/3287560.3287563
Green B, Viljoen S (2020) Algorithmic realism: expanding the boundaries of algorithmic thought. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 19–31. Barcelona, Spain: ACM. https://doi.org/10.1145/3351095.3372840
Grgić-Hlača N, Redmiles EM, Gummadi KP, Weller A (2018) Human perceptions of fairness in algorithmic decision making: a case study of criminal risk prediction. ArXiv:1802.09548. http://arxiv.org/abs/1802.09548. Accessed 24 Aug 2020
Grote T, Berens P (2020) On the ethics of algorithmic decision-making in healthcare. J Med Ethics 46(3):205–211. https://doi.org/10.1136/medethics-2019-105586
Hager GD, Drobnis A, Fang F, Ghani R, Greenwald A, Lyons T, Parkes DC et al (2019) Artificial intelligence for social good. ArXiv:1901.05406 http://arxiv.org/abs/1901.05406. Accessed 24 Aug 2020
Harwell D (2020) Dating apps need women. Advertisers need diversity. AI companies offer a solution: fake people. Washington Post
Hauer T (2019) Society caught in a labyrinth of algorithms: disputes, promises, and limitations of the new order of things. Society 56(3):222–230. https://doi.org/10.1007/s12115-019-00358-5
Henderson P, Sinha K, Angelard-Gontier N, Ke NR, Fried G, Lowe R, Pineau J (2018) Ethical challenges in data-driven dialogue systems. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 123–129. New Orleans, LA, USA: ACM. https://doi.org/10.1145/3278721.3278777
Hill RK (2016) What an algorithm is. Philos Technol 29(1):35–59. https://doi.org/10.1007/s13347-014-0184-5
HLEG AI (2019) Ethics guidelines for trustworthy AI. European Commission High-Level Expert Group on Artificial Intelligence. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Accessed 24 Aug 2020
Hoffmann AL, Roberts ST, Wolf CT, Wood S (2018) Beyond fairness, accountability, and transparency in the ethics of algorithms: contributions and perspectives from LIS. Proc Assoc Inform Sci Technol 55(1):694–696. https://doi.org/10.1002/pra2.2018.14505501084
Hu M (2017) Algorithmic Jim Crow. Fordham Law Review. https://ir.lawnet.fordham.edu/flr/vol86/iss2/13/. Accessed 24 Aug 2020
Hutson M (2019) Bringing machine learning to the masses. Science 365(6452):416–417. https://doi.org/10.1126/science.365.6452.416
ICO (2020) ICO and The Turing Consultation on Explaining AI Decisions Guidance. ICO. 30 March 2020. https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-and-the-turing-consultation-on-explaining-ai-decisions-guidance/. Accessed 24 Aug 2020
James G, Witten D, Hastie T, Tibshirani R (2013) An introduction to statistical learning. Springer, New York
Karppi T (2018) The computer said so: on the ethics, effectiveness, and cultural techniques of predictive policing. Soc Media Soc 4(2):2056305118768296. https://doi.org/10.1177/2056305118768296
Karras T, Laine S, Aila T (2019) A style-based generator architecture for generative adversarial networks. ArXiv:1812.04948. http://arxiv.org/abs/1812.04948. Accessed 24 Aug 2020
Katell M, Young M, Dailey D, Herman B, Guetler V, Tam A, Binz C, Raz D, Krafft PM (2020) Toward situated interventions for algorithmic equity: lessons from the field. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 45–55. Barcelona, Spain: ACM. https://doi.org/10.1145/3351095.3372874
King G, Persily N (2020) Unprecedented Facebook URLs dataset now available for academic research through Social Science One. Social Science One
Kizilcec R (2016) How much information? Effects of transparency on trust in an algorithmic interface. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp 2390–2395. https://doi.org/10.1145/2858036.2858402
Klee R (1996) Introduction to the philosophy of science: cutting nature at its seams. Oxford University Press, Oxford
Kleinberg J, Mullainathan S, Raghavan M (2016) Inherent trade-offs in the fair determination of risk scores. ArXiv:1609.05807. http://arxiv.org/abs/1609.05807. Accessed 24 Aug 2020
Kortylewski A, Egger B, Schneider A, Gerig T, Morel-Forster A, Vetter T (2019) Analyzing and reducing the damage of dataset bias to face recognition with synthetic data. http://openaccess.thecvf.com/content_CVPRW_2019/html/BEFA/Kortylewski_Analyzing_and_Reducing_the_Damage_of_Dataset_Bias_to_Face_CVPRW_2019_paper.html. Accessed 24 Aug 2020
Labati RD, Genovese A, Muñoz E, Piuri V, Scotti F, Sforza G (2016) Biometric recognition in automated border control: a survey. ACM Comput Surv 49(2):1–39. https://doi.org/10.1145/2933241
Lambrecht A, Tucker C (2019) Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Manag Sci 65(7):2966–2981. https://doi.org/10.1287/mnsc.2018.3093
Larson B (2017) Gender as a variable in natural-language processing: ethical considerations. In: Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, 1–11. Valencia, Spain: Association for Computational Linguistics. https://doi.org/10.18653/v1/W17-1601
Lee MK (2018) Understanding perception of algorithmic decisions: fairness, trust, and emotion in response to algorithmic management. Big Data Soc 5(1):2053951718756684. https://doi.org/10.1177/2053951718756684
Lee TN (2018) Detecting racial bias in algorithms and machine learning. J Inform Commun Ethics Soc 16(3):252–260. https://doi.org/10.1108/JICES-06-2018-0056
Lee MS, Floridi L (2020) Algorithmic fairness in mortgage lending: from absolute conditions to relational trade-offs. SSRN Electron J. https://doi.org/10.2139/ssrn.3559407
Lee MK, Kim JT, Lizarondo L (2017) A human-centered approach to algorithmic services: considerations for fair and motivating smart community service management that allocates donations to non-profit organizations. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems—CHI ’17, 3365–3376. Denver, Colorado, USA: ACM Press. https://doi.org/10.1145/3025453.3025884
Lepri B, Oliver N, Letouzé E, Pentland A, Vinck P (2018) Fair, transparent, and accountable algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges. Philos Technol 31(4):611–627. https://doi.org/10.1007/s13347-017-0279-x
Lewis D (2019) Social credit case study: city citizen scores in Xiamen and Fuzhou. Medium: Berkman Klein Center Collection, 8 October 2019. https://medium.com/berkman-klein-center/social-credit-case-study-city-citizen-scores-in-xiamen-and-fuzhou-2a65feb2bbb3. Accessed 10 Oct 2020
Lipworth W, Mason PH, Kerridge I, Ioannidis JPA (2017) Ethics and epistemology in big data research. J Bioethical Inq 14(4):489–500. https://doi.org/10.1007/s11673-017-9771-3
Magalhães JC (2018) Do algorithms shape character? Considering algorithmic ethical subjectivation. Soc Media Soc 4(2):2056305118768301. https://doi.org/10.1177/2056305118768301
Malhotra C, Kotwal V, Dalal S (2018) Ethical framework for machine learning. In: 2018 ITU Kaleidoscope: machine learning for a 5G Future (ITU K), 1–8. Santa Fe: IEEE. https://doi.org/10.23919/ITU-WT.2018.8597767
Martin K (2019) Ethical implications and accountability of algorithms. J Bus Ethics 160(4):835–850. https://doi.org/10.1007/s10551-018-3921-3
Mayson SG (2019) Bias in, bias out. Yale Law Journal 128. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3257004. Accessed 24 Aug 2020
Milano S, Taddeo M, Floridi L (2020) Recommender systems and their ethical challenges. AI Soc. https://doi.org/10.1007/s00146-020-00950-y
Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms: mapping the debate. Big Data Soc. https://doi.org/10.1177/2053951716679679
Mojsilovic A (2018) Introducing AI explainability 360. https://www.ibm.com/blogs/research/2019/08/ai-explainability-360/. Accessed 24 Aug 2020
Möller J, Trilling D, Helberger N, van Es B (2018) Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. Inform Commun Soc 21(7):959–977. https://doi.org/10.1080/1369118X.2018.1444076
Morley J, Floridi L, Kinsey L, Elhalal A (2019) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics. https://doi.org/10.1007/s11948-019-00165-5
Morley J, Machado C, Burr C, Cowls J, Taddeo M, Floridi L (2019) The debate on the ethics of AI in health care: a reconstruction and critical review. SSRN Electron J. https://doi.org/10.2139/ssrn.3486518
Murgia M (2018) DeepMind’s move to transfer health unit to Google stirs data fears. Financial Times
Noble SU (2018) Algorithms of oppression: how search engines reinforce racism. New York University Press, New York
Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464):447–453. https://doi.org/10.1126/science.aax2342
Ochigame R (2019) The invention of “Ethical AI”. The Intercept, 20 December 2019. https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/. Accessed 24 Aug 2020
OECD (2019) Recommendation of the council on artificial intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. Accessed 24 Aug 2020
Olhede SC, Wolfe PJ (2018) The growing ubiquity of algorithms in society: implications, impacts and innovations. Philos Trans R Soc A Math Phys Eng Sci 376(2128):20170364. https://doi.org/10.1098/rsta.2017.0364
Olteanu A, Castillo C, Diaz F, Kiciman E (2016) Social data: biases, methodological pitfalls, and ethical boundaries. SSRN Electron J. https://doi.org/10.2139/ssrn.2886526
Oswald M (2018) Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power. Philos Trans R Soc A: Math Phys Eng Sci 376(2128):20170359. https://doi.org/10.1098/rsta.2017.0359
Paraschakis D (2017) Towards an ethical recommendation framework. In: 2017 11th International Conference on Research Challenges in Information Science (RCIS), 211–220. Brighton, United Kingdom: IEEE. https://doi.org/10.1109/RCIS.2017.7956539
Paraschakis D (2018) Algorithmic and ethical aspects of recommender systems in E-commerce. Malmö Universitet, Malmö
Perra N, Rocha LEC (2019) Modelling opinion dynamics in the age of algorithmic personalisation. Sci Rep 9(1):7261. https://doi.org/10.1038/s41598-019-43830-2
Perrault R, Shoham Y, Brynjolfsson E, Clark J, Etchemendy J, Grosz B, Lyons T, Manyika J, Mishra S, Niebles JC (2019) Artificial Intelligence Index Report 2019. AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford
Prates MOR, Avelar PH, Lamb LC (2019) Assessing gender bias in machine translation: a case study with Google Translate. Neural Comput Appl. https://doi.org/10.1007/s00521-019-04144-6
Dignum V, Lopez-Sanchez M, Micalizio R, Pavón J, Slavkovik M, Smakman M, van Steenbergen M et al (2018) Ethics by design: necessity or curse? In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society—AIES ’18, 60–66. New Orleans, LA, USA: ACM Press. https://doi.org/10.1145/3278721.3278745
Rachels J (1975) Why privacy is important. Philos Public Aff 4(4):323–333
Rahwan I (2018) Society-in-the-loop: programming the algorithmic social contract. Ethics Inf Technol 20(1):5–14. https://doi.org/10.1007/s10676-017-9430-8
Ras G, van Gerven M, Haselager P (2018) Explanation methods in deep learning: users, values, concerns and challenges. ArXiv:1803.07517. http://arxiv.org/abs/1803.07517. Accessed 24 Aug 2020
Reddy E, Cakici B, Ballestero A (2019) Beyond mystery: putting algorithmic accountability in context. Big Data Soc 6(1):2053951719826856. https://doi.org/10.1177/2053951719826856
Reisman D, Schultz J, Crawford K, Whittaker M (2018) Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute. https://ainowinstitute.org/aiareport2018.pdf. Accessed 24 Aug 2020
Richardson R, Schultz J, Crawford K (2019) Dirty data, bad predictions: how civil rights violations impact police data, predictive policing systems, and justice. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3333423. Accessed 24 Aug 2020
Robbins S (2019) A misdirected principle with a catch: explicability for AI. Mind Mach 29(4):495–514. https://doi.org/10.1007/s11023-019-09509-3
Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L (2019) The Chinese approach to artificial intelligence: an analysis of policy and regulation. SSRN Electron J. https://doi.org/10.2139/ssrn.3469784
Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L (2020) The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI Soc. https://doi.org/10.1007/s00146-020-00992-2
Rössler B (2015) The value of privacy. https://philpapers.org/rec/ROSTVO-9. Accessed 24 Aug 2020
Rubel A, Castro C, Pham A (2019) Agency laundering and information technologies. Ethical Theory Moral Pract 22(4):1017–1041. https://doi.org/10.1007/s10677-019-10030-w
Sandvig C, Hamilton K, Karahalios K, Langbort C (2016) When the algorithm itself is a racist: diagnosing ethical harm in the basic components of software. Int J Commun 10:4972–4990
Saxena N, Huang K, DeFilippis E, Radanovic G, Parkes D, Liu Y (2019) How do fairness definitions fare? Examining public attitudes towards algorithmic definitions of fairness. ArXiv:1811.03654. http://arxiv.org/abs/1811.03654. Accessed 24 Aug 2020
Selbst AD, Boyd D, Friedler SA, Venkatasubramanian S, Vertesi J (2019) Fairness and abstraction in sociotechnical systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19, 59–68. Atlanta, GA, USA: ACM Press. https://doi.org/10.1145/3287560.3287598
Shah H (2018) Algorithmic accountability. Philos Trans R Soc A: Math Phys Eng Sci 376(2128):20170362. https://doi.org/10.1098/rsta.2017.0362
Shin D, Park YJ (2019) Role of fairness, accountability, and transparency in algorithmic affordance. Comput Hum Behav 98(September):277–284. https://doi.org/10.1016/j.chb.2019.04.019
Sloan RH, Warner R (2018) When is an algorithm transparent? Predictive analytics, privacy, and public policy. IEEE Secur Priv 16(3):18–25. https://doi.org/10.1109/MSP.2018.2701166
Stilgoe J (2018) Machine learning, social learning and the governance of self-driving cars. Soc Stud Sci 48(1):25–56. https://doi.org/10.1177/0306312717741687
Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. ArXiv:1312.6199. http://arxiv.org/abs/1312.6199. Accessed 18 Jul 2020
Taddeo M, Floridi L (2018a) Regulate artificial intelligence to avert cyber arms race. Nature 556(7701):296–298. https://doi.org/10.1038/d41586-018-04602-6
Taddeo M, Floridi L (2018b) How AI can be a force for good. Science 361(6404):751–752. https://doi.org/10.1126/science.aat5991
Taddeo M, McCutcheon T, Floridi L (2019) Trusting artificial intelligence in cybersecurity is a double-edged sword. Nat Mach Intell 1(12):557–560. https://doi.org/10.1038/s42256-019-0109-1
Taylor L, Floridi L, van der Sloot B (eds) (2017) Group privacy: new challenges of data technologies. Springer, Berlin Heidelberg, New York
Tickle AB, Andrews R, Golea M, Diederich J (1998) The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks. IEEE Trans Neural Netw 9(6):1057–1068. https://doi.org/10.1109/72.728352
Turilli M, Floridi L (2009) The ethics of information transparency. Ethics Inf Technol 11(2):105–112. https://doi.org/10.1007/s10676-009-9187-9
Valiant LG (1984) A theory of the learnable. Commun ACM 27(11):1134–1142. https://doi.org/10.1145/1968.1972
Veale M, Binns R (2017) Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data. Big Data Soc 4(2):2053951717743530. https://doi.org/10.1177/2053951717743530
Vedder A, Naudts L (2017) Accountability for the use of algorithms in a big data environment. Int Rev Law Comput Technol 31(2):206–224. https://doi.org/10.1080/13600869.2017.1298547
Wang S, Jiang X, Singh S, Marmor R, Bonomi L, Fox D, Dow M, Ohno-Machado L (2017) Genome privacy: challenges, technical approaches to mitigate risk, and ethical considerations in the United States: genome privacy in biomedical research. Ann N Y Acad Sci 1387(1):73–83. https://doi.org/10.1111/nyas.13259
Watson D, Floridi L (2020) The explanation game: a formal framework for interpretable machine learning. SSRN Electron J. https://doi.org/10.2139/ssrn.3509737
Watson DS, Krutzinna J, Bruce IN, Griffiths CEM, McInnes IB, Barnes MR, Floridi L (2019) Clinical applications of machine learning algorithms: beyond the black box. BMJ. https://doi.org/10.1136/bmj.l886
Webb H, Patel M, Rovatsos M, Davoust A, Ceppi S, Koene A, Dowthwaite L, Portillo V, Jirotka M, Cano M (2019) “It would be pretty immoral to choose a random algorithm”: opening up algorithmic interpretability and transparency. J Inform Commun Ethics Soc 17(2):210–228. https://doi.org/10.1108/JICES-11-2018-0092
Weller A (2019) Transparency: motivations and challenges. ArXiv:1708.01870. http://arxiv.org/abs/1708.01870. Accessed 24 Aug 2020
Wexler J (2018) The what-if tool: code-free probing of machine learning models. https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html. Accessed 24 Aug 2020
Whitman M, Hsiang C-y, Roark K (2018) Potential for participatory big data ethics and algorithm design: a scoping mapping review. In: Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial - PDC ’18, 1–6. Hasselt and Genk, Belgium: ACM Press. https://doi.org/10.1145/3210604.3210644
Wiener N (1950) The human use of human beings. Houghton Mifflin, Boston
Winner L (1980) Do artifacts have politics? Daedalus 109(1):121–136
Wong P-H (2019) Democratizing algorithmic fairness. Philos Technol. https://doi.org/10.1007/s13347-019-00355-w
Xian Z, Li Q, Huang X, Li L (2017) New SVD-based collaborative filtering algorithms with differential privacy. J Intell Fuzzy Syst 33(4):2133–2144. https://doi.org/10.3233/JIFS-162053
Xu D, Yuan S, Zhang L, Wu X (2018) FairGAN: fairness-aware generative adversarial networks. In: 2018 IEEE International Conference on Big Data (Big Data), 570–575. Seattle, WA, USA: IEEE. https://doi.org/10.1109/BigData.2018.8622525
Yang G-Z, Bellingham J, Dupont PE, Fischer P, Floridi L, Full R, Jacobstein N et al (2018) The grand challenges of science robotics. Sci Robot 3(14):eaar7650. https://doi.org/10.1126/scirobotics.aar7650
Yampolskiy RV (ed) (2018) Artificial intelligence safety and security. Chapman and Hall/CRC, Boca Raton
Yu M, Du G (2019) Why are Chinese courts turning to AI? The Diplomat, 19 January 2019. https://thediplomat.com/2019/01/why-are-chinese-courts-turning-to-ai/. Accessed 24 Aug 2020
Zerilli J, Knott A, Maclaurin J, Gavaghan C (2019) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32(4):661–683. https://doi.org/10.1007/s13347-018-0330-6
Zhou N, Zhang C-T, Lv H-Y, Hao C-X, Li T-J, Zhu J-J, Zhu H et al (2019) Concordance study between IBM Watson for oncology and clinical practice for patients with cancer in China. Oncologist 24(6):812–819. https://doi.org/10.1634/theoncologist.2018-0255