Aggarwal, N., Eidenmüller, H., Enriques, L., Payne, J., & van Zwieten, K. (2019). Autonomous systems and the law. München: C.H. Beck.
AI HLEG. 2019. European Commission’s ethics guidelines for trustworthy artificial intelligence. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1.
AIEIG. 2020. From principles to practice — An interdisciplinary framework to operationalise AI ethics. AI Ethics Impact Group, VDE Association for Electrical Electronic & Information Technologies e.V., Bertelsmann Stiftung, 1–56. https://doi.org/10.11586/2020013.
Aizenberg, E., & van den Hoven, J. (2020). Designing for human rights in AI. Big Data and Society. https://doi.org/10.1177/2053951720949566
AlgorithmWatch. 2019. Automating society: Taking stock of automated decision-making in the EU. Bertelsmann Stiftung, 73–83. https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Arvan, M. (2018). Mental time-travel, semantic flexibility, and A.I. ethics. AI and Society. https://doi.org/10.1007/s00146-018-0848-2
Assessment List for Trustworthy AI. 2020. Assessment list for trustworthy AI (ALTAI). https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment.
Auer, F., & Felderer, M. (2018). Shifting quality assurance of machine learning algorithms to live systems. In M. Tichy, E. Bodden, M. Kuhrmann, S. Wagner, & J.-P. Steghöfer (Eds.), Software Engineering und Software Management 2018 (S. 211–212). Bonn: Gesellschaft für Informatik.
Barredo Arrieta, A., Del Ser, J., Gil-Lopez, S., Díaz-Rodríguez, N., Bennetot, A., Chatila, R., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
Bellamy, R. K. E., Mojsilovic, A., Nagar, S., Natesan Ramamurthy, K., Richards, J., Saha, D., Sattigeri, P., et al. (2019). AI fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development. https://doi.org/10.1147/JRD.2019.2942287
Binns, R. (2018). What can political philosophy teach us about algorithmic fairness? IEEE Security & Privacy, 16(3), 73–80.
Boddington, P., Millican, P., & Wooldridge, M. (2017). Minds and machines special issue: Ethics and artificial intelligence. Minds and Machines, 27(4), 569–574. https://doi.org/10.1007/s11023-017-9449-y
Brown, S., Davidovic, J., & Hasan, A. (2021). The algorithm audit: Scoring the algorithms that score us. Big Data & Society, 8(1), 205395172098386. https://doi.org/10.1177/2053951720983865
Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Khlaaf, H., et al. 2020. Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv:2004.07213 [cs.CY]. http://arxiv.org/abs/2004.07213.
Bryson, J., & Winfield, A. (2017). Standardizing ethical design for artificial intelligence and autonomous systems. Computer, 50(5), 116–19.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society. https://doi.org/10.1177/2053951715622512
Cabrera, Á. A., Epperson, W., Hohman, F., Kahng, M., Morgenstern, J., Chau, D. H. 2019. FairVis: Visual analytics for discovering intersectional bias in machine learning. http://arxiv.org/abs/1904.05419.
Cath, C., Cowls, J., Taddeo, M., & Floridi, L. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A Mathematical, Physical and Engineering Sciences. https://doi.org/10.1098/rsta.2018.0080
Chopra, A. K., Singh, M. P. 2018. Sociotechnical systems and ethics in the large. In AIES 2018—Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society (pp. 48–53). https://doi.org/10.1145/3278721.3278740.
Christian, B. (2020). The alignment problem: Machine learning and human values. W.W. Norton & Company Ltd.
Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1–33.
CNIL. 2019. Privacy impact assessment—Methodology. Commission Nationale de l'Informatique et des Libertés.
Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051–2068. https://doi.org/10.1007/s11948-019-00146-8
Conrad, C. A. (2018). Business ethics—A philosophical and behavioral approach. Springer. https://doi.org/10.1007/978-3-319-91575-3
Cookson, C. 2018. Artificial intelligence faces public backlash, warns scientist. Financial Times, June 9, 2018. https://www.ft.com/content/0b301152-b0f8-11e8-99ca-68cf89602132.
Council of Europe. 2018. Algorithms and human rights. www.coe.int/freedomofexpression.
Cowls, J., & Floridi, L. (2018). Prolegomena to a white paper on an ethical framework for a good AI society. SSRN Electronic Journal.
Cummings, M. L. 2004. Automation bias in intelligent time critical decision support systems. In Collection of technical papers—AIAA 1st intelligent systems technical conference (Vol. 2, pp. 557–62).
Dafoe, A. (2017). AI governance: A research agenda. Oxford: Future of Humanity Institute, University of Oxford.
D’Agostino, M., & Durante, M. (2018). Introduction: The governance of algorithms. Philosophy and Technology, 31(4), 499–505. https://doi.org/10.1007/s13347-018-0337-z
Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., Scowcroft, J., & Hajkowicz, S. 2019. Artificial intelligence: Australia's ethics framework. Data61 CSIRO, Australia.
Deloitte. 2020. Deloitte introduces trustworthy AI framework to guide organizations in ethical application of technology. Press Release. 2020. https://www2.deloitte.com/us/en/pages/about-deloitte/articles/press-releases/deloitte-introduces-trustworthy-ai-framework.html.
Dennis, L. A., Fisher, M., Lincoln, N. K., Lisitsa, A., & Veres, S. M. (2016). Practical verification of decision-making in agent-based autonomous systems. Automated Software Engineering, 23(3), 305–359. https://doi.org/10.1007/s10515-014-0168-9
Di Maio, P. (2014). Towards a metamodel to support the joint optimization of socio technical systems. Systems, 2(3), 273–296. https://doi.org/10.3390/systems2030273
Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411
Dignum, V. 2017. Responsible autonomy. In Proceedings of the 26th international joint conference on artificial intelligence (IJCAI 2017). https://doi.org/10.24963/ijcai.2017/655.
ECP. 2018. Artificial intelligence impact assessment.
Ellemers, N., van der Toorn, J., Paunov, Y., & van Leeuwen, T. (2019). The psychology of morality: A review and analysis of empirical studies published From 1940 Through 2017. Personality and Social Psychology Review, 23(4), 332–366. https://doi.org/10.1177/1088868318811759
Epstein, Z., Payne, B. H., Shen, J. H., Hong, C. J., Felbo, B., Dubey, A., Groh, M., Obradovich, N., Cebrian, M., Rahwan, I. 2018. TuringBox: An experimental platform for the evaluation of AI systems. In Proceedings of the 27th international joint conference on artificial intelligence (IJCAI 2018) (pp. 5826–5828). https://doi.org/10.24963/ijcai.2018/851.
Erdelyi, O. J., Goldsmith, J. 2018. Regulating artificial intelligence: Proposal for a global solution. In AAAI/ACM conference on artificial intelligence, ethics and society. http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_13.pdf.
Etzioni, A., & Etzioni, O. (2016). AI assisted ethics. Ethics and Information Technology, 18(2), 149–156. https://doi.org/10.1007/s10676-016-9400-6
European Commission. 2021. Proposal for regulation of the European Parliament and of the council. COM(2021) 206 final. Brussels.
Evans, K., de Moura, N., Chauvier, S., Chatila, R., & Dogan, E. (2020). Ethical decision making in autonomous vehicles: The AV ethics project. Science and Engineering Ethics, 26(6), 3285–3312. https://doi.org/10.1007/s11948-020-00272-8
Fagerholm, F., Guinea, A. S., Mäenpää, H., Münch, J. 2014. Building blocks for continuous experimentation. In Proceedings of the 1st international workshop on rapid continuous software engineering (pp. 26–35). RCoSE 2014. ACM. https://doi.org/10.1145/2593812.2593816.
Falkenberg, L., & Herremans, I. (1995). Ethical behaviours in organizations: Directed by the formal or informal systems? Journal of Business Ethics, 14(2), 133–143. https://doi.org/10.1007/BF00872018
Felzmann, H., Fosch-Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2020). Towards transparency by design for artificial intelligence. Science and Engineering Ethics, 26(6), 3333–3361. https://doi.org/10.1007/s11948-020-00276-4
Floridi, L. (2013). Distributed morality in an information society. Science and Engineering Ethics, 19(3), 727–743. https://doi.org/10.1007/s11948-012-9413-4.
Floridi, L. (2016a). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083). https://doi.org/10.1098/rsta.2016.0112.
Floridi, L. (2016b). Tolerant paternalism: Pro-ethical design as a resolution of the dilemma of toleration. Science and Engineering Ethics, 22(6), 1669–1688. https://doi.org/10.1007/s11948-015-9733-2.
Floridi, L. (2017a). Infraethics–On the conditions of possibility of morality. Philosophy and Technology, 30(4), 391–394. https://doi.org/10.1007/s13347-017-0291-1.
Floridi, L. (2017b). The logic of design as a conceptual logic of information. Minds and Machines, 27(3), 495–519. https://doi.org/10.1007/s11023-017-9438-1.
Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy and Technology, 31(1). https://doi.org/10.1007/s13347-018-0303-9.
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy and Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–13. https://doi.org/10.1162/99608f92.8cd550d1.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C. et al. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5.
ForHumanity. 2021. Independent audit of AI systems. 2021. https://forhumanity.center/independent-audit-of-ai-systems.
Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. 2016. On the (im)possibility of fairness (pp. 1–16). http://arxiv.org/abs/1609.07236.
Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437. https://doi.org/10.1007/s11023-020-09539-2
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., Crawford, K. 2018. Datasheets for datasets. http://arxiv.org/abs/1803.09010.
Goodman, B. 2016. A step towards accountable algorithms?: Algorithmic discrimination and the European Union General Data Protection Regulation. In 29th conference on neural information processing systems (NIPS 2016), Barcelona, Spain (pp. 1–7).
Google. 2020. What-If Tool. People + AI Research (PAIR). https://pair-code.github.io/what-if-tool/index.html.
Gov. of Canada. 2019. Algorithmic impact assessment (AIA). Responsible use of artificial intelligence (AI). 2019. https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/algorithmic-impact-assessment.html.
Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205–211. https://doi.org/10.1136/medethics-2019-105586
Hagendorff, T. 2020. The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8.
IAF. 2019. Ethical data impact assessments and oversight models. Information Accountability Foundation. https://www.immd.gov.hk/pdf/PCAReport.pdf.
ICO. 2018. Guide to the general data protection regulation (GDPR). Information Commissioner's Office.
ICO. 2020. Guidance on the AI auditing framework: Draft guidance for consultation. Information Commissioner’s Office. https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf.
IEEE. (2019). Ethically aligned design. Intelligent Systems, Control and Automation: Science and Engineering, 95, 11–16. https://doi.org/10.1007/978-3-030-12524-0_2
IIA. 2017. The institute of internal auditors’s artificial intelligence auditing framework: Practical applications Part A. Global Perspectives and Insights. www.theiia.org/gpi.
Jobin, A., Ienca, M., Vayena, E. 2019. Artificial intelligence: The global landscape of ethics guidelines.
Jotterand, F., & Bosco, C. (2020). Keeping the ‘human in the loop’ in the age of artificial intelligence: Accompanying commentary for ‘correcting the brain?’ By Rainey and Erden. Science and Engineering Ethics, 26(5), 2455–2460. https://doi.org/10.1007/s11948-020-00241-1
Karanasiou, A. P., & Pinotsis, D. A. (2017). A study into the layers of automated decision-making: Emergent normative and legal aspects of deep learning. International Review of Law, Computers & Technology, 31(2), 170–187. https://doi.org/10.1080/13600869.2017.1298499
Kazim, E., Denny, D. M. T., & Koshiyama, A. (2021). AI auditing and impact assessment: According to the UK information commissioner's office. AI and Ethics. https://doi.org/10.1007/s43681-021-00039-2.
Keyes, O., Hutson, J., Durbin, M. 2019. A mulching proposal: Analysing and improving an algorithmic system for turning the elderly into high-nutrient slurry. In Extended abstracts of the 2019 CHI conference on human factors in computing systems (pp. 1–11). https://doi.org/10.1145/3290607.3310433.
Kim, P. 2017. Auditing algorithms for discrimination. University of Pennsylvania Law Review, 166, 189–203.
Kleinberg, J., Mullainathan, S., Raghavan, M. 2017. Inherent trade-offs in the fair determination of risk scores. In Leibniz International Proceedings in Informatics, LIPIcs 67 (pp 1–23). https://doi.org/10.4230/LIPIcs.ITCS.2017.43.
Koene, A., Clifton, C., Hatada, Y., Webb, H., Richardson, R. 2019. A governance framework for algorithmic accountability and transparency. https://doi.org/10.2861/59990.
Kolhar, M., Abu-Alhaj, M. M., & El-Atty, S. M. A. (2017). Cloud data auditing techniques with a focus on privacy and security. IEEE Security and Privacy, 15(1), 42–51. https://doi.org/10.1109/MSP.2017.16
Koshiyama, A. 2019. Algorithmic impact assessment: Fairness, robustness and explainability in automated decision-making.
Krafft, T. D., Zweig, K. A., & König, P. D. (2020). How to regulate algorithmic decision-making: A framework of regulatory requirements for different applications. Regulation and Governance. https://doi.org/10.1111/rego.12369
Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions. Mathematical, Physical and Engineering Sciences, 376(2133).
Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., Yu, H. 2016. Accountable algorithms. University of Pennsylvania Law Review, 165, 633–705.
Kusner, M., Loftus, J., Russell, C., Silva, R. 2017. Counterfactual fairness. In Advances in neural information processing systems (pp. 4067–4077).
LaBrie, R. C., Steinke, G. H. 2019. Towards a framework for ethical audits of AI algorithms. In 25th Americas conference on information systems, AMCIS 2019 (pp. 1–5).
Lauer, D. (2020). You cannot have AI ethics without ethics. AI and Ethics, 1–5. https://doi.org/10.1007/s43681-020-00013-4
Lee, M., Floridi, L., & Denev, A. (2020). Innovating with confidence: Embedding governance and fairness in a financial services risk management framework. Berkeley Technology Law Journal, 34(2), 1–19.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy and Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x
Leslie, D. (2019). Understanding artificial intelligence ethics and safety. The Alan Turing Institute (June, 2019).
Leveson, N. (2011). Engineering a safer world: Systems thinking applied to safety. Engineering Systems. MIT Press.
Lipton, Z. C., & Steinhardt, J. (2019). Troubling trends in machine-learning scholarship. Queue, 17(1), 1–15. https://doi.org/10.1145/3317287.3328534
Loi, M., Ferrario, A., & Viganò, E. (2020). Transparency as design publicity: Explaining and justifying inscrutable algorithms. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09564-w
Mahajan, V., Venugopal, V. K., Murugavel, M., & Mahajan, H. (2020). The algorithmic audit: Working with vendors to validate radiology-AI algorithms—How we do it. Academic Radiology, 27(1), 132–135. https://doi.org/10.1016/j.acra.2019.09.009.
Mau, S. (2019). The metric society: On the quantification of the social (S. Howe, Trans.). Polity Press.
Microsoft. 2020. Fairlearn: A toolkit for assessing and improving fairness in AI (pp. 1–6).
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., Gebru, T. 2019. Model cards for model reporting. In FAT* 2019—Proceedings of the 2019 Conference on fairness, accountability, and transparency, no. Figure 2 (pp. 220–29). https://doi.org/10.1145/3287560.3287596.
Mittelstadt, B. (2016). Auditing for transparency in content personalization systems. International Journal of Communication, 10, 4991–5002.
Mökander, J., & Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI. Minds and Machines. https://doi.org/10.1007/s11023-021-09557-8.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5.
OECD. 2019. Recommendation of the council on artificial intelligence. OECD/LEGAL/0449.
ORCAA. 2020. It’s the age of the algorithm and we have arrived unprepared. https://orcaarisk.com/.
Oxborough, C., Cameron, E., Rao, A., Birchall, A., Townsend, A., Westermann, Christian. 2018. Explainable AI. https://www.pwc.co.uk/audit-assurance/assets/explainable-ai.pdf.
PDPC. 2020. Model artificial intelligence governance framework, second edition. Personal Data Protection Commission of Singapore.
Power, M. (1999). The audit society: Rituals of verification. Oxford University Press.
PwC. 2019. A practical guide to responsible artificial intelligence (AI). https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai/responsible-ai-practical-guide.pdf.
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. https://doi.org/10.1007/s10676-017-9430-8
Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In AIES 2019—Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society (pp. 429–435). https://doi.org/10.1145/3306618.3314244.
Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., Barnes, P. 2020. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In FAT* 2020—Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 33–44). https://doi.org/10.1145/3351095.3372873.
Responsible AI Licenses. 2021. AI licenses. https://www.licenses.ai/about.
Saleiro, P., Kuester, B., Hinkson, L., London, J., Stevens, A., Anisfeld, A., Rodolfa, K. T., Ghani, R. 2018. Aequitas: A bias and fairness audit toolkit. http://arxiv.org/abs/1811.05577.
Sánchez-Monedero, J., Dencik, L., Edwards, L. 2020. What does it mean to ‘solve’ the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp 458–68). https://doi.org/10.1145/3351095.3372849.
Sandvig, C., Hamilton, K., Karahalios, K., Langbort, C. 2014. Auditing algorithms. In ICA 2014 data and discrimination preconference (pp. 1–23).
Scherer, M. (2016). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2), 353.
Schulam, P., Saria, S. 2019. Can you trust this prediction? Auditing pointwise reliability after learning 89. http://arxiv.org/abs/1901.00403.
Sharma, S, Henderson, J, Ghosh, J. 2019. CERTIFAI: Counterfactual explanations for robustness, transparency, interpretability, and fairness of artificial intelligence models. http://arxiv.org/abs/1905.07857.
Smart Dubai. 2019. AI ethics principles & guidelines. Smart Dubai Office.
Springer, A., Whittaker, S. 2019. Making transparency clear.
Steghöfer, J. P., Knauss, E., Horkoff, J., Wohlrab, R. 2019. Challenges of scaled agile for safety-critical systems. In Lecture notes in computer science, vol. 11915 (pp. 350–366). https://doi.org/10.1007/978-3-030-35333-9_26.
Strenge, B., & Schack, T. (2020). AWOSE—A process model for incorporating ethical analyses in agile systems engineering. Science and Engineering Ethics, 26(2), 851–870. https://doi.org/10.1007/s11948-019-00133-z
Susskind, R., & Susskind, D. (2015). The future of the professions: How technology will transform the work of human experts. Oxford University Press.
Taddeo, M. (2016). Data philanthropy and the design of the infraethics for information societies. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083). https://doi.org/10.1098/rsta.2016.0113.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991.
Tasioulas, J. (2018). First steps towards an ethics of robots and artificial intelligence. SSRN Electronic Journal, 7(1), 61–95. https://doi.org/10.2139/ssrn.3172840
Thaler, R., & Sunstein, C. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, Conn.: Yale University Press.
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., et al. (2020). The ethics of algorithms: Key problems and solutions. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3662302.
Turner Lee, N. (2018). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication and Ethics in Society, 16(3), 252–260. https://doi.org/10.1108/JICES-06-2018-0056
Tutt, A. (2017). An FDA for algorithms. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2747994
Ulrich, B., Bauberger, S., Damm, T., Engels, R., Rehbein, M. 2018. Policy paper on the asilomar principles on artificial intelligence.
Vakkuri, V., Kemell, K. K., Kultanen, J., Siponen, M., Abrahamsson, P. 2019. Ethically aligned design of autonomous systems: Industry viewpoint and an empirical study. ArXiv.
van de Poel, I. (2020). Embedding values in artificial intelligence (AI) systems. Minds and Machines, 30(3), 385–409. https://doi.org/10.1007/s11023-020-09537-4
Wachter, S., Mittelstadt, B., Russell, C. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841–887.
WEF. 2020. White paper: A framework for responsible limits on facial recognition. World Economic Forum.
Weiss, I. R. (1980). Auditability of software: A survey of techniques and costs. MIS Quarterly: Management Information Systems, 4(4), 39–50. https://doi.org/10.2307/248959
Whittlestone, J., Alexandrova, A., Nyrup, R., Cave, S. 2019. The role and limits of principles in AI ethics: Towards a focus on tensions. In AIES 2019—Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society (pp. 195–200). https://doi.org/10.1145/3306618.3314289.
Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K. 2019. Ethical and societal implications of algorithms, data, and artificial intelligence: A roadmap for research. http://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf.
Wiener, N. (1988). The human use of human beings: Cybernetics and society. Da Capo Series in Science.
Zarsky, T. (2016). The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science Technology and Human Values, 41(1), 118–132. https://doi.org/10.1177/0162243915605575