Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI conference on human factors in computing systems—CHI’18 (pp. 1–18). https://doi.org/10.1145/3173574.3174156.
Adamson, G., Havens, J. C., & Chatila, R. (2019). Designing a value-driven future for ethical autonomous and intelligent systems. Proceedings of the IEEE, 107(3), 518–525. https://doi.org/10.1109/JPROC.2018.2884923.
AI Now Institute. (2018). Algorithmic accountability policy toolkit. Retrieved from https://ainowinstitute.org/aap-toolkit.pdf.
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261. https://doi.org/10.1080/09528130050111428.
Alshammari, M., & Simpson, A. (2017). Towards a principled approach for engineering privacy by design. In E. Schweighofer, H. Leitold, A. Mitrakas, & K. Rannenberg (Eds.), Privacy technologies and policy (Vol. 10518, pp. 161–177). Cham: Springer. https://doi.org/10.1007/978-3-319-67280-9_9.
Anabo, I. F., Elexpuru-Albizuri, I., & Villardón-Gallego, L. (2019). Revisiting the Belmont report’s ethical principles in internet-mediated research: Perspectives from disciplinary associations in the social sciences. Ethics and Information Technology, 21(2), 137–149. https://doi.org/10.1007/s10676-018-9495-z.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645.
Anderson, M., & Anderson, S. L. (2018). GenEth: A general ethical dilemma analyzer. Paladyn, Journal of Behavioral Robotics, 9(1), 337–357. https://doi.org/10.1515/pjbr-2018-0024.
Antignac, T., Sands, D., & Schneider, G. (2016). Data minimisation: A language-based approach (long version). arXiv:1611.05642 [Cs].
Arnold, T., & Scheutz, M. (2018). The “big red button” is too late: An alternative model for the ethical evaluation of AI systems. Ethics and Information Technology, 20(1), 59–69. https://doi.org/10.1007/s10676-018-9447-7.
Arvan, M. (2014). A better, dual theory of human rights. The Philosophical Forum, 45(1), 17–47. https://doi.org/10.1111/phil.12025.
Arvan, M. (2018). Mental time-travel, semantic flexibility, and A.I. ethics. AI & Society. https://doi.org/10.1007/s00146-018-0848-2.
Beijing AI Principles. (2019). Retrieved from Beijing Academy of Artificial Intelligence website. https://www.baai.ac.cn/blog/beijing-ai-principles.
Bibal, A., & Frénay, B. (2016). Interpretability of machine learning models and representations: An introduction.
Binns, R. (2018a). Algorithmic accountability and public reason. Philosophy & Technology, 31(4), 543–556. https://doi.org/10.1007/s13347-017-0263-5.
Binns, R. (2018b). What can political philosophy teach us about algorithmic fairness? IEEE Security and Privacy, 16(3), 73–80. https://doi.org/10.1109/MSP.2018.2701147.
Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 CHI conference on human factors in computing systems—CHI’18 (pp. 1–14). https://doi.org/10.1145/3173574.3173951.
Buhmann, A., Paßmann, J., & Fieseler, C. (2019). Managing algorithmic accountability: Balancing reputational concerns, engagement strategies, and the potential of rational discourse. Journal of Business Ethics. https://doi.org/10.1007/s10551-019-04226-4.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512.
Cath, C. (2018). Governing Artificial Intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9901-7.
Cath, C., Zimmer, M., Lomborg, S., & Zevenbergen, B. (2018). Association of internet researchers (AoIR) roundtable summary: Artificial Intelligence and the good society workshop proceedings. Philosophy & Technology, 31(1), 155–162. https://doi.org/10.1007/s13347-018-0304-8.
Cavoukian, A., Taylor, S., & Abrams, M. E. (2010). Privacy by design: Essential for organizational accountability and strong business practices. Identity in the Information Society, 3(2), 405–413. https://doi.org/10.1007/s12394-010-0053-z.
Clarke, R. (2019). Principles and business processes for responsible AI. Computer Law and Security Review. https://doi.org/10.1016/j.clsr.2019.04.007.
Coeckelbergh, M. (2012). Moral responsibility, technology, and experiences of the tragic: From Kierkegaard to offshore engineering. Science and Engineering Ethics, 18(1), 35–48. https://doi.org/10.1007/s11948-010-9233-3.
Cookson, C. (2018, September 6). Artificial Intelligence faces public backlash, warns scientist. Financial Times. Retrieved from https://www.ft.com/content/0b301152-b0f8-11e8-99ca-68cf89602132.
Cowls, J., King, T., Taddeo, M., & Floridi, L. (2019). Designing AI for social good: Seven essential factors (May 15, 2019). Available at SSRN: https://ssrn.com/abstract=.
Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311–313. https://doi.org/10.1038/538311a.
D’Agostino, M., & Durante, M. (2018). Introduction: The governance of algorithms. Philosophy & Technology, 31(4), 499–505. https://doi.org/10.1007/s13347-018-0337-z.
Dennis, L. A., Fisher, M., Lincoln, N. K., Lisitsa, A., & Veres, S. M. (2016). Practical verification of decision-making in agent-based autonomous systems. Automated Software Engineering, 23(3), 305–359. https://doi.org/10.1007/s10515-014-0168-9.
Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411.
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv:1702.08608 [Cs, Stat].
DotEveryone. (2019). The DotEveryone consequence scanning agile event. Retrieved from https://doteveryone.org.uk/project/consequence-scanning/.
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580.
Durante, M. (2010). What is the model of trust for multi-agent systems? Whether or not e-trust applies to autonomous agents. Knowledge, Technology & Policy, 23(3–4), 347–366. https://doi.org/10.1007/s12130-010-9118-4.
Edwards, L., & Veale, M. (2018). Enslaving the algorithm: From a “right to an explanation” to a “right to better decisions”? IEEE Security and Privacy, 16(3), 46–54. https://doi.org/10.1109/MSP.2018.2701152.
European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved from https://ec.europa.eu/futurium/en/ai-alliance-consultation.
Floridi, L. (2016a). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160112. https://doi.org/10.1098/rsta.2016.0112.
Floridi, L. (2016b). Tolerant paternalism: Pro-ethical design as a resolution of the dilemma of toleration. Science and Engineering Ethics, 22(6), 1669–1688. https://doi.org/10.1007/s11948-015-9733-2.
Floridi, L. (2017). The logic of design as a conceptual logic of information. Minds and Machines, 27(3), 495–519. https://doi.org/10.1007/s11023-017-9438-1.
Floridi, L. (2018). Soft ethics, the governance of the digital and the general data protection regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180081. https://doi.org/10.1098/rsta.2018.0081.
Floridi, L. (2019a). Establishing the rules for building trustworthy AI. Nature Machine Intelligence. https://doi.org/10.1038/s42256-019-0055-y.
Floridi, L. (2019b). The logic of information: A theory of philosophy as conceptual design (1st ed.). New York, NY: Oxford University Press.
Floridi, L. (2019c). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00354-x.
Floridi, L., & Clement-Jones, T. (2019, March 20). The five principles key to any ethical framework for AI. Tech New Statesman. Retrieved from https://tech.newstatesman.com/policy/ai-ethics-framework.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5.
Floridi, L., & Strait, A. (Forthcoming). Ethical foresight analysis: What it is and why it is needed.
Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https://doi.org/10.1098/rsta.2016.0360.
Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness. arXiv:1609.07236 [Cs, Stat].
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a ‘right to explanation’. AI Magazine, 38(3), 50. https://doi.org/10.1609/aimag.v38i3.2741.
Green, B. P. (2018). Ethical reflections on Artificial Intelligence. Scientia et Fides, 6(2), 9. https://doi.org/10.12775/setf.2018.015.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1–42. https://doi.org/10.1145/3236009.
Habermas, J. (1983). Moralbewußtsein und kommunikatives Handeln [Moral consciousness and communicative action]. Frankfurt am Main: Suhrkamp. (English translation, 1990).
Habermas, J. (1991). The structural transformation of the public sphere: An inquiry into a category of bourgeois society. Cambridge, Mass: MIT Press.
Hagendorff, T. (2019). The ethics of AI ethics—an evaluation of guidelines. arXiv:1903.03425 [Cs, Stat].
Heath, J. (2014). Rebooting discourse ethics. Philosophy and Social Criticism, 40(9), 829–866. https://doi.org/10.1177/0191453714545340.
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630. https://doi.org/10.1007/s11948-014-9565-5.
Holland, S., Hosny, A., Newman, S., Joseph, J., & Chmielinski, K. (2018). The dataset nutrition label: A framework to drive higher data quality standards. arXiv:1805.03677 [Cs].
Holm, E. A. (2019). In defense of the black box. Science, 364(6435), 26–27. https://doi.org/10.1126/science.aax0162.
Holzinger, A. (2018). From machine learning to explainable AI. In 2018 World symposium on digital intelligence for systems and machines (DISA) (pp. 55–66). https://doi.org/10.1109/DISA.2018.8490530.
IDEO.org. (2015). The field guide to human-centered design. Retrieved from http://www.designkit.org/resources/1.
Involve, & DeepMind. (2019). How to stimulate effective public engagement on the ethics of Artificial Intelligence. Retrieved from https://www.involve.org.uk/sites/default/files/field/attachemnt/How%20to%20stimulate%20effective%20public%20debate%20on%20the%20ethics%20of%20artificial%20intelligence%20.pdf.
Jacobs, N., & Huldtgren, A. (2018). Why value sensitive design needs ethical commitments. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9467-3.
Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial Intelligence: The global landscape of ethics guidelines. arXiv:1906.11668 [Cs].
Johansson, F. D., Shalit, U., & Sontag, D. (2016). Learning representations for counterfactual inference. arXiv:1605.03661 [Cs, Stat].
Kemper, J., & Kolkman, D. (2018). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society. https://doi.org/10.1080/1369118X.2018.1477967.
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). Human decisions and machine predictions. The Quarterly Journal of Economics. https://doi.org/10.1093/qje/qjx032.
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv:1609.05807 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1609.05807.
Knight, W. (2019). Why does Beijing suddenly care about AI ethics? MIT Technology Review. Retrieved from https://www.technologyreview.com/s/613610/why-does-china-suddenly-care-about-ai-ethics-and-privacy/.
Knoppers, B. M., & Thorogood, A. M. (2017). Ethics and big data in health. Current Opinion in Systems Biology, 4, 53–57. https://doi.org/10.1016/j.coisb.2017.07.001.
Kolter, Z., & Madry, A. (2018). Materials for the tutorial ‘Adversarial robustness: Theory and practice’. Retrieved from https://adversarial-ml-tutorial.org/.
Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180084. https://doi.org/10.1098/rsta.2018.0084.
La Fors, K., Custers, B., & Keymolen, E. (2019). Reassessing values for emerging big data technologies: Integrating design-based and application-based approaches. Ethics and Information Technology. https://doi.org/10.1007/s10676-019-09503-4.
Lakkaraju, H., Kleinberg, J., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). The selective labels problem: evaluating algorithmic predictions in the presence of unobservables. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining—KDD’17 (pp. 275–284). https://doi.org/10.1145/3097983.3098066.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x.
Lessig, L. (2006). Code (Version 2.0). New York: Basic Books.
Lighthill, J. (1973). Artificial Intelligence: A general survey. In Artificial Intelligence: A paper symposium. Retrieved from UK Science Research Council website: http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm.
Lipton, Z. C. (2016). The mythos of model interpretability. arXiv:1606.03490 [Cs, Stat].
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (pp. 4765–4774). Retrieved from http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf.
Makri, E.-L., & Lambrinoudakis, C. (2015). Privacy principles: Towards a common privacy audit methodology. In S. Fischer-Hübner, C. Lambrinoudakis, & J. López (Eds.), Trust, privacy and security in digital business (Vol. 9264, pp. 219–234). Cham: Springer.
Matzner, T. (2014). Why privacy is not enough: Privacy in the context of “ubiquitous computing” and “big data”. Journal of Information, Communication and Ethics in Society, 12(2), 93–106. https://doi.org/10.1108/JICES-08-2013-0030.
Mikhailov, D. (2019). A new method for ethical data science. Retrieved from Medium website: https://medium.com/wellcome-data-labs/a-new-method-for-ethical-data-science-edb59e400ae9.
Miller, C., & Coldicott, R. (2019). People, power and technology: The tech workers’ view. Retrieved from Doteveryone website: https://doteveryone.org.uk/report/workersview/.
Mingers, J. (2011). Ethics and OR: Operationalising discourse ethics. European Journal of Operational Research, 210(1), 114–124. https://doi.org/10.1016/j.ejor.2010.11.003.
Mingers, J., & Walsham, G. (2010). Toward ethical information systems: The contribution of discourse ethics. MIS Quarterly: Management Information Systems, 34(4), 855–870.
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency—FAT*’19 (pp. 220–229). https://doi.org/10.1145/3287560.3287596.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679.
Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79, 119.
OECD. (2019a). Forty-two countries adopt new OECD principles on Artificial Intelligence. Retrieved from https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm.
OECD. (2019b). Recommendation of the Council on Artificial Intelligence. Retrieved from https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
Oetzel, M. C., & Spiekermann, S. (2014). A systematic methodology for privacy impact assessments: A design science approach. European Journal of Information Systems, 23(2), 126–150. https://doi.org/10.1057/ejis.2013.18.
Overdorf, R., Kulynych, B., Balsa, E., Troncoso, C., & Gürses, S. (2018). Questioning the assumptions behind fairness solutions. arXiv:1811.11293 [Cs].
Oxborough, C., Cameron, E., Rao, A., Birchall, A., Townsend, A., & Westermann, C. (2018). Explainable AI: Driving business value through greater understanding. Retrieved from PwC website: https://www.pwc.co.uk/audit-assurance/assets/explainable-ai.pdf.
Peters, D., & Calvo, R. A. (2019, May 2). Beyond principles: A process for responsible tech. Retrieved from Medium website: https://medium.com/ethics-of-digital-experience/beyond-principles-a-process-for-responsible-tech-aefc921f7317.
Polykalas, S. E., & Prezerakos, G. N. (2019). When the mobile app is free, the product is your personal data. Digital Policy, Regulation and Governance, 21(2), 89–101. https://doi.org/10.1108/DPRG-11-2018-0068.
Poursabzi-Sangdeh, F., Goldstein, D. G., Hofman, J. M., Vaughan, J. W., & Wallach, H. (2018). Manipulating and measuring model interpretability. arXiv:1802.07810 [Cs].
PwC. (2019). The PwC responsible AI framework. Retrieved from https://www.pwc.co.uk/services/audit-assurance/risk-assurance/services/technology-risk/technology-risk-insights/accelerating-innovation-through-responsible-ai.html.
Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. Retrieved from AINow website: https://ainowinstitute.org/aiareport2018.pdf.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August 12). Local interpretable model-agnostic explanations (LIME): An introduction to a technique to explain the predictions of any machine learning classifier. Retrieved from https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime.
Royakkers, L., Timmer, J., Kool, L., & van Est, R. (2018). Societal and ethical issues of digitization. Ethics and Information Technology, 20(2), 127–142. https://doi.org/10.1007/s10676-018-9452-x.
Russell, C., Kusner, M. J., Loftus, J., & Silva, R. (2017). When worlds collide: Integrating different counterfactual assumptions in fairness. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (pp. 6414–6423). Retrieved from http://papers.nips.cc/paper/7220-when-worlds-collide-integrating-different-counterfactual-assumptions-in-fairness.pdf.
Saltz, J. S., & Dewar, N. (2019). Data science ethical considerations: A systematic literature review and proposed project framework. Ethics and Information Technology. https://doi.org/10.1007/s10676-019-09502-5.
Samuel, A. L. (1960). Some moral and technical consequences of automation—a refutation. Science, 132(3429), 741–742. https://doi.org/10.1126/science.132.3429.741.
Selbst, A. D. (2017). Disparate impact in big data policing. Georgia Law Review, 52(1), 109–196.
Spielkamp, M., Matzat, L., Penner, K., Thummler, M., Thiel, V., Gießler, S., & Eisenhauer, A. (2019). Algorithm Watch 2019: The AI Ethics Guidelines Global Inventory. Retrieved from https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/.
Stahl, B. C., & Wright, D. (2018). Ethics and privacy in AI and big data: Implementing responsible research and innovation. IEEE Security and Privacy, 16(3), 26–33. https://doi.org/10.1109/MSP.2018.2701164.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991.
Turilli, M. (2007). Ethical protocols design. Ethics and Information Technology, 9(1), 49–62. https://doi.org/10.1007/s10676-006-9128-9.
Turilli, M. (2008). Ethics and the practice of software design. In A. Briggle, P. Brey, & K. Waelbers (Eds.), Current issues in computing and philosophy. Amsterdam: IOS Press.
Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11(2), 105–112. https://doi.org/10.1007/s10676-009-9187-9.
Vakkuri, V., Kemell, K.-K., Kultanen, J., Siponen, M., & Abrahamsson, P. (2019). Ethically aligned design of autonomous systems: Industry viewpoint and an empirical study. arXiv:1906.07946 [Cs].
Vaughan, J., & Wallach, H. (2016). The inescapability of uncertainty: AI, uncertainty, and why you should vote no matter what predictions say. Retrieved 4 July 2019, from the Data & Society Points website: https://points.datasociety.net/uncertainty-edd5caf8981b.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005.
Wiener, N. (1961). Cybernetics: Or control and communication in the animal and the machine (2nd ed.). New York: MIT Press.
Winfield, A. (2019, April 18). An updated round up of ethical principles of robotics and AI. Retrieved from http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html.
Yetim, F. (2019). Supporting and understanding reflection on persuasive technology through a reflection schema. In H. Oinas-Kukkonen, K. T. Win, E. Karapanos, P. Karppinen, & E. Kyza (Eds.), Persuasive technology: Development of persuasive and behavior change support systems (pp. 43–51). Cham: Springer.