Abstract
We are currently witnessing an ever-growing entanglement of intelligent technology with people's everyday lives, creating intersections with ethics, trust, and responsibility. Understanding, implementing, and designing human interactions with these technologies is central to many advanced uses of intelligent and distributed systems and touches on contested concepts such as various forms of agency, shared decision-making, and situational awareness. Numerous guidelines have been proposed to outline points of concern when building ethically acceptable artificial intelligence (AI) systems. However, these guidelines are usually presented as general policies, and it is not obvious how to teach computer science students the critical and reflective thinking needed to assess the social implications of future intelligent technologies. This chapter presents how we used adversarial chatbots to expose computer science students to the importance of ethics and responsible design of AI technologies. We focus on the pedagogical goals, strategy, and course layout, and reflect on how this approach can serve as a blueprint for other educators in broader responsible-innovation contexts, e.g., non-chat AI technologies, robotics, and other human-computer interaction (HCI) themes.
Notes
1. A shorter description of the course was published as a short paper at INTERACT 2021 (the 18th International Conference promoted by the IFIP Technical Committee 13 on Human-Computer Interaction): Weiss, A., Vrecar, R., Zamiechowska, J., & Purgathofer, P. (2021, August). Using the Design of Adversarial Chatbots as a Means to Expose Computer Science Students to the Importance of Ethics and Responsible Design of AI Technologies. In IFIP Conference on Human-Computer Interaction (pp. 331–339). Springer, Cham.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Weiss, A., Vrecar, R., Zamiechowska, J., Purgathofer, P. (2023). It’s Only a Bot! How Adversarial Chatbots can be a Vehicle to Teach Responsible AI. In: Schmidpeter, R., Altenburger, R. (eds) Responsible Artificial Intelligence. CSR, Sustainability, Ethics & Governance. Springer, Cham. https://doi.org/10.1007/978-3-031-09245-9_12
DOI: https://doi.org/10.1007/978-3-031-09245-9_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-09244-2
Online ISBN: 978-3-031-09245-9
eBook Packages: Computer Science (R0)