The BESTBOT Project
The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that uses text analysis and facial recognition to identify a user's problems and condition and responds to them in a morally appropriate way. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both left room for improvement and advancement, so the BESTBOT project used their findings as the basis for its development and realization. Text analysis and facial recognition, combined with emotion recognition, have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, solutions of this kind bring new problems, especially with regard to privacy and informational autonomy, which information ethics must address.
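The fusion of text analysis with facial emotion recognition described above can be sketched as follows. This is a minimal illustration under stated assumptions: the function and signal names, the fusion rule, and the thresholds are all hypothetical and are not taken from the actual BESTBOT implementation.

```python
# Hypothetical sketch of multimodal problem identification: combine a
# distress score from text analysis with one from facial emotion
# recognition and decide how the chatbot should react. All names and
# thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Signals:
    text_distress: float   # 0.0-1.0, e.g. from keyword/sentiment analysis
    face_distress: float   # 0.0-1.0, e.g. from an emotion-recognition model


def detect_problem(signals: Signals, threshold: float = 0.6) -> str:
    """Fuse both modalities; take the stronger signal so that a
    distressed face can flag a problem the user does not state in text."""
    fused = max(signals.text_distress, signals.face_distress)
    if fused >= threshold:
        return "escalate"  # e.g. display an emergency helpline number
    if abs(signals.text_distress - signals.face_distress) >= 0.4:
        return "probe"     # modalities disagree: ask a gentle follow-up
    return "normal"


print(detect_problem(Signals(text_distress=0.2, face_distress=0.8)))  # escalate
```

Taking the maximum of the two scores is one simple design choice; it reflects the idea that either modality alone may reveal a problem, while the disagreement branch hints at why combining modalities is more informative than text analysis alone.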
Keywords: Machine ethics · Robot ethics · Information ethics · Artificial intelligence · Robotics · Chatbots · Human-computer interaction