
Ethics, the Only Safeguard Against the Possible Negative Impacts of Autonomous Robots?

  • Rodolphe Gelin
Chapter
Part of the Intelligent Systems, Control and Automation: Science and Engineering book series (ISCA, volume 95)

Abstract

Companion robots will become ever closer to us and will share our private lives. This proximity will raise ethical problems that technology alone will probably be unable to solve. Even if research is exploring how ethical rules can be implemented in a robot's cognitive architecture, will the ethics implemented by the developer match the ethics of the user? In this paper, we propose a pragmatic approach to this question by focusing on responsibility: if a robot misbehaves, who is responsible? And, even more pragmatically, who will pay for any damage it causes?

Keywords

Ethics · Responsibility · Companion robot · Regulation


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Innovation, SoftBank Robotics Europe, Paris, France
