Application of Instruction-Based Behavior Explanation to a Reinforcement Learning Agent with Changing Policy

  • Yosuke Fukuchi
  • Masahiko Osawa
  • Hiroshi Yamakawa
  • Michita Imai
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10634)

Abstract

Agents that autonomously acquire their own policies carry a risk of accidents caused by unexpected behavior, so improving the predictability of an agent's behavior is necessary to ensure safety. Instruction-based Behavior Explanation (IBE) is a method by which a reinforcement learning agent announces its future behavior. However, IBE had not been verified to be applicable to an agent that changes its policy dynamically. In this paper, we consider agents under training and improve IBE so that it applies to agents with changing policies. We conducted an experiment to verify whether the behavior explanation model of an immature agent still works after the agent undergoes further training. The results indicate that the improved IBE is applicable to agents under training.
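The core idea, announcing an agent's predicted future behavior in a way that stays valid while its policy is still changing, can be illustrated with a toy sketch. This is an illustrative assumption, not the paper's IBE model: a tabular Q-learning agent on a one-dimensional chain, whose explanation re-queries the current value table to roll out and announce its predicted next states, so the announcement tracks further training. All names here (`step`, `train`, `explain`, `N_STATES`) are hypothetical.

```python
import random

# Hypothetical toy sketch (NOT the paper's IBE model): a tabular Q-learning
# agent on a 1-D chain announces its predicted next states by rolling out
# its *current* greedy policy, so the announcement tracks a changing policy.

N_STATES = 6        # states 0..5; entering state 5 yields reward 1 and ends the episode
ACTIONS = (-1, +1)  # move left / move right

def step(state, action):
    """Deterministic chain dynamics."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def greedy(q, s):
    """Best action under the current value table."""
    return max(ACTIONS, key=lambda a: q[(s, a)])

def train(episodes, q=None, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Q-learning; pass an existing table to resume training (policy keeps changing)."""
    rng = random.Random(seed)
    if q is None:  # optimistic initialization drives early exploration
        q = {(s, a): 1.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.choice(ACTIONS) if rng.random() < eps else greedy(q, s)
            nxt, r, done = step(s, a)
            target = r if done else r + gamma * max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = nxt
    return q

def explain(q, s, k=3):
    """Announce the next k states the agent expects to visit. Re-querying q
    on every call keeps the explanation in sync with a policy that is
    still being trained."""
    announced = []
    for _ in range(k):
        s, _, done = step(s, greedy(q, s))
        announced.append(s)
        if done:
            break
    return announced

q = train(300)                 # immature agent
print(explain(q, 0, k=5))      # announced future states
q = train(300, q=q, seed=1)    # further training may change the policy
print(explain(q, 0, k=5))      # explanation reflects the updated policy
```

The design point mirrors the paper's concern: because `explain` is evaluated against the agent's current policy rather than a frozen snapshot, the announcement remains meaningful after additional training changes that policy.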

Keywords

Reinforcement learning · Instruction-based behavior explanation (IBE)


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Yosuke Fukuchi (1)
  • Masahiko Osawa (1, 2)
  • Hiroshi Yamakawa (3, 4)
  • Michita Imai (1)
  1. Graduate School of Science and Technology, Keio University, Yokohama-shi, Japan
  2. Japan Society for the Promotion of Science, Tokyo, Japan
  3. Dwango Artificial Intelligence Laboratory, Tokyo, Japan
  4. The Whole Brain Architecture Initiative, Tokyo, Japan