Tracking the Race Between Deep Reinforcement Learning and Imitation Learning

Part of the Lecture Notes in Computer Science book series (LNTCS, volume 12289)

Abstract

Learning-based approaches for solving large sequential decision-making problems have become popular in recent years. The resulting agents behave differently, and their characteristics depend on the underlying learning approach. Here, we consider a benchmark planning problem from the reinforcement learning domain, the Racetrack, to investigate the properties of agents derived from different deep (reinforcement) learning approaches. We compare deep supervised learning, in particular imitation learning, to deep reinforcement learning on the Racetrack model. We find that imitation learning yields agents that follow riskier paths, whereas agents trained via deep reinforcement learning act with more foresight, i.e., they avoid states in which fatal decisions are more likely. Our evaluation shows that, for this sequential decision-making problem, deep reinforcement learning performs best in many respects, even though the imitation learner is trained on optimal decisions.
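
To make the studied contrast concrete, the following minimal PyTorch sketch (our illustration, not the authors' code) places the two training signals side by side: an imitation learner minimizes a cross-entropy loss against optimal actions supplied by an expert, while a deep Q-learner minimizes a temporal-difference error bootstrapped from its own value estimates. The state encoding, network shape, and synthetic batch are assumptions made for illustration; the nine actions correspond to Racetrack's standard acceleration choices (both axes in {-1, 0, 1}). Replay buffers and target networks, which a practical deep Q-learning setup would add, are omitted for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative dimensions: Racetrack states encoded as (position, velocity)
    # pairs, nine acceleration actions. The encoding is an assumption, not the paper's.
    state_dim, n_actions = 4, 9
    net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

    # Synthetic batch standing in for sampled Racetrack transitions (s, a, r, s').
    batch = 32
    s = torch.randn(batch, state_dim)            # states
    a = torch.randint(0, n_actions, (batch,))    # expert-optimal (IL) or taken (RL) actions
    r = torch.randn(batch)                       # rewards
    s_next = torch.randn(batch, state_dim)       # successor states
    done = torch.zeros(batch)                    # 1.0 where the episode terminated

    # (a) Imitation learning: plain supervised classification of the optimal action.
    il_loss = F.cross_entropy(net(s), a)

    # (b) Deep Q-learning: regress Q(s, a) toward the bootstrapped TD target.
    gamma = 0.99
    q_sa = net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        td_target = r + gamma * (1.0 - done) * net(s_next).max(dim=1).values
    rl_loss = F.mse_loss(q_sa, td_target)

    # Train one objective or the other, depending on the agent being built.
    loss = il_loss  # or: rl_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The difference visible here is the source of the training signal: imitation learning copies a fixed expert label per state, while reinforcement learning propagates reward information through successor states, which is one plausible reading of why its decisions turn out more foresighted.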

Keywords

  • Deep reinforcement learning
  • Imitation learning

Acknowledgements

This work has been partially funded by DFG grant 389792660 as part of TRR 248 (see https://perspicuous-computing.science).

Author information

Correspondence to Timo P. Gros.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Gros, T.P., Höller, D., Hoffmann, J., Wolf, V. (2020). Tracking the Race Between Deep Reinforcement Learning and Imitation Learning. In: Gribaudo, M., Jansen, D.N., Remke, A. (eds) Quantitative Evaluation of Systems. QEST 2020. Lecture Notes in Computer Science, vol. 12289. Springer, Cham. https://doi.org/10.1007/978-3-030-59854-9_2

  • DOI: https://doi.org/10.1007/978-3-030-59854-9_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59853-2

  • Online ISBN: 978-3-030-59854-9

  • eBook Packages: Computer Science, Computer Science (R0)