
Inverse Reinforcement Learning Application for Discrete and Continuous Environments

  • Conference paper
  • In: AETA 2019 - Recent Advances in Electrical Engineering and Related Sciences: Theory and Application (AETA 2019)

Abstract

We present an application of inverse reinforcement learning (IRL) in discrete and continuous environments based on the apprenticeship learning approach. The objective is to learn a mathematical description of a task from trajectories demonstrated by an expert agent. To achieve this, there must be feature functions that describe abilities the agent can learn; the description of a task can therefore be formulated as a linear combination of those functions. This learning method was applied in OpenAI Gym environments to show the learning process of a task from demonstrations.
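
For concreteness, the sketch below implements the projection variant of apprenticeship learning via IRL (Abbeel and Ng), which recovers the weights of such a linear task description (the reward) from expert trajectories. It is a minimal illustration under stated assumptions, not the authors' implementation: the feature map phi, the trajectory format (a list of per-episode state sequences), and the helper names feature_expectations and projection_step are hypothetical choices made for this example.

    import numpy as np

    def feature_expectations(trajectories, phi, gamma=0.99):
        """Empirical discounted feature expectations: the average over
        trajectories of sum_t gamma^t * phi(s_t)."""
        mu = None
        for traj in trajectories:
            for t, state in enumerate(traj):
                f = (gamma ** t) * phi(state)
                mu = f if mu is None else mu + f
        return mu / len(trajectories)

    def projection_step(mu_expert, mu_new, mu_bar):
        """One iteration of the projection method. mu_expert holds the
        expert's feature expectations, mu_new those of the most recent
        learned policy, and mu_bar the running projection of mu_expert
        onto the hull of previous policies' feature expectations.
        Returns the reward weights w, the updated projection, and the
        margin t = ||mu_expert - mu_bar||."""
        diff = mu_new - mu_bar
        # Projection step size, clipped to the segment [mu_bar, mu_new];
        # the small epsilon guards against a zero denominator.
        alpha = np.clip(diff @ (mu_expert - mu_bar) / (diff @ diff + 1e-12),
                        0.0, 1.0)
        mu_bar = mu_bar + alpha * diff
        w = mu_expert - mu_bar
        return w, mu_bar, np.linalg.norm(w)

In the full loop, w defines the reward r(s) = w · phi(s); any reinforcement learning solver (for instance, value iteration in the discrete case or a policy-gradient method in the continuous one) computes a policy for that reward, that policy's feature expectations feed the next projection step, and iteration stops once the margin falls below a chosen tolerance.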




Author information

Corresponding author

Correspondence to Carolina Higuera.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Suarez, Y., Higuera, C., Camacho, E.C. (2021). Inverse Reinforcement Learning Application for Discrete and Continuous Environments. In: Cortes Tobar, D., Hoang Duy, V., Trong Dao, T. (eds) AETA 2019 - Recent Advances in Electrical Engineering and Related Sciences: Theory and Application. AETA 2019. Lecture Notes in Electrical Engineering, vol 685. Springer, Cham. https://doi.org/10.1007/978-3-030-53021-1_35
