
Improvements to DQN

Chapter in The Art of Reinforcement Learning

Abstract

The DQN agent's human-level performance on Atari games marked a major breakthrough in artificial intelligence and reinforcement learning. Since then, extensive research and development has built on DQN's foundations, most notably by enhancing the network architecture and by making better use of experience replay samples.
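
To make these two threads concrete, below is a minimal PyTorch sketch of a dueling network head (an architecture-side improvement) and proportional prioritized sampling from a replay buffer (a replay-utilization improvement). It is illustrative only: every name in it (DuelingHead, sample_prioritized, feature_dim, alpha, beta) is an assumption for this sketch, not code from the chapter.

```python
import torch
import torch.nn as nn

# Illustrative sketch only; all names here are hypothetical placeholders,
# not the chapter's actual code.


class DuelingHead(nn.Module):
    """Dueling architecture head: separate state-value and advantage streams
    are combined into Q-values, an example of improving the network itself."""

    def __init__(self, feature_dim: int, num_actions: int):
        super().__init__()
        self.value = nn.Linear(feature_dim, 1)
        self.advantage = nn.Linear(feature_dim, num_actions)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        v = self.value(features)       # shape (batch, 1)
        a = self.advantage(features)   # shape (batch, num_actions)
        # Subtract the mean advantage so the value/advantage split is identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


def sample_prioritized(priorities: torch.Tensor, batch_size: int,
                       alpha: float = 0.6, beta: float = 0.4):
    """Proportional prioritized sampling: transitions with larger priorities
    (e.g., TD errors) are replayed more often, with importance-sampling
    weights correcting the bias introduced by non-uniform sampling."""
    probs = priorities.pow(alpha)
    probs = probs / probs.sum()
    indices = torch.multinomial(probs, batch_size, replacement=False)
    weights = (len(priorities) * probs[indices]).pow(-beta)
    weights = weights / weights.max()  # normalize for numerical stability
    return indices, weights
```

In a typical training loop, the returned importance weights would scale each sampled transition's TD-error loss before the gradient step, and the stored priorities would be refreshed from the new TD errors afterward.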




Copyright information

© 2023 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature


Cite this chapter

Hu, M. (2023). Improvements to DQN. In: The Art of Reinforcement Learning. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-9606-6_8
