Abstract
For the task of training an RL agent in a sparse-reward environment with image-based observations, the agent must both learn a good latent representation and follow an effective exploration strategy. Standard approaches such as the variational auto-encoder (VAE) can learn such representations. However, these approaches only encode the input observations into a pre-defined latent distribution and do not take the dynamics of the environment into account. To improve training from high-dimensional input images, we extend the standard VAE framework to learn a compact latent representation that mimics the structure of the underlying Markov decision process. We further add an intrinsic reward based on the learned latent representation to encourage exploratory actions in sparse-reward environments. The intrinsic reward is designed to direct the policy toward distant states in the latent space. Experiments on several gridworld environments with sparse rewards demonstrate the effectiveness of our proposed approach. Compared to other baselines, our method achieves more stable performance and better exploration coverage by exploiting the structure of the learned latent space.
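To make the two components concrete, the sketch below illustrates one plausible reading of the approach: a VAE whose latent space is regularized by a learned transition model so that it reflects the MDP dynamics, plus an intrinsic reward proportional to the distance traveled in latent space. This is a minimal PyTorch sketch, not the authors' implementation; the architecture, the flattened-observation encoder, the loss weights beta and lam, and the names DynamicsAwareVAE and intrinsic_reward are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicsAwareVAE(nn.Module):
    """A VAE whose latent space is additionally shaped by a learned
    transition model, so that latent codes reflect the MDP dynamics.
    Flattened observations and discrete actions are assumed."""

    def __init__(self, obs_dim, num_actions, latent_dim=32):
        super().__init__()
        self.num_actions = num_actions
        # encoder outputs mean and log-variance of q(z | s)
        self.enc = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, obs_dim))
        # latent transition model: (z_t, one-hot a_t) -> predicted z_{t+1}
        self.trans = nn.Sequential(nn.Linear(latent_dim + num_actions, 256),
                                   nn.ReLU(), nn.Linear(256, latent_dim))

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def loss(self, obs, act, next_obs, beta=1.0, lam=1.0):
        mu, logvar = self.encode(obs)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = F.mse_loss(self.dec(z), obs)                  # reconstruction
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # transition-consistency: the predicted next latent should match the
        # encoder's embedding of the observed next state
        a = F.one_hot(act, self.num_actions).float()
        z_next_pred = self.trans(torch.cat([z, a], dim=-1))
        with torch.no_grad():
            mu_next, _ = self.encode(next_obs)
        dyn = F.mse_loss(z_next_pred, mu_next)
        return recon + beta * kl + lam * dyn

def intrinsic_reward(model, obs, next_obs, scale=0.1):
    """Bonus proportional to the distance traveled in latent space,
    rewarding transitions that reach distant latent states."""
    with torch.no_grad():
        mu, _ = model.encode(obs)
        mu_next, _ = model.encode(next_obs)
    return scale * torch.norm(mu_next - mu, dim=-1)
```

During policy optimization, such a bonus would be added to the sparse environment reward for each transition; the weights beta, lam, and scale are placeholders rather than values reported in the paper.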
Acknowledgements
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-22-1-4026.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Le, BG., Hoang, TL., Kieu, HD., Ta, VC. (2023). Structural and Compact Latent Representation Learning on Sparse Reward Environments. In: Nguyen, N.T., et al. Intelligent Information and Database Systems. ACIIDS 2023. Lecture Notes in Computer Science, vol. 13996. Springer, Singapore. https://doi.org/10.1007/978-981-99-5837-5_4
Print ISBN: 978-981-99-5836-8
Online ISBN: 978-981-99-5837-5