Abstract
In many sequential decision-making problems, progress is driven predominantly by artificial data sets, largely because access to real data is insufficient. Here we propose to mitigate this by using generative adversarial networks (GANs) to generate representative data sets from real data. Specifically, we investigate how GANs can generate training data for reinforcement learning (RL) problems. We distinguish structural properties (does the generated data follow the distribution of the original data?) from functional properties (do policies evaluate differently on generated data than on real data?), and we show that with a relatively small number of data points (a few thousand) we can train GANs that generate representative data for classical control RL environments.
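As a concrete illustration of the pipeline the abstract describes, the minimal sketch below trains a vanilla GAN (Goodfellow et al., 2014) on a few thousand transition tuples collected from the classic CartPole control task in OpenAI Gym, then samples synthetic tuples. The choice of CartPole, the random behaviour policy, the network sizes, the hyperparameters, and the classic Gym API are our assumptions for illustration, not details taken from the paper.

# Sketch only: vanilla GAN over (s, a, r, s') tuples from CartPole.
# Assumes the classic gym (<0.26) reset/step API and PyTorch.
import gym
import numpy as np
import torch
import torch.nn as nn

# 1. Collect a few thousand transition tuples with a random policy.
env = gym.make("CartPole-v1")
data = []
obs = env.reset()
while len(data) < 5000:
    action = env.action_space.sample()
    next_obs, reward, done, _ = env.step(action)
    data.append(np.concatenate([obs, [action, reward], next_obs]))
    obs = env.reset() if done else next_obs
real = torch.tensor(np.array(data), dtype=torch.float32)
dim = real.shape[1]  # 4 + 1 + 1 + 4 = 10 for CartPole

# 2. Small fully connected generator and discriminator (sizes assumed).
noise_dim = 16
G = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, dim))
D = nn.Sequential(nn.Linear(dim, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

# 3. Standard alternating GAN updates.
for step in range(2000):
    batch = real[torch.randint(len(real), (128,))]
    fake = G(torch.randn(128, noise_dim))
    # Discriminator: push real tuples toward 1, generated tuples toward 0.
    d_loss = (bce(D(batch), torch.ones(128, 1))
              + bce(D(fake.detach()), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# 4. Sample synthetic transitions for downstream RL training/evaluation.
synthetic = G(torch.randn(1000, noise_dim)).detach()

The generated tuples can then be compared to the real ones structurally (e.g., via a divergence measure between the two distributions) and functionally (e.g., by checking whether policy evaluation yields similar results on both), mirroring the two notions of representativity the abstract distinguishes.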
Cite this paper
el Hassouni, A., Hoogendoorn, M., Eiben, A.E., Muhonen, V. (2020). Structural and Functional Representativity of GANs for Data Generation in Sequential Decision Making. In: Nicosia, G., et al. (eds.) Machine Learning, Optimization, and Data Science. LOD 2020. Lecture Notes in Computer Science, vol. 12565. Springer, Cham. https://doi.org/10.1007/978-3-030-64583-0_41
Print ISBN: 978-3-030-64582-3
Online ISBN: 978-3-030-64583-0