Abstract
Evolution Strategy (ES) is a potent black-box optimization technique based on natural evolution. A key step in each ES iteration is the ranking of candidate solutions based on some fitness score. In the Reinforcement Learning (RL) context, this step entails evaluating several policies. Presently, this evaluation is done via on-policy approaches: each policy’s score is estimated by interacting several times with the environment using that policy. Such ideas lead to wasteful interactions since, once the ranking is done, only the data associated with the top-ranked policies are used for subsequent learning. To improve sample efficiency, we introduce a novel off-policy ranking approach using a local approximation for the fitness function. We demonstrate our idea for two leading ES methods: Augmented Random Search (ARS) and Trust Region Evolution Strategy (TRES). MuJoCo simulations show that, compared to the original methods, our off-policy variants have similar running times for reaching reward thresholds but need only around 70% as much data on average. In fact, in some tasks like HalfCheetah-v3 and Ant-v3, we need just 50% as much data. Notably, our method supports extensive parallelization, enabling our ES variants to be significantly faster than popular non-ES RL methods like TRPO, PPO, and SAC.
ESR was supported by the Prime Minister’s Research Fellowship (PMRF). SK was supported by the SERB Core Research Grant CRG/2021/008115. GT was supported in part by DST-SERB’s Core Research Grant CRG/2021/00833, in part by IISc Start-up grants SG/MHRD-19-0054 and SR/MHRD-19-0040, and in part by the “Pratiksha Trust Young Investigator” award.
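To fix ideas, the following minimal NumPy sketch shows one ARS-style ES iteration built around the ranking step described in the abstract. Everything here is illustrative: the scalar environment, the rollout, and the hyperparameter values are toy stand-ins, and the on-policy evaluation marked in the comments is exactly the step that our off-policy ranking replaces.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, T=200):
    """Toy stand-in for an environment episode: a scalar linear system
    controlled by the linear policy u = -theta[0] * x; returns total reward."""
    x, total = 1.0, 0.0
    for _ in range(T):
        u = -theta[0] * x
        x = 1.01 * x + 0.1 * u + 0.01 * rng.standard_normal()
        total -= x * x + 0.01 * u * u
    return total

def es_iteration(theta, sigma=0.05, n_dirs=8, top_k=4, lr=0.02):
    """One ARS-style iteration with on-policy ranking."""
    deltas = [rng.standard_normal(theta.shape) for _ in range(n_dirs)]
    # On-policy evaluation: 2 * n_dirs fresh episodes are spent just to
    # rank the directions; the data behind the (n_dirs - top_k) discarded
    # directions is wasted. This is the step off-policy ranking replaces.
    scored = [(rollout(theta + sigma * d), rollout(theta - sigma * d), d)
              for d in deltas]
    scored.sort(key=lambda s: max(s[0], s[1]), reverse=True)  # rank
    top = scored[:top_k]
    # As in ARS, scale the step by the standard deviation of the returns
    # of the surviving directions.
    sigma_r = np.std([r for rp, rm, _ in top for r in (rp, rm)]) + 1e-8
    grad = sum((rp - rm) * d for rp, rm, d in top) / top_k
    return theta + (lr / sigma_r) * grad

theta = np.zeros(1)
for _ in range(50):
    theta = es_iteration(theta)
```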
References
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., Zaremba, W.: OpenAI Gym. arXiv preprint arXiv:1606.01540 (2016)
Duan, Y., Chen, X., Houthooft, R., Schulman, J., Abbeel, P.: Benchmarking deep reinforcement learning for continuous control. In: International Conference on Machine Learning, pp. 1329–1338. PMLR (2016)
Eshwar, S., Kolathaya, S., Thoppe, G.: Improving sample efficiency in evolutionary RL using off-policy ranking. arXiv preprint arXiv:2208.10583 (2023)
Haarnoja, T., Zhou, A., Abbeel, P., Levine, S.: Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In: International Conference on Machine Learning, pp. 1861–1870. PMLR (2018)
Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., Meger, D.: Deep reinforcement learning that matters. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)
Kakade, S., Langford, J.: Approximately optimal approximate reinforcement learning. In: Proceedings of the 19th International Conference on Machine Learning. Citeseer (2002)
Kallus, N., Uehara, M.: Doubly robust off-policy value and gradient estimation for deterministic policies. Advances in Neural Information Processing Systems 33 (2020)
Kallus, N., Zhou, A.: Policy evaluation and optimization with continuous treatments. In: International Conference on Artificial Intelligence and Statistics, pp. 1243–1251. PMLR (2018)
Lagoudakis, M.G., Parr, R.: Model-free least-squares policy iteration. Advances in Neural Information Processing Systems 14 (2001)
Li, Z., Lin, X., Zhang, Q., Liu, H.: Evolution strategies for continuous optimization: A survey of the state-of-the-art. Swarm Evol. Comput. 56, 100694 (2020)
Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., Wierstra, D.: Continuous control with deep reinforcement learning. In: ICLR (Poster) (2016)
Liu, G., Zhao, L., Yang, F., Bian, J., Qin, T., Yu, N., Liu, T.Y.: Trust region evolution strategies. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 4352–4359 (2019)
Mania, H., Guy, A., Recht, B.: Simple random search of static linear policies is competitive for reinforcement learning. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 1805–1814 (2018)
Matyas, J.: Random optimization. Autom. Remote. Control. 26(2), 246–253 (1965)
Pourchot, A., Sigaud, O.: CEM-RL: Combining evolutionary and gradient-based methods for policy search. arXiv preprint arXiv:1810.01222 (2018)
Rajeswaran, A., Lowrey, K., Todorov, E., Kakade, S.: Towards generalization and simplicity in continuous control. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 6553–6564 (2017)
Salimans, T., Ho, J., Chen, X., Sidor, S., Sutskever, I.: Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864 (2017)
Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust region policy optimization. In: International Conference on Machine Learning, pp. 1889–1897. PMLR (2015)
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)
Todorov, E., Erez, T., Tassa, Y.: MuJoCo: A physics engine for model-based control. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE (2012)
Zhang, Y., Ross, K.W.: On-policy deep reinforcement learning for the average-reward criterion. In: International Conference on Machine Learning, pp. 12535–12545. PMLR (2021)
Appendices
A Differences Between the Original ARS and Our Off-Policy Variant
In this section, we summarize the key differences between the original ARS and our off-policy variant OP-ARS. Table 4 lists the exact steps in which the two algorithms differ, along with the implications of each difference.
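Table 4 appears in the paper itself; as a schematic of the one step that changes, the fragment below contrasts the two evaluation loops. All names (`rollout`, `collect`, `op_score`) are illustrative placeholders, and the choice of behavior policy shown is an assumption made for this sketch, not necessarily the one OP-ARS makes.

```python
# Original ARS (schematic): every one of the 2 * len(deltas) perturbed
# policies interacts with the environment just to be ranked.
def ars_scores(theta, deltas, sigma, rollout):
    return [(rollout(theta + sigma * d), rollout(theta - sigma * d))
            for d in deltas]

# OP-ARS (schematic): only n_b behavior-policy episodes touch the
# environment; every perturbed policy is then scored from that shared
# data with an off-policy estimator (see the kernel sketch in Appendix B).
def op_ars_scores(theta, deltas, sigma, collect, op_score, n_b):
    # Assumption for illustration: the behavior policy is built from the
    # current policy theta (e.g., a noisy version of it).
    data = [collect(theta) for _ in range(n_b)]
    return [(op_score(theta + sigma * d, data),
             op_score(theta - sigma * d, data))
            for d in deltas]
```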
B Hyperparameters
In this section, we describe the hyperparameters used in our experiments. ARS and TRES come with predefined hyperparameters that were fine-tuned in the corresponding papers, so we reuse those values in our algorithms for most environments. Our off-policy variants OP-ARS and OP-TRES introduce two new hyperparameters: the number of trajectories \(n_b\) run with the behavior policy, and the bandwidth \(h\) of the kernel function. As part of our hyperparameter search, we experiment with the values of \(n_b\) and \(h\) listed in Table 5. The best-performing values, shown in bold, are used to generate the results in Table 1 and Fig. 1, as well as the plots in the top row of Fig. 3.
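For concreteness, here is a minimal sketch of the kind of kernel-based off-policy return estimate in which \(n_b\) and \(h\) appear: a per-decision importance-sampling estimator that smooths the deterministic target policy with a Gaussian kernel of bandwidth \(h\), in the spirit of [7, 8]. The exact local approximation used by OP-ARS and OP-TRES is given in the paper; every name below is illustrative.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel K(u)."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def off_policy_return(traj, pi_target, behavior_pdf, h, gamma=0.99):
    """Kernel-smoothed per-decision importance-sampling estimate of a
    deterministic policy's return from one behavior-policy trajectory.

    traj:         list of (state, action, reward) tuples (1-D actions).
    pi_target:    deterministic target policy, state -> action.
    behavior_pdf: density of the behavior policy, (action, state) -> float.
    h:            kernel bandwidth (one of the two new hyperparameters).
    """
    total, weight = 0.0, 1.0
    for t, (s, a, r) in enumerate(traj):
        # The kernel relaxes the Dirac delta of the deterministic target:
        # logged actions within roughly h of pi_target(s) receive weight.
        weight *= gaussian_kernel((pi_target(s) - a) / h) / (h * behavior_pdf(a, s))
        total += weight * (gamma ** t) * r
    return total

# Usage: average the estimate over the n_b behavior-policy trajectories,
# e.g. np.mean([off_policy_return(tr, pi, b_pdf, h) for tr in data]).
```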
C LQR Experiments
In this section, we report the results of our experiments on the Linear Quadratic Regulator (LQR) environment. As discussed in Sect. 4.3 of [13], MuJoCo robotic tasks have an inherent limitation: their optimal policies are unknown, so there is no way to tell how far a learned policy is from the optimum. A natural remedy is to run the algorithms on simple, well-understood environments whose optimal policies are known. For this reason, [13] chose the LQR, whose dynamics are known, as the benchmarking environment; further details on this environment can be found in [3, Appendix D.2].
We use the same framework as [13] to compare our approach against model-based nominal control, LSPI [9], and ARS [13]. As shown in [13], the nominal method is orders of magnitude more sample efficient than LSPI and ARS, which indicates substantial room for improvement. Our experiments corroborate this: our method outperforms ARS both in sample efficiency (Fig. 4a) and in stability frequency, i.e., how often the learned controller is stabilizing (Fig. 4b).
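To make the benchmark concrete, the sketch below sets up an illustrative LQR instance together with the two quantities behind Fig. 4: the nominal (optimal) gain obtained from the discrete Riccati equation, and the stability check underlying the stability-frequency metric. The matrices are placeholders for a small unstable system, not necessarily the exact instance from [13] or [3, Appendix D.2].

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder LQR instance: dynamics x_{t+1} = A x_t + B u_t with
# per-step cost x_t^T Q x_t + u_t^T R u_t (reward = negative cost).
A = np.array([[1.01, 0.01, 0.00],
              [0.01, 1.01, 0.01],
              [0.00, 0.01, 1.01]])   # open loop is mildly unstable
B = np.eye(3)
Q = 1e-3 * np.eye(3)
R = np.eye(3)

# Nominal control: solve the discrete algebraic Riccati equation and
# form the optimal gain K, so that u_t = -K x_t is the optimal policy.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def is_stabilizing(K_hat):
    """A linear policy u = -K_hat x stabilizes the system iff the
    closed-loop matrix A - B K_hat has spectral radius below 1; the
    fraction of runs for which this holds is the stability frequency."""
    return np.max(np.abs(np.linalg.eigvals(A - B @ K_hat))) < 1.0

print(is_stabilizing(K))  # the nominal gain is stabilizing: True
```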