Latent-Conditioned Policy Gradient for Multi-Objective Deep Reinforcement Learning

  • Conference paper

Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14259)

Abstract

Sequential decision making in the real world often requires finding a good balance between conflicting objectives. In general, there exists a plethora of Pareto-optimal policies that embody different patterns of compromise between the objectives, and it is technically challenging to obtain them exhaustively using deep neural networks. In this work, we propose a novel multi-objective reinforcement learning (MORL) algorithm that trains a single neural network via policy gradient to approximately obtain the entire Pareto set in a single training run, without relying on linear scalarization of the objectives. The proposed method works in both continuous and discrete action spaces without any change to the design of the policy network. Numerical experiments demonstrate the practicality and efficacy of our approach in comparison to standard MORL baselines.
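
The core idea outlined in the abstract is a single policy network that is additionally conditioned on a latent variable, so that different latent draws realize different trade-offs between the objectives. The sketch below is a purely illustrative, hypothetical rendering of such latent conditioning combined with a REINFORCE-style update in PyTorch; every name and design choice here (network shape, latent prior, the scalar `utility` placeholder) is an assumption for illustration, not the authors' implementation. In particular, how the paper scores vector-valued returns without linear scalarization is not reproduced here.

```python
# Hypothetical sketch (not the authors' code): a latent-conditioned policy
# trained with a REINFORCE-style update. The latent z, drawn once per
# episode, indexes a family of trade-off behaviors within one network.
import torch
import torch.nn as nn

class LatentConditionedPolicy(nn.Module):
    def __init__(self, state_dim: int, latent_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # State and latent are simply concatenated at the input layer, so the
        # same architecture serves discrete actions here and, with a Gaussian
        # head in place of logits, continuous actions as well.
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor, z: torch.Tensor):
        # One set of weights encodes many policies, indexed by z.
        return torch.distributions.Categorical(logits=self.net(torch.cat([state, z], dim=-1)))

def reinforce_step(policy, optimizer, trajectory, utility: float):
    """One policy-gradient step on a single episode.

    `trajectory` is a list of (state, z, action) tensors; `utility` is a
    scalar score assigned to the episode's *vector* return by some
    Pareto-aware criterion -- a placeholder, not the paper's objective.
    """
    log_probs = torch.stack([policy(s, z).log_prob(a) for s, z, a in trajectory])
    loss = -utility * log_probs.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Usage sketch: draw a fresh latent per episode, roll it out, score it.
policy = LatentConditionedPolicy(state_dim=4, latent_dim=2, n_actions=3)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
z = torch.rand(2)       # latent prior: uniform on [0, 1]^2 (an assumption)
state = torch.zeros(4)  # stand-in for an environment observation
action = policy(state, z).sample()
reinforce_step(policy, optimizer, [(state, z, action)], utility=1.0)
```

Drawing z once per episode and holding it fixed for the rollout is what would let a single set of weights cover a spectrum of trade-offs; at deployment time, sweeping z traces out the approximated Pareto set.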


Author information

Correspondence to Takuya Kanazawa.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kanazawa, T., Gupta, C. (2023). Latent-Conditioned Policy Gradient for Multi-Objective Deep Reinforcement Learning. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14259. Springer, Cham. https://doi.org/10.1007/978-3-031-44223-0_6

  • DOI: https://doi.org/10.1007/978-3-031-44223-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44222-3

  • Online ISBN: 978-3-031-44223-0

  • eBook Packages: Computer Science, Computer Science (R0)
