
Deep reinforcement learning-based framework for constrained any-objective optimization

  • Original Research
  • Published:
Journal of Ambient Intelligence and Humanized Computing

Abstract

Optimization problems arise in many real-world applications and are rarely unconstrained; in practice, most must be treated as constrained optimization problems. With respect to the number of objectives, optimization problems can be categorized as single-objective (one objective), multi-objective (typically two or three), and many-objective (more than three). In this paper, an Any-Objective Optimization (AOO) framework based on Deep Reinforcement Learning (DRL) models is introduced. The term any-objective optimization is coined to reflect the generalized structure of the proposed algorithm, which can solve constrained optimization problems regardless of the number of objectives. To trade off multiple conflicting objectives, reinforcement learning (RL) algorithms can be extended into a framework called Multi-Objective Reinforcement Learning (MORL). By converting a constrained optimization problem into an environment that MORL and deep learning algorithms can explore, any such problem can be tackled. To solve a constrained optimization problem with any number of objective functions, a novel reward function is introduced; the algorithm then performs a heuristic search in the environment to find the optimal solution(s) and builds an archive of Pareto-optimal solutions. The environment is constructed in a modular fashion, so that any RL algorithm with either type of reward function (scalar or vector) can be utilized. To evaluate the proposed algorithm, several popular constrained test problems with continuous variable and objective spaces are considered as illustrative examples, and five widely used DRL algorithms are applied to these case studies. To demonstrate the capabilities of the proposed framework, the obtained results are compared with well-known, structurally similar genetic algorithm (GA)-based single-, multi-, and many-objective optimization algorithms. The results show that the proposed framework can serve as a well-performing baseline for a new type of DRL-based optimization algorithm.
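
As a concrete illustration of the environment construction described above, the following is a minimal Python sketch, not the authors' implementation: a constrained problem is wrapped as an environment whose actions are candidate solutions, whose vector reward is the negated objectives penalized by constraint violation, and which maintains an archive of feasible non-dominated points. All names (AOOEnv, dominates) and the static-penalty scheme are illustrative assumptions rather than details taken from the paper.

import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return bool(np.all(fa <= fb) and np.any(fa < fb))

class AOOEnv:
    """A constrained problem with any number of objectives, wrapped as an
    RL-style environment (hypothetical sketch; names are illustrative)."""

    def __init__(self, objectives, constraints, bounds, penalty=1e3):
        self.objectives = objectives              # callables x -> float, minimized
        self.constraints = constraints            # callables x -> float, g(x) <= 0 feasible
        self.bounds = np.asarray(bounds, float)   # shape (n_vars, 2) box bounds
        self.penalty = penalty                    # assumed static violation penalty
        self.archive = []                         # feasible non-dominated (x, f) pairs

    def step(self, action):
        # Interpret the agent's action as a candidate solution, clipped to bounds.
        x = np.clip(np.asarray(action, float), self.bounds[:, 0], self.bounds[:, 1])
        f = np.array([obj(x) for obj in self.objectives])
        violation = sum(max(0.0, g(x)) for g in self.constraints)
        # Vector reward: negated, penalized objectives (RL agents maximize).
        reward = -(f + self.penalty * violation)
        if violation == 0.0:
            self._update_archive(x, f)
        return x, reward

    def _update_archive(self, x, f):
        # Keep only mutually non-dominated points (approximate Pareto front).
        if any(dominates(fa, f) for _, fa in self.archive):
            return
        self.archive = [(xa, fa) for xa, fa in self.archive if not dominates(f, fa)]
        self.archive.append((x, f))

# Usage on a two-objective, one-constraint toy problem; a random policy
# stands in for the DRL agent that would propose actions in the framework.
env = AOOEnv(
    objectives=[lambda x: float(x[0] ** 2), lambda x: float((x[0] - 2) ** 2)],
    constraints=[lambda x: float(x[0]) - 1.5],    # feasible where x[0] <= 1.5
    bounds=[(-5.0, 5.0)],
)
rng = np.random.default_rng(0)
for _ in range(2000):
    env.step(rng.uniform(-5.0, 5.0, size=1))
print(len(env.archive), "feasible non-dominated solutions archived")

Returning the reward as a vector matches the modular design the abstract claims: a MORL method can consume the per-objective components directly, while a scalar-reward RL algorithm can simply sum them.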

Data availability

Due to privacy and ethical concerns, neither the data nor the source of the data can be made available.

Author information

Corresponding author

Correspondence to Saeed Khodaygan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Honari, H., Khodaygan, S. Deep reinforcement learning-based framework for constrained any-objective optimization. J Ambient Intell Human Comput 14, 9575–9591 (2023). https://doi.org/10.1007/s12652-023-04630-9

  • Received:
  • Accepted:
  • Published:
  • Issue Date:
  • DOI: https://doi.org/10.1007/s12652-023-04630-9

