
Fluid dynamic control and optimization using deep reinforcement learning

  • Review
  • Published in JMST Advances

Abstract

This paper reviews recent research on applying deep reinforcement learning to fluid dynamics. Reinforcement learning is a technique in which an agent autonomously learns optimal action strategies through interaction with its environment, mimicking human learning mechanisms. Combined with artificial intelligence technology, it offers a new direction for fluid dynamic control and optimization, which have been challenging because of the nonlinear, high-dimensional characteristics of fluid flow. In the section on fluid dynamic control, control strategies for drag reduction and research on controlling biological motion are reviewed. The optimization section focuses on shape optimization and on automating computational fluid dynamics workflows. Current challenges and possible future developments are also discussed.
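The agent–environment interaction loop described above can be sketched with a minimal, hypothetical example. The tabular Q-learning agent and the toy "drag level" environment below are illustrative assumptions for exposition only, not the deep-learning methods or flow solvers surveyed in this review:

```python
import random

# Toy "flow" environment (an illustrative assumption, not from the paper):
# the state is a discrete drag level in {0, ..., 4}; action 1 ("actuate jet")
# lowers drag by one level, action 0 does nothing. Reward is negative drag,
# and an episode ends when drag reaches zero.
class ToyFlowEnv:
    def reset(self):
        self.drag = 4
        return self.drag

    def step(self, action):
        if action == 1 and self.drag > 0:
            self.drag -= 1
        return self.drag, -self.drag, self.drag == 0

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    # Tabular Q-learning: the agent improves its action-value estimates
    # purely from interaction, with no model of the environment.
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    env = ToyFlowEnv()
    for _ in range(episodes):
        s = env.reset()
        for _ in range(50):  # step cap per episode
            if rng.random() < eps:            # explore
                a = rng.choice((0, 1))
            else:                             # exploit current estimates
                a = max((0, 1), key=lambda a: q[(s, a)])
            s2, r, done = env.step(a)
            # One-step temporal-difference update toward the bootstrapped target
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if done:
                break
    return q

q = train()
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(1, 5)}
print(policy)  # the learned policy should actuate (action 1) in every nonzero-drag state
```

In the deep variants reviewed here, the lookup table `q` is replaced by a neural network and the toy environment by a flow simulation or experiment, but the interaction loop has the same structure.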


Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Acknowledgements

This study was conducted with the support of the National Research Foundation of Korea (NRF-2021R1A2C2092146) and the Samsung Future Technology Development Program (SRFC-TB1703-51).

Author information

Corresponding author

Correspondence to Donghyun You.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kim, I., You, D. Fluid dynamic control and optimization using deep reinforcement learning. JMST Adv. 6, 61–65 (2024). https://doi.org/10.1007/s42791-024-00067-z

