Adaptive Interfered Fluid Dynamic System Algorithm Based on Deep Reinforcement Learning Framework

  • Conference paper
  • First Online:
Proceedings of 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021)

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 861)

Abstract

In this paper, an adaptive interfered fluid dynamic system (AIFDS) algorithm is proposed for unmanned aerial vehicle (UAV) path planning in dynamic obstacle environments, inspired by the way natural fluid flow avoids rocks. In AIFDS, the UAV is regarded as an agent: through interaction with the environment, it gradually learns how to adjust the flow field so as to plan, in advance, a path that is safe, short, and quick to execute. AIFDS can be combined with almost any reinforcement learning algorithm that has a continuous action space; this paper studies its combination with the SAC, DDPG, PPO, and TD3 algorithms. Experiments are carried out in environments with multiple dynamic obstacles, and the results show that AIFDS performs strongly in terms of path safety.
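
The abstract does not give the paper's exact state, action, or reward definitions, so the following is only a minimal illustrative sketch of the idea it describes: the UAV is an agent whose continuous action sets the reaction coefficients of an interfered-fluid-dynamics flow field (the names rho and sigma, the reward shaping, and all parameter values below are assumptions, not the authors' formulation), and the UAV is then stepped along the modulated flow. Any continuous-action policy (SAC, DDPG, PPO, TD3) could replace the random action used here as a placeholder.

```python
import numpy as np

def initial_flow(position, goal, speed=1.0):
    """Attractive flow pointing from the current position toward the goal."""
    direction = goal - position
    return speed * direction / (np.linalg.norm(direction) + 1e-8)

def interfered_flow(position, goal, obstacles, rho, sigma, speed=1.0):
    """Modulate the attractive flow around spherical obstacles.

    rho, sigma are reaction coefficients chosen by the RL agent (illustrative
    stand-ins for the adaptive parameters learned in AIFDS).
    obstacles is a list of (center, radius) pairs.
    """
    u = initial_flow(position, goal, speed)
    for center, radius in obstacles:
        offset = position - center
        dist = np.linalg.norm(offset) + 1e-8
        normal = offset / dist
        if dist <= radius:                          # inside an obstacle: push straight outward
            return speed * normal
        weight = np.exp(-sigma * (dist - radius))   # influence decays with clearance
        inward = min(np.dot(u, normal), 0.0)        # flow component heading into the obstacle
        u = u - rho * weight * inward * normal      # cancel that component (repulsion)
    return u

def step(position, goal, obstacles, action, dt=0.1):
    """One planning step: the agent's continuous action is (rho, sigma)."""
    rho, sigma = np.clip(action, 0.1, 5.0)
    u = interfered_flow(position, goal, obstacles, rho, sigma)
    new_position = position + dt * u
    # Reward sketch: progress toward the goal minus a penalty for flying close to obstacles.
    progress = np.linalg.norm(position - goal) - np.linalg.norm(new_position - goal)
    clearance = min(np.linalg.norm(new_position - c) - r for c, r in obstacles)
    reward = progress - max(0.0, 0.5 - clearance)
    done = np.linalg.norm(new_position - goal) < 0.2 or clearance < 0.0
    return new_position, reward, done

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos, goal = np.zeros(3), np.array([5.0, 5.0, 2.0])
    obstacles = [(np.array([2.5, 2.5, 1.0]), 1.0)]
    for _ in range(300):
        action = rng.uniform(0.1, 5.0, size=2)  # placeholder for a SAC/DDPG/PPO/TD3 policy
        pos, reward, done = step(pos, goal, obstacles, action)
        if done:
            break
```

In the paper the learned policy adjusts the flow field online; here a random action stands in for that policy purely to keep the sketch self-contained and runnable.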

Acknowledgement

This research has been funded in part by the National Natural Science Foundation of China under Grants 61673042 and 61175084, and by the Aeronautical Science Foundation of China under Grant 2018ZC51031.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zhang, Y., Wang, H. (2022). Adaptive Interfered Fluid Dynamic System Algorithm Based on Deep Reinforcement Learning Framework. In: Wu, M., Niu, Y., Gu, M., Cheng, J. (eds) Proceedings of 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021). ICAUS 2021. Lecture Notes in Electrical Engineering, vol 861. Springer, Singapore. https://doi.org/10.1007/978-981-16-9492-9_139
