Soft Actor-Critic-Based Continuous Control Optimization for Moving Target Tracking

  • Tao Chen
  • Xingxing Ma
  • Shixun You (corresponding author)
  • Xiaoli Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11902)

Abstract

In the field of cognitive electronic warfare (CEW), the ability of an unmanned combat aerial vehicle (UCAV) to track moving targets is a prerequisite for an effective attack on the enemy. However, most traditional target-tracking methods combine intelligent algorithms with filtering algorithms, which leads to discontinuous UCAV flight motion and limits their application in CEW. This paper proposes a continuous control optimization method for moving target tracking based on the soft actor-critic (SAC) algorithm. By adopting SAC, deep reinforcement learning is introduced into the training of moving target tracking. Simulation analysis is carried out in our environment, named Explorer: when the UCAV operation cycle is 0.4 s, after about 2000 training iterations the success rate of UCAV target tracking exceeds 92.92%, and the tracking performance is improved compared with the benchmark.
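The core of SAC, as used for continuous control here, is an entropy-regularized Bellman target: the critic regresses toward the reward plus the discounted soft value of the next state, where the soft value subtracts a temperature-weighted log-probability of the policy's action from the minimum of two Q-estimates. The sketch below illustrates only that target computation with a Gaussian policy; the function names, the fixed temperature `ALPHA`, and the scalar Q-values are illustrative assumptions, not the paper's implementation.

```python
import math
import random

ALPHA = 0.2   # entropy temperature (assumed fixed for this sketch)
GAMMA = 0.99  # discount factor

def gaussian_sample(mu, sigma):
    """Sample an action from N(mu, sigma^2) and return its log-probability."""
    a = random.gauss(mu, sigma)
    logp = -0.5 * ((a - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return a, logp

def sac_critic_target(reward, next_q1, next_q2, next_logp, done):
    """Soft Bellman target: r + gamma * (min(Q1, Q2) - alpha * log pi(a'|s'))."""
    soft_value = min(next_q1, next_q2) - ALPHA * next_logp
    return reward + GAMMA * (1.0 - done) * soft_value
```

In a full agent this target would be computed per transition from a replay buffer, with `next_q1`/`next_q2` produced by target Q-networks and `next_logp` by the current stochastic actor; the entropy term is what encourages the exploratory, smooth control behavior the abstract attributes to SAC.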

Keywords

Cognitive electronic warfare · Target tracking · SAC

Notes

Acknowledgment

This work is funded by the International Exchange Program of Harbin Engineering University for Innovation-oriented Talents Cultivation.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Tao Chen (1)
  • Xingxing Ma (1)
  • Shixun You (1) (corresponding author)
  • Xiaoli Zhang (1)

  1. College of Information and Communication Engineering, Harbin Engineering University, Heilongjiang, China