
Planar Evasive Aircraft Maneuvers Using Reinforcement Learning

Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 193)

Abstract

In this paper, a reinforcement learning technique is proposed to implement evasive strategies for an aircraft during engagement. A simplified point-mass model is used to describe the aircraft and missile equations of motion. The missile attacks the aircraft under the pure proportional navigation guidance (PPNG) law. The Q-learning algorithm, a form of reinforcement learning, is employed to learn the evasive maneuvers. The performance of the proposed approach is analyzed with numerical simulations, which show that the aircraft evades the missile effectively by reinforcement learning with bang-bang type action profiles.
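
The paper's full point-mass dynamics, reward function, and state discretization are not reproduced on this page. Purely as an illustrative sketch of the setup the abstract describes (tabular Q-learning over a planar aircraft/missile engagement, with the missile guided by pure proportional navigation and the aircraft restricted to a bang-bang action set), a minimal Python version might look as follows. All numerical values, the reward shaping, and the state discretization below are assumptions made for the sketch, not values taken from the paper.

```python
import numpy as np

# Illustrative constants; the paper's actual model parameters, reward, and
# discretization are not given here, so these values are assumptions.
DT = 0.1                   # integration step [s]
V_A, V_M = 250.0, 600.0    # aircraft / missile speeds [m/s]
A_A, A_M = 70.0, 300.0     # lateral acceleration limits [m/s^2]
NAV_GAIN = 3.0             # PPN navigation constant
KILL_RADIUS = 100.0        # intercept distance [m], generous for the coarse step
T_MAX = 30.0               # episode length [s]

ACTIONS = np.array([-A_A, +A_A])   # bang-bang action set: hard left / hard right

def reset():
    """Head-on geometry: aircraft at the origin, missile 8 km ahead."""
    return dict(xa=0.0, ya=0.0, ha=0.0, xm=8000.0, ym=0.0, hm=np.pi)

def step(s, a_lat):
    """One point-mass step; missile turn rate follows pure proportional
    navigation (heading rate = N * LOS rate), acceleration limited."""
    px, py = s['xa'] - s['xm'], s['ya'] - s['ym']          # missile-to-aircraft LOS
    r = max(np.hypot(px, py), 1.0)
    vax, vay = V_A * np.cos(s['ha']), V_A * np.sin(s['ha'])
    vmx, vmy = V_M * np.cos(s['hm']), V_M * np.sin(s['hm'])
    los_rate = (px * (vay - vmy) - py * (vax - vmx)) / (r * r)
    a_m = np.clip(NAV_GAIN * V_M * los_rate, -A_M, A_M)
    s = dict(s)
    s['ha'] += (a_lat / V_A) * DT                          # heading rate = a / V
    s['hm'] += (a_m / V_M) * DT
    s['xa'] += vax * DT; s['ya'] += vay * DT
    s['xm'] += vmx * DT; s['ym'] += vmy * DT
    return s, np.hypot(s['xm'] - s['xa'], s['ym'] - s['ya'])

def encode(s):
    """Discretize range and missile bearing relative to aircraft heading."""
    dx, dy = s['xm'] - s['xa'], s['ym'] - s['ya']
    r_bin = min(int(np.hypot(dx, dy) // 500), 19)                     # 20 range bins
    bearing = (np.arctan2(dy, dx) - s['ha'] + np.pi) % (2 * np.pi) - np.pi
    b_bin = int((bearing + np.pi) / (2 * np.pi) * 12) % 12            # 12 angle bins
    return r_bin * 12 + b_bin

# Tabular Q-learning with an epsilon-greedy exploration policy.
Q = np.zeros((20 * 12, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.99, 0.2

for episode in range(2000):
    s, t = reset(), 0.0
    while t < T_MAX:
        st = encode(s)
        a = np.random.randint(2) if np.random.rand() < eps else int(np.argmax(Q[st]))
        s, rng = step(s, ACTIONS[a])
        t += DT
        hit = rng < KILL_RADIUS
        reward = -100.0 if hit else 0.1        # punish intercept, reward survival
        target = reward if hit else reward + gamma * Q[encode(s)].max()
        Q[st, a] += alpha * (target - Q[st, a])
        if hit:
            break
```

The bang-bang structure mentioned in the abstract motivates restricting the action set to the two lateral acceleration extremes, so the learned policy only decides, per discretized state, which way to turn hard.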

Keywords

missile evasive maneuvers · reinforcement learning · Q-learning · pure proportional navigation



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Division of Aerospace Engineering, School of Mechanical, Aerospace & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
