Proposal and Evaluation of a Tabu List Based DQN for AAV Mobility

  • Conference paper
  • In: Advances in Internet, Data and Web Technologies (EIDWT 2021)

Abstract

The Deep Q-Network (DQN) is a deep reinforcement learning method that uses a deep neural network to estimate the Q-values of Q-learning. The authors have previously designed and implemented a DQN-based mobility control method for Autonomous Aerial Vehicles (AAVs). In this paper, we propose and evaluate a tabu list based DQN for AAV mobility control. For evaluation, we conducted simulations of AAV mobility control in a staircase environment using both the normal DQN and the tabu list based DQN. The simulation results show that the tabu list based DQN is a better solution than the normal DQN.
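
As a rough, illustrative sketch of the idea (not the authors' implementation), the snippet below shows one way a tabu list can constrain epsilon-greedy action selection in a DQN agent: successor states visited within the last few steps are masked out before the greedy choice is made. The `TabuList` class, the tenure length, and the toy transition function are assumptions introduced for this example.

```python
import random
from collections import deque

class TabuList:
    """Fixed-length memory of recently visited states (tenure is an assumption)."""
    def __init__(self, tenure=20):
        self.states = deque(maxlen=tenure)  # oldest entries expire automatically

    def add(self, state):
        self.states.append(state)

    def contains(self, state):
        return state in self.states

def select_action(q_values, state, actions, step, tabu, epsilon=0.1):
    """Epsilon-greedy selection that avoids moves into tabu (recently visited) states.

    q_values : dict mapping each action to its estimated Q-value for `state`
               (in a DQN these would come from the network's forward pass).
    step     : transition function giving the successor state of (state, action).
    """
    allowed = [a for a in actions if not tabu.contains(step(state, a))]
    if not allowed:                # every successor is tabu: fall back to all moves
        allowed = list(actions)
    if random.random() < epsilon:  # explore among non-tabu moves
        action = random.choice(allowed)
    else:                          # exploit the best non-tabu Q-value
        action = max(allowed, key=lambda a: q_values[a])
    tabu.add(step(state, action))  # remember where the agent is heading
    return action

# Toy usage: a 1-D corridor where states are integers and actions step left/right.
tabu = TabuList(tenure=3)
step = lambda s, a: s + a
print(select_action({-1: 0.2, +1: 0.5}, 0, (-1, +1), step, tabu))  # usually +1
```

With recently visited successor states masked out, the agent is discouraged from oscillating between the same positions, which is the kind of behaviour a tabu list strategy is intended to suppress.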


Acknowledgement

This work was supported by Grant for Promotion of Okayama University of Science (OUS) Research Project (OUS-RP-20-3).

Author information

Corresponding author

Correspondence to Tetsuya Oda.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this paper

Saito, N., Oda, T., Hirata, A., Nagai, Y., Hirota, M., Katayama, K. (2021). Proposal and Evaluation of a Tabu List Based DQN for AAV Mobility. In: Barolli, L., Natwichai, J., Enokido, T. (eds) Advances in Internet, Data and Web Technologies. EIDWT 2021. Lecture Notes on Data Engineering and Communications Technologies, vol 65. Springer, Cham. https://doi.org/10.1007/978-3-030-70639-5_18
