A Reinforcement Learning-Based Routing Protocol in VANETs
Vehicular ad hoc networks (VANETs) serve as an important enabling technology for assisted driving and intelligent transportation, and they have attracted wide attention since they were proposed. However, the dynamic topology and poor wireless link quality in VANETs, caused by vehicle movement and obstacles, make establishing reliable multi-hop communication rather challenging. In this paper, we propose a position-based reinforcement learning routing protocol. The protocol uses Q-learning to evaluate the quality of neighbor nodes, and selects the next-hop node according to both the quality of the neighbor nodes and the position of the destination node, so as to maintain stable and reliable links and routes. Extensive simulations demonstrate the effectiveness of the proposed protocol.
Keywords: Vehicular ad hoc networks (VANETs), Reinforcement learning, Routing protocol
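The abstract's core idea, maintaining a per-neighbor Q-value as a link-quality estimate and choosing the next hop by combining that estimate with geographic progress toward the destination, can be sketched as follows. This is a minimal illustration under stated assumptions: the learning rate, discount factor, reward signal, and the score that weights Q-value against progress are all hypothetical, not taken from the paper.

```python
import math

ALPHA = 0.5  # learning rate (assumed value)
GAMMA = 0.8  # discount factor (assumed value)

def update_q(q, neighbor, reward, max_q_next):
    """Standard Q-learning update of a neighbor's link-quality estimate.

    `reward` could reflect a successful/failed transmission and
    `max_q_next` the best Q-value reported by that neighbor
    (both modeled here as plain numbers for illustration).
    """
    old = q.get(neighbor, 0.0)
    q[neighbor] = old + ALPHA * (reward + GAMMA * max_q_next - old)

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_next_hop(q, positions, current, dest):
    """Pick the neighbor with the best combined score of learned
    link quality and geographic progress toward the destination.

    The 0.1 weighting between the two terms is an assumption made
    for this sketch, not a parameter from the paper.
    """
    best, best_score = None, -float("inf")
    d_cur = distance(positions[current], positions[dest])
    for neighbor, q_val in q.items():
        progress = d_cur - distance(positions[neighbor], positions[dest])
        score = q_val + 0.1 * progress
        if score > best_score:
            best, best_score = neighbor, score
    return best
```

For example, a node at the origin with two neighbors would prefer the one whose high Q-value and forward progress jointly dominate, which mirrors the greedy position-based forwarding the abstract describes, tempered by learned link quality.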