Abstract
Modern communication networks have grown highly complex and dynamic, making them difficult to model, forecast, and govern. Software-defined networks (SDNs) emerged to address this: by centralizing control, they make the routing of network flows flexible. Traffic engineering (TE) techniques combined with deep reinforcement learning (RL) are applied in SDN to make networks more agile, using strategies for load balancing, performance improvement, and minimizing the maximum link utilization across the network. This article analyzes recent work on routing and TE in SDN and hybrid SDN; the mathematical model and algorithm used in each method are interpreted, and an in-depth analysis is presented.
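As an illustrative sketch (not taken from any surveyed paper), the TE objective most of these methods minimize is the maximum link utilization: traffic demands are split over candidate paths, and a DRL agent's action adjusts the split ratios. The topology, capacities, and helper functions below are hypothetical, chosen only to make the objective concrete.

```python
def max_link_utilization(link_loads, link_capacities):
    """Return the highest load/capacity ratio over all links (the TE objective)."""
    return max(load / cap for load, cap in zip(link_loads, link_capacities))

def apply_split(demand, split_ratios, path_links, loads):
    """Route one demand over candidate paths according to the split ratios.

    path_links[i] lists the link indices traversed by candidate path i;
    in a DRL formulation, split_ratios would be the agent's action.
    """
    for ratio, links in zip(split_ratios, path_links):
        for link in links:
            loads[link] += demand * ratio
    return loads

# Hypothetical example: one 8-unit demand, two disjoint 2-link paths,
# four links of capacity 10 each.
capacities = [10.0, 10.0, 10.0, 10.0]
paths = [[0, 1], [2, 3]]

# An even split places 4 units on every link -> utilization 4/10 = 0.4,
# the best achievable here; any skew raises the maximum.
loads = apply_split(8.0, [0.5, 0.5], paths, [0.0] * 4)
print(max_link_utilization(loads, capacities))  # 0.4
```

A reward of the form `-max_link_utilization(...)` is a common way the surveyed RL formulations couple this objective to policy learning.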
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Bhavani, A., Ekshitha, Y., Mounika, A., Prabu, U. (2023). A Study on Reinforcement Learning-Based Traffic Engineering in Software-Defined Networks. In: Smys, S., Lafata, P., Palanisamy, R., Kamel, K.A. (eds) Computer Networks and Inventive Communication Technologies. Lecture Notes on Data Engineering and Communications Technologies, vol 141. Springer, Singapore. https://doi.org/10.1007/978-981-19-3035-5_4
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-3034-8
Online ISBN: 978-981-19-3035-5
eBook Packages: Engineering (R0)