A Study on Reinforcement Learning-Based Traffic Engineering in Software-Defined Networks

  • Conference paper
Computer Networks and Inventive Communication Technologies

Part of the book series: Lecture Notes on Data Engineering and Communications Technologies ((LNDECT,volume 141))

Abstract

Modern communication networks have grown highly complex and dynamic, making them difficult to describe, forecast, and govern. Software-defined networks (SDNs) emerged to address this: by centralizing control, they make it flexible to route network flows. Traffic engineering (TE) techniques are combined with deep reinforcement learning (RL) in SDN to make networks more agile. Strategies for load balancing, performance improvement, and minimizing the maximum link utilization across the network are considered. This article analyzes recent work on routing and TE in SDN and hybrid SDN; the mathematical model and algorithm used in each method are interpreted, and an in-depth analysis is given.
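The abstract's central objective, minimizing the maximum link utilization, can be made concrete with a small sketch. The topology, capacities, and demands below are invented for illustration; they are not from the surveyed papers, and the routing choice is hand-picked rather than learned by an RL agent.

```python
def max_link_utilization(links, flows):
    """links: {link_name: capacity}; flows: list of (path, demand),
    where each path is a list of link names. Returns the utilization
    (load / capacity) of the most heavily loaded link."""
    load = {link: 0.0 for link in links}
    for path, demand in flows:
        for link in path:
            load[link] += demand
    return max(load[link] / links[link] for link in links)

# Three links of capacity 10 each: a direct link a-b and a detour a-c-b.
links = {"a-b": 10.0, "a-c": 10.0, "c-b": 10.0}

# Naive routing: both 6-unit demands share the direct link, overloading it.
naive = [(["a-b"], 6.0), (["a-b"], 6.0)]

# Engineered routing: the second demand detours via c, balancing the load.
engineered = [(["a-b"], 6.0), (["a-c", "c-b"], 6.0)]

print(max_link_utilization(links, naive))       # 1.2 (link a-b overloaded)
print(max_link_utilization(links, engineered))  # 0.6
```

An RL-based TE scheme, roughly speaking, learns a policy that outputs such path (or link-weight) choices and uses a reward derived from this utilization metric; the surveyed papers differ chiefly in how the state, action, and reward are defined.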



Author information

Corresponding author

Correspondence to U. Prabu.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Bhavani, A., Ekshitha, Y., Mounika, A., Prabu, U. (2023). A Study on Reinforcement Learning-Based Traffic Engineering in Software-Defined Networks. In: Smys, S., Lafata, P., Palanisamy, R., Kamel, K.A. (eds) Computer Networks and Inventive Communication Technologies. Lecture Notes on Data Engineering and Communications Technologies, vol 141. Springer, Singapore. https://doi.org/10.1007/978-981-19-3035-5_4

  • DOI: https://doi.org/10.1007/978-981-19-3035-5_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-3034-8

  • Online ISBN: 978-981-19-3035-5

  • eBook Packages: Engineering (R0)
