
Software-defined load-balanced data center: design, implementation and performance analysis

Published in: Cluster Computing

Abstract

Data centers are growing rapidly and provide various services to millions of users through a limited collection of servers. As a result, large-scale data center servers are threatened by overload. In this paper, we propose a framework for data centers based on Software-Defined Networking (SDN) technology that, taking advantage of this technology, balances the load between servers and prevents overload on any given server. In addition, this framework provides the required services quickly and with low computational complexity. The proposed framework is implemented on a real testbed, and a wide variety of experiments are carried out in comprehensive scenarios to evaluate its performance. Furthermore, the framework is evaluated with four data center architectures: Three-layer, Fat-Tree, BCube, and Dcell. In the testbed, Open vSwitch v2.4.1 and Floodlight v1.2 are used to implement the switches and OpenFlow controllers. The results show that in all four SDN-based architectures, the load balance between the servers is well maintained, and significant improvements are achieved in parameters such as throughput, delay, and resource consumption.



Notes

  1. Software-Defined Data Center.

  2. http://voip-lab.um.ac.ir/index.php?lang=en.

  3. The results of the other architectures are given in the appendix.

  4. We want network control to be centralized rather than having each device act as its own island; this greatly simplifies the network discovery, connectivity, and control issues that arise with the current system. This overarching control makes the whole network programmable, instead of requiring each device to be configured individually every time an application is added or something moves.
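The "programmable network" idea in this note can be illustrated with a small sketch against Floodlight's static flow pusher REST interface (the `/wm/staticflowpusher/json` endpoint documented for Floodlight, the controller used in the testbed). The controller address, switch DPID, and match fields below are illustrative assumptions, not values from the paper.

```python
import json
from urllib import request

# Illustrative sketch: one script programs every switch through the central
# controller's REST interface instead of logging into each device.
# Controller address and flow field values below are assumptions.
CONTROLLER = "http://127.0.0.1:8080"

def build_flow(dpid: str, name: str, out_port: int, ipv4_dst: str) -> dict:
    """Build a static flow entry that forwards traffic for ipv4_dst out of out_port."""
    return {
        "switch": dpid,            # DPID of the target Open vSwitch
        "name": name,              # unique name for this flow entry
        "priority": "100",
        "eth_type": "0x0800",      # match IPv4 traffic
        "ipv4_dst": ipv4_dst,      # destination server address
        "active": "true",
        "actions": f"output={out_port}",
    }

def push_flow(flow: dict) -> None:
    """POST the flow entry to the controller, which installs it on the switch."""
    req = request.Request(
        CONTROLLER + "/wm/staticflowpusher/json",
        data=json.dumps(flow).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # no per-device login or box-by-box configuration

# With a running controller, one loop reprograms the whole network, e.g.:
# for dpid in switches:
#     push_flow(build_flow(dpid, f"to-srv1-{dpid}", 2, "10.0.0.1"))
```

This is what "overarching control" buys in practice: adding or moving an application becomes one loop over discovered switches rather than a manual reconfiguration of each device.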


Acknowledgements

This work was supported by the Quchan University of Technology (Grant No. 11942).

Author information


Corresponding author

Correspondence to Ahmadreza Montazerolghaem.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Supplementary experiments and results


1.1 Experimental results of the Fat-Tree architecture

To test this architecture, we use 10 OpenFlow switches and 8 servers according to the architecture of Fig. 3 and repeat the experiments as before.

1.1.1 Experiment 1: Fixed load

There are two scenarios in this experiment. In Scenario 1, each server's background traffic is equal to 500 bps. In Scenario 2, the background traffic for Servers 1 to 4 (P1 to P4) is 1000 bps and for Servers 5 to 8 (P5 to P8) is 500 bps. A fixed offered load is also injected into the system by a traffic generator for 400 seconds at a rate of 1500 requests per second. Figures 16 and 17 show the performance of the servers.
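The fixed offered load used here (1500 requests per second sustained for 400 s) can be sketched as a simple rate scheduler. This is an illustrative sketch of the load pattern, not the paper's actual traffic generator.

```python
def offered_load_schedule(rate_rps: int, duration_s: int):
    """Yield evenly spaced send times (in seconds) for a fixed offered load:
    rate_rps requests per second sustained for duration_s seconds."""
    interval = 1.0 / rate_rps
    for i in range(rate_rps * duration_s):
        yield i * interval

# The experiment's fixed load corresponds to offered_load_schedule(1500, 400),
# i.e. 600,000 requests spaced 1/1500 s apart.
```

A real generator would send a request at each yielded timestamp; the point of the schedule is that the offered load stays constant over the whole 400-second window, so any throughput drop is attributable to the servers, not the source.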

Fig. 16

Comparison of the performance of data center servers in two scenarios

As can be seen from Figs. 16 and 17, the throughput and average delay in both scenarios are slightly improved compared to the Three-layer architecture. The reason is that the Fat-Tree architecture (unlike the Three-layer architecture) is divided into two general sections: the OpenFlow switches are split into two subsystems, which results in better load balancing among the 8 servers. As in the previous sections, the results of Scenario 2 are slightly worse than those of Scenario 1 due to its heavier background traffic.

Fig. 17

Comparison of data center servers’ resource usage in two scenarios

1.1.2 Experiment 2: Variable load

In the previous section, a constant load of 1500 requests per second (1500 rps) was injected into the system; in this section, we evaluate performance under variable load. The results are shown in Fig. 18. These results are almost similar to those obtained with the Three-layer architecture.

Fig. 18

Performance over time and with different offered-loads

1.1.3 Experiment 3: Comparison with the traditional Fat-Tree architecture

Table 2 provides a comparison of traditional Fat-tree and SDN-based architectures in terms of throughput, delay, and resource consumption. As can be seen, SDN technology has been able to significantly improve the quality of service of server requests including throughput, delay, and resource consumption.

Table 2 Comparison of traditional Fat-Tree architecture with SDN

1.2 Experimental results for BCube architecture

We use 8 OpenFlow switches and 16 servers in accordance with Fig. 5, to test the BCube architecture.

1.2.1 Experiment 1: Constant load

The first experiment, as described in the previous sections, consists of two scenarios with different background traffic. In Scenario 1, each server's background traffic is equal to 500 packets per second. In the second scenario, however, the background traffic of the servers is not equal. The results are shown in Fig. 19.

As shown in Fig. 19 and the figures in the previous sections, servers are less efficient in the BCube architecture than in the Fat-Tree architecture, but perform relatively better than in the Three-layer architecture. This is also illustrated by the servers' resource usage in Fig. 20.

Fig. 19

Comparison of the performance of data center servers in two scenarios

As can be seen, the servers' resource usage in this architecture is slightly higher than in the Fat-Tree architecture. This is because, in the Fat-Tree architecture, all components are divided into two parts and each server can be reached in fewer hops (links), while in the BCube architecture, reaching the destination server from the source server requires more hops.

It is worth noting that all three examined architectures distribute the load correctly, and the behavior of the three algorithms (SDN-based, Round-Robin, and Random) is consistent across all three architectures, with no significant difference. As expected, Scenario 2 consumes more resources than Scenario 1 (Fig. 20).
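The Round-Robin and Random baselines compared against the SDN-based balancer can be sketched as simple dispatch policies. This is an illustrative sketch; the server names are placeholders, and the paper's actual dispatchers run in the network, not in application code.

```python
import itertools
import random
from collections import Counter

# Eight servers, as in the Fat-Tree testbed; names are placeholders.
SERVERS = [f"P{i}" for i in range(1, 9)]

def round_robin(servers):
    """Return a dispatcher that cycles through the servers in fixed order."""
    cycle = itertools.cycle(servers)
    return lambda: next(cycle)

def random_pick(servers, rng=random):
    """Return a dispatcher that picks a server uniformly at random."""
    return lambda: rng.choice(servers)

rr = round_robin(SERVERS)
rr_counts = Counter(rr() for _ in range(1600))
# Round-Robin is perfectly even: each of the 8 servers gets exactly 200 of
# the 1600 requests, regardless of per-server capacity or current load.

rnd = random_pick(SERVERS)
rnd_counts = Counter(rnd() for _ in range(1600))
# Random is even only in expectation; per-server counts fluctuate around 200.
```

Neither baseline reacts to server state, which is why an SDN-based balancer with a global view can do better when background traffic (as in Scenario 2) is uneven.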

Fig. 20

Comparison of resource usage by the data center server in two scenarios

1.2.2 Experiment 2: Variable load

In the previous section, a constant load of 1500 requests per second (1500 rps) was injected into the system; in this section, we evaluate performance under variable load. The results are shown in Fig. 21. These results are almost similar to those obtained with the previous architectures.

As can be seen from Fig. 21c, d, the amount of resources consumed by the controller is not comparable to the servers, and therefore the probability of controller bottlenecks is very low.

Fig. 21

Performance over time and with different offered-loads

1.2.3 Experiment 3: Comparison with traditional BCube architecture

Table 3 provides a comparison of traditional BCube and SDN-based architectures in terms of throughput, delay, and resource consumption.

Table 3 Comparison of traditional BCube architecture with SDN

As can be seen, SDN technology has been able to significantly improve the quality of service of server requests, including throughput, delay, and resource consumption. Up to this point, in all three studied architectures, SDN technology has had a great impact on server performance.

1.3 Experimental results of Dcell architecture

We use 5 OpenFlow switches and 20 servers in accordance with the architecture in Fig. 7, to test the Dcell architecture.

1.3.1 Experiment 1: Constant load

This experiment repeats the first experiment of the previous sections. It involves two scenarios with different background traffic. In Scenario 1, each server's background traffic is equal to 500 bps. In Scenario 2, the background traffic for servers 1 to 10 (P1 to P10) is 1000 bps and for servers 11 to 20 (P11 to P20) is 500 bps. Figure 22 illustrates the performance of the SIP servers in this architecture.

As can be seen, the delay difference among the three methods (SDN-based, Round-Robin, and Random) is smaller than in the previous architectures; in other words, the performance of these three algorithms is closer together in this architecture than in the previous three.

Fig. 22

Server performance over time

The results for resource consumption, such as CPU and memory, are almost similar to the BCube architecture, so we omit them here.

However, as can be seen from the results above, the SDN controller has been able to distribute the load across the servers with high throughput and low latency.

1.3.2 Experiment 2: Variable load

This section evaluates performance under variable load. Since the results of this section are similar to those of the BCube architecture, we do not repeat them here. It should be noted that the architecture considered for Dcell consists of five cells, and transferring packets from one cell to another incurs a slight additional delay compared to the other architectures. For example, according to our observations, this architecture had a slightly longer delay than the previous architectures when sending load from server 1 in cell 1 to server 2 in cell 2. Nevertheless, the SDN-based architectures still greatly improve service quality compared to the traditional architectures.

Table 4 Comparison of traditional Dcell architecture with SDN

1.3.3 Experiment 3: Comparison with traditional Dcell architecture

Table 4 provides a comparison of traditional Dcell and SDN-based architectures in terms of throughput, delay, and resource consumption.

As can be seen, SDN technology has been able to significantly improve the quality of service of server requests including throughput, delay, and resource consumption.


Cite this article

Montazerolghaem, A. Software-defined load-balanced data center: design, implementation and performance analysis. Cluster Comput 24, 591–610 (2021). https://doi.org/10.1007/s10586-020-03134-x
