
Fully Integrated Software-Defined Networking (SDN) Testbed Using Open-Source Platforms

Abstract

With the era of IoT, networking concepts such as Software-Defined Networking (SDN), Network Function Virtualization (NFV), Cloud Computing, Multi-access Edge Computing (MEC) and Network Slicing were introduced to cater to the demands for performance, portability, scalability and energy efficiency of networks. To overcome several limitations inherent to traditional networks, such as static configuration, poor scalability and low efficiency, Software-Defined Networking (SDN) separates the network into a control plane and a data plane. Generally, the control layer provides centralized control for all connected devices, while the data plane allows programmable network configuration harnessed by SDN applications in the application layer. A controller in the control plane is the brain of the SDN environment and manages the entire network. It delivers the requisite instructions to the underlying architecture for handling packets. However, when managing a large-scale network, a single controller is inadequate, which induces the necessity of using multiple controllers. Multiple controllers, in turn, transform the SDN environment into a distributed environment. Therefore, a suitable controller must be selected when implementing a distributed SDN environment to avoid any malfunction of the network. In this research, a common platform at the level of Layer 2 was developed to judge the steadiness of the controllers OpenDaylight (ODL) and Open Networking Operating System (ONOS) across different network sizes.

Introduction

In a traditional environment, each network device has its own control plane and data plane locally, given a finite number of switches or routers. These local control planes consist of local routing tables or local Routing Information Bases (RIBs). Protocols like Open Shortest Path First (OSPF) are used to update the routing tables. However, no single device has visibility of the entire network, and each device acts independently. In the absence of a centralized controller inside the traditional network architecture, an administrator always needs to manually configure each local device. From the management point of view, this becomes a tedious task in the presence of multiple network devices [1]. The other problem that limits the automation of networking devices is that every device has its own proprietary Operating System (OS) and interfaces. Consequently, it is very difficult to apply a new application or routing protocol across devices from different vendors, such as HP, Cisco, Juniper, etc. There is no open interface on these devices that allows changes to be applied to the control plane. It would be much more feasible if networking devices allowed anyone to develop applications, protocols and utilities at any juncture. This would also accommodate new developments and innovations in future networking. In the meantime, advances in server technologies and the rapid development of virtualization technologies have allowed anyone to write an application for an OS such as Windows, Linux or Mac OS. As a result, the application does not need to be concerned about the physical hardware it is installed on. One can even rapidly move a VM (Virtual Machine) from one host to another platform, with the OS hiding the complexity of the hardware. This was not possible in networking until the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) concepts appeared [2, 3].

SDN is an extension or amalgamation of earlier proposals, like programmable networks and control plane separation projects, rather than an entirely new arrangement. It has obtained considerable attention with the development of SDN and OpenFlow as the next programmable network. It simply manages multiple network devices, such as SDN switches, using a centralized manager, i.e., the SDN controller. Technically, SDN separates the data plane and the control plane, and is thus able to separate the network forwarding functions from the controlling functions. The SDN architecture mainly consists of three layers: the application layer, the controller layer and the infrastructure layer. The infrastructure layer means the existing or new physical or virtual network components that represent the data/forwarding plane. The control layer acts as the centralized brain for the infrastructure layer. Here, the control layer logically divides the environment into the northbound interface (NBI) and the southbound interface (SBI): the upper portion of the control layer is known as the NBI while the lower portion is called the SBI. Typically, OpenFlow is used to communicate between the control layer and the infrastructure layer, while the SDN controller separates the control plane from the infrastructure layer [4]. Moreover, SDN solutions in today's industry are at enterprise level and are not customizable according to the requirements of the user. On the other hand, when working with SDN, the control plane is supported or managed by a single controller or by several controllers that belong to a particular vendor only. Thus, improvements should be made to run under a multi-controller environment with controllers from different vendors. However, to implement a distributed SDN environment without any malfunctioning of the network, we must choose a suitable controller. Hence, in this research, we have considered two Java-based open-source controllers and a Python-based controller to develop a common platform: the OpenDaylight (ODL), Open Network Operating System (ONOS) and Ryu controllers. Ultimately, we have done a quantitative analysis of the ODL and ONOS controllers to compare their performance. Moreover, Ryu was used to demonstrate the firewall, which is a Network Function Virtualization (NFV) feature integrated with the SDN platform.

The rest of the manuscript is arranged as follows: Sect. 2 depicts related literature on the research topic. Section 3 summarizes the implementation of the SDN controllers OpenDaylight, ONOS and Ryu, the latter acting as the SDN firewall. Then, the results obtained from virtual as well as physical implementations are shown in Sect. 4, which includes a quantitative analysis to predict the best controller subject to the emulated scenarios. Finally, Sect. 5 presents the conclusions and future directions of the research.

Related Work

The controller is the brain of the SDN network, and its core design concept is applied to all unidentified packets that arrive at the SDN switch. The SDN switch encapsulates such packets in OpenFlow Packet-In messages and sends them to the controller for further analysis. The controller then routes the Packet-In logically and communicates with the switch on how to handle the packets. To study and design the SDN controller, the packet processing logic should be verified as depicted in the SDN controller architecture of Fig. 1.

Fig. 1 Architecture of the SDN controller [4]

The control plane of the SDN architecture can be designed either with a single controller or with distributed/multiple controllers. These architectures can be further divided into physically or logically centralized and physically or logically distributed, and then again into different orientations, such as flat, hierarchical, etc. [5]. Generally, the implementation of a centralized controller is simple, and it is responsible for handling all control plane activities at a single server. However, a single controller in a large-scale network with no intelligence is prone to single points of failure. This results in poor efficiency and unexpectedly high delays owing to the distance between the controller and the switches, and consequently affects network reliability undesirably. To alleviate this to some extent, multiple controllers have been proposed, such that more than one controller works cooperatively in making forwarding decisions among multiple domains. These distributed controllers are more realistic for scalable networks and show high performance under increasing request demand.

OpenDaylight (ODL) is an example of a Java-based, open-source, distributed controller. It is developed and managed by the Linux Foundation. ODL is based on a modular architecture, which gives a developer the flexibility to plug in new applications using northbound Application Programming Interfaces (APIs). ODL uses OpenFlow as the southbound protocol, along with other standard protocols, to communicate with the underlying architecture. In addition, anyone can develop northbound applications according to their requirements [6,7,8]. On the other hand, Open Networking Operating System (ONOS) is also a Java-based controller. ONOS is an SDN network operating system for service providers, designed to achieve high performance, high availability, scale-out and well-defined northbound and southbound interfaces. The ONOS controller is principally designed for carrier networks and thus offers the ability to provide new SDN services alongside their initial proprietary services. Another main characteristic is its ability to support even hybrid networks [9, 10]. Table 1 shows the characteristic differences between the ODL and ONOS controllers extracted from the previous literature [11,12,13,14].

Table 1 Feature-based comparison of ODL and ONOS controllers

OpenFlow is the first communication protocol between the control layer and the data layer in the SDN architecture, and it offers a standard API for programming network devices. It is based on Ethernet switching technology that uses flow tables with required actions. The controller and the OpenFlow switch communicate through a secure channel. The OpenFlow protocol is an interface between the control and data planes, and there is a surge in the development of OpenFlow-based SDN solutions [15]. Open vSwitch (OVS) is an open-source OpenFlow switch, mainly designed to work as a virtual switch in virtualized environments. Traditionally, most network applications, such as load balancers, firewalls, proxies and IPsec, have been provided by hardware middleboxes. OpenFlow enables programmability of the switches, and hence application-specific packet forwarding actions are pushed to the switches [16].

SDN testbeds use open-source code to control universal SDN controllers and switches. The controller communicates with the OpenFlow switch and manages the switch through the OpenFlow protocol [17]. Throughput is the amount of data that enters and goes through a system in a given amount of time; it is applied to systems ranging from various aspects of computer and network systems to organizations [18]. Ryu is an open-source framework (Apache 2.0 licensed) created by NTT and written in Python. Ryu supports several southbound interfaces, such as OpenFlow and NETCONF. Regarding OpenFlow, Ryu supports versions 1.0, 1.2, 1.3, 1.4 and 1.5. A REST API is available for use by external SDN applications. Currently, Ryu is fully integrated into Neutron (the OpenStack networking service) [19].

Mininet is emulation software that creates a network of virtual hosts, switches, controllers, and links. Mininet hosts run a standard Linux system, and Mininet networks run real code, including standard Linux network applications as well as the real Linux kernel and network stack. Mininet can be used to modify switches and hosts, and designs can be moved to a real system with minimal change for real-world testing, performance evaluation, and deployment. This means that a design that works in Mininet can usually move directly to hardware switches for line-rate packet forwarding. NFV is the concept of replacing dedicated network appliances, such as routers and firewalls, with software running on COTS (Commercial Off-the-Shelf) servers [20]. NFV is the next step in virtualization, taking physical networking equipment (load balancers, firewall features, etc.) and running it on a Virtual Machine (VM). The main idea of NFV is the decoupling of physical network equipment from the functions that run on it. This allows for the consolidation of many network equipment types onto high-volume servers, switches and storage, which are located in data centers and distributed network nodes at end-user premises [21, 22].

Implementations of SDN Controllers

OpenDaylight Controller

OpenDaylight Controller Implementation

The main requirement of the SDN testbed is to manage an underlying network with a single centralized controller. OpenDaylight (ODL), an open-source controller, is used as the centralized controller for this SDN multi-controller testbed environment. Open SDN supports the OpenFlow protocol and other open standards. Unlike vendor-based SDN products, such as HPE Virtual Application Network (VAN) SDN and the Cisco APIC Enterprise Module (APIC-EM), ODL supports almost all open SDN standards. ODL is developed in Java, runs in its own Java Virtual Machine (JVM), and communicates via a REST API using either JSON or XML formats. Hence, the ODL Nitrogen distribution was selected as the controller for this testbed. However, the characteristics specific to ODL tend to raise several issues in developing a multi-controller SDN environment, as discussed below. Generally, ODL is deployed on a VM running on a virtualized workstation. Ubuntu is used as the operating system of the VM, and a minimum of 4 GB of Random Access Memory (RAM) is required. The management of the underlying network is done with the aid of a REST API, so some additional features should be added to the controller for better performance of the REST API. In addition, the Open Java Development Kit (JDK) should be added to the VM to run the controller. Once the content is unzipped, the controller is launched using the path <distribution directory>/bin/karaf. Then the controller can be accessed through a web browser using the URL http://<controller-ip>:8181/index.html. For convenience, a static IP is used for the controller and the required ports are opened through the VM's firewall. However, this setup still does not fulfill the conditions needed to manage the network using the controller. As a result, the REST API of the controller is enabled by adding multiple features to the launched controller. By adding these features, a user or an admin is able to view and manage the topology of the network as well as the flows written on each switch available in the network. DLUX-core, restconf and l2switch are some of the additional features required for better functionality of the controller.

  • DLUX—A node is a switch in the network on which the flows are written. To retrieve topology data of the network and flow data at different nodes, DLUX-core is used.

  • RESTconf—An external user or an admin uses the REST API for managing the network. Once it is enabled, management can be done through several applications that are written to support the REST API. Hence, to enable the REST API of the controller, restconf is used.

  • L2switch—OpenFlow is the protocol used to communicate between switches in the forwarding plane or data plane. Since ODL is already compatible with OpenFlow, it should be enabled prior to use by the controller. Thus, L2switch enables the OpenFlow plugin needed for communication between the devices in the forwarding plane and the controller.

By adding the above features to the controller, the data related to the network topology, flow data and node data are obtained. ODL is preferred here since it is capable of providing a detailed diagram of the network topology which includes node IDs, MAC addresses and the IP address of the hosts.

Northbound Applications

Northbound applications are used to interact with the controller. These applications help administrators push configurations to the underlying network topologies, which are managed using a specific controller.

  • Postman: REST APIs are used to manage the network through the controller, instead of creating flows directly on the switch via its console. Using the Postman REST client, it is easy to identify the common characteristics in managing the network configurations while communicating with the controller via northbound APIs. In the Postman REST client, there are some important fields to be completed for sending and receiving flows. First, it is required to select the action, such as GET, PUT, POST, DELETE, etc. Then the controller IP address and the relevant path should be given for the desired action.

  • Developed Flow Manager Application: The main objective of this research is to develop an application that can perform in a multi-controller SDN environment. Thus, the application itself should have the ability to manage all the flows of each Open vSwitch in the topology. The application used to manage flows of Open vSwitches supports either ODL or ONOS. This guarantees open SDN environment functionality across different controllers from different vendors.

OpenDaylight Controller Flow Statistics

OpenDaylight has its own operational inventory to store real-time statistics of flows, located under restconf/operational. Moreover, the operational inventory includes different sub-inventories to store other sorts of data related to the flows. Using the requests defined in the Postman REST client, information on the relevant flows is obtained; an equivalent scripted request is sketched after the list below.

  • Topology operational information—http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/

  • Inventory information—http://<controller-ip>:8181/restconf/operational/opendaylight-inventory:nodes/

  • Node information—http://<controller-ip>:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:1

  • Table information—http://<controller-ip>:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/table/<table-id>
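The same operational data can be pulled programmatically instead of through Postman. The following is a minimal sketch using the Python requests library; the controller IP (192.168.1.10) and the admin/admin credentials are placeholders assumed here for illustration and should be replaced with the values of the actual deployment.

```python
# Hedged sketch: querying the ODL RESTCONF operational data store with Python,
# equivalent to the Postman GET requests listed above.
import requests
from requests.auth import HTTPBasicAuth

CONTROLLER = "192.168.1.10"                      # placeholder controller IP
BASE = f"http://{CONTROLLER}:8181/restconf/operational"
AUTH = HTTPBasicAuth("admin", "admin")           # assumed default ODL credentials
HEADERS = {"Accept": "application/json"}

# Topology operational information
topo = requests.get(f"{BASE}/network-topology:network-topology/",
                    auth=AUTH, headers=HEADERS)
print(topo.status_code)
print(topo.json())

# Inventory information for a single node (openflow:1)
node = requests.get(f"{BASE}/opendaylight-inventory:nodes/node/openflow:1",
                    auth=AUTH, headers=HEADERS)
print(node.json())
```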

According to the results obtained using Postman, it can be observed how the actual flows are structured. Typically, a flow table consists of the following main parameters; a hedged example of pushing such a flow over RESTCONF follows the list.

  • Match Field—The field used to describe the matching criteria against packets. It is defined based on the ingress port, packet headers and metadata from the previous flow table.

  • Priority—The higher the number assigned to an entry, the higher its priority.

  • Counters—This field is incremented whenever a packet matches the entry.

  • Instructions—Procedures that are applied to matching packets.

  • Timeouts—The idle timeout or hard timeout that specifies the amount of time before an entry expires.

  • Cookie—Not used for processing packets, but used by the controller to filter flows based on their type (statistics, modification and deletion).
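To make these fields concrete, the sketch below pushes a proactive flow into the ODL config data store over RESTCONF, touching the match, priority, instructions, timeouts and cookie fields described above. The JSON layout follows the ODL OpenFlow plugin model as we recall it and should be verified against the running controller's RESTCONF schema; the controller IP, credentials and flow id 100 are placeholders.

```python
# Hedged sketch: writing a drop flow for port 1 of switch openflow:1 via RESTCONF.
import requests
from requests.auth import HTTPBasicAuth

URL = ("http://192.168.1.10:8181/restconf/config/"
       "opendaylight-inventory:nodes/node/openflow:1/table/0/flow/100")

flow = {
    "flow-node-inventory:flow": [{
        "id": "100",                    # flow id/name chosen by the admin
        "table_id": 0,
        "priority": 200,                # higher number -> higher priority
        "match": {"in-port": "1"},      # match packets arriving on port 1
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{"order": 0, "drop-action": {}}]}
        }]},
        "idle-timeout": 0,              # 0 = never expire
        "hard-timeout": 0,
        "cookie": 100                   # used by the controller to filter flows
    }]
}

resp = requests.put(URL, json=flow,
                    auth=HTTPBasicAuth("admin", "admin"),
                    headers={"Content-Type": "application/json"})
print(resp.status_code)                 # 200/201 indicates the flow was accepted
```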

ONOS Controller

ONOS Controller Implementation

ONOS is one of the SDN controllers that is freely available for implementing and testing SDN environments. It is also a Java-based controller that supports the OpenFlow protocol as the southbound protocol while providing interaction with users via REST APIs. On the other hand, ONOS maintains a simpler OpenFlow flow structure than other Java-based controllers, which makes it easier to develop the northbound application that writes the flows into the ONOS database. However, ONOS has its own characteristics that lead to several issues when designing multi-controller SDN environments. ONOS requires at least 4 GB of RAM to guarantee proper functionality. The controller is deployed on a separate Virtual Machine running Linux (Ubuntu). The embedded REST APIs can be used to identify the flow structure of ONOS and to build the northbound applications accordingly. ONOS also requires Java version 8 and compatible JDKs for the controller to be started. The controller can be launched with either ./<distribution directory>/onos-service start or ./<distribution directory>/onos-service clean.

The web User Interface (UI) developed for the controller is accessed via http://<controller-ip>:8181/onos/ui once the controller is loaded properly. ONOS requires some additional packages and features to obtain the required functionality from the controller, as listed below.

  • org.onosproject.openflow-base

  • org.onosproject.openflow

  • org.onosproject.drivers

  • org.onosproject.fwd

  • org.onosproject.lldpprovider

  • org.onosproject.hostprovider

The above packages are responsible for providing the required features to ONOS to perform specific operations. The package org.onosproject.openflow provides the necessary OpenFlow protocol-related parameters, while org.onosproject.fwd and org.onosproject.lldpprovider help with Layer 2 packet forwarding among network elements. The features included in these packages can be installed directly via the ONOS Command Line Interface (CLI) by passing the required feature name to the feature:install command. Packet forwarding in Layer 2 of the topology is enabled with onos-apps-fwd and onos-providers-lldp. In the meantime, onos-drivers-ovsdb adds the required OVSDB drivers, while onos-gui and onos-rest enable the web UI and the REST APIs in the controller.
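Besides the karaf CLI, ONOS applications can also be activated remotely. The following sketch uses the ONOS applications REST endpoint as we recall it (it should be checked against the instance's /onos/v1/docs); the controller IP and the onos/rocks credentials are placeholder assumptions.

```python
# Hedged sketch: activating the ONOS applications listed above over REST
# instead of the karaf CLI.
import requests
from requests.auth import HTTPBasicAuth

ONOS = "http://192.168.1.20:8181/onos/v1"        # placeholder controller IP
AUTH = HTTPBasicAuth("onos", "rocks")            # assumed default ONOS credentials

for app in ("org.onosproject.openflow",
            "org.onosproject.fwd",
            "org.onosproject.lldpprovider",
            "org.onosproject.hostprovider"):
    r = requests.post(f"{ONOS}/applications/{app}/active", auth=AUTH)
    print(app, r.status_code)                    # 200 indicates activation succeeded
```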

ONOS Controller Flow Statistics

The ONOS flow structure is similar to that of ODL; however, an ONOS flow is less complex than an ODL flow. The major drawback of ONOS is that it does not allow the user to configure custom flow IDs. Instead, ONOS itself generates a random flow ID, which is a hash value based on the priority, selector criteria, port and type. The following REST requests can be used to obtain device, port or topology information; a scripted equivalent is sketched after the list.

  • Device information—http://<controller-ip>:8181/onos/v1/devices

  • Port information—http://<controller-ip>:8181/onos/v1/devices/<device-id>/ports

  • Topology information—http://<controller-ip>:8181/onos/v1/topology
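The same device, port and topology information can be read with a short script. In this sketch, the controller IP and the onos/rocks credentials are placeholder assumptions; the response field names reflect the ONOS REST API as we recall it.

```python
# Hedged sketch: reading ONOS device, port and topology information over REST.
import requests
from requests.auth import HTTPBasicAuth

ONOS = "http://192.168.1.20:8181/onos/v1"        # placeholder controller IP
AUTH = HTTPBasicAuth("onos", "rocks")            # assumed default ONOS credentials

devices = requests.get(f"{ONOS}/devices", auth=AUTH).json()
for dev in devices.get("devices", []):
    dev_id = dev["id"]                           # e.g. of:0000000000000001
    ports = requests.get(f"{ONOS}/devices/{dev_id}/ports", auth=AUTH).json()
    print(dev_id, [p.get("port") for p in ports.get("ports", [])])

topology = requests.get(f"{ONOS}/topology", auth=AUTH).json()
print(topology)                                  # device, link and cluster counts
```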

Open vSwitch

Open vSwitch Implementation

Open vSwitch (OVS) can be implemented either physically or virtually. The physical implementation can be done using either real SDN-capable switches or Raspberry Pi boards. An SDN-capable switch is an L2 switch that additionally supports the OpenFlow protocol (or other southbound protocols) for forwarding packets. Since SDN-capable switches are expensive, they are usually not heavily used in developing testbeds under normal circumstances, unlike in production or enterprise environments. Thus, implementing OVS physically is accomplished using Raspberry Pi board computers instead. Typically, a Raspberry Pi runs Raspbian, an operating system based on Debian. In this research, we have used a Raspberry Pi for each OVS. Once OVS is deployed, the OVS daemon should be run together with the OVSDB server. OVSDB provides the database facilities needed to store flow data coming from the switch. However, there is only one Ethernet port available on the Raspberry Pi; therefore, to connect multiple ports, USB-to-Ethernet adapters can be used.

On the other hand, the virtual implementation of an OVS can be done using a separate VM for each switch. However, using a dedicated VM for each OVS in a testbed environment is a waste of resources outside enterprise-level environments. The most efficient way of implementing an OVS in a testbed is to use either a simulation tool or an emulation tool. GNS3 is one of the simulation tools that supports implementing an OVS. However, GNS3 images consume the same amount of resources when running, and increasing the number of OVSs in the network increases the memory requirements. As a solution, the emulation tool Mininet has been used, which runs on Linux distributions. Mininet overcomes the aforesaid issue by creating virtual network topologies running on the actual kernel. These topologies are designed using a Python script, as sketched below.
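As a minimal illustration of such a script, the following sketch builds a small Mininet topology of OVS switches attached to a remote controller. The controller IP is a placeholder; Mininet must be installed and the script run with root privileges.

```python
# Hedged sketch: a small Mininet topology of the kind described above,
# using OVS switches and an external (ODL or ONOS) controller.
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.cli import CLI

def build():
    net = Mininet(controller=None, switch=OVSSwitch)
    # Attach an external controller instead of the local default controller
    net.addController('c0', controller=RemoteController,
                      ip='192.168.1.10', port=6653)   # placeholder controller IP
    s1 = net.addSwitch('s1')
    h1 = net.addHost('h1', ip='10.0.0.1/24')
    h2 = net.addHost('h2', ip='10.0.0.2/24')
    net.addLink(h1, s1)
    net.addLink(h2, s1)
    net.start()
    CLI(net)          # drop into the Mininet CLI for pingall, iperf, etc.
    net.stop()

if __name__ == '__main__':
    build()
```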

Connectivity between Controller and OVS

The management of the network is only possible if the switch is connected to the controller and visible via the controller. In general, an OVS has multiple ports. First, a bridge should be created to communicate with the controller, and then the specific ports should be assigned to that bridge. However, the port that connects to the controller should not be added to the bridge. Several bridges can also be created in the same switch, with different ports assigned accordingly. Once the bridges are created and the ports are assigned, the administrator can inform the particular bridge of the controller's address. After these steps, the connection between the switch and the controller is established. However, to make sure all hosts are properly attached to the switch, ping tests among the hosts can be used. Once the connection between the OVS and the controller is set, the OVS is visible through the controller and communicates with the controller via its LOCAL port, also called the management port.
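The bridge creation and controller assignment described above can be scripted on a Raspberry Pi based OVS with the standard ovs-vsctl commands. In this sketch the bridge name, the data-port interface names and the controller address are placeholders chosen purely for illustration.

```python
# Hedged sketch: bridging the OVS data ports and pointing the bridge at the
# remote controller, as described above, via ovs-vsctl.
import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

BRIDGE = "br0"
DATA_PORTS = ["eth1", "eth2"]          # USB-to-Ethernet adapters; eth0 stays
                                       # outside the bridge for management
CONTROLLER = "tcp:192.168.1.10:6653"   # placeholder ODL/ONOS OpenFlow endpoint

sh(f"ovs-vsctl --may-exist add-br {BRIDGE}")
for port in DATA_PORTS:
    sh(f"ovs-vsctl --may-exist add-port {BRIDGE} {port}")
sh(f"ovs-vsctl set-controller {BRIDGE} {CONTROLLER}")
sh("ovs-vsctl show")                   # verify bridge, ports and controller
```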

Application Developed to Manage Flows

Instead of the Postman REST client, the network can also be managed using a purpose-built Java application. Therefore, an application was designed to fetch the flows and thus help the administrator understand them easily. Generally, flows to the controller can be categorized into two types: proactive flows and reactive flows. However, reactive flows are only required when employing NFV features for adaptive networks. In this research project, two Java components, the Flow View Form and the Flow Assign Form, are used to manage flows for the controller.

  (a) Flow View Form: This is a user-friendly GUI that can be used to view the flows stored in the controllers. Basically, it includes Java classes and packages that are coded to view the flows. First, HTTP connectivity between the controller and the developed application needs to be created. The following steps are required to establish such connectivity (an equivalent sketch in Python follows this list):

    1. Creating the URL.

    2. Creating the authentication string and encoding it to Base64.

    3. Creating the HTTP connection.

    4. Setting the connection properties.

    5. Receiving the response from the connection input stream to verify the connectivity.

    Then, different kinds of requests can be sent to a particular controller to request flow details. The relevant output flows are returned in the form of a StringBuffer, which represents an expandable and writable character sequence. The pre-defined Java classes JSONObject and JSONArray are then used to extract the string values from the resultant flow. Afterward, the flow details can be displayed on the Flow View Form. Steps 1–4 depicted in Fig. 2 describe the process of getting flow details from the controller. Similarly, if a user wants to delete a previously written flow, an HTTP connection is created for the delete operation, after which the user obtains the details of the deleted flow matched by the requested flow-id.

  (b) Flow Assign Form: As the name implies, this is used to push new flows to the controller. Figure 3 shows the basic steps of the Flow Assign Form. First, a flow needs to be created according to the desired requirements. The Flow Assign Form consists of parameters such as priority, action, port, etc. JSONObject and JSONArray are used to create a JSON flow after gathering all the required parameters. Afterward, HTTP connectivity is set up for putting the relevant flows. Using this connection, the flows can be sent to the particular data store located in either ONOS or ODL.
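The Flow View and Flow Assign forms in this work are implemented in Java; the following is an equivalent minimal outline in Python that mirrors the listed steps (URL, Base64-encoded credentials, HTTP connection, connection properties, response). The RESTCONF path, the admin/admin credentials and the controller IP are placeholder assumptions, not the authors' exact endpoints.

```python
# Hedged sketch: the Flow View/Assign connection logic, mirrored with urllib.
import base64
import json
import urllib.request

def controller_request(url, method="GET", body=None,
                       user="admin", password="admin"):
    # Step 2: build the authentication string and encode it to Base64
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    # Steps 1, 3 and 4: create the URL, open the connection, set its properties
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(url, data=data, method=method)
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
    # Step 5: read the response from the connection input stream
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read().decode()

# Flow View: fetch the flows of table 0 on switch openflow:1 (ODL-style path)
status, flows = controller_request(
    "http://192.168.1.10:8181/restconf/config/"
    "opendaylight-inventory:nodes/node/openflow:1/table/0")
print(status, flows)

# Flow deletion uses the same helper with method="DELETE" and a flow-specific URL.
```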

Fig. 2 Block diagram of the Flow View Form

Fig. 3 Block diagram of the Flow Assign Form

SDN Firewall

Generally, a firewall has the capability of allowing or denying traffic according to the set of rules assigned to it. These rules are placed into the firewall via the proposed northbound application. For this purpose, the Ryu SDN controller is attached to the testbed and runs as a separate server. Ryu is a Python-based controller and it supports a number of freely available libraries. Another reason to deploy the firewall on Ryu is that it has a less complex structure, whereas other controllers, such as ODL and ONOS, are relatively more complex. Therefore, a developer can easily develop their desired applications on Ryu itself.

Ryu is deployed in a separate VM running Linux. First, the underlying network is advertised to all the other controllers, so that the same topology is managed in common via multiple controllers. This allows the underlying topology to use the SDN firewall and operate according to the rules stated inside the firewall. A REST client was developed to manage the firewall functions within a user-friendly graphical user interface framework. It is designed to cover three main functionalities, as shown in Fig. 4.

  • Running the firewall—A separate Python script has been written on the Ryu controller's workstation to run the application.

  • Applying firewall rules—The developed algorithm allows users to apply firewall rules in a simple way.

  • Viewing and deleting the firewall rules (a scripted example of these functions follows the list).
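The three functions above can also be driven directly against Ryu's REST firewall application (started with "ryu-manager ryu.app.rest_firewall"). The endpoint paths and rule fields below follow the Ryu firewall app as we recall it and should be checked against the Ryu documentation; the Ryu host address and the datapath id are placeholders.

```python
# Hedged sketch: enabling the firewall, adding a deny rule, then viewing and
# deleting rules through Ryu's REST firewall interface.
import requests

RYU = "http://192.168.1.30:8080"          # placeholder Ryu REST endpoint
DPID = "0000000000000001"                 # placeholder datapath id of the OVS

# 1. Running/enabling the firewall on the switch (it starts in a blocking state)
requests.put(f"{RYU}/firewall/module/enable/{DPID}")

# 2. Applying a rule: deny ICMP from h2 (10.0.0.2) to h3 (10.0.0.3)
rule = {"nw_src": "10.0.0.2/32", "nw_dst": "10.0.0.3/32",
        "nw_proto": "ICMP", "actions": "DENY"}
print(requests.post(f"{RYU}/firewall/rules/{DPID}", json=rule).json())

# 3. Viewing and deleting rules
print(requests.get(f"{RYU}/firewall/rules/{DPID}").json())
requests.delete(f"{RYU}/firewall/rules/{DPID}", json={"rule_id": "1"})
```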

Fig. 4 Main interface for the firewall application

Results and Discussion

The developed application is capable of handling the main tasks of a multi-controller SDN platform. This ensures that it can be further developed to support more features under distributed SDN architectures. The ODL and ONOS controllers have been used to develop a multi-controller testbed. Flows are sent to each controller via the developed Flow Manager application, and the results obtained for each scenario are described in this section. Mininet is used to create topologies for testing, and appropriate flows for testing the topologies are sent via the application.

Testing the Application for ODL

The hosts in the Mininet topology are not capable of communicating with each other at the beginning. To enable Layer 2 communication between the hosts in the network topology, the odl-l2switch-all feature should be installed and enabled. Once it is enabled, the hosts can reach each other. To apply changes to the topology, flows should be pushed to the Open vSwitches in the network via the controller. However, before pushing the flows, the connectivity of each host should be verified. Fig. 5 confirms the connectivity of each host in the network.

Fig. 5 Checking the connectivity among hosts before adding the flow

To push a new flow, it is sent to the controller and then from the controller to the relevant switch via the developed application, as shown in Fig. 6.

Fig. 6 How to push a flow in ODL Flow Manager

This application simplifies the task of the administrator in managing the flows of the network. Here, the controller IP is the IPv4 address of the ODL controller. The flow is written to the openflow:1 switch, and Port 1, which is used to write the flow, belongs to that switch. When the necessary parameters, such as the flow name, priority, action and table-id, are inserted, the information is forwarded directly to the controller instead of logging in to each switch terminal. According to the added flow, the flow is written only targeting the switch openflow:1 and its Port 1. In the meantime, other packets from h1 to all other hosts in the network should be denied. Fig. 7 depicts this behavior.

Fig. 7 Checking the connectivity among hosts after adding the new flow

The next step is to delete a flow from the switch. Therefore, the same flow added earlier was deleted as shown in Fig. 8. The CLI output shown in Fig. 9 confirms that both the adding and the deleting operations are accepted by the controller correctly.

Fig. 8 How to delete the previously added flow in ODL Flow Manager

Fig. 9 Checking the connectivity among hosts after deleting the previously added flow

Testing the Application for ONOS

A similar procedure to the one followed with ODL was used to test the developed application with the ONOS controller. As with ODL, some features should be enabled in ONOS to allow Layer 2 communication between the hosts in the network. This is achieved by enabling onos-apps-fwd. Once this feature is enabled, communication among the hosts is possible, and flows can be added to a specific switch via the controller. The same developed application is capable of adding flows through the ONOS controller, which confirms the application's steadiness in a multi-controller platform. Similar to ODL, the specific switch and port can be selected by fetching the flows after entering the IPv4 address of the controller. Once the switches and the ports are loaded, parameters such as action/node connector, priority and type can be set via the application. One notable difference is that there is no flow name in the ONOS controller, since it does not support custom names for flows. Instead, ONOS itself generates an ID for the flow, as shown in Fig. 10.
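The same behavior can be illustrated with a raw REST request against ONOS, where the controller returns the flow id it generated instead of accepting a user-supplied name. The JSON structure below follows the ONOS flow REST model as we recall it; the device id, controller IP and onos/rocks credentials are placeholders.

```python
# Hedged sketch: pushing a flow to ONOS over REST; ONOS assigns the flow id.
import requests
from requests.auth import HTTPBasicAuth

ONOS = "http://192.168.1.20:8181/onos/v1"        # placeholder controller IP
AUTH = HTTPBasicAuth("onos", "rocks")            # assumed default ONOS credentials
DEVICE = "of:0000000000000001"                   # placeholder device id

flow = {
    "priority": 40000,
    "timeout": 0,
    "isPermanent": True,
    "deviceId": DEVICE,
    # Send matched packets to the controller (the behavior described below)
    "treatment": {"instructions": [{"type": "OUTPUT", "port": "CONTROLLER"}]},
    "selector": {"criteria": [{"type": "IN_PORT", "port": 2}]}
}

resp = requests.post(f"{ONOS}/flows/{DEVICE}", json=flow, auth=AUTH)
print(resp.status_code)
# The Location header of the response is expected to carry the generated flow id
print(resp.headers.get("Location"))
```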

Fig. 10 How to push a flow with ONOS

Afterward, the added flow is available within the switch. The flow is written to the relevant switch 1 and its Port 2, and the action is "to controller", which has the same effect as dropping the traffic in this setup. Therefore, all the packets from the host connected to Port 2, i.e., h2, should be dropped. This can be observed in Fig. 11.

Fig. 11 Checking the connectivity among hosts after adding the flow

The application also supports the flow deletion feature for ONOS similar to ODL. Once the flow is deleted, the connectivity should be restored as shown in Fig. 12.

Fig. 12 Checking the connectivity among hosts after deleting the previously added flow with ONOS

Throughput Analysis of ODL and ONOS

The two controllers, ODL and ONOS, are deployed on two separate workstations and are used in their default configurations, as shown in Fig. 13. Mininet runs on another machine to emulate the network that needs to be analyzed. The emulated network is a tree topology written as a Python script. Results are obtained and analyzed by changing the depth of the topology from 2 to 4 while keeping a constant fan-out of 3, as shown in Fig. 14. The depth of the topology is defined as the number of layers of the main network components (switches), and the fan-out is the number of links connected to a single main network element or switch. Here, the tree topology was used to increase the number of core network elements, so that the network load could become a bottleneck for the stability and proper functionality of the controllers; a better analysis can then be done as the load in the core network increases. Once the network is emulated, a TCP server and a TCP client are defined. For all three network topologies, Host 1 (h1) is considered the TCP server while the TCP client is the last emulated host. The throughput between the TCP server and the TCP client is measured using iperf, a network measurement tool available for Linux; a hedged sketch of this experiment follows.
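The following sketch reproduces the shape of this experiment with Mininet's built-in tree topology and its iperf helper, measuring TCP throughput between h1 (server) and the last emulated host (client) over 15 s for each depth. The remote controller IP is a placeholder; the depth and fan-out values mirror those stated above.

```python
# Hedged sketch: tree-topology throughput measurement with Mininet and iperf.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topolib import TreeTopo

def run(depth, fanout=3, controller_ip='192.168.1.10'):
    topo = TreeTopo(depth=depth, fanout=fanout)
    net = Mininet(topo=topo, controller=None, build=False)
    net.addController('c0', controller=RemoteController,
                      ip=controller_ip, port=6653)   # placeholder controller IP
    net.start()
    server = net.get('h1')                  # h1 acts as the TCP server
    client = net.hosts[-1]                  # the last emulated host is the client
    # net.iperf expects (client, server); seconds=15 matches the measurement window
    result = net.iperf((client, server), seconds=15)
    print(f"depth={depth} fanout={fanout} throughput={result}")
    net.stop()

if __name__ == '__main__':
    for d in (2, 3, 4):
        run(depth=d)
```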

Fig. 13 Experimental network setup

Fig. 14 Mininet topologies for Depth = 2, 3 and 4

Throughput data are obtained for all three topologies, considering each controller individually. In this case, the throughput is measured over the first 15 s after emulating the network in Mininet. To obtain better results, five data sets were averaged for each topology per controller. Figure 15 shows the variation of the throughput of ODL under the different topologies. No large variations of the throughput are observed when an individual topology is considered; each topology maintains an almost fixed throughput. However, with the change of the topology, i.e., with the increase of elements in the network, ODL cannot retain the same throughput. This results in a throughput reduction of nearly 40% when changing the depth of the topology from 2 to 3, and of nearly 60% when changing the depth from 3 to 4.

Fig. 15 Throughput variation of ODL controller

Figure 16 shows the variation of the throughput of ONOS with the generated tree topologies. ONOS is also capable of keeping a constant throughput over the 15 s duration. However, changing the topology, i.e., increasing the number of core network elements, results in an increase of the network throughput. An increase of nearly 50% is observed when changing the depth of the topology from 2 to 3. The gain reduces to about 20% when changing the depth from 3 to 4, but an increase is still retained. Note that whether throughput is measured in the range of Mbps or Gbps does not affect the results, since each controller was considered independently.

Fig. 16 Throughput variation of ONOS controller

SDN Firewall

Ryu is used as the controller for developing the SDN firewall. In general, the firewall functionality operates at Layer 4 of the Open Systems Interconnection (OSI) model. Three protocols were considered when designing and evaluating the firewall: it is capable of allowing or denying ICMP, TCP or UDP traffic generated by hosts in the network. First, the firewall should be enabled on the switch. Once the firewall is enabled, there is no connectivity among the hosts in the network. To enable the connectivity between the hosts, a new rule should be added; when the rule is added, the hosts are able to communicate with each other. A rule added in the firewall application to deny Internet Control Message Protocol (ICMP) traffic from h2 (10.0.0.2) to h3 (10.0.0.3) is shown in Fig. 17.

Fig. 17 Adding an ICMP denying rule to the firewall application

The added rule can be removed to reinstate the communication between the aforesaid hosts. Confirmation of restoring the previously denied connection between h2 and h3 is shown in Fig. 18. We followed the same procedure to implement the firewall application with the other two transport layer protocols, TCP and UDP, to impose traffic restrictions.

Fig. 18 Ping command to verify the reinstated connectivity after deleting the firewall denied rule between h2 and h3

Conclusion

In this research, we have focused on two main aspects of SDN: how to accommodate multi-vendor platforms and how an NFV feature is applied to the SDN testbed. Although both controllers perform well in a distributed environment, it was observed that the OpenDaylight controller fails to keep the throughput at a constant level as the network size increases, whereas ONOS increases its throughput with increasing depth of the network topology. Hence, it can be deduced that for a large-scale distributed SDN environment, ONOS is much more robust than OpenDaylight. Although ONOS is quantitatively better than OpenDaylight, it was also revealed that OpenDaylight flows are more informative than ONOS controller flows. For example, the OpenDaylight controller has a unique flow name to identify a flow, whereas ONOS does not support that. Therefore, when a REST-client application is developed, OpenDaylight is more flexible for coding purposes.

Moreover, we have demonstrated how economical computers such as Raspberry Pi units can be used to mimic a network of OpenFlow switches in SDN-enabled large-scale networks. This application was developed up to Layer 2 as a multi-controller platform; nevertheless, it would be interesting to extend it up to the application layer. In this research project, we had to override the functions in the ONOS core and extend them based on our requirements. In addition, two different data stores were maintained separately for OpenDaylight and ONOS. Hence, as future work, it is recommended to consider a common data store for both controllers. The firewall application was developed only for the Ryu controller; it could also be made into a common firewall for all controllers.

Abbreviations

API: Application Programming Interface

APIC-EM: APIC Enterprise Module

CLI: Command Line Interface

COTS: Commercial Off-the-Shelf

GUI: Graphical User Interface

ICMP: Internet Control Message Protocol

JDK: Java SE Development Kit

JSON: JavaScript Object Notation

JVM: Java Virtual Machine

MAC: Media Access Control

NBI: Northbound Interface

NETCONF: Network Configuration Protocol

NFV: Network Functions Virtualization

ODL: OpenDaylight

ONOS: Open Network Operating System

OS: Operating System

OSPF: Open Shortest Path First

OVS: Open vSwitch

OVSDB: Open vSwitch Database

RAM: Random Access Memory

RIB: Routing Information Base

SBI: Southbound Interface

SDN: Software-Defined Networking

URL: Uniform Resource Locator

VM: Virtual Machine

XML: Extensible Markup Language

References

  1. Kreutz D, Ramos FMV, Veríssimo PE, Rothenberg CE, Azodolmolky S, Uhlig S. Software-defined networking: a comprehensive survey. Proc IEEE. 2015;103(1):14–76.

  2. Wang F, Wang H, Lei B, Ma W. A research on high-performance SDN controller. International Conference on Cloud Computing and Big Data. 2014; pp. 168–174.

  3. Routray SK, Sharmila KP. Software defined networking for 5G. 4th International Conference on Advanced Computing and Communication Systems (ICACCS). 2017; pp. 1–5.

  4. Hu F, Hao Q, Bao K. A survey on software-defined network and OpenFlow: from concept to implementation. IEEE Commun Surv Tutor. 2014;16(4):2181–206.

  5. Isong B, Molose RRS, Abu-Mahfouz AM, Dladlu N. Comprehensive review of SDN controller placement strategies. IEEE Access. 2020;8:170070–92.

  6. Badotra S, Singh J. Open daylight as a controller for software defined networking. Int J Adv Res Comput Sci. 2017;8(5):1105–11.

  7. Medved J, Varga R, Tkacik A, Gray K. OpenDaylight: towards a model-driven SDN controller architecture. IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks. 2014; pp. 1–6.

  8. Eftimie A, Borcoci E. SDN controller implementation using OpenDaylight: experiments. 13th International Conference on Communications (COMM). 2020.

  9. Kim W, Li J, Hong JW, Suh Y. OFMon: OpenFlow monitoring system in ONOS controllers. IEEE NetSoft Conference and Workshops (NetSoft). 2016; pp. 397–402.

  10. Shin JW, Lee HY, Lee WJ, Chung MY. Access control with ONOS controller in the SDN based WLAN testbed. 8th International Conference on Ubiquitous and Future Networks (ICUFN). 2016; pp. 656–660.

  11. Paliwal M, Shrimankar D, Tembhurne O. Controllers in SDN: a review report. IEEE Access. 2018;6:36256–70.

  12. Vizarreta P, et al. Assessing the maturity of SDN controllers with software reliability growth models. IEEE Trans Netw Serv Manag. 2018;15(3):1090–104.

  13. Tello AMD, Abolhasan M. SDN controllers scalability and performance study. 13th International Conference on Signal Processing and Communication Systems (ICSPCS). 2019; pp. 1–10.

  14. Salman O, Elhajj IH, Kayssi A, Chehab A. SDN controllers: a comparative study. 18th Mediterranean Electrotechnical Conference (MELECON). 2016; pp. 1–6.

  15. Gorja P, Kurapati R. Extending open vSwitch to L4-L7 service aware OpenFlow switch. IEEE International Advance Computing Conference (IACC). 2014; pp. 343–347.

  16. Čejka T, Krejčí R. Configuration of open vSwitch using OF-CONFIG. IEEE/IFIP Network Operations and Management Symposium (NOMS). 2016; pp. 883–888.

  17. Alcorn J, Melton S, Chow CE. Portable SDN testbed prototype. 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). 2017; pp. 109–110.

  18. Kurniawan A. Throughput performance of routing protocols based on SNR in wireless mobile ad hoc networks. 1st International Conference on Wireless and Telematics (ICWT). 2015; pp. 1–6.

  19. Asadollahi S, Goswami B, Sameer M. Ryu controller's scalability experiment on software defined networks. IEEE International Conference on Current Trends in Advanced Computing (ICCTAC). 2018; pp. 1–5.

  20. Shu Z, Wan J, Li D, Lin J, Vasilakos AV, Imran MA. Security in software-defined networking: threats and countermeasures. Mob Netw Appl. 2016;21:764–76.

  21. Kaloxylos A. A survey and an analysis of network slicing in 5G networks. IEEE Commun Stand Mag. 2018;2(1):60–5.

  22. Mijumbi R, Serrat J, Gorricho J, Bouten N, De Turck F, Boutaba R. Network function virtualization: state-of-the-art and research challenges. IEEE Commun Surv Tutor. 2016;18(1):236–62.


Acknowledgements

This work was supported by the Research Division at Sri Lanka Telecom (SLT), Sri Lanka. In this regard, we would like to acknowledge Eng. Anuradha Udunuwara for his support, guidance and advice in conducting the research successfully.

Author information


Contributions

Already stated under previous sections.

Corresponding author

Correspondence to Eranda Harshanath Jayatunga.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

N/A.

Data availability

The data that support the findings of this study are available on request from the corresponding author, E.H. Jayatunga.

Code availability

Same as the data availability statement above.

Consent to participate

No other parties in addition to authors were involved in the research.

Consent for publication

No other parties in addition to authors were involved in the research.

Additional information


This article is part of the topical collection “Cyber Security and Privacy in Communication Networks” guest edited by Rajiv Misra, R K Shyamsunder, Alexiei Dingli, Natalie Denk, Omer Rana, Alexander Pfeiffer, Ashok Patel and Nishtha Kesswani.


Keywords

  • Software-Defined Networking
  • OpenDaylight
  • ONOS
  • Ryu
  • OpenFlow