
1 Introduction

To meet the huge demand for content distribution, the network needs a paradigm shift from IP-based services to information-based services. In this way, information can be transferred efficiently across a wide range of services, especially high-definition video transmission with its high bitrate/speed requirements. In fact, video content has become a major part of total Internet traffic, and mobile/wireless data traffic is a notable trend for future content access. According to Cisco's report, IP video traffic occupied 75% of the whole Internet in 2017, and this number will increase to 82% by 2022, by which time mobile devices will carry 44% of the total IP data traffic [1]. Thus, ensuring mobile users' VoD (Video on Demand) experience is key to realizing an efficient content distribution model for the Internet.

In this context, ICN (Information-Centric Networking) [2, 3] has been proposed as a promising future network approach since 2005. The key features of ICN include in-network caching and the use of named data instead of IP addresses for forwarding and routing content. However, ICN still has implementation issues: every content node in ICN needs memory storage for content caching, which makes it consume more power than an IP router [4, 5]. Also, the default caching mechanism in ICN produces high cache redundancy by wasting cache space on on-path content duplicates, as analyzed in our prior work [6, 7].

Currently, many video service providers have selected CDN (Content Delivery Network) as their solution for serving the vast video traffic. The idea of CDN is to place dedicated caches as distributed servers at the edge of various network domains or geographical regions to reduce network load and response time. However, considering the high cost of the cache servers and the video resources, CDN still has feasibility issues for deployment.

Thus, in this paper, we propose integrating CDN and ICN as a 5G network slice for efficient video streaming service, as detailed in the following sections.

2 Related Work

In this section, we introduce the major concepts that are related to our proposal.

2.1 Content-Centric Networking (CCN)

Content-Centric Networking (CCN) [2] is a notable ICN platform that enables users to obtain desired content through its name instead of its location, as defined in the existing IP-based Internet architecture. CCN is implemented using ICN routers with a caching function, rather than IP routers, to realize efficient content dissemination.

In CCN, there are two types of packet: Interest and Data. An Interest carries the name of the content requested by the consumer. Data packets carry content data and act as responses to content requests, i.e., Interests. The data transmission unit in CCN is the chunk, i.e., a content object is split into a number of equally sized chunks.

For the content distribution process, Interests are first sent by the consumer and then propagated throughout the network. This step can be regarded as a search strategy: once a user sends an Interest, it is directed to the nearest node holding the content to minimize transmission time. Besides, users in the same area may express Interests for the same content, so they can obtain the desired content from suitable intermediate ICN nodes without downloading it from the content provider (server).

For the data retrieval process, the content name is first looked up in the cache memory of the CCN router, called the Content Store (CS). If the Interest matches a stored name, the data is returned to the consumer via the corresponding face (network interface). Otherwise, the Interest is checked against the Pending Interest Table (PIT) [2]. If a matching PIT entry already exists, the Interest's incoming face is added to that entry and the Interest is not forwarded again; when the data arrives, it is sent back along the reverse path recorded in the PIT. If no matching PIT entry exists, a new entry is created and the Interest is forwarded according to the Forwarding Information Base (FIB) [2, 8].
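The lookup order described above (CS, then PIT, then FIB) can be sketched as follows. This is a minimal illustrative model, not an actual CCN implementation; the class and method names are our own, and longest-prefix matching is approximated with a string-prefix test.

```python
class CCNRouter:
    """Illustrative sketch of CCN Interest processing: CS -> PIT -> FIB."""

    def __init__(self):
        self.cs = {}    # Content Store: content name -> data chunk
        self.pit = {}   # Pending Interest Table: name -> set of incoming faces
        self.fib = {}   # Forwarding Information Base: name prefix -> outgoing face

    def on_interest(self, name, in_face):
        # 1. Content Store hit: return the cached data directly.
        if name in self.cs:
            return ("data", self.cs[name], in_face)
        # 2. PIT hit: aggregate the request; do not forward a duplicate Interest.
        if name in self.pit:
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        # 3. FIB lookup (longest matching prefix first) and forward upstream.
        for prefix in sorted(self.fib, key=len, reverse=True):
            if name.startswith(prefix):
                self.pit[name] = {in_face}
                return ("forwarded", None, self.fib[prefix])
        return ("dropped", None, None)

    def on_data(self, name, data):
        # Data follows the reverse path recorded in the PIT and is cached in the CS.
        faces = self.pit.pop(name, set())
        self.cs[name] = data
        return faces
```

A second Interest for a pending name is aggregated into the existing PIT entry, so only one copy travels upstream while the returning Data satisfies every recorded face.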

Besides the name-based forwarding strategy, another key feature of CCN is in-network caching. Unlike the TCP/IP design, a CCN node has its own cache memory so that it can store downloaded content dynamically. In other words, once a content object has been downloaded, it is cached by the CCN nodes along the path. Hence, the total content downloading time and the E2E (End-to-End) hop count can be reduced.

2.2 Content Delivery Network (CDN)

Recently, CDN has been widely implemented to serve content-based services, represented by well-known operators and services such as Akamai and Netflix. A CDN deploys edge servers that hold contents from the original content producers/servers. In this way, the data traffic of the original server can be split across multiple mirror cache servers, relieving the burden on the source server. Additionally, based on users' geographic information (such as IP addresses), the DNS (Domain Name System) server can identify which edge server is nearest to each user. Thus, it can allocate the most appropriate server to the user, and the downloading time on the user side can be reduced considerably. Hence, users' QoE (Quality of Experience) is improved as well. Also, CDN users from different ISPs (Internet Service Providers) and regions can obtain a similar bitrate experience [9].
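The DNS-based redirection just described can be sketched as a lookup that picks the edge server with the lowest expected latency for the client's region. The server names and latency figures below are purely illustrative assumptions, not measurements from any real CDN.

```python
# Minimal sketch of CDN request routing: the resolver picks the edge
# server with the lowest measured latency to the client's region.
# Server names and latency values (ms) are illustrative only.

EDGE_SERVERS = {
    "edge-eu": {"eu": 10, "jp": 180},
    "edge-jp": {"eu": 190, "jp": 8},
}

def resolve_edge(client_region):
    """Return the edge server with the lowest latency for this region."""
    return min(EDGE_SERVERS, key=lambda s: EDGE_SERVERS[s][client_region])
```

In a real deployment the latency table would be replaced by geolocation databases and live measurements, but the selection principle is the same.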

However, CDN still has its disadvantages. Firstly, due to the high cost of deploying CDN mirror edge servers, CDN users are asked to pay an additional fee for premium services. Secondly, the content updating process (i.e., pushing new video contents to the edge servers) takes time and may not be suitable for every scenario. In general, CDN is primarily used for VoD (Video on Demand) or downloading services; in some scenarios, however, users do not need all of the cached contents, so part of the valuable cache memory is wasted. Thirdly, although deploying CDN servers can reduce and separate data traffic from the core server, bottlenecks can still occur.

Specifically, when users' requests surge in one specific area, the data traffic between the users and that area's CDN edge cache servers can cause a network bottleneck. To prevent this situation, the service providers and the CDN operator have to increase the number of available CDN caches, which leads to a higher cost.

2.3 SDN/NFV (Software-Defined Networking/Network Functions Virtualization)

Due to the increasingly diverse needs on the user side, ISPs currently require adaptive services to meet users' various demands. However, unlike allocating a single service, customizing multiple types of service via physical network resource configuration is challenging. For example, FHD (Full High-Definition) or even UHD (Ultra High-Definition) video streaming and VoIP (Voice over IP) are among the most popular network services today, but they have different optimal network resource configurations. Particularly, HD/UHD video streaming requires high bandwidth and throughput, while VoIP requires a stable, low-latency network environment. Moreover, setting up and tuning each network service takes time and incurs a high cost. The introduction of NFV (Network Functions Virtualization) and SDN (Software-Defined Networking) in recent years aims to satisfy this strong demand for dynamic network configuration.

NFV uses virtualization technology to split network applications in a simple and adaptive manner. A general NFV architecture usually includes VNFs (Virtualized Network Functions), hardware resources, a virtualization layer, the NFVI (NFV Infrastructure), and NFV Management and Orchestration (VNFM and NFVO). Specifically, the NFVI is the key to managing the hardware resources and turning the physical hardware into a virtualized resource pool so that the computing components can be managed flexibly and conveniently. The VNFs install and provide service applications. Also, the VNFM and NFVO are responsible for managing and orchestrating the NFV's overall resources and processes [10].

While NFV offers the solution for virtualizing physical network resources, SDN is the complementary technology needed for link virtualization. The most crucial feature of SDN is that it separates the network into a control plane and a data plane. By separating the two planes, network control becomes more convenient and flexible, since the control plane requires flexibility whereas the data plane requires low latency. For instance, processes such as switching routing protocols and generating routing tables can be implemented on one control plane. To realize this separation, a protocol named OpenFlow is commonly used [11].
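The control/data-plane split can be illustrated with a toy model in which a controller computes routes and installs match-action rules, while the "switch" only performs table lookups. This mimics the OpenFlow model in spirit but is not an OpenFlow implementation; all names are hypothetical.

```python
# Sketch of the SDN split: the controller (control plane) computes routes
# and pushes rules; the switch (data plane) only matches and acts.

class Switch:
    def __init__(self):
        self.flow_table = {}              # destination -> output port

    def install_rule(self, dst, out_port):
        # Invoked by the controller over the control channel.
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Data plane: a pure table lookup, no routing logic on the switch.
        # Unknown destinations are punted to the controller, as in OpenFlow.
        return self.flow_table.get(dst, "send-to-controller")

class Controller:
    """Control plane: holds centrally computed routes and programs switches."""
    def __init__(self, routes):
        self.routes = routes              # destination -> output port

    def program(self, switch):
        for dst, port in self.routes.items():
            switch.install_rule(dst, port)
```

Because the forwarding state lives in a table written by the controller, rerouting a slice only requires pushing new rules rather than reconfiguring each device by hand.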

2.4 Network Slicing

Network slicing is a virtualization technology based on NFV/SDN. It enables running multiple logical networks on one shared physical network infrastructure to maximize flexibility and maintainability. Besides, the network resources are split dynamically so that each logical network's parameters, such as capacity, bandwidth, and delay, can be customized according to user requests and network status.

Network slicing plays an essential role as a key concept in the upcoming 5G network to realize various high-speed network applications, e.g., IoT (Internet of Things) and 8K streaming services [12]. Particularly, ISPs can manage and customize each network slice simply by defining each network service as one dynamic network slice and separating the slices logically [13].

In this research, we apply the emerging concepts of ICN, CDN, and SDN/NFV to realize a network slicing design for efficient content delivery across multiple geographical locations.

3 System Design

In this section, we introduce the concept and mechanism of the proposed CDN/ICN content delivery system.

3.1 The Proposed ICN/CDN System

In this research, we combine CDN with ICN to enable an efficient content delivery network system with a low congestion rate. Besides, to meet the future mobile network's needs, we also propose using our system as one 5G service slice in the context of "5G! Pagoda", a collaborative Europe-Japan research project on softwarized 5G network evolution [14].

The benefits of integrating ICN and CDN are three-fold as follows:

Firstly, using CDN and ICN together can drastically reduce the congestion ratio of the whole network. Particularly, as mentioned in Sect. 2, CDN can reduce the data traffic of the core/original server by deploying multiple edge mirror cache servers in various regions. However, when the number of users in one region becomes large enough, congestion occurs at the edge servers. Therefore, by adding ICN nodes linked to the CDN edge servers, we aim to substantially diminish the data traffic of the CDN edge cache servers thanks to ICN's in-network caching feature [15].

The second benefit is improving users' QoE, particularly delay time, especially for contents with a high popularity level. By using CDN cache servers for dynamic and optimized content allocation, the download time for the requested content can be reduced considerably [16]. Specifically, this improvement is realized by the efficient retrieval process from the CDN slice to the ICN slice via the appropriate CDN cache and the ICN Gateway.

Additionally, as ICN is a potential future network design still at the initial deployment stage while CDN has been a successful content-based business model, combining CDN and ICN lets us take advantage of the merits of both networking models: CDN is used for optimal content allocation at suitable CDN caches near the clients, while ICN is used to distribute content quickly to users via ICN nodes with their built-in dynamic in-network caching feature.

3.2 System Overview

To show that our ICN/CDN system can provide an efficient and realistic video delivery model in the real world, we configure the whole system across different continents.

Specifically, the system has been implemented to transfer content objects (videos) published in Finland (EU, Europe) to Japan (JP, Asia). In this way, we aim to model and realize a promising and practical video streaming system deployment, similar to, e.g., Netflix or YouTube.

Figure 1 shows our integrated ICN/CDN system configuration. In our design, the system includes two major parts: a CDN slice for the content provider side ("EU Side") and an ICN slice for content distribution to users ("Japan Side"). In the EU region, the original CDN content server deployed at Aalto University, Finland (Aalto server) has been set up to publish video contents. Also, we assume that users request their desired video contents from the Japan region. Thus, the CDN mirror cache server and the ICN nodes are deployed on the Japan side.

Fig. 1. ICN/CDN system configuration

Since our overall goal is to realize the 5G network slicing concept, each network slice is implemented based on SDN/NFV technology. It is also necessary to define a regional Orchestrator to manage every VNF instance dynamically. Typically, the EU Orchestrator is responsible for managing each instance and resource pool on the EU side, while the Aalto servers represent the CDN publishers/providers.

The ICN slice configuration is shown in Fig. 2, in which ICN nodes are implemented by the JP Orchestrator (Hitachi Orchestrator) using the CCN platform on the Japan side. Firstly, the video contents are stored in the ICN Gateway (an OpenStack-based CDN/ICN edge video cache server on the Japan side). Then, we add the ICN Gateway's FIB entries to the CCN nodes for an efficient forwarding process in ICN. Hence, the video content information from the ICN cache can be shared throughout the Japan domain via the CCN nodes in the ICN slice. To reduce the network load and congestion, we use multiple ICN-enabled edge nodes (with transient caches and substantially lower cache storage compared to the CDN caches) to separate the content traffic of the ICN Gateway from the other ICN intermediate nodes.

Fig. 2. ICN slice configuration

3.3 End-to-End (E2E) Content Delivery

In this part, we briefly present a complete E2E content delivery procedure of the proposed ICN/CDN system. It consists of four major steps: slice establishment, slice stitching, content request, and content delivery.

Firstly, in the "slice establishment" stage, all the CDN and ICN NFV instances are initiated and allocated virtually. During this stage, both the JP and EU Orchestrators create and configure each instance with a suitable virtual configuration dynamically.

Then, the "slice stitching" stage is executed. After the ICN and CDN instances have been established, the JP Orchestrator informs the ICN Coordinator and provides it with the FIB entries of the ICN Gateway so that the ICN nodes can determine a suitable routing path. Once the ICN nodes have added the FIB entries of the ICN Gateway, the slice stitching process is complete.
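The stitching exchange can be sketched as follows: the Orchestrator hands the Gateway's FIB entries to the Coordinator, which installs them on every managed ICN node. The classes and method names are illustrative assumptions, not the actual 5G! Pagoda interfaces.

```python
# Illustrative sketch of slice stitching: the Coordinator pushes the ICN
# Gateway's FIB entries to all ICN nodes so they can route toward it.

class ICNNode:
    def __init__(self, name):
        self.name = name
        self.fib = {}                     # name prefix -> next hop

    def add_fib_entry(self, prefix, next_hop):
        self.fib[prefix] = next_hop

class ICNCoordinator:
    def __init__(self, nodes):
        self.nodes = nodes                # ICN nodes in this slice

    def stitch(self, gateway_fib):
        # Install each Gateway FIB entry on every managed node.
        for prefix, next_hop in gateway_fib.items():
            for node in self.nodes:
                node.add_fib_entry(prefix, next_hop)
        return True                       # stitching complete
```

After this step every node can forward Interests for the published prefixes toward the ICN Gateway, which is exactly what completes the stitch between the two slices.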

Next, the user sends a content request and the content delivery process is performed. Once slice stitching is complete and the video service is triggered at the CDN slice, the user receives from the CDN Coordinator a table of contents listing all available videos' "exact names" (detailed in Sect. 3.4) together with resolution, video name, bitrate, and the corresponding CDN cache. The user can then choose which video to watch. When the user selects the desired video content, the system generates an ICN Interest to the ICN Gateway to check whether this content is already cached in the ICN slice. If it has been cached, the target content is renamed into the CCN "exact name" format and sent back to the User Equipment (UE). When the UE receives and acknowledges the exact content name, the content transmission starts. Otherwise, if the ICN Gateway does not have the requested content, the Interest is converted into a content request to the suitable CDN server holding that content. The CDN server then pushes the requested content to the ICN Gateway. In this way, users can receive the content from the ICN slice when Interests for the same content are received again.
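The Gateway's decision logic in this step can be sketched as a simple cache-or-pull routine: serve a hit from the ICN cache, or on a miss pull the content from the CDN and cache it for subsequent Interests. The class and names are illustrative; the real gateway is the OpenStack-based node described in Sect. 3.5.

```python
# Sketch of the ICN Gateway's request handling: cache hit -> serve from
# the ICN slice; cache miss -> convert the Interest to a CDN request,
# pull the content, and cache it under its CCN "exact name".

class ICNGateway:
    def __init__(self, cdn_fetch):
        self.cache = {}             # CCN exact name -> content
        self.cdn_fetch = cdn_fetch  # callable: CDN content name -> content

    def handle_interest(self, exact_name, cdn_name):
        if exact_name in self.cache:
            return ("hit", self.cache[exact_name])
        # Miss: pull from the CDN so later Interests stay in the ICN slice.
        content = self.cdn_fetch(cdn_name)
        self.cache[exact_name] = content
        return ("miss", content)
```

The first request pays the cross-continent cost; every later Interest for the same exact name is satisfied inside the ICN slice.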

3.4 Naming Strategy in ICN

One key feature of ICN is that information is routed based on the content name instead of an IP address. Specifically, each ICN content object has its own unique ICN name, without the need for name resolution via the DNS (Domain Name System) as in the TCP/IP architecture.

However, since in our ICN/CDN video streaming system both ICN (the CCN platform) and IP (CDN) co-exist, a suitable way to transfer and convert content names from the IP format to the CCN format must be considered so that UEs on the Japan side can receive the desired data efficiently via the Gateway in the ICN slice [17].

Firstly, the initial content name on the CDN side consists of an article name, resolution, bitrate, and video package format. For example, on the Aalto CDN server, a demo video content is named "Demo-1920*1080-3000kbps.avi".

However, in CCN, since we want to let the user know which ICN node is involved in serving a specific content, we implement another naming format by adding a "Node ID". The "Node ID" identifies the caching node that serves the requested content in ICN from the CDN slice; in the simplified case, the "Node ID" is "ICN Gateway". Besides, we add the content source's location before the "Node ID" (at the left-most position) in the content naming structure. As "Finland, EU" (Aalto University) is the content publisher's location in our system design, the CCN name becomes "/EU/Finland/Aalto-University/ICN-Gateway/Demo-1920*1080-3000kbps.avi". With this naming format, the user is aware of the content's origin and can decide whether to receive the content or not. Since CCN uses Longest-Prefix Match (LPM) on names for its forwarding and routing procedures, we define the proposed full CCN name structure as the content's "exact name".
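The conversion from the CDN name to the CCN "exact name" described above amounts to prepending the publisher's location and the Node ID. A minimal sketch, with the default arguments reflecting the example in this section:

```python
def to_exact_name(cdn_name,
                  location="/EU/Finland/Aalto-University",
                  node_id="ICN-Gateway"):
    """Build the CCN 'exact name': publisher location, then the serving
    Node ID, then the original CDN content name."""
    return f"{location}/{node_id}/{cdn_name}"
```

Because the location and Node ID occupy the left-most name components, LPM forwarding naturally routes Interests toward the publisher's domain and the serving node before matching the content itself.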

3.5 ICN Gateway

As shown in Figs. 1 and 2, between the ICN slice and the CDN slice we have deployed an additional OpenStack-based node named "ICN-Gateway". The ICN-Gateway supports both the CCN and TCP/IP protocols so that video content from the CDN cache server can be cached inside the CCN repository. From the ICN-Gateway, cached contents are then stored at the ICN intermediate nodes and transferred to users, minimizing the latency of subsequent content accesses.

Meanwhile, the ICN-Gateway is responsible for converting content names from the CDN naming format into the proposed CCN "exact name".

4 System Evaluations

4.1 FLARE-Based ICN Nodes

For the proposed ICN/CDN system implementation, we build a joint testbed for E2E content delivery at Waseda University, in which the virtual content node images and configuration settings are installed on deeply programmable nodes, namely FLARE, developed by the University of Tokyo. We select FLARE because it provides an open, deeply programmable switch/network node architecture for verifying the merits of the proposal over a multi-domain testbed with multi-core processors toward 5G slicing [18]. FLARE also realizes resource isolation with a lightweight control plane and data plane programmability. We then implemented ICN-based virtualization nodes on FLARE; the hardware configuration of FLARE is shown in Table 1. Specifically, we use Docker as the container technology to implement the ICN nodes' virtualization over FLARE.

Table 1. FLARE’s detailed configuration.

Also, since Hitachi, Ltd. acts as the Orchestrator of the Japan domain (Fig. 1), the ICN nodes can be established and managed dynamically so that the ICN slice follows the network slicing standardization [19].

4.2 The Proposed System Configuration

Note that in this research we focus on the ICN slice design for content distribution when the content objects are already stored at the ICN Gateway from the CDN slice, i.e., we suppose that optimal VNF placement in CDN slicing over multiple domains has already been performed. This paper therefore differs from our prior work in the same research theme, which presented the overall integrated ICN/CDN system design [16].

For the experimental evaluation, we have set up the ICN slice configuration shown in Fig. 3. At first, the OpenStack-based ICN-Gateway caches the test video content. Then, upon receiving the message from the Orchestrator (Hitachi Orchestrator), the ICN Coordinator receives the FIB entry of the ICN-Gateway. Using this information, FLARE-ICN Node 1 is set up and connected to the ICN-Gateway via the CCN protocol (the slice stitching procedure). Next, the two ICN nodes are connected, and finally, on the UE side, UE 1 is connected to ICN Node 1 while UE 2 connects to ICN Node 2 via the CCN protocol. We use this configuration as the evaluation scenario to verify the benefit of using CCN nodes with the in-network caching feature, so that video contents with high popularity in a geographical domain can be transmitted to the user side efficiently with minimized response time [20].

Fig. 3. FLARE-based testbed configuration

4.3 Test Scenarios

After the slice stitching procedure is completed by the Orchestrator on the Japan side via information exchanges with the Gateway in the ICN slice, we perform the test scenarios using the above testbed configuration. In particular, we conduct four different test scenarios to verify the efficiency of the proposed ICN/CDN system for content distribution, in which the content delivery is conducted twice for each scenario, as follows:

  • Scenario 1: Firstly, the content request is sent from UE1. Then, for the second time, the content request (for the same content) is also sent by UE1.

  • Scenario 2: First content request is sent from UE2, and a second-time request is from UE1 for the same content.

  • Scenario 3: The first-time request is from UE1, and then UE2 will send Interest for the same content.

  • Scenario 4: Both requests are sent by UE2.
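Under the assumption that a Data packet is cached at every node on its reverse path (including the requesting UE itself, since the UEs also run CCN), the four scenarios can be simulated roughly as follows. The two-node topology mirrors Fig. 3; the names and the hop counts are illustrative, not the measured results of Sect. 4.4.

```python
# Rough simulation of the four scenarios: after the first request the
# content is cached along the reverse path, so the second request is
# served from the nearest cache. Topology follows Fig. 3.

PATHS = {  # per UE: path from the UE up to the content source
    "UE1": ["UE1", "Node1", "Gateway"],
    "UE2": ["UE2", "Node2", "Node1", "Gateway"],
}

def second_request(first, second):
    """Return (serving node, hop count) for the second request."""
    cached = set(PATHS[first])            # reverse-path caching, request 1
    for hops, node in enumerate(PATHS[second]):
        if node in cached:                # stop at the first cache hit
            return node, hops
    return "Gateway", len(PATHS[second]) - 1
```

This toy model already predicts the pattern reported later: when the same UE repeats its own request (Scenarios 1 and 4) the content comes from the UE's local repository at zero hops, while cross-UE repeats (Scenarios 2 and 3) are served by an intermediate ICN node.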

It should be noted that the test content file in each scenario has the same size (either 1 MB or 10 MB). We then evaluated the ICN slice performance by measuring downloading time, E2E hop count, throughput, and Round-Trip Time (RTT) [21], as detailed in the next subsection.

4.4 The Integrated ICN/CDN System Performance Evaluations and Discussion

The four above network metrics are evaluated as follows:

Downloading Time.

Downloading time is the time from when a user sends the first content Interest until the requested file's last chunk is delivered to the user. A shorter download time indicates a higher transmission rate. As shown in Fig. 4, we measured the downloading time using 1 MB and 10 MB files, and in both cases the second request's download time is much smaller than the first. Thus, as long as the content has been cached by the ICN nodes once (i.e., the requested content is stored in the ICN slice), the buffering time for the streaming service on the user side can be reduced considerably. As a result, QoE can be ensured, especially for popular content.

Fig. 4. Download time

E2E Hop Counts.

Similarly, when measuring the number of hops between the UEs and the content source, we observe that the second request's hop count is always lower than that of the first (Fig. 5). The reason is that, thanks to ICN's in-network caching feature, in all four test scenarios the contents are cached at the nearest ICN nodes after the first request.

Fig. 5. End-to-end hop counts

Round-trip Time (RTT).

RTT measures the period from when a packet is sent until its response is received. As shown in Fig. 6, the RTT becomes smaller after the first Interest when we test a content file of CCN's default chunk size (4 KB). As the requested content is stored at the nearest nodes (ICN Node 1 or ICN Node 2) after the first request, subsequent requests for the same content are served faster, i.e., the reduced RTT indicates better QoE on the user side.

Fig. 6. Round-trip time

Throughput.

Throughput is a key performance metric of the network, and the same tendency can be observed when measuring throughput for both the 1 MB and 10 MB test contents (Fig. 7). Specifically, in Scenarios 2 and 3 the second request always obtains higher throughput than the first. However, in Scenarios 1 and 4 the throughput decreases. The reason is that our UEs are also equipped with the CCN protocol, so a UE caches content into its own repository with the built-in in-network caching feature once it has retrieved that content. When the same UE sends the same Interest again, the content is served from its local repository, so little data crosses its network interface and the measured throughput is low. This behavior also leads to lighter data traffic and a stable network with a low congestion rate.

Fig. 7. Throughput

Overall, the above scenarios show that our proposed system can improve the network performance efficiently as soon as a requested content is stored in the ICN slice.

5 Conclusion

In this paper, we have proposed, designed, implemented, and evaluated the combined ICN/CDN architecture as a video streaming service. The joint testbed evaluations between Japan and Europe show that our approach can reduce the download time effectively, especially when transmitting contents with high popularity. This realizes a potential and feasible network design for efficient video streaming service by leveraging SDN/NFV technologies and combining the benefits of both ICN and CDN for video content distribution.

The concept and design of function chaining for optimal VNF allocation in network slicing of the integrated ICN/CDN system will be the focus of our future work.