Toward a Loss-Free Packet Transmission via Network Coding

Chapter

Abstract

Network coding promises significant benefits in network performance, especially in lossy environments. Since the Transmission Control Protocol (TCP) forms the central part of the Internet protocol suite, it is necessary to find a way to make these benefits compatible with TCP. This chapter introduces a new TCP mechanism based on network coding that requires only minor changes to the protocol, allowing incremental deployment. The core of the scheme is to transmit linear combinations of the original packets in the congestion window while generating redundant combinations to mask random losses from TCP. Original packets can be removed from the congestion window even before they are decoded at the receiver, because the receiver acknowledges the degree of the combinations rather than the packets themselves; all the original packets can be recovered once enough combinations have been collected. Simulation results show that the scheme achieves much higher throughput than standard TCP in lossy networks. Although it is still far from deployable in real networks, it takes a first step toward bringing the concept of NC into practice.

Keywords

Network coding · TCP · Random loss · Redundant combination · Degree of combination

4.1 Background

History has witnessed the great success of the Transmission Control Protocol (TCP) in the Internet as the connection-oriented protocol of the transport layer. Although TCP has performed very well in wired networks since its inception, it suffers considerably in wireless networks because of external interference and random losses. The problem stems from its interpretation of packet loss: in wired networks, loss is a sign of congestion, so TCP reduces the congestion window as a precaution. That is, it slows down the sending rate to alleviate congestion and stabilize the network, and the following congestion avoidance phase, with its conservative increase in window size, leaves the link underutilized. As more and more wireless technologies are deployed, the problem becomes much sharper: wireless links frequently exhibit high bit error rates and random packet losses. Misinterpreting such random losses as congestion therefore causes TCP to suffer an unsatisfactory degradation in performance.

Aiming at enhancement over wireless networks, one promising variant called TCP-Veno [1] was proposed, showing a remarkable performance improvement over legacy TCP. TCP-Veno, which concentrates on the non-congestion-loss problem, takes advantage of the congestion detection scheme of TCP-Vegas [2] and dexterously combines it with TCP-Reno. In TCP-Veno, the two kinds of loss, congestion loss and random loss, are differentiated by estimating the number of backlogged packets N. Specifically, if N is smaller than a preset threshold β, the loss is considered random, and TCP-Veno cuts the congestion window by only \( \frac{1}{5} \) rather than \( \frac{1}{2} \) to keep the sending rate high. Otherwise, the loss is treated as congestion and the congestion window is halved as in TCP-Reno. The key parameter N, representing the backlog of packets currently in the queue, is calculated as follows:
$$ N=\mathrm{Actual}\times \left(\mathrm{RTT}-\mathrm{MRTT}\right)=\left(\frac{\mathrm{cwnd}}{\mathrm{MRTT}}-\frac{\mathrm{cwnd}}{\mathrm{RTT}}\right)\times \mathrm{MRTT}, $$
(4.1)
where MRTT is the minimum (expected) RTT and RTT is the actual measured RTT. Although TCP-Veno achieves remarkable performance, especially in wireless networks, TCP can still be improved in many respects, for example by proactively sending redundant packets to mask random losses.
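To make Eq. (4.1) and the Veno decision rule concrete, here is a minimal sketch; the function names, the threshold value β = 3, and the exact window cuts are illustrative assumptions, not part of the protocol specification.

```python
# Hedged sketch of TCP-Veno's backlog estimate (Eq. 4.1) and loss
# classification. Names follow the text (cwnd, RTT, MRTT, beta); the
# default threshold is an assumption for illustration.

def backlog(cwnd: float, rtt: float, mrtt: float) -> float:
    """N = (cwnd/MRTT - cwnd/RTT) * MRTT: packets queued in the network."""
    return (cwnd / mrtt - cwnd / rtt) * mrtt

def window_after_loss(cwnd: float, rtt: float, mrtt: float,
                      beta: float = 3.0) -> float:
    """Cut cwnd by 1/5 for random loss (N < beta), by 1/2 for congestion."""
    if backlog(cwnd, rtt, mrtt) < beta:
        return cwnd * 4 / 5   # random loss: keep the sending rate high
    return cwnd / 2           # congestion loss: back off as Reno does

# No queueing (RTT == MRTT) -> random loss; large backlog -> congestion.
assert window_after_loss(10, 0.1, 0.1) == 8.0
assert window_after_loss(20, 0.2, 0.1) == 10.0
```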

Regarding redundant packets, traditional TCP can hardly exploit them, since it is impossible to know in advance which packet will be lost. Blindly sending redundant packets not only occupies network resources but also hurts goodput. Things are different, however, if a redundant packet carries the information of multiple packets, which can be realized by coding.

Network coding (NC), an important and promising approach to network transmission, denotes an arbitrary causal mapping from the inputs to the outputs of any node in the network. Its major advantage derives from the ability to mix data across time and across flows [3], making data transmission robust and effective. This also makes it a natural approach to redundant transmission.

In TCP, in-order delivery with acknowledgment is a key feature, and it changes completely once coding is involved. By collecting enough coded packets, the receiver can recover the original packets regardless of their arrival order, naturally eliminating the ordering constraint. Along this line of thought, the sender cares much more about the quantity of coded packets (the degree) received by the receiver than about which particular packets arrived. The critical issue is therefore how to map the in-order paradigm onto this out-of-order one, which provides a basic and feasible way to retrofit TCP with NC. In short, the acknowledgment (ACK) scheme should be changed from acknowledging information to acknowledging degree.

4.2 Overview

The very first combination of TCP and NC was proposed in [4] by Sundararajan et al. Before that, the general approach had been to mask losses from TCP using link-layer retransmission [5]. However, such schemes make link-layer retransmission interact in complicated ways with TCP's own retransmission, and the independent retransmission protocols at different layers hurt performance.

Historically, another famous idea along similar lines is digital fountain codes [6], used to transmit packets over lossy links. The sender generates a stream of random linear combinations of a batch of k packets, and the receiver can recover the batch as long as it collects slightly more than k combinations. There is no feedback during the transmission except a notification once the receiver has decoded successfully. Although fountain codes are rateless and low in complexity, the encoding operates on a whole batch of packets; that is, an original packet becomes available only after the current batch has been received and decoded. If this idea were used directly in TCP, it would lead either to timeout retransmissions or to a very large round-trip time (RTT), and hence low throughput.

Hence, to accommodate the ACK-based protocol, an independent network coding layer is added below TCP and above the IP layer on both the sender and receiver sides to process coded packets [5]. This additional layer is dedicated to masking packet losses from the upper layer with redundant coded packets (linear combinations). As with fountain codes, once the receiver collects enough combinations, it can recover the original packets by decoding. The procedure is like solving a system of multivariate linear equations, where each combination is an equation and the original packets are the unknowns. Compared to fountain codes, the network coding-based scheme generates linear combinations over a variable number of original packets rather than a fixed batch, shortening the decoding delay. Here, an ACK is sent for each combination, acknowledging a degree of freedom rather than the information itself. As soon as the number of equations matches the degree, all the unknowns can be decoded in time. This explains the mapping relationship referred to above.

As one can see from Fig. 4.1, the biggest difference between the two kinds of transmission lies in the packets that are sent. The left one transmits original packets (without NC), while the right one encodes the original packets before sending them out (with NC). When packets are transmitted in the traditional way, the receiver must obtain every original packet without any losses. Nevertheless, orderly delivery is hard to guarantee in wireless networks owing to interference such as a high bit error rate or noise. Consequently, the congestion window erroneously shrinks to a small value (through RTO or fast retransmit), leading to low throughput. If the packets are coded before being sent, the receiver cares about the degree of freedom rather than the packet sequence; when a new combination arrives, the receiver can acknowledge packets in order even if the original packets have not yet been decoded.
Fig. 4.1

Response of information and degree

4.3 Preliminaries

In order to use the standard TCP protocol with minimal change, TCP-Vegas is chosen as the more compatible variant: it uses a proactive approach to congestion control, inferring the occupancy of network buffers even before packets get dropped. The crux of the algorithm is to adjust the congestion window and the transmission rate by estimating the RTT and finding the discrepancy between the expected and actual transmission rates. As congestion arises, buffers fill up and the RTT rises, which serves as a congestion signal for adjusting the window and rate [2]. Owing to this RTT-aware characteristic, a single packet loss does not affect the Vegas congestion window much; it usually just makes the latest RTT a little larger. In contrast, TCP-Reno, which uses packet loss as the congestion indicator and drastically halves the congestion window when a loss is detected, is less suitable to work with NC.

Before NC is applied, the message is split into a stream of packets \( {P}_1,{P}_2,\cdots \). Each packet is treated as a vector over a finite field \( {F}_q \) of size q. In the NC system, a node not only forwards incoming packets but also performs linear network coding on them: it may transmit a packet obtained by linearly combining the vectors corresponding to the incoming packets. For instance, it may transmit \( {q}_1=\alpha {p}_1+\beta {p}_2 \) and \( {q}_2=\gamma {p}_1+\delta {p}_2 \), where \( \alpha, \beta, \gamma, \delta \in {F}_q \) are the coefficients. Assuming the packets have l symbols, the encoding process may be written in matrix form as
$$ \begin{pmatrix} {q}_{11} & {q}_{12} & \cdots & {q}_{1l} \\ {q}_{21} & {q}_{22} & \cdots & {q}_{2l} \end{pmatrix}=C\cdot \begin{pmatrix} {p}_{11} & {p}_{12} & \cdots & {p}_{1l} \\ {p}_{21} & {p}_{22} & \cdots & {p}_{2l} \end{pmatrix}, $$
(4.2)
where \( C=\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \) is the coefficient matrix. Therefore, after receiving q 1 and q 2, the original packets can be obtained by inverting C via Gauss-Jordan elimination, which is given by
$$ \begin{pmatrix} {p}_{11} & {p}_{12} & \cdots & {p}_{1l} \\ {p}_{21} & {p}_{22} & \cdots & {p}_{2l} \end{pmatrix}={C}^{-1}\cdot \begin{pmatrix} {q}_{11} & {q}_{12} & \cdots & {q}_{1l} \\ {q}_{21} & {q}_{22} & \cdots & {q}_{2l} \end{pmatrix}. $$
(4.3)
As a result, to recover all the original packets, the receiver needs to receive as many independent linear combinations as there are original packets involved.
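Equations (4.2) and (4.3) can be sketched as follows. The prime field GF(257) is chosen purely for brevity; practical implementations typically work over GF(2^8), and the helper names are assumptions.

```python
# Encode two packets with coefficient matrix C (Eq. 4.2) and recover them
# by applying the inverse of C (Eq. 4.3), all over GF(257).
Q = 257  # a prime, so inverses exist via Fermat's little theorem

def encode(C, packets):
    """Matrix product q = C . p over GF(Q); one output row per combination."""
    return [[sum(c * s for c, s in zip(row, col)) % Q
             for col in zip(*packets)] for row in C]

def invert_2x2(C):
    """Inverse of a 2x2 coefficient matrix over GF(Q)."""
    a, b = C[0]
    c, d = C[1]
    det_inv = pow((a * d - b * c) % Q, Q - 2, Q)  # modular inverse of det
    return [[d * det_inv % Q, -b * det_inv % Q],
            [-c * det_inv % Q, a * det_inv % Q]]

p1, p2 = [1, 2, 3], [4, 5, 6]        # two packets of l = 3 symbols
C = [[1, 1], [1, 2]]                 # coefficients alpha, beta, gamma, delta
q1, q2 = encode(C, [p1, p2])
r1, r2 = encode(invert_2x2(C), [q1, q2])
assert (r1, r2) == (p1, p2)          # decoding recovers the originals
```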

In the above setting, some definitions are introduced [7] that will be useful throughout this chapter.

Definition 1

Seeing a packet: A node is said to have seen a packet \( {p}_k \) if it has enough information to compute a linear combination of the form \( {p}_k+q \), where \( q={\sum}_{l>k}{\alpha}_l{p}_l \) with \( {\alpha}_l\in {F}_q \) for all \( l>k \); that is, q is a linear combination involving only packets with indices larger than k. "Seen" packets reflect the node's decoding ability: when the number of seen packets equals the number of original packets that were coded, the original packets can be recovered by decoding. Algebraically, the procedure is like solving a system of linear equations, which explains why the in-order acknowledgment of original TCP no longer applies.

Definition 2

Knowledge of a node: The knowledge of a node is the set of all linear combinations of original packets that it can compute. Its dimension equals the number of independent combinations received, i.e., the degrees of freedom. If a node has seen packet \( {p}_k \), then it knows exactly one linear combination of the form \( {p}_k+q \), where q involves only unseen packets in the sense of Definition 1.
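A sketch of how a receiver can determine its seen packets (Definitions 1 and 2): row-reduce the coefficient vectors of the received combinations and read off the pivot columns. The field GF(257) and the helper name are illustrative assumptions.

```python
# Gauss-Jordan elimination over GF(257): the pivot columns after reduction
# are exactly the "seen" packet indices; their count is the node's
# degrees of freedom.
Q = 257

def seen_packets(rows):
    """Return the (0-based) indices of seen packets for given coefficient rows."""
    rows = [r[:] for r in rows]          # work on a copy
    seen, pivot_row = [], 0
    for col in range(len(rows[0]) if rows else 0):
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((i for i in range(pivot_row, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        inv = pow(rows[pivot_row][col], Q - 2, Q)      # normalize the pivot to 1
        rows[pivot_row] = [x * inv % Q for x in rows[pivot_row]]
        for i in range(len(rows)):                     # eliminate the column elsewhere
            if i != pivot_row and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % Q for a, b in zip(rows[i], rows[pivot_row])]
        seen.append(col)
        pivot_row += 1
    return seen

# combinations p1 and p1 + 3 p2 + 4 p3: packets p1 and p2 are seen, p3 is not
assert seen_packets([[1, 0, 0], [1, 3, 4]]) == [0, 1]
```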

4.4 Network Coding-Based TCP

To mask losses from the TCP layer, the protocol needs to be modified to accommodate the NC function. There are mainly three categories of schemes for implementing NC on TCP, all sharing the architecture shown in Fig. 4.2, where the new TCP coexists with the original TCP.
Fig. 4.2

Illustration of the new TCP layer

4.4.1 Static Methodology

In the seminal work on NC-based TCP [4], the primary aim is to mask losses from TCP using random linear coding. The first problem is thus how to acknowledge the data. According to the definitions in the last section, the degree of freedom is what matters in the new protocol, and the notion of seen packets imposes an ordering on the degrees of freedom. Hence, upon receiving a linear combination, the node determines whether a new packet has been seen.

The idea of masking losses lies in the fact that every newly arrived random linear combination causes the next unseen packet to be seen with high probability. Therefore, once the number of received combinations reaches the degrees of freedom, the node can decode all the original packets and deliver them to the upper layer. Given the loss rate, the number of combinations sent by the source node should be slightly larger than the number of original packets, which is controlled by a preset redundancy parameter R: the ratio of the number of packets actually sent to the minimum number needed in a loss-free transmission. Notably, the redundancy should be set neither too small nor too large. With too little redundancy, losses are not effectively masked from the TCP layer, leading to low throughput. Too much redundancy is also bad, not only because the transmission rate becomes limited by the rate of the code but also because the extra packets may congest the network. In theory, the ideal R equals the reciprocal of the probability of successful reception. In the static strategy, however, the redundancy stays fixed once set for the whole transmission. The algorithm in the NC module is specified as follows, separately for the source node and the receiver.

Source node: Respond to the arrival of a packet from the TCP source or an ACK from the receiver

1 \( NUM\leftarrow 0 \);

2 wait until a packet comes;

3 if (control packet)

4 deliver to the IP layer and return to 2;

5 if (ACK from the receiver)

6 remove the ACKed packet from the coding window and deliver the ACK to the source node;

7 if (new data packet)

8 add to the coding window;

9 \( NUM\leftarrow NUM+R \) (R is redundancy);

10 for \( i=1 \) to \( \left\lfloor NUM\right\rfloor \)

11 generate a random linear combination of the packets in the coding window;

12 add the coefficients to the network coding header and deliver to the IP layer;

13 \( NUM\leftarrow NUM-\left\lfloor NUM\right\rfloor \);

14 return to 2.

Packets used for connection management or acknowledgment are directly forwarded (lines 3–6). New data packets are first added to the coding window in the NC module, and then linear combinations are generated according to NUM, with the coefficients added to the NC header (lines 7–12). Finally, NUM is reduced to its fractional part, which is carried over to the next round (line 13).
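The NUM bookkeeping of lines 9–13 can be sketched as follows; the helper name and the example value R = 1.25 are assumptions for illustration.

```python
# Minimal sketch of the source-side redundancy bookkeeping: each new data
# packet adds R to NUM, floor(NUM) combinations are emitted, and the
# fractional part carries over to the next packet.
import math

def combinations_to_send(num: float, r: float) -> tuple[int, float]:
    """Return (combinations to emit now, fraction carried to the next packet)."""
    num += r
    emit = math.floor(num)
    return emit, num - emit

num, sent = 0.0, []
for _ in range(5):                 # five data packets arrive with R = 1.25
    emit, num = combinations_to_send(num, 1.25)
    sent.append(emit)
assert sent == [1, 1, 1, 2, 1]     # 5 x 1.25 = 6.25 -> 6 combinations in total
```

Over time the source thus emits R combinations per original packet on average, without ever sending a fractional packet.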

Receiver: Respond to the arrival of a combination from the source node or an ACK from the local TCP layer

1 wait until a packet comes;

2 if (ACK from receiver)

3 if (control packet)

4 deliver to the IP layer and return to 1;

5 else

6 ignore and return to 1;

7 if (combinations from source node)

8 retrieve the coefficients and perform Gauss-Jordan elimination to update the seen packets;

9 perform corresponding operations to the payload as the coefficients and deliver the decoded packets to the upper layer;

10 generate an ACK with the next unseen packet;

11 return to 1.

Similar to the source node, control packets are directly forwarded (lines 3–4). For each newly arrived combination, Gauss-Jordan elimination is performed on the coefficients to update the set of seen packets, and the same operations are applied to the payload if the combination causes a new packet to be seen (lines 7–9). After that, an ACK carrying the next unseen packet is sent to the source node (line 10).

The soundness of the new protocol is guaranteed because every packet is eventually delivered reliably to the receiver. Although the ACK mechanism differs a little from the conventional one, which removes a packet once it is received, every packet can be decoded as long as it has been seen. To make the congestion control scheme work, whenever a new packet enters the congestion window, it is passed to the NC module. Packets transmitted earlier that have not yet been decoded can be removed from the coding window; this is not a problem, since the redundant packets eventually make every packet seen and decodable.

The static scheme is effective only if the redundancy approximates or exceeds the theoretical value, and it is thus suitable for environments with a quasi-constant loss rate.

4.4.2 Dynamic Methodology

To make the new protocol work in networks without a constant loss rate, Chen et al. proposed a dynamic scheme in [8]. The scheme was originally proposed to decrease the decoding delay: the static scheme does not consider this issue and must buffer a considerable number of packets before all of them can be decoded. This, in addition to the unadjustable redundancy, makes the static scheme hard to deploy in a real system despite the benefits brought by NC.

The dynamic scheme retransmits an appropriate number of random linear combinations based on the feedback when loss happens. The appropriate number is the difference between the largest packet index in any received combination, max_index, and the largest seen packet index, max_seen. This difference, termed loss, is embedded in the ACK together with max_index. For instance, suppose the source node sends three random linear combinations \( x={p}_1 \), \( y=2{p}_1+3{p}_2 \), and \( z={p}_1+3{p}_2+4{p}_3 \) to the receiver, and the second combination y is lost. After combination z reaches the receiver, the source node receives an ACK with \( \max\_{index}=3 \) and \( loss=3-2=1 \). Here, loss indicates the number of additional packets the receiver still needs to decode all the combinations.
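The worked example in code, using plain integer arithmetic as a stand-in for the finite-field operations:

```python
# Combination y is lost; the receiver holds x and z and reduces z against x.
x = [1, 0, 0]          # x = p1
z = [1, 3, 4]          # z = p1 + 3*p2 + 4*p3
z_reduced = [zi - xi for zi, xi in zip(z, x)]   # z - x = 3*p2 + 4*p3

# pivots fall on p1 (from x) and p2 (from z_reduced), so max_seen = 2,
# while the largest packet index in any combination is 3
max_index, max_seen = 3, 2
loss = max_index - max_seen
assert z_reduced == [0, 3, 4]
assert loss == 1       # one more combination is needed to decode p3
```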

Dynamic scheme uses the loss to decide the number of retransmission combinations instead of redundancy parameter R. The retransmission algorithm is specified as follows:

Algorithm of retransmission

1 wait until an ACK comes;

2 if (\( Tnow- Tlast>RTO \))

3 retransmit loss linear combinations of first loss packets in the coding window;

4 else

5 if (\( loss> last\_ loss \))

6 retransmit \( loss- last\_ loss \) combinations of first \( loss- last\_ loss \) packets in the coding window;

Here, Tnow is the current time, Tlast is the time of the last retransmission, and RTO is the timeout value set by TCP. Similarly, last_loss is the loss value from the previous ACK. The algorithm first checks whether a timeout has occurred since the last retransmitted combinations were sent (line 2). A timeout indicates that the last retransmission was lost, so loss new combinations are retransmitted (line 3). If extra combinations have been lost since the last retransmission, \( loss- last\_ loss \) combinations of the first \( loss- last\_ loss \) packets, rather than all the packets, are sent out to decrease the decoding delay (lines 4–6). Compared to the static scheme, the dynamic scheme handles both random losses and unpredictable bursty losses in time, so the decoding delay is reduced.
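The retransmission rule above can be sketched as follows; the function name and the return convention (number of combinations to resend) are illustrative assumptions.

```python
# Dynamic-scheme retransmission decision: resend everything outstanding on
# timeout, otherwise resend only the newly lost amount since the last ACK.
def combinations_to_retransmit(t_now: float, t_last: float, rto: float,
                               loss: int, last_loss: int) -> int:
    if t_now - t_last > rto:
        return loss                    # last retransmission presumed lost
    if loss > last_loss:
        return loss - last_loss        # only the extra losses since last time
    return 0                           # nothing new to resend

assert combinations_to_retransmit(5.0, 1.0, 3.0, loss=4, last_loss=4) == 4  # timeout
assert combinations_to_retransmit(1.5, 1.0, 3.0, loss=3, last_loss=1) == 2  # new losses
assert combinations_to_retransmit(1.5, 1.0, 3.0, loss=2, last_loss=2) == 0  # nothing new
```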

4.4.3 Adaptive Methodology

The static scheme masks random loss by accumulating the fractional part of R prior to each transmission, whereas the dynamic scheme detects newly lost packets as soon as an ACK is received and retransmits new combinations immediately if necessary. The static scheme can thus also be considered a preventive scheme. In a lossy environment, especially a wireless network, chances are that the retransmitted combinations will also be lost. The dynamic scheme, however, can hardly recover before the retransmission timeout, since the value of \( loss- last\_ loss \) stays the same during this period. It is therefore necessary to combine the advantages of both schemes to fully exploit the superiority of NC.

The adaptive scheme [9] combines the advantages of the static and dynamic schemes. Moreover, it makes the redundancy adapt to the environment instead of staying constant throughout the transmission. In this scheme, the redundancy is recalculated by the source node every time it receives feedback. The adjusting algorithm is specified as follows:

Algorithm of redundancy adjustment

1 wait until an ACK comes;

2 if (\( loss- last\_ loss<0 \))

3 \( R\leftarrow R+\left( loss- last\_ loss\right)\times \frac{pktsize}{B\times D} \) and return to 1;

4 if (\( R<1 \))

5 \( R\leftarrow 1 \);

6 if (\( loss- last\_ loss=0 \))

7 return to 1;

8 if (\( loss- last\_ loss>0 \))

9 retransmit \( loss- last\_ loss \) combinations of first \( loss- last\_ loss \) packets in the coding window;

10 \( R\leftarrow R+\left( loss- last\_ loss\right)\times \frac{pktsize}{B\times D} \) and return to 1.

Here, B and D are the link bandwidth and link transmission delay, respectively. If \( loss- last\_ loss \) is smaller than zero, one or more retransmitted packets have already been received by the receiver, and no further combinations need to be sent this time. If \( loss- last\_ loss \) is positive, extra combinations are needed. \( \frac{B\times D}{\mathrm{pktsize}} \) is the number of packets in flight from the source node to the sink node, so the term \( \left( loss- last\_ loss\right)\times \frac{\mathrm{pktsize}}{B\times D} \) quickly resends redundancy when the last retransmission is lost, since the ACK for the retransmitted packet should have arrived by this moment. In the adaptive scheme, R reflects the state of the underlying links as of RTT/2 ago, and another retransmission is launched after several RTTs, so that at least part of the data can be decoded.
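A sketch of the redundancy-adjustment rule; the function signature and the unit convention (packet size and bandwidth both in bits) are assumptions for illustration.

```python
# Adaptive-scheme update: R moves by (loss - last_loss) * pktsize / (B * D)
# and is floored at 1; extra combinations are retransmitted only when the
# delta is positive. pktsize is in bits, bandwidth in bits/s, delay in s.
def adjust(r: float, loss: int, last_loss: int,
           pktsize: float, bandwidth: float, delay: float):
    delta = loss - last_loss
    r += delta * pktsize / (bandwidth * delay)
    r = max(r, 1.0)                  # R < 1 would under-supply combinations
    retransmit = max(delta, 0)       # resend only the newly lost packets
    return r, retransmit

# 1000-byte packets on a 1 Mb/s link with 0.1 s delay: B*D holds 12.5 packets,
# so each newly lost packet raises R by 0.08
r, n = adjust(1.05, loss=4, last_loss=2,
              pktsize=1000 * 8, bandwidth=1e6, delay=0.1)
assert n == 2 and abs(r - 1.21) < 1e-9
```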

4.5 Packet Format

The packet format should be redesigned to support the NC function, carrying coefficients, redundancy, and so on. In the new protocol, an entire packet serves as the basic unit of data; that is, exactly the same operation is performed on every symbol within the packet. The main advantage is that decoding matrix operations can be performed at the granularity of packets, so one coefficient suffices for a whole packet instead of one per symbol (typically one byte). TCP may generate segments of different sizes, governed by the maximum transmission unit. The solution to variable lengths and repacketization in [7] is as follows:
  • Any part of the incoming segment that is already in the buffer is removed from the segment.

  • The remaining contiguous part of the segment is repacked into a new, separate TCP packet.

  • The source and destination ports are removed, to be added to the NC header instead.

  • Packets are padded with zeros to match the length of the longest packet in the coding buffer, as shown in Fig. 4.3.
    Fig. 4.3

    Illustration of the coding buffer

Here, the ports are removed from the original TCP header because they identify which TCP connection the packet corresponds to; they are therefore added to the combination as part of the NC header.
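The zero-padding step from the list above can be sketched as follows; the helper name is an assumption.

```python
# Pad every packet in the coding buffer with trailing zero bytes to the
# length of the longest one, so a single coefficient applies uniformly
# to the whole packet.
def pad_buffer(packets: list) -> list:
    longest = max(len(p) for p in packets)
    return [p + bytes(longest - len(p)) for p in packets]

buf = pad_buffer([b"abc", b"de", b"fghi"])
assert [len(p) for p in buf] == [4, 4, 4]
assert buf[1] == b"de\x00\x00"   # shorter packets gain zero padding
```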

The NC header is also designed in detail in [7], with some modifications made to work with the improved algorithm, as shown in Fig. 4.4.
Fig. 4.4

Illustration of the NC header

The meaning of each field is described below, with sizes given in bytes.
  • Src and dst port: Used to identify which TCP connection the coded packet corresponds to

  • n: The number of packets involved in the combination

  • Base: The sequence number of the first unacknowledged byte

  • PktID: The ID of the combination

  • Start i : The starting byte of the data in ith packet

  • End i : The last byte of the data in ith packet

  • a i : The coefficient of the ith packet in the combination

Start i and End i are relative to the corresponding field of the previous packet, except Start 1. PktID is used, together with n, to calculate the lost packets and update the newly seen packets. Base, the sequence number of the oldest byte in the coding buffer, is used to decide whether a packet can be safely deleted from the buffer. The NC header is thus \( 15+5n \) bytes, including the port information.

Another important concept is the coding window, which decides the number of packets in a combination. In theory the sender could generate a random linear combination of all the packets in the coding buffer, but that may result in a huge NC header. Thus a constant number of packets are coded together, the number depending on the loss rate of the network environment: the loss can be masked from TCP only if the difference between the number of redundant packets received and the number of original packets involved is less than the coding window.

4.6 Experiment

The topology used in the experiment is shown in Fig. 4.5. It consists of several sources \( {S}_i\;\left(i\in 1,\cdots, n\right) \), several receivers \( {R}_i\;\left(i\in 1,\cdots, n\right) \), and three intermediate nodes \( {M}_i\;\left(i\in 1,\cdots, 3\right) \). Each packet is transmitted from S i with R i as the intended destination. The channels between each pair of nodes are assumed to be mutually independent.
Fig. 4.5

Illustration of the topology in experiment

4.6.1 Soundness

The new protocol is based on TCP-Vegas, since TCP-Vegas controls the congestion window more smoothly via the RTT, compared to the more abrupt manner of TCP-Reno. However, if the coding window is chosen properly, TCP-Reno can also work with NC. According to [7], the coding window should be large enough to mask the link losses yet small enough to keep queue drops visible to TCP, which is somewhat complex to fulfill.

Although research has shown the superiority of TCP-Vegas over TCP-Reno [10], TCP-Reno remains the most widely deployed variant of TCP. When both types of connection share a link, TCP-Reno generally steals bandwidth from TCP-Vegas and dominates because of its aggressive nature compared to the more conservative mechanism of TCP-Vegas [11]. However, they can coexist fairly if TCP-Vegas is configured properly [12]. Here, the parameters of TCP-Vegas are set to \( \alpha =28 \), \( \beta =30 \), and \( \gamma =2 \).

The soundness result for one TCP-Reno flow and one flow of TCP with the NC function is shown in Fig. 4.6. Here the loss rate is set to 0 %, and the two flows join the network one after another.
Fig. 4.6

Illustration of the fairness between two flows

For the case of more flows coexisting in the network, Fig. 4.7 shows the result, where Jain's fairness index [13] is used, defined as follows:
Fig. 4.7

Illustration of the fairness among multiple flows [9]

$$ f=\frac{{\left({\sum}_{i=1}^n{x}_i\right)}^2}{n{\sum}_{i=1}^n{x}_i^2}, $$
(4.4)
where n is the number of connections and x i is the throughput of the ith connection. The closer f is to 1, the fairer the protocol.
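Equation (4.4) in code:

```python
# Jain's fairness index: f = (sum x_i)^2 / (n * sum x_i^2); f = 1 means
# all connections get equal throughput, f = 1/n means one flow takes all.
def jain_fairness(throughputs) -> float:
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

assert jain_fairness([5.0, 5.0, 5.0]) == 1.0          # perfectly fair
assert abs(jain_fairness([10.0, 0.0]) - 0.5) < 1e-9   # one flow starved
```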

4.6.2 Effectiveness

A random loss rate is configured on each link to show the effectiveness of the NC function. Suppose the random loss rate on each link is p; the overall probability of packet loss is then \( 1-{\left(1-p\right)}^{n-1} \), where n is the number of intermediate nodes. The loss rate in the experiment varies from 0 to 20 %, and flows with different protocols are tested as in Fig. 4.8.
Fig. 4.8

Illustration of the effectiveness with different protocols [9]

The test results fall into two groups: conventional TCP protocols and TCP with the NC function. As one can see from the figure, the enhancement brought by NC is substantial.
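The end-to-end loss model used in this experiment, \( 1-{\left(1-p\right)}^{n-1} \), can be computed as follows; the function name is an assumption.

```python
# Overall loss probability across independent lossy links: a packet
# survives only if it survives every one of the n - 1 links.
def overall_loss(p: float, n: int) -> float:
    return 1 - (1 - p) ** (n - 1)

# two lossy links at 5 % each give roughly 9.75 % end-to-end loss
assert abs(overall_loss(0.05, 3) - 0.0975) < 1e-9
```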

4.6.3 Network Switching

In the last subsection, the loss rate was constant during the experiment, which makes the performance of the different NC schemes almost the same. The next experiment therefore compares the three NC schemes described in Sect. 4.4, denoted TCP-NC (static), TCP-DNC (dynamic), and TCP-NCDR (adaptive), respectively. Meanwhile, the loss rate is changed drastically from time to time to simulate network switching in a practical environment, as shown in Fig. 4.9.
Fig. 4.9

Loss rate vs. time for simulation

The throughput is calculated at intervals of 2.5 s, and the experiment is simulated for 1000 s. For TCP-NC, R is initialized to 1.087 (the reciprocal of 92 %), which can mask losses from TCP if the loss rate is less than or equal to 8 %, as explained above. Figure 4.10 compares the three schemes.
Fig. 4.10

(a) Throughput vs. time for different schemes (TCP-Westwood) [9]. (b) Throughput vs. time for different schemes (TCP-NC) [9]. (c) Throughput vs. time for different schemes (TCP-DNC) [9]

As one can see from the figure, TCP-NC performs as well as TCP-NCDR during 20–250 s, since R matches the loss rate very well; the same holds during 700–800 s and 950–1000 s. During the remaining intervals, R is too small to effectively mask losses from the TCP layer, which leads to frequent timeouts and low throughput. Although TCP-DNC achieves good throughput and robustness, TCP-NCDR improves on it further. In this scenario, continuous random loss dominates during the stable phases, while intermittent random loss plays the major role during the abrupt-change phases.

The design of TCP-DNC focuses on the difference between successive loss reports and neglects accumulating redundant packets against successive random losses. Since retransmitted redundant packets may themselves be lost, they are not sent again until another jump in loss or a timeout. Consequently, under TCP-DNC, which is based on TCP-Vegas, the congestion window shrinks slowly as this phenomenon is interpreted as a sign of congestion until the packets are decoded. It thus performs slightly worse than TCP-NCDR during the stable phases. Although the adaptive scheme outperforms the other two, it requires more feedback and computation overhead during the transmission; further optimization could reduce this overhead.

Figure 4.11 shows the average throughput of each scheme during the switching experiment. TCP-NCDR outperforms the other schemes thanks to its dynamic redundancy, while a conventional TCP such as TCP-Westwood [14] achieves only a small fraction of the throughput.
Fig. 4.11

Average throughput of different schemes

4.6.4 Multi-flows

This subsection considers the performance improvement when multiple flows join, using the same topology as shown in Fig. 4.5. A random loss rate p is again configured on each link, making the overall loss rate vary from 0 to 20 % as above.

The number of flows in Fig. 4.12 is 1, 2, 4, 8, and 10, respectively. Throughput is normalized as follows:
Fig. 4.12

(a) Illustration of the effectiveness of multi-flows in wired network (one flow). (b) Illustration of the effectiveness of multi-flows in wired network (two flows). (c) Illustration of the effectiveness of multi-flows in wired network (four flows). (d) Illustration of the effectiveness of multi-flows in wired network (eight flows). (e) Illustration of the effectiveness of multi-flows in wired network (ten flows)

$$ T=\frac{T_{\mathrm{NC}}}{T_{\mathrm{Reno}}}, $$
(4.5)
where $T_{\mathrm{NC}}$ and $T_{\mathrm{Reno}}$ are the average throughputs of the NC scheme and conventional TCP, respectively; here the NC scheme is TCP-NCDR. As each figure shows, the NC scheme outperforms conventional TCP, especially in lossy environments, and it still performs slightly better than conventional TCP as the load becomes heavier in a relatively low-loss environment.
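Equation (4.5) is a simple ratio; a short sketch makes the convention explicit. The throughput values used below are placeholders for illustration, not measured results from the experiment.

```python
# Normalized throughput from Eq. (4.5): T = T_NC / T_Reno.
# A value above 1.0 means the NC scheme beats conventional TCP.
def normalized_throughput(t_nc, t_reno):
    if t_reno <= 0:
        raise ValueError("baseline throughput must be positive")
    return t_nc / t_reno

# Placeholder figures: NC at 1.8 Mb/s vs Reno at 1.2 Mb/s.
print(round(normalized_throughput(1.8, 1.2), 3))  # prints 1.5
```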
For the following experiment, the topology is slightly modified as shown in Fig. 4.13: the channel from the intermediate node to the receivers is replaced by a wireless channel.
Fig. 4.13

Illustration of the hybrid topology in experiment

As the result in Fig. 4.14 shows, the NC-based scheme still achieves higher throughput. However, as the number of flows increases, the collision probability also increases, making the throughput improvement less pronounced than in the wired environment.
Fig. 4.14

(a) Illustration of the effectiveness of multi-flows in hybrid network (one flow). (b) Illustration of the effectiveness of multi-flows in hybrid network (two flows). (c) Illustration of the effectiveness of multi-flows in hybrid network (four flows). (d) Illustration of the effectiveness of multi-flows in hybrid network (eight flows). (e) Illustration of the effectiveness of multi-flows in hybrid network (ten flows)

4.7 Conclusion

To date, hardly any protocol can substitute for TCP, which has proven remarkably successful in the Internet as a connection-oriented transport-layer protocol. Nevertheless, conventional TCP is becoming increasingly mismatched to modern transmission requirements. Network coding, with its ability to mix data across time and across flows, is therefore a promising approach to tackle this problem. This chapter has briefly introduced the combination of TCP and NC, from the underlying idea to preliminary simulations, which demonstrate a robust and effective improvement in data transmission, especially in lossy environments. Despite the notable experimental results, the scheme still seems far from deployment in practical networks. The current work can be viewed as a first step toward taking the concept of NC into practice; open questions, such as re-encoding at intermediate nodes together with the estimation of computation overhead, will be investigated intensively in the near future.

References

  1. C.P. Fu, S.C. Liew, TCP Veno: TCP enhancement for transmission over wireless access networks. IEEE J. Sel. Areas Commun. 21(2), 216–228 (2003)
  2. L.S. Brakmo, S.W. O'Malley, L.L. Peterson, TCP Vegas: new techniques for congestion detection and avoidance, in Proceedings of ACM SIGCOMM, Aug 1994, pp. 24–35
  3. T. Ho, Networking from a Network Coding Perspective, Ph.D. thesis, Massachusetts Institute of Technology, Dept. of EECS, May 2004
  4. J.K. Sundararajan, D. Shah, M. Medard, M. Mitzenmacher, J. Barros, Network coding meets TCP, in Proceedings of IEEE INFOCOM, Apr 2009, pp. 280–288
  5. S. Paul, E. Ayanoglu, T.F. La Porta, K.-W.H. Chen, K.E. Sabnani, R.D. Gitlin, An asymmetric protocol for digital cellular communications, in Proceedings of IEEE INFOCOM, Apr 1995, p. 1053
  6. M. Luby, LT codes, in Proceedings of IEEE Symposium on Foundations of Computer Science, Oct 2002, pp. 271–280
  7. J.K. Sundararajan, D. Shah, M. Medard, M. Mitzenmacher, J. Barros, Network coding meets TCP: theory and implementation. Proc. IEEE 99, 490–512 (2011)
  8. J. Chen, W. Tan, L. Liu, X. Hu, F. Xu, Towards zero loss for TCP in wireless networks, in Proceedings of IEEE IPCCC, Dec 2009, pp. 65–70
  9. K. Pan, H. Li, S.Y. Li, Q. Shi, W. Yin, Flying against lossy light-load hybrid networks, in Proceedings of IEEE IWCMC, Apr 2013, pp. 252–257
  10. P.A. Chou, Y. Wu, K. Jain, Practical network coding, in Proceedings of the Allerton Conference on Communication, Control, and Computing, 2003
  11. W. Feng, S. Vanichpun, Enabling compatibility between TCP Reno and TCP Vegas, in Proceedings of IEEE Symposium on Applications and the Internet, Jan 2003, pp. 301–308
  12. L.S. Brakmo, S.W. O'Malley, L.L. Peterson, TCP Vegas: new techniques for congestion detection and avoidance, in Proceedings of ACM SIGCOMM, Aug 1994, pp. 24–35
  13. R. Jain, The Art of Computer Systems Performance Analysis (Wiley, New York, 1991)
  14. C. Casetti, M. Gerla, S. Mascolo, M.Y. Sanadidi, R. Wang, TCP Westwood: end-to-end congestion control for wired/wireless networks. Wireless Netw. 8(5), 467–479 (2002)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Shenzhen Key Lab of Information Theory and Future Networks Architecture, Shenzhen Engineering Lab of Converged Networks Technology, School of Electronic & Computer Engineering, Peking University, Beijing, China
  2. University of Electronic Science and Technology of China, Chengdu, China
