
CCF Transactions on Networking, Volume 1, Issue 1–4, pp 37–51

Effect of multiplexing TCP flows on delay sensitivity in the internet communications

  • Sneha K. Thombre (corresponding author)
  • Lalit M. Patnaik
  • Anil S. Tavildar
Regular Paper

Abstract

TCP is the dominant protocol in the Internet, and the delay friendliness of TCP is a well studied, complex and practical issue. While all other components of end-to-end delay have been adequately characterized, queuing delay and the number of retransmissions are random in nature and add uncertainty to Internet communications. The datagrams at a router come from a number of multiplexed flows M(n), which constitute a stochastic process. This paper examines queuing behavior subject to multiplexing a stochastic process M(n) of flows. The arrival and service processes at the bottleneck router have been mathematically modeled, accounting for different proportions of TCP and UDP with their datagram sizes as parameters. This model is then used to evaluate the mean queuing delay of datagrams for different fractions of TCP in the background traffic. Further, using discrete queue analysis of datagrams, an estimate of the average instantaneous delay for datagrams of a tagged flow has been derived to ascertain the delay impact that multiplexed fellow flows have on the datagrams of the tagged flow. The analysis divulges an intriguing behavior: multiplexing of flows 'harms' the queuing delay. It has been observed that both the mean queuing delay of datagrams and the average instantaneous delay for datagrams of a typical flow degrade as the number of fellow TCP flows in the background traffic increases.

Keywords

The Internet · TCP · IP · UDP · End-to-end delay · Bursty traffic

1 Introduction

The Internet has revolutionized human life. The performance of the Internet is closely related to the TCP protocol, as a large percentage of Internet traffic is transported over this protocol. The performance of TCP depends upon the congestion control mechanism built into the protocol. The congestion control algorithm does not merely prevent congestion collapse but also behaves as a resource allocation mechanism (Srikant and Ying 2013; Bertsekas et al. 1992). This makes TCP somewhat unsuitable for delay sensitive applications. Some researchers have suggested that UDP may perform better for such applications (Bertsekas et al. 1992); IETF RFC 8085, however, forbids using UDP to transport data unless it is accompanied by congestion control. Congestion control in TCP is window based, reactive and implemented at the transport layer. Each flow is individually subjected to the congestion control and flow control mechanisms. The flows multiplexed at a router constitute a stochastic process. Datagrams generated by each of the flows share network resources and therefore strongly impact, and are affected by, fellow datagrams (Bertsekas et al. 1992). Furthermore, these flows may have radically different performance requirements depending upon the applications they support: some flows may be delay sensitive while the rest are elastic. Our studies indicate that little work on the effect of an increased number of multiplexed TCP flows on queuing delay is available in the literature. Numerous models for TCP throughput and delay exist; however, the need for newer models remains, as explained below.

The number of flows M(n) generated at the transport layer is a dynamic process. The motive of this work is to understand the impact of flows on each other with regard to delay performance. The work attempts to gain insight into questions such as: do multiplexed flows impact each other, and if so, do they collaborate or harm each other? The queue modeled in this paper is considered at the network layer and applies to datagrams at a congested router. Even though the sending rates of the source flows are decided by the transport layer, which implements congestion and flow control, when datagrams of these flows first hit the edge router and are buffered, the outgoing rates change. From then onwards, incoming and outgoing rates at the subsequent routers are decided by router buffering and bear hardly any correlation to the original source sending rates. Buffering destroys the timing relationship between the rates decided by source flows at the transport layer and the actual outgoing rates at the routers. To understand the effect of bursty arrivals and the role of buffering in communication networks, datagram arrivals at router links can be modeled as random processes. In this paper, the arrival and service rates have been modeled at a typical congested router considering buffering, and the queuing delay performance has then been analyzed. Even though TCP is the most studied and widely used protocol and its limitations are known, researchers have suggested that some advances may be possible through investigations of the delay sensitivity of TCP (Mondal et al. 2012). End-to-end delay has components such as propagation delay, transmission delay and processing delay, which are all fixed, predictable components for a given network environment and have been adequately characterized. The one random component is queuing delay, and end-to-end delay is strongly affected by it (Briscoe et al. 2014; Csoma et al. 2015). The datagrams at router queues (TCP and UDP) share resources and therefore impact, and are impacted by, fellow datagrams; moreover, the proportion of TCP in Internet traffic has crossed 90%. It is therefore imperative to evaluate the performance impact of interactions between TCP datagrams, and between TCP and UDP datagrams. A further motivation of this paper is to analyze and quantify the impact of a varying TCP proportion in the background traffic on the mean delay of datagrams, using queuing theory, and on the average instantaneous delay of the datagrams of a tagged flow, using statistical analysis of a discrete queue. The outcome of this investigation is that the mean delay of datagrams and the average instantaneous delay of datagrams of a typical flow are adversely affected by the proportion of TCP in the background traffic. This also means that TCP flows in fact harm one another and are not friendly. The work in this paper attempts to quantify the degradation of delay performance due to the interaction of datagrams of a TCP flow with fellow TCP and UDP datagrams. The analysis supplements the existing published work and makes it more comprehensive. This study may provide further insight for dealing with congestion related issues of TCP in modern day traffic, which comprises combinations of both elastic and delay sensitive (real time) applications.

1.1 Specific research contributions of the paper

Mathematical modeling of the arrival and service processes at congested routers at the network layer has been carried out using the contributions of TCP and UDP as parameters. The model also includes the length of the datagram, which significantly affects the delay performance. Based on the above formulations, the mean queuing delay has been estimated using the parameters of the traffic distribution; this analytical approach is itself a unique feature of the paper. Further, quantification of the mean instantaneous delay has been attempted using a discrete traffic model, with respect to the arrival sequence of datagrams, for three pre-selected values of \(\alpha\), the fraction of TCP in the background traffic. Using real time traffic, measurements were carried out to ascertain the arrival, service and inter-arrival distributions of TCP, UDP and TCP plus UDP, and the parameters of these distributions were determined. These measured traffic distribution parameters were substituted into the model and the analysis was carried out to obtain the mean queuing delay and the average instantaneous delay. For validation of the above mathematical analysis, estimation of average end-to-end delay based on exhaustive NS2 simulations under various conditions has been carried out. Simulation results clearly validate the trends predicted by our mathematical and analytical approaches.

The paper is organized as follows. After a brief introduction in Sect. 1, Sect. 2 is devoted to a review of related work by other researchers in this area. Section 3 presents a statistical characterization of traffic from typical traffic traces collected at an academic institute and defines a unique mathematical model of the arrival and service processes. Section 4 provides NS2 simulations to study the behavior of multiplexed stochastic flows at the router, which complements the mathematical model; this section includes evaluation of average end-to-end delay and average instantaneous delay for multiplexed flows under different network scenarios. Section 5 presents estimation of the mean waiting time (mean queuing delay) of datagrams and the average instantaneous delay for Internet traffic, for validation of the simulation results. Section 6 concludes the work.

2 Related work

The use of the TCP protocol has been extensively investigated by researchers in both academia and industry over the last 15–20 years. The most relevant work is summarized in this section.

2.1 Relation of queuing delay and buffer size

Shailendra et al. (2010a) have compared the mean queuing delay incurred in link-by-link error control with the end-to-end mechanism and have inferred that the link-by-link mechanism results in much lower mean queuing delay than the end-to-end approach. In an interesting work, Rouskas et al. (2011) have evaluated the relationship between the buffer sizes of core routers and TCP throughput. It has been noted that for a specific buffer range, real time traffic losses increase as the buffer size increases, due to buffer sharing dynamics at routers between real time and elastic traffic. The queuing delay is closely related to the buffer sizes of routers (Iya et al. 2015). The issue of bufferbloat deserves mention, though it has been known for three decades. Gettys and Nichols (2012) have inferred that oversized buffers are the major cause of pervasive bufferbloat. Nichols and Jacobson (2012) have provided a new Active Queue Management (AQM) technique, namely Controlled Delay (CoDel), to deal with bufferbloat, the persistently full buffer problem. Araldo and Rossi (2013) have proposed a technique to measure queuing delay (bufferbloat) in the Internet using passive measurements; the authors concluded that there appears to be no correlation between ISP traffic load and queuing delay. Showail et al. (2016) inferred that buffer sizing is an important network configuration parameter that impacts Quality of Service in wireless networks too. Cardwell et al. (2017) have proposed a novel congestion control algorithm, namely BBR (bottleneck bandwidth and round-trip time). Rather than assuming that packet loss is equivalent to congestion, BBR builds a model of the network path in order to avoid and respond to actual congestion. BBR is still subject to behavior and performance analysis.

2.2 Dominance of TCP and different TCP types

Surveys of TCP congestion-control methods that depend on implicit and explicit signaling from the network highlight several issues (Afanasyev et al. 2010; Lochert et al. 2007). Afanasyev et al. (2010) have discussed the evolutionary graph of TCP congestion control mechanisms with their advantages and disadvantages. The authors remark that there appears to be no agreement in the research community on the suitability of any specific TCP congestion control technique over the others. Lochert et al. (2007) have suggested a policy to control congestion where it occurs and have further pointed out that the end-to-end mechanism currently used is unsuitable for wireless scenarios. It may be readily observed from the sample anonymized Internet Trace of 2018 dataset from the 'Center for Applied Internet Data Analysis' (CAIDA) website that TCP is the dominant protocol and that more than 90% of Internet traffic consists of TCP. A comparison of parameters such as throughput, fairness and friendliness for Linux TCP variants has been done by Callegari et al. (2012), but none emerged as the most suitable for all applications. Yang et al. (2014) have provided detailed information about the deployment pattern of heterogeneous TCP congestion avoidance algorithms in the current Internet and have inferred that TCP Cubic (46.92%) is most widely used, followed by TCP Compound (25.62%) and then TCP Reno (14.46%). A new data/content centric alternative to TCP/IP is also evolving, namely Named Data Networking (NDN), a receiver driven connectionless transport; these protocols are based on content dissemination and retrieval rather than host-to-host connections. CDNs, P2P overlays and HTTP proxies are implemented on top of the Internet infrastructure. Ren et al. (2016) have done a comprehensive survey of congestion control in NDN; research challenges and open issues in NDN are presented in that survey. Liu et al. (2017) have proposed a switch-level regional congestion mitigation mechanism (RCM) for data centers. Many good congestion control mechanisms have also been proposed for WSNs and the IoT.

2.3 TCP and QoS for real time applications

It has been reported that modern day traffic suffers from TCP delay affecting the stringent QoS requirements, particularly for real time applications (Srikant and Ying 2013; Casas et al. 2014). Some of the noteworthy solutions proposed to deal with TCP delay and QoS issues are briefly reviewed in the following paragraphs.

Xu et al. (2012) have done a measurement study of three popular proprietary, high bandwidth and low delay video telephony applications (Skype, iChat and Google+) to gain some understanding of the architectural design of these applications and its effect on user QoS and QoE. Casas et al. (2014) have developed an online QoE monitoring system to understand degradation and then provision resources on IP by service providers. They have also made the important observation that YouTube and other Content Distribution Networks (CDNs) bring the content close to the end user so as to minimize propagation delay; similar strategies have been adopted by Google and other CDNs. To date, HTTP/HTTPS, and therefore TCP, do not differentiate the divergent QoS needs of dissimilar applications in Cloud/IoT traffic, so bringing content near to the user minimizes only the propagation delay. Li et al. (2016) have considered the resource allocation problem for real time (non-elastic) traffic in multipath networks; the resource allocation problem was converted to a strictly convex problem and an optimal solution satisfying the Karush-Kuhn-Tucker conditions was analyzed. Wang et al. (2017) have analyzed a utility function for clients' bandwidth requirements under fixed QoS; a resource allocation scheme based on double-sided combinational auctions was then proposed by the authors. To improve QoS, modern day applications prefer TCP multipath sessions, multiple TCP sessions, multiple content servers at different locations with HTTP redirect to minimize delay, and dynamic use of ephemeral port numbers greater than 1024 not registered with the Internet Assigned Numbers Authority (IANA).

Thus the above review suggests that network delay is an important parameter adversely affecting the QoS of Internet communications, particularly for real time applications, and that solutions have been attempted either by adding newer TCP flavors or at the application layer (layer 5) by proprietary individual applications. Numerous models are available for mixtures of TCP and UDP, but none of them specifically quantifies the impact of interaction between TCP flows in Internet traffic; the need for a new model therefore does exist. Motivated by the concern that queuing delay is on the rise and TCP is the dominant protocol, this paper attempts to analyze the variable queuing delay by modeling the arrival and service processes as random processes, accounting for various proportions of TCP and UDP with their respective datagram sizes, to evaluate the mean delay. Next, simulations are carried out using NS2 to examine the queuing behavior of multiplexed flows. NS2 simulations have been carried out to estimate the average end-to-end delay and the average instantaneous delay of a tagged flow in different scenarios, for a varying fraction of TCP in the background traffic. After the motivating simulation results, estimation results using the proposed model and a real-time dataset are analyzed. In the estimation analysis section, the average instantaneous delay is derived for the flow of interest using discrete transient queue analysis; here the impact of background TCP flows on the QoS of a typical TCP or UDP flow has been investigated. To ascertain realistic values of Internet traffic, the dataset used in this paper has been generated by collecting real time traffic data at a leading well-known academic institute at Pune, India. A curve fitting procedure has been applied to the data collected for the arrival process, varying packet sizes and inter-arrival times of TCP, UDP and TCP plus UDP together. The dataset shows the same self similarity features as reported in the related work. An application user flow has been qualified by a tuple with attributes such as source IP address, source port number, destination IP address and destination port number. The analytical results for queuing delay, making use of the measured values, have been plotted. The analysis of instantaneous values of queuing delay using a discrete statistical queue model has also been carried out.

3 Statistical characterization of the traffic

Empirical measurements and observations, such as self similarity in arrival statistics, datagram sizes and the dominance of TCP in overall traffic, have been largely consistent with inferences drawn in research studies (Domaski et al. 2017; Chen et al. 2016; Polaganga and Liang 2015; Kaur et al. 2017). The statistical features of the aggregate TCP arrival process and inter-arrival times have been shown to have long range dependency (Calzarossa and Serazzi 1993; Rossi et al. 2004).

The measured dataset for this paper was generated at an academic institute (Cummins College of Engineering for Women, Pune, India) for various attributes, at different timings of 1 h each, with definite mean values. Traffic measurements on the empirical data at the edge router were carried out over a one week (168 h) period, and hourly arrival rates, datagram sizes and inter-arrival timings for each 10 \(\mu\)s interval were measured using Wireshark. Based on these measurements, parameters such as arrival rates, datagram sizes and inter-arrival times were worked out. After data curation, curve fitting was done to obtain the distributions and their parameters. The flows in the dataset were sorted, and distributions for TCP only, UDP only and TCP plus UDP together were determined. It was observed that the traffic in all three cases follows self similarity, resembling a two parameter Weibull distribution in each individual case. It was noted that arrival rates and datagram sizes were independent of each other in all cases. Further, TCP contributes the maximum to overall traffic (around \(92\%\)).
Table 1
Measurement dataset: parameters for Weibull distribution [(\(\alpha\)) \(\approx\) 0.92]

Parameter                              TCP only    UDP only    TCP+UDP
Arrival rate
  (a) Shape parameter (\(\beta\))      1.34266     1.29519     1.71667
  (b) Scale parameter (\(\eta\))       10.8176     2.72557     15.7024
Datagram size
  (a) Shape parameter (\(\beta\))      0.811221    1.10141     0.806258
  (b) Scale parameter (\(\eta\))       712.344     826.505     641.644
Inter-arrival time
  (a) Shape parameter (\(\beta\))      0.52512     0.61658     0.519675
  (b) Scale parameter (\(\eta\))       1.50925     0.00941     1.71158

Interestingly, the arrival rates, datagram sizes and inter-arrival times for TCP only, UDP only and TCP plus UDP in the measurement dataset have been observed to follow the Weibull distribution. A Weibull distributed process has heavy-tailed characteristics and can model the fixed rate ON/OFF period lengths used when producing self-similar traffic by multiplexing different flows. The density function of a two parameter Weibull distribution is given in Eq. (1) (Downey 2001).
$$\begin{aligned} f(t)=\frac{\beta }{\eta }\left( \frac{t}{\eta }\right) ^{\beta -1}e^{-\left( \frac{t}{\eta }\right) ^{\beta }} \end{aligned}$$
(1)
where parameters \(\beta > 0\) and \(\eta > 0\) represent the shape and scale parameters respectively. Table 1 shows the values of the parameters of the Weibull distribution from the empirical dataset, where \(\alpha =0.92\) (see Sect. 3.1). For other values of \(\alpha\), the shape and scale parameters of the Weibull distribution were calculated and used in the subsequent analysis.
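For concreteness, the density of Eq. (1) and the curve fitting step behind Table 1 can be sketched in a few lines of Python. This is a minimal illustration assuming SciPy is available, with synthetic samples standing in for the measured dataset:

```python
# Sketch: evaluate the Weibull pdf of Eq. (1) and fit shape/scale parameters,
# as was done to produce Table 1. Assumes SciPy; samples are synthetic here.
import numpy as np
from scipy.stats import weibull_min

def weibull_pdf(t, beta, eta):
    """Two-parameter Weibull density f(t) of Eq. (1)."""
    return (beta / eta) * (t / eta) ** (beta - 1) * np.exp(-((t / eta) ** beta))

# TCP-only arrival-rate parameters from Table 1
beta, eta = 1.34266, 10.8176
t = np.linspace(0.1, 40.0, 200)
pdf = weibull_pdf(t, beta, eta)

# Fitting: weibull_min.fit returns (shape, loc, scale); fixing loc = 0
# recovers the two-parameter form used in the paper.
samples = weibull_min.rvs(beta, scale=eta, size=10_000, random_state=0)
beta_hat, _, eta_hat = weibull_min.fit(samples, floc=0)
print(f"fitted shape = {beta_hat:.3f}, scale = {eta_hat:.3f}")
```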

3.1 Mathematical modeling for arrival/service processes

To appreciate the effect of packet size in modeling, we consider the following M/M/1 example for packet size n, with arrival rate \(\lambda\) and mean service time \(\frac{1}{\mu _{n}}\) for packets of size n. The mean delay (T) and utilization (\(\rho\)) are given by Bose (2013),
$$\begin{aligned} T= \frac{\frac{1}{\mu _{n}}}{\left( 1-\rho \right) } \end{aligned}$$
(2)
and
$$\begin{aligned} \rho = \frac{\lambda }{\mu _{n}} \end{aligned}$$
(3)
Now consider the same model, except that the packet size is reduced to half, i.e., \(\frac{n}{2}\). Packet arrivals are still Poisson, and the arrival rate doubles (\(\lambda \rightarrow {2}{\lambda })\). Assuming the service time of a packet is proportional to its size, the service time is halved (\(\frac{1}{2\mu _{n}}\)), i.e., the service rate also doubles. The utilization therefore remains the same, while the mean delay decreases in proportion to the factor by which the packet size is reduced.
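A short numerical check of this argument, using Eqs. (2) and (3) with illustrative rates, confirms that halving the packet size leaves the utilization unchanged while halving the mean delay:

```python
# Sketch: M/M/1 mean delay of Eqs. (2)-(3) when the packet size is halved.
# Halving the size doubles both the arrival and service rates, so utilization
# is unchanged while mean delay halves. Numbers are illustrative.
lam, mu_n = 100.0, 150.0           # arrivals/s and service rate for size-n packets

def mm1_delay(lam, mu):
    rho = lam / mu                 # utilization, Eq. (3)
    assert rho < 1, "queue must be stable"
    return (1 / mu) / (1 - rho)    # mean delay, Eq. (2)

T_full = mm1_delay(lam, mu_n)            # packet size n
T_half = mm1_delay(2 * lam, 2 * mu_n)    # packet size n/2: both rates double
print(T_full, T_half, T_half / T_full)   # ratio is 0.5
```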

The same analogy is applied for modeling the service rate. The sizes of the TCP segments and UDP datagrams are accounted for, along with the proportions of TCP and UDP, in the model shown in Eqs. (4) and (5). Though the service time for datagrams at the router is constant, modeling the service time as proportional to the size of the datagrams provides better accuracy and accounts for the dynamic behavior of buffer occupancy.

The arrival process (A) and effective service process (S) can be modeled as:
$$\begin{aligned}&A\overset{d} =\,\alpha C P+( 1-\alpha ) B D \end{aligned}$$
(4)
$$\begin{aligned}&S\mathop = \limits ^d {\theta }\left( {\alpha C + (1 - \alpha ){B}} \right) \end{aligned}$$
(5)
where \(\alpha =\) proportion of TCP in background traffic and \(({1}-{\alpha })=\) proportion of UDP; C = a random variable indicating TCP burst (segment) size in bytes; P = a random variable indicating the arrival rate of TCP segments in datagrams/s; B = a random variable indicating the size of UDP datagrams in bytes; D = a random variable indicating the UDP arrival rate in datagrams/s; \(\theta =\) fixed service rate at the router in datagrams/s for both TCP and UDP; and \(\overset{d}{\mathop {=}}\) denotes equality in distribution.

It is assumed that P and C are independent, and similarly B and D are independent random variables. The routing decision is made at the router by inspecting the header of a datagram and is independent of the payload size of the datagram in question. TCP has variable segment sizes; therefore, when these datagrams are served at the router queue, they create space in the buffer according to their respective sizes. This space then decides whether a newly arriving datagram can be accommodated or must be discarded. Thus the above model takes into account the effect of datagram size and the continuously changing buffer occupancy, which decide the waiting time of a datagram in the queue. While servicing the datagrams, the router encounters different sizes of TCP segments and UDP datagrams, served in proportion to their sizes. For example, for a UDP datagram of 210 bytes and a TCP segment of 1500 bytes: if UDP datagrams are served at \(\theta\) datagrams/s, then TCP segments are effectively served at approximately \(\theta /7\) datagrams/s, since \(1500/210 \approx 7\).
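A Monte Carlo realization of Eqs. (4) and (5) can illustrate the model; the Weibull parameters below follow Table 1, while the values of \(\alpha\) and \(\theta\) are illustrative assumptions of this sketch.

```python
# Sketch: Monte Carlo realization of the arrival process A (Eq. 4) and the
# effective service process S (Eq. 5). Weibull parameters follow Table 1;
# alpha and theta are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
alpha, theta = 0.92, 1000.0            # TCP fraction; router service rate (datagrams/s)

def weibull(beta, eta, size):
    return eta * rng.weibull(beta, size)   # numpy's weibull sampler is unit-scale

N = 100_000
P = weibull(1.34266, 10.8176, N)       # TCP arrival rate
C = weibull(0.811221, 712.344, N)      # TCP segment size (bytes)
D = weibull(1.29519, 2.72557, N)       # UDP arrival rate
B = weibull(1.10141, 826.505, N)       # UDP datagram size (bytes)

A = alpha * C * P + (1 - alpha) * B * D        # Eq. (4)
S = theta * (alpha * C + (1 - alpha) * B)      # Eq. (5)
print(A.mean(), A.std(), S.mean(), S.std())
```

The next section presents NS2 simulations for analyzing the behavior of multiplexed flows.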

4 Average end-to-end delay based on simulations

Datagram losses occur most often at the bottleneck router from local area networks (LANs) to WANs. The simulation model consists of a bottleneck node from LAN to WAN and source-destination pairs. A simple dumbbell-shaped network topology, as shown in Fig. 1, has been simulated. The simulations have been carried out on the NS2 platform, and plots of the average total end-to-end delay, which includes queuing delay along with propagation delay, transmission delay and processing delay, are given in the following subsection. Except for the queuing delay and the number of retransmissions, the remaining delay components are deterministic and are the same in all cases.
$$\begin{aligned} {\text {Avg. total end-to-end network delay}}=\sum (1+p)\left[ X + Y + Z\right] \end{aligned}$$
(6)
where p is the number of times the datagram needs to be retransmitted and X, Y and Z are the propagation, transmission and queuing delay components respectively. Clearly p, the number of retransmissions, and Z, the queuing delay, are random delay components, and p = 0 for UDP.
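As a worked single-hop instance of Eq. (6), with illustrative values for the fixed and random components:

```python
# Sketch: Eq. (6) for one hop with illustrative delay values. X and Y are
# fixed; Z (queuing delay) and p (retransmission count) are random in
# practice, with p = 0 for UDP.
X, Y = 0.052, 0.008                 # propagation and transmission delay (s)
Z = 0.030                           # sampled queuing delay (s)
p = 2                               # sampled retransmission count (TCP)
delay_tcp = (1 + p) * (X + Y + Z)   # 0.27 s: each retransmission repeats the path
delay_udp = (1 + 0) * (X + Y + Z)   # 0.09 s: no retransmissions
print(delay_tcp, delay_udp)
```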
The retransmission delay due to congestion is also a significant random parameter. The intention of the simulation has been to understand the effect of datagrams from background traffic flows on the datagrams of a tagged flow at the congested router, and to measure the variability of queuing delay and number of retransmissions in a controlled environment. For UDP there are no retransmissions. The variation in the end-to-end delay values is due to queuing delay and the number of retransmissions, all other factors remaining the same; the simulation parameters are chosen accordingly. In the simulated network, the proportions of TCP and UDP flows were varied. As the proportion of TCP in the overall traffic increases, the tagged flow's datagrams find an increased proportion of TCP datagrams in the background traffic. The interaction of datagrams from tagged and background flows contending for resources at the router, the buffer dynamics, and the effect of retransmissions due to AIMD in TCP are the factors at play, and the variation in queuing delay can thus be studied readily.
Table 2
NS2 simulation parameters

LAN bandwidth               10 Mbps
WAN bandwidth               1.5 Mbps
Access-link delay           1 ms on both sides (source and destination) in the homogeneous scenario; in the heterogeneous scenario, 1 ms on the source side and a pattern of (1, 3, \(\ldots\), 19) ms up to the last destination
Bottleneck-link delay       50 ms
Queue limits                Default
TCP type                    Std. Cubic TCP (with built-in AIMD mechanism)
TCP maximum window size     Default (1000)
Packet size                 Default (1460 bytes)
Queue management            RED at the bottleneck link and DropTail at the access link

The parameters used in the NS2 simulation are shown in Table 2. As mentioned earlier, since TCP Cubic is the most widely used variant (46.92%), all simulations have been done using this protocol. Further, RED AQM has been incorporated to reduce the overall end-to-end delay. Simulations have also been done with other TCP flavors such as TCP Reno and TCP Compound, and remarks on their comparative delay performance are included in the paper. While computing the average, 15 iterations were carried out in the NS2 simulator and end-to-end delays were evaluated for all iterations; the average over these iterations has been taken as the average end-to-end delay for the network.

4.1 Average end-to-end delay evaluation

In the Internet environment, both TCP segments and UDP datagrams are multiplexed at routers and contend for bandwidth allocation. Typically, loss of datagrams occurs at the bottleneck router where datagrams move from LAN to WAN. For the simple network topology considered for evaluation (Fig. 1), ten sources and ten corresponding destinations have been assumed. Initially, TCP traffic is taken as 100% and UDP as 0%; gradually the TCP traffic is reduced to 0% while the UDP traffic is simultaneously increased to 100%. Two network scenarios, namely homogeneous and heterogeneous, have been considered. For both cases, it was assumed that the links are error-free, the link bandwidths are as shown in Fig. 1, and the routers have infinite buffer space. In the homogeneous scenario, delays on the source side and delays on the destination side have been assumed equal, whereas in the heterogeneous scenario, delays on the source side LAN are fixed and delays on the destination side follow a pattern of 1 ms, 3 ms, \(\ldots\), 19 ms for the last destination. Infinite FTP sources were attached to generate TCP traffic, while constant bit rate (CBR) traffic sources were used to generate UDP traffic. For a typical flow, average delay values were determined by varying the proportions of TCP and UDP flows, the total number of flows remaining constant. For both cases, values of the average end-to-end delay for a typical tagged TCP or UDP flow, with different proportions of TCP in the background, are presented in Figs. 2 and 3 for the homogeneous and heterogeneous scenarios respectively. For a fair comparison, care has been taken to ensure that the WAN link was never underutilized.
Fig. 1

Network layout for homogeneous scenario

Fig. 2

Average end-to-end delay for a typical TCP and UDP flow for various proportions of TCP background traffic (homogeneous case)

Fig. 3

Average end-to-end delay for a typical TCP and UDP flow for various proportions of TCP background traffic (heterogeneous case)

In Figs. 2 and 3 the x-axis indicates an increasing number of TCP nodes, which also implies a decreasing number of UDP nodes, as the total number remains constant; e.g., 2 TCP nodes mean 8 UDP nodes, 5 TCP nodes mean 5 UDP nodes, and so forth (\(\alpha\) increasing from 0.2 to 1.0). The following observations may be drawn from Figs. 2 and 3:
  1.

    For all proportions of TCP in the background traffic, and for both the homogeneous and heterogeneous network scenarios, the average end-to-end delay for the TCP tagged flow is significantly higher compared to the UDP tagged flow. An increase in the TCP proportion in the background traffic degrades the average end-to-end delay more for the TCP tagged flow than for the UDP tagged flow.

     
  2.

    When the TCP background traffic gradually increases to 100% and the UDP background traffic decreases to 0\(\%\), the average end-to-end delay for the TCP tagged flow increases considerably: the increase is 200% for the homogeneous case and 400% for the heterogeneous case. The trend is similar for UDP, but the mean delay values of UDP are much lower than those of TCP.

     
  3.

    For 100% TCP background traffic, the average end-to-end delay for the TCP tagged flow in the heterogeneous case is 820 ms, as compared to 390 ms for the homogeneous case.

     
  4.

    Average end-to-end delay has transmission, propagation and queuing delay components. Except for the queuing delay and the number of retransmissions, the rest are fixed in the homogeneous case, and the propagation delay varies in a predeclared fashion in the heterogeneous case. The maximum average total end-to-end delay for UDP, for both homogeneous and heterogeneous cases, is 250 ms with maximum TCP datagrams in the background; the corresponding values for TCP are 380 ms and 800 ms for the homogeneous and heterogeneous cases respectively. As UDP has no retransmissions or congestion control mechanism, the significant increase in total end-to-end delay in the TCP case can definitely be attributed to the TCP congestion control mechanism and retransmissions.

     
  5.

    For 100% TCP background traffic in the heterogeneous case, the interaction of TCP datagrams is evident: the average end-to-end delay of datagrams of the tagged TCP flow degrades to its maximum when the background traffic is all TCP. This is a pertinent observation, as more than 90% of flows in the real Internet are TCP.

     
In the above observations from the simulation results, the datagram sizes of TCP, TCP ACK and UDP are fixed; therefore the significant increase at the value \(\alpha = 0.8\) can definitely be attributed to the TCP congestion control mechanism, particularly its retransmission aspect (both the number of retransmissions and the queuing delay are random variables). Without retransmissions, the total end-to-end delay of a flow would be as low as the UDP value, which has a random queuing delay component but no retransmissions.

4.2 Plots for average instantaneous end-to-end delay

For both homogeneous and heterogeneous cases, the average instantaneous end-to-end delays experienced by TCP/UDP tagged traffic have been recorded and plotted against the datagram arrival sequence for some chosen values of \(\alpha\), the fraction of TCP in the background traffic. Figures 4 and 5 show plots of the average instantaneous end-to-end delay of TCP tagged traffic for the homogeneous and heterogeneous scenarios respectively; similarly, Figs. 6 and 7 show the plots for UDP tagged traffic. The following observations can be made based on these plots.
Fig. 4

Instantaneous delay of a tagged TCP flow for various proportions of TCP in background traffic for homogeneous scenario

Fig. 5

Instantaneous delay of a tagged TCP flow for various proportions of TCP in background traffic for heterogeneous scenario

Fig. 6

Instantaneous delay of a tagged UDP flow for various proportions of TCP in background traffic for homogeneous scenario

Fig. 7

Instantaneous delay of a tagged UDP flow for various proportions of TCP in background traffic for heterogeneous scenario

  1.

    In all four cases the average instantaneous end-to-end delay for TCP tagged traffic is significantly higher compared to UDP tagged traffic. The maximum value of the UDP average instantaneous end-to-end delay is 360 ms, as compared to the maximum value for TCP, which is 18,000 ms (worst case).

     
  2.

    As the proportion of TCP in the background traffic increases, the average instantaneous end-to-end delay for both TCP and UDP tagged traffic increases. Comparing Figs. 4 and 5, it is observed that for TCP tagged traffic the delay is more adverse in the heterogeneous scenario; this is not true for UDP tagged traffic. Figure 6 shows that the average instantaneous end-to-end delay for UDP tagged traffic exhibits a lot of variation, and the variation increases with the TCP proportion in the background traffic. Figure 7, for the heterogeneous scenario, clearly indicates that the average instantaneous end-to-end delay increases with the TCP proportion in the background traffic. The reason for this may be the greedy nature of the TCP congestion control algorithm, which tries to grab all available outgoing capacity for TCP traffic. Interestingly, in the heterogeneous case not all the datagrams of the tagged UDP flow reach the destination; as the background TCP proportion increases, more datagrams reach the destination but with increased average instantaneous end-to-end delay. The variation in instantaneous end-to-end delay is greater in the homogeneous case than in the heterogeneous case, but in the homogeneous case most datagrams reach the destination. In the Internet (the heterogeneous case), most UDP based applications, like DNS, DHCP etc., are low RTT applications.

     
  3.

    In the homogeneous case, for TCP tagged traffic with 100% TCP in the background traffic, the global synchronization effect is pronounced (Fig. 4, \(\alpha =1\)).

     
  4.

    It is clear that TCP flows harm fellow TCP flows, and UDP flows too.

     

4.3 Comparison of different flavors of TCP using average end-to-end delay

To compare different flavors of TCP, average end-to-end delays for a typical TCP tagged flow, with a fraction \(\alpha\) = 0.8 of the background traffic being TCP, have been recorded. Table 3 records the average end-to-end delay measurements on the congested link, with and without RED, for different TCP flavors.
Table 3
Comparison of TCP variants: average end-to-end delay in seconds for \(\alpha =0.8\) (80% of nodes are TCP)

TCP type                              Without RED    With RED
TCP Cubic (heterogeneous case)        0.5622         0.347
TCP Cubic (homogeneous case)          0.3434         0.1981
TCP Compound (heterogeneous case)     0.3409         0.2132
TCP Compound (homogeneous case)       0.3174         0.1816
TCP Reno (heterogeneous case)         0.3894         0.1952
TCP Reno (homogeneous case)           0.3380         0.1838

It can be seen that for \(\alpha\) = 0.8 the performances of TCP Reno and TCP Compound are comparable, whereas TCP Cubic gives the worst delay performance. However, the performance of TCP Cubic improves with the use of RED, by 42.3% in the homogeneous network scenario and by 38.27% in the heterogeneous network scenario.

The observations remain similar after network scaling: the number of nodes was increased from ten to one hundred (both source and destination nodes) and an identical simulation exercise was carried out to determine the average end-to-end delay performance using TCP Cubic. Thus the average end-to-end delay worsens nearly 4 times if TCP replaces UDP, for both tagged and background traffic; TCP flows are not collaborative or cooperative. As the proportion of TCP in the background traffic increases beyond 80%, this degradation becomes quite severe and would compromise the ITU standards. We now present the estimation of mean delay and average instantaneous delay using statistical analysis in the following section.

5 Estimation of mean delay and average instantaneous delay

The number of multiplexed parallel flows, denoted by M(n), is a stochastic process in which the number of flows is random. Since TCP is the dominant protocol, the majority of the flows are TCP. These random numbers of flows result in the arrival process of datagrams at the router. These datagrams are buffered or lost depending on the buffer occupancy at that instant; buffered datagrams are served and reach the next hop, and the process continues. The interaction between datagrams of different flows with respect to delay performance is analyzed at the network layer.

To calculate the mean delay, the queue is defined using the arrival and effective service process models given in Eqs. (4) and (5) respectively. The arrival and service of datagrams happen at the network layer, where TCP segments and UDP datagrams contend at the router. The random variables P, C, B and D follow Weibull distributions with the parameters indicated in Table 1, obtained from the experimental measurements.

5.1 Mean queuing delay

The probability density functions of the Weibull distribution for the independent random variables P and C of TCP only are given by:
$$\begin{aligned} {{{\text {f}}_{P}}(t)=\frac{\beta _1}{\eta _1}\left( \frac{t}{\eta _1}\right) ^{\beta _1 -1}e^{-\left( \frac{t}{\eta _1}\right) ^{\beta _1}}} \end{aligned}$$
(7)
and
$$\begin{aligned} {{{\text {f}}_{C}}(t)=\frac{\beta _2}{\eta _2}\left( \frac{t}{\eta _2}\right) ^{\beta _2 -1}e^{-\left( \frac{t}{\eta _2}\right) ^{\beta _2}}} \end{aligned}$$
(8)
where \(\beta _1> 0\), \(\eta _1 > 0\) and \(\beta _2 > 0\), \(\eta _2 > 0\) are the shape and scale parameters of the random variables P and C, corresponding to the arrival rate and datagram sizes of the TCP part (second column of Table 1).

We calculate the product distribution of \(\alpha P C\) by a Jacobian transformation (Papoulis and Pillai 2002).

Consider, \(X=\alpha P\), \(Y=C\)

Let \(Z=X.Y\) , \(W=Y\)

Therefore \(X=Z/W\), \(Y=W\) and \(x > 0\), \(y > 0\) is equivalent to \(z> 0, w > 0\).

The transformed joint density can now be calculated as:
$$\begin{aligned} f_{Z,W}\left( z,w \right) = {\left. \frac{{{f_{X,Y}}(x,y)}}{{\left| {\partial \left( {z,w} \right) /\partial \left( {x,y} \right) } \right| }}\right| _{x = z/w,\,y = w}} , \end{aligned}$$
where the Jacobian of the transformation is defined by the standard formulation:
$$\begin{aligned} J = \frac{{\partial (Z,W)}}{{\partial (X,Y)}} = \left| {\begin{array}{*{20}{c}} {\frac{{\partial z}}{{\partial x}}}&{}{\frac{{\partial z}}{{\partial y}}}\\ {\frac{{\partial w}}{{\partial x}}}&{}{\frac{{\partial w}}{{\partial y}}} \end{array}} \right| = \left| {\begin{array}{*{20}{c}} y&{}x\\ 0&{}1 \end{array}} \right| =y .\end{aligned}$$
(9)
Thus,
$$\begin{aligned} {f_{Z,W}}(z,w) = \frac{{{f_X}\left( x \right) {f_Y}\left( y \right) }}{w} \end{aligned}$$
(10)
as X and Y are independent.
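For completeness, integrating the joint density of Eq. (10) over w gives the density of the product \(Z=\alpha PC\); this standard marginalization step, used implicitly below, reads:
$$\begin{aligned} f_{Z}(z)=\int _{0}^{\infty } f_{Z,W}(z,w)\,dw =\int _{0}^{\infty } f_{X}\left( \frac{z}{w}\right) f_{Y}(w)\,\frac{dw}{w}, \qquad z>0 . \end{aligned}$$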
A similar exercise has been carried out for the UDP proportion, using the Jacobian transformation for the independent random variables B and D, where \(f_{(1-\alpha )D}(t)\) and \(f_{B}(t)\) are the distributions of \((1-\alpha )D\) and B respectively. As can be seen from Eq. (4), the distribution of the arrival process (A) is viewed as the distribution of the sum of the random variables (\(\alpha PC\)) and (\((1-\alpha )BD\)). Since the density of a sum of independent variables is a convolution, i.e., a product in the Laplace domain, the distribution of the arrival process (A) can be written as:
$$\begin{aligned} f_{A}\left( s \right) =f_{\alpha PC}\left( s \right) f_{(1-\alpha )DB}\left( s \right) . \end{aligned}$$
(11)
Using the effective service process model given in Eq. (5), the distribution of the effective service rate (S) can be determined, with the random variables C and B representing the variable datagram sizes of TCP and UDP respectively. The density functions of the Weibull distribution for the independent random variables C and B are as follows:
$$\begin{aligned}&{{{\text {f}}_{(\theta \alpha C)}}(t)=\frac{\beta _2}{(\theta \alpha \eta _2)}\left( \frac{t}{(\theta \alpha \eta _2)}\right) ^{\beta _2 -1}e^{-\left( \frac{t}{(\theta \alpha \eta _2)}\right) ^{\beta _2}}} \end{aligned}$$
(12)
$$\begin{aligned}&{{{\text {f}}_{(\theta (1-\alpha )B)}}(t)=\frac{\beta _4}{(\theta (1-\alpha )\eta _4)}\left( \frac{t}{(\theta (1-\alpha )\eta _4)}\right) ^{\beta _4 -1}e^{-\left( \frac{t}{(\theta (1-\alpha )\eta _4)}\right) ^{\beta _4}}} \end{aligned}$$
(13)
where \(\beta _2 > 0\), \(\eta _2 > 0\) and \(\beta _4 > 0\), \(\eta _4 > 0\) are the shape and scale parameters of the random variables C and B, corresponding to TCP and UDP respectively.
Using Eqs. (12) and (13) and the Laplace transform property of distributions, the pdf of the effective service rate (S) can now be written as:
$$\begin{aligned} f_{S}\left( s \right) =f_{\theta \alpha C}\left( s \right) f_{\theta (1-\alpha )B}\left( s \right) . \end{aligned}$$
(14)
From the above discussion, and considering the arrival and service processes, the router queue can be modeled as a G/G/1 queue; the M/M/1 queue discussed earlier is a special case of the G/G/1 queue. For a G/G/1 queue, when the traffic offered to the router is high, the waiting time \(T\) at a single congested router can be approximated by an exponentially distributed random variable (Kleinrock 1976; Srikant and Ying 2013), with mean value bounded by,
$$\begin{aligned} T\le \frac{\lambda \left( \sigma _{A}^{2}+\sigma _{S}^{2} \right) }{2(1-\rho )} . \end{aligned}$$
(15)
where \(\sigma _{A}\) represents the standard deviation of the distribution of the arrival process (A), \(\sigma _{S}\) represents the standard deviation of the distribution of the effective service rate (S), and \(\rho\) represents the utilization.

The arrival process distribution and the effective service rate distribution are obtained by varying \(\alpha\) from 0.1 to 0.9 in steps of 0.1. Numerically solving Eqs. (11) and (14) for \(\sigma _A\) and \(\sigma _S\), the maximum mean delay due to a single congested router has been computed using Eq. (15) for the various values of \(\alpha\). For comparison, the analysis has been done both by considering the size of the datagrams as fixed and by varying the datagram sizes for different proportions of TCP and UDP in the background traffic.
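The computation just described can be sketched numerically as follows; \(\sigma _A\) and \(\sigma _S\) are estimated here by Monte Carlo rather than by solving Eqs. (11) and (14) in closed form, and the values of \(\lambda\), \(\theta\) and \(\rho\) are illustrative assumptions, so only the trend with \(\alpha\) is meaningful.

```python
# Sketch: sweep the TCP fraction alpha and evaluate the mean-waiting-time
# bound of Eq. (15), with sigma_A and sigma_S estimated by Monte Carlo from
# the model of Eqs. (4)-(5). lam, theta and rho are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
wb = lambda beta, eta, n: eta * rng.weibull(beta, n)  # Weibull(shape, scale) sampler
N, theta, lam, rho = 200_000, 1.0, 1e-6, 0.9

for alpha in (0.1, 0.3, 0.5, 0.7, 0.9):
    P, C = wb(1.34266, 10.8176, N), wb(0.811221, 712.344, N)   # TCP rate, size
    D, B = wb(1.29519, 2.72557, N), wb(1.10141, 826.505, N)    # UDP rate, size
    sigma_A = (alpha * C * P + (1 - alpha) * B * D).std()      # Eq. (4)
    sigma_S = (theta * (alpha * C + (1 - alpha) * B)).std()    # Eq. (5)
    T = lam * (sigma_A**2 + sigma_S**2) / (2 * (1 - rho))      # Eq. (15)
    print(f"alpha={alpha:.1f}  T_bound={T:.4g}")
```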

Figure 8 shows the relationship of the mean queuing delay \(T\) to the proportion \(\alpha\) of TCP traffic in the background for both cases: one where the mean delay is calculated considering the datagram size as a random variable, and the other considering the datagram size as constant (TCP packet size of 1500 bytes, the Maximum Transfer Unit (MTU) of Ethernet, and UDP packet size of 210 bytes). The mean queuing delay values in Fig. 8 are for a single congested router. It can be readily seen that the mean waiting time of datagrams at the router increases with the TCP proportion in the background traffic. The size of the datagrams is also a critical parameter for estimating the mean waiting time, which increases considerably with variable datagram sizes. The significant increase in the mean waiting time for variable size datagram traffic, compared to fixed size datagrams, for values of \(\alpha\) between 0.6 and 0.7, may be attributed to the parameter values determined from the empirical dataset. The mean queuing delay due to a single router increases to 100 ms when the background traffic has more than 90% TCP datagrams.
Fig. 8

Mean waiting time for various proportions of TCP where XX: Mean delay considering datagram size as random variable and YY: Mean delay considering datagram size as constant

The following subsection focuses on the analysis of the instantaneous delay of tagged datagrams, due to the multiplexing and buffering of tagged-traffic datagrams against background traffic with a finite buffer size.

5.2 Analysis for average values of instantaneous delay for tagged traffic

Each datagram at the router input can be treated independently, and hence the router queue can be modeled as a discrete time queue, where the time axis is divided into discrete slots such that only one datagram arrives in a slot and arrivals/departures of datagrams happen at slot boundaries. The channel time is slotted with size equal to the transmission time (t) required by one IP datagram, and processing has been assumed to be on a FCFS basis. The router's output port queue has been modeled as a finite single server queue (independent of the other output ports), and the objective is to estimate the mean instantaneous delay of either TCP or UDP tagged datagrams. The tagged datagrams are essentially from a typical flow of interest, and the rest of the datagrams constitute the background traffic, consisting of both TCP and UDP.

5.2.1 Estimation of mean instantaneous delay with finite buffer size

Let the tagged traffic datagrams arrive as the nth and \((n+1)\)st arrivals and let there be k slots between them occupied by background traffic, as shown in Fig. 9. The distributions of the tagged traffic (TCP or UDP), and of the inter-arrival times of the tagged traffic, are indicated in Table 1. Following a similar analysis as for the arrival/service processes, the probability density function for an inter-arrival time of \(k\) slots can be taken as the discrete Weibull pdf given by,
$$\begin{aligned} f(k)=\frac{\beta }{\eta }\left( \frac{k}{\eta }\right) ^{\beta -1}e^{-\left( \frac{k}{\eta }\right) ^{\beta }} , \end{aligned}$$
(16)
where \(\beta > 0\) and \(\eta > 0\) are the shape and scale parameters respectively, and \(f(k)=Pr\)(inter-arrival time = k slots) for the tagged traffic. The resultant arrival distribution of the background traffic is given by,
$$\begin{aligned} f_{A}(t)=f_{\alpha PC}^{{}}(t)+f_{(1-\alpha ) DB}^{{}}(t), \end{aligned}$$
(17)
where \(f_{\alpha PC}^{{}}(t)\) and \(f_{(1-\alpha ) DB}^{{}}(t)\) represent TCP and UDP arrival distributions and are determined as in Eq. (11) above.
Fig. 9

Overall traffic flow diagram

As seen from Fig. 9, \(L_{k}^{n}\) is a random variable indicating the queue length at the end of the kth time slot after the receipt of the nth datagram from the tagged traffic. For \(k=0\), let \(L_{0}^{n}\) represent the queue length immediately before the arrival of the nth datagram from the tagged traffic. At the beginning of the first time slot after the receipt of the nth datagram from the tagged traffic, there are \(L_{1}^{n}\) datagrams in the output queue, where \(L_{1}^{n}\) includes:
  1.

    number of datagrams already in the buffer immediately before the nth datagram arriving from the tagged traffic, \(L_{0}^{n}\),

     
  2.

    one datagram newly arriving from tagged traffic and

     
  3.

    \(W_{0}\) datagrams newly arriving from background traffic.

     
Thus, we can obtain the following equations:
$$\begin{aligned}&L_{1}^{n}=L_{0}^{n}+1+W_{0}^{n} , \end{aligned}$$
(18)
$$\begin{aligned}&L_{k+1}^{n}=L_{k}^{n}+W_{k}^{n}, \quad k=1,2,3,\ldots I. \end{aligned}$$
(19)
where I represents the maximum number of slots in the output queue. Let \(q_{k}^{n}(j)\) be the probability density function of \(L_{k}^{n}\), where j is the discrete random variable representing the number of datagrams in the output queue at slot (n, k). Then we have the following equations:
$$\begin{aligned}&q_{1}^{n}(j)=q_{0}^{n}(j-1)+w_{0}(j-1) \end{aligned}$$
(20)
$$\begin{aligned}&q_{(k+1)}^{n}(j)=q_{k}^{n}(j)+w_{k}(j) . \end{aligned}$$
(21)
Therefore, accounting for the departure of one served datagram per slot,
$$\begin{aligned} L_{k}^{n}=max(L_{k}^{n}-1,0). \end{aligned}$$
(22)
When the \((n+1)\)st datagram from the tagged traffic arrives at the queue and occupies the \((k+1)\)st slot, the corresponding queue length is \(L_{0}^{n+1}\). The probability distribution \(q_{0}^{n+1}(j)\), representing the distribution of the queue length before the arrival of the \((n+1)\)st datagram from the tagged traffic, can then be written as,
$$\begin{aligned} q_{0}^{n+1}(j)=\sum \limits _{k=0}^{I}f(k).q_{k}^{n}(j) . \end{aligned}$$
(23)
Equation (23) requires iterative calculation, setting the initial values to \(q_{0}^{1}(j)\) with the condition:
$$\begin{aligned} \sum _{j=0}^{I}q_{0}^{1}(j)=1, \end{aligned}$$
(24)
The delay for the tagged traffic can be estimated using the queue-length distribution at the output queue. When the \((n+1)\)st datagram of the tagged traffic arrives in the \((k+1)\)st slot, the queuing delay can be estimated based on the length of the queue, given by:
$$\begin{aligned} L_{k}^{n}=L_{(k-1)}^{n}+W_{(k-1)}^{n} , \end{aligned}$$
(25)
where \(L_{(k-1)}^{n}=\) datagrams already in the queue and \(W_{(k-1)}^{n}=\) datagrams of background traffic newly arriving in the kth slot along with the \((n+1)\)st datagram, but served before the \((n+1)\)st tagged datagram. So, the waiting time for the \((n+1)\)st datagram includes:
  1.

    time for datagrams in the buffer after arrival of the nth datagram in the \({(k-1)}\)st slot and

     
  2.

    the time taken by the background datagrams arrived along with the \((n+1)\)st datagram but served before the \((n+1)\)st datagram.

     
Thus, the pdf of the waiting time of the \((n+1)\)st datagram is simply the convolution of the two pdfs, given by:
$$\begin{aligned} q_{k}^{n}=q_{(k-1)}^{n}(j)\otimes w_{(k-1)}^{n}(j) .\end{aligned}$$
(26)
Mean instantaneous delay (\(M_{id}\)) can be expressed as
$$\begin{aligned} M_{id}=\sum _{j} L_{k}^{n}.q_{k}^{n}. \end{aligned}$$
(27)
Using Eq. (26) the above leads to
$$\begin{aligned} M_{id}=\sum _{j} L_{k}^{n} \,\,[q_{k-1}^{n}(j)\otimes w_{k-1}^{n}(j) ] . \end{aligned}$$
(28)
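A compact sketch of Eqs. (26)-(28) follows; the pmfs and the per-slot service time are illustrative assumptions, with the mean delay scaled by the slot time to express it in seconds:

```python
# Sketch: Eqs. (26)-(28). The waiting-time pmf of the (n+1)st tagged datagram
# is the convolution of the prior queue-length pmf with the background-arrival
# pmf; the mean delay is its expectation scaled by the (assumed) slot time.
import numpy as np

q_prev = np.array([0.4, 0.3, 0.2, 0.1])   # q_{k-1}^n(j), queue-length pmf (assumed)
w_prev = np.array([0.6, 0.3, 0.1])        # w_{k-1}^n(j), background arrivals (assumed)
pmf = np.convolve(q_prev, w_prev)         # Eq. (26)
j = np.arange(pmf.size)                   # datagrams served before the tagged one
slot_time = 0.008                         # transmission time per datagram (s), assumed
M_id = slot_time * np.sum(j * pmf)        # Eqs. (27)-(28)
print(f"mean instantaneous delay = {M_id * 1e3:.2f} ms")
```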
Fig. 10

Mean instantaneous delay for tagged TCP flow for various proportions of TCP in the background traffic

Figure 10 indicates the mean instantaneous delay for TCP tagged traffic for three chosen values of (\(\alpha\)).

The mean instantaneous delay has been plotted against the sequence of datagrams (n) in Fig. 10 for different values of \(\alpha\) for TCP tagged traffic. It can be readily seen that the mean instantaneous delay increases sharply as \(\alpha\) increases beyond 0.5. It is evident that as the TCP fraction in the background increases, it harms the mean instantaneous delay of the tagged TCP flow. It is also observed that the mean instantaneous delay increases considerably when the traffic load in number of datagrams increases beyond \(n=15\); this may be due to the buffer capacity becoming fully utilized.

The above observations (the analytical queuing values are for a single router, and the simulation results are for two router queues) are consistent with the analytical results shown in Fig. 8, indicating that in a controlled network environment, where propagation, transmission and processing delays are the same, the variation in average end-to-end delay is mainly due to the queuing delay component and the number of retransmissions, and is more significant for TCP tagged traffic. A considerable (fourfold) increase in queuing delay is observed as the proportion of TCP in the background traffic increases. It is therefore clear that the proportion of TCP in the background degrades the average end-to-end delay for both tagged TCP and UDP flows.

Practical use cases of the model developed in this paper include estimation of delay jitter for multiplexed flows (a very important parameter for multimedia applications in the Internet) and algorithmic modeling of the throughput of a typical UDP/TCP flow considering the effect of the background TCP proportion. These are possible extensions for future work.

6 Conclusions

This paper explores the interaction of datagrams of a TCP flow with fellow TCP flows, and also the interaction of UDP datagrams with TCP. As presumed, the delay parameters for datagrams of a typical TCP flow are substantially larger than for UDP, but the interaction gives intriguing results. The mean delay of datagrams worsens by 400% for TCP and by around 30% for UDP when the background traffic has the maximum proportion of TCP. Secondly, the average instantaneous delay for datagrams of a TCP tagged flow is substantially larger than for a UDP tagged flow when the background traffic has the maximum proportion of TCP. The delay performance values depend directly on the proportion of TCP in the background traffic. This is true for all TCP flavors, such as Reno, Cubic and Compound, even with the use of an AQM mechanism (RED). It is clear that TCP flows harm fellow flows with respect to delay performance. This affects all applications, especially the QoS of real time applications. The cause can be attributed to the queuing delay and the number of retransmissions, which are variable in nature. The interaction between TCP flows is adverse with respect to delay performance.

With the degradation in end-to-end delay being significantly higher for TCP than for UDP based traffic (approximately 4 times), there is little point in exploring new TCP based solutions, as they would always worsen the delay performance. Possibly one could explore UDP based congestion control, which may minimize queuing delays, particularly for real time applications; however, this may necessitate major architectural changes in the Internet, which would be difficult, if not impossible, to implement. Alternatively, one could explore a congestion avoidance mechanism using link-by-link congestion control at the network layer itself, as suggested by Shailendra et al. (2010b), thereby trying to avoid congestion before it happens. This, however, may also necessitate optimum sizing of router buffers. This might help in minimizing the effect of queuing delay in the Internet.


Compliance with ethical standards

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

References

  1. Afanasyev, A., Tilley, N., Reiher, P., Kleinrock, L.: Host-to-host congestion control for TCP. IEEE Commun. Surv. Tutor. 12(3), 304–342 (2010)
  2. Araldo, A.G., Rossi, D.: Dissecting bufferbloat: measurement and per-application breakdown of queueing delay. In: Proceedings of the 2013 Workshop on Student Workshop, pp. 53–56. ACM (2013)
  3. Bertsekas, D.P., Gallager, R.G., Humblet, P.: Data Networks, vol. 2. Prentice-Hall International, New Jersey (1992)
  4. Bose, S.K.: An Introduction to Queueing Systems. Springer, Berlin (2013)
  5. Briscoe, B., Brunstrom, A., Petlund, A., Hayes, D., Ros, D., Tsang, J., Gjessing, S., Fairhurst, G., Griwodz, C., Welzl, M.: Reducing internet latency: a survey of techniques and their merits. IEEE Commun. Surv. Tutor. 18(3), 2149–2196 (2014)
  6. Callegari, C., Giordano, S., Pagano, M., Pepe, T.: Behavior analysis of TCP Linux variants. Comput. Netw. 56(1), 462–476 (2012)
  7. Calzarossa, M., Serazzi, G.: Workload characterization: a survey. Proc. IEEE 81(8), 1136–1150 (1993)
  8. Cardwell, N., Cheng, Y., Gunn, C.S., Yeganeh, S.H.: BBR: congestion-based congestion control. Commun. ACM 60(2), 58–66 (2017)
  9. Casas, P., D'Alconzo, A., Fiadino, P., Bär, A., Finamore, A., Zseby, T.: When YouTube does not work? Analysis of QoE-relevant degradation in Google CDN traffic. IEEE Trans. Netw. Serv. Manag. 11(4), 441–457 (2014)
  10. Chen, H., Jin, H., Wu, S.: Minimizing inter-server communications by exploiting self-similarity in online social networks. IEEE Trans. Parallel Distrib. Syst. 4, 1116–1130 (2016)
  11. Csoma, A., Toka, L., Gulyás, A.: On lower estimating internet queuing delay. In: 2015 38th International Conference on Telecommunications and Signal Processing (TSP), pp. 299–303. IEEE (2015)
  12. Domaski, A., Domaska, J., Czachrski, T., Klamka, J.: Self-similarity traffic and AQM mechanism based on non-integer order controller. In: International Conference on Computer Networks, pp. 336–350. Springer (2017)
  13. Downey, A.B.: Evidence for long-tailed distributions in the internet. In: Proceedings of the 1st ACM SIGCOMM Workshop on Internet Measurement, pp. 229–241. ACM (2001)
  14. Gettys, J., Nichols, K.: Bufferbloat: dark buffers in the internet. Commun. ACM 55(1), 57–65 (2012)
  15. Iya, N., Kuhn, N., Verdicchio, F., Fairhurst, G.: Analyzing the impact of bufferbloat on latency-sensitive applications. In: 2015 IEEE International Conference on Communications (ICC), pp. 6098–6103. IEEE (2015)
  16. Kaur, G., Saxena, V., Gupta, J.: Detection of TCP targeted high bandwidth attacks using self-similarity. J. King Saud Univ. Comput. Inf. Sci. (2017)
  17. Kleinrock, L.: Queueing Systems, Volume 2: Computer Applications. Wiley, New York (1976)
  18. Li, S., Sun, W., Hua, C.: Optimal resource allocation for heterogeneous traffic in multipath networks. Int. J. Commun. Syst. 29(1), 84–98 (2016)
  19. Liu, X., Yang, F., Jin, Y., Wang, Z., Cao, Z., Sun, N.: Regional congestion mitigation in lossless datacenter networks. In: IFIP International Conference on Network and Parallel Computing, pp. 62–74. Springer (2017)
  20. Lochert, C., Scheuermann, B., Mauve, M.: A survey on congestion control for mobile ad hoc networks. Wirel. Commun. Mob. Comput. 7(5), 655–676 (2007)
  21. Mondal, A., Trestian, I., Qin, Z., Kuzmanovic, A.: P2P as a CDN: a new service model for file sharing. Comput. Netw. 56(14), 3233–3246 (2012)
  22. Nichols, K., Jacobson, V.: Controlling queue delay. Commun. ACM 55(7), 42–50 (2012)
  23. Papoulis, A., Pillai, S.U.: Probability, Random Variables, and Stochastic Processes. Tata McGraw-Hill Education, UK (2002)
  24. Polaganga, R.K., Liang, Q.: Self-similarity and modeling of LTE/LTE-A data traffic. Measurement 75, 218–229 (2015)
  25. Ren, Y., Li, J., Shi, S., Li, L., Wang, G., Zhang, B.: Congestion control in named data networking: a survey. Comput. Commun. 86, 1–11 (2016)
  26. Rossi, D., Muscariello, L., Mellia, M.: On the properties of TCP flow arrival process. In: 2004 IEEE International Conference on Communications, vol. 4, pp. 2153–2157. IEEE (2004)
  27. Rouskas, G., Vishwanath, A., Sivaraman, V.: Anomalous loss performance for mixed real-time and TCP traffic in routers with very small buffers. IEEE/ACM Trans. Netw. 19(4) (2011)
  28. Shailendra, S., Bhattacharjee, R., Bose, S.K.: Optimized flow division modeling for multi-path transport. In: Annual IEEE India Conference (INDICON), pp. 1–4 (2010a)
  29. Shailendra, S., Bhattacharjee, R., Bose, S.K.: Optimized flow division modeling for multi-path transport. In: Annual IEEE India Conference (INDICON), pp. 1–4 (2010b)
  30. Showail, A., Jamshaid, K., Shihada, B.: Buffer sizing in wireless networks: challenges, solutions, and opportunities. IEEE Commun. Mag. 54(4), 130–137 (2016)
  31. Srikant, R., Ying, L.: Communication Networks: An Optimization, Control, and Stochastic Networks Perspective. Cambridge University Press, Cambridge (2013)
  32. Wang, J., Liu, A., Yan, T., Zeng, Z.: A resource allocation model based on double-sided combinational auctions for transparent computing. Peer-to-Peer Netw. Appl. 1–18 (2017)
  33. Xu, Y., Yu, C., Li, J., Liu, Y.: Video telephony for end-consumers: measurement study of Google+, iChat, and Skype. In: Proceedings of the 2012 ACM Conference on Internet Measurement Conference, pp. 371–384. ACM (2012)
  34. Yang, P., Shao, J., Luo, W., Xu, L., Deogun, J., Lu, Y.: TCP congestion avoidance algorithm identification. IEEE/ACM Trans. Netw. 22(4), 1311–1324 (2014)

Copyright information

© China Computer Federation (CCF) 2019

Authors and Affiliations

  • Sneha K. Thombre (1) (corresponding author)
  • Lalit M. Patnaik (2)
  • Anil S. Tavildar (3)

  1. Cummins College of Engineering for Women, Pune, India
  2. National Institute of Advanced Studies, Indian Institute of Science Campus, Bangalore, India
  3. Vishwakarma Institute of Information Technology, Pune, India
