Network devices supporting 100G links are in demand to meet the communication requirements of computing nodes in datacenters and warehouse-scale computers. In this paper, we propose TQ and TQ-Smooth, two lightweight, fair schedulers that accommodate an arbitrarily large number of requestors and are suitable for ultra-high-speed links. We show that our first algorithm, TQ, as well as its predecessor, DRR, may produce bursty service even in the common case where flow weights are approximately equal, and we find that this burstiness can degrade the performance of buffer-credit allocation schemes. Our second algorithm, TQ-Smooth, improves short-term fairness to deliver very smooth service when flow weights are approximately equal, while still allocating bandwidth in a weighted fair manner. In many practical situations, a scheduler is asked to allocate resources in fixed-size chunks (e.g. buffer units), whose size may exceed that of (small) network packets. In such cases, byte-level fairness is typically compromised when small-packet flows compete with large-packet ones. We describe and evaluate a scheme that dynamically adjusts the service rates of request/grant buffer reservations, based on received packet sizes, to achieve byte-level fairness.
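The burstiness that the abstract attributes to DRR can be seen in a minimal sketch of classic Deficit Round Robin (Shreedhar and Varghese, cited below). The flow names, packet sizes, and `quantum` value are illustrative, not taken from the paper:

```python
# Minimal Deficit Round Robin (DRR) sketch. On each visit, a flow may send
# up to quantum + leftover deficit bytes back-to-back -- this back-to-back
# run is the bursty service the abstract refers to.
from collections import deque

def drr_schedule(flows, quantum, rounds):
    """flows: dict name -> deque of packet sizes. Returns service order."""
    deficit = {f: 0 for f in flows}
    order = []
    active = deque(f for f in flows if flows[f])
    for _ in range(rounds):
        if not active:
            break
        f = active.popleft()
        deficit[f] += quantum
        # Serve packets back-to-back while the deficit allows: the burst.
        while flows[f] and flows[f][0] <= deficit[f]:
            pkt = flows[f].popleft()
            deficit[f] -= pkt
            order.append((f, pkt))
        if flows[f]:
            active.append(f)   # still backlogged: requeue at the tail
        else:
            deficit[f] = 0     # an idle flow loses its deficit
    return order
```

With two equal-weight flows of 100-byte packets and a 400-byte quantum, DRR emits four consecutive packets of flow A, then four of flow B, rather than interleaving them packet by packet, even though both flows end up with equal bandwidth shares.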
In the common case, flow weights are likely to be equal.
The two queues that we use bear a resemblance to the hot and cold queues used in RECN. Note that we do not separate flows into hot and cold as RECN does; rather, we use the two queues to prioritize service and maintain fairness. In fact, any flow will spend some time in either queue.
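A toy model (not the paper's exact TQ algorithm) conveys how two tandem queues can both prioritize service and keep it fair: flows in the head queue are served first, a served flow moves to the tail queue, and the tail refills the head when the head drains, so every backlogged flow passes through both queues each round:

```python
# Toy two-queue scheduler (illustrative only, not the authors' TQ design).
# "head" holds flows eligible for service now; a flow that has been served
# moves to "tail" and waits there until the head drains, which bounds how
# far ahead of the others any single flow can get.
from collections import deque

def tandem_rounds(flows, services):
    head = deque(flows)   # flows eligible for service in this round
    tail = deque()        # flows already served in this round
    out = []
    for _ in range(services):
        if not head:
            head, tail = tail, head   # head drained: start a new round
        if not head:
            break
        f = head.popleft()
        out.append(f)
        tail.append(f)    # one service per visit in this toy model
    return out
```

With flows A, B, C the service sequence interleaves them strictly (A, B, C, A, B, C, ...), in contrast to the back-to-back bursts that deficit-based round robin can emit.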
Negative credits are also used in .
We ignore here the trivial case where \(f\) is the only active flow and thus receives all service.
Note that instead of per-flow request counters we could maintain per-flow request queues that store the size of each individual request. However, for large port counts, this adds significant implementation cost.
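The trade-off in the note above can be sketched as follows; the class and method names are illustrative, not from the paper. A counter aggregates all of a flow's outstanding requests into a single integer, whereas a queue keeps every individual request size and therefore needs memory proportional to the number of outstanding requests rather than the number of flows:

```python
# Two bookkeeping options for outstanding requests (illustrative sketch).
from collections import defaultdict, deque

class CounterBook:
    """One integer per flow: cheap, but individual request sizes are lost."""
    def __init__(self):
        self.pending = defaultdict(int)    # flow -> total requested units
    def request(self, flow, units):
        self.pending[flow] += units
    def grant(self, flow, units):
        served = min(units, self.pending[flow])
        self.pending[flow] -= served
        return served

class QueueBook:
    """One queue per flow: preserves each request, costs O(outstanding)."""
    def __init__(self):
        self.pending = defaultdict(deque)  # flow -> sizes of each request
    def request(self, flow, units):
        self.pending[flow].append(units)
    def grant_one(self, flow):
        return self.pending[flow].popleft() if self.pending[flow] else 0
```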
Chrysos N, Neeser F, Gusat M, Clauberg R, Minkenberg C, Basso C, Valk K (2013) Arbitration of many thousand flows at 100G and beyond. In: Proceedings of the interconnection network architecture: on-chip, multi-chip (INA-OCMC ’13), Berlin, Germany
Greenberg A, Hamilton JR, Jain N, Kandula S, Kim C, Lahiri P, Maltz DA, Patel P, Sengupta S (2009) VL2: a scalable and flexible data center network. ACM SIGCOMM Comput Commun Rev 39(4):51–62
Alizadeh M, Greenberg A, Maltz DA, Padhye J, Patel P, Prabhakar B, Sengupta S, Sridharan M (2010) DCTCP: efficient packet transport for the commoditized data center. In: Proceedings of the ACM SIGCOMM, New Delhi, India
Crisan D, Anghel AS, Birke R, Minkenberg C, Gusat M (2010) Short and fat: TCP performance in CEE datacenter networks. IEEE hot interconnects, Santa Clara, CA, USA
Neeser FD, Chrysos N, Clauberg R, Crisan D, Gusat M, Minkenberg C, Valk KM, Basso C (2012) Occupancy sampling for terabit CEE switches. In: Proceedings of the IEEE high-performance interconnects (HOTI), San Jose, CA, USA
Alizadeh M, Yang S, Katti S, McKeown N, Prabhakar B, Shenker S (2012) Deconstructing datacenter packet transport. In: Proceedings of the 11th ACM Workshop on hot topics in networks, pp 133–138
Chrysos N, Katevenis M (2006) Scheduling in non-blocking buffered three-stage switching fabrics. In: Proceedings of the IEEE INFOCOM, Barcelona, Spain
Chrysos N (2007) Congestion management for non-blocking Clos networks. In: Proceedings of the ACM/IEEE ANCS, Orlando, FL, USA
Kavvadias S, Katevenis M, Zampetakis M, Nikolopoulos D (2010) On-chip communication synchronization mechanisms with cache-integrated network interfaces. In: Proceedings of the ACM international conference on computing frontiers (CF’10), Bertinoro, Italy
IEEE (2010) P802.1Qbb/D2.3—Virtual bridged local area networks—amendment: priority-based flow control. pp 1–40
Demers A, Keshav S, Shenker S (1989) Analysis and simulation of a fair queueing algorithm. In: Proceedings of the ACM SIGCOMM, Austin, TX, USA
Katevenis M, Sidiropoulos S, Courcoubetis C (1991) Weighted round-robin cell multiplexing in a general-purpose ATM switch chip. IEEE J Sel Areas Commun 9(8):1265–1279
Parekh AK, Gallager RG (1992) A generalized processor sharing approach to flow control in integrated services networks—the single node case. In: Proceedings of the IEEE INFOCOM, Florence, Italy
Shreedhar M, Varghese G (1996) Efficient fair queueing using deficit round robin. IEEE/ACM Trans Netw 4(3):375–385
Lenzini L, Mingozzi E, Stea G (2004) Tradeoffs between low complexity, low latency, and fairness with deficit round-robin schedulers. IEEE/ACM Trans Netw 12(4):681–693
Ramabhadran S, Pasquale J (2006) The stratified round robin scheduler: design, analysis and implementation. IEEE/ACM Trans Netw 14(6):1362–1373
Yuan X, Duan Z (2008) Fair round-robin: a low complexity packet scheduler with proportional and worst-case fairness. IEEE Trans Comput 58(3):365–379
Chrysos, N., Neeser, F., Gusat, M. et al. Tandem queue weighted fair smooth scheduling. Des Autom Embed Syst 18, 183–197 (2014). https://doi.org/10.1007/s10617-014-9132-y
Keywords: Packet Size · Active Flow · Flow Weight · Packet Scheduler · Service Credit