A new QoE fairness index for QoE management
Abstract
The user-centric management of networks and services focuses on the Quality of Experience (QoE) as perceived by the end user. In general, the goal is to maximize (or at least ensure an acceptable) QoE, while ensuring fairness among users, e.g., in terms of resource allocation and scheduling in shared systems. A problem arising in this context is that the notions of fairness commonly applied in the QoS domain do not translate well to the QoE domain. We have recently proposed a QoE fairness index F, which solves these issues. In this paper, we provide a detailed rationale for it, along with a thorough comparison of the proposed index and its properties against the most widely used QoS fairness indices, showing its advantages. We furthermore explore the potential uses of the index in the context of QoE management and describe future research lines on this topic.
Keywords
Quality of experience (QoE) · Quality of service (QoS) · Fairness · Fairness index
Introduction
Quality of Experience (QoE) is “the degree of delight or annoyance of the user of an application or service” [26]. It is generally accepted that the quality experienced by a user of a networked service depends, in a non-trivial and often non-linear way, on the network’s QoS. Moreover, the QoE of different services is often different given the same network conditions; i.e., the way in which QoS can be mapped to QoE is service-specific. For example, voice services can usually withstand higher loss rates than video streaming services, but are in turn more sensitive to large delays. Hence, given a network condition with certain QoS characteristics, the QoE experienced by users of different services can vary significantly. From the point of view of fairness, as we will see, we need not concern ourselves with the different aspects of how QoS affects QoE for different services, but rather how different users’ expectations, in terms of QoE, are affected by the underlying QoS: are most (or all) users receiving similar quality levels, regardless of the services they use?
Standards such as ETSI TS 102 250-1 V2.2.1 [11] specify how to compute various QoS metrics and highlight the need to consider customer QoE targets. However, fairness aspects are not considered. From a network operator’s point of view, QoE is an important aspect in keeping customers satisfied, e.g., decreasing churn. This has led to a number of mechanisms for QoE-driven network resource management, aimed at maintaining quality above a certain threshold for every user (or, in some proposals, at least for “premium” users). An issue common to all those efforts is that of dividing the available resources among users so as to maintain a satisfied customer base. In this paper, we explore (in depth) a notion of QoE fairness, first introduced in our previous work [15], to quantify the degree to which the users sharing a network, and using a variety of services on it, achieve commensurate QoE. We expound upon both the concept of QoE fairness and the proposed QoE fairness index. We show that QoS-fair methods of resource distribution among users do not, in general, result in QoE-fair systems, even in a single-service scenario, and therefore that QoE fairness needs to be considered explicitly when evaluating the performance of management schemes. We further illustrate the differences between QoS fairness and QoE fairness indices by means of concrete case studies.
The remainder of this paper is structured as follows. “Background and related work: notion of fairness and its applications” provides background on the notion of fairness in shared systems and the networking domain, and discusses the move to considering fairness from a user perspective. The move from QoS to QoE management and the motivation for considering QoE fairness are then further discussed in “From QoS to QoE management” . “QoE fairness index” specifies the requested properties of a fairness index, while “Relative standard deviation and Jain’s fairness index” introduces the commonly used relative standard deviation (RSD) and Jain’s index. The properties of Jain’s index are further elaborated in “Issues with Jain’s index for QoE fairness”. “Defining a QoE fairness index” presents the QoE fairness index we proposed [15], and the rationale behind it. We provide an example of its application for web QoE and video streaming QoE to demonstrate its relevance for benchmarking and system design in “Application of the QoE fairness index”. Finally, “Conclusions and discussion” concludes this work and discusses further research issues.
Background and related work: notion of fairness and its applications
Notion of fairness in shared environments
Fairness in shared systems has been widely studied as an important system performance metric, with diverse application areas but no universal metric. In general, approaches to quantifying fairness have relied mainly on measures such as second-order statistics (variance, standard deviation, coefficient of variation), entropy-based measures, and the difference to an optimal solution, e.g., [6]. A key issue is defining what is considered to be “fair”, and then designing and evaluating various scheduling policies in terms of fairness. For example, while proportional fairness relates to the idea that it is fair for jobs to receive response times proportional to their service times, temporal fairness respects the seniority of customers and the first-come-first-served policy. Wierman [46] provides an overview and comparison of various scheduling policies focused on guaranteeing equitable response times for all job sizes.
Avi-Itzhak et al. [1] address the applicability of various fairness measures for different applications involving queue scheduling, such as call centers, supermarkets, and banks.
A fairness measure is inherently linked to some kind of performance objective, such as minimizing waiting times or maximizing the amount of allocated resources. A commonly studied trade-off when considering different resource allocation optimization objectives is that between efficiency and fairness [3, 24]. Moreover, a key question that arises is at which granularity level fairness should be quantified and measured [1]. Related to granularity levels is also the question of the time scales at which fairness is calculated, with most QoS fairness measures used in the literature (such as max–min fairness and Jain’s fairness index [20]) reflecting long-term average system fairness. In contrast, a system may be considered short-term fair [9] if, for N competing hosts, the relative probability of each host accessing a shared resource is 1/N in any short interval. Deng et al. [9] further note that while short-term fairness implies long-term fairness, long-term fairness does not ensure short-term fairness.
QoE fairness, just like QoE, can be considered at different time scales, and its applicability can vary accordingly.
Time scale | Duration | Interpretation | Example: web QoE | Example: video QoE | Related network metrics
Instantaneous | Tens of ms | In-session | Not applicable | Video frame | Throughput
Short term | Seconds | In-session | Web objects, single page | DASH segment, single scene | Avg. throughput, latency
Mid term | Minutes | Single-session | Web session | Video scene, short clips | Aggregated over time
Long term | Hours, days | Multi-session | Commonly visited sites | Several episodes | Aggregated over time
Notion of fairness in networking
In networking, fairness in resource allocation and scheduling is either linked to sharing resources evenly among entities, or to scaling the utility function of an entity in proportion to others. Flow-based resource sharing, e.g., max–min fairness, is the foundation of the design of TCP and of fair queuing scheduling approaches [8, 36]. A resource allocation is said to be max–min fair if the bit rate of one flow cannot be increased without decreasing the bit rate of a flow that has a smaller bit rate. This definition puts emphasis on maintaining high values for the smallest rates, even though this may come at the expense of network inefficiency [25].
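To make the definition concrete, a max–min fair allocation over a single shared link can be computed by progressive filling (water filling): all unsatisfied flows are raised at the same rate until a demand is met or the capacity is exhausted. The sketch below is a minimal illustration; the function name and example values are ours, not from the cited works.

```python
def max_min_fair(capacity, demands):
    """Progressive filling: repeatedly split the remaining capacity
    equally among unsatisfied flows; flows whose demand fits within
    their equal share are capped at their demand and removed."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        # flows whose residual demand fits into the equal share
        satisfied = {i for i in active if demands[i] - alloc[i] <= share}
        if satisfied:
            for i in satisfied:
                remaining -= demands[i] - alloc[i]
                alloc[i] = float(demands[i])
            active -= satisfied
        else:
            for i in active:
                alloc[i] += share
            remaining = 0.0
    return alloc

# 10 Mbps link, demands of 2, 8 and 10 Mbps: the small flow is fully
# served, and the rest is split evenly among the two larger flows.
print(max_min_fair(10, [2, 8, 10]))  # [2.0, 4.0, 4.0]
```

Note how the resulting allocation satisfies the definition: no flow’s rate can be raised without lowering the rate of a flow that is no better off.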
In a more general and utilitydriven approach, proportional fairness was introduced in the seminal works by Kelly [22] and Kelly et al. [23]. Questioning the notion of max–min fairness, Kelly et al. [23] argue that bandwidth sharing should be driven by the objective of maximizing overall utility of flows, assuming logarithmic utility functions. Weighted proportional fairness is further defined as the scaling of an entity’s utility function relative to others, such that the entities will allocate flow rates so that the cost they cause will equal the weight they choose [22].
An alternative bandwidth-sharing approach is that of \(\alpha \)-fairness and the associated utility maximization. Mo and Walrand [28] propose this as a decoupled fairness criterion, which each user can apply to achieve fairness without considering the behavior of other users. Bonald and Proutiere [4] introduce the notion of balanced fairness, referring to allocations for which the steady-state distribution is insensitive to any traffic characteristics except the traffic intensities. They note that this insensitivity property does not hold for utility-based allocations such as max–min and proportional fairness, where an optimal allocation process depends on detailed traffic characteristics such as the flow arrival process and the flow size distribution.
Any resource scheduling allocation between different entities (users, applications, flows/sessions, bitstreams) must embody a notion of fairness. For example, according to the Generalized Processor Sharing (GPS) model, each host is assigned a fair portion of a shared resource in any time interval [36]. While GPS has a binary outcome (a system is either fair or not), other metrics (such as the max–min fairness index) quantify the fairness level when the system is not perfectly fair [9]. A QoS fairness index should thus reflect the distance between the actual and the idealised allocation, relative to the resource amount \(x_i\) allocated to entity i in comparison with the other entities. Various measures have been proposed, both for measuring short- and long-term fairness, as discussed previously.
The most frequently used QoS fairness metric is Jain’s index [20], given by the ratio between the square of the first-order moment and the second-order moment of the resources \(x_i\) allocated to entity i. Jain’s index primarily assesses long-term fairness (e.g., averaged per user or session), but can also evaluate short-term fairness by considering a sliding-window average of \(x_i\). Jain’s fairness index has also been used to improve so-called transient fairness in the context of congestion control, when computing the optimal initial shaping rate for new flows entering a mobile network (rather than using a fixed value and/or a Slow Start-like method) [35].
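For reference, Jain’s index for an allocation \(x=(x_1,\dots ,x_n)\) is \(J(x)=\left( \sum _i x_i\right) ^2 / \left( n \sum _i x_i^2\right) \); a short sketch (variable and function names are ours):

```python
def jain_index(x):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1]."""
    s = sum(x)
    return (s * s) / (len(x) * sum(v * v for v in x))

# A perfectly even allocation yields J = 1 ...
print(jain_index([5.0, 5.0, 5.0, 5.0]))       # 1.0
# ... and if only k of n users receive an (equal) share, J = k/n.
print(jain_index([1.0, 1.0, 1.0, 1.0, 0.0]))  # 0.8
```

The k/n property is what makes J intuitive for resource shares on a ratio scale; the issues discussed later in this paper arise only when it is applied to interval-scaled QoE values.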
Apart from Jain’s index, other indices which (at least partly) measure the fairness of shared resources are: the variance, the coefficient of variation, and the ratio between the maximum and minimum access share of a host (max–min index [34]).
Fairness from the user’s perspective
While QoS fairness has been well established in the networking community, less focus has been put on considering fairness from a truly user-oriented perspective. Following Kelly’s theoretical notion of weighted proportional fairness, Briscoe [5] sharply criticized flow-rate fairness, and argued that fairness should be considered from the point of view of congestion costs (cost fairness) or user benefits. He states that if fairness is defined between flows, then users can simply create more flows to get a larger resource allocation. Moreover, flow fairness is defined instantaneously, and has no necessary relation to real-world fairness over time. In other words, Briscoe’s criticism of flow-level fairness leads to the notion that fairness should be considered at a higher level, where real-world entities, such as people or organizations, are considered.
Following this perspective, recent papers have argued that a QoS-fair system is not necessarily QoE fair, e.g., Mansy et al. [27], given the lack of consideration of service QoE models. Such models specify the relationships between user-level QoE and various application-layer performance indicators (e.g., file loading times, video rebuffering) or influence factors such as device capabilities, context of use, network and system requirements, user preferences, etc.
As an example, we consider QoE fairness in the context of bottleneck link sharing among adaptive video streams, where the on/off nature of flows results in inaccurate client-side bandwidth estimation and leads to potentially unfair resource demands [13, 27, 37].
De Cicco et al. [7] propose a client-side algorithm which avoids on/off behavior until reaching the highest possible playback quality. However, since they focus on QoS fairness, problems such as heterogeneous user devices remain unaddressed, and thus so does the issue of achieving QoE fairness. Georgopoulos et al. [13] proposed an OpenFlow-assisted system that allocates network resources among competing adaptive video streams originating from heterogeneous clients, so as to achieve user-level (QoE) fairness. The allocation utilizes utility functions relating bitrate to QoE, whereby the quality metric used to evaluate QoE is the objectively measured Structural Similarity Index (SSIM). They evaluate their system against others by comparing mean achieved QoE and QoE variance.
Mansy et al. [27] also argue that typical flow-rate (QoS) fairness ignores user-level fairness and is ultimately unfair, and thus propose a QoE fairness metric in the range [0; 1] based on Jain’s fairness index. Their metric considers a set of QoE values corresponding to the bitrate allocation, calculated taking into account factors such as user screen size, resolution, and viewing distance. Further, Petrangeli et al. [37] incorporate the notion of maximizing fairness, expressed via the standard deviation of clients’ QoE, into a novel rate adaptation algorithm for adaptive streaming. Villa and Heegaard [45] specify a ‘perceived fairness metric’ as the difference between the worst and best performing streaming sessions in terms of the average number of rate reductions (i.e., discrimination events) per minute. This is, however, an application-level (and application-specific) QoS metric, and not a general QoE fairness index.
Going beyond relating QoE to allocated bitrate, Gabale et al. [12] measure video-delivery QoE in terms of the number and duration of playout stalls, with the objective of fairly distributing stalls across clients. Mu et al. [31] propose a solution for achieving user-level fairness of adaptive video streaming, exploiting video quality, switching impact, and cost efficiency as fairness metrics. QoE fairness is computed based on the relative standard deviation (coefficient of variation) of QoE values. In their work on computing a benchmark QoE-optimal adaptation strategy for adaptive video streaming, Hoßfeld et al. [16] use Jain’s fairness index to show that QoE can be shared in a fair manner among multiple competing streams.
It is clear that many approaches use application-level QoS metrics (like the number of stalls, video bitrate, or video quality switches) and measures such as Jain’s fairness index or the coefficient of variation to evaluate systems in terms of QoE fairness, e.g., [16, 21, 27, 42]. In the remainder of the paper (“Relative standard deviation and Jain’s fairness index” and “Issues with Jain’s index for QoE fairness”), we will argue that these measures are not necessarily suitable for QoE fairness.
Application of fairness index: (benchmarking of) QoE management in resource constrained environments
An important consideration is the applicability of a QoE fairness index, for example in the context of scheduling, resource assignment, optimization, etc.
For the most part, approaches discussed in the previous sections aim to exploit the notion of QoE fairness for optimized QoEdriven network resource allocation, often in the context of a concrete service. We focus instead on a fairness index independent of the underlying service and QoE model used. We have defined a generic QoE fairness index to serve, e.g., as a benchmark when comparing different resource management techniques in terms of their fairness across users and services (Fig. 1).
In the following section, we further elaborate on the motivation of going from QoS to QoE management, and on the need to consider QoE fairness in that context.
From QoS to QoE management
A general view on fairness
In other areas, such as ethics and economics, fairness does not, in general, relate to utility, but rather to how resources are distributed among actors. We note in particular that a better system is not necessarily fairer, and neither is a fairer system necessarily better. Utility and fairness are orthogonal concepts.
For a simplified (and light-hearted) view on the orthogonality between fairness and utility, we could draw an analogy to the Cold War-era superpowers and their economic models. In the Soviet model, there was an emphasis on fairness, but the overall quality of life (QoL) was low (i.e., almost everyone had similarly low QoL). In the American model, the emphasis was on quality of life, but only for those who could achieve it on their own, leading to higher average QoL but much lower fairness (QoL was much more variable across sectors of the population). While the economic and societal merits of each approach are arguably not settled, we can draw a parallel to the notions presented in this paper, namely that the overall QoE achieved on a system is not directly related to how fair the system is, and vice versa. Depending on the goals and context of whoever is in charge of managing the quality (in the context of this paper, an ISP, for instance), the relative weight of each can be valued differently.
Why QoE management over QoS management?
Our main working assumption is as follows: network operators strive to keep their users sufficiently satisfied with their service that they will not churn, while simultaneously trying to maximize their margins. There are different ways in which an operator can go about this (e.g., lower prices, higher speeds, bundled services), but conceptually, they all lead to a notion of utility, or perceived value, that the users derive from their network connection.
Operators have a limited resource budget, and how they allocate it will have a (possibly large) impact on the users’ utility. One option, for example, would be to distribute the network capacity evenly across users. However, it is easy to see that this fails if users run applications with different QoS requirements. While the allocation may seem reasonable from the QoS point of view, it fails to account for the users’ utility, which varies with the application or service in use. In this context, QoE provides a reasonable proxy measure for utility, and if the operator were to take QoE into account instead of QoS, a better distribution of its resources could be achieved (for instance, assigning more bandwidth to users who are watching video than to those who are just browsing the web, or providing expedited forwarding for users of real-time services such as VoIP or videoconferencing).
Let us consider a hypothetical scenario to illustrate the difference between QoS-based management and QoE-based management, as well as between QoS fairness and QoE fairness. We assume a video service delivered using HTTP Adaptive Streaming (HAS), with an associated QoE model Q that takes into account the device on which the user is accessing the content (that is, as in the E-model for voice, mobile devices have a so-called “advantage factor”, which considers, e.g., convenience of use alongside device-specific limitations such as screen resolution). As the simplest scenario, we consider two users \(U_l\) and \(U_m\), accessing the service (from a laptop and a mobile phone, respectively) over a shared link with capacity \(C < 2R_{MAX}\), where \(R_{MAX}\) is the bitrate of the highest-quality video representation available. Now, a QoS-fair distribution of resources would result in both \(U_l\) and \(U_m\) having the same available bandwidth \(b<R_{MAX}\). However, given the different devices being used, their QoE, as per Q, could be significantly different, with \(U_m\) receiving higher QoE (due to the advantage factor). If the operator were to consider QoE fairness^{1} instead, the resource distribution could result in \(U_m\) and \(U_l\) receiving \(b_m < b \le b_l \le R_{MAX}\), respectively, and their corresponding \(Q(b_m)\) and \(Q(b_l)\) values being closer together (i.e., more QoE-fair). Depending on the relationship between the \(b_i\) values and \(R_{MAX}\), both users could even experience their maximal possible quality.
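The scenario can be sketched numerically. The QoE curves below are purely hypothetical (logarithmic in bandwidth, with the mobile curve rising faster as a stand-in for the advantage factor; none of the constants come from an actual QoE model), and a simple bisection equalizes the two users’ QoE estimates under the capacity constraint:

```python
import math

# Hypothetical QoE curves on a [1, 5] scale; illustrative only.
def q_laptop(b):   # b: bandwidth in Mbps
    return min(5.0, 1.0 + 1.3 * math.log1p(b))

def q_mobile(b):   # "advantage factor": higher QoE at the same bandwidth
    return min(5.0, 1.0 + 2.0 * math.log1p(b))

def qoe_fair_split(capacity, q_a, q_b, iters=60):
    """Bisect on user A's share b_a so that q_a(b_a) = q_b(capacity - b_a)."""
    lo, hi = 0.0, capacity
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if q_a(mid) < q_b(capacity - mid):
            lo = mid   # A is worse off: give A more bandwidth
        else:
            hi = mid
    return 0.5 * (lo + hi)

C = 6.0                     # shared capacity, with C < 2 * R_MAX
b_l = qoe_fair_split(C, q_laptop, q_mobile)
b_m = C - b_l
# b_l > b_m: the laptop user gets more bandwidth, and both end up
# with (nearly) identical QoE -- a QoE-fair, QoS-unfair allocation.
```

The bisection relies only on both curves being continuous and monotone in bandwidth, which holds for typical QoE models.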
The use of QoE models to solve this resource allocation problem allows the operator to be “closer” to the users’ needs in terms of service quality.
On the need for QoE fairness
Besides keeping their users sufficiently satisfied, operators may care about doing so in a fair manner. Whereas in many cases users will not be aware of the quality experienced by other users, there are several contexts in which they may be (e.g., shared activities, applications involving social media), and this can become a relevant factor; distributing the resources in a “fair way” can thus be a smart business practice for operators. As discussed above, what is fair in the QoS domain may not be fair in the QoE domain, and so a notion of QoE fairness becomes necessary. We note that this applies not only to scenarios involving multiple different services, but also to scenarios where a single service is considered. In what follows, we focus on these single-service scenarios, but the contributions presented herein hold for multi-service scenarios as well, provided that QoE models for those services are available and comparable, which to the best of our knowledge is still an open problem.^{2}
QoE fairness and QoE management

QoE fairness can be incorporated into QoE management in different ways, for example:

A two-step approach, first maximizing the average QoE and then, in a second step, maximizing fairness while maintaining the previously determined average quality level.

An approach based on utility functions, where the optimization targets (e.g., cost minimization, average quality maximization, fairness maximization) are combined into a utility function.
QoE fairness index
QoE models and QoE fairness
We have proposed a QoE fairness index [15], F(Y), which enables us to assess the fairness of a provided service, for which we assume that we have a set of QoE values (Y) produced by a QoE model (about whose particulars we need not worry) mapping a set of QoS parameters x to a unique QoE estimate y.
In resource management, network and service providers already use a notion of fairness at the QoS level, striving to allocate a fair share of resources (e.g., bandwidth) to each segment/session/user. However, as we will discuss in this article, the notion of QoS fairness and fair share resource allocation will in general not provide QoE fairness, and a new QoE fairness index is required in order to assess the fairness at the QoE level.
L and H are the lower and upper bounds of the QoE scale, respectively, e.g., \(L=1\) (‘bad quality’) and \(H=5\) (‘excellent quality’) when using a 5-point absolute category rating scale. As an example of a QoE model, \(y = Q(x)\) is the mean opinion score (MOS) value corresponding to QoS x. In the literature, such QoE models are often derived from subjective user studies, and typically only the MOS is used. However, other QoE metrics (like the median, quantiles, etc.) may be of particular interest for service providers [14], which may be reflected by the mapping function Q.
Desirable properties of a QoE fairness index
 (a)
Population size independence: it should be applicable to any number of users. If the QoE values emerging in the system follow a certain distribution Y, then the actual number of users should not affect the fairness index. Let \(Y_n\) be a set of n samples of the RV Y. We demand: if \(Y_n \sim Y\) and \(Y_m \sim Y\), then \(F(Y_n)=F(Y_m)\), even if \(n \ne m\). For example, the absolute difference \(D=\sum _{i=1}^n |Y_i - {\text{E}}[Y]|\) from the average QoE \({\text{E}}[Y]\) is a measure of the diversity of QoE values in the system. However, the more users n are in the system, the larger the value of D may get. Hence, such a metric is not suitable for quantifying QoE fairness. The sum of Y and the standard error of Y likewise depend on the sample size and hence violate this property, while the expected value and the standard deviation fulfill it.
 (b)
Scale and metric independence: the unit of measurement should not matter (for QoE this means independence of the L and H values). The main intention of this property is that the unit does not influence the fairness index. For example, it does not matter whether kbps or Mbps is used when considering network throughput. Jain’s index requires the measurement scale to be a ratio scale with a clearly defined zero point. On such a ratio scale, scale and metric independence can be formulated as \(F(aY)=F(Y)\) for \(a>0\). However, QoE is measured on a category or interval scale, see also “Relative standard deviation on an interval scale”. Therefore, scale and metric independence means that the fairness index is the same when the QoE values are linearly transformed (to another interval scale). We demand: \(F(aY+b)=F(Y)\) for \(a\ne 0\) and any b. Please note that a negative value of a means that the interpretation of the QoE values is inverted: instead of the degree of delight of the user, Y, the annoyance or dissatisfaction is expressed by \(-Y\).
 (c)
Boundedness: the fairness index should be bounded (without loss of generality it is set to be between 0 and 1). A bounded fairness index enables comparison of different sets of QoE values (e.g., from different applications) if the fairness index is mapped on the same value range. We demand: \(F(Y) \in [0;1]\).
 (d)
Continuity: the fairness index should take continuous values, and changes in the resource allocation should change the index (e.g., the max–min ratio does not satisfy this, since it considers only the maximum and the minimum, and not the values of \(x_i\) in between). We demand: \(F(Y)\in \mathbb {R}\) and \(F(Y)\ne F(Y')\) if \(Y_i=Y'_i\) for all \(i \ne j\), but \(Y_j\ne Y'_j\) for some j. Please note that continuity makes it possible to discriminate between systems. Although a discrete fairness index may also be useful in practice, the discriminative power of a continuous index is beneficial in QoE management.
 (e)
Intuition: the fairness index should be intuitive: a high value if fair (\(F(Y)=1\) is “perfect” fairness), and a low value if unfair (\(F(Y)=0\), if attainable, is totally unfair). \(F(Y)=1\) means that all users get the same QoE. The most unfair system is one in which half of the users obtain the best quality and the other half the worst quality.
 (f)
QoE level independence: the fairness index is independent of the QoE level, i.e., of whether the system achieves good or bad QoE. As discussed in “From QoS to QoE management”, overall QoE and QoE fairness are orthogonal concepts, and thus we want the QoE fairness index to be independent of the overall QoE of the system. We demand: the fairness statistic F(Y) shall be independent of the sample mean \({\text{E}}[Y]\). The theorem by Basu [2] shows that the sample variance and standard deviation fulfill this property and are independent of the sample mean. Therefore, we can concretize this property using the variance of the QoE values. We demand: given two systems with \({\text {Var}}[Y_1]={\text {Var}}[Y_2]\) and \({\text {E}}[Y_1]\ne {\text {E}}[Y_2]\), then \(F(Y_1)=F(Y_2)\). A simple example for the rationale of this property is as follows. Let us assume that all users experience fair QoE (3 on a 5-point scale); the system is totally fair. If all users experience good QoE (4 on a 5-point scale), the system is obviously better, but it is not fairer. Please note that a shift in QoE (i.e., changing the QoE level) without changing the dispersion of QoE values around the mean does not affect the fairness index.
As an example, we expect a system (I) with an average QoE value \(\bar{y}=4\) on a 5-point ACR scale (\(50\%\) of users with \(y=3.5\) and \(50\%\) with \(y=4.5\)) and a system (II) with an average QoE \(\bar{y}=2\) (\(50\%\) of users with \(y=1.5\) and \(50\%\) with \(y=2.5\)) to have the same fairness.
We would like to highlight that property (b), scale and metric independence, and property (f), QoE level independence, are key features. Since QoE is given on arbitrary interval scales, any linear transformation must not influence the fairness index. QoE level independence is necessary to provide higher flexibility in QoE management: it makes it possible to mimic combined utility functions with relevance factors (e.g., for fairness, costs, overall QoE) defined by the provider. The utility values are then easily derived as \(U(Y,F_Y)\). The other features (population size independence, boundedness, continuity, intuition) are desired so as to have a mathematically “nice” metric which is intuitive and easy to interpret.
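These properties can be checked numerically for the index \(F = 1 - 2\sigma /(H-L)\) proposed in [15], with \(\sigma \) the population standard deviation of the QoE values (the sample values below are arbitrary):

```python
import statistics

def f_index(y, L=1.0, H=5.0):
    """QoE fairness index from [15]: F = 1 - 2*sigma / (H - L)."""
    return 1.0 - 2.0 * statistics.pstdev(y) / (H - L)

y = [1.5, 2.0, 3.5, 4.5]   # QoE values on a [1, 5] scale

# (a) population size independence: replicating the user population
# leaves F unchanged
assert abs(f_index(y) - f_index(y * 100)) < 1e-12

# (b) scale and metric independence: F(aY + b) = F(Y), e.g., flipping
# the scale (a = -1, b = 6) or mapping linearly onto [0, 100]
flipped = [-v + 6.0 for v in y]
rescaled = [(v - 1.0) / 4.0 * 100.0 for v in y]
assert abs(f_index(y) - f_index(flipped)) < 1e-12
assert abs(f_index(y) - f_index(rescaled, L=0.0, H=100.0)) < 1e-12

# (f) QoE level independence: a constant shift changes the QoE level
# but not the fairness
assert abs(f_index(y) - f_index([v + 0.5 for v in y])) < 1e-12
```

Boundedness and intuition follow as well: all users at the same value gives \(F=1\), while half at L and half at H (maximal dispersion) gives \(F=0\).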
We also note that in Hoßfeld et al. [15], we demanded additional properties (deviation symmetry and validity for multi-applications) for a QoE fairness metric. However, after receiving reviewer feedback on this work, we carefully analyzed those properties and revised them. In particular, we found that they follow from the set of properties (a)–(f) above. We discuss these derived properties below.
Additional properties derived from desirable properties
 (g)
Deviation symmetric: the fairness index should only depend on the absolute value of the deviation from the mean value, not on whether it is positive or negative. This property follows from (b). When considering the distribution Y of QoE values, the flipped distribution \(Y'\) (i.e., a reflection about the middle of the QoE scale) is simply \(Y'=-Y+L+H\). Thus, \(F(Y)=F(Y')\) due to property (b) with \(a=-1\) and \(b=L+H\). Deviation symmetry can also be seen from property (f): \({\text {Var}}[Y'] = (-1)^2 {\text {Var}}[Y] = {\text {Var}}[Y]\), and hence \(F(Y') = F(Y)\).
 (h)
Valid for multi-applications: the fairness index should reflect cross-application fairness (and not only fairness between users of the same application). Property (h) requires that a set of suitable QoE models exists for the applications considered. If the QoE models fulfill this, then the fairness index fulfills this property too. QoE and QoE models are application-specific, and how to compare QoE values from different applications is a separate and challenging topic that is outside the scope of this paper.
Relative standard deviation and Jain’s fairness index
Arguably, the two most common indices used in the literature for quantifying QoE fairness are the relative standard deviation (RSD) and Jain’s fairness index. They rely on second-order moments of the QoE values Y (a random variable resulting from mapping the QoS parameters X, another random variable, through the QoE model Q; \(Y=Q(X)\)) in a system to numerically express the dispersion of QoE values across users.
Relative standard deviation (RSD)
This is illustrated in Fig. 4, which shows the maximum standard deviation (\(\sigma _{\max }(\mu )\)) and maximum RSD (\(c_{\max }(\mu )\)) as a function of the average QoE (\(\mu \)) on a 5-point scale. It can be observed that the maximum RSD \(c_{\max }\) is not achieved for the most unfair system at \(\mu =3\) but at \(\mu _{\max }=1.67\).
Thus we conclude that the RSD is not an intuitive fairness measure, as the most unfair system (Eq. 12) does not reach the maximum RSD (Eq. 9).^{3} From Eq. (9), we further see that the bounds of the RSD depend on the actual rating scale. If \(L=0\), moreover, the RSD is not bounded and violates property (c) ‘boundedness’. The RSD also trivially violates property (f) ‘QoE level independence’, as it depends on the average QoE value (Eq. 2).
Furthermore, the RSD does not fulfill property (g) ‘deviation symmetric’, as demonstrated in two simple scenarios, cf. Table 2. In scenario (A), 90% of users experience the best QoE and 10% experience the worst QoE. In scenario (B), the opposite ratio is observed, i.e. 10% of users experience the best QoE and 90% experience the worst QoE. Clearly, scenario A leads to better QoE than scenario B; however, both systems are equally unfair. Nevertheless, the RSD differs between the two scenarios, i.e. \(c_A\ne c_B\). The RSD is not deviation symmetric.
Illustrative scenario and fairness indexes
id  Best QoE (%)  Worst QoE (%)  Avg.  Std.  RSD  J  F 

QoE values with \(L=1\) and \(H=5\)  
(A)  90  10  4.6  1.2  0.26  0.94  0.40 
(B)  10  90  1.4  1.2  0.86  0.58  0.40 
Normalized QoE values with \(L=0\) and \(H=1\)  
(C)  90  10  0.9  0.30  0.33  0.90  0.40 
(D)  10  90  0.10  0.30  3.00  0.10  0.40 
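The values in Table 2 can be reproduced in a few lines of Python (a sketch; the helper names `rsd`, `jain`, and `fairness` are ours):

```python
from statistics import mean, pstdev

def rsd(y):
    """Relative standard deviation c = sigma / mu."""
    return pstdev(y) / mean(y)

def jain(y):
    """Jain's fairness index J = (sum y)^2 / (n * sum y^2)."""
    return sum(y) ** 2 / (len(y) * sum(v * v for v in y))

def fairness(y, L, H):
    """Proposed QoE fairness index F = 1 - 2*sigma / (H - L)."""
    return 1 - 2 * pstdev(y) / (H - L)

# Scenario (A): 90% of users get the best QoE (5), 10% the worst (1).
# Scenario (B): the opposite ratio.
A = [5] * 90 + [1] * 10
B = [1] * 90 + [5] * 10
print(round(rsd(A), 2), round(jain(A), 2), round(fairness(A, 1, 5), 2))
print(round(rsd(B), 2), round(jain(B), 2), round(fairness(B, 1, 5), 2))
# RSD and J differ between (A) and (B); only F = 0.4 is identical.
```

Only F returns the same value for the two mirrored (and equally unfair) scenarios.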
Jain’s fairness index J
Issues with Jain’s index for QoE fairness
In the following, we demonstrate that Jain’s fairness index violates several desirable properties introduced in “Desirable properties of a QoE fairness index”.^{4} We further illustrate severe issues for its application in the QoE domain.
Scale and metric dependency of J
The scale dependency of J is caused by the dependency of the RSD on the actual scale, as shown in Eq. (16). To be more precise, a linear transformation T(Y) of the QoE values impacts J.
Figure 5 highlights the dependency of J when using different QoE domains with varying L and H. On a 5-point scale [1; 5], the same average QoE level \(\mu =2\) is considered and only the standard deviation \(\sigma \) of the QoE values is varied. The QoE values are then transformed to different rating scales [L; H]. It can be seen from Fig. 5 that Jain’s fairness index is not scale independent.
In the QoE domain, however, the most common scale is the 5-point MOS scale with \(L=1\) and \(H=5\). When using normalized QoE values in [0; 1], i.e. \(L = 0\), J is very sensitive to QoE values close to zero, as depicted in Fig. 6.
We consider here a constant standard deviation \(\sigma =0.1\) on the 5-point MOS scale and vary the average QoE value \(\mu \).
Such a small \(\sigma \) is reached when 50% of the users get maximum QoE 5 and 50% get a QoE of 4.8. This is also reached when 50% of the users get minimum QoE 1 and 50% get a QoE of 1.2. Another scenario leading to the same \(\sigma =0.1\) is the following. 99.9375% obtain QoE 5 and the remaining 0.0625% obtain QoE 1.
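These three distributions can be checked quickly (a sketch; we use 16,000 users so that 0.0625% corresponds to a whole number of users):

```python
from statistics import pstdev

# Three distributions on the [1; 5] MOS scale that all yield sigma ~= 0.1:
top    = [5.0] * 50 + [4.8] * 50        # 50% at 5, 50% at 4.8
bottom = [1.0] * 50 + [1.2] * 50        # 50% at 1, 50% at 1.2
skewed = [5.0] * 15990 + [1.0] * 10     # 99.9375% at 5, 0.0625% at 1
for y in (top, bottom, skewed):
    print(round(pstdev(y), 2))          # 0.1 in every case
```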
Furthermore, Jain’s index is not able to capture fairness when higher values on the scale mean lower QoE. As an example, consider the following quality degradation scale: 0—no degradation, 1—perceptible but not annoying, 2—slightly annoying, 3—annoying, 4—very annoying, 5—extremely annoying. Let us consider that \(n-1\) users experience the best quality 0 and 1 user obtains a 1. Then, the average QoE is \({\text {E}}[Y]=1/n\), while the coefficient of variation follows as \(c_Y=\sqrt{n-1}\). Then \(J=1/(1+c_Y^2)=1/n\). In the limit, J converges towards \(\lim _{n \rightarrow \infty }1/n=0\). Hence, in the best and fairest scenario, J quantifies the scenario as completely unfair.
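A quick numerical check of this limit (sketch):

```python
def jain(y):
    return sum(y) ** 2 / (len(y) * sum(v * v for v in y))

# Degradation scale, 0 = best quality: n-1 users experience 0, one user a 1.
for n in (10, 100, 1000):
    y = [0] * (n - 1) + [1]
    print(n, jain(y))   # J = 1/n, although the system is nearly perfect and fair
```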
QoE level dependence of J
From Fig. 6, we further see that Jain’s fairness index is QoE level dependent. A more explicit visualization is provided in Fig. 7 which clearly illustrates the QoE level dependence of J.^{5} In particular, J is plotted against the standard deviation \(\sigma \) for different average QoE values \(\mu =2,3,4\).
Deviation asymmetry of J
The desired property (g) ‘Deviation Symmetry’ means that the fairness index should only depend on the absolute value of the deviation from the mean value, not whether it is positive or negative.
Therefore, a scenario is considered in which a ratio of p users experience 2 and \(1-p\) experience \(2+\delta \). Figure 8 now plots Jain’s fairness index J against the discrepancy \(\delta \in [-1;1]\) between the two user classes. We observe that Jain’s index is not deviation symmetric, as the resulting curves for \(p=0.1\) and \(p=0.3\) are not symmetric around \(\delta =0\).
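The asymmetry is easy to reproduce numerically (a sketch for \(p=0.1\), i.e. 10 of 100 users at QoE 2):

```python
from statistics import pstdev

def jain(y):
    return sum(y) ** 2 / (len(y) * sum(v * v for v in y))

def fairness(y, L=1, H=5):
    return 1 - 2 * pstdev(y) / (H - L)

for delta in (+1, -1):
    y = [2] * 10 + [2 + delta] * 90   # p = 0.1 at QoE 2, the rest at 2 + delta
    print(delta, round(jain(y), 3), round(fairness(y), 3))
# J differs for +delta and -delta; F = 0.85 in both cases.
```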
Relative standard deviation on an interval scale
A major concern regarding the application of Jain’s fairness index in the QoE domain is the typical interval scale of the QoE function Q in Eq. (1). The RSD may not have any meaning for data on an interval scale. For the computation of an RSD, a ratio scale is required, which contains a natural zero value like ‘no waiting time’ \(\equiv 0\,\text {s}\).
However, the MOS scale typically used in QoE models is not a ratio scale. There is no meaningful zero value on the QoE scales: ‘zero’ would mean ‘no QoE’, which is not defined. Hence, the RSD of QoE values, and therefore Jain’s index, has no meaning for QoE values. The MOS scale can be considered as an interval scale, as concluded by Norman [32]. Therefore, it is required to use other statistics (like the standard deviation) to measure the deviation from the mean.
Remark
For QoS fairness, the usage of the relative standard deviation as in Jain’s fairness index is very reasonable. An example of a QoS measure is bandwidth, which is measured on a ratio scale with a meaningful zero value (‘no bandwidth’).
However, Jain’s fairness index may also be difficult to interpret if the data is measured on a ratio scale (which allows computing the RSD). Consider the following example. The QoS measure is delay, e.g., web page load time, which measures a duration on a ratio scale with a meaningful zero value (‘no delay’). However, in that case, Jain’s index leads to counterintuitive results: in a scenario where 100% of users get no delay, \(J=0\). Figure 9 can be reinterpreted by considering that a ratio p of users experiences a delay of 1 s, while \(1-p\) experience no delay. Thus, for QoS measures like delays, Jain’s index cannot be directly applied to quantify QoS fairness.
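This reinterpretation can be sketched with illustrative numbers:

```python
def jain(x):
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

# k out of 100 users experience a delay of 1 s, the rest experience no delay.
for k in (90, 50, 10, 1):
    delays = [1.0] * k + [0.0] * (100 - k)
    print(k, jain(delays))  # J = k/100
# The fewer users are delayed (i.e. the better the QoS), the lower J becomes,
# labelling an almost delay-free system as almost completely unfair.
```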
Defining a QoE fairness index
Before presenting the formal definition of F, we briefly sketch the rationale behind it. After the definition, we discuss its properties, and compare it to Jain’s index.
Rationale for a QoE fairness index
Jain’s fairness index is not applicable as a QoE fairness index, as it violates some of the desired properties specified in “Desirable properties of a QoE fairness index”. When defining QoE fairness, it is a reasonable approach to consider only the standard deviations, without relating them to mean values. The standard deviation \(\sigma \) of the QoE values Y quantifies the dispersion of the users’ QoE in a system.
A new QoE fairness index F
Figure 10 illustrates the meaning of the fairness index F. A certain fraction of the QoE domain [L; H] is covered by the standard deviation \(\sigma \) around the average QoE \(\mu \) in both directions. The size of the interval \([\mu - \sigma , \mu + \sigma ]\), i.e. \(2\sigma \), reflects how unfairly the QoE values are distributed over the QoE domain. Accordingly, the fairness index F is the size of the complement of this interval normalized by the size of the QoE rating domain \(H-L\), i.e. \(1-2\sigma ^*\) with \(\sigma ^*=\sigma /(H-L)\).
Definition
The QoE Fairness Index F is defined as the linear transformation \(F=1-\frac{2\sigma }{H-L}\) over the QoE values Y of all users consuming a service, where \(\sigma \) is the standard deviation of Y. A system is absolutely QoE fair when all users receive the same QoE value.
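The definition translates directly into code. A minimal sketch (the function name and the MOS-scale defaults are ours; \(\sigma \) is the population standard deviation):

```python
from statistics import pstdev

def qoe_fairness(qoe_values, L=1.0, H=5.0):
    """QoE fairness index F = 1 - 2*sigma / (H - L) on the scale [L; H]."""
    return 1.0 - 2.0 * pstdev(qoe_values) / (H - L)

print(qoe_fairness([3.0] * 100))              # all users equal -> F = 1.0
print(qoe_fairness([1.0] * 50 + [5.0] * 50))  # most unfair split -> F = 0.0
```

The two extreme cases match the boundedness and intuitiveness properties discussed next.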
Properties of the QoE fairness index F
 (a)
Population size independence—F is applicable to any number N of users in the system. The value of F is independent of N.
 (b)
Scale and metric independence—The unit of measurement should not matter. In the context of QoE, the fairness measure is independent of L and H. To be more precise, any linear transformation \(T(Y)=aY+b\) of the QoE values Y does not change the value of the fairness index. For the transformed values we obtain
$$\begin{aligned}F_{T(Y)}&= 1 - \frac{2{\text{Std}}[T(Y)]}{T(H) - T(L)} \\ &= 1 - \frac{2a{\text{Std}}[Y]}{(aH + b) - (aL + b)}\\ &= 1 - \frac{2{\text{Std}}[Y]}{H - L} = F_Y.\end{aligned}$$
(35)
Hence, F is scale independent (which is also indicated in Table 2).
 (c)
Boundedness—F is bounded between 0 and 1.
 (d)
Continuity—F takes continuous values in [0; 1].
 (e)
Intuitive—F is intuitive. The maximum fairness \(F_{max}=1\) is reached for minimum standard deviation (\(\sigma = 0\)). The minimum fairness \(F_{min}=0\) is reached when the standard deviation is at its maximum; this happens in the most unfair scenario (50% of users get L and 50% get H). Any fairness value F can also be interpreted as follows when considering normalized QoE values. (A) Half of the users get maximum QoE \(H=1\) and the other half gets QoE y; then \(F=y\). (B) Half of the users get minimum QoE \(L=0\) and the other half gets QoE y; then \(F=1-y\). The equations are provided in Table 5. Exemplary numerical values are provided in Table 3.
 (g)
Deviation symmetric—F depends only on the absolute value of the deviation from the mean value, not on whether it is positive or negative. This is clear from the definition of F and visualized in Figs. 8 and 9.
 (f)
QoE level independence—F is independent of the actual QoE level, whether the system achieves good or bad QoE. This is also clear from the definition of F, since F only depends on the deviation from the mean. Figure 6 visualizes the QoE level independence: a constant standard deviation \(\sigma \) is assumed while the average QoE \(\mu \) is varied. Since F is independent of \(\mu \), F is a constant value which only depends on \(\sigma \) (and the QoE value range [L; H]).
 (h)
Valid for multi-applications—The index should reflect the cross-application fairness (and not only fairness between users of the same application) and should be applicable to different applications. This property is respected by F, provided that the QoE mapping function Q yields comparable QoE values. Further, F can be applied to any application, as it is based on the deviation of the QoE values. The same is also true for J and the RSD. However, the literature also suggests other fairness metrics which are only defined for a single application and use case, e.g. Cofano et al. [6], as discussed in “Fairness from the user’s perspective”.
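Property (b) can be verified numerically, e.g. by mapping QoE values from the [1; 5] scale onto [0; 1] (a sketch with arbitrary example values):

```python
from statistics import pstdev

def qoe_fairness(y, L, H):
    return 1 - 2 * pstdev(y) / (H - L)

y = [1.5, 2.0, 3.5, 4.0, 5.0]       # QoE values on the [1; 5] scale
a, b = 0.25, -0.25                  # T(y) = a*y + b maps [1; 5] onto [0; 1]
t = [a * v + b for v in y]
f1, f2 = qoe_fairness(y, 1, 5), qoe_fairness(t, 0, 1)
print(f1, f2)                       # identical, as derived in Eq. (35)
```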
Illustration of Jain’s J and QoE Fairness Index F for various scenarios and their distributions Y, \(L=1,H=5\)
Scenario  Description  J  F 

1  All users experience 1  1.00  1.00 
2  50% experience 1 and 50% experience 2  0.90  0.75 
3  50% experience 1 and 50% experience 3  0.80  0.50 
4  50% experience 1 and 50% experience 4  0.74  0.25 
5  50% experience 1 and 50% experience 5  0.69  0.00 
6  50% experience 2 and 50% experience 4  0.90  0.50 
7  50% experience 2.9 and 50% experience 4.9  0.94  0.50 
8  Uniform distribution \(Y\sim U(L;H)\).  0.75  0.42 
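Scenarios 5 and 7 from the table illustrate the difference in sensitivity between J and F; both rows can be recomputed in a few lines (sketch):

```python
from statistics import pstdev

def jain(y):
    return sum(y) ** 2 / (len(y) * sum(v * v for v in y))

def fairness(y, L=1, H=5):
    return 1 - 2 * pstdev(y) / (H - L)

most_unfair = [1] * 50 + [5] * 50        # scenario 5: extremes of the scale
shifted     = [2.9] * 50 + [4.9] * 50    # scenario 7: same spread, higher QoE
print(round(jain(most_unfair), 2), round(fairness(most_unfair), 2))  # 0.69 0.0
print(round(jain(shifted), 2), round(fairness(shifted), 2))          # 0.94 0.5
```

J rates the most unfair system at 0.69, while F clearly marks it as 0.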
Qualitative comparison of fairness indexes
A summary of the comparison between F and Jain’s index J as well as the RSD is provided in Table 4. All three indexes are population size independent, valid for multi-applications, and return positive continuous values. While F fulfills all desirable properties, J and the RSD violate key properties: (g) ‘deviation symmetric’, see Fig. 8; (f) ‘QoE level independence’, see Eq. (17) or Fig. 7; (b) ‘scale and metric independence’, see Fig. 5.
Qualitative comparison of fairness indexes
Property  RSD c  Jain’s J  Fair. F 

(a) Population size independent  X  X  X 
(b) Scale and metric independent  –  –  X 
(c) Boundedness  –  (X)  X 
(d) Continuity  X  X  X 
(e) Intuitive  –  (–)  X 
(g) Deviation symmetric  –  –  X 
(f) QoE level independent  –  –  X 
(h) Valid for multiapplications  X  X  X 
Figure 11 shows the numerical values for the average QoE, the standard deviation, and the RSD of the QoE values depending on the average number of stalls \({\text {E}}[N]=Kp\). Based on \({\text {E}}[N]\), the parameter \(p={\text {E}}[N]/K\) of the binomial distribution is derived. We clearly observe the exponential decay of the QoE model Q when considering the average QoE. The standard deviation and the RSD, however, show a different behavior.
The main observations can be seen in Fig. 12. Firstly, QoS fairness \(J_N\) differs from QoE fairness as quantified by F and J. In particular, the QoS fairness approaches zero (i.e. a completely unfair system) when the average number of stalls approaches zero. We further see the sensitivity of Jain’s fairness index for values close to zero. QoE fairness shows a different behavior. In case of no stalling, all users get the best QoE and the variance of the QoE values diminishes. Hence, the QoE fairness indexes are 1. With an increasing number of stalls in the system, the standard deviation increases and hence fairness decreases until a certain threshold. After that threshold, the variance decreases again as more users suffer. Hence, the QoE fairness increases again. In contrast, QoS fairness shows a monotonic behavior here.
Secondly, Jain’s fairness index depends on the scale: the curves differ between (a) using the QoE function Q on a 5-point scale (\(J_5\)) and (b) using normalized QoE values \(Q^*\) (\(J_1\)). Thirdly, Jain’s fairness index applied to QoE values is less sensitive than the fairness index F and does not allow clearly discriminating fairness issues. From \(J_5\) or \(J_1\), one might conclude that the system is more or less fair. However, F clearly shows that certain scenarios (around \({\text {E}}[N]=1.5\) stalls) lead the system to unfairness.
Application of the QoE fairness index
The goal of this section is to show how the proposed QoE fairness index F can be applied. Through numerical examples, we show that a QoS-fair system can be QoE unfair (case study: web QoE for M/D/1-PS). In addition, we show how to design a system in which a provider may trade off between fairness and overall performance (case study: HTTP adaptive streaming QoE).
Case study: web browsing QoE in an M/D/1 processor sharing system
The analytical M/D/1-PS system is well understood and describes a perfectly QoS-fair system which is nevertheless QoE unfair. The literature has shown that the processor sharing (PS) model captures well the characteristics of a system with a single shared bottleneck; see the survey by Roberts [39].^{9}
Figure 14 depicts the QoE behavior of the system depending on the offered load. With increasing load, the download times increase and the QoE suffers (left Y-axis, see quantiles and average QoE values). The standard deviation of the QoE values shows a nonlinear behavior. For very low loads, users often arrive at an empty system and everyone experiences the same QoS and QoE. Due to the random arrival of users, some users share the capacity with others, which happens more often with increasing load. At a certain load, it is well known that the download times increase exponentially [33], resulting in smaller QoE differences across users (but at a low overall QoE). Note that approaching the overload situation (\(\rho \rightarrow 1\)), all users experience the same poor quality (\(\lim _{\rho \rightarrow 1} Q(t)=L\)), and the system is perfectly fair (\(\lim _{\rho \rightarrow 1} F=1\)), if terribly underperforming.
Figure 15 illustrates the different fairness indexes. Jain’s fairness index J leads to different results and conclusions than F. In case of low load (\(\rho <0.4\)), J suggests a perfectly QoE-fair system. However, when looking at the standard deviations of the QoE in Fig. 14, we already see stronger discrepancies between users. J does not capture this properly, since the average QoE is high for this load. We further see again that J is not very sensitive: the minimal fairness value is about 0.8. In contrast, our proposed fairness metric properly reflects the variances in QoE. F is more sensitive and identifies fairness issues even in the low load scenarios. F drops close to 0.5, which properly reflects that the standard deviation of the QoE values reaches \(0.5\,\sigma _{\max }=1\). We further observe that there is a strong discrepancy between QoS fairness (expressed by Jain’s fairness index \(J_T\) of the download times T, converging to 0.5) and QoE fairness.
Thus, the fairness index F gives the possibility to clearly identify under which conditions and in which scenarios fairness issues arise.
Case study: HTTP adaptive streaming QoE
As a second case study, HTTP adaptive streaming (HAS) is considered to demonstrate the comparison of different approaches with respect to QoE fairness but also with respect to overall QoE. When a provider has to decide which mechanism to use in practice, the (possible) tradeoff between QoE fairness and overall QoE may be considered.
HAS allows the video player to dynamically adjust the video bitrate according to the current network situation. Thereby, HAS tries to overcome video stalling at the cost of reduced video bitrate and lower video quality. However, from a QoE perspective, stalling is the dominating QoE influence factor. For the interested reader, Seufert et al. [40] provides a comprehensive survey on HAS QoE and HAS technology.
A provider needs to decide how relevant fairness is; thus, there may be a trade-off between fairness and QoE. In Fig. 17 we sketch this more clearly. A provider may use a weighted sum of the average QoE and the fairness, depending on a parameter \(\theta \) specifying the relevance of fairness. Thus, a value function is defined, for example \(v=(1-\theta )Q^*+\theta F\). Thereby, we use the normalized QoE values \(Q^*\) to have the fairness index and the average QoE in the interval [0; 1].^{11} This allows for an intuitive meaning of the relevance parameter. From Fig. 17, we observe that the FESTIVE approach may be preferred over MSS if fairness is as important as average QoE (\(\theta \ge 0.5\)). We would like to emphasize that other fairness indexes (Jain or RSD) lead to other values and change the outcome of an operator’s decision. Figure 18 shows (again) that Jain’s index is not able to discriminate the fairness properly (cf. Fig. 12), here across mechanisms, and, like the RSD, it suffers from being scale and metric dependent. Since J always leads to high fairness values, the value function would not consider fairness appropriately and would mainly put weight on overall QoE. Figure 19 shows the different outcomes. In case of little relevance of fairness (\(\theta =0.1\)), the fairness index has only a minor impact, as desired and defined. For higher relevance, it can be seen that the order of mechanisms changes between F and J, i.e. leading to different conclusions for operators.
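The weighted value function can be sketched as follows; the two (Q*, F) pairs are illustrative placeholders, not the measured values of the HAS mechanisms from Fig. 17:

```python
def value(q_norm, f, theta):
    """v = (1 - theta) * Q* + theta * F, with Q* and F both in [0; 1]."""
    return (1 - theta) * q_norm + theta * f

# Hypothetical mechanisms: one with higher average QoE, one with higher fairness.
mechanisms = {"high_qoe": (0.80, 0.60), "high_fairness": (0.70, 0.85)}
for theta in (0.1, 0.5, 0.9):
    best = max(mechanisms, key=lambda m: value(*mechanisms[m], theta))
    print(theta, best)
# The preferred mechanism flips once fairness is weighted strongly enough.
```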
Conclusions and discussion
The motivation for defining a fairness index comes primarily from the operator’s perspective, as QoE fairness measures can be used to drive resource allocation mechanisms aimed at maximizing the satisfied customer base. The applications of a QoE fairness metric are manifold, ranging from QoE management mechanisms and system optimization to benchmarking different resource management techniques. We have introduced a definition for a QoE fairness index, and showed that QoE fairness does not, due to the nature of QoS-to-QoE mappings for most services, necessarily follow from QoS fairness. We argue that commonly used QoS fairness metrics such as Jain’s fairness index are not suitable for quantifying QoE fairness, despite being used for that purpose in the literature. Our proposed metric fulfills a number of desirable qualities, and it is intuitively simple to understand. We illustrate its use with an example use case for web QoE modeled as a function of page loading times. Another use case is the selection of an HTTP adaptive streaming mechanism, which may be guided by the overall video QoE as well as the QoE fairness. QoE fairness says nothing about how good the system is and thus needs to be considered together with, and most likely subordinated to, the achieved overall QoE in system design. We emphasize that the proposed QoE fairness index is just a means for benchmarking or designing systems, and may be used as an extra tool for operators to make better informed decisions concerning QoE management.
Remark The proposed fairness index is a generic concept which may also be applied to other domains than QoE (like QoS) or for different purposes. It can be applied to any data on an interval scale with a clearly defined bounded value range.
QoE fairness from the operator’s perspective
Throughout the paper, we have considered fairness from the operator’s perspective. In fact, for most of today’s common application usage scenarios, users are not necessarily aware of whether they receive a fair QoE. The question arises, then: why should operators care about fairness?
We posit that QoE fairness can help operators obtain better overall customer satisfaction, if used, e.g., as a secondary service optimization objective. For instance, when optimizing a service for average QoE, it is common to find a family of solutions with similar results QoE-wise, but varying fairness levels. In such cases, choosing the fairer solution (with some additional constraints in terms of acceptance thresholds) will often result in more satisfied users, and lower churn levels.
These optimizations become more important in multiapplication scenarios, where the relative value of each QoS metric (e.g., throughput, latency) can vary wildly in terms of QoE. Assuming that the QoE models produce comparable results, QoE fairness may imply very different resource management policies than just trying to give each user a fair share of the network’s capacity. This type of application, however, remains contingent on the development of QoE models whose results can be compared across applications. This is still an open problem.
For some application scenarios, where social interaction happens in parallel to the service delivery (e.g., real-time commenting on social networks for a live stream), or where the use of different devices can have an impact on both resource requirements and perceived quality (as exemplified in “From QoS to QoE management”), fairness can be more relevant for individual applications, as an unequal share of resources leading to unfairness may be observed by the users themselves, and QoS-fair approaches can result in suboptimal QoE.
Another possible interpretation of QoE fairness would be in terms of societal welfare, with the understanding that a fairer distribution of QoE would make for a larger number of satisfied (or even happy) users. This may also be relevant in the case of non-commercial scenarios, such as city- or coop-owned networks.
Further work

QoE fairness-aware service optimization: whereby QoE management mechanisms consider fairness as an optimization target (e.g., in a two-step optimization approach, optimizing first for overall QoE, and then for fairness).

The comparability of QoE results for different applications: QoE fairness is most interesting for operators when QoE can be compared across services. This is not (necessarily) the case with current QoE models, and further studies are needed in this direction. This research line is also relevant to service pricing and QoEaware SLA definitions, as addressed for example by Varela et al. [44].

The concept of QoE fairness can be more relevant for long-term contexts than for short-term ones. A provider may need to ensure that its customers are equally served over one month, while short-term variances may be accepted by the users. In a rough sense, some providers implement this by reducing data speeds at the end of a month once a certain data volume is consumed. This may lead to a certain kind of long-term fairness, see also “Notion of fairness in shared environments”. However, most QoE models only produce short-term QoE estimates, and are not suited for such usages.

Related to the item above, service pricing is also relevant to the notion of QoE fairness (as in, is the utility perceived by the user commensurate to the price they pay?), yet it is not really considered in QoE models. On this point, price and fairness can be closely related. Our proposed metric itself is price-independent; price (and consequently users’ expectations) would enter only through the QoE models, but this is not usually the case.
Footnotes
 1.
This assumes a mechanism for determining the type of device each user is using, which could be implemented e.g., via suitable APIs, or by monitoring whether users are making requests to the mobile version of the HAS service.
 2.
I.e. given two models for different services, using the same scale, it is not clear whether equal output values from them correspond to equal QoE for users of each service. In the case of different scales, this is even further complicated.
 3.
It is \(c_u \le c_{max}\), as \(H \ge L\).
 4.
We have already seen in “Relative standard deviation (RSD)” that the RSD violates several desirable properties. Since J and the RSD are inversely proportional, cf. Eq. (17), this implies that J violates some of those properties too. In “Issues with Jain’s index for QoE fairness”, we visualize those violations for J to make the reader aware of how severe they are.
 5.
 6.
The maximum standard deviation has already been derived in Eq. (7) in “Relative standard deviation (RSD)”. For readability and due to its importance for deriving F, the maximum standard deviation is repeated here.
 7.
F must not be confused with the standard deviation of user ratings in a subjective study (for a system under test), i.e., the user rating diversity.
 8.
Realistic scenarios and measurement traces are provided in “Application of the QoE fairness index” to demonstrate the application of the QoE fairness index F.
 9.
In practice, the fair share of resources is not perfectly achieved due to imperfections in the transport protocol. In Hoßfeld et al. [15], measurement results are provided for the same system on web QoE which show the same characteristics in terms of fairness. In this article, we focus on the analytical model as it is well known and understood and generalizes the measurement results.
 10.
For details about the HAS algorithms (FINEAS, QL., FESTIVE, MILLER, MSS), the interested reader is referred to Petrangeli et al. [37]. For our purposes, the detailed description is not relevant.
 11.
Instead of defining a relevance parameter for fairness, one might also define weights for each component. This may also overcome the normalization of the QoE values.
Notes
Acknowledgements
This work emerged from the First International Symposium on Quality of Life (QoL2016) in Zagreb, Croatia, in March 2016. This work was partly funded by Deutsche Forschungsgemeinschaft (DFG) under Grants HO 4770/12 (DFG OekoNet); the Croatian Science Foundation, project no. UIP2014095605 (QMANIC); the NTNU QUAM Research Lab (Quantitative modelling of dependability and performance). The authors would like to thank Stefano Petrangeli and Steven Latre for providing the data [37] as used in “Case study: HTTP adaptive streaming QoE”.
Compliance with ethical standards
Conflict of interest
On behalf of all authors, the corresponding author, Tobias Hoßfeld states that there is no conflict of interest.
References
 1. Avi-Itzhak B, Levy H, Raz D (2008) Quantifying fairness in queuing systems: principles, approaches, and applicability. Prob Eng Inf Sci 22(4):495–517
 2. Basu D (1955) On statistics independent of a complete sufficient statistic. Sankhyā Indian J Stat (1933–1960) 15(4):377–380
 3. Bertsimas D, Farias VF, Trichakis N (2012) On the efficiency-fairness trade-off. Manag Sci 58(12):2234–2250
 4. Bonald T, Proutiere A (2003) Insensitive bandwidth sharing in data networks. Queueing Syst 44(1):69–100
 5. Briscoe B (2007) Flow rate fairness: dismantling a religion. ACM SIGCOMM Comput Commun Rev 37(2):63–74
 6. Cofano G et al (2016) Design and experimental evaluation of network-assisted strategies for HTTP adaptive streaming. In: Proceedings of the 7th international conference on multimedia systems, ACM, p 3
 7. De Cicco L et al (2013) ELASTIC: a client-side controller for dynamic adaptive streaming over HTTP (DASH). In: 20th international packet video workshop, IEEE, pp 1–8
 8. Demers A, Keshav S, Shenker S (1989) Analysis and simulation of a fair queueing algorithm. ACM SIGCOMM Comput Commun Rev 19(4):1–12
 9. Deng J, Han YS, Liang B (2009) Fairness index based on variational distance. In: GLOBECOM, pp 1–6
 10. Egger S et al (2012) “Time is bandwidth”? Narrowing the gap between subjective time perception and quality of experience. In: IEEE int. conference on communications (ICC), Ottawa, Canada, June 2012
 11. ETSI TS 102 250-1 V2.2.1 (2011) QoS aspects for popular services in mobile networks. In: Speech and multimedia transmission quality (STQ), 2011-04
 12. Gabale V et al (2012) InSite: QoE-aware video delivery from cloud data centers. In: Quality of service (IWQoS), 2012 IEEE 20th international workshop on, IEEE, pp 1–9
 13. Georgopoulos P et al (2013) Towards network-wide QoE fairness using OpenFlow-assisted adaptive video streaming. In: ACM SIGCOMM workshop on future human-centric multimedia networking, Hong Kong, Aug. 2013
 14. Hoßfeld T, Heegaard PE, Varela M (2015) QoE beyond the MOS: added value using quantiles and distributions. In: Seventh int. workshop on quality of multimedia experience (QoMEX), Costa Navarino, Greece, June 2015
 15. Hoßfeld T et al (2017) Definition of QoE fairness in shared systems. IEEE Commun Lett 21(1):184–187
 16. Hoßfeld T et al (2015) Identifying QoE optimal adaptation of HTTP adaptive streaming based on subjective studies. Comput Netw 81:320–332
 17. Hoßfeld T et al (2013) Internet video delivery in YouTube: from traffic measurements to quality of experience. In: Biersack E, Callegari C, Matijasevic M (eds) Data traffic monitoring and analysis: from measurement, classification and anomaly detection to quality of experience. Computer communications and networks series. Springer, Berlin, Heidelberg
 18. Hoßfeld T et al (2017) No silver bullet: QoE metrics, QoE fairness, and user diversity in the context of QoE management. In: 2017 Ninth international conference on quality of multimedia experience (QoMEX), IEEE, pp 1–6
 19. Huynh-Thu Q et al (2011) Study of rating scales for subjective quality assessment of high-definition video. IEEE Trans Broadcast 57(1):1–14
 20. Jain R, Chiu DM, Hawe WR (1984) A quantitative measure of fairness and discrimination for resource allocation in shared computer system, vol 38. Eastern Research Laboratory, Digital Equipment Corporation, Hudson
 21. Jiang J, Sekar V, Zhang H (2014) Improving fairness, efficiency, and stability in HTTP-based adaptive video streaming with FESTIVE. IEEE/ACM Trans Netw 22(1):326–340
 22. Kelly F (1997) Charging and rate control for elastic traffic. Eur Trans Telecommun 8(1)
 23. Kelly FP, Maulloo AK, Tan DKH (1998) Rate control for communication networks: shadow prices, proportional fairness and stability. J Oper Res Soc 49(3):237–252
 24. Lan T et al (2010) An axiomatic theory of fairness in network resource allocation. In: Proceedings of IEEE INFOCOM, San Diego, CA, pp 1–9. https://doi.org/10.1109/INFCOM.2010.5461911
 25. Le Boudec JY (2005) Rate adaptation, congestion control and fairness: a tutorial. https://moodle.epfl.ch/file.php/523/CC_Tutorial/cc.pdf
 26. Le Callet P, Möller S, Perkis A et al (2013) Qualinet white paper on definitions of quality of experience. In: European network on quality of experience in multimedia systems and services (COST Action IC 1003), March 2013
 27. Mansy A, Fayed M, Ammar M (2015) Network-layer fairness for adaptive video streams. In: IFIP networking conference (IFIP Networking), Toulouse, France, May 2015
 28. Mo J, Walrand J (2000) Fair end-to-end window-based congestion control. IEEE/ACM Trans Netw (ToN) 8(5):556–567
 29. Möller S (2012) Assessment and prediction of speech quality in telecommunications. Springer Science+Business Media, Dordrecht
 30. Möller S et al (2011) Speech quality estimation: models and trends. IEEE Signal Process Mag 28(6):18–28
 31. Mu M et al (2015) User-level fairness delivered: network resource allocation for adaptive video streaming. In: IEEE 23rd international symposium on quality of service (IWQoS), IEEE, pp 85–94
 32. Norman G (2010) Likert scales, levels of measurement and the laws of statistics. Adv Health Sci Educ 15(5):625–632
 33. Ott TJ (1984) The sojourn-time distribution in the M/G/1 queue with processor sharing. J Appl Prob 21(2):360–378
 34. Ozugur T et al (1998) Balanced media access methods for wireless networks. In: Proceedings of the 4th annual ACM/IEEE international conference on mobile computing and networking, ACM, pp 21–32
 35. Palyi PL, Racz S, Nadas S (2008) Fairness-optimal initial shaping rate for HSDPA transport network congestion control. In: 11th IEEE Singapore international conference on communication systems (ICCS 2008), IEEE, pp 1415–1421
 36. Parekh AK, Gallager RG (1993) A generalized processor sharing approach to flow control in integrated services networks: the single-node case. IEEE/ACM Trans Netw (ToN) 1(3):344–357
 37. Petrangeli S et al (2016) QoE-driven rate adaptation heuristic for fair adaptive video streaming. ACM Trans Multimedia Comput Commun Appl 12(2):28
 38. Reichl P et al (2010) The logarithmic nature of QoE and the role of the Weber–Fechner law in QoE assessment. In: IEEE int. conference on communications (ICC), Cape Town, South Africa, May 2010
 39. Roberts JW (2004) A survey on statistical bandwidth sharing. Comput Netw 45(3):319–332
 40. Seufert M et al (2015) A survey on quality of experience of HTTP adaptive streaming. IEEE Commun Surv Tutor 17(1):469–492
 41. Shalmon M (2007) Explicit formulas for the variance of conditioned sojourn times in M/D/1 PS. Oper Res Lett 35(4):463–466
 42. Taboada I et al (2013) QoE-aware optimization of multimedia flow scheduling. Comput Commun 36(15):1629–1638
 43. Tominaga T et al (2010) Performance comparisons of subjective quality assessment methods for mobile video. In: 2010 Second international workshop on quality of multimedia experience (QoMEX), June 2010, pp 82–87
 44. Varela M et al (2015) Experience level agreements (ELA): the challenges of selling QoE to the user. In: Proc. of the ICC 2015 workshops, IEEE, pp 1741–1746. https://doi.org/10.1109/ICCW.2015.7247432
 45. Villa B, Heegaard PE (2012) Improving perceived fairness and QoE for adaptive video streams. In: Proc. ICNS 2012, pp 149–158
 46. Wierman A (2007) Fairness and classifications. ACM SIGMETRICS Perform Eval Rev 34(4):4–12
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.