A new QoE fairness index for QoE management

  • Tobias Hoßfeld
  • Lea Skorin-Kapov
  • Poul E. Heegaard
  • Martín Varela
Open Access
Research Article
Part of the following topical collections:
  1. Managing QoE of Future Networks and Applications


The user-centric management of networks and services focuses on the Quality of Experience (QoE) as perceived by the end user. In general, the goal is to maximize (or at least ensure an acceptable) QoE, while ensuring fairness among users, e.g., in terms of resource allocation and scheduling in shared systems. A problem arising in this context is that the notions of fairness commonly applied in the QoS domain do not translate well to the QoE domain. We have recently proposed a QoE fairness index F, which addresses these issues. In this paper, we provide a detailed rationale for it, along with a thorough comparison of the proposed index and its properties against the most widely used QoS fairness indices, showing its advantages. We furthermore explore potential uses of the index in the context of QoE management and describe future research lines on this topic.


Keywords: Quality of Experience (QoE) · Quality of Service (QoS) · Fairness · Fairness index


Quality of Experience (QoE) is “the degree of delight or annoyance of the user of an application or service” [26]. It is generally accepted that the quality experienced by a user of a networked service is dependent, in a non-trivial and often non-linear way, on the network’s QoS. Moreover, the QoE of different services is often different given the same network conditions; i.e., the way in which QoS can be mapped to QoE is service-specific. For example, voice services can usually withstand higher loss rates than video streaming services, but are in turn more sensitive to large delays. Hence, given a network condition with certain QoS characteristics, the QoE experienced by users of different services can vary significantly. From the point of view of fairness, as we will see, we need not concern ourselves with the different aspects of how QoS affects QoE for different services, but rather how different users’ expectations, in terms of QoE, are affected by the underlying QoS: are most (or all) users receiving similar quality levels, regardless of the services they use?

Standards such as ETSI TS 102 250-1 V2.2.1 [11] specify how to compute various QoS metrics and highlight the need to consider customer QoE targets. However, fairness aspects are not considered. From a network operator’s point of view, QoE is an important aspect in keeping customers satisfied, e.g., decreasing churn. This has led to a number of mechanisms for QoE-driven network resource management, aimed at maintaining quality above a certain threshold for every user (or in some proposals, “premium” users, at least). An issue common to all those efforts is that of dividing the available resources among users so as to maintain a satisfied customer base. In this paper, we explore (in depth) a notion of QoE fairness, first introduced in our previous work [15], to quantify the degree to which the users sharing a network, and using a variety of services on it, achieve commensurate QoE. We expound upon both the concept of QoE fairness and the proposed QoE fairness index. We show that QoS-fair methods of resource distribution among users do not, in general, result in QoE-fair systems, even when considering a single-service scenario, and therefore, that QoE fairness needs to be considered explicitly when evaluating the performance of management schemes. We further illustrate the differences between QoS fairness and QoE fairness indices by means of concrete case studies.

The remainder of this paper is structured as follows. “Background and related work: notion of fairness and its applications” provides background on the notion of fairness in shared systems and the networking domain, and discusses the move to considering fairness from a user perspective. The move from QoS to QoE management and the motivation for considering QoE fairness are then further discussed in “From QoS to QoE management”. “QoE fairness index” specifies the desired properties of a fairness index, while “Relative standard deviation and Jain’s fairness index” introduces the commonly used relative standard deviation (RSD) and Jain’s index. The properties of Jain’s index are further elaborated in “Issues with Jain’s index for QoE fairness”. “Defining a QoE fairness index” presents the QoE fairness index we proposed [15], and the rationale behind it. We provide an example of its application for web QoE and video streaming QoE to demonstrate its relevance for benchmarking and system design in “Application of the QoE fairness index”. Finally, “Conclusions and discussion” concludes this work and discusses further research issues.

Background and related work: notion of fairness and its applications

Notion of fairness in shared environments

Fairness in shared systems has been widely studied as an important system performance metric, across different application areas and without a universal metric. In general, approaches quantifying fairness have relied mainly on measures such as second-order statistics (variance, standard deviation, coefficient of variation), entropy-based measures, and the difference from an optimal solution, e.g. [6]. A key issue is defining what is considered to be “fair”, and then designing and evaluating various scheduling policies in terms of fairness. For example, while proportional fairness relates to the idea that it is fair for jobs to receive response times proportional to their service times, temporal fairness respects the seniority of customers and the first-come-first-serve policy. Wierman [46] provides an overview and comparison of various scheduling policies which are focused on guaranteeing equitable response times to all job sizes.

Avi-Itzhak et al. [1] address the applicability of various fairness measures for different applications involving queue scheduling, such as call centers, supermarkets, banks, etc.

A fairness measure is inherently linked to some kind of performance objective, such as minimizing waiting times or maximizing the amount of allocated resources. A commonly studied trade-off when considering different resource allocation optimization objectives is that between efficiency and fairness [3, 24]. Moreover, a key question that arises is at which granularity level should fairness be quantified and measured [1]. Related to different granularity levels is also the question of the time scales at which fairness is calculated, with most QoS fairness index measures used in the literature (such as max–min fairness and Jain’s fairness index [20]) reflecting long-term average system fairness. In contrast, a system may be considered short-term fair [9] if for N competing hosts, the relative probability of each host accessing a shared resource is 1/N in any short interval. Deng et al. [9] further note that while short-term fairness implies long-term fairness, long-term fairness may not ensure short-term fairness.

Table 1 gives example time scales and session-related elements for which performance measures may be derived. The time scale at which to compute fairness may be different when targeting QoE fairness as opposed to targeting QoS fairness. In the context of QoE fairness, the fairness calculation time scale is linked to the time scale on which QoE is actually measured (or estimated), typically short- or mid-term.
Table 1

QoE fairness, just like QoE, can be considered at different time scales, and its applicability can vary according to them

| Time scale | Example: web QoE | Example: video QoE | Related network metrics |
| --- | --- | --- | --- |
| Tens of ms | Not applicable | Video frame | |
| Short term | Web objects, single page | DASH segment, single scene | Avg. throughput, latency |
| Mid term | Web session | Video scene, short clips | Aggregated over time |
| Long term (hours, days) | Commonly visited sites | Several episodes | Aggregated over time |
We note that, while very different QoE functions are needed to estimate QoE at these varying time scales, the notion of fairness remains unchanged (although its impact may be significantly different)

Notion of fairness in networking

In networking, fairness in resource allocation and scheduling is either linked to sharing resources evenly among the entities, or to scaling the utility function of an entity in proportion to others. Flow-based resource sharing, e.g., max–min fairness, is the foundation of the design of TCP and of fair queuing scheduling approaches [8, 36]. A resource allocation is said to be max–min fair if the bit rate of one flow cannot be increased without decreasing the bit rate of a flow that has a smaller bit rate. This definition puts emphasis on maintaining high values for the smallest rates, even though this may come at the expense of network efficiency [25].
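The progressive-filling idea behind max–min fairness can be sketched in a few lines of Python (an illustrative implementation of the general technique, not code from any of the cited works; function and variable names are ours):

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation by progressive filling.

    Flows are considered in order of increasing demand; at each step the
    remaining capacity is split evenly among the flows not yet served,
    and no flow ever receives more than it demands.
    """
    n = len(demands)
    alloc = [0.0] * n
    remaining = capacity
    for k, i in enumerate(sorted(range(n), key=lambda j: demands[j])):
        share = remaining / (n - k)        # equal share of what is left
        alloc[i] = min(demands[i], share)  # capped by the flow's own demand
        remaining -= alloc[i]
    return alloc
```

For example, three flows with demands of 2, 8 and 8 Mbps on a 10 Mbps link receive 2, 4 and 4 Mbps: the small flow is fully served, and no flow’s rate can be raised without lowering that of a flow with a smaller rate.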

In a more general and utility-driven approach, proportional fairness was introduced in the seminal works by Kelly [22] and Kelly et al. [23]. Questioning the notion of max–min fairness, Kelly et al. [23] argue that bandwidth sharing should be driven by the objective of maximizing overall utility of flows, assuming logarithmic utility functions. Weighted proportional fairness is further defined as the scaling of an entity’s utility function relative to others, such that the entities will allocate flow rates so that the cost they cause will equal the weight they choose [22].

An alternative bandwidth-sharing approach is that of \(\alpha \)-fairness and the associated utility maximization. Mo and Walrand [28] propose this as a decoupled fairness criterion, which each user can apply to achieve fairness without considering the behavior of other users. Bonald and Proutiere [4] introduce the notion of balanced fairness, referring to allocations for which the steady-state distribution is insensitive to any traffic characteristics except the traffic intensities. They note that this insensitivity property does not hold for utility-based allocations such as max–min and proportional fairness, where an optimal allocation process depends on detailed traffic characteristics such as the flow arrival process and the flow size distribution.

Any allocation of resources among different entities (users, applications, flows/sessions, bitstreams) implies a notion of fairness. For example, according to the Generalized Processor Sharing (GPS) model, each host is assigned a fair portion of a shared resource for any time interval [36]. While GPS has a binary outcome (a system is either fair or not), other metrics (such as the max–min fairness index) quantify the fairness level when the system is not perfectly fair [9]. A QoS fairness index should thus reflect the distance between the actual and the idealized allocation. Fairness is assessed with respect to the resource share \(x_i\) allocated to entity i, relative to the other entities. Various measures have been proposed, both for measuring short- and long-term fairness, as discussed previously.

The most frequently used QoS fairness metric is Jain’s index [20], which is the ratio between the square of the first moment and the second moment of the resources \(x_i\) allocated to the entities i. Jain’s index is primarily used for assessing long-term fairness (e.g., averaged per user or session), but can also evaluate short-term fairness by considering a sliding-window average of \(x_i\). Jain’s fairness index has also been used to improve so-called transient fairness in the context of congestion control, when computing the optimal initial shaping rate for new flows entering a mobile network (rather than using a fixed value and/or a Slow-Start-like method) [35].

Apart from Jain’s index, other indices which (partly) measure the fairness of shared resources are the variance, the coefficient of variation, and the ratio between the maximum access share of a host and the minimum access share (max–min index, [34]).
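For reference, these indices can be computed as follows (a minimal sketch; the function names are ours):

```python
from statistics import mean, pstdev

def jain_index(x):
    # Jain's index: (sum x_i)^2 / (n * sum x_i^2); equals 1 iff all x_i
    # are equal, and approaches 1/n for a maximally skewed allocation.
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

def coeff_of_variation(x):
    # Relative standard deviation: population std divided by the mean.
    return pstdev(x) / mean(x)

def max_min_ratio(x):
    # Ratio between the largest and the smallest allocated share.
    return max(x) / min(x)
```

For an equal allocation such as [5, 5, 5, 5], Jain’s index is 1, the coefficient of variation is 0, and the max–min ratio is 1; skewing the allocation moves all three away from their fair values.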

Fairness from the user’s perspective

While QoS fairness has been well established in the networking community, less focus has been put on considering fairness from a truly user-oriented perspective. Following Kelly’s theoretical notion of weighted proportional fairness, Briscoe [5] sharply criticized flow-rate fairness, and argued that fairness should be considered from the point of view of congestion costs (cost fairness) or user benefits. He states that if fairness is defined between flows, then users can simply create more flows to get a larger resource allocation. Moreover, flow fairness is defined instantaneously, and has no necessary relation to real-world fairness over time. In other words, Briscoe’s criticism of flow-level fairness leads to the notion that fairness should be considered at a higher level, where real-world entities are considered, such as people or organizations.

Following this perspective, recent papers have argued that a QoS fair system is not necessarily QoE fair, e.g., Mansy et al. [27], given the lack of consideration of service QoE models. Such models specify the relationships between user-level QoE and various application-layer performance indicators (e.g., file loading times, video re-buffering) or influence factors such as device capabilities, context of use, network and system requirements, user preferences, etc.

As an example, we consider QoE fairness in the context of bottleneck link sharing among adaptive video streams, where the on/off nature of flows results in inaccurate client-side bandwidth estimation and leads to potentially unfair resource demands [13, 27, 37].

De Cicco et al. [7] propose a client-side algorithm which avoids on/off behavior until reaching the highest possible playback quality. However, as they focus on QoS fairness, problems such as heterogeneous user devices remain, and thus so does the issue of achieving QoE fairness. Georgopoulos et al. [13] propose an OpenFlow-assisted system that allocates network resources among competing adaptive video streams originating from heterogeneous clients, so as to achieve user-level (QoE) fairness. The allocation utilizes utility functions relating bitrate to QoE, whereby the quality metric used to evaluate QoE is the objectively measured Structural Similarity Index (SSIM). They evaluate their system against other systems by comparing mean achieved QoE and QoE variance.

Mansy et al. [27] also argue that typical flow-rate (QoS) fairness ignores user-level fairness and is ultimately unfair, thus proposing a QoE fairness metric in the range [0; 1] based on Jain’s fairness index. Their metric considers a set of QoE values corresponding to bitrate allocation, calculated taking into account factors such as user screen size, resolution, and viewing distance. Further, Petrangeli et al. [37] incorporate the notion of maximizing fairness, expressed as the standard deviation of clients’ QoE, into a novel rate adaptation algorithm for adaptive streaming. Villa and Heegaard [45] specify a ‘perceived fairness metric’ as the difference between the worst and best performing streaming sessions in terms of average number of rate reductions (i.e., discrimination events) per minute. This is however an application-level (and application-specific) QoS metric, and not a general QoE fairness index.

Going beyond relating QoE to allocated bitrate, Gabale et al. [12] measure video-delivery QoE in terms of the number and duration of playout stalls, with the objective of fairly distributing stalls across clients. Mu et al. [31] propose a solution for achieving user-level fairness of adaptive video streaming, exploiting video quality, switching impact, and cost efficiency as fairness metrics. QoE fairness is computed based on calculation of the relative standard deviation (coefficient of variation) of QoE values. In their work on computing a benchmark QoE-optimal adaptation strategy for adaptive video streaming, Hoßfeld et al. [16] use Jain’s fairness index to show that QoE can be shared in a fair manner among multiple competing streams.

It is clear that many approaches use application-level QoS metrics (like number of stalls, video bitrate, video quality switches) and use measures such as Jain’s fairness index or coefficient of variation to evaluate systems in terms of QoE fairness, e.g., [16, 21, 27, 42]. In the remainder of the paper (“Relative standard deviation and Jain’s fairness index” and “Issues with Jain’s index for QoE fairness”), we will argue that these measures are not necessarily suitable for QoE fairness.

Application of fairness index: (benchmarking of) QoE management in resource constrained environments

An important consideration is the applicability of a QoE fairness index, for example in the context of scheduling, resource assignment, optimization, etc.

For the most part, approaches discussed in the previous sections aim to exploit the notion of QoE fairness for optimized QoE-driven network resource allocation, often in the context of a concrete service. We focus instead on a fairness index independent of the underlying service and QoE model used. We have defined a generic QoE fairness index to serve, e.g., as a benchmark when comparing different resource management techniques in terms of their fairness across users and services (Fig. 1).

In the following section, we further elaborate on the motivation of going from QoS to QoE management, and on the need to consider QoE fairness in that context.

From QoS to QoE management

Fig. 1

Illustration of QoE management

A general view on fairness

In other areas such as ethics and economics, fairness does not, in general, relate to utility, but rather to how resources are distributed among actors. We note in particular that a better system is not necessarily fairer, and neither is a fairer system necessarily better. Utility and fairness are orthogonal concepts.

For a simplified (and light-hearted) view on the orthogonality between fairness and utility, we could draw an analogy to the cold-war era superpowers and their economic models. In the Soviet model, there was an emphasis on fairness, but the overall quality of life (QoL) was low (i.e., almost everyone had similarly low QoL). In the American model, the emphasis was on quality of life, but only for those who could achieve it on their own, leading to higher average QoL, but much lower fairness (QoL was much more variable across sectors of the population). While the economic and societal merits of each approach are arguably not settled, we can draw a parallel to the notions presented in this paper, namely that the overall QoE achieved on a system is not directly related to how fair the system is, and vice versa. Depending on the goals and context of whoever is in charge of managing the quality (in the context of this paper, an ISP, for instance), the relative weight of each can be valued differently.

Why QoE management over QoS management?

Our main working assumption is as follows: network operators strive to keep their users sufficiently satisfied with their service that they will not churn, while simultaneously trying to maximize their margins. There are different ways in which an operator can go about this (e.g., lower prices, higher speeds, bundled services), but conceptually, they all lead to a notion of utility, or perceived value, that the users derive from their network connection.

Operators have a limited resource budget, and how they allocate it will have a (possibly large) impact on the users’ utility. One option, for example, would be to ensure that the network capacity is distributed evenly across users. However, it is easy to see that this fails if users have applications with different QoS requirements. While the allocation may seem reasonable from the QoS point of view, it fails to account for the users’ utility, which will vary with the application or service under use. In this context, QoE provides a reasonable proxy measure for utility, and if the operator were to take QoE into account instead of QoS, a better distribution of its resources could be achieved (for instance, assigning more bandwidth to users who are watching video than to those who are just browsing the web; or providing expedited forwarding for users of real-time services such as VoIP or video-conferencing).

Let us consider a hypothetical scenario to illustrate the difference between QoS-based management and QoE-based management, as well as between QoS fairness and QoE fairness. We assume a video service delivered using HTTP Adaptive Streaming (HAS), with an associated QoE model Q that takes into account the device on which the user is accessing the content (that is, like in the E-model for voice, mobile devices have a so-called “advantage factor”, which considers, e.g., convenience of use alongside device-specific limitations, such as screen resolution). As the simplest scenario, we consider two users \(U_l\) and \(U_m\), accessing the service (from a laptop and mobile phone, respectively) over a shared link with capacity \(C < 2R_{MAX}\), where \(R_{MAX}\) is the bitrate of the highest-quality video representation available. Now, doing a QoS-fair distribution of resources would result in both \(U_l\) and \(U_m\) having the same available bandwidth \(b<R_{MAX}\). However, given the different devices being used by each user, their QoE, as per Q, could be significantly different, with \(U_m\) receiving higher QoE (due to the advantage factor). If the operator were to consider QoE fairness1 instead, the resource distribution could result in \(U_m\) and \(U_l\) receiving \(b_m < b \le b_l \le R_{MAX}\), respectively, and their corresponding \(Q(b_m)\) and \(Q(b_l)\) values being closer together (i.e., more QoE-fair). Depending on the relationship between the \(b_i\) values and \(R_{MAX}\), both users could even experience their maximal possible quality.
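This scenario can be made concrete with a small numerical sketch. The two mapping functions below are invented purely for illustration (logarithmic MOS growth clipped to [1, 5], with the mobile device’s advantage factor modeled as a higher offset); they are not the model Q assumed above:

```python
import math

def q_laptop(rate_mbps):
    # Hypothetical laptop QoE model: MOS grows logarithmically with bitrate.
    return max(1.0, min(5.0, 1.0 + 1.6 * math.log2(1.0 + rate_mbps)))

def q_mobile(rate_mbps):
    # Hypothetical mobile QoE model: the advantage factor appears as a
    # higher offset, so the same bitrate yields a higher MOS.
    return max(1.0, min(5.0, 1.5 + 2.0 * math.log2(1.0 + rate_mbps)))

C = 6.0  # shared link capacity in Mbps (illustrative)

# QoS-fair split: equal bandwidth for both users, but unequal QoE.
qos_fair = (q_laptop(C / 2), q_mobile(C / 2))

# QoE-fair split: grid search for the split b_l + b_m = C that minimizes
# the gap between the two users' QoE values.
gap, b_l = min((abs(q_laptop(b) - q_mobile(C - b)), b)
               for b in (i * 0.01 for i in range(1, 600)))
qoe_fair = (q_laptop(b_l), q_mobile(C - b_l))
```

Under these assumed models, the equal-bandwidth split leaves a noticeable QoE gap between the two users, while the QoE-fair split (here giving the laptop more bandwidth) brings their QoE values close together.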

The use of QoE models to solve this resource allocation problem allows the operator to be “closer” to the users’ needs in terms of service quality.

On the need for QoE fairness

Besides keeping their users sufficiently satisfied, operators may care about doing so in a fair manner. Whereas in many cases users will not be aware of the quality experienced by other users, there are several contexts in which they may be (e.g., shared activities, applications involving social media), and this can become a relevant factor; distributing the resources in a “fair way” can thus be a smart business practice for operators. As discussed above, what is fair in the QoS domain may not be fair in the QoE domain, and so a notion of QoE fairness becomes necessary. We note that this applies not only to scenarios where multiple different services are involved, but also to scenarios where a single service is considered. In what follows, we focus on these single-service scenarios, but the contributions presented herein hold for multi-service scenarios as well, provided that QoE models for those services are available and comparable, which to the best of our knowledge is still an open problem.2

QoE fairness and QoE management

As mentioned above, QoE management problems often revolve around maximizing some measure of QoE (the Mean Opinion Score in the simplest case, but ideally something like the percentage of users who rate the service above a certain threshold). The literature further advocates the need to consider ensuring fairness among users, in particular related to QoE fairness [13, 27, 37]. This generally leads to a multi-objective optimization problem, typically, maximizing QoE subject to some fairness constraints. This can be approached in several ways, such as:
  • A two-step approach, maximizing first the average QoE, with a second step to solve for maximum fairness while maintaining the previously determined average quality level.

  • An approach based on utility functions, where the optimization targets (e.g., cost minimization, average quality maximization, fairness maximization) are combined into a utility function.

We posit that in a QoE management context, we generally do not want to conflate QoE fairness with overall QoE, e.g., by using utility functions. By treating QoE and QoE fairness as orthogonal goals, the operator can decide on the correct trade-offs and on the relevance of fairness. This is easier from a practical point of view if there are separate metrics for the two concepts. Our aim is to provide a means for the provider to implement QoE management by considering the various aspects independently, according to their particular situation. The existence of a QoE fairness metric does not imply that operators must use only QoE-fair assignment of resources, but should they need to, they have a well-founded metric at their disposal. For further discussions related to QoE management, different QoE metrics and fairness, the interested reader is referred to Hoßfeld et al. [18].

QoE fairness index

QoE models and QoE fairness

We have proposed a QoE fairness index [15], F(Y), which enables us to assess the fairness of a provided service, for which we assume that we have a set of QoE values (Y) produced by a QoE model (about whose particulars we need not worry) mapping a set of QoS parameters x to a unique QoE estimate y.

In resource management, network and service providers already use a notion of fairness at the QoS level, striving to allocate a fair share of resources (e.g., bandwidth) to each segment/session/user. However, as we will discuss in this article, the notion of QoS fairness and fair share resource allocation will in general not provide QoE fairness, and a new QoE fairness index is required in order to assess the fairness at the QoE level.

Figure 2 shows a generic QoE model that maps the QoS factors (x) and other influence factors to a QoE value (y). The fairness index is independent of the time scale considered by the QoE model. We consider a shared system of users consuming a certain service. For each service there is a set of QoS parameters (of various key QoS influence factors on QoE) given in a vector x. There exists a mapping function Q taking the QoS parameters in the set X into a QoE value y. For user i the corresponding QoS parameters are \(x_i\), with QoE value \(y_i\).
$$ Q: X \rightarrow [L;H], \quad x \mapsto y = Q(x) \, . $$
We note that Q does not need to be monotonic.
Fig. 2

Scope of the paper: QoE fairness index. Please note that the notion and the variables frequently used in this article are summarized in Table 6

L and H are the lower and upper bounds of the QoE scale, respectively, e.g., \(L=1\) (‘bad quality’) and \(H=5\) (‘excellent quality’) when using a 5-point absolute category rating scale. As an example of a QoE model, \(y = Q(x)\) is the mean opinion score (MOS) value corresponding to QoS x. In the literature, those QoE models are often derived by subjective user studies, and typically only the MOS is used. But other QoE metrics (like the median, quantiles, etc.) may be of particular interest for service providers [14], which may be reflected by the mapping function Q.
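As an illustration of such a mapping on a 5-point scale, one might take an exponential (IQX-style) decay of MOS with page load time; the coefficients below are arbitrary and serve only to demonstrate the shape of Q and its bounds L and H:

```python
import math

L_BOUND, H_BOUND = 1.0, 5.0  # bounds of a 5-point ACR scale

def q_web(plt_seconds):
    # Illustrative QoS-to-QoE mapping: MOS decays exponentially with
    # page load time and is clipped to the scale bounds [L, H].
    mos = H_BOUND * math.exp(-0.3 * plt_seconds)
    return min(H_BOUND, max(L_BOUND, mos))
```

Any such Q (monotonic or not) produces the set of per-user QoE values Y on which the fairness index operates.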

Desirable properties of a QoE fairness index

As mentioned in the introduction, the most commonly used quantification of (QoS) fairness is Jain’s fairness index [20]. It was designed for QoS fairness, with the properties introduced in the following. We briefly explain those properties and interpret them for QoE fairness. Please note that we discuss the properties in detail for Jain’s fairness index applied to QoE, as well as for our proposed index, in “Properties of the QoE fairness index F”, and validate the indices with respect to these properties. A fairness index F(Y) maps the QoE values Y to a single scalar value. In the following, Y may denote either a random variable or a set of samples. Thus, F should have the following properties:
  1. (a)

    Population size independence: it should be applicable to any number of users. If the QoE values emerging in the system follow a certain distribution Y, then the actual number of users should not affect the fairness index. Let \(Y_n\) be a set of n samples of the RV Y. We demand: if \(Y_n \sim Y\) and \(Y_m \sim Y\), then \(F(Y_n)=F(Y_m)\), even if \(n \ne m\). For example, the sum of absolute differences \(D=\sum _{i=1}^n |Y_i - {\text{E}}[Y]|\) from the average QoE \({\text{E}}[Y]\) is a measure of the diversity of QoE values in the system. However, the more users n are in the system, the larger the value of D may get. Hence, such a metric is not suitable to quantify QoE fairness. Also the sum of Y and the standard error of Y depend on the sample size and hence violate this property, while the expected value and standard deviation fulfill it.

  2. (b)

    Scale and metric independence: the unit of measurement should not matter (for QoE this means independence of the L and H values). The main intention of the formulation of this property is that the unit does not influence the fairness index. For example, it does not matter if kbps or Mbps is used when considering network throughput. For Jain’s index, the measurement scale must be a ratio scale with a clearly defined zero point. On such a ratio scale, scale and metric independence can be formulated as \(F(aY)=F(Y)\) for \(a>0\). However, QoE is measured on a category or interval scale, see also “Relative standard deviation on an interval scale”. Therefore, scale and metric independence means that the fairness index is the same when the QoE values are linearly transformed (to another interval scale). We demand: \(F(aY+b)=F(Y)\) for \(a\ne 0\) and any b. Please note that a negative value of a means that the interpretation of the QoE values is inverted. Instead of the degree of delight of the user, Y, the annoyance or dissatisfaction is expressed by \(-Y\).

  3. (c)

    Boundedness: the fairness index should be bounded (without loss of generality it is set to be between 0 and 1). A bounded fairness index enables comparison of different sets of QoE values (e.g., from different applications) if the fairness index is mapped on the same value range. We demand: \(F(Y) \in [0;1]\).

  4. (d)

    Continuity: the fairness index should take continuous values, and changes in resource allocation should change the index (e.g., the max–min ratio does not satisfy this, since it considers only the max and the min, and not the values of \(x_i\) in between). We demand: \(F(Y)\in \mathbb {R}\) and \(F(Y)\ne F(Y')\) if \(Y_i=Y'_i\) for all \(i \ne j\), but \(\exists j: Y_j\ne Y'_j\). Please note that continuity allows us to discriminate between systems. Although a discrete fairness index may also be useful in practice, the discriminative power of a continuous index is beneficial in QoE management.

  5. (e)

    Intuition: the fairness index should be intuitive: high value if fair (\(F(Y)=1\) is “perfect” fairness), and low value if unfair (\(F(Y)=0\), if possible, is totally unfair). \(F(Y)=1\) means that all users get the same QoE. The most unfair system is when half of the users obtain the best quality and the other half get the worst quality.

Please note that those properties are stated in the literature by Jain et al. [20]. The main motivation for properties (c)–(e) is to have an intuitive metric which provides continuous values in the interval [0; 1]. The fairness index values are thus comparable across systems. Property (b), ‘scale and metric independence’, is crucial. In QoE assessment, different rating scales are commonly used to assess quality, such as 5-point, 7-point, 11-point or continuous scales, which differ in terms of discriminatory power and reliability, and also in assessment time and ease of use by the subjects. As examples, Tominaga et al. [43] discuss different rating scales for mobile video, Huynh-Thu et al. [19] for high-definition video, and Möller [29] for speech quality. Moreover, the QoE models used may operate on different scales. For example, the transmission rating scale of the E-model quantifies the quality of speech transmission on a rating scale from \(0, \ldots , 100\). This scale is extended to \(0, \ldots , 129\) to consider wideband transmission [30]. Consequently, we argue that a QoE fairness index needs to quantify fairness independently of the underlying scale.
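Property (b) is precisely where an index designed for ratio scales can run into trouble with QoE data. The following numerical sketch (ours) contrasts Jain’s index with a dispersion index normalized by the scale span, in the spirit of the index proposed in [15]: the former changes under a linear rescaling of the rating scale, while the latter does not:

```python
from statistics import pstdev

def jain(y):
    # Jain's index, defined for ratio scales with a true zero point.
    n = len(y)
    return sum(y) ** 2 / (n * sum(v * v for v in y))

def span_index(y, low, high):
    # Dispersion normalized by the scale span (in the spirit of [15]):
    # invariant under any linear transformation of the rating scale.
    return 1 - 2 * pstdev(y) / (high - low)

y5 = [1, 3, 5, 3]                  # ratings on a 5-point scale [1, 5]
y11 = [2.5 * (v - 1) for v in y5]  # the same ratings mapped linearly to [0, 10]
```

Here jain(y5) ≈ 0.82 but jain(y11) ≈ 0.67, even though both sets describe the same user experiences, whereas span_index yields the same value on both scales.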
When we want to quantify the QoE fairness we need an index with the same properties as above, plus the following:
  1. (f)

    QoE level independence: the fairness index is independent of the QoE level, i.e., of whether the system achieves good or bad QoE. As discussed in “From QoS to QoE management”, overall QoE and QoE fairness are orthogonal concepts, and thus we want the QoE fairness index to be independent of the overall QoE of the system. We demand: the fairness statistic F(Y) shall be independent of the sample mean \({\text{E}}[Y]\). The theorem by Basu [2] shows that the sample variance and standard deviation fulfill this property and are independent of the sample mean. Therefore, we can concretize this property using the variance of the QoE values. We demand: given two systems with \({\text {Var}}[Y_1]={\text {Var}}[Y_2]\) and \({\text {E}}[Y_1]\ne {\text {E}}[Y_2]\), then \(F(Y_1)=F(Y_2)\). A simple example for the rationale of this property is as follows. Let us assume that all users experience a fair QoE (3 on a 5-point scale); the system is totally fair. If all users experience a good QoE (4 on a 5-point scale), the system is obviously better, but it is not fairer. Please note that a shift in QoE (i.e., changing the QoE level) without changing the dispersion of QoE values around the mean does not affect the fairness index.

It is important to notice that, as per property (f) above, F does not consider the absolute (average) QoE values across all users in a system, i.e., it is independent of the QoE level. Thus, QoE management solutions can treat QoE fairness and the overall QoE of the system separately. Of course, QoE management solutions will still consider the overall QoE, see “From QoS to QoE management”.

As an example, consider a system (I) with an average QoE value \(\bar{y}=4\) on a 5-point ACR scale (\(50\%\) of users with \(y=3.5\) and \(50\%\) with \(y=4.5\)) and a system (II) with an average QoE \(\bar{y}=2\) (\(50\%\) of users with \(y=1.5\) and \(50\%\) with \(y=2.5\)). We regard both systems as equally fair.

As another example, a (bad) system with an average QoE \(\bar{y}=1.4\), with \(10\%\) of users at \(y=5\) and \(90\%\) at \(y=1\) (standard deviation 1.2), is less fair than a system with an average QoE \(\bar{y}=4\) with \(50\%\) of users at \(y=3\) and \(50\%\) at \(y=5\) (standard deviation 1), despite the much lower QoE level. This property makes QoE fairness orthogonal to the overall QoE of the system, which makes it possible to objectively benchmark systems with respect to both aspects and to evaluate the possible trade-off between QoE fairness and overall QoE (see “Application of the QoE fairness index” for an example). Figure 3 visualizes the independence of the QoE level from QoE fairness.
Fig. 3

Illustration of property (f): the QoE level is independent of QoE fairness, allowing more flexibility in QoE management

We would like to highlight that property (b) ‘scale and metric independence’ and property (f) ‘QoE level independence’ are key features. Since QoE is given on arbitrary interval scales, a linear transformation must not influence the fairness index. QoE level independence provides greater flexibility in QoE management: it makes it possible to mimic combined utility functions with relevance factors (e.g., for fairness, costs, overall QoE) defined by the provider. The utility values are then easily derived as \(U(Y,F_Y)\). The other properties (population size independence, boundedness, continuity, intuition) are desired in order to obtain a mathematically “nice” metric that is intuitive and easy to interpret.

We also note that in Hoßfeld et al. [15], we demanded additional properties (deviation symmetry and validity for multi-applications) for a QoE fairness metric. However, after receiving feedback from the reviewers of this paper, we carefully analyzed those properties and revised them. In particular, we found that they follow from the set of properties (a)–(f) above. We discuss these derived properties below.

Additional properties derived from desirable properties

  1. (g)

    Deviation symmetric: the fairness index should only depend on the absolute value of the deviation from the mean value, not whether it is positive or negative. This property follows from (b). When considering the distribution Y of QoE values, the flipped distribution \(Y'\) (i.e., reflection in a line parallel to the y-axis in the middle of the QoE scale) is simply \(Y'=-Y+L+H\). Thus, \(F(Y)=F(Y')\) due to property (b) with \(a=-1\) and \(b=L+H\). Deviation symmetry can also be seen from property (f). \({\text {Var}}[Y'] = (-1)^2 {\text {Var}}[Y] = {\text {Var}}[Y]\), and hence \(F(Y') = F(Y)\).

  2. (h)

    Valid for multi-applications: the fairness index should reflect the cross-application fairness (and not only between users of the same application). Property (h) requires that a set of suitable QoE models exists for the applications considered. If the QoE models fulfill this, then the fairness index fulfills this property too. QoE and QoE models are application specific, and how to compare QoE values from different applications is a separate and challenging topic that is outside the scope of this paper.

In an axiomatic theory of fairness in network resource allocation, Lan et al. [24] demand similar properties: continuity, independence of the number of users, and homogeneity \(F(Y)=F(aY)\) (which follows from scale and metric independence). They further demand monotonicity, which means that for \(n=2\) users the fairness measure increases monotonically as the absolute difference between the two elements shrinks to zero. This is reflected in property (e): the fairness index converges towards \(F=1\) as the users perceive the same QoE and the difference in QoE approaches zero.

Relative standard deviation and Jain’s fairness index

Arguably, the two most common indexes used in literature for quantifying QoE fairness are the relative standard deviation (RSD) and Jain’s fairness index. They rely on second-order moments of the QoE values Y (a random variable resulting from mapping the QoS parameters X, another random variable, with the QoE model Q; \(Y=Q(X)\)) in a system to numerically express the dispersion of QoE values across users.

Relative standard deviation (RSD)

The relative standard deviation c (also referred to as coefficient of variation) is the standard deviation \(\sigma ={\text {Std}}[Y]\) of the QoE values normalized by the average QoE \(\mu ={\text {E}}[Y]\).
$$\begin{aligned} c = \frac{\sigma }{\mu } = \frac{{\text {Std}}[Y]}{{\text {E}}[Y]} \end{aligned}.$$
Given that \(\mu > 0\), the RSD satisfies \(c\ge 0\). When RSD is used as a fairness index, a low c represents a fair system, while a higher c indicates a more unfair system. Maximum fairness is achieved when \(c=0\), i.e., all users experience the same QoE. Minimum fairness is obtained when the RSD reaches \(c_{max}\). Let us consider the maximum standard deviation \(\sigma _{\max }\) for an observed average QoE \(\mu \). The \(\sigma _{\max }\) is obtained when a ratio p of users gets the minimum QoE L and \(1-p\) gets the maximum H. Then, the average QoE value is \(\mu =pL+(1-p)H\), which can be transformed to
$$\begin{aligned} p=\frac{H-\mu }{H-L}. \end{aligned}$$
The maximum standard deviation \(\sigma _{\max }\) follows as
$$\begin{aligned} \sigma _{\max }=(H-L) \sqrt{(1-p)p}. \end{aligned}$$
Replacing p with Eq. (3) then we get \(\sigma _{\max }\) as a function of \(\mu \):
$$\begin{aligned} \sigma _{\max }(\mu ) = \sqrt{(\mu -L)(H-\mu )} \end{aligned}$$
and the maximum RSD \(c_{max}\) as a function of \(\mu \)
$$\begin{aligned} c_{max}(\mu ) = \frac{\sqrt{(\mu -L)(H-\mu )}}{\mu }. \end{aligned}$$
The \(c_{max}\) reaches its maximum value for \(\mu \) satisfying:
$$\begin{aligned} \frac{\partial c_{max}(\mu )}{\partial \mu } = 0 \end{aligned}$$
which gives
$$\begin{aligned} \mu _{\max }=\frac{2HL}{H+L} \end{aligned}$$
$$\begin{aligned} c_{max}=\frac{1}{2}\frac{H-L}{\sqrt{HL}}. \end{aligned}$$
Let us return to using c as a fairness index. Intuitively, we regard the most unfair system to be the one where half of the users experience the lowest QoE L and half experience the highest QoE H, i.e., \(p=0.5\) in Eq. (3), and hence
$$\begin{aligned} \mu _u=\tfrac{1}{2}(H+L) \end{aligned}$$
with standard deviation
$$\begin{aligned} \sigma _u=\tfrac{1}{2}(H-L) \end{aligned}$$
and relative standard deviation
$$\begin{aligned} c_u=\frac{H-L}{H+L} \, . \end{aligned}$$
On a 5-point scale with \(L=1\) and \(H=5\), then \(\mu _u=3\), \(\sigma _u=2\), and \(c_u=\frac{2}{3} = 0.67\) for the most unfair system. Applying the same parameters to Eq. (9) gives \(c_{max}= 2/\sqrt{5} = 0.89\) (for \(\mu _{\max } = 5/3\)), and hence \(c_{max}> c_u\). Generally, it can be proven that \(c_{max}> c_u\) when \(H>L\), which means that using RSD as a fairness index will never rate the most unfair system (in our notion) as the most unfair system.
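These values can be reproduced with a short computation. The following sketch (plain Python; the helper name `rsd` is ours) verifies \(c_u\), \(c_{max}\), and \(\mu _{\max }\) for the 5-point scale:

```python
import math

def rsd(values):
    """Relative standard deviation (coefficient of variation) of a population."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mu

L, H = 1, 5  # bounds of the 5-point scale

# The most unfair system in our notion: half the users at L, half at H.
c_u = rsd([L] * 50 + [H] * 50)            # (H - L)/(H + L) = 2/3

# Maximum RSD over all distributions on [L; H] (Eq. 9) and where it occurs.
c_max = 0.5 * (H - L) / math.sqrt(H * L)  # 2/sqrt(5) ~ 0.894
mu_max = 2 * H * L / (H + L)              # 5/3 ~ 1.67

# RSD never rates the most unfair system as the most unfair one:
assert c_max > c_u
```

Running the sketch confirms \(c_{max}\approx 0.89 > c_u \approx 0.67\).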

This is illustrated in Fig. 4 which shows the maximum standard deviation (\(\sigma _{\max }(\mu )\)) and RSD (\(c_{max}(\mu )\)) as a function of average QoE (\(\mu \)) on a 5-point scale. It can be observed that the maximum RSD \(c_{max}\) is not achieved for the most unfair system at \(\mu =3\) but at \(\mu _{\max }=1.67\).

Thus we conclude that the RSD is not an intuitive fairness measure, as the most unfair system (Eq. 12) does not reach the maximum RSD (Eq. 9).3 From Eq. (9), we further see that the bounds of the RSD depend on the actual rating scale. In the case of \(L=0\), however, the RSD is not bounded and violates property (c) ‘boundedness’. The RSD also trivially violates property (f) ‘QoE level independence’, as it depends on the average QoE value (Eq. 2).

Furthermore, the RSD does not fulfill property (g) ‘deviation symmetric’, which is demonstrated in two simple scenarios, cf. Table 2. In scenario (A), 90% of users experience the best QoE and 10% experience the worst QoE. In scenario (B), the opposite ratio is observed, i.e., 10% of users experience the best QoE and 90% experience the worst QoE. Clearly, scenario A leads to better QoE than scenario B; however, both systems are equally unfair. Nevertheless, the RSD differs between the two scenarios, i.e., \(c_A\ne c_B\). The RSD is thus not deviation symmetric.

The RSD is furthermore scale dependent and violates property (b). A linear transformation of the QoE values Y leads to different RSD values. We define T(Y) as a linear transformation with parameters a and b. For example, \(a=-b=\tfrac{1}{4}\) when normalizing QoE values from [1; 5] to [0; 1].
$$ T(Y)=aY+b.$$
For the linearly transformed QoE values (assuming \(a>0\)), we observe a dependency of the RSD value on the transformation.
$$ {\text {E}}[T(Y)] = a \mu +b,$$
$${\text {Std}}[T(Y)] = a\sigma,$$
$$ c_{T} = \frac{a \sigma }{a \mu + b}.$$
Table 2 illustrates the scale dependency numerically. Scenarios (C) and (D) are equivalent to scenarios (A) and (B); however, a normalized scale is used instead of a 5-point scale. We observe \(c_A \ne c_C\) and \(c_B \ne c_D\), respectively. (We note that the index F will be introduced in detail later on in “Defining a QoE fairness index”, with descriptions left out at this point for the sake of readability.)
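The RSD values for scenarios (A)–(D) can be computed directly from their definitions; a minimal sketch (plain Python, helper name `rsd` is ours):

```python
import math

def rsd(values):
    """Relative standard deviation of a population of QoE values."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mu

# Scenarios (A) and (B) on a 5-point scale [1; 5]:
c_A = rsd([5] * 90 + [1] * 10)       # 90% best, 10% worst QoE -> ~0.26
c_B = rsd([5] * 10 + [1] * 90)       # 10% best, 90% worst QoE -> ~0.86

# Scenarios (C) and (D): the same ratios on a normalized scale [0; 1]:
c_C = rsd([1.0] * 90 + [0.0] * 10)   # ~0.33
c_D = rsd([1.0] * 10 + [0.0] * 90)   # 3.0

assert c_A != c_B    # RSD is not deviation symmetric ...
assert c_A != c_C    # ... and not scale independent
```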
Table 2

Illustrative scenarios and fairness indexes

Scenario | Best QoE (%) | Worst QoE (%) | \(\mu \) | \(\sigma \) | c | J | F

QoE values with \(L=1\) and \(H=5\)

(A) | 90 | 10 | 4.6 | 1.2 | 0.26 | 0.94 | 0.40
(B) | 10 | 90 | 1.4 | 1.2 | 0.86 | 0.58 | 0.40

Normalized QoE values with \(L=0\) and \(H=1\)

(C) | 90 | 10 | 0.9 | 0.3 | 0.33 | 0.90 | 0.40
(D) | 10 | 90 | 0.1 | 0.3 | 3.00 | 0.10 | 0.40

The RSD c and Jain’s fairness index J depend on the average QoE value. As a consequence, RSD and J are scale dependent and return different values after normalization of QoE values to [0; 1]. The proposed index F is identical in all four scenarios

Jain’s fairness index J

The well-known Jain’s fairness index J can be applied to QoE values Y and is a function of the RSD c.
$$\begin{aligned} J=\frac{1}{1+c^2} = \frac{{\text {E}}[Y]^2}{{\text {E}}[Y^2]}. \end{aligned}$$
Jain’s index takes continuous values in [0; 1]. The maximum fairness (\(J_{max}=1\)) is reached for the minimum standard deviation (\(\sigma _{\min } = 0\)). If we consider the most unfair scenario with maximum standard deviation \(\sigma _{\max }\) (Eq. 7), we would expect the fairness index to reach its minimum. Substituting \(c_u\) from Eq. (12) into Eq. (17) yields
$$\begin{aligned} J_u = \frac{1}{1+c_u^2}= \frac{(H+L)^2}{2 (H^2+L^2)} \, . \end{aligned}$$
Thus, in the most unfair scenario, a nonzero value \(J_u>0\) is observed. Therefore, Jain’s index is not an intuitive fairness measure, as the most unfair system does not reach the minimum value, i.e., \(J_u > J_{min}\) when \(H>L\). In fact, it is the maximum RSD (Eq. 9) that leads to the minimum possible value \(J_{min}\).
$$\begin{aligned} J_{min}= \frac{4HL}{(H+L)^2}. \end{aligned}$$
This means that the minimum value \(J_{min}\) depends on the bounds of the value range [L; H]. If \(L=0\), then \(J_{min}=0\). On a 5-point scale, \(J_{min}=\tfrac{5}{9}\approx 0.56\). Although J is bounded in [0; 1], the lower bound \(J_{min}\) is determined by the value range [L; H] and the actual bounds are \([J_{min};1]\). Property (c) is only partly fulfilled.
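For a concrete feel for these bounds, the following sketch evaluates \(J_u\) (Eq. 18) and \(J_{min}\) (Eq. 19) on a 5-point scale:

```python
L, H = 1, 5

# Jain's index in the most unfair scenario (half at L, half at H), Eq. (18):
J_u = (H + L) ** 2 / (2 * (H ** 2 + L ** 2))   # 36/52 = 9/13 ~ 0.69

# Minimum possible value of Jain's index on [L; H], Eq. (19):
J_min = 4 * H * L / (H + L) ** 2               # 20/36 = 5/9 ~ 0.56

# The most unfair system does not attain the minimum of J:
assert J_u > J_min
```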
Fig. 4

The maximum standard deviation \(\sigma _{\max }\) and coefficient of variation \(c_{\max }\) are plotted depending on the average QoE value \(\mu \). We observe that the curve for \(\sigma _{\max }\) is symmetric, in contrast to \(c_{\max }\). The most unfair scenario (\(50{\%}\) of users experience the worst quality L and \(50\%\) the best quality H, leading to \(\mu =3\)) is well captured by \(\sigma _{\max }\), but not by \(c_{\max }\), due to the normalization by the observed (average) QoE level \(\mu \)

Issues with Jain’s index for QoE fairness

In the following, we demonstrate that Jain’s fairness index violates several desirable properties introduced in “Desirable properties of a QoE fairness index”.4 We further illustrate severe issues for its application in the QoE domain.

Scale and metric dependency of J

The scale dependency of J is caused by the dependency of the RSD on the actual scale, as shown in Eq. (16). To be more precise, a linear transformation T(Y) of the QoE values impacts J.

Figure 5 highlights the dependency of J when using different QoE domains with varying L and H. On a 5-point scale [1; 5], the same average QoE level \(\mu =2\) is considered and only the standard deviation of the QoE values \(\sigma \) is varied. The QoE values are then transformed to different rating scales [LH]. It can be seen from Fig. 5 that Jain’s fairness index is not scale independent.

Please note that Jain’s index was developed for measures on a ratio scale (like bandwidth or waiting times), which implies \(L=0\). In that case, Jain’s index fulfills scale independence: it does not matter, e.g., whether bandwidth is measured in kbit/s or MByte/s. Only in the case of \(L=0\) is Jain’s index scale independent; J is then equal for any H. Let us assume \(\sigma ={\text {Std}}[Y_{0;1}]\) and \(\mu ={\text {E}}[Y_{0;1}]\) when considering the QoE values on a [0; 1] scale. Now we consider the linearly transformed values on the scale [L; H], i.e.
$$\begin{aligned} T(Y)=(H-L)Y+L \end{aligned}$$
with \(a=H-L\) and \(b=L\) in Eq. (13). The RSD \(c_T\) of the transformed values (Eq. 16) is equal to c, iff \(L=0\).
$$ c_T=c \Leftrightarrow \frac{a \sigma }{a \mu +b} = \frac{\sigma }{\mu } \Leftrightarrow b=0 .$$
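This behavior is easy to check numerically; in the following sketch the sample values are arbitrary and only serve to illustrate the effect of \(b\ne 0\):

```python
def jain(values):
    """Jain's fairness index J = E[Y]^2 / E[Y^2], Eq. (17)."""
    n = len(values)
    return (sum(values) / n) ** 2 / (sum(v * v for v in values) / n)

y = [2.0, 2.0, 3.0, 4.0]              # QoE values on a 5-point scale [1; 5]
shifted = [(v - 1) / 4 for v in y]    # transform to [0; 1]: a = 1/4, b = -1/4
scaled = [10 * v for v in y]          # pure rescaling: a = 10, b = 0

# J changes under a transformation with b != 0 ...
assert abs(jain(y) - jain(shifted)) > 0.05
# ... but is invariant under pure scaling (b = 0):
assert abs(jain(y) - jain(scaled)) < 1e-12
```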
Fig. 5

Property (b) ‘Scale Independence’: Jain’s fairness index is violating scale independence. It matters if scores are linearly transformed. In contrast, the proposed fairness index F is scale independent

However, in the QoE domain the most common scale is the 5-point MOS scale with \(L=1\) and \(H=5\), so this precondition does not hold. When using normalized QoE values in [0; 1], i.e., \(L = 0\), J is very sensitive to QoE values close to zero, as depicted in Fig. 6.

We consider here a constant standard deviation \(\sigma =0.1\) on the 5-point MOS scale and vary the average QoE value \(\mu \).

Such a small \(\sigma \) is reached when 50% of the users get maximum QoE 5 and 50% get a QoE of 4.8. This is also reached when 50% of the users get minimum QoE 1 and 50% get a QoE of 1.2. Another scenario leading to the same \(\sigma =0.1\) is the following. 99.9375% obtain QoE 5 and the remaining 0.0625% obtain QoE 1.

However, Jain’s index varies from 0.5 to 1, as depicted in Fig. 6, which shows the fairness index J for normalized QoE values for varying \(\mu \) and constant \(\sigma ^*\). Due to the normalization, \(\sigma ^*={\sigma }/{4} =0.025\) (see Eq. 15), and we observe that J is clearly sensitive to values of \(\mu \) close to zero. A small shift of the average QoE significantly decreases J.
Fig. 6

J is very sensitive to QoE values close to zero. In contrast, the proposed fairness index F is QoE level independent. A constant standard deviation \(\sigma ^*={\sigma }/{4}=0.025\) is assumed on a normalized scale [0; 1]

Furthermore, Jain’s index is not able to capture fairness when higher values on the scale mean lower QoE. As an example, consider the following quality degradation scale. 0—no degradation, 1—perceptible but not annoying, 2—slightly annoying, 3—annoying, 4—very annoying, 5—extremely annoying. Let us consider that \(n-1\) users experience the best quality 0 and one user obtains a 1. Then, the average QoE is \({\text {E}}[Y]=1/n\), while the coefficient of variation follows as \(c_Y=\sqrt{n-1}\). Then \(J=1/(1+c_Y^2)=1/n\). In the limit, J converges towards \(\lim _{n \rightarrow \infty }1/n=0\). Hence, although the scenario approaches the best and fairest possible system, J quantifies it as completely unfair.
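This degenerate behavior can be reproduced directly (plain Python; the ratings follow the degradation scale above):

```python
def jain(values):
    """Jain's fairness index J = E[Y]^2 / E[Y^2]."""
    n = len(values)
    return (sum(values) / n) ** 2 / (sum(v * v for v in values) / n)

# n-1 users see no degradation (best rating 0), one user a rating of 1.
for n in (10, 100, 1000):
    ratings = [0] * (n - 1) + [1]
    J = jain(ratings)
    # J = 1/n, although the system is nearly perfect and almost perfectly fair.
    assert abs(J - 1 / n) < 1e-9
```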

QoE level dependence of J

From Fig. 6, we further see that Jain’s fairness index is QoE level dependent. A more explicit visualization is provided in Fig. 7 which clearly illustrates the QoE level dependence of J.5 In particular, J is plotted against the standard deviation \(\sigma \) for different average QoE values \(\mu =2,3,4\).

The three different curves for the average QoE \(\mu \) are not overlapping, although the same standard deviation from the mean value is observed. For higher average QoE values, the same standard deviation leads to higher fairness. For \(\mu =4\) and \(\mu =2\), we observe values of Jain’s fairness index of about 0.95 and 0.8, respectively, for \(\sigma =1\).
Fig. 7

Property (f) ‘QoE Level Independence’: On a 5-point scale, Jain’s fairness index is computed for varying standard deviation \(\sigma \). Each curve reflects an average QoE value \(\mu =2,3,4\). The curves are not overlapping and depend on \(\mu \). Jain’s fairness index J is QoE level dependent – in contrast to F

Deviation asymmetry of J

The desired property (g) ‘Deviation Symmetry’ means that the fairness index should only depend on the absolute value of the deviation from the mean value, not whether it is positive or negative.

Therefore, a scenario is considered in which a ratio of p users experience a QoE of 2 and \(1-p\) experience \(2+\delta \). Figure 8 plots Jain’s fairness index J against the discrepancy \(\delta \in [-1;1]\) between the two user classes. We observe that Jain’s index is not deviation symmetric, as the resulting curves for \(p=0.1\) and \(p=0.3\) are not symmetric around \(\delta =0\).

Another illustration is provided in Fig. 9. Normalized QoE values are considered with \(L=0\) and \(H=1\). Again, two user classes are considered. A ratio p of users obtains QoE y and \(1-p\) obtain worst QoE L. In that scenario, we obtain the following statistical measures and fairness index J.
$$ {\text {E}}[Y]= py $$
$$ {\text {E}}[Y^2]= py^2 $$
$$ {\text {Var}}[Y]= y^2(p-p^2) $$
$$ {\text {Std}}[Y]= y\sqrt{p-p^2},$$
$$ c_Y= \sqrt{\tfrac{1-p}{p}},$$
$$ J= p. $$
This scenario provides an intuitive meaning for J. Relating the interpretation to QoS and resource allocation, a ratio of p users share the resource and obtain a rating of \(y > 0 \); the other \(1-p\) users receive zero resources, which means no service and hence a rating of \(L=0\). If \(p=100\%\) of users get \(y=H\), then \(J=1\) (maximal). In the limit \(p \rightarrow 0\), almost all users get the rating \(L=0\) and \(J \rightarrow 0\) (minimal). However, \(J=0\) would indicate a totally unfair system, although (almost) all users get the same (bad) rating. Jain’s fairness index is therefore not intuitive when applied to QoE. Similar considerations for other values p and \(q=1-p\) (cf. Table 2) show that Jain’s fairness index violates deviation symmetry.
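The closed-form results of this scenario can be cross-checked numerically. The sketch below also evaluates the index F introduced later in “Defining a QoE fairness index” (\(F=1-2\sigma \) on the normalized scale); the function names are ours:

```python
import math

def jain(values):
    """Jain's fairness index J = E[Y]^2 / E[Y^2]."""
    n = len(values)
    return (sum(values) / n) ** 2 / (sum(v * v for v in values) / n)

def qoe_fairness(values, L=0.0, H=1.0):
    """F = 1 - 2*sigma/(H - L), introduced in the next section."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return 1 - 2 * sigma / (H - L)

# A fraction p of users gets QoE y = H = 1, the rest the worst QoE L = 0.
for p, k in ((0.1, 10), (0.3, 30), (0.5, 50), (0.9, 90)):
    values = [1.0] * k + [0.0] * (100 - k)
    assert abs(jain(values) - p) < 1e-9                 # J = p, Eq. (27)
    F = 1 - 2 * math.sqrt(p - p * p)                    # F = 1 - 2*sqrt(p - p^2)
    assert abs(qoe_fairness(values) - F) < 1e-9
```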
Fig. 8

Property (g) ‘Deviation Symmetry’: In this scenario, a ratio of p users experience 2 and \(1-p\) experience \(2+\delta \). We observe that J violates property (g) ‘Deviation symmetric’ and is less sensitive than F

Fig. 9

Property (g) ‘Deviation Symmetry’: two user groups are considered in this illustration. A ratio p of users gets maximum QoE \(H=1\) and \(1-p\) gets minimum QoE \(L=0\). Jain’s fairness index follows as \(J=p\), see Eq. (27). The proposed fairness index F is derived as \(F=1-2\sigma =1-2\sqrt{p-p^2}\), see Eq. (25). If 50% of the users get 0 and 50% get maximum QoE, then this is maximum unfair, \(F=0\). However, \(J=p=1/2\)

Relative standard deviation on an interval scale

A major concern of the application of Jain’s fairness index in the QoE domain is the typical interval scale of the QoE function Q in Eq. (1). The RSD may not have any meaning for data on an interval scale. For the computation of an RSD, a ratio scale is required which contains a natural zero value, like ‘no waiting time’ \(\equiv 0 {s}\).

However, the MOS scale typically used in QoE models is not a ratio scale. There is no meaningful zero value on the QoE scales: ‘zero’ would mean ‘no QoE’—which is not defined. Hence, the RSD of QoE values—and therefore Jain’s index—have no meaning for QoE values. The MOS scale can be considered as an interval scale as concluded by Norman [32]. Therefore, it is required to use other statistics (like the standard deviation) to measure the deviation from the mean.


For QoS fairness, the usage of the relative standard deviation as in Jain’s fairness index is very reasonable. An example of a QoS measure is bandwidth, which is measured on a ratio scale with a meaningful zero value (‘no bandwidth’).

However, Jain’s fairness index may also be difficult to interpret if the data is measured on a ratio scale—which allows the RSD to be computed. Consider the following example. The QoS measure is delay, e.g., web page load time, which measures a duration on a ratio scale with a meaningful zero value (‘no delay’). In that case, Jain’s index leads to counterintuitive results: in a scenario where 100% of users experience no delay, \(J=0\). Figure 9 can be re-interpreted by considering that a ratio p of users experiences a delay of 1 s, while \(1-p\) experience no delay. Thus, for QoS measures like delays, Jain’s index cannot be directly applied to quantify QoS fairness.

Defining a QoE fairness index

Before presenting the formal definition of F, we briefly sketch the rationale behind it. After the definition, we discuss its properties, and compare it to Jain’s index.

Rationale for a QoE fairness index

Jain’s fairness index is not applicable as a QoE fairness index, as it violates some of the desired properties specified in “Desirable properties of a QoE fairness index”. A reasonable approach is to consider only the standard deviation, without relating it to the mean value, when defining QoE fairness. The standard deviation \(\sigma \) of the QoE values Y quantifies the dispersion of the users’ QoE in a system.

There exists a maximum standard deviation of the QoE value Y over the bounded value domain [L; H].6 The maximum \(\sigma _{\max }\) is obtained when half of the users experience L and the other half H. In that case, the average QoE value is
$$\begin{aligned} {\text {E}}[Y]={(L+H)}/{2} \end{aligned}$$
and the maximum second order moment is
$$\begin{aligned} {\text {E}}[Y^2]={(L^2+H^2)}/{2} \, . \end{aligned}$$
Then, the maximum standard deviation is
$$\begin{aligned} \sigma _{\max }= \frac{1}{2}(H-L) \,. \end{aligned}$$
Note that the average QoE is different from the MOS, as users in the system experience different conditions, resulting in individual QoE values Q(x). When computing a MOS, all subjects experience the same test condition and the average over all user ratings is derived.

A new QoE fairness index F

We define the fairness index F as a linear transformation of the standard deviation \(\sigma \) of Y to [0; 1]. The observed \(\sigma \) is normalized by the maximal standard deviation \(\sigma _{\max }\) and measures the degree of unfairness. Hence, the difference between 1 (indicating perfect fairness) and \(\sigma /\sigma _{\max }\) is defined as the fairness index.
$$\begin{aligned} F = 1-\frac{\sigma }{\sigma _{\max }} = 1-\frac{2\sigma }{H-L}. \end{aligned}$$
We note that F can also be interpreted in another way. The QoE values are normalized to the QoE domain [0; 1],
$$ Y^*=\frac{Y-L}{H-L}. $$
Then, the standard deviation is
$$ \sigma ^*={\text {Std}}[Y^*]=\frac{\sigma }{H-L} $$
and the maximum standard deviation is \(\sigma ^*_{\max }=\tfrac{1}{2}\). Then, the fairness index follows as
$$\begin{aligned} F=1-2\sigma ^* = 1-\frac{2\sigma }{H-L} \, . \end{aligned}$$
Thus, (I) normalizing the standard deviation by the maximum possible standard deviation (\(1-\frac{\sigma }{\sigma _{\max }}\)) and (II) normalizing the QoE values to \(Y^*\) and using \(\sigma ^*\) and \(\sigma ^*_{\max }=\tfrac{1}{2}\) both result in the same \(F \in [0;1]\).
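Both formulations are straightforward to implement. A minimal sketch (plain Python; function names are ours) demonstrates their equivalence:

```python
import math

def qoe_fairness(values, L, H):
    """F = 1 - 2*sigma/(H - L), formulation (I)."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return 1 - 2 * sigma / (H - L)

def qoe_fairness_normalized(values, L, H):
    """Formulation (II): normalize to [0; 1] first, then F = 1 - 2*sigma*."""
    y_star = [(v - L) / (H - L) for v in values]
    mu = sum(y_star) / len(y_star)
    sigma_star = math.sqrt(sum((v - mu) ** 2 for v in y_star) / len(y_star))
    return 1 - 2 * sigma_star

# Example: half the users at MOS 3.5, half at MOS 4.5 on a 5-point scale.
y = [3.5] * 50 + [4.5] * 50
assert abs(qoe_fairness(y, 1, 5) - qoe_fairness_normalized(y, 1, 5)) < 1e-12
print(qoe_fairness(y, 1, 5))   # 0.75: sigma = 0.5 over a scale of width 4
```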
Fig. 10

Illustration of the new QoE fairness index F (solid hatch) on a typical 5-point scale with length \(H-L\) with \(\mu =3.5\) and \(\sigma = 0.7\)

Figure 10 illustrates the meaning of the fairness index F. A certain fraction of the QoE domain [L; H] is covered by the standard deviation \(\sigma \) around the average QoE \(\mu \) in both directions. The size of the interval \([ \mu - \sigma , \mu + \sigma ]\), i.e., \(2 \sigma \), reflects how unfairly the QoE values are distributed over the QoE domain. Accordingly, the fairness index F is the size of the complement of this interval, normalized by the size of the QoE rating domain \(H-L\), i.e., \(F = 1-2\sigma ^*\).


The QoE Fairness Index F is defined as the linear transformation \(F=1-\frac{2\sigma }{H-L}\) over the QoE Y of all users consuming a service. A system is absolutely QoE fair when all users receive the same QoE value.

Properties of the QoE fairness index F

The proposed fairness index, F, needs to fulfill the properties as introduced in “Desirable properties of a QoE fairness index”. In the following, the properties are revisited and analyzed with respect to F.
  1. (a)

    Population size independence—F is applicable to any number N of users in the system. The value of F is independent of N.

  2. (b)
    Scale and metric independence—The unit of measurement should not matter. In the context of QoE, the fairness measure is independent of L and H. To be more precise, any linear transformation \(T(Y)=aY+b\) of the QoE values Y does not change the value of the fairness index. For the transformed values we obtain
    $$\begin{aligned}F_{T(Y)}& = 1 - \frac{2{\text{Std}}[T(Y)]}{T(H) - T(L)} \\ &= 1 - \frac{2a{\text{Std}}[Y]}{(aH + b) - (aL + b)}\\& = 1 - \frac{2{\text{Std}}[Y]}{H - L} = F_Y.\end{aligned}$$
    Hence, F is scale independent (which is also indicated in Table 2).
  3. (c)

    Boundedness—F is bounded between 0 and 1.

  4. (d)

    Continuity—F takes continuous values in [0; 1].

  5. (e)

    Intuitive—F is intuitive. The maximum fairness \(F_{max}=1\) is obtained for the minimum standard deviation (\(\sigma = 0\)). The minimum fairness \(F_{min}=0\) is found when the standard deviation is at its maximum; this happens in the most unfair scenario (50% of users get L and 50% get H). Any fairness value F can also be interpreted as follows when considering normalized QoE values. (A) Half of the users get maximum QoE \(H=1\) and the other half gets QoE y. Then, \(F=y\). (B) Half of the users get minimum QoE \(L=0\) and the other half gets QoE y. Then, \(F=1-y\). The equations are provided in Table 5. Exemplary numerical values are provided in Table 3.

  6. (g)

    Deviation symmetric—F depends only on the absolute value of the deviation from the mean value, not on whether it is positive or negative. This is clear from the definition of F and visualized in Figs. 8 and 9.

  7. (f)

    QoE level independence—F is independent of the actual QoE level, whether the system achieves good or bad QoE. This is also clear from the definition of F, since F only depends on the deviation from the mean. Figure 6 visualizes the QoE level independence. A constant standard deviation \(\sigma \) is assumed while the average QoE \(\mu \) is varied. Since F is independent of \(\mu \), F is a constant value which only depends on \(\sigma \) (and the QoE value range [L; H]).

  8. (h)

    Valid for multi-applications—The index should reflect cross-application fairness (and not only fairness between users of the same application). This property is respected by F, provided that the QoE mapping function Q yields comparable QoE values. Furthermore, F can be applied to any application, as it is based on the deviation of the QoE values. The same is also true for J and the RSD. However, the literature also suggests other fairness metrics which are only defined for a single application and use case, e.g., Cofano et al. [6], as discussed in “Fairness from the user’s perspective”.

We observe that the definition proposed fulfills the properties outlined in “Desirable properties of a QoE fairness index”. The QoE fairness index F reflects the system perspective of fairness and quantifies the fairness of the entire system across all users.7
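The properties (b), (c), (f), and (g) can also be sanity-checked mechanically. A sketch under the same definition of F (the sample values are arbitrary):

```python
import math

def qoe_fairness(values, L, H):
    """F = 1 - 2*Std[Y]/(H - L) for QoE values on [L; H]."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return 1 - 2 * sigma / (H - L)

L, H = 1, 5
y = [2.0, 2.5, 3.0, 4.0]

# (b) Scale and metric independence: linear rescaling to [0; 1] keeps F.
y01 = [(v - L) / (H - L) for v in y]
assert abs(qoe_fairness(y, L, H) - qoe_fairness(y01, 0, 1)) < 1e-12

# (f) QoE level independence: shifting all values within the scale keeps F.
assert abs(qoe_fairness(y, L, H) - qoe_fairness([v + 1 for v in y], L, H)) < 1e-12

# (g) Deviation symmetry: flipping the scale (Y' = -Y + L + H) keeps F.
assert abs(qoe_fairness(y, L, H) - qoe_fairness([L + H - v for v in y], L, H)) < 1e-12

# (c)/(e) Boundedness and intuition: the extreme cases hit 1 and 0.
assert qoe_fairness([3.0] * 10, L, H) == 1.0
assert abs(qoe_fairness([L] * 5 + [H] * 5, L, H)) < 1e-12
```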
Table 3

Illustration of Jain’s J and QoE Fairness Index F for various scenarios and their distributions Y, \(L=1\), \(H=5\)

Distribution Y | J | F

All users experience 1 | 1.00 | 1.00
50% experience 1 and 50% experience 2 | 0.90 | 0.75
50% experience 1 and 50% experience 3 | 0.80 | 0.50
50% experience 1 and 50% experience 4 | 0.74 | 0.25
50% experience 1 and 50% experience 5 | 0.69 | 0.00
50% experience 2 and 50% experience 4 | 0.90 | 0.50
50% experience 2.9 and 50% experience 4.9 | 0.94 | 0.50
Uniform distribution \(Y\sim U(L;H)\) | 0.87 | 0.42


Qualitative comparison of fairness indexes

A summary of the comparison between F and Jain’s index J as well as the RSD is provided in Table 4. All three indexes are population size independent, valid for multi-applications, and return positive continuous values. While F fulfills all desirable properties, J and the RSD violate key properties: (g) ‘deviation symmetric’, see Fig. 8; (f) ‘QoE level independence’, see Eq. (17) or Fig. 7; and (b) ‘scale and metric independence’, see Fig. 5.

The list of properties states that the fairness index should be intuitive. This means, for instance, that the index should be at its minimum when the users’ experience (or at least their scores) is maximally different. This holds for the fairness index F, but not for J, whose minimum value \(J_{min}\) depends on the values of L and H even though J is bounded in [0; 1]. In this section, we show with real numerical examples of YouTube QoE that those limitations are severe. In particular, we show that J is not very sensitive and hardly discriminates fairness in different scenarios.
Table 4

Qualitative comparison of fairness indexes

Property | RSD c | Jain’s J | Fair. F

(a) Population size independent | yes | yes | yes
(b) Scale and metric independent | no | no | yes
(c) Boundedness | no | partly | yes
(d) Continuity | yes | yes | yes
(e) Intuitive | no | no | yes
(g) Deviation symmetric | no | no | yes
(f) QoE level independent | no | no | yes
(h) Valid for multi-applications | yes | yes | yes


As an example, HTTP video streaming and the impact of video stalls on video QoE is considered. Hoßfeld et al. [17] provide a QoE model for non-adaptive video streaming in terms of MOS on a 5-point scale for N stalls and a total stalling time T. In our numerical results, we assume an average stall duration of 1 s per stall, i.e., \(T=N\) (in seconds).
$$ Q(N)= 3.5 e^{-0.15 T - 0.19 N} + 1.5 = 3.5 e^{-0.34 N} + 1.5$$
The normalized QoE function \(Q^* \in [0;1]\) is
$$\begin{aligned} Q^*(N)&= \frac{Q(N)-L}{H-L} \end{aligned}$$
with \(H=5\) and \(L=1\).
In order to illustrate the differences between the QoE fairness indexes, we consider N as a random variable describing the number of stalls experienced by the users in the system. We assume that N follows a binomial distribution, i.e., \(N\sim \mathrm {Binomial}(K,p)\). In this example, we set \(K=10\) and vary p. Please note that the particular choice of distribution is not important for the illustration.8 The probability \(p_n\) that a user experiences n stalls is then
$$\begin{aligned} p_n = \left( {\begin{array}{c}K\\ n\end{array}}\right) p^n(1-p)^{K-n} \, . \end{aligned}$$
From Eq. (41), the key statistics can be derived.
$$\begin{aligned} {\text {E}}[N]= & {} \sum _{n=0}^K n p_n = K p\end{aligned}$$
$$\begin{aligned} {\text {Std}}[N]= & {} \sqrt{K p (1-p)}\end{aligned}$$
$$\begin{aligned} c_N= & {} \frac{\sqrt{1-p}}{\sqrt{Kp}}. \end{aligned}$$
QoS fairness with respect to the stalls can therefore be derived via Jain’s index.
$$\begin{aligned} J_N = \frac{1}{1+c_N^2} = \frac{Kp}{Kp+(1-p)} \end{aligned}$$
Due to the QoE mapping function Q, analytical expressions for the average QoE \(\mu ={\text {E}}[Q(N)]\) and the standard deviation of the QoE values \(\sigma ={\text {Std}}[Q(N)]\) are rather bulky and omitted here. It has to be noted that
$$\begin{aligned} {\text {E}}[Q(N)] = \sum _{n=0}^K Q(n) p_n \ne Q({\text {E}}[N]) \, . \end{aligned}$$
We numerically compute the QoE fairness index F and Jain’s index, both for Q(N) on the 5-point scale and for the normalized QoE values \(Q^*(N)\). We refer to the latter two as \(J_5=J_{Q(N)}\) and \(J_1=J_{Q^*(N)}\).
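To make this comparison concrete, the computation can be sketched as follows (a minimal sketch in Python; the function and variable names are ours, while the model parameters are those stated above):

```python
import math

H, L = 5.0, 1.0  # bounds of the 5-point QoE scale
K = 10           # binomial parameter (maximum number of stalls)

def Q(n):
    """QoE model for n stalls, assuming a 1 s average stall duration."""
    return 3.5 * math.exp(-0.34 * n) + 1.5

def binom_pmf(n, K, p):
    """Binomial pmf P(N = n) for N ~ Binomial(K, p)."""
    return math.comb(K, n) * p**n * (1 - p)**(K - n)

def jain(mean, var):
    """Jain's index J = 1 / (1 + c^2) with c = std / mean."""
    return mean**2 / (mean**2 + var)

def fairness_indices(p):
    """Return (F, J_5, J_1) for N ~ Binomial(K, p)."""
    pmf = [binom_pmf(n, K, p) for n in range(K + 1)]
    qs = [Q(n) for n in range(K + 1)]
    mu = sum(q * w for q, w in zip(qs, pmf))             # E[Q(N)]
    var = sum((q - mu)**2 * w for q, w in zip(qs, pmf))  # Var[Q(N)]
    F = 1 - 2 * math.sqrt(var) / (H - L)                 # QoE fairness index
    J5 = jain(mu, var)                                   # Jain on the [1; 5] scale
    J1 = jain((mu - L) / (H - L), var / (H - L)**2)      # Jain on the [0; 1] scale
    return F, J5, J1
```

For instance, at \(p=0.15\) (i.e., \({\text {E}}[N]=1.5\) stalls), both \(J_5\) and \(J_1\) stay above 0.9 while F drops well below, illustrating the sensitivity gap discussed below; \(J_5\) and \(J_1\) also differ from each other, illustrating the scale dependence.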
Fig. 11

Qualitative comparison between J and F. We assume a binomial distribution of the number N of stalls and observe the resulting first- and second-order statistics of the QoE values

Fig. 12

Qualitative comparison between J and F. Three key observations are found. (1) QoS Fairness is different from QoE Fairness—independent of the QoE fairness index. (2) Jain’s index for QoE fairness depends on the rating scale ([1; 5] vs. [0; 1]). Finally, (3) J is less sensitive than F

Figure 11 shows the numerical values for the average QoE, the standard deviation and the RSD of the QoE values depending on the average number of stalls \({\text {E}}[N]=Kp\). Based on \({\text {E}}[N]\), the parameter \(p={\text {E}}[N]/K\) of the binomial distribution is derived. We clearly observe the exponential decay of the QoE model Q when considering the average QoE. The standard deviation and the RSD, however, show a different behavior.

The main observations can be seen in Fig. 12. Firstly, the QoS fairness \(J_N\) is different from the QoE fairness quantified by F and J. In particular, the QoS fairness approaches zero (i.e., a completely unfair system) when the average number of stalls approaches zero. We further see the sensitivity of Jain’s fairness index for values close to zero. QoE fairness exhibits a different behavior. In case of no stalling, all users get the best QoE and the variance of the QoE values vanishes. Hence, the QoE fairness indexes are 1. With an increasing number of stalls in the system, the standard deviation increases and hence fairness decreases until a certain threshold. Beyond that threshold, the variance decreases again as more users suffer. Hence, the QoE fairness increases again. In contrast, QoS fairness shows a monotonic behavior here.

Secondly, Jain’s fairness index depends on the scale: the curves differ when using (a) the QoE function Q on the 5-point scale (\(J_5\)) and (b) the normalized QoE values \(Q^*\) (\(J_1\)). Thirdly, Jain’s fairness index applied to QoE values is less sensitive than the fairness index F and does not allow one to clearly discriminate fairness issues. From \(J_5\) or \(J_1\), one might conclude that the system is more or less fair. However, F clearly shows that certain scenarios (around \({\text {E}}[N]=1.5\) stalls) lead to unfairness.

Figure 13 summarizes those issues in a simple scenario. Half of the users experience the worst quality (\(L=0\)) and the other half experience QoE V. Jain’s index is a constant value \(J=1/2\) independent of V. Thus, even in the scenario where all users obtain the same value \(V=0\), J does not indicate that the system is fair. In the unfairest scenario, where half of the users get the best QoE \(V=100\), the same fairness index is obtained as in the fair scenario. Hence, J has no intuitive meaning when applied to QoE to express fairness. Moreover, if the rating scale is shifted by 100, i.e., \(L=100\) and \(H=200\), J is no longer constant, but always yields values \(J \ge 0.9\) (cf. dashed line). Again, the rating scale dependence of J leads to severe differences. In contrast, the proposed QoE fairness index F clearly distinguishes the different scenarios reflected by V. Independent of the underlying QoE scale, the unfairest scenario (\(V=100\)) leads to \(F=0\), while the fairest scenario (\(V=0\)) leads to \(F=1\).
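This degenerate two-group scenario is easy to verify numerically. The following sketch (helper names are ours) reproduces the constant \(J=1/2\), the shifted-scale value \(J=0.9\), and the behavior of F:

```python
import math
import statistics as st

def jain(values):
    """Jain's fairness index; undefined (NaN) if all values are zero."""
    s2 = sum(v * v for v in values)
    if s2 == 0:
        return math.nan
    return sum(values) ** 2 / (len(values) * s2)

def fairness_F(values, L=0.0, H=100.0):
    """Proposed QoE fairness index F = 1 - 2*sigma / (H - L)."""
    return 1 - 2 * st.pstdev(values) / (H - L)

def scenario(V, shift=0.0):
    """50 users at the lower bound, 50 users at QoE level V (optionally shifted)."""
    return [shift] * 50 + [shift + V] * 50

# J is stuck at 1/2 for every V > 0, while F falls from 1 (V=0) to 0 (V=100);
# shifting the scale to [100; 200] changes J (to 0.9 at V=100) but not F.
```

Note that for \(V=0\) all values are zero and Jain’s index is formally undefined (0/0), which is another symptom of its lack of intuitive meaning on bounded QoE scales.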
Fig. 13

Comparison of J and F. A simple scenario is considered with a continuous rating scale from [0; 100]. 50% of users experience worst quality (\(L=0\)) and the other 50% experience QoE V. The dashed line indicates when the QoE scale is shifted to [100; 200]

Application of the QoE fairness index

The goal of this section is to show how the proposed QoE fairness index F can be applied. Through numerical examples, we show that a QoS-fair system can be QoE-unfair (case study: Web QoE for M/D/1-PS). In addition, we show how to design a system in which a provider may trade off fairness against overall performance (case study: HTTP adaptive streaming QoE).

Case study: web browsing QoE in an M/D/1-processor sharing system

The analytical M/D/1 processor sharing (PS) system is well understood and describes a perfectly QoS-fair system which is nevertheless QoE-unfair. The literature has shown that the processor sharing model captures well the characteristics of a system with a single shared bottleneck, see the survey by Roberts [39].9

In the processor sharing model, the bandwidth C of the bottleneck (i.e., the QoS resource) is perfectly fairly shared among the n users in the system, i.e., each user receives C / n. The QoS resource is instantaneously adjusted when the number of users changes. Thus, perfect QoS fairness is considered here on an instantaneous time scale, as depicted in Table 1. The M/D/1-PS system also leads to proportional (QoS) fairness as discussed in “Background and related work: notion of fairness and its applications”. Proportional fairness [23] means (in the case of deterministic service requirements) that the expected service time \({\text {E}}[T]\) is proportional to the service requirement b.
$$\begin{aligned} {\text {E}}[T] \propto b \end{aligned}$$
In the context of web browsing QoE, we have the following M/D/1-PS model. A web server is considered where all users share the server’s capacity C equally, i.e., processor sharing model. All users download the same web page of constant size b. User web page requests are modeled with a Poisson process and request rate \(\lambda \). Thus, the shared environment is modeled as M/D/1-PS queueing system. The offered load \(\rho \) of the system is \(\rho =\lambda \frac{b}{C}=\lambda b^*\).
The expected value \(\mu _T\) of the download time T is derived by Ott [33] and depends on the file size b,
$$\begin{aligned} \mu _T = \frac{b}{1-\rho }. \end{aligned}$$
Thus, M/D/1-PS is proportional fair.
For M/D/1-PS, the RSD \(c_T\) of the download time T depends only on the offered load \(\rho \) and is derived by Shalmon [41].
$$\begin{aligned} c_T^2 = \frac{1}{\rho ^2}[2-\rho ^2-2(1-\rho )e^\rho ] \approx \rho \end{aligned}$$
Thus, we can directly compute Jain’s fairness index \(J_T\) of the download time.
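This computation follows directly from the formulas above; a minimal sketch (function names are ours). Consistent with the discussion below, \(J_T\) decreases with the load and approaches 0.5 as \(\rho \rightarrow 1\):

```python
import math

def c_T_squared(rho):
    """Squared relative standard deviation of M/D/1-PS download times (Shalmon [41])."""
    return (2 - rho**2 - 2 * (1 - rho) * math.exp(rho)) / rho**2

def jain_T(rho):
    """Jain's fairness index of the download times, J_T = 1 / (1 + c_T^2)."""
    return 1 / (1 + c_T_squared(rho))
```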
Reichl et al. [38] observe a logarithmic relationship between waiting time and QoE which is formulated as the WQL hypothesis by Egger et al. [10]. The QoE of web browsing is derived as a logarithmic function of the page load time t.
$$\begin{aligned} Q(t) = -a \log (t) + b. \end{aligned}$$
The QoE domain ranges from \(L=1\) to \(H=5\) and we assume \(a=1\) and \(b=4\) [10] (note that a and b here are model parameters; b is not to be confused with the web page size above).
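As an illustrative sketch only, the logarithmic model can be evaluated as below. We assume the natural logarithm and clip the output to the scale bounds [1; 5]; both are our assumptions, as the text does not state them explicitly:

```python
import math

L_scale, H_scale = 1.0, 5.0  # bounds of the QoE rating scale
a, b = 1.0, 4.0              # model parameters from [10]

def web_qoe(t):
    """Logarithmic Web QoE model Q(t) = -a*log(t) + b, clipped to [1; 5].
    Natural logarithm and clipping are assumptions of this sketch."""
    return min(H_scale, max(L_scale, -a * math.log(t) + b))
```

With these parameters, a page load time of 1 s maps to a MOS of 4, while times below roughly 0.37 s saturate at 5 and times beyond roughly 20 s saturate at 1.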
The page load times T in the M/D/1-PS system are a random variable. For computing QoE fairness F, we need to derive the standard deviation \({\text {Std}}[Q(T)]\) of the QoE values Q(T). Further, Jain’s fairness index is applied to Q(T).
Fig. 14

Case study ‘Web QoE’: Overall QoE and standard deviation \(\sigma _Y\) of the system depending on the offered load

Fig. 15

Case study ‘Web QoE’: The fairness indexes are compared for different load situations. QoS fairness in terms of Jain’s fairness index of sojourn times is different from QoE fairness

Figure 14 depicts the QoE behavior of the system depending on the offered load. With increasing load, the download times increase and the QoE suffers (left Y-axis, see quantiles and average QoE values). The standard deviation of the QoE values shows a non-linear behavior. For very low loads, users often arrive at an empty system and everyone experiences the same QoS and QoE. Due to the random arrival of users, some users share the capacity with others, which happens more often with increasing load. At a certain load, it is well known that the download times increase exponentially [33], resulting in smaller QoE differences across users (but at a low overall QoE). Note that when approaching the overload situation (\(\rho \rightarrow 1\)), all users experience the same poor quality (\(\lim _{\rho \rightarrow 1} Q(t)=L\)), but the system is perfectly fair (\(\lim _{\rho \rightarrow 1} F=1\)), albeit terribly under-performing.

Figure 15 illustrates the different fairness indexes. Jain’s fairness index J leads to different results and conclusions than F does. In case of low load (\(\rho <0.4\)), J suggests a perfectly QoE-fair system. However, when looking at the standard deviations of the QoE in Fig. 14, we already see stronger discrepancies between users. J does not capture this properly, since the average QoE is high for this load. We further see again that J is not very sensitive: its minimal value is about 0.8. In contrast, our proposed fairness metric properly reflects the variance in QoE. F is more sensitive and identifies fairness issues even in the low load scenarios. F drops close to 0.5, which properly reflects that the standard deviation of the QoE reaches \(0.5\, \sigma _{\max }=1\). We further observe a strong discrepancy between QoS fairness (expressed by Jain’s fairness index \(J_T\) of the download times T, which converges to 0.5) and QoE fairness.

Thus, the fairness index F makes it possible to clearly identify under which conditions and in which scenarios fairness issues arise.

Case study: HTTP adaptive streaming QoE

As a second case study, HTTP adaptive streaming (HAS) is considered to compare different approaches with respect to QoE fairness, but also with respect to overall QoE. When a provider has to decide which mechanism to use in practice, the (possible) trade-off between QoE fairness and overall QoE may be considered.

HAS allows the video player to dynamically adjust the video bitrate according to the current network situation. Thereby, HAS tries to avoid video stalling at the cost of a reduced video bitrate and lower video quality. This is beneficial because, from a QoE perspective, stalling is the dominating QoE influence factor. For the interested reader, Seufert et al. [40] provide a comprehensive survey on HAS QoE and HAS technology.

However, when multiple HAS clients compete for shared network resources, they may negatively influence each other in terms of QoE [37]. Thus, a QoE fairness issue may arise due to the HAS adaptation algorithm. To this end, several strategies can be found in the literature which try to optimize QoE while maintaining QoE fairness among users.
Fig. 16

Case study ‘Video streaming QoE’: Different mechanisms from literature are compared in terms of QoE and fairness. The results by Petrangeli et al. [37] are reevaluated to quantify F

Fig. 17

Case study ‘Video streaming QoE’: For the decision which algorithm to use in practice, a value function \((1-\theta ) Q^* + \theta \cdot F\) is defined, which quantifies the relevance \(\theta \) of fairness for the decision

In particular, Petrangeli et al. [37] developed a QoE-driven rate adaptation heuristic (’FINEAS’) and evaluated different mechanisms in terms of QoE.10 In their setting, N users consume an HTTP adaptive streaming service which uses one concrete HAS mechanism. In the system, network bandwidth is the scarce resource for which the users (to be more precise: the HAS mechanisms) compete. The goal of the study by Petrangeli et al. [37] is to identify the HAS mechanism which leads to the best overall QoE and a fair system. In the paper, the average QoE and the standard deviation of the QoE values over the N users are reported for the HAS mechanisms. Since the results rely on simulations repeated several times, confidence intervals are also specified for the average and the standard deviation. With the confidence interval of the standard deviation, \([\sigma _1;\sigma _2]\), we may also derive a confidence interval for the fairness index F for a given significance level \(\alpha \). Let us consider the probability that the real standard deviation \(\sigma \) lies within the bounds of the confidence interval.
$$ P\left(\sigma _1 \le \sigma \le \sigma _2\right) = 1-\alpha.$$
This can be transformed as follows; note that the inequality signs are reversed, since F decreases with \(\sigma \).
$$ P\left(1-\frac{2\sigma _1}{H-L} \ge F \ge 1-\frac{2\sigma _2}{H-L}\right)= 1-\alpha $$
$$\begin{aligned} P(F_1 \ge F \ge F_2)= & {} 1-\alpha . \end{aligned}$$
This leads to the confidence interval for F.
$$\begin{aligned} \left[ 1-\frac{2\sigma _2}{H-L};\; 1-\frac{2\sigma _1}{H-L}\right] . \end{aligned}$$
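Since F is a monotonically decreasing function of \(\sigma \), the interval mapping is a one-liner; a sketch (the helper name is ours):

```python
def f_confidence_interval(sigma1, sigma2, L=1.0, H=5.0):
    """Map a confidence interval [sigma1; sigma2] for the QoE standard deviation
    to a confidence interval [F_2; F_1] for the fairness index F.
    Since F = 1 - 2*sigma/(H - L) is decreasing in sigma, the bounds swap."""
    return (1 - 2 * sigma2 / (H - L), 1 - 2 * sigma1 / (H - L))
```

For example, on a 5-point scale a confidence interval \([\sigma _1;\sigma _2]=[0.4;0.6]\) maps to \([F_2;F_1]=[0.7;0.8]\).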
Figure 16 shows the numerical results of the strategies concerning the average QoE and the fairness index F. It can be clearly seen that the FINEAS strategy outperforms the other strategies, as it is better in terms of both average QoE and fairness. However, it remains unclear whether, for example, the HAS adaptation strategy MSS is better than FESTIVE: MSS leads to higher average QoE but lower fairness.

A provider needs to decide how relevant fairness is. Thus, there may be a trade-off between fairness and QoE. In Fig. 17 we sketch this more clearly. A provider may use a weighted sum of the average QoE and the fairness, depending on a parameter \(\theta \) specifying the relevance of fairness. Thus, a value function is defined, for example, \(v=(1-\theta )Q^*+\theta F\). Thereby, we use the normalized QoE values \(Q^*\) so that both the fairness index and the average QoE lie in the interval [0; 1].11 This allows for an intuitive meaning of the relevance parameter. From Fig. 17, we observe that the FESTIVE approach may be preferred over MSS if fairness is at least as important as average QoE (\(\theta \ge 0.5\)). We would like to emphasize that other fairness indexes (Jain or RSD) lead to other values and change the outcome of an operator’s decision. Figure 18 shows (again) that Jain’s index is not able to discriminate fairness properly (cf. Fig. 12), here across mechanisms, and that, like the RSD, it suffers from being scale and metric dependent. Since J always leads to high fairness values, the value function would not consider fairness appropriately and would mainly put weight on overall QoE. Figure 19 shows the different outcomes. In case of little relevance of fairness (\(\theta =0.1\)), the fairness index has only a minor impact, as desired and defined. For higher relevance, it can be seen that the order of mechanisms changes between F and J, i.e., leading to different conclusions for operators.
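A sketch of such a decision rule follows, using hypothetical \((Q^*, F)\) pairs chosen only to illustrate the rank flip between MSS and FESTIVE; these are not the measured values from Petrangeli et al. [37]:

```python
def value(q_star, f, theta):
    """Value function v = (1 - theta) * Q* + theta * F."""
    return (1 - theta) * q_star + theta * f

# Hypothetical (Q*, F) pairs, chosen only to illustrate the rank flip;
# NOT the measured values from Petrangeli et al. [37].
mechanisms = {"MSS": (0.70, 0.60), "FESTIVE": (0.65, 0.75)}

def ranking(theta):
    """Mechanisms ordered from most to least preferred for a given theta."""
    return sorted(mechanisms, key=lambda m: value(*mechanisms[m], theta), reverse=True)
```

With these illustrative numbers, MSS is preferred for a small fairness relevance (e.g., \(\theta =0.1\)), while FESTIVE is preferred once fairness is weighted as strongly as QoE (\(\theta =0.5\)).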

In practice, this value function may be more complex and also include other aspects such as costs. This is, however, out of scope for this article. The basic intention is to highlight that, for system design choices, the overall QoE and the fairness need to be evaluated and considered together. Accordingly, the relevance of fairness needs to be decided.
Fig. 18

Case study ‘Video streaming QoE’: Different fairness indexes lead to different values. We observe that Jain’s index is not very sensitive, leading to high fairness values

Fig. 19

Case study ‘Video streaming QoE’: Different fairness indexes lead to different values and conclusions for operators. We consider different fairness relevance factors \(\theta \). FINEAS and Miller always lead to the best and worst results, respectively. However, the order of the others changes depending on F and J

Conclusions and discussion

The motivation for defining a fairness index comes primarily from the operator’s perspective, as QoE fairness measures can be used to drive resource allocation mechanisms aimed at maximizing the satisfied customer base. The applications of a QoE fairness metric are manifold, ranging from QoE management mechanisms and system optimization to benchmarking different resource management techniques. We have introduced a definition for a QoE fairness index and shown that QoE fairness does not necessarily follow from QoS fairness, due to the nature of the QoS-to-QoE mappings of most services. We argue that commonly used QoS fairness metrics such as Jain’s fairness index are not suitable for quantifying QoE fairness, despite being used for that purpose in the literature. Our proposed metric fulfills a number of desirable properties and is intuitively simple to understand. We illustrate its use with an example use case for Web QoE modeled as a function of page loading times. Another use case is the selection of an HTTP adaptive streaming mechanism, which may be guided by the overall video QoE as well as the QoE fairness. QoE fairness says nothing about how good the system is and thus needs to be considered together with, and most likely subordinated to, the achieved overall QoE in system design. We emphasize that the proposed QoE fairness index is just a means for benchmarking or designing systems, and may be used as an extra tool for operators to make better informed decisions concerning QoE management.

Remark The proposed fairness index is a generic concept which may also be applied to other domains than QoE (like QoS) or for different purposes. It can be applied to any data on an interval scale with a clearly defined bounded value range.

QoE fairness from the operator’s perspective

Throughout the paper, we have considered fairness from the operator’s perspective. In fact, in most of today’s common application usage scenarios, users are not necessarily aware of whether they receive a fair QoE. The question arises, then: why should operators care about fairness?

We posit that QoE fairness can help operators obtain better overall customer satisfaction, if used, e.g., as a secondary service optimization objective. For instance, when optimizing a service for average QoE, it is common to find a family of solutions with similar results QoE-wise, but varying fairness levels. In such cases, choosing the fairer solution (with some additional constraints in terms of acceptance thresholds) will often result in more satisfied users, and lower churn levels.

These optimizations become more important in multi-application scenarios, where the relative value of each QoS metric (e.g., throughput, latency) can vary wildly in terms of QoE. Assuming that the QoE models produce comparable results, QoE fairness may imply very different resource management policies than just trying to give each user a fair share of the network’s capacity. This type of application, however, remains contingent on the development of QoE models whose results can be compared across applications. This is still an open problem.

For some application scenarios, where social interaction happens in parallel to the service delivery (e.g., real-time commenting on social networks for a live stream), or where the use of different devices can have an impact on both resource requirements and perceived quality (as exemplified in “From QoS to QoE management”), fairness can be more relevant for individual applications, as an unequal share of resources leading to unfairness may be observed by the users themselves, and QoS-fair approaches can result in sub-optimal QoE.

Another possible interpretation of QoE fairness would be in terms of societal welfare, with the understanding that a more fair distribution of QoE would make for a larger number of satisfied (or even happy) users. This may also be relevant in the case of non-commercial scenarios, such as city- or coop-owned networks.

Further work

The notion of QoE fairness defined in this paper leads us to some interesting questions, which may be the basis of future research endeavours.
  • QoE Fairness-aware service optimization: whereby QoE management mechanisms consider fairness as an optimization target (e.g., in a two-step optimization approach, optimizing first for overall QoE, and then for fairness).

  • The comparability of QoE results for different applications: QoE fairness is most interesting for operators when QoE can be compared across services. This is not (necessarily) the case with current QoE models, and further studies are needed in this direction. This research line is also relevant to service pricing and QoE-aware SLA definitions, as addressed for example by Varela et al. [44].

  • The concept of QoE fairness can be more relevant for long-term contexts than for short-term ones. A provider may need to ensure that its customers are equally served over one month, while short-term variances may be accepted by the users. In a rough sense, some providers are implementing this with reduced data speeds at the end of a month when a certain data volume is consumed. This may lead to a certain kind of long-term fairness, see also “Notion of fairness in shared environments”. However, most QoE models only produce short-term QoE estimates, and are not suited for other usages.

  • Related to the item above, service pricing is also relevant to the notion of QoE fairness (as in, is the utility perceived by the user commensurate to the price they pay?), yet it is not really considered in QoE models. On this point, price and fairness can be closely related. Our proposed metric is price-independent as long as the QoE models consider price (and consequently users’ expectations), but this is not usually the case.


Footnotes

1. This assumes a mechanism for determining the type of device each user is using, which could be implemented, e.g., via suitable APIs, or by monitoring whether users are making requests to the mobile version of the HAS service.

2. I.e., given two models for different services using the same scale, it is not clear whether equal output values from them correspond to equal QoE for users of each service. In the case of different scales, this is even further complicated.

3. It is \(c_u \le c_{max}\), as \(H \ge L\).

4. We have already seen in “Relative standard deviation (RSD)” that the RSD violates several desirable properties. Since J and RSD are inversely proportional, cf. Eq. (17), this implies that J violates some of those properties too. In “Issues with Jain’s index for QoE fairness”, we visualize those violations for J to make the reader aware of how severe they are.

5. The QoE level dependence of J is visible in Figs. 6 and 7 as well as from Eq. (17).

6. The maximum standard deviation has already been derived in Eq. (7) in “Relative standard deviation (RSD)”. For readability and due to its importance for deriving F, the maximum standard deviation is repeated here.

7. F must not be confused with the standard deviation of user ratings in a subjective study (for a system under test), i.e., the user rating diversity.

8. Realistic scenarios and measurement traces are provided in “Application of the QoE fairness index” to demonstrate the application of the QoE fairness index F.

9. In practice, the fair share of resources is not perfectly achieved due to imperfections in the transport protocol. In Hoßfeld et al. [15], measurement results are provided for the same system on web QoE which show the same characteristics in terms of fairness. In this article, we focus on the analytical model as it is well known and understood and generalizes the measurement results.

10. For details about the HAS algorithms (FINEAS, Q-L., FESTIVE, MILLER, MSS), the interested reader is referred to Petrangeli et al. [37]. For our purposes, the detailed description is not relevant.

11. Instead of defining a relevance parameter for fairness, one might also define weights for each component. This may also overcome the normalization of the QoE values.



This work emerged from the First International Symposium on Quality of Life (QoL-2016) in Zagreb, Croatia, in March 2016. This work was partly funded by Deutsche Forschungsgemeinschaft (DFG) under Grants HO 4770/1-2 (DFG OekoNet); the Croatian Science Foundation, project no. UIP-2014-09-5605 (Q-MANIC); the NTNU QUAM Research Lab (Quantitative modelling of dependability and performance). The authors would like to thank Stefano Petrangeli and Steven Latre for providing the data [37] as used in “Case study: HTTP adaptive streaming QoE”.

Compliance with ethical standards

Conflict of interest

On behalf of all authors, the corresponding author, Tobias Hoßfeld states that there is no conflict of interest.


  1. 1.
    Avi-Itzhak B, Levy H, Raz D (2008) Quantifying fairness in queuing systems: Principles, approaches, and applicability. Prob Eng Inf Sci 22(4):495–517MathSciNetCrossRefGoogle Scholar
  2. 2.
    Basu D (1955) On statistics independent of a complete sufficient statistic. Sankhyā Indian J Stat (1933–1960) 15(4):377–380MathSciNetzbMATHGoogle Scholar
  3. 3.
    Bertsimas D, Farias VF, Trichakis N (2012) On the efficiency-fairness trade-off. Manag Sci 58(12):2234–2250CrossRefGoogle Scholar
  4. 4.
    Bonald T, Proutiere A (2003) Insensitive bandwidth sharing in data networks. Queueing Syst 44(1):69–100MathSciNetCrossRefGoogle Scholar
  5. 5.
    Briscoe B (2007) Flow rate fairness: dismantling a religion. ACM SIGCOMM Comput Commun Rev 37(2):63–74CrossRefGoogle Scholar
  6. 6.
    Cofano G et al (2016) Design and experimental evaluation of network-assisted strategies for HTTP adaptive streaming. In: Proceedings of the 7th international conference on multimedia systems, ACM, p 3Google Scholar
  7. 7.
    De Cicco L et al (2013) Elastic: a client-side controller for dynamic adaptive streaming over http (dash). In: 20th International packet video workshop, IEEE, pp 1–8Google Scholar
  8. 8.
    Demers A, Keshav S, Shenker S (1989) Analysis and simulation of a fair queueing algorithm. ACM SIGCOMM Comput Commun Rev 19(4):1–12CrossRefGoogle Scholar
  9. 9.
    Deng J, Han YS, Liang B (2009) Fairness index based on variational distance. In: GLOBECOM, pp 1–6Google Scholar
  10. 10.
    Egger S et al (2012) “Time is bandwidth”? Narrowing the gap between subjective time perception and quality of experience. In: IEEE int. conference on communications (ICC), Ottawa, Canada, June 2012Google Scholar
  11. 11.
    ETSI TS 102 250-1 V2. 2.1. (2011) QoS aspects for popular services in mobile networks. In: Speech and multimedia transmission quality (STQ) 2011-04Google Scholar
  12. 12.
    Gabale V et al (2012) InSite: QoE-aware video delivery from cloud data centers. In: Quality of service (IWQoS), 2012 IEEE 20th international workshop on, IEEE, pp 1–9Google Scholar
  13. 13.
    Georgopoulos P et al (2013) Towards network-wide QoE fairness using openflow-assisted adaptive video streaming. In: ACM SIGCOMM workshop on Future human-centric multimedia networking, Hong Kong, Aug. 2013Google Scholar
  14. 14.
    Hoßfeld T, Heegaard PE, Varela M (2015) QoE beyond the MOS: added value using quantiles and distributions. In: Seventh int. workshop on quality of multimedia experience (QoMEX), Costa Navarino, Greece, June 2015Google Scholar
  15. 15.
    Hoßfeld T et al (2017) Definition of QoE fairness in shared systems. IEEE Commun Lett 21(1):184–187CrossRefGoogle Scholar
  16. 16.
    Hoßfeld T et al (2015) Identifying QoE optimal adaptation of HTTP adaptive streaming based on subjective studies. Comput Netw 81:320–332CrossRefGoogle Scholar
  17. 17.
    Hoßfeld T et al (2013) Internet video delivery in You-Tube: from traffic measurements to quality of experience. In: Biersack E, Callegari C, Matijasevic M (eds) Data traffic monitoring and analysis: from measurement, classification and anomaly detection to quality of experience. Computer communications and networks series. Springer, Berlin, HeidelbergCrossRefGoogle Scholar
  18. 18.
    Hoßfeld T et al (2017) No silver bullet: QoE metrics, QoE fairness, and user diversity in the context of QoE management. In: 2017 Ninth international conference on quality of multimedia experience (QoMEX), IEEE, pp 1–6Google Scholar
  19. 19.
    Huynh-Thu Q et al (2011) Study of rating scales for subjective quality assessment of high-definition video. IEEE Trans Broadcast 57(1):1–14CrossRefGoogle Scholar
  20. 20.
    Jain R, Chiu D-M, Hawe WR (1984) A quantitative measure of fairness and discrimination for resource allocation in shared computer system, vol 38. Eastern Research Laboratory, Digital Equipment Corporation, HudsonGoogle Scholar
  21. 21.
    Jiang J, Sekar V, Zhang H (2014) Improving fairness, efficiency, and stability in http-based adaptive video streaming with festive. IEEE/ACM Trans Netw 22(1):326–340CrossRefGoogle Scholar
  22. 22.
    Kelly F (1997) Charging and rate control for elastic traffic. Eur Trans Telecommun 8(1) (1997)CrossRefGoogle Scholar
  23. 23.
    Kelly FP, Maulloo AK, Tan DKH (1998) Rate control for communication networks: shadow prices, proportional fairness and stability. J Oper Res Soc 49(3):237–252CrossRefGoogle Scholar
  24. 24.
    Lan T et al (2010) An axiomatic theory of fairness in network resource allocation. In: Proceedings of IEEE INFOCOM, San Diego, CA, pp 1–9.
  25. 25.
    Le Boudec J-Y (2005) Rate adaptation, congestion control and fairness: a tutorial.
  26. 26.
    Le Callet P Möller S, Perkis A et al (2013) Qualinet white paper on definitions of quality of experience. In: European network on quality of experience in multimedia systems and services (COST Action IC 1003), March 2013Google Scholar
  27. 27.
    Mansy A, Fayed M, Ammar M (2015) Network-layer fairness for adaptive video streams. In: IFIP Networking conference (IFIP networking). Toulouse, France, May 2015Google Scholar
28. Mo J, Walrand J (2000) Fair end-to-end window-based congestion control. IEEE/ACM Trans Netw 8(5):556–567
29. Möller S (2012) Assessment and prediction of speech quality in telecommunications. Springer Science+Business Media, Dordrecht
30. Möller S et al (2011) Speech quality estimation: models and trends. IEEE Signal Process Mag 28(6):18–28
31. Mu M et al (2015) User-level fairness delivered: network resource allocation for adaptive video streaming. In: IEEE 23rd International Symposium on Quality of Service (IWQoS), IEEE, pp 85–94
32. Norman G (2010) Likert scales, levels of measurement and the laws of statistics. Adv Health Sci Educ 15(5):625–632
33. Ott TJ (1984) The sojourn-time distribution in the M/G/1 queue with processor sharing. J Appl Prob 21(2):360–378
34. Ozugur T et al (1998) Balanced media access methods for wireless networks. In: Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking, ACM, pp 21–32
35. Palyi PL, Racz S, Nadas S (2008) Fairness-optimal initial shaping rate for HSDPA transport network congestion control. In: 11th IEEE Singapore International Conference on Communication Systems (ICCS 2008), IEEE, pp 1415–1421
36. Parekh AK, Gallager RG (1993) A generalized processor sharing approach to flow control in integrated services networks: the single-node case. IEEE/ACM Trans Netw 1(3):344–357
37. Petrangeli S et al (2016) QoE-driven rate adaptation heuristic for fair adaptive video streaming. ACM Trans Multimedia Comput Commun Appl 12(2):28
38. Reichl P et al (2010) The logarithmic nature of QoE and the role of the Weber–Fechner law in QoE assessment. In: IEEE International Conference on Communications (ICC), Cape Town, South Africa, May 2010
39. Roberts JW (2004) A survey on statistical bandwidth sharing. Comput Netw 45(3):319–332
40. Seufert M et al (2015) A survey on quality of experience of HTTP adaptive streaming. IEEE Commun Surv Tutor 17(1):469–492
41. Shalmon M (2007) Explicit formulas for the variance of conditioned sojourn times in M/D/1-PS. Oper Res Lett 35(4):463–466
42. Taboada I et al (2013) QoE-aware optimization of multimedia flow scheduling. Comput Commun 36(15):1629–1638
43. Tominaga T et al (2010) Performance comparisons of subjective quality assessment methods for mobile video. In: Second International Workshop on Quality of Multimedia Experience (QoMEX), June 2010, pp 82–87
44. Varela M et al (2015) Experience level agreements (ELA): the challenges of selling QoE to the user. In: Proc. of the ICC 2015 Workshops, IEEE, pp 1741–1746
45. Villa B, Heegaard PE (2012) Improving perceived fairness and QoE for adaptive video streams. In: Proc. ICNS 2012, pp 149–158
46. Wierman A (2007) Fairness and classifications. ACM SIGMETRICS Perform Eval Rev 34(4):4–12

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Modeling of Adaptive Systems, University of Duisburg-Essen, Essen, Germany
  2. Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
  3. Department of Information Security and Communication Technology, NTNU, Norwegian University of Science and Technology, Trondheim, Norway
  4. Oulu, Finland
  5. Chair of Communication Networks, University of Würzburg, Würzburg, Germany