1 Introduction

Long Term Evolution (LTE) was introduced by the 3GPP (www.3gpp.org) as the next step in the evolution of mobile cellular networks and aims to meet the rising demand for data transfers. The success of LTE will depend on the provision of new and attractive multimedia services to its users, such as video (video on demand or television), live interactions (telephony or conferencing) or large-scale social-multimedia services (e.g. YouTube). Nonetheless, multimedia content has much higher bit rates than other content types, which will lead to an increase in demand for network resources. Furthermore, it is expected that the overall mobile data traffic will increase significantly in the next few years [7]. Since multimedia services may be sensitive to delay (e.g. voice-over-IP), loss (e.g. video streaming), or both (e.g. video conferencing), a proper quality of service (QoS) needs to be guaranteed.

LTE may operate as a pure packet-switched system; consequently, all traffic, including delay-sensitive services, needs to be scheduled. Therefore the scheduling mechanism, implemented in the network base station (BS) and distributing radio resources among users, should be considered a significant part of the system design. In order to support downlink data services with high transmission rates, a BS transmits data using shared channels in which the content belonging to many users is multiplexed in the time and frequency domains. As a result, appropriate scheduling policies are required in order to provide the desired performance and QoS level to the BS users.

The characteristics of wireless links are quite different from those of wire-line links. Wireless channel conditions may vary over time because of fading and shadowing. Time- and location-dependent signal attenuation, noise, and interference may result in transmission errors and different channel capacities in subsequent time slots. Furthermore, different users may experience different channel conditions at the same time. In such a wireless environment, two extremes of scheduling may be applied: opportunistic scheduling and fair scheduling.

The aim of opportunistic scheduling is to maximize the sum of the data transfer rates over all users. This can be achieved by exploiting the fact that users experience different channel gains in different time slots and at different frequencies. Given this observation, there is a high probability that some of the users will have a strong channel in a given time slot. By scheduling only these users to transmit data, the shared channel is used most efficiently and the total system throughput is maximized. Such scheduling mechanisms are called opportunistic because they take advantage of favourable channel conditions when assigning time slots to users. With proper dynamic scheduling, opportunistic scheduling can result in higher spectrum utilization and a considerable increase in the total throughput as the number of active users becomes significant. However, guaranteeing a proper level of fairness and QoS may be a challenge for opportunistic scheduling. The main problem is that users’ data cannot always be held back until the channel conditions become more favourable for transmission, especially if these conditions change slowly in time. Besides, it is important that network operators provide sufficient service not only to the users who already enjoy good channel conditions because of their proximity to a BS, but also to the users located further from the BS, near the cell edge.

The second extreme, fair scheduling, is concerned more with the latency experienced by the users than with the total data rate achieved. This is particularly important for real-time applications such as the above-mentioned multimedia services, where a certain transmission data rate and QoS level should be guaranteed independently of the channel state.
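As a minimal illustration of these two extremes, the sketch below (in Python, with variable names and toy rate values of our own choosing, not taken from any standard or from the schedulers studied later) contrasts a channel-aware max-rate selection with a channel-blind round-robin selection:

```python
import numpy as np

def max_rate_pick(inst_rates):
    """Opportunistic (max-rate) extreme: serve the UE with the strongest channel."""
    return int(np.argmax(inst_rates))

def round_robin_pick(slot_index, num_ues):
    """Fair (channel-blind) extreme: serve UEs cyclically, ignoring channel state."""
    return slot_index % num_ues

# Toy example: per-UE achievable rates (Mbps) reported for the current slot.
rates = np.array([1.2, 0.3, 2.8, 0.9])
print(max_rate_pick(rates))             # -> 2 (best channel is served)
print(round_robin_pick(7, len(rates)))  # -> 3 (independent of channel state)
```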

Generally, most scheduling algorithms fall between the two extremes described above, combining elements of each in order to deliver the required mix of QoS. The LTE specifications do not impose the use of any specific scheduler, so vendors are free to implement their own solutions. Therefore, in this paper we compare three algorithms implementing different scheduling strategies in a simulated LTE environment. Considering mixed traffic scenarios that include video streaming, VoIP and non-real-time services, we expose the strengths and weaknesses of the compared algorithms.

2 Scheduling in LTE

2.1 System Architecture

The functions of a BS in the LTE system are delegated to a unit called the eNodeB. The eNodeB is responsible, among other tasks, for managing resource scheduling for both uplink and downlink channels and, consequently, for fulfilling the expectations of as many users of the system as possible, taking into account the QoS requirements of their respective applications. A typical single-cell cellular radio system is shown in Fig. 1, comprising a number \(N\) of User Equipments (UEs) communicating with one eNodeB over a wireless channel of total bandwidth \(R\). Each UE has several data buffers corresponding to different uplink logical channels, each with different delay and data rate constraints. Similarly, in the downlink, the eNodeB may be equipped with several queues per UE containing dedicated data which may have different QoS constraints. A buffer at the eNodeB is assigned to each of the users. Packets arriving in this buffer are time stamped and queued for further transmission according to the first-in-first-out (FIFO) discipline. For each packet in the queue at the eNodeB buffer, the difference between the current time and the packet arrival time is computed. This difference is called the head-of-line (HOL) packet delay. When the HOL packet delay exceeds a certain threshold, the packet is discarded.
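The buffering and discard behaviour described above can be summarised by the following sketch (a simplified model; the class name, the 0.1 s default threshold and the use of seconds as the time unit are our assumptions, not taken from the LTE specification):

```python
from collections import deque

class EnodebQueue:
    """Per-UE FIFO buffer at the eNodeB; packets carry their arrival time stamp."""

    def __init__(self, hol_threshold=0.1):   # assumed 0.1 s HOL delay budget
        self.packets = deque()               # entries: (arrival_time, payload)
        self.hol_threshold = hol_threshold

    def enqueue(self, now, payload):
        self.packets.append((now, payload))  # time stamp on arrival (FIFO order)

    def hol_delay(self, now):
        """Head-of-line delay: age of the oldest queued packet."""
        if not self.packets:
            return 0.0
        return now - self.packets[0][0]

    def drop_expired(self, now):
        """Discard packets whose HOL delay exceeds the threshold."""
        dropped = 0
        while self.packets and now - self.packets[0][0] > self.hol_threshold:
            self.packets.popleft()
            dropped += 1
        return dropped
```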

Fig. 1 System architecture

In LTE, the total system bandwidth \(R\) is divided into scheduling blocks (SBs), which are allocated in the time domain (TD) and the frequency domain (FD). This is also the major difference between packet scheduling in LTE and that in earlier radio access technologies, such as HSDPA, where scheduling involved only the time domain. In the time domain, an SB has a duration of 1 ms. In order to analyse the scheduling algorithms, we assume that the channel changes its state in discrete units of time. The duration between two consecutive channel changes is longer than the SB duration; thus the channel is assumed to be stationary for the duration of each SB, but may vary between transmissions of subsequent SBs. In the frequency domain, an SB is transmitted using 12 sub-carriers with a total bandwidth of 180 kHz, Fig. 2. The goal of a resource scheduling algorithm in the eNodeB is to allocate the time and frequency resources, i.e. the SBs, in such a way as to optimise a function of a set of performance metrics, e.g. throughput, delay, or spectral efficiency.
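The per-TTI, per-SB allocation can be outlined as follows (a sketch under our own assumptions: the number of SBs, the callback names and the CQI-to-rate mapping are illustrative, and the concrete priority metric is exactly what distinguishes the schedulers discussed in Sects. 2.2–2.4):

```python
# Minimal sketch of time/frequency-domain scheduling over one 1 ms interval.
NUM_SBS = 25  # illustrative; cf. the 25 downlink channels per cell in Sect. 4.2

def schedule_tti(ues, achievable_rate, priority):
    """Assign every SB of the current interval to the UE with the highest priority.

    achievable_rate(ue, sb) -> rate the UE could obtain on that SB (from its CQI)
    priority(ue, rate)      -> scheduler-specific metric (PF, M-LWDF, EXP/PF, ...)
    """
    allocation = {}
    for sb in range(NUM_SBS):
        best_ue = max(ues, key=lambda ue: priority(ue, achievable_rate(ue, sb)))
        allocation[sb] = best_ue
    return allocation  # maps SB index -> scheduled UE
```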

Fig. 2 LTE scheduler

In order to make scheduling decisions, the scheduler needs certain information, such as the number of sessions, their reserved rates, the link states, and the statuses of the session queues. The UEs periodically measure the channel quality and, on the basis of these measurements, compute the Channel Quality Indicator (CQI), which is then sent with uplink control messages to the scheduler located at an eNodeB. The CQI is exploited by the scheduler’s link adaptation module, which, on this basis, selects the most suitable modulation scheme and coding rate for each UE at the physical layer, with the objective of maximizing spectral efficiency. However, if the channel changes its state relatively slowly in comparison with the delay requirements of an application, the scheduler faces two alternatives: either to schedule the UE for transmission even though its channel does not meet the required conditions, thus reducing the throughput gain that could be achieved by scheduling another UE with better transmission conditions; or to violate the delay requirements of the application. Therefore there is a trade-off between the throughput gain and the latency requirements of delay-sensitive applications.

2.2 Proportional Fair Scheduler

A proportional fair (PF) scheduler assigns radio resources taking into account both the CQI and the past user throughput [19]. The goal is to maximise the total network throughput while guaranteeing fairness among flows.

Let us assume that there are \(N\) UEs in a cell and that \(R_i(k)\) is the achievable transmission rate of the \(i\)th UE in transmission interval \(k\), which depends on the UE’s current CQI. Let us also assume that the scheduler computes the running average \(\bar{R}_i(k)\) of the rate achieved by each UE. Then, according to the PF scheduling policy, the UE chosen for transmission in time slot \(k\) is:

$$\begin{aligned} Q(k)=\arg \max _{1 \le i \le N} \frac{R_i(k)}{\bar{R}_i(k)}, \quad k=1,2,\ldots , \end{aligned}$$
(1)

where \(Q(k)\) denotes the index of the UE scheduled for transmission in slot \(k\). The running averages of the achieved rates \(\bar{R}_i(k)\) are updated at each time slot as:

$$\begin{aligned} \bar{R}_i(k+1) = \left\{ \begin{array}{ll} (1-1/t_c)\,\bar{R}_i(k)+(1/t_c)\, R_i(k), &{} \hbox {if UE } i \hbox { transmits data,}\\ (1-1/t_c)\,\bar{R}_i(k), &{} \hbox {if UE } i \hbox { does not transmit data,} \end{array}\right. \end{aligned}$$
(2)

where \(t_c\) is a constant greater than 1 which may be interpreted as the memory of the averaging filter. Thus the algorithm schedules a UE when its instantaneous channel quality is high relative to its own average channel conditions over the time period \(t_c\). When \(t_c\) is sufficiently large, the user with the strongest channel is scheduled for transmission. In contrast, when \(t_c \rightarrow 1\), the PF scheduler does not take the users’ channel conditions into account and performs round-robin scheduling. As a result, by manipulating the memory of the averaging filter, it is possible to configure the PF scheduler to exploit multi-user diversity to a greater or lesser extent and, consequently, to shift the trade-off between cell throughput and packet delay.
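A direct, per-slot transcription of Eqs. (1) and (2) into code may look as follows (a sketch; the default value \(t_c = 100\) and the small constant guarding against division by zero are our assumptions):

```python
def pf_select(inst_rates, avg_rates):
    """Eq. (1): pick the UE index maximising R_i(k) / average R_i(k)."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / max(avg_rates[i], 1e-9))

def pf_update(avg_rates, inst_rates, served, t_c=100.0):
    """Eq. (2): exponential moving average with filter memory t_c."""
    new_avg = []
    for i, avg in enumerate(avg_rates):
        if i == served:
            new_avg.append((1 - 1 / t_c) * avg + (1 / t_c) * inst_rates[i])
        else:
            new_avg.append((1 - 1 / t_c) * avg)
    return new_avg
```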

2.3 Modified Largest Weighted Delay First Scheduler

The Modified Largest Weighted Delay First (M-LWDF) scheduler supports multiple data users with different QoS requirements. In every time slot the M-LWDF serves the user with the highest priority, selected as:

$$\begin{aligned} Q(k)=\arg \max _{1 \le i \le N} a_i\, R_i(k)\, D_i(k), \end{aligned}$$
(3)

where \(D_i(k)\) is the HOL packet delay of UE \(i\), \(R_i(k)\), as in Eq. (1), denotes the maximum supportable data rate in transmission interval \(k\), and \(a_i\) denotes a QoS parameter that allows differentiation between UEs with different QoS priorities. The HOL packet delay \(D_i(k)\) is defined as the queuing delay of the packet located at the front of queue \(U_i\) in transmission interval \(k\). The scheduler scheme is presented in Fig. 3.

Fig. 3 Two-channel M-LWDF scheduler

The main feature of this algorithm is that its scheduling decision takes into account both the CQI and the states of the queues. The M-LWDF algorithm is relatively easy to implement because the scheduler only needs to time stamp the incoming data packets and keep track of the current queue lengths. The algorithm is able to handle any offered traffic, provided that handling it is feasible at all [2].
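Following Eq. (3), the per-slot M-LWDF selection can be sketched as below (the argument names are ours; the HOL delays would be taken from the per-UE queues described in Sect. 2.1):

```python
def mlwdf_select(a, inst_rates, hol_delays):
    """Eq. (3): serve the UE maximising a_i * R_i(k) * D_i(k).

    a          -- per-UE QoS weights a_i
    inst_rates -- maximum supportable rates R_i(k) in the current interval
    hol_delays -- head-of-line packet delays D_i(k) read from the eNodeB queues
    """
    n = len(a)
    return max(range(n), key=lambda i: a[i] * inst_rates[i] * hol_delays[i])
```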

2.4 EXP/PF Scheduler

The adaptive EXP/PF algorithm handles streaming services as well as best-effort data services [17]. The algorithm is a composite of two schedulers, employed to guarantee the delay bound of streaming services with high probability and to maximize the system throughput while guaranteeing proportional fairness between UEs. The first scheduler is based on the EXP rule and is applied to streaming services in order to guarantee their delay constraints [18], while the second is based on the PF rule with a resource-controlling factor and is applied to best-effort services in order to maximize system throughput while achieving proportional fairness. UE \(i\) is selected to be served in the \(k\)th slot according to the priority function \(Q(k)\) calculated as:

$$\begin{aligned} Q(k)= \arg \max _{1 \le i \le N} \left\{ \begin{array}{ll} \exp \Big (\frac{D_i(k)-\bar{D}(k)}{1+\sqrt{\bar{D}(k)}}\Big )\frac{R_i(k)}{\bar{R}_i(k)}, &{} i \in A, \\ w(k)\, \frac{R_i(k)}{\bar{R}_i(k)}, &{} i \in B. \\ \end{array}\right. \end{aligned}$$
(4)

Each UE belongs to one of two groups: the group of streaming-service UEs \(A\) and the group of best-effort-service UEs \(B\). Each UE is evaluated by the EXP rule or by the PF algorithm, depending on whether the \(i\)th UE belongs to group \(A\) or group \(B\). As already introduced in Eq. (3), \(D_i(k)\) is the HOL packet delay of UE \(i\) in the \(k\)th slot and \(R_i(k)\) is the supportable data rate of the \(i\)th UE in the \(k\)th slot; \(\bar{R}_i(k)\) is the average supportable data rate of the \(i\)th UE, and \(\bar{D}(k)\) denotes the average HOL packet delay over the UEs in group \(A\). The resources assigned to group \(B\) are controlled by the adaptive factor \(w(k)\), which divides the resources between real-time service UEs and non-real-time service UEs. From Eq. (4) it can be noted that the PF algorithm gives higher priority to a UE with better channel conditions, whereas the EXP rule gives higher priority to a UE with a higher transmission delay or with more packets in its buffer.
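A sketch of Eq. (4) is given below (the dictionary layout of a UE, and the computation of \(\bar{D}(k)\) as the mean HOL delay over group \(A\), are our assumptions; the adaptation of \(w(k)\) is left outside the sketch):

```python
import math

def exp_pf_select(ues, w_k):
    """Eq. (4): EXP rule for streaming UEs (group A), weighted PF for group B.

    Each ue is assumed to be a dict with keys:
      'group'    -- 'A' (streaming) or 'B' (best effort)
      'rate'     -- R_i(k); 'avg_rate' -- running average of R_i(k)
      'hol'      -- D_i(k), only meaningful for group A
    """
    group_a = [ue for ue in ues if ue['group'] == 'A']
    d_bar = sum(ue['hol'] for ue in group_a) / len(group_a) if group_a else 0.0

    def priority(ue):
        pf_term = ue['rate'] / max(ue['avg_rate'], 1e-9)
        if ue['group'] == 'A':
            return math.exp((ue['hol'] - d_bar) / (1 + math.sqrt(d_bar))) * pf_term
        return w_k * pf_term

    # return the index of the UE with the highest priority in this slot
    return max(range(len(ues)), key=lambda i: priority(ues[i]))
```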

3 Previous Works

An analytical examination of different characteristics of the PF algorithm has been presented, amongst others, in [8] and [12]. Furthermore, the performance of different aspects of frequency domain packet scheduling (FDPS) has been studied in a variety of scenarios. The simulations performed in [15] indicate that the PF scheduler can provide an increase in the average system throughput of about 45 % compared to a time-opportunistic scheduler which does not use sub-carrier CQI information. In the authors’ opinion, the magnitude of the gain depends on many factors, including the CQI accuracy, the CQI frequency resolution and the number of active users. In [16], the authors investigate the performance of FDPS under fractional load generated by a Markov chain. Their simulations show that the PF scheduler tries to optimize resource allocation by avoiding transmission on frequencies that experience severe interference. Such behaviour leads to a gain of 4 dB in the signal-to-interference-and-noise ratio (SINR) over a frequency-blind scheduler. The authors stress that the FDPS gain depends on the ability of the link adaptation module to track variations in interference; this ability is reduced if the frequency usage pattern changes rapidly. The authors of [10] propose to divide the packet scheduler into two phases: a TD phase, in which the packet scheduler chooses a subset of all users connected to a BS, and an FD phase, in which the scheduler performs the actual frequency allocation for these users. According to the authors, such a division is suitable for two reasons: using different schedulers in the two scheduling domains provides scheduling flexibility, and both domains can be configured independently. The authors apply combinations of the PF and frequency-blind algorithms to both scheduler phases, concluding that using the PF scheduler in both phases provides the best trade-off between spectral efficiency and fairness. Several studies of PF scheduling have been conducted in the context of WCDMA/HSDPA technologies, whose results can be transferred to LTE. The authors of [11] examined the link-level system performance of the PF scheduler, taking into account such issues as the link adaptation dynamic range, power and code resources, convergence settings, signalling overhead and code multiplexing. In [20], the authors analysed the performance of PF scheduling for VoIP traffic, including a comparison to round-robin scheduling. A study of the PF scheduler performance applied to video streaming in an HSDPA-based cellular system can be found in [19]. The PF scheduler is commonly used as a reference scheduler for experiments, as performed, among others, in [13] and [21].

The M-LWDF and EXP/PF algorithms are referenced relatively less often than the PF algorithm. The M-LWDF algorithm was described, among others, in [2]. A deeper analysis of the algorithm, performed in a simulated HSDPA cellular environment, can be found in [1]. The results show that the M-LWDF algorithm is a viable solution for scheduling streaming flows in HSDPA. However, the algorithm is unfair in the sense that it provides better QoS to users under favourable propagation conditions.

The work closest to ours is [3], where the performance of the EXP/PF and M-LWDF algorithms for video streaming is compared. Brief simulation results show a better performance of the M-LWDF algorithm for lower loads and a better performance of EXP/PF for higher loads. However, the authors do not give any information about the simulator or the simulation methodology they used.

In this work, in addition to video traffic, we also compare the performance of the schedulers for VoIP traffic. Furthermore, our analysis includes scenarios in which the UEs move at different speeds. Finally, besides throughput and packet loss ratio, we also analyse packet delays. These features constitute the main contributions of this paper.

4 Methodology

4.1 Metrics

From the user’s perspective, the key performance characteristic of a network is the QoS of the received multimedia content. The multimedia QoS may be influenced by the transport characteristics, the compression techniques and data loss. Several methods of analysing voice or video quality have been proposed; they can be roughly divided into two major categories: subjective multimedia quality analysis, in which the users of the multimedia content are directly involved, and objective multimedia quality analysis, based on measurements and mathematical models.

In our work, the QoS is analysed by means of the latter method, which involves, amongst others, the measurement of: (1) packet delay: the latency of a particular data packet, i.e. the time elapsed from when the packet is received by an eNodeB until it is received by a UE; (2) packet loss ratio (PLR): the ratio of the packets discarded during transmission (e.g. because the HOL threshold was exceeded) to the total number of transmitted packets; (3) throughput: the number of packets processed by the system over a given interval of time; (4) spectral efficiency: a metric not directly perceived by the system users, but one that influences metrics (1) and (2). The spectral efficiency is defined as the information rate that can be transmitted over a given bandwidth in a specific communication system. In mathematical terms, it is computed as the net bit rate or maximum throughput divided by the bandwidth of the communication channel, and it is interpreted as a measure of how efficiently a limited frequency spectrum is utilized by a physical layer protocol. The metrics defined above describe the user-level performance when the offered traffic consists of both elastic and non-elastic flows.
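For reference, the four metrics can be expressed directly in code (a sketch in our own notation; the function and argument names are ours):

```python
def packet_delay(enb_rx_time, ue_rx_time):
    """Delay of one packet: from reception by the eNodeB to reception by the UE."""
    return ue_rx_time - enb_rx_time

def packet_loss_ratio(discarded, transmitted):
    """PLR: discarded packets over the total number of transmitted packets."""
    return discarded / transmitted if transmitted else 0.0

def throughput(delivered_packets, interval):
    """Packets processed by the system over a given time interval."""
    return delivered_packets / interval

def spectral_efficiency(net_bitrate_bps, bandwidth_hz):
    """Net bit rate divided by channel bandwidth, in bit/s/Hz."""
    return net_bitrate_bps / bandwidth_hz
```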

4.2 Simulator

For our experiments, we used the LTE-Sim simulator [14], which covers several aspects of LTE and Evolved Packet Core networks. It supports single-cell and multi-cell environments, QoS management, multi-user environments, user mobility, handover procedures, and frequency reuse techniques. Three kinds of network nodes are modelled: UEs, eNodeBs and Mobility Management Entities. At the application layer, the simulator implements four different traffic generators and supports the management of data radio bearers. Finally, it implements several well-known scheduling strategies, among them the PF, M-LWDF and EXP/PF strategies with CQI feedback that are compared in our work.

In all our scenarios, we simulated 19 cells which, in order to employ the frequency reuse concept, formed clusters of four cells, i.e. the four-cell cluster was regularly replicated to cover the whole service area. Every cell had a radius of 1 km, 25 downlink and uplink channels, and a variable number of UEs. We assumed that 25 % of the UEs were dedicated to video streaming, 50 % to VoIP conversations, and 25 % to non-real-time background traffic (e.g. web browsing, file downloading).

The video application simulates a 242 kbps video stream with a pattern produced by an H.264 coder. The VoIP application simulates G.729 voice flows with an ON-OFF pattern: during the ON period the source data rate is 8 kbps, while during the OFF period the rate is zero, because the presence of a voice activity detector is assumed. Finally, the non-real-time background traffic corresponds to an ideal greedy source that always has packets to send.
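As an illustration of the VoIP traffic model, the sketch below generates an ON-OFF source emitting 8 kbps in the ON state and nothing in the OFF state; the exponentially distributed period lengths with a 3 s mean are our assumptions, since the ON/OFF statistics are not specified above:

```python
import random

def voip_on_off_source(duration_s, mean_on_s=3.0, mean_off_s=3.0, rate_bps=8000):
    """G.729-like ON-OFF source: 8 kbps during talk spurts, silence otherwise."""
    t, on, sent_bits = 0.0, True, 0
    while t < duration_s:
        period = random.expovariate(1.0 / (mean_on_s if on else mean_off_s))
        period = min(period, duration_s - t)
        if on:
            sent_bits += rate_bps * period   # voice activity: source emits at 8 kbps
        t += period
        on = not on                          # toggle between talk spurt and silence
    return sent_bits

# average offered VoIP load over a 60 s call (roughly 4 kbps with symmetric ON/OFF)
print(voip_on_off_source(60.0) / 60.0, "bit/s")
```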

For the measurements, we selected the middle cell, with the UEs uniformly distributed within the cell of the serving eNodeB and moving in random directions. The speed of movement is a variable set to 3, 30 or 120 km/h, depending on the examined scenario. The UEs instantaneously report their downlink CQI values to the serving eNodeB. Path loss in an urban environment, shadow fading and multi-path fading are taken into account, all affecting the channel gain. A detailed description of the channel and the LTE PHY layer implementation in the simulator is presented in Section III-F of [14].

Each UE has an infinite buffer assigned at the eNodeB. Packets arriving in the buffer are time stamped and queued for transmission on a FIFO basis. For each packet in the queue at the eNodeB buffer, the HOL packet delay is computed. If the HOL packet delay exceeds a specified threshold, the packet is discarded. The packet scheduler determines the UEs’ priorities according to one of the three examined scheduling algorithms.

The parameters above, together with additional details, are summarised in Table 1.

Table 1 Simulation parameters

5 Results

As the results of the simulation, we obtained comparisons of the metrics described in Sect. 4.1: packet delay, packet loss ratio, throughput, and spectral efficiency. The comparisons were performed as a function of the number of UEs, starting from five UEs and gradually increasing to thirty in steps of five.

We can state that the selection of the scheduling algorithm has a considerable impact on the packet delay experienced by the UEs. In the case of video traffic, the difference between the PF scheduler and the M-LWDF and EXP schedulers increases with the UE speed, Fig. 4. For all the examined cases, the delay experienced by users handled by the M-LWDF and EXP schedulers is below the HOL threshold set to 0.1 s; see also Eqs. (3) and (4). When the UEs move at 3 km/h, Fig. 4a, the difference in the performance of the schedulers starts to be visible after the number of UEs exceeds 15. For 30 UEs, the average delay experienced by users of the PF scheduler is about 1 s. After increasing the UE speed to 30 km/h, Fig. 4b, the PF scheduler keeps pace with the other two schedulers as long as the number of UEs is 10 or fewer. The maximum delay of about 8 s is reached for the maximum number of UEs allowed in our system. A further increase of the UE speed to 120 km/h, Fig. 4c, widens the gap between the examined schedulers: the difference in delay between the PF scheduler and the M-LWDF and EXP schedulers is about two orders of magnitude, even for a relatively small number of UEs active in the system.

As was shown in [4], packet delays above 0.2 s may cause re-buffering events during video play-out. Therefore we can conclude that, for most of the scenario parameters, the PF scheduler will not provide adequate video quality to the supported UEs.

Fig. 4 Video delay for different speeds of UEs. a 3 km/h, b 30 km/h, c 120 km/h

The relative performance of the schedulers forwarding VoIP traffic, Fig. 5, follows that observed for video traffic, Fig. 4. For the scenario in which the UEs move at 3 km/h, Fig. 5a, the difference between the schedulers can be noticed when the number of UEs exceeds 15. When the number of UEs exceeds 20, the delay experienced by users of the PF scheduler exceeds the maximum delay for voice services recommended by the ITU-T [9], i.e. 0.15 s. Moreover, this threshold applies to the entire voice path, of which the connection between a UE and an eNodeB is only a part.

Fig. 5 VoIP delay for different speeds of UEs. a 3 km/h, b 30 km/h, c 120 km/h

Quite similar results are obtained for the scenario in which the speed of the UEs is increased to 30 km/h, Fig. 5b. However, in this case all schedulers exhibit slightly higher delays in comparison with the case in which the speed was set to 3 km/h. The PF scheduler exceeds the above-mentioned threshold of 0.15 s when there are merely between 15 and 20 active UEs in the network cell.

In the case where the UE speed is set to 120 km/h, the difference in performance among the schedulers appears when the number of UEs is larger than 10, Fig. 5c. For the PF scheduler, the number of UEs should not exceed 15 in order to keep within the recommended delay threshold of 0.15 s. The delay for the other two schedulers, over the whole range of examined UE numbers, stays slightly below 0.1 s (the value of the HOL threshold).

As shown in Fig. 6, the PLR rises steadily for all three schedulers as the number of UEs increases. Furthermore, the performance of the M-LWDF and EXP schedulers is quite similar.

Fig. 6 Video packet loss for different speeds of UEs. a 3 km/h, b 30 km/h, c 120 km/h

In the case of low-speed, Fig. 6a, and medium-speed UEs, Fig. 6b, the range of growth is quite wide: for 5 UEs in the system the PLR is about 0.2–2 % and it increases rapidly with the number of UEs, reaching about 50–100 % for 30 UEs. The PF scheduler achieves results comparable to the two other schedulers when the UEs move at 3 km/h; however, when the speed is increased to 30 km/h, the PLR for the PF scheduler is nearly 100 % for 20 UEs, which is a much worse result in comparison with the remaining examined schedulers.

When the UE speed reaches 120 km/h, Fig. 6c, the PLR is quite high for all three examined schedulers. Even for a small number of UEs, the M-LWDF and EXP schedulers drop about 40 % of the forwarded packets, whereas the PF scheduler drops more than 60 % of the packets. After increasing the number of UEs to 30, the PLR for the M-LWDF and EXP schedulers reaches about 60 %, and nearly 100 % for the PF scheduler. Such high PLR values suggest that watching even a relatively low bit rate video (242 kbps) may be impossible even at a low network load. Thus, for high-speed UEs, not only should multimedia-aware schedulers be used, but a client will also have to accept higher network delays in order to avoid such extensive packet loss.

The PLR for the VoIP service does not exceed 10 % regardless of the scheduling strategy and the UE speed, Fig. 7.

In the case of low-speed, Fig. 7a, and medium-speed UEs, Fig. 7b, the PLR is less than 0.1 % for all the examined schedulers as long as the number of UEs does not exceed 10. These values rise gradually with the number of UEs, reaching up to 0.3 % for 15 UEs in the cell.

Fig. 7 VoIP packet loss for different speeds of UEs. a 3 km/h, b 30 km/h, c 120 km/h

When the UE speed is increased to 120 km/h, the PLR is slightly higher, Fig. 7c. For 5 UEs handled by the M-LWDF or EXP scheduler, the PLR is about 0.2 %. When the number of UEs does not exceed 15, the PLR is between 0.1 and 1 % for all schedulers, which may have a certain impact on the conversation quality. For more than 25 UEs in the cell, all the schedulers drop between 3 and 10 % of the forwarded packets, regardless of the speed at which the UEs move. Such PLR levels exceed the threshold recommended for the G.729 codec, which is set to a maximum of 1 % [6].

Therefore, in the case of a VoIP service, the major influence on the QoS is the number of active UEs attached to a single eNodeB. Without additional prioritisation of VoIP traffic, even at a small network load this kind of service will become unusable due to high delay and packet losses.

In the next experiment, we compared the data throughput for the voice and video services. The throughput is limited, above all, by the previously analysed packet delay and loss.

In the scenario where the UEs move slowly, Fig. 8a, and there are only a few active UEs within the network cell, the average throughput achieved by a single UE is sufficient for 242 kbps video play-out. However, as the number of UEs increases, the throughput experienced by a single UE drops. The decline is especially pronounced for the PF scheduler.

Fig. 8 Video throughput per UE. a 3 km/h, b 30 km/h, c 120 km/h

After increasing the speed to 30 km/h, the throughput of a single UE is severely reduced and ranges between 100 and 120 kbps even for only a few active UEs. Such low throughput values have a strongly negative impact on video play-out. As was shown in [5], bandwidth reduction has a great influence on the quality of experience of streamed video.

An increase in the number of UEs operating in the cell, or in their speed, leads to a further degradation of the service quality, Fig. 8b, c. The video QoS is especially affected for the UEs served by the non-multimedia-friendly PF scheduler.

Summarising the experiment above, we may state that in an LTE system configured as in our experiment, smooth video play-out is only possible for a limited number of UEs moving at a low speed.

In the case of the VoIP service, for slow-moving UEs, the performance of all three schedulers is much the same, Fig. 9a. Nevertheless, when the number of UEs exceeds 25, the throughput achieved by the UEs using the PF scheduler is clearly lower than with the other two schedulers.

When the UEs accelerate to 30 km/h, Fig. 9b, the throughput of the VoIP services is reduced by half in comparison with the slow-moving scenario. While the number of UEs in a network cell is relatively low, the UEs handled by the M-LWDF scheduler achieve the best performance. As the number of UEs reaches 15 or more, the M-LWDF and EXP/PF schedulers behave alike. With an increasing number of UEs, the performance of the PF scheduler deteriorates.

Fig. 9 VoIP throughput per UE. a 3 km/h, b 30 km/h, c 120 km/h

If we increase the UE speed to 120 km/h, Fig. 9c, the achieved throughput is comparable to that of the scenario in which the UEs move at 30 km/h. The performance of the PF scheduler, compared to the previous scenario, is slightly worse when there are only a few UEs in the cell. The performance of the other two schedulers is much the same.

As the traffic of the non-real-time services is not prioritised, its throughput is similar for all the examined schedulers, Fig. 10. With the increase in multimedia traffic, the non-real-time services are pushed into the background; therefore we observe a sharp decline in their throughput as the number of UEs increases.

Fig. 10 Non-real-time services throughput per UE

Fig. 11 Spectral efficiency

Analysing the spectral efficiency of the scheduling algorithms, we observe the largest decrease when the number of UEs rises to 10, Fig. 11. When the number of UEs exceeds 10, in the case of the M-LWDF and EXP schedulers, the spectral efficiency per user rises slightly, roughly in proportion to the number of UEs, before dropping again for the maximum number of allowed UEs. Regarding the PF algorithm, its performance decreases with the growing number of UEs, reaching its minimum value at 30 UEs.

6 Conclusions

In this article, we evaluated the packet scheduling performance of three LTE downlink schedulers: PF, M-LWDF, and EXP/PF. We compared the packet delay induced by the schedulers, the system throughput, the packet loss ratio, and the spectral efficiency. For the comparison, we used several simulation scenarios consisting of a real-time video streaming service, VoIP conversations and non-real-time network applications. The UEs supporting the above services and applications moved at three different speeds within one of the nineteen simulated network cells.

Taking several metrics into account, the simulation results show that the PF scheduler is usually considerably outperformed by the two other schedulers. The PF algorithm reveals its weakness particularly when confronted with demanding real-time video traffic and when there is a need to support more than a few UEs in a network cell. The performance of the M-LWDF and EXP/PF algorithms is comparable, with a slight advantage for the M-LWDF algorithm in terms of packet loss and data throughput.

This research shows the importance of the proper selection of the scheduling strategy at a network base station. A lack of multimedia traffic awareness in the scheduling algorithm implemented at the base station leads to major degradation of video and VoIP services, even for a relatively small number of UEs in the network.