
Introduction

Wireless communication networks have significantly impacted various aspects of our lives, including healthcare, professional networking, and access to information. Over the past few decades, these networks have evolved from expensive systems into pervasive infrastructure accessible to much of the population. The development of cellular networks has been particularly revolutionary, leading to the emergence of new use cases and challenges. Four generations of cellular networks (1G–4G) are shown in Fig. 13.1, and the fifth generation has recently been introduced. The first and second generations were aimed primarily at voice-centric communication, whereas the third and fourth generations introduced packet data with higher data rates and new frequency bands. Each new generation has built upon the services of the previous one and has brought significant technological advances. However, one drawback of the first four generations is that their communication design was focused primarily on human-centric services.

Fig. 13.1
Evolution of cellular generations from 1G to 5G: 1G and 2G represent voice-centric services, 4G represents mobile broadband with LTE, and 5G represents the networked society

In other words, the first four generations of cellular networks were developed primarily to support voice and data communication between individuals. While these networks evolved to offer faster speeds and more advanced features, their basic design remained centered on human communication needs; before 5G, cellular networks concentrated on serving human users (Liberg et al. 2017). With the advent of the fifth generation, there is a greater emphasis on designing networks to support a broader range of use cases, including machine-to-machine communication and the Internet of Things (IoT). The IoT has emerged as a significant technological revolution in recent years. It refers to the interconnection of large numbers of devices embedded in everyday physical objects, such as machines, appliances, and vehicles, enabling them to access the internet and exchange data without human intervention. This embodies the concept of everything being connected.

IoT devices can serve various applications, including healthcare, agriculture, security, smart cities, industrial automation, and autonomous driving. New applications are emerging daily, showcasing this technology’s vast potential. The growth of IoT has led to the development of new business models and opportunities for innovation. With the increasing use of connected devices, IoT is set to become an integral part of our daily lives in the years to come. This diversity of applications has given users unprecedented freedom, and device numbers have grown enormously in recent years. Massive numbers of devices are already deployed, and these numbers are expected to keep growing, with predictions reporting that almost 29 billion IoT devices will be operational by 2023 (Ericsson Mobility Report). To meet the resulting connectivity requirements and further support IoT, existing cellular networks must be restructured.

This chapter focuses on identifying the core issues that limit large-scale IoT deployment over cellular networks and on a novel solution to mitigate them. Most of the problems arise in three distinct areas: connection establishment, utilization of network resources, and energy efficiency. In this context, we examine the integration of massive Machine-Type Communications (mMTC) into cellular networks and how the performance of Narrowband-IoT (NB-IoT) within cellular networks can be improved.

This chapter is divided into several sections. First, we provide an overview of related work in the “Literature Review” section. Next, in the “Narrowband-Internet of Things (NB-IoT)” section, we discuss NB-IoT and explain the Power Saving Mode (PSM) and extended Discontinuous Reception (eDRX). We then present our methodology and performance analysis in the “Methodology” section. Validation results for the proposed algorithm and the analytical NB-IoT model are provided in the “Results” and “Discussion” sections, respectively. Finally, we offer concluding remarks and outline future work in the “Conclusions” section.

Literature Review

In this section, we examine the literature related to the issues discussed in this chapter. We review the significant works related to each key challenge and explore how research in this area has evolved in recent years.

The diverse range of IoT applications has varying requirements, such as stringent latency, reliable transmissions, static or highly mobile devices, and small or large volumes of data. Therefore, no single approach can cater to all IoT applications. In this setting, Low-Power Wide-Area Networks (LPWANs) are becoming the preferred option for many IoT use cases (Masoudi et al. 2021). A Low-Power Wide-Area (LPWA) wireless IoT radio access network must balance four conflicting Key Performance Indicators (KPIs): coverage area, battery lifetime, device capacity, and cost. Traditional cellular networks cannot meet these KPIs. As IoT devices started to be deployed in large numbers, various surveys were conducted to address these issues (Langat and Musyoki 2022; Singh et al. 2021a; Amodu and Othman 2018). Numerous surveys have identified significant inefficiencies in LPWA networks and emphasized critical research directions. One of the primary concerns highlighted in most of this research is the high number of collisions in the random-access channel (RACH) during the random access (RA) process. Several surveys addressing this issue (Andrade et al. 2018; Althumali and Othman 2018; Kafi 2021) concentrated mainly on the RA process and the consequences of IoT traffic for Human-Type Communication (HTC). At that time, IoT devices were mostly regarded as having lower priority and importance compared to HTC devices. As the deployment of IoT devices over cellular networks has massively increased, more recent surveys (Mahmood et al. 2020; Wu et al. 2020; El-Tanab and Hamouda 2021) have exposed additional issues and identified new research directions.

In recent years, the research community has recognized that IoT devices are of equal importance to HTC devices due to their significant growth. As a result, several recent surveys have addressed this issue (Al-Dulaimi et al. 2018; Li et al. 2021; Suma 2021). Recent research has focused on issues affecting both IoT and HTC devices in 4G and upcoming 5G networks. One proposed solution is the Enhanced Access Barring (EAB) scheme, which dynamically adjusts the barring parameters to balance the number of collisions and the network access delay for both device types (El-Tanab and Hamouda 2021; Vidal et al. 2019; Bui et al. 2019; Tello-Oquendo et al. 2018; Zhan et al. 2021; Leyva-Mayorga et al. 2019; Haile et al. 2021; Singh et al. 2021b). Nonetheless, little research has focused on broadcast transmission functionality, network utilization, and energy consumption for IoT devices. Earlier proposals aimed to improve the efficiency of multicast transmissions by modifying the Modulation and Coding Scheme (MCS) used (Zuhra et al. 2019; Fuad et al. 2021; Rinaldi et al. 2020; Chen et al. 2020). As the number of devices using multicast services increased, these schemes were found to be insufficient for providing satisfactory service without affecting unicast traffic or adding significant processing at the base station. As a result, parallel research was conducted to improve the quality of service experienced by users.

In Ghandri et al. (2018), the authors distinguish between two types of services, bandwidth-intensive streaming services and file delivery, while in Park et al. (2018) and Guangzhi (2021), devices are categorized into different groups based on the services they receive. In Guangzhi (2021), users were grouped based on the feedback received about the channel quality. Similar approaches have been proposed in related works, where devices are categorized into groups based on their experienced Quality of Service (QoS) (Zuhra et al. 2019; Chen et al. 2021; Li et al. 2018; Gong et al. 2022; Saily et al. 2019).

Since the advent of the IoT era, energy conservation has been recognized as a crucial objective on both the device and network sides of current and future cellular networks. Several studies (Liu et al. 2019; Tang et al. 2020; Ferranti et al. 2019; Jahid et al. 2019) have highlighted the challenging issues and outlined various research directions, while other work (Uppal and Gangadharappa 2021; Pham et al. 2020) has investigated proposed energy-saving practices and approaches and identified suitable parameters based on the development prospects of both the network and the devices. Initially, most research on energy consumption focused on the network side, aiming to reduce the energy consumption of the base station (BS) due to the large number of devices it has to serve simultaneously. Several parallel research areas followed, which can be broadly divided into the following categories:

  1. Resource allocation schemes

Researchers have investigated various transmission strategies and resource allocation schemes that depend on the transmission models of the devices (either HTC or IoT) and the total traffic load in the cell (Wu et al. 2015; Hou et al. 2020).

  2. Dynamic adaptation of transmission parameters

These proposals (Bonnefoi et al. 2019; Jahid et al. 2021; Ozyurt et al. 2021; Lassoued and Boujnah 2020; Lv et al. 2020) aim to optimize the transmission and operation parameters of the BSs dynamically to minimize their energy consumption by utilizing on/off switching and irregular transmission schemes.

  3. Collision mitigation methods

These approaches aim to reduce the energy consumption of BSs by mitigating interference from different transmissions and preventing the retransmission of previous messages. Several approaches have been proposed in this area, as highlighted in Khazali et al. (2018), Nikjoo et al. (2018) and Ghosh (2020).

With the massive increase in device numbers, the research community began to focus on the energy consumption of the devices themselves, leading to many parallel research directions. Several works (Chang and Tsai 2018; Sehati and Ghaderi 2018; Verma et al. 2019, 2018; Bithas et al. 2019; Li and Chen 2019; Mughees et al. 2021; Sanislav et al. 2018) aimed to optimize the Discontinuous Reception (DRX) configuration of IoT devices to extend the sleeping period and thereby minimize energy consumption.

Several studies (e.g., Sanislav et al. 2018; Al Homssi et al. 2018; Chen et al. 2018; Malik et al. 2018; Wang et al. 2020; Tsoukaneri et al. 2020; Liang et al. 2018) have proposed optimizing transmission parameters, such as duty cycles or the number of transmissions, to reduce the energy consumption of IoT devices during operation (Himeur et al. 2020). Another direction is optimizing resource allocation and data transmission parameters, such as the data rate (Chen et al. 2018; Malik et al. 2018). However, much of the energy-related research in recent years has not focused on cellular network technology specifically. As a result, the unique characteristics of individual devices were not considered, and some studies attempted to establish general models for energy consumption in IoT devices (Tsoukaneri et al. 2020; Finnegan and Brown 2018; Azoidou et al. 2018; Sadowski and Spachos 2020; Duhovnikov et al. 2019; Lan et al. 2019).

As a result, various works, such as Finnegan and Brown (2018), Andres-Maldonado et al. (2017), Yeoh et al. (2018), Lauridsen et al. (2018), Bello et al. (2019), Soussi et al. (2018) and Sinha et al. (2017), have focused on evaluating the impact of NB-IoT technology on energy consumption, examining its distinct operating modes and their associated energy costs. In addition, Yeoh et al. (2018) conducted experimental research on the energy consumption of a commercial off-the-shelf NB-IoT module for basic communication services.

Narrowband-Internet of Things (NB-IoT)

NB-IoT is a licensed Low Power Wide Area Network (LPWAN) radio technology designed to provide enhanced indoor coverage for large numbers of low-cost, low-capability, and low-power IoT devices. It omits dual connectivity and mobility features, which further reduces device costs.

Currently, the two major cellular IoT technologies targeting these use cases are NB-IoT and LTE-M.

NB-IoT is designed to cater to low-cost Machine-Type Communication (MTC) UEs with lower power consumption and a larger coverage area than conventional enhanced Mobile Broadband (eMBB) UEs. This is achieved by utilizing a small portion of the spectrum, a distinct radio interface design, and simplified LTE network functions. NB-IoT is a new 3GPP radio-access technology that is not fully backwards compatible with previous generations of cellular networks, meaning existing devices cannot use it directly. However, it leverages the existing LTE physical layer design to a great extent so that it can coexist with legacy deployments (Wang et al. 2017).

NB-IoT physical channels reuse the existing cellular network design to provide extensive coverage, allowing seamless coexistence and interoperability. NB-IoT is a half-duplex technology that supports OFDMA transmissions in the downlink and SC-FDMA transmissions in the uplink, similar to 4G; the UE therefore never needs to listen to the DL while transmitting in the UL, and vice versa, regardless of the deployment mode. The technology requires a minimum channel bandwidth of 180 kHz, equivalent to one Physical Resource Block (PRB). Figure 13.2 illustrates the design of NB-IoT subframes. The physical channels specified in the NB-IoT standard include the Narrowband Physical Broadcast Channel (NPBCH), used for broadcasting the master information required for initial access (i.e., the Master Information Block, or MIB); the Narrowband Physical Downlink Control Channel (NPDCCH) for uplink and downlink scheduling information; the Narrowband Physical Downlink Shared Channel (NPDSCH) for dedicated and common downlink data; the Narrowband Physical Random-Access Channel (NPRACH) for random-access preamble transmissions; and the Narrowband Physical Uplink Shared Channel (NPUSCH) for uplink data. The NPUSCH has two formats: format 1 for UL data transmissions and format 2 for Hybrid Automatic Repeat Request (HARQ) feedback for the NPDSCH.

Fig. 13.2
NB-IoT in-band physical channels time multiplexing: one radio frame of 10 subframes, with the uplink shown as 12 subcarriers × 14 OFDM symbols and 48 subcarriers × 4 symbol groups, and the downlink as 12 subcarriers × 14 OFDM symbols per subframe (Rastogi et al. 2020)

Power Saving Features

To ensure an extended battery life of more than 10 years on a single battery charge, NB-IoT employs two power-saving techniques:

  1. The Power Saving Mode (PSM)

  2. The extended Discontinuous Reception (eDRX)

Both approaches enable the UE to enter a power-saving mode in which monitoring for paging/scheduling information is not required.

  1. PSM: The Power Saving Mode (PSM) in NB-IoT allows devices to enter a deep sleep state by shutting down most of their functions while remaining registered with the network, as shown in Fig. 13.3. In this mode the device saves power while it is unreachable, yet it can still wake up whenever necessary to send data. The PSM technique is specifically designed to help IoT devices conserve battery power and potentially achieve a battery life of over 10 years.

     Fig. 13.3
     Power saving mode (PSM): power consumption versus time, showing two data transfer bursts separated by the TAU period, with paging monitoring windows after the first transfer and an interval in which the device is not reachable

    PSM is effectively a power-off mode in which the device nevertheless remains registered with the network, according to the 3GPP TS 23.682 specification. Notably, PSM appeared in the 3GPP specifications earlier than NB-IoT, in 3GPP Release 12. In PSM, the device enters a near power-off state for a configurable amount of time. If the device needs to transmit data, it can wake up without having to re-register with the network and perform the associated signaling.

  2. eDRX: NB-IoT employs an extended Discontinuous Reception (eDRX) technique that puts devices into an idle mode in which they do not receive radio signals for a specified period, allowing them to conserve power and prolong battery life. Periodically, the devices wake up to receive paging messages from the network and check for incoming information before returning to deep sleep. The eDRX cycle in NB-IoT ranges from 20.48 to 10485.76 s, allowing the receiving path of the device to remain shut off for an extended period (see the sketch following Fig. 13.4). The inclusion of eDRX in the 3GPP Release 13 specifications enables an additional power-saving mode for IoT devices.

    In summary, after the Active Timer expires, the UE switches to PSM, completely disconnecting the radio and maintaining only a primary oscillator for time reference; in PSM, energy consumption is similar to the power-off state. eDRX, in contrast, is a technique that prolongs the sleep time of the idle-mode DRX (I-DRX). When using eDRX, an active phase controlled by a Paging Time Window (PTW) timer occurs during each eDRX cycle, in which the UE can be reached using I-DRX cycles, followed by a sleep phase for the rest of the eDRX cycle (Fig. 13.4). This cycle repeats until the Active Timer expires.

     Fig. 13.4
     Extended discontinuous reception (eDRX): power consumption versus time, showing a data transfer followed by brief wake-ups of the dormant device separated by sleep periods
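The eDRX cycle lengths quoted above form a geometric ladder, and the quoted endpoints are consistent with a set of the form 20.48 s × 2^k. The following minimal sketch enumerates that candidate set; the power-of-two structure is an assumption made for illustration, and the normative value set is defined in the 3GPP specifications, not here:

```python
# Sketch: candidate NB-IoT eDRX cycle lengths, assuming the set is
# 20.48 s * 2**k. This assumption reproduces the 20.48-10485.76 s range
# quoted above; the normative values come from 3GPP, not this snippet.
BASE_CYCLE_S = 20.48

def edrx_cycle_candidates(max_cycle_s: float = 10485.76) -> list[float]:
    """Enumerate cycle lengths 20.48 * 2**k up to max_cycle_s."""
    cycles = []
    k = 0
    while (cycle := BASE_CYCLE_S * 2 ** k) <= max_cycle_s:
        cycles.append(round(cycle, 2))
        k += 1
    return cycles

print(edrx_cycle_candidates())
# [20.48, 40.96, 81.92, ..., 10485.76]: the longer the cycle, the longer
# the receiver can stay shut off between paging occasions.
```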

Methodology

In a computer system, all components require energy to function. While desktop computers have Power Supply Units (PSUs), laptops and embedded devices typically rely on batteries. This chapter focuses on how LPWA technologies, particularly NB-IoT, optimize power usage. The microcontrollers and sensors in LPWA networks are small computer systems that prioritize optimizing energy consumption for longer battery life rather than maximizing performance. While components consume power to perform computations, some energy dissipates from the system as heat. This section provides a basic understanding of energy (E) and power (P) and how they relate to consumption.

To discuss energy in computer systems, it is essential to understand the two primary forms of energy: potential and kinetic. The law of conservation of energy states that the total energy in a closed system remains constant; energy cannot be created or destroyed, only transformed into other forms. Computer systems transform the energy required to perform tasks into other forms, mainly heat. Power, in turn, is the amount of work done per unit of time, i.e., the rate of energy consumption. The primary quantities used to characterize energy in computer systems are voltage (V), current (A), power (W), energy (Wh), and time (s).

These quantities are connected by a fundamental formula for power, which will be used and adapted throughout this section. Equation 13.1 calculates the average power by dividing the energy by the elapsed time:

$$ P = \frac{E}{t} $$
(13.1)

To calculate instantaneous power, we measure the voltage supplied by a source such as a battery and the current flowing through the circuit, and take their product:

$$ P(t) = V(t) \times I(t) $$
(13.2)

It is important to note that power consumption can be optimized by controlling either the voltage, current, or both. In the case of battery-powered devices, reducing the voltage or current can help to extend the battery life. However, this may come at the cost of reduced performance or functionality. Therefore, finding the optimal balance between power consumption and performance is crucial for designing efficient and effective computer systems.

When using a computer system, the energy consumed can be calculated by determining the power usage over a specific period. Equations 13.1 and 13.2 can be used to derive a function that calculates the energy consumed by a computer system based on the voltage, current, and time elapsed. This can be expressed as:

$$ E = \int_{t_{1}}^{t_{2}} V(t) \times I(t)\,dt = \int_{t_{1}}^{t_{2}} P(t)\,dt $$
(13.3)
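In practice, V(t) and I(t) are available only as discrete samples from a measurement device, so the integral in Eq. 13.3 is evaluated numerically. The sketch below (a minimal illustration; the sample trace is hypothetical, not a measurement from our module) applies the trapezoidal rule and then recovers the average power via Eq. 13.1:

```python
import numpy as np

def energy_joules(t: np.ndarray, v: np.ndarray, i: np.ndarray) -> float:
    """Approximate E = integral of V(t)*I(t) dt (Eq. 13.3) with the trapezoidal rule.

    t: sample timestamps in seconds; v: volts; i: amperes.
    """
    p = v * i                     # instantaneous power, Eq. 13.2
    return float(np.trapz(p, t))  # energy in joules (watt-seconds)

# Hypothetical trace: a 3.7 V supply with one brief transmission burst.
t = np.linspace(0.0, 10.0, 1001)              # 10 s sampled at 100 Hz
v = np.full_like(t, 3.7)                      # constant battery voltage
i = np.where((t > 4) & (t < 5), 0.25, 0.005)  # 250 mA burst, 5 mA otherwise

E = energy_joules(t, v, i)
P_avg = E / (t[-1] - t[0])                    # average power, Eq. 13.1
print(f"E = {E:.3f} J, P_avg = {P_avg * 1e3:.1f} mW")
```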

Power Management in 3GPP Standard

LPWA technologies are designed to optimize power usage and ensure a longer device battery life. In the 3GPP standard, power management techniques sustain low energy consumption while maintaining a reliable connection. Some of these techniques include:

  • Low power mode: This mode allows devices to enter a deep sleep state while still connected to the network, thereby conserving power.

  • Lightweight MAC protocols: These protocols are designed to be simple and efficient, reducing the energy required for communication.

  • Topology: The topology of LPWA networks is optimized to reduce the energy required for communication by using fewer hops and minimizing interference.

  • Utilization of more complex base stations: By using more complex base stations, LPWA networks can achieve better coverage and reduce the energy required for communication.

To conserve power in LPWA technology, the User Equipment (UE) does not transmit continuously. Instead, it wakes up from sleep mode to send the required data and uses its power-hungry components only for a short time. Lightweight MAC protocols are also needed to reduce complex overhead for LPWA UEs. Regarding topology, UEs should aim to connect directly to the base station in a single hop, as in the star topology commonly used in cellular networks and WLANs, since avoiding unnecessary hops improves battery life. In 3GPP-standardized technologies, only the UE can initiate the low power mode. Finally, offloading complexity onto the base stations can further extend the battery life of UEs.

Low Power Mode

LPWA technologies, including cellular networks, use a low-power mode to optimize power consumption and extend battery life. This mode involves powering down heavy elements like the processor. The low-power mode can be implemented differently, depending on the application. For example, a device that only transmits information using the uplink can be scheduled to send information twice a day or manually trigger transfer messages. If the device can receive notifications through the downlink, it must listen to the network for these messages. There are several ways to achieve this, and the most suitable approach depends on the use case and how often the device wakes up from low-power mode. If the device periodically transmits messages, it can simultaneously listen for messages on the downlink.

In eMTC and NB-IoT, the low power mode is implemented differently, but both use power-efficient techniques such as PSM and eDRX. The difference between these techniques is that eDRX periodically allows the modem to listen for incoming signals, whereas in PSM the modem must wake up and send data before it can receive any. Although energy-efficient communication is crucial for the successful deployment of MTC over existing cellular networks, little research has focused on energy-efficient uplink MTC scheduling.

Algorithm 13.1 illustrates the procedure to apply the Prediction Energy Saving Technique (PEST) on the User Equipment (UE) side.

To improve energy efficiency, the UE in cellular networks can store a prescheduling command transmitted via a narrowband physical uplink shared channel (NPUSCH) and check it when an uplink packet occurs. The UE follows the legacy scheduling request procedure if there is no prescheduling request. However, if there is a prescheduling request, the UE delays the uplink packet transmission until the scheduled time without triggering the scheduling request procedure.

In some cases, there may be a radical change in traffic arrival, such as when some micro-BSs are turned off to save energy during low-traffic periods. This requires neighboring BSs to cover the coverage areas of the turned-off BSs, which is called cell zooming. During low-traffic periods, the density of active BSs decreases and communication distances increase. Legacy UEs, such as smartphones, are designed for daily recharging and can tolerate such patterns more easily. The proposed algorithm is evaluated using analytical and simulation models.

Algorithm 13.1

Proposed UE procedure in an NB-IoT network: the UE stores the uplink packet in the buffer, delays the uplink transmission until the prescheduled time, and then processes the NPUSCH transmission
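Based on this description, a minimal UE-side sketch of PEST might look as follows. The class and method names are hypothetical reconstructions of Algorithm 13.1, which may differ in detail:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PestUe:
    """Sketch of the UE side of the Prediction Energy Saving Technique (PEST).

    Names and structure are illustrative reconstructions of Algorithm 13.1.
    """
    presched_time: Optional[float] = None  # prescheduling command received via NPUSCH
    buffer: list = field(default_factory=list)

    def on_prescheduling_command(self, scheduled_time: float) -> None:
        # Store the prescheduling command for later uplink packets.
        self.presched_time = scheduled_time

    def on_uplink_packet(self, packet: bytes, now: float) -> str:
        if self.presched_time is None or self.presched_time <= now:
            # No valid prescheduling grant: fall back to the legacy
            # scheduling request procedure.
            return "legacy_scheduling_request"
        # Prescheduling grant available: buffer the packet and delay the
        # uplink transmission until the prescheduled time, skipping the
        # scheduling request procedure entirely.
        self.buffer.append(packet)
        return f"npusch_tx_at_{self.presched_time}"
```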

Battery Lifetime Estimation

We adopt a methodology similar to that used to assess the UE battery life by measuring energy consumption. Our study considers a smart utility sensor, which sends periodic uplink reports with a predetermined inter-arrival time (IAT) as per the traffic profile. Before the start of periodic reporting, the UE needs to re-establish the RRC connection and thus perform the CP procedure.

We divided the battery lifetime approximation into four phases for modeling the periodic traffic pattern:

  • P1: The UE wakes up from Power Saving Mode (PSM), establishes the RRC connection, and transmits data using the CP procedure.

  • P2: The UE continuously monitors the Narrowband Physical Downlink Control Channel (NPDCCH) until the RRC connection is released.

  • P3: The UE utilizes extended/enhanced Discontinuous Reception (eDRX) until the Active Timer expires.

  • P4: The UE enters sleep mode using PSM until the next transmission period begins.

To estimate the energy consumption for transferring one UL report, denoted as Ereport, we combine the energy consumed in the four phases defined above:

$$ E_{\text{report}} = E_{\text{conn}} + E_{\text{rel}} + E_{\text{idle}} + P_{\text{standby}} \times T_{\text{sleep}} $$
(13.4)
$$ T_{\text{sleep}} = \text{IAT} - T_{\text{conn}} - T_{\text{rel}} - T_{\text{idle}} $$
(13.5)

The energy consumption for transferring one UL report, Ereport, was estimated using a similar methodology. As described above, the four phases for modeling the periodic traffic pattern were P1, P2, P3, and P4. The energy consumed in joules within the phases P1, P2, and P3 is denoted as Econn, Erel, and Eidle, respectively. Pstandby represents the average power consumption in PSM, and Tconn, Trel, Tidle, and Tsleep represent the duration in seconds of the phases P1, P2, P3, and P4, respectively. Finally, the energy consumed per day, denoted as Eday, and the battery lifetime in years indicated as Blife can be determined as follows:

$$ E_{\text{day}} = \frac{D_{\text{day}}}{\text{IAT}} \times E_{\text{report}} $$
(13.6)
$$ B_{\text{life}} = \frac{\text{Bat}_{C}}{\frac{E_{\text{day}}}{3600} \times 365.25} $$
(13.7)

In our simulation, we consider periodic UL reports as UDP packets with 50 B of payload and a battery capacity of BatC = 5 Wh (Wang et al. 2017). We use Dday to represent the duration of 1 day in seconds (86,400 s).
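To make Eqs. 13.4–13.7 concrete, the following sketch assembles them into a lifetime estimate for this traffic profile. The phase energies and durations passed in the example are hypothetical placeholders, not measurements from our module:

```python
def battery_lifetime_years(
    e_conn_j: float,      # energy in phase P1 (RRC setup + CP data transfer), joules
    e_rel_j: float,       # energy in phase P2 (NPDCCH monitoring until release), joules
    e_idle_j: float,      # energy in phase P3 (eDRX until Active Timer expiry), joules
    p_standby_w: float,   # average power in PSM (phase P4), watts
    t_conn_s: float, t_rel_s: float, t_idle_s: float,  # durations of P1-P3, seconds
    iat_s: float,         # inter-arrival time between UL reports, seconds
    bat_c_wh: float = 5.0,  # battery capacity, Wh (Wang et al. 2017)
) -> float:
    d_day_s = 86_400.0
    t_sleep_s = iat_s - t_conn_s - t_rel_s - t_idle_s                     # Eq. 13.5
    e_report_j = e_conn_j + e_rel_j + e_idle_j + p_standby_w * t_sleep_s  # Eq. 13.4
    e_day_j = (d_day_s / iat_s) * e_report_j                              # Eq. 13.6
    return bat_c_wh / ((e_day_j / 3600.0) * 365.25)                       # Eq. 13.7

# Hypothetical example: one 50 B report per day.
print(f"{battery_lifetime_years(3.0, 1.0, 0.5, 15e-6, 10, 20, 60, iat_s=86_400):.1f} years")
```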

Results

This section presents the validation results of our proposed analytical NB-IoT model. We used Eqs. 13.1, 13.2, and 13.3 to calculate the module’s energy consumption and average power from the measured voltage and current over a given period. The validation was performed using two metrics: battery lifetime and latency for performing Control Plane (CP) optimization. Our proposed algorithm aims to reduce the total energy consumption of NB-IoT devices by exploiting their lack of mobility and minimizing the number of costly and unnecessary procedures. The module was tested with a supply voltage of 3.7 V and a signal strength of −75 dBm. It should be noted that the module is still under development.

PSM

Figure 13.5 displays the results of a 1-h Power Saving Mode (PSM) test in which the system woke up from sleep only once. The initial peak in the current was measured before the system entered sleep mode for the first time.

Fig. 13.5
1-h PSM test: current (A) versus time (s), an almost flat trace near 0 A with the largest fluctuations at 0 and 1500 s

Furthermore, we performed separate analyses of timer-triggered and button-triggered wake-ups, examining the cycles individually to characterize both modes more precisely. Table 13.1 summarizes the results of these analyses, which are illustrated in Fig. 13.6, and indicates a significant decrease in both current and power when sleep mode begins.

Table 13.1 PSM active and sleep
Fig. 13.6
PSM active and sleep cycles: current (A) versus time (s). Left: the active phase, with strong fluctuations flanked by low flat segments. Right: the sleep phase, a low flat segment between roughly 250 and 3000 s with high peaks at either end

eDRX

Figure 13.7 displays the results of a 20-min eDRX test, where a request was sent to the system at the beginning of the analysis, resulting in the first spike in current. This spike reached a peak of over 0.25 A. Following this, the system woke up periodically to listen to the downlink, with an average current of approximately 0.2 A. During idle cycles, the system was in sleep mode, with the current varying between 0.1 and 0.15 A. Figure 13.8 shows multiple spikes in the current readings during each cycle, which may be because the current is not measured from the system alone, as applications are also running on the application processor. The results for each cycle are provided in Table 13.2.

Fig. 13.7
20-min eDRX test: current (A) versus time (s), a densely fluctuating trace with a dashed horizontal line at approximately 0.135 A

Fig. 13.8
eDRX active, idle, and sleep cycles: current versus time. (a) Active: a rising fluctuating trend. (b) Idle: fluctuations with higher peaks in the center. (c) Sleep: lower peaks, with high peaks at either end

Table 13.2 The results for each separate cycle

Discussion

This section presents a comprehensive, measurement-based evaluation of the energy consumption of NB-IoT devices, focusing on identifying optimization targets for network operations. Furthermore, we use the proposed energy consumption model, together with our empirical observations, in simulation experiments to estimate the battery requirements for NB-IoT devices to achieve the desired battery life.

Estimating an NB-IoT device’s lifetime can be accomplished using the battery’s capacity and the network configuration. In the case of Power Saving Mode (PSM), we calculate the battery life for four different timer settings: 1 h, 1 day, 1 week, and 1 month (30 days). Table 13.3 provides each cycle’s average power and respective duration. To determine the battery’s lifetime, we divide the battery’s energy by the average power according to the following formula:

$$ {\text{Lifetime}} = \frac{{5\;{\text{Wh}}}}{{P_{tot} }} $$
(13.8)
Table 13.3 Lifetime of 5 Wh battery (PSM)

Table 13.3 illustrates the lifetime of a 5 Wh battery when utilizing Power Saving Mode (PSM) with NB-IoT and eMTC. As anticipated, the battery life is considerably shorter for eMTC than for NB-IoT. Depending on the duration of the sleep mode, the battery life varies from 17 to 3496 days.
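To illustrate how such table entries can be derived, the sketch below computes the duty-cycle-weighted average power P_tot over one active/sleep cycle and applies Eq. 13.8. The power and duration values in the example are hypothetical, not taken from Table 13.3:

```python
def psm_lifetime_days(p_active_w: float, t_active_s: float,
                      p_sleep_w: float, t_sleep_s: float,
                      bat_wh: float = 5.0) -> float:
    """Estimate battery lifetime under PSM with a periodic active/sleep cycle.

    P_tot is the duty-cycle-weighted average power over one cycle;
    lifetime then follows from Eq. 13.8 (converted from hours to days).
    """
    p_tot_w = (p_active_w * t_active_s + p_sleep_w * t_sleep_s) / (t_active_s + t_sleep_s)
    lifetime_h = bat_wh / p_tot_w  # Eq. 13.8
    return lifetime_h / 24.0

# Hypothetical example: 10 s active at 0.5 W, then 30 days asleep at 50 uW.
print(f"{psm_lifetime_days(0.5, 10, 50e-6, 30 * 86_400):.0f} days")
```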

Table 13.4 depicts the power usage for each eDRX cycle and the respective durations for the same 5 Wh battery. As with PSM, we can use the same equations to estimate the battery’s lifetime.

Table 13.4 Average power summary (eDRX)

Conclusions

The proliferation of the Internet of Things (IoT) has revolutionized various domains of our lives by extending network connectivity to everyday objects, enabling them to communicate with each other without human intervention. IoT devices have a wide range of applications, including digital health, smart homes, autonomous driving, and industrial automation, with new applications being developed daily.

Cellular networks have emerged as a strong candidate to support IoT devices, mainly due to their extensive deployment, large coverage area, and varying data rates. However, traditional cellular networks were historically designed to serve Human-Type Communication (HTC) devices, whose traffic patterns differ considerably from those of IoT devices, leading to inefficiencies at both the network and device levels. These challenges are not limited to a single area but span various operational areas of cellular networks, such as the connection establishment process, network resource utilization, and device energy consumption.

The choice of LPWAN technology for an IoT application should be made case by case, considering the device’s data transmission requirements, desired lifetime, and access to a charging source. PSM may be more appropriate for devices that only need to send data infrequently, while eDRX may be more suitable for devices that frequently listen for incoming information. Careful selection of the power-saving feature is crucial, since neither PSM nor eDRX is a one-size-fits-all technology.

This chapter has addressed the challenges IoT devices face in cellular networks, focusing on their unique communication patterns and requirements. To tackle these challenges, it presented an analytical model for estimating the energy consumption of an NB-IoT device. The results indicate that the use cases for eDRX and PSM differ and that devices that must listen to the downlink frequently may require more frequent battery recharging.

Future Studies

The rising traffic generated by emerging IoT applications presents a significant challenge for cellular networks. Machine-to-Machine (M2M) communications, also known as Machine-Type Communications (MTC), are crucial for current and future cellular networks. Therefore, cellular networks must continually evolve and adapt to new requirements. NB-IoT is an example of this evolution, as it leverages LTE technology to provide IoT support. However, new technologies like NB-IoT and eMTC still present unresolved research issues and uncertainties. Further research is needed to address these challenges and support the growth of IoT in cellular networks.

Various unknown factors, such as range and configurations, make it difficult to provide accurate details on the energy consumption of NB-IoT devices. Additionally, no experiments were conducted to analyze how the amount of data sent affects energy consumption, an area for future development.

Based on the findings of this chapter, there are several open issues and potential improvements that need to be addressed, including:

  1. Expanding the NB-IoT model proposed in this study to analyze the extended Discontinuous Reception (eDRX) and PSM performance.

  2. Extending the analysis to cover extended coverage areas.

  3. Conducting experimental measurements of battery lifetime, considering the non-ideal characteristics of actual batteries, such as self-discharge and temperature variations.

  4. Investigating alternative antenna schemes that can improve the Signal-to-Noise Ratio (SNR) without increasing the User Equipment (UE) complexity.