
5.1 Introduction

Link quality estimation plays a crucial role in both routing protocols and mobility management mechanisms.

Taking link quality into account during routing is a prerequisite for overcoming link unreliability and maintaining acceptable network performance. Indeed, delivering data over high quality links (i) reduces the number of packet retransmissions in the network, (ii) increases its throughput and (iii) ensures a stable topology. This implies that efficient routing metrics should integrate not only the path length criterion, in terms of hops or communication delay, but also the path’s global quality. Path quality is evaluated based on the assessment of the links that compose the path. Hence, in the first part of this chapter, we address the question of how to design an efficient routing metric, based on a novel LQE, that ensures reliable end-to-end delivery.

Mobility management is a wide area that covers various aspects such as handoff processes, re-routing, re-addressing and security issues. Within the scope of this book, our main focus is on the handoff process, as it leverages reliable link quality estimation. Handoff refers to the process whereby a mobile node disconnects from one point of attachment and connects to another. Hence, the handoff process greatly relies on link quality estimation. This fact motivates us to address the question of how to use link quality estimation for an efficient handoff in mobility management solutions.

5.2 On the Use of Link Quality Estimation for Routing

5.2.1 Link Quality Based Routing Metrics

Link quality-based routing metrics consider the criterion of global path quality in path selection. They may also integrate other criteria, such as path length in terms of hop count or communication delay, and node energy, depending on the application requirements.

Path quality is determined through the assessment of the links composing the path. Depending on the link quality estimator category, the path quality can be the sum (e.g., for RNP-based LQEs), the product (e.g., PRR-based LQEs), the max/min (e.g., hardware-based LQEs and score-based LQEs), or any other function of link quality estimates over the path. Next, we overview a set of representative routing metrics to illustrate this statement.

The DoUble Cost Field HYbrid (DUCHY) [1] and SP(t) [2] are two routing metrics that select routes with minimum hops and high quality links. For DUCHY, each node maintains a set of neighbors that are nearer, in terms of hops, to the tree root. Then, the parent node is selected among the maintained set of neighbors as the one that has the best link quality. Link quality estimation is performed using both CSI (Channel State Information) and RNP. As for SP(t), each node maintains a set of neighbors that have link quality exceeding a threshold t. Link quality estimation is performed using WMEWMA. Then, the parent node is selected among the maintained set of neighbors as the nearest one, in terms of hops, to the tree root.

ETX [3] and four-bit [4] are two link quality estimators that have been extensively used as routing metrics. Both approximate the RNP (RNP-based category). Using ETX or four-bit, the path cost is the sum of quality estimates of its links. This path cost can be generalized to any RNP-based link estimator, since the number of packet retransmissions over the path is typically the sum of packet retransmissions of each link composing the path.

MAX-LQI and Path-DR [5] aim to select the most reliable path, regardless of its hop count. MAX-LQI selects the path having the highest minimum LQI over the links that compose the path. Path-DR approximates the link PRR using LQI measurements and then evaluates the path cost as the product of link PRRs. Path-DR selects paths having the maximum of this product. The product of link estimates can be generalized to any PRR-based LQE.

The aforementioned link quality based routing metrics use traditional LQEs, such as PRR, RNP, four-bit, and LQI. These LQEs are not sufficiently accurate, as they either rely on a single link quality metric or use simple but inaccurate techniques, such as EWMA filtering, to combine link quality metrics. Further, these metrics can only capture one link aspect, such as link delivery or the number of packet retransmissions over the link (refer to Chap. 4 for more details on the limitations of these LQEs). On the other hand, F-LQE was shown to be more reliable and more stable than these LQEs, as it takes into account several important link aspects. The accuracy of link quality estimation greatly affects the effectiveness of link quality based routing metrics.

In [6], the authors propose using F-LQE to design an efficient link quality based routing metric. In other words, the authors addressed the question of how to use reliable link quality estimation provided by F-LQE to build a routing metric that improves routing performance, e.g., in terms of end-to-end packet delivery. They call their routing metric FLQE-RM (Fuzzy Link Quality Estimator based Routing Metric). FLQE-RM has three main design requirements:

  • First, FLQE-RM should correctly evaluate the path cost based on individual link costs, i.e., F-LQE link quality estimates. This requirement should be carefully addressed, as F-LQE can be efficient on a link basis yet perform poorly at the path level if the path cost is evaluated inadequately. Such a situation may result in a dramatic reduction of routing performance.

  • Second, path cost evaluation should take into account not only the path global quality but also the weakest quality link in the path. In fact, a path may have the highest global quality among candidate paths, yet it may still contain a weak quality link. This situation leads to several packet losses over this link, which negatively affects the routing performance, such as the end-to-end packet delivery.

  • Third, FLQE-RM should favor the selection of short paths. In fact, selecting short paths reduces the number of transmissions over the path and also the number of nodes involved in packet delivery, which conserves nodes’ energy and thus extends the network lifetime.

Based on these requirements, FLQE-RM is defined as follows:

$$\begin{aligned} FLQE\_RM = \sum _{i \in Path} \frac{1}{FLQE_{i}} \end{aligned}$$
(5.1)

\(\frac{1}{FLQE_{i}}\) is the cost of the link \(i\). Thus, FLQE-RM defines the path cost as the sum of the links’ costs. The path having the minimal cost is selected. FLQE-RM takes into account the global path quality and implicitly favors the selection of short paths thanks to the link cost definition. Indeed, by defining the link cost as \(\frac{1}{FLQE_{i}}\) instead of \(FLQE_{i}\), the path selection is a minimization of the path cost instead of a maximization. Hence, the longer the path, the higher its cost, and thus the lower the chance it will be selected. The link cost definition also improves the effectiveness of FLQE-RM by avoiding paths having weak quality links: the lower the link quality, the higher its cost, which impacts the overall path cost and increases the probability that the path is rejected.
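To make the metric concrete, the following minimal Python sketch computes Eq. (5.1) for candidate paths and selects the cheapest one. It assumes, for illustration only, that F-LQE scores lie on a 0–100 scale; all names and values are hypothetical and do not correspond to the TinyOS implementation.

```python
def flqe_rm_cost(link_estimates):
    """Path cost per Eq. (5.1): sum of inverse F-LQE link estimates."""
    return sum(1.0 / flqe for flqe in link_estimates)

def select_path(candidate_paths):
    """Return the candidate path (a list of per-link F-LQE estimates) with minimal cost."""
    return min(candidate_paths, key=flqe_rm_cost)

# A short path with one weak link vs. a longer path made of good links:
path_a = [90.0, 25.0]          # the weak link (F-LQE = 25) inflates the cost
path_b = [85.0, 80.0, 90.0]    # three good links
print(flqe_rm_cost(path_a), flqe_rm_cost(path_b))   # ~0.051 vs. ~0.035
print(select_path([path_a, path_b]))                # path_b is selected
```

The example illustrates how the inverse-cost definition penalizes a single weak link strongly enough to favor a longer but more reliable path.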

Next, we show how FLQE-RM can indeed improve the routing performance when integrated into the CTP (Collection Tree Protocol) routing protocol.

5.2.2 Overview of CTP (Collection Tree Protocol)

As data collection is one of the most popular low-power wireless network applications, CTP has gained considerable interest in recent years. CTP establishes and maintains a routing tree, where the tree root is the ultimate sink node of the collected data. In CTP, three types of nodes can be identified:

  • The sink node: One node in the network advertises itself as a sink node (generally the node with id 0). It is the root of the routing tree. All other nodes forward information to the root based on the tree formed via link quality estimation.

  • The parent node: Except the sink node, each node has a parent, which represents the next hop towards the tree root. Each parent node has a certain number of child nodes.

  • The child node: It is associated with a single parent node and can in turn be the parent of other child nodes situated further below in the tree hierarchy. Notice that the data traffic flows from the child node to the parent node.

CTP is the reference protocol for the network layer of the TinyOS 2.x stack [7]. Due to its modularity, and also to the fact that it relies on a link quality based routing metric, we use it as a benchmark for analyzing the impact of different link quality based routing metrics on routing performance.

The CTP implementation contains three basic components: the link estimator, the routing engine and the forwarding engine. These components are shortly described next.

5.2.2.1 Link Estimator

This component is based on the Link Estimation Exchange Protocol (LEEP) [8] and four-bit [4]. Note that the implementation of four-bit in the Link Estimator component is slightly different from its specification in [4]. According to this implementation, four-bit combines a beacon-driven estimate (estETX) and a data-driven estimate (RNP) using the EWMA filter. RNP is computed over windows of DLQ transmitted/retransmitted data packets, and estETX is computed over windows of BLQ received beacons. The latter is given by the following expression:

$$\begin{aligned} {estETX}(BLQ, \alpha ) = \frac{1}{{SPRR}_{in}\times {SPRR}_{out}} - 1 \end{aligned}$$
(5.2)

where \({SPRR}_{in}\) is the PRR of the inbound link and \({SPRR}_{out}\) is the PRR of the outbound link, both smoothed using EWMA. \({SPRR}_{out}\) is gathered from a received beacon or data packet.
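As an illustration, a minimal Python sketch of this beacon-driven estimate follows, assuming a generic EWMA coefficient \(\alpha\); the actual TinyOS link estimator differs in its fixed-point arithmetic and update details.

```python
def ewma(old, new, alpha):
    """Exponentially weighted moving average used to smooth PRR samples."""
    return alpha * old + (1.0 - alpha) * new

def est_etx(sprr_in, sprr_out):
    """Beacon-driven estimate per Eq. (5.2): expected extra transmissions."""
    return 1.0 / (sprr_in * sprr_out) - 1.0

# Smooth the inbound PRR with a new window sample, then estimate the link.
sprr_in = ewma(old=0.9, new=0.8, alpha=0.9)   # -> 0.89
sprr_out = 0.95                               # reported by the neighbor
print(est_etx(sprr_in, sprr_out))             # ~0.18 extra transmissions
```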

Each node maintains a neighbor table, where each entry contains useful information for estimating the quality of the link to a particular neighbor. This information includes (i) the neighbor address; (ii) the sequence number of the last received beacon, the number of received beacons and the number of missed beacons (these are used for \({SPRR}_{in}\) computation); (iii) the inbound link quality (\({SPRR}_{in}\)) and the outbound link quality (\({SPRR}_{out}\)); (iv) the number of acknowledged packets and the total number of transmitted/retransmitted data packets (these are used to compute \({estETX}_{up}\)); (v) the link cost (four-bit estimate); and (vi) different flags that describe the state of the entry.
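The entry layout described above can be sketched as a simple data structure; the field names are illustrative and do not match the TinyOS code.

```python
from dataclasses import dataclass, field

@dataclass
class NeighborEntry:
    """One neighbor-table entry of the CTP link estimator (illustrative)."""
    address: int
    last_beacon_seq: int = 0
    beacons_received: int = 0
    beacons_missed: int = 0           # with the received count -> SPRR_in
    sprr_in: float = 0.0              # inbound link quality
    sprr_out: float = 0.0             # outbound link quality
    data_acked: int = 0
    data_sent: int = 0                # with the acked count -> estETX_up
    link_cost: float = float('inf')   # four-bit estimate
    flags: set = field(default_factory=set)  # e.g. {'PINNED', 'VALID'}
```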

The replacement policy in the neighbor table is governed by the use of the compare bit and the pin bit. The pin bit applies to neighbor table entries: when the pin bit is set on a particular entry, that entry cannot be removed from the table until the pin bit is cleared. The compare bit is checked when a beacon is received. It indicates whether the route provided by the beacon sender is better than the route provided by one or more of the entries in the neighbor table.

5.2.2.2 Routing Engine

This component is responsible for the establishment and maintenance of the routing tree for data collection. Each node maintains a routing table with an entry for each neighbor. An entry contains the following fields:

  • the address of the neighbor,

  • the address of the parent of this neighbor,

  • the cost of the neighbor, and

  • an indicator of whether the neighbor is congested.

The neighbor cost refers to the route cost from this neighbor to the sink. Generally, a node cost is computed as the cost of its parent plus the cost of its link to its parent. The link cost corresponds to the four-bit estimate. The cost of a route is computed as the sum of the links’ costs. Lower route costs are better. Note that the sink node has a cost equal to zero.

A node periodically updates its route to the sink, which corresponds to updating its parent. A parent update consists of searching the routing table for a neighbor that provides a route cost better than that provided by the current parent. To compute the route cost through a given neighbor, the node gets the neighbor cost from the routing table and the link cost to the neighbor from the neighbor table, and then sums the two values. To avoid frequent parent changes leading to an unstable topology, a node changes its parent only when a number of conditions are satisfied. For example, the new parent should provide a route cost lower than the current route cost by at least ParentChTh, a constant parameter defined by CTP.
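The parent-update rule can be sketched as follows; the table layouts are simplified placeholders, and only the comparison against ParentChTh mirrors the behavior described above.

```python
PARENT_CH_TH = 1.5  # default switching threshold used with four-bit

def maybe_switch_parent(current_route_cost, routing_table, link_costs):
    """Return a new parent only if it beats the current route cost by the threshold.

    routing_table: {neighbor: (advertised route cost to sink, congested?)}
    link_costs:    {neighbor: link cost reported by the link estimator}
    """
    best_addr, best_cost = None, float('inf')
    for addr, (neigh_cost, congested) in routing_table.items():
        if congested:
            continue
        cost = neigh_cost + link_costs.get(addr, float('inf'))  # route cost via addr
        if cost < best_cost:
            best_addr, best_cost = addr, cost
    if best_cost <= current_route_cost - PARENT_CH_TH:
        return best_addr      # switch parent
    return None               # keep the current parent
```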

The tree is maintained by beacons sent by each node according to an adaptive beaconing rate, so as to keep the number of beacons low while maintaining a consistent tree. When a node sends a beacon, it includes the address of its parent as well as its cost, i.e., the route cost from the node to the sink, in the beacon header. It also includes a list of neighbor entries in the beacon footer. A neighbor entry is composed of the neighbor address and the SPRR of the inbound link, \({ SPRR}_{in}\). When a node receives a beacon, it looks for its own address in the list of neighbor entries. When found, it extracts the \({ SPRR}_{in}\) and updates the SPRR of the outbound link, \({ SPRR}_{out}\), in its neighbor table.

5.2.2.3 Forwarding Engine

This component is responsible for queueing and scheduling outgoing data packets. Each node maintains a forwarding queue that adopts a set of rules to process data packets. For example, a data packet is ejected from the queue if it has been acknowledged or has reached the maximum retransmission count. When a node receives a data packet from a neighbor with a cost lower than its own cost, it drops the packet and signals an inconsistency in the network (loop detection). Data packets are automatically forwarded to the next hop in the tree, which corresponds to the parent node. When a node sends a data packet, it includes its cost in the packet header. As with beacons, the node includes a list of neighbor entries in the packet footer.
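A compact sketch of the two forwarding rules mentioned above (queue ejection and loop detection); the retry limit and data types are illustrative, not CTP constants.

```python
from collections import namedtuple

Packet = namedtuple('Packet', ['acked', 'retries', 'sender_route_cost'])

MAX_RETRIES = 30  # illustrative retransmission cap

def should_eject(pkt):
    """A queued packet is ejected once acknowledged or retried too often."""
    return pkt.acked or pkt.retries >= MAX_RETRIES

def on_data_received(my_route_cost, pkt):
    """Loop detection: a data packet from a node cheaper than us is inconsistent."""
    if pkt.sender_route_cost < my_route_cost:
        return 'DROP_AND_SIGNAL_INCONSISTENCY'
    return 'FORWARD_TO_PARENT'

print(should_eject(Packet(acked=False, retries=30, sender_route_cost=7.0)))  # True
print(on_data_received(5.0, Packet(False, 0, 3.0)))   # inconsistency (loop)
```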

5.2.3 Integration of FLQE-RM in CTP

To integrate the proposed F-LQE based routing metric into CTP, we have implemented F-LQE in the Link Estimator component, as a replacement for the four-bit estimator.

5.2.3.1 Beacon-Driven Link Quality Estimation

Recall that F-LQE combines four metrics, which are computed at the receiver side, i.e., based on received traffic:

  • SPRR: Smoothed Packet Reception Ratio over the link,

  • ASL: link ASymmetry Level,

  • SF: link Stability Factor, and

  • ASNR: link Average Signal-to-Noise Ratio.

Our implementation of F-LQE leverages broadcast control traffic (i.e., beacons), which is initiated by the CTP routing engine for topology control. F-LQE can also be implemented based on data traffic, which requires overhearing incoming packets.

CTP uses an adaptive beaconing rate that changes according to the topology consistency. In our implementation, we disabled this mechanism and we used a constant beaconing rate of 1 beacon/s.

5.2.3.2 Channel Quality Assessment

In F-LQE, channel quality is assessed by ASNR. However, SNR is not the optimal choice in the context of routing, where the node needs to quickly switch to a better parent when the current parent breaks down. In fact, SNR computation is relatively time consuming, as it involves two separate operations: it is derived by subtracting the noise floor (N) from the received signal strength (S), where S is obtained by sampling the RSSI at packet reception and N is derived from the RSSI sample taken just after the packet reception. Therefore, in the F-LQE implementation, we substitute SNR with LQI (Link Quality Indicator), which assesses channel quality in a single operation while still providing acceptable accuracy.
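Once both RSSI samples are available, the two-step SNR computation reduces to a subtraction, as the sketch below shows; the dBm values are purely illustrative.

```python
def snr_from_rssi(rssi_at_reception_dbm, noise_floor_dbm):
    """SNR (dB) = received signal strength minus the noise floor, both in dBm.

    The signal sample is taken at packet reception; the noise floor is sampled
    immediately afterwards, hence the two separate radio reads.
    """
    return rssi_at_reception_dbm - noise_floor_dbm

# Example: an RSSI of -85 dBm with a -95 dBm noise floor gives a 10 dB SNR.
print(snr_from_rssi(-85.0, -95.0))  # 10.0
```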

5.2.3.3 Link Direction

In CTP tree routing, data travel from child to parent. In order to select their parents, child nodes need to assess direct links, i.e., \(child\rightarrow parent\) links. Although F-LQE takes link asymmetry into consideration through the ASL metric, it evaluates the reverse link, i.e., the \(parent\rightarrow child\) link (because each of SPRR, SF, and ASNR provides a reverse link estimate). Using the reverse link estimate to decide about the direct link for parent selection leads to misleading routing decisions. Therefore, we define two F-LQE estimates: \(\mathrm{{F-LQE}}_{in}\) and \(\mathrm{{F-LQE}}_{out}\). \(\mathrm{{F-LQE}}_{in}\) is the F-LQE for the reverse link (i.e., the inbound link). It is computed by each node based on incoming beacons. \(\mathrm{{F-LQE}}_{out}\) is the F-LQE for the direct link (i.e., the outbound link), and it is gathered from received packets. \(\mathrm{{F-LQE}}_{in}\) and \(\mathrm{{F-LQE}}_{out}\) are stored in the neighbor table, with respect to each neighbor node. As reported in Sect. 5.2.2, CTP defines a list of neighbor entries that is included in the footer of each sent packet. In our implementation, a neighbor entry is composed of the neighbor address, \(\mathrm{{PRR}}_{in}\), and \(\mathrm{{F-LQE}}_{in}\). When a node receives a packet, it extracts \(\mathrm{{PRR}}_{in}\) and \(\mathrm{{F-LQE}}_{in}\) and stores them in its neighbor table, specifically in the \(\mathrm{{PRR}}_{out}\) and \(\mathrm{{F-LQE}}_{out}\) fields. The Link Estimator component is used by the Routing Engine to obtain the link cost, which corresponds to \(\frac{1}{\mathrm{{F-LQE}}_{out}}\).
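The direction swap performed on packet reception can be sketched as follows; the footer layout and the table structure are simplified for illustration.

```python
def process_footer(my_address, sender, footer_entries, neighbor_table):
    """Store the sender's advertised inbound estimates as our outbound estimates.

    footer_entries: list of (listed address, PRR_in, F-LQE_in) tuples.
    neighbor_table: {neighbor address: {'prr_out': ..., 'flqe_out': ...}}
    """
    for addr, prr_in, flqe_in in footer_entries:
        if addr == my_address:                 # the sender's inbound link to us ...
            entry = neighbor_table.setdefault(sender, {})
            entry['prr_out'] = prr_in          # ... is our outbound link to it
            entry['flqe_out'] = flqe_in

def link_cost(neighbor_table, neighbor):
    """Cost handed to the Routing Engine: 1 / F-LQE_out for that neighbor."""
    return 1.0 / neighbor_table[neighbor]['flqe_out']
```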

5.2.3.4 Parent Update

Nodes update their parents when the new parent is better than the current one by ParentChTh. This constant parameter depends on the routing metric. We set it to 4 for F-LQE based routing metrics (based on several experimental measurements). The ParentChTh for four-bit is equal to 1.5 (default value).

5.2.3.5 Routing Engine

Like four-bit, FLQE-RM selects parents that lead to minimal path costs, where a path cost is the sum of its link costs. Hence, the implementation of FLQE-RM does not require major modifications in the Routing Engine component.

5.2.4 Impact of FLQE-RM on CTP Performance

In this section, we investigate the impact of FLQE-RM on the performance of CTP using experimentation with real WSN platforms. Further, we compare the impact of FLQE-RM to that of four-bit, the default metric of CTP, as well as ETX [3]. Both four-bit and ETX are considered by the community as representative and reference metrics.

In our study, the considered performance metrics are the following (a computation sketch follows the list):

  • Packet Delivery Ratio (PDR). It is computed as the total number of delivered packets (at the sink node, i.e., the root) over the total number of sent packets (by all source nodes). This metric indicates the end-to-end reliability of routing protocols.

  • Average number of retransmissions across the network per delivered packet (RTX). This metric is of paramount importance for low-power wireless networks, as it greatly affects the network lifetime. In fact, communication is the most energy consuming operation for a sensor node. Therefore, efficient routing protocols try to minimize packet retransmissions by delivering data over high quality links, which extends the network lifetime.

  • Average number of parent changes per node (ParentCh). This metric is an indicator of topology stability. The number of parent changes depends on two factors: the ParentChTh parameter of CTP, and the agility of the LQE in detecting link quality changes. Too many parent changes lead to an unstable topology, but improve the quality of routes and thus the routing performance (e.g., PDR and RTX). On the other hand, too few parent changes lead to a stable topology but also to paths with potentially lower quality. Hence, an agile LQE, along with a good ParentChTh choice, leads to a good tradeoff between topology stability and route quality.

  • Average path lengths, i.e., average Hop Count. It is important that link quality aware routing protocols minimize route lengths in order to reduce (i) the number of packet transmissions to deliver a packet, (ii) the number of involved nodes for data delivery, and possibly (iii) the end-to-end latency (in case the involved nodes are not overloaded).
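The following sketch shows how these four metrics can be computed from aggregate experiment counters; the counter names and the example numbers are hypothetical.

```python
def performance_metrics(delivered, sent, retransmissions, parent_changes,
                        hop_counts, num_nodes):
    """Compute the four study metrics from aggregate experiment counters.

    delivered / sent: packet totals at the sink / over all source nodes.
    retransmissions:  total retransmissions summed over the network.
    parent_changes:   total parent changes summed over all nodes.
    hop_counts:       hop count of each delivered packet.
    """
    return {
        'PDR': delivered / sent,
        'RTX': retransmissions / delivered,       # per delivered packet
        'ParentCh': parent_changes / num_nodes,   # per node
        'HopCount': sum(hop_counts) / len(hop_counts),
    }

print(performance_metrics(900, 1000, 1143, 242, [4] * 900, 121))
# {'PDR': 0.9, 'RTX': 1.27, 'ParentCh': 2.0, 'HopCount': 4.0}
```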

5.2.4.1 Experiments Description

In our experimental study, we resort to remote testbeds (i.e., general-purpose testbeds) for large scale experiments. Examples of remote testbeds include MoteLab [9], Indriya [10], Twist [11], Kansei [12], and Emulab [13].

Remote testbeds are designed to be remotely used by several users around the world. Roughly, they are composed of four building blocks: (i) the underlying low-power wireless network (i.e., a set of sensor nodes), (ii) a network backbone providing reliable channels to remotely control sensor nodes, (iii) a server that handles sensor node reprogramming and data logging into a database, and (iv) a web interface coupled with a scheduling policy to allow testbed sharing among several users. Testbed users must be experts in the programming environment supported by the testbed (e.g., TinyOS, Emstar) in order to provide executable files for mote programming. They must also create their own software tools to analyze the experimental data and produce results.

Our experimental study is carried out on both the MoteLab [9] and Indriya [10] testbeds. MoteLab consists of 190 TMote Sky motes deployed over three floors of a Harvard University building, and Indriya consists of 127 TelosB motes deployed over three floors of a National University of Singapore (NUS) building. In both testbeds, node placement is very irregular. Node programming is performed using TinyOS.

In contrast to Indriya, which is a recently released testbed, MoteLab has been serving the WSN community for six years. Hence, around 100 nodes in MoteLab are not working, mostly due to aging hardware. Further, the number of working nodes in both testbeds varies over time for many reasons, such as hardware failures and human activity. Our experiments were conducted between April and July 2011, when 72 nodes from MoteLab and 121 nodes from Indriya were available.

Using low transmission powers for sensor nodes leads to more intermediate quality links, and thus allows us to better evaluate link quality based routing metrics. However, this may lead to a partitioned network, as some nodes may not be able to join the network due to poor connectivity. Hence, the transmission power should be set so as to obtain as rich a set of links as possible (i.e., links of different qualities) while preserving network connectivity. To this end, we set the transmission power to \({-}\)25 dBm for Indriya experiments and to 0 dBm for MoteLab experiments. These values were determined through several experiments, in which we set the transmission power to different values and checked the connectivity of the network through the graphical interface provided by the testbed software.

Our experiments consist of a many-to-one application scenario where nodes generate traffic at a fixed rate, destined to the sink node. Data collection is performed using CTP, with a fixed beacon rate (1 packet (pkt)/s). Nodes use the default MAC protocol in TinyOS, B-MAC. Recall that we set the transmit power to \({-}\)25 dBm for Indriya experiments and to 0 dBm for MoteLab experiments. The radio channel is set to 26 to avoid interference with co-existing networks such as Wi-Fi. Most of the experiments were conducted on Indriya, as it provides more active nodes (121 nodes) than MoteLab (72 nodes). Each experiment lasts 60 min. Nodes begin their transmissions after a delay of 10 min to allow for topology establishment.

Table 5.1 Experiment sets
Fig. 5.1

a Packet delivery ratio (PDR). b Average number of packet retransmissions (RTX). c Average routes hop count (Hop Count). d Average number of parent changes (ParentCh). Impact of FLQE-RM, four-bit, and ETX on CTP performance, using the Indriya testbed (refer to Table 5.1—Set 1)

Experiments are divided into different sets. In each experiment set, we varied a certain parameter to study its impact, and the experiment was repeated for each parameter modification. The parameters under consideration were the testbed in use, the traffic load, the topology, and the number of source nodes. Table 5.1 depicts the settings for each experiment set.

5.2.4.2 Experimental Results

5.2.4.2.1 Performance for Different Testbeds

We begin by assessing the overall impact of FLQE-RM, four-bit and ETX on CTP routing performance, using the Indriya testbed (refer to Table 5.1—Set 1 of experiments). Each experiment is repeated 5 times. Experimental results are illustrated in Fig. 5.1.

Fig. 5.2

a Packet delivery ratio (PDR). b Average number of packet retransmissions (RTX). c Average routes hop count (Hop Count). d Average number of parent changes (ParentCh). Impact of FLQE-RM, four-bit, and ETX on CTP performance, using the MoteLab testbed (refer to Table 5.1—Set 1)

Figure 5.1 shows that FLQE-RM provides better routing performance than four-bit and ETX, as it is capable of delivering more packets (Fig. 5.1a), with fewer retransmissions (Fig. 5.1b), fewer parent changes (Fig. 5.1d), and through shorter routes (Fig. 5.1c).

Figure 5.1a shows that ETX has a very low PDR compared with FLQE-RM and four-bit. This can be explained by the fact that ETX is not able to identify high quality routes for data delivery. One of the reasons is the unreliability of ETX as an LQE, i.e., ETX is not an accurate metric for link quality estimation. Further, ETX is unstable, as it leads to frequent parent changes (Fig. 5.1d). Parent changes may lead to several packet losses. The unreliability and instability of ETX were confirmed in Chap. 2, when we analyzed the statistical properties of different LQEs, including ETX, independently of higher layer protocols, especially routing.

Network conditions, especially the nature of the surrounding environment (e.g., indoor/outdoor, static/mobile obstacles, the geography of the environment), the type of the platform, and even the climate conditions (e.g., temperature, humidity), affect the quality of the underlying links, and thus impact the network performance. For this reason, we have investigated the performance of FLQE-RM, four-bit, and ETX using a different testbed from Indriya. Experimental results obtained with MoteLab (refer to Table 5.1—Set 1 of experiments) are depicted in Fig. 5.2. From this figure, two main observations can be made. First, by examining the PDR in Fig. 5.2a, it can be inferred that links in MoteLab have worse quality than those in Indriya, as the maximum achieved PDR (by FLQE-RM) is equal to 75 %. Second, the MoteLab experimental results confirm that FLQE-RM leads to the best routing performance and ETX leads to the worst. This observation can be explained by F-LQE's reliability. Indeed, we have shown in Chap. 3 that F-LQE provides a fine grained classification of links, especially intermediate links (better than four-bit and ETX).

Fig. 5.3

Performance as a function of the traffic load (refer to Table 5.1—Set 2)

5.2.4.2.2 Performance as a Function of the Traffic Load

We have assessed the impact of FLQE-RM, four-bit and ETX on CTP routing performance for different traffic loads. The experiment settings are presented in Table 5.1—Set 2, and Fig. 5.3 illustrates the experimental results. With a higher traffic load, the congestion level of the network increases, which leads to packet losses induced by buffer overflows as well as MAC collisions.

For traffic loads less than or equal to 1 pkt/s, Fig. 5.3 shows that FLQE-RM performs better than four-bit and ETX: It increases the PDR and reduces the number of parent changes. If we observe RTX and Hop count together, it can be inferred that FLQE-RM reduces the global number of packet transmissions (i.e., Hop count) and retransmissions (i.e., RTX), compared with ETX and four-bit. For example, for traffic load equal to 1 pkt/s, FLQE-RM has RTX equal to 1.27 and Hop count equal to 4.56, while ETX has RTX equal to 1.123 and a Hop count equal to 4.86. Thus, overall, FLQE-RM reduces the number of packet transmissions and retransmissions (5.83) compared with ETX (5.98).

For a traffic load of 2 pkts/s, Fig. 5.3 shows that FLQE-RM provides slightly better (or nearly equal) performance compared to four-bit. This might be due to the fact that four-bit has more information on link status, as the data rate (2 pkts/s) is double the beacon rate (1 pkt/s). Recall that four-bit uses both beacon traffic and data traffic for link quality estimation, while FLQE-RM and ETX perform link quality estimation based on beacon traffic only. Figure 5.3 also shows that for a traffic load of 2 pkts/s, ETX outperforms FLQE-RM and four-bit in terms of all performance metrics except parent changes. This observation likely pertains to CTP, which does not contain any explicit congestion control mechanism, as it is designed for low data-rate applications.

5.2.4.2.3 Performance as a Function of the Number of Source Nodes

We have analyzed the impact of FLQE-RM, four-bit and ETX on CTP routing performance while varying the number of source nodes. The experiment settings are presented in Table 5.1—Set 3 and the experimental results are illustrated in Fig. 5.4. By default, all nodes except the root node (i.e., 121 nodes) are data sources (refer to Table 5.1). By decreasing the number of source nodes, the congestion level of the network decreases, which reduces the number of packet losses induced by collisions or buffer overflow.

Figure 5.4 shows that, overall, FLQE-RM leads to the best performance and ETX leads to the worst. Comparing Figs. 5.3 and 5.4, it can be seen that, in terms of PDR, the routing metrics are generally more sensitive to variations in traffic load than to variations in the number of source nodes. This is due to the considered data traffic rate (0.125 pkt/s), which is low enough to avoid network congestion for any number of source nodes.

5.2.4.2.4 Performance as a Function of the Topology

The network topology has a significant impact on routing performance [14]. To examine the impact of the topology on CTP routing, we considered different sink (root node) placements. Hence, for each CTP version, based on a particular routing metric (FLQE-RM, four-bit or ETX), we carried out a set of experiments, while varying the sink node assignment, i.e., varying the Root ID (refer to Table 5.1—Set 4).

Figure 5.5 illustrates the routing performance with respect to each routing metric as a function of the root ID assignment. This figure confirms the impact of the topology on routing performance. Further, it shows that again, FLQE-RM leads to the best performance and ETX leads to the worst, for all considered sink assignments.

5.2.4.3 Results Review

This section provides a review of our experimental results with the 122-node Indriya testbed, as illustrated in Tables 5.2, 5.3, and 5.4. These tables show that, overall, FLQE-RM improves the end-to-end packet delivery (PDR) by up to 16 % over four-bit (Table 5.2) and up to 24 % over ETX (Table 5.4). It also reduces the number of retransmissions per delivered packet by up to 32 % over both four-bit and ETX (Table 5.3). The Hop count metric reflects the average route length as well as the average number of packet transmissions needed to deliver a packet. FLQE-RM reduces the Hop count by up to 4 % over four-bit (Tables 5.3 and 5.4) and up to 45 % over ETX (Table 5.3). The ParentCh metric is an indicator of topology stability. FLQE-RM improves topology stability by up to 47 % over four-bit (Table 5.3) and up to 92 % over ETX (Table 5.4).

Table 5.2 Overall results for Indriya experiments, where 121 nodes are data sources and the node with ID equal to 1 is selected as root, averaged over all considered traffic loads
Fig. 5.4

Performance as a function of the number of source nodes (refer to Table 5.1—Set 3)

Fig. 5.5

Performance as a function of the topology (refer to Table 5.1—Set 4)

Table 5.3 Overall results for Indriya experiments, where the traffic load is fixed to 0.125 pkt/s and the node with ID equal to 1 is selected as root, averaged over all considered number of source nodes
Table 5.4 Overall results for Indriya experiments, where 121 nodes are data sources and the traffic load is fixed to 0.125 pkt/s, averaged over all considered Root ID assignments

5.2.4.4 Memory Footprint and Computation Complexity

We measured the memory footprint of four-bit, ETX, and FLQE-RM, in terms of RAM and ROM consumption. As shown in Table 5.5, a sensor node (precisely, a TelosB mote) running FLQE-RM as the routing metric has a total ROM footprint of 27.10 KB and a total RAM footprint of 4.47 KB. Compared to four-bit and ETX, FLQE-RM has a larger memory footprint, as depicted in Table 5.5. Nevertheless, today's sensor platforms provide more memory than FLQE-RM consumes. For example, a TelosB mote has a total ROM of 48 KB and a total RAM of 10 KB. Our experimental study with Indriya and MoteLab shows that the FLQE-RM metric can be implemented on TelosB and TMote Sky motes.

Table 5.5 Memory footprint of four-bit, ETX, and FLQE-RM

FLQE-RM relies on the F-LQE estimator, which is computationally more complex than four-bit and ETX. Typically, F-LQE computes four link quality metrics (SPRR, ASL, SF, and ALQI), applies piecewise linear membership functions to these metrics, and then combines the different membership levels through a dedicated equation. On the other hand, four-bit combines two link quality metrics through a simple weighted sum (the EWMA filter), and ETX uses a single link quality metric.
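To give a feel for the extra work involved, the sketch below follows the structure described here: piecewise linear memberships combined into a single score. The membership bounds and the min/mean mix are illustrative placeholders only; the actual F-LQE membership functions and combination equation are those defined in Chap. 3.

```python
def membership(x, low, high):
    """Piecewise linear membership: 0 below low, 1 above high, linear in between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def flqe(sprr, asl, sf, alqi, beta=0.6):
    """Illustrative fuzzy combination of the four metric membership degrees."""
    mu = [
        membership(sprr, 0.0, 1.0),     # smoothed packet reception ratio
        membership(-asl, -40.0, 0.0),   # low asymmetry is good (placeholder bounds)
        membership(-sf, -3.0, 0.0),     # low variability is good (placeholder bounds)
        membership(alqi, 60.0, 110.0),  # average LQI (placeholder bounds)
    ]
    score = beta * min(mu) + (1 - beta) * sum(mu) / len(mu)  # placeholder mix
    return 100.0 * score                # score on a 0..100 scale

print(flqe(sprr=0.9, asl=5.0, sf=1.0, alqi=95.0))  # ~71
```

Even in this simplified form, the estimator evaluates four memberships plus a combination per update, whereas four-bit performs a single EWMA update and ETX uses one metric.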

5.2.5 Discussion

FLQE-RM, four-bit, and ETX build routing trees based on link quality estimation. Typically, an efficient routing metric (i) reduces the number of packet transmissions and retransmissions in the network, (ii) increases its delivery and (iii) ensures a stable topology. Our experimental study demonstrates that FLQE-RM establishes and maintains the routing tree better than four-bit and ETX, as it generally presents the highest PDR and the lowest RTX, Hop count and ParentCh. The effectiveness of FLQE-RM as a routing metric can be attributed to (i) the accuracy of its link quality estimation as well as (ii) the efficiency of its path cost evaluation.

In the context of CTP routing, all routing decisions are based on link quality estimation. Therefore, the accuracy of link quality estimation significantly impacts the effectiveness of routing metrics: the more accurate the estimate, the better the routing decisions. In the previous chapter, we have shown that F-LQE is more accurate than four-bit and ETX, as it provides a fine grained classification of links, especially intermediate links (these are the most difficult to assess). Thus, our experimental results confirm the accuracy of F-LQE, which translates into correct routing decisions.

The effectiveness of a routing metric depends not only on the accuracy of link quality estimation, but also on how link estimates are used to evaluate the path cost. The FLQE-RM path cost function allows selecting paths composed of high quality links, while avoiding those that contain a few weak links among high quality ones. This path cost function also favors the selection of short paths. In fact, the path cost functions of the four-bit and ETX metrics share these features: they take into account the global path quality and implicitly favor the selection of short paths that do not contain poor links. Hence, what makes FLQE-RM more effective than four-bit and ETX is the accuracy of link quality estimation through the use of F-LQE.

Our experimental results also show that four-bit performs better than ETX. This result mainly pertains to the accuracy of link quality estimation. Four-bit takes into account more link aspects than ETX, as it combines RNP and estETX, where estETX is an ETX estimate smoothed using the EWMA filter.

The better performance of FLQE-RM over four-bit and ETX does not come without a price. As we have shown above, FLQE-RM involves higher memory footprint and computation complexity.

5.3 On the Use of Link Quality Estimation for Mobility Management

5.3.1 Link Quality Estimation for Mobile Applications

Nowadays, mobility is one of the major requirements in several emerging ubiquitous and pervasive sensor network applications, including health-care monitoring, intelligent transportation systems and industrial automation [15–17]. In some of these scenarios, mobile nodes are required to transmit data to a fixed-node infrastructure in a timely and reliable fashion. For example, in clinical health monitoring [18, 19], patients have small sensing devices embedded in their bodies that report data through a fixed wireless network infrastructure. In these types of scenarios, it is necessary to provide a reliable and constant stream of information.

Mobility management is a wide area which covers various aspects such as the handoff process, re-routing, re-addressing and security issues. Given the scope of this book, our main focus is on the handoff process. Handoff refers to the process whereby a mobile node disconnects from one point of attachment and connects to another. Hence, the handoff process greatly relies on link quality estimation. This fact motivates us to address the question of how to use link quality estimation for an efficient handoff in mobility management solutions.

In mobile applications, especially those deployed in harsh environments with rapid variations of the wireless channel, what is important for a fast handoff decision is not just the accuracy of link quality estimation, but also the possibility of gathering an instantaneous link quality estimate at the time of transmission. However, accuracy and responsiveness in link estimation are two conflicting requirements. As discussed in Chap. 3, accurate link quality estimation requires the combination of several link properties to provide a snapshot of the real link status. Such a combination of several link quality metrics into one composite LQE is time consuming, because it requires averaging over several link measurements (e.g., to compute PRR). Consequently, a composite LQE may not be able to provide timely link quality estimation, which has a negative impact on the effectiveness of the handoff scheme.

We argue that in mobile applications, single-metric LQEs such as RSSI and SNR, which are considered not sufficiently accurate [6], have the advantage of being responsive and thus would be more appropriate for the handoff process. However, which LQE to use, and how to tune it for a fast handoff, are inevitable questions that are addressed next.

5.3.2 Overview of Handoff Process

A naive handoff solution in applications with mobile users is to broadcast information to all neighboring static nodes, known as access points (APs), within the transmission range. Broadcasts lead to redundant information at neighboring APs. This also implies that the fixed infrastructure wastes resources in forwarding the same information to the end point.

A more efficient solution for mobile nodes is to use a single AP to transmit data at any given time. This alternative requires nodes to perform reliable and fast handoffs between neighboring APs. In practice, a handoff starts when the quality of the link with the current (serving) AP drops below a given threshold (\(TH_{low}\)) and stops when the node finds a new AP with the required link quality (\(TH_{high}\)). The most important issues that should be considered when designing a handoff mechanism for low-power networks are as follows:

5.3.2.1 Types of Handoff

The type of handoff is dictated by the capabilities of the radio, standards and technologies. Handoffs are classified into two main categories: hard handoffs and soft handoffs.

The soft handoff technique in wireless cellular networks uses multiple channels at the same time. This characteristic enables a mobile node to communicate with several APs and assess their link qualities while transmitting data to the serving AP. It is possible to perform a soft handoff by utilizing network-based mobility management, as supported by Mobile IPv6. However, the use of IPv6 imposes extra overhead and drastically increases the energy consumption of the network. The soft handoff approach is thus feasible for low-power wireless networks, but impractical for many applications.

In a hard handoff, the radio can use only one channel at any given time, and hence, it needs to stop the data transmission before the handoff process starts. Consequently, in hard handoffs it is central to minimize the time spent looking for a new AP. Low-power nodes typically rely on low-power radio transceivers that can operate on a single channel at a time, such as the widely used CC2420. This implies that current low-power wireless networks should utilize a hard handoff approach.

5.3.2.2 Impact of Low-Power Links on Handoff

Low-power links have two characteristics that affect the handoff process: short coverage and high variability [20].

Short coverage implies a low density of access points within range. In cellular networks, for example, it is common to be within the range of tens of APs. This permits the node to be conservative with thresholds and to select links with very high reliability. On the other hand, sensor networks may not be deployed in such high densities, and hence, the handoff should relax its link quality requirements. In practice, this implies that the handoff parameters should be more carefully calibrated within the (unreliable) transitional region.

The high variability of links has an impact on stability. When not designed properly, handoff mechanisms may degrade the network performance due to the ping-pong effect, which consists of mobile nodes performing consecutive and redundant handoffs between two APs due to sudden fluctuations in their link qualities. This usually happens when a mobile node moves along the boundary between two APs. Hence, to be stable, a handoff mechanism should calibrate the appropriate thresholds according to the particular variance of its wireless links.

5.3.2.3 Handoff Triggering

The first step in a handoff scheme is to determine when a node should deem a link weak and start looking for another AP. We call this step, which is entirely based on link quality estimation, handoff triggering. In the sensor network community, the de-facto way to classify links is to use the connected, transitional and disconnected regions (refer to Chap. 1 for the description of these regions).

Fig. 5.6

Low-power link model. a RSSI versus PRR. b SNR versus PRR

We use RSSI and SNR in order to identify these regions. Hence, we gathered RSSI and SNR values at different parts of a building using different nodes. The results illustrated in Fig. 5.6 have been collected by sampling many signals (every 10 ms) during a mobile node (MN) trip from one AP to another AP (with a transmission power of \({-}\)25 dBm). The figures depict these three regions for RSSI and SNR [19], which agree with the studies in [21]. The SNR is calculated by measuring the noise floor immediately after receiving the packet and then subtracting it from the RSSI value. The RSSI regions can be mapped directly to the SNR ones by subtracting the average noise floor. The graphics illustrate that in the transitional region, the RSSI values lie in the range [\({-}\)92 dBm, \({-}\)80 dBm] and the SNR values lie in the range [5 dB, 17 dB].
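Using the boundaries read from Fig. 5.6, a link can be classified with two simple threshold tests, as sketched below; the exact boundaries depend on the platform and the environment.

```python
def rssi_region(rssi_dbm):
    """Classify a link from its RSSI, using the Fig. 5.6 boundaries."""
    if rssi_dbm > -80.0:
        return 'connected'
    if rssi_dbm >= -92.0:
        return 'transitional'    # roughly [-92 dBm, -80 dBm]
    return 'disconnected'

def snr_region(snr_db):
    """Equivalent classification in the SNR domain (noise floor subtracted)."""
    if snr_db > 17.0:
        return 'connected'
    if snr_db >= 5.0:
        return 'transitional'    # roughly [5 dB, 17 dB]
    return 'disconnected'

print(rssi_region(-85.0), snr_region(10.0))  # both 'transitional'
```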

5.3.2.4 Handoff Parameters

The process of switching from one AP to another should be performed wisely, such that the ping-pong effect is minimized. Figure 5.7 depicts the two cases of an efficient and an inefficient handoff mechanism. In this example, the experiment encompasses two APs and a mobile node. The y-axis shows the RSSI detected by the serving AP and the vertical bars denote the handoffs performed. Note that the RSSI is measured at the AP side. The transitional region in sensor networks, for the CC2420 radio transceiver, encompasses approximately the range shown in Fig. 5.6. Intuition may dictate that it is better to perform the handoff in the connected region, with more reliable links. A conservative approach is depicted in Fig. 5.7a, which considers \({-}\)85 dBm as the lower threshold (\(TH_{low}\)), with the upper threshold (\(TH_{high}\)) 1 dB higher. These parameters lead to a negative effect: a long delay and three handoffs for a mobile node moving between the two contiguous APs (ping-pong effect). Figure 5.7b shows that by considering a wider margin, deeper into the transitional region, the ping-pong effect disappears and the delay is greatly reduced. This mechanism, which involves a disconnection period, is an example of the case where the MN has a single radio and does not support IP. A careful calibration of the parameters can reduce the disconnection period, which is referred to as the handoff delay.

Fig. 5.7

a An example of an inefficient handoff. b An example of an efficient handoff [19]

There are various parameters involved in a handoff process, which must be defined on the MN and AP devices. The threshold levels and the hysteresis margin are the most important parameters; they define the starting and ending moments of a disconnection. The lowest threshold has to consider the boundaries of the transitional region. If the threshold is too high, the node could perform unnecessary handoffs, and if the threshold is too low, the node may use unreliable links. If the margin is too narrow, the mobile node may end up performing unnecessary and frequent handoffs between two APs (ping-pong effect). If the margin is too wide, the handoff may take too long, which ends up increasing the delay and decreasing the delivery rate.

5.3.3 Soft Handoff in Low-Power Wireless Networks

As previously described, there are two major strategies for the handoff process: soft handoff with a network layer solution, and hard handoff with a MAC layer solution. The first approach, which neglects the energy efficiency issue, has been extended in [22, 23].

In [22], the problem of a mobile sensor node (SN) handing off between different gateways (GWs) connected to the backbone network is addressed. The authors propose a soft handoff scheme for low-power wireless networks based on 6LoWPAN (SH-WSN6), which avoids unnecessary handoffs when there are multiple GWs within range of the SN. The sensor node is able to register with multiple GWs at the same time by using an Internet Protocol (IP) based solution. SH-WSN6 takes advantage of the router advertisement (RA) messages defined in the Internet Control Message Protocol (ICMP). GWs transmit RA messages periodically to advertise their presence. Initially, the SN can register with only one GW. By receiving RAs in each interval, the SN decides on the best GW. Every time an SN registers with a new GW, it gains a new route, which improves connectivity through route diversity. If there is an unreliable link, the comparison algorithm decides to remove that link, thereby improving the QoS since poor links are no longer used. The comparison algorithm makes an independent decision on when to start a handoff, based on the comparison of the ratios of RA messages coming from the GWs in range. The SN also notices when a GW moves out of its range by comparing these ratios. The comparison algorithm assumes that GWs send RA messages at the same rate, which is a reasonable assumption.

The GINSENG project presents a soft handoff solution within its mobility operation [23]. It is implemented on top of GinMAC, a TDMA scheme for channel access with a pre-dimensioned virtual tree topology and hierarchical addresses. Two control messages are transmitted in order to support the attachment of the MN to a new point of attachment. These messages are the Join and the Join Ack, which are sent/received while the MN is still attached to the previous tree position. Therefore, the role of the dynamic topology control in soft handoff mobility is to support the re-attachment of the MN to a different tree position as a result of movement inside the testbed area. The handoff decision rules define several parameters: (i) an RSSI threshold, (ii) a better-RSSI condition, (iii) the number of lost packets, and (iv) the packet loss percentage. These values are set according to the application requirements.

5.3.4 Hard Handoff in Low-Power Wireless Networks

The second approach addresses a MAC layer solution for a hard handoff mechanism in mobile low-power wireless networks. This solution is specialized either for passive decisions with non-real-time support [18] or for active decisions with real-time support [19].

In [18], the authors describe a wireless clinical monitoring system collecting the vital signs of patients. In this study, the mobile node connects to a fixed AP by listening to beacons periodically broadcast by all APs. The node connects to the AP with the highest RSSI. The scheme is simple and reliable for low traffic data rates. However, bandwidth utilization is high due to the periodic broadcasts, and handoffs are performed passively, only when the mobile node fails to deliver data packets.

Smart-HOP [19] is a fast handoff process for low-power wireless networks, which takes advantage of the high responsiveness of RSSI/SNR and embeds a software-based approach to reduce decision inaccuracy. This is provided by adding three features: (i) averaging RSSI/SNR values over a sliding window to smooth out sudden changes, (ii) filtering out link asymmetry by using reply packets in the Data Transmission and Discovery Phases, and (iii) applying a wide hysteresis margin to cope with link variability in the transitional region.

5.3.5 Smart-HOP Design

The smart-HOP algorithm has two main phases: (i) Data Transmission Phase and (ii) Discovery Phase. A timeline of the algorithm is depicted in Fig. 5.8.

Fig. 5.8

Time diagram of the smart-HOP mechanism [19]

Initially, the mobile node is not attached to any access point. This state is similar to the case where the MN disconnects from one AP and searches for a better AP. In both cases, the MN performs a Discovery Phase by sending \(n\) request packets in a given window \(w\) and receiving a \(reply\) packet from each neighboring AP. The reply packet embeds the link quality level, which is defined as the average RSSI/SNR level of the \(n\) consecutive packets. By receiving the reply packets, the MN extracts the down-link information. The mobile node then selects the AP with the highest link quality level, which in turn accounts for the asymmetry of low-power wireless links. Upon detecting a good link, the MN resumes the Data Transmission Phase with the AP serving the most reliable link. The data packets are sent in bursts and a reply is received afterwards, similarly to the Discovery Phase. This process enables monitoring the current link during the normal data communication process. The details of both phases are shown in Fig. 5.8. The smart-HOP process relies on three main tuning parameters, which are presented in detail as follows.

Parameter 1: link monitoring frequency. This is an important parameter for any handoff process, as it determines how frequently the link should be monitored. The link monitoring frequency is captured by the window size parameter (\(ws\)), which represents the number of packets required to estimate the link quality over a specific time. A small \(ws\) (high sampling frequency) provides detailed information about the link but increases the processing of reply packets, which leads to higher energy consumption and lower delivery rates. Packet delivery decreases because the MN opts for several unnecessary handoffs, triggered by sudden fluctuations of the signal strength that make the link appear to be of low quality. On the other hand, a large \(ws\) (low sampling frequency) provides only coarse grained information about the link and decreases the responsiveness of the system. A large \(ws\) leads to late decisions, which is not suitable for a mobile network with dynamic link changes.

The mobile node starts the Discovery Phase when the link quality drops below a certain threshold (\(TH_{low}\)) and looks for APs that are above a reliable threshold (\(TH_{high}=TH_{low} + HM\), where \(HM\) is the hysteresis margin). During the Discovery Phase, the mobile node sends \(ws\) beacons periodically and the neighboring APs reply with the average RSSI or SNR of the beacons. If one or more APs are above \(TH_{high}\), the mobile node connects to the AP with the highest link quality and resumes data communication; otherwise, it continues broadcasting beacon bursts until it discovers a suitable AP. In order to reduce the effects of collisions, the APs use a simple TDMA MAC.

Parameter 2: threshold levels and hysteresis margin. In low-power wireless networks, the selection of thresholds and hysteresis margins is dictated by the characteristics of the transitional region and the variability of the wireless link. The lowest threshold has to consider the boundaries of the transitional region, as wireless sensor links spend most of their time in that region. The exact threshold level within the transitional region is determined through simulation and experimental analysis. If the threshold \(TH_{low}\) is too high, the node could perform unnecessary handoffs (by being too selective). If the threshold is too low, the node may use unreliable links. The hysteresis margin plays a central role in coping with the variability of low-power wireless links. If the hysteresis margin is too narrow, the mobile node may end up performing unnecessary and frequent handoffs between two APs (ping-pong effect), as illustrated in Fig. 5.7. If the hysteresis margin is too large, the handoff may take too long, which increases the network inaccessibility time, and thus the delivery delay, and decreases the delivery rate.

Parameter 3: AP stability monitoring. Due to the high variability of wireless links, the mobile node may detect an AP that is momentarily above \(TH_{high}\), but the link quality may decrease shortly after being selected. In order to avoid this, it is important to assess the stability of the AP candidate. After detecting an AP above \(TH_{high}\), smart-HOP sends \(m\) further bursts of beacons to validate the stability of that AP. The burst of beacons stands for the \(ws\) request beacons followed by the reply packets received from neighboring APs. Stability monitoring is tightly coupled to the hysteresis margin. A wide hysteresis margin requires a lower \(m\), and vice versa.
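Putting the three parameters together, the smart-HOP decision can be sketched as a small function; the mechanics of sending beacon bursts and collecting replies are abstracted away, and the default values simply mirror the wide-margin setting discussed in Sect. 5.3.6.

```python
def average(samples):
    """Arithmetic mean of a non-empty list of samples."""
    return sum(samples) / len(samples)

def handoff_decision(serving_rssi_window, candidate_replies,
                     th_low=-90.0, hm=5.0, m=1):
    """Decide whether to stay, keep discovering, or switch to a candidate AP.

    serving_rssi_window: last ws RSSI samples (dBm) of the serving AP.
    candidate_replies:   {ap: [average RSSI of each burst]} from the Discovery Phase.
    th_low, hm, m:       lower threshold, hysteresis margin, stability bursts.
    """
    th_high = th_low + hm
    if average(serving_rssi_window) >= th_low:
        return ('STAY', None)               # keep the Data Transmission Phase
    stable = {ap: bursts for ap, bursts in candidate_replies.items()
              if len(bursts) >= m and all(b >= th_high for b in bursts)}
    if not stable:
        return ('DISCOVER', None)           # keep broadcasting beacon bursts
    best = max(stable, key=lambda ap: average(stable[ap]))
    return ('SWITCH', best)

# The serving link falls below TH_low; AP1 stays above TH_high = -85 dBm, AP2 does not.
print(handoff_decision([-93.0, -95.0, -94.0],
                       {'AP1': [-84.0], 'AP2': [-88.0]}))  # ('SWITCH', 'AP1')
```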

5.3.6 Smart-HOP Observations

To evaluate smart-HOP functionality, different scenarios were considered, which are summarized in Table 5.6. For example, scenario \(A\), with a 5 dBm margin and stability 2, means that after the mobile node detects an AP above \(TH_{high}=-90\) dBm, the node will send two 3-beacon bursts to observe whether the link remains above \(TH_{high}\). The hysteresis margin \(HM\) captures the sensitivity to ping-pong effects, and the number of bursts \(m\) captures the stability of the AP candidate (recall that each burst in \(m\) contains three beacons).

Table 5.6 Description of second set of scenarios

Calibrating the parameters of smart-HOP requires a testbed that provides a significant degree of repeatability. A fair comparison of different parameters is only possible if all of them observe similar channel conditions. In order to achieve this, a model-train in a large room is employed. The room is 7\(\times \)7 m and the locomotive follows a 3.5\(\times \)3.5 m square layout. The speed of the locomotive is approximately 1 m/s (average walking speed). Fig. 5.9a depicts a locomotive passing by an AP and Fig. 5.9b shows the experimental scenario.

In real-world applications, the deployment of access points (or base stations) is subject to a careful study to ensure the coverage of the area of interest. In cellular networks, the density of access points guarantees full coverage and redundancy. In other wireless networks, the density of access points depends on the real-time requirements of the application. In critical applications, complete coverage is an essential requirement. To prevent extreme deployment conditions, such as a very high or very low density of APs, the smart-HOP tests provided minimal overlap between contiguous APs. For each evaluation tuple \(<TH_{low},HM,m>\), the mobile node took four laps, which leads to a minimum of 16 handoffs. The experiments show the results for the narrow margin (1 dBm) and for the wide margin (5 dBm).

Fig. 5.9

a MN passing by an AP. b Nodes’ deployment

Figure 5.10 shows the number of handoffs, handoff delay and the relative packet delivery ratio for two cases of narrow and wide hysteresis margin.

The high variability of low-power links can cause severe ping-pong effects. Figure 5.10a, b show two important trends with the narrow margin. First, all scenarios exhibit ping-pong effects: the optimal number of handoffs is 16, but all scenarios have between 32 and 48. Due to the link variability, the transition between neighboring APs requires between 2 and 3 handoffs. Second, a longer monitoring of stability \(m\) helps alleviate ping-pong effects; for all scenarios, the higher the stability, the lower the number of handoffs.

Fig. 5.10

a Number of handoffs (narrow HM). b Number of handoffs (wide HM). c Mean handoff delay (narrow HM). d Mean handoff delay (wide HM). e Relative delivery ratio (narrow HM). f Relative delivery ratio (wide HM). The horizontal lines represent the results for the best scenario: 32 for the number of handoffs and 96 for the relative delivery ratio [19]

Thresholds at the higher end of the transitional region lead to longer delays and lower delivery rates. A threshold selected at the higher end of the transitional region can lead to an order of magnitude more delay than a threshold at the lower end. This happens because mobile nodes with higher thresholds spend more time looking for overly reliable links, and consequently less time transmitting data.

The most efficient handoffs seem to occur for thresholds at the lower end of the transitional region with a wide hysteresis margin. Scenario B (\({-}\)90 dBm) with stability 1 performs best on the three metrics of interest: it leads to the smallest number of handoffs, the lowest average delay and the highest delivery rate. It is important to highlight the trends achieved by the wider hysteresis margin. First, the ping-pong effect is eliminated in all scenarios. Second, contrary to the narrower hysteresis margin, monitoring the stability of the new AP for longer periods (\(m = 2\) or \(3\)) does not provide any further gains, because the wider margin copes with most of the link variability.

Impact of interference. The functionality of smart-HOP was also analyzed under interference, by comparing the RSSI and SNR based models. Different types of interference, such as periodic (similar to microwave ovens) and bursty (similar to WiFi devices), were generated. The observations indicated that smart-HOP with SNR increases both the average delay and the delivery ratio. The longer handoff delay occurs because the MN spends more time in the Discovery Phase looking for good links: it detects the presence of interference earlier, starts the Discovery Phase, and attaches to the new AP only after observing a high quality link in terms of a lower noise floor. Hence, with SNR based handoff, which always connects to the less noisy link, the packet delivery rate is higher.

5.3.7 Conclusion

Link quality estimation is a fundamental building block for several network protocols and mechanisms, especially for routing and mobility management. The first part of this chapter addressed the problem of using link quality estimation to improve routing performance, particularly for the CTP routing protocol. We have presented FLQE-RM, a routing metric based on F-LQE. Based on TOSSIM 2 simulation and real experimentation, FLQE-RM was found to improve CTP routing performance. Typically, FLQE-RM establishes and maintains CTP routing trees better than four-bit and ETX.

The second part of this chapter addressed the problem of using link quality estimation for a fast handoff process in mobility management. Due to the high unreliability and dynamic changes of low-power lossy links in the presence of mobility, a fast, responsive LQE is more suitable than an accurate yet less responsive one. Smart-HOP is a hard handoff process for mobile low-power wireless network applications. It takes advantage of a sliding window to smooth out sudden fluctuations, filters out link asymmetry, and applies a wide hysteresis margin to cope with link variability. The results indicated that smart-HOP is able to perform a fast handoff with a high delivery ratio.