3.1 Background

Graph routing [1] has been widely used in recent years as an effective way to improve network reliability. A network under graph routing allocates two dedicated time slots for each transmission; if the first transmission fails, a retransmission is sent. Furthermore, the controller assigns a third, shared slot on a separate path for another retransmission. Because graph routing handles transmission failures reliably, several works have begun to study it. The work in [2] presents the first worst-case end-to-end delay analysis for periodic real-time flows under reliable graph routing. The work in [3] studies the network lifetime maximization problem under graph routing. However, graph routing introduces great challenges for real-time analysis: the large number of transmission tasks generates many conflicts, and a task that is more critical but has a low priority may miss its deadline. Many systems must guarantee the schedulability of high-criticality tasks even in the worst case, which is essential in scenarios such as industrial production lines and vehicle driving systems. To improve the schedulability of high-criticality flows while the network is running, we introduce resource analysis into mixed-criticality industrial wireless networks. A mixed-criticality network can improve the schedulability of high-criticality flows by dynamically switching the network criticality, and resource analysis is a major way to analyze schedulability in real-time systems. Combining the two, we can estimate the schedulability of networks at different criticality levels.

In this chapter, we propose a novel industrial network model with EDF scheduling. Our objective is to improve network reliability, especially to ensure that high-criticality flows arrive at their destinations on time even in the worst case. We analyze network schedulability by means of resource analysis: the network is schedulable when the network resource supply is no less than the network's upper-bound demand in any time interval. The main challenges in our work are (1) how to evaluate the network demand when the network switches from low-criticality mode to high-criticality mode and (2) how to tighten the network demand bound function so that the analysis result is not too pessimistic. The network we consider initially works in low-criticality mode, where flows transmit under the EDF policy [4] and source routing, and packets are transmitted from source to destination on the primary paths. When an error occurs or the demand changes, the network switches to high-criticality mode to enhance the schedulability of high-criticality flows, substituting reliable graph routing for source routing. Furthermore, we present a supply/demand bound analysis method to analyze the schedulability of periodic flows in industrial wireless sensor networks. By comparing the network supply bound with the demand bound, we can predict whether the network is schedulable. The current study makes the following key contributions:

  1. We propose a mixed-criticality industrial network, in which the routing switches from source routing to graph routing when the criticality mode changes.

  2. We theoretically derive the supply/demand bound function as a novel analysis method for industrial wireless networks. By analyzing channel contention and transmission conflict, we obtain an upper-bound function of the demand in any time interval. Given a network supply bound function, we can determine the schedulability of flows under different criticality modes.

  3. We tighten the demand bound by analyzing carry-over jobs (jobs that are released but not finished at the switching slot) and by a case analysis of the number of conflicts between two flows.

  4. Our method can be applied to general networks. By calculating the maximum demand bound of a network, we can analyze its schedulability at the system design stage; after network deployment, our method also provides an upper bound on the communication demand.

3.2 System Model

We consider an industrial wireless network consisting of field devices, one gateway, and one centralized network manager. We present the network in three parts. First, we describe a network model abstracted from mainstream industrial network standards. Then, we introduce the mixed-criticality network. Finally, we apply EDF scheduling to the industrial network.

3.2.1 Network Model

Without loss of generality, our model has the same salient features as WirelessHART and WIA-PA, which make it particularly suitable for process industries:

Time Division Multiple Access (TDMA)

In industrial wireless sensor networks, time is synchronized and slotted. Because the length of a time slot allows exactly one transmission, TDMA protocols can provide predictable communication latency and real-time communication.

Route and Spectrum Diversity

To mitigate physical obstacles, broken links, and interference, the messages are routed through multiple paths. Spectrum diversity gives the network access to all 16 channels defined in the IEEE 802.15.4 physical layer and allows per-time slot channel hopping. The combination of spectrum and route diversity allows a packet to be transmitted multiple times, over different channels and different paths, thereby handling the challenges of network dynamics in harsh and variable environments at the cost of redundant transmissions and scheduling complexity [5].

Handling Internal Interference

Industrial networks allow only one transmission in each channel in a time slot across the entire network, thereby avoiding the spatial reuse of channels. Thus, the total number of concurrent transmissions in the entire network at any slot is no greater than the number of available channels.

With the above features, the network can be modeled as a graph G = (V, E, m), in which the node set V represents the network devices (all sensor nodes in our model are fixed), E is the set of edges between these devices, and m is the number of channels. Network routing is shown in Fig. 3.1; our model supports both source routing and graph routing. Source routing is well known in academic research, so we do not explore it further in this chapter. Graph routing is a unique feature of industrial wireless sensor networks. In graph routing, a routing graph is a directed set of paths that connect two devices. As shown in Fig. 3.1b, graph routing has a primary path and multiple backup paths, which provides route redundancy and improves reliability. As stated in the WirelessHART standard, for each intermediate node on the primary path, a backup path is generated to handle link or node failures on the primary path. The network manager allocates α dedicated slots on the primary path, one transmission and (α − 1) retransmissions, and an (α + 1)th shared slot on the backup path; usually α = 2. In a dedicated slot, one channel allows only one transmission. In a shared slot, however, transmissions having the same receiver can be scheduled in the same slot; the senders that attempt to transmit in a shared slot contend for the channel using a carrier sense multiple access with collision avoidance (CSMA/CA) scheme [2]. Hence, multiple transmissions can be scheduled to contend on the same channel in a shared slot. For instance, in Fig. 3.1b the network manager allocates two dedicated slots for the packet transmitted from node S to node V 1. After the transmissions on the primary path, a third, shared slot is allocated for the packet transmitted from node S to V 5 on the backup path. When two backup paths intersect at node V 3, they avoid collision by CSMA/CA.

Fig. 3.1

Network routing. (a) Source routing. (b) Graph routing

It is important to note that the receiver responds with an ACK packet; the sender retransmits or sends a backup packet only when it does not receive an ACK. Because the ACK is part of the transmission, we do not need to analyze the demand of ACKs separately.

3.2.2 Mixed-Criticality Network

A periodic end-to-end communication between a source and a destination is called a flow. The network-switch instruction is part of a control flow; because we analyze the total network demand, we do not need to distinguish between data flows and control flows. The total number of flows in the network is n, denoted by F = {F 1, F 2, …, F n}. F i is characterized by < t i, d i, ξ, c i, ϕ i > , 1 ≤ i ≤ n, where t i is the period; d i is the deadline; ξ is the criticality level (we focus on dual-criticality networks with ξ ∈ {LO, HI}); with two criticality levels, the network allocates two dedicated slots per hop, one transmission and one retransmission. Our model can easily be extended to networks with an arbitrary number of criticality levels (by increasing the number of retransmissions on the primary path). c i is the number of hops required to deliver a packet from source to destination; when the network mode switches to high criticality, we denote the total number of transmission hops of both the primary path and the backup paths as C i. Finally, ϕ i is the routing path of the flow. Thus, each flow F i periodically generates a packet with period t i and sends it to the destination before its deadline d i via the routing path ϕ i with c i hops.
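
To make the flow model concrete, the short Python sketch below (hypothetical names, not part of any industrial standard or the original text) encodes the tuple < t i, d i, ξ, c i, ϕ i > together with the job count used later by the demand bound functions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Flow:
    """A periodic flow F_i = <t, d, xi, c, phi> as defined above."""
    t: int            # period, in slots
    d: int            # relative deadline, in slots
    xi: str           # criticality level: "LO" or "HI"
    c: int            # hops on the primary (source-routing) path
    phi: List[str]    # routing path as an ordered list of node ids
    C: int = 0        # total hops of primary plus backup paths (HI mode)

    def jobs_in(self, l: int) -> int:
        """Number of jobs whose whole scheduling window (release to deadline)
        lies inside an interval of length l, as used by Definition 3.2."""
        return max(0, (l - self.d) // self.t + 1)

# Example: a high-criticality flow with period 10, deadline 8, and 3 hops.
f1 = Flow(t=10, d=8, xi="HI", c=3, phi=["S", "V1", "V2", "D"], C=7)
print(f1.jobs_in(30))   # -> 3
```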

In the beginning, messages are transmitted under source routing in low-criticality mode. When an error occurs or the demand changes, a control flow sends a switch instruction and the network switches to high-criticality mode. To enhance network reliability, messages are transmitted under graph routing when the network runs in high-criticality mode. This switch is irreversible; high-criticality mode never switches back to low-criticality mode (if a reverse switch were allowed, its analysis would be similar to that of the switch from low to high). After the switch, we are not required to meet any deadlines of low-criticality flows, while high-criticality flows may execute for up to their high-criticality parameters.

3.2.3 EDF Scheduling in Industrial Networks

In this subsection, we provide an overview of earliest deadline first (EDF) scheduling in industrial wireless sensor networks as a basis for analyzing network schedulability. EDF is a commonly adopted policy in practice for real-time CPU scheduling, cyber-physical systems, and industrial networks [6]. Under EDF, each job's priority is determined by its absolute deadline, and transmissions are scheduled based on this priority. Each node in our network is equipped with a half-duplex omnidirectional radio transceiver that can alternate between transmitting and receiving. There are two kinds of delay in industrial wireless sensor networks, which can be summarized as follows:

  • Channel contention: each channel can carry at most one transmission across the entire network in a given slot, so a job is delayed when all channels are occupied by higher-priority transmissions.

  • Transmission conflicts: whenever two transmissions conflict, the transmission that belongs to the lower-priority job must be delayed by the higher-priority one, regardless of how many channels are available. It is important to note that one node can perform only one operation (receiving or transmitting) in each slot.

In EDF scheduling, a job's priority is higher the earlier its absolute deadline. We explain the operation of EDF scheduling in Fig. 3.2. There are two channels (CH1 and CH2) and two flows in this network. At the beginning, the priority of F 2 is higher than that of F 1 since d 2 = 4 < d 1 = 5, so the controller allocates CH1 to F 2 first. The flow with lower priority must be delayed when a transmission conflict occurs; for example, F 1 is delayed by F 2 at the 3rd time slot. At the 5th time slot, the second packet of F 2 is generated with absolute deadline 8, which is larger than 5. Hence, the priorities are inverted, and CH1 is allocated to F 1.
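
The following minimal sketch (illustrative only; it models channel contention but not transmission conflicts or retransmissions) shows how an EDF dispatcher picks, in every slot, the pending transmissions with the earliest absolute deadlines.

```python
def edf_schedule(jobs, num_channels, horizon):
    """Slot-by-slot EDF dispatcher considering channel contention only.

    jobs: list of dicts with 'name', 'release', 'abs_deadline', and 'hops'
    (remaining transmissions). In each slot, at most num_channels pending
    transmissions with the earliest absolute deadlines are served.
    """
    schedule = []
    for slot in range(horizon):
        ready = [j for j in jobs if j["release"] <= slot and j["hops"] > 0]
        ready.sort(key=lambda j: j["abs_deadline"])   # EDF priority order
        served = ready[:num_channels]
        for j in served:
            j["hops"] -= 1
        schedule.append([j["name"] for j in served])
    return schedule

# One channel, two jobs: the job with deadline 4 is served before deadline 5.
jobs = [
    {"name": "F1^1", "release": 0, "abs_deadline": 5, "hops": 3},
    {"name": "F2^1", "release": 0, "abs_deadline": 4, "hops": 2},
]
print(edf_schedule(jobs, num_channels=1, horizon=5))
# [['F2^1'], ['F2^1'], ['F1^1'], ['F1^1'], ['F1^1']]
```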

Fig. 3.2

An example for EDF scheduling. (a) Routing. (b) EDF scheduling

Channel contention occurs when high-priority jobs occupy all channels in a time slot; a transmission conflict is generated when several transmissions involve a common node in the same dedicated slot, and a low-priority job is delayed by high-priority ones. For shared slots, however, transmissions with the same receiver can be scheduled in the same slot. When contention occurs between backup paths, the senders on the backup paths use a CSMA/CA scheme to contend for the channel, so no network delay results in this case. For a network under graph routing, two paths ϕ i and ϕ j involving a common node may conflict in four conditions:

  1. ϕ i is a primary path and ϕ j is a backup path;

  2. both ϕ i and ϕ j are primary paths;

  3. both ϕ i and ϕ j are backup paths;

  4. ϕ i is a backup path and ϕ j is a primary path.

Except for condition 3, the other three conditions may generate transmission conflicts. Consequently, the total delay caused by these conditions depends on how their primary and backup paths intersect in the network.
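
As a small illustration (a hypothetical helper, not from the original text), the check below encodes the rule that conditions 1, 2, and 4 may generate a transmission conflict, while condition 3 is resolved in shared slots by CSMA/CA.

```python
def may_conflict(role_i: str, role_j: str, share_a_node: bool) -> bool:
    """Return True if two paths that share a node can generate a transmission
    conflict. Backup-backup intersections (condition 3) contend via CSMA/CA
    in shared slots and therefore add no delay."""
    if not share_a_node:
        return False
    return not (role_i == "backup" and role_j == "backup")

print(may_conflict("primary", "backup", True))   # condition 1 -> True
print(may_conflict("backup", "backup", True))    # condition 3 -> False
```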

In a real-time system, a task is schedulable when it can be completely executed before its deadline. Accordingly, a flow is schedulable when every packet it generates arrives at the destination before its deadline. We then define network schedulability as whether all flows in the network are schedulable.

3.3 Problem Formulation

Given a mixed-criticality industrial network G = (V, E, m), the flow set F, and the EDF scheduling algorithm, our objective is to analyze the relationship between the maximum execution demand of the flows and the network resource in any time interval so that the schedulability of the flow set can be determined. A successful method for analyzing the schedulability of real-time workloads is the use of demand bound functions [7, 8]. We introduce this concept into industrial wireless sensor networks and propose two definitions as follows:

Definition 3.1 (Supply Bound Function)

A supply bound function sbf(l) is the minimal transmission capacity provided by the network within a time interval of length l.

Definition 3.2 (Demand Bound Function)

A demand bound function dbf(F i, l) gives an upper bound on the maximum possible execution demand of flow F i in any time interval of length l, where demand is calculated as the total amount of required execution time of flows with their whole scheduling windows within the time interval.

There are methods for computing the supply bound function sbf(l) in single-processor systems [9, 10]—for example, a unit-speed, dedicated uniprocessor has sbf(l) = l. We say that a supply bound function sbf is of no more than unit speed if

$$\displaystyle \begin{aligned} sbf(0)=0 \land \forall l, k \geq 0: sbf(l+k)-sbf(l) \leq k. \end{aligned} $$
(3.1)

Because each channel can be mapped as one processor, the supply bound function sbf of the industrial network can be bounded as

$$\displaystyle \begin{aligned} sbf(0)=0 \land \forall l, k \geq 0: sbf(l+k)-sbf(l) \leq Ch \times k, \end{aligned} $$
(3.2)

where Ch is the number of channels in the network. Furthermore, as a natural assumption of all virtual resource platforms proposed in the literature, we assume that the supply bound function is piecewise linear in all intervals [k, k + l]. Under TDM (time division multiplexing), the network supply bound function can be expressed as

$$\displaystyle \begin{aligned} sbf(l)=\max(l\mod \Theta -\Theta+\Phi, 0)+\lfloor \frac{l}{\Theta} \rfloor \Phi, \end{aligned} $$
(3.3)

where Θ is the period of TDM, and Φ is the length of slots allocated to the transmission.
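
As a quick sanity check, the sketch below implements Eq. (3.3) directly; the parameter names theta and phi mirror Θ and Φ above, and the numeric example is our own.

```python
def sbf_tdm(l: int, theta: int, phi: int) -> int:
    """Supply bound function of a TDM resource (Eq. 3.3): phi slots are
    guaranteed within every period of theta slots."""
    assert 0 < phi <= theta
    return max(l % theta - theta + phi, 0) + (l // theta) * phi

# A resource guaranteeing 3 slots out of every 5:
print([sbf_tdm(l, theta=5, phi=3) for l in range(11)])
# [0, 0, 0, 1, 2, 3, 3, 3, 4, 5, 6]
```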

In different modes, the schedulability of the flow set is determined as follows:

$$\displaystyle \begin{aligned} {} \sum_{F_{i}\in F}dbf_{LO}(F_{i},l)\leq sbf_{LO}(l), \forall l\geq 0. \end{aligned} $$
(3.4)
$$\displaystyle \begin{aligned} {} \sum_{F_{i}\in HI(F)}dbf_{HI}(F_{i},l)\leq sbf_{HI}(l), \forall l\geq 0. \end{aligned} $$
(3.5)

Similar to real-time scheduling, the flow set is schedulable when the network satisfies Eqs. (3.4) and (3.5). However, in contrast to real-time processor scheduling, there are two kinds of delays in industrial wireless sensor networks, channel contention and transmission conflicts. When a transmission conflict occurs, a high-priority job delays a low-priority job, and thus the flows are not independent.

Note that transmission conflict is a distinguishing feature in industrial wireless sensor networks that does not exist in conventional real-time processor scheduling problems. To analyze the network demand in any time interval, we must consider the delay caused by transmission conflicts.

Moreover, in mixed-criticality networks, there may be some jobs that are released but not finished at the time of the switch to high-criticality mode; we define these jobs as carry-over jobs. We must analyze carry-over jobs to tighten the demand bound of the network.

3.4 Demand-Bound Function of Industrial Networks

In this section, we analyze the network demand bound function for a single-criticality network and mixed-criticality network. For the single-criticality network, we study the demand bound function from channel contention and transmission conflicts. On this basis, we then analyze the delay caused by carry-over jobs (the job that is released but not finished at the time of the switch) in the mixed-criticality network. Finally, we study the methods for tightening the network demand bound function.

3.4.1 Analysis of Single-Criticality Networks

In this subsection, we study the demand bound function of a single-criticality network in two steps. First, we formulate the network transmission conflict delay in terms of path overlaps; we then analyze the network dbf. To make our study self-contained, we present the state-of-the-art demand bound function results for CPU scheduling [11, 12]. Assuming that the flows are executed on a multiprocessor platform, each channel is mapped to a processor. We can then bound the network demand caused by channel contention in any time interval of length l as

$$\displaystyle \begin{aligned} {} dbf(l)_{ch}=\frac{1}{m}\sum_{i=1}^{n}\max\left(\left\lfloor \frac{l-d_{i}}{t_{i}}\right\rfloor+1,\ 0\right)c_{i}. \end{aligned} $$
(3.6)

Equation (3.6) considers only the delay caused by channel contention, denoted as dbf(l) ch. Jobs conflict when their transmission paths overlap. As shown in Fig. 3.3, the priority of the job of F i is higher than that of the job of F j, so the job of F j may be delayed by the job of F i at nodes V and V 1 to V h (we assume the network is connected and do not consider the case where a path is disconnected).
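
A direct reading of the channel-contention demand of Eq. (3.6), assuming the classic EDF demand-bound form (floor((l − d)/t) + 1 jobs per flow, divided by the m channels), is the following sketch; the flow tuples are hypothetical.

```python
def dbf_channel(flows, l, m):
    """Channel-contention demand (Eq. 3.6): the classic EDF demand bound of
    each flow, divided by the number of channels m (one channel per processor).

    flows: iterable of (t, d, c) tuples; l: interval length in slots.
    """
    demand = 0
    for t, d, c in flows:
        jobs = max(0, (l - d) // t + 1)   # jobs fully contained in the interval
        demand += jobs * c
    return demand / m

flows = [(10, 8, 3), (15, 12, 4)]        # (period, deadline, hops)
print(dbf_channel(flows, l=30, m=2))     # (3*3 + 2*4) / 2 = 8.5
```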

Fig. 3.3

An example of transmission delay

Transmission conflicts are generated at path overlaps, and the network requires additional resources to resolve them. To obtain the dbf(l) of the network, we must first study the relationship between the conflict delay and the path overlap. However, estimating the transmission conflict delay by the length of the overlap is often pessimistic; as shown in Fig. 3.3, the delay is much smaller than the length of the path overlap. To avoid this pessimism, we introduce the result proposed by Saifullah in [13]. The length of the kth path overlap is denoted as Len k(ij), and its conflict delay is D k(ij). For an overlap V 1 →…V h, if there exist nodes u, w ∈ V such that u → V 1 →…V h → w is also on F i’s route, then Len k(ij) = h + 1. If only u or only w exists, then Len k(ij) = h. If neither u nor w exists, then Len k(ij) = h − 1. In our example, Len 1(ij) = 2, Len 2(ij) = 7, and D(ij) = D 1(ij) + D 2(ij), which is at most 2 + 3 = 5. Clearly, Len k(ij) is an upper bound of D k(ij), that is, Len k(ij) ≥ D k(ij). For the flow set F, the total delay caused by transmission conflicts Δ is

$$\displaystyle \begin{aligned} \Delta=\sum_{1\leq i,j\leq n}D_{k}(ij)\leq \sum_{1\leq i,j\leq n}Len_{k}(ij). \end{aligned} $$
(3.7)

By the Lemma proposed in [13], the estimation of the delay caused by overlap with a length of at least 4 can be tightened. We then formulate the total transmission conflicts between F i and F j as

$$\displaystyle \begin{aligned} {} \Delta(ij)=\sum_{k=1}^{\delta(ij)}Len_{k}(ij)-\sum_{k'=1}^{\delta'(ij)}(Len_{k'}(ij)-3), \end{aligned} $$
(3.8)

where δ(ij) is the number of path overlaps and δ′(ij) is the number of path overlaps with a length of at least 4. Because all flows are periodic, we denote by T the least common multiple of the periods in the flow set F (because each period is an integer power of 2, T equals the longest period in F). The network dbf changes with the time interval l as it slides from 0 to T. However, the lemma proposed by Saifullah [13] assumes fixed-priority scheduling, whereas the priorities of flows vary under EDF scheduling, so we must analyze whether Saifullah’s result still applies under EDF. We denote the mth job generated by F i as \(F_{i}^{m}\); our objective is to estimate the delay caused by transmission conflicts by analyzing the number of conflicts.
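
Assuming the overlap-length convention above, a compact way to read Eq. (3.8) is that every overlap contributes at most min(Len, 3) slots of delay, as in this sketch (the overlap lengths are taken from the Fig. 3.3 example).

```python
def conflict_delay(overlap_lengths):
    """Pairwise conflict-delay bound Delta(ij) from Eq. (3.8): overlaps of
    length at least 4 are capped at 3, shorter ones count in full."""
    total = sum(overlap_lengths)                          # sum of Len_k(ij)
    excess = sum(L - 3 for L in overlap_lengths if L >= 4)
    return total - excess                                 # = sum(min(L, 3))

# The example of Fig. 3.3: Len_1 = 2 and Len_2 = 7 give a delay of at most 5.
print(conflict_delay([2, 7]))   # -> 5
```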

Lemma 3.1

\(F_{i}^{k}\) and \(F_{j}^{g}\) are two jobs of flows F i and F j. When \(F_{i}^{k}\) and \(F_{j}^{g}\) (with \(F_{i}^{k}\in hp(F_{j}^{g})\)) conflict, the job \(F_{i}^{k}\) will never be blocked by the job \(F_{j}^{g+m}\). However, \(F_{i}^{k+m}\) may be blocked by \(F_{j}^{g}\).

Proof

At the beginning, the priority of \(F_{i}^{k}\) is higher than that of \(F_{j}^{g}\), which means \(d_{i}^{k}<d_{j}^{g}\). As Fig. 3.3 shows, the two flows may conflict at V 1, and F j is delayed by F i. When \(F_{i}^{k}\) is forwarded to V h, the two jobs may conflict again. If \(F_{i}^{k}\) were blocked by \(F_{j}^{g+m}\), we would have \(d_{i}^{k}>d_{j}^{g+m}\). Because \(d_{j}^{g+m}>d_{j}^{g}\), this contradicts \(d_{i}^{k}<d_{j}^{g}\). Hence, \(F_{i}^{k}\) will never be blocked by \(F_{j}^{g+m}\).

We show that \(F_{i}^{k+m}\) may be blocked by \(F_{j}^{g}\) through an example. Consider the simple flow set F 1 = {c 1 = 1, d 1 = t 1 = 2} and F 2 = {c 2 = 1, d 2 = t 2 = 3}.

At the beginning, the priority of \(F_{1}^{1}\) is higher than that of \(F_{2}^{1}\), because their absolute deadlines are 2 and 3, respectively. At time slot 2, another job of F 1 is generated with absolute deadline 4. Since the absolute deadline of \(F_{2}^{1}\) is 3 < 4, \(F_{1}^{2}\) is blocked by \(F_{2}^{1}\). Hence, \(F_{i}^{k+m}\) can be blocked by \(F_{j}^{g}\). □

Because a path is a chain of transmissions from source to destination, to bound the conflict delay that multiple jobs of F i impose on flow F j, we analyze the number of conflicts between F i and F j. Lemma 3.2 establishes an upper bound on this number.

Lemma 3.2

When F j and F i conflict, within any time interval of length l, F j can be blocked by F i no more than \(\lceil \frac {l}{t_{i}}\rceil \) times, and F i can be blocked by F j no more than \(\lceil \frac {l}{t_{j}}\rceil \) times.

Proof

Based on Lemma 3.1, we know that priority inversion may occur during transmission. If \(F_{i}^{k}\) is a higher-priority job than \(F_{j}^{g}\), the jobs of F j released after \(F_{j}^{g}\) may be blocked by \(F_{i}^{k}\) until \(F_{i}^{k}\) is finished. If all jobs generated by F i satisfy \(d_{i}^{k+\lceil \frac {l}{t_{i}}\rceil }<d_{j}^{g}\), where k and g index the first jobs of F i and F j, respectively, in l, then there are no more than \(\lceil \frac {l}{t_{i}}\rceil \) such jobs of F i in the interval. Beyond that, because there is no transmission conflict, the other jobs of F j are not blocked by F i. Hence, F j can be blocked by F i no more than \(\lceil \frac {l}{t_{i}}\rceil \) times. Similarly, F i can be blocked by F j no more than \(\lceil \frac {l}{t_{j}}\rceil \) times. □

By Lemmas 3.1 and 3.2, we can estimate the network demand caused by the transmission conflict. Based on Eq. (3.6), we obtain the upper bound of dbf(l) as follows:

Theorem 3.1

In any time interval of length l, the demand bound function of a single-criticality network (low-criticality mode) is upper-bounded by

$$\displaystyle \begin{aligned} {} dbf_{LO}(l)=\frac{1}{m}\sum_{i=1}^{n}\max\left(\left\lfloor \frac{l-d_{i}}{t_{i}}\right\rfloor+1,\ 0\right)c_{i}+\sum_{1\leq i, j\leq n} \left(\Delta(ij) \max\left\{\left\lceil \frac{l}{t_{i}}\right\rceil, \left\lceil \frac{l}{t_{j}}\right\rceil\right\}\right). \end{aligned} $$
(3.9)

Proof

The network demand in a time interval of length l consists of two parts: channel contention and transmission conflict. The demand of channel contention is bounded by Eq. (3.6). For the demand of transmission conflict, we first bound the per-conflict delay of every pair of paths by Eq. (3.8); the number of conflicts is then bounded by Lemma 3.2. We can thus obtain the network demand of transmission conflict as

$$\displaystyle \begin{aligned} \sum_{1\leq i, j\leq n} (\Delta(ij) \max\{\lceil \frac{l}{t_{i}}\rceil, \lceil \frac{l}{t_{j}}\rceil\}). \end{aligned} $$
(3.10)

Hence, we obtain the demand bound function of a single-criticality network as the upper bound in Eq. (3.9). □
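
Putting the two parts together, a sketch of the bound in Theorem 3.1, assuming the channel-contention term of Eq. (3.6) and the conflict term of Eq. (3.10), looks as follows; the flow data and pairwise delays are hypothetical.

```python
from math import ceil

def dbf_lo(flows, pairwise_delay, l, m):
    """LO-mode demand bound (Eq. 3.9): channel-contention demand plus, for
    each conflicting pair, Delta(ij) * max(ceil(l/t_i), ceil(l/t_j)).

    flows: list of (t, d, c); pairwise_delay: {(i, j): Delta(ij)} for i < j.
    """
    contention = sum(max(0, (l - d) // t + 1) * c for t, d, c in flows) / m
    conflict = sum(delta * max(ceil(l / flows[i][0]), ceil(l / flows[j][0]))
                   for (i, j), delta in pairwise_delay.items())
    return contention + conflict

flows = [(10, 8, 3), (15, 12, 4)]
print(dbf_lo(flows, {(0, 1): 5}, l=30, m=2))   # 8.5 + 5 * 3 = 23.5
```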

3.4.2 Analysis of Mixed-Criticality Networks

Based on the result proposed in Sect. 3.4.1, we extend the idea of a demand bound function to a mixed-criticality network. For illustration purposes, only a dual-criticality network is considered; this means that ξ has only two values, LO (low-criticality mode) and HI (high-criticality mode). Nevertheless, it can be easily extended to networks with an arbitrary number of criticality modes. We construct three demand bound functions: the demand bound function in low- and high-criticality modes (dbf LO(l) and dbf HI(l) ) and the demand bound function when network mode switches (dbf LO2HI(l)). We analyze dbf HI(l) and dbf LO2HI(l) under graph routing in this subsection.

The network starts at the low-criticality level, and all flows are served and executed as in a single-criticality network. When errors or emergencies occur, the centralized network manager triggers the switch of the network mode from LO to HI. In high-criticality mode, the network turns to graph routing, the flows at the low-criticality level are discarded, and only high-criticality flows are delivered. A job of a high-criticality flow that is active (released but not finished) at the time of the switch is still transmitted under source routing. Let n HI be the number of high-criticality flows; there are no more than n HI carry-over jobs active at the time of the switch. We model these carry-over jobs as new flows \(F_{(n_{HI}+1)}, F_{(n_{HI}+2)} \dots F_{2n_{HI}}\), which have the same parameters as the corresponding flows in F except for c and t. For the new flow \(F_{p+n_{HI}}\), \(c_{p}>c_{(p+n_{HI})}\), and, because the switch is a rare event, \(t_{(p+n_{HI})}\gg t_{p}\).

When the network switches from LO to HI, the demand of carry-over jobs is

$$\displaystyle \begin{aligned} \frac{1}{m}\sum_{p=1+n_{HI}}^{2n_{HI}}c_{p}+\sum_{n_{HI}\leq p, q\leq 2n_{HI}}\Delta(pq). \end{aligned} $$
(3.11)

Furthermore, the flows generate new jobs after the network switches to high-criticality mode. Because each node except the destination on the primary path generates one backup path, the total number of paths of F p is c p + 1, and the execution time of each backup path, \(c_{p}^{k}\), can be obtained from the network easily. The total execution time of F p can be denoted as \(C_{p}=c_{p}+\sum _{k=1}^{c_{p}}c_{p}^{k}\). Therefore, the network demand for channel contention under graph routing is

$$\displaystyle \begin{aligned} \frac{1}{m}\sum_{F_{p}\in HI(F)}\max\left(\left\lfloor \frac{l-d_{p}}{t_{p}}\right\rfloor+1,\ 0\right)C_{p}. \end{aligned} $$
(3.12)

Based on the rules of transmission conflict in Sect. 3.2.3, a transmission conflict between two flows is generated only if at least one of them is transmitting on its primary path. Therefore, we analyze dbf HI(l) by studying the transmission conflicts generated on the primary path. For \(F_{p}^{g}\) and \(F_{q}^{m}\) with d p < d q, \(F_{q}^{m}\) may be delayed by \(F_{p}^{g}\) and its backup paths. We denote the set consisting of F p's primary path and its backup paths as I; each path in I is denoted as p′. The upper bound on the delay of \(F_{q}^{m}\) caused by \(F_{p}^{g}\) is denoted as Δ(p′q), which can be formulated as

$$\displaystyle \begin{aligned} {} \Delta(p'q)= \sum_{p'=1}^{c_{p}+1}(\sum_{k=1}^{\delta(p'q)}Len_{k}(p'q)-\sum_{k'=1}^{\delta'(p'q)}(Len_{k'}(p'q)-3)). \end{aligned} $$
(3.13)

For a job on a backup path, a transmission delay occurs only when it conflicts with primary paths carrying higher-priority jobs. When we reverse the priorities of \(F_{p}^{g}\) and \(F_{q}^{m}\), Eq. (3.13) is also the upper bound on the additional demand of \(F_{p}^{g}\) caused by \(F_{q}^{m}\). From the above, the upper-bound demand function of the network under graph routing can be described as

$$\displaystyle \begin{aligned} dbf_{HI}(l)=\frac{1}{m}\sum_{F_{p}\in HI(F)}\max\left(\left\lfloor \frac{l-d_{p}}{t_{p}}\right\rfloor+1,\ 0\right)C_{p}+\sum_{1\leq p, q\leq n_{HI}} \left(\Delta(p'q) \max\left\{\left\lceil \frac{l}{t_{p}}\right\rceil, \left\lceil \frac{l}{t_{q}}\right\rceil\right\}\right). \end{aligned} $$
(3.14)

We can then obtain dbf LO2HI(l) as

$$\displaystyle \begin{aligned} dbf_{LO2HI}(l)=dbf_{HI}(l)+\frac{1}{m}\sum_{p=1+n_{HI}}^{2n_{HI}}c_{p}+\sum_{n_{HI}\leq p, q\leq 2n_{HI}}\Delta(pq). \end{aligned} $$
(3.15)

Because transmission on a backup path occurs only when the two previous attempts fail, the sender does not need to send a backup packet when the transmission success rate on the primary path already satisfies the required network packet reception ratio. Hence, the upper-bound demand function of the network in this case can be rewritten as

(3.16)
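
The following sketch mirrors the structure of dbf HI(l) and dbf LO2HI(l) described in this subsection: HI-mode channel contention uses the total hops C p, and the switch-time demand adds the carry-over terms of Eq. (3.11). The flow data, carry-over hop counts, and delay values are hypothetical placeholders, so the code is an illustration rather than the exact formulas.

```python
from math import ceil

def dbf_hi(hi_flows, pairwise_delay, l, m):
    """HI-mode demand: channel contention uses the total hops C_p of the
    primary plus backup paths; conflicts use the graph-routing delay bound."""
    contention = sum(max(0, (l - d) // t + 1) * C for t, d, C in hi_flows) / m
    conflict = sum(delta * max(ceil(l / hi_flows[p][0]), ceil(l / hi_flows[q][0]))
                   for (p, q), delta in pairwise_delay.items())
    return contention + conflict

def dbf_lo2hi(hi_flows, pairwise_delay, carry_hops, carry_delay, l, m):
    """Switch-time demand: HI-mode demand plus the residual hops and residual
    conflict delay of the carry-over jobs (cf. Eq. 3.11)."""
    return dbf_hi(hi_flows, pairwise_delay, l, m) + sum(carry_hops) / m + carry_delay

hi_flows = [(10, 8, 7), (15, 12, 9)]   # (period, deadline, C_p)
print(dbf_lo2hi(hi_flows, {(0, 1): 4}, carry_hops=[2, 3],
                carry_delay=1, l=30, m=2))   # -> 35.0
```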

3.4.3 Tightening the Demand Bound Functions

A loose demand bound function leads to a pessimistic estimation of network schedulability. In this subsection, we tighten our demand bound functions through a case analysis of the transmission conflicts between two flows.

In Lemma 3.2, the number of conflicting jobs is conservatively estimated as \(\max \{\lceil \frac {l}{t_{i}}\rceil , \lceil \frac {l}{t_{j}}\rceil \}\). However, this value can be reduced by a case analysis. We distinguish the following cases:

  • t i ≤ t j, and d i ≤ d j.

  • t i ≤ t j, and d i ≥ d j.

When the paths of F i and F j overlap, they may generate transmission conflicts. However, the delay caused by a conflict cannot occur in every slot, because a flow does not transmit between its deadline d and the end of its period t. Clearly, when one flow is in this idle interval (between d and t), there is no transmission conflict between F i and F j.

Condition 1 is shown in Fig. 3.4a; conflict occurs only when both F i and F j have job transmissions on the path. For a given l, the number of conflicting jobs can be expressed as

$$\displaystyle \begin{aligned} \lceil\frac{l}{t_{j}}\rceil(\lfloor\frac{d_{j}}{t_{i}}\rfloor+1). \end{aligned} $$
(3.17)
Fig. 3.4

Classified discussion. (a) Condition 1. (b) Condition 2

Similarly, we can obtain the number of conflicting jobs in condition 2 as

$$\displaystyle \begin{aligned} \lceil\frac{l}{t_{j}}\rceil(\lfloor\frac{d_{j}}{t_{i}}\rfloor+1)=2\lceil\frac{l}{t_{j}}\rceil. \end{aligned} $$
(3.18)

We denote the number of conflicts as Num(ij). When each flow’s routing information is known, the estimation of Num(ij) can be made more precise. By taking the modulus of \(\frac {d_{j}}{t_{i}}\), we can estimate the maximum length of F i’s residual path as \(||\frac {d_{j}}{t_{i}}||\). The delay on this residual path is denoted as ψ, and we can obtain ψ as follows:

  • If F i has an overlap with F j on this residual path, \(\psi =\Delta (||\frac {d_{j}}{t_{i}}||)\), where \(\Delta (||\frac {d_{j}}{t_{i}}||)\) is the delay on the residual path whose length is \(||\frac {d_{j}}{t_{i}}||\).

  • If F i has no overlap with F j on this residual path, ψ = 0.

The number of conflicts can be expressed as

$$\displaystyle \begin{aligned} {} Num(ij)=\lceil\frac{l}{t_{j}}\rceil(\lfloor\frac{d_{j}}{t_{i}}\rfloor+\psi). \end{aligned} $$
(3.19)
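
A direct transcription of Eq. (3.19), with the residual-path delay ψ supplied by the caller, might look like this (illustrative values only).

```python
from math import ceil, floor

def num_conflicts(l, t_i, t_j, d_j, psi):
    """Tightened conflict count Num(ij) from Eq. (3.19): for each of the
    ceil(l / t_j) jobs of F_j, at most floor(d_j / t_i) jobs of F_i plus the
    residual-path delay psi can interfere."""
    return ceil(l / t_j) * (floor(d_j / t_i) + psi)

# Condition 1 of Fig. 3.4 with no overlap on the residual path (psi = 0):
print(num_conflicts(l=30, t_i=5, t_j=15, d_j=12, psi=0))   # 2 * 2 = 4
```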

We can then obtain the network demand bound functions as follows:

Theorem 3.2

In any time interval of length l, the demand bound function in each mode can be expressed as

(3.20)
(3.21)
(3.22)

The network demand bound function is \(dbf(l)=\max \{dbf_{LO}(l), dbf_{LO2HI}(l), dbf_{HI}(l)\}\) , and the network can be scheduled when sbf(l) is no less than \(\min {\{}dbf_{LO}(l), dbf_{LO2HI}(l){\}}\).

Proof

The proofs of the demand bound functions are similar to that of Theorem 3.1. The difference is that we reduce the number of conflicts by the case analysis above, so the demand bound functions are tightened. Because there are carry-over jobs at the switching time, dbf LO2HI(l) must be greater than dbf HI(l). When the network supply in a time interval of length l, sbf(l), is no less than dbf LO(l), the network can be scheduled in low-criticality mode; when dbf LO2HI(l) ≤ sbf(l) < dbf LO(l), the network can be scheduled in high-criticality mode; when \( sbf(l) < \min \{dbf_{LO}(l), dbf_{LO2HI}(l)\}\), the network cannot be scheduled. Hence, the network can be scheduled when sbf(l) is no less than \(\min \{dbf_{LO}(l), dbf_{LO2HI}(l)\}\). □
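
Theorem 3.2 reduces the schedulability test to a pointwise comparison of the supply and demand curves; a minimal sketch is shown below, with toy step-function demand curves standing in for the exact demand bound functions.

```python
def schedulable(sbf, dbf_lo, dbf_lo2hi, horizon):
    """Return True if, for every interval length l up to horizon, the supply
    covers the LO-mode demand or the switch-time demand (cf. Theorem 3.2)."""
    return all(sbf(l) >= min(dbf_lo(l), dbf_lo2hi(l)) for l in range(1, horizon + 1))

sbf = lambda l: max(l % 5 - 2, 0) + (l // 5) * 3        # TDM supply: 3 slots per 5
dbf_lo = lambda l: 3 * max(0, (l - 8) // 10 + 1)        # one LO flow: c=3, d=8, t=10
dbf_lo2hi = lambda l: 7 * max(0, (l - 8) // 10 + 1)     # same flow under graph routing
print(schedulable(sbf, dbf_lo, dbf_lo2hi, horizon=50))  # -> True
```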

3.5 Performance Evaluations

In this section, we conduct experiments to evaluate the performance of our proposed methods. Our method is first compared with the simulation result. We then compare our method with the supply/demand bound function analysis without tightening.

To illustrate the applicability of our method, for each parameter configuration, several test cases are generated randomly. For each test case, the network gateway is placed at the center of playground area A, and the other nodes are deployed randomly around the gateway. According to the suggestion in [14], given the transmitting range d = 40 m, the number of nodes n and the playground area A should satisfy

$$\displaystyle \begin{aligned} \frac{n}{A}=\frac{2\pi}{d^{2}\sqrt{27}}. \end{aligned} $$
(3.23)

Two nodes are adjacent if they can communicate with each other, that is, if the distance between them is less than d. By repeatedly connecting the nearest node from each source node toward the gateway, the network topology can be obtained. If some source nodes cannot connect to the gateway, their locations are generated randomly again.

Our simulations use the utilization u to control the workload of the entire network. To generate feasible flow sets, we specify the network utilization \(U=\sum u_{i}\) (U < 1), and the UUniFast algorithm [15] is used to generate each flow’s utilization u i (\(u_{i}=\frac {c_{i}}{t_{i}}\)). The result generated by the UUniFast algorithm follows a uniform distribution and is neither pessimistic nor optimistic for the analysis [15].
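
For reproducibility, the sketch below shows one way to generate the flow utilizations with UUniFast and to size the playground area according to Eq. (3.23); the seed and helper names are our own choices, not those of the original experiments.

```python
import math
import random

def uunifast(n_flows, total_u, rng=random.Random(1)):
    """UUniFast [15]: draw n_flows utilizations that sum to total_u and are
    uniformly distributed over the corresponding simplex."""
    utils, remaining = [], total_u
    for i in range(n_flows - 1, 0, -1):
        nxt = remaining * rng.random() ** (1.0 / i)
        utils.append(remaining - nxt)
        remaining = nxt
    utils.append(remaining)
    return utils

def playground_area(n_nodes, d=40.0):
    """Area A satisfying the density rule n / A = 2*pi / (d^2 * sqrt(27)) of Eq. (3.23)."""
    return n_nodes * d ** 2 * math.sqrt(27) / (2 * math.pi)

print(round(sum(uunifast(20, 0.6)), 6))   # -> 0.6
print(round(playground_area(70)))         # area (in square meters) for 70 nodes
```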

Figure 3.5 shows an example of the relationship between the demand bound functions in different criticality modes and the supply bound function. In this example, reflecting a realistic deployment, we set the number of nodes to n = 70 and the number of flows to 20. At the beginning, with the network running in low-criticality mode, the demand is zero. At time slot 5, DBFLO is 72, which is larger than the upper bound of the network supply; the network then switches to high-criticality mode. Considering carry-over jobs, we can calculate the demand in high-criticality mode from time slot 5 onward. Because the network demand stays below the supply, this example is a schedulable network. Furthermore, Fig. 3.5 shows that the demand bound functions increase stepwise. This is because dbf(l) is the network demand over a period of time: when a job still has enough time slots to transmit (e.g., it has just been released), its demand is zero and it does not require immediate execution. As the remaining time decreases, the job becomes urgent; when the remaining time equals c, the job must be forwarded immediately or it will miss its deadline, and its demand then jumps to the number of hops c.

Fig. 3.5

Relationship between demand bound functions and supply bound function

Figure 3.6 shows the variation of DBFHI with the proportion of high-criticality flows. Because changing the proportion n HI does not affect the network demand in low-criticality mode, Fig. 3.6 shows the network demand only in high-criticality mode. The network demand increases with the proportion of high-criticality flows. At the beginning (0.4–0.6), the demand increases slowly; from 0.7 to 0.9, it increases rapidly. This is because more high-criticality flows generate more transmission conflicts under conditions 1, 2, and 4, and the network needs more resources to ensure that each job meets its deadline. This phenomenon becomes more severe as P increases.

Fig. 3.6

Variation tendency of DBFHI with the percentage of high-criticality flows

To validate our method, we compare the network schedulability ratio of the simulation (denoted as MixedSim) with that of our analysis (denoted as MixedDBF) in Fig. 3.7. For each point in the figures, more than 100 test cases are randomly generated. The figures show that our method can effectively evaluate the network schedulability ratio regardless of which parameters are used. Because we pessimistically estimate transmission conflicts to guarantee the safety of our method, the evaluated network demand bound is larger than the actual demand. In Fig. 3.7a and b, the proportions of high-criticality flows are P = 0.4 and P = 0.5, respectively. As the number of nodes increases, the network schedulability ratio declines in both cases, but the ratio in Fig. 3.7b falls faster than in Fig. 3.7a because the network generates more transmission conflicts when the number of high-criticality flows increases. Compared with Fig. 3.7a, Fig. 3.7c has 0.1 additional utilization, so the spacing between the simulation curve and the analysis curve widens. Although there are fluctuations between 30 and 60 nodes, our method always bounds the schedulability ratio (the fluctuations are caused by the randomly generated network environments). Because the two figures generate test cases according to their respective utilizations, their test cases are different. When the network utilization increases, the number of hops from source to destination increases, which increases the number of potential conflicts, and the estimation becomes more pessimistic.

Fig. 3.7

Relationship between schedulability ratio and the number of nodes. (a) U = 0.5, P = 0.4. (b) U = 0.5, P = 0.5. (c) U = 0.6, P = 0.4

Figure 3.8 shows the relationship between the schedulability ratio and the proportion of high-criticality flows. As expected, the schedulability ratio declines as the proportion of high-criticality flows increases. However, the spacing between the two curves changes with P (small, then large, then small again). This is because our method must consider the transmission conflicts of all situations to remain safe. In the beginning, there are only a few conflicts in high-criticality mode. As the number of high-criticality flows grows, the strict estimation counts every path overlap as a transmission conflict, which widens the gap between the two curves. When P = 0.7, the number of conflicts also increases in MixedSim, which reduces its schedulability ratio, and the difference becomes small again.

Fig. 3.8

Relationship between schedulability ratio and the proportion of high-criticality flows

We illustrate the advantage of MixedDBF by comparing it with the supply/demand bound analysis without tightening (denoted as MixedDBF-nt) in Fig. 3.9. MixedDBF outperforms MixedDBF-nt under all conditions. As the network utilization or the proportion of high-criticality flows increases, the error of MixedDBF-nt grows faster than that of MixedDBF. The reason is that increasing either the network utilization or the number of high-criticality flows increases the number of path overlaps, and MixedDBF tightens the delay caused by transmission conflicts through Eq. (3.19); the more overlaps there are, the larger the benefit of Eq. (3.19). Hence, the error of MixedDBF-nt grows faster than that of MixedDBF.

Fig. 3.9

Schedulability comparison among MixedSim, MixedDBF, MixedDBF-nt. (a) U = 0.4, P = 0.2. (b) U = 0.5, P = 0.2. (c) U = 0.4, P = 0.6

3.6 Summary

WirelessHART adopts reliable graph routing to enhance network reliability. However, graph routing introduces substantial challenges in analyzing the schedulability of real-time flows: a heavy transmission load increases conflicts and reduces network performance, and serious consequences may follow when critical tasks miss their deadlines. Hence, we first propose a novel network model that switches its routing according to the criticality mode of the network: when errors or accidents occur, the network switches to high-criticality mode and low-criticality tasks are abandoned. Second, we analyze the demand bound of mixed-criticality industrial wireless sensor networks under the EDF policy and formulate the network demand bounds in each criticality mode. Third, we tighten the demand bound by analyzing carry-over jobs and by a case analysis of the number of conflicts, improving the accuracy of the analysis. Simulations based on random network topologies demonstrate that our method can estimate network schedulability efficiently.