1 Introduction

Currently, the pervasive deployment of internet of things (IoT) applications highlights the significance of real-time status update systems. In such a system, the status updates of a physical phenomenon evolving at the transmitter end should be promptly conveyed to the beneficiaries for monitoring and control purposes [1]. There is a host of real-life applications in this regard, to name a few, reporting the vehicle’s position in autonomous vehicular technology [2] and reporting soil properties in smart agricultural systems [3]. The main objective of all such applications is to enhance the information freshness (timeliness), since outdated updates are insignificant and unreliable [4]. However, in different IoT applications, the status updates are computationally intensive. For instance, in autonomous driving and augmented reality technologies, heavy computations are required for image processing and voice recognition before each status update packet can be interpreted for monitoring and control purposes. For such applications, relying on the local processor of the end user (which has limited computation capacity) or offloading the computations to the central cloud (which is far from the end user) contributes to information staleness. As a result, mobile edge computing (MEC) has emerged [5] as a promising paradigm for information freshness enhancement and low-latency transmission. In such a system, the edge server, with higher computation power than the local processor, handles the intensive computations in proximity to the end user.

Information freshness is quantified using the age of information (AoI) metric, which was introduced for the first time in [1]. The AoI, denoted as \(\Delta (t)\), is defined as the time elapsed since the generation instant u(t) of the freshest packet received at the monitor [6]. Hence, it is formulated as \(\Delta (t)=t-u(t)\), and it should be kept as small as possible to ensure timely status updates. In this regard, it has been proved that minimizing the AoI is a distinct problem, differing from minimizing delay or maximizing throughput [7]. There is a host of research work addressing the AoI performance of real-time status update systems in general, and of MEC systems in particular.

For single-source systems, the authors in [1] initiated the mathematical analysis of the AoI metric using the queueing abstractions M/M/1, M/D/1 and D/M/1 under the first come first served (FCFS) queueing policy. This work was extended, in [8], by the deployment of M/M/1/\(1^{*}\), where the (\(*\)) is a notation for the preemption in service (\(\text {PR}^{\text {(s)}}\)) feature. The newly arrived packet can preempt the ongoing service promptly without the need for a waiting buffer; hence, this model is denoted in our context as \(\text {PR}^{\text {(s)}}\)-Bufferless. In [6], the M/M/1/\(2^{*}\) queueing model was proposed, where the (\(*\)) notation here indicates that preemption is allowed only within the waiting buffer (\(\text {PR}^{\text {(w)}}\)) under a non-preemptive service discipline (\(\text {NP}^{\text {(s)}}\)). Hence, this model is denoted in our context as \(\text {NP}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\). In [9], the authors proved the optimality of the \(\text {PR}^{\text {(s)}}\)-Bufferless approach in the case of a single-source update system with a memoryless server. Since the computational aspects of computationally intensive messages had not been considered in the aforementioned research work, the study in [10] was the first to analyze the AoI under a single-source MEC system, where an M/M/1 queueing model with FCFS is deployed to model the remote computing scheme. This work was then extended by reusing the M/M/1/\(2^{*}\) queueing model for the computing stage [11].

Regarding multi-source systems, the authors in [12] extended the aforementioned schemes, \(\text {PR}^{\text {(s)}}\)-Bufferless and \(\text {NP}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\), to the multiple-source case. In this work, the stochastic hybrid system (SHS) approach is deployed for the first time in the AoI context to formulate the average AoI for the case of memoryless servers. The SHS approach was further generalized, in [13], by analyzing the higher-order AoI moments along with the moment generating function (MGF). The AoI performance of multi-source MEC systems has also been studied in the literature. In [14], the AoI performance of three different computing scenarios was addressed using a FCFS M/M/1 queueing abstraction, which models both the transmission and computing stages.

The majority of the research studies pertaining to multi-source systems assume the same AoI requirements for all status update streams, employing the same service treatment. However, the more realistic scenario is to consider a distinct priority for each stream according to its importance and timeliness sensitivity. To illustrate, in Vehicle-to-Everything (V2X) technology, the status-update streams may be classified into three priority classes [15]: the highest priority is assigned to road safety data (e.g., vehicle’s speed); the intermediate priority is assigned to traffic management data (e.g., vehicle’s destination); and the lowest priority is dedicated to convenience and entertainment data (e.g., air-conditioning control). Considering MEC-integrated systems, there is also a dire need to consider different priority classes to reflect the urgency in task execution [16].

There is limited AoI research work considering the priority setting for multi-source status update systems. It can be classified according to the employed service discipline: \(\text {PR}^{\text {(s)}}\) [15, 17,18,19] and \(\text {NP}^{\text {(s)}}\) [17, 20, 21]. The notion of \(\text {PR}^{\text {(s)}}\) under the priority setting means that the lower priority (LP) class can be preempted either by a higher priority (HP) class or by the same priority class. For the \(\text {PR}^{\text {(s)}}\) priority schemes, the authors in [17] reanalyzed the M/M/1/\(1^{*}\) queueing abstraction (proposed in [8, 12]) under the case of multiple prioritized sources. In [18], the previous work was extended by proposing an M/M/1 priority queueing model with a separate one-sized buffer for each class under the \(\text {PR}^{\text {(s)}}\) scheme; hence, it can be referred to as \(\text {PR}^{\text {(s)}}\)-Multi-buffer in our context. In [19], a content-based buffering scheme was proposed in a two-class M/G/1 queueing system. In such a scheme, each class has its own buffering mechanism: the bufferless scheme for the HP class and the infinite buffer scheme under the FCFS policy for the LP class. The buffering mechanism was further generalized in [15] for any number of classes, where each class has its own buffer (finite or infinite), and the Lexicographic optimality approach is used to manage the scheduling of multiple-access requests for a single memoryless server. As regards the \(\text {NP}^{\text {(s)}}\) priority schemes, the authors in [17] re-studied the M/M/1/\(2^{*}\) queueing model (proposed in [6, 12]) under the priority setting. This work was extended in [21], where the buffered packet experiences a deterministic deadline, beyond which it will be dropped. The buffering mechanism was generalized in the work of [20] by considering separate one-sized buffers and separate infinite buffers for each class.

For the aforementioned priority-based schemes, the common feature is that either \(\text {PR}^{\text {(s)}}\) or \(\text {NP}^{\text {(s)}}\) is employed. However, in the context of the general priority queueing system [22], it is stipulated that neither the preemptive nor the non-preemptive priority discipline can satisfy all priority classes. To clarify, the \(\text {PR}^{\text {(s)}}\) is very harsh on the LP classes due to the frequent interruptions. On the other hand, under the \(\text {NP}^{\text {(s)}}\), the HP classes are dissatisfied with being hindered by the ongoing service of the LP classes. To counter this paradox, the hybrid preemptive/non-preemptive service discipline has been introduced [22]. In this discipline, the server has the power to control the preemption decision using a discretionary rule. In [23], four different variations of this rule are mentioned.

As far as we know, no research work has considered the hybrid preemptive/non-preemptive discipline for the prioritized status-update system under the analytical framework of the AoI, except our first attempt in [24]. Moreover, there is no AoI-based research work pertaining to status update systems operating under multi-source MEC with prioritized update streams. Based on these motivations, in the current work, the hybrid preemptive/non-preemptive discipline is proposed to manage the contention between multi-priority sources requesting the intensive computations of their status updates at the edge server in an IoT-enabled MEC environment. The edge computing scheme is considered where all computational tasks are offloaded to the edge server for processing. The computing stage at the edge server is modelled as an M/M/1/2 priority queueing model with a shared buffer of size one. Here, two hybrid disciplines are deployed to govern the interaction between priority classes within the shared server and the shared buffer. For these hybrid disciplines, the probabilistic preemption approach (as a discretionary approach for preemption) is utilized. That is, upon the occurrence of a service (resp. buffer) request, the existing served (resp. buffered) packet is given a probabilistic decision either to be preempted or not. The probabilistic preemption in service (\(\text {Prob}^{\text {(s)}}\)) and the probabilistic preemption in waiting (\(\text {Prob}^{\text {(w)}}\)) are governed by distinct probability parameters to ensure the generality of the system. It is worth mentioning that the probabilistic preemption approach has not been extensively addressed in the AoI literature. In [25] and [26], the \(\text {NP}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) was proposed for a single-source system, whereas the \(\text {Prob}^{\text {(s)}}\)-Bufferless was introduced in [27] for multiple unprioritized streams. In our previous work [24], the \(\text {Prob}^{\text {(s)}}\)-Bufferless is reused for multiple prioritized streams. To the best of our knowledge, the proposed combined policy (denoted as \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\)) has never been addressed in the AoI context. The SHS approach is employed to analyze the average AoI, along with higher-order moments, for any number of priority classes. Subsequently, a numerical study on a three-class network is presented to highlight the effect of the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) policy on the AoI performance, compared with the classical priority-based models [17, 18] and our previous work [24]. Moreover, four different approaches are introduced to clarify the setting procedure of the proposed preemption probability parameters (controlling parameters). It is revealed that our proposed model, unlike the classical ones, can satisfy all priority classes through a thorough adjustment of the controlling parameters. Moreover, compared with our previous work \(\text {Prob}^{\text {(s)}}\)-Bufferless [24], the buffer functionality under the proposed \(\text {Prob}^{\text {(w)}}\) policy achieves more promising results.

Based on the foregoing, the main contributions of this work can be summarized as follows:

  • The AoI performance of the multi-priority-class status update system is addressed under an IoT-enabled MEC environment. To the best of our knowledge, this is the first work to investigate the information freshness performance of MEC systems with different priority tasks.

  • The idea of the hybrid preemptive/non-preemptive discipline is proposed under an M/M/1/2 priority queueing model, with the AoI metric being analyzed and investigated. As far as we know, this hybrid discipline has never been addressed for the case of multiple prioritized sources under the AoI analytical framework [15, 17,18,19,20,21], except our initial attempt [24].

  • The probabilistic preemption approach, as a discretionary priority discipline, is employed at both the server and the buffer independently through the combined policy \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\). To the best of our knowledge, this is the first time such a combined policy is considered in the AoI analysis.

  • The higher order AoI moments, besides the average performance, are analyzed and investigated. Throughout the AoI literature under the priority setting [15, 17,18,19,20,21], no research work has considered these further distributional AoI characteristics.

  • Four different approaches are presented to address the setting of the controlling parameters by pursuing distinct system objectives. Moreover, a low-complexity heuristic approach is suggested, yielding a near-optimal performance in terms of whole-network satisfaction.

In the sequel, our work is presented as follows. Section 2 describes the proposed system and the traffic parameter setting. The AoI analysis is presented in Sect. 3, which begins with a brief preliminary on the SHS approach. The numerical study is then presented in Sect. 4 for performance evaluation. Finally, a summary of the current work and our perception of the envisaged future work are presented in Sect. 5.

Table 1 Symbol notations used throughout the article

2 System model

2.1 System model description and assumptions

Fig. 1 The proposed multi-class IoT-enabled MEC system

Fig. 2 The detailed system description

Fig. 3 Flowchart illustrating the control unit functionality for each arrival request

As depicted in Fig. 1, a typical multi-class IoT-enabled MEC system is considered. Each of M possible sources holds a status update stream with a distinct priority class, where \(\text {S}_1\) and \(\text {S}_M\) are the highest and the lowest priority classes, respectively (Footnote 1). These sources contend to transmit their status updates for processing, monitoring and control purposes at the edge base station (BS), which incorporates the MEC server as a processing unit with higher computational power. The detailed description of the proposed system is given in Fig. 2. The system is equipped with a waiting buffer of size one to permit waiting for a busy server. The received status update packet, only after being processed at the server, is converted into readable information for the monitor. The server and the buffer are managed by the service policy and the queueing policy, respectively, which will be explained shortly. A control unit is provided to deploy the service and queueing policies in governing the multiple access requests at the base station.

The service policy. It controls the access of a newly arrived entity to a busy server. In this regard, the proposed \(\text {Prob}^{\text {(s)}}\) policy is deployed to resolve the contention between the arrived entity \(\text {E}_{\text {a}}\) (class \(\text {c}_{\text {a}}\)) and the served entity \(\text {E}_{\text {s}}\) (class \(\text {c}_{\text {s}}\)). This policy stipulates that upon the request from \(\text {E}_{\text {a}}\), a probabilistic decision is taken, either to preempt \(\text {E}_{\text {s}}\) or not, according to the preemption probability parameter \(p^s_{\text {c}_{\text {a}},\text {c}_{\text {s}}}\in [0,1]\). The cases \(p^s_{\text {c}_{\text {a}},\text {c}_{\text {s}}}=0\) and \(p^s_{\text {c}_{\text {a}},\text {c}_{\text {s}}}=1\) correspond to the strict non-preemption and the strict preemption cases, respectively. However, the priority setting between status update streams entails that \(p^s_{\text {c}_{\text {a}},\text {c}_{\text {s}}}=0\) for \(\text {c}_{\text {a}}>\text {c}_{\text {s}}\), i.e., the LP class cannot preempt the ongoing service of the HP one. Furthermore, a protection feature is assumed to be deployed alongside the \(\text {Prob}^{\text {(s)}}\) policy for the sake of the LP classes, due to the expected frequent interruptions from the HP ones, especially under higher traffic loading conditions. Under this assumption, whenever an LP class being served is given the non-preemption decision upon the request from an HP class, it becomes protected from further interruptions from all HP classes until its service completion. The significance of this protection feature will be explained subsequently in the numerical study.

The queueing policy. It regulates the access to the waiting buffer. Let \(\text {E}_{\text {ab}}\) (class \(\text {c}_{\text {ab}}\)) denote the entity requesting the buffer. This request may originate from a newly arrived entity that cannot preempt the served entity or from a preempted served entity. The \(\text {Prob}^{\text {(w)}}\) policy, independent of the \(\text {Prob}^{\text {(s)}}\) policy, is proposed to control the preemption possibility of the buffered entity \(\text {E}_{\text {b}}\) (class \(\text {c}_{\text {b}}\)), according to a distinct preemption probability parameter \(p^w_{\text {c}_{\text {ab}},\text {c}_{\text {b}}}\). It is also assumed that \(p^w_{\text {c}_{\text {ab}},\text {c}_{\text {b}}}=0\), for \(\text {c}_{\text {ab}}>\text {c}_{\text {b}}\), to reflect the priority setting; however, the protection feature is not deployed with the queueing policy. Moreover, it is assumed that at most one packet of each class is allowed to exist in the whole system (either in the server or the buffer); hence, \(\text {c}_{\text {b}}\ne \text {c}_{\text {s}}\).

Based on the foregoing, the control unit functionality is described by the flowchart shown in Fig. 3. The server status (busy/empty) is first checked by the control unit to manage the access possibility of \(\text {E}_{\text {a}}\). For a busy server, the service preemption possibility is then examined between the entity \(\text {E}_{\text {a}}\) and the served one \(\text {E}_{\text {s}}\). This phase is governed by the \(\text {Prob}^{\text {(s)}}\) policy and the consideration of the protection feature. If preemption is permitted, the preempted served packet will request the buffer (\(\text {E}_{\text {ab}}\leftarrow \text {E}_{\text {s}}\)), leaving the entity \(\text {E}_{\text {a}}\) to join the server (\(\text {E}_{\text {s}}\leftarrow \text {E}_{\text {a}}\)). On the other hand, the non-preemption decision forces the entity \(\text {E}_{\text {a}}\) to request the buffer (\(\text {E}_{\text {ab}}\leftarrow \text {E}_{\text {a}}\)). Regarding the buffer request \(\text {E}_{\text {ab}}\) (if it exists), a class-index duplication check is first performed between \(\text {E}_{\text {ab}}\) and \(\text {E}_{\text {s}}\) (\(\text {c}_{\text {ab}}{\mathop {=}\limits ^{?}}\text {c}_{\text {s}}\)). If \(\text {c}_{\text {ab}}=\text {c}_{\text {s}}\), the entity \(\text {E}_{\text {ab}}\) will be dropped according to the aforementioned system assumption that \(\text {c}_{\text {b}}\ne \text {c}_{\text {s}}\). Otherwise, the entity \(\text {E}_{\text {ab}}\) joins an empty buffer immediately (\(\text {E}_{\text {b}}\leftarrow \text {E}_{\text {ab}}\)). If the buffer is busy, the buffer preemption possibility is then checked by employing the \(\text {Prob}^{\text {(w)}}\) policy. If preemption is admitted, the buffered entity will be dropped, leaving its place to the entity \(\text {E}_{\text {ab}}\) (i.e., \(\text {E}_{\text {b}}\leftarrow \text {E}_{\text {ab}}\)). However, upon the non-preemption decision, the entity \(\text {E}_{\text {ab}}\) will be dropped.
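To make the above decision flow concrete, the following minimal Python sketch (our own illustration, not the authors’ implementation; all class and attribute names are ours) mimics the control-unit logic of Fig. 3 for one server with a one-sized shared buffer. The dictionaries p_s and p_w hold the preemption probabilities \(p^s_{\text {c}_{\text {a}},\text {c}_{\text {s}}}\) and \(p^w_{\text {c}_{\text {ab}},\text {c}_{\text {b}}}\), with missing entries treated as zero (covering the cases \(\text {c}_{\text {a}}>\text {c}_{\text {s}}\) and \(\text {c}_{\text {ab}}>\text {c}_{\text {b}}\)).

import random

class ControlUnit:
    """Sketch of the Fig. 3 decision flow for the Prob(s)-Prob(w) policy.
    Classes are indexed 1..M, with 1 the highest priority."""

    def __init__(self, p_s, p_w):
        self.p_s, self.p_w = p_s, p_w   # {(requesting class, occupant class): probability}
        self.served = None              # class of the packet in service (None = empty server)
        self.buffered = None            # class of the packet in the buffer (None = empty buffer)
        self.protected = False          # protection flag of the served packet

    def arrival(self, c_a):
        """A fresh status update of class c_a arrives at the base station."""
        if self.served is None:                         # empty server: start service directly
            self.served, self.protected = c_a, False
            return
        c_s = self.served
        hp_request = c_a < c_s                          # request from a strictly higher-priority class
        blocked_by_protection = self.protected and hp_request
        preempt = (not blocked_by_protection) and \
                  random.random() < self.p_s.get((c_a, c_s), 0.0)
        if preempt:                                     # served packet is pushed towards the buffer
            c_ab = c_s
            self.served, self.protected = c_a, False
        else:
            if hp_request:                              # non-preemption decision upon an HP request
                self.protected = True                   # -> the served LP packet becomes protected
            c_ab = c_a                                  # blocked arrival requests the buffer instead
        self._buffer_request(c_ab)

    def _buffer_request(self, c_ab):
        """Queueing policy: handle the entity E_ab requesting the one-sized buffer."""
        if c_ab == self.served:                         # at most one packet per class in the system
            return                                      # -> duplicate is dropped
        if self.buffered is None:
            self.buffered = c_ab                        # empty buffer: join immediately
        elif random.random() < self.p_w.get((c_ab, self.buffered), 0.0):
            self.buffered = c_ab                        # buffered packet is preempted (dropped)
        # otherwise the requesting packet is dropped

    def departure(self):
        """Service completion: the buffered packet (if any) takes over the server."""
        self.served, self.protected = self.buffered, False
        self.buffered = None

Note that self-preemption is covered by the same lookup, since \(p^s_{\text {c},\text {c}}\) and \(p^w_{\text {c},\text {c}}\) may be nonzero, whereas the protection flag only blocks requests from strictly higher-priority classes.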

For the benefit of our reader, a further demonstration of the proposed system dynamics is presented in Appendix 1 using a working example on three priority classes.

2.2 Traffic parameters

For mathematical manipulation, it is assumed that the arrival process of each prioritized stream m (\(1\le m\le M\)) follows the Poisson process with rate \(\lambda _m\). Therefore, for each class m, the total arrival rate of its HP classes and its LP classes can be denoted as \(\hat{\lambda }_m=\sum _{i=1}^{m-1} \lambda _i\) and \(\check{\lambda }_m=\sum _{i=m+1}^{M} \lambda _i\), respectively. Hence, the total arrival rate can be represented as \(\lambda _{\text {total}}=\hat{\lambda }_m+\lambda _m+\check{\lambda }_m\). The exponential service time distribution is also assumed for all priority classes with a service rate \(\mu _m\). Accordingly, the total offered load from the whole network is \(\rho _{\text {total}}=\sum _{i=1}^{M} \lambda _i/\mu _i\).
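For illustration only (the helper name and indexing convention are ours), these aggregates follow directly from the per-class rates:

def rate_aggregates(lams, mus, m):
    """lams, mus: per-class arrival/service rates indexed 1..M (index 0 unused);
    m: class of interest. Returns (hat_lambda_m, check_lambda_m, lambda_total, rho_total)."""
    hat_m = sum(lams[1:m])                 # sum_{i=1}^{m-1} lambda_i  (HP classes)
    check_m = sum(lams[m + 1:])            # sum_{i=m+1}^{M} lambda_i  (LP classes)
    lam_total = hat_m + lams[m] + check_m
    rho_total = sum(l / mu for l, mu in zip(lams[1:], mus[1:]))
    return hat_m, check_m, lam_total, rho_total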

3 Performance analysis

The aim is to evaluate the kth AoI moment for each class h (\(\textrm{E}[\Delta _h^{k}]\), \(k\ge 1\), \(1\le h\le M\)). Based on the system model description in Sect. 2, the proposed system can be mathematically considered as a modified M/M/1/2 priority queueing system due to the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) policy; hence, it can be referred to as M/M/1/\(2^*\)/(\(\textbf{P}^s,\textbf{P}^w\)). The matrix \(\textbf{P}^s\) (resp. \(\textbf{P}^w\)) contains all preemption probability parameters \([p_{i,j}^s]\) (resp. \([p_{i,j}^w]\)) deployed by the \(\text {Prob}^{\text {(s)}}\) (resp. \(\text {Prob}^{\text {(w)}}\)) policy. It should be noted that both \(\textbf{P}^s\) and \(\textbf{P}^w\) are upper triangular matrices to obey the priority setting between classes as mentioned in Sect. 2.

For the proposed queueing abstraction M/M/1/\(2^*\)/(\(\textbf{P}^s\),\(\textbf{P}^w\)), due to its finite-state nature (capacity of two packets) and the assumption of a memoryless server, the stochastic hybrid system (SHS) approach is a tractable tool for analyzing the AoI performance [12].

Accordingly, this section is organized as follows. A brief preliminary on the AoI-oriented SHS approach is firstly introduced in Sect. 3.1. Then, the detailed SHS analysis of our proposed model is presented in Sect. 3.2, ending up with an algorithm that can be used to extract the AoI moments for the proposed model under a general number of priority classes. After that, Sect. 3.2.1 considers the case of a two-class network, through which closed-form results of the average AoI are provided.

3.1 Preliminary on the AoI-oriented SHS approach

The SHS is a randomly evolving system whose state is a hybrid of both continuous and discrete components. In the AoI context, the system behaviour can be described by the hybrid state \((q(t),\textbf{x}(t))\). The discrete component \(q(t)\in Q=\{0,1,...,m\}\) signifies all possible states reflecting the system occupancy upon stochastic events, such as a packet arrival or a service completion. On the other hand, the continuous component \(\textbf{x}(t)=[x_0(t),x_1(t),...,x_n(t)]\in \mathbb {R}^{n+1}\) represents the continuous-time growth of the AoI signals measured for a certain source stream. The signal \(x_0(t)\) measures the AoI as perceived at the monitor (after successful service completion), while the signals \(x_1(t),...,x_n(t)\) are the corresponding measures as seen by virtual monitors distributed within the system to track the packet AoI process from its arrival until its successful departure. For instance, these virtual monitors can be located at the server or at a fixed queue position. Next, the mathematical model related to the evolution of the hybrid state is presented.

First, in the case of a memoryless server, the continuous-time Markov chain (CTMC) \(\{q(t)\}\) can be used to model the evolution of the discrete states q(t) with a set of transitions L. Upon each transition \(l\in L\), the discrete state changes from \(q_l\) to \(q'_l\) with a transition rate \(\lambda ^{(l)}\delta _{q_l,q(t)}\). The Kronecker delta function is used here to restrict the transition rate \(\lambda ^{(l)}\) to the source state \(q(t)=q_l\) of the l-th transition. In this regard, \(L'_q=\{l \in L:q'_l=q \}\) and \(L_q=\{l \in L:q_l=q \}\) are referred to as the incoming and the outgoing transitions of state q, respectively.

As regards the continuous states \(\textbf{x}(t)\), their progression is modeled on two occasions. First, upon each discrete transition l, \(\textbf{x}(t)\) encounters a resetting: \(\textbf{x}\) resets to \(\mathbf {x'}=\textbf{x} \textbf{A}_l\). The matrix \(\textbf{A}_l \in \{0,1\}^{(n+1)\times (n+1)}\) is known as the reset-maps matrix. Second, during the holding time at each state q(t), \(\textbf{x}(t)\) increases linearly (\(\dot{\textbf{x}}=\textbf{1}_{n+1}\)), abiding by the notion of piecewise linear SHS [12].

Having modelled the SHS behaviour, let us declare the following notions for each state \(q\in Q\): the state probability, \(\pi _q(t)=\textrm{P}[q(t)=q]=\textrm{E}[\delta _{q,q(t)}]\); and the kth-moment correlation, \(v_{q,j}^{(k)}(t)=\textrm{E}[x_j^k(t) \delta _{q,q(t)}],\ 0\le j \le n,\ k\ge 0\). For all \(0\le j \le n\), the corresponding correlations can be combined in the kth-moment correlation vector, \(\textbf{v}_{q}^{(k)}(t)=[v_{q,0}^{(k)}(t),...,v_{q,n}^{(k)}(t)]=\textrm{E}[\textbf{x}^k(t) \delta _{q,q(t)}]\). It should be noted that \(\textbf{v}_{q}^{(0)}(t)=\textbf{1}_{n+1} \pi _q(t)\). Hence, using the law of total expectation, the kth moment of the process \(\textbf{x}^k(t)\) can be deduced as

$$\begin{aligned} \textrm{E}[\textbf{x}^k(t)]&=\sum _{{q\in Q}} \textrm{E}[\textbf{x}^k(t)|q(t)=q]\textrm{P}[q(t)=q]=\sum _{{q\in Q}}\textrm{E}[\textbf{x}^k(t) \delta _{q,q(t)}]=\sum _{{q\in Q}} \textbf{v}_{q}^{(k)}(t). \end{aligned}$$
(1)

After declaring the foregoing quantities, the SHS analysis starts by evaluating the state probabilities \(\pi _q(t)\) (\(\forall q\in Q\)). Since the ergodicity of the CTMC \(\{q(t)\}\) is a substantial assumption in the AoI analysis [12, 13], the state probability \(\pi _q(t)\) converges to \(\bar{\pi }_q\). Based on this assumption and following [13, Lemma 1], the stationary state probability \(\bar{\pi }_q\) (\(\forall q\in Q\)) can be deduced by deploying the well-known global balance equation for each \(q\in Q\):

$$\begin{aligned} \bar{\pi }_q \left( \sum _{l\in L_q}\lambda ^{(l)}\right) =\sum _{l \in L'_{q}} \lambda ^{(l)} \bar{\pi }_{q_{l}},\ \ q\in Q. \end{aligned}$$
(2)

Then, the constructed system of linear equations is to be solved simultaneously with the normalization equation,

$$\begin{aligned} \sum _{q\in Q}\bar{\pi }_q=1. \end{aligned}$$
(3)

It should be noted that the stationarity of \(\pi _q(t)\rightarrow \bar{\pi }_q\) does not by itself imply the convergence of the correlation \(\textbf{v}_q^{(k)}(t)\). This is because the correlation \(\textbf{v}_q^{(k)}(t)\) is an AoI measurement process, which is distinct from the evolution of the system occupancy q(t). In this regard, a detailed explanation of the convergence of \(\textbf{v}_{q}^{(k)}(t)\) is given in the works of [12] and [13]. Accordingly, the correlation \(\textbf{v}_{q}^{(k)}(t)\) converges to \(\bar{\textbf{v}}_q^{(k)}\) (\(k\ge 1\)), which satisfies the following system of linear equations, \(\forall q\in Q\):

$$\begin{aligned} \bar{\textbf{v}}_q^{(k)}\left( \sum _{l \in L_q}\lambda ^{(l)}\right) =k\bar{\textbf{v}}_q^{(k-1)}+\sum _{l \in L'_q} \lambda ^{(l)} \bar{\textbf{v}}_{q_l}^{(k)} \textbf{A}_l,\ k\ge 1. \end{aligned}$$
(4)

Using (4), the limiting correlation vector \(\bar{\textbf{v}}_q^{(k)}\) (\(k\ge 1\)) is computed recursively. After that, the kth AoI moment at the monitor can be deduced using Eq. (1) as follows:

$$\begin{aligned} \textrm{E}[\Delta ^{k}]=\textrm{E}[x_0^{k}]=\lim _{t\rightarrow \infty }\textrm{E}[x_0^k(t)]&=\sum _{q\in Q}\bar{v}_{q,0}^{(k)}. \end{aligned}$$
(5)
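As a compact numerical sketch of this recipe (our own illustration; the state list, transition tuples and reset maps \(\textbf{A}_l\) are assumed to be tabulated as in Table 2, and all function and variable names are ours), the following Python function solves (2)-(3) for \(\bar{\pi }_q\), then applies the recursion (4) and Eq. (5) to return the first few AoI moments of the monitored signal \(x_0\):

import numpy as np

def shs_aoi_moments(states, transitions, n, k_max):
    """states: list of discrete states q; transitions: list of (q_from, q_to, rate, A)
    with A the (n x n) reset map; n: number of AoI signals (n = 3 in Sect. 3.2)."""
    idx = {q: i for i, q in enumerate(states)}
    nq = len(states)

    # stationary probabilities: global balance (2) plus normalization (3)
    B = np.zeros((nq, nq))
    for q_from, q_to, lam, _ in transitions:
        B[idx[q_from], idx[q_from]] -= lam            # outgoing rate of q_from
        B[idx[q_to], idx[q_from]] += lam              # incoming contribution to q_to
    B[-1, :] = 1.0                                    # replace one redundant balance row
    rhs = np.zeros(nq)
    rhs[-1] = 1.0
    pi = np.linalg.solve(B, rhs)

    # coefficient matrix of Eq. (4); it is the same for every moment order k
    M = np.zeros((nq * n, nq * n))
    for q_from, q_to, lam, A in transitions:
        i, j = idx[q_from], idx[q_to]
        M[i*n:(i+1)*n, i*n:(i+1)*n] += lam * np.eye(n)                # v_q times its total outgoing rate
        M[j*n:(j+1)*n, i*n:(i+1)*n] -= lam * np.asarray(A, float).T   # incoming term lam * v_{q_l} A_l

    v_prev = np.repeat(pi, n)                         # v_q^(0) = pi_q * 1
    moments = []
    for k in range(1, k_max + 1):
        v = np.linalg.solve(M, k * v_prev)            # recursion (4): M v^(k) = k v^(k-1)
        moments.append(sum(v[idx[q] * n] for q in states))   # Eq. (5): sum of the monitor components
        v_prev = v
    return moments                                    # [E[Delta], E[Delta^2], ..., E[Delta^k_max]]

Self-transitions (\(q_l=q'_l\)) are handled automatically: they cancel out in the balance matrix but still contribute their reset maps to the correlation system, exactly as required by (4).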

3.2 The SHS analysis of the proposed model

In this section, the mathematical evaluation of \(\textrm{E}[\Delta _h^{k}]\) (\(k\ge 1\), \(1\le h\le M\)) is presented using the aforementioned SHS analysis. The subsequent analysis is based on the perspective of class h, which is the class of interest.

Firstly, a two-dimensional CTMC \(\{q(t)\}\) is deployed to keep track of the whole system occupancy. More specifically, the first and the second dimensions of q(t) refer to the index of the served and the queued packets, respectively. In this regard, according to the system description in Sect. 2, the discrete state space can be represented as \(Q=\mathcal {T}\cup \mathcal {U}\cup \mathcal {V}\cup \mathcal {W}\), where the disjoint sets \(\mathcal {T},\mathcal {U},\mathcal {V}\), and \(\mathcal {W}\) are as follows:

$$\begin{aligned}&\mathcal {T}=\{(0,0)\},\quad \mathcal {U}=\{(m,0)|1\le m\le M\},\nonumber \\&\mathcal {V}=\{(m,n)|1\le m\le M-1,m+1\le n\le M\},\nonumber \\&\mathcal {W}=\{(m^{*},n)|2\le m\le M,1\le n\le M\backslash m\}. \end{aligned}$$
(6)

For more clarification, the set \(\mathcal {T}\) represents the case of an empty system. The set \(\mathcal {U}\) symbolizes the case of an empty buffer while the server is engaged with any class m (\(1\le m\le M\)). The two sets \(\mathcal {V}\) and \(\mathcal {W}\) represent the case of a busy-buffer, busy-server system. The main difference between these two sets is whether the class being served has gained the protection feature or not. The (\(*\)) notation is used to distinguish the classes that have gained the protection feature. It should be noted that whenever the served class m possesses the protection feature, the buffered entity may belong to any class except m, as noticed in set \(\mathcal {W}\). Otherwise, the LP class is always awaiting the HP served class (as in set \(\mathcal {V}\)).

According to the system structure described in Sect. 2, three AoI signals are to be tracked: \(\textbf{x}(t)=[x_0(t),x_1(t),x_2(t)]\). These signals \(x_0(t)\), \(x_1(t)\) and \(x_2(t)\) can be considered as the dedicated gauges of the AoI processes at the monitor, server and buffer, respectively. Note that the monitor’s gauge \(x_0(t)\) is only affected by the successful packet departure of the class of interest h, whereas both \(x_1(t)\) and \(x_2(t)\) work for any class \(1\le m\le M\). Moreover, as mentioned earlier, all these measures increase at a unit rate (\(\dot{\textbf{x}}(t)=1\)) during the holding time at each discrete state q(t).

Table 2 summarizes the SHS CTMC with all discrete state transitions. In this table, the transition sets \(L_{\mathcal {T}}\), \(L_{\mathcal {U}}\), \(L_{\mathcal {V}}\), and \(L_{\mathcal {W}}\) describe the corresponding inner transitions of the discrete sets \(\mathcal {T}\), \(\mathcal {U}\), \(\mathcal {V}\) and \(\mathcal {W}\), respectively. For each transition set, there are transition cases l described by the tuple \(a_l=(q_l,q'_l,\lambda ^{(l)},\textbf{x}'(t),\textbf{A}_l)\). Moreover, the self-transitions (i.e., when \(q_l=q'_l\)) stand for the self-preemption occurrence within the same class. However, only the self-preemptions related to the class of interest h are considered, because the self-preemptions related to the other classes have no effect on the AoI evolution of class h.

Table 2 Transition table of the SHS CTMC

In the following, the main stochastic events incorporated in Table 2 are summarized. The detailed explanation of this table is left to Appendix 1, with an illustrative example on the case of a three-class network (Fig. 19 and Table 6).

  • Departure-to-monitor events. The departed packet may leave the system empty (\(L_{\mathcal {T}}\)) or permit the buffered packet to take its place (\(\mathcal {U}2\), \(\mathcal {U}2'\), \(\mathcal {U}3\), \(\mathcal {U}3'\)).

  • Fresh arrival entry into an empty server (\(\mathcal {U}1\)).

  • Fresh arrival preemption of the ongoing service. Here, the preempted packet may join an empty buffer (\(\mathcal {V}2\)), preempt the buffered packet (\(\mathcal {V}4\)), or fail to preempt the buffered packet and it consequently gets dropped (\(\mathcal {V}5\)).

  • Fresh arrival blocking by the ongoing service. The blocked arrival may instead join the empty buffer (\(\mathcal {V}1\) and \(\mathcal {W}2\)), preempt the buffered packet (\(\mathcal {V}3\), \(\mathcal {W}1\) and \(\mathcal {W}3\)), or fail to preempt the buffered packet, so it gets dropped (\(\mathcal {W}4\)).

  • The self-preemptions related to the class of interest h. These preemptions occur in either the server (\(\mathcal {U}4\), \(\mathcal {V}6\) and \(\mathcal {W}5\)) or the buffer (\(\mathcal {V}7\) and \(\mathcal {W}6\)).

Going forward, concurrently with the discrete state transitions explained above, the AoI signals will be reset from \(\textbf{x}(t)=[x_0(t),x_1(t),x_2(t)]\) to \(\textbf{x}'(t)\) \(=[x'_0(t),\) \(x'_1(t),\) \(x'_2(t)]\) due to the following actions; otherwise, the AoI measures continue increasing without resetting:

  1. If a fresh arrival joins the server (e.g., \(\mathcal {U}1\), \(\mathcal {U}4\), \(\mathcal {V}2\) and \(\mathcal {V}4\)), the corresponding gauge measure starts from scratch (\(x'_1(t)=0\)). Similarly, \(x'_2(t)=0\) upon a fresh arrival entry into the buffer (e.g., \(\mathcal {V}1\), \(\mathcal {V}3\), \(\mathcal {W}1\), \(\mathcal {W}2\) and \(\mathcal {W}3\)).

  2. If the buffered packet enters an empty server, its elapsed age process measured by the buffer’s gauge \(x_2(t)\) will be resumed by the server’s gauge, i.e., \(x'_1(t)=x_2(t)\) (\(\mathcal {U}2\), \(\mathcal {U}2'\), \(\mathcal {U}3\) and \(\mathcal {U}3'\)). The same situation arises if the served packet is preempted and joins the buffer; therefore, \(x'_2(t)=x_1(t)\) (\(\mathcal {V}2\) and \(\mathcal {V}4\)).

  3. If the class of interest h departs successfully from the server to the monitor, the monitor’s gauge will be reset to the last measure of the server’s gauge, i.e., \(x'_0(t)=x_1(t)\). This is clear in cases \(\mathcal {T}2\), \(\mathcal {U}2\) and \(\mathcal {U}3\). An illustrative reset map combining items 2 and 3 is sketched right after this list.
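For concreteness, consider a \(\mathcal {U}2\)-type transition, where a class-h packet departs to the monitor and the buffered packet takes over the server: items 2 and 3 above give \(x'_0=x_1\) and \(x'_1=x_2\). Under the additional assumption (ours, for illustration only; the exact convention is fixed in Table 2) that the gauge of the now-empty buffer is reset to zero, the corresponding reset map reads

$$\begin{aligned} \textbf{x}'=\textbf{x}\textbf{A}_{\mathcal {U}2}=[x_0\ x_1\ x_2]\begin{bmatrix} 0&{}0&{}0\\ 1&{}0&{}0\\ 0&{}1&{}0 \end{bmatrix}=[x_1\ x_2\ 0]. \end{aligned}$$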

The SHS analysis begins with evaluating \(\bar{\pi }_q\) (\(\forall q\in Q\)) by solving the following global balance equations together with the normalization Eq. (3):

  • for \(q\in \mathcal {T}=\{(0,0)\}\),

    $$\begin{aligned} \bar{\pi }_{q} \big (\lambda _{\text {total}} \big )=\sum _{{i=1}}^{M}\bar{\pi }_{(i,0)}\mu _i, \end{aligned}$$
    (7)
  • for \(q\in \mathcal {U}=\{(m,0)|1\le m\le M\}\),

    $$\begin{aligned} \bar{\pi }_{q} \big (\mu _m+\hat{\lambda }_m+\check{\lambda }_m\big )=\bar{\pi }_{(0,0)}\lambda _m+\sum _{{i=1}}^{m-1}\bar{\pi }_{(i,m)} \mu _i+\sum _{{2\le i\le M\backslash m}}^{}\bar{\pi }_{(i^{*},m)} \mu _i, \end{aligned}$$
    (8)
  • for \(q\in \mathcal {V}=\{(m,n)|1\le m\le M-1,m+1\le n\le M\}\),

    $$\begin{aligned}&\bar{\pi }_{q}\big (\mu _m+\hat{\lambda }_m\big )=\bar{\pi }_{(m,0)}\lambda _n+\bar{\pi }_{(n,0)} \lambda _m p_{m,n}^s+\lambda _n \sum _{{i=n+1}}^{M}\bar{\pi }_{(m,i)}p_{n,i}^w\nonumber \\&\qquad +\lambda _m p_{m,n}^s\sum _{{i=n+1}}^{M}\bar{\pi }_{(n,i)} p_{n,i}^w+\lambda _m \sum _{{i=m+1}}^{n-1}\bar{\pi }_{(i,n)}p_{m,i}^s(1-p_{i,n}^w), \end{aligned}$$
    (9)
  • for \(q\in \mathcal {W}=\{(m^{*},n)|2\le m\le M,1\le n\le M\backslash m\}\),

    $$\begin{aligned}&\bar{\pi }_{q}\big (\mu _m+\sum _{{1\le i\le n-1 \backslash m}}^{}\lambda _i p_{i,n}^w\big )=\lambda _n\sum _{{n+1\le i\le M \backslash m}}^{}\bar{\pi }_{(m^{*},i)} p_{n,i}^w+1_{n<m}(n)\ \lambda _n (1-p_{n,m}^s) \nonumber \\&\quad \Big (\bar{\pi }_{(m,0)}+\sum _{{i=m+1}}^{M}\bar{\pi }_{(m,i)}p_{n,i}^w\Big )+1_{n>m}(n)\Big (\bar{\pi }_{(m,n)}\sum _{{i=1}}^{m-1}\lambda _i (1-p_{i,m}^s)(1-p_{i,n}^w)\Big ). \end{aligned}$$
    (10)

Then, \(\bar{\textbf{v}}_q^{(1)}\) is to be calculated by applying the system of linear equations described by (4) at \(k=1\) as follows:

  • for \(q\in \mathcal {T}=\{(0,0)\}\),

    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{q}\big (\lambda _{\text {total}} \big )=\bar{\pi }_{q}\textbf{1}+\mu _h \bar{\textbf{v}}^{(1)}_{(h,0)} \textbf{A}_{\mathcal {T}2}+ \sum _{{1\le i\le M\backslash h}}^{}\mu _i\bar{\textbf{v}}^{(1)}_{(i,0)}\textbf{A}_{\mathcal {T}1}, \end{aligned}$$
    (11)
  • for \(q\in \mathcal {U}=\{(m,0)|1\le m\le M\}\),

    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{q}\big (\mu _m+\hat{\lambda }_m+\check{\lambda }_m+\delta _{h,m} \lambda _h p_{h,h}^s\big )=\bar{\pi }_{q}\textbf{1}+\lambda _m \bar{\textbf{v}}^{(1)}_{(0,0)}\textbf{A}_{\mathcal {U}1}\nonumber \\&\qquad +\sum _{i=1}^{m-1}\mu _i\Big (\delta _{h,i} \bar{\textbf{v}}^{(1)}_{(i,m)}\textbf{A}_{\mathcal {U}2}+ (1-\delta _{h,i})\bar{\textbf{v}}^{(1)}_{i,m} \textbf{A}_{\mathcal {U}2'}\Big )+\sum _{{2\le i\le M\backslash m}}^{}\mu _i\Big (\delta _{h,i} \bar{\textbf{v}}^{(1)}_{(i^{*},m)}\textbf{A}_{\mathcal {U}3}\nonumber \\&\qquad +(1-\delta _{h,i})\bar{\textbf{v}}^{(1)}_{(i^{*},m)}\textbf{A}_{\mathcal {U}3'}\Big )+\delta _{h,m}\lambda _h p_{h,h}^s \bar{\textbf{v}}^{(1)}_{q}\textbf{A}_{\mathcal {U}4}, \end{aligned}$$
    (12)
  • for \(q\in \mathcal {V}=\{(m,n)|1\le m\le M-1,m+1\le n\le M\}\),

    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{q}\big (\mu _m+\hat{\lambda }_m+\delta _{h,m} \lambda _h p_{h,h}^s+\delta _{h,n} \lambda _h p_{h,h}^w\big )=\bar{\pi }_{q}\textbf{1}+\lambda _n \bar{\textbf{v}}^{(1)}_{(m,0)}\textbf{A}_{\mathcal {V}1}\nonumber \\ {}&+\lambda _m p_{m,n}^s\bar{\textbf{v}}^{(1)}_{(n,0)}\textbf{A}_{\mathcal {V}2}+\lambda _n \sum _{{i=n+1}}^{M}p_{n,i}^w \bar{\textbf{v}}^{(1)}_{(m,i)}\textbf{A}_{\mathcal {V}3}+\lambda _m p_{m,n}^s \sum _{{i=n+1}}^{M} p_{n,i}^w \bar{\textbf{v}}^{(1)}_{(n,i)}\textbf{A}_{\mathcal {V}4}\nonumber \\ {}&+\lambda _m \sum _{{i=m+1}}^{n-1}p_{m,i}^s(1-p_{i,n}^w)\bar{\textbf{v}}^{(1)}_{(i,n)}\textbf{A}_{\mathcal {V}5}+\delta _{h,m}\lambda _h p_{h,h}^s \bar{\textbf{v}}^{(1)}_{q}\textbf{A}_{\mathcal {V}6}+\delta _{h,n}\lambda _h p_{h,h}^w \bar{\textbf{v}}^{(1)}_{q}\textbf{A}_{\mathcal {V}7}, \end{aligned}$$
    (13)
  • for \(q\in \mathcal {W}=\{(m^{*},n)|2\le m\le M,1\le n\le M \backslash m\}\),

    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{q}\big (\mu _m+\sum _{{1\le i\le n-1 \backslash m}}^{}\lambda _i p_{i,n}^w+\lambda _h(\delta _{h,m} p_{h,h}^s+\delta _{h,n}p_{h,h}^w)\big )=\bar{\pi }_{q}\textbf{1}\nonumber \\&\qquad +\lambda _n\sum _{{n+1\le i\le M\backslash m}}^{}p_{n,i}^w\bar{\textbf{v}}^{(1)}_{(m^{*},i)}\textbf{A}_{\mathcal {W}1}+\text {1}_{n<m}(n)\ \lambda _n\times (1-p_{n,m}^s)\times \Big (\bar{\textbf{v}}^{(1)}_{(m,0)}\textbf{A}_{\mathcal {W}2}\nonumber \\&\qquad +\sum _{{i=m+1}}^{M}p_{n,i}^w\bar{\textbf{v}}^{(1)}_{(m,i)}\textbf{A}_{\mathcal {W}3}\Big )+\text {1}_{n>m}(n)\Big (\bar{\textbf{v}}^{(1)}_{(m,n)}\textbf{A}_{\mathcal {W}4}\sum _{{i=1}}^{m-1}\lambda _i (1-p_{i,m}^s)(1-p_{i,n}^w)\Big )\nonumber \\ {}&+\delta _{h,m}\lambda _h p_{h,h}^s \bar{\textbf{v}}^{(1)}_{q}\textbf{A}_{\mathcal {W}5}+\delta _{h,n}\lambda _h p_{h,h}^w\bar{\textbf{v}}^{(1)}_{q}\textbf{A}_{\mathcal {W}6}. \end{aligned}$$
    (14)

After evaluating \(\bar{\textbf{v}}_q^{(1)}\), the higher-moment correlations \(\bar{\textbf{v}}_q^{(k)}\) (\(k>1\)) can be evaluated recursively by applying the system of linear equations described by (4). Finally, \(\textrm{E}[\Delta _h^k]\) can be evaluated using Eq. (5). The foregoing steps can be applied for any class of interest h (\(1\le h\le M\)).

The algorithmic approach of the foregoing SHS analysis for any number of classes M can be summarized in Algorithm 1 as follows:

Algorithm 1 The SHS algorithm of the proposed scheme

By following this algorithm, an exact evaluation of \(\textrm{E}[\Delta _h^k]\) (for all \(1\le h\le M\)) can be obtained, since deriving closed-form expressions under a general number of priority classes is more involved. However, in the sequel of this section, closed-form results for the average AoI are derived under a case study on a two-class network.

3.2.1 Analytical case study on a two-class network

In such a case, the corresponding \(\textbf{P}^s\) and \(\textbf{P}^w\) are as follows:

$$\begin{aligned} \textbf{P}^s= \begin{bmatrix} p_{1,1}^s&{}p_{1,2}^s\\ 0&{}p_{2,2}^s \end{bmatrix},\quad \textbf{P}^w=\begin{bmatrix} p_{1,1}^w&{}\sim \\ 0&{}p_{2,2}^w \end{bmatrix}. \end{aligned}$$

It should be noted that we ignore the entry \(p_{1,2}^w\). This is because whenever class 2 is awaiting the service of class 1, any fresh arrival of class 1 either replaces the existing class-1 packet being served or gets dropped from the system (according to \(p_{1,1}^s\)). Subsequently, the steps demonstrated in Algorithm 1 are followed:

  1. Constructing the discrete state space Q:

    $$\begin{aligned}&\mathcal {T}=\{(0,0)\},\ \mathcal {U}=\{(1,0),(2,0)\},\ \mathcal {V}=\{(1,2)\},\ \mathcal {W}=\{(2^{*},1)\}. \end{aligned}$$
    (15)
  2. Constructing the transition table while considering that class 1 is the class of interest (\(h=1\)), as shown in Table 3.

  3. Finding the stationary state probabilities \(\bar{\pi }_q\) (\(\forall q\in Q\)) by solving the following system of equations:

    $$\begin{aligned}&\bar{\pi }_{(0,0)} \big (\lambda _1+\lambda _2 \big )=\bar{\pi }_{(1,0)}\mu _1+\bar{\pi }_{(2,0)}\mu _2, \end{aligned}$$
    (16)
    $$\begin{aligned}&\bar{\pi }_{(1,0)} \big (\mu _1+\lambda _2\big )=\bar{\pi }_{(0,0)}\lambda _1+\bar{\pi }_{(2^{*},1)}\mu _2, \end{aligned}$$
    (17)
    $$\begin{aligned}&\bar{\pi }_{(2,0)} \big (\mu _2+\lambda _1\big )=\bar{\pi }_{(0,0)}\lambda _2+\bar{\pi }_{(1,2)}\mu _1, \end{aligned}$$
    (18)
    $$\begin{aligned}&\bar{\pi }_{(1,2)}\big (\mu _1\big )=\bar{\pi }_{(1,0)}\lambda _2+\bar{\pi }_{(2,0)} \lambda _1 p_{1,2}^s, \end{aligned}$$
    (19)
    $$\begin{aligned}&\bar{\pi }_{(2^{*},1)}\big (\mu _2 \big )=\bar{\pi }_{(2,0)}\lambda _1 (1-p_{1,2}^s), \end{aligned}$$
    (20)
    $$\begin{aligned}&\sum _{q\in Q}\bar{\pi }_q=1. \end{aligned}$$
    (21)
  4. For \(h=1\) (the class of interest), evaluating the \(1^{\text {st}}\)-moment correlation vector \(\bar{\textbf{v}}_q^{(1)}\) (\(\forall q\in Q\)) by solving the following system of equations:

    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{(0,0)}\big (\lambda _1+\lambda _2 \big )=\bar{\pi }_{(0,0)}\textbf{1}+\mu _1 \bar{\textbf{v}}^{(1)}_{(1,0)} \textbf{A}_{\mathcal {T}2}+ \mu _2\bar{\textbf{v}}^{(1)}_{(2,0)}\textbf{A}_{\mathcal {T}1}, \end{aligned}$$
    (22)
    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{(1,0)}\big (\mu _1+\lambda _2+\lambda _1 p_{1,1}^s\big )=\bar{\pi }_{(1,0)}\textbf{1}+\lambda _1 \bar{\textbf{v}}^{(1)}_{(0,0)}\textbf{A}_{\mathcal {U}1}+\mu _2 \bar{\textbf{v}}^{(1)}_{(2^{*},1)}\textbf{A}_{\mathcal {U}3}\nonumber \\&\qquad +\lambda _1 p_{1,1}^s \bar{\textbf{v}}^{(1)}_{(1,0)}\textbf{A}_{\mathcal {U}4}, \end{aligned}$$
    (23)
    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{(2,0)}\big (\mu _2+\lambda _1\big )=\bar{\pi }_{(2,0)}\textbf{1}+\lambda _2 \bar{\textbf{v}}^{(1)}_{(0,0)}\textbf{A}_{\mathcal {U}1}+\mu _1 \bar{\textbf{v}}^{(1)}_{(1,2)}\textbf{A}_{\mathcal {U}2}, \end{aligned}$$
    (24)
    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{(1,2)}\big (\mu _1+\lambda _1 p_{1,1}^s\big )=\bar{\pi }_{(1,2)}\textbf{1}+\lambda _2 \bar{\textbf{v}}^{(1)}_{(1,0)}\textbf{A}_{\mathcal {V}1}+\lambda _1 p_{1,2}^s\bar{\textbf{v}}^{(1)}_{(2,0)}\textbf{A}_{\mathcal {V}2}\nonumber \\&\qquad +\lambda _1 p_{1,1}^s \bar{\textbf{v}}^{(1)}_{(1,2)}\textbf{A}_{\mathcal {V}6}, \end{aligned}$$
    (25)
    $$\begin{aligned}&\bar{\textbf{v}}^{(1)}_{(2^{*},1)}\big (\mu _2+\lambda _1 p_{1,1}^w\big )=\bar{\pi }_{(2^{*},1)}\textbf{1}+\lambda _1(1-p_{1,2}^s)\bar{\textbf{v}}^{(1)}_{(2,0)}\textbf{A}_{\mathcal {W}2}+\lambda _1 p_{1,1}^w\bar{\textbf{v}}^{(1)}_{(2^{*},1)}\textbf{A}_{\mathcal {W}6}. \end{aligned}$$
    (26)
  5. Evaluating \(\textrm{E}[\Delta _1]\) using Eq. (5):

    $$\begin{aligned}&\textrm{E}[\Delta _1]=\bar{v}_{(0,0),0}^{(1)}+\bar{v}_{(1,0),0}^{(1)}+\bar{v}_{(2,0),0}^{(1)}+\bar{v}_{(1,2),0}^{(1)}+\bar{v}_{(2^{*},1),0}^{(1)}. \end{aligned}$$
    (27)
  6. Similarly, finding \(\textrm{E}[\Delta _2]\) by repeating steps 2, 4 and 5, taking into consideration that \(h=2\) (the class of interest).

Table 3 Transition table of the SHS CTMC, considering class 1 is the one of interest (h=1)

In order to find closed-form results of the average AoI for the two classes, let us consider homogeneous arrival and service processes, that is, \(\lambda _1=\lambda _2=\lambda\) and \(\mu _1=\mu _2=\mu\) (hence \(\rho =\frac{\lambda }{\mu }\)). This relaxation is adopted to keep the final results in a concise form. Moreover, since we are interested in the average AoI results, self-preemption is admitted for both classes based on a conclusion reached subsequently in Sect. 4.1.1, that is, \(p_{1,1}^s=p_{1,1}^w=1\) and \(p_{2,2}^s=p_{2,2}^w=1\). Considering \(p_{1,2}^s=p_s\), the stationary state probabilities resulting from the aforementioned step 3 are as follows:

$$\begin{aligned}&\bar{\pi }_{(0,0)}=\frac{1}{2 {\rho }^2+2{\rho }+1}, \end{aligned}$$
(28)
$$\begin{aligned}&\bar{\pi }_{(1,0)}=\frac{{\rho } \left( 2 {\rho } (1-{p_s})+1\right) }{\left( 2 {\rho }^2+2 {\rho } +1\right) \left( {\rho } (2-{p_s})+1\right) }, \end{aligned}$$
(29)
$$\begin{aligned}&\bar{\pi }_{(2,0)}=\frac{{\rho } (2 {\rho }+1)}{\left( 2 {\rho }^2+2 {\rho }+1\right) \left( {\rho } (2-{p_s})+1\right) }, \end{aligned}$$
(30)
$$\begin{aligned}&\bar{\pi }_{(1,2)}=\frac{{\rho }^2 (2 {\rho }+{p_s}+1)}{\left( 2 {\rho }^2+2 {\rho }+1\right) \left( {\rho } (2-{p_s})+1\right) }, \end{aligned}$$
(31)
$$\begin{aligned}&\bar{\pi }_{(2^{*},1)}=\frac{{\rho }^2 (1-{p_s}) (2 {\rho }+1)}{\left( 2 {\rho }^2+2 {\rho } +1\right) \left( {\rho } (2-{p_s})+1\right) }. \end{aligned}$$
(32)
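As a quick numerical cross-check (our own, not part of the derivation; the chosen values of \(\rho\) and \(p_s\) are arbitrary), the following Python snippet solves the balance equations (16)-(21) directly and compares the result with the closed forms (28)-(32):

import numpy as np

def two_class_stationary_probs(lam, mu, p_s):
    """Solve (16)-(21) for the homogeneous two-class case
    (lambda_1 = lambda_2 = lam, mu_1 = mu_2 = mu, p^s_{1,2} = p_s).
    State order: (0,0), (1,0), (2,0), (1,2), (2*,1)."""
    A = np.array([
        [2*lam,     -mu,       -mu,      0.0,  0.0],   # Eq. (16)
        [-lam,  mu + lam,      0.0,      0.0,  -mu],   # Eq. (17)
        [-lam,      0.0,   mu + lam,     -mu,  0.0],   # Eq. (18)
        [0.0,      -lam,  -lam*p_s,       mu,  0.0],   # Eq. (19)
        [1.0,       1.0,       1.0,      1.0,  1.0],   # Eq. (21); Eq. (20) is redundant
    ])
    b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)
    assert np.isclose(pi[4]*mu, pi[2]*lam*(1 - p_s))   # sanity check against Eq. (20)
    return pi

rho, mu, p_s = 0.8, 1.0, 0.3
pi = two_class_stationary_probs(rho*mu, mu, p_s)
den = (2*rho**2 + 2*rho + 1) * (rho*(2 - p_s) + 1)
closed = [1/(2*rho**2 + 2*rho + 1),                    # Eq. (28)
          rho*(2*rho*(1 - p_s) + 1)/den,               # Eq. (29)
          rho*(2*rho + 1)/den,                         # Eq. (30)
          rho**2*(2*rho + p_s + 1)/den,                # Eq. (31)
          rho**2*(1 - p_s)*(2*rho + 1)/den]            # Eq. (32)
print(np.allclose(pi, closed))                         # expected output: True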

Subsequently, going through steps 4 and 5, the average AoI for both classes can be formulated as follows:

$$\begin{aligned}&\textrm{E}[\Delta _1]=\frac{1}{\mathcal {A}({\rho }+{1}) (2 {\rho }+{1}) }\Bigg (8 {\rho }^8 (2 {p_s}-3)+4 {\rho }^7 (19 {p_s}-31)-2 {\rho }^6 \left( 5 {p_s}^2-85{p_s}+143\right) \nonumber \\&\qquad +{\rho }^5 \left( -12 {p_s}^2+187 {p_s}-365\right) +{\rho }^4 \left( -5 {p_s}^2+117 {p_s}-293\right) -{\rho }^3 \left( {p_s}^2-45 {p_s}+156\right) \nonumber \\&\qquad +2 {\rho }^2 (5 {p_s}-27)+{\rho } ({p_s}-11)-1\Bigg ), \end{aligned}$$
(33)
$$\begin{aligned}&\textrm{E}[\Delta _2]=\frac{1}{\mathcal {A}\left( 2 {\rho }^2-{\rho } ({p_s}-3)+{1}^2\right) ({\rho } ({p_s}-1)-{1})}\Bigg (8 {\rho }^9 \left( {p_s}^2-3 {p_s}+3\right) \nonumber \\&\qquad -4 {\rho }^8 \left( {p_s}^3-13 {p_s}^2+36 {p_s}-37\right) -2 {\rho }^7 \left( 7 {p_s}^3-68 {p_s}^2+194 {p_s}-205\right) \nonumber \\&\qquad +{\rho }^6 \left( -15 {p_s}^3+174 {p_s}^2-574 {p_s}+651\right) +{\rho }^5 \left( -11 {p_s}^3+137 {p_s}^2-526 {p_s}+658\right) \nonumber \\&\qquad +{\rho }^4 \left( -5 {p_s}^3+71 {p_s}^2-318 {p_s}+449\right) -{\rho }^3 \left( {p_s}^3-22 {p_s}^2+125 {p_s}-210\right) \nonumber \\&\qquad +{\rho }^2 \left( 3 {p_s}^2-29 {p_s}+65\right) -3 {\rho } ({p_s}-4)+1\Bigg ), \end{aligned}$$
(34)

where

$$\begin{aligned} \mathcal {A}=\rho \mu ({\rho }+{1}) (2 {\rho }+{1}) \left( 2 {\rho }^2+2 {\rho } +1\right) ({\rho } ({p_s}-2)-{1}). \end{aligned}$$

4 Numerical results

In this section, the analytical work presented in Sect. 3 is numerically elaborated for performance evaluation. In the subsequent studies, a network of three prioritized classes is considered. Such a setting is motivated by the example of a three-class status update system in V2X technology [15], as illustrated in Sect. 1. Unless otherwise indicated, homogeneous arrival and service processes are considered, where \(\lambda _1=\lambda _2=\lambda _3=\frac{1}{3}\lambda _{\text {total}}\) and \(\mu _1=\mu _2=\mu _3=\mu =1\). Moreover, for performance evaluation, a comparative study is conducted between our proposed model and some priority-based classical schemes (introduced in Sect. 1). By classical, we mean the models that employ either the strict preemption or the strict non-preemption schemes. In this regard, the two priority-based classical schemes proposed in [17] will be considered: the \(\text {PR}^{(\text {s})}\)-Bufferless scheme and the \(\text {NP}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) scheme. Moreover, the classical scheme introduced in [18], the \(\text {PR}^{(\text {s})}\)-Multi-buffer scheme, will be considered in one of the subsequent numerical studies. In the sequel, our proposed model will be denoted by its combined policy \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\).

Firstly, the effect of the proposed model on the AoI performance is investigated in Sect. 4.1, incorporating the study of the average performance and the corresponding dispersion. Subsequently, four different methods are presented in Sect. 4.2 to show how to set the system controlling parameters \(\textbf{P}^s,\textbf{P}^w\). For presentation convenience, the content of these matrices will be written as follows: priority preemption probabilities between classes, \(\textbf{P}=[p_{1,2}^s,p_{1,3}^s,p_{2,3}^s,p_{1,2}^w,p_{1,3}^w,p_{2,3}^w]\); and self-preemption probabilities, \(\textbf{P}_{\text {self}}=[p_{1,1}^s,p_{2,2}^s,p_{3,3}^s,p_{1,1}^w,p_{2,2}^w,p_{3,3}^w]\). Lastly, the validation of our analysis is presented in Sect. 4.3.

4.1 AoI performance evaluation

In this study, the AoI performance under the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) model will be compared with the two classical schemes proposed in [17]: \(\text {PR}^{(\text {s})}\)-Bufferless scheme and the \(\text {NP}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) scheme. Throughout this comparative study, the AoI performance for each priority class will be investigated in terms of its average and the corresponding dispersion (variability) beyond the average in Sects. 4.1.1 and 4.1.2, respectively. In both studies, the proposed controlling parameters are set as \(\textbf{P}=[0.2,0.2,0.2,0.8,0.8,0.8]\). However, two cases of the self-preemption (SP) modes will be compared: strict SP (\(\textbf{P}_{\text {self}}=\textbf{1}_{1\times 6}\)) and probabilistic SP (\(\textbf{P}_{\text {self}}=0.5\times \textbf{1}_{1\times 6}\)).

4.1.1 Average AoI performance

In this study, the average AoI performance for each prioritized class is addressed, as shown in Figs. 4, 5 and 6. Based on these illustrations, the following observations can be laid out:

  • Regarding the \(\text {PR}^{\text {(s)}}\)-Bufferless model, as shown in Fig. 4, it is the optimal approach for the highest priority class (class 1), a result that has been proved in [9]. Accordingly, let this optimal performance be denoted as \(\Delta _1^{\text {min}}\). However, this comes at the cost of a harsh degradation for the LP classes due to the frequent interruptions (as shown in Figs. 5 and 6). Conversely, the \(\text {NP}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) model significantly enhances the performance of the LP classes (Figs. 5 and 6) due to the relief from service interruptions, to the detriment of class 1 (Fig. 4), which is forced to wait for the ongoing service of the LP classes.

  • The foregoing paradox between classical models is resolved by employing the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) model, by which a substantial enhancement for class 2 (Fig. 5) and class 3 (Fig. 6) is noticed under a compromise performance for class 1 (Fig. 4). This is because the hybrid preemptive/non-preemptive disciplines, employed at the server and the buffer, relax the adverse consequences of the strict preemption and non-preemption schemes.

  • From another perspective, by a proper adjustment of the controlling parameters (\(\textbf{P}\) and \(\textbf{P}_{\text {self}}\)), the stability of the AoI performance for the LP network can be guaranteed. By stable performance, we mean that the average AoI remains bounded as the offered load of the network increases. As shown in Fig. 5, \(\textrm{E}[\Delta _2]|_{\text {Prob}^{\text {(s)}}\text {-}\text {Prob}^{\text {(w)}}}\) becomes stable, whereas it is unstable under the classical schemes. In addition, as shown in Fig. 6, \(\textrm{E}[\Delta _3]|_{\text {Prob}^{\text {(s)}}\text {-}\text {Prob}^{\text {(w)}}}\) experiences only a marginal increase at the higher loading condition (almost stable), in contrast with the dramatic increase in the case of the classical schemes. Further numerical elaborations in this regard (not presented here due to space constraints) revealed that the protection feature assumed for the LP classes contributes significantly to this stability privilege.

  • Regarding the SP setting of the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) model, the strict SP is preferable to the probabilistic one for the sake of the average AoI of each class. This is because the strict SP setting (\(\textbf{P}_{\text {self}}=\textbf{1}_{1\times 6}\)) permits the replacement of a stale packet (in the server or buffer) with a fresh one of the same class, which enhances the average AoI. Moreover, it was found that employing the strict SP mode for a certain class has no adverse consequences on the others. This is due to the memoryless property of the assumed exponential service time distribution.

Fig. 4 Class 1’s average AoI

Fig. 5 Class 2’s average AoI

Fig. 6 Class 3’s average AoI

Fig. 7 Class 1’s coefficient of variation performance

Fig. 8 Class 2’s coefficient of variation performance

Fig. 9 Class 3’s coefficient of variation performance

4.1.2 AoI dispersion

This study involves the evaluation of the standard deviation \(\sigma\) (\(\sigma ^2=\textrm{E}[\Delta ^2]-(\textrm{E}[\Delta ])^2\)) for each priority class. However, due to the difference in the average AoI performance of the models incorporated in the comparative study, the coefficient of variation (\(\textrm{CV}=\sigma /\textrm{E}[\Delta ]\)) is used instead of the standard deviation.
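For example (the helper name is ours), the CV follows directly from the first two AoI moments obtained via the SHS analysis of Sect. 3:

from math import sqrt

def coeff_of_variation(m1, m2):
    """CV = sigma / E[Delta], with sigma^2 = E[Delta^2] - (E[Delta])^2,
    where m1 = E[Delta] and m2 = E[Delta^2] are the first two AoI moments."""
    return sqrt(m2 - m1**2) / m1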

The \(\textrm{CV}\) performance of all priority classes is demonstrated in Figs. 7, 8 and 9. The main remarks are summarized as follows:

  • For the class 1 performance depicted in Fig. 7, the \(\text {PR}^{\text {(s)}}\)-Bufferless scheme suffers from higher AoI variability compared with the \(\text {NP}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) scheme, contrasting with the situation of its average performance (Fig. 4). Hence, employing any of these classical schemes for class 1 results in a trade-off between the enhancement of the average AoI performance and its dispersion. This trade-off is balanced by adopting the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) model, as noticed in Figs. 4 and 7.

  • For the LP classes, as shown in Figs. 8 and 9, the proposed model, besides the \(\text {NP}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) scheme, yields significant enhancements in the AoI variability compared with the \(\text {PR}^{\text {(s)}}\)-Bufferless scheme.

  • Regarding the SP setting, it should be noted that, unlike the average performance, the AoI variability is significantly improved by the probabilistic SP setting rather than the strict mode. This result manifests the importance of the probabilistic SP setting (characterized by \(\textbf{P}_{\text {self}}\)) in adjusting the average AoI and the corresponding dispersion to some acceptable levels.

According to the foregoing study on the average AoI performance and its dispersion, the usefulness of our proposed model is corroborated. By thoroughly adjusting the controlling parameters (\(\textbf{P}\) and \(\textbf{P}_{\text {self}}\)), the performance of the LP classes can be enhanced significantly without violating the strict AoI sensitivity limit of class 1 (determined by the application). This in turn promotes the reliability of the proposed model for all priority classes, compared with the classical schemes. Accordingly, in the next section, we exhibit different methods by which the controlling parameters can be adjusted so that whole-network satisfaction is fulfilled.

4.2 Controlling parameters setting

In this section, different approaches are introduced to show how to adjust the proposed controlling parameters \(\textbf{P}\) and \(\textbf{P}_{\text {self}}\). However, our main interest is placed on the average AoI performance; hence, the strict SP setting is assumed henceforth (\(\textbf{P}_{\text {self}}=\textbf{1}_{1\times 6}\)). In this regard, four different approaches are proposed for the setting of \(\textbf{P}\). Each approach pursues a distinct system objective: HP-class protection, optimal aggregate traffic intensity, optimal network satisfaction, and low-complexity near-optimal network satisfaction.

4.2.1 Approach 1: HP-class protection

In this approach, the main objective is to adjust the controlling parameters so that the AoI performance of the LP classes is enhanced while class 1 retains its optimal performance \(\Delta _1^{\text {min}}\). To this end, the server should be governed by the full \(\text {PR}^{\text {(s)}}\) policy so that any new request from class 1 can be handled by the server promptly. Hence, an extracted version of the proposed model is considered, namely \(\text {PR}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) with \(p_{1,2}^s=p_{1,3}^s=p_{2,3}^s=1\). In this case, \(p_{1,2}^w\) and \(p_{1,3}^w\) have no effect, and the only parameter controlling the performance is \(p_{2,3}^w\).
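As a concrete illustration, the following Python sketch encodes this extracted setting; the ordering of the six entries of \(\textbf{P}\) as \((p_{1,2}^s,p_{1,3}^s,p_{2,3}^s,p_{1,2}^w,p_{1,3}^w,p_{2,3}^w)\) is an assumption inferred from the numerical example below rather than a definition given here.

```python
# Assumed entry ordering of P (see lead-in): the three in-service probabilities
# first, then the three in-buffer (waiting) probabilities.
def hp_protection_setting(p_w_23: float) -> list[float]:
    """PR(s)-Prob(w) extracted scheme: full preemption in service, so the only
    effective knob left is the in-buffer preemption probability p^w_{2,3}."""
    return [1.0, 1.0, 1.0, 1.0, 1.0, p_w_23]

P = hp_protection_setting(0.2)  # the setting used in the following numerical example
```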

Based on the foregoing, in the following numerical example the controlling parameters are set to \(\textbf{P}=[1,1,1,1,1,0.2]\). The extracted version of our model, \(\text {PR}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\), is compared with other classical models that adopt the full \(\text {PR}^{\text {(s)}}\) policy: the \(\text {PR}^{\text {(s)}}\)-Bufferless model [17] and the \(\text {PR}^{\text {(s)}}\)-Multi-buffer model [18]. The comparison is depicted in terms of the average AoI of each class in Figs. 10, 11 and 12. As noticed in Fig. 10, the common performance \(\Delta _1^{\text {min}}\) is attained by all models under study. However, the effect on the LP network can be explained as follows:

  • Compared with the \(\text {PR}^{\text {(s)}}\)-Bufferless scheme, the \(\text {PR}^{\text {(s)}}\)-Multi-buffer model significantly improves class 2’s performance (Fig. 11), whereas class 3’s performance (Fig. 12) deteriorates beyond \(\rho ^{*}\approx 6.5\). This trend is due to the deployed multi-buffer mechanism, by which the HP classes impose a heavy workload on the lowest priority class, especially under high traffic loading conditions.

  • On the other hand, compared with the \(\text {PR}^{\text {(s)}}\)-Multi-buffer model, the extracted version of our model, \(\text {PR}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\), through its controlling parameter \(p_{2,3}^w=0.2\), yields a significant enhancement for class 3 at the cost of a marginal degradation for class 2. As noticed in Fig. 12, the crossover point is shifted to \(\rho ^{**}\approx 16.5\).

To conclude, deploying a single buffer controlled by the proposed \(\text {Prob}^{\text {(w)}}\) policy can outperform the multi-buffer case. Hence, comparatively speaking, our proposed system is less costly yet more efficient: it makes the best use of its buffer resources owing to the proposed \(\text {Prob}^{\text {(w)}}\) policy.

Fig. 10
Class 1’s average AoI under the approach of HP-class protection

Fig. 11
Class 2’s average AoI under the approach of HP-class protection

Fig. 12
Class 3’s average AoI under the approach of HP-class protection

4.2.2 Approach 2: optimal aggregate traffic intensity

In this approach, under a fixed assignment of the controlling parameters \(\textbf{P}\), the main objective is to adapt the traffic intensities (\(\lambda _1,\lambda _2,\lambda _3\)) so that the aggregate traffic intensity \(\lambda _{\text {total}}\) (the total packet generation rate) is minimized, subject to AoI constraints on each class. This problem is motivated by the fact that minimizing the total generation rate \(\lambda _{\text {total}}\) corresponds to minimizing the total sensing power required by the whole network; hence, in our context, sensing power and generation rate are used interchangeably.

Based on the aforementioned description, the AoI constraints are represented as follows:

$$\begin{aligned}&\textrm{E}[\Delta _1]<\Delta _{\text {max}},\ \ \ \textrm{E}[\Delta _2]<\frac{1}{\beta _2}\Delta _{\text {max}},\ \ \ \textrm{E}[\Delta _3]<\frac{1}{\beta _3}\Delta _{\text {max}}, \end{aligned}$$

where \(\beta _2\) and \(\beta _3\) are the importance factors for classes 2 and 3, respectively, relative to the importance of class 1; hence, \(0<\beta _2,\beta _3 \le 1\). In the prioritized case, these factors can be set such that \(\beta _2>\beta _3\). According to these constraints, whenever classes 2 and 3 are far less important than class 1 (\(\beta _2,\beta _3\ll 1\)), their corresponding constraints become much looser (the AoI bound increases). On the other hand, \(\beta _2=\beta _3=1\) is the case of the strictest QoS requirements, where a common AoI bound (\(\Delta _{\text {max}}\)) must be obeyed by all classes.
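To make this search concrete, the following is a minimal brute-force sketch of the problem, assuming a routine avg_aoi(k, lam, P) that evaluates \(\textrm{E}[\Delta _k]\) from the analysis of Sect. 3 (not reproduced here); the grid resolution and the evaluator are placeholders, and any solver could replace the exhaustive search.

```python
import itertools

import numpy as np

def min_total_rate(avg_aoi, P, delta_max, beta2, beta3,
                   grid=np.arange(0.1, 5.01, 0.1)):
    """Smallest aggregate rate lambda_total meeting the per-class AoI bounds.

    avg_aoi(k, lam, P) must return E[Delta_k] for arrival rates lam = (l1, l2, l3)
    under the controlling parameters P; it is an externally supplied evaluator.
    """
    bounds = (delta_max, delta_max / beta2, delta_max / beta3)
    best = None
    for lam in itertools.product(grid, repeat=3):
        if all(avg_aoi(k, lam, P) < bounds[k - 1] for k in (1, 2, 3)):
            total = sum(lam)
            if best is None or total < best[0]:
                best = (total, lam)
    return best  # (lambda_total*, (lambda_1, lambda_2, lambda_3)) or None if infeasible
```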

In this regard, Table 4 presents the resulting optimal aggregate intensity \(\lambda _{\text {total}}^{*}\) in a comparative study with the classical models as well as our previous work [24], which employs the \(\text {Prob}^{\text {(s)}}\)-Bufferless scheme. In this study, the controlling parameters are fixed at \(\textbf{P}=0.5\times \textbf{1}_{1\times 6}\). Moreover, different values of \(\beta _2\) and \(\beta _3\) are considered, yielding different AoI constraints for the LP classes, while the AoI bound for class 1 is set to \(\Delta _{\text {max}}=4\).

According to Table 4, the total generation rate exhibits an upward trend as \(\beta _2\) and \(\beta _3\) approach 1 (i.e., as the AoI constraints become stricter), since a higher generation rate is needed to cope with the tighter AoI constraints. Moreover, our proposed model outperforms the other models by satisfying the AoI bounds of all classes with the minimum sensing power needed. Conversely, the classical models require higher energy as the constraints become stricter; in fact, in this study the classical models cannot satisfy the strictest AoI bounds (\(\beta _2=\beta _3=1\)) at all. Furthermore, compared with our previous work, \(\text {Prob}^{\text {(s)}}\)-Bufferless, the current work is superior owing to the buffering functionality with its associated \(\text {Prob}^{\text {(w)}}\) policy.

Based on the foregoing investigations, our proposed scheme can be considered an energy-efficient module capable of fulfilling the AoI restrictions of all prioritized classes with the minimum power needed.

Table 4 The optimal aggregate traffic intensity \(\lambda _{\text {total}}^{*}\) under the four different models

4.2.3 Approach 3: optimal network satisfaction

In this approach, the fulfillment of the whole network satisfaction is cast as an optimization problem whose decision variables are the system controlling parameters \(\textbf{P}\). In this regard, the network cost function to be minimized, denoted \(C_{\alpha _2,\alpha _3}\), is defined as the weighted sum of the average AoI over all priority classes. Hence, the minimization problem is constructed as follows:

$$\begin{aligned} \min _{\textbf{P}}\ C_{\alpha _2,\alpha _3}=\textrm{E}[\Delta _{1}]+\alpha _2\,\textrm{E}[\Delta _2]+ \alpha _3\,\textrm{E}[\Delta _3],\qquad 0\le \alpha _2,\alpha _3\le 1. \end{aligned}$$
(35)

The weighting parameters \(\alpha _2\) and \(\alpha _3\) determine to what extent the satisfaction of the whole priority network is sensitive to the performance of the LP classes. Under the priority setting, it is intuitive that \(\alpha _2>\alpha _3\).

In this regard, the squirrel search algorithm (SSA) [28] is used to solve the optimization problem (35), i.e., to find the optimal controlling parameters \(\textbf{P}^{*}\). This nature-inspired algorithm exploits the efficient foraging behaviour of southern flying squirrels (FSs), which use a locomotion technique called gliding, rather than flying, to cover wide areas of the forest swiftly with minimum energy consumption. The hickory nut tree is always the optimal destination for all FSs: in winter it is the main food source fulfilling their energy requirements, and in autumn the FSs seek hickory nut trees to store food so that it is abundant in winter. Accordingly, the location of the hickory nut tree represents the optimal solution to be explored, \(\textbf{P}^{*}\). Hence, we assume a six-dimensional search space for the FSs to match the dimension of \(\textbf{P}^{*}\). Moreover, assuming 50 FSs in the search space, the location of each one is a candidate solution of the problem. Furthermore, using the same stopping criterion introduced in [28], the maximum number of iterations is set to 100, and the other SSA parameters are set as recommended in [28]. Owing to the SSA location-update strategy, its accuracy and convergence rate have been shown to outperform other nature-inspired schemes [28]. Further details can be found in [28].
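Since the SSA implementation itself is not reproduced here, the following sketch only shows the shape of the search, using SciPy’s differential evolution purely as a stand-in global optimizer; avg_aoi(k, P) is again a placeholder for the SHS-based evaluator of \(\textrm{E}[\Delta _k]\), and the weights correspond to \(C_{\frac{1}{2},\frac{1}{4}}\).

```python
from scipy.optimize import differential_evolution

def network_cost(P, avg_aoi, alpha2=0.5, alpha3=0.25):
    """Weighted-sum cost of Eq. (35)."""
    return avg_aoi(1, P) + alpha2 * avg_aoi(2, P) + alpha3 * avg_aoi(3, P)

def optimize_controlling_parameters(avg_aoi, alpha2=0.5, alpha3=0.25):
    """Search P in [0, 1]^6; differential evolution is only a stand-in for the SSA."""
    result = differential_evolution(
        network_cost,
        bounds=[(0.0, 1.0)] * 6,
        args=(avg_aoi, alpha2, alpha3),
        maxiter=100,   # mirrors the iteration budget used with the SSA
        seed=0,
    )
    return result.x, result.fun  # (P*, minimum cost)
```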

By solving the optimization problem (35), the optimal capabilities of our proposed model can be investigated. The problem is solved under three different schemes: our previous work [24] (\(\text {Prob}^{\text {(s)}}\)-Bufferless) and two cases of the current proposed model (\(\text {Prob}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) and \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\)). It should be noted that the \(\text {Prob}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) scheme is obtained from the proposed model by imposing \(p_{1,2}^w=p_{1,3}^w=p_{2,3}^w=1\). This scheme is included in the comparative study to clarify whether the system’s advantage comes from using the buffer alone or from governing the buffer with the proposed \(\text {Prob}^{\text {(w)}}\) policy.

Having obtained the optimization results for the aforementioned schemes, a comparative study is conducted against the classical schemes, \(\text {PR}^{\text {(s)}}\)-Bufferless and \(\text {NP}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) [17], in terms of \(C_{\frac{1}{2},\frac{1}{4}}\), which is representative of the overall satisfaction of the priority network. Assuming \(\mu _1=\mu _2=\mu _3=1\), three traffic cases (TC1, TC2 and TC3) of the arrival processes are addressed: TC1, \(\lambda _1=2\lambda _2=4\lambda _3\) (Fig. 13); TC2, \(\lambda _1=\frac{1}{2}\lambda _2=\frac{1}{4}\lambda _3\) (Fig. 14); and TC3, \(\lambda _1=\lambda _2=\lambda _3\) (Fig. 15). All traffic cases span the same offered-load range \(\rho _{\text {total}}\in [0.5,30]\).

Overall, Figs. 13, 14 and 15 demonstrate that the optimal network satisfaction in all traffic cases is achieved by the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) policy owing to its controlling parameters. Moreover, the superiority of the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) policy over the \(\text {Prob}^{\text {(s)}}\)-\(\text {PR}^{\text {(w)}}\) policy indicates the importance of the \(\text {Prob}^{\text {(w)}}\) policy in regulating contention for the buffer, especially under high traffic loading conditions, a result that underpins the conclusion reached in Sect. 4.2.1. Hence, deploying the \(\text {Prob}^{\text {(s)}}\) policy combined with the \(\text {Prob}^{\text {(w)}}\) policy is essential to ensure the robustness of the system under all traffic conditions. Furthermore, compared with the classical approaches, the performance gain of our model becomes more significant in the case of TC1 (\(\lambda _1>\lambda _2>\lambda _3\), Fig. 13), where the LP classes are adversely affected by the excess load of the HP ones.

Fig. 13
The cost function \(C_{\frac{1}{2},\frac{1}{4}}\) under the traffic case \(\text {TC1,}\ \lambda _1>\lambda _2>\lambda _3\)

Fig. 14
The cost function \(C_{\frac{1}{2},\frac{1}{4}}\) under the traffic case \(\text {TC2,}\ \lambda _1<\lambda _2<\lambda _3\)

Fig. 15
The cost function \(C_{\frac{1}{2},\frac{1}{4}}\) under the traffic case \(\text {TC3,}\ \lambda _1=\lambda _2=\lambda _3\)

4.2.4 Approach 4: low-complex near-optimal network satisfaction

In this section, heuristic generation formulas for the controlling parameters \(\textbf{P}\) are proposed to provide a near-optimal solution to the optimization problem (35) with negligible computational complexity. These formulas are then evaluated in different studies to substantiate their near-optimality.

First, the general heuristic formula proposed for all controlling parameters in \(\textbf{P}\) is expressed as follows, for \(1\le m\le M-1\ \text {and}\ m+1 \le n \le M\):

$$\begin{aligned}&p_{m,n}^s=p_{m,n}^w=e^{-\big [\frac{\alpha _n}{\alpha _m}\times \frac{\rho _m}{\rho _n}\big ]\rho _m}. \end{aligned}$$
(36)

This generation rule takes the following considerations into account. First, \(p_{m,n}^s\) (\(p_{m,n}^w\)) decreases exponentially with the offered load of class m (the source of interruption). Second, the decay rate of \(p_{m,n}^s\) (\(p_{m,n}^w\)) depends on the exponent coefficients \(0\le \frac{\alpha _n}{\alpha _m}\le 1\) and \(\frac{\rho _m}{\rho _n}>0\). Therefore, when \(\rho _m<\rho _n\), the decrease of \(p_{m,n}^s\) (\(p_{m,n}^w\)) is marginal, in favour of the HP class m, which is adversely affected by the excess load of the LP class n. In the reverse situation (\(\rho _m>\rho _n\)), \(p_{m,n}^s\) (\(p_{m,n}^w\)) decreases swiftly to relieve the LP class n from the expected increase in preemptions by the HP class m. Based on this design, the controlling parameters enable the system to respond in favour of the class experiencing the harsher conditions. From another perspective, through the importance ratio \(\frac{\alpha _n}{\alpha _m}\), the design of the controlling parameters reflects the differentiated importance among the prioritized classes. Here, class-1 importance is taken as \(\alpha _1=1\).
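The generation rule (36) translates directly into a few lines of code; the following Python sketch computes all pairwise parameters for an M-class network, with illustrative (not paper-specific) offered loads and importance weights.

```python
import math

def heuristic_parameters(rho: dict, alpha: dict) -> dict:
    """Heuristic controlling parameters per Eq. (36):
    p[(m, n)] = exp(-(alpha_n / alpha_m) * (rho_m / rho_n) * rho_m),
    applied identically to the in-service and in-buffer probabilities.
    rho and alpha are indexed by class (1..M), with alpha[1] = 1."""
    M = len(rho)
    return {(m, n): math.exp(-(alpha[n] / alpha[m]) * (rho[m] / rho[n]) * rho[m])
            for m in range(1, M) for n in range(m + 1, M + 1)}

# Illustrative three-class example (values not taken from the paper).
print(heuristic_parameters(rho={1: 0.6, 2: 0.3, 3: 0.1},
                           alpha={1: 1.0, 2: 0.5, 3: 0.25}))
```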

To verify the near-optimality of the proposed heuristic method against the optimal solution of (35) presented in Sect. 4.2.3, the following studies are carried out: different traffic cases are examined, and two cost functions, \(C_{\frac{1}{2},\frac{1}{4}}\) and \(C_{\frac{1}{4},\frac{1}{8}}\), are considered. Five traffic cases are addressed: the aforementioned cases (TC1, TC2 and TC3); TC4, \(\mu _1=2,\mu _2=1,\mu _3=\frac{2}{3}\); and TC5, \(\mu _1=\frac{2}{3},\mu _2=1,\mu _3=2\). In TC4 and TC5, the arrival processes are homogeneous (\(\lambda _1=\lambda _2=\lambda _3\)), and all traffic cases constitute the same total offered load \(\rho _{\text {total}}\). Under each traffic case, the maximum percentage error between the heuristic solution and the optimal one is calculated over the span \(\rho _{\text {total}}\in [0.5,15]\). These errors are tabulated in Table 5, where acceptable accuracy levels are demonstrated in all cases under study. Moreover, under each traffic case, the accuracy improves whenever classes 2 and 3 become far less important than class 1 (\(\alpha _2,\alpha _3\ll 1\)). Consequently, the proposed low-complexity heuristic method for generating the controlling parameters achieves near-optimal performance in terms of the satisfaction of the whole network.
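Assuming that the heuristic and optimal cost curves are sampled on the same grid of \(\rho _{\text {total}}\) values, the maximum percentage error reported in Table 5 can be computed as in the following sketch; this pointwise definition is our reading of the metric, not a formula stated in the text.

```python
import numpy as np

def max_percentage_error(cost_heuristic, cost_optimal):
    """Maximum pointwise percentage deviation of the heuristic cost curve
    from the optimal one, both sampled on the same rho_total grid."""
    c_h = np.asarray(cost_heuristic, dtype=float)
    c_o = np.asarray(cost_optimal, dtype=float)
    return 100.0 * np.max(np.abs(c_h - c_o) / c_o)
```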

Table 5 Maximum percentage error between the heuristic and the optimal solutions over the span \(\rho _{\text {total}}=[0.5,15]\)

4.3 Analytical model validation

Throughout this section, the analytical study presented in Sect. 3 is verified by building a simulation environment, mirroring the proposed system described in Sect. 2, using MATLAB R2015a. Stationary results are ensured by setting a long simulation time (\(10^6\) time units). To this end, a workstation with the following specifications is used: Intel(R) Xeon(R) Gold 6230R CPU, 2.10 GHz (2 processors); 128 GB RAM; and the 64-bit Windows 10 Pro operating system.
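For illustration, the following minimal sketch shows how the time-average AoI and its standard deviation can be extracted from a simulated sample path of one class, given the generation/delivery times of the packets that actually refresh the monitor; it reflects only the sawtooth bookkeeping, not the full three-class MATLAB simulator used here.

```python
import math

def aoi_stats(events, t_end):
    """Time-average E[Delta] and standard deviation from one sample path.

    events: list of (u_i, t_i) pairs (generation time, delivery time) of the
    informative packets, sorted by delivery time with increasing u_i.
    Between deliveries the AoI grows linearly: Delta(t) = t - u_i on [t_i, t_{i+1}).
    """
    int1 = int2 = 0.0  # integrals of Delta and Delta^2 over the observation window
    for i, (u, t) in enumerate(events):
        t_next = events[i + 1][1] if i + 1 < len(events) else t_end
        a, b = t - u, t_next - u         # AoI just after t_i and just before t_{i+1}
        int1 += (b ** 2 - a ** 2) / 2.0  # integral of (s - u) ds over [t_i, t_{i+1}]
        int2 += (b ** 3 - a ** 3) / 3.0  # integral of (s - u)^2 ds
    horizon = t_end - events[0][1]
    m1, m2 = int1 / horizon, int2 / horizon
    return m1, math.sqrt(max(m2 - m1 ** 2, 0.0))

# Toy sample path (illustrative only, not simulation output from this work).
print(aoi_stats([(0.0, 1.0), (1.5, 2.5), (3.0, 4.0)], t_end=6.0))
```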

Fig. 16
Analytical model validation in terms of the average AoI

Fig. 17
Analytical model validation in terms of the AoI standard deviation

In this study, the controlling parameters \(\textbf{P}\) and \(\textbf{P}_{\text {self}}\) are set as follows: \(\textbf{P}=[0.5,0.5,0.5,0.3,0.3,0.3]\) and \(\textbf{P}_{\text {self}}=\textbf{1}_{1\times 6}\). Accordingly, the average AoI \(\textrm{E}[\Delta ]\) and the corresponding standard deviation \(\sigma\) are evaluated under both the analytical and simulation frameworks. As depicted in Figs. 16 and 17, the simulation results substantiate the validity of the analytical results, with maximum percentage errors of 0.5775% for \(\textrm{E}[\Delta ]\) (Fig. 16) and 1.2621% for \(\sigma\) (Fig. 17).

5 Conclusion

This work enhanced the AoI performance of multi-class IoT-enabled MEC systems by proposing a hybrid preemptive/non-preemptive discipline under an M/M/1/2 priority queueing model. Through the proposed \(\text {Prob}^{\text {(s)}}\)-\(\text {Prob}^{\text {(w)}}\) policy, the probabilistic preemption approach (as a discretionary preemption discipline) governs the rivalry between priority classes within the shared server and buffer. Independent probabilistic preemption decisions are taken for the \(\text {Prob}^{\text {(s)}}\) and \(\text {Prob}^{\text {(w)}}\) policies, with distinct probability parameters (the system controlling parameters). The SHS approach is deployed to analyze the average AoI as well as the corresponding higher-order moments, and an algorithmic approach is presented to extract the AoI moments for any specific number of classes. Moreover, for a case study on a two-class network, closed-form expressions of the average AoI are derived. A numerical study on a three-class network is then presented for performance evaluation, in which the proposed model is compared with priority-based classical schemes in terms of the average AoI and its coefficient of variation. It is demonstrated that a thorough adjustment of the system controlling parameters ensures the reliability of the proposed system not for a specific class but for all priority classes.

Accordingly, four different approaches are presented to explain the setting of the proposed controlling parameters. Based on these approaches, the proposed model, through its controlling parameters, acquires several privileges. First, the performance of the LP classes can be manipulated without violating the optimal performance of the highest priority (age-sensitive) class. In addition, owing to the proposed \(\text {Prob}^{\text {(w)}}\) policy, the system exploits its buffer resources efficiently, making it less costly. Moreover, the proposed model is energy-efficient, in that the AoI bounds of all classes can be fulfilled with the minimum sensing power needed. Furthermore, using the proposed scheme, the optimal satisfaction of the whole priority network can be achieved by exploring a suitable setting of the controlling parameters according to the operating traffic conditions. It is also demonstrated that the \(\text {Prob}^{\text {(w)}}\) policy has a significant impact on maintaining the optimality of the proposed scheme under heavy loading conditions. Finally, a heuristic generation method for the controlling parameters is proposed to provide a near-optimal solution with negligible computational complexity.

As future work, we plan to incorporate other discretionary rules for the hybrid discipline instead of the probabilistic preemption approach. Moreover, the extension to a general service-time distribution is an essential next step. From another perspective, it is envisaged that the proposed model can be incorporated into a multi-tier computing network comprising local servers with limited computation power, heterogeneous edge servers with higher computation power, and central clouds with the highest computation power.