Schedulability of probabilistic mixed-criticality systems

Mixed-criticality systems often need to fulfill safety standards that dictate different requirements for each criticality level, for example given in the 'probability of failure per hour' format. A recent trend suggests designing such systems by jointly scheduling tasks of different criticality levels on a shared platform. When this is done, the usual assumption is that tasks of lower criticality are degraded when a higher criticality task needs more resources, for example when it overruns a bound on its execution time. However, a way to quantify the impact this degradation has on the overall system is not well understood. Meanwhile, to improve schedulability and to avoid over-provisioning of resources due to overly pessimistic worst-case execution time estimates of higher criticality tasks, a new paradigm has emerged where tasks' execution times are modeled with random variables. In this paper, we analyze a system with probabilistic execution times and propose metrics that are inspired by safety standards. Among these metrics are the probability of deadline miss per hour, the expected time before degradation happens, and the duration of the degradation. We argue that these quantities provide a holistic view of the system's operation and schedulability.


Introduction
Mixed-criticality (MC) systems are real-time systems that feature tasks of different criticality levels. Typical application domains include avionics and automotive. In MC systems, each task has an associated criticality level. Depending on the criticality level, a failure of a task, for example due to a deadline miss, can have a more or less severe impact on the overall safety of the system. Due to the possible catastrophic consequences of a system failure, MC systems for some application domains are subject to certification standards. For example, DO-178C (RTCA/DO-178C 2012) is a standard for avionics systems. It defines five criticality levels, 'A' to 'E', with 'A' being the highest criticality level. Here, a failure of a task of criticality 'A' can have a negative impact on the overall safety of the aircraft, while a failure of a task of criticality 'D' may only slightly increase the aircraft crew's workload. Quantitatively, an application's criticality correlates with a tolerable failure rate under a given certification standard. The failure rates of all tasks, under their respective criticality levels, have to be guaranteed for certification of the overall system. As an example, Table 1 states the tolerable failure rates for DO-178B. Traditionally, industry favors physical segregation of tasks based on their criticality level (Tămaş-Selicean and Pop 2015). This implies, for example, that tasks of each criticality level execute on their own hardware, and tasks of different criticality levels do not interfere. However, such a physical separation based on criticality levels can lead to system under-utilization and complex distributed multi-processor architectures. Recently, there has been a push towards integrating tasks of different criticality levels on a single hardware platform. The advantages of such consolidation include reductions in cost, power dissipation, weight, as well as maintenance.
Unfortunately, this consolidation makes isolating tasks of different criticality levels problematic. Essentially, a low criticality level 'D' task may hinder the execution of a higher criticality level 'B' task, possibly resulting in a deadline miss, which can be considered a type of failure. To counter this, researchers have proposed several schemes, which are covered in detail in Sect. 2. Broadly speaking, the approaches are based on an execution time abstraction proposed by Vestal (2007). Vestal's model builds on the Worst-Case Execution Time (WCET) abstraction. He assumes that tasks have a set of WCET estimates with different levels of confidence. The system is required to meet the deadline of a criticality level 'A' task for the highest confidence and most pessimistic WCET estimates. For lower criticality tasks, correct execution needs to be guaranteed for less pessimistic WCET estimates. Prominent approaches that build on Vestal's model feature mode-based scheduling schemes: the system executes tasks of all criticality levels correctly as long as the less pessimistic WCET estimates are not overrun, while reduced service to tasks of lower criticality levels is in place when this is not the case. In this paper, instead of taking a single WCET estimate as in the traditional real-time model, or a criticality dependent set of WCET estimates as per Vestal's model, we assume a stochastic model of execution times. For each task, the execution time is modeled with an independent random variable. This additional information on the execution time allows us to improve schedulability due to the so-called multiplexer gain, i.e., the likelihood that high execution times of many tasks occur simultaneously is very small. Under the proposed scheme there is a non-zero probability of a high criticality task missing its deadline.
If the probability is less than the failure rate specification of the criticality level, see for example Table 1, then the MC system can still be schedulable according to the probabilistic bounds on deadline misses.
Individual tasks are assumed to be periodic with constrained deadlines. The platform is assumed to have a single core. We assume a dual-criticality model, where the criticality of tasks can be either lo or hi. The system is also assumed to have two modes of operation: lo- and hi-criticality mode. In the lo-criticality mode, all tasks are executed normally. In the hi-criticality mode, newly released jobs of lo tasks start in a degraded mode so that preference is given to hi tasks.
The application of stochastic execution times to MC systems is not new, and several recent works exist (Masrur 2016; Guo et al. 2015). However, existing results do not provide a holistic scheduling scheme and analysis covering all execution modes and transitions. A detailed accounting of existing schemes and their limitations is given in Sect. 2. In the following, we suppose that an MC scheduling scheme fulfills the following requirements:
- Schedulability analysis of tasks is provided for each criticality level in each system mode.
- Conditions that should trigger a mode switch are defined.
- Analysis of the time spent in each system mode is provided.
- A method to consolidate these individual components and compute a metric comparable to the Probability of Failure per Hour for tasks of each criticality level is given.
In this paper, we address all of these individual components. Specifically, we make the following contributions:
1. We propose conditions that trigger a mode switch, both from lo- to hi-criticality mode (lo → hi), and from hi- to lo-criticality mode (hi → lo).
2. We provide a detailed stochastic analysis of lo-criticality mode. Using the analysis, the Probability of Deadline Miss per Hour in this mode is computed for tasks of both criticality levels.
3. We provide a first stochastic analysis of hi-criticality mode. Using the analysis, the maximal time spent in hi-criticality mode is obtained, along with the Probability of Deadline Miss per Hour for tasks of both criticality levels. Also taken into account is the probability that the system enters hi-criticality mode.
4. Using contributions 1-3, we compute the overall Probability of Deadline Miss per Hour values for all tasks by consolidating the respective values for lo- and hi-criticality mode. This allows us to compare these probabilities with the permitted ones found in typical certification standards.
5. We determine the probability that a lo task is started in its degraded mode.
Due to these contributions, we claim that this is the first work which provides a system-wide approach to MC scheduling, while considering a stochastic model of task execution times.
Organization: This paper is organized as follows: Sect. 2 highlights related research in mixed-criticality scheduling and in stochastic analysis, along with the limitations of existing research which are addressed by this work. Section 3 states our system model, which includes the task model and the model of the MC system. This is followed by Sect. 4, which states and explains important definitions and operations for the stochastic analysis of systems with non-deterministic execution times. Section 5 covers the proposed analysis for obtaining Probability of Deadline Miss per Hour values, both for all lo and for all hi tasks. This section also contains important intermediate results such as the duration of lo- and hi-criticality mode, and the probability of each event that causes a system mode switch. Results are covered in Sect. 6, where we evaluate various schedulability metrics and design trade-offs for MC systems. The conclusion is given in Sect. 7, followed by references.

Related work

Vestal's paper (2007) is the first to present the MC model, where safety-critical tasks have multiple WCET estimates with different levels of assurance. Based on the model, a preemptive fixed priority scheduling scheme for sporadic task sets is presented: Static Mixed Criticality (SMC). In the widely examined dual-criticality case, hard guarantees are given to hi tasks, but lo jobs might miss their deadline if a hi job overruns its optimistic WCET. In addition, a lo job is descheduled if it overruns its WCET. An important fixed priority scheduling scheme, Adaptive Mixed Criticality (AMC), defines a system that can operate in different modes. The system starts in lo-criticality mode, where all tasks are scheduled to execute according to their optimistic WCET estimates. If any job overruns its optimistic WCET, a switch to hi-criticality mode happens, where all lo tasks are de-scheduled. This way, hi tasks are guaranteed to meet their deadlines at all times, whereas lo tasks have this guarantee only in lo-criticality mode.

EDF scheduling has been adapted to Vestal's model as well, with a scheduling scheme for sporadic task sets based on EDF, called EDF-VD. In this scheme, the deadlines of all hi tasks are scaled down by a single scaling factor so that an overrun is detected early. Once an overrun is detected, the system enters hi-criticality mode, where all lo tasks are de-scheduled. In this scheme, all tasks meet their deadlines if no optimistic WCET is overrun, while only hi tasks meet their deadlines if some are overrun. Ekberg and Yi (2012) use demand-bound functions to scale the deadlines of hi tasks individually, using a heuristic search strategy. Deadlines are chosen so that the schedulability of the system is maximized. The lo- and hi-criticality mode model in this scheme is similar to the one described above. Huang et al. (2014) amend EDF-VD to include degraded service for low criticality tasks while the system is in hi-criticality mode. The paper also presents an upper bound on the duration of this mode. Park and Kim (2011) present another EDF-based scheme, CBEDF. Here, high criticality tasks are always guaranteed to execute, while some guarantees are given to tasks of low criticality using offline empty slack location discovery. Vestal's model with two modes of operation was also investigated for time-triggered scheduling, most notably in Baruah and Fohler (2011). For a comprehensive overview of research into mixed criticality, we refer the reader to the review by Burns and Davis (2017), while for a discussion on the applicability of mixed-criticality systems to industry and its safety-critical practices see Ernst and Di Natale (2016).
As for probabilistic MC systems, related work often models them with probabilistic Worst-Case Execution Time (pWCET) distributions, which can be seen as extending Vestal's model such that each task has a large number of WCETs with various levels of confidence. A pWCET distribution stems from either the randomness inherent in a system and its environment, or the lack of knowledge we have about a system, or possibly both. To derive these distributions, well established methods like static probabilistic timing analysis (Devgan and Kashyap 2003), or measurement based probabilistic timing analysis techniques (Cucu-Grosjean et al. 2012) already exist. Ideally, modeling tasks with pWCET distributions removes dependency between them, meaning any task-set can be analyzed as though all tasks had independent execution times. In practice, by using pWCET distributions, these dependencies are reduced but not removed completely. This still poses a major problem in applying pWCET methodologies to real-time computing. For an extensive survey of timing analysis techniques, we refer the reader to the existing literature. In this paper we assume that tasks' execution times are modeled with given random variables, which can be seen as an abstraction of ideal pWCETs.
For the analysis of probabilistic MC systems, obtaining probabilistic response times is key. The survey on probabilistic schedulability analyses lists various approaches to response time analysis. Our paper builds mainly upon the work of Díaz et al. (2002), as their analysis of real-time systems provides safe, pessimistic bounds. Using probabilistic analysis, existing work often presents scheduling schemes where individual tasks have certain permissible deadline miss probabilities. Examples are Maxim et al. (2017) and Abdeddaïm and Maxim (2017), where SMC and AMC scheduling are adapted to a probabilistic MC model, demonstrating the improvement in schedulability. Masrur (2016) proposes a scheme with no mode switches, where lo tasks have a soft guarantee on meeting their deadline as well. Gopalakrishnan (2016, 2018) use a Markov decision process to provide probabilistic guarantees to jobs, and also formulate an optimization problem that provides the scheduling policy. Santinelli and George (2015), Santinelli and Guo (2018), and Santinelli et al. (2016) examine probabilistic MC systems through sensitivity analysis, which focuses on the impact of varying execution times. However, we observe that a holistic characterization of probabilistic mixed-criticality systems remains largely unexplored in the state-of-the-art. Deadline miss probabilities of individual jobs are often not aggregated into system-wide metrics, for example in Masrur (2016) and Maxim et al. (2017). We note that giving soft guarantees to individual tasks is not equivalent to guaranteeing a probability of deadline miss per hour. Another related work, Guo et al. (2015), analyzes a simple probabilistic model, where a hi task has just two WCETs and their corresponding probabilities of occurrence. Using the model, they propose an EDF-based scheduling algorithm which has an allowed probability of a timing fault happening system-wide. Finally, Küttler et al. (2017) consider a model where some guarantees are available to tasks of lower criticality. They propose lowering the priorities of lower criticality tasks in certain modes of operation. Still, without characterizing the duration of the modes, we believe that the impact of degradation of lo tasks cannot be properly quantified.
Finally, our own previous work (Draskovic et al. 2016) addresses the probability of deadline miss in lo-criticality mode of a dual mode system, while also commenting on the time before a transition to hi-criticality mode happens. However, a system-wide view is not given, as hi-criticality mode is not analyzed. In this paper, we address the aforementioned limitations of the state-of-the-art.

System model
We start this section with an informal overview of our system model, before precise definitions are presented. The model is an extension of Vestal's original model (2007), and, as with Adaptive Mixed Criticality, there are two modes of operation, lo- and hi-criticality mode.
lo-criticality mode can be considered the normal mode of operation, and the system is expected to operate in this mode most of the time. hi-criticality mode can be considered an emergency mode, where newly instantiated lo jobs are started in degraded mode so that preference is given to the execution of hi jobs. More specifically, hi criticality tasks are not affected by the mode of operation; these tasks are always released and executed until their completion. lo criticality tasks have two variants: each lo job can be released in degraded or regular mode, and it always finishes in the mode it started with. Though lo tasks are never dropped, they are released with degradation when the system is in hi-criticality mode. In practice, this means that there are two implementations of each task, and the degraded variant offers reduced functionality; for example, a numerical result is computed with less precision. Vestal's original model specifies dropping lo jobs when hi jobs need more resources, and our model can be seen as a generalization where not executing a job is the extreme case.
The system starts in lo-criticality mode, and remains there until a mode switching event occurs. The first mode switching event is the only one discussed for non-probabilistic MC systems, and is thus found in previous work (for example, Ekberg and Yi 2012; Huang et al. 2014; Maxim et al. 2017): a hi job's execution lasts longer than a provided threshold. The second mode switching event is when a hi job misses its deadline. It is introduced to reduce the probability of consecutive deadline misses of hi jobs. Note that a hi job might miss its deadline without overrunning its threshold execution time, for example because it was blocked by jobs of higher priority. Finally, the third mode switching event is when a long backlog of lo jobs accumulates, which could in turn produce an arbitrarily high backlog when entering hi-criticality mode. Once in hi-criticality mode, the system switches back to lo-criticality mode the first time it is idle.
Using this model, we say a task-set is schedulable under fixed priority preemptive scheduling if the probability that any job misses its deadline during an hour of operation is sufficiently small, and if the ratio of lo jobs released in degraded mode is acceptable.
General notation on random variables This work deals with discrete random variables, and they are denoted using calligraphic symbols, for example A. The probability function of A, noted p_A(⋅), gives the probability that A takes a specific value u: p_A(u) = ℙ(A = u). Without loss of generality, we assume that the possible values of all random variables span the full range of natural numbers. If the maximal and minimal values with non-zero probability of A exist, and are noted u_max and u_min, then the probability function can be represented in vector notation: p_A = (p_A(u_min), p_A(u_min + 1), …, p_A(u_max)). Let us define a relation to compare two random variables A and B, as was done by Díaz et al. (2002).
Definition 1 (First-Order Stochastic Dominance) A is greater than or equal to B, written as A ⪰ B, if and only if ℙ(A ≤ s) ≤ ℙ(B ≤ s) for all s. Note that probability distributions can be incomparable. We introduce a shorthand notation for the probability that a variable modeled with random variable A has a value greater than scalar s: instead of the cumbersome expression ∑_{i>s} ℙ(A = i), we use ℙ(s < A). Finally, we introduce a simple notation [s]_1 to indicate that a scalar or expression s is limited to a maximum value of 1: [s]_1 = min(s, 1).
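To make the notation concrete, the dominance relation and the shorthand ℙ(s < A) can be sketched in code. The following is an illustrative sketch, not the paper's implementation: the names `dominates` and `exceed_prob` are ours, and a discrete random variable is assumed to be stored as a list of probabilities indexed by value.

```python
def exceed_prob(pmf, s):
    """P(s < A): total probability mass strictly above scalar s."""
    return sum(p for u, p in enumerate(pmf) if u > s)

def dominates(pmf_a, pmf_b):
    """First-order stochastic dominance A >= B:
    P(A <= s) <= P(B <= s) for every s, i.e. A's CDF lies below B's."""
    n = max(len(pmf_a), len(pmf_b))
    a = pmf_a + [0.0] * (n - len(pmf_a))
    b = pmf_b + [0.0] * (n - len(pmf_b))
    cdf_a = cdf_b = 0.0
    for u in range(n):
        cdf_a += a[u]
        cdf_b += b[u]
        if cdf_a > cdf_b + 1e-12:  # A has more mass at low values: no dominance
            return False
    return True

# Example: A puts its mass on higher values than B, so A >= B but not B >= A.
A = [0.0, 0.2, 0.3, 0.5]   # P(A=1)=0.2, P(A=2)=0.3, P(A=3)=0.5
B = [0.1, 0.4, 0.5]        # P(B=0)=0.1, P(B=1)=0.4, P(B=2)=0.5
```

Note that `dominates(A, B)` and `dominates(B, A)` can both be false, matching the remark that distributions can be incomparable.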
Task model A task-set Π consists of N independent tasks. Each task is periodic with a constrained deadline, an initial phase, and a criticality level. A single task τ_i is given by the tuple (T_i, D_i, φ_i, χ_i, C_i), where T_i is the period, D_i is the relative deadline, φ_i is the phase, χ_i ∈ {LO, HI} is the task's criticality level, and C_i models the probabilistic execution time. C_i has a maximal value with non-zero probability, which is the WCET, noted C_i^max. Tasks with criticality level lo and hi are referred to as 'lo tasks' and 'hi tasks', respectively. An instance j of task τ_i is called a job, and denoted as τ_{i,j}. Each job τ_{i,j} has its release time r_{i,j} = φ_i + (j − 1) ⋅ T_i, and its absolute deadline d_{i,j} = r_{i,j} + D_i. The hyperperiod HP of a set of tasks is defined to be the least common multiple of all task periods.
We model the execution times of each task i with known independent and identically distributed random variables C i . This means that there is no dependency between the execution times of any two jobs, regardless of whether they are of the same task or not, and execution times of all jobs of one task are modeled with the same random variable. However, the provided analysis is safe, i.e., if the computed bounds hold for a given set of probabilistic execution times, they also hold if the execution times are smaller or equal according to Definition 1. Therefore, the probabilistic execution times C i can also be regarded as ideal probabilistic worst case execution times (pWCETs), which would remove the requirement that execution times of jobs are independent.
In the standard MC model (Vestal 2007), hi tasks have an optimistic and a pessimistic WCET estimate, and lo tasks are executed by the processor only if hi tasks meet their optimistic WCET estimates during operation. The reasoning behind this is the assumption that most of the time hi tasks will not execute for longer than their optimistic WCET estimate, so fewer computational resources are needed for the correct operation of the system. In this paper, we assume that the distribution of the execution time C_i of each task is known. Therefore, instead of the optimistic WCET estimate, for each hi task we define a threshold execution time value C_i^thr. We assume this value is a given design choice. Note that the probability that a hi task executes for longer than this threshold is ℙ(C_i > C_i^thr). The precise way this threshold is used in the scheduling of jobs is described later in this section. Additionally, instead of not executing lo jobs in order to free up resources, we introduce that each lo job can be released in degraded or regular mode. If it executes with degradation, its WCET is C_i^deg. The C_i^deg value is assumed to be given as a design choice. It could be zero if the task is not to be run in hi-criticality mode, or it can be any value less than its WCET; in this case it is assumed that a reduced functionality is provided.
For the execution time of hi tasks, it is useful to introduce the following random variable that describes a worst-case behavior as long as the analyzed system is still in lo-criticality mode.

Definition 2 (Trimmed Execution Time) Random variable C_i^LO models the execution time of a hi task τ_i, but modified such that the task does not execute for longer than C_i^thr:

p_{C_i^LO}(u) = p_{C_i}(u) for u < C_i^thr, p_{C_i^LO}(C_i^thr) = ℙ(C_i ≥ C_i^thr), and p_{C_i^LO}(u) = 0 for u > C_i^thr.

Figure 1a illustrates the C_i of a lo task, as well as the WCET in degraded mode, denoted as C_i^deg. Figure 1b illustrates the C_i of a hi task as well as the trimmed execution time C_i^LO with the corresponding C_i^thr and C_i^max values. This definition differs from the one found in many related works, e.g., Draskovic et al. (2016), where the execution time of hi tasks in lo-criticality mode is defined via the conditional probability ℙ(C_i = u | C_i ≤ C_i^thr), often called the 'truncated' execution time. The 'trimmed' execution times, as defined in this paper, are by definition greater than or equal to the equivalent 'truncated' execution times. This paper uses 'trimmed' execution times because they simplify the analysis of hi-criticality mode, namely by simplifying the initial conditions noted in Definition 12. The cost of this simplification is that it introduces pessimism in the lo-criticality mode analysis; however, this has been found to be numerically negligible through simulations. Nevertheless, using the 'truncated' execution times with a more complex analysis is also possible. For more information, see the comment on future work in the conclusion.
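The difference between 'trimmed' and 'truncated' execution times can be illustrated with a short sketch. The function names are ours, and PMFs are assumed to be lists indexed by execution time whose probabilities sum to one:

```python
def trim(pmf, c_thr):
    """'Trimmed' execution time C_i^LO: all mass above C_thr is moved onto
    C_thr, so the variable never exceeds the threshold but keeps total mass 1."""
    out = pmf[:c_thr + 1] + [0.0] * max(0, c_thr + 1 - len(pmf))
    out[c_thr] += sum(pmf[c_thr + 1:])
    return out

def truncate(pmf, c_thr):
    """'Truncated' execution time: conditional distribution P(C=u | C <= C_thr),
    i.e. mass above C_thr is discarded and the rest renormalized."""
    kept = pmf[:c_thr + 1]
    mass = sum(kept)
    return [p / mass for p in kept]
```

For example, for a PMF [0.0, 0.5, 0.3, 0.2] and C_thr = 2, trimming yields [0.0, 0.5, 0.5] while truncating yields [0.0, 0.625, 0.375]. The trimmed variant stochastically dominates the truncated one, which is the source of the pessimism mentioned above.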
The response time of job τ_{i,j} is modeled with random variable R_{i,j}. The way this variable can be obtained and upper-bounded is presented in Sect. 4. The deadline miss probability of job τ_{i,j} is the probability that this job finishes after its deadline, ℙ(R_{i,j} > D_i).

Schedulability In this paper, we consider a single-core platform. A simple execution model is used, where task preemption overhead is zero.
As in the standard MC model, the system is defined to operate in two modes of operation, lo-and hi-criticality mode. When the system is operating in lo-criticality mode, both lo and hi jobs are released. When the system is operating in hi-criticality mode, hi jobs are released normally, while lo jobs are released in degraded mode.
In this paper the definition of schedulability is inspired by the probability-of-failure-per-hour notion. Therefore, we first define the probability of deadline miss per hour, before defining schedulability. We also define the probability of degraded job, i.e., the long-run proportion of lo jobs that execute in degraded mode.

Definition 3 (Failure Probabilities) The probability of deadline miss per time interval T for hi or lo jobs is denoted as HI(T) or LO(T), respectively. It is the probability that at least one hi or lo job misses its deadline during a time interval of length T.

Definition 4 (Probability of Degraded Job) The probability of degraded lo jobs, deg, is the probability that any individual lo job is released in degraded mode.

The probabilistic MC scheduling scheme used in this paper can now be defined:

Definition 6 (Probabilistic MC Scheduling) In lo-criticality mode, all tasks are scheduled using a provided fixed-priority preemptive schedule. The system starts in lo-criticality mode, and remains in it until one of the following events causes a transition to hi-criticality mode:
1. A hi job overruns its threshold execution time C_i^thr.
2. A hi job misses its deadline.
3. The system-level backlog, meaning the amount of pending execution, becomes higher than a predefined threshold B_max.

In hi-criticality mode, the same fixed-priority preemptive schedule is used, but lo jobs are released with degradation in order to free up the processor. lo jobs that started in lo-criticality mode still continue in their normal mode with execution time C_i. The system remains in hi-criticality mode until it becomes idle for the first time.
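The mode-transition rules of Definition 6 amount to a small state machine. The sketch below is illustrative only: the names and boolean inputs (which a runtime monitor would supply) are ours, not the paper's.

```python
from enum import Enum

class Mode(Enum):
    LO = 0
    HI = 1

def next_mode(mode, overran_threshold, missed_deadline, backlog, b_max, idle):
    """Mode-switch logic: LO -> HI on a threshold overrun, a hi deadline miss,
    or a backlog above B_max; HI -> LO the first time the processor is idle."""
    if mode is Mode.LO and (overran_threshold or missed_deadline or backlog > b_max):
        return Mode.HI
    if mode is Mode.HI and idle:
        return Mode.LO
    return mode
```

For instance, a threshold overrun in lo-criticality mode triggers the switch to hi-criticality mode, while in hi-criticality mode only an idle instant brings the system back.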

Preliminaries
With tasks having probabilistic execution times, a set of computational primitives is required to perform the schedulability analysis. The probabilistic analysis of real-time systems on which our analysis is based was described by Díaz et al. (2002).
We summarize the analysis technique in this section. The analysis and its primitives are used extensively in the following sections to perform the schedulability analysis of mixed-criticality systems. The analysis requires computation of the backlog, i.e., the sum of pending execution times of all ready jobs. For each priority level i there is a backlog containing the execution times of all pending jobs with priority i or higher. When a new job with priority i arrives, all backlogs with level i or lower are increased by adding its execution time. Adding the execution time random variable to a backlog is done using convolution. Executing a job decreases the backlogs of all levels i that are equal or smaller than the priority of the job. Decreasing the backlog is done using shrinking.
Definition 7 (Backlog) The ith priority backlog at time t, B i (t) , is a random variable that describes the sum of all remaining execution times of pending jobs of priority not less than i, at time t. The backlog B i (t−) is the same as B i (t) , except it does not take into account jobs released at time t.
Using convolution to compute backlog after arrival of a job Suppose that a job τ_{i,j} is released at time r_{i,j}, and B_k(r_{i,j}−) is the kth priority backlog at time r_{i,j} that excludes the newly released job. Assuming that i ≥ k, and that no other job is released at the same time, backlog B_k(r_{i,j}) can be computed using the convolution operator ⊗: B_k(r_{i,j}) = B_k(r_{i,j}−) ⊗ C_i.

Backlog reduction due to execution of highest priority job Let us assume that in the interval t_0 < t < t_1 there are no job arrivals. During this interval, the backlog is decreased as the processor executes pending jobs. If B_i(t_0) is the ith priority backlog at time t_0, the corresponding backlog at time t_0 < t < t_1 can be computed using the so-called shrinking operation: the backlog after an execution of t − t_0 time units is obtained by left-shifting the probability function of the initial backlog by t − t_0, while accumulating at zero, since the processor is idle when no pending execution is present. For brevity, we define the corresponding shrinking function of a random variable B: p_{shrink(B,m)}(u) = p_B(u + m) for u > 0, and p_{shrink(B,m)}(0) = ∑_{v ≤ m} p_B(v).

Backlog state space exploration First, we define the function for computing the backlog at some time t + u given the backlog at time t.
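The two primitives can be written down directly on PMFs represented as lists indexed by backlog value. This is a minimal sketch under the assumption of integer time; the function names are ours:

```python
def convolve(pmf_a, pmf_b):
    """Backlog update on job arrival: PMF of the sum A + B
    of two independent non-negative random variables."""
    out = [0.0] * (len(pmf_a) + len(pmf_b) - 1)
    for u, pa in enumerate(pmf_a):
        for v, pb in enumerate(pmf_b):
            out[u + v] += pa * pb
    return out

def shrink(pmf, m):
    """Backlog reduction after m time units of execution: left-shift by m,
    accumulating at zero the mass that would fall below zero (idle processor)."""
    out = [0.0] * max(1, len(pmf) - m)
    out[0] = sum(pmf[:m + 1])
    for u in range(m + 1, len(pmf)):
        out[u - m] = pmf[u]
    return out
```

For example, convolving [0.5, 0.5] with [0.0, 1.0] gives [0.0, 0.5, 0.5], and shrinking the backlog [0.2, 0.5, 0.3] by one time unit gives [0.7, 0.3].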

Definition 8 (Backlog Computation)
A function computes the ith priority backlog at time t + u, i.e., B_i(t + u), given the ith priority backlog B_i(t) at time t, under the assumption that the task arrivals and execution times in the interval [t, t + u) are in accordance with task set Π.
The computation can be done by applying the definition of a task set as well as the previously described operations, namely convolution and shrinking. We demonstrate this using the following example.

Upper bound of backlog
In order to provide a holistic schedulability analysis, we need to determine upper bounds on the backlogs for all time instances within any future hyperperiod, i.e., we are interested in a set of random variables B̃_i(t) such that B_i(n ⋅ HP + t) ⪯ B̃_i(t) for all priority levels i, future hyperperiods n ≥ 0, and time instances within a hyperperiod 0 ≤ t < HP. We start by computing the steady-state backlog and proceed by showing that it provides the desired upper bound.
Computation of the steady state backlog The ith priority backlog at the start of the nth hyperperiod is B_i(n ⋅ HP), but this backlog may be different for each n. However, the sequence of random variables {B_i(n ⋅ HP)} can be viewed as a Markov process, as shown by Díaz et al. (2002). Specifically, they present the following theorem about the existence of a limit of the above mentioned sequence, including the corresponding proof: Theorem 1 (Section 4.2 of Díaz et al. 2002) The sequence of backlogs {B_i(n ⋅ HP)} for n ≥ 0, where i is a priority level, has a limit if the average system utilization is less than one, and if the sequence of jobs remains the same each hyperperiod. If it exists, this limit is called the ith priority steady state backlog at the beginning of the hyperperiod, and noted B̃_i(0).
For computing the steady state backlog B̃_i(0) at the start of a hyperperiod, Díaz et al. propose three methods. The first method is exact, stated in Sect. 4.3.2 of Díaz et al. (2002), and exploits the structure of the infinite-dimensional transition matrix P. A second method (Sect. 4.3.3 of Díaz et al. (2002)) finds an approximate value of B̃_i(0) by truncating P to make its dimension finite. Finally, a third method is to iterate over hyperperiods until the following relaxed steady state condition is satisfied: max_u |p_{B_i((n+1) ⋅ HP)}(u) − p_{B_i(n ⋅ HP)}(u)| ≤ ε. This condition states that the maximum difference between consecutive ith priority backlogs at the start of a hyperperiod must not exceed a configurable small value ε. This method requires neither computation nor truncation of the transition matrix P. For further details on choosing appropriate initial backlogs, please refer to the original work.
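The third method can be sketched as a fixed-point iteration. Here `simulate_hyperperiod` stands for one pass of the convolution-and-shrinking analysis over a hyperperiod (mapping the backlog PMF at the start of one hyperperiod to the start of the next); all names are illustrative:

```python
def steady_state_backlog(initial_pmf, simulate_hyperperiod, eps=1e-9, max_iter=1000):
    """Iterate the start-of-hyperperiod backlog until the relaxed steady-state
    condition holds: the largest pointwise PMF change drops below eps."""
    b = initial_pmf
    for _ in range(max_iter):
        nxt = simulate_hyperperiod(b)
        n = max(len(b), len(nxt))
        pad_b = b + [0.0] * (n - len(b))
        pad_n = nxt + [0.0] * (n - len(nxt))
        if max(abs(x - y) for x, y in zip(pad_b, pad_n)) < eps:
            return nxt
        b = nxt
    return b
```

Starting from a zero backlog (the PMF [1.0]) matches the assumption used for the upper-bound argument below.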
Pessimism of the steady state backlog Assuming that the initial backlog is zero at every priority level, and that the sequence of jobs remains the same each hyperperiod, it has been shown that the ith priority steady state backlog is an upper bound on all ith priority backlogs at the start of a hyperperiod. The following two lemmas can be used to show that the backlogs at the beginning of a hyperperiod are non-decreasing from hyperperiod to hyperperiod. They state that the operations of convolution and shrinking preserve the partial ordering of random variables.

Lemma 1 (Property 3 in Díaz et al. 2004) Given three positive random variables X, Y and Z such that X ⪯ Y, it holds that X ⊗ Z ⪯ Y ⊗ Z, i.e., convolution preserves the partial ordering.

Lemma 2 (Property 6 in Díaz et al. 2004) Given two positive random variables X and Y such that X ⪯ Y, it holds that shrink(X, Δ) ⪯ shrink(Y, Δ) for any Δ ≥ 0, i.e., shrinking preserves the partial ordering.

Now, the following Theorem can be shown by means of the above considerations:

Theorem 2 Assuming that the initial backlog is zero, and that the sequence of jobs remains the same each hyperperiod, the ith priority backlog at time t inside every hyperperiod is upper bounded by the ith priority steady state backlog at time t inside the hyperperiod: B i (t + n ⋅ HP) ⪯ B̄ i (t) for all n ≥ 0 and 0 ≤ t < HP.

Proof We have, by definition, B̄ i (t) = lim n→∞ B i (t + n ⋅ HP) for 0 ≤ t < HP , and we know from Theorem 1 that B i (n ⋅ HP) ⪯ B̄ i (0) for all n ≥ 0. The claim then follows by applying Lemmas 1 and 2 to propagate this ordering from the start of the hyperperiod to any time t inside it. ◻
In summary, if the initial backlog is zero, the steady-state backlog B̄ i (t) provides an upper bound for all backlogs within any future hyperperiod. This result will be used extensively in the response time analysis described next.

Response time analysis
The response time R i,j of a job tells us when this job will finish its execution, relative to its release time. We summarize the procedure as proposed by Díaz et al. (2002). The response time of a given job τ i,j is influenced by the backlog at its release time, B i (r i,j ) , and by the computation times of all jobs that preempt it. Therefore, we can define a function R i,j = R( B i (r i,j ), Π, τ i,j ). The pseudocode for computing response times is given in Algorithm 1. For a given job τ i,j , first C i is convolved with the current ith priority backlog (line 2). This would already give the response time of τ i,j if there were no preempting jobs. When a preempting job is released at a given point in time, the probability function vector of τ i,j 's response time is split in two portions (line 6): the part before preemption ( R l ) and the part after preemption ( R u ). The part after preemption is convolved with the probability function vector of the preempting job's computation time, and the result is added to R l in order to get τ i,j 's response time after this preemption (lines 7 and 8). The probability function of R i,j is only computed until the job's deadline d i,j . Next, we present a theorem that we will use to obtain the worst-case hourly deadline miss probability. Beforehand, a lemma shows that the response time function is monotone in the backlog at the release time of the job.
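The split-and-convolve procedure of Algorithm 1 can be sketched as follows. This is a hypothetical illustration with invented job parameters; PMFs are indexed by integer time units, and line references mirror the description above.

```python
# A hypothetical sketch of the response-time procedure; the job parameters
# below are invented for illustration.

def convolve(a, b):
    """PMF of the sum of two independent non-negative integer random variables."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, pa in enumerate(a):
        for j, pb in enumerate(b):
            out[i + j] += pa * pb
    return out

def response_time(backlog, c_i, preemptions, deadline):
    """PMF of a job's response time, developed only up to its deadline.
    backlog: PMF of pending higher/equal-priority work at the release.
    c_i: PMF of the job's own execution time.
    preemptions: sorted (time_after_release, execution_time_pmf) pairs.
    Returns (truncated PMF, deadline miss probability)."""
    r = convolve(backlog, c_i)           # line 2: response time with no preemption
    for p, c_hp in preemptions:
        r_l, r_u = r[:p + 1], r[p + 1:]  # line 6: split at the preemption instant
        if any(r_u):
            # lines 7-8: the unfinished part absorbs the preempting job's demand;
            # index m of the convolution corresponds to finishing time p + 1 + m.
            r = r_l + convolve(r_u, c_hp)
    dmp = sum(r[deadline + 1:])          # mass beyond the deadline is a miss
    return r[:deadline + 1], dmp

# Job with execution time 3 w.p. 0.5 and 6 w.p. 0.5, released into an empty
# backlog, preempted 4 time units after release by a job of execution time 2.
pmf, dmp = response_time([1.0],
                         [0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.5],
                         [(4, [0.0, 0.0, 1.0])],
                         deadline=7)
```

In this toy run, the job either finishes at time 3 (before the preemption) or, having been preempted, at time 8, so half of the probability mass misses the deadline of 7.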
Lemma 3 (Theorem 1, Property 3 of López et al. 2008) Given two random variables B and B′ such that B ⪯ B′, it holds that R(B, Π, τ i,j ) ⪯ R(B′, Π, τ i,j ), i.e., the response time function is monotone in the backlog. As the steady-state backlog at any time within the hyperperiod is always greater than or equal to the backlog at the corresponding time within any particular hyperperiod, the following Lemma can be obtained.

Lemma 4
Assuming the initial backlog is zero, substituting any backlog B i (r i,j ) with the corresponding steady state backlog B̄ i (r i,j mod HP) in the response time analysis produces a value greater than or equal to the response time.
Proof This Lemma is a direct consequence of Lemma 3 and Theorem 2, as well as the results in López et al. (2008). ◻ The value R( B̄ i (r i,j mod HP), Π, τ i,j ) will be named the steady state response time, and denoted as R̄ i,j . Note that the use of the steady-state backlog B̄ i leads to an upper bound on the response time R i,j . Based on these results, we can now determine an upper bound on the response time of each job. Since we defined the steady-state (worst case) hyperperiod, we can finally determine the worst-case deadline miss probability of a job τ i,j within any hyperperiod. Instead of using the modulo operation as in Lemma 4, we can also just consider the jobs τ i,j within the single worst-case hyperperiod, with 0 ≤ j < HP∕T i .

Theorem 3
The deadline miss probability of a job τ i,j , denoted as DMP i,j , can be bounded by the probability that the steady state response time exceeds the deadline: DMP i,j ≤ ℙ( R̄ i,j > d i,j − r i,j ). Proof The proof follows directly from the results described in López et al. (2008) as well as Lemma 4. ◻

Analysis of mixed-criticality systems with stochastic task execution times
In this section, we determine the ( P HI , P LO , P deg )-schedulability of a mixed-criticality task set Π as defined in Definition 5. To this end, we compute upper bounds on the probabilities that there is at least one deadline miss of a HI or LO job within 1 h, i.e., P HI (T) or P LO (T) , respectively, for a time interval of length T = 1 h. In addition, we will compute an upper bound on the probability P deg that a lo job operates in degraded mode. The underlying concept of the forthcoming analysis is described next.
Let us start with the computation of the probability P deg that a LO job operates in degraded mode. This probability can be upper bounded by noting that LO jobs are executed in their degraded mode only if their release time r i,j falls into a HI-criticality mode. Therefore, we will first determine the maximal length Δ HI max of any HI-criticality mode execution. In addition, we determine an upper bound P HP LO→HI on the probability that there is at least one mode switch within a single hyperperiod. Using these two values, we can bound the relative time the system is in HI mode and therefore the probability that a lo job operates in degraded mode.
To determine upper bounds on the probabilities P HI (1h) , P LO (1h) that there is at least one deadline miss of a HI or LO job within 1 h, we first look at upper bounds on the probabilities that at least one HI or LO job misses its deadline during any HI-criticality mode execution that is started within a hyperperiod, denoted as P HI HI or P HI LO , respectively. Note that the upper index denotes the mode, whereas the lower one denotes the criticality of the jobs we are considering. In addition, we determine an upper bound on the probability that at least one HI or LO job misses its deadline during a hyperperiod under the conditions that, first, no mode switches take place and, second, HI jobs do not overrun their threshold C thr . We denote these values as P LO HI or P LO LO , respectively. Again, the upper index concerns the mode and the lower one the criticality of the considered jobs. Now we can determine the desired probabilities P HI (T) and P LO (T) by combining (a) the worst-case probabilities P LO HI and P LO LO that a deadline miss happens during a hyperperiod if the system is in LO-criticality mode, and (b) the worst-case probabilities P HI HI and P HI LO that at least one HI or LO job misses its deadline during any HI-criticality mode started within a hyperperiod.
We will now first determine the bounds P deg and P χ (1h) using the above defined quantities: Δ HI max , P HP LO→HI , P HI χ and P LO χ for HI and LO jobs, i.e., for χ ∈ {LO, HI} . Afterwards, we will explain how these quantities can be determined.

Probability of job degradation
In this section, we will compute an upper bound on the probability that a LO job operates in degraded mode, i.e., P deg . As described above, we will make use of the maximal duration of a HI-criticality mode execution and of the probability that there is at least one mode switch within a hyperperiod.

Real-Time Systems (2021) 57:397-442

Definition 9 (Maximal Duration of High-Criticality Mode) The quantity Δ HI max denotes the maximal duration for which the system continuously executes in HI-criticality mode.
Definition 10 (Mode Switch Probability) The quantity P HP LO→HI denotes an upper bound on the probability that there is at least one mode switch lo → hi within a single hyperperiod.
Using these definitions, we can determine an upper bound on the desired quantity.

Theorem 4 The probability of degradation of a LO job can be bounded as follows: P deg ≤ [ ⌈ 1 + Δ HI max ∕HP ⌉ ⋅ P HP LO→HI ] 1 .
Proof We obtain this value by multiplying the probability that hi-criticality mode is entered during one hyperperiod with the number of lo jobs that are released in degraded mode when it does.
First, note that there is some constant number K of lo jobs that are released every hyperperiod. From the moment hi-criticality mode is entered, it executes at least partly in at most ⌈1 + Δ HI max ∕HP⌉ hyperperiods. Therefore, whatever the number of mode switches inside one hyperperiod, in the worst case all lo jobs from this and the next ⌈Δ HI max ∕HP⌉ hyperperiods are executed in degraded mode. In other words, at most K ⋅ ⌈ 1 + Δ HI max ∕HP ⌉ lo jobs are degraded. Second, note that there is at least one mode switch within a hyperperiod with probability at most P HP LO→HI . Combining this probability with the fraction of lo jobs that are degraded if a mode switch happens, we get P deg ≤ [ ⌈ 1 + Δ HI max ∕HP ⌉ ⋅ P HP LO→HI ] 1 . ◻ This upper bound on the probability of degradation of a LO job may be overly pessimistic when the hyperperiod is much larger than the maximal duration of HI-criticality mode, HP ≫ Δ HI max . Still, it is not usual practice to design a system with a very long hyperperiod, and we therefore consider the upper bound satisfactory.
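The arithmetic of the bound can be made concrete with a small worked example. All numbers below are invented for illustration; the per-hyperperiod switch probability and the maximal HI-mode duration would come from the later analysis.

```python
import math

# Worked example of the Theorem 4 bound with invented numbers.
P_HP_LO_HI = 1e-5     # upper bound on P(at least one lo -> hi switch per HP)
delta_hi_max = 120    # maximal duration of HI-criticality mode (time units)
HP = 1000             # hyperperiod length (time units)

# A LO job is degraded only if a switch occurs in its own hyperperiod or in
# one of the preceding ceil(delta/HP) hyperperiods; union bound, clipped to 1.
p_deg = min(1.0, math.ceil(1 + delta_hi_max / HP) * P_HP_LO_HI)
# Here ceil(1.12) = 2 covered hyperperiods, so p_deg = 2e-5.
```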
The necessary quantities Δ HI max and P HP LO→HI will be determined later, as part of our analysis of the HI- and LO-criticality modes.

Probabilities of deadline misses
Let us now determine the deadline miss probabilities P HI (T) and P LO (T) , i.e., the probabilities that at least one HI criticality job or one LO criticality job misses its deadline within a time interval of length T. With T = 1 h we get the quantities required by the schedulability test according to Definition 5. For the following theorem, let us suppose that χ ∈ {LO, HI} denotes the criticality of jobs in the deadline miss probabilities.
In principle, the analysis investigates two coupled systems. The first one, denoted as the LO-system, never does a mode switch, i.e., all mode switch events are ignored. In addition, it uses modified execution time probabilities of HI criticality jobs such that the LO-system pessimistically describes the behavior of the original system when operating in LO-criticality mode. In particular, all execution times of HI jobs that exceed the threshold are trimmed to it, see Definition 2. The worst-case steady-state probability that at least one χ-critical job misses its deadline during a hyperperiod in the LO-system is denoted as P LO χ . This probability is determined using the worst-case steady-state backlog and response-time analysis provided in Lemma 4, but using the trimmed execution times of HI jobs. The other system is denoted as the HI-system and considers the case that at least one lo → hi mode switch happened within a hyperperiod, i.e., at least one HI-criticality mode is executed.

Definition 11 (Deadline Miss Probabilities in Different Modes)
The worst-case probability that at least one χ-critical job misses its deadline during any HI-criticality mode started in a single hyperperiod is denoted as P HI χ . The worst-case steady-state probability that at least one χ-critical job misses its deadline during a hyperperiod in a system where (a) all mode switch events are ignored and (b) execution times of HI jobs are trimmed to their threshold according to Definition 2 is denoted as P LO χ . Note that P LO χ can be computed according to Lemma 4. Using these definitions, we can determine bounds on the requested deadline miss probabilities using the following result. The desired probabilities per hour can be obtained by setting T = 1 h.

Theorem 5 (Deadline Miss Probabilities) The deadline miss probabilities P χ (T) for χ ∈ {LO, HI} can be bounded as follows: P χ (T) ≤ [ 2 − (1 − P LO χ)^⌈T∕HP⌉ − (1 − P HI χ)^⌈T∕HP⌉ ] 1 .
Proof It needs to be proven that the probability that there is no deadline miss of any χ-critical job within a time interval of length T is lower bounded by (1 − P LO χ)^⌈T∕HP⌉ + (1 − P HI χ)^⌈T∕HP⌉ − 1. There is no deadline miss within T if there is no deadline miss while the system executes in LO-criticality mode and there is no deadline miss while it operates in HI-criticality mode. Suppose the first event is named a and the second one b; then we know that p(a ∩ b) = p(a) + p(b) − p(a ∪ b) ≥ p(a) + p(b) − 1 even if the two events are not independent. Therefore, the theorem is true if (1 − P LO χ)^⌈T∕HP⌉ lower bounds the probability that there is no deadline miss when the system is in LO-criticality mode and (1 − P HI χ)^⌈T∕HP⌉ lower bounds the probability that there is no deadline miss when the system is in HI-criticality mode. Let us first look at the LO-criticality mode. First, note that ⌈T∕HP⌉ is the number of hyperperiods that completely cover an interval of length T. Therefore, we can safely assume that our interval has the length of ⌈T∕HP⌉ full hyperperiods. Remember that the backlogs during a steady-state computation are monotonically increasing, see Theorem 2. In a similar way, response times of jobs are monotonically increasing from hyperperiod to hyperperiod, see Lemma 4. As a result, the deadline miss probabilities of jobs are increasing from hyperperiod to hyperperiod as well, and P LO χ is a safe upper bound for every hyperperiod in our modified LO-system. We model the system as a worst-case Bernoulli process, acting from hyperperiod to hyperperiod. As a result, (1 − P LO χ)^⌈T∕HP⌉ is a lower bound on the probability that there is no deadline miss in the LO-system, i.e., the system where all switching events are disabled and the execution times of HI jobs are trimmed. It remains to be shown that the response times in our LO-system are always larger than or equal to those in the original system when it is in LO-criticality mode. This is certainly true, as after a hi → lo mode switch the backlogs are 0 for sure and therefore lower than those in the modified LO-system.
Due to Lemma 4, the response times are larger in the modified LO-system. Moreover, trimming the execution times of HI criticality jobs has no influence on the backlogs as long as there is no hi → lo mode switch, i.e., while the original system operates in LO-mode. Now let us look at the HI-mode. Again note that ⌈T∕HP⌉ is the number of hyperperiods that completely cover an interval of length T. The worst-case probability that at least one χ-critical job misses its deadline during any HI-criticality mode started in a single hyperperiod is denoted as P HI χ , see Definition 11. Therefore, (1 − P HI χ)^⌈T∕HP⌉ is a lower bound on the probability that there is no deadline miss caused by a lo → hi switch within a hyperperiod. This concludes the proof, as we considered the case that the system operates in LO-criticality mode somewhere within a hyperperiod (bounded by the case that it is always in this mode during the hyperperiod) and the case that one or more HI-criticality modes are started within a hyperperiod (all corresponding deadline misses are accounted for in the hyperperiod where the HI-criticality mode was started). ◻ Now we will determine the quantities Δ HI max , P HP LO→HI , P LO χ and P HI χ required to compute P deg , P HI (T) and P LO (T) . We start by analyzing the behavior of the MC system in LO-criticality mode.
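The combination step can be illustrated numerically. The per-hyperperiod miss probabilities and timing parameters below are invented; the point is only the worst-case Bernoulli aggregation over ⌈T∕HP⌉ hyperperiods and the p(a ∩ b) ≥ p(a) + p(b) − 1 combination.

```python
import math

# Worked example of the Theorem 5 bound with invented per-hyperperiod values.
P_LO_chi = 1e-9   # w.c. miss probability per hyperperiod in the LO-system
P_HI_chi = 1e-7   # w.c. miss probability per hyperperiod due to HI-mode
HP_s = 0.25       # hyperperiod in seconds
T_s = 3600.0      # interval of interest: one hour

n = math.ceil(T_s / HP_s)   # hyperperiods covering the interval: 14400
# Worst-case Bernoulli process over n hyperperiods for each mode, combined
# via p(a ∩ b) >= p(a) + p(b) - 1, then clipped to 1.
p_chi_T = min(1.0, 2 - (1 - P_LO_chi) ** n - (1 - P_HI_chi) ** n)
```

For these numbers the hourly bound is dominated by the HI-mode term, landing near 1.45e-3 per hour, which is the kind of figure that can be checked against a tolerable failure rate from Table 1.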

LO-criticality mode
The analysis of the LO-criticality mode will allow us to determine some of the required quantities, namely the worst-case probability P HP LO→HI of at least one lo → hi mode switch within a hyperperiod and the worst-case probability P LO χ that at least one χ-critical job misses its deadline within a hyperperiod when operating in the modified LO-system, see Sect. 5.2. Moreover, we will determine the worst-case probability of a lo → hi mode switch at time instance t ∈ {0, … , HP − 1} within any hyperperiod, as this quantity will allow us to analyse the HI-criticality mode later on.
Lemma 5 Given a modified task system where no lo → hi mode switch is executed and all HI critical jobs are trimmed to their execution time threshold C thr i , see Definition 2. Then, P LO χ = [ Σ τ i,j : χ i = χ DMP i,j ] 1 , where the sum ranges over all χ-critical jobs released within a hyperperiod, is an upper bound on the probability of at least one deadline miss of a χ-critical job during LO-criticality mode execution within any hyperperiod, where DMP i,j denotes an upper bound on the deadline miss probability of job τ i,j according to Theorem 3. Note, […] 1 indicates the expression is limited to a maximum value of 1.
Proof We will show that the response times in the modified system are always larger than or equal to those in the original system when it is in LO-criticality mode. According to Theorem 3, the upper bound on the deadline miss probability DMP i,j holds for any hyperperiod. On the other hand, we cannot assume that the miss probabilities of the jobs within a hyperperiod are independent. Therefore, we upper bound the probability of the union of events by their sum. It remains to be shown that the modified LO-system, with all lo → hi mode switches disabled and the trimmed execution times of HI critical jobs, provides upper bounds on the original system when operating in LO-criticality mode. This is certainly true, as after a hi → lo mode switch in the original system the backlogs are 0 for sure and therefore lower than those in the modified LO-system. Due to Lemma 4, the response times are larger in the modified LO-system. Moreover, trimming the execution times of HI criticality jobs has no influence on the backlogs as long as there is no hi → lo mode switch, i.e., while the original system operates in LO-mode. The bounding of the value P LO χ to 1 is safe, as for any summation of events we have p(a ∪ b) ≤ p(a) + p(b) and p(a ∪ b) ≤ 1 , leading to p(a ∪ b) ≤ min (1, p(a) + p(b)) . ◻ Now, we will determine an upper bound on the worst-case probability P LO→HI (t) of a lo → hi mode switch at time instance t ∈ {0, … , HP − 1} within any hyperperiod. Remember that there are three triggering events for a lo → hi mode switch, namely (a) a HI critical job misses its deadline, (b) the system-level backlog, i.e., the amount of pending execution, exceeds a predefined threshold B max , and (c) a HI critical job overruns its threshold execution time C thr . We will analyze the three mechanisms one after the other and finally combine the results.
Let us start with the deadline miss probability at time instance 0 ≤ t < HP which we denote as P dm (t).
Lemma 6 Given a modified task system where no lo → hi mode switch is executed and all HI critical jobs are trimmed to their execution time threshold C thr i , see Definition 2. Then, P dm (t) = [ Σ τ i,j ∈ S(t) DMP i,j ] 1 , where S(t) denotes the set of all HI critical jobs with deadline at time t, is an upper bound on the probability of at least one deadline miss of a HI critical job during LO-criticality mode execution at time t, 0 ≤ t < HP, where DMP i,j denotes an upper bound on the deadline miss probability of job τ i,j in the modified task system according to Theorem 3. Note, […] 1 indicates the expression is limited to a maximum value of 1.
Proof We cannot assume that the deadline miss probabilities at time t are independent. Therefore, we use the sum of the individual probabilities as an upper bound on the probability of the union of events.
The bounding of the value P dm (t) to 1 is safe, as for any summation of events we have p(a ∪ b) ≤ p(a) + p(b) and p(a ∪ b) ≤ 1 , leading to p(a ∪ b) ≤ min (1, p(a) + p(b)) . Recall that S(t) denotes the set of all HI critical jobs with deadline at time t. ◻ We continue with the probability, denoted P be (t), that at time instance 0 ≤ t < HP the total backlog exceeds the upper bound B max .
Lemma 7 Given a modified task system where no lo → hi mode switch is executed and all HI critical jobs are trimmed to their execution time threshold C thr i , see Definition 2. Then, P be (t) = ℙ( B̄ N (t) > B max ) is an upper bound on the probability that the total backlog at time t exceeds B max during LO-criticality mode execution within any hyperperiod, where B̄ N (t) denotes an upper bound on the lowest priority backlog in the modified task system according to Theorem 2.

Proof
The total backlog equals B N (t) according to Definition 7. The Lemma then directly follows from Theorem 2. ◻ Unfortunately, the computation of the probability P ov (t) that at time instance 0 ≤ t < HP at least one HI critical job overruns its threshold execution time C thr i is more involved. Whereas the overrun probability ℙ(C i > C thr i ) is simple to calculate, it is more complex to determine at what time instance such an event happens, due to interference from other jobs. We first compute the upper bound on the backlog for our modified LO-system as usual. Based on this, we consider each HI critical job individually and compute its response time as if the job had the deterministic execution time C thr i . If this response time plus the release time r i,j of the job equals t, then the job overruns at t, under the condition that it overruns at all. The following Lemma summarizes the corresponding result.

Lemma 8 Given a modified task system where no lo → hi mode switch is executed and all HI critical jobs are trimmed to their execution time threshold C thr i , see Definition 2. Then, for all 0 ≤ t < HP, P ov (t) = [ Σ τ i,j ∈ S ℙ( ( R( B̄ i (r i,j ), Π, τ̂ i,j ) + r i,j ) mod HP = t ) ⋅ ℙ( C i > C thr i ) ] 1 , where S denotes the set of HI critical jobs released within a hyperperiod, is an upper bound on the probability that at time instance t at least one HI critical job overruns its threshold execution time C thr i . Here, B̄ i (t) denotes an upper bound on the level i backlog in the modified task system according to Theorem 2, and τ̂ i,j denotes a modified job τ i,j with a deterministic computation time of C thr i . Note, […] 1 indicates the expression is limited to a maximum value of 1.
Proof At first, note that we do not assume that the probabilities of overrunning the threshold execution time C thr i are independent. Therefore, the probability of at least one overrun at time t is bounded by the sum of the individual probabilities for each HI job, see the definition of S. Moreover, ℙ(a) = ℙ(a|b) ⋅ ℙ(b) for events a and b. In our case, ℙ(b) = ℙ(C i > C thr i ) , i.e., the probability of the event that job τ i,j overruns its threshold execution time.
We now need to show that the term ℙ( ( R( B̄ i (r i,j ), Π, τ̂ i,j ) + r i,j ) mod HP = t ) is the probability that an overrun due to job τ i,j happens at time t under the condition that the overrun happens at all, i.e., it represents ℙ(a|b) . Note that the term R( B̄ i (r i,j ), Π, τ̂ i,j ) + r i,j denotes the finishing time of job τ i,j when using the worst-case steady-state backlogs B̄ and the execution time C thr i . Therefore, under the assumption that the job overruns, it determines the distribution of the time when the overrun actually happens. As this time may be in the next hyperperiod, we use the modulo operation.
The bounding of the value P ov (t) to 1 is safe, as for any summation of events we have p(a ∪ b) ≤ p(a) + p(b) and p(a ∪ b) ≤ 1 , leading to p(a ∪ b) ≤ min (1, p(a) + p(b)) . ◻ Based on the previous three Lemmas, we can conclude this section with the desired worst-case probability P LO→HI (t) of a lo → hi mode switch at time instance 0 ≤ t < HP within any hyperperiod.
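The overrun-timing computation of Lemma 8 can be sketched as follows. This is a deliberately simplified, hypothetical illustration: the finishing time of a hi job running for exactly C thr is taken as release plus steady-state backlog plus C thr, ignoring further preemptions, which the full response-time analysis would also account for; all inputs are invented.

```python
# Simplified sketch of the Lemma 8 computation, with invented inputs and
# preemptions ignored for brevity.

def overrun_probability_at(t, hp, hi_jobs):
    """hi_jobs: list of (release_time, backlog_pmf, c_thr, p_overrun), where
    backlog_pmf is the steady-state level-i backlog PMF at the release and
    p_overrun = P(C_i > C_thr_i). Returns the clipped union bound P_ov(t)."""
    total = 0.0
    for r, backlog_pmf, c_thr, p_over in hi_jobs:
        # P(finishing time mod HP == t | the job overruns) * P(overrun)
        for v, p in enumerate(backlog_pmf):
            if (r + v + c_thr) % hp == t:
                total += p * p_over
    return min(1.0, total)

# One hi job released at time 2: backlog 0 or 1 with probability 0.5 each,
# threshold execution time 3, overrun probability 0.01, hyperperiod 10.
p_ov_5 = overrun_probability_at(5, 10, [(2, [0.5, 0.5], 3, 0.01)])
```

The job finishes its threshold budget at time 5 or 6 with equal probability, so an overrun surfaces at t = 5 with probability 0.5 ⋅ 0.01 = 0.005.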
Theorem 6 P LO→HI (t) is an upper bound on the worst-case probability of a lo → hi mode switch at time instance 0 ≤ t < HP within any hyperperiod, with P LO→HI (t) = [ P dm (t) + P be (t) + P ov (t) ] 1 , where P dm (t), P be (t) and P ov (t) are computed according to Lemmas 6, 7 and 8, respectively. An upper bound on the probability of at least one lo → hi mode switch within a hyperperiod can be determined as P HP LO→HI = [ Σ 0 ≤ t < HP P LO→HI (t) ] 1 . Note, […] 1 indicates the expression is limited to a maximum value of 1.
Proof The Theorem is a simple consequence of the previous Lemmas, as we cannot assume independence of the events within a hyperperiod. ◻ As a simple corollary to the above Theorem, one can compute a lower bound on the expected length of a single LO-criticality mode execution as HP ∕ P HP LO→HI . This result concludes the analysis of the LO-criticality mode; we now analyse the HI-criticality mode in order to determine the remaining quantities necessary for Theorems 4 and 5.
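The combination of the three triggers and the corollary can be illustrated with toy numbers. All per-instant probabilities below are invented; the corollary's bound is taken here as HP divided by the per-hyperperiod switch probability.

```python
# Worked example of the Theorem 6 combination over a toy hyperperiod of
# length 4, with invented per-instant trigger probabilities.
HP = 4
P_dm = [0.0, 1e-9, 0.0, 2e-9]     # deadline-miss trigger, Lemma 6
P_be = [1e-8, 1e-8, 1e-8, 1e-8]   # backlog-exceeds-Bmax trigger, Lemma 7
P_ov = [0.0, 5e-9, 5e-9, 0.0]     # threshold-overrun trigger, Lemma 8

# Per-instant union bound, then a union bound over the hyperperiod, each
# clipped to 1 since the events need not be independent.
P_switch = [min(1.0, P_dm[t] + P_be[t] + P_ov[t]) for t in range(HP)]
P_HP_LO_HI = min(1.0, sum(P_switch))

# Corollary: lower bound on the expected length of one LO-mode execution.
expected_lo_length = HP / P_HP_LO_HI
```

With these numbers the per-hyperperiod switch probability is 5.3e-8, so the system is expected to run undegraded for at least tens of millions of time units between mode switches.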

HI-criticality mode
We are still missing the computation of the maximal duration Δ HI max of a HI-criticality mode execution, as well as the worst-case probability P HI χ of at least one deadline miss of any χ-critical job during any HI-criticality mode started within a hyperperiod, where χ ∈ {LO, HI}.
To this end, we will determine HP different worst-case HI-criticality mode scenarios, one for each starting time 0 ≤ t < HP relative to the beginning of a hyperperiod. In other words, we will investigate HP different HI-criticality mode executions and then use the maximum of their durations as Δ HI max , and the maximum of their deadline miss probabilities to determine upper bounds on the probability that at least one HI or LO job misses its deadline during a single HI-criticality mode execution. These quantities will then be combined with the probability P LO→HI (t) that a lo → hi switch happens at relative starting time t in order to determine P HI χ , i.e., the worst-case probability of at least one deadline miss of any χ-critical job during any HI-criticality mode started within a hyperperiod.
Broadly speaking, hi-criticality mode differs from lo-criticality mode in three ways. First, jobs released in hi-mode have different execution times: lo jobs are released in degraded mode, and hi jobs are no longer restricted from overrunning their execution time threshold C thr i . Second, 'carry-over' jobs, which are released in lo-criticality mode but whose deadlines are after the mode switch, are present in hi-criticality mode and need to be accounted for. Third, the initial system-level backlog is not zero, but depends on the mode switch trigger and time. To account for these differences, we present the following worst-case HI-criticality execution task set. It is created such that it is pessimistic whatever the mode switch trigger may be, and it accounts for both carry-over jobs and jobs released during hi-mode.
The worst-case HI-mode scenario for starting time t is defined as follows: Definition 12 (Worst-Case HI-Criticality Execution) We define HP task sets Π (t) , one for each starting time 0 ≤ t < HP . Each differs from the original task set Π as follows: 1. The phase offsets φ i are implicitly changed such that all jobs are already available at 0 ≤ t < HP , i.e., we allow for negative job indices j. 2. We consider all jobs with release times after t, i.e., j ≥ (t − φ i )∕T i + 1 . They have a known execution time Ĉ i which is not larger than the degraded mode WCET C deg i for LO criticality jobs, and a known execution time Ĉ i = C i for HI criticality jobs. 3. We consider jobs whose release time is smaller than t and whose deadline is larger than t. These included jobs τ i,j with (t − φ i )∕T i + 1 < j < (t + D i − φ i )∕T i + 1 have execution times Ĉ i = C i for both lo and hi criticality jobs; i.e., for lo jobs the execution times are not degraded, and hi jobs may or may not overrun their threshold C thr i . 4. In addition, for each hi-criticality mode starting time t, 0 ≤ t < HP , we introduce the initial backlog B̂ i (t) at time t for priority levels 1 ≤ i ≤ N . If an overrun cannot happen at time t, because there is no hi job released before t whose deadline is after t, the initial backlog is B̂ i (t) = B̄ i (t), where B̄ i (t) denotes an upper bound on the ith priority backlog in the modified lo-criticality system according to Theorem 2. If an overrun can happen at time t, because at least one hi job has its release time before t and its deadline after t, then the initial backlog at time t is B̂ i (t) = min( B̃ i (t), B max ), where B̃ i (t) denotes an upper bound on the ith priority backlog in the modified lo-criticality system according to Theorem 2, but with the added condition that at least one of the released hi jobs whose deadline is after time t has overrun its threshold execution time C thr i . Let us now describe how B̃ i (t) can be computed.
To this end, we solve B̄ + i (t) = ℙ(N) ⋅ B̄ i (t) + ℙ(O) ⋅ B̃ i (t) for B̃ i (t). Here, B̄ i (t) denotes an upper bound on the ith priority backlog in the modified lo-criticality system according to Theorem 2. B̄ + i (t) is also an upper bound on the ith priority backlog according to Theorem 2, but the system used for its computation is slightly modified: it is the lo-criticality system with the difference that hi jobs released before time t whose deadlines are after that time have no condition on whether they overrun their C thr i execution time or not; we use their untrimmed execution times C i in calculating the backlog. The probability that none of these hi jobs overruns its respective C thr i execution time is denoted ℙ(N) , while ℙ(O) = 1 − ℙ(N) is the probability that at least one of these hi jobs overruns. ℙ(N) is obtained directly from the execution times of these hi jobs. Condition 2 includes all jobs which are released during HI-criticality mode, noting that lo jobs are degraded and hi jobs have their full execution times C i . The third condition deals with carry-over jobs from LO- to HI-criticality mode, whose deadline misses have not yet been accounted for in the LO-criticality mode analysis. Note that here the worst case comes from the assumption that all hi jobs may overrun. Finally, condition 4 includes the worst-case backlog at the starting time t: it is the backlog under the condition that an overrun of at least one hi job occurred, but it is also limited by the maximal backlog B max . Simpler constructions of the worst-case task set lead to large overestimations of the length and the deadline miss probabilities of hi-criticality mode.
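The mixture equation can be solved pointwise on the backlog PMFs. The helper below is a sketch under invented toy distributions; its name and signature are assumptions for illustration, not the paper's notation.

```python
# Sketch: recover the overrun-conditioned backlog from the mixture equation
# B+ = P(N)*B_bar + P(O)*B_tilde, solved pointwise on the PMFs.

def conditional_overrun_backlog(b_plus, b_bar, p_no_overrun):
    """Solve for B_tilde given B+, B_bar and P(N); P(O) = 1 - P(N)."""
    p_over = 1.0 - p_no_overrun
    n = max(len(b_plus), len(b_bar))
    bp = b_plus + [0.0] * (n - len(b_plus))
    bb = b_bar + [0.0] * (n - len(b_bar))
    # clamp at 0 to absorb floating-point noise in the subtraction
    return [max(0.0, (bp[k] - p_no_overrun * bb[k]) / p_over) for k in range(n)]

# With P(N) = 0.9, B_bar = [0.8, 0.2] and a true conditional backlog
# [0.1, 0.4, 0.5], the mixture is B+ = [0.73, 0.22, 0.05]; solving recovers it.
b_tilde = conditional_overrun_backlog([0.73, 0.22, 0.05], [0.8, 0.2], 0.9)
```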
Starting from the worst-case scenarios for the HI-mode for each time instant t, 0 ≤ t < HP , we now evaluate each scenario and determine the corresponding worst-case durations as well as the deadline miss probabilities. To do this, we apply the results from Sect. 4 and use the backlog function with initial backlogs B̂ i (t) to compute all relevant backlogs for the task sets from Definition 12. The successive computation of the backlogs stops whenever the system becomes idle for the first time: B i (t s ) = 0 for all priority levels i. This time is an upper bound on the hi → lo switching time. Using the response time analysis, see (10), we can finally determine all jobs that miss their deadline during the HI-mode. Additionally, in the response time analysis for calculating the deadline miss probabilities of hi carry-over jobs, we substitute the execution time Ĉ i of the carry-over job under analysis with the conditional execution time (C i ∣ C i > C thr i ) , in order to get the deadline miss probability under the condition that the hi carry-over job overran its C thr i execution time threshold.

Lemma 9
The first time t idle at which the execution of the task set Π (t) from Definition 12 yields a zero system-level backlog determines an upper bound Δ HI max (t) on the duration of a HI-criticality mode starting at time t relative to the beginning of any hyperperiod of the original task system Π: Δ HI max (t) = t idle − t. Let p i,j (t) denote the probability that a job τ i,j of task set Π (t) from Definition 12 misses its deadline in the time interval [t, t + Δ HI max (t)] . Then P HI χ (t) = [ Σ τ i,j : χ i = χ p i,j (t) ] 1 is an upper bound on the probability that there is at least one deadline miss of a χ-critical job, with χ ∈ {LO, HI} , within a HI-criticality mode execution starting at time t relative to the beginning of any hyperperiod in the original task system Π:

Note, […] 1 indicates the expression is limited to a maximum value of 1.
Proof The main part of the proof is to show that the task set Π (t) indeed defines a worst-case scenario in terms of duration and deadline miss probabilities when the HI-criticality mode starts at time t relative to the beginning of any hyperperiod. Note that the second condition in Definition 12 ensures that all jobs which are released during a HI-criticality mode are, in the worst case, included in the HI-criticality task set as well. Moreover, we consider the exact execution times for all of these jobs, namely the degraded execution times Ĉ i , which are not longer than C deg i , for LO criticality jobs, and Ĉ i = C i for HI criticality jobs. The third condition adds the worst-case carry-over jobs from LO- to HI-criticality mode whose deadline misses have not yet been accounted for in the LO-mode analysis. All jobs that missed their deadline before the lo → hi mode switch have already been considered in the LO-mode analysis, but their possible backlog at t will be considered. Therefore, we just need to explicitly include jobs whose release time is before and whose deadline is after the lo → hi mode switch. The corresponding execution times are taken as worst case as well: for each carry-over hi job individually, when calculating its deadline miss probability we assume it overruns its execution time threshold. Finally, we look at the worst-case backlog at the starting time t. It encompasses the remaining execution times of jobs that were released before t but have not yet finished. Due to the triggering condition of a mode switch, we assume the worst case that at least one hi job has overrun its C thr i execution time. Also according to the triggering conditions, the backlog is never larger than B max for any priority level. Note that the backlog also contains jobs whose deadlines are within the HI-mode, i.e., the carry-over jobs that have been explicitly included as tasks.
In order to determine the upper bound on the deadline miss probability p^HI_χ(t) of any χ-critical job, we again do not assume independence of individual miss events and use the sum of the corresponding probabilities as an upper bound. ◻ As a result of this Lemma, we can determine the desired quantities, namely the maximal duration and the upper bound on deadline misses, for each time point t relative to the start of a hyperperiod. The computations are based on simple simulations of HP executions of worst-case HI-criticality mode scenarios. The simulation times are finite as long as there exists a finite time in Π(t) at which the system becomes idle for the first time. The following Lemma leads to a necessary and sufficient condition.
Lemma 10 A set of finite bounds Δ HI max (t) on the duration of HI-criticality modes exists if and only if the maximal system utilization in hi-criticality mode in the original system is less than one.
Proof Let us look at the modified task set Π(t) starting at time t. If the maximal system utilization in hi-criticality mode is less than one, then the maximal system-level backlog at time t + (n + 1) ⋅ HP is strictly smaller than the maximal system-level backlog at time t + n ⋅ HP for n > 1, because the arriving jobs in the time interval [t + n ⋅ HP, t + (n + 1) ⋅ HP) are identical for all n > 1 and the additional accumulated computation time of all arriving jobs is less than the interval's length HP. Therefore, a time instance will exist when the maximal system-level backlog is zero and the system is idle. If the maximal system utilization in hi-criticality mode is greater than or equal to one, then the maximal system-level backlog at time t + (n + 1) ⋅ HP could be equal to or greater than the maximal system-level backlog at time t + n ⋅ HP. Therefore, in the worst case, the system-level backlog never reaches zero and the hi-criticality mode could last forever. ◻ Based on these results, we can now aggregate the computed quantities in order to determine the maximal duration Δ HI max of any HI-criticality mode execution, as well as the worst-case probability p^HI_χ of at least one deadline miss of any χ-critical job during any HI-criticality mode started within a hyperperiod, where χ ∈ {LO, HI}.
Theorem 7 Δ HI max is an upper bound on the maximal duration of any HI-criticality mode in the original task system Π, where

Δ HI max = max_{t ∈ [0, HP)} Δ HI max (t).

p^HI_χ is a bound on the worst-case probability of at least one deadline miss of any χ-critical job with χ ∈ {LO, HI} during any HI-criticality mode started within a hyperperiod in the original task system, where

p^HI_χ = [ Σ_{t ∈ [0, HP)} P LO→HI (t) ⋅ p^HI_χ(t) ] 1 ,

with P LO→HI (t) as determined in Theorem 6. Note, [⋅] 1 indicates the expression is limited to a maximum value of 1.
Proof According to Lemma 9, Δ HI max (t) is an upper bound on the duration of a HI-criticality mode starting at relative time t within a hyperperiod. Clearly, the maximum over all relative time instances provides the maximal duration for any time instance. The probability of a deadline miss within a HI-mode execution is the probability of the union of deadline misses at any time instance within the hyperperiod. As we cannot assume independence, we upper bound this probability by the sum of the individual probabilities. The probability of a deadline miss within a HI-mode starting at relative time t is clearly the probability that a mode switch happens, i.e., P LO→HI (t), times the probability that a deadline miss happens within the HI-mode, i.e., p^HI_χ(t). ◻ This concludes the schedulability analysis of probabilistic Mixed-Criticality Systems according to Definition 5, as all required quantities for Theorems 4 and 5 have been determined in Sects. 5.3 and 5.4.
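As a concrete illustration, the aggregation in Theorem 7 can be sketched in a few lines. The function below and its argument names are our own, assuming the per-offset switch probabilities, conditional miss bounds, and duration bounds have been sampled at each offset t of the hyperperiod:

```python
def aggregate_hi_mode(p_switch, p_miss, durations):
    """Aggregate per-offset HI-mode quantities over one hyperperiod.

    p_switch[t]  : probability of a lo->hi switch at offset t (Theorem 6)
    p_miss[t]    : bound on the miss probability within the HI-mode started at t
    durations[t] : upper bound on the duration of the HI-mode started at t
    """
    delta_hi_max = max(durations)
    # No independence is assumed: sum the per-offset probabilities and
    # cap the result at 1 (the [.]_1 operator in the text).
    p_hi = min(1.0, sum(ps * pm for ps, pm in zip(p_switch, p_miss)))
    return delta_hi_max, p_hi
```

The cap at 1 keeps the union bound a valid probability even when the summed terms exceed one.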
Of course, the tightness of the analysis can be improved through various approaches. Some of them as well as limitations of the described analysis are noted in the conclusion.

Experimental results
In order to illustrate our probabilistic Mixed Criticality (pMC) schedulability analysis, this section first presents a sample task-set, inspired by applications from the avionics industry. Then, experiments on randomly generated task-sets are used to compare pMC scheduling with other schemes: a probabilistic but non-Mixed Criticality scheme, 'Probabilistic Deadline Monotonic Priority Ordering' (pDMPO); the deterministic 'Adaptive Mixed Criticality' scheme (AMC); and a deterministic non-MC 'Deadline Monotonic Priority Ordering' (DMPO) scheme. These are all listed in Table 2 and described in detail below. For the experiments, we generated randomized task-sets with all but one parameter the same, in order to see the effect this one parameter has. Three experiments are conducted: the first shows the impact of the system utilization, the second varies the probability ℙ(C i > C thr i ) that each hi task overruns its C thr i execution time threshold, and the third visualizes the impact of the maximal system-level backlog. In general, we show that pMC dominates all other schemes, except in situations when hi-criticality mode is entered too often. In these cases, we find that there is too much degradation of lo jobs, and scheduling using the probabilistic but non-Mixed Criticality pDMPO yields better results.
Baseline schemes To evaluate pMC scheduling, we have used three deterministic and one probabilistic baseline scheme, as listed in Table 2. All schemes are based on fixed-priority preemptive scheduling. The first deterministic scheme is a non-Mixed Criticality one, Deadline Monotonic Priority Ordering (DMPO). As the name suggests, tasks are prioritized only by their deadlines, and scheduled according to their C max i WCETs. The next scheme is Adaptive Mixed Criticality (AMC). The scheme features two modes of operation. The system starts in lo-criticality mode, where hi tasks are scheduled according to their C thr i threshold execution times. If any hi job overruns this value, a switch to hi-criticality mode happens, where all lo tasks are released in degraded mode. The scheme does not quantify the duration of these two modes, only their schedulability.
As a deterministic baseline scheme we introduce the UB-HL bound. The bound is a necessary test for all fixed-priority preemptive MC schemes, and as such it provides an upper bound on the performance of all fixed-priority preemptive deterministic MC schemes.
Finally, the Probabilistic Deadline Monotonic Priority Ordering (pDMPO) scheme represents the analysis as introduced by Díaz et al. (2002). In pDMPO, tasks are given priorities based on their deadlines, they are scheduled using their complete C i execution times, and there is only one mode of operation. The scheme can be viewed as a border case of pMC, where hi-criticality mode is never entered.
Task Execution Times To model task execution times C i , Weibull distributions were used, with the condition that they do not take values greater than the task's WCET C max i . These distributions have been used in related work for modeling the distribution of long but unlikely execution times (Cucu-Grosjean et al. 2012).
Weibull distributions are functions of the shape parameter k and the scale parameter λ. To generate an execution time distribution, we first choose k uniformly from [1.5, 3]. Then, the parameter λ is computed the following way. For lo tasks, λ is computed such that the cumulative distribution function at the task's WCET C max i is 1 − 10 −8 . Similarly, for hi tasks, we choose λ so that the cumulative distribution function at the task's execution time threshold C thr i is 1 − 10 −8 , unless stated otherwise. This is how we set the probability that a hi task overruns its threshold execution time. Finally, all values of the probability density function above C max i are set to 0, and the rest of the distribution is normalized. This way, we have a valid execution time modeled by a Weibull distribution, with the condition that it never exceeds the task's WCET C max i , and for which the probability that a hi task overruns its execution time threshold C thr i is 10 −8 .
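A minimal sketch of this calibration, assuming the standard two-parameter Weibull CDF F(x) = 1 − exp(−(x/λ)^k); the function names are ours:

```python
import math

def weibull_scale(k, x, p_exceed=1e-8):
    """Scale lambda such that P(C > x) = p_exceed for shape k,
    i.e. the CDF at x equals 1 - p_exceed."""
    # Survival function: P(C > x) = exp(-(x / lam)**k)
    return x / (-math.log(p_exceed)) ** (1.0 / k)

def weibull_cdf(x, k, lam):
    """CDF of the (untruncated) two-parameter Weibull distribution."""
    return 1.0 - math.exp(-((x / lam) ** k))
```

Truncation at C max i then amounts to discarding the probability mass above C max i and renormalizing the remainder, as described above.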

Sample system
Here we introduce a task-set modelling a sample system, to which we applied our proposed schedulability analysis. We explored the task-set, first by varying the execution times of all tasks, and then by varying the deadlines. This was done to illustrate probabilistic Mixed Criticality scheduling. We present the three schedulability values HI (1h) , LO (1h) , and deg , and we also show the expected duration of lo-criticality mode Δ LO exp , and the maximal duration of hi-criticality mode Δ HI max . The system's lo and hi tasks are inspired by the ROSACE and FMS (Durrieu et al. 2014) applications, respectively. The hi tasks are inspired by an industrial implementation of the flight management system (FMS). This application consists of one task which reads sensor data, and four tasks that compute the location of the aircraft. For lo tasks, the open-source avionics benchmark ROSACE was modeled. It is made up of three tasks which simulate the pilot's instructions, and eight tasks implementing a controller.
Setup Table 3 lists the sample system's parameters: the tasks' periods and execution time values, namely worst-case execution times (WCETs) C max i , thresholds for hi tasks C thr i , and degraded WCETs C deg i for lo tasks. Values relating to execution times are a function of the parameter f c , which we vary from 0.05 to 7.5 in 0.05 steps. Note that for hi tasks, C max i values are 2.5 times larger than the corresponding C thr i , while for lo tasks the worst-case execution time in degraded mode is C deg i = 0.33 ⋅ C max i , rounded up to the nearest integer. The deadline of each task has been constrained by a factor f d , D i = T i ⋅ f d , where f d is varied from 0.005 to 1 in steps of 0.005. Next, the initial phases of all tasks are 0, while tasks' priority assignments are given in the table. Note that we use deadline monotonic priority assignment.
We model probabilistic execution times of tasks with Weibull distributions, as described at the beginning of this section. The probability that a hi task executes for longer than its threshold execution time C thr i is ℙ(C i > C thr i ) = 10 −8 , for every hi task. For the maximal system-level backlog, we used B max = 5 ms. The hyperperiod lasts for 60 ms, and within one hyperperiod there are 500 lo jobs and 19 hi jobs. Regardless of the parameter f c , the utilization of lo tasks is 5.73 times higher than the utilization of hi tasks. In Fig. 2, the two left plots show results when deadlines are fixed ( f d = 1 ) but the execution time values from Table 3 are varied with f c ∈ (0, 7.5] . The two right plots of Fig. 2 show results when deadlines are varied, f d ∈ (0, 1] , but all execution time values are fixed ( f c = 2).
Results As expected, the deadline miss probability per hour for both hi and lo jobs, HI (1h) and LO (1h) , increases as the utilization increases, or as the deadlines become more constrained. In this example, LO (1h) is larger than HI (1h) , even though hi criticality tasks have the lowest priority. This is mainly because there are more lo than hi jobs, i.e. 500 versus 19 jobs per hyperperiod. As for the probability that a lo job is released in degraded mode, deg , we notice it follows a similar trend. In this experiment, the value never goes to zero, because there is always a non-zero probability a lo → hi criticality mode switch occurs.
In the bottom right plot of Fig. 2, the expected duration of lo-mode is shown to resemble the inverse of deg . Except when the deadlines are very constrained ( f d < 0.12 ), lo-criticality mode lasts for an expected Δ LO exp = 88 h before a trigger event occurs. The maximal duration of hi-criticality mode Δ HI max depends only on the system utilization. This is shown in the bottom left plot as a function of f c . The value is 1.1 ms for f c = 2 , and 21.7 ms for f c = 7.5 . Both values are smaller than Δ LO exp by orders of magnitude.

Randomized systems
Now we continue and present three further experiments. They demonstrate the impact of three design parameters on schedulability: the system utilization, the probability that a hi task overruns its execution time threshold C thr i , and the choice of the maximal system-level backlog.
More specifically, the first experiment shows whether task-sets of different system utilizations are ( HI , LO , deg )-schedulable using probabilistic Mixed Criticality (pMC) scheduling, as well as other scheduling schemes.
The second and third experiments compare pMC with the probabilistic but non-MC scheme pDMPO. They demonstrate that pMC leads to improved schedulability, except when hi-criticality mode is entered too often, either because of the first or the third mode switch trigger, respectively.
For all three experiments, tasks were randomly generated as described below.
Task-Set Generation For each of the three experiments presented, the UUnifast-Discard algorithm was used to randomly generate task-sets, with the following parameters, which we found reasonable.
- First, periods and maximal execution times in lo-criticality mode ( C thr i values for hi tasks and C max i for lo tasks) were generated by the UUnifast algorithm. Periods were chosen from {50 μs, 100 μs, 200 μs, 250 μs, 500 μs, 1000 μs}.
- All initial phases were set to 0, and tasks' deadlines are equal to their periods.
- Then, every task's criticality is assigned to be hi with a probability of 0.5 (i.e. parameter CP = 0.5).
- For hi tasks, the WCET C max i is a fixed multiplier of the corresponding threshold C thr i , C max i = 1.5 ⋅ C thr i (i.e. parameter CF = 1.5 ). For lo tasks, their degraded WCET is set to be a third of their actual WCET, C deg i = 0.33 ⋅ C max i .
- To model task execution times C i , we have used Weibull distributions, as explained at the beginning of this section. The probability that each hi job τ i,j overruns its execution time threshold is ℙ(C i > C thr i ) = 10 −8 , unless stated otherwise.
- The number of tasks per task-set is 60.
- Finally, the maximum backlog B max is 500 μs, unless stated otherwise.
For the system utilization and other details, we refer the reader to the setup section of each experiment.
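The utilization-drawing step of the generation above can be sketched with the classic UUnifast recurrence; this is a simplified version without the 'discard' resampling used to reject infeasible per-task utilizations:

```python
import random

def uunifast(n, total_util, rng=random):
    """UUnifast: draw n non-negative task utilizations summing to total_util."""
    utils = []
    remaining = total_util
    for i in range(1, n):
        # Sample the utilization still left for the n - i remaining tasks.
        next_remaining = remaining * rng.random() ** (1.0 / (n - i))
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils
```

Each utilization is then combined with a period drawn from the set above to obtain the task's execution time budget.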
Priority Assignment For the probabilistic scheduling schemes pMC and pDMPO, we have used deadline monotonic priority assignment. Note that Maxim et al. (2011) show that this assignment is in general not optimal for probabilistic systems; they suggest instead Audsley's priority assignment algorithm. For the deterministic scheduling schemes, AMC uses Audsley's priority assignment, which is optimal for this scheme, while DMPO by definition uses deadline monotonic priorities.

'Utilization' experiment
In this first experiment, we examine the schedulability of systems with various system utilizations. More precisely, we check whether randomly generated systems of utilization 0.1 through 2.0 are ( HI , LO , deg ) = (10 −8 , 10 −6 , 10 −5 )-schedulable under probabilistic Mixed Criticality (pMC) scheduling, under a probabilistic but non-MC scheme (pDMPO), as well as under the deterministic baseline schemes DMPO, AMC, and UB-HL. We also examine the values relevant to pMC scheduling as functions of maximum system utilization: the probability of deadline miss per hour for hi or lo jobs, HI (1h) and LO (1h) , and the probability of degraded lo jobs deg .
Setup We ranged the system utilization from 0.1 to 2.0 in 0.1 steps, and for each step we created 1000 task-sets according to the previously given description. To reiterate, the following parameters were used: the ratio between the WCET C max i and execution time threshold C thr i for every hi task is CF = C max i ∕C thr i = 1.5 , the probability each task is assigned hi criticality is CP = 0.5 , the probability a hi job overruns its execution time threshold is ℙ(C i > C thr i ) = 10 −8 , the degradation of lo tasks is C deg i = 0.33 ⋅ C max i , there are 60 tasks in each task-set, and the maximal system-level backlog is B max = 500 μs.
Tasks' execution times C i depend on the utilization and task-set in question. We found the mean of the execution times to be between 2.84 and 16.38 μ s, with the maximal execution time C max i among all tasks in a task-set being between 21 and 387 μs.
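The schedulability test applied throughout these experiments reduces to three threshold comparisons. A sketch with our own naming, using the limits from this experiment as defaults:

```python
def is_schedulable(miss_hi_1h, miss_lo_1h, deg_prob,
                   limits=(1e-8, 1e-6, 1e-5)):
    """(HI, LO, deg)-schedulability: the per-hour deadline miss probability
    of hi jobs, of lo jobs, and the probability a lo job is released in
    degraded mode must each stay below its respective limit."""
    lim_hi, lim_lo, lim_deg = limits
    return miss_hi_1h <= lim_hi and miss_lo_1h <= lim_lo and deg_prob <= lim_deg
```

Setting the third limit to 1 reproduces the 'deg ignored' variant evaluated in the later experiments.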
Results Figure 3 presents the most important result of our experiments. For task-sets of different system utilizations, the (10 −8 , 10 −6 , 10 −5 )-schedulability under various scheduling schemes is given in Fig. 3 Top. To better understand how utilization impacts pMC schedulability, Fig. 3 Middle and Bottom show statistics on the HI (1h) , LO (1h) and deg metrics. The box-plots visualize the 10th, 25th, 50th, 75th, and 90th percentiles of each metric.
Regarding the three deterministic schemes, we see that they perform similarly as in related work. Remember that for deterministic schemes, a task-set is either 'completely' schedulable or it is not, as there is no notion of probabilities.
In Fig. 3 Top, we can see that deadline monotonic priority ordering (DMPO) has the lowest schedulability among all tested schemes. This is because DMPO attempts to schedule a task-set using only WCET ( C max i ) values. The adaptive Mixed Criticality (AMC) scheme performs better, as it performs a lo → hi mode switch every time hi jobs need more execution time. Still, the schedulability of deterministic fixed priority preemptive schemes is upper-bounded by the UB-HL bound.
For the probabilistic schemes pDMPO and pMC, we can confirm that they outperform the deterministic schemes. Probabilistic schemes allow a system with a utilization greater than one to be schedulable, because they take into account the low probability that a long execution time is observed. Let us first focus on probabilistic deadline monotonic priority ordering (pDMPO). We understand from Díaz et al. (2002) that deadline misses under pDMPO happen when the backlog is large, i.e. when one or more jobs take a long time to execute. The bigger the utilization is, the likelier it is that the backlog is large. As for probabilistic Mixed Criticality (pMC), it features three lo → hi mode switch triggers. All three triggers are indicators that the backlog is large: the first trigger activates when a hi job executes for a long time, the second trigger indicates that a hi job missed its deadline due to a large backlog blocking its execution, and the third trigger explicitly notes that the system-level backlog is too large. After detecting these high-backlog situations, the system under pMC transitions to hi-criticality mode, where lo jobs are degraded and thus the backlog is decreased. This ensures that the deadline miss probabilities of both lo and hi tasks are reduced, at the cost of having some lo jobs released in degraded mode. Most importantly, this is demonstrated in Fig. 3 Top, where pMC outperforms pDMPO as well as all other schemes. Furthermore, in Fig. 3 Middle, we see how both HI (1h) and LO (1h) increase gradually with the increase of utilization. The small difference between HI (1h) and LO (1h) comes from the fact that the system switches to hi-criticality mode whenever a hi job overruns its C thr i threshold, which helps hi jobs meet their deadlines. Finally, Fig. 3 Bottom shows the probability that a lo job is released with degradation, which increases slightly with utilization.
This slight increase is a sign of being in hi-criticality mode more often, and this quantifies the cost of probabilistic Mixed Criticality scheduling.
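The three lo → hi mode-switch triggers discussed above can be condensed into a single predicate; this is our own sketch of the triggering logic, not the authors' implementation:

```python
def switch_to_hi(hi_job_exec, c_thr, hi_deadline_missed, backlog, b_max):
    """lo -> hi mode switch happens if (1) a hi job overruns its execution
    time threshold C_thr, (2) a hi job misses its deadline, or (3) the
    system-level backlog exceeds its bound B_max."""
    return hi_job_exec > c_thr or hi_deadline_missed or backlog > b_max
```

All three conditions are observable at run time, which is what makes the scheme implementable without knowing the execution time distributions online.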

'Execution Threshold' experiment
In this experiment, we varied a design parameter relating to tasks' execution times C i : the probability that a hi job overruns its execution time threshold C thr i . We then inspected how this impacts schedulability under probabilistic Mixed Criticality (pMC) and the probabilistic non-Mixed Criticality pDMPO scheme. Because we used a utilization of 1.4, deterministic schemes could not schedule any task-sets. The probability ℙ(C i > C thr i ) that each hi job τ i,j overruns its execution time threshold is varied from 5 ⋅ 10 −12 to 10 −4 . Ultimately, this experiment demonstrates that it makes sense to use probabilistic Mixed Criticality scheduling if hi-criticality mode is not entered too often, and the importance of the deg metric is justified.
Setup A total of 16 configurations, each with 1000 task-sets, were generated for this experiment. The configurations have the same parameters, except for the probability ℙ(C i > C thr i ) that each hi job τ i,j overruns its execution time threshold. The following values for ℙ(C i > C thr i ) were used: {5 ⋅ 10 −12 , 10 −11 , 5 ⋅ 10 −11 , ..., 10 −4 } . Besides this, the system utilization for all configurations is 1.4, while all other parameters are according to the description at the beginning of Sect. 6.2.
Even though ℙ(C i > C thr i ) is varied by 8 orders of magnitude, we found that the mean execution time per configuration changes little: it is between 8.69 and 8.70 μs. Among all tasks in every task-set, the worst-case execution time C max i is 287 μs.
(Fig. 4: schedulability under pMC for various probabilities that a hi job overruns its execution time threshold ℙ(C i > C thr i ) (Top), and the impact this value has on LO (1h) , HI (1h) (Middle) and deg (Bottom).)
Results In Fig. 4 Top, let us first focus on comparing pDMPO and pMC when deg = 1 . In this case, when the deg metric is ignored, we see that more task-sets are always schedulable under pMC than under pDMPO. The reasons pMC scheduling is better in this case are the same as in the 'utilization' experiment: by switching to hi-criticality mode after certain triggering events, the system under pMC scheduling reduces the backlog in these situations, which ultimately makes deadline misses less likely. Now, let us examine pMC with a realistic deg bound, i.e. deg = 10 −5 . As shown in the top figure, it is clear that there exists a limit after which pMC scheduling is not useful at all, as it leads to too much degradation. This can be understood by viewing Fig. 4 Bottom, where we see the cost of switching to hi-mode. At one extreme, when ℙ(C i > C thr i ) = 10 −4 , the system switches to hi-mode often, on average once every 48.93 ms (not shown in figure). Then, an average ratio of 0.046 of lo jobs are released in degraded mode. In a moderate case, for ℙ(C i > C thr i ) = 10 −8 , hi jobs overrun their execution time threshold C thr i less often, and lo-mode lasts on average 8.34 min. Here, an average ratio of 4.19 ⋅ 10 −6 of lo jobs are degraded. Finally, at the other extreme, when ℙ(C i > C thr i ) = 5 ⋅ 10 −12 , lo-mode lasts for 278.00 h on average, and only a tiny fraction of 2.09 ⋅ 10 −9 of lo jobs are released in degraded mode. For many realistic applications, there exists a limit on the degradation which can be tolerated before a complete loss of function happens. Thus we argue that this experiment demonstrates why the deg metric is crucial for probabilistic Mixed Criticality scheduling.
Finally, let us comment on LO (1h) and HI (1h) , found in Fig. 4 Middle. These are similar, except that HI (1h) is larger for higher ℙ(C i > C thr i ) values. We have found that this increase in HI (1h) appears as a result of pessimistic assumptions introduced in Definition 12. We comment more on this pessimism in the next experiment.

'Maximal Backlog' experiment
In the final experiment on randomized systems, the maximum system-level backlog B max was varied. This affects how often hi-criticality mode is entered, while it has no effect on the lo-criticality mode. When the occurrence of hi-criticality mode is artificially increased, we can see the pessimism in the analysis of that mode, which we found to be mostly introduced by pessimistic assumptions on the initial conditions in hi-mode, as per Definition 12. As in the previous experiment, we tested the (10 −8 , 10 −6 , 10 −5 )-schedulability of task-sets under pMC and pDMPO, and the (10 −8 , 10 −6 , 1)-schedulability under pMC scheduling.
Setup For this experiment, we first generated 1000 task-sets with a system utilization of 1.2. This high utilization guarantees that no deterministic scheme can schedule the task-sets. All parameters except for the maximum system-level backlog are according to the description at the beginning of this section. Then, the maximum system-level backlog B max was varied from 40 to 600 μs, and all of the 1000 task-sets were analyzed for every B max value. Each generated task-set has 60 tasks, the mean execution time among all tasks in every task-set is 10.61 μs, while the maximum execution time overall is 255 μs.
Results Figure 5 visualizes the results of this experiment. As in the previous experiment, we conducted a (10 −8 , 10 −6 , 10 −5 )-schedulability test under pMC and pDMPO, as well as a schedulability test under pMC when the deg metric is ignored (i.e. deg = 1 ). The box-plots visualize the 10th, 25th, 50th, 75th, and 90th percentiles of each evaluated metric. By definition, the maximum system-level backlog B max does not impact scheduling under pDMPO at all, so the schedulability under this scheme is constant.
Regarding the impact on pMC scheduling, specifically on HI (1h) and on LO (1h) , we see two cases. On the one hand, when the maximum system-level backlog B max is sufficiently large, i.e. ≥ 200 μ s, we see that it has a negligible impact on HI (1h) and LO (1h) values. On the other hand, when a small B max causes hi-mode to be entered often, HI (1h) and LO (1h) both increase. Ideally, how often hi-mode is entered should not impact HI (1h) and LO (1h) . The increase is a result of pessimism introduced in point 4 of Definition 12. As the reader recalls, there we make a pessimistic assumption that all hi jobs are overrunning their execution time thresholds C thr i at the time of the mode switch. This pessimistic assumption is mainly introduced to reduce the number of cases under which hi-criticality mode is analyzed.
The impact the backlog B max has on deg is straightforward: as hi-mode is entered more often, deg increases. Because of this increase, we find that few task-sets are (10 −8 , 10 −6 , 10 −5 )-schedulable under pMC for B max values less than 200 μs. We can therefore conclude that the pessimism of the hi-criticality mode analysis does not play a major role in the schedulability analysis of task-sets under realistic requirements for the maximal permitted degradation of lo jobs deg . Finally, we observe again the main result from the 'execution threshold' experiment: probabilistic Mixed Criticality (pMC) scheduling is better than the non-MC scheme pDMPO, except when hi-criticality mode is entered too often.

Conclusion
Modeling tasks' execution times with random variables in Vestal's mixed-criticality model allows for a schedulability analysis based on the 'probability of deadline miss per hour'. We presented a dual-criticality system which operates in either lo- or hi-criticality mode. In lo-criticality mode, both lo and hi jobs run normally, but a certain optimism towards hi jobs exists: they are required not to overrun their C thr i execution time threshold, a value analogous to the optimistic WCET in Vestal's model. hi-criticality mode is entered when a violation of this optimistic condition is detected, or when one of the following two events happens: a hi job misses its deadline, or the system-level backlog exceeds its maximal value. In this mode, lo jobs are degraded by having a shorter time budget for execution, so that hi jobs have more resources available. This mode lasts until the system becomes idle.
To characterize such a system, we first defined ( HI , LO , deg )-schedulability, which quantifies the soft schedulability of a probabilistic mixed-criticality system. The schedulability conditions determine whether the probability of deadline miss per hour for hi jobs, the probability of deadline miss per hour for lo jobs, and the probability that a lo job is started in its degraded mode are less than the given ( HI , LO , deg ) limits.
Then, we presented an analysis approach. This was done by splitting the system into two, the lo- and the hi-criticality mode systems, and combining the results. On one hand, a steady-state analysis was carried out for lo-criticality mode, in which the system is expected to stay for a long time. This enabled us to pessimistically bound the deadline miss probability of each job, which we then used to find the probability that any job misses its deadline while in lo-mode within a certain time period. On the other hand, a simulation of the transient hi-criticality mode was used to bound its duration, and to obtain the probability of deadline miss of jobs inside it. This, together with the probability that a lo → hi mode switch happens, enabled us to find the probability that any job misses its deadline while in hi-mode within a certain time period.
Finally, simulation results illustrate all of the metrics on a sample task-set, and experiments involving schedulability analysis show how various design choices impact schedulability. Here, we show how probabilistic Mixed Criticality scheduling compares to other schemes, and make a clear case that using pMC makes sense for most cases, except when the amount of lo job degradation is too high.
Limitations and Future Work Our analysis applies to fixed-priority preemptive scheduling, but it could be extended to dynamic scheduling schemes as well. Probabilistic response-time calculus already exists for dynamic schemes (Díaz et al. 2002), and dynamic-priority Mixed-Criticality schemes are found to be relevant (Guo et al. 2015).
Regarding our proposed scheme, its main limitation is the pessimism of the analysis of hi-criticality mode. This pessimism is due to the fact that we have a single analysis whatever the reason for making the lo → hi transition was.
In future work, it would be possible to do a less pessimistic analysis of hi-mode by deconstructing the analysis into three sub-classes, one for each lo → hi mode switch reason. For example, if a mode switch was caused by a maximal system-level backlog exceedance, the initial backlog would surely be exactly B max . If the mode switch was not caused by an overrunning job, there would be no need to assume that carry-over jobs of hi criticality surely overrun. If the mode switch was caused by an overrunning hi job, one could introduce cases depending on which job caused the mode switch.

The pessimism of the analysis for the lo-criticality mode could be reduced as well, but arguably this would bear less fruit. One idea here is to estimate the percentage of time a system spends in lo-criticality mode. In calculating (T) in our work, we assumed the system is in lo-mode all the time. Replacing this assumption with a better estimate would bring improvement, however only for systems which spend a non-negligible amount of time in hi-criticality mode, which is usually not assumed to be the case. Another idea is to use a less pessimistic model of hi tasks in lo-mode, by modeling their executions with conditional 'truncated' execution times as is done in several related works (Draskovic et al. 2016;Maxim et al. 2017). However, this would require performing two lo-mode analyses: the one presented here would be used to calculate initial conditions in hi-mode, and the other with the less pessimistic model of hi tasks would be used to calculate deadline miss probabilities in lo-criticality mode.

Appendix: Computational complexity of the analysis
Here we comment on the computational complexity of our proposed probabilistic Mixed Criticality (pMC) schedulability analysis. Algorithm A presents a high-level recapitulation of the analysis, where all pseudo-commands are as explained in Sect. 5.
The computational complexity of the analysis is O(n 2 ⋅ HP ⋅ c log c) , where n is the number of jobs in one hyperperiod, HP is the length of one hyperperiod, and c is the length or number of values in the execution time distributions.
In the analysis, the most complex atomic command is the convolution. When using FFT, one convolution has a cost of O(c log c).
Let us now comment on the complexity of the analysis in detail. According to Sect. 4.1, the steady state backlog is approximated by B_i(k · HP), where k is the smallest natural number satisfying inequality (9). To calculate B_i(k · HP), a convolution is needed for every one of the n · k jobs, thus the cost of line 2 is O(n · k · c log c). Similarly, according to point 4 of Definition 12, backlog B_i(t) is defined as a combination of two steady state backlogs, and the cost of line 14 is also O(n · k · c log c). The number k depends on the required numerical precision in inequality (9), but we have found it to be of the same order of magnitude as n, i.e., k ∼ n.
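The fixed-point iteration behind this approximation can be sketched as follows. This is a simplified, hypothetical model (all function names are ours): a single processor draining the backlog one unit per time unit between releases, a fixed-size truncated backlog distribution, and a plain tolerance check standing in for inequality (9).

```python
import numpy as np

def step_hyperperiod(backlog, releases, hp, size):
    """Push one hyperperiod of jobs through the backlog distribution.

    `releases` is a list of (release_time, exec_pmf) sorted by time;
    between releases the processor drains one backlog unit per time
    unit. Drain amounts are assumed smaller than `size`.
    """
    def drain(b, d):
        out = np.zeros(size)
        out[0] = b[: d + 1].sum()          # mass at or below d pools at 0
        out[1 : size - d] = b[d + 1 :]
        return out

    t = 0
    for rel, pmf in releases:
        backlog = drain(backlog, rel - t)              # serve until the release
        backlog = np.convolve(backlog, pmf)[:size]     # add the job's demand
        t = rel
    return drain(backlog, hp - t)          # serve until the hyperperiod ends

def steady_state_backlog(releases, hp, size=64, eps=1e-9, k_max=1000):
    """Iterate the backlog over hyperperiods until it converges."""
    b = np.zeros(size)
    b[0] = 1.0                             # start with an empty backlog
    for k in range(1, k_max + 1):
        nb = step_hyperperiod(b, releases, hp, size)
        if np.abs(nb - b).sum() < eps:     # stand-in for inequality (9)
            return nb, k
        b = nb
    return b, k_max
```

For a stable task set (expected utilization below one), the iteration converges to a geometric-tailed steady-state backlog distribution; for utilizations close to one, k grows quickly, which matches the observation that k depends on the required precision.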
To compute deadline miss probabilities in lines 4 and 17, response time analysis is used as defined by Algorithm 1. Line 6 is based on response time analysis as well (Lemma 8). To find the response time of a job, we need as many convolutions as there are jobs preempting the said job. Thus, the cost of these lines is O(n · c log c).

Real-Time Systems (2021) 57:397-442

Finally, when analyzing hi-mode, the maximal duration of the mode, Δ^HI_max, plays a role. When calculating Δ^HI_max in line 15, and when computing deadline miss probabilities of jobs in lines 16 and 17, we need to take into account all jobs that are released in hi-mode. Regardless of when hi-mode is entered or exited, the number of these jobs is at most n · Δ^HI_max/HP. For schedulable systems, we found Δ^HI_max to be of the same order of magnitude as HP, i.e., Δ^HI_max ∼ HP and Δ^HI_max/HP ∼ 1.
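A much-simplified sketch of this step (hypothetical function name; it assumes all preempting jobs are released together with the job under analysis, so the response time is just the sum of all execution-time distributions, one convolution per preempting job):

```python
import numpy as np

def deadline_miss_prob(exec_pmfs, deadline):
    """P(deadline miss) for the lowest-priority job, simplified sketch.

    Convolves the job's execution time with those of all preempting
    jobs -- one convolution per job, matching the O(n * c log c)
    count in the text -- then sums the mass beyond the deadline.
    """
    r = np.array([1.0])                    # response time 0 w.p. 1
    for pmf in exec_pmfs:
        r = np.convolve(r, pmf)            # add one job's demand
    return r[deadline + 1 :].sum()         # mass strictly beyond the deadline

pmf = np.array([0.0, 0.5, 0.5])            # each job runs 1 or 2 units
p_miss = deadline_miss_prob([pmf, pmf], deadline=3)  # p_miss == 0.25
```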
Even though the computational complexity of this scheme is high, we find it to be acceptable. The analysis only needs to be done offline, while designing the system. Furthermore, parts of Algorithm A are parallelizable. Each iteration of the for-loop in line 13 can be run independently, meaning that the analysis of hi-mode can be done in parallel on HP processes, each of complexity O(n² · c log c). Consequently, this would be the computational complexity of the whole algorithm, if we had unlimited resources.
Runtimes: For Sect. 6.2, we ran the analysis of each task-set on a single core of a Dual Deca-Core Intel Xeon E5-2690 v2, running at 3.00 GHz. As defined in the task-set generation, all task-sets have HP = 1000 and c ∼ 1000. In Table 4, we noted the average analysis runtimes for task-sets of different utilizations and numbers of jobs.

Appendix: Notation
See Table 5.

Δ^HI_max(t): w.c. duration of HI-mode if starting at t
HI(t): w.c. prob. of at least one deadline miss of any job during any HI mode started at t